There’s a dime stuck in the road behind our local store, tails side up, for over 15 years. And that doesn’t even need error correction.
Why does it sound like technology is going backwards more and more each day?
Someone please explain to me how anything implementing error correction is even useful if it only lasts about an hour?
I mean, that’s literally how research works. You make small discoveries and use them to move forward.
What’s to research? A fucking abacus can hold data longer than a goddamn hour.
Welp, quantum computers have certain advantages (finding elements in O(sqrt(n)) queries with Grover's algorithm, factoring large numbers, etc). The difficulty is actually making everything stable, because these machines are pretty complex.
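To put a rough number on that sqrt(n) advantage, here's a back-of-envelope comparison of oracle query counts for unstructured search; the π/4 constant for Grover's algorithm is the standard textbook estimate, and this is just arithmetic, not a simulation of the algorithm itself:

```python
import math

# Approximate query counts for searching an unstructured list of n items.
def classical_queries(n: int) -> int:
    # Worst case for linear search: check every item once.
    return n

def grover_queries(n: int) -> int:
    # Grover's algorithm needs about (pi / 4) * sqrt(n) oracle queries.
    return math.ceil((math.pi / 4) * math.sqrt(n))

for n in (1_000_000, 1_000_000_000):
    print(f"n={n}: classical ~{classical_queries(n)}, Grover ~{grover_queries(n)}")
```

At a billion items that's roughly a billion checks classically versus around 25,000 quantum queries, which is the whole appeal.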
Are you really comparing a fucking abacus to quantum mechanics and computing?
Are you aware that the RAM in your computing devices loses its information when you read a bit?
Why don’t you switch from smartphone to abacus and dwell in the anti science reality of medieval times?
Because quantum physics. A qubit isn’t 0 or 1, it’s both and everything in between. You get a result as a distribution, not as distinct values.
Qubits are represented as (for example) quantumly entangled electron spins. And due to the nature of quantum physics, they are not very stable, and you cannot measure a value without influencing it.
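To make the "distribution, not distinct values" point concrete, here's a toy classical simulation of measuring one qubit in an equal superposition. This is only a sketch of the probability rule, nothing like real hardware:

```python
import math
import random

# Toy model: a single qubit as two amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 == 1. Purely a classical illustration.
def measure(alpha: complex, beta: complex) -> int:
    # Measurement yields 0 with probability |alpha|^2, else 1.
    # On a real qubit, this also destroys the superposition.
    return 0 if random.random() < abs(alpha) ** 2 else 1

# Equal superposition: (|0> + |1>) / sqrt(2)
alpha = beta = 1 / math.sqrt(2)
samples = [measure(alpha, beta) for _ in range(10_000)]
# Individual results are random 0s and 1s; only the distribution
# (roughly half 1s here) is meaningful.
print(sum(samples) / len(samples))
```

That last line is why you repeat a quantum computation many times and read off a distribution rather than a single answer.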
Granted, my knowledge of quantum computing is very hand-wavy.
I do get that, yes, it’s more complicated than I can fully wrap my brain around as well. But it also starts to raise the question: how many billions of dollars does it take to reinvent the abacus?
Again, I realize there’s a bit of a stark difference between the technologies, but when does the pursuit of over-complicated technology stop being worth it?
Shit, look at how much energy these AI datacenters consume, enough to power a city or more. Look at how much money is getting pumped into these projects…
Ask the AI how to deal with the energy crisis, I’ll only believe it’s actually intelligent when it answers “Shut me and all the other AI datacenters off, and recycle our parts for actual useful purposes.”
Blowing billions on quantum computing ain’t helping feed, clothe and house the homeless…
Contrarian much? You got multiple answers to exactly the question you asked.
As a species, the one thing that defines us is the pursuit of technology to overcome our natural physical limits. We are currently hitting a wall with regard to electron-based computing.
I think you’re confusing technology with politics to the point you’re just making a point unrelated to the topic.
All tech raises the standard of living. If it’s then used by people to hoard resources and create an imbalance in quality of life, that’s a policy issue, not one of the technology.
The computers we have today help to do logistics to “feed, clothe and house the homeless”. They also help you to advocate to do more. How much of that would be comprehensible to someone living in 1900?
I’m not sure that homelessness is a problem quantum computing or AI are suitable for. However, AI has already contributed in helping to solve protein folding problems that are critical in modern medicine.
Solving homelessness and many other problems isn’t as resource-constrained as you think. It’s more about the will to solve them, and who profits from leaving them unsolved. We have known for decades that providing homes for the homeless in a large city actually saves the city money, but we’re still not doing it. Renewable energy has been cheaper than fossil fuels for almost as long. Medicare for all would cost significantly less than the US private healthcare system, and would lead to better results, but we aren’t doing that either.
and you cannot measure a value without influencing it.
Which, to me, kinda defeats the whole purpose. I’ve yet to wrap my head around this whole quantum thing.
It can be useful if they build enough of these that they can run programs that regular computers can’t run at this scale, in less than an hour.
Quantum computers aren’t a replacement for regular computers, because they’re much slower and can’t do normal calculations. But they can solve, in many fewer steps, the type of problem where you’d have to guess-and-check more answers than is feasible on a regular computer.
I took a random wild guess, and found that if they quit blowing billions of dollars on over-complicated technology, they could do a lot more to take care of real world problems, like food, clothes and shelter for the homeless.
You think that’s wasteful? Wait until you hear about the military or prisons.
So what you’re saying is we should never make any scientific advancement until we make the world a paradise?
You know what would be more effective, and just as realistic? Setting a limit that no one person or entity should have more than half a billion dollars. The rest goes to charity to take care of all the problems we have now.
But… you know even if they didn’t use the money for this, they wouldn’t use it for those things, right? It’s Google…
Interesting that you get downvoted for this. I mocked someone for saying the opposite, who claimed that $0.5m was some enormous amount of money we shouldn’t be wasting, and I simply pointed out that we waste literally billions around the world on endless wars killing random people for no reason, so it’s silly to come after small-beans quantum computing if budgeting is your actual concern. People seemed to really hate me for saying that, or maybe it’s just that they actually like wasting money on bombs to drop on children, so they want to cut everything but that.
As stable as that dime is, it’s utterly useless for all practical purposes.
What Google is talking about is making a stable qubit, the basic unit of a quantum computer. It’s extremely difficult to make a qubit stable, and since qubits underpin how a quantum computer works, instability introduces noise and errors into the calculations it makes.
Stabilising a qubit in the way Google’s researchers have done shows that, in principle, if you scale up a quantum computer it will get more stable and accurate. That’s been a major aim in the development of quantum computing for some time.
Current quantum computers are small and error prone. The researchers have added another stepping stone on the way to useful quantum computers in the real world.
It sounds like you’re saying a large quantum computer is easier to make than a small quantum computer?
Do you have any idea the amount of error correction needed to get a regular desktop computer to do its thing? Between the peripheral bus and the CPU, inside your RAM if you have ECC, between the USB host controller and your printer, between your network card and your network switch/router, and so on and so forth. It’s amazing that something as complex and using such fast signalling as a modern PC does can function at all. At the frequencies that are being used to transfer data around the system, the copper traces behave more like radio frequency waveguides than they do wires. They are just “suggestions” for the signals to follow. So there’s tons of crosstalk/bleed over and external interference that must be taken into account.
Basically, if you want to send high speed signals more than a couple centimeters and have them arrive in a way that makes sense to the receiving entity, you’re going to need error correction. Having “error correction” doesn’t mean something is bad. We use it all the time. CRC, checksums, parity bits, and many other techniques exist to detect and correct for errors in data.
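As a concrete example of the simplest technique on that list, here's a minimal parity-bit sketch. Note that a lone parity bit only *detects* a single flipped bit; actually correcting it takes something more like a Hamming code:

```python
# Error detection with a single parity bit: the sender appends one bit
# so the total count of 1s is even; the receiver re-checks that count.
def add_parity(bits: list[int]) -> list[int]:
    return bits + [sum(bits) % 2]

def check_parity(bits: list[int]) -> bool:
    # Valid iff the total number of 1s (including the parity bit) is even.
    return sum(bits) % 2 == 0

frame = add_parity([1, 0, 1, 1])
assert check_parity(frame)       # frame arrives intact
frame[2] ^= 1                    # one bit flips in transit
assert not check_parity(frame)   # the corrupted frame is detected
print("single-bit error detected")
```

Real links layer stronger schemes (CRCs, Reed-Solomon, LDPC) on top of the same idea: add redundancy so the receiver can tell when the channel lied.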
I’m well aware. I’m also aware that the various levels of error correction in a typical computer manage to retain the data integrity potentially for years or even decades.
Google bragging about an hour, regardless of it being a different type of computer, just sounds pathetic, especially given all the money being invested in the technology.
Traditional bits only have to be 0 or 1. Not a coherent superposition.
Managing to maintain a stable qubit for a meaningful amount of time is an important step. The final output from quantum computation is likely going to end up being traditional bits, stored traditionally, but superpositions allow qubits to be much more powerful during computation.
Being able to maintain a cached superposition seems like it would be an important step.
(Note: I am not even a quantum computer novice.)
How many calculations can your computer do in an hour? The answer is a lot.
Indeed, you’re very correct. It can also remember those results for over an hour. Hell, a jumping spider has better memory than that.
The output of a quantum computer is read by a classical computer and can then be transferred or stored as long as you like using traditional means.
The lifetime of the error-corrected qubit mentioned here limits how complex a quantum calculation the computer can run. And an hour is a really, really long time by that standard.
Breaking RSA or other exciting things still requires a bunch of these error corrected qubits connected together. But this is still a pretty significant step.
it only lasts for an hour
“Only”? The “industry standard” is less than a millisecond.
Show the academic world how many computational tasks the physical structure of that coin has solved in those 15 years.
About how far does this leave us from a usable quantum processor? How far from all current cryptographic algorithms being junk?
The latest versions of TLS already support post-quantum crypto, so no, not all of them. For the ones that are vulnerable, we’re way, way off from that. It may never even be possible to have enough qubits to break those at all.
Things like simulating medicines, folding proteins, and logistics are much closer, very useful, and more likely to be practical in the medium term.
Is there gov money in folding proteins though? I assume there are a lot of three-letter agencies that want decryption, with a lot more funding.
There’s plenty of publicly funded research for that, yes.
Three letter agencies also want to protect their own nation’s secrets. They have as much interest in breaking it as they do protecting against it.
I know a good therapist, if need be!
Just in time for the fall of American democracy. What could possibly go wrong.
108 qubits, but some of them on error-correction duty?
What size RSA key can it factor “instantly”?
afaik, even without error correction, breaking an old 256-bit RSA key with Shor’s algorithm would take on the order of twice that many ideal qubits. RSA keys are made by taking two primes of about half the key length each and multiplying them together. Numbers that small are relatively simple to factor on both classical and quantum computers; however, every added bit roughly doubles the classical work, so large keys would take a classical computer billions of billions of years. The limit for a quantum computer is how many “practical qubits” it has. OP’s article did not answer this, and so far no quantum computer has managed to factor any number faster than your phone can in under half a second.
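For context on why key size is everything here, a toy sketch: build a "modulus" from two small primes and recover them by trial division. This is instant at toy sizes but the work grows exponentially with the bit length, which is exactly why real RSA keys are 2048+ bits. Nothing below is real cryptography:

```python
# Toy illustration: an RSA modulus is the product of two primes,
# and "breaking" the key means recovering those primes. Trial
# division works at this size; it is hopeless at 2048 bits.
def factor(n: int) -> tuple[int, int]:
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself is prime

p, q = 2003, 3001        # two small primes
modulus = p * q          # a "toy RSA key"
print(factor(modulus))   # recovers the primes instantly at this size
```

Shor's algorithm changes that scaling for a quantum computer, but only once you have enough stable, error-corrected qubits, which is the whole point of results like the one in the article.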