Is this a question?
We haven’t even come close to exhausting 64-bit addresses yet. If you think the bit number makes things faster, it’s technically the opposite.
Is this a question?
For the people who don’t know the answer? Yes.
Not everything you see is intended for your consumption. Let people enjoy learning things.
I totally agree. I know a teacher who likes to say:
“I believe there really is no such thing as a dumb question. As long as it’s an honest question (not rhetorical or sarcastic), then it’s a genuine request for more information. So even if it’s coming from a place of extreme ignorance, asking a question is an attempt to learn something, and the effort should be applauded.”
It’s a link to an article I found interesting. It basically details why we’re still using 64-bit CPUs, just as you mentioned.
We don’t even have true 64-bit addressing yet. x86-64 uses only 48 bits of a 64-bit address, and 64-bit ARM can use anywhere between 40 and 52 bits depending on the specific configuration.
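To make that concrete, here’s a toy C sketch of why those unused upper bits matter in practice (pointer tagging). This is just my own illustration, assuming a typical Linux/x86-64 setup where user-space addresses fit in the low 48 bits; it’s not guaranteed by the architecture and falls apart under things like 5-level paging.

```c
/* Toy pointer-tagging sketch: assumes user-space addresses fit in the
 * low 48 bits (typical on Linux/x86-64 today), so the top 16 bits can
 * carry metadata as long as it is stripped before dereferencing. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TAG_SHIFT 48

static void *tag_ptr(void *p, uint16_t tag) {
    return (void *)((uintptr_t)p | ((uintptr_t)tag << TAG_SHIFT));
}

static void *untag_ptr(void *p) {
    return (void *)((uintptr_t)p & (((uintptr_t)1 << TAG_SHIFT) - 1));
}

int main(void) {
    int *x = malloc(sizeof *x);
    *x = 42;
    int *tagged = tag_ptr(x, 0xBEEF);           /* smuggle 16 bits of metadata */
    printf("%d\n", *(int *)untag_ptr(tagged));  /* must untag before use */
    free(x);
    return 0;
}
```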
Yeah, 64-bit handles almost all the use cases we have. Sometimes we want double the precision (a double) or double the length (a long), but we can do that without being 128-bit. It’s harder to do half. Sure, it’d be slightly faster for some things, but it’s not significant.
And you can get 128-bit data to the CPU, so those things can be fast if we need them to be.
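Right, for example through the 128-bit SSE registers that are part of the x86-64 baseline. Here’s a minimal sketch in C (my own example, assuming GCC/Clang or MSVC with the standard SSE2 intrinsics from <emmintrin.h>) of loading and adding 128 bits at a time:

```c
/* Minimal SSE2 sketch: the CPU loads and operates on 128 bits at once,
 * even though pointers and general-purpose registers are 64-bit. */
#include <emmintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int64_t a[2] = {1, 2}, b[2] = {10, 20}, c[2];
    __m128i va = _mm_loadu_si128((const __m128i *)a); /* 128-bit load */
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    __m128i vc = _mm_add_epi64(va, vb);               /* two 64-bit adds at once */
    _mm_storeu_si128((__m128i *)c, vc);
    printf("%lld %lld\n", (long long)c[0], (long long)c[1]);
    return 0;
}
```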
32-bit CPUs having difficulty accessing more than 4 GB of memory was exclusively a Windows problem.
You still had a 4 GB memory limit per process, as well as a total memory limit of 64 GB. The first one in particular was a problem for Java apps before AMD introduced the 64-bit extensions, and a reason to use Sun servers for that.
Yeah I acknowledged the shortcomings in a different comment.
It was a duct tape solution for sure.
Your other posts didn’t walk back your claim that it was a Windows-only problem. Linux had it too, and some distros (Raspberry Pi) have the same limitations as Windows 95.
32-bit Windows XP got PAE in 2001, two years after Linux. 64-bit Windows came out in 2005.
Interesting! Do you have a link to a write-up about this? I don’t know anything about the Windows memory manager.
It was actually 3 GB, because operating systems have to reserve part of the memory address space for other things. It’s difficult for any 32-bit operating system to address above 4 GB; most just implemented the additional complexity much earlier, because Linux runs on large servers and such. Windows actually had a way to switch over to support it in some versions too, probably the NT kernels that were also running on servers.
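For anyone who wants the arithmetic behind those numbers, here’s a tiny C check (my own summary, not from the thread’s links): PAE widens physical addresses to 36 bits, i.e. 64 GB total, while each process still lives in a 32-bit virtual space, part of which the kernel reserves.

```c
/* Rough arithmetic behind the 3-4 GB per process vs 64 GB total figures. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t gib = 1024ull * 1024 * 1024;
    printf("32-bit virtual space per process: %llu GiB\n",
           (unsigned long long)(((uint64_t)1 << 32) / gib));  /* 4 GiB */
    printf("36-bit physical space with PAE:   %llu GiB\n",
           (unsigned long long)(((uint64_t)1 << 36) / gib));  /* 64 GiB */
    return 0;
}
```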
A quick skim of the Wikipedia article seems like a good starting point for understanding the old problem.
Wow, they just… disabled all RAM over 3 GB because some drivers had hard-coded some mapped memory? Jfc
Intel PAE is the answer, but it still came with other issues, so 64-bit was still the better option.
Also the entire article comes down to simple math.
Bits are just the number of digits, in base 2 instead of base 10.
So like a 4 digit number maxes out at 9999 but an 8 digit number maxes out at 99 999 999
So when you double the number of digits, the maximum value grows exponentially: it’s 10^4 times bigger in this case. It just sounds small because what you’re quoting is the exponent, and the exponent only doubles.
10^4 is WAY smaller than 10^8
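Same thing in binary terms, if you want to see the actual numbers. Here’s a quick C check (my own, just printing the limits): going from 32 to 64 bits multiplies the number of addressable values by 2^32, roughly four billion.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* 32 bits give 2^32 distinct values (~4.3 billion, hence the 4 GB limit) */
    printf("2^32        = %llu\n", (unsigned long long)1 << 32);
    /* 64 bits give 2^64 values; the largest representable is 2^64 - 1 */
    printf("2^64 - 1    = %llu\n", (unsigned long long)UINT64_MAX);
    /* the jump from 32 to 64 bits is a factor of 2^32, not "2x" */
    printf("2^64 / 2^32 = %llu\n", (unsigned long long)1 << 32);
    return 0;
}
```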
Only slightly related, but here’s the compiler flag to disable an arbitrary 2GB limit on x86 programs.
Finding the reason for its existence from a credible source isn’t as easy, however. If you’re fine with an explanation from Stack Overflow, you can infer that it’s there because some programs treat pointers as signed integers and die horribly when anything above 0x7FFFFFFF gets returned by the allocator.
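Here’s a toy C reconstruction of that failure mode (my own guess at what the Stack Overflow answer means, not the actual linked code): stuff an address above 2 GB into a signed 32-bit int and it suddenly looks negative.

```c
/* The buggy pattern: storing a pointer in a signed 32-bit int. Any address
 * above 0x7FFFFFFF turns negative and breaks comparisons and arithmetic.
 * (The unsigned-to-signed conversion is implementation-defined, but wraps
 * on mainstream compilers.) */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t addr = 0x80000000u;        /* an address handed out above 2 GB */
    int32_t as_signed = (int32_t)addr;  /* pointer squeezed into a signed int */

    if (as_signed < 0x10000) {
        /* "impossible": a huge address compares as smaller than 64 KB */
        printf("address 0x%08X looks 'small' (%d) when treated as signed\n",
               (unsigned)addr, (int)as_signed);
    }
    return 0;
}
```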
It’s a silly flag to use nowadays, since it only works when running 32-bit Windows applications on 64-bit Windows, and if you’re compiling from source you should also have the option to just build a 64-bit binary in the first place. It made a degree of sense years ago, when people still ran 32-bit Windows sometimes (usually just because OEMs installed the wrong version on prebuilt PCs that could have supported 64-bit), if you really wanted to ship only one binary, or if you consumed a precompiled third-party library and had to match its architecture.
We used to ride bicycles when we were children. Then we started driving cars. Bicycles have two wheels, cars have four. Eight wheels seems to be the logical next step, so why don’t we drive eight-wheeled vehicles?
Funny how we are moving back to bicycles, as cars aren’t a scalable solution.
See here’s where this analogy is perfect. Sometimes a bicycle is the best solution, just like how sometimes a microcontroller is the best solution. You use the tool you need for the job, and American product design is creating way too many “smart” products just like how American town planning demands too many cars. Bring back the microcontroller! Bring back the bike!
Okay, so why can’t we just use values that don’t grow exponentially? Like 96-bit (64 + 32). Is there something intrinsic about the size increases that means they HAVE to be exponential? Why not linear scaling: 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, etc.?
We can, but it’s awkward to do so. By having everything work with powers of 2 you don’t need to have everything the same size, but can still pack things in memory efficiently.
If your registers were 48 bits long, you could use one to store 6 bytes or 3 short ints, but only one 32-bit int, with 16 bits going unused. If registers are a power of two in size, you can always fit smaller power-of-two things in them with no wasted space.
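A quick C illustration of the packing point (my own sketch; exact struct layout is implementation-defined, and these are the usual sizes on mainstream 64-bit compilers): power-of-two fields tile a larger unit exactly, while mismatched sizes force the compiler to insert padding.

```c
#include <stdint.h>
#include <stdio.h>

struct packed_nicely {
    uint16_t a, b, c, d;   /* four 16-bit values fill a 64-bit word exactly */
};

struct with_padding {
    uint8_t tag;           /* 1 byte ... */
    uint64_t value;        /* ... but this must start on an 8-byte boundary, */
};                         /* so 7 bytes of padding get inserted before it */

int main(void) {
    printf("sizeof(packed_nicely) = %zu\n", sizeof(struct packed_nicely)); /* typically 8 */
    printf("sizeof(with_padding)  = %zu\n", sizeof(struct with_padding));  /* typically 16 */
    return 0;
}
```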
so I guess the next step after the 64-bit CPU is the qubit, the quantum bit
Quantum computers won’t displace traditional computers. There are certain niche use cases for which quantum computers could become wildly faster in the future, but for most calculations we do today, they’re just unreliable. So they’ll mostly coexist.
Presumably you’d have a QPU in your regular computer, like with other accelerators for graphics etc., or possibly a tiny one for cryptography integrated into the CPU.
There would have to be some kind of currently unforeseen breakthrough before something like that would be even remotely possible. In all likelihood, quantum computing will stay in specialized data centers. For the problems quantum would solve, there’s really no advantage to having it local anyway.