128 points

Is this a question?

We haven’t even come close to exhausting 64-bit addresses yet. And if you think a bigger bit width automatically makes things faster, it’s technically the opposite: wider pointers and data take up more cache and memory bandwidth.

93 points

It’s a link to an article I found interesting. It basically details why we’re still using 64-bit CPUs, just as you mentioned.

19 points

Comment OP must never learn anything new. Good find.

35 points

Yeah, 64-bit handles almost all the use cases we have. Sometimes we want double the precision (a double) or double the length (a long), but we can do that without being 128-bit; it’s going the other way, to half-width, that’s harder. Sure, it’d be slightly faster for some things, but nothing significant.

22 points

And you can get 128-bit data to the CPU, so those things can be fast if we need them to be.

21 points

And we have wide SIMD instructions that can process that data, for multimedia applications and the like.

Addressing and memory size have been the historic motivators for wider registers, but I probably won’t see the need for 128-bit addressing in my lifetime.

9 points

There are plenty of instructions for processing integers and floating-point numbers from 8 bits up to 512 bits with a single instruction and register. There’s been a lot of work on packed math instructions for neural-network inference.
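For a rough feel of what packed math looks like, here’s a NumPy sketch (just an illustration of the idea, not how you’d actually program the hardware): lots of narrow 8-bit values multiplied in bulk and accumulated into 32 bits, which is the same pattern instructions like AVX-512 VNNI implement in silicon.

```python
import numpy as np

# 64 signed 8-bit values per operand, like one 512-bit register's worth
a = np.random.randint(-128, 128, size=64, dtype=np.int8)
b = np.random.randint(-128, 128, size=64, dtype=np.int8)

# Multiply the narrow values and accumulate in a wider (32-bit) type,
# so the sum of products can't overflow at these sizes
acc = np.dot(a.astype(np.int32), b.astype(np.int32))
print(acc)
```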

67 points

We don’t even have true 64-bit addressing yet. x86-64 uses only 48 bits of a 64-bit address, and 64-bit ARM can use anywhere between 40 and 52 depending on the specific configuration.
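For scale (my arithmetic, not from the article), here’s what those widths actually buy you:

```python
# Virtual address space per address width
print(2**48 // 2**40)  # 48 bits -> 256 TiB
print(2**52 // 2**40)  # 52 bits -> 4096 TiB (4 PiB)
print(2**64 // 2**60)  # full 64 bits -> 16 EiB
```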

-66 points
Deleted by creator
61 points

I think they were just adding to the conversation.

5 points
Removed by mod
36 points

I actually added detail that wasn’t already discussed in the article?

34 points

> Is this a question?

For the people who don’t know the answer? Yes.

Not everything you see is intended for your consumption. Let people enjoy learning things.

16 points

I totally agree. I know a teacher who likes to say:

“I believe there really is no such thing as a dumb question. As long as it’s an honest question (not rhetorical or sarcastic), then it’s a genuine request for more information. So even if it’s coming from a place of extreme ignorance, asking a question is an attempt to learn something, and the effort should be applauded.”

3 points

Learned from the teacher. Thanks.

6 points

> Is this a question?

Woah, meta.

Yes, it is.

This is not a question, though.

13 points

That would be like 6 Minute Abs.

5 points

That’s crazy. You can’t do six. It’s seven! SEVEN MINUTE ABS!

1 point

What’s this in reference to?

2 points

There’s Something About Mary (1998)

1 point

Ha, cool! It’s been a while since I saw that movie.

Man, 1998?! Time flies.


So I guess the next bit after the 64-bit CPU is the qubit, the quantum bit.

19 points

Quantum computers won’t displace traditional computers. There are certain niche use cases where quantum computers could become wildly faster in the future, but for most of the calculations we do today they’re just unreliable. So they’ll mostly coexist.

11 points

In other words, like GPUs. GPUs suck ass at complex calculations. They do, however, work great for a large number of easy calculations in parallel, which is what’s needed for graphics processing.

3 points

Presumably you’d have a QPU in your regular computer, like with other accelerators for graphics etc., or possibly a tiny one integrated into the CPU for cryptography.

10 points

There would have to be some kind of currently unforeseen breakthrough before something like that would be even remotely possible. In all likelihood, quantum computing will stay in specialized data centers. For the problems quantum would solve, there’s really no advantage to having it local anyway.

3 points

Probably not in consumer grade products in any foreseeable future.

28 points

Because computers haven’t come even close to needing more than 16 exabytes of memory for anything. And how many applications need to do basic mathematical operations on numbers greater than 2^64? Most applications haven’t even exceeded the need for 32-bit operations, so really the push to 64-bit was primarily to address more than 4GB of memory without slow workarounds.

4 points

Tons of computing is done on x86 these days with 256-bit numbers, and even 512-bit numbers.

15 points

Being pedantic, but…

The amd64 ISA doesn’t have native 256-bit integer operations, let alone 512-bit. Those numbers you mention are for SIMD instructions, which are just, e.g., 8× 32-bit integer operations running at the same time.

3 points

The ISA does include SSE2 though, which is 128-bit, already more than the pointer width. They also doubled the number of XMM registers compared to 32-bit SSE2.

Back in the day, using those instructions often gained you nothing, as the CPUs didn’t come with enough ALUs to actually do operations on the whole vector in parallel.

1 point

Ah, fair enough. I figured that since the registers are 512-bit, they’d support 512-bit math.

It does look like you can load/store and do bitwise operations on 512-bit values, at least.

Not much difference between 8×64 and 512 when it comes to integer math anyway. Add and subtract are nearly identical; you just have to carry between the lanes yourself.

6 points

You can always combine integer operations in smaller chunks to simulate something that’s too big to fit in a register. Python even does this transparently for you, so your integers can be as big as you want.
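Here’s a minimal sketch of that chunking idea: 128-bit addition built from 64-bit halves. The function name and layout are made up for illustration; compilers and bignum libraries do the same thing with carry instructions.

```python
MASK64 = (1 << 64) - 1  # pretend our registers are 64 bits wide

def add128(a_lo, a_hi, b_lo, b_hi):
    lo = (a_lo + b_lo) & MASK64
    carry = 1 if lo < a_lo else 0  # the low half wrapped around
    hi = (a_hi + b_hi + carry) & MASK64
    return lo, hi

# Sanity check against Python's native big integers
a, b = 2**100 + 12345, 2**90 + 67890
lo, hi = add128(a & MASK64, a >> 64, b & MASK64, b >> 64)
assert (hi << 64) | lo == a + b
```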

The fundamental problem that led to requiring 64-bit was when we needed to start addressing more than 4 GB of RAM. It’s kind of similar to the problem of the Internet, where 4 billion unique IP addresses fall rather short of what we need. IPv6 has a host of improvements, but the massively improved address space is what gets talked about the most, since that’s what is desperately needed.

Going back to RAM though, it’s sort of interesting that at the lowest levels of accessing memory, it is done in chunks that are larger than 8 bits, and that’s been the case for a long time now. CPUs have to provide the illusion that an 8-bit byte is the smallest addressable unit of memory, since software would break badly were this not the case, but it’s somewhat amusing to me that we still shouldn’t really need more than 32 bits to address RAM at the lowest levels, even with the 16 GB I have in my laptop right now. I’ve worked with 32-bit microcontrollers where the byte size is > 8 bits, and yeah, you can have plenty of addressable memory in there if you want.
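To put numbers on that last point (my arithmetic, not the commenter’s): if the smallest addressable unit were a 32-bit word instead of an 8-bit byte, a 32-bit address would already cover that 16 GB laptop.

```python
# 2**32 addressable units of 4 bytes each, expressed in GiB
print(2**32 * 4 // 2**30)  # -> 16
```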

5 points

I know a Google engineer who was saying they’re having to update their code bases to handle more than 16 exabytes of storage, if you can imagine. But yeah, that’s storage, not RAM.

4 points

I would kind of enjoy having the problem of needing to store 16 exabytes, and owning the place to put it…

64 points

32-bit CPUs having difficulty accessing more than 4GB of memory was exclusively a Windows problem.

15 points

Interesting! Do you have a link to a write-up about this? I don’t know anything about the Windows memory manager.

24 points

Only slightly related, but here’s the compiler flag to disable an arbitrary 2GB limit on x86 programs.

Finding the reason for its existence from a credible source isn’t as easy, however. If you’re fine with an explanation from StackOverflow, you can infer that it’s there because some programs treat pointers as signed integers and die horribly when anything above 7FFFFFFF gets returned by the allocator.
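A quick Python sketch of that failure mode, with a made-up address above the 2GB line:

```python
import struct

addr = 0x80000004  # hypothetical allocation above 0x7FFFFFFF
# A program that stores pointers in signed 32-bit ints sees this instead:
as_signed, = struct.unpack("<i", struct.pack("<I", addr))
print(hex(addr), "->", as_signed)  # 0x80000004 -> -2147483644
```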

3 points

It’s a silly flag to use, as it only works when running 32-bit Windows applications on 64-bit Windows, and if you’re compiling from source you should also have the option to just build a 64-bit binary in the first place. It made a degree of sense years ago, when people still ran 32-bit Windows sometimes (usually just because OEMs installed the wrong version on prebuilt PCs that could have supported 64-bit), if you really wanted to ship only one binary, or you consumed a precompiled third-party library and had to match its architecture.

17 points

Intel PAE is the answer, but it still came with other issues, so 64-bit was still the better answer.

Also, the entire article comes down to simple math.

Bits are the number of digits. A 4-digit number maxes out at 9999, but an 8-digit number maxes out at 99 999 999.

So when you double the number of digits, the maximum value doesn’t just double, it gets squared: 10^4 times bigger in this case. It only sounds small because saying “the exponent doubles” hides the scale. 10^4 is WAY smaller than 10^8.
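Same thing in binary, as a quick sanity check:

```python
print(2**32)                  # 4294967296, the 32-bit ceiling
print(2**64)                  # 18446744073709551616
print(2**64 == (2**32) ** 2)  # True: doubling the width squares the range
```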

15 points

It was actually 3GB, because operating systems have to reserve part of the memory address space for other things. It’s difficult for all 32-bit operating systems to address above 4GB; most just implemented the additional complexity much earlier, because Linux runs on large servers and the like. Windows actually had a way to switch over to support it in some versions too, probably the NT kernels that were also running on servers.

A quick skim of the Wikipedia article seems like a good starting point for understanding the old problem.

https://en.m.wikipedia.org/wiki/3_GB_barrier

12 points

Wow, they just…disabled all RAM over 3 GB because some drivers had hard-coded some mapped memory? Jfc

43 points

You still had a 4GB memory limit per process, as well as a total memory limit of 64GB. The first one especially was a problem for Java apps before AMD introduced its 64-bit extensions, and a reason to use Sun servers for that kind of workload.

-3 points

Yeah, I acknowledged the shortcomings in a different comment.

It was a duct tape solution for sure.

2 points

Your other posts didn’t back up your claim that it’s a Windows-only problem. Linux had it too, and some distros (Raspberry Pi) have the same limitations as Windows 95.

32-bit Windows XP got PAE in 2001, two years after Linux. 64-bit Windows came out in 2005.

14 points

I’m not sure what you are talking about. Linux got PAE in 1999. Windows XP got PAE in 2001.

10 points

Not really; the Raspberry Pi had that same issue with its 32-bit distros.
