Can anyone explain the purpose of a 32 gig NVMe SSD? I think it’s quite an apple thing to install such a stupidly tiny drive into a computer, but on the other hand it doesn’t seem right. This can’t be a system drive can it? But what else could it be? This is like an impractical, high-speed USB drive that requires disassembly of the computer to remove…

11 points

Sounds like Intel's Optane drives.

12 points

More or less. Fusion Drive was introduced when high-capacity SSDs were still quite expensive, although the SSD part was 128 GB IIRC. Apple stuck with Fusion Drive BTO options for far too long and eventually nerfed it to 32 GB, as evidenced here.

6 points

Man, Intel seriously needs to license Optane out. That technology represents a new paradigm for digital storage. It's simpler and cheaper to manufacture than flash memory, and its speed is closer to RAM than to flash: at least an order of magnitude faster than current NVMe drives. It's also three-dimensional, so there's potential for super-fast terabyte, even petabyte, sized drives.

I wish the world were competing to make better Optane/XPoint drives the way it does with flash. It's a shame the tech is locked behind a patent…

9 points

As someone who has performed data recovery on Optane systems: no, just no.

There’s larger slow mechanical storage.

Faster flash and xpoint storage

RAM.

Any device can use a level up to cache and appear faster.

But RAM caching is generally better handled by the OS itself.
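To make the "use the level above as a cache" idea concrete, here's a minimal sketch (names and numbers are mine, not from any real OS) of an LRU read cache keeping hot blocks from a slow tier in a fast one:

```python
from collections import OrderedDict

class ReadCache:
    """Toy read cache: fast tier in front of a slow block device."""
    def __init__(self, backing, capacity):
        self.backing = backing      # slow tier: block -> data
        self.capacity = capacity    # how many blocks fit in the fast tier
        self.cache = OrderedDict()  # fast tier, kept in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # mark most recently used
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]          # slow read from the device
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

disk = {n: f"block-{n}" for n in range(100)}
cache = ReadCache(disk, capacity=4)
for block in [1, 2, 1, 1, 3]:
    cache.read(block)
print(cache.hits, cache.misses)  # 2 3 -- repeat reads of block 1 hit the fast tier
```

This is exactly the job an OS page cache already does with RAM, which is why a separate hardware caching tier mostly duplicates work the kernel handles anyway.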

Flash caching isn't an awful idea, except that when it goes wrong you're back to "safely remove disk" being absolutely vital. The OS needs to be aware of it, and cutting power at the wrong time can kill your install.
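The power-cut failure mode comes from write-back caching: writes are acknowledged once they hit the fast tier and only reach the real disk on a later flush. A rough sketch (treating unflushed cache contents as lost, which is roughly what a power cut does to the cache's consistency metadata):

```python
class WriteBackCache:
    """Toy write-back cache: writes land fast, durability comes later."""
    def __init__(self):
        self.disk = {}     # durable slow tier
        self.cache = {}    # fast tier holding not-yet-flushed writes
        self.dirty = set() # blocks the disk hasn't seen yet

    def write(self, block, data):
        self.cache[block] = data  # acknowledged before the disk has it
        self.dirty.add(block)

    def flush(self):
        for block in list(self.dirty):  # what "safely remove" waits for
            self.disk[block] = self.cache[block]
        self.dirty.clear()

    def power_cut(self):
        self.cache.clear()  # unflushed fast-tier state is effectively gone
        self.dirty.clear()

c = WriteBackCache()
c.write("a", "saved")
c.flush()                  # "a" reaches the disk
c.write("b", "lost")
c.power_cut()              # power dies before "b" is flushed
print(c.disk)              # {'a': 'saved'} -- the disk never saw "b"
```

Lose the flush and the disk is silently behind what the OS believed it wrote, which is how a badly timed power cut corrupts an install.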

Every update says "do not turn off your computer" for a reason, but the redundancy we have now is leagues better than it was 10 years ago.

God forbid one component in an Optane caching chain becomes unreliable.

Ultimately everything needs to run in RAM. Everything needs persistent storage. A non-standard middle step between persistent and volatile memory is best avoided.

XPoint was an interesting experiment, but CXL replaced it. Ultimately the choice for data centres is to support more RAM: additional RAM replacing the Optane cache while data waits to be written is more compatible and predictable.

You can now have terabytes of RAM, and if you rarely reboot and have redundant systems, there's no need for the middle step.

The cost per gigabyte of RAM versus SSD as a cache matters, but RAM error correction and other protocols give even more reasons to avoid Optane.

