87 points

I doubt this person actually had a computer that could run the 405B model. You need over 200 GB of RAM, let alone enough VRAM to run it with GPU acceleration.

2 points

Also worth noting that the 200 GB is for FP4; FP16 would be more like 800 GB.
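The back-of-the-envelope math is just parameter count times bytes per weight (weights only, ignoring KV cache and other runtime overhead):

```python
# Rough weight-only memory for a 405B-parameter model at different precisions.
params = 405e9
bytes_per_weight = {"fp16": 2, "fp8": 1, "int4/fp4": 0.5}

for fmt, b in bytes_per_weight.items():
    print(f"{fmt}: ~{params * b / 1e9:.0f} GB of weights")
# fp16: ~810 GB, fp8: ~405 GB, int4/fp4: ~203 GB -- before KV cache and overhead
```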

6 points

You can probably find a used workstation/server that takes 256 GB of RAM for a few hundred bucks and fit at least a few GPUs in there. You’ll probably spend a few hundred on top of that to max out the RAM. Performance doesn’t go up much past 4 GPUs because the CPU will have a difficult time dealing with the traffic. So for a ghetto build you’re looking at $2k unless you have a cheap/free local source.

1 point

PCIe will probably be the bottleneck way before the number of GPUs is, if you’re planning on storing the model in RAM. Probably better to get a high-end server CPU.

3 points

Without sufficient VRAM it probably couldn’t be GPU-accelerated effectively. Regular RAM is for CPU use. You can swap data between both pools, and I think some AI engines do this to run larger models, but it’s a slow process and you probably wouldn’t gain much from it without huge GPUs with lots of VRAM; PCIe just isn’t as fast as local RAM or VRAM. This means it would still mostly run on the CPU, just very slowly.

4 points

Some apps let you offload between GPU and CPU, loading only the active part of the model. I have an old SSD set up as swap that gives me 500 GB of “usable” RAM.

It is horrendously slow and pointless, but you can do it. I got about 2 tokens in 10 minutes on a 70B model on a 1080 Ti before I gave up.
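For reference, that kind of split is what e.g. llama.cpp’s GPU-offload option does. A minimal llama-cpp-python sketch, assuming you already have a quantized GGUF file (the filename and layer count here are made up):

```python
from llama_cpp import Llama  # pip install llama-cpp-python, built with GPU support

# Keep only as many layers on the GPU as its VRAM can hold; the rest stay in system RAM/swap.
llm = Llama(
    model_path="llama-70b.Q4_K_M.gguf",  # hypothetical local quantized model file
    n_gpu_layers=20,                     # roughly what might fit in ~11 GB on a 1080 Ti
    n_ctx=2048,
)
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```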

4 points

Even if they used more powerful hardware than you, the model they ran is still almost 6 times bigger - so if you got two tokens in 10 minutes, one token in 30 minutes for them sounds plausible.

4 points

I would have to use an entire 1 TB drive for swap, but I’m sure I could manage 1 token before the heat death of the universe.

4 points

I’m not sure what “FP16/FP8/INT4” means, or where a GTX 4090 would fall in those categories, but the VRAM required is respectively 810 GB/403 GB/203 GB. I guess the 4090 would fall under INT4?

7 points

FP16 and FP8 stand for 16-bit and 8-bit floating point, and INT4 is a 4-bit integer format. Normal floating-point numbers are generally 32 or 64 bits in size, so if you’re willing to sacrifice some range and precision, you can save a lot of the space used by the model. Oh, and it’s about the model rather than the GPU.
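If you want to see the range trade-off concretely, numpy will show you the limits (nothing model-specific here, just the number formats):

```python
import numpy as np

# Fewer bits per number means a narrower representable range and coarser precision.
for dt in (np.float32, np.float16):
    info = np.finfo(dt)
    print(f"{np.dtype(dt).name}: {info.bits} bits, max ~{info.max:.3g}, step near 1.0 ~{info.eps:.3g}")

print(np.float16(1e5))  # float16 tops out around 6.55e4, so this overflows to inf
```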

27 points

In terms of RAM it’s not impossible; my current little server has 192 GB of RAM installed.

Pic from TrueNAS

The VRAM would be quite the hurdle though; I’m curious about its VRAM requirements.

Edit: Moving data in anticipation of a hardware migration ATM so basically none of the services are running.

6 points

VRAM would be 810 GB/403 GB/203 GB for FP16/FP8/INT4 for inference, according to their website.

4 points

Hot damn that’s a lot! They ain’t messing around with that requirement.

My current server has 32 MB of VRAM. Yes, MB, not GB. Once I finish the hardware migration it’s going up to 8 GB, but that’s not even a drop in the bucket compared to that requirement.

14 points

That’s not a little server.

3 points

You can have that much RAM with consumer DDR5.

11 points

It’s pretty old hardware to say the least, and it’s also really proprietary (an old Dell PowerEdge T610).

The hardware migration I’m currently in the midst of is going to bring it more in line with my typical use case.

Basically taking it down from 192 GB of ECC DDR3 to around 32 GB (maybe 64 GB) of DDR4 RAM. Also down to a single CPU rather than dual socket.

90 points

Simple: just create 200 GB of swap space and convince yourself that you really are patient enough to spend 3 days unable to use your computer while it uses its entire CPU and disk bandwidth to run ollama (and hate your SSD enough to let it spend 3 days constantly swapping).

5 points

Also invite some friends for BBQ. You don’t even need to remember where you put your old grill - you won’t be using it.

13 points

SSD, huh? Real AI enthusiasts swap with an HDD.

7 points

I don’t have any spare HDs but I can swap on a rewritable optical disc.

31 points

Reminds me of the time I compiled Qt on a 1GB Raspberry Pi.

11 points

All I can think to say is ‘ouch’.

3 points

I want this to be real though :(

3 points

Are there other, less RAM-hungry options?

5 points

Why, of course! People on here saying it’s impossible, smh

Let me introduce you to the wonderful world of thrashing. What is thrashing? It’s what happens when you run out of RAM. Luckily, most computers these days have something like swap space: they just treat your SSD as extra-slow extra RAM.

Your computer still gets bogged down when it genuinely doesn’t have enough RAM, though: it unloads some RAM to disk, pulls what it needs right now back into RAM, executes a bit of processing, and then the program says it actually needs some of what just got shelved on disk. And it does this super fast, so it’s dropping the very thing it needs hundreds of times a second. Technology is truly remarkable.

Depending on how the software handles it, it might just crash… or it might just take literal hours.
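If you want to watch it happen, a quick monitoring loop with psutil (assuming it’s installed) makes the thrashing painfully visible:

```python
import time
import psutil  # pip install psutil

# Print RAM and swap usage every few seconds while the model loads/runs.
while True:
    vm, sw = psutil.virtual_memory(), psutil.swap_memory()
    print(f"RAM {vm.percent:5.1f}% | swap {sw.used / 1e9:6.1f} GB of {sw.total / 1e9:.1f} GB ({sw.percent:.1f}%)")
    time.sleep(5)
```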

8 points

There’s quantization, which basically compresses the model by using a smaller data type for each weight. It reduces memory requirements by half or even more.
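A toy version of what quantization is doing under the hood (real schemes quantize per block or per channel and pack two 4-bit values per byte, but the idea is the same):

```python
import numpy as np

# Map fp16 weights onto 16 integer levels plus one scale factor per tensor.
w = np.random.randn(4096, 4096).astype(np.float16)       # ~32 MB of fp16 weights
scale = float(np.abs(w).max()) / 7                        # symmetric quantization range
q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)   # the 4-bit values (stored one per byte here)
w_hat = q.astype(np.float16) * np.float16(scale)          # approximate reconstruction

print("max abs error:", float(np.abs(w.astype(np.float32) - w_hat.astype(np.float32)).max()))
```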

There’s also airllm, which loads one part of the model into RAM, runs those calculations, unloads it, loads the next part, and so on. It’s a nice option, but the performance of all that loading/unloading is never going to be great, especially on a huge model like Llama 405B.

Then there are some neat projects that distribute models across multiple computers, like exo and petals. They’re more targeted at a p2p-style random collection of computers. I’ve run petals on a small cluster and it works reasonably well.
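For what it’s worth, running through petals looks roughly like this (from memory of their docs; the model name is just their example, and you need peers actually serving the model):

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM  # pip install petals

# The model is split across peers in the swarm; only your tokens/activations travel over the network.
model_name = "petals-team/StableBeluga2"  # example model from the petals docs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```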

1 point

Yes, but 200 GB is probably already with 4-bit quantization; the weights in FP16 would be more like 800 GB. IDK if it’s even possible to quantize further, but if it is, you’re probably better off going with a smaller model anyway.
