I doubt this person actually had a computer that could run the 405b model. You need over 200GB of RAM, let alone enough VRAM to run it with GPU acceleration.
You can probably find a used workstation/server capable of using 256GB of RAM for a few hundred bucks and get at least a few gpus in there. You’ll probably spend a few hundred on top of that to max out the ram. Performance doesn’t go up much past 4 gpus because the CPU will have a difficult time dealing with the traffic. So for a ghetto build you’re looking at $2k unless you have a cheap/free local source.
Without sufficient VRAM it probably couldn’t be GPU accelerated effectively. Regular RAM is for CPU use. You can swap data between both pools, and I think some AI engines do this to run larger models, but it’s a slow process and you probably wouldn’t gain much from it without using huge GPUs with lots of VRAM. PCIe just isn’t as fast as local RAM or VRAM. This means it would still run on the CPU, just very slowly.
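For a rough sense of why shuffling weights over PCIe or swap doesn't buy much, here's a back-of-envelope sketch. The bandwidth figures are assumptions for illustration, not measurements, and it only counts moving the weights once per token:

```python
# Rough lower bound on time per token when every weight has to cross a bus.
# A dense transformer reads essentially all of its weights once per generated token,
# so (model size / bandwidth) gives a floor on seconds per token.

model_size_gb = 203          # assumed: 405B weights quantized to ~4 bits per weight
bandwidths_gb_s = {
    "PCIe 4.0 x16 to GPU": 32,   # theoretical peak; real transfers are lower
    "NVMe SSD swap": 7,          # fast consumer NVMe, sequential reads
    "DDR4 system RAM": 50,       # rough dual-channel figure
}

for name, bw in bandwidths_gb_s.items():
    print(f"{name}: >= {model_size_gb / bw:.1f} s per token just moving weights")
```

Even with optimistic numbers, the bus traffic alone dominates, which is why it ends up no better than just running on the CPU.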
Some apps allow you to offload to GPU and CPU while loading the active part of the model. I have an old SSD that gives me 500GB of “usable” RAM set up as swap.
It is horrendously slow and pointless but you can do it. I got about 2 tokens in 10 minutes before I gave up on a 70b model on a 1080 ti.
Even if they used more powerful hardware than you, the model they ran is still almost 6 times bigger - so if you got two tokens in 10 minutes, one token in 30 minutes for them sounds plausible.
I would have to use an entire 1tb drive for swap but I’m sure I could manage 1 token before the heat death of the universe.
I’m not sure what “FP16/FP8/INT4” means, and where an RTX 4090 would fall in those categories, but the VRAM required is respectively 810GB/403GB/203GB. I guess the 4090 would fall under INT4?
They stand for 16-bit floating point, 8-bit floating point, and 4-bit integer respectively. Normal floating point numbers are generally 32 or 64 bits in size, so if you’re willing to sacrifice some range and precision, you can save a lot of the space used by the model. Oh, and it’s about the model rather than the GPU.
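To put numbers on it, those VRAM figures follow almost directly from bits-per-weight times parameter count. A quick back-of-envelope sketch (it ignores activation memory and KV cache, so treat it as a floor):

```python
# Memory needed just to hold the weights of a 405B-parameter model
params = 405e9
for label, bits in [("FP16", 16), ("FP8", 8), ("INT4", 4)]:
    gigabytes = params * bits / 8 / 1e9
    print(f"{label}: ~{gigabytes:.0f} GB")
# FP16: ~810 GB, FP8: ~405 GB, INT4: ~203 GB -- roughly the figures quoted above
```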
In terms of RAM it’s not impossible, my current little server has 192GB of RAM installed.
Pic from TrueNAS
The VRAM would be quite the hurdle though, I’m curious about its VRAM requirements
Edit: Moving data in anticipation of a hardware migration ATM so basically none of the services are running.
VRAM would be 810GB/403GB/203GB for FP16/FP8/INT4 for inference, according to their website.
It’s pretty old hardware to say the least, it’s also really proprietary. (Old Dell PowerEdge T610)
My hardware migration I’m currently in the midst of is going to bring it more in line with my typical use case for it.
Basically taking it down from 192 GB of ECC DDR3 to around 32 GB (maybe 64 GB) of DDR4 RAM. Also down to a single CPU rather than dual socket.
simple, just create 200GB of swap space and convince yourself that you really are patient enough to spend 3 days unable to use your computer while it uses its entire CPU and disk bandwidth to run ollama (and hate your SSD enough to let it spend 3 days constantly swapping)
Why, of course! People on here saying it’s impossible, smh
Let me introduce you to the wonderful world of thrashing. What is thrashing? It’s when you run out of RAM. Luckily, most computers these days do something like swap space - they just treat your SSD as extra-slow, extra RAM.
Your computer still gets locked up when it genuinely doesn’t have enough RAM though, so it unloads some RAM to disk, puts what it needs right now back into RAM, executes a bit of processing, and then the program tells it that it actually needs some of what got shelved on disk. And it does this super fast, so it’s dropping the thing it needs hundreds of times a second - technology is truly remarkable
Depending on how the software handles it, it might just crash… But instead it might just take literal hours
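If you want to actually watch the thrashing happen, on Linux you can poll major page faults (each one means the kernel had to go back to disk for something that got evicted). A small sketch, assuming /proc/vmstat is readable:

```python
import time

def major_faults():
    # pgmajfault counts page faults that had to be served from disk
    with open("/proc/vmstat") as f:
        for line in f:
            if line.startswith("pgmajfault "):
                return int(line.split()[1])
    return 0

prev = major_faults()
for _ in range(30):
    time.sleep(1)
    cur = major_faults()
    print(f"{cur - prev} major page faults/s")  # thousands per second means you're thrashing
    prev = cur
```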
There’s quantization which basically compresses the model to use a smaller data type for each weight. Reduces memory requirements by half or even more.
There’s also airllm which loads a part of the model into RAM, runs those calculations, unloads that part, loads the next part, etc… It’s a nice option but the performance of all that loading/unloading is never going to be great, especially on a huge model like llama 405b
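The idea behind that kind of layer offloading is roughly the following (a simplified sketch, not airllm’s actual API; the per-layer checkpoint files and loading helper are hypothetical):

```python
import torch

def forward_streaming(layer_files, hidden):
    # Run one forward pass by streaming the model through the GPU one layer at a time.
    for path in layer_files:
        layer = torch.load(path, map_location="cpu")  # hypothetical per-layer checkpoint
        layer = layer.to("cuda")                      # copy just this layer's weights into VRAM
        with torch.no_grad():
            hidden = layer(hidden)
        del layer
        torch.cuda.empty_cache()                      # free VRAM before the next layer
    return hidden
```

Every generated token repeats the whole sweep, which is why the disk and PCIe traffic dominate and throughput stays low on something the size of llama 405b.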
Then there are some neat projects to distribute models across multiple computers like exo and petals. They’re more targeted at a p2p-style random collection of computers. I’ve run petals in a small cluster and it works reasonably well.
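For reference, using the petals client looks roughly like this, going from memory of its README (the class name and example model ID are what I recall and may have changed):

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Most layers are served by other peers in the swarm; only a slice runs locally.
model_name = "petals-team/StableBeluga2"  # example model from the petals docs, as I recall
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```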