You can try using `du -h -d 1 /` to locate the largest directory under `/`. Once you’ve located the largest directory, replace `/` with that directory. Repeat that until you find the culprit (if there is a single large directory).
EDIT (2024-07-22T19:34Z): As suggested by @DarkThoughts@fedia.io, you can also use a program like Filelight, which provides a visual and more comprehensive breakdown of the sizes of directories.
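To make the drill-down concrete, here is a rough sketch of the repetition (assuming GNU `du` and `sort`; the paths are just examples):

```
# Size of each top-level directory, human-readable, sorted smallest to largest
sudo du -h -d 1 / 2>/dev/null | sort -h
# Suppose /home turns out to be the largest; repeat one level deeper
sudo du -h -d 1 /home 2>/dev/null | sort -h
```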
It’s “Steam” inside .local eating up 6 GB even though I haven’t opened it yet, plus tmp files (almost 5 GB) that didn’t clear themselves after installing the OS.
Whoops! You are correct — I have updated the original comment. I’m not sure why I wrote `df` instead of `du`. This is a good example of why one should always be wary of blindly copying commands 😜 It begins to teeter on being potentially disastrous if I had instead written `dd`.
Or you could use baobab to do the same thing if you want an answer within 10 minutes.
Maybe you have a swap file that happens to be 16 GB?
I only allotted 4 GB for swap. Maybe Arch enabled zram and it used 8 GB by default, and I actually didn’t need to create a swap partition?
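If you want to confirm what swap is actually active, the util-linux tools will show it (a quick check; a zram device would typically appear as /dev/zram0):

```
# List every active swap device or file with its size and current usage
swapon --show
# Inspect zram devices directly, if the zram module is loaded
zramctl
```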
It might have something to do with the dolphin you’re keeping in there.
Keep in mind that a part of the filesystem will be reserved on creation. Here, if I create a completely empty ext4 filesystem with:

```
truncate -s 230G /tmp/img
mkfs.ext4 /tmp/img
mount /tmp/img /mnt
```
Dolphin reports “213.8 GiB free of 225.3 GiB (5% used)”
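For context, that gap is ext4’s reserved blocks (5% by default, held back for root). If you want to inspect or shrink the reservation, e2fsprogs can do it; a sketch against the same image file:

```
# Show the reserved block count baked into the filesystem at mkfs time
tune2fs -l /tmp/img | grep -i 'reserved block'
# Optionally lower the reservation to 1% of the filesystem (example value)
tune2fs -m 1 /tmp/img
```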