Hello! On my server, which runs only Lemmy, I can’t figure out why disk usage keeps growing. It increases by about 1 GB every day, so it risks hitting the limit before long.
It can’t be the images: there is an upload size limit, and they are converted to .webp anyway.
The docker-compose file is the one from Ansible 0.18.2, with the logging limits already in it (max-size 50m, max-file 4).
What could it be? Is there anything in particular that I can check?
Thanks!
28% - 40G (3 July)
29% - 42G (4 July)
30% - 43G (5 July)
31% - 44G (6 July)
36% - 51G (10 July)
37% - 52G (11 July)
37% - 53G (12 July)
39% - 55G (13 July)
39% - 56G (14 July)
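Doing the rough math on those figures: usage grew from 40 GB on 3 July to 56 GB on 14 July, i.e. about 1.5 GB/day, and 56 GB at 39% implies a disk of roughly 144 GB total. A quick back-of-the-envelope check (both numbers are estimates read off the list above):

```sql
-- Approximate days until the disk fills, assuming growth stays ~1.5 GB/day.
-- 144 GB capacity and 56 GB used are estimated from the figures above.
SELECT round((144 - 56) / 1.5) AS days_until_full;  -- roughly two months
```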
You might want to check out this issue. Honestly, I’m not sure exactly what the activity table contains or what it does, but it’s been eating through everyone’s disk space pretty fast.
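If you want to confirm it really is the activity table before changing anything, Postgres can report per-table sizes directly. Run this in Lemmy’s database; it uses only the standard statistics views:

```sql
-- Show the ten largest tables, counting indexes and TOAST data too.
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;
```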
Extra config options always result in more complexity, so I would strongly prefer to change the hardcoded pruning interval instead.
Why would that be the case?
It warms my heart to see lemmy instance owners learning about the nuances of db administration 🥰
I don’t think you should be enjoying the fact that there are some problems that could realistically cause a large portion of Lemmy instances to become unsustainable. We should be working towards a way that we can ensure the Lemmy ecosystem thrives.
Bear in mind that posts and comments from communities your users are subscribed to flow into your instance not as references but as copies. So all those “seeding” scripts are a terrible idea: they bring in content you don’t care about and fill up space for the heck of it. If you’re hosting a private instance, you can unsubscribe from things that don’t interest you, thereby slowing the accumulation of irrelevant content that just wastes space.
Yes, I had considered that, but given that ours is a moderate instance rather than a giant one (about a thousand subscribers), 1 GB of growth per day seemed excessive.
- The growth is not about user count… not directly anyway. Rather, it’s about the number and activity of subscribed communities. When your users sub to big, highly active meme communities on lemmy.world, the post activity on world determines your storage requirements. I don’t really know, but I could imagine that a 1k user instance might have 80% of the federation storage that a 5k user instance has. 1k users is enough to sub to most big communities, whereas the next 4k users are “mostly” going to sub to the same big communities plus a few low-traffic niche ones. So the next 4k users cause much less federated storage load than the first 1k did.
- But for comparison, a month ago… the largest Lemmy instance in the world had just over a thousand active users. I’m not sure 1k is as small as you think it is.
You can delete old entries from the table. The space won’t be released back to the filesystem automatically, but Postgres will reuse it internally, so you won’t have to worry again until enough days pass that new rows have filled the same amount that was freed.
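As a sketch of that cleanup (assuming the activity table keeps a `published` timestamp column, as in current Lemmy schemas; adjust the name and retention window to taste):

```sql
-- Drop federation activity rows older than three months.
DELETE FROM activity
WHERE published < now() - interval '3 months';
-- A plain VACUUM (no FULL) marks the freed space reusable without
-- taking an exclusive lock on the table.
VACUUM activity;
```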
I’m sure it couldn’t be difficult to do a rolling purge to keep the table at a fixed size?
If you need the space back on the filesystem, you could rebuild the table with VACUUM FULL. Do note that the table would be unavailable during that process as it would be locked exclusively. https://www.postgresql.org/docs/current/sql-vacuum.html
You’d need to take your site down for a while, since Lemmy needs write access to that table to avoid duplicates. But yeah, once you’ve done a VACUUM FULL, you could set up a daily job to trim old entries.
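The one-off reclaim step might look like this (hedged: table name as in current Lemmy schemas; the lock behavior is standard Postgres):

```sql
-- One-off: rewrite the table to return disk space to the filesystem.
-- VACUUM FULL takes an ACCESS EXCLUSIVE lock, so stop Lemmy first.
VACUUM FULL activity;
```

After that, the daily trim can run on a schedule (e.g. cron) without the heavy lock, since ordinary DELETE plus plain VACUUM doesn’t block writers.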
Back in the early 2000s, Usenet servers could store posts in a fixed-size database with the oldest posts expiring as new posts came in. This approach would mean that you can’t index everything forever, but it does allow you to provide your users with current posts without having infinite storage space.