The virtual disk for my Lemmy instance filled up, which caused Lemmy to throw a lot of errors. I resized the disk and expanded the filesystem, but now the pictrs container is constantly restarting.
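(For reference, the resize was the usual grow-the-partition-then-the-filesystem sequence; the device names below are an assumption for a typical single-disk VM:)

```
growpart /dev/vda 1   # grow partition 1 to fill the enlarged virtual disk
resize2fs /dev/vda1   # grow the ext4 filesystem to fill the partition
df -h /               # confirm the new size is visible
```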
```
root@Lemmy:/srv/lemmy# ls
leemyalone.org
root@Lemmy:/srv/lemmy# cd leemyalone.org/
root@Lemmy:/srv/lemmy/leemyalone.org# docker-compose ps
          Name                         Command                State                        Ports
-------------------------------------------------------------------------------------------------------------------------
leemyaloneorg_lemmy-ui_1   docker-entrypoint.sh /bin/ ...   Up           1234/tcp
leemyaloneorg_lemmy_1      /app/lemmy                       Up
leemyaloneorg_pictrs_1     /sbin/tini -- /usr/local/b ...   Restarting
leemyaloneorg_postfix_1    /root/run                        Up           25/tcp
leemyaloneorg_postgres_1   docker-entrypoint.sh postgres    Up           5432/tcp
leemyaloneorg_proxy_1      /docker-entrypoint.sh ngin ...   Up           80/tcp, 0.0.0.0:3378->8536/tcp,:::3378->8536/tcp
```
Might this be related?
In some cases, pict-rs might crash and be unable to start again. The most common reason for this is the filesystem reached 100% and pict-rs could not write to the disk, but this could also happen if pict-rs is killed at an unfortunate time. If this occurs, the solution is to first get more disk for your server, and then look in the sled-repo directory for pict-rs. It's likely that pict-rs created a zero-sized file called snap.somehash.generating. Delete that file and restart pict-rs.
https://git.asonix.dog/asonix/pict-rs#user-content-common-problems
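If you want to check for that, something like the following should work. The host-side path is an assumption; your compose file decides where pict-rs's /mnt volume actually lives (in the stock Lemmy setup it's ./volumes/pictrs):

```
# Look for a zero-byte snapshot file in sled-repo (host path is an assumption):
find ./volumes/pictrs/sled-repo -name 'snap.*.generating' -size 0 -ls
# If one turns up, delete it and restart the service:
find ./volumes/pictrs/sled-repo -name 'snap.*.generating' -size 0 -delete
docker-compose restart pictrs
```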
Pictrs saves thumbnails and profile images of users, not just from your instance but also from other instances. The Lemmy devs should look into optimising this like they did with the database.
That's all pictrs is for? Can it just be disabled without breaking anything else?
Have you checked the logs for the pictrs container to see why it's restarting?
Could it be permissions?
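(A quick way to rule that out, assuming the stock layout where the pict-rs volume is ./volumes/pictrs and the container runs as UID 991, which is what the Lemmy docker install docs chown it to:)

```
ls -ln volumes/pictrs              # -n shows numeric owner UIDs
chown -R 991:991 volumes/pictrs    # UID from the Lemmy install docs; verify for your setup
```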
What do the logs say?
Can't see the pictrs log because it never fully starts.
```
root@Lemmy:/srv/lemmy/leemyalone.org# docker-compose logs leemyaloneorg_pictures_1
ERROR: No such service: leemyaloneorg_pictures_1
root@Lemmy:/srv/lemmy/leemyalone.org#
```
If the pictrs container doesn't start, check the Docker daemon logs:

```
journalctl -fexu docker
```

They'll typically tell you why a container isn't starting; a broken bind mount is the usual culprit.
To prevent this from happening again, try migrating to an S3 backend; DigitalOcean have one that's fixed-price and includes egress, so you can't accidentally end up with a ridiculous bill one month!
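For what it's worth, the switch is mostly pict-rs store settings in the compose file. A rough sketch, with the PICTRS__STORE__* variable names as I recall them from the pict-rs README (verify there) and placeholder endpoint/bucket/credentials:

```yaml
  pictrs:
    environment:
      - PICTRS__STORE__TYPE=object_storage
      - PICTRS__STORE__ENDPOINT=https://nyc3.digitaloceanspaces.com  # placeholder endpoint
      - PICTRS__STORE__BUCKET_NAME=my-pictrs-media                   # placeholder bucket
      - PICTRS__STORE__REGION=nyc3                                   # placeholder region
      - PICTRS__STORE__ACCESS_KEY=CHANGEME
      - PICTRS__STORE__SECRET_KEY=CHANGEME
```

Existing media would still need to be migrated from the filesystem store; pict-rs ships a store-migration mode for that, documented in its README.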
You can still see the logs using `docker logs <container_name>`. To get the container name you can use `docker ps -a`. It should list the pictrs container there; the container name is usually the last column of the output.
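With the names from the docker-compose ps output above, that would look like:

```
docker ps -a | grep pictrs                      # confirm the exact container name
docker logs --tail 100 leemyaloneorg_pictrs_1   # works even while the container is restarting
```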
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ BACKTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Run with COLORBT_SHOW_HIDDEN=1 environment variable to disable frame filtering.
2023-08-26T20:46:43.679371Z  WARN sled::pagecache::snapshot: corrupt snapshot file found, crc does not match expected
Error:
   0: Error in database
   1: Read corrupted data at file offset None backtrace ()

Location:
   src/repo/sled.rs:84

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ SPANTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

   0: pict_rs::repo::sled::build with path="/mnt/sled-repo" cache_capacity=67108864 export_path="/mnt/exports"
      at src/repo/sled.rs:78
   1: pict_rs::repo::open with config=Sled(Sled { path: "/mnt/sled-repo", cache_capacity: 67108864, export_path: "/mnt/exports" })
      at src/repo.rs:464

root@Lemmy:~#
```