Details here: https://github.com/LemmyNet/lemmy/issues/3165

This will VASTLY decrease the PostgreSQL I/O load on the server, as the mistaken code writes ~1700 rows (one per known Lemmy instance in the database) on every single comment and post creation. Because these are writes, they create record-locking issues that are harsh on the system. Once this is fixed, some site operators will be able to downgrade their hardware! ;)
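
To make the shape of the bug concrete, here is a hedged sketch in SQL. It is illustrative only: the real statement lives inside a PostgreSQL trigger function in Lemmy's migrations, and the table and column names below are assumptions based on the issue discussion, not Lemmy's exact code.

```sql
-- Buggy shape: the UPDATE has no WHERE clause, so every row in
-- site_aggregates (~1700, one per known instance) is rewritten and
-- row-locked on each new comment or post.
UPDATE site_aggregates
SET comments = comments + 1;

-- Fixed shape: constrain the write to the local site's single row,
-- so only one row is written and locked per event.
UPDATE site_aggregates
SET comments = comments + 1
WHERE site_id = (SELECT id FROM site ORDER BY id LIMIT 1);
```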

15 points

Get some DBAs on the job and Lemmy will be blazing fast.

14 points

We have had DBAs; the problem is that the Rust code uses an ORM and an auto-JSON framework, which makes tracing the code time-consuming to learn.

3 points

Okay, so you may need to refactor here and there to get more performance.

3 points

Honestly, ORMs are a waste of time. Why not use sqlx and just hand-write the SQL to avoid issues like this?
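
For what that could look like, a minimal sqlx sketch, assuming a `site_aggregates` table like the one discussed above (the function, pool handling, and names are illustrative, not Lemmy's code). The point of the hand-written approach is that the exact statement is spelled out at the call site, so a missing WHERE clause is visible in review rather than hidden behind generated SQL:

```rust
use sqlx::PgPool;

// Hypothetical helper, not Lemmy's actual code: the SQL is written by
// hand, so its scope (one row, constrained by site_id) is easy to audit.
async fn bump_comment_count(pool: &PgPool, local_site_id: i32) -> sqlx::Result<()> {
    sqlx::query("UPDATE site_aggregates SET comments = comments + 1 WHERE site_id = $1")
        .bind(local_site_id)
        .execute(pool)
        .await?;
    Ok(())
}
```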

1 point

In this one case, it was hand-written SQL inside a PostgreSQL FUNCTION that the ORM knows nothing about. But the entire application takes the approach of serving live data from PostgreSQL for every little thing.


Lemmy Server Performance

!lemmyperformance@lemmy.ml

lemmy_server uses the Diesel ORM, which automatically generates SQL statements. Serious performance problems in June and July 2023 are preventing Lemmy from scaling. Topics include caching, PostgreSQL extensions for troubleshooting, client/server code, SQL data, server operator apps, and server operator APIs (performance and storage monitoring), etc.

Community stats

  • 1 monthly active user
  • 43 posts
  • 113 comments