Status update July 4th
Just wanted to let you know where we are with Lemmy.world.
Issues
As you might have noticed, things still won’t work as desired… we see several issues:
Performance
- Loading is mostly OK, but sometimes things take forever
- We (and you) see many 502 errors, resulting in empty pages etc.
- System load: The server is roughly at 60% cpu usage and around 25GB RAM usage. (That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%)
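For what it’s worth, if the Lemmy backend runs as a systemd service (an assumption — many deployments use Docker instead), the 30-minute restart workaround can be automated with a drop-in instead of a cron job. The file path and values here are hypothetical:

```ini
# /etc/systemd/system/lemmy.service.d/restart.conf (hypothetical path)
[Service]
Restart=always
# Forcibly stop the service after 30 minutes of runtime;
# Restart=always immediately brings it back up.
RuntimeMaxSec=1800
```

After `systemctl daemon-reload`, systemd caps each run at 30 minutes, which bounds the memory growth the same way the manual restarts do.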
Bugs
- Replying to a DM doesn’t seem to work. When hitting reply, you get a box with the original message which you can edit and save (which does nothing)
- 2FA seems to be a problem for many people. It doesn’t always work as expected.
Troubleshooting
We have many people helping us with (site) moderation, sysadmin work, troubleshooting, advice, etc. There are currently 25 people in our Discord, including admins of other servers, and 8 of us in the Sysadmin channel. We run troubleshooting sessions with them, and sometimes with others. One of the Lemmy devs, @nutomic@lemmy.ml, is also helping with current issues.
So, not everything is running as smoothly as we hoped, but with all this help we’ll surely get there! Also, thank you all for the donations; they help us afford the hardware and tools needed to keep Lemmy.world running!
As much as I hope that Lemmy grows, I wouldn’t want lemmy.world to suddenly get 1 million signups and multiply your headaches.
Perhaps we need more well-run servers by other people to take the load.
Are there any plans to make user up and down votes not viewable publicly? I know for a lot of new adopters this can be a deal breaker.
If someone cares about their privacy that badly, why vote at all? I’m sure votes are tracked on other websites too?
On Reddit, only Reddit knows your up and down votes, which are never made public unless you check an option in settings. The fediverse is already a target for brigading due to its decentralized nature; letting bad actors figure out who to target seems like a terrible idea. Imagine what people like /r/againsthatesubreddits would do with that info.
On the flip side, it also makes brigading easier to detect, so instances can defederate communities that routinely vote in bad faith.
System load: The server is roughly at 60% cpu usage and around 25GB RAM usage. (That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%)
Shouldn’t we be discussing closing registrations?
Federation-wise it would be better if new users spread out. But between clueless redditors and an impossible ideal, I’d prefer they at least make an account and check out what Lemmy has to offer. The curious ones will eventually settle down and even redistribute to smaller instances.
The curious ones will eventually settle down and even redistribute into smaller instances
Absolutely. I migrated from lemmy.ml when that was having too many sign-ups, and I’m not opposed to migrating from lemmy.world to help with their load. I’m sure I’m one of many.
We have loads of space at https://lemmy.myserv.one if people want to spread the load.
There’s a lot of momentum to move away from reddit right now, and closing registrations would be a wet blanket. Personally, I’ll take the performance issues and transparency in the process over closing registrations.
Does Lemmy have the ability to replace default links?
Basically, replace the signup link with one that redirects to a page that explains as simply as possible what’s going on and what the fediverse is, and gives a list of other instances to try.
Reinforce that “all are viable and can browse lemmy.world subs”… or communities, or whatever term we use here for the Lemmy equivalent of subreddits.
I was personally thinking more along the lines of if we could have a load balancer whose sole job is to route users to a random set of possible instances (which can all be administered by the same person, so that you’re still joining the instance “group” that you want). The load balancer would route someone the first time they land on the page and also handle logins. That’s it.
I’m assuming that the servers we’re talking about are single servers, because that’s how things sound. I’m personally used to only developing servers that use the “many servers behind a load balancer” approach. While distributed databases can certainly make those easier, in the absence of support for that, you could always run the backends as entirely separate servers, with the load balancer just serving to pick the backend. So you’d have a lemmy1.world and so on.
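A minimal sketch of that “separate backends behind one load balancer” idea as an nginx config. Everything here is an assumption for illustration — the backend hostnames are made up, and 8536 is only Lemmy’s default backend port; a real deployment would also have to keep sessions and federation state consistent across backends:

```nginx
# Hypothetical front proxy; backend names are invented for this sketch.
upstream lemmy_backends {
    ip_hash;                      # pin each client to one backend so logins stick
    server lemmy1.internal:8536;  # independent Lemmy server
    server lemmy2.internal:8536;  # another independent Lemmy server
}

server {
    listen 443 ssl;
    server_name lemmy.world;

    location / {
        proxy_pass http://lemmy_backends;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

`ip_hash` is the simplest stickiness mechanism in open-source nginx; without some form of stickiness (or shared session state), a user’s login on one backend wouldn’t be visible to the others.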
Of course, for all I know, maybe this is already the architecture of a Lemmy instance. I’ve never checked. Even with a good architecture, scaling can be difficult. An unnoticeable performance issue in a dev environment can be a massive bottleneck when you have tens of thousands of concurrent users.
That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%
who’d have thought memory leaks would be possible in Rust 🤯
(sorry not sorry Rust devs)
That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%
Lemmy has a memory leak? Or, should I say, a “lemmory leak”?
Wait, isn’t Lemmy written in Rust? How do you create a memory leak in Rust? Unsafe mode?
That’s not a memory leak though. That’s just hoarding memory. Leaked memory is inaccessible.
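The distinction is easy to demonstrate, and a true leak doesn’t even need `unsafe`. A quick sketch (function names are mine, not Lemmy’s):

```rust
/// "Hoarding": the queue still owns its memory; it is reachable
/// and is freed normally when the queue is dropped.
fn hoard(n: usize) -> Vec<String> {
    (0..n).map(|i| format!("queued activity {i}")).collect()
}

/// A true leak, in 100% safe Rust: `Box::leak` gives up ownership
/// forever, so this allocation is never freed.
fn leak(msg: &str) -> &'static str {
    Box::leak(msg.to_owned().into_boxed_str())
}

fn main() {
    let queue = hoard(3);
    println!("queue holds {} items", queue.len());

    let s = leak("gone for good");
    println!("leaked: {s}");
} // `queue` is dropped here and its memory returned; the leaked string never is
```

So a process whose RSS climbs to 100% can be either case: genuinely leaked allocations, or perfectly reachable data (caches, queues) that simply never gets evicted.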
I’m calling it: if there’s actually a memory leak in the Rust code, it’s gonna be the in-memory queues, because the DB’s IOPS can’t cope with the number of users.
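If that guess is right, the usual fix is backpressure: bound the queue so producers slow down (or shed load) instead of the queue eating RAM. A sketch with a standard-library bounded channel — this is not Lemmy’s actual code, just the general technique:

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Capacity 2: once the consumer (think: the DB writer) falls behind,
    // try_send fails instead of growing an unbounded in-memory queue.
    let (tx, rx) = sync_channel::<&str>(2);
    tx.try_send("activity 1").unwrap();
    tx.try_send("activity 2").unwrap();
    assert!(matches!(tx.try_send("activity 3"), Err(TrySendError::Full(_))));

    // Draining one item frees a slot, so the producer can continue.
    assert_eq!(rx.recv().unwrap(), "activity 1");
    tx.try_send("activity 3").unwrap();
}
```

With an unbounded channel the `try_send` calls would all succeed and memory use would track however far the DB falls behind, which is exactly the “hoarding that looks like a leak” pattern described above.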