It’s a serious question because so far, none have.
Edit: Some context for those asking.
Eternal September refers to a time when an online community was overrun by new participants to the detriment of that community.
When new people arrive piecemeal, like they’re doing right now, they join in and participate. If they make little social mistakes, members of the community steer them, through social, language and behavioural cues, in the direction the community has evolved.
New participants alter their behaviour and the community grows a little with the new participant. If they don’t alter their behaviour, it’s likely that they’re removed from the community by some agreed process that has evolved over time.
If the growth is sudden, the community is overwhelmed by “blissful or deliberate ignorance”: the systems for cues, moderation and removal fail, and the community changes, often drastically, or ceases to exist.
The reference to September is that’s when new university students would get an account on the university computer systems and join Usenet News. They’d arrive every September, there’d be a blip of adjustment, and the Usenet communities would absorb the new members.
Eternal September arrived when AOL connected its service to Usenet, completely overrunning everything with people from all across the AOL userbase, most of them not first-year university students.
I was there when this happened: alt.best.of.internet (ABOI) was a community where I participated. One of many new groups, it was alphabetically the first on the AOL list, and it imploded. Together with Malinda McCall, I wrote the FAQ in an ultimately fruitless attempt at educating the masses.
I’ve seen this play out over and over again across the decades I’ve been online, so that’s why I asked.
The ABOI FAQ is here: https://www.itmaze.com.au/articles/aboi-faq
You might also expand that consideration to the infrastructure this instance runs on.
I’ve had that thought, and we’ve talked about it a bit with respect to upgrades, but honestly, it opens a larger can of worms around trust levels.
There’s a lot of sensitive information that I’m already trusted with by default, and finding someone that, to be honest, I can trust would be a big conversation.
Also… deploying this thing is not the easiest task in the world, even as a Docker container. The documentation is like the wild west, and sometimes you have to take a best guess, or already know how Docker deploys work or how the instance itself works.
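For anyone curious what that deployment actually involves: a Lemmy instance is several coordinated containers, not one. Here’s a rough sketch of the shape of a compose file, in the spirit of the official Lemmy Docker docs. Image names, tags and settings here are illustrative assumptions, not our actual config — consult the current Lemmy documentation before deploying anything.

```yaml
# Illustrative sketch only. Check the official Lemmy docs for a
# current, supported compose file; tags and env vars here are assumptions.
services:
  lemmy:
    image: dessalines/lemmy:latest        # backend API server
    restart: always
    volumes:
      - ./lemmy.hjson:/config/config.hjson:ro
    depends_on:
      - postgres
      - pictrs

  lemmy-ui:
    image: dessalines/lemmy-ui:latest     # web frontend
    restart: always
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
    depends_on:
      - lemmy

  postgres:
    image: postgres:15-alpine             # database
    restart: always
    environment:
      - POSTGRES_USER=lemmy
      - POSTGRES_PASSWORD=changeme        # use a real secret in practice
      - POSTGRES_DB=lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data

  pictrs:
    image: asonix/pictrs:latest           # image hosting service
    restart: always
    volumes:
      - ./volumes/pictrs:/mnt
```

Even with a file like this in hand, the wiring between the services (the `lemmy.hjson` config, the reverse proxy in front, federation settings) is where the “best guess” part comes in.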
BUT you are correct. If this is going to be a long-lasting instance, having other people on board would help. This started out as a fun thing I wanted to do for amateur radio enthusiasts, but in the spirit of your post, we’d need more people involved.
Ultimately this is about risk mitigation, about what happens if. There are many different ways to tackle this. I have not found a guaranteed solution, but here are some to consider:
- If we keep the community size small, in other words, restrict the number of people who can join, the impact of this instance going away is limited in scope. It’s not a fun thing to contemplate, but it’s potentially a viable and effective solution.
- If we “appoint” moderators, there are implications of trust. Even the most trustworthy person you know might make a decision that’s not what you would have chosen. Furthermore, people make mistakes for many different reasons. Making rules around moderation is attractive, but edge cases will always exist and “don’t be a dick” means different things to different people.
- The same is true for “appointing” an administrator, for the plumbing of the IT infrastructure, not the content. I have been spending a little time trying to either find or construct a “turn-key” fediverse “node” that can be stood up and run almost autonomously. This would reduce the human resources required, but it would cost money.
- Creating a “body”, a formal agreement between “founders”, is another way to go. It does not guarantee that your efforts will be a success. Even if you write your constitution to mitigate malicious intent, there’s always someone who will take over and shit in the nest.
- We could leave well enough alone and let it crash when it does, either as a service or as a community (as in, it gets overtaken by unwanted content). We could stem the flow for a while, but if that’s ultimately unsuccessful, we could just shut it down.
- One point worth making separately is the legal aspect of this community. What happens if a member posts, willingly or not, illegal material? What happens if someone attempts to invoke the GDPR over some aspect of the goings-on here? How is that “risk” mitigated?
Note that I’m not advocating one solution over another. This is more an attempt at identifying ways to mitigate any potential “risk” in whatever shape that arrives.
I’ll also note that the amateur radio on-air experience is essentially ephemeral in nature. There is nothing wrong with treating this community in the same way. It has a nice symmetry to it if anything.
Awesome post! Since you made the original post, I’ve actually already started a GitHub wiki to address my concerns about what this would look like going forward. To address your points in line:
- I’ll always want this to be a selective community. But as you said, this may be more than what “I” want and that may need to change in the future.
- People do make mistakes. I can tell you I have already as a moderator and that’s just how things are.
- For adding an admin, I have started a GitHub organization that I’d want someone to join who has experience in how things work and how to mitigate errors. I can tell you… I’ve brought down the site for an hour or two a few times, and I hope no one noticed… As for money, I’m comfortable with where we are now cost-wise, but going forward there’s a lot I’d like to add, like better backups, that I know would add cost to our instance.
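On the backup point, the usual low-cost starting place is a nightly database dump with a retention window, then shipping those dumps off-site. A minimal sketch, assuming the database runs in a container (the container name, user, paths and retention period below are all assumptions, not our actual setup):

```shell
#!/bin/sh
# Nightly Postgres dump for a containerised instance -- illustrative only.
# Assumes a container named "lemmy-postgres" with a "lemmy" superuser.
set -eu

# Build the dump filename for a given directory and date stamp.
backup_filename() {
  printf '%s/lemmy-%s.sql.gz' "$1" "$2"
}

# Dump the whole cluster, compress it, and prune dumps older than 14 days.
run_backup() {
  dir="$1"
  stamp="$(date +%Y%m%d)"
  mkdir -p "$dir"
  docker exec lemmy-postgres pg_dumpall -U lemmy \
    | gzip > "$(backup_filename "$dir" "$stamp")"
  find "$dir" -name 'lemmy-*.sql.gz' -mtime +14 -delete
}

# Typically invoked from cron, e.g.:
#   15 3 * * * /usr/local/bin/lemmy-backup.sh
# run_backup /var/backups/lemmy
```

A dump alone isn’t enough, of course: it still lives on the same box, so the “better backups” would also mean copying it somewhere else and occasionally testing a restore.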
- I think this would always be the case. What we’re almost talking about is treating this like a business, where we simply place our trust in the people running this instance. That’s where the aforementioned backups would help (hopefully).
- You already have me thinking too much about not letting this fail! I think that what we are discussing isn’t an overnight thing, but maybe over the coming year(s) we could implement.
- There are already a lot of pull requests and issues on the Lemmy GitHub about the GDPR, and it seems it’s partially addressed already, but it’s a big deal that will be more fully addressed in the future.