I’d like to thank the admins for being so open and direct about the issues that they’re facing.
To be fair, with a proper autoscaling scheme in place these services should scale down significantly when not in use.
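(To illustrate: an "autoscaling scheme" here can be as simple as a target-tracking policy on the group, so it shrinks toward its minimum size when traffic drops off. Rough boto3 sketch; the group name and target value are made up:)

```python
# Minimal sketch of a target-tracking scaling policy.
# "lemmy-web-asg" and the 50% target are hypothetical placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU around 50%; the group adds instances under load
# and scales back down toward its minimum when things are quiet.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="lemmy-web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```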
That being said, a big reason for using AWS/GCP is all the additional services that are available on the platform… If the workload being run isn’t that complicated, the hyperscalers are probably overkill. Even DO or Linode would be a better option under those circumstances.
This. AWS architect here. There are a lot of ways to reduce costs in AWS, like horizontal scaling, serverless functions, and reserved instances. Most people aren't aware of them, and if you're going to dive head first into something like the cloud, you'll have to bear the consequences and learn eventually.
Even with ASGs, EC2 costs a bomb for the performance you get.
And “serverless” functions are a trap.
If you’re gonna commit to reserved instances, just buy hardware, for goodness’ sake. It’s a 3-year commitment with a huge upfront spend.
Yep. And if you want to really save some cash and don’t mind getting a little crazy, use an EKS node orchestrator that supports spot instances. I’m starting to do a serious dive into Harness at the moment actually.
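If you go down that road, one low-effort way to get spot capacity is a managed node group with the capacity type set to SPOT. Rough boto3 sketch; the cluster name, role ARN, and subnet IDs are placeholders:

```python
# Rough sketch: a spot-backed EKS managed node group via boto3.
# Cluster name, node role ARN, and subnet IDs below are placeholders.
import boto3

eks = boto3.client("eks")

eks.create_nodegroup(
    clusterName="my-cluster",
    nodegroupName="spot-workers",
    capacityType="SPOT",                      # bill at spot prices instead of on-demand
    instanceTypes=["m5.large", "m5a.large"],  # offering several types improves spot availability
    scalingConfig={"minSize": 1, "maxSize": 6, "desiredSize": 2},
    subnets=["subnet-aaaa", "subnet-bbbb"],
    nodeRole="arn:aws:iam::123456789012:role/eks-node-role",
)
```

The catch, of course, is that spot nodes can be reclaimed at any time, so the workloads on them need to tolerate interruption.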
Google recently released a white paper on cost savings in Kubernetes as well.
I’m in a similar boat. I’m a sysadmin supporting a legacy application running on AWS EC2 instances, as well as a new ‘serverless’, microservice-based platform. It’s really, really hard to scale and optimize anything running on EC2 unless you really know what you’re doing or the application is designed with clustering in mind.
You tend to end up sizing instances based on peak load and then wasting capacity 90% of the time (and burning through cash like crazy). I can imagine a lot of Lemmy admins are overspending so fast they give up before they figure it out.
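Just to make the “wasting capacity” point concrete (the numbers below are completely made up):

```python
# Toy illustration of sizing for peak load; numbers are hypothetical.
peak_rps = 200      # the traffic level the instance is sized for
average_rps = 20    # what it actually serves most of the day

utilization = average_rps / peak_rps
print(f"average utilization: {utilization:.0%}")
# -> 10%, i.e. roughly 90% of the capacity you're paying for sits idle
```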
AWS is perfect for large operations that value stability and elasticity over anything else.
It’s very easy to just spin up a thousand extra servers for momentary demand or some new exciting project. It’s also easy to locate multiple instances all over the world for low latency with your users.
If you know you’re going to need a couple of servers for years and have the hardware know-how, then it’s cheaper to do it yourself for sure.
It’s also possible to use AWS more efficiently if you know all of their services. I ran a small utils website for my friends and me on it a while ago, and it was essentially free: the static files were tiny and served from S3, and the backend was Lambda, which gives you quite a few free calls before charging.
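For reference, the backend was basically a handful of functions shaped roughly like this (a trivialized sketch, not the real code; the event shape assumes API Gateway or a function URL in front):

```python
# Trivialized sketch of a Lambda handler behind API Gateway / a function URL.
# The real site's logic was more involved; this just shows the shape.
import json

def handler(event, context):
    # Lambda bills per invocation and compute time, so a low-traffic
    # utility endpoint like this stays comfortably in the free tier.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```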
Habit (guess). It’s what is used professionally, despite it being proven over and over that the cost-to-performance ratio is terrible compared to lesser-known providers.
If the average salary of a web engineer capable of running a site like this is ~$180,000, then a $30,000 difference in cost is only about two months’ salary. Learning and dealing with a new hosting environment can easily exceed that.
What’s that? Taxes? And no way do I agree with this. $30k is a lot, no matter how much you make. Learning a new environment is not THAT hard.
That, and, like others mentioned, their flexibility, plus the fact that they’re fairly reliable (maybe less so than some good IaaS providers, but a fair bit more than your consumer VPS places). Moments ago I went to the Hetzner site to check them out and got:
Status Code 504 Gateway Timeout
The upstream server failed to send a request in the time allowed by the server. If you are the Administrator of the Upstream, check your server logs for errors.
Annoying if it’s your Nextcloud instance down for a few minutes, but a worthy trade-off if you’re paying 1/4 of the price. It’s extremely costly for a big business, though, and could even risk people’s lives in the case of a few very important systems.
Hetzner has four nines of availability, usually higher. AWS claims five nines, but chances are you’ll mess up something on your end and end up at two or three nines anyway. If you really need five nines, you should probably colocate and only use the likes of AWS as a spike backup.
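To put rough numbers on the nines:

```python
# Back-of-the-envelope: allowed downtime per year at each availability level.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for nines, availability in [(2, 0.99), (3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability:.5f}): ~{downtime:.0f} minutes of downtime per year")
```

So the jump from four nines to five is the difference between roughly 53 and 5 minutes of downtime a year; most hobby instances will never notice either.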
And I guess “messed something up on your end” happened in that case: I don’t think Hetzner is necessarily in the habit of maximising availability of their homepage at all costs (as opposed to the hosting infrastructure); you probably caught them in the middle of pushing a new version.
…speaking of spike backups: That is what AWS is actually good for. Quickly spinning up stuff and shutting it down again before it eats all your money.
I’m not a server admin, but I am a dev, and for many of us it’s just what we know because it’s what our employers use. So sadly, when it comes to setting up infrastructure on our own time, the path of least resistance is just to use what we’re already used to.
Personally, I’m off AWS now, though it definitely took some extra work (which was worth it, to be clear).
AWS is mostly only useful for large companies that need one hosting provider for all their needs, with every single product tightly integrated into the others.
The only advantage would come if you could rewrite Lemmy to be serverless.
I mean, I’m pretty sure Lemmy’s server process is stateless, so it could run on Cloud Run/ECS fairly efficiently, and that wouldn’t really require a rewrite (unless the process is stateful for some reason).
Can you point to the “everyone else”? Just out of curiosity. I know there’s DigitalOcean, but I’m not quite sure they’re cheaper than Azure/AWS.
For the past 5 years, I’ve just checked Server Hunter and picked the specs I want: https://www.serverhunter.com
Before that, my preferred ‘everyone else’ was OVH.
On Linode I can run a half dozen Docker images on a little VM for ten bucks a month, and their S3-compatible object storage is a few bucks a month for 250 gigabytes. The vast majority of projects I deal with have a predictable compute requirement, so I don’t get the need to pay the ridiculous premiums associated with elasticity. But I’m not exactly running Uber or Netflix over here.