It's now running on a dedicated server with 6 cores/12 threads and 32 GB of RAM. I hope this will be enough for the near future. Nevertheless, new users should still prefer to sign up on other instances.
Hi! I would be interested in understanding more about the setup of the lemmy.ml instance. Do you use a cloud provider, a SaaS platform, or traditional hosting? What costs are incurred? Cheers!
Is Lemmy made with horizontal scaling in mind (i.e. launching more instances and having a load balancer proxy requests to them)? It could help larger instances like lemmy.ml manage the load better, rather than just putting everything on a beefier machine.
You should be able to do that without problems. However, the main bottleneck is the database; I think some people want to experiment with read replicas. That said, as developers we would rather focus on optimizations which will benefit everyone, not only the largest instances.
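For example (not an official recipe, just a rough sketch with placeholder IPs and Lemmy's default backend port), running several backend instances behind an nginx upstream could look roughly like this:

```
# Hypothetical pool of two Lemmy backend instances (IPs are placeholders)
upstream lemmy_backend {
    server 10.0.0.11:8536;
    server 10.0.0.12:8536;
}

server {
    listen 80;
    server_name lemmy.example.com;

    location / {
        # Round-robin proxying across the upstream servers
        proxy_pass http://lemmy_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Keep websocket upgrades working through the proxy
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

All of those backend instances would still point at the same PostgreSQL database, which is why the database ends up being the bottleneck.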
Ah, awesome. Luckily, horizontal scaling of the database is already a solved problem; an enterprise-grade database like PostgreSQL in particular has lots of options there.
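For anyone curious, a plain streaming-replication read replica is one of those options. Very roughly (hostnames, user name and paths are just placeholders):

```
# primary postgresql.conf: allow WAL streaming to replicas
wal_level = replica
max_wal_senders = 5

# primary pg_hba.conf: let the replica connect for replication (subnet is a placeholder)
host  replication  replicator  10.0.0.0/24  scram-sha-256

# on the replica host: clone the primary; -R writes standby.signal and primary_conninfo
pg_basebackup -h 10.0.0.5 -U replicator -D /var/lib/postgresql/data -R -P
```

The remaining work is on the application side: the server has to know to send read-only queries to the replica, which is presumably what the experimentation mentioned above would be about.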
> as developers we would rather focus on optimizations which will benefit everyone, not only the largest instances.
Oh sure, but being able to scale horizontally shouldn't hurt small instances 😉 I'd probably host a single-user instance at some point, just like I do with Mastodon, so I don't really have a need for horizontal scaling either, but it's good to think about these things.