[Done] New try at upgrading to 0.18.1 July 1st 20:00 CET

We'll give the upgrade a new try tomorrow. I've had some good input from admins of other instances, who are also gonna help troubleshoot during/after the upgrade.

Also, there are newer RC versions with the earlier issues fixed.

Be aware that if we need to roll back again, posts made between the upgrade and the rollback will be lost.

We're seeing a huge rise in new user signups (duh, it's July 1st), which also stresses the server. Let's hope the improvements in 0.18.1 will also help with that.

You're viewing a single thread.

406 comments
  • What is new with the updated version later today?

    • It fixes the issue of new posts appearing on the homepage above others.

    • The difference between .17 and .18 is pretty substantial. Lemmy.world neglected to update to .18 because captcha support was not working for new account signups, so they waited for v0.18.1

      https://join-lemmy.org/news/2023-06-23_-_Lemmy_Release_v0.18.0

      There should be substantial performance improvements because it moves Lemmy from using websockets to an HTTP API.

      There are lots of other fixes and things, but that is the most substantial change.

      • There should be substantial performance improvements because it moves Lemmy from using websockets to an HTTP API.

        Websockets generally have a lower compute cost per request; HTTP requests are slow and expensive compared to just firing off data over an already established TCP connection, so this isn't tracking for me?

        Was it just the overhead of managing the websockets? Shouldn't an API gateway be doing that anyways...?

        • The websocket implementation was streaming federated data to the UI in real time, and was responsible for a bunch of bugs/UX issues that wouldn't have been evident when there were fewer people on Lemmy (such as scrolling down New posts on the homepage and seeing them zoom off the screen and back a bunch of pages as new federated posts from busy instances rolled in); there's a rough sketch of the difference below.

          Some instances were also hitting the open socket limit of their reverse proxies IIRC, which was causing some users to get stuck on a spinning loader indefinitely.

          I personally used an app instead because of the bugs/UX issues caused by the websocket implementation, but since the 0.18 update on my instance the site is so much nicer to use.
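
          To make the difference concrete, here's a hypothetical sketch (not actual Lemmy frontend code; the endpoint and response field names are assumptions): with the websocket approach the list the user is reading gets mutated whenever a federated post arrives, while with the HTTP API it only changes when the client asks for a page.

          ```typescript
          // Hypothetical sketch, not Lemmy's actual frontend code.
          type Post = { id: number; title: string };

          let visiblePosts: Post[] = [];

          // Websocket-style: every pushed post is prepended immediately,
          // so whatever the user was reading jumps further down the page.
          function onFederatedPost(post: Post) {
            visiblePosts = [post, ...visiblePosts];
          }

          // HTTP-style: the list is a stable snapshot that only changes
          // when the user explicitly requests a page.
          async function loadPage(page: number): Promise<Post[]> {
            const res = await fetch(
              `https://example-instance.local/api/v3/post/list?sort=New&page=${page}`
            );
            const body = await res.json();
            return body.posts as Post[]; // assumed response shape
          }
          ```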

          Edit:

          Websockets generally have a lower compute cost per request; HTTP requests are slow and expensive compared to just firing off data over an already established TCP connection, so this isn't tracking for me?

          Most browsers send keep-alive headers to request that the web server keep the TCP connection open - I believe for HTTP/2 and HTTP/3 the connection is automatically held open regardless. It doesn't address the overhead of resending headers etc., but it's faster than establishing a new connection each time, and 'manages itself' by closing the socket automatically after a period of inactivity.

          Websockets are still indisputably faster, but I think it all comes down to the implementation.
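
          As a rough illustration of the connection-reuse point, here's a hypothetical Node.js sketch (the hostname and paths are made up): an HTTP agent with keep-alive reuses one TCP connection across many requests, while a websocket holds a single connection open and lets the server push at any time.

          ```typescript
          // Hypothetical sketch of connection reuse; hostname and paths are made up.
          import http from "node:http";
          import { WebSocket } from "ws";

          // HTTP keep-alive: the agent keeps the TCP connection open between
          // requests, so only the first request pays the connection-setup cost,
          // but every request still resends its headers.
          const agent = new http.Agent({ keepAlive: true });

          function getPosts(page: number): Promise<string> {
            return new Promise((resolve, reject) => {
              http
                .get(
                  { host: "example-instance.local", path: `/api/v3/post/list?page=${page}`, agent },
                  (res) => {
                    let body = "";
                    res.on("data", (chunk) => (body += chunk));
                    res.on("end", () => resolve(body));
                  }
                )
                .on("error", reject);
            });
          }

          // Websocket: one long-lived connection with very little per-message
          // overhead, and the server can push data whenever it wants.
          const ws = new WebSocket("ws://example-instance.local/ws");
          ws.on("message", (data) => console.log("pushed:", data.toString()));
          ```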

          • Great explanation, thanks!

            You seem to be knowledgeable here, and I'm just onboarding to Lemmy. Is it possible for others to contribute distributed compute/networking/storage resources to Lemmy instances? (Kind of along the lines of Kafka)

            I have a cluster that I largely use as nodes for various projects that I enjoy. I'd be more than happy to provision a few VMs to be nodes if such a concept exists here 🤔

      • Really looking forward to the improved performance then (hope it doesn't fail this time).

      • They didn't 'neglect' to upgrade. They tried several times, but had issues with each attempt.
