Another day, another update.
More troubleshooting was done today. What did we do:
- Yesterday evening @[email protected] did some SQL troubleshooting with some of the lemmy.world admins. After that, phiresky submitted some PRs to GitHub.
- @[email protected] created a Docker image containing 3 PRs: Disable retry queue, Get follower Inbox Fix, Admin Index Fix.
- We started using this image, and saw a big drop in CPU usage and disk load.
- We saw thousands of errors per minute in the nginx log from old clients trying to access the websockets (which were removed in 0.18), so we added a `return 404` in the nginx conf for `/api/v3/ws` (see the sketch after this list).
- We updated lemmy-ui from RC7 to RC10, which fixed a lot, among which the issue with replying to DMs.
- We found that the many 502 errors were caused by an issue in Lemmy/markdown-it.actix or whatever, causing nginx to temporarily mark an upstream as dead. As a workaround we can either 1) only use 1 container, or 2) set `proxy_next_upstream timeout;` and `max_fails=5` in nginx.
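For reference, a minimal sketch of what that nginx rule could look like. The `/api/v3/ws` path is from the post above; the surrounding block is illustrative:

```
# Old (pre-0.18) clients still hit the removed websocket endpoint;
# answer them with a 404 instead of letting the errors pile up.
location /api/v3/ws {
    return 404;
}
```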
Currently we’re running with 1 Lemmy container, so the 502 errors are completely gone so far, and because of the fixes in the Lemmy code everything seems to be running smoothly. If needed we could spin up a second Lemmy container using the `proxy_next_upstream timeout; max_fails=5` workaround (sketched below), but for now it seems to hold with 1.
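A sketch of that workaround, assuming a typical setup with two Lemmy containers behind nginx; the upstream name, container names, and port are illustrative, not our actual config:

```
upstream lemmy {
    # max_fails=5 lets a server fail up to 5 times before nginx
    # temporarily marks it as unavailable (the default is 1)
    server lemmy1:8536 max_fails=5;
    server lemmy2:8536 max_fails=5;
}

server {
    listen 80;

    location / {
        # only retry the next upstream on a timeout, not on other
        # errors, so one bad reply doesn't mark a container dead
        proxy_next_upstream timeout;
        proxy_pass http://lemmy;
    }
}
```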
Thanks to @[email protected], @[email protected], @[email protected], @[email protected], @[email protected], @[email protected] for their help!
And not to forget, thanks to @[email protected] and @[email protected] for their continuing hard work on Lemmy!
And thank you all for your patience, we’ll keep working on it!
Oh, and as a bonus, an image (thanks Phiresky!) of the change in bandwidth after implementing the new Lemmy Docker image with the PRs.
Edit: So as soon as the US folks wake up (hi!) we seem to need the second Lemmy container for performance. So that’s now started, and I noticed the `proxy_next_upstream timeout` setting didn’t work (or I didn’t set it properly), so I used `max_fails=5` for each upstream, which does actually work (see the sketch below).
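In other words, reusing the same hypothetical upstream block as above, the part that ended up doing the work is the per-server parameter:

```
upstream lemmy {
    # raising max_fails on each server line is what stopped nginx
    # from marking the upstreams dead (default is max_fails=1)
    server lemmy1:8536 max_fails=5;
    server lemmy2:8536 max_fails=5;
}
```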
Good job! 🤙
Things have been very noticeably improving every day since the big update! I’ve had very, very few problems today browsing on Jerboa.
The new updates made a ton of improvements. Thank you very much. It is near ideal now.
Submitting PRs is literally the most effective response that helps everyone who uses Lemmy. Thanks to you all.
Thanks for this very nice report.
Everything is feeling great so far. The only bug I’m encountering is that when opening a thread (in Firefox on desktop) it auto-scrolls down past the content to the replies.
Thank you so much! I will be donating a few cappuccinos your way when my next check arrives. I really appreciate how awesome of a community you’ve brought together & all of the transparency with the updates (and the frequency) is astounding! Keep up the great work but don’t forget to take breaks :)
Literally a night and day difference in performance and stability! Thank you all for the hard work. To other users like me, consider reducing or replacing one of your lesser used subscriptions and directing that money to Lemmy. It’s much better served here if you ask me.
This is a lot of work, I genuinely feel honored by the service of everyone involved. Thank you!
Really great job, guys! I know from my experience in SRE that this kind of debugging, monitoring, and fixing can be a lot of pain, so you have all my appreciation. I’m even determined to donate on Patreon if it’s available.
Great work! Awesome to see how fast the technical side of the Lemmyverse is evolving and improving!
You guys are absolute legends, thanks for the update!
Minor thing, but overnight both the wefwef and Memmy clients are showing the wrong comment score (karma) against my profile, and given they are showing the same amount I assume it’s related to API-fed data. The value was correct yesterday. Easy for me to confirm given I have only 2 dozen posts and the value has dropped to single digits.
Not a biggie, but figured I’d report it in case there was some issue causing that. Might be some optimisation around indexing or something has intentionally or unintentionally impacted that.
Otherwise the service feels much more stable currently. No timeouts today where it’s been very frequent the past few days. Nice job. 👍
Are you guys able to create reports? I am not… It keeps spinning.