I was beginning to wonder if the bot purging got a little too ambitious. Also, requisite “I am not a bot” statement. Beep. Boop.
We are all bots down here.
Where are they storing the session files then, in Memcached with a 512 kB limit? No such issues on sh.itjust.works, so it's probably not a software issue.
Could still be a software issue. Someone said this already, but it's possible that Lemmy.world is using a load balancer and multiple servers, and those servers' authentication tokens are out of sync. So if you hit server 1 and you're signed in on server 1, you're good. If you hit server 2, you're signed out all of a sudden. This would also explain why the issue started happening abruptly today: it's possible the load on the server wasn't that bad yesterday, so the load balancer didn't kick in. This is all speculation. We'll have to wait for an official message to confirm anything.
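To illustrate the guess (a toy sketch only, not Lemmy's actual code): if each backend kept auth state purely in its own memory, a round-robin balancer would make you look logged out on every request that lands on the "wrong" server.

```python
# Toy sketch: two app servers that each keep auth tokens only in their own
# memory, behind a naive round-robin balancer. Names are made up.
import itertools

class AppServer:
    def __init__(self, name):
        self.name = name
        self.sessions = set()  # tokens this particular server knows about

    def login(self, token):
        self.sessions.add(token)

    def handle(self, token):
        return "200 OK" if token in self.sessions else "401 logged out"

servers = [AppServer("server-1"), AppServer("server-2")]
balancer = itertools.cycle(servers)  # round-robin: alternate between servers

token = "my-auth-token"
servers[0].login(token)  # you signed in, but only server-1 recorded it

for _ in range(4):
    server = next(balancer)
    print(server.name, server.handle(token))
# server-1 200 OK
# server-2 401 logged out   <- every other request looks like a sudden logout
# server-1 200 OK
# server-2 401 logged out
```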
I set up infrastructure for web apps, and what you're describing is still most likely a server config issue, not a Lemmy issue itself, unless Lemmy is lacking something needed for load balancing (in which case the bug is actually a missing feature, and I don't think that's the case). I don't know how Lemmy keeps/reads its sessions, but usually it doesn't matter from the application code's standpoint. When preparing a multi-host setup as an admin, you need to make sure each instance accesses the same session data, or whatever application data needs to be shared anyway. There are many options:
- Database: it’s not great for DB performance and is usually avoided. The problem can’t occur here, as all the instances have to access the same database (or its replicas) in the first place
- Filesystem: the problem can occur here, but can be worked around with CIFS or NFS, which hits performance
- Redis: good for performance; as many hosts as you want can access the same Redis instance (unless Redis is overloaded, which is pretty hard to do with small session values) — see the sketch after this list
- Memcached: also an option, but all sessions would be gone on service restart
- …?
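A rough sketch of what the Redis option looks like (assuming the redis-py client; the hostname, key format, and session contents are made up, and Lemmy may handle auth completely differently): every app host points at the same Redis, so it doesn't matter which one the balancer picks.

```python
# Toy sketch of a shared Redis session store, assuming the redis-py client.
import json
import redis

r = redis.Redis(host="redis.internal", port=6379)  # shared by all app hosts

def save_session(token: str, data: dict, ttl_seconds: int = 86400) -> None:
    # Session values like this are tiny; Redis handles them easily.
    r.setex(f"session:{token}", ttl_seconds, json.dumps(data))

def load_session(token: str) -> dict | None:
    raw = r.get(f"session:{token}")
    return json.loads(raw) if raw else None

# Host A writes the session at login...
save_session("my-auth-token", {"user_id": 42})
# ...and host B (or any other host) reads the same session on the next request.
print(load_session("my-auth-token"))  # {'user_id': 42}
```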
A load balancing scenario where all requests are handled by one host and the other only takes requests when the first is overloaded is very unlikely. The most common balancing algorithms are roundrobin, which (more or less) splits connections (not load!) equally across all targets, and leastconn, which hits the host that is least busy in terms of active connections. Of course they could’ve used a ‘fallback’ algorithm, but that’s rather inefficient in most scenarios.
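For the two common algorithms, the difference boils down to this (a toy illustration, not any real balancer's code):

```python
# roundrobin hands out backends in turn; leastconn picks the least busy one.
import itertools

active_conns = {"app-1": 3, "app-2": 0, "app-3": 7}   # backend -> open connections

rr = itertools.cycle(active_conns)                     # roundrobin state

def pick_roundrobin() -> str:
    return next(rr)                                    # app-1, app-2, app-3, app-1, ...

def pick_leastconn() -> str:
    return min(active_conns, key=active_conns.get)     # app-2 (fewest connections)

print(pick_roundrobin(), pick_leastconn())
```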
Or maybe the issue is somewhere else entirely and is caused by a full-page/CDN cache, etc.
Lemmy.world rate-limiting me when I tried to sign up, forcing me onto a different instance, seems to have saved me from this.
Ha, I’m glad I saw this post. Thought I was going mad. :-)
Same thing goes for upvoting on some Lemmy apps. There are times I upvote something and scroll down, and when I scroll back up my upvote is removed.
Holy LOL this is seriously apt.
Y’all should try Kbin.social. The admin at least lets us know when shit’s gonna crash and stop working.
I signed up to 3 instances with the same username because of this.
Same, and now I’m down to my last instance.
VLemmy, my previous main, just…poof, vanished.
.world is on the fritz.
Please .one keep it together.
This was literally my experience 3 times in the past minute before I saw this post … 😂
Thank god, I thought this was just a me problem
Glad it’s not just me. I can’t even log in on Liftoff anymore for some reason; it just sits there loading forever. Tried clearing cache and data and reinstalling, to no avail.
Every reload is a quantum observation
So it’s not just me?