Hi guys,
my worker daemon is trying to do some job that makes my system go OOM literally every five minutes.
The problem is that the worker runs, goes nuts, consumes 4 cores and all available memory (roughly 2.5 GB), and then OOMs. Nothing I tried has had any effect.
I run 2025.07rc from YunoHost, but I had the same issue on 2024.12.
I would like to ask you for guidance - I guess I need to pinpoint the job and probably delete it manually (or do it remotely from my desktop with 32 GB of RAM).
Things I tried so far (where these knobs live is sketched below the list):
Upgraded to 2025
Reduced worker threads
Reduced queries per worker job
Reduced buffer and packet sizes for MariaDB and PHP
Reduced pm.max_children
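For reference, a rough way to check the current values of those knobs on a Debian-style box (paths and wildcards are assumptions for a typical YunoHost install - adjust PHP versions and pool names to your setup):

# PHP-FPM pool limits (pm.max_children, per-pool memory_limit overrides)
grep -R "pm.max_children\|memory_limit" /etc/php/*/fpm/pool.d/
# MariaDB buffer and packet sizes
grep -R "innodb_buffer_pool_size\|max_allowed_packet" /etc/mysql/
# Friendica's own worker settings usually sit in config/local.config.php or the admin panel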
Michael 🇺🇦
in reply to Schmaker
Schmaker
in reply to Michael 🇺🇦
Have no idea where to get the callstack, so I tried my best bet :)
Feel free to point me in the right direction. I'm away from my terminal, so I tried to get these from web access at least...
dmesg
/var/log/syslog
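On a Debian-style system (which a YunoHost install is), the OOM killer's verdict can usually be pulled out of exactly those two sources with something like this (a generic sketch, not specific to Friendica):

dmesg -T | grep -iE 'out of memory|oom-kill|killed process'   # kernel's view: which process was killed and why
grep -iE 'oom|killed process' /var/log/syslog | tail -n 50    # the same events as they land in syslog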
Michael 🇺🇦
in reply to Schmaker
Schmaker
in reply to Michael 🇺🇦
Michael 🇺🇦
in reply to Schmaker
Schmaker
in reply to Michael 🇺🇦
As edited - I just found out I need to set the limit for php-cli as well. My bad, sorry about that.
Now, where should I look for the callstack?
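A note on that php-cli limit: the worker runs under PHP's CLI SAPI, so it reads the CLI php.ini rather than the FPM pool settings. Two generic commands show which file that is and what limit currently applies (nothing Friendica-specific assumed):

php --ini | grep 'Loaded Configuration File'      # which php.ini the CLI actually loads
php -r 'echo ini_get("memory_limit"), PHP_EOL;'   # effective memory_limit for CLI scripts, i.e. for the worker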
Michael 🇺🇦
in reply to Schmaker
Schmaker
in reply to Michael 🇺🇦
Not sure what to look for; the only thing I do see there is:
2025-12-03T19:08:11Z worker [ALERT]: Fatal Error (E_ERROR): Allowed memory size of 1073741824 bytes exhausted (tried to allocate 16384 bytes) {"code":1,"message":"Allowed memory size of 1073741824 bytes exhausted (tried to allocate 16384 bytes)","file":"/var/www/friendica/src/Database/Database.php","line":543,"trace":null,"worker_id":"d142b9f","worker_cmd":null} - {"file":null,"line":null,"function":null,"request-id":"69308a8f0f968","stack":"","uid":"5624c8","process_id":8088}
The limit for php-cli is 1024M.
pastebin.com/D64dbJhn
Michael 🇺🇦
in reply to Schmaker
d142b9f. They belong together.
Schmaker
in reply to Michael 🇺🇦
Michael 🇺🇦
in reply to Schmaker
Take the worker_id in the log line with the exception. Then collect all entries with the same value.
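In shell terms that is just a grep for the worker_id from the fatal error above (the log file name and path are assumptions - use whatever is configured in your admin panel's logging settings):

grep 'd142b9f' /var/www/friendica/friendica.log   # every log entry written by the worker process that hit the OOM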
Schmaker
in reply to Michael 🇺🇦
I just checked in a hurry and it does not seem like there are any new worker-related DEBUG lines in the log. I'll let it run and check back in the evening after work.
Thank you for your assistance.
Michael 🇺🇦
in reply to Schmaker
Schmaker
in reply to Michael 🇺🇦
@Michael 🇺🇦
That's what I did, but in the morning - even though the memory limit was reached - there was no related debug line.
Hoping for a different outcome in the evening :)
@Friendica Developers
Schmaker
in reply to Michael 🇺🇦
Managed to find a worker job that has some DEBUG stuff attached.
next.oscloud.cz/s/FrxM3Ks87M6y…
Michael 🇺🇦
in reply to Schmaker
Schmaker
in reply to Michael 🇺🇦
That was my bad. I filtered on the wrong identifier and noticed it too late, and then the connection to my VPS suddenly crashed. I just restored it, sorry about that :)
This one seems to have debug lines
pastebin.schmaker.eu/amapimipo…
Michael 🇺🇦
in reply to Schmaker
Schmaker
in reply to Michael 🇺🇦
Michael 🇺🇦
in reply to Schmaker
You could increase the memory limit to find a sweet spot where your system doesn't crash but the worker runs through - but I don't know if that will happen.
If you are lucky, you could have a look into the workerqueue table to check if there is an entry with the id 18327277. Then we could see which command is responsible.
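A minimal way to do that lookup from the shell, assuming the default database name "friendica" and credentials that may differ on your install:

mysql friendica -e "SELECT * FROM workerqueue WHERE id = 18327277 \G"   # \G prints the row vertically, which is easier to read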
Schmaker
in reply to Michael 🇺🇦
Schmaker
in reply to Michael 🇺🇦
Did some research, and it seems to me (not sure, I'm not a dev) that it's all caused by the 2023 activitypub-troll.cf raid.
Even though I banned and tried to purge this, it seems like there are way too many records for the server to handle, and that's why there were OOMs.
That could probably be fixed on Friendica's side with some kind of limiting?
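A rough way to gauge how many of those records are sitting in the server table (the gserver table name comes up later in this thread; the url column and the LIKE pattern are assumptions about the schema and the troll subdomains - adjust as needed):

mysql friendica -e "SELECT COUNT(*) FROM gserver WHERE url LIKE '%activitypub-troll.cf%';"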
Schmaker
in reply to Schmaker
Michael 🇺🇦
in reply to Schmaker
Schmaker
in reply to Michael 🇺🇦
I just want to confirm that deleting 37M (!) gserver troll records fixed the leak. Some kind of limit for UpdateBlockedServers could make sense, though :)
Anyway, thank you very much for your patience with my unskilled terminal fingers, and thanks again for your assistance.
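For anyone wanting to check whether UpdateBlockedServers jobs are piling up before going that far, a query like the one below should show them - the "parameter" column holding the task name is an assumption about the workerqueue schema and may differ between Friendica versions:

mysql friendica -e "SELECT id, priority, created FROM workerqueue WHERE parameter LIKE '%UpdateBlockedServers%';"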
Michael 🇺🇦
in reply to Schmaker
Schmaker
in reply to Michael 🇺🇦 • • •No problem, will do
@Friendica Developers
Schmaker
in reply to Michael 🇺🇦
github.com/friendica/friendica…
Roland Häder🇩🇪
in reply to Schmaker
gserver table. But it was later reverted. My fork still does have it. Commit id c8a09e493c627ed12e3dc94d1ae5bce395ec5ca3 contains that change again: github.com/Quix0r/friendica/co…
Roland Häder🇩🇪
in reply to Roland Häder🇩🇪
My gserver table has +150k records. I still find that a lot, and I guess someone is flooding it again. But still not millions of records.
Schmaker
in reply to Michael 🇺🇦
next.oscloud.cz/s/Ft4mJaCEZwEP…