There are plenty of people who commit crimes just to get into Germany's feel-good prisons, because nowhere are criminals better off than there! While our poor don't know where their next warm meal will come from, the #Knastrologen are fully provided for... including fitness and education! Which raises the question: would there be fewer crimes, or even none at all, without prison? I believe NO

bz-berlin.de/berlin/berlins-li…

This should not be ignored ...


This should not be ignored ... ;-) #Wartung #Server4You #Netzwerk #Downtime

MIME-Version: 1.0
Subject: Announcement: network maintenance 19.10.2022 / zulu289
From: "SERVER4YOU" <noreply@server4you.de>
Content-Type: multipart/alternative;
boundary="=_16fe56fcb1c71788487e2e508b5ff007"
Message-ID: <rjbu4w.en52th@df-ms-l1.server4you.de>
To: Roland Häder <x.y@z>

Information
Announcement: network maintenance 19.10.2022 / zulu289

Hello Roland Häder,

We would like to inform you about scheduled network maintenance in our Strasbourg (EU) data center.
This maintenance affects:

Product: EcoServer Large X5
Server name: zulu289

The maintenance will be carried out in the night from Wednesday, 19.10.2022 to Thursday, 20.10.2022, between 23:00 and 01:00 CEST.
(For our international customers: CEST/Central European Summer Time = UTC+2)

We expect network interruptions of up to 30 minutes during this window.

This maintenance is necessary to maintain and improve the quality of our services.
If you have any questions, please use the support ticket system in the PowerPanel or contact our customer service by phone.

Thank you for your time and understanding.

Your support team

in reply to Roland Häder🇩🇪

But one thing I have to say: they are really quick. The admins needed just ~520 pings (I had briefly interrupted the first ping run):
$ ping www.mxchange.org
PING www.mxchange.org (188.138.90.169) 56(84) bytes of data.
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=518 ttl=53 time=16.4 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=519 ttl=53 time=16.2 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=520 ttl=53 time=16.6 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=521 ttl=53 time=16.1 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=522 ttl=53 time=16.1 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=523 ttl=53 time=16.1 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=524 ttl=53 time=16.1 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=525 ttl=53 time=16.5 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=526 ttl=53 time=16.4 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=527 ttl=53 time=16.3 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=528 ttl=53 time=16.1 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=529 ttl=53 time=16.0 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=530 ttl=53 time=16.4 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=531 ttl=53 time=16.4 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=532 ttl=53 time=16.3 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=533 ttl=53 time=16.1 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=534 ttl=53 time=16.1 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=535 ttl=53 time=16.3 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=536 ttl=53 time=16.1 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=537 ttl=53 time=15.8 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=538 ttl=53 time=16.6 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=539 ttl=53 time=16.4 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=540 ttl=53 time=16.4 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=541 ttl=53 time=16.5 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=542 ttl=53 time=16.4 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=543 ttl=53 time=16.4 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=544 ttl=53 time=16.6 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=545 ttl=53 time=16.3 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=546 ttl=53 time=16.5 ms
64 bytes from f.haeder.net (188.138.90.169): icmp_seq=547 ttl=53 time=16.4 ms
^C
--- www.mxchange.org ping statistics ---
547 packets transmitted, 30 received, 94.5155% packet loss, time 555376ms
rtt min/avg/max/mdev = 15.849/16.299/16.645/0.183 ms
$

Respect, respect!
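
For a rough sense of the actual downtime: ping sends one request per second by default, so the 547 transmitted minus 30 answered packets in the summary above amount to roughly 517 seconds without replies. A quick sanity check in the shell:

$ echo "~$(( (547 - 30) / 60 )) minutes"
~8 minutes

So about 8 to 9 minutes of interruption, comfortably within the announced 30-minute window.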

Configuration for FPM pool behind this Friendica instance


I have some PHP-FPM pools running on this server and noticed that others, like #Nextcloud, were often slowing down or becoming unreachable. When I checked with htop on my server, I saw that a lot of child processes (if not all) were working for Friendica.

I tried lower and higher values for the number of child processes, but nothing really changed. Then I changed the pool type from ondemand to dynamic, and since then none of the PHP-FPM pools have slowed down.
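
For reference, the switch itself boils down to a single directive in the pool file; a minimal sketch, run from /etc/php/7.4/fpm as in the listing below (dynamic mode additionally needs the pm.min_spare_servers / pm.max_spare_servers values, which were already set here):

# change pm = ondemand to pm = dynamic, then reload FPM
sed -i 's/^pm = ondemand$/pm = dynamic/' pool.d/friendica.conf
systemctl reload php7.4-fpm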

So in case you are running into the same issue, this was the cause for me. And for those who are interested, here is the final configuration file:

root@zulu289:/etc/php/7.4/fpm# grep -v ";" pool.d/friendica.conf |sort --unique

 
chdir = /
[f.haeder.net]
group = www-data
listen.allowed_clients = 127.0.0.1
listen.group = www-data
listen.mode = 0660
listen.owner = www-data
listen = /var/run/apache2/php7.4-fpm_friendica.sock
php_admin_flag[log_errors] = on
php_admin_value[error_log] = /var/www/.../phptmp/fpm-php.friendica.log
php_admin_value[memory_limit] = 128M
php_admin_value[session.save_path] = /var/www/.../phptmp
pm = dynamic
pm.max_children = 1000
pm.max_requests = 10000
pm.max_spare_servers = 15
pm.min_spare_servers = 5
pm.status_path = /pool-status
request_slowlog_timeout = 20s
slowlog = /var/log/php-fpm/$pool-slow.log
user = vuXXXX
root@zulu289:/etc/php/7.4/fpm#

I have only obscured the full paths and the user name.
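
Since the pool sets pm.status_path = /pool-status, you can also watch the worker counts directly over the FastCGI socket to verify the pool no longer starves. A sketch using the cgi-fcgi tool (Debian package libfcgi-bin), with the socket path from the configuration above:

SCRIPT_NAME=/pool-status \
SCRIPT_FILENAME=/pool-status \
REQUEST_METHOD=GET \
cgi-fcgi -bind -connect /var/run/apache2/php7.4-fpm_friendica.sock

The output lists active and idle processes per pool, which makes it easy to see whether pm.max_children is ever reached.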

Your touchless "girlfriend" on Only-Fans or Twitch (ASMR)


This video clearly says it:
Touchless Girlfriends creating Delusional Incels


That's not your girlfriend, incel. That's a "model" performing on a paid service you paid for. You don't have to pay a real-world, physical girlfriend to do whatever you'd love her to do to you; just ask her. But incels cannot do that: they shy away from real face-to-face interaction and physical contact, such as smelling and touching a partner. Surely not in the armpits or anus, that's gross, my friend.

#e-girl #ASMR #OnlyFans #Twitch #incel

Switching OBS from the Debian 26.x version to the Flatpak 28.x version wasn't 100% smooth, but doable


I recently switched from a Debian-based package of #OBS (Open Broadcaster Software) to #Flatpak, and it was not 100% smooth. When I started OBS with flatpak run com.obsproject.Studio, I first had to set up my #TwitchTV account again, and after that all my scenes were gone. Luckily I had exported them, so I was able to re-import them and drop the "Untitled" one.

So before you end up the same way and have to set up all your scenes all over again, here is a small guide that saves you a lot of time (the shell commands are collected right after the list):

  1. Start up the old version (e.g. 26.1.2 in my case).
  2. Export your scenes, e.g. to ~/Nextcloud/Backups/OBS/20220927 - My Scenes.json
  3. Use your usual package manager to uninstall the obs-studio package and all its dependencies.
  4. Install it via Flatpak as described on the download page: flathub.org/apps/details/com.o…
  5. Optionally add a small shell script to launch it, e.g. screen -dmS obs flatpak run com.obsproject.Studio, so you don't have to keep an xterm open for it.
  6. Now simply import the previously exported file (after you have set up your streaming account again) and delete the "Untitled" scene (generated by OBS).
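
The command-line parts of steps 3 to 5 collected in one place, assuming a Debian-based system with Flatpak and the Flathub remote already set up:

$ sudo apt purge obs-studio
$ sudo apt autoremove
$ flatpak install flathub com.obsproject.Studio
$ screen -dmS obs flatpak run com.obsproject.Studio

The last line starts OBS detached in a screen session, so no terminal has to stay open for it.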

What have we learned here? Yep, the usual lesson: backups are life-savers! So when a program provides an export feature, use it to make backups of your settings and data.



Default Owncast administrator login and password?


Hi all,

I have successfully run screen -dmS owncast ./owncast -webserverport 8090 -webserverip 127.0.0.1 on my computer; it starts and is accessible. But when I try to open /admin, I fail to guess the default admin login/password. I have already tried several:

  • admin
  • root
  • owncast

All with the password the same as the login, or with an empty password.
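
By the way, candidate passwords can also be tested from the shell, since the admin area answers with HTTP Basic auth; a sketch, assuming the /api/admin/serverconfig endpoint from the Owncast admin API documentation and the port from the command above:

$ curl -u admin:PASSWORD http://127.0.0.1:8090/api/admin/serverconfig

A 401 response means the guess was wrong; a correct one returns the server configuration as JSON.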

Does anyone know it? Please boost/reshare.