"Debloating" your Debian Linux even further

Why do we need to do that?

Well, there are two main reasons for that. One is kinda important for everyone who runs their SBC off an SD card; the other is my own deep personal preference.
 

Reason 1: save your SD card / eMMC chip from dying earlier

We're running Debian Linux, which was put together with desktop machines with HDDs or SSDs in mind. What we have in our SBCs is eMMC (at best) or an SD card.
 
The main difference between a desktop-grade SSD and an SD card is that the SSD's controller is much more advanced. It protects the storage cells by distributing write operations evenly. SD card controllers also do that kind of thing, but not that well, for obvious reasons (cost saving + smaller form factor).

And you need to know: the more you write to an SD card, the less it lasts. So let's start. The idea of this exercise is to turn off as many write-heavy activities as we can while leaving the system up and running.
 

Reason 2: I don't like a lot of the automation Linux offers out of the box

In this sense, Linux is becoming a sort of mini-me of Windows, which I hate so much. I can hardly stay calm when I see some Windows service eating up a lot of CPU / memory, doing some IO or sending something over the network, and I can't even find out what that service is doing:

- Is it doing something crucial for the OS, so that killing it will break Windows?
- Is it something auxiliary that Windows can live without?
- Does it mine some cryptocoins for Microsoft's benefit, but on my own hardware?

... because Windows is closed source, and you cannot get inside it to see which of the above is true.

I also deeply hate Windows when it decides it's time to apply some patches, perform some kind of housekeeping for whatever built-in stuff, or reboot to complete a system upgrade. Without asking me.


As I said, unfortunately I see Linux (especially Debian Linux) drifting towards Windows behavior, and nobody cares about it. See proof#1 and proof#2.

But fortunately, this is still Linux, and it's open source with tons of documentation and questions already asked and answered. So we can fine-tune it however we want, and do that knowingly, without any risk of breaking something in a completely distant part of the system. This is why I love Linux so much.

Disclaimer

 
A word of warning here. I'm going to turn off a lot of stuff. You might hear from others (probably they will even be screaming at you) that all the things I suggest turning off here are crucial for your very existence.
 
Pause there and take a deep breath. Think of the worst possible scenario. Say some critical security bug is found in Debian, and given you turned off automatic updates, your system still has that bug and is potentially vulnerable, if ..
 
  • you have your SBC connected directly to the Internet with all ports exposed, and you're running tons of software from 3rd parties / non-official repos / snaps which expose themselves by opening ports to the outer world, inviting hackers to come in

  • you're using an old version of a web browser

... this might be an issue, yeah. But if you run apt update && apt upgrade yourself from time to time, it's no different from having these automatic update services running.

Remove cron jobs

 
Let's see what we have:
 
root@orangepi4-lts:~# ls /etc/cron.*
/etc/cron.d:
e2scrub_all  orangepi-truncate-logs  orangepi-updates  sysstat

/etc/cron.daily:
apt-compat  cracklib-runtime  locate     man-db                samba
aptitude    dpkg              logrotate  orangepi-ram-logging  sysstat

/etc/cron.hourly:
fake-hwclock

/etc/cron.monthly:

/etc/cron.weekly:
man-db  tor
 
A lot of things from there were moved to systemd timers; we'll deal with them a bit later.
 
/etc/cron.d/e2scrub_all performs a check of ext2-4 filesystems and marks a corrupted filesystem with a flag, so fsck will fix it on the next mount - practically, on the next reboot. Leave it for now, but its existence is very questionable, given the actual fix can't happen earlier than the next reboot.

TODO: I want to see for myself how I can read that flag value as a result of e2scrub_all's work. And probably get notified about it, rather than have fsck silently fix the issue.
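Until I figure that out properly, here's a hedged way to peek at what got recorded (assuming your root filesystem is /dev/mmcblk2p1, as on my board): the superblock state and error counters printed by tune2fs are where such marks end up:

tune2fs -l /dev/mmcblk2p1 | grep -iE "state|error"

"Filesystem state: clean" is what you want to see; error counters and timestamps show up in that output if anything was ever recorded.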

/etc/cron.d/orangepi-truncate-logs runs a shell script (/usr/lib/orangepi/orangepi-truncate-logs) every 15 minutes to truncate a lot of logs. Looks good, but it somehow clashes with log2ram (described below). I'm gonna leave it for now.

/etc/cron.d/orangepi-updates calls the shell script /usr/lib/orangepi/orangepi-apt-updates on a daily basis and after reboots to install updates silently. Part of the orangepi-bsp-cli-orangepi4-lts package. Remove that file. Some smart aleck at OrangePi decided he knows better than me.
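A plain rm works, but since the file belongs to a package, a future upgrade of orangepi-bsp-cli-orangepi4-lts may bring it back. A hedged way to make the removal stick is dpkg-divert (cron ignores files with dots in their names in /etc/cron.d, so the renamed copy is inert):

dpkg-divert --divert /etc/cron.d/orangepi-updates.disabled --rename /etc/cron.d/orangepi-updates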

/etc/cron.d/sysstat is the data collector for sysstat. The sysstat utilities are a collection of performance monitoring tools for Linux, including sar, sadf, mpstat, iostat, tapestat, pidstat, cifsiostat and the sa tools.
The cron job is just a misery. It runs the /usr/lib/sysstat/debian-sa1 script pretty much all the time, but that script exits if systemd is there, because sysstat was already moved to systemd. I've deleted /etc/cron.d/sysstat.
TODO: it's worth investigating where the systemd version of this grabber puts the raw statistics, to move that place to zram.

/etc/cron.daily/apt-compat doesn't run if systemd is in place. No harm, but I deleted it anyway.

/etc/cron.daily/aptitude is a script that saves package states to a log file. Another logging/backup courtesy I never asked for. Deleting it.

/etc/cron.daily/cracklib-runtime - part of cracklib-runtime, for lame people who use dictionary passwords. But we're not like them, right? Then purge the whole cracklib-runtime and libcrack2 packages together.

/etc/cron.daily/dpkg - part of dpkg. Backs up the metadata from the dpkg "database" of installed packages. As if it could do anything about it if it found something broken. Hehe. Added "exit 0" at the top to disable it.

/etc/cron.daily/locate - daily update of the locate database. Drop it. If you cannot find something with locate, and you're sure you updated its index with updatedb recently and didn't change a bit since then, then the file is simply not there. No need to hammer your filesystem on a daily basis.

/etc/cron.daily/logrotate - doesn't do anything when systemd is there. Can stay.

/etc/cron.daily/man-db - also doesn't do anything when systemd is there (I'm assuming there's an appropriate timer; we'll look at them a bit later). Can stay.

Looks like a lot of Debian packages are made with the idea in mind that users might want to switch from systemd back to plain init, at which point all these crontabs will start to matter again.

/etc/cron.daily/orangepi-ram-logging - not doing anything; logrotate.timer does all the job instead. Can stay.

/etc/cron.daily/samba - if you remember, we needed samba for our EmulationStation, so we could copy roms from our Windows machine using its File Explorer. This job makes a backup of the /etc/samba/smbpasswd file to /var/backups if it has changed. Really, the guy who wrote it is just a genius of proper backup strategy. Joking, he's not. Remove the job.

/etc/cron.daily/sysstat - doesn't do anything; the whole job was moved to systemd timers. Can stay.

/etc/cron.weekly/man-db - not executed with systemd. Can stay.

 

Remove/disable unneeded systemd timers

As we've seen above, a lot of things were migrated from cron to systemd timers. The cronned scripts just silently exit if they see systemd is there. So it's time to deal with the systemd timers and see what we can safely disable:
 
systemctl list-timers --all
 
NEXT                        LEFT          LAST                        PASSED       UNIT                         ACTIVATES
Wed 2022-08-31 22:10:00 MSK 5min left     Wed 2022-08-31 22:00:59 MSK 3min 6s ago  sysstat-collect.timer        sysstat-collect.service
Thu 2022-09-01 00:00:00 MSK 1h 55min left Wed 2022-08-31 00:00:02 MSK 22h ago      logrotate.timer              logrotate.service
Thu 2022-09-01 00:00:00 MSK 1h 55min left Wed 2022-08-31 00:00:02 MSK 22h ago      man-db.timer                 man-db.service
Thu 2022-09-01 00:07:00 MSK 2h 2min left  n/a                         n/a          sysstat-summary.timer        sysstat-summary.service
Thu 2022-09-01 00:44:45 MSK 2h 40min left Wed 2022-08-31 00:44:45 MSK 21h ago      systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Thu 2022-09-01 06:44:30 MSK 8h left       Wed 2022-08-31 06:22:59 MSK 15h ago      apt-daily-upgrade.timer      apt-daily-upgrade.service
Thu 2022-09-01 17:18:53 MSK 19h left      Wed 2022-08-31 20:13:53 MSK 1h 50min ago apt-daily.timer              apt-daily.service
Sun 2022-09-04 03:10:13 MSK 3 days left   Sun 2022-08-28 03:10:50 MSK 3 days ago   e2scrub_all.timer            e2scrub_all.service
Mon 2022-09-05 01:26:35 MSK 4 days left   Mon 2022-08-29 00:23:07 MSK 2 days ago   fstrim.timer                 fstrim.service

 
Same bloatware. Let's get it cleaned:

tbc
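Until I write up each timer individually, here's the general shape of the cleanup, a hedged sketch using the timers from my listing above (pick your own victims):

# stop and disable the apt auto-update timers
systemctl disable --now apt-daily.timer apt-daily-upgrade.timer
# same for the sysstat collectors and the man-db index rebuild, if you don't use them
systemctl disable --now sysstat-collect.timer sysstat-summary.timer man-db.timer
# masking the services guarantees nothing restarts them behind your back
systemctl mask apt-daily.service apt-daily-upgrade.service
# verify
systemctl list-timers --all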


Monitor what remaining processes are writing heavily to the SD card and deal with them, one by one

For that we're going to use some very nice software Linux can offer us for free.


What we can use here to accomplish our task:

  • "raw" tools like pcstat, pidstat, iostat, iotop and blkstat

  • we can attempt to find higher-level tools using apt-rdepends -r <rawpackage>

  • some random tools we found on the internet (like fatrace and csysdig)

So let's start rolling with the simple things.

fatrace

I really fell in love with that tool. In the call below I asked it:
  • to look only at specific events (-f W to monitor writes to files)
  • to limit the scope to the one mounted device of the current directory (-c), so I first cd'ed into /
  • to add timestamps to its log (-t)

I left it running for a while, and then I examined the resulting log file.

cd / ; fatrace -c -t -f W | tee /tmp/fatrace.log

 


The only unfortunate thing about fatrace is that it doesn't provide any info about the amounts of data being written, and you cannot overcome that, because the system interfaces it uses don't give these figures either.

In my case I could already see which processes were constantly writing their shit very important data to my precious SD card. Mainly it was Firefox, and down below I'll teach it not to update its bollocks sqlite database files inside ~/.mozilla/firefox/.

The problem, however, is not with Firefox alone. All the other browsers I tested were doing the very same thing. But don't you worry, we'll deal with them as well.

Thanks to fatrace I also discovered some vnstat daemon was collecting networking statistics and putting them into its own file under /var/lib/.
Who was the idiot that installed that? Was it me? Given that no other package depends on vnstat, apart from its own mini-me vnstati. Go to hell, both of you:

apt purge vnstat

iotop

In contrast to the previous tool, here you can see the amounts of data written, but you cannot limit it to a specific partition (like /, mounted from the SD card). So the applicability of this tool is limited: you see how much data was written, but not where exactly it was written to. Here are a few examples of how you can run it:

iotop -bktoqqq
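(Decoding the flags: -b batch mode, -k report in kilobytes, -t add a timestamp to each line, -o show only processes actually doing IO, -qqq suppress the repeating column headers.)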


In this mode I found iotop has a bug: it was not capturing write events from short-lived processes, like the process creating a screenshot. So it might miss some important share of the IO load from such processes.

iotop -obPat 
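(Here -P groups output by process instead of by thread, -a shows IO accumulated since iotop started rather than per-interval bandwidth, and -o again filters to the processes that actually did some IO.)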


 

Same story here. Firefox is winning the race by pushing a few MB of its shitty cache to the SD card within a matter of minutes. Firefox, I hate you! Why are you doing that? I have plenty, you hear it, plenty of free RAM. You're ruining my precious SD card.

iostat 

Every tool uses its own unique way of tracking IO. From iostat you can expect an IO breakdown per disk. If you run it like this:

iostat -dzphN 10

... it will first show you the accumulated report since system boot (on the screenshot it appears at the very top) and then show deltas every 10 seconds:


 

log2ram + zram

If you used the distro from Rearm.IT, everything should already be configured. Just check that you do have /dev/zram devices and the filesystems below are mounted from them:
 
root@orangepi4-lts:~# mount | grep zram
/dev/zram1 on /var/log type ext4 (rw,relatime,discard)
root@orangepi4-lts:~# df
Filesystem     1K-blocks     Used Available Use% Mounted on
udev             1904036        8   1904028   1% /dev
tmpfs             395600     2932    392668   1% /run
/dev/mmcblk2p1 122600084 86461672  34849544  72% /
tmpfs            1978000        0   1978000   0% /dev/shm
tmpfs               5120        8      5112   1% /run/lock
tmpfs            1978000       16   1977984   1% /tmp
/dev/zram1         49560    19796     26180  44% /var/log
tmpfs             395600       44    395556   1% /run/user/1000
 
In my case I can see that /var/tmp is not mounted from RAM, so I need to fix it. Edit /etc/fstab and make sure you have these lines:

tmpfs /tmp tmpfs defaults,nosuid 0 0
tmpfs /var/tmp tmpfs size=10M,nosuid 0 0
tmpfs /var/cache/samba tmpfs size=5M,nosuid 0 0
 
If you modified any of those (or added missing ones, like I did), run mount -a to remount everything without rebooting.

If you're using some other Linux distro, read how to configure log2ram here - https://ikarus.sg/extend-sd-card-lifespan-with-log2ram/

disable swap (makes sense for models with 4 GB+ RAM) or make sure swap is using zram

Even though I see that in my Linux distro the swap is already mounted from zram0, to me it doesn't make much sense to have it like that. The only positive thing about it is that zram uses compression, so if your system does swap, the swapped pages stay in memory, but compressed. On the other hand:
  • I've hardly ever seen my system run out of memory. It happened just once, due to a bug in Gwenview. But guys, I have a luxurious 4 GB of RAM

  • Using compression on zram will definitely cost CPU whenever something is moved to swap or read back

Here's how I have it:
 
root@orangepi4-lts:~# swapon
NAME       TYPE      SIZE USED PRIO
/dev/zram0 partition 1.9G 416M    5
 
I decided to keep this swap on zram for now, given that it's still using memory and not the SD card.
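If you'd rather disable it entirely, here's a hedged sketch (the config that creates zram0 differs between images, so find what sets it up on yours before making this permanent):

swapoff /dev/zram0
# and make the kernel reluctant to swap while it's still enabled:
echo "vm.swappiness=10" > /etc/sysctl.d/99-swappiness.conf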

some further steps

https://raspberrypi.stackexchange.com/questions/169/how-can-i-extend-the-life-of-my-sd-card


Disable systemd services we don't need

Disclaimer: you really should know what you're doing and look into the details of what exactly you're disabling.

In my case, I don't want any "automated" or "unattended" software upgrades / updates to happen; I don't want my computer to do something I can do myself. So I want all that disabled or even removed.
 
Let's see what exactly we have running (I'm skipping some of the systemctl output for the services I do want and know what they're doing):

$ systemctl status

             ├─nfs-mountd.service
             │ └─119304 /usr/sbin/rpc.mountd --manage-gids

             ├─nfs-blkmap.service
             │ └─119752 /usr/sbin/blkmapd

             ├─nfs-idmapd.service
             │ └─119303 /usr/sbin/rpc.idmapd


             ├─packagekit.service
             │ └─16489 /usr/libexec/packagekitd

             ├─unattended-upgrades.service
             │ └─1123 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown

             ├─upower.service
             │ └─2296 /usr/libexec/upowerd

             ├─accounts-daemon.service
             │ └─2072 /usr/libexec/accounts-daemon

             ├─haveged.service
             │ └─745 /usr/sbin/haveged --Foreground --verbose=1

nfs-*.service

who needs NFS nowadays? Drop it!

sudo nala remove nfs-common



packagekit.service

pi@orangepi4-lts:~ $ dpkg -S /usr/libexec/packagekitd
packagekit: /usr/libexec/packagekitd
pi@orangepi4-lts:~ $ apt info packagekit
Package: packagekit
Version: 1.2.2-2
Priority: optional
Section: admin
Maintainer: Matthias Klumpp <mak@debian.org>
Installed-Size: 2,857 kB
Depends: libglib2.0-bin, policykit-1, init-system-helpers (>= 1.52), libappstream4 (>= 0.10.0), libapt-pkg6.0 (>= 1.9.2), libc6 (>= 2.28), libgcc-s1 (>= 3.0), libglib2.0-0 (>= 2.54), libgstreamer1.0-0 (>= 1.0.0), libpackagekit-glib2-18 (>= 1.2.1), libpolkit-gobject-1-0 (>= 0.99), libsqlite3-0 (>= 3.5.9), libstdc++6 (>= 5.2), libsystemd0 (>= 214)
Recommends: packagekit-tools, systemd
Suggests: appstream
Breaks: libpackagekit-glib2-14 (<= 0.7.6-4), libpackagekit-qt2-2 (<= 0.7.6-4), packagekit-backend-apt (<< 1.0), packagekit-backend-aptcc (<< 1.0), packagekit-backend-smart (<< 1.0), packagekit-offline-update (<< 1.0), packagekit-plugin-click (<= 0.3.1), plymouth (<< 0.9.5)
Homepage: https://www.freedesktop.org/software/PackageKit/
Tag: admin::package-management, implemented-in::c, implemented-in::python,
 role::program
Download-Size: 575 kB
APT-Manual-Installed: no
APT-Sources: http://deb.debian.org/debian bullseye/main arm64 Packages
Description: Provides a package management service

This is the abstraction layer which makes it possible for applications like KDE Discover to work on any kind of distro, no matter what software packaging tools it uses: dpkg/apt, rpm/yum/dnf, pacman, whatever.

If you want to keep using KDE Discover you'll need to keep it. I disabled it.
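Note that packagekitd is started on demand over D-Bus, so a plain disable may not keep it down. This is the sequence I'd use (mask wins, because a masked unit can't be resurrected by D-Bus activation):

systemctl stop packagekit
systemctl mask packagekit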


unattended-upgrades.service

pi@orangepi4-lts:~ $ dpkg -S /usr/share/unattended-upgrades/unattended-upgrade-shutdown
unattended-upgrades: /usr/share/unattended-upgrades/unattended-upgrade-shutdown
pi@orangepi4-lts:~ $ apt info unattended-upgrades
Package: unattended-upgrades
Version: 2.8
Priority: optional
Section: admin
Maintainer: Michael Vogt <mvo@debian.org>
Installed-Size: 334 kB
Depends: debconf (>= 0.5) | debconf-2.0, debconf, python3, python3-apt (>= 1.9.6~), python3-dbus, python3-distro-info, ucf, lsb-release, lsb-base, xz-utils
Recommends: systemd-sysv | cron | cron-daemon | anacron
Suggests: bsd-mailx, default-mta | mail-transport-agent, needrestart, powermgmt-base, python3-gi
Tag: admin::package-management, implemented-in::python, role::program,
 suite::debian, works-with::software:package
Download-Size: 88.6 kB
APT-Manual-Installed: yes
APT-Sources: http://deb.debian.org/debian bullseye/main arm64 Packages
Description: automatic installation of security upgrades 

Self-explanatory. Remove it. I'm able to install all my security updates myself!

apt purge unattended-upgrades

upower.service

pi@orangepi4-lts:~ $ dpkg -S /usr/libexec/upowerd
upower: /usr/libexec/upowerd
pi@orangepi4-lts:~ $ apt show upower
Package: upower
Version: 0.99.11-2
Priority: optional
Section: admin
Maintainer: Utopia Maintenance Team <pkg-utopia-maintainers@lists.alioth.debian.org>
Installed-Size: 420 kB
Depends: dbus, udev, libc6 (>= 2.17), libglib2.0-0 (>= 2.41.1), libgudev-1.0-0 (>= 147), libimobiledevice6 (>= 0.9.7), libplist3 (>= 1.11), libupower-glib3 (>= 0.99.8), libusb-1.0-0 (>= 2:1.0.8)
Recommends: policykit-1
Homepage: https://upower.freedesktop.org/
Tag: admin::power-management, hardware::power, hardware::power:acpi,
 hardware::power:ups, implemented-in::c, interface::daemon,
 role::program
Download-Size: 113 kB
APT-Manual-Installed: no
APT-Sources: http://deb.debian.org/debian bullseye/main arm64 Packages
Description: abstraction for power management

This service provides various information about electrical power for your PC and linked devices, like the remaining battery charge of your laptop or bluetooth mouse. I tried to remove it, but it removes so many things with it (like sddm), so unfortunately you have to keep that beast. In my case, this service doesn't provide correct information about the remaining battery of connected bluetooth gamepads:
 
upower -d

I will raise a bug about this issue.

haveged.service

pi@orangepi4-lts:~ $ dpkg -S /usr/sbin/haveged
haveged: /usr/sbin/haveged
pi@orangepi4-lts:~ $ apt info haveged
Package: haveged
Version: 1.9.14-1
Priority: optional
Section: misc
Maintainer: Jérémy Bobbio <lunar@debian.org>
Installed-Size: 92.2 kB
Pre-Depends: init-system-helpers (>= 1.54~)
Depends: lsb-base (>= 3.2-14), libc6 (>= 2.17), libhavege2 (>= 1.9.13)
Suggests: apparmor
Homepage: https://issihosts.com/haveged/
Tag: implemented-in::c, interface::daemon, role::program, scope::utility,
 security::cryptography
Download-Size: 39.1 kB
APT-Manual-Installed: yes
APT-Sources: http://deb.debian.org/debian bullseye/main arm64 Packages
Description: Linux entropy source using the HAVEGE algorithm
 

A random number generation daemon. I'm not joking. And it's an important part of the distribution, as a lot of crypto things depend on having truly random numbers generated. Don't touch it. It eats just 3 megs of RAM, but it provides true randomness for /dev/random.

accounts-daemon.service

Coming soon

configure X to not generate a huge .xsession-errors file (or move that file to /tmp)


If you were following all the above steps, you might have noticed (with the help of fatrace) that a lot of stuff is written to the ~/.xsession-errors file. This is how X.Org is configured by default in the /etc/X11/Xsession file:

...
ERRFILE=$HOME/.xsession-errors
...
# attempt to create an error file; abort if we cannot
if (umask 077 && touch "$ERRFILE") 2> /dev/null && [ -w "$ERRFILE" ] &&
  [ ! -L "$ERRFILE" ]; then
  chmod 600 "$ERRFILE"
elif ERRFILE=$(tempfile 2> /dev/null); then
  if ! ln -sf "$ERRFILE" "${TMPDIR:=/tmp}/xsession-$USER"; then
    message "warning: unable to symlink \"$TMPDIR/xsession-$USER\" to" \
             "\"$ERRFILE\"; look for session log/errors in" \
             "\"$TMPDIR/xsession-$USER\"."
  fi
else
  errormsg "unable to create X session log/error file; aborting."
fi

exec >>"$ERRFILE" 2>&1


 
Given our homedirs are on the SD card, we don't want these permanent writes landing on it with such logs. Let's fix that by changing the file to a symlink pointing somewhere under /tmp. /tmp is mounted as tmpfs (essentially, to memory), so we avoid the burden of constant writes to the SD:

mv ~/.xsession-errors ~/.xsession-errors.bak
ln -s /tmp/$USER.xsession-errors ~/.xsession-errors

Note that the script above explicitly checks [ ! -L "$ERRFILE" ], so with the symlink in place Xsession will actually take the tempfile branch; either way the log ends up under /tmp rather than in your homedir. Another option would be to configure X to log only critical errors, but I'm fine with my current option for now.

Disable journaling for ext4 filesystems

We're not running a production server for a patients-life-critical application in a hospital. If we lose a bit of info, or some app corrupts its cache when our SBC gets unexpectedly powered off, we can survive that. We don't run apps that wouldn't survive it. So let's go:


# inspect the current feature list first
tune2fs -l /dev/mmcblk2p1
# note: the journal can only be removed while the filesystem is unmounted
# or mounted read-only, so do this from a recovery shell / another system
tune2fs -O ^has_journal /dev/mmcblk2p1
e2fsck -f /dev/mmcblk2p1
reboot

Configure your browsers to not write their cache that aggressively to your homedir

Same as above: as we figured out, browsers tend to write a lot of things to your ~/.cache, ~/.config, ~/.mozilla, or wherever else in your home dir. Some of that stuff we probably want written there, like cookies. But most of the other stuff you'll see written there is just lazy browser or plugin developers who didn't pay enough attention to such important details.

Firefox

In your address bar type "about:config" and hit enter. Accept the warning and proceed. We need to modify these settings:

browser.cache.disk.enable = false
browser.cache.disk.smart_size.enabled = false
browser.cache.disk_cache_ssl = false


+ I'll need to search for more, as it still writes to a number of its internal sqlite files like:

pi@orangepi4-lts:~ $ grep -oE "\/home[^ ]*" /tmp/fatrace.log | sort | uniq -c
     10 /home/pi/.mozilla/firefox/(x).default-esr/AlternateServices.txt
      1 /home/pi/.mozilla/firefox/(x).default-esr/broadcast-listeners.json
      1 /home/pi/.mozilla/firefox/(x).default-esr/broadcast-listeners.json.tmp
    325 /home/pi/.mozilla/firefox/(x).default-esr/cookies.sqlite
   2366 /home/pi/.mozilla/firefox/(x).default-esr/cookies.sqlite-wal
      4 /home/pi/.mozilla/firefox/(x).default-esr/datareporting/aborted-session-ping
      4 /home/pi/.mozilla/firefox/(x).default-esr/datareporting/aborted-session-ping.tmp
      1 /home/pi/.mozilla/firefox/(x).default-esr/datareporting/archived/2022-09/xxxmain.jsonlz4
      1 /home/pi/.mozilla/firefox/(x).default-esr/datareporting/archived/2022-09/xxxmain.jsonlz4.tmp
      1 /home/pi/.mozilla/firefox/(x).default-esr/datareporting/session-state.json
      1 /home/pi/.mozilla/firefox/(x).default-esr/datareporting/session-state.json.tmp
      5 /home/pi/.mozilla/firefox/(x).default-esr/favicons.sqlite
    123 /home/pi/.mozilla/firefox/(x).default-esr/favicons.sqlite-wal
     28 /home/pi/.mozilla/firefox/(x).default-esr/formhistory.sqlite
    122 /home/pi/.mozilla/firefox/(x).default-esr/formhistory.sqlite-journal
     50 /home/pi/.mozilla/firefox/(x).default-esr/permissions.sqlite
    135 /home/pi/.mozilla/firefox/(x).default-esr/permissions.sqlite-journal
     54 /home/pi/.mozilla/firefox/(x).default-esr/places.sqlite
    420 /home/pi/.mozilla/firefox/(x).default-esr/places.sqlite-wal
     15 /home/pi/.mozilla/firefox/(x).default-esr/prefs-1.js
      3 /home/pi/.mozilla/firefox/(x).default-esr/prefs.js
      3 /home/pi/.mozilla/firefox/(x).default-esr/protections.sqlite
      9 /home/pi/.mozilla/firefox/(x).default-esr/protections.sqlite-journal
      1 /home/pi/.mozilla/firefox/(x).default-esr/sessionstore-backups/recovery.jsonlz4
    121 /home/pi/.mozilla/firefox/(x).default-esr/sessionstore-backups/recovery.jsonlz4.tmp
      8 /home/pi/.mozilla/firefox/(x).default-esr/SiteSecurityServiceState.txt
     22 /home/pi/.mozilla/firefox/(x).default-esr/storage/default/moz-extensionxxx/idb/xxx-eengsairo.sqlite
     18 /home/pi/.mozilla/firefox/(x).default-esr/storage/default/moz-extensionxxx/idb/xxx-eengsairo.sqlite-wal
      4 /home/pi/.mozilla/firefox/(x).default-esr/storage/permanent/chrome/idb/xxxxAmcateirvtiSty.sqlite
      4 /home/pi/.mozilla/firefox/(x).default-esr/storage/permanent/chrome/idb/xxxxAmcateirvtiSty.sqlite-wal
      6 /home/pi/.mozilla/firefox/(x).default-esr/storage/permanent/chrome/idb/xxxxrsegmnoittet-es.sqlite
      5 /home/pi/.mozilla/firefox/(x).default-esr/storage/permanent/chrome/idb/xxxxrsegmnoittet-es.sqlite-wal
      4 /home/pi/.mozilla/firefox/(x).default-esr/webappsstore.sqlite
     42 /home/pi/.mozilla/firefox/(x).default-esr/webappsstore.sqlite-wal
      8 /home/pi/.mozilla/firefox/(x).default-esr/xulstore.json
      6 /home/pi/.mozilla/firefox/(x).default-esr/xulstore.json.tmp
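For the sqlite churn itself I don't have a clean per-pref answer yet. One blunt instrument worth looking at is profile-sync-daemon (psd), which keeps the whole browser profile on tmpfs and periodically syncs it back to disk; a hedged sketch, assuming it's packaged for your Debian release:

apt install profile-sync-daemon
# it runs as your user, not root:
systemctl --user enable --now psd.service
psd preview    # shows which browser profiles it picked up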


Qutebrowser

tbc

Chromium


Measuring our success

 
With all the above fixes in place, we can run iostat again to see the accumulated IO stats since the very last boot. Behold, this is my system after 13 hours of uptime:

Every 2.0s: iostat -dzphN ; uptime                                                                                                                      orangepi4-lts: Fri Sep  2 13:21:31 2022

Linux 5.18.5-rk3399 (orangepi4-lts)     09/02/2022      _aarch64_       (6 CPU)

      tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd Device
     0.01         0.2k         0.0k         0.0k       5.3M       0.0k       0.0k mmcblk0
     0.00         0.1k         0.0k         0.0k       3.2M       0.0k       0.0k mmcblk0p1
     0.00         0.0k         0.0k         0.0k     348.0k       0.0k       0.0k mmcblk0boot0
     0.00         0.0k         0.0k         0.0k     348.0k       0.0k       0.0k mmcblk0boot1
     0.50        14.7k         0.9k         0.0k     499.5M      31.4M       0.0k mmcblk2
     0.50        14.6k         0.9k         0.0k     497.5M      31.4M       0.0k mmcblk2p1
     0.02         0.1k         0.0k         0.0k       2.3M       4.0k       0.0k zram0
     0.14         0.0k         1.5k         0.0k     476.0k      51.3M       0.0k zram1


 13:21:31 up  9:41,  2 users,  load average: 1.16, 1.24, 1.22
 
Just 30 MB of writes over 13 hours, and I do have a lot of daemons running. Launching any web browser and a few minutes of surfing still adds around +30 MB of data written to the SD, so this is something we still have to handle. But it's way better now than it was before.

How to sync your photos from Android over the internet to your computer

I used to use Google Photos. And Apple Photos. And Flickr. And God knows what other services. And I paid. Paid a monthly subscription for all of them. But then I decided: what the hell!

So I started to bring all my photos back offline. It took me a while, but thanks to Google, Apple and Flickr, they all offer an option to bulk-download all your digital assets.

So now I have an external 2 TB HDD, plus I bought an additional 1 TB SSD for my main PC, and I keep my whole photo archive there.

It's still unsorted, or should I say partially sorted, with lots of duplicates, because over the last years I was switching from iPhone to Android (and even back for a short time, while my Android phone was in the repair shop). And I still own both an iPhone as a backup phone and a lovely iPad Mini.

So what I wanted was a Google Photos replacement free download no ads no registration no monthly subscription how many SEO specialists does it take to replace a lightbulb.

I drafted the major requirements of how I wanted it to work:

Over-the-air backups from phone(s) to computer

I want my photos to be backed up from my phone (both Android and iPhone) over the air, securely, to some location, so in case my phone breaks I won't care about lost photos.

Two-way sync

I want a two-way sync, i.e. if I delete some photos from my phone, I want them to disappear from all my other devices: computer, tablets, whatever is configured. Or if I have some spare time to sort out photos on my computer and I delete something, I want that deleted on the phone as well.


This requirement is a bit controversial if it's not implemented properly. It means that I have to have a full copy of all my photos on all my devices: phone, iPad, computer. Otherwise, if something goes missing in one place, that gap has to be handled by the sync process:

- either delete the same files on the other devices
- or bring them back to this device

Having a full copy of all photos is not a problem for the SBC, as we can attach a bigger SD card or even a multi-TB HDD to it. But it is a problem for mobile devices, which are limited in their storage.

To overcome this negative side effect, we will be syncing just the last year's worth of photos. I think every mobile device can handle that amount. It means that if you're running out of space on your phone, you can safely delete photos and videos older than a year, and this deletion will not be propagated to the other devices.

Jumping ahead, this is where you can set this up in FolderSync:


Implementation

This was quite easy. On my Orange Pi 4 LTS (can't stop showing off ;-) I configured sshd. I also have tor configured there to expose a 'tor hidden service', so I can SSH from my phone to my SBC from any remote point of the world.

Then on my Android device I also have tor installed. The app is called "Orbot" and it's the official Tor client. It's a bit laggy but generally works fine.

Then I installed an app called "FolderSync". If you know of any open-source Android app like that, let me know. FolderSync has a free version with ads, and it works quite well for our purposes. I have configured FolderSync to connect to my tohostnamef8j28jfh9jfjh7sdhf2.onion:2222 and sync a couple of Android folders (with camera pictures and screenshots) to the appropriate folders on my SBC. It works just great, but slowly.

For when I am at home and my phone can use the home WiFi, I also configured a few other sync pairs with the same folders, thanks to FolderSync's flexibility, but these only trigger when my phone sees my WiFi SSID: so when I'm at home it reaches the server without Tor. This thing is optional; I did it just to check how flexible FolderSync is. You can skip it.

TODO

If the connection speed over the internet is not enough (because of using Tor), I'll probably have to back off and configure a tunnel using Cloudflare.


Set up and configure a few web services to act as an online gallery. Candidates are:

  • Photoprism
  • Piwigo
  • Lychee
  • Plex
  • Librephotos
  • Nextcloud

Some of them can even do the sync.

Quick cheat sheet for strace

Today I noticed that gwenview (a GUI program to view images) was sitting on the CPU even though I wasn't using it. This reminded me of an idea to practice my tracing skills.

For those who don't know what I'm talking about: in Linux you can use a very special kind of software, called debuggers or tracers, to connect to any existing process (or spawn a new one) and see what exactly it is doing! Not exactly like reading the source code of the program, but very close to that. Let's say some developer wrote a silly app which does nothing but open some file, write some bullshit into it and close it. In a loop. If you run that program under strace, you'll be able to see that it makes specific calls into the Linux kernel, like open / write / close. I'm oversimplifying of course, but you get the idea.

So let's install strace and practice a bit:

apt install strace

And run it against some process (assuming you have a process with pid=70014):

sudo strace -p 70014 -r -o ./strace.log

Then after a short while press CTRL+C.
Let's pause here for a moment and see what options we used:

-p tells strace to attach to an existing process
-r instructs it to prefix each call with the time elapsed since the previous one
-o <filename> redirects all the output to a file

Let's see what we've got in our log file. Interesting, huh? But it's raw data, a lot of raw calls to kernel functions. How can we get any additional use out of it?

We can ask strace to give us a summary report: which kernel functions were called, how many times, and how long all of that took in total for each function:

pi@orangepi4-lts:~$ sudo strace -c -p 70014
strace: Process 70014 attached
^Cstrace: Process 70014 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 42.47    0.178028          11     15074     14139 openat
 21.86    0.091656          25      3615           read
 11.62    0.048727           8      5925         2 newfstatat
  7.75    0.032476          10      3068           write
  5.08    0.021315          24       862        22 faccessat
  4.28    0.017959          19       945           ppoll
  2.47    0.010357          11       935           close
  1.37    0.005738           9       614        59 statx
  1.32    0.005552           6       924           fstat
  1.10    0.004626          27       168           getdents64
  0.61    0.002575           9       271           ioctl
  0.03    0.000109          10        10           lseek
  0.01    0.000047           9         5           getuid
  0.01    0.000043           8         5           geteuid
  0.00    0.000012          12         1           clone
  0.00    0.000000           0         2           futex
------ ----------- ----------- --------- --------- ----------------
100.00    0.419220          12     32424     14222 total
 

Here you can see that this program spent most of its time opening files with the openat function, then reading from them with read, and then getting file info with newfstatat.

Knowing these facts, we can return to inspecting the raw log, this time giving more scrutiny to what kind of files this pesky gwenview is reading all the time:

grep -E "openat|read|newfstatat" strace.log | less

It now becomes clear that gwenview spends a lot of CPU attempting to open some thumbnails. It usually does that more than once for every thumbnail file, gets an error from the kernel each time, yet keeps trying over and over again:

     0.000093 openat(AT_FDCWD, "/home/pi/.cache/thumbnails/large/cd736b031fc7750f7d7ee3ca307
cee8e.png.pgm", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)

To me it's clearly a bug in gwenview; it shouldn't act like this. It also consumed a lot of memory and went into swap. Sadly, because I liked it a lot. But we can report this bug, or check whether the most recent version of the application behaves the same.

What could have bothered it is that I opened it to display images in a folder which was being updated in the background: I have an app on my Android phone which syncs my images from the Android gallery to my Linux PC over ssh.

I also wanted you to look at this article, where a wonderful mate Raghu nailed the whole strace topic down.

Update from the future: I reported that bug and it was fixed the very same day by its magnificent maintainer. The other behavior I was seeing (consuming a lot of memory first, then swap, then slowly making the system die) had already been fixed before. But given that Debian doesn't update packages all that often, a vast majority of its users (like me) won't see that fix.

Window manager hopping - dwm, twm, bspwm

dwm. Dynamic Window Manager. Let's go hardcore!

dwm, together with a bunch of other lightweight tools, is developed by these guys - https://suckless.org/

To say the least, I'm deeply impressed by how tiny, comfortable and snappy this thing is.

At first, I was a little skeptical about tiling window managers, because we're all used to floating ones. But hell, how good and fast they are! It really shifts your user experience from being constantly distracted by different things to being entirely focused on doing what you actually wanted to do. Tiling window managers unleash the true power of the keyboard. You can still use your mouse, but really, it's an order of magnitude faster to do everything by keyboard. And did I tell you it's tiny as a grain of rice, which means memory and CPU efficient? :)

In order not to get lost at the beginning, make sure you install not only dwm but also a few additional packages (what we need most is dmenu, which is part of suckless-tools):

apt install dwm stterm suckless-tools

Here are some useful shortcuts you'll need at the beginning:

Alt + Shift + Enter - starts a new terminal window
Alt + P - opens dmenu (the launcher). Whatever you run will be put at the top of the "master" (the pile of windows on the left-hand side). The existing window from "master" will be moved to the "stack" (the right-hand side pile of windows)
Alt + i / d - increases or decreases the number of windows in "master" (the left-hand half of the screen)
Alt + h / l - resizes the split between "master" and "stack"
Alt + j / k - changes the current focus to the next / previous window
Alt + 1 .. 9 - changes the current workspace to #1 .. #9 (they call it a tag)
Shift + Alt + 1 .. 9 - moves the current window to workspace #1 .. #9
 
I strongly advise you to visit the official tutorial to understand the main idea behind dwm and why it does what it does - https://dwm.suckless.org/tutorial/
 
Things I did after installing dwm:
 
1) had to change the background colors used by dolphin (the GUI file manager), because by default it was using the Plasma dark color scheme with dark, unreadable fonts.

2) added some additional lines to ~/.xsessionrc, as the KDE version in Debian doesn't clean up all its processes behind itself:

# if we're switching to dwm we don't need any KDE processes any longer
if [[ "$DESKTOP_SESSION" == "dwm" ]] ; then
  xrandr --output HDMI-1 --mode 1920x1080
  sudo killall kinit polkitd kdeconnectd kactivitymanagerd kded5 packagekitd kglobalaccel5 kdeinit5 klauncher
fi

3) the default st terminal is just awesome when it comes to memory footprint. But it doesn't look very pretty, and it's practically inviting you to change it. I forked it and finally implemented the putty-like copy&paste behavior I had been missing so much since I moved to Linux - https://github.com/kha84/st/
 
 


 
(screenshots are made with scrot)
 

twm. Old as hell. But still useful

I used to use twm a lot in my earlier days with Linux. Really. But now it looks so outdated and is so hard to use. You can think of it as an archaeological museum exhibit: you can admire that it's still there and working, even play a bit with it, but using it on a daily basis - nah, just skip that part altogether.

 


  

awesome

tbc

bspwm

tbc

Multi-cursors and multi-selection in Kate / KDevelop

Multi-cursors and multi-selection are among those features you start loving once you first try them, and you never want to go back to anything which doesn't support them. I think the first guys who made these features really useful were the developers of the SublimeText editor. Just look what it can do:


Just in case: Sublime is available for Linux, and I've also heard the same features are supported by MS Visual Studio Code, which is also available for Linux. But that's of course not open-source.

Speaking of KDevelop and the Kate editor: they're both built on top of the KTextEditor KDE component. There were a number of attempts in the past to add this kind of advanced multi-cursor and multi-selection support to what it already had (multi-cursor via block selection - https://kate-editor.org/2013/09/09/multi-line-text-editing-in-kate/):

Unfortunately, none of them were eventually merged to the mainline.
So if you want to use one of these, you'll probably need to build it yourself, or take an AppImage from Sven's page if you're running Linux on the x86_64 platform - http://files.svenbrauch.de/kate-linux/multicursor/

But things have changed for the better just recently. From what I see in the recent news, the KDE dev team announced multi-cursor support in KTextEditor starting from KDE Frameworks version 5.92 - https://kate-editor.org/post/2022/2022-03-10-ktexteditor-multicursor/

I guess it will take another while for the major distros to update their KDE packages to that version, so everyone can start enjoying multi-cursor and multi-selection in Kate. If you want to use it earlier, you need to either build it yourself, find a flatpak/appimage/snap package, or switch to KDE Neon :)

The world of Linux is a tough one, yeah.

Building Kate

So I decided to give it a try and build it. It was quite an interesting experience, I have to tell you, but what impressed me the most is that you can reach out to the KDE/Kate guys on their IRC channel and they will help you. Hats off, gents. You've made me love Linux and the KDE community more and more every day.

KDE has quite a good document on how to start - https://community.kde.org/Get_Involved/development - so begin by reading it. It will tell you what you need to install and configure.

An older and wiser me from the future suggests you install these additional packages on top of what the KDE page tells you to install, to minimize your troubleshooting efforts:

sudo apt install qtdeclarative5-dev qtbase5-private-dev \
  polkit-gobject-1-dev libpolkit-gobject-1-dev libpolkit-agent-1-dev \
  libqt5x11extras5-dev libqt5waylandclient5-dev extra-cmake-modules \
  qtwayland5-dev-tools libwayland-dev libkf5windowsystem-dev libxcb*-dev
sudo apt build-dep kconfig qml-module-org-kde-kwindowsystem libqt5svg5-dev

(Probably one day, when https://invent.kde.org/sdk/kdesrc-build/-/issues/9 is completely done, everything will be automated by the kdesrc-build script, but it's bearable even in its current state)

Then you follow https://kate-editor.org/build-it/#linux and stay patient, as you'll have to deal with a lot of failing stuff when you finally run kdesrc-build --include-dependencies kate

I'll give you a short guide based on one example failure (out of the many I had). What happened here is that the kiconthemes module failed to build during the kdesrc-build --include-dependencies kate phase. When such a failure happens, kdesrc-build does us a favor and highlights it in its output in red, plus gives you the location of a detailed log file. You need to examine that log file carefully. I highlighted the important bit in red:

# kdesrc-build running: 'cmake' '-B' '.' '-S' '/home/pi/projects/kde/src/frameworks/kiconthemes' '-G' 'Kate - Ninja' '-DCMAKE_EXPORT_COMPILE_COMMANDS:BOOL=ON' '-DCMAKE_CXX_FLAGS:STRING=-pipe' '-DCMAKE_INSTALL_PREFIX=/home/pi/projects/kde/usr'
# from directory: /home/pi/projects/kde/build/frameworks/kiconthemes
-- The C compiler identification is GNU 10.2.1
-- The CXX compiler identification is GNU 10.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- 

Installing in /home/pi/projects/kde/usr. Run /home/pi/projects/kde/build/frameworks/kiconthemes/prefix.sh to set the environment for KIconThemes.
-- Setting build type to 'Debug' as none was specified.
-- Looking for __GLIBC__
-- Looking for __GLIBC__ - found
-- Performing Test _OFFT_IS_64BIT
-- Performing Test _OFFT_IS_64BIT - Success
-- Performing Test HAVE_DATE_TIME
-- Performing Test HAVE_DATE_TIME - Success
-- Performing Test BSYMBOLICFUNCTIONS_AVAILABLE
-- Performing Test BSYMBOLICFUNCTIONS_AVAILABLE - Success
CMake Error at CMakeLists.txt:46 (find_package):
  Could not find a package configuration file provided by "Qt5Svg" (requested
  version 5.15.2) with any of the following names:

    Qt5SvgConfig.cmake
    qt5svg-config.cmake

  Add the installation prefix of "Qt5Svg" to CMAKE_PREFIX_PATH or set
  "Qt5Svg_DIR" to a directory containing one of the above files.  If "Qt5Svg"
  provides a separate development package or SDK, be sure it has been
  installed.


-- Configuring incomplete, errors occurred!
See also "/home/pi/projects/kde/build/frameworks/kiconthemes/CMakeFiles/CMakeOutput.log".

Speaking plain English: in order to compile this KDE module, kdesrc-build needed specific files to be present in your filesystem, the ones whose names I highlighted in red. But it didn't find them.

Usually such .cmake files (as well as .h files) are provided by special -dev packages in Debian/Ubuntu. Use the APT QUICK CHEAT to see how to use your package manager to find the packages providing these files, then install them (hint: apt-file search <filename>).
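For the Qt5Svg example above, it goes like this (a sketch; the exact path in the output will differ per architecture):

sudo apt install apt-file
sudo apt-file update
apt-file search Qt5SvgConfig.cmake
# expect a hit along the lines of:
#   libqt5svg5-dev: /usr/lib/<arch>/cmake/Qt5Svg/Qt5SvgConfig.cmake
sudo apt install libqt5svg5-dev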

Once that is installed, try to build the failing module again to ensure it's fine now:

kdesrc-build kiconthemes

Rinse and repeat for the next error you find. At the end of the day you'll make it through all the failures and you'll have a pretty green build.

For my Orange Pi 4 LTS aarch64 machine with the 6 cores of its RK3399 it took like 2-3 hours of compiling, plus another couple of hours of troubleshooting. But it was worth it.



How to access your home server, if it's sitting behind NAT / Firewall / two NATs

Recently I stumbled upon this article: https://raspberrydiy.com/access-raspberry-pi-over-internet/

I don't know if the author will ever publish my comment, but here's what I wrote to him:


==== cut ====


Oh mate, how can you trust some random companies with such a sensitive topic as security more than yourself? It's such a naive approach.

I'll tell you this: configure an SSH daemon on your Raspberry Pi (here and after, the same applies to anything running Linux). Make sure to disable password-based authentication in the sshd config, so your weak passwords can't be brute-forced, leaving only ssh-key-based authentication. Restart sshd. Check that you're only able to connect to your Pi using the ssh key and not a plain-text password.

Now you can safely configure port forwarding on your router to expose just-and-only port 22 of your Pi to the external world. Nothing bad will ever happen to you. No Chinese or Russian hacker will be able to get through that door. It's such a basic idea.

You should trust SSH more than some random list of companies on the internet who only promise you that you'll be safe. Even if those companies have big names, like Google. Did you know you can use Chrome for remote access to your Pi? "You can" doesn't mean "you should".

All these companies do pretty much the same thing: they ask you to install their own software on your machine to create a "secured" network tunnel from your machine to their servers. Then you can grab your phone/tablet/laptop and access their servers from anywhere; they will authenticate you and allow you to use that tunnel of yours. But you see, this whole idea is just bleeding with issues:

- the software they suggest you install on your Pi (usually they call it an agent) could have a backdoor, malware, a virus, bugs, or could just silently mine some cryptocoins on your Pi. You'll never know until it's too late. SSHD, on the contrary, doesn't do that.

- you need to trust that the "protected" tunnel that software creates is really protected, and not just because they say so. At the very least you'd want to look at the network traffic with tools like tcpdump/wireshark while copying some text file remotely. The thing is, if some 3rd-party proprietary software is used for tunneling, its developers can easily miss some bug there, or just not use a proper level of encryption (remember the HTTP days?), so anyone in between your Pi and their servers would be able to see what you're doing on your machine. Again, SSHD is far more secure than any random implementation of a "protected" tunnel from these companies.

- you need to trust that these companies won't let anyone else, apart from you, log in to their web servers and use your "secured tunnel" to get onto your Pi. In contrast, if you're using SSHD + key-based authentication, no one apart from you will ever be able to authenticate.

SSHD is everywhere; it's used on every single server. I do believe all the companies you listed manage their own servers by logging in to them using SSH, and not their own shitty software.

Trust me, you don't need all those companies and their software. They only exist because most people are foolish / scared / lazy / believing in fairy tales, or just uneducated (yet).

Learn how to use SSH properly. You can tunnel everything through it. You don't need all that extra software from random companies, just like you don't allow yourself to swallow any random medicine on the market.

==== end of cut ====
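Since the whole comment hinges on configuring sshd properly, here are the config bits that matter, a minimal sketch (on Debian the service is called ssh; keep your current session open while testing a fresh key-based login from a second terminal, so you can't lock yourself out):

# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password

systemctl restart ssh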


Here I just wanted to add something on top of that. If your ISP (internet service provider) has assigned a private IP address to your router, of course port forwarding doesn't make any sense, because you'll only be exposing your ports to the inner network of the ISP, where your router sits, probably together with the similar routers of other people.

There are lots of ads on the internet for services which let you build pretty much the same thing that guy listed, but none of them can be trusted, for the same reasons I listed above. I sincerely suggest you avoid all of them:

https://www.pitunnel.com/
https://www.socketxp.com
https://www.dwservice.net/
https://remotedesktop.google.com/?pli=1
https://www.realvnc.com/en/connect/
https://www.remote.it/
and many-many others.

What you should be using instead are services that are either open-source, or based on existing well-known technologies, or that disclose clearly how they traverse the NAT.


Tor and its hidden services feature

Here is my post on how to get it configured in just a few simple steps.
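As a teaser, the server side boils down to a couple of lines in /etc/tor/torrc (port 2222 matches what I use for FolderSync above; the directory name is my own choice):

HiddenServiceDir /var/lib/tor/ssh_hidden_service/
HiddenServicePort 2222 127.0.0.1:22

After restarting tor, your .onion hostname is waiting for you in /var/lib/tor/ssh_hidden_service/hostname.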

I2P - the Invisible Internet Project

https://geti2p.net/en/

FreeNet

https://freenetproject.org/

Cloudflare tunnel

First of all, if you don't have a domain name, you can get one, even for free. See https://www.getfreedomain.name/ for various options (it's just an information site; they don't provide any services).

Once you have a domain name, you can configure it to be served by Cloudflare's name servers. Then you configure a tunnel, which requires you to install and run special software in your LAN to keep that tunnel up and running. For personal and hobby projects they do offer a free plan.
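The rough shape of it with their cloudflared CLI, as a hedged sketch (the tunnel name and hostname here are made up):

cloudflared tunnel login
cloudflared tunnel create home
cloudflared tunnel route dns home pi.example.com
cloudflared tunnel run --url http://localhost:8080 home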

There are lots of tutorials on the internet on how to do this; here is one - https://youtu.be/uTwjJaoknBA

Some more advanced stuff, like protecting your services with additional Cloudflare authentication - https://youtu.be/eojWaJQvqiw

Wireguard 

https://en.wikipedia.org/wiki/WireGuard

WireGuard is akin to OpenVPN: it's software which, simply speaking, creates secured tunnels between endpoints.

Typical use case: if you have a machine within your LAN which runs a service you want to access from outside your LAN, you install and configure WireGuard somewhere within your LAN and on the remote machine you want access from. Then, assuming your router gets a real IP from your ISP, you configure your router to do port forwarding to where you have WireGuard installed, and now you can use your remote machine to establish a safe connection to your home LAN.

Another use case is when you don't get a real IP on your router from the ISP. Then you rent a VPS (which by its nature has public access from the internet) and configure WireGuard there and somewhere within your LAN. These two endpoints will be connected by a secured tunnel. Then you have two options: either install WireGuard on the device from which you want to remotely access your service@home, so it gets "included" into this virtual LAN, or expose the service on the VPS.
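To give you a feel for it, here's a minimal /etc/wireguard/wg0.conf for the VPS end, with made-up keys and addresses (the home end mirrors it, pointing Endpoint at the VPS's public IP):

[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the SBC at home
PublicKey = <home-box-public-key>
AllowedIPs = 10.0.0.2/32

Bring it up with wg-quick up wg0.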

Tailscale (or Headscale)

It is akin to a VPN: you install the special software on all your devices, and when it's up, they appear on the same network, even if those devices are behind firewalls. If you want to self-host something at home just for yourself, so you can access your own service from anywhere in the world, it's fine.

But it doesn't work if you want to host a service which you want to make available to yourself or other people without installing an additional piece of software.

The good thing about Tailscale is that they open-sourced the client. They didn't go open source with the management / configuration server though, so the open-source server reimplementation (Headscale) is quite stripped down, but still very useful if you don't mind hosting it on some VPS which has a public IP address.

https://tailscale.com/opensource/

If you do trust the server provided by Tailscale themselves, you can opt for the free plan, which allows you to connect up to 20 different devices together in the same virtual network.

Zerotier

Similar to Tailscale. Even the plans are similar.
https://en.wikipedia.org/wiki/ZeroTier

Nebula

httptunnel

xxx

Tinc

http://tinc-vpn.org/

route48.org

xxx

Typical NAT traversal techniques

https://blog.apnic.net/2022/05/03/how-nat-traversal-works-concerning-cgnats/

UDP hole punching
STUN / TURN / ICE


See also:

Mesa again. Trying to make OpenGL work in X. Solving issues with 2D acceleration in X.Org and 3D acceleration

Video graphics in Linux is a very tedious and complicated thing. Take a huge cup of coffee if you really want to get into it properly, because there are a lot of moving parts here. You might even want to revisit this page multiple times, as it's kinda hard to take everything in at one go. It took me a number of hours to make head or tail of it, and yet I'm not 100% sure I'm giving you all correct information here. But at least you should get the idea.

There are multiple components in this topic we need to know about:

kernel driver ("DRM")

this is the piece of code in the kernel which actually makes the low-level calls to the GPU device to make it display something. If you don't do anything on purpose about it, the Linux kernel will probably fall back to some compatible driver which allows it to talk at least some basic common language with your GPU so it can display things. But don't expect that fallback driver to give you any performance.

There are two types of kernel drivers:
- opensource - living in the mainline kernel tree (their user-space counterparts usually come from the Mesa project)
- binary / proprietary / aka blobs - made by hardware manufacturers

Opensource drivers are good not only because they're opensource, but also because they'll be supported in the next versions of the Linux kernel. Which means that if you decide to upgrade your distro, or jump to some other distro with an updated kernel, you'll be able to do that, as your video drivers will still be there.

On the contrary, proprietary drivers are created by manufacturers for a very specific kernel version, and usually they are not updated.

Here you should ask a question like: "So what? Can I just use this Linux kernel for the rest of the device's life?" The answer is yes, sure, as long as the core libs in Linux (like glibc) support it. Chances are that after a few years they won't any longer, which means you won't be able to install updates for any software which depends on those core libs.

user-space driver (usually from the Mesa project)

What they do is provide the OpenGL or Vulkan API to applications that need it on the one hand, and make calls to the appropriate kernel driver on the other. This is what's usually called a Mesa rendering backend. In case you don't have GPU kernel drivers to support Mesa, it can fall back to one of the software rendering "backends"; the most famous one is llvmpipe, but there are others. What software rendering means here is that all the math needed to display 3D graphics will be done on your CPU rather than on the GPU. Usually that's a bad thing, as GPUs are much more sophisticated at it.

Xorg configuration files

xxx


WindowManager configuration (e.g. kwin can use XRender or OpenGL)  

xxx 


Here I'm not even speaking about video acceleration yet (i.e. how to make your YouTube videos not consume 100% CPU).

Since I installed the KDE desktop environment on top of the retropie distro from Rearm.It (effectively it's just Debian installed with Armbian scripts and a custom kernel to support my RK3399), I've been noticing a few issues.

 

Issue 1: playing video - both YouTube and via VLC/Dragon Player - was maxing out the CPU

Basically the machine was screaming at me that it was not using hardware video acceleration. I'll deal with that a bit later.

Issue 2: in KDE my CPU was quite often maxed out by kwin_x11

I figured out that this issue goes away if you switch the kwin renderer from OpenGL 2/3.1 to XRender:
KDE System Settings -> Display and Monitor -> Compositor -> Rendering backend

I'm guessing that when kwin attempts to use OpenGL, which was so far served by the llvmpipe backend (the software renderer in Mesa), the CPU was doing twice as much work as it should:

kwin_x11 -> OpenGL -> Mesa's software renderer driver (llvmpipe)

And switching it directly to XRender, a purely software renderer, made things so much easier for the CPU:

kwin_x11 -> XRender

It is a workaround of course; the final goal is to make OpenGL work.

Going deeper

In X, glxinfo shows I'm using llvmpipe, i.e. the software renderer.
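If you want to check the same on your box, glxinfo from mesa-utils has a brief mode:

glxinfo -B | grep -E "renderer|version"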

I have Mesa version 20.3.5 installed from Debian bullseye stable, and the kernel from Rearm.It. The kernel does have some Rockchip modules though:

pi@orangepi4-lts:~/projects$ lsmod | grep rock
snd_soc_rockchip_i2s    20480  4
rockchip_vdec          81920  0
v4l2_vp9               24576  2 rockchip_vdec,hantro_vpu
snd_soc_core          266240  5 snd_soc_hdmi_codec,snd_soc_simple_card_utils,snd_soc_rockchip_i2s,snd_soc_simple_card,snd_soc_es8316
rockchip_iep           24576  0
v4l2_h264              16384  2 rockchip_vdec,hantro_vpu
videobuf2_dma_contig    24576  3 rockchip_vdec,hantro_vpu,rockchip_iep
v4l2_mem2mem           40960  3 rockchip_vdec,hantro_vpu,rockchip_iep
videobuf2_v4l2         32768  4 rockchip_vdec,hantro_vpu,rockchip_iep,v4l2_mem2mem
videobuf2_common       61440  7 rockchip_vdec,videobuf2_dma_contig,videobuf2_v4l2,hantro_vpu,rockchip_iep,v4l2_mem2mem,videobuf2_memops
videodev              249856  6 rockchip_vdec,videobuf2_v4l2,hantro_vpu,videobuf2_common,rockchip_iep,v4l2_mem2mem
mc                     61440  6 rockchip_vdec,videodev,videobuf2_v4l2,hantro_vpu,videobuf2_common,v4l2_mem2mem
phy_rockchip_dphy_rx0    20480  0


Just for the sake of sanity I also installed mesa-utils-extra, and none of the demos work (no surprise).

However, if I stop X and run kmscube, it detects the proper renderer "Mali-T860 (Panfrost)" and says that I have Mesa version 22.1.4.



important notes

to test OpenGL ES - eglinfo / eglgears
to test OpenGL - glxinfo / glxgears

Useful links


