
Systemd overview

This presentation was created by Jens Mølgaard and is reprinted here with permission.

Introduction to systemd – a bit more than PID 1

Table of Contents

1 What is the init process?

1.1 Boot Sequence

The boot process is everything that happens from the time you press the power button until you are presented with a login screen for your console or desktop environment of choice. It happens in the following sequence:

  1. BIOS/UEFI locates and executes the boot loader (or UEFI can load a kernel directly).
  2. Boot loader (e.g. GRUB2, LILO, rEFInd) loads the desired kernel into memory.
  3. Kernel starts the init process (pid=1).
  4. init process manages system initialisation: systemd (or SysVinit, Upstart, OpenRC, GNU Shepherd etc.).
  5. Handover to login shell or Display Manager: Now it’s up to you!

1.2 The init Process

This is the first user-level process run on the system, and it continues to run until the system is shut down. By convention it is assigned process ID 1 (PID=1), and it resides at, or is symlinked to, /sbin/init.

This process:

  • Handles the later stages of the boot process.
  • Configures the environment.
  • Starts all processes needed for logging in to the system.
  • Works with the kernel cleaning up after processes that have terminated.

Prior to systemd, the init process would inherit orphaned1 processes to try to ensure they were shut down properly along with the rest of the system. This is now handled by a separate process, kthreadd, which is started by the kernel just after init.

1.3 SysVinit – How things used to work

Previously nearly all distributions had an init process based on, and roughly compatible with, the one in UNIX System V, often referred to as SysVinit. Under this scheme, starting up a system was conceived as a serial process, divided into a series of sequential stages. Each stage needed to complete before the next stage could commence.

These stages are called runlevels, and each of them defines a specific system state, numbered from 0 to 6.

Some runlevels are reserved for specific use and clearly defined:

0

system halt state

1

single-user mode

6

system reboot

The other runlevels can vary slightly between distributions, as they are used to define the services that are running on a ’standard’ system.

Generic list of system runlevels:

S,s

same as 1

0

system halt state, shut down and turn power off

1

single-user mode

2

multiple user, no NFS, only text login

3

multiple user, with NFS and network, only text login

4

not used (custom?)

5

multiple user, with NFS and network, graphical login with X or Wayland

6

system reboot

The current runlevel can be displayed with the runlevel command:

$ runlevel
N 5

The first character is the previous runlevel; N means unknown (there was no previous runlevel).

telinit can be used to change the runlevel of the system. For example, to go from runlevel 3 to 5, type:

sudo /sbin/telinit 5

These commands can usually still be used, as systemd maintains some backwards compatibility.

In practical terms, the startup process looked like this:

  • The init process reads /etc/inittab. This was originally used to specify which scripts to run for each runlevel.
  • Normally the rc.sysinit script will be run, and then the rc script is run with the desired runlevel as an argument.
  • The rc script runs every script in each of the directories under rc.d/rc[0-6].d up until the specified runlevel.
  • Each script is named with a numeric prefix and run in order. There are scripts for starting and killing each service.
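The ordering mechanism above can be sketched in shell. The numeric prefix alone determines start order, because the rc script simply runs the links in lexical order (the script names here are hypothetical, not a real rc3.d listing):

```shell
# SysVinit keeps symlinks named S<nn><service> (start) or
# K<nn><service> (kill) in each rc.d/rc[0-6].d directory.
# The rc script runs them in lexical order, so the two-digit
# prefix controls sequencing. Simulated with hypothetical names:
printf '%s\n' S55sshd S90crond S10network S12syslog | sort
```

The network script (S10) sorts first and crond (S90) last, mirroring how a real rc directory serialises startup.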

1.4 What happened?

SysVinit was originally designed for large mainframes with many users, running continuously and only very rarely rebooted. Fast and efficient startup was not a major priority. Once laptops became widespread, and Linux-based OSes started seeing use in embedded systems, portable devices, and the like, there was an impetus to make booting faster and more flexible. At the same time multi-core processors became the norm; parallel processing made it possible to run several startup tasks simultaneously, but SysVinit had no way to take advantage of this.

Alternatives started appearing to address this:

  1. Upstart – developed by Canonical, first included in Ubuntu in 2006, default there in 2009; also used in Fedora 9, RHEL 6 and derivatives (and embedded / mobile systems).
  2. systemd – first adopted by Fedora in 2011, then crawling in everywhere; standard in RHEL 7 and Ubuntu 16.04.

Due to the trouble of migrating to newer systems, compatibility layers for SysVinit utilities are bound to stay around for a long time yet. For instance, RHEL 6 used Upstart hidden behind a compatibility layer of old SysVinit utilities – so you might still run into them.

2 systemd Model

systemd replaces the sequentially run startup scripts with units. Instead of being run in a specific order, units can depend upon or conflict with other units, much like software dependencies are handled by package managers, and are only started once the units they depend upon have been started. Rather than a shell script, each unit is configured in an ini-like plain text file.
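As a sketch, such dependencies are declared with directives in the [Unit] section of a unit file (the unit names here are hypothetical):

```ini
# foo.service -- hypothetical unit illustrating dependency directives
[Unit]
Description=Example service
# Hard dependency: this unit fails if bar.service cannot be started.
Requires=bar.service
# Ordering: only start once bar.service has been started.
After=bar.service
# Soft dependency: also try to start baz.service, but don't fail without it.
Wants=baz.service
# Conflict: old-foo.service is stopped when this unit starts.
Conflicts=old-foo.service
```

Note that Requires= only expresses dependency, not ordering; without the matching After=, both units would be started in parallel.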

In the case of system services, their unit files are named something.service. The unit file describes how to start and stop the service, what type it is, how long to wait for it to start, and what to do if the process dies unexpectedly.

Instead of runlevels, systemd has targets. Targets are used to group units together and usually define specific system states – once all the requisite units have been started (or stopped) the target is fulfilled. Some of these are roughly equivalent to runlevels, and will on some distributions be symlinked to targets named runlevel-3.target, runlevel-5.target:

rescue.target

like runlevel 1, single-user mode

multi-user.target

like runlevel 3, multi-user mode with text console

graphical.target

like runlevel 5, graphical session with X (or Wayland)

poweroff.target

halts system and turns off the machine – runlevel 0

reboot.target

reboots – runlevel 6

But there is also:

default.target

symlink to either multi-user.target or graphical.target

suspend.target

suspends the machine

hibernate.target

puts machine into hibernation

sound.target

started when a soundcard is detected

bluetooth.target

started when a bluetooth device is detected

Aside from services and targets, systemd has a number of other types of units. Most of these will probably rarely be of interest to users, but we will look at a few further down.

Figure 1: Example systemd startup on Fedora (WikiMedia)

3 systemd Pros and Cons

3.1 Pros

  • Faster boot process
  • More flexible and easy to adapt
  • Standardisation of e.g. config files
  • Simple interface for end-users to interact with
  • Fancy additional features

3.2 Cons (for controversy!)

  • Personality conflicts / leadership issues
  • Feature creep & software bloat
  • logs stored in binary format
  • Too many interlocking dependencies
  • Runs contrary to unix philosophy
  • Hard to avoid, replace, or use only in part

4 Basics of systemctl

systemctl is the main utility for managing system services under systemd.

Basic syntax is:

systemctl [options] command [name]

Some examples:

  • Show status of everything controlled by systemd:

systemctl

  • Show all available services:

systemctl list-units -t service --all

  • Show only active services:

systemctl list-units -t service

  • Start one or more units, where a unit can be a service or a socket:

sudo systemctl start foo
sudo systemctl start foo.service
sudo systemctl start /path/to/foo.service

  • Stop a unit/service:

sudo systemctl stop foo.service

  • Enable/disable a service to start it on every boot:

sudo systemctl enable sshd.service
sudo systemctl disable sshd.service

These last commands are equivalent to chkconfig --add/--del and do not actually start or stop the service in question.

Similar to changing runlevels, systemctl can be used to get the system to a specific target:

sudo systemctl isolate multi-user.target

This would be the equivalent of sudo telinit 3, and will bring the system to multi-user text mode.

Note: Not all systemctl commands require root privileges – generally only the ones that change the system state or alter files.

Fedora has a nice SysVinit to Systemd cheatsheet.

5 systemd Configuration Files

systemd can use both new standardised config files and distro-dependent legacy configuration files as a fall-back.

For example, the new config file /etc/hostname replaces /etc/sysconfig/network in Red Hat, /etc/HOSTNAME in SUSE and /etc/hostname in Debian.

Other examples:

/etc/vconsole.conf

default (console) keyboard mapping and console font

/etc/sysctl.d/*.conf

drop-in directory for kernel sysctl parameters

/etc/os-release

distribution ID file

6 Features

6.1 systemd-analyze

man page

Systemd comes with its own set of tools to analyze and optimize bootup performance and debug issues with systemd and its units.

Of interest:

6.1.1 systemd-analyze time

Time it took for system to boot up on the most recent boot.

systemd-analyze time
Startup finished in 15.965s (kernel) + 9.656s (userspace) = 25.622s
graphical.target reached after 9.415s in userspace

6.1.2 systemd-analyze blame

Lists services in order of how long they took to start up during the last boot, starting with the unit that took the longest.

This can be useful for finding services that take a long time to start and may be holding back the boot process. However, it can be misleading, as a process may simply have been waiting for another to finish.

systemd-analyze blame
3.843s home.mount
3.843s \x40.mount
3.228s [email protected]
3.193s [email protected]
2.999s [email protected]
2.682s [email protected]
2.302s udisks2.service
866ms dev-mapper-locker.device
671ms ldconfig.service
399ms systemd-hwdb-update.service
363ms upower.service
319ms systemd-timesyncd.service

6.1.3 systemd-analyze plot

Outputs a nice chronological plot in SVG format of the previous boot process.

Like blame, if the system is slow to start up, this is useful for finding services that act as choke points and hold back others from starting.

systemd-analyze plot > startup.svg

Example startup plot

6.1.4 systemd-analyze dot

Produces a (potentially huge) graph of dependencies between units.

systemd-analyze dot | dot -Tsvg > systemd.svg

Example system graph

6.1.5 systemd-analyze verify

Verifies a systemd unit file, checking for issues with the configuration such as missing files, typos, and missing documentation. Useful if you set up your own units or alter existing ones.

systemd-analyze verify user.slice

6.2 Utilities & Additional Components

journalctl, localectl, loginctl etc.

journald

handles logging for kernel, system and user components

logind

handles user logins and sessions, replacement for ConsoleKit

networkd

daemon / backend to handle configuration of network devices

tmpfiles

creation and cleanup of temporary files and directories

timedated

handles any time-related settings

localed

service to change locale and key mapping

udevd & libudev

device manager, handles /dev, firmware loading etc.

systemd-boot

boot manager, formerly Gummiboot, now rolled into systemd

6.2.1 journald & journalctl

journald handles logging for a variety of sources: The kernel, standard output and error of system services, system log messages via libc syslog call, log messages through its Journal API…

All of these logs are stored in binary files, and the job of the utility journalctl is to conveniently access and output this in a desired format. This makes it easy to combine logs from different sources, get output for specific time frames, or display the journal in a specific format, locale etc.

Examples:

journalctl

Simply displays every journal entry through a pager (less, more).

journalctl -b

Everything since the latest boot. Takes numerical arguments, e.g. journalctl -b -2 will give you the log from two boots ago.

journalctl -k

Kernel messages (similar to dmesg output).

journalctl --since yesterday

Speaks for itself…

journalctl --since "2018-12-03 15:00" --until "2018-12-03 16:00"

Everything in this time interval.

journalctl -u httpd.service -u php-fpm.service

Everything from two units (services) combined.

7 systemd Units

7.1 Sockets

man page

Aside from starting and stopping system services, systemd has the facility to start and stop services on demand, only if and when they are needed.

This is done through 'socket' units instead of 'service' units, named example.socket instead of example.service. A socket unit can listen on an IPC2 socket, a network socket/port, or a filesystem FIFO3. The service in question is then only started if there is activity on the socket.

Sockets can be handled exactly like you would handle a service: systemctl enable|disable|start|stop example.socket.

For example, if you enable sshd.socket, sshd will only be started when a client tries to connect to the machine. In this case, aside from configuring sshd itself, you will need to override the standard configuration of the socket if you want it to listen on a different port than the standard (port 22) or restrict incoming connections. This can be done simply with:

systemctl edit sshd.socket
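The drop-in created by systemctl edit might look like the following sketch, moving the listener to a hypothetical port 2222. For list-valued settings such as ListenStream=, an empty assignment first clears the inherited default:

```ini
# /etc/systemd/system/sshd.socket.d/override.conf (sketch)
[Socket]
# Reset the inherited listener list (port 22), then add our own.
ListenStream=
ListenStream=2222
```

After editing, reload and restart the socket with systemctl daemon-reload and systemctl restart sshd.socket.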

7.2 Automounting

man page

Systemd can also handle automounting and unmounting of filesystems, useful for remote and external storage.

This is done through ’automount’ units named path-to-mount.automount. These can be configured directly in /etc/fstab, by adding the option x-systemd.automount to an entry. For instance, if you have the example fstab entry below, and try to open a file or folder under /home/user/folder, the home-user-folder.automount unit will spawn a home-user-folder.mount unit, which in turn actually mounts the filesystem on device /dev/sdc2.

Example /etc/fstab entry:

/dev/sdc2 /home/user/folder ext4 noauto,x-systemd.automount,x-systemd.device-timeout=10,x-systemd.idle-timeout=120 0 2
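Behind the scenes, systemd's fstab generator turns that entry into a unit pair roughly like the following sketch (unit file names follow the mount path, with / replaced by -):

```ini
# home-user-folder.automount (sketch of what the generator produces)
[Unit]
Description=Automount for /home/user/folder

[Automount]
Where=/home/user/folder
# Unmount again after 120s of inactivity (from x-systemd.idle-timeout).
TimeoutIdleSec=120

# home-user-folder.mount
[Unit]
Description=Mount for /home/user/folder

[Mount]
What=/dev/sdc2
Where=/home/user/folder
Type=ext4
```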

7.3 Timers

man page

There is also a unit type for timers, for running processes at specific times, much like one would with a cron daemon. At the moment this is primarily used by systemd itself, but that could change in the future…

systemctl list-units -t timer --all
UNIT LOAD ACTIVE SUB DESCRIPTION
logrotate.timer loaded active waiting Daily rotation of log files
man-db.timer loaded active waiting Daily man-db cache update
shadow.timer loaded active waiting Daily verification of password and group files
systemd-tmpfiles-clean.timer loaded active waiting Daily Cleanup of Temporary Directories

If you want to try using timers as a replacement for cron, here is a guide.
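As a sketch of what such a cron replacement looks like, a timer unit pairs with a service unit of the same name (the backup job here is hypothetical):

```ini
# /etc/systemd/system/backup.timer (hypothetical example)
[Unit]
Description=Daily backup

[Timer]
# Run once a day; Persistent= catches up on runs missed while powered off.
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/backup.service -- the unit the timer activates
[Unit]
Description=Run the backup script

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
```

You then enable the timer, not the service: sudo systemctl enable --now backup.timer.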

8 Things to Try at Home

8.1 Check the overall state of system services

systemctl status

Overview of the running system. What is the state? Any failed units?

systemctl --failed

List all failed units.

journalctl -u failed-unit.service

Check what the problem is.

8.2 Optimising boot time

Try optimising your boot time with systemd-analyze.

8.3 Try systemd-manager

systemd-manager is a handy frontend for managing system units, viewing their logs, their unit configuration files etc.

8.4 Create your own systemd unit

Red Hat has a guide on creating and modifying systemd unit files. So does the ArchLinux wiki.

The systemd unit files are normally stored in /etc/systemd/system, and it is fairly simple to create, modify and use these. If you are packaging systemd units with software for a distribution, the standard is for them to go into /usr/lib/systemd/system/.

Example custom systemd unit:

[Unit]
Description=Emacs: the extensible, self-documenting text editor

[Service]
Type=forking
ExecStart=/usr/bin/emacs --daemon
ExecStop=/usr/bin/emacsclient --eval "(kill-emacs)"
Environment=SSH_AUTH_SOCK=%t/keyring/ssh
Restart=always

[Install]
WantedBy=default.target

Footnotes:

1 

A process is orphaned if its parent (the process that started it) dies or is killed before the child finishes.

2 

Inter-Process Communication, e.g. shared memory segments, semaphores, message queues, and Unix domain sockets.

3 

First In, First Out: a queue of items or tasks that is appended to at one end and processed from the other.

v. 20181206
