Revision as of 08:28, 14 January 2014

i18n: en
This page is under construction. When finished, this note will be removed.

systemd System and Service Manager

What is this?

systemd is a system and service manager for Linux, compatible with SysV and LSB init scripts. systemd provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux control groups, supports snapshotting and restoring of the system state, maintains mount and automount points, and implements an elaborate transactional dependency-based service control logic.

Please bookmark this page, as some of the links in this guide lead to other pages.

systemctl usage

the basics

Verifying Bootup

As many of you know, systemd is the new init system. Traditionally, when booting up a Linux system, you would see a lot of little messages passing by on your screen, if they were shown at all, given that we use graphical boot splash technology like Plymouth these days. Nonetheless, the information on those boot screens was and still is very relevant, because it shows you, for each service started as part of bootup, whether it managed to start up successfully or failed (with those green or red [ OK ] or [ FAILED ] indicators).

systemd tracks and remembers for each service whether it started up successfully, whether it exited with a non-zero exit code, whether it timed out, or whether it terminated abnormally (by segfaulting or similar), both during start-up and runtime. By simply typing systemctl in your shell you can query the state of all services, both systemd native and SysV/LSB services:

# systemctl
# UNIT                        LOAD   ACTIVE SUB       DESCRIPTION
# boot.automount              loaded active waiting   boot.automount
# proc-sys...t_misc.automount loaded active waiting   Arbitrary Executable File Fo
# sys-devi...und-card0.device loaded active plugged   NM10/ICH7 Family High Defini
# sys-devi...-net-p1p1.device loaded active plugged   RTL8101E/RTL8102E PCI Expres
# sys-devi...et-wlp3s0.device loaded active plugged   AR242x / AR542x Wireless Net
# -.mount                     loaded active mounted   /
# home2.mount                 loaded active mounted   /home2
# tmp.mount                   loaded active mounted   /tmp
# ntpd.service                loaded maintenance  maintenance    Network Time Service 
# LOAD   = Reflects whether the unit definition was properly loaded.
# ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
# SUB    = The low-level unit activation state, values depend on unit type.
# 99 loaded units listed. Pass --all to see loaded but inactive units, too.
# To show all installed unit files use 'systemctl list-unit-files'.

Look at the ACTIVE column, which shows you the high-level state of a service: whether it is active (i.e. running), inactive (i.e. not running) or in any other state. If you look closely you'll see one item in the list that is marked maintenance and highlighted in red. This informs you about a service that failed to run or otherwise encountered a problem. In this case this is ntpd. Now, let's find out what actually happened to ntpd, with the systemctl status command:

# systemctl status ntpd.service
# ntpd.service - Network Time Service
#	  Loaded: loaded (/etc/systemd/system/ntpd.service)
#	  Active: maintenance
# 	    Main: 953 (code=exited, status=255)
# 	  CGroup: name=systemd:/systemd-1/ntpd.service

This shows us that NTP terminated during runtime (when it ran as PID 953), and tells us exactly the error condition: the process exited with an exit status of 255.

Killing Services

Killing a system daemon is easy, right? Or is it?

Sure, as long as your daemon consists of only a single process this might actually be somewhat true. You type killall rsyslogd and the syslog daemon is gone. But here comes systemd to the rescue: with systemctl kill you can easily send a signal to all processes of a service. Example:

# systemctl kill crond.service

This will ensure that SIGTERM is delivered to all processes of the crond service, not just the main process.

Of course, you can also send a different signal if you wish. For example, if you are bad-ass you might want to go for SIGKILL right away:

# systemctl kill -s SIGKILL crond.service

And there you go, the service will be brutally slaughtered in its entirety, regardless how many times it forked,

whether it tried to escape supervision by double forking or fork bombing.

Sometimes all you need is to send a specific signal to the main process of a service, maybe because you want to trigger a reload via SIGHUP.

Instead of going via the PID file, here's an easier way to do this:

# systemctl kill -s HUP --kill-who=main crond.service

How does this relate to systemctl stop? kill goes directly and sends a signal to every process in the group, whereas stop goes through the officially configured way to shut down a service, i.e. invokes the stop command configured with ExecStop= in the service file. Usually stop should be sufficient; kill is the tougher version, for when a service does not respond to its regular stop request.
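Putting stop and kill together, a small helper (a sketch of my own; the function name and the 10-second grace period are arbitrary choices) could ask politely first and escalate only if needed:

```shell
# Stop a service; if it is still active after a grace period,
# deliver SIGKILL to every process in its cgroup.
stop_or_kill() {
    svc="$1"
    systemctl stop "$svc"
    sleep 10                                   # grace period (example value)
    if systemctl is-active --quiet "$svc"; then
        systemctl kill -s SIGKILL "$svc"
    fi
}
```

Usage (as root): stop_or_kill crond.service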

stop, disable, or mask a service... The Three Levels of "Off"

In systemd, there are three levels of turning off a service (or other unit). Let's have a look which those are:

1. You can stop a service. This simply terminates the running instance of the service and does little else:

$ systemctl stop ntpd.service

2. You can disable a service. This unhooks the service from its activation triggers. That means that, depending on the service, it will no longer be activated on boot, by socket or bus activation, or by hardware plug (or any other trigger that applies to it). However, you can still start it manually if you wish. If there is already a started instance, disabling the service will not stop it:

$ systemctl disable ntpd.service

Disabling a service is a permanent change; it will be kept until you undo it, even across reboots.

3. You can mask a service. This is like disabling a service, but on steroids. It not only makes sure that the service is not started automatically anymore, but even ensures that it cannot be started manually anymore. This is a bit of a hidden feature in systemd, since it is not commonly useful and might confuse the user. But here's how you do it:

$ systemctl mask ntpd.service
$ ln -s /dev/null /etc/systemd/system/ntpd.service

By symlinking a service file to /dev/null you tell systemd to never start the service in question, completely blocking its execution. Unit files stored in /etc/systemd/system override those from /usr/lib/systemd/system that carry the same name. The former directory is administrator territory, the latter the territory of your package manager. By installing your symlink in /etc/systemd/system/ntpd.service you make sure that systemd will never read the upstream-shipped service file.
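The effect of that symlink can be seen with plain shell tools. This sketch uses a throwaway directory (not /etc/systemd/system) to show why a unit masked this way reads as empty:

```shell
# A symlink to /dev/null reads as an empty file, which is why systemd
# treats a unit masked this way as blocked: there is nothing to load.
dir=$(mktemp -d)                     # throwaway stand-in for /etc/systemd/system
ln -s /dev/null "$dir/ntpd.service"  # what 'systemctl mask ntpd.service' creates
cat "$dir/ntpd.service"              # prints nothing: the unit file is empty
```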



Which Service Owns Which Processes?

In systemd, every process that is spawned is placed in a control group named after its service. Control groups (or cgroups) are simply groups of processes that can be arranged in a hierarchy and labelled individually. When processes spawn other processes, these child processes are automatically made members of the parent's cgroup. Cgroups can hence be used as an effective way to label processes after the service they belong to, and to be sure that the service cannot escape from the label, regardless of how often it forks or renames itself. Here I discuss two commands you may use to relate systemd services and processes: ps and systemd-cgls.


# ps xawf -eo pid,user,cgroup,args
#   PID USER     CGROUP                      COMMAND
#   271 root     4:cpuacct,cpu:/system/crond /usr/sbin/crond -n
#   272 root     4:cpuacct,cpu:/system/atd.s /usr/sbin/atd -f
#   273 root     4:cpuacct,cpu:/system/kdm.s /usr/bin/kdm vt1
#   281 root     4:cpuacct,cpu:/system/kdm.s  \_ /usr/bin/X :0 vt2 -background none -nolisten tcp -seat seat0 -auth /var/run/kdm/A:0-DZRMfb
#   287 root     2:name=systemd:/user/1000.u  \_ -:0             
#   351 apostee+ 2:name=systemd:/user/1000.u      \_ awesome
#   376 apostee+ 2:name=systemd:/user/1000.u          \_ /usr/bin/ssh-agent /bin/sh -c exec -l /bin/bash -c "awesome"
#   296 polkitd  4:cpuacct,cpu:/system/polki /usr/lib/polkit-1/polkitd --no-debug
#   311 root     4:cpuacct,cpu:/system/dbus. /usr/sbin/modem-manager
#   316 root     4:cpuacct,cpu:/system/bluet /usr/sbin/bluetoothd -n
#   326 rpc      4:cpuacct,cpu:/system/rpcbi /sbin/rpcbind -w 
#   339 root     4:cpuacct,cpu:/system/wpa_s /usr/sbin/wpa_supplicant -u -f /var/log/wpa_supplicant.log -c /etc/wpa_supplicant/wpa_supplicant.
#   365 apostee+ 2:name=systemd:/user/1000.u dbus-launch --sh-syntax --exit-with-session
#   366 apostee+ 2:name=systemd:/user/1000.u /bin/dbus-daemon --fork --print-pid 4 --print-address 6 --session
#   422 apostee+ 2:name=systemd:/user/1000.u xscreensaver
#   424 apostee+ 2:name=systemd:/user/1000.u conky

In the third column you see the cgroup systemd assigned to each process.

If you want, you can set the shell alias psc (in ~/.bashrc) to the ps command line shown above:

# alias psc='ps xawf -eo pid,user,cgroup,args'


Another way to present the same information is the systemd-cgls tool which is shipped with systemd.

It shows the cgroup hierarchy in a pretty tree. Its output looks like this:
# systemd-cgls
# │ ├─systemd-logind.service
# │ ├─alsa-state.service
# │ │ └─253 /usr/sbin/alsactl -s -n 19 -c -E ALSA_CONFIG_PATH=/etc/alsa/alsactl.conf --initfile=/lib/alsa/init/00main rdaemon
# │ ├─systemd-udevd.service
# │ │ └─163 /usr/lib/systemd/systemd-udevd
# │ └─systemd-journald.service
# │   └─147 /usr/lib/systemd/systemd-journald
# └─user
#   └─1000.user
#     └─1.session
#       ├─ 287 -:0             
#       ├─ 351 awesome
#       ├─ 365 dbus-launch --sh-syntax --exit-with-session
#       ├─ 366 /bin/dbus-daemon --fork --print-pid 4 --print-address 6 --session
#       ├─ 376 /usr/bin/ssh-agent /bin/sh -c exec -l /bin/bash -c "awesome"
#       ├─ 422 xscreensaver
#       ├─ 424 conky
#       ├─ 464 /usr/libexec/at-spi-bus-launcher
#       ├─ 469 /usr/libexec/gvfsd
#       ├─ 497 /usr/lib/firefox/firefox
#       ├─2267 /usr/bin/python /usr/bin/terminator
#       ├─2275 gnome-pty-helper
#       ├─2276 /bin/bash
#       ├─2318 su
#       ├─2326 bash
#       ├─2480 systemd-cgls
#       └─2481 less

As you can see, this command shows the processes by their cgroup, as systemd labels the cgroups after the services. If you look closely you will notice that a number of processes have been assigned to the cgroup /user: systemd not only maintains services in cgroups, but user session processes as well.

journalctl usage

the basics

Let's start with some basics. To access the logs of the journal, use the journalctl tool.

To have a first look at the logs, just type in:

# journalctl

If you run this as root you will see all logs generated on the system, from system components as well as from logged-in users. The output looks like a pixel-perfect copy of the traditional /var/log/messages format, but actually has a couple of improvements over it:

  • Lines of error priority (and higher) will be highlighted red.
  • Lines of notice/warning priority will be highlighted bold.
  • The timestamps are converted into your local time-zone.
  • The output is auto-paged with your pager of choice (defaults to less).

This will show all available data, including rotated logs.

Access Control

Browsing logs this way is already pretty nice.

But requiring to be root sucks, of course; even administrators tend to do most of their work as unprivileged users these days.

By default, Journal users can only watch their own logs, unless they are root or in the adm group.

To make watching system logs more fun, you can add yourself to adm:

# usermod -a -G adm yourusername

After logging out and back in as yourusername you have access to the full journal of the system and all users:

$ journalctl

Live View

If invoked without parameters journalctl will show you the current log database.

Sometimes one needs to watch logs as they grow, where one previously used tail -f /var/log/messages:

$ journalctl -f

Yes, this does exactly what you expect it to do: it will show you the last ten log lines,

and then wait for changes and show them as they take place.

Basic Filtering

When invoking journalctl without parameters you'll see the whole set of logs, beginning with the oldest message stored.

That of course, can be a lot of data. Much more useful is just viewing the logs of the current boot:

$ journalctl -b

This will show you only the logs of the current boot, with all the gimmicks mentioned.

But sometimes even this is way too much data to process.

So let's just list the real issues to care about: all messages of priority level err and worse,

from the current boot:

$ journalctl -b -p err

But if you reboot only seldom, -b makes little sense; filtering based on time is much more useful:

$ journalctl --since=yesterday

And there you go, all log messages from the day before at 00:00 in the morning until right now. Awesome!
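Besides yesterday and friends, --since and --until also accept absolute timestamps, and date(1) can generate them in the format journalctl expects (a convenience sketch, not something journalctl requires):

```shell
# Build a "two hours ago" timestamp in the YYYY-MM-DD HH:MM:SS form
# journalctl accepts (GNU date syntax).
since=$(date -d '2 hours ago' '+%Y-%m-%d %H:%M:%S')
echo "journalctl --since=\"$since\""
```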

Of course, we can combine this with -p err or a similar match. But suppose, we are looking for something that happened on the

15th of October, or was it the 16th?

$ journalctl --since=2012-10-15 --until="2012-10-16 23:59:59"

And there we go, we found what we were looking for. But I noticed that some CGI script in Apache was acting up earlier today; let's see what Apache logged at that time:

$ journalctl -u httpd --since=00:00 --until=9:30

There we found it. But wait, wasn't there an issue with that disk /dev/sdc? Let's figure out what was going on there:

$ journalctl /dev/sdc

Ouch, a disk error! Hmm, maybe we should quickly replace the disk before we lose data.

Wait... didn't I see that the vpnc binary was nagging? Let's check for that:

$ journalctl /usr/sbin/vpnc

I don't get this, this seems to be some weird interaction with dhclient, let's see both outputs, interleaved:

$ journalctl /usr/sbin/vpnc /usr/sbin/dhclient

As you can see from these examples, journalctl is a pretty advanced tool that can track down pretty much anything. But we're not done yet: journalctl has some more to offer, which is shown in the section Advanced Filtering.


Advanced Filtering

Internally systemd stores each log entry with a set of implicit meta data.

This meta data looks a lot like an environment block, but actually is a bit more powerful.

This implicit meta data is collected for each and every log message, without user intervention.

The data will be there, waiting to be used by you. Let's see how this looks:

$ journalctl -o verbose -n
Fri, 2013-11-01 19:22:34 CET [s=ac9e9c423355411d87bf0ba1a9b424e8;i=4301;b=5335e9cf5d954633bb99aefc0ec38c25;m=882ee28d2;t=4ccc0f98326e6;x=f21e8b1b0994d7ee]
       _CMDLINE=avahi-daemon: registering [epsilon.local]
       MESSAGE=Joining mDNS multicast group on interface wlan0.IPv4 with address

(I cut out a lot here, I don't want to make this story overly long. Without the -n parameter it shows you

the last 10 log entries, but I cut out all but the last.)

With the -o verbose switch we enabled verbose output. Instead of a pixel-perfect copy of classic /var/log/messages that includes only a minimal subset of what is available, we now see all the details the journal has about each entry. This is highly interesting: there is even user credential information.

Now, as it turns out the journal database is indexed by all of these fields, out-of-the-box! Let's try this out:

$ journalctl _UID=70

And there you go, this will show all log messages logged from Linux user ID 70.

As it turns out you can easily combine these matches:

$ journalctl _UID=70 _UID=71

Specifying two matches for the same field results in a logical OR combination of the matches: all entries matching either will be shown, i.e. all messages from either UID 70 or 71. If you specify two matches for different field names, they are combined with a logical AND, and only entries matching both will be shown. The following shows only messages from processes named avahi-daemon on the host bwg-inc:

$ journalctl _HOSTNAME=bwg-inc _COMM=avahi-daemon

But of course, that's not fancy enough for us. We must go deeper:

$ journalctl _HOSTNAME=bwg-inc _UID=70 + _HOSTNAME=epsilon _COMM=avahi-daemon

The + is an explicit OR you can use in addition to the implied OR when you match the same field twice.

The line above means: show me everything from host bwg-inc with UID 70, or of host epsilon with a process name of avahi-daemon.

And now it becomes Magic

Who can remember all the values a field can take in the journal? I mean, who has that kind of photographic memory?

Well, the journal has:

$ journalctl -F _SYSTEMD_UNIT

This will show us all values the field _SYSTEMD_UNIT takes in the database, or in other words:

the names of all systemd services which ever logged into the journal. This makes it super-easy to build nice matches.


timedatectl usage

systemd brings a new way of making sure that the time in your system is correct.


To check status:

timedatectl status

To enable NTP

  • Install NTP:
equo i net-misc/ntp
  • Enable service to start at boot and start it now:
systemctl enable ntpd && systemctl start ntpd
  • Enable time synchronization with NTP:
timedatectl set-ntp 1

For more info:

timedatectl -h
man timedatectl

systemd timers

systemd is capable of taking on a significant subset of the functionality of cron, through built-in support for calendar time events as well as monotonic time events. While we previously used cron for such jobs, systemd also provides a good structure for setting them up.


Running a single script

Let’s say you have a script /usr/local/bin/myscript that you want to run every hour.

  • service file

First, create a service file, and put it in /etc/systemd/system/

# nano -w /etc/systemd/system/myscript.service

with the following content (the Description text is just an example):

[Unit]
Description=Runs myscript

[Service]
Type=simple
ExecStart=/usr/local/bin/myscript

Note that it is important to set the Type variable to be “simple”, not “oneshot”.

Using “oneshot” makes it so that the script will be run the first time, and then systemd

thinks that you don’t want to run it again, and will turn off the timer we make next.

  • timer file

Next, create a timer file, and put it also in the same directory as the service file above.

# nano -w /etc/systemd/system/myscript.timer

with the following content (the time values below are example choices; adjust to taste):

[Unit]
Description=Runs myscript every hour

[Timer]
# Time to wait after booting before we run the first time
OnBootSec=10min
# Time between running each consecutive time
OnUnitActiveSec=1h

  • enable/start

Rather than starting / enabling the service file, you use the timer.

# systemctl start myscript.timer

and enable it with each boot:

# systemctl enable myscript.timer
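Once started, you can verify that the timer is actually scheduled. Recent systemd versions provide systemctl list-timers for this (the snippet below is guarded so it is a no-op on a system not running systemd):

```shell
# NEXT/LEFT show the upcoming activation, LAST/PASSED the previous one.
if [ -d /run/systemd/system ]; then
    systemctl list-timers
fi
```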

Running Multiple Scripts on the Same Timer

Now let’s say there are a bunch of scripts you want to run, all at the same time.

In this case, you will want to make a couple of changes to the above formula.

  • service files

Create the service files to run your scripts as shown previously, but include the following section at the end of each service file:

[Install]
WantedBy=mytimer.target
If there is any ordering dependency between your service files, be sure to specify it with the After=something.service and/or Before=whatever.service parameters, within the [Unit] section.

  • timer file

You only need a single timer file. Create mytimer.timer, as outlined above.

  • target file

You can create the target that all these scripts depend upon:

# nano -w /etc/systemd/system/mytimer.target

with the following content:

[Unit]
Description=Mytimer target
# Lots more stuff could go here, but it's situational.
# Look at the systemd.unit man page.

  • enable/start

You need to enable each of the service files, as well as the timer:

systemctl enable script1.service
systemctl enable script2.service
systemctl enable mytimer.timer
systemctl start mytimer.timer

Hourly, daily and weekly events

One strategy for creating this functionality is timers which call in targets. All services which need to be run hourly (or daily, or weekly) can then be pulled in as dependencies of these targets.

First, the creation of a few directories is required:

# mkdir /etc/systemd/system/timer-{hourly,daily,weekly}.target.wants
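The brace expansion in that mkdir call expands to all three directory names at once; here it is demonstrated in a temporary directory rather than /etc/systemd/system:

```shell
# One mkdir call with brace expansion creates the hourly, daily and
# weekly .target.wants directories (bash syntax).
base=$(mktemp -d)
mkdir "$base"/timer-{hourly,daily,weekly}.target.wants
ls "$base"
```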

The following files will need to be created in the paths specified in order for this to work.

  • hourly events
# nano -w /etc/systemd/system/timer-hourly.timer

with its content (the interval values below are typical example choices):

[Unit]
Description=Hourly Timer

[Timer]
OnBootSec=5min
OnUnitActiveSec=1h
Unit=timer-hourly.target

[Install]
WantedBy=basic.target

# nano -w /etc/systemd/system/timer-hourly.target

with its content:

[Unit]
Description=Hourly Timer Target
StopWhenUnneeded=yes

  • daily events
# nano -w /etc/systemd/system/timer-daily.timer

with its content:

[Unit]
Description=Daily Timer

[Timer]
OnBootSec=10min
OnUnitActiveSec=1d
Unit=timer-daily.target

[Install]
WantedBy=basic.target

# nano -w /etc/systemd/system/timer-daily.target

with its content:

[Unit]
Description=Daily Timer Target
StopWhenUnneeded=yes

  • weekly events
# nano -w /etc/systemd/system/timer-weekly.timer

with its content:

[Unit]
Description=Weekly Timer

[Timer]
OnBootSec=15min
OnUnitActiveSec=1w
Unit=timer-weekly.target

[Install]
WantedBy=basic.target

# nano -w /etc/systemd/system/timer-weekly.target

with its content:

[Unit]
Description=Weekly Timer Target
StopWhenUnneeded=yes

  • adding events

Adding events to these targets is as easy as dropping them into the correct wants folder.

So if you wish for a particular event to take place daily, create a systemd service file

and drop it into the relevant folder.

For example, if you wish to run mlocate-update.service daily (which updates the mlocate database), you would create the following file:

# nano -w /etc/systemd/system/timer-daily.target.wants/mlocate-update.service
[Unit]
Description=Updates the mlocate database

[Service]
User=                                          # Add a user if you wish the service to be executed as a particular user, else delete this line
Type=                                          # simple by default; change it if you know what you are doing, else delete this line
ExecStart=/usr/bin/updatedb --option1 --option2     # More than one ExecStart= line can be used if required

  • enable and start the timers
# systemctl enable timer-{hourly,daily,weekly}.timer && systemctl start timer-{hourly,daily,weekly}.timer

  • Starting events according to the calendar

If you wish to start a service according to a calendar event and not a monotonic interval (i.e. you wish to replace the functionality of crontab), you will need to create a new timer and link your service file to that. An example would be:

# nano -w /etc/systemd/system/foo.timer


[Unit]
Description=foo timer

[Timer]
# To add a time of your choosing here, please refer to the systemd.time manual page for the correct format
OnCalendar=Mon-Thu *-9-28 *:30:00


The service file may be created the same way as the events for monotonic clocks. However, take care to put them in the /etc/systemd/system/ folder.
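A few more OnCalendar expressions in systemd.time syntax, to illustrate the format (the schedules themselves are arbitrary examples):

```ini
# every day at midnight (shorthand for *-*-* 00:00:00)
OnCalendar=daily
# every Monday at 08:00
OnCalendar=Mon *-*-* 08:00:00
# the first day of every month at midnight
OnCalendar=*-*-01 00:00:00
```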

analyzing and performance
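A natural starting point here is systemd-analyze, which ships with systemd (the snippet below is guarded so it is a no-op on a system not running systemd):

```shell
# Print total boot time, then per-service startup times, slowest first.
if [ -d /run/systemd/system ]; then
    systemd-analyze
    systemd-analyze blame
fi
```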


tips and tricks

systemd for Administrators

documentation for developers