2016-07-06
How to handle SIGWINCH in an Almquist shell
Last month I asked, is SIGWINCH in shells broken?
I explained how the shell allows you to create signal handlers, but takes care of all the dangerous bits of signal handling for you, so unlike in C or C++ you can run any code from within a signal handler. One exception was SIGWINCH, a signal that caused my shell script to terminate about 50% of the time.
I concluded my article with an example of how I handle the signal, which was met by some people telling me that they could not reproduce my problem:
# Record shell size changes
trap "trap '' WINCH;winch_trapped=1" WINCH
…
# Handle window changes
if [ -n "$winch_trapped" ]; then
	# Reinstall the trap
	winch_trapped=
	trap "trap '' WINCH;winch_trapped=1" WINCH
	# Redraw the current output
	redraw
fi
As it would turn out, I should have RTFM more carefully.
A better example
This code snippet, as it turned out, did not contain the actual problem. It was implied (but not stated) that the signal handling code runs inside some kind of loop.
Said loop looks somewhat like this:
# Record shell size changes
trap "trap '' WINCH;winch_trapped=1" WINCH
while read -r line; do
	# Handle window changes
	if [ -n "$winch_trapped" ]; then
		# Reinstall the trap
		winch_trapped=
		trap "trap '' WINCH;winch_trapped=1" WINCH
		# Redraw the current output
		redraw
	fi
	case "$line" in
	…
	esac
done
Tracking it Down
This, it turned out, was important.
At this point, I would usually describe the debugging process, but by now I do not remember everything that I've done.
Fixing it
In the end it turned out that there is a difference in the behaviour between ash and bash. The explanation can be found in the manual page of ash:
The exit status is 0 on success, 1 on end of file, between 2 and 128 if an error occurs and greater than 128 if a trapped signal interrupts read.
So in case of a signal trap, ash executes the trap and has read return a value greater than 128, which is equivalent to read(2) setting errno=EINTR.
This behaviour is not shared by bash, which seems to resume an interrupted read transparently. It also has different rules for return values:

… The return code is zero, unless end-of-file is encountered, read times out (in which case the return code is greater than 128), a variable assignment error (such as assigning to a readonly variable) occurs, or an invalid file descriptor is supplied as the argument to -u.
So unless the -t flag is used, bash's read does not return values greater than 128, which makes developing working code easy:
# Record shell size changes
trap "trap '' WINCH;winch_trapped=1" WINCH
while true; do
	read -r line
	retval=$?
	if [ $retval -gt 128 ]; then
		# Resume interrupted read
		continue
	elif [ $retval -ne 0 ]; then
		# Read failed
		break
	fi
	# Handle window changes
	if [ -n "$winch_trapped" ]; then
		# Reinstall the trap
		winch_trapped=
		trap "trap '' WINCH;winch_trapped=1" WINCH
		# Redraw the current output
		redraw
	fi
	case "$line" in
	…
	esac
done
Somewhere in there is a lesson to be learned. Mostly it lies in the missing part about how to debug this kind of problem, though.
2016-06-18
Is SIGWINCH in shells broken?
Julia Evans has written a piece about the scary properties of UNIX signals. I always figured it's safe enough if you perform an atomic operation like assigning a boolean or integer and handle the signal in regular code.
Take my signal handler for the powerd++ daemon:
/**
 * Sets g.signal, terminating the main loop.
 */
void signal_recv(int const signal) {
	g.signal = signal;
}
It doesn't get much easier than that.
Signals in Shells
Bourne-style shells like the Almquist Shell or BASH offer signal handling through the trap builtin. The following is the relevant manual section of the Almquist Shell (i.e. FreeBSD's /bin/sh):
trap [action] signal ...
trap -l
	Cause the shell to parse and execute action when any specified
	signal is received. The signals are specified by name or number.
	In addition, the pseudo-signal EXIT may be used to specify an
	action that is performed when the shell terminates. The action
	may be an empty string or a dash (‘-’); the former causes the
	specified signal to be ignored and the latter causes the default
	action to be taken. Omitting the action is another way to request
	the default action, for compatibility reasons this usage is not
	recommended though.

	In a subshell or utility environment, the shell resets trapped
	(but not ignored) signals to the default action. The trap command
	has no effect on signals that were ignored on entry to the shell.

	Option -l causes the trap command to display a list of valid
	signal names.
If you bothered to read that you may have noticed that there are no limits placed on what an action constitutes. This is because the shell handles all the dangerous bits for you (unless you activate the trapsync option, which you shouldn't unless you like to see your shell scripts segfault).
This is necessary because the shell is an interpreter, and thus there are no commands that are really safe to perform in a trap. The shell sanitises signals for you by safely interrupting and resuming builtin commands and by simply waiting for non-builtin commands to complete before performing your action.
Now you might have a long-running command that you want to be able to interrupt somehow:
trap 'echo Interrupted by signal;exit 1' INT HUP TERM
if my_longwinded_command; then
	do_something
fi
If my_longwinded_command is not a builtin or function, the trap does not spring until the command completes. The way to handle this without trapsync is the following:
my_longwinded_command &
trap "kill $!;echo Interrupted by signal;exit 1" INT HUP TERM
if wait; then
	trap "echo Interrupted by signal;exit 1" INT HUP TERM
	do_something
fi
trap "echo Interrupted by signal;exit 1" INT HUP TERM
The trick here is that we emulate the sequential behaviour with the wait builtin, which the shell can safely interrupt to perform the action.
Of course in the real world you want to handle the whole affair more gracefully, to avoid all this copy and paste.
There also is a small race between the command terminating and changing the trap, during which the kill command can be called without there being a process to kill. There is no graceful way to handle this other than suppressing the output of the kill command.
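A minimal sketch of that, assuming it is acceptable to send kill's error output to /dev/null:

my_longwinded_command &
# Suppress kill's complaint in case the process is already gone
trap "kill $! 2> /dev/null;echo Interrupted by signal;exit 1" INT HUP TERM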
So that's basically it; we're all set up to handle signals in scripts.
Except for …
SIGWINCH, which silently crashes shells that try to handle it. E.g.:
# Record shell size changes
trap "trap '' WINCH;winch_trapped=1" WINCH
…
# Handle window changes
if [ -n "$winch_trapped" ]; then
	# Reinstall the trap
	winch_trapped=
	trap "trap '' WINCH;winch_trapped=1" WINCH
	# Redraw the current output
	redraw
fi
This should be fairly safe, but it crashes frequently (not always, though). It's a proven pattern for all other signals that I normally handle.
So what's wrong?
2016-04-07
powerd++: Better CPU Clock Control for FreeBSD
Setting of P-States (power states a.k.a. steppings) on FreeBSD is managed by powerd(8). It has been with us since 2005, a time when the Pentium-M single-core architecture was the cutting-edge choice for notebooks and dual-core had just made its way to the desktop.
That is not to say that multi-core architectures were not considered when powerd was designed, but as the number of cores grows and hyper-threading has made its way onto notebook CPUs, powerd falls short.
Incentive
Don't you know it? You sit at your desk, reading technical documentation, occasionally scrolling or clicking on the next page link. The only (interactive) programs running are your web browser, an e-mail client and a couple of terminals waiting for input. There is a constant fan noise, which occasionally picks up for no apparent reason, making it a million times more annoying.
You can't work like this!
You start looking at the load, which is low but not minuscule. In the age of IMAP and node.js, web browsers and e-mail clients are always a little busy. Still, this is not enough to explain the fan noise.
You're running powerd to reduce your energy footprint (for various reasons), or are you? Yes, you are. So you start monitoring dev.cpu.0.freq and it turns out your CPU clock is stuck at maximum like the speedometer of an adrenaline junkie with a death wish.
Something is wrong; your 15% to 30% load is way below powerd's default 50% clock-down threshold. You start digging, thinking you can tune powerd to do the right thing. Turns out you can't.
An Introduction to powerd
The following illustration shows powerd's operation on a dual-CPU system with two cores and hyper-threading each. That is not a realistic system today, but it saves space in the illustration and contains all the cases that need to be covered.
Note that …
- … the sysctl(3) interface flattens the architecture of the CPUs into a list of pipelines, each presented as an individual CPU.
- … powerd has the first CPU hard coded as the one controlling the clock frequency for all cores.
- … powerd uses the sum of all loads to control the clock frequency.
Because powerd uses the sum of all loads to rate the overall load of the system, single-threaded loads can trigger higher P-States, but this comes at the cost of triggering high P-States for low, evenly distributed loads. The problem grows with the number of available cores. In the illustrated system a mean load of 12.5% results in a 100% load rating (eight pipelines at 12.5% each sum up to 100%). The same applies to a single quad-core CPU with hyper-threading.
Another problem resulting from this approach is that the optimal boundaries for the hysteresis change with the number of cores. Also, to protect single core loads, powerd only permits boundaries from 0% to 100%. This results in powerd changing into the highest P-State at the drop of a hat and only clocking down when the load is close to 0.
The Design of powerd++
The powerd++ design has three significant differences: the way it manages the CPUs/cores/threads presented through the sysctl interface, the way the load is calculated, and the way the target frequency is determined.
During its initialisation phase powerd++ assigns a frequency controlling core to each core, grouping them by the core that offers the handle to change the clock frequency. Contrary to the following illustration, all cores will always be controlled by dev.cpu.0, because the cpufreq(4) driver only supports global P-State changes. But powerd++ is built unaware of this limitation and will perform fine-grained control the moment the driver offers it.
To rate the load within a core group, each core determines its own load and then passes it to the controlling core. The controlling core uses the maximum of the loads in the group as the group load. This approach allows single threaded applications to cause high load ratings (i.e. up to 100%), but having small loads on all cores in a group still results in a small load rating. Another advantage of this design is that load ratings always stay within the 0% to 100% range. Thus the same settings (including the defaults) work equally well for any number of cores.
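The following is a hypothetical shell sketch (powerd++ is written in C++, and these loads are made up) of the difference between the two rating approaches:

group_load=0
# Rate the core group by the maximum of its loads instead of the sum
for load in 12 8 25 10; do
	if [ "$load" -gt "$group_load" ]; then
		group_load=$load
	fi
done
echo "${group_load}%"    # prints 25%; a sum based rating would yield 55%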
Instead of using a hysteresis to decide whether the clock frequency should be increased, lowered or stay the same, powerd++ uses a target load: it determines the frequency at which the current load would have rated as the target load. This approach results in quick frequency changes in either direction. E.g. given a target of 50% and a current load of 100%, the new clock frequency would be twice the current frequency. To reduce sensitivity to signal noise, more than two samples (5 by default) can be collected. This works as a low-pass filter, but is less damaging to the responsiveness of the system than increasing the polling interval.
Resources
The code is on github. A FreeBSD port is available as sysutils/powerdxx.
Afterthoughts
My experience in automotive and race car engineering came in handy. If your noise filter is not in O(1) (per frame), you're doing it wrong. If you have one control for many inputs, a maximum or minimum is usually the right choice; the sum rarely is. E.g. if you have 3 sensors that report 62°C, 74°C and 96°C, you want to adjust your coolant throughput for 96°C, not 232°C.
I hope that powerd++ will be widely used (within the community) and inspire the maintainers of cpufreq(4) to add support for per-CPU frequency controls.
TODOs
Currently the power source detection depends on ACPI; I need to implement something similar for older and non-x86/amd64 systems. For now those just fall back to the unknown state.
2015-02-01
/bin/sh: Writing Your Own watch Command
The command watch in FreeBSD has a completely different function than the popular GNU command of the same name. Since I find the GNU watch convenient, I wrote a short shell script to provide that functionality for my systems. The script is a nice way to show off some basics as well as some advanced shell-scripting features.
To resolve the ambiguity with watch(8) I called it observe on my system. My observe command takes the time to wait between updates as the first argument. Successive arguments are interpreted as commands to run. The following listing is the complete code:
#!/bin/sh
set -f

sleep=$1
clear=
shift

runcmd() {
	tput cm 0 0
	(eval "$@")
	tput AL `tput li`
}

trap 'runcmd "$@"; tput ve; exit' EXIT INT TERM
trap 'clear=1' HUP INFO WINCH

tput vi
clear
runcmd "$@"
while sleep $sleep; do
	eval ${clear:+clear;clear=}
	runcmd "$@"
done
Careful observers may notice that there is no parameter checking and the code is not commented. These shortcomings are part of what makes it a convenient example in a tutorial.
Turning Off Glob-Pattern Expansion
The second line already shows a good convention:
#!/bin/sh
set -f
The set builtin can be used to set parameters as if they were provided on the command line. It is also able to turn them off again, e.g. set +x would turn off tracing. The -f option turns off glob pattern expansion for command arguments. This is a good habit to pick up; glob pattern expansion is very dangerous in scripts. Of course the -f option could be set as part of the shebang, e.g. #!/bin/sh -f, but that would allow the script user to override it. By calling bash ./observe 2 ccache -s the shell could be invoked without setting the option, which is dangerous for options with safety implications.
Global Variable Initialisation
The next block initialises some global variables:
sleep=$1
clear=
shift
Initialising global variables at the beginning of a script is not just good style (because there is one place to find them all), it also protects the script from whatever the caller put into the environment using export or the interactive shell's equivalent.
The shift builtin can be a very useful feature. It throws away the first argument, so what was $2 becomes $1, $3 turns into $2, etc. With an optional argument the number of arguments to be removed can be specified, as the sketch below illustrates.
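A quick illustration of both forms:

set -- one two three four
shift        # throws away "one", $1 is now "two"
shift 2      # throws away "two" and "three"
echo "$1"    # prints: four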
The runcmd Function
The runcmd function is responsible for invoking the command in a fashion that overwrites its last output:
runcmd() {
	tput cm 0 0
	(eval "$@")
	tput AL `tput li`
}
The tput(1) command is handy to talk directly to the terminal. What it can do depends on the terminal it is run in, so it is good practice to test it in as many terminals as possible. A list of available commands is provided by the terminfo(5) manual page. The following commands were used here:
- cm (cursor_address #row #col): used to position the cursor in the top-left corner
- AL (parm_insert_line #lines): used to push any garbage on the terminal (e.g. random key inputs) out of the terminal
- li (lines): returns the number of terminal lines on stdout
The tput AL `tput li` basically acts as a clear below the cursor command.
The eval "$@"
command executes all the arguments (apart from the one that was shifted away) as shell commands. The command is enclosed by parenthesis to invoke it in a subshell. That effectively prevents it from affecting the script. It is not able to change signal handlers or variables of the script, because it is run in its own process.
Signal Handlers
Signal handlers provide a method of overriding the shell's default actions. The trap builtin takes the code to execute as the first argument, followed by a list of signals to catch. Providing a dash as the first argument invokes the default action:
trap 'runcmd "$@"; tput ve; exit' EXIT INT TERM
trap 'clear=1' HUP INFO WINCH
The INT signal represents a user interrupt, usually caused by the user pressing CTRL+C. The TERM signal is a request to terminate, e.g. it is sent when the system shuts down. EXIT is a pseudo-signal that occurs when the shell terminates regularly, i.e. by reaching the end of the script (in this case if sleep were to fail) or through an exit call.
The HUP signal is frequently used to reconfigure daemons without terminating them. WINCH occurs when the terminal is resized. The INFO signal is a very useful BSDism; it is usually invoked by pressing CTRL+T and causes a process to print status information.
The Output Cycle
The output cycle heavily interacts with the signal handlers:
tput vi
clear
runcmd "$@"
while sleep $sleep; do
	eval ${clear:+clear;clear=}
	runcmd "$@"
done
The tput vi command hides the cursor, tput ve turns it back on.
The clear command clears the terminal before the command is run the first time.
The runcmd "$@"
call occurs once before the loop, because the first call within the loop occurs after the first sleep
interval.
The clear global is set by the HUP/WINCH/INFO handler. The eval ${clear:+clear;clear=} line runs the clear command if the variable is set and resets it afterwards. The clear command is not run every cycle, because it would cause flickering. The ability to trigger it is required to clean up the screen in case a command does not overwrite all the characters from a previous cycle.
Conclusion
If you made it here, thank you for reading this till the end! You probably already knew a lot of what you read. But maybe you also learned a trick or two. That's what I hope.
2015-01-17
/bin/sh: Using Named Pipes to Talk to Your Main Process
You want to fork off a couple of subshells and have them talk back to your main process? Then this post is for you.
What is a Named Pipe?
A named pipe is a pipe with a file system node. This allows arbitrary numbers of processes to read from and write to the pipe, which in turn makes multiple usage scenarios possible. This post just covers one of them; others may be covered in future posts.
The Shell
The following examples should work in any Bourne Shell clone, such as the Almquist Shell (/bin/sh on FreeBSD) or the Bourne-Again Shell (bash).
HowTo
The first step is to create a Named Pipe. This can be done with the mkfifo(1) command:
# Get a temporary file name
node="$(mktemp -u)" || exit
# Create a named pipe
mkfifo -m0600 "$node" || exit
Running that code should produce a Named Pipe in /tmp.
The next step is to open a file descriptor. In this example a file descriptor is used for reading and writing; this avoids a number of pitfalls like deadlocking the script:
# Attach the pipe to file descriptor 3
exec 3<> "$node"
# Remove file system node
rm "$node"
Note how the file system node of the named pipe is removed immediately after assigning a file descriptor. The exec 3<> "$node" command has opened a permanent file descriptor, which remains open until manually closed or until the process terminates. So deleting the file system node will cause the system to remove the Named Pipe as soon as the process terminates, even when it is terminated by a signal like SIGINT (user presses CTRL-C).
Forking and Writing into the Named Pipe
From this point on the subshells can be forked using the & operator:
# This function does something
do_something() {
	echo "do_something() to stdout"
	echo "do_something() to named pipe" >&3
}

# Fork do_something()
do_something &
# Fork do_something(), attach stdout to the named pipe
do_something >&3 &
# Fork inline
( echo "inline to pipe" >&3 ) &
# Fork inline, attach stdout to the named pipe
( echo "inline to stdout" ) >&3 &
Whether output is redirected per command or for the entire subshell is a matter of personal taste. Either way the processes inherit the file descriptor of the Named Pipe. It is also possible to redirect stderr, either into the same or into a different named pipe.
The Named Pipe is buffered, so all the subshells can start writing into it immediately. Once the buffer is full, processes trying to write into the pipe will block, so sooner or later the data needs to be read from the pipe.
Reading from the Named Pipe
To read from the pipe the shell-builtin command read is used. Using non-builtin commands like head(1) usually leads to problems, because they may read more data from the pipe than they output, causing it to be lost.
# Make sure white space does not get mangled by read
# (IFS only contains the newline character)
IFS='
'
# Blocking read, this will halt the process until data is available
read line <&3

# Non-blocking read that reads as much data as is currently available
line_count=0
lines=
while read -t0 line <&3; do
	line_count=$((line_count + 1))
	lines="$lines$line$IFS"
done
Using a blocking read causes the process to sleep until data is available. The process does not require any CPU time; the kernel takes care of waking the process.
That's all that is required to establish ongoing communication between your processes.
The direction of communication can be reversed to use the pipe as a job queue for forked processes. Or a second pipe can be used to establish two-way communication. With just two processes a single pipe might suffice for two-way communication. A named pipe can also be connected to an ssh(1) session or nc(1).
2014-09-27
Another day in my love affair with AWK
I consider myself a C/C++ developer. Right now I am embracing C++11 (I wanted to wait until it was actually well supported by compilers) and I am loving it.
Despite my happy relationship with C/C++ I have maintained a torrid affair with AWK for many years, which has spilled into this blog before:
- Almost a year ago I concluded that MAWK is freakin' fast and GNU AWK freakin' fast as a snail
- Last summer I stumbled over a bottleneck in the one-true-AWK, the default for *BSD and Mac OS X
A Matter of Accountability
So far circumstances dictated that either the script or the input data or both had to be kept secret. In this post both will be publicly available. The purpose of this post is to give people the chance to perform their own tests.
The following is required to perform the test:
The dbc2c.awk script was already part of my first post. It parses Vector DBC (Database CAN) files, an industry standard for describing a set of devices, messages and signals for the real-time bus CAN (one can argue it's soft real-time, it depends). It does the following things:
- Parse data from 1 or more input files
- Store the data in arrays, use indexes as references to describe relationships
- Output the data:
  - Traverse the data structure and store attributes of objects in an array
  - Read a template
  - Insert data into the template and print on stdout
Test Environment
- The operating system:
  FreeBSD AprilRyan.norad 10.1-BETA2 FreeBSD 10.1-BETA2 #0 r271856: Fri Sep 19 12:55:39 CEST 2014 root@AprilRyan.norad:/usr/obj/S403/amd64/usr/src/sys/S403 amd64
- The compiler:
  FreeBSD clang version 3.4.1 (tags/RELEASE_34/dot1-final 208032) 20140512
  Target: x86_64-unknown-freebsd10.1
  Thread model: posix
- CPU: Core i7@2.4GHz (Haswell)
- NAWK version: awk version 20121220 (FreeBSD)
- MAWK version: mawk 1.3.4.20140914
- GNU AWK version: GNU Awk 4.1.1, API: 1.1
Tests
With the recent changeset 219:01114669a8bf, the script switched from array iteration (for (index in array) { … }) to creating a numbered index for each object type and iterating through it in order of creation, to make sure data is output in the same order with every AWK implementation. This makes it much easier to compare and validate outputs from different flavours of AWK.
To reproduce the tests, run:
time -l awk -f scripts/dbc2c.awk -vDATE=whenever j1939_utf8.dbc | sha256
The checksum for the output should read:
9f0a105ed06ecac710c20d863d6adefa9e1154e9d3a01c681547ce1bd30890df
Here are my runtime results (three runs each):
- NAWK: 6.23 s, 6.32 s, 6.27 s
- GNU AWK: 11.79 s, 11.88 s, 11.80 s
- MAWK: 1.98 s, 2.02 s, 1.97 s
Memory usage (maximum resident set size):
- NAWK: 22000 k
- GNU AWK: 50688 k
- MAWK: 26644 k
Conclusion
Once again the usual order of things establishes itself. GNU AWK wastes our time and memory while MAWK takes the winner's crown and NAWK keeps to the middle ground.
The dbc2c.awk script has been benchmarked before, and this time GNU AWK actually performs much better: 6.0 instead of 9.6 times slower than MAWK. Maybe just parsing one file instead of 3 helps, or the input data produces fewer collisions for the hashing algorithm (AWK array indexes are always cast to string and stored in hash tables).
In any case I'd love to see some more benchmarks out there. And maybe someone bringing their favourite flavour of AWK to the table.