Remote Unlocking of Encrypted Disks

1. Problem statement. You have an encrypted disk and want to unlock it during boot while not sitting in front of your computer.

A solution is sketched in the Arch Wiki page dm-crypt/Specialties. Below is a little more explanation. For the following you must be root.

2. Required software packages. Install the package dropbear from the “Community” repository. Then install the following AUR packages:

  1. mkinitcpio-netconf
  2. mkinitcpio-utils
  3. mkinitcpio-dropbear

3. Populate root_key. First mkdir /etc/dropbear and populate the file root_key with the public SSH keys that should be allowed to log into your machine, similar to authorized_keys for OpenSSH. I.e., the corresponding private keys must reside on the machines you intend to use for unlocking.
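For example, a minimal sketch, assuming the public key of the unlocking machine has been copied over as laptop_id_rsa.pub (the file name is illustrative):

mkdir /etc/dropbear
cat laptop_id_rsa.pub >> /etc/dropbear/root_key
chmod 600 /etc/dropbear/root_key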

4. Set-up networking in GRUB. Edit /etc/default/grub and set

GRUB_CMDLINE_LINUX_DEFAULT="cryptdevice=UUID=5a74247e-75e8-4c05-89a7-66454f96f974:cryptssd:allow-discards root=/dev/mapper/cryptssd ip=192.168.178.118:192.168.178.118:192.168.178.1:255.255.255.0:chieftec:eth0:none"

Then issue

grub-mkconfig -o /boot/grub/grub.cfg

to re-generate grub.cfg. The specification for “ip=” is given in Mounting the root filesystem via NFS (nfsroot). Its most important fields are (see the decoded example after this list):

  1. client-ip: IP address of the client
  2. server-ip: IP address of the NFS server
  3. gateway-ip: IP address of a gateway
  4. netmask: Netmask for local network interface
  5. hostname: Name of the client
  6. device: Name of network device to use
  7. autoconf: Method to use for autoconfiguration
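Applied to the ip= value in the GRUB command line above, the fields decode as follows; the server-ip field matters only for an NFS root and is simply set to the client address here:

ip=192.168.178.118     (client-ip)
   192.168.178.118     (server-ip, unused for unlocking)
   192.168.178.1       (gateway-ip)
   255.255.255.0       (netmask)
   chieftec            (hostname)
   eth0                (device)
   none                (autoconf: static set-up, no DHCP/BOOTP)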

5. Configure mkinitcpio. Finally, the main task. Edit /etc/mkinitcpio.conf and set

HOOKS="base udev block keymap keyboard autodetect modconf netconf dropbear encryptssh filesystems fsck"

Now call

mkinitcpio -p linux

See the Arch Wiki page on mkinitcpio. The output of mkinitcpio looks something like this:

  -> Running build hook: [dropbear]
Key is a ssh-rsa key
Wrote key to '/etc/dropbear/dropbear_rsa_host_key'
Key is a ssh-dss key
Wrote key to '/etc/dropbear/dropbear_dss_host_key'
Key is a ecdsa-sha2-nistp256 key
Wrote key to '/etc/dropbear/dropbear_ecdsa_host_key'
dropbear_rsa_host_key : sha1!! e1:11:51:ce:0b:07:2b:c7:66:37:c0:b9:de:f3:80:56:64:69:cc:fd
dropbear_dss_host_key : sha1!! ca:75:42:85:f9:96:6d:db:fd:15:d1:7a:4a:ee:19:b1:ff:91:14:bb
dropbear_ecdsa_host_key : sha1!! b9:b3:c4:ee:c4:af:21:87:52:39:e8:b6:c2:a3:b7:53:0e:52:f1:85
  -> Running build hook: [encryptssh]
  -> Running build hook: [filesystems]
  -> Running build hook: [fsck]
==> Generating module dependencies
==> Creating gzip-compressed initcpio image: /boot/initramfs-linux.img
==> Image generation successful
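To verify that the dropbear hook and its keys really ended up in the image, the content of the generated initramfs can be listed, e.g. with lsinitcpio (the grep pattern is merely illustrative):

lsinitcpio /boot/initramfs-linux.img | grep dropbear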

The content of /etc/dropbear is then:

$ ls -l /etc/dropbear
total 16
-rw------- 1 root root  458 Apr  1 13:24 dropbear_dss_host_key
-rw------- 1 root root  140 Apr  1 13:24 dropbear_ecdsa_host_key
-rw------- 1 root root  806 Apr  1 13:24 dropbear_rsa_host_key
-rw------- 1 root root 1572 Apr  1 12:25 root_key

6. Usage. Use ssh root@YourComputer to connect to your previously configured dropbear server and type in the passphrase for the encrypted disk. The connection then closes, and dropbear disappears. By the way, dropbear does not look at your OpenSSH configuration, so if you block root access via OpenSSH, this is of no concern for dropbear.
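With the ip= line from above, an unlocking session might look like this:

ssh root@192.168.178.118

Dropbear then asks for the passphrase of the cryptdevice; after it is entered correctly, the boot continues and the connection is closed.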

7. Limitations. The above set-up only works for unlocking the root device. Other encrypted devices, for example those given in /etc/crypttab, cannot be unlocked by this procedure.

8. Further reading. See LUKS encrypted devices remote über Dropbear SSH öffnen (in German), Remote unlocking LUKS encrypted LVM using Dropbear SSH in Ubuntu Server 14.04.1 (with Static IP).


Set-Up “Let’s Encrypt” for Hiawatha Web-Server

Google announced that starting with Chrome version 68 they will gradually mark HTTP-connections as “not secure”. “Let’s Encrypt” is a free service for web-masters to obtain certificates in an easy manner. Work on “Let’s Encrypt” started in 2014.

Setting up “Let’s Encrypt” with the Hiawatha web-server is quite easy, although there are some pitfalls. I used the ArchLinux package for Hiawatha. There is also an ArchWiki page for Hiawatha.

Another detailed description is: Let’s Encrypt with Hiawatha by Chris Wadge.

1. Unpacking and production-server setting. After installing the ArchLinux package I unpacked the file /usr/share/hiawatha/letsencrypt.tar.gz. You have to edit letsencrypt.conf in three places:

ACCOUNT_EMAIL_ADDRESS = your@mail.address
HIAWATHA_CERT_DIR = {HIAWATHA_CONFIG_DIR}/tls
LE_CA_HOSTNAME = acme-v01.api.letsencrypt.org           # Production

I struggled with the last variable, LE_CA_HOSTNAME. This has to be the production “Let’s Encrypt” server. Although you might register with the testing server, you apparently cannot do anything else with it. So delete the testing-server entry. The rest of the configuration file is obvious to change.
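With letsencrypt.conf adapted, the account is registered and the certificate requested via the script contained in the tarball. I give the subcommands from memory, so treat them as an assumption and check the script’s usage text; roughly:

./letsencrypt register
./letsencrypt request www.eklausmeier.tk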

2. Configuration file. Now check your hiawatha.conf file:

Binding {
        Port = 443
        #TLScertFile = tls/hiawatha.pem
        TLScertFile = /etc/hiawatha/tls/www.eklausmeier.tk.pem
        Interface = 0.0.0.0
        MaxRequestSize = 2048
        TimeForRequest = 30
}
...
VirtualHost {
        Hostname = www.eklausmeier.tk, eklausmeier.tk, 192.168.178.24, klm.no-ip.org, klm.ddns.net, edh.no-ip.org, edh.ddns.net, klmport.no-ip.org, borussia
        ...
}
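Once the certificate is in place, visitors arriving via plain HTTP can be redirected to HTTPS with Hiawatha’s RequireTLS option in the VirtualHost block; a minimal sketch:

VirtualHost {
        Hostname = www.eklausmeier.tk, eklausmeier.tk
        RequireTLS = yes
        ...
}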


Set-Up Hiawatha Web-Server

I stumbled upon the Hiawatha web-server when I read about a web-server for a houseboat by Ronald Scheckelhoff, WB8LZR. I had used Apache, thttpd, Lighttpd, NGINX, and others before. Now I use the Hiawatha web-server.

Hiawatha has three objectives, which are nicely met:

  1. Security: Hiawatha resisted Heartbleed and Slowloris attacks
  2. Ease of use: use the man-pages for configuring the web-server, no extensive Googling
  3. Lightweight on resources

The following diagram shows the number of source code lines using

wc `find . -iname \*.c -o -iname \*.h -o -iname \*akefile\* `

for each web-server.

The configuration below mostly follows the example configuration and provides Perl and PHP as CGI:

ServerId = http
ConnectionsTotal = 1000
ConnectionsPerIP = 25
SystemLogfile = /var/log/hiawatha/system.log
GarbageLogfile = /var/log/hiawatha/garbage.log

Binding {
        Port = 80
        MaxRequestSize = 1572864
        MaxUploadSize = 2047
        TimeForRequest = 90,180
}

CGIhandler = /usr/bin/perl:pl
CGIhandler = /usr/bin/php-cgi:php

Directory {
        DirectoryID = DownloadArea
        Path = /Download
        ShowIndex = yes
}

Directory {
        DirectoryID = WebPresence
        Path = /
        ExecuteCGI = yes
}

Hostname = 127.0.0.1
WebsiteRoot = /srv/http

VirtualHost {
        Hostname = www.eklausmeier.tk, eklausmeier.tk, 192.168.178.24, klm.no-ip.org, klm.ddns.net, edh.no-ip.org, edh.ddns.net, klmport.no-ip.org, borussia.no-ip.org
        WebsiteRoot = /srv/http
        FollowSymlinks = yes
        UseDirectory = WebPresence, DownloadArea
}

So I have a directory where Hiawatha shows a graphical representation of some files I can download, and an ordinary directory where I serve HTML and PHP files. I had to change MaxRequestSize and MaxUploadSize as I sometimes upload large chunks of data.
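To check that the CGI set-up actually works, a quick test along these lines should do; file name and location are merely illustrative:

cat > /srv/http/test.pl <<'EOF'
#!/usr/bin/perl
# trivial CGI: HTTP header, blank line, one line of body text
print "Content-Type: text/plain\r\n\r\n";
print "Perl CGI works\n";
EOF
chmod +x /srv/http/test.pl
curl http://127.0.0.1/test.pl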

Since the 2014 Microsoft shotgun attack on No-IP.org I have many different DNS names to better withstand this vandalism.

Enabling GD for PHP is described here: php-gd — just uncomment extension=gd.

Towards web-based delta synchronization for cloud storage systems

Very interesting article.

Some remarkable excerpts:

To isolate performance issues to the JavaScript VM, the authors rebuilt the client side of WebRsync using the Chrome native client support and C++. It’s much faster.

Replacing MD5 with SipHash reduces computation complexity by almost 5x. As a fail-safe mechanism in case of hash collisions, WebRsync+ also uses a lightweight full content hash check. If this check fails then the sync will be re-started using MD5 chunk fingerprinting instead.

The client side of WebR2sync+ is 1700 lines of JavaScript. The server side is based on node.js (about 500 loc) and a set of C processing modules (a further 1000 loc).

From the blog “the morning paper”: Towards web-based delta synchronization for cloud storage systems, Xiao et al., FAST’18

If you use Dropbox (or an equivalent service) to synchronise files between your Mac or PC and the cloud, then it uses an efficient delta-sync (rsync) protocol to upload only the parts of a file that have changed. If you use a web interface to synchronise the same files though, the entire file will be uploaded. This situation seems to hold across a wide range of popular services.

Given the universal presence of the web browser, why can’t we have efficient delta syncing for web clients? That’s the question Xiao et al. set out to investigate: they built an rsync implementation for the web and found that it performed terribly. Having tried everything to improve the performance within the original rsync design parameters, they then resorted to a redesign which moved more of the heavy lifting back to…


Unix Command comm: Compare Two Files

One lesser-known Unix command is comm. It is far less known than diff. comm needs two already sorted files FILE1 and FILE2. It has the options

  • -1 suppress column 1 (lines unique to FILE1)
  • -2 suppress column 2 (lines unique to FILE2)
  • -3 suppress column 3 (lines that appear in both files)

For example, comm -12 F1 F2 prints all common lines in files F1 and F2.
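A short demonstration with two sorted files:

printf 'a\nb\nc\n' > F1
printf 'b\nc\nd\n' > F2
comm -12 F1 F2

prints the common lines

b
c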

I once thought that comm had a bug, so I wrote a short Perl script to simulate the behaviour of comm. Of course, there was no bug; I had just failed to notice that the records in the two files did not match due to white space.

#!/bin/perl -W
use strict;

use Getopt::Std;
my %opts = ('d' => 0, 's' => 0);
getopts('ds:',\%opts);  # -d: debug output, -s <n>: print records whose file-bitmask equals n
my $debug = ($opts{'d'} != 0);
my $member = defined($opts{'s'}) ? $opts{'s'} : 0;

my ($set,$prev) = (1,"");       # bit for the current file, name of the previous file
my %H;  # record -> bitmask of the files it occurs in

while (<>) {
        $prev = $ARGV if ($prev eq "");
        if ($ARGV ne $prev) {   # started reading the next file: advance to the next bit
                $set *= 2;
                $prev = $ARGV;
        }
        chomp;
        $H{$_} |= $set; # mark record as present in the current file
        printf("\t>>\t%s: %s -> %d\n",$ARGV,$_,$H{$_}) if ($debug);
}

$member = 2*$set - 1 if ($member == 0); # default: records present in every file
printf("\t>>\tmember = %d\n",$member) if ($debug);
for my $i (sort keys %H) {
        printf("%s\n",$i) if ($H{$i} == $member);
}

The above Perl script does not need sorted input files, as it stores all records of the files in memory, in a hash. It uses a bitmask as a set. For example, mycomm -s2 F1 F2 prints only those records which are in file F2 but not in F1.
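Sticking with the two example files from the comm demonstration above, and assuming the script is saved as mycomm:

mycomm -s2 F1 F2

prints

d

as d is the only record occurring in F2 but not in F1.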

Unix sort Issue

I wondered why Unix sort behaved strangely.

printf "A0 1\nA  1\n" | sort

delivered

A0 1
A  1

Of course, I expected A to come before A0. This was strange, as printf "A1 1\nA 1\n" | sort produced

A  1
A1 1

just as expected. Also, printf "A0\nA\n" | sort orders A before A0, as expected.

Solution: Put LC_ALL=C before sort. The reason for the surprising order is the locale: in en_US.UTF-8, blanks carry only secondary collation weight, so sort essentially compares “A01” with “A1”, and “0” sorts before “1”; with LC_ALL=C plain byte-wise ASCII order is used. So

printf "A0 1\nA  1\n" | LC_ALL=C sort

delivered

A  1
A0 1

I realized this when I called sort with the --debug flag,

printf "A0 1\nA  1\n" | sort --debug

which shows the employed locale:

sort: using ‘en_US.UTF-8’ sorting rules
A0 1
____
A  1
____

To check that my expected sort order was indeed the “right” order, I wrote the following simple Perl script for sorting, which confirmed my understanding of ASCII sorting:

#!/bin/perl -W
use strict;
my @F = <>;     # slurp
for my $i (sort @F) { print $i; }

Parallelization and CPU Cache Overflow

In the post Rewriting Perl to plain C the runtimes of the serial runs were reported. As expected, the C program was a lot faster than the Perl script. Running the programs in parallel showed two unexpected behaviours: (1) more parallelization can degrade runtime, and (2) running unoptimized programs can be faster.

See also CPU Usage Time Is Dependant on Load.

In the following we use the C program siriusDynCall and the Perl script siriusDynUpro, which were described in the above-mentioned post. The program and the script read roughly 3 GB of data. Before starting the program or script, all this data had already been read into memory by using something like wc or grep.
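For example, a single pass of

wc -c * > /dev/null

suffices to pull all the files into the page cache before the timed runs.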

1. AMD Processor. Running 8 parallel instances, s=size=8, p=partition=1(1)8:

for i in 1 2 3 4 5 6 7 8; do time siriusDynCall -p$i -s8 * > ../resultCp$i & done
        real 50.85s
        user 50.01s
        sys 0

Merging the results with the sort command takes a negligible amount of time

sort -m -t, -k3.1 resultCp* > resultCmerged

Best results are obtained when running just s=4 instances in parallel:

$ for i in 1 2 3 4 ; do /bin/time -p siriusDynCall -p$i -s4 * > ../dyn4413c1p$i & done
        real 33.68
        user 32.48
        sys 1.18
