Set-Up “Let’s Encrypt” for Hiawatha Web-Server

Google announced that starting with Chrome version 68 it will gradually mark HTTP connections as “not secure”. “Let’s Encrypt” is a free service that lets webmasters obtain certificates in an easy manner. Work on “Let’s Encrypt” started in 2014.

Setting up “Let’s Encrypt” with the Hiawatha web-server is quite easy, although there are some pitfalls. I used the ArchLinux package for Hiawatha. There is also an ArchWiki page for Hiawatha.

Another detailed description is: Let’s Encrypt with Hiawatha by Chris Wadge.

1. Unpacking and production-server setting. After installing the ArchLinux package I unpacked the file /usr/share/hiawatha/letsencrypt.tar.gz. You have to edit letsencrypt.conf in three places:

ACCOUNT_EMAIL_ADDRESS = your@mail.address
LE_CA_HOSTNAME =           # Production

I struggled with the last variable, LE_CA_HOSTNAME: it has to be the production “Let’s Encrypt” server. Although you might register with the testing server, you apparently cannot do anything else with it. So delete the testing-server entry. The rest of the configuration file is obvious to change.
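With letsencrypt.conf adjusted, the workflow is roughly the following. This is only a sketch: check the script’s built-in help, as the subcommands may differ between Hiawatha versions, and the hostname is just my example.

```
# one-time: create an account at the CA
./letsencrypt register

# request a certificate for a website hosted on this machine
./letsencrypt request borussia

# later: renew certificates that are about to expire
./letsencrypt renew restart
```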

2. Configuration file. Now check your hiawatha.conf file:

Binding {
        Port = 443
        #TLScertFile = tls/hiawatha.pem
        TLScertFile = /etc/hiawatha/tls/
        Interface =
        MaxRequestSize = 2048
        TimeForRequest = 30
}

VirtualHost {
        Hostname = borussia
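For reference, a minimal sketch of what the relevant parts can look like once the certificate file exists. The certificate path and WebsiteRoot are placeholders, not my actual configuration; RequireTLS redirects plain HTTP requests to HTTPS.

```
Binding {
        Port = 443
        TLScertFile = /etc/hiawatha/tls/borussia.pem
        MaxRequestSize = 2048
        TimeForRequest = 30
}

VirtualHost {
        Hostname = borussia
        WebsiteRoot = /srv/http/borussia
        RequireTLS = yes
}
```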



Towards web-based delta synchronization for cloud storage systems

Very interesting article.

Some remarkable excerpts:

To isolate performance issues to the JavaScript VM, the authors rebuilt the client side of WebRsync using Chrome’s Native Client support and C++. It’s much faster.

Replacing MD5 with SipHash reduces computation complexity by almost 5x. As a fail-safe mechanism in case of hash collisions, WebRsync+ also uses a lightweight full content hash check. If this check fails then the sync will be re-started using MD5 chunk fingerprinting instead.

The client side of WebR2sync+ is 1700 lines of JavaScript. The server side is based on node.js (about 500 loc) and a set of C processing modules (a further 1000 loc).

the morning paper

Towards web-based delta synchronization for cloud storage systems Xiao et al., FAST’18

If you use Dropbox (or an equivalent service) to synchronise files between your Mac or PC and the cloud, then it uses an efficient delta-sync (rsync) protocol to only upload the parts of a file that have changed. If you use a web interface to synchronise the same files though, the entire file will be uploaded. This situation seems to hold across a wide range of popular services:

Given the universal presence of the web browser, why can’t we have efficient delta syncing for web clients? That’s the question Xiao et al. set out to investigate: they built an rsync implementation for the web, and found that it performed terribly. Having tried everything to improve the performance within the original rsync design parameters, they then resorted to a redesign which moved more of the heavy lifting back to…


Youtube 500 Internal Server Error

As noted in Youtube 500 Internal Server Error, today I again got a “500 Internal Server Error”. Normally you would not expect this kind of error from Google. It says:

Sorry, something went wrong.

A team of highly trained monkeys has been dispatched to deal with this situation.

If you see them, send them this information as text (screenshots frighten them):


nginx: 413 Request Entity Too Large – File Upload Issue

I got the above error message from nginx. The Stack Overflow post 413 Request Entity Too Large – File Upload Issue had all the information needed to resolve the issue. The solution was written by user Arun.

One has to edit /etc/nginx/nginx.conf and add within the http{...} block

client_max_body_size 15900M;

and in /etc/php/php.ini

; Maximum allowed size for uploaded files.
upload_max_filesize = 15900M

; Maximum amount of memory a script may consume (128MB)
memory_limit = 6900M

; Maximum size of POST data that PHP will accept.
; Its value may be 0 to disable the limit. It is ignored if POST data reading
; is disabled through enable_post_data_reading.
post_max_size = 25900M

Besides editing /etc/nginx/nginx.conf and /etc/php/php.ini I had to stop nginx and php-fpm:

systemctl stop nginx
systemctl stop php-fpm

so the changes take effect.

After starting the two services, check the settings with phpinfo().
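The effective limits can also be checked from the command line, roughly as follows. This assumes the nginx and php binaries are in your PATH; note that the PHP CLI may read a different php.ini than php-fpm, so phpinfo() served by php-fpm remains the authoritative check.

```
# dump the full parsed nginx configuration and show the body-size limit
nginx -T | grep client_max_body_size

# show what the PHP CLI sees (php-fpm may use a different php.ini)
php -r 'echo ini_get("upload_max_filesize"), PHP_EOL;'
php -r 'echo ini_get("post_max_size"), PHP_EOL;'
```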

Suppressing Advertisement on Web-Pages a.k.a. Ad-Blocking

Advertisements on web-pages are ubiquitous. Without advertising even this blog could not be offered free of charge. But advertisements can be a real nuisance with their blinking, flickering, moving, and distracting appearance. Sometimes they even contain malware.

There are two simple remedies for this problem:

  1. use an adblocker plug-in for your browser
  2. modify your /etc/hosts file

The first is easy to accomplish, but some web-pages then no longer work as expected. The second approach is in some ways more direct and more brutal, and leaves visual clues on the web-pages that brute force has been applied.

Editing /etc/hosts on your Linux desktop is easy. On Android you connect via adb shell, switch to the root user with su, then

mount -o remount,rw /system

i.e., remount the /system partition from read-only to writable, then edit /etc/hosts. Afterwards either reboot your smartphone, or

mount -o remount,ro /system

I use the following list of hosts in my /etc/hosts, which has a somewhat German feeling.
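The blocking entries simply map advertising hostnames to an unroutable address. A minimal sketch of generating such entries (the domains here are placeholders, not from my actual list):

```shell
# Generate /etc/hosts-style blocking entries; 0.0.0.0 fails fast,
# whereas 127.0.0.1 can cause waits on a local web server.
for host in ads.example.com tracker.example.net; do
    printf '0.0.0.0 %s\n' "$host"
done
```

Append the output to /etc/hosts and the browser resolves those hosts to nowhere, so the ad requests never leave the machine.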

Migrating from to WordPress

I have been a loyal user of since 2006. I have written on this in my post Saving URLs in Still Troublesome. But now enough is enough. Here is a list of annoyances:

  1. You can neither export nor import your data anymore.
  2. The service is generally slow, i.e., it takes a long time just to load the site in your browser.
  3. The service is sometimes not available.
  4. You cannot change URLs without deleting the entire post.
  5. The company behind the service does not answer any inquiries.
  6. The site is blocked by a number of company firewalls because it is marked as “social”.
