Bluetooth Headphones in Arch Linux

There is a big difference between noise-cancelling headphones and classical headphones without noise cancellation, especially in a noisy environment like a plane or a large open-plan office. Inspired by a positive review of the Bose headphones by Marques Brownlee, I bought a pair.

Pairing with my OnePlus One smartphone was completely automatic and works like a charm. No further explanation is required.

Pairing with a PC/laptop running Arch Linux needed a little more effort. The advice on the Bluetooth headset page in the Arch Wiki helped me a lot.

One-time configuration:

systemctl start bluetooth.service
hciconfig hci0 up piscan
pacmd list-sinks | grep index:

In bluetoothctl, enter

pair xx:yy:...
trust xx:yy:...

I had to delete the directory below /var/lib/bluetooth. Apparently stale pairing data was stored there which got in the way of the new pairing.

Once pairing works as described above, I just use:

systemctl start bluetooth.service
hciconfig hci0 up piscan

to start Bluetooth and make my PC visible over Bluetooth. I switch on the headphones, which normally find the PC in less than a second. Then I have to set the right sink via pacmd:

pacmd set-default-sink `pacmd list-sinks | grep index: | tail -1 | cut -d " " -f6`

To check that all is well:

pacmd list-sinks | grep index:
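The one-liner above simply takes the last "index:" line from pacmd's output and cuts out the sixth space-separated field. On made-up sample output (not from a real PulseAudio session) the extraction can be checked offline:

```shell
# Sample `pacmd list-sinks` output: the default sink line starts with
# "  * index:", other sinks with "    index:" (four spaces).
sample='  * index: 0
    index: 1'
# Same pipeline as above: last "index:" line, field 6.
# (Field 6 assumes the 4-space indentation of a non-default sink line.)
printf '%s\n' "$sample" | grep index: | tail -1 | cut -d " " -f6   # prints: 1
```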

Once paired with your Linux machine, you have to repeat set-default-sink whenever you lose the connection, for example by walking too far away. If the Bluetooth connection drops, sound output apparently falls back to the regular speakers of your Linux machine, so in a large office other people will hear your music. Of course, you can mute the regular loudspeakers using

pacmd set-sink-volume 0 0

assuming sink #0 is the regular loudspeaker.

Running bacman in parallel

The pacman-dev mailing list contains an interesting thread on regenerating installed packages with bacman, i.e., running bacman in parallel. Gordian Edenhofer ran some performance tests with one to six jobs in parallel.

The results, tabulated below and plotted by the R program at the end of this post, clearly show that using all your cores is of great benefit.

Very similar to CPU Usage Time Is Dependant on Load.

There is an AUR alternative to bacman called fakepkg, also written by Gordian Edenhofer, which supports this parallelism.
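Without fakepkg, a minimal way to fan bacman out over several jobs is xargs -P. This is only a sketch: the job count is arbitrary, and bacman may need root or fakeroot to read all installed files.

```shell
# Regenerate every installed package with up to 4 bacman jobs in parallel.
# `pacman -Qq` lists installed package names; xargs starts one bacman
# process per name, at most 4 at a time.
pacman -Qq | xargs -n 1 -P 4 bacman
```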

The measurements are as follows:

# Computer:
## nuc
CPU: i5-4250U CPU @ 1.30GHz
Mem: 8GB

# Packages: base base-devel

* Jobs: 1
real    5m40.325s
user    4m35.710s
sys     0m37.270s

* Jobs: 2
real    4m39.047s
user    4m50.913s
sys     0m35.717s

* Jobs: 3
real    3m9.770s
user    6m5.137s
sys     0m46.153s

* Jobs: 4
real    2m51.163s
user    7m25.950s
sys     0m30.090s

* Jobs: 5
real    2m41.124s
user    8m6.687s
sys     0m26.627s

* Jobs: 6
real    2m36.724s
user    8m12.370s
sys     0m24.470s

# Packages: base base-devel gnome

* Jobs: 1
real    10m34.284s
user    7m33.677s
sys     1m25.223s

* Jobs: 2
real    8m18.592s
user    8m45.407s
sys     1m45.633s

* Jobs: 3
real    5m48.511s
user    11m40.323s
sys     1m46.967s

* Jobs: 4
real    5m8.346s
user    13m47.353s
sys     1m16.640s

* Jobs: 5
real    4m52.659s
user    14m34.700s
sys     1m7.933s

* Jobs: 6
real    4m58.652s
user    14m25.353s
sys     1m6.863s

# Packages: all 1021 packages

* Jobs: 1
real    80m56.509s
user    70m8.780s
sys     9m29.813s

* Jobs: 2
real    66m42.653s
user    75m13.030s
sys     9m57.040s

* Jobs: 3
real    43m25.701s
user    92m52.983s
sys     8m57.147s

* Jobs: 4
real    38m9.114s
user    110m5.763s
sys     6m43.427s

* Jobs: 5
real    36m12.302s
user    118m32.186s
sys     6m31.533s

* Jobs: 6
real    36m4.872s
user    118m54.449s
sys     6m35.830s

# Packages: base base-devel

* XZ_OPT="-T 1"
real    5m32.345s
user    4m22.960s
sys     0m28.610s

* XZ_OPT="-T 2"
real    4m27.433s
user    4m34.683s
sys     0m26.403s

* XZ_OPT="-T 3"
real    4m8.689s
user    4m59.597s
sys     0m20.943s

* XZ_OPT="-T 4"
real    4m7.828s
user    5m20.103s
sys     0m21.717s

* XZ_OPT="-T 5"
real    4m6.304s
user    5m20.757s
sys     0m21.817s

* XZ_OPT="-T 6"
real    4m5.932s
user    5m19.480s
sys     0m21.237s

* XZ_OPT="-T 0"
real    4m5.851s
user    5m18.137s
sys     0m20.823s

# Packages: base base-devel gnome

* XZ_OPT="-T 1"
real    10m20.164s
user    6m36.933s
sys     0m57.647s

* XZ_OPT="-T 2"
real    9m8.262s
user    6m52.057s
sys     0m57.357s

* XZ_OPT="-T 3"
real    8m53.265s
user    7m30.650s
sys     0m57.827s

* XZ_OPT="-T 4"
real    8m53.075s
user    8m1.787s
sys     0m59.480s

* XZ_OPT="-T 5"
real    8m48.173s
user    7m49.223s
sys     0m58.567s

* XZ_OPT="-T 6"
real    8m48.970s
user    7m47.837s
sys     0m56.570s

* XZ_OPT="-T 0"
real    8m49.713s
user    7m47.470s
sys     0m56.613s

# Packages: all 1021 packages

* XZ_OPT="-T 1"
real    79m42.006s
user    65m42.950s
sys     7m32.397s

* XZ_OPT="-T 2"
real    63m27.457s
user    68m57.536s
sys     6m56.100s

* XZ_OPT="-T 3"
real    60m37.071s
user    79m6.113s
sys     7m24.263s

* XZ_OPT="-T 4"
real    59m16.447s
user    85m22.746s
sys     7m35.783s

* XZ_OPT="-T 5"
real    59m2.436s
user    86m7.093s
sys     7m47.653s

* XZ_OPT="-T 6"
real    59m1.516s
user    86m18.973s
sys     7m34.320s

* XZ_OPT="-T 0"
real    58m36.010s
user    84m17.283s
sys     7m25.270s
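The XZ_OPT rows measure a different kind of parallelism: a single bacman job, but with xz told to compress multi-threaded. XZ_OPT is an environment variable read by xz itself, so bacman's internal xz calls pick it up automatically. A quick round trip confirms xz accepts the option:

```shell
# "-T 0" lets xz use one thread per core; any tool that shells out to xz
# inherits the setting.
export XZ_OPT="-T 0"
printf 'hello' | xz | xz -dc   # prints: hello
```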

The corresponding R program to plot the values is:

# Computer: NUC
## CPU: i5-4250U CPU @ 1.30GHz
## Mem: 8GB

# Measurement vectors
jobs = seq(1, 6)
# base and base-devel group (64 unique packages)
time_base = c(340.325, 279.047, 189.770, 171.163, 161.124, 156.724)
# base, base-devel and gnome group (111 unique packages)
time_gnome = c(634.284, 498.592, 348.511, 308.346, 292.659, 298.652)
# all packages installed at the NUC (1021 unique packages)
time_all = c(4856.509, 4002.653, 2545.701, 2289.114, 2172.302, 2164.872)
# base and base-devel group using XZ_OPT instead of true parallelization (64 unique packages)
time_xzopt_base = c(332.345, 267.433, 248.689, 247.828, 246.304, 245.932)
# base, base-devel and gnome group using XZ_OPT instead of true parallelization (111 unique packages)
time_xzopt_gnome = c(620.164, 548.262, 533.265, 533.075, 528.173, 528.970)
# all packages installed at the NUC using XZ_OPT instead of true parallelization (1021 unique packages)
time_xzopt_all = c(4782.006, 3807.457, 3637.071, 3556.447, 3542.436, 3541.516)

# Export drawing as vector graphic suitable for printing with A4
#svg("bacman: simple benchmark.svg", width=1*10, height=sqrt(2)*10)

# Arrange the three plots in one column
par(mfrow=c(3, 1))

# Plot points and lines for all packages
plot(time_all ~ jobs, main="all (1021 pkgs)", ylim=c(0, max(time_all)*1.1), xlab="Jobs/Count", ylab="Time/s", col="blue")
lines(time_all ~ jobs, col="blue")
points(time_xzopt_all ~ jobs, col="black")
lines(time_xzopt_all ~ jobs, col="black")
legend(x="topright", legend=c("Parallel Jobs", "XZ_OPT=\"-T JOBS\""), col=c("blue", "black"), pch=c(1, 1))

# Plot points and lines for base, base-devel and gnome
plot(time_gnome ~ jobs, main="base, base-devel, gnome (111 pkgs)", ylim=c(0, max(time_gnome)*1.1), xlab="Jobs/Count", ylab="Time/s", col="blue")
lines(time_gnome ~ jobs, col="blue")
points(time_xzopt_gnome ~ jobs, col="black")
lines(time_xzopt_gnome ~ jobs, col="black")
legend(x="topright", legend=c("Parallel Jobs", "XZ_OPT=\"-T JOBS\""), col=c("blue", "black"), pch=c(1, 1))

# Plot points and lines for base and base-devel
plot(time_base ~ jobs, main="base, base-devel (64 pkgs)", ylim=c(0, max(time_base)*1.1), xlab="Jobs/Count", ylab="Time/s", col="blue")
lines(time_base ~ jobs, col="blue")
points(time_xzopt_base ~ jobs, col="black")
lines(time_xzopt_base ~ jobs, col="black")
legend(x="topright", legend=c("Parallel Jobs", "XZ_OPT=\"-T JOBS\""), col=c("blue", "black"), pch=c(1, 1))

# Write drawing to file
#dev.off()

Why does deep and cheap learning work so well?

Very interesting.

the morning paper

Why does deep and cheap learning work so well? Lin & Tegmark, 2016

Deep learning works remarkably well, and has helped dramatically improve the state-of-the-art in areas ranging from speech recognition, translation, and visual object recognition to drug discovery, genomics, and automatic game playing. However, it is still not fully understood why deep learning works so well.

So begins a fascinating paper looking at connections between machine learning and the laws of physics – showing us how properties of the real world help to make many machine learning tasks much more tractable than they otherwise would be, and giving us insights into why depth is important in networks. It’s a paper I enjoyed reading, but my abilities stop at appreciating the form and outline of the authors’ arguments – for the proofs and finer details I refer you to the full paper.

A paradox

How do neural networks with comparatively…


Setting the Keyboard Language in IceWM

If you start IceWM from GNOME, you can set your language settings in


For example


setxkbmap de

The session files for all available window managers are located in


Example: to start icewm you can use the following configuration file, /usr/share/xsessions/icewm-session.desktop:

[Desktop Entry]
Comment=Simple and fast window manager

[Window Manager]
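The file above is truncated. For comparison, a minimal hypothetical icewm-session.desktop could look as follows; the field names come from the freedesktop Desktop Entry specification, while the concrete values are assumptions:

```ini
[Desktop Entry]
Name=IceWM Session
Comment=Simple and fast window manager
Exec=icewm-session
Type=Application
```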