While I like openSUSE’s approach of organizing extra packages into addon repositories on their Build Service, I hate those ugly repository URLs. GUI users can simply use the one-click install links on the package search, but command-line enthusiasts are out of luck.

To solve the problem, I’ve written two small Python scripts. obs-addrepo wraps zypper addrepo for Build Service repos:

# before
$ sudo zypper addrepo http://download.opensuse.org/repositories/Application:/Geo/openSUSE_12.1/ Application:Geo
# after
$ sudo obs-addrepo Application:Geo

obs-quickinstall is the 1-click installer for command-line users:

# before
$ sudo zypper addrepo http://download.opensuse.org/repositories/Application:/Geo/openSUSE_12.1/ temp
$ sudo zypper refresh temp
$ sudo zypper install --from temp josm
$ sudo zypper removerepo temp
# after
$ sudo obs-quickinstall Application:Geo josm

Both tools are now available as the obs-tools package (Git repo). Packages are currently building on the Build Service; the project page has installation instructions.

While I was on my quest to reduce the memory footprint of a freshly launched KDE session, I found that the process using the most memory just after startup is Amarok, which contributes over 80 MiB to the 300 MiB of total RAM usage. Now of course, Amarok has its reasons for a high memory footprint: for example, its collection is backed by a MySQL Embedded database, and the footprint is justified by the plethora of features Amarok offers. But still, 80 MiB of RAM is quite a lot when all I want to do (99% of the time) is listen to some music files on the local disk. (My collection has 818 tracks at this very moment.)

Can we improve on that?

Looking at my desktop, I see the “Now Playing” applet. It shows the current track from Amarok, and has the basic media player controls (pause/stop/previous/next plus a seek slider and a volume slider). Again, this is about all I need for a user interface while my playlist is filled. I remember that the “Now Playing” applet communicates with Amarok via DBus using the MPRIS (Media Player Remote Interfacing Specification) standard.

With all these impressions in mind, my target is clear: I want a headless media player which runs in the background and offers an MPRIS-compliant control interface on DBus. Something with a smaller memory footprint.

Intensive searches on the internet did not turn up anything of interest. Of course there are command-line music players (e.g. MPlayer), but those expect to be connected to a terminal for control: they cannot be run in the background, and there is no nice GUI for them (like the “Now Playing” applet). It looks like I need to do it myself yet again.

So here is the Raven Music Server (called ravend for short, as it is a daemon), which is now publicly available at git://anongit.kde.org/scratch/majewsky/raven. It currently implements the basic interfaces mandated by MPRIS version 2 (unfortunately the “Now Playing” applet supports MPRIS 1 only). The biggest missing piece is support for editing the track list, so at the moment you need to restart the process to change the playlist.

I have been using ravend productively for two weeks now, since one day after its inception, and I’m quite satisfied with it. And now that it is in a public Git repo, you can use it, too! Provided, of course, that you find pleasure in controlling your media player with commands like

qdbus org.mpris.MediaPlayer2.ravend /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.PlayPause

Convenient user interfaces will become available, eventually. Even then, the Raven Music Server will probably not be interesting for end users. Power users may find this project interesting if they like to keep an eye on their system’s memory footprint, or want to have their playback continue even when the X server is terminated, or want to run a full-fledged media player on a headless system.
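In the meantime, a small shell wrapper can take the edge off the raw qdbus calls. This is merely a convenience sketch around the command shown above (it is not part of the project); the method names come from the MPRIS 2 Player interface:

# sketch: control ravend through its MPRIS 2 interface
# usage: ravenctl PlayPause   (also: Play, Pause, Stop, Next, Previous)
ravenctl() {
    qdbus org.mpris.MediaPlayer2.ravend /org/mpris/MediaPlayer2 \
        "org.mpris.MediaPlayer2.Player.$1"
}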

What’s a clear sign that I’m a command-line addict? Not only do I have a custom prompt. My prompt is generated by a Python program, which has already grown to over 200 lines. My prompt detects Git and SVN repos, my custom build directory hierarchy, deleted directories at or above $PWD, common usernames and hostnames, shell type and shell level; and it’s still missing some features. What do you think: Is this madness? Does anyone else here use fully custom prompts?
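For anyone who wants to start down the same road, the basic idea is easy to sketch. This is not my Python program, just a minimal bash illustration of one of the features mentioned above, showing the current Git branch (if any) in the prompt:

# minimal sketch: append the current Git branch to the bash prompt
__git_branch() {
    # prints " (branchname)" inside a Git work tree, nothing otherwise
    local b
    b=$(git symbolic-ref HEAD 2>/dev/null) && echo " (${b#refs/heads/})"
}
PS1='\u@\h:\w$(__git_branch)\$ '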

Today’s XKCD got me thinking about the strength of my own passwords again. Some time ago, XKCD already hit on the topic of password reuse.

A major argument for reusing passwords is that one can’t remember dozens of passwords for all the services one uses. The typical counter-argument is that one can use password storage, be it a local application like KWallet or an online service. Such solutions protect multiple different passwords with one master password, which is the only one you have to remember.

I am personally a user of KWallet, and must agree that it’s a great relief to have a backup for this crucial data available anytime. (Currently, KWallet stores over 200 passwords on my notebook alone, and there are probably unmerged passwords over at my desktop.)

But alas, both kinds of password storage have a big problem. KWallet and friends are useless when the wallet file is not available on the computer you are currently using, and if the only computer carrying the wallet file breaks, you’re totally lost. Online storage solutions, on the other hand, require a great deal of trust in the provider running the service. As we saw earlier this year with LastPass, this trust is in general not justified.

So what can be done? I just had an idea which I have not seen anywhere before. (It might be that I did not look closely enough; please tell me if this idea already exists.)

If we don’t want to store passwords (because storage requires both its availability and trust in whoever provides it), we need to generate them with an algorithm. In other words,

#!/usr/bin/env python2

import base64, getpass, hashlib, subprocess, sys

def doHash(x):
    return base64.b64encode(hashlib.sha512(x).digest())

def sendToXClipboard(x):
    # pipe the generated password into xsel, which puts it into the X selection
    xsel = subprocess.Popen(["xsel", "-i"], stdin=subprocess.PIPE)
    xsel.communicate(x)

try:
    site = sys.argv[1]
except IndexError:
    sys.stderr.write("Usage: %s <domain>\n" % sys.argv[0])
    sys.exit(1)
else:
    masterPassword = getpass.getpass("Password: ")
    # two incompatible variants; pick one and stick with it
    #sitePassword = doHash(doHash(site) + doHash(masterPassword)) # variant 1
    sitePassword = doHash(site + masterPassword)                  # variant 2
    sendToXClipboard(sitePassword)

This Python script reads the name of a website from the command line, and prompts for a master password. It then combines both using a hash algorithm that is currently considered secure (SHA-512 in this case), and sends the Base64-encoded result to the X selection via xsel (so the password won’t be displayed on screen). Base64 is the best compromise between printability and string size.

The code shows two incompatible variants of deriving the sitePassword; I won’t debate which one is better. The extra hashes in variant 1 are, strictly speaking, security by obscurity, as they don’t help when the attacker knows the algorithm. But that is not the main security feature anyway. As far as I can see, this scheme relies solely on the strength of the SHA-512 algorithm, which is (as of August 11, 2011) considered secure, and (if the attacker is brute-forcing) on the strength of your master password. So don’t choose “correcthorsebatterystaple”. 😀
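For illustration, here is what a session might look like. The file name sitepass.py is just a hypothetical name for the script above; the length of 88 characters simply follows from Base64-encoding a 64-byte SHA-512 digest.

$ python2 sitepass.py example.org
Password:
$ xsel -o | wc -c
88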

The locate command-line tool from findutils is great when you have forgotten where you dropped that file you worked on a week ago, but don’t want to run Strigi (plus, Strigi does not index the system files). However, its output is quite cluttered when you’re searching by topic instead of by exact file name.

$ locate tagaro | wc -l
4977

Looking at the output of locate without the wc, there’s quite some garbage in there: for example, my backup and files in build directories, which I am certainly not interested in. Of course there is a way to exclude these from the listing, by editing /etc/updatedb.conf. By default, this contains the following on my Arch system:

# directories to exclude from the slocate database:
PRUNEPATHS="/media /mnt /tmp /var/tmp /var/cache /var/lock /var/run /var/spool"

# filesystems to exclude from the slocate database:
PRUNEFS="afs auto autofs binfmt_misc cifs coda configfs cramfs debugfs devpts devtmpfs ftpfs iso9660 mqueue ncpfs nfs nfs4 proc ramfs securityfs shfs smbfs sshfs sysfs tmpfs udf usbfs vboxsf"

As you can see, quite some stuff is already excluded from locate’s database, like removable devices under /media, temporary data, and virtual filesystems. Apart from these defaults, I’ve also added my global build directory /home/tmp/build and my backup drive to the list.
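The resulting line then looks roughly like this (the backup mount point is a placeholder, not my actual path):

PRUNEPATHS="/media /mnt /tmp /var/tmp /var/cache /var/lock /var/run /var/spool /home/tmp/build /path/to/backup"

Let’s apply the changes and see if this helps: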

$ sudo updatedb
$ locate tagaro | wc -l
1656

An impressive improvement! But we’re still not there: nearly a third of the output comes from the internal data of the Git repository which Tagaro uses. Paths like “/home/stefan/Code/kde/tagaro/.git/objects/b4/3cc4cc0bdc6c92b94655b8352c3073e8d3842d” are also useless, but how can we purge these? PRUNEPATHS only filters directory paths, but `man updatedb.conf` reveals that there is another configuration parameter which specifies directory names to be ignored. So let’s add this to /etc/updatedb.conf:

PRUNENAMES=".bzr .hg .git .svn"

This filters the most important types of VCS data directories. Again, let’s check if it helps:

$ sudo updatedb
$ locate tagaro | wc -l
1080

A reduction of over 75% compared to where we started! Now locate shows only relevant output. The locate database has also shrunk by 60%, and locate finishes correspondingly faster. By the way, results are even more on the spot when you give the “-b” switch to locate: it will then print only those files and directories whose name (instead of full path) contains the given pattern. “locate -b tagaro” gives only 25 results here.
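In the same notation as the counts above (the number just restates the figure mentioned in the previous sentence):

$ locate -b tagaro | wc -l
25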

It can be fun to read manpages, especially if the manpages in question are as good as the ones available for Git. It turns out that Git has an interesting mechanism for automatic URL rewriting. For example, if you find the following command too long:

git clone git://git.kde.org/amarok

Then you can define a URL alias by putting the following into the file .gitconfig in your home directory:

[url "git://git.kde.org/"]
    insteadOf = kde://

Now the command shortens considerably:

git clone kde://amarok

You could also choose just “kde:” instead of “kde://”, but I like that the latter looks like a normal URL. If you have a developer account, you might want to push commits via SSH. In that case, you could change the “git://” URL into an SSH URL, or you can specify a separate URL alias for pushing. Add the following to your global gitconfig (in addition to the lines above):

[url "ssh://git@git.kde.org/"]
    pushInsteadOf = kde://

Now everything works automagically. You pull from “kde://” which gets rewritten to “git://git.kde.org/”, but when you want to push something, “kde://” will be rewritten to “ssh://git@git.kde.org/” instead.
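If you prefer not to edit the .gitconfig file by hand, the same two aliases can also be set from the command line; these commands write exactly the sections shown above:

git config --global url."git://git.kde.org/".insteadOf kde://
git config --global url."ssh://git@git.kde.org/".pushInsteadOf kde://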

P.S. Before you ask: I’ve also added that tip to the git.kde.org manual on community.kde.org.

It’s common nowadays for libraries to install forward includes (also known as “pretty headers”) which have the same name as the class declared in them. For example, to get QPushButton, one includes <QPushButton> or <QtGui/QPushButton> instead of <qpushbutton.h>.

For a newer project of mine, I wrote a script that generates these forward includes automatically, and I thought it might be worth sharing. The script is quite simple because, in my case, the structure of the source tree is identical to what gets installed below /usr/include. If you install your headers in a more complicated way, you will probably need to extend the script.


#!/bin/sh
# This script finds all headers (except for private headers) in the known
# library source trees, looks for exported classes inside there, and generates
# forward includes for all these classes.

# configuration
EXPORT_MACRO=TAGARO_EXPORT # the macro which denotes exported classes
HEADER_DIR=tagaro # the directory which contains the headers of your lib
INCLUDE_DIR=includes/Tagaro # the directory which shall be filled with the pretty headers
INCLUDE_INSTALL_DIR='${INCLUDE_INSTALL_DIR}/Tagaro' # the directory into which CMake shall install the pretty headers
MANUAL_HEADERS='Settings' # specify manually created headers in this list (separated by spaces)

if [ ! -f $(basename $0) ]; then
    echo "Call this script only from the directory which contains it." >&2
    exit 1
fi

(
    echo "#NOTE: Use the $0 script to update this file."
    echo 'install(FILES'
    (
        find $HEADER_DIR/ -name \*.h -a \! -name \*_p.h | while read HEADERFILE; do
            grep "class $EXPORT_MACRO" $HEADERFILE | sed "s/^.*$EXPORT_MACRO \\([^ ]*\\).*$/\\1/" | while read CLASSNAME; do
                # the pretty header simply includes the real header (e.g. "#include <tagaro/foo.h>")
                echo "#include <$HEADERFILE>" > $INCLUDE_DIR/$CLASSNAME
                echo -en "\t"; echo "$CLASSNAME"
            done
        done
        for MANUAL_HEADER in $MANUAL_HEADERS; do
            if [ -n "$MANUAL_HEADER" ]; then
                echo -en "\t"; echo $MANUAL_HEADER
            fi
        done
    ) | sort
    echo "DESTINATION $INCLUDE_INSTALL_DIR COMPONENT Devel)"
) > $INCLUDE_DIR/CMakeLists.txt
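
To give an idea of the result, here is a sketch of a run. The file name generate-includes.sh is hypothetical, and I’m assuming a header tagaro/board.h that declares “class TAGARO_EXPORT Board”:

$ ./generate-includes.sh
$ cat includes/Tagaro/Board
#include <tagaro/board.h>
$ head -3 includes/Tagaro/CMakeLists.txt
#NOTE: Use the ./generate-includes.sh script to update this file.
install(FILES
	Board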