While I like openSUSE’s approach of organizing extra packages into add-on repositories on their Build Service, I hate those ugly repository URLs. GUI users can just use the one-click install links on the package search, but command-line enthusiasts are out of luck.

To solve the problem, I’ve written two small Python scripts. obs-addrepo wraps zypper addrepo for Build Service repos:

# before
$ sudo zypper addrepo http://download.opensuse.org/repositories/Application:/Geo/openSUSE_12.1/ Application:Geo
# after
$ sudo obs-addrepo Application:Geo
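Internally, all the wrapper has to do is map the project name onto the Build Service URL scheme: each colon in the project name becomes “:/” in the repository path. Here is a minimal sketch of that mapping; the function name and the hardcoded distribution version are illustrative assumptions, not the actual obs-addrepo code (which would detect the installed distribution):

```python
# Hypothetical sketch of the project-name-to-URL mapping performed by
# obs-addrepo; the real tool may differ in details.
def obs_repo_url(project, distro="openSUSE_12.1"):
    # "Application:Geo" becomes "Application:/Geo" in the repository path
    path = project.replace(":", ":/")
    return "http://download.opensuse.org/repositories/%s/%s/" % (path, distro)

print(obs_repo_url("Application:Geo"))
# http://download.opensuse.org/repositories/Application:/Geo/openSUSE_12.1/
```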

obs-quickinstall is the 1-click installer for command-line users:

# before
$ sudo zypper addrepo http://download.opensuse.org/repositories/Application:/Geo/openSUSE_12.1/ temp
$ sudo zypper refresh temp
$ sudo zypper install --from temp josm
$ sudo zypper removerepo temp
# after
$ sudo obs-quickinstall Application:Geo josm
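The quickinstall workflow is just those four zypper calls in sequence. A sketch of how one might assemble them (the function name and the “temp” alias are illustrative, not the actual obs-quickinstall source):

```python
# Illustrative sketch: build the four zypper command lines that
# obs-quickinstall replaces. A real tool would run these via subprocess.
def quickinstall_commands(project, package, distro="openSUSE_12.1", alias="temp"):
    url = "http://download.opensuse.org/repositories/%s/%s/" % (
        project.replace(":", ":/"), distro)
    return [
        ["zypper", "addrepo", url, alias],
        ["zypper", "refresh", alias],
        ["zypper", "install", "--from", alias, package],
        ["zypper", "removerepo", alias],
    ]
```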

Both tools are now available as the obs-tools package (Git repo). Packages are currently building on the Build Service; the project page has installation instructions.

While I was on my quest of reducing the memory footprint of a freshly launched KDE session, I found that the process which uses the most memory just after startup is Amarok, which contributes over 80 MiB to 300 MiB total RAM usage. Now of course, Amarok has its reasons for high memory usage: for example, its collection is backed by a MySQL/Embedded database. This memory footprint is justified by the plethora of features Amarok offers. But still, 80 MiB RAM usage is quite a lot when all I want to do (99% of the time) is listen to some music files on the local disk. (My collection has 818 tracks at this very moment.)

Can we improve on that?

Looking at my desktop, I see the “Now Playing” applet. It shows the current track from Amarok, and has the basic media player controls (pause/stop/previous/next plus seek and volume sliders). Again, this is about all I need for a user interface while my playlist is filled. I remember that the Now Playing applet communicates with Amarok via DBus using the MPRIS (Media Player Remote Interface Specification) standard.

With all these impressions in mind, my target is clear: I want a headless media player which runs in the background and offers an MPRIS-compliant control interface on DBus. Something with a smaller memory footprint.

Intensive searches on the internet did not turn up anything of interest. Of course there are command-line music players (e.g. MPlayer), but those expect to be connected to a terminal for control. They cannot be run in the background, and there’s no nice GUI for them (like with the nowplaying applet). It looks like I need to do it myself yet again.

So here is the Raven Music Server (called ravend for short, as it is a daemon), which is now publicly available at git://anongit.kde.org/scratch/majewsky/raven. It currently implements the basic interfaces mandated by MPRIS version 2 (unfortunately the “Now Playing” applet supports MPRIS 1 only). The biggest missing piece is support for editing the track list, so at the moment you need to restart the process to change the playlist.

I have been productively using ravend for two weeks now, since one day after its inception, and I’m quite satisfied with it. And now that it is in a public Git repo, you can be, too! Provided that you find pleasure in controlling your media player with commands like

qdbus org.mpris.MediaPlayer2.ravend /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.PlayPause
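Such invocations follow a fixed pattern from the MPRIS 2 specification: the service name is “org.mpris.MediaPlayer2.” plus the player name, the object path is always “/org/mpris/MediaPlayer2”, and player methods live on the “org.mpris.MediaPlayer2.Player” interface. A small helper to build these command lines (the helper itself is my own sketch, not part of ravend):

```python
# Build a qdbus command line for an MPRIS 2 player method.
# Service name, object path and interface follow the MPRIS 2 specification.
def mpris_command(player, method):
    return ["qdbus",
            "org.mpris.MediaPlayer2." + player,
            "/org/mpris/MediaPlayer2",
            "org.mpris.MediaPlayer2.Player." + method]
```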

Convenient user interfaces will become available, eventually. Even then, the Raven Music Server will probably not be interesting for end users. Power users may find this project interesting if they like to keep an eye on their system’s memory footprint, or want to have their playback continue even when the X server is terminated, or want to run a full-fledged media player on a headless system.

What’s a clear sign that I’m a command-line addict? Not only do I have a custom prompt. My prompt is generated by a Python program, which has already grown to over 200 lines. My prompt detects Git and SVN repos, my custom build directory hierarchy, deleted directories at or above $PWD, common usernames and hostnames, shell type and shell level; and it’s still missing some features. What do you think: Is this madness? Does anyone else here use fully custom prompts?

Today’s XKCD got me thinking about the strength of my own passwords again. Some time ago, XKCD already hit on the topic of password reuse.

A major argument for reusing passwords is that one can’t remember dozens of passwords for all services one uses. The typical counter-argument then is that one can use a password storage, be it a local application like KWallet or an online service. Such services allow you to protect multiple different passwords with one master password, which is the only one which you have to remember.

I am personally a user of KWallet, and must agree that it’s a great relief to have a backup for this crucial data available anytime. (Currently, KWallet stores over 200 passwords on my notebook alone, and there are probably unmerged passwords over at my desktop.)

But alas, both kinds of password storage solutions have a big problem. Local wallets like KWallet are useless when the wallet file is not on the computer you are currently using: you’re stuck. And if the only computer carrying the wallet file breaks, you’re totally lost. Online storage solutions, on the other hand, require a great deal of trust towards the provider running the service. As we saw earlier this year with LastPass, this trust is in general not justified.

So what can be done? I just had an idea which I did not see before anywhere. (Might be that I did not look closely enough. Please tell me if this idea has already existed before.)

If we don’t want to store passwords (because that requires both the availability of the storage and trust with a provider of this available storage), we need to generate them based on an algorithm. In other words,

#!/usr/bin/env python2

import base64, getpass, hashlib, subprocess, sys

def doHash(x):
    return base64.b64encode(hashlib.sha512(x).digest())

def sendToXClipboard(x):
    subprocess.Popen(["xsel", "-i"], stdin=subprocess.PIPE).stdin.write(x)

try:
    site = sys.argv[1]
except IndexError:
    sys.stderr.write("Usage: %s [domain]\n" % sys.argv[0])
    sys.exit(1)

masterPassword = getpass.getpass("Password: ")
sitePassword = doHash(doHash(site) + doHash(masterPassword)) # variant 1
sitePassword = doHash(site + masterPassword)                 # variant 2
sendToXClipboard(sitePassword)

This Python script reads the name of a website from the command line, and prompts for a master password. It then combines both using a hashing algorithm considered secure (SHA-512 in this case), and sends the Base64-encoded result to the X clipboard (so the password is never displayed on screen). Base64 is a good compromise between printability and string size.

The code shows two incompatible variants of obtaining the sitePassword. I won’t debate over which is better. The extra hashes in variant 1 are, strictly speaking, security by obscurity, as they don’t help when the attacker knows the algorithm. However, that’s not the main security feature. As far as I can see, this algorithm relies solely on the strength of the SHA-512 algorithm, which is (as of August 11, 2011) considered secure, and (if the attacker is brute-forcing) on the strength of your master password. So don’t choose “correcthorsebatterystaple”. 😀
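One property worth spelling out: the scheme is deterministic, so the same site and master password always reproduce the same site password, and no storage is needed at all. The Base64-encoded SHA-512 digest is also always 88 characters long. The example inputs below are made up:

```python
import base64, hashlib

def doHash(x):
    return base64.b64encode(hashlib.sha512(x).digest())

# Made-up site and master password, just to demonstrate the properties.
pw_a = doHash(b"example.org" + b"some master password")
pw_b = doHash(b"example.org" + b"some master password")
assert pw_a == pw_b      # deterministic: nothing needs to be stored
assert len(pw_a) == 88   # 64-byte digest -> 88 Base64 characters
```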

The locate command line tool from findutils is great when you’ve forgotten where you dropped that file you worked on a week ago, but don’t want to run Strigi (and besides, Strigi does not index the system files). However, its output is quite convoluted when you’re searching by topic instead of exact file name.

$ locate tagaro | wc -l

Looking at the output of locate without the wc, there’s quite some garbage in there: for example, files from my backup and from build directories, which I am certainly not interested in. Of course there is a way to exclude these from the listing, by editing “/etc/updatedb.conf”. By default, this contains the following on my Arch system:

# directories to exclude from the slocate database:
PRUNEPATHS="/media /mnt /tmp /var/tmp /var/cache /var/lock /var/run /var/spool"

# filesystems to exclude from the slocate database:
PRUNEFS="afs auto autofs binfmt_misc cifs coda configfs cramfs debugfs devpts devtmpfs ftpfs iso9660 mqueue ncpfs nfs nfs4 proc ramfs securityfs shfs smbfs sshfs sysfs tmpfs udf usbfs vboxsf"

As you see, quite some stuff is already excluded from locate’s database, like removable devices under /media, temporary data and virtual filesystems. Apart from these defaults, I’ve also added my global build directory /home/tmp/build and my backup drive to the list. Let’s apply the changes and see if this helps:

$ sudo updatedb
$ locate tagaro | wc -l

An impressive improvement! But we’re still not there: Nearly a third of the output comes from the Git source control system which Tagaro uses. Paths like “/home/stefan/Code/kde/tagaro/.git/objects/b4/3cc4cc0bdc6c92b94655b8352c3073e8d3842d” are also useless, but how can we purge these? PRUNEPATHS only filters directory paths, but `man updatedb.conf` reveals there’s another configuration parameter which specifies directory names to be ignored. So let’s add this to /etc/updatedb.conf:

PRUNENAMES=".bzr .hg .git .svn"

This filters the most important types of VCS data directories. Again, let’s check if it helps:

$ sudo updatedb
$ locate tagaro | wc -l

A reduction of over 75%! Now locate shows only output which is relevant. Also, the locate database has shrunk by 60%, and locate’s execution time has dropped accordingly. By the way, results are even more on the spot when you give the “-b” switch to locate: it will then print only those files and directories whose name (instead of whole path) contains the given key. “locate -b tagaro” gives only 25 results here.
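The name-based pruning rule is simple: a path is skipped if any of its directory components matches one of the PRUNENAMES. My own illustration of that rule in a few lines of Python (not updatedb’s actual implementation):

```python
PRUNENAMES = {".bzr", ".hg", ".git", ".svn"}

def pruned(path):
    # A path is skipped if any path component matches a pruned name.
    return any(part in PRUNENAMES for part in path.split("/"))

print(pruned("/home/stefan/Code/kde/tagaro/.git/objects/b4"))  # True
print(pruned("/home/stefan/Code/kde/tagaro/src/main.cpp"))     # False
```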

It can be fun to read manpages, especially if the manpages in question are as good as the ones available for Git. It turns out that Git has an interesting mechanism for automatic URL rewriting. For example, if you find the following command too long:

git clone git://git.kde.org/amarok

Then you can define a URL alias by putting the following into the file .gitconfig in your home directory:

[url "git://git.kde.org/"]
    insteadOf = kde://

Now the command shortens considerably:

git clone kde://amarok

You could also choose just “kde:” instead of “kde://”, but I like that the latter looks like a normal URL. If you have a developer account, you might want to push commits via SSH. If so, you could change the “git://” URL into an SSH URL, or you can specify a separate URL alias for pushing. Add the following to the global gitconfig (in addition to the lines above):

[url "ssh://git@git.kde.org/"]
    pushInsteadOf = kde://

Now everything works automagically. You pull from “kde://” which gets rewritten to “git://git.kde.org/”, but when you want to push something, “kde://” will be rewritten to “ssh://git@git.kde.org/” instead.
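The rewriting rule itself is a plain prefix substitution, with pushInsteadOf taking precedence for pushes. A small model of it (my own sketch for clarity, not Git’s code):

```python
# Sketch of Git's insteadOf/pushInsteadOf prefix rewriting.
def rewrite(url, instead_of, push_instead_of=None, push=False):
    # For pushes, pushInsteadOf rules override plain insteadOf rules.
    rules = dict(instead_of)
    if push and push_instead_of:
        rules.update(push_instead_of)
    for alias, real in rules.items():
        if url.startswith(alias):
            return real + url[len(alias):]
    return url
```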

P.S. Before you ask: I’ve also added that tip to the git.kde.org manual on community.kde.org.

It’s common nowadays that libraries install forward includes (also known as “pretty headers”) which have the same name as the class declared in them. For example, to get the QPushButton, one includes <QPushButton> or <QtGui/QPushButton> instead of <qpushbutton.h>.

For a newer project of mine, I wrote a script that generates these forward includes automatically, and I thought that it might be worth sharing. The script is quite simple because the structure of the source tree is in my case identical to what gets installed below /usr/include. If you install your headers in a more complicated way, you will probably need to expand the script.

# This script finds all headers (except for private headers) in the known
# library source trees, looks for exported classes inside there, and generates
# forward includes for all these classes.

# configuration
EXPORT_MACRO=TAGARO_EXPORT # the macro which denotes exported classes
HEADER_DIR=tagaro # the directory which contains the headers of your lib
INCLUDE_DIR=includes/Tagaro # the directory which shall be filled with the pretty headers
INCLUDE_INSTALL_DIR='${INCLUDE_INSTALL_DIR}/Tagaro' # the directory into which CMake shall install the pretty headers
MANUAL_HEADERS='Settings' # specify manually created headers in this list (separated by spaces)

if [ ! -f $(basename $0) ]; then
    echo "Call this script only from the directory which contains it." >&2
    exit 1
fi

(
    echo "#NOTE: Use the $0 script to update this file."
    echo 'install(FILES'
    (
        find $HEADER_DIR/ -name \*.h -a \! -name \*_p.h | while read HEADERFILE; do
            grep "class $EXPORT_MACRO" $HEADERFILE | sed "s/^.*$EXPORT_MACRO \\([^ ]*\\).*$/\\1/" | while read CLASSNAME; do
                echo "#include <$HEADERFILE>" > $INCLUDE_DIR/$CLASSNAME
                echo -en "\t"; echo "$CLASSNAME"
            done
        done
        for MANUAL_HEADER in $MANUAL_HEADERS; do
            if [ -n $MANUAL_HEADER ]; then
                echo -en "\t"; echo $MANUAL_HEADER
            fi
        done
    ) | sort
    echo "DESTINATION $INCLUDE_INSTALL_DIR)"
) > $INCLUDE_DIR/CMakeLists.txt
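The heart of the script is the grep/sed pair that pulls exported class names out of a header. The same extraction expressed in Python, for clarity (illustrative only, using the TAGARO_EXPORT macro from the configuration above):

```python
import re

def exported_classes(header_text, macro="TAGARO_EXPORT"):
    # Matches declarations like "class TAGARO_EXPORT Board : public QObject"
    return re.findall(r"class %s (\w+)" % macro, header_text)

print(exported_classes("class TAGARO_EXPORT Board : public QObject"))  # ['Board']
```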

Git has an aliasing feature that allows you to define aliases and shortcuts for Git commands. For example, the command

git config --global alias.st status

will setup “git st” as a shorthand for “git status”. To delete the alias, use

git config --unset --global alias.st

But that’s not the point of this post. Recently, I have become tired of typing the four letters “git ” at the beginning of each and every Git command. At least for often-used commands which I have abbreviated already with Git aliases, I want to be able to omit the “git “. Of course there’s a shell script to solve my problem.

while read git_line; do
    if echo $git_line | grep '^\[.*\]$' &>/dev/null; then
        git_category=$git_line
    else
        echo $git_category $git_line
    fi
done < $HOME/.gitconfig | grep '^\[alias\]' | cut -d' ' -f2- | sed 's/ = / /' | while read git_alias git_command; do
    alias $git_alias="git $git_alias"
done
unset git_line git_category git_alias git_command

Put this into your .bashrc (or equivalent). Please note that the above script was only tested with zsh. In bash it may not work as-is: bash runs every component of a pipeline in a subshell, so the alias definitions from the final while loop would not survive, whereas zsh executes the last pipeline component in the current shell.

Now what does this do? For every global Git alias (like “git st” for “git status”), it defines the alias as a command recognized by the shell, so you need to type only “st” instead of “git status”. Git aliases can also resolve to the same command without entering an infinite loop, so you can define “pull” as an alias for “pull”, and use “pull” as a command on the shell.
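For the curious, the parsing step boils down to: find the [alias] section, take its lines up to the next section header, and split each one at the “=”. The same logic in Python (a sketch of the shell pipeline above, not a replacement for it):

```python
# Extract the aliases from the [alias] section of a gitconfig-style text.
def git_aliases(gitconfig_text):
    aliases, in_alias = {}, False
    for line in gitconfig_text.splitlines():
        line = line.strip()
        if line.startswith("["):
            in_alias = (line == "[alias]")
        elif in_alias and "=" in line:
            name, command = (s.strip() for s in line.split("=", 1))
            aliases[name] = command
    return aliases
```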

[zsh] Dear lazyweb

February 3, 2010

I’ve recently been looking for a simple way to launch a program from the CLI in such a way that it is totally detached from the shell, i.e. it is launched in the background and is not connected to the shell’s console. Currently, I’m using the following zsh alias:

 alias -g "\&"="&>/dev/null&|" 

Now I can launch apps totally detached by writing:

 inkscape picture.svg \& 

The advantage is that the extra syntax is quite short, and that I can add it after having typed the whole command (a clear convenience plus compared to, for instance, prepending “kdeinit4_wrapper”). Still, it has some problems: omitting the space before the “\&” won’t work (zsh will think that I’m looking for an executable called “inkscape&”), and bash does not know global aliases AFAIK. Do you know an easier way for me to do what I want?

Update: Exchanged the trailing “&” for “&|” which, as MathStuf noted in the comments, disowns the process, thereby working around the annoying “You have stopped jobs.” message that appears when you ^D the shell for the first time. Thanks for the hint!
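For comparison, the same kind of detachment can be done from Python 3: discard all standard streams and put the child in its own session, so it survives the shell and never writes to the console. This is just an equivalent sketch, not what the zsh alias does under the hood:

```python
import subprocess

def launch_detached(cmd):
    # Discard stdin/stdout/stderr and start a new session, so the child
    # is independent of the launching shell and its terminal.
    return subprocess.Popen(
        cmd,
        stdin=subprocess.DEVNULL,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        start_new_session=True,
    )
```

Usage would be e.g. `launch_detached(["inkscape", "picture.svg"])`.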

Blog’s better than backup

November 19, 2009

It’s very common that people post the scripts they’ve written for their daily tasks. Seems like blogs have become the leading backup solution for Bash scripts these days. I’ve decided to join this growing group of people, and share with you a tiny script from my ~/bin which is called “cleanup”. The code is at the bottom of the post.

When you launch cleanup, it will remove any files that match “*~” (i.e., the backup files which KWrite, Kate and many other apps create), and it will print out any files that have been removed. Additionally, it will look for a Makefile, and execute “make clean” if it has a clean rule. Everything operates on the pwd only. All in all, a very handy tool to clean the ls output.

Disclaimer: The “*~” backup files have use cases, and the “clean” rule may not do what you think it does. So when using the cleanup script, be aware of the possible consequences.

function hasMakeClean() {
    FALSE=1; TRUE=0
    [ -f Makefile ] || return $FALSE
    grep '^clean *\:' Makefile &>/dev/null || return $FALSE
    return $TRUE
}

case $1 in
    -h|--help)
        echo Usage: $0 '[-s|--simulate]'
        ;;
    -s|--simulate)
        find . -name '*~'
        hasMakeClean && echo "> make clean located."
        ;;
    *)
        find . -name '*~' -exec rm {} \; -exec echo {} \;
        hasMakeClean && echo "> make clean" && make clean
        ;;
esac
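The Makefile check at the heart of the script can be mirrored in Python; this is just an illustration of the same grep-based heuristic (a line starting with “clean:”), with a directory parameter added for testability:

```python
import os, re

def has_make_clean(directory="."):
    # True if the directory has a Makefile containing a "clean:" rule.
    makefile = os.path.join(directory, "Makefile")
    if not os.path.isfile(makefile):
        return False
    with open(makefile) as f:
        return any(re.match(r"^clean\s*:", line) for line in f)
```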