Prevent qmake from installing test cases

If you use qmake to build your Qt based projects and at the same time the QtTest framework to write unit tests, you might have run into a strange behavior:

Generated Makefiles have an installs feature, which is mainly used on Unix-like operating systems to install a previously compiled project. Normally, you would expect the installs feature to only put your libraries and executables into the target location. However, unit tests also get installed by default – which is usually not what you want, right?

Fortunately, it’s quite easy to tell qmake not to install unit tests by default: simply add the no_testcase_installs configuration switch to your unit test’s *.pro file:

TARGET   = my_unit_test
CONFIG  += console testcase no_testcase_installs
QT      += testlib
# ...

That should already do the trick 😉

Creating config header files using qmake

In case you are using qmake as the tool to build your Qt based application, you might sometimes look jealously over to other build systems like CMake for their support of configuration header files (or, more generally, “configured” files of any kind, as this is not limited to C/C++ headers). So what’s the background?

There are several files whose content needs to reflect something that is usually only known to the build system. This could be

  • Information about (optional) build dependencies of your app or library.
  • User provided configuration options, e.g. installation target locations for use with make install.

In a lot of cases you might get away with passing such information through to the compile process as defines (using qmake’s DEFINES variable). However, there are cases where this is not sufficient. Probably the most prominent example are *.desktop files on Linux, where the file needs to point to the installation location of your application. You will even find situations in plain C/C++ code where DEFINES won’t do (admittedly, this is stretching qmake’s capabilities a bit, but hey…). In these cases, what you actually want is to create configured files on disk, which are fed into later steps of the build or installation process.

Long story short: While other tools have their way to create such files well documented, qmake is hiding this a bit (at least I was not able to find information about it in the official docs). But nevertheless, such a function exists: Enter, QMAKE_SUBSTITUTES!

QMAKE_SUBSTITUTES is a variable to which you can append template files (files with an *.in suffix) that are stored in your source directory. qmake will configure these files and write the result, without the *.in suffix, into the build directory. Configuring in this case means that any reference of the form $$SOME_VARIABLE will be replaced by the content of that variable as known to qmake.

A very simple example: You could use this to create a header file containing the version number of your app.

In your project file, you would have something like this:

# ...
VERSION = 1.2.3
QMAKE_SUBSTITUTES += config.h.in
# ...

The content of the template file config.h.in could then look like this:

#ifndef MYAPP_CONFIG_H
#define MYAPP_CONFIG_H

#define MYAPP_VERSION "$$VERSION"

#endif // MYAPP_CONFIG_H

And finally, you can pull this file into your source and header files as required:

#include <iostream>

#include "config.h"

int main(int argc, char** argv) {
    std::cout << "MyApp version " << MYAPP_VERSION << std::endl;
    return 0;
}

Setting up suspend to hibernate in Fedora

Important Note:

When you upgrade to the current version Fedora 26 (which right now – July 2017 – ships with systemd 233), you might run into a regression which prevents any units depending on the suspend unit from being executed. The same applies to any scripts and programs in /usr/lib/systemd/system-sleep/ which are supposed to be executed before and after one of the power save states is activated. As a consequence, suspend-to-hibernate is not working on Fedora as long as this issue is not fixed in upstream systemd and rolled out to Fedora 🙁

Computers usually support two major energy saving modes: suspend and hibernate. Suspend basically shuts down all hardware components except your RAM (because it preserves your current session state there). Hibernate instead writes the current RAM contents to disk and then turns off everything. Both modes have their pros and cons:

Suspend is much faster, i.e. when you close your laptop lid you usually want the session to be back as soon as you open the lid again. So during the day you probably want to use this mode mostly.

Hibernate, on the other hand, has a greater potential for saving energy, as the system is basically shut down completely and later restored to exactly this point when you turn it on again. Hence, this mode is ideal when you seldom use your device (e.g. when you have to prepare for some events in your professional life and hence don’t have too much free time to spend with your private hardware 😉 ).

Besides the energy and time aspects, there is another point that kicks in when using full disk encryption: suspend is less secure, because sensitive data is stored unencrypted in the still powered-on RAM. Especially when you often take your laptop with you, or when it is a company laptop, you surely don’t want someone else to gain access to your data.

The good news is: You actually don’t have to decide for one or the other 😉 While it is not available out of the box, you can easily enable a mode where your laptop goes into suspend mode when you close your lid and later – say after one or two hours of not being used – it goes to hibernate, increasing energy saving and protecting your data in case you use full disk encryption. So, here is how to enable this “suspend-to-hibernate” mode in Fedora:

Enable Hibernate

You read that right: if you have not already done so, you first have to enable hibernation on your freshly installed Fedora box. This is because the installer does not add the required parameters to the kernel boot arguments. Fortunately, this is a well documented task.

Enable Suspend-to-Hibernate

This as well is quite easy, even though not documented for the case of Fedora. However, our friends over at Arch Linux have summed up everything nicely in their Wiki and Forums:

First, you need to create a new file /etc/systemd/system/suspend-to-hibernate.service with the following contents:

[Unit]
Description=Delayed hibernation trigger
Before=sleep.target
StopWhenUnneeded=true

[Service]
Type=oneshot
RemainAfterExit=yes
Environment="WAKEALARM=/sys/class/rtc/rtc0/wakealarm"
Environment="SLEEPLENGTH=+2hour"
ExecStart=-/usr/bin/sh -c 'echo -n "alarm set for "; date +%%s -d$SLEEPLENGTH | tee $WAKEALARM'
ExecStop=-/usr/bin/sh -c '\
  alarm=$(cat $WAKEALARM); \
  now=$(date +%%s); \
  if [ -z "$alarm" ] || [ "$now" -ge "$alarm" ]; then \
     echo "hibernate triggered"; \
     systemctl hibernate; \
  else \
     echo "normal wakeup"; \
  fi; \
  echo 0 > $WAKEALARM; \
  '

[Install]
WantedBy=suspend.target

Basically, you can ignore most of it if you don’t care about the details. The most interesting part is the SLEEPLENGTH environment variable: it determines how long the computer remains suspended before going to hibernate. The default of two hours is a decent choice for most use cases.
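To see what the ExecStart/ExecStop lines actually compute, here is the same timing logic as a plain shell sketch (no RTC involved; the +2hour value matches the two hour default):

```shell
# Sketch of the alarm arithmetic used by the unit above.
SLEEPLENGTH=+2hour
now=$(date +%s)
# Absolute wake-up time (this is what gets written to the RTC wakealarm):
alarm=$(date +%s -d "$SLEEPLENGTH")
# On wake-up, the unit compares the current time against the alarm:
if [ "$now" -ge "$alarm" ]; then
  echo "hibernate triggered"   # the RTC timer woke the machine up
else
  echo "normal wakeup"         # the user woke the machine early
fi
```

Since now is always earlier than now plus two hours, running the sketch directly prints "normal wakeup"; only after actually sleeping past the alarm does the hibernate branch fire.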

In theory, you could try to enable this new service by running

sudo systemctl enable suspend-to-hibernate

However, some updates in systemd apparently changed something that prevents the service from working flawlessly with the suspend target delivered with systemd itself. The good thing is that this is easily fixed. Run the following

sudo cp /usr/lib/systemd/system/suspend.target /etc/systemd/system/suspend.target

to create a local copy of the suspend target definition. Then edit the copy and add the line below to its [Unit] section (keep everything else as it is, to stay as close to the default definition as possible):

Requires=suspend-to-hibernate.service
That’s it. If you’ve not already done so, enable the suspend-to-hibernate service as shown above using systemctl enable.

Setting console font size on HiDPI screens in Fedora

I recently switched to another laptop which has a HiDPI screen. As usual, the thing is running Fedora (currently in version 25). The KDE desktop in particular was quite easy to configure: basically, it is a matter of starting System Settings, going to Display and Monitor and using the Scale Display button to bring up a neat configuration dialog which allows you to set a scaling factor for the monitor. For my particular setup, a factor of 2 works quite nicely.

One open point was the font size in the Linux console as well as in GRUB2. However, some configuration changes later this was fixed as well 😉

Increasing Font Size of GRUB2

First, I generated a custom font file from one of the TTF fonts installed on my system (in this case from Google Noto; adjust the input path below to a TTF font present on your machine):

sudo mkdir /boot/grub2/fonts
sudo grub2-mkfont -s 36 -o /boot/grub2/fonts/NotoSansRegular36.pf2 /usr/share/fonts/google-noto/NotoSans-Regular.ttf

This font can now be used in /etc/default/grub:

GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_TERMINAL_OUTPUT="gfxterm"
GRUB_FONT=/boot/grub2/fonts/NotoSansRegular36.pf2

Basically, I added the GRUB_FONT entry which points to the font to use. In addition, I had to change the GRUB_TERMINAL_OUTPUT from console to gfxterm.

Note: The remaining entries must remain as they are! Only edit the two variables.

Finally, the GRUB configuration can be regenerated. Depending on whether you have EFI or a BIOS, this can be done via:

# BIOS:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# EFI:
sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
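If you are unsure which variant applies to your machine, the firmware type can be detected from the presence of /sys/firmware/efi; a small sketch:

```shell
# Pick the grub.cfg location based on the firmware type:
if [ -d /sys/firmware/efi ]; then
  GRUB_CFG=/boot/efi/EFI/fedora/grub.cfg   # booted via EFI
else
  GRUB_CFG=/boot/grub2/grub.cfg            # legacy BIOS boot
fi
echo "Regenerate with: sudo grub2-mkconfig -o $GRUB_CFG"
```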

Increasing the Console Font Size

To increase the font size for the Linux console (Ctrl+Alt+F[2-8]), first install the terminus fonts:

sudo dnf install terminus-fonts-console

Now, change to a TTY and use the setfont command to load one of the fonts. You will find them in /usr/lib/kbd/consolefonts/. For me, the ter-m32n font works quite nicely:

sudo setfont ter-m32n

To make this font the default console font, edit /etc/vconsole.conf and adjust the FONT entry:

FONT="ter-m32n"

Fedora 21 (or later) not resuming from Suspend to Disk

Consider this as a memo to myself (or a useful hint in case you ran into a similar issue recently 😉 ):

Some days ago, my Asus Zenbook stopped resuming properly from the Suspend to Disk state. I was running Fedora 21 (and upgraded to the 22 beta while the problem persisted, in the hope it was caused by some bad kernel update). As this did not help either, I had to dig further and eventually found the issue:

It seems that some recent update or some stupid configuration issue on my side (or an unlucky combination of both xD ) caused the GRUB2 configuration to omit the resume kernel command line option. This option tells the kernel where the disk image that preserves the system state is stored. To fix the issue, first find out which partition is your swap partition, e.g. using blkid:

martin@zenbook:~$ blkid | grep swap
/dev/mapper/vg_zenbook-lv_swap: UUID="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" TYPE="swap"
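As a side note, the device path can be extracted from such a blkid line mechanically; a sketch using a sample line in the format shown above:

```shell
# Sample blkid output line (format as shown above):
line='/dev/mapper/vg_zenbook-lv_swap: UUID="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" TYPE="swap"'
# Everything before the first colon is the device path:
dev=${line%%:*}
echo "resume=$dev"   # -> resume=/dev/mapper/vg_zenbook-lv_swap
```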

So in my case, the swap device is /dev/mapper/vg_zenbook-lv_swap. With that information, you can now edit /etc/default/grub and append the resume argument to the kernel command line:

GRUB_CMDLINE_LINUX="[...] resume=/dev/mapper/vg_zenbook-lv_swap"

Note: I left out the existing parameters from my configuration. Whatever you do, do not remove anything; just append the resume=/path/to/your/swap/device argument. Finally, you can regenerate the GRUB configuration. In my case (as I am using EFI), I had to run

sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

That’s it. At least in my case, Suspend to Disk is working properly again after this change 🙂

Vacation Retrospective

After not taking too many of my vacation days in 2014, I started into 2015 with as many as 17 remaining vacation days. Unfortunately, at least in my company, I had to take these days by the end of March, so this meant 4 weeks off 🙂

Now, this is a long time, but I think whoever has made this experience before me knows that such a “long” time is quickly over. Of course, March had some nice days, which we used to go out hiking (and we have beautiful places for that quite nearby).

Another good thing is that during such a long time you usually manage to get things done that you normally keep putting off (such as tidying up your apartment, paying the dentist a visit and so on 😉 ). So that point from my vacation todo list has been taken care of as well.

Finally — thanks to the more April-like weather especially end of March — also my private projects got some attention. After OpenTodoList received some love in the beginning of the year, I also managed to update our (the RPdev) website and in the very end also had a chance to work on a (for me) long awaited new project: OpenTodoList for ownCloud 🙂 When initially starting work on OpenTodoList, I also had in mind some extension to ownCloud to be able to store todos there and share them with other people.

First, kudos here to the guys over at ownCloud! Creating new apps for ownCloud is really easy and I have a good feeling about the code so far, thanks to the well thought-through frameworks created and/or used by ownCloud. After a first review of the existing tasks app for ownCloud (which makes a good impression on me) I decided to start over on my own. First of all, I was not sure whether the existing app would be fit to be integrated into OpenTodoList as I had in mind (well, it definitely could be integrated, but I wanted the app running in ownCloud to be more of a backend to OpenTodoList which supports any potential feature of the app), and second – the learning factor 😉 Reading through the documentation, I couldn’t resist creating a complete app on my own (especially since I have not yet had the chance to work a lot with web applications, so this was a welcome change from the otherwise more desktop-centric world I am working in).

Long story short, during some rather windy and cold March/April days, I put some effort into bringing up the initial OpenTodoList for ownCloud app 🙂 It is still far away from being finished (or really usable); however, you could already use it to store your todos and access them across devices. Sharing is not yet implemented and (of course) a new storage backend for the OpenTodoList app is missing as well, but given the short time, I am quite pleased with the progress so far. For the sake of completeness, here’s a little demo of the current state:

Mac OS like gestures on Linux

Recently, I changed from a Windows based PC at work to a MacBook Air. That’s really great, mostly because Mac OS provides a lot of the features I love on Linux and that make working with a lot of open application windows quite easy.

Now, Mac OS also comes with excellent gesture support. I never really “missed” that feature (this is the first time I came in contact with Mac OS), but I quickly became accustomed to it, too. Being able to show all open windows or get an overview of your virtual desktops with just a swipe on your trackpad is great. So the next weekend project was clear: getting something like that for Linux, too 😉

In fact, Canonical has implemented such gesture handling for Ubuntu already. Unfortunately, the required packages (namely, their utouch framework) are not easily available for Fedora, which I am using. So, some more manual work was required. Good news: it is actually quite straightforward to install everything as long as you know what you need 😉 So, here we go:

Preparing your environment

By default, the installation procedure for source builds will put the utouch framework into /usr/local. That is okay (this way, these files won’t interfere with anything installed via the package manager of your distribution). However, at least in my case I needed to set up some environment variables so the subsequent build commands would work and applications would find the libraries at runtime. Without further ado:

export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
export LD_LIBRARY_PATH=/usr/local/lib:/usr/local/lib64:$LD_LIBRARY_PATH

Execute this once in a terminal (and keep it open for the subsequent builds). If you plan to install utouch to a different location, adjust the paths accordingly.

Build Tools

You will of course also need the typical development tools in order to get everything done. As I had a lot of stuff installed already, I cannot tell which exact packages you need; however, the errors produced by the configure scripts are typically informative enough. In addition, you have to install Qt4 and the accompanying development packages (we will need them for the actual gesture recognition application later on).

Install utouch

First of all, we have to install the utouch framework. You can find it on Launchpad. We need to install four components: utouch-evemu, utouch-frame, utouch-grail and utouch-geis. You can either get the sources via Bazaar or just download the latest release as a zipped tarball; I decided for the latter. Check each project’s Launchpad page for the most recent tarball versions, then fetch and install the packages in the order given above.

For each of these packages:

  1. Download it, either by getting the latest release tarball or by checking it out via Bazaar.
  2. Change into the directory and execute the usual build and installation steps:
    ./configure && make && sudo make install
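For convenience, the four builds can be scripted in one go. A sketch, assuming the release tarballs have been downloaded into the current directory (the actual build commands are left commented out, as they require the sources to be present):

```shell
# Build and install the utouch components in dependency order:
for pkg in utouch-evemu utouch-frame utouch-grail utouch-geis; do
  echo "building $pkg"
  # tar xzf "$pkg"-*.tar.gz
  # (cd "$pkg"-*/ && ./configure && make && sudo make install)
done
```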

Installation of touchegg and touchegg-gce

Now that we have utouch available, the next step is to install an appropriate application that will recognize the actual gestures and trigger appropriate actions. For that, I installed touchegg and (for graphical configuration) touchegg-gce. Both are Qt based applications, hence make sure you have Qt4 and the Qt development packages installed.

First, you want to install touchegg. You can find it as a project on Launchpad. When I checked, there were no release tarballs, so I used

bzr branch lp:touchegg

to get the code via bazar. Change into the directory and issue

qmake && make && sudo make install

to build and install touchegg. Note that this will install it into /usr (instead of /usr/local). Now, you can just execute touchegg to run it. This will create a configuration file in $HOME/.config/touchegg (if it does not already exist). You can edit this file to change the gestures recognized and their associated actions.

If you prefer a GUI for editing this file, you can use Touchegg-gce. This application allows you to load, modify and save the touchegg configuration. As it is hosted on GitHub, use git to get it before building:

git clone <URL of the Touchegg-gce repository on GitHub>
cd Touchegg-gce
qmake && make

Note that Touchegg-gce does not come with an installation procedure. Instead, just start it from where you built it.

Some last steps…

Finally, you might want to do some configuration depending on your system and desktop environment. First of all, touchegg might interfere with the synaptics input driver for some gestures. To circumvent this, create a script in $HOME/bin/ in which we’ll use synclient to set up the synaptics driver appropriately. In my case, I want synaptics to handle one and two finger events and Touchegg the 3 and 4 finger ones. For this to work, one has to disable 3 finger gestures in synaptics:

#!/bin/sh

# If you want Touchegg to handle 2 finger gestures, deactivate
# 2 finger gestures in synaptics:
#synclient TapButton2=0

# Same for 3 finger gestures:
synclient TapButton3=0

# Same for 2 finger clicks:
#synclient ClickFinger2=0

# And for 3 finger clicks:
synclient ClickFinger3=0

# If Touchegg shall take care for scrolling, 
# deactivate it in synaptics:
#synclient HorizTwoFingerScroll=0
#synclient VertTwoFingerScroll=0

Make it executable (

chmod +x $HOME/bin/

) and ensure it is started when your desktop starts. For example, when using KDE, fire up systemsettings and go to Startup and Shutdown and add your script via Add Program.

Next, you want to ensure that touchegg itself is started on desktop startup. For this, create a second script in $HOME/bin/ with the following content:

#!/bin/sh
export LD_LIBRARY_PATH=/usr/local/lib:/usr/local/lib64:$LD_LIBRARY_PATH
touchegg &

Make this one executable, too (

chmod +x $HOME/bin/

) and also add it to the startup procedure of your desktop environment. That script does nothing else than starting touchegg; however, as at least in my case the utouch libraries were not automatically found, I had to modify LD_LIBRARY_PATH to point to /usr/local .

Last but not least, in order to have Touchegg-gce easily available, I created a similar script for it as well:

#!/bin/sh
export LD_LIBRARY_PATH=/usr/local/lib:/usr/local/lib64:$LD_LIBRARY_PATH
/path/to/Touchegg-gce/touchegg-gce &
Note that you want to adjust the path to point to the correct location where you stored Touchegg-gce 😉

That’s it!

On Unknown Roads: Raspberries@Home

First of all (as a kind of disclaimer): If you came here expecting some cool project for Raspberry Pi… forget about it. This post is just about “I did it” (yes, bought and set up two Raspberry Pis). No fancy stuff yet (hope that’ll change soon enough). For now, I just want to describe one possible basic use case for the Pis, so maybe if you have the same use case as I have… have a lot of fun reading further 😉


For some time, we have had a little home server running. Nothing spectacular – an Intel Atom powered box running OpenSuse. Stable and easy to maintain so far. That box is running an ownCloud instance, used for hosting and sharing files within our home network (such as music collections or photos). Only drawback: the server itself was quite noisy. Still, it did its job quite well. However: while ownCloud provides all you need to easily upload and manage your files on the server, consuming e.g. the music stored there was a bit difficult. The same holds true for videos: viewing them usually meant having to use a laptop connected to the TV via HDMI. Nothing too bad so far, but the two cats that we have make it more difficult to enjoy a full movie (either they try to “type in” something (when you don’t nearly close the lid) or they close the laptop (if you do)). Besides, the server is also used as a backup target (via rsync from Linux and netatalk from MacOS).

Step 1: A little streaming helper

So, to improve the way we access our data, I decided to buy one of those Raspberry Pis. In the end, they are cheap enough that you cannot do much wrong 😉 A few days later…


As promised: right now, nothing special so far. I decided to go for an OpenELEC installation (via NOOBS). OpenELEC is basically a lightweight Linux that boots directly into XBMC. As of now, that seems to work quite fine. XBMC can be controlled either “directly” (assuming you have a mouse connected to the streaming device), via remote control (which we did not go for as of now) or via apps you can install on your Android/iOS device (which is what we’re currently using, as it also allows browsing e.g. the media collection on the server in quite a comfortable way).

Now, more interesting is: how to best access the files on the server? In our case, we’re using ownCloud for upload and sharing. That was okay so far – ownCloud is a great service when it comes to managing and sharing your files in your home network. The first idea to connect our new room mate with the server thus was to simply use the WebDAV option that ownCloud provides. XBMC has built-in support for WebDAV, so why not use that? However, it turned out that playback (both music and video) is quite bad that way – it takes ages for XBMC to start playing songs, and directory listings also need minutes to finish. So, a better solution was required. On the other hand, dropping ownCloud was not preferred either, as it makes sharing quite easy and we have other services running on it (such as contact and event management and – in the future – also storing and sharing of todo lists).

Good news is: It seems there is nothing the ownCloud developers have not thought about already 🙂 Since ownCloud 4.0, you can mount external locations into ownCloud’s virtual file system. So, in our case we decided to use the following setup: For each user, a dedicated directory is created on the server where he can upload files via Samba. Each user can also decide to mount that directory in his ownCloud account. That way, the sharing features can be used for files uploaded via Samba, too. Last but not least, there is an additional “streaming” user being created that also has access to the shares. The “streaming Pi” uses this account to access the media files uploaded to the server: XBMC, too, has built in support for the Samba protocol. And using this approach, streaming really works fine 🙂

Step 2: Server goes Pi

So, the client side of the streaming project works fine. What remains is to review what happens on the server side. Actually, everything works. But (yeah, I have to admit I’m not a fan of “never touch a running system”) there is room for improvement. First of all: the server hardware used so far is a bit noisy. The good thing is that it is located outside of normal living areas, so that isn’t too bad. But still… Second: power consumption. While the Atom used in the server has quite some power (two cores with hyper-threading 😉 ), it also has a higher power consumption. That would be okay if we had used the capabilities of the processor, which was not the case. Indeed, most of the tasks our home server had to do were more or less I/O centric. So, the next step was to replace the previous installation with – yeah – another Raspberry Pi. This time, of course, the software selection is a little bit different: instead of OpenELEC, I decided for Raspbian – a Debian based distribution for the Raspberry Pi. Actually, I first tried Pidora; however, I ran into some problems there (and as I currently had no time to look further into fixing them, I decided for Raspbian, as that promised to run just fine, given it seems to be the most used OS on the Raspberry Pi). So, two evenings of installation and configuration later and we’re done: a second Raspberry Pi is now doing its job as a home server, running ownCloud, netatalk and the usual other stuff you need 🙂

What’s next

As the first paragraph already warned you: That “project” until now is not yet what you’d call exciting. Indeed, only bringing together some pieces of hardware and software and trying what works good together. However, that’s hopefully not the end 😉 First of all, the Raspberry Pi provides some interesting GPIO pins. So, why not make use of them 😉 In particular, I have two things in mind: Some kind of ambient light when streaming movies via the Pi would be one of them. That seems to be somewhat easy as I’m obviously not the only one who finds this interesting. A second thing (and here I’ve not yet found anything in place) would be to make something similar for our music collection. Let me only say Moodbar for now.

Apart from that, given that I’m currently experimenting a bit with QML, doing some home grown media center solution on my own also sounds kinda fascinating 😉 However – as so often – that probably would require much more time than I currently can effort. But let’s see 😉 Maybe everything “basic” is already in place.

Why Reading Helps or When Android tells you it cannot find a library, it really can’t

Recently, some pre-compiled beta packages of the upcoming Qt 5.2 have been released. As that release again adds more support for Android (one of the targets I most want to see OpenTodoList properly running on), I didn’t hesitate long, downloaded and installed it and started porting OpenTodoList to 5.2.

First of all: I indeed had to do some “porting” (which, however, did not come too unexpectedly). If I understood the recent news correctly, Qt has dropped V8 in favor of its own JavaScript engine which deals much better with QML. So far so good. However, there seem to be some minor “differences” between these engines which indeed caused OpenTodoList to crash. Basically, what I had in my code before was something like

// Utils.js
var PriorityColors = [ "red", "yellow", "green" ];
PriorityColors[ -1 ] = "black";

// Somewhere in QML:
import "Utils.js" as Utils
Rectangle {
  property QtObject todo: null
  color: todo ? Utils.PriorityColors[todo.priority] : "black"
}
It seems that there were some problems with that approach, so I just exchanged the global array with a function and voila. Interestingly, when trying later on to reproduce the problem, I was not able to do so. So it seems that something else in my code (which I must also have changed meanwhile) has actually caused the crashes. Anyway, that problem was rather quickly solved.

Another one did not resolve so easily: When trying to deploy and run OpenTodoList to Android, I encountered the next problem. And this one indeed took me several days to fix (well.. better let’s call this “find”).

As I already said, support for Android has been again improved a lot in Qt 5.2. One of the (in my opinion) most useful changes is that building and deploying to Android does no longer require an additional “android” folder being created in your source directory. That directory contained a variety of files. Most notable:

  • The built binaries that later get deployed to your Android (virtual) device
  • The AndroidManifest.xml file which contains some important information about your application

Starting with Qt 5.2 (well, or better: Qt Creator 3.0 – as this is the one where the build process is actually implemented) the android directory is created in the shadow build directory. All files are basically copied from the Qt directory. If you want to provide some overrides, you can also keep a stripped down “android” directory somewhere in your source tree (which e.g. could contain only the AndroidManifest.xml and nothing else) and instruct the build process to take your files instead of the default ones by adding the following line to your project file:

ANDROID_PACKAGE_SOURCE_DIR = $$PWD/android
So I created such a directory and then proceeded with generating a new AndroidManifest.xml. Qt Creator nowadays comes with some support for doing so: from Projects -> Build&Run -> [Your Target Configuration] -> Run -> Deploy Configurations -> Details one can simply click the Create AndroidManifest.xml button, select a file name and that’s it. Next, you can open that file from the project explorer and Qt Creator will show you a neat form where you can enter the most important stuff (access to the XML source is provided via an additional tab in the editor). I entered all the important bits there, like the package name to use, minimum and maximum Android SDK versions and so on. In the “Application” section I specified the application name and the icons to use, and for the “Run” option I entered the name of the library the application gets compiled to in the end.

After these preparations I built, deployed and… well, I got this nice little “Unfortunately Open Todo List has crashed” dialog shortly after the application tried to start up. That was a bit unexpected, but okay. I dug into the debug output provided and found some line telling me about an UnsatisfiedLinkError. In the description of that (Java) exception, I furthermore got that “findLibrary returned null” when the Java based “starter” that loads the Qt application on Android tried to load the “executable library” (in my case, the OpenTodoList library). Uff… why that? My first thought was that some dependencies were not fulfilled. Further checking the logs, I found that the libraries that get loaded by the start up procedure are logged as well. I skimmed through the list of loaded libraries and actually came to the conclusion that everything was there: all the required Qt libraries as well as the OpenTodoListCore library (which in my case provides the basic class infrastructure).

So what else could have gone wrong? Asking Google did not reveal anything that immediately led me to a solution. In fact, I spent several evenings in randomly changing some stuff in my QML code as well as on the C++ side in the hope that accidentally I might stumble upon something that might help me understand what’s going wrong.

As this, too, did not help, I went on to debugging the problem on a virtual device. Until then, I had always used my SGS3 for that, as QML GUIs did not work in the emulator with Qt 5.1 (by the way: this seems to be solved now, too, so running and testing your QML apps in an AVD is possible with the new version). Unfortunately, that also did not immediately reveal any cure for my issue. I checked the log outputs of the runs on the AVD, which seem to be a bit more verbose than what I got from my physical device. This morning, it finally struck me:


Together with the exception name and the (at first, for me, not very meaningful) “findLibrary returned null”, the AVD also prints the search paths where it actually looks for native libraries. Looking at these paths, I finally realized what really went wrong: my app was looking for its libraries in the system library location and (oops…) in the root directory of OpenTodoList. And this is indeed unlikely to work, as libraries are stored in the lib/ sub-directory. So I finally had a hint where to search. And then I was back at the very beginning of my “porting” work: the AndroidManifest.xml. Going back to the editor of that file, I noticed that the Run option is actually implemented as a pull-down menu. And that one did provide me the option “OpenTodoList” (which is the target name in the *.pro file) instead of the name of the generated library. Very well… one compile & deploy later, OpenTodoList was starting up fine on Android again 🙂 Hurray!
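For reference, the library the Java starter loads is configured via the android.app.lib_name meta-data entry in AndroidManifest.xml, which is exactly what that pull-down menu writes. A minimal sketch of the relevant part; the package name, version values and activity label here are placeholders, not taken from the actual project:

```xml
<!-- Sketch only: package name and version attributes are made up. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="org.example.opentodolist"
          android:versionName="1.0" android:versionCode="1">
    <application android:label="OpenTodoList">
        <activity android:name="org.qtproject.qt5.android.bindings.QtActivity">
            <!-- Must match the TARGET name from the *.pro file,
                 so that findLibrary can locate lib<name>.so in lib/ -->
            <meta-data android:name="android.app.lib_name"
                       android:value="OpenTodoList"/>
        </activity>
    </application>
</manifest>
```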

So, the bottom line for me: learn to read (and understand). “findLibrary returned null” does not mean that dependencies are not fulfilled. It really just means that the library you are trying to load cannot be found anywhere (and thus, a null value is returned).

Fedora on Lenovo IdeaPad Y560

So after sticking with my Asus laptop for quite a long time, I recently decided to get myself a new, shiny toy. As you can see from the title, my choice is Lenovo this time, with an IdeaPad Y560.

As I have been a Linux user for several years now, one of the first things to do after the purchase was to install my favorite operating system on it. In the following, I want to collect some experiences and maybe hacks required to successfully use Linux on that device – as information for myself and maybe others who also want to install something different than Windows on that laptop 😉

The Situation

The IdeaPad Y560 has an Intel i7 quad core CPU and comes with 6 GB RAM preinstalled. According to the spec, it can be upgraded to up to 8 GB.

My operating system choice is Fedora (currently version 14), 64 bit with KDE as default desktop.


At least my laptop had the following initial partitions:

  • Windows Boot Partition (200 MB)
  • Windows System Partition (around 580 GB)
  • Some “driver” partition (around 30 GB); this contained only some Windows drivers and programs
  • OEM Partition

The driver partition is set up as a logical drive inside an extended partition, so when using Windows, you actually might see 5 partitions reported.

Although I usually don’t use Windows anymore, I decided to keep it installed in case some of the built-in devices aren’t going to work with Linux. So, what I did was:

  • Making a backup of the driver partition; I assume one can get these drivers from the Lenovo website as well, but I just wanted to keep the files in case something goes horribly wrong 😉
  • Next, I deleted the driver partition and the extended partition. Note that it is currently a rather bad idea to delete the (hidden) OEM partition, as it is required to restore the laptop to factory settings (unless your model is delivered with a backup DVD, but mine was not)
  • In case you don’t trust the Linux installer, you might optionally want to shrink the Windows partition from inside Windows; however, note that in this case you can only shrink the partition down to the first non-movable sectors. What might be a good idea, however, is to defragment the partition before proceeding with the Linux installation, at least if the system has already been used for some time

Now the actual installation can begin. Insert the install CD/DVD/USB stick and reboot. Make sure booting from the appropriate device is enabled and the device’s boot priority is higher than the priority of the hard disk.

At least for me, the following boot procedure was straightforward: you just need to follow the instructions. I decided to shrink the Windows partition (to 100 GB, so Linux has a total of 480 GB in my setup). The installer will do the rest for you (usually, it will suggest creating an extended partition, where it will create a boot partition and an LVM volume with root and swap partitions). I advise using three LVM volumes – root, swap and home. For root, I used 20 GB (which is sufficient in most cases, but in case you are unsure you can set it to 50 GB, which is enough in any case).
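The three-volume layout can also be written down as an Anaconda kickstart fragment, in case you want the same setup reproducibly. This is only a sketch: the volume group name, the /boot size and the swap/home sizes are placeholders I chose for illustration, not values the installer will necessarily pick:

```
# Kickstart sketch: /boot outside LVM, three logical volumes inside.
part /boot --fstype=ext4 --size=500
part pv.01 --size=1 --grow
volgroup vg_main pv.01
logvol /     --vgname=vg_main --name=lv_root --fstype=ext4 --size=20480
logvol swap  --vgname=vg_main --name=lv_swap --size=6144
logvol /home --vgname=vg_main --name=lv_home --fstype=ext4 --size=1 --grow
```

Keeping /home on its own volume makes later reinstalls painless, since the root volume can be wiped without touching your data.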

After the live image has been copied to the hard disk, just reboot into the new system and complete the installation.

First Impression

After I had quite some trouble with both my tower PC and my Asus laptop at first, I was really impressed: Linux works really well on this laptop, and most things seem to work out of the box. For example, the volume control keys are indeed usable (I especially like the mute button 8) ). WLAN does not need any additional work this time (which still wasn’t the case with my Asus laptop, where I needed to install additional kernel modules manually), and graphics also do fine with the open source radeon driver.

What might need some work


The radeon driver, which is used by default on Fedora, works quite well. Sometimes there are some rendering glitches, but these are negligible. However, if you want to run heavy-weight 3D applications (games, modelers, etc.) or just like to use your desktop’s shiny 3D effects, you might want to use the proprietary display driver.
Update: the pre-installed radeon driver now works well (currently using F15) from a graphical point of view (no glitches, desktop effects working flawlessly, and multiple monitors are no problem either). However, sound via HDMI currently does not seem to be possible, so if you require it, consider installing the proprietary driver anyway.
I recommend using the drivers from RPM Fusion. After enabling their repositories, issue:

su -c "yum install akmod-catalyst xorg-x11-drv-catalyst xorg-x11-drv-catalyst-libs.i686 xorg-x11-drv-catalyst-libs.x86_64"

If you installed the 32 bit version of Fedora, you of course don’t need to install the xorg-x11-drv-catalyst-libs package for both 64 and 32 bit.

Also note that you might consider installing the “kmod” package instead of the “akmod”. The akmods are good for the case when a new kernel is installed and the appropriate kmod is not yet available (which might result in a blank screen as soon as you boot the next time using the new kernel); in that case, the kernel module will be built when starting the system. However, this increases boot time (so if you want to keep boot times absolutely minimal, better go for the simple kmod variant).

After installation, you should rebuild your initramfs (and make a backup of the old, in case you need to revert):

su -
mv /boot/initramfs-`uname -r`.img /boot/initramfs-`uname -r`.img-backup
dracut -v /boot/initramfs-`uname -r`.img `uname -r`
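The backticks in these commands are shell command substitution: `uname -r` expands to the running kernel’s release string, so the backup name and the rebuilt initramfs automatically match the booted kernel. A quick illustration of the expansion (the echoed path is just the file the dracut command above regenerates, no files are touched here):

```shell
# `uname -r` prints the running kernel release, e.g. something like
# "2.6.40-4.fc15.x86_64" (the exact string depends on your system).
kver=$(uname -r)

# This is the initramfs file that dracut rebuilds for the current kernel:
echo "/boot/initramfs-${kver}.img"
```

If the new initramfs misbehaves, restoring the `.img-backup` file over the generated one reverts the change.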

You should also disable KMS: edit /etc/grub.conf and add radeon.modeset=0 to the kernel line. A complete section should then look like this:

title Fedora (
        root (hd0,4)
        kernel /vmlinuz- [...] radeon.modeset=0
        initrd /initramfs-

Last but not least: if you have manually created or changed the file /etc/X11/xorg.conf, create a backup of it as well. If this file does not exist in your installation, you don’t need to do anything here (the file is not required anymore).

su -
cd /etc/X11/
cp xorg.conf xorg.conf.bak

For further information and some hacks, there is a blog post which I found to be quite useful.

Setting up dual-head

I usually use an additional monitor attached to my laptop. While configuration via the open source driver worked flawlessly (just set up what you want in System Settings -> Display and Monitor), the proprietary driver had some problems. More specifically:
I want one X screen to span both monitors. Usually, I have the laptop monitor configured as the primary one and the external monitor to the right of the laptop. One is able to set this configuration up via the Catalyst Control Center (start it via su -c amdcccle); however, the changes were not permanent, i.e.:

  • the control center instructed me to restart X (which I did)
  • after that restart, all was setup as instructed
  • however, after a system reboot, the external monitor was always set to be a clone of the first

After a bit playing around with the X configuration, I found this setup to be what I needed:

Section "ServerLayout"
	Identifier     "Default Layout"
	Screen         0  "Screen0" 0 0
EndSection

Section "Files"
	ModulePath   "/usr/lib64/xorg/modules/extensions/catalyst"
	ModulePath   "/usr/lib64/xorg/modules"
EndSection

Section "ServerFlags"
	Option	    "AIGLX" "on"
EndSection

Section "Monitor"
	Identifier   "0-LVDS"
	Option	    "VendorName" "ATI Proprietary Driver"
	Option	    "ModelName" "Generic Autodetecting Monitor"
	Option	    "DPMS" "true"
	Option	    "PreferredMode" "1366x768"
	Option	    "TargetRefresh" "60"
	Option	    "Position" "0 0"
	Option	    "Rotate" "normal"
	Option	    "Disable" "false"
EndSection

Section "Monitor"
	Identifier   "0-DFP1"
	Option	    "VendorName" "ATI Proprietary Driver"
	Option	    "ModelName" "Generic Autodetecting Monitor"
	Option	    "DPMS" "true"
	Option	    "PreferredMode" "1920x1080"
	Option	    "TargetRefresh" "60"
	Option	    "Position" "1366 0"
	Option	    "Rotate" "normal"
	Option	    "Disable" "false"
EndSection

Section "Device"
	Identifier  "Videocard0"
	Driver      "fglrx"
	Option	    "OpenGLOverlay" "off"
	Option	    "Monitor-LVDS" "0-LVDS"
	Option	    "Monitor-DFP1" "0-DFP1"
	BusID       "PCI:1:0:0"
EndSection

Section "Screen"
	Identifier "Screen0"
	Device     "Videocard0"
	DefaultDepth     24
	SubSection "Display"
		Viewport   0 0
		Virtual    3286 1920
		Depth      24
	EndSubSection
EndSection

Section "Extensions"
	Option	    "Composite" "Enable"
EndSection


In case you don’t hear any sound in KDE, don’t panic. At least for me, KDE picked the HDMI output as the default. So just go to the KDE System Settings (hit Alt+F2, enter “systemsettings” and hit Enter to start it). There, navigate to Multimedia and select Phonon in the left sidebar. Now you can set up default output devices for the defined categories. Make sure the entry “Internal Audio Analog Stereo” is at the very top of the list of output devices. You can use Apply Device List To… to apply the list to all categories. Then, hit Apply to save the changes.

Ambient Light Sensor

The laptop comes with an integrated Ambient Light Sensor (ALS), which is used to automatically adjust the backlight brightness depending on the ambient light level. Up to Fedora 14, this sensor obviously was not detected (and therefore automatic adjustment was off). However, starting with Fedora 15, the sensor is detected and used. In case you don’t want/need the ALS: I had to boot into Windows and disable the automatic brightness changes there (Lenovo preinstalls a tool for this; just tap the special button with the battery icon and it should show up, unless you wiped everything away). There, you can disable the ALS (use the button with the gear icon, then there should be an option to disable the ALS). This seems to deactivate the sensor hardware-wise (after rebooting, the screen brightness remains constant in Fedora, too).
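While I haven’t found a way to toggle the ALS itself from Linux, you can at least watch what the kernel does with the backlight via sysfs. A small sketch, assuming the standard /sys/class/backlight layout (the device names and whether anything shows up at all depend on your driver):

```shell
# List any backlight devices the kernel exposes, with their current and
# maximum brightness values (assumption: standard sysfs backlight interface).
found=0
for dev in /sys/class/backlight/*/; do
    [ -e "$dev/brightness" ] || continue
    found=1
    echo "$dev: $(cat "$dev/brightness") / $(cat "$dev/max_brightness")"
done
[ "$found" -eq 1 ] || echo "no backlight devices exposed"
```

Watching the brightness value while covering the sensor is a quick way to check whether the ALS is still active.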

By the way: In case somebody knows either how to enable/disable the ALS from Linux or the name/manufacturer of the ALS in the laptop… would be nice if you could drop me a message 😉