Archive for the ‘Computer’ Category

Open Letter to HTC; the HTC Hero and Android 1.6 / Android 2.0 firmware upgrades

Dear HTC people,
I recently bought my first new mobile phone (until now I had always used second-hand ones): the HTC Hero.

I made my purchase with the intention of buying the best Android phone out there. And so far I am quite satisfied with it.

Unfortunately, the phone’s software hasn’t been updated for three months. Two major upgrades of the Android operating system have been released by the Android community and Google Inc. in that time, and HTC has skipped the first one and didn’t even bother to acknowledge the second.

I’d be happy to buy another HTC phone in the near future, but for that to happen I have to be convinced of HTC’s product commitment. That includes regular software updates, also for older hardware. The responsibility for a product does not end at its sale.

I would eagerly buy another Android-based HTC phone. But until I see more committed product support from HTC, I cannot consider HTC phones an option.

Raphael Bosshard

Open source is like cancer? Indeed, it is!

It seems that Microsoft is infested with a dangerous cancer: the cancer of open source. Just a few months ago they released a part of the source code of their Hyper-V virtualiser. This came after allegations of GPL violations; apparently Microsoft used GPL code and was thus forced to free the source code to comply with the terms of that well-known free software license. Now the open source bug has bitten Microsoft again: a Windows tool, developed by a subcontractor, was using code licensed under the GPL. Looks like someone thought he could take the easy way and just ‘borrow’ open-sourced code.

Now, I doubt that Microsoft (or anyone within Microsoft) deliberately used GPL code. Actually, I’m quite sure that they have strict rules against that kind of thing. However, as Steve Ballmer put it, open source is like cancer: it can appear anywhere, at any time, and there is no cure. Microsoft is heavily dependent on third-party developers. And I would guess that not all of Microsoft’s suppliers have the same strict rules as Microsoft, so this kind of thing will probably happen more and more. Microsoft cannot audit all the code they receive from contractors, can they?

Now what are they going to do? Require their contractors to open source everything? Probably not, but we’ll see.

gnome-shell: first impressions

I’ve been trying out gnome-shell for the last week. My impressions so far:

* As was to be expected, it feels very rough
* It needs more keyboard bindings (e.g. in the Activities tab)
* Why is the clock the only interface element in the middle of the top-bar? Put that space to use, for Darwin’s sake! What about putting the applications menu bar up there? 😀
* Unusable without proprietary drivers, at least on NVidia hardware. I didn’t try it on AMD or Intel hardware

Now, that all sounds terribly negative. But in the end I’m quite optimistic. By being willing to break with the traditional desktop paradigm, the gnome-shell people are able to experiment with a plethora of new ideas. Finally, workspaces are being put to use as a central metaphor, not only as an afterthought. I also think that by using workspaces, MDI applications could finally become useful. I’m also looking forward to the integration of application notification and communication (through telepathy). The design document was very promising; I wonder how it will all work out.

Five years of Ubuntu

Whoa, what a time. It seems like yesterday that the first release of Ubuntu (4.10, “Warty Warthog”) escaped into the wild. There was quite some buzz about that first release of the ‘Desktop Debian’ and I remember being very sceptical. Similar projects – Corel Linux OS, Lindows/Linspire – never lived up to expectations and were quickly forgotten. But Ubuntu promised to be different. Ubuntu promised to be about Community and Participation, about End-Users and The Desktop. No closed-door development like with Redhat Linux or SuSE Linux; no, a real community distribution. Five years later, Ubuntu has lived up to that promise.

Yeah, right. It’s still not perfect. Yeah, Bug#1 is still open. And yeah, there are still a lot of loose ends to be tied up and some warts to be removed. Ubuntu is getting there. We are getting there.

As I wrote last year, 2008 was the Year of the Linux Desktop. And this was, to a big part, thanks to Ubuntu (and therefore Debian).

What an amazing five years. And what amazing times to come.

Here is to another five years!

Application development with Vala – First Steps: Getting the Vala feeling

Thanks to undeconstructed from #vala.

In the last few weeks I’ve been playing around with Vala, and enjoying it. “What the heck is Vala”, you might ask, and of course you will get an explanation; not least because Vala, though still in its infancy, hasn’t gotten the attention it deserves.

Vala is, in some sense, a new programming language. Its syntax and structure lean heavily towards C#. As Jürg Billeter, the mastermind behind Vala, likes to put it, Vala is an amalgam of different C-inspired languages, mostly C++ and C#. And though there is no mercury in this mixture, you’ll find many a gold nugget in Vala.

“Why another programming language? Aren’t there enough out there yet?” might be your next question. And of course this also deserves an explanation. Yes, there are an awful lot of programming languages out there. But what makes Vala special is its unique combination of high-level programming and low-level interface. You see: even though Vala supports modern programming language features like objects, foreach loops, generics, annotations, memory management and exception handling, Vala is very slim. It doesn’t require a virtual machine. Indeed, Vala binaries are binary-compatible with C. You can use C libraries from within Vala; you can even use Vala libraries from C. That’s because Vala, at its core, still builds upon the foundations of C. But more on that later. First, let’s take a look at a piece of Vala code.

int main (string[] argv) {
        stdout.printf ("Hello, World!\n");
        return (0);
}
Well, that does look familiar, doesn’t it? All control structures (for-loop, while-loop, switch/case, if/else, …) behave like their C counterparts. Vala is very much like C, although there are some peculiarities. Like that string-datatype, and that stdout-whatsit.

Having stored this piece of code in a file called hello0.vala, it can be compiled:

valac hello0.vala

No rocket science thus far. The resulting binary can be executed and will produce the following output:

Hello, World!
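Incidentally, the other C-style control structures mentioned above work just as you would expect. Here is a small, purely illustrative snippet (file name and messages are my own invention) that compiles the same way:

```vala
// Hypothetical example of C-style control structures in Vala.
int main (string[] argv) {
	// A classic for loop
	for (int i = 0; i < 3; i++) {
		stdout.printf ("i = %i\n", i);
	}

	// A while loop
	int n = 3;
	while (n > 0) {
		n--;
	}

	// if/else, just like in C
	if (argv.length > 1) {
		stdout.printf ("Got arguments\n");
	} else {
		stdout.printf ("No arguments\n");
	}
	return (0);
}
```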

Now that we have made ourselves a little bit comfortable with the Vala tool chain, let’s take a look at some of the more interesting features Vala has to offer. That string data type, for example.
In our main function, we’ve defined a single input parameter: string[] argv, an array of strings. But unlike simple C arrays, this is not just a block of contiguous memory. It’s an object in itself and thus has properties we can inspect. Its length, for example.

int main (string[] argv) {
	stdout.printf ("Argument vector length: %i\n", argv.length);
	return (0);
}

There is more. Using the foreach-loop we can even iterate over this array.

int main (string[] argv) {
	foreach (string arg in argv) {
		stdout.printf ("Argument: %s\n", arg);
	}
	return (0);
}

Neat, isn’t it? But Vala has even more to offer. Like Classes. Everyone likes classes, so let’s make some! Animals are always good for examples, so I’ll use them as well.

public class Animal {
	public string name;
	public bool can_fly;
	public bool can_swim;
	public bool can_walk;
	public string noise;

	public string make_noise () {
		return (this.noise);
	}
}

int main (string[] argv) {
	Animal dog0 = new Animal ();
	dog0.can_fly = false;
	dog0.can_walk = true;
	dog0.can_swim = true;
	dog0.noise = "Whuff";

	Animal dog1 = new Animal ();
	dog1.can_fly = false;
	dog1.can_walk = true;
	dog1.can_swim = true;
	dog1.noise = "Whuff";

	stdout.printf ("dog0: %s\n", dog0.make_noise ());
	stdout.printf ("dog1: %s\n", dog1.make_noise ());
	return (0);
}

This little example already shows some features of Vala. Obviously there are classes, with members and methods. But this example isn’t very sophisticated. Let’s try something more interesting: let’s replace those booleans with a set of bit flags, and bring in some properties, generics and inheritance.

public enum Locomotion {
	NONE =  0,
	WALK =  1,
	SWIM =  2,
	FLY  =  4,
	EVERYTHING = Locomotion.WALK | Locomotion.SWIM | Locomotion.FLY
}

public abstract class Animal : Object {
	public string noise		{ public get; construct; }
	public Locomotion locomotion	{ public get; construct; }

	public string make_noise () {
		return this.noise;
	}

	construct {
		this.locomotion = Locomotion.NONE;
		this.noise = "";
	}

	public abstract string introduce ();
}

public class Dog : Animal {
	public string name { private get; construct; }

	public Dog (string name) { = name;
	}

	construct {
		this.locomotion = Locomotion.WALK | Locomotion.SWIM;
		this.noise = "Whuff";
	}

	public override string introduce () {
		return ("My name is " +;
	}
}

public class Finch : Animal {
	construct {
		this.locomotion = Locomotion.WALK | Locomotion.FLY;
		this.noise = "Tchirp";
	}

	public override string introduce () {
		return ("Buggeroff");
	}
}

public class Fish : Animal {
	construct {
		this.locomotion = Locomotion.SWIM;
		this.noise = "Blubb";
	}

	public override string introduce () {
		return (this.noise);
	}
}

public class Bass : Fish {
	public string name { public get; construct; }

	public Bass (string name) { = name;
	}

	public override string introduce () {
		return ("I am " +;
	}
}

int main (string[] argv) {
	List<Animal> a_list = new List<Animal> ();
	Dog wulfie = new Dog ("Wulfie");
	Finch bert = new Finch ();
	Bass bob = new Bass ("Bob");
	a_list.append (wulfie);
	a_list.append (bert);
	a_list.append (bob);

	foreach (Animal a in a_list) {
		string loc = "i can";
		if ((a.locomotion & Locomotion.FLY) == Locomotion.FLY)
			loc = loc + " fly";
		else
			loc = loc + "'t fly";
		stdout.printf ("(%s)\t%s: %s, %s \n", a.get_type ().name (), a.make_noise (), a.introduce (), loc);
	}
	return (0);
}

As you can see, Vala supports all the features you’d expect from a modern programming language: classes, abstract classes, inheritance, generics, method overriding and so on. And still there are many features I didn’t touch in this essay, like interfaces, signals, exceptions, namespaces, access level modifiers, lambda functions and packages. And most importantly: Vala provides memory management, freeing developers of most of the hassles usually associated with low-level application programming.

So with all those fancy features, how can Vala still be compatible with C? Because valac, the Vala compiler, uses the GLib type system as its underlying framework: Vala code is translated to plain C code with hooks into GLib and (if one uses GLib.Object as superclass) GObject. This generated code is in turn compiled by GCC. Vala applications are thus native binaries, requiring neither an interpreter nor a virtual machine.

Vala provides (very) complete bindings to many important Unix C libraries, such as stdlib, D-Bus, Cairo, SDL and Poppler. And of course, with its roots deep in GLib, Vala also supports the Gtk+ framework and all related libraries. The excellent libgee, itself developed in Vala, provides array lists, hash maps, sets and other collections. Use of the pkg-config system makes pulling in libraries very easy. And as soon as GObject introspection is in place, bindings for all kinds of scripting languages come for free.
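As a sketch of what that looks like in practice, here is a small hypothetical program using libgee’s ArrayList. Assuming libgee registers itself with pkg-config as gee-1.0 (the package name may differ on your system), a single --pkg flag is all it takes:

```vala
// Hypothetical example; assumes libgee is installed and registered
// with pkg-config as "gee-1.0".
// Compile with: valac --pkg gee-1.0 list0.vala
using Gee;

int main (string[] argv) {
	var names = new ArrayList<string> ();
	names.add ("Wulfie");
	names.add ("Bert");
	names.add ("Bob");

	// Gee collections support foreach just like plain arrays
	foreach (string name in names) {
		stdout.printf ("%s\n", name);
	}
	return (0);
}
```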

Vala, now at version 0.3.4, is still under heavy development. My first experiments have already uncovered some bugs, which got fixed immediately. Vala might still need some time to come of age, but already it is picking up momentum, as the first projects start to use it. Especially developers with an aversion towards C++ might find Vala interesting, for it links the solid GLib type and event system with an elegant syntax, providing all the necessary tools for rapid application development on a stable foundation.


Vala (Project site)

Why 2008 Will Be The Year Of The GNU/Linux Desktop – seriously. I mean it. Really.

Don’t take this article too seriously. I had a lot of fun writing it. I do believe that GNU/Linux is coming to the desktop in 2008, however.

When I first started to work with GNU/Linux, now almost eleven years ago, I never would have guessed that it would one day become such a large part of my life. I liked the idea: communally developed software. Collaboration on a whole new level. Software developed not by a company but by enthusiasts. It had something rebellious and subversive. I just fell in love with it.

Back then, GNU/Linux (and the BSDs too) was merely a toy for me. Something fresh, something new. Something completely different. But after I had set up my first servers and had started using October Gnome as my desktop environment, I started digging deeper. Into the internals of the system. Into the Source.

I always considered myself a fairly good coder. Not excelling, but competent. And because of that I recognised the quality of that code. I realised that those little pieces of code were the works of mad genius artists. Pure, elegant C, honed to perfection.

I was convinced that it would – one time – rule the world.

Yet I always fooled myself about GNU/Linux’ problems. Back then, getting X to run was a task of days, at least on the hardware at my disposal. The desktop environments were clumsy and counter-intuitive, the applications limited and ugly, for they were written with either CDE or Tcl/Tk. Most didn’t even have a graphical user interface and were limited to the console. XMMS was okay for playing MP3 files, but even watching a movie was a daunting task. Those were the dark ages, when the Knights of the Light had only just started their quest for freedom and equality.

Even back then there have always been Prophets of The Dawn, reiterating the old prophecy that this, yes, this very year would come to be known as the Year Of The Linux Desktop. I always used to smile at them and give ’em cookies. They were cute, somehow. Breaking the Mighty Power of the Forces Of Evil would be the task of generations. The desktop market was fixed in the hands of Microsoft, a convicted monopolist. An unpleasant necessity. Breaking that power would be a task for giants.

While I was promoting GNU/Linux in my surroundings, providing installation and setup, first privately, then commercially, the Amazing Power Coders were out there, doing good wherever they could. And one by one, Linux’ weaknesses were addressed, scrutinized, judged and attacked with the fury befitting a wildcat.

Things have changed. GNU/Linux has changed., KDE, GNOME, Linux and all the other pieces of software composing the free software desktop have ripened well. It’s functional software, mostly slim and elegant, sometimes even beautiful.

Today I saw an advertisement for the Asus EEE PC, printed by a large consumer electronics store (Mediamarkt). This is the first time that a GNU/Linux product has been advertised by that company, a company that has no ideology whatsoever, besides making money. In the last few months no week has passed without the announcement of yet another GNU/Linux desktop/consumer product, mostly ultra-low-cost ultraportables, because these are the areas where GNU/Linux shines like a twinkling star.

And there is more. In two weeks there will be an anniversary: the anniversary of “GNU/Linux preinstalled on the desktop machines of a major hardware vendor”. It’s been almost a year since Dell announced it would ship desktop and consumer products with Ubuntu pre-installed. HP followed shortly thereafter. And although neither Dell nor HP is making huge profits by selling GNU/Linux machines, both still provide the option. Even after a year.

With the rise of those consumer machines GNU/Linux got a lot of advertisement. And suddenly people I never talked to before about GNU/Linux start asking questions. They ask for support, for guidance and help. Some even offer to pay. Finally.

And the disbeliever, the skeptic inside, is falling silent.

So, now hear my prophecy, as I am joining the ranks of the Enlightened: the GNU/Linux desktop is coming. 2007 saw the end of the beginning. 2008 will feel the torrent.

GTK+ 3.0: Getting serious.

GTK+ has come a long way. From its humble beginnings as “The GIMP ToolKit”, it is now used in a plethora of applications. In fact, GTK+ is very popular. GNOME, one of the leading desktop environments on Unix systems, uses GTK+ almost exclusively. The GIMP is built upon GTK+, of course. And there are many commercial software developers, like Adobe, NVidia and VMware, that have decided to use it as a base for their products.

Still, there are several shortcomings in GTK+. Development of the 2.x series started back in 2002. Since then, GTK+ has ripened and aged. It has aged well, but still: its age shows. Throughout the 2.x cycle, several years now, the developers have kept GTK+ ABI-compatible. This keeps application developers depending on GTK+ very happy: they can be sure that code linked against an older version of GTK+ continues to work with newer releases. Packages released back in 2002 will continue to work with new library releases. That’s great, because no third-party application developer likes to rebuild and repackage a whole product line just because a new version of the underlying libraries got released and all the distributions start packaging that version. It causes work and trouble. And for commercial/proprietary developers, that means costs.

The commitment not to break ABI made a lot of people very happy. But it also put very tight constraints on the GTK+ developers. It’s not that easy to add new features and remain ABI-compatible. Minor features, yes. But as soon as you want to make radical improvements and need to change the exposed data structures, you run into serious trouble. It’s just not possible beyond a certain point.

At the 2008 GTK+ Hackfest in Berlin, Imendio’s GTK+ hackers presented their vision [1] of GTK+’s future and the reasons why they think that GTK+ has to make a step forward, embrace change and break ABI compatibility. Other GTK+ developers [2] have also voiced their opinions, listing parts of GTK+ that need serious love, but stating that these don’t require breakage.
Whether or not these are the things that will mark the road to GTK+ 3.0, almost all of them need attention. And they give hints to the shape of things to come.

Theming is one major aspect of GTK+ that needs a serious overhaul. Theming in GTK+ sucks and blows big time. The initial concept of how theming works in GTK+ stems from the very first releases and never received serious love. As a result, it is very difficult to do fancy graphical things in GTK+ or to make custom widgets that fit in with the rest of the desktop. The funny look of Evolution’s tree headers in some themes is one symptom, and every developer with the need to write custom widgets is in for a hard time.
There have been several suggestions on how to fix that, some of them involving CSS-style theming [3]. CSS would be nice, for sure. But even the ability to paint one widget to mimic another would be a huge gain. Application-specific theming and custom layouts? Delicious.

Although it is possible to create animations in plain GTK+, it’s not very easy to do. Out of the desire to create fancy interfaces in the image of the iPhone interface arose several GLib/GTK+ inspired libraries: Clutter, Pigment and Moonlight. All of those have drawbacks, however: Clutter doesn’t use the GLib event system, Moonlight is written in C++ (a no-go for a GTK+ library) and Pigment is in a very rough state. Still, there are very solid plans for how a scene-graph library might interact with GTK+ and what requirements such a library would have to fulfill [4].

GTK+ has no standard canvas. There is a GnomeCanvas, but it’s deprecated, not very popular and lacks some key features, like drawing GUI elements. Many developers resort to plain Cairo when it comes to custom graphics, but Cairo also lacks a way to draw GUI elements. Nothing gained. There are some possible candidates [5] for a GTKCanvas, but none of them seems to be the right one. And then there is the question of whether a specialised canvas is a good idea at all.
This problem might be solved by the emergence of the aforementioned scene-graph library: instead of introducing a specialised library for custom paint operations, make that library the standard way.

OS integration:
GTK+ is not limited to X11 systems anymore. There are many GTK+ applications that have been ported to Windows and enjoy a surprising popularity there; Inkscape, for example, has a significant Windows user base. And OS X is getting more important with every passing month. Some of these applications make extensive use of operating system features. Up to now, GTK+ has featured only a limited set of functions providing access to operating system facilities, but the first solutions addressing this problem are starting to appear [6].

One of the GTK+ buzzwords of the last few months has been introspection. Introspection allows one to, well, inspect an object: its methods, public members and its inheritance. This is not only very comfortable for debugging, it also allows for very easy bindings: automated bindings for your favourite programming language? Here they come. It might still be a while until all the parts are in place, but the results are already amazing [7].

It might still be a long time until GTK+ 3.0 gets released. And in any case, GTK+ 3.0 won’t be about adding new features. There are still some mistakes of the past lingering in GTK+. Exposed private structures, public members that get manipulated directly: things like these have to be fixed before a GTK+ 3.2 can start adding features [8]. But with some of these features, especially a scene-graph, window-independent widget placement and over-rideable paint methods for GTK+ widgets, GTK+ is starting to look very interesting again.

[1] Imendio’s GTK+ 3.0 vision
[2] Gtk+ Hackfest 2008: Day one and a half
[3] GUADEC 5 Notes
[4] New Theme API
[5] Canvas Overview
[6] libgtkhotkey 0.1, Portable Hotkey Integration
[7] Future of GNOME language bindings

[8] GTK+ 3.0: Enabling incrementalism

VMware Server, Ubuntu “Hardy Heron” 8.04 and Linux 2.6.24

Yesterday I updated my work laptop to Ubuntu 8.04. Of course it’s silly to abuse a production machine as a testing ground for an unstable release, but I did it nonetheless.

My impression so far is very good! Even “Hibernate” and “Suspend” seem to work now; very nice. The only problem I had was with VMware Server, the only piece of proprietary software I use on my machine (for business reasons). However, the problems were quick to fix.

  1. Patch the VMware modules.
    VMware does not yet support Linux 2.6.24, so the modules have to be patched. The VMware community forums help with that:
    Don’t forget to edit …/source/vmmon-only/include/vcpuset.h; you need to change line 74 from “asm/bitops.h” to “linux/bitops.h”. (Thanks to luyseyal for that.)
  2. Re-compile the modules. Just use sudo to recompile them.

  3. Copy the libraries
    The VMware Server Console was compiled with an older version of GCC than Hardy was. If you get the following error, you need to copy some libraries:

    /usr/lib/vmware/bin/vmware: /usr/lib/vmware/lib/ no version information available (required by /usr/lib/ /usr/lib/vmware/lib/ version `GCC_4.2.0' not found (required by /usr/lib/

    Use following commands to copy the libraries to the VMware directory:

    cp /lib/ /usr/lib/vmware/lib/
    cp /usr/lib/ /usr/lib/vmware/lib/

That should do the trick!

The end of proprietary software?

Steve Ballmer, the CEO of Microsoft, called it a “cancer”. Others have ascribed “viral” attributes to it, destroying the software business as it was formerly known. And while Free Software advocates are very quick to dismiss these accusations as FUD (Fear, Uncertainty, Doubt), there is a grain of truth in them.

If we take a look at the Free/Open Source Software landscape, its advantages become quite evident: wherever there is a need for a certain tool, users gather, start to plan and develop a piece of software to satisfy that need. This works surprisingly well, as any Linux user can tell you; the Linux desktop has been a viable alternative to proprietary operating systems for quite a while now, and I dare say that its rate of progress equals (if not exceeds) that of any competitor. Indeed, this is not limited to the desktop but is true for almost every corner of the software landscape. However, there are two very distinct exceptions to that rule: games and specialist software.

The community-based development approach of Free/OpenSource software has some very impressive advantages. First of all, the burden of development costs is (ideally) not shouldered by one entity but by a legion of contributors, be they enthusiast developers or hired programmers. Second, while the cost is divided, the benefit of the software is not. And third, abandoned software projects can be picked up again; for proprietary software, this is a no-go if you don’t own the copyrights.

However, since the development process is so utterly dependent on the community, Free/OpenSource software will only prevail where there are enough enthusiasts with the right skills. And this, unfortunately, includes neither specialist software nor games.

It is my expectation that Free/OpenSource software will increasingly dominate the software landscape. Utility software, office software, multimedia and productivity suites: these are categories that attract a huge crowd of people, some of them apt enough to contribute, in code, in documentation, in testing. I doubt that a proprietary piece of software stands a chance against a Free/OpenSource product, at least in the long run; say, 10 to 15 years.

Games and specialist software, however, are something different. Their target audience is limited, the fields require a huge amount of expertise and thus the number of experts is limited. Creating a community in this kind of environment is a very difficult task. I expect proprietary software to flourish in these parts of the software landscape for quite some time to come.

Digital Restriction Management bites back – or – when companies advise users to break their own copy protection systems

So, it’s official; Sony Connect is no more.
For years now Sony Connect has tried to mirror the success of Apple’s iTunes store. But Sony never got a foothold in that business. Now they have pulled the plug and are burying Sony Connect. And along with it the ATRAC format.
ATRAC was always Sony’s music compression format of choice. They used it on the MiniDisc and tried to make it popular on the Internet, tried to make it the standard format for the music business. But it never caught on.
And now ATRAC is getting the boot. Sony’s future music players won’t support ATRAC. And that puts Sony’s customers in a very awkward position: what to do with the DRMed files they bought from the Sony Connect store? None of the new digital music players will play them, so those files are practically useless. Sony’s advice? “Break it!”
On the Sony Connect site, Connect customers are advised to burn their music files to CD and rip them again as MP3.
Now, that’s something, isn’t it? Sony’s customers are advised to circumvent copy restriction schemes. Forget the data degradation that occurs when you transcode from one lossy format to another. Forget the trouble the customers have to go through. Let’s focus on another thing.
Imagine if the big entertainment companies got what they have wanted for years: complete control over the data stream, the removal of all unrestricted high-definition data outlets. What would the Connect customers do in that case? Buy the files again? Resort to more drastic copy restriction circumvention methods? I don’t think the media companies would lift a finger to help their customers. After all, they already paid.