Computers & Software

iPod Touch

I’m posting this on my new iPod Touch. What a terrific little gadget! I really love it.


Computers & Software

The Great Migration (from Qt3 to 4) Part I

Very belatedly, I have decided to migrate my project EP Simulator from Trolltech’s GUI framework Qt version 3 to version 4. I should have started with Qt4 in the first place, but all my books were Qt3 books (actually I own just one Qt book), and I had the mistaken notion that Qt4 wouldn’t work on my computer until KDE 4 was released. For the non-Linuxy folks reading, KDE is a desktop manager for Linux, and the current version 3 is based on Qt3. The new, yet-to-be-released version is based on Qt4. I initially didn’t realize that both Qts could cohabit on the same computer, which is silly. Anyway, once started with Qt3, I hesitated to switch because of the large number of incompatibilities between the two versions. Oh, back to you non-Linux users: I had better explain that Qt is a GUI framework that you can use to add elements such as menus, toolbars, and dialog boxes to your programs. It is extremely elegant, in my opinion, but there is the little matter that it has a dual license. If you use it to develop proprietary programs not released under an open source license, you have to pay literally thousands of dollars for the privilege. Using it to develop open-source programs, however, is free, as in beer.

Why upgrade now? Qt3 will not be supported forever, and Qt4 is better than Qt3 in many ways beyond the scope of this short blog entry. Qt4 allows true cross-platform development, since there is a Windows version of Qt4 for open-source development (there really wasn’t one for Qt3). The downside of upgrading is that Qt3 is not very compatible with Qt4. Classes, constructors, functions, etc. have changed. Forms designed with Qt Designer have changed, and the way these forms are subclassed is different. Trolltech, the makers of Qt, provide instructions on their website to help with the transition, but it is not easy. They provide a tool, qt3to4, to convert source files, and a compatibility library to support old Qt3 classes. But I am finding out that a lot of the work has to be done by hand.

To start out, I copied my local unmirrored repository using svk (svk is great for this kind of thing) and created a qt4 branch. Even prior to that, I first checked to make sure everything compiled under Qt3, without warnings or errors. With my local branch I can play around, checking in and updating to my local repository without ever affecting the mirrored repository or the central repository. Eventually, when all the changes are made, I’ll smerge back to the local trunk and then push the changes back to the central repository. Again, svk makes this real easy.
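For anyone curious, the branch-and-merge dance goes something like this. This is only a sketch; the depot paths and commit messages are illustrative, not necessarily the ones I used:

```shell
# Sketch of the svk branching workflow; depot paths are illustrative.
svk copy -m "Create qt4 branch" //local/epsimulator //local/epsimulator-qt4
svk checkout //local/epsimulator-qt4 epsim-qt4
cd epsim-qt4
# ...hack away, committing locally as often as you like...
svk commit -m "Port another class to Qt4"
# When the port is done, merge the branch back into the local trunk:
svk smerge -m "Merge qt4 branch" //local/epsimulator-qt4 //local/epsimulator
svk push    # and finally push the changes out to the central repository
```

The nice part is that all of the intermediate commits happen on the local branch, so a half-ported, non-compiling tree never touches the trunk.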

I followed the instructions on the Trolltech website and ran the qt3to4 tool. This went smoothly. I installed the Qt4 packages (including the development and debug packages) using the Smart package manager. I changed the project settings to use Qt4 instead of Qt3, and made sure the paths were correct. I compiled, and of course, there were 10 million errors.
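For the record, the conversion step itself is simple to run (the file names here are made up for illustration):

```shell
# Run Trolltech's porting tool on the project file and each source file.
qt3to4 epsimulator.pro
qt3to4 src/mainwindow.cpp src/mainwindow.h
```

The old Qt3 classes that the tool cannot fully convert live on in the Qt3Support compatibility module, which is enabled by adding `QT += qt3support` to the qmake project file.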

The next step was to convert my ui (user interface) files. More on this next time.

Computers & Software

The Great Sluggo Crash of 2007 Part II

In Part I, Sluggo, the aptly named Linux computer that is my workhorse, crashed and burned.  I found myself updating from Suse 9.2 to 10.1.  Now I could boot the computer, but a lot of stuff didn’t work.  Among the problems, in no particular order:

Software installation trouble

10.1 has two software installers: the traditional YaST and ZENworks.  ZENworks was broken, though apparently this was fixed in the remastered edition of 10.1.  I had to update a few times with YaST to get it to work.  See this site for more info.

Once I fixed this, I could get the software updates from ZENworks.  I’m still not sure why there are both programs.  It looks like ZENworks is set up to get the automatic security updates, while YaST is better for installing and uninstalling programs.  ZENworks seems really slow, however, and so far I am not convinced it is a step forward.


Perl problems

I use SVK for version control (see previous posts).  SVK is a bunch of Perl scripts, and 10.1 updated the version of Perl.  When this happened, all my old Perl scripts were left behind, including awstats, which I use to get statistics on my website.  Well, I just had to reinstall SVK and the rest from the CPAN Perl repository and, long story short, it worked.


Network problems

I couldn’t access the other computers on the network at first.  As usual this turned out to be a firewall issue.  10.1 “modernizes” the network interface, using KNetwork instead of the traditional ifconfig Linux network tools.  Shutting this off and tweaking the firewall did the trick.  After that, my website showed up again, but…

No Blog!

The problem here was that MySQL, which is required by the WordPress blog software, was not starting at bootup.  I ended up fixing this and then reinstalling the PHP modules for Apache2.  It worked.
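On SUSE, the fix probably amounted to something like the following. This is a sketch (run as root); the exact service names can vary by release:

```shell
# Make the MySQL init script run at boot, and start it now.
chkconfig mysql on
rcmysql start
# After reinstalling the PHP module, restart Apache2 to pick it up.
rcapache2 restart
```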

DVDs, MP3, etc.

The only software I have found that plays my DVDs well on Sluggo is Okle.  It was broken by the upgrade, I think because I had compiled it under 9.2.  Ogle, the command-line interface, actually worked.  So I recompiled the source for Okle, and there I was, ready to go back to watching my Voyage to the Bottom of the Sea and Ultraman DVDs!  My MP3 files did not play under XMMS either.  However, Amarok handled them, so again, problem solved.

At this point I was back in business.  So, of course, not content, I got hold of a better motherboard that actually worked (the whole reason for what happened in Part I) and installed it.  Everything worked except one thing.  Sluggo remembered the MAC address of the old motherboard’s ethernet interface and assigned the new one to eth1 instead of eth0.  eth0 no longer existed, but the whole network configuration depended on it.  So, more internet googling, and eventually I figured out that the eth names are assigned by udev, which is another complicated program that I managed to learn enough about to fix the problem.  Basically, for the sake of anyone else who might have this problem, I edited 30-net-persistent-names.rules in /etc/udev/rules.d and gave eth0 the right MAC address.  Fixed!
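For reference, the rule in question looks roughly like this. The MAC address below is made up, and the exact rule syntax varies between udev versions, so take it as a sketch of the idea rather than a line to copy verbatim:

```
# /etc/udev/rules.d/30-net-persistent-names.rules (excerpt)
# Pin the interface with this (made-up) MAC address to the name eth0.
SUBSYSTEM=="net", ACTION=="add", SYSFS{address}=="00:0c:29:ab:cd:ef", IMPORT="/lib/udev/rename_netiface %k eth0"
```

Changing the MAC address in the eth0 rule to match the new motherboard (and deleting the stale rule) is what restores the old interface name.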

Sluggo was back in perfect shape, and faster than before (but still slow by modern standards).  I learned a lot in the process, including the realization that upgrading hardware and operating systems is not for the faint of heart.  One suggestion, which I have been following for a long time, is to keep a log or diary of what you do when you fix something.   My LinuxTweaks file is 1500 lines long.  And as much trouble as all the above was, little did I know that even more danger lay ahead, to be described in my next installment, The Great MonsterMagnet Crash of 2007.  See you then.

Computers & Software

The Great Sluggo Crash of 2007

Computing is Hell. Everyone who works with computers knows this, but when our computers are working well, we tend to forget it, just like we tend to put out of our minds on Friday night that Monday morning is just a little over two days away. When everything is working well, we put off making backups, or investing the time to understand how our systems really work — you know, little things that could prevent a disaster from turning into the apocalypse.

A few days ago disaster struck. Sluggo, the $100, incredibly slow and underpowered computer that runs this website, crashed. I’m not sure of the exact cause, but it probably had something to do with accidentally turning off the power and then trying to upgrade it with a new motherboard and a slightly less slow processor. In any case, the new motherboard didn’t work, and when I went back to the original I found that Sluggo would no longer boot. In the middle of the boot sequence, as far as I could tell, a defective file system was detected and I would be dropped into a diagnostic mode. However, in this mode there was no evidence of any defective file system. Yet rebooting always gave the same result. I finally determined (rightly or wrongly, I’m not sure) that the boot scripts had somehow been corrupted, and the only possible fix would be reinstallation.

I dug up my Suse 9.2 DVD and worked with it, trying different ways to fix the problem. Whatever was wrong with Sluggo, even reinstalling Suse 9.2 did not fix it; the reinstallation failed each time.

So I dug up a Suse 10.1 DVD and did an update, and it worked, sort of. I didn’t want to do a fresh install and lose all the tweaks I had done in my system. Sure I had backed up the data, but anyone who uses a Linux system knows that there are dozens of little tweaks and customizations that you would hate to lose. So, now I could boot up into Suse 10.1. But…

[To be continued…]

Computers & Software

Version Control With SVK Part II

Part I of this article appeared some time ago — you can find it in the September posts. In this second part I describe what’s not quite right with Subversion (SVN) and what I like about SVK, its heir. I won’t go into much about how SVK came about and whatnot — see the SVK home page for details.

SVN is CVS done right, but SVK goes SVN one better. To use SVN you must be in constant contact with the server that contains your repository, and you must access it with a usually long URL that is easy to misspell. While this is less of a problem in the wireless, connected world we live in than it used to be, there are certainly places where you cannot easily link to the Internet, and there are times when you just want to save your laptop battery and deactivate your wireless card. With SVK, you don’t have to be connected to check in and check out files. SVK sets up a local repository on your hard drive that mirrors the central repository on your server. You can give this mirror a short name, such as //epsimulator, instead of the long URL of the central repository. You can then check out your files from the local repository, //epsimulator. If you check in changes, though, you must be connected to the server that contains the SVN repository that was mirrored. BUT … there is a slick way around that, as shown below.

The workflow for SVK goes something like this. First, mirror your remote repository on your laptop, for example:

svk mirror <remote repository URL> //epsimulator

You must get into the habit of synchronizing the local repository to bring in any changes made by others. This is one extra step with SVK, and easy to forget:

svk sync --all

Then checkout your files:

svk checkout //epsimulator

After checkout, you can update your files as long as you are in the sandbox directory:

svk update

Check in your changes:

svk commit -m 'Here are my changes!'

Note the mandatory message to help keep track of what you did with each revision. With the commit, both the mirrored repository and the local repository are updated, and as you might expect this fails if you are not connected to your remote repository. To get around this, you create an unmirrored copy of your local repository:

svk cp //epsimulator //local/epsimulator

With SVK these repositories are just like files that can be copied and listed, as long as you precede your cp and ls commands with svk. Now you can checkout, update, and commit from the //local/epsimulator repository and not have to be connected at all to the Internet. When you are connected and want to update the remote repository, the easiest way is to use the svk push command:

svk push

This merges your changes, keeps the change log up to date and synchronizes the local and remote repositories in one easy command. Similarly svk pull can pull any changes to the remote repository to your unmirrored local repository.
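Putting the pieces together, a disconnected work session might look like this. This is a sketch using the depot names from above:

```shell
svk sync //epsimulator            # while online: refresh the mirror
svk checkout //local/epsimulator work
cd work
svk pull                          # merge any new mirror changes into the local copy
# ...go offline; edit and commit as often as you like...
svk commit -m 'More offline work'
# ...back online again:
svk push                          # merge local commits out to the remote repository
```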

There is a lot more you can do with SVK; check out the link above, or these tutorials. Then feel the freedom of version control without being tied to a network.


Computers & Software Electrophysiology

EP Simulator Update

EP Simulator, a teaching and simulation program to emulate essentially a complete EP lab, is progressing, albeit slowly!  See screenshot below.


Computers & Software Electrophysiology

Sneak Preview — First Screenshots from EPSimulator

The new EP Studios project is a simulation of an EP recording system.  The program is loosely based on the CardioLab system I have used for years.  The program will be released under an open source license sometime (?) in 2007.


Computers & Software

Version Control with SVK Part I

I just want to put a good word in for the version control program SVK. First a word of warning. This is definitely a programmer’s topic and if you have been reading for the political opinions or entertainment, or even for EP opinions, you should skip this topic.

There, have all the non-programmers left? OK. Well, if you are a programmer, or want to be one, you should realize that version control is not an option; it is a necessity. This is a fact that no serious programmer is ignorant of. If you are really not familiar with the concept, version control is, in a nutshell, the ability to store all previous versions of a file and resurrect any of them at any time. This allows tremendous freedom and provides a great sense of security: you are not going to mess something up and be unable to retrace your steps back to the version you had before. In other words, it is the ultimate Undo. It is not a matter of whether or not to use version control, but which version control software to use.

Recently I became acquainted with SVK, a variation of the Subversion version control system with a number of advantages over Subversion (aka SVN). SVK represents about my third or fourth foray into version control. For EP Office I used a commercial, Windows-based version control system. As with every other proprietary Windows program, I felt locked into updating the software whenever updates were released. Of course this cost money, and upgrading is also painful because it invariably breaks something that used to work just fine. Later I realized the more serious danger of being locked into a proprietary software system: what if the company folded and I was left with unsupported, undocumented software in binary form, without source code? What had I entrusted my precious program code to? The situation brought to mind the backup copies I had made on magnetic tape years ago. Those tapes are now useless, as there is no longer any hardware or software available that can read that tape format. Likewise, perhaps my carefully archived software would some day be lost because it was stored in a format that would be forgotten.

About this time I was switching over to Linux and Open Source software, so naturally I went to the gold standard of version control, CVS. I found that CVS is great for version control of individual files, but is awkward for tracking a large project. I still use CVS for version control on certain text documents, but I don’t think it is very good for development compared to what is now available. If you want to save the state of multiple files all at once, you need to “tag” them with CVS. Deleting and renaming files or directories can also cause major headaches. Moreover CVS does not handle binary files well, and its commands and logic can be obtuse. I still don’t quite understand “sticky tags,” for example.

Enter Subversion (SVN). SVN is directory based, not file based. When you commit, you are committing all the files in a directory, whether they changed or not. The whole directory, with its files and subdirectories, becomes revision 101, for example. You don’t have a mixture of FileA at one version and FileB at another, as you might have with CVS. If you delete a file, it is simply gone from revision 102 onward, but you can always go back to revision 101 to get it back if you want. And SVN handles binary files just fine. So what is it about SVK that makes it better? In Part II I will examine SVN’s shortcomings and why SVK is, in computer geek terms, really neat.
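The directory-based model is easy to see at the command line. A sketch, with paths and revision numbers invented for illustration:

```shell
svn commit -m "Change several files"     # the whole tree becomes, say, r101
svn delete obsolete.cpp
svn commit -m "Remove an old file"       # gone from r102 onward...
svn checkout -r 101 http://example.com/repos/trunk old-trunk
                                         # ...but the file is still there in r101
```

Nothing is ever really lost; a deletion just means the file is absent from later revisions of the tree.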

Computers & Software

Getting off the Microsoft carousel

Ten years ago, when EP Office (known as EP Database back then) first emerged from the primordial binary soup to crawl across the computer screens at the University of Colorado, the only platform I considered using was Microsoft. The database used was Microsoft Access 2.0. Over the years various features were added to the program, using other members of the Microsoft Office suite to provide features such as report generation (Word) and billing sheets (Excel). Meanwhile the Office suite mutated to Office 97, 2000, XP, and then its current iteration, Office 2003. Office 2007 is reportedly around the corner. Each update meant spending money to buy the new program, and then modifying my program to work with the new Office version. You see, each update to Office changed the format of the Access database file, and changed the syntax of Visual Basic for Applications, with the result that each Office update broke my program. The present version of EP Office works best with Office XP. It will also work with Office 2003, but certain security features added in 2003 cause some hiccups in the smooth functioning of EP Office. But I am sick of living or dying at the whim of Bill Gates and Company.

So, what is the answer? For the past several years I have been working with the Linux (perhaps more correctly GNU/Linux) operating system. Linux is fundamentally a clone of a very old, in computer years, operating system, Unix. Old Unix programs written 25 years ago still run on it. Backward compatibility is obviously a top priority. Moreover, Linux and its applications are open source, so that the source code is always available to the developer. Bugs in Microsoft programs are just tough luck, unless Microsoft decides to fix them. Bugs in open source programs are scrutinized by multiple developers and are fixed quickly.

I am thinking of releasing the source code of EP Office, making it open source. The problem with open source software is that it is difficult to generate much income from it. Anyone can download and make a copy for free. Nevertheless I am seriously thinking of doing this. It would certainly go against the grain of most medical software, which is generally prohibitively expensive. It would have the advantage of allowing others to adapt the program to the ever changing Microsoft Office suite, or allow the program to be cloned to a healthier platform, such as a web-based interface. We shall see…