Monthly Archives: March 2011

Building QT framework on Mac and Windows

When building programs with QT that you want to distribute to your friends, I have found it preferable to compile the QT library myself.

On Mac OS X this was nice because I could choose the architecture combination for my binaries exactly as I wanted (PPC only, or PPC32 + x64, for example).

On Windows it was nice because I could download the latest Windows Platform SDK (which is free!) and build QT with it. That way, I could create both x86 and x64 Windows binaries that could be distributed without any other files whatsoever.

However, building QT takes a long time. And if you are unlucky, or too ambitious with your configure switches, it might not build at all. So, here are a few known-working configurations:

Mac OS 10.5, PPC, XCode 3.1.3, QT 4.5.3

./configure -release -no-qt3support -nomake examples -nomake demos -arch ppc -arch x86 -platform macx-g++42 -sdk /Developer/SDKs/MacOSX10.5.sdk/ -prefix /Users/zo0ok/qt_dev/4.5.3-carbon-ppc_x86

./configure -release -no-qt3support -nomake examples -nomake demos -arch ppc -cocoa -platform macx-g++42 -sdk /Developer/SDKs/MacOSX10.5.sdk/ -prefix /Users/zo0ok/qt_dev/4.5.3-cocoa-ppc

./configure -release -no-qt3support -nomake examples -nomake demos -arch ppc -arch x86_64 -cocoa -platform macx-g++42 -sdk /Developer/SDKs/MacOSX10.5.sdk/ -prefix /Users/zo0ok/qt_dev/4.5.3-cocoa-ppc_x64

./configure -static -release -arch ppc -sdk /Developer/SDKs/MacOSX10.5.sdk/ -no-framework -prefix /Users/zo0ok/qt_dev/4.5.3-static-ppc

./configure -static -release -nomake examples -nomake demos -arch ppc -arch x86 -platform macx-g++42  -sdk /Developer/SDKs/MacOSX10.5.sdk/ -no-framework -prefix /Users/zo0ok/qt_dev/4.5.3-static-ppc_x86

Please refer to the QT documentation to understand the advantages of Carbon vs Cocoa. Note that all these configurations were made on (and for) a PPC machine. You might want just x64 if you build for modern Macs.

Mac OS 10.6, x64, XCode 3.2.6, QT 4.7.3

$ ./configure -release -no-qt3support -nomake examples -nomake demos -arch x86_64 -cocoa -platform macx-g++42 -sdk /Developer/SDKs/MacOSX10.5.sdk/ -prefix /Users/zo0ok/qt_dev/4.7.3-cocoa-x64

For some reason I got a linking error an hour into the build process when using the MacOSX10.6 SDK. But those of you who are on the cutting edge don't use 10.6 and 3.2.6 anyway.

Windows SDK 7.0, x64 OS, QT 4.6.1
If you install the (free) Windows SDK 7.0 you get all the compilers and make tools you need. (You can use Visual C++ Express instead, but it does (did) not build for x64.)

Now, in the Windows world, compiling for different target CPUs is quite simple (but a bit confusing). When you build QT, the build happens in two steps. In the first step (configure), some tools are compiled (qmake, among others). That qmake is then used to build the rest of QT in the second step.

If you “cross compile” on Windows, you first use x86 MS tools to build x64 QT tools. In the second step (nmake), x86 and x64 tools end up being mixed and you get problems (linking problems, if I remember correctly). So, the easiest way to get it right is not to cross compile, but to:

  1. Build x64 QT using native x64 Microsoft tools
  2. Build x86 QT using native x86 Microsoft tools

This is easy, on an x64 version of Windows, as it runs x86 programs just fine. I really have to put a screenshot here, because there is no other way to explain this Microsoft mess:

The executables you see (cl, the C compiler, for example) are the x86 tools generating x86 binaries. You use those tools if you invoke vcvars32.bat. But then there are four folders containing the same tools for other combinations:

Folder       bat-file              Description
amd64        vcvars64.bat          Create x64 on x64 (USE)
x86_amd64    vcvarsx86_amd64.bat   Create x64 on x86 (avoid)

Don't even bother with the irrelevant ia64 stuff. When you have a shell with the environment from the correct vcvars file, this is how you build QT (exactly the same for both versions):

> configure.exe -static -release -no-qt3support -platform win32-msvc2008
> nmake sub-src sub-tools

Note: on Windows it was best to build QT “in place” (no -prefix used). So, I suggest you install the QT source to two different folders: c:\qt-x86 and c:\qt-x64. When you are done you can make your own copies of the vcvars files and add the qt\bin folder to the path.

On Windows it is clever to add to your .pro-file:

CONFIG += embed_manifest_exe

That way, the exe file is the only thing you need to distribute (well, people might need the right C/C++ runtimes installed).

Your friends running Linux can compile from source 🙂

VmWare Server 2.0 host filesystem performance

I manage a few Linux machines that run VmWare Server 2.0.2. On those I have a few Windows Server OS guests.

A typical host is a Quad-Core Intel Core 2 processor, 8 GB RAM, with a separate system drive and a drive for virtual machines. It runs Debian (5.0 or 6.0) and VmWare Server 2.0.2.

A typical guest could be a Windows Server 2008 with 2GB RAM, 24GB C-drive, 12GB E-drive.

To get decent filesystem performance on the hosts I have used XFS and split the VmWare disk images into 2GB pieces. They have been allowed to grow dynamically.

Over time I have had the feeling that performance has grown worse, and it has never been very impressive. Different things were tried. Finally, one of the hosts was reinstalled (Debian 6.0 instead of Debian 5.0), and btrfs was used instead of XFS. Horrible!

Filesystem   2GB split   Growable   Performance
XFS          Yes         Yes        Questionable (at least after 12 months)
btrfs        No          No         Horrible – 30 min until Windows replies to ping
btrfs        Yes         No         Bad – replies to ping in less than three minutes, but both the physical Linux host and the virtual Windows guest experience I/O delays of a few seconds. Very un-snappy.
ext2         No          No         Excellent! Fast boot. Snappy.

Perhaps journaling filesystems have their advantages, but I make nightly backups of all machines and don't worry much about a filesystem crash. Also, ext2 can be considered fairly mature, proven and stable.

I will probably do some migration in the coming days (reformat some XFS filesystems as ext2). Maybe I will provide some properly quantified measurements. However, just moving the virtual machines and changing their format may fix fragmentation problems, so it is hard to make a fair before-and-after test.

Backing up your Windows system

Making a backup of your Windows computer is one thing – restoring it is another. If the system does not boot anymore, how do you restore it? Reinstalling Windows is a pain – if for no other reason than that you have to reactivate it with Microsoft. Here I present a method I like.

You need a live Linux system. Any version of Ubuntu on a bootable CD or USB key will work, but if you prefer some other Linux live CD, use that. I assume you use Ubuntu.

Linux can easily access your entire hard drive (or a partition) as one large file. This “file” can be copied to an external hard drive, or sent over the network to another computer.

What drive to backup?
Linux hard drives are called /dev/sda, /dev/sdb, /dev/sdc and so on. You need to figure out which drive contains your Windows system. The USB key you booted from will also show up as a hard drive, and so will any external USB drive you insert to back up to.

Run fdisk to see your hard drives and their partitions:

freke@freke:~$ sudo /sbin/fdisk -l

Disk /dev/sda: 120.0 GB, 120000000000 bytes
255 heads, 63 sectors/track, 14589 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x19411940

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          32      257008+  83  Linux
/dev/sda2   *          33          46      102400    7  HPFS/NTFS
Partition 2 does not end on cylinder boundary.
/dev/sda3              46        7682    61337600    7  HPFS/NTFS
/dev/sda4            7682       14589    55482974+  83  Linux

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000bd89d

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          62      497983+  83  Linux
/dev/sdb2              63         548     3903795   82  Linux swap / Solaris
/dev/sdb3             549        2372    14651280   83  Linux
/dev/sdb4            2373      121601   957706942+  83  Linux

Disk /dev/sde: 32 MB, 32473088 bytes
255 heads, 63 sectors/track, 3 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0030bb16

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1   *           1           4       31680+   4  FAT16 <32M
Partition 1 has different physical/logical endings:
     phys=(2, 254, 63) logical=(3, 241, 46)

The above example contains a lot of information you don't need. But you can see there are three drives:

  • /dev/sda, 120 GB
  • /dev/sdb, 1000 GB
  • /dev/sde, 32 MB (a little memory card)

Typically, just by the size, you can figure out which drive is the Windows system drive you want to back up. If you have found your drive and want to back up all of it, you can skip the next two sections (about partitions and the MBR).

What partitions to back up
Hard drives can be partitioned into different pieces. There are different purposes for this, for example:

  • You install Linux and Windows side by side and dual boot
  • You have one (smaller) partition for the system, and a separate partition for data, so you can reinstall/recover the system without touching your own files
  • Linux uses a separate partition for swap

In the above example, the 120Gb system hard drive contains four partitions:

  1. A little Linux /boot-partition
  2. A special Windows 7 partition (typically invisible from Windows)
  3. A Windows 7 system partition
  4. A Linux system partition

If I wanted to back up this system I could back up the entire 120 GB hard drive (all of /dev/sda). If I just want to back up Windows I can back up sda2 and sda3.

Master Boot Record and Partition table
If you back up all the partitions of a hard drive, you still do not have everything. There is something called the MBR (Master Boot Record) that contains the partition table itself, and a little program that starts up operating systems (or boot loaders). Think of it like this:

  sda = MBR + sda1 + sda2 + sda3 + sda4

There may be small unused space between the partitions, so the size of the partitions + MBR is typically a little smaller than the entire hard drive. The MBR is exactly 512 bytes (very small), and there is nothing before the MBR on the drive.
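As a safe dry run (no real disks involved), you can verify how dd picks out exactly the first 512 bytes – the same command with if=/dev/sda would grab the real MBR. The file names under /tmp are made up for this example:

```shell
# create a fake 2048-byte "disk" out of zeroes
dd if=/dev/zero of=/tmp/fakedisk.img bs=512 count=4 2> /dev/null
# copy only the first 512-byte block - exactly like an MBR backup
dd if=/tmp/fakedisk.img of=/tmp/mbr.img bs=512 count=1 2> /dev/null
wc -c < /tmp/mbr.img
# prints 512
```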

If you back up the MBR and the partitions separately, you need to follow this procedure when restoring the system:

  1. Restore MBR
  2. Reboot (or force Linux kernel to reload partition table in another way)
  3. Restore partitions

Backup and recovery using local hard drive
The most natural thing is to back up to a local hard drive. In the example above, I could back up the 120 GB drive to a file on the 1000 GB drive. It is important to understand that we are going to back up an entire drive (or partition) to a file on a filesystem, on another drive. Thus:

  • If backing up an entire drive, none of its partitions/filesystems may be mounted
  • If backing up just a partition, that partition/filesystem must not be mounted
  • The filesystem that you backup to, must of course be mounted

If in doubt about what is mounted (or not), use the command mount with no arguments. A simple example of listing mounts, unmounting, and remounting a filesystem (to a different mount point) is shown below (some output rows from mount are removed from the example):

freke@freke:~$ mount
/dev/sda4 on / type ext4 (rw,errors=remount-ro,commit=0)
/dev/sda1 on /boot type ext2 (rw)
/dev/sdb1 on /media/old-boot type ext2 (rw)
/dev/sdb4 on /media/mediadrive type ext3 (rw,commit=0)
/dev/sdb3 on /media/old-root type ext3 (rw,commit=0)
/dev/sde1 on /media/PSP32MB type vfat (rw)
freke@freke:~$ sudo umount /media/PSP32MB/
freke@freke:~$ mount
/dev/sda4 on / type ext4 (rw,errors=remount-ro,commit=0)
/dev/sda1 on /boot type ext2 (rw)
/dev/sdb1 on /media/old-boot type ext2 (rw)
/dev/sdb4 on /media/mediadrive type ext3 (rw,commit=0)
/dev/sdb3 on /media/old-root type ext3 (rw,commit=0)
freke@freke:~$ sudo mkdir /media/myMemoryStick
freke@freke:~$ sudo mount /dev/sde1 /media/myMemoryStick/

Now, presume we want to back up the entire sda (120 GB) to some filesystem mounted on /media/backupdrive:

$ sudo dd if=/dev/sda of=/media/backupdrive/sda.img

WARNING: the above command is very dangerous. if means input file and of means output file. Mixing them up will Destroy Data. The name dd probably comes from the “data definition” statement of IBM JCL, but a popular theory is that it means Destroy Data.

Backing up MBR and a few partitions is done like this:

$ sudo dd if=/dev/sda of=/media/backupdrive/sda_mbr.img bs=512 count=1
$ sudo dd if=/dev/sda2 of=/media/backupdrive/sda2.img
$ sudo dd if=/dev/sda3 of=/media/backupdrive/sda3.img

Restoring the backups is done by just swapping if and of:

$ sudo dd if=/media/backupdrive/sda.img of=/dev/sda
$ sudo dd if=/media/backupdrive/sda_mbr.img of=/dev/sda
       !! reboot here - check that sda is still sda - mount backupdrive !!
$ sudo dd if=/media/backupdrive/sda2.img of=/dev/sda2
$ sudo dd if=/media/backupdrive/sda3.img of=/dev/sda3

The backup files can typically be compressed very well. Compression/decompression can also be done on the fly, which makes it possible to back up a larger drive to a smaller one (well, you don't know how well it will compress before you try). For example:

$ sudo dd if=/dev/sda | bzip2 > /media/backupdrive/sda.img.bz2

$ sudo su
# bunzip2 < /media/backupdrive/sda.img.bz2 | dd of=/dev/sda

Use gzip/gunzip instead of bzip2/bunzip2 if speed is more important than compression ratio. I don't like piping much data into sudo, so I prefer to become root with sudo su instead.

Backup over network
It can be more convenient to back up to another computer over the network rather than to a local or external hard drive. This is surprisingly easy. If the other computer runs Windows, you need to install cygwin for this method to work. Mac OS X is fine.

Backup, server:

$ nc -l 33333 > sda.img.gz

Note: the "-p 33333" switch is mandatory on some systems, and forbidden on others. We assume the server IP address is known (written SERVER_IP below). 33333 is a port number; any value from 1025 to 65535 is ok (as long as the port is unused).

Backup, client:

$ sudo dd if=/dev/sda | gzip | nc SERVER_IP 33333

Restore, client:

$ sudo su
# nc -l 33333 | gunzip | dd of=/dev/sda

Restore, server (assume the client address is CLIENT_IP):

$ nc CLIENT_IP 33333 < sda.img.gz

A bonus with compression is that you can verify the integrity of the backup using:

$ gzip -t sda.img.gz      (or)
$ bzip2 -t sda.img.bz2

This only works if the compression is done on the client, but that's where it should be done anyway.
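A quick way to see what the integrity test does (made-up scratch files in /tmp; dd is reused here to flip a few bytes in the middle of the archive):

```shell
# make a tiny test archive
printf 'hello\n' | gzip > /tmp/ok.gz
gzip -t /tmp/ok.gz && echo "archive OK"
# prints: archive OK
# overwrite four bytes inside the file (dd on a file this time, not a disk)
printf 'XXXX' | dd of=/tmp/ok.gz bs=1 seek=18 conv=notrunc 2> /dev/null
gzip -t /tmp/ok.gz 2> /dev/null || echo "archive corrupt"
# prints: archive corrupt
```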

Other things to think about
This method works for backing up Linux systems as well. It should work for Macs too, but it makes less sense there, because Mac OS X is very easy to reinstall and to restore from Time Machine.

Note that Macs do not use an MBR. Modern Windows systems may use GPT (GUID Partition Table). In that case, most of the above about partitions probably does not apply.

If you have partitions with numbers 5 and above (sda5, sda6 etc) you have logical partitions. You need to learn about those before trying to back them up.

Restoring to a different hard drive than the original may not work. The same model and size should be fine, and the same size should also be fine. A larger drive could be problematic, and you would need to resize or create partitions to use the entire drive. A smaller drive is tricky.

In Linux, you can easily mount an uncompressed partition image, with something like:

  $ sudo mount -o loop sda2.img /media/windows_image

Compression of random data is inefficient, while compression of large blocks of zero bytes is extremely efficient. When a file is deleted from a filesystem, its contents remain on disk until overwritten. So, writing zeroes to the unused space of the drive you back up is a good thing to do before backing it up. One simple way to do this is to fill the drive with large files containing only zeroes until it is almost full, and then delete them. For fragmentation reasons, you should never fill a Windows NTFS filesystem beyond 90%. In Linux (or Mac OS X) you can create a 100 MB empty file, and compress it, this way:

yggdrasil:OnMyMac gt$ dd if=/dev/zero of=100Mb.zeros bs=1m count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 3.126381 secs (33539608 bytes/sec)
yggdrasil:OnMyMac gt$ gzip < 100Mb.zeros > 100Mb.zeros.gz
yggdrasil:OnMyMac gt$ ls -l
total 205000
-rw-r--r--  1 gt  staff  104857600 Mar 25 22:51 100Mb.zeros
-rw-r--r--  1 gt  staff     101791 Mar 25 22:52 100Mb.zeros.gz
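To see why the zeroing helps, compare how well zeroes compress against random data (a small sketch; the 1 MB sizes and /tmp file names are just assumptions for the example):

```shell
# 1 MB of zeroes vs 1 MB of random data
dd if=/dev/zero of=/tmp/zeros.bin bs=1024 count=1024 2> /dev/null
dd if=/dev/urandom of=/tmp/random.bin bs=1024 count=1024 2> /dev/null
gzip < /tmp/zeros.bin > /tmp/zeros.bin.gz
gzip < /tmp/random.bin > /tmp/random.bin.gz
ls -l /tmp/zeros.bin.gz /tmp/random.bin.gz
# the zeroes compress to about a kilobyte; the random data hardly shrinks at all
```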

Planning your Windows system
If you install Windows from scratch, you can do this:

  1. Install a small Linux system first on the hard drive
  2. Install Windows to a reasonably small partition (maybe 40Gb for Windows 7)
  3. In Windows, create a data partition on the large unused remaining space
  4. Boot into Linux, backup the Windows system partition to the Windows data partition

Now whenever you want to you can restore your Windows system easily, and also make later backups. Everything contained on one computer.

VMWare Server 2.0 – Disable HTTP(S) redirect

Plenty of people have good reasons to access their VMWare Server 2.0.2 using http (port 8222) rather than https (port 8333). But VMWare did not make that so easy.

Many sites tell you that you enable http (disable http->https redirect) by editing /etc/vmware/hostd/proxy.xml

Replace the relevant access mode setting (in all 5 places).

After that you should restart vmware.
  $ sudo /etc/init.d/vmware restart

This is absolutely correct, but perhaps it is just 90% of the solution.

If it does not help, try re-running

  $ sudo /usr/local/bin/

When I did that, it asked me if I wanted to keep or replace my proxy.xml file (that I had modified as suggested above). I chose to keep it, and after that http worked properly. It did many other things too… It seems overkill to recompile the kernel modules just to disable the http->https redirect. Maybe there is a smarter way, but at least this one worked.

If you are fine with the command line, and you want a simple, free, virtualization technology, I suggest QEMU.

Building a Common Lisp program (ECL)

Lisp is easy – you just start up the interpreter and start playing. But what if you depend on libraries, and you want to compile a binary? Coming from another background, like I did, it is quite confusing in the beginning.

Everything below applies to ECL – I think most things will apply to other Common Lisp implementations as well. I built a little command line utility that converts things to/from Base64 (using a library for that).

ASDF is a tool that handles dependencies between packages, and also controls your build process (like make). Every project is called a System. Yours too.

When you download lisp packages they typically come with an asd-file, and one or more lisp-files. Each package goes in its own directory, and ASDF needs to know about each package.

I did everything from scratch and installed ecl in /opt/ecl. I put the packages in /opt/ecl/packages (not standard at all).

Project files and building it
These are the files my project contains, and how to build.

kvaser@kvaser:~/lisp/simple-base64$ ls -l
-rwxr-xr-x 1 kvaser kvaser  337 Mar 13 10:18 build.lisp
-rw-r--r-- 1 kvaser kvaser  197 Mar 13 10:18 simple-base64.asd
-rwxr-xr-x 1 kvaser kvaser 1389 Mar 13 13:51 simple-base64.lisp
kvaser@kvaser:~/lisp/simple-base64$ ./build.lisp
  ... ...

The binaries (of my program, and all dependencies) end up in ~/.cache/, so that is where you need to go to execute your program (or just make a symbolic link to the project directory).


This is the system definition file, simple-base64.asd:

(in-package :asdf)

(defsystem :simple-base64
  :name "simple-base64"
  :author "Zo0ok"
  :version "0"

  :components((:file "simple-base64"))

  :depends-on (:s-base64 :flexi-streams))

:components points to my lisp-file(s).
:depends-on lists other systems that I depend on (the Base64 library itself, and a stream library that turned out to be useful).

Here is the source code of the program itself. It is very non-Lispy – remember, I am new to Lisp and don't yet know how to write Lisp with style.

(defun print-usage-and-quit ()
  (format *error-output* "Usage:~%")
  (format *error-output* "  ./simple-base64 -e PLAINDATA~%")
  (format *error-output* "  ./simple-base64 -d BASE64DATA~%")
  (format *error-output* "  ./simple-base64 -e < plaindata.file~%")
  (format *error-output* "  ./simple-base64 -d < base64data.file~%")
  (quit))

;;; MAIN starts here

(let ( (mode-op-enc NIL)
       (mode-src-stdin NIL)
       (input-stream NIL)
       (output-stream NIL) )
  (cond
    ( (= 2 (length si::*command-args*) )
      (setf mode-src-stdin T ) )
    ( (= 3 (length si::*command-args*) )
      (setf mode-src-stdin NIL ) )
    ( T
      (print-usage-and-quit) ) )
  (cond
    ( (string= "-d" (second si::*command-args*) )
      (setf mode-op-enc NIL) )
    ( (string= "-e" (second si::*command-args*) )
      (setf mode-op-enc T) )
    ( T
      (print-usage-and-quit) ) )

  (cond
    ( mode-src-stdin
      (setf input-stream *standard-input*) )
    ( mode-op-enc
      (setf input-stream (flexi-streams:make-in-memory-input-stream
                           (map 'vector #'char-code (third si::*command-args*)))) )
    ( (not mode-op-enc)
      (setf input-stream (make-string-input-stream (third si::*command-args*))) ) )

  (if mode-op-enc
      (s-base64:encode-base64 input-stream *standard-output*)
      (s-base64:decode-base64 input-stream *standard-output*) ) )

Notice that the systems I depend on are never explicitly loaded anywhere in the source – they are just used when needed.

Finally the build-script, a lisp program that uses asdf:

#!/opt/ecl/bin/ecl -shell

(require 'asdf)
(push (truename #P"/opt/ecl/packages/s-base64") asdf:*central-registry*)
(push (truename #P"/opt/ecl/packages/cl-trivial-gray-streams") asdf:*central-registry*)
(push (truename #P"/opt/ecl/packages/flexi-streams-1.0.7") asdf:*central-registry*)
(asdf:make-build :simple-base64 :type :program)

Note that the build script is the place to put paths to the systems I depend on. Also note that I have included cl-trivial-gray-streams, a system I don't use directly, but flexi-streams needs it so I have to tell ASDF where it is. Finally, pushing paths onto *central-registry* is supposed to be the "old way". But for now I was happy to find a way that works, and that I understand.

As usual, when something works it looks simple, but it is tricky to get all the details right the first time. I believe this is a good starting point for a small Lisp project that depends on available libraries.

Trivial Gray Streams
The package Trivial Gray Streams caused problems. The standard package I downloaded did not work with ECL (it complained it could not find the system). I ended up installing Trivial Gray Streams with Debian apt-get. That puts Lisp packages in /usr/share/common-lisp, and that version worked.

This applies to ECL version 11.1.1 and Trivial Gray Streams from Debian 6.0. The version of Trivial Gray Streams that did not work was dated 2008-11-02.

Lisp on Debian/ARM

After reading Revenge of the Nerds I decided it was time to learn Lisp. Programming without some kind of real project is boring, so my plan is to write some web applications using jQuery and Lisp (for the back end).

Since I have a Qnap TS-109 running 24×7 I thought it would make a good development machine and Lisp web server. It runs Debian 6.0, but running Lisp on it turned out to be a challenge.

Debian, Lisp and ASDF
Debian supports installing different implementations of (Common) Lisp. However, it seems tricky to find a version that actually installs a binary on Debian ARM.

Also, there is a package dependency tool for Lisp called ASDF. Lisp implementations should come with it.

The only Common Lisp that I managed to install easily (i.e. with apt-get) on Debian 6.0 ARM was GCL. But it is a version of GCL that is five years old, and it does not come with ASDF.

I spent much time trying to compile clisp, but in the end I ended up with:

  > ( / 6 3)
  > ( / 5 2)
  Segmentation Fault

Not so fun. Significant parts of clisp are written in assembly (both a good thing and a bad thing), and I was not able to figure out whether it was supposed to work on ARM EABI at all, or just on the old ARM ABI. So after much struggle I gave up on clisp.

I managed to compile ECL from source – not completely without hassle, though. It comes with libffi, but I got compilation errors (the processor does not support…). So I downloaded libffi, compiled it myself and installed it in /opt/libffi. That was no problem, but I ended up making a symbolic link for the include directory myself:

kvaser@kvaser:/opt/libffi$ ls -l
total 8
lrwxrwxrwx 1 root root   40 Mar 11 16:52 include -> /opt/libffi/lib/libffi-3.0.10rc9/include
drwxr-xr-x 4 root root 4096 Mar 11 16:37 lib
drwxr-xr-x 4 root root 4096 Mar 11 16:37 share

Now I configured ecl with:

CPPFLAGS=-I/opt/libffi/include LDFLAGS=-L/opt/libffi/lib ./configure --prefix=/opt/ecl --with-dffi=auto

That worked, and compilation went fine until ecl_min could not be executed, because it could not find the libffi shared library. I tried to fix that for a while, but finally ended up making another symbolic link:

kvaser@kvaser:/usr/lib$ ls -l
lrwxrwxrwx 1 root root 31 Mar 11 19:56 -> /opt/libffi/lib/

After that, I ran make again to finish compilation. It went fine.

ECL, ASDF and cl-who
Now, where to put the Lisp HTML generation library cl-who? I copied the asd-file and the lisp-files to the ECL library folder and ran ecl as root:

kvaser@kvaser:~/lisp/cl-who-0.11.1$ sudo cp cl-who.asd /opt/ecl/lib/ecl-11.1.1/
kvaser@kvaser:~/lisp/cl-who-0.11.1$ sudo cp *.lisp /opt/ecl/lib/ecl-11.1.1/
kvaser@kvaser:~$ sudo /opt/ecl/bin/ecl
  ... ...
> (require 'asdf)

;;; Loading #P"/opt/ecl/lib/ecl-11.1.1/asdf.fas"
;;; Loading #P"/opt/ecl/lib/ecl-11.1.1/cmp.fas"
("ASDF" "CMP")

> (asdf:operate 'asdf:load-op :cl-who)    
  ... ...

Now cl-who is compiled and installed, ready to use. Next time, it will not need to be compiled.

Hello LISP
I wrote a little Hello World program:

kvaser@kvaser:~/lisp$ cat hello.lisp 
(format T "Hello Lisp~%")
kvaser@kvaser:~/lisp$ /opt/ecl/bin/ecl -load hello.lisp 
;;; Loading "/home/kvaser/lisp/hello.lisp"
Hello Lisp

Quite good (except that I already know the file was loaded, and the message disturbs my output – but whatever). How about compiling it?

kvaser@kvaser:~/lisp$ /opt/ecl/bin/ecl -compile hello.lisp 
;;; Loading #P"/opt/ecl/lib/ecl-11.1.1/cmp.fas"
;;; Compiling hello.lisp.
;;; OPTIMIZE levels: Safety=2, Space=0, Speed=3, Debug=0
;;; End of Pass 1.
;;; Note:
;;;   Invoking external command:
;;;   gcc -I. -I/opt/ecl/include/ -I/opt/libffi/include -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -g -O2 -fPIC -Dlinux -O2 -w -c hello.c -o hello.o 
;;; Note:
;;;   Invoking external command:
;;;   gcc -o hello.fas -L/opt/ecl/lib/ /home/kvaser/lisp/hello.o -Wl,--rpath,/opt/ecl/lib/ -shared -L/opt/libffi/lib -L/opt/libffi/lib -lecl -lgmp -lgc -lffi -ldl -lm 
;;; Finished compiling hello.lisp.
kvaser@kvaser:~/lisp$ ls
cl-who-0.11.1  cl-who.tar.gz  hello.fas  hello.lisp
kvaser@kvaser:~/lisp$ ./hello.fas 
Segmentation fault
kvaser@kvaser:~/lisp$ /opt/ecl/bin/ecl -load hello.fas 
;;; Loading "/home/kvaser/lisp/hello.fas"
Hello Lisp

Ok, how to make a standalone executable?

> (compile-file "hello.lisp" :system-p t)
  ... ...

> (c:build-program "hello" :lisp-files '("hello.o"))
  ... ...
> (quit)
kvaser@kvaser:~/lisp$ ls
hello  hello.fas  hello.lisp  hello.o
kvaser@kvaser:~/lisp$ time ./hello
Hello Lisp

real	0m3.084s
user	0m2.920s
sys	0m0.160s
kvaser@kvaser:~/lisp$ time /opt/ecl/bin/ecl -load hello.fas 
;;; Loading "/home/kvaser/lisp/hello.fas"
Hello Lisp

real	0m3.127s
user	0m3.060s
sys	0m0.080s
kvaser@kvaser:~/lisp$ time /opt/ecl/bin/ecl -load hello.lisp
;;; Loading "/home/kvaser/lisp/hello.lisp"
Hello Lisp

real	0m3.113s
user	0m2.960s
sys	0m0.160s

Clearly, some overhead is involved in invoking ECL. I compared to C:

kvaser@kvaser:~/lisp$ cat hello.c 
#include <stdio.h>

int main(int argc, char **argv) {
	printf("Hello C\n");
	return 0;
}
kvaser@kvaser:~/lisp$ gcc -o hello_c hello.c 
kvaser@kvaser:~/lisp$ time ./hello_c 
Hello C

real	0m0.012s
user	0m0.010s
sys	0m0.000s

So, I cannot use this method for CGI programming right away – each call to the web server would take at least 3 seconds.

Command Line Essentials: Text and Pipeline


What I will demonstrate now is extremely powerful. These are just examples that do nothing valuable by themselves, but when you need to get things done on a computer, the same technique can be very productive. You need to add your own creativity and experience to be truly successful.

An example:

essentials@kvaser:~$ ls -l /  |  cat -n
     1	total 109
     2	drwxr-xr-x  2 root root  4096 Feb 18 06:51 bin
     3	drwxr-xr-x  3 root root  1024 Feb 18 07:03 boot
     4	drwxr-xr-x 13 root root  2880 Feb 18 07:06 dev
     5	drwxr-xr-x 78 root root  4096 Mar  1 20:36 etc
     6	drwxr-xr-x  8 root root  4096 Feb 28 21:39 home
     7	drwxr-xr-x 10 root root  8192 Feb 18 06:51 lib
     8	drwxr-xr-x  2 root root 49152 Feb  2 21:01 lost+found
     9	drwxr-xr-x  3 root root  4096 Feb 17 21:39 media
    10	drwxr-xr-x  2 root root  4096 Jan 16 21:45 mnt
    11	drwxr-xr-x  2 root root  4096 Feb  2 21:27 opt
    12	dr-xr-xr-x 87 root root     0 Jan  1  1970 proc
    13	drwxr-xr-x  5 root root  4096 Mar  1 18:17 root
    14	drwxr-xr-x  2 root root  4096 Feb 18 06:52 sbin
    15	drwxr-xr-x  2 root root  4096 Sep 16  2008 selinux
    16	drwxr-xr-x  2 root root  4096 Feb  2 21:27 srv
    17	drwxr-xr-x 11 root root     0 Jan  1  1970 sys
    18	drwxrwxrwt  3 root root  4096 Mar  1 20:39 tmp
    19	drwxr-xr-x 10 root root  4096 Feb  2 21:27 usr
    20	drwxr-xr-x 14 root root  4096 Feb  2 22:38 var
essentials@kvaser:~$ ls -l /  |  cat -n  |  head -n 10  |  tail -n 5
     6	drwxr-xr-x  8 root root  4096 Feb 28 21:39 home
     7	drwxr-xr-x 10 root root  8192 Feb 18 06:51 lib
     8	drwxr-xr-x  2 root root 49152 Feb  2 21:01 lost+found
     9	drwxr-xr-x  3 root root  4096 Feb 17 21:39 media
    10	drwxr-xr-x  2 root root  4096 Jan 16 21:45 mnt

What happens? We first list the contents of the root directory and add line numbers to the output. Second, we choose lines 6-10 and output just those lines.

On the command line, you can interact with a program in different ways. The most common ways are displayed above:

  1. arguments: -l, -n 5, etc. (tell the program what to do)
  2. stdout: the output of a program (here, a list of directories in text format)
  3. stdin: input to a program (for cat, head and tail above, connected directly to the stdout of the previous program)

The programs themselves may not seem so powerful, but combined they can surprise you. A fundamental design principle of UNIX is:
“each program should do just one thing, but do that one thing well”
So, you might not find a single command that does exactly what you want. But by combining a few commands you can easily do advanced things that the designers of the programs never even thought of. Also, these standard programs are very old, very fast and of very high quality. You can trust them to do the job very well.

Now play a little with the ls-cat-head-tail example above, and make sure you understand exactly how it works.

If you want to know more about a command, read its man page (q to exit):

essentials@kvaser:~$ man head
essentials@kvaser:~$ man tail
essentials@kvaser:~$ man cat

A word of warning: the man pages are very detailed, but not very easy to read. If you are lucky you will find an example in the man page (or try Google). The tricky part is knowing which programs exist and roughly what they do – then the man pages can help with the details.

More examples: (space to scroll, q to exit)

essentials@kvaser:~$ find /  |  less

less makes it possible to look at large outputs page by page.

essentials@kvaser:~$ find /  |  grep txt  |  less
find: `/var/run/exim4': Permission denied
find: `/var/run/samba/winbindd_privileged': Permission denied

grep (by default) keeps the lines of the input that contain the word you search for. So, this command lists all files with txt in the filename on your filesystem (that you have permission to read). Note that errors are not written to stdout but to stderr, and stderr is not (by default) sent to the next command. That is why the error lines above were not filtered out by grep and less – they went straight to the terminal.
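You can see the two streams in action with a small experiment. /no/such/dir is a made-up path that makes find print an error on stderr; 2> /dev/null throws stderr away so only stdout reaches grep:

```shell
# without the redirect, find's error message would appear on the terminal too
find /etc/passwd /no/such/dir 2> /dev/null | grep passwd
# prints: /etc/passwd
```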

If you want to find only files with the extension .txt it gets a little trickier:

essentials@kvaser:~$ find /  |  grep "\.txt$"  |  less
find: `/var/run/exim4': Permission denied
find: `/var/run/samba/winbindd_privileged': Permission denied

The period character (.) is a special character in grep, so if you really want to match “.” you need to put a backslash before it. This is called escaping. The dollar character is also special: it matches “the end of the line”, which is exactly what we want, so we don't escape it.

Actually, “\.txt$” is a regular expression (often regexp). More about those later – they are super powerful.
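A quick way to convince yourself of the difference is to grep some made-up file names from a pipe:

```shell
# four invented file names, fed to grep via printf
printf 'notes.txt\nnotestxt\ntxt.file\nreadme.txt.bak\n' | grep txt | wc -l
# prints 4 - without escaping and anchoring, every line matches
printf 'notes.txt\nnotestxt\ntxt.file\nreadme.txt.bak\n' | grep "\.txt$"
# prints only: notes.txt
```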

A few more commands:

essentials@kvaser:~$ cat /etc/group  |  sort  |  head -n 5
essentials@kvaser:~$ cat /etc/group  |  sort  |  cut -d ':' -f 1  |  head -n 5
essentials@kvaser:~$ cat /etc/group  | wc -l

So, there is a file /etc/group (even in Cygwin, I think). First we sort it and output the top 5 rows. Second, we output only the first column (using cut). Third, we just count the lines.
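Here are the same three steps on a few made-up group lines, so the output is predictable (/tmp/groups.txt is just a scratch file for the example):

```shell
# build three sample "group file" lines
printf 'staff:x:50:\nadm:x:4:\nroot:x:0:\n' > /tmp/groups.txt
sort /tmp/groups.txt | cut -d ':' -f 1   # prints adm, root, staff - one per line
wc -l < /tmp/groups.txt                  # prints 3
```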

So, now use the programs demonstrated above, and improvise. You can use the files in the /etc directory as input data.

You can also play with output from

$ ps aux
$ w
$ /sbin/ifconfig  (or ifconfig, or ipconfig)
$ dig

Use: cat, head, tail, sort, cut, wc, grep.
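For example, combining cut, sort and grep on a real file (the exact output depends on your system, so none is shown here):

```shell
# which login shells are configured on this machine?
cut -d ':' -f 7 /etc/passwd | sort | grep sh
```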