Category Archives: Computers

QNAP TS 251+ for Container Station

After many years of running home server systems on Raspberry Pi, another one broke down and I decided I could do better, so I bought a QNAP TS 251+ with two WD 6TB Red drives (suitable for NAS).

My objective is mostly to use the QNAP with Container Station, running virtual machines on it.

First Impression

The TS 251+ looks professionally black, but it is all plastic. Installation went fine, but for someone with little computer experience I imagine it could be a bit scary. A few things to note:

  • It restarted several times for firmware upgrades, and restarting took some time
  • There are some “I accept privacy…”-things to accept. I guess it is fine. But one reason to get your own hardware instead of running in the cloud is knowing that your data stays private. So if you are paranoid, read the fine print or get into the details.
  • I suggest you familiarize yourself with RAID0, RAID1 and JBOD before you start it up.
  • I suggest you read about Static Volume, Thin Volume and Thick Volume, and make up your mind, before you start it up (I think Thin makes most sense, especially for use with Container Station).
  • The Web GUI is good – very “modern” – in a way that almost makes it feel like a desktop computer. A bit over-engineered and messy, if you ask me. There are very many features and details, and it is a bit intimidating and confusing at first.
  • Container Station is just what I want and need!
  • I find it silent and cool enough (44°C reported under load)
  • It automatically started a “RAID Synchronization” that takes about 24h with my drives. I guess it is fine, but with something new it makes me nervous enough that I hesitate to restart or reconfigure it while it is doing something low-level and important (see the sketch below for following the progress).
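If SSH access is enabled, the synchronization can presumably also be followed from the command line. A minimal sketch, assuming QTS exposes standard Linux md RAID underneath (which I believe it does):

# Show software RAID status, including resync progress and estimated time left
$ cat /proc/mdstat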

Memory Upgrade

This model comes with 2GB RAM. That is enough for basic NAS use, but not if you want to run Container Station comfortably. I have switched off most QNAP services and run a single LXC container with syncthing, using about 500MB of RAM, and the QNAP still complains there is little available RAM (and it uses swap). So I think it is safe to say that for Container Station or Virtualization Station, more RAM is recommended.
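A quick way to see the memory situation for yourself, assuming SSH access is enabled and that the standard Linux tool is present in QTS:

# Show total/used/free RAM and swap in megabytes
$ free -m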

Officially the max RAM is 8GB, but there are multiple reports of it working with 16GB as well. It also appears that you can use just one memory module (out of two); they don’t need to be installed in pairs.

So I bought 2x8GB and it seems to work perfectly:

  • Corsair DDR3L 1600MHz MACMEMORY
  • CMSA16GX3M2A1600C11

Virtualization

There are several virtualization options on the QNAP:

  • Virtualization Station: running real virtual machines (like VMware), emulating hardware. Just starting Virtualization Station used almost 1GB of RAM.
  • Container Station:
    • running (LXC) virtual Linux machines, sharing the host kernel instead of emulating hardware. This is much more light-weight, and it means the virtual machine shares disk and RAM with the host system (you do not need to allocate disk; all disk is available and shared for every virtual machine – they just live in separate folders)
    • running Docker containers (see the sketch after this list)
  • Linux Station: allowing the QNAP to work as a Linux Desktop.
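As an example of the Docker side, here is a sketch of how one might run syncthing (the service I use) as a container from the command line. The image name is the official one on Docker Hub; the /share/Container path is my assumption about where container data might live on the QNAP:

# Run syncthing in a Docker container (8384 = web GUI, 22000 = sync protocol)
$ docker run -d --name syncthing \
    -p 8384:8384 -p 22000:22000 \
    -v /share/Container/syncthing:/var/syncthing \
    syncthing/syncthing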

Apart from virtualization, the QNAP also allows you to install things like WordPress, Mediawiki, MySQL and other services as packages.

Network Cable Problems!

For months I was getting unstable behaviour from my QNAP due to a bad network cable. The short version is that the NIC went up and down; sometimes I had no IP. Things worked after a software reset, but not otherwise. I think I am quite good at diagnosing computer problems, but this one fooled me. I contacted support, they were patient, I was patient (however annoyed, because I was quite sure I had a defective QNAP), and after several days I discovered the problem (more or less by chance).

I believe the QNAP did not handle this unstable network very well. However, I can’t really blame it for my bad network cable, so I would not advise anyone against a QNAP because of these problems. Still, for some days I was very disappointed.

After replacing the network cable, the QNAP has worked perfectly fine and stable – I have no complaints whatsoever.

Container Station Problem

When I woke up in the morning it turned out my container was down. There was a message from the middle of the night:

 [Container Station] Created interface "lxcbr0" with errors. Unable to start DHCP service.   

I found this strange, I could not start Container Station again, and I found other people who had had this problem with no elegant solution. The problem was solved when I deleted the two virtual switches (docker0 and lxcbr0); Container Station creates them automatically when it starts.
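If the GUI will not cooperate and SSH access is enabled, something like this might be an alternative way to get rid of the stale bridges (a sketch; the bridge names are taken from the error message above, and I assume iproute2 is available in QTS):

# Tear down the stale bridges so Container Station can recreate them on next start
$ ip link set lxcbr0 down
$ ip link delete lxcbr0
$ ip link set docker0 down
$ ip link delete docker0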

I think my container may have crashed due to too little RAM in the middle of the night, and that somehow corrupted something.

Update and Problems 2020-02-29

One of my Container Station virtual machines had its clock out of sync. When I started investigating, I could not connect to the QNAP itself. The two virtual machines were up normally, but the QNAP itself was nowhere to be seen on the network. I restarted it (using the power button – I believe it shuts down properly), it came up and wanted a firmware update, which I immediately accepted. After that it did not come up (on the network) again.

I tried to reach it on 169.254.100.100 with no success.

I finally did a “reset” (using a paperclip on the rear side of the QNAP for 3-4s while it was already on). Following the reset it immediately appeared normally on the network. The password was “admin”.

However, the Container Station virtual machines did not start. I had to change their network settings to NAT, and then back to Bridge – then they worked. So it seems to me the virtual switch is not quite 100%.

All seems good now, but this took quite a while to figure out and fix. I bought a QNAP to get something much more stable and reliable than my old Raspberry Pi, but this was not impressive.

Update and Problems 2020-05-08

Much the same story as in February. The combination of

  • Container Station
  • Virtual Switches / Bridged connections for Container Station
  • Leaving the QNAP up for a long time

is simply a very bad idea. Somehow the network settings get corrupted, and the entire f***ing QNAP, its bloated UI, and the containers all suffer. It takes hours to fix. Here is a little picture of the pleasure of trying to get rid of a virtual switch once it is corrupt (this was after mostly waiting for the UI for an hour, achieving nothing).

My advice for now is simply not to use Bridged Network with containers, and to avoid creating virtual switches.

Dark Mode?

With macOS Mojave, Apple introduced Dark Mode, and some applications support it. I was mildly sceptical, thinking it was just some kind of fashion statement.

But there is an argument that goes: “if I am going to stare into a lamp all day, I want as much of it to be as dark as possible”. It makes some sense. You would not want to stare into a lamp in the first place, so why let your display default to white everywhere?

There is also an explanation for why we ended up here: designers are trained on printed design, which usually means white paper, so they prefer white backgrounds on computers as well, for aesthetic reasons. Not everyone is a designer, but we all mimic good design.

And you probably know that back in the old days computer displays were black with green text. So it is plausible that people who want to make computers modern and appealing prefer white displays, while people who are more nerdy or old-fashioned like darkness.

What I have written so far may seem logical, but it does not matter. What matters (from the perspective of a programmer) is:

  1. What is truly more ergonomic, for you?
  2. Is it enough to stick with either light or dark mode? Or should you switch depending on your surrounding environment?
  3. Can you get a consistently good dark mode experience? Otherwise it is mostly annoying and better avoided entirely.
  4. How do you design your product so it appeals to your customers?

Switching your OS to dark mode is easy. If you are using Xcode, Photoshop or some other product that supports dark mode, that is also easy. Terminal applications (frequently used by programmers) are highly customizable and often never left dark mode in the first place.
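On macOS Mojave the switch can even be scripted from the terminal; a little sketch (it may ask you to grant automation permissions the first time):

# Toggle macOS dark mode from the command line
$ osascript -e 'tell application "System Events" to tell appearance preferences to set dark mode to not dark mode'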

How about the browser? Well, not the browser itself, but the web pages and web applications it delivers to you. For Firefox and Chrome there is an extension called “Dark Reader”. It works reasonably well for me. Read the FAQ/manual when you install it!

A problem is adaptation: when my eyes are used to bright content, a dark page with white text is no problem. But when I am used to a dark display and the entire display suddenly turns white for some reason, it is unpleasant.

As a developer I can of course wonder: how do we want web pages to be built so they work nicely both in light and dark mode?

  1. Should each web page have its own dark mode (will never happen)?
  2. Should web pages follow good light-mode practices, so they look good when using a dark mode extension?
  3. Should any web pages be coded dark?

And as a developer, if my OS/desktop, development tools, terminal and web browser are all set to dark mode… what about the web application I am currently developing? I can’t write CSS where every refresh passes the result through a black-box dark-mode filter; that would be a very awkward development experience. So whenever I switch to the (web) application I am developing, the display will turn annoyingly white.

On Contrast

I had the idea that high contrast is easier on the eye, but I realise it is not. Absolutely white text on an absolutely black background is quite hard on my eyes, while light grey text on a dark grey background is quite comfortable. Apple Terminal comes with a few different (color) profiles, many of them surprisingly colorful. I imagine I don’t want the cognitive input that colors give me, it distracts my mind, but perhaps I am wrong about that.

Review: NUC vs Raspberry Pi

I like small, cheap, quiet computers… perhaps a little too much. For a long time I have used a Raspberry Pi V2 (quad core @ 900MHz, 1GB RAM) as a workstation. To be honest, I have not used it for web browsing – that is just too painful. But I have used it for programming, running multiple Node.js services, and a few other things.

Although there are many single-board computers, it is hard to find really good alternatives to the Raspberry Pi. When I looked into it, I found that Intel NUCs are very good options. So I decided to replace my RPi2 workstation with the cheapest NUC that money can currently buy: the NUC6CAY with a Celeron J3455 CPU. It sounds cheap, particularly for something server-like. The interesting thing about the J3455 is that it is actually quad core, with no hyper-threading. To me that sounds amazing!

I also have an older NUC, a 54250WYKH with an i5 CPU.

                   CPU      Cores          Clock                   RAM
Raspberry Pi V2    ARMv7    4 cores        900MHz                  1GB
NUC6CAY            Celeron  4 cores        1500MHz (2300 burst)    8GB
NUC 54250WYKH      i5       2 cores (HT)   1300MHz (2600 burst)    16GB

I/O is obviously superior on the NUCs (both using SSDs) versus the RPi v2 with a rotating disk connected over USB. But for my purposes I think I/O and the amount of RAM make little difference; it is more about raw CPU power.

Node.js / JavaScript
When it comes to different Node.js applications, it seems the older i5 is about twice as fast as the newer Celeron (for one core and one thread). I would say this is slightly disappointing (for the Celeron). On the other hand, the Celeron is about 10x faster than the RPi V2 when it comes to Node.js code, and that is a very good reason to use a NUC rather than a Raspberry Pi.
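The workloads I timed were my own applications, but a crude single-core benchmark along these lines gives a rough idea (a sketch, not what I actually measured):

# Time a CPU-bound JavaScript loop on each machine and compare
$ time node -e 'let s = 0; for (let i = 0; i < 1e9; i++) s += i; console.log(s)'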

Update 2018-02-11: after a few months
After a few months with the cheap NUC I came back to my RPi2, and the difference is… everything. I really like Raspberry Pis. I have built cases for them, bought cases for them, worked on them, made servers of them. But I really must say that a NUC makes more sense: it contains everything nicely and it is so much more powerful.

You can get a Celeron NUC with 2GB RAM and a 2.5″ disk for quite little money. And from there you can go all the way to a Core i7, 32GB RAM and two drives: M.2 + 2.5″. And check out the Hades Canyon NUC.

It is a pity there is basically nothing on the market like a NUC with an ARM, AMD, PowerPC or MIPS CPU. The only competition is the 4-year-old MacMini, which is an Intel machine anyway. If you find something cool, NUC-like and not Intel, feel free to post below.

Update 2018-02-28
I ran into a new problem on my RPi. It could be anything. My guess, which I will never be able to prove, is that it is a glitch made possible by using an SD card as the root device (and possibly questionable drivers/hardware for SD on the RPi).

Update 2018-04-09
Premier Farnell has introduced a Desktop Pi. Especially promising is that, together with a recent RPi, you can get rid of the SD card entirely and use only an SSD/HDD or even mSATA (over USB, I presume).

Update 2020-01-05
I decided to get a QNAP instead for my small server needs.

Update 2020-03-12
Another RPi (V2), this time with a 1TB WD PiDrive, failed to run syncthing properly. I installed syncthing on a new machine but could not sync everything from this RPi because of repeated crashes (I tried a few fixes that did not work). So the only RPi that remains in regular use is an RPi v1 running syncthing.

Gaming mouse, KVM and Linux

My ugly old Logitech mouse died after 10 years. For a long time I had been thinking about replacing it, without really knowing what to get instead.

I have a “das keyboard” and I want a mouse with the same build quality and feel, but without a million configurable buttons. I also have a KVM switch from Aten (using two computers with the same display, keyboard and mouse).

I bought a Corsair Katar mouse.

Findings:

  • When KVM-switching, it takes a few seconds for the mouse to start working.
  • The mouse is very fast at first. In Windows it slows down after a few seconds (I guess when the drivers and mouse profile kick in).
  • The mouse works just fine in Ubuntu, but it is too fast for my taste (even with the basic mouse configuration options at their slowest).

Perhaps I would have been better off with a sub-$10 noname mouse.

Update 2016-10-16
I found a way to slow down my mouse! This support post was useful, although my solution was slightly different.

First run:

$ xinput list
⎡ Virtual core pointer                    	id=2	[master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer              	id=4	[slave  pointer  (2)]
⎜   ↳ Corsair Corsair Gaming KATAR Mouse      	id=11	[slave  pointer  (2)]
⎜   ↳ Corsair Corsair Gaming KATAR Mouse      	id=12	[slave  pointer  (2)]
⎣ Virtual core keyboard                   	id=3	[master keyboard (2)]
    ↳ Virtual core XTEST keyboard             	id=5	[slave  keyboard (3)]
    ↳ Power Button                            	id=6	[slave  keyboard (3)]
    ↳ Video Bus                               	id=7	[slave  keyboard (3)]
    ↳ Power Button                            	id=8	[slave  keyboard (3)]
    ↳ Metadot - Das Keyboard Das Keyboard Model S	id=9	[slave  keyboard (3)]
    ↳ Metadot - Das Keyboard Das Keyboard Model S	id=10	[slave  keyboard (3)]

It turned out that changing properties on device 11 had no effect, but device 12 was the one that mattered.

The mouse parameters are obtained like this:

$ xinput list-props 12
Device 'Corsair Corsair Gaming KATAR Mouse':
	Device Enabled (142):	1
	Coordinate Transformation Matrix (144):	1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 3.000000
	Device Accel Profile (269):	0
	Device Accel Constant Deceleration (270):	1.000000
	Device Accel Adaptive Deceleration (271):	1.000000
	Device Accel Velocity Scaling (272):	10.000000
	Device Product ID (262):	6940, 6946
	Device Node (263):	"/dev/input/event6"
	Evdev Axis Inversion (273):	0, 0
	Evdev Axes Swap (275):	0
	Axis Labels (276):	"Rel X" (152), "Rel Y" (153), "Rel Vert Wheel" (268)
	Button Labels (277):	"Button Left" (145), "Button Middle" (146), "Button Right" (147), "Button Wheel Up" (148), "Button Wheel Down" (149), "Button Horiz Wheel Left" (150), "Button Horiz Wheel Right" (151), "Button Side" (266), "Button Extra" (267), "Button Forward" (291), "Button Back" (292), "Button Task" (293), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265)
	Evdev Scrolling Distance (278):	1, 1, 1
	Evdev Middle Button Emulation (279):	0
	Evdev Middle Button Timeout (280):	50
	Evdev Third Button Emulation (281):	0
	Evdev Third Button Emulation Timeout (282):	1000
	Evdev Third Button Emulation Button (283):	3
	Evdev Third Button Emulation Threshold (284):	20
	Evdev Wheel Emulation (285):	0
	Evdev Wheel Emulation Axes (286):	0, 0, 4, 5
	Evdev Wheel Emulation Inertia (287):	10
	Evdev Wheel Emulation Timeout (288):	200
	Evdev Wheel Emulation Button (289):	4
	Evdev Drag Lock Buttons (290):	0

Here, the “Coordinate Transformation Matrix” is the key to slowing the mouse down. The last parameter was 1.0 and is now 3.0, which seems to mean my mouse is a third as fast as it used to be. To set it:

xinput --set-prop 12 "Coordinate Transformation Matrix" 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 3.0

I suppose your mouse can go quite crazy if you change all those 0.0 values to something else. Good luck!
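Note that the setting is lost on logout or reboot. One way to make it stick, assuming your display manager sources ~/.xprofile (this varies between setups), is to put the command there:

# In ~/.xprofile: reapply the mouse deceleration at login.
# The numeric id can change between boots; check with `xinput list` if the mouse misbehaves.
xinput --set-prop 12 "Coordinate Transformation Matrix" 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 3.0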

Hackintosh – a first attempt

I really have no love for Windows 10, but I use it for Steam and a few games. For a long, long time people did not buy Apple computers because there were no games for them. Now I find there are more games than I could possibly want, but there is no Apple computer I want to buy to play games on:

  • MacBook Air: I have this one – it gets warm and noisy with games
  • MacMini: underpowered for games, and such little value, especially if you want more RAM
  • Mac Pro: it’s perfect, just far too expensive for replacing a Windows 10 machine
  • iMac: I already have a display and KVM connected to a Linux computer, and I don’t believe in throwing away the display because a hard drive breaks.

So I sound like my friends did 10-15 years ago: Macs are too expensive to play games on!

But then there is Hackintosh: an ordinary PC running OS X.
There is even a Buyer’s guide, and something like this would suit me well.

I decided to try to turn my current Windows 10 PC into a Hackintosh and followed the instructions.

It was a gamble all along:

  • My ASUS P8H67-M mainboard: some people seem to have had success with it, but it is not exactly a first choice.
  • My Radeon HD 6950 graphics card is not a good Hackintosh card at all. If I remove it I can fall back to the Intel HD 2000 that is integrated in the i5 CPU (or on the mainboard – I don’t know). That is also not a good Hackintosh GPU.

Anyway, I disconnected my Windows hard drives and connected a 60GB SSD to install OS X on. And for a while it was good. After some BIOS (UEFI) tweaking, I

  1. got the installer running
  2. installed OS X
  3. started my new OS X (from the install USB-key, since the bootloader was yet to be installed)
  4. played around in OS X, bragging about my feat

Audio was not working and video performance sucked, but ethernet worked and it was very usable.

I went on to install the bootloader and some drivers (using MultiBeast, following the instructions). This is where all my luck ended: MultiBeast reported that it failed.

I never managed to start OS X again – not the installed system, not the install USB-key. I tried:

  1. Removing all hard drives
  2. Resetting BIOS/UEFI settings, and trying many combinations
  3. Recreating the USB-key
  4. Removing my Radeon 6950 and falling back to the Intel HD 2000
  5. Removing files from the USB-key that contain “kernel cache” and the like
  6. Trying different boot options from Clover – both the standard menu and non-standard options that I found in forums
  7. Creating a UEFI-USB-key instead of a Legacy-USB-key

No success at all. I basically got this error.

In order to get things working in the first place I changed a few BIOS/UEFI settings:

  • SATA mode: IDE => AHCI
  • Serial: Disable

(I found no other relevant settings on my mainboard).

After changing IDE => AHCI, Windows did not boot. That was an expected and common problem, and I fixed it following some simple steps (forcing safe boot). It was after that that OS X never started again. I wonder if Windows did something to my mainboard/UEFI there that I cannot control or undo?

Update 2016-05-18
I found this post to follow. Much better now – I am writing this post from my Hackintosh.

In order to eliminate all possible old problems I deleted the first 10MB of the USB-key and hard drive using Linux:

dd if=/dev/zero of=/dev/sdX bs=1024 count=10240

Obviously replace sdX with your drive.
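To verify that the zeroing took, you can read the first block back and check that it is all zeroes (hexdump collapses the repeated zero lines into a single *):

# Should print a line of 00 bytes followed by *
$ sudo dd if=/dev/sdX bs=1024 count=1 | hexdump -C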

About my “working” configuration:

  • Legacy: the USB-key is Legacy; Clover is installed in Legacy-Root-mode.
  • MultiBeast: during installation, Step 5 (MultiBeast) fails, and I had to resort to Step 6.
  • Safe mode: my startup arguments are:

dart=0 kext-dev-mode=1 PCIRootUID=0 UseKernelCache=NO -x

  • I have twice rendered my system unbootable but fixed it with multiple restarts. I think it is the CustoMac Essentials that install some kexts that are not ok.
  • Audio is supposed to be ALC892 but it does not work, probably because the CustoMac Essentials fail.
  • Dual boot with Windows does not work. This was expected. Clover fails to start Windows (there is some limited success, but Windows does not make it all the way).
  • Clover Configurator: what was not so obvious was the config.plist. It finds 3 different ones on my system. The one that seems to be in use is /EFI/CLOVER/config.plist – so that is the one to edit. But you need to save your changed configuration to a new file, and then copy it into place using the command line and sudo (see the sketch below).
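Something along these lines worked (the Desktop path is just an example of wherever Clover Configurator saved the edited file):

# Copy the edited configuration over the one Clover actually reads
$ sudo cp ~/Desktop/config.plist /EFI/CLOVER/config.plist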

Ideas
Well, I have some ideas for how to get to a better situation.

  • Install everything NOT in Legacy mode, but use UEFI all the way. Perhaps that just fixes things. Or not. I anyway need to get into my UEFI/BIOS to switch back to booting Windows.
  • Changing graphics adapter: it could be the reason I have to be in safe mode. And safe mode could be the reason audio does not work. And so on.

Update
I tried removing my Radeon 6950, falling back to the HD 2000. That did not work: I could boot neither from my hard drive nor from the install USB-key. Putting the Radeon back in the computer did not work at first either, but after several reboots (also with the USB-key) OS X now starts up again (in safe mode).

I tried everything from the beginning with the HD 2000: erasing drives, disconnecting the Windows drives, upgrading the BIOS, resetting the BIOS, creating new USB-keys (both Legacy and UEFI) – but never did I manage to boot the installer using the HD 2000. So the ill-supported Radeon 6950 (which possibly restricts me from going beyond Safe Mode) works better than the integrated HD 2000.

I do understand the advantage of a “supported” mainboard that has all the recommended UEFI/BIOS settings.

Storage and filesystem performance test

I have lately been curious about the performance of low-end storage, and asked myself questions like:

  1. Raspberry Pi or Banana Pi? Is the SATA of the Banana Pi a deal breaker? Especially now that the Raspberry Pi has 4 cores, and I don’t mind if one of them is mostly occupied with USB I/O overhead.
  2. For a Chromebook or a MacBook Air where internal storage is fairly limited (or very expensive), how practical is it to use USB storage?
  3. Building OpenWRT buildroot requires a case-sensitive filesystem (disqualifying the standard Mac OS X filesystem) – is it feasible to use a USB device?
  4. The journalling feature of HFS+ and ext4 is probably a good idea. How does it affect performance?
  5. For USB drives and memory cards, which filesystems are better?
  6. Theoretical maximum throughput is usually not that interesting. I am more interested in actual performance (time to accomplish tasks), and I believe this is often limited more by latency and overhead than by throughput. Is it so?

Building OpenWRT on a MacBook Air
I tried building OpenWRT on a USB drive (with case-sensitive HFS+), and it turned out to be very slow. I did some structured testing by checking out the code, putting it in a tarball, and repeating:

   $ cd /external/disk
1  $ time cp ~/openwrt.tar . ; time sync
2  $ time tar -xf ~/openwrt.tar ; time sync   (total 17k files)
   $ make menuconfig                         (not benchmarked)
3  $ time make tools/install                  (+38k files, +715MB)

I did this on the internal SSD (this first step of the OpenWRT buildroot does not depend on case sensitivity), on an old rotating external 2.5″ USB2 drive, and on a cheap USB drive. I tried a few different filesystem combinations:

$ diskutil eraseVolume hfsx  NAME /dev/diskXsY   (non journaled case sensitive)
$ diskutil eraseVolume jhfsx NAME /dev/diskXsY   (journaled case sensitive)
$ diskutil eraseVolume ExFAT NAME /dev/diskXsY   (Microsoft ExFAT)

The results were (usually just a single run):

Drive and Interface                     Filesystem        time cp   time tar       time make
Internal 128GB SSD                      Journalled HFS+   5.4s                     16m13s
2.5″ 160GB USB2                         HFS+              3.1s      7.0s           17m44s
2.5″ 160GB USB2                         Journalled HFS+   3.1s      7.1s           17m00s
Sandisk Extreme 16GB USB Drive, USB3    HFS+              2.0s      6.9s           18m13s
Kingston DTSE9H 8GB USB Drive, USB2     HFS+              20-30s    1m40s-2m20s    1h
Kingston DTSE9H 8GB USB Drive, USB2     ExFAT             28.5s     15m52s         N/A

Findings:

  • Timings on USB drives were quite inconsistent over several runs (while internal SSD and hard drive were consistent).
  • The hard drive is clearly not the limiting factor in this scenario, when comparing the internal SSD to the external 2.5″ USB drive. Perhaps a restart between “tar xf” and “make” would have cleared the buffer caches, and the internal SSD would have come out better.
  • When it comes to USB drives: WOW, you get what you pay for! It turns out the Kingston is among the slowest USB drives that money can buy.
  • ExFAT? I don’t think so!
  • For HFS+ and OS X, journalling is not much of a problem

Building OpenWRT in Linux
I decided to repeat the tests on a Linux (Ubuntu x64) machine, this time building using two CPUs (make -j 2) to stress the storage a little more. The results were:

Drive and Interface                     Filesystem   real time               user time   system time
Internal SSD                            ext4         9m40s                   11m53s      3m40s
2.5″ 160GB USB2                         ext2         8m53s                   11m54s      3m38s
2.5″ 160GB USB2 (just after reboot)     ext2         9m24s                   11m56s      3m31s
Kingston DTSE9H 8GB USB Drive, USB2     ext2         11m36s + 3m48s (sync)   11m57s      3m44s

Findings:

  • The Linux block device layer almost eliminates the performance differences of the underlying storage.
  • The worse real time for the SSD is probably because of other processes taking CPU cycles.

My idea was to test connecting the 160GB drive directly via SATA, but given the results I saw no point in doing so.

More reading on flash storage performance
I found this very interesting article (linked to by the Gentoo people, of course). I think it explains a lot of what I have measured. I think even the slowest USB drives and memory cards would often be fast enough, if the OS handled them properly.

Conclusions
The results were not exactly what I expected. Clearly the I/O load during a build is too low to affect performance in a significant way (except for Mac OS X with a slow USB drive). Anyway, USB2 itself has not proved to be the weak link in my tests.

Bad OS X performance due to bad blocks

An unfortunate iMac suffered from file system corruption a while ago. It was reinstalled and worked fine for a while, but performance degraded and after some weeks the system was unusable. Startup was slow, and once up, it spent most of its time spinning the colorful wheel.

I realised the problem was that the hard drive (a good old rotating disk) had bad blocks, but this was not obvious to discover or fix from within Mac OS X.

However, an Ubuntu live DVD (or USB, I suppose) works perfectly with a Mac, and there the badblocks command proved useful. I did:

# badblocks -b 4096 -c 4096 -n -s /dev/sda

You probably want to make a backup of your system before doing this. Also, be aware that this command takes a long time (about 9h on my 500GB drive). The command tests both reading and writing to the hard drive, and it restores the data, so for a working drive it should be non-destructive. I used 16MB chunks (-b 4096 -c 4096, i.e. 4096 blocks of 4096 bytes at a time) because reading and writing in the small default chunks is slower.

On my first run, about 250 bad blocks were discovered.
On a second run, 0 bad blocks were discovered.

The theory here is that the hard drive should learn about its bad blocks and map around them. The computer is now reinstalled and works very well. I don’t know if it is a matter of days or weeks until the drive completely breaks, or if it will work fine for years. I will update this article in the future.
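One way to check whether the drive really has remapped sectors is to look at its SMART attributes (from the same Ubuntu live system; this assumes smartmontools is installed):

# Reallocated_Sector_Ct shows remapped blocks; Current_Pending_Sector shows still-suspect ones
$ sudo smartctl -A /dev/sda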

Finally, if you have a solid state drive (SSD)… I don’t know. I guess you can run this a lot on a rotating drive without issues, but I would expect it to shorten the life of an SSD (then again, if it has bad blocks causing you problems, what are your options?). For a USB-drive or SD-card… I doubt it is a good idea.

Conclusion
To be done…

Broken USB Drive

A friend had problems with a 250GB external Western Digital Passport USB drive. I connected it to Linux, and got:

[ 1038.640149] usb 3-5: new full-speed USB device number 4 using ohci-pci
[ 1038.823970] usb 3-5: device descriptor read/64, error -62
[ 1039.111652] usb 3-5: device descriptor read/64, error -62
[ 1039.391408] usb 3-5: new full-speed USB device number 5 using ohci-pci
[ 1039.575187] usb 3-5: device descriptor read/64, error -62
[ 1039.862954] usb 3-5: device descriptor read/64, error -62
[ 1040.142662] usb 3-5: new full-speed USB device number 6 using ohci-pci
[ 1040.550269] usb 3-5: device not accepting address 6, error -62
[ 1040.726092] usb 3-5: new full-speed USB device number 7 using ohci-pci
[ 1041.133774] usb 3-5: device not accepting address 7, error -62
[ 1041.133806] hub 3-0:1.0: unable to enumerate USB device on port 5

It turned out the USB/SATA-controller was broken, but the drive itself was healthy. I took the 2.5″ SATA-drive out of the enclosure and connected it to another SATA-controller – all seems fine.

USB Drives, dd, performance and No space left

Please note: sudo dd is a very dangerous combination. A little typing error and all your data can be lost!

I like to make copies and backups of disk partitions using dd. USB drives sometimes do not behave very nicely.

In this case I had created a slightly-less-than-2GB FAT32 partition on a USB memory and made it Lubuntu-bootable, with a 1GB file for saving changes to the live filesystem. The partition table is shown below.

It seems I forgot to change the partition type to FAT32 (it still says 83/Linux), but it is formatted with FAT32 and that seems to work fine 😉

$ sudo /sbin/fdisk -l /dev/sdc

Disk /dev/sdc: 4004 MB, 4004511744 bytes
50 heads, 2 sectors/track, 78213 cylinders, total 7821312 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000f3a78

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *        2048     3700000     1848976+  83  Linux

I wanted to make an image of this USB drive that I could write to other USB drives. That is why I made the partition/filesystem significantly below 2GB, so that all 2GB USB drives should work. This is how I created the image:

$ sudo dd if=/dev/sdb of=lubuntu.img bs=512 count=3700000

So, now I had a 1.85GB file (3700000 sectors × 512 bytes) named lubuntu.img, ready to write back to another USB drive. That was when the problems began:

$ sudo dd if=lubuntu.img of=/dev/sdb
dd: writing to ‘/dev/sdb’: No space left on device
2006177+0 records in
2006176+0 records out
1027162112 bytes (1.0 GB) copied, 24.1811 s, 42.5 MB/s

Very fishy! The write speed (42.5MB/s) is obviously too high for a USB2 drive, and the drive is 4GB, not 1GB, so it should not be full. I tried with several (identical) USB drives – same problem. This has never happened to me before.

I changed strategy and made an image of just the partition table, and another image of the partition:

$ sudo dd if=/dev/sdb of=lubuntu.sdb bs=512 count=1
$ sudo dd if=/dev/sdb1 of=lubuntu.sdb1

…and restoring to another drive… first the partition table:

$ sudo dd if=lubuntu.sdb of=/dev/sdb

Then remove and re-insert the USB drive, and make sure it does not mount automatically before you proceed with the partition.

$ sudo dd if=lubuntu.sdb1 of=/dev/sdb1

That worked! However, the write speed to USB drives usually slows down as more data is written (in one chunk, somehow). I have noticed this before with other computers and other USB drives. I guess USB drives have some internal mapping table that does not like big files.

Finally, to measure the progress of the dd command, send it the USR1 signal:

$ sudo kill -USR1 <PID OF dd PROCESS>
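On systems newer than the Ubuntu 13.10 used here (GNU coreutils 8.24 and later), dd can also report progress by itself:

# Print ongoing transfer statistics while writing
$ sudo dd if=lubuntu.img of=/dev/sdb status=progress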

The above behaviour was noticed on x86 Ubuntu 13.10.

Streaming media on the Mac : Ace Player HD

There are many great reasons to use a Mac. Easy access to proprietary Windows software isn’t one of them. Watching sports online usually involves one of a few technologies:

  • Flash: Works fine on a Mac
  • Sopcast: There is a native Mac client these days
  • Acestream: No native client available

As more and more events are streamed using Acestream (free as in beer, for Windows), being able to take part in these streams would be great. And using Bootcamp and rebooting isn’t really a viable option…

I was able to follow the instructions on this web page to wrap Ace Player HD (itself wrapping VLC) using WineBottler to get it all to work. All the information is in the thread, but as it spans many months, it isn’t quite clear which hints worked and which did not. Below is a little summary of what I did to make it work on Mac OS X 10.9.1:

Follow this instruction post: http://forum.wiziwig.eu/threads/87110-MAC-OSX-Acestream-2-1-5-3?p=1664117#post1664117

Winetricks were critical.

What you need:

  • Ace_Stream_Media (Ace Player HD 2.1.9 (VLC 2.0.5)). As pointed out in some posts, more recent versions DON’T work. Perhaps they do now, but this combo at least worked fine.
  • WineBottlerCombo_1.7.11.dmg (the post suggests 1.7.9; I used 1.7.11 with no problems)

What you don’t need:

  • Registry hacks

Where I got stuck (and how I solved it)

  • Streams working fine, but the picture was very choppy (a few fps). Fixed by switching to OpenGL in the VLC config: http://i42.tinypic.com/20q10ew.jpg
  • Engine fails to start with some strange error: reboot (yes…)


Final notes

When shutting down the app, you also need to exit the engine. You do this by right-clicking the little “Windowsy” icon in your menu bar and choosing Quit. It takes 20 or so seconds before everything shuts down (the wine and wineserver processes).