Category Archives: Computers

Gaming mouse, KVM and Linux

My ugly old Logitech mouse died after 10 years. For a long time I had been thinking about replacing it, without really knowing what to get instead.

I have a “das keyboard” and I want a mouse with the same build quality and feel, but without a million configurable buttons. I also have an Aten KVM switch (sharing one display, keyboard and mouse between two computers).

I bought a Corsair Katar mouse.

Findings:

  • When KVM-switching, it takes a few seconds for the mouse to start working.
  • The mouse is very fast at first. In Windows it slows down after a few seconds (I guess when the drivers and mouse profile kick in).
  • The mouse works just fine in Ubuntu, but it is too fast for my taste (even with the basic mouse configuration options set at their slowest).

Perhaps I would have been better off with a sub-$10 no-name mouse.

Update 2016-10-16
I found a way to slow down my mouse! This support post was useful, although my solution was slightly different.

First run:

$ xinput list
⎡ Virtual core pointer                    	id=2	[master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer              	id=4	[slave  pointer  (2)]
⎜   ↳ Corsair Corsair Gaming KATAR Mouse      	id=11	[slave  pointer  (2)]
⎜   ↳ Corsair Corsair Gaming KATAR Mouse      	id=12	[slave  pointer  (2)]
⎣ Virtual core keyboard                   	id=3	[master keyboard (2)]
    ↳ Virtual core XTEST keyboard             	id=5	[slave  keyboard (3)]
    ↳ Power Button                            	id=6	[slave  keyboard (3)]
    ↳ Video Bus                               	id=7	[slave  keyboard (3)]
    ↳ Power Button                            	id=8	[slave  keyboard (3)]
    ↳ Metadot - Das Keyboard Das Keyboard Model S	id=9	[slave  keyboard (3)]
    ↳ Metadot - Das Keyboard Das Keyboard Model S	id=10	[slave  keyboard (3)]

It turned out that changing device 11 had no effect, but device 12 did the trick.

The mouse parameters are listed with:

$ xinput list-props 12
Device 'Corsair Corsair Gaming KATAR Mouse':
	Device Enabled (142):	1
	Coordinate Transformation Matrix (144):	1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 3.000000
	Device Accel Profile (269):	0
	Device Accel Constant Deceleration (270):	1.000000
	Device Accel Adaptive Deceleration (271):	1.000000
	Device Accel Velocity Scaling (272):	10.000000
	Device Product ID (262):	6940, 6946
	Device Node (263):	"/dev/input/event6"
	Evdev Axis Inversion (273):	0, 0
	Evdev Axes Swap (275):	0
	Axis Labels (276):	"Rel X" (152), "Rel Y" (153), "Rel Vert Wheel" (268)
	Button Labels (277):	"Button Left" (145), "Button Middle" (146), "Button Right" (147), "Button Wheel Up" (148), "Button Wheel Down" (149), "Button Horiz Wheel Left" (150), "Button Horiz Wheel Right" (151), "Button Side" (266), "Button Extra" (267), "Button Forward" (291), "Button Back" (292), "Button Task" (293), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265), "Button Unknown" (265)
	Evdev Scrolling Distance (278):	1, 1, 1
	Evdev Middle Button Emulation (279):	0
	Evdev Middle Button Timeout (280):	50
	Evdev Third Button Emulation (281):	0
	Evdev Third Button Emulation Timeout (282):	1000
	Evdev Third Button Emulation Button (283):	3
	Evdev Third Button Emulation Threshold (284):	20
	Evdev Wheel Emulation (285):	0
	Evdev Wheel Emulation Axes (286):	0, 0, 4, 5
	Evdev Wheel Emulation Inertia (287):	10
	Evdev Wheel Emulation Timeout (288):	200
	Evdev Wheel Emulation Button (289):	4
	Evdev Drag Lock Buttons (290):	0

Here, the “Coordinate Transformation Matrix” is the key to slowing the mouse down. Pointer coordinates are multiplied by this matrix, and the last element acts as a common divisor, so changing it from 1.0 to 3.0 means my mouse is a third as fast as it used to be. To set it:

xinput --set-prop 12 "Coordinate Transformation Matrix" 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 3.0

I suppose your mouse can go quite crazy if you change all those 0.0 values to something else. Good luck!
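
To make the slowdown survive a restart, the same command can go into an X startup file. A minimal sketch, assuming your display manager sources ~/.xsessionrc and that the device keeps id 12 (device ids can change between sessions, so re-check with xinput list first):

# in ~/.xsessionrc (assumption: sourced at X session start)
xinput --set-prop 12 "Coordinate Transformation Matrix" 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 3.0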

Hackintosh – a first attempt

I really have no love for Windows 10, but I use it for Steam and a few games. For a long time, people did not buy Apple computers because there were no games for them. Now I find there are more games than I could possibly want, but there is no Apple computer I want to buy to play games on:

  • MacBook Air: I have this one – it gets warm and noisy with games
  • Mac mini: underpowered for games, and very little value, especially if you want more RAM
  • Mac Pro: it’s perfect, just far too expensive as a replacement for a Windows 10 machine
  • iMac: I already have a display and a KVM connected to a Linux computer, and I don’t believe in throwing away the display because a hard drive breaks.

So I sound like my friends did 10-15 years ago: Macs are too expensive to play games on!

But then there is Hackintosh: an ordinary PC running OS X.
There is even a Buyer’s guide, and something like this would suit me well.

I decided to try to turn my current Windows 10 PC into a Hackintosh and followed the instructions.

It was a gamble all along:

  • My ASUS P8H67-M mainboard: some people seem to have had success with it, but it is not exactly a first choice.
  • My Radeon HD 6950 graphics card is not a good Hackintosh card at all. If I remove it I can fall back to the Intel HD 2000 that is integrated in the i5 CPU (or on the mainboard – I don’t know). That is also not a good Hackintosh GPU.

Anyway, I disconnected my Windows hard drives and connected a 60GB SSD to install OS X. And for a while it was good. Some BIOS (UEFI) tweaking, and I

  1. got the installer running
  2. installed OS X
  3. started my new OS X (from the install USB key, since the bootloader was yet to be installed)
  4. played around in OS X, bragging about my feat

Audio was not working and video performance sucked, but ethernet worked and it was very usable.

I went on to install the bootloader and some drivers (using MultiBeast, following the instructions). This is where all my luck ended: MultiBeast reported that it failed.

I never managed to start OS X again – not the installed system, not the install USB key. I tried:

  1. Removing all hard drives
  2. Resetting BIOS/UEFI settings and trying many combinations
  3. Recreating the USB key
  4. Removing my Radeon 6950 and falling back to the Intel HD 2000
  5. Removing files from the USB key that contain “kernel cache” and the like
  6. Trying different boot options from Clover – both the standard menu and non-standard options that I found in forums
  7. Creating a UEFI USB key instead of a Legacy USB key

No success at all. I basically got this error.

In order to get things working in the first place I changed a few BIOS/UEFI settings:

  • SATA mode: IDE => AHCI
  • Serial: Disable

(I found no other relevant settings on my mainboard).

After changing IDE => AHCI, Windows did not boot. That was an expected and common problem, and I fixed it by following some simple steps (forcing safe boot). It was after that that OS X never started again. I wonder if Windows did something there to my mainboard/UEFI that I cannot control or undo?

Update 2016-05-18
I found this post to follow. Much better now – I am writing this post from my Hackintosh.

In order to eliminate all possible old problems, I deleted the first 10MB of the USB key and the hard drive using Linux:

dd if=/dev/zero of=/dev/sdX bs=1024 count=10240

Obviously replace sdX with your drive.
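
To double-check which device is the USB key before zeroing anything, something like this helps (lsblk comes with util-linux on most distributions):

$ lsblk -o NAME,SIZE,MODEL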

About my “working” configuration:

  • Legacy: the USB key is Legacy, and Clover is installed in Legacy-Root mode.
  • MultiBeast: during installation, Step 5 (MultiBeast) fails, and I had to resort to Step 6.
  • Safe mode: my startup arguments are:

dart=0 kext-dev-mode=1 PCIRootUID=0 UseKernelCache=NO -x

  • I have twice rendered my system unbootable, but fixed it with multiple restarts. I think it is CustoMac Essentials that installs some kexts that are not OK.
  • Audio is supposed to be ALC892, but it does not work – probably because CustoMac Essentials fails.
  • Dual boot with Windows does not work. This was expected. Clover fails to start Windows (there is some limited success, but Windows does not make it all the way).
  • Clover Configurator: what was not so obvious was the config.plist. It finds 3 different ones on my system. The one that seems to be in use is /EFI/CLOVER/config.plist – so that is the one to edit. But you need to save your changed configuration to a new file, and then copy it into place using the command line and sudo (a sketch follows below).
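
For that last step, a sketch of the copy – assuming the EFI partition is mounted at /Volumes/EFI and the edited file was saved to the Desktop (adjust both paths to your system):

$ sudo cp ~/Desktop/config.plist /Volumes/EFI/EFI/CLOVER/config.plist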

Ideas
Well, I have some ideas for how to get to a better situation.

  • Install everything NOT in Legacy mode, but use UEFI all the way. Perhaps that just fixes things. Or not. I need to get into my UEFI/BIOS anyway to change to booting Windows.
  • Changing graphics adapter: it could be the reason I have to be in safe mode. And safe mode could be the reason audio does not work. And so on.

Update
I tried removing my Radeon 6950 and falling back to the HD 2000. That did not work: I could boot neither from my hard drive nor from the install USB key. Putting the Radeon back in the computer did not work at first either, but after several reboots (also with the USB key) OS X now starts up again (in safe mode).

I tried everything from the beginning with the HD 2000: erasing drives, disconnecting the Windows drives, upgrading the BIOS, resetting the BIOS, creating a new USB key (both Legacy and UEFI) – never did I manage to boot the installer using the HD 2000. So the ill-supported Radeon 6950 (which possibly restricts me to Safe Mode) works better than the integrated HD 2000.

I do understand the advantage of a “supported” mainboard that has all the recommended UEFI/BIOS settings.

Storage and filesystem performance test

I have lately been curious about the performance of low-end storage, and have asked myself questions like:

  1. Raspberry Pi or Banana Pi? Is the SATA port of the Banana Pi a deal breaker? Especially now that the Raspberry Pi has 4 cores, and I don’t mind if one of them is mostly occupied with USB I/O overhead.
  2. For a Chromebook or a MacBook Air, where internal storage is fairly limited (or very expensive), how practical is it to use USB storage?
  3. Building OpenWRT (buildroot) requires a case-sensitive filesystem (disqualifying the standard Mac OS X filesystem) – is it feasible to use a USB device?
  4. The journalling feature of HFS+ and ext4 is probably a good idea. How does it affect performance?
  5. For USB drives and memory cards, which filesystems are better?
  6. Theoretical maximum throughput is usually not that interesting. I am more interested in actual performance (time to accomplish tasks), and I believe this is often limited more by latency and overhead than by throughput. Is it so?

Building OpenWRT on Mac Book Air
I tried building OpenWRT on a USB drive (with case-sensitive HFS+), and it turned out to be very slow. So I did some structured testing: I checked out the code, put it in a tarball, and repeated:

   $ cd /external/disk
1  $ time cp ~/openwrt.tar . ; time sync
2  $ time tar -xf ~/openwrt.tar ; time sync   (total 17k files)
   $ make menuconfig                          (not benchmarked)
3  $ time make tools/install                  (+38k files, +715MB)

I did this on the internal SSD (this first step of the OpenWRT buildroot does not depend on case sensitivity), on an old external rotating 2.5″ USB hard drive, and on a cheap USB drive. I tried a few different filesystem combinations:

$ diskutil eraseVolume hfsx  NAME /dev/diskXsY   (non journaled case sensitive)
$ diskutil eraseVolume jhfsx NAME /dev/diskXsY   (journaled case sensitive)
$ diskutil eraseVolume ExFAT NAME /dev/diskXsY   (Microsoft ExFAT)
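
To find the right diskXsY identifier in the first place:

$ diskutil list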

The results were (usually just a single run):

Drive and Interface                  Filesystem       time cp  time tar     time make
Internal 128GB SSD                   Journalled HFS+  –        5.4s         16m13s
2.5″ 160GB USB2                      HFS+             3.1s     7.0s         17m44s
2.5″ 160GB USB2                      Journalled HFS+  3.1s     7.1s         17m00s
Sandisk Extreme 16GB USB Drive USB3  HFS+             2.0s     6.9s         18m13s
Kingston DTSE9H 8GB USB Drive USB2   HFS+             20-30s   1m40s-2m20s  1h
Kingston DTSE9H 8GB USB Drive USB2   ExFAT            28.5s    15m52s       N/A

Findings:

  • Timings on the USB drives were quite inconsistent over several runs (while the internal SSD and hard drive were consistent).
  • The hard drive is clearly not the limiting factor in this scenario, when comparing the internal SSD to the external 2.5″ USB drive. Perhaps a restart between “tar xf” and “make” would have cleared the buffer caches and the internal SSD would have come out better (see the note after this list).
  • When it comes to USB drives: WOW, you get what you pay for! It turns out the Kingston is among the slowest USB drives that money can buy.
  • ExFAT? I don’t think so!
  • For HFS+ on OS X, journalling is not much of a problem.
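
As a note to the second finding: on OS X the buffer cache can be flushed without a restart using the purge command (bundled with the system or the developer tools, depending on OS X version), which would have made the comparison fairer:

$ sync && sudo purge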

Building OpenWRT in Linux
I decided to repeat the tests on a Linux (Ubuntu x64) machine, this time building using two CPUs (make -j 2) to stress the storage a little more. The results were:

Drive and Interface                  Filesystem  real time             user time  system time
Internal SSD                         ext4        9m40s                 11m53s     3m40s
2.5″ 160GB USB2                      ext2        8m53s                 11m54s     3m38s
2.5″ 160GB USB2 (just after reboot)  ext2        9m24s                 11m56s     3m31s
Kingston DTSE9H 8GB USB Drive USB2   ext2        11m36s +3m48s (sync)  11m57s     3m44s

Findings:

  • The Linux block device layer almost eliminates the performance differences of the underlying storage (a way to defeat the page cache without rebooting is sketched below).
  • The worse real time for the SSD is probably due to other processes taking CPU cycles.
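
For reference, the “just after reboot” condition can be reproduced on Linux without rebooting, by dropping the page cache (needs root):

$ sync
$ echo 3 | sudo tee /proc/sys/vm/drop_caches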

My idea was to test connecting the 160GB drive directly via SATA, but given the results I saw no point in doing so.

More reading on flash storage performance
I found this very interesting article (linked to by the Gentoo people, of course). I think it explains a lot of what I have measured. I think even the slowest USB drives and memory cards would often be fast enough, if the OS handled them properly.

Conclusions
The results were not exactly what I expected. Clearly the I/O load during the build is too low to affect performance in a significant way (except for Mac OS X with a slow USB drive). Anyway, USB2 itself has not proved to be the weak link in my tests.

Bad OS X performance due to bad blocks

An unfortunate iMac suffered from filesystem corruption a while ago. It was reinstalled and worked fine for a while, but performance degraded, and after some weeks the system was unusable. Startup was slow, and once up, the system spent most of its time spinning the colorful wheel.

I realised the problem was that the hard drive (a good old rotating disk) had bad blocks, but this was not obvious to discover or fix from within Mac OS X.

However, an Ubuntu live DVD (or USB, I suppose) works perfectly with a Mac, and there the badblocks command proved useful. I did:

# badblocks -b 4096 -c 4096 -n -s /dev/sda

You probably want to make a backup of your system before doing this. Also, be aware that this command takes a long time (about 9 hours on my 500GB drive). The -n flag gives a non-destructive read-write test: the command tests both reading and writing to the hard drive, but restores the original data, so for a working drive it should be safe. I use -b 4096 -c 4096 to work in 16MB chunks, because reading and writing the default 1024-byte blocks is slower.

On my first run, about 250 bad blocks were discovered.
On a second run, 0 bad blocks were discovered.

The theory here is that the hard drive should learn about its bad blocks and map around them. The computer is now reinstalled and works very well. I don’t know if it is a matter of days or weeks until the drive completely breaks, or if it will work fine for years now. I will update this article in the future.
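
One way to check whether the drive really has remapped sectors is to look at its SMART data (assuming smartmontools is installed; attribute names vary between vendors):

$ sudo smartctl -A /dev/sda

The Reallocated_Sector_Ct and Current_Pending_Sector attributes are the interesting ones here.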

Finally, if you have a solid state drive (SSD)… I don’t know. I guess you can run this a lot on a rotating drive without issues, but I would expect it to shorten the life of an SSD (then again, if it has bad blocks causing you problems, what are your options?). For a USB drive or SD card… I doubt it is a good idea.

Conclusion
To be done…

Broken USB Drive

A friend had problems with a 250GB external Western Digital Passport USB drive. I connected it to Linux and got:

[ 1038.640149] usb 3-5: new full-speed USB device number 4 using ohci-pci
[ 1038.823970] usb 3-5: device descriptor read/64, error -62
[ 1039.111652] usb 3-5: device descriptor read/64, error -62
[ 1039.391408] usb 3-5: new full-speed USB device number 5 using ohci-pci
[ 1039.575187] usb 3-5: device descriptor read/64, error -62
[ 1039.862954] usb 3-5: device descriptor read/64, error -62
[ 1040.142662] usb 3-5: new full-speed USB device number 6 using ohci-pci
[ 1040.550269] usb 3-5: device not accepting address 6, error -62
[ 1040.726092] usb 3-5: new full-speed USB device number 7 using ohci-pci
[ 1041.133774] usb 3-5: device not accepting address 7, error -62
[ 1041.133806] hub 3-0:1.0: unable to enumerate USB device on port 5

It turned out the USB/SATA controller was broken, but the drive itself was healthy. I took the 2.5″ SATA drive out of the enclosure, connected it to another SATA controller, and all seems fine.

USB Drives, dd, performance and No space left

Please note: sudo dd is a very dangerous combination. A little typing error and all your data can be lost!

I like to make copies and backups of disk partitions using dd. USB drives sometimes do not behave very nicely.

In this case I had created a FAT32 partition of a little under 2GB on a USB drive and made it Lubuntu-bootable, with a 1GB file for saving changes to the live filesystem. The partition table (it seems I forgot to change the partition type to FAT32 – it still says Linux – but the filesystem is FAT32 and that works fine 😉):

$ sudo /sbin/fdisk -l /dev/sdc

Disk /dev/sdc: 4004 MB, 4004511744 bytes
50 heads, 2 sectors/track, 78213 cylinders, total 7821312 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000f3a78

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *        2048     3700000     1848976+  83  Linux

I wanted to make an image of this USB drive that I could write to other USB drives. That is why I made the partition/filesystem significantly smaller than 2GB, so that all 2GB USB drives should work. This is how I created the image:

$ sudo dd if=/dev/sdb of=lubuntu.img bs=512 count=3700000

So, now I had a 1.85GB file named lubuntu.img, ready to write back to another USB drive. That was when the problems began:

$ sudo dd if=lubuntu.img of=/dev/sdb
dd: writing to ‘/dev/sdb’: No space left on device
2006177+0 records in
2006176+0 records out
1027162112 bytes (1.0 GB) copied, 24.1811 s, 42.5 MB/s

Very fishy! The write speed (42.5MB/s) is obviously too high for a USB2 drive, and the USB drive is 4GB, not 1GB. I tried with several (identical) USB drives – same problem. This had never happened to me before.
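
A quick sanity check in this situation is to compare the image size with the size of the target device (blockdev comes with util-linux):

$ ls -l lubuntu.img
$ sudo blockdev --getsize64 /dev/sdb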

I changed strategy and made an image of just the partition table, and another image of the partition:

$ sudo dd if=/dev/sdb of=lubuntu.sdb bs=512 count=1
$ sudo dd if=/dev/sdb1 of=lubuntu.sdb1

…and restoring to another drive… first the partition table:

$ sudo dd if=lubuntu.sdb of=/dev/sdb

Then remove and re-insert the USB drive, so the kernel re-reads the new partition table, and make sure it does not mount automatically before you proceed with the partition:

$ sudo dd if=lubuntu.sdb1 of=/dev/sdb1

That worked! However, the write speed to USB drives usually slows down as more data is written (in one chunk, somehow). I have noticed this before with other computers and other USB drives. I guess USB drives have some internal mapping table that does not like big files.

Finally, to measure the progress of the dd command, send it the USR1 signal:

$ sudo kill -USR1 <PID OF dd PROCESS>

The above behaviour was observed on x86 Ubuntu 13.10.

Streaming media on the Mac : Ace Player HD

There are many great reasons to use a Mac. Easy access to proprietary Windows software isn’t one of them. Watching sports online usually involves one of a few technologies:

  • Flash: Works fine on a Mac
  • Sopcast: There is a native Mac client these days
  • Acestream: No native client available

As more and more events are being streamed using Acestream (free as in beer, for Windows), being able to take part in these streams would be great. And using Boot Camp and rebooting isn’t really a viable option…

I was able to follow the instructions on this web page to wrap Ace Player HD (itself wrapping VLC) using WineBottler and get it all to work. All the information is in the thread, but as it spans many months, it isn’t quite clear which hints worked and which didn’t. Below is a little summary of what I did to make it work on Mac OS X 10.9.1:

Follow this instruction post: http://forum.wiziwig.eu/threads/87110-MAC-OSX-Acestream-2-1-5-3?p=1664117#post1664117

Winetricks were critical.

What you need:

  • Ace_Stream_Media (Ace Player HD 2.1.9 (VLC 2.0.5)). As pointed out in some posts, more recent versions do NOT work. Perhaps they do now, but this combination at least worked fine.
  • WineBottlerCombo_1.7.11.dmg (the post suggests 1.7.9; I used 1.7.11 with no problems)

What you don’t need:

  • Registry hacks

Where I got stuck (and how I solved it)

  • Streams working fine, but the picture is very choppy (a few fps). Fixed by switching to OpenGL in the VLC config: http://i42.tinypic.com/20q10ew.jpg
  • The engine fails to start with some strange error: reboot (yes…)

Final notes

When shutting down the app, you also need to exit the engine. You do this by right-clicking the little “Windowsy” icon in your menu bar and choosing Quit. It takes 20 or so seconds before everything shuts down (the wine and wineserver processes).

Pure 64-bit AMD/Intel x64 CPU? Ever?

The big CPU wars seem to be over. Itanium is sinking. Alpha has long been abandoned. POWER is IBM-only, and PowerPC is barely a niche product. SPARC is never going to be what it was.

We are left with MIPS (the Chinese seem to believe in it, and it is a good architecture, so let’s have some more patience before we consider it dead).

We are left with ARM – the lightest and simplest architecture of them all. And ARM is about to go 64-bit (but very likely, ARM will above all remain relevant and dominant in devices where 32 bits is very much enough).

And we are left with x86 (32-bit) and amd64/x86-64 (AMD’s original branding, and Intel’s branding of the same thing after they licensed it).

From an architectural perspective, x86 is the worst of all these CPUs, and for rational reasons many thought it would one day be defeated by RISC (PowerPC, MIPS) or Itanium. But x86 persisted, because Intel (and AMD) were good at manufacturing fast units at a good price, driven by an enormous demand for CPUs running Windows and Windows applications.

Now, finally, x86 is getting less relevant, more than 25 years after its first 32-bit incarnation: the 386 in 1985 (followed by the 486, Pentium, Pentium MMX, Pentium Pro, Pentium II, Pentium III, Pentium 4… and then it just gets too complicated – of course the AMD K5, K6 and Athlon deserve to be mentioned as well, and there was also Cyrix in the beginning, and later VIA). x86 is now losing, but not to anything less than its own successor, x86-64.

I read there are those who want to drop x86 support (in Linux distributions), and as I have written elsewhere, support for pure x86 is limited in Ubuntu (which requires PAE). I believe x86 is here to stay – indefinitely (well, not really). There is always the legacy and embedded market, and there will be a need for it.

But is there really a need for amd64/x86-64 to retain 32-bit (and 16-bit) backwards compatibility? I hear Windows 9 is going to be 64-bit only.

Of course there are always legacy 32-bit applications to run on 64-bit Windows – but those will become fewer and fewer, and they can be emulated. Perhaps not a good idea today, but in a few years. And I guess Intel will not repeat the mistake of Itanium (where x86 was only supported in software).

Of course there will always be a need to run a few old 32-bit OSes, but x86 boards can be built and sold for that purpose.

My idea is, of course, that a pure 64-bit Intel i7 or Xeon would be able to use its silicon more efficiently than a CPU that also has 32-bit compatibility in hardware. However, the first Pentium had a little more than 3 million transistors, while today’s i7 has more than 2 billion. So including an entire Pentium processor (scaled down to a modern process size) in a modern CPU costs… virtually nothing. At least not when it comes to transistor count – but perhaps there is a complexity/legacy cost?

BIOS is not needed to boot a PC anymore.

So, when will Apple (of course Apple will do it first, if anyone ever does) ship the first pure 64-bit Mac OS X machine, with no 32-bit or 16-bit CPU or hardware modes?

Well, I am not going to mourn all those lost architectures. I mourned Alpha and PowerPC, but that is over now. And Itanium – the world could have seen the ITER fusion reactor built for the money Intel and HP spent on replacing the superior Alpha. What a shame for mankind.

Disclaimer: I am not an expert, and I have just been writing down my thoughts here. Feel free to comment or to correct me – be careful using this post as a source of knowledge.

Death of my SSD

My new SSD drive lasted about 6 weeks. It is the first one I ever bought (not counting the built-in one in my Eee 701). And I thought I had taken extra good care of it, not even using it for the pagefile.

If you run Windows 7, I recommend making a System Image Backup (Control Panel -> System and Security -> Backup And Restore) to a spare drive. When I did that I got the following error message:

The Backup Failed. New bad clusters were found on the source volume. These clusters were not backed up.

Before I ran the backup, I had noticed nothing suspicious about the drive. Now I checked the Windows event log (System log) and found errors with Source=Disk and ID=7.

Trying to fix
I ran chkdsk over and over again: first by right-clicking on C:, clicking Properties, then Tools, and checking/fixing errors; I also tried running chkdsk /b from the command line. This reduced the number of errors in the event log to one (when the computer started), but I could not get rid of that last error, and the Image Backup kept failing. Days later I had several errors again. This made me give up on the drive for good.

The Windows Backup Image
The Windows Backup Image feature is simple and nice. But, as usual with Microsoft, it is after all a half-crappy tool that works if you are lucky. I don’t like:

  • When the backup above failed, the tool created a backup image anyway. How am I supposed to know whether I can use that backup or not?
  • When running the backup again, the tool apparently just read files/data that had changed, effectively avoiding the bad blocks. So the second run did not even give an error, making me even more uncertain about the usability of the backup.
  • The recovery CD can only recover to a hard drive that is not smaller than the original one. This applies even if the original backed-up Windows partition is smaller than the new drive. (I was afraid of replacing my 128GB SSD with another brand/model of 128GB SSD, because the new drive could very well be a few blocks smaller than the original.)

I recovered the system to an old non-SSD drive with success, but performance was so depressing.

New Drive
In the end I replaced my broken 128GB OCZ Octane S2 with a 60GB Intel 520 Series drive. They cost almost exactly the same. I reinstalled Windows from scratch. Online Windows activation was OK the second time I used the same key – I did not have to call Microsoft and explain anything.

Conclusion
The performance of an SSD as the system/root drive is fantastic. Booting and starting programs is so fast; I just could not go back to a normal drive. But I hesitate to store my own files on an SSD, and I will think twice before getting a budget SSD again.

Recover scenarios to the new smaller drive
I will be honest and admit I never tried to restore the old system to my 60GB drive. My experience with not being able to restore to a smaller hard drive comes from another computer.

Intel has an SSD Toolbox, and I think it contains migration tools. Perhaps those tools could have handled the smaller (albeit large enough) destination Intel SSD drive.

There are ways to resize partitions, back them up, restore them and make them bootable. But I did not trust the quality of my backup 100% (after all, Windows said it failed). And I hesitate to use dd to write images to SSD drives.

Fixing the bad blocks?
Bad blocks are supposed to be more common on SSDs than on traditional drives, but the firmware should handle them. I found no useful tools from the drive manufacturer, OCZ. My nice dealer let me exchange the drive after I explained that Windows had failed to back it up and that chkdsk didn’t fix it.

Indications in Linux
I booted the Ubuntu 12.04 CD, and ran the following commands:

$ sudo md5sum /dev/sda
md5sum: /dev/sda: Input/output error
$ sudo badblocks -b 4096 -c 1024 -e 10 /dev/sdg
1312474
1312575
(and another 8 lines with bad blocks)

The system log or dmesg command gave more details.
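
For example, the kernel log can be filtered for the drive’s errors (the exact message wording varies between kernel versions):

$ dmesg | grep -i error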

Quickly and easily transfer files over network

You want to copy a file between two computers or Symbian phones, and the usual methods don’t feel that attractive? Network drives, scp and ftp require configuring a server (which can later be a security risk). USB cables are never available when needed. Dropbox and Bluetooth are too slow.

A while ago I described how to copy files with netcat, but that works best on *nix and is not so easy for people who do not like the command line. And it does not work on mobile phones.
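
For reference, the netcat way looks something like this (flags differ between netcat implementations; traditional netcat shown). On the receiving machine:

$ nc -l -p 1234 > file.bin

And on the sending machine:

$ nc <receiver-ip> 1234 < file.bin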

So I wrote a little program that does what netcat does, but is simple to use and has a GUI. And I wrote it in Qt, so it works on Mac OS X, Linux, Windows and Symbian. It is exactly the same on all platforms.

Do you want a simple way to copy files over the network? Download ParrotCopy and give it a try – instructions are included.


The Symbian version probably only works on Symbian^3 and has only been tested on a Nokia N8. Let me know if you need a binary for an older Symbian device, or for a Maemo or Harmattan device.

Bugs
In server mode, the program tries to connect to www.google.com:80 to figure out its own IP address. It is simply an ugly hack, because I had problems with other methods. You may have problems if the internet is not accessible. I will not release 1.0 until this is fixed.

Limitations
You have to manually name the file you receive, and you can transfer just one file at a time. These are not bugs, but future versions will probably do better. Netcat can be combined with tar, gzip etc., and I hope to add at least tar support in the future. For now, making a zip file is a simple way to transfer many files.

Release 0.9.7
It is now possible to copy the contents of the status field and the file/folder fields (if you want to paste them into another application).