Category Archives: Linux - Page 2

Working OpenVPN configuration

I am posting my working OpenVPN server configuration, and client configuration for Linux, Android and iOS. First a little background.

I have an OpenWRT (14.07) router running OpenVPN server. This router has a public IP address and thanks to dyn.com/dns it can be resolved using a domain name (ROUTER.PUBLIC in all configuration examples below).

My router LAN address is 192.168.8.1, the LAN network is 192.168.8.*, and the OpenVPN network is 192.168.9.* (the range from which OpenVPN clients get the address for their vpn/tun device). I run OpenVPN on TCP port 1143.

What I want to achieve is
1) to access local services (like ownCloud and ssh) on computers on the LAN
2) to access the internet as if I were at home, when my internet access is somehow restricted

The Server
Essentially, this OpenWRT OpenVPN Setup Guide is very good. Follow it. I am not going to repeat everything, just post my working configurations.

root@breidablick:/etc/config# cat openvpn 

config openvpn 'myvpn'
	option enabled '1'
	option dev 'tun'
	option proto 'tcp'
	option status '/tmp/openvpn.clients'
	option log '/tmp/openvpn.log'
	option verb '3'
	option ca '/etc/openvpn/ca.crt'
	option cert '/etc/openvpn/my-server.crt'
	option key '/etc/openvpn/my-server.key'
	option server '192.168.9.0 255.255.255.0'
	option port '1143'
	option keepalive '10 120'
	option dh '/etc/openvpn/dh2048.pem'
	option push 'redirect-gateway def1'
	option push 'dhcp-option DNS 192.168.8.1'
	option push 'route 192.168.8.0 255.255.255.0'

It is a little unclear whether the last three push options really work for all clients. I also have:

root@breidablick:/etc/config# cat network 
.
.
.
config interface 'vpn0'
	option ifname 'tun0'
	option proto 'none'

and

root@breidablick:/etc/config# cat firewall 
.
.
.
config zone
	option name 'vpn'
	option input 'ACCEPT'
	option forward 'ACCEPT'
	option output 'ACCEPT'
	list network 'vpn0'
.
.
.
config forwarding
	option src 'lan'
	option dest 'vpn'

config forwarding
	option src 'vpn'
	option dest 'wan'
.
.
.
# may not be needed depending on your LAN policies (next two rules)
config rule
	option name 'Allow-lan-vpn'
	option src 'lan'
	option dest 'vpn'
	option target 'ACCEPT'
	option family 'ipv4'

config rule
	option name 'Allow-vpn-lan'
	option src 'vpn'
	option dest 'lan'
	option target 'ACCEPT'
	option family 'ipv4'
.
.
.
# may not be needed depending on your WAN policy
config rule
	option name 'Allow-OpenVPN-from-Internet'
	option src 'wan'
	option proto 'tcp'
	option dest_port '1143'
	option target 'ACCEPT'
	option family 'ipv4'
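
After editing these UCI files, the services have to be restarted for the changes to take effect. On OpenWRT something like this does it (and the status and log files named in the server config above are handy for checking that clients actually connect):

# /etc/init.d/openvpn restart
# /etc/init.d/firewall restart
# cat /tmp/openvpn.clients
# tail /tmp/openvpn.log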

iOS client
You need to install an OpenVPN client for iOS from the App Store. The client configuration is prepared on your computer and synced to iOS using iTunes (brilliant or braindead?). This is my working configuration:

client
dev tun
ca ca.crt
cert iphone.crt
key iphone.key
remote ROUTER.PUBLIC 1143 tcp-client
route 0.0.0.0 0.0.0.0 vpn_gateway
dhcp-option DNS 192.168.8.1
redirect-gateway def1

The route and redirect-gateway lines make all traffic go via the VPN. Omit them if you want your internet traffic to go directly instead.

Android client
For Android, you also need to install an OpenVPN client from the store. Mine is “OpenVPN for Android” by Arne Schwabe. This client has a GUI that lets you configure everything (but you need to get the certificate files onto your Android device somehow; see the sketch after the config below). You can view the entire generated config in the GUI, and mine looks like this (omitting the GUI- and Android-specific stuff, and the certificates):

ifconfig-nowarn
client
verb 4
connect-retry-max 5
connect-retry 5
resolv-retry 60
dev tun
remote ROUTER.PUBLIC 1143 tcp-client
route 0.0.0.0 0.0.0.0 vpn_gateway
dhcp-option DNS 192.168.8.1
remote-cert-tls server
management-query-proxy
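
One way around the “get the certificate files to your Android device somehow” problem is to paste the certificates inline into a single .ovpn file and import that file in the app. OpenVPN supports inline <ca>/<cert>/<key> blocks; this is only a sketch of the idea (the ... parts are the contents of ca.crt, the client certificate and the client key):

client
dev tun
remote ROUTER.PUBLIC 1143 tcp-client
<ca>
...
</ca>
<cert>
...
</cert>
<key>
...
</key>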

Linux client
I also connect Linux computers occasionally. The configuration is:

client
remote ROUTER.PUBLIC 1143
ca ca.crt
cert linux.crt
key linux.key
dev tun
proto tcp
nobind
auth-nocache
script-security 2
persist-key
persist-tun
user nobody
group nogroup
verb 5
# redirect-gateway local def1
log log.txt

Here the redirect-gateway line is commented out, so internet traffic does not go via the VPN.
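
To check what actually ended up in the routing table after connecting, including whether the pushed route and redirect-gateway from the server arrived, something like this is enough (tun0 is an assumption; the actual device name is printed in the log):

$ ip route show | grep tun0
$ ip route get 8.8.8.8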

Certificates
The easy-rsa package and the instructions in the OpenWRT guide above are excellent. You should have different certificates for different clients; one certificate can only be used for one connection at a time.
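
Adding a certificate for a new client later is just a matter of rerunning the key-building step from that guide. With the easy-rsa 2.x tools it is roughly this (paths and script locations depend on how easy-rsa was installed, so treat it as a sketch):

# cd /etc/easy-rsa          # or wherever your vars and keys live
# source ./vars
# ./build-key my-new-client # "my-new-client" is just an example name

Then copy ca.crt, my-new-client.crt and my-new-client.key to the new client.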

Better configuration?
I don't claim this is the optimal or best way to configure OpenVPN, but it works for me. You may prefer UDP over TCP, and my reasons for running TCP are perhaps not valid for you. You may want different encryption or data compression options, different logging options, and so on.

Installing Citrix Receiver 13.1 in Ubuntu/Debian

The best thing about Citrix Receiver for Linux is that it exists. Apart from that, it kind of sucks. Over the last few days I have tried to install it on Xubuntu 14.10 and Debian 7.7, both 64-bit versions.

The good thing is that for both Debian and Ubuntu the 64-bit deb file is actually installable using “dpkg -i”, if you fix all the dependencies. I did:

1) #dpkg --add-architecture i386
2) #apt-get update
3) #dpkg -i icaclient_13.1.0.285639_amd64.deb
  ... list of failed dependencies...
4) #dpkg -r icaclient
5) #apt-get install [all packages from (3)]
6) #dpkg -i icaclient_13.1.0.285639_amd64.deb

Steps (1) and (2) are only needed on Debian.
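
A possibly shorter route, after step (3) fails, is to let apt pull in the missing dependencies and finish the configuration itself. I have not verified it on these exact systems, but it should be equivalent to steps (4)-(6):

#apt-get -f install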

selfservice is hard to get to start from the start menu, and it gets a segmentation fault when OpenVPN is on (WTF?). So for now, I have given up on it.

npica.so is supposed to make the browser plugin work, but I had no luck there (I guess because I have a 64-bit browser). I deleted the system-wide symbolic links to npica.so.
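
The links can be located with a plain find over the root filesystem (slow, but it works):

# find / -name npica.so 2>/dev/null

and then removed: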

#rm /usr/lib/mozilla/plugins/npica.so
#rm /usr/local/lib/netscape/plugins/npica.so

Then I could tell the Citrix portal that I do have the Receiver even though the browser does not recognize it, and when I launch an application I choose to run it with wfica.sh (the good old way).

Keyboard settings can no longer be made in the GUI; you have to edit your ~/.ICAClient/wfclient.ini file. The following makes a Swedish keyboard work for me:

KeyboardLayout = SWEDISH
KeyboardMappingFile = linux.kbd
KeyboardDescription = Automatic (User Profile)
KeyboardType = (Default)

The problem is, when you change the file you need to restart all Citrix-related processes for the new settings to apply. If you think you got the settings right but have no success, just restart your computer. I wasted too much time thinking I had killed all the processes, and thinking my wfclient.ini file was bad, when a simple restart fixed it.
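
If you want to avoid the reboot, something like this should find (and kill) the leftover Citrix processes; it is only a sketch, since the exact process names vary between Receiver versions:

$ ps aux | grep -i -e wfica -e selfservice -e ICAClient
$ pkill -f ICAClient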

Debian on NUC and boot problems

I got a NUC (D54250WYKH) that I installed Debian 7.7 on.

Advice: First update the NUC “BIOS”.

  1. Download from Intel
  2. Put on USB memory
  3. Put USB memory in NUC
  4. Start NUC, Press F7 to upgrade BIOS

If I had done this first I would have saved some time and some reading about EFI stuff I don’t want to know anyway. A few more conclusions follow.

EFI requires a special little EFI-partition. Debian will set it up automatically for you, unless you are an expert and choose manual partitioning, of course 😉 That would also have saved me some time.

(X)Ubuntu 14.10 had no problems even without upgrading BIOS.

The NUC is very nice! In case it is not clear: there is space for both an mSATA drive and a 2.5″ drive in my model. In fact, I think there is also space for an extra, even smaller mSATA drive. Unless you are building a gaming computer, I believe a NUC (or similar) is the way to go.

Finally, Debian 7.7 comes with the Linux 3.2 kernel, which has old audio drivers that produce bad audio quality. I learnt about Debian backports and currently run Linux 3.16 on Debian 7.7, and I have perfect audio now.
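
For reference, pulling a newer kernel from wheezy-backports is roughly this (linux-image-amd64 fits the 64-bit NUC; pick the metapackage that matches your architecture):

$ echo 'deb http://ftp.debian.org/debian wheezy-backports main' | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get -t wheezy-backports install linux-image-amd64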

Grub Boot Error

Update 20150419: This OCZ SSD Drive is now entirely broken.

My desktop computer (it is still an ASUS Barebone V3-M3N8200) sometimes gives me the following error when I turn it on:

error: attempt to read or write outside of disk 'hd0'.
Entering rescue mode...
grub rescue> _


My observations:

  • This has happened now and then for a while
  • It seems to happen more often when the computer has been off for a longer period of time (sounds unlikely, I know)
  • Ctrl-Alt-Del: it always boots properly the second time

I have three SATA drives. The BIOS boots the first hard drive, where GRUB is installed in the MBR, / is the first and only partition, and /boot lives on the / partition.

The drive in question is (from dmesg):

[    1.339215] ata3.00: ATA-8: OCZ-VERTEX PLUS R2, 1.2, max UDMA/133
[    1.339217] ata3.00: 120817072 sectors, multi 1: LBA48 NCQ (depth 31/32)
[    1.339323] ata3.00: configured for UDMA/133
[    1.339466] scsi 2:0:0:0: Direct-Access     ATA      OCZ-VERTEX PLUS  1.2  PQ: 0 ANSI: 5
[    1.339623] sd 2:0:0:0: Attached scsi generic sg1 type 0
[    1.339715] sd 2:0:0:0: [sda] 120817072 512-byte logical blocks: (61.8 GB/57.6 GiB)

That is, a 60GB SSD drive from OCZ (yes, I had another OCZ SSD drive that died).

I can not explain my occasional boot errors, but I have some theories:

  • The SSD drive is broken/corrupted (but there are no signs of anything like that within Ubuntu)
  • The drive is somehow not fully initialized when GRUB executes (?)
  • Somehow, more than one hard drive is involved in the boot process, and they are not all initialized at the same time (but this does not seem to be the case)

GSmartControl gives me some suspicious output about my drive… but I do not know how to interpret it:

Error in ATA Error Log structure: checksum error
Error in Self-Test Log structure: checksum error

The (short) self-test completes without errors.
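
GSmartControl is a front end for smartmontools, so the same information (and a longer self-test) is available from the command line; a sketch, assuming the SSD really is /dev/sda as dmesg suggests:

$ sudo smartctl -a /dev/sda
$ sudo smartctl -t long /dev/sda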

Any ideas or suggestions are welcome! I will update this post if I learn anything new.

Installing Ubuntu on Pentium M with forcepae

If trying to install Ubuntu (or Xubuntu, Lubuntu, Kubuntu) 14.04 (or 14.10) on a Pentium M computer, you may get the following error:

ERROR: PAE is disabled on this Pentium M

Just restart the computer and when you come to the install menu…

[screenshot: the boot/install menu]

…hit F6 to get a menu of kernel parameters. Now, none of those parameters are what you want, so hit ESC. You should now be able to type forcepae at the end of the kernel command:
[screenshot: the kernel command line with forcepae added]

Now, hit Return, and startup/installation of Ubuntu should proceed just normally.

Background
PAE is a CPU feature that has been available in most x86 CPUs since the Pentium Pro days. Since Ubuntu 12.10 it is a required feature. Some Pentium M CPUs have PAE implemented, but the processor does not announce the feature properly. Since Ubuntu 14.04 the forcepae option above is available, allowing Linux to use PAE even if the CPU officially does not support it.
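
You can see what the CPU announces by looking at the flags line in /proc/cpuinfo; on an affected Pentium M the pae flag is simply missing, even though the silicon supports it:

$ grep -w pae /proc/cpuinfo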

This mostly affects laptops from perhaps 2000-2005. These laptops are often good computers with a 1400-2000 MHz CPU and 512+ MB of RAM. As Windows XP is now officially unsupported by Microsoft, owners of such hardware might want to install an Ubuntu flavour on the computer instead.

There have been ways to make this work with Ubuntu 12.10-13.10. I suggest abandoning those versions and hacks completely and making a fresh install of 14.04.

I have written before about Ubuntu on Pentium M without PAE.

Migrating from Windows XP
I would personally suggest Xubuntu or Lubuntu as a replacement for Windows XP: both should be lightweight enough for your Pentium M computer, and both are easy to use with only a Windows background. Lubuntu is the most Windows-like and the lighter of the two. Xubuntu is a bit heavier (and nicer), and also resembles Mac OS X a bit.

I suggest the “Try Ubuntu without Installing” option. You will have an installer available inside Ubuntu anyway, and you can confirm that most things work properly before you wipe the computer.

USB Drives, dd, performance and No space left

Please note: sudo dd is a very dangerous combination. A little typing error and all your data can be lost!

I like to make copies and backups of disk partitions using dd. USB drives sometimes do not behave very nicely.

In this case I had created a less than 2GB FAT32 partition on a USB memory and made it Lubuntu-bootable, with a 1GB file for saving changes to the live filesystem. The partition table:

It seems I forgot to change the partition type to FAT32 (it is listed as 83/Linux below), but it is formatted with FAT32 and that seems to work fine 😉

$ sudo /sbin/fdisk -l /dev/sdc

Disk /dev/sdc: 4004 MB, 4004511744 bytes
50 heads, 2 sectors/track, 78213 cylinders, total 7821312 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000f3a78

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *        2048     3700000     1848976+  83  Linux

I wanted to make an image of this USB drive that I can write to other USB drives. That is why I made the partition/filesystem significantly below 2GB, so all 2GB USB drives should work. This is how I created the image:

$ sudo dd if=/dev/sdb of=lubuntu.img bs=512 count=3700000

So, now I had a 1.85GB file named lubuntu.img, ready to write back to another USB drive. That was when the problems began:

$ sudo dd if=lubuntu.img of=/dev/sdb
dd: writing to ‘/dev/sdb’: No space left on device
2006177+0 records in
2006176+0 records out
1027162112 bytes (1.0 GB) copied, 24.1811 s, 42.5 MB/s

Very fishy! The write speed (42.5MB/s) is obviously too high, and the USB drive is 4GB, not 1GB. I tried with several (identical) USB drives, same problem. This has never happened to me before.

I changed strategy and made an image of just the partition table, and another image of the partition:

$ sudo dd if=/dev/sdb of=lubuntu.sdb bs=512 count=1
$ sudo dd if=/dev/sdb1 of=lubuntu.sdb1

…and restoring to another drive… first the partition table:

$ sudo dd if=lubuntu.sdb of=/dev/sdb

Then remove and re-insert the USB drive, and make sure it does not mount automatically before you proceed with the partition.

$ sudo dd if=lubuntu.sdb1 of=/dev/sdb1

That worked! However, the write speed to USB drives usually slows down as more data is written (in one chunk, somehow). I have noticed this before with other computers and other USB drives. I guess USB drives have some internal mapping table that does not like big files.

Finally, to measure the progress of the dd command, send it the USR1 signal:

$ sudo kill -USR1 <PID OF dd PROCESS>
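
dd then prints its statistics to stderr. If only one dd is running, pgrep saves you from looking up the PID manually:

$ sudo kill -USR1 $(pgrep -x dd)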

The above behaviour was noticed on x86 Ubuntu 13.10.

Build Node.js on Debian ARM

Update 2015-02-15: So far, I have failed to build Node.js v0.12.0 on ARMv5.

I have a QNAP TS109 running Debian (port:armel, version:7), and of course I want to run node.js on it. I don’t think there are any binaries, so building from source is the way to go.

About my environment:

$ cat /etc/debian_version
7.2
$ gcc --version | head -n 1
gcc (Debian 4.6.3-14) 4.6.3
$ uname -a
Linux kvaser 3.2.0-4-orion5x #1 Debian 3.2.51-1 armv5tel GNU/Linux
$ cat /proc/cpuinfo
Processor       : Feroceon rev 0 (v5l)
BogoMIPS        : 331.77
Features        : swp half thumb fastmult edsp
CPU implementer : 0x41
CPU architecture: 5TEJ
CPU variant     : 0x0
CPU part        : 0x926
CPU revision    : 0

Hardware        : QNAP TS-109/TS-209
Revision        : 0000
Serial          : 0000000000000000

I downloaded the latest version of node.js: node-v0.10.25, and this is how I ended up compiling it (first writing build.sh, then executing it as root):

$ cat build.sh
#!/bin/sh
export CFLAGS='-march=armv5t'
export CXXFLAGS='-march=armv5t'
./configure
make install
$ sudo ./build.sh

That takes almost four hours.

A few notes…

make install
Naturally, make install has to be run as root. When I do that, everything is built again from scratch. This is not what I expect of make install, and to me it seems like a bug. This is why I put the build lines into a little shell script and ran the entire script with sudo. Compiling as root does not make sense.
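
An alternative that avoids running the compile as root at all is to install into your home directory; node's configure script accepts a prefix, so something like this should work (a sketch I have not timed on the TS109, with the same CFLAGS/CXXFLAGS exports as in build.sh):

$ ./configure --prefix=$HOME/local
$ make
$ make install
$ export PATH=$HOME/local/bin:$PATH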

-march=armv4 and -march=armv4t
Compiling with -march=armv4t (or no -march at all, which I believe defaults to armv4) results in an error:

../deps/v8/src/arm/macro-assembler-arm.cc:65:3: error:
#error "For thumb inter-working we require an architecture which supports blx"

You can work around this by adding the following above line 65 in that file:

#define CAN_USE_THUMB_INSTRUCTIONS 1

as I mentioned in my old article about building Node.js on Debian ARM.

-march=armv5te
I first tried building with -march=armv5te (since that seemed closest to the armv5tel that uname reports). The build completed, but the node binary gave a segmentation fault (node -h did work, though, so the binary was not completely broken).

I do not know if this problem is caused by my CPU not being compatible with/capable of armv5te, or if there is something about armv5te that is not compatible with the way Debian and its libraries are built.

Install Citrix Receiver 13 on Ubuntu 13.10

In this post I will explain how I installed Citrix Receiver (version 13) on Ubuntu 13.10 (Xubuntu x64 and Lubuntu x86 – but keep reading for other Ubuntu variants too).

The quick summary
Go to the Citrix Receiver for Linux Download Page. Pick the generic tar.gz version under 32-bit (yes, do this even for 64-bit Ubuntu).

Then:

$ cd ~/Downloads

(nasty habit of not including a folder in the tar file:)
$ mkdir citrixtmp
$ cd citrixtmp
$ tar -xzf ../linuxx86-13.0.0.256735.tar.gz

(install, not as root)
$ ./setupwfc
   (choose 1=install
    answer yes to all questions
    use all defaults
    finally, 3=exit installer)

Now, if you are on 64-bit Ubuntu there are some 32-bit dependencies to take care of:

$ sudo apt-get install libgtk2.0-0:i386
$ sudo apt-get install libxml2:i386
$ sudo apt-get install libstdc++6:i386

If, on the other hand, you are on 32-bit Ubuntu, you can instead install these packages:

$ sudo apt-get install libwebkitgtk-1.0.0
$ sudo apt-get install libxerces-c3.1

Now, (re)start your browser, log in to your Citrix Portal, open an application. Your browser should suggest you open it with wfica.sh (located in ~/ICAClient/linuxx86). Do it – it should work!

You should now be able to use your Citrix applications in a productive way from your Ubuntu computer!

If you are on 32-bit Ubuntu, you should also be able to use the GUI Self Service application (I have not figured out how to fix the webkit dependencies for 64-bit ubuntu).

Feel free to read on for more comments and details.

What is Citrix Receiver and how do I use it

Citrix is a technology that allows an organization (your employer) to package applications (typically Windows applications) and make them available over the intranet or the internet. This way, you can run the applications on a computer without the need to install those applications on the computer itself.

I have two ways to access my Citrix Applications.

The first way is via a web based Citrix Portal. I open my web browser, enter the URL of the portal and log in. Now, in the web browser, I see all my applications as icons, and as I click the applications they start in separate windows via Citrix Receiver.

The second way is to launch the Citrix Receiver Self Service application, give it the address of the Citrix servers and then authenticate. This method can enable “desktop integration” (your Citrix applications become available via your normal start menu or whatever you call it). This Self Service application is new in Citrix v13; it replaced something else in v12.

The Web-browser way is easier to make work. There are unresolved dependency issues with the Self Service program and my solution above.

My #1 priority is to get a working solution at all.

Why not use the .deb packages
The deb-packages are obviously not built for Ubuntu 13.10. I believe they are built for Debian, but this must be confirmed.

The purpose of deb packages is to automatically resolve all dependencies. But the dependencies are wrong for Ubuntu, and you would need to “force” installation of the deb packages. In the future, this can leave you with conflicts and confusion.

So, I prefer the generic tar.gz-installation (which also works fine without sudo/being root).

Why not use the 64-bit packages
Well, first there is no generic 64-bit package, so I would end up resolving the dependency problems with the deb-package.

Also, the 64-bit deb package actually contains 32-bit binaries. It is just the dependencies that are declared against the 32-bit compatibility libraries in Debian (instead of the standard 64-bit libraries).

So, nothing fun with 64-bit until Citrix actually compiles a real 64-bit binary with no 32-bit dependencies.

Other versions of Ubuntu
I believe what I have written applies not only to Xubuntu, but also to Lubuntu, Kubuntu (which may require more GTK installation, as it is Qt-based) and standard Ubuntu, and more. Please comment below if you experience something else.

Other versions of Linux
If you are on Debian or a Debian-derived distribution (like Crunchbang) I guess you should go with the deb-packages.

You really need the Self Service
Consider installing 32-bit Ubuntu on your 64-bit PC. Depending on what computer you have and what you do with it, this may be a quite OK idea, or a very poor one. I can admit I have been running 32-bit Ubuntu on a 64-bit PC for years, at work, specifically because Citrix worked better that way (even the old Citrix Receiver 12 had this issue, even if the Self Service looked different then).

What is the difference between Receiver 12 and 13
If you use Citrix via your web browser, you will not notice much difference (if any).

The Self Service is visually very different from the old Receiver. The old one looked like something for SUN Solaris and the 80s (Motif-based). The new one looks like some kind of mobile app. I don't know which is worse. Many components are still the same.

If you currently run Citrix 12 and you are happy with it, I suggest you don't upgrade to 13.

Problems installing Citrix Receiver 12
If you want to install the old Citrix Receiver 12, have a look at my old post.

Troubleshooting
Your browser should allow you to download the ICA file (instead of launching it). Do it; it should be saved to ~/Downloads/launch.ica. Now try to start it manually with wfica.sh:

$ ~/ICAClient/linuxx86/wfica.sh ~/Downloads/launch.ica

If you are missing dependencies they should show up here.
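
Another way to spot them is to check the main binary for unresolved libraries; a sketch, assuming the binary sits next to wfica.sh in ~/ICAClient/linuxx86:

$ ldd ~/ICAClient/linuxx86/wfica | grep 'not found'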

Final words
I consider this post “work in progress”. I’d like to

  • make Self Service work
  • confirm extra features (audio, drive mapping, etc) that might not work properly with my install above

But I hope it can be helpful even in this state. Feel free to comment!

Upgrade Lubuntu 13.04 to 13.10 on Eee 701

Lubuntu is the perfect distribution for your Eee 701. Now the time has come to upgrade to 13.10, and since I have had a few problems with that before I was a bit reluctant to upgrade my Eee 701, especially since it just has a 4GB SSD.

Since I installed 13.04 on the Eee, the available disk space has shrunk. It turns out the kernel has been upgraded several times, but the old versions have not been discarded. You only need the latest version (the one you are running; check with uname -a). If you have more linux-image packages than needed, purge them. Do the same with the linux-headers packages.

$ dpkg -l | grep linux-image
$ uname -a
$ sudo apt-get purge linux-image-3.8.0-XX
$ dpkg -l | grep linux-headers
$ sudo apt-get purge linux-headers-3.8.0-XX

When it was time for upgrade, I had 1.6 GB (df -h) available on /. To play safe I formatted an SD card (1GB should be enough) and mounted it on /var/cache/apt (where all downloaded packages go during upgrade).

$ sudo apt-get clean
$ sudo mkfs.ext2 /dev/sdb1
$ sudo mount /dev/sdb1 /var/cache/apt

I updated using the normal GUI upgrade program. During upgrade, the peak disk usage (just before cleaning) was less than 550MB on the SD card /var/cache/apt and my /-device was down to 700MB available (so my 1.6GB available in the first place should have been just enough).

The computer restarted nicely. The fact that the SD card was not immediately mounted on /var/cache/apt caused no problems. After the upgrade I only had 1.1 GB available on /, though. After again purging unused linux-images I was up at 1.2 GB. I wonder where the extra 400 MB went; I found Firefox, and I doubt it was installed in 13.04… removing it saves about 60 MB.

So, the conclusion is that upgrading Lubuntu from 13.04 to 13.10 on your Eee 701 should be just fine, if you have about 1.5 GB of space available on /, and if you feel you have about 400 MB to spare on the upgrade. A permanent SD card or mini USB memory that can host /home, /var, /tmp and/or /usr is of course nice.
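
If you do add a permanent SD card, a single line in /etc/fstab is enough to host one of those directories on it; a sketch, reusing the /dev/sdb1 ext2 card from above and /home as an example mount point:

/dev/sdb1  /home  ext2  defaults,noatime  0  2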

Upgrading Qnap TS109 from Squeeze to Wheezy

Update: new instructions for upgrading Wheezy to Jessie

Now that Wheezy has been out for a while, I thought it is stable enough even for my old QNAP TS109. A great source of information for Debian on QNAPs is Martin Michlmayr, so I decided to upgrade from squeeze to wheezy using the standard Debian instructions.

Package Checking
I did not have any packages on hold, but over the years I have installed quite a lot of packages I don't need. So I spent some time listing and removing packages:

$ sudo dpkg -l
$ sudo apt-get purge SOMEPACKAGES

I figured it was faster to delete them now than to needlessly upgrade them and delete them later.

/etc/apt/sources.list
The first real upgrade-related step is fixing /etc/apt/sources.list:

deb http://ftp.se.debian.org/debian/ wheezy main
deb-src http://ftp.se.debian.org/debian/ wheezy main non-free

deb http://security.debian.org/ wheezy/updates main
deb-src http://security.debian.org/ wheezy/updates main non-free

I have just replaced ‘squeeze’ with ‘wheezy’ four times.
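
The same edit as a one-liner, if you prefer (check the result before proceeding):

$ sudo sed -i 's/squeeze/wheezy/g' /etc/apt/sources.list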

update upgrade
Now the point of no return:

$ sudo apt-get update
$ sudo apt-get upgrade

This presented me with a few challenges.

Configuring linux-base

Boot loader configuration check needed

The boot loader configuration for this system was not recognized. These
settings in the configuration may need to be updated:

 * The root device ID passed as a kernel parameter;
 * The boot device ID used to install and update the boot loader.

You should generally identify these devices by UUID or label. However,
on MIPS systems the root device must be identified by name.

                                 <  OK  >

What is an ARM user gonna do about it? You can safely ignore this (if you are upgrading Debian on a QNAP – probably not if you are upgrading Ubuntu on your laptop!). This is supposed to be grub/lilo-related, and not relevant.

At the end of apt-get upgrade I got these messages, ensuring that my system will boot properly even after the upgrade. You should probably see something like this too, or else consider finding out how to do it manually.

Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.32-5-orion5x
flash-kernel: installing version 2.6.32-5-orion5x
Generating kernel u-boot image... done.
Flashing kernel... done.
Flashing initramfs... done.

Sudo was a little challenge:

Configuration file `/etc/sudoers'
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** sudoers (Y/I/N/O/D/Z) [default=N] ? D

The “diff” told me that it intended to delete the sudoers line for my user; the new way is to add people to the group named sudo (in /etc/group). So I added myself to the sudo group and bravely answered ‘Y’ to the question above.

Immediately, sudo did not work, as I was no longer in the sudoers file… However, a quick logout/login fixed that, and the group approach works just fine.
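
For reference, the group membership can be added with a one-liner instead of editing /etc/group by hand (MYUSER is a placeholder; as noted, it only takes effect after logging out and in again):

$ sudo adduser MYUSER sudo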

After apt-get upgrade had completed I decided to reboot my system before proceeding. For the first time ever it came up with a different IP address than usual. Obviously the DHCP client did not bother to ask for the same address anymore, and the DHCP server did not bother to hand out the same address either. So, a few nervous minutes passed before I found my QNAP on its new IP.
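
A quick way to find a box that has jumped to a new address is to ping-scan the subnet from another machine (the network below is just an example; on older nmap versions the option is -sP instead of -sn):

$ nmap -sn 192.168.1.0/24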

apt-get dist-upgrade
Now that the system rebooted properly it was time for the final upgrade step:

$ sudo apt-get dist-upgrade

This procedure mostly works on its own, occasionally asking something.

I answered Yes to this question (after reading the diff and not remembering having edited this file):

Configuration file `/etc/default/rcS'
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** rcS (Y/I/N/O/D/Z) [default=N] ? y

The dist-upgrade once again replaced the kernel…

Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.2.0-4-orion5x
flash-kernel: installing version 3.2.0-4-orion5x
Generating kernel u-boot image... done.
Flashing kernel... done.
Flashing initramfs... done.

…so I made a final reboot. Everything seems just fine.