Saturday 29 November 2014

Generating PGP keys stronger than 4096 bit RSA

I will be transitioning from the PGP key I have been using for four years to a new one. The decision to retire it was taken four years ago, when it was generated, and recent news on privacy, applied cryptography and leaked information about the practices of various authorities resonates with my plans.

Four years ago I moved to a 4096-bit PGP key, and now, as I am about to generate a new one, the question of the preferred cryptographic algorithms and key sizes arises. Currently, GnuPG supports a maximum RSA key size of 4096 bits. This post covers my process of generating an 8192-bit RSA key, and of course using SHA-512, a hash function much stronger than SHA-1.

The limitation of GnuPG as shipped with Ubuntu 14.04

Many have written about the limitation of GnuPG to only support generating RSA keys up to 4096 bits. This limitation is not imposed by the algorithm itself; it is merely a decision of the developers, which may or may not be related to export regulations and similar legislation around the use of cryptographic technologies in the United States.

I will not go into any discussion of such legislative restrictions; I will focus on the technology, stating only that I am not a citizen of the US and that US law does not apply to me at my current location.

As GnuPG is open source free software, anyone can download the source and alter the constant defining the maximum key size to lift the current limitation. As there is no technical reason why one could not generate an 8192-bit key, one can just change keygen.c to increase the key size limit, then compile and use the modified version. It is important to note here that an unmodified GnuPG works perfectly with an 8192-bit key; the only restriction is that it will not offer to create such keys out of the box.

Avoiding recompilation without hacking binaries

There is a little known feature of GnuPG that allows one to generate 8192-bit keys without any modification. It supports unattended key generation that reads the configuration from an input file, and this esoteric feature is not subject to the limitation mentioned above.


$ cat params.txt
Key-Type: RSA
Key-Length: 8192
Subkey-Type: RSA
Subkey-Length: 8192
Name-Real: Tibor B****
Name-Email: tibor.b*****@gmail.com
Creation-Date: 20150101T000000
Expire-Date: 20191231T000000
Passphrase: !!!change it!!!
Preferences: S10 S9 S13 H10 Z3 Z2 Z1
%commit
%echo done
$ gpg --batch --gen-key params.txt

GnuPG 1.4.16, the version which ships with Ubuntu, gives the following error.

gpg: fatal: out of secure memory while allocating 4228 bytes

Enter GnuPG 2

Fortunately, GnuPG 2, the new modular version of GNU Privacy Guard, is also packaged: one just has to install gnupg2. Currently, version 2.0.22 ships with Ubuntu 14.04. GnuPG 2 does not have this memory allocation issue and is able to generate the 8192-bit key.
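To check the result, the key length can be read from the machine-readable listing; the helper below is my own sketch (field 3 of a 'pub' record in the colon-delimited output holds the key length in bits):

```shell
# after installing gnupg2 (sudo apt-get install gnupg2) and running
#   gpg2 --batch --gen-key params.txt
# extract the key length from the colon-delimited listing:
keylen() { awk -F: '$1 == "pub" { print $3; exit }'; }
# gpg2 --list-keys --with-colons | keylen   # an 8192-bit key prints 8192
```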

Entropy

In order to increase the speed of key generation, I have installed the package rng-tools. This smart piece of software uses the hardware-based true random number generator available on most modern PC chipsets to feed the kernel entropy pool. I have found this small utility to make an extremely big difference, accelerating key generation by drastically increasing the bandwidth of the random device without trade-offs in security. I fail to see why this package is not part of the base installation of Ubuntu.
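The state of the entropy pool can be inspected before and after installing rng-tools; the check below is a sketch (the 1000-bit threshold is my arbitrary choice, and the path argument exists only so the function can be exercised without the real procfs file):

```shell
# sketch: warn when the kernel entropy pool is low
low_entropy() {
  [ "$(cat "${1:-/proc/sys/kernel/random/entropy_avail}")" -lt 1000 ]
}
# low_entropy && echo "entropy pool is low - consider installing rng-tools"
```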

Transitioning to the new key

The process of transitioning to the new key has been described by others in sufficient detail, please see here for a good HOWTO.

Make sure to also update your gpg config, and read up on the current status of the cipher and digest algorithms supported by GnuPG. Pick your preferred ones wisely.
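As an illustration, the preferences can be pinned in gpg.conf; the snippet below is a sketch, and the particular algorithm list is my example rather than a recommendation from this post:

```shell
# sketch: append algorithm preferences to gpg.conf (adjust the lists to taste)
conf="${GNUPGHOME:-$HOME/.gnupg}/gpg.conf"
mkdir -p "$(dirname "$conf")"
cat <<'EOF' >> "$conf"
personal-digest-preferences SHA512 SHA384 SHA256
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 AES256 AES192 AES ZLIB BZIP2 ZIP Uncompressed
EOF
```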

Saturday 6 September 2014

Ubuntu 14.04 - openconnect VPN and network manager

Openconnect VPN

After having reinstalled my thinkpad with Ubuntu 14.04, I noted that I could not connect to one of my clients' VPNs via the GUI. The VPN authentication dialog simply did not pop up after clicking the VPN profile on the network manager indicator/applet. No error was displayed, and there was no useful information in syslog. The only line I could correlate with connection attempts looked like the one cited below. Note that this line in itself does not indicate an error, as it is present during normal operation as well; the point here is that no other lines related to the VPN were displayed at all.

Aug 31 14:43:43 gluon NetworkManager[997]: <info> VPN service 'openconnect' disappeared

The VPN itself is Cisco AnyConnect, and connecting to it from the command line using openconnect worked fine. I use password-based authentication in tandem with a hardware token generator; no client certificate is involved in this configuration. All required packages are installed, and if I create another VPN profile in network manager with an invalid gateway URL, then I do get the authentication dialog displaying an error.

As this used to work properly on Ubuntu 12.10, I googled for regressions and found forum threads and two bug reports, but they did not help to resolve my case.

To make a long story short, I found where network manager persists connection profiles, and when checking the VPN connection profile in question, I found it contained invalid file paths to certificates, which seem to be the result of handling the 'None' option in the wrong way.


sudo cat /etc/NetworkManager/system-connections/${VPN_PROFILE_NAME} | grep cert
usercert=/home/tibi/(null)
cacert=/home/tibi/(null)
authtype=cert
# simply delete the lines with invalid path
# and optionally set authtype=password, but actually it does not matter.

The malfunction seems to be the result of using the built-in import/export functionality. The exported connection profile simply contains (null) for the certificates if password-based authentication is used; during import, however, this value is simply appended to the user's home directory.


[openconnect]
Description=****
Host=vpn.****.hu
CACert=(null)
Proxy=
CSDEnable=0
CSDWrapper=
UserCertificate=(null)
PrivateKey=(null)
FSID=0
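A profile damaged this way can be sanitized in place; the function below is my own hypothetical helper, not part of network manager, and the profile path in the comment is the one mentioned above:

```shell
# sketch: drop the bogus (null) certificate paths from an imported profile
# and fall back to password authentication; pass the profile path as $1
strip_null_certs() {
  sed -i -e '/(null)$/d' -e 's/^authtype=cert$/authtype=password/' "$1"
}
# run as root against /etc/NetworkManager/system-connections/${VPN_PROFILE_NAME}
```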

Anyway, I would have expected better visual feedback or error logging.

Monday 1 September 2014

Optimus and Ubuntu 14.04

This post takes the reader through the Nvidia Optimus related tweaks I applied on my Lenovo W530 after upgrading it to Ubuntu 14.04. I have dedicated a series of five posts to the same topic on Ubuntu 12.10. Please take those previous posts as prerequisite knowledge, as I do not intend to repeat the detailed descriptions I have already provided. As outlined earlier, my goal is to get a very stable system with extended battery life and the ability to connect an external projector to the VGA port, covering the following use cases (a verbatim copy of the enumeration from a previous post):

  • Extend the desktop to the external monitor.
  • Get a cloned output of the primary monitor on the external monitor with panning support - this means that if the external monitor's resolution is smaller, a smaller viewport will follow the mouse and show a cropped clone of the desktop's content, always showing the area of interest.
  • Run LibreOffice presentations with the external monitor showing the current slide and the primary monitor showing the presentation overview, notes and time.
  • Never ever get X freezes or kernel lockups on suspend/resume with or without an external monitor connected.
  • Switching to virtual terminals should always work in a bulletproof manner. The box is a workhorse, cannot allow hiccups.
  • Might sound like a small detail, but a properly displayed usplash/plymouth is also important, not only for cosmetic purposes.

Prime and Optimus support out of the box

As documented in the official wiki of the nouveau driver, Optimus support has improved much in the open source driver, and automatic power management of the discrete graphics chip is also documented to work properly with Linux kernel 3.13. This means, Optimus should work out of the box: the discrete graphics chip should be turned on when needed, and fully powered off automatically when idle for 5 seconds.

The following command listing can be used to quickly review the current state of the two graphics chips. As can be seen, the discrete chip's status is DynOff, meaning that automatic power management is active and the device is currently powered off. Further, xrandr lists all outputs - those wired to the integrated chip as well as those wired to the Nvidia GPU.

uname -a
Linux gluon 3.13.0-35-generic #62-Ubuntu SMP Fri Aug 15 01:58:42 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
sudo cat /sys/kernel/debug/vgaswitcheroo/switch
0:DIS: :DynOff:0000:01:00.0
1:IGD:+:Pwr:0000:00:02.0
xrandr -q | grep conn # confirms that outputs wired to IGD and also those wired to DIS are visible to X
LVDS2 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
VGA2 disconnected (normal left inverted right x axis y axis)
VIRTUAL1 disconnected (normal left inverted right x axis y axis)
LVDS-1-1 disconnected
VGA-1-1 disconnected
DP-1-1 disconnected
DP-1-2 disconnected
DP-1-3 disconnected

When connecting an external display, DIS will be powered on automatically and outputs are detected and handled almost perfectly by Ubuntu out of the box (more on this later). However, once DIS has been powered on, it cannot be powered off, even if no external monitor is connected. Further, if DIS is not powered off, the Thinkpad cannot be suspended, which is a pain.

sudo cat /sys/kernel/debug/vgaswitcheroo/switch 
0:DIS: :DynPwr:0000:01:00.0
1:IGD:+:Pwr:0000:00:02.0
echo "OFF" | sudo tee /sys/kernel/debug/vgaswitcheroo/switch
OFF
sudo cat /sys/kernel/debug/vgaswitcheroo/switch 
0:DIS: :DynPwr:0000:01:00.0
1:IGD:+:Pwr:0000:00:02.0

Others have experienced slightly different behaviour, in particular DIS being powered on (DynPwr) at boot, but the common phenomenon is the inability to power off DIS via the command line. One suggestion was to disable the new automatic power management feature to allow the user to explicitly power the devices on and off as needed.

Disabling automatic power management of the Nvidia graphics chip

Adapting the suggestion to Ubuntu 14.04 involved editing the bootloader configuration so that it passes the additional argument nouveau.runpm=0 to the kernel.

sudo nano /etc/default/grub # find and edit the GRUB_CMDLINE_LINUX_DEFAULT as follows:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nouveau.runpm=0"
sudo update-grub
sudo init 6 # reboot

Having rebooted the system, I verified that the configuration change indeed took effect. Power states DynPwr and DynOff disappeared, and manual power state switching could be successfully performed the same way as in Ubuntu 12.10.

cat /proc/cmdline # verify nouveau.runpm=0 was successfully passed to the kernel
BOOT_IMAGE=/boot/vmlinuz-3.13.0-35-generic.efi.signed root=UUID=2a924e40-1f12-4c5a-8f22-556d6e66ffa6 ro quiet splash nouveau.runpm=0 vt.handoff=7
sudo cat /sys/kernel/debug/vgaswitcheroo/switch # verify current power state
0:DIS: :Pwr:0000:01:00.0
1:IGD:+:Pwr:0000:00:02.0
echo "OFF" | sudo tee /sys/kernel/debug/vgaswitcheroo/switch # power off DIS
OFF
sudo cat /sys/kernel/debug/vgaswitcheroo/switch # verify power state again
0:DIS: :Off:0000:01:00.0
1:IGD:+:Pwr:0000:00:02.0
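These manual echo/cat invocations can be wrapped in a tiny helper; this is my own sketch (the optional switch-file argument exists only so the function can be exercised without the real debugfs file, which requires root):

```shell
# sketch: wrapper around the vgaswitcheroo control file
gpu_power() {  # usage: gpu_power ON|OFF|status [switch-file]
  local sw=${2:-/sys/kernel/debug/vgaswitcheroo/switch}
  case "$1" in
    ON|OFF) echo "$1" > "$sw" ;;   # change power state
    *)      cat "$sw" ;;           # print current state
  esac
}
# as root: gpu_power OFF   # power off the discrete chip
```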

The surprise

Based on my knowledge of how outputs are wired on the W530, my experiences on Ubuntu 12.10 and my observations on 14.04 so far, I assumed that with DIS powered off, xrandr -q would not list the external VGA and DisplayPort outputs, or at least that connecting an external display to the outputs wired to the Nvidia chip would not do anything while the discrete GPU is powered off.

Contrary to my expectations, even with DIS powered off, the external VGA and DisplayPort outputs are available to X, and hooking up an external monitor works the same way after this tweak as before. The Thinkpad can now be suspended, even with external outputs connected.

However, if an external display was active at the time the laptop was suspended, compiz crashed once the system resumed. This is a minor glitch resulting in a prompt to submit a bug report; compiz is automatically restarted and remains fully functional even in this case. To be on the safe side, I disconnect external displays before suspending - this seems to make sense anyway.

Disabling the discrete GPU on boot by default to save power

In order to save power, I disabled the Nvidia GPU during the boot process. I wanted X to be started with both IGD and DIS powered on, to make sure all outputs are properly detected. Customising /etc/rc.local is the approach I settled on, as this file is run after all other services, including X, have been started. This script does not do anything on a clean install, so all one needs to do is insert one line before exit 0:


echo "OFF" > /sys/kernel/debug/vgaswitcheroo/switch
exit 0

External VGA working almost perfectly

In general I can confirm a lot of progress related to Optimus since 12.10. Outputs are detected out of the box; there is no need for a second X server, special Xorg configuration, hybrid screenclone or other voodoo magic. The desktop can be extended to the external VGA output, with resolution and other parameters all configured using the GUI.

I observed one issue while testing LibreOffice presentations with the external monitor showing the current slide and the primary monitor showing the presentation overview. The current slide, rendered to the external VGA output in fullscreen mode, did not update properly. At first sight it seemed to always be one slide behind the LCD of the thinkpad, but moving the cursor over this area of the screen made the area around the cursor refresh, unveiling parts of the actual current slide (XDamage came to my mind).

I have not yet resolved this issue. I do not have an external monitor at home, so I will have to squeeze the investigation into my schedule during working hours.

Update: Workaround

I have settled on launching watch -n 1 xrefresh before starting my presentation, and terminating it once I am done. This essentially forces a full refresh every second, which yields an acceptable experience. Not optimal, but just good enough.

Saturday 30 August 2014

Ubuntu 14.04 on an SSD

I recently upgraded my Lenovo W530 thinkpad to 14.04. As this is my primary workstation, which I consider mission critical, one objective was to perform a clean install and restore my development and office environment as quickly as possible. I set myself a target of 2 hours. The only thing considered a risk was Lotus Notes, which had to be migrated and upgraded at the same time. This post covers the disk related settings I applied, and can be considered a follow-up to my two-year-old post 'Ubuntu 12.10 on an SSD'.

I installed Ubuntu via a UEFI bootable USB key, and during installation reused the SSD, keeping the existing partition scheme.


$ sudo gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 500118192 sectors, 238.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 3E979DF9-E126-4EB8-AAF1-D17DEAD86D1E
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 500118158
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          133119   64.0 MiB    EF00  EFI System
   2          133120        67241983   32.0 GiB    0700  Linux filesystem
   3        67241984        75630591   4.0 GiB     8200  Linux swap
   4        75630592       500118158   202.4 GiB   0700  Linux filesystem

During the installation process, all partitions except the EFI System partition have been reformatted. Later, I applied some changes to mount options.

To discard or not to discard: that is the question

Well, almost. Whether to discard or not should not be a question: in order to ensure the longevity of an SSD, the operating system has to communicate unused blocks to the drive, so it can employ proper wear leveling. When to discard is the question one should ask; there are currently two commonly used options:

  • Discard a block as it becomes free, that is, as part of the file delete operation. This is a mature, very safe option supported by most filesystem drivers, but it impacts performance, especially when many small files are deleted.
  • Discard all blocks that have become unused in one shot, on a scheduled basis, typically weekly. This concept is supported by some filesystem drivers (but e.g. VFAT does not support it) and is considered more performant in general; however, it can cause data loss on some SSDs if performed when I/O load is high. The current implementation is known to be safe on Intel and Samsung SSDs. Read more on this here.

The default behavior in Ubuntu 14.04 is to run fstrim weekly via cron. The cron script only performs trimming on Intel and Samsung SSDs by default, and simply exits if any other SSD is found. My SSD is from Lenovo (P/N: 0A65620), and is a rebranded Samsung PM830 self-encrypting disk featuring FIPS certified hardware-based full disk encryption with AES-256. A quick test confirmed that my SSD is recognised as a supported device, so specifying the discard mount option for ext4 partitions is not needed any more.


sudo fstrim -v /home
/home: 179466240 bytes were trimmed
sudo fstrim -v /
/: 59457536 bytes were trimmed

Still, partitions with VFAT or other filesystems without fstrim support should be mounted with the discard option.
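For example, a hypothetical EFI system partition entry in /etc/fstab could carry the option (the device UUID below is a placeholder, not my actual partition):

```
# /etc/fstab - VFAT has no fstrim support, so discard on delete
UUID=XXXX-XXXX  /boot/efi  vfat  umask=0077,discard  0  1
```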

Swap

I tuned the OS via sysctl variables to use swap only if really needed, as described in 'Ubuntu 12.10 on an SSD'.


# transient changes until reboot
echo 1 | sudo tee /proc/sys/vm/swappiness
echo 50 | sudo tee /proc/sys/vm/vfs_cache_pressure

# persistent changes
cat <<EOF | sudo tee -a /etc/sysctl.conf
vm.swappiness=1
vm.vfs_cache_pressure=50
EOF

Further tweaks

I configured tmpfs to be mounted over /tmp in just the same way I did on Ubuntu 12.10, limiting the maximum size of the temporary filesystem to 25% of RAM and adding mount options for security.


cat <<EOF | sudo tee -a /etc/fstab
tmpfs /tmp tmpfs nosuid,nodev,size=25% 0 0
EOF

Below is a verbatim copy of the bottom lines of the post 'Ubuntu 12.10 on an SSD'; it still applies.

As a closing note, end-user behaviour also matters much. I try to pay attention to creating transient files under /tmp - if you compile a lot, this matters much. One should find a healthy balance between being SSD-aware and being too paranoid about it. I investigated methods to decrease disk writes caused by syslog - many suggest to mount a tmpfs over /var/log which would mean all your logs are lost when you reboot, making any kind of audit or post mortem debugging impossible. I ended up sticking to the commit=120 mount option after some calculations. You should do the math and enjoy your disk. As with any kind of disk, check the SMART attributes from time to time, and make sure you take backups on a regular basis.
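As an illustration of the commit=120 technique, a root filesystem entry in /etc/fstab could look like the line below (the UUID is a placeholder; commit=120 flushes journal data every 120 seconds instead of the ext4 default of 5, trading up to two minutes of data on power loss for fewer writes):

```
# /etc/fstab - batch journal commits to reduce SSD writes
UUID=<root-uuid>  /  ext4  errors=remount-ro,commit=120  0  1
```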

Monday 7 July 2014

Lenovo S650 VibeUI update 1427

This post is a follow-up to my recent posts on the Lenovo VibeUI KitKat ROM, which is a major step forward with many advantages; however, I experienced anomalies with three Google provided applications on this ROM.

  1. Google Authenticator generates invalid TOTP tokens, which I described in detail last month. The workaround I settled on was to permanently switch to FreeOTP, an open source TOTP app that, besides working properly on the new ROM, feels superior to Google Authenticator.
  2. Google Maps navigation always crashes after a few minutes. The issue was narrowed down to affect newer versions, and downgrading to 8.0.0 proved to be a stable temporary workaround.
  3. Hangouts on the new ROM always exited when attempting to join a video call; audio-only calls were working fine. This issue remained unresolved as I prefer to use my thinkpad for video calls - and I do not consider hangout video calls critical anyway.

Lenovo published a firmware update for S650 smartphones on the 3rd of July; I downloaded and installed it overnight out of curiosity.

The upgrade procedure

First, I downloaded the update itself and a recent version of the google apps minimal package, and made sure both files were copied to the external storage.

I took a backup of my call logs and text messages, then booted into recovery and created a full TWRP backup. Read my previous posts for details on this step.

After wiping the data, cache and dalvik cache partitions, I installed the update from within TWRP recovery, then immediately applied superuser.zip. Before installing the google apps package, I manually freed up some space on the system partition by deleting apps that I do not need:


mount /system
rm /system/vendor/operator/app/*.apk 
# BaiduSearch.apk DuomiMusic.apk GaodeMap.apk Lakala.apk LenovoPhonemgr.apk MobileQQ.apk 
# ReadingJoy.apk SinaWeather.apk SinaWeibo.apk SohuNews.apk SohuTv.apk Tmall.apk UCBrowser.apk
rm /system/app/BaiduInput.apk
rm /system/priv-app/GameWorld_Phone.apk
rm /system/priv-app/Youyue.apk
umount /system

Once this was done, but before installing the google apps package, I rebooted the phone and chose English in the setup wizard. I found this much harder to do when google apps was installed in one shot before the initial boot and setup, as the google setup wizard (in Chinese) was interfering with the native setup process. After language selection and initial setup, I went back to recovery and installed google apps minimal. After booting the system, the google setup wizard greeted me in English.

Later I restored my call logs and text messages, as well as the application data of some of my key apps.

First impressions

This update fixed the Google Hangouts crash issue.

Unfortunately, Google Maps is still crashing after a few minutes of navigation. This issue, however, can be resolved by downgrading to version 8.0.0 of Google Maps, so it is not a show stopper.

Collecting debug info

As the Google Maps crash was very easy to reproduce, I decided to collect logs via ADB logcat and narrow the data down to the lines related to the crash:


$ cd android-sdk-linux/platform-tools
$ ./adb logcat -c
$ ./adb logcat > /tmp/logcat14.txt
$ # wait until Google Maps navigation crashes, then immediately Ctrl-C

An initial look at the data makes one realise that the information is simply too much for human eyes. I started grepping for "com.google.android.apps.maps", then identified the process ID, which was 31858, and digging deeper I realised that navigation made the GL_THREAD die with SIGSEGV, that is, a segmentation fault. I ended up cropping unrelated logs from before the crash and after the diagnostics had completed, and also filtered out some noise with the following command, which yielded an output that can be processed manually and that demonstrates the three phases android performs to collect diagnostic data on application (native) crashes.


$ grep -A 100000000 "Fatal signal 11" /tmp/logcat14.txt | grep -B 100000000 "native_crash should" \
| grep -v "AlarmManager" | grep -v "PowerManager" > /tmp/logcat_filtered.txt
$ less -S /tmp/logcat_filtered.txt
F/libc    (31858): Fatal signal 11 (SIGSEGV) at 0x00000016 (code=1), thread 715 (GL_THREAD)
F/libc    (31858): Send stop signal to pid:31858 in void debuggerd_signal_handler(int, siginfo_t*, void*)
D/AEE/AED (  133): $===AEE===AEE===AEE===$
D/AEE/AED (  133): p 0 poll events 1 revents 0
D/AEE/AED (  133): not know revents:0
D/AEE/AED (  133): p 1 poll events 1 revents 0
D/AEE/AED (  133): not know revents:0
D/AEE/AED (  133): p 2 poll events 1 revents 1
D/AEE/AED (  133): aed_main_fork_worker: generator 0x12ca0d0, worker 0xbedb5918, recv_fd 15
D/AEE/AED (  133): p 3 poll events 1 revents 0
D/AEE/AED (  133): not know revents:0
D/AEE/AED (  133): p 4 poll events 1 revents 0
D/AEE/AED (  133): not know revents:0
I/DEBUG   ( 1047): handle_request(15)
I/DEBUG   ( 1047): check process 31858 name:droid.apps.maps
I/DEBUG   ( 1047): tid 715 abort msg address is:0
I/DEBUG   ( 1047): BOOM: pid=31858 uid=10112 gid=10112 tid=715
D/SurfaceFlinger(  139): ffi_3d_jank timespan = 33.007385 jankCount = 1
I/DEBUG   ( 1047): [OnPurpose Redunant in preset_info] pid: 31858, tid: 715, name: GL_THREAD  >>> com.google.android.apps.maps <<<
I/DEBUG   ( 1047): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
I/DEBUG   ( 1047): Build fingerprint: 'Lenovo/sofina/S650:4.4.2/KOT49H/VIBEUI_V1.5_1427_2_ST_S650.:user/release-keys'
D/ADB_SERVICES(28073): adb fdevent_process list (11) (20) 
I/DEBUG   ( 1047): pid: 31858, tid: 715, name: GL_THREAD  >>> com.google.android.apps.maps <<<
I/DEBUG   ( 1047): signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 00000016
I/DEBUG   ( 1047):     r0 0000000c  r1 00000000  r2 4412ec00  r3 00000000
I/DEBUG   ( 1047):     r4 5f8c4d72  r5 62987e58  r6 64092a98  r7 000010f8
I/DEBUG   ( 1047):     r8 418a0900  r9 4251a500  sl 62987e40  fp 407aee6c
I/DEBUG   ( 1047):     ip 00000000  sp 62a85c30  lr 418a4b94  pc 418a4c00  cpsr 00070010
I/DEBUG   ( 1047): 
I/DEBUG   ( 1047): backtrace:
I/DEBUG   ( 1047):     #00  pc 00027c00  /system/lib/libdvm.so
I/DEBUG   ( 1047):     #01  pc 0002f2f0  /system/lib/libdvm.so (dvmMterpStd(Thread*)+76)
I/DEBUG   ( 1047):     #02  pc 0002c7d4  /system/lib/libdvm.so (dvmInterpret(Thread*, Method const*, JValue*)+188)
I/DEBUG   ( 1047):     #03  pc 00062ef9  /system/lib/libdvm.so (dvmCallMethodV(Thread*, Method const*, Object*, bool, JValue*, std::__va_list)+340)
I/DEBUG   ( 1047):     #04  pc 00062f1d  /system/lib/libdvm.so (dvmCallMethod(Thread*, Method const*, Object*, JValue*, ...)+20)
I/DEBUG   ( 1047):     #05  pc 000575c5  /system/lib/libdvm.so
I/DEBUG   ( 1047):     #06  pc 0000d600  /system/lib/libc.so (__thread_entry+72)
I/DEBUG   ( 1047): 
I/DEBUG   ( 1047): stack:
I/DEBUG   ( 1047):          62a85bf0  5f8c4d60  /data/dalvik-cache/data@app@com.google.android.apps.maps-1.apk@classes.dex
I/DEBUG   ( 1047):          62a85bf4  41901ad7  /system/lib/libdvm.so
I/DEBUG   ( 1047):          62a85bf8  64092a98  
I/DEBUG   ( 1047):          62a85bfc  5f8c4d60  /data/dalvik-cache/data@app@com.google.android.apps.maps-1.apk@classes.dex
I/DEBUG   ( 1047):          62a85c00  4193ded8  /system/lib/libdvm.so
I/DEBUG   ( 1047):          62a85c04  64092a98  
I/DEBUG   ( 1047):          62a85c08  64092ac0  
I/DEBUG   ( 1047):          62a85c0c  5f8c4d72  /data/dalvik-cache/data@app@com.google.android.apps.maps-1.apk@classes.dex
I/DEBUG   ( 1047):          62a85c10  62987e58  
I/DEBUG   ( 1047):          62a85c14  64092a98  
I/DEBUG   ( 1047):          62a85c18  000010f8  
I/DEBUG   ( 1047):          62a85c1c  418a0900  /system/lib/libdvm.so
I/DEBUG   ( 1047):          62a85c20  62987e68  
I/DEBUG   ( 1047):          62a85c24  425b39e0  /dev/ashmem/dalvik-heap (deleted)
I/DEBUG   ( 1047):          62a85c28  407aee6c  /system/lib/libft2.so
I/DEBUG   ( 1047):          62a85c2c  4189f880  /system/lib/libdvm.so
...
D/dalvikvm(  696): create interp thread : stack size=128KB
D/dalvikvm(  696): create new thread
D/dalvikvm(  696): new thread created
D/dalvikvm(  696): update thread list
D/dalvikvm(  696): threadid=88: interp stack at 0x63ee3000
D/dalvikvm(  696): threadid=88: created from interp
D/dalvikvm(  696): start new thread
D/dalvikvm(  696): threadid=88: notify debugger
D/dalvikvm(  696): threadid=88 (Error dump: data_app_native_crash): calling run()
...
D/AES     (  696): ExceptionLog: notify aed
D/AES     (  696):     process : com.google.android.apps.maps
D/AES     (  696):      module : com.google.android.apps.maps v801010122 (8.1.1)
D/AES     (  696): 
D/AES     (  696):       cause : data_app_native_crash
D/AES     (  696):       pid : 31858
W/AES     (  696): native_crash should be processed by aee already

As highlighted above, downgrading Google Maps to 8.0.0 or turning to alternatives (Waze for online, MapFactor for offline navigation) gets around this issue. My motivation for collecting and filtering logs was curiosity rather than the intention to spend much time debugging and tinkering with this closed source application.

Monday 16 June 2014

Lenovo S650 with android 4.4.2 and google authenticator (Part 3)

The first part of the series documents the process of upgrading Lenovo S650 from the original 4.2.2 ROM to the new VibeUI ROM which is based on KitKat 4.4.2 and MyUI. The second one documents Google Authenticator issues and a viable alternative. This post lists a few other issues I ran into during the first days of using the new ROM.

Bluetooth file transfer facepalm

I wanted to transfer my updated KeePass database .passwords.kdb from my thinkpad to my phone, but received an error telling me the transfer had failed. After a few attempts I found that my thinkpad showed the phone as disconnected after the first attempt. Bluetooth file transfers were working fine with the Lenovo S650 4.2.2 ROM. To eliminate the possibility that a software update broke bluetooth on the thinkpad, I quickly verified that file transfers were still working between my old Samsung Galaxy Y and my thinkpad.

Next, I went down to the garage to check that the phone properly connects to my car radio - not being able to issue and receive calls from within my car would have been a major issue and a show stopper. As everything seemed to be just fine, way more stable compared to the Samsung Galaxy Y, I returned upstairs to my thinkpad.

After further experiments I found that the thinkpad shows my phone as disconnected after around 30 seconds, whether I attempt to initiate file transfers or not. This seems to be a feature and not a bug: a consequence of Bluetooth Low Energy that was not present in the previous 4.2.2 ROM but is included in android 4.4 (4.3+ to be precise).

After narrowing down the experiments I found the root cause: a stupid bug on the phone prevents me from transferring hidden files (file names starting with a period) to the Lenovo 4.4.2 ROM - I simply renamed the file and successfully sent it to my phone. As I do not have other KitKat devices handy, I cannot test whether this is local to Lenovo or a generic android "feature". This limitation is definitely not imposed by the OBEX protocol.

Google Maps 8.1.0 crashing

On the 7th of June, I updated Google Maps to 8.1.0, but later that day found navigation crashing after one or two minutes of operation. As I was on the road (driving), I did not have time to investigate the issue in depth. I relaunched the application; after the next crash I forcefully stopped the application, later I even rebooted and purged all application data, but none of these seemed to help in the long run. The issue seemed to be isolated to navigation mode, so I was still able to use Maps, but with much degraded functionality - using directions without navigation and memorizing the route.

I had to take a pit stop, purged the application completely, then reinstalled version 8.0.0, which I had backed up on my microSD card. This version works without a glitch; I just have to ensure I do not update to 8.1.0. At first I thought there might be an issue with the stock Lenovo VibeUI / 4.4.2 ROM that only comes to the surface with this new version of Maps, but my recent experience with Google made me decide not to invest more time into tracking down the root cause. Version 8.0.0 serves me well, so I decided to stay on this version for a while.

Later that day I googled to check whether others were having issues with 8.1.0, and found a thread on xda-developers that perfectly describes my observations. The posts do not contain any solution or detailed investigation. The fact that this issue was observed on a Lenovo 850 ROM made me wonder again whether the issue was Lenovo specific. I also checked recent comments on Maps on Google Play and found many complaints about the recent update, but as the comments do not include the brand/model of the device, this did not help me further.

The bottom line is that investigating the 8.1.0 issue further does not fit into my current schedule, so I will stick to 8.0.0 until I find the time and motivation to dig deeper.

Lenovo S650 with android 4.4.2 and google authenticator (Part 2)

The first part of the series documents the process of upgrading Lenovo S650 from the original 4.2.2 ROM to the new VibeUI ROM which is based on KitKat 4.4.2 and MyUI. This post lists a few issues I ran into during the first days of using the new ROM.

Google Authenticator issues

I have been using Google Authenticator's time-based one-time passwords as a second authentication factor for a few very critical applications for a rather long time. After my recent update, I noticed that I was unable to log in to a critical service as the verification code was rejected. At first I thought there must be some sort of time synchronisation issue, so I synchronised the clock from within the Google Authenticator application, but my authentication attempts were still rejected. I deleted the configuration and re-imported the secret key; however, this did not help.

Needless to say, after a few attempts I was rather nervous, as I was under time pressure and worried about my stash... The same version of the application, 2.49, was working flawlessly before the upgrade, so I was initially rather sure the error was somewhere on my side. Rather than falling into panic, I quickly searched and found my old Samsung Galaxy Y (android 2.3.6), where the same version, 2.49, was already installed, deleted the old configuration and carefully typed in the base32 representation of my secret key, making sure not to mix up 0 with O or 1 with I. (At this point I would like to draw attention to Base58 encoding, which has an alphabet built with humans in mind and does not contain characters that are easy to mix up...)

To my surprise, I could log in using Google Authenticator on the Samsung device. After taking care of my time-sensitive task and calming down, I continued searching for the root cause. The RFC 6238 TOTP security token calculation used in Google Authenticator is very simple and publicly available; it depends on only the following two inputs:

  • The shared secret. First I double- and triple-checked that the shared secret was the same on both devices. I gathered definitive evidence by peeking into the SQLite database located at /data/data/com.google.android.apps.authenticator2/databases/databases, first via a hex editor, then via sqliteman on my thinkpad. Here is the catch: one can only do that on a rooted device. And definitely, from time to time, one might find oneself in a situation where root access is needed for a legitimate purpose...
  • The number of 30-second periods elapsed since the Unix epoch, which is based on system time. Although I had synced both devices and the timezone was also the same, I kept getting different tokens on the two devices. I experimented with the timezone settings, changing locale settings and manually setting different timezones, but it did not help.
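Since the calculation depends on nothing but these two inputs, it can be reproduced with standard command line tools. The sketch below (assuming openssl and xxd are available, and using the published RFC 6238 test secret rather than a real one) derives a 6-digit code from just the shared secret and the 30-second period counter:

```sh
#!/bin/sh
# Sketch of the RFC 6238 TOTP calculation; the secret and timestamp are the
# published RFC test vector, not real credentials.
secret_hex="3132333435363738393031323334353637383930"  # ASCII "12345678901234567890"
t=59                                                   # Unix time in seconds
counter=$(printf '%016x' $(( t / 30 )))                # 8-byte big-endian period counter
# HMAC-SHA1 over the counter, keyed with the shared secret
mac=$(printf '%s' "$counter" | xxd -r -p |
      openssl dgst -sha1 -mac HMAC -macopt hexkey:"$secret_hex" | awk '{print $NF}')
# dynamic truncation: the low nibble of the last byte selects a 4-byte window
offset=$(( 0x${mac#${mac%?}} * 2 ))
window=$(printf '%s' "$mac" | cut -c $((offset + 1))-$((offset + 8)))
code=$(printf '%06d' $(( (0x$window & 0x7fffffff) % 1000000 )))
echo "$code"   # → 287082, the RFC 6238 SHA-1 test vector for T=59
```

Two devices holding the same secret and the same clock must therefore produce identical tokens, which is exactly why differing tokens pointed at something outside the documented algorithm.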

I decided to study the source code - the algorithm is rather simple and both inputs were known to be the same, yet the results were different. The source did not contain any magic, but at the same time I noticed that the latest tag in the source repository was 2.21, whereas on both of my devices I had 2.49 - that was at least a bit suspicious. On the project page I found the following disclaimer: "This open source project allows you to download the code that powered version 2.21 of the application. Subsequent versions contain Google-specific workflows that are not part of the project."

I went on and found FreeOTP, a real open source implementation forked from Google Authenticator 2.21, maintained by RedHat. After taking a look at the source, I have installed it onto my S650 and imported my shared secret. Success. Open source rules. I uninstalled Google Authenticator.

Having found a working, very nice, truly open source alternative, I have lost the motivation to investigate further by decompiling Google Authenticator - unfortunately, I am rather busy these days with my primary duties. My hypothesis, without any proof, is the following: version 2.49 of Google Authenticator might include some JNI-based crypto acceleration, or something else outside of the Java/Dalvik core libraries, that is not part of any standard and is not documented, and is therefore not included or works a bit differently in the Lenovo ROM. Please leave a comment if you can prove or refute it.

Read on for other issues I encountered in the first few days. Of course, resolution or at least a workaround is provided where applicable.

Thursday 22 May 2014

Lenovo S650 with android 4.4.2 and google authenticator

I have finally found some spare hours to upgrade my Lenovo S650 from the 4.2.2 based S650_S119_131014 ROM to S650_S912_VIBEUI_V1.5_1419_5_ST_1F49 which is based on 4.4.2. As a first step, I studied the contents of the zip file, especially the updater-script at /META-INF/com/google/android/. It quickly became obvious that I would lose my root access after installing this update, which I wanted to avoid.

The rules I have set for myself do not allow rooting the phone via windows based third party applications (some of which are known to install some souvenirs), and the common path of using the windows-only MediaTek flashing tool in the rooting process is also out of scope for me. For private reasons I decided to not publish information on how to properly root the lenovo ROM via command line, without any closed source, suspicious third party app. Rather than that, let us assume that the original JellyBean system already has root access, and our goal is to retain this during the upgrade.

While setting the immutable flag on the su binary via chattr +i /system/xbin/su would retain root access during minor updates, this is useless if the system partition is formatted. In our case, all partitions except sdcard{0,1} would be formatted, as can be seen in the updater-script.

Adding the binary to the zip itself and amending the updater-script with a command to set the proper SELinux context, ownership and permissions comes as an intuitive solution. This can be achieved by the following line:


set_metadata("/system/xbin/su", "uid", 0, "gid", 0, "mode", 6755, "capabilities", 0x0, "selabel", "u:object_r:su_exec:s0")

One should note that the zip file is a signed jar, and any tinkering would cause signature verification to fail. Although there are various approaches for bypassing signature verification, no magic is needed due to the fact that the current system is already rooted: custom recovery images allow installing updates without a [valid] signature.

Custom recovery

Readers who have familiarized themselves with the previous post should already know how to 'flash a custom recovery' with a single line.

dd of=/dev/block/mmcblk0 if=recovery.img bs=1024 seek=36352

As an alternative, one can use Mobileuncle MTK Tools to install a custom recovery - if one renames the file to recovery.img and places it into the root of the SD card, the app will automatically pick it up.

I have used TWRP recovery 2.7.0.0. The Lenovo S650 comes in two variants, which have different partition layouts. When using pre-built recovery images, make sure to use an image compatible with the CN (Chinese) or ROW (Rest Of the World) partition layout according to your device.

The first thing after installing the custom recovery was to test it - taking a TWRP backup of the current system. Actually, testing the backup process is only half of the job, since the recovery process is slightly more important...

Superuser application

While this post is not going to cover the process of rooting, the basic concept of how root permission is granted is outlined here. The su binary, unlike in traditional Unices, will not ask for credentials but will instead communicate with a graphical application that displays the details of the 'switch user' request and allows the end user to approve or reject the request via the touchscreen.

As any flaw in the su binary or the superuser application, intentional or unintentional, may critically impact the security of the device, one should consider whether this capability is needed at all, and if so, which implementation to choose.

While SuperSU and ChainsDD Superuser are common choices, I decided to go with Koushik Dutta's Superuser. Unlike the other two it is open source, so I have taken some time to go through the source of both the native binary and the app. Additionally it features PIN protection and logging, which typically would not be available in the free versions of other alternatives.

I have downloaded the zip file, and studied the updater-script. I modified the 4.4.2 ROM by adding the binary and the application and changing the scripts to ensure proper metadata is set on the binary.

Google apps

I have downloaded a signed installable zip of minimal Google Apps for 4.4.2 - this only takes a few megabytes and allows me to decide which Google apps to install later.

The point of no return

I have observed that while the ROM updater-script does not replace the recovery, it installs a script that runs on normal boot and patches the recovery. In the worst case, this could mean that I end up with a non-rooted system, and my custom recovery, which would allow me to alter the system (install unsigned updates), would be erased and replaced by Lenovo's restricted recovery image. That would have been game over for me.

I have copied the vanilla ROM, the superuser installer and GApps installer zips to the SD card, along with my modified "superuser-included" ROM zip and rebooted to recovery. I performed an advanced wipe selecting user data, cache and dalvik-cache to be erased and then attempted to install the modified ROM.

In the middle of the process, I received an error notifying me that update-binary had failed. The second attempt yielded the same result, leaving the system in an unknown state. I paid attention not to reboot a single time until I was sure the system was properly installed and rooted, or at least that the custom recovery image was still in place.

I had to fall back to 'plan B' and installed the vanilla ROM, but instead of rebooting, quickly installed the superuser installer zip as well. Once done, I opened a shell and manually checked the presence and metadata of the su binary and the superuser application. Then I rebooted.

Make space and install GApps

On first boot, I realised there was almost no space on the system partition, so I removed some of the apps like Youyue, GameWorld and Baidu, then took the opportunity to uninstall some preinstalled non-system apps like Weibo, QQ and Lakala. I intentionally retained many system apps by Lenovo - in some cases they are really useful and feature-rich alternatives to some Google components.

If one is concerned about privacy and decides not to go with Lenovo apps, then the very same person should also be worried about installing any Google component onto his device... Instead of engaging in a religious debate, look at what kind of traffic the various components cause, and block according to your findings.

Once there was 100MB of free space on the system partition, I installed the minimal google apps package, rebooted - and signed into my google account. Play store services worked fine, and nothing gave me a significant headache... until I tried to use google-authenticator...

Read on, the second part of the series is already available.

Monday 14 April 2014

Backing up a stock Lenovo S650/MT6582 phone from shell

Right after unboxing my S650, before even booting into normal mode I wanted to clone the preloaded firmware. This device is my first personally owned android smartphone and I wanted to make sure I have a backup of the original state before tinkering or flashing a custom ROM. The rules, as implied by the above statement, were the following:

  • Flashing a custom recovery was not allowed.
  • The use of third party applications other than those that come preloaded on the Mediatek device was not allowed, as installing those would have 'tainted' the original state.
  • Linux only. Running any M$ Windows tool was strictly out of scope.
  • More of an unfortunate circumstance than a rule, but I did not have an external microSD card handy, so I had to rely on using the internal storage.

I would not position myself as an android expert in any way, as this is almost my first encounter with a modern android device. Trying to make use of the skills I have, I have taken a Linux-centric approach. After all, my smartphone is just an ARM based mini computer with embedded NAND flash storage running Linux...

Before my phone arrived, I read a few articles on how a full backup could be taken, how recovery works and what the boot process of an android device is like. This post is not going to cover these basics. My first idea was to boot my phone using a custom recovery image stored on my thinkpad, without actually flashing the recovery image. This involves rebooting the device into bootloader mode and connecting it to my thinkpad via USB.

Factory mode, Bootloader, fastboot and recovery

Having initiated the first boot with the power & volume up buttons pressed, I was confronted with a Chinese menu, which turned out to be not the recovery mode, but the factory mode with options similar to the following:

工厂模式 (factory mode)
完整测试 (full test)
单项测试 (single-item test)
测试报告 (test report)
清除eMMC (wipe eMMC)
版本信息 (version information)
重启手机 (reboot phone)

The only familiar character sequence, 'eMMC', suggested that option was responsible for wiping the whole internal storage, while the default selection would reboot the phone. A couple of days later I found a forum post that contained translations of these (or very similar) menu items.

The phone was connected to my thinkpad via a USB cable, but I did not have connectivity to the device, although I would have expected some sort of connection via the Android SDK.

After this, I rebooted into recovery mode via power, volume up & volume down buttons and realised that the stock recovery also does not provide any means to connect to the device from my thinkpad.

Finally, I could reboot the device into bootloader mode (also known as fastboot mode). I had to start up the device normally, select 'Media device (MTP)' when configuring the mode of the USB connection and enable USB debugging. I have found that the 'Camera (PTP)' and 'Mass storage' modes do not allow the execution of Android Debug Bridge (adb) commands.

Once the device was connected, which I confirmed via adb devices, I could reboot the device into bootloader mode via adb reboot bootloader. The device rebooted and printed that it had entered fastboot mode. I confirmed that the output of adb devices no longer listed the device as ready for connection (as the device was in bootloader mode), and that fastboot devices showed the device as present and in fastboot mode.

From within bootloader mode, the command fastboot boot custom-recovery.img should boot a custom recovery without actually flashing it. In my case, however, the device printed that it had downloaded the image successfully, but I was left with a 'Booting ...' message and a frozen device that did not do anything for twenty minutes and did not even respond to the power button.

I took out the battery, reinserted it and everything was back to normal. I thought the recovery image I tried to boot was not built for this phone, or maybe it was corrupted, so I tried multiple images that were reported to work with my S650. Neither the CWM nor the TWRP image seemed to work. I had to take an alternative approach.

Backing up manually, using the shell

Eventually, I rebooted to normal mode and worked towards my goal via adb shell which gave me shell access so I could make use of my linux experience. Some commands essentially require root access - note that rooting an S650 is not covered in this post, but the ability to gain root access is assumed.

First of all, I had to discover how the eMMC storage was partitioned and which parts I needed to back up.


$ cat /proc/partitions
major minor  #blocks  name

   7        0      10290 loop0
 253        0     524288 zram0
 179        0    7597184 mmcblk0
 179        1          1 mmcblk0p1
 179        2      10240 mmcblk0p2
 179        3      10240 mmcblk0p3
 179        4       6144 mmcblk0p4
 179        5    1048576 mmcblk0p5
 179        6     131072 mmcblk0p6
 179        7    3145728 mmcblk0p7
 179        8    3202688 mmcblk0p8
 179       64       2048 mmcblk0boot1
 179       32       2048 mmcblk0boot0

$ cat /proc/dumchar_info
Part_Name Size StartAddr Type MapTo
preloader    0x0000000001400000   0x0000000000000000   2   /dev/misc-sd
mbr          0x0000000000080000   0x0000000000000000   2   /dev/block/mmcblk0
ebr1         0x0000000000080000   0x0000000000080000   2   /dev/block/mmcblk0p1
pro_info     0x0000000000300000   0x0000000000100000   2   /dev/block/mmcblk0
nvram        0x0000000000500000   0x0000000000400000   2   /dev/block/mmcblk0
protect_f    0x0000000000a00000   0x0000000000900000   2   /dev/block/mmcblk0p2
protect_s    0x0000000000a00000   0x0000000001300000   2   /dev/block/mmcblk0p3
seccfg       0x0000000000020000   0x0000000001d00000   2   /dev/block/mmcblk0
uboot        0x0000000000060000   0x0000000001d20000   2   /dev/block/mmcblk0
bootimg      0x0000000000600000   0x0000000001d80000   2   /dev/block/mmcblk0
recovery     0x0000000000c00000   0x0000000002380000   2   /dev/block/mmcblk0
sec_ro       0x0000000000600000   0x0000000002f80000   2   /dev/block/mmcblk0p4
misc         0x0000000000080000   0x0000000003580000   2   /dev/block/mmcblk0
logo         0x0000000000300000   0x0000000003600000   2   /dev/block/mmcblk0
ebr2         0x0000000000080000   0x0000000003900000   2   /dev/block/mmcblk0
expdb        0x0000000000a00000   0x0000000003980000   2   /dev/block/mmcblk0
android      0x0000000040000000   0x0000000004380000   2   /dev/block/mmcblk0p5
cache        0x0000000008000000   0x0000000044380000   2   /dev/block/mmcblk0p6
usrdata      0x00000000c0000000   0x000000004c380000   2   /dev/block/mmcblk0p7
fat          0x00000000c37a0000   0x000000010c380000   2   /dev/block/mmcblk0p8
bmtpool      0x0000000001500000   0x00000000febf00a8   2   /dev/block/mmcblk0
Part_Name:Partition name you should open;
Size:size of partition
StartAddr:Start Address of partition;
Type:Type of partition(MTD=1,EMMC=2)
MapTo:actual device you operate

As can be seen, most of the relevant data is on /dev/block/mmcblk0, and there are critical areas on the flash storage that are not mapped to any partition. An obvious solution was to use the almighty dd command and calculate the proper count and skip parameters based on the output above.

It should be noted that the content of the preloader that is mapped to /dev/misc-sd is directly available under /dev/preloader.
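The count and skip parameters used below follow directly from the hex Size and StartAddr columns of /proc/dumchar_info. As a sketch of the arithmetic (using the recovery row as the example), the shell can do the conversion itself:

```sh
#!/bin/sh
# Convert a /proc/dumchar_info row into dd parameters (recovery row shown):
# both columns are byte values, so dividing by the dd block size of 1024
# yields count (from Size) and skip (from StartAddr).
size=$(( 0x0000000000c00000 ))    # Size column of the recovery row, in bytes
start=$(( 0x0000000002380000 ))   # StartAddr column of the recovery row, in bytes
count=$(( size / 1024 ))          # dd count with bs=1024
skip=$(( start / 1024 ))          # dd skip with bs=1024
echo "count=$count skip=$skip"    # → count=12288 skip=36352
```

The printed values match the recovery line in the dd list below; the same arithmetic yields every other count/skip pair.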


dd if=/dev/preloader of=preloader

dd if=/dev/block/mmcblk0 of=mbr.img bs=1024 count=512 skip=0
dd if=/dev/block/mmcblk0 of=ebr1.img bs=1024 count=512 skip=512
dd if=/dev/block/mmcblk0 of=pro_info.img bs=1024 count=3072 skip=1024
dd if=/dev/block/mmcblk0 of=nvram.img bs=1024 count=5120 skip=4096
dd if=/dev/block/mmcblk0 of=protect_f.img bs=1024 count=10240 skip=9216
dd if=/dev/block/mmcblk0 of=protect_s.img bs=1024 count=10240 skip=19456
dd if=/dev/block/mmcblk0 of=seccfg.img bs=1024 count=128 skip=29696
dd if=/dev/block/mmcblk0 of=uboot.img bs=1024 count=384 skip=29824
dd if=/dev/block/mmcblk0 of=bootimg.img bs=1024 count=6144 skip=30208
dd if=/dev/block/mmcblk0 of=recovery.img bs=1024 count=12288 skip=36352
dd if=/dev/block/mmcblk0 of=sec_ro.img bs=1024 count=6144 skip=48640
dd if=/dev/block/mmcblk0 of=misc.img bs=1024 count=512 skip=54784
dd if=/dev/block/mmcblk0 of=logo.img bs=1024 count=3072 skip=55296
dd if=/dev/block/mmcblk0 of=ebr2.img bs=1024 count=512 skip=58368
dd if=/dev/block/mmcblk0 of=expdb.img bs=1024 count=10240 skip=58880
dd if=/dev/block/mmcblk0 of=android.img bs=1024 count=1048576 skip=69120
dd if=/dev/block/mmcblk0 of=cache.img bs=1024 count=131072 skip=1117696

dd if=/dev/block/mmcblk0 bs=1024 count=3145728 skip=1248768 | gzip > usrdata.img.gz
#dd if=/dev/block/mmcblk0 of=fat.img bs=1024 count=3202688 skip=200192

#dd if=/dev/block/mmcblk0 of=bmtpool.img bs=1024 count=21504 skip=4173760.1640625

A final note on the last few commands

At the time of taking the backup, I did not have an external sdcard, so I had to use the FAT partition of the device, which was 3GB in size. Obviously, as an immediate consequence, the related line is commented out, as trying to write the image of the FAT partition to the FAT partition itself would not be the brightest idea, would it? The content of this partition can be simply accessed and copied over to a PC by selecting USB connection mode 'Storage device'.

Further, backing up the usrdata partition, which is similar in size to the partition to which the backup was stored, needed a small trick. Given that the partition was almost empty (confirmed via df), I gzipped the image on the fly to make sure it would fit, and ended up with a 315MB compressed image.

I did not find much information on bmtpool. It is not mapped to a partition and has a start address which is not aligned to 512 bytes, so it could not be properly backed up via Windows based SP FlashTool according to multiple sources. Actually, nobody seems to have missed the contents of this section.

Tinkering with the raw backups

Some of the backed up files represent raw copies of whole partitions; these can be directly loop-mounted on linux. Others, such as bootimg.img or recovery.img, need another approach.

Based on my experience with initramfs and kernel images, I initially tried to uncompress them via gunzip and cpio, but looking at them via a hex editor revealed that they are prefixed with a special header. I quickly googled up a perl script from clockworkmod which was built for unpacking recovery images, but it soon turned out that MT6582 MediaTek devices use a special format that cannot be unpacked via this script.

Just out of curiosity, I studied the directory structure of the boot and recovery images and confirmed that my smartphone is indeed just a special linux device...

Restoring manually

Restoring is straightforward: use the same dd commands, but swap the values of if and of, and replace skip with seek. And do make sure to slightly modify the command for usrdata.img.gz, using gunzip. This one is left as an exercise for the reader.
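For readers who want to verify the skip-to-seek swap without touching a real eMMC, here is a harmless sketch using throwaway files in place of the device (disk.img and logo.img are made-up stand-ins, and the 54 KiB offset is arbitrary; the real commands use the offsets from the table above):

```sh
#!/bin/sh
# Demonstrate the restore direction on throwaway files instead of /dev/block/mmcblk0:
# relative to the backup command, if/of are swapped and skip becomes seek.
dd if=/dev/zero of=disk.img bs=1024 count=64 2>/dev/null             # stand-in for the eMMC
printf 'LOGO' > logo.img                                             # stand-in for a backed-up image
dd if=logo.img of=disk.img bs=1024 seek=54 conv=notrunc 2>/dev/null  # "restore" at a 54 KiB offset
restored=$(dd if=disk.img bs=1024 skip=54 count=1 2>/dev/null | head -c 4)
echo "$restored"                                                     # → LOGO
rm -f disk.img logo.img
```

Note conv=notrunc, which keeps dd from truncating the output at the end of the written data - essential when the output is a file rather than a block device, and harmless on the device itself.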