February 09, 2010

Linux Catalyst (fglrx): Fixing black screen issue in versions 9.12 and 10.1

After upgrading the AMD/ATI Catalyst driver (aka fglrx) from version 9.11 to 9.12 on Linux, I found that the new version didn't work: X would start, but the screen stayed black. I didn't put much time into resolving it and simply rolled back to Catalyst 9.11.

Now that Catalyst 10.1 is out on Linux, I tried upgrading again but encountered the same black screen issue. I found a few solutions but they missed a crucial step (number 2 below), so here is the complete workaround:
  1. If your X session has been started and you are looking at the black screen, press CTRL+ALT+F2 to switch to the console, and login as root.
  2. Make sure /etc/ati/amdpcsdb.default file is present.
  3. Delete /etc/ati/amdpcsdb file.
  4. Reboot
X should start as usual after that.
If you don't have the /etc/ati/amdpcsdb.default file, try reinstalling Catalyst.
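The console steps above can be sketched as a small script. This is a dry-run sketch: by default it works on a scratch directory under /tmp, and only touches the real Catalyst files when you point ATI_DIR at /etc/ati as root (the path and file names are the stock Catalyst ones).

```shell
#!/bin/sh
# Sketch of steps 2-4. ATI_DIR defaults to a scratch copy so the logic
# can be tried safely; set ATI_DIR=/etc/ati (as root) to apply for real.
ATI_DIR="${ATI_DIR:-/tmp/ati-demo}"

if [ "$ATI_DIR" = /tmp/ati-demo ]; then
    # demo setup only: fake the two files a real /etc/ati would hold
    mkdir -p "$ATI_DIR"
    touch "$ATI_DIR/amdpcsdb.default" "$ATI_DIR/amdpcsdb"
fi

if [ -f "$ATI_DIR/amdpcsdb.default" ]; then
    # step 3: delete the stale profile store; fglrx regenerates it
    # from amdpcsdb.default after the reboot (step 4)
    rm -f "$ATI_DIR/amdpcsdb"
    echo "amdpcsdb removed - now reboot"
else
    echo "amdpcsdb.default is missing - reinstall Catalyst first" >&2
fi
```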

The cause of the issue seems to be the inability of the newer fglrx versions to handle the amdpcsdb file created by the previous versions. I don't know what AMD's QA department was doing. Or do they even have any QA for Linux?

December 26, 2008

Linux & Automounting of removable USB disks: Xfce take

Automounting of removable USB devices seems to be a recurring problem for many Linux users. It's caused by the opaque interaction between ConsoleKit, PolicyKit and the desktop environment. Even if the desktop environment supports volume management (and I believe all major Linux desktops do: GNOME, KDE, Xfce), missing just one link in the fragile chain of interactions between the desktop components will break automounting, and possibly other things such as sound in PulseAudio.

I'll take the Xfce desktop as an example, since I've had some trouble with it automounting USB flash sticks. The prerequisites for working automounting in Xfce are: a running dbus daemon (a must for recent desktops anyway), a running consolekit daemon, a running Thunar (check for a "thunar --daemon" process), and Thunar volume management enabled (check the "Enable Volume Management" option in the Advanced tab of the File Manager section in the settings).
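The process checks above can be scripted. A minimal sketch - the process names are the ones typical distros use, so adjust the patterns if yours differ:

```shell
#!/bin/sh
# check_proc LABEL PATTERN - report whether a matching process is running
check_proc() {
    if pgrep -f "$2" >/dev/null 2>&1; then
        echo "$1: running"
    else
        echo "$1: NOT running"
    fi
}

check_proc "dbus"       "dbus-daemon"
check_proc "consolekit" "console-kit-daemon"
check_proc "thunar"     "thunar --daemon"
```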

But even having all that running doesn't guarantee success. Enter the infamous "org.freedesktop.hal.storage.mount-removable no <-- (action, result)" error message, which seems perfectly in line with the long Unix tradition of cryptic error messages. What's worse, the error window doesn't say which component failed, or why.

The common answer to that is adding the following snippet to /etc/PolicyKit/PolicyKit.conf:
<match action="org.freedesktop.hal.storage.mount-removable">
  <return result="yes"/>
</match>

However, this is actually wrong: the fix is akin to removing the password from a user account because the user can't log in, instead of giving them a working password.

Another symptom of this problem is the lack of PolicyKit authorizations in the polkit-auth output (see the end of this post for a "good" authorization list):
$ polkit-auth

After a long investigation, I found that my X session wasn't registered with ConsoleKit:
$ ck-list-sessions 
uid = '501'
realname = ''
seat = 'Seat1'
session-type = ''
active = FALSE
x11-display = ''
x11-display-device = ''
display-device = '/dev/tty1'
remote-host-name = ''
is-local = TRUE
on-since = '2008-12-23T19:04:59Z'
idle-since-hint = '2008-12-23T19:05:33Z'

See, only my tty1 console session is there, no sign of my X session started from that console. Here's how a registered X session should look:
$ ck-list-sessions 
uid = '501'
realname = ''
seat = 'Seat1'
session-type = ''
active = TRUE
x11-display = ':0'
x11-display-device = '/dev/tty7'
display-device = ''
remote-host-name = ''
is-local = TRUE
on-since = '2008-12-26T09:28:52Z'

Why wasn't my Xorg session registered?
Here I must confess that I don't use GDM or any other display manager, so I (correctly) assumed that this was the cause. Apparently GDM and KDM take care of registering the session with ConsoleKit, while the /usr/bin/startxfce4 script used to start Xfce from the console does not.

After some more googling and experimentation, I found this XDM bug, which gave me a hint: /usr/bin/ck-xinit-session should be executed during Xorg startup, preferably immediately after the X server is launched.

Here's my solution: create .config/xfce4/xinitrc in your home directory, make it executable, and have it invoke /usr/bin/ck-xinit-session before the Xfce xinitrc:

mkdir -p ~/.config/xfce4
printf '#!/bin/sh\n\n' > ~/.config/xfce4/xinitrc
echo 'exec /usr/bin/ck-xinit-session sh /etc/xdg/xfce4/xinitrc' >> ~/.config/xfce4/xinitrc
chmod u+x ~/.config/xfce4/xinitrc

After starting Xfce, I checked the session list, and my X session was there. Bingo! After inserting a guinea-pig USB stick and waiting a few seconds, I was greeted with a Thunar window showing the stick's contents. So automounting finally worked too, and without enabling it for everyone and everywhere with that PolicyKit.conf hack.

Here's a "working" output from polkit-auth for the reference:

I hope this guide will be useful for those diehards who don't use GDM/KDM. As a bonus, this technique fixes the sound in PulseAudio for me.

May 30, 2008

HOWTO: Using nginx to accelerate Apache on Cpanel server

Nginx - the small, lightning-fast and very efficient web server - is usually used to serve static content or as a reverse proxy/load balancer in front of Apache or other relatively slow backends. So it is natural to use nginx as a frontend for Cpanel's Apache: it saves a substantial amount of memory and CPU time otherwise consumed by the numerous Apache children spoonfeeding content to slow clients.

I always had this in mind, but until recently had no time to look closely at implementing it. Then I saw a forum post with a sample script for generating the nginx configuration file based on Cpanel account info, and then an onslaught of visitors on a shared Cpanel server I admin slowed it to a crawl, and I was forced to delve into the innards of Cpanel. As a result of this investigation I wrote the "nginx on Cpanel" HOWTO presented below.

Installing Apache module

First of all, when nginx is used as a reverse proxy in front of Apache, the visitor IPs received by Apache are wrong: all requests come from nginx, so the main server IP gets logged.
To make Apache log the real visitor IPs instead of the main server IP, a special Apache module (mod_rpaf) is needed.
Download, untar, cd to the newly created directory and run this command as root:
/usr/local/apache/bin/apxs -i -c -n mod_rpaf-2.0.so mod_rpaf-2.0.c
That will install the module into the Apache module directory.

Then go to WHM, Main >> Service Configuration >> Apache Configuration > Include Editor > Pre Main Include and add this section there, replacing LIST_OF_YOUR_IPS with the list of IP addresses managed by Cpanel:

LoadModule rpaf_module modules/mod_rpaf-2.0.so

# enable reverse proxy add forward
RPAFenable On
# which IPs are forwarding requests to us
# (replace LIST_OF_YOUR_IPS with the IPs managed by Cpanel)
RPAFproxy_ips LIST_OF_YOUR_IPS
# let rpaf update vhost settings, allowing the same hostnames
# as in the "real" configuration for the forwarding Apache
RPAFsethostname On
# which header mod_rpaf looks at when trying to find
# the IP that is forwarding our requests
RPAFheader X-Real-IP

Apache configuration changes

Then we need to move Apache to another port - let's take 81 for example. You can simply edit it on the "Tweak Settings" page in WHM, replacing 80 with 81, or, doing it the command-line way, edit /var/cpanel/cpanel.config and change port 80 in the apache_port assignment to 81.
Run /usr/local/cpanel/whostmgr/bin/whostmgr2 --updatetweaksettings as advised at the top of that file.
Check /usr/local/apache/conf/httpd.conf for any occurrences of port 80, and run /scripts/rebuildhttpdconf to make sure httpd.conf is up to date.
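The cpanel.config edit above can also be done with a sed one-liner. This sketch rewrites a scratch copy by default; point CFG at the real /var/cpanel/cpanel.config (as root) to apply it, and keep the backup. The demo line mimics the usual apache_port=0.0.0.0:80 format - check what the line actually looks like on your server first.

```shell
#!/bin/sh
# Rewrite the apache_port line from port 80 to 81. CFG defaults to a
# scratch file so the command can be tried safely first.
CFG="${CFG:-/tmp/cpanel.config.demo}"

if [ "$CFG" = /tmp/cpanel.config.demo ]; then
    # demo setup: a stand-in for the real apache_port line
    printf 'apache_port=0.0.0.0:80\n' > "$CFG"
fi

cp "$CFG" "$CFG.bak"                       # keep a backup
sed -i 's/^\(apache_port=.*\)80$/\181/' "$CFG"
grep '^apache_port' "$CFG"                 # show the result
```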

It also makes sense to reduce the number of Apache children, as nginx will take care of spoonfeeding data to clients on slow network links, freeing the Apache children for their backend work. Edit /usr/local/apache/conf/httpd.conf and replace the prefork.c section with this (note that I used very modest values here; your mileage may vary):
<IfModule prefork.c>
StartServers 5
MinSpareServers 2
MaxSpareServers 5
MaxClients 50
MaxRequestsPerChild 0
</IfModule>

Run /usr/local/cpanel/bin/apache_conf_distiller --update --main to pick up the changes, and then /scripts/rebuildhttpdconf to make sure your changes are in.
Note that you will need to watch Apache's extended server status at peak load times to get an idea of how many children your server actually needs.

You'll also need to update the Apache port in /etc/chkserv.d/httpd and restart chksrvd with /etc/init.d/chksrvd restart

Generating nginx config files

The final step: build the nginx config files based on the domains hosted on your server.
This is done by a simple script which generates two configuration files for nginx - the main one at /usr/local/nginx/conf/nginx.conf, and an include file with all the virtual hosts at /usr/local/nginx/conf/vhost.conf:


cat > "/usr/local/nginx/conf/nginx.conf" <<EOF
user nobody;
# no need for more workers in the proxy mode
worker_processes 1;

error_log logs/error.log info;

worker_rlimit_nofile 8192;

events {
worker_connections 512; # increase for more busy servers
use rtsig; # you should use epoll here for Linux kernels 2.6.x
}

http {
server_names_hash_max_size 2048;

include mime.types;
default_type application/octet-stream;

sendfile on;
tcp_nopush on;
tcp_nodelay on;

keepalive_timeout 10;

gzip on;
gzip_min_length 1100;
gzip_buffers 4 32k;
gzip_types text/plain text/html application/x-javascript text/xml text/css;
ignore_invalid_headers on;

client_header_timeout 3m;
client_body_timeout 3m;
send_timeout 3m;
connection_pool_size 256;
client_header_buffer_size 4k;
large_client_header_buffers 4 32k;
request_pool_size 4k;
output_buffers 4 32k;
postpone_output 1460;

include "/usr/local/nginx/conf/vhost.conf";
}
EOF
/bin/cp /dev/null /usr/local/nginx/conf/vhost.conf

cd /var/cpanel/users
for USER in *; do
for DOMAIN in `cat $USER | grep ^DNS | cut -d= -f2`; do
IP=`cat $USER|grep ^IP|cut -d= -f2`;
ROOT=`grep ^$USER: /etc/passwd|cut -d: -f6`;
echo "Converting $DOMAIN for $USER";

cat >> "/usr/local/nginx/conf/vhost.conf" <<EOF
server {
access_log off;

error_log logs/vhost-error_log warn;
listen 80;
server_name $DOMAIN www.$DOMAIN;

# uncomment location below to make nginx serve static files instead of Apache
# it will make the bandwidth accounting incorrect as these files won't be logged!
#location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
# root $ROOT/public_html;
#}

location / {
client_max_body_size 10m;
client_body_buffer_size 128k;

proxy_send_timeout 90;
proxy_read_timeout 90;

proxy_buffer_size 4k;
# you can increase proxy_buffers here to suppress "an upstream response
# is buffered to a temporary file" warning
proxy_buffers 16 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;

proxy_connect_timeout 30s;

proxy_redirect http://www.$DOMAIN:81 http://www.$DOMAIN;
proxy_redirect http://$DOMAIN:81 http://$DOMAIN;

proxy_pass http://$IP:81/;

proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
}
}
EOF

done
done
Run /usr/local/nginx/sbin/nginx -t to check the configuration, and then /usr/local/nginx/sbin/nginx to start nginx. You are set!

If you don't care about per-host bandwidth accounting and are willing to trade correct bandwidth numbers for increased server performance, you can uncomment the location lines below the warning comment and watch the server pick up speed. Beware of two gotchas here: subdomains most likely will not work, as their document root points to a different place; and since nginx doesn't support .htaccess files (for performance reasons), those files won't be obeyed for the listed file types.
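For reference, here is what the uncommented static-file block from a generated vhost would look like; the account path shown is hypothetical (the generator substitutes the real $ROOT):

```nginx
# serve the listed static file types directly, bypassing Apache
location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
    root /home/username/public_html;  # $ROOT/public_html from the generator
}
```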

Obviously, the config file must be regenerated every time a new domain is added. Deleted and suspended domains should keep working fine without regeneration, though.


You may ask: is it really worth the trouble? Here's a graph of the load average on a server where nginx was installed as a reverse proxy for Apache as described in this post - can you guess where the switch to nginx happened?

January 21, 2007

Capturing HDV stream from camera in Linux over FireWire

Things are in a bit of disarray in the HDV Linux department. While there are some good Linux applications supporting HD video editing (such as Cinelerra), getting an HDV stream out of the camera via ieee1394, aka FireWire, aka iLink, proved to be a formidable task.

Anyway, after getting hold of an HDV-capable Canon HV10 digital video camera, I quickly found that the dvgrab utility, which was supposed to take care of my grabbing needs, doesn't work with HDV streams (which are in essence just 1440x1080i MPEG2 streams on a standard miniDV tape cassette). Some googling later, it appeared that a utility with the rather dull name test-mpeg2 (part of the libiec61883-utils RPM in Fedora Core Linux) should help me capture the FireWire stream. But it didn't. It printed the hopeful "Starting to receive" message and just sat there doing nothing.

Another round of googling revealed another utility: mpg1394grab, which worked despite being tiny (less than 200 lines), four years old, and written for another video camera altogether. The compiling instructions are in the .c file itself.

After some more googling and trying, test-mpeg2 worked in the end, too. I had to provide the id of the ieee1394 node corresponding to my video camera, as reported by plugreport:
test-mpeg2 -r 0 > capture.m2t
but before that, run this bit of magic:
plugctl -n 0 oPCR[0].n_p2p_connections=1
What does it do? -n 0 specifies the id of your camera's FireWire node from plugreport, and the rest tells the adapter to enable a point-to-point (p2p) connection for the output plug (oPCR) instead of a broadcast connection (bcast_connection in plugreport).

Unfortunately, after a camera disconnect it has to be set up again. I haven't been able to write a udev rule to make this automatic, because udev apparently doesn't catch connect/disconnect events on the FireWire bus (as monitored with udevmonitor).

So there it is: a single 1.5GB file containing 8 minutes of HDV video. Too bad dvgrab with its autosplitting doesn't work with HDV. It did work flawlessly in combination with Kino when the camera was set to output a DV stream, so at least DV capturing (and editing) looks quite solid on Linux.

January 15, 2007

Mobile Action 8730P USB Cable And Linux

More than a year ago, I had to use GPRS as a means of Internet connectivity while travelling. It turned out the only option for connecting my Siemens C75 phone to the notebook was a special USB cable. I double-checked the Linux support and bought the MA-8730P made by Mobile Action. It is based on the pl2303 USB<->serial converter chip, which is well supported in Linux.

Little did I know that the "P" version (meaning it can charge the phone directly from USB while connected) had a little quirk: to start operating, a small unique sequence must be sent to the converter. Under Windows this is done by the bloated "Phone Manager" supplied by Mobile Action. Under Linux, the device was recognized by the system, but any connection attempts to the USB modem failed.

Some guys captured this secret sequence under Windows and hacked together a tiny program which enables the cable operation. Since it will undoubtedly be useful for those unfortunate souls who want to use MobileAction USB cables under Linux, I'm posting it here (1k). Apparently it also works for the MA-8720P cable.

To compile, run
$ gcc -Wall -o chargerma chargerma.c
To use it, first connect the cable. You should see something like this in your dmesg output:
usb 4-1: new full speed USB device using uhci_hcd and address 3
usb 4-1: configuration #1 chosen from 1 choice
pl2303 4-1:1.0: pl2303 converter detected
usb 4-1: pl2303 converter now attached to ttyUSB0
usb 4-1: New USB device found, idVendor=067b, idProduct=2303
usb 4-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 4-1: Product: USB-Serial Controller
usb 4-1: Manufacturer: Prolific Technology Inc.
Now run (as root)
# ./chargerma /dev/ttyUSB0
Note that you have to run it again after every cable disconnect to restore the modem functionality. I guess with some udev/hotplug rules this program could be run automatically on each cable connect, but my hotplug-fu is not that strong yet.
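For those wanting to try automating it, a udev rule along these lines might work - an untested sketch, assuming chargerma is installed as /usr/local/bin/chargerma and matching the pl2303 IDs (067b:2303) from the dmesg output above:

```
# /etc/udev/rules.d/90-chargerma.rules (untested sketch)
KERNEL=="ttyUSB*", ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", \
    RUN+="/usr/local/bin/chargerma /dev/%k"
```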
Below is the full program text for the reference.

#include <stdio.h>
#include <termios.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>

int main(int argc, char *argv[])
{
    int fd;
    int status, result;
    /* the "secret sequence" captured from the Windows driver */
    char *buf = "\x55\x55\x55\x55\x04\x01\r\x0";
    struct termios options;

    if (argc == 1) {
        printf("usage: chargerma /dev/ttyUSB0\n");
        return 0;
    }

    fd = open(argv[1], O_RDWR | O_NDELAY);
    if (fd == -1) {
        perror("open_port: Unable to open port");
        return -1;
    }

    /* Init the port: raise RTS, drop DTR */
    ioctl(fd, TIOCMGET, &status);
    status |= TIOCM_RTS;
    status &= ~TIOCM_DTR;
    result = ioctl(fd, TIOCMSET, &status);

    /* 9600 8N1, applied before sending */
    tcgetattr(fd, &options);
    cfsetispeed(&options, B9600);
    cfsetospeed(&options, B9600);
    options.c_cflag &= ~PARENB;
    options.c_cflag &= ~CSTOPB;
    options.c_cflag &= ~CSIZE;
    options.c_cflag |= CS8;
    tcsetattr(fd, TCSAFLUSH, &options);

    /* Send the secret sequence */
    result = write(fd, buf, 8);
    if (result < 0)
        fputs("write failed!\n", stderr);

    close(fd);
    return 0;
}

UPDATE: Some quick googling showed that exactly the same technique also works for the MA-8230P and MA-8910P USB cables. I would guess it'll work for all "P" variants of the Mobile Action cables (MA-8020P, 8250P, 8260P, 8270P, 8280P, 8290P, 8830P, 8310P, 8320P) in Linux. You're welcome to test it!