My exheres repository

A couple of months ago I reinstalled my PC with Exherbo, and now I can say that I'm a happy user (well, a happy "developer", since Exherbo does not have users). So I decided to port a number of ebuilds that I use frequently to exheres, and created my own exheres repository on GitHub, called ajdiaz-exheres.

Now I will try to get my repo included in the unavailable-unofficial list. In the meantime, here is the configuration that you need for paludis:

location = ${ROOT}/var/db/paludis/repositories/ajdiaz-exheres
sync = git://
format = e


Update dot files with git

For years I used custom scripts to keep my dot files updated: I had a local Bazaar repository and a script which checked for differences between the home dot files and the files stored in the repository. This solution worked fine for years, but now I want to make some changes…

The first one is moving my dot files to git (and probably pushing them to GitHub), and the second one is creating a git hook to update my dot files. I know there are a lot of similar solutions, some more complex, others simpler, but this one is mine 🙂

So I created a post-commit hook script for git which performs the modifications that I need. Now I just do these steps:

1. Create a new git repo:

mkdir mydots_repo
cd mydots_repo && git init

2. Put the hook:

wget -O .git/hooks/post-commit
chmod 755 .git/hooks/post-commit

Or just put this content into the post-commit hook:

#!/bin/bash
# (c) 2010 Andres J. Diaz <[email protected]>
# A git-commit(1) hook to update the home dot-file links, using this
# repository as the source.
# To enable this hook, save this file as "post-commit".
# Repository files are stored without the leading dot; the link in
# $HOME gets the dot prepended.

for dot in "$PWD"/*; do
    home_dot="$HOME/.$(basename "$dot")"
    if [ -L "${home_dot}" ]; then
        if [ "${home_dot}" -ef "$dot" ]; then
            echo "[skip] ${home_dot}: is already updated"
        else
            rm -f "${home_dot}" && \
            ln -s "$dot" "${home_dot}" && \
            echo "[done] updated link: ${home_dot}"
        fi
    elif [ -e "${home_dot}" ]; then
        echo "[keep] ${home_dot}: is a regular file"
    else
        ln -s "$dot" "${home_dot}" && \
        echo "[done] created link: ${home_dot}"
    fi
done

3. Copy old files:

cp ~/old/bzr_repo/* .
git add *

4. Commit and recreate links:

git commit -a -m'initial import'

And it works 🙂

Moving to github

For the last week we have been moving the Connectical servers from their old location in the Virpus datacenter in Texas to our own managed infrastructure, built on top of a GuruPlug cluster.

We are now discussing how to distribute the infrastructure and how to keep a number of copies in remote locations up to date; we are exploring solutions like elliptics and similar tools.

In the meantime I created my GitHub account to host my projects still under development, and also to have a backup of some projects that I really use every day.


New version of dtools

Today I released a new version of dtools. Distributed tools, aka dtools, is a suite of programs written in bash that run different UNIX commands in parallel on a list of tagged hosts.


  • Fully written in bash; no third-party software required (except ssh, obviously).
  • Based on a modular architecture, easy to extend.
  • Full integration with ssh.
  • Easy grouping of hosts by tags, or searching by regular expression.
  • Management of ssh hosts.
  • Parseable, but still human-readable, output.
  • Designed with system admins in mind; no special development skills required to extend the software.

Short Example

$ dt tag:linux ssh date
okay::dt:ssh:myhostlinux1.domain:Mon Nov 16 23:54:04 CET 2009
okay::dt:ssh:myhostlinux3.domain:Mon Nov 16 23:54:04 CET 2009
okay::dt:ssh:myhostlinux2.domain:Mon Nov 16 23:54:04 CET 2009
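The fixed, colon-separated prefix is what makes this output easy to script against. As a sketch (the field positions are assumed from the "status::dt:module:host:output" lines above), awk can pull out just the host and the result:

```shell
# Extract "host -> status" from a dtools output line; with -F: the
# fields are: 1=status, 2=(empty), 3=dt, 4=module, 5=host, 6+=output.
echo 'okay::dt:ssh:myhostlinux1.domain:Mon Nov 16 23:54:04 CET 2009' |
  awk -F: '{print $5, "->", $1}'
# prints: myhostlinux1.domain -> okay
```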

As usual, you can download the code from the project page, or, if you prefer, get it via git:

git clone git://


New htop color scheme

For a couple of weeks I have been using htop at work to get a fast view of the system status. htop is an interactive process viewer for Linux, similar to the classic UNIX top but with some enhancements, for example a more configurable view, integration with the strace and lsof programs, and much more.

But (and it's a big "but" for me) I really dislike the color scheme that it uses by default. htop comes with five color schemes, but I could not find a beautiful one (from my personal point of view, of course), so I decided to make a new one. I call it the "blueweb" theme (don't ask) ;). And here is the result:

[Screenshot: htop with the blueweb theme]

You can download the patch file for the htop source code. And yes, unfortunately you need to patch the code.

Now my htop looks nice 🙂


Python module to handle runit services

Last month I needed to install runit on some servers to supervise a couple of services. Unfortunately my management interface could not handle those services anymore, so I decided to write a small Python module to solve this handicap, and this is the result!

With this module you can handle a number of runit services from a Python environment. I think this might work for daemontools too, but I have not tested it yet. Let's see an example 😀

>>> import supervise
>>> s = supervise.Service("/var/service/httpd")
>>> print s.status()
{'action': None, 'status': 0, 'uptime': 300L, 'pid': None}
>>> if s.status()['status'] == supervise.STATUS_DOWN: print "service down"
service down
>>> s.start()
>>> if s.status()['status'] == supervise.STATUS_UP: print "service up"
service up

Personally I use this module with the rpyc library to remotely manage the services running on a host, but it is also easy to build a web interface, for example using bottle:

import supervise
import simplejson
from bottle import route, run

@route('/service/status/:name')
def service_status(name):
    """ Return a json with service status """
    return simplejson.dumps(supervise.Service("/var/service/" + name).status())

@route('/service/up/:name')
def service_up(name):
    """ Start the service and return OK """
    s = supervise.Service("/var/service/" + name)
    s.start()
    return "OK UP"

@route('/service/down/:name')
def service_down(name):
    """ Stop the service and return OK """
    s = supervise.Service("/var/service/" + name)
    s.stop()  # stop() assumed by symmetry with the start() call shown above
    return "OK DOWN"

run(host='localhost', port=80)

Now you can stop a service just by pointing your browser at http://localhost/service/down/httpd (to bring down the httpd service in this case).


libnss_map library

Last week I was working on libnss_map, an NSS library module to map user credentials to an existent user in the system. This module is intended to be used in highly virtualized environments, like cloud computing, or in embedded systems which require a lot of users.

When a new user has been authenticated by PAM or another authentication mechanism, the nss_map module creates a virtual user whose credentials are mapped to an existent user. For example, suppose we have a user called virtual, created the standard way in /etc/passwd:

    virtual:x:15000:15000:virtual user for nss_map:/dev/null:/sbin/nologin

Then edit /etc/nssmap.conf:

    virtual:x:15000:15000:virtual user for nss_map:/home/virtual:/bin/bash

Note that the user directory is really a base dir in nssmap: each new user gets their home in /home/virtual/logname, where logname is the name the user logged in with, and /home/virtual is the prefix set in nssmap.conf.
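The name-to-home mapping described above can be sketched in plain shell (the values here are only illustrative, not part of nss_map itself):

```shell
# The home dir nss_map hands out is the configured prefix plus the
# login name; base_home and logname below are hypothetical values.
base_home="/home/virtual"   # prefix taken from /etc/nssmap.conf
logname="alice"             # name the user logged in with
echo "${base_home}/${logname}"
# prints: /home/virtual/alice
```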

As usual, you can get any of my projects from


/proc and /sys tricks

Really, the sysfs and /proc filesystems are worlds of magic and fantasy. Each day I discover a new trick using these filesystems, so I decided to post a short summary of my favorite ones. Enjoy, and feel free to add your tricks in the comments; maybe we can build a /proc and /sys knowledge database in a post 🙂

1. Scanning for LUNs on an attached FC host (or any other SCSI-compliant host).

echo "- - -" > /sys/class/scsi_host/hostX/scan

2. CPU hotplug.

# Set cpuX offline
echo "0" > /sys/devices/system/cpu/cpuX/online
# Set cpuX online
echo "1" > /sys/devices/system/cpu/cpuX/online

3. Enable dmesg timestamps.

echo Y > /sys/module/printk/parameters/time

4. Restore a removed file while it is still in use.
Let's suppose that you have a file open on fd YY by process XXX, and you need to recover that file after removing it.

cat /proc/XXX/fd/YY > /tmp/myfile_restored
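Trick 4 can be tried end-to-end in a single session; this self-contained demo (with hypothetical /tmp paths) uses the current shell itself as process XXX and fd 3 as YY:

```shell
# Create a file, keep it open on fd 3 of this shell, remove its name,
# then restore the contents through /proc.
echo "important data" > /tmp/demo_file
exec 3</tmp/demo_file              # open it on fd 3
rm /tmp/demo_file                  # the name is gone, the data is not
cat /proc/$$/fd/3 > /tmp/demo_restored
exec 3<&-                          # close the fd
cat /tmp/demo_restored
# prints: important data
```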

5. Get the IO operations for a process:

cat /proc/XXX/io

The syscr and syscw fields are the accumulated read and write IO operations that the process has performed while running.

6. Increase the size of the IO scheduler queue:
Let's suppose that you want to change the IO queue size on drive XXX.

echo "10000" > /sys/block/XXX/queue/nr_requests

7. Get the IO scheduler currently enabled for a specific device:
Let's suppose that you want to know the IO scheduler for device XXX.

cat /sys/block/XXX/queue/scheduler

8. Get the number of pdflush threads currently running:

cat /proc/sys/vm/nr_pdflush_threads

9. Set the percentage of memory at which the system starts flushing data to disk:

echo XX > /proc/sys/vm/dirty_background_ratio

10. Set the sleep time between pdflush wakeups (in centisecs):

echo XXX > /proc/sys/vm/dirty_writeback_centisecs

11. Set the time to live for data in the buffer; when it expires, the data is committed to disk (in centisecs):

echo XXX > /proc/sys/vm/dirty_expire_centisecs

12. Prevent a process from being killed by the OOM killer.
Let's suppose that we want to prevent the OOM killer from killing the process with PID XXX.

echo "-17" > /proc/XXX/oom_adj

Note that you cannot completely forbid the OOM killer from killing a process; this change only makes the process more resistant to being killed. If everything else fails, the process will die anyway.

Distributed tools

For the last few months I have needed to maintain a number of heterogeneous servers for my work, performing some usual actions like updating a config file, restarting a service, creating local users, etc.

For these purposes there are a lot of applications, like dsh (or the full csm), pysh, shmux and many others (you only need to search Google for the phrase "distributed shell"). Unfortunately for me, I wanted an easy-to-parse solution, because I have a big (really big) number of servers and I want simple cut/awk-based parsing; I also need to run some actions as other users (like root, for example) via sudo. Although many of the existing solutions offered me a subset of these features, I could not find a complete one. So I decided to create one 😀

You can find the code, and some packages, on the dtools development site. I have been using this solution in a production environment for months with excellent results, and you should feel free to use it too.

Of course, it's free (as in freedom) software, distributed under the MIT license.

Enjoy, and remember: feedback is welcome 😉