An overview of the book "Effective Python"

For some time I have been working on moving from beginner to intermediate Python programmer. One step in that direction is reading the book “Effective Python – 59 Specific Ways to Write Better Python” by Brett Slatkin. Finally I have managed to finish the book during the holidays!

 

The book belongs to the “Effective” series from Addison-Wesley, which was started by Scott Meyers, the author of “Effective C++” (1992). All books in the series cover a specific subject through a set of numbered items (sub-chapters). In the case of “Effective Python”, 59 specific topics are covered. I really like the concept since it makes it possible to read one or two items each day. Each item is independent, so there is no risk of losing the thread if you take a longer break, say a week.

The book has 8 broader chapters:
  1. Pythonic Thinking
  2. Functions
  3. Classes and Inheritance
  4. Metaclasses and Attributes
  5. Concurrency and Parallelism
  6. Built-In Modules
  7. Collaboration
  8. Production

If you are an experienced programmer but new to Python, chapters 1, 2 and 3 are a really good introduction. The concepts and patterns are the same as in other programming languages, but the chapters show how to apply them in Python. For those of you who have some experience of Python, it is no surprise that the Python examples are shorter and easier to read and understand compared to other languages. Good advice is given together with examples regarding slicing, list comprehensions, arguments, return values, and iterators. Reading them in this condensed format saves a lot of time compared to acquiring the knowledge by reading discussions on Stack Overflow.
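
As a toy example (not taken from the book) of the kind of idiom these chapters cover, a slice combined with a list comprehension replaces several lines of loop code:

# Toy example: the five smallest positive readings, squared.
readings = [7, -2, 3, 0, 5, 9, 1]
positives = sorted(r for r in readings if r > 0)
smallest_squared = [x * x for x in positives[:5]]
print(smallest_squared)  # [1, 9, 25, 49, 81]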

Once you have understood and tried the concepts of chapters 1-3, chapters 7 and 8 are the most important. They deal with the issues that always appear when a program starts to scale up, the team grows, and the software is installed in a production environment. In chapter 7 the items give advice on docstrings, how to arrange packages and modules, using a root exception, and isolating dependencies with virtualenv. Chapter 8 is more about unit testing and interactive debugging, and how to configure for multiple different environments. Experienced programmers will recognize many items in this last chapter.

If you have gained experience and developed some programs in Python, chapters 4, 5 and 6 fit very well. Chapter 5 is about processes and threads, and explains well-known concepts like using locks to prevent races, coordinating threads with queues, and how to put jobs on different processor cores in Python. Chapter 6 gives a short introduction to some of the most important built-in modules. The last chapter to mention is 4, which covers some Python-specific concepts like metaclasses and attributes. Metaclasses, attributes and decorators are interesting concepts which give new dynamic possibilities, such as expanding classes at runtime or enforcing checks every time a method of a class is called. This might be interesting if you are writing generic frameworks (data binding) or mocks during unit testing.

This is definitely a book that will increase the knowledge of a beginner to intermediate Python programmer! Many of the items are easy to absorb and start using right away in your daily work; other items (or chapters) are something you might return to once you have reached the level where they make sense. Keep a copy of the book on your desk to look into once in a while, or why not start each day with a new item?


Supervising a Telldus Daemon

It is rare but sometimes it happens – the telldus daemon responsible for communicating with the Tellstick Duo hardware has disappeared from the process list, crashed or exited silently. This is not acceptable on an embedded system where processes shall have 100% uptime. The preferred way is to have some sort of supervisor which is responsible for restarting the process if it disappears.

Who is the natural supervisor? Well, the process that started the process to be supervised. In Linux the first process started is either the old init or the newer systemd, which then starts all the other processes according to dependencies (systemd) or runlevels and sequence numbers (init/rc). There are a number of possible solutions for supervising using either init or systemd. A quick search on Google gives a couple of solutions; others probably exist as well:

  • daemontools – a supervisor which monitors processes, possible to use with init.
  • monit – a supervisor which monitors processes but also has a wealth of other monitoring possibilities. Seems more targeted at server systems than embedded.
  • built-in supervision in systemd – systemd has a simple supervisor built in which restarts processes.

The web solution and the telldus daemon nowadays run on Raspbian ‘Jessie’. Raspbian ‘Jessie’ has systemd as the first process that starts all other processes. The existing solution is built on init.d bash scripts with links to the different runlevels (rcX). I decided to go with systemd as the supervisor, since it was time to transfer the init.d scripts to systemd services and only simple supervision was needed anyway.

In systemd you don’t write a script; instead you define a service having different properties and dependencies. This service configuration is put into /etc/systemd/system/<name>.service. The syntax is the same as for .INI files on Windows. See Service Configuration for details.

Telldus daemon as a systemd service
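
A unit file along the following lines is placed in /etc/systemd/system/telldusd.service. This is a sketch based on the directives explained below; the exact ExecStart path may differ between systems:

[Unit]
Description=Telldus daemon for the Tellstick Duo
After=syslog.target

[Service]
Type=forking
# The path to the daemon binary is an assumption - check where your
# distribution installs telldusd.
ExecStart=/usr/sbin/telldusd
PIDFile=/var/run/telldusd.pid
Restart=always

[Install]
WantedBy=multi-user.target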

Explanations:

Type=forking

The telldus daemon is a forking daemon, i.e. it forks off from the starting process so that it is automatically attached to the process with pid 1 (systemd in this case), making it independent of the terminal (tty).

PIDFile=/var/run/telldusd.pid

The PID number is saved into this file when the fork has been made, making it possible for systemd to find out the id of the main process.

Restart=always

Always restart the process, regardless of whether it was aborted, crashed with an exception, was killed by a signal, or exited with a return code.

Web solution (CherryPy) as a systemd service
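
As above, a sketch of a unit file in /etc/systemd/system/cherrytelldus.service, based on the directives explained below; the ExecStart path to the CherryPy application is an assumption:

[Unit]
Description=Tellstick web solution (CherryPy)
After=network.target

[Service]
Type=simple
User=root
Environment="PYTHONPATH=/usr/local/bin/setup"
# The path to the web application script is an assumption.
ExecStart=/usr/bin/python /usr/local/bin/web/webserver.py
Restart=always

[Install]
WantedBy=multi-user.target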

Explanations:

After=network.target

Wait for the network to be brought up before starting.

Type=simple

An ordinary process, i.e. it does not return after being started since it keeps listening for network connections.

User=root

Start the process as root; the webserver switches to the telldus user as soon as possible.

Environment="PYTHONPATH=/usr/local/bin/setup"

Sets and exports the environment variable PYTHONPATH to the started process.

To make the services start at boot, call:

sudo systemctl enable telldusd
sudo systemctl enable cherrytelldus

Both services were tried out by killing the processes abruptly using ‘kill -KILL <pid>’ and verifying that they were restarted.

A Tellstick Sensor that Twitters

There are Tellstick sensors measuring temperature and humidity, not with high precision but still good enough for monitoring the house while away. The sensors can be connected to a Tellstick Net from Telldus, which sends all the measurement values up to the cloud (Telldus Live). However, it is always a hassle to remember how to log in to Telldus Live just to check the temperatures of the sensors.

It would be nice if one could read the temperature from some app already in use – Twitter fits very well since it allows short text messages and can be read both from an app and from a PC (no login required).

Telldus provides an API to Telldus Live at api.telldus.com. The API is a REST interface accessed using GET requests, and the answers are in simple JSON format. It is definitely possible to periodically fetch the temperature value of a sensor. The next step is to push the temperature value up to a Twitter account. The Twitter REST API was released as early as 2006 and has to be considered mature – see api.twitter.com. By using this API it is possible to automatically tweet status messages, which might for example be used by IoT devices.

The Telldus API and the Twitter API both use the OAuth (Open Authorization) standard to authorize clients. This standard is used by many companies, including Facebook and Google. It makes it possible for service vendors to give limited access to certain parts of their API to registered clients. The registered clients might be applications, mobile apps, or back-end systems. OAuth requires you to generate two pairs of keys, where one pair authorizes the user or application (consumer key/consumer secret) and the other pair authorizes the service request (access token/access secret). When calling one of the REST APIs, the public key is put directly into the HTTP Authorization header, and the secret key is used together with the other parameters to create a hash signature that also goes into the header.

A search on PyPI (the Python Package Index) shows that there is already a Python library, ‘tellive-py’, for connecting to Telldus Live. The library contains a client class which takes care of constructing the REST URL and the HTTP authorization header. However, you still have to know a bit about how the REST APIs work to be able to pass the correct arguments. Twitter, on the other hand, has several independent libraries for calling its REST APIs. Among them is python-twitter, which seems Pythonic and stable.

Now we have everything to put together a script which uses both the libraries ‘tellive-py’ and ‘python-twitter’:

The class ‘OAuthCredentials’ contains the two key pairs (consumer and access). The function ‘get_temperature’ uses the ‘tellive’ library by listing all sensors and finding the value of the requested sensor. The function ‘send_to_twitter’ uses the ‘python-twitter’ library to post a string containing the temperature to the Twitter account belonging to the specified consumer keys. Finally, decide how often the script should run and add a crontab entry.
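
As an illustration, a minimal sketch of such a script is shown below. It keeps the class and function names from the description above, but the sensor name, the key values and the JSON field names are placeholders/assumptions, and the Telldus call is made directly against the REST API with requests and requests_oauthlib instead of going through tellive-py's client class:

#!/usr/bin/env python
# Sketch: tweet the temperature of one Telldus Live sensor.
# Requires: requests, requests_oauthlib, python-twitter
import requests
from requests_oauthlib import OAuth1
import twitter


class OAuthCredentials:
    """Holds the two OAuth key pairs (consumer and access)."""
    def __init__(self, consumer_key, consumer_secret, access_token, access_secret):
        self.consumer_key = consumer_key
        self.consumer_secret = consumer_secret
        self.access_token = access_token
        self.access_secret = access_secret


def get_temperature(creds, sensor_name):
    """Fetch the temperature of the named sensor from Telldus Live."""
    auth = OAuth1(creds.consumer_key, creds.consumer_secret,
                  creds.access_token, creds.access_secret)
    # Endpoints and field names are assumptions based on the public API docs.
    sensors = requests.get('https://api.telldus.com/json/sensors/list',
                           auth=auth).json()['sensor']
    sensor_id = next(s['id'] for s in sensors if s['name'] == sensor_name)
    info = requests.get('https://api.telldus.com/json/sensor/info',
                        params={'id': sensor_id}, auth=auth).json()
    return next(d['value'] for d in info['data'] if d['name'] == 'temp')


def send_to_twitter(creds, message):
    """Post the message to the Twitter account tied to the keys."""
    api = twitter.Api(consumer_key=creds.consumer_key,
                      consumer_secret=creds.consumer_secret,
                      access_token_key=creds.access_token,
                      access_token_secret=creds.access_secret)
    api.PostUpdate(message)


if __name__ == '__main__':
    telldus_keys = OAuthCredentials('...', '...', '...', '...')
    twitter_keys = OAuthCredentials('...', '...', '...', '...')
    temperature = get_temperature(telldus_keys, 'outdoor')
    send_to_twitter(twitter_keys, 'Outdoor temperature: {} C'.format(temperature))

A crontab entry then runs the script periodically, for example every 30 minutes (the script path is just an example):

*/30 * * * * python /usr/local/bin/tweet_temperature.py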

Auto Mounting an Old NAS in Linux

My Buffalo NAS (Linkstation Live) is quite old but still doing its job. Recently I switched to a new version of Linux Mint on one of my laptops. I have a need for copying large files from my laptop to the NAS, mostly captured movies since the laptop has a FireWire port. Nearly all Linux distributions provide SMB/CIFS support; in this case I needed an SMB/CIFS client to be able to mount the NAS, since the NAS exports an SMB/CIFS service interface.

The first problem is that the Buffalo NAS goes into sleep mode very quickly, and waking it up from sleep mode takes a couple of minutes. The Windows software from Buffalo keeps the NAS alive by using Wake-on-LAN, i.e. sending ‘magic’ packets that keep it awake.

The NAS needs Wake-on-LAN packets every 30 seconds, so I made a short Python script that sends them regularly in a while loop:
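
A minimal sketch of such a script is shown below. It builds and broadcasts the magic packet itself over UDP instead of calling an external wake-on-lan tool, and the MAC address is of course a placeholder:

#!/usr/bin/env python
# Sketch: keep the NAS awake by broadcasting a Wake-on-LAN magic packet
# every 30 seconds. Replace the MAC address with that of your NAS.
import socket
import time

NAS_MAC = '00:11:22:33:44:55'


def send_magic_packet(mac):
    # A magic packet is 6 x 0xFF followed by the MAC address repeated 16 times.
    payload = bytearray.fromhex('FF' * 6 + mac.replace(':', '') * 16)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(payload, ('255.255.255.255', 9))
    sock.close()


while True:
    send_magic_packet(NAS_MAC)
    time.sleep(30)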

A call to the script was then added to /etc/rc.local which is run automatically during startup. The line in rc.local looks like:

python /usr/local/bin/wakeupnas.py &

The last character ‘&’ is important since it makes the script run in the background as a separate process. The next step was to do the actual mounting into the filesystem. First I tried it using the normal mount command:

> sudo mount -t cifs //nas/dir /mylaptop/mountpoint

This of course worked, so I moved on to make the mount permanent by configuring it in /etc/fstab. The next time I started my laptop, I went to the mountpoint happily and expected all the files from the NAS to appear – but nada, nothing! The problem was that my laptop tries to do the mounting before the NAS is ready (the NAS takes roughly 3 minutes to boot). I needed another solution and turned to AutoFS, which provides automounting. With automounting, the directory is mounted first when you try to access it. A nice side effect, quoting help.ubuntu.com: “automounting NFS/Samba shares conserves bandwidth and offers better overall performance compared to static mounts via fstab”.

First of all I checked that I had AutoFS installed, including the kernel support, but I suppose this is the default on Linux Mint. Next I performed the following two steps:
1. Add user credentials (i.e. username and password) in a file and put it into /etc/creds/. I also had to add a static hostname in my router to make it work.
2. Add the following row to /etc/auto.master:

 /cifs /etc/auto.smb --timeout 300

In short, this line means: “if a directory under /cifs/ is accessed, run the script auto.smb, which will look into /etc/creds/ and automount all exported directories of the NAS”.
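
For reference, the credentials file in /etc/creds/ is a plain text file named after the NAS host (e.g. /etc/creds/nas); the values below are of course placeholders:

username=myuser
password=mypassword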

I made a quick reboot of my laptop and changed directory into /cifs/. VOILA! There they were, all the directories of the NAS!

Arduino PIR Sensor for the Tellstick

Once the switches were installed for the Tellstick Duo and the Raspberry Pi, the next thing to look into was sensors. However, the ones found were all closed, i.e. not possible to configure individually. This opened up yet another Do It Yourself (DIY) project: making a 433 MHz sensor for the Tellstick Duo. The sensor chosen was a Passive Infrared (PIR) sensor detecting motion, for example a person entering the room.

Components
1 Arduino board
1 breadboard and/or sparkfun extension
1 PIR sensor (in this case from Parallax)
1 transmitter 433 MHz

The PIR sensor and the 433 MHz transmitter can be bought from Seeed Studio or eBay.

Wired

Here is everything wired up. At the bottom is an old Arduino Duemilanove, and on top of it a SparkFun prototype shield together with a mini breadboard. The data pin of the PIR sensor is connected to pin D3 on the Arduino, and the data pin of the 433 MHz transmitter is connected to pin D5. The PIR sensor and the transmitter are both fed from the +5V pin of the Arduino.

Code

The code depends on two external libraries: MsTimer2 and NewRemoteTransmitter.

MsTimer2 – During early experiments with the PIR sensor, a lot of false motion detections occurred. A timer is used to ensure that the signal is stable for a certain time.

NewRemoteTransmitter – A library implementing the arctech protocol used by switches like Nexa. The protocol uses Manchester encoding, where each ‘0’ is followed by a ‘1’ and vice versa. This makes the signal less prone to disturbance.

The code in the Arduino (see sketch below) binds the data pin of the PIR to an interrupt on signal change (line 89). The interrupt sets a timer (line 76), and if the signal is still steady when the timer fires (line 61), a transmission is broadcast containing the id and unit of the transmitter (line 57).

Tellstick

The id and unit number are statically defined in the code on lines 25 and 26 (the id being 14433000), and in the same manner as all other switches in Tellstick, this DIY switch has to be defined in /etc/tellstick.conf using that id and unit:
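
A device entry along the following lines does the trick (a sketch: the device id, name, model and unit values are assumptions, while the house code is the id defined in the Arduino sketch):

device {
  id = 4
  name = "PIR hallway"
  protocol = "arctech"
  model = "selflearning-switch"
  parameters {
    house = "14433000"
    unit = "1"
  }
}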

CanaryMod as Virtual Machine in Azure

Introduction
CanaryMod is a wrapper around Minecraft which provides an extension framework. The framework provides a stable API as well as extended permission and group management. The extensions in Minecraft/CanaryMod are called mods, and my reason for using CanaryMod was to make use of two mods called ScriptCraft and RaspberryJuice. These two mods make it easy to run scripts or make your own mods in the supported scripting language. ScriptCraft supports JavaScript and RaspberryJuice supports Python, using the same API as the Minecraft Pi edition.

Creating the Server in Azure
Azure is Microsoft's cloud platform containing different services for running applications and collecting/visualizing data. In this case we will only use it as a host running our virtual machine. In Azure it is possible to choose among different OS templates published by companies or the open source community. My first thought was to use an Ubuntu OS template and then install Java, open up ports, create start/stop scripts etc. However, since Microsoft bought Mojang (the creator of Minecraft), they provide a Minecraft server themselves running on Ubuntu (quite a change in business strategy nowadays – signed Satya Nadella). So on second thought, let's use this preconfigured server and just reconfigure it for CanaryMod to save time!

Step by step creating the server:

First create or use an existing account at portal.azure.com.

Search for the Minecraft Server published by Microsoft:

Create a virtual machine instance in the Azure cloud containing the Minecraft server. Choose a hostname (hint: use some uncommon name, otherwise you will end up with a name containing your name and some long id provided by Microsoft). Choose a username and password for your user at the server (hint: choose an uncommon username to make it harder to guess). We will use ssh later on to connect to the server, and then we will make use of the username/password.

The final step is to press Create and wait for 5-10 minutes while the server is created:

Finally the server is created and running:

In Microsoft Azure it is possible to inspect the server and remap ports (endpoints). Let's do that for our virtual machine:

As expected there are two ports open, Minecraft and ssh. Ssh is using the default port, and as a security step we will change it to a high private port number, e.g. 59592 (in the range 49152-65535):

Try it out using ssh (if on Windows, use the Chocolatey package manager to install ssh – see this former blog post).

> ssh -p <port> <username>@<hostname>.cloudapp.net

Now you are in your Ubuntu virtual machine – verify that the Minecraft server is running by running:

> ps -efw | grep java

You should see something like the following, i.e. the running process:

/usr/bin/java -Xms1024m -Xmx1024m -jar /srv/minecraft_server/minecraft_server.1.8.jar

As a last security measure, make sure that the system is updated (sudo runs the commands temporarily as root):

> sudo apt-get update
> sudo apt-get upgrade

Installing CanaryMod
Now we are ready to begin the installation of CanaryMod!
First of all, stop the process running the official Minecraft server by calling systemctl:

> sudo systemctl stop minecraft-server

Create the CanaryMod directory side by side with the official Minecraft directory, and make the minecraft user the owner of the new directory:

> cd /srv
> sudo mkdir canarymod_server
> sudo chown minecraft canarymod_server
> cd canarymod_server

Download CanaryMod and run it for the first time to create all necessary files:

> sudo wget https://canarymod.net/releases/CanaryMod-1.8.0-1.2.0-RC1.jar
> sudo chown minecraft CanaryMod-1.8.0-1.2.0-RC1.jar
> sudo -u minecraft java -jar CanaryMod-1.8.0-1.2.0-RC1.jar
....logging from canarymod....
> > shutdown
....logging from canarymod - save/stop server....

Confirm the End-User License Agreement (EULA) by setting it to true in the nano text editor:

> sudo -u minecraft nano eula.txt

Restart the CanaryMod process and make your Minecraft user an operator in the command console (use the uuid – get it by using this link). The command console is entered right after the server has started; console commands are preceded by ‘> >’ below.

> sudo -u minecraft java -jar CanaryMod-1.8.0-1.2.0-RC1.jar
....logging from canarymod....
> > /op <uuid>
> > shutdown
....logging from canarymod - save/stop server....

The next step is to change the systemd service that is run at startup/shutdown of the virtual machine, so that it starts CanaryMod instead of the official Minecraft server. The service is called minecraft-server.service, and the only things that need to be changed are the directory path and the name of the .jar file.

> cd /etc/systemd/system
> sudo nano minecraft-server.service
....
change all /srv/minecraft_server to /srv/canarymod_server
change the .jar to CanaryMod-1.8.0-1.2.0-RC1.jar
remove Alias in [Install]
....
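
After the edits, the changed lines should end up looking something like this (a sketch based on the java command seen earlier; the exact set of lines depends on Microsoft's original service file):

WorkingDirectory=/srv/canarymod_server
ExecStart=/usr/bin/java -Xms1024m -Xmx1024m -jar /srv/canarymod_server/CanaryMod-1.8.0-1.2.0-RC1.jar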

Once the service file is changed, it has to be verified that it does what it is supposed to do.
First make sure it is started each time the server boots by enabling it:

> sudo systemctl enable minecraft-server
> sudo shutdown -r now

Reconnect once again and verify that the process is running – see the last part of ‘Creating the Server in Azure’.

Installing ScriptCraft

One of the reasons for choosing CanaryMod was to be able to write mods in JavaScript, both as an aid in constructing large buildings and as an introduction to programming. ScriptCraft is installed as a CanaryMod plugin in just a few steps:

> cd /srv/canarymod_server/plugins
> sudo wget http://scriptcraftjs.org/download/latest/scriptcraft-3.1.12/scriptcraft.jar
> sudo chown minecraft scriptcraft.jar

Verify that it is installed correctly by trying the simplest possible JavaScript code in the console window:

> sudo systemctl stop minecraft-server
> sudo -u minecraft java -jar CanaryMod-1.8.0-1.2.0-RC1.jar
....logging from canarymod....
> > js 1+1
.... canarymod is logging '2' as a result ....

Trying it all out
Ready for prime time? Connecting to the CanaryMod server in Azure….

 

Implementing a Web Solution for Tellstick Duo

A web application for the Tellstick Duo targeting our household needs the following functions:

  • control individual switches (turn on/off lamps and the printer)
  • change schema mode (at which clock hours lamps will automatically turn on/off)
  • be usable on both Android and Windows Phone with different screen sizes

It is all about turning switches on/off or changing schema modes from a phone – best realized by using distinct and fairly large buttons. A status row is also needed to acknowledge an operation, and the GUI should look “app-like”. Here is the end result:

In the blog post before this one, a walkthrough was made of the libraries needed to make a Python web solution: CherryPy and jinja2. A little knowledge of HTML/JavaScript and the JavaScript library jQuery Mobile is also needed. Interaction with the Tellstick Duo and setting up crontab is done using the already developed commands/libraries described in the former blog posts: Turn.py – Using Python to Control the Telldus and Setup.py – Configure Switches Using Crontab.

Below is an overview of the system design of the web application:


 html requests    +-------------------+     +--------------+
 REST API calls   |                   +---> |turn.py       |
+---------------> |    Cherrypy       |     +--------------+
                  |    Application    |
<-----------------+    python         |     +--------------+
   html           +----------+--------+---> |setup.py      |
                             ^              +--------------+
                             |   jinja2
                             |
                         +--------+
                         | +----+ |
                         |        |
                         | +----+ |
                         |        |
                         | +----+ |
                         +--------+

                       template page
                     html/jquery mobile/js

The CherryPy application instantiates the webserver, which listens for connections on port 80. If a request is made for the main page, the template is read from flash and parsed by jinja2; in this case the schema modes and the names of all the switches are inserted into the page by jinja2. The webserver responds with the result from jinja2. Since jQuery Mobile is used in the main page, each button is connected to a client-side JavaScript callback. Inside the callback a REST API call is made asynchronously to the webserver. The REST call arrives at the webserver, which delegates it to a Python function that either calls turn.py (turns a switch on/off) or setup.py (changes the schema mode).
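
A stripped-down sketch of how such a CherryPy application can be wired up is shown below. The template name, port, and the calls into turn.py and setup.py are simplified placeholders; the real application exposes more methods:

#!/usr/bin/env python
# Sketch: CherryPy serves the jinja2-rendered main page and two small
# REST-style methods that delegate to turn.py / setup.py.
import cherrypy
from jinja2 import Environment, FileSystemLoader

import turn    # existing command module: switches lamps on/off
import setup   # existing command module: changes schema mode

env = Environment(loader=FileSystemLoader('templates'))


class TellstickWeb:
    @cherrypy.expose
    def index(self):
        # Render the jQuery Mobile template with switch names and schema modes.
        template = env.get_template('main.html')
        return template.render(switches=['window lamp', 'printer'],
                               modes=['home', 'away'])

    @cherrypy.expose
    def switch(self, name, state):
        # Called asynchronously from the JavaScript button callbacks.
        turn.switch(name, state == 'on')   # assumed helper in turn.py
        return 'OK'

    @cherrypy.expose
    def mode(self, name):
        setup.set_mode(name)               # assumed helper in setup.py
        return 'OK'


if __name__ == '__main__':
    cherrypy.config.update({'server.socket_host': '0.0.0.0',
                            'server.socket_port': 80})
    cherrypy.quickstart(TellstickWeb())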