
My Workflow with Chef

Something I wish somebody had published before I went and discovered it myself. I started with a single repository for everything, but quickly realized that maintaining such a repository would be a pain. I then split the community cookbooks and my own cookbooks into two directories within the repository, so I could update the community versions with knife cookbook install.

As soon as multiple projects share some cookbooks, this approach also becomes hard to manage. After a lot of reading and exploring, I ended up with the following.

One Repository per cookbook

This approach, as far as I can tell, evolved from the use of Vagrant and Berkshelf in recent months. If you start with these tools, it makes sense to manage every cookbook in its own repository. That way you can easily change and test things without having to worry about an entire monolithic repository.

For personal cookbooks I have the habit of prefixing the name, so tools like Berkshelf and knife won't suddenly pull versions from the community repositories just because the name is identical.

Use a ‘Meta’ cookbook for roles and configuration

Beating a dead horse here, but still: this is something I wish the Chef Server would do better. Because of the lack of versioning for all the configuration stored on the Chef Server (environments, roles, etc.), I mostly end up pulling all configuration into meta cookbooks that just pull together all required cookbooks and apply configuration through attributes.

For example, you would create a cookbook 'webserver' that includes Apache and applies the configuration in 'attributes/apache.rb'.

With this, you can actually test configuration changes before pushing them live. Adding new nodes is then just a matter of setting one entry in the node's run list to recipe[webserver].
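
As a sketch, such a meta cookbook might look like the following (the cookbook name and attribute keys are made up for illustration):

```ruby
# metadata.rb of the hypothetical 'webserver' meta cookbook
name    'webserver'
version '0.1.0'
depends 'apache2'

# attributes/apache.rb -- configuration applied through attributes
default['apache']['keepalive'] = 'On'
default['apache']['timeout']   = 30

# recipes/default.rb -- just pulls in the community cookbook
include_recipe 'apache2'
```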

Berkshelf for Version Management

Berkshelf is a nice tool, albeit somewhat buggy at times. It helps with version management for a cookbook. Make sure you don't use both chef_api :config and site :opscode in one Berksfile: choose your "master" source, and configure individual special cookbooks to pull from a different source. Using both sources lets Berkshelf choose where to get a cookbook from, which I have found to be very unpredictable.

Also, don't use version constraints in the metadata.rb. Berkshelf 2 can't use them, and in my experience the Chef Server tends to lock up trying to resolve them. Use the Berksfile for constraints, and then berks apply them to an environment on the Chef Server that is used by your destination nodes.
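
A Berksfile following this advice could look roughly like this (the cookbook names, versions, and git URL are placeholders):

```ruby
# Berksfile -- one 'master' source, individual exceptions per cookbook
site :opscode

cookbook 'apache2', '~> 1.8.0'
cookbook 'mysql',   '= 3.0.12'
# a personal cookbook pulled from its own repository instead
cookbook 'cb-webserver', git: 'http://<git-server>/git/cb-webserver.git'
```

After a berks upload, something like berks apply production then pins these versions to the 'production' environment on the Chef Server.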

And don't add the Berksfile.lock to the repo: after writing it once, Berkshelf 2 can't read it anymore, so you'll end up constantly deleting it. Occasionally deleting ~/.berkshelf/cookbooks also seems like a good idea, to prevent Berkshelf from using it as a cache.

I hope Berkshelf 3 is released soon and fixes the oddities and bugs that currently exist. I haven't been brave enough to use the beta yet.

Vagrant for Testing and Development

Use Vagrant. Use one of the provisionerless boxes from opscode/bento, and add the Omnibus and Berkshelf plugins to Vagrant. That way you have a 'clean' base box with a Chef version of your choice.

The Vagrant box never talks to the Chef Server, so if you need data bags, use chef-solo-search.
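
Put together, a Vagrantfile for this could look roughly like the following sketch (the box name and run list are assumptions, and it presumes the vagrant-omnibus and vagrant-berkshelf plugins are installed):

```ruby
# Vagrantfile -- provisionerless bento box, Chef added via the Omnibus plugin
Vagrant.configure('2') do |config|
  config.vm.box = 'opscode-ubuntu-12.04'

  config.omnibus.chef_version = :latest  # vagrant-omnibus installs Chef
  config.berkshelf.enabled    = true     # vagrant-berkshelf resolves the Berksfile

  config.vm.provision :chef_solo do |chef|
    chef.run_list = ['recipe[webserver]']
  end
end
```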

Foodcritic / knife cookbook test / Unit Test

As a bare minimum, use foodcritic and knife cookbook test to verify your code. I use chef-minitest for unit testing. You really want to use these facilities; you'll thank yourself the first time you need to change something in a cookbook you haven't touched in a few weeks.

Tie everything into Jenkins

At work I have a Jenkins instance that watches the cookbook repositories, runs foodcritic and knife cookbook test on them, and finally vagrant destroy -f && vagrant up. So before I berks upload, I can verify my changes on Jenkins against a clean installation.

The Steps

This is what the workflow entails at the end:

  • Bump the version in metadata.rb
  • Work on the cookbook
  • Write unit tests
  • vagrant provision to test changes locally; rinse, repeat
  • git commit to the central repository, wait for Jenkins to finish the entire run
  • berks upload to upload the cookbook
  • berks apply to send the cookbook into the wild

A few Chef Practices

Here is some advice on deploying Chef in your environment that may help you avoid some of its pitfalls.

Try not to Fork Community Recipes

I did it, and it took a lot of work to revert to the community versions. Once you fork a cookbook by changing the community version directly, merging upstream changes becomes your task. Unless you do a lot of merging, you will disconnect from the community version rather quickly, and the entire maintenance burden will be yours.

A better way is to write wrapper cookbooks that just add to the community version. To do so, create a new cookbook, add a depends 'nginx' (for example) to the metadata.rb, and create a recipe that starts with include_recipe 'nginx', after which you can proceed to make your changes.
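
A sketch of such a wrapper around the community nginx cookbook (the template resource is just an example of a local addition):

```ruby
# metadata.rb of the wrapper cookbook
name    'cb-nginx'
version '0.1.0'
depends 'nginx'

# recipes/default.rb -- include the community recipe first, then customize
include_recipe 'nginx'

template '/etc/nginx/conf.d/local.conf' do
  source   'local.conf.erb'
  owner    'root'
  group    'root'
  mode     00644
  notifies :reload, 'service[nginx]'
end
```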

Some community cookbooks, though, are extremely weak and don't really warrant a wrapper cookbook.

Another advantage is that you can actually upstream fixes you make to a community cookbook without having to worry about the local changes that are specific to your environment.

Do not use roles for run_lists

Today I use Chef roles and environments just for configuration. Roles and environments have one big flaw: they are not versioned in Chef. So if you add a new cookbook to the run list of your "Webserver" role to try it on a development server, the production environment will receive that change too.

Create another cookbook that uses depends in the metadata.rb and include_recipe to build your run lists. That way you can version the run-list cookbook and don't have to worry about changes spilling into environments you didn't intend. These run-list cookbooks can also be used to set defaults, which are then versionable as well.
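
A sketch of such a run-list cookbook; the version field is the point, since bumping it is what keeps other environments untouched (the names are made up):

```ruby
# metadata.rb -- the version here is exactly what roles and environments lack
name    'role-webserver'
version '1.2.0'
depends 'apache2'
depends 'php'

# recipes/default.rb -- the run list, plus versionable defaults
node.default['apache']['keepalive'] = 'On'

include_recipe 'apache2'
include_recipe 'php'
```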

Test your Work

This is probably a somewhat alien concept to a lot of the more traditional admin folk, but once you start with Chef, your infrastructure is code. And code has to be tested.

Create a development environment that suits your needs, and use chef-minitest to verify your results. Do not test by deploying straight to your production environment.
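
A minimal sketch of what such a test can look like in plain Minitest (chef-minitest follows the same conventions; the path checked here is just a stand-in for state your recipe actually manages):

```ruby
require 'minitest/autorun'

# Sketch of an infrastructure unit test: assert on the state the
# cookbook is supposed to produce. '/tmp' stands in for a path or
# service your recipe actually manages.
class TestBaseSystem < Minitest::Test
  def test_managed_directory_exists
    assert File.directory?('/tmp'), 'expected managed directory to exist'
  end
end
```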

The better your test coverage becomes, the less scary changes to your infrastructure are. If you write a large cookbook today and skip the tests, you will be scared to touch the thing in half a year, once you're no longer familiar with the code.

Other minor tips

  • If you don't already use Berkshelf, keep your cookbooks in a separate directory in your repo, and set the cookbook_path in your knife.rb
  • Where applicable, use a dedicated Git repository for each of your cookbooks
  • Put a universal .chef/knife.rb into your repository
  • Don't put site-specific configuration into your cookbooks. Use the meta cookbooks, or roles and environments
  • shef -z is a good way to work interactively with a Chef Server
  • Don't run the chef-client as a daemon; it is a memory hog and grows over time. I prefer the cron variant.
  • If at all possible, consider different distributions in your recipes. Try not to specialize on one. Quite a few community cookbooks are impossible to use on RHEL variants because they were developed on Ubuntu.

Use Chef with Kickstart / Cobbler

If a full-stack deployment of Chef to manage your infrastructure seems a little nerve-wracking to start with, there are ways to incorporate it into your current workflow in a less invasive manner.

I wanted a Kickstart environment capable of deploying a number of different distributions. To get the system off the ground, I decided to go with Cobbler, as the alternative solutions didn't seem mature enough, or were too distribution-specific at the time.

The problem then is how to configure the different distributions so the resulting installations have a common setup and feel. Cobbler has some mechanisms for this, but I decided, of course, to go with Chef.

Getting Chef-Solo onto a fresh install

I've tried various ways to get Chef onto a system in the past, from distribution-supplied packages to gem install. The issue is that you end up with various versions of Chef carrying various distribution-specific bugs (Ubuntu's random Ruby segfaults, for example).

Recently, Opscode started building a full-stack package of Chef called "Omnibus Chef". These packages come with Ruby and everything else required for Chef to run.

In the Cobbler configuration, there is a snippet that looks like this:

# Install Omnibus Chef
curl -L https://<omnibus-install-url> | bash

# Create Chef Solo config
mkdir -p /etc/chef/
cat <<EOBM > /etc/chef/solo.rb
file_cache_path "/var/chef-solo/cache"
cookbook_path ["/var/chef-solo/cookbooks", "/var/chef-solo/site-cookbooks"]
role_path "/var/chef-solo/roles"
data_bag_path "/var/chef-solo/data_bags"
EOBM

# Clone Chef cookbooks for chef-solo
rm -rf /var/chef-solo
/usr/bin/git clone http://<git-server>/git/chef.git /var/chef-solo

# chef-solo needs the fqdn to be set properly,
# something that can't be guaranteed during install
/bin/hostname localhost

# Run chef-solo
/opt/chef/bin/chef-solo \
    -o 'recipe[acme::cobbler-install]' \
    -c /etc/chef/solo.rb \
    -L /var/log/chef-client.log

This way you hand over control of the system's configuration to Chef as early as possible, and don't have to shell-script or Cobbler-template for each different distribution.

Testing the whole thing

To test the entire stack from Cobbler to Chef, I've built a script that uses Cobbler's XML-RPC interface to switch distributions after the Chef minitests have finished successfully. A little rc.local script tests the cookbooks and, on success, switches the distribution, scrubs the disk, and reboots. On failure, the system just stops and waits for somebody to fix the cookbooks or tests.


What is Chef, and what is the big deal

In Marketing words:

Chef is an open-source systems integration framework built specifically for automating the cloud. No matter how complex the realities of your business, Chef makes it easy to deploy servers and scale applications throughout your entire infrastructure. Because it combines the fundamental elements of configuration management and service oriented architectures with the full power of Ruby, Chef makes it easy to create an elegant, fully automated infrastructure.

Infrastructure automation is not a new thing. CFEngine, for example, has existed for years, though it never had the impact that Chef or Puppet have.

For me, the big deal these days is primarily the ability to create a reproducible environment. Yes, I am lazy, and yes, I like to automate everything I can. But the fact that I can rely on my systems to behave identically after every installation is, for me personally, the biggest deal.

I'll try to cover some more Chef-related topics here in the future.

Why Chef

I guess this boils down largely to personal preference. The features that got me into Chef over Puppet are as follows.

The DSL of Chef just looks nice to me

A statement in Chef to deploy a template looks like this:

template "#{node[:phpfpm][:pooldir]}/claus.conf" do
    source "fpm-claus.conf.erb"
    mode 00644
    owner "root"
    group "root"
    notifies :restart, "service[#{node[:phpfpm][:service]}]"
end

While in Puppet it looks like:

file {"/usr/local/bin/":
    mode    => "664",
    owner   => "root",
    group   => "root",
    content => template("jvm/options.erb"),
    notify  => Service[apache2],
}

As I said: personal preference.

The ability to seamlessly switch between Chef DSL and Ruby

Chef's DSL is an extension of Ruby, so while writing recipes you can rather seamlessly incorporate Ruby code.

For example, a loop to install a bunch of packages:

%w{spamassassin procmail razor fetchmail python-spf}.each do |pkg|
    package pkg do
        action :install
    end
end
Interaction between Nodes

This is among the primary reasons I prefer Chef. A good example to illustrate it is the Nagios cookbook: the server recipe can search for all nodes and create a configuration for them, while the client cookbook finds all servers and allows access from them.

Another example could be a load balancer cookbook that uses a 'role' search across all nodes to identify its web servers and creates the configuration accordingly:

search(:node, "role:web-fe-group-a") do |r|
    # Configure LB
end

Data Bags

A data bag is a collection of JSON objects stored on the Chef Server that can be searched and used in recipes.

These can be used to create users, for example. A data bag item for a user could look like this:

{
    "comment": "Example User",
    "groups": "users",
    "gid": 1041,
    "id": "example",
    "shell": "/bin/false",
    "uid": 4131
}

To use this in a recipe, you would do something like:

search(:users, 'groups:users') do |u|
    user u['id'] do
        uid u['uid']
        gid u['gid']
        shell u['shell']
        comment u['comment']
        supports :manage_home => true
        home "/home/#{u['id']}"
    end
end

My History with Chef

The first commit to my personal Chef repository is from 2011. I got started with Chef through the free trial of Opscode's hosted platform, which allows the management of up to 5 nodes at no cost. This is probably the best way to get acquainted with Chef development, as setting up the full stack can be a hassle.

I then moved to littlechef to handle my systems. Littlechef is Chef Solo with some extensions to manage a bunch of nodes. It is also a nice alternative to setting up the full stack while learning Chef.

Today I run a mix of chef-solo in a Cobbler environment and Chef with the community server to handle the entire life cycle of an installation.

Testing Ntp and tzdata with Behat

This is a follow-up to my previous post about behavior-driven infrastructure.

Using Behat to Test a Server

Here's an example of a test for tzdata:

Feature: tzdata Configuration
    As a Server
    I want to have a tzdata installation
    So that my calculations for various timezones are always correct

Scenario: The tzdata data Installation and Configuration

    Given i have the "tzdata" Package installed
    Then the directory "/usr/share/zoneinfo" should exist
    And the "tzdata" Package Version should match "2011(d|e)"
    And the file "/usr/share/zoneinfo/localtime" should exist
    And the file "/etc/localtime" should exist

Scenario: The tzdata checks for correct times

    When i execute "date"
    Then the output should match "CEST|CET"

    When i execute "date  --utc -d '2006-08-07 12:34:56-06:00'"
    Then the output should match "UTC"
    And the output should match "18:34:56"

    When i execute "TZ=Europe/London date -d '2006-08-07 12:34:56-06:00'"
    Then the output should match "BST|GMT"
    And the output should match "19:34:56"

Writing the Unit Tests

I've put up two examples, one for ntp and one for tzdata, on my GitHub. The code is very quick'n'dirty, just "see what is possible" quality.


Now, when some random dictator somewhere decides that his country should change its timezones again, you can set up a quick test, roll out the tests, then roll out a new tzdata and be assured that the timezone change reliably hits every server.

This is a very simple test, but it is just here as an example and to explore the viability. For an Apache installation, for example, you could proceed to check various configuration settings, check for helper services that need to be there, check that your logfile collection is set up properly, check your log rotation, and check that all PHP packages are installed and correctly configured.

Behavior driven Infrastructure and Monitoring

While I'm busy getting acquainted with Chef, I'm starting to wonder why the topic of "behavior-driven infrastructure" hasn't picked up more momentum than it appears to have right now. (Or I've just been living under a rock and missed all of it.)

Behavior driven Infrastructure

BDD has been around in software development for a while now, but coverage of its use in a systems administrator's life has been pretty vague from what I can tell.

I’ve found a few interesting posts, but not much beyond this.

I've read Test-Driven Infrastructure with Chef, which touched a bit on the subject.

Behavior Driven Monitoring?

With growing infrastructure, monitoring becomes a major pain, especially if you do it the "classic" way that focuses on monitoring components rather than services. If your system checks thousands of hosts, chances are something is broken on some of them. A broken disk maybe? A MySQL slave causing high load because a batch job is running some statistics? An Apache running low on child processes?

But really, does it matter? As long as the service is up, running, and performing? Do we need to monitor every little cog in our infrastructure if there is a way to do "top-down" monitoring?

I certainly don't enjoy being woken in the middle of the night by monitoring telling me that a database is choking somewhere.

And now?

cucumber-nagios seems to be the only project touching on that subject.

As PHP is my language of choice, and I'm not quite convinced that learning Ruby is going to make me a happier person, I'll stick with it.

Behat is Cucumber for PHP; installation is done quickly:

pear channel-discover pear.behat.org
pear channel-discover pear.symfony.com

pear install behat/gherkin
pear install behat/behat

(Remind me to build a Chef cookbook for this.)

Time to explore…

Tinc and mDNS - The Perfect Road Warrior


In the past I used OpenVPN to set up a VPN connection between home and my VPS. Usually everything was good and worked the way I wanted it to. The trouble started when I wanted to add a notebook and various development virtual machines to this network. I somehow managed to put it together, add some more services, and keep the thing alive.

It was no fun whatsoever to use and maintain, though, so I needed a new solution.


I started exploring alternatives to OpenVPN and quickly stumbled on Tinc, which looked exactly like what I needed. The setup is fairly easy, and there is plenty of documentation available online.

At first I used Tinc's "router" mode with multiple networks connected together, like I used to have with the OpenVPN setup. Routing in such a small network is painful though, and if you constantly add new devices and networks, you'll pretty quickly grow tired of maintaining static routes.

I've since simply set up Tinc in "switch" mode, where it acts as a simple network switch (as the name suggests). All endpoints now share the same network, and there is no need to set up routes anymore.

Another advantage is that you can use the Linux bridge utilities to put your Tinc interface into a bridge with a local LAN interface, and immediately have that entire network added to your Tinc VPN.

My current setup looks like this:

  • At home I have an OpenWRT router with Tinc installed, where the Tinc interface is simply added to the bridge that OpenWRT already has
  • On my VPS I have a Tinc endpoint
  • On my notebook I also just have a Tinc endpoint

Both my notebook and my VPS are now always on my local LAN, no matter where I am physically.

Adding Zeroconf DNS to the mix

With the Tinc setup above, your life will already be much better. With some tiny shell scripts, my notebook figures out whether it's at home or on the road and connects to my VPN automatically. Everywhere I go, I have my home network with me.

Now there's only one problem left: DNS.

When you connect to a foreign network, you will usually be issued an IP via DHCP, and a DNS server along with it. That DNS server obviously knows nothing about the hosts you have at home.

After I started using Tinc, I set up a DNS server at home to serve my hostnames, and built a bunch of shell scripts to make sure all endpoints used it. That is not a very good solution.

The only real alternative to using DNS is Zeroconf DNS, or mDNS: your system announces its hostname and IP address via multicast.

As I already had a switched VPN network, all I had to do was replace the DNS server with Zeroconf. I simply installed Avahi and the mDNS resolver on all my hosts and started using the .local hostnames for everything. This works flawlessly on OS X too. Windows is a bit hit-and-miss, as Apple's Bonjour implementation for Windows seems somewhat lacking.


This setup has a couple of benefits that I don't want to miss anymore.

  • Static network layout: It doesn't matter if I am at home or on the road, my network configuration always looks the same everywhere.
  • Dynamic autoconfiguration: Once you have Tinc and Avahi running, there is really nothing left to do configuration-wise. You configure it once, and it just works. I never had that experience with OpenVPN, which was a constant struggle to keep running.
  • Encryption everywhere: I don't like people spying on me. If I connect my notebook to a "hostile" network, chances are my usage will be monitored somehow. With the VPN configuration I can just route all my HTTP traffic through a Squid proxy running on my VPS and know that nobody will be able to sniff my connection.
  • Easy to expand: At home I don't really need to do anything, I can just add virtual machines. The Tinc tunnel is bridged with my LAN, and all traffic is automatically forwarded.

I have a MySQL database running on my VPS. If I wanted to connect to it from home without the VPN configuration, I would need to expose the MySQL server to the public internet. With the VPN, I can just let it listen on the VPN interface and don't need to worry about exposing it. With Avahi on the VPS, I have a "public" and a "private" (aello.local) hostname, so I don't need to remember IP addresses.

Before I ditched Apple, I also used to have iTunes running at home to serve music through Bonjour. As my VPN network is one broadcast domain, I could listen to music from wherever I was, something Apple tries hard to make sure you can't do.

The only problem with this setup: if your home LAN range clashes with the network you are physically connected to, you are doomed. So choose a network range at home that is as small as possible and fairly uncommon.

Aliases with Avahi

For my development VMs, I need a way to have multiple hostnames for one IP. Unfortunately, the standard Avahi installation does not yet allow aliases. Fortunately, somebody spent some time and built a script that does exactly what I need.

On Debian you just need to install python-avahi and python-dbus, and you can run the script with development.local to create the alias on your LAN, which is then instantly broadcast throughout the Tinc network.

More Info

If you'd like a bit more detail on the setup, drop me a note and I'll put up some more documentation on the whole thing.

rssReader is now Bliss


Time to revive an ancient project: rssReader

It's been a while (seven years) since I last worked on my own little feed reader project.


I recently started using Google Reader, as I got myself a fancy Android phone with a data flat rate, and I didn't get TT-RSS running properly on it (probably totally my fault).

The only things I didn't want on Google Reader were my authenticated and NSFW feeds; I needed something different for those.

I remembered that I had this little project way back when and decided to have a look at it. Unfortunately, it has grown quite old: still based on PHP4, not very Web-2.0-ish, and generally not really pretty to look at.

So I just rebuilt the thing from scratch.

The only thing it has in common with the old version is that it uses Smarty as template engine. I committed the old thing too, if you want to have a good laugh.


First: may I suggest you try TT-RSS? Or, if you don't care much about privacy, Google Reader is quite awesome, even more so with the right browser extension installed.

If you still want to have a look, you can get it on GitHub. Installation should be very straightforward.

The only advantage it has over TT-RSS is that it doesn't need a database whatsoever. (It's also somewhat easier to configure, but given the difference in features, that's not much of a surprise.)

It has plenty of disadvantages, though.

Extracting a single path from a SVN Repository

I have a Subversion repository that holds various smaller projects lumped into one directory.

This is annoying if you want to work on such projects with Git as a Subversion front-end, or want to put the code on GitHub.

I therefore needed a way to extract these directories from the one repository and dump them into a dedicated repo with a proper directory structure (trunk, branches, tags).

These are the steps needed for my rssReader example:

First, dump all of /home/svn and extract the php/rssreader directory:

svnadmin dump /home/svn | \
  svndumpfilter include --drop-empty-revs \
  --renumber-revs --skip-missing-merge-sources \
  php/rssreader > rssreader.dump

--drop-empty-revs and --renumber-revs make sure that the dump's history looks clean and doesn't contain all the commits from the parent directories.

I then needed to edit the dump file and remove an svn-sync revprop that made svnadmin load barf.

If you have copied content around in the source repository, you may need to include the sources of those copies as well and clean them up in the destination repository. svndumpfilter will complain that it can't copy the source directory in such cases.

Create new Repo and the Basic Layout

We now need a new, empty repository:

svnadmin create rssreader
svn -m 'Initial Layout' \
  mkdir file:///$PWD/rssreader/php \
  file:///$PWD/rssreader/branches \
  file:///$PWD/rssreader/tags

It is important that you create the directory structure as it was in the Source Repository.

For this example, the code I wanted to export was located at php/rssreader; I thus needed to create the php/ path in my destination repository, as svnadmin load won't do that.

Import the dump

Now just load the dump file into the repository with:

cat rssreader.dump | svnadmin load rssreader

Cleanup the final repo

svn mv -m 'Move to trunk' \
  file:///$PWD/rssreader/php/rssreader \
  file:///$PWD/rssreader/trunk
svn rm -m 'Remove junk' file:///$PWD/rssreader/php

Finally clone it with git

Now I can go ahead and create a Git clone of it:

# first create the authors file
echo "claus = Claus Beerta <email@address>" > ~/svnauthors

# Now clone
git svn clone -A ~/svnauthors \
  -s file:///home/claus/rssreader

(Git doesn't expand $PWD properly here, so you need to give it the complete path. Weird.)

Now I can go ahead and hack on the clean repo and commit it to Git or my private SVN server.

Gnome 3 - Back to the Roots

So I've recently started using Linux on my desktop full time again, all because of Gnome 3.

I abandoned Linux on my desktop at home a few years back and started using OS X full time. I got a PPC Mac Mini, then an Intel Mac Mini, and finally a MacBook. I was fairly happy with it: a fancy UI with a nice CLI underneath to fiddle around with.

Good Bye Apple

A couple of months back, though, I started to grow tired of Apple and its behavior in general. The company has grown from cool underdog to mega-corporation, and that definitely shows. IMO they're becoming the Microsoft of the '90s, using their market dominance in some areas to pressure little companies out of business.

The past few months also felt like Apple's primary focus is their iSomething devices, and they don't care much about OS X anymore.

With all the recent updates to iTunes (which I've grown to absolutely hate as an application), their App Store for OS X, and their apparent intention to turn the desktop into a touch UI (I'm not very big on the whole gestures notion), I finally decided to give up on it altogether.

(No, you can't have my stuff; I've already sold all of it.)

Windows 7 as intermediary

For some time I used Windows 7 exclusively on my desktop at home and on my notebook. Windows 7 is a good OS, but for a Unix person it is severely lacking in a number of areas.

Microsoft is also severely lacking in the innovation department. They need to get their act together and ship some good updates again. Their image is poor these days, and if they don't turn around, I wouldn't be too surprised if they no longer mattered in a couple of years. Even their primary enterprise market is slowly shifting away from them.

It'd be a shame if we ended up with a new dominant player (Apple or Google, for example) merely replacing Microsoft. We need to keep the competition alive to keep these mega-corporations in check.

Hello Gnome

Fortunately for me, Gnome 3 arrived. I started using it at work (Fedora 15 betas), and when it was released I put Arch Linux on my desktop and haven't looked back since.

I still occasionally boot Windows on my notebook to edit photos in Lightroom, but that's all right. I've tried Bibble, but I am too used to Lightroom to make the switch.

In essence, I have made a complete turnaround: from Linux to Mac OS X to Windows and now back to Linux.

Let’s see how long it’ll last this time.

Some good to know things on Gnome 3

  • Gnome 3 Cheatsheet: link
  • Gnome Shell extensions: link
  • Gnome Tweak Tool for some advanced Settings: link

Enable Focus follows Mouse in Gnome Shell:

gconftool-2 -s /apps/metacity/general/focus_mode sloppy --type string

Changing the user theme with the User Theme extension installed (older versions of the Tweak Tool didn't really work for me):

gsettings get org.gnome.shell.extensions.user-theme name          # Get current
gsettings set org.gnome.shell.extensions.user-theme name Zukitwo  # Set one
gsettings reset org.gnome.shell.extensions.user-theme name        # Reset to default

First Post via Posterous

So, how do you build an interface to post content to your self-made blog app when you don't actually want to?

Easy: You don’t!

I've been looking around for inspiration on how to build an interface to put posts on my self-made website. The thing is: input validation is tedious and error-prone. Even if I am the only person who will ever use this interface, I'll still manage to trick myself: encodings, character sets, HTML editing, and so on. Then I'd also like to post images, photos, and other media. Building a frontend for all that is a tedious task.

So, why bother?

Posterous to the rescue

While looking around, I stumbled over Tumblr and Posterous, both providing a Blogger-like service that:

  • Doesn't run on Wordpress. I've grown old and tired of it.
  • Both have slick-looking, quick interfaces.
  • Neither wants to know, while you're signing up, when your mother's best friend's niece had her last teeth pulled.
  • Both allow posting Markdown content via email, and both obviously have rich-text editors to edit your posts afterwards.

What Posterous also has is the ability to distribute content to various other sites.

Also, Posterous didn't present its site to me in German, unlike Tumblr. I know German, sure, but I don't want it here. My browser says "give me English, please", so why send me a German page? I absolutely HATE it when sites do that.

My Content is Mine

Posterous has a simple yet useful API that lets me get my content back and put it on my site. It also has a comments API, so I can feed the comments from Posterous back into my database. That way, everything I create stays under my control, and I can do with it as I see fit. If for some reason the site starts to bother me, I can just delete my account and keep my content.

The punch line: I can use Posterous's wonderful interface and features to produce and distribute content, and then just pull it back into my site.

Here's the code for it. It lacks comment importing, but that's not urgent.

Triggering the importer

Once something is posted, Posterous sends a mail back to confirm it went live. Why, thank you, I can use that!

A little procmail action:

:0 c # Trigger an update
* ^From: .*posterous.com
| USER=cbeerta PASSWORD=thoughshallnotknow php index.php --import-posterous

:0 # and store it (for now)
And the post is added to my page immediately. Nifty.

Securing your Web server against Bots

Bots usually operate in a fairly similar way to get onto your server:

  • They exploit a known vulnerability in a PHP script to inject some code
  • This injected code is usually very simple, downloading the Trojan from a remote address with curl or wget to a temporary directory
  • After the Trojan has been downloaded, it is executed through the same PHP vulnerability

A method I’ve employed in the past to at least stop these automated Trojan spreads is adding iptables rules that forbid the user the web server runs as from making any connections to the outside world:

# Allow everything local
iptables -A OUTPUT -o lo+ -m owner --uid-owner 33 -j ACCEPT
# Allow connections to the server's own official address
iptables -A OUTPUT -d <Official IP Address of your server> -p tcp -m owner --uid-owner 33 -j ACCEPT
# Allow DNS requests
iptables -A OUTPUT -p udp -m owner --uid-owner 33 -m udp --dport 53 -j ACCEPT
# Allow HTTP answers to clients requesting stuff from the web server (HTTP+HTTPS)
iptables -A OUTPUT -p tcp -m owner --uid-owner 33 -m tcp --sport 80 -j ACCEPT
iptables -A OUTPUT -p tcp -m owner --uid-owner 33 -m tcp --sport 443 -j ACCEPT
# Log everything that gets dropped
iptables -A OUTPUT -m owner --uid-owner 33 -m limit --limit 5/sec -j LOG --log-prefix "www-data: "
# and finally reject anything that tries to leave
iptables -A OUTPUT -m owner --uid-owner 33 -j REJECT --reject-with icmp-port-unreachable

# Force outgoing HTTP requests through the local proxy on port 8080
iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner 33 -m tcp --dport 80 -j DNAT --to-destination 127.0.0.1:8080

“But now all my RSS clients and HTTP includes won’t work anymore!” There are two ways around the fact that nothing on your web server is allowed to talk to the evil internet anymore:

  1. Insert `ACCEPT` rules into the iptables chain for the destinations you want to allow. This method is tedious and error-prone, as you constantly need to keep track of the IPs of the services you’re using and update your iptables rules accordingly.
  2. Use a simple HTTP proxy to pass through the requests you want to allow.

I’ve always preferred the HTTP proxy method. While it may be a bit more work to set up in the first place, the added security is worth it: since you can allow requests on a URL basis, you no longer need to worry about the remote side changing IPs. Besides, if you allow IPs with iptables, people can upload their Trojans to those allowed web servers and bypass all your fancy protection.
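On the application side, most HTTP clients accept an explicit proxy setting, so the web app’s own outbound requests can be pointed at the local proxy directly. The site itself is PHP, but as an illustration here is a Python sketch, assuming the proxy from the DNAT rule above listens on 127.0.0.1:8080:

```python
import urllib.request

def proxied_opener(proxy="http://127.0.0.1:8080"):
    # Send all outgoing plain-http requests through the local filtering
    # proxy instead of connecting to the internet directly
    return urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy})
    )

# usage (with the proxy running):
# opener = proxied_opener()
# feed = opener.open("http://feeds.example.com/recent.rss").read()
```

The same idea applies to curl, wget, or PHP streams via their respective proxy options.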

A good proxy that allows for extensive filtering while keeping a small footprint is Tinyproxy. A few settings you’ll want to tune:

# Only listen on localhost
Listen 127.0.0.1
# Allow requests from your local server only
Allow <Official IP Address of your server>

# Enable Filtering, and deny everything by default
Filter "/etc/tinyproxy/filter"
FilterURLs On
FilterExtended On
FilterDefaultDeny Yes

Looking at your Tinyproxy logfiles, you should now see requests being denied whenever you access a page on the web server that tries to include external resources:

CONNECT   Aug 01 05:11:57 [16731]: Connect (file descriptor 7): []
CONNECT   Aug 01 05:11:57 [16731]: Request (file descriptor 7): GET /1.0/user/cb0amg/recenttracks.rss HTTP/1.0
INFO      Aug 01 05:11:57 [16731]: process_request: trans Host GET for 7
NOTICE    Aug 01 05:11:57 [16731]: Proxying refused on filtered url ""
INFO      Aug 01 05:11:57 [16731]: Not sending client headers to remote machine

Voila, my Wordpress installation tried to grab a recent-tracks RSS feed; I want to allow that, so I’ll just add this to my Tinyproxy filter:

^<hostname of the allowed service>

Now anything you want your Web Server to access, you can simply add to your Tinyproxy filter.
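Conceptually, with `FilterURLs` and `FilterDefaultDeny` enabled, Tinyproxy treats every line of the filter file as a regular expression matched against the requested URL and denies anything that matches no line. That behaviour can be modelled like this (a simplified sketch, not Tinyproxy’s actual code; the hostname is a made-up example):

```python
import re

def allowed(url, filter_lines):
    # Default-deny: a URL passes only if at least one filter regex matches
    return any(re.search(line, url) for line in filter_lines)

# Hypothetical filter file contents, one regex per line
rules = [r"^http://feeds\.example\.com/"]
```

Any URL the web server should be able to reach just becomes one more line in `rules`, i.e. in the filter file.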

Remember though, this is not blanket protection against every software flaw that exists! You should still keep your software updated at all times.

Reading other people's Code

I don’t know if I just suck at understanding other people’s (mostly PHP) code, but I frequently find myself slapping my forehead while reading code other people produce. I’m not claiming I’m a lot better at it, and I know that; that’s why I would never ever put my crap code online for other people to use and work with.

I’m on the search for a wiki software (more a wiki class) that I can use to start writing down something that’s in my head, and I don’t currently want to write my own wiki just for that. (I wonder how long I can resist that urge, though.) While going through the numerous PHP wikis that are available, I’m getting more and more frustrated.

Just one example of what I feel is plainly bad: overriding php.ini settings from inside the code with ini_set(). Please don’t do that; it will drive poor admins insane trying to figure out why the settings in your precious php.ini don’t seem to work. (Enabling display_errors somewhere inside the code is an absolute no-go, and who knows what the software does if I forbid ini_set in my PHP setup.)

Having a full page of require’s that point to 5-line classes is bad. Having everything in one single file is not the way to go either. And having 50 lines of documentation for an absolutely obvious function that is exactly one(!) line of code, while not having a single line of comments for a 50+ line function, is just silly. Either document properly, or don’t document at all!

Writing an entire function on a single line that grows out of my editor window (which is 200 characters wide) is bad. Having 10+ of those functions beneath each other makes me sad.

After opening yet another PHP file and seeing this:

foreach($_GET as $key => $value){if(in_array($key,$export_vars)){$$key = $value;}}
foreach($_POST as $key => $value){if(in_array($key,$export_vars)){$$key = $value;}}

I immediately quit my editor and rm -rf the thing. You know, spaces don’t cost extra money in source code.

    Yes, most of the formatting used in this file is HORRIBLY BAD STYLE. However,
    most of the action happens outside of this file, and I really wanted the code
    to look as small as what it does. Basically. Oh, I just suck. :)

At least they’re honest with that statement. AND OH BOY IS HE RIGHT! (rm -rf)


    The ErfurtWiki engine is fully contained in one script file, but almost
    200 feature enriching plugins and extensions modules are available.

Guys, THAT IS NOTHING TO BE PROUD OF! 130KB of code! That’s a whopping 4000 lines! Aaargh!

I’ve got a headache now, and I still don’t have anything I can use. It’s not all bad, though: take a look at the source of coWiki. It is nicely structured, has _useful_ comments in the code, and is beautifully indented. I think I will just use the Markdown class and start writing code around that.