Category Archives: howto

Run your own Germination X server

We are in the process of moving the Germination X game to a new server, which gives me the chance to properly document the steps required to set it up – this is based on a clean Debian Squeeze 6.0.1 box on Linode.

Step 1: Get the source and Clojure installed

sudo apt-get install subversion
svn co
sudo cp lein /usr/bin; sudo chmod +x /usr/bin/lein
sudo apt-get install default-jdk
cd GerminationX/oak
lein deps (automatically installs all the Clojure dependencies)
cp fatjars/* lib/ (installs the FAtiMA precompiled jars)

Step 2: Install mongodb (used to store the game world)

sudo apt-key adv --keyserver --recv 7F0CEB10
add: “deb dist 10gen” to /etc/apt/sources.list
sudo apt-get update
sudo apt-get install mongodb-10gen

Once the server is running, you can browse the database (players and tiles containing plants) using the “mongo” console; the database is called “oak”.

Step 3: Start the server

Configure core.clj to build a new world (see comments in the source) and test by running it as a normal user (WARNING: this is not secure!!! – only for testing!!!)


This will start the game server, printing output to stderr/stdout, and after a while it will launch the three plant spirit FAtiMA processes. Wait a moment, then check port 8001 – the game should be running. Once checked, comment out the world building in core.clj and kill the four Java processes.

Step 4: Locking down the game server

For reasons of security the game server should be run as a user with very restricted permissions – this means that if someone finds a way to run commands via the process, they wouldn’t be able to do much to the server except access some game data. The standard user for this is called “www-data”.

sudo chown www-data:www-data public
sudo chown www-data:www-data public/log.txt
sudo ./ start
Check port 8001

Check the server is running with “sudo ./ status”, or stop it with “sudo ./ stop”. This code was based on a similar script for Naked on Pluto – thanks to Aymeric! The game logs its stdout/stderr output in /var/log/oak-server-8001.log, while some in-game logging currently goes to public/log.txt (so it can be visible from a browser).

We are also running the game via Apache using a reverse proxy (again based on the setup used for NoP), which gives us an added layer of protection, load management etc.

Isometric engineering

The first report from south western Sweden where I am on unofficial residency/hacking retreat in the environs of the Tjärnö Marine Research Laboratory.

This is a great place for my main task, to focus on Germination X, and I’m starting with new isometric ground tiles. I’m aiming for the feeling of roots and interconnectedness with just a whiff of geological process.

Germination X is about thinking by sketching, and this started with a couple of rough drawings playing with different pathways across squares. If the connection points are kept consistent they fit together to build cubes and, automatically, the cubes join together too.

I also thought that varying the widths of the paths would be more interesting, and built a kind of template from the master Germination X cube.

If I kept to these guidelines the plan should work – but I wasn’t sure how the end effect would actually look. Using the template (visible underneath the paper, still in analogue mode) I drew five different variations of these pathway cubes.

After a bit of fixing and colouring in the Gimp, this is an example chunk of world rendered in the Germination X 2D engine. Although the lining up is still not perfect, I like the roughness. Any combination of cubes will fit – you can swap any one for any other without breaking the pattern. The only tricky bit is that the random number generator is seeded from the position of the cubes, so the same pattern is generated consistently for any part of the world without having to store anything.

PS2 homebrew #5

Getting stuff to work on PS2 wasn’t quite as easy as I probably made it sound in the last homebrew post. The problem with loading code from usb stick is that there is no way to debug anything, no remote debugging, no stdout – not even any way to render text unless you write your own program to do that.

The trick is to use the fact that we are rendering a CRT TV signal and that you can control what gets rendered in the overscan area (think 8bit loading screens). There is a register which directly sets the background colour of the scanline – this macro is all you need:

#define gs_p_bgcolor        0x120000e0    // Set CRTC background color

#define GS_SET_BGCOLOR(r,g,b) \
            *(volatile unsigned long *)gs_p_bgcolor =    \
        (unsigned long)((r) & 0x000000FF) <<  0 | \
        (unsigned long)((g) & 0x000000FF) <<  8 | \
        (unsigned long)((b) & 0x000000FF) << 16

Which you can use to set the background to green, for example:

GS_SET_BGCOLOR(0, 255, 0);
It’s a good idea to change this at different points in your program. When you get a crash, the border colour it’s frozen with will tell you which area the code was last in, allowing you to track down errors.

There is also a nice side effect that this provides a visual profile of your code at the same time. Rendering is synced to the vertical blank – when the CRT beam shoots back to the top of the screen a new frame is started and you have a 50th of a second (PAL) to get everything done. In the screenshot below you can see how the frame time breaks down when rendering 9 animated primitives – and why it might be a good idea to use some of those other processors:

PS2 homebrew #4

Getting things to render on the PS2 is a little more complicated than using OpenGL, and it’s also a very different system to a PC. On the right you can see a block diagram of the Emotion Engine – it consists of the EE core (the CPU) on the left and the GS (Graphics Synthesiser) on the right. In between are two other processors called Vector Units – very fast processors designed to do things to vectors: points, colours etc.

All the Graphics Synthesiser can do is rasterise 2D shapes with points given in screen space, doing your texturing & Gouraud shading for you. It can draw points, lines, triangles or quads in various configurations (similar to OpenGL). However all the 3D transformations and lighting calculations have to happen elsewhere – in one or both of the Vector Units, or on the CPU.

So how do we get the GS to render something? Well you send it chunks of data, called GS Packets, that look a little like this:

GIF Tag 1
Primitive data
GIF Tag 2
Primitive data

The GIF Tags contain information on what sort of primitive it should draw and how the primitive data is laid out. The primitive data is the same as the primitive data in fluxus – vertex positions, colours, texture coordinates, texture data etc.

Once I had tinyscheme and the basic scenegraph that the minimal fluxus build uses working, I wrote a very simple renderer running on the EE core to apply the transformation matrices to the primitives (with similar push and pop to OpenGL). It doesn’t calculate lighting at the moment, so it’s just setting the vertex colours to the normals’ values for debugging. This is a literal photographic screenshot of my PS2 running exactly the same test fluxus script as the Android version was running:

PS2 homebrew #3

The next thing I wanted to do was see if I could compile the minimal Android version of fluxus for the PS2. All the PS2SDK examples are written in C, and when I tried the C++ compiler I got a bunch of these odd errors at link time:

ps2-main.cpp: undefined reference to `__gxx_personality_v0'

It turns out the C++ compiler does not support exceptions so you need to add this line to the makefile:

EE_CXXFLAGS += -fno-exceptions

The other thing to get used to is a side effect of a machine with so many processors: you need to send lots of data around between them using DMA (direct memory access) transfers. DMA works on chunks of memory at a time, so your data needs to be aligned on particular byte boundaries. This sounds a lot more complicated than it is in practice (although it does lead to really obscure bugs if you get it wrong).

For instance, when declaring arrays inside a structure you can do this:

struct my_struct {
    int an_int;
    float my_array[8] __attribute__((__aligned__(16)));
    float a_float;
};

Which tells gcc to sort it out for you by forcing my_array to start on a 16 byte boundary.

When allocating from the heap the EE kernel provides you with a memalign version of malloc:

float *some_floats=(float*)memalign(128, sizeof(float) * 100);

The pointer some_floats will be aligned to a 128 byte boundary. This works as normal with free().

At this point, other than a few changes to tinyscheme for string functions that don’t exist in the PS2 libraries, most of the fluxus code was building. The only problem was the OpenGL ES code – although the PS2 has some attempts at libraries that work a bit like OpenGL, the real point of playing with this machine is to write your own realtime renderer. A bit more on that next…

PS2 homebrew #2

Once you have a system for running unauthorised code, you can test it with some well-used homebrew – I used uLaunchElf for this purpose. Using swapmagic you need to put the executable on your usb stick as SWAPMAGIC/SWAPMAGIC.ELF and it should boot automatically. Not all usb sticks work – I had to try a few before I found a suitable one.

The next task is installing the toolchain of compilers and related utilities for making your own executables. At this point I found that the ps2dev site was offline along with its svn repo, but I managed to find a backup here, which I am now mirroring here. When you have the svn repo recreated, follow the instructions in ps2dev/trunk/ps2toolchain/readme.txt to build and install – you just need to set up some paths to tell it where to install (I use ~/opt) and run a script. The script tries to update from the unresponsive official svn, so I also had to change ps2toolchain/scripts/ to get it to complete.

You get a load of sample code in ps2dev/ps2sdk/samples, all you need to do is run make to build them. Just as with the Nintendo DS and Android devices, the compiler is gcc – or rather “ee-gcc”. “ee” stands for the Emotion Engine, Sony’s name for the main MIPS processor.

The result of the make command in the exciting screenshot above is a file called “cube.elf”. ELF stands for Executable and Linkable Format, a unix standard (debugging variants used to be called DWARFs “Debugging With Attributed Record Formats”, I make no comment).

Copy this to your USB stick and you should see something like this:

PS2 homebrew #1

Reuse is the better form of recycling, and the machines that we all increasingly use are fabulous concoctions of various plastics, metals and ceramic compounds. It’s simply shortsighted to treat them as throwaway items. With this in mind – and having lacked a computer with a GPU that I can use at gigs for a while now – I thought I’d postpone the inevitable cost by doing a little obsolete tech archeology.

I think homebrew and its associated arts – as a creative subversion (rather than a route to piracy) – are increasingly relevant, so I’m going to try and log my notes here. The information out there can be a bit challenging to understand, but I’d also like to show that it’s not that much of a dark art (compared to official development for, say, Android, which I’ve been exploring lately). I must admit an advantage with the PS2, as some time ago I was doing this “the proper way”, so perhaps some useful information can leak out of long-dead NDAs too.

The first problem to overcome with homebrew is how to get the machine to run code it’s not supposed to run. You need special hardware to burn PS2 bootable disks, and the hardcore approach to getting around that is known as the swap trick. You block the various sensors that detect if a disk is present, make a copy of a game on a CDR, replacing the code with your own. Boot using the original, then find a place in the game where the software isn’t paying attention and carefully switch it with the CDR. Once you can run “unofficial code”, you can make use of one of the various discovered exploits to (for example) make a bootable memory card to launch your own software from then on. A simpler approach is to give your memory card to someone who has already done this and get them to do it for you.

I didn’t know anyone who had done this, and I am also not that hardcore, so I spent a little bit of cash on a bootable CD designed for this purpose (which is legal to produce, because of homebrew). Along with the CD you get a variety of small shaped pieces of plastic for circumventing the drive opening detectors. These are only really needed for piracy though AFAICT, as you can boot into homebrew on a normal USB stick, so I haven’t needed to bother with them.

About @NopCleanerBot

In our continuing efforts to embrace all “social media”, one of the bots in Naked on Pluto has started a twitter account. Could this be the first character in a Facebook game to tweet?

The tweeting bot’s actions cause it to wander the world and sporadically report on its progress: who is in the same location (bots, players or objects) and snippets of conversations it hears, from players or bots talking to each other. I’m using tweepy, a python twitter library, so I had to do a bit of a cheap hack for the racket->python process communication using the filesystem. The python daemon simply waits for a file to be created by the main bot AI thread and sends its contents to the twitter stream. In order to do the authentication with OAuth I needed to set up a Twitter application to get a pair of keys to sign the single user the app will have.

Here is the code for the tweepy daemon:

#!/usr/bin/env python
import tweepy
import os
import os.path
import time

auth = tweepy.OAuthHandler("your consumer key", "your consumer secret")
print(auth.get_authorization_url()) # you need to visit this site, logged in as the twitter user
verifier = raw_input('Verifier:') # then input the verifier code here before continuing
auth.get_access_token(verifier) # swap the verifier for the user's access token
api = tweepy.API(auth) # this api allows us to read or write from the user's account

while True:
    if os.path.isfile("msg"): # look for a file called "msg"
        f = open("msg", "r")
        api.update_status( # dump the contents into the twitter stream
        f.close()
        os.remove("msg") # delete it, ready for the next message
    time.sleep(10) # poll for new messages every 10 seconds

Jackd, performances and harddisks

So you have your sample loading working in a separate thread to your realtime audio code, and the interface i/o communication is also tucked away in its own thread. You have a lock-free mechanism for communicating between all the threads and still, sometimes – when loading data from disk – jackd chucks your program off for blocking, usually during a live performance, and you have to fumble around restarting the audio server. What’s going on?

Well, it turns out that when your hard drive goes to sleep, the spin-up resulting from a disk access request (such as mid-performance sample loading) can cause the kernel to block, which causes jackd to panic and either stop entirely or detach your client. This happens even with jackd running with the --nozombies switch, so it must be the watchdog thread at work.

The solution is simply to prevent your drive spinning down when you are performing, by adding “hdparm -S 0 /dev/yourharddisk” to your live audio startup script. Thanks to ClaudiusMaximus for helping me fix this at Piksel – it’s been bothering me for ages.

More on haxe development

I thought I’d expand a little on the al jazari flash game, and how to develop flash games using free software.

Haxe is really rather nice, and although I’d prefer something with more parentheses (and, ironically, less static typing) it makes programming for flash a nicer experience than I’d been led to believe is normally the case. I only used haxe, gimp and a bit of fluxus to get sprite renders of the 3D models, along with Firefox and of course its flash plugin (I’d like to use gnash if I do a lot more of this). I’m going to describe the basics and some of the things it took me longer to figure out. I relied a lot on howtos in blog posts, so I thought it would be a good idea to join in the fun.

Firstly you need a file called compile.hxml with something like this in it:

-swf al-jazari.swf
-swf-version 9
-swf-lib resources.swf
-main AlJazari
-swf-header 640:480:40:ffffff

This is something like a makefile for haxe and contains the main output file, the main class and the size of the area the plugin will take up. You compile your haxe script with the command haxe compile.hxml.

The style of haxe (or paradigm, if you will) is very Java-like (which is probably good for me after all this exposure to Scheme and C++). You need to name your file the same as the class containing the main function, eg:

class MyMainClass {
    static function main() {
        trace("this gets run");
    }
}

Will work if it’s called MyMainClass.hx and the compile.hxml contains:

-main MyMainClass

You can then test it out by writing a little html like this:

<object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000"
        width="640" height="480">
<param name="movie" value="al-jazari.swf"/>
<param name="allowScriptAccess" value="always" />
<param name="quality" value="high" />
<param name="scale" value="noscale" />
<param name="salign" value="lt" />
<param name="bgcolor" value="#ffffff"/>
<embed src="al-jazari.swf" width="640" height="480"
       quality="high" scale="noscale" salign="lt" bgcolor="#ffffff"
       allowScriptAccess="always"
       type="application/x-shockwave-flash"/>
</object>

And then point your browser at this to test the code.

Using textures

The compile.hxml file for al jazari also includes a reference to a lib – which is where you can embed resources like textures for making sprites. You build one of these with a bit of xml like this:

<?xml version="1.0" encoding="utf-8"?>
<movie version="9">
    <background color="#ffffff"/>
    <frame>
        <library>
            <bitmap id="BlueCubeTex" import="textures/blue-cube.png"/>
        </library>
    </frame>
</movie>

The id refers to a class you have to add to your haxe script – I think this is like a forward declaration or extern of some form, which allows you to refer to your texture:

class BlueCubeTex extends BitmapData { public function new() { super(0,0); } }

This bit of code (say in a class inherited from Sprite) will then draw the texture like this:

var tex = new BlueCubeTex();
graphics.beginBitmapFill(tex); // fill subsequent shapes with the bitmap
graphics.drawRect(0, 0, tex.width, tex.height);
graphics.endFill();
The xml script is needed to build the swf library file which contains the textures, which you do by running:

swfmill simple resources.xml resources.swf

swfmill is free software, and installable with apt-get install swfmill on ubuntu.

Using sound

I couldn’t figure out a way to embed sounds using swfmill – it seems to be a recent feature, and I couldn’t find any examples that included the haxe code to load them. I did get this to work though:


var sound:Sound = new Sound();
var req:URLRequest = new URLRequest("path/from/main/swf/to/samples/sample.mp3");
var context:SoundLoaderContext = new SoundLoaderContext(8000, true);
sound.load(req, context);

Which loads mp3s from a url, and then after some time (you can set up a callback to tell you when it’s loaded, but I didn’t bother):

;

Where the parameter is the offset (in samples, I think) from the start of the sound.