Category Archives: howto

A fluxus workshop plan

I’ve been getting some emails asking for course notes for fluxus workshops. I don’t really have anything as structured as that, but I thought it would be good to document something here. I usually follow the first part of the fluxus manual fairly closely, trying to flip between visually playful parts and programming concepts. I’ve taught this to teenagers, unemployed people, masters students, professors and artists – it’s very much aimed at first-time programmers. I’m also less interested in churning out fluxus users, and more motivated by using it as an introduction to algorithms and programming in general. Generally it’s good to start with an introduction to livecoding: where fluxus comes from, who uses it and what for. I’ve also started discussing the political implications of software and algorithmic literacy.

So first things first, an introduction to a few key bindings (ctrl-f fullscreen/ctrl-w windowed), then in the console:

  1. Scheme as calculator – parentheses and nesting simple expressions.
  2. Naming values with define.
  3. Naming processes with define to make procedures.
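
A rough sketch of the sort of thing that ends up typed into the console at this stage (the exact numbers don’t matter):

; 1. scheme as a calculator - nesting expressions
(+ 1 2)
(* (+ 1 2) 10)

; 2. naming values
(define size 3)
(* size size)

; 3. naming processes to make procedures
(define (square n) (* n n))
(square size)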

Time to make some graphics, so switch to a workspace with ctrl-1:

  1. A new procedure to draw a cube.
  2. Calling this every frame.
  3. Mouse camera controls, move around the cube.
  4. Different built in shapes, drawing a sphere, cylinder, torus.
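
A minimal sketch of where those steps end up – a procedure that draws a cube, called every frame; the mouse camera controls can then be used to move around it, and the other built-in shapes swapped in:

(define (render)
    (draw-cube))

(every-frame (render))

; try the other built in shapes:
; (draw-sphere) (draw-cylinder) (draw-torus)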

Then dive into changing the graphics state, so:

  1. Colours.
  2. Transforms.
  3. Textures.
  4. Multiple objects – the graphics state is persistent, like changing a “pen colour”.
  5. Transform state is cumulative (each scale multiplies the current one, etc).
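
This usually ends up looking something like the sketch below – the state commands work like changing a pen colour, affecting everything drawn afterwards:

(define (render)
    (colour (vector 1 0 0))       ; everything drawn from now on is red
    (draw-cube)
    (translate (vector 2 0 0))    ; move along x...
    (scale (vector 0.5 0.5 0.5))  ; ...and successive scales multiply together
    (colour (vector 0 1 0))
    (draw-sphere))                ; a smaller green sphere next to the cube

(every-frame (render))

; textures work the same way, e.g. (texture (load-texture "test.png"))
; where "test.png" is whatever image file you have to hand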

Then tackle recursion, in order to reduce the size of the code and make much more complex objects possible.

  1. A row of cubes.
  2. Make it bend with small rotation.
  3. Animation with (time).
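
The row of cubes is a nice small recursive procedure – roughly this sort of thing:

(define (row n)
    (when (not (zero? n))
        (draw-cube)
        (translate (vector 1.1 0 0))  ; move along for the next cube
        (rotate (vector 0 0 5))       ; a small rotation bends the row
        (row (- n 1))))

(every-frame (row 20))

; animate by feeding (time) into the rotation, e.g.
; (rotate (vector 0 0 (* 5 (sin (time)))))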

At this point they know enough to be able to play with what they’ve learnt for a while, making procedural patterns and animated shapes.

After this it’s quite easy to explain how adding another recursive call creates tree recursion, and how to scope state using (with-state), and it all goes fractal crazy.
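
For instance, wrapping each branch in (with-state) scopes the transform, so the two recursive calls don’t interfere with each other – roughly:

(define (tree depth)
    (when (not (zero? depth))
        (draw-cube)
        (with-state
            (translate (vector 1 1 0))
            (rotate (vector 0 0 25))
            (scale (vector 0.8 0.8 0.8))
            (tree (- depth 1)))
        (with-state
            (translate (vector -1 1 0))
            (rotate (vector 0 0 -25))
            (scale (vector 0.8 0.8 0.8))
            (tree (- depth 1)))))

(every-frame (tree 6))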

This is generally enough for a two hour taster workshop. If there is more time, I go into the scene graph, explain how primitives are built from points and faces, and show how texture coordinates work. The physics system is also great to show, as it’s simple to get very different kinds of results.

Compiling Scheme to Javascript

Recently I’ve been continuing my experiments with compilers by writing a Scheme to Javascript compiler. This is fairly simple, as the languages are very similar – both have garbage collection and first class functions, so it’s largely a case of reusing these features directly. There are some caveats though – here is the canonical factorial example in Scheme:

(define factorial
  (lambda (n)
    (if (= n 0) 1
        (* n (factorial (- n 1))))))

The Javascript code generated by the compiler:

var factorial = function (n)
{
    if ((n == 0)) {
        return 1
    } else {
        return (n * factorial((n - 1)))
    }
}

Scheme has tail call optimisation, meaning that recursive calls are as fast as iterative ones – in Javascript, however, a function that calls itself will gradually use up the stack. The factorial example above breaks with larger numbers, so we need to do some work to make things easier for Javascript interpreters. When considering list processing, fold can be seen as a core form that abstracts all other list processing. If we make that fast, we can reuse it to define higher-order list operations such as filter, which takes a function and a list and returns another list. The function is called on each element; if it returns false, that element is filtered out of the resulting list.

(define filter
  (lambda (fn l)
    (foldl
     (lambda (i r)
       (if (fn i) (append r (list i)) r))
     ()
     l)))

Compiling this with an iterative version of fold – which uses a for loop in Javascript – we generate this code:

var filter = function (fn,l)
{
    return (function() {
        var _foldl_fn=function (i,r) {
            if (fn(i)) {
                return r.concat([i])
            } else {
                return r
            }
        };

        var _foldl_val=[];
        var _foldl_src=l;
        for (var _foldl_i=0; _foldl_i<_foldl_src.length; _foldl_i++) {
            _foldl_val = _foldl_fn(_foldl_src[_foldl_i], _foldl_val); 
        }
        return _foldl_val; })()
}

We have to be a little bit careful that variables created by the compiler don’t clash with those in the user’s code, so we have to generate really ugly variable names. You also need to be careful to wrap code in closures as much as possible to control the scope. The problem with this is that there is probably a limit to the amount of nesting possible, especially when comparing different implementations of Javascript. An interesting project that has got much further than this is Spock. More context to this work and some actual source code soon…

Android Camera Problems

The DORIS marine mapping platform is taking shape. For this project, touch screens are not great for people wearing gloves in small fishing boats – so one of the things the Android app needs to do is make use of physical keys. In order to do that for taking pictures, I’ve had to write my own Android camera activity.

It seems that there are differences in underlying camera behaviour across devices – specifically the Acer E330 model we have to take out on the boats. Nearly every time takePicture() is called, the supplied callback functions fail to fire. I’ve tested callbacks for the shutter, raw, jpeg and error events, and also tried turning off the preview callback beforehand as suggested elsewhere – no luck on the Acer, while it always works fine on the HTC.

The only solution I have so far is to close and reopen the camera just before takePicture() is called, which seems to work as intended. As this takes some seconds to complete, it’s also important (since this is bound to a key-up event) to prevent the camera from starting to take another picture before it has finished processing the previous one, as that causes further callback confusion.

import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.hardware.Camera;
import android.hardware.Camera.CameraInfo;
import android.hardware.Camera.PictureCallback;
import android.util.Log;

// Wraps the camera so it can be closed and reopened just before each
// shot - the workaround described above for devices (like the Acer E330)
// where the takePicture() callbacks otherwise never fire.
class PictureTaker 
{
    private Camera mCam;
    private Boolean mTakingPicture;

    public PictureTaker() {
        mTakingPicture=false;
    }

    public void Startup(SurfaceView view) {
        mTakingPicture=false;
        OpenCamera(view);
    }

    private void OpenCamera(SurfaceView view) {
        try {
            mCam = Camera.open();
            if (mCam == null) {
                Log.i("DORIS","Camera is null!");
                return;
            }
            mCam.setPreviewDisplay(view.getHolder());
            mCam.startPreview();
        }
        catch (Exception e) {
            Log.i("DORIS","Problem opening camera! " + e);
            return;
        }
    }

    private void CloseCamera() {
        mCam.stopPreview();
        mCam.release();
        mCam = null;
    }

    public void TakePicture(SurfaceView view, PictureCallback picture)
    {
        if (!mTakingPicture) {
            mTakingPicture=true;
            // the workaround: close and reopen the camera before every shot
            CloseCamera();
            OpenCamera(view);
            
            try {
                mCam.takePicture(null, null, picture);
            }
            catch (Exception e) {
                Log.i("DORIS","Problem taking picture: " + e);
            }
        }
        else {
            Log.i("DORIS","Picture already being taken");
        }   
    }
}

Re-interpreting history

A script for sniffing bits of SuperCollider code being broadcast as livecoding history over a network and re-interpreting them as objects in fluxus, written during an excellent workshop by Alberto de Campo and Julian Rohrhuber at /*VIVO*/ Mexico City.

(osc-source "57120")

;; convert a string into a list of character codes
(define (stringle str)
    (map
        char->integer
        (string->list str)))


;;(osc-destination "osc.udp:255.255.255.255:57120")
;;(osc-send "/vivo" "s" '("fluxus:hola"))

;; wrap-around list lookup, so any index is safe
(define (safe l n)
    (list-ref l (modulo n (length l))))

;; use the incoming code string to drive the scale, rotation and colour of a torus
(define (render arg)
    (let ((l (map (lambda (t) (/ t 255)) (stringle arg))))
        (with-state
            (scale (vector (safe l 0)
                    (safe l 1)
                    (safe l 2)))
            (rotate (vmul (vector (safe l 32)
                    (safe l 12)
                    (safe l 30)) 360))
            (colour (vector (safe l 0)
                    (safe l 3)
                    (safe l 4)))

            (build-torus 0.1 1 4 20))))

(clear)
(scale 2)
;;(render "hello 343 323")

(every-frame 
    (begin
        (when (osc-msg "/hist")
            (printf "~a~n" (osc 1))
            (when (osc 1)
                (render (osc 1))))))

Aniziz: Keeping it local and independent

A large part of the Aniziz project involves porting the Germination X isometric engine from Haxe to Javascript, and trying to make things a bit more consistent and nicer to use as I go. One of the things I’ve tried to learn from experiments with functional reactive programming is using a more declarative style, and broadly trying to keep all the code associated with a thing in one place.

A good real world example is the growing ripple effect for Aniziz (which you can just about make out in the screenshot above), which consists of a single function that creates, animates and destroys itself without the need for any references anywhere else:

game.prototype.trigger_ripple=function(x,y,z) {
    var that=this; // lexical scope in js annoyance

    var effect=new truffle.sprite_entity(
        this.world,
        new truffle.vec3(x,y,z),
        "images/grow.png");

    var len=1; // in seconds

    effect.spr.do_centre_middle_bottom=false;
    effect.finished_time=this.world.time+len;
    effect.needs_update=true; // will be updated every frame
    effect.spr.expand_bb=50; // expand the bbox as we're scaling up
    effect.spr.scale(new truffle.vec2(0.5,0.5)); // start small

    effect.every_frame=function() { 
        // fade out with time
        var a=(effect.finished_time-that.world.time);
        if (a>0) effect.spr.alpha=a; 
   
        // scale up with time
        var sc=1+that.world.delta*2; 
        effect.spr.scale(new truffle.vec2(sc,sc));

        // delete ourselves when done
        if (effect.finished_time<that.world.time) {
            effect.delete_me=true;
        }
    };
}

The other important lesson here is the use of time to make movement framerate independent. In this case it’s not critical, as it’s only an effect – but it’s good practice to always multiply time-varying values by the time elapsed since the last frame (called delta). This means you can specify movement in pixels per second, and more importantly it means that interactivity will remain consistent, within reason, on slower machines.

Websockets vs HTTP

A bit of R&D this morning into websockets. Previous games like Naked on Pluto and Germination X have made use of standard HTTP protocol for their client server communication. This is easy to set up (as a newbie web programmer) and fine for prototyping and proof of concept – but has some serious problems with regard to scale.

The first problem is that communication is one way: clients always have to call the server in order to get data. This has a serious impact on how realtime applications work, as they need to poll – “has anything changed yet”… “has anything changed yet”…

More seriously, the server has no real notion of who the client is and what they have already received, so all the data needs to be sent on each poll. This results in duplicate data being sent and is a waste of bandwidth.

Underlying the HTTP protocol are sockets for sending the data – each request is treated as a distinct event, so a socket is created and destroyed to return the response. A better way is to hook into this lower level and use sockets directly – each client then has a unique connection on the server, and data can be sent in both directions. The server is also told when the socket is disconnected, so things can be cleaned up.

On the client side, websockets are a standard part of HTML5 so they are fairly simple to use from Javascript:

socket = new WebSocket('ws://localhost:8001/websocket');
socket.onopen= function() {
    socket.send('hello from client');
};
socket.onmessage= function(s) {
    alert('server says: '+s.data);
};

On the server I’m using Clojure, and after a bit of fiddling around with my own socket server implementation, I found Webbit, which takes all the hassle away:

;; the namespace name here is arbitrary - the imports are the Webbit classes used below
(ns websocket-example.core
  (:import (org.webbitserver WebServers WebSocketHandler)
           (org.webbitserver.handler StaticFileHandler)))

(defn -main []
  (println "starting up")
  (doto (WebServers/createWebServer 8001)
    (.add "/websocket"
          (proxy [WebSocketHandler] []
            (onOpen [c] (println "opened" c))
            (onClose [c] (println "closed" c))
            (onMessage [c j]
                (println "message received: " c j)
                (.send c "hello from server"))))

    (.add (StaticFileHandler. "."))
    (.start)))

This approach takes us from perhaps hundreds of simultaneous connections for an online game to something more like 10,000 in theory – much more into the big league – but more importantly, persistent connections like this allow for some interesting game mechanics.

The Swamp that was, a bicycle opera

Day one on a new project with Kaffe Matthews, a collaboration between FoAM & Timelab. “The Swamp that was” is an opera where bicycles become a way to hear stories of the past in the city of Ghent.

I’m picking up the software side of things from Wolfgang Hauptfleisch, which involves using BeagleBoards – low-power, self-contained, open-hardware ARM computers – a good follow-up to my experiments with the considerably more closed NDS and less general-purpose Android.


This is my test BeagleBoard xM in a custom laser cut housing from Timelab.

The “swamp” system is based on Lua scripts calling proteaAudio for realtime audio processing. Lua has a tiny footprint and is great as an embedded interpreter (despite indexing from 1 and other minor gripes). Everything seems to be up and running, and I’ve set up a project on gitorious with the source code.

Some random things I’ve learned include flags to speed up dd when copying images to extremely slow USB SD memory cards:

sudo dd if=swamp0-1.img of=/dev/sde oflag=dsync bs=1M count=1024

List the IP addresses of devices attached to your local router:

sudo arp-scan --interface=wlan0 192.168.1.0/24

I’m using the Ångström distribution on the BeagleBoard, which uses opkg for package management – it took me a long time to figure out that the best way to search for packages is simply:

opkg list | grep alsa

Fast HTML5 sprite rendering

After quite a lot of experimentation with HTML5 canvas, I’ve figured out a way to use it with the kind of big isometric game worlds used for Germination X, which are built from hundreds of overlapping sprites. There are lots of good resources out there on low-level optimisations, but I needed to rethink my general approach in order to get some of these working.

It was quite obvious from the start that the simple clear-the-screen-and-redraw-everything approach was going to be far too slow. Luckily HTML5 canvas gives us quite a lot of options for building less naive strategies.

A debug view of the game, with 10 frames of changes shown as two plant spirits and one butterfly move around.

The secret is only drawing the changes for each frame (called partial re-rendering in the link above). To do this we can work out which sprites have changed and which ones they overlap with. The complication is maintaining the draw order and using clipping to keep the depth correct, without needing to redraw everything else too.

In the game update we need to tag all the sprites which change position, rotation, scale, bitmap image, transparency etc.

Then in the render loop we build a list of all the sprites that need redrawing, along with, for each overlapping sprite, a list of the bounding boxes of the changed sprites that touch it. There may be more than one bounding box, as a single sprite may need to be redrawn for multiple changed sprites.

For each changed sprite:
    Get the bounding box for the changed sprite
    For each sprite which overlaps with this bounding box: 
        If overlapping sprite has already been stored:
            Add the bounding box to overlapping sprite's list 
        Else:
            Store overlapping sprite and current bounding box.
    Add the changed sprite to the list.

Assuming the sprites have been sorted into depth order, we now draw them using the list we have made (we would only need to loop over the redraw list if we had built it in depth-sorted order).

For each sprite:
    If it's in the redraw list:
        If it's not one of the originally changed sprites:
            Set a clipping rect for each bounding box.
        Draw the sprite.
        Turn off the clipping, if it was used.

With complex scenes and multiple moving objects, this algorithm means we only need to redraw a small percentage of the total visible sprites – and we start to approach Flash in terms of performance for games (I suspect that Flash is doing something similar to this under the hood). The code is here, currently written in HaXE, but will probably end up being ported to Javascript.

Betablocker DS code patterns (part 2)

Following on from the last instalment of Betablocker DS coding patterns, this time looking at some very short programs that are nicely controllable by the livecoder – including some actual audio examples here (embedding seems broken on archive.org).

Both these example programs are 16 bytes long, and require 3 concurrent threads running over different parts of the program. Each thread has a simple purpose: the first loads a byte and plays it as a sound repeatedly, the second increments the byte that the first is playing, and the third periodically resets the byte to a constant value.

The first example simply plays the byte as a note – resulting in an arpeggiator style sound. After writing the code, the performer can change the speed of each of the three threads to get quite a bit of variety. For example, the slower the resetting thread runs, the longer the pattern will be before it repeats.

The second example is very similar, but is slightly modified to play a sequence of different sounds rather than notes, so it becomes a drum sequencer. The second instruction is also changed from a “push literal” to a “push contents of address”, which results in the program playing a part of memory as a sequence. We can control the start address location with the reset byte, write directly to this part of memory, and fiddle with the other threads’ speeds to get even more complexity.

Betablocker DS code patterns (part 1)

Some useful Betablocker DS code patterns as used in common live coding situations.

Firstly, when carrying out a sound check before a live performance, it’s important to keep some kinds of sound engineers happy by generating as many different sounds as you can so they can fiddle around with stuff – of course, remember to only turn everything up to 8 when asked to make the loudest sound you can, in order to leave some overhead to blow the PA.

This is a betablocker program which will play all the sounds in a sound bank at all frequencies. Add more concurrent threads to make it more interesting. Values interpreted as pointers are visualised as arrows:


(the instruction set description can be found here)

In betablocker the beat is always locked to the instruction cycle rate, so fast-changing sounds are a matter of optimisation. One of the most common ways around this (and a good way to start things off in a gig) is to use one fast looping program for playing sounds, while another slower program modifies the first by overwriting bits of it.

More to follow in this series. See also evolved betablocker genetic programs.