Category Archives: Accidental art

A cryptoweaving experiment

Archaeologists can read a woven artifact created thousands of years ago, and from its structure determine the actions performed in the right order by the weaver who created it. They can then recreate the weaving, following in their ancestor’s ‘footsteps’ exactly.

This is possible because a woven artifact encodes time digitally, weft by weft. In most other forms of human endeavor, reverse engineering is still possible (e.g. with a car or a cake), but the instructions are not encoded in the object’s fundamental structure – they need to be inferred by experiment or other indirect means. Similarly, a text does not record its writing process in a time-encoded manner, only the end result. Interestingly, one possible self-describing artifact could be a musical performance.

Looked at this way, any woven pattern can be seen as a digital record of movement performed by the weaver. We can create the pattern with a notation that describes this series of actions (a handweaver following a lift plan), or move in the other direction like the archaeologist, recovering the notation from an existing weave.

example
A weaving and its executable code equivalent.

One of the potentials of weaving I’m most interested in is being able to demonstrate fundamentals of software in threads – partly to make the physical nature of computation self evident, but also as a way of designing new ways of learning and understanding what computers are.

If we take the code required to make the pattern in the weaving above:

(twist 3 4 5 14 15 16)
(weave-forward 3)
(twist 4 15)
(weave-forward 1)
(twist 4 8 11 15)

(repeat 2
 (weave-back 4)
 (twist 8 11)
 (weave-forward 2)
 (twist 9 10)
 (weave-forward 2)
 (twist 9 10)
 (weave-back 2)
 (twist 9 10)
 (weave-back 2)
 (twist 8 11)
 (weave-forward 4))

We can “compile” it into a binary form which describes each instruction – the exact process for this is irrelevant, but here it is anyway – an 8 bit encoding, packing instructions and data together:

8-bit instruction encoding:

Field:   Action   Direction   Count/Tablet ID (5 bit number)
Bits:    0 1      2           3 4 5 6 7

Action types:
  weave:    01 (1)
  rotate:   10 (2)
  twist:    11 (3)

Direction:
  forward:  0
  backward: 1
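
As a concrete illustration, here is a minimal Python sketch of the packing (my own code, not part of the original toolchain; the bit layout follows the table above, with the five bit count or tablet ID stored most significant bit first):

# A sketch of the 8-bit instruction packing described above:
# bits 0-1 hold the action, bit 2 the direction, bits 3-7 the count/tablet ID.

ACTIONS = {"weave": 0b01, "rotate": 0b10, "twist": 0b11}
DIRECTIONS = {"forward": 0, "backward": 1}

def pack(action, direction, value):
    """Pack one instruction into a single byte (bit 0 is the most significant)."""
    assert 0 <= value < 32              # only 5 bits are available for the count/ID
    return (ACTIONS[action] << 6) | (DIRECTIONS[direction] << 5) | value

# (weave-forward 3) packs to 01 0 00011
print(format(pack("weave", "forward", 3), "08b"))   # -> 01000011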

If we compile the code notation above with this binary system, we can then read the binary as a series of tablet weaving card flip rotations (I’m using 20 tablets, so we can fit in two instructions per weft):

0 1 6 7 10 11 15
0 1 5 7 10 11 14 15 16
0 1 4 5 6 7 10 11 13
1 6 7 10 11 15
0 1 5 7 11 17
0 1 5 10 11 14
0 1 4 6 7 10 11 14 15 16 17
0 1 2 3 4 5 6 7 11 12 15
0 1 4 10 11 14 16
1 6 10 11 14 17
0 1 4 6 11 16
0 1 4 7 10 11 14 16
1 2 6 10 11 14 17
0 1 4 6 11 12 16
0 1 4 7 10 11 14 16
1 5
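
To check a line of the listing above, here is a small sketch (again my own code) of how the packed bytes turn into tablet positions: two instructions per weft, with the first byte mapped to cards 0 to 7 and the second to cards 10 to 17, a set bit meaning that card gets turned.

def weft_flips(byte_pair):
    """List which of the 20 tablets to flip for one weft of two packed bytes."""
    cards = []
    for offset, byte in zip((0, 10), byte_pair):
        for bit in range(8):                     # bit 0 is the most significant
            if byte & (0x80 >> bit):
                cards.append(offset + bit)
    return cards

# The first weft above appears to encode (twist 3) followed by (twist 4)
print(weft_flips([0b11000011, 0b11000100]))      # -> [0, 1, 6, 7, 10, 11, 15]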

If we actually try weaving this (by advancing two turns forward/backward at a time) we get this mess:

close

The point is that (assuming we’ve made no mistakes) this weave represents *exactly* the same information as the pattern does – you could extract the program from the messy encoded weave, follow it and recreate the original pattern exactly.

The messy pattern represents both an executable and a compressed form of the weave – taking up less space than the original pattern, but looking a lot worse. Possibly this is a clue too, as it contains a higher density of information – higher entropy, and therefore closer to randomness than the pattern.

Pixel Quipu

The graphviz visualisations we’ve been using for quipu have quite a few limitations: they tend to make very large images, and there is limited control over how they are drawn. It would be better to have more of an overview of the data, while also rendering the knots in the right positions and the pendants at the right lengths.

Meet the pixelquipu!

ur018

These are drawn using a python script which reads the Harvard Quipu Database and renders the quipu structure using the correct colours. The knots are shown as a single pixel attached to the pendant, colour coded: red for a single knot, green for a long knot and blue for a figure-of-eight knot (yellow means unknown or missing). The value of the knot sets the brightness of the pixel. The colour variations for the pendants are working, but there is no difference yet between twisted and alternating colours, and twist direction is not visualised yet.
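
As a rough illustration of the mapping (this is not the actual script, and the knot record fields here are hypothetical), the colour code above amounts to something like:

# A sketch of the knot-to-pixel colour mapping described above.

KNOT_COLOURS = {
    "single":          (255, 0, 0),    # red
    "long":            (0, 255, 0),    # green
    "figure_of_eight": (0, 0, 255),    # blue
}

def knot_pixel(knot_type, value, max_value=9):
    """Return an RGB pixel for a knot: hue from the type, brightness from the value."""
    base = KNOT_COLOURS.get(knot_type, (255, 255, 0))   # yellow = unknown/missing
    scale = max(0.2, min(1.0, value / float(max_value)))
    return tuple(int(c * scale) for c in base)

print(knot_pixel("long", 5))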

hp017

Another advantage of this form of rendering is that we can draw the data entropy within the quipu in order to provide a different view of how the data is structured, as an attempt to uncover hidden complexity. This is done hierarchically, so a pendant’s entropy is calculated over its own data plus that of all its sub-pendants, which seemed most appropriate given the non-linear form that the data takes.

ur037

e-ur037
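
The hierarchical entropy can be sketched roughly like this – a simplified stand-in for the real calculation, using a hypothetical Pendant class, with Shannon entropy computed over a pendant’s knot values together with those of all its sub-pendants:

import math
from collections import Counter

class Pendant:
    """Hypothetical pendant: a list of knot values plus any sub-pendants."""
    def __init__(self, knot_values, children=()):
        self.knot_values = list(knot_values)
        self.children = list(children)

def gather(pendant):
    """Collect knot values from a pendant and, recursively, its sub-pendants."""
    values = list(pendant.knot_values)
    for child in pendant.children:
        values += gather(child)
    return values

def entropy(pendant):
    """Shannon entropy (bits) of the pendant's data plus all sub-pendant data."""
    values = gather(pendant)
    counts = Counter(values)
    total = float(len(values))
    return -sum((c / total) * math.log(c / total, 2) for c in counts.values())

example = Pendant([1, 2, 2], [Pendant([3]), Pendant([2, 5])])
print(entropy(example))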

We can now look at some quipus in more detail – what was the purpose of the red and grey striped pendants in the quipu below? They contain no knots; are they markers of some kind? This also seems to be a quipu where the knots do not follow the decimal coding pattern that we understand; they are mostly long knots of various values.

ur051

There also seems to be data stored in different kinds of structure within the same quipu – the collection of sub-pendants on the left side presumably groups data in a more hierarchical manner than the right side, which seems much more linear; a colour change also emphasises this.

ur015

Read left to right, this long quipu below seems very much like you’d expect binary data to look – some kind of header information or preamble, followed by a repeating structure with local variation. The twelve groups of eight grey pendants seem redundant – were these meant to be filled in later? Did they represent something important without containing any knots? We will probably never know.

UR1176

The original thinking behind the pixelquipu was to attempt to fit all the quipus on a single page for viewing, as it represents them with the absolute minimum of pixels required. Here are both the pendant colour and entropy renderings for all 247 quipus we have data for:

all

entropy-local

The UAV toolkit & appropriate technology

The UAV toolkit’s second project phase is now complete. The first development sprint at the start of the year was a bit of research into what we could use an average phone’s sensors for, resulting in a proof-of-concept remote sensing Android app that allowed you to visually program different scripts, which we then tested on some drones, a radio controlled plane and a kite.

photo-1444743618-504214

This time we had a specific focus on environmental agencies; working with Katie Threadgill at the Westcountry Rivers Trust has meant we’ve had to think about how this could be used by real people in an actual setting (farm advisors working with local farmers). Making something cheap, open source and easy to use, yet open ended, has been the focus – and we are now looking at providing WRT with a complete toolkit which would comprise a drone (for good weather), a kite (for bad weather/no flight licences required) and an android phone, so they don’t need to worry about destroying their own if something goes wrong. Katie has produced this excellent guide on how the app works.

The idea of appropriate technology has become an important philosophy for projects we are developing at Foam Kernow, in conjunction with unlikely connections in livecoding and our wider arts practice. For example the Sonic Bike project – where from the start we restricted the technology so that no ‘cloud’ network connections are required and all the data and hardware required has to fit on the bike – with no data “leaking” out.

photo-1444742698-352249

With the UAV toolkit, the open endedness of providing a visual programming system that works on a touchscreen results in an application that is flexible enough to be used in ways and places we can’t predict – for example in crisis situations, where power, networking or hardware is not available to set up remote sensing devices when you need them most. We are working towards a self contained system, and what I’ve found interesting is how many interface and programming ‘guidelines’ I have to bend to make this possible – open endedness is very much against the grain of contemporary software design philosophy.

The “app ecosystem” is ultimately concerned with elevator pitches – do one thing, and boil it down to the fewest actions possible to achieve it. This is not a problem in itself, but the assumption that this is the only philosophy worth considering is wrong. One recent experience that comes to mind is having to make and upload banner images of an exact size to the Play Store before it would allow me to release an important fix needed for Mongoose 2000, which is only ever intended to have 5 or 6 users.

photo-1444743271-137365

For the UAV toolkit, our future plans include stitching together photos captured on the phone and producing a single large map without the need to use any other software on a laptop. There are also interesting possibilities regarding distributed networking with bluetooth and similar radio systems – for example, sending code between phones is needed, as currently there is no way to distribute scripts amongst users. This could also be a way of creating distributed processing – controlling one phone in a remote location with another, via code sent over ad hoc wifi or SMS for example.

AI as puppetry, and rediscovering a long forgotten game.

AI in games is a hot topic at the moment, but most examples of this are attempts to create human-like behaviour in characters – a kind of advanced puppetry. These characters are also generally designed beforehand rather than reacting and learning from player behaviour, let alone being allowed to adapt in an open ended manner.

geo-6
Rocketing around the gravitational wells.

Geo was a free software game I wrote around 10 years ago which I’ve recently rediscovered. I made it a couple of years after working for William Latham’s Computer Artworks – and it was obviously influenced by that experience. At the time it was a little demanding for graphics hardware, but it turns out that in the intervening years processing power has caught up with it.

This is a game set in a large section of space inhabited by lifeforms composed of triangles, squares and pentagons. Each lifeform exerts a gravitational pull and has the ability to reproduce. Its structure is defined by a simple genetic code which is copied to its descendants with small errors, giving rise to evolution. Your role is to collect keys which orbit around gravitational wells in order to progress to the next level, which is repopulated by copies of the most successful individuals from the previous level.

A simple first generation lifeform.
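
The copy-with-errors reproduction can be pictured with a minimal sketch like this (not the actual Geo source; the genome symbols here are invented for illustration):

import random

# Hypothetical genome: a string of symbols describing the lifeform's structure.
SYMBOLS = "TSP+-"   # e.g. triangle, square, pentagon plus some growth operators

def reproduce(genome, error_rate=0.02):
    """Copy a genome to a descendant, introducing occasional small errors."""
    child = []
    for gene in genome:
        if random.random() < error_rate:
            gene = random.choice(SYMBOLS)   # a copying mistake: mutation
        child.append(gene)
    return "".join(child)

parent = "TTSP+TS-P"
print(reproduce(parent))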

Each game starts with a random population, so the first couple of levels are generally quite simple, mostly populated by dormant or self destructive species – but after 4 or 5 generations the lifeforms start to reproduce, and by level 10 a phenotype (or species) will generally have emerged to become a highly invasive conqueror of space. It becomes a race against the clock to find all the keys before the gravitational effects are too much for your ship’s engines to escape, or for your weapons to ‘prune’ the structure’s growth.

I’ve used similar evolutionary strategies in much more recent games, but they’ve required much more effort to get the evolution working (49,000 players have now contributed to egglab’s camouflage evolution for example).

A well defended 'globular' colony – a common phenotype to evolve.

What I like about this form of more humble AI (or artificial life) is that instead of a program trying to imitate another lifeform, it really just represents itself – challenging you to exist in its consistent but utterly alien world. I’ve always wondered why the dominant post-human theme of sentient AI is a supercomputer deliberately designed, usually by a millionaire or a huge company. It seems to me far more likely that some form of life will arise – perhaps it even already exists – in the wild variety of online spambots and malware mainly talking to themselves, and that it will go unnoticed by us, probably forever. We had a little indication of this when the facebook bots in the Naked on Pluto game started having autonomous conversations with other online spambots on their blog.

A densely packed 'crystalline' colony structure.

Less speculatively, what I’ve enjoyed most about playing this simple game is exploring and attempting to shape the possibilities of the artificial life, while observing and categorising the common solutions that emerge during separate games – cases of parallel evolution. I’ve tweaked the between-levels fitness function a little, but most of the evolution tends to occur ‘darwinistically’ while you are playing: simply put, the lifeforms that reproduce most effectively survive.


An efficient and highly structured 'solar array' phenotype which I’ve seen emerge twice with different genotypes.

You can get the updated source here; it only requires GLUT and ALUT (a cross platform audio API). At one time it compiled on Windows, and it should build on OSX quite easily – I may distribute binaries at some point if I get time.

geo-5
A ‘block grid’ phenotype which is also common.

Easter Python/Minecraft programming day at dbsCode

Thursday saw our second dbsCode Easter programming taster, and like last year we focused on minecraft programming with our procedural architecture api.

The main change this time was that for the 20 participants (aged 11-16) we doubled our teachers to four (Glen Pike, Francesca Sargent, Matthew Dodkins and me), plus a couple of interested parents helped us out too. This meant that the day was much more relaxed, and we noticed they were engaged with the programming for a much higher proportion of the time. Another factor was that we went straight into coding, as none of them needed an introduction to minecraft this year. I think one of the biggest strengths of this kind of learning is that they are able to easily switch between playful interaction (jumping into each other’s worlds, building stuff the normal way) and programming. This means there is low pressure, which I think makes it more of a self driven activity, as well as making a long (4 hour) workshop possible.

Here are some screenshots of their creations – this was a melon palace created in a world that had somehow become sliced apart:

minecraftDay4

Inside the melon palace, the waterfall pulsed with a while loop and sleeps that altered the water source blocks.

minecraftDay5
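
The pulsing waterfall was done in this spirit; a minimal sketch of the idea, using the standard mcpi Python API rather than our architecture api, and with made-up coordinates, might look like this:

# A minimal sketch of a pulsing waterfall: a while loop toggling water source
# blocks with sleeps in between. Uses the standard mcpi API; the coordinates
# are made up for illustration.
import time
from mcpi.minecraft import Minecraft
from mcpi import block

mc = Minecraft.create()
x, y, z = 10, 30, 10          # hypothetical position at the top of the waterfall

while True:
    for col in range(5):
        mc.setBlock(x + col, y, z, block.WATER_STATIONARY.id)   # turn the water on
    time.sleep(2)
    for col in range(5):
        mc.setBlock(x + col, y, z, block.AIR.id)                # turn it off again
    time.sleep(2)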

As last year there was a lot of mixing of activities, using code to create big shapes and then editing them manually for the finer details:

minecraftDay3

IMG_20150402_142138

Here, a huge block of water inexplicably cuts through the scenery:

IMG_20150402_143112

And two houses that became merged together and were then filled with bookshelves and other homely items:

IMG_20150402_143000

Neural Network livecoding and retrofitting ZX Spectrum hardware

An experimental, and quite angry neural network livecoding synth (with an audio ‘weave’ visualisation) for the ZX Spectrum: source code and TZX file (for emulators). It’s a bit hard to make out in the video, but you can move around the 48 neurons and modify their synapses and trigger levels. There are two clock inputs and the audio output is the purple neuron at the bottom left. It allows recurrent loops as a form of memory, and some quite strange things are possible. The keys are:

  • w,d: move diagonally north west <-> south east
  • s,e: move diagonally south west <-> north east
  • t,y,g,h: toggle incoming synapse connections for the current neuron
  • space: change the ‘threshold’ of the current neuron (bit shifts left)
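
The neuron update itself is very simple; here is a rough Python illustration of the idea (the real thing is Z80 code on the Spectrum, so this is only a sketch of the integrate-and-fire rule and the bit shifted threshold, not the actual implementation):

# A rough illustration of the neuron update: sum the outputs of connected
# neurons and fire when the total reaches the threshold.

class Neuron:
    def __init__(self, threshold=1):
        self.threshold = threshold
        self.inputs = []        # indices of the neurons this one listens to
        self.state = 0          # 1 when it fired on the last update

    def shift_threshold(self):
        """The 'space' key: bit shift the threshold left (wrapping back to 1)."""
        self.threshold = (self.threshold << 1) & 0xff or 1

def update(neurons):
    """One synchronous update step over the whole network."""
    totals = [sum(neurons[i].state for i in n.inputs) for n in neurons]
    for n, t in zip(neurons, totals):
        n.state = 1 if t >= n.threshold else 0

# Two neurons wired in a recurrent loop act as a simple form of memory.
a, b = Neuron(), Neuron()
a.inputs, b.inputs = [1], [0]
a.state = 1                      # kick the loop off
for _ in range(4):
    update([a, b])
    print(a.state, b.state)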

This audio should load up on a real ZX Spectrum:

One of the nice things about tech like this is that it’s easily hackable – this is a modification to the video output better explained here, but you can get a standard analogue video signal by connecting the internal feed directly to the plug and detaching the TV signal de-modulator with a tiny bit of soldering. Look at all those discrete components!

IMG_20140809_125815

Raspberry Pi: Built for graphics livecoding

I’m working on a top secret project for Sam Aaron of Meta-eX fame involving the Raspberry Pi, and at the same time thinking about my upcoming CodeClub lessons this term – we have a bunch of new Raspberry Pis to use, and the kids are at the point where they want to move on from Scratch.

This is a screenshot of the same procedural landscape demo that previously ran on Android/OUYA, now running on the Raspberry Pi, with mangled texture colours and a cube added via a new livecoding repl:

IMG_20140108_232857

Based on my previous experiments, this program uses the Raspberry Pi’s GPU (the VideoCore IV part of the BCM2835). It’s fast, it allows compositing on top of whatever else you are running at the time, and you can run it without X windows to free up more CPU and memory – sounds like a great graphics livecoding GPU to me!

Here’s a close up of the nice dithering on the texture – not sure yet why the colours are so different from the OUYA version, perhaps a dodgy blend mode or a PNG format reading difference:

IMG_20140108_232914

The code is here (bit of a mess, I’m in the process of cleaning it all up). You can build in the jni folder by calling “scons TARGET=RPI”. This is another attempt – looks like my objects are inside out:

IMG_20140109_004111

Project Nightjar: Camouflage data visualisation and possible internet robot predators

We’ve had tens of thousands of people spotting nightjars and donating a bit of their time to sensory ecology research. The result of this (of course it’s still on-going, along with the new nest spotting game) is a 20Mb database with hundreds of thousands of clicks recorded. One of the things we were interested in was seeing what people were mistaking for the birds – so I had a go at visualising all the clicks over the images (these are all cropped parts of the image, as it really doesn’t compress well):

clicks-ReflectanceCF017Vrgb0.54-crop

clicks-ReflectanceCF022Vrgb0.52-crop
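
The overlaying itself is simple; a sketch of the approach (with hypothetical CSV column names and filenames, and matplotlib standing in for whatever the original script used) would be:

# A sketch of overlaying recorded clicks on one of the experiment images.
# The CSV columns (image, x, y) and filenames are hypothetical.
import csv
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

image = mpimg.imread("ReflectanceCF017Vrgb0.54.png")   # hypothetical filename
xs, ys = [], []
with open("clicks.csv") as f:                           # hypothetical click log
    for row in csv.DictReader(f):
        if row["image"] == "ReflectanceCF017Vrgb0.54":
            xs.append(float(row["x"]))
            ys.append(float(row["y"]))

plt.imshow(image)
plt.scatter(xs, ys, s=2, c="red", alpha=0.3)   # each click as a faint red dot
plt.axis("off")
plt.savefig("clicks-overlay.png", dpi=300)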

Then, looking through the results, I saw a strange artefact:

clicks-ReflectanceCP018VRgb0.55-crop

Uncompressed high res version

My first thought was that someone had tried cheating with a script, but I can hardly imagine anyone going to the bother, and it’s only in one image. Perhaps it was some form of bot or scraping software agent – I thought browser click automation was done by directly interpreting the web page? Perhaps it’s a fallback for HTML5 canvas elements?

It turns out it’s a single player (playing as a monkey, aged 16 to 35, who had played before) – so it’s easy enough to filter away, but in doing that I noticed the click order was not as regular as it looked, and it goes a bit wobbly in the middle:

Someone with very precise mouse skills then? :)

Open Sauces

Open Sauces is a FoAM project to investigate the sharing of food, food culture and food systems. Last week in Brussels we started experimenting with different ways to store, display and reason about recipes. Taking the recipes from the Open Sauces book, we’re representing them as Petri nets, which means we can feed them into various different visualisations, from Scheme Bricks – taken from Naked on Pluto’s gallery installation projection:

os1

To a brand new circular representation:

10533207625_86d0291d08_b

These structures are filtered somewhat to be more readable than the raw petri nets, which can be rendered via graphviz for debugging:

test
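
To give an idea of the representation, here is a toy sketch of a recipe fragment as a Petri net (an invented step, and not the code from the actual system): places hold ingredients or intermediate states, transitions are the cooking actions, and the raw net can be dumped as graphviz dot for debugging:

# A toy recipe fragment as a Petri net: places hold ingredients or intermediate
# states, transitions are cooking actions. The recipe step is invented.

places = ["flour", "water", "dough", "bread"]
transitions = {
    "mix":  (["flour", "water"], ["dough"]),   # (input places, output places)
    "bake": (["dough"], ["bread"]),
}

def to_dot(places, transitions):
    """Dump the raw net as graphviz dot, for debugging."""
    lines = ["digraph recipe {"]
    for p in places:
        lines.append('  "%s" [shape=circle];' % p)
    for name, (ins, outs) in transitions.items():
        lines.append('  "%s" [shape=box];' % name)
        for p in ins:
            lines.append('  "%s" -> "%s";' % (p, name))
        for p in outs:
            lines.append('  "%s" -> "%s";' % (name, p))
    lines.append("}")
    return "\n".join(lines)

print(to_dot(places, transitions))   # pipe into `dot -Tpng` to render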