It’s still slow. I’m aware of the limitations and background of browser tech, but it’s depressing that drawing a few hundred sprites is somehow problematic in 2012, while my ancient Gameboy DS can manage quite fine thankyouverymuch :)
Having to write your own mouse event detection for sprite down/up/over/out. I guess this is one of the biggest reasons for the explosion of HTML5 game engines – it’s quite non-trivial and un-fun to set up properly. Even getting the real mouse coordinates relative to the canvas is quite hard to do.
On the plus side, it’s a breath of fresh air to get the control of immediate mode back – historically this seems to be the optimal approach. Give us the render loop!
Now you’ve got this far – here is some accidental art which happened this morning, and which I’m particularly proud of:
The Jellyfish VM is now running in fluxus on android – a kind of executable rendering primitive which defines its form (a 3D triangle list) in the same place as its behaviour (a vector processing program), in the primitive’s data array.
This provides a 3D procedural modelling VM loosely inspired by the Playstation 2 which is much faster than interpreting scheme (or Java) on ARM devices like android. There are also possibilities for genetic programming or touch based programming interfaces just like Betablocker DS (think 3D animation rather than music).
Triangles are written by dereferencing a register that defines the start of the model data, which exists in program memory. The VM can be programmed with raw vectors, one per instruction, and right now it runs for 10 cycles each frame, although this will be configurable “in the future”. Here is a simple program that was used to make the screenshots:
(define jelly (build-jellyfish))

(with-primitive
 jelly
 (program-jellyfish
  (list
   ; data
   (vector 0 0 0)         ; time (increases by 1 each loop)
   (vector 2 2 -3)        ; shuffle data for converting (x y z) -> (z z x)
   ; code follows to build a vertex by rotation around an angle based on time
   (vector LDA 0 0)       ; load current time from address 0
   (vector LDL 135.3 0)   ; load value 135.3 (angle in degrees)
   (vector MUL 0 0)       ; multiply time by angle
   (vector SIN 0 0)       ; makes (sin(angle) cos(angle) 0)
   ; make a spiral by scaling up with time
   (vector LDA 0 0)       ; load time again
   (vector LDL 0.05 0)    ; load 0.05
   (vector MUL 0 0)       ; multiply to get time*0.05
   (vector MUL 0 0)       ; mul rotation vector by time*0.05
   ; move backward in z so we get some depth
   (vector LDA 0 0)       ; load the time again
   (vector LDL 0.03 0)    ; load 0.03
   (vector MUL 0 0)       ; multiply the time by 0.03
   (vector LDA 1 0)       ; load the shuffle vec from address 1
   (vector SHF 0 0)       ; shuffle the x to z position
   (vector ADD 0 0)       ; add (0 0 x) to set z on current position
   (vector STI 0 REG_MDL) ; write position to model memory registers
   ; increment the index by 1
   (vector LDA 0 0)       ; load address
   (vector LDL 1 0)       ; load inc
   (vector ADD 0 0)       ; add them together
   (vector STA 0 0)       ; store at address loc
   (vector JMP 2 0))))    ; goto 2
This is the emotional colour map for Germination X. It’s a 22 by 8 sized image, with 8 colours for each OCC emotion (those we haven’t done yet are white).
From left to right, these represent: LOVE, HATE, HOPE, FEAR, SATISFACTION, RELIEF, FEARS_CONFIRMED, DISAPPOINTMENT, JOY, DISTRESS, HAPPY_FOR, PITY, RESENTMENT, GLOATING, PRIDE, SHAME, GRATIFICATION, REMORSE, ADMIRATION, REPROACH, GRATITUDE, ANGER.
It’s primarily based on a set of colours Lina has been working on. She found it useful to group the emotions into positive and negative sets. Something I think is significant is that these colours were made with the colours of the plants and the general feel of the game in mind. I think the main problem I found with this exercise was a lack of context – perhaps thinking too abstractly about this is a bad idea.
I made this chart quickly, deliberately avoiding thinking about potential reasons for my choices. Interestingly I found I needed to start with the motion curves first (which describe how the colours are blended over time) and these seemed much less arbitrary than the actual colours I chose. The reason could be that there is a closer connection between movement (however simple) and gestural expressions.
First create a program and choose the colours:
Rule: O => : O O :
Repeat 5 generations
: = Green
O = Orange
Then “compile” the code to expand the pattern:
2= : O O :
3= : : O O : : O O : :
4= : : : O O : : O O : : : : O O : : O O : : :
5= : : : : O O : : O O : : : : O O : : O O : : : : : : O O : : O O : : : : O O : : O O : : : :
The resulting weave can be previewed in ASCII as described earlier. This was useful to discover the range of patterns possible. When done, install the compiled code on the loom as a warp thread colour sequence.
Then you follow the same sequence for the weft threads, choosing the colour based on the code generated. In this way you become a mere part of the process yourself.
This is another, more complex program with two rules. It expands rather more quickly than the last one, so only three generations are required:
Rule 1: O => O : O :
Rule 2: : => : O :
Run for 3 generations
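Sketched in Python (an illustrative port, not the original workshop code, which was done by hand and in scheme), with each rule mapping a symbol to the list of symbols that replaces it, so the patterns print the way they are written above:

```python
# Illustrative sketch of the rewriting process above. Rules map a symbol
# to the list of symbols that replaces it; symbols without a rule are
# kept as they are.

def expand(axiom, rules, steps):
    pattern = list(axiom)
    for _ in range(steps):
        next_pattern = []
        for symbol in pattern:
            next_pattern.extend(rules.get(symbol, [symbol]))
        pattern = next_pattern
    return pattern

rules = {"O": ["O", ":", "O", ":"],   # rule 1: O => O : O :
         ":": [":", "O", ":"]}        # rule 2: : => : O :

# generation 1 is the axiom O, so generation 3 is two rewriting steps
print(" ".join(expand("O", rules, 2)))  # -> O : O : : O : O : O : : O :
```

The same function covers the single-rule program from before by leaving ":" out of the rule table.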
This technique draws comparisons with Jacquard looms, but it’s obviously far simpler: the weave structure itself stays the same, and we are only switching between two colours (and a human is very much required to do the weaving in this case). However, one of the activities I would have tried with more time available would have been reverse engineering Jacquard-woven fabric – attempting to decode the rules used.
During the workshop it was also suggested that a woven quine may be possible – where the pattern somehow contains the instructions for its own manufacture.
For my contribution to the Mathematickal Arts workshop, I wanted to explore weaving, specifically plain weave. This is the simplest form of weaving, but when combined with sequences of colour it can produce many different types of pattern.
Some of these patterns when combined with muted colours, have in the past been used as a type of camouflage – and are classified into District Checks for use in hunting in Lowland Scotland. This is, I guess, a kind of less prestigious form of Tartan.
I started off by trying to understand how the patterns emerge, beginning with the basic structure:
The threads running top to bottom are the warp, those running across are the weft. If we consider the topmost thread as visible, we can figure out the colours of this small section of weave. A few lines of scheme calculate and print the colours of an arbitrarily sized weave, taking lists of warp and weft colours as input.
; return warp or weft, dependent on the position
(define (stitch x y warp weft)
  (if (eq? (modulo x 2) (modulo y 2))
      warp weft))

; prints out a weaving
(define (weave warp weft)
  (for ((x (in-range 0 (length weft))))
    (for ((y (in-range 0 (length warp))))
      (display (stitch x y (list-ref warp y) (list-ref weft x))))
    (newline)))
I’ve been visualising the weaves with single characters representing colours for ASCII previewing; here are some examples:
(weave '(O O O O O O O) '(: : : : : : : : :))
=>
O : O : O : O
: O : O : O :
O : O : O : O
: O : O : O :
O : O : O : O
: O : O : O :
O : O : O : O
: O : O : O :
O : O : O : O

(weave '(O O : : O O : : O O) '(O : : O O : : O O :))
=>
O O : O O O : O O O
: O : : : O : : : O
O : : : O : : : O :
O O O : O O O : O O
O O : O O O : O O O
: O : : : O : : : O
O : : : O : : : O :
O O O : O O O : O O
O O : O O O : O O O
: O : : : O : : : O
This looked quite promising as ascii art, but I didn’t really know how it would translate into a textile. I also wanted to look into ways of generating new patterns algorithmically, using formal grammars – this was actually one of the simpler parts of the project. The idea is that you begin with an axiom, or starting state, and do a search replace on it repeatedly following one or more simple rules:
Axiom: O
Rule: O -> :O:O

Generation 1: O
Generation 2: :O:O
Generation 3: ::O:O::O:O
Generation 4: :::O:O::O:O:::O:O::O:O
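As a sketch, the search and replace is only a few lines of Python (illustrative only – this isn’t the code actually used for the project):

```python
# Illustrative sketch of the repeated search-replace. Each generation
# replaces every symbol that has a rule with its replacement string;
# ":" has no rule here, so it is left alone.

def expand(axiom, rule, generations):
    pattern = axiom
    for _ in range(generations):
        pattern = "".join(rule.get(symbol, symbol) for symbol in pattern)
    return pattern

print(expand("O", {"O": ":O:O"}, 3))  # -> :::O:O::O:O:::O:O::O:O
```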
We can then generate quite complex patterns for the warp and the weft from very small instructions. Next I’ll show what happens when some of these are translated into real woven fabric…
Getting stuff to work on PS2 wasn’t quite as easy as I probably made it sound in the last homebrew post. The problem with loading code from a USB stick is that there is no way to debug anything – no remote debugging, no stdout, not even any way to render text unless you write your own program to do that.
The trick is to use the fact that we are rendering a CRT TV signal and that you can control what gets rendered in the overscan area (think 8bit loading screens). There is a register which directly sets the background colour of the scanline – this macro is all you need:
#define gs_p_bgcolor 0x120000e0 // Set CRTC background color

#define GS_SET_BGCOLOR(r,g,b) \
    *(volatile unsigned long *)gs_p_bgcolor = \
        (unsigned long)((r) & 0x000000FF) << 0 | \
        (unsigned long)((g) & 0x000000FF) << 8 | \
        (unsigned long)((b) & 0x000000FF) << 16
Which you can use to set the background to green for example:
It’s a good idea to change this colour at different points in your program. When you get a crash, the colour the border is frozen with tells you which area the code was last in, allowing you to track down errors.
There is also a nice side effect that this provides a visual profile of your code at the same time. Rendering is synced to the vertical blank – when the CRT beam shoots back to the top of the screen a new frame is started and you have a 50th of a second (PAL) to get everything done. In the screenshot below you can see how the frame time breaks down rendering 9 animated primitives – and why it might be a good idea to use some of these other processors:
Some LED bike light tracking using a particle filter:
I’ve been invited to do a lecture for Markku Nousiainen’s experimental course on computational photography next week, so I’ve been constructing some demos that display different computer vision algorithms based on the work I’ve been doing on Lirec. The idea is that they may serve as inspiration for how algorithmic understanding of images can be used for artistic purposes.
The code is part of the mysterious magic squares library.