
Re-interpreting history

A script for sniffing bits of supercollider code being broadcast as livecoding history over a network and re-interpreting them as objects in fluxus, written during an excellent workshop by Alberto de Campo and Julian Rohrhuber at /*VIVO*/ Mexico City.

(osc-source "57120")

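;; turn a code string into a list of character values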
(define (stringle str)
    (map
        char->integer
        (string->list str)))


;;(osc-destination "osc.udp:255.255.255.255:57120")
;;(osc-send "/vivo" "s" '("fluxus:hola"))

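;; index into a list, wrapping round the end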
(define (safe l n)
    (list-ref l (modulo n (length l))))

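;; use the character values of the incoming code string to set the scale, rotation and colour of a torus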
(define (render arg)
    (let ((l (map (lambda (t) (/ t 255)) (stringle arg))))
        (with-state
            (scale (vector (safe l 0)
                    (safe l 1)
                    (safe l 2)))
            (rotate (vmul (vector (safe l 32)
                    (safe l 12)
                    (safe l 30)) 360))
            (colour (vector (safe l 0)
                    (safe l 3)
                    (safe l 4)))

            (build-torus 0.1 1 4 20))))

(clear)
(scale 2)
;;(render "hello 343 323")

(every-frame 
    (begin
        (when (osc-msg "/hist")
            (printf "~a~n" (osc 1))
            (when (osc 1)
                (render (osc 1))))))

Mexican livecoding style

At only around 2 years old, the Mexican livecoding scene is pretty advanced. Here are images of (I think) all of the performances at /*vivo*/ (Simposio Internacional de Música y Código 2012) in Mexico City, which included lots of processing, fluxus, pure data and ATMEL processor bithop along with supercollider and plenty of non-digital techniques too. The from-scratch technique is considered important in Mexico, with most performances using this creative restriction to great effect. My comments below are firmly biased in favour of fluxus, not considering myself knowledgeable enough for thorough examinations of supercollider usage. Also there are probably mistakes and misappropriations – let me know!

Hernani Villaseñor, Julio Zaldívar (M0M0) – A performance of contrasts between Julio’s C coded 8bit-shifting ATMEL sounds and Hernani’s from scratch supercollider scripts, both building up in intensity through the performance, a great opener. A side effect of Julio using avrdude to upload code was the periodic sonification of bytecode as it spilled into the digital to analogue converter during uploads. He was also using an oscilloscope to visualise the sound output, some of the code clearly designed for the visuals as well as the crunchy sounds.

Mitzi Olvera and Alejandro Franco – I’d been aware of Mitzi’s work for a while from her fluxus videos online so it was great to see this performance. She made good use of the fluxus immediate mode primitives, starting off by restricting them to points mode only while building up a complex set of recursive patterns, and switching render hints to break the performance down into distinct sections. She neatly transitioned from the initial hard lines and shapes all the way to softened transparent clouds. Meanwhile Alejandro built up the mix and blasted us with Karplus Strong synthesis, eventually forcing scserver to its knees by flooding it with silent events.

Julian Rohrhuber, Alberto de Campo – A good chunk of powerbooks unplugged (plugged in) from Julian and Alberto, starting with a short improvisation before switching to a full composition explored within the republic framework, sharing code and blending their identities.

Martín Zumaya (Stereo Vision), José Carlos Hasbun (joseCaos) – It was good to see Processing in use for livecoding, and Martín improvised a broad range of material before concentrating on iconic minimal constructions that matched well with José’s sounds – a steady build up of dark poly-rhythmic beats with some crazy feedback filtering mapped to the mouse coordinates to keep things fluid and unpredictable.

IOhannes Zmölnig – pure data morse code livecoded in Braille. This was an experiment based on his talk earlier that day, a study in making the code as hard to read for the performer as the audience. In fact the resulting effect was beautiful, ending with the self modification of position and structure that IOhannes is famous for – leaving a very consistent audio/visual link to the driving monotonic morse bass, bleeps and white noise.

Radiad3or (Jaime Lobato, Alberto Cerro, Fernando Lomelí, Iván Esquinca and Mauro Herrera) – part 1 was human instruction, analogue performance as well as a comment on the inadequacy of livecoding for a computer, with commands like “changeTimbre” for the performers to interpret using their voices, a drumkit, flutes and a didgeridoo. Following this, part 2 was about driving the computer with these sounds, inverting it into a position alongside or following the performers rather than a mediator, being reprogrammed by the music. This performance pushed the concept of livecoding to new levels, leaving us in the dust still coming to terms with what we were trying to do in the first place!

Benoît and the Mandelbrots (live from Karlsruhe) – a remote performance from Germany, the Mandelbrots dispatched layers upon layers of synthesised texture, along with their trademark in-performance text chat, a kind of code unto itself and a view into their collective mind. The time lag issues involved with remote streaming, not knowing what/when they could see of us, added an element to this performance all of its own. As did the surprise appearance of various troublemakers in the live video stream…

Jorge Ramírez – another remote performance, this time from Beijing, China. Part grimy glitch and part sonification of firewalls and the effects of imagined or real monitoring and censorship algorithms, this was powerful, and included more temporal disparity – this time caused by the sound arriving some time before the code that described it.

Si, si, si (Ernesto Romero Mariscal Guasp and Luciana Renner Maceralli) – a narrative combination of Luciana’s performance art, tiny webcam augmented theatre sets, and Ernesto’s supercollider soundtrack. Livecoding hasn’t ventured into storytelling much yet, and this performance indicated that it should. Luciana’s inventive use of projection with liquids and transparent fibres reminded me of the early days of film effects and was a counterpoint to Ernesto’s synthesised ambience and storytelling audio.

Luis Navarro, Emilio Ocelotl – ambitious stuff this – dark dubsteppy sounds from Emilio, driving parameters of a from-scratch fluxus sierpinski fractal exploration from Luis. Similar to Mitzi’s performance, Luis limited his scene to immediate mode primitives, a ternary tree recursion forming the basis for constantly morphing structures.

Alexandra Cárdenas, Eduardo Obieta – Something very exciting I noticed was a tendency when working in sound/visual pairs such as Alexandra and Eduardo for the sounds to be designed with the visuals in mind – e.g. the use of contrasting frequencies that could be picked out well by fft algorithms. This demonstrated a good mutual understanding, as well as a challenge to the normal DJ/VJ hierarchy. Eduardo fully exercised the NURBS primitive (I remember it would hardly render at 10fps when I first added it to fluxus!) exploding it to the sound input before unleashing the self-test script to end the performance in style!

Eduardo Meléndez – one of the original Mexican livecoders, programming audio and visuals at the same time! Not only that – but text (supercollider) and visual programming (vvvv) in one performance too. I would have liked to have paid closer attention to this one, but I was a bit nervous.

Slub finished off the performances, but I’ll write more about that soon as material comes in (I didn’t have time to take any photos!).

Making time

Time, the ever baffling one directional mystery. A lot of it has been spent between the members of slub on ways to synchronise multiple machines to share a simple beat, sometimes attempting industrial strength solutions, but somehow the longest standing approach we always come back to for our various ad-hoc software remains a single osc message. This is the kind of thing that normally seems to involve stressed pre-performance hacking, so after having to rewrite it for temporal recursion I thought I should get it down here for future reference!

The message is called “/sync” and contains two floating point values, the first the number of beats in a “bar” (which is legacy, we don’t use this now) and then the current beats per minute. The time the message is sent is considered to be the start of the beat. A sync message comes into my system via a daemon called syncup. All this really does is attach a timestamp to the sync message recording what the local time on my machine was when it arrived, and sends it on to fluxus. Shared timestamps would be better, but don’t make any sense without a shared clock, and they seem fragile to our demands. The daemon polls on a fairly tight loop (100ms) and the resulting timestamp seems accurate enough for our ears (fluxus runs on the frame refresh rate which is too variable for this job).
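
Just to show the shape of the thing, here is a rough sketch of the syncup idea in fluxus scheme. The real daemon runs outside fluxus on its own 100ms polling loop, and the port numbers, the “/sync-stamped” message name and the zero-indexed (osc n) arguments are all assumptions for illustration:

;; sketch only: listen for /sync and forward it with a local arrival time attached
(osc-source "57121")                        ;; assumed incoming port
(osc-destination "osc.udp:localhost:57122") ;; assumed port fluxus listens on

(define (forward-sync)
    (when (osc-msg "/sync")
        ;; beats per bar, bpm, plus the local time the message arrived
        (osc-send "/sync-stamped" "fff"
            (list (osc 0) (osc 1) (time-now)))))

(every-frame (forward-sync))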

So now we have a new sync message which includes a timestamp for the beat start. The first thing the system does is to assume this is in the past, and that the current time has already moved ahead. There are three points of time involved: the sync time, the current “logical” time and the beat times projected forwards from the sync.

From the sync time (in the past) and the bpm we can calculate the beat times into the future. We have a “logical time” which is initialised with the current time from the system clock, a safety margin added, and then gets “snapped” to the nearest beat. The safety margin is needed as the synth graph build and play messages coming from fluxus need to be early enough to get scheduled by fluxa’s synth engine to play with sample accuracy.
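
As a minimal sketch of that initialisation step (the margin value here is just a guess, and snap-time-to-sync is the beat snapping function given below):

;; sketch only: (time-now) is fluxa's clock, the safety margin is a made-up value
(define safety-margin 0.1)
(define logical-time (snap-time-to-sync (+ (time-now) safety-margin)))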

The beat snapping has to be able to move back in time as well as forwards, for tiny adjustments from the sync messages (as they never come in exactly when they are expected) otherwise we skip beats. The algorithm to do this is as follows:

(define (snap-time-to-sync time)
  (+ time (calc-offset time last-sync-time (* (/ 1 bpm) 60)))) 

(define (calc-offset time-now sync-time beat-dur)
  ;; find the difference in terms of tempo
  (let* ((diff (/ (- sync-time time-now) beat-dur))
         ;; get the fractional remainder (doesn't matter how
         ;; far in the past or future the synctime is)
         (fract (- diff (floor diff))))
    ;; do the snapping
    (if (< fract 0.5) 
        ;; need to jump forwards - convert back into seconds
        (* fract beat-dur) 
        ;; the beat is behind us, so go backwards
        (- (* (- 1 fract) beat-dur)))))
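
To make the snapping concrete, here is a made-up worked example. With a bpm of 120 the beat duration is 0.5 seconds, and if the last sync arrived at 8.2 seconds then beats fall at 8.2, 8.7 … 9.7, 10.2 and so on:

;; invented times in seconds, beat-dur of 0.5 (120 bpm)
(calc-offset 10.0 8.2 0.5) ;; => roughly 0.2, snap forwards to the beat at 10.2
(calc-offset 10.0 8.3 0.5) ;; => roughly -0.2, the beat at 9.8 is closer so snap back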

The last thing needed is a global sync offset, which I add to the incoming message timestamps at the start of the process – this has to be tuned by ear, and accounts for the fact that the latency between the synth playing a note and the speakers moving air seems to vary between machines depending on many uncertain factors – sound card parameters, battery vs ac power, sound system setup, colour of your backdrop etc.

Other than this we tend to keep the networking tech to a minimum and use our ears and scribbled drawn scores (sometimes made from stones) to share any other musical data.

Temporal recursion

Slub have a number of important livecoding transmissions coming up (including a performance at the Mozilla Festival!) so it’s time to work on fluxus/fluxa/scheme bricks. Here are some recording tests of a feature I’ve been wanting to use for a long time – temporal recursion.

These recordings were not changed by hand as they played, but started and left running in ‘generative audio’ mode in order to try and understand the technique. This method of sequencing is inspired by Impromptu, which uses a similar idea. In fluxa it’s all based around a single new function, “in”, which schedules a call to a function – which can be the current one (this is different from the existing ‘timed tasks’ in fluxus, which are less precise for this kind of sequencing).

(define (tick time a)
    (play (+ time 3) (sample "ga.wav" (note (pick '(40 42 45) a))))
    (in time 0.22 tick (+ a 1)))

(in (time-now) 1 tick 0)

The “in” function takes the current time, the time to wait before the call, the function to call and its parameters. In the example above the argument “a” gets incremented each time, resulting in a sequence of notes being played. Recursion generally brings up thoughts of self similarity and fractal patterns – as in the graphical use of recursion in fluxus – but here it’s better to imagine a graph of function calls. Each function can branch to an arbitrary number of others, so limitations have to be put in place to stop the thing exploding with too many concurrent calls. What seems to happen (even with small function call graphs) is the appearance of high level time structures – state changes and shifts into different modes where different patterns of calls lock into sequence. You can hear this clearly in the second recording above, which alters itself half way through.
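
As a small invented example of such a graph (the sample names and numbers here are made up, not taken from the recordings above), two functions can branch into each other so the call graph contains a loop:

(define (kick time a)
    (play (+ time 3) (sample "kick.wav" (note 30))) ;; kick.wav is a made-up sample
    (if (> a 7)
        (in time 0.5 hat 0)           ;; branch to the other function
        (in time 0.5 kick (+ a 1))))  ;; or keep calling ourselves

(define (hat time a)
    (play (+ time 3) (sample "hat.wav" (note 50))) ;; hat.wav is a made-up sample
    (if (> a 3)
        (in time 0.25 kick 0)
        (in time 0.25 hat (+ a 1))))

(in (time-now) 1 kick 0)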

I’ve also experimented with visualising the call graph, with limited success with this more complex example – the round nodes are functions, the boxes parameter changes and labels on the connections are the branch conditions:

(require fluxus-018/fluxa)

(searchpath "/home/dave/noiz/nm/")
(define n '(23 30))
(set-scale '(1 1 2 1))

(define (za time a b)
    (play (+ time 3) (mul (mul (adsr 0 0.1 1 1) (pick '(0.4 0.1 1 0.4 1) a)) 
        (sample "ga.wav" (note (- (modulo (- b a) 17) (* (pick n a) 2))))) -0.5)
    (when (zero? (modulo a 16)) (in time 0.3 zm a b))
    (if (eq? (modulo a 14) 12)
        (in time 1.0 zoom a (- b 1))
        (in time (- 0.5 (* (modulo a 13) 0.03)) za (+ a 1) (+ b 2))))

(define (zm time a b)
    (play (+ time 3) (mul (mul (adsr 0 0.1 1 1) (pick '(0.1 0.4 1) a))
        (sample "ga.wav" (note (+ (modulo b 5) (/ (pick n a) 2))))) 0.5)
    (if (> a 12)
        (in time 1.0 za b a)
        (in time (pick '(1.3 1.5 0.5) a) zm (+ a 1) b)))

(define (zoom time a b)
    (play (+ time 3) (mul (mul (adsr 0 0.1 1 1) (pick '(0.1 0.2 0.3) a)) 
        (sample "ga.wav" (note (+ (pick n a) (* 3 (modulo a (+ 1 (modulo b 5)))))))))
    (if (> a 16) 
        (in time 0.3 zm 0 (+ b 1))
        (in time (pick '(1.3 0.12) a) zoom (+ a 1) b)))

(in (time-now) 1 zoom 0 1)

Shaving yaks on android

fluxus installed on the emulator

Mostly my android experience has been good so far, it’s been very quick to get things running and be fairly productive. It doesn’t come without its share of yak shaving though. I’ve spent quite a lot of time trying to get remote debugging of native code working with my phone (an HTC Desire) with little success. This seems to be a fairly well documented problem, the symptoms are messages like this when running ndk-gdb:

ERROR: Could not setup network redirection to gdbserver?
Maybe using --port= to use a different TCP port might help?

When run with --verbose it seems something a little different is wrong:

run-as: Package 'am.fo.nebogeo.fluxus' has corrupt installation

This looks like a result of having a rooted phone (which I do), as ndk-gdb is a possible attack vector for doing nasty things and is therefore very critical of the permissions of the directories on the system. In order to fix this I installed ROM Manager from the market, which contains a script to fix permissions. Pressing the button in the app didn’t work at first, so after some poking around I found the shell script at: /data/data/com.koushikdutta.rommanager/files/fix_permissions

It turned out that this script depended on an interesting utility called busybox, which provides tiny versions of GNU programs for phones. The easiest way to install this was to install TitaniumBackup and click on its “Problems” button, which would download it, then copy it from /data/data/com.keramidas.TitaniumBackup/files into /system/bin/ and run busybox --install -s /system/bin/

Then the script could run, and additionally I needed to run chmod 774 /data/data/* to make sure everything was in the correct state, but still no luck with gdb. At this point I decided my time would be better spent on making a linux version of the android app for debugging purposes and getting it running in the software emulator. More on this soon.

I’m also documenting bits and pieces of my android development notes on FoAM’s wiki here.