Tag Archives: hardware

A screenless programming language: putting it all together

More background on this project here. This is the finished Raspberry Pi interface board: most of the circuitry here is an LED driver, to light up one of the 32 programming blocks as they are being scanned, or in any pattern required depending on the needs of the particular programming language in use. The small IC is an old 74LS02 NOR gate that I’m misusing as a NOT gate (I couldn’t find an actual inverter in my supplies) – I hard wired one of the gate’s inputs to ground so the other one gets inverted. I need this in order to switch between the two 16 way multiplexers to get 32 outputs, by alternating their enable pins.
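
To make that concrete, here’s a minimal sketch of the select logic from the software side (written against the wiringPi library; the pin numbers and function names are placeholders rather than the real wiring): the low 4 address bits drive the S0–S3 select lines shared by both 4067s, while the 5th bit drives a single enable line, with the NOR-as-NOT gate supplying its complement to the second chip.

#include <wiringPi.h>

const int SELECT_PINS[4] = {0, 1, 2, 3}; // S0-S3, shared by both 4067s
const int ENABLE_PIN = 4;                // hardware-inverted for the second mux

// drive the address lines so exactly one of the 32 blocks is selected
void select_block(int address) // address: 0-31
{
    for (int bit = 0; bit < 4; bit++) {
        digitalWrite(SELECT_PINS[bit], (address >> bit) & 1);
    }
    // bit 4 picks which multiplexer is active; the 74LS02 NOT gate
    // generates the complement for the other chip's enable pin
    digitalWrite(ENABLE_PIN, (address >> 4) & 1);
}

int main()
{
    wiringPiSetup();
    for (int i = 0; i < 4; i++) pinMode(SELECT_PINS[i], OUTPUT);
    pinMode(ENABLE_PIN, OUTPUT);
    for (int address = 0; address < 32; address++) select_block(address);
    return 0;
}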

[image: IMG_20141002_084650]

This time I remembered to include decoupling capacitors (thanks to Julian). Here is the completed wiring for the block input plugs, with everything connected up. I’m new to this scale of building, so I’m really pretty happy that it seems to be working:

[image: IMG_20140927_155501]

The same thing, the right way up with a single programming block plugged in, and the Raspberry Pi in the foreground. For the first time it’s starting to look like something:

[image: IMG_20140927_162218]

I’m also starting to document the project, as it will be released as open source hardware – here’s an unfinished schematic. This is attempting to be complete, but it’s not really as complex as it looks; I’ll work on a block diagram that makes it easier to describe:

[image: unfinished-circuit]

Building a screenless programming language for the Raspberry Pi #6

Following on from the last in this mini series, the giant 16 byte multiplexer is finished and tested – lots of head scratching due to shorts and wonky solder joints (it’s amazing how sensitive CMOS logic is; interference on accidentally floating pins causes strange effects) – and I’ve replaced the breadboard with a new interface board between the Pi and the multiplexer.

[image: IMG_20140914_163350]

Now it’s time to think more about the ‘front end’ – here are some prototype programming blocks made from the ‘holes’ from a hole saw (I’m chopping up bits of driftwood). These include a central drill hole, which we can use for an indicator LED.

[image: IMG_20140914_155709]

This is the minimal circuitry needed for each programming block – it’s just a 4 bit identification code (reconfigurable via jumpers) and the LED, mounted around the 7 pin VGA style plug.

[image: IMG_20140914_155742]

Here it’s mounted in the block after a bit of hollowing out. To be fully kid-proof I’ll probably eventually set the internals in epoxy glue.

[image: IMG_20140914_162353]

Testing the plug/socket wiring on the breadboard. The plan is to paint the top surface of the programming block in blackboard paint so different instructions and languages can be tested and built with the same blocks. It also intriguingly makes abstraction possible, which is usually a problem with non-text based approaches, but this needs more thinking about.

[image: IMG_20140916_080606]

Here it is with another unenclosed block attached to the main board, and via the new interface to the Raspberry Pi. The interface will need a bit more circuitry to drive the LEDs, which will be able to be lit in different ways depending on the needs of the language – perhaps sequentially in some cases, more dependent on logic in others. Different shaped blocks or different coloured LEDs may also have different meanings in a language.

Screenless programming language #5

Update after the last post: after many hours of soldering, 75% of the board is complete! The Pi can now address a whopping 12 bytes of data.

[image: IMG_20140912_085034]

After thinking about it for a while, and checking with the languages I’m planning to use – I’ve decided to double the number of instruction ‘slots’ by halving the bits to 4. This doesn’t actually change the core hardware at all – I’ll just scan two at a time and split them in software (part of this project has been to see how much easier it is to change software than hardware). Some of the instructions can span two addresses if needed, but this makes more efficient use of the resources. The ports on the left of the board are where the plugs for the programming ‘interface’ will come in – each one will now split to two locations.
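
The software side of the split is trivial – something like this minimal sketch (the example byte stands in for an actual GPIO read):

#include <cstdio>

// split one 8 bit hardware read into two 4 bit instruction 'slots'
void split_slots(unsigned char byte, unsigned char &slot_a, unsigned char &slot_b)
{
    slot_a = byte & 0x0f;        // low nibble: first instruction slot
    slot_b = (byte >> 4) & 0x0f; // high nibble: second instruction slot
}

int main()
{
    unsigned char a = 0, b = 0;
    split_slots(0x3a, a, b); // e.g. one read becomes slots 0xa and 0x3
    printf("slot a: %x, slot b: %x\n", a, b);
    return 0;
}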

Here’s my test GPIO software running on the Pi – constant testing has been the only way to stay sane on this project. Eventually I could replace the Pi with a microcontroller for some applications, but the Pi has been a great way to ease the prototyping process.

[image: IMG_20140910_092642]

Thinking outside of the screen (#1)

I’m starting a new exploratory project to build a screen-less programming language based on two needs:

  • A difficulty with teaching kids programming in my CodeClub, where they become lost ‘in the screen’. It’s a challenge (for any of us really, but for children particularly) to disengage and think differently – e.g. to draw a diagram to work something out, or to work as part of a team.
  • A problem with performing livecoding, where a screen represents a spectacle, or even worse a ‘school blackboard’ that, as an audience, we expect ourselves to have to understand.

I’ve mentioned this recently to a few people and it seems to resonate, particularly in regard to a certain mismatch between children’s ability to manipulate physical objects and their fluid touchscreen usage. So, with my mind on the ‘pictures under glass’ rant and taking betablocker as a starting point (and weaving code as one additional project this might link with), I’m building some prototype hardware to provide the Raspberry Pi with a kind of external physical memory, which could comprise symbols made from carved wood or 3D printed shapes – while still describing the behaviour of real software. I also want to avoid computer vision in favour of a more understandable ‘pluggable’ approach, with less slightly faulty ‘magic’ going on.

Before getting too theoretical I wanted to build some stuff – a flexible prototype for figuring out what this sort of programming could be. The Raspberry Pi has 17 configurable I/O pins on its GPIO interface, so I can use 5 of them as an address lookup (for 32 memory locations to start with, expandable later) and 8 as input for the code or data values at these locations.
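
As a rough sketch of what the scan loop could look like from the Pi side (using the wiringPi library, with invented pin assignments – not the project’s actual code):

#include <wiringPi.h>
#include <cstdio>

const int ADDR_PINS[5] = {0, 1, 2, 3, 4};             // 5 bit address out
const int DATA_PINS[8] = {5, 6, 7, 8, 9, 10, 11, 12}; // 8 bit value in

int main()
{
    wiringPiSetup();
    for (int i = 0; i < 5; i++) pinMode(ADDR_PINS[i], OUTPUT);
    for (int i = 0; i < 8; i++) pinMode(DATA_PINS[i], INPUT);

    // walk the 32 memory locations and read back the value at each one
    for (int address = 0; address < 32; address++) {
        for (int bit = 0; bit < 5; bit++) {
            digitalWrite(ADDR_PINS[bit], (address >> bit) & 1);
        }
        delayMicroseconds(10); // let the multiplexer outputs settle
        int value = 0;
        for (int bit = 0; bit < 8; bit++) {
            value |= digitalRead(DATA_PINS[bit]) << bit;
        }
        printf("location %d: %d\n", address, value);
    }
    return 0;
}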

The smart thing would be to use objects that identify themselves with a signal, using serial communication down a single wire with a standard protocol. The problem with this is that it would make potential ‘symbol objects’ themselves fairly complicated and costly – and I’d like to make it easy and cheap to make loads of them. For this reason I’m starting with a parallel approach where I can just solder across pins on a plug to form a simple 8 bit ID, and restrict the complexity to the reading hardware.

Testing the 74HC4067 16-channel analog multiplexer/demultiplexer

I’ve got hold of a bunch of 74HC4067 multiplexers, which allow you to select one signal from 16 inputs (or the other way around) using 4 bits – I’m stacking them up, one for each of the 8 data bits across the 16 memory locations. This was the furthest I could go without surface mount ICs (well beyond my wonky soldering abilities).

Building the board (with narrow 24 pin IC holders sliced down the middle). The input comes in via a common bus down the centre of the board.
Solder practice
Testing the first 4 bits on the breadboard

Now that 4 bits are working it’s harder to test with an LED – so next up is getting the Raspberry Pi attached.

Spork Factory: evolving a light follower robot

Continuing with the structured procrastination R&D project on evolvable hardware, I’m proud to report a pretty decent light following robot – this is a video of the first real-world test, with a program grown from primordial soup chasing me around:

After creating a software model simulation of the robot in the last post, I added some new bytecode instructions for the virtual machine: LEYE and REYE push 1 onto the stack if we are detecting light from the left or right photoresistor, zero if it’s dark. LMOT and RMOT pop the top byte of the stack to turn the motors on and off. The strategy for the genetic algorithm’s fitness function is to run each 16 byte generated program on the robot for 1000 cycles, moving the robot to a new random location and facing direction 10 times without stopping the program. At the end of each run the position of the robot is compared to the light position, and the distances are averaged as the fitness. Note that we’re not assigning fitness to how fast we get to the light.
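
In outline the fitness function looks something like this (a sketch only – the Simulation interface here is a stand-in for the actual simulation code, and the split of the 1000 cycles across the 10 placements is my assumption):

// stand-in for the real robot simulation: the actual methods run the
// bytecode VM and the motor/position model from the post below
struct Simulation {
    void load(const unsigned char *program) {}
    void place_robot_randomly() {}
    void step() {}
    float distance_to_light() { return 0; }
};

// run the program for 1000 cycles over 10 random placements without
// restarting it, and average the final distance to the light each time
float fitness(Simulation &sim, const unsigned char *program)
{
    const int placements = 10;
    const int cycles_per_placement = 100; // 1000 cycles in total
    float total_distance = 0;
    sim.load(program); // the program is never stopped between placements
    for (int p = 0; p < placements; p++) {
        sim.place_robot_randomly(); // new location and facing direction
        for (int c = 0; c < cycles_per_placement; c++) sim.step();
        total_distance += sim.distance_to_light();
    }
    return total_distance / placements; // lower is fitter; speed isn't scored
}

int main()
{
    unsigned char program[16] = {0}; // a candidate 16 byte program
    Simulation sim;
    return (int)fitness(sim, program);
}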

This is pretty simple stuff, but it’s still interesting to look at what happens over time in the genetic algorithm. Both motors are running at startup by default, so the first successful programs learn how to turn one motor off – otherwise the robot just shoots off and scores really low. So the first generations tend to just go round in circles. Then they start to learn how to plug the eyes in, one by one edging them closer to the goal – then it’s a case of improving the sample rate to improve accuracy, usually by using jmps and optimising the loops.

This is an example of a fairly simple and effective solution, the final generation shown in the animation above:

loop:
  leye 
  rmot 
  nop nop nop nop nop 
  reye 
  lmot 
  or 
  nop nop nop nop
  rmot 
  jmpz loop

Some explanation: the right and left eyes are plugged into the left and right motors, which is the essential bit that makes it work; the ‘nop’s are all values that are not executable. The ‘rmot’ before the ‘jmpz’ makes the robot scan around in circles if there is no light detected (strangely, a case which doesn’t happen in the simulation). The argument to ‘jmpz’ is 0 (loop), which is actually the 17th byte – so it’s cheekily using memory that has been initialised to zero as part of its program.

This is a more complicated and stranger program which evolved after 70 generations with a high fitness, I haven’t worked out what it’s up to yet:

  pshl 171 
loop:
  lmot 
  leye 
  pip 111 
  pip 30 
  rmot 
  reye 
  pshl 214 
  nop 
  lmot 
  jmp loop

Evolvable hardware

I’m modding a robot toy for the next Spork Factory experiment – the chassis provides twin motor driven wheels, and I’m replacing its brains with a circuit based on the ATtiny85 for running the results of the genetic algorithm, plus a pair of light dependent resistors for ‘seeing’ with.

Here’s the circuit (made in about 20 minutes after installing Fritzing for the first time). It’s quite simple – two LDRs provide input, and some transistors are needed to provide enough power to the robot’s motors (it’s using PNP transistors as they were the first matching pair I could find, which means logical 0 is ‘on’ and 1 is ‘off’ in the code).
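
That inversion just means the code writes a logical 0 to switch a motor on – roughly like this (a tiny sketch for avr-gcc, with made-up pin assignments):

#include <avr/io.h>

// PNP high side drivers invert the logic: pulling the base pin low
// switches the transistor (and so the motor) on
void left_motor(bool on)  { if (on) PORTB &= ~_BV(PB0); else PORTB |= _BV(PB0); }
void right_motor(bool on) { if (on) PORTB &= ~_BV(PB1); else PORTB |= _BV(PB1); }

int main()
{
    DDRB |= _BV(PB0) | _BV(PB1);  // motor pins as outputs
    PORTB |= _BV(PB0) | _BV(PB1); // start with both motors off (logical 1)
    left_motor(true);             // writes 0, turning the left motor on
    for (;;);
}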

The robot needs to be emulated in software so the genetic algorithm can measure the fitness of hundreds of thousands of candidate programs. We need to simulate the effect of the motor speeds on its position and velocity – here is a test running the right motor at a constant rate, and gradually increasing the speed of the left motor.

This is the code snippet that calculates the velocity from the 2 motor speeds – more work is needed to plug in the correct values for the distance between the wheels and the actual rotational speeds of the motor drives.

// update dir and pos from current motor speed
float relative_speed_diff=m_left_motor-m_right_motor;
float angle=atan(relative_speed_diff);
    
// rotate the direction by the angle
float nx=(m_dir.x*cos(angle))-(m_dir.y*sin(angle));
float ny=(m_dir.x*sin(angle))+(m_dir.y*cos(angle));
m_dir.x=nx;
m_dir.y=ny;
    
// update the position
m_pos=m_pos.add(m_dir);
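
For reference, here’s the same update wrapped into a self-contained test matching the description above (right motor held constant while the left motor speed gradually increases) – the speed-to-angle mapping is the same placeholder, and the wheelbase/rotation constants are still missing:

#include <cmath>
#include <cstdio>

struct Vec2 {
    float x, y;
    Vec2 add(const Vec2 &o) const { return {x + o.x, y + o.y}; }
};

int main()
{
    Vec2 pos = {0, 0};
    Vec2 dir = {1, 0}; // unit length facing direction
    float right_motor = 0.5f;
    for (int step = 0; step < 100; step++) {
        float left_motor = step / 100.0f; // gradually speed up the left motor
        float angle = atan(left_motor - right_motor);
        // rotate the direction by the angle
        float nx = (dir.x * cos(angle)) - (dir.y * sin(angle));
        float ny = (dir.x * sin(angle)) + (dir.y * cos(angle));
        dir.x = nx; dir.y = ny;
        // update the position
        pos = pos.add(dir);
        printf("%f %f\n", pos.x, pos.y);
    }
    return 0;
}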

Spork factory

A system for creating an abundance of useless software for tiny devices. Spork Factory evolves programs that run on Atmel processors – the same make as found on the Arduino, in this case the ATtiny85, a £2.50 8 pin 8 bit CPU. I’m currently using a piezo speaker as an output, and evolving programs based on the frequency of the sound produced by flipping the pins up and down – creating 2 bit synths with the Fourier transform as the fitness function. With more hardware (input as well as output) perhaps we could evolve small robots, or even maybe cheap claytronics or programmable matter experiments.
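
As a sketch of the idea only (the real fitness code lives in the jgap based framework mentioned below), a naive DFT over the recorded output can count how many frequency bins carry energy – the more distinct frequencies, the fitter:

#include <cmath>
#include <cstdio>
#include <vector>

const float PI = 3.14159265f;

// count the frequency bins whose magnitude rises above a threshold
float spectral_fitness(const std::vector<float> &samples)
{
    size_t n = samples.size();
    int active_bins = 0;
    for (size_t k = 1; k < n / 2; k++) { // skip the DC bin
        float re = 0, im = 0;
        for (size_t t = 0; t < n; t++) {
            float phase = 2.0f * PI * k * t / n;
            re += samples[t] * cosf(phase);
            im -= samples[t] * sinf(phase);
        }
        if (sqrtf(re * re + im * im) / n > 0.01f) active_bins++;
    }
    return (float)active_bins;
}

int main()
{
    std::vector<float> samples(256);
    for (size_t t = 0; t < samples.size(); t++) {
        samples[t] = (float)((t / 8) % 2); // a plain square wave to test with
    }
    printf("fitness: %f\n", spectral_fitness(samples));
    return 0;
}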

This project reuses the previous genetic programming experiments (including jgap as its genetic algorithm framework), and is also inspired by Till Bovermann’s recent work with Betablocker in Supercollider for bytecode synthesis.

The generated programs don’t use the Atmel instruction set directly, but interpret a custom one derived from Betablocker, for two reasons. The first is that Atmel processors separate their instruction memory from data (the Harvard architecture), which makes it difficult to modify code as it’s running (either uploading new evolved code or running self modifying instructions); the other is that a simplified custom instruction set makes it easier for genetic algorithms to create all kinds of strange programs that will always run.

I’ve added an ‘OUT’ instruction, which pops the top of the stack and writes it to the pins on the ATtiny, so the first thing a program needs to do is generate and output some data. The second thing it needs to do is create an oscillator to produce a tone; after that, the fitness function grades the program on the number of frequencies present in the sound, encouraging it to make richer noises.
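
A minimal sketch of this style of interpreter is below – the opcodes and their encodings are invented for illustration (not Betablocker’s actual instruction set), and the memory sizes would need trimming to fit the ATtiny85’s 512 bytes of SRAM:

#include <avr/io.h>

unsigned char mem[256];   // program and data share the same memory
unsigned char stack[256]; // both indices wrap naturally at 256
unsigned char sp = 0, pc = 0;

enum { NOP = 0, OUT, DEC, DUP, PSHL, JMPZ };

// execute a single instruction; unknown bytes fall through as nops,
// so any sequence of random or evolved bytes is a valid program
void step()
{
    switch (mem[pc++]) {
    case OUT:  PORTB = stack[--sp]; break;     // pop and write to the pins
    case DEC:  stack[(unsigned char)(sp - 1)]--; break;
    case DUP:  stack[sp] = stack[(unsigned char)(sp - 1)]; sp++; break;
    case PSHL: stack[sp++] = mem[pc++]; break; // push a literal value
    case JMPZ: { unsigned char target = mem[pc++];
                 if (stack[--sp] == 0) pc = target; } break;
    default:   break; // nop
    }
}

int main()
{
    DDRB = 0xff; // speaker pin (and the rest of port B) as outputs
    // mem[] would be filled with an evolved program here
    for (;;) step();
}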

Here are two example programs from a single run, first the ancestor, a simple oscillator which evolved after 4 or 5 generations:

out
out
nop
nop
dec
nop
nop
nop
out
nop
jmpz 254
nop
nop
nop
dup

It’s simply outputting 0s, then using ‘dec’ to decrement the top of the stack, wrapping it round to 255, which sets the rightmost bit to 1 (the one the speaker is attached to), and then looping with ‘jmpz’, causing it to oscillate. This program produces this FFT plot:

After 100 or so further generations, this descendant program emerges. The ‘dec’ is replaced by ‘pshl 81’, which does the same job (pushing the literal value 81 onto the stack, setting our speaker bit to 1), but also uses a ‘dup’ (duplicate the top of the stack) to shuffle the values around, making a more complex output signal with more frequencies present:

out
out
not
nop
pshl 81
pshi 149
out
nop
out
nop
dup
psh 170
jmp 0

Some further experiments, and perhaps even sound samples soon…