Arduino networked lamp test

Working through this tutorial at the moment, trying to understand how Processing can be used to network an Arduino and drive the colour of a lamp from words featured in an XML feed (in this case my blog feed, with the tutorial's word 'love' replaced by 'space' and 'peace' replaced by 'rock'). This generates the colour #3C4C2C.
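The Processing side counts occurrences of the chosen words in the feed and turns the counts into a colour. As a rough sketch of that idea in Python (the third tracked word and the ×10 scaling here are my assumptions, not the tutorial's exact values):

```python
# Hypothetical feed-to-colour mapping: each tracked word drives one RGB
# channel, scaled by an assumed factor and clamped to the 0-255 range.
def word_colour(text, words=("space", "rock", "arduino"), scale=10):
    text = text.lower()
    channels = [min(255, text.count(w) * scale) for w in words]
    return "#{:02X}{:02X}{:02X}".format(*channels)
```

Processing then writes a string like this (e.g. #3C4C2C) down the serial port for the Arduino to decode.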

Space, rock and Arduino
Rock, space and Arduino

And after adding this post to the feed…
Note the slight colour change.

This is the circuit I used, from this website. The LED is a four-pin RGB one, which can produce any combination of red, green and blue light:

https://mayorquinmachines.weebly.com/blog/arduino-project-arduino-networked-lamp

And the two versions of it that I built:

Arduino networked lamp circuit v1, with RGB LEDs
Arduino networked lamp circuit v2, with one LED

Here is the Arduino code; the tutorial's Processing sketch reads the feed and sends the resulting colour to the board over the serial port:

// Arduino code for the Arduino Networked Lamp

#define SENSOR 0
#define R_LED 9
#define G_LED 10
#define B_LED 11
#define BUTTON 12

int val = 0;        // value coming from the light sensor
int btn = LOW;
int old_btn = LOW;
int state = 0;      // lamp on/off state, toggled by the button
char buffer[7];
int pointer = 0;
byte inByte = 0;
byte r = 0;
byte g = 0;
byte b = 0;

void setup() {
  Serial.begin(9600); // open the serial port to Processing
  pinMode(BUTTON, INPUT);
}

void loop() {
  val = analogRead(SENSOR);
  Serial.println(val); // report the sensor reading to Processing

  if (Serial.available() > 0) {
    // read incoming byte
    inByte = Serial.read();
    if (inByte == '#') {
      // a colour arrives as six hex digits after the '#'
      while (pointer < 6) {
        if (Serial.available() > 0) { // wait for each digit to arrive
          buffer[pointer] = Serial.read();
          pointer++;
        }
      }
      // decode the three pairs of hex digits into three bytes
      r = hex2dec(buffer[1]) + hex2dec(buffer[0]) * 16;
      g = hex2dec(buffer[3]) + hex2dec(buffer[2]) * 16;
      b = hex2dec(buffer[5]) + hex2dec(buffer[4]) * 16;
      pointer = 0; // reset pointer for the next colour
    }
  }

  btn = digitalRead(BUTTON);
  // toggle the lamp on a LOW-to-HIGH transition of the button
  if ((btn == HIGH) && (old_btn == LOW)) {
    state = 1 - state;
  }
  old_btn = btn; // the reading is now old, store it

  if (state == 1) {
    analogWrite(R_LED, r);
    analogWrite(G_LED, g);
    analogWrite(B_LED, b);
  } else {
    analogWrite(R_LED, 0);
    analogWrite(G_LED, 0);
    analogWrite(B_LED, 0);
  }
  delay(100);
}

int hex2dec(byte c) {
  if (c >= '0' && c <= '9') {
    return c - '0';
  } else if (c >= 'A' && c <= 'F') {
    return c - 'A' + 10;
  }
  return 0; // anything else decodes as 0
}

This didn’t work the first time I ran it, so I had to specify which serial port Processing should use (picking the Arduino’s port from Processing’s Serial.list()) and then…


This week’s research – 26/04/18

Very inspired by cuneiform following my visit to the British Museum, especially the Neo-Assyrian circular clay tablet with depictions of constellations (planisphere).

Looking at Living Symphonies by James Bulley and Daniel Jones as possible inspiration for the final Space Rocks musical piece.

Reading Science Fiction for Prototyping: Designing the Future with Science Fiction by Brian David Johnson, researching how to create a narrative around the final Space Rock objects.

And finally got hold of a copy of Active Matter. Currently watching the intro page move as the sun tries to peep out from behind clouds.

Active Matter inner cover

Got me thinking about the material of the Space Rock objects, and also reminded me of the Massive Attack heat-sensitive packaging.

VRLO, Wed, 25 Apr 2018. A surprisingly small space, and only a few demos there. Still struggling with the VR issue that only one person can share the experience at a time, because each person needs the (expensive) headset. However, I really (vicariously) enjoyed the CAD-in-VR demo from Gravity Sketch. Almost worth getting a headset for, to draw 3D models in a virtual 3D space.

Research and inspiration 27/03/2018

Discovered the artist Amulets this week. He works with cassettes, players, tape loops and effects, creating woozy soundscapes and atmospheres from these simple sources. Of particular interest is the physical aspect of what he does, manipulating the sounds and machines in real time.

Also reading The Oxford Handbook of Interactive Audio, with a view to developing a more theoretical approach to the sound that will be part of my final installation.

Currently experimenting with combining simple tones to make chords / walls of sound, using this for reference: Frequencies for equal-tempered scale, A4 = 440 Hz

Panned version.

Mono version.
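Those reference frequencies can also be computed rather than looked up: in equal temperament each semitone multiplies the frequency by 2^(1/12), with A4 (MIDI note 69) fixed at 440 Hz. A quick Python sketch:

```python
# Equal-tempered frequency of a MIDI note, relative to A4 = 440 Hz.
def note_freq(midi_note, a4=440.0):
    return a4 * 2 ** ((midi_note - 69) / 12)

# Three tones that stack into an A major chord: A4, C#5, E5.
chord = [round(note_freq(n), 2) for n in (69, 73, 76)]
```

This gives 440.0, 554.37 and 659.26 Hz, matching the equal-tempered scale table.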

Also read this piece recently on FACT – The Sound of Fear, which mentioned the Ghost Tape Number 10, which was unpacked in this podcast a while ago. An interesting example of using sound to play on people’s cultural preconceptions.

During the Vietnam conflict, US troops played a soundtrack known as Ghost Tape Number 10 against the soldiers of the National Liberation Front. Used as part of Operation Wandering Soul, the unsettling tape collage tapped into Vietnamese beliefs that ancestors not buried in their homeland roam without rest in the afterlife. This spooky mix of voice, sound and music was intended to haunt Vietnamese soldiers and encourage them to abandon their cause.

Quite intrigued by this visualisation of sound in space via AR. Discovered this while reading about the Weird Type AR app.

I have also been investigating a few options for networking and interacting with the objects that will be the 3D models in the installation.

One is XBee – apparently “the Digi XBee3 Series offers design freedom with easy-to-add functionality and flexible wireless connectivity.”

Another is Google’s Project Soli, a sensor which recognises hand gestures.

And finally, a MIDI controller ring, The Wave, as featured on TechCrunch.

Sadly the last two will not be available for a few months yet.

Research and inspiration 17/03/2018

Some links and thoughts from the past week.

This episode discusses the emoji-based augmented version of Bosch’s Garden of Earthly Delights by Carla Gannis, along with the concept of companies and institutions owning the ‘airspace’ or virtual space around their properties, such as paintings. It also discusses whether we will be able to buy virtual land, as you can already do in Second Life. Particularly poignant in light of this piece I read recently on fastcodesign.com, about ‘digital artists’ hijacking MoMA with AR.

This also sparked some research into RGB-D.
“In the past years, novel camera systems like the Microsoft Kinect or the Asus Xtion sensor that provide both color and dense depth images became readily available. There are great expectations that such systems will lead to a boost of new 3D perception-based applications in the fields of robotics and visual & augmented reality.”

Intriguing short video that makes you wonder if it is CGI or a model / set:
Club Palace (Real or CGI?) – NOWNESS. Inspiration for the ‘set’ around the What Goes Around space objects, perhaps?

Have also been exploring how to network the various sensors that will be attached to the Space Rocks wirelessly, and investigating XBee:
https://www.arduino.cc/en/Main/ArduinoXbeeShield

Also been investigating the Arecibo message, a short radio message sent into space to celebrate the remodelling of the Arecibo radio telescope in Puerto Rico in 1974. It was aimed at the globular star cluster M13, about 25,000 light years from Earth. M13 was chosen because it was the right size, and was in the right place in the sky at the right time for the ceremony.
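Part of what makes the message decodable at all is its length: 1,679 bits, the product of two primes, so there is only one non-trivial way to fold the bitstream into a rectangle (23 columns by 73 rows). A quick check in Python:

```python
# The Arecibo message is 1679 bits long; because 1679 = 23 * 73 with
# both factors prime, only one non-trivial grid shape fits the stream.
def factor_pairs(n):
    return [(a, n // a) for a in range(2, int(n ** 0.5) + 1) if n % a == 0]
```

Any recipient who tries every rectangle quickly lands on the intended 23 × 73 image.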


And the response that someone created:


And also The Von Neumann Probe (A Nano Ship to the Stars). 
Simply put, a Von Neumann probe is a self-replicating device that could, one day, be used to explore every facet of the Milky Way in a relatively small window of time.

Creative AI meetup #17: Hopes and Fears for AI

This month’s Creative AI meetup was on the topic of Hopes and Fears for AI. The event once again featured two speakers, both from a more scientific / academic background than at previous meetups, which have usually paired an artist with a scholar.

First up, Beth Singler (Faraday Institute for Science and Religion / Centre for the Future of Intelligence) considered the influence of current dominant narratives around AI.

Her talk was on the topic of “Prophecy or Prediction? Artificial Intelligence and Imagining the Future”:
The stories that we tell ourselves about artificial intelligence influence the development of the technology itself. This talk will consider the influence of current dominant narratives – shared through the press and through media such as television, film, and memes – and how those stories can present as prediction while containing elements of prophetic judgement within them. The role of specific charismatic voices such as Ray Kurzweil, the “Prophet of Both Techno-Doom and Techno-Salvation” (Motherboard 2011) in perpetuating and shaping accounts of the future will also be considered, as well as the purpose of such accounts. How such eschatological or apocalyptic accounts affect individuals will also be addressed, with reference to accounts of anxiety and fear, along with how far future stories and imagery might serve to prevent public engagement with more near future issues.

Dr Beth Singler is the Research Associate on the “Human Identity in an age of Nearly-Human Machines” project at the Faraday Institute for Science and Religion. She is exploring the social, ethical, philosophical and religious implications of advances in Artificial Intelligence and robotics. As a part of the project she is producing a series of short documentaries, including Pain in the Machine, which won the 2017 AHRC Best Research Film of the Year Award. Beth is also an Associate Research Fellow at the Leverhulme Centre for the Future of Intelligence, collaborating on a project on AI Narratives.

The second speaker was Matthew Crosby, a postdoc at Imperial working on the Kinds of Intelligence project as part of the Leverhulme Centre for the Future of Intelligence. He is interested in the relationship between different forms of intelligence (especially artificial), and consciousness. He maintains a blog on consciousness and the future of intelligence at mdcrosby.com/blog, where you can also find more information about his work.

He discussed “AI Suffering”:
AI has the potential to change human lives for better and for worse. This is a general property of technological advances, which have previously brought greater (technological) power, and, with that, greater (moral) responsibility. What is different about AI, however, is the possibility of creating sentient entities, for which we may be morally responsible. By creating such entities, we risk increasing the amount of suffering in the world – not for us, but for them. Thomas Metzinger has called for a moratorium on any AI research that could result in AI entities that suffer. However, it is not clear exactly which research constitutes a risk. Metzinger focuses on research into conscious AI. I believe this is too narrow. In this talk I will argue that all progress in AI is progress towards creating entities with a capacity for suffering. AI suffering may be inevitable. It may also be a moral necessity.


What Goes Around sketches – February 2018

Some sketches to visualise the What Goes Around objects that may feature in the final installation. From spacecraft to insekts [sic], these are initial drawings of what may have been sent back to us from space.