Research and inspiration 27/03/2018

Discovered the artist Amulets this week. He works with cassettes, players, tape loops and effects, creating woozy soundscapes and atmospheres from these simple sources. Of particular interest is the physical aspect of what he does, manipulating the sounds and machines in real time.

Also reading The Oxford Handbook of Interactive Audio, with a view to developing a more theoretical approach to the sound that will be part of my final installation.

Currently experimenting with combining simple tones to make chords / walls of sound, using this for reference: Frequencies for the equal-tempered scale, A4 = 440 Hz.

Panned version.

Mono version.

Also read this piece recently on FACT – The Sound of Fear – which mentions Ghost Tape Number 10, unpacked in this podcast a while ago. An interesting example of using sound to play on people’s cultural preconceptions.

During the Vietnam conflict, US troops played a soundtrack known as Ghost Tape Number 10 against the soldiers of the National Liberation Front. Used as part of Operation Wandering Soul, the unsettling tape collage tapped into Vietnamese beliefs that ancestors not buried in their homeland roam without rest in the afterlife. This spooky mix of voice, sound and music was intended to haunt Vietnamese soldiers and encourage them to abandon their cause.

Quite intrigued by this visualisation of sound in space via AR. Discovered this while reading about the Weird Type AR app.

I have also been investigating a few options for networking and interacting with the objects that will be the 3D models in the installation.

One is XBee – apparently “the Digi XBee3 Series offers design freedom with easy-to-add functionality and flexible wireless connectivity.”

Another is Google’s Project Soli, a sensor which recognises hand gestures.

And finally, a MIDI controller ring, The Wave, as featured on TechCrunch.

Sadly the last two will not be available for a few months yet.

Research and inspiration 17/03/2018

Some links and thoughts from the past week.

This episode discusses the emoji-based augmented version of Bosch’s Garden of Earthly Delights by Carla Gannis, along with the concept of companies and institutions owning the ‘airspace’ or virtual space around their properties, such as paintings. It also discusses whether we will be able to buy virtual land, as you can already do in Second Life. Particularly poignant in light of this piece I read recently on fastcodesign.com, about ‘digital artists’ hijacking MoMA with AR.

This also sparked some research into RGB-D.
“In the past years, novel camera systems like the Microsoft Kinect or the Asus Xtion sensor that provide both color and dense depth images became readily available. There are great expectations that such systems will lead to a boost of new 3D perception-based applications in the fields of robotics and visual & augmented reality.”

Intriguing short video that makes you wonder if it is CGI or a model / set:
Club Palace (Real or CGI?) – NOWNESS. Inspiration for the ‘set’ around the What Goes Around space objects, perhaps?

Have also been exploring how to network the various sensors that will be attached to the Space Rocks wirelessly, and investigating XBee:
https://www.arduino.cc/en/Main/ArduinoXbeeShield

Also been investigating the Arecibo message…a short radio message sent into space to celebrate the remodeling of the Arecibo radio telescope in Puerto Rico in 1974. It was aimed at the globular star cluster M13, about 25,000 light years from Earth. M13 was chosen because it was the right size, and was in the sky at the right time and place for the ceremony.


And the response that someone created:

And also The Von Neumann Probe (A Nano Ship to the Stars). 
Simply put, a Von Neumann probe is a self-replicating device that could, one day, be used to explore every facet of the Milky Way in a relatively small window of time.

Creative AI meetup #17: Hopes and Fears for AI

This month’s Creative AI meetup was on the topic of Hopes and Fears for AI. The event once again featured two speakers, both from a more scientific / academic background than at previous meetups, which have usually featured one artist and one scholar.

First up, Beth Singler (Faraday Institute for Science and Religion / Centre for the Future of Intelligence) considered the influence of current dominant narratives around AI.

Her talk was on the topic of “Prophecy or Prediction? Artificial Intelligence and Imagining the Future”:
The stories that we tell ourselves about artificial intelligence influence the development of the technology itself. This talk will consider the influence of current dominant narratives – shared through the press and through media such as television, film, and memes – and how those stories can present as prediction while containing elements of prophetic judgement within them. The role of specific charismatic voices such as Ray Kurzweil, the “Prophet of Both Techno-Doom and Techno-Salvation” (Motherboard 2011) in perpetuating and shaping accounts of the future will also be considered, as well as the purpose of such accounts. How such eschatological or apocalyptic accounts affect individuals will also be addressed, with reference to accounts of anxiety and fear, along with how far future stories and imagery might serve to prevent public engagement with more near future issues.

Dr Beth Singler is the Research Associate on the “Human Identity in an age of Nearly-Human Machines” project at the Faraday Institute for Science and Religion. She is exploring the social, ethical, philosophical and religious implications of advances in Artificial Intelligence and robotics. As a part of the project she is producing a series of short documentaries, including Pain in the Machine, which won the 2017 AHRC Best Research Film of the Year Award. Beth is also an Associate Research Fellow at the Leverhulme Centre for the Future of Intelligence, collaborating on a project on AI Narratives.

The second speaker was Matthew Crosby, a postdoc at Imperial working on the Kinds of Intelligence project as part of the Leverhulme Centre for the Future of Intelligence. He is interested in the relationship between different forms of intelligence (especially artificial), and consciousness. He maintains a blog on consciousness and the future of intelligence at mdcrosby.com/blog, where you can also find more information about his work.

He discussed “AI Suffering”:
AI has the potential to change human lives for better and for worse. This is a general property of technological advances, which have previously brought greater (technological) power, and, with that, greater (moral) responsibility. What is different about AI, however, is the possibility of creating sentient entities, for which we may be morally responsible. By creating such entities, we risk increasing the amount of suffering in the world – not for us, but for them. Thomas Metzinger has called for a moratorium on any AI research that could result in AI entities that suffer. However, it is not clear exactly which research constitutes a risk. Metzinger focuses on research into conscious AI. I believe this is too narrow. In this talk I will argue that all progress in AI is progress towards creating entities with a capacity for suffering. AI suffering may be inevitable. It may also be a moral necessity.


What Goes Around sketches – February 2018

Some sketches to visualise the What Goes Around objects that may feature in the final installation. From spacecraft to insekts [sic], these are initial drawings of what may have been sent back to us from space.

Arduino adventures: triggering MP3 files with distance sensor

Finally got the circuit built to be able to read the distance sensor and drive the MP3 board, so now I can trigger a selected MP3 to play when something passes the sensor. What I built previously was overly complicated and was most likely shorting out.

The videos show the sensor triggering at a 10 cm distance, but this is easy to vary. They also show the sound looping if the sensed object stays in range, and two separate pieces of audio being triggered; the audio selection was changed in the code before each upload. The code used is below.

Next to make it play a random one of the MP3 files each time the sensor is triggered!

//code rearranged by Javier Muñoz 10/11/2016 ask me at javimusama@hotmail.com
#include <SoftwareSerial.h>

#define ARDUINO_RX 5//should connect to TX of the Serial MP3 Player module
#define ARDUINO_TX 6//connect to RX of the module

#define trigPin 13//for the distance module
#define echoPin 12


SoftwareSerial mySerial(ARDUINO_RX, ARDUINO_TX);//init software serial, telling it which pins are RX and TX

////////////////////////////////////////////////////////////////////////////////////
//all the commands needed in the datasheet(http://geekmatic.in.ua/pdf/Catalex_MP3_board.pdf)
static int8_t Send_buf[8] = {0};//the MP3 player understands commands as 8-byte frames:
                                //0x7E FF 06 command 00 00 00 EF (if command = 01, play next song)
#define NEXT_SONG 0X01 
#define PREV_SONG 0X02 

#define CMD_PLAY_W_INDEX 0X03 //DATA IS REQUIRED (number of song)

#define VOLUME_UP_ONE 0X04
#define VOLUME_DOWN_ONE 0X05
#define CMD_SET_VOLUME 0X06//DATA IS REQUIRED (number of volume from 0 up to 30(0x1E))
#define CMD_PLAY_WITHVOLUME 0X22 //data is needed  0x7E 06 22 00 xx yy EF;(xx volume)(yy number of song)

#define CMD_SEL_DEV 0X09 //SELECT STORAGE DEVICE, DATA IS REQUIRED
#define DEV_TF 0X02      //the data required: the TF (microSD) card
#define SLEEP_MODE_START 0X0A
#define SLEEP_MODE_WAKEUP 0X0B

#define CMD_RESET 0X0C//CHIP RESET
#define CMD_PLAY 0X0D //RESUME PLAYBACK
#define CMD_PAUSE 0X0E //PLAYBACK IS PAUSED

#define CMD_PLAY_WITHFOLDER 0X0F//DATA IS NEEDED, 0x7E 06 0F 00 01 02 EF (play the song at \01\002xxxxxx.mp3)

#define STOP_PLAY 0X16

#define PLAY_FOLDER 0X17// data is needed 0x7E 06 17 00 01 XX EF;(play the 01 folder)(value xx we dont care)

#define SET_CYCLEPLAY 0X19//data is needed 00 start; 01 close

#define SET_DAC 0X17//data is needed 00 start DAC OUTPUT;01 DAC no output
////////////////////////////////////////////////////////////////////////////////////


void setup()
{
  Serial.begin(9600);   //start serial comms for the serial monitor on the PC
  mySerial.begin(9600); //start serial comms for the MP3 module
  delay(500);           //wait for chip initialisation to complete
  sendCommand(CMD_SEL_DEV, DEV_TF);//select the TF (microSD) card
  delay(200);           //wait 200 ms
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

void loop()
{
  if(measureDistance(trigPin, echoPin) < 50){
    sendCommand(CMD_PLAY_WITHFOLDER, 0X0203);//play the third song in the second folder
    delay(1000);//wait to avoid retriggering errors
  }
  delay(300);
}

void sendCommand(int8_t command, int16_t dat)
{
 delay(20);
 Send_buf[0] = 0x7e; //starting byte
 Send_buf[1] = 0xff; //version
 Send_buf[2] = 0x06; //the number of bytes of the command without starting byte and ending byte
 Send_buf[3] = command; //
 Send_buf[4] = 0x00;//0x00 = no feedback, 0x01 = feedback
 Send_buf[5] = (int8_t)(dat >> 8);//datah
 Send_buf[6] = (int8_t)(dat); //datal
 Send_buf[7] = 0xef; //ending byte
 for(uint8_t i=0; i<8; i++)//
 {
   mySerial.write(Send_buf[i]);//send byte to the serial MP3 module
   Serial.print(Send_buf[i],HEX);//echo byte to the serial monitor on the PC
 }
 Serial.println();
}

long measureDistance(int trigger,int echo){
   long duration, distance;
  
  digitalWrite(trigger, LOW);  //PULSE ___|---|___
  delayMicroseconds(2); 
  digitalWrite(trigger, HIGH);
  delayMicroseconds(10); 
  digitalWrite(trigger, LOW);
  
  duration = pulseIn(echo, HIGH);
  distance = (duration/2) / 29.1;//sound travels ~29.1 microseconds per cm; halve the duration for the round trip
   Serial.println("distance:");
   Serial.println(distance);
  return distance;

}

Aperture – initial sketches for musical visualisation projection

Some initial sketches for projecting light through apertures. The apertures will be based on a visualisation of a song or piece of audio. These are the shapes that will make up the apertures.

Aperture – initial sketches for musical visualisation projection
Aperture – initial sketches for musical visualisation projection – version 2