Research and inspiration 17/03/2018

Some links and thoughts from the past week.

This episode discusses the emoji-based augmented version of Bosch’s Garden of Earthly Delights by Carla Gannis, along with the concept of companies and institutions owning the ‘airspace’ or virtual space around their properties, such as paintings. It also asks whether we will be able to buy virtual land, as you can already do in Second Life. This felt particularly poignant in light of a piece I read recently on fastcodesign.com about ‘digital artists’ hijacking MoMA with AR.

This also sparked some research into RGB-D.
“In the past years, novel camera systems like the Microsoft Kinect or the Asus Xtion sensor that provide both color and dense depth images became readily available. There are great expectations that such systems will lead to a boost of new 3D perception-based applications in the fields of robotics and visual & augmented reality.”
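As a note to self on what that dense depth actually gives you: a minimal sketch (assuming a simple pinhole camera model and made-up intrinsics, not the calibration of any particular Kinect or Xtion) of back-projecting a depth image into a 3D point cloud.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a dense depth image (in metres) into an N x 3 point cloud,
    assuming a pinhole camera with focal lengths fx, fy (in pixels) and
    principal point (cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Toy example: a flat wall two metres away, seen by a hypothetical 640x480 sensor.
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```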

Intriguing short video that makes you wonder if it is CGI or a model / set:
Club Palace (Real or CGI?) – NOWNESS. Inspiration for the ‘set’ around the What Goes Around space objects, perhaps?

Have also been exploring how to wirelessly network the various sensors that will be attached to the Space Rocks, and have been investigating XBee:
https://www.arduino.cc/en/Main/ArduinoXbeeShield
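For reference, a rough sketch of what the receiving end could look like, assuming each Space Rock’s XBee streams newline-terminated “id,value” readings and the coordinator XBee shows up on the laptop as a USB serial device. The port name, baud rate and packet format here are guesses rather than a tested setup.

```python
import serial  # pyserial

# Assumed settings: coordinator XBee on /dev/ttyUSB0 at 9600 baud, with each
# remote node sending lines like "rock3,417\n" (sensor id, raw reading).
PORT = "/dev/ttyUSB0"
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=1) as xbee:
    while True:
        line = xbee.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue  # read timed out with no data
        try:
            node_id, value = line.split(",")
            print(f"{node_id}: {int(value)}")
        except ValueError:
            print("malformed packet:", line)
```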

Also been investigating the Arecibo message: a short radio message sent into space in 1974 to celebrate the remodelling of the Arecibo radio telescope in Puerto Rico. It was aimed at the globular star cluster M13, about 25,000 light years from Earth. M13 was chosen because it was the right size, and was in the sky at the right time and place for the ceremony.
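The encoding itself is the neat part: the message is 1,679 bits long because 1,679 is the product of the primes 23 and 73, so a recipient can only lay it out sensibly as a 23 × 73 or 73 × 23 grid. A quick sketch of rendering a bitstream like that (with a dummy checkerboard pattern standing in for the real message data):

```python
import numpy as np

# The Arecibo message is 1,679 bits: 1,679 = 23 * 73, a semiprime, so the
# only sensible layouts are a 23 x 73 or 73 x 23 grid of pixels.
ROWS, COLS = 73, 23

# Dummy payload standing in for the real bitstream (a checkerboard pattern).
bits = [(i + i // COLS) % 2 for i in range(ROWS * COLS)]

grid = np.array(bits).reshape(ROWS, COLS)
for row in grid:
    print("".join("#" if b else "." for b in row))
```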


And someone has even created a response to it.

And also The Von Neumann Probe (A Nano Ship to the Stars). 
Simply put, a Von Neumann probe is a self-replicating device that could, one day, be used to explore every facet of the Milky Way in a relatively small window of time.
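The ‘relatively small window of time’ claim boils down to exponential replication. A back-of-the-envelope sketch (the star count and replication factor are illustrative assumptions, not figures from the article):

```python
import math

STARS_IN_MILKY_WAY = 4e11  # rough order of magnitude
COPIES_PER_PROBE = 2       # assumed: each probe builds two successors

# Generations of replication needed before the probe population
# exceeds the number of stars in the galaxy.
generations = math.ceil(math.log(STARS_IN_MILKY_WAY, COPIES_PER_PROBE))
print(generations)  # 39 -- a few dozen doublings covers the whole galaxy
```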

Creative AI meetup #17: Hopes and Fears for AI

This month’s Creative AI meetup was on the topic of Hopes and Fears for AI. It once again featured two speakers, though this time both came from a more scientific / academic background than at previous meetups, which have usually paired an artist with a scholar.

First up, Beth Singler (Faraday Institute for Science and Religion / Centre for the Future of Intelligence) considered the influence of current dominant narratives around AI.

Her talk was on the topic of “Prophecy or Prediction? Artificial Intelligence and Imagining the Future”:
The stories that we tell ourselves about artificial intelligence influence the development of the technology itself. This talk will consider the influence of current dominant narratives – shared through the press and through media such as television, film, and memes – and how those stories can present as prediction while containing elements of prophetic judgement within them. The role of specific charismatic voices such as Ray Kurzweil, the “Prophet of Both Techno-Doom and Techno-Salvation” (Motherboard 2011) in perpetuating and shaping accounts of the future will also be considered, as well as the purpose of such accounts. How such eschatological or apocalyptic accounts affect individuals will also be addressed, with reference to accounts of anxiety and fear, along with how far future stories and imagery might serve to prevent public engagement with more near future issues.

Dr Beth Singler is the Research Associate on the “Human Identity in an age of Nearly-Human Machines” project at the Faraday Institute for Science and Religion. She is exploring the social, ethical, philosophical and religious implications of advances in Artificial Intelligence and robotics. As a part of the project she is producing a series of short documentaries, including Pain in the Machine, which won the 2017 AHRC Best Research Film of the Year Award. Beth is also an Associate Research Fellow at the Leverhulme Centre for the Future of Intelligence, collaborating on a project on AI Narratives.

The second speaker was Matthew Crosby, a postdoc at Imperial working on the Kinds of Intelligence project as part of the Leverhulme Centre for the Future of Intelligence. He is interested in the relationship between different forms of intelligence (especially artificial), and consciousness. He maintains a blog on consciousness and the future of intelligence at mdcrosby.com/blog, where you can also find more information about his work.

He discussed “AI Suffering”:
AI has the potential to change human lives for better and for worse. This is a general property of technological advances, which have previously brought greater (technological) power, and, with that, greater (moral) responsibility. What is different about AI, however, is the possibility of creating sentient entities, for which we may be morally responsible. By creating such entities, we risk increasing the amount of suffering in the world – not for us, but for them. Thomas Metzinger has called for a moratorium on any AI research that could result in AI entities that suffer. However, it is not clear exactly which research constitutes a risk. Metzinger focuses on research into conscious AI. I believe this is too narrow. In this talk I will argue that all progress in AI is progress towards creating entities with a capacity for suffering. AI suffering may be inevitable. It may also be a moral necessity.

Creative AI meetup #15: Existential Risk and Computational Creativity – 11th January 2018

The Creative AI meetup is designed to bring together artists, developers, designers, technologists and industry professionals to discuss applications of artificial intelligence in the creative industries.

January’s meetup covered existential risk and computational creativity.

Shahar Avin of Centre for the Study of Existential Risk presented his work on a superintelligence modification (mod) for Sid Meier’s Civilization® V, a popular turn-based strategy game. The aim of the mod is to concretise some of the issues surrounding catastrophic AI risks, and to put individuals in a situation that makes both the risks and possible solutions accessible.

The mod allows the player to pursue victory through technological superiority, via developing a safe superintelligence, while introducing associated risks from rogue superintelligence, which could lead to human extinction (and game loss). Players can allocate resources to AI research and to AI safety research, negotiate AI treaties with other civilisations, all while balancing the demands of all the other interlocking systems of the game, including trade, diplomacy and warfare. The mod was made available to a closed group of testers and the responses were mixed, highlighting some of the difficulties of concretising abstract concepts in this area, while also suggesting certain key characteristics of the domain are amenable to treatment through the medium of a video game.
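To get my head around that mechanic, here is a toy model of the trade-off; every number in it is invented for illustration and has nothing to do with the actual mod’s balancing.

```python
import random

def play(ai_share, turns=200, seed=0):
    """Toy model of the trade-off (all numbers invented, not from the talk):
    each turn a fraction ai_share of research goes to AI capability and the
    rest to AI safety; capability brings victory closer, but capability that
    outpaces safety risks a rogue superintelligence and instant loss."""
    rng = random.Random(seed)
    capability = safety = 0.0
    for _ in range(turns):
        capability += ai_share
        safety += 1.0 - ai_share
        if capability >= 100:
            return "victory: safe superintelligence developed"
        rogue_risk = max(0.0, capability - safety) / 1000.0
        if rng.random() < rogue_risk:
            return "extinction: rogue superintelligence"
    return "time ran out"

for share in (0.5, 0.8, 1.0):
    print(f"{share:.0%} of research on capability -> {play(share)}")
```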

An interesting discussion at the end included one audience member mentioning versions of Civilization® played solely by AIs, all of which ended in destruction and the loss of the game. The lesson seems to be that the modelling supports the idea that time spent developing safe superintelligence could avert catastrophic risks and, ultimately, human extinction.

Simon Colton, the second speaker, explored how generative systems can evolve into genuinely creative, autonomous systems.

Simon’s talk was entitled “From Creative AI to Computational Creativity and Back Again”.

One of the maxims emerging from the Creative AI movement, fuelled by developments in generative deep learning models, is the notion of producing the highest quality output possible from creative systems. If we accept that how and why an artwork was produced is often taken into consideration when value judgements are made, then the academic field of Computational Creativity has much to offer Creative AI practitioners. Simon explored ways in which generative systems can evolve into genuinely creative, autonomous systems, drawing on 20 years of Computational Creativity research. Conversely, the remarkable power of generative networks to hallucinate images, music and text represents a real boon for Computational Creativity researchers interested in the simulation of imaginative behaviour. He also touched on the ways in which we currently (and could in future) harness this power to explore practical and philosophical aspects of the idea that software can be creative.

The most interesting aspects of the talk were his assertion that there is no agreed definition of creativity, and the idea that the context of a piece of creative work counts for much. If we know an artwork is made solely by AI, we judge it very differently, and usually negatively.

Simon Colton is a Professor of Digital Games Technologies at Falmouth University and a part-time Professor of Computational Creativity at Goldsmiths College. He is an AI researcher specialising in questions of Computational Creativity: getting software to autonomously create artefacts of real value in interesting ways. He has published nearly 200 papers and his research has won national and international prizes. He is best known for the software he has written and co-written to make mathematical discoveries; paint pictures; make games and generate fictional ideas, including The Painting Fool. He is also known for his philosophical and theoretical contributions to Computational Creativity, in particular driving forward the assessment of creative software via what it does, rather than what it produces.

Hacking AI monitoring and surveillance systems

Machine learning systems are very capable, but they aren’t exactly smart. They lack common sense. Taking advantage of that fact, researchers have created a wonderful attack on image recognition systems that uses specially-printed stickers that are so interesting to the AI that it completely fails to see anything else.
https://techcrunch.com/2018/01/02/these-psychedelic-stickers-blow-ai-minds/
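The stickers are ‘adversarial patches’: rather than perturbing one specific photo, you optimise a small image region so that, pasted into more or less any scene, it dominates the classifier’s prediction. A rough sketch of the idea in PyTorch (the model choice, the fixed patch placement and the use of random noise as stand-in photos are all simplifications of the real technique, which also randomises patch location, scale and rotation):

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Optimise a small "sticker" so the classifier reports the target class
# no matter what else is in the image.
model = models.resnet18(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

TARGET_CLASS = 859   # "toaster" in the standard ImageNet class list
PATCH_SIZE = 64      # pixels
patch = torch.rand(1, 3, PATCH_SIZE, PATCH_SIZE, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

def apply_patch(images, patch):
    """Paste the patch into the top-left corner of each image."""
    images = images.clone()
    images[:, :, :PATCH_SIZE, :PATCH_SIZE] = patch.clamp(0, 1)
    return images

for step in range(100):
    # Stand-in for a batch of real photos; a real attack trains over a dataset.
    batch = torch.rand(8, 3, 224, 224)
    logits = model(apply_patch(batch, patch))
    loss = F.cross_entropy(logits, torch.full((8,), TARGET_CLASS, dtype=torch.long))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final loss:", loss.item())
```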

Harvey is one of a growing number of privacy-focused designers and developers “exploring new opportunities that are the result of [heightened] surveillance,” and working to establish lines of defense against it. He’s spent the past several years experimenting with strategies for putting control over people’s privacy back in their own hands, in their pockets and on their faces.
https://www.alternet.org/news-amp-politics/anti-surveillance-state-clothes-and-gadgets-block-face-recognition-technology

When human vision is no longer the only game in town, don’t leave home without this umbrella studded with infrared LEDs visible only to CCD surveillance cameras, designed to let you flirt with object tracking algorithms used in advanced surveillance systems. Use in pairs with a friend to train these systems to recognize nonhuman shapes and patterns more common to dreams and hallucinations than to your average city street.
https://survival.sentientcity.net/umbrella.html

Music from other sources

A bit of research today, looking at sound and technology, particularly AI.

George Philip Wright’s Vochlea gadget transforms human voice into instruments

Ripple Player iOS app

Description
Creating playlists is a burden. Shuffling songs is annoying. That’s why Ripple was born.

With Ripple player, creating playlists is no more than a simple click. All you need to do is select one song as an origin, and a playlist will be automatically generated for you. The playlist will be random, yet contain songs that are coherent and match each other well.

Advanced Algorithm
– Generate local coherent playlists
– No Internet access needed
– Fast and Accurate
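I don’t know what Ripple actually does under the hood, but ‘one song as an origin’ plus ‘no Internet access needed’ suggests something like nearest-neighbour search over audio features stored on the device. A hypothetical sketch, with invented feature vectors:

```python
import numpy as np

# Hypothetical on-device audio features (tempo, energy, brightness),
# one entry per song in the local library. Values are invented.
library = {
    "Song A": [120, 0.8, 0.6],
    "Song B": [118, 0.7, 0.5],
    "Song C": [90, 0.3, 0.2],
    "Song D": [122, 0.9, 0.7],
    "Song E": [60, 0.2, 0.1],
}

def ripple_style_playlist(origin, length=3):
    """Pick the songs whose features sit closest to the origin song's,
    mimicking a 'locally coherent' playlist grown from one seed track."""
    names = list(library)
    feats = np.array([library[n] for n in names], dtype=float)
    feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)  # normalise features
    seed = feats[names.index(origin)]
    dists = np.linalg.norm(feats - seed, axis=1)
    order = [names[i] for i in np.argsort(dists) if names[i] != origin]
    return order[:length]

print(ripple_style_playlist("Song A"))  # nearest neighbours of Song A, most similar first
```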

Household objects become musical instruments with Sound Pegs by Nick Brennan

And finally…

Social Reality

Reading today about social VR, and pondering why more social platforms aren’t using VR to connect people virtually. Or if that is even a good idea?

Rec Room has plenty of flaws, but it nonetheless shows the power of today’s truly immersive virtual-reality technology to promote connections between people in ways that past attempts at virtual socializing—remember Second Life?—could never muster. The interactions with others are largely intuitive; to become friends with people in Rec Room, for instance, you shake their hands, which produces buzzing feedback in the handheld controller. I’ve had a blast spending time in Rec Room with my one other friend who uses VR, who in real life lives across the country. And it’s also the only virtual environment I’ve found that prompts you to connect with people you don’t know in ways that aren’t so awkward you want to rip off your headset.
https://www.technologyreview.com/s/607956/virtual-realitys-missing-element-other-people/


Beyond Artificial Intelligence

Have been reading Beyond Artificial Intelligence, a series of essays exploring the ‘Disappearing Human-Machine Divide’. The book is not what I expected, but is provoking lots of thought on how interactivity could become frictionless. The chapters so far consider how people can connect with each other and / or machines without serious hardware. This includes an example of sending nerve impulses directly between two human beings.