This week’s research – 06/05/2018

Marco Marchesi – Practical uses of style transfer in the creative industry – I missed this month’s Creative AI Meetup, but the presentation looks interesting: how to use AI style transfer in creative industry projects.

Looking through Arduino projects using audio this week to find some techniques that may be useful for the Space Rock interactions:
https://create.arduino.cc/projecthub/projects/tags/audio
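
To get a feel for the common pattern behind these projects, here is a minimal Arduino-style sketch of the basic technique: read a sound sensor on an analog pin and react when the level crosses a threshold. The pin numbers, the threshold and the LED stand-in are assumptions for illustration, not code from any of the linked projects.

```cpp
// Minimal sketch (illustrative only): react when the ambient sound level
// read from an analog sound-sensor module crosses a threshold.
const int SOUND_PIN = A0;    // analog sound sensor (e.g. an electret module) - assumed wiring
const int LED_PIN = 13;      // stand-in for whatever the Space Rock would actually do
const int THRESHOLD = 600;   // tune by watching the Serial Monitor

void setup() {
  pinMode(LED_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int level = analogRead(SOUND_PIN);   // 0-1023
  Serial.println(level);               // handy for choosing a sensible threshold
  digitalWrite(LED_PIN, level > THRESHOLD ? HIGH : LOW);
  delay(10);
}
```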

Inspired by Active Matter, I am looking at materials for the Space Rock (other than the Jesmonite currently planned), including bits of space junk.

And dug out this cult classic as inspiration for the Space Rock audio content and narrative.

I Hear a New World is a studio concept album written and produced by Joe Meek with the Blue Men, partially released as an EP in 1960. The album was Meek’s pet project. He was fascinated by the space programme, and believed that life existed elsewhere in the solar system. This album was his attempt “to create a picture in music of what could be up there in outer space”, he explained. “At first I was going to record it with music that was completely out of this world but realized that it would have very little entertainment value so I kept the construction of the music down to earth”.

Creative AI meetup #17: Hopes and Fears for AI

This month’s Creative AI meetup was on the topic of Hopes and Fears for AI. It once again featured two speakers, both from a more scientific/academic background this time, whereas previous meetups have usually paired an artist with a scholar.

First up, Beth Singler (Faraday Institute for Science and Religion / Centre for the Future of Intelligence) considered the influence of current dominant narratives around AI.

Her talk was on the topic of “Prophecy or Prediction? Artificial Intelligence and Imagining the Future”:
The stories that we tell ourselves about artificial intelligence influence the development of the technology itself. This talk will consider the influence of current dominant narratives – shared through the press and through media such as television, film, and memes – and how those stories can present as prediction while containing elements of prophetic judgement within them. The role of specific charismatic voices such as Ray Kurzweil, the “Prophet of Both Techno-Doom and Techno-Salvation” (Motherboard 2011), in perpetuating and shaping accounts of the future will also be considered, as well as the purpose of such accounts. How such eschatological or apocalyptic accounts affect individuals will also be addressed, with reference to accounts of anxiety and fear, along with how far-future stories and imagery might serve to prevent public engagement with more near-future issues.

Dr Beth Singler is the Research Associate on the “Human Identity in an age of Nearly-Human Machines” project at the Faraday Institute for Science and Religion. She is exploring the social, ethical, philosophical and religious implications of advances in Artificial Intelligence and robotics. As a part of the project she is producing a series of short documentaries, including Pain in the Machine, which won the 2017 AHRC Best Research Film of the Year Award. Beth is also an Associate Research Fellow at the Leverhulme Centre for the Future of Intelligence, collaborating on a project on AI Narratives.

The second speaker was Matthew Crosby, a postdoc at Imperial working on the Kinds of Intelligence project as part of the Leverhulme Centre for the Future of Intelligence. He is interested in the relationship between different forms of intelligence (especially artificial), and consciousness. He maintains a blog on consciousness and the future of intelligence at mdcrosby.com/blog, where you can also find more information about his work.

He discussed “AI Suffering”:
AI has the potential to change human lives for better and for worse. This is a general property of technological advances, which have previously brought greater (technological) power, and, with that, greater (moral) responsibility. What is different about AI, however, is the possibility of creating sentient entities, for which we may be morally responsible. By creating such entities, we risk increasing the amount of suffering in the world – not for us, but for them. Thomas Metzinger has called for a moratorium on any AI research that could result in AI entities that suffer. However, it is not clear exactly which research constitutes a risk. Metzinger focuses on research into conscious AI. I believe this is too narrow. In this talk I will argue that all progress in AI is progress towards creating entities with a capacity for suffering. AI suffering may be inevitable. It may also be a moral necessity.

This week’s research and inspiration

Feedback. I’m used to creating this physically with an amp and a sound input device (guitar or microphone), but can the same effect be created with a sensor and digital audio output?
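
As a quick sanity check of the idea, here is a small C++ sketch (an assumption for illustration, not project code) that treats feedback as a delay line whose feedback gain is pushed towards 1 by a normalised sensor value. Once the gain approaches 1 the loop sustains rather than dying away, which is essentially what the amp-and-microphone setup does physically; in hardware the sensor value could simply come from an analogRead().

```cpp
// Illustrative simulation of a sensor-controlled digital feedback loop:
// a single impulse excites a 100 ms delay line, and a (stand-in) sensor
// value pushes the feedback gain from 0.5 towards 0.99, so the echo decays
// ever more slowly. Pushing the gain past 1.0 would make it grow, like a
// real feedback howl.
#include <cstdio>
#include <cmath>
#include <vector>

int main() {
    const int sampleRate = 44100;
    const int delaySamples = sampleRate / 10;           // 100 ms delay line
    std::vector<float> delayLine(delaySamples, 0.0f);
    int writeIndex = 0;

    // Hypothetical sensor reading, normalised to 0..1; here it just ramps
    // up over two seconds. On hardware this would be a real sensor value.
    auto readSensor = [&](int n) {
        return std::fmin(1.0f, n / float(sampleRate * 2));
    };

    for (int n = 0; n < sampleRate * 4; ++n) {
        float input = (n == 0) ? 1.0f : 0.0f;           // one impulse to excite the loop
        float feedbackGain = 0.5f + 0.49f * readSensor(n);
        float delayed = delayLine[writeIndex];           // sample written one loop ago
        float output = input + feedbackGain * delayed;   // the feedback path
        delayLine[writeIndex] = output;
        writeIndex = (writeIndex + 1) % delaySamples;
        if (n % delaySamples == 0)                       // print once per loop round-trip
            std::printf("t=%.2fs  gain=%.2f  level=%.5f\n",
                        n / float(sampleRate), feedbackGain, std::fabs(output));
    }
    return 0;
}
```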

Interesting Fast Co piece “about new, video-generating AI that’s dissolving the line between fact and fiction.”
https://www.fastcodesign.com/90162494/the-war-on-whats-real

OSMO
Saw a talk from Rachel Wingfield of Loop.pH, who showed video of OSMO – http://loop.ph/portfolio/osmo-ted2015/ – “an experiment in totally transforming a public space into a place of wonder and tranquility.” My favourite detail about this is that the whole ‘space’ folds up to the size of a suitcase, apparently. Oh, and that it was originally set up under the A13 flyover in Canning Town, London.

Some articles about generative (product) design:

The Alien Style of Deep Learning Generative Design
https://medium.com/intuitionmachine/the-alien-look-of-deep-learning-generative-design-5c5f871f7d10

Autodesk Project Dreamcatcher
https://autodeskresearch.com/projects/dreamcatcher

NASA’s Evolved Antenna, an aerial designed in 2006 by an automatic computer design program using an evolutionary algorithm.
https://en.wikipedia.org/wiki/Evolved_antenna
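
As a reminder of how that kind of evolutionary design works in principle, here is a toy C++ sketch of the generic loop (purely illustrative, nothing to do with NASA’s actual antenna software): mutate a population of candidate designs, score them against a fitness function, keep the fittest, repeat. The “design” here is just a single number and the target is arbitrary.

```cpp
// Toy evolutionary loop: a "design" is one number, fitness is closeness
// to an arbitrary target. Real systems evolve far richer representations
// (antenna geometries, CAD parameters) against simulated performance.
#include <cstdio>
#include <cmath>
#include <vector>
#include <random>
#include <algorithm>

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> mutate(0.0, 0.5);
    const double target = 2.718;   // stand-in for "how well the design meets the spec"
    auto fitness = [&](double x) { return -std::fabs(x - target); };

    std::vector<double> population(20, 0.0);   // start from naive designs
    for (int generation = 0; generation < 50; ++generation) {
        // Mutation: every parent produces one randomly perturbed offspring.
        std::vector<double> offspring = population;
        for (double& d : offspring) d += mutate(rng);
        population.insert(population.end(), offspring.begin(), offspring.end());

        // Selection: keep the best half for the next generation.
        std::sort(population.begin(), population.end(),
                  [&](double a, double b) { return fitness(a) > fitness(b); });
        population.resize(20);
    }
    std::printf("best design: %.4f (target %.4f)\n", population.front(), target);
    return 0;
}
```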

Audio:
Planning to go through some of the ‘homework’ on Modular Curiosity to get my head around VCV Rack:
https://www.youtube.com/channel/UCnZEv3hADF9ELOIwUNu6RVg

And discovered this Mica Levi piece today, Delete Beach, which inspired lots of ideas for narrative/s for What Goes Around...

Some inspiration from this week (or so)

Love this (free download) series of one-minute sound sculptures / loops:
https://machinefabriek.bandcamp.com/album/minuten
Thinking about them left playing all together in a darkened room somewhere.

Also exploring music that is an evocation of place, real or fictional. This is an interesting example: “…a psychogeographic investigation into a world of abandoned Underground stations, Quatermass, eighteenth-century secret societies and the footsore reveries of a modern Flâneur.”

And this beautiful, tactile (if slightly creepy) synth design:
http://www.electronicbeats.net/the-feed/beautiful-occult-synthesizer-lets-conjure-dark-droning-soundscapes/

Interesting Quartz piece about algorithmic accountability. It ties in with the Creative AI talk I attended in January, where there was some discussion of how AI left to run things without checks and balances (tested in versions of the game Civilization) invariably leads to world destruction:
https://qz.com/1211313/artificial-intelligences-paper-clip-maximizer-metaphor-can-explain-humanitys-imminent-doom/

REALLY tempted to get tickets for this, even though it’s in another country. “A workshop on the radical potential of artificial intelligence (AI) in combination with robotics to change human bodily experience.”:
https://www.eventbrite.nl/e/tickets-workshop-human-machine-configurations-by-marco-donnarumma-and-ana-rajcevic-43197660365

Also an honourable mention to Damon Krukowski, whose talk about his new book The New Analog I attended last week at Second Home Spitalfields. Damon discussed the ways in which the switch from analogue to digital audio has influenced the way we perceive and think about everything from time and space to love, money and power. His use of the sound engineer’s distinction between signal and noise, and how that balance shifts in a digital world (basically, there is much more signal and far less noise in the digital realm), was particularly thought-provoking.

Creative AI meetup #15: Existential Risk and Computational Creativity – 11th January 2018

The Creative AI meetup is designed to bring together artists, developers, designers, technologists and industry professionals to discuss applications of artificial intelligence in the creative industries.

January’s meetup covered existential risk and computational creativity.

Shahar Avin of the Centre for the Study of Existential Risk presented his work on a superintelligence modification (mod) for Sid Meier’s Civilization® V, a popular turn-based strategy game. The aim of the mod is to concretise some of the issues surrounding catastrophic AI risks, and to put individuals in a situation that makes both the risks and possible solutions accessible.

The mod allows the player to pursue victory through technological superiority by developing a safe superintelligence, while introducing associated risks from rogue superintelligence, which could lead to human extinction (and loss of the game). Players can allocate resources to AI research and AI safety research and negotiate AI treaties with other civilisations, all while balancing the demands of the game’s other interlocking systems, including trade, diplomacy and warfare. The mod was made available to a closed group of testers and the responses were mixed, highlighting some of the difficulties of concretising abstract concepts in this area, while also suggesting that certain key characteristics of the domain are amenable to treatment through the medium of a video game.

An interesting discussion at the end included one audience member mentioning versions of Civilization® played solely by AI, all of which ended in destruction and the end of the game. The lesson, seemingly, is that time spent developing safe superintelligences could avert catastrophic risks and, ultimately, human extinction.

Simon Colton explored ways in which generative systems can evolve into genuinely creative, autonomous systems, drawing on 20 years of Computational Creativity research.

Simon’s talk was entitled “From Creative AI to Computational Creativity and Back Again”.

One of the maxims emerging from the Creative AI movement, fuelled by developments in generative deep learning models, is the notion of producing the highest-quality output possible from creative systems. If we accept that how and why an artwork was produced is often taken into consideration when value judgements are made, then the academic field of Computational Creativity has much to offer Creative AI practitioners. Simon explored ways in which generative systems can evolve into genuinely creative, autonomous systems, drawing on 20 years of Computational Creativity research. Conversely, the remarkable power of generative networks to hallucinate images, music and text represents a real boon for Computational Creativity researchers interested in the simulation of imaginative behaviour. He also touched on the ways in which we currently (and could in future) harness this power to explore practical and philosophical aspects of the idea that software can be creative.

The most interesting aspects of the talk were his assertion that there is no agreed definition of creativity, and the idea that the context of a piece of creative work counts for much. If we know an artwork is made solely by AI, we judge it very differently, and usually negatively.

Simon Colton is a Professor of Digital Games Technologies at Falmouth University and a part-time Professor of Computational Creativity at Goldsmiths College. He is an AI researcher specialising in questions of Computational Creativity: getting software to autonomously create artefacts of real value in interesting ways. He has published nearly 200 papers and his research has won national and international prizes. He is best known for the software he has written and co-written to make mathematical discoveries, paint pictures (including The Painting Fool), make games and generate fictional ideas. He is also known for his philosophical and theoretical contributions to Computational Creativity, in particular driving forward the assessment of creative software by what it does, rather than what it produces.