This week’s research and inspiration

Feedback. I'm used to creating this physically with an amp and a sound input device (guitar or microphone), but can the same effect be created with a sensor and digital audio output?

Interesting Fast Co piece “about new, video-generating AI that’s dissolving the line between fact and fiction.”

Saw a talk from Rachel Wingfield of Loop who showed video of OSMO – “an experiment in totally transforming a public space into a place of wonder and tranquility.” My favourite detail about this is that the whole ‘space’ folds up to the size of a suitcase, apparently. Oh, and that it was originally set up under the A13 flyover in Canning Town, London.

Some articles about generative (product) design.

The Alien Style of Deep Learning Generative Design

Autodesk Project Dreamcatcher

NASA’s Evolved Antenna, an aerial designed by an automatic computer design program using an evolutionary algorithm. In 2006.
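The evolved antenna was produced by repeatedly mutating candidate designs and keeping the ones a simulator scored best. A minimal sketch of that loop, with a toy fitness function standing in for the antenna simulator (the target vector, population size and mutation scale are all assumptions for illustration):

```python
import random

def evolve(fitness, genome_len=4, pop_size=30, generations=100,
           mutation_scale=0.1, seed=42):
    """Minimal elitist evolutionary loop: keep the best half of the
    population each generation and refill with mutated copies."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # best first
        survivors = pop[:pop_size // 2]
        children = [[g + rng.gauss(0, mutation_scale) for g in parent]
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy stand-in for an antenna simulator: fitness peaks when the genome
# matches a fixed target "design" (hypothetical, for illustration only).
TARGET = [0.3, -0.7, 0.5, 0.1]
def toy_fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

best = evolve(toy_fitness)
print([round(g, 2) for g in best])
```

Because survivors are carried over unmutated, the best fitness can never get worse from one generation to the next; NASA's system did essentially this, but scored each genome with an electromagnetic simulation instead of a toy function.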

Planning to go through some of the ‘homework’ on Modular Curiosity to get my head around VCV Rack:

And discovered this Mica Levi piece today, Delete Beach, which inspired lots of ideas for narrative/s for What Goes Around...

Some inspiration from this week (or so)

Love this (free download) series of one minute soundsculptures / loops:
Thinking about them left playing all together in a darkened room somewhere.

Also exploring music that is an evocation of place, real or fictional. This is an interesting example. “…a psychogeographic investigation into a world of abandoned Underground stations, Quatermass, eighteenth century secret societies and the footsore reveries of a modern Flâneur.”:

And this beautiful, tactile (if slightly creepy) synth design:

Interesting Quartz piece about algorithmic accountability. Ties in with the Creative AI talk I attended in January, where there was some discussion about AI being left to run things without checks and balances (tested in versions of the game Civilisation) invariably leading to world destruction:

REALLY tempted to get tickets for this, even though it’s in another country. “A workshop on the radical potential of artificial intelligence (AI) in combination with robotics to change human bodily experience.”:

Also an honourable mention to Damon Krukowski, whose talk at Second Home Spitalfields about his new book The New Analog I attended last week. Damon discussed the ways in which the switch from analogue to digital audio has influenced the way we perceive and think about everything from time and space to love, money and power. His use of the sound engineer’s distinction between signal and noise, and how that balance shifts in a digital world (essentially, far more signal and far less noise in the digital realm), was particularly thought-provoking.

Creative AI meetup #16: The Art of DeepDream and GANs

February’s talk looked at DeepDream as a window on aesthetic experience (Owain Evans, University of Oxford) and GANs in an art context (Anna Ridler, Artist).

Owain Evans, Postdoc, University of Oxford

“Deep Dream as a Window on Aesthetic Experience”

Deep Dream produces intriguing, dog-filled images. This talk is not about these images but about the process that generates them. I’ll explain the process and consider how it sheds light on human aesthetic experience. Deep Dream works because the neural network automatically computes “resemblances” between disparate objects: e.g. between a meatball and a dog, or a cat’s ear and a beak. Our own ability to see these resemblances is crucial to our experiencing art.
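The abstract describes the process only in outline; mechanically, DeepDream is gradient ascent on the input image rather than on the network's weights, nudging pixels to amplify whatever features the network already "sees". A minimal numpy sketch of that core loop, using a fixed random filter as a stand-in for a pretrained network's feature detector (the filter, image size and step size are assumptions for illustration; the real thing uses a deep pretrained convnet such as Inception):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))     # stand-in "image"
feature = rng.normal(size=(8, 8))   # stand-in learned feature detector

def activation(img):
    # How strongly the stand-in "feature" fires on this image.
    return float(np.sum(img * feature))

step = 0.1
before = activation(image)
for _ in range(20):
    grad = feature                  # d(activation)/d(image) for this linear feature
    image = image + step * grad     # gradient ASCENT: amplify the feature
after = activation(image)

print(before, "->", after)
```

Each step adds a little more of the feature pattern into the image, so the activation only grows; in a real network the "feature" is a dog face or an eye, which is why those resemblances bloom out of arbitrary photographs.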

Owain Evans is a postdoc at the University of Oxford, working in Machine Learning with a focus on AI Safety. He also leads a collaboration on “Inferring Human Preferences” with Stanford University. His PhD is from MIT, where he worked on cognitive science, probabilistic programming, and philosophy of science.

Anna Ridler, Artist

“Misremembering and mistranslating: using GANs in an art context”

Research has looked at whether artificial intelligence, and more particularly machine learning, can create art. Producing an image with a GAN rather than by any other means gives the viewer a different set of experiences, expectations, histories, traces and contexts to consider. What are these associations and how might they be used in a piece of work? I look at how I have used these associations in my own work and projects, focusing particularly on training sets and GAN-generated imagery.

Anna Ridler is an artist and researcher whose practice brings together technology, literature and drawing. She is interested in working with abstract collections of information or data, particularly self-generated data sets, to create new and unusual narratives in a variety of mediums, and in how new technologies, such as machine learning, can be used to translate them clearly to an audience. She works heavily with technology at both the front and back end of projects (what is exhibited as well as the research that goes into the piece). Her intention is to make work that is not about technology for its own sake, but rather uses these technologies as a tool to talk about other things – memory, love, decay – or to augment or change the story in a way that otherwise would not happen.

Particularly interested in Pix2Pix, and plan to dabble with that over the next few weeks.

Creative AI meetup #15: Existential Risk and Computational Creativity – 11th January 2018

The Creative AI meetup is designed to bring together artists, developers, designers, technologists and industry professionals to discuss applications of artificial intelligence in the creative industries.

January’s talk discussed existential risk and computational creativity.

Shahar Avin of Centre for the Study of Existential Risk presented his work on a superintelligence modification (mod) for Sid Meier’s Civilization® V, a popular turn-based strategy game. The aim of the mod is to concretise some of the issues surrounding catastrophic AI risks, and to put individuals in a situation that makes both the risks and possible solutions accessible.

The mod allows the player to pursue victory through technological superiority, via developing a safe superintelligence, while introducing associated risks from rogue superintelligence, which could lead to human extinction (and game loss). Players can allocate resources to AI research and AI safety research, and negotiate AI treaties with other civilisations, all while balancing the demands of all the other interlocking systems of the game, including trade, diplomacy and warfare. The mod was made available to a closed group of testers and the responses were mixed, highlighting some of the difficulties of concretising abstract concepts in this area, while also suggesting certain key characteristics of the domain are amenable to treatment through the medium of a video game.

An interesting discussion at the end included one audience member mentioning versions of Civilization® that have been played solely by AI, all of which have ended in destruction and the end of the game. The lesson, seemingly, is that the modelling suggests time spent on developing safe superintelligences could avert catastrophic risks and, ultimately, human extinction.

Simon Colton explored ways in which generative systems can evolve into genuinely creative, autonomous systems, drawing on 20 years of Computational Creativity research.

Simon’s talk was entitled “From Creative AI to Computational Creativity and Back Again”.

One of the maxims emerging from the Creative AI movement, fuelled by developments in generative deep learning models, is the notion of producing the highest quality output possible from creative systems. If we accept that how and why an artwork was produced is often taken into consideration when value judgements are made, then the academic field of Computational Creativity has much to offer Creative AI practitioners. Conversely, the remarkable power of generative networks to hallucinate images, music and text represents a real boon for Computational Creativity researchers interested in the simulation of imaginative behaviour, and in the ways in which we currently (and could in future) harness this power to explore practical and philosophical aspects of the idea that software can be creative.

The most interesting aspects of the talk were his assertion that there is no agreed definition of creativity, and the idea that the context of a piece of creative work counts for much. If we know an artwork is made solely by AI, we judge it very differently, and usually negatively.

Simon Colton is a Professor of Digital Games Technologies at Falmouth University and a part-time Professor of Computational Creativity at Goldsmiths College. He is an AI researcher specialising in questions of Computational Creativity — getting software to autonomously create artefacts of real value in interesting ways. He has published nearly 200 papers and his research has won national and international prizes. He is best known for the software he has written and co-written to make mathematical discoveries, paint pictures, make games and generate fictional ideas, including The Painting Fool. He’s also known for his philosophical and theoretical contributions to Computational Creativity, in particular driving forward the assessment of creative software via what it does, rather than what it produces.