Some initial sketches for projecting light through apertures. The apertures will be based on a visualisation of a song or piece of audio. These are the shapes that will make up the apertures.
February’s talk looked at DeepDream as a window on aesthetic experience (Owain Evans, University of Oxford) and GANs in an art context (Anna Ridler, Artist).
Owain Evans, Postdoc, University of Oxford
“Deep Dream as a Window on Aesthetic Experience”
Deep Dream produces intriguing, dog-filled images. This talk is not about the images themselves but about the process that generates them. I’ll explain that process and consider how it sheds light on human aesthetic experience. Deep Dream works because the neural network automatically computes “resemblances” between disparate objects: e.g. between a meatball and a dog, or a cat’s ear and a beak. Our own ability to see these resemblances is crucial to our experience of art.
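The process behind those images is, at heart, gradient ascent on the *input* rather than the weights: the image is nudged, step by step, so that a chosen layer’s activation increases, amplifying whatever the layer already “sees”. A minimal sketch of that loop, with a toy scalar standing in for the image and a made-up function standing in for the layer activation (not Deep Dream itself):

```cpp
#include <cmath>

// Toy "activation": in Deep Dream this would be the mean activation of a
// chosen network layer; here it is a simple function peaking at x = 3.
double activation(double x) { return -(x - 3.0) * (x - 3.0); }

// Numerical gradient of the activation with respect to the *input*.
double grad(double x) {
    const double h = 1e-6;
    return (activation(x + h) - activation(x - h)) / (2.0 * h);
}

// Deep-Dream-style loop: repeatedly move the input uphill along the
// gradient, so the input drifts towards whatever the "layer" responds to.
double dream(double x, int steps, double lr) {
    for (int i = 0; i < steps; ++i) {
        x += lr * grad(x);
    }
    return x;
}
```

Starting from `x = 0.0`, a few hundred steps walk the input up to the activation’s peak near 3.0; in the real thing the same drift turns clouds into dogs.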
Owain Evans is a postdoc at the University of Oxford, working in Machine Learning with a focus on AI Safety. He also leads a collaboration on “Inferring Human Preferences” with Stanford University and Ought.org. His PhD is from MIT, where he worked on cognitive science, probabilistic programming, and philosophy of science.
Anna Ridler, Artist, http://www.annaridler.com
“Misremembering and mistranslating: using GANs in an art context”
Research has looked at whether artificial intelligence, and more particularly machine learning, can create art. Producing an image using a GAN rather than by any other means gives the viewer a different set of experiences, expectations, histories, traces and contexts to consider. What are these associations and how might they be used in a piece of work? I look at how I have used these associations in my own work and projects, particularly focusing on training sets and GAN-generated imagery.
Anna Ridler is an artist and researcher whose practice brings together technology, literature and drawing. She is interested in working with abstract collections of information or data, particularly self-generated data sets, to create new and unusual narratives in a variety of mediums, and in how new technologies, such as machine learning, can be used to translate them clearly to an audience. She works heavily with technology at both the front and back end of projects (what is exhibited as well as the research that goes into the piece). Her intention is to make work that is not about technology for its own sake, but rather uses these technologies as a tool to talk about other things – memory, love, decay – or to augment or change the story in a way that otherwise would not happen.
I’m particularly interested in Pix2Pix and plan to dabble with it over the next few weeks.
In this immersive installation, Nigerian artist Emeka Ogboh makes a connection between the volatility of financial markets and the movement of people seeking better lives. A traditional Greek lamentation song is complemented with real-time stock market indexes moving across an LED display. The Way Earthly Things Are Going was commissioned by the art exhibition documenta 14. It was installed in a raw concrete auditorium within the Athens Conservatoire, an iconic building but one which has become a symbol of failed utopian modernism.
Taking its title from a lyric in the Bob Marley song ‘So Much Trouble in the World’, this work references the current financial crisis – particularly significant to Greece, but also of global relevance – and the migration of people fleeing war and economic hardship. The ticker tape displays financial data, transmitted live from dozens of stock exchange indexes around the world. This is slowed down to match the pace of the singing, recorded specifically for this work with a traditional polyphonic choir. The lamentation song ‘When I forget, I’m glad’, from the Epirus region of northern Greece, recounts a story of forced migration and relates to the present economic situation in Greece.
The feeling of wandering the perimeter of this piece was mesmerising. Each speaker seems to contain and project a different voice of the choir, the sounds melding and changing as you move around the vast echoing space. The human voices contrast with the cold hard facts of the stock prices on display. Both the singing and the prices are in a language that (for me) is hard to understand, although the sentiment of both seems abundantly clear. The installation uses the large concrete space perfectly, and I could happily have wandered from voice to voice under the slowly flickering ‘scoreboard’ for far longer than time allowed. An inspiring yet simple use of sound and (moving) image.
Built a player and audio amplifier that plays a .wav file from an SD Card, based on this circuit / tutorial:
Sadly, despite what it says above, it doesn’t sound great.
So I clearly need to investigate some better options for sound output.
In the meantime, am going to investigate the tilt switch aspect of this build – http://www.amazingtips247.co.uk/2017/01/tilt-switch-arduino-with-sound.html
As well as how to then make this work randomly:
And using other (proximity) sensors:
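Before wiring anything up, it helps to think the logic through: detect a change of state on the tilt switch and, when it fires, pick a random track to play. Here is a desktop C++ sketch of that logic – the track count, the no-immediate-repeat rule and the function names are my own assumptions for illustration, not working Arduino code:

```cpp
#include <random>

// Assumed setup: kTrackCount .wav files on the SD card, e.g. TRACK0.WAV..TRACK7.WAV.
const int kTrackCount = 8;

// Pick a random track index, avoiding an immediate repeat of the last one
// so the randomness feels less "broken" to a listener.
int pickTrack(int lastTrack, std::mt19937& rng) {
    std::uniform_int_distribution<int> dist(0, kTrackCount - 1);
    int t = dist(rng);
    if (t == lastTrack) t = (t + 1) % kTrackCount;  // nudge off a repeat
    return t;
}

// Edge detection on the tilt switch: only fire when the reading changes,
// which is the role debouncing plays in the real Arduino sketch.
bool tiltTriggered(bool reading, bool& lastReading) {
    bool fired = (reading != lastReading);
    lastReading = reading;
    return fired;
}
```

On the Arduino itself, `tiltTriggered` would read a digital pin in `loop()` and `pickTrack` would feed whatever playback routine the amplifier circuit ends up using; a proximity sensor could slot in as just another trigger source.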
Data as a creative material
Friday 02 February 2018, 1:00pm – 1:00pm
Open Data Institute, 65 Clifton Street, London, EC2A 4JE
Using data as the seed in their creative process, Kultur Design produce art, design and visualisation that has the uniqueness and serendipity of the data embedded within it.
In this talk, Mike Brondbjerg from Kultur Design will look at how data, in one form or another, connects many of Kultur Design’s projects, and how they’ve visualised data in very different, creative ways.
Mike Brondbjerg is a partner at Kultur Design, a creative studio specialising in using data, both creatively and analytically, in information and generative design projects, data visualisation and data art.
Kultur Design produce print, motion and web projects for clients like Kano Computers, Datameer, Heineken, Reasons To, Abbott Laboratories & King’s College.
Their work is largely built in Processing.
Another idea for an audio player from outer space, that could be 3D printed or made interactive within VR.
The Creative AI meetup is designed to bring together artists, developers, designers, technologists and industry professionals to discuss applications of artificial intelligence in the creative industries.
January’s talk discussed existential risk and computational creativity.
Shahar Avin of Centre for the Study of Existential Risk presented his work on a superintelligence modification (mod) for Sid Meier’s Civilization® V, a popular turn-based strategy game. The aim of the mod is to concretise some of the issues surrounding catastrophic AI risks, and to put individuals in a situation that makes both the risks and possible solutions accessible.
The mod allows the player to pursue victory through technological superiority, via developing a safe superintelligence, while introducing associated risks from rogue superintelligence, which could lead to human extinction (and game loss). Players can allocate resources to AI research and AI safety research and negotiate AI treaties with other civilisations, all while balancing the demands of the other interlocking systems of the game, including trade, diplomacy and warfare. The mod was made available to a closed group of testers and the responses were mixed, highlighting some of the difficulties of concretising abstract concepts in this area, while also suggesting that certain key characteristics of the domain are amenable to treatment through the medium of a video game.
An interesting discussion at the end included one audience member mentioning versions of Civilization® played solely by AI, all of which ended in destruction and the end of the game. The lesson seems to be that time spent on developing safe superintelligences could avert catastrophic risks and, ultimately, human extinction.
Simon’s talk was entitled “From Creative AI to Computational Creativity and Back Again”.
One of the maxims emerging from the Creative AI movement, fuelled by developments in generative deep learning models, is the notion of producing the highest-quality output possible from creative systems. If we accept that how and why an artwork was produced is often taken into consideration when value judgements are made, then the academic field of Computational Creativity has much to offer Creative AI practitioners. Simon explored ways in which generative systems can evolve into genuinely creative, autonomous systems, drawing on 20 years of Computational Creativity research. Conversely, the remarkable power of generative networks to hallucinate images, music and text is a real boon for Computational Creativity researchers interested in simulating imaginative behaviour, and he also reflected on the ways in which we currently (and could in future) harness this power to explore practical and philosophical aspects of the idea that software can be creative.
The most interesting aspects of the talk were his assertion that there is no agreed definition of creativity, and the idea that the context of a piece of creative work counts for much. If we know an artwork is made solely by AI, we judge it very differently, and usually negatively.
Simon Colton is a Professor of Digital Games Technologies at Falmouth University and a part-time Professor of Computational Creativity at Goldsmiths College. He is an AI researcher specialising in questions of Computational Creativity — getting software to autonomously create artefacts of real value in interesting ways. He has published nearly 200 papers and his research has won national and international prizes. He is best known for the software he has written and co-written to make mathematical discoveries, paint pictures, make games and generate fictional ideas, including The Painting Fool. He is also known for his philosophical and theoretical contributions to Computational Creativity, in particular driving forward the assessment of creative software via what it does, rather than what it produces.