Some sketches to visualise the What Goes Around objects that may feature in the final installation. From spacecraft to insekts [sic], these are initial drawings of what may have been sent back to us from space.
Posting these as much for my later reference as anything, but the green lights look so pretty, and the distance sensor seems to be working as intended. Now to trigger the MP3s.
Today I have been re-examining the design for the Space Rock audio player, based on the concept suggested by the Oblique Strategies app that I installed recently: Use fewer notes.
“Oblique strategies is a set of cards created by Brian Eno and Peter Schmidt used to break deadlocks in creative situations. Each card contains a (sometimes cryptic) remark that can help you resolve a creative dilemma. Whenever you’re stuck you draw a card and ponder how it applies to your situation.”
A test print of the Space Rock (v1) to acquaint myself with 3D printing processes and techniques. This prototype is largely to test the potential for including the Arduino elements within this shape (and others later on). Once the electronics work and can be fitted, this will be filled and sanded for a smoother outer finish.
February’s talk looked at DeepDream as a window on aesthetic experience (Owain Evans, University of Oxford) and GANs in an art context (Anna Ridler, Artist).
Owain Evans, Postdoc, University of Oxford
“Deep Dream as a Window on Aesthetic Experience”
Deep Dream produces intriguing, dog-filled images. This talk is not about these images but about the process that generates them. I’ll explain the process and consider how it sheds light on human aesthetic experience. Deep Dream works because the neural network automatically computes “resemblances” between disparate objects: e.g. between a meatball and a dog, or a cat’s ear and a beak. Our own ability to see these resemblances is crucial to our experiencing art.
Owain Evans is a postdoc at the University of Oxford, working in Machine Learning with a focus on AI Safety. He also leads a collaboration on “Inferring Human Preferences” with Stanford University and Ought.org. His PhD is from MIT, where he worked on cognitive science, probabilistic programming, and philosophy of science.
Anna Ridler, Artist, http://www.annaridler.com
“Misremembering and mistranslating: using GANs in an art context”
Research has looked at whether artificial intelligence, and more particularly machine learning, can create art. Producing an image with a GAN, rather than by any other means, gives the viewer a different set of experiences, expectations, histories, traces and contexts to consider. What are these associations, and how might they be used in a piece of work? I look at how I have used these associations in my own work and projects, focusing particularly on training sets and GAN-generated imagery.
Anna Ridler is an artist and researcher whose practice brings together technology, literature and drawing. She is interested in working with abstract collections of information or data, particularly self-generated data sets, to create new and unusual narratives in a variety of mediums, and in how new technologies, such as machine learning, can be used to translate them clearly to an audience. She works heavily with technology at both the front and back end of projects (what is exhibited as well as the research that goes into the piece). Her intention is to make work that is not about technology for its own sake, but rather uses these technologies as a tool to talk about other things – memory, love, decay – or to augment or change the story in a way that otherwise would not happen.
Particularly interested in Pix2Pix, and I plan to dabble with that over the next few weeks.
Another idea for an audio player from outer space, one that could be 3D printed or made interactive within VR.