The master project ‘In Between’ by Deborah Perrotta (Photography, Royal Academy of Fine Arts Antwerp) explores the lack of identity – or the search for the “self” – in contemporary times. In a digital world where Instagram filters make us look different from how we really are, she asks: can we trust the images made by our devices?
Considering photographs as technical images – making and criticizing photographs involves the problem of the apparatus, systems that tend to work more and more automatically – the project is entirely interconnected with device and technique: with the automaticity of image production, or rather, with the image that only exists because of digital technology and the photographic medium. She chose to use artificial intelligence to provoke reflection on the search for an identity/face while the “self” is abstracted and deformed in current times. How many forms can we assume, and how many others do we cease to be?
Here at AIDD, we acted as promoter for her work and offered advice and support in training the deep learning models. Keep an eye out: in September she will present her work at the master’s expo at FOMU.
From February until June, we guided the EPS (European Project Semester) team ‘Hello Hyperreality’ in creating a solarpunk vision of a future Antwerp. This team of international students made a VR experience titled ‘The New Antwerp’, in which they researched the impact of multisensory (smell, touch) and visual elements (such as texture) on the user experience. We gave them a workshop in ArtEngine, Unity’s AI-driven material authoring tool, enabling them to create realistic textures for the experience.
Wonderful news from Japan, as the film ‘Simulation’, created by Studio Radiaal for their research project ‘Form a Line into a Circle‘ at MAXlab (Royal Academy of Fine Arts Antwerp) and residency at ChampdAction, has been named a Jury Selection at the Japan Media Arts Festival. This prominent festival honors outstanding works from a diverse range of media, from animation and comics to video games and media art.
Here at AIDD, we helped adapt the AI model, so that the work travels through reconstructed memories based on photographs. In this work, we find ourselves on a threshold between realities, photographic and synthetic. Between memories, registered and artificial.
This project has received further kind support from LUCA School of Arts/KU Leuven, Leuven (BE), docARTES, Ghent (BE), the educational programme Graphic and Digital Media and the Immersive Lab, AP University College, Antwerp (BE).
That’s right! In collaboration with the educational programme Graphic & Digital Media during GDMtv live at AP University College (Campus Lange Nieuwstraat), we’re bringing you three afternoons of inspiring experts, offering their views on generative/AI-driven design or giving you some hands-on experience.
14:00 – 15:30: Dries Depoorter – Surveillance Art, Dying Phones and Fake Likes
16:00 – 17:30: Cyborn – AI use in VR games: a Peek into Hubris
14:00 – 15:30: Jeroen Cluckers & Lowie Spriet – Deep Dive into AI
16:00 – 17:30: Bavo Van Hecke – Artbreeder (workshop)
14:00 – 17:00: Lieven Menschaert – Generative Design Principles: NodeBox (workshop)
If you would like to join, please contact us: firstname.lastname@example.org
Models used: BigGAN, StyleGAN
Stop designing, start breeding … that’s what the students of Graphic and Digital Media at AP University College in Antwerp did during our workshop. AI tool of the day was Artbreeder, an off-the-shelf, web-based application. It aims to be a new type of creative tool that empowers users’ creativity by making it easier to collaborate and explore with AI. But does it succeed? We checked with the students.
The results were outstanding! Although the tool is still early in its lifecycle, it has great potential. The students experienced it as something completely new: it’s not just another raster graphics editor with some fancy filters. Artbreeder is more, it’s an engine that triggers people’s creativity! And so it did! The end results are often not usable because of weird glitches, but with a fully trained model, like the portrait model, the results are awesome.
Artbreeder is a very inspiring, low-entry AI tool that triggers people’s creativity. The quality of its end results is often surprising and/or photorealistic. The possibilities are endless; this might become a game changer …
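Conceptually, Artbreeder’s “breeding” comes down to mixing the latent vectors that a GAN such as BigGAN or StyleGAN maps to images. A minimal sketch in Python illustrates the idea (the `breed` function and the 512-dimensional latent size are our own illustration, not Artbreeder’s actual API):

```python
import numpy as np

def breed(parent_a: np.ndarray, parent_b: np.ndarray, weight: float = 0.5) -> np.ndarray:
    """Mix two latent vectors; a generator network (e.g. StyleGAN) would
    then turn the result into an image. weight=0 returns parent_a,
    weight=1 returns parent_b, values in between produce 'children'."""
    return (1.0 - weight) * parent_a + weight * parent_b

rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)          # latent "genes" of parent A
z_b = rng.standard_normal(512)          # latent "genes" of parent B
child = breed(z_a, z_b, weight=0.3)     # a child leaning towards parent A
```

Sliding the weight back and forth is exactly what Artbreeder’s “gene” sliders do: every position along the line between two latent vectors is another plausible image.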
Models used: 3D Ken Burns (2D to 3D conversion), Artemis-HQ from Video Enhance AI, Topaz Labs (upscaling)
Great news! Our film ‘Simulation’, a co-production with Studio Radiaal for their research project ‘Form a Line into a Circle‘ at MAXlab (Royal Academy of Fine Arts Antwerp) and residency at ChampdAction, has been selected for the Besides the Screen conference in Ningbo, China. Our work will be screened at the University of Nottingham and online, June 10-12.
‘Simulation’ uses the 3D Ken Burns model to create 3D pictures based on 2D photographs. By altering the code of the model (as proposed by Vladimir Alexeev in this article), the fabric of this 3D synthesized reality is revealed. We find ourselves on a threshold between realities, photographic and synthetic. Between memories, registered and artificial.
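The core trick behind a 2D-to-3D effect like 3D Ken Burns is depth-dependent parallax: when a virtual camera moves, nearby pixels shift more than distant ones. The real model estimates a depth map, builds a point cloud and inpaints what gets uncovered; the toy sketch below (entirely our own illustration, not the project’s code) shows only the parallax step:

```python
import numpy as np

def parallax_shift(image: np.ndarray, depth: np.ndarray, max_shift: int = 5) -> np.ndarray:
    """Shift each pixel horizontally in proportion to how near it is
    (depth 0 = nearest, 1 = farthest), faking a small camera move."""
    h, w = depth.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            shift = int(max_shift * (1.0 - depth[y, x]))  # near pixels move more
            out[y, min(w - 1, max(0, x + shift))] = image[y, x]
    return out

# Toy example: one bright pixel on a flat, near depth plane
image = np.zeros((1, 10))
image[0, 2] = 1.0
shifted = parallax_shift(image, np.zeros((1, 10)))
```

Animating this shift frame by frame, with a per-pixel depth map from a neural network, is what turns a single photograph into a moving 3D scene; the black gaps the shift leaves behind are what inpainting models then fill.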
This project has received further kind support from LUCA School of Arts/KU Leuven, Leuven (BE), docARTES, Ghent (BE), the educational programme Graphic and Digital Media and the Immersive Lab, AP University College, Antwerp (BE).
Models used: Neural Style, StyleGAN2, SPADE FACE
We gave a workshop in RunwayML for 60 participants of the educational programme Graphic & Digital Media. We asked them to create portraits, based on Edvard Munch’s ‘The Scream’, using and combining different models from the platform.
Model used: Pix2PixHD Next Frame Prediction
This project uses the Pix2PixHD Next Frame Prediction model. You can train the model on input images to let it predict future frames. For this project, we trained the model on timelapse footage from growing plants. The idea is to let the model predict the future growth of these plants, so a registration of a natural growth cycle is followed by a generated, synthetic cycle. Strikingly haunting synthetic plants and patterns emerge.
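The generation loop behind such next-frame models is autoregressive: the model’s own prediction is fed back in as the next input, so a registered sequence is continued by a synthetic one. A minimal sketch in Python (the `predict_next` stand-in is our own illustration; the actual project uses a trained Pix2PixHD network in its place):

```python
import numpy as np

def predict_next(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a trained next-frame model such as Pix2PixHD:
    here we just brighten the frame slightly to mimic 'growth'."""
    return np.clip(frame * 1.05, 0.0, 1.0)

def rollout(seed_frame: np.ndarray, n_frames: int) -> list[np.ndarray]:
    """Autoregressive generation: each predicted frame becomes the
    input for the next prediction."""
    frames = [seed_frame]
    for _ in range(n_frames):
        frames.append(predict_next(frames[-1]))
    return frames

last_real_frame = np.full((256, 256, 3), 0.1)     # final timelapse frame (dummy)
synthetic_cycle = rollout(last_real_frame, n_frames=24)
```

Because each generated frame is built on the previous generated frame, small model quirks compound over the rollout, which is exactly where the haunting, increasingly alien plant patterns come from.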
Model used: DeOldify
Using image analysis to fill in missing elements such as colour is one of the most promising applications of AI in design. We set to work with RunwayML to find out how correct these suggestions are. (Spoiler alert: the girls’ dresses should be red).
Model used: 3D Photo Inpainting
We gave a workshop in machine learning for the participants of the course Advanced Video Capturing & Editing. They used the 3D Photo Inpainting model to create moving stills, as teasers for their final projects.