Models used: BigGAN, StyleGAN
Stop designing, start breeding … that’s what the Graphic and Digital Media students from AP University College in Antwerp did during our workshop. The AI tool of the day was Artbreeder, an off-the-shelf web-based application. It aims to be a new type of creative tool that empowers users’ creativity by making it easier to collaborate and explore with AI. But does it succeed? We put it to the test with the students.
The results were outstanding! Although the tool is still at the beginning of its lifecycle, it has great potential. The students experienced it as something completely new! It’s not just another raster graphics editor with some fancy filters. No no, Artbreeder is more: it’s an engine that triggers people’s creativity! And so it did! The end results are often not usable because of some weird glitches, but when a model is fully trained, like the portrait model, the end results are awesome.
Artbreeder is a very inspiring, low-threshold AI tool that triggers people’s creativity. The quality of its end results is surprising, photorealistic, or both. The possibilities are endless; this might become a game changer …
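Under the hood, “breeding” two images in a GAN-based tool like Artbreeder comes down to mixing the latent vectors that generate them. The sketch below is a minimal conceptual illustration of that idea, not Artbreeder’s actual code; the function name, the 512-dimensional latents, and the mutation term are our own assumptions.

```python
import numpy as np

def breed(parent_a, parent_b, mix=0.5, mutation_scale=0.1, rng=None):
    """Blend two latent vectors and add a small random 'mutation'.

    Conceptually similar to how Artbreeder 'crosses' images: each image
    corresponds to a latent vector, and a child is an interpolation of
    its parents' vectors, plus a little noise for variation. Feeding the
    child vector to the generator (e.g. StyleGAN) yields the child image.
    """
    rng = rng or np.random.default_rng()
    child = (1.0 - mix) * parent_a + mix * parent_b
    child = child + mutation_scale * rng.standard_normal(child.shape)
    return child

# Two hypothetical 512-dimensional StyleGAN latents.
a = np.zeros(512)
b = np.ones(512)

# With no mutation, the child is exactly 25% of the way from a to b.
child = breed(a, b, mix=0.25, mutation_scale=0.0)
```

Sliding `mix` between 0 and 1 produces the smooth morphs between “parents” that the students explored in the tool’s sliders.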
Models used: 3D Ken Burns (2D to 3D conversion), Artemis-HQ from Video Enhance AI, Topaz Labs (upscaling)
Great news! Our film ‘Simulation’, a co-production with Studio Radiaal for their research project ‘Form a Line into a Circle‘ at MAXlab (Royal Academy of Fine Arts Antwerp) and residency at ChampdAction, has been selected for the Besides the Screen conference in Ningbo, China. Our work will be screened at the University of Nottingham and online, June 10-12.
‘Simulation’ uses the 3D Ken Burns model to create 3D pictures, based on 2D photographs. By altering the code of the model (as proposed by Vladimir Alexeev in this article), the fabric of this 3D synthesized reality is revealed. We find ourselves on a threshold between realities, photographic and synthetic; between memories, registered and artificial.
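The core trick behind the 3D Ken Burns effect is parallax: the model estimates a depth map for the photograph and shifts near pixels more than far ones as a virtual camera moves, then inpaints the holes that open up behind foreground objects. A minimal sketch of that parallax step, with our own assumed function name and a simple nearest-pixel warp rather than the model’s actual rendering pipeline:

```python
import numpy as np

def parallax_frame(image, depth, max_shift):
    """Displace pixels horizontally in proportion to normalized depth.

    Near pixels (depth ~ 1) move the most, far pixels (depth ~ 0) the
    least -- the basic parallax cue behind the 3D Ken Burns effect.
    The gaps this warp leaves behind foreground objects are exactly
    what the model's inpainting stage has to fill in.
    """
    h, w = depth.shape
    out = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        new_x = np.clip((xs + max_shift * depth[y]).astype(int), 0, w - 1)
        out[y, new_x] = image[y, xs]
    return out

# A tiny grayscale test image; with zero depth everywhere nothing moves.
img = np.arange(8, dtype=float).reshape(2, 4)
flat = parallax_frame(img, np.zeros((2, 4)), max_shift=3)
```

Rendering many such frames with a slowly moving virtual camera, and exposing the un-inpainted seams, is what lets the film reveal “the fabric of this 3D synthesized reality”.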
This project has received further kind support from LUCA School of Arts/KU Leuven, Leuven (BE), docARTES, Ghent (BE), the educational programme Graphic and Digital Media and Immersive Lab, AP University College, Antwerp (BE).
Models used: Neural Style, StyleGAN2, SPADE FACE
We gave a workshop in RunwayML for 60 participants of the educational programme Graphic & Digital Media. We asked them to create portraits, based on Edvard Munch’s ‘The Scream’, using and combining different models from the platform.
Model used: Pix2PixHD Next Frame Prediction
This project uses the Pix2PixHD Next Frame Prediction model. You can train the model on input images so that it predicts future frames. For this project, we trained the model on timelapse footage of growing plants. The idea is to let the model predict the future growth of these plants, so that a recording of a natural growth cycle is followed by a generated, synthetic cycle. Strikingly haunting synthetic plants and patterns emerge.
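The “synthetic cycle” comes from running the model autoregressively: each predicted frame is fed back in as the input for the next prediction, so the video drifts further from the recorded footage with every step. A minimal sketch of that feedback loop, with a stand-in function in place of the trained network:

```python
def generate_sequence(model, seed_frame, n_frames):
    """Autoregressive rollout: feed each predicted frame back as input.

    `model` maps one frame to the next. In the real project this would
    be a trained Pix2PixHD Next Frame Prediction network; here any
    frame -> frame function works, which is why small prediction errors
    compound into the haunting, increasingly synthetic imagery.
    """
    frames = [seed_frame]
    for _ in range(n_frames):
        frames.append(model(frames[-1]))
    return frames

# Stand-in 'model' for illustration: each 'frame' is just a number
# that doubles every step, standing in for a predicted image.
double = lambda frame: frame * 2
seq = generate_sequence(double, 1, 3)  # -> [1, 2, 4, 8]
```

The seed frame is the last real timelapse image, so the generated cycle continues seamlessly from the natural one.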
Model used: DeOldify
Using image analysis to fill in missing elements such as colour is one of the most promising applications of AI in design. We set to work with RunwayML to find out how accurate these suggestions are. (Spoiler alert: the girls’ dresses should be red).
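What makes colourization models like DeOldify plausible is that they never touch the structure of the photograph: the original luminance is kept, and only the two chroma channels are predicted. A minimal sketch of that recombination step in a Lab-style colour space; the function name and array layout are our own assumptions, and a real pipeline would convert the result from Lab back to RGB:

```python
import numpy as np

def recolor(gray, predicted_ab):
    """Combine original luminance with model-predicted chroma channels.

    `gray` is the input lightness (L) channel, shape (h, w);
    `predicted_ab` holds the two predicted colour channels, shape
    (h, w, 2). Keeping L from the source image is why colourized
    photos preserve every edge and texture of the original.
    """
    return np.stack(
        [gray, predicted_ab[..., 0], predicted_ab[..., 1]], axis=-1
    )

# Mid-gray image with neutral predicted chroma -> a gray Lab image.
gray = np.full((2, 2), 0.5)
lab = recolor(gray, np.zeros((2, 2, 2)))
```

The model’s only freedom is in the a/b channels, which is also where it can go wrong, as with the dresses that should be red.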
Model used: 3D Photo Inpainting
We gave a workshop in machine learning for the participants of the course Advanced Video Capturing & Editing. They used the 3D Photo Inpainting model to create moving stills, as teasers for their final projects.