Drawn, Together - Studio Summary

Machine Augmented Sketching in the Design Studio

Kyle Steinfeld for ARCH 100d, Spring 2020

(left) AI Augmented Axonometric Sketch
Robert Carrasco, 2020
(right) AI Augmented Landscape Drawing
Sarah Dey, 2020

This studio seeks to prototype tools for machine-augmented architectural design, and to develop the new set of creative practices that such tools enable.

Our Guests at the Final Review

Our Guests at the Midterm Review

Our Presenters

Background

Animation generated by Artbreeder
Kyle Steinfeld, 2020

We may observe that the interplay between technologies of design and the culture of design practice comes into sharp relief at intense moments of technological or social change. In my career as a student and a scholar of architectural design, I have witnessed two such intense moments.

These moments are what the historian Mario Carpo has called the "two digital turns".

(left) Mario Klingemann
(right) Anna Ridler at Ars Electronica

Based on my experience, it seems to me that we are at the cusp of a third "digital turn", a sense reinforced by what has been happening across a range of creative fields.

Triggered by new advances in machine learning, and by the development of methods for making these advances visible and accessible to a wider audience, the past five years have seen a burst of renewed interest in generative practices across the domains of fine art, music, and graphic design. The motivation of this studio is to better understand what ramifications these methods might hold for architectural design.

I'll offer here a quick tour of the short history of these tools in creative practice, and will highlight those precedent projects that I find particularly relevant.

'Image Generated by a Convolutional Network - r/MachineLearning.'
Reddit user swifty8883. June 16, 2015

One of the first examples of the application of contemporary deep learning techniques to image-making was this image, a machine "hallucination" that pulls to the surface forms and textures that remind a neural net of things it has seen before. This is an example of a process that has come to be known as "DeepDream".

Adam8
Mario Klingemann, 2015

A number of amateur and professional visual artists at that time used DeepDream in the production of their work. One of the pioneers in this area is Mario Klingemann.

Zhu, Park, Isola, Efros: CycleGAN, 2017

Soon after DeepDream came a number of techniques for the production of visual work based on related technologies. These include so-called image "transformation" models, which map patterns and forms found in one image onto another.

Hesse: Edges to Cats, 2017
Here, an ML model has been trained to understand the transformation from a line drawing of a cat to a photographic image of a cat. Once training is complete, this model will attempt to create a cat-like image from any given line drawing.

A notable approach within this category is Pix2Pix, a particular type of GAN that maps patterns found in an input image to desired forms, colors, and textures in an output image.
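
To make the mechanics a little more concrete, the sketch below outlines the Pix2Pix training objective in PyTorch: a generator maps the input image to an output image, a discriminator judges (input, output) pairs, and an L1 term keeps the output close to the paired target. The tiny networks and tensor shapes here are placeholder assumptions standing in for the U-Net generator and PatchGAN discriminator of the original paper; this is an illustrative sketch, not the studio's tooling.

```python
# A minimal sketch of the Pix2Pix idea: a conditional GAN plus an L1 term,
# trained on paired images. The tiny generator and discriminator below are
# placeholders for the U-Net and PatchGAN networks of the original paper.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Maps an input image (e.g. a line drawing) to an output image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Scores (input, output) pairs as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, kernel_size=4, stride=2, padding=1),
        )
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

G, D = TinyGenerator(), TinyDiscriminator()
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(sketch, photo, lambda_l1=100.0):
    """One update on a batch of paired images, each shaped (N, 3, H, W)."""
    # Train the discriminator on real pairs vs. generated pairs.
    fake = G(sketch)
    d_real, d_fake = D(sketch, photo), D(sketch, fake.detach())
    loss_d = adv_loss(d_real, torch.ones_like(d_real)) + \
             adv_loss(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train the generator to fool the discriminator while staying close
    # to the paired target image (the L1 term).
    d_fake = D(sketch, fake)
    loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1_loss(fake, photo)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```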

A timelapse video of landscape images produced by GauGAN
Neil Bickford, 2019

The project here works in a similar manner, mapping color-coded diagrammatic images into corresponding colors, textures, and forms derived from landscape images. We might understand the result as a tool for the augmented drawing of landscapes.

Delirious Facade
A "hybrid" facade combining the overall form of one facade selected from the city of Toronto with the fine features of another
LAMAS, 2016

Also around this time, AI-based tools became more accessible to a general audience, and we began to find designers using these tools in the service of architectural investigations. Here we see the firm LAMAS utilizing image transfer techniques to generate novel facade designs.

Tom White, Synthetic Abstractions 2019
Tom White is an artist who is interested in representing "the way machines see the world".

Meanwhile, other visual artists, such as Tom White, have explored more hands-off models of collaboration between human and machine.

Tom White, Synthetic Abstractions 2019

Here, Tom uses image classification models to produce abstract ink prints that reveal visual concepts.

Scott Eaton, 2019
Scott Eaton is a mechanical engineer and anatomical artist who uses custom-trained transfer models as a "creative collaborator" in his figurative drawings.

Perhaps the primary precedent for the work of the studio is Scott Eaton, who uses these custom-trained transfer models as "creative collaborators" in his figurative drawings.

A timelapse of the drawing used as input to the network that created "Humanity (Fall of the Damned)"
Scott Eaton, 2019
This work was the inspiration for the Sketch2Pix tool developed for this course.

Suggestive Drawing among Human and Artificial Intelligences
Here, an ML model has been trained to understand the transformation from line drawings to a whole range of objects: from flowers to patterned dresses.

Another important precedent for the work of the studio is Nono Martinez Alonso's work at the GSD. This project deploys the Pix2Pix model mentioned above in the service of a creative design tool, and thereby demonstrates the potential of computer-assisted drawing interfaces.

The Tools of the Studio

Here we describe some of the specific tools used by the students in this course.

Text Generation Models

(left) Talk to Transformer
(right) AI Dungeon

In one Proposition described below, the studio made use of the AI Dungeon and Talk to Transformer applications, which generate synthetic text that completes a given body of text.
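
Both applications are built on large language models that extend a user-supplied prompt; Talk to Transformer, in particular, exposed OpenAI's GPT-2. For readers who want to reproduce the basic completion behaviour, the sketch below uses the public GPT-2 weights through the Hugging Face transformers library. It is an illustrative stand-in, not the interface the studio actually used, and the prompt is a hypothetical one in the spirit of Proposition One.

```python
# A minimal sketch of prompt completion in the spirit of Talk to Transformer,
# using the public GPT-2 weights via the Hugging Face transformers library.
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline("text-generation", model="gpt2")

# A hypothetical prompt, echoing the produce stories of Proposition One.
prompt = "The strawberry farmer woke before dawn and walked out into the fog,"
completions = generator(prompt, max_length=80, num_return_sequences=3, do_sample=True)

for c in completions:
    print(c["generated_text"])
    print("---")
```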

Runway

(left) Runway Generative Engine
(right) RunwayML

The RunwayML platform represents a critical link in the workflow of the class. In one Proposition described below, the studio made use of one particular Runway tool, the Runway Generative Engine, which creates synthetic images from textual captions.

Artbreeder

interpolations between generated landscapes on Artbreeder
Bay Raitt, 2019

The studio made use of several features of Artbreeder, a web-based creative tool that allows people to collaboratively explore complex spaces of synthetic images. It was created by Joel Simon while at Stochastic Labs in Berkeley, and he continues to maintain it.
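
The "crossfade" animations above come from interpolating in the latent space of a generative model: each image corresponds to a latent vector, and in-between frames are rendered from points along the line between two such vectors. The sketch below shows only that blending arithmetic; the vector size is assumed, and `generate` is a hypothetical stand-in for Artbreeder's underlying model, which is not publicly callable in this form.

```python
# Sketch of the latent-space interpolation behind Artbreeder-style crossfades.
# Two images correspond to two latent vectors; intermediate frames come from
# points on the line between them. `generate` is a hypothetical stand-in for
# the underlying generative model.
import numpy as np

rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)   # latent vector for landscape A (placeholder)
z_b = rng.standard_normal(512)   # latent vector for landscape B (placeholder)

frames = []
for t in np.linspace(0.0, 1.0, num=30):
    z = (1.0 - t) * z_a + t * z_b        # linear blend between the two latents
    frames.append(z)
    # image = generate(z)                # hypothetical call to render a frame
```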

Sketch2Pix


The Sketch2Pix workflow

A Sketch2Pix "brush" trained on images of mushrooms.

A Sketch2Pix "brush" trained on images of a bowling pin.

A Sketch2Pix "brush" trained on images of skulls of various animals.

A Sketch2Pix "brush" trained on images of trees.

The Work of the Studio

Sketching as a Daily Practice

To foster an environment in which the practice of architectural sketching can thrive, each week we read something, we make something, and we present something for public display.

Images tagged with the arch100d hashtag on Instagram as of 3/17/20

The primary conduit for the public display of our work is an Instagram hashtag related to the course.

Three Propositions

Work from each of the three propositions posted to Instagram
Nicholas Doerschlag, 2020
Rose Wang, 2020

Given the speculative nature of the course, rather than privilege the development of a singular design project, the studio proceeds through a series of lightly-connected "propositions" that explore the potential role of an AI tool in design. We will proceed in short bursts, and we will be patient in allowing small questions to aggregate into larger and more elaborate proposals.

The latter portion of the semester is dedicated to individual student projects, understood as independent theses, that seek to apply AI methods to a student-defined design problem.

While the role of each AI tool we encounter differs - at times acting as an assistant, a critic, or a provocateur - each proposition will offer the studio a chance to better know the underlying technology and how it might figure in a larger process of design.

While we will obviously be primarily driven by investigating new design methods, we recognize that such an investigation benefits from being grounded in a unifying architectural design problem.

What is an appropriate test bed for these technologies of the artificial?

(left) Fake Marin, Kyle Steinfeld 2020.
(right) Fake Oakland, Kyle Steinfeld 2019

Thematically, we will focus on the Northern California landscape, and on the interface between the built environment and the natural environment. Or, rather, on the interface between the artificial built environment and the artificial natural environment.

Proposition One: Strange Fruit

This proposition introduces students to a range of AI tools, including tools for text generation, image generation, and image transformation, and provides a platform for beginning to understand these as design prompts.

Students were instructed to go to the grocery store and select a fruit or vegetable that is grown in Northern California, and to research this produce. They determined where in Northern California it may have been grown, what it is like in that place, and who might have participated in the production of this food product. From this research, they prepared to talk about these places and people, and to show images that illustrate what they found.

Next, based on this research, students collaborated with a text generation bot to write a story about a person involved in the production of the produce.

Finally, based on this story, students used an image generation tool to create a storyboard of seven captioned images.

Proposition One
Nehal Jain, 2020

A Sketch2Pix "brush" trained on images of a strawberry, and the use of this brush.
Nehal Jain, 2020

Proposition One
Nehal Jain, 2020

Proposition Two: Landscapes of Change

This proposition uses the AI tools introduced in the previous proposition, and further extends them into the realm of architectural production. Whereas in the last proposition students each worked individually to define their own Sketch2Pix brushes, here we work in larger groups to create more robust and architecturally useful tools.

A Sketch2Pix "brush" trained on images of wooden dowel models, and the use of this brush.
Robert Carrasco, Payam Golestani, Nehal Jain, and Tina Nguyen, 2020

A Sketch2Pix "brush" trained on images of a plaster blob.
Kyle Steinfeld, 2020

Proposition Two
Gabi Nehorayan, 2020
Proposition Two
TJ Tang, 2020

Proposition Three: Four Elements of a Synthetic Architecture

Continuing our movement toward the language of architecture, this final Proposition focuses on building systems expressed through a single drawing type: the axonometric.

Here, following Semper and using Artbreeder as a provocateur once more, students each make a coordinated proposal for four elemental building systems: a hearth, a roof, an enclosure, and a mound. Each of these systems forms the basis of the training of a separate Sketch2Pix brush, which is then employed to produce a number of architectural proposals rendered in axonometric.

To begin, students make use of the Artbreeder general model, and from this create four synthetic images that are suggestive of an architecture related to each of Semper's four elements.

Kyle's Four Elements of a Synthetic Architecture
(from left to right) Synthetic Mound (1.04 Stupa, 0.86 Mobile Home, 0.42 Chaos), Synthetic Envelope (0.55 Mobile Home, 0.52 Dome, 0.50 Mosque), Synthetic Hearth (1.48 Barn, 1.36 Yurt, 0.72 Mobile Home), Synthetic Roof (0.81 Space Shuttle, 0.80 Church, 0.42 Chaos)
Kyle Steinfeld using Artbreeder.com, 2020.

In crafting their images, students make use of Artbreeder's ability to specify and edit the "genes" of an image, and make note of the three most dominant genes that each image employs (seen above). We understand Semper's four elements as:

Mound: massive elements - often stone, earthwork, or concrete - that relate a building to its ground.
Roof: linear elements - such as timber or steel - that offer protection from rain and sun.
Enclosure: planar elements - such as sheet materials or textiles - that produce spatial division and social separation.
Hearth: objects - such as mechanical systems or furniture - that provide thermal comfort, ventilation, and cooking, and offer a central focus of social life.
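
Read mechanically, an image's "genes" are weighted class vectors fed to the underlying generative model (Artbreeder's general model is built on BigGAN), and blending genes amounts to summing those weighted vectors. The sketch below illustrates that arithmetic using the weights noted above for the Synthetic Mound image; the class indices and the generate() call are hypothetical placeholders, since Artbreeder's internals are not exposed in this form.

```python
# Sketch of Artbreeder-style "genes" as a weighted mix of class vectors.
# The gene names and weights are those noted for the Synthetic Mound image;
# the class indices and the generate() call are hypothetical.
import numpy as np

NUM_CLASSES = 1000                                              # assumed vocabulary size
CLASS_INDEX = {"stupa": 832, "mobile home": 660, "chaos": 999}  # hypothetical indices

genes = {"stupa": 1.04, "mobile home": 0.86, "chaos": 0.42}

# Build a single conditioning vector as the weighted sum of one-hot genes.
class_vector = np.zeros(NUM_CLASSES, dtype=np.float32)
for name, weight in genes.items():
    class_vector[CLASS_INDEX[name]] = weight

latent = np.random.default_rng(7).standard_normal(128)   # the image's "seed" latent

# image = generate(latent, class_vector)   # hypothetical call to the generator
print({name: float(class_vector[CLASS_INDEX[name]]) for name in genes})
```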

Four Sketch2Pix "brushes" inspired by Artbreeder images that evoke Semper's four elements of Mound, Enclosure, Hearth, and Roof
Kyle Steinfeld, 2020

The use of the four brushes mentioned above.
Kyle Steinfeld, 2020