This studio seeks both to prototype tools in service of a vision for machine-augmented architectural design, and to develop the new set of creative practices that such a vision enables.
Kyle Steinfeld, 2020
We may observe that the interplay between technologies of design and the culture of design practice comes into sharp relief at intense moments of technological or social change. In my career as a student and a scholar of architectural design, I have witnessed two such intense moments:
The first was in the mid-1990s, when, as an undergraduate student of architecture, I was a part of a transitional generation that saw the shift from analog to digital representation.
The second was in the early-2000s, when, as a graduate student and young professional, I saw the adoption of computational techniques in design, such as scripting and parametric modeling.
These moments are what the historian Mario Carpo has called the "two digital turns".
It seems to me that we are now at the cusp of a third "digital turn", a judgment based both on my own experience and on what has been happening across a range of creative fields.
Triggered by new advances in machine learning, and by the development of methods that make these advances visible and accessible to a wider audience, the past five years have seen a burst of renewed interest in generative practices across the domains of fine art, music, and graphic design. The motivation of this studio is to better understand what ramifications these methods might hold for architectural design.
I'll offer here a quick tour of the short history of these tools in creative practice, and will highlight those precedent projects that I find particularly relevant.
Reddit user swifty8883. June 16, 2015
One of the first examples of the application of contemporary deep learning techniques to creative production was this image, a machine "hallucination" that pulls to the surface forms and textures that remind a neural net of things it has seen before. This is an example of a process that has come to be known as "Deepdream".
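Technically, Deepdream runs a trained image classifier in reverse: rather than adjusting the network's weights to fit an image, it adjusts the image itself, by gradient ascent, to amplify whatever features a chosen layer already responds to. The sketch below illustrates this idea in PyTorch; the VGG16 model, layer index, and step settings are illustrative assumptions, not the original implementation (which was built on Caffe and an Inception network).

```python
# A minimal sketch of the gradient-ascent idea behind Deepdream; the model,
# layer, and step settings are illustrative assumptions. Input normalization
# is omitted for brevity.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

features = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def deepdream(img_path, layer_index=28, steps=20, lr=0.05):
    img = TF.to_tensor(Image.open(img_path).convert("RGB")).unsqueeze(0)
    img.requires_grad_(True)
    for _ in range(steps):
        x = img
        for i, layer in enumerate(features):
            x = layer(x)
            if i == layer_index:
                break
        loss = x.norm()  # amplify whatever this layer already responds to
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
            img.clamp_(0, 1)  # keep the image displayable
    return TF.to_pil_image(img.squeeze(0).detach())
```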
A number of amateur and professional visual artists at that time used Deepdream in the production of their work. One of the pioneers in this area is Mario Klingemann.
Soon after Deepdream came a number of techniques for the production of visual work based on related technologies. These include so-called "transformation" models, which map the patterns and forms found in one image onto another.
Here, an ML model has been trained to understand the transformation from a line-drawing of a cat to a photographic image of a cat. Once training is complete, this model will attempt to create a cat-like image from any given line drawing.
A notable approach within this category is Pix2Pix, a particular type of GAN that maps patterns found in an input image to desired forms, colors, and textures in an output image.
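Under the hood, Pix2Pix pairs a generator with a discriminator: the generator learns to translate the input image, the discriminator learns to distinguish real input/output pairs from generated ones, and an L1 term keeps the output close to the training target. Below is a compressed sketch of one training step in PyTorch; `generator` and `discriminator` stand in for the U-Net and PatchGAN networks of the original paper, and the names and hyperparameters are assumptions rather than any particular implementation.

```python
# One compressed pix2pix training step; `generator` (e.g. a U-Net) and
# `discriminator` (e.g. a PatchGAN) are assumed to be defined elsewhere.
import torch
import torch.nn.functional as F

def pix2pix_step(generator, discriminator, g_opt, d_opt, sketch, photo, lam=100.0):
    # Discriminator: real (sketch, photo) pairs should score 1, generated pairs 0.
    fake = generator(sketch)
    real_logits = discriminator(torch.cat([sketch, photo], dim=1))
    fake_logits = discriminator(torch.cat([sketch, fake.detach()], dim=1))
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: fool the discriminator AND stay close to the target photo.
    fake_logits = discriminator(torch.cat([sketch, fake], dim=1))
    g_loss = (F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
              + lam * F.l1_loss(fake, photo))  # L1 term keeps outputs faithful
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```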
Neil Bickford, 2019
The project here works in a similar manner, mapping color-coded diagrammatic images into corresponding colors, textures, and forms derived from landscape images. We might understand the result as a tool for the augmented drawing of landscapes.
A "hybrid" facade combining the overall form of one facade selected from the city of Toronto with the fine features of another
LAMAS, 2016
Also around this time, AI-based tools became more accessible to a general audience, and we began to find designers using these tools in the service of architectural investigations. Here we see the firm LAMAS utilizing image-transfer techniques to generate novel facade designs.
Meanwhile, other visual artists, such as Tom White, have explored more hands-off models for the collaboration between human and machine.
Here, Tom uses image classification models to produce abstract ink prints that reveal visual concepts.
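One way to understand this approach: a pretrained classifier scores how strongly a candidate image evokes a target concept, and that score serves as the fitness driving an optimizer that adjusts the strokes of the print. The sketch below shows only the scoring half, using an assumed ResNet50 from torchvision; it is a reconstruction of the general idea, not White's actual pipeline.

```python
# A sketch of classifier-driven scoring: rate how strongly an image evokes a
# target ImageNet class. This is an assumed reconstruction of the idea, not
# Tom White's actual pipeline.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

classifier = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
prep = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                  T.Normalize(mean=[0.485, 0.456, 0.406],
                              std=[0.229, 0.224, 0.225])])

def concept_score(img: Image.Image, class_index: int) -> float:
    """Probability that the classifier sees `class_index` in the image."""
    with torch.no_grad():
        logits = classifier(prep(img.convert("RGB")).unsqueeze(0))
    return torch.softmax(logits, dim=1)[0, class_index].item()

# An optimizer might mutate stroke parameters and keep candidates that raise
# this score for a chosen class, such as 945 ("bell pepper").
```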
Perhaps the primary precedent for the work of the studio is Scott Eaton, a mechanical engineer and anatomical artist who uses custom-trained transfer models as a "creative collaborator" in his figurative drawings.
Scott Eaton, 2019
This work was the inspiration for the Sketch2Pix tool developed for this course.
Here, an ML model has been trained to understand the transformation from line drawings to a whole range of objects: from flowers to patterned dresses.
Another important precedent for the work of the studio is Nono Martinez Alonso's work at the GSD. This project deploys the Pix2Pix model mentioned above in the service of a creative design tool, and thereby demonstrates the potential of computer-assisted drawing interfaces.
Here we describe some of the specific tools used by the students in this course.
In one Proposition described below, the studio made use of the AI Dungeon and Talk to Transformer applications, which generate synthetic text that completes a given body of text.
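Both applications sit atop large language models that extend a given prompt one plausible token at a time (Talk to Transformer, for example, was an interface to OpenAI's GPT-2). The sketch below reproduces that behavior with the Hugging Face transformers library; the prompt and sampling settings are illustrative.

```python
# A minimal sketch of prompt completion with GPT-2 via the Hugging Face
# `transformers` library; the prompt and sampling settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The almond orchard outside Modesto belonged to"
result = generator(prompt, max_length=80, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```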
The RunwayML platform represents a critical link in the workflow of the class. In one Proposition described below, the studio made use of one particular implementation of Runway, the Runway Generative Engine, which creates synthetic images from textual captions.
Bay Raitt, 2019
The studio made use of several features of Artbreeder, a web-based creative tool that allows people to collaboratively explore high-complexity spaces of synthetic images. It was created by Joel Simon while at Stochastic Labs in Berkeley, and he continues to maintain it.
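Beneath its interface, Artbreeder's "crossbreeding" and "gene" editing amount to arithmetic on the latent vectors of generative models such as BigGAN and StyleGAN. The numpy sketch below illustrates that arithmetic; the `decode` step and the "ornate" attribute direction are hypothetical stand-ins for a trained generator and a learned direction.

```python
# A sketch of the latent-space arithmetic beneath a tool like Artbreeder.
# `decode` stands in for a trained GAN generator and is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 512  # typical for StyleGAN-family models

parent_a = rng.standard_normal(LATENT_DIM)  # latent vector of one image
parent_b = rng.standard_normal(LATENT_DIM)  # latent vector of another

# "Crossbreeding": interpolate between two parent latents.
child = 0.6 * parent_a + 0.4 * parent_b

# "Gene" editing: push the child along an attribute direction, e.g. a
# (here randomly faked) vector that makes generated facades read as more ornate.
ornate_direction = rng.standard_normal(LATENT_DIM)
edited = child + 0.8 * ornate_direction

# image = decode(edited)  # a real system would render this with its generator
```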
To foster an environment in which the practice of architectural sketching can thrive, each week we read something, we make something, and we present something for public display.
The primary conduit for the public display of our work is an Instagram hashtag related to the course.
Given the speculative nature of the course, rather than privilege the development of a singular design project, the studio proceeds through a series of lightly connected "propositions" that explore the potential role of an AI tool in design. We will proceed in short bursts, and we will be patient in allowing small questions to aggregate into larger and more elaborate proposals.
The latter portion of the semester is dedicated to individual student projects, each understood as a small thesis that seeks to apply AI methods to a student-defined design problem.
While the role of each AI tool we encounter differs - at times acting as an assistant, a critic, or a provocateur - each proposition will offer the studio a chance to better know the underlying technology and how it might figure in a larger process of design.
While we are primarily driven to investigate new design methods, we recognize that such an investigation benefits from the specifics of a unifying architectural design problem.
What is an appropriate test bed for these technologies of the artificial?
Thematically, we will focus on the Northern California landscape, and on the interface between the built environment and the natural environment. Or, rather, on the interface between the artificial built environment and the artificial natural environment.
This proposition introduces students to a range of AI tools, including tools for text generation, image generation, and image transformation, and provides a platform for beginning to understand these as design prompts.
Students were instructed to go to the grocery store and select a fruit or vegetable that is grown in Northern California, and to research this produce. They determined where in Northern California it may have been grown, what it is like in that place, and who might have participated in the production of this food product. From this research, they were prepared to talk about these places and people, and to show images that illustrate what they found.
Next, based on this research, students collaborated with a text-generation bot to write a story about a person involved in the production of the produce.
Finally, based on this story, students used an image-generation tool to create a storyboard of seven (7) captioned images.
Nehal Jain, 2020
This proposition uses the AI tools introduced in the previous proposition, and further extends them into the realm of architectural production. Whereas in the last proposition students each worked individually to define their own Sketch2Pix brushes, here we work in larger groups to create a more robust and architecturally useful tool.
CARRASCO Robert, GOLESTANI Payam, JAIN Nehal, and NGUYEN Tina, 2020
Kyle Steinfeld, 2020
Continuing our movement toward the language of architecture, this final Proposition focuses on building systems expressed through a single drawing type: the axonometric.
Here, following Semper and using Artbreeder as a provocateur once more, students each make a coordinated proposal for four elemental building systems: a hearth, a roof, an enclosure, and a mound. Each of these systems forms the basis of the training of a separate Sketch2Pix brush, which is then employed to produce a number of architectural proposals rendered in axonometric.
To begin, students make use of the Artbreeder general model, and from this create four synthetic images that are suggestive of an architecture related to each of Semper's four elements.
In crafting their images, students make use of Artbreeder's ability to specify and edit the "genes" of an image, and make note of the three most dominant genes that each image employs (seen above). We understand Semper's four elements as follows: the mound, massive elements - often stone, earthwork, or concrete - that relate a building to its ground; the roof, linear elements - such as timber or steel - that offer protection from rain and sun; the enclosure, planar elements - such as sheet materials or textiles - that produce spatial division and social separation; and the hearth, objects - such as mechanical systems or furniture - that provide thermal comfort, ventilation, and cooking, and offer a central focus of social life.
Kyle Steinfeld, 2020
Kyle Steinfeld, 2020