My work centers on the dynamic relationship between the creative practice of design and computational design methods.
While one of these is often characterized as a direct determinant of the other, my work seeks to demonstrate that new technologies of design do not directly determine social relationships, but are among the network of actors - designers and specialists, software and users, data and drawings - that compete to shape the diffusion of design authorship and the social distribution of design work.
The interplay between new technologies of design and the culture of design practice comes into sharp focus at intense moments of technological or social change. In my career as a student and a scholar of architectural design, I have witnessed two such moments.
The first was in the mid-1990s, when, as an undergraduate student of architecture, I was a part of a transitional generation that saw the shift from analog to digital representation.
The second was in the early-2000s, when, as a graduate student and young professional, I saw the adoption of computational techniques in design, such as scripting and parametric modeling.
These moments are what the historian Mario Carpo has called the "two digital turns". Based on my experience, it seems to me that we are at the cusp of a third.
I think so based on what has been happening across a range of creative fields.
Catalyzed by new advances in machine learning, and by the development of methods for making those advances visible and accessible to a wider audience, the past eight years have seen a burst of renewed interest in generative practices across the domains of fine art, music, and graphic design.
Recent Advances in Creative AI
I'll offer here a quick overview of the short history of these tools in creative practice, and will highlight three precedent projects that I find particularly relevant.
Google Magenta: Sketch RNN, 2017 Here, the drawings of an author are augmented with predictions of what is to come next. The model underlying this tool was trained on the Google QuickDraw dataset.
Google Magenta: Sketch RNN, 2017 The same model as in the previous slide, with this visualization showing many possible futures for the sketch. The model underlying this tool was trained on the Google QuickDraw dataset.
Hesse: Edges to Cats, 2017 Here, an ML model has been trained to understand the transformation from a line drawing of a cat to a photographic image of a cat. Once training is complete, this model will attempt to create a cat-like image from any given line drawing.
A timelapse video of landscape images produced by GauGAN Neil Bickford, 2019
Tom White is an artist who is interested in representing "the way machines see the world". He uses image classification models to produce abstract ink prints that reveal visual concepts.
Scott Eaton is an anatomical artist who uses custom-trained call-and-response models as a "creative collaborator" in his figurative drawings. His large-scale piece "Fall of the Damned" was the inspiration for the Sketch2Pix tool developed for this course.
A timelapse of the drawing used as input to the "Bodies" network to create "Humanity (Fall of the Damned)" Scott Eaton, 2019
Mario Klingemann
Mario Klingemann is a self-described "neurographer" (a portmanteau of "neural" and "photographer") who uses generative hallucination models in the creation of still images and interactive animations.
There are those who advocate for the more comprehensive automation of broad portions of the design process. I am not such an advocate.
AI in the Design Studio
This undergraduate studio, offered in the Spring of 2020, proceeded through a series of lightly-connected "propositions" that explored the potential role of an AI tool in design.
Thematically the studio focused on the Northern California Landscape, and on the interface between the built environment and the natural environment. Or, rather, on the interface between the artificial built environment and the artificial natural environment.
Sketch2Pix
Four Sketch2Pix "brushes" inspired by Artbreeder images that evoke Semper's four elements of Mound, Enclosure, Hearth, and Roof Kyle Steinfeld, 2020
The use of the four brushes mentioned above. Kyle Steinfeld, 2020
I primarily teach two types of courses in the Department of Architecture: core courses in design and architectural representation, and topical research studios and seminars in Design Computation.
(right, center) A single-family home in North Oakland Kyle Steinfeld, this morning.
The project began with the start of the pandemic.
The places you see here - this is where I live, this is my neighborhood in Oakland - and this is where I found myself confined in the lockdown of March of 2020. I spent quite a long time taking walks in this neighborhood thinking about the modest architecture of these buildings that are within walking distance of my home.
In particular, I started thinking about these ornamental elements such as you see here pictured on the right.
I became obsessed with these little bits of architecture - how they're expressed as these little deformations of stucco that hold imagistic qualities. They can look like flowers or like soft-serve ice cream. These kitschy little pieces of - probably foam covered in plaster - are applied to recall some vague Western tradition - Greek, Roman, Italian, French... it's hard to tell, and it hardly matters.
(left) Entrance to the Carson Pirie Scott Building Louis Sullivan, 1903 Photograph by Hedrich-Blessing, 1960.
In their historicism, they play on our capacity for recognition and recall; in their constructed illusion of high relief, they play on our tendency to perceive three-dimensional form.
These little optical illusions sprinkled all over my neighborhood began to seem really important: they offer each dwelling something of an identity, and allow us to differentiate one otherwise unremarkable house from another.
Certain strains of contemporary architectural form-making hold resonance with certain threads of imagistic pattern-making found in machine-augmented visual art.
One of these pieces operates on pixels, the other on polygons. Can we bring these two together?
Walking the streets of Oakland, it occurred to me that certain strains of contemporary architectural form-making hold resonance with certain threads of imagistic pattern-making found in machine-augmented visual art.
It seemed plausible that there was some resonance here - some way to make existing neural networks instrumental in this domain, to bring existing technologies to bear on a sort of generative architectural sculptural ornament.
Can the raster representation that dominates much of the relevant technical foundational work in ML be adapted to describe architectural form in more robust ways?
A double-sided depthmap of a canonical CAD object. Here, the red channel represents a front-facing depthmap while the blue channel represents a back-facing depthmap.
So, with these thoughts in our heads, a team here at UC Berkeley and I got to work.
Initial experiments examined the illusion of depth in relief.
All hail the king of the forest!
We began by experimenting with variations on a traditional format called a "depthmap".
A variation of a depthmap in which depth information from different directions is encoded into the separate channels of an RGB image.
To up the ante, we developed a "homebrew" variation that encodes depth information from opposing directions into the separate channels of a single image.
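To make the encoding concrete, here's a minimal sketch - not our production code, and the function name and normalization scheme are my own shorthand - of how two opposing depthmaps might be packed into one RGB image, with the red channel holding front-facing depth and the blue channel holding back-facing depth:

```python
import numpy as np
from PIL import Image

def pack_depthmaps(front: np.ndarray, back: np.ndarray) -> Image.Image:
    """Pack two float depthmaps into the R and B channels of an RGB image."""
    def to_byte(d: np.ndarray) -> np.ndarray:
        d = (d - d.min()) / max(d.max() - d.min(), 1e-9)  # normalize to 0..1
        return (d * 255).astype(np.uint8)

    rgb = np.zeros((*front.shape, 3), dtype=np.uint8)
    rgb[..., 0] = to_byte(front)  # red channel: front-facing depth
    rgb[..., 2] = to_byte(back)   # blue channel: back-facing depth
    return Image.fromarray(rgb)

# usage: pack_depthmaps(front_depth, back_depth).save("two_sided.png")
```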
We were interested to know if our GAN was able to capture the "form language" of the test subjects shown here.
This, of course, is Squirtle... a Pokemon. On the previous slide was Totoro, king of the forest. I think you can see the form language we were hoping to reproduce.
Synthetic Pokemon figures described by a GAN-generated two-sided depthmap.
... and, as you can see here, we found some modest success!
Synthetic Pokemon figures described by a GAN-generated two-sided depthmap.
Shown here is a collection of synthetic Pokemon figures.
This was as far as "depthmaps" could take us. To develop further, we required a different format.
In a vector displacement map, displacements are stored as vectors of arbitrary length and direction. This vector information is separated into its X, Y, and Z components and is stored as the RGB channels of a full-color raster image.
To extend this work, we turned to a raster representation drawn from an obscure corner of the world of 3d animation: the "vector displacement map".
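As a sketch of how such a map works in practice - the function name is my own, and the decoding convention (8-bit channels re-centered on 128 so that negative displacements can be stored) is an assumption - the following displaces a flat grid of points by one vector per pixel:

```python
import numpy as np
from PIL import Image

def apply_vdm(path: str, scale: float = 1.0) -> np.ndarray:
    """Displace a flat grid of points by the vectors stored in a VDM image.
    Returns an (H, W, 3) array of displaced points."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    disp = (rgb - 127.5) / 127.5 * scale  # decode 0..255 to roughly -1..+1
    h, w, _ = disp.shape
    xs, ys = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
    flat = np.stack([xs, ys, np.zeros_like(xs)], axis=-1)  # the undeformed sheet
    return flat + disp  # each pixel's vector pushes its grid point
```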
(left) Creature design by Nicolas Swijngedau www.facebook.com/NicolasDesign
Vector displacement maps have found widespread application in the niche practice of 3d character animation.
(right) A single-family home in North Oakland Kyle Steinfeld, this morning.
To my knowledge, vector displacement maps have not been used in architectural design... but they should be!
There seems to be some real resonance between the sculpting interfaces offered by contemporary character modeling software, such as ZBrush shown on the left, and the kitschy stucco deformations that dominate North Oakland.
Three vector displacement maps (top) and their corresponding 3d forms (bottom)
The three forms above are *not* generated by a GAN.
Is a GAN capable of capturing the "form language" of vector displacement maps?
We developed methods for applying vector displacement maps in the service of describing a constrained family of polygon mesh forms in a way that is comprehensible to a GAN.
A Pipeline for Representing 3d Sculptural Relief as Raster Data
This is something of a novel "pipeline" - something that, to our knowledge, no one else has done before - for generating 3d sculptural forms.
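The encoding half of that pipeline can be sketched as the inverse of the decoding sketch above. Assuming each relief is modeled as a fixed-topology grid of points (a deformed sheet), every form in the family becomes a raster of identical dimensions - exactly the fixed-shape input an image-space GAN expects. Again, the name and the roughly ±1 displacement range are illustrative assumptions:

```python
import numpy as np
from PIL import Image

def encode_vdm(grid: np.ndarray, scale: float = 1.0) -> Image.Image:
    """Store per-vertex displacement from a flat sheet as an RGB image."""
    h, w, _ = grid.shape
    xs, ys = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
    flat = np.stack([xs, ys, np.zeros_like(xs)], axis=-1)  # the undeformed sheet
    disp = (grid - flat) / scale                            # roughly -1..+1
    rgb = np.clip(disp * 127.5 + 127.5, 0, 255).astype(np.uint8)
    return Image.fromarray(rgb, mode="RGB")
```

Because every fragment shares the same grid topology, the network never sees a mesh at all - only fixed-size images.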
Artificiale Rilievo
Artificial Relief
Kyle Steinfeld, Titus Ebbecke, Georgios Grigoriadis, and David Zhou
2021
Rendered detail of the Artificial Relief pieces. Cast in bronze and produced for display at the Italian Pavilion of the 2021 Venice Architecture Biennale. Kyle Steinfeld, Titus Ebbecke, Georgios Grigoriadis, and David Zhou, 2021
Fast-forward to the end: this technology led us to develop the Artificial Relief project, produced for display at the Italian Pavilion of the 2021 Venice Architecture Biennale.
Here, a dataset of fragmented and decontextualized historical sculptural relief underlies the generation of uncanny forms that straddle the unrecognizable and the familiar.
Given our position on the centrality of data in this kind of creative work, the project requires a reference: a historical piece that functions as a starting point, an object to reconsider. For this, we returned to the ancestor of those kitschy little sculptural stucco details found all over North Oakland.
(left) North side of the grand staircase of the Pergamon Altar Carole Raddato
(right) Screen recording of a CAD model containing a 3d scan of selected panels from the Pergamon Altar.
The project draws sculptural samples from one piece in particular: the Pergamon Altar. This is a Greek construction originating in modern-day Turkey, which was disassembled in the late 19th century and re-assembled in the early 20th century in a Berlin museum.
Selected sculptural forms are disassembled into fragments that can be described as deformations of a flat sheet.
The project operates in a manner that mimics the fate of the Pergamon.
It begins with a disassembly of selected sculptural forms into fragments that can be described as deformations of a flat sheet; these fragments serve to train a neural network to understand the form-language of our selected source material.
Vector displacement maps can be manipulated and combined using standard raster image editing methods.
The result is a generative sculptural model that can create synthetic samples that resemble the Pergamon Altar,
A walk through a latent space of GAN-Generated forms
and that can be combined and aggregated into merged forms.
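As an illustration of what "standard raster image editing" can mean here - and this is a toy sketch, with a hypothetical function name and placeholder file names, showing just one of many possible compositing operations - two GAN-generated maps can be merged by keeping, at each pixel, whichever displacement vector has the greater magnitude:

```python
import numpy as np
from PIL import Image

def merge_vdms(path_a: str, path_b: str) -> Image.Image:
    """Merge two VDM images by per-pixel maximum displacement magnitude."""
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float32) - 127.5
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float32) - 127.5
    mag_a = np.linalg.norm(a, axis=-1, keepdims=True)
    mag_b = np.linalg.norm(b, axis=-1, keepdims=True)
    merged = np.where(mag_a >= mag_b, a, b)  # keep the stronger vector
    return Image.fromarray((merged + 127.5).astype(np.uint8), mode="RGB")

# usage: merge_vdms("sample_01.png", "sample_02.png").save("merged.png")
```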
GAN-generated forms are 3d printed on a standard SLA printer using a resin designed for investment casting.
I'll speak briefly about the fabrication process and how this piece was realized. It began by printing the GAN-generated forms on a standard SLA printer using a resin designed for investment casting.
48 samples are cast using a lost-wax process.
These prints were then cast in bronze at a foundry about an hour south of Oakland. This scope of work required 48 samples to be cast using a lost-wax process.
Two bronze pieces in a "raw" state, prior to finishing and patina.
Here we see some of the pieces as they arrived from the foundry.
An aggregation of bronze pieces in a "raw" state, prior to finishing and patina.
These are in their "raw" state, prior to finishing, mounting, and patina.
Mounting plates are waterjet cut from brass, and de-burred by hand.
Of course, even in a highly digital process such as what I've described above, there's a great deal of manual work involved in bringing a physical piece into the world.
Here we're de-burring a brass mounting plate...
The mounting plates are welded to the back of each piece.
This plate is then welded to the back of each piece.
A cold patina is applied using a liver of sulfur solution.
Here we see the process of applying a cold patina.
We used a liver of sulfur solution, which results in a fairly dark patina, almost black.
The Artificiale Rilievo piece, as installed.
And here we see the final piece as installed.
The ambition of this piece is to evoke some suggestion of the historical material from which it is formed - some vestige of the "form language" inherited from the Pergamon Altar in particular - while also becoming something quite different.
I hope that the piece engages a viewer by hovering at the "uncanny" boundary between individually recognizable forms and a differentiated field that suggests the latent space of the GAN.