CED Open House 2021

Kyle's Work and Role in the MArch Program

Kyle Steinfeld for the College of Environmental Design, Spring 2021

Animation generated by Artbreeder
Kyle Steinfeld, 2020

Who Am I?

Animation generated by Artbreeder, Kyle Steinfeld, 2020

I am not an imaginary person.

I am an Associate Professor of Architecture at UC Berkeley, where I've been teaching for just about ten years.

My work centers on the dynamic relationship between the creative practice of design and computational design methods.

While one of these is often characterized as a direct determinant of the other, my work seeks to demonstrate that new technologies of design do not directly determine social relationships, but are among the network of actors - designers and specialists, software and users, data and drawings - that compete to shape the diffusion of design authorship and the social distribution of design work.

The interplay between new technologies of design and the culture of design practice comes into sharp relief at intense moments of technological or social change. In my career as a student and a scholar of architectural design, I have witnessed two such moments.

    (left) The sort of thing that I drew in 1995.
    (right) Computational Geometry, Ko Steinfeld 2018

    The first was in the mid-1990s, when, as an undergraduate student of architecture, I was a part of a transitional generation that saw the shift from analog to digital representation.

    The second was in the early-2000s, when, as a graduate student and young professional, I saw the adoption of computational techniques in design, such as scripting and parametric modeling.

    These moments are what the historian Mario Carpo has called the "two digital turns". Based on my experience, it seems to me that we are at the cusp of a third.

    Increasingly realistic synthetic faces generated by variations on Generative Adversarial Networks (GANs).
    From left to right:
    Goodfellow et al (2014), Radford et al (2015), Liu and Tuzel (2016), Karras et al (2017), Karras et al (2019), Tokking Heads (2020), Deep Nostalgia (2021)
    Some images generated using thispersondoesnotexist
    Images on left adapted from General Framework for AI and Security Threats

    I think so based on what has been happening across a range of creative fields.

    Catalyzed by new advances in machine learning, and by the development of methods for making these advances visible and accessible to a wider audience, the past seven years have seen a burst of renewed interest in generative practices across the domains of fine art, music, and graphic design.

    Recent Advances in Creative AI

    I'll offer here a quick overview of the short history of these tools in creative practice, and will highlight three precedent projects that I find particularly relevant.

    From Classification To Hallucination Models

    (left) 'Image Generated by a Convolutional Network - r/MachineLearning.'
    Reddit user swifty8883. June 16, 2015
    (right) Adam8
    Mario Klingemann, 2015

    Completion Models

    Google Magenta: Sketch RNN, 2017
    Here, the drawings of an author are augmented with predictions of what is to come next. The model underlying this tool was trained using Google Quickdraw.

    Google Magenta: Sketch RNN, 2017
    The same model as in the previous slide, with this visualization showing many possible futures for the sketch. The model underlying this tool was trained using Google Quickdraw.
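
To make the idea of a completion model concrete, here is a minimal, illustrative sketch in PyTorch. It is not Magenta's Sketch-RNN (a sequence-to-sequence VAE with a mixture-density output trained on Google Quickdraw data); it only shows the general pattern: a small recurrent network predicts the next pen offset from the strokes drawn so far, and sampling with a little noise yields several possible futures for the same partial sketch. The class and variable names are illustrative placeholders, and the network here is untrained.

```python
import torch
import torch.nn as nn

class TinySketchCompleter(nn.Module):
    """Toy stand-in for a sketch-completion model: predicts the next (dx, dy) pen offset."""

    def __init__(self, hidden_size=128):
        super().__init__()
        self.rnn = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)

    @torch.no_grad()
    def complete(self, prefix, steps=32, temperature=0.1):
        """Continue a partial sketch; temperature > 0 yields varied possible futures."""
        self.eval()
        out, state = self.rnn(prefix)                 # encode the strokes drawn so far
        last = self.head(out[:, -1:, :])              # first predicted offset
        future = []
        for _ in range(steps):
            step = last + temperature * torch.randn_like(last)  # sample around the prediction
            future.append(step)
            out, state = self.rnn(step, state)        # feed the sampled offset back in
            last = self.head(out)
        return torch.cat(future, dim=1)               # (1, steps, 2) continuation

# Sample five possible continuations of a (random, stand-in) partial sketch.
model = TinySketchCompleter()
partial_sketch = torch.randn(1, 10, 2)                # ten pen offsets drawn by a user
futures = [model.complete(partial_sketch) for _ in range(5)]
print([tuple(f.shape) for f in futures])
```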

    Transfer Models

    Zhu, Park, Isola, Efros: CycleGAN, 2017

    Hesse: Edges to Cats, 2017
    Here, an ML model has been trained to understand the transformation from a line drawing of a cat to a photographic image of a cat. Once training is complete, the model will attempt to produce a cat-like photographic image from any given line drawing.
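
As a rough illustration of what training such a transfer model involves, the sketch below sets up a toy pix2pix-style objective in PyTorch: a generator maps an edge image to a color image, a discriminator judges (edge, image) pairs, and an L1 term keeps the generated image close to the ground truth. This is a simplified stand-in, not the code behind Edges to Cats; real pix2pix uses a U-Net generator and a PatchGAN discriminator, and the tensors below are random placeholders for a batch of edge drawings and cat photos.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 4, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

def deconv_block(cin, cout):
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

# Generator: edge drawing (1 channel) -> color image (3 channels).
generator = nn.Sequential(
    conv_block(1, 64), conv_block(64, 128),
    deconv_block(128, 64),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh())

# Discriminator: judges a concatenated (edges, image) pair, patch by patch.
discriminator = nn.Sequential(
    conv_block(1 + 3, 64), conv_block(64, 128),
    nn.Conv2d(128, 1, 4, stride=1, padding=1))

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# One training step on a stand-in batch of edge drawings and cat photos.
edges, photos = torch.rand(4, 1, 64, 64), torch.rand(4, 3, 64, 64)

# Discriminator step: real pairs should score 1, generated pairs should score 0.
fake = generator(edges)
d_real = discriminator(torch.cat([edges, photos], dim=1))
d_fake = discriminator(torch.cat([edges, fake.detach()], dim=1))
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: fool the discriminator while staying close to the real photo.
d_fake = discriminator(torch.cat([edges, fake], dim=1))
g_loss = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, photos)
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
print(float(d_loss), float(g_loss))
```

The weight on the L1 term (100.0, the default suggested in the pix2pix paper) is what biases the generator toward images that stay faithful to the input drawing rather than merely fooling the discriminator.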

    A timelapse video of landscape images produced by GauGAN
    Neil Bickford, 2019

    The GauGAN Beta interface.
    NVIDIA, 2019

    AI in the Arts

    Tom White

    Tom White is an artist who is interested in representing "the way machines see the world". He uses image classification models to produce abstract ink prints that reveal visual concepts.

    Tom White, Synthetic Abstractions 2019

    Tom White, Synthetic Abstractions 2019

    Scott Eaton

    Scott Eaton is an anatomical artist who uses custom-trained transfer models as a "creative collaborator" in his figurative drawings. His large-scale piece "Fall of the Damned" was the inspiration for the Sketch2Pix tool developed for this course.

    A timelapse of the drawing used as input to the "Bodies" network to create the drawing "Humanity (Fall of the Damned)"
    Scott Eaton, 2019

    YACHT

    Perhaps the most ambitious of the precedents I'll show today comes from the tech-forward band YACHT, who produced a concept album titled "Chain Tripping" in which AI played a role at every step of the process.

    In a sentiment we hope to emulate in this studio, Claire Evans states: "We wanted to understand it. We knew that the best way to do that is to make something."

    YACHT 2019
    What Does This Mean For Architecture?

    I DON'T KNOW!

    (let's find out)

    Face to Pix2Pix Facade
    Koki Ibukuro, 2019

    GAN Trained on images of Gehry's architecture, Refik Anadol 2019

    Provocation Models

    AI-generated material might hold utility as a provocation at the start of the design process - as a way for us to shake off our preconceptions.

    "Sketch to House Photo"
    Jeremy Graham, 2019

    "Architecture Machine Translation"
    Erik Swahn, 2019

    Augmentation Models

    We might find utility in the application of an AI-completion model to fill in the low-level details of a design based on high-level human gestures.

    GAN-enabled Space Layout under Morphing Footprint
    Stanislas Chaillou 2019

    Automation Models

    There are those who advocate for the more comprehensive automation of broad portions of the design process. I am not such an advocate.

    AI in the Design Studio

    This undergraduate studio, offered in the Spring of 2020, proceeded through a series of lightly connected "propositions" that explored the potential role of an AI tool in design.

    (left) Fake Marin, Kyle Steinfeld 2020.
    (right) Fake Oakland, Kyle Steinfeld 2019

    Thematically, the studio focused on the Northern California landscape, and on the interface between the built environment and the natural environment. Or, rather, on the interface between the artificial built environment and the artificial natural environment.

    Text / Image Generation Models

    (left) Talk to Transformer
    (right) AI Dungeon

    Runway Generative Engine

    Artbreeder

    Interpolations between generated landscapes on Artbreeder
    Bay Raitt, 2019

    GAN Loci

    Sketch2Pix

    Suggestive Drawing among Human and Artificial Intelligences
    Here, an ML model has been trained to understand the transformation from line drawings to a whole range of objects: from flowers to patterned dresses. Deploying this model in the service of a creative design tool, Nono Martinez Alonso demonstrates the potential of computer-assisted drawing interfaces.
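
As a sketch of what the drawing-tool side of this looks like, the snippet below applies an image-to-image generator of the same toy form used earlier to a stand-in line drawing and rescales the result for display. In a real tool such as Sketch2Pix or Martinez Alonso's prototypes, the generator's weights would come from a model trained on a particular brush or object class; the commented-out checkpoint path is a hypothetical placeholder so the snippet runs on its own.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 4, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

def deconv_block(cin, cout):
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

# Same toy generator shape as in the training sketch: line drawing in, color image out.
generator = nn.Sequential(
    conv_block(1, 64), conv_block(64, 128),
    deconv_block(128, 64),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh())

# generator.load_state_dict(torch.load("flowers_generator.pt"))  # hypothetical trained weights
generator.eval()

# A stand-in for a designer's line drawing: one grayscale 64x64 image.
line_drawing = torch.rand(1, 1, 64, 64)

with torch.no_grad():
    suggestion = generator(line_drawing)          # proposed "finished" image, (1, 3, 64, 64)

# Rescale the Tanh output from [-1, 1] to [0, 1] before display or export.
suggestion = (suggestion + 1.0) / 2.0
print(tuple(suggestion.shape))
```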

    Work from the Studio

    Synthetic Landscape Drawings
    Nick Doerschlag, 2020

    Synthetic Landscape Drawing
    Rose Wang, 2020

    Synthetic Axonometric Drawings
    Zihao Zhao, 2020

    Synthetic Axonometric Drawings
    Rose Wang, 2020
    Robert Carrasco, 2020

    Synthetic Axonometric Drawings
    Nick Doerschlag, 2020

    Core MArch Classes

    (right) Jon Couch, Summer 2018

    I primarily teach two types of courses in the Department of Architecture: core courses in design and architectural representation, and topical research studios and seminars in Design Computation.

    (left) Qingyue Gao, 2020
    (right) Amelia Goldrup, 2020

    Bruce Zhou, 2020
