Shift Into the Uncanny

Artist Michelle Robinson discusses incorporating artificial intelligence into her process, and the prospect of AI as the ultimate conceptual art form.

I’ve been blogging for Animation World Network since 2016, but my last piece — the interview with “Summer Island” author Steve Coulson on AI-assisted content creation — generated more feedback than all of my previous posts combined. Reactions were polarized, to put it mildly, including some amusing death threats.

Naturally, I doubled down. My next interview on the topic of AI-assisted content creation is with my old friend Michelle Robinson, whom I first met as a colleague after joining Walt Disney Feature Animation on Fantasia 2000 in 1995. I quickly became aware of Michelle’s photography sideline, but had no idea that it would lead to the conversation we’d have decades later.

In her typically thoughtful fashion, Michelle expounds upon her personal artistic practice, her incorporation of AI technology, related conceptual and translational issues, creative frustrations, happy accidents, the “death of art,” and future directions…

Shift into the uncanny

KEVIN: Michelle, ever since we were colleagues at Disney back in the ‘90s, your independent artmaking has expanded to synthesize photography and architecture. You’re incorporating computer graphics, sculpture, needlepoint, and now AI. As an artist, what drives the lines of inquiry that you’re pursuing conceptually and aesthetically in your work?

MICHELLE: I think if I were to try and distill the conceptual basis of my work down to the main threads, the things that I’m most interested in are memory, loss, and ruins — with an expansive definition of the latter.

KEVIN: In the archeological context?

MICHELLE: Contemporary ruins: ruins of home, ruins of family, ruins of memory. I’m interested in those “in-between” urban spaces that we photographers like to capture — decrepit old buildings and the like — but more as states of mind, as psychological symbols.

KEVIN: You’ve traveled a lot, but have been living in Los Angeles for a while. Was this something that came to mind after you moved to L.A., or were you already onto it when you lived in other parts of the USA?

MICHELLE: I was already in that state of mind as an undergrad at Texas A&M. I did a photography project where I inhabited a burned-out house for a semester.

KEVIN: Wow.

MICHELLE: (laughs) Not around the clock, but I would go there during the day and gradually transform this abandoned place into my own space. I did a whole series of portraits in there. As I grew older, I started turning inward and thinking about how we accumulate secrets and sadness during our lives, and about what can happen to our memories and our thoughts of family when a secret is revealed.

KEVIN: Yeah.

MICHELLE: When there’s trauma within the family, things can shift into the uncanny in the Freudian sense — where the familiar becomes unfamiliar. There’s that wonderful German word, unheimlich, which means “unhomey”, but also relates to secrets being revealed — of things becoming known which never should be known. I started thinking about that with respect to personal experiences and the experiences of my friends, and that opened the door to exploring some of these other media. And it felt like the digital space was a really good place to go with it all.

KEVIN: That phrase — “shift into the uncanny” — really strikes me. I see how digital media facilitates your exploration of that, and how artificial intelligence can take things further in terms of uncanny oddities. In many ways, AI seems like a natural fit technologically for what you’re exploring conceptually.

MICHELLE: Yeah, I think AI gets further into the ideas of truth that I’m digging into — such as what is true about your memories. We all know that memories are fragile: error-prone, easily manipulated, and subject to change over time. And returning to the subject of secrets revealed, that knowledge makes you question your own memories.

KEVIN: Well, this definitely occurs within families — such as my own — where people have different memories of something that you thought clearly happened a certain way, and those divergent recollections continue to evolve over time.

MICHELLE: Yeah. It’s fascinating. It’s not always terrible, but it’s definitely…

KEVIN: …fascinating. (laughs)

MICHELLE: (laughs) It’s interesting, right? And so I started thinking about that. I began thinking about our real life versus our digital life, and how those two realms can be very different. Photography has always been a good medium for these kinds of thoughts because we lend a certain credibility to photography…

KEVIN: Well, we used to. (laughs) I think that’s evolving, too.

MICHELLE: (laughs) Yeah. Of course, by now everyone knows that you can manipulate a photo, but it still has an impact that’s different from a painting. There’s still the idea that something indexical happened: something that existed was recorded, and we have an imprint of that.

KEVIN: Right.

MICHELLE: So, photography is still a useful tool when you’re trying to make work that questions truth or the nature of reality. And AI entering my practice was an extension of working in computer graphics and making images that I wanted to be a bit duplicitous: you have a first read, but then another take on it. Maybe during the first read you think, “That’s real. That happened.” But then you notice strange clues built into the image that things aren’t exactly right — realizing that something is a miniature, for example. And so I began thinking about what an AI might do with all this.

Lost in translation

KEVIN: And, to clarify: you’re using your own work as inputs to the AI as opposed to “data scraping” from the internet, correct? You’re feeding your photographs into a GAN (generative adversarial network), processing those, and then working further from the results.

MICHELLE: Yes. Almost three years ago, I became interested in GANs, and I discovered there was one that you could easily access without writing code, called Playform. I was curious what an AI would do if I submitted a data set of images of homes and neighborhoods I grew up in — multiple homes and neighborhoods mixed together. What kind of houses would it make? Does it even know what a house is? Does it understand what’s important about a house, or even the basic iconography of a house — doors, windows? Will it recognize any of that?

KEVIN: Hmm.

MICHELLE: I was doing this at a time when I was also reading and thinking about translation. I read a book by a woman who recounted her experience as a translator, and I read poetry that was translated back and forth across multiple languages, with the “telephone” effect where meaning is changed and lost in translation.

KEVIN: Ah.

MICHELLE: And I thought that it was a good metaphor for memory: this idea that when we draw up a memory and reexamine it, we’re also changing it.

KEVIN: The Observer Effect.

MICHELLE: Yes. All of that was in the soup, and I was fascinated to see what the AI would come up with — especially because my self-generated data set of photos was very limited: I had the GAN training on just 32 images for 8 hours. It got to the point where you could tell that the AI was outputting houses, but they were profoundly strange: melty, weird, and not very functional. They didn’t all have doors and windows where you would expect them to be. (laughs) But, there were certain aspects that this algorithmic entity had picked out that it seemed to think were important: the peaked roof, the little red door. So, those things kept showing up in the images, which I thought was really interesting.

KEVIN: Any garden gnomes pop up?

MICHELLE: (laughs) No, although I did have pink flamingos in the data set.

KEVIN: The neighborhoods you photographed were from which states?

MICHELLE: Well, there was College Station in Texas, where I went to school. And then there was the neighborhood where I grew up in Mesa, Arizona. I’ve also used a neighborhood here in California as a proxy for other places that I’ve lived in but were difficult to get back to. The pink flamingos came from those territories, but they didn’t show up in the AI’s final images. It just filtered them out. (laughs)

KEVIN: (laughs) As well it should.

MICHELLE: Yeah, so then I wanted to translate that image output back into something physical, something that had presence in the material world.

KEVIN: Cool.

The equivalent of a pixel

MICHELLE: So, I tried drawing. I did a series of drawings of these AI-generated houses that look a bit like architectural renderings.

KEVIN: Yes. I’ve seen those on your website.

MICHELLE: I guess I was drawing from my architecture background in school, but I didn’t find those entirely satisfying. I started to think about what a really tactile equivalent of a pixel would be. I experimented with embroidery, and then eventually cross-stitch, which is even more like a pixel.

KEVIN: Yes.

MICHELLE: And cross-stitch also carries all of these associations with domestic labor and women’s work.

KEVIN: Right.

MICHELLE: There’s a whole sociological context to the practice that you tie into. And the interesting thing about that process, of course, is that I’m doing it by hand. I have a blank sheet of fabric and I’m translating the AI-generated image back onto the cloth stitch-by-stitch, which creates its own new set of errors and happy accidents.

KEVIN: So, to summarize: you’re photographing houses, feeding those photos into a generative adversarial network, getting output from the AI, and then recontextualizing it with your own two hands in the form of cross-stitch pieces.

MICHELLE: Yes.

KEVIN: And then how do you present these? Do you show the entire process, or do you just exhibit the final work?

MICHELLE: Well, that’s the big question. So far, I’ve been showing these pieces as individual cross-stitch works, with cryptic titles that give some clues as to the origins, but no disclosure regarding the process leading up to what you see hanging on the wall.

KEVIN: You’re not interested in the making of the work being part of the exhibition.

MICHELLE: Ultimately I think that I am, but this is all still very much work in progress. When I build up a large enough body of work around the idea, then maybe I’ll present these in a different way. But it’s very slow labor. Cross-stitch takes a long time.

KEVIN: (laughs) I’ve never done it, but I can imagine.

MICHELLE: Holy smokes, it takes forever! (laughs) I had a curator say to me recently, “I’m worried about you, because it seems like this is just going to take way too long.” And then I’ve had other people say, “Couldn’t you just have…”

KEVIN: … a machine do it. (laughs)

MICHELLE: “… a machine do it?” (laughs) Or, “Couldn’t you just print them out?” But I feel that it has to be this way.

KEVIN: The process of making is important to you, something that — as an artist myself — I understand. You grow artistically during the process. I would imagine that the meditative aspect of cross-stitch helps sow the seeds for what happens next.

MICHELLE: Totally. It could be another year or more before I complete the entire series, and then we’ll see how to exhibit it, and what if anything to show of the process.

KEVIN: It impresses me that you’re combining the high-tech, methodical speed of an AI with the low-tech, methodical pace of cross-stitch in a way that creates a fascinating oscillation of fast and slow, of generative and meditative.

MICHELLE: Yeah, awesome — I’m glad you picked up on that. But, I guess that’s only possible if you know where the images come from and how they were made. I’ve been thinking a lot about human labor versus computer labor. We have this impression that digital technology is quick, but it would be interesting to find out what eight hours of GAN training on my photographs actually means in total hours. Maybe it’s a thousand hours, I don’t know. It’s invisible to me because it’s distributed across many machines in the cloud.

KEVIN: Right.

MICHELLE: Maybe the AI labor is actually more intensive than the cross-stitch part.

KEVIN: Hmm.

The ultimate conceptual art

MICHELLE: Conceptual art has this long history of focusing on the idea as opposed to the product.

KEVIN: Yes.

MICHELLE: So, when I think about Sol LeWitt’s wall drawings…

KEVIN: Yes…

MICHELLE: …I think of what’s happening now with AI prompts. Sol LeWitt’s works were written sets of instructions: prompts, essentially…

KEVIN: … on how to generate and recreate his wall drawings.

MICHELLE: Yes, and they are given to someone else to execute.

KEVIN: In a way, it seems that AI image generation represents the ultimate conceptual art.

MICHELLE: Possibly… because prompts are pure concept. (laughs)

KEVIN: We’re entering a future in which the value of your ideas will matter more than ever, which I guess has progressively been the case.

MICHELLE: When it comes to artmaking, certainly.

KEVIN: In addition to Sol LeWitt’s drawing instructions, I think of readymades, the iconic appropriations of Robert Rauschenberg and the work of Andy Warhol, which could be considered an analog form of data scraping…

MICHELLE: Sherrie Levine taking photographs of photographs… yes. These aspects could find their way into an exhibition proposal that I have in mind, which may include videos of the AI as it generates new images from my photographs.

KEVIN: Cool. I’ve seen a minute or so of video of your GAN working away, and I’ve also observed the generative “decision-making” of artificial intelligence firsthand while playing with Midjourney. You see these AIs taking instructions, ingesting data, building up to something, culling things out, refining, and making further redirects until it arrives at something that it’s happy with — even if you aren’t. (laughs) I’m anthropomorphizing here, but…

MICHELLE: A bit, yeah. (laughs) It’s a really fascinating process to observe, and I think it may ultimately be part of the exhibition, but the jury’s still out on that. The idea of entering and exiting this digital space is important, but I don’t want it to come across as a parlor trick.

KEVIN: Right.

MICHELLE: I really like the idea of trying to visualize something that we don’t quite understand by bridging the digital and the physical.

KEVIN: Just prior to this interview, I watched Netflix’s adaptation of Neil Gaiman’s The Sandman, which (*SPOILER ALERT*) features a young woman who’s a “vortex”: a human with great dream power capable of breaking the barriers between the dream world and the real world. And that reminded me of your work.

MICHELLE: Ah, yes…

KEVIN: It seemed analogous to how you’re weaving in and out of physical and digital spaces — working with artificial intelligence and your own natural intelligence. That oscillation fascinates me. Your process fascinates me.

MICHELLE: (laughs) Thanks.

KEVIN: In certain respects, I think the incorporation of AI into the artmaking process reveals how many people still associate artmaking with craft and effort over concept. Even though — like Steve Coulson from my previous interview — you’re putting a lot of design thinking, creative effort and editorial decision-making into the process, there are still people who will look at the result and say, “Oh… an AI did this for you.”

MICHELLE: (laughs) Yes. Much like people previously said about computer graphics.

KEVIN: (laughs) I’ve been there. But then you complete the work with this painfully laborious cross-stitch home stretch that’s almost like an act of atonement for using an AI.

MICHELLE: (laughs) Atonement. I’m gonna write that down.

KEVIN: Please do. (laughs) You’ve been ahead of the game, working with AI, for a while. And we now see a lot of folks — many with no artistic training — jumping onto DALL·E, Midjourney and other AI image-generation platforms, firing off prompts like “imagine Donald Trump and Barack Obama’s love child”, or marginally more sophisticated: “imagine a spaghetti monster in the style of Frank Frazetta.”

MICHELLE: (laughs) Cupcake kittens and so forth, yeah.

KEVIN: Most of these results are not driven by the same level of artistic inquiry that you’re putting into your own AI-assisted work, which uses your own photography as input instead of data scraping the internet.

MICHELLE: Yes.

KEVIN: That said, has your use of AI evolved or been affected by recent advances in the technology which are now available to the general population?

Happy accidents

MICHELLE: I’d say I’ve developed a better understanding of how to make the GAN work for me, but the evolution of my process is unrelated to the things that are going on with Midjourney and the other AIs. I’m working with a limited data set of my own photography — and I now understand the more comprehensive that is, the better. So, over the Christmas holiday I went back and re-photographed my neighborhood in Mesa, Arizona with that in mind, and was more objective about the way I captured images of the houses: trying to keep them as unified as possible. And the results from the AI were much more “correct,” if you want to use that word. That’s both more interesting and less interesting to me.

KEVIN: Yes, people often say of computer graphics that there are no “happy accidents,” only unhappy accidents — that you can’t get the serendipitous occurrences that media like painting and ceramics lend themselves to. I’ve long disagreed with this — having enjoyed happy accidents in my own digital artwork — and it sounds like you were getting happy accidents from the AI that began to disappear as the data processing got better. The output lost some of its uncanny weirdness, but it’s that weirdness which appeals to you.

MICHELLE: Yeah, I would agree with that. I think the weirdness is really interesting. I’ve also been surprised by the nature of the images I get back; the oddities and artifacts really differ from data set to data set. And I don’t know if something’s happening under the hood, if the AI algorithm is changing…

KEVIN: It almost certainly is. Midjourney and DALL·E are evolving constantly. You can put the same prompt in as you did yesterday, and get a different result — which is cool, but can be problematic for artists using AI assistants for bodies of work and serial storytelling. It's a volatile toolset.

MICHELLE: Absolutely.

KEVIN: Are you still using the same GAN that you started with years ago, or are you exploring other AI platforms?

MICHELLE: I’ve been using the same GAN — Playform — for the house project, but I’ve dabbled in others such as DALL·E and GauGAN, to see what they can do. They each have different strengths and weaknesses. And like everybody else, I was often just having fun with, “What will it give me if I put this in?” (laughs)

KEVIN: (laughs) Yeah, that aspect of AI prompting is rather addictive.

MICHELLE: I even experimented with describing one of my art pieces to DALL·E in text, to see what it would give me compared to the original.

KEVIN: That brings up the fascinating prospect of AI prompts as a compact image storage medium. Stable Diffusion is one of the AI platforms pursuing the potential to faithfully regenerate existing images and even videos from prompts — the idea being that you could eventually store massive content files using minimal text prompts. The fidelity of this approach is currently questionable, but it’s probably only a matter of time before it becomes less “lossy.” Weaving in and out from image to prompt to image gets into the squirrelly “translation” issues that you address in your work.

MICHELLE: Yeah. I'm just trying to keep up. (laughs)

KEVIN: (laughs) Me too.

MICHELLE: It's amazing how quickly things are moving. My social media feeds are full of friends playing around with this stuff. It's freaky, but it's fun.

Thoughts on the scrum

KEVIN: Well, that’s a great segue into my next question. In just the past few months, there’s been an absolute explosion of interest in, engagement with, and debate about AI-generated imagery. It’s safe to say that reactions have been mixed, sometimes within the same individual: “This is incredible!” / “This is horrible!” Some folks are exhilarated by the potential creative opportunities. Others are disturbed by the prospective career disruption. I gave a talk on AI in Beijing the year before the pandemic in which I addressed which professions would be disrupted, based upon the conventional wisdom at the time. Those most at risk for being displaced by AI were bank tellers, insurance adjusters and truck drivers. At the very bottom of the list, in terms of AI displacement, were the so-called “creative” professions: artists and designers for whom the threat of AI displacement was deemed decades away.

MICHELLE: (laughs) Yeah…

KEVIN: Cut to 2022, when the art and design community is in an uproar over the encroachment of AI. So, my question for you is two-fold: 1) What have been the pros and cons of AI technology in terms of your artistic practice, and 2) What advice would you give to creatives who are interested/fearful of how AI may impact their interests? It’s a massive two-part question, I know.

MICHELLE: (laughs) Yeah, a bit.

KEVIN: What have you appreciated about AI-generated imagery, and what have you found frustrating?

MICHELLE: Well, right off the bat, it certainly didn’t do what I expected it to do — for better and for worse. You quickly understand that it’s not a medium over which you have a huge amount of control. At this moment, it doesn’t feel like a medium you can master in the same way that you can master drawing or darkroom photography. It’s…

KEVIN: Like riding a bucking bronco.

MICHELLE: (laughs) Yeah. It’s a bit of a black box… so, there’s that. But I’ve also found it to be a fruitful collaboration, for lack of a better word, and an interesting way to generate a conversation with myself about the things that we’ve been talking about.

KEVIN: Yes.

MICHELLE: It’s been super useful in that respect. And I could see it being helpful in pre-visualizing something I might want to do. I could easily imagine taking something I want to construct and devising a prompt to manifest it.

KEVIN: I’m doing that now with Midjourney vis-à-vis projects we currently have in development. There’s no end to the handwringing over the legal and ethical issues surrounding data scraping — and it will be almost impossible for the governments of the world to keep up with the pace of the technology — so, I think it’s incumbent upon each of us to define our own positions. Speaking personally, I use AI platforms in much the same way as you mentioned: generating a conversation with myself, creating images that I don’t intend to ever show anyone as “my work,” but that help me visualize the characters I’m creating and the story that I’m developing.

MICHELLE: Yeah.

KEVIN: And there are people reading this who may say, “Yeah, but you’re taking work away from writers and artists who could contribute to your project.” And my reply to them is: “As a writer/artist myself, who are you to tell me what tools I can and cannot use?” To wit: if I illustrate my own children’s book (which I have) instead of hiring someone else to do it (which I haven’t), is that also “robbing” someone of work?

MICHELLE: Yeah. I think if people want to use AI platforms, they should simply credit them. It’s part of the medium, right? So, if you’re going to submit a piece of digital art to something like…

KEVIN: Like a Colorado state fair?

MICHELLE: (laughs) Like a Colorado state fair, sure.

KEVIN: At the risk of sounding snarky, very little at the Colorado state fair probably qualified as “Art”, whether generated by machines or by humans. (laughs)

MICHELLE: (laughs) We could have a lot of fun riffing on that, but it does indeed bring up the old issues of art, authorship and authenticity. Wherein does the art lie? Is the prompt the art? Is the output of the prompt the art? And there are already people trying to monetize their prompts, protecting them like they’re a precious asset…

KEVIN: At the same time that they’re generating results on the backs of existing images and artists.

MICHELLE: Yeah.

KEVIN: And the hilarious part of all that is you can grab these prompts for free. I remember when I was doing some visual development work on Midjourney, and someone grabbed a prompt that I abandoned as unproductive, and continued refining it. My initial reaction was annoyance that someone would dare to co-opt my prompt, but my subsequent reaction was fascination that a complete stranger thought that something I discarded as useless was interesting enough to process further.

MICHELLE: Yeah, it’s an interesting space.

What is art?

KEVIN: I recently corresponded with an artist friend of mine, who’s adamantly against anything AI. I respect that, but I said to him: “The cool thing about all of this is that I can’t remember a time in my life when the discourse about art — what art is, and who/what can make art — has been so pervasive and democratized as it currently is. Everybody’s talking about it.”

MICHELLE: Yeah, I think it’s pretty great. I think it helps us. It’s good for all of us to rethink what it means to be creative — what it means to be human. What is humanity? What constitutes human expression? I think what’s missed in a lot of the conversations and debates is that AI is just a mirror: it’s mirroring all of this stuff that we’ve created. So, there’s still humanity in these “AI-generated” images — from the beautiful to the horrible — because it’s making stuff from our stuff. It’s creating images from things people made.

KEVIN: What do you think about the fact that the newly-generated AI images are going back into the scrum, so to speak? Do you think that dumping all of that derivative output back into the mix will compromise future results?

MICHELLE: That’s a really good question. A concern I expressed recently to a friend of mine was about the sheer onslaught of images that is suddenly being generated by everybody, given the way that AI has democratized image-making. Anyone can go in and make something that’s aesthetically… whatever.

KEVIN: (laughs) That’s a great way to put it: aesthetically whatever.

MICHELLE: Well, I’m trying to encompass the range as objectively as I can, because it’s certainly not all beautiful or interesting, but it’s something. So, I don’t know if in five more years I can still make the same comment about humanity being mirrored, given how that will have been mediated and arguably “compromised.”

KEVIN: Well, certainly five years from now AI technology will be light years beyond where we are today. The ability to generate high quality results from limited data sets will have improved dramatically.

MICHELLE: Right.

KEVIN: Currently, the AIs benefit from big data input. Narrow data sets pose problems for the GANs, as you know.

MICHELLE: Yes.

KEVIN: And as AI image generators arguably introduce more “chaff” into the data sets, it could have an impact on future results.

MICHELLE: For better and for worse.

KEVIN: Yes. So, for the folks reading this who haven’t touched AI — who may be interested but are also intimidated — how would you advise them to get started? Would you recommend they jump in, as you did? Or would you advise them otherwise?

MICHELLE: Well, like many things in life, if you don’t have a specific interest or intent in mind, you probably won’t get back anything very satisfying beyond the initial “wow” factor.

KEVIN: Right. You’ll be doing random mashups, and…

MICHELLE: Yeah. And the biggest limitation currently is that the AIs are awfully literal. They suffer from literalism, and don’t interpret as well as a human might. So, you’ll get strange things due to the “garbage in, garbage out” principle that may be interesting, but are more often baffling. I’ve only used AI when I felt like I had a good conceptual basis for using it — something beyond just, “I want to make an image.”

KEVIN: I know you’re not working publicly with your GAN, but when I first jumped onto Midjourney I joined one of the “newbies” groups, which is a great learning forum in terms of seeing what other people are doing prompt-wise and results-wise. It quickly became apparent who was using the AI to dig into something serious, and who was just messing around. The usage runs the gamut from people who perhaps have never made an image in their adult life, to others from artistic professions who are wrangling it as a creative tool.

MICHELLE: Yeah, it’s a hodgepodge for sure.

Art is dead… again

KEVIN: But, I think this is true of every medium. There’s nothing to prevent anyone from drawing or painting or writing or taking photos or making videos… as many people do. AI is an incredibly powerful tool, but results will vary according to the user’s experience, application and intent.

MICHELLE: Yes. You have amateurs and professionals in every medium.

KEVIN: I believe skilled creators will be the ones who put AI technology to the best use, artistically. As one of those creators, how do you see your artwork continuing to evolve in the near term, and do you have any forecasts for the long term of where you see all of this going in general?

MICHELLE: Hmm. Well, within the so-called “fine art world,” I’ve perceived a retrenchment into physical, tangible works prior to all of this, and my own practice has shifted that way again: “I wanna make something.” And I think that desire to hang onto something authentically human and authored will persist. I also think there will be a decrease in the initial excitement over AI image generation among the general population, once everyone has spent their free DALL·E credits and is like, “OK, I did that.” (laughs) They’ve made images of cheese robots and ducks going shopping and…

KEVIN: (laughs) Right.

MICHELLE: And then we’ll begin to see AI content creation being used heavily in commercial applications.

KEVIN: We’re seeing that now.

MICHELLE: Yeah, it’s making its way in there faster than I think anybody could have imagined. There are SIGGRAPH papers being presented on 3D modeling from prompts, so that’s an obvious next step.

KEVIN: After a good portion of my career was spent pulling vertices and pushing rigs, I’d love to just talk to the computer and have it do what I want. Goodbye RSI. (laughs)

MICHELLE: (laughs) Yeah, right? So, from a production standpoint, it’s pretty dreamy — but I don’t see the end of the artist on the horizon. I really don’t see that. I think like every other medium that has come along, AI will become another part of the way we do things. But it won’t kill the other things, in the same way that photography didn’t kill painting.

KEVIN: Photography freed painting.

MICHELLE: Exactly.

Long live art

KEVIN: The death of art has been forecast for hundreds if not thousands of years, and we haven’t seen it yet.

MICHELLE: Yeah. Photography didn’t kill painting, motion capture didn’t kill animation… so I think AI will primarily enable increased efficiency and complexity, which is what productions always aspire to.

KEVIN: Yep.

MICHELLE: In the commercial world, there’s always the desire for more, and I think AI will facilitate that. I don’t imagine that artists will be out of work if they embrace the technology.

KEVIN: Well, that’s a great button, so I’m gonna end it right there. Michelle, thanks so much for the interview and the insights.

MICHELLE: My pleasure.

For more on Michelle Robinson’s background and artwork in general, check out Diana Nicholette Jeon’s recent interview with Michelle in FRAMES Magazine.

Kevin is the author of AWN's Reality Bites blog, his musings on the art, technology and business of immersive media (AR, VR, MR) and AI. You can find Kevin's website at www.kevingeiger.com and he can be reached at holler@kevingeiger.com.