Do Anne Spalter’s AIs have Eyes?

Exploring Implication in AI Artwork

M○C△
12 min read · Oct 12, 2022

By Maxwell Cohen

This piece terrifies me:

Too Close to the Sun (2019), by Anne Spalter. Pastel Drawing inspired by AI-generated composition created with playform.io. 20 x 20 in.

In a deep, guttural way: dread. Maybe it’s because I had this dream about a plane crash once. Or because I grew up in the tri-state shadow of 9/11. Regardless, Anne Spalter’s Too Close to the Sun scares me. And not necessarily because of what’s in the frame. More because of everything outside it.

Things which I myself imagine: Horrified observers on the ground; frantic cell-phone signals suddenly cutting out; agog jaws watching a coffee shop TV in disbelief. The work itself need only suggest such things. They stem as much from the videos I’ve seen, the dreams I’ve had, the terror I’ve felt during marijuana-infused bouts of insomnia as from the piece’s actual contents.

All artworks are limited to some extent by the physical constraints of their frames. A novel is squeezed by its page count; an actor only has so many lines. Choices must be made, details abandoned, darlings murdered. For an artwork to become transcendent, to make meaning beyond those constraints, it must traffic in implication. It needs to represent more than just what it physically captures.

Implications of this sort are essential for artworks of even staggering size. Take Paolo Veronese’s The Wedding Feast at Cana, the largest single artwork in The Louvre. Wedding Feast contains an immense amount of detail across its 67.29 m² of surface area, but even Veronese couldn’t include everything.

The Wedding Feast at Cana (1563), by Paolo Veronese. Oil on canvas. 267 x 391 in. Property of The Louvre Museum, Paris

For as much as is actually contained within the frame, there is still more that Veronese only hints at. Bodies on balconies vanishing halfway into unseen rooms. A cityscape extending beyond what we can see. Veronese can’t show us the entire world, but he ultimately doesn’t need to. Because if he implies the world well, we’ll invent it ourselves. And he does. And we do.

Subject matter, tone, and perspective all work together to manipulate our minds into forming certain ideas. The artist provides us a spark — initial instructions — and then we go about constructing a wider world based on their blueprints.

Too Close to the Sun, for instance, is basically one giant manipulation tactic. Look how Spalter places her perspective in mid-air, without any supplementary details for us to grab hold of. We are ungrounded within the frame, like we’re falling through the sky. Our fear of heights hijacked for artistic purposes, we’re nudged towards certain fearful implications.

Such creative use of perspective is a common theme in Spalter’s work. She’ll often construct her compositions so that we’re looking up at something. Her subjects — human edifices like bridges, planes, boats, buildings, windmills, cityscapes, factories, lighthouses, spaceships, castles, citadels, fortresses, towers, trains; and sometimes mountains — tower over us, either floating in the sky or stretching up into it. The angle itself communicates a sense of awe and of scale. A feeling of being so small, and confronting things — not just physical — that are so overwhelmingly large.

Ship on an Icy Ocean (2020), by Anne Spalter. Pastel and charcoal on black paper. Based on an AI composition. 30 x 20 in.

See what I mean? Wherever this piece, Ship on an Icy Ocean, takes your brain, remember: It is not an accident.

It’s the same powerful, purposeful manipulation of audiences that J.R.R. Tolkien uses in his fantasy writing. Tolkien is constantly hinting at things just outside his story’s purview: faraway locales and historical epochs and bloody wars he’ll reference by name, which he’ll imbue with mood through specific word choice, but which he won’t explain further. Naturally, the reader fills in the missing details. The world itself becomes more fleshed-out and lived-in, but because we flesh it out ourselves.

This principle extends to almost any visual artwork with a concrete setting (abstract expressionists, you can take this essay off). Because if we’re seeing part of an environment, then we’re automatically invited to imagine whatever’s to the front, back, and sides of it.

Which includes, implicitly, the artist themselves. The expanded environment of an artwork must necessarily include its artist, through whose eyes we see the piece. It’s Veronese’s eyes which first observe the Wedding Feast. It’s Spalter who has seen an exploding plane while skydiving (more on this later). By simply looking at an artwork, we wear the eyes of whomever perceived it.

Jan van Eyck, the 15th-century Flemish painter (credit to @CulturalTutor for teaching me about him), was perhaps the first to make the artist’s perspective explicit. Here is van Eyck’s 1434 work, the Arnolfini Portrait, which depicts Giovanni di Nicolao Arnolfini and his wife (and their little dog too):

Arnolfini Portrait (1434), by Jan van Eyck. Oil on oak panel of 3 vertical boards. Property of National Gallery, London

Now, let’s take a closer look.

Van Eyck cleverly hides himself in the mirror reflection between the Arnolfinis. Using this mirror, van Eyck not only captures the larger world outside the frame proper, but he solidifies himself and his perspective as inescapable aspects of the artwork itself.

We see this principle again in the work of the 20th-century Dutch artist M.C. Escher. Below is Escher’s Hand with Reflecting Sphere:

Hand with Reflecting Sphere (1935), by M.C. Escher. Lithograph. 12.5 x 8.4 in.

Here too, Escher overtly expresses that, yes, he, the artist, is there in every artwork he makes! Because to create art is inherently to imply one’s own existence. In any work of art, the artist — be they writer, choreographer, chef — remains present but unseen, juuuuuuuuuust off the edge of the frame (or page, or stage, or plate).

Which is easy enough to understand when we’re discussing humans and their very human art. But earlier, while mentioning Too Close to the Sun, I said, “It’s Spalter who has seen an exploding plane while skydiving (more on this later).” Well, now it’s later. And things may not actually be so clear-cut.

In the 2020s, artmaking is no longer an exclusively human experience. As with many of Spalter’s works, Too Close to the Sun was an image first crafted by an AI. And if an AI creates such an image, does that mean it’s creating a perspective, too? Is it also hanging unseen juuuuuuuuuust off the edge of the frame? Has it invented itself a pair of eyes through which we see?

And if not, then whose eyes are we looking through, exactly? Are they Spalter’s? Or have we discovered some Twilight Zone (a dimension not only of sight and sound but of mind) between the two?

Not every AI artist explores perspective and implication as often or as eloquently as Anne Spalter does. Spalter has been pioneering digital art practices for something like 40 years now, and her rich, expansive history with AI art is just what we need to tackle such philosophical questions.

So let’s get a-tackling.

In the 1980s, having stumbled upon the computer’s vast potential for artistry — the result of a stint in investment banking — Anne Spalter returned to a graduate program at the Rhode Island School of Design (RISD) thinking, “[The Computer] is actually an incredibly powerful visual thinking tool, and I really do want to incorporate it into my practice.”

But as she told me, “There weren’t any classes at RISD at that time on the computer. So the head of the graduate program said, ‘Why don’t you teach one?’”

In this roundabout way, Spalter founded the first digital fine arts program at RISD, then later at Brown University, too. If we look at the educational materials she’s produced since then — like this syllabus for “Visual Thinking / Visual Computing,” a course she taught at Brown in 2005 alongside computer scientist Andries van Dam — we very quickly understand that perspective has always been a central piece of Spalter’s practice.

Which comes through with remarkable clarity when we actually look at her AI art. Perspective manifests powerfully in these works, no matter their contents.

Drive All Night (2019), by Anne Spalter. Oil painting based on AI algorithm generated composition. 20.5 x 20.5 in.

This is Drive All Night, an “oil painting based on AI algorithm generated composition” (In other words, AI generated an image, and then Spalter physically recreated it with paint. Sometimes she’ll use pencil or charcoal instead). The perspective here is hard to ignore, as are the implications which emerge from it. We are there on this road, looking towards the distant mountains up ahead. We understand viscerally that we’re in a car, foothills off to our left and right, the sights and sounds of that far-off city more real in our minds than in the painting itself. We can almost feel the wind in our hair.

Apparition (2019), by Anne Spalter. Pastel Drawing inspired by AI-generated composition created with playform.io. 20in x 20in.

Like Drive All Night, Apparition again demonstrates Spalter’s upward perspective. We intrinsically sense that this is a plane landing, and that it is landing soon, zooming towards a runway that we seem to be standing just on the edge of. We should probably back up.

Still from Moon Flowers (2022), by Anne Spalter.

In Moon Flowers, part of a recent series called “The Factories,” we’re shrunk down to the level of topsoil. Spalter makes us so small in this painting — or we’re seeing through a pair of eyes so close to the ground — that we focus not on the factory that gives the series its name, but on that which is immediately before us: some rather weedy pink flowers and the giant moon behind them. What kinds of implications does this shrunken perspective suggest to you? Something about bugs and beetles perhaps?

We know from our conversation with Karan4d that an AI’s output is completely dictated by the input of the artist themselves. But a dog commanded to sit is still sitting. And the AI tasked with creating Moon Flowers still invented a perspective and let us see through it.

A perspective which seems to overtly imply a broader world outside the image’s edges. A wilting flower hangs halfway off Moon Flowers’ frame. And so does the moon. Is the AI aware of its own implication-making? Is it doing so on purpose?

Even when Spalter’s pieces are highly abstract — in that kooky, psychedelic way that many older AI works seem to be — there is still a clear implication of wider space. Like there’s more to be seen if we’d only look around and see it.

Despite all the bonkers AI elements in Interstellar Travel with Lucky Space Rabbit, we are still viscerally there in the room with this…uhh…little girl? And her weird backpack rabbit thing. Oh my god, there are rabbits everywhere.

Interstellar Travel with Lucky Space Rabbit (2021), by Anne Spalter.

Rabbits aside, it doesn’t matter how illogical and abstract the locale before us appears, because we’re still existing within a defined space. And whenever that happens, the artwork conjures an expanded environment. Therefore, the artwork must be conjuring the artist’s self as well. If a = b and b = c, then a = c by transitivity.

I encourage you to play around with one of the now-public AI engines like DALL-E 2 if you’re curious to see some different ways this principle manifests in practice. Here is an example of a reasonably specific prompt and the various ways DALL-E 2 processed it into outputs. The prompt was, “Pointillist painting of a dog on its hind-legs looking into a medium-sized hole in a tree trunk wherein six small owls are having a Victorian-era tea party.” Admittedly, I don’t know very much about prompt engineering. Anyways, the four outputs appeared as follows:

DALL-E 2 generation of “Pointillist painting of a dog on its hind-legs looking into a medium-sized hole in a tree trunk wherein six small owls are having a Victorian-era tea party”
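
If you’d rather recreate this little experiment in code than through the web interface, here is a minimal sketch. It assumes OpenAI’s Python SDK roughly as it existed when this piece was written (the pre-1.0 openai package) and an OPENAI_API_KEY environment variable; it says nothing about Spalter’s own workflow, it just asks the API for four outputs from a single prompt.

```python
# A rough sketch, not the author's exact process: request four generations
# of the same prompt from OpenAI's image API, the way the grid above was
# produced in the web interface. Assumes the openai package circa 2022
# (openai<1.0) and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Pointillist painting of a dog on its hind-legs looking into a "
    "medium-sized hole in a tree trunk wherein six small owls are having "
    "a Victorian-era tea party"
)

# n=4 asks for four independent generations from the single prompt.
response = openai.Image.create(prompt=prompt, n=4, size="1024x1024")

# Each generation comes back as a temporary URL you can open or download.
for i, item in enumerate(response["data"], start=1):
    print(f"Output {i}: {item['url']}")
```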

The same input led to four radically different artworks, each of them different in composition, in character, in style and color, and even in perspective. Nevertheless, each one implies a continuation of itself beyond the frame, right?

On some conscious level, we understand that the AI — in this case DALL-E 2 — is simply filling in however many pixels it’s commanded to. The AI isn’t creating any kind of solid “Self” that we could theoretically see. And I should know: I’ve spent the better part of an hour trying to trick DALL-E 2 into showing me mirror reflections of itself.

All I seem to get is weird shit like this:

DALL-E 2 generation of “horrifying photorealistic portrait of a woman with mirrors for eyes”

But that’s okay, because we also know that Jan van Eyck and M.C. Escher aren’t actually creating worlds beyond the boundaries of their canvases either. Those worlds do exist, but in a hypothetical, liminal sense! They exist because we as observers have the unique ability to expand a work beyond itself. It’s a process which benefits both parties. We allow ourselves to be manipulated by an artwork because we understand that the artwork can only reach its transcendent potential if we acquiesce to such a thing. We have to buy in, sure, but also we want to.

So perhaps I’ve been asking the wrong questions all along.

It’s not about whether the AI is or is not implying its own existence. That’s an implication we impose onto the AI ourselves, the same way we impose it onto Veronese and Spalter. Even if an AI could explicitly conceptualize a pair of eyes for itself, it’s not actually capable of independent invention. It would only be amalgamating all the other perspectives — eyes — within its memory. It would require an outside human observer to say, “Oh, yes, that perspective, yes that one there, that one is yours.”

I think we often fail to recognize that, when we affix our own very human manner of mental processing onto an AI — which categorically does not think the way humans do — it’s us doing the implicating. We’re implying a body for the AI. We’re giving it eyes ourselves.

When I look at Spalter’s Lighthouse for Peace —

Lighthouse for Peace (2022), by Anne Spalter. In Collection of DCinvestor

— I know that there’s not actually a pair of AI eyes peering out into this world. Not eyes we could see in a mirror the same way we can see van Eyck.

Nor, when we’re dreaming, is there a pair of phantom eyeballs floating inside our skulls. The sensations exist without being deliberately perceived by a single perceiver. The reactive subconscious mind is a far better allegory for AI than our conscious thought is. AI artworks are sensory explosions far more akin to dreams than to long-gestating creative concoctions.

If Anne Spalter painted a picture sans any AI input, there would absolutely be an implication that her eyes were imparting the perspective. Conscious human mind → human artwork → human eyes. But let’s say Spalter is using an AI, and gives it instructions on what to create, and then it creates an artwork using the 10,000 images loaded into its memory. The resulting perspective isn’t some unique product of the AI, but a stacking of all the disparate eyeballs within it, one atop another, similar to how when I was a kid I would put on, like, 10 pairs of sunglasses to see what would happen. You want to know what happened? Shit was dark! All those perspectives morphed into one. Still, I couldn’t, like, go outside wearing them all at once. In some fundamental sense, that ultimate combined perspective didn’t really exist.

AIs don’t have imaginations like humans do. They would have no concept of Spalter’s or van Eyck’s or Escher’s perceptions manifesting beyond an artwork’s edges. Because an AI is more like a periscope than a set of independent optics. It sees only what its pilot does.

In a hundred years, when the Kaiju attack and humanity is forced to don giant Mech suits, it’s not like the robots will have some independent creative perspective just because they have, I don’t know, cameras for eyes. The Mech’s eyesight will simply become an extended version of my own, just with night vision and laser targeting. Until I unplug myself, at which point, even with its cameras and spotlights and chainsaw arm, the Mech can no longer perceive.

The truth of Spalter’s AI artworks is that it’s always her own eyes perceiving them. By enticing the AI to create, Spalter ports herself and her perspective into the software. The resultant artwork is like some third plane of existence, some fusion of Spalter’s conscious desires with an AI’s uncanny ability to channel subconscious stuff. The AI itself can’t make true creative extrapolations, but it can approximate them based on what it already knows. To us observers, there doesn’t always seem to be much difference.

In truth, yes, the AI is just another tool Spalter uses to reveal and imply her own perspective. But unlike any other artmaking tool in history, the AI can capture not only what’s on Spalter’s mind, but what’s juuuuuuuuuust to the left and right of it. It can manifest implications on Spalter’s behalf.

Which is, itself, an incredible leap forward. And back. And, I suppose, side to side.


M○C△

At the crux of a digitally-native creative space, The Museum of Crypto Art (M○C△) preserves history, elevates artistry, and ignites decentralized culture.