Artist ········· William Keihn (b. 1983, Indiana), Chicago-based
Medium ········· Photography
Year ··········· 2019
-
The process I use to make photographs is akin to an act of meditation. I don't make images through a controlled execution of orders or in pursuit of a specific thing. There are ruminations, but the process is closer to an improvised performance where the body operates on intuition. Of course there are many determinations made, but when it comes down to it, the images are created through a certain surrender of control, a kind of faith.
Our world is saturated with constructed meaning, and the vast majority of images being made seem to further uniformity and efficacy. As a photographer, I question what it is that I can do that advertising and popular media cannot.
I don’t care for images to perform the visual statistics of photojournalism. I’m interested in creating work that distorts clear genre distinctions and images that engage with the world’s contradictions. Photographs offer us denotative elements, but a viewer is free to make their own determinations. In this way, my work is open to individual readings.
The openness is liberating.




Artist ········· Kyung-Mook Kim
Medium ········· Single Channel Video
Year ··········· Edited for E-merge 2019
Language ······· Korean with English subtitles
Duration ······· 9 minutes, 33 seconds
Grace Period Short: Notes for E-merge
Original 2015 film co-directed by Caroline Key
As an openly gay cultural dissident, I was denied a voice by homophobia, institutional violence, and the strong cultural chauvinism prevalent in South Korea. In the struggle to gain my own voice, cinema was the main medium through which I could articulate the social violence that silenced me. Such efforts resulted in films representing my friends and various groups of people disregarded and alienated from society. What started as a search for my own identity has become a journey to discover socially oppressed voices and, hopefully, our freedom.





Artist ········· Doug Rosman
Medium ········· Text, video and images generated using machine learning
Year ··········· 2019
Language ······· English
Duration ······· 9 minutes, 33 seconds
self-contained:
Thank you for your data.
On February 23, 2019, the artist Doug Rosman gave a lecture about machine learning, and then hosted a “workshop”–though the word “laboratory” seems more appropriate–where he invited audience members to have their likenesses recorded and harvested as data. This essay addresses technologies that enable the manipulation of another’s likeness, and the ethical considerations in collecting the data necessary to do that. Specifically, the essay reflects on Rosman’s data-collecting “workshop” as a case study in how artists who work with surveillance technologies, like machine learning, may take adequate responsibility for the processes they engage in. Through this case study, Rosman considers the implications of his work as he begins incorporating others’ likenesses into a dataset.
There is a certain distorted notion of liberation that developed in the age of surveillance: a shift from the wisdom of “dance like nobody’s watching” to the reality of “dance like you don’t care that everyone might be watching.” This has evolved again in the age of artificial intelligence, where now it’s “the robots are always watching you dance and will let us know if they identify moves we told the robots we don’t like.”
While we may find it increasingly difficult to liberate ourselves from these vast technological systems that are now the backdrop to our everyday existence, this essay proposes an entry point into how we may deal with giving up parts of ourselves through data, rather than abdicating the vision of self-determination.
In death, (not) as in life
When the actor Carrie Fisher, who played Princess Leia in the Star Wars franchise, died in late 2016, rumors began to circulate that Disney was negotiating with her estate to secure the actor’s likeness in order to recreate her on screen through CGI in future Star Wars installments. Though the rumors turned out to be false, fans made it clear Disney would face severe backlash if it engineered a posthumous performance for Fisher. People had already reacted poorly to the studio’s decision to digitally recreate a minor character in a recent Star Wars film, one originally portrayed by the late actor Peter Cushing. One might be able to see the logic: why confuse audiences by hiring a new actor to play the same character? The visual effects head, John Knoll, justified his decision to recreate Cushing by claiming, vaguely, “we weren’t doing anything that I think Peter Cushing would have objected to.” People were uncomfortable with the idea of Disney digitally resurrecting a beloved figure like Carrie Fisher (well, her character Princess Leia, anyway), though it is hard to know whether that’s because of the implications of this technology, or because it’s Disney doing it. This controversy over digital resurrection has appeared elsewhere, as in the holographic reproductions of deceased musicians such as Tupac, Michael Jackson, Roy Orbison, Billie Holiday, and Ronnie James Dio.
Whether you’re alive or dead, powerful companies can own the rights to your likeness, and in some cases, release rogue simulacra of you out into the world. The aforementioned holographic musicians and Princess Leia cases present acute examples, as the re-creation and reanimation of these likenesses depends on well-funded production teams stacked with talented graphics and audio engineers. The Star Wars team at Disney, for instance, relies on a trove of high-resolution face and body scans it takes of its actors to use as concept-art references in pre-production. Conveniently, a handful of reference images today is an archive of machine learning data for tomorrow.
Now that we live in the age of machine learning, it seems the goal of all human activity is to be quantified, digitized and placed in enormous databases operated by governments, corporations, and tech companies, to be perpetually parsed and analyzed by automated processes at an ever-increasing scale for control and profit. With machine learning, the time, expertise and budget required to meaningfully recreate someone’s likeness are drastically reduced, to the point where anyone who is somewhat computer savvy can find ways to create digital clones.
Text, music, video, pictures, the human voice: nothing is safe from data-devouring machine learning algorithms. In the last few years, incredibly sophisticated machine learning algorithms called deep neural networks have appeared that are capable of synthesizing various forms of human output. Deepfakes have shown that it isn’t that difficult to use a simple webcam recording of your face in your bedroom to realistically control the movements of someone else’s face. Technologies like Adobe Voco and the open-source Lyrebird have shown it also isn’t difficult to make a machine speak typed words in your voice, saying things you’ve never said aloud, or to train a system to generate new text that replicates the language of its input source. Though none of these technologies are even close to perfect, and all are still quite experimental, they advance every day and are becoming simpler to use. The implication is that given a computer and some time, we all have the power to conjure doppelgängers of ourselves (and others) that look and move like us, sound like us, and even say the things we might say.
A bountiful harvest
On a rainy Saturday in February 2019, I gave a lecture about machine learning, and about my ongoing art project self-contained. After the talk, I asked people if I could harvest their data.
In this lecture, I went into the nuts and bolts of machine learning, and explained how I use it as a tool for experimentation in my art practice. I then explained to the audience that I wanted to create a dataset containing their likenesses in order to expand my work beyond my own likeness. To do this, I needed volunteers to generate some data for me. It would be a quick and painless process. I had my volunteers sign a contract I’d written up, promising them, in exchange for letting me use their likenesses, a unique digital art object to own.

My subjects performed mundane and idiosyncratic movements in front of a camera and motion tracker for a short period of time, one person after the other, in assembly-line fashion. It took about 30 minutes to record everyone. My volunteers and others watching from the sidelines seemed to enjoy the process–the whole event felt like a light-hearted activity. In previous iterations of self-contained, I had only used my own body to generate data, so this process was never exploitative of anyone but myself. But now I was asking others–some of whom were complete strangers–to allow me to record video of them so that I could teach a machine to emulate their likenesses and movements.
For this project, I use what we’ve come to think of as “artificial intelligence” in the form of machine learning algorithms to create certain kinds of images. I use a particular open source neural network architecture called pix2pix that learns to generate images from the associations it forms with any kind of input imagery. I can teach pix2pix to colorize black and white images by showing it thousands of pairs of color images with their corresponding grayscale versions. I can teach pix2pix to convert day-time landscape photos to night-time photos by showing it day and night versions of the same location. I can even teach it to produce “real” images of cats from simple line drawings (your results may vary). Essentially, pix2pix allows me to teach a machine to see the world in a very specific way. The algorithm can learn anything, so it is up to me to decide what kind of imagistic input-output relationship to form…and whether I can get my hands on enough data to adequately “teach” the machine that relationship.
Imagery from the original self-contained, using the artist’s likeness as the dataset
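To make the paired-data idea described above concrete, here is a minimal sketch in Python (assuming the Pillow library) of how one might assemble grayscale-to-color training pairs of the kind Rosman mentions. The folder names and the 256 by 256 size are illustrative assumptions, not part of the artist's actual pipeline; pix2pix-style tools commonly store each pair side by side in a single image, which is the convention followed here.

# pair_maker.py – an illustrative sketch of building grayscale-to-color
# training pairs. Folder names and image size are assumptions for this example.
import os
from PIL import Image

SRC_DIR = "color_photos"      # hypothetical folder of color source images
OUT_DIR = "paired_training"   # where the combined input/target pairs are written
SIZE = (256, 256)

os.makedirs(OUT_DIR, exist_ok=True)

for name in sorted(os.listdir(SRC_DIR)):
    if not name.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    color = Image.open(os.path.join(SRC_DIR, name)).convert("RGB").resize(SIZE)
    gray = color.convert("L").convert("RGB")   # the low-information "input" view

    # Store each pair side by side in one image: the network learns to map
    # the left half (grayscale) to the right half (color).
    pair = Image.new("RGB", (SIZE[0] * 2, SIZE[1]))
    pair.paste(gray, (0, 0))
    pair.paste(color, (SIZE[0], 0))
    pair.save(os.path.join(OUT_DIR, name))

The same recipe applies to any input-output relationship pix2pix can learn: swap the grayscale conversion for a day/night pairing, an edge map, or, as in self-contained, a pattern of dots.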
So, why do I need these subjects to generate data for me? For my purposes, I teach a computer to associate very low-information dot patterns, in the form of stick-figure-esque skeletons made up of disconnected circles, with photographic representations of bodies. By learning to associate these images, the machine learns to de-abstract a suggestion of a body. If I feed the machine a stick figure made of dots, it will attempt to synthesize a fully realized body based on that figure. Machine learning algorithms–like humans–consume large amounts of information and extract the important bits. We are both entities that identify patterns, and learn to categorize and label the world around us in order to make sense of things quickly. But what would happen if I asked the machine to reverse this process? How might a machine construct a photographic representation of a body from fourteen simple circles? And further, what would happen if I intentionally confused the machine, and taught it to associate different bodies with the same dot patterns?
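For illustration, the input side of such a pair might be produced along these lines. This is a hedged sketch assuming Pillow and fourteen normalized keypoints from some pose tracker; the coordinates and file names are made up for the example and are not taken from Rosman's code.

# dot_skeleton.py – an illustrative sketch of rendering fourteen disconnected
# circles from pose keypoints.
from PIL import Image, ImageDraw

SIZE = (256, 256)
RADIUS = 4

# Hypothetical normalized (x, y) joints for one video frame:
# head, shoulders, elbows, wrists, hips, knees, ankles, and a torso point.
keypoints = [
    (0.50, 0.10), (0.40, 0.22), (0.60, 0.22), (0.33, 0.38), (0.67, 0.38),
    (0.30, 0.52), (0.70, 0.52), (0.44, 0.55), (0.56, 0.55), (0.43, 0.72),
    (0.57, 0.72), (0.42, 0.90), (0.58, 0.90), (0.50, 0.35),
]

canvas = Image.new("RGB", SIZE, "black")
draw = ImageDraw.Draw(canvas)
for x, y in keypoints:
    cx, cy = x * SIZE[0], y * SIZE[1]
    # Each joint becomes an isolated circle; no limbs are drawn, so the network
    # must infer the body that connects the dots.
    draw.ellipse([cx - RADIUS, cy - RADIUS, cx + RADIUS, cy + RADIUS], fill="white")
canvas.save("dot_skeleton.png")

Paired with the corresponding video frame, one such image becomes a single training example.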
Once the machine has learned, which in my case required over a hundred hours of “training” on approximately 30,000 image pairs (~60,000 total images), it does its very best to reconstruct bodies from this sparse input information. With each synthesized frame, the body flickers between multiple representations, and forms chimeric hybrids with other bodies and representations as the machine decides who it sees in each new frame of dot patterns. I explore the way contemporary machine learning algorithms can be used to engineer incredibly specific assumptions about how to see the world. As these algorithms become better and cheaper to implement, they are put to more and more use in public spaces. Though “artificial intelligence” is usually used to recognize and identify things in the world, I ask it to go one step further and actually produce imagery, as a way to represent a machine’s process of dealing with the simultaneity present in all humans it observes; of dealing with multiple identities and representations contained in a singular body.
But in order to do this, I need copious amounts of data: thousands of images of a person’s likeness, in the form of frames taken from video recordings. Whatever the work itself is, there are aspects of producing it that venture into treacherous territory, particularly when I invite others to volunteer their likenesses. With the recordings I take of my subjects, I can use these machine learning tools to create puppets out of them. Not that I necessarily want to think of my volunteers as puppets, but once the machine learns this dots-to-bodies association, I am free to feed it any orientation of dots, with the expectation that it will produce images of my subjects’ bodies in positions they themselves had never inhabited during recording. Though nobody would mistake the machine-generated images of my subjects for “real”, there is still a certain amount of control I exercise over their likenesses; likenesses that I now have permission to use.
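That “puppeteering” step amounts to running a trained generator on a dot pattern nobody ever performed. The sketch below, in Python with PyTorch, only illustrates the shape of that operation: the tiny untrained stand-in network, the checkpoint name, and the random dot pattern are all assumptions for the example, not Rosman's model or code.

# puppet_sketch.py – an illustrative sketch of synthesizing a body from a novel
# dot pattern. The stand-in generator is untrained; a real project would load a
# trained pix2pix-style generator instead (checkpoint name is hypothetical).
import random
import torch
import torch.nn as nn

class StandInGenerator(nn.Module):
    """Placeholder image-to-image network with the same interface a trained
    generator would have: a 3-channel image in, a 3-channel image out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

generator = StandInGenerator()
# generator.load_state_dict(torch.load("trained_generator.pt"))  # hypothetical

# A pose the subjects never struck: fourteen white dots at random positions.
dots = torch.zeros(1, 3, 256, 256)
for _ in range(14):
    x, y = random.randint(4, 251), random.randint(4, 251)
    dots[:, :, y - 4:y + 4, x - 4:x + 4] = 1.0

with torch.no_grad():
    synthesized = generator(dots)   # the machine's guess at a body for this pose
print(synthesized.shape)            # torch.Size([1, 3, 256, 256])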
Trust me, I’m an artist
In the moment of being recorded, my subjects seemed to be enjoying themselves, all the while fully aware they had just signed over their rights to me to make use of their likeness. Maybe it was the backdrop–a lively atmosphere that felt more like a social function than a clinical laboratory–that distracted people from considering the nature of what they were doing in giving parts of themselves to me. Maybe they trusted me as an artist to treat their likeness respectfully, even though there was nothing in the contract I wrote that promised any such thing.
But is there potentially some form of empowerment in openly acknowledging that everything they were doing in front of the camera was for the express purpose of recording data? I hadn’t tried to obscure this by framing it as, say, a fun activity for me to make an artwork out of you using cool new tech! I said explicitly, as per the contract they signed, that I wanted to harvest their data. I think of the recent “10-year challenge” meme that circulated on Facebook. Though Facebook claimed not to have started the meme, which some speculated was a ploy to amass well-categorized image data for improving the company’s facial recognition algorithms, the fact remains that this “fun” social network activity may have provided an incredible boon to the development of Facebook’s technology.
This whole event–an artist lecture followed by a “data harvesting workshop”–was operating in the utopian space of artistic production, which may have provided enough comfort for people to disregard the implications of data harvesting. But what differentiated my act of collecting data from the exploitative practices we attribute to governments and tech companies? What does it mean for our data to exist out in the world? Can the process of giving up data be abstracted from who is doing the collecting and why? And I’m not just thinking about the data used to predict our habits and behaviors, but data that can be used to actually construct new, unauthorized digital replicas of ourselves.
To exist in 2019 is to be constantly shedding personally identifiable information through passive, ambient processes. Social media platforms gather all sorts of data like who you’re talking to and what you’re saying, or they have learned what you look like from all the pictures you’ve uploaded that include your face. Your web browsing history grows and grows. Your public transportation card creates an entire map of all the trains and buses you’ve taken. Your face appears in closed-circuit surveillance cameras distributed around your city. In an essay, the author Diana Budds expressed her unease as she traversed New York City one day, thinking about how she appeared in these security cameras:
“What was troubling to me is that I don’t know, off the top of my head, what the privacy policies are for any of these companies, how long footage is stored, who has deals to share data, or how secure any of the data is. Was I being profiled? By leaving my apartment, did my image enter some sort of database? I would have no idea, just like everyone else who ventures outside.”
“I have nothing to hide” only makes sense if hiding is still possible
None of this is new information; we are all very much aware of these processes. So aware, in fact, that we largely ignore them altogether. Whether in public or in private, companies, governments, university researchers and individuals freely make use of our data. Sometimes they ask for permission, sometimes they don’t. We figure that we’re already compromised and there’s nothing we can do to keep ourselves from filling up a database housed in an air-conditioned warehouse in a desert somewhere. Some of us may take active measures to curb the amount of data we shed. We may delete our social media accounts, disable location-accessing services on our phone apps, or put tape over our webcams, but in 2019 there are so many ways in the world to capture our existence that it seems there’s no way to avoid them all.
All of this data is analyzed so that, say, governments or corporations may form an image of me to monitor, or to target for advertising. Not a photographic image, but an image of me in data. As the artist Trevor Paglen wrote,
“Human visual culture has become a special case of vision, an exception to the rule. The overwhelming majority of images are now made by machines for other machines, with humans rarely in the loop.”
The data we shed has, until this point in recent history, been used mostly for recognition: recognizing who we are and what we want. But now it can create images that might resemble us enough to act with some kind of agency. After all, governments and companies aren’t tracking our real selves, per se. They are monitoring our digital doppelgängers.
Any sort of culture, be it visual, aural, textual, now carries a dual purpose. Pieces of culture function as what they are: a book to be read (by a human), a movie to be watched (by a human), a selfie you posted to Instagram (to be commented on by a human (and probably a bot or two)). At the same time, these cultural objects are highly potent artifacts of human culture for pattern-finding algorithms. A speech broadcast on television by former president Obama may function more effectively as a tool for improving a captured version of Obama’s aural and visual likenesses than for disseminating presidential information.
In the case of my self-contained project, the data I ask for is actively generated, in an environment that functions as a laboratory. Generating data, as opposed to shedding it, requires that we actively participate in the production of data. This may come in the form of filling out surveys or participating in scientific experiments, but it also includes the pictures and videos we upload to Instagram or YouTube.
So I wonder, is there a way in which giving up our data isn’t a dystopic erosion of ourselves and our privacy? In a world where this is the norm, how does it feel to act in a transparent environment where you are consciously surrendering parts of yourself, to an entity you understand, for the sake of, say, art? If I, the artist, am transparent about my need for your data, does it seem to matter less what I actually do with that data? Can the act of surrendering our data ever be a positive experience, or does it just feel slightly more comfortable in some cases? I ask these questions because I am concerned about our growing complacency with the way our data is harvested, stored, and used, and I believe I have been complicit in softening attitudes towards data collection.
If you’re a cop, you have to tell me (right?)
The artist Allison Burtch coined the term “cop art” to describe art works that don’t critique surveillance so much as perpetuate it:
“Cop art is a form of surveillance art where you just mimic the oppressor, and I feel like I see it a lot with people who make surveillance art where it’s just like, ‘Hey I stalked someone and called it art.’ and it’s like no, you’re being a cop. You’re literally copying the NSA, you’re not making art.”
For any artist appropriating the tools or methods of surveillance, there is a responsibility we have to those who engage with the work–particularly those whose identities might be implicated in the work–to acknowledge that making art does not free us from the reality that we are engaging in a process of surveillance; a process that subjugates the participant for the benefit of the artist. Even if an artist’s intentions in harvesting data seem more “pure” than the motivations of Facebook or the government, the artist is also guilty of manufacturing an imbalanced power relationship. Not only is the harvester not required to surrender their own data to the harvested; it is likely the harvester gains much more out of this transaction. Collecting data from willing participants, if treated as a process unlike the typical ways our data is collected, seems a dishonest and obfuscating misdirection. I believe that no matter my intentions as the artist, I am subject to the same scrutiny as any other entity that desires your data.
In the end, the data generated by my fifteen subjects during my “workshop” was messy, and not nearly of high enough fidelity to do anything particularly malicious with. Given the logistics of recording not one but fifteen subjects in a new environment, a lot didn’t go right. I didn’t do a good job making sure everyone’s movements stayed within the bounds of the black backdrop I had set up. The lighting conditions in the environment kept changing–and there simply wasn’t enough light in general. There were some issues with the camera autofocus, and I didn’t set the camera shutter speed high enough, so some of my subjects’ movements caused undesirable amounts of motion blur. Despite the messiness of the data, the trained neural network was still able to produce somewhat coherent likenesses. The individuals in the synthesized output are unrecognizable, yet still distinct and differentiable from one another. But if I had captured better data, their representations might have been identifiable.
As an artist interested in these issues, I will likely continue to ask people for their data. After all, giving up our data in some shape or form is something we do all the time. It’s nice if sometimes it gets to be for the sake of art. But I encourage us to resist complacency about data collection, in whatever form it takes. It is easy to feel numb and powerless in the face of these insatiable systems.
Is it possible to liberate ourselves from these invisible systems of control and subjugation, imposed upon us by governments and corporations? I don’t think so. And yet, I believe that as we cope with the powerlessness we may feel living in a world of opaque automated mechanisms, we can counteract this sense of powerlessness through our transparency with each other. Offering your likeness to an artist for an artwork is but a tiny microcosm of what we offer of ourselves all the time. But in such moments, it is important to acknowledge that we are performing a politics of transparency with each other. If I am transparent and honest when I ask you for your data, and acknowledge the power it gives me, I hope that this experience of subjugation can evolve towards something resembling liberation. It may be that this is simply a shift in perspective; a choice to believe that giving up your data for art is better than giving it to Facebook, even though the mechanisms and the power dynamics they create are largely the same. But I believe that as artists, we can model such practices of transparency that extend beyond the specifics of any one artwork, and lead us to better understand what’s at stake when we surrender our data.
That is, if we do our due diligence.
When my subjects signed a contract that gave me the right to their likenesses, I did not adequately address the ramifications of this process. I consider this essay an opportunity to fix that; a chance to atone for a vague contract that relied too much on an unspoken trust between myself and my participants. Below this paragraph you will find an addendum to the contract I included at the top of this essay. This addendum is a step towards a more comprehensive approach to claiming accountability for the data I take. Though this addendum is directed towards those who signed the original contract, I ask that any reader also take the following considerations as a partial inoculation for living in a world that can’t stop asking you to give up parts of yourself.

Artist ········· Daniel Salamanca
Medium ········· Photography Stills, Video
Year ··········· 2019
Language ······· English subtitles
Duration ······· 14 minutes, 53 seconds
Overnight Sunrise
This time-based work, a video, is shot from the arched window on the 17th floor of the MacLean Center (School of the Art Institute of Chicago). It shows, in three different framings, a red and orange sunrise happening around 7:00 AM just on top of the lake, east of Chicago. This image, this optical phenomenon, happens essentially because of the numerous molecules suspended in the air, which scatter most of the blue and violet wavelengths of light. With the excess of pollution from the city’s cars and industry, this beautiful, dramatic effect is intensified. The work is completed with a filter that further intensifies the red in the image.
Although in a general sense the word Liberation might be associated with revolutionary social movements, I wanted to approach this concept from a metaphorical and philosophical perspective. For me, liberation is also our capacity, as human beings, to think, to develop our personalities, and to nurture our mental and physical creativity without restriction. In other words, to reach a minimum sense of freedom.
To that effect, the image of the sunrise acts as a metaphor for the dilemmas surrounding these concepts. While the sunrise can be seen as a bucolic, romanticized image of the sublime, it is also a visual, colored proof of how humans are impacting the planet. That’s the reason why it looks red and ironically similar to a burning apocalyptic landscape. In that sense, the image is also showing us the passage of time, the effects of global warming and a possible future collapse. On the other hand, it is still extremely beautiful, a breathtaking natural wonder, something that we know happens every day, but that we tend to forget and that we never attend to. And if we do attend to it, as in the video, it often seems like a long, boring experience against the constant anxiety of the contemporary world. Well, it is that contradiction, between the possible end of the world and the contemplation of time in its real dimension, which, for me, opens a mental and lucid “black hole” into Liberation and a Renaissance in Revolution. In a philosophical sense, the idea is that once we accept that death is unavoidable, in this case, that of the world as we know it, we can then start to do something about it. Many thinkers compare the exercise of philosophy with a lesson on how to die. It’s like embracing a disruption in order to work with it.
In a few words, the work invites the viewer to think about this abstract dichotomy so that we can start confronting the absurd world we live in from a perspective of rebirth, just as a sunrise acts as a new beginning for every new day.
Note: This idea was conceived after conversations with two other artists, Matias Armendaris and Ed Oh, who, for me, understand that the future is related to other frequencies of thinking. Thanks to them for sharing their energies with me.

Artist ········· Gabriel Chalfin-Piney
Medium ········· Magazine / Artist book
Year ··········· Edited for E-merge 2019
Language ······· English
Reading Time ··· 10 minutes
Greeting Cards From Prison:
Stories and Art by Vincent Wade Robinson
Greeting Cards From Prison is a collection of stories from an interview conducted by Gabriel Chalfin-Piney with artist Vincent Wade Robinson on March 10th, 2019, at L & M Starlight Restaurant in Chicago, Illinois. All of the artworks are templates from Vincent’s greeting card business, which he operated while incarcerated within the Illinois Department of Corrections.
Vincent is active at the Chicago Torture Justice Center, an organization that seeks to address the traumas of police violence and institutionalized racism through access to healing and wellness services, trauma-informed resources, and community connection. The Center is a part of and supports a movement to end all forms of police violence.
Educational, artistic, and vocational programs have been drastically cut by the Illinois Department of Corrections since the late 1990s. Because of this, artistic opportunities for currently incarcerated artists are largely nonexistent. Greeting Cards From Prison is an exploration in blending oral history practice with the art practice of a formerly incarcerated artist.
For this project, I took on the role of designer and compiler. This publication serves as a tool to elevate the voice of the formerly incarcerated artist. My hope is to bring awareness to the lack of vocational and educational programs in Illinois prisons, and to ask people to reconsider how they view incarceration and formerly incarcerated people reentering society. As long as there are prisons, true liberation cannot exist. The great Angela Davis said it best: “We have to talk about liberating minds as well as liberating society.” My intention for this zine is not to liberate those who are incarcerated, but to shift the perceptions and language surrounding incarceration.
Please print and share this zine with others.