Good Systems researchers are studying how media representations shape the public’s understanding of the social benefits and ills of artificial intelligence.

We’re looking for undergraduate and graduate students to craft short narratives that push beyond standard representations of sentient killer robots or helpful droids and instead grapple with the potential benefits or harms of AI that are actually emerging today. 

We’re starting to see this kind of work emerge in journalism, television, and high-concept science fiction — from books like Safiya Noble’s Algorithms of Oppression and Annalee Newitz’s Autonomous to TV series like Black Mirror — and we’re curious to see what our writers here at UT can come up with.

 

Our project will unfold in three phases. Here’s how it works:

Phase 1: Pitch a Concept

Use this online form to submit a pitch for a writing project in one of these three categories: 

  • A work of short fiction (a short story, film treatment, or sketch for a longer work)
  • A researched work of creative non-fiction (reportage, memoir, or personal essay)
  • A script or sketch for a theatrical concept

In 500 words or fewer, summarize your idea and make the case for why it’s worth contributing. The entry form asks you to forecast the scope of your project, to specify how it will explore or educate an audience about one or more of AI’s social benefits or drawbacks, and to explain how you plan to execute it.

If you’re looking for inspiration, try here, here, and here, where you’ll find accounts of how AI is already affecting our society and of what its potential benefits and dangers may be.  

We’ll begin reviewing entries on May 15 and will continue until we have enough to work with. The first five promising pitches in each category will each be awarded a $25 Amazon gift card. 

NOTE: Students interested in producing a short video instead should refer to these guidelines.

 

Phase 2: Enter Production

The five most promising projects across all three categories will be selected to receive up to $500 each in development funding. (Awards will be made based on the genre, merit, and ambition of the proposals.) As a condition of these awards, winners agree to submit a rough draft of their projects by July 15 and a final version by August 15, 2020. 

 

Phase 3: Exhibit Your Work 

Our Good Systems subproject Bad AI and Beyond will sponsor a symposium in Fall 2020, bringing together academic speakers, industry insiders, and creatives to discuss how media representations shape popular perceptions of AI. Works awarded funding through this process will be featured at the event.

 

We look forward to your submissions! 

 

More about Good Systems

We use artificial intelligence technologies to entertain ourselves, communicate, get places faster, make predictions, swipe left or right, protect our homes, drive more safely, and solve complex problems quickly and easily. AI is changing the way we do everything because it’s everywhere — from dating apps to the most advanced military weapons systems.  

So how can we ensure that AI is beneficial to humanity, not detrimental? How can we develop technology that makes life better for all of us, not just some?  

Good Systems is the third and final research grand challenge at The University of Texas at Austin. Grand challenges are “moonshot goals,” a term that recalls President John F. Kennedy’s 1961 commitment to land humans on the moon. These challenges are audacious, risky, and may fail, but they tackle issues too important and urgent to ignore. 

Our goal is to find a way to evaluate and design ethical AI technologies that protect and improve our world — then to share that process. To do this, our team brings students and researchers together from computer science, philosophy, engineering, information science, communications and journalism, business, and the arts. Each of these perspectives is needed as we think about what it means to be “good” and to envision a way to develop AI-based technologies that consider human values at their core.