AI and the New Era of Creativity

by Paula Parisi

The negotiated settlements of this year’s WGA and SAG-AFTRA strikes address many issues around workplace practices, staffing, and compensation for entertainment industry writers and performers. But one issue in particular captured the public’s attention and became the main character in the story of this historic moment: AI and its potential to replace human creators. The resulting contracts, and the news stories that followed, hailed human triumph over the machines, noting agreed-upon limits on AI’s use in generating content. But that doesn’t mean AI is going away. It’s not a matter of if, but how, artificial intelligence will be integrated into the future of all disciplines of content creation, and even scholarship.

What differentiates AI from data crunching or machine learning is the ability to perform tasks that would normally require human intelligence and decision making. AI is already automating tasks that were formerly painstaking. Procedural generation — computers quickly building out lifelike environments — is fundamental in building video games, where generating a seemingly unlimited number of new worlds is crucial. It is also now a mainstay of film and television production, used for pre-visualizing scenes to better plan effects and camera motion. Machine learning has made possible in real time what until recently took days and weeks to render. “These tools are getting smarter every day,” says Erik Weaver, director of adaptive production at the Entertainment Technology Center (ETC), the industry think-tank housed within the USC School of Cinematic Arts. “The convergence of procedural generation, machine learning and AI has had a huge effect on the creative process.”

Caption: The Entertainment Technology Center (ETC) test shoot.

AI that reduces tedium and makes workflows faster, freeing up humans to be more prolific and creative, is a welcome innovation. There have been research projects at the School of Cinematic Arts focused on harnessing AI to make prep work easier—tasks like creating look books and world building—and much of the entertainment industry is focused on AI for workflows. The ETC is working with industry partners on several projects that would speed up development, like an AI application capable of extracting all narrative, visual and audio features in media content, from emotions to shot types to pace, and the application of blockchain and AI technology to entertainment industry supply chains, such as moving and sharing digital assets across projects and divisions.

Trepidation exists, however, around generative AI that produces images, text and audio to create storytelling content, the core of the School of Cinematic Arts mission and curriculum. Like OpenAI’s popular ChatGPT and Google’s Bard, large language models (LLMs) that generate text and can be used to help write scripts, the visual platforms respond to written prompts, and most can also work with reference images. Images created by generative AI tools like Midjourney, Runway and DALL-E 2 can be wildly imaginative or vividly photoreal. High-quality results are generated instantly, though it sometimes takes several prompts to get the desired result. Single-frame images can be run through animation software like Blender, Adobe After Effects, Wonder Studio and Cinema 4D to create animations.
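As an illustration of how these tools are driven by written prompts, here is a minimal sketch using OpenAI’s Python client to request a single image from DALL-E 2. The prompt text is invented for illustration, and an OPENAI_API_KEY environment variable is assumed; this is a sketch, not a definitive production workflow.

```python
# Minimal sketch: one image from a text prompt via the OpenAI Python client.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-2",  # one of the text-to-image tools named in the article
    prompt="cinematic photograph of a rain-soaked neon alley at night",
    n=1,               # one result; several prompt attempts are often needed
    size="1024x1024",
)

# The API returns a URL to the generated frame, which could then be taken
# into tools like Blender or After Effects for animation.
print(response.data[0].url)
```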

At SCA, much of the past academic year has been dedicated to dispelling fears—in the classrooms, in workshops, and at discussion events. In April, an event titled “AI, Creativity & The Future of Film” drew a standing-room-only audience. The event’s discussions around AI’s use in creating script drafts, storyboards, production design, and animation led to a quelling of fears, the general consensus being that creative jobs are not at stake—yet! After all, decision making about what makes great, or popular, cinematic content is still highly subjective, and that’s not the forte of machines. At another faculty workshop, it was also clear that AI tools struggle with the finer points of dialogue, especially for genres like comedy and romance. However, it’s also clear that AI is living up to its mandate and learning quickly. The ETC’s Erik Weaver likens the shift to “going from a cruising sedan to a Formula One race car. Now, as we apply artificial intelligence to different facets of that pipeline, it’s like boarding a rocket ship.” That acceleration is why the new WGA contract stipulates that “AI-generated material can’t be used to undermine a writer’s credit or separated rights.”

In June, the ETC held its first Synthetic Media Summit, a daylong symposium that convened leading thinkers from the studios, technology companies, and law firms, as well as USC faculty experts, to explore the uses and ethics of artificial intelligence in media and society. Presented in conjunction with SCA and the Annenberg School for Communication & Journalism, it provided context for where AI is and a roadmap for where it’s going. While Silicon Valley stands to profit from the hype, Hollywood is “just trying to figure out what works and what doesn’t,” says ETC AI director Yves Bergquist, who produced the event with Annenberg Associate Professor Mike Ananny. “With one eye on technology, one eye on the creative community, examining ethics and society, there is a tremendous opportunity for SCA to have a voice in the debate about AI.”

Caption: The Entertainment Technology Center (ETC) test shoot.

SCA students, faculty and staff have also been hearing from a wide spectrum of speakers and storytellers about what the future could hold for scholars and creators in a new world of creativity, where AI is integrated into every aspect of development, production and distribution. Two new undergraduate courses joined the schedule this fall. “AI and Creativity” offered an introduction to “the history, ethics and practices of generative AI in the context of the cinematic arts.” Among the assignments was creating a music video using text-to-video tools. The other class, “Contemporary Directing Practice,” asked “What is directing when you’re working with text-to-image and text-to-video tools?” Holly Willis, chair of the Media Arts + Practice division, who has become a leading voice on the use of AI tools, says, “AI is providing artists with a new visual vocabulary and, combined with virtual production, a new workflow.”

In March, Willis was named co-director of USC’s new Center for Generative AI and Society. With $10 million in funding, the Center will convene experts and explore the transformative impact of artificial intelligence on culture, education and media across the 22 USC schools. Working with co-director Bill Swartout, research professor at the USC Viterbi School of Engineering and CTO of USC Viterbi’s Institute for Creative Technologies, Willis will fulfill university president Carol Folt’s mandate to chart a course of vigorous exploration of “the intersection of ethics and the use and evolution of generative AI.” In that capacity Willis — an author, artist and 17-year SCA veteran — will be “raising critical awareness, contextual understanding and basic AI literacy across SCA while also learning from our community what's important to them with regard to intellectual property, notions of human creativity, and more.”

For the SCA community, the important tests will happen in the creating, and in the discussions that the projects elicit. Another collaboration between the School of Cinematic Arts and the Annenberg School for Communication & Journalism has allocated twenty spark grants to fund faculty projects on the creative use of AI. An upcoming exhibit will feature work created in the AI classes, along with discussions about ethics, creativity, and, inevitably, the kinds of jobs graduating students can compete for if they learn to master these tools. For example, “prompt engineer” is a new entertainment job description. “Learning how to dialogue with the AI is part of the process, translating what you want into something the algorithm can figure out,” explains Willis. “Photoreal” is not a recommended prompt, because it’s not a word typically used to describe iconic visuals, and image models are trained by associating words with pictures. “Cinematic” or “photograph of” tend to produce better results. The models can even translate lens traits, like aperture. Midjourney’s “chaos” parameter, a value between 0 and 100, controls the randomness of an image. Imagination, quantified.
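To make prompt engineering concrete, here is a minimal sketch of how such a prompt might be assembled programmatically. The helper function is hypothetical; only the vocabulary (“cinematic,” “photograph of”) and Midjourney’s documented --chaos range (0 to 100) come from the discussion above.

```python
# Hedged illustration of "prompt engineering": assembling a text-to-image
# prompt from a subject, style keywords, and a tool-specific parameter.
# build_prompt is a hypothetical helper; the --chaos flag follows
# Midjourney's documented syntax (0-100, higher = more varied results).

def build_prompt(subject: str, style: str = "cinematic", chaos: int = 0) -> str:
    """Compose a Midjourney-style prompt string."""
    if not 0 <= chaos <= 100:
        raise ValueError("chaos must be between 0 and 100")
    # Words like "cinematic" or "photograph of" tend to outperform
    # "photoreal", which rarely appears in image training captions.
    return f"{style} photograph of {subject} --chaos {chaos}"

print(build_prompt("a lighthouse at dusk, 35mm lens, wide aperture", chaos=25))
# -> cinematic photograph of a lighthouse at dusk, 35mm lens, wide aperture --chaos 25
```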

To Holly Willis, artificially generated images are “metaphors that capture the cultural transition from an emphasis on the photographic to the data image,” while Yves Bergquist is more utilitarian, informed by the ETC’s perch at the intersection of academe and industry. “The technology is only worthwhile insofar as it is empowering creativity,” he says. “Sure, SCA makes sure the students understand what’s powering the models, but the only thing that matters is the product that results.” AI’s influence will be more additive than displacing, he believes, citing “micro-workflows.” “It will automate things like neural radiance fields.” Also known as NeRFs, neural radiance fields reconstruct 3D scenes from 2D images. “That will definitely accelerate virtual production,” Bergquist says. “Other micro-workflows — language translation and that sort of thing — will be optimized. But don’t expect push-a-button-get-a-movie kind of stuff.”
