The Recording Academy and IBM are bringing generative AI to the 2024 Grammys

By Marty Swant  •  January 25, 2024  •  5 min read

During the 2024 Grammy Awards, one name aims to strike a different kind of chord: Watson.

On what’s known as “Music’s Biggest Night,” The Recording Academy and IBM will use generative AI to create content for Grammy Awards social channels and give music fans a way to engage with AI-generated content, the companies told Digiday.

The new “AI Stories with IBM watsonx” tool will create text, images, animations and videos based on a variety of sources and real-time news before and during the awards. In addition to the Recording Academy’s editorial team using AI Stories, fans will be able to use an AI widget on the Grammys website to generate text that is then integrated with pre-existing visual assets. The Grammys website will also provide a livestream of the Feb. 4 awards and other day-of coverage.

The plan is to also use AI Stories to create content to share insights about more than 100 Grammy-nominated and -winning artists. Training data came from The Recording Academy’s content and other historical data, artist pages, Wikipedia profiles and a range of other publicly available content such as articles about music and the Grammys. 

“The reality is, we’ve got millions of news articles in our content system,” said Ray Starck, vice president of digital strategy for The Recording Academy. “It’s one thing to do a search and see some results, but it’s also being able to look at that content and maybe pivot off of an opportunity for something that might be a hot topic right now within the industry. Everything moves so fast.”

In an interview with Digiday, Starck said the goal is to experiment with AI while also producing more real-time content during the 2024 Grammys. Even before talking with IBM about using Watson, the Academy was already developing product ideas to leverage AI for content creation while also protecting its intellectual property. The Recording Academy and IBM will also make sure to have humans in the loop to ensure accuracy and make updates based on breaking news during the ceremony.

Starck sees the use of generative AI as additive, enhancing how the editorial team researches topics and creates content. And rather than letting users enter any prompt they like, the Academy developed pre-generated prompts that also help limit the risks related to outputs or IP concerns.
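
As a rough sketch of that guardrail pattern, a site can expose only a fixed menu of vetted prompt templates rather than a free-text box. The template names and wording below are hypothetical illustrations, not the Academy’s actual prompts.

```python
# Hypothetical sketch of a pre-generated prompt menu (not the Academy's actual prompts).
APPROVED_PROMPTS = {
    "career_highlights": "Summarize the Grammy career highlights of {artist}.",
    "fun_fact": "Share one notable fact about {artist}'s Grammy history.",
}

def build_prompt(template_id: str, artist: str) -> str:
    # Only whitelisted templates are accepted, which limits prompt-injection and IP risk.
    if template_id not in APPROVED_PROMPTS:
        raise ValueError(f"unknown template: {template_id!r}")
    return APPROVED_PROMPTS[template_id].format(artist=artist)

print(build_prompt("career_highlights", "Jon Batiste"))
```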

“Our core content strategy was to take a look at all of the great content that’s been inside our content management system, look back at our history, our records [and] our award data,” Starck said.

The experimentation comes as the music industry grapples with generative AI’s potential impact on artists and recording labels. Last year, Universal Music and two other music publishers filed a lawsuit against Anthropic alleging the AI startup violated copyright laws by training its AI models with copyrighted lyrics and distributing them in answers through its Claude chatbot.

The use of GenAI for Grammys content comes less than a year after the Recording Academy announced new rules for AI-generated music. Last summer, Recording Academy CEO Harvey Mason Jr. said artists that use AI — such as for a voice or instrument — could be eligible for nomination if they can prove a human still “contributed creatively in the appropriate categories.”

The AI efforts are the latest evolution in a seven-year-old partnership between the Recording Academy and IBM, which will use the event to promote watsonx’s various offerings. The partnership also marks the first time the Academy has used a large language model to create AI-generated content. Although IBM wouldn’t disclose the terms of the agreement, Noah Syken, IBM’s vp of sports and entertainment, said the partnership includes monetary investments “being made both ways.”

“It’s really about how do we understand the engagement we’re trying to create with a 50-year-old like me, or an 18-year-old,” Syken told Digiday. “What’s the language that’s going to resonate with them? And how do we train the model to understand the context of where the information is being delivered?”

Using IBM’s watsonx platform and Meta’s open-source Llama 2 large language model, AI Stories was developed with a process called retrieval-augmented generation (RAG), which helps guide the AI model toward using data from music-focused sources. IBM also used a process called few-shot learning, which helps train the AI model with only a small amount of data. As part of training the model to provide accurate information, IBM also ensured AI Stories generates correct pronouns for each artist based on how they identify.
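
For readers unfamiliar with the technique, here is a minimal sketch of the RAG pattern the article describes. The helpers are stand-ins: `search_music_sources` and `generate` are hypothetical functions, not IBM watsonx API calls.

```python
# Minimal RAG sketch. All helpers here are hypothetical stand-ins,
# not the actual watsonx or Llama 2 APIs.

def search_music_sources(question: str, k: int = 2) -> list[str]:
    # A real retriever would run vector search over music-focused sources
    # (Academy archives, artist pages, etc.); this toy version returns a fixed corpus.
    corpus = [
        "The Recording Academy presents the Grammy Awards each year.",
        "Artist pages list nominations and wins by category and year.",
    ]
    return corpus[:k]

def generate(prompt: str) -> str:
    # Placeholder for a call to an LLM such as Llama 2 served via watsonx.
    return f"[model output for a {len(prompt)}-character prompt]"

def answer_with_rag(question: str) -> str:
    # Retrieval step: pull passages from trusted, music-specific sources.
    passages = search_music_sources(question)
    # Augmentation step: inject those passages ahead of the question so the
    # model is steered toward grounded, on-topic answers.
    context = "\n".join(passages)
    prompt = (
        "Answer using only the sources below.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )
    return generate(prompt)

print(answer_with_rag("How is a Grammy winner chosen?"))
```

Few-shot learning, mentioned above, would typically add a handful of example question-and-answer pairs to the same prompt rather than retraining the model’s weights.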

The challenge in creating a tool for the Grammys was how to combine Llama 2’s general knowledge base with the music-specific information to make a feature that was creative and free-form but still accurate. Aaron Baughman, an engineer and inventor at IBM, offered a liquid analogy to describe the process of prioritizing data sources with the RAG approach depending on the kinds of content they want an AI model to generate.

“Think of it like multiple buckets filling up with water. We would first try to fill the bucket up with fact-based data,” Baughman told Digiday. “If there’s still room left, we would then pour more information from maybe Wikipedia or something. And if there’s still more tokens left for the context, we would pour in more water.”
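
Baughman’s analogy maps onto a simple priority-ordered context fill: pour in passages from the most fact-based tier first, and move to lower-priority tiers only while token budget remains. The tiers, token counting and budget below are illustrative assumptions, not IBM’s implementation.

```python
# Illustrative sketch of the "buckets of water" idea: fill the context window
# from the most authoritative sources first, then lower-priority ones.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer.
    return len(text.split())

def fill_context(tiers: list[list[str]], budget: int) -> list[str]:
    context, used = [], 0
    for tier in tiers:                 # tiers ordered: fact-based data first
        for passage in tier:
            cost = count_tokens(passage)
            if used + cost > budget:   # the bucket is full; stop pouring
                return context
            context.append(passage)
            used += cost
    return context

tiers = [
    ["Official Grammy record: 4 wins, 10 nominations."],  # fact-based data
    ["Wikipedia: the artist debuted in 2008..."],         # secondary sources
]
print(fill_context(tiers, budget=12))
```

With a 12-token budget, the fact-based passage fills the bucket and the Wikipedia passage is left out, mirroring the ordering Baughman describes.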
