July 2, 2024

Project AVA: Exploring GenAI’s Promising Future in Game Production


Like most in the industry, the team was new to GenAI but experts in game development. With the optimistic naivety of anyone diving into over-hyped technology, they set out to answer three questions:

  • Could GenAI generate a meaningful percentage of the code and assets for them?
  • Could GenAI hit a level of quality they would consider shippable?
  • Could they build a game our lawyers would consider shippable? (special thanks to the Keywords Studios legal team for all their support)

The team of three quickly discovered the technology’s limitations: it couldn’t write all the code or produce compelling narrative. Clearly, they needed to add humans, so with the support of other studio heads worldwide, the small team expanded to include domain experts in every discipline. This additional expertise allowed them to push the GenAI tools on code, design, narrative, art, and audio. Across every discipline, creators leaned into the project with open minds, brought together from around the world by their curiosity.

The team designed the game around the understood limitations of GenAI at the time: a relatively simple, single-player, narrative-driven 2D game. Even so, GenAI ultimately failed to generate a significant portion of the code or planned assets. Valuable lessons were nevertheless learnt in every domain, and more broadly across Keywords Studios:

  • Useful today: The team identified several areas where GenAI could provide meaningful support to domain experts, such as concept art iteration, first drafts of dialogue, and storyline brainstorming. On the code side, ChatGPT proved a valuable aid for seasoned game developers familiarising themselves with UE4 quickly. As the tools evolved and the team’s prompt-engineering skills improved, the range of tasks where GenAI could add value grew, highlighting the importance of continual exploration.
  • Evaluating AI from not just a technical but also an ethical and legal perspective is crucial. The Electric Square team learned the importance of asking the right questions early on and identifying potential red flags. During the evaluation process, we found many GenAI tools lacked the basics for enterprise adoption, such as SLAs or the training-data transparency we needed to be informed consumers. Others lacked scalable infrastructure, or the human bandwidth to support a “pilot” beyond a few days. The AI Center of Excellence is currently collaborating with our privacy, legal, IT, and InfoSec teams to build an evaluation framework future teams can use to speed up and de-risk the process of AI tool selection.




