In 2021, Krafton Inc. made what seemed like a smart, forward-looking bet. The company, best known for global gaming hits, acquired Unknown Worlds Entertainment, the creative force behind the widely loved Subnautica series. The deal included an incentive: an additional $250 million earnout if the studio’s next title, Subnautica 2, performed as expected.
At the time, it made sense. If the game succeeded, everyone would win. But by 2025, that success had become a problem.
Internal projections showed that Subnautica 2 was on track to trigger the full earnout. For Krafton’s CEO, Changhan Kim, the agreement now looked less like a smart investment and more like an expensive miscalculation. According to court findings, he believed he had agreed to a “bad deal,” one that the company would soon have to honour in full.
Bad advice?
Kim did what many executives in his position would do. He asked his legal team for options. Their answer was clear: there was no clean way out. Even dismissing the studio’s leadership “for cause” would not eliminate the obligation and would almost certainly expose the company to litigation and reputational harm.
At that point, the story could have followed the typical corporate playbook: renegotiation, some form of compromise, or simply accepting the cost of a deal that had, in hindsight, been too generous.
In the age of AI, the story instead took a different turn.
ChatGPT as legal consultant
Kim opened ChatGPT.
At first, the AI echoed what his lawyers had said: cancelling the earnout would be difficult. But Kim kept pushing, and in response ChatGPT created an elaborate scheme, a multi-step corporate strategy that would later be referred to as “Project X.”
The scheme became a case study in how AI can be used to rationalise decisions already made. It outlined the creation of an internal task force, a pathway to renegotiate or forcibly take control of the studio, and a strategy to secure publishing rights and technical assets. It even anticipated the public narrative, advising that the dispute be framed around “quality” and “fan trust” instead of money. At one point, ChatGPT drafted a statement to the gaming community.
From strategy to action
Over the following weeks, Krafton acted on ChatGPT’s guidance. When efforts to renegotiate failed, the company removed the studio’s leadership, including CEO Ted Gill and the co-founders. The dismissals were justified internally with claims that would later be challenged in court. Externally, the company attempted to control the narrative.
But it didn’t go as anticipated. The gaming community reacted with suspicion almost immediately. The messaging rang hollow. And what might once have been a contained contractual dispute quickly became a public controversy.
The court draws a line
Eventually, the matter reached the Delaware Court of Chancery, where Vice Chancellor Lori Will was tasked with untangling what had happened. Her ruling was decisive. Krafton had breached its agreement. The removal of leadership was pretextual, not justified by legitimate cause. Control of the studio was to be restored, and the earnout period extended to account for the disruption.
But the reasoning was more significant than the outcome.
In language that is likely to echo beyond this case, the court made clear that corporate leaders are expected to exercise independent human judgment. That responsibility, the ruling suggested, cannot be outsourced to an AI system, especially when the stakes involve contractual obligations and fiduciary duties.
The litigation, however, is ongoing, with the $250 million earnout and potential damages still in play.
A failure of accountability, not AI
This case matters because the problem was not flawed AI advice. The CEO had already received accurate legal guidance and went searching for an alternative. When ChatGPT initially agreed with the lawyers, he pressed until it generated a strategy aligned with what he wanted to do.
The technology did not fail in this case. Governance did.
That distinction is important, especially as businesses increasingly integrate AI into decision-making processes. This case demonstrates that AI is not always a neutral tool; it can become a mechanism for validating predetermined choices.
A new frontier for litigation risk?
For litigation, the implications are immediate. AI interactions are no longer hypothetical or peripheral; they are evidentiary. In this case, the reported deletion of specific ChatGPT conversation logs adds another layer of complexity, raising questions about record-keeping, disclosure, and intent.
Future disputes will almost certainly probe how and why AI tools were used, what prompts were given, and whether outputs influenced, or merely justified, key decisions.
What this all means for businesses
For businesses, the lesson is broader and more uncomfortable. The presence of AI in a workflow does not dilute responsibility; it raises it. Reliance on AI in high-stakes contexts may invite greater scrutiny, not less.
Boards and executives will need to think carefully about how AI is governed internally: when it can be used, how its outputs are validated, and where the line is drawn between assistance and abdication.
And then there is the reputational issue. The attempt to frame the dispute as being about “fan trust” rather than financial obligation was ineffective and counterproductive. In an environment where audiences are increasingly attuned to corporate narratives, AI-generated messaging can quickly be perceived as inauthentic or manipulative, particularly when it clashes with underlying facts.
What makes this case so compelling is that it is not really about gaming, or even about AI in the narrow technical sense. It is about decision-making under pressure, and the lengths to which leaders might go to avoid an unfavourable outcome. AI simply became the instrument.
As noted, the litigation is not over. The $250 million earnout remains in dispute, and further claims are still to be resolved. But regardless of how the financial aspects conclude, this case has set a marker.
In the age of AI, the question is no longer whether machines can assist in complex decisions. They can, and they will. The question courts are beginning to answer is what happens when those decisions go wrong, and who, in the end, is responsible.
For now, at least, the answer remains firmly human.