A Microsoft scientist has concocted a way for OpenAI’s GPT-4 model to play Doom, a game at which the AI apparently does not excel.

Adrian de Wynter published a paper and a blog post documenting his attempts to see whether GPT-4 could play the classic PC game. To do so, he tapped the GPT-4 with Vision API, which lets the model take in images and generate a corresponding action.

He then crafted a program to take screenshots from a computer running Doom. The screenshots were fed to GPT-4 with the goal of having it beat the game, and the model's replies were then converted into keyboard inputs used to control the game.
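The loop de Wynter describes can be sketched in a few lines of Python. The action vocabulary and key bindings below are illustrative assumptions, not his actual code:

```python
# Minimal sketch of the loop described above: screenshot -> GPT-4 -> keypress.
# The action names and key bindings are hypothetical, chosen for illustration.

ACTION_KEYS = {
    "MOVE FORWARD": "up",
    "MOVE BACKWARD": "down",
    "TURN LEFT": "left",
    "TURN RIGHT": "right",
    "FIRE": "ctrl",
    "USE": "space",  # open doors, press switches
}

def reply_to_key(reply):
    """Map the model's free-text reply to a single game key, if any."""
    text = reply.upper()
    for action, key in ACTION_KEYS.items():
        if action in text:
            return key
    return None  # unrecognized reply: take no action this frame

# The full loop would capture a Doom screenshot, send it to GPT-4 with
# Vision (e.g. via OpenAI's chat completions API with an image input),
# and press the returned key in the game -- omitted here because it
# needs an API key and a running copy of Doom.
```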

A screenshot from his paper

(Credit: Adrian de Wynter)

For his experiment, de Wynter gave GPT-4 access to a system of prompt generators, which had received a walkthrough on how to win the game and could account for past actions taken. He found that OpenAI's technology can play Doom to a “passable” degree, meaning the AI is smart enough to traverse the map and shoot at enemies.

But when compared to the performance of a human, GPT-4 “kinda fails at the game and my grandma has played it wayyy better than this model,” de Wynter wrote in his blog post. For example, the AI model would “sometimes do really dumb things like getting stuck in corners and just punching the wall like an angsty teenager, or shooting point blank explosive barrels,” he said.

In addition, GPT-4 would often ignore enemies and act as if they didn’t exist as soon as they dropped out of its field of view. “Most of the time the model died because it got stuck in a corner and ended up being shot in the back,” de Wynter added. 

The results suggest GPT-4 struggles with long-term reasoning and “object permanence,” the ability to recognize that an object still exists even when it can’t be seen. The other problem is that OpenAI’s model sometimes made up, or hallucinated, reasons why it needed to take certain actions while playing the game.

To be fair, GPT-4 wasn’t designed to play first-person shooters. Still, de Wynter also concluded that OpenAI’s model is smart enough to “operate doors, follow simple objectives, and identify enemies and fire weapons at them” using only simple depictions of a video game world.

“So, while this is a very interesting exploration around planning and reasoning, and could have applications in automated videogame testing, it is quite obvious that this model is not aware of what it is doing,” he added. “I strongly urge everyone to think about what deployment of these models implies for society and their potential misuse.”
