Random - Doomlaser

Archive for the ‘Random’ Category

Doomlaser Interview on AI, Scaling Limits, and Governance (from 2015)

Tuesday, December 23rd, 2025

I dug up a one-hour public radio interview I did in 2015 for Dave Monk’s show on WEFT 90.1 FM, where I describe AI as “high-level algebra” used to make sense of abstract inputs. That’s basically what modern LLMs are: tokenized text turned into vectors, pushed through huge stacks of matrix multiplications, producing a next-token distribution.
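If that framing sounds abstract, here’s a toy sketch of it in Python with NumPy. The shapes are tiny, the weights are random, and the single matrix stands in for a real model’s many layers; this illustrates the algebra-only view, it is not a working model.

    import numpy as np

    vocab_size, d_model = 50, 8
    rng = np.random.default_rng(0)

    embed = rng.normal(size=(vocab_size, d_model))    # token id -> vector
    layer = rng.normal(size=(d_model, d_model))       # stand-in for many stacked layers
    unembed = rng.normal(size=(d_model, vocab_size))  # vector -> score per vocab entry

    tokens = np.array([3, 17, 42])                    # "tokenized text"
    h = embed[tokens] @ layer                         # matrix multiplications
    logits = h[-1] @ unembed                          # score every candidate next token
    probs = np.exp(logits) / np.exp(logits).sum()     # next-token distribution (softmax)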

What makes the interview interesting in hindsight isn’t prediction; it’s that certain constraints were already visible to anyone treating intelligence as math plus incentives, rather than as something mystical.

Back then, I didn’t have the words “Transformer” or “latent space”, but the core intuition was the same: intelligence-as-inference, not magic.

The interview was recorded months before OpenAI was founded, during a period when “AI” mostly meant expert systems, narrow ML, or sci-fi abstractions. We ended up talking about things that feel oddly current now:

• Architectural bottlenecks and scaling limits
• Why “reasoning” might not simply emerge from brute force
• Automation, labor displacement, and second-order effects
• Speculative governance models for advanced AI
• The idea of for-profit engines constrained or controlled by nonprofit oversight

At the time, these were armchair frameworks—attempts to reason from first principles about where computation, incentives, and institutions tend to drift when scaled.

Listening back now, I’m struck by how much of the underlying structure was visible at a time when many second-order effects were still difficult to imagine.

Why resurface this now?

I revisited the recording after recent public comments from Sergey Brin about Google’s under-investment in AI research and architectural bets. There’s a familiar pattern here: long stretches of incremental progress, followed by abrupt nonlinear jumps once the right abstraction and enough compute collide.

Transformers didn’t make intelligence appear out of nowhere. They provided a usable way to stack inference at scale.

In that sense, LLMs feel less like a revolution and more like a delayed convergence—linear algebra finally getting enough data, enough depth, and enough money to show its teeth.

[Image: OpenAI corporate structure diagram]

Governance, incentives, and structure

One part of the interview that surprised me is how much time we spent on corporate governance and institutional design. In particular, we discussed a model where:

• A profit-seeking AI engine exists for efficiency and capital formation
• But is structurally constrained by a nonprofit or mission-locked entity
• To limit runaway incentive capture

That general shape later materialized, imperfectly and contentiously, in OpenAI’s original structure.

I don’t claim any special foresight; many people were circling similar ideas at the time. The discussion is more interesting as an example of how certain governance configurations are almost forced once the underlying economics become obvious.

The full conversation is up on YouTube. It’s long, covers a lot of ground, and is very much a product of its era, but I think it holds up as an archival artifact of how some of these ideas were already forming before the current wave.

I’m curious which parts people think aged well, and which feel naive or clearly wrong in a post-LLM world. That delta is often more interesting than the hits.

Here is a full text transcript of that 2015 conversation on public radio, if you’d like to read it directly.


Mark

OpenAI API Generated Video Game Dialogue With Real-Time Text-to-Speech

Monday, February 6th, 2023

Watching the recent progress in AI has been so fascinating that I wondered if it would be possible to use the OpenAI API to generate dialogue for a video game.

I’ve been working on an FPS with procedurally generated levels, so AI-generated dialogue seemed like a logical fit. After some hacking, I got it to work!



NPC dialogue lines are generated on the fly, different every time, facilitated by a custom prompt for each line in our Unity dialogue editor.

Instead of writing the dialogue directly, you tell the AI what kind of possibility space to write in, and give it some background on the character, the setting, and the particulars of what’s going on. It’s sort of like prepping a kid for an improvisational play.




You can additionally tweak the AI temperature variable, which controls the randomness of the generated output.

How it’s done

I’m using OpenAI’s text-davinci-003 model to generate the results. Each prompt is sent over the internet to OpenAI’s servers, which return a response from the model that attempts to follow the prompt’s instructions. Generating a response from the pre-trained model like this is called AI inference. Some results are better than others.
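The game itself makes these calls from Unity, but the shape of the request is easy to sketch in Python with the openai library of that era (the pre-1.0 Completion API). The generate_line helper and the placeholder key are mine, for illustration:

    import openai

    openai.api_key = "sk-..."  # your API key

    def generate_line(prompt: str, temperature: float = 0.7) -> str:
        """Request one line of NPC dialogue from text-davinci-003."""
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt,
            max_tokens=100,           # token budget per dialogue line
            temperature=temperature,  # higher = more random output
        )
        return response.choices[0].text.strip()

The temperature argument here is the same randomness knob mentioned above.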




For instance, here’s the prompt I’m currently using for our character Big Brain’s first line of dialog:


"Please provide a dialog line for a satirical science fiction game. No formatting. You are Big Brain, ruler and overseer of this domain. You talk like a snotty commander. The player is here in your room of the Capital City to see you at your request, from a long ways away.. You are wondering what he is thinking. The player is a small kill drone and is completely your underling. You feign concern for his welfare, but he is here to do your bidding. You are not to ask him what you can do for him, but instead enlighten him on what he must do for you. You are going to send him on a mission. Please continue in one or two very short sentences: "


And in response, I get something back like:



"Welcome, drone. I have a task for you. Listen carefully."


Or, on a different run:


"Welcome, kill drone. I have an assignment for you. Look no further for purpose or direction, for I have it all stored away in my extraordinary brain."


At first, this may seem like a lot of prompt for such a curt reply, but it pays to be specific when instructing the AI what to print back at you. If you don’t explicitly tell it not to ask the player something, the AI will happily do just that, which doesn’t always make sense in the one-way conversations our specific game uses.


There’s a bit of latency in the response from OpenAI, so the game asynchronously fetches all its lines at the beginning of each scene. For characters who speak audibly, like Big Brain, I send each returned dialogue string to Google’s Cloud Text-to-Speech as needed, then apply some live processing to the voice audio I get back.
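For the TTS step, the request looks roughly like this with Google’s google-cloud-texttospeech Python library. The voice and encoding choices below are placeholders, not the settings the game actually uses:

    from google.cloud import texttospeech

    def speak(line: str) -> bytes:
        """Turn one returned dialogue string into raw audio bytes."""
        client = texttospeech.TextToSpeechClient()
        response = client.synthesize_speech(
            input=texttospeech.SynthesisInput(text=line),
            voice=texttospeech.VoiceSelectionParams(
                language_code="en-US",
                ssml_gender=texttospeech.SsmlVoiceGender.MALE,
            ),
            audio_config=texttospeech.AudioConfig(
                audio_encoding=texttospeech.AudioEncoding.LINEAR16,
            ),
        )
        return response.audio_content  # raw PCM, ready for live processing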

It could get even cooler if I fed some state information from the game into the prompt, so the AI could surprise you with observations gleaned from your interaction with its systems. It’s still early days with this kind of AI-generated dialogue stuff, but I thought it was a cool milestone.

The cost

When sending prompts to the AI, you specify how many tokens you want back. A token accounts for a bit less than a word on average. For these lines of dialog for Big Brain, I’m asking for 100 tokens per line. OpenAI is currently charging 2 cents per thousand tokens requested of the text-davinci-003 model.

Here’s what that looked like in dollars to develop and test this scene:

So it cost about $3.50 to develop this demo: too expensive to deploy in a live game without a token limit, or perhaps without caching the most common AI responses.
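For a sanity check on that figure, here’s the arithmetic at the quoted rate. The per-line cost follows directly from the numbers above; the total line count is my own rough inference:

    PRICE_PER_1K_TOKENS = 0.02   # USD, text-davinci-003 rate quoted above
    TOKENS_PER_LINE = 100        # tokens requested per dialogue line

    cost_per_line = TOKENS_PER_LINE / 1000 * PRICE_PER_1K_TOKENS
    print(f"${cost_per_line:.4f} per line")  # $0.0020 per line
    # At $0.002 per requested line, ~$3.50 of spend corresponds to roughly
    # 1,750 line requests over the course of developing and testing the scene.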

Quality of response

My prompts are pretty naive, with simple references. My guess is that refining the prompts to use weirder and more specific references could improve the results I get back.

This process is something I refer to as ‘AI Whispering’. I’m a novice at it, but I believe the potential is wide open, especially as AI models continue to get better.

Future directions

Procedurally crafting prompts from within the game is an obvious next step: provide each prompt with more background information gleaned from various game-state variables, as in the sketch below.
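Here’s a minimal sketch of what that could look like. The GameState fields and the template text are invented for illustration; only the overall prompt style follows the Big Brain example above:

    from dataclasses import dataclass

    @dataclass
    class GameState:
        location: str
        player_health: int
        last_enemy_destroyed: str

    def build_prompt(state: GameState) -> str:
        """Fold live game state into the dialogue prompt."""
        return (
            "Please provide a dialog line for a satirical science fiction game. "
            "No formatting. You are Big Brain, ruler and overseer of this domain. "
            f"The player is a small kill drone visiting you in {state.location}, "
            f"with {state.player_health}% of its hull remaining. "
            f"It just destroyed {state.last_enemy_destroyed}; comment on that. "
            "Please continue in one or two very short sentences: "
        )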


Another obvious step would be to allow the player to talk back. Experiments like AI Dungeon have shown what wild adventures these GPT-3 based interactive games can go on. But the flexibility of direct interaction with GPT-3 means it can easily veer off into situations that traditional game logic cannot currently cope with.

One solution would be to let the AI control the game through a specialized set of messages it learns in the prompt. You could let the AI do stage direction for your scene, or control the movements and actions of characters; a sketch of the parsing side follows below.
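On the game side, that could be as simple as scanning the model’s reply for a tiny command grammar taught in the prompt. The command names here are invented for illustration:

    import re

    # Commands the prompt teaches the model to emit, e.g. [MOVE:throne]
    COMMAND_PATTERN = re.compile(r"\[(MOVE|FACE|EMOTE):([a-z_]+)\]")

    def parse_stage_directions(ai_output: str):
        """Split a reply into spoken text plus (command, argument) pairs."""
        commands = COMMAND_PATTERN.findall(ai_output)
        spoken = " ".join(COMMAND_PATTERN.sub(" ", ai_output).split())
        return spoken, commands

    # parse_stage_directions("[MOVE:throne] Welcome, drone. [EMOTE:sneer] Listen carefully.")
    # -> ("Welcome, drone. Listen carefully.",
    #     [("MOVE", "throne"), ("EMOTE", "sneer")])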

It might be wise for AAA studios to create their own Large Language Models so they can conjure real-time procedural dialog without outsourcing to an external API like OpenAI’s.

Another interesting development is the announcement of LAION-AI’s Open Assistant, an open-source effort that aims to democratize direct access to a ChatGPT-style LLM.

It’s still early days for this kind of technique, and I’m excited to see what unfolds as it becomes more commonplace and better explored.

If you enjoyed this post, feel free to say hi on Twitter or Mastodon and ask any questions you might have.

Video Games as High Art? Roger Ebert & The Cultural Abyss

Wednesday, October 16th, 2019

I ran across this brief talk today, one I gave at the Game Developers Conference years ago, about video games, Roger Ebert, high art, and the cultural ghetto. I think it’s held up pretty well over time.

The quick synopsis of my thesis is, simply, that art is something that people do, and the medium is irrelevant.

With video games, “the artist” is designing a possibility space for the audience—what can happen, and what the consequences of the player’s decisions are.

A video game doesn’t need to have any goal or explicit win-state. We’ve seen that with the rise of walking simulators, where playing is no different from experiencing a piece of architecture, a garden, or an art exhibit.

Fun fact: I’m the person who prodded Roger Ebert into writing his infamous essay condemning the artistic merit of video games, which he later retracted after a rousing bit of internet outrage from all corners. But when I ran into him and Chaz at Ebertfest in 2010 and reminded him about our exchanges, he shook my hand and was all smiles.

The Museum of Modern Art has had an interactive wing for decades, but now it holds actual video games in its permanent collection, so I’d say the question is pretty much moot.

MoMA’s inaugural selections, from Katamari to Dwarf Fortress, express a good range of what the video game medium has been capable of producing over the course of its first few formative decades.

Check out our video games on itch.io — they’re free

Working On A Game With A Goblin

Wednesday, May 10th, 2017



I’m working on a new game about goblins and a lot of other stuff. It’s played from a third-person 3D perspective, rendered in pixel art.



Aesthetic and mechanical inspirations include:

And you can find out more on the Doomlaser…



Game Videos

Sunday, January 1st, 2012

I thought I’d start the new year off with a few videos that feature some of our recent game work.

First up is a video of someone playing Braindead.

Next up is Hot Throttle.

This is a funny story. Back in May, I was out in Williamsburg, Brooklyn to show Hot Throttle at Babycastles. A German/French TV crew from Europe’s Arte TV happened to be there, and they produced this segment. I sound much classier talking about Cactus and man-cars in French.

Finally, here’s a segment from PBS on the cultural relevance of videogames in the modern age. About halfway through, our very own Hot Throttle makes an appearance.

That’s it!

Hot Throttle

Thursday, February 3rd, 2011

Hot Throttle Title Screen

Hot Throttle is a new game from Cactus and me, created for Adult Swim Games. It’s about a gang of men who like to race while pretending they are cars.


PLAY IT HERE.



If you would prefer to download it and run it on your PC, you can grab the full uncensored Hot Throttle, which I showed at Babycastles in New York in 2011, here.