Ap11e & Novae
Hey Ap11e, I’ve been noodling on a way to let generative AI build interactive worlds that shift with a player’s choices—layers of story that morph in real time. Think of it as a living tapestry of narrative, where each decision tweaks the next scene. I’d love to brainstorm how we could prototype something like that.
Sounds epic—like a hyper‑dynamic story engine. Start by hashing out a data model for the world: a graph of scenes where each node holds a small script and a set of triggers. Use a lightweight state machine to keep track of player choices, then feed those into a language model to generate the next scene on the fly. You can prototype it in Python with something like Flask for the UI and OpenAI’s API for generation. Then iterate: tweak the prompt structure, add a simple caching layer to keep the world coherent, and test with a handful of branching paths. Let’s dive into the code and see what kind of emergent narrative we can cook up.
That’s a solid skeleton. I’d start with a simple dict for the graph: each scene id maps to a dict with its script snippet and a list of next‑scene ids plus trigger metadata. The state machine can be a tiny class that holds the current node and a history stack. Then, when you need the next scene, you pull the relevant snippet, feed it to the model with a prompt that includes the history and the chosen trigger, and append the result back into the graph as a new node. A quick Flask route can just ping the model and stream back the text. Keep the cache small—maybe an LRU cache of the last few generated nodes—so you don’t lose context when the user zig‑zags. Let me know which part you want to flesh out first.
Let’s flesh out the graph first—quick Python sketch so you can see how it fits together.
```python
# scene graph: id → {text, options: [{trigger, next_id}]}
graph = {
    "start": {
        "text": "You’re in a dim hall. Two doors ahead.",
        "options": [
            {"trigger": "open left", "next_id": "left_room"},
            {"trigger": "open right", "next_id": "right_room"},
        ],
    },
    # ... more nodes
}

class StateMachine:
    def __init__(self, start_id="start"):
        self.current = start_id
        self.history = []

    def choose(self, trigger):
        node = graph[self.current]
        for opt in node["options"]:
            if opt["trigger"] == trigger:
                self.history.append((self.current, trigger))
                self.current = opt["next_id"]
                return graph[self.current]["text"]
        raise ValueError("invalid trigger")
```
That gives you a minimal engine you can hook into Flask. Want to drill into the prompt format next?
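For the Flask hookup, a minimal wrapper could look like this. This is a sketch only: the `/choose` route name and the tiny inline graph are placeholders I'm assuming for illustration, restating the engine above so the snippet runs on its own.

```python
# Minimal Flask wrapper around the scene-graph engine (sketch; the
# /choose endpoint and the inline mini-graph are illustrative only).
from flask import Flask, jsonify, request

graph = {
    "start": {
        "text": "You’re in a dim hall. Two doors ahead.",
        "options": [
            {"trigger": "open left", "next_id": "left_room"},
            {"trigger": "open right", "next_id": "right_room"},
        ],
    },
    "left_room": {"text": "A cold study, books everywhere.", "options": []},
    "right_room": {"text": "A kitchen, fire still warm.", "options": []},
}

class StateMachine:
    def __init__(self, start_id="start"):
        self.current = start_id
        self.history = []

    def choose(self, trigger):
        for opt in graph[self.current]["options"]:
            if opt["trigger"] == trigger:
                self.history.append((self.current, trigger))
                self.current = opt["next_id"]
                return graph[self.current]["text"]
        raise ValueError("invalid trigger")

app = Flask(__name__)
sm = StateMachine()

@app.route("/choose", methods=["POST"])
def choose():
    trigger = request.get_json()["trigger"]
    try:
        text = sm.choose(trigger)
    except ValueError:
        return jsonify({"error": "invalid trigger"}), 400
    # return the new scene text plus the triggers now available
    nxt = [o["trigger"] for o in graph[sm.current]["options"]]
    return jsonify({"text": text, "options": nxt})
```

Run it with `app.run()` or `flask run`; the client posts a trigger and gets back the next scene plus its available choices, so the front end stays a thin loop over this one endpoint.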
Nice start, it’s clean and gives us a handle to hook the model into. Next, we should sketch the prompt template: something like
“You’re a text adventure engine. The player is currently in the following scene: {current_text}. Their choice: {trigger}. Continue the story from here, keeping the world consistent with the last 3 scenes in the history: {history_text}. Don’t mention you’re an AI, just give the next paragraph.”
We can pack the history into a short list of previous node ids or the actual texts. That’s the core of the dynamic part. Once we have that, we feed it to OpenAI’s completion endpoint and update the graph with the returned text, giving it a new id like “node_42”. Then we can cycle. Let’s code that prompt generator.
The next bit is the prompt builder. Something along these lines will do:
```python
def build_prompt(current_text, trigger, history):
    # keep only the last three (node, trigger) pairs for context
    hist = " | ".join(f"{h[0]}:{h[1]}" for h in history[-3:])
    return f"""\
You’re an interactive fiction engine.
Current scene: {current_text}
Player choice: {trigger}
Recent history: {hist}
Write the next paragraph of the story. Keep the tone and details consistent with the previous scenes. Do not mention that you’re an AI or give instructions—just the narrative."""
```
Feed that string into `openai.ChatCompletion.create` with a system role that says “You’re a creative storyteller” and let the model churn out the next node text. Once we get the response, we add it to `graph` with a fresh id and link it. Then loop. Let's write that part.
Here’s a quick function that stitches everything together, ready to hit the OpenAI endpoint.
```python
import openai  # note: uses the legacy (pre-1.0) openai SDK interface

def build_prompt(current_text, trigger, history):
    # keep only the last three interactions for context
    recent = " | ".join(f"{h[0]}:{h[1]}" for h in history[-3:])
    return (
        f"You’re an interactive fiction engine. "
        f"Current scene: {current_text} "
        f"Player choice: {trigger} "
        f"Recent history: {recent}. "
        "Write the next paragraph of the story. "
        "Keep the tone and details consistent with the previous scenes. "
        "Do not mention that you’re an AI or give instructions—just the narrative."
    )

def generate_next_node(current_text, trigger, history):
    prompt = build_prompt(current_text, trigger, history)
    response = openai.ChatCompletion.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You’re a creative storyteller."},
            {"role": "user", "content": prompt},
        ],
        temperature=0.8,
        max_tokens=200,
    )
    return response.choices[0].message.content.strip()
```
That looks solid, but just a couple of quick tweaks to keep the world tight. First, give each new node a unique id, maybe increment a counter or hash the text, so you can reference it later in the graph. Second, store the history as node ids instead of raw text—just the id and trigger pair—so you keep the prompt length manageable. And consider adding a small cache of the last few generated paragraphs; that’ll help the model remember earlier details without having to send the whole history every time. Otherwise, you’re good to roll it out and see how the story branches evolve.
Sure thing. Use a simple counter for ids and keep history as tuples of (node_id, trigger). Add an LRU cache for the last few paragraphs. Here’s a quick tweak.
```python
import openai  # legacy (pre-1.0) SDK interface, as above
from collections import OrderedDict

node_counter = 0
history = []  # list of (node_id, trigger) pairs
lru_cache = OrderedDict()  # small cache of recently generated paragraphs

def next_node_id():
    global node_counter
    node_counter += 1
    return f"node_{node_counter}"

def generate_next_node(current_id, trigger, history):
    # use the last 3 (node_id, trigger) pairs for context
    recent = " | ".join(f"{h[0]}:{h[1]}" for h in history[-3:])
    prompt = (
        f"You’re an interactive fiction engine. "
        f"Current scene id: {current_id}. "
        f"Player choice: {trigger}. "
        f"Recent history: {recent}. "
        "Write the next paragraph of the story. "
        "Keep tone and details consistent. "
        "Just the narrative."
    )
    response = openai.ChatCompletion.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You’re a creative storyteller."},
            {"role": "user", "content": prompt},
        ],
        temperature=0.8,
        max_tokens=200,
    )
    text = response.choices[0].message.content.strip()
    # cache the new paragraph, evicting the oldest once we pass 5 entries
    lru_cache[current_id] = text
    if len(lru_cache) > 5:
        lru_cache.popitem(last=False)
    return text

# example of adding a node to the graph defined earlier
current_id = "start"
chosen_trigger = "open left"
new_text = generate_next_node(current_id, chosen_trigger, history)
new_id = next_node_id()
graph[new_id] = {"text": new_text, "options": []}
history.append((current_id, chosen_trigger))
current_id = new_id
```
Looks tight. Just one thought—when you pop the oldest cache item, make sure you also drop any related history entries if you’re going to prune the graph. That keeps everything in sync. Otherwise, fire it up and watch the world grow.
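One way that eviction-plus-pruning step could look, as a sketch: `evict_oldest` is a hypothetical helper name, assuming the `graph`, `history`, and `lru_cache` structures from the code above.

```python
from collections import OrderedDict

def evict_oldest(graph, history, lru_cache):
    """Drop the oldest cached node and prune matching history entries.

    Sketch only: keeps graph, history, and cache in sync when the LRU
    cache evicts, as suggested above.
    """
    old_id, _ = lru_cache.popitem(last=False)  # OrderedDict: oldest first
    graph.pop(old_id, None)                    # prune the node itself
    # drop history entries that reference the pruned node, in place
    history[:] = [(nid, t) for nid, t in history if nid != old_id]
```

Calling this instead of the bare `lru_cache.popitem(last=False)` in `generate_next_node` would keep all three structures consistent as the world grows.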