# AI Context Optimization Blueprint (v2.5)
This blueprint outlines architectural improvements for how AI context is managed during the writing process. The goal is to provide the AI (Claude/Gemini) with better, highly-targeted context upfront, which will dramatically improve first-draft quality and reduce the reliance on expensive, time-consuming quality checks and rewrites (currently up to 5 attempts).
## 0. Model Selection & Review (New Step)
**Current Process:**
Model selection logic exists in `ai/setup.py` (which determines optimal models based on API queries and falls back to defaults like `gemini-2.0-flash`), and the models are instantiated in `ai/models.py`. The active selection is cached in `data/model_cache.json` and viewed via `templates/system_status.html`.
**Actionable Review Steps:** Every time a change is made to this blueprint or related files, complete the following steps to review the models, update the version, and ensure changes are saved properly:
- **Check the System Status UI:** Navigate to `/system/status` in the web application. This UI displays the "AI Model Selection" and "All Models Ranked".
- **Verify Cache (`data/model_cache.json`):** Check this file to see the currently cached models for the roles (`logic`, `writer`, `artist`).
- **Review Selection Logic (`ai/setup.py`):** Examine `select_best_models()` to understand the criteria and prompt used for model selection (e.g., favoring `gemini-2.x` over `1.5`, using Flash for speed and Pro for complex reasoning).
- **Force Refresh:** Use the "Refresh & Optimize" button in the System Status UI or call `ai.init_models(force=True)` to force a re-evaluation of available models from the Google API and update the cache.
- **Update Version & Commit:** Ensure the `ai_blueprint.md` version is bumped and a git commit is made reflecting the changes.
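As a concrete illustration, here is a minimal sketch of reading and validating the cached selection. The flat role-to-model schema of `data/model_cache.json` is an assumption; the real file may nest models under other keys.

```python
import json
from pathlib import Path

ROLES = ("logic", "writer", "artist")

def read_model_cache(path="data/model_cache.json"):
    """Return the cached role -> model mapping, or None when the cache
    is missing or incomplete. A None result signals that a refresh is
    needed, e.g. via ai.init_models(force=True)."""
    cache_file = Path(path)
    if not cache_file.exists():
        return None
    cache = json.loads(cache_file.read_text())
    if not all(role in cache for role in ROLES):
        return None
    return {role: cache[role] for role in ROLES}
```

A helper like this keeps the "is the cache usable?" decision in one place for both the status UI and the refresh route.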
## 1. Context Trimming & Relevance Filtering (The "Less is More" Approach)
**Current Problem:**
`story/writer.py` injects the entire list of characters (`chars_for_writer`) into the prompt for every chapter. As the book grows, this wastes tokens, dilutes the AI's attention, and causes hallucinations where random characters appear in scenes they don't belong in.
**Solution:**
- **Dynamic Character Injection:** ✅ Only inject characters who are explicitly mentioned in the chapter's `scene_beats`, plus the POV character. (Implemented v1.5.0)
- **RAG for Lore/Locations:** ⏳ Instead of forcing all world-building into a static style block, implement a lightweight retrieval system (or explicit tagging in beats) that pulls in descriptions of only the locations and specific items relevant to the current chapter. (Planned v2.5 — see Section 8)
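The v1.5.0 filter can be pictured as a simple name scan over the beats. This is a sketch: the character dict shape and the substring matching rule are assumptions, not the actual `story/writer.py` code.

```python
def filter_characters_for_chapter(all_characters, scene_beats, pov_name):
    """Keep only the POV character plus characters whose names appear
    in this chapter's scene beats; everyone else stays out of the
    prompt, saving tokens and avoiding stray walk-ons."""
    beats_text = " ".join(scene_beats).lower()
    selected = []
    for char in all_characters:
        name = char["name"]
        if name == pov_name or name.lower() in beats_text:
            selected.append(char)
    return selected
```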
## 2. Structured "Story So Far" (State Management)
**Current Problem:**
`prev_sum` is likely a growing narrative blob. `prev_content` is truncated blindly to 2000 tokens, which might chop off the actual ending of the previous chapter (the most important part for continuity).
**Solution:**
- **Smart Truncation:** ✅ Instead of truncating `prev_content` blindly, take the last 1000 tokens of the previous chapter, ensuring the immediate hand-off (where characters are standing, what they just said) is perfectly preserved. (Implemented v1.5.0 via `utils.truncate_to_tokens` tail logic)
- **Thread Tracking:** ⏳ Refactor the "Story So Far" into structured data: (Planned v2.5 — see Section 9)
  - **Active Plot Threads:** What are the characters currently trying to achieve?
  - **Immediate Preceding Action:** A concise 3-sentence summary of exactly how the last chapter ended physically and emotionally.
  - **Resolved Threads:** Keep hidden from the prompt to save tokens unless relevant.
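The tail-preserving truncation can be sketched as follows. Whitespace tokenisation here is a stand-in for whatever tokenizer `utils.truncate_to_tokens` actually uses; only the "keep the end, not the start" logic is the point.

```python
def truncate_to_tokens_tail(text, max_tokens=1000):
    """Keep the LAST max_tokens tokens so the chapter's ending (the
    immediate hand-off) survives, instead of a blind head slice."""
    tokens = text.split()
    if len(tokens) <= max_tokens:
        return text
    return " ".join(tokens[-max_tokens:])
```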
## 3. Pre-Flight Scene Expansion (Fixing it before writing)
**Current Problem:**
The system relies heavily on `evaluate_chapter_quality` to catch bad pacing, missing beats, or "tell not show" errors. This causes loops of rewriting.
**Solution:**
- **Beat Expansion Step:** ✅ Before sending the prompt to the `model_writer`, use an inexpensive, fast model to expand the `scene_beats` into a "Director's Treatment." This treatment explicitly outlines the sensory details, emotional shifts, and entry/exit staging for the chapter. (Implemented v2.0 — `expand_beats_to_treatment` in `story/writer.py`)
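The shape of that pre-flight step, sketched as a prompt builder. The wording is illustrative; the actual prompt lives inside `expand_beats_to_treatment` and may differ.

```python
def build_treatment_prompt(scene_beats, pov_name):
    """Assemble the expansion prompt sent to a fast, inexpensive model
    before the main draft pass; the writer model then receives the
    resulting treatment instead of the bare beats."""
    beats = "\n".join(f"- {beat}" for beat in scene_beats)
    return (
        f"Expand these scene beats into a Director's Treatment for a "
        f"chapter told from {pov_name}'s POV. For each beat, describe "
        f"the sensory details, the emotional shift, and the entry/exit "
        f"staging:\n{beats}"
    )
```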
## 4. Enhanced Bible Tracker (Stateful World)
**Current Problem:**
`bible_tracker.py` updates character clothing, descriptors, and speech styles, but does not track location states, time of day, or inventory/items.
**Solution:**
- ✅ Expanded `update_tracking` to include `current_location`, `time_of_day`, and `held_items`. (Implemented v1.5.0)
- ✅ This explicit "Scene State" is passed to the writer prompt so the AI doesn't have to guess whether it's day or night, or whether a character is still holding a specific artifact from two chapters ago. (Implemented v1.5.0)
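For illustration, a sketch of how the tracked state might be rendered into the prompt. The field names follow the bullet above, but the `held_items` mapping shape (character name to list of items) and the block wording are assumptions.

```python
def build_scene_state_block(tracking):
    """Render tracked scene state into an explicit prompt block so the
    writer model never guesses the time of day, the location, or what
    a character is carrying."""
    lines = ["SCENE STATE (authoritative, do not contradict):"]
    lines.append(f"- Location: {tracking.get('current_location', 'unknown')}")
    lines.append(f"- Time of day: {tracking.get('time_of_day', 'unknown')}")
    for character, items in tracking.get("held_items", {}).items():
        lines.append(f"- {character} is holding: {', '.join(items)}")
    return "\n".join(lines)
```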
## 5. UI/UX: Asynchronous Model Optimization (Refresh & Optimize)
**Current Problem:**
Clicking "Refresh & Optimize" in `templates/system_status.html` submits a form that blocks the UI and forces a full page refresh, creating a clunky experience.
**Solution:**
- ✅ **Frontend (`templates/system_status.html`):** Converted the `<form>` submission into an asynchronous AJAX `fetch()` call with a spinner and disabled button state during processing. (Implemented v2.2)
- ✅ **Backend (`web/routes/admin.py`):** Updated the `optimize_models` route to detect AJAX requests and return a JSON status response instead of performing a hard redirect. (Implemented v2.2)
## 6. Eliminating AI-Isms and Enforcing Genre Authenticity (v2.3)
**Current Problem:**
Despite the existing `style_guidelines.json` and basic prompts, the AI's writing often falls back on predictable phrases ("testament to," "shiver down spine," "a sense of") and lacks a genuinely human voice, in particular failing to adapt deeply to specific genre conventions.
**Solution & Implementation Plan:**
- ✅ **Genre-Specific Instructions:** `story/writer.py` now calls `get_genre_instructions(genre)` to inject genre-tailored mandates (Thriller, Romance, Fantasy, Sci-Fi, Horror, Historical, General Fiction) into every draft prompt. (Implemented v2.3)
- ✅ **Deep POV Mandate:** The draft prompt in `story/writer.py` includes a `DEEP_POV_MANDATE` block that explicitly bans summary mode and all filter words, with concrete rewrite examples. (Implemented v2.3)
- ✅ **Prose Filter Enhancements:** The default `ai_isms` list in `story/style_persona.py` expanded from 12 to 33+ banned phrases. (Implemented v2.3)
- ✅ **Enforce Show, Don't Tell via Evaluation:** `story/editor.py`'s `evaluate_chapter_quality` now includes a `DEEP_POV_ENFORCEMENT` block with automatic fail conditions for filter-word density and summary mode. (Implemented v2.3)
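The density check behind the automatic fail condition can be sketched like this. The word list and the per-1000-words metric are illustrative assumptions; the real rubric lives in `evaluate_chapter_quality`.

```python
import re

# Illustrative subset; the production ai_isms/filter-word list is longer.
FILTER_WORDS = {"saw", "heard", "felt", "noticed", "realized", "wondered"}

def filter_word_density(text):
    """Return filter words per 1000 words of prose; the evaluator can
    auto-fail a chapter whose density exceeds a chosen threshold."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in FILTER_WORDS)
    return 1000.0 * hits / len(words)
```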
## 7. Regular Maintenance of AI-Isms (Continuous Improvement) — v2.4
**Current Problem:**
AI models evolve, and new overused phrases regularly emerge, so the static list in `data/style_guidelines.json` will become outdated. The `refresh_style_guidelines()` function already exists in `story/style_persona.py` but has no UI or scheduled trigger.
**Solution & Implementation Plan:**
- **Admin UI Trigger:** ⏳ Add a "Refresh Style Guidelines" button to `templates/system_status.html` (near the existing "Refresh & Optimize"). Use the same async AJAX pattern from Section 5.
- **Backend Route:** ⏳ Add a `/admin/refresh-style-guidelines` route in `web/routes/admin.py` that calls `style_persona.refresh_style_guidelines(model_logic, folder)` and returns a JSON status.
- **Logging:** ⏳ Log changes to `data/app.log` so admins can see what was added or removed.
## 8. Lore & Location Context Retrieval (RAG-Lite) — v2.5
**Current Problem:**
(This is the remaining half of Section 1.) `prev_sum` and the `style_block` carry all world-building as a monolithic blob. Locations, artifacts, and lore details not relevant to the current chapter waste tokens and dilute the AI's focus, causing it to hallucinate setting details or ignore established world rules.
**Solution & Implementation Plan:**
- **Tag Beats with Locations/Items:** ⏳ Extend the chapter schema in the blueprint JSON to support optional `locations` and `key_items` arrays per chapter (e.g., `"locations": ["The Thornwood Inn"]`, `"key_items": ["The Sunstone Amulet"]`).
- **Lore Index in Bible:** ⏳ Add a `lore` dict to `tracking_*.json` (managed by `story/bible_tracker.py`) that maps location/item names to short canonical descriptions (max 2 sentences each).
- **Retrieval in `write_chapter`:** ⏳ In `story/writer.py`, before building the prompt, scan the chapter's `locations` and `key_items` arrays and pull matching entries from the lore index into a `lore_block` injected into the prompt — replacing the monolithic style-block lore dump.
- **Fallback:** If no tags are present, behaviour is unchanged (graceful degradation).
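The retrieval step is essentially a dictionary lookup over the chapter's tags. A minimal sketch, with field names taken from the plan above; the block header wording is an assumption.

```python
def build_lore_block(chapter, lore_index):
    """Pull only the lore entries this chapter actually references,
    replacing the monolithic world-building dump in the style block."""
    names = chapter.get("locations", []) + chapter.get("key_items", [])
    entries = [f"- {name}: {lore_index[name]}" for name in names if name in lore_index]
    if not entries:
        return ""  # graceful degradation: no tags, no lore block
    return "RELEVANT LORE:\n" + "\n".join(entries)
```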
## 9. Structured "Story So Far" — Thread Tracking — v2.5
**Current Problem:**
(This is the remaining half of Section 2.) `prev_sum` is a growing, unstructured narrative blob. As chapters accumulate, the AI receives an ever-longer wall of prose summary as context, which dilutes attention, buries the most important recent state, and causes continuity drift.
**Solution & Implementation Plan:**
- **Structured Summary Schema:** ⏳ After each chapter is written, use `model_logic` to extract structured state into a `story_state.json` file:

  ```json
  {
    "active_threads": ["Elara is searching for the Sunstone", "The Inquisitor suspects Daren"],
    "immediate_handoff": "Elara escaped through the east gate. Daren was left behind. Dawn is breaking.",
    "resolved_threads": ["The tavern debt is paid"],
    "chapter": 7
  }
  ```

- **Prompt Injection:** ⏳ In `story/writer.py`, replace the raw `prev_sum` blob with a formatted injection of the structured state — active threads first, then the `immediate_handoff`, hiding resolved threads unless they are referenced in the current chapter's beats.
- **State Update Step:** ⏳ After `write_chapter` completes and is accepted, call an `update_story_state(chapter_text, current_state, folder)` function in `story/bible_tracker.py` (or a new `story/state.py`) to update `story_state.json` with the new chapter's resolved/active threads.
- **Continuity Guard:** ⏳ The `immediate_handoff` field from the previous chapter must always appear verbatim in the prompt as the first context block, before `prev_sum`, so the AI always sees the most recent physical/emotional state of the POV character.
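Putting the injection rules together, a sketch of the prompt formatter. The relevance test for recalling resolved threads (naive word overlap with the beats) is an assumption; a real implementation might ask `model_logic` instead.

```python
def build_story_state_block(state, beats_text=""):
    """Format story_state.json for the prompt: the verbatim handoff
    first, then active threads; resolved threads surface only when the
    current chapter's beats appear to mention them."""
    beats_lower = beats_text.lower()
    lines = ["IMMEDIATE HANDOFF (the previous chapter ended exactly here):",
             state["immediate_handoff"],
             "ACTIVE THREADS:"]
    lines += [f"- {thread}" for thread in state["active_threads"]]
    # Naive relevance check: recall a resolved thread if any of its
    # non-trivial words appears in the beats text.
    recalled = [t for t in state.get("resolved_threads", [])
                if any(w in beats_lower for w in t.lower().split() if len(w) > 3)]
    if recalled:
        lines.append("PREVIOUSLY RESOLVED (referenced in this chapter's beats):")
        lines += [f"- {thread}" for thread in recalled]
    return "\n".join(lines)
```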
## Summary of Actionable Changes for Implementation Mode
- ✅ Modify `writer.py` to filter `chars_for_writer` based on characters named in `beats`. (Implemented in v1.5.0)
- ✅ Modify `writer.py`'s `prev_content` logic to extract the tail of the chapter, not a blind slice. (Implemented in v1.5.0 via `utils.truncate_to_tokens` tail logic)
- ✅ Update `bible_tracker.py` to track time of day and location states. (Implemented in v1.5.0)
- ✅ Add a pre-processing function to expand chapter beats into staging directions before generating the prose draft. (Implemented in v2.0 — `expand_beats_to_treatment` in `story/writer.py`)
- ✅ (v2.2) Update the "Refresh & Optimize" action in the UI to be an async fetch call with a processing flag instead of a full page reload, and update `admin.py` to handle JSON responses.
- ✅ (v2.3) Updated writing prompts and evaluation rubrics across `story/writer.py`, `story/editor.py`, and `story/style_persona.py` to aggressively filter AI-isms, enforce Deep POV via a non-negotiable mandate, add genre-specific writing instructions, and fail chapters that rely on "telling" rather than "showing" via filter-word density checks in the evaluator.
- ⏳ (v2.4) Add a "Refresh Style Guidelines" button + backend route to trigger AI review of `data/style_guidelines.json`, keeping the AI-isms list current. (See Section 7)
- ⏳ (v2.5) Implement Lore & Location RAG-Lite: tag chapter beats with locations/items, build a lore index in the bible tracker, inject only relevant lore into each chapter prompt. (See Section 8)
- ⏳ (v2.5) Implement Structured Story State (Thread Tracking): replace the raw `prev_sum` blob with a structured `story_state.json` containing active threads, a precise immediate handoff, and resolved threads. (See Section 9)