feat: Implement ai_blueprint.md action plan — architectural review & optimisations

This commit executes Steps 1–7 of the ai_blueprint.md action plan:

DOCUMENTATION (Steps 1–3, 6–7):
- docs/current_state_analysis.md: Phase-by-phase cost/quality mapping of existing pipeline
- docs/alternatives_analysis.md: 15 alternative approaches with testable hypotheses
- docs/experiment_design.md: 7 controlled A/B experiment specifications (CPC, HQS, CER metrics)
- ai_blueprint_v2.md: New recommended architecture with cost projections and experiment roadmap

CODE IMPROVEMENTS (Step 4 — Experiments 1–4 implemented):
- story/writer.py: Extract build_persona_info() — persona loaded once per book, not per chapter
- story/writer.py: Adaptive scoring thresholds — SCORE_PASSING scales 6.5→7.5 by chapter position
- story/writer.py: Beat expansion skip — if beats >100 words, skip Director's Treatment expansion
- story/planner.py: validate_outline() — pre-generation gate checks missing beats, continuity, pacing
- story/planner.py: Enrichment field validation — warn on missing title/genre after enrich()
- cli/engine.py: Wire persona cache, outline validation gate, chapter_position threading

Expected savings: ~285K tokens per 30-chapter novel (~7% cost reduction)
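The adaptive scoring threshold can be sketched roughly as follows. This is an illustrative reconstruction, not the committed story/writer.py code: the function name, the constant names, and the linear ramp are assumptions — the commit only states that SCORE_PASSING scales from 6.5 to 7.5 by chapter position.

```python
# Hypothetical sketch of the adaptive threshold described above.
# SCORE_PASSING_MIN/MAX and the linear interpolation are assumptions.
SCORE_PASSING_MIN = 6.5
SCORE_PASSING_MAX = 7.5

def score_passing_threshold(chapter_position: int, total_chapters: int) -> float:
    """Scale the passing score from 6.5 (opening chapter) to 7.5 (finale).

    Later chapters carry more narrative weight, so a draft must clear a
    higher quality bar before it is accepted.
    """
    if total_chapters <= 1:
        return SCORE_PASSING_MAX
    fraction = (chapter_position - 1) / (total_chapters - 1)  # 0.0 .. 1.0
    return SCORE_PASSING_MIN + fraction * (SCORE_PASSING_MAX - SCORE_PASSING_MIN)
```

Under this sketch, chapter 1 of 30 is scored against 6.5 and chapter 30 against 7.5, with intermediate chapters interpolated linearly.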

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-22 22:01:30 -05:00
parent 6684ec2bf5
commit 2100ca2312
8 changed files with 1143 additions and 32 deletions

@@ -80,6 +80,14 @@ def enrich(bp, folder, context=""):
        if 'plot_beats' not in bp or not bp['plot_beats']:
            bp['plot_beats'] = ai_data.get('plot_beats', [])

        # Validate critical fields after enrichment
        title = bp.get('book_metadata', {}).get('title')
        genre = bp.get('book_metadata', {}).get('genre')
        if not title:
            utils.log("ENRICHER", "⚠️ Warning: book_metadata.title is missing after enrichment.")
        if not genre:
            utils.log("ENRICHER", "⚠️ Warning: book_metadata.genre is missing after enrichment.")
        return bp
    except Exception as e:
        utils.log("ENRICHER", f"Enrichment failed: {e}")
@@ -288,3 +296,66 @@ def create_chapter_plan(events, bp, folder):
    except Exception as e:
        utils.log("ARCHITECT", f"Failed to create chapter plan: {e}")
        return []


def validate_outline(events, chapters, bp, folder):
    """Pre-generation outline validation gate (Action Plan Step 3: Alt 2-B).

    Checks for: missing required beats, character continuity issues, severe pacing
    imbalances, and POV logic errors. Returns findings but never blocks generation —
    issues are logged as warnings so the writer can proceed.
    """
    utils.log("ARCHITECT", "Validating outline before writing phase...")
    beats_context = bp.get('plot_beats', [])
    chars_summary = [{"name": c.get("name"), "role": c.get("role")} for c in bp.get('characters', [])]
    # Sample chapter data to keep the prompt size manageable
    chapters_sample = (chapters[:5] + chapters[-5:]) if len(chapters) > 10 else chapters
    prompt = f"""
ROLE: Continuity Editor
TASK: Review this chapter outline for issues that could cause expensive rewrites later.

REQUIRED_BEATS (must all appear somewhere in the chapter plan):
{json.dumps(beats_context)}

CHARACTERS:
{json.dumps(chars_summary)}

CHAPTER_PLAN (sample — first 5 and last 5 chapters):
{json.dumps(chapters_sample)}

CHECK FOR:
1. MISSING_BEATS: Are all required plot beats present? List any absent beats by name.
2. CONTINUITY: Are there character deaths/revivals, unacknowledged time jumps, or contradictions visible in the outline?
3. PACING: Are there 3+ consecutive chapters with identical pacing that would create reader fatigue?
4. POV_LOGIC: Are key emotional scenes assigned to the most appropriate POV character?

OUTPUT_FORMAT (JSON):
{{
  "issues": [
    {{"type": "missing_beat|continuity|pacing|pov", "description": "...", "severity": "critical|warning"}}
  ],
  "overall_severity": "ok|warning|critical",
  "summary": "One-sentence summary of findings."
}}
"""
    try:
        response = ai_models.model_logic.generate_content(prompt)
        utils.log_usage(folder, ai_models.model_logic.name, response.usage_metadata)
        result = json.loads(utils.clean_json(response.text))
        severity = result.get('overall_severity', 'ok')
        issues = result.get('issues', [])
        summary = result.get('summary', 'No issues found.')
        for issue in issues:
            prefix = "⚠️" if issue.get('severity') == 'warning' else "🚨"
            utils.log("ARCHITECT", f"  {prefix} Outline {issue.get('type', 'issue')}: {issue.get('description', '')}")
        utils.log("ARCHITECT", f"Outline validation complete: {severity.upper()} - {summary}")
        return result
    except Exception as e:
        utils.log("ARCHITECT", f"Outline validation failed (non-blocking): {e}")
        return {"issues": [], "overall_severity": "ok", "summary": "Validation skipped."}