feat: Implement ai_blueprint_v2.md — Exp 5, 6 & 7 (persona validation, mid-gen consistency, two-pass drafting)

Exp 6 — Iterative Persona Validation (story/style_persona.py + cli/engine.py):
- Added validate_persona(): generates ~200-word sample in persona voice, scores 1–10 via
  lightweight voice-quality prompt; accepts if ≥ 7/10
- cli/engine.py retries create_initial_persona() up to 3× until validation passes
- Expected: -20% Phase 3 voice-drift rewrites
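
The retry wiring in cli/engine.py is not part of this diff; a minimal sketch of the loop described above (the function name `create_persona_with_validation` and the constant are assumptions, standing in for the actual engine code calling `create_initial_persona()` / `validate_persona()`):

```python
MAX_PERSONA_ATTEMPTS = 3  # assumed constant name; the commit says "up to 3x"

def create_persona_with_validation(bp, folder, create_fn, validate_fn):
    """Retry persona creation until validate_fn accepts it (score >= 7)."""
    persona, score = None, 0
    for _ in range(MAX_PERSONA_ATTEMPTS):
        persona = create_fn(bp, folder)
        is_valid, score = validate_fn(bp, persona, folder)
        if is_valid:
            break
    # After exhausting attempts, fall back to the last candidate rather than
    # blocking Phase 3 outright.
    return persona, score
```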

Exp 5 — Mid-gen Consistency Snapshots (cli/engine.py):
- analyze_consistency() called every 10 chapters inside the writing loop
- Issues logged as ⚠️ warnings; non-blocking; score and summary emitted
- Expected: -30% post-generation continuity error rate
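
The snapshot trigger can be sketched as follows (assumed shape; the actual writing loop in cli/engine.py is not shown in this hunk, and `analyze_fn` stands in for `analyze_consistency()`):

```python
SNAPSHOT_INTERVAL = 10  # chapters between consistency checks (per commit message)

def maybe_run_consistency_check(chapter_num, analyze_fn, log_fn):
    """Non-blocking: log issues as warnings, never halt generation."""
    if chapter_num % SNAPSHOT_INTERVAL != 0:
        return None
    score, issues = analyze_fn()
    for issue in issues:
        log_fn(f"⚠️ Consistency warning (ch {chapter_num}): {issue}")
    log_fn(f"Consistency score at ch {chapter_num}: {score}")
    return score
```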

Exp 7 — Two-Pass Drafting (story/writer.py):
- After Flash rough draft, Pro model (model_logic) polishes prose against a strict
  checklist: filter words, deep POV, active voice, AI-isms, chapter hook
- max_attempts reduced 3 → 2 since polished prose needs fewer rewrite cycles
- Expected: +0.3 HQS with no increase in per-chapter cost
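
A hedged sketch of the two-pass flow in story/writer.py; the function name, prompt wording, and model interfaces are assumptions, and only the draft-then-polish split plus the checklist items come from this commit:

```python
POLISH_CHECKLIST = (
    "Remove filter words (felt, saw, heard, realized, noticed); "
    "enforce deep POV and active voice; strip AI-isms; "
    "end the chapter on a hook."
)

def write_chapter_two_pass(chapter_prompt, flash_model, pro_model):
    """Pass 1: fast rough draft. Pass 2: Pro-model polish against the checklist."""
    rough = flash_model.generate_content(chapter_prompt).text
    polish_prompt = (
        f"ROLE: Line Editor\nCHECKLIST: {POLISH_CHECKLIST}\n"
        f"TASK: Polish this draft without changing plot events.\n\nDRAFT:\n{rough}"
    )
    return pro_model.generate_content(polish_prompt).text
```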

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-22 22:08:47 -05:00
parent 2100ca2312
commit 4f2449f79b
4 changed files with 167 additions and 16 deletions


@@ -104,6 +104,86 @@ def create_initial_persona(bp, folder):
        return {"name": "AI Author", "bio": "Standard, balanced writing style."}


def validate_persona(bp, persona_details, folder):
    """Validate a newly created persona by generating a 200-word sample and scoring it.

    Experiment 6 (Iterative Persona Validation): generates a test passage in the
    persona's voice and evaluates voice quality before accepting it. This front-loads
    quality assurance so Phase 3 starts with a well-calibrated author voice.

    Returns (is_valid: bool, score: int). Threshold: score >= 7 → accepted.
    """
    meta = bp.get('book_metadata', {})
    genre = meta.get('genre', 'Fiction')
    tone = meta.get('style', {}).get('tone', 'balanced')
    name = persona_details.get('name', 'Unknown Author')
    bio = persona_details.get('bio', 'Standard style.')

    sample_prompt = f"""
ROLE: Fiction Writer
TASK: Write a 200-word opening scene that perfectly demonstrates this author's voice.
AUTHOR_PERSONA:
Name: {name}
Style/Bio: {bio}
GENRE: {genre}
TONE: {tone}
RULES:
- Exactly ~200 words of prose (no chapter header, no commentary)
- Must reflect the persona's stated sentence structure, vocabulary, and voice
- Show, don't tell — no filter words (felt, saw, heard, realized, noticed)
- Deep POV: immerse the reader in a character's immediate experience
OUTPUT: Prose only.
"""
    try:
        resp = ai_models.model_logic.generate_content(sample_prompt)
        utils.log_usage(folder, ai_models.model_logic.name, resp.usage_metadata)
        sample_text = resp.text
    except Exception as e:
        utils.log("SYSTEM", f" -> Persona validation sample failed: {e}. Accepting persona.")
        return True, 7

    # Lightweight scoring: focused on voice quality (not full 13-rubric)
    score_prompt = f"""
ROLE: Literary Editor
TASK: Score this prose sample for author voice quality.
EXPECTED_PERSONA:
{bio}
SAMPLE:
{sample_text}
CRITERIA:
1. Does the prose reflect the stated author persona? (voice, register, sentence style)
2. Is the prose free of filter words (felt, saw, heard, noticed, realized)?
3. Is it deep POV — immediate, immersive, not distant narration?
4. Is there genuine sentence variety and strong verb choice?
SCORING (1-10):
- 8-10: Voice is distinct, matches persona, clean deep POV
- 6-7: Reasonable voice, minor filter word issues
- 1-5: Generic AI prose, heavy filter words, or persona not reflected
OUTPUT_FORMAT (JSON): {{"score": int, "reason": "One sentence."}}
"""
    try:
        resp2 = ai_models.model_logic.generate_content(score_prompt)
        utils.log_usage(folder, ai_models.model_logic.name, resp2.usage_metadata)
        data = json.loads(utils.clean_json(resp2.text))
        score = int(data.get('score', 7))
        reason = data.get('reason', '')
        is_valid = score >= 7
        utils.log("SYSTEM", f" -> Persona validation: {score}/10 {'✅ Accepted' if is_valid else '❌ Rejected'} {reason}")
        return is_valid, score
    except Exception as e:
        utils.log("SYSTEM", f" -> Persona scoring failed: {e}. Accepting persona.")
        return True, 7


def refine_persona(bp, text, folder):
    utils.log("SYSTEM", "Refining Author Persona based on recent chapters...")
    ad = bp.get('book_metadata', {}).get('author_details', {})