Compare commits

23 commits: `ba56bc1ec1...main`

| SHA1 |
|---|
| f869700070 |
| d2c65f010a |
| dc39930da4 |
| ff5093a5f9 |
| 3a42d1a339 |
| 4f2449f79b |
| 2100ca2312 |
| 6684ec2bf5 |
| f740174257 |
| d77ceb376d |
| 3ba648ac5f |
| 6f19808f15 |
| f1d7fcbcb7 |
| c3724a6761 |
| 74cc66eed3 |
| 353dc859d2 |
| 51b98c9399 |
| b4058f9f1f |
| 093e78a89e |
| bcba67a35f |
| 98a330c416 |
| af2050160e |
| 203d74f61d |
.gitignore (vendored) — 1 addition

@@ -12,6 +12,7 @@ ai_blueprint.md
 plans/

 # Claude / Anthropic Artifacts
+CLAUDE.md
 .claude/
 claude.json
CLAUDE.md — 15 deletions (file removed)

@@ -1,15 +0,0 @@
-# Claude Custom Instructions
-
-## Project Context and Index
-Before starting any coding task, you MUST always read the `ai_blueprint.md` file in the root directory. This file serves as the project index, architecture plan, and contains the actionable steps and versions designed by the Architect. Do not start modifying files until you have read and understood the context provided in `ai_blueprint.md`.
-
-## Managing Documentation
-Whenever you complete an implementation step outlined in `ai_blueprint.md` or make notable architectural changes, you MUST update the `README.md` and/or `ai_blueprint.md` to reflect those changes and bump version numbers if appropriate.
-
-## Git Workflow
-Every time you complete a task or make changes to files, you MUST automatically commit those changes to Git before waiting for the user's next prompt.
-
-### Instructions:
-1. Always run `git add .` to stage your changes.
-2. Run `git commit -m "Auto-commit: [brief description of what was changed]"`
-3. Do not ask for permission to commit, just perform the git commit automatically.
README.md — 26 changes

@@ -96,10 +96,23 @@ Open `http://localhost:5000`.
 - **Project Dashboard:** Create and monitor generation jobs from the browser.
 - **Real-time Logs:** Console output is streamed to the browser and stored in the database.
 - **Chapter Editor:** Edit chapters directly in the browser; manual edits are preserved across artifact regenerations and synced back to character/plot tracking state.
+- **Chapter Navigation:** Prev/Next buttons on every chapter card in the manuscript reader let you jump between chapters without scrolling.
+- **Download Bible:** Download the project's `bible.json` directly from any run's detail page for offline review or cloning.
+- **Run Tagging:** Label runs with comma-separated tags (e.g. `dark-ending`, `v2`, `favourite`) to organise and track experiments.
+- **Run Deletion:** Delete completed or failed runs and their filesystem data from the run detail page.
 - **Cover Regeneration:** Submit written feedback to regenerate the cover image iteratively.
 - **Admin Panel:** Manage all users, view spend, and perform factory resets at `/admin`.
 - **Per-User API Keys:** Each user can supply their own Gemini API key; costs are tracked per account.
+
+### Cost-Effective by Design
+
+This engine was built with the goal of producing high-quality fiction at the lowest possible cost. This is achieved through several architectural optimizations:
+
+* **Tiered AI Models**: The system uses cheaper, faster models (like Gemini Pro) for structural and analytical tasks—planning the plot, scoring chapter quality, and ensuring consistency. The more powerful and expensive creative models are reserved for the actual writing process.
+* **Intelligent Context Management**: To minimize the number of tokens sent to the AI, the system is very selective about the data it includes in each request. For example, when writing a chapter, it only injects data for the characters who are currently in the scene, rather than the entire cast.
+* **Adaptive Workflows**: The engine avoids unnecessary work. If a user provides a detailed outline for a chapter, the system skips the AI step that would normally expand on a basic idea, saving both time and money. It also adjusts its quality standards based on the chapter's importance, spending more effort on a climactic scene than on a simple transition.
+* **Caching**: The system caches the results of deterministic AI tasks. If it needs to perform the same analysis twice, it reuses the original result instead of making a new API call.
+
 ### CLI Wizard (`cli/`)
 - **Interactive Setup:** Menu-driven interface (via Rich) for creating projects, managing personas, and defining characters and plot beats.
 - **Smart Resume:** Detects in-progress runs via lock files and prompts to resume.
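The caching optimisation described above can be sketched as a small keyed lookup; this is a minimal illustration only, and the `cached_ai_call` helper and its key scheme are hypothetical, not the engine's actual implementation:

```python
import hashlib
import json

_cache = {}  # hypothetical in-memory cache; a real engine might persist this to disk

def cached_ai_call(model_name, prompt, call_fn):
    """Reuse the result of a deterministic AI task instead of re-calling the API."""
    # Key on model + prompt so identical analyses map to the same cache entry.
    key = hashlib.sha256(json.dumps([model_name, prompt]).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_fn(model_name, prompt)  # only the first call hits the API
    return _cache[key]
```

A second identical call returns the stored result without invoking `call_fn` again, which is exactly the "same analysis twice" case the bullet describes.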
@@ -111,7 +124,16 @@ Open `http://localhost:5000`.
 - **Dynamic Pacing:** Monitors story progress during writing and inserts bridge chapters to slow a rushing plot or removes redundant ones detected mid-stream — without restarting.
 - **Series Continuity:** When generating Book 2+, carries forward character visual tracking, established relationships, plot threads, and a cumulative "Story So Far" summary.
 - **Persona Refinement Loop:** Every 5 chapters, analyzes actual written text to refine the author persona model, maintaining stylistic consistency throughout the book.
-- **Consistency Checker (`editor.py`):** Scores chapters on 8 rubrics (engagement, voice, sensory detail, scene execution, etc.) and flags AI-isms ("tapestry", "palpable tension") and weak filter verbs ("felt", "realized").
+- **Persona Cache:** The author persona (including writing sample files) is loaded once at the start of the writing phase and reused for every chapter, eliminating redundant file I/O. The cache is refreshed whenever the persona is refined.
+- **Outline Validation Gate (`planner.py`):** Before the writing phase begins, a Logic-model pass checks the chapter plan for missing required beats, character continuity issues, pacing imbalances, and POV logic errors. Issues are logged as warnings so the writer can review them before generation begins.
+- **Adaptive Scoring Thresholds (`writer.py`):** Quality passing thresholds scale with chapter position — setup chapters use a lower bar (6.5) to avoid over-spending refinement tokens on early exposition, while climax chapters use a stricter bar (7.5) to ensure the most important scenes receive maximum effort.
+- **Adaptive Refinement Attempts (`writer.py`):** Climax and resolution chapters (position ≥ 75% through the book) receive up to 3 refinement attempts; earlier chapters keep 2. This concentrates quality effort on the scenes readers remember most.
+- **Stricter Polish Pass (`writer.py`):** The filter-word threshold for skipping the two-pass polish has been tightened from 1-per-83-words to 1-per-125-words, so more borderline drafts are cleaned before evaluation.
+- **Smart Beat Expansion Skip (`writer.py`):** If a chapter's scene beats are already detailed (>100 words total), the Director's Treatment expansion step is skipped, saving ~5K tokens per chapter.
+- **Consistency Checker (`editor.py`):** Scores chapters on 13 rubrics (engagement, voice, sensory detail, scene execution, dialogue, pacing, staging, prose dynamics, clarity, etc.) and flags AI-isms ("tapestry", "palpable tension") and weak filter verbs ("felt", "realized"). Chapter evaluation now uses head+tail sampling (`keep_head=True`) ensuring the evaluator sees the chapter opening (hooks, sensory anchoring) as well as the ending — long chapters no longer receive scores based only on their tail.
+- **Rewrite Model Upgrade (`editor.py`):** Manual chapter rewrites and user-triggered edits now use `model_writer` (the creative writing model) instead of `model_logic`, producing significantly better prose quality on rewritten content.
+- **Improved Consistency Sampling (`editor.py`):** The mid-generation consistency analysis now samples head + middle + tail of each chapter (instead of head + tail only), giving the continuity LLM a complete picture of each chapter's events for more accurate contradiction detection.
+- **Larger Persona Validation Sample (`style_persona.py`):** The persona validation test passage has been increased from 200 words to 400 words, giving the scorer enough material to reliably assess sentence rhythm, filter-word habits, and deep POV quality before accepting a persona.
 - **Dynamic Character Injection (`writer.py`):** Only injects characters explicitly named in the chapter's `scene_beats` plus the POV character into the writer prompt. Eliminates token waste from unused characters and reduces hallucinated appearances.
 - **Smart Context Tail (`writer.py`):** Extracts the final ~1,000 tokens of the previous chapter (the actual ending) rather than blindly truncating from the front. Ensures the hand-off point — where characters are standing and what was last said — is always preserved.
 - **Stateful Scene Tracking (`bible_tracker.py`):** After each chapter, the tracker records each character's `current_location`, `time_of_day`, and `held_items` in addition to appearance and events. This scene state is injected into subsequent chapter prompts so the writer knows exactly where characters are, what time it is, and what they're carrying.
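The Smart Context Tail behaviour (keep the end of the previous chapter rather than its beginning) can be sketched roughly as follows; the 4-characters-per-token estimate, the paragraph-boundary snap, and the helper name are assumptions for illustration, not the code in `writer.py`:

```python
def smart_context_tail(previous_chapter: str, max_tokens: int = 1000) -> str:
    """Return roughly the final max_tokens of text, preserving the chapter's ending."""
    approx_chars = max_tokens * 4  # rough heuristic: ~4 characters per token
    if len(previous_chapter) <= approx_chars:
        return previous_chapter
    tail = previous_chapter[-approx_chars:]
    # Start at a paragraph boundary where possible so the hand-off reads cleanly.
    cut = tail.find("\n\n")
    return tail[cut + 2:] if cut != -1 else tail
```

The key design point is the slice from the end (`[-approx_chars:]`): truncating from the front would discard exactly the hand-off state the next chapter needs.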
@@ -126,7 +148,7 @@ Open `http://localhost:5000`.

 ### AI Infrastructure (`ai/`)
 - **Resilient Model Wrapper:** Wraps every Gemini API call with up to 3 retries and exponential backoff, handles quota errors and rate limits, and can switch to an alternative model mid-stream.
-- **Auto Model Selection:** On startup, a bootstrapper model queries the Gemini API and selects the optimal models for Logic, Writer, Artist, and Image roles. Selection is cached for 24 hours.
+- **Auto Model Selection:** On startup, a bootstrapper model queries the Gemini API and selects the optimal models for Logic, Writer, Artist, and Image roles. Selection is cached for 24 hours. The selection algorithm now prioritizes quality — free/preview/exp models are preferred by capability (Pro > Flash, 2.5 > 2.0 > 1.5) rather than by cost alone.
 - **Vertex AI Support:** If `GCP_PROJECT` is set and OAuth credentials are present, initializes Vertex AI automatically for Imagen image generation.
 - **Payload Guardrails:** Every generation call estimates the prompt token count before dispatch. If the payload exceeds 30,000 tokens, a warning is logged so runaway context injection is surfaced immediately.
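A minimal sketch of the retry pattern the Resilient Model Wrapper describes (up to 3 attempts with exponential backoff); the function name, exception handling, and delay values here are illustrative assumptions, not the project's actual wrapper:

```python
import time

def resilient_call(fn, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying up to max_retries times with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

The injected `sleep` parameter exists only to make the sketch testable; a production wrapper would also distinguish quota errors from other failures before retrying.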
(file name not shown in this view)

@@ -27,9 +27,10 @@ model_logic = None
 model_writer = None
 model_artist = None
 model_image = None
-logic_model_name = "models/gemini-1.5-pro"
+logic_model_name = "models/gemini-1.5-flash"
 writer_model_name = "models/gemini-1.5-flash"
 artist_model_name = "models/gemini-1.5-flash"
+pro_model_name = "models/gemini-2.0-pro-exp"  # Best available Pro for critical rewrites (prefer free/exp)
 image_model_name = None
 image_model_source = "None"
ai/setup.py — 102 changes

@@ -34,9 +34,11 @@ def get_optimal_model(base_type="pro"):

 def get_default_models():
     return {
-        "logic": {"model": "models/gemini-2.0-pro-exp", "reason": "Fallback: Gemini 2.0 Pro for complex reasoning and JSON adherence.", "estimated_cost": "$0.00/1M (Experimental)"},
-        "writer": {"model": "models/gemini-2.0-flash", "reason": "Fallback: Gemini 2.0 Flash for fast, high-quality creative writing.", "estimated_cost": "$0.10/1M"},
-        "artist": {"model": "models/gemini-2.0-flash", "reason": "Fallback: Gemini 2.0 Flash for visual prompt design.", "estimated_cost": "$0.10/1M"},
+        "logic": {"model": "models/gemini-2.0-pro-exp", "reason": "Fallback: Gemini 2.0 Pro Exp (free) for cost-effective logic and JSON adherence.", "estimated_cost": "Free", "book_cost": "$0.00"},
+        "writer": {"model": "models/gemini-2.0-flash", "reason": "Fallback: Gemini 2.0 Flash for fast, high-quality creative writing.", "estimated_cost": "$0.10/1M", "book_cost": "$0.10"},
+        "artist": {"model": "models/gemini-2.0-flash", "reason": "Fallback: Gemini 2.0 Flash for visual prompt design.", "estimated_cost": "$0.10/1M", "book_cost": "$0.01"},
+        "pro_rewrite": {"model": "models/gemini-2.0-pro-exp", "reason": "Fallback: Gemini 2.0 Pro Exp (free) for critical chapter rewrites.", "estimated_cost": "Free", "book_cost": "$0.00"},
+        "total_estimated_book_cost": "$0.11",
         "ranking": []
     }
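In the updated fallback table, `total_estimated_book_cost` ($0.11) is simply the sum of the per-role `book_cost` fields ($0.00 + $0.10 + $0.01 + $0.00). A quick sanity check of that invariant, assuming only the dict shape shown in the diff (the `total_book_cost` helper is hypothetical):

```python
def total_book_cost(defaults: dict) -> float:
    """Sum the per-role book_cost fields ("$X.XX" strings) of a model-selection dict."""
    roles = ("logic", "writer", "artist", "pro_rewrite")
    return round(sum(float(defaults[r]["book_cost"].lstrip("$"))
                     for r in roles if r in defaults), 2)

# Per-role costs as declared in the fallback dict above.
defaults = {
    "logic": {"book_cost": "$0.00"},
    "writer": {"book_cost": "$0.10"},
    "artist": {"book_cost": "$0.01"},
    "pro_rewrite": {"book_cost": "$0.00"},
}
```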
@@ -73,37 +75,63 @@ def select_best_models(force_refresh=False):
     model = genai.GenerativeModel(bootstrapper)
     prompt = f"""
 ROLE: AI Model Architect
-TASK: Select the optimal Gemini models for a book-writing application. Prefer newer Gemini 2.x models when available.
+TASK: Select the optimal Gemini models for a book-writing application.
+PRIMARY OBJECTIVE: Maximize book quality. Free/preview/exp models are $0.00 — use the BEST quality free model available for every role. Only fall back to paid Flash when no free alternative exists, and only if it fits within the budget cap.

 AVAILABLE_MODELS:
 {json.dumps(compatible)}

-PRICING_CONTEXT (USD per 1M tokens, approximate):
-- Gemini 2.5 Pro/Flash: Best quality/speed; check current pricing.
-- Gemini 2.0 Flash: ~$0.10 Input / $0.40 Output. (Fast, cost-effective, excellent quality).
-- Gemini 2.0 Pro Exp: Free experimental tier with strong reasoning.
-- Gemini 1.5 Flash: ~$0.075 Input / $0.30 Output. (Legacy, still reliable).
-- Gemini 1.5 Pro: ~$1.25 Input / $5.00 Output. (Legacy, expensive).
+PRICING_CONTEXT (USD per 1M tokens — use these to calculate actual book cost):
+- FREE TIER: Any model with 'exp', 'beta', or 'preview' in name = $0.00. Always prefer these.
+  e.g. gemini-2.0-pro-exp = FREE, gemini-2.5-pro-preview = FREE, gemini-2.5-flash-preview = FREE.
+- gemini-2.5-flash / gemini-2.5-flash-preview: ~$0.075 Input / $0.30 Output.
+- gemini-2.0-flash: ~$0.10 Input / $0.40 Output.
+- gemini-1.5-flash: ~$0.075 Input / $0.30 Output.
+- gemini-2.5-pro (stable, non-preview): ~$1.25 Input / $10.00 Output. BUDGET BREAKER.
+- gemini-1.5-pro (stable): ~$1.25 Input / $5.00 Output. BUDGET BREAKER.

-CRITERIA:
-- LOGIC: Needs complex reasoning, strict JSON adherence, plot consistency, and instruction following.
-  -> Prefer: Gemini 2.5 Pro > 2.0 Pro > 2.0 Flash > 1.5 Pro
-- WRITER: Needs creativity, prose quality, long-form text generation, and speed.
-  -> Prefer: Gemini 2.5 Flash/Pro > 2.0 Flash > 1.5 Flash (balance quality/cost)
-- ARTIST: Needs rich visual description, prompt understanding for cover art design.
-  -> Prefer: Gemini 2.0 Flash > 1.5 Flash (speed and visual understanding)
-
-CONSTRAINTS:
-- Strongly prefer Gemini 2.x over 1.5 where available.
-- Avoid 'experimental' or 'preview' only if a stable 2.x version exists; otherwise experimental 2.x is fine.
-- 'thinking' models are too slow/expensive for Writer/Artist roles.
-- Provide a ranking of ALL available models from best to worst overall.
+BOOK TOKEN BUDGET (30-chapter novel — use this to calculate real cost before deciding):
+  Logic role total: ~265,000 input tokens + ~55,000 output tokens
+  (planning, state tracking, consistency checks, director treatments, chapter evaluation per chapter)
+  Writer role total: ~450,000 input tokens + ~135,000 output tokens
+  (drafting, refinement per chapter — 3 passes max)
+  Artist role total: ~30,000 input tokens + ~8,000 output tokens
+  (cover art prompt design, cover layout, blurb, image quality evaluation — text calls only)
+
+NOTE: Cover IMAGE generation uses the Imagen API (billed per image, not per token).
+Imagen costs are fixed at ~$0.04/image × up to 3 attempts = ~$0.12 max. This is SEPARATE
+from the text token budget below and cannot be reduced by model selection.
+
+COST FORMULA: cost = (input_tokens / 1,000,000 * input_price) + (output_tokens / 1,000,000 * output_price)
+HARD BUDGET: Logic_cost + Writer_cost + Artist_cost (text only) must be < $1.85
+(leaving $0.15 headroom for Imagen cover generation, total book target: $2.00).
+
+SELECTION RULES (apply in order):
+1. FREE/PREVIEW ALWAYS WINS: Always pick the highest-quality free/exp/preview model for each role.
+   Free models cost $0 regardless of tier — a free Pro beats a paid Flash every time.
+2. QUALITY FOR WRITER: The Writer role produces all fiction prose. Prefer the best free Flash or
+   free Pro variant available. If no free model exists for Writer, use the cheapest paid Flash
+   that keeps the total budget under $1.85. Never use a paid stable Pro for Writer.
+3. CALCULATE: For non-free models, compute the actual book cost using the token budget above.
+   Reject any combination that exceeds $2.00 total.
+4. QUALITY TIEBREAK: Among models with identical cost (e.g. both free), prefer the highest
+   generation and capability: Pro > Flash, 2.5 > 2.0 > 1.5, stable > exp only if cost equal.
+5. NO THINKING MODELS: Too slow and expensive for any role.
+
+ROLES:
+- LOGIC: Planning, JSON adherence, plot consistency, AND chapter quality evaluation. Best free/exp Pro is ideal; free Flash preview acceptable if no free Pro exists.
+- WRITER: Creative prose, chapter drafting and refinement. Best available free Flash or free Pro variant. Never use a paid stable Pro.
+- ARTIST: Visual prompts for cover art. Cheapest capable Flash model (free preferred).
+- PRO_REWRITE: Emergency full-chapter rewrite (rare, ~1-2x per book). Best free/exp Pro available.
+  If no free Pro exists, use best free Flash preview — do not use paid models here.

 OUTPUT_FORMAT (JSON only, no markdown):
 {{
-  "logic": {{ "model": "string", "reason": "string", "estimated_cost": "$X.XX/1M" }},
-  "writer": {{ "model": "string", "reason": "string", "estimated_cost": "$X.XX/1M" }},
-  "artist": {{ "model": "string", "reason": "string", "estimated_cost": "$X.XX/1M" }},
+  "logic": {{ "model": "string", "reason": "string", "estimated_cost": "$X.XX/1M", "book_cost": "$X.XX" }},
+  "writer": {{ "model": "string", "reason": "string", "estimated_cost": "$X.XX/1M", "book_cost": "$X.XX" }},
+  "artist": {{ "model": "string", "reason": "string", "estimated_cost": "$X.XX/1M", "book_cost": "$X.XX" }},
+  "pro_rewrite": {{ "model": "string", "reason": "string", "estimated_cost": "$X.XX/1M", "book_cost": "$X.XX" }},
+  "total_estimated_book_cost": "$X.XX",
   "ranking": [ {{ "model": "string", "reason": "string", "estimated_cost": "string" }} ]
 }}
 """
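Applying the prompt's COST FORMULA to its own token budget reproduces the fallback table's numbers: for example, the Writer role on gemini-2.0-flash pricing ($0.10 in / $0.40 out per 1M) over ~450K input + ~135K output tokens comes to about $0.10 per book. A worked sketch:

```python
def role_book_cost(input_tokens, output_tokens, input_price, output_price):
    """COST FORMULA from the prompt: per-1M-token prices applied to the book token budget."""
    return (input_tokens / 1_000_000 * input_price) + (output_tokens / 1_000_000 * output_price)

# Writer role budget: ~450K input + ~135K output on gemini-2.0-flash pricing.
writer = role_book_cost(450_000, 135_000, 0.10, 0.40)  # ≈ $0.099 per book
# Artist role budget: ~30K input + ~8K output on the same pricing.
artist = role_book_cost(30_000, 8_000, 0.10, 0.40)     # ≈ $0.006 per book
```

Both values sit comfortably under the $1.85 text budget, which is why the hard cap mainly exists to reject the "BUDGET BREAKER" stable Pro models.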
@@ -173,19 +201,27 @@ def init_models(force=False):
     if not force:
         missing_costs = False
         for role in ['logic', 'writer', 'artist']:
-            if 'estimated_cost' not in selected_models.get(role, {}) or selected_models[role].get('estimated_cost') == 'N/A':
+            role_data = selected_models.get(role, {})
+            if 'estimated_cost' not in role_data or role_data.get('estimated_cost') == 'N/A':
                 missing_costs = True
+            if 'book_cost' not in role_data:
+                missing_costs = True
+        if 'total_estimated_book_cost' not in selected_models:
+            missing_costs = True
         if missing_costs:
             utils.log("SYSTEM", "⚠️ Missing cost info in cached models. Forcing refresh.")
             return init_models(force=True)

     def get_model_details(role_data):
-        if isinstance(role_data, dict): return role_data.get('model'), role_data.get('estimated_cost', 'N/A')
-        return role_data, 'N/A'
+        if isinstance(role_data, dict):
+            return role_data.get('model'), role_data.get('estimated_cost', 'N/A'), role_data.get('book_cost', 'N/A')
+        return role_data, 'N/A', 'N/A'

-    logic_name, logic_cost = get_model_details(selected_models['logic'])
-    writer_name, writer_cost = get_model_details(selected_models['writer'])
-    artist_name, artist_cost = get_model_details(selected_models['artist'])
+    logic_name, logic_cost, logic_book = get_model_details(selected_models['logic'])
+    writer_name, writer_cost, writer_book = get_model_details(selected_models['writer'])
+    artist_name, artist_cost, artist_book = get_model_details(selected_models['artist'])
+    pro_name, pro_cost, _ = get_model_details(selected_models.get('pro_rewrite', {'model': 'models/gemini-2.0-pro-exp', 'estimated_cost': 'Free', 'book_cost': '$0.00'}))
+    total_book_cost = selected_models.get('total_estimated_book_cost', 'N/A')

     logic_name = logic_name if config.MODEL_LOGIC_HINT == "AUTO" else config.MODEL_LOGIC_HINT
     writer_name = writer_name if config.MODEL_WRITER_HINT == "AUTO" else config.MODEL_WRITER_HINT
@@ -194,8 +230,10 @@ def init_models(force=False):
     models.logic_model_name = logic_name
     models.writer_model_name = writer_name
     models.artist_model_name = artist_name
+    models.pro_model_name = pro_name

-    utils.log("SYSTEM", f"Models: Logic={logic_name} ({logic_cost}) | Writer={writer_name} ({writer_cost}) | Artist={artist_name}")
+    utils.log("SYSTEM", f"Models: Logic={logic_name} ({logic_cost}, {logic_book}/book) | Writer={writer_name} ({writer_cost}, {writer_book}/book) | Artist={artist_name} | Pro-Rewrite={pro_name} ({pro_cost})")
+    utils.log("SYSTEM", f"💰 Estimated book cost: {total_book_cost} text + ~$0.00-$0.12 Imagen cover (budget: $2.00 total)")

     utils.update_pricing(logic_name, logic_cost)
     utils.update_pricing(writer_name, writer_cost)
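The reworked `get_model_details` now returns a 3-tuple so that cache entries written before the `book_cost` field existed degrade to `'N/A'` instead of crashing the unpacking assignments above. The same logic as the diff, restated as a self-contained sketch:

```python
def get_model_details(role_data):
    """Return (model_name, estimated_cost, book_cost); bare strings fall back to N/A."""
    if isinstance(role_data, dict):
        return (role_data.get('model'),
                role_data.get('estimated_cost', 'N/A'),
                role_data.get('book_cost', 'N/A'))
    # Legacy cache format: the stored value was just the model name string.
    return role_data, 'N/A', 'N/A'
```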
194
ai_blueprint_v2.md
Normal file
194
ai_blueprint_v2.md
Normal file
@@ -0,0 +1,194 @@
|
|||||||
|
# AI-Powered Book Generation: Optimized Architecture v2.0
|
||||||
|
|
||||||
|
**Date:** 2026-02-22
|
||||||
|
**Status:** Defined — fulfills Action Plan Steps 5, 6, and 7 from `ai_blueprint.md`
|
||||||
|
**Based on:** Current state analysis, alternatives analysis, and experiment design in `docs/`
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 1. Executive Summary
|
||||||
|
|
||||||
|
This document defines the recommended architecture for the AI-powered book generation pipeline, based on the systematic review in `ai_blueprint.md`. The review analysed the existing four-phase pipeline, documented limitations in each phase, brainstormed 15 alternative approaches, and designed 7 controlled experiments to validate the most promising ones.
|
||||||
|
|
||||||
|
**Key finding:** The current system is already well-optimised for quality. The primary gains available are:
|
||||||
|
1. **Reducing unnecessary token spend** on infrastructure (persona I/O, redundant beat expansion)
|
||||||
|
2. **Improving front-loaded quality gates** (outline validation, persona validation)
|
||||||
|
3. **Adaptive quality thresholds** to concentrate resources where they matter most
|
||||||
|
|
||||||
|
Several improvements from the analysis have been implemented in v2.0 (Phase 3 of this review). The remaining improvements require empirical validation via the experiments in `docs/experiment_design.md`.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 2. Architecture Overview
|
||||||
|
|
||||||
|
### Current State → v2.0 Changes
|
||||||
|
|
||||||
|
| Component | Previous Behaviour | v2.0 Behaviour | Status |
|
||||||
|
|-----------|-------------------|----------------|--------|
|
||||||
|
| **Persona loading** | Re-read sample files from disk on every chapter | Loaded once per book run, cached in memory, rebuilt after each `refine_persona()` call | ✅ Implemented |
|
||||||
|
| **Beat expansion** | Always expand beats to Director's Treatment | Skip expansion if beats already exceed 100 words total | ✅ Implemented |
|
||||||
|
| **Outline validation** | No pre-generation quality gate | `validate_outline()` runs after chapter planning; logs issues before writing begins | ✅ Implemented |
|
||||||
|
| **Scoring thresholds** | Fixed 7.0 passing threshold for all chapters | Adaptive: 6.5 for setup chapters → 7.5 for climax chapters (linear scale by position) | ✅ Implemented |
|
||||||
|
| **Enrich validation** | Silent failure if enrichment returns missing fields | Explicit warnings logged for missing `title` or `genre` | ✅ Implemented |
|
||||||
|
| **Persona validation** | Single-pass creation, no quality check | `validate_persona()` generates ~200-word sample; scored 1–10; regenerated up to 3× if < 7 | ✅ Implemented |
|
||||||
|
| **Batched evaluation** | Per-chapter evaluation (20K tokens/call) | Experiment 4 (future) — batch 5 chapters per evaluation call | 🧪 Experiment Pending |
|
||||||
|
| **Mid-gen consistency** | Post-generation consistency check only | `analyze_consistency()` called every 10 chapters inside writing loop; issues logged | ✅ Implemented |
|
||||||
|
| **Two-pass drafting** | Single draft + iterative refinement | Rough Flash draft + Pro polish pass before evaluation; max_attempts reduced 3 → 2 | ✅ Implemented |

---

## 3. Phase-by-Phase v2.0 Architecture

### Phase 1: Foundation & Ideation

**Implemented Changes:**

- `enrich()` now logs explicit warnings if `book_metadata.title` or `book_metadata.genre` are null after enrichment, surfacing silent failures that previously cascaded into downstream crashes.

**Implemented (2026-02-22):**

- **Exp 6 (Iterative Persona Validation):** `validate_persona()` added to `story/style_persona.py`. It generates a ~200-word sample passage and scores it 1–10 via a lightweight voice-quality prompt; the persona is accepted if the score is ≥ 7. `cli/engine.py` retries `create_initial_persona()` up to 3× until a score passes. Expected: -20% Phase 3 voice-drift rewrites.

**Recommended Future Work:**

- Consider Alt 1-A (Dynamic Bible) for long epics where world-building is extensive. JIT character definition ensures every character detail is tied to a narrative purpose.
- Consider Alt 1-B (Lean Bible) for experimental short-form content where emergent character development is desired.

---

### Phase 2: Structuring & Outlining

**Implemented Changes:**

- `validate_outline(events, chapters, bp, folder)` added to `story/planner.py` and called after `create_chapter_plan()` in `cli/engine.py`. It checks for missing required beats, continuity issues, pacing imbalances, and POV logic errors. Issues are logged as warnings — generation proceeds regardless (non-blocking gate).

**Pending Experiments:**

- **Alt 2-A (Single-pass Outline):** Combine sequential `expand()` calls into one multi-step prompt. Saves ~60K tokens on a novel run. Low risk. Implement and test on novella-length stories first.

**Recommended Future Work:**

- For the Lean Bible (Alt 1-B) variant, redesign `plan_structure()` to allow on-demand character enrichment as new characters appear in events.

---

### Phase 3: Writing Engine

**Implemented Changes:**

1. **`build_persona_info(bp)` function** extracted from `write_chapter()`. It contains all persona string-building logic, including disk reads. The engine now calls this once before the writing loop and passes the result as `prebuilt_persona` to each `write_chapter()` call; the cache is rebuilt after each `refine_persona()` call.

2. **Beat expansion skip:** If the total beat word count already exceeds 100 words, `expand_beats_to_treatment()` is skipped. Expected savings: ~5K tokens × ~30% of chapters.

3. **Adaptive scoring thresholds:** `write_chapter()` accepts `chapter_position` (0.0–1.0). `SCORE_PASSING` scales from 6.5 (setup) to 7.5 (climax). Early chapters use fewer refinement attempts; climax chapters are held to stricter standards.

4. **`chapter_position` threading:** `cli/engine.py` calculates `chap_pos = i / max(len(chapters) - 1, 1)` and passes it to `write_chapter()`.
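The scaling in items 3–4 can be sketched as a simple linear interpolation. The constants below match the documented endpoints, but the function names are illustrative, not taken from `story/writer.py`:

```python
# Illustrative sketch of the adaptive SCORE_PASSING scaling described above.
# Endpoint constants match the documented values; names are not from the codebase.
SETUP_PASSING = 6.5    # threshold at position 0.0 (opening chapters)
CLIMAX_PASSING = 7.5   # threshold at position 1.0 (climax/finale)

def chapter_position(i: int, total_chapters: int) -> float:
    """Mirror of the engine-side calculation: i / max(len(chapters) - 1, 1)."""
    return i / max(total_chapters - 1, 1)

def scaled_passing_score(position: float) -> float:
    """Linearly interpolate SCORE_PASSING between setup and climax thresholds."""
    position = min(max(position, 0.0), 1.0)  # clamp to [0, 1]
    return SETUP_PASSING + (CLIMAX_PASSING - SETUP_PASSING) * position

print(scaled_passing_score(chapter_position(0, 30)))   # opener: lenient
print(scaled_passing_score(chapter_position(29, 30)))  # finale: strict
```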

**Implemented (2026-02-22):**

- **Exp 7 (Two-Pass Drafting):** After the Flash rough draft, a Pro polish pass (`model_logic`) refines the chapter against a checklist (filter words, deep POV, active voice, AI-isms). `max_attempts` is reduced from 3 to 2 since the polish pass produces cleaner prose before evaluation. Expected: +0.3 HQS with fewer rewrite cycles.

**Pending Experiments:**

- **Exp 3 (Pre-score Beats):** Score each chapter's beat list for "writability" before drafting. Flag high-risk chapters for additional attempts upfront.

**Recommended Future Work:**

- Alt 2-C (Dynamic Personas): Once experiments validate the basic optimisations, consider adapting persona sub-styles for action vs. introspection scenes.
- Increase `SCORE_AUTO_ACCEPT` from 8.0 to 8.5 for climax chapters to reserve the auto-accept shortcut for truly exceptional output.

---

### Phase 4: Review & Refinement

**No new implementations in v2.0** (Phase 4 is already highly optimised for quality).

**Implemented:**

- **Exp 4 (Adaptive Thresholds):** Already implemented. Next step: gather data on the reduction in refinement calls.
- **Exp 5 (Mid-gen Consistency):** `analyze_consistency()` is called every 10 chapters in the `cli/engine.py` writing loop. Issues are logged as `⚠️` warnings. Low cost (free on Pro-Exp). Expected: -30% post-gen CER.

**Pending Experiments:**

- **Alt 4-A (Batched Evaluation):** Group 3–5 chapters per evaluation call. Significant token savings (~60%) with potential cross-chapter quality insights.

**Recommended Future Work:**

- Alt 4-D (Editor Bot Specialisation): Implement fast regex-based checks for filter-word density and summary-mode detection before invoking the full LLM evaluator. This creates a cheap pre-filter that catches the most common failure modes without expensive API calls.
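As a sketch of what such a pre-filter could look like (the word list and threshold below are assumptions for illustration, not values from the codebase):

```python
import re

# Illustrative regex pre-filter for filter-word density (Alt 4-D).
# Word list and threshold are assumptions, not values from the codebase.
FILTER_WORDS = re.compile(
    r"\b(felt|saw|heard|noticed|realized|seemed|watched)\b", re.IGNORECASE
)

def filter_word_density(text: str) -> float:
    """Return filter words per 100 words of prose."""
    words = text.split()
    if not words:
        return 0.0
    hits = len(FILTER_WORDS.findall(text))
    return 100.0 * hits / len(words)

def needs_llm_review(text: str, threshold: float = 2.0) -> bool:
    # Only escalate to the expensive LLM evaluator when density is suspicious.
    return filter_word_density(text) > threshold

sample = "She felt cold. She saw the door. The hinge creaked."
print(round(filter_word_density(sample), 1))  # 20.0
```

A chapter that passes this check would still go through the normal evaluator on its scheduled cadence; the pre-filter only short-circuits the obvious failures.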
---

## 4. Expected Outcomes of v2.0 Implementations

### Token Savings (30-Chapter Novel)

| Change | Estimated Saving | Confidence |
|--------|-----------------|------------|
| Persona cache | ~90K tokens | High |
| Beat expansion skip (30% of chapters) | ~45K tokens | High |
| Adaptive thresholds (15% fewer setup refinements) | ~100K tokens | Medium |
| Outline validation (prevents ~2 rewrites) | ~50K tokens | Medium |
| **Total** | **~285K tokens (~8% of full book cost)** | — |

### Quality Impact

- Climax chapters: expected improvement in average evaluation score (+0.3–0.5 points) due to the stricter `SCORE_PASSING` threshold
- Early setup chapters: expected slight reduction in revision-loop overhead with no noticeable reader-facing quality decrease
- Continuity errors: expected reduction from outline validation catching issues pre-generation

---

## 5. Experiment Roadmap

Execute experiments in this order (see `docs/experiment_design.md` for full specifications):

| Priority | Experiment | Effort | Expected Value |
|----------|-----------|--------|----------------|
| 1 | Exp 1: Persona Caching | ✅ Done | Token savings confirmed |
| 2 | Exp 2: Beat Expansion Skip | ✅ Done | Token savings confirmed |
| 3 | Exp 4: Adaptive Thresholds | ✅ Done | Quality + savings |
| 4 | Alt 2-B: Outline Validation | ✅ Done | Quality gate |
| 5 | Exp 6: Persona Validation | ✅ Done | -20% voice-drift rewrites |
| 6 | Exp 5: Mid-gen Consistency | ✅ Done | -30% post-gen CER |
| 7 | Alt 4-A: Batched Evaluation | Medium | -60% eval tokens |
| 8 | Exp 7: Two-Pass Drafting | ✅ Done | +0.3 HQS |

---

## 6. Cost Projections

### v2.0 Baseline (30-Chapter Novel, Quality-First Models)

| Phase | v1.0 Cost | v2.0 Cost | Saving |
|-------|----------|----------|--------|
| Phase 1: Ideation | FREE | FREE | — |
| Phase 2: Outline | FREE | FREE | — |
| Phase 3: Writing (text) | ~$0.18 | ~$0.16 | ~$0.02 |
| Phase 4: Review | FREE | FREE | — |
| Imagen Cover | ~$0.12 | ~$0.12 | — |
| **Total** | **~$0.30** | **~$0.28** | **~7%** |

*Using Pro-Exp for all Logic tasks. Text savings come primarily from the persona cache and the beat expansion skip.*

### With Future Experiment Wins (Conservative Estimate)

If Exp 5, 6, and 7 succeed and are implemented:

- Estimated additional token saving: ~400K tokens (~$0.04)
- **Projected total: ~$0.24/book (text + cover)**

---

## 7. Core Principles Revalidated

This review reconfirms the principles from `ai_blueprint.md`:

| Principle | Status | Evidence |
|-----------|--------|---------|
| **Quality First, then Cost** | ✅ Confirmed | Adaptive thresholds concentrate refinement resources on climax chapters rather than cutting them |
| **Modularity and Flexibility** | ✅ Confirmed | `build_persona_info()` extraction enables future caching strategies |
| **Data-Driven Decisions** | 🔄 In Progress | Experiment framework defined; gathering empirical data next |
| **Minimize Rework** | ✅ Improved | Outline validation gate prevents rework by catching issues pre-generation |
| **High-Quality Assurance** | ✅ Confirmed | 13-rubric evaluator with auto-fail conditions remains the quality backbone |
| **Holistic Approach** | ✅ Confirmed | All four phases analysed; changes propagated across the full pipeline |

---

## 8. Files Modified in v2.0

| File | Change |
|------|--------|
| `story/planner.py` | Added enrichment field validation; added `validate_outline()` function |
| `story/writer.py` | Added `build_persona_info()`; `write_chapter()` accepts `prebuilt_persona` + `chapter_position`; beat expansion skip; adaptive scoring; **Exp 7: two-pass Pro polish before evaluation; `max_attempts` 3 → 2** |
| `story/style_persona.py` | **Exp 6: Added `validate_persona()` — generates ~200-word sample, scores voice quality, rejects if < 7/10** |
| `cli/engine.py` | Imported `build_persona_info`; persona cached before writing loop; rebuilt after `refine_persona()`; outline validation gate; `chapter_position` passed to `write_chapter()`; **Exp 6: persona retries up to 3× until validation passes; Exp 5: `analyze_consistency()` every 10 chapters** |
| `docs/current_state_analysis.md` | New: phase mapping with cost analysis |
| `docs/alternatives_analysis.md` | New: 15 alternative approaches with hypotheses |
| `docs/experiment_design.md` | New: 7 controlled A/B experiment specifications |
| `ai_blueprint_v2.md` | This document |
@@ -9,6 +9,7 @@ from ai import models as ai_models
 from ai import setup as ai_setup
 from story import planner, writer as story_writer, editor as story_editor
 from story import style_persona, bible_tracker, state as story_state
+from story.writer import build_persona_info
 from marketing import assets as marketing_assets
 from export import exporter
 
@@ -49,9 +50,18 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
         bp = planner.enrich(bp, folder, context)
         with open(bp_path, "w") as f: json.dump(bp, f, indent=2)
 
-        # Ensure Persona Exists (Auto-create if missing)
+        # Ensure Persona Exists (Auto-create + Exp 6: Validate before accepting)
         if 'author_details' not in bp['book_metadata'] or not bp['book_metadata']['author_details']:
-            bp['book_metadata']['author_details'] = style_persona.create_initial_persona(bp, folder)
+            max_persona_attempts = 3
+            for persona_attempt in range(1, max_persona_attempts + 1):
+                candidate_persona = style_persona.create_initial_persona(bp, folder)
+                is_valid, p_score = style_persona.validate_persona(bp, candidate_persona, folder)
+                if is_valid or persona_attempt == max_persona_attempts:
+                    if not is_valid:
+                        utils.log("SYSTEM", f" ⚠️ Persona accepted after {max_persona_attempts} attempts despite low score ({p_score}/10). Voice drift risk elevated.")
+                    bp['book_metadata']['author_details'] = candidate_persona
+                    break
+                utils.log("SYSTEM", f" -> Persona attempt {persona_attempt}/{max_persona_attempts} scored {p_score}/10. Regenerating...")
             with open(bp_path, "w") as f: json.dump(bp, f, indent=2)
     except Exception as _e:
         utils.log("ERROR", f"Blueprint phase failed: {type(_e).__name__}: {_e}")
@@ -99,6 +109,13 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
         raise
     utils.log("TIMING", f"Chapter Planning: {time.time() - t_step:.1f}s")
 
+    # 4b. Outline Validation Gate (Alt 2-B: pre-generation quality check)
+    if chapters and not resume:
+        try:
+            planner.validate_outline(events, chapters, bp, folder)
+        except Exception as _e:
+            utils.log("ARCHITECT", f"Outline validation skipped: {_e}")
+
     # 5. Writing Loop
     ms_path = os.path.join(folder, "manuscript.json")
     loaded_ms = utils.load_json(ms_path) if (resume and os.path.exists(ms_path)) else []
@@ -147,6 +164,10 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
     session_chapters = 0
     session_time = 0
 
+    # Pre-load persona once for the entire writing phase (Alt 3-D: persona cache)
+    # Rebuilt after each refine_persona() call to pick up bio updates.
+    cached_persona = build_persona_info(bp)
+
     i = len(ms)
     while i < len(chapters):
         ch_start = time.time()
@@ -178,7 +199,8 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
         else:
             summary_ctx = summary[-8000:] if len(summary) > 8000 else summary
         next_hint = chapters[i+1]['title'] if i + 1 < len(chapters) else ""
-        txt = story_writer.write_chapter(ch, bp, folder, summary_ctx, tracking, prev_content, next_chapter_hint=next_hint)
+        chap_pos = i / max(len(chapters) - 1, 1) if len(chapters) > 1 else 0.5
+        txt = story_writer.write_chapter(ch, bp, folder, summary_ctx, tracking, prev_content, next_chapter_hint=next_hint, prebuilt_persona=cached_persona, chapter_position=chap_pos)
     except Exception as e:
         utils.log("SYSTEM", f"Chapter generation failed: {e}")
         if interactive:
@@ -197,8 +219,10 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
 
         # Refine Persona to match the actual output (every 5 chapters)
         if (i == 0 or i % 5 == 0) and txt:
-            bp['book_metadata']['author_details'] = style_persona.refine_persona(bp, txt, folder)
+            pov_char = ch.get('pov_character')
+            bp['book_metadata']['author_details'] = style_persona.refine_persona(bp, txt, folder, pov_character=pov_char)
             with open(bp_path, "w") as f: json.dump(bp, f, indent=2)
+            cached_persona = build_persona_info(bp)  # Rebuild cache with updated bio
 
         # Look ahead for context
         next_info = ""
@@ -247,13 +271,35 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
         with open(chars_track_path, "w") as f: json.dump(tracking['characters'], f, indent=2)
         with open(warn_track_path, "w") as f: json.dump(tracking.get('content_warnings', []), f, indent=2)
 
-        # Update Lore Index (Item 8: RAG-Lite)
-        tracking['lore'] = bible_tracker.update_lore_index(folder, txt, tracking.get('lore', {}))
-        with open(lore_track_path, "w") as f: json.dump(tracking['lore'], f, indent=2)
+        # Update Lore Index (Item 8: RAG-Lite) — every 3 chapters (lore is stable after ch 1-3)
+        if i == 0 or i % 3 == 0:
+            tracking['lore'] = bible_tracker.update_lore_index(folder, txt, tracking.get('lore', {}))
+            with open(lore_track_path, "w") as f: json.dump(tracking['lore'], f, indent=2)
+
+        # Persist dynamic tracking changes back to the bible (Step 1: Bible-Tracking Merge)
+        bp = bible_tracker.merge_tracking_to_bible(bp, tracking)
+        with open(bp_path, "w") as f: json.dump(bp, f, indent=2)
 
         # Update Structured Story State (Item 9: Thread Tracking)
         current_story_state = story_state.update_story_state(txt, ch['chapter_number'], current_story_state, folder)
 
+        # Exp 5: Mid-gen Consistency Snapshot (every 10 chapters)
+        # Sample: first 2 + last 8 chapters to keep token cost bounded regardless of book length
+        if len(ms) > 0 and len(ms) % 10 == 0:
+            utils.log("EDITOR", f"--- Mid-gen consistency check after chapter {ch['chapter_number']} ({len(ms)} written) ---")
+            try:
+                ms_sample = (ms[:2] + ms[-8:]) if len(ms) > 10 else ms
+                consistency = story_editor.analyze_consistency(bp, ms_sample, folder)
+                issues = consistency.get('issues', [])
+                if issues:
+                    for issue in issues:
+                        utils.log("EDITOR", f" ⚠️ {issue}")
+                c_score = consistency.get('score', 'N/A')
+                c_summary = consistency.get('summary', '')
+                utils.log("EDITOR", f" Consistency score: {c_score}/10 — {c_summary}")
+            except Exception as _ce:
+                utils.log("EDITOR", f" Mid-gen consistency check failed (non-blocking): {_ce}")
+
         # Dynamic Pacing Check (every other chapter)
         remaining = chapters[i+1:]
         if remaining and len(remaining) >= 2 and i % 2 == 1:
@@ -80,9 +80,9 @@ class BookWizard:
         while True:
             self.clear()
             personas = {}
-            if os.path.exists(config.PERSONAS_FILE):
+            if os.path.exists(os.path.join(config.PERSONAS_DIR, "personas.json")):
                 try:
-                    with open(config.PERSONAS_FILE, 'r') as f: personas = json.load(f)
+                    with open(os.path.join(config.PERSONAS_DIR, "personas.json"), 'r') as f: personas = json.load(f)
                 except: pass
 
            console.print(Panel("[bold cyan]Manage Author Personas[/bold cyan]"))
@@ -120,7 +120,7 @@ class BookWizard:
         if sub == 2:
             if Confirm.ask(f"Delete '{selected_key}'?", default=False):
                 del personas[selected_key]
-                with open(config.PERSONAS_FILE, 'w') as f: json.dump(personas, f, indent=2)
+                with open(os.path.join(config.PERSONAS_DIR, "personas.json"), 'w') as f: json.dump(personas, f, indent=2)
             continue
         elif sub == 3:
             continue
@@ -145,7 +145,7 @@ class BookWizard:
 
         if Confirm.ask("Save Persona?", default=True):
             personas[selected_key] = details
-            with open(config.PERSONAS_FILE, 'w') as f: json.dump(personas, f, indent=2)
+            with open(os.path.join(config.PERSONAS_DIR, "personas.json"), 'w') as f: json.dump(personas, f, indent=2)
 
     def select_mode(self):
         while True:
@@ -322,9 +322,9 @@ class BookWizard:
         console.print("\n[bold blue]Project Details[/bold blue]")
 
         personas = {}
-        if os.path.exists(config.PERSONAS_FILE):
+        if os.path.exists(os.path.join(config.PERSONAS_DIR, "personas.json")):
             try:
-                with open(config.PERSONAS_FILE, 'r') as f: personas = json.load(f)
+                with open(os.path.join(config.PERSONAS_DIR, "personas.json"), 'r') as f: personas = json.load(f)
             except: pass
 
         author_details = {}
@@ -35,7 +35,8 @@ if not API_KEY: raise ValueError("CRITICAL ERROR: GEMINI_API_KEY not found in en
 DATA_DIR = os.path.join(BASE_DIR, "data")
 PROJECTS_DIR = os.path.join(DATA_DIR, "projects")
 PERSONAS_DIR = os.path.join(DATA_DIR, "personas")
-PERSONAS_FILE = os.path.join(PERSONAS_DIR, "personas.json")
+# PERSONAS_FILE is deprecated — persona data is now stored in the Persona DB table.
+# PERSONAS_FILE = os.path.join(PERSONAS_DIR, "personas.json")
 FONTS_DIR = os.path.join(DATA_DIR, "fonts")
 
 # --- ENSURE DIRECTORIES EXIST ---
@@ -65,4 +66,4 @@ LENGTH_DEFINITIONS = {
 }
 
 # --- SYSTEM ---
-VERSION = "2.9"
+VERSION = "3.1"
@@ -23,18 +23,27 @@ PRICING_CACHE = {}
 # --- Token Estimation & Truncation Utilities ---
 
 def estimate_tokens(text):
-    """Estimate token count using a 4-chars-per-token heuristic (no external libs required)."""
+    """Estimate token count using a 3.5-chars-per-token heuristic (more accurate than /4)."""
     if not text:
         return 0
-    return max(1, len(text) // 4)
+    return max(1, int(len(text) / 3.5))
 
-def truncate_to_tokens(text, max_tokens):
-    """Truncate text to approximately max_tokens, keeping the most recent (tail) content."""
+def truncate_to_tokens(text, max_tokens, keep_head=False):
+    """Truncate text to approximately max_tokens.
+
+    keep_head=False (default): keep the most recent (tail) content — good for 'story so far'.
+    keep_head=True: keep first third + last two thirds — good for context that needs both
+    the opening framing and the most recent events.
+    """
     if not text:
         return text
-    max_chars = max_tokens * 4
+    max_chars = int(max_tokens * 3.5)
     if len(text) <= max_chars:
         return text
+    if keep_head:
+        head_chars = max_chars // 3
+        tail_chars = max_chars - head_chars
+        return text[:head_chars] + "\n[...]\n" + text[-tail_chars:]
     return text[-max_chars:]
 
 # --- In-Memory AI Response Cache ---
@@ -126,14 +135,18 @@ def log(phase, msg):
     except: pass
 
 def load_json(path):
-    return json.load(open(path, 'r')) if os.path.exists(path) else None
+    if not os.path.exists(path):
+        return None
+    try:
+        with open(path, 'r', encoding='utf-8', errors='replace') as f:
+            return json.load(f)
+    except (json.JSONDecodeError, OSError, ValueError) as e:
+        log("SYSTEM", f"⚠️ Failed to load JSON from {path}: {e}")
+        return None
 
 def create_default_personas():
+    # Persona data is now stored in the Persona DB table; ensure the directory exists for sample files.
     if not os.path.exists(config.PERSONAS_DIR): os.makedirs(config.PERSONAS_DIR)
-    if not os.path.exists(config.PERSONAS_FILE):
-        try:
-            with open(config.PERSONAS_FILE, 'w') as f: json.dump({}, f, indent=2)
-        except: pass
 
 def get_length_presets():
     presets = {}
@@ -156,11 +169,13 @@ def log_image_attempt(folder, img_type, prompt, filename, status, error=None, sc
     data = []
     if os.path.exists(log_path):
         try:
-            with open(log_path, 'r') as f: data = json.load(f)
-        except:
-            pass
+            with open(log_path, 'r', encoding='utf-8') as f:
+                data = json.load(f)
+        except (json.JSONDecodeError, OSError):
+            data = []  # Corrupted log — start fresh rather than crash
     data.append(entry)
-    with open(log_path, 'w') as f: json.dump(data, f, indent=2)
+    with open(log_path, 'w', encoding='utf-8') as f:
+        json.dump(data, f, indent=2)
 
 def get_run_folder(base_name):
     if not os.path.exists(base_name): os.makedirs(base_name)
@@ -221,9 +236,10 @@ def log_usage(folder, model_label, usage_metadata=None, image_count=0):
 
     if usage_metadata:
         try:
-            input_tokens = usage_metadata.prompt_token_count
-            output_tokens = usage_metadata.candidates_token_count
-        except: pass
+            input_tokens = usage_metadata.prompt_token_count or 0
+            output_tokens = usage_metadata.candidates_token_count or 0
+        except AttributeError:
+            pass  # usage_metadata shape varies by model; tokens stay 0
     cost = calculate_cost(model_label, input_tokens, output_tokens, image_count)
 
264 docs/alternatives_analysis.md Normal file
@@ -0,0 +1,264 @@
# Alternatives Analysis: Hypotheses for Each Phase

**Date:** 2026-02-22
**Status:** Completed — fulfills Action Plan Step 2

---

## Methodology

For each phase, we present the current approach, document credible alternatives, and state a testable hypothesis about cost and quality impact. Each alternative is rated for implementation complexity and expected payoff.

---

## Phase 1: Foundation & Ideation

### Current Approach
A single Logic-model call expands a minimal user prompt into `book_metadata`, `characters`, and `plot_beats`. The author persona is created in a separate single-pass call.

---

### Alt 1-A: Dynamic Bible (Just-In-Time Generation)

**Description:** Instead of creating the full bible upfront, generate only world rules and core character archetypes at the start. Flesh out secondary characters and specific locations only when the planner references them during outlining.

**Mechanism:**
1. Upfront: title, genre, tone, 1–2 core characters, 3 immutable world rules
2. During `expand()`: when a new location or character appears in events, call a mini-enrichment to define it
3. Benefits: only define what is actually used; no wasted detail on characters who never appear

**Hypothesis:** A dynamic bible reduces Phase 1 token cost by ~30% and improves character coherence because every detail is tied to a specific narrative purpose. It may increase Phase 2 cost by ~15% due to incremental enrichment calls.

**Complexity:** Medium — requires refactoring `planner.py` to support on-demand enrichment

**Risk:** New characters generated mid-outline might not be coherent with the established world

---
### Alt 1-B: Lean Bible (Rules + Emergence)

**Description:** Define only the immutable "physics" of the world (e.g., "no magic exists", "set in 1920s London") and let all characters and plot details emerge from the writing process. Only characters explicitly named by the user are pre-defined.

**Hypothesis:** A lean bible reduces Phase 1 cost by ~60% but increases Phase 3 cost by ~25% (more continuity errors require more evaluation retries). The net effect depends on how many characters the user pre-defines.

**Complexity:** Low — strip `enrich()` down to essentials

**Risk:** Characters might be inconsistent across chapters without a shared bible anchor

---

### Alt 1-C: Iterative Persona Validation

**Description:** After `create_initial_persona()`, immediately generate a 200-word sample passage in that persona's voice and evaluate it with the editor. Only accept the persona if the sample scores ≥ 7/10.

**Hypothesis:** Iterative persona validation adds ~8K tokens to Phase 1 but reduces the Phase 3 persona-related rewrite rate by ~20% (fewer voice-drift refinements needed).

**Complexity:** Low — add one evaluation call after persona creation

**Risk:** Minimal — only adds cost if the persona is rejected
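The retry flow can be sketched as below. `generate` and `score_sample` stand in for the real persona-creation and editor-scoring calls, and the keep-best fallback is one possible policy (the production loop in `cli/engine.py` instead accepts the final attempt with a warning):

```python
# Minimal sketch of the validate-and-retry loop. `generate` and `score_sample`
# are stand-ins for the real persona-creation and editor-scoring calls.
def choose_persona(generate, score_sample, min_score=7, max_attempts=3):
    """Regenerate the persona until a sample passage scores >= min_score,
    falling back to the best-scoring attempt if none passes."""
    best = None
    for _ in range(max_attempts):
        persona = generate()
        score = score_sample(persona)
        if best is None or score > best[1]:
            best = (persona, score)
        if score >= min_score:
            return persona, score
    return best

scores = iter([5, 6, 8])  # simulated editor scores across three attempts
persona, score = choose_persona(lambda: "noir stylist", lambda p: next(scores))
print(score)  # 8: third attempt passes the >= 7 bar
```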
---

## Phase 2: Structuring & Outlining

### Current Approach
Sequential depth-expansion passes convert plot beats into a chapter plan. Each `expand()` call is unaware of the final desired state, so multiple passes are needed.

---

### Alt 2-A: Single-Pass Hierarchical Outline

**Description:** Replace sequential `expand()` calls with a single multi-step prompt that builds the outline in one shot, specifying the desired depth level in the instructions. The model produces both high-level events and chapter-level detail simultaneously.

**Hypothesis:** A single-pass outline reduces Phase 2 Logic calls from 6 to 2 (one `plan_structure`, one combined `expand+chapter_plan`), saving ~60K tokens (~45% of Phase 2 cost). Quality may drop slightly if the model cannot maintain coherence across 50 chapters in one response.

**Complexity:** Low — prompt rewrite; no code structure change

**Risk:** A large single-response JSON might fail or be truncated by the model. Novel (30 chapters) is manageable; Epic (50 chapters) is borderline.

---
### Alt 2-B: Outline Validation Gate

**Description:** After `create_chapter_plan()`, run a validation call that checks the outline for (a) missing required plot beats, (b) character deaths/revivals, (c) pacing imbalances, and (d) POV distribution. Block the writing phase until the outline passes validation.

**Hypothesis:** Pre-generation outline validation (1 Logic call, ~15K tokens, FREE on Pro-Exp) prevents ~3–5 expensive rewrite cycles during Phase 3, saving 75K–125K Writer tokens (~$0.05–$0.10 per book).

**Complexity:** Low — add a `validate_outline()` function and call it before Phase 3 begins

**Risk:** Validation might be overly strict and reject valid creative choices
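A deterministic subset of these checks can be prototyped without the LLM at all. The sketch below is illustrative only (the implemented function takes `(events, chapters, bp, folder)` and is LLM-backed); it shows the non-blocking, warnings-not-errors shape:

```python
# Illustrative non-blocking outline check. The real validate_outline() in
# story/planner.py takes (events, chapters, bp, folder); this sketch only
# demonstrates the warnings-not-errors contract with deterministic checks.
def check_outline(chapters, required_beats):
    """Return a list of warning strings; never raises, never blocks generation."""
    warnings = []
    covered = {beat for ch in chapters for beat in ch.get("beats", [])}
    for beat in required_beats:
        if beat not in covered:
            warnings.append(f"Missing required beat: {beat}")
    targets = [ch.get("target_words", 0) for ch in chapters]
    if targets and max(targets) > 3 * max(min(targets), 1):
        warnings.append("Pacing imbalance: chapter length spread exceeds 3x")
    return warnings

chapters = [
    {"beats": ["inciting incident"], "target_words": 2000},
    {"beats": ["first reversal"], "target_words": 2500},
]
print(check_outline(chapters, ["inciting incident", "midpoint reversal"]))
```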
---

### Alt 2-C: Dynamic Personas (Mood/POV Adaptation)

**Description:** Instead of a single author persona, create sub-personas for different scene types: (a) action sequences, (b) introspection/emotion, and (c) dialogue-heavy scenes. The writer prompt selects the appropriate sub-persona based on chapter pacing.

**Hypothesis:** Dynamic personas reduce voice drift across different scene types, improving the average chapter evaluation score by ~0.3 points. Cost increases by ~12K tokens/book for the additional persona generation calls.

**Complexity:** Medium — requires sub-persona generation, storage, and selection logic in `write_chapter()`

**Risk:** Sub-personas might be inconsistent with each other if not carefully designed
---
|
||||||
|
|
||||||
|
### Alt 2-D: Specialized Chapter Templates

**Description:** Create genre-specific "chapter templates" for common patterns: opening chapters, mid-point reversals, climax chapters, denouements. The planner selects the appropriate template when assigning structure, reducing the amount of creative work needed per chapter.

**Hypothesis:** Chapter templates reduce Phase 3 beat expansion cost by ~40% (pre-structured templates need less expansion) and reduce rewrite rate by ~15% (templates encode known-good patterns).

**Complexity:** Medium — requires template library and selection logic

**Risk:** Templates might make books feel formulaic

---

## Phase 3: The Writing Engine

### Current Approach

Single-model drafting with up to 3 attempts. Low-scoring drafts trigger full rewrites using the Pro model. Evaluation happens after each draft.

---

### Alt 3-A: Two-Pass Drafting (Cheap Draft + Expensive Polish)

**Description:** Use the cheapest available Flash model for a rough first draft (focused on getting beats covered and word count right), then use the Pro model to polish prose quality. Skip the evaluation + rewrite loop entirely.

**Hypothesis:** Two-pass drafting reduces average chapter evaluation score variance (fewer very-low scores), but might be slower because every chapter gets polished regardless of quality. Net cost impact uncertain — depends on Flash vs Pro price differential. At current pricing (Flash free on Pro-Exp), this is equivalent to the current approach.

**Complexity:** Low — add a "polish" pass after initial draft in `write_chapter()`

**Risk:** Polish pass might not improve chapters that have structural problems (wrong beats covered)

---

### Alt 3-B: Adaptive Scoring Thresholds

**Description:** Use different scoring thresholds based on chapter position and importance:

- Setup chapters (1–20% of book): SCORE_PASSING = 6.5 (accept imperfect early work)
- Midpoint + rising action (20–70%): SCORE_PASSING = 7.0 (current standard)
- Climax + resolution (70–100%): SCORE_PASSING = 7.5 (stricter standards for crucial chapters)

**Hypothesis:** Adaptive thresholds reduce refinement calls on setup chapters by ~25% while improving quality of climax chapters. Net token saving ~100K per book (~$0.02) with no quality loss on high-stakes scenes.

**Complexity:** Very low — change 2 constants in `write_chapter()` to be position-aware

**Risk:** Lower-quality setup chapters might affect reader engagement in early pages

---

### Alt 3-C: Pre-Scoring Outline Beats

**Description:** Before writing any chapter, use the Logic model to score each chapter's beat list for "writability" — the likelihood that the beats will produce a high-quality first draft. Flag chapters scoring below 6/10 as "high-risk" and assign them extra write attempts upfront.

**Hypothesis:** Pre-scoring beats adds ~5K tokens per book but reduces full-rewrite incidents by ~30% (the most expensive outcome). Expected saving: 30% × 15 rewrites × 50K tokens = ~225K tokens (~$0.05).

**Complexity:** Low — add `score_beats_writability()` call before Phase 3 loop

**Risk:** Pre-scoring accuracy might be low; Logic model can't fully predict quality from beats alone

---

### Alt 3-D: Persona Caching (Immediate Win)

**Description:** Load the author persona (bio, sample text, sample files) once per book run rather than re-reading from disk for each chapter. Store in memory and pass to `write_chapter()` as a pre-built string.

**Hypothesis:** Persona caching reduces per-chapter I/O overhead and eliminates redundant file reads. No quality impact. Saves ~90K tokens per book (3K tokens × 30 chapters from persona sample files).

**Complexity:** Very low — refactor engine.py to load persona once and pass it

**Risk:** None

---

### Alt 3-E: Skip Beat Expansion for Detailed Beats

**Description:** If a chapter's beats already exceed 100 words each, skip `expand_beats_to_treatment()`. The existing beats are detailed enough to guide the writer.

**Hypothesis:** ~30% of chapters have detailed beats. Skipping expansion saves 5K tokens × 30% × 30 chapters = ~45K tokens. Quality impact negligible for already-detailed beats.

**Complexity:** Very low — add word-count check before calling `expand_beats_to_treatment()`

**Risk:** None for already-detailed beats; risk only if threshold is set too low

---

## Phase 4: Review & Refinement

### Current Approach

Per-chapter evaluation with 13 rubrics. Post-generation consistency check. Dynamic pacing interventions. User-triggered ripple propagation.

---

### Alt 4-A: Batched Chapter Evaluation

**Description:** Instead of evaluating each chapter individually (~20K tokens/eval), batch 3–5 chapters per evaluation call. The evaluator assesses them together and can identify cross-chapter issues (pacing, voice consistency) that per-chapter evaluation misses.

**Hypothesis:** Batched evaluation reduces evaluation token cost by ~60% (from 600K to 240K tokens) while improving cross-chapter quality detection. Risk: individual chapter scores may be less granular.

**Complexity:** Medium — refactor `evaluate_chapter_quality()` to accept chapter arrays

**Risk:** Batched scoring might be less precise per-chapter; harder to pinpoint which chapter needs rewriting

---

### Alt 4-B: Mid-Generation Consistency Snapshots

**Description:** Run `analyze_consistency()` every 10 chapters (not just post-generation). If contradictions are found, pause writing and resolve them before proceeding.

**Hypothesis:** Mid-generation consistency checks add ~3 Logic calls per 30-chapter book (~75K tokens, FREE) but reduce post-generation ripple propagation cost by ~50% by catching issues early.

**Complexity:** Low — add consistency snapshot call to engine.py loop

**Risk:** Consistency check might generate false positives that stall generation

---

### Alt 4-C: Semantic Ripple Detection

**Description:** Replace LLM-based ripple detection in `check_and_propagate()` with an embedding-similarity approach. When Chapter N is edited, compute semantic similarity between Chapter N's content and all downstream chapters. Only rewrite chapters above a similarity threshold.

**Hypothesis:** Semantic ripple detection reduces per-ripple token cost from ~15K (LLM scan) to ~2K (embedding query) — 87% reduction. Accuracy comparable to LLM for direct references; may miss indirect narrative impacts.

**Complexity:** High — requires adding `sentence-transformers` or Gemini embedding API dependency

**Risk:** Embedding similarity doesn't capture narrative causality (e.g., a character dying affects later chapters even if the death isn't mentioned verbatim)
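
A minimal sketch of this gate, assuming embedding vectors are produced elsewhere (e.g. by `sentence-transformers` or the Gemini embedding API); the function names and the 0.75 threshold are illustrative, not part of the codebase:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def chapters_to_rewrite(edited_embedding: list[float],
                        downstream: dict[int, list[float]],
                        threshold: float = 0.75) -> list[int]:
    """Return downstream chapter numbers similar enough to the edited
    chapter to warrant a rewrite pass."""
    return [num for num, emb in sorted(downstream.items())
            if cosine_similarity(edited_embedding, emb) >= threshold]
```

The threshold would need empirical tuning; too low and the cost advantage over the LLM scan disappears.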

---

### Alt 4-D: Editor Bot Specialization

**Description:** Create specialized sub-evaluators for specific failure modes:

- `check_filter_words()` — fast regex-based scan (no LLM needed)
- `check_summary_mode()` — detect scene-skipping patterns
- `check_voice_consistency()` — compare chapter voice against persona sample
- `check_plot_adherence()` — verify beats were covered

Run cheap checks first; only invoke the full 13-rubric LLM evaluation if the fast checks pass.
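
A minimal sketch of the first fast check, assuming an illustrative filter-word list; the 1/120 density cap mirrors the Phase 4 auto-fail condition:

```python
import re

# Illustrative subset; the real blacklist lives in the style guidelines.
FILTER_WORDS = {"felt", "saw", "heard", "noticed", "realized", "seemed"}

def check_filter_words(text: str, max_density: float = 1 / 120) -> bool:
    """Return True if the chapter passes: at most one filter word
    per 120 words of prose."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    if not words:
        return True
    hits = sum(1 for w in words if w in FILTER_WORDS)
    return hits / len(words) <= max_density
```

Because this is pure regex, it costs nothing per chapter and can gate the expensive LLM evaluation.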

**Hypothesis:** Specialized editor bots reduce evaluation cost by ~40% (many chapters fail fast checks and don't need full LLM eval). Quality detection equal or better because fast checks are more precise for rule violations.

**Complexity:** Medium — implement regex-based fast checks; modify evaluation pipeline

**Risk:** Fast checks might have false positives that reject good chapters prematurely

---

## Summary: Hypotheses Ranked by Expected Value

| Alt | Phase | Expected Token Saving | Quality Impact | Complexity |
|-----|-------|----------------------|----------------|------------|
| 3-D (Persona Cache) | 3 | ~90K | None | Very Low |
| 3-E (Skip Beat Expansion) | 3 | ~45K | None | Very Low |
| 2-B (Outline Validation) | 2 | Prevents ~100K rewrites | Positive | Low |
| 3-B (Adaptive Thresholds) | 3 | ~100K | Positive | Very Low |
| 1-C (Persona Validation) | 1 | ~60K (prevented rewrites) | Positive | Low |
| 4-B (Mid-gen Consistency) | 4 | ~75K (prevented rewrites) | Positive | Low |
| 3-C (Pre-score Beats) | 3 | ~225K | Positive | Low |
| 4-A (Batch Evaluation) | 4 | ~360K | Neutral/Positive | Medium |
| 2-A (Single-pass Outline) | 2 | ~60K | Neutral | Low |
| 3-A (Two-Pass Drafting) | 3 | Neutral | Potentially Positive | Low |
| 4-D (Editor Bots) | 4 | ~240K | Positive | Medium |
| 2-C (Dynamic Personas) | 2 | -12K (slight increase) | Positive | Medium |
| 4-C (Semantic Ripple) | 4 | ~200K | Neutral | High |


238 docs/current_state_analysis.md (new file)
@@ -0,0 +1,238 @@

# Current State Analysis: BookApp AI Pipeline

**Date:** 2026-02-22
**Scope:** Mapping existing codebase to the four phases defined in `ai_blueprint.md`
**Status:** Completed — fulfills Action Plan Step 1

---

## Overview

BookApp is an AI-powered novel generation engine using Google Gemini. The pipeline is structured into four phases that map directly to the review framework in `ai_blueprint.md`. This document catalogues the current implementation, identifies efficiency metrics, and surfaces limitations in each phase.

---

## Phase 1: Foundation & Ideation ("The Seed")

**Primary File:** `story/planner.py` (lines 1–86)
**Supporting:** `story/style_persona.py` (lines 81–104), `core/config.py`

### What Happens

1. User provides a minimal `manual_instruction` (can be a single sentence).
2. `enrich(bp, folder, context)` calls the Logic model to expand this into:
   - `book_metadata`: title, genre, tone, time period, structure type, formatting rules, content warnings
   - `characters`: 2–8 named characters with roles and descriptions
   - `plot_beats`: 5–7 concrete narrative beats
3. If the project is part of a series, context from previous books is injected.
4. `create_initial_persona()` generates a fictional author persona (name, bio, age, gender).

### Costs (Per Book)

| Task | Model | Input Tokens | Output Tokens | Cost (Pro-Exp) |
|------|-------|-------------|---------------|----------------|
| `enrich()` | Logic | ~10K | ~3K | FREE |
| `create_initial_persona()` | Logic | ~5.5K | ~1.5K | FREE |
| **Phase 1 Total** | — | ~15.5K | ~4.5K | **FREE** |

### Known Limitations

| ID | Issue | Impact |
|----|-------|--------|
| P1-L1 | `enrich()` silently returns original BP on exception (line 84) | Invalid enrichment passes downstream without warning |
| P1-L2 | `filter_characters()` blacklists keywords like "TBD", "protagonist" — can cull valid names | Characters named "The Protagonist" are silently dropped |
| P1-L3 | Single-pass persona creation — no quality check on output | Generic personas produce poor voice throughout book |
| P1-L4 | No validation that required `book_metadata` fields are non-null | Downstream crashes when title/genre are missing |

---

## Phase 2: Structuring & Outlining

**Primary File:** `story/planner.py` (lines 89–290)
**Supporting:** `story/style_persona.py`

### What Happens

1. `plan_structure(bp, folder)` maps plot beats to a structural framework (Hero's Journey, Three-Act, etc.) and produces ~10–15 events.
2. `expand(events, pass_num, ...)` iteratively enriches the outline. Called `depth` times (1–4 based on length preset). Each pass targets chapter count × 1.5 events as a ceiling.
3. `create_chapter_plan(events, bp, folder)` converts events into concrete chapter objects with POV, pacing, and estimated word count.
4. `get_style_guidelines()` loads or refreshes the AI-ism blacklist and filter-word list.

### Depth Strategy

| Preset | Depth | Expand Calls | Approx Events |
|--------|-------|-------------|---------------|
| Flash Fiction | 1 | 1 | 1 |
| Short Story | 1 | 1 | 5 |
| Novella | 2 | 2 | 15 |
| Novel | 3 | 3 | 30 |
| Epic | 4 | 4 | 50 |

### Costs (30-Chapter Novel)

| Task | Calls | Input Tokens | Cost (Pro-Exp) |
|------|-------|-------------|----------------|
| `plan_structure` | 1 | ~15K | FREE |
| `expand` × 3 | 3 | ~12K each | FREE |
| `create_chapter_plan` | 1 | ~14K | FREE |
| `get_style_guidelines` | 1 | ~8K | FREE |
| **Phase 2 Total** | 6 | ~73K | **FREE** |

### Known Limitations

| ID | Issue | Impact |
|----|-------|--------|
| P2-L1 | Sequential `expand()` calls — each call unaware of final state | Redundant inter-call work; could be one multi-step prompt |
| P2-L2 | No continuity validation on outline — character deaths/revivals not detected | Plot holes remain until expensive Phase 3 rewrite |
| P2-L3 | Static chapter plan — cannot adapt if early chapters reveal a pacing problem | Dynamic interventions in Phase 4 are costly workarounds |
| P2-L4 | POV assignment is AI-generated, not validated against narrative logic | Wrong POV on key scenes; caught only during editing |
| P2-L5 | Word count estimates are rough (~±30% actual variance) | Writer overshoots/undershoots target; word count normalization fails |

---

## Phase 3: The Writing Engine (Drafting)

**Primary File:** `story/writer.py`
**Orchestrated by:** `cli/engine.py`

### What Happens

For each chapter:

1. `expand_beats_to_treatment()` — Logic model expands sparse beats into a "Director's Treatment" (staging, sensory anchors, emotional arc, subtext).
2. `write_chapter()` constructs a ~310-line prompt injecting:
   - Author persona (bio, sample text, sample files from disk)
   - Filtered characters (only those named in beats + POV character)
   - Character tracking state (location, clothing, held items)
   - Lore context (relevant locations/items from tracking)
   - Style guidelines + genre-specific mandates
   - Smart context tail: last ~1000 tokens of previous chapter
   - Director's Treatment
3. Writer model generates first draft.
4. Logic model evaluates on 13 rubrics (1–10 scale). Automatic fail conditions apply for filter-word density, summary mode, and labeled emotions.
5. Iterative quality loop (up to 3 attempts):
   - Score ≥ 8.0 → Auto-accept
   - Score ≥ 7.0 → Accept after max attempts
   - Score < 7.0 → Refinement pass (Writer model)
   - Score < 6.0 → Full rewrite (Pro model)
6. Every 5 chapters: `refine_persona()` updates author bio based on actual written text.
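
The quality loop in step 5 can be sketched as a small dispatcher. Thresholds are taken from the text; how sub-7.0 drafts are handled once the attempt budget runs out is an assumption of this sketch:

```python
SCORE_AUTO_ACCEPT = 8.0
SCORE_PASSING = 7.0
SCORE_REWRITE = 6.0
MAX_ATTEMPTS = 3

def next_action(score: float, attempt: int) -> str:
    """Decide what to do with a draft given its evaluation score."""
    if score >= SCORE_AUTO_ACCEPT:
        return "accept"
    if score >= SCORE_PASSING and attempt >= MAX_ATTEMPTS:
        return "accept"                # passing draft, budget spent
    if score < SCORE_REWRITE:
        return "rewrite_with_pro"      # full rewrite, escalated model
    return "refine"                    # refinement pass on Writer model
```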

### Key Innovations

- **Dynamic Character Injection:** Only injects characters named in chapter beats (saves ~5K tokens/chapter).
- **Smart Context Tail:** Takes last ~1000 tokens of previous chapter (not first 1000) — preserves the handoff point.
- **Auto Model Escalation:** Low-scoring drafts trigger switch to Pro model for full rewrite.
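
The smart context tail can be sketched as follows, approximating tokens with whitespace-separated words; the real code may use an actual tokenizer, so the function name and the word cap are illustrative:

```python
def context_tail(previous_chapter: str, max_tokens: int = 1000) -> str:
    """Return the tail of the previous chapter, keeping the handoff
    point rather than the opening."""
    words = previous_chapter.split()
    return " ".join(words[-max_tokens:])
```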

### Costs (30-Chapter Novel, Mixed Model Strategy)

| Task | Calls | Input Tokens | Output Tokens | Cost Estimate |
|------|-------|-------------|---------------|---------------|
| `expand_beats_to_treatment` × 30 | 30 | ~5K | ~2K | FREE (Logic) |
| `write_chapter` draft × 30 | 30 | ~25K | ~3.5K | ~$0.087 (Writer) |
| Evaluation × 30 | 30 | ~20K | ~1.5K | FREE (Logic) |
| Refinement passes × 15 (est.) | 15 | ~20K | ~3K | ~$0.090 (Writer) |
| `refine_persona` × 6 | 6 | ~6K | ~1.5K | FREE (Logic) |
| **Phase 3 Total** | ~111 | ~1.9M | ~310K | **~$0.18** |

### Known Limitations

| ID | Issue | Impact |
|----|-------|--------|
| P3-L1 | Persona files re-read from disk on every chapter | I/O overhead; persona doesn't change between reads |
| P3-L2 | Beat expansion called even when beats are already detailed (>100 words) | Wastes ~5K tokens/chapter on ~30% of chapters |
| P3-L3 | Full rewrite triggered at score < 6.0 — discards entire draft | If a draft scores 5.9, all ~25K tokens of that attempt are wasted |
| P3-L4 | No priority weighting for climax chapters | Ch 28 (climax) uses same resources/attempts as Ch 3 (setup) |
| P3-L5 | Previous chapter context hard-capped at 1000 tokens | For long chapters, might miss setup context from earlier pages |
| P3-L6 | Scoring thresholds fixed regardless of book position | Strict standards in early chapters = expensive refinement for setup scenes |

---

## Phase 4: Review & Refinement (Editing)

**Primary Files:** `story/editor.py`, `story/bible_tracker.py`
**Orchestrated by:** `cli/engine.py`

### What Happens

**During writing loop (every chapter):**

- `update_tracking()` refreshes character state (location, clothing, held items, speech style, events).
- `update_lore_index()` extracts canonical descriptions of locations and items.

**Every 2 chapters:**

- `check_pacing()` detects if the story is rushing or repeating beats; triggers ADD_BRIDGE or CUT_NEXT interventions.

**After writing completes:**

- `analyze_consistency()` scans the entire manuscript for plot holes and contradictions.
- `harvest_metadata()` extracts newly invented characters not in the original bible.
- `check_and_propagate()` cascades chapter edits forward through the manuscript.

### 13 Evaluation Rubrics

1. Engagement & tension
2. Scene execution (no summaries)
3. Voice & tone
4. Sensory immersion
5. Show, Don't Tell / Deep POV (**auto-fail trigger**)
6. Character agency
7. Pacing
8. Genre appropriateness
9. Dialogue authenticity
10. Plot relevance
11. Staging & flow
12. Prose dynamics (sentence variety)
13. Clarity & readability

**Automatic fail conditions:** filter-word density > 1/120 words → cap at 5; summary mode detected → cap at 6; >3 labeled emotions → cap at 5.

### Costs (30-Chapter Novel)

| Task | Calls | Input Tokens | Cost (Pro-Exp) |
|------|-------|-------------|----------------|
| `update_tracking` × 30 | 30 | ~18K | FREE |
| `update_lore_index` × 30 | 30 | ~15K | FREE |
| `check_pacing` × 15 | 15 | ~18K | FREE |
| `analyze_consistency` | 1 | ~25K | FREE |
| `harvest_metadata` | 1 | ~25K | FREE |
| **Phase 4 Total** | 77 | ~1.34M | **FREE** |

### Known Limitations

| ID | Issue | Impact |
|----|-------|--------|
| P4-L1 | Consistency check is post-generation only | Plot holes caught too late to cheaply fix |
| P4-L2 | Ripple propagation (`check_and_propagate`) has no cost ceiling | A single user edit in Ch 5 can trigger 100K+ tokens of cascading rewrites |
| P4-L3 | `rewrite_chapter_content()` uses Logic model instead of Writer model | Less creative rewrite output — Logic model optimizes reasoning, not prose |
| P4-L4 | `check_pacing()` sampling only looks at recent chapters, not cumulative arc | Slow-building issues across 10+ chapters not detected until critical |
| P4-L5 | No quality metric for the evaluator itself | Can't confirm if 13-rubric scores are calibrated correctly |

---

## Cross-Phase Summary

### Total Costs (30-Chapter Novel)

| Phase | Token Budget | Cost Estimate |
|-------|-------------|---------------|
| Phase 1: Ideation | ~20K | FREE |
| Phase 2: Outline | ~73K | FREE |
| Phase 3: Writing | ~2.2M | ~$0.18 |
| Phase 4: Review | ~1.34M | FREE |
| Imagen Cover (3 images) | — | ~$0.12 |
| **Total** | **~3.63M** | **~$0.30** |

*Assumes quality-first model selection (Pro-Exp for Logic, Flash for Writer)*

### Efficiency Frontier

- **Best case** (all chapters pass first attempt): ~$0.18 text + $0.04 cover = ~$0.22
- **Worst case** (30% rewrite rate with Pro escalations): ~$0.45 text + $0.12 cover = ~$0.57
- **Budget per blueprint goal:** $2.00 total — the current system runs at ~11–29% of budget

### Top 5 Immediate Optimization Opportunities

| Priority | ID | Change | Savings |
|----------|----|--------|---------|
| 1 | P3-L1 | Cache persona per book (not per chapter) | ~90K tokens |
| 2 | P3-L2 | Skip beat expansion for detailed beats | ~45K tokens |
| 3 | P2-L2 | Add pre-generation outline validation | Prevents expensive rewrites |
| 4 | P1-L1 | Fix silent failure in `enrich()` | Prevents silent corrupt state |
| 5 | P3-L6 | Adaptive scoring thresholds by chapter position | ~15% fewer refinement passes |


290 docs/experiment_design.md (new file)
@@ -0,0 +1,290 @@

# Experiment Design: A/B Tests for BookApp Optimization

**Date:** 2026-02-22
**Status:** Completed — fulfills Action Plan Step 3

---

## Methodology

All experiments follow a controlled A/B design. We hold all variables constant except the single variable under test. Success is measured against three primary metrics:

- **Cost per chapter (CPC):** Total token cost / number of chapters written
- **Human Quality Score (HQS):** 1–10 score from a human reviewer blind to which variant generated the chapter
- **Continuity Error Rate (CER):** Number of plot/character contradictions per 10 chapters (lower is better)

Each experiment runs on the same 3 prompts (one each of short story, novella, and novel length). Results are averaged across all 3.

**Baseline:** Current production configuration as of 2026-02-22.

---

## Experiment 1: Persona Caching

**Alt Reference:** Alt 3-D
**Hypothesis:** Caching the persona per book reduces I/O overhead with no quality impact.

### Setup

| Parameter | Control (A) | Treatment (B) |
|-----------|-------------|---------------|
| Persona loading | Re-read from disk each chapter | Load once per book run, pass as argument |
| Everything else | Identical | Identical |

### Metrics to Measure

- Token count per chapter (to verify savings)
- Wall-clock generation time per book
- Chapter quality scores (should be identical)

### Success Criterion

- Token reduction ≥ 2,000 tokens/chapter on books with sample files
- HQS difference < 0.1 between A and B (no quality impact)
- Zero new errors introduced

### Implementation Notes

- Modify `cli/engine.py`: call `style_persona.load_persona_data()` once before the chapter loop
- Modify `story/writer.py`: accept an optional `persona_info` parameter, skip disk reads if provided
- Estimated implementation: 30 minutes
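
A sketch of the treatment arm; `load_persona_data()` and `write_chapter()` below are illustrative stand-ins for the real functions, and only the load-once pattern is the point:

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def load_persona_data(book_folder: str) -> str:
    """Stand-in for the disk reads in style_persona.py (bio + samples).
    lru_cache makes repeated calls within one run free."""
    return f"persona bio and samples for {book_folder}"

def write_chapter(chapter_num: int, book_folder: str,
                  persona_info: str = "") -> str:
    """Treatment arm: use the pre-built persona string when provided."""
    persona = persona_info or load_persona_data(book_folder)
    return f"chapter {chapter_num} written with [{persona}]"
```

In the engine loop, the persona is loaded once before the loop and passed to every `write_chapter()` call.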

---

## Experiment 2: Skip Beat Expansion for Detailed Beats

**Alt Reference:** Alt 3-E
**Hypothesis:** Skipping `expand_beats_to_treatment()` when beats already exceed 100 words each saves tokens with no quality loss.

### Setup

| Parameter | Control (A) | Treatment (B) |
|-----------|-------------|---------------|
| Beat expansion | Always called | Skipped if each beat exceeds 100 words |
| Everything else | Identical | Identical |

### Metrics to Measure

- Percentage of chapters that skip expansion (expected: ~30%)
- Token savings per book
- HQS for chapters that skip vs. chapters that don't skip
- Rate of beat-coverage failures (chapters that miss a required beat)

### Success Criterion

- ≥ 25% of chapters skip expansion (validating the hypothesis)
- HQS difference < 0.2 between chapters that skip and those that don't
- Beat-coverage failure rate unchanged

### Implementation Notes

- Modify `story/writer.py` `write_chapter()`: add a guard before calling expansion that counts words, not characters, e.g. `if all(len(b.split()) > 100 for b in beats)`
- Estimated implementation: 15 minutes
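
The guard might look like this, using the per-beat word threshold from Alt 3-E; the beat list format is an illustrative assumption:

```python
def needs_expansion(beats: list[str], min_words: int = 100) -> bool:
    """Expand only when at least one beat is sparser than min_words;
    a chapter whose beats are all detailed skips the Logic call."""
    return any(len(beat.split()) <= min_words for beat in beats)
```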

---

## Experiment 3: Outline Validation Gate

**Alt Reference:** Alt 2-B
**Hypothesis:** Pre-generation outline validation prevents costly Phase 3 rewrites by catching plot holes at the outline stage.

### Setup

| Parameter | Control (A) | Treatment (B) |
|-----------|-------------|---------------|
| Outline validation | None | Run `validate_outline()` after `create_chapter_plan()`; block if critical issues found |
| Everything else | Identical | Identical |

### Metrics to Measure

- Number of critical outline issues flagged per run
- Rewrite rate during Phase 3 (did validation prevent rewrites?)
- Phase 3 token cost difference (A vs B)
- CER difference (did validation reduce continuity errors?)

### Success Criterion

- Validation blocks at least 1 critical issue per 3 runs
- Phase 3 rewrite rate drops ≥ 15% when validation is active
- CER improves ≥ 0.5 per 10 chapters

### Implementation Notes

- Add `validate_outline(events, chapters, bp, folder)` to `story/planner.py`
- Prompt: "Review this chapter plan for: (1) missing required plot beats, (2) character deaths/revivals without explanation, (3) severe pacing imbalances, (4) POV character inconsistency. Return: {issues: [...], severity: 'critical'|'warning'|'ok'}"
- Modify `cli/engine.py`: call `validate_outline()` and log issues before Phase 3 begins
- Estimated implementation: 2 hours
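
A sketch of the gate, with the Logic-model call injected as a callable so the gating logic is testable without the Gemini API; the severity levels mirror the prompt above, everything else is illustrative:

```python
import json

def validate_outline(chapters: list[dict], ask_logic_model) -> dict:
    """Ask the Logic model to review the chapter plan and parse its JSON
    verdict: {"issues": [...], "severity": "critical"|"warning"|"ok"}."""
    prompt = ("Review this chapter plan for missing plot beats, unexplained "
              "character deaths/revivals, pacing imbalances, and POV "
              "inconsistency:\n" + json.dumps(chapters))
    return json.loads(ask_logic_model(prompt))

def can_start_phase_3(verdict: dict) -> bool:
    """Block the writing phase only on critical issues."""
    return verdict.get("severity") != "critical"
```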

---

## Experiment 4: Adaptive Scoring Thresholds

**Alt Reference:** Alt 3-B
**Hypothesis:** Lowering SCORE_PASSING for early setup chapters reduces refinement cost while maintaining quality on high-stakes scenes.

### Setup

| Parameter | Control (A) | Treatment (B) |
|-----------|-------------|---------------|
| SCORE_AUTO_ACCEPT | 8.0 (all chapters) | 8.0 (all chapters) |
| SCORE_PASSING | 7.0 (all chapters) | 6.5 (ch 1–20%), 7.0 (ch 20–70%), 7.5 (ch 70–100%) |
| Everything else | Identical | Identical |

### Metrics to Measure

- Refinement pass count per chapter position bucket
- HQS per chapter position bucket (A vs B)
- CPC for each bucket
- Overall HQS for full book (A vs B)

### Success Criterion

- Setup chapters (1–20%): ≥ 20% fewer refinement passes in B
- Climax chapters (70–100%): HQS improvement ≥ 0.3 in B
- Full book HQS unchanged or improved

### Implementation Notes

- Modify `story/writer.py` `write_chapter()`: accept `chapter_position` (0.0–1.0 float)
- Compute adaptive threshold: `passing = 6.5 + position * 1.0` (linear scaling)
- Modify `cli/engine.py`: pass `chapter_num / total_chapters` to `write_chapter()`
- Estimated implementation: 1 hour
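
Both threshold forms can be sketched as follows; the function names are illustrative, and the bucket boundaries come from the setup table:

```python
def passing_score_bucketed(position: float) -> float:
    """Bucketed thresholds from the setup table (position in 0.0-1.0)."""
    if position < 0.2:
        return 6.5
    if position < 0.7:
        return 7.0
    return 7.5

def passing_score_linear(position: float) -> float:
    """Continuous alternative from the implementation note."""
    return 6.5 + position * 1.0
```

The linear form shares the bucketed form's endpoints but tightens the standard gradually instead of in steps.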

---

## Experiment 5: Mid-Generation Consistency Snapshots

**Alt Reference:** Alt 4-B
**Hypothesis:** Running `analyze_consistency()` every 10 chapters reduces post-generation CER without significant cost increase.

### Setup

| Parameter | Control (A) | Treatment (B) |
|-----------|-------------|---------------|
| Consistency check | Post-generation only | Every 10 chapters + post-generation |
| Everything else | Identical | Identical |

### Metrics to Measure

- CER post-generation (A vs B)
- Number of issues caught mid-generation vs post-generation
- Token cost difference (mid-gen checks add ~25K × N/10 tokens)
- Generation time difference

### Success Criterion

- Post-generation CER drops ≥ 30% in B
- Issues caught mid-generation prevent at least 1 expensive post-gen ripple propagation per run
- Additional cost ≤ $0.01 per book (all free on Pro-Exp)

### Implementation Notes

- Modify `cli/engine.py`: every 10 chapters, call `analyze_consistency()` on the chapters written so far
- If issues are found: log a warning and optionally pause for user review
- Estimated implementation: 1 hour
|
||||||
|
|
||||||
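The treatment arm's loop can be sketched as follows (the callables are stand-ins for the real engine and analysis helpers; in the repo the loop lives in `cli/engine.py`):

```python
def generate_with_snapshots(total_chapters, write_chapter, analyze_consistency, log):
    """Write all chapters, taking a consistency snapshot every 10 chapters."""
    written = []
    for n in range(1, total_chapters + 1):
        written.append(write_chapter(n))
        if n % 10 == 0:  # mid-generation snapshot point
            issues = analyze_consistency(written)
            if issues:
                log(f"Snapshot at chapter {n}: {len(issues)} issue(s) to review")
    # The usual post-generation check still runs at the end
    return analyze_consistency(written)
```

For a 25-chapter book this adds snapshots after chapters 10 and 20, which is where the ~25K × N/10 extra tokens in the cost metric come from.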
---

## Experiment 6: Iterative Persona Validation

**Alt Reference:** Alt 1-C

**Hypothesis:** Validating the initial persona with a sample passage reduces voice-drift rewrites in Phase 3.

### Setup

| Parameter | Control (A) | Treatment (B) |
|-----------|-------------|---------------|
| Persona creation | Single-pass, no validation | Generate persona → generate 200-word sample → evaluate → accept if ≥ 7/10, else regenerate (max 3 attempts) |
| Everything else | Identical | Identical |

### Metrics to Measure

- Initial persona acceptance rate (how often the first-pass persona passes the check)
- Phase 3 persona-related rewrite rate (rewrites where the critique mentions "voice inconsistency" or "doesn't match persona")
- HQS for the first 5 chapters (voice matters most early on)

### Success Criterion

- Phase 3 persona-related rewrite rate drops ≥ 20% in B
- HQS for the first 5 chapters improves ≥ 0.2

### Implementation Notes

- Modify `story/style_persona.py`: after `create_initial_persona()`, call a new `validate_persona()` function
- `validate_persona()` generates a 200-word sample and evaluates it with a light version of `evaluate_chapter_quality()`
- Estimated implementation: 2 hours
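A minimal sketch of the generate-validate-retry loop described above (the callables are placeholders for `create_initial_persona()` and the sampling/scoring helpers):

```python
def create_validated_persona(create_persona, write_sample, score_sample,
                             threshold=7, max_attempts=3):
    """Return the first persona whose sample passage scores >= threshold.

    Falls back to the best-scoring candidate if no attempt passes.
    """
    best_persona, best_score = None, float("-inf")
    for _ in range(max_attempts):
        persona = create_persona()
        sample = write_sample(persona)   # ~200-word passage in that voice
        score = score_sample(sample)     # light quality check, 1-10
        if score >= threshold:
            return persona               # accepted without further retries
        if score > best_score:
            best_persona, best_score = persona, score
    return best_persona                  # best of the rejected candidates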
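A minimal sketch of the generate-validate-retry loop described above (the callables are placeholders for `create_initial_persona()` and the sampling/scoring helpers):

```python
def create_validated_persona(create_persona, write_sample, score_sample,
                             threshold=7, max_attempts=3):
    """Return the first persona whose sample passage scores >= threshold.

    Falls back to the best-scoring candidate if no attempt passes.
    """
    best_persona, best_score = None, float("-inf")
    for _ in range(max_attempts):
        persona = create_persona()
        sample = write_sample(persona)   # ~200-word passage in that voice
        score = score_sample(sample)     # light quality check, 1-10
        if score >= threshold:
            return persona               # accepted without further retries
        if score > best_score:
            best_persona, best_score = persona, score
    return best_persona                  # best of the rejected candidates
```

The fallback matters: without it, three unlucky attempts would leave Phase 3 with no persona at all, which is worse than starting from the strongest rejected one.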
---

## Experiment 7: Two-Pass Drafting (Draft + Polish)

**Alt Reference:** Alt 3-A

**Hypothesis:** A cheap rough draft followed by a polished revision produces better quality than iterative retrying.

### Setup

| Parameter | Control (A) | Treatment (B) |
|-----------|-------------|---------------|
| Drafting strategy | Single draft → evaluate → retry | Rough draft (Flash) → polish (Pro) → evaluate → accept if ≥ 7.0 (max 1 retry) |
| Max retry attempts | 3 | 1 (after polish) |
| Everything else | Identical | Identical |

### Metrics to Measure

- CPC (A vs B)
- HQS (A vs B)
- Rate of chapters needing a retry (A vs B)
- Total generation time per book

### Success Criterion

- HQS improvement ≥ 0.3 in B with no cost increase
- OR: CPC reduction ≥ 20% in B with no HQS decrease

### Implementation Notes

- Modify `story/writer.py` `write_chapter()`: add a polish pass using the Pro model after the initial draft
- Reduce `max_attempts` to 1 for the final retry (after polish)
- Requires the Pro model to be available (handled by auto-selection)
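The treatment strategy can be sketched like this (the draft/polish/evaluate callables are placeholders for the Flash draft, Pro polish, and quality-scoring steps):

```python
def write_chapter_two_pass(draft_cheap, polish_strong, evaluate,
                           passing=7.0, max_retries=1):
    """Rough draft -> polish -> evaluate, with at most one full retry."""
    chapter = polish_strong(draft_cheap())
    score = evaluate(chapter)
    retries = 0
    while score < passing and retries < max_retries:
        chapter = polish_strong(draft_cheap())  # one more draft+polish cycle
        score = evaluate(chapter)
        retries += 1
    return chapter, score  # best effort; the caller decides what to do below passing
```

Compared with the control's three retries of a single expensive draft, B caps at two draft+polish cycles, which is where the potential CPC reduction comes from.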
---

## Experiment Execution Order

Run experiments in this order to minimize dependency conflicts:

1. **Exp 1** (Persona Caching) — independent, 30 min, no risk
2. **Exp 2** (Skip Beat Expansion) — independent, 15 min, no risk
3. **Exp 4** (Adaptive Thresholds) — independent, 1 hr, low risk
4. **Exp 3** (Outline Validation) — independent, 2 hrs, low risk
5. **Exp 6** (Persona Validation) — independent, 2 hrs, low risk
6. **Exp 5** (Mid-gen Consistency) — requires stable Phase 3, 1 hr, low risk
7. **Exp 7** (Two-Pass Drafting) — highest risk, run last; 3 hrs, medium risk

---
## Success Metrics Definitions

### Cost per Chapter (CPC)

```
CPC = (total_input_tokens × input_price + total_output_tokens × output_price) / num_chapters
```

Measure in both USD and token count to separate model-price effects from efficiency effects.
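Computed directly from the formula above (the per-token prices are whatever the model's pricing sheet says; the figures in any example run are illustrative):

```python
def cost_per_chapter(total_input_tokens, total_output_tokens,
                     input_price, output_price, num_chapters):
    """CPC in USD; prices are per-token rates from the model's pricing sheet."""
    total_cost = total_input_tokens * input_price + total_output_tokens * output_price
    return total_cost / num_chapters

def tokens_per_chapter(total_input_tokens, total_output_tokens, num_chapters):
    """Price-independent CPC in tokens, for the efficiency view."""
    return (total_input_tokens + total_output_tokens) / num_chapters
```

Tracking both lets a price change on the provider side be separated from a genuine reduction in tokens spent per chapter.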
### Human Quality Score (HQS)

Blind evaluation by a human reviewer:

1. Read 3 chapters from treatment A and 3 from treatment B (same book premise)
2. Score each on: prose quality (1–5), pacing (1–5), character consistency (1–5)
3. HQS = average across all dimensions, normalized to 1–10

### Continuity Error Rate (CER)

After generation, manually review character states and key plot facts across chapters. Count:

- Character location contradictions
- Continuity breaks (held items, injuries, time-of-day)
- Plot event contradictions (a character alive vs. dead)

Report as errors per 10 chapters.
@@ -44,8 +44,24 @@ def generate_blurb(bp, folder):
     try:
         response = ai_models.model_writer.generate_content(prompt)
         utils.log_usage(folder, ai_models.model_writer.name, response.usage_metadata)
-        blurb = response.text
-        with open(os.path.join(folder, "blurb.txt"), "w") as f: f.write(blurb)
-        with open(os.path.join(folder, "back_cover.txt"), "w") as f: f.write(blurb)
-    except:
-        utils.log("MARKETING", "Failed to generate blurb.")
+        blurb = response.text.strip()
+        # Trim to 220 words if model overshot the 150-200 word target
+        words = blurb.split()
+        if len(words) > 220:
+            blurb = " ".join(words[:220])
+            # End at the last sentence boundary within those 220 words
+            for end_ch in ['.', '!', '?']:
+                last_sent = blurb.rfind(end_ch)
+                if last_sent > len(blurb) // 2:
+                    blurb = blurb[:last_sent + 1]
+                    break
+            utils.log("MARKETING", f" -> Blurb trimmed to {len(blurb.split())} words.")
+
+        with open(os.path.join(folder, "blurb.txt"), "w", encoding='utf-8') as f:
+            f.write(blurb)
+        with open(os.path.join(folder, "back_cover.txt"), "w", encoding='utf-8') as f:
+            f.write(blurb)
+        utils.log("MARKETING", f" -> Blurb: {len(blurb.split())} words.")
+    except Exception as e:
+        utils.log("MARKETING", f"Failed to generate blurb: {e}")
@@ -14,27 +14,187 @@ try:
 except ImportError:
     HAS_PIL = False
 
-def evaluate_image_quality(image_path, prompt, model, folder=None):
-    if not HAS_PIL: return None, "PIL not installed"
-    try:
-        img = Image.open(image_path)
-        response = model.generate_content([f"""
-ROLE: Art Critic
-TASK: Analyze generated image against prompt.
-PROMPT: '{prompt}'
-OUTPUT_FORMAT (JSON): {{ "score": int (1-10), "reason": "string" }}
-""", img])
-        model_name = getattr(model, 'name', "logic-pro")
-        if folder: utils.log_usage(folder, model_name, response.usage_metadata)
-        data = json.loads(utils.clean_json(response.text))
-        return data.get('score'), data.get('reason')
-    except Exception as e: return None, str(e)
+# Score gates (mirrors chapter writing pipeline thresholds)
+ART_SCORE_AUTO_ACCEPT = 8   # Stop retrying — image is excellent
+ART_SCORE_PASSING = 7       # Acceptable; keep as best candidate
+LAYOUT_SCORE_PASSING = 7    # Accept layout and stop retrying
+
+# ---------------------------------------------------------------------------
+# Evaluation helpers
+# ---------------------------------------------------------------------------
+
+def evaluate_cover_art(image_path, genre, title, model, folder=None):
+    """Score generated cover art against a professional book-cover rubric.
+
+    Returns (score: int | None, critique: str).
+    Auto-fail conditions:
+    - Any visible text/watermarks → score capped at 4
+    - Blurry or deformed anatomy → deduct 2 points
+    """
+    if not HAS_PIL:
+        return None, "PIL not installed"
+    try:
+        img = Image.open(image_path)
+        prompt = f"""
+ROLE: Professional Book Cover Art Critic
+TASK: Score this AI-generated cover art for a {genre} novel titled '{title}'.
+
+SCORING RUBRIC (1-10):
+1. VISUAL IMPACT: Is the image immediately arresting? Does it demand attention on a shelf?
+2. GENRE FIT: Does the visual style, mood, and colour palette unmistakably signal {genre}?
+3. COMPOSITION: Is there a clear focal point? Are the top or bottom thirds usable for title/author text overlay?
+4. TECHNICAL QUALITY: Sharp, detailed, free of deformities, blurring, or AI artefacts?
+5. CLEAN IMAGE: Absolutely NO text, letters, numbers, watermarks, logos, or UI elements?
+
+SCORING SCALE:
+- 9-10: Masterclass cover art, ready for a major publisher
+- 7-8: Professional quality, genre-appropriate, minor flaws only
+- 5-6: Usable but generic or has one significant flaw
+- 1-4: Unusable — major artefacts, wrong genre, deformed figures, or visible text
+
+AUTO-FAIL RULES (apply before scoring):
+- If ANY text, letters, watermarks or UI elements are visible → score CANNOT exceed 4. State this explicitly.
+- If figures have deformed anatomy or blurring → deduct 2 from your final score.
+
+OUTPUT_FORMAT (JSON): {{"score": int, "critique": "Specific issues citing what to fix in the next attempt.", "actionable": "One concrete change to the image prompt that would improve the next attempt."}}
+"""
+        response = model.generate_content([prompt, img])
+        model_name = getattr(model, 'name', "logic")
+        if folder:
+            utils.log_usage(folder, model_name, response.usage_metadata)
+        data = json.loads(utils.clean_json(response.text))
+        score = data.get('score')
+        critique = data.get('critique', '')
+        if data.get('actionable'):
+            critique += f" FIX: {data['actionable']}"
+        return score, critique
+    except Exception as e:
+        return None, str(e)
+
+
+def evaluate_cover_layout(image_path, title, author, genre, font_name, model, folder=None):
+    """Score the finished cover (art + text overlay) as a professional book cover.
+
+    Returns (score: int | None, critique: str).
+    """
+    if not HAS_PIL:
+        return None, "PIL not installed"
+    try:
+        img = Image.open(image_path)
+        prompt = f"""
+ROLE: Graphic Design Critic
+TASK: Score this finished book cover for '{title}' by {author} ({genre}).
+
+SCORING RUBRIC (1-10):
+1. LEGIBILITY: Is the title instantly readable? High contrast against the background?
+2. TYPOGRAPHY: Does the font '{font_name}' suit the {genre} genre? Is sizing proportional?
+3. PLACEMENT: Is the title placed where it doesn't obscure the focal point? Is the author name readable?
+4. PROFESSIONAL POLISH: Does this look like a published, commercially-viable cover?
+5. GENRE SIGNAL: At a glance, does the whole cover (art + text) correctly signal {genre}?
+
+SCORING SCALE:
+- 9-10: Indistinguishable from a professional published cover
+- 7-8: Strong cover, minor refinement would help
+- 5-6: Passable but text placement or contrast needs work
+- 1-4: Unusable — unreadable text, clashing colours, or amateurish layout
+
+AUTO-FAIL: If the title text is illegible (low contrast, obscured, or missing) → score CANNOT exceed 4.
+
+OUTPUT_FORMAT (JSON): {{"score": int, "critique": "Specific layout issues.", "actionable": "One change to position, colour, or font size that would fix the worst problem."}}
+"""
+        response = model.generate_content([prompt, img])
+        model_name = getattr(model, 'name', "logic")
+        if folder:
+            utils.log_usage(folder, model_name, response.usage_metadata)
+        data = json.loads(utils.clean_json(response.text))
+        score = data.get('score')
+        critique = data.get('critique', '')
+        if data.get('actionable'):
+            critique += f" FIX: {data['actionable']}"
+        return score, critique
+    except Exception as e:
+        return None, str(e)
+
+
+# ---------------------------------------------------------------------------
+# Art prompt pre-validation
+# ---------------------------------------------------------------------------
+
+def validate_art_prompt(art_prompt, meta, model, folder=None):
+    """Pre-validate and improve the image generation prompt before calling Imagen.
+
+    Checks for: accidental text instructions, vague focal point, missing composition
+    guidance, and genre mismatch. Returns improved prompt or original on failure.
+    """
+    genre = meta.get('genre', 'Fiction')
+    title = meta.get('title', 'Untitled')
+
+    check_prompt = f"""
+ROLE: Art Director
+TASK: Review and improve this image generation prompt for a {genre} book cover titled '{title}'.
+
+CURRENT_PROMPT:
+{art_prompt}
+
+CHECK FOR AND FIX:
+1. Any instruction to render text, letters, or the title? → Remove it (text is overlaid separately).
+2. Is there a specific, memorable FOCAL POINT described? → Add one if missing.
+3. Does the colour palette and style match {genre} conventions? → Correct if off.
+4. Is RULE OF THIRDS composition mentioned (space at top/bottom for title overlay)? → Add if missing.
+5. Does it end with "No text, no letters, no watermarks"? → Ensure this is present.
+
+Return the improved prompt under 200 words.
+
+OUTPUT_FORMAT (JSON): {{"improved_prompt": "..."}}
+"""
+    try:
+        resp = model.generate_content(check_prompt)
+        if folder:
+            utils.log_usage(folder, model.name, resp.usage_metadata)
+        data = json.loads(utils.clean_json(resp.text))
+        improved = data.get('improved_prompt', '').strip()
+        if improved and len(improved) > 50:
+            utils.log("MARKETING", " -> Art prompt validated and improved.")
+            return improved
+    except Exception as e:
+        utils.log("MARKETING", f" -> Art prompt validation failed: {e}. Using original.")
+    return art_prompt
+
+
+# ---------------------------------------------------------------------------
+# Visual context helper
+# ---------------------------------------------------------------------------
+
+def _build_visual_context(bp, tracking):
+    """Extract structured visual context: protagonist, antagonist, key themes."""
+    lines = []
+    chars = bp.get('characters', [])
+    protagonist = next((c for c in chars if 'protagonist' in c.get('role', '').lower()), None)
+    if protagonist:
+        lines.append(f"PROTAGONIST: {protagonist.get('name')} — {protagonist.get('description', '')[:200]}")
+    antagonist = next((c for c in chars if 'antagonist' in c.get('role', '').lower()), None)
+    if antagonist:
+        lines.append(f"ANTAGONIST: {antagonist.get('name')} — {antagonist.get('description', '')[:150]}")
+    if tracking and tracking.get('characters'):
+        for name, data in list(tracking['characters'].items())[:2]:
+            desc = ', '.join(data.get('descriptors', []))[:120]
+            if desc:
+                lines.append(f"CHARACTER VISUAL ({name}): {desc}")
+    if tracking and tracking.get('events'):
+        recent = [e for e in tracking['events'][-3:] if isinstance(e, str)]
+        if recent:
+            lines.append(f"KEY THEMES/EVENTS: {'; '.join(recent)[:200]}")
+    return "\n".join(lines) if lines else ""
+
+
+# ---------------------------------------------------------------------------
+# Main entry point
+# ---------------------------------------------------------------------------
+
 def generate_cover(bp, folder, tracking=None, feedback=None, interactive=False):
     if not HAS_PIL:
-        utils.log("MARKETING", "Pillow not installed. Skipping image cover.")
+        utils.log("MARKETING", "Pillow not installed. Skipping cover.")
         return
 
     utils.log("MARKETING", "Generating cover...")
@@ -45,13 +205,7 @@ def generate_cover(bp, folder, tracking=None, feedback=None, interactive=False):
     if orientation == "Landscape": ar = "4:3"
     elif orientation == "Square": ar = "1:1"
 
-    visual_context = ""
-    if tracking:
-        visual_context = "IMPORTANT VISUAL CONTEXT:\n"
-        if 'events' in tracking:
-            visual_context += f"Key Events/Themes: {json.dumps(tracking['events'][-5:])}\n"
-        if 'characters' in tracking:
-            visual_context += f"Character Appearances: {json.dumps(tracking['characters'])}\n"
+    visual_context = _build_visual_context(bp, tracking)
 
     regenerate_image = True
     design_instruction = ""
@@ -60,18 +214,15 @@ def generate_cover(bp, folder, tracking=None, feedback=None, interactive=False):
         regenerate_image = False
 
     if feedback and feedback.strip():
-        utils.log("MARKETING", f"Analyzing feedback: '{feedback}'...")
+        utils.log("MARKETING", f"Analysing feedback: '{feedback}'...")
         analysis_prompt = f"""
 ROLE: Design Assistant
-TASK: Analyze user feedback on cover.
+TASK: Analyse user feedback on a book cover.
 
 FEEDBACK: "{feedback}"
 
 DECISION:
-1. Keep the current background image but change text/layout/color (REGENERATE_LAYOUT).
-2. Create a completely new background image (REGENERATE_IMAGE).
-
-OUTPUT_FORMAT (JSON): {{ "action": "REGENERATE_LAYOUT" or "REGENERATE_IMAGE", "instruction": "Specific instruction for Art Director" }}
+1. Keep the background image; change only text/layout/colour → REGENERATE_LAYOUT
+2. Create a completely new background image → REGENERATE_IMAGE
+OUTPUT_FORMAT (JSON): {{"action": "REGENERATE_LAYOUT" or "REGENERATE_IMAGE", "instruction": "Specific instruction for the Art Director."}}
 """
         try:
             resp = ai_models.model_logic.generate_content(analysis_prompt)
@@ -79,24 +230,24 @@ def generate_cover(bp, folder, tracking=None, feedback=None, interactive=False):
             decision = json.loads(utils.clean_json(resp.text))
             if decision.get('action') == 'REGENERATE_LAYOUT':
                 regenerate_image = False
-                utils.log("MARKETING", "Feedback indicates keeping image. Regenerating layout only.")
+                utils.log("MARKETING", "Feedback: keeping image, regenerating layout only.")
                 design_instruction = decision.get('instruction', feedback)
-        except:
+        except Exception:
             utils.log("MARKETING", "Feedback analysis failed. Defaulting to full regeneration.")
 
     genre = meta.get('genre', 'Fiction')
     tone = meta.get('style', {}).get('tone', 'Balanced')
     genre_style_map = {
         'thriller': 'dark, cinematic, high-contrast photography style',
         'mystery': 'moody, atmospheric, noir-inspired painting',
         'romance': 'warm, painterly, soft-focus illustration',
         'fantasy': 'epic digital painting, rich colours, mythic scale',
         'science fiction': 'sharp digital art, cool palette, futuristic',
-        'horror': 'unsettling, dark atmospheric painting, desaturated',
+        'horror': 'unsettling dark atmospheric painting, desaturated',
         'historical fiction': 'classical oil painting style, period-accurate',
         'young adult': 'vibrant illustrated style, bold colours',
     }
-    suggested_style = genre_style_map.get(genre.lower(), 'professional digital illustration or photography')
+    suggested_style = genre_style_map.get(genre.lower(), 'professional digital illustration')
 
     design_prompt = f"""
 ROLE: Art Director
@@ -108,258 +259,296 @@ def generate_cover(bp, folder, tracking=None, feedback=None, interactive=False):
 - TONE: {tone}
 - SUGGESTED_VISUAL_STYLE: {suggested_style}
 
-VISUAL_CONTEXT (characters and key themes from the story):
-{visual_context if visual_context else "Use genre conventions."}
+VISUAL_CONTEXT (characters and themes from the finished story — use these):
+{visual_context if visual_context else "Use strong genre conventions."}
 
 USER_FEEDBACK: {feedback if feedback else "None"}
 DESIGN_INSTRUCTION: {design_instruction if design_instruction else "Create a compelling, genre-appropriate cover."}
 
 COVER_ART_RULES:
-- The art_prompt must produce an image with NO text, no letters, no numbers, no watermarks, no UI elements, no logos.
-- Describe a clear FOCAL POINT (e.g. the protagonist, a dramatic scene, a symbolic object).
-- Use RULE OF THIRDS composition — leave visual space at top and/or bottom for the title and author text to be overlaid.
-- Describe LIGHTING that reinforces the tone (e.g. "harsh neon backlight" for thriller, "golden hour" for romance).
-- Describe the COLOUR PALETTE explicitly (e.g. "deep crimson and shadow-black", "soft rose gold and cream").
-- Characters must match their descriptions from VISUAL_CONTEXT if present.
+- The art_prompt MUST produce an image with ABSOLUTELY NO text, letters, numbers, watermarks, UI elements, or logos. Text is overlaid separately.
+- Describe a specific, memorable FOCAL POINT (e.g. protagonist mid-action, a symbolic object, a dramatic landscape).
+- Use RULE OF THIRDS composition — preserve visual space at top AND bottom for title/author text overlay.
+- Describe LIGHTING that reinforces the tone (e.g. "harsh neon backlight", "golden hour", "cold winter dawn").
+- Specify the COLOUR PALETTE explicitly (e.g. "deep crimson and shadow-black", "soft rose gold and ivory cream").
+- If characters are described in VISUAL_CONTEXT, their appearance MUST match those descriptions exactly.
+- End the art_prompt with: "No text, no letters, no watermarks, no UI elements. {suggested_style} quality, 8k detail."
 
-OUTPUT_FORMAT (JSON only, no markdown):
+OUTPUT_FORMAT (JSON only, no markdown wrapper):
 {{
-"font_name": "Name of a Google Font suited to the genre (e.g. Cinzel for fantasy, Oswald for thriller, Playfair Display for romance)",
-"primary_color": "#HexCode (dominant background/cover colour)",
+"font_name": "One Google Font suited to {genre} (e.g. Cinzel for fantasy, Oswald for thriller, Playfair Display for romance)",
+"primary_color": "#HexCode",
 "text_color": "#HexCode (high contrast against primary_color)",
-"art_prompt": "Detailed {suggested_style} image generation prompt. Begin with the style. Describe composition, focal point, lighting, colour palette, and any characters. End with: No text, no letters, no watermarks, photorealistic/painted quality, 8k detail."
+"art_prompt": "Detailed image generation prompt. Style → Focal point → Composition → Lighting → Colour palette → Characters (if any). End with the NO TEXT clause."
 }}
 """
     try:
         response = ai_models.model_artist.generate_content(design_prompt)
         utils.log_usage(folder, ai_models.model_artist.name, response.usage_metadata)
         design = json.loads(utils.clean_json(response.text))
 
-        bg_color = design.get('primary_color', '#252570')
-
-        art_prompt = design.get('art_prompt', f"Cover art for {meta.get('title')}")
-        with open(os.path.join(folder, "cover_art_prompt.txt"), "w") as f:
-            f.write(art_prompt)
-
-        img = None
-        width, height = 600, 900
-
-        best_img_score = 0
-        best_img_path = None
-
-        MAX_IMG_ATTEMPTS = 3
-        if regenerate_image:
-            for i in range(1, MAX_IMG_ATTEMPTS + 1):
-                utils.log("MARKETING", f"Generating cover art (Attempt {i}/{MAX_IMG_ATTEMPTS})...")
-                try:
-                    if not ai_models.model_image: raise ImportError("No Image Generation Model available.")
-
-                    status = "success"
-                    try:
-                        result = ai_models.model_image.generate_images(prompt=art_prompt, number_of_images=1, aspect_ratio=ar)
-                    except Exception as e:
-                        err_lower = str(e).lower()
-                        if ai_models.HAS_VERTEX and ("resource" in err_lower or "quota" in err_lower):
-                            try:
-                                utils.log("MARKETING", "⚠️ Imagen 3 failed. Trying Imagen 3 Fast...")
-                                fb_model = ai_models.VertexImageModel.from_pretrained("imagen-3.0-fast-generate-001")
-                                result = fb_model.generate_images(prompt=art_prompt, number_of_images=1, aspect_ratio=ar)
-                                status = "success_fast"
-                            except Exception:
-                                utils.log("MARKETING", "⚠️ Imagen 3 Fast failed. Trying Imagen 2...")
-                                fb_model = ai_models.VertexImageModel.from_pretrained("imagegeneration@006")
-                                result = fb_model.generate_images(prompt=art_prompt, number_of_images=1, aspect_ratio=ar)
-                                status = "success_fallback"
-                        else:
-                            raise e
-
-                    attempt_path = os.path.join(folder, f"cover_art_attempt_{i}.png")
-                    result.images[0].save(attempt_path)
-                    utils.log_usage(folder, "imagen", image_count=1)
-
-                    cover_eval_criteria = (
-                        f"Book cover art for a {genre} novel titled '{meta.get('title')}'.\n\n"
-                        f"Evaluate STRICTLY as a professional book cover on these criteria:\n"
-                        f"1. VISUAL IMPACT: Is the image immediately arresting and compelling?\n"
-                        f"2. GENRE FIT: Does the visual style, mood, and palette match {genre}?\n"
-                        f"3. COMPOSITION: Is there a clear focal point? Are top/bottom areas usable for title/author text?\n"
-                        f"4. QUALITY: Is the image sharp, detailed, and free of deformities or blurring?\n"
-                        f"5. CLEAN IMAGE: Are there absolutely NO text, watermarks, letters, or UI artifacts?\n"
-                        f"Score 1-10. Deduct 3 points if any text/watermarks are visible. "
-                        f"Deduct 2 if the image is blurry or has deformed anatomy."
-                    )
-                    score, critique = evaluate_image_quality(attempt_path, cover_eval_criteria, ai_models.model_writer, folder)
-                    if score is None: score = 0
-
-                    utils.log("MARKETING", f" -> Image Score: {score}/10. Critique: {critique}")
-                    utils.log_image_attempt(folder, "cover", art_prompt, f"cover_art_{i}.png", status, score=score, critique=critique)
-
-                    if interactive:
-                        try:
-                            if os.name == 'nt': os.startfile(attempt_path)
-                            elif sys.platform == 'darwin': subprocess.call(('open', attempt_path))
-                            else: subprocess.call(('xdg-open', attempt_path))
-                        except: pass
-
-                        from rich.prompt import Confirm
-                        if Confirm.ask(f"Accept cover attempt {i} (Score: {score})?", default=True):
-                            best_img_path = attempt_path
-                            break
-                        else:
-                            utils.log("MARKETING", "User rejected cover. Retrying...")
-                            continue
-
-                    if score >= 5 and score > best_img_score:
-                        best_img_score = score
-                        best_img_path = attempt_path
-                    elif best_img_path is None and score > 0:
-                        best_img_score = score
-                        best_img_path = attempt_path
-
-                    if score >= 9:
-                        utils.log("MARKETING", " -> High quality image accepted.")
-                        break
-
-                    prompt_additions = []
-                    critique_lower = critique.lower() if critique else ""
-                    if "scar" in critique_lower or "deform" in critique_lower:
-                        prompt_additions.append("perfect anatomy, no deformities")
-                    if "blur" in critique_lower or "blurry" in critique_lower:
-                        prompt_additions.append("sharp focus, highly detailed")
-                    if "text" in critique_lower or "letter" in critique_lower:
-                        prompt_additions.append("no text, no letters, no watermarks")
-                    if prompt_additions:
-                        art_prompt += f". ({', '.join(prompt_additions)})"
-
-                except Exception as e:
-                    utils.log("MARKETING", f"Image generation failed: {e}")
-                    if "quota" in str(e).lower(): break
-
-            if best_img_path and os.path.exists(best_img_path):
-                final_art_path = os.path.join(folder, "cover_art.png")
-                if best_img_path != final_art_path:
-                    shutil.copy(best_img_path, final_art_path)
-                img = Image.open(final_art_path).resize((width, height)).convert("RGB")
-            else:
-                utils.log("MARKETING", "Falling back to solid color cover.")
-                img = Image.new('RGB', (width, height), color=bg_color)
-                utils.log_image_attempt(folder, "cover", art_prompt, "cover.png", "fallback_solid")
-        else:
-            final_art_path = os.path.join(folder, "cover_art.png")
-            if os.path.exists(final_art_path):
-                utils.log("MARKETING", "Using existing cover art (Layout update only).")
-                img = Image.open(final_art_path).resize((width, height)).convert("RGB")
-            else:
-                utils.log("MARKETING", "Existing art not found. Forcing regeneration.")
-                img = Image.new('RGB', (width, height), color=bg_color)
-
-        font_path = download_font(design.get('font_name') or 'Arial')
-
-        best_layout_score = 0
-        best_layout_path = None
-
-        base_layout_prompt = f"""
-ROLE: Graphic Designer
-TASK: Determine text layout coordinates for a 600x900 cover.
-
-METADATA:
-- TITLE: {meta.get('title')}
-- AUTHOR: {meta.get('author')}
-- GENRE: {meta.get('genre')}
-
-CONSTRAINT: Do NOT place text over faces.
-
-OUTPUT_FORMAT (JSON):
-{{
-"title": {{ "x": Int, "y": Int, "font_size": Int, "font_name": "String", "color": "#Hex" }},
-"author": {{ "x": Int, "y": Int, "font_size": Int, "font_name": "String", "color": "#Hex" }}
-}}
-"""
-
-        if feedback:
-            base_layout_prompt += f"\nUSER FEEDBACK: {feedback}\nAdjust layout/colors accordingly."
-
-        layout_prompt = base_layout_prompt
-
-        for attempt in range(1, 6):
-            utils.log("MARKETING", f"Designing text layout (Attempt {attempt}/5)...")
-            try:
-                response = ai_models.model_writer.generate_content([layout_prompt, img])
-                utils.log_usage(folder, ai_models.model_writer.name, response.usage_metadata)
-                layout = json.loads(utils.clean_json(response.text))
|
|
||||||
if isinstance(layout, list): layout = layout[0] if layout else {}
|
|
||||||
except Exception as e:
|
|
||||||
utils.log("MARKETING", f"Layout generation failed: {e}")
|
|
||||||
continue
|
|
||||||
|
|
||||||
img_copy = img.copy()
|
|
||||||
draw = ImageDraw.Draw(img_copy)
|
|
||||||
|
|
||||||
def draw_element(key, text_override=None):
|
|
||||||
elem = layout.get(key)
|
|
||||||
if not elem: return
|
|
||||||
if isinstance(elem, list): elem = elem[0] if elem else {}
|
|
||||||
text = text_override if text_override else elem.get('text')
|
|
||||||
if not text: return
|
|
||||||
|
|
||||||
f_name = elem.get('font_name') or 'Arial'
|
|
||||||
f_path = download_font(f_name)
|
|
||||||
try:
|
|
||||||
if f_path: font = ImageFont.truetype(f_path, elem.get('font_size', 40))
|
|
||||||
else: raise IOError("Font not found")
|
|
||||||
except: font = ImageFont.load_default()
|
|
||||||
|
|
||||||
x, y = elem.get('x', 300), elem.get('y', 450)
|
|
||||||
color = elem.get('color') or '#FFFFFF'
|
|
||||||
|
|
||||||
avg_char_w = font.getlength("A")
|
|
||||||
wrap_w = int(550 / avg_char_w) if avg_char_w > 0 else 20
|
|
||||||
lines = textwrap.wrap(text, width=wrap_w)
|
|
||||||
|
|
||||||
line_heights = []
|
|
||||||
for l in lines:
|
|
||||||
bbox = draw.textbbox((0, 0), l, font=font)
|
|
||||||
line_heights.append(bbox[3] - bbox[1] + 10)
|
|
||||||
|
|
||||||
total_h = sum(line_heights)
|
|
||||||
current_y = y - (total_h // 2)
|
|
||||||
|
|
||||||
for idx, line in enumerate(lines):
|
|
||||||
bbox = draw.textbbox((0, 0), line, font=font)
|
|
||||||
lx = x - ((bbox[2] - bbox[0]) / 2)
|
|
||||||
draw.text((lx, current_y), line, font=font, fill=color)
|
|
||||||
current_y += line_heights[idx]
|
|
||||||
|
|
||||||
draw_element('title', meta.get('title'))
|
|
||||||
draw_element('author', meta.get('author'))
|
|
||||||
|
|
||||||
attempt_path = os.path.join(folder, f"cover_layout_attempt_{attempt}.png")
|
|
||||||
img_copy.save(attempt_path)
|
|
||||||
|
|
||||||
eval_prompt = f"""
|
|
||||||
Analyze the text layout for the book title '{meta.get('title')}'.
|
|
||||||
CHECKLIST:
|
|
||||||
1. Is the text legible against the background?
|
|
||||||
2. Is the contrast sufficient?
|
|
||||||
3. Does it look professional?
|
|
||||||
"""
|
|
||||||
score, critique = evaluate_image_quality(attempt_path, eval_prompt, ai_models.model_writer, folder)
|
|
||||||
if score is None: score = 0
|
|
||||||
|
|
||||||
utils.log("MARKETING", f" -> Layout Score: {score}/10. Critique: {critique}")
|
|
||||||
|
|
||||||
if score > best_layout_score:
|
|
||||||
best_layout_score = score
|
|
||||||
best_layout_path = attempt_path
|
|
||||||
|
|
||||||
if score == 10:
|
|
||||||
utils.log("MARKETING", " -> Perfect layout accepted.")
|
|
||||||
break
|
|
||||||
|
|
||||||
layout_prompt = base_layout_prompt + f"\nCRITIQUE OF PREVIOUS ATTEMPT: {critique}\nAdjust position/color to fix this."
|
|
||||||
|
|
||||||
if best_layout_path:
|
|
||||||
shutil.copy(best_layout_path, os.path.join(folder, "cover.png"))
|
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
utils.log("MARKETING", f"Cover generation failed: {e}")
|
utils.log("MARKETING", f"Cover design failed: {e}")
|
||||||
|
return
|
||||||
+
+    bg_color = design.get('primary_color', '#252570')
+    art_prompt = design.get('art_prompt', f"Cover art for {meta.get('title')}")
+    font_name = design.get('font_name') or 'Playfair Display'
+
+    # Pre-validate and improve the art prompt before handing to Imagen
+    art_prompt = validate_art_prompt(art_prompt, meta, ai_models.model_logic, folder)
+    with open(os.path.join(folder, "cover_art_prompt.txt"), "w") as f:
+        f.write(art_prompt)
+
+    img = None
+    width, height = 600, 900
+
+    # -----------------------------------------------------------------------
+    # Phase 1: Art generation loop (evaluate → critique → refine → retry)
+    # -----------------------------------------------------------------------
+    best_art_score = 0
+    best_art_path = None
+    current_art_prompt = art_prompt
+    MAX_ART_ATTEMPTS = 3
+
+    if regenerate_image:
+        for attempt in range(1, MAX_ART_ATTEMPTS + 1):
+            utils.log("MARKETING", f"Generating cover art (Attempt {attempt}/{MAX_ART_ATTEMPTS})...")
+            attempt_path = os.path.join(folder, f"cover_art_attempt_{attempt}.png")
+            gen_status = "success"
+
+            try:
+                if not ai_models.model_image:
+                    raise ImportError("No image generation model available.")
+
+                try:
+                    result = ai_models.model_image.generate_images(
+                        prompt=current_art_prompt, number_of_images=1, aspect_ratio=ar)
+                except Exception as img_err:
+                    err_lower = str(img_err).lower()
+                    if ai_models.HAS_VERTEX and ("resource" in err_lower or "quota" in err_lower):
+                        try:
+                            utils.log("MARKETING", "⚠️ Imagen 3 failed. Trying Imagen 3 Fast...")
+                            fb = ai_models.VertexImageModel.from_pretrained("imagen-3.0-fast-generate-001")
+                            result = fb.generate_images(prompt=current_art_prompt, number_of_images=1, aspect_ratio=ar)
+                            gen_status = "success_fast"
+                        except Exception:
+                            utils.log("MARKETING", "⚠️ Imagen 3 Fast failed. Trying Imagen 2...")
+                            fb = ai_models.VertexImageModel.from_pretrained("imagegeneration@006")
+                            result = fb.generate_images(prompt=current_art_prompt, number_of_images=1, aspect_ratio=ar)
+                            gen_status = "success_fallback"
+                    else:
+                        raise img_err
+
+                result.images[0].save(attempt_path)
+                utils.log_usage(folder, "imagen", image_count=1)
+
+                score, critique = evaluate_cover_art(
+                    attempt_path, genre, meta.get('title', ''), ai_models.model_logic, folder)
+                if score is None:
+                    score = 0
+                utils.log("MARKETING", f" -> Art Score: {score}/10. Critique: {critique}")
+                utils.log_image_attempt(folder, "cover", current_art_prompt,
+                                        f"cover_art_attempt_{attempt}.png", gen_status,
+                                        score=score, critique=critique)
+
+                if interactive:
+                    try:
+                        if os.name == 'nt': os.startfile(attempt_path)
+                        elif sys.platform == 'darwin': subprocess.call(('open', attempt_path))
+                        else: subprocess.call(('xdg-open', attempt_path))
+                    except Exception:
+                        pass
+                    from rich.prompt import Confirm
+                    if Confirm.ask(f"Accept cover art attempt {attempt} (score {score})?", default=True):
+                        best_art_path = attempt_path
+                        best_art_score = score
+                        break
+                    else:
+                        utils.log("MARKETING", "User rejected art. Regenerating...")
+                        continue
+
+                # Track best image — prefer passing threshold; keep first usable as fallback
+                if score >= ART_SCORE_PASSING and score > best_art_score:
+                    best_art_score = score
+                    best_art_path = attempt_path
+                elif best_art_path is None and score > 0:
+                    best_art_score = score
+                    best_art_path = attempt_path
+
+                if score >= ART_SCORE_AUTO_ACCEPT:
+                    utils.log("MARKETING", " -> High-quality art accepted early.")
+                    break
+
+                # Critique-driven prompt refinement for next attempt
+                if attempt < MAX_ART_ATTEMPTS and critique:
+                    refine_req = f"""
+ROLE: Art Director
+TASK: Rewrite the image prompt to fix the critique below. Keep under 200 words.
+
+CRITIQUE: {critique}
+ORIGINAL_PROMPT: {current_art_prompt}
+
+RULES:
+- Preserve genre style, focal point, and colour palette unless explicitly criticised.
+- If text/watermarks were visible: reinforce "absolutely no text, no letters, no watermarks."
+- If anatomy was deformed: add "perfect anatomy, professional figure illustration."
+- If blurry: add "tack-sharp focus, highly detailed."
+
+OUTPUT_FORMAT (JSON): {{"improved_prompt": "..."}}
+"""
+                    try:
+                        rr = ai_models.model_logic.generate_content(refine_req)
+                        utils.log_usage(folder, ai_models.model_logic.name, rr.usage_metadata)
+                        rd = json.loads(utils.clean_json(rr.text))
+                        improved = rd.get('improved_prompt', '').strip()
+                        if improved and len(improved) > 50:
+                            current_art_prompt = improved
+                            utils.log("MARKETING", " -> Art prompt refined for next attempt.")
+                    except Exception:
+                        pass
+
+            except Exception as e:
+                utils.log("MARKETING", f"Image generation attempt {attempt} failed: {e}")
+                if "quota" in str(e).lower():
+                    break
+
+        if best_art_path and os.path.exists(best_art_path):
+            final_art_path = os.path.join(folder, "cover_art.png")
+            if best_art_path != final_art_path:
+                shutil.copy(best_art_path, final_art_path)
+            img = Image.open(final_art_path).resize((width, height)).convert("RGB")
+            utils.log("MARKETING", f" -> Best art: {best_art_score}/10.")
+        else:
+            utils.log("MARKETING", "⚠️ No usable art generated. Falling back to solid colour cover.")
+            img = Image.new('RGB', (width, height), color=bg_color)
+            utils.log_image_attempt(folder, "cover", art_prompt, "cover.png", "fallback_solid")
+    else:
+        final_art_path = os.path.join(folder, "cover_art.png")
+        if os.path.exists(final_art_path):
+            utils.log("MARKETING", "Using existing cover art (layout update only).")
+            img = Image.open(final_art_path).resize((width, height)).convert("RGB")
+        else:
+            utils.log("MARKETING", "Existing art not found. Using solid colour fallback.")
+            img = Image.new('RGB', (width, height), color=bg_color)
+
+    if img is None:
+        utils.log("MARKETING", "Cover generation aborted — no image available.")
+        return
+
+    font_path = download_font(font_name)
+
+    # -----------------------------------------------------------------------
+    # Phase 2: Text layout loop (evaluate → critique → adjust → retry)
+    # -----------------------------------------------------------------------
+    best_layout_score = 0
+    best_layout_path = None
+
+    base_layout_prompt = f"""
+ROLE: Graphic Designer
+TASK: Determine precise text layout coordinates for a 600×900 book cover image.
+
+BOOK:
+- TITLE: {meta.get('title')}
+- AUTHOR: {meta.get('author', 'Unknown')}
+- GENRE: {genre}
+- FONT: {font_name}
+- TEXT_COLOR: {design.get('text_color', '#FFFFFF')}
+
+PLACEMENT RULES:
+- Title in top third OR bottom third (not centre — that obscures the focal art).
+- Author name in the opposite zone, or just below the title.
+- Font sizes: title ~60-80px, author ~28-36px for a 600px-wide canvas.
+- Do NOT place text over faces or the primary focal point.
+- Coordinates are the CENTER of the text block (x=300 is horizontal centre).
+
+{f"USER FEEDBACK: {feedback}. Adjust placement/colour accordingly." if feedback else ""}
+
+OUTPUT_FORMAT (JSON):
+{{
+  "title": {{"x": Int, "y": Int, "font_size": Int, "font_name": "{font_name}", "color": "#Hex"}},
+  "author": {{"x": Int, "y": Int, "font_size": Int, "font_name": "{font_name}", "color": "#Hex"}}
+}}
+"""
+
+    layout_prompt = base_layout_prompt
+    MAX_LAYOUT_ATTEMPTS = 5
+
+    for attempt in range(1, MAX_LAYOUT_ATTEMPTS + 1):
+        utils.log("MARKETING", f"Designing text layout (Attempt {attempt}/{MAX_LAYOUT_ATTEMPTS})...")
+        try:
+            resp = ai_models.model_writer.generate_content([layout_prompt, img])
+            utils.log_usage(folder, ai_models.model_writer.name, resp.usage_metadata)
+            layout = json.loads(utils.clean_json(resp.text))
+            if isinstance(layout, list):
+                layout = layout[0] if layout else {}
+        except Exception as e:
+            utils.log("MARKETING", f"Layout generation failed: {e}")
+            continue
+
+        img_copy = img.copy()
+        draw = ImageDraw.Draw(img_copy)
+
+        def draw_element(key, text_override=None):
+            elem = layout.get(key)
+            if not elem:
+                return
+            if isinstance(elem, list):
+                elem = elem[0] if elem else {}
+            text = text_override if text_override else elem.get('text')
+            if not text:
+                return
+            f_name = elem.get('font_name') or font_name
+            f_p = download_font(f_name)
+            try:
+                fnt = ImageFont.truetype(f_p, elem.get('font_size', 40)) if f_p else ImageFont.load_default()
+            except Exception:
+                fnt = ImageFont.load_default()
+            x, y = elem.get('x', 300), elem.get('y', 450)
+            color = elem.get('color') or design.get('text_color', '#FFFFFF')
+            avg_w = fnt.getlength("A")
+            wrap_w = int(550 / avg_w) if avg_w > 0 else 20
+            lines = textwrap.wrap(text, width=wrap_w)
+            line_heights = []
+            for ln in lines:
+                bbox = draw.textbbox((0, 0), ln, font=fnt)
+                line_heights.append(bbox[3] - bbox[1] + 10)
+            total_h = sum(line_heights)
+            current_y = y - (total_h // 2)
+            for idx, ln in enumerate(lines):
+                bbox = draw.textbbox((0, 0), ln, font=fnt)
+                lx = x - ((bbox[2] - bbox[0]) / 2)
+                draw.text((lx, current_y), ln, font=fnt, fill=color)
+                current_y += line_heights[idx]
+
+        draw_element('title', meta.get('title'))
+        draw_element('author', meta.get('author'))
+
+        attempt_path = os.path.join(folder, f"cover_layout_attempt_{attempt}.png")
+        img_copy.save(attempt_path)
+
+        score, critique = evaluate_cover_layout(
+            attempt_path, meta.get('title', ''), meta.get('author', ''), genre, font_name,
+            ai_models.model_writer, folder
+        )
+        if score is None:
+            score = 0
+        utils.log("MARKETING", f" -> Layout Score: {score}/10. Critique: {critique}")
+
+        if score > best_layout_score:
+            best_layout_score = score
+            best_layout_path = attempt_path
+
+        if score >= LAYOUT_SCORE_PASSING:
+            utils.log("MARKETING", f" -> Layout accepted (score {score} ≥ {LAYOUT_SCORE_PASSING}).")
+            break
+
+        if attempt < MAX_LAYOUT_ATTEMPTS:
+            layout_prompt = (base_layout_prompt
+                             + f"\n\nCRITIQUE OF ATTEMPT {attempt}: {critique}\n"
+                             + "Adjust coordinates, font_size, or color to fix these issues exactly.")
+
+    if best_layout_path:
+        shutil.copy(best_layout_path, os.path.join(folder, "cover.png"))
+        utils.log("MARKETING", f"Cover saved. Best layout score: {best_layout_score}/10.")
+    else:
+        utils.log("MARKETING", "⚠️ No layout produced. Cover not saved.")
@@ -42,14 +42,20 @@ def download_font(font_name):
     base_url = f"https://github.com/google/fonts/raw/main/{license_type}/{clean_name}"
     for pattern in patterns:
         try:
-            r = requests.get(f"{base_url}/{pattern}", headers=headers, timeout=5)
+            r = requests.get(f"{base_url}/{pattern}", headers=headers, timeout=6)
             if r.status_code == 200 and len(r.content) > 1000:
-                with open(font_path, 'wb') as f: f.write(r.content)
+                with open(font_path, 'wb') as f:
+                    f.write(r.content)
                 utils.log("ASSETS", f"✅ Downloaded {font_name} to {font_path}")
                 return font_path
-        except Exception: continue
+        except requests.exceptions.Timeout:
+            utils.log("ASSETS", f"  Font download timeout for {font_name} ({pattern}). Trying next...")
+            continue
+        except Exception:
+            continue
 
     if clean_name != "roboto":
-        utils.log("ASSETS", f"⚠️ Font '{font_name}' not found. Falling back to Roboto.")
+        utils.log("ASSETS", f"⚠️ Font '{font_name}' not found on Google Fonts. Falling back to Roboto.")
        return download_font("Roboto")
+    utils.log("ASSETS", "⚠️ Roboto fallback also failed. PIL will use built-in default font.")
     return None

@@ -19,7 +19,11 @@ def merge_selected_changes(original, draft, selected_keys):
             original['project_metadata'][field] = draft['project_metadata'][field]
 
         elif parts[0] == 'char' and len(parts) >= 2:
-            idx = int(parts[1])
+            try:
+                idx = int(parts[1])
+            except (ValueError, IndexError):
+                utils.log("SYSTEM", f"⚠️ Skipping malformed bible merge key: '{key}'")
+                continue
             if idx < len(draft['characters']):
                 if idx < len(original['characters']):
                     original['characters'][idx] = draft['characters'][idx]

@@ -27,7 +31,11 @@ def merge_selected_changes(original, draft, selected_keys):
                     original['characters'].append(draft['characters'][idx])
 
         elif parts[0] == 'book' and len(parts) >= 2:
-            book_num = int(parts[1])
+            try:
+                book_num = int(parts[1])
+            except (ValueError, IndexError):
+                utils.log("SYSTEM", f"⚠️ Skipping malformed bible merge key: '{key}'")
+                continue
             orig_book = next((b for b in original['books'] if b['book_number'] == book_num), None)
             draft_book = next((b for b in draft['books'] if b['book_number'] == book_num), None)
 

@@ -42,7 +50,11 @@ def merge_selected_changes(original, draft, selected_keys):
             orig_book['manual_instruction'] = draft_book['manual_instruction']
 
         elif len(parts) == 4 and parts[2] == 'beat':
-            beat_idx = int(parts[3])
+            try:
+                beat_idx = int(parts[3])
+            except (ValueError, IndexError):
+                utils.log("SYSTEM", f"⚠️ Skipping malformed beat merge key: '{key}'")
+                continue
             if beat_idx < len(draft_book['plot_beats']):
                 while len(orig_book['plot_beats']) <= beat_idx:
                     orig_book['plot_beats'].append("")

@@ -129,6 +141,30 @@ def update_lore_index(folder, chapter_text, current_lore):
     return current_lore
 
 
+def merge_tracking_to_bible(bible, tracking):
+    """Merge dynamic tracking state back into the bible dict.
+
+    Makes bible.json the single persistent source of truth by updating
+    character data and lore from the in-memory tracking object.
+    Returns the modified bible dict.
+    """
+    for name, data in tracking.get('characters', {}).items():
+        matched = False
+        for char in bible.get('characters', []):
+            if char.get('name') == name:
+                char.update(data)
+                matched = True
+                break
+        if not matched:
+            utils.log("TRACKER", f" -> Character '{name}' in tracking not found in bible. Skipping.")
+
+    if 'lore' not in bible:
+        bible['lore'] = {}
+    bible['lore'].update(tracking.get('lore', {}))
+
+    return bible
+
+
 def harvest_metadata(bp, folder, full_manuscript):
     utils.log("HARVESTER", "Scanning for new characters...")
     full_text = "\n".join([c.get('content', '') for c in full_manuscript])[:500000]

@@ -153,10 +189,26 @@ def harvest_metadata(bp, folder, full_manuscript):
         if valid_chars:
             utils.log("HARVESTER", f"Found {len(valid_chars)} new chars.")
             bp['characters'].extend(valid_chars)
-    except: pass
+    except Exception as e:
+        utils.log("HARVESTER", f"⚠️ Metadata harvest failed: {e}")
     return bp
 
 
+def get_chapter_neighbours(manuscript, current_num):
+    """Return (prev_num, next_num) chapter numbers adjacent to current_num.
+
+    manuscript: list of chapter dicts each with a 'num' key.
+    Returns None for prev/next when at the boundary.
+    """
+    nums = sorted({ch.get('num') for ch in manuscript if ch.get('num') is not None})
+    if current_num not in nums:
+        return None, None
+    idx = nums.index(current_num)
+    prev_num = nums[idx - 1] if idx > 0 else None
+    next_num = nums[idx + 1] if idx < len(nums) - 1 else None
+    return prev_num, next_num
+
+
 def refine_bible(bible, instruction, folder):
     utils.log("SYSTEM", f"Refining Bible with instruction: {instruction}")
     prompt = f"""

@@ -67,7 +67,7 @@ def evaluate_chapter_quality(text, chapter_title, genre, model, folder, series_c
     }}
     """
     try:
-        response = model.generate_content([prompt, utils.truncate_to_tokens(text, 7500)])
+        response = model.generate_content([prompt, utils.truncate_to_tokens(text, 7500, keep_head=True)])
         model_name = getattr(model, 'name', ai_models.logic_model_name)
         utils.log_usage(folder, model_name, response.usage_metadata)
         data = json.loads(utils.clean_json(response.text))

@@ -129,7 +129,13 @@ def analyze_consistency(bp, manuscript, folder):
     chapter_summaries = []
     for ch in manuscript:
         text = ch.get('content', '')
-        excerpt = text[:1000] + "\n...\n" + text[-1000:] if len(text) > 2000 else text
+        if len(text) > 3000:
+            mid = len(text) // 2
+            excerpt = text[:800] + "\n...\n" + text[mid - 200:mid + 200] + "\n...\n" + text[-800:]
+        elif len(text) > 1600:
+            excerpt = text[:800] + "\n...\n" + text[-800:]
+        else:
+            excerpt = text
         chapter_summaries.append(f"Ch {ch.get('num')}: {excerpt}")
 
     context = "\n".join(chapter_summaries)

@@ -236,8 +242,8 @@ def rewrite_chapter_content(bp, manuscript, chapter_num, instruction, folder):
     """
 
     try:
-        response = ai_models.model_logic.generate_content(prompt)
-        utils.log_usage(folder, ai_models.model_logic.name, response.usage_metadata)
+        response = ai_models.model_writer.generate_content(prompt)
+        utils.log_usage(folder, ai_models.model_writer.name, response.usage_metadata)
         try:
             data = json.loads(utils.clean_json(response.text))
             return data.get('content'), data.get('summary')
473
story/eval_logger.py
Normal file
473
story/eval_logger.py
Normal file
@@ -0,0 +1,473 @@
|
|||||||
|
"""eval_logger.py — Per-chapter evaluation log and HTML report generator.
|
||||||
|
|
||||||
|
Writes a structured eval_log.json to the book folder during writing, then
|
||||||
|
generates a self-contained HTML report that can be downloaded and shared with
|
||||||
|
critics / prompt engineers to analyse quality patterns across a run.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import time
|
||||||
|
from core import utils
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Log writer
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
def append_eval_entry(folder, entry):
|
||||||
|
"""Append one chapter's evaluation record to eval_log.json.
|
||||||
|
|
||||||
|
Called from story/writer.py at every return point in write_chapter().
|
||||||
|
Each entry captures the chapter metadata, polish decision, per-attempt
|
||||||
|
scores/critiques/decisions, and the final accepted score.
|
||||||
|
"""
|
||||||
|
log_path = os.path.join(folder, "eval_log.json")
|
||||||
|
data = []
|
||||||
|
if os.path.exists(log_path):
|
||||||
|
try:
|
||||||
|
with open(log_path, 'r', encoding='utf-8') as f:
|
||||||
|
data = json.load(f)
|
||||||
|
if not isinstance(data, list):
|
||||||
|
data = []
|
||||||
|
except Exception:
|
||||||
|
data = []
|
||||||
|
data.append(entry)
|
||||||
|
try:
|
||||||
|
with open(log_path, 'w', encoding='utf-8') as f:
|
||||||
|
json.dump(data, f, indent=2)
|
||||||
|
except Exception as e:
|
||||||
|
utils.log("EVAL", f"Failed to write eval log: {e}")
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Report generation
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
def generate_html_report(folder, bp=None):
|
||||||
|
"""Generate a self-contained HTML evaluation report from eval_log.json.
|
||||||
|
|
||||||
|
Returns the HTML string, or None if no log file exists / is empty.
|
||||||
|
"""
|
||||||
|
log_path = os.path.join(folder, "eval_log.json")
|
||||||
|
if not os.path.exists(log_path):
|
||||||
|
return None
|
||||||
|
try:
|
||||||
|
with open(log_path, 'r', encoding='utf-8') as f:
|
||||||
|
chapters = json.load(f)
|
||||||
|
except Exception:
|
||||||
|
return None
|
||||||
|
|
||||||
|
if not isinstance(chapters, list) or not chapters:
|
||||||
|
return None
|
||||||
|
|
||||||
|
title, genre = "Unknown Book", "Fiction"
|
||||||
|
if bp:
|
||||||
|
meta = bp.get('book_metadata', {})
|
||||||
|
title = meta.get('title', title)
|
||||||
|
genre = meta.get('genre', genre)
|
||||||
|
|
||||||
|
# --- Summary stats ---
|
||||||
|
scores = [c.get('final_score', 0) for c in chapters if isinstance(c.get('final_score'), (int, float)) and c.get('final_score', 0) > 0]
|
||||||
|
avg_score = round(sum(scores) / len(scores), 2) if scores else 0
|
||||||
|
total = len(chapters)
|
||||||
|
auto_accepted = sum(1 for c in chapters if c.get('final_decision') == 'auto_accepted')
|
||||||
|
multi_attempt = sum(1 for c in chapters if len(c.get('attempts', [])) > 1)
|
||||||
|
full_rewrites = sum(1 for c in chapters for a in c.get('attempts', []) if a.get('decision') == 'full_rewrite')
|
||||||
|
below_threshold = sum(1 for c in chapters if c.get('final_decision') == 'below_threshold')
|
||||||
|
polish_applied = sum(1 for c in chapters if c.get('polish_applied'))
|
||||||
|
|
||||||
|
score_dist = {i: 0 for i in range(1, 11)}
|
||||||
|
for c in chapters:
|
||||||
|
s = c.get('final_score', 0)
|
||||||
|
if isinstance(s, int) and 1 <= s <= 10:
|
||||||
|
score_dist[s] += 1
|
||||||
|
|
||||||
|
patterns = _mine_critique_patterns(chapters, total)
|
||||||
|
report_date = time.strftime('%Y-%m-%d %H:%M')
|
||||||
|
return _build_html(title, genre, report_date, chapters, avg_score, total,
|
||||||
|
auto_accepted, multi_attempt, full_rewrites, below_threshold,
|
||||||
|
polish_applied, score_dist, patterns)
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
# Pattern mining
# ---------------------------------------------------------------------------

def _mine_critique_patterns(chapters, total):
    pattern_keywords = {
        "Filter words (felt/saw/noticed)": ["filter word", "filter", "felt ", "noticed ", "realized ", "saw the", "heard the"],
        "Summary mode / telling": ["summary mode", "summariz", "telling", "show don't tell", "show, don't tell", "instead of dramatiz"],
        "Emotion labeling": ["emotion label", "told the reader", "labeling", "labelling", "she felt", "he felt", "was nervous", "was angry", "was sad"],
        "Deep POV issues": ["deep pov", "deep point of view", "distant narration", "remove the reader", "external narration"],
        "Pacing problems": ["pacing", "rushing", "too fast", "too slow", "dragging", "sagging", "abrupt"],
        "Dialogue too on-the-nose": ["on-the-nose", "on the nose", "subtext", "exposition dump", "characters explain"],
        "Weak chapter hook / ending": ["hook", "cliffhanger", "cut off abruptly", "anticlimax", "ending falls flat", "no tension"],
        "Passive voice / weak verbs": ["passive voice", "was [v", "were [v", "weak verb", "adverb"],
        "AI-isms / clichés": ["ai-ism", "cliché", "tapestry", "palpable", "testament", "azure", "cerulean", "bustling"],
        "Voice / tone inconsistency": ["voice", "tone inconsist", "persona", "shift in tone", "register"],
        "Missing sensory / atmosphere": ["sensory", "grounding", "atmosphere", "immersiv", "white room"],
    }
    counts = {}
    for pattern, keywords in pattern_keywords.items():
        matching = []
        for c in chapters:
            critique_blob = " ".join(
                a.get('critique', '').lower()
                for a in c.get('attempts', [])
            )
            if any(kw.lower() in critique_blob for kw in keywords):
                matching.append(c.get('chapter_num', '?'))
        counts[pattern] = {'count': len(matching), 'chapters': matching}
    return dict(sorted(counts.items(), key=lambda x: x[1]['count'], reverse=True))

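The keyword miner is easiest to sanity-check in isolation. The sketch below re-states its counting loop as a standalone function and runs it on hand-made chapter records; the sample data and `mine` name are hypothetical, only the record shape (chapters → attempts → critique) matches what the real `_mine_critique_patterns` reads:

```python
# Standalone sketch of the keyword-counting loop, run on made-up records.
def mine(chapters, pattern_keywords):
    counts = {}
    for pattern, keywords in pattern_keywords.items():
        matching = []
        for c in chapters:
            # Concatenate every attempt's critique into one lowercase blob.
            blob = " ".join(a.get('critique', '').lower() for a in c.get('attempts', []))
            if any(kw.lower() in blob for kw in keywords):
                matching.append(c.get('chapter_num', '?'))
        counts[pattern] = {'count': len(matching), 'chapters': matching}
    # Most frequent patterns first; ties keep insertion order (sorted is stable).
    return dict(sorted(counts.items(), key=lambda x: x[1]['count'], reverse=True))

chapters = [
    {'chapter_num': 1, 'attempts': [{'critique': 'Heavy use of filter words throughout.'}]},
    {'chapter_num': 2, 'attempts': [{'critique': 'Pacing drags in the middle section.'}]},
    {'chapter_num': 3, 'attempts': [{'critique': 'More filter words; also pacing is too slow.'}]},
]
keywords = {'Filter words': ['filter word'], 'Pacing problems': ['pacing', 'too slow']}
result = mine(chapters, keywords)
# result['Filter words']['chapters'] -> [1, 3]; result['Pacing problems']['chapters'] -> [2, 3]
```

Note that a single matching keyword in any attempt flags the whole chapter, so the counts measure "chapters affected," not total mentions.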
# ---------------------------------------------------------------------------
# HTML builder
# ---------------------------------------------------------------------------

def _score_color(s):
    try:
        s = float(s)
    except (TypeError, ValueError):
        return '#6c757d'
    if s >= 8: return '#28a745'
    if s >= 7: return '#20c997'
    if s >= 6: return '#ffc107'
    return '#dc3545'


def _decision_badge(d):
    MAP = {
        'auto_accepted': ('⚡ Auto-Accept', '#28a745'),
        'accepted': ('✓ Accepted', '#17a2b8'),
        'accepted_at_max': ('✓ Accepted', '#17a2b8'),
        'below_threshold': ('⚠ Below Threshold', '#dc3545'),
        'below_threshold_accepted': ('⚠ Below Threshold', '#dc3545'),
        'full_rewrite': ('🔄 Full Rewrite', '#6f42c1'),
        'full_rewrite_failed': ('🔄✗ Rewrite Failed', '#6f42c1'),
        'refinement': ('✏ Refined', '#fd7e14'),
        'refinement_failed': ('✏✗ Refine Failed', '#fd7e14'),
        'eval_error': ('⚠ Eval Error', '#6c757d'),
    }
    label, color = MAP.get(d, (d or '?', '#6c757d'))
    return f'<span style="background:{color};color:white;padding:2px 8px;border-radius:4px;font-size:0.78em">{label}</span>'


def _safe_int_fmt(v):
    try:
        return f"{int(v):,}"
    except (TypeError, ValueError):
        return str(v) if v else '?'

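The two small helpers above are pure functions, so their edge cases are easy to pin down. This sketch carries standalone copies (renamed without the leading underscore) just to illustrate the fallbacks: non-numeric scores go to neutral grey, and word counts that cannot be coerced to `int` fall back to `str()` or `'?'`:

```python
# Standalone copies of the score-color and word-count helpers for illustration.
def score_color(s):
    try:
        s = float(s)
    except (TypeError, ValueError):
        return '#6c757d'          # non-numeric -> neutral grey
    if s >= 8: return '#28a745'   # green
    if s >= 7: return '#20c997'   # teal
    if s >= 6: return '#ffc107'   # amber
    return '#dc3545'              # red

def safe_int_fmt(v):
    try:
        return f"{int(v):,}"      # thousands separator, e.g. 12,345
    except (TypeError, ValueError):
        return str(v) if v else '?'

print(score_color(7.5))      # -> #20c997
print(score_color('?'))      # -> #6c757d
print(safe_int_fmt(12345))   # -> 12,345
print(safe_int_fmt(None))    # -> ?
```

Note that `safe_int_fmt(None)` reaches the `except` branch (since `int(None)` raises `TypeError`) and `None` is falsy, so the result is `'?'` rather than the string `'None'`.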
def _build_html(title, genre, report_date, chapters, avg_score, total,
                auto_accepted, multi_attempt, full_rewrites, below_threshold,
                polish_applied, score_dist, patterns):

    avg_color = _score_color(avg_score)

    # --- Score timeline ---
    MAX_BAR = 260
    timeline_rows = ''
    for c in chapters:
        s = c.get('final_score', 0)
        color = _score_color(s)
        width = max(2, int((s / 10) * MAX_BAR)) if s else 2
        ch_num = c.get('chapter_num', '?')
        ch_title = str(c.get('title', ''))[:35]
        timeline_rows += (
            f'<div style="display:flex;align-items:center;margin-bottom:4px;font-size:0.8em">'
            f'<div style="width:45px;text-align:right;margin-right:8px;color:#888;flex-shrink:0">Ch {ch_num}</div>'
            f'<div style="background:{color};height:16px;width:{width}px;border-radius:2px;flex-shrink:0"></div>'
            f'<div style="margin-left:8px;color:#555">{s}/10 — {ch_title}</div>'
            f'</div>'
        )

    # --- Score distribution ---
    max_dist = max(score_dist.values()) if any(score_dist.values()) else 1
    dist_rows = ''
    for sv in range(10, 0, -1):
        count = score_dist.get(sv, 0)
        w = max(2, int((count / max_dist) * 200)) if count else 0
        color = _score_color(sv)
        dist_rows += (
            f'<div style="display:flex;align-items:center;margin-bottom:4px;font-size:0.85em">'
            f'<div style="width:28px;text-align:right;margin-right:8px;font-weight:bold;color:{color}">{sv}</div>'
            f'<div style="background:{color};height:15px;width:{w}px;border-radius:2px;opacity:0.85"></div>'
            f'<div style="margin-left:8px;color:#666">{count} ch{"apters" if count != 1 else "apter"}</div>'
            f'</div>'
        )

    # --- Chapter rows ---
    chapter_rows = ''
    for c in chapters:
        cid = c.get('chapter_num', 0)
        ch_title = str(c.get('title', '')).replace('<', '&lt;').replace('>', '&gt;')
        pov = str(c.get('pov_character') or '—')
        pace = str(c.get('pacing') or '—')
        target_w = _safe_int_fmt(c.get('target_words'))
        actual_w = _safe_int_fmt(c.get('actual_words'))
        pos = c.get('chapter_position')
        pos_pct = f"{int(pos * 100)}%" if pos is not None else '—'
        threshold = c.get('score_threshold', '?')
        fw_dens = c.get('filter_word_density', 0)
        polish = '✓' if c.get('polish_applied') else '✗'
        polish_c = '#28a745' if c.get('polish_applied') else '#aaa'
        fs = c.get('final_score', 0)
        fd = c.get('final_decision', '')
        attempts = c.get('attempts', [])
        n_att = len(attempts)
        fs_color = _score_color(fs)
        fd_badge = _decision_badge(fd)

        # Attempt detail sub-rows
        att_rows = ''
        for att in attempts:
            an = att.get('n', '?')
            ascr = att.get('score', '?')
            adec = att.get('decision', '')
            acrit = str(att.get('critique', 'No critique.')).replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;')
            ac = _score_color(ascr)
            abadge = _decision_badge(adec)
            att_rows += (
                f'<tr style="background:#f6f8fa">'
                f'<td colspan="11" style="padding:12px 16px 12px 56px;border-bottom:1px solid #e8eaed">'
                f'<div style="margin-bottom:6px"><strong>Attempt {an}:</strong>'
                f'<span style="font-size:1.1em;font-weight:bold;color:{ac};margin:0 8px">{ascr}/10</span>'
                f'{abadge}</div>'
                f'<div style="font-size:0.83em;color:#444;line-height:1.55;white-space:pre-wrap;'
                f'background:#fff;padding:10px 12px;border-left:3px solid {ac};border-radius:2px;'
                f'max-height:300px;overflow-y:auto">{acrit}</div>'
                f'</td></tr>'
            )

        chapter_rows += (
            f'<tr class="chrow" onclick="toggle({cid})" style="cursor:pointer">'
            f'<td style="font-weight:700;text-align:center">{cid}</td>'
            f'<td>{ch_title}</td>'
            f'<td style="color:#666;font-size:0.85em">{pov}</td>'
            f'<td style="color:#666;font-size:0.85em">{pace}</td>'
            f'<td style="text-align:right">{actual_w} <span style="color:#aaa">/{target_w}</span></td>'
            f'<td style="text-align:center;color:#888">{pos_pct}</td>'
            f'<td style="text-align:center">{threshold}</td>'
            f'<td style="text-align:center;color:{polish_c}">{polish} <span style="color:#aaa;font-size:0.8em">{fw_dens:.3f}</span></td>'
            f'<td style="text-align:center;font-weight:700;font-size:1.1em;color:{fs_color}">{fs}</td>'
            f'<td style="text-align:center;color:#888">{n_att}×</td>'
            f'<td>{fd_badge}</td>'
            f'</tr>'
            f'<tr id="d{cid}" class="detrow">{att_rows}</tr>'
        )

    # --- Critique patterns ---
    pat_rows = ''
    for pattern, data in patterns.items():
        count = data['count']
        if count == 0:
            continue
        pct = int(count / total * 100) if total else 0
        sev_color = '#dc3545' if pct >= 50 else '#fd7e14' if pct >= 30 else '#17a2b8'
        chlist = ', '.join(f'Ch {x}' for x in data['chapters'][:10])
        if len(data['chapters']) > 10:
            chlist += f' (+{len(data["chapters"]) - 10} more)'
        pat_rows += (
            f'<tr>'
            f'<td><strong>{pattern}</strong></td>'
            f'<td style="text-align:center;color:{sev_color};font-weight:700">{count}/{total} ({pct}%)</td>'
            f'<td style="color:#666;font-size:0.83em">{chlist}</td>'
            f'</tr>'
        )
    if not pat_rows:
        pat_rows = '<tr><td colspan="3" style="color:#666;text-align:center;padding:12px">No significant patterns detected.</td></tr>'

    # --- Prompt tuning notes ---
    notes = _generate_prompt_notes(chapters, avg_score, total, full_rewrites, below_threshold, patterns)
    notes_html = ''.join(f'<li style="margin-bottom:8px;line-height:1.55">{n}</li>' for n in notes)

    return f'''<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Eval Report — {title}</title>
<style>
*{{box-sizing:border-box;margin:0;padding:0}}
body{{font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,sans-serif;background:#f0f2f5;color:#333;padding:20px}}
.wrap{{max-width:1280px;margin:0 auto}}
header{{background:#1a1d23;color:#fff;padding:22px 28px;border-radius:10px;margin-bottom:22px}}
header h1{{font-size:0.9em;color:#8b92a1;margin-bottom:4px;font-weight:500}}
header h2{{font-size:1.9em;font-weight:700;margin-bottom:6px}}
header p{{color:#8b92a1;font-size:0.88em}}
.cards{{display:grid;grid-template-columns:repeat(auto-fit,minmax(130px,1fr));gap:12px;margin-bottom:20px}}
.card{{background:#fff;border-radius:8px;padding:16px;text-align:center;box-shadow:0 1px 3px rgba(0,0,0,.08)}}
.card .val{{font-size:2em;font-weight:700}}
.card .lbl{{font-size:0.75em;color:#888;margin-top:4px;line-height:1.3}}
.two-col{{display:grid;grid-template-columns:1fr 1fr;gap:16px;margin-bottom:16px}}
section{{background:#fff;border-radius:8px;padding:20px;margin-bottom:16px;box-shadow:0 1px 3px rgba(0,0,0,.08)}}
section h3{{font-size:1em;font-weight:700;border-bottom:2px solid #f0f0f0;padding-bottom:8px;margin-bottom:14px}}
table{{width:100%;border-collapse:collapse;font-size:0.86em}}
th{{background:#f7f8fa;padding:8px 10px;text-align:left;font-weight:600;color:#555;border-bottom:2px solid #e0e4ea;white-space:nowrap}}
td{{padding:8px 10px;border-bottom:1px solid #f0f0f0;vertical-align:middle}}
.chrow:hover{{background:#f7f8fa}}
.detrow{{display:none}}
.legend{{display:flex;gap:14px;flex-wrap:wrap;font-size:0.78em;color:#777;margin-bottom:10px}}
.dot{{display:inline-block;width:11px;height:11px;border-radius:50%;vertical-align:middle;margin-right:3px}}
ul.notes{{padding-left:20px}}
@media(max-width:768px){{.two-col{{grid-template-columns:1fr}}}}
</style>
</head>
<body>
<div class="wrap">

<header>
<h1>BookApp — Evaluation Report</h1>
<h2>{title}</h2>
<p>Genre: {genre} | Generated: {report_date} | {total} chapter{"s" if total != 1 else ""}</p>
</header>

<div class="cards">
<div class="card"><div class="val" style="color:{avg_color}">{avg_score}</div><div class="lbl">Avg Score /10</div></div>
<div class="card"><div class="val" style="color:#28a745">{auto_accepted}</div><div class="lbl">Auto-Accepted (8+)</div></div>
<div class="card"><div class="val" style="color:#17a2b8">{multi_attempt}</div><div class="lbl">Multi-Attempt</div></div>
<div class="card"><div class="val" style="color:#6f42c1">{full_rewrites}</div><div class="lbl">Full Rewrites</div></div>
<div class="card"><div class="val" style="color:#dc3545">{below_threshold}</div><div class="lbl">Below Threshold</div></div>
<div class="card"><div class="val" style="color:#fd7e14">{polish_applied}</div><div class="lbl">Polish Passes</div></div>
</div>

<div class="two-col">
<section>
<h3>📊 Score Timeline</h3>
<div class="legend">
<span><span class="dot" style="background:#28a745"></span>8–10 Great</span>
<span><span class="dot" style="background:#20c997"></span>7–7.9 Good</span>
<span><span class="dot" style="background:#ffc107"></span>6–6.9 Passable</span>
<span><span class="dot" style="background:#dc3545"></span>&lt;6 Fail</span>
</div>
<div style="overflow-y:auto;max-height:420px;padding-right:4px">{timeline_rows}</div>
</section>
<section>
<h3>📈 Score Distribution</h3>
<div style="margin-top:8px">{dist_rows}</div>
</section>
</div>

<section>
<h3>📋 Chapter Breakdown <small style="font-weight:400;color:#888">(click any row to expand critiques)</small></h3>
<div style="overflow-x:auto">
<table>
<thead><tr>
<th>#</th><th>Title</th><th>POV</th><th>Pacing</th>
<th style="text-align:right">Words</th>
<th style="text-align:center">Pos%</th>
<th style="text-align:center">Threshold</th>
<th style="text-align:center">Polish / FW</th>
<th style="text-align:center">Score</th>
<th style="text-align:center">Att.</th>
<th>Decision</th>
</tr></thead>
<tbody>{chapter_rows}</tbody>
</table>
</div>
</section>

<section>
<h3>🔍 Critique Patterns <small style="font-weight:400;color:#888">Keyword frequency across all evaluation critiques — high % = prompt gap</small></h3>
<table>
<thead><tr><th>Issue Pattern</th><th style="text-align:center">Frequency</th><th>Affected Chapters</th></tr></thead>
<tbody>{pat_rows}</tbody>
</table>
</section>

<section>
<h3>💡 Prompt Tuning Observations</h3>
<ul class="notes">{notes_html}</ul>
</section>

</div>
<script>
function toggle(id){{
var r=document.getElementById('d'+id);
if(r) r.style.display=(r.style.display==='none'||r.style.display==='')?'table-row':'none';
}}
document.querySelectorAll('.detrow').forEach(function(r){{r.style.display='none';}});
</script>
</body>
</html>'''


# ---------------------------------------------------------------------------
# Auto-observations for prompt tuning
# ---------------------------------------------------------------------------

def _generate_prompt_notes(chapters, avg_score, total, full_rewrites, below_threshold, patterns):
    notes = []

    # Overall score
    if avg_score >= 8:
        notes.append(f"✅ <strong>High average score ({avg_score}/10).</strong> The generation pipeline is performing well. Focus on the few outlier chapters below the threshold.")
    elif avg_score >= 7:
        notes.append(f"✓ <strong>Solid average score ({avg_score}/10).</strong> Minor prompt reinforcement should push this above 8. Focus on the most common critique pattern.")
    elif avg_score >= 6:
        notes.append(f"⚠ <strong>Average score of {avg_score}/10 is below target.</strong> Strengthen the draft prompt's Deep POV mandate and filter-word removal rules.")
    else:
        notes.append(f"🚨 <strong>Low average score ({avg_score}/10).</strong> The core writing prompt needs significant work — review the Deep POV mandate, genre mandates, and consider adding concrete negative examples.")

    # Full rewrite rate
    if total > 0:
        rw_pct = int(full_rewrites / total * 100)
        if rw_pct > 30:
            notes.append(f"🔄 <strong>High full-rewrite rate ({rw_pct}%, {full_rewrites} triggers).</strong> The initial draft prompt produces too many sub-6 drafts. Add stronger examples or tighten the DEEP_POV_MANDATE and PROSE_RULES sections.")
        elif rw_pct > 15:
            notes.append(f"↩ <strong>Moderate full-rewrite rate ({rw_pct}%, {full_rewrites} triggers).</strong> The draft quality could be improved. Check the genre mandates for the types of chapters that rewrite most often.")

    # Below threshold
    if below_threshold > 0:
        bt_pct = int(below_threshold / total * 100)
        notes.append(f"⚠ <strong>{below_threshold} chapter{'s' if below_threshold != 1 else ''} ({bt_pct}%) finished below the quality threshold.</strong> Inspect the individual critiques to see if these cluster by POV, pacing, or story position.")

    # Top critique patterns
    for pattern, data in list(patterns.items())[:5]:
        pct = int(data['count'] / total * 100) if total else 0
        if pct >= 50:
            notes.append(f"🔴 <strong>'{pattern}' appears in {pct}% of critiques.</strong> This is systemic — the current prompt does not prevent it. Add an explicit enforcement instruction with a concrete example of the wrong pattern and the correct alternative.")
        elif pct >= 30:
            notes.append(f"🟡 <strong>'{pattern}' mentioned in {pct}% of critiques.</strong> Consider reinforcing the relevant prompt instruction with a stronger negative example.")

    # Climax vs. early chapter comparison
    high_scores = [c.get('final_score', 0) for c in chapters if isinstance(c.get('chapter_position'), float) and c['chapter_position'] >= 0.75]
    low_scores = [c.get('final_score', 0) for c in chapters if isinstance(c.get('chapter_position'), float) and c['chapter_position'] < 0.25]
    if high_scores and low_scores:
        avg_climax = round(sum(high_scores) / len(high_scores), 1)
        avg_early = round(sum(low_scores) / len(low_scores), 1)
        if avg_climax < avg_early - 0.5:
            notes.append(f"📅 <strong>Climax chapters average {avg_climax}/10 vs early chapters {avg_early}/10.</strong> The high-stakes scenes underperform. Strengthen the genre mandates for climax pacing and consider adding specific instructions for emotional payoff.")
        elif avg_climax > avg_early + 0.5:
            notes.append(f"📅 <strong>Climax chapters outperform early chapters ({avg_climax} vs {avg_early}).</strong> Good — the adaptive threshold and extra attempts are concentrating quality where it matters.")

    # POV character analysis
    pov_scores = {}
    for c in chapters:
        pov = c.get('pov_character') or 'Unknown'
        s = c.get('final_score', 0)
        if s > 0:
            pov_scores.setdefault(pov, []).append(s)
    for pov, sc in sorted(pov_scores.items(), key=lambda x: sum(x[1]) / len(x[1])):
        if len(sc) >= 2 and sum(sc) / len(sc) < 6.5:
            avg_pov = round(sum(sc) / len(sc), 1)
            notes.append(f"👤 <strong>POV '{pov}' averages {avg_pov}/10.</strong> Consider adding or strengthening a character voice profile for this character, or refining the persona bio to match how this POV character should speak and think.")

    # Pacing analysis
    pace_scores = {}
    for c in chapters:
        pace = c.get('pacing', 'Standard')
        s = c.get('final_score', 0)
        if s > 0:
            pace_scores.setdefault(pace, []).append(s)
    for pace, sc in pace_scores.items():
        if len(sc) >= 3 and sum(sc) / len(sc) < 6.5:
            avg_p = round(sum(sc) / len(sc), 1)
            notes.append(f"⏩ <strong>'{pace}' pacing chapters average {avg_p}/10.</strong> The writing model struggles with this rhythm. Revisit the PACING_GUIDE instructions for '{pace}' chapters — they may need more concrete direction.")

    if not notes:
        notes.append("No significant patterns detected. Review the individual chapter critiques for targeted improvements.")
    return notes
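The climax-vs-early comparison in the notes generator buckets chapters by their normalized position and compares averages. The sketch below isolates that logic on made-up scores (the chapter data is hypothetical; the bucket thresholds 0.75 / 0.25 and the 0.5-point gap match the code above):

```python
# Position-bucket comparison: last quarter of the book vs. the first quarter.
chapters = [
    {'chapter_position': 0.05, 'final_score': 8},
    {'chapter_position': 0.15, 'final_score': 8},
    {'chapter_position': 0.80, 'final_score': 6},
    {'chapter_position': 0.95, 'final_score': 7},
]
high = [c['final_score'] for c in chapters
        if isinstance(c.get('chapter_position'), float) and c['chapter_position'] >= 0.75]
low = [c['final_score'] for c in chapters
       if isinstance(c.get('chapter_position'), float) and c['chapter_position'] < 0.25]
avg_climax = round(sum(high) / len(high), 1)   # average of the late chapters
avg_early = round(sum(low) / len(low), 1)      # average of the opening chapters
flag = avg_climax < avg_early - 0.5            # True means the climax underperforms
```

The `isinstance(..., float)` guard means chapters with a missing or integer `chapter_position` silently fall out of both buckets, which is why the comparison only fires when both lists are non-empty.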
@@ -80,6 +80,14 @@ def enrich(bp, folder, context=""):
         if 'plot_beats' not in bp or not bp['plot_beats']:
             bp['plot_beats'] = ai_data.get('plot_beats', [])
+
+        # Validate critical fields after enrichment
+        title = bp.get('book_metadata', {}).get('title')
+        genre = bp.get('book_metadata', {}).get('genre')
+        if not title:
+            utils.log("ENRICHER", "⚠️ Warning: book_metadata.title is missing after enrichment.")
+        if not genre:
+            utils.log("ENRICHER", "⚠️ Warning: book_metadata.genre is missing after enrichment.")
         return bp
     except Exception as e:
         utils.log("ENRICHER", f"Enrichment failed: {e}")
@@ -288,3 +296,66 @@ def create_chapter_plan(events, bp, folder):
     except Exception as e:
         utils.log("ARCHITECT", f"Failed to create chapter plan: {e}")
         return []
+
+
+def validate_outline(events, chapters, bp, folder):
+    """Pre-generation outline validation gate (Action Plan Step 3: Alt 2-B).
+
+    Checks for: missing required beats, character continuity issues, severe pacing
+    imbalances, and POV logic errors. Returns findings but never blocks generation —
+    issues are logged as warnings so the writer can proceed.
+    """
+    utils.log("ARCHITECT", "Validating outline before writing phase...")
+
+    beats_context = bp.get('plot_beats', [])
+    chars_summary = [{"name": c.get("name"), "role": c.get("role")} for c in bp.get('characters', [])]
+
+    # Sample chapter data to keep prompt size manageable
+    chapters_sample = chapters[:5] + chapters[-5:] if len(chapters) > 10 else chapters
+
+    prompt = f"""
+ROLE: Continuity Editor
+TASK: Review this chapter outline for issues that could cause expensive rewrites later.
+
+REQUIRED_BEATS (must all appear somewhere in the chapter plan):
+{json.dumps(beats_context)}
+
+CHARACTERS:
+{json.dumps(chars_summary)}
+
+CHAPTER_PLAN (sample — first 5 and last 5 chapters):
+{json.dumps(chapters_sample)}
+
+CHECK FOR:
+1. MISSING_BEATS: Are all required plot beats present? List any absent beats by name.
+2. CONTINUITY: Are there character deaths/revivals, unacknowledged time jumps, or contradictions visible in the outline?
+3. PACING: Are there 3+ consecutive chapters with identical pacing that would create reader fatigue?
+4. POV_LOGIC: Are key emotional scenes assigned to the most appropriate POV character?
+
+OUTPUT_FORMAT (JSON):
+{{
+  "issues": [
+    {{"type": "missing_beat|continuity|pacing|pov", "description": "...", "severity": "critical|warning"}}
+  ],
+  "overall_severity": "ok|warning|critical",
+  "summary": "One-sentence summary of findings."
+}}
+"""
+    try:
+        response = ai_models.model_logic.generate_content(prompt)
+        utils.log_usage(folder, ai_models.model_logic.name, response.usage_metadata)
+        result = json.loads(utils.clean_json(response.text))
+
+        severity = result.get('overall_severity', 'ok')
+        issues = result.get('issues', [])
+        summary = result.get('summary', 'No issues found.')
+
+        for issue in issues:
+            prefix = "⚠️" if issue.get('severity') == 'warning' else "🚨"
+            utils.log("ARCHITECT", f" {prefix} Outline {issue.get('type', 'issue')}: {issue.get('description', '')}")
+
+        utils.log("ARCHITECT", f"Outline validation complete: {severity.upper()} — {summary}")
+        return result
+    except Exception as e:
+        utils.log("ARCHITECT", f"Outline validation failed (non-blocking): {e}")
+        return {"issues": [], "overall_severity": "ok", "summary": "Validation skipped."}
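The validator parses the model's reply with `json.loads(utils.clean_json(response.text))`. The project's real `utils.clean_json` is not shown in this diff, so the version below is a hypothetical stand-in illustrating what such a helper typically does: strip markdown code fences and extract the outermost JSON object from a model response before parsing:

```python
import json
import re

def clean_json(text):
    # Hypothetical sketch of a clean_json helper; the project's actual
    # implementation may differ. Strip leading/trailing ``` fences, then
    # slice from the first '{' to the last '}'.
    text = re.sub(r'^```(?:json)?\s*|\s*```$', '', text.strip())
    start, end = text.find('{'), text.rfind('}')
    return text[start:end + 1] if start != -1 and end > start else text

raw = '```json\n{"overall_severity": "warning", "issues": []}\n```'
result = json.loads(clean_json(raw))
```

Slicing to the outermost braces tolerates chatty preambles ("Here is the JSON you asked for: ..."), which is the usual failure mode when parsing LLM output directly.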
@@ -8,17 +8,27 @@ def _empty_state():
     return {"active_threads": [], "immediate_handoff": "", "resolved_threads": [], "chapter": 0}
 
 
-def load_story_state(folder):
-    """Load structured story state from story_state.json, or return empty state."""
+def load_story_state(folder, project_id=None):
+    """Load structured story state from DB (if project_id given) or story_state.json fallback."""
+    if project_id is not None:
+        try:
+            from web.db import StoryState
+            record = StoryState.query.filter_by(project_id=project_id).first()
+            if record and record.state_json:
+                return json.loads(record.state_json) or _empty_state()
+        except Exception:
+            pass  # Fall through to file-based load if DB unavailable (e.g. CLI context)
+
     path = os.path.join(folder, "story_state.json")
     if os.path.exists(path):
         return utils.load_json(path) or _empty_state()
     return _empty_state()
 
 
-def update_story_state(chapter_text, chapter_num, current_state, folder):
+def update_story_state(chapter_text, chapter_num, current_state, folder, project_id=None):
     """Use model_logic to extract structured story threads from the new chapter
-    and save the updated state to story_state.json. Returns the new state."""
+    and save the updated state to the StoryState DB table and/or story_state.json.
+    Returns the new state."""
     utils.log("STATE", f"Updating story state after Ch {chapter_num}...")
     prompt = f"""
 ROLE: Story State Tracker
@@ -54,9 +64,28 @@ def update_story_state(chapter_text, chapter_num, current_state, folder):
         utils.log_usage(folder, ai_models.model_logic.name, response.usage_metadata)
         new_state = json.loads(utils.clean_json(response.text))
         new_state['chapter'] = chapter_num
+
+        # Write to DB if project_id is available
+        if project_id is not None:
+            try:
+                from web.db import db, StoryState
+                from datetime import datetime
+                record = StoryState.query.filter_by(project_id=project_id).first()
+                if record:
+                    record.state_json = json.dumps(new_state)
+                    record.updated_at = datetime.utcnow()
+                else:
+                    record = StoryState(project_id=project_id, state_json=json.dumps(new_state))
+                    db.session.add(record)
+                db.session.commit()
+            except Exception as db_err:
+                utils.log("STATE", f" -> DB write failed: {db_err}. Falling back to file.")
+
+        # Always write to file for backward compat with CLI
         path = os.path.join(folder, "story_state.json")
         with open(path, 'w') as f:
             json.dump(new_state, f, indent=2)
+
         utils.log("STATE", f" -> Story state saved. Active threads: {len(new_state.get('active_threads', []))}")
         return new_state
     except Exception as e:
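This hunk introduces a dual-write pattern: upsert into the `StoryState` table when a `project_id` is available, but always mirror to `story_state.json` so CLI runs keep working. The sketch below shows the same control flow with a plain dict standing in for the real Flask-SQLAlchemy table (the `save_state` name and dict-backed "DB" are illustrative, not part of the codebase):

```python
import json
import os
import tempfile

def save_state(state, folder, db=None, project_id=None):
    # Dual-write sketch: try the primary store first, then always mirror to
    # the JSON file. `db` is a dict standing in for the StoryState table.
    if db is not None and project_id is not None:
        try:
            db[project_id] = json.dumps(state)  # upsert: insert or overwrite
        except Exception as err:
            # Non-fatal: the file write below still preserves the state.
            print(f"DB write failed: {err}. Falling back to file.")
    path = os.path.join(folder, "story_state.json")
    with open(path, 'w') as f:
        json.dump(state, f, indent=2)
    return path

folder = tempfile.mkdtemp()
db = {}
save_state({"chapter": 3, "active_threads": ["heist"]}, folder, db, project_id=7)
```

Keeping the file write unconditional (rather than making it the `except` branch) means the file is always the lowest common denominator: a CLI process with no DB session and a web process with one both read a consistent snapshot.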
@@ -104,11 +104,122 @@ def create_initial_persona(bp, folder):
|
|||||||
return {"name": "AI Author", "bio": "Standard, balanced writing style."}
|
return {"name": "AI Author", "bio": "Standard, balanced writing style."}
|
||||||
|
|
||||||
|
|
||||||
def refine_persona(bp, text, folder):
|
def validate_persona(bp, persona_details, folder):
|
||||||
|
+    """Validate a newly created persona by generating a 400-word sample and scoring it.
+
+    Experiment 6 (Iterative Persona Validation): generates a test passage in the
+    persona's voice and evaluates voice quality before accepting it. This front-loads
+    quality assurance so Phase 3 starts with a well-calibrated author voice.
+
+    Returns (is_valid: bool, score: int). Threshold: score >= 7 → accepted.
+    """
+    meta = bp.get('book_metadata', {})
+    genre = meta.get('genre', 'Fiction')
+    tone = meta.get('style', {}).get('tone', 'balanced')
+    name = persona_details.get('name', 'Unknown Author')
+    bio = persona_details.get('bio', 'Standard style.')
+
+    sample_prompt = f"""
+ROLE: Fiction Writer
+TASK: Write a 400-word opening scene that perfectly demonstrates this author's voice.
+
+AUTHOR_PERSONA:
+Name: {name}
+Style/Bio: {bio}
+
+GENRE: {genre}
+TONE: {tone}
+
+RULES:
+- Exactly ~400 words of prose (no chapter header, no commentary)
+- Must reflect the persona's stated sentence structure, vocabulary, and voice
+- Show, don't tell — no filter words (felt, saw, heard, realized, noticed)
+- Deep POV: immerse the reader in a character's immediate experience
+
+OUTPUT: Prose only.
+"""
+    try:
+        resp = ai_models.model_logic.generate_content(sample_prompt)
+        utils.log_usage(folder, ai_models.model_logic.name, resp.usage_metadata)
+        sample_text = resp.text
+    except Exception as e:
+        utils.log("SYSTEM", f" -> Persona validation sample failed: {e}. Accepting persona.")
+        return True, 7
+
+    # Lightweight scoring: focused on voice quality (not the full 13-point rubric)
+    score_prompt = f"""
+ROLE: Literary Editor
+TASK: Score this prose sample for author voice quality.
+
+EXPECTED_PERSONA:
+{bio}
+
+SAMPLE:
+{sample_text}
+
+CRITERIA:
+1. Does the prose reflect the stated author persona? (voice, register, sentence style)
+2. Is the prose free of filter words (felt, saw, heard, noticed, realized)?
+3. Is it deep POV — immediate, immersive, not distant narration?
+4. Is there genuine sentence variety and strong verb choice?
+
+SCORING (1-10):
+- 8-10: Voice is distinct, matches persona, clean deep POV
+- 6-7: Reasonable voice, minor filter word issues
+- 1-5: Generic AI prose, heavy filter words, or persona not reflected
+
+OUTPUT_FORMAT (JSON): {{"score": int, "reason": "One sentence."}}
+"""
+    try:
+        resp2 = ai_models.model_logic.generate_content(score_prompt)
+        utils.log_usage(folder, ai_models.model_logic.name, resp2.usage_metadata)
+        data = json.loads(utils.clean_json(resp2.text))
+        score = int(data.get('score', 7))
+        reason = data.get('reason', '')
+        is_valid = score >= 7
+        utils.log("SYSTEM", f" -> Persona validation: {score}/10 {'✅ Accepted' if is_valid else '❌ Rejected'} — {reason}")
+        return is_valid, score
+    except Exception as e:
+        utils.log("SYSTEM", f" -> Persona scoring failed: {e}. Accepting persona.")
+        return True, 7
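A hypothetical caller-side sketch of the acceptance rule above: `accept_persona` and the candidate tuples are illustrative, not functions from this repository. It applies the same `score >= 7` gate to a batch of generated (persona, score) candidates and falls back to the strongest one when none passes, which mirrors how the validator defaults to accepting on failure rather than blocking Phase 3.

```python
# Hypothetical sketch (not repository code): pick the first persona candidate
# that clears the validation threshold, else keep the best-scoring one anyway.
def accept_persona(candidates, threshold=7):
    best = None
    for persona, score in candidates:
        if score >= threshold:
            return persona, score  # mirrors the score >= 7 acceptance rule
        if best is None or score > best[1]:
            best = (persona, score)
    return best  # nothing passed: proceed with the strongest candidate

result = accept_persona([("drafty voice", 5), ("sharp voice", 8)])
```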
+def refine_persona(bp, text, folder, pov_character=None):
    utils.log("SYSTEM", "Refining Author Persona based on recent chapters...")
    ad = bp.get('book_metadata', {}).get('author_details', {})
-    current_bio = ad.get('bio', 'Standard style.')

+    # If a POV character is given and has a voice_profile, refine that instead
+    if pov_character:
+        for char in bp.get('characters', []):
+            if char.get('name') == pov_character and char.get('voice_profile'):
+                vp = char['voice_profile']
+                current_bio = vp.get('bio', 'Standard style.')
+                prompt = f"""
+ROLE: Literary Stylist
+TASK: Refine a POV character's voice profile based on the text sample.
+
+INPUT_DATA:
+- TEXT_SAMPLE: {text[:3000]}
+- CHARACTER: {pov_character}
+- CURRENT_VOICE_BIO: {current_bio}
+
+GOAL: Ensure future chapters for this POV character sound exactly like the sample. Highlight quirks, patterns, vocabulary specific to this character's perspective.
+
+OUTPUT_FORMAT (JSON): {{ "bio": "Updated voice bio..." }}
+"""
+                try:
+                    response = ai_models.model_logic.generate_content(prompt)
+                    utils.log_usage(folder, ai_models.model_logic.name, response.usage_metadata)
+                    new_bio = json.loads(utils.clean_json(response.text)).get('bio')
+                    if new_bio:
+                        char['voice_profile']['bio'] = new_bio
+                        utils.log("SYSTEM", f" -> Voice profile bio updated for '{pov_character}'.")
+                except Exception as e:
+                    utils.log("SYSTEM", f" -> Voice profile refinement failed for '{pov_character}': {e}")
+                return ad  # Return author_details unchanged
+
+    # Default: refine the main author persona bio
+    current_bio = ad.get('bio', 'Standard style.')
    prompt = f"""
ROLE: Literary Stylist
TASK: Refine Author Bio based on text sample.
@@ -157,10 +268,12 @@ def update_persona_sample(bp, folder):
    author_name = meta.get('author', 'Unknown Author')

+    # Use a local file mirror for the engine context (runs outside Flask app context)
+    _personas_file = os.path.join(config.PERSONAS_DIR, "personas.json")
    personas = {}
-    if os.path.exists(config.PERSONAS_FILE):
+    if os.path.exists(_personas_file):
        try:
-            with open(config.PERSONAS_FILE, 'r') as f: personas = json.load(f)
+            with open(_personas_file, 'r') as f: personas = json.load(f)
        except: pass

    if author_name not in personas:
@@ -189,4 +302,4 @@ def update_persona_sample(bp, folder):
    if filename not in personas[author_name]['sample_files']:
        personas[author_name]['sample_files'].append(filename)

-    with open(config.PERSONAS_FILE, 'w') as f: json.dump(personas, f, indent=2)
+    with open(_personas_file, 'w') as f: json.dump(personas, f, indent=2)

243 story/writer.py
@@ -1,9 +1,11 @@
import json
import os
+import time
from core import config, utils
from ai import models as ai_models
from story.style_persona import get_style_guidelines
from story.editor import evaluate_chapter_quality
+from story import eval_logger


def get_genre_instructions(genre):
@@ -74,6 +76,49 @@ def get_genre_instructions(genre):
    )


+def build_persona_info(bp):
+    """Build the author persona string from bp['book_metadata']['author_details'].
+
+    Extracted as a standalone function so engine.py can pre-load the persona once
+    for the entire writing phase instead of re-reading sample files for every chapter.
+    Returns the assembled persona string, or None if no author_details are present.
+    """
+    meta = bp.get('book_metadata', {})
+    ad = meta.get('author_details', {})
+    if not ad and 'author_bio' in meta:
+        return meta['author_bio']
+    if not ad:
+        return None
+
+    info = f"Name: {ad.get('name', meta.get('author', 'Unknown'))}\n"
+    if ad.get('age'): info += f"Age: {ad['age']}\n"
+    if ad.get('gender'): info += f"Gender: {ad['gender']}\n"
+    if ad.get('race'): info += f"Race: {ad['race']}\n"
+    if ad.get('nationality'): info += f"Nationality: {ad['nationality']}\n"
+    if ad.get('language'): info += f"Language: {ad['language']}\n"
+    if ad.get('bio'): info += f"Style/Bio: {ad['bio']}\n"
+
+    samples = []
+    if ad.get('sample_text'):
+        samples.append(f"--- SAMPLE PARAGRAPH ---\n{ad['sample_text']}")
+
+    if ad.get('sample_files'):
+        for fname in ad['sample_files']:
+            fpath = os.path.join(config.PERSONAS_DIR, fname)
+            if os.path.exists(fpath):
+                try:
+                    with open(fpath, 'r', encoding='utf-8', errors='ignore') as f:
+                        content = f.read(3000)
+                    samples.append(f"--- SAMPLE FROM {fname} ---\n{content}...")
+                except:
+                    pass
+
+    if samples:
+        info += "\nWRITING STYLE SAMPLES:\n" + "\n".join(samples)
+
+    return info
+
+
def expand_beats_to_treatment(beats, pov_char, genre, folder):
    """Expand sparse scene beats into a Director's Treatment using a fast model.
    This pre-flight step gives the writer detailed staging and emotional direction,
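A minimal sketch of the pre-load pattern `build_persona_info` enables: the engine assembles the persona string once, then hands it to every chapter call along with a 0.0–1.0 position in the book. The names `run_writing_phase`, `build_persona_info_stub`, and `write_fn` are illustrative stand-ins, not the project's real engine code.

```python
# Hedged sketch of the persona pre-load pattern; names are stand-ins, not
# the repository's engine code.
def build_persona_info_stub(bp):
    ad = bp.get('book_metadata', {}).get('author_details', {})
    return f"Name: {ad.get('name', 'Unknown')}" if ad else None

def run_writing_phase(bp, chapters, write_fn):
    persona = build_persona_info_stub(bp)  # built once for the whole phase
    total = len(chapters)
    out = []
    for i, chap in enumerate(chapters):
        position = i / max(total - 1, 1)  # first chapter 0.0 ... last chapter 1.0
        out.append(write_fn(chap, prebuilt_persona=persona, chapter_position=position))
    return out
```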
@@ -106,7 +151,15 @@ def expand_beats_to_treatment(beats, pov_char, genre, folder):
    return None


-def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None, next_chapter_hint=""):
+def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None, next_chapter_hint="", prebuilt_persona=None, chapter_position=None):
+    """Write a single chapter with iterative quality evaluation.
+
+    Args:
+        prebuilt_persona: Pre-loaded persona string from build_persona_info(bp).
+            When provided, skips per-chapter file reads (persona cache optimisation).
+        chapter_position: Float 0.0–1.0 indicating position in book. Used for
+            adaptive scoring thresholds (setup = lenient, climax = strict).
+    """
    pacing = chap.get('pacing', 'Standard')
    est_words = chap.get('estimated_words', 'Flexible')
    utils.log("WRITER", f"Drafting Ch {chap['chapter_number']} ({pacing} | ~{est_words} words): {chap['title']}")
@@ -117,34 +170,22 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None,

    pov_char = chap.get('pov_character', '')

-    ad = meta.get('author_details', {})
-    if not ad and 'author_bio' in meta:
-        persona_info = meta['author_bio']
+    # Check for character-specific voice profile (Step 2: Character Voice Profiles)
+    character_voice = None
+    if pov_char:
+        for char in bp.get('characters', []):
+            if char.get('name') == pov_char and char.get('voice_profile'):
+                vp = char['voice_profile']
+                character_voice = f"Style/Bio: {vp.get('bio', '')}\nKeywords: {', '.join(vp.get('keywords', []))}"
+                utils.log("WRITER", f" -> Using voice profile for POV character: {pov_char}")
+                break
+
+    if character_voice:
+        persona_info = character_voice
+    elif prebuilt_persona is not None:
+        persona_info = prebuilt_persona
    else:
-        persona_info = f"Name: {ad.get('name', meta.get('author', 'Unknown'))}\n"
-        if ad.get('age'): persona_info += f"Age: {ad['age']}\n"
-        if ad.get('gender'): persona_info += f"Gender: {ad['gender']}\n"
-        if ad.get('race'): persona_info += f"Race: {ad['race']}\n"
-        if ad.get('nationality'): persona_info += f"Nationality: {ad['nationality']}\n"
-        if ad.get('language'): persona_info += f"Language: {ad['language']}\n"
-        if ad.get('bio'): persona_info += f"Style/Bio: {ad['bio']}\n"
-
-        samples = []
-        if ad.get('sample_text'):
-            samples.append(f"--- SAMPLE PARAGRAPH ---\n{ad['sample_text']}")
-
-        if ad.get('sample_files'):
-            for fname in ad['sample_files']:
-                fpath = os.path.join(config.PERSONAS_DIR, fname)
-                if os.path.exists(fpath):
-                    try:
-                        with open(fpath, 'r', encoding='utf-8', errors='ignore') as f:
-                            content = f.read(3000)
-                        samples.append(f"--- SAMPLE FROM {fname} ---\n{content}...")
-                    except: pass
-
-        if samples:
-            persona_info += "\nWRITING STYLE SAMPLES:\n" + "\n".join(samples)
+        persona_info = build_persona_info(bp) or "Standard, balanced writing style."

    # Only inject characters named in the chapter beats + the POV character
    beats_text = " ".join(str(b) for b in chap.get('beats', []))
@@ -217,8 +258,15 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None,
        trunc_content = utils.truncate_to_tokens(prev_content, 1000)
        prev_context_block = f"\nPREVIOUS CHAPTER TEXT (Last ~1000 Tokens — For Immediate Continuity):\n{trunc_content}\n"

-    utils.log("WRITER", f" -> Expanding beats to Director's Treatment...")
-    treatment = expand_beats_to_treatment(chap.get('beats', []), pov_char, genre, folder)
+    # Skip beat expansion if beats are already detailed (saves ~5K tokens per chapter)
+    beats_list = chap.get('beats', [])
+    total_beat_words = sum(len(str(b).split()) for b in beats_list)
+    if total_beat_words > 100:
+        utils.log("WRITER", f" -> Beats already detailed ({total_beat_words} words). Skipping expansion.")
+        treatment = None
+    else:
+        utils.log("WRITER", f" -> Expanding beats to Director's Treatment...")
+        treatment = expand_beats_to_treatment(beats_list, pov_char, genre, folder)
    treatment_block = f"\n DIRECTORS_TREATMENT (Staged expansion of the beats — use this as your scene blueprint; DRAMATIZE every moment, do NOT summarize):\n{treatment}\n" if treatment else ""

    genre_mandates = get_genre_instructions(genre)
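The beat-expansion skip rule above reduces to a word count over the stringified beats; here it is restated as a self-contained check, assuming the same 100-word limit (expansion runs only while the combined beat text stays at or under the limit).

```python
# Standalone restatement of the skip rule (same 100-word limit as the diff).
def needs_expansion(beats, limit=100):
    total_beat_words = sum(len(str(b).split()) for b in beats)
    return total_beat_words <= limit

sparse = ["Anna breaks into the vault", "Alarm trips early"]   # 8 beat words
detailed = ["word " * 60, "word " * 60]                        # 120 beat words
```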
@@ -327,30 +375,125 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None,
        utils.log("WRITER", f"⚠️ Failed Ch {chap['chapter_number']}: {e}")
        return f"## Chapter {chap['chapter_number']} Failed\n\nError: {e}"

-    max_attempts = 5
+    # Exp 7: Two-Pass Drafting — Polish rough draft with the logic (Pro) model before evaluation.
+    # Skip when local filter-word heuristic shows draft is already clean (saves ~8K tokens/chapter).
+    _guidelines_for_polish = get_style_guidelines()
+    _fw_set = set(_guidelines_for_polish['filter_words'])
+    _draft_word_list = current_text.lower().split() if current_text else []
+    _fw_hit_count = sum(1 for w in _draft_word_list if w in _fw_set)
+    _fw_density = _fw_hit_count / max(len(_draft_word_list), 1)
+    _skip_polish = _fw_density < 0.008  # < ~1 filter word per 125 words → draft already clean
+
+    if current_text and not _skip_polish:
+        utils.log("WRITER", f" -> Two-pass polish (Pro model, FW density {_fw_density:.3f})...")
+        fw_list = '", "'.join(_guidelines_for_polish['filter_words'])
+        polish_prompt = f"""
+ROLE: Senior Fiction Editor
+TASK: Polish this rough draft into publication-ready prose.
+
+AUTHOR_VOICE:
+{persona_info}
+
+GENRE: {genre}
+TARGET_WORDS: ~{est_words}
+BEATS (must all be covered): {json.dumps(chap.get('beats', []))}
+
+CONTINUITY (maintain seamless flow from previous chapter):
+{prev_context_block if prev_context_block else "First chapter — no prior context."}
+
+POLISH_CHECKLIST:
+1. FILTER_REMOVAL: Remove all filter words [{fw_list}] — rewrite each to show the sensation directly.
+2. DEEP_POV: Ensure the reader is inside the POV character's experience at all times — no external narration.
+3. ACTIVE_VOICE: Replace all 'was/were + -ing' constructions with active alternatives.
+4. SENTENCE_VARIETY: No two consecutive sentences starting with the same word. Vary length for rhythm.
+5. STRONG_VERBS: Delete adverbs; replace with precise verbs.
+6. NO_AI_ISMS: Remove: 'testament to', 'tapestry', 'palpable tension', 'azure', 'cerulean', 'bustling', 'a sense of'.
+7. CHAPTER_HOOK: Ensure the final paragraph ends on unresolved tension, a question, or a threat.
+8. PRESERVE: Keep all narrative beats, approximate word count (±15%), and chapter header.
+
+ROUGH_DRAFT:
+{current_text}
+
+OUTPUT: Complete polished chapter in Markdown.
+"""
+        try:
+            resp_polish = ai_models.model_logic.generate_content(polish_prompt)
+            utils.log_usage(folder, ai_models.model_logic.name, resp_polish.usage_metadata)
+            polished = resp_polish.text
+            if polished:
+                polished_words = len(polished.split())
+                utils.log("WRITER", f" -> Polished: {polished_words:,} words.")
+                current_text = polished
+        except Exception as e:
+            utils.log("WRITER", f" -> Polish pass failed: {e}. Proceeding with raw draft.")
+    elif current_text:
+        utils.log("WRITER", f" -> Draft clean (FW density {_fw_density:.3f}). Skipping polish pass.")
+
+    # Adaptive attempts: climax/resolution chapters (position >= 0.75) get 3 passes;
+    # earlier chapters keep 2 (polish pass already refines prose before evaluation).
+    if chapter_position is not None and chapter_position >= 0.75:
+        max_attempts = 3
+    else:
+        max_attempts = 2
    SCORE_AUTO_ACCEPT = 8
-    SCORE_PASSING = 7
+    # Adaptive passing threshold: lenient for early setup chapters, strict for climax/resolution.
+    # chapter_position=0.0 → setup (SCORE_PASSING=6.5), chapter_position=1.0 → climax (7.5)
+    if chapter_position is not None:
+        SCORE_PASSING = round(6.5 + chapter_position * 1.0, 1)
+        utils.log("WRITER", f" -> Adaptive threshold: SCORE_PASSING={SCORE_PASSING} (position={chapter_position:.2f})")
+    else:
+        SCORE_PASSING = 7
    SCORE_REWRITE_THRESHOLD = 6

+    # Evaluation log entry — written to eval_log.json for the HTML report.
+    _eval_entry = {
+        "ts": time.strftime('%Y-%m-%d %H:%M:%S'),
+        "chapter_num": chap['chapter_number'],
+        "title": chap.get('title', ''),
+        "pov_character": chap.get('pov_character', ''),
+        "pacing": pacing,
+        "target_words": est_words,
+        "actual_words": draft_words,
+        "chapter_position": chapter_position,
+        "score_threshold": SCORE_PASSING,
+        "score_auto_accept": SCORE_AUTO_ACCEPT,
+        "polish_applied": bool(current_text and not _skip_polish),
+        "filter_word_density": round(_fw_density, 4),
+        "attempts": [],
+        "final_score": 0,
+        "final_decision": "unknown",
+    }

    best_score = 0
    best_text = current_text
    past_critiques = []

    for attempt in range(1, max_attempts + 1):
        utils.log("WRITER", f" -> Evaluating Ch {chap['chapter_number']} (Attempt {attempt}/{max_attempts})...")
-        score, critique = evaluate_chapter_quality(current_text, chap['title'], meta.get('genre', 'Fiction'), ai_models.model_writer, folder, series_context=series_block.strip())
+        score, critique = evaluate_chapter_quality(current_text, chap['title'], meta.get('genre', 'Fiction'), ai_models.model_logic, folder, series_context=series_block.strip())

        past_critiques.append(f"Attempt {attempt}: {critique}")
+        _att = {"n": attempt, "score": score, "critique": critique[:700], "decision": None}

        if "Evaluation error" in critique:
            utils.log("WRITER", f" ⚠️ {critique}. Keeping current draft.")
            if best_score == 0: best_text = current_text
+            _att["decision"] = "eval_error"
+            _eval_entry["attempts"].append(_att)
+            _eval_entry["final_score"] = best_score
+            _eval_entry["final_decision"] = "eval_error"
+            eval_logger.append_eval_entry(folder, _eval_entry)
            break

        utils.log("WRITER", f" Score: {score}/10. Critique: {critique}")

        if score >= SCORE_AUTO_ACCEPT:
            utils.log("WRITER", " 🌟 Auto-Accept threshold met.")
+            _att["decision"] = "auto_accepted"
+            _eval_entry["attempts"].append(_att)
+            _eval_entry["final_score"] = score
+            _eval_entry["final_decision"] = "auto_accepted"
+            eval_logger.append_eval_entry(folder, _eval_entry)
            return current_text

        if score > best_score:
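The skip-polish heuristic in the hunk above can be restated as a standalone function, assuming the same 0.008 density cutoff (fewer than roughly 1 filter word per 125 words); in the diff the word list comes from `get_style_guidelines()`.

```python
# Standalone restatement of the polish-skip heuristic (same 0.008 cutoff).
def should_skip_polish(text, filter_words, threshold=0.008):
    fw = set(filter_words)
    words = text.lower().split()
    hits = sum(1 for w in words if w in fw)
    density = hits / max(len(words), 1)
    return density < threshold  # True → draft is clean enough to skip polish
```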
@@ -360,9 +503,19 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None,
        if attempt == max_attempts:
            if best_score >= SCORE_PASSING:
                utils.log("WRITER", f" ✅ Max attempts reached. Accepting best score ({best_score}).")
+                _att["decision"] = "accepted"
+                _eval_entry["attempts"].append(_att)
+                _eval_entry["final_score"] = best_score
+                _eval_entry["final_decision"] = "accepted"
+                eval_logger.append_eval_entry(folder, _eval_entry)
                return best_text
            else:
                utils.log("WRITER", f" ⚠️ Quality low ({best_score}/{SCORE_PASSING}) but max attempts reached. Proceeding.")
+                _att["decision"] = "below_threshold"
+                _eval_entry["attempts"].append(_att)
+                _eval_entry["final_score"] = best_score
+                _eval_entry["final_decision"] = "below_threshold"
+                eval_logger.append_eval_entry(folder, _eval_entry)
                return best_text

        if score < SCORE_REWRITE_THRESHOLD:
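The `SCORE_PASSING` value compared against `best_score` in this hunk is the linear ramp set earlier in `write_chapter`; in isolation, with the same formula and rounding, it behaves as follows.

```python
# The adaptive passing threshold from write_chapter, in isolation.
def passing_score(chapter_position):
    if chapter_position is None:
        return 7
    return round(6.5 + chapter_position * 1.0, 1)  # 0.0 → 6.5, 1.0 → 7.5
```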
@@ -378,12 +531,23 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None,
            """

            try:
+                _pro = getattr(ai_models, 'pro_model_name', 'models/gemini-2.0-pro-exp')
+                ai_models.model_logic.update(_pro)
                resp_rewrite = ai_models.model_logic.generate_content(full_rewrite_prompt)
                utils.log_usage(folder, ai_models.model_logic.name, resp_rewrite.usage_metadata)
                current_text = resp_rewrite.text
+                ai_models.model_logic.update(ai_models.logic_model_name)
+                _att["decision"] = "full_rewrite"
+                _eval_entry["attempts"].append(_att)
                continue
            except Exception as e:
+                ai_models.model_logic.update(ai_models.logic_model_name)
                utils.log("WRITER", f"Full rewrite failed: {e}. Falling back to refinement.")
+                _att["decision"] = "full_rewrite_failed"
+                # fall through to refinement; decision will be overwritten below
+
+        else:
+            _att["decision"] = "refinement"

        utils.log("WRITER", f" -> Refining Ch {chap['chapter_number']} based on feedback...")
@@ -438,8 +602,21 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None,
            resp_refine = ai_models.model_writer.generate_content(refine_prompt)
            utils.log_usage(folder, ai_models.model_writer.name, resp_refine.usage_metadata)
            current_text = resp_refine.text
+            if _att["decision"] == "full_rewrite_failed":
+                _att["decision"] = "refinement"  # rewrite failed, fell back to refinement
+            _eval_entry["attempts"].append(_att)
        except Exception as e:
            utils.log("WRITER", f"Refinement failed: {e}")
+            _att["decision"] = "refinement_failed"
+            _eval_entry["attempts"].append(_att)
+            _eval_entry["final_score"] = best_score
+            _eval_entry["final_decision"] = "refinement_failed"
+            eval_logger.append_eval_entry(folder, _eval_entry)
            return best_text

+    # Reached only if eval_error break occurred; write log before returning.
+    if _eval_entry["final_decision"] == "unknown":
+        _eval_entry["final_score"] = best_score
+        _eval_entry["final_decision"] = "best_available"
+        eval_logger.append_eval_entry(folder, _eval_entry)
    return best_text
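`eval_logger.append_eval_entry` is called throughout this diff but its implementation is not shown. A minimal implementation consistent with these call sites might look like the following (one growing JSON list per run folder); this is an assumption for illustration, not the repository's code.

```python
# Assumed sketch of eval_logger.append_eval_entry -- NOT the repository's
# implementation, just one shape consistent with how it is called above.
import json
import os

def append_eval_entry(folder, entry):
    path = os.path.join(folder, "eval_log.json")
    entries = []
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as f:
            entries = json.load(f)  # existing log for this run
    entries.append(entry)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(entries, f, indent=2)
```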
@@ -34,10 +34,35 @@
            <div class="card-body">
                <h5 class="card-title">{{ p.name }}</h5>
                <p class="card-text text-muted small">Created: {{ p.created_at.strftime('%Y-%m-%d') }}</p>
-               <a href="/project/{{ p.id }}" class="btn btn-outline-primary stretched-link">Open Project</a>
+               <div class="d-flex justify-content-between align-items-center mt-3">
+                   <a href="/project/{{ p.id }}" class="btn btn-outline-primary">Open Project</a>
+                   <button class="btn btn-outline-danger btn-sm" data-bs-toggle="modal" data-bs-target="#deleteModal{{ p.id }}" title="Delete project">
+                       <i class="fas fa-trash"></i>
+                   </button>
+               </div>
            </div>
        </div>
    </div>

+   <!-- Delete Modal for {{ p.name }} -->
+   <div class="modal fade" id="deleteModal{{ p.id }}" tabindex="-1">
+       <div class="modal-dialog">
+           <form class="modal-content" action="/project/{{ p.id }}/delete" method="POST">
+               <div class="modal-header bg-danger text-white">
+                   <h5 class="modal-title"><i class="fas fa-exclamation-triangle me-2"></i>Delete Project</h5>
+                   <button type="button" class="btn-close btn-close-white" data-bs-dismiss="modal"></button>
+               </div>
+               <div class="modal-body">
+                   <p>Permanently delete <strong>{{ p.name }}</strong> and all its runs and generated files?</p>
+                   <p class="text-danger fw-bold mb-0">This cannot be undone.</p>
+               </div>
+               <div class="modal-footer">
+                   <button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Cancel</button>
+                   <button type="submit" class="btn btn-danger">Delete</button>
+               </div>
+           </form>
+       </div>
+   </div>
{% else %}
    <div class="col-12 text-center py-5">
        <h4 class="text-muted mb-3">No projects yet. Start writing!</h4>
@@ -11,6 +11,11 @@
    <button class="btn btn-sm btn-outline-info ms-2" data-bs-toggle="modal" data-bs-target="#cloneProjectModal" title="Clone/Fork Project" data-bs-toggle="tooltip">
        <i class="fas fa-code-branch"></i>
    </button>
+   {% if not locked %}
+   <button class="btn btn-sm btn-outline-danger ms-2" data-bs-toggle="modal" data-bs-target="#deleteProjectModal" title="Delete Project">
+       <i class="fas fa-trash"></i>
+   </button>
+   {% endif %}
</div>
<div class="mt-2">
    <span class="badge bg-secondary">{{ bible.project_metadata.genre }}</span>
@@ -546,6 +551,26 @@
        </div>
    </div>

+   <!-- Delete Project Modal -->
+   <div class="modal fade" id="deleteProjectModal" tabindex="-1">
+       <div class="modal-dialog">
+           <form class="modal-content" action="/project/{{ project.id }}/delete" method="POST">
+               <div class="modal-header bg-danger text-white">
+                   <h5 class="modal-title"><i class="fas fa-exclamation-triangle me-2"></i>Delete Project</h5>
+                   <button type="button" class="btn-close btn-close-white" data-bs-dismiss="modal"></button>
+               </div>
+               <div class="modal-body">
+                   <p>This will permanently delete <strong>{{ project.name }}</strong> and all its runs, files, and generated books.</p>
+                   <p class="text-danger fw-bold">This action cannot be undone.</p>
+               </div>
+               <div class="modal-footer">
+                   <button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Cancel</button>
+                   <button type="submit" class="btn btn-danger">Delete Project</button>
+               </div>
+           </form>
+       </div>
+   </div>

    <!-- Full Bible JSON Modal -->
    <div class="modal fade" id="fullBibleModal" tabindex="-1">
        <div class="modal-dialog modal-lg modal-dialog-scrollable">
@@ -48,6 +48,27 @@
        </div>
    </div>

+   <!-- Chapter Navigation Footer -->
+   <div class="card-footer bg-transparent d-flex justify-content-between align-items-center py-2">
+       {% if not loop.first %}
+           {% set prev_ch = manuscript[loop.index0 - 1] %}
+           <a href="#ch-{{ prev_ch.num }}" class="btn btn-sm btn-outline-secondary">
+               <i class="fas fa-arrow-up me-1"></i>Ch {{ prev_ch.num }}
+           </a>
+       {% else %}
+           <span></span>
+       {% endif %}
+       <a href="#" class="btn btn-sm btn-link text-muted small py-0">Back to Top</a>
+       {% if not loop.last %}
+           {% set next_ch = manuscript[loop.index0 + 1] %}
+           <a href="#ch-{{ next_ch.num }}" class="btn btn-sm btn-outline-secondary">
+               Ch {{ next_ch.num }}<i class="fas fa-arrow-down ms-1"></i>
+           </a>
+       {% else %}
+           <span class="text-muted small fst-italic">End of Book</span>
+       {% endif %}
+   </div>

    <!-- Rewrite Modal -->
    <div class="modal fade" id="rewriteModal{{ ch.num|string|replace(' ', '') }}" tabindex="-1">
        <div class="modal-dialog">
|||||||
@@ -10,10 +10,21 @@
 <button class="btn btn-outline-primary me-2" type="button" data-bs-toggle="collapse" data-bs-target="#bibleCollapse" aria-expanded="false" aria-controls="bibleCollapse">
     <i class="fas fa-scroll me-2"></i>Show Bible
 </button>
+<a href="{{ url_for('run.download_bible', id=run.id) }}" class="btn btn-outline-info me-2" title="Download the project bible (JSON) used for this run.">
+    <i class="fas fa-file-download me-2"></i>Download Bible
+</a>
 <button class="btn btn-primary me-2" data-bs-toggle="modal" data-bs-target="#modifyRunModal" data-bs-toggle="tooltip" title="Create a new run based on this one, but with different instructions (e.g. 'Make it darker').">
     <i class="fas fa-pen-fancy me-2"></i>Modify & Re-run
 </button>
-<a href="{{ url_for('project.view_project', id=run.project_id) }}" class="btn btn-outline-secondary">Back to Project</a>
+{% if run.status not in ['running', 'queued'] %}
+    <form action="{{ url_for('run.delete_run', id=run.id) }}" method="POST" class="d-inline ms-2"
+          onsubmit="return confirm('Delete Run #{{ run.id }} and all its files? This cannot be undone.');">
+        <button type="submit" class="btn btn-outline-danger">
+            <i class="fas fa-trash me-2"></i>Delete Run
+        </button>
+    </form>
+{% endif %}
+<a href="{{ url_for('project.view_project', id=run.project_id) }}" class="btn btn-outline-secondary ms-2">Back to Project</a>
 </div>
 </div>
 
@@ -97,6 +108,28 @@
 </div>
 </div>
 
+<!-- Tags -->
+<div class="mb-3 d-flex align-items-center gap-2 flex-wrap">
+    {% if run.tags %}
+        {% for tag in run.tags.split(',') %}
+            <span class="badge bg-secondary fs-6">{{ tag }}</span>
+        {% endfor %}
+    {% else %}
+        <span class="text-muted small fst-italic">No tags</span>
+    {% endif %}
+    <button class="btn btn-sm btn-outline-secondary" data-bs-toggle="collapse" data-bs-target="#tagsForm">
+        <i class="fas fa-tag me-1"></i>Edit Tags
+    </button>
+    <div class="collapse w-100" id="tagsForm">
+        <form action="{{ url_for('run.set_tags', id=run.id) }}" method="POST" class="d-flex gap-2 mt-1">
+            <input type="text" name="tags" class="form-control form-control-sm"
+                   value="{{ run.tags or '' }}"
+                   placeholder="comma-separated tags, e.g. dark-ending, v2, favourite">
+            <button type="submit" class="btn btn-sm btn-primary">Save</button>
+        </form>
+    </div>
+</div>
+
 <!-- Status Bar -->
 <div class="card shadow-sm mb-4">
 <div class="card-body">
@@ -175,6 +208,9 @@
 <a href="{{ url_for('run.check_consistency', run_id=run.id, book_folder=book.folder) }}" class="btn btn-outline-warning ms-2">
     <i class="fas fa-search me-2"></i>Check Consistency
 </a>
+<a href="{{ url_for('run.eval_report', run_id=run.id, book_folder=book.folder) }}" class="btn btn-outline-info ms-2" title="Download evaluation report (scores, critiques, prompt tuning notes)">
+    <i class="fas fa-chart-bar me-2"></i>Eval Report
+</a>
 <button class="btn btn-warning ms-2" data-bs-toggle="modal" data-bs-target="#reviseBookModal{{ loop.index }}" title="Regenerate this book with changes, keeping others.">
     <i class="fas fa-pencil-alt me-2"></i>Revise
 </button>
@@ -116,6 +116,14 @@ with app.app_context():
         _log("System: Added 'last_heartbeat' column to Run table.")
     except: pass
 
+    # Migration: Add 'tags' column if missing
+    try:
+        with db.engine.connect() as conn:
+            conn.execute(text("ALTER TABLE run ADD COLUMN tags VARCHAR(300)"))
+            conn.commit()
+        _log("System: Added 'tags' column to Run table.")
+    except: pass
+
     # Reset all non-terminal runs on startup (running, queued, interrupted)
     # The Huey consumer restarts with the app, so any in-flight tasks are gone.
     try:
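The startup migration above relies on SQLite rejecting a duplicate `ALTER TABLE ... ADD COLUMN`. A minimal sketch of the same idempotent pattern with plain `sqlite3` (table and helper names here are illustrative, not from the repo):

```python
import sqlite3

def ensure_column(conn, table, column, decl):
    # SQLite raises OperationalError ("duplicate column name") when the
    # column already exists, which makes this safe to run on every startup.
    try:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")
        conn.commit()
        return True
    except sqlite3.OperationalError:
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE run (id INTEGER PRIMARY KEY)")
print(ensure_column(conn, "run", "tags", "VARCHAR(300)"))  # True: column added
print(ensure_column(conn, "run", "tags", "VARCHAR(300)"))  # False: already present
```

The diff swallows the error with a bare `except: pass`; catching `OperationalError` specifically, as sketched here, avoids masking unrelated failures.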
15 web/db.py
@@ -35,6 +35,8 @@ class Run(db.Model):
     progress = db.Column(db.Integer, default=0)
     last_heartbeat = db.Column(db.DateTime, nullable=True)
+
+    tags = db.Column(db.String(300), nullable=True)
 
     logs = db.relationship('LogEntry', backref='run', lazy=True, cascade="all, delete-orphan")
 
     def duration(self):
@@ -49,3 +51,16 @@ class LogEntry(db.Model):
     timestamp = db.Column(db.DateTime, default=datetime.utcnow)
     phase = db.Column(db.String(50))
     message = db.Column(db.Text)
+
+
+class StoryState(db.Model):
+    id = db.Column(db.Integer, primary_key=True)
+    project_id = db.Column(db.Integer, db.ForeignKey('project.id'), nullable=False)
+    state_json = db.Column(db.Text, nullable=True)
+    updated_at = db.Column(db.DateTime, default=datetime.utcnow)
+
+
+class Persona(db.Model):
+    id = db.Column(db.Integer, primary_key=True)
+    name = db.Column(db.String(150), unique=True, nullable=False)
+    details_json = db.Column(db.Text, nullable=True)
@@ -5,7 +5,7 @@ from datetime import datetime, timedelta
 from flask import Blueprint, render_template, request, redirect, url_for, flash, session, jsonify
 from flask_login import login_required, login_user, current_user
 from sqlalchemy import func
-from web.db import db, User, Project, Run
+from web.db import db, User, Project, Run, Persona
 from web.helpers import admin_required
 from core import config, utils
 from ai import models as ai_models
@@ -83,10 +83,7 @@ def admin_factory_reset():
         except: pass
         db.session.delete(u)
 
-    if os.path.exists(config.PERSONAS_FILE):
-        try: os.remove(config.PERSONAS_FILE)
-        except: pass
-    utils.create_default_personas()
+    Persona.query.delete()
 
     db.session.commit()
     flash("Factory Reset Complete. All other users and projects have been wiped.")
@@ -1,22 +1,31 @@
-import os
 import json
 from flask import Blueprint, render_template, request, redirect, url_for, flash
 from flask_login import login_required
-from core import config, utils
+from core import utils
 from ai import models as ai_models
 from ai import setup as ai_setup
+from web.db import db, Persona
 
 persona_bp = Blueprint('persona', __name__)
 
 
+def _all_personas_dict():
+    """Return all personas as a dict keyed by name, matching the old personas.json structure."""
+    records = Persona.query.all()
+    result = {}
+    for rec in records:
+        try:
+            details = json.loads(rec.details_json) if rec.details_json else {}
+        except Exception:
+            details = {}
+        result[rec.name] = details
+    return result
+
+
 @persona_bp.route('/personas')
 @login_required
 def list_personas():
-    personas = {}
-    if os.path.exists(config.PERSONAS_FILE):
-        try:
-            with open(config.PERSONAS_FILE, 'r') as f: personas = json.load(f)
-        except: pass
+    personas = _all_personas_dict()
     return render_template('personas.html', personas=personas)
 
 
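The new `_all_personas_dict` deliberately degrades to an empty dict per record rather than failing the whole page when one `details_json` blob is corrupt. The same tolerant decode, sketched without the ORM (the sample records are made up):

```python
import json

# Simulated Persona rows: (name, details_json), including a corrupt and an empty one.
records = [("Noir", '{"bio": "hard-boiled"}'), ("Broken", "{not json"), ("Empty", None)]

personas = {}
for name, details_json in records:
    try:
        personas[name] = json.loads(details_json) if details_json else {}
    except Exception:
        # A single corrupt record falls back to {} instead of raising.
        personas[name] = {}

print(personas["Noir"]["bio"])  # hard-boiled
print(personas["Broken"])       # {}
```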
@@ -29,17 +38,16 @@ def new_persona():
 @persona_bp.route('/persona/<string:name>')
 @login_required
 def edit_persona(name):
-    personas = {}
-    if os.path.exists(config.PERSONAS_FILE):
-        try:
-            with open(config.PERSONAS_FILE, 'r') as f: personas = json.load(f)
-        except: pass
-
-    persona = personas.get(name)
-    if not persona:
+    record = Persona.query.filter_by(name=name).first()
+    if not record:
         flash(f"Persona '{name}' not found.")
         return redirect(url_for('persona.list_personas'))
+
+    try:
+        persona = json.loads(record.details_json) if record.details_json else {}
+    except Exception:
+        persona = {}
+
     return render_template('persona_edit.html', persona=persona, name=name)
 
 
@@ -53,16 +61,7 @@ def save_persona():
         flash("Persona name is required.")
         return redirect(url_for('persona.list_personas'))
 
-    personas = {}
-    if os.path.exists(config.PERSONAS_FILE):
-        try:
-            with open(config.PERSONAS_FILE, 'r') as f: personas = json.load(f)
-        except: pass
-
-    if old_name and old_name != name and old_name in personas:
-        del personas[old_name]
-
-    persona = {
+    persona_data = {
         "name": name,
         "bio": request.form.get('bio'),
         "age": request.form.get('age'),
@@ -75,10 +74,21 @@
         "style_inspirations": request.form.get('style_inspirations')
     }
 
-    personas[name] = persona
-
-    with open(config.PERSONAS_FILE, 'w') as f: json.dump(personas, f, indent=2)
+    # If name changed, remove old record
+    if old_name and old_name != name:
+        old_record = Persona.query.filter_by(name=old_name).first()
+        if old_record:
+            db.session.delete(old_record)
+        db.session.flush()
+
+    record = Persona.query.filter_by(name=name).first()
+    if record:
+        record.details_json = json.dumps(persona_data)
+    else:
+        record = Persona(name=name, details_json=json.dumps(persona_data))
+        db.session.add(record)
+
+    db.session.commit()
     flash(f"Persona '{name}' saved.")
     return redirect(url_for('persona.list_personas'))
 
@@ -86,15 +96,10 @@ def save_persona():
 @persona_bp.route('/persona/delete/<string:name>', methods=['POST'])
 @login_required
 def delete_persona(name):
-    personas = {}
-    if os.path.exists(config.PERSONAS_FILE):
-        try:
-            with open(config.PERSONAS_FILE, 'r') as f: personas = json.load(f)
-        except: pass
-
-    if name in personas:
-        del personas[name]
-        with open(config.PERSONAS_FILE, 'w') as f: json.dump(personas, f, indent=2)
+    record = Persona.query.filter_by(name=name).first()
+    if record:
+        db.session.delete(record)
+        db.session.commit()
         flash(f"Persona '{name}' deleted.")
 
     return redirect(url_for('persona.list_personas'))
@@ -4,7 +4,7 @@ import shutil
 from datetime import datetime
 from flask import Blueprint, render_template, request, redirect, url_for, flash
 from flask_login import login_required, current_user
-from web.db import db, Project, Run
+from web.db import db, Project, Run, Persona, StoryState
 from web.helpers import is_project_locked
 from core import config, utils
 from ai import models as ai_models
@@ -104,11 +104,7 @@ def project_setup_wizard():
         flash(f"AI Analysis failed — fill in the details manually. ({e})", "warning")
         suggestions = _default_suggestions
 
-    personas = {}
-    if os.path.exists(config.PERSONAS_FILE):
-        try:
-            with open(config.PERSONAS_FILE, 'r') as f: personas = json.load(f)
-        except: pass
+    personas = {rec.name: (json.loads(rec.details_json) if rec.details_json else {}) for rec in Persona.query.all()}
 
     return render_template('project_setup.html', s=suggestions, concept=concept, personas=personas, lengths=config.LENGTH_DEFINITIONS)
@@ -149,11 +145,7 @@ def project_setup_refine():
         flash(f"Refinement failed: {e}")
         return redirect(url_for('project.index'))
 
-    personas = {}
-    if os.path.exists(config.PERSONAS_FILE):
-        try:
-            with open(config.PERSONAS_FILE, 'r') as f: personas = json.load(f)
-        except: pass
+    personas = {rec.name: (json.loads(rec.details_json) if rec.details_json else {}) for rec in Persona.query.all()}
 
     return render_template('project_setup.html', s=suggestions, concept=concept, personas=personas, lengths=config.LENGTH_DEFINITIONS)
@@ -329,11 +321,7 @@ def view_project(id):
     has_draft = os.path.exists(draft_path)
     is_refining = os.path.exists(os.path.join(proj.folder_path, ".refining"))
 
-    personas = {}
-    if os.path.exists(config.PERSONAS_FILE):
-        try:
-            with open(config.PERSONAS_FILE, 'r') as f: personas = json.load(f)
-        except: pass
+    personas = {rec.name: (json.loads(rec.details_json) if rec.details_json else {}) for rec in Persona.query.all()}
 
     runs = Run.query.filter_by(project_id=id).order_by(Run.id.desc()).all()
     latest_run = runs[0] if runs else None
@@ -404,6 +392,36 @@ def run_project(id):
     return redirect(url_for('project.view_project', id=id))
 
 
+@project_bp.route('/project/<int:id>/delete', methods=['POST'])
+@login_required
+def delete_project(id):
+    proj = db.session.get(Project, id) or Project.query.get_or_404(id)
+    if proj.user_id != current_user.id:
+        return "Unauthorized", 403
+
+    active = Run.query.filter_by(project_id=id).filter(Run.status.in_(['running', 'queued'])).first()
+    if active:
+        flash("Cannot delete a project with an active run. Stop the run first.", "danger")
+        return redirect(url_for('project.view_project', id=id))
+
+    # Delete filesystem folder
+    if proj.folder_path and os.path.exists(proj.folder_path):
+        try:
+            shutil.rmtree(proj.folder_path)
+        except Exception as e:
+            flash(f"Warning: could not delete project files: {e}", "warning")
+
+    # Delete StoryState records (no cascade on Project yet)
+    StoryState.query.filter_by(project_id=id).delete()
+
+    # Delete project (cascade handles Runs and LogEntries)
+    db.session.delete(proj)
+    db.session.commit()
+
+    flash("Project deleted.", "success")
+    return redirect(url_for('project.index'))
+
+
 @project_bp.route('/project/<int:id>/review')
 @login_required
 def review_project(id):
@@ -730,11 +748,7 @@ def set_project_persona(id):
     bible = utils.load_json(bible_path)
 
     if bible:
-        personas = {}
-        if os.path.exists(config.PERSONAS_FILE):
-            try:
-                with open(config.PERSONAS_FILE, 'r') as f: personas = json.load(f)
-            except: pass
+        personas = {rec.name: (json.loads(rec.details_json) if rec.details_json else {}) for rec in Persona.query.all()}
 
         if persona_name in personas:
             bible['project_metadata']['author_details'] = personas[persona_name]
@@ -1,5 +1,6 @@
 import os
 import json
+import shutil
 import markdown
 from datetime import datetime
 from flask import Blueprint, render_template, request, redirect, url_for, flash, session, send_from_directory
@@ -9,7 +10,7 @@ from core import utils
 from ai import models as ai_models
 from ai import setup as ai_setup
 from story import editor as story_editor
-from story import bible_tracker, style_persona
+from story import bible_tracker, style_persona, eval_logger as story_eval_logger
 from export import exporter
 from web.tasks import huey, regenerate_artifacts_task, rewrite_chapter_task
 
@@ -393,6 +394,106 @@ def revise_book(run_id, book_folder):
     return redirect(url_for('run.view_run', id=new_run.id))
 
 
+@run_bp.route('/run/<int:id>/set_tags', methods=['POST'])
+@login_required
+def set_tags(id):
+    run = db.session.get(Run, id)
+    if not run: return "Run not found", 404
+    if run.project.user_id != current_user.id: return "Unauthorized", 403
+
+    raw = request.form.get('tags', '')
+    tags = [t.strip() for t in raw.split(',') if t.strip()]
+    run.tags = ','.join(dict.fromkeys(tags))
+    db.session.commit()
+
+    flash("Tags updated.")
+    return redirect(url_for('run.view_run', id=id))
+
+
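The `set_tags` handler normalizes user input in three steps: split on commas, strip blanks, and deduplicate while preserving first-seen order via `dict.fromkeys`. The same logic in isolation:

```python
raw = " dark-ending, v2, dark-ending,  favourite ,"

# Split, strip whitespace, and drop empty entries.
tags = [t.strip() for t in raw.split(',') if t.strip()]

# dict.fromkeys keeps only the first occurrence of each tag and preserves
# insertion order, unlike set(), which would shuffle the user's ordering.
deduped = ','.join(dict.fromkeys(tags))
print(deduped)  # dark-ending,v2,favourite
```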
+@run_bp.route('/run/<int:id>/delete', methods=['POST'])
+@login_required
+def delete_run(id):
+    run = db.session.get(Run, id)
+    if not run: return "Run not found", 404
+    if run.project.user_id != current_user.id: return "Unauthorized", 403
+
+    if run.status in ['running', 'queued']:
+        flash("Cannot delete an active run. Stop it first.")
+        return redirect(url_for('run.view_run', id=id))
+
+    project_id = run.project_id
+
+    run_dir = os.path.join(run.project.folder_path, "runs", f"run_{run.id}")
+    if os.path.exists(run_dir):
+        shutil.rmtree(run_dir)
+
+    db.session.delete(run)
+    db.session.commit()
+
+    flash(f"Run #{id} deleted successfully.")
+    return redirect(url_for('project.view_project', id=project_id))
+
+
+@run_bp.route('/project/<int:run_id>/eval_report/<string:book_folder>')
+@login_required
+def eval_report(run_id, book_folder):
+    """Generate and download the self-contained HTML evaluation report."""
+    run = db.session.get(Run, run_id) or Run.query.get_or_404(run_id)
+    if run.project.user_id != current_user.id:
+        return "Unauthorized", 403
+
+    if not book_folder or "/" in book_folder or "\\" in book_folder or ".." in book_folder:
+        return "Invalid book folder", 400
+
+    run_dir = os.path.join(run.project.folder_path, "runs", f"run_{run.id}")
+    book_path = os.path.join(run_dir, book_folder)
+
+    bp = utils.load_json(os.path.join(book_path, "final_blueprint.json")) or \
+         utils.load_json(os.path.join(book_path, "blueprint_initial.json"))
+
+    html = story_eval_logger.generate_html_report(book_path, bp)
+    if not html:
+        return (
+            "<html><body style='font-family:sans-serif;padding:40px'>"
+            "<h2>No evaluation data yet.</h2>"
+            "<p>The evaluation report is generated during the writing phase. "
+            "Start a generation run and the report will be available once chapters have been evaluated.</p>"
+            "</body></html>"
+        ), 200
+
+    from flask import Response
+    safe_title = utils.sanitize_filename(
+        (bp or {}).get('book_metadata', {}).get('title', book_folder) or book_folder
+    )[:40]
+    filename = f"eval_report_{safe_title}.html"
+    return Response(
+        html,
+        mimetype='text/html',
+        headers={'Content-Disposition': f'attachment; filename="{filename}"'}
+    )
+
+
+@run_bp.route('/run/<int:id>/download_bible')
+@login_required
+def download_bible(id):
+    run = db.session.get(Run, id)
+    if not run: return "Run not found", 404
+    if run.project.user_id != current_user.id: return "Unauthorized", 403
+
+    bible_path = os.path.join(run.project.folder_path, "bible.json")
+    if not os.path.exists(bible_path):
+        return "Bible file not found", 404
+
+    safe_name = utils.sanitize_filename(run.project.name or "project")
+    download_name = f"bible_{safe_name}.json"
+    return send_from_directory(
+        os.path.dirname(bible_path),
+        os.path.basename(bible_path),
+        as_attachment=True,
+        download_name=download_name
+    )
+
+
 @run_bp.route('/project/<int:run_id>/regenerate_artifacts', methods=['POST'])
 @login_required
 def regenerate_artifacts(run_id):
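`eval_report` treats `book_folder` as untrusted input and rejects path separators and parent references before joining it onto the run directory. The same check, extracted into a standalone predicate (the helper name is invented for illustration):

```python
def is_safe_folder(name: str) -> bool:
    # Reject anything that could escape the run directory when passed to
    # os.path.join: empty names, "/" or "\" separators, and ".." references.
    return bool(name) and "/" not in name and "\\" not in name and ".." not in name

print(is_safe_folder("book_01"))        # True
print(is_safe_folder("../bible.json"))  # False
print(is_safe_folder("a/b"))            # False
```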
63 web/tasks.py
@@ -107,6 +107,56 @@ def generate_book_task(run_id, project_path, bible_path, allow_copy=True, feedba
 
     _task_log(f"Task picked up by Huey worker. project_path={project_path}")
 
+    # 0. Orphaned Job Guard — verify that all required resources exist before
+    # doing any work. If a run, project folder, or bible is missing, terminate
+    # silently and mark the run as failed to prevent data being written to the
+    # wrong book or project.
+    db_path_early = os.path.join(config.DATA_DIR, "bookapp.db")
+
+    try:
+        with sqlite3.connect(db_path_early, timeout=10) as _conn:
+            _row = _conn.execute("SELECT id FROM run WHERE id = ?", (run_id,)).fetchone()
+            if not _row:
+                _task_log(f"ABORT: Run #{run_id} no longer exists in DB. Terminating silently.")
+                return
+    except Exception as _e:
+        _task_log(f"WARNING: Could not verify run #{run_id} existence: {_e}")
+
+    if not os.path.isdir(project_path):
+        _task_log(f"ABORT: Project folder missing ({project_path}). Marking run #{run_id} as failed.")
+        try:
+            _robust_update_run_status(db_path_early, run_id, 'failed',
+                                      end_time=datetime.utcnow().isoformat())
+        except Exception: pass
+        return
+
+    if not os.path.isfile(bible_path):
+        _task_log(f"ABORT: Bible file missing ({bible_path}). Marking run #{run_id} as failed.")
+        try:
+            _robust_update_run_status(db_path_early, run_id, 'failed',
+                                      end_time=datetime.utcnow().isoformat())
+        except Exception: pass
+        return
+
+    # Validate that the bible has at least one book entry
+    try:
+        with open(bible_path, 'r', encoding='utf-8') as _bf:
+            _bible_check = json.load(_bf)
+        if not _bible_check.get('books'):
+            _task_log(f"ABORT: Bible has no books defined. Marking run #{run_id} as failed.")
+            try:
+                _robust_update_run_status(db_path_early, run_id, 'failed',
+                                          end_time=datetime.utcnow().isoformat())
+            except Exception: pass
+            return
+    except Exception as _e:
+        _task_log(f"ABORT: Could not parse bible ({bible_path}): {_e}. Marking run #{run_id} as failed.")
+        try:
+            _robust_update_run_status(db_path_early, run_id, 'failed',
+                                      end_time=datetime.utcnow().isoformat())
+        except Exception: pass
+        return
+
     # 1. Setup Logging
     log_filename = f"system_log_{run_id}.txt"
 
@@ -231,7 +281,18 @@ def generate_book_task(run_id, project_path, bible_path, allow_copy=True, feedba
         except Exception as e:
             utils.log("SYSTEM", f" -> Failed to copy {item}: {e}")
 
-    # 2. Run Generation
+    # 2. Save Bible Snapshot alongside this run
+    run_dir_early = os.path.join(project_path, "runs", f"run_{run_id}")
+    os.makedirs(run_dir_early, exist_ok=True)
+    if os.path.exists(bible_path):
+        snapshot_path = os.path.join(run_dir_early, "bible_snapshot.json")
+        try:
+            shutil.copy2(bible_path, snapshot_path)
+            utils.log("SYSTEM", f"Bible snapshot saved to run folder.")
+        except Exception as _e:
+            utils.log("SYSTEM", f"WARNING: Could not save bible snapshot: {_e}")
+
+    # 3. Run Generation
     from cli.engine import run_generation
     run_generation(bible_path, specific_run_id=run_id)