Compare commits


4 Commits

Author SHA1 Message Date
edabc4d4fa v1.4.0: Organic writing, speed, and log improvements
Organic book quality:
- write_chapter: strip key_events spoilers from character context so the writer
  doesn't know planned future events when writing early chapters
- write_chapter: added next_chapter_hint — seeds anticipation for the next scene
  in the final paragraphs of each chapter for natural story flow
- write_chapter: added DIALOGUE VOICE instruction referencing CHARACTER TRACKING
  speech styles so every character sounds distinctly different
- Lowered SCORE_AUTO_ACCEPT 9→8 to stop over-refining already-professional drafts

Speed improvements:
- check_pacing: reduced from every chapter to every other chapter (~50% fewer calls)
- refine_persona: reduced from every 3 to every 5 chapters (~40% fewer calls)
- Resume summary rebuild: uses first + last-4 chapters instead of all chapters
  to avoid massive prompts when resuming mid-book
- Summary context sent to writer capped at 8000 chars (most-recent events)
- update_tracking text cap lowered 500000→20000 (covers any realistic chapter)
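The resume rebuild and summary cap can be sketched as a small helper (function name and shapes are illustrative, not the real main.py API; the actual code inlines this logic):

```python
def select_resume_context(ms, summary):
    # Keep the first chapter (setup) plus the last four (recent events),
    # truncating each to 3000 chars, instead of concatenating every chapter.
    selected = ms[:1] + ms[-4:] if len(ms) > 5 else ms
    combined = "\n".join(
        f"Chapter {c['num']}: {c['content'][:3000]}" for c in selected
    )
    # Cap the rolling summary to the most-recent 8000 chars.
    summary_ctx = summary[-8000:]
    return combined, summary_ctx
```

For a 40-chapter manuscript this sends 5 truncated chapters instead of 40 full ones, which is where the resume-speed win comes from.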

Logging and progress bars:
- Progress bar updates at chapter START, not just after completion
- Chapter banner logged before each write so the log shows which chapter is active
- Word count logged after first draft (e.g. "Draft: 2,341 words (target: ~2200)")
- Word count added to chapter completion TIMING line
- Pacing check now logs "Pacing OK" with reason when no intervention needed
- utils: added log_banner() helper for phase separator lines

UI:
- run_details.html: log lines are now phase-coloured (WRITER=cyan, ARCHITECT=green,
  TIMING=gray, SYSTEM=yellow, TRACKER=purple, RESUME=orange, etc.)
- Status bar shows current active phase (e.g. "Status: Running — WRITER")

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-20 10:59:08 -05:00
958a6d0ea0 v1.3.1: Remove rigidity from chapter counts, beats, word lengths, and bridge chapters
story.py — create_chapter_plan():
- TARGET_CHAPTERS is now a guideline (±15%) not a hard constraint; the AI
  can produce a count that fits the story rather than forcing a specific number
- Word scaling is now pacing-aware instead of uniform: Very Fast ≈ 60% of avg,
  Fast ≈ 80%, Standard ≈ 100%, Slow ≈ 125%, Very Slow ≈ 150%
- Two-pass normalisation: pacing weights applied first, then the total is
  nudged to the word target — natural variation preserved throughout
- Variance range tightened to ±8% (was ±10%) for more predictable totals
- Prompt now tells the AI that estimated_words should reflect pacing rhythm
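The two-pass scheme can be sketched as follows (a simplified sketch: names are illustrative and the random ±8% variance step is omitted; pass 1 applies pacing weights, pass 2 rescales so the total hits the word target while preserving the relative variation):

```python
PACING_WEIGHTS = {
    "Very Fast": 0.60, "Fast": 0.80, "Standard": 1.00,
    "Slow": 1.25, "Very Slow": 1.50,
}

def scale_chapter_words(chapters, word_target):
    avg = word_target / len(chapters)
    # Pass 1: pacing-weighted per-chapter estimates
    for ch in chapters:
        weight = PACING_WEIGHTS.get(ch.get("pacing", "Standard"), 1.0)
        ch["estimated_words"] = int(avg * weight)
    # Pass 2: nudge the total back to the word target
    total = sum(ch["estimated_words"] for ch in chapters)
    factor = word_target / total
    for ch in chapters:
        ch["estimated_words"] = int(ch["estimated_words"] * factor)
    return chapters
```

Because pass 2 is a uniform rescale, fast chapters stay short relative to slow ones even after the total is normalised.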

story.py — expand():
- Added event ceiling (target_chapters × 1.5): if the outline already has
  enough beats, the pass switches from "add events" to "enrich descriptions"
  — prevents over-dense outlines for short stories and flash fiction
- Task instruction is dynamically chosen: add-events vs deepen-descriptions
- Clarified that original user beats must be preserved but new events must
  each be distinct and spread evenly (not front-loaded)
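The event-ceiling switch reduces to a short predicate (a sketch with hypothetical names, not the real expand() signature):

```python
def expansion_task(events, target_chapters):
    # If the outline already has ~1.5x as many beats as planned chapters,
    # stop adding events and enrich descriptions instead.
    ceiling = int(target_chapters * 1.5)
    if len(events) >= ceiling:
        return "enrich_descriptions"
    return "add_events"
```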

story.py — refinement loop:
- Word count constraint softened from hard "do not condense" to
  "~N words ±20% acceptable if the scene demands it" so action chapters
  can run short and introspective chapters can run long naturally

main.py — bridge chapter insertion:
- Removed hardcoded 1500-word estimate for dynamically inserted bridge
  chapters; now computes the average estimated_words from the current
  chapter plan so bridge chapters match the book's natural chapter length

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-20 10:42:51 -05:00
1964c9c2a5 v1.3.0: Improve all AI prompts, refinement loops, and cover generation accuracy
story.py — write_chapter():
- Added POSITION context ("Chapter N of Total") so the AI calibrates narrative
  tension correctly (setup vs escalation vs climax/payoff)
- Moved PACING_GUIDE to sit directly after PACING metadata instead of being
  buried after 13 quality criteria items where the AI rarely reads it
- Removed duplicate pacing descriptions that appeared after QUALITY_CRITERIA

story.py — refinement loop:
- Capped critique history to last 2 entries (was accumulating all previous
  attempts, wasting tokens and confusing the model on attempt 4-5)
- Added TARGET_WORDS and BEATS constraints to the refinement prompt to prevent
  chapters from shrinking or losing plot beats during editing passes
- Restructured refinement prompt with explicit HARD_CONSTRAINTS section

story.py — check_and_propagate():
- Increased chapter context from 5000 to 12000 chars for continuity rewrites
  (was asking for a full chapter rewrite but only providing a fragment)
- Added explicit word count target to rewrite so chapters are not truncated
- Added conservative decision bias: only rewrite on genuine contradictions

story.py — plan_structure():
- Now passes TARGET_CHAPTERS, TARGET_WORDS, GENRE, and CHARACTERS to the
  structure AI — it was planning blindly without knowing the book's scale

marketing.py — generate_blurb():
- Rewrote prompt with 4-part structure: Hook → Stakes → Tension → Close
- Formats plot beats as a readable list instead of raw JSON array
- Extracts protagonist automatically for personalised blurb copy
- Added genre-tone matching, present-tense voice, and no-spoiler rule

marketing.py — generate_cover():
- Added genre-to-visual-style mapping (thriller → cinematic, fantasy → epic
  digital painting, romance → painterly, etc.)
- Art prompt instructions now enforce: no text/letters/watermarks, rule-of-thirds
  composition, explicit focal point, lighting description, colour palette
- Replaced generic image evaluation with a 5-criteria book-cover rubric:
  visual impact, genre fit, composition, quality, and clean image (no text)
- Score penalties: -3 for visible text/watermarks, -2 for blur/deformed anatomy

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-20 10:38:36 -05:00
2a9a605800 v1.2.0: Prefer Gemini 2.x models, improve cover generation and Docker health
Model selection (ai.py):
- get_optimal_model() now scores Gemini 2.5 > 2.0 > 1.5 when ranking candidates
- get_default_models() fallbacks updated to gemini-2.0-pro-exp (logic) and gemini-2.0-flash (writer/artist)
- AI selection prompt rewritten: includes Gemini 2.x pricing context, guidance to avoid 'thinking' models for writer/artist roles, and instructions to prefer 2.x over 1.5
- Added image_model_name and image_model_source globals for UI visibility
- init_models() now reads MODEL_IMAGE_HINT; tries imagen-3.0-generate-001 then imagen-3.0-fast-generate-001 on both Gemini API and Vertex AI paths

Cover generation (marketing.py):
- Fixed display bug: "Attempt X/5" now correctly reads "Attempt X/3"
- Added imagen-3.0-fast-generate-001 as intermediate fallback before legacy Imagen 2
- Quality threshold: images with score < 5 are only kept if nothing better exists
- Smarter prompt refinement on retry: deformity, blur, and watermark critique keywords each append targeted corrections to the art prompt
- Fixed missing sys import (sys.platform check for macOS was silently broken)
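The keyword-triggered retry refinement can be sketched like this (the keyword list and correction strings are illustrative, not the exact marketing.py values):

```python
def refine_art_prompt(art_prompt, critique):
    # Each critique keyword appends a targeted correction to the art prompt
    # before the next generation attempt.
    corrections = {
        "deform": " Anatomically correct figures, natural proportions.",
        "blur": " Sharp focus, crisp detail throughout.",
        "watermark": " Absolutely no text, watermarks, or logos.",
    }
    lower = critique.lower()
    for keyword, fix in corrections.items():
        if keyword in lower:
            art_prompt += fix
    return art_prompt
```

Substring matching ("deform") catches both "deformed" and "deformity" in critiques without needing exact phrasing.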

Config / Docker:
- config.py: added MODEL_IMAGE_HINT env var, bumped version to 1.2.0
- docker-compose.yml: added MODEL_IMAGE environment variable
- Dockerfile: added libpng-dev and libfreetype6-dev for better font/PNG rendering; added HEALTHCHECK so Portainer detects unhealthy containers

System status UI:
- system_status.html: added Image row showing active Imagen model and provider (Gemini API / Vertex AI)
- Added cache expiry countdown with colour-coded badges

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-20 10:31:02 -05:00
11 changed files with 525 additions and 221 deletions

Dockerfile

@@ -3,11 +3,13 @@ FROM python:3.11-slim
 # Set working directory
 WORKDIR /app
-# Install system dependencies required for Pillow (image processing)
+# Install system dependencies required for Pillow (image processing) and fonts
 RUN apt-get update && apt-get install -y \
     build-essential \
     libjpeg-dev \
     zlib1g-dev \
+    libpng-dev \
+    libfreetype6-dev \
     && rm -rf /var/lib/apt/lists/*
 # Copy requirements files
@@ -24,4 +26,6 @@ COPY . .
 # Set Python path and run
 ENV PYTHONPATH=/app
 EXPOSE 5000
+HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
+  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:5000/login')" || exit 1
 CMD ["python", "-m", "modules.web_app"]

config.py

@@ -14,6 +14,7 @@ GCP_LOCATION = get_clean_env("GCP_LOCATION", "us-central1")
 MODEL_LOGIC_HINT = get_clean_env("MODEL_LOGIC", "AUTO")
 MODEL_WRITER_HINT = get_clean_env("MODEL_WRITER", "AUTO")
 MODEL_ARTIST_HINT = get_clean_env("MODEL_ARTIST", "AUTO")
+MODEL_IMAGE_HINT = get_clean_env("MODEL_IMAGE", "AUTO")
 DEFAULT_BLUEPRINT = "book_def.json"
 # --- SECURITY & ADMIN ---
@@ -64,4 +65,4 @@ LENGTH_DEFINITIONS = {
 }
 # --- SYSTEM ---
-VERSION = "1.1.0"
+VERSION = "1.4.0"

docker-compose.yml

@@ -37,3 +37,4 @@ services:
       - MODEL_LOGIC=${MODEL_LOGIC:-AUTO}
       - MODEL_WRITER=${MODEL_WRITER:-AUTO}
       - MODEL_ARTIST=${MODEL_ARTIST:-AUTO}
+      - MODEL_IMAGE=${MODEL_IMAGE:-AUTO}

main.py

@@ -97,10 +97,11 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
     summary = "The story begins."
     if ms:
-        # Generate summary from ALL written chapters to maintain continuity
-        utils.log("RESUME", "Rebuilding 'Story So Far' from existing manuscript...")
+        # Efficient rebuild: first chapter (setup) + last 4 (recent events) avoids huge prompts
+        utils.log("RESUME", f"Rebuilding story context from {len(ms)} existing chapters...")
         try:
-            combined_text = "\n".join([f"Chapter {c['num']}: {c['content']}" for c in ms])
+            selected = ms[:1] + ms[-4:] if len(ms) > 5 else ms
+            combined_text = "\n".join([f"Chapter {c['num']}: {c['content'][:3000]}" for c in selected])
             resp_sum = ai.model_writer.generate_content(f"""
 ROLE: Series Historian
 TASK: Create a cumulative 'Story So Far' summary.
@@ -134,12 +135,19 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
             i += 1
             continue
+        # Progress Banner — update bar and log chapter header before writing begins
+        utils.update_progress(15 + int((i / len(chapters)) * 75))
+        utils.log_banner("WRITER", f"Chapter {ch['chapter_number']}/{len(chapters)}: {ch['title']}")
         # Pass previous chapter content for continuity if available
         prev_content = ms[-1]['content'] if ms else None
         while True:
             try:
-                txt = story.write_chapter(ch, bp, folder, summary, tracking, prev_content)
+                # Cap summary to most-recent 8000 chars; pass next chapter title as hook hint
+                summary_ctx = summary[-8000:] if len(summary) > 8000 else summary
+                next_hint = chapters[i+1]['title'] if i + 1 < len(chapters) else ""
+                txt = story.write_chapter(ch, bp, folder, summary_ctx, tracking, prev_content, next_chapter_hint=next_hint)
             except Exception as e:
                 utils.log("SYSTEM", f"Chapter generation failed: {e}")
                 if interactive:
@@ -156,8 +164,8 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
             else:
                 break
-        # Refine Persona to match the actual output (Consistency Loop)
-        if (i == 0 or i % 3 == 0) and txt:
+        # Refine Persona to match the actual output (every 5 chapters to save API calls)
+        if (i == 0 or i % 5 == 0) and txt:
             bp['book_metadata']['author_details'] = story.refine_persona(bp, txt, folder)
             with open(bp_path, "w") as f: json.dump(bp, f, indent=2)
@@ -207,18 +215,23 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
         with open(chars_track_path, "w") as f: json.dump(tracking['characters'], f, indent=2)
         with open(warn_track_path, "w") as f: json.dump(tracking.get('content_warnings', []), f, indent=2)
-        # --- DYNAMIC PACING CHECK ---
+        # --- DYNAMIC PACING CHECK (every other chapter to halve API overhead) ---
         remaining = chapters[i+1:]
-        if remaining:
+        if remaining and len(remaining) >= 2 and i % 2 == 1:
             pacing = story.check_pacing(bp, summary, txt, ch, remaining, folder)
             if pacing and pacing.get('status') == 'add_bridge':
                 new_data = pacing.get('new_chapter', {})
+                # Estimate bridge chapter length from current plan average (not hardcoded)
+                if chapters:
+                    avg_words = int(sum(c.get('estimated_words', 1500) for c in chapters) / len(chapters))
+                else:
+                    avg_words = 1500
                 new_ch = {
                     "chapter_number": ch['chapter_number'] + 1,
                     "title": new_data.get('title', 'Bridge Chapter'),
                     "pov_character": new_data.get('pov_character', ch.get('pov_character')),
                     "pacing": "Slow",
-                    "estimated_words": 1500,
+                    "estimated_words": avg_words,
                     "beats": new_data.get('beats', [])
                 }
                 chapters.insert(i+1, new_ch)
@@ -235,6 +248,8 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
                 with open(chapters_path, "w") as f: json.dump(chapters, f, indent=2)
                 utils.log("ARCHITECT", f" -> ⚠️ Pacing Intervention: Removed redundant chapter '{removed['title']}'.")
+            elif pacing:
+                utils.log("ARCHITECT", f" -> Pacing OK. {pacing.get('reason', '')[:100]}")
         # Increment loop
         i += 1
@@ -249,7 +264,8 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
         prog = 15 + int((i / len(chapters)) * 75)
         utils.update_progress(prog)
-        utils.log("TIMING", f" -> Chapter {ch['chapter_number']} finished in {duration:.1f}s | Avg: {avg_time:.1f}s | ETA: {int(eta//60)}m {int(eta%60)}s")
+        word_count = len(txt.split()) if txt else 0
+        utils.log("TIMING", f" -> Ch {ch['chapter_number']} done in {duration:.1f}s | {word_count:,} words | Avg: {avg_time:.1f}s | ETA: {int(eta//60)}m {int(eta%60)}s")
     utils.log("TIMING", f"Writing Phase: {time.time() - t_step:.1f}s")

ai.py

@@ -31,6 +31,8 @@ model_image = None
 logic_model_name = "models/gemini-1.5-pro"
 writer_model_name = "models/gemini-1.5-flash"
 artist_model_name = "models/gemini-1.5-flash"
+image_model_name = None
+image_model_source = "None"
 class ResilientModel:
     def __init__(self, name, safety_settings, role):
@@ -75,10 +77,15 @@ def get_optimal_model(base_type="pro"):
         candidates = [m.name for m in models if base_type in m.name]
         if not candidates: return f"models/gemini-1.5-{base_type}"
         def score(n):
-            # Prioritize stable models (higher quotas) over experimental/beta ones
-            if "exp" in n or "beta" in n or "preview" in n: return 0
-            if "latest" in n: return 50
-            return 100
+            # Prefer newer generations: 2.5 > 2.0 > 1.5
+            gen_bonus = 0
+            if "2.5" in n: gen_bonus = 300
+            elif "2.0" in n: gen_bonus = 200
+            elif "2." in n: gen_bonus = 150
+            # Within a generation, prefer stable over experimental
+            if "exp" in n or "beta" in n or "preview" in n: return gen_bonus + 0
+            if "latest" in n: return gen_bonus + 50
+            return gen_bonus + 100
         return sorted(candidates, key=score, reverse=True)[0]
     except Exception as e:
         utils.log("SYSTEM", f"⚠️ Error finding optimal model: {e}")
@@ -86,9 +93,9 @@ def get_optimal_model(base_type="pro"):
 def get_default_models():
     return {
-        "logic": {"model": "models/gemini-1.5-pro", "reason": "Fallback: Default Pro model selected.", "estimated_cost": "$3.50/1M"},
-        "writer": {"model": "models/gemini-1.5-flash", "reason": "Fallback: Default Flash model selected.", "estimated_cost": "$0.075/1M"},
-        "artist": {"model": "models/gemini-1.5-flash", "reason": "Fallback: Default Flash model selected.", "estimated_cost": "$0.075/1M"},
+        "logic": {"model": "models/gemini-2.0-pro-exp", "reason": "Fallback: Gemini 2.0 Pro for complex reasoning and JSON adherence.", "estimated_cost": "$0.00/1M (Experimental)"},
+        "writer": {"model": "models/gemini-2.0-flash", "reason": "Fallback: Gemini 2.0 Flash for fast, high-quality creative writing.", "estimated_cost": "$0.10/1M"},
+        "artist": {"model": "models/gemini-2.0-flash", "reason": "Fallback: Gemini 2.0 Flash for visual prompt design.", "estimated_cost": "$0.10/1M"},
         "ranking": []
     }
@@ -131,29 +138,37 @@ def select_best_models(force_refresh=False):
     model = genai.GenerativeModel(bootstrapper)
     prompt = f"""
 ROLE: AI Model Architect
-TASK: Select the optimal Gemini models for specific application roles.
+TASK: Select the optimal Gemini models for a book-writing application. Prefer newer Gemini 2.x models when available.
 AVAILABLE_MODELS:
 {json.dumps(models)}
-PRICING_CONTEXT (USD per 1M tokens):
-- Flash Models (e.g. gemini-1.5-flash): ~$0.075 Input / $0.30 Output. (Very Cheap)
-- Pro Models (e.g. gemini-1.5-pro): ~$3.50 Input / $10.50 Output. (Expensive)
+PRICING_CONTEXT (USD per 1M tokens, approximate):
+- Gemini 2.5 Pro/Flash: Best quality/speed; check current pricing.
+- Gemini 2.0 Flash: ~$0.10 Input / $0.40 Output. (Fast, cost-effective, excellent quality).
+- Gemini 2.0 Pro Exp: Free experimental tier with strong reasoning.
+- Gemini 1.5 Flash: ~$0.075 Input / $0.30 Output. (Legacy, still reliable).
+- Gemini 1.5 Pro: ~$1.25 Input / $5.00 Output. (Legacy, expensive).
 CRITERIA:
-- LOGIC: Needs complex reasoning, JSON adherence, and instruction following. (Prefer Pro/1.5).
-- WRITER: Needs creativity, prose quality, and speed. (Prefer Flash/1.5 for speed, or Pro for quality).
-- ARTIST: Needs visual prompt understanding.
+- LOGIC: Needs complex reasoning, strict JSON adherence, plot consistency, and instruction following.
+  -> Prefer: Gemini 2.5 Pro > 2.0 Pro > 2.0 Flash > 1.5 Pro
+- WRITER: Needs creativity, prose quality, long-form text generation, and speed.
+  -> Prefer: Gemini 2.5 Flash/Pro > 2.0 Flash > 1.5 Flash (balance quality/cost)
+- ARTIST: Needs rich visual description, prompt understanding for cover art design.
+  -> Prefer: Gemini 2.0 Flash > 1.5 Flash (speed and visual understanding)
 CONSTRAINTS:
-- Avoid 'experimental' or 'preview' unless no stable version exists.
-- Prioritize 'latest' or stable versions.
+- Strongly prefer Gemini 2.x over 1.5 where available.
+- Avoid 'experimental' or 'preview' only if a stable 2.x version exists; otherwise experimental 2.x is fine.
+- 'thinking' models are too slow/expensive for Writer/Artist roles.
+- Provide a ranking of ALL available models from best to worst overall.
-OUTPUT_FORMAT (JSON):
+OUTPUT_FORMAT (JSON only, no markdown):
 {{
-"logic": {{ "model": "string", "reason": "string", "estimated_cost": "$X.XX Input / $X.XX Output" }},
-"writer": {{ "model": "string", "reason": "string", "estimated_cost": "$X.XX Input / $X.XX Output" }},
-"artist": {{ "model": "string", "reason": "string", "estimated_cost": "$X.XX Input / $X.XX Output" }},
+"logic": {{ "model": "string", "reason": "string", "estimated_cost": "$X.XX/1M" }},
+"writer": {{ "model": "string", "reason": "string", "estimated_cost": "$X.XX/1M" }},
+"artist": {{ "model": "string", "reason": "string", "estimated_cost": "$X.XX/1M" }},
 "ranking": [ {{ "model": "string", "reason": "string", "estimated_cost": "string" }} ]
 }}
 """
@@ -195,7 +210,7 @@ def select_best_models(force_refresh=False):
     return fallback
 def init_models(force=False):
-    global model_logic, model_writer, model_artist, model_image, logic_model_name, writer_model_name, artist_model_name
+    global model_logic, model_writer, model_artist, model_image, logic_model_name, writer_model_name, artist_model_name, image_model_name, image_model_source
     if model_logic and not force: return
     genai.configure(api_key=config.API_KEY)
@@ -264,13 +279,28 @@ def init_models(force=False):
     model_writer.update(writer_name)
     model_artist.update(artist_name)
-    # Initialize Image Model (Default to None)
+    # Initialize Image Model
     model_image = None
-    if hasattr(genai, 'ImageGenerationModel'):
-        try: model_image = genai.ImageGenerationModel("imagen-3.0-generate-001")
-        except: pass
-    img_source = "Gemini API" if model_image else "None"
+    image_model_name = None
+    image_model_source = "None"
+    hint = config.MODEL_IMAGE_HINT if hasattr(config, 'MODEL_IMAGE_HINT') else "AUTO"
+    if hasattr(genai, 'ImageGenerationModel'):
+        # Candidate image models in preference order
+        if hint and hint != "AUTO":
+            candidates = [hint]
+        else:
+            candidates = ["imagen-3.0-generate-001", "imagen-3.0-fast-generate-001"]
+        for candidate in candidates:
+            try:
+                model_image = genai.ImageGenerationModel(candidate)
+                image_model_name = candidate
+                image_model_source = "Gemini API"
+                utils.log("SYSTEM", f"✅ Image model: {candidate} (Gemini API)")
+                break
+            except Exception:
+                continue
     # Auto-detect GCP Project from credentials if not set (Fix for Image Model)
     if HAS_VERTEX and not config.GCP_PROJECT and config.GOOGLE_CREDS and os.path.exists(config.GOOGLE_CREDS):
@@ -326,9 +356,17 @@ def init_models(force=False):
         utils.log("SYSTEM", f"✅ Vertex AI initialized (Project: {config.GCP_PROJECT})")
         # Override with Vertex Image Model if available
-        try:
-            model_image = VertexImageModel.from_pretrained("imagen-3.0-generate-001")
-            img_source = "Vertex AI"
-        except: pass
+        vertex_candidates = ["imagen-3.0-generate-001", "imagen-3.0-fast-generate-001"]
+        if hint and hint != "AUTO":
+            vertex_candidates = [hint]
+        for candidate in vertex_candidates:
+            try:
+                model_image = VertexImageModel.from_pretrained(candidate)
+                image_model_name = candidate
+                image_model_source = "Vertex AI"
+                utils.log("SYSTEM", f"✅ Image model: {candidate} (Vertex AI)")
+                break
+            except Exception:
+                continue
-    utils.log("SYSTEM", f"Image Generation Provider: {img_source}")
+    utils.log("SYSTEM", f"Image Generation Provider: {image_model_source} ({image_model_name or 'unavailable'})")

marketing.py

@@ -1,10 +1,10 @@
import os import os
import sys
import json import json
import shutil import shutil
import textwrap import textwrap
import subprocess import subprocess
import requests import requests
import google.generativeai as genai
from . import utils from . import utils
import config import config
from modules import ai from modules import ai
@@ -90,18 +90,40 @@ def generate_blurb(bp, folder):
utils.log("MARKETING", "Generating blurb...") utils.log("MARKETING", "Generating blurb...")
meta = bp.get('book_metadata', {}) meta = bp.get('book_metadata', {})
# Format beats as a readable list, not raw JSON
beats = bp.get('plot_beats', [])
beats_text = "\n".join(f" - {b}" for b in beats[:6]) if beats else " - (no beats provided)"
# Format protagonist for the blurb
chars = bp.get('characters', [])
protagonist = next((c for c in chars if 'protagonist' in c.get('role', '').lower()), None)
protagonist_desc = f"{protagonist['name']}{protagonist.get('description', '')}" if protagonist else "the protagonist"
prompt = f""" prompt = f"""
ROLE: Marketing Copywriter ROLE: Marketing Copywriter
TASK: Write a back-cover blurb (150-200 words). TASK: Write a compelling back-cover blurb for a {meta.get('genre', 'fiction')} novel.
INPUT_DATA: BOOK DETAILS:
- TITLE: {meta.get('title')} - TITLE: {meta.get('title')}
- GENRE: {meta.get('genre')} - GENRE: {meta.get('genre')}
- LOGLINE: {bp.get('manual_instruction')} - AUDIENCE: {meta.get('target_audience', 'General')}
- PLOT: {json.dumps(bp.get('plot_beats', []))} - PROTAGONIST: {protagonist_desc}
- CHARACTERS: {json.dumps(bp.get('characters', []))} - LOGLINE: {bp.get('manual_instruction', '(none)')}
- KEY PLOT BEATS:
{beats_text}
OUTPUT: Text only. BLURB STRUCTURE:
1. HOOK (1-2 sentences): Open with the protagonist's world and the inciting disruption. Make it urgent.
2. STAKES (2-3 sentences): Raise the central conflict. What does the protagonist stand to lose?
3. TENSION (1-2 sentences): Hint at the impossible choice or escalating danger without revealing the resolution.
4. HOOK CLOSE (1 sentence): End with a tantalising question or statement that demands the reader open the book.
RULES:
- 150-200 words total.
- DO NOT reveal the ending or resolution.
- Match the genre's marketing tone ({meta.get('genre', 'fiction')}: e.g. thriller = urgent/terse, romance = emotionally charged, fantasy = epic/wondrous, horror = dread-laden).
- Use present tense for the blurb voice.
- No "Blurb:", no title prefix, no labels — marketing copy only.
""" """
try: try:
response = ai.model_writer.generate_content(prompt) response = ai.model_writer.generate_content(prompt)
@@ -167,30 +189,51 @@ def generate_cover(bp, folder, tracking=None, feedback=None, interactive=False):
except: except:
utils.log("MARKETING", "Feedback analysis failed. Defaulting to full regeneration.") utils.log("MARKETING", "Feedback analysis failed. Defaulting to full regeneration.")
genre = meta.get('genre', 'Fiction')
tone = meta.get('style', {}).get('tone', 'Balanced')
# Map genre to visual style suggestion
genre_style_map = {
'thriller': 'dark, cinematic, high-contrast photography style',
'mystery': 'moody, atmospheric, noir-inspired painting',
'romance': 'warm, painterly, soft-focus illustration',
'fantasy': 'epic digital painting, rich colours, mythic scale',
'science fiction': 'sharp digital art, cool palette, futuristic',
'horror': 'unsettling, dark atmospheric painting, desaturated',
'historical fiction': 'classical oil painting style, period-accurate',
'young adult': 'vibrant illustrated style, bold colours',
}
suggested_style = genre_style_map.get(genre.lower(), 'professional digital illustration or photography')
design_prompt = f""" design_prompt = f"""
ROLE: Art Director ROLE: Art Director
TASK: Design a book cover. TASK: Design a professional book cover for an AI image generator.
METADATA: BOOK:
- TITLE: {meta.get('title')} - TITLE: {meta.get('title')}
- GENRE: {meta.get('genre')} - GENRE: {genre}
- TONE: {meta.get('style', {}).get('tone', 'Balanced')} - TONE: {tone}
- SUGGESTED_VISUAL_STYLE: {suggested_style}
VISUAL_CONTEXT: VISUAL_CONTEXT (characters and key themes from the story):
{visual_context} {visual_context if visual_context else "Use genre conventions."}
USER_FEEDBACK: USER_FEEDBACK: {feedback if feedback else "None"}
{f"{feedback}" if feedback else "None"} DESIGN_INSTRUCTION: {design_instruction if design_instruction else "Create a compelling, genre-appropriate cover."}
INSTRUCTION: COVER_ART_RULES:
{f"{design_instruction}" if design_instruction else "Create a compelling, genre-appropriate cover."} - The art_prompt must produce an image with NO text, no letters, no numbers, no watermarks, no UI elements, no logos.
- Describe a clear FOCAL POINT (e.g. the protagonist, a dramatic scene, a symbolic object).
- Use RULE OF THIRDS composition — leave visual space at top and/or bottom for the title and author text to be overlaid.
- Describe LIGHTING that reinforces the tone (e.g. "harsh neon backlight" for thriller, "golden hour" for romance).
- Describe the COLOUR PALETTE explicitly (e.g. "deep crimson and shadow-black", "soft rose gold and cream").
- Characters must match their descriptions from VISUAL_CONTEXT if present.
OUTPUT_FORMAT (JSON): OUTPUT_FORMAT (JSON only, no markdown):
{{ {{
"font_name": "Name of a popular Google Font (e.g. Roboto, Cinzel, Oswald, Playfair Display)", "font_name": "Name of a Google Font suited to the genre (e.g. Cinzel for fantasy, Oswald for thriller, Playfair Display for romance)",
"primary_color": "#HexCode (Background)", "primary_color": "#HexCode (dominant background/cover colour)",
"text_color": "#HexCode (Contrast)", "text_color": "#HexCode (high contrast against primary_color)",
"art_prompt": "A detailed description of the cover art for an image generator. Explicitly describe characters based on the visual context." "art_prompt": "Detailed {suggested_style} image generation prompt. Begin with the style. Describe composition, focal point, lighting, colour palette, and any characters. End with: No text, no letters, no watermarks, photorealistic/painted quality, 8k detail."
}} }}
""" """
try: try:
@@ -212,9 +255,10 @@ def generate_cover(bp, folder, tracking=None, feedback=None, interactive=False):
best_img_score = 0 best_img_score = 0
best_img_path = None best_img_path = None
MAX_IMG_ATTEMPTS = 3
if regenerate_image: if regenerate_image:
for i in range(1, 4): for i in range(1, MAX_IMG_ATTEMPTS + 1):
utils.log("MARKETING", f"Generating cover art (Attempt {i}/5)...") utils.log("MARKETING", f"Generating cover art (Attempt {i}/{MAX_IMG_ATTEMPTS})...")
try: try:
if not ai.model_image: raise ImportError("No Image Generation Model available.") if not ai.model_image: raise ImportError("No Image Generation Model available.")
@@ -222,25 +266,44 @@ def generate_cover(bp, folder, tracking=None, feedback=None, interactive=False):
                 try:
                     result = ai.model_image.generate_images(prompt=art_prompt, number_of_images=1, aspect_ratio=ar)
                 except Exception as e:
-                    if "resource" in str(e).lower() and ai.HAS_VERTEX:
-                        utils.log("MARKETING", "⚠️ Imagen 3 failed. Trying Imagen 2...")
-                        fb_model = ai.VertexImageModel.from_pretrained("imagegeneration@006")
-                        result = fb_model.generate_images(prompt=art_prompt, number_of_images=1, aspect_ratio=ar)
-                        status = "success_fallback"
-                    else: raise e
+                    err_lower = str(e).lower()
+                    # Try fast imagen variant before falling back to legacy
+                    if ai.HAS_VERTEX and ("resource" in err_lower or "quota" in err_lower):
+                        try:
+                            utils.log("MARKETING", "⚠️ Imagen 3 failed. Trying Imagen 3 Fast...")
+                            fb_model = ai.VertexImageModel.from_pretrained("imagen-3.0-fast-generate-001")
+                            result = fb_model.generate_images(prompt=art_prompt, number_of_images=1, aspect_ratio=ar)
+                            status = "success_fast"
+                        except Exception:
+                            utils.log("MARKETING", "⚠️ Imagen 3 Fast failed. Trying Imagen 2...")
+                            fb_model = ai.VertexImageModel.from_pretrained("imagegeneration@006")
+                            result = fb_model.generate_images(prompt=art_prompt, number_of_images=1, aspect_ratio=ar)
+                            status = "success_fallback"
+                    else:
+                        raise e
                 attempt_path = os.path.join(folder, f"cover_art_attempt_{i}.png")
                 result.images[0].save(attempt_path)
                 utils.log_usage(folder, "imagen", image_count=1)
-                score, critique = evaluate_image_quality(attempt_path, art_prompt, ai.model_writer, folder)
+                cover_eval_criteria = (
+                    f"Book cover art for a {genre} novel titled '{meta.get('title')}'.\n\n"
+                    f"Evaluate STRICTLY as a professional book cover on these criteria:\n"
+                    f"1. VISUAL IMPACT: Is the image immediately arresting and compelling?\n"
+                    f"2. GENRE FIT: Does the visual style, mood, and palette match {genre}?\n"
+                    f"3. COMPOSITION: Is there a clear focal point? Are top/bottom areas usable for title/author text?\n"
+                    f"4. QUALITY: Is the image sharp, detailed, and free of deformities or blurring?\n"
+                    f"5. CLEAN IMAGE: Are there absolutely NO text, watermarks, letters, or UI artifacts?\n"
+                    f"Score 1-10. Deduct 3 points if any text/watermarks are visible. "
+                    f"Deduct 2 if the image is blurry or has deformed anatomy."
+                )
+                score, critique = evaluate_image_quality(attempt_path, cover_eval_criteria, ai.model_writer, folder)
                 if score is None: score = 0
                 utils.log("MARKETING", f" -> Image Score: {score}/10. Critique: {critique}")
                 utils.log_image_attempt(folder, "cover", art_prompt, f"cover_art_{i}.png", status, score=score, critique=critique)
                 if interactive:
-                    # Open image for review
                     try:
                         if os.name == 'nt': os.startfile(attempt_path)
                         elif sys.platform == 'darwin': subprocess.call(('open', attempt_path))
@@ -254,16 +317,30 @@ def generate_cover(bp, folder, tracking=None, feedback=None, interactive=False):
utils.log("MARKETING", "User rejected cover. Retrying...") utils.log("MARKETING", "User rejected cover. Retrying...")
continue continue
if score > best_img_score: # Only keep as best if score meets minimum quality bar
if score >= 5 and score > best_img_score:
best_img_score = score
best_img_path = attempt_path
elif best_img_path is None and score > 0:
# Accept even low-quality image if we have nothing else
best_img_score = score best_img_score = score
best_img_path = attempt_path best_img_path = attempt_path
if score == 10: if score >= 9:
utils.log("MARKETING", " -> Perfect image accepted.") utils.log("MARKETING", " -> High quality image accepted.")
break break
if "scar" in critique.lower() or "deform" in critique.lower() or "blur" in critique.lower(): # Refine prompt based on critique keywords
art_prompt += " (Ensure high quality, clear skin, no scars, sharp focus)." prompt_additions = []
critique_lower = critique.lower() if critique else ""
if "scar" in critique_lower or "deform" in critique_lower:
prompt_additions.append("perfect anatomy, no deformities")
if "blur" in critique_lower or "blurry" in critique_lower:
prompt_additions.append("sharp focus, highly detailed")
if "text" in critique_lower or "letter" in critique_lower:
prompt_additions.append("no text, no letters, no watermarks")
if prompt_additions:
art_prompt += f". ({', '.join(prompt_additions)})"
except Exception as e: except Exception as e:
utils.log("MARKETING", f"Image generation failed: {e}") utils.log("MARKETING", f"Image generation failed: {e}")


@@ -223,13 +223,31 @@ def plan_structure(bp, folder):
     if not beats_context:
         beats_context = bp.get('plot_beats', [])
+    target_chapters = bp.get('length_settings', {}).get('chapters', 'flexible')
+    target_words = bp.get('length_settings', {}).get('words', 'flexible')
+    chars_summary = [{"name": c.get("name"), "role": c.get("role")} for c in bp.get('characters', [])]
     prompt = f"""
 ROLE: Story Architect
-TASK: Create a structural event outline.
-ARCHETYPE: {structure_type}
-TITLE: {bp['book_metadata']['title']}
-EXISTING_BEATS: {json.dumps(beats_context)}
+TASK: Create a detailed structural event outline for a {target_chapters}-chapter book.
+BOOK:
+- TITLE: {bp['book_metadata']['title']}
+- GENRE: {bp.get('book_metadata', {}).get('genre', 'Fiction')}
+- TARGET_CHAPTERS: {target_chapters}
+- TARGET_WORDS: {target_words}
+- STRUCTURE: {structure_type}
+CHARACTERS: {json.dumps(chars_summary)}
+USER_BEATS (must all be preserved and woven into the outline):
+{json.dumps(beats_context)}
+REQUIREMENTS:
+- Produce enough events to fill approximately {target_chapters} chapters.
+- Each event must serve a narrative purpose (setup, escalation, reversal, climax, resolution).
+- Distribute events across a beginning, middle, and end — avoid front-loading.
+- Character arcs must be visible through the events (growth, change, revelation).
 OUTPUT_FORMAT (JSON): {{ "events": [{{ "description": "String", "purpose": "String" }}] }}
 """
@@ -243,29 +261,40 @@ def plan_structure(bp, folder):
 def expand(events, pass_num, target_chapters, bp, folder):
     utils.log("ARCHITECT", f"Expansion pass {pass_num} | Current Beats: {len(events)} | Target Chaps: {target_chapters}")
-    beats_context = []
-    if not beats_context:
-        beats_context = bp.get('plot_beats', [])
+    # If events already well exceed the target, only deepen descriptions — don't add more
+    event_ceiling = int(target_chapters * 1.5)
+    if len(events) >= event_ceiling:
+        task = (
+            f"The outline already has {len(events)} beats for a {target_chapters}-chapter book — do NOT add more events. "
+            f"Instead, enrich each existing beat's description with more specific detail: setting, characters involved, emotional stakes, and how it connects to what follows."
+        )
+    else:
+        task = (
+            f"Expand the outline toward {target_chapters} chapters. "
+            f"Current count: {len(events)} beats. "
+            f"Add intermediate events to fill pacing gaps, deepen subplots, and ensure character arcs are visible. "
+            f"Do not overshoot — aim for {target_chapters} to {event_ceiling} total events."
+        )
+    original_beats = bp.get('plot_beats', [])
     prompt = f"""
 ROLE: Story Architect
-TASK: Expand the outline to fit a {target_chapters}-chapter book.
-CURRENT_COUNT: {len(events)} beats.
-INPUT_OUTLINE:
-{json.dumps(beats_context)}
+TASK: {task}
+ORIGINAL_USER_BEATS (must all remain present):
+{json.dumps(original_beats)}
 CURRENT_EVENTS:
 {json.dumps(events)}
 RULES:
-1. Detect pacing gaps.
-2. Insert intermediate events.
-3. Deepen subplots.
-4. PRESERVE original beats.
-OUTPUT_FORMAT (JSON): {{ "events": [{{ "description": "String", "purpose": "String" }}] }}
+1. PRESERVE all original user beats — do not remove or alter them.
+2. New events must serve a clear narrative purpose (tension, character, world, reversal).
+3. Avoid repetitive events — each beat must be distinct.
+4. Distribute additions evenly — do not front-load the outline.
+OUTPUT_FORMAT (JSON): {{ "events": [{{"description": "String", "purpose": "String"}}] }}
 """
     try:
         response = ai.model_logic.generate_content(prompt)
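The enrich-vs-expand decision above hinges on a single ceiling of 1.5x the chapter target. Sketched as a pure function (the name `expansion_mode` is hypothetical, for illustration only):

```python
def expansion_mode(num_events: int, target_chapters: int) -> str:
    """Return 'enrich' when the outline already meets the ceiling, else 'expand'."""
    event_ceiling = int(target_chapters * 1.5)
    return "enrich" if num_events >= event_ceiling else "expand"
```

Note the ceiling is inclusive: a 30-chapter book flips to enrich mode at exactly 45 beats.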
@@ -304,24 +333,30 @@ def create_chapter_plan(events, bp, folder):
prompt = f""" prompt = f"""
ROLE: Pacing Specialist ROLE: Pacing Specialist
TASK: Group events into Chapters. TASK: Group the provided events into chapters for a {meta.get('genre', 'Fiction')} {bp['length_settings'].get('label', 'novel')}.
CONSTRAINTS: GUIDELINES:
- TARGET_CHAPTERS: {target} - AIM for approximately {target} chapters, but the final count may vary ±15% if the story structure demands it.
- TARGET_WORDS: {words} (e.g. a tightly plotted thriller may need fewer; an epic with many subplots may need more.)
- INSTRUCTIONS: - TARGET_WORDS for the whole book: {words}
- Assign pacing to each chapter: Very Fast / Fast / Standard / Slow / Very Slow
Reflect dramatic rhythm — action scenes run fast, emotional beats run slow.
- estimated_words per chapter should reflect its pacing:
Very Fast ≈ 60% of average, Fast ≈ 80%, Standard ≈ 100%, Slow ≈ 125%, Very Slow ≈ 150%
- Do NOT force equal word counts. Natural variation makes the book feel alive.
{structure_instructions} {structure_instructions}
{pov_instruction} {pov_instruction}
INPUT_EVENTS: {json.dumps(events)} INPUT_EVENTS: {json.dumps(events)}
OUTPUT_FORMAT (JSON): [{{ "chapter_number": 1, "title": "String", "pov_character": "String", "pacing": "String", "estimated_words": 2000, "beats": ["String"] }}] OUTPUT_FORMAT (JSON): [{{"chapter_number": 1, "title": "String", "pov_character": "String", "pacing": "String", "estimated_words": 2000, "beats": ["String"]}}]
""" """
try: try:
response = ai.model_logic.generate_content(prompt) response = ai.model_logic.generate_content(prompt)
utils.log_usage(folder, ai.model_logic.name, response.usage_metadata) utils.log_usage(folder, ai.model_logic.name, response.usage_metadata)
plan = json.loads(utils.clean_json(response.text)) plan = json.loads(utils.clean_json(response.text))
# Parse target word count
target_str = str(words).lower().replace(',', '').replace('k', '000').replace('+', '').replace(' ', '') target_str = str(words).lower().replace(',', '').replace('k', '000').replace('+', '').replace(' ', '')
target_val = 0 target_val = 0
if '-' in target_str: if '-' in target_str:
@@ -334,16 +369,31 @@ def create_chapter_plan(events, bp, folder):
         except: pass
         if target_val > 0:
-            variance = random.uniform(0.90, 1.10)
+            variance = random.uniform(0.92, 1.08)
             target_val = int(target_val * variance)
-            utils.log("ARCHITECT", f"Target adjusted with variance ({variance:.2f}x): {target_val} words.")
+            utils.log("ARCHITECT", f"Word target after variance ({variance:.2f}x): {target_val} words.")
             current_sum = sum(int(c.get('estimated_words', 0)) for c in plan)
             if current_sum > 0:
-                factor = target_val / current_sum
-                utils.log("ARCHITECT", f"Adjusting chapter lengths by {factor:.2f}x to match target.")
+                base_factor = target_val / current_sum
+                # Pacing multipliers — fast chapters are naturally shorter, slow chapters longer
+                pacing_weight = {
+                    'very fast': 0.60, 'fast': 0.80, 'standard': 1.00,
+                    'slow': 1.25, 'very slow': 1.50
+                }
+                # Two-pass: apply pacing weights then normalise to hit total target
                 for c in plan:
-                    c['estimated_words'] = int(c.get('estimated_words', 0) * factor)
+                    pw = pacing_weight.get(c.get('pacing', 'standard').lower(), 1.0)
+                    c['estimated_words'] = max(300, int(c.get('estimated_words', 0) * base_factor * pw))
+                # Normalise to keep total close to target
+                adjusted_sum = sum(c['estimated_words'] for c in plan)
+                if adjusted_sum > 0:
+                    norm = target_val / adjusted_sum
+                    for c in plan:
+                        c['estimated_words'] = max(300, int(c['estimated_words'] * norm))
+                utils.log("ARCHITECT", f"Chapter lengths scaled by pacing. Total ≈ {sum(c['estimated_words'] for c in plan)} words across {len(plan)} chapters.")
         return plan
     except Exception as e:
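The two-pass scaling introduced here (pacing weights first, then normalisation back to the book target) can be isolated and checked. A sketch under the same weights; the function name `scale_chapter_words` is an assumption, not the repo's API:

```python
PACING_WEIGHT = {
    'very fast': 0.60, 'fast': 0.80, 'standard': 1.00,
    'slow': 1.25, 'very slow': 1.50,
}

def scale_chapter_words(plan, target_val):
    """Two-pass scaling: apply pacing weights, then normalise to the book target."""
    current_sum = sum(int(c.get('estimated_words', 0)) for c in plan)
    if current_sum <= 0 or target_val <= 0:
        return plan
    base_factor = target_val / current_sum
    # Pass 1: scale toward target, biased by pacing (300-word floor per chapter)
    for c in plan:
        pw = PACING_WEIGHT.get(c.get('pacing', 'standard').lower(), 1.0)
        c['estimated_words'] = max(300, int(c.get('estimated_words', 0) * base_factor * pw))
    # Pass 2: normalise so the weighted total still lands near target_val
    adjusted_sum = sum(c['estimated_words'] for c in plan)
    norm = target_val / adjusted_sum
    for c in plan:
        c['estimated_words'] = max(300, int(c['estimated_words'] * norm))
    return plan
```

The second pass matters because pacing weights alone would drift the book total (a slow-heavy plan would overshoot); normalising restores the total while keeping the relative fast/slow spread.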
@@ -361,7 +411,7 @@ def update_tracking(folder, chapter_num, chapter_text, current_tracking):
 {json.dumps(current_tracking)}
 NEW_TEXT:
-{chapter_text[:500000]}
+{chapter_text[:20000]}
 OPERATIONS:
 1. EVENTS: Append 1-3 key plot points to 'events'.
@@ -544,7 +594,7 @@ def refine_persona(bp, text, folder):
     except: pass
     return ad
-def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None):
+def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None, next_chapter_hint=""):
     pacing = chap.get('pacing', 'Standard')
     est_words = chap.get('estimated_words', 'Flexible')
     utils.log("WRITER", f"Drafting Ch {chap['chapter_number']} ({pacing} | ~{est_words} words): {chap['title']}")
@@ -612,6 +662,14 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None):
         trunc_content = prev_content[-3000:] if len(prev_content) > 3000 else prev_content
         prev_context_block = f"\nPREVIOUS CHAPTER TEXT (For Tone & Continuity):\n{trunc_content}\n"
+    # Strip future planning notes (key_events) from character context — the writer
+    # should not know what is *planned* to happen; only name, role, and description.
+    chars_for_writer = [
+        {"name": c.get("name"), "role": c.get("role"), "description": c.get("description", "")}
+        for c in bp.get('characters', [])
+    ]
+    total_chapters = ls.get('chapters', '?')
     prompt = f"""
 ROLE: Fiction Writer
 TASK: Write Chapter {chap['chapter_number']}: {chap['title']}
@@ -619,10 +677,18 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None):
 METADATA:
 - GENRE: {genre}
 - FORMAT: {ls.get('label', 'Story')}
-- PACING: {pacing}
-- TARGET_WORDS: ~{est_words}
+- POSITION: Chapter {chap['chapter_number']} of {total_chapters} — calibrate narrative tension accordingly (early = setup/intrigue, middle = escalation, final third = payoff/climax)
+- PACING: {pacing} — see PACING_GUIDE below
+- TARGET_WORDS: ~{est_words} (write to this length; do not summarise to save space)
 - POV: {pov_char if pov_char else 'Protagonist'}
+PACING_GUIDE:
+- 'Very Fast': Pure action/dialogue. Minimal description. Short punchy paragraphs.
+- 'Fast': Keep momentum. No lingering. Cut to the next beat quickly.
+- 'Standard': Balanced dialogue and description. Standard paragraph lengths.
+- 'Slow': Detailed, atmospheric. Linger on emotion and environment.
+- 'Very Slow': Deep introspection. Heavy sensory immersion. Slow burn tension.
 STYLE_GUIDE:
 {style_block}
@@ -646,6 +712,8 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None):
 - CHARACTER INTERACTIONS: If characters are meeting for the first time in the summary, treat them as strangers.
 - SENTENCE VARIETY: Avoid repetitive sentence structures (e.g. starting multiple sentences with "He" or "She"). Vary sentence length to create rhythm.
 - GENRE CONSISTENCY: Ensure all introductions of characters, places, items, or actions are strictly appropriate for the {genre} genre. Avoid anachronisms or tonal clashes.
+- DIALOGUE VOICE: Every character speaks with their own distinct voice (see CHARACTER TRACKING for speech styles). No two characters may sound the same. Vary sentence length, vocabulary, and register per character.
+- CHAPTER HOOK: End this chapter with unresolved tension — a decision pending, a threat imminent, or a question unanswered.{f" Seed subtle anticipation for the next scene: '{next_chapter_hint}'." if next_chapter_hint else " Do not neatly resolve all threads."}
 QUALITY_CRITERIA:
 1. ENGAGEMENT & TENSION: Grip the reader. Ensure conflict/tension in every scene.
@@ -662,16 +730,10 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None):
 12. PROSE DYNAMICS: Vary sentence length. Use strong verbs. Avoid passive voice.
 13. CLARITY: Ensure sentences are clear and readable. Avoid convoluted phrasing.
-- 'Very Fast': Rapid fire, pure action/dialogue, minimal description.
-- 'Fast': Punchy, keep it moving.
-- 'Standard': Balanced dialogue and description.
-- 'Slow': Detailed, atmospheric, immersive.
-- 'Very Slow': Deep introspection, heavy sensory detail, slow burn.
 CONTEXT:
 - STORY_SO_FAR: {prev_sum}
 {prev_context_block}
-- CHARACTERS: {json.dumps(bp['characters'])}
+- CHARACTERS: {json.dumps(chars_for_writer)}
 {char_visuals}
 - SCENE_BEATS: {json.dumps(chap['beats'])}
@@ -682,13 +744,15 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None):
         resp_draft = ai.model_writer.generate_content(prompt)
         utils.log_usage(folder, ai.model_writer.name, resp_draft.usage_metadata)
         current_text = resp_draft.text
+        draft_words = len(current_text.split()) if current_text else 0
+        utils.log("WRITER", f" -> Draft: {draft_words:,} words (target: ~{est_words})")
     except Exception as e:
         utils.log("WRITER", f"⚠️ Failed Ch {chap['chapter_number']}: {e}")
         return f"## Chapter {chap['chapter_number']} Failed\n\nError: {e}"
     # Refinement Loop
     max_attempts = 5
-    SCORE_AUTO_ACCEPT = 9
+    SCORE_AUTO_ACCEPT = 8  # 8 = professional quality; no marginal gain from extra refinement
     SCORE_PASSING = 7
     SCORE_REWRITE_THRESHOLD = 6
@@ -750,43 +814,50 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None):
         guidelines = get_style_guidelines()
         fw_list = '", "'.join(guidelines['filter_words'])
-        # Exclude current critique from history to avoid duplication in prompt
-        history_str = "\n".join(past_critiques[:-1]) if len(past_critiques) > 1 else "None"
+        # Cap history to last 2 critiques to avoid token bloat
+        history_str = "\n".join(past_critiques[-3:-1]) if len(past_critiques) > 1 else "None"
         refine_prompt = f"""
 ROLE: Automated Editor
-TASK: Rewrite text to satisfy critique and style rules.
-CRITIQUE:
+TASK: Rewrite the draft chapter to address the critique. Preserve the narrative content and approximate word count.
+CURRENT_CRITIQUE:
 {critique}
-HISTORY:
+PREVIOUS_ATTEMPTS (context only):
 {history_str}
-CONSTRAINTS:
+HARD_CONSTRAINTS:
+- TARGET_WORDS: ~{est_words} words (aim for this; ±20% is acceptable if the scene genuinely demands it — but do not condense beats to save space)
+- BEATS MUST BE COVERED: {json.dumps(chap.get('beats', []))}
+- SUMMARY CONTEXT: {prev_sum[:1500]}
+AUTHOR_VOICE:
 {persona_info}
+STYLE:
 {style_block}
 {char_visuals}
-- BEATS: {json.dumps(chap.get('beats', []))}
-OPTIMIZATION_RULES:
-1. NO_FILTERS: Remove [{fw_list}].
-2. VARIETY: No consecutive sentence starts.
-3. SUBTEXT: Indirect dialogue.
-4. TONE: Match {meta.get('genre', 'Fiction')}.
-5. INTERACTION: Use environment.
-6. DRAMA: No summary mode.
-7. ACTIVE_VERBS: No 'was/were' + ing.
-8. SHOWING: Physical emotion.
-9. LOGIC: Continuous staging.
-10. CLARITY: Simple structures.
-INPUT_CONTEXT:
-- SUMMARY: {prev_sum}
-- PREVIOUS_TEXT: {prev_context_block}
-- DRAFT: {current_text}
-OUTPUT: Polished Markdown.
+PROSE_RULES (fix each one found in the draft):
+1. FILTER_REMOVAL: Remove filter words [{fw_list}] — rewrite to show the sensation directly.
+2. VARIETY: No two consecutive sentences starting with the same word or pronoun.
+3. SUBTEXT: Dialogue must imply meaning — not state it outright.
+4. TONE: Match {meta.get('genre', 'Fiction')} conventions throughout.
+5. ENVIRONMENT: Characters interact with their physical space.
+6. NO_SUMMARY_MODE: Dramatise key moments — do not skip or summarise them.
+7. ACTIVE_VOICE: Replace 'was/were + verb-ing' constructions with active alternatives.
+8. SHOWING: Render emotion through physical reactions, not labels.
+9. STAGING: Characters must enter and exit physically — no teleporting.
+10. CLARITY: Prefer simple sentence structures over convoluted ones.
+DRAFT_TO_REWRITE:
+{current_text}
+PREVIOUS_CHAPTER_ENDING (maintain continuity):
+{prev_context_block}
+OUTPUT: Complete polished chapter in Markdown. Include the chapter header. Same approximate length as the draft.
 """
         try:
             # Use Writer model (Flash) for refinement to save costs (Flash 1.5 is sufficient for editing)
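The new history cap relies on Python's negative-slice semantics: `past_critiques[-3:-1]` yields at most the two entries before the current (last) critique, and clamps gracefully on short lists. A sketch of that logic in isolation (helper name is illustrative):

```python
def critique_history(past_critiques: list) -> str:
    """Join up to the two critiques preceding the current (last) one."""
    if len(past_critiques) > 1:
        # [-3:-1] excludes the last entry and takes at most two before it
        return "\n".join(past_critiques[-3:-1])
    return "None"
```

On a two-element list the slice start (-3) clamps to index 0, so only the single prior critique is returned; no IndexError is possible.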
@@ -1159,25 +1230,33 @@ def check_and_propagate(bp, manuscript, changed_chap_num, folder, change_summary
utils.log("WRITER", f" -> Checking Ch {target_chap['num']} for continuity...") utils.log("WRITER", f" -> Checking Ch {target_chap['num']} for continuity...")
chap_word_count = len(target_chap.get('content', '').split())
prompt = f""" prompt = f"""
ROLE: Continuity Checker ROLE: Continuity Checker
TASK: Determine if chapter needs rewrite based on new context. TASK: Determine if a chapter contradicts a story change. If it does, rewrite it to fix the contradiction.
INPUT_DATA: CHANGED_CHAPTER: {changed_chap_num}
- CHANGED_CHAPTER: {changed_chap_num} CHANGE_SUMMARY: {current_context}
- NEW_CONTEXT: {current_context}
- CURRENT_CHAPTER_TEXT: {target_chap['content'][:5000]}... CHAPTER_TO_CHECK (Ch {target_chap['num']}):
{target_chap['content'][:12000]}
DECISION_LOGIC: DECISION_LOGIC:
- Compare CURRENT_CHAPTER_TEXT with NEW_CONTEXT. - If the chapter directly contradicts the change (references dead characters, items that no longer exist, events that didn't happen), status = REWRITE.
- If the chapter contradicts the new context (e.g. references events that didn't happen, or characters who are now dead/absent), it needs a REWRITE. - If the chapter is consistent or only tangentially related, status = NO_CHANGE.
- If it fits fine, NO_CHANGE. - Be conservative — only rewrite if there is a genuine contradiction.
REWRITE_RULES (apply only if REWRITE):
- Fix the specific contradiction. Preserve all other content.
- The rewritten chapter MUST be approximately {chap_word_count} words (same length as original).
- Include the chapter header formatted as Markdown H1.
- Do not add new plot points not in the original.
OUTPUT_FORMAT (JSON): OUTPUT_FORMAT (JSON):
{{ {{
"status": "NO_CHANGE" or "REWRITE", "status": "NO_CHANGE" or "REWRITE",
"reason": "Brief explanation", "reason": "Brief explanation of the contradiction or why it's consistent",
"content": "Full Markdown text of the rewritten chapter (ONLY if status is REWRITE, otherwise null)" "content": "Full Markdown rewritten chapter (ONLY if status is REWRITE, otherwise null)"
}} }}
""" """


@@ -71,6 +71,10 @@ def get_sorted_book_folders(run_dir):
     return sorted(subdirs, key=sort_key)
 # --- SHARED UTILS ---
+def log_banner(phase, title):
+    """Log a visually distinct phase separator line."""
+    log(phase, f"{'━' * 18} {title} {'━' * 18}")
 def log(phase, msg):
     timestamp = datetime.datetime.now().strftime('%H:%M:%S')
     line = f"[{timestamp}] {phase:<15} | {msg}"


@@ -1303,7 +1303,8 @@ def system_status():
         models_info = cache_data.get('models', {})
     except: pass
-    return render_template('system_status.html', models=models_info, cache=cache_data, datetime=datetime)
+    return render_template('system_status.html', models=models_info, cache=cache_data, datetime=datetime,
+                           image_model=ai.image_model_name, image_source=ai.image_model_source)
 @app.route('/personas')
 @login_required


@@ -338,12 +338,61 @@
 const statusBar = document.getElementById('status-bar');
 const costEl = document.getElementById('run-cost');
+let lastLog = '';
+
+// Phase → colour mapping (matches utils.log phase labels)
+const PHASE_COLORS = {
+    'WRITER': '#4fc3f7',
+    'ARCHITECT': '#81c784',
+    'TIMING': '#78909c',
+    'SYSTEM': '#fff176',
+    'TRACKER': '#ce93d8',
+    'RESUME': '#ffb74d',
+    'SERIES': '#64b5f6',
+    'ENRICHER': '#4dd0e1',
+    'HARVESTER': '#ff8a65',
+    'EDITOR': '#f48fb1',
+};
+
+function escapeHtml(str) {
+    return str.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
+}
+
+function colorizeLog(logText) {
+    if (!logText) return '';
+    return logText.split('\n').map(line => {
+        const m = line.match(/^(\[[\d:]+\])\s+(\w+)\s+\|(.*)$/);
+        if (!m) return '<span style="color:#666">' + escapeHtml(line) + '</span>';
+        const [, ts, phase, msg] = m;
+        const color = PHASE_COLORS[phase] || '#aaaaaa';
+        return '<span style="color:#555">' + escapeHtml(ts) + '</span> '
+            + '<span style="color:' + color + ';font-weight:bold">' + phase.padEnd(14) + '</span>'
+            + '<span style="color:#ccc">|' + escapeHtml(msg) + '</span>';
+    }).join('\n');
+}
+
+function getCurrentPhase(logText) {
+    if (!logText) return '';
+    const lines = logText.split('\n').filter(l => l.trim());
+    for (let k = lines.length - 1; k >= 0; k--) {
+        const m = lines[k].match(/\]\s+(\w+)\s+\|/);
+        if (m) return m[1];
+    }
+    return '';
+}
+
 function updateLog() {
     fetch(`/run/${runId}/status`)
         .then(response => response.json())
         .then(data => {
-            // Update Status Text
-            statusText.innerText = "Status: " + data.status.charAt(0).toUpperCase() + data.status.slice(1);
+            // Update Status Text + current phase
+            const statusLabel = data.status.charAt(0).toUpperCase() + data.status.slice(1);
+            if (data.status === 'running') {
+                const phase = getCurrentPhase(data.log);
+                statusText.innerText = 'Status: Running' + (phase ? ' — ' + phase : '');
+            } else {
+                statusText.innerText = 'Status: ' + statusLabel;
+            }
             costEl.innerText = '$' + parseFloat(data.cost).toFixed(4);
             // Update Status Bar
@@ -371,10 +420,11 @@
                 statusBar.innerText = "";
             }
-            // Update Log (only if changed to avoid scroll jitter)
-            if (consoleEl.innerText !== data.log) {
+            // Update Log with phase colorization (only if changed to avoid scroll jitter)
+            if (lastLog !== data.log) {
+                lastLog = data.log;
                 const isScrolledToBottom = consoleEl.scrollHeight - consoleEl.clientHeight <= consoleEl.scrollTop + 50;
-                consoleEl.innerText = data.log;
+                consoleEl.innerHTML = colorizeLog(data.log);
                 if (isScrolledToBottom) {
                     consoleEl.scrollTop = consoleEl.scrollHeight;
                 }


@@ -56,6 +56,22 @@
     </tr>
     {% endif %}
 {% endfor %}
+<tr>
+    <td class="fw-bold text-uppercase">Image</td>
+    <td>
+        {% if image_model %}
+        <span class="badge bg-success">{{ image_model }}</span>
+        {% else %}
+        <span class="badge bg-danger">Unavailable</span>
+        {% endif %}
+    </td>
+    <td>
+        <span class="badge bg-light text-dark border">{{ image_source or 'None' }}</span>
+    </td>
+    <td class="small text-muted">
+        {% if image_model %}Imagen model used for book cover generation.{% else %}No image generation model could be initialized. Check GCP credentials or Gemini API key.{% endif %}
+    </td>
+</tr>
 {% else %}
 <tr>
     <td colspan="3" class="text-center py-4 text-muted">
@@ -139,15 +155,32 @@
     <h5 class="mb-0"><i class="fas fa-clock me-2"></i>Cache Status</h5>
 </div>
 <div class="card-body">
-    <p class="mb-0">
+    <p class="mb-1">
         <strong>Last Scan:</strong>
         {% if cache and cache.timestamp %}
-        {{ datetime.fromtimestamp(cache.timestamp).strftime('%Y-%m-%d %H:%M:%S') }}
+        {{ datetime.fromtimestamp(cache.timestamp).strftime('%Y-%m-%d %H:%M:%S') }} UTC
         {% else %}
         Never
         {% endif %}
     </p>
-    <p class="text-muted small mb-0">Model selection is cached for 24 hours to save API calls.</p>
+    <p class="mb-0">
+        <strong>Next Refresh:</strong>
+        {% if cache and cache.timestamp %}
+            {% set expires = cache.timestamp + 86400 %}
+            {% set now_ts = datetime.utcnow().timestamp() %}
+            {% if expires > now_ts %}
+                {% set remaining = (expires - now_ts) | int %}
+                {% set h = remaining // 3600 %}{% set m = (remaining % 3600) // 60 %}
+                in {{ h }}h {{ m }}m
+                <span class="badge bg-success ms-1">Cache Valid</span>
+            {% else %}
+                <span class="badge bg-warning text-dark">Expired — click Refresh &amp; Optimize</span>
+            {% endif %}
+        {% else %}
+            <span class="badge bg-warning text-dark">No cache — click Refresh &amp; Optimize</span>
+        {% endif %}
+    </p>
+    <p class="text-muted small mt-2 mb-0">Model selection is cached for 24 hours to save API calls.</p>
 </div>
 </div>
 {% endblock %}