v2.0.0: Modularize project into single-responsibility packages

Replaced monolithic modules/ package with a clean architecture:

- core/       config.py, utils.py
- ai/         models.py (ResilientModel), setup.py (init_models)
- story/      planner.py, writer.py, editor.py, style_persona.py, bible_tracker.py
- marketing/  cover.py, blurb.py, fonts.py, assets.py
- export/     exporter.py
- web/        app.py (Flask factory), db.py, helpers.py, tasks.py, routes/{auth,project,run,persona,admin}.py
- cli/        engine.py (run_generation), wizard.py (BookWizard)

Flask routes split into 5 Blueprints; all templates updated with blueprint-
prefixed url_for() calls. Dockerfile and docker-compose updated to use
web.app entry point and new package paths.
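
A minimal sketch of what the blueprint wiring implies (the `auth` blueprint and `/login` route here are illustrative assumptions, not the project's actual route names). Templates then reference endpoints as `url_for("auth.login")` instead of the old flat `url_for("login")`:

```python
from flask import Flask, Blueprint, url_for

# Hypothetical auth blueprint; the real routes/auth.py will differ.
auth_bp = Blueprint("auth", __name__, url_prefix="/auth")

@auth_bp.route("/login")
def login():
    return "login"

def create_app():
    """App factory: register each route module's blueprint."""
    app = Flask(__name__)
    app.register_blueprint(auth_bp)
    # ... project, run, persona, admin blueprints registered the same way
    return app
```

With this layout, `url_for("auth.login")` resolves to `/auth/login`, which is why every template needed the blueprint prefix added.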

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-20 22:20:53 -05:00
parent edabc4d4fa
commit f7099cc3e4
52 changed files with 3984 additions and 3798 deletions

story/editor.py Normal file

@@ -0,0 +1,399 @@
import json
import os

from core import utils
from ai import models as ai_models
from story.style_persona import get_style_guidelines


def evaluate_chapter_quality(text, chapter_title, genre, model, folder):
    guidelines = get_style_guidelines()
    ai_isms = "', '".join(guidelines['ai_isms'])
    fw_examples = ", ".join([f"'He {w}'" for w in guidelines['filter_words'][:5]])
    word_count = len(text.split()) if text else 0
    min_sugg = max(3, int(word_count / 500))
    max_sugg = min_sugg + 2
    suggestion_range = f"{min_sugg}-{max_sugg}"
    prompt = f"""
ROLE: Senior Literary Editor
TASK: Critique chapter draft.
METADATA:
- TITLE: {chapter_title}
- GENRE: {genre}
PROHIBITED_PATTERNS:
- AI_ISMS: {ai_isms}
- FILTER_WORDS: {fw_examples}
- CLICHES: White Room, As You Know Bob, Summary Mode, Anachronisms.
- SYNTAX: Repetitive structure, Passive Voice, Adverb Reliance.
QUALITY_RUBRIC (1-10):
1. ENGAGEMENT & TENSION: Does the story grip the reader from the first line? Is there conflict or tension in every scene?
2. SCENE EXECUTION: Is the middle of the chapter fully fleshed out? Does it avoid "sagging" or summarizing key moments?
3. VOICE & TONE: Is the narrative voice distinct? Does it match the genre?
4. SENSORY IMMERSION: Does the text use sensory details effectively without being overwhelming?
5. SHOW, DON'T TELL: Are emotions shown through physical reactions and subtext?
6. CHARACTER AGENCY: Do characters drive the plot through active choices?
7. PACING: Does the chapter feel rushed? Does the ending land with impact, or does it cut off too abruptly?
8. GENRE APPROPRIATENESS: Are introductions of characters, places, items, or actions consistent with the {genre} conventions?
9. DIALOGUE AUTHENTICITY: Do characters sound distinct? Is there subtext? Avoids "on-the-nose" dialogue.
10. PLOT RELEVANCE: Does the chapter advance the plot or character arcs significantly? Avoids filler.
11. STAGING & FLOW: Do characters enter/exit physically? Do paragraphs transition logically (Action -> Reaction)?
12. PROSE DYNAMICS: Is there sentence variety? Avoids purple prose, adjective stacking, and excessive modification.
13. CLARITY & READABILITY: Is the text easy to follow? Are sentences clear and concise?
SCORING_SCALE:
- 10 (Masterpiece): Flawless, impactful, ready for print.
- 9 (Bestseller): Exceptional quality, minor style tweaks only.
- 7-8 (Professional): Good draft, solid structure, needs editing.
- 6 (Passable): Average, has issues with pacing or voice. Needs heavy refinement.
- 1-5 (Fail): Structural flaws, boring, or incoherent. Needs rewrite.
OUTPUT_FORMAT (JSON):
{{
"score": int,
"critique": "Detailed analysis of flaws, citing specific examples from the text.",
"actionable_feedback": "List of {suggestion_range} specific, ruthless instructions for the rewrite (e.g. 'Expand the middle dialogue', 'Add sensory details about the rain', 'Dramatize the argument instead of summarizing it')."
}}
"""
    try:
        response = model.generate_content([prompt, text[:30000]])
        model_name = getattr(model, 'name', ai_models.logic_model_name)
        utils.log_usage(folder, model_name, response.usage_metadata)
        data = json.loads(utils.clean_json(response.text))
        critique_text = data.get('critique', 'No critique provided.')
        if data.get('actionable_feedback'):
            critique_text += "\n\nREQUIRED FIXES:\n" + str(data.get('actionable_feedback'))
        return data.get('score', 0), critique_text
    except Exception as e:
        return 0, f"Evaluation error: {str(e)}"
def check_pacing(bp, summary, last_chapter_text, last_chapter_data, remaining_chapters, folder):
    utils.log("ARCHITECT", "Checking pacing and structure health...")
    if not remaining_chapters:
        return None
    meta = bp.get('book_metadata', {})
    prompt = f"""
ROLE: Structural Editor
TASK: Analyze pacing.
CONTEXT:
- PREVIOUS_SUMMARY: {summary[-3000:]}
- CURRENT_CHAPTER: {last_chapter_text[-2000:]}
- UPCOMING: {json.dumps([c['title'] for c in remaining_chapters[:3]])}
- REMAINING_COUNT: {len(remaining_chapters)}
LOGIC:
- IF skipped major beats -> ADD_BRIDGE
- IF covered next chapter's beats -> CUT_NEXT
- ELSE -> OK
OUTPUT_FORMAT (JSON):
{{
"status": "ok" or "add_bridge" or "cut_next",
"reason": "Explanation...",
"new_chapter": {{ "title": "...", "beats": ["..."], "pov_character": "..." }} (Required if add_bridge)
}}
"""
    try:
        response = ai_models.model_logic.generate_content(prompt)
        utils.log_usage(folder, ai_models.model_logic.name, response.usage_metadata)
        return json.loads(utils.clean_json(response.text))
    except Exception as e:
        utils.log("ARCHITECT", f"Pacing check failed: {e}")
        return None
def analyze_consistency(bp, manuscript, folder):
    utils.log("EDITOR", "Analyzing manuscript for continuity errors...")
    if not manuscript:
        return {"issues": ["No manuscript found."], "score": 0}
    if not bp:
        return {"issues": ["No blueprint found."], "score": 0}
    chapter_summaries = []
    for ch in manuscript:
        text = ch.get('content', '')
        excerpt = text[:1000] + "\n...\n" + text[-1000:] if len(text) > 2000 else text
        chapter_summaries.append(f"Ch {ch.get('num')}: {excerpt}")
    context = "\n".join(chapter_summaries)
    prompt = f"""
ROLE: Continuity Editor
TASK: Analyze book summary for plot holes.
INPUT_DATA:
- CHARACTERS: {json.dumps(bp.get('characters', []))}
- SUMMARIES:
{context}
OUTPUT_FORMAT (JSON): {{ "issues": ["Issue 1", "Issue 2"], "score": 8, "summary": "Brief overall assessment." }}
"""
    try:
        response = ai_models.model_logic.generate_content(prompt)
        utils.log_usage(folder, ai_models.model_logic.name, response.usage_metadata)
        return json.loads(utils.clean_json(response.text))
    except Exception as e:
        return {"issues": [f"Analysis failed: {e}"], "score": 0, "summary": "Error during analysis."}
def rewrite_chapter_content(bp, manuscript, chapter_num, instruction, folder):
    utils.log("WRITER", f"Rewriting Ch {chapter_num} with instruction: {instruction}")
    target_chap = next((c for c in manuscript if str(c.get('num')) == str(chapter_num)), None)
    if not target_chap:
        # Callers unpack a (content, summary) pair, so return a 2-tuple here too.
        return None, None
    prev_text = ""
    prev_chap = None
    if isinstance(chapter_num, int):
        prev_chap = next((c for c in manuscript if c['num'] == chapter_num - 1), None)
    elif str(chapter_num).lower() == "epilogue":
        numbered_chaps = [c for c in manuscript if isinstance(c['num'], int)]
        if numbered_chaps:
            prev_chap = max(numbered_chaps, key=lambda x: x['num'])
    if prev_chap:
        prev_text = prev_chap.get('content', '')[-3000:]
    meta = bp.get('book_metadata', {})
    ad = meta.get('author_details', {})
    if not ad and 'author_bio' in meta:
        persona_info = meta['author_bio']
    else:
        persona_info = f"Name: {ad.get('name', meta.get('author', 'Unknown'))}\n"
        if ad.get('bio'):
            persona_info += f"Style/Bio: {ad['bio']}\n"
    char_visuals = ""
    tracking_path = os.path.join(folder, "tracking_characters.json")
    if os.path.exists(tracking_path):
        try:
            tracking_chars = utils.load_json(tracking_path)
            if tracking_chars:
                char_visuals = "\nCHARACTER TRACKING (Visuals & Preferences):\n"
                for name, data in tracking_chars.items():
                    desc = ", ".join(data.get('descriptors', []))
                    speech = data.get('speech_style', 'Unknown')
                    char_visuals += f"- {name}: {desc}\n * Speech: {speech}\n"
        except Exception:
            pass
    guidelines = get_style_guidelines()
    fw_list = '", "'.join(guidelines['filter_words'])
    prompt = f"""
You are an expert fiction writing AI. Your task is to rewrite a specific chapter based on a user directive.
INPUT DATA:
- TITLE: {meta.get('title')}
- GENRE: {meta.get('genre')}
- TONE: {meta.get('style', {}).get('tone')}
- AUTHOR_VOICE: {persona_info}
- PREVIOUS_CONTEXT: {prev_text}
- CURRENT_DRAFT: {target_chap.get('content', '')[:5000]}
- CHARACTERS: {json.dumps(bp.get('characters', []))}
{char_visuals}
PRIMARY DIRECTIVE (USER INSTRUCTION):
{instruction}
EXECUTION RULES:
1. CONTINUITY: The new text must flow logically from PREVIOUS_CONTEXT.
2. ADHERENCE: The PRIMARY DIRECTIVE overrides any conflicting details in CURRENT_DRAFT.
3. VOICE: Strictly emulate the AUTHOR_VOICE.
4. GENRE: Enforce {meta.get('genre')} conventions. No anachronisms.
5. LOGIC: Enforce strict causality (Action -> Reaction). No teleporting characters.
PROSE OPTIMIZATION RULES (STRICT ENFORCEMENT):
- FILTER_REMOVAL: Scan for words [{fw_list}]. If found, rewrite the sentence to remove the filter and describe the sensation directly.
- SENTENCE_VARIETY: Penalize consecutive sentences starting with the same pronoun or article. Vary structure.
- SHOW_DONT_TELL: Convert internal summaries of emotion into physical actions or subtextual dialogue.
- ACTIVE_VOICE: Convert passive voice ("was [verb]ed") to active voice.
- SENSORY_ANCHORING: The first paragraph must establish the setting using at least one non-visual sense (smell, sound, touch).
- SUBTEXT: Dialogue must imply meaning rather than stating it outright.
RETURN JSON:
{{
"content": "The full chapter text in Markdown...",
"summary": "A concise summary of the chapter's events and ending state (for continuity checks)."
}}
"""
    try:
        response = ai_models.model_logic.generate_content(prompt)
        utils.log_usage(folder, ai_models.model_logic.name, response.usage_metadata)
        try:
            data = json.loads(utils.clean_json(response.text))
            return data.get('content'), data.get('summary')
        except (ValueError, TypeError):
            # Model returned non-JSON text; fall back to the raw response.
            return response.text, None
    except Exception as e:
        utils.log("WRITER", f"Rewrite failed: {e}")
        return None, None
def check_and_propagate(bp, manuscript, changed_chap_num, folder, change_summary=None):
    utils.log("WRITER", f"Checking ripple effects from Ch {changed_chap_num}...")
    changed_chap = next((c for c in manuscript if c['num'] == changed_chap_num), None)
    if not changed_chap:
        return None
    if change_summary:
        current_context = change_summary
    else:
        change_summary_prompt = f"""
ROLE: Summarizer
TASK: Summarize the key events and ending state of this chapter for continuity tracking.
TEXT:
{changed_chap.get('content', '')[:10000]}
FOCUS:
- Major plot points.
- Character status changes (injuries, items acquired, location changes).
- New information revealed.
OUTPUT: Concise text summary.
"""
        try:
            resp = ai_models.model_writer.generate_content(change_summary_prompt)
            utils.log_usage(folder, ai_models.model_writer.name, resp.usage_metadata)
            current_context = resp.text
        except Exception:
            current_context = changed_chap.get('content', '')[-2000:]
    original_change_context = current_context
    sorted_ms = sorted(manuscript, key=utils.chapter_sort_key)
    start_index = -1
    for i, c in enumerate(sorted_ms):
        if str(c['num']) == str(changed_chap_num):
            start_index = i
            break
    if start_index == -1 or start_index == len(sorted_ms) - 1:
        return None
    changes_made = False
    consecutive_no_changes = 0
    potential_impact_chapters = []
    for i in range(start_index + 1, len(sorted_ms)):
        target_chap = sorted_ms[i]
        if consecutive_no_changes >= 2:
            if target_chap['num'] not in potential_impact_chapters:
                future_flags = [n for n in potential_impact_chapters if isinstance(n, int) and isinstance(target_chap['num'], int) and n > target_chap['num']]
                if not future_flags:
                    remaining_chaps = sorted_ms[i:]
                    if not remaining_chaps:
                        break
                    utils.log("WRITER", " -> Short-term ripple dissipated. Scanning remaining chapters for long-range impacts...")
                    chapter_summaries = []
                    for rc in remaining_chaps:
                        text = rc.get('content', '')
                        excerpt = text[:500] + "\n...\n" + text[-500:] if len(text) > 1000 else text
                        chapter_summaries.append(f"Ch {rc['num']}: {excerpt}")
                    scan_prompt = f"""
ROLE: Continuity Scanner
TASK: Identify chapters impacted by a change.
CHANGE_CONTEXT:
{original_change_context}
CHAPTER_SUMMARIES:
{json.dumps(chapter_summaries)}
CRITERIA: Identify later chapters that mention items, characters, or locations involved in the Change Context.
OUTPUT_FORMAT (JSON): [Chapter_Number_Int, ...]
"""
                    try:
                        resp = ai_models.model_logic.generate_content(scan_prompt)
                        utils.log_usage(folder, ai_models.model_logic.name, resp.usage_metadata)
                        potential_impact_chapters = json.loads(utils.clean_json(resp.text))
                        if not isinstance(potential_impact_chapters, list):
                            potential_impact_chapters = []
                        potential_impact_chapters = [int(x) for x in potential_impact_chapters if str(x).isdigit()]
                    except Exception as e:
                        utils.log("WRITER", f" -> Scan failed: {e}. Stopping.")
                        break
                    if not potential_impact_chapters:
                        utils.log("WRITER", " -> No long-range impacts detected. Stopping.")
                        break
                    else:
                        utils.log("WRITER", f" -> Detected potential impact in chapters: {potential_impact_chapters}")
            if isinstance(target_chap['num'], int) and target_chap['num'] not in potential_impact_chapters:
                utils.log("WRITER", f" -> Skipping Ch {target_chap['num']} (Not flagged).")
                continue
        utils.log("WRITER", f" -> Checking Ch {target_chap['num']} for continuity...")
        chap_word_count = len(target_chap.get('content', '').split())
        prompt = f"""
ROLE: Continuity Checker
TASK: Determine if a chapter contradicts a story change. If it does, rewrite it to fix the contradiction.
CHANGED_CHAPTER: {changed_chap_num}
CHANGE_SUMMARY: {current_context}
CHAPTER_TO_CHECK (Ch {target_chap['num']}):
{target_chap['content'][:12000]}
DECISION_LOGIC:
- If the chapter directly contradicts the change (references dead characters, items that no longer exist, events that didn't happen), status = REWRITE.
- If the chapter is consistent or only tangentially related, status = NO_CHANGE.
- Be conservative — only rewrite if there is a genuine contradiction.
REWRITE_RULES (apply only if REWRITE):
- Fix the specific contradiction. Preserve all other content.
- The rewritten chapter MUST be approximately {chap_word_count} words (same length as original).
- Include the chapter header formatted as Markdown H1.
- Do not add new plot points not in the original.
OUTPUT_FORMAT (JSON):
{{
"status": "NO_CHANGE" or "REWRITE",
"reason": "Brief explanation of the contradiction or why it's consistent",
"content": "Full Markdown rewritten chapter (ONLY if status is REWRITE, otherwise null)"
}}
"""
        try:
            response = ai_models.model_writer.generate_content(prompt)
            utils.log_usage(folder, ai_models.model_writer.name, response.usage_metadata)
            data = json.loads(utils.clean_json(response.text))
            if data.get('status') == 'NO_CHANGE':
                utils.log("WRITER", f" -> Ch {target_chap['num']} is consistent.")
                current_context = f"Ch {target_chap['num']} Summary: " + target_chap.get('content', '')[-2000:]
                consecutive_no_changes += 1
            elif data.get('status') == 'REWRITE' and data.get('content'):
                new_text = data.get('content')
                utils.log("WRITER", f" -> Rewriting Ch {target_chap['num']} to fix continuity.")
                target_chap['content'] = new_text
                changes_made = True
                current_context = f"Ch {target_chap['num']} Summary: " + new_text[-2000:]
                consecutive_no_changes = 0
                try:
                    with open(os.path.join(folder, "manuscript.json"), 'w') as f:
                        json.dump(manuscript, f, indent=2)
                except OSError:
                    pass
        except Exception as e:
            utils.log("WRITER", f" -> Check failed: {e}")
    return manuscript if changes_made else None