Blueprint v2.4-2.6: Style Rules UI, Lore RAG, Thread Tracking, Redo Book

v2.4 — Item 7: Refresh Style Guidelines
- web/routes/admin.py: Added /admin/refresh-style-guidelines route (AJAX-aware)
- templates/system_status.html: Added 'Refresh Style Rules' button with spinner

v2.5 — Item 8: Lore & Location RAG-Lite
- story/bible_tracker.py: Added update_lore_index() — extracts location/item
  descriptions from chapters into tracking_lore.json
- story/writer.py: Reads chapter locations/key_items, builds LORE_CONTEXT block
  injected into the prompt (graceful degradation if no tags)
- cli/engine.py: Loads tracking_lore.json on resume, calls update_lore_index
  after each chapter, saves tracking_lore.json

v2.5 — Item 9: Structured Story State (Thread Tracking)
- story/state.py (new): load_story_state, update_story_state (extracts
  active_threads, immediate_handoff, resolved_threads via model_logic),
  format_for_prompt (structured context replacing the prev_sum blob)
- cli/engine.py: Loads story_state.json on resume, uses format_for_prompt as
  summary_ctx for write_chapter, updates state after each chapter accepted

v2.6 — Item 10: Redo Book
- templates/consistency_report.html: Added 'Redo Book' form with instruction
  input and confirmation dialog
- web/routes/run.py: Added revise_book route — creates new Run, queues
  generate_book_task with user instruction as feedback

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-21 01:35:43 -05:00
parent 2db7a35a66
commit 83a6a4315b
9 changed files with 291 additions and 27 deletions


@@ -1,4 +1,4 @@
# AI Context Optimization Blueprint (v2.6)
This blueprint outlines architectural improvements for how AI context is managed during the writing process. The goal is to provide the AI (Claude/Gemini) with **better, highly-targeted context upfront**, which will dramatically improve first-draft quality and reduce the reliance on expensive, time-consuming quality checks and rewrites (currently up to 5 attempts).
@@ -79,9 +79,9 @@ Despite the existing `style_guidelines.json` and basic prompts, the AI writing o
AI models evolve, and new overused phrases regularly emerge. The static list in `data/style_guidelines.json` will become outdated. The `refresh_style_guidelines()` function already exists in `story/style_persona.py` but has no UI or scheduled trigger.
**Solution & Implementation Plan:**
1. **Admin UI Trigger:** Added "Refresh Style Rules" button to `templates/system_status.html` using the same async AJAX spinner pattern as "Refresh & Optimize". *(Implemented v2.4)*
2. **Backend Route:** Added `/admin/refresh-style-guidelines` route in `web/routes/admin.py` that calls `style_persona.refresh_style_guidelines(model_logic)` and returns JSON status with counts. *(Implemented v2.4)*
3. **Logging:** The route logs the updated counts to `data/app.log` via `utils.log`. *(Implemented v2.4)*
## 8. Lore & Location Context Retrieval (RAG-Lite) — v2.5
@@ -89,10 +89,11 @@ AI models evolve, and new overused phrases regularly emerge. The static list in
The remaining half of Section 1 — `prev_sum` and the `style_block` carry all world-building as a monolithic blob. Locations, artifacts, and lore details not relevant to the current chapter waste tokens and dilute the AI's focus, causing it to hallucinate setting details or ignore established world rules.
**Solution & Implementation Plan:**
1. **Tag Beats with Locations/Items:** Chapter schema supports optional `locations` and `key_items` arrays (e.g., `"locations": ["The Thornwood Inn"]`). `story/writer.py` reads these from the chapter dict. *(Implemented v2.5)*
2. **Lore Index in Bible:** Added `update_lore_index(folder, chapter_text, current_lore)` to `story/bible_tracker.py`. The index is stored in `tracking_lore.json` and loaded into `tracking['lore']`. *(Implemented v2.5)*
3. **Retrieval in `write_chapter`:** `story/writer.py` matches the chapter's `locations`/`key_items` against the lore index and injects a `LORE_CONTEXT` block into the prompt. *(Implemented v2.5)*
4. **Fallback:** If a chapter has no `locations`/`key_items` or the lore index is empty, `lore_block` stays empty and behaviour is unchanged. *(Implemented v2.5)*
5. **Engine Wiring:** `cli/engine.py` loads `tracking_lore.json` on resume, calls `update_lore_index` after each chapter, and saves the index back to `tracking_lore.json`. *(Implemented v2.5)*
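A minimal, self-contained sketch of this retrieval (the lore names are illustrative examples from the blueprint, not the actual `writer.py` code):

```python
# Hypothetical chapter tags and lore index, mirroring the matching rule
# described above (bidirectional, case-insensitive substring match).
chapter = {
    "locations": ["The Thornwood Inn"],
    "key_items": ["The Sunstone Amulet"],
}
lore_index = {
    "The Thornwood Inn": "A low-beamed roadside inn, smoky and crowded.",
    "The Sunstone Amulet": "A warm amber stone set in tarnished silver.",
    "The Glass Citadel": "A distant fortress of mirrored towers.",
}

refs = chapter.get("locations", []) + chapter.get("key_items", [])
relevant = {
    name: desc for name, desc in lore_index.items()
    if any(name.lower() in ref.lower() or ref.lower() in name.lower() for ref in refs)
}
lore_block = "\n".join(f"- {name}: {desc}" for name, desc in relevant.items())
```

Only the two tagged entries land in `lore_block`; "The Glass Citadel" is filtered out, which is exactly the token saving this section is after.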
## 9. Structured "Story So Far" — Thread Tracking — v2.5
@@ -100,18 +101,19 @@ The remaining half of Section 1 — `prev_sum` and the `style_block` carry all w
The remaining half of Section 2 — `prev_sum` is a growing unstructured narrative blob. As chapters accumulate, the AI receives an ever-longer wall of prose-summary as context, which dilutes attention, buries the most important recent state, and causes continuity drift.
**Solution & Implementation Plan:**
1. **Structured Summary Schema:** New `story/state.py` module. After each chapter, `update_story_state()` uses `model_logic` to extract and save `story_state.json` with `active_threads`, `immediate_handoff` (exactly 3 sentences), and `resolved_threads`. *(Implemented v2.5)*
2. **Prompt Injection:** `cli/engine.py` calls `story_state.format_for_prompt(current_story_state, chapter_beats)` before each `write_chapter` call. The formatted string replaces `prev_sum` as the context, falling back to the raw `summary` blob if no structured state exists yet. *(Implemented v2.5)*
3. **State Update Step:** `cli/engine.py` calls `story_state.update_story_state()` after each chapter is written and accepted, saving `story_state.json` in the book folder. *(Implemented v2.5)*
4. **Continuity Guard:** `format_for_prompt()` always places `IMMEDIATE STORY HANDOFF` first, followed by `ACTIVE PLOT THREADS`. Resolved threads are only included if referenced in the next chapter's beats. *(Implemented v2.5)*

## 10. Consistency Report Quick Fix (v2.6)

**Current Problem:**
The `templates/consistency_report.html` page displays issues found in the manuscript but does not provide a direct action to fix them. It only suggests using the "Read & Edit" or "Modify & Re-run" features.

**Solution & Implementation Plan:**
1. **Frontend Action:** Added a "Redo Book" form to the `templates/consistency_report.html` footer with a text input for the revision instruction and a confirmation prompt on submit. *(Implemented v2.6)*
2. **Backend Route:** Added `/project/<run_id>/revise_book/<book_folder>` route in `web/routes/run.py`. The route creates a new `Run` record and queues `generate_book_task` with the user's instruction as `feedback` and `source_run_id` pointing to the original run. The existing bible refinement logic in `generate_book_task` applies the instruction to the bible before regenerating. *(Implemented v2.6)*
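The structured state saved to `story_state.json` has this shape (example values carried over from the v2.5 draft of this blueprint):

```json
{
  "active_threads": ["Elara is searching for the Sunstone", "The Inquisitor suspects Daren"],
  "immediate_handoff": "Elara escaped through the east gate. Daren was left behind. Dawn is breaking.",
  "resolved_threads": ["The tavern debt is paid"],
  "chapter": 7
}
```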
## Summary of Actionable Changes for Implementation Mode:
1. ✅ Modify `writer.py` to filter `chars_for_writer` based on characters named in `beats`. *(Implemented in v1.5.0)*
@@ -120,6 +122,7 @@ The remaining half of Section 2 — `prev_sum` is a growing unstructured narrati
4. ✅ Add a pre-processing function to expand chapter beats into staging directions before generating the prose draft. *(Implemented in v2.0 — `expand_beats_to_treatment` in `story/writer.py`)*
5. ✅ **(v2.2)** Update "Refresh & Optimize" action in UI to be an async fetch call with a processing flag instead of a full page reload, and update `admin.py` to handle JSON responses.
6. ✅ **(v2.3)** Updated writing prompts and evaluation rubrics across `story/writer.py`, `story/editor.py`, and `story/style_persona.py` to aggressively filter AI-isms, enforce Deep POV via a non-negotiable mandate, add genre-specific writing instructions, and fail chapters that rely on "telling" rather than "showing" via filter-word density checks in the evaluator.
7. ✅ **(v2.4)** Add "Refresh Style Rules" button to `system_status.html` and `/admin/refresh-style-guidelines` route in `admin.py`. *(Implemented v2.4)*
8. ✅ **(v2.5)** Lore & Location RAG-Lite: `update_lore_index` in `bible_tracker.py`, `tracking_lore.json`, lore retrieval in `writer.py`, wired in `engine.py`. *(Implemented v2.5)*
9. ✅ **(v2.5)** Structured Story State (Thread Tracking): new `story/state.py`, `story_state.json`, structured prompt context replacing the raw summary blob in `engine.py`. *(Implemented v2.5)*
10. ✅ **(v2.6)** "Redo Book" form in `consistency_report.html` + `revise_book` route in `run.py` that creates a new run with the instruction applied as bible feedback. *(Implemented v2.6)*


@@ -8,7 +8,7 @@ from core import config, utils
from ai import models as ai_models
from ai import setup as ai_setup
from story import planner, writer as story_writer, editor as story_editor
from story import style_persona, bible_tracker, state as story_state
from marketing import assets as marketing_assets
from export import exporter
@@ -92,8 +92,9 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
events_track_path = os.path.join(folder, "tracking_events.json")
chars_track_path = os.path.join(folder, "tracking_characters.json")
warn_track_path = os.path.join(folder, "tracking_warnings.json")
lore_track_path = os.path.join(folder, "tracking_lore.json")
tracking = {"events": [], "characters": {}, "content_warnings": [], "lore": {}}
if resume:
    if os.path.exists(events_track_path):
        tracking['events'] = utils.load_json(events_track_path)
@@ -101,6 +102,11 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
        tracking['characters'] = utils.load_json(chars_track_path)
    if os.path.exists(warn_track_path):
        tracking['content_warnings'] = utils.load_json(warn_track_path)
    if os.path.exists(lore_track_path):
        tracking['lore'] = utils.load_json(lore_track_path) or {}
# Load structured story state
current_story_state = story_state.load_story_state(folder)
summary = "The story begins."
if ms:
@@ -148,7 +154,12 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
while True:
    try:
        # Build context: use structured state if available, fall back to summary blob
        structured_ctx = story_state.format_for_prompt(current_story_state, ch.get('beats', []))
        if structured_ctx:
            summary_ctx = structured_ctx
        else:
            summary_ctx = summary[-8000:] if len(summary) > 8000 else summary
        next_hint = chapters[i+1]['title'] if i + 1 < len(chapters) else ""
        txt = story_writer.write_chapter(ch, bp, folder, summary_ctx, tracking, prev_content, next_chapter_hint=next_hint)
    except Exception as e:
except Exception as e: except Exception as e:
@@ -218,6 +229,13 @@ def process_book(bp, folder, context="", resume=False, interactive=False):
with open(chars_track_path, "w") as f: json.dump(tracking['characters'], f, indent=2)
with open(warn_track_path, "w") as f: json.dump(tracking.get('content_warnings', []), f, indent=2)
# Update Lore Index (Item 8: RAG-Lite)
tracking['lore'] = bible_tracker.update_lore_index(folder, txt, tracking.get('lore', {}))
with open(lore_track_path, "w") as f: json.dump(tracking['lore'], f, indent=2)
# Update Structured Story State (Item 9: Thread Tracking)
current_story_state = story_state.update_story_state(txt, ch['chapter_number'], current_story_state, folder)
# Dynamic Pacing Check (every other chapter)
remaining = chapters[i+1:]
if remaining and len(remaining) >= 2 and i % 2 == 1:
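The context-selection wiring in this diff reduces to a small pure function (a sketch for clarity, not the literal engine code):

```python
def choose_summary_ctx(structured_ctx, summary):
    """Prefer the structured story state; otherwise fall back to the
    last 8000 characters of the raw summary blob."""
    if structured_ctx:
        return structured_ctx
    return summary[-8000:] if len(summary) > 8000 else summary
```

Because `format_for_prompt` returns `None` when no structured state exists yet, resumed legacy runs keep the old truncated-blob behaviour.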


@@ -93,6 +93,42 @@ def update_tracking(folder, chapter_num, chapter_text, current_tracking):
    return current_tracking

def update_lore_index(folder, chapter_text, current_lore):
    """Extract canonical descriptions of locations and key items from a chapter
    and merge them into the lore index dict. Returns the updated lore dict."""
    utils.log("TRACKER", "Updating lore index from chapter...")
    prompt = f"""
ROLE: Lore Keeper
TASK: Extract canonical descriptions of locations and key items from this chapter.
EXISTING_LORE:
{json.dumps(current_lore)}
CHAPTER_TEXT:
{chapter_text[:15000]}
INSTRUCTIONS:
1. For each LOCATION mentioned: provide a 1-2 sentence canonical description (appearance, atmosphere, notable features).
2. For each KEY ITEM or ARTIFACT mentioned: provide a 1-2 sentence canonical description (appearance, properties, significance).
3. Do NOT add characters — only physical places and objects.
4. If an entry already exists in EXISTING_LORE, update or preserve it — do not duplicate.
5. Use the exact name as the key (e.g., "The Thornwood Inn", "The Sunstone Amulet").
6. Only include entries that have meaningful descriptive detail in the chapter text.
OUTPUT_FORMAT (JSON): {{"LocationOrItemName": "Description.", ...}}
"""
    try:
        response = ai_models.model_logic.generate_content(prompt)
        utils.log_usage(folder, ai_models.model_logic.name, response.usage_metadata)
        new_entries = json.loads(utils.clean_json(response.text))
        if isinstance(new_entries, dict):
            current_lore.update(new_entries)
        return current_lore
    except Exception as e:
        utils.log("TRACKER", f"Lore index update failed: {e}")
    return current_lore
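The merge step above is a plain `dict.update`, so re-extracted entries overwrite older descriptions while untouched entries persist (values below are illustrative):

```python
# Existing index plus a fresh extraction from the latest chapter.
current_lore = {"The Thornwood Inn": "Old description."}
new_entries = {
    "The Thornwood Inn": "A low-beamed roadside inn, smoky and crowded.",  # overwritten
    "The Sunstone Amulet": "A warm amber stone set in tarnished silver.",  # added
}
current_lore.update(new_entries)
# current_lore now holds 2 entries; the Inn keeps only the newest description.
```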

def harvest_metadata(bp, folder, full_manuscript):
    utils.log("HARVESTER", "Scanning for new characters...")
    full_text = "\n".join([c.get('content', '') for c in full_manuscript])[:500000]

story/state.py (new file, 94 lines)

@@ -0,0 +1,94 @@
import json
import os
from core import utils
from ai import models as ai_models

def _empty_state():
    return {"active_threads": [], "immediate_handoff": "", "resolved_threads": [], "chapter": 0}

def load_story_state(folder):
    """Load structured story state from story_state.json, or return empty state."""
    path = os.path.join(folder, "story_state.json")
    if os.path.exists(path):
        return utils.load_json(path) or _empty_state()
    return _empty_state()

def update_story_state(chapter_text, chapter_num, current_state, folder):
    """Use model_logic to extract structured story threads from the new chapter
    and save the updated state to story_state.json. Returns the new state."""
    utils.log("STATE", f"Updating story state after Ch {chapter_num}...")
    prompt = f"""
ROLE: Story State Tracker
TASK: Update the structured story state based on the new chapter.
CURRENT_STATE:
{json.dumps(current_state)}
NEW_CHAPTER (Chapter {chapter_num}):
{utils.truncate_to_tokens(chapter_text, 4000)}
INSTRUCTIONS:
1. ACTIVE_THREADS: 2-5 concise strings, each describing what a key character is currently trying to achieve.
   - Carry forward unresolved threads from CURRENT_STATE.
   - Add new threads introduced in this chapter.
   - Remove threads that are now resolved.
2. IMMEDIATE_HANDOFF: Write exactly 3 sentences describing how this chapter ended:
   - Sentence 1: Where are the key characters physically right now?
   - Sentence 2: What emotional state are they in at the very end of this chapter?
   - Sentence 3: What immediate unresolved threat, question, or decision is hanging in the air?
3. RESOLVED_THREADS: Carry forward from CURRENT_STATE + add threads explicitly resolved in this chapter.
OUTPUT_FORMAT (JSON):
{{
  "active_threads": ["Thread 1", "Thread 2"],
  "immediate_handoff": "Sentence 1. Sentence 2. Sentence 3.",
  "resolved_threads": ["Resolved thread 1"],
  "chapter": {chapter_num}
}}
"""
    try:
        response = ai_models.model_logic.generate_content(prompt)
        utils.log_usage(folder, ai_models.model_logic.name, response.usage_metadata)
        new_state = json.loads(utils.clean_json(response.text))
        new_state['chapter'] = chapter_num
        path = os.path.join(folder, "story_state.json")
        with open(path, 'w') as f:
            json.dump(new_state, f, indent=2)
        utils.log("STATE", f" -> Story state saved. Active threads: {len(new_state.get('active_threads', []))}")
        return new_state
    except Exception as e:
        utils.log("STATE", f" -> Story state update failed: {e}. Keeping previous state.")
        return current_state

def format_for_prompt(state, chapter_beats=None):
    """Format the story state into a prompt-ready string.
    Active threads and immediate handoff are always included.
    Resolved threads are only included if referenced in the chapter's beats."""
    if not state or (not state.get('immediate_handoff') and not state.get('active_threads')):
        return None
    beats_text = " ".join(str(b) for b in (chapter_beats or [])).lower()
    lines = []
    if state.get('immediate_handoff'):
        lines.append(f"IMMEDIATE STORY HANDOFF (exactly how the previous chapter ended):\n{state['immediate_handoff']}")
    if state.get('active_threads'):
        lines.append("ACTIVE PLOT THREADS:")
        for t in state['active_threads']:
            lines.append(f"  - {t}")
    relevant_resolved = [
        t for t in state.get('resolved_threads', [])
        if any(w in beats_text for w in t.lower().split() if len(w) > 4)
    ]
    if relevant_resolved:
        lines.append("RESOLVED THREADS (context only — do not re-introduce):")
        for t in relevant_resolved:
            lines.append(f"  - {t}")
    return "\n".join(lines)
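The resolved-thread gating in `format_for_prompt` can be exercised in isolation; this re-implements just that list comprehension so the snippet is self-contained (sample threads are illustrative):

```python
def relevant_resolved(resolved_threads, chapter_beats):
    # A resolved thread is surfaced only if one of its longer words
    # (more than 4 characters) appears in the upcoming chapter's beats.
    beats_text = " ".join(str(b) for b in (chapter_beats or [])).lower()
    return [
        t for t in resolved_threads
        if any(w in beats_text for w in t.lower().split() if len(w) > 4)
    ]

beats = ["Elara returns to the tavern at dawn"]
resolved = ["The tavern debt is paid", "The bridge was rebuilt"]
# Only the tavern thread shares a long word with the beats; the bridge
# thread stays hidden, keeping the prompt focused.
```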


@@ -189,6 +189,22 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None,
    if items:
        char_visuals += f" * Held Items: {', '.join(items)}\n"
# Build lore block: pull only locations/items relevant to this chapter
lore_block = ""
if tracking and tracking.get('lore'):
    chapter_locations = chap.get('locations', [])
    chapter_items = chap.get('key_items', [])
    lore = tracking['lore']
    relevant_lore = {
        name: desc for name, desc in lore.items()
        if any(name.lower() in ref.lower() or ref.lower() in name.lower()
               for ref in chapter_locations + chapter_items)
    }
    if relevant_lore:
        lore_block = "\nLORE_CONTEXT (Canonical descriptions for this chapter — use these exactly):\n"
        for name, desc in relevant_lore.items():
            lore_block += f"- {name}: {desc}\n"
style_block = "\n".join([f"- {k.replace('_', ' ').title()}: {v}" for k, v in style.items() if isinstance(v, (str, int, float))])
if 'tropes' in style and isinstance(style['tropes'], list):
    style_block += f"\n- Tropes: {', '.join(style['tropes'])}"
@@ -282,6 +298,7 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None,
{prev_context_block}
- CHARACTERS: {json.dumps(chars_for_writer)}
{char_visuals}
{lore_block}
- SCENE_BEATS: {json.dumps(chap['beats'])}
{treatment_block}


@@ -27,7 +27,15 @@
    </ul>
</div>
<div class="card-footer bg-light">
    <small class="text-muted mb-3 d-block">Tip: Use the "Read &amp; Edit" feature to fix issues manually, or use the form below to queue a full AI book revision.</small>
    <form action="{{ url_for('run.revise_book', run_id=run.id, book_folder=book_folder) }}" method="POST" onsubmit="return confirm('This will start a new run to regenerate this book with your instruction applied. Continue?');">
        <div class="input-group">
            <input type="text" name="instruction" class="form-control" placeholder="e.g. Fix the timeline contradictions in the middle chapters" required>
            <button type="submit" class="btn btn-warning">
                <i class="fas fa-sync-alt me-2"></i>Redo Book
            </button>
        </div>
    </form>
</div>
</div>
</div>


@@ -8,6 +8,11 @@
</div>
<div class="col-md-4 text-end">
    <a href="{{ url_for('project.index') }}" class="btn btn-outline-secondary me-2">Back to Dashboard</a>
    <button id="styleBtn" class="btn btn-outline-info me-2" onclick="refreshStyleGuidelines()">
        <span id="styleIcon"><i class="fas fa-filter me-2"></i></span>
        <span id="styleSpinner" class="spinner-border spinner-border-sm me-2 d-none" role="status"></span>
        <span id="styleLabel">Refresh Style Rules</span>
    </button>
    <button id="refreshBtn" class="btn btn-primary" onclick="refreshModels()">
        <span id="refreshIcon"><i class="fas fa-sync me-2"></i></span>
        <span id="refreshSpinner" class="spinner-border spinner-border-sm me-2 d-none" role="status"></span>
@@ -226,6 +231,34 @@ async function refreshModels() {
    }
}

async function refreshStyleGuidelines() {
    const btn = document.getElementById('styleBtn');
    const icon = document.getElementById('styleIcon');
    const spinner = document.getElementById('styleSpinner');
    const label = document.getElementById('styleLabel');
    btn.disabled = true;
    icon.classList.add('d-none');
    spinner.classList.remove('d-none');
    label.textContent = 'Updating...';
    try {
        const resp = await fetch("{{ url_for('admin.refresh_style_guidelines_route') }}", {
            method: 'POST',
            headers: { 'X-Requested-With': 'XMLHttpRequest' }
        });
        const data = await resp.json();
        showToast(data.message, resp.ok ? 'bg-success text-white' : 'bg-danger text-white');
    } catch (err) {
        showToast('Request failed: ' + err.message, 'bg-danger text-white');
    } finally {
        btn.disabled = false;
        icon.classList.remove('d-none');
        spinner.classList.add('d-none');
        label.textContent = 'Refresh Style Rules';
    }
}

function showToast(message, classes) {
    const toast = document.getElementById('refreshToast');
    const body = document.getElementById('toastBody');


@@ -213,6 +213,27 @@ def optimize_models():
    return redirect(request.referrer or url_for('project.index'))

@admin_bp.route('/admin/refresh-style-guidelines', methods=['POST'])
@login_required
@admin_required
def refresh_style_guidelines_route():
    is_ajax = request.headers.get('X-Requested-With') == 'XMLHttpRequest'
    try:
        if not ai_models.model_logic:
            raise Exception("No AI model available. Run 'Refresh & Optimize' first.")
        new_data = style_persona.refresh_style_guidelines(ai_models.model_logic)
        msg = f"Style Guidelines updated — {len(new_data.get('ai_isms', []))} AI-isms, {len(new_data.get('filter_words', []))} filter words."
        utils.log("SYSTEM", msg)
        if is_ajax:
            return jsonify({'status': 'ok', 'message': msg})
        flash(msg)
    except Exception as e:
        if is_ajax:
            return jsonify({'status': 'error', 'message': str(e)}), 500
        flash(f"Error refreshing style guidelines: {e}")
    return redirect(request.referrer or url_for('admin.system_status'))

@admin_bp.route('/system/status')
@login_required
def system_status():


@@ -1,6 +1,7 @@
import os
import json
import markdown
from datetime import datetime
from flask import Blueprint, render_template, request, redirect, url_for, flash, session, send_from_directory
from flask_login import login_required, current_user
from web.db import db, Run, LogEntry
@@ -315,6 +316,39 @@ def get_task_status(task_id):
    return {"status": "completed", "success": task_result}

@run_bp.route('/project/<int:run_id>/revise_book/<string:book_folder>', methods=['POST'])
@login_required
def revise_book(run_id, book_folder):
    run = db.session.get(Run, run_id) or Run.query.get_or_404(run_id)
    if run.project.user_id != current_user.id:
        flash("Unauthorized.")
        return redirect(url_for('run.view_run', id=run_id))
    if run.status == 'running':
        flash("A run is already active. Please wait for it to finish.")
        return redirect(url_for('run.view_run', id=run_id))
    instruction = request.form.get('instruction', '').strip()
    if not instruction:
        flash("Please provide an instruction describing what to fix.")
        return redirect(url_for('run.check_consistency', run_id=run_id, book_folder=book_folder))
    bible_path = os.path.join(run.project.folder_path, "bible.json")
    if not os.path.exists(bible_path):
        flash("Bible file not found. Cannot start revision.")
        return redirect(url_for('run.view_run', id=run_id))
    new_run = Run(project_id=run.project_id, status='queued', start_time=datetime.utcnow())
    db.session.add(new_run)
    db.session.commit()
    from web.tasks import generate_book_task
    generate_book_task(new_run.id, run.project.folder_path, bible_path, feedback=instruction, source_run_id=run.id)
    flash(f"Book revision queued. Instruction: '{instruction[:80]}...' — a new run has been started.")
    return redirect(url_for('run.view_run', id=new_run.id))

@run_bp.route('/project/<int:run_id>/regenerate_artifacts', methods=['POST'])
@login_required
def regenerate_artifacts(run_id):