fix: Pipeline hardening — error handling, token efficiency, and robustness

core/utils.py:
- estimate_tokens: improved heuristic 4 chars/token → 3.5 chars/token (more accurate)
- truncate_to_tokens: added keep_head=True mode for head+tail truncation (better
  context retention for story summaries that need both opening and recent content)
- load_json: explicit exception handling (json.JSONDecodeError, OSError) with log
  instead of silent returns; added utf-8 encoding with error replacement
- log_image_attempt: replaced bare except with (json.JSONDecodeError, OSError);
  added utf-8 encoding to output write
- log_usage: replaced bare except with AttributeError for token count extraction
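
For reference, the new head+tail mode behaves like this self-contained sketch (it mirrors the truncate_to_tokens change shown in the diff below):

```python
def truncate_to_tokens(text, max_tokens, keep_head=False):
    """Approximate-token truncation at 3.5 chars/token (sketch of core/utils.py)."""
    if not text:
        return text
    max_chars = int(max_tokens * 3.5)
    if len(text) <= max_chars:
        return text
    if keep_head:
        # Keep the opening third plus the most recent two thirds.
        head_chars = max_chars // 3
        tail_chars = max_chars - head_chars
        return text[:head_chars] + "\n[...]\n" + text[-tail_chars:]
    return text[-max_chars:]
```

Tail mode suits rolling "story so far" context; head+tail mode suits summaries that need both the opening framing and the most recent events.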

story/bible_tracker.py:
- merge_selected_changes: wrapped all int() key casts (char idx, book num, beat idx)
  in try/except with meaningful log warning instead of crashing on malformed keys
- harvest_metadata: replaced bare except:pass with except Exception as e + log message
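
A minimal sketch of the safe key-cast pattern (the function name is hypothetical; the real code inlines this per key type):

```python
import logging

log = logging.getLogger("bible_tracker")

def merge_numeric_keys(raw):
    """Merge JSON-decoded dict entries whose keys cast cleanly to int.

    Malformed keys are logged and skipped instead of crashing the merge.
    """
    merged = {}
    for key, value in raw.items():
        try:
            merged[int(key)] = value
        except (TypeError, ValueError):
            log.warning("Skipping malformed key %r during merge", key)
    return merged
```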

cli/engine.py:
- Persona validation: added warning when all 3 attempts fail and substandard persona
  is accepted — flags elevated voice-drift risk for the run
- Lore index updates: throttled from every chapter to every 3 chapters; lore is
  stable after the first few chapters (~10% token saving per book)
- Mid-gen consistency check: now samples first 2 + last 8 chapters instead of passing
  full manuscript — caps token cost regardless of book length
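
The two token-saving checks can be sketched as follows (function names and the exact schedule are illustrative, not the engine's actual API):

```python
def should_refresh_lore(chapter_num, interval=3):
    """Refresh the lore index only every `interval` chapters.

    Lore stabilizes after the first few chapters, so per-chapter
    refreshes mostly resend unchanged context.
    """
    return chapter_num % interval == 0

def sample_chapters_for_check(chapters, head=2, tail=8):
    """Bounded sample for the mid-gen consistency check.

    Returns the first `head` plus last `tail` chapters; short books
    pass through whole so no chapter is duplicated.
    """
    if len(chapters) <= head + tail:
        return list(chapters)
    return chapters[:head] + chapters[-tail:]
```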

story/writer.py:
- Two-pass polish: added local filter-word density check (no API call); skips the
  Pro polish if density < 1 per 83 words — saves ~8K tokens on already-clean drafts
- Polish prompt: added prev_context_block for continuity — polished chapter now
  maintains seamless flow from the previous chapter's ending
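
The local density gate might look like this (the filter-word list here is a small illustrative subset; only the 1-per-83-words threshold comes from the change itself):

```python
FILTER_WORDS = {"just", "really", "very", "suddenly", "quite", "somewhat"}

def filter_word_density(text):
    """Fraction of words that are filter words; 0.0 for empty text."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip('.,!?;:"\'') in FILTER_WORDS)
    return hits / len(words)

def needs_pro_polish(text, threshold=1 / 83):
    # Purely local check, no API call: skip the polish pass on clean drafts.
    return filter_word_density(text) >= threshold
```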

marketing/fonts.py:
- Separated requests.exceptions.Timeout with specific log message vs generic failure
- Added explicit log message when Roboto fallback also fails (returns None)

marketing/blurb.py:
- Added word count trim: blurbs > 220 words trimmed to last sentence within 220 words
- Changed bare except to except Exception as e with log message
- Added utf-8 encoding to file writes; logs final word count
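
The trim logic can be sketched like so (a sketch, assuming sentences end with ., !, or ?):

```python
import re

def trim_blurb(blurb, max_words=220):
    """Trim a blurb to the last complete sentence within max_words."""
    words = blurb.split()
    if len(words) <= max_words:
        return blurb
    clipped = " ".join(words[:max_words])
    # Cut back to the last sentence boundary; fall back to the hard
    # word cut if the clipped text contains no terminal punctuation.
    match = re.search(r"(?s)^.*[.!?]", clipped)
    return match.group(0) if match else clipped
```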

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-22 22:31:22 -05:00
parent 3a42d1a339
commit ff5093a5f9
6 changed files with 106 additions and 36 deletions


@@ -23,18 +23,27 @@ PRICING_CACHE = {}
 # --- Token Estimation & Truncation Utilities ---
 def estimate_tokens(text):
-    """Estimate token count using a 4-chars-per-token heuristic (no external libs required)."""
+    """Estimate token count using a 3.5-chars-per-token heuristic (more accurate than /4)."""
     if not text:
         return 0
-    return max(1, len(text) // 4)
+    return max(1, int(len(text) / 3.5))

-def truncate_to_tokens(text, max_tokens):
-    """Truncate text to approximately max_tokens, keeping the most recent (tail) content."""
+def truncate_to_tokens(text, max_tokens, keep_head=False):
+    """Truncate text to approximately max_tokens.
+
+    keep_head=False (default): keep the most recent (tail) content — good for 'story so far'.
+    keep_head=True: keep first third + last two thirds — good for context that needs both
+    the opening framing and the most recent events.
+    """
     if not text:
         return text
-    max_chars = max_tokens * 4
+    max_chars = int(max_tokens * 3.5)
     if len(text) <= max_chars:
         return text
+    if keep_head:
+        head_chars = max_chars // 3
+        tail_chars = max_chars - head_chars
+        return text[:head_chars] + "\n[...]\n" + text[-tail_chars:]
     return text[-max_chars:]

 # --- In-Memory AI Response Cache ---
@@ -126,7 +135,14 @@ def log(phase, msg):
     except: pass

 def load_json(path):
-    return json.load(open(path, 'r')) if os.path.exists(path) else None
+    if not os.path.exists(path):
+        return None
+    try:
+        with open(path, 'r', encoding='utf-8', errors='replace') as f:
+            return json.load(f)
+    except (json.JSONDecodeError, OSError, ValueError) as e:
+        log("SYSTEM", f"⚠️ Failed to load JSON from {path}: {e}")
+        return None

 def create_default_personas():
     # Persona data is now stored in the Persona DB table; ensure the directory exists for sample files.
@@ -153,11 +169,13 @@ def log_image_attempt(folder, img_type, prompt, filename, status, error=None, sc
     data = []
     if os.path.exists(log_path):
         try:
-            with open(log_path, 'r') as f: data = json.load(f)
-        except:
-            pass
+            with open(log_path, 'r', encoding='utf-8') as f:
+                data = json.load(f)
+        except (json.JSONDecodeError, OSError):
+            data = []  # Corrupted log — start fresh rather than crash
     data.append(entry)
-    with open(log_path, 'w') as f: json.dump(data, f, indent=2)
+    with open(log_path, 'w', encoding='utf-8') as f:
+        json.dump(data, f, indent=2)

 def get_run_folder(base_name):
     if not os.path.exists(base_name): os.makedirs(base_name)
@@ -218,9 +236,10 @@ def log_usage(folder, model_label, usage_metadata=None, image_count=0):
     if usage_metadata:
         try:
-            input_tokens = usage_metadata.prompt_token_count
-            output_tokens = usage_metadata.candidates_token_count
-        except: pass
+            input_tokens = usage_metadata.prompt_token_count or 0
+            output_tokens = usage_metadata.candidates_token_count or 0
+        except AttributeError:
+            pass  # usage_metadata shape varies by model; tokens stay 0
     cost = calculate_cost(model_label, input_tokens, output_tokens, image_count)