Blueprint v1.0.4: Implemented AI Context Optimization & Token Management

- core/utils.py: Added estimate_tokens(), truncate_to_tokens(), get_ai_cache(), set_ai_cache(), make_cache_key() utilities
- story/writer.py: Applied truncate_to_tokens() to prev_content (2000 tokens) and prev_sum (600 tokens) context injections
- story/editor.py: Applied truncate_to_tokens() to summary (1000 tokens), last_chapter_text (800 tokens), eval text (7500 tokens), and propagation contexts (2500/3000 tokens)
- web/routes/persona.py: Added MD5-keyed in-memory cache for persona analyze endpoint; truncated sample_text to 750 tokens
- ai/models.py: Added pre-dispatch payload size estimation with 30k-token warning threshold
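The commit does not show the bodies of the core/utils.py helpers. A minimal sketch of what they plausibly look like, assuming the common "1 token ≈ 4 characters" heuristic rather than a real tokenizer, and a plain module-level dict for the in-memory cache (all names below except the five listed in the bullet above are illustrative):

```python
import hashlib

# In-memory AI response cache (illustrative; the commit only names the
# get/set helpers, not the backing store).
_AI_CACHE: dict[str, str] = {}

def estimate_tokens(text: str) -> int:
    """Rough token count, assuming ~4 characters per token."""
    return max(1, len(text) // 4)

def truncate_to_tokens(text: str, max_tokens: int) -> str:
    """Trim `text` to fit within ~max_tokens, keeping the tail.

    Keeping the tail (not the head) matches the previous behavior in
    writer.py, which sliced prev_content[-3000:] to preserve continuity
    with the most recent prose.
    """
    max_chars = max_tokens * 4
    return text[-max_chars:] if len(text) > max_chars else text

def make_cache_key(*parts: str) -> str:
    """MD5 digest over the joined inputs, as used to key the persona cache."""
    return hashlib.md5("\x00".join(parts).encode("utf-8")).hexdigest()

def get_ai_cache(key: str):
    return _AI_CACHE.get(key)

def set_ai_cache(key: str, value: str) -> None:
    _AI_CACHE[key] = value
```

With this heuristic, the 2000-token cap on prev_content corresponds to roughly the last 8000 characters of the previous chapter.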

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-20 23:30:39 -05:00
parent f04a241936
commit db70ad81f7
6 changed files with 79 additions and 9 deletions

@@ -71,7 +71,7 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None,
     prev_context_block = ""
     if prev_content:
-        trunc_content = prev_content[-3000:] if len(prev_content) > 3000 else prev_content
+        trunc_content = utils.truncate_to_tokens(prev_content, 2000)
         prev_context_block = f"\nPREVIOUS CHAPTER TEXT (For Tone & Continuity):\n{trunc_content}\n"
     chars_for_writer = [
@@ -238,7 +238,7 @@ def write_chapter(chap, bp, folder, prev_sum, tracking=None, prev_content=None,
 HARD_CONSTRAINTS:
 - TARGET_WORDS: ~{est_words} words (aim for this; ±20% is acceptable if the scene genuinely demands it — but do not condense beats to save space)
 - BEATS MUST BE COVERED: {json.dumps(chap.get('beats', []))}
-- SUMMARY CONTEXT: {prev_sum[:1500]}
+- SUMMARY CONTEXT: {utils.truncate_to_tokens(prev_sum, 600)}
 AUTHOR_VOICE:
 {persona_info}
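The persona.py change (MD5-keyed cache plus 750-token sample truncation) is not shown in the diff above. A hypothetical sketch of the pattern, with illustrative function names standing in for the actual route code:

```python
import hashlib

# Illustrative module-level cache; the real endpoint lives in
# web/routes/persona.py and its internals are not shown in this commit.
_persona_cache: dict[str, dict] = {}

def analyze_persona(sample_text: str) -> dict:
    # Truncate before keying (commit caps the sample at 750 tokens,
    # roughly 3000 characters under a 4-chars-per-token heuristic), so
    # long inputs sharing the same tail hit the same cache entry.
    sample_text = sample_text[-3000:]
    key = hashlib.md5(sample_text.encode("utf-8")).hexdigest()
    if key in _persona_cache:
        return _persona_cache[key]
    result = _run_ai_analysis(sample_text)  # stand-in for the AI dispatch
    _persona_cache[key] = result
    return result

def _run_ai_analysis(text: str) -> dict:
    # Placeholder so the sketch runs without a model; the real endpoint
    # would call into ai/models.py here.
    return {"length": len(text)}
```

Truncating before hashing is the design point: it keeps the cache key stable for oversized inputs and avoids paying for the AI call twice on effectively identical samples.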