Root cause: formatPromptMessageTranscript in prompt-builder.js ignored
isContextOnly, so the context review and extraction target sections were
flattened into a plain transcript even though the flag was correctly set
in the intermediate layers. Additionally, userPromptSections (which
contained the dividers) was only a fallback and never reached the
final prompt when block-based profiles had user blocks.
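A minimal sketch of the failure mode, assuming a simplified message shape; the function body here is illustrative, not the project's actual implementation:

```javascript
// Hypothetical reproduction: this formatter flattens every message into
// "role: content" lines, silently dropping the isContextOnly flag that
// upstream layers set, so context and target turns look identical.
function formatPromptMessageTranscriptBuggy(messages) {
  return messages.map((m) => `${m.role}: ${m.content}`).join("\n");
}

const messages = [
  { role: "user", content: "earlier turn", isContextOnly: true },
  { role: "user", content: "latest turn", isContextOnly: false },
];

// Prints:
// user: earlier turn
// user: latest turn
console.log(formatPromptMessageTranscriptBuggy(messages));
```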
Fix:
- getPromptMessageLikeDescriptor now preserves the isContextOnly flag
- formatPromptMessageTranscript now inserts context/target section
  dividers when messages carry isContextOnly, ensuring the final
  LLM prompt always shows the distinction regardless of which
  rendering path (recentMessages, chatMessages, or dialogueText) is used
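The divider-insertion fix can be sketched as follows; the divider strings and exact loop structure are assumptions for illustration, not the project's actual code:

```javascript
// Hypothetical sketch: emit a section divider whenever the isContextOnly
// flag changes between adjacent messages, so the final transcript always
// separates context-review turns from the extraction target.
const CONTEXT_DIVIDER = "--- Context (review only) ---";
const TARGET_DIVIDER = "--- Extraction target ---";

function formatPromptMessageTranscript(messages) {
  const lines = [];
  let prevContextOnly = null; // null forces a divider before the first turn
  for (const m of messages) {
    const isContext = Boolean(m.isContextOnly);
    if (isContext !== prevContextOnly) {
      lines.push(isContext ? CONTEXT_DIVIDER : TARGET_DIVIDER);
      prevContextOnly = isContext;
    }
    lines.push(`${m.role}: ${m.content}`);
  }
  return lines.join("\n");
}

const transcript = formatPromptMessageTranscript([
  { role: "user", content: "earlier turn", isContextOnly: true },
  { role: "user", content: "latest turn" },
]);
// transcript opens with the context divider and switches to the target
// divider before the latest turn.
console.log(transcript);
```

Keying the divider on flag *transitions* rather than per message means a run of context-only turns gets a single header, matching the section-style layout the userPromptSections fallback produced.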
Regression tests:
- prompt-builder-mixed-transcript: verifies that the recentMessages block
  content includes the context review and extraction target dividers
- extractor-phase3-layered-context: an end-to-end test proving that the
  default extract profile plus the default structured mode produce final
  promptMessages with context/target section dividers