- prompt-builder.js: add RECALL_TARGET_CONTENT_HEADER; update splitSectionedTranscriptPayloadMessage to recognize the recall-specific target header
- retriever.js: add buildRecallSectionedTranscript helper; format recentMessages as a sectioned transcript with context-review and recall-target headers for prompt building, while keeping the flat string[] view for ranking
- p0-regressions.mjs: add testRecallUsesSectionedPromptMessagesForContextAndTarget, a regression test asserting that two system messages carry the correct transcriptSection and headers
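The retriever change above can be sketched as follows. This is a minimal illustration, not the real implementation: the header strings, the message shape, and the targetIndex parameter are assumptions.

```javascript
// Hypothetical sketch of buildRecallSectionedTranscript from retriever.js.
// Header text and message shape ({ role, content }) are assumptions.
const CONTEXT_REVIEW_HEADER = '[Context for review -- not the recall target]';
const RECALL_TARGET_CONTENT_HEADER = '[Recall target]';

function buildRecallSectionedTranscript(recentMessages, targetIndex) {
  const context = [];
  const target = [];
  recentMessages.forEach((msg, i) => {
    const line = `${msg.role}: ${msg.content}`;
    // Split messages into the review section and the recall target section.
    (i === targetIndex ? target : context).push(line);
  });
  return [
    CONTEXT_REVIEW_HEADER,
    ...context,
    RECALL_TARGET_CONTENT_HEADER,
    ...target,
  ].join('\n');
}

// Ranking keeps working on the flat string[] view of the same messages.
function flatTranscript(recentMessages) {
  return recentMessages.map((m) => `${m.role}: ${m.content}`);
}
```

The point of the split is that prompt building gets sectioned text while ranking continues to consume the unannotated flat array.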
Root cause: formatPromptMessageTranscript in prompt-builder.js ignored
isContextOnly, so the context-review and extraction-target sections were
flattened into a plain transcript even though the flag was correctly set
in the intermediate layers. Additionally, userPromptSections (which
contained the dividers) was only a fallback that never reached the
final prompt when block-based profiles had user blocks.
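The failure mode can be shown with a minimal sketch (the message shape and names are assumptions, not the real code): the flag is present on the messages, but a formatter that never consults it renders context-only and target messages identically.

```javascript
// Illustration of the bug: isContextOnly is set, but the formatter
// below never reads it, so both sections collapse into one transcript.
const messages = [
  { role: 'user', content: 'earlier turn', isContextOnly: true },
  { role: 'user', content: 'extract from this turn', isContextOnly: false },
];

// Buggy behavior: every message is rendered the same way, with no
// divider between the context section and the extraction target.
function flattenTranscript(msgs) {
  return msgs.map((m) => `${m.role}: ${m.content}`).join('\n');
}
```
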
Fix:
- getPromptMessageLikeDescriptor now preserves the isContextOnly flag
- formatPromptMessageTranscript now inserts context/target section
  dividers when messages carry isContextOnly, ensuring the final
  LLM prompt always shows the distinction regardless of which
  rendering path (recentMessages, chatMessages, dialogueText) is used
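The fixed formatter can be sketched like this, assuming divider text and message shape that are illustrative rather than taken from the real code: a divider is emitted whenever the isContextOnly flag changes between consecutive messages.

```javascript
// Hypothetical sketch of the fixed formatPromptMessageTranscript.
// Divider strings are assumptions; the key idea is emitting a divider
// on every transition of the isContextOnly flag.
const CONTEXT_DIVIDER = '--- Context (review only) ---';
const TARGET_DIVIDER = '--- Extraction target ---';

function formatPromptMessageTranscript(msgs) {
  const lines = [];
  let prevContextOnly = null;
  for (const m of msgs) {
    const contextOnly = Boolean(m.isContextOnly);
    if (contextOnly !== prevContextOnly) {
      // Flag changed (or first message): open the matching section.
      lines.push(contextOnly ? CONTEXT_DIVIDER : TARGET_DIVIDER);
      prevContextOnly = contextOnly;
    }
    lines.push(`${m.role}: ${m.content}`);
  }
  return lines.join('\n');
}
```

Because the dividers are produced by the transcript formatter itself, every rendering path that goes through it shows the section split.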
Regression tests:
- prompt-builder-mixed-transcript: verifies that the recentMessages block
  content includes the context-review and extraction-target dividers
- extractor-phase3-layered-context: end-to-end test proving that the
  default extract profile with the default structured mode produces final
  promptMessages containing the context/target section dividers