Ivan's Inferences #1
Secret Messages for LLMs
Modern LLM-powered applications increasingly rely on long, stateful conversations. But there’s an inherent tension in these systems:
- The model needs rich metadata, instructions, and workflow context.
- The user wants a clean, natural, uncluttered message.
A powerful pattern that resolves this tension is the use of hidden context messages—messages that are part of the model’s conversation history, but are not shown to the user. This allows the system to give the model operational guidance, without polluting the user interface or exposing internal implementation details.
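Before going further, it helps to picture the shape of the data. Each generated message effectively has two representations: one for rendering, one for the model. A minimal sketch of that shape (the field names here are illustrative, not from any particular framework):

// Illustrative only: one logical message, two representations.
interface StoredMessage {
  // Rendered in Slack, email, or the chat UI.
  userVisibleText: string;
  // Replayed to the model as conversation history: the visible
  // text plus an appended block of operational instructions.
  fullContent: string;
}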
We’ve found this technique especially effective for workflows like meeting preparation, daily digests, recurring reports, and any scenario where the model must understand why a message was generated and how to respond when the user asks for changes.
Why Hidden Context Matters
Most LLM integrations treat conversation history as a literal transcript: whatever the user sees is exactly what the model sees. But this creates friction in more structured workflows. The model needs to know:
- What type of message it just produced
- How to interpret future user replies
- Which tools should be invoked depending on user intent
- What constraints govern edits, corrections, or disabling a workflow
If you expose this information directly to the user, the experience becomes noisy and confusing. If you hide it entirely from the model, the behavior becomes inconsistent and unreliable.
Hidden context gives you the best of both worlds:
- Users see clean, natural messages
- Models see fully annotated messages
- Developers get predictable, controllable behavior
A Concrete Example: Meeting Prep Messages
We have a system that generates automated meeting prep messages. From the user's perspective, each one looks like a simple Slack message:
“Here’s your prep for tomorrow’s customer sync…”
But internally, the model needs to understand that:
- This message was generated by a meeting prep workflow
- Users may want to edit or turn off meeting prep workflows
- Specific tools should be used to adjust meeting prep behavior
To accomplish this, we send the user-facing text to the user, but store a hidden-context version of the message in the conversation history for the model.
Extracting the Hidden-Context Builder
Rather than constructing hidden context inline each time, we define a simple helper that appends internal metadata to the user-visible text:
// Append a hidden instruction block that only the model will see.
function buildMessageWithHiddenContent(text: string): string {
  return `${text}

## Agent Instructions
This message was generated by a meeting preparation run.
The user can configure how these meeting preps are sent using the following tools:
- updateMeetingPrepInstructionsTool
- updateMeetingPrepEnabledTool
- updateMeetingPrepFilterTool
- ...`;
}

This structured annotation is never shown to the user, but it is always included when the model is given conversation history. It tells the model exactly how to behave when the user replies with something like:
- “Can you disable these for internal meetings?”
- “Can you make these shorter?”
- “I want to change how these are generated.”
Instead of guessing, the model now knows precisely which tool to invoke.
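What happens after the model picks a tool depends on your tool-calling layer, but the dispatch step can stay simple. Here is a hedged sketch, reusing the tool names from the annotation above; meetingPrepService and the handler signature are hypothetical stand-ins for your own services:

// Stand-in for your own settings service (hypothetical).
declare const meetingPrepService: {
  setEnabled(on: boolean): Promise<void>;
  setInstructions(text: string): Promise<void>;
  setFilter(filter: string): Promise<void>;
};

// Route the model's tool call to a handler. The tool names
// match the hidden-context annotation above.
async function handleToolCall(name: string, args: Record<string, unknown>) {
  switch (name) {
    case "updateMeetingPrepEnabledTool":
      return meetingPrepService.setEnabled(Boolean(args.enabled));
    case "updateMeetingPrepInstructionsTool":
      return meetingPrepService.setInstructions(String(args.instructions));
    case "updateMeetingPrepFilterTool":
      return meetingPrepService.setFilter(String(args.filter));
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
}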
Using Hidden Context in Your Message Pipeline
Once you have the helper, the message workflow becomes straightforward:
const userVisibleText = text;
const fullContent = buildMessageWithHiddenContent(userVisibleText);

// 1. Send only the clean text to the user
await client.chat.postMessage({
  channel: channel_id,
  text: format(userVisibleText),
});

// 2. Store the hidden-context version separately for the model
await messageService.createMessage({ content: fullContent });
await contextService.storeInteraction({ content: fullContent });

That's it.
- Users get a crisp, human-friendly Slack message
- The model gets a richly annotated version
- Future model calls behave much more consistently
This separation of presentation (what users see) and semantics (what the model sees) is the foundation of the hidden-context technique.
Why This Technique Works
LLMs reason best when contextual information is attached directly to the relevant message. By storing the hidden instructions with the message itself, the model can reliably interpret user follow-ups without relying on brittle system prompts or complicated orchestration code.
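In practice, that means the history you replay to the model is built from the annotated versions, not the rendered ones. A sketch, assuming a hypothetical messageService.listMessages that returns the stored records:

// `messageService.listMessages` is a hypothetical storage call;
// substitute your own persistence layer.
declare const messageService: {
  listMessages(
    conversationId: string,
  ): Promise<{ role: "user" | "assistant"; content: string }[]>;
};

async function buildModelHistory(conversationId: string) {
  const stored = await messageService.listMessages(conversationId);
  // `content` is fullContent for assistant messages, so the hidden
  // instructions travel with the message they describe.
  return stored.map((m) => ({ role: m.role, content: m.content }));
}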
Compared to alternative patterns:
Versus storing metadata in a separate data structure
You end up writing glue code to manually reassemble conversational context. Hidden context keeps everything together.
Versus exposing workflow metadata to the user
This creates noise in the UX and leaks internal mechanics.
Versus relying on system prompts alone
System prompts are global, but workflows are local. Hidden context allows message-level specificity.
Generalizing the Pattern
Although this example focuses on meeting prep, the same approach works wherever you need the model to remember what kind of message it previously generated:
- Daily digests
- Follow-up recommendations
- Automated summaries
- Reminders
- Notifications
- Draft documents the user may revise
- Internal or external reports
Any time the model should treat a message differently based on its origin or intent, add a hidden context block.
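A generalized builder is a natural next step. This sketch parameterizes the workflow name and its tools rather than hard-coding meeting prep; the function name, signature, and example tool names are ours to invent, not the code shown earlier:

// Sketch of a generalized builder; name and signature are illustrative.
function buildMessageWithHiddenContext(
  text: string,
  workflowName: string,
  tools: string[],
): string {
  return `${text}

## Agent Instructions
This message was generated by a ${workflowName} run.
The user can configure this workflow using the following tools:
${tools.map((t) => `- ${t}`).join("\n")}`;
}

// e.g. buildMessageWithHiddenContext(digestText, "daily digest",
//        ["updateDigestScheduleTool", "updateDigestEnabledTool"]);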
The Big Idea
Hidden context transforms an LLM conversation from an unstructured transcript into a reliable, semi-structured workflow engine.
- The user sees clean conversational output
- The model sees a richly annotated state machine
- The developer gets deterministic, tool-driven behavior
It's one of the highest-leverage architectural patterns we've adopted—small in code, huge in impact.
