AI assistant integration (MCP)
How Overshow will expose your captured history to AI assistants like Claude, Cursor, and local LLMs through the Model Context Protocol, with privacy-first defaults.
Last updated: 18 April 2026
Availability. Overshow MCP integration is not enabled by default in current builds. This page describes how the integration is designed to work so interested users can decide whether it fits their workflow. Exact commands, binaries, and client support will be confirmed when MCP ships.
What MCP is, in one paragraph
Model Context Protocol (MCP) is an open standard for connecting AI assistants to external tools and data. An MCP client (your assistant) speaks a small JSON-RPC protocol to an MCP server (Overshow), and the server exposes a list of tools the assistant can call, with typed inputs and outputs. Think of it as a well-defined bridge between a chat interface and a data source, replacing ad-hoc copy-paste or custom plugins with a consistent, auditable contract.
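Concretely, the bridge is ordinary JSON-RPC 2.0. Below is a minimal sketch in Python of the message a client sends to call a tool; the `tools/call` method name comes from the MCP specification, while the tool name and filter arguments are illustrative, taken from the tool list later on this page:

```python
import json

# A minimal sketch of the JSON-RPC 2.0 message an MCP client sends to call
# a tool. The method name follows the MCP spec ("tools/call"); the tool name
# and arguments are illustrative, based on the tool list on this page.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search-content",
        "arguments": {"query": "quarterly roadmap", "app": "Slack"},
    },
}

# Over a stdio transport, each message travels as a single line of JSON.
wire = json.dumps(request)
print(wire)
```

The server's response is a JSON-RPC result carrying the tool's typed output, which the assistant can then quote and cite back to you.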
For Overshow, MCP means your assistant can search your captures, answer questions grounded in your own history, review meetings, and inspect the capture pipeline, all without Overshow itself needing a chat UI for that particular assistant.
Compared with pasting large raw exports into a chat, tools return bounded, filterable snippets (time range, app, speaker, meeting, and so on), so assistants spend fewer tokens on noise and more on answering your question.
What MCP lets you do
| You want to | MCP lets your assistant |
|---|---|
| Recall what you were working on last Tuesday | Query Overshow search with time and app filters, surface the relevant captures |
| Get a meeting recap without opening the desktop app | List recent meetings, fetch the encrypted summary, return it inline |
| Ask “has anyone answered this before?” | Look up detected questions with candidate answers from your recorded history |
| Pull context into an IDE session | Cursor or a local assistant fetches recent captures matching a topic and cites sources |
| Audit the pipeline | Check capture status, worker queues, and recent events to explain gaps or delays |
The shared thread: you stay in the assistant you already use, and Overshow becomes one of the grounded sources it can cite.
MCP is most useful when your assistant is allowed to cite the tool results back to you. You can then verify the underlying captures in the desktop app before acting on anything, just like with Ask.
Tools that may be exposed
The server exposes a fixed, documented set of tools. Assistants choose which to call; they cannot execute arbitrary code against your data. All tools operate on the local database through the Overshow server running on your machine.
| Category | Tool | What it does |
|---|---|---|
| Search | `search-content` | Keyword, semantic, or hybrid search across OCR, audio, UI snapshots, and documents |
| | `search-by-profile` | Search scoped to a specific person, organisation, or project |
| Questions | `find-questions` | Detected questions from screen and audio, with candidate answers retrieved from history |
| Meetings | `list-meetings` | Meetings within a date range or status filter |
| | `get-meeting-summary` | Decrypted summary for a specific meeting (stored encrypted at rest) |
| Action prompts | `run-action-prompt` | Run a structured AI action prompt over one or more meetings using Overshow's on-device LLM |
| Profiles | `list-profiles` | People, organisations, and projects inferred from speakers and calendar attendees |
| | `get-profile-detail` | Topics, knowledge facts, and interaction history for a profile |
| Pipeline | `pipeline-status` | Live snapshot of workers, queue depths, embedding coverage, last activity |
| | `recent-captures` | The most recent items that flowed through the pipeline |
| | `capture-item` | Full detail for a single captured item, including processing stage and errors |
| | `capture-status` | Whether screen and audio capture is currently running or paused |
| | `recent-events` | Snapshot of recent pipeline events from the in-memory bus |
Every tool is read-only with respect to your captures. No tool creates or deletes recordings, transcripts, or summaries, and `run-action-prompt` does not persist its LLM output to the database; it streams the answer back to the assistant and returns.
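To make "typed inputs" concrete, here is a hypothetical tool definition for `search-content`. The name-plus-JSON-Schema shape is how MCP servers advertise tools to clients; the specific filter fields below are assumptions based on the filters this page describes, not a confirmed schema:

```python
# Hypothetical MCP tool definition for search-content. MCP requires a tool
# name and an input schema (JSON Schema); the filter fields here are
# assumptions based on the filters described on this page.
search_content_tool = {
    "name": "search-content",
    "description": "Keyword, semantic, or hybrid search across captures",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "mode": {"type": "string", "enum": ["keyword", "semantic", "hybrid"]},
            "app": {"type": "string"},
            "start": {"type": "string", "format": "date-time"},
            "end": {"type": "string", "format": "date-time"},
            "limit": {"type": "integer", "maximum": 50},
        },
        "required": ["query"],
    },
}
```

Because the schema is part of the advertised contract, an assistant knows up front which filters exist and which are required, instead of guessing at a free-form query format.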
Privacy model
The integration is designed around a simple rule: where your query and results end up depends on which assistant you connect.
| Assistant type | What leaves your machine |
|---|---|
| Local LLM (built-in Overshow AI, Jan.ai, LM Studio) | Nothing. Queries, tool calls, and responses stay on device. |
| Cloud assistant (Claude, Cursor, others) | Query text and any tool results returned to the assistant are sent to that provider's servers. |
Because tool results can include verbatim OCR text, transcripts, and UI snapshots, sending them to a cloud assistant effectively ships that content to a third party. Overshow therefore treats cloud assistants as an explicit opt-in:
- The setup flow auto-detects local LLM clients by default.
- Configuring Claude, Cursor, or similar clients requires an explicit `--cloud` flag (or the equivalent choice in the UI).
- The flag appears in logs and the desktop app so you can audit what was enabled and when.
If you connect a cloud assistant, treat every captured moment as potentially leaving your device the first time the assistant calls a tool. Use window exclusions and capture pauses before you would normally rely on them, not after.
Separately, the server runs on localhost only. It is not exposed to the network. An assistant has to be running on the same machine (or explicitly tunnelled by you) to call it.
What about the `run-action-prompt` tool?
`run-action-prompt` is the one tool that, internally, sends meeting content (OCR text and audio transcript) through a chat backend to generate its answer. Overshow currently ships only the on-device MLX chat backend, so that content does not leave your device regardless of which MCP client called the tool. If Overshow ever exposes a cloud chat backend, its use will be surfaced the same way cloud MCP clients are: as an explicit opt-in.
How it will fit into your machine
At a high level the pieces are:
You install Overshow as usual. A separate `overshow-mcp` process is launched by the assistant on demand; it forwards tool calls to the Overshow server already running as part of the desktop app. There is no extra background service to keep healthy beyond what you already run.
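For clients that launch MCP servers themselves, the wiring is a few lines of config. A sketch of what an entry could look like in a client's MCP config file (the `mcpServers` shape is what Claude Desktop and Cursor use today; `overshow-mcp` is the process named above, and the final command name and arguments will be confirmed at launch):

```json
{
  "mcpServers": {
    "overshow": {
      "command": "overshow-mcp"
    }
  }
}
```

The setup helper described below is intended to write entries like this for you, so hand-editing should be the exception rather than the rule.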
Setup, at launch
The exact commands will be confirmed when MCP ships. The intended shape is:
| Step | What you will do |
|---|---|
| 1. Install Overshow | Install the desktop app and complete onboarding as normal. |
| 2. Run the setup helper | A single command detects installed assistants and writes their config files. |
| 3. Confirm cloud consent | Cloud assistants only configure when you pass an explicit flag. |
| 4. Restart the assistant | Claude Desktop, Cursor, and similar read their MCP config at startup. |
| 5. Ask your first question | The assistant now sees Overshow tools and can cite your captures in its replies. |
Local-only setups need no extra flags. Cloud setups require the opt-in described above.
What to expect in practice
A few realistic scenarios, assuming MCP is enabled and connected:
| Scenario | What happens |
|---|---|
| You ask Claude about yesterday's standup | Claude calls `list-meetings`, then `get-meeting-summary`, and replies with the summary inline. |
| Cursor drafts a commit message | Cursor calls `search-content` for recent work on the touched files, cites the relevant captures. |
| A local LLM helps you write a follow-up | The assistant searches meeting transcripts by participant and drafts an email you can edit. |
| You debug why capture missed an app | An assistant calls `capture-status`, `pipeline-status`, and `recent-events` to explain the gap. |
In each case the assistant sees only what the tool returns for that specific call, not the entire database.
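The standup scenario above reduces to two tool calls. A sketch with a stubbed client, where the tool names come from this page and the response shapes are assumptions for illustration:

```python
# Sketch of the two-call sequence behind "ask Claude about yesterday's
# standup". call_tool stands in for a real MCP client; the tool names come
# from this page, while the response shapes are illustrative assumptions.
def call_tool(name, arguments):
    fake_responses = {
        "list-meetings": [{"id": "m-42", "title": "Standup", "date": "2026-04-17"}],
        "get-meeting-summary": {"id": "m-42", "summary": "Discussed release blockers."},
    }
    return fake_responses[name]

# First call narrows to the right meeting; second fetches its summary.
meetings = call_tool("list-meetings", {"start": "2026-04-17", "end": "2026-04-17"})
summary = call_tool("get-meeting-summary", {"meeting_id": meetings[0]["id"]})
print(summary["summary"])
```

The assistant chooses this call sequence itself; Overshow only answers each individual request.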
Trade-offs worth knowing now
- Tool results are bounded. Each call returns a limited window (for example, a page of search hits or a capped number of events). Assistants may need to call more than once for broad questions, which costs latency and, on cloud assistants, tokens.
- Grounding depends on retrieval. The assistant's answer is only as good as the captures surfaced by the tools it chose. Vague prompts tend to trigger broad `search-content` calls with mediocre results.
- Cloud assistants can be verbose. Some clients will read large tool outputs into their context repeatedly. If that matters for cost or privacy, prefer a local LLM for day-to-day work and reserve cloud assistants for tasks that clearly need them.
- MCP is still maturing. Clients differ in how they display tool calls, cite sources, and handle errors. Overshow follows the protocol closely, but the experience also depends on the assistant you pair with it.
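The first trade-off, bounded results, usually shows up as pagination. A sketch of how an assistant would page through a broad query, assuming a hypothetical `next_cursor` field in tool results (the real paging scheme will be confirmed when MCP ships):

```python
# Cursor-based pagination sketch. fetch_page stands in for repeated
# search-content calls; the next_cursor field is a hypothetical paging
# mechanism, not a confirmed part of the Overshow tool contract.
def fetch_page(cursor=None):
    pages = {
        None: {"hits": ["a", "b"], "next_cursor": "p2"},
        "p2": {"hits": ["c"], "next_cursor": None},
    }
    return pages[cursor]

hits, cursor = [], None
while True:
    page = fetch_page(cursor)
    hits.extend(page["hits"])
    cursor = page["next_cursor"]
    if cursor is None:
        break

# Each extra page costs latency, and on cloud assistants, tokens.
print(hits)
```

This is why narrow prompts (a time range, an app, a participant) tend to outperform broad ones: they let the assistant answer in one page instead of several.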
What it does not do
- It does not give assistants write access to your Overshow data.
- It does not stream captures in real time; it answers on-demand queries.
- It does not replace the desktop Ask feature. Ask stays the primary grounded-answer surface inside Overshow itself.
- It does not bypass capture controls. Paused or excluded content is not indexed, so MCP cannot retrieve it.
Register interest
If you would find MCP useful, the beta signup page is the best way to tell us. That also helps us prioritise which clients to polish first (local LLMs, Claude Desktop, Cursor, and others are all in scope, but launch support may roll out in stages).
See also
- Ask: the grounded-answer surface inside Overshow that uses the same local retrieval signals MCP exposes.
- Search: the filters and modes that `search-content` and `search-by-profile` map onto.
- Privacy: on-device processing, capture controls, and encryption, which together define what MCP can ever see.
- Security: how cloud assistant integrations are treated as an explicit opt-in.