
Ask

Grounded answers from your captures with citations, on-device synthesis on Apple Intelligence, diagnostics, exports, and Quick Ask from anywhere.

Last updated: 2 April 2026

What grounded answers mean

Grounded means every substantive claim in an Ask reply should be traceable to material Overshow has already stored: OCR, transcripts, UI snapshots, or indexed document chunks. The system retrieves evidence, then synthesises it into readable prose. It is not a general-purpose chatbot that invents facts when your archive is silent.

That matters for knowledge work where verification beats fluency: you can open citations and confirm wording, timing, and source application before acting on an answer.

[Screenshot: Ask interface showing question, grounded answer, and source references]

Ask declines or qualifies when confidence is low rather than guessing. If retrieval is thin, you will see an honest boundary, not a confident fabrication.

How Ask works

Ask runs in two broad phases:

Retrieval: The same semantic and keyword signals that power Search surface candidate passages, scoped by your filters and configuration.
Synthesis: On macOS 26+, Apple Intelligence runs an on-device LLM that turns retrieved snippets into an answer while staying tied to those sources.

Context assembly respects limits on how much material is included in each request, balancing coverage with focus. Nearby segments from the same source are merged so the model sees coherent passages rather than isolated fragments.
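The merging step described above can be sketched as follows. This is a minimal illustration, not Overshow's actual implementation: the segment fields (`source`, `start`, `end`, `text`) and the 200-character gap threshold are assumptions made for the example.

```python
# Sketch: merge retrieved segments that come from the same source and sit
# close together, so the model sees coherent passages rather than fragments.
# Field names and the max_gap value are illustrative assumptions.

def merge_segments(segments, max_gap=200):
    """Merge segments from the same source whose character offsets are
    within max_gap of each other."""
    ordered = sorted(segments, key=lambda s: (s["source"], s["start"]))
    merged = []
    for seg in ordered:
        last = merged[-1] if merged else None
        if (last and last["source"] == seg["source"]
                and seg["start"] - last["end"] <= max_gap):
            # Extend the previous passage instead of emitting a fragment.
            last["end"] = max(last["end"], seg["end"])
            last["text"] += " " + seg["text"]
        else:
            merged.append(dict(seg))
    return merged
```

With this sketch, two adjacent transcript snippets from the same recording collapse into one passage, while a snippet from a different document stays separate.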

Ask versus Search

Primary output: Search returns a ranked list of hits; Ask returns one narrative answer plus citations.
Mental model: Search is “show me every place this appears”; Ask is “what did we conclude, decide, or say?”
Exploration: In Search you scan, compare, and open many cards; in Ask you read the synthesis, then drill into cited moments.
Ranking: Search tunes keyword and semantic signals for lists; Ask's retrieval supports a single best-effort context bundle.
Verification: In Search it is implicit, you judge each hit; in Ask it is explicit, citations point to evidence.
When evidence is weak: Search shows empty or short result sets; Ask declines, labels a fallback, or qualifies its text.
When to use which

Use Search when you need exhaustive coverage, exact phrase hunting, or side-by-side comparison of many snippets. Use Ask when a short, sourced summary saves time, as long as you are willing to open citations for anything compliance-critical.

Citations and verification

Every useful answer includes citations that tie statements back to stored content. Citations include the content type, relevance score, and matched text, with links into the data inspector for deeper review where available.

Citation fields

Content type: Whether the evidence came from OCR, audio, UI snapshot, document chunk, or a combined view.
Score: Retrieval strength; higher usually means a closer match.
Matched text: The excerpt surfaced as evidence; read this before trusting a paraphrase.

Always open citations when the answer informs commitments, incidents, or policy. The summary is a convenience; the matched text is the ground truth in your archive.
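The citation fields above can be pictured as a small record plus one verification step: confirm the cited excerpt is literally present in the stored source before trusting the summary's paraphrase. This is an illustrative sketch; the field names are assumptions, not Overshow's real schema.

```python
# Illustrative shape of a citation record and a minimal verification check.
# Field names and content-type labels are assumptions for the sketch.

from dataclasses import dataclass

@dataclass
class Citation:
    content_type: str   # e.g. "ocr", "audio", "ui_snapshot", "document"
    score: float        # retrieval strength; higher usually means closer match
    matched_text: str   # the excerpt surfaced as evidence

def verify(citation, stored_text):
    """Return True only if the cited excerpt literally appears in the source."""
    return citation.matched_text in stored_text
```

The point of the check mirrors the guidance above: the matched text, not the synthesised prose, is the ground truth in your archive.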

Fallback status and the full LLM path

Not every environment can run the full on-device stack at every moment. Ask surfaces fallback status so you can tell:

Full LLM answer: Apple Intelligence produced a structured reply from retrieved context.
Extractive fallback: Text may be closer to quoted retrieval than a rewritten narrative; still grounded, differently presented.

The UI and diagnostics make this distinction explicit so you do not mistake a condensed extract for a free-form model essay.
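The two paths can be sketched as a simple branch. This is a hedged illustration only: the status labels, the snippet-joining strategy, and the idea of passing the model as a callable are assumptions for the example, not Overshow's pipeline.

```python
# Sketch of the full-LLM vs extractive-fallback branch. `model` is a
# callable standing in for the on-device LLM; None means the full stack
# is unavailable. Status strings are invented for the example.

def synthesise(snippets, model=None):
    """Return (text, status) for a set of retrieved snippets."""
    if not snippets:
        # Thin retrieval: decline rather than invent.
        return ("No grounded evidence found.", "declined")
    if model is not None:
        # Full path: the model rewrites retrieved context into an answer.
        return (model(snippets), "full_llm")
    # Fallback: closer to quoted retrieval than a rewritten narrative.
    return (" / ".join(snippets), "extractive")
```

Surfacing the `status` value alongside the text is what lets a UI make the distinction explicit, so a condensed extract is never mistaken for a free-form model essay.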

When Ask declines, and what to do

Low confidence triggers honest decline rather than invention. Practical responses:

  • Broaden time or remove an overly tight app_name filter.
  • Rephrase toward terms that likely appear in captures.
  • Switch to Search in hybrid or keyword mode to inspect raw hits.
  • Confirm capture was running and permitted for that context.

Prompt configurations

Overshow ships four bundled prompt configurations so you can steer tone and structure without hand-authoring system prompts:

Grounded: Conservatively tied to sources; the default trust posture.
Concise: Shorter answers when you want speed.
Audio-focused: Emphasis on spoken evidence in the context bundle.
Chain-of-thought: More explicit reasoning steps while remaining source-bound.

Exact labels in the UI may vary slightly by version; the intent is the same: pick the shape of answer you need, not a different knowledge base.
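One way to picture "same knowledge base, different shape" is a mapping from configuration name to system-prompt template, with retrieval untouched. The template wording below is invented for the sketch; only the four configuration names come from the table above.

```python
# Illustrative mapping from bundled configuration names to system-prompt
# templates. Template wording is an assumption; every config shares the
# same retrieval, only the instruction to the model changes.

PROMPT_CONFIGS = {
    "grounded": "Answer only from the provided sources; cite each claim.",
    "concise": "Answer in at most three sentences, citing sources.",
    "audio-focused": "Prefer spoken-transcript evidence when weighing sources.",
    "chain-of-thought": "Reason step by step, keeping every step source-bound.",
}

def build_prompt(config, question, context):
    """Assemble one request: shared context, config-specific instruction."""
    system = PROMPT_CONFIGS[config]
    return f"{system}\n\nContext:\n{context}\n\nQuestion: {question}"
```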

Retrieval diagnostics and export

Ask exposes retrieval diagnostics for transparency and debugging:

Timings: Where latency went: retrieval vs model vs assembly.
Backend: Which on-device path answered.
Fallback trigger: Why extractive or shortened behaviour occurred.
Prompt preview: What the model actually saw (redacted where appropriate).
Context stats: Count and character totals for included snippets.

You can export diagnostics as JSON or CSV for tickets or internal analysis, and export Markdown context when you want to paste evidence into an external LLM or notebook; the export stays under your control and is never sent automatically by Overshow.
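A flat diagnostics record exports naturally to both formats with the standard library. This is a minimal sketch: the field names mirror the diagnostic areas above but are assumptions, not the real export schema.

```python
# Sketch: serialise one flat diagnostics record as JSON and CSV.
# Field names are illustrative, not Overshow's actual export schema.

import csv
import io
import json

def export_diagnostics(diag):
    """Return (json_text, csv_text) for a flat diagnostics dict."""
    json_text = json.dumps(diag, indent=2, sort_keys=True)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(diag))
    writer.writeheader()   # column names on the first line
    writer.writerow(diag)  # one record per row
    return json_text, buf.getvalue()
```

Either output drops cleanly into a ticket; the JSON form is easier to diff, the CSV form easier to aggregate across many queries.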

Privacy and on-device processing

Ask’s retrieval uses your local index; synthesis on supported Macs uses Apple Intelligence on-device. No query text is sent to external search providers as part of this feature. Any organisation-wide cloud policies elsewhere in the product remain separate. Ask is designed around local evidence first.

Treat exported Markdown like any sensitive artefact: it can contain quotes from your screen and microphone history. Share only where policy allows.

Quick Ask overlay

Quick Ask (typically Opt+Space) opens a lightweight overlay so you can pose a question without leaving the app in front of you. Workflow:

  1. Invoke the shortcut from any workspace.
  2. Type your question; retrieval runs against your archive.
  3. Read the answer and citations; jump to inspector links where offered.
  4. Dismiss the overlay to return to your previous context.

This complements the main Ask sub-tab beside Search in the desktop shell: use the overlay for interruptions, use the tab for longer review sessions.

Desktop UI integration

  • Ask sub-tab sits alongside Search for full-width review.
  • Citations link into the data inspector for transcript and capture detail.
  • Search UI remains the place to enumerate every hit; Ask remains the place for synthesis with receipts.

Tips for better answers

  • Ask specific questions that name projects, people (as they appear in data), or time periods.
  • Enable sensible context caps: too little context starves the model; too much dilutes focus.
  • Pick prompt configs to match evidence type (audio-heavy meetings vs on-screen specs).
  • If an answer feels vague, check diagnostics before blaming the model; retrieval may be thin.

Example questions

Post-meeting recap: “What action items were agreed in yesterday’s planning call?”
Incident: “Did we mention rollback before the outage window closed?”
Spec drift: “What did we decide about the API rate limit in design review?”
Personal recall: “Which URL did I have open when discussing the invoice?”

Ask cannot retrieve what was never captured. Pair honest declines with Search and, if needed, calendar or meeting filters to confirm you are looking at the right day and source.