Daily summaries
End-of-day knowledge digests from hourly capture data, with app usage breakdown, on-device generation, encryption at rest, and on-demand generation from the desktop app.
Last updated: 2 April 2026
Overview
Daily summaries turn a day’s Overshow captures into a readable digest: what stood out, what was decided, what you might want to remember, and where on-screen time concentrated across applications. Generation runs on device over hourly aggregates of capture data, then persists the result encrypted at rest. You can open any past date that has capture data and generate on demand from the desktop app, with a progress indicator while work completes.
What a daily summary contains
The digest is evidence-led: it reflects what appeared in your captures. The core flow does not depend on a separate time-tracking product or a cloud summarisation service.
Summary sections
| Section | Typical content |
|---|---|
| Key facts | Notable statements, figures, and concrete details that showed up in OCR or transcribed audio during the day. |
| Decisions | Commitments, choices, and conclusions that can be inferred from captured discussion and on-screen work. |
| Learnings | Insights, corrections, and “things worth remembering” distilled from the day’s indexed material. |
| App usage breakdown | Which applications appeared in screen capture and how capture time was allocated across them for that calendar day. |
Relationship to hourly capture
The summary pipeline consumes work driven from hourly capture aggregation, the same underlying rhythm that organises much of the desktop index. That is why a day with sparse or uneven capture may yield a shorter digest: there is simply less signal per hour to map and reduce.
How generation works
How the day is processed
| Stage | Role |
|---|---|
| Hourly extraction | Each hour contributes candidate facts, decisions, and learnings from that window’s captures. |
| Merge | The pipeline merges hourly contributions into a single daily narrative with consistent tone and without endless repetition. |
| App usage | Application names and durations are derived from which apps were visible in screen captures across the day, then presented as a breakdown alongside the narrative. |
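The three stages above follow a map/merge shape. The sketch below is illustrative only: the function names, the capture record fields (`kind`, `text`, `app`), and the keyword match for decisions are assumptions, not Overshow's real pipeline API.

```python
from collections import Counter, defaultdict

def extract_hour(captures):
    """Map stage: pull candidate facts and decisions from one hour's captures.
    Field names and the 'decide' keyword heuristic are hypothetical."""
    return {
        "facts": [c["text"] for c in captures if c["kind"] == "ocr"],
        "decisions": [c["text"] for c in captures
                      if c["kind"] == "audio" and "decide" in c["text"].lower()],
    }

def merge_day(hourly):
    """Merge stage: combine hourly contributions into one daily digest,
    de-duplicating items that repeat across hours."""
    digest = defaultdict(list)
    seen = set()
    for hour in hourly:
        for section, items in hour.items():
            for item in items:
                if item not in seen:
                    seen.add(item)
                    digest[section].append(item)
    return dict(digest)

def app_usage(frames):
    """App usage stage: count which apps were visible in captured frames."""
    return Counter(f["app"] for f in frames)
```

This also illustrates why sparse capture yields a thin digest: each hour contributes only what its captures contain, and the merge stage cannot invent connective material.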
Worker and backend
The summary pipeline processes pending work items locally and writes results back for the UI to display.
| Backend | Role |
|---|---|
| LLM | Produces richer phrasing and connection between ideas when a language model is available in your configuration. |
| Baseline fallback | Ensures you still receive a structured digest when the LLM path is unavailable or errors; expect simpler linkage between bullets. |
“On-device” means the generation step is designed to run without sending your raw captures to a cloud summarisation API for the core digest. Your organisation’s policy and build still govern any optional integrations elsewhere in the product.
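The LLM-preferred, baseline-guaranteed behaviour can be sketched as a simple selection with graceful degradation. `generate_digest` and `baseline_digest` are hypothetical stand-ins, not the product's real backends.

```python
def baseline_digest(bullets):
    """Fallback path: structured but flat output, close to literal fragments."""
    return "\n".join(f"- {b}" for b in bullets)

def generate_digest(bullets, llm=None):
    """Prefer the LLM path when a model is configured; any error or absence
    falls back to the baseline so a digest is always produced."""
    if llm is not None:
        try:
            return llm(bullets)   # richer phrasing, cross-hour linking
        except Exception:
            pass                  # degrade gracefully rather than fail
    return baseline_digest(bullets)
```

The key property is that the fallback is a backstop, not an error state: the caller always receives a usable digest.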
Enabling daily summaries
Daily summary generation is guarded by a feature flag, which is off by default. Enable it in product settings before expecting automatic or background generation to run.
| Mode | What to expect |
|---|---|
| Flag off | No background generation for daily rows; the UI may still show historical rows if they were created earlier. |
| Flag on | Worker can pick up pending daily summary jobs according to schedule and datastore state. |
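The flag-off versus flag-on behaviour in the table amounts to a guard in front of the worker's job selection. The flag name and row shape below are assumptions for illustration, not Overshow's real schema.

```python
def pending_jobs(datastore, flags):
    """Flag off (the default): no background generation is picked up.
    Flag on: return pending daily-summary rows for the worker."""
    if not flags.get("daily_summaries", False):  # off by default
        return []
    return [row for row in datastore if row["status"] == "pending"]
```

Note that turning the flag off does not delete historical rows; it only stops new background work, which matches the "Flag off" row above.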
First-time setup checklist
- Turn daily summaries on where your build exposes feature flags.
- Confirm capture is running for the days you care about.
- Open /summary, pick a date, and use generate on demand to validate output before relying on background completion.
- Review ignored applications so important tools are not excluded from app usage and OCR signal.
Generating on demand
Even when background scheduling exists, you remain in control: the desktop app lets you request generation for a chosen date. A progress indicator reflects worker activity so long-running days do not feel stuck.
| Situation | Suggestion |
|---|---|
| You were away | Pick the date with real capture activity and generate; past dates remain available whenever data exists. |
| A day looks empty | Confirm capture was not paused all day and that ignored apps did not hide your main work surface. |
| You want a second opinion | Regenerate after changing capture or ignore settings so the next pass sees different evidence. |
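On-demand generation and the progress indicator can be pictured as a small job state machine. The states, labels, and function names here are hypothetical, chosen to mirror the behaviour described above.

```python
from enum import Enum

class JobState(Enum):
    PENDING = "pending"
    RUNNING = "running"
    DONE = "done"

def request_generation(jobs, day):
    """Queue (or re-queue) a chosen date; regenerating simply resets the
    job so the next pass sees the current capture and ignore settings."""
    jobs[day] = JobState.PENDING
    return jobs[day]

def progress_label(state):
    """What a progress UI might surface for each worker state."""
    return {JobState.PENDING: "Queued…",
            JobState.RUNNING: "Generating…",
            JobState.DONE: "Ready"}[state]
```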
Desktop summaries experience
The /summary route is the primary place to read and trigger daily digests.
| Element | Purpose |
|---|---|
| Date picker | Jump to any day with capture data; compare adjacent days when reconstructing a multi-day decision. |
| Knowledge cards | Present facts, decisions, and learnings in scannable blocks aligned to the table above. |
| App usage breakdown | Shows which applications dominated visible screen time in captures for that date. |
| Progress UI | Surfaces generation state while the worker runs so you can wait or navigate away intentionally. |
At day end, skim the digest, then use Search to open any bullet that needs verbatim evidence from transcripts or OCR.
What app usage measures
App usage in daily summaries reflects time allocation inferred from screen captures: which applications were visible in captured frames and how capture coverage spread across them. It does not measure keystrokes, mouse activity, audio-only focus, or background processes that never appeared on screen. Treat it as “where your captured screen time lived”, not a full productivity score.
| Measures | Does not measure |
|---|---|
| Visible application identity in frames | Keystroke or input volume |
| Temporal spread of captures per app | Silent background CPU |
| Honest signal when capture runs | Time when capture paused or excluded |
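Concretely, "temporal spread of captures per app" can be derived by attributing one sampling interval to whichever app was visible in each frame. The fixed interval and frame shape below are illustrative simplifications, not the real sampling model.

```python
from collections import Counter

def app_breakdown(frames, interval_s=30):
    """Attribute one sampling interval to the app visible in each frame.
    Gaps (paused capture, excluded apps) contribute nothing, which is why
    the breakdown only reflects time capture actually ran."""
    counts = Counter(f["app"] for f in frames)
    return {app: n * interval_s for app, n in counts.items()}
```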
Encryption and storage
Completed daily summaries are stored encrypted at rest alongside other sensitive narrative content in your local datastore. This matches the product’s privacy-first posture: the digest is yours, readable on device, and not dependent on a third-party summarisation SaaS for the default path.
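The storage flow is "seal before persisting, unseal on read". In this sketch a toy SHA-256 keystream stands in for a real authenticated cipher (a production build would use something like AES-GCM); it only illustrates the shape of encrypting a digest at rest, and must not be used for actual secrets.

```python
import hashlib
import os

def _keystream(key, nonce, length):
    """Toy counter-mode keystream from SHA-256; illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key, plaintext):
    """Encrypt a finished digest before it is written to the local datastore."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    return nonce + ct

def open_sealed(key, blob):
    """Decrypt on device when the UI needs to display the digest."""
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```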
When summaries can be thin
Some days legitimately produce short or generic output. Common causes:
| Cause | Effect |
|---|---|
| Low capture volume | Few hourly buckets contain OCR or audio text to map. |
| Paused capture | Large gaps remove context the reducer could connect. |
| Ignored or excluded apps | Work happens in tools that never enter the index. |
| Headless or minimal UI | Little on-screen text for OCR to anchor facts. |
| LLM unavailable | Baseline fallback may produce flatter prose with less cross-hour linking. |
Diagnosing a thin summary
Compare capture settings, pause history, and ignore lists for the date in question. If app usage shows only one or two applications yet you remember more context, the missing apps were likely not captured or not visible in frame samples. Regenerate after a representative day to confirm improvement.
Tips for richer summaries
- Run capture consistently during working hours you care to remember.
- Keep ignored windows limited to genuinely sensitive or noisy surfaces.
- Prefer visible work in standard desktop apps when you need strong OCR anchors (browser tabs, documents, tickets).
- Enable daily summaries when you want background completion; use on-demand generation to backfill specific dates.
- Compare adjacent days when rebuilding a narrative across a week; daily summaries are per calendar date, not arbitrary ranges.
Viewing past dates and comparing days
The date picker is not limited to “today”. Any calendar day for which capture data exists in your local database can be selected, and you can generate or regenerate a summary for that day.
| Workflow | Why it helps |
|---|---|
| Catch up after leave | Reconstruct decisions and facts without manually scrolling search results. |
| Audit a specific day | Pair the digest with Search filters for time range when something is disputed. |
| Week-over-week review | Open adjacent dates in sequence; app usage patterns often explain shifts in tone or focus. |
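The date picker's rule is simple: a day is selectable only when capture rows exist for it locally. The row shape below is assumed for illustration.

```python
from datetime import date

def selectable_dates(capture_rows):
    """Return the sorted, de-duplicated set of days that have capture data;
    only these can be selected and (re)generated in the picker."""
    return sorted({row["day"] for row in capture_rows})
```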
If a date is missing from the picker or cannot generate, there may be no capture rows for that day, or storage was reset on this device; summaries cannot invent content that was never indexed.
LLM versus baseline (practical differences)
| Aspect | LLM | Baseline fallback |
|---|---|---|
| Wording | More natural connective language between hours | More list-like, fewer implicit links |
| Risk of overreach | Guardrails still apply; content should tie to captures | Tends to stay closer to literal fragments |
| Availability | Depends on platform, model, and settings | Intended to always be available as a backstop |
You do not need to pick manually: the pipeline selects the LLM path when appropriate and falls back otherwise.
See also
- Meetings for call-level narratives and templates.
- Search to drill from a bullet to evidence.
- Screen capture for how frames enter the index that feeds summaries.