I’m a builder and user of Obsidian, validating a concept called Concerns. Today it’s only a landing page + short survey (no product yet) to test whether this pain is real.
The core idea (2–3 bullets):
- Many of us capture tons of useful info (notes/links/docs), but it rarely becomes shipped work.
- Instead of better “organization” (tags/folders), I’m exploring an “action engine” that:
1. detects what you’re actively targeting/working on (“active projects”)
2. surfaces relevant saved material at the right moment
3. proposes a concrete next action (ideally pushed into your existing task tool)
My own “second brain” became a graveyard of good intentions: the organizing tax was higher than the value I got back. I’m trying to validate whether the real bottleneck is execution, not capture.

Before writing code, I’m trying to pin down two things:
- Project context signals (repo/PRs? issues? tasks? calendar? a “project doc”?)
- How to close the loop: ingest knowledge → rank against active projects → emit a small set of next-actions into an existing todo tool → learn from outcomes (done/ignored/edited) and optionally write back the minimal state. The open question: what’s the cleanest feedback signal without creating noise or privacy risk? (explicit ratings vs completion events vs doc-based write-back)
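To make the loop concrete, here is a minimal sketch of the ingest → rank → emit → learn cycle. Everything here is hypothetical (the names `NextAction`, `rank_against_projects`, `learn_from_outcome`, and the keyword-overlap scoring are placeholders I made up for illustration; a real version would likely use embeddings and integrate with an actual task tool):

```python
from dataclasses import dataclass

# Hypothetical data model for the loop described above.
@dataclass
class Note:
    text: str

@dataclass
class Project:
    name: str
    keywords: set

@dataclass
class NextAction:
    note: Note
    project: Project
    score: float

def rank_against_projects(notes, projects):
    """Naive relevance scoring: count keyword overlap between a note
    and each active project. A real version might use embeddings."""
    actions = []
    for note in notes:
        words = set(note.text.lower().split())
        for project in projects:
            score = len(words & project.keywords)
            if score > 0:
                actions.append(NextAction(note, project, score))
    # Emit only a small set of top suggestions to avoid noise.
    return sorted(actions, key=lambda a: -a.score)[:3]

def learn_from_outcome(weights, action, outcome):
    """Feedback signal: completion events nudge a per-project weight
    up; ignored suggestions nudge it down. 'edited' counts as weak
    positive signal."""
    delta = {"done": 0.1, "edited": 0.05, "ignored": -0.1}[outcome]
    weights[action.project.name] = weights.get(action.project.name, 1.0) + delta
    return weights
```

The point of the sketch is the shape of the feedback signal: completion events (`done`/`ignored`/`edited`) are implicit and cheap, whereas explicit ratings would add friction, and doc-based write-back would require write access, which raises the privacy bar.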
What I’m asking from you:
1. Where does your “second brain” break down the most?
capture / organization / retrieval / execution (If you can, share a concrete recent example.)
2. What best represents “active project context” for you today?
task project (Todoist/Things/Reminders)
issues/boards (GitHub/Linear/Jira)
a doc/wiki page (Notion/Docs)
calendar
"in my head"
Which one would you actually allow a tool to read?
3. What’s your hard “no” for an AI that suggests actions from your notes/links? (pick 1–2)
privacy/data retention
noisy suggestions / interruption
hallucinations / wrong suggestions
workflow change / migration cost
pricing
other
item007•1w ago
1. On-demand recall & retrieval is the core pain: people capture a lot but can’t reliably resurface the right note/link at the right time; they want stronger search (fuzzy/semantic), snapshots/context, and “pull-based” recall when needed.
2. Privacy/local-first is a hard requirement for many: “no cloud, no third-party access,” ideally open-source and self-hostable; any AI must run fully on-device to be trusted.
3. Low-friction matters more than perfect organization: users prefer systems that don’t force structure or add maintenance overhead; messy-first, iterate only when a real problem appears.
4. Avoid interruption by default: many dislike proactive “AI suggestions”; they want controlled resurfacing (opt-in prompts), not constant nudges.
5. Different goals coexist: for many, notes are for memory/inspiration/reference (not turning into tasks), while others want action workflows; tools should respect both modes.
6. Cost and scalability must be predictable: long-term indexing (years of notes/history) can get expensive; pricing needs to be transparent and not “per task,” and context signals across tools are often noisy/unreliable.