frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


India's Sarvam AI launches Indic-language-focused LLMs

https://x.com/SarvamAI
1•Osiris30•1m ago•0 comments

Show HN: CryptoClaw – open-source AI agent with built-in wallet and DeFi skills

https://github.com/TermiX-official/cryptoclaw
1•cryptoclaw•3m ago•0 comments

Show HN: Make OpenClaw Respond in Scarlett Johansson's AI Voice from the Film Her

https://twitter.com/sathish316/status/2020116849065971815
1•sathish316•6m ago•1 comment

CReact Version 0.3.0 Released

https://github.com/creact-labs/creact
1•_dcoutinho96•7m ago•0 comments

Show HN: CReact – AI Powered AWS Website Generator

https://github.com/creact-labs/ai-powered-aws-website-generator
1•_dcoutinho96•8m ago•0 comments

The rocky 1960s origins of online dating (2025)

https://www.bbc.com/culture/article/20250206-the-rocky-1960s-origins-of-online-dating
1•1659447091•13m ago•0 comments

Show HN: Agent-fetch – Sandboxed HTTP client with SSRF protection for AI agents

https://github.com/Parassharmaa/agent-fetch
1•paraaz•15m ago•0 comments

Why there is no official statement from Substack about the data leak

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
5•witnessme•19m ago•1 comment

Effects of Zepbound on Stool Quality

https://twitter.com/ScottHickle/status/2020150085296775300
2•aloukissas•22m ago•1 comment

Show HN: Seedance 2.0 – The Most Powerful AI Video Generator

https://seedance.ai/
2•bigbromaker•25m ago•0 comments

Ask HN: Do we need "metadata in source code" syntax that LLMs will never delete?

1•andrewstuart•31m ago•1 comment

Pentagon cutting ties w/ "woke" Harvard, ending military training & fellowships

https://www.cbsnews.com/news/pentagon-says-its-cutting-ties-with-woke-harvard-discontinuing-milit...
6•alephnerd•34m ago•2 comments

Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? [pdf]

https://cds.cern.ch/record/405662/files/PhysRev.47.777.pdf
1•northlondoner•34m ago•1 comment

Kessler Syndrome Has Started [video]

https://www.tiktok.com/@cjtrowbridge/video/7602634355160206623
2•pbradv•37m ago•0 comments

Complex Heterodynes Explained

https://tomverbeure.github.io/2026/02/07/Complex-Heterodyne.html
4•hasheddan•37m ago•0 comments

EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
3•ArtemZ•49m ago•5 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•49m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
2•LiamPowell•51m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
12•duxup•54m ago•1 comment

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•55m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•1h ago•1 comment

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•1h ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
3•savrajsingh•1h ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•1h ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•1h ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•1h ago•1 comment

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
2•g1raffe•1h ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•1h ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
3•rolph•1h ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•1h ago•0 comments

Don't build a spaced repetition startup

https://www.giacomoran.com/blog/dont-build-sr-startup/
23•ran3000•4mo ago

Comments

rickcarlino•4mo ago
Hi Giacomo, I am sorry to hear it did not work out for you, and it is great that you can share your experience with others. As much as I love spaced repetition, and as much as I have used it daily for years with great results, I (unfortunately) have to agree with you on this one.

In college, spaced repetition helped me get ahead, but it felt nearly impossible to convince my peers of this. While I was able to effortlessly remember thousands of vocabulary words in my language program, my peers struggled and failed to heed the simple advice of "just put it into Anki". Spaced repetition is valuable, but much like having a good diet or exercising regularly, it is really really hard to convince people of the benefits. A spaced repetition SaaS can easily become akin to an unused (and eventually cancelled) gym membership.

It seems that most of the apps that make it in this space ultimately need to compromise on their fundamentals in order to get traction. I don't need to name any names here, there are plenty of examples of SRS apps that started as solid tools for learning but evolved down a path of gamification to a point of diminished usefulness.

Anyway, sorry again to hear about this one and best of luck to you on whatever you decide to build next. Like you, I am continuing work on my (noncommercial) SRS and I always enjoy hearing your perspectives. Feel free to reach out if you ever want to bounce ideas around.

LittleFishyChan•4mo ago
I have been using Supermemo daily for almost 20 years, and I have basically stopped trying to sell people on the idea. When I explain it to them, they agree that it is a good idea, but very few people can truly follow through in the long term. Yes, SRS is basically a gym membership that most people will stop using after Jan 3; I like that metaphor.

ttd•4mo ago
I appreciated this - a well-written and useful review of their business and why they think it didn't work.

In addition to the challenges listed here, IMO there are rapidly diminishing returns for the type of recall learning that spaced repetition enables. As you progress further in your career, there is much less emphasis on what you know, and more emphasis on how you apply it, how you communicate, and how your knowledge ends up helping others around you. I suspect most professionals decide at some point that they need to start "paging out" specific knowledge to make room for broader experience, retrieving it from the bookshelf (swap partition) when needed.

I'm also curious about the fixation with creating a startup in the VC-funded sense. Why choose able-to-find-VC-funding as your metric of success?

mabster•4mo ago
For my day job I'm either "Getting things done" or Zettelkasten anyway because it's more about retrieval than memory.

But for languages, SRS is great.

I'm also glad I memorised a whole bunch of math formulas way back. E.g. in Boolean algebra I keep using an identity that I couldn't find on identity sheets via web search.

reverseblade2•4mo ago
lol I just started building this, but I will make it more like WaniKani's levels, which worked great for learning kanji: cards accumulate with longer intervals as they are promoted through levels. I even have the domain, nemorize.com (some dummy game is there now).

mabster•4mo ago
Totally expected the "desirable difficulties". For SRS to work effectively it has to "be yours", and that only happens if you're building your own cards.

Duolingo kind of encourages small bursts rather than the hour (minimum) per day you need with Anki, for better or worse.

mattjenner•4mo ago
Your blog post clearly shows what you didn’t do: marketing, brand, and building an audience.

I understand the educational value of what you made. You clearly overcame many technical hurdles to combine the platform with solid educational principles. Awesome stuff!

But another major step needed was to know that a great product, marketed poorly, is likely doomed. Ironically, a lesser product, marketed well, will likely succeed; it can be improved as it gains attention (and revenue).

Sorry it didn’t work out. Maybe one day your personal runway will expand for it and you can explore how to put it in front of an audience.

Once you get an audience, and you know how to build that audience, you’ll get user feedback, adoption traction and can explore new audiences to share it with.

We’re in an age where they won’t come to you. You must go to them (with marketing, brand, building an audience).

watwut•4mo ago
Spaced repetition and flashcards are two very different things. It is very unfortunate that people are constantly confusing them. Spaced repetition is the idea that you recall/revise at intervals. Flashcards are a method to learn/revise by flashing cards at you, which is not exactly the most effective learning for me.

The Duolingo mentioned above has spaced repetition. It is not a flashcards app.

For me, as an adult who did learn stuff, neither the supposed frictions nor the necessary components of learning as described in the article ring true. They are not really consistent with how I perceived Anki/Duolingo, nor with how I understand the reasons for the "create your own cards" advice. They are not even consistent with where my learning successes and failures came from.

Maybe I am simply not the target of this startup then. But also, I do not think I am the only snowflake adult for whom these points did not ring true.
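The interval-based recall described above can be made concrete with a minimal SM-2-style scheduler sketch (SM-2 is the classic SuperMemo algorithm; the 0-5 grading scale and the constants below are SM-2 defaults, while the rest is purely illustrative and not tied to any app mentioned in the thread):

```python
# Minimal SM-2-style scheduler sketch. State per card: (interval in days,
# repetition count, ease factor). Illustrative only.

def review(interval: int, repetition: int, ef: float, quality: int):
    """Return (next_interval_days, repetition, ef) after a review graded 0-5."""
    if quality < 3:              # failed recall: restart the repetition sequence
        return 1, 0, ef
    if repetition == 0:          # first successful review: see it again tomorrow
        interval = 1
    elif repetition == 1:        # second: six days out
        interval = 6
    else:                        # afterwards: intervals grow multiplicatively
        interval = round(interval * ef)
    # Adjust ease factor based on grade; clamp at 1.3 as in SM-2
    ef = max(1.3, ef + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return interval, repetition + 1, ef

# A card answered well (grade 4) three times in a row:
state = (0, 0, 2.5)              # fresh card, default ease factor 2.5
for _ in range(3):
    state = review(*state, quality=4)
print(state[0])                  # -> 15 (intervals went 1 -> 6 -> 15 days)
```

The growing gaps are the point: the schedule, not the card format, is what makes it spaced repetition, whether the prompt is a flashcard, a Duolingo exercise, or anything else.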

graboid•4mo ago
It is a very interesting write-up. A random thought I had while reading this: I feel like long-term, a system that schedules/"optimizes" the process of learning by reading/watching content and then engaging with this new content by taking notes and connecting those notes to existing knowledge could be more fruitful. Something akin to SuperMemo's "Incremental Reading", but not as focused on creating flashcards out of the material.

With traditional Q/A-style spaced repetition, I feel like I sometimes end up accumulating a long list of isolated facts (I know you can remedy this a bit by also quizzing connections and context, but I feel like the general tendency still remains).

jamager•4mo ago
I am working in this space as well. Habit formation / self-learning is a big ask because the overwhelming majority of people prefer more structured / guided content. This makes SRS very niche.

I strongly disagree with the part that frames "You still need to decide what's worth remembering" as friction.

That is a very important part of the learning process and IMO should not be automated. It is difficult because learning is difficult, but if you kill that you also kill the spirit of self-learning!

And not mentioned in the article, but I think the most important factor is that everyone in the SRS niche knows Anki, and despite Anki's faults, everyone has put in the effort to learn how to use it, gotten used to it, found a good-enough workflow, and has zero incentive to move to a more expensive alternative even if it is better.

rsanek•4mo ago
Great write-up. I especially liked

> I find the common advice of “write your own cards” misplaced: you pay the opportunity cost of not reviewing all the cards you don’t make.

This perspective of cost is a nice way of putting it. The other side to me is that writing your own cards can, but does not necessarily, improve retention on its own. Much of card creation is still mostly manual formatting, copy-pasting, etc.

My own solution to this has been to build an 80-20 version of what was demo'd in the "AI canvas" part of the video. I drop in a source (pdf, epub, etc), and an LLM prompt creates cards. I just use Anki Connect to directly add these cards to my collection.

Would be interested in seeing the Rember system prompt for card creation, and hearing what LLM is being used. I've seen wildly different results from changing my own prompt (and from using different models).
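The AnkiConnect step of the pipeline described above can be sketched roughly as follows. AnkiConnect's `addNote` action and version-6 JSON API over localhost:8765 are real, but the deck name, model name, and card text here are made-up placeholders, and the LLM card-generation step is left out:

```python
import json
import urllib.request

def addnote_payload(front: str, back: str, deck: str = "LLM Imports") -> dict:
    """Build an AnkiConnect 'addNote' request body (API version 6).
    Deck and model names are placeholders; adjust to your own collection."""
    return {
        "action": "addNote",
        "version": 6,
        "params": {
            "note": {
                "deckName": deck,
                "modelName": "Basic",
                "fields": {"Front": front, "Back": back},
                "options": {"allowDuplicate": False},
            }
        },
    }

def send(payload: dict, url: str = "http://localhost:8765") -> dict:
    """POST to a locally running AnkiConnect instance (requires Anki open
    with the AnkiConnect add-on installed)."""
    req = urllib.request.Request(url, json.dumps(payload).encode("utf-8"))
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Each card an LLM extracts from a source document would be turned into
# a payload like this and sent with send(payload):
payload = addnote_payload(
    "What is the forgetting curve?",
    "Ebbinghaus's model of how memory retention decays over time.",
)
```

This skips the interesting part (the card-generation prompt), but it shows how little glue is needed once the cards exist as front/back pairs.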