> 1. The review histories of related cards. Card semantics allow us to identify related cards. This enables memory models to account for the review histories of all relevant cards when estimating a specific card’s retrievability.
> 2. [...]
I've been thinking that card semantics shouldn't be analyzed at all, just treated as a black box. You can get so much data from even a few users of a flashcard deck that you could build your own map of the relationships between cards, simply by noticing which ones fail or pass together over time. Package that map with the deck and the scheduler might get a lot smarter.
That map could give you good info on which cards were redundant, too.
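Roughly what I have in mind, as a toy sketch (the thresholds are made up): count how often pairs of cards lapse in the same review session, and keep the pairs that fail together more often than chance as the map.

```python
from collections import Counter
from itertools import combinations

def build_card_map(sessions, min_support=5, lift_threshold=1.3):
    """Infer related cards purely from co-failure statistics.

    sessions: iterable of sets of card ids failed in one review session.
    Returns {(card_a, card_b): lift} for pairs failing together more than chance.
    """
    fail_counts = Counter()
    pair_counts = Counter()
    n_sessions = 0
    for failed in sessions:
        n_sessions += 1
        fail_counts.update(failed)
        pair_counts.update(combinations(sorted(failed), 2))

    card_map = {}
    for (a, b), both in pair_counts.items():
        if both < min_support:
            continue
        # Lift: P(a and b fail together) / (P(a fails) * P(b fails)).
        lift = both * n_sessions / (fail_counts[a] * fail_counts[b])
        if lift > lift_threshold:
            card_map[(a, b)] = lift
    return card_map
```

Pairs with high lift are candidates for "related"; pairs that always pass together despite different wording are candidates for "redundant".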
edit: this may be interesting to someone, but I've also been trying to flesh out a model where agents buy questions from a market, trade questions with each other, and make bets with each other about whether the user will be able to recall the question when asked. Bankrupt agents are replaced by new agents. Every incentive in the system is parameterized by the user's learning requirements.
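A toy version of that betting loop, just to make the incentives concrete (every number and name here is invented):

```python
import random

class Agent:
    def __init__(self, bankroll=100.0):
        self.bankroll = bankroll
        self.bias = random.uniform(0.2, 0.8)  # this agent's naive recall estimate

    def bet(self, card_id):
        """Return (stake, predicted probability that the user recalls the card)."""
        return min(10.0, self.bankroll * 0.1), self.bias

def settle_round(agents, card_id, recalled):
    """Score each agent's bet against the actual review outcome and
    replace bankrupt agents with fresh ones."""
    for i, agent in enumerate(agents):
        stake, p = agent.bet(card_id)
        # Confident correct predictions win the stake; confident wrong ones lose it.
        payoff = stake * (2 * (p if recalled else 1 - p) - 1)
        agent.bankroll += payoff
        if agent.bankroll <= 0:
            agents[i] = Agent()  # bankrupt agents are replaced by new agents
```

A real version would give agents features (card content, review history) instead of a fixed bias, and the "question market" would price questions by how much learning value the surviving agents predict.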
I think you can do both and get even better results. The main limitation is that the same flashcards must be studied by multiple students, which doesn't generally apply.
I also love the idea of the market, you could even extend it to evaluate/write high-quality flashcards.
I think you'd only need a kernel of shared flashcards, because in my mind new cards would quickly find their position after being reviewed a few times, and might displace already well-known cards. I see the process as throwing random cards at students, seeing what's left after shaking the tree, and using that info to teach new students.
The goal, however, would definitely be a single standard but evolving set of cards that described some group of related ideas. I know that's against Supermemo/Anki gospel, but I've gotten an enormous amount of value out of engineered decks such as https://www.asiteaboutnothing.net/w_ultimate_spanish_conjuga....
> I also love the idea of the market, you could even extend it to evaluate/write high-quality flashcards.
It's been my idea to drive conversational spaced repetition with something like this.
My personal interest is more in conceptual knowledge, like math, CS, history, or random blog posts and ideas. It's often the case that different people focus on different things in the same article, so it would be hard to collect even a small number of reviews on a flashcard you want to study.
Since in Anki the "note" is the editing unit, that works for some cloze deletions but not for QA cards (except double-sided ones). A content-aware memory model would allow you to apply "disperse siblings" to any set of cards, independently of whether they were created together in the same editing interface.
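A rough sketch of what that could look like, assuming you have a unit-normalized embedding per card (the similarity threshold is arbitrary):

```python
import numpy as np

def disperse_siblings(due_cards, embeddings, sim_threshold=0.85):
    """Greedy pass over today's queue: postpone any card that is too
    semantically similar to one already kept for today.

    due_cards:  list of card ids due today
    embeddings: {card_id: unit-normalized numpy vector}
    """
    today, postponed = [], []
    for card in due_cards:
        v = embeddings[card]
        if any(float(np.dot(v, embeddings[k])) > sim_threshold for k in today):
            postponed.append(card)  # a semantic sibling is already scheduled today
        else:
            today.append(card)
    return today, postponed
```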
In general, we can think of a spaced repetition system as being (i) Content-aware vs. Content-agnostic and (ii) Deck-aware vs. Deck-agnostic
Content-aware systems care about what you're studying (language, medicine, etc.) while Content-agnostic systems don't.
Deck-aware systems consider each card in the context of the rest of the cards (the "deck") while Deck-agnostic systems consider each card in pure isolation.
Currently, FSRS is both Content-agnostic and Deck-agnostic. This makes it extremely easy to integrate into a spaced repetition system, but it also means the model will underfit a bit.
It is interesting to note that you could in practice optimize separate FSRS models for each deck covering different topics, which would make it Content-aware in a sense. Additionally, "fuzz" is a somewhat Deck-aware feature of the model, in that it exists specifically to reduce interactions between cards in the deck.
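For anyone unfamiliar: fuzz is just a small random multiplier on the computed interval, so cards introduced together drift apart over time instead of always landing on the same day. A sketch of the idea (Anki's actual fuzz ranges are different):

```python
import random

def fuzz_interval(interval_days):
    """Multiply the scheduled interval by a small random factor so that
    cards introduced together stop landing on the same day.
    Illustrative only; Anki's real fuzz ranges differ."""
    if interval_days < 3:
        return interval_days  # very short intervals are typically left alone
    return max(1, round(interval_days * random.uniform(0.95, 1.05)))
```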
Using decks to draw semantic boundaries is likely overly constraining; I think we want to account for finer differences between cards. Decks are coarse, and people differ in how they use them; some people recommend having just one global deck. Notes are too fine. We explored something in between: a note capturing an idea or concept, plus an associated set of cards. It turns out it's hard to draw idea boundaries. That's why I think it's easier to relate cards by semantic embeddings, or by more rigid but clearer structures like the DAG of dependencies suggested elsewhere in this thread.
I was working in a detail-rich context, with a lot of items about which there were a lot of facts that mostly didn't change, but only mostly. Getting a snapshot of these details into approximately everyone's head seemed like a job for spaced repetition, and I considered making a shared Anki deck for the company.
What wasn't clear was how to handle those updates. Just changing the deck in place feels wrong for those who have been using it: they're remembering correctly; it's the cards that have changed.
Deprecating cards that are no longer accurate but which don't have replacement information was a related question. It might be worth informing people who have been studying that card that it's wrong now, but there's no reason to surface the deprecation to a person who has never seen the card.
Is there an obvious way to use standard SRS features for this? A less obvious way? A system that provides less standard features? Is this an opportunity for a useful feature for a new or existing system? Or is this actually not an issue for some reason I've missed?
Then placing a "this has changed" notification card at the front of the new queue only for people who learned the old information is as simple as checking the corresponding card's review status in the database.
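A sketch of that check, against a hypothetical reviews table (sqlite-style):

```python
import sqlite3

def needs_change_notice(db: sqlite3.Connection, user_id: int, old_card_id: int) -> bool:
    """True if this user has review history on the old card, i.e. they learned
    the outdated information and should see the "this has changed" card.
    Schema is hypothetical: reviews(user_id, card_id, ...)."""
    row = db.execute(
        "SELECT 1 FROM reviews WHERE user_id = ? AND card_id = ? LIMIT 1",
        (user_id, old_card_id),
    ).fetchone()
    return row is not None
```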
1. Identify knowledge blocks that you want people to learn. This is what would be tracked with the SRS.
2. Create cards, with a prompt which requires knowledge blocks to answer. Have the answers in this system feed back knowledge to the SRS.
3. When one of the knowledge blocks changes, take the previous knowledge familiarity and count that against the user.
So for example, at some point a card might be "Q. What effect will eating eggs have on blood cholesterol? A. Raise it." That would be broken down into two knowledge blocks: "Cholesterol content of eggs" and "Effect of dietary cholesterol on blood cholesterol".
At some point you might change that card to "Q. What effect will eating eggs have on blood cholesterol? A. None, dietary cholesterol typically doesn't affect blood cholesterol." (Or maybe we're back again on that one.)
The knowledge blocks would be the same, but you'd have to take the existing time studied on the "Effect of dietary cholesterol on blood cholesterol" and mark it against recall rather than towards recall. Someone who'd never studied it would be expected to learn it at a certain pace; but someone who'd studied the old value would be expected to have a harder time -- to have to unlearn the old value.
I think you could probably hack the inputs to the existing FSRS algorithm to simulate that effect -- either by raising the difficulty, or by adding negative views or inputs. But ideally you'd take a trace of people whose knowledge blocks had changed, and account for unlearning specifically.
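One crude way you might hack those inputs, sketched against an FSRS-like card state (the field names are illustrative, not FSRS's actual API):

```python
def apply_unlearning_penalty(card_state, difficulty_bump=1.5):
    """Crude simulation of unlearning with an FSRS-like card state:
    shrink stability (forget the old value fast) and raise difficulty
    above what a brand-new card would get (the old value interferes)."""
    card_state["stability"] = min(card_state["stability"], 1.0)
    card_state["difficulty"] = min(10.0, card_state["difficulty"] + difficulty_bump)
    card_state["due_in_days"] = 0  # resurface immediately with the corrected answer
    return card_state
```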
I love Anki and used it before when I needed to memorize things, but would love to know what other options on the market exist.
When I was studying Japanese, I kept thinking that it's always best to learn words in sentences, and that it would be good if the sentences for a particular word were randomized.
Extending that, the sentences could be picked such that the other words are ones already scheduled for today, meaning much more bang for the buck per learning hour.
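A rough scoring sketch of what I mean (the weights are made up):

```python
def sentence_score(sentence_words, target_word, due_today, known_words):
    """Score a candidate example sentence for `target_word`.
    Other words that are due today give 'free' extra reviews; unknown
    words add comprehension cost."""
    others = [w for w in sentence_words if w != target_word]
    due_hits = sum(1 for w in others if w in due_today)
    unknown = sum(1 for w in others if w not in known_words)
    return 2.0 * due_hits - 1.0 * unknown

def pick_sentence(candidates, target_word, due_today, known_words):
    """candidates: list of tokenized sentences (lists of words)."""
    return max(candidates, key=lambda s: sentence_score(s, target_word, due_today, known_words))
```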
>Extending that, the sentences could be picked such that the other words are ones already scheduled for today, meaning much more bang for the buck per learning hour.
Just the other day I was thinking about how there’s a good chunk of vocab that could be “mined” from the sentences in my vocab deck.
I think that this idea would work well, but would probably require a whole new SRS program to be able to implement it cleanly. It’s too dynamic for a traditional SRS app like Anki which is pretty static in nature.
It has Anki integration or its own SM2 flashcards app (soon FSRS). And it passively collects a personal corpus of sentences from any web/ebook material you open (manga up next).
I plan to add more sophisticated sync between reading and reviewing, such that cards can be based more dynamically on relevant personal corpus content, and such that reading (on the web or in books, outside flashcards) would auto-review any flashcards you have (or create in the future).
There really shouldn’t be any difference between encountering a word in something you’re reading and reviewing it on a flashcard. And it would be nice to revisit reading material with guidance from FSRS, to find N+1 sentences for learning new words and to find excerpts containing words that are due for review.
So optimizing the algorithm such that every card comes at the exact right moment might cause all cards to feel uniformly hard (or uniformly easy). I think having a mix of difficult and easy cards is actually a feature, not a bug.
Choose your SRS algorithm to best predict what a user knows and when they’re likely to forget it.
If your application decides that it wants to throw some softballs, that’s an application level decision. If you care about psychology and motivation, build a really good algorithm for that. Then blend SRS with motivation as desired.
AnkiMorphs[1] will analyze the morphemes in your sentences and, taking into account the interval of each card as a sign of how well you know each one, will re-order your new cards to, ideally, present you with cards that have only one unknown word.
It doesn't do anything to affect FSRS directly (it only changes the order of new, unlearned cards), but in my experience it's so effective at shrinking the time from new card to stable/mature that I'm not sure how much more it would help to have FSRS intervals adjusted in this particular domain.
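For anyone curious, the core reordering idea is simple to sketch (this is the concept, not AnkiMorphs' actual code):

```python
def reorder_new_cards(new_cards, card_morphs, known_morphs):
    """Put new cards with exactly one unknown morpheme first.

    card_morphs:  {card_id: set of morphemes appearing on the card}
    known_morphs: morphemes whose cards have an interval above some cutoff
    """
    def unknowns(card):
        return len(card_morphs[card] - known_morphs)
    # Distance from the ideal "one unknown" target, ties broken by fewer unknowns.
    return sorted(new_cards, key=lambda c: (abs(unknowns(c) - 1), unknowns(c)))
```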
See how it's applied to Japanese learning here: https://elldev.com/feed/grsly
Some questions / comments / suggestions:
1. Is there a way to import vocab / kanji from Wanikani? WK is quite popular and has a good API. Bunpro integrates nicely with it, where it will or won't show furigana for kanji in the example sentences based on whether you've already learned the word in Wanikani. I'm guessing in your case you'd just want to import all the vocab. Even though I did the placement test, Grsly is still trying to teach me basic vocab like uta and obaasan. This is slowing down my progress through the grammar points.
2. Similar to question 1, is there a way to import grammar progress from Bunpro? Or even just click a button and have it assume I know everything from N5. The placement test only seemed to test a handful of basic grammar points.
3. Some of the sentences it has generated are quite awkward, like "ironna musume" ("all kinds of my daughter"). I guess that's grammatically correct, but it seems pretty unlikely to show up anywhere in real life. Have you considered using a local/small LLM to score or bias the example sentence generation? It's possible to constrain an LLM to only generate output that matches a grammar. You could construct such a grammar for each nontrivial element in your deck, with the vocab currently available for use. I guess you'd have to change the answer in your FAQ if you started using AI.
2. This is more challenging, as there's very often not a 1-to-1 relationship between grammar points.
3. I have a branch on the hsrs GitHub that changes the sampling to be prefix-order so an LLM can guide it, with mixed results. There's a tension between picking common outputs and picking the output that will maximize your increase in retention across multiple cards. That being said, 色んな娘 is definitely me forgetting to tag 娘 as non-attributive (like pronouns); I'll fix that. You can read about the mechanisms I have to keep the content as natural as possible here: https://github.com/satchelspencer/hsrs/blob/main/docs/deck-c...
However, individual grammar outputs aren't their own cards; you get a fresh example every time you see a card. This requires a very different scheduling approach, since you have to estimate how all the cards in the 'call tree' contribute to the overall result and reschedule them as well: https://github.com/satchelspencer/hsrs/blob/main/docs/overvi...
I've incorporated many different things into the SRS, from vector embeddings to graph-based association to lemma clustering to morpheme linking, and was surprised by how many of them I took out.
Most of the unlocks with the SRS have been more in application space. Doing reviews with Anki feels like a chore, and I'm always counting down the reviews left to do. Reviews with Phrasing, however, are much more addictive, and I routinely spend an extra 30+ minutes in that "ok, just one more card" loop.
We will never be able to know with 100% certainty how well you know a card, but FSRS gets us darn close. I think the interesting stuff is less about improving that metric, and more about what can you do with that information.
Thanks to the whole FSRS team btw (I assume y’all will be reading this hn post) <3
And if anyone is curious I wrote up a bit about my SRS here: https://phrasing.app/blog/humane-srs
Seriously, thank you for everything you've done. You've created something truly great :)
Can't help but repeat this old joke: A guy bought a gym membership for 6 months, and paid $1000. But he was lazy (like most of us are) and never, or very rarely, went to the gym; he never felt like going there. After 6 months he realized he had wasted $1000. So he thought maybe if he bought the equipment himself he could and would exercise at home. He bought the equipment for $1000, but then he rarely went home. Didn't feel like it :-)
Yeah, you hear this a lot on Fitness YouTube -- the best workout is the one you actually do. With language, it's all about practice -- the best study method is the one you actually do.
There's a lot of UX work to do for SRS. Do you have a sense of how well the ideas behind Humane SRS translate outside of language learning? I imagine the main challenge would be identifying a steady influx of new cards.
I agree that gains in scheduling accuracy are fairly imperceptible for most students. That's why, over the past few years building https://rember.com, we've focused on UX rather than memory models. People who review hundreds of cards a day definitely feel the difference; doing 50 fewer reviews per day is liberating. And now that LLMs can generate decent-quality flashcards, people will build larger and larger collections, so scheduler improvements might suddenly become much more important.
Ultimately, though, the biggest advantage is freeing the SRS designer. I'm sure you've grappled with questions like "is the right unit the card, the note, the deck, or something else entirely?" or "what happens to the review history if the student edits a card?". You have to consider how review UX, creation/editing flows, and card organization interact. Decoupling the scheduler from these concerns would help a ton.
I agree most people's collections get unwieldy and something needs to be done, so props to Rember! I take the opposite approach: instead of helping people manage large collections, I try to help people get the most out of small collections. This sort of thing is not possible in most fields outside of languages (I don't think so, though I can't say I've given it any real thought).
For example, the standard tier in Phrasing is 40 new Expressions per month. This should result in 2,000-3,500 words in a year, which would be a pretty breakneck pace for most learners, and is considered sufficient for fluency. Of course, users can learn Expressions other users have created for free, or subscribe to higher tiers, or buy credits outright, but it's often not needed.
Indeed Phrasing does not really use the idea of "cards"; we reconstruct pseudo-cards based on the morphemes, lemmas, and inflections found within the Expression. So "cards" are indeed not the boundary I use.
First, in the section "Expressions are flashcards on steroids", the flavor text on each element (Translations, Audio, etc) is identical.
Next, I look at the pricing and get one idea. Then when I create an account and go to upgrade, I see completely different pricing options. It's not that I care so much about the options, but it kind of worries me!
At one point I swear I saw the phrase "Say something about comprehensible input" instead of an explanation of CI, and the sentence itself was duplicated, but now I don't see it. Maybe you're making changes to this landing page live? It _is_ a nice landing page, to be sure.
Overall, I think it looks really cool and I'm interested in trying it out but just a little nervous at the moment.
The “say something about comprehensible input” was indeed a funny copy issue I found a few weeks ago. edit: found and fixed! original: I thought I had fixed it, though; there must be a screen size that needs to be updated. I'll look for it, but it's a Framer website so I can't grep. Let me know if you find it again!
Indeed I just launched the new page with the new pricing. I have two major tasks this week, the second of which is to update the pricing flow to match the new prices on the home page.
It’s a one man show and fully bootstrapped, so apologies about the disarray. Everything takes a month or two to migrate when you do all the design, marketing, engineering, support, and bug fixes yourself!
EDIT: Both the flavor text and the "say something about CI" have been fixed. The upgrade flow will take a few days. I am planning to grandfather everyone who signs up for the old plan ($10/mo) into the new plan ($20/mo) at the old price :)
It does look great, so kudos!
Just a friendly heads up, I’m on mobile and I noticed that the burger menu doesn’t work.
(iPhone 13)
Otherwise - awesome work
Thanks for the report and thanks for the kind words :)
I wonder if the author has ever considered reaching out to makers of Anki decks used by premeds and medical students like the AnKing [1]. They create Anki decks for users studying the MCAT and various Med School curricula, so have a) relatively stable deck content (which is very well annotated and contains lots of key words that would make semantic grouping quite easy) b) probably contains loads of statistics on user reviews (since they have an Anki addon that sends telemetry to their team to make the decks better IIRC), and c) contains incredibly disparate information (all the way from high-school physics to neurochemistry).
---
Also, having used those decks in the past, downloaded the add-on, and looked at the monetization structure of developers like the AnKing, I would be very surprised if aggregate data on review statistics weren't collected in some way. I.e., if the AnKing is collecting this data already to design better decks and understand which cards are the hardest (probably to target individual support), then I imagine that collecting some anonymized version of that data wouldn't be too much of a stretch.
Plus, considering that the developers of AnKing-style decks are all doctors, they probably have a pretty good grasp of handling PII and could (hopefully) make sound decisions about whether to give you access :)
One thing that becomes very obvious very quickly is that all cards derived from the same piece of information should be treated as a group. The last thing you'd want is to see "a cow / ?" quickly followed by "una mucca / ?". This is just pointless.
So while I appreciate the in-depth write-up by the author, I must say that its main insight - that the scheduling needs to account for the inter-card dependencies - lies right there on the surface. The fact that Anki doesn't support this doesn't make it any less obvious.
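The grouping itself is trivial once each card knows which piece of information it came from, which is exactly what two separate notes like "a cow" and "una mucca" can't express. A sketch:

```python
def split_queue(due_cards, source_of):
    """Keep at most one card per source item in today's queue and push the
    rest to tomorrow, so "a cow / ?" is never followed by "una mucca / ?".

    source_of: {card_id: id of the piece of information it was derived from}
    """
    seen, today, postponed = set(), [], []
    for card in due_cards:
        src = source_of[card]
        if src in seen:
            postponed.append(card)
        else:
            seen.add(src)
            today.append(card)
    return today, postponed
```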
I put together and built a Raycast extension for Anki for this reason: https://www.raycast.com/anton-suprun/anki
Raycast's keyboard-friendly UI makes Anki a lot more fun and frictionless.
If you use a Mac - give it a go.
Apologies if some features are missing - I’ve been procrastinating on patching it with some requested features
Side note: if anyone else is reading this and likes it - contributors are welcome.
There are a lot of improvements to be made, and I could use the help, as I'm quite busy with a few other projects at the moment.
What ends up happening is that I have two similar cards mixed up. For the first card I take a 50/50 guess and get it right. Then for the second card I get it correct by process of elimination instead of having to take another 50/50. This results in the system incorrectly thinking I knew the second card.
ran3000•6mo ago
I believe this technical shift in how SRS models the student's memory won't just improve scheduling accuracy but, more critically, will unlock better product UX and new types of SRS.
IncreasePosts•6mo ago
I have a script for it, but am basically waiting until I can run a powerful enough LLM locally to chug through it with good results.
Basically like the knowledge tree you mention towards the end, but the attempt is to create a knowledge DAG by asking an LLM "does card (A) imply knowledge of card (B), or vice versa?". Then, take that DAG and use it to schedule the cards in a breadth-first ordering. So, when reviewing a deck with a lot of new cards, I'll be sure to get questions like "what was the primary cause of the Civil War?" before I get questions like "who was the Confederate general who fought at Bull Run?"
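The scheduling part is then just a Kahn-style breadth-first pass over the DAG the LLM produced. A sketch (cards caught in a cycle never reach indegree 0 and are simply dropped here):

```python
from collections import defaultdict, deque

def breadth_first_order(cards, implications):
    """Kahn-style BFS over the knowledge DAG.

    implications: (a, b) pairs meaning knowledge of card b implies
    knowledge of card a, so a should be scheduled first.
    """
    children = defaultdict(list)
    indegree = {c: 0 for c in cards}
    for a, b in implications:
        children[a].append(b)
        indegree[b] += 1

    queue = deque(c for c in cards if indegree[c] == 0)  # broad root questions first
    order = []
    while queue:
        c = queue.popleft()
        order.append(c)
        for child in children[c]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    return order
```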
ran3000•6mo ago
What I like about your approach is that it circumvents the data problem. You don't need a dataset with review histories and flashcard content in order to train a model.
jarrett-ye•6mo ago
GPT-4 can probably estimate whether two flashcards are functionally equivalent
https://notes.andymatuschak.org/zJ7PMGzjcgBUoPjLUHBF9jn
GPT-4 can probably estimate whether one prompt will spoil retrieval of another
https://notes.andymatuschak.org/zK9Y15pCnRMLoxUahLCzdyc
gwd•6mo ago
I've got a system for learning languages that does some of the things you mention. The goal is to recommend content for a user to read which combines 1) an appropriate level of difficulty and 2) usefulness for learning. The idea is to have the SRS built into the system, so you just sit and read what it gives you, and reviewing old words and learning new words (chosen by frequency) happens automatically.
Separating the recall model from the teaching model as you say opens up loads of possibilities.
Brief introduction:
1. Identify "language building blocks" for a language; this includes not just pure vocabulary, but the grammar concepts, inflected forms of words, and can even include graphemes and what-not.
2. For each building block, assign a value -- normally this is the frequency of the building block within the corpus.
3. Get a corpus of selections to study. Tag them with the language building blocks. This is similar to Math Academy's approach, but while they have hundreds of math concepts, I have tens of thousands of building blocks.
4. Use a model to estimate the current difficulty of each word. (I'm using "difficulty" here as the inverse of "retrievability", for reasons that will be clear later.)
5. Estimate the delta of difficulty of each building block after being viewed. Multiply this delta by the word value to get the study value of that word.
6. For each selection, calculate the total difficulty, average difficulty, and total study value. (This is why I use "difficulty" rather than "retrievability", so that I can calculate total cognitive load of a selection.)
Now the teaching algorithm has a lot of things it can do. It can calculate a selection score which balances study value, difficulty, as well as repetitiveness. It can take the word with the highest study value, and then look for words with that word in it. It can take a specific selection that you want to read or listen to, find the most important word in that selection, and then look for things to study which reinforce that word.
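For step 6, the per-selection aggregation is straightforward once the per-building-block numbers exist; a sketch with my own (hypothetical) names, not the production code:

```python
def selection_stats(blocks, difficulty, value, delta):
    """Aggregate per-building-block numbers into per-selection numbers.

    blocks:     building blocks tagged in the selection
    difficulty: {block: current difficulty (inverse retrievability)}
    value:      {block: frequency weight in the corpus}
    delta:      {block: expected difficulty drop if the block is seen now}
    """
    total_difficulty = sum(difficulty[b] for b in blocks)
    avg_difficulty = total_difficulty / len(blocks)
    study_value = sum(delta[b] * value[b] for b in blocks)
    return total_difficulty, avg_difficulty, study_value
```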
You mentioned computational complexity -- calculating all this from scratch certainly takes a lot, but the key thing is that each time you study something, only a handful of things change. This makes it possible to update things very efficiently using an incremental computation [1].
But that does make the code quite complicated.
[1] https://en.wikipedia.org/wiki/Incremental_computing
ran3000•6mo ago
How far along are you in developing the system?
gwd•6mo ago
There's an open beta of the system ported to Biblical Greek here:
https://www.laleolanguage.com
I've got several active users without really having done any advertising; I'm working on revamping the UI and redesigning the website before I do a big push and start advertising. Most of the people using the site have learned Biblical Greek entirely through the system.
There are experimental ports to Korean and Japanese as well, but those (along with the Mandarin port) aren't public yet. The primary missing pieces are:
1. Content -- the system relies on having large amounts of high-quality content. Finding it, tagging it, and dealing with copyright will take some time
2. On-ramp: It works best to help people at the intermediate level to advance. But if you start at an intermediate level, it doesn't know what you know.
Another thread I'm pursuing is exposing the algorithm via API to other language learning apps:
https://api-dev.laleolanguage.com/v1/docs
All of that needs a better funnel. I'll probably post some stuff here once I've got everything in a better state.
(If anyone reading this is interested in the API, please contact me at contact@laleolanguage.com .)