I’ve been working on a passion project called Polymax for the past few years to eliminate the friction of getting new information into spaced repetition systems such as Anki.
Spaced repetition is the most powerful memorization technique I came across while studying Chinese. But even though it’s extremely powerful, continuously creating new cards for everything you’re learning takes a lot of effort.
Since the advent of LLMs, I’ve been working on using them to make spaced repetition less labor-intensive. I wasn’t originally planning to release this in its current form, but now that OpenAI has released its study mode, I figured I’d share it with the HN community to get your thoughts and feedback.
—
My main questions for you:
1. What do you think of the concept overall? Is an AI study buddy that generates flashcards something you would use?
2. I want to monetize this in an ethical way that respects user privacy and data ownership. What monetization models would you suggest given my local-first roadmap?
3. What other features would make this a compelling study tool for you?
I’d really appreciate any insights. Happy to answer any other questions in the comments.
v3lmx•12h ago
1. I like the idea, but I would have two main concerns:
- on the reliability of the information, especially for very specific subjects
- on privacy, if you're using cloud LLMs, even if the data shared with the LLM is most likely not sensitive
2. I would expect a subscription for a service like this, but depending on the pricing of the LLM you use I don't know if you could turn a profit
sjayasinghe•54m ago
The reliability of the information depends on two factors:
* The quality of the project resources you're working with. If you use your actual textbooks from class, the quality will be higher; if you use random articles from the web, it will be worse. You have complete control over the source material the LLM uses as context.
* The specific model you're working with. The system is model-agnostic, so you can bring your favorite model.
On privacy, I plan to make this a local-first application where you bring your own LLM API key from any provider, and the API calls are made directly from your own machine to that provider. Instead of a third-party LLM provider, you can also use a local model running on your machine for maximum privacy. This is also the easiest way to ensure users own their educational material.
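To make the bring-your-own-key idea concrete, here's a minimal sketch of how such a client could work. This assumes an OpenAI-compatible chat-completions API (which both cloud providers and local servers like Ollama expose); the function names are hypothetical and not Polymax's actual code:

```python
def chat_endpoint(base_url: str) -> str:
    """Build the chat-completions URL for any OpenAI-compatible server.

    The same client code works whether base_url points at a cloud
    provider or at a model running on localhost.
    """
    return base_url.rstrip("/") + "/chat/completions"


def make_card_request(source_text: str, model: str) -> dict:
    """Build a request body asking the model to draft flashcards
    from the user's own study material (a hypothetical prompt)."""
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": "Generate question/answer flashcards "
                           "from the provided study material.",
            },
            {"role": "user", "content": source_text},
        ],
    }


# Cloud provider: the user's own key, requests leave only their machine.
cloud_url = chat_endpoint("https://api.openai.com/v1")

# Local model (e.g. Ollama's OpenAI-compatible endpoint):
# nothing leaves the machine at all.
local_url = chat_endpoint("http://localhost:11434/v1")
```

Because every endpoint speaks the same request shape, swapping providers is just a matter of changing the base URL and key, with no middleman server in between.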