frontpage.
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
101•theblazehen•2d ago•22 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
654•klaussilveira•13h ago•189 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
944•xnx•19h ago•549 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
119•matheusalmeida•2d ago•29 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
38•helloplanets•4d ago•38 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
48•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
228•isitcontent•14h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
14•kaonwarb•3d ago•17 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
219•dmpetrov•14h ago•113 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
328•vecti•16h ago•143 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
378•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
487•todsacerdoti•21h ago•241 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
286•eljojo•16h ago•167 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
409•lstoll•20h ago•276 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
21•jesperordrup•4h ago•12 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
87•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
59•kmm•5d ago•4 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
4•speckx•3d ago•2 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
31•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
251•i5heu•16h ago•194 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
15•bikenaga•3d ago•3 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
56•gfortaine•11h ago•23 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1062•cdrnsf•23h ago•444 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
144•SerCe•9h ago•133 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
180•limoce•3d ago•97 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
287•surprisetalk•3d ago•41 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
147•vmatsiiako•18h ago•67 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
72•phreda4•13h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
29•gmays•9h ago•12 comments

Launch HN: Golpo (YC S25) – AI-generated explainer videos

https://video.golpoai.com/
116•skar01•5mo ago
Hey HN! We’re Shraman and Shreyas Kar, building Golpo (https://video.golpoai.com), an AI generator for whiteboard-style explainer videos, capable of creating videos from any document or prompt.

We’ve always turned to video to communicate concepts, because it felt like the clearest way to explain them. But making good videos was time-consuming and tedious: it required planning, scripting, recording, editing, and syncing voice with visuals. Even a 2-minute video could take hours.

AI video tools are impressive at generating cinematic scenes and flashy content, but struggle to explain a product demo, walk through a complex workflow, or teach a technical topic. People still spend hours making explainer videos manually because existing AI tools aren’t built for learning or clarity.

Our solution is Golpo. Its video generation engine produces time-aligned graphics with spoken narration, well suited to onboarding, training, product walkthroughs, and education. It’s fast, scalable, and built from the ground up to help people understand complex ideas through simple storytelling.

Here’s a demo: https://www.youtube.com/watch?v=C_LGM0dEyDA#t=7.

Golpo is built specifically for use cases involving explaining, learning, and onboarding. In our (obviously biased!) opinion, it feels authentic and engaging in a way no other AI video generator does.

Golpo can generate videos in over 190 languages. After it generates a video, you can fully customize the animations by describing, in natural language, the changes you want to see in each motion graphic.

It was challenging to get this to work! Initially, we used a code-generation approach with Manim, where we fine-tuned a language model to emit Python animation scripts directly from the input text. While promising for small examples, this quickly became brittle, and the generated code usually contained broken imports, unsupported transforms, and poor timing alignment between narration and visuals. Debugging and regenerating these scripts was often slower than creating them manually.
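To make the brittleness concrete: a hedged sketch of the kind of static check one might run on LLM-generated Manim scripts before rendering. This is illustrative only, not Golpo's actual pipeline; the allowlist and function name are invented.

```python
import ast

# Hypothetical sanity check for LLM-generated Manim scripts: parse the
# source and flag names imported from manim that a known-good allowlist
# does not contain. Broken imports were one of the failure modes described.
KNOWN_MANIM_NAMES = {"Scene", "Text", "Write", "Create", "FadeIn", "FadeOut"}

def lint_generated_script(source: str) -> list[str]:
    problems = []
    try:
        tree = ast.parse(source)
    except SyntaxError as e:
        return [f"syntax error: {e.msg} (line {e.lineno})"]
    for node in ast.walk(tree):
        if isinstance(node, ast.ImportFrom) and node.module == "manim":
            for alias in node.names:
                if alias.name not in KNOWN_MANIM_NAMES:
                    problems.append(f"unknown manim import: {alias.name}")
    return problems

bad = "from manim import Scene, SuperWipe\nclass S(Scene): pass"
print(lint_generated_script(bad))  # flags SuperWipe
```

A check like this catches hallucinated imports, but not the harder failures the post mentions, such as narration/visual timing drift.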

We also explored training a custom diffusion-based video model, but found it impractical for our needs. Diffusion could produce high-fidelity cinematic scenes, but generating coherent sequences beyond about 30 seconds was unreliable without complex stitching, making edits required regenerating large portions of the video, and visuals frequently drifted from the instructional intent, especially for abstract or technical topics. Also, we did not have the compute to scale this.

Existing state-of-the-art systems like Sora and Veo 3 face similar limitations: they are optimized for cinematic storytelling, not step-by-step educational content, and they lack both the deterministic control needed for time-aligned narration and the scalability for 5–10 minute explainers.

In the end, we took a different path of training a reinforcement learning agent to “draw” whiteboard strokes, step-by-step, optimized for clear, human-like explanations. This worked well because the action space was simple and the environment was not overly complex, allowing the agent to learn efficient, precise, and consistent drawing behaviors.
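As a rough illustration of the setup described above (the post does not publish the actual environment; every name and reward constant here is invented), a toy stroke-drawing environment with a simple action space might look like:

```python
# Toy whiteboard environment: the agent's action is a set of pixels for
# one stroke, and the reward is how much new ink lands on a target
# drawing, with a penalty for stray marks. Purely illustrative.
class WhiteboardEnv:
    def __init__(self, target: set[tuple[int, int]], size: int = 16):
        self.target, self.size = target, size
        self.canvas: set[tuple[int, int]] = set()

    def step(self, stroke: list[tuple[int, int]]) -> float:
        """Draw a stroke (list of pixels); reward newly covered target pixels."""
        new = set(stroke) - self.canvas
        self.canvas |= new
        hits = len(new & self.target)
        misses = len(new - self.target)
        return hits - 0.5 * misses  # penalize stray ink

target = {(x, 5) for x in range(10)}            # a horizontal line to trace
env = WhiteboardEnv(target)
reward = env.step([(x, 5) for x in range(10)])  # a perfect stroke
```

The small, discrete action space is what makes the post's claim plausible: the agent only has to learn where and when to place strokes, not how to synthesize pixels.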

Here are some sample videos that Golpo generated:

https://www.youtube.com/watch?v=33xNoWHYZGA (Whiteboard Gym - the tech behind Golpo itself)

https://www.youtube.com/watch?v=w_ZwKhptUqI (How do RNNs work?)

https://www.youtube.com/watch?v=RxFKo-2sWCM (function pointers in C)

https://golpo-podcast-inputs.s3.us-east-2.amazonaws.com/file... (basic intro to Gödel's theorem)

You can try Golpo here: https://video.golpoai.com, and we will set you up with 2 credits. We’d love your feedback, especially on what feels off, what you’d want to control, and how you might use it. Comments welcome!

Comments

typs•5mo ago
If that demo video is how it actually works, this is a pretty amazing technical feat. I’m definitely going to try this out.

Edit: I've used it. It's amazing. I'm going to be using this a lot.

skar01•5mo ago
Thank you!!
Masih77•5mo ago
I call BS on training an RL agent to literally output strokes. The way each image renders is a dead giveaway that this is just using a text-to-image model, then converting it to SVG, and finally animating the SVG paths. They might even bypass the SVG conversion with clever mask reveals. I was able to achieve the same thing in about 5 minutes. https://giphy.com/gifs/rFVxSxZMlflZUX4TqI
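For context, the "animate the SVG paths" effect Masih77 describes is a well-known trick: set stroke-dasharray and stroke-dashoffset to the path length, then animate the offset to zero so the path appears to draw itself. A minimal sketch (the helper name is invented):

```python
# Generate a standalone SVG whose path "draws itself" using the
# stroke-dasharray/stroke-dashoffset animation trick.
def self_drawing_svg(path_d: str, length: float, seconds: float = 2.0) -> str:
    return f"""<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <path d="{path_d}" fill="none" stroke="white"
        stroke-dasharray="{length}" stroke-dashoffset="{length}">
    <animate attributeName="stroke-dashoffset"
             from="{length}" to="0" dur="{seconds}s" fill="freeze"/>
  </path>
</svg>"""

svg = self_drawing_svg("M10 50 L90 50", 80.0)
```

Whether or not that is what Golpo does internally, this is the standard way the hand-drawn reveal look is faked on the web.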
mclau157•5mo ago
I have used AI in the past to learn a topic, but by creating a GUI with input sliders and an output, so I could see how things change when I change parameters. That could work here: people could basically ask "what if x happens" and see the result, which also makes them feel in control of the learning.
skar01•5mo ago
Thank you!!
skar01•5mo ago
Hey also, if you want to suggest a video, we could try generating one and reply here with a link! Just tell us what you want the video to be about!!
cube2222•5mo ago
Hey, kudos for the product / demo on the website - it managed to keep me engaged to watch it till the end.

I’m mostly curious how it fares with more complex topics and doing actually informative (rather than just “plain background”) illustrations.

Like a video explaining transformer attention in LLMs, to stay on the AI topic?

skar01•5mo ago
Yeah so it actually does pretty well. Here are some sample videos:

https://www.youtube.com/watch?v=33xNoWHYZGA&t=1s

https://www.youtube.com/watch?v=w_ZwKhptUqI

andhuman•5mo ago
Could you do a video about latent heat?
metalliqaz•5mo ago
So... if I had the enterprise accounts for various LLM services, could I dupe this company with a basic upload page and a nice big prompt?
Wolf_Larsen•5mo ago
It's not that simple, but it would be straightforward to duplicate the outputs of this with a simple LLM + ffmpeg workflow. They did mention a custom model on the landing page, and if they've trained one, then you would be spending much more money on each output than they are, because without a fine-tuned model there would be a lot of inference done for QA and refinement of each prompt | clip | frame.
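The assembly step of such a hypothetical "LLM + ffmpeg" workflow really is a single mux once you have rendered frames and a narration track. A sketch of the command (paths and rates are placeholders):

```python
# Build an ffmpeg invocation that muxes an image sequence with a
# narration track into a widely playable MP4.
def mux_command(frames_pattern: str, audio: str, out: str, fps: int = 30) -> list[str]:
    return [
        "ffmpeg",
        "-framerate", str(fps), "-i", frames_pattern,  # image sequence in
        "-i", audio,                                   # narration in
        "-c:v", "libx264", "-pix_fmt", "yuv420p",      # H.264, broad player support
        "-shortest",                                   # stop at the shorter stream
        out,
    ]

cmd = mux_command("frames/%04d.png", "narration.wav", "out.mp4")
```

The hard part of the workflow is everything before this step: generating frames that stay time-aligned with the narration.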
MarcelOlsz•5mo ago
"Custom model" usually translates to "deployed an OSS model and tweaked a few things" like 99% of the time.
Lienetic•5mo ago
I'm curious - do you feel differently about some of these coding and coding-adjacent tools out there like Cursor and Lovable?
metalliqaz•5mo ago
no, not really. I think they are massively over-valued but in the tech world... what else is new? I view those tools as mostly a convenience. They are integrating things into nice easy packages to use. That's the value.

With this... eh. Most people don't need to make more than one or two explainer videos, so are they going to take on a new monthly fee for that? And then there are power users who do it all the time, but almost surely have their own workflow put together that is customized to exactly what they want.

At any point, one of the big players could introduce this as a feature for their main product.

poly2it•5mo ago
The creator tier ($99.99/mo) lists "15 seconds" as a perk. Does this mean the maximum video length is 15 seconds?
bangaladore•5mo ago
Given that the next tier up is "Create longer/more detailed video (up to 4 min long)", I'd guess you are right.

Seems like this is pretty useless unless you pay $200 per month. That may be a reasonable number for the clearly commercial/enterprise use case, but I'm just not certain what you can do with the lower tiers.

skar02•5mo ago
One of the founders here! No, it's not. The max video length is up to 2 min, which is also the case in every non-free tier. We just include a 15-second option for that tier (because people need it for things like FB ads).
poly2it•5mo ago
Maybe clarify it a bit. Eg. "Short 15 second option".
BugsJustFindMe•5mo ago
In the post you talk about 5–10 minute explainers.

What does one do if they want to make a 5-10 minute explainer if the maximum length is 2 minutes?

metalliqaz•5mo ago
My suggestion would be to rethink the demo videos. I have only watched most of the way through the "function pointers in C" example. If I didn't already know C well, I would not be able to follow it. The technical diagrams don't stay on screen long enough for new learners to process the information. These videos probably look fantastic to the person who wrote the document being summarized, but to a newbie the information is fleeting and hard to follow. The machine doesn't understand that the screen shouldn't be completely wiped all the time as it follows the narrative. Some visuals should stay static for paragraphs, or remain visible while detail is marked up around them. For a true master of the art, see 3blue1brown.
bangaladore•5mo ago
> For a true master of the art, see 3blue1brown.

I agree. Rather than (what I assume is) E2E text -> video/audio output, it seems like training a model on how to utilize the community fork of manim which 3blue1brown uses for videos would produce a better result.

[1] https://github.com/ManimCommunity/manim/

albumen•5mo ago
Manim is awesome and I'd love to see that, but it doesn't easily offer the "hand-drawn whiteboard" look they've got currently.
WasimBhai•5mo ago
I have 2 credits but it won't let me generate a video. Founders, if you are around, you may want to debug.
skar02•5mo ago
Huh, that's odd. Could you DM me your email?
skar01•5mo ago
Or just email us at founders@golpoai.com
delbronski•5mo ago
Wow, I was skeptical at first, but the result was pretty awesome!

Congrats! Cool product.

Feedback: I tried making a product explainer video for a tree planting rover I’m working on. The rover looked different in every scene. I can imagine this kind of consistency may be more difficult to get right. Maybe if I had uploaded a photo of how the rover looks it may have helped. In one scene the rover looks like an actual rover, in the other it looks like a humanoid robot.

But still, super impressed!

skar01•5mo ago
Thanks! We are working on the consistency.
KaoruAoiShiho•5mo ago
Did NotebookLM just come out with this? Very tough to compete with google.
empressplay•5mo ago
Can confirm, it creates slides though, not whiteboard animations. Although the slides are in color and have graphs, clipart, etc. (but they are static and the whiteboard drawing is cooler!)

It created an 8 minute video explaining my Logo-based coding language using 50 sources and it was free.

https://www.youtube.com/watch?v=HZW75burwQc

skar01•5mo ago
We have color as well and support graphs and clipart
adi4213•5mo ago
This is neat, but I wasn't able to get it to work ("server overloaded" is what the browser app said). I'd also recommend registering a custom domain in Supabase so the Google SSO shows the golpo domain, which is a small but professional-signaling affordance.
skar01•5mo ago
We will soon! Wanted to get the model working first! Could you try again
ishita159•5mo ago
Planning to add links as input anytime soon?

I would love to add a link to my product docs, upload some images and have it generate an onboarding video of the platform.

skar02•5mo ago
Yes, very soon. We already support this via API and will add to our platform too!
skar01•5mo ago
Our API is currently available to our enterprise customers!
reactordev•5mo ago
This is actually pretty amazing. Not only does it work, it’s good. At least from the demo videos. YMMV.

What I always wanted to do was to teach what I know but I lack the time commitment to get it out. This might be a way…

skar01•5mo ago
Thank you so much!
CalRobert•5mo ago
So it eats concepts and makes videos?

One is reminded of smbc

https://www.seekpng.com/png/detail/213-2132749_gulpo-decal-f...

skar02•5mo ago
Haha! The name actually comes from the Bengali word for "story".
ceroxylon•5mo ago
The generated graphic in the linked demo for "Training materials that captivate" is a sketch of someone looking forlorn while holding a piece of paper. Is there a way to do in-line edits to the generated result to polish out things like this?
skar01•5mo ago
We are working on that. There will ultimately be a storyboard feature where you can edit frame by frame!
nextworddev•5mo ago
Has anyone tried prompting VEO to create these videos
skar02•5mo ago
We have! Veo, I believe, can't do more than 8-second videos, and when prompted, the results aren't very coherent in our experience.
nextworddev•5mo ago
oh had no idea. will try your product
OG_BME•5mo ago
I created a video on the free tier, the shareable link didn't work (404), I upgraded to be able to download it, and it seems to have disappeared? It says "Still generating" in my Library.

The video UUID starts with "f5fbd6c7", hopefully that's sufficient to identify me!

skar02•5mo ago
Sorry about that! I found your video. Should I link it here or DM it to you (can you do DM in Hacker News?) ? You could also email me at shreyas2@stanford.edu, and I can send it there
dang•5mo ago
(No DMs on HN, at least not yet)
OG_BME•5mo ago
Just emailed you! Thanks.
Lienetic•5mo ago
This is really interesting, definitely going to give it a try! Seems fun but are you seeing people actually needing to make lots of videos like this? What's your vision - how does this become really big?
drawnwren•5mo ago
I'm sure someone else has mentioned this but your video on the main page correctly has GRPO the first time it's introduced but then every time you mention it after that -- you've swapped it to GPRO.
tk90•5mo ago
Pretty cool, especially the voice and background music - feels just right.

I asked it about pointers in Rust. The transcript and images were great, very approachable!

"Do not let your computer sleep" -> is this using GPU on my machine or something?

skar01•5mo ago
No! We just had that because we hadn't built the library feature yet and forgot to remove it. Now you can access your videos through there!
subhro•5mo ago
From one Kar to another, দূর্দান্ত গল্প ("fantastic story"). Congratulations.
skar02•5mo ago
Thanks!
albumen•5mo ago
Love it. The tone is just right. A couple of suggestions:

Have you tried a "filled line" approach, rather than "outlined" strokes? Might feel more like individual marker strokes.

I made a demo video on the free tier and it did a great job explaining acoustic delay lines in an accessible fashion, after feeding it a catalog PDF with an overview of the historical artefact and photography of an example unit. Unfortunately the service invented its own idea of what the artefact looked like. Could you offer a storyboard view and let users erase the incorrect parts and sketch their own shapes? Or split the drawing up into logical elements and the user could redraw them as needed, which would then be reused where that element is used in other frames?

skar01•5mo ago
Thank you!! We are actually currently working on the storyboarding feature!!
BoorishBears•5mo ago
Very cool: what output format is the model producing?

Straight vector paths?

dtran•5mo ago
Love this idea! The Whiteboard Gym explainer video seemed really text-heavy (although I did learn enough to guess that that's because text likely beat drawing/adding an image for these abstract concepts for the GRPO agent). I found Shraman's personal story video much more engaging! https://x.com/ShramanKar/status/1955404430943326239

Signed up and waiting on a video :)

Edit: here's a 58s explainer video for the concept of body doubling: https://video.golpoai.com/share/448557cc-cf06-4cad-9fb2-f56b...

addandsubtract•5mo ago
The body doubling concept is something I've noticed myself, but never knew there was a term for it. TIL :)
ActVen•5mo ago
Popup window with "Load Failed" after it had some progress on the bar past 40% or so. Shows up in the library, but won't play. I just deleted it for now.
skar01•5mo ago
Could you try again?
ActVen•5mo ago
Just tried on Chrome instead of safari and it worked this time. Thanks and congrats on the launch!
skar01•5mo ago
Thank you!
meistertigran•5mo ago
Can you share the paper mentioned in the demo video?
trenchpilgrim•5mo ago
I threw the user docs for my open source project in there and it was... surprisingly not terrible!

Note: Your paywall for downloading the video is easily bypassed by Inspect Element :)

My main concern for you is that y'all will get Sherlocked by OpenAI/Anthropic/Google.

mkagenius•5mo ago
Not only the giants: they will face a significant threat from open source too [1]. But they just need to carve out their own user base and be profitable in that space.

1. For example, I have built http://gitpodcast.com which can be run for free. Can also be self hosted using free tier of gemini and azure speech.

ayaros•5mo ago
In the Khan Academy videos I remember watching, an instructor would actually write on a tablet; you'd see each letter get handwritten one by one, in order. Is there no way to get it to do that? What the AI is doing instead is building up the strokes of every character on the line of text all at once, which looks completely unnatural. The awkwardness is compounded by the fact that the letters are outlined, so it takes even more steps to create them.

In addition, the line-art style of the illustrations looks like that same cartoonish-AI-slop style I see everywhere now. I just can't take it seriously.

If this tool is widely deployed it's just going to get used to spread more misinformation. I'm sure it will be great for bad actors and spammers to have yet another tool in their toolbox to spread whatever weird content or messages they want. But for the rest of us, that means search engines and YouTube and other places will be filled with a million AI-generated half-baked inferior copies of Khan Academy. It's already hard enough to find good educational resources online if you don't know where to look, and this will only make the problem worse.

You'll just have to forgive me if I'm not really excited about this tool.

...also the name is a bit weird. It reminds me of "Gulpo, the fish who eats concepts" from that classic SMBC cartoon. (https://www.smbc-comics.com/comic/2010-12-15)

mandeepj•5mo ago
Congrats on the launch!

If I may ask - how do you generate your audio?

raylad•5mo ago
Feedback on the text: I find the way that the text generates randomly across the line very distracting because I (and I think most people) read from left to right. Having letters appear randomly is much more difficult to follow.

Are there options to have the text appear differently?
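A deterministic left-to-right reveal, as several commenters are asking for, is simple to compute: assign each character a start time proportional to its position. A sketch (the function name and timing constant are invented for illustration):

```python
# Schedule each character's reveal time strictly left to right,
# at a fixed writing speed.
def reveal_schedule(text: str, chars_per_second: float = 12.0) -> list[tuple[str, float]]:
    dt = 1.0 / chars_per_second
    return [(ch, i * dt) for i, ch in enumerate(text)]

schedule = reveal_schedule("point")
# 'p' at 0.0s, then each following letter slightly later, in reading order
```

A renderer consuming this schedule would never paint a letter before the one to its left, which is the property readers are implicitly relying on.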

dfee•5mo ago
From the video

> The Al needs to figure out not just what to draw, but precisely when to draw it

;)

sdotdev•5mo ago
I'll try the 1 free generation soon, but the way the text appears randomly in that landing-page demo video is really weird. I keep losing track of where I'm reading, too, as the audio sometimes isn't perfectly synced. The sync isn't that bad, but it could be better.
ks2048•5mo ago
I made it 8 seconds into the "function pointers in C" video and immediately stopped. It went too fast to read the code examples and diagrams. (The second "slide" appears for 1 second... and what is that array it's showing?) If you go back and look at the code (a three-line swap function), it's messed up: no opening bracket, and where is the closing bracket? It's described as "swaps first and last", but hard-coded to length-3 arrays only?

I'm sure AI could help make good animations like this, but this looks like slop.

personjerry•5mo ago
I feel like this is another case of throwing AI in a non-AI-required problem. Khan Academy itself just hired people to make its videos at a very reasonable wage. Why would you need to add AI into the equation? If you wanted to, you could build a platform of basic video / whiteboard content creators at a very reasonable price point.
wordpad•5mo ago
You can't have arbitrary content with a human in the workflow.
personjerry•5mo ago
You can absolutely hire a human to make arbitrary content
wordpad•5mo ago
Humans already make arbitrary content.

It's a question of scale.

Humans could always write things down, but only the printing press changed the world.

dragonwriter•5mo ago
> Humans could always write things down, but only the printing press changed the world.

No, humans couldn't always write things down (there was a whole lot of time that humanity existed and written language didn't), and writing things down changed the world quite a bit long before the printing press.

atleastoptimal•5mo ago
someone needs to do something about the purple darkmode rounded corner tailwind style that has infected all LLMs now.

cool product though!

UltraSane•5mo ago
Impressive. Reminds me of Google NotebookLM's AI-generated podcasts of PDFs.
android521•5mo ago
Do you have a developer api that empowers developers to create explainer videos?
giorgioz•5mo ago
I love the concept, but the implementation in the demo seems not good enough to me. I think the black-and-white demo is quite ugly... 1) Explainer videos are not in black and white. 2) The images are not usually drawn live. 3) Text being drawn on the go is just a fake animation. In reality, most explainer videos show short, meaningful sentences appearing all at once so the user has more time to read.

Keep up refining the generated demo! Best of luck

fxwin•5mo ago
I'm also not the biggest fan of the white-on-black style, but there is definitely precedent (at least in science-youtube-space) for explainer videos "drawn live" [1-4]

[1] https://www.youtube.com/@Aleph0

[2] https://www.youtube.com/@MinutePhysics

[3] https://www.youtube.com/@12tone

[4] https://www.youtube.com/@SimplilearnOfficial

whitepaint•5mo ago
I've tried it and it is really cool. Well done and good luck.
torlok•5mo ago
Going by the example videos, this is nothing like I'd expect a whiteboard video to look like. It fills the slides in erratically, even text. No human does that. It's distracting more than anything. If a human teacher wants to show cause-and-effect, they'll draw the cause, then an arrow, then the effect to emphasize what they're saying. Your videos resemble printing more than drawing.
grues-dinner•5mo ago
It seems really strange that you wouldn't farm this kind of thing out to a non-AI function that animates the text properly into the space given using parameters that the AI generated. I mean it's impressive that it does work at all, let alone as well as it does, but are we also going to get an AI to do all the video encoding as well?
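The split grues-dinner suggests (the model picks the words and the target box; ordinary code does the typesetting) could be as simple as a greedy word-wrap. A minimal sketch, with the per-character advance invented for illustration since real layout would measure actual glyphs:

```python
# Greedy word-wrap: fit text into a pixel-width box given a fixed
# per-character advance. A deterministic stand-in for what a non-AI
# layout function could do with AI-chosen parameters.
def wrap_to_box(text: str, box_width_px: int, char_width_px: int = 10) -> list[str]:
    max_chars = max(1, box_width_px // char_width_px)
    lines, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

lines = wrap_to_box("explain function pointers in C", 160)
```

The point is that once the box and the words are fixed, everything downstream is deterministic, which also sidesteps the random-reveal readability complaints elsewhere in the thread.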
achempion•5mo ago
Where can I find what a credit is? It says 150 credits for the Growth plan but doesn't explain how many credits a single video takes.

p.s. the pricing section is unreadable under the 840px width

snowfield•5mo ago
I want to pay 20usd just to troll my friends with explainer videos on why they're shit at video games :D
ludicrousdispla•5mo ago
that seems like excellent product market fit as the AI generated explainer videos won't even need to be correct, and the more incorrect they are the better the troll
ing33k•5mo ago
it created this video for an app I am working on. https://video.golpoai.com/share/8de80271-1109-48e4-ac52-9265...
qwertytyyuu•5mo ago
The way text appears is so weird; it's like rendering by plotting each letter asynchronously. I wonder how it compares to auto-generated PowerPoint presentations. I suspect it might be worse.
futhey•5mo ago
That worked. Really well!

But, white on black is really ugly. Even black on white or a simple inversion would be an improvement.

I think it could benefit from the ability to pause and see the transcript, and make edits before the video is generated.

Terretta•5mo ago
Chalkboards are white on black. You basically have chalkboards or whiteboards to draw from. Not sure either is landing in the Louvre. Both have their aesthetic uses. I'd imagine chalkboards for academic topics, whiteboards for business, but different ages and cultures will feel differently.
futhey•5mo ago
I bought the color package, enjoying it. It's a personal preference. Hopefully the founders get a variety of feedback and can make a judgement based on multiple datapoints.