frontpage.

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•32s ago•0 comments

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•3m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•6m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•6m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•6m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•6m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
2•juujian•8m ago•0 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•10m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•12m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
1•DEntisT_•14m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•15m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•15m ago•1 comments

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•18m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
4•sakanakana00•21m ago•0 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•23m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•24m ago•1 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•25m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
4•Nive11•26m ago•6 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•29m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
2•chartscout•32m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•35m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•36m ago•1 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•41m ago•1 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•43m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•45m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•45m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•46m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•52m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•58m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•59m ago•1 comments

Is Entertainment Discovery Fundamentally Broken?

2•nicola_alessi•1mo ago
For the last year, I've been obsessed with a problem: finding something to watch is a chore. The interfaces of Netflix, Prime, and others feel like slot machines designed for maximum engagement, not for matching my mood. The "Because you watched..." algorithms create boring feedback loops, and browsing endless rows of posters is inefficient.

This feels like a discovery problem. These platforms are optimization engines for content consumption, not for genuine recommendation. Their goal is to keep you on the service, not to help you find the perfect movie for a rainy Tuesday night.

As a builder, this led me to a prototype (https://lumigo.tv/en-US): what if you could describe your mood or intent in plain language and get a tailored, unbiased shortlist? I've been working on lumigo.tv to test this. The core is an AI agent that you query with requests like "a thought-provoking sci-fi movie from the 90s" or "a cozy British mystery series." It searches a database of titles and returns matches with ratings and where to stream them.
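To make the shape of that interaction concrete, here is a minimal sketch of the query-to-shortlist contract. The Recommendation fields, the recommend() function, and the stand-in catalog are illustrative assumptions, not lumigo.tv's actual code.

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        """One entry in the returned shortlist."""
        title: str
        year: int
        rating: float             # aggregate score, e.g. 0-10
        streaming_on: list[str]   # services where the title is available

    # Tiny stand-in catalog; the real service queries a full title database.
    CATALOG = [
        Recommendation("Example Title A", 1997, 7.8, ["example-service"]),
        Recommendation("Example Title B", 1998, 7.6, ["example-service"]),
    ]

    def recommend(prompt: str, limit: int = 5) -> list[Recommendation]:
        """Map a plain-language request to a ranked shortlist.

        The real system first parses the prompt into structured filters;
        here we just rank the stand-in catalog by rating.
        """
        return sorted(CATALOG, key=lambda r: r.rating, reverse=True)[:limit]

    print(recommend("a thought-provoking sci-fi movie from the 90s"))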

The technical hypothesis is that a conversational, intent-based search can cut through the noise better than collaborative filtering or genre rows. No ads in results, no promoted titles—just a direct query-to-match engine.

My question to HN isn't about the specific tool, but the broader principle:

Is the dominant "infinite scroll of posters" model the end-state for discovery, or is it a legacy UI that we've just accepted?

Can a neutral, conversational interface ever compete with the billion-dollar optimization of platform-native algorithms?

What would a technically ideal discovery layer look like? Would it be a meta-layer across all services (like a better JustWatch), or is deep integration with one platform's catalog necessary?

I'm sharing this not for feedback on the site itself, but to discuss the architecture of discovery. Is solving the "what to watch" problem more about better data, a better interface, or changing the fundamental incentives away from engagement maximization?

Comments

neeksHN•1mo ago
They need to find a way to reinvent "channel surfing". Discovery via "flipping" has led me to watch things I'd otherwise never have clicked on in an app interface.

I've always been surprised that Netflix, and other services, don't create "live channels" (e.g. a "The Office" channel) of their libraries.

nicola_alessi•1mo ago
This is a fantastic point, and you've hit on something fundamental that's been lost in the shift to on-demand: the joy of discovery through serendipity and low commitment.

You're describing the exploration/exploitation trade-off in a very concrete way. Algorithmic recommendations are pure exploitation (based on your known likes). Endless scrolling is a frustrating middle ground. But "channel surfing" or "flipping" was a form of low-stakes exploration. You weren't making a choice to invest 90 minutes; you were dipping in for 30 seconds. If it didn't grab you, there was zero cost to leaving, which is psychologically liberating and led to finding unexpected gems.

Netflix's "Play Something" button and "Shuffle Play" for shows like The Office are direct, if clumsy, acknowledgments of this need. But you're right, why not a live "80s Action" channel or an "A24 Indie" channel? The technical barrier is near-zero.

Our take at lumigo.tv is that the modern equivalent shouldn't be tied to a linear broadcast schedule. The core experience to replicate is the low-friction, zero-commitment sampling.

One experiment we're considering is a "Mood Stream": you pick a vibe ("Cult Classic," "Mind-Bending Sci-Fi," "90s Comfort"), and it starts a never-ending, autoplaying stream of trailers or key 2-minute scenes from films in that category. You lean back and "flip" with a pause button. If a clip hooks you, you click to see the full title and where to stream it. It’s on-demand channel surfing.
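A rough sketch of how that stream could be assembled, assuming a catalog that is already tagged by vibe and has trailer URLs on hand; the mood_stream() name, the tags, and the URLs here are hypothetical, not a working feature.

    import random
    from typing import Iterator

    # Hypothetical pre-tagged catalog: vibe -> list of (title, trailer_url).
    TAGGED_CATALOG = {
        "90s comfort": [
            ("Example Title A", "https://example.com/trailer-a"),
            ("Example Title B", "https://example.com/trailer-b"),
        ],
    }

    def mood_stream(vibe: str) -> Iterator[tuple[str, str]]:
        """Yield an endless, shuffled stream of (title, trailer_url) clips.

        The player autoplays each clip; the viewer pauses when something
        hooks them and clicks through to the title and where to stream it.
        """
        clips = list(TAGGED_CATALOG.get(vibe.lower(), []))
        while clips:
            random.shuffle(clips)
            yield from clips  # reshuffle and loop forever

    # Lean-back usage: sample a few clips of the "90s comfort" channel.
    stream = mood_stream("90s comfort")
    for _ in range(3):
        print(next(stream))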

The UI challenge is huge—how do you make it feel effortless, not just another menu? But your comment validates that solving this might be more valuable than another slightly-better recommendation algorithm. Thanks for this; it’s a much clearer design goal than “better search.”

mttpgn•1mo ago
Your site has a search bar for typing in a full prompt to an LLM about what my current mood is, and I just find it interesting that one's mood is the important thing for your users to supply as input to your service. For me, unless a major event has taken place, I usually don't take time to think much about my mood beyond one or two words. If I've been on a journaling kick I'll usually write about the concrete experiences of the day as a proxy for describing my mood without actually getting to what this means for my energy levels/affectations, etc. The mood descriptors I do recognize in myself (e.g. kinda sad!) generally factor little into my content consumption decisions (at least consciously). More important to me are questions like "What are folks talking about? (driving discourse online or at the office)", "Which movies have been recommended to me (by friends/family or by advertising)", and "What's accessible? (On a service I already subscribe to without needing an additional purchase)".
nicola_alessi•1mo ago
Your point is excellent and cuts to the core of what we're trying to explore. You're right, "mood" can be a fuzzy, high-friction starting point.

The hypothesis behind the prompt isn't that everyone consciously identifies a mood. It's more that "mood" is a useful shorthand for a complex set of preferences at a given moment. When you think, "I want something mindless and funny after that long meeting," that's a mood proxy. The goal of the open-ended prompt is to capture that full sentence, not just the one-word label.

You've identified the three major discovery engines that dominate today:

Social Proof ("What are folks talking about?") Direct Recommendation ("What was recommended to me?") Access & Friction ("What's on my services?"). These are powerful because they require zero cognitive effort from the user. You're reacting to signals. Our experiment is asking: what if you reversed the flow? What if you started with your own internal state—even if vaguely defined as "kinda sad" or "need distraction" and used a model to map that to a title? It's inherently more work, which is its biggest hurdle.

The interesting technical challenge is whether an LLM can act as a translator between your messy, human input ("just finished a complex project, brain fried, want visual spectacle not dialogue") and the structured metadata of a database (genres, pacing, tone, plot keywords). It's not about mood detection; it's about intent parsing. A future iteration might not ask for a mood at all, but simply: "Tell me about your day." The model's job would then be to infer the desired escapism, catharsis, or reinforcement from the narrative. Would that feel more natural, or just more invasive?
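To make that intent-parsing step concrete, here is one way the translator could be sketched. The schema fields and the stubbed call_llm() are illustrative assumptions, not a description of the live pipeline.

    import json

    INTENT_PROMPT = """You are a query translator for a movie database.
    Convert the user's message into JSON with exactly these keys:
    genres (list), tone (list), pacing (string), era (string or null),
    keywords (list). Do NOT name any movies. Output JSON only.

    User message: {message}
    """

    def call_llm(prompt: str) -> str:
        """Stand-in for a real LLM call; returns a canned response here."""
        return json.dumps({
            "genres": ["science fiction", "action"],
            "tone": ["spectacle", "low-dialogue"],
            "pacing": "fast",
            "era": None,
            "keywords": ["visual effects"],
        })

    def parse_intent(message: str) -> dict:
        """Translate messy human input into structured, searchable filters."""
        raw = call_llm(INTENT_PROMPT.format(message=message))
        return json.loads(raw)  # downstream search only ever sees these fields

    print(parse_intent("just finished a complex project, brain fried, "
                       "want visual spectacle not dialogue"))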

We're early, and you've nailed the key tension. Does discovery work best when it's passive (social/algorithmic feeds) or active (intent-driven search)? The former is easy; the latter might be more satisfying if we can reduce the friction enough. Thanks for giving me a much better way to frame this.

pbasp•1mo ago
Personally I don't believe much in AI recommendations. The problem is the data. AI isn't magic; if the AI doesn't have the data, it will hallucinate it. I've discussed my movie tastes with ChatGPT and asked it to give me recommendations... At first it was quite an interesting conversation, but it couldn't go very far, because it knows a lot of details about the blockbuster movies but strictly nothing about the remaining 98% of movies. In comparison, collaborative filtering has access to way more data.
nicola_alessi•1mo ago
You are 100% correct, and this is the central limitation. An LLM like ChatGPT, trained on general web text, is a terrible movie recommendation engine for exactly the reasons you state. Its knowledge is broad but shallow, skewed toward popular discourse, and it will happily confabulate titles.

Our approach with lumigo.tv is different by necessity, and it's a direct response to the problem you've nailed. We don't use an LLM for knowledge.

Here's the technical split:

The LLM is strictly a query translator. Its only job is to take your messy, natural language prompt ("a gloomy noir set in a rainy city") and convert it into a structured set of searchable tags, genres, and metadata filters. It is forbidden from generating or hallucinating movie titles, actors, or plots.

The recommendations come from a structured database. Those translated filters are executed against a traditional database of movies/shows (we've integrated with TMDB and similar sources). The results are ranked by existing metrics like popularity, rating, and release date. The LLM never invents a result; it can only return what exists in the connected data.

You're right that pure collaborative filtering (like Netflix's) has a massive data advantage for mainstream tastes. Where it falls short is for edge cases and specific intent. If you want "movies like the third act of Parasite," a collaborative filter has no vector for that. Our hypothesis is that a human can describe that intent, an LLM can map it to tags (e.g., "class tension," "thriller," "dark comedy"), and a database can find matches.
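For the retrieval half, a minimal sketch of what executing those filters could look like, assuming TMDB's /discover/movie endpoint with its with_genres / with_keywords parameters. The filter shape is an assumption, and note TMDB expects numeric genre/keyword ids rather than free-text tags.

    import os
    import requests

    TMDB_API_KEY = os.environ["TMDB_API_KEY"]  # assumed to be configured

    def discover_movies(filters: dict) -> list[dict]:
        """Run translator-produced filters against TMDB's catalog.

        `filters` is the structured output of the query-translation step,
        e.g. {"genre_ids": [53, 80], "keyword_ids": [9826]}. Mapping
        free-text tags like "rainy city" to TMDB genre/keyword ids is a
        separate lookup step, elided here.
        """
        params = {
            "api_key": TMDB_API_KEY,
            "sort_by": "popularity.desc",
            "with_genres": ",".join(map(str, filters.get("genre_ids", []))),
            "with_keywords": ",".join(map(str, filters.get("keyword_ids", []))),
        }
        resp = requests.get(
            "https://api.themoviedb.org/3/discover/movie",
            params=params,
            timeout=10,
        )
        resp.raise_for_status()
        # Every result comes from TMDB; the LLM never invents a title.
        return resp.json().get("results", [])

This is why the hallucination risk stays on the translation side; the retrieval side can only surface rows that already exist.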

So, it's not AI vs. collaborative filtering. It's AI as a natural-language front-end to a traditional database. The AI handles the "what I want" translation; the database handles the "what exists" retrieval. This avoids the hallucination problem but still allows for queries that a "Because you watched..." algorithm could never process.

Does that distinction make sense? It's an attempt to use each tool for what it's best at.

pbasp•1mo ago
Yes, it does make sense, and it's a very interesting approach. So if you ask "a gloomy noir set in a rainy city" it'll translate into TMDB Keywords? I doubt that the TMDB Keywords have that depth (yet another data problem). How do you translate "in a rainy city"?
pbasp•1mo ago
Maybe it's just me, but I find it weird to ask for a movie with very detailed characteristics. What I care about above all is watching a good movie rather than wasting my time on a bad one. I have a long list of movies that I plan to watch because I expect them to be good. My mood decides in which order I watch them, that's all. That's why I prefer collaborative filtering: I want to find movies that I'll like; I don't care if the city is rainy or sunny.
pbasp•1mo ago
I'm convinced that in the future (5 or 10 years from now) you'll ask the AI precisely what movie you want to watch and it'll generate it on the fly. If you don't like the direction the story takes, you'll ask it to rectify it. It'll be the end of cinema as we know it today. I'm not sure it's a future that excites me :(