frontpage.

Effects of Zepbound on Stool Quality

https://twitter.com/ScottHickle/status/2020150085296775300
1•aloukissas•1m ago•0 comments

Show HN: Seedance 2.0 – The Most Powerful AI Video Generator

https://seedance.ai/
1•bigbromaker•4m ago•0 comments

Ask HN: Do we need "metadata in source code" syntax that LLMs will never delete?

1•andrewstuart•10m ago•1 comments

Pentagon cutting ties w/ "woke" Harvard, ending military training & fellowships

https://www.cbsnews.com/news/pentagon-says-its-cutting-ties-with-woke-harvard-discontinuing-milit...
2•alephnerd•13m ago•1 comments

Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? [pdf]

https://cds.cern.ch/record/405662/files/PhysRev.47.777.pdf
1•northlondoner•13m ago•1 comments

Kessler Syndrome Has Started [video]

https://www.tiktok.com/@cjtrowbridge/video/7602634355160206623
1•pbradv•16m ago•0 comments

Complex Heterodynes Explained

https://tomverbeure.github.io/2026/02/07/Complex-Heterodyne.html
3•hasheddan•16m ago•0 comments

EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
2•ArtemZ•28m ago•4 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•28m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
2•LiamPowell•30m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
3•duxup•33m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•34m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•46m ago•1 comments

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•48m ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
3•savrajsingh•49m ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•51m ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•55m ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•59m ago•1 comments

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
2•g1raffe•1h ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•1h ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
3•rolph•1h ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•1h ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•1h ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•1h ago•1 comments

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
2•cedel2k1•1h ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
40•chwtutha•1h ago•6 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
4•osnium123•1h ago•1 comments

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
2•jeremy_su•1h ago•0 comments

Show HN: I built a Mac screen recorder Recap.Studio

https://recap.studio/
1•fx31xo•1h ago•1 comments

Ask HN: Codex 5.3 broke toolcalls? Opus 4.6 ignores instructions?

1•kachapopopow•1h ago•0 comments

Is Entertainment Discovery Fundamentally Broken?

2•nicola_alessi•1mo ago
For the last year, I've been obsessed with a problem: finding something to watch is a chore. The interfaces of Netflix, Prime, and others feel like slot machines designed for maximum engagement, not for matching my mood. The "Because you watched..." algorithms create boring feedback loops, and browsing endless rows of posters is inefficient.

This feels like a discovery problem. These platforms are optimization engines for content consumption, not for genuine recommendation. Their goal is to keep you on the service, not to help you find the perfect movie for a rainy Tuesday night.

As a builder, this led me to a prototype (https://lumigo.tv/en-US): what if you could describe your mood or intent in plain language and get a tailored, unbiased shortlist? I've been working on lumigo.tv to test this. The core is an AI agent that you query like, "a thought-provoking sci-fi movie from the 90s" or "a cozy British mystery series." It searches a database of titles and returns matches with ratings and where to stream them.

The technical hypothesis is that a conversational, intent-based search can cut through the noise better than collaborative filtering or genre rows. No ads in results, no promoted titles—just a direct query-to-match engine.

My question to HN isn't about the specific tool, but the broader principle:

Is the dominant "infinite scroll of posters" model the end-state for discovery, or is it a legacy UI that we've just accepted?

Can a neutral, conversational interface ever compete with the billion-dollar optimization of platform-native algorithms?

What would a technically ideal discovery layer look like? Would it be a meta-layer across all services (like a better JustWatch), or is deep integration with one platform's catalog necessary?

I'm sharing this not for feedback on the site itself, but to discuss the architecture of discovery. Is solving the "what to watch" problem more about better data, a better interface, or changing the fundamental incentives away from engagement maximization?

Comments

neeksHN•1mo ago
They need to find a way to reinvent "channel surfing". Discovery via "flipping" has led me to watch things I'd otherwise never click in an app interface.

I've always been surprised that Netflix and other services don't create "live channels" (e.g., a "The Office" channel) from their libraries.

nicola_alessi•1mo ago
This is a fantastic point, and you've hit on something fundamental that's been lost in the shift to on-demand: the joy of discovery through serendipity and low commitment.

You're describing the exploration/exploitation trade-off in a very concrete way. Algorithmic recommendations are pure exploitation (based on your known likes). Endless scrolling is a frustrating middle ground. But "channel surfing" or "flipping" was a form of low-stakes exploration. You weren't making a choice to invest 90 minutes; you were dipping in for 30 seconds. If it didn't grab you, there was zero cost to leaving, which is psychologically liberating and led to finding unexpected gems.

Netflix's "Play Something" button and "Shuffle Play" for shows like The Office are direct, if clumsy, acknowledgments of this need. But you're right, why not a live "80s Action" channel or an "A24 Indie" channel? The technical barrier is near-zero.

Our take at lumigo.tv is that the modern equivalent shouldn't be tied to a linear broadcast schedule. The core experience to replicate is the low-friction, zero-commitment sampling.

One experiment we're considering is a "Mood Stream": you pick a vibe ("Cult Classic," "Mind-Bending Sci-Fi," "90s Comfort"), and it starts a never-ending, autoplaying stream of trailers or key 2-minute scenes from films in that category. You lean back and "flip" with a pause button. If a clip hooks you, you click to see the full title and where to stream it. It’s on-demand channel surfing.

The UI challenge is huge—how do you make it feel effortless, not just another menu? But your comment validates that solving this might be more valuable than another slightly-better recommendation algorithm. Thanks for this; it’s a much clearer design goal than “better search.”

mttpgn•1mo ago
Your site has a search bar for typing a full prompt to an LLM about what my current mood is, and I just find it interesting that one's mood is the important thing for your users to supply as input to your service. For me, unless a major event has taken place, I usually don't take time to think about my mood beyond one or two words. If I've been on a journaling kick, I'll usually write about the concrete experiences of the day as a proxy for describing my mood without actually getting to what it means for my energy levels/affectations, etc. The mood descriptors I do recognize in myself (e.g., kinda sad!) generally factor little into my content consumption decisions (at least consciously). More important to me are questions like "What are folks talking about?" (driving discourse online or at the office), "Which movies have been recommended to me?" (by friends/family or by advertising), and "What's accessible?" (on a service I already subscribe to, without needing an additional purchase).
nicola_alessi•1mo ago
Your point is excellent and cuts to the core of what we're trying to explore. You're right, "mood" can be a fuzzy, high-friction starting point.

The hypothesis behind the prompt isn't that everyone consciously identifies a mood. It's more that "mood" is a useful shorthand for a complex set of preferences at a given moment. When you think, "I want something mindless and funny after that long meeting," that's a mood proxy. The goal of the open-ended prompt is to capture that full sentence, not just the one-word label.

You've identified the three major discovery engines that dominate today:

Social Proof ("What are folks talking about?")

Direct Recommendation ("What was recommended to me?")

Access & Friction ("What's on my services?")

These are powerful because they require zero cognitive effort from the user. You're reacting to signals. Our experiment is asking: what if you reversed the flow? What if you started with your own internal state, even if vaguely defined as "kinda sad" or "need distraction", and used a model to map that to a title? It's inherently more work, which is its biggest hurdle.

The interesting technical challenge is whether an LLM can act as a translator between your messy, human input ("just finished a complex project, brain fried, want visual spectacle not dialogue") and the structured metadata of a database (genres, pacing, tone, plot keywords). It's not about mood detection; it's about intent parsing. A future iteration might not ask for a mood at all, but simply: "Tell me about your day." The model's job would then be to infer the desired escapism, catharsis, or reinforcement from the narrative. Would that feel more natural, or just more invasive?
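
For concreteness, here is a minimal sketch of what the output of that intent-parsing step could look like. It is not our actual schema; the ViewingIntent fields and the example values are illustrative assumptions. The only point is that the model is asked to fill a fixed structure, never to name titles.

    # Hypothetical schema for the intent-parsing step (illustrative, not lumigo.tv's real code).
    # The model's only job is to fill these fields from a free-text prompt.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ViewingIntent:
        genres: list[str] = field(default_factory=list)    # e.g. ["science fiction"]
        keywords: list[str] = field(default_factory=list)  # e.g. ["visual spectacle"]
        tone: Optional[str] = None                          # e.g. "low effort", "cathartic"
        year_min: Optional[int] = None
        year_max: Optional[int] = None
        max_runtime_minutes: Optional[int] = None

    # "Just finished a complex project, brain fried, want visual spectacle not dialogue"
    # might be parsed, by a model constrained to this schema, into:
    parsed = ViewingIntent(
        genres=["science fiction", "action"],
        keywords=["visual spectacle", "minimal dialogue"],
        tone="low effort",
        max_runtime_minutes=130,
    )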

We're early, and you've nailed the key tension. Does discovery work best when it's passive (social/algorithmic feeds) or active (intent-driven search)? The former is easy; the latter might be more satisfying if we can reduce the friction enough. Thanks for giving me a much better way to frame this.

pbasp•1mo ago
Personally, I don't believe much in AI recommendations. The problem is the data. AI isn't magic: if the AI doesn't have the data, it will hallucinate it. I've discussed my movie tastes with ChatGPT and asked it to give me recommendations... At first it was quite an interesting conversation, but it couldn't go very far, because it knows a lot of details about blockbuster movies and strictly nothing about the remaining 98% of movies. In comparison, collaborative filtering has access to way more data.
nicola_alessi•1mo ago
You are 100% correct, and this is the central limitation. An LLM like ChatGPT, trained on general web text, is a terrible movie recommendation engine for exactly the reasons you state. Its knowledge is broad but shallow, skewed toward popular discourse, and it will happily confabulate titles.

Our approach with lumigo.tv is different by necessity, and it's a direct response to the problem you've nailed. We don't use an LLM for knowledge.

Here's the technical split:

The LLM is strictly a query translator. Its only job is to take your messy, natural-language prompt ("a gloomy noir set in a rainy city") and convert it into a structured set of searchable tags, genres, and metadata filters. It is forbidden from generating or hallucinating movie titles, actors, or plots.

The recommendations come from a structured database. Those translated filters are executed against a traditional database of movies/shows (we've integrated with TMDB and similar sources). The results are ranked by existing metrics like popularity, rating, and release date. The LLM never invents a result; it can only return what exists in the connected data.

You're right that pure collaborative filtering (like Netflix's) has a massive data advantage for mainstream tastes. Where it falls short is edge cases and specific intent. If you want "movies like the third act of Parasite," a collaborative filter has no vector for that. Our hypothesis is that a human can describe that intent, an LLM can map it to tags (e.g., "class tension," "thriller," "dark comedy"), and a database can find matches.

So, it's not AI vs. collaborative filtering. It's AI as a natural-language front-end to a traditional database. The AI handles the "what I want" translation; the database handles the "what exists" retrieval. This avoids the hallucination problem but still allows for queries that a "Because you watched..." algorithm could never process.
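
To make the retrieval half concrete, here is a minimal sketch of executing translated filters against TMDB's public discover API. This isn't our production code: the genre IDs, the keyword-lookup step, the ranking knobs, and the keyword_ids/discover helpers are assumptions for illustration, and TMDB_API_KEY is a placeholder. What it shows is the division of labor: the model only supplies filter values, and only what already exists in the catalog can come back.

    import requests

    TMDB = "https://api.themoviedb.org/3"
    TMDB_API_KEY = "..."  # placeholder; a free TMDB API key goes here

    def keyword_ids(terms):
        # Resolve free-text tags from the translation step to TMDB keyword IDs.
        # Terms with no matching keyword are dropped -- the data limitation you
        # point out: if TMDB has no fitting keyword, this filter can't express it.
        ids = []
        for term in terms:
            r = requests.get(f"{TMDB}/search/keyword",
                             params={"api_key": TMDB_API_KEY, "query": term})
            results = r.json().get("results", [])
            if results:
                ids.append(str(results[0]["id"]))
        return ids

    def discover(genre_ids, keyword_terms, year_min=None, year_max=None):
        # The model never supplies titles; this query can only return what exists in TMDB.
        params = {
            "api_key": TMDB_API_KEY,
            "with_genres": ",".join(genre_ids),                      # e.g. 80 = crime, 53 = thriller
            "with_keywords": "|".join(keyword_ids(keyword_terms)),   # "|" = match any keyword
            "sort_by": "vote_average.desc",
            "vote_count.gte": 200,                                   # crude guard against obscure noise
        }
        if year_min:
            params["primary_release_date.gte"] = f"{year_min}-01-01"
        if year_max:
            params["primary_release_date.lte"] = f"{year_max}-12-31"
        r = requests.get(f"{TMDB}/discover/movie", params=params)
        return r.json().get("results", [])

    # "a gloomy noir set in a rainy city" -> crime/thriller genres, keywords ["neo-noir", "rain"]
    for movie in discover(["80", "53"], ["neo-noir", "rain"], year_min=1980)[:10]:
        print(movie["title"], movie.get("vote_average"))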

Does that distinction make sense? It's an attempt to use each tool for what it's best at.

pbasp•1mo ago
Yes, it does make sense, and it's a very interesting approach. So if you ask for "a gloomy noir set in a rainy city", it'll be translated into TMDB keywords? I doubt that the TMDB keywords have that depth (again, a data problem). How do you translate "in a rainy city"?
pbasp•1mo ago
Maybe it's just me, but I find it weird to ask for a movie with very detailed characteristics. What I care about above all is watching a good movie rather than wasting my time on a bad one. I have a long list of movies that I plan to watch because I expect them to be good. My mood decides in which order I watch them, that's all. That's why I prefer collaborative filtering: I want to find movies that I'll like; I don't care if the city is rainy or sunny.
pbasp•1mo ago
I'm convinced that in the future (5 or 10 years from now) you'll ask the AI precisely what movie you want to watch and it'll generate it on the fly. If you don't like the direction the story takes, you'll ask it to change course. It'll be the end of cinema as we know it today. I'm not sure it's a future that excites me :(