
France's homegrown open source online office suite

https://github.com/suitenumerique
469•nar001•4h ago•224 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
156•bookofjoe•2h ago•137 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
447•theblazehen•2d ago•161 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
33•thelok•2h ago•2 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
33•mellosouls•2h ago•27 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
93•AlexeyBrin•5h ago•17 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
782•klaussilveira•20h ago•241 comments

First Proof

https://arxiv.org/abs/2602.05192
42•samasblack•2h ago•28 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
26•simonw•2h ago•24 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
36•vinhnx•3h ago•4 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
59•onurkanbkrc•5h ago•3 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1034•xnx•1d ago•583 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
180•alainrk•4h ago•255 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
27•rbanffy•4d ago•5 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
171•jesperordrup•10h ago•65 comments

Vinklu Turns Forgotten Plot in Bucharest into Tiny Coffee Shop

https://design-milk.com/vinklu-turns-forgotten-plot-in-bucharest-into-tiny-coffee-shop/
10•surprisetalk•5d ago•0 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
16•marklit•5d ago•0 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
107•videotopia•4d ago•27 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
7•0xmattf•1h ago•1 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
266•isitcontent•20h ago•33 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
152•matheusalmeida•2d ago•43 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
278•dmpetrov•20h ago•148 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
36•matt_d•4d ago•11 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
546•todsacerdoti•1d ago•264 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
421•ostacke•1d ago•110 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
365•vecti•22h ago•166 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
65•helloplanets•4d ago•69 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
338•eljojo•23h ago•209 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
460•lstoll•1d ago•303 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
373•aktau•1d ago•194 comments

The Thinking Game Film – Google DeepMind documentary

https://thinkinggamefilm.com
213•ChrisArchitect•2mo ago

Comments

DrierCycle•2mo ago
AlphaFold is optimization, not thinking. Propaganda 'r us.
aschla•2mo ago
https://news.ycombinator.com/newsguidelines.html
DrierCycle•2mo ago
https://news.ycombinator.com/item?id=44203562
dwa3592•2mo ago
what is thinking?
DrierCycle•2mo ago
Sharp wave ripples, nested oscillations, cohering at action-syntax. The brain is "about actions" and lacks representations.
__patchbit__•2mo ago
Creatively peeling the hyper-dimensional space in the scope of symplectic geometry, Markov blankets and Helmholtz invariance????
fredoliveira•2mo ago
Did you watch the documentary? Would probably fare better if you did, because it'd give you the context for the film title.
DrierCycle•2mo ago
I'm an hour into it, unconvinced.

The illusion that agency 'emerges' from rules like games is fundamentally absurd.

This is the foundational illusion of mechanics. It's UFOlogy not science.

fredoliveira•2mo ago
Well, two things: it's the last sentence of the film; being an hour into something you're calling propaganda is brave.

Anyways. I thought the documentary was inspiring. Deepmind are the only lab that has historically prioritized science over consumer-facing product (that's changing now, however). I think their work with AlphaFold is commendable.

DrierCycle•2mo ago
It's science under the creative boundary of binary/symbols. And as analog thinkers, we should be developing far greater tools than these glass ceilings. And yes, having finished the film, it's far more propagandistic than it began as.

Science is exceeding the envelope of paradox, and what I see here is obeying the envelope in order to justify the binary as a path to AGI. It's not a path. The symbol is a bottleneck.

Zigurd•2mo ago
Everything between your ears is an electrochemical process. It's all math and there is no "creative boundary." There's plenty to criticize in the AI hype claiming that we're going to get to machine intelligence very soon. I suspect a lot of the hype is oriented towards getting favorable treatment from the government if not outright subsidies. But claiming that there are fundamental barriers is a losing bet.
DrierCycle•2mo ago
It doesn't happen "btwn ears" and math is an illusion of imprecision. The fundamental barrier is frameworks and computers will not be involved. There will be software obviously. But it will never be computed.
amitport•2mo ago
Plenty of *commercial* labs have frequently prioritized pure science over *immediate* consumer products, but none have done so out of charity. DeepMind included.
MattRix•2mo ago
Is there a fundamental difference between it and true agency/thought? I’m not so sure.
DrierCycle•2mo ago
Agency will emerge from exceeding the bottleneck of evolution's hand-me-down tools: binary, symbols, metaphors. As long as these unconscious sportscasters for thought "explain" to us what thought "is", we are trapped. DeepMind is simply another circular hamster wheel of evolution. Just look at the status-propaganda the film heightens in order to justify the magic.
Zigurd•2mo ago
Your mind emerges from a network of neurons. Machine models are probably far from enabling that kind of emergence, but if what's going on between our ears isn't computation, it's magic.
DrierCycle•2mo ago
It's not magic. It's neural syntax. And nothing trapped by computation is occurring. It's not a model, it is the world as actions.

The computer is a hand-me-down tool under evolution's glass ceiling. This should be obvious: binary, symbols, metaphors. These are toys (ie they are models), and humans are in our adolescent stage using these toys.

Only analog correlation gets us to agency and thought.

dboreham•2mo ago
Why is it absurd? Because believing that would break some deep delusion humans have about themselves?
youngNed•2mo ago
Quite honestly, it's about time the penny dropped.

Look around you, look at the absolute shit people are believing, the hope that we have any more agency than machines... to use the language of the kids, is cope.

I have never considered myself particularly intelligent, which, I feel, puts me at odds with many of the HN readership, but I do always try to surround myself with the smartest people I can.

The number of them that have fallen down the stupidest rabbit holes I have ever seen really makes me think: as a species, we have no agency

Rochus•2mo ago
Not sure why this is downvoted. The comment cuts to the core of the "Intelligence vs. Curve-Fitting" debate. From my humble perspective as a PhD in the molecular biology/biophysics field, you are fundamentally correct: AlphaFold is optimization (curve-fitting), not thinking. But calling it "propaganda" might be a slight oversimplification of why that optimization is useful. If you ask AlphaFold to predict a protein that violates the laws of physics (e.g. a designed sequence with impossible steric clashes), it will sometimes still confidently predict a folded structure, because it is optimizing for "looking like a protein", not for "obeying physics". The "propaganda" label likely comes from DeepMind's marketing, which uses words like "solved" when, in reality, DeepMind found a way to bypass the protein folding problem.
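(Aside: a toy illustration of the "optimization vs. thinking" point above. A least-squares fit will return an equally confident number for any input, because it only minimizes its own loss; it has no notion of whether the question is physically meaningful. The sketch below is hypothetical plain NumPy, not anything from AlphaFold.)

```python
import numpy as np

# Toy "curve fitting": fit a cubic polynomial to noisy samples of sin(x).
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 40)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)

coeffs = np.polyfit(x, y, deg=3)   # pure optimization: minimize squared error
model = np.poly1d(coeffs)

print(model(1.0))    # plausible prediction near the training data
print(model(100.0))  # an equally "confident" number for a nonsensical input
```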
DrierCycle•2mo ago
I'm concerned that coders and the general public will confuse optimization with intelligence. That's the nature of propaganda, substituting sleight of hand to create a false narrative.

btw an excellent explanation, thank you.

autonomousErwin•2mo ago
What's the difference between optimisation and intelligence?
HarHarVeryFunny•2mo ago
For a start optimization is a process, and intelligence is a capability.
dekhn•2mo ago
If there's one thing I wish DeepMind did less of, it's conflating the protein folding problem with static structure prediction. The former is a grand challenge problem that remains 'unsolved', while the latter is an impressive achievement that really is optimization using a huge collection of prior knowledge. I've told John Moult, the organizer of CASP, this (I used to "compete" in these things), and I think most people know he's overstating the significance of static structure prediction.

Also, solving the protein folding problem (or getting to 100% accuracy on structure prediction) would not really move the needle in terms of curing diseases. These sorts of simplifications are great if you're trying to inspire students into a field of science, but get in the way when you are actually trying to rationally allocate a research budget for drug discovery.

smj-edison•2mo ago
I'm really curious about this space: what types of simulation/prediction (if any) do you see as being the most useful?

Edit to clarify my question: What useful techniques 1. Exist and are used now, and 2. Theoretically exist but have insurmountable engineering issues?

dekhn•2mo ago
Right now techniques that exist and used now are mostly around target discovery (identifying proteins in humans that can be targeted by a drug), protein structure prediction and function prediction. Identifying sites on the protein that can be bound by a drug is also pretty common. I worked on a project recently where our goal was to identify useful mutations to make to an engineered antibody so that it bound to a specific protein in the body that is linked to cancer.

If your goal is to bring a drug to market, the most useful thing is predicting the outcome of the FDA drug approval process before you run all the clinical trials. Nobody has a foolproof method to do this, so failure rates at the clinical stage remain high (and it's unlikely you could create a useful predictive model for this).

Getting even more out there, you could in principle imagine an extremely high fidelity simulation model of humans that gave you detailed explanations of why a drug works but has side effects, and which patients would respond positively to the drug due to their genome or other factors. In principle, if you had that technology, you could iterate over large drug-like molecule libraries and just pick successful drugs (effective, few side effects, works for a large portion of the population). I would describe this as an insurmountable engineering issue because the space and time complexity is very high and we don't really know what level of fidelity is required to make useful predictions.

"Solving the protein folding problem" is really more of an academic exercise to answer a fundamental question; personally, I believe you could create successful drugs without knowing the structure of the target at all.

smj-edison•2mo ago
Thank you for the detailed answer! I'm just about to start college, and I've been wanting to research molecular dynamics, as well as building a quantitative pathway database. My hope is to speed up the research pipeline, so it's heartening to know that it's not a complete dead end!
tim333•2mo ago
I think if you watch the actual film you'd find they don't claim AlphaFold is thinking.
BanditDefender•2mo ago
There is quite a bit of bait-and-switch in AI, isn't there?

"Oh, machine learning certainly is not real learning! It is a purely statistical process, but perhaps you need to take some linear algebra. Okay... Now watch this machine learn some theoretical physics!"

"Of course chain-of-thought is not analogous to real thought. Goodness me, it was a metaphor! Okay... now let's see what ChatGPT is really thinking!"

"Nobody is claiming that LLMs are provably intelligent. We are Serious Scientists. We have a responsibility. Okay... now let's prove this LLM is intelligent by having it take a Putnam exam!"

One day AI researchers will be as honest as other researchers. Until then, Demis Hassabis will continue to tell people that MuZero improves via self-play. (MuZero is not capable of play and never will be)

tim333•2mo ago
Maybe but the film is about Hassabis thinking about thinking and working towards general intelligence that can think. It doesn't really make claims about their existing software regarding that.
HarHarVeryFunny•2mo ago
It seems that to solve the protein folding problem in a fundamental way would require solving chemistry, yet the big lie (or false hope) of reductionism is that discovering the fundamental laws of the universe, such as quantum theory, would let us do so; in fact it doesn't help that much with figuring out the laws/dynamics at higher levels of abstraction such as chemistry.

So, in the meantime (or perhaps for ever), we look for patterns rather than laws, with neural nets being one of the best tools we have available to do this.

Of course ANNs need massive amounts of data to "generalize" well, while protein folding only had a small amount available due to the months of effort needed to experimentally determine how any given protein folds, so DeepMind threw the kitchen sink at the problem, apparently using a diffusion-like process in AlphaFold 3 to first determine large-scale structure and then refine it, and using co-evolution of proteins as another source of data to address the paucity.

So, OK, they found a way around our lack of knowledge of chemistry and managed to get an extremely useful result all the same. The movie, propaganda or not, never suggested anything different, and "at least 90% correct" was always the level at which it was understood the result would be useful, even if 100% based on having solved chemistry / molecular geometry would be better.

dekhn•2mo ago
We have seen some suggestion that the classical molecular dynamics force fields are sufficient to predict protein folding (in the case of stable, soluble, globular proteins), in the sense that we don't need to solve chemistry but only need to know a coarse approximation of it.
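(Aside: for readers unfamiliar with what a "classical molecular dynamics force field" involves, here is a minimal, hypothetical velocity-Verlet step for two particles under a Lennard-Jones potential in reduced units. It is only a sketch of the idea; real MD engines add bonded terms, electrostatics, thermostats and far more.)

```python
import numpy as np

def lj_force(r_vec, epsilon=1.0, sigma=1.0):
    """Lennard-Jones force on particle 1 from particle 2 (reduced units)."""
    r = np.linalg.norm(r_vec)
    return 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) * r_vec / r**2

# Two particles starting at rest, 1.5 sigma apart.
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros_like(pos)
mass, dt = 1.0, 0.001

f = lj_force(pos[0] - pos[1])
forces = np.array([f, -f])

for _ in range(1000):
    # Velocity Verlet: half-kick, drift, recompute forces, half-kick.
    vel += 0.5 * dt * forces / mass
    pos += dt * vel
    f = lj_force(pos[0] - pos[1])
    forces = np.array([f, -f])
    vel += 0.5 * dt * forces / mass

# The pair oscillates around the Lennard-Jones minimum (~1.12 sigma).
print(np.linalg.norm(pos[0] - pos[1]))
```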
HarHarVeryFunny•2mo ago
Sure, but AlphaFold is still probably the most impactful and positive thing to have come out of "Deep Learning" so far.
theturtletalks•2mo ago
Didn’t the transformer model come from AlphaFold? I feel like we wouldn’t have had the LLMs we use today if it wasn’t for AlphaFold.
HarHarVeryFunny•2mo ago
The Transformer was invented at Google, but by a different team. AFAIK the original AlphaFold didn't use a transformer, but AlphaFold 2.0 and 3.0 do.
ChrisArchitect•2mo ago
Streaming on YouTube now: https://www.youtube.com/watch?v=d95J8yzvjbQ
ChrisArchitect•2mo ago
Hard to discount the impact of AlphaFold in science work, but submitting this to a number of film festivals like Tribeca seems a bit like AI-washing.
llbbdd•2mo ago
What is AI-washing?
ayewo•2mo ago
Like whitewashing, but for AI, I’m guessing.
incognito124•2mo ago
Watched it a while ago. Made me seriously think about AI and what we should use it for. I feel like all the entertainment use cases (image and video gen) are a complete waste.
jeffbee•2mo ago
DeepMind's new [edit: apparently now old] weather forecast model is similar in architecture to the toys that generate videos of horses addressing Congress or cats wearing sombreros. The technology moves forward and while some of the new applications are not important, other applications of the same technology may be important.
incognito124•2mo ago
Is it really similar? I was under the impression it's a GNN of a (really dense) polyhedron, not a diffusion model
jeffbee•2mo ago
GenCast is a diffusion model, but it is not the "new" one like I said. Apparently there is another one. https://arxiv.org/pdf/2506.10772
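(Aside: a rough sketch of what "a GNN of a (really dense) polyhedron" means in practice: per-node weather features on a mesh are updated by aggregating messages from neighbouring nodes. The toy below is hypothetical and is not GraphCast's or GenCast's actual architecture.)

```python
import numpy as np

def message_passing_step(node_feats, edges, w_msg, w_update):
    """One round of mean-aggregation message passing on a mesh graph.

    node_feats: (num_nodes, d) per-node features (e.g. wind, pressure).
    edges:      list of (src, dst) index pairs of the mesh.
    w_msg, w_update: toy weight matrices standing in for learned MLPs.
    """
    agg = np.zeros_like(node_feats)
    count = np.zeros(node_feats.shape[0])
    for src, dst in edges:                       # send a message along every edge
        agg[dst] += node_feats[src] @ w_msg
        count[dst] += 1
    agg /= np.maximum(count, 1)[:, None]         # mean over incoming messages
    return np.tanh(node_feats + agg @ w_update)  # residual update of each node

# Tiny example: a 4-node "mesh" ring with 2 features per node.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 2))
ring = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 0), (2, 1), (3, 2), (0, 3)]
print(message_passing_step(feats, ring, rng.normal(size=(2, 2)), rng.normal(size=(2, 2))))
```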
mattlondon•2mo ago
The chatbots and image editors are just a side-show. The real value is coming in e.g. chemistry (AlphaFold et al.), fusion research, weather prediction etc.
echelon•2mo ago
None of that has reached the market yet. If it was up to the sciences alone, AI couldn't bear the weight of its own costs.

It also needs to be vertically integrated to make money, otherwise it's a handout to the materials science company. I can't see any of the AI companies stretching themselves that thin. So they give it away for goodwill or good PR.

incognito124•2mo ago
That's not really true. Commercial weather prediction has reached the market, and a drug (sorry, can't find the news link) that was found by AI-accelerated drug discovery is now in clinical testing.
aoeusnth1•2mo ago
The reason why vertical integration is important for AI investment is that if AI is commoditized, then that AI-acceleration will cost pennies for drugs that are worth billions.

I don't see how OpenAI or Google can profit from drug discovery. It's nearly pure consumer surplus (where the drug companies and patients are the consumers).

tim333•2mo ago
Science in general tends to be subsidised and given away because a basic understanding of the world is hard to monopolise. I'm not sure how Einstein would have done a general relativity startup.

That said Deepmind are doing a spin-off making drugs https://www.isomorphiclabs.com/

bayindirh•2mo ago
> None of that has reached the market yet.

AI for science is not "marketed". It silently evolves under wraps and changes our lives step by step.

There are many AI systems already monitoring our ecosystem and predicting things as you read this comment.

poszlem•2mo ago
The real value is coming in warfare.
awaythrow999•2mo ago
Right. More accurate predictions for metadata-based killings, as championed by the US in their war on terror
walletdrainer•2mo ago
Metadata based killings are most likely a huge improvement from the prior state of affairs
modeless•2mo ago
Yeah. Let the leaders assassinate each other with drone strikes instead of indiscriminately bombing whole cities as they used to.
dylan604•2mo ago
what gov't in modern day would fall because the leader was assassinated? the next in line would just step up, and now have a pissed population that will be in favor of ratcheting up beyond assassinations.
mattlondon•2mo ago
Any autocratic state would probably have quite a high likelihood of that, I would expect.

I am sure you can think of a few prominent examples.

epolanski•2mo ago
ML has been used in weather prediction since the 80s and has been the backbone of it for almost a decade.

Not sure what LLMs are supposed to do there.

danpalmer•2mo ago
No one is suggesting using LLMs for weather. DeepMind is making significant progress on weather prediction with new AI models.
neumann•2mo ago
oh god - please tell BoM in Australia. Either ML is not keeping up with climate change unpredictability, or SOTA is worse than what we had 10 years ago.
danpalmer•2mo ago
I relocated to Australia last year. This country is obsessed with the BoM and I don't know why. The few times I've used it, it was either outright wrong, or I couldn't even find the weather info I wanted (like, is it going to rain tomorrow), and they only added TLS in 2025!
Glemkloksdjf•2mo ago
LLMs in general are ML-based and need a lot of data and compute: the same infrastructure as any other ML-based system.

The AI/AGI hype, in my opinion, could be better renamed 'ML with data and compute' hype (I don't like the word hype as it doesn't fit very well).

threethirtytwo•2mo ago
Why are images and video a complete waste? This makes no sense to me.

Right now the generators aren’t effective but they are definitely stepping stones to something better in the future.

If that future thing produces video, movies and pictures better than anything humanity can produce at a rate faster than we can produce things… how is that a waste?

It can arguably be bad for society but definitely not a waste.

incognito124•2mo ago
Let me phrase it a bit differently, then: AI generated cats in Ghibli style are a waste, we should definitely do less of that. I did not hold that opinion before the documentary

Education-style infographics and videos are OK.

danielbln•2mo ago
I'm glad you're not the sole arbiter for what is wasteful and what isn't.
dylan604•2mo ago
Just because you disagree does not make them wrong though
threethirtytwo•2mo ago
I’m not even talking about this. Those cat videos are just stepping stones for Academy Award-winning masterpieces of cinema like Dune. All generated by AI with a click in one second.
lm28469•2mo ago
Homoconsomator brain be like ^
eamsen•2mo ago
Our family derives a lot of joy from stylized versions of our photos. For us, it is not a waste. If you do not derive anything positive from it, you will likely not use it, hence no energy wasted either. Your argument is objectively wrong.
QuantumGood•2mo ago
Parent said "entertainment use cases" are a complete waste, not all uses of images and video. I don't agree, but do particularly find educational use cases of AI video are becoming compelling.

I help people turn wire rolling shelf racks into the base of their home studio, and AI can now create a "how to attach something to a wire shelf rack" video without me having to do all the space and rack and equipment and lighting and video setup; I just use a prompt. It's not close to perfect yet, but it's becoming useful.

threethirtytwo•2mo ago
If AI can produce movies, video and art better, aka “more entertaining”, than humans, then how is it a waste?
wasmainiac•2mo ago
But it’s not. I think most can agree that there really has not been any real entertainment from genAI beyond novelty crap like seeing Lincoln pulling a nice trick at a skate park. No one wants to watch genAI slop video, no one wants to listen to genAI video essays, most people do not want to read genAI blog posts. Music is a maybe, based on leaderboards, but it is not like we ever had a lack of music to listen to.
CamperBob2•2mo ago
Eventually it will be good enough that you won't know the difference.

I have a feeling that's already happened to me.

threethirtytwo•2mo ago
Bro. You and your cohorts said the exact same thing about LLMs and coding when ChatGPT just came out. The status quo is obvious. So no one is talking about that.

Draw the trendline into the future. What will happen when the content is indistinguishable and AI is so good it produces something that moves people to tears?

wasmainiac•2mo ago
Bro, not sure if you noticed. ChatGPT isn't that great at coding end to end. It can regurgitate common examples well, but if you're working on large technical code bases it does more harm than good. It needs constant oversight, so why don't I just write the code myself? We are at an infrastructure limit; not sure we are going to see order-of-magnitude improvements any more.
threethirtytwo•2mo ago
I no longer write code. I’ve been a swe for over a decade. AI writes all my code following my instructions. My code output is now expected to be 5x what it was before because we are now augmented by AI. All my coworkers use AI. We don’t use ChatGPT we use anthropic. If I didn’t use AI I would be fired for being too slow.

What I work on is large and extremely technical.

And no we are not at an infrastructure limit. This statement is insane. We are literally only a couple years into LLMs becoming popular. Everything we see now is just the beginning. You can only make a good judgement call of whether we are at our limit in 10 years.

Because the transition hit so quickly a lot of devs and companies haven’t fully embraced AI yet. Culture is still lagging capability. What you’re saying about ChatGPT was true a year ago. And now one year later, everything you’re saying isn’t remotely true anymore. The pace is frightening. So I don’t blame you for not knowing. Yes AI needs to be managed but it’s at a point where the management no longer hinders you and it instead augments your capabilities.

youngNed•2mo ago
Because vast amounts of people find Coldplay entertaining. That doesn't mean it's a good thing.
threethirtytwo•2mo ago
You lack imagination. When ChatGPT just came out people were saying it can never code. Now if you aren’t using ai in your coding you’re biting the dust.

Stop talking about the status quo… we are talking about the projected trendline. What will AI be when it matures?

Second you’re just another demographic. Smaller than fans of Coldplay but equally generic and thus an equal target for generated art.

Here’s a prompt that will one day target you: “ChatGPT, create musical art that will target counter-culture posers who think they’re better than everyone just because they like something that isn’t mainstream. Make it so different they will worship that garbage like they worship Pearl Jam. Pretend that the art is by a human so that when they finally figure out they fell for it hook, line and sinker, they’ll realize their counter-culture tendencies are just another form of generic trash fandom no different than people who love Coldplay or, dare I say it, Taylor Swift.”

What do you do then when this future comes to pass and all content even for posers is replicated in ways that are superior?

plastic3169•2mo ago
”What a way to show them. You rock! Unfortunately I can’t create the musical art you requested as you reference multiple existing musical acts by name. How about rephrasing your request in a way that is truly original and unique to you”
threethirtytwo•2mo ago
Again I’m referring to the future. When ChatGPT came out nobody thought it was good enough to be an assistant coding agent. That future came to pass.

Nobody gives a fuck about what ChatGPT can currently do. It’s not interesting to talk about because it’s obvious. I don’t even understand why you’re just rehashing the obvious response. I’m talking about the future. The progression of LLMs is leading to a future where my prompt leads to a response that is superior to the same prompt given to a human.

dylan604•2mo ago
> particularly find educational use cases of AI video are becoming compelling.

compelling graphics take a long time to create. for education content creators, this can be too expensive as well. my high school physics teacher would hand draw figures on transparencies on an overhead projector. if he could have produced his drawings as animations cheap and fast using AI, it would have really brought his teaching style (he really tried to make it humorous) to another level. I think it would be effective for his audience.

imagine the stylized animations for things like the rebooted Cosmos, NOVA, or even 3Blue1Brown on YT. there is potential for small teams to punch above their weight class with genAI graphics

lm28469•2mo ago
It might be shocking to you but some people believe there is more to life than producing and consuming "content" faster and faster.

Most of it is used to fool people for engagement, scams, politics or propaganda; it definitely is a huge waste of resources, time, brains and compute power. You have to be completely brainwashed by consumerism and tech-solutionism to not see it.

Glemkloksdjf•2mo ago
I actually had a counter thought a few years ago.

We consume A LOT of entertainment every day. Our brains like that a lot.

Doesn't have to be just video; even normal people not watching TV at all entertain themselves through books or events etc.

Life would be quite boring otherwise.

threethirtytwo•2mo ago
I see it. But you’re lacking imagination as to what I’m referring to. It’s also fucking obvious. Like I’m obviously not referring to TikTok videos and ads and that kind of bullshit everyone on earth knows about and obviously hates. You’re going on as if it’s “shocking” to me when what you’re talking about is as obvious as night and day. What’s shocking to me is that you’re not getting my point and that I’m obviously talking about something less well known.

Take your favorite works of art, music and cinema. Imagine if content on that level can be generated by AI in seconds. I wouldn’t classify that as a “waste” at all. You’re obviously referring to bullshit content, I’m referring to content that is meaningful to you and most people. That is where the trendline is pointing. And my point, again is this:

We don’t know the consequence of such a future. But I wouldn’t call such content created by AI a waste if it is objectively superior to content created by humans.

modeless•2mo ago
You might have said the same thing about GPUs for 20 years when they were mostly for games, before they turned out to be essential for AI. All the entertainment use cases were directly funding development of the next generation of computing all along.
tim333•2mo ago
Practical things are probably treating diseases and more abundance of physical goods. More speculative/sci-fi is merging in some form with AI and maybe immortality which I think is the more interesting bit.
cultofmetatron•2mo ago
unfortunately all this work on Sora has a very real military use case. I personally think all this investment in Sora by OpenAI is largely to create a digital fog of war. Now when a rocket splatters a 6-year-old Palestinian girl's head across the pavement like a Jackson Pollock painting, they will be able to claim it's AI generated by state-sponsored actors in order to prevent disruption to the manufactured-consent apparatus.
dwarfpagent•2mo ago
I find it funny that the YouTube link takes you to the film, but like an hour into it.
vmilner•2mo ago
Yes, it made me think I'd already watched it and had forgotten about it...
redbell•2mo ago
Just watched it yesterday and enjoyed every second of it. The director put more focus on Demis Hassabis, who turns out to be a true superhero, and I have to confess that I probably admire him more than any other human in the tech industry.
nightski•2mo ago
In my experience all DeepMind content ends up being a puff piece for Dennis Hassabis. It's like his personal marketing engine lol.
stevenjgarner•2mo ago
Is that a good thing or a bad thing? Demis is after all a co-founder and CEO.
Hacker_Yogi•2mo ago
Makes it seem that AI is a one-man show while also feeding the hype cycle
ainch•2mo ago
Perhaps they need more advertising around the correct spelling of his name.
nightski•2mo ago
Good catch but it was just an honest typo.
ipnon•2mo ago
He's the leading AI researcher at the 3rd largest company in the world in the middle of an AI boom. He's naturally going to have quite the marketing budget behind him!
tim333•2mo ago
All content about organisations tends to go that way. I guess it's just easier to talk about the leader than the thousands of others involved.
Glemkloksdjf•2mo ago
Our society is leader based. Otherwise the garbage Trump wouldn't matter but he does. The same thing with garbage Musk. Musk gets what he wants from Tesla because the shareholders believe that Musk is critical to Tesla.

Both are fundamental to their followers.

So it's quite clear that you can't just say 'it's DeepMind'; you have to have a figure in the middle of it like Demis.

They trust him to lead DeepMind.

stevenjgarner•2mo ago
https://www.youtube.com/watch?v=d95J8yzvjbQ
lysace•2mo ago
It's official too. It's on: https://www.youtube.com/@googledeepmind

Moderators: Please change the link; feels kind of unethical to bait someone into paying for this now.

dwroberts•2mo ago
I want to watch it, but at the same time, it’s basically going to be an advert for Google. I’m not sure if I can put up with the uncritical fluff.

I would love to see a real (ie outsider) filmmaker do this - eg an updated ‘Lo and behold’ by Werner Herzog

actionfromafar•2mo ago
Where he speaks French
dist-epoch•2mo ago
It's an advert for Demis Hassabis, not Google.
ilaksh•2mo ago
It was directed by Greg Kohs, who is a real filmmaker and does not work for Google.
lysace•2mo ago
Are you saying this movie production wasn't paid for by Google? If it was, surely he did?
ilaksh•2mo ago
oh it might have been paid for by Google for sure.
lysace•2mo ago
Like 99.99% probability, sure. Greg's previous big feature was on Deepmind's AlphaGo, three years after its Google acquisition.

https://www.imdb.com/title/tt6700846/

gardnr•2mo ago
Full length: https://www.youtube.com/watch?v=WXuK6gekU1Y

They do a great job capturing the "Move 37" moment: https://youtu.be/WXuK6gekU1Y?t=2993

dwroberts•2mo ago
Yeah I don’t mean to say they’re not a real filmmaker or untalented etc, I mean more the context they’re doing it in: whether they’ve chosen to cover this topic themselves, and whether they would show critical angles of it and not just promo + hagiography.
dwa3592•2mo ago
Loved this documentary. People complaining - WTFV first.
jnwatson•2mo ago
I caught it on the airplane a few days ago. I would have loved a little more technical depth, but I guess that's pretty much standard for a puff piece.

It is interesting that Hassabis has had the same goal for almost 20 years now. He has a decent chance of hitting it too.

someguy101010•2mo ago
Reposting this from a YouTube comment:

From 1:14:55-1:15:20, within the span of 25 seconds, the way Demis spoke about releasing all known sequences without a shred of doubt was so amazing to see. There wasn't a single second where he worried about the business side of it (profits, earnings, shareholders, investors) —he just knew it had to be open source for the betterment of the world. Gave me goosebumps. I watched that on repeat for more than 10 times.

dekhn•2mo ago
Another way to interpret this (and I don't mean it pejoratively at all): Demis has been optimizing his chances of winning a Nobel Prize for quite some time now. Releasing the data increased that chance. He also would have been fairly certain that the commercial value of the predictions was fairly low (simply predicting structures accurately was never the rate-limiting step for downstream things like drug discovery). And that he and his team would have a commercial advantage by developing better proprietary models and using them to make discoveries.
tim333•2mo ago
Also, since selling DeepMind to Google, it's Google's shareholders' money really.
sgt101•2mo ago
I think that's a rather conspiratorial way of framing it.

I think it's more about someone trying to do the most good that was possible at that time.

I doubt he cares much about prizes or money at this point.

dekhn•2mo ago
It's hardly a conspiracy to use strategy and intelligence to maximize the probability of achieving the outcome you desire.

He doesn't have to care much about prizes or money at this point: he won his prize and he gets all the hardware and talent he needs.

potsandpans•2mo ago
I noticed this as well. Actually went back and watched it several times. It's an incredible moment. I keep thinking, "if this moment is real, this is truly a special person."
mNovak•2mo ago
My interpretation of that moment was that they had already decided to give away the protein sequences as charity; it was just a decision of releasing them all as a bundle vs fielding individual requests (a 'service').

Still great of them to do, and as can be seen it's worth it as a marketing move.

dekhn•2mo ago
(as an aside, this is a common thing that comes up when you have a good model: do you make a server that allows people to do one-off or small-scale predictions, or do you take a whole query set and run it in batch and save the results in a database; this comes up a lot)
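(Aside: the trade-off described above can be sketched as two tiny functions: an on-demand handler that pays the model cost on every request, versus a batch job that precomputes the whole query set into a database and serves cheap lookups afterwards. `predict_structure` is a hypothetical stand-in for the expensive model call.)

```python
import sqlite3

def predict_structure(sequence: str) -> str:
    """Hypothetical stand-in for an expensive model call."""
    return f"structure-for-{sequence}"

# Option A: one-off predictions behind a server endpoint (pseudo-handler).
def handle_request(sequence: str) -> str:
    return predict_structure(sequence)  # pay the full model cost on every call

# Option B: batch-precompute the entire query set, then serve cheap lookups.
def precompute(sequences, db_path="predictions.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS preds (seq TEXT PRIMARY KEY, structure TEXT)")
    conn.executemany(
        "INSERT OR REPLACE INTO preds VALUES (?, ?)",
        ((s, predict_structure(s)) for s in sequences),
    )
    conn.commit()
    return conn

def lookup(conn, sequence: str):
    row = conn.execute("SELECT structure FROM preds WHERE seq = ?", (sequence,)).fetchone()
    return row[0] if row else None

conn = precompute(["MKTAYIAK", "GAVLIPFW"])
print(lookup(conn, "MKTAYIAK"))
```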
jpecar•2mo ago
A DB of known proteins is not where the money can be made; designing new proteins is. This is why AlphaFold 3 (which can aid in this) is now wrapped in layers of legalese preventing you from actually using it in the way you want. At least that's what my life-science users tell me. Big Pharma is now paying big money to DeepMind to make use of AF3 ...
beginnings•2mo ago
i tried to watch it but like AI in general, it was extraordinarily boring. neural nets are really cool technically, but the whole AI thing is just getting old and I couldnt care less where its going

we can guarantee that whether its the birth of superintelligence or just a very powerful but fundamentally limited algorithm, it will not be used for the betterment of mankind, it will be exploited by the few at the top at the expense of the masses

because thats apparently who we are as a species

hbarka•2mo ago
Hi, I’m genuinely curious about your writing style. I’m seeing this trend of no proper casing and no punctuation becoming vogue-ish. Is there a particular reason you prefer to write this way or is this writing style typical for a generation? Sincere question, not snark, coming from an older generation guy.
aswegs8•2mo ago
If you grew up in the internet of early 2000s, that's how we wrote online.
querez•2mo ago
I grew up in the Internet at that time, and it's certainly not how I type. So you might want to be more specific about which sites or subcultures you think this style is representative of?
luma•2mo ago
I’m certainly no authority but i tend to write the same way for casual communication, came from the 90s era BBS days. It was (and still is) common on irc nets too. Autocorrect fixes up some of it, but sometimes i just have ideas i’m trying to dump out of my head and the shift key isn’t helping that go faster. Emails at work get more attention, but bullshittin with friends on the PC? No need.

I’ll code switch depending on the venue, on HN i mostly Serious Post so my post history might demonstrate more care for the language than somewhere i consider more casual.

mystifyingpoi•2mo ago
This is the writing style of this generation. I've just scrolled 6 months of my conversation with a friend in his twenties. Not a single comma or period to be seen. I mean on his side.
beginnings•2mo ago
it signals high status and nonconformity. the reader intuits that a sigma male is speaking and he doesnt play by the rules. hes not bound by the constraints and regulations of classical reality. hes dangerous

but seriously, its just more comfortable to type. apostrophes and capitals are generally superfluous, we'll and well the only edge case, theyve, theyll, wont, dont etc its just not necessary. theres no ambiguity

i only recently started using full stops for breaks. for years, I was only using commas, but full stops are trending among the right people. but only for breaks, not for closing

AndrewKemendo•2mo ago
Correct! I’m glad people are finally starting to get it
verisimi•2mo ago
weekends are always better on hn
AndrewKemendo•2mo ago
How does that relate to my comment?
verisimi•2mo ago
Agreeing that tech is often not used for the betterment of mankind is still considered a controversial comment around here.

The original comment and you agreeing, struck me as examples of the more open commentary one can see at the weekends.

tim333•2mo ago
If you watch on there's a bit where they decide to give away all the protein folding results for free when they could have charged (https://youtu.be/d95J8yzvjbQ?t=4497). Not everything is exploitation rather than the betterment of mankind.
beginnings•2mo ago
that sort of mentality is typical in researchers, but the powers that be will be thinking about profit and control, mass layoffs and AI governance in conjunction with digital id, carbon credits etc

every technological advancement that made people more productive and should have led to them having to do less work, only led to people needing to do more work to survive. i just dont see AI being any different

Glemkloksdjf•2mo ago
It's so disappointing to read this.

Do you know how long it took us to get to this point? Massive compute, knowledge, algorithms etc.

Why are you even on HN if the most modern and most impactful technology leads you to say "I couldn't care less where it's going"?

Just a few years ago there was not a single way of solving image generation, music generation, and chat bots actually able to respond reasonably to you, and that in different languages.

AlphaFold already helps society today btw.

quirino•2mo ago
Watched it this week. Pretty good.

There are a couple parts at the start and the end where a lady points her phone camera at stuff and asks an AI about what it sees. Must have been mind-blowing stuff when this section was recorded (2023), but now it's just the bare minimum people expect of their phones.

Crazy times we're living in.

HarHarVeryFunny•2mo ago
I was ok with that as "fledgling AI" at the start of the movie/documentary, but thought that going back to it and having the chatbot suggest a chess book opening to Hassabis at the end was cheesy and misleading.

They should have ended the movie on the success of AlphaFold.

ilaksh•2mo ago
Greg Kohs and his team are brilliant. For example, the way it captured the emotional triumph of the AlphaFold achievement. And a lot of other things.

One of the smart choices was that it omitted a whole potential discussion about LLMs (VLMs) etc. and the fact that that part of the AI revolution was not invented in that group, and just showed them using/testing it.

One takeaway could be that you could be one of the world's most renowned AI geniuses and not invent the biggest breakthrough (like transformers). But also somewhat interesting is that even though he had been thinking about this for most of his life, the key technology (transformer-type architecture) was not invented until 2017. And they picked it up and adapted it within 3 years of it being invented.

Also I am wondering if John Jumper and/or other members of the team should get a little bit more credit for adapting transformers into the Evoformer.

circadian•2mo ago
There's some funny comments going on in this thread. Understandably so. What could be more divisive an issue than AI on a silicon valley forum!?

As a brit, I found it to be a really great documentary about the fact that you can be idealistic and still make it. There are, for sure, numerous reasons to give Deepmind shit: Alphabet, potential arms usage, "we're doing research, we're not responsible". The Oppenheimer aspect is not to be lost, we all have to take responsibility for wielding technology.

I was more anti-Deepmind than pro before this, but the truth is as I get older it's nicer to see someone embodying the aspiration of wanton benevolence (for whatever reason) based on scientific reasoning, than to not. To keep it away from the US and acknowledge the benefits of spreading the proverbial "love" to the benefit of all (US included) shows a level of consideration that should not be under-acknowledged.

I like this documentary. Does AGI and the search for it scare me? Hell yes. So do killer mutant spiders descending on earth post nuclear holocaust. It's all about probabilities. To be honest: disease X freaks me out more than a superintelligence built by an organisation willing to donate the research to solve the problems of disease X. Google are assbiscuits, but Deepmind point in the right direction (I know more about their weather and climate forecasting efforts). This at least gave me reason to think some heart is involved...

vismit2000•2mo ago
Earlier on HN: https://news.ycombinator.com/item?id=46086561
sakesun•2mo ago
Is the multimodal agent really as good as shown in the documentary? If so, why did Google need to stage parts of the demo at Google I/O?

https://news.ycombinator.com/item?id=38559582#38566618

mskogly•2mo ago
Two thoughts: 1. The field of AI research moves so fast that any attempt to make a full documentary would be obsolete long before it was released. 2. All I want AI to do right now is to remove generic «dramatic» music from YouTube clips.
mattlondon•2mo ago
What confused me about this documentary was the "at home" scenes for Hassabis.

He is famously a North London lad, but the at-home shots are clearly shot from South London looking north (you can tell by the orientation of The Shard and Bishopsgate out of the window).

I thought that this might have been a "stage home" but it appears to be the same place in the background of various video conferences he is on too, so unless those were staged for the documentary (which seems like a lot of effort), then he lives near Crystal Palace and not Highgate?

kk58•2mo ago
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5409063

AI for science is much bigger than RL or generative AI in science. There are several classes of models, like operator learning, physics-informed neural networks, and Fourier operators, that perform magnificently well and have killer applications in various industrial settings.

Do read the attached paper if you're curious about AI in science.