frontpage.
What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•4m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•8m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•9m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•11m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•12m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•15m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•26m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•32m ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
1•cwwc•36m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•45m ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•52m ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•55m ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•55m ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•56m ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•57m ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•57m ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
2•vunderba•57m ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
5•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
3•pabs3•1h ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•1h ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
2•devavinoth12•1h ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•1h ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•1h ago•1 comments

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•1h ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•1h ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
2•mkyang•1h ago•0 comments

Kira Vale, $500 and 600 prompts, AI generated short movie [video]

https://www.youtube.com/watch?v=gx8rMzlG29Q
31•jacquesm•6mo ago

Comments

Bluestein•6mo ago
From YouTube, which I second:

"Pinned by @hashemalghailiofficialchannel @philipashane 4 days ago (edited)

I’ve been wondering when this day would come, when we’d see an AI film that was just a damn good film, without the distraction of AI blemishes. This is well written, well directed, well edited, just about everything is top notch. The “acting” isn’t stellar but nor is it bad. This is very impressive and a landmark achievement, kudos to you."

leakycap•6mo ago
> without the distraction of AI blemishes

Maybe I'm just detail-oriented, but "police" wasn't even spelled right on the officer's sparse uniform. That isn't even halfway into the movie, and by then I'd spotted dozens of other weird AI details.

Bluestein•6mo ago
Fair enough.-

I think the point still stands: not "no blemishes" but "story and execution good enough that the blemishes aren't distracting".-

leakycap•6mo ago
The point is that if you are paying any amount of attention to detail, you will notice this was full of AI blemishes.

The common word "POLICE" misspelled with non-letters on an otherwise empty uniform was an obvious one, but so was the historical singer's face changing in just the first few cuts.

dragonwriter•6mo ago
I...don't. The story was fine, and the execution was understandable given the state of the tooling, but viewed as a film, and not as a tech demo of advances in what is achievable with modern AI tools, it's not great. Many of the voices have the same very noticeable robotic features, and the delivery, whether narration or diegetic dialogue, is monotonous; the "angry crowd" is almost the only place in the whole work where speaking voices appear impacted by emotion, and even that feels off. The scenes have a consistent, very limited range of lengths and a very limited palette of simple continuous camera movements, consistently one per clip.

Even though the mockumentary format is an excellent choice for minimizing the impact of several of those problems, they are still pretty glaring there, even if less so than if you tried to make literally any other style of film with the same techniques.

Bluestein•6mo ago
I really appreciate your insightful look into this, and will again view the video with an eye to these issues you point out.-

PS. We might be a tad beyond "train arriving at the station" territory, at least - that much can be granted methinks.-

stubish•6mo ago
It reminded me a lot of video game cut scenes. It is still hitting the uncanny valley a lot, e.g. voice acting recorded phrase by phrase with limited context and odd pacing better suited to the stage, and acting by puppets with somewhat inhuman movement.

vannevar•6mo ago
This short is an amazing achievement, and (at least to me) a very skillful and clever use of AI. But I don't think it's a good film, and that has nothing to do with AI blemishes. If it had been made shot-for-shot without AI, I probably wouldn't have watched until the end. If I had to put my finger on it, I'd say we spend way too long with a character (the only character) we never really get to know. The sort-of movie-reel concept that keeps her at a distance could work, but I think it would need to be cut way down, maybe to half the length. Cutting the clone comeback (which doesn't really advance the plot) would save 4 or 5 minutes.

leakycap•6mo ago
This reminded me of YouTube videos that are one stock video after another: the emotional moments didn't register as real or have a feeling of continuity as the story unfolded.

Amazing, especially for $500 - but this feels like Fiverr Pixar to me, even in this advancing state of the art.

techpineapple•6mo ago
I wonder if anyone has tried to replicate a AAA movie scene from a prompt; it would also be interesting to try to do a whole movie. I'd want to see the side-by-side, but the only problem is that since the movie might be in the training data, it might not make a good test.

leakycap•6mo ago
That's what I've noticed: if you reference something real that exists in an LLM's training data, it will cling to that because it then has something credible to work from.

On the other hand, it is also challenging to accurately describe an AAA movie scene in any terms where the AI won't then connect the dots to a familiar scene from an AAA movie and incorporate those details.

justinclift•6mo ago
... or refuse to do the work if it recognises the scene/movie/artist (etc).

1oooqooq•6mo ago
It's way more than $500 when you account for the month of extreme editing by the author. You probably couldn't hire him for a week for $500, let alone for half a month of overtime.

techpineapple•6mo ago
I'm curious whether there's a limit to how good AI can get at movie making. I think getting past it will take revolutionary new algorithms/tech.

This video is a great example. It looks great and sounds great, but it also looks like a really good amateur found a bunch of clips on a stock video site and edited them together, probably because stock video is a really plentiful source of training data. The interviews look the best, but again, there are lots of interviews in the training data.

When you combine the skill it takes to generate good prompts with the lack of sufficient training data, I'll just say I don't think Christopher Nolan has anything to worry about just yet. Maybe Wes Anderson does, though.

jaggs•6mo ago
This is probably the worst it's ever going to be?

techpineapple•6mo ago
That's not incompatible with a ceiling. I'm not sure what point you're trying to make.

polytely•6mo ago
I don't think Wes Anderson has anything to worry about either; his work isn't only panning shots in pastel colors.

fouc•6mo ago
I wouldn't be surprised if the video models were vastly undertrained compared to our text models. There are probably millions of hours of video we haven't used to train the video models yet.

Still seems like early days on this tech. We're nowhere near the limits.

Just a year ago we could only create the distorted video of Will Smith eating spaghetti. A year from now this is going to be even more flawless.

techpineapple•6mo ago
But what does flawless mean? How is this not flawless? I see very few "flaws" in this. But the comprehensiveness of the video training space is probably just minuscule compared to photo and text.

satyrun•6mo ago
It is technically impressive, but I think the whole plot and the story details are pretty bad.

The gum just doesn't work for me. A black-and-white, mega-popular white female jazz singer doesn't really make sense. Maybe a Judy Garland-type singer would work, but she is singing in a style that I don't think makes sense: like someone making what they think jazz vocals should sound like, but who doesn't really listen to much jazz. Billie Holiday wasn't even that popular.

The black-and-white part doesn't work for me either, because you can tell it is just the same color clips desaturated, while real black and white would be on film and look shot on film.

I think the AI stuff is actually pretty good; it's the direction/human creativity here that is not that good. The sound design and music are pretty bad.

I am waiting to see what Aronofsky can do with these tools since the studios won't let him set 30 million dollars on fire again like with The Fountain.

imiric•6mo ago
The reason it looks like many joined clips is that long-form video generation is currently not possible. Most SOTA models only allow generating a few seconds at a time. Past that, it becomes much harder for the model to maintain consistency; objects pop in and out of existence, physics errors are more likely, etc.

I think that these are all limitations that can be improved with scale and iterative improvements. Image and video generation models are not affected as much by the problems that plague LLMs, so it should be possible to improve them by brute force alone.

I'm frankly impressed with this short film. They managed to maintain the appearance of the characters across scenes, the sound and lip-syncing are well done, and the music is great. I could see myself enjoying this type of content in a few years, especially if I can generate it myself.

> I’ll just say I don’t think Christopher Nolan has anything to worry about just yet.

The transition will happen gradually. We'll start seeing more and more generated video in mainstream movies, and traditional directors will get more comfortable with using it. I think we'll still need human directors for a long time to come. They will just use their talents in very different ways.

boznz•6mo ago
The film is about us not accepting a clone for the original. The massive irony is that the film is likely going to generate the same response from the commenters.

fouc•6mo ago
At a meta level it is also about LLM/AI-generated content; the twist at the end makes that clear.

fouc•6mo ago
From the Reddit post:

  AI tools used to make this short film:
  Image Generation: Whisk, Runway, Midjourney, Dreamina, Sora
  Video Generation: FLOW & Veo 3, Dreamina, HIGGSFIELD, Kling AI
  Voice Generation: ElevenLabs
  Lip Sync: FLOW & Veo 3, Dreamina, HeyGen
  Music Generation: Suno AI
  Sound FX Generation: MMAudio, ElevenLabs
  Prompt Optimization: ChatGPT

superfunny•6mo ago
Joanna Stern at the Wall Street Journal presented a project earlier this year where she and a team of editors created a short film using AI tools - not exactly the same thing as here, but the results were very good. They had a bigger budget, too.

You can see it here: https://www.youtube.com/watch?v=US2gO7UYEfY

In their case, they interspersed live actors with AI-generated imagery.