
Can Claude Fly a Plane?

https://so.long.thanks.fish/can-claude-fly-a-plane/
76•casi•2h ago

Comments

thewhitetulip•2h ago
Humans can also fly. Once.
Findecanor•56m ago
Douglas Adams formulated how it would be possible for a human to fly continuously, though.

http://extremelysmart.com/humor/howtofly.php

thewhitetulip•27m ago
I have The Hitchhiker's Guide to the Galaxy, but I never got around to reading it. I might have to read it next.
travisgriggs•2h ago
The bit in the middle where it decides to make its control loop pure P(roportional), presumably dropping the I and D parts, is interesting to me. Seems like a poor choice.

I try to fly about once a week, and I’ve never really tried to self-analyze what my inputs are for what I do. My hunch is that there’s quite a bit of I(ntegral) damping I do to avoid overcorrecting, but also quite a bit of D(erivative) adjustment, especially on approach, in order to “skate to the puck”. Definitely going to have to take it up with some flight buddies. Or maybe those with drone control-loop software experience can weigh in?

aetherspawn•2h ago
Dumping the I part instead of just tuning it properly is kind of an insane thing to do … speaking as an actual controls engineer
gbgarbeb•2h ago
"Actual controls engineers" use PD loops (no I) all the time.
rcxdude•1h ago
In some circumstances, yes (usually when the system itself acts as an integrator somehow). Aircraft controls do not strike me as a system where this is sensible (trimming an aircraft is basically an integral control process).

(d'oh, should have read the specific context: in the case mentioned, it is where the system acts as an integrator (pitch -> altitude), and so pure P control is pretty reasonable)
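The P/PD/PID back-and-forth above can be made concrete. A minimal sketch (hypothetical code, not from the article or this thread): a generic PID update where zeroing `ki` gives the PD loop gbgarbeb mentions, and zeroing `kd` too gives the pure P loop under discussion. As rcxdude notes, pure P on altitude can be reasonable because the plant itself integrates pitch into altitude.

```python
class PID:
    """Textbook PID controller; set gains to zero to get PD or pure P."""

    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        # No derivative on the very first sample.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Pure P control of altitude via pitch (illustrative gain): the aircraft
# itself integrates pitch into altitude, supplying the missing "I" term.
alt_hold = PID(kp=0.02)
cmd = alt_hold.update(error=150.0, dt=1.0)  # 150 ft below target → 3.0
```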

userbinator•2h ago
The real question is, can it keep the plane in one piece?
thewhitetulip•2h ago
And which human will fly in an LLM-operated plane?!
ccozan•2h ago
Please welcome aboard Airthropic Lines!
Markoff•1h ago
I am sure some Ryanair customers would risk it for a good price.
stnikolauswagne•1h ago
Give the whole scheme some sort of mile multiplier and you will get high-frequency fliers salivating over taking an LLM flight with a 12-hour layover in Iceland to get from New York to Portland for those sweet miles.
hdgvhicv•1h ago
Keeping a plane on the ground seems easy enough. Keeping it in the air in one piece would be impossible. Keeping any plane in the air is only temporary.
morpheuskafka•2h ago
Surely at least part of the issue here is that even a fast LLM operates at tens of tokens per second, not to mention the extra tokens for "thinking/reasoning" mode, while a real autopilot probably has response times in the tens of milliseconds. Plus the network latency versus a local LLM.
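The gap described above is easy to put rough numbers on. All figures below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope loop-rate comparison (illustrative numbers only).
autopilot_period_s = 0.02                 # a ~50 Hz inner control loop
tokens_per_command = 40                   # tokens to emit one control decision
tokens_per_second = 60                    # optimistic LLM decode speed
network_rtt_s = 0.2                       # round trip to a hosted model

llm_period_s = tokens_per_command / tokens_per_second + network_rtt_s
print(llm_period_s)                       # ≈ 0.87 s per decision
print(llm_period_s / autopilot_period_s)  # ≈ 43x slower than the inner loop
```

Even with these charitable assumptions, the LLM's decision period is on the order of a second, versus tens of milliseconds for a conventional autopilot.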
webprofusion•2h ago
"Can I Get Claude to Fly a Plane" isn't the same thing. Interesting though; it would be a good test for different models, but it relies on the test harness being good enough that a human could also use the same info to achieve the required outcome, e.g. if the input/output latency is too high then nobody could do it.
est•2h ago
> main issue seemed to be delay from what it saw with screenshots and api data and changing course.

This is where I think Taalas-style hardware AI may dominate in the future, especially for vehicle/plane autopilots, even if it can't update weights. But determinism is actually a good thing.

sigmoid10•2h ago
This is a limitation of LLM I/O, which historically is a bit slow due to the sequential user-vs-assistant chat prompt formats they still train on. But in principle nothing stops you from feeding and retrieving realtime full-duplex input/output with a transformer architecture. It will just get slower as you scale to billions or even trillions of parameters, to the point where running it in the cloud might offer faster end-to-end actions than running it locally. What I could imagine is a small local model handling everyday tasks and a big remote model tuning in for messy situations where a remote human might otherwise have to take over.
leptons•2h ago
Does Claude know the plane isn't at the car wash?
mihaaly•1h ago
A friend participating in some sort of simulated glider tournament trained a neural network to fly one somehow (don't ask for details). I recall the rules were changed to ban that, though not because of him.

Using Claude sounds like overkill and a poor fit at the same time.

bottlepalm•1h ago
AI being able to react quickly to real-time video input is the next thing. Computer use right now is painfully slow, working off a slow screenshot/command loop.
operatingthetan•1h ago
We already have advanced autopilots that can fly commercial airliners. We just don't trust them enough to not have human pilots. I would trust the autopilot more than freaking Claude. We already do, every day.
dewey•1h ago
I don't think anyone is suggesting we should do that...but it's still a fun project to play around with?
codingconstable•1h ago
Agreed. I think that's a really fun way to test Claude's ability to perform an abstract task it's probably not trained on; it was nice to read.
Ekaros•1h ago
I think we can trust them enough not to have human pilots. It is just that having a human in the loop is very useful in not-so-rare scenarios. Say the airfield has too much wind or fog, or another plane has crashed on all the runways... Someone needs to decide what to do next. Or there is some system failure nobody thought about.

And if they are there anyway, they might as well fly for practice.

And no, I would not allow an LLM into the loop for any decision involving the actual flying part.

LiamPowell•1h ago
There's also the issue that when something goes wrong, many people will never trust an autopilot again. Just look at how people have reacted to a Waymo running over a cat in a scenario where most humans would have made the same error. There are now many people calling for self-driving cars to never be allowed on roads, citing that one incident.
girvo•1h ago
Which makes sense: a robot can’t be responsible for anything, a human can be.
ekianjo•1h ago
> We just don't trust them enough to not have human pilots

never mind that most crashes are caused by humans, very rarely by technical issues going amok

stnikolauswagne•1h ago
>never mind that most crashes are caused by humans, very rarely by technical issues going amok

Because humans are the fallback for all the scenarios that the tech cannot reliably cover. And my intuition says that the tech around planes is so heavily audited that only things that work with 99.999...% accuracy will be left to tech.

reeredfdfdf•12m ago
Still, those technological issues do happen, and in those situations it's good to have a human pilot in control. See for example Qantas Flight 72: the autopilot thought the aircraft was stalling and sent the plane into a dive. It could have ended very badly without human supervision.
boring-human•1h ago
> We just don't trust them enough to not have human pilots.

Much of the value of a human crew is as an implicit dogfooding warranty for the passengers. If it weren't safe to fly, the pilots wouldn't risk it day after day.

Come to think of it, it'd be nice if they posted anonymized third-party psych evaluations of the cockpit crew on the wall by the restrooms. The cabin crew would probably appreciate that too.

sandworm101•1h ago
There are soooo many pilot decisions that AI is nowhere near making. Managing a flight is more than flying. It is about making safety decisions during a crisis, from deciding when to abort an approach to deciding when to eject a passenger. Sure, someone on the ground could make many of those decisions, but I prefer such things be decided by someone with literal skin in the game, not a beancounter or lawyer in an office.
ButlerianJihad•1h ago
I sincerely doubt that pilots decide "when to eject a passenger". Mostly it would be the cabin crew: the flight attendants are 100% in charge of flight safety, and they would be managing relationships with passengers, and they would be the ones to make the call. It would ultimately be them calling some kind of law enforcement. If an Air Marshal is onboard already, obviously they would be on the front line as well.

Furthermore, the concept of "ejecting a passenger" from a flight would mostly not be something you do while in the air, unless you're nuts. Ejecting a passenger is either done before takeoff, or your crew decides to divert the flight, or continue to the destination and have law enforcement waiting on the tarmac.

Naturally, pilots get involved when it's a question of where to fly the plane and when to divert, but ultimately the cabin crew is also involved in those decisions about problem passengers.

rounce•43m ago
The Pilot in Command has ultimate legal responsibility for the operation of the flight; ICAO conventions explicitly state this. Whilst in practice the cabin crew will be the ones dealing with the passenger(s) and supplying information to the PIC, it won’t be them making the final decision.
sandworm101•32m ago
No. Cabin crew recommend. Pilots actually decide.
ButlerianJihad•7m ago
Do the pilots also decide whether to issue a parachute to the ejected passenger?
zenmac•15m ago
It would be interesting to see if Claude can land and take off. I don't think the autopilot can do that yet.
LiamPowell•10m ago
Autopilots can, both on airliners and on small planes, although only landing on the latter as far as I know. Airbus ATTOL is probably the most interesting of these in that it's visual rather than ILS-based (note that no commercial airliners are using it).
basfijneman•1h ago
If planes can fly on autopilot, I assume Claude can make a pretty good flight plan. Not sure if Claude can react in time when shit hits the fan.

"spawning 5 subagents"

dnnddidiej•11m ago
"Rate limited try again in 10 seconds"
otabdeveloper4•1h ago
Yes, but for a limited time only.
jmward01•1h ago
The question of 'can it fly' is clearly a 'yes, given a little bit of effort'. Flying isn't hard, autopilots have been around a long time. It is recognizing and dealing with things you didn't anticipate that is hard. I think it is more interesting to have 99% of flying done with automated systems but have an LLM focus on recognizing unanticipated situations and recovering or mitigating them.
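The split proposed above — automation in the control loop, an LLM only watching for the unanticipated — can be pictured with a toy sketch. All names and thresholds here are made up for illustration; the key property is that the advisory layer never writes to the controls:

```python
# Illustrative sketch (not from the article): a deterministic autopilot
# stays in the actuation path; an advisory layer only raises warnings.
def control_step(state, autopilot, advisor):
    command = autopilot(state)        # always the deterministic path
    warning = advisor(state)          # async in practice; advisory only
    log = f"ADVISORY: {warning}" if warning else "nominal"
    return command, log               # warning is surfaced, never actuated

# A trivial advisor flagging an unusual descent rate (made-up threshold).
advisor = lambda s: "high sink rate" if s["vs_fpm"] < -2000 else None
autopilot = lambda s: {"pitch": 0.0}

cmd, log = control_step({"vs_fpm": -2500}, autopilot, advisor)
print(log)  # ADVISORY: high sink rate
```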
stnikolauswagne•1h ago
>I think it is more interesting to have 99% of flying done with automated systems but have an LLM focus on recognizing unanticipated situations and recovering or mitigating them.

Seeing how Claude (or any current LLM) performs in even the most low-stakes coding scenario, I don't think I would ever set foot on a plane where the 1% of riskiest scenarios are decided by one.

amelius•1h ago
Using an LLM doesn't mean it has to take the final decision. You can also use it as a warning system.
stnikolauswagne•51m ago
Is there any indication that current warning systems are insufficient in any way that would be improved by LLM involvement?
vidarh•41m ago
We won't know that until someone has actually investigated how an LLM would do in those scenarios.
red_admiral•35m ago
> Flying isn't hard

Most of the time. Sometimes you get a double bird strike when you've barely cleared the Hudson river, or similar.

progx•1h ago
Prepare for landing: "Rate limit exceeded" (Error 429) ;-)
edu•1h ago
Besides the points in the article, I think a big issue here would be the speed of the input-decision-act loop: it needs to be pretty fast, and Claude would introduce a lot of latency into it.
nairboon•1h ago
Let's hope you don't reach Claude's session limit during approach, while trying to correct a slightly too steep descent angle.
chha•1h ago
...or that the satellite network connection disconnects for some reason.
Markoff•1h ago
I wouldn't really worry about flying, but more about taking off/landing.

Related from December 2025: Garmin Emergency Autoland deployed for the first time

https://www.flightradar24.com/blog/aviation-news/aviation-sa...

stinkbeetle•1h ago
Autoland has been used for 60 years and on much more complicated aircraft than that Beechcraft B200.
nnevod•1h ago
I suppose part of the problem with autolanding a small plane is that it has much less inertia and is much more susceptible to conditions.

Large planes are autolanded in normal conditions with the oversight of a qualified, capable, and backed-up operator; in harsh conditions autoland is not used, as far as I understand.

Autoland systems in small planes are emergency systems, meant to land the plane with a disabled operator in any conditions generally acceptable for flying that plane.

johntopia•1h ago
If there's a timeline where Claude can actually fly a plane, then having it operate nuclear reactors is possible as well.
blitzar•1h ago
Sky King managed it; no reason Claude shouldn't be able to.
ramon156•1h ago
> CRASHED #2, different cause. Plane was stable in a slow descent but between fly.py invocations (~20 sec gap while I logged and computed the next maneuver) there was no active controller. Plane kept descending under its last commanded controls until it hit terrain at 26 ft MSL, 1.7 nm short of the runway. Lesson: never leave the controller idle in flight

Gold
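The lesson in that quote ("never leave the controller idle in flight") is the classic dead-man/watchdog pattern: if a fresh command doesn't arrive in time, revert to a safe hold rather than keep flying a stale command into terrain. A hypothetical sketch, not the article's actual fly.py:

```python
import time

class ControlWatchdog:
    """Dead-man switch between slow decision cycles (illustrative).

    If no fresh command arrives within `timeout_s`, fall back to a
    predefined safe command instead of the last stale one.
    """

    def __init__(self, timeout_s, safe_command):
        self.timeout_s = timeout_s
        self.safe_command = safe_command
        self.last_command = safe_command
        self.last_update = time.monotonic()

    def submit(self, command):
        self.last_command = command
        self.last_update = time.monotonic()

    def current(self):
        # Stale command? Hold safe attitude until the next decision lands.
        if time.monotonic() - self.last_update > self.timeout_s:
            return self.safe_command
        return self.last_command
```

With a ~20-second gap between `fly.py` invocations, a watchdog like this would have swapped the last descent command for a wings-level hold long before terrain.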

monour•1h ago
They say something like this is already being used in some missiles, which hit a school by mistake in the current war.
vachina•1h ago
Give a stochastic text generator control over physics. What could go wrong?
amelius•1h ago
I see you are still in the stochastic parrot phase.
dist-epoch•1h ago
Try using codex-5.3-spark; it has much faster inference and might be able to keep up. And maybe a specialized openrouter model for visual parsing.
xuxu298•1h ago
Haha, even if it can, would you dare to fly with it? :D
rkagerer•1h ago
You could also use your forehead as a hammer, but it's likewise going to result in more pain than gain.

I wouldn't trust Claude to ride my bike, so I certainly wouldn't board its flight.

razorbeamz•1h ago
I'd imagine Claude is, above all else, too slow to fly a plane.
hansmayer•58m ago
Mate, we don't trust it to write an email or the code it generates. Why should we trust it to fly a plane?
sneak•57m ago
Somebody, somewhere, is using it to decide who lives and who dies by bombs. Why not hook it up to a flight sim?
hansmayer•53m ago
Sad, but true.
Paracompact•52m ago
As most others have pointed out, the goal from here wouldn't be to craft a custom harness so that Claude could technically fly a plane 100x worse than specialist autopilots. Instead, what would be more interesting is if Claude's executive control, response latency, and visual processing capabilities were improved in a task-agnostic way so that as an emergent property Claude became able to fly a plane.

It would still be better just to let autopilots do the work, because the point of the exercise isn't improved avionics. But it would be an honestly posed challenge for LLMs.

kqr•30m ago
Lots of people commenting seem not to have read the article. The author didn't hook Claude up directly to the controls and ask it to one-shot a successful flight.

The author tried getting Claude to develop an autopilot script while being able to observe the flight for nearly live feedback. It got three attempts, and did not manage autolanding. (There's a reason real autopilots do that assisted with ground-based aids.)

resiros•29m ago
I think you gave someone an idea for a new RL environment :) It will probably be able to fly in the next iteration.
nelox•27m ago
So Claude crashed because it was busy figuring out how to fly the plane?