frontpage.

A Night Without the Nerds – Claude Opus 4.6, Field-Tested

https://konfuzio.com/en/a-night-without-the-nerds-claude-opus-4-6-in-the-field-test/
1•konfuzio•1m ago•0 comments

Could ionospheric disturbances influence earthquakes?

https://www.kyoto-u.ac.jp/en/research-news/2026-02-06-0
1•geox•3m ago•0 comments

SpaceX's next astronaut launch for NASA is officially on for Feb. 11 as FAA clea

https://www.space.com/space-exploration/launches-spacecraft/spacexs-next-astronaut-launch-for-nas...
1•bookmtn•4m ago•0 comments

Show HN: One-click AI employee with its own cloud desktop

https://cloudbot-ai.com
1•fainir•7m ago•0 comments

Show HN: Poddley – Search podcasts by who's speaking

https://poddley.com
1•onesandofgrain•7m ago•0 comments

Same Surface, Different Weight

https://www.robpanico.com/articles/display/?entry_short=same-surface-different-weight
1•retrocog•10m ago•0 comments

The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
2•Brajeshwar•14m ago•0 comments

The first good Raspberry Pi Laptop

https://www.jeffgeerling.com/blog/2026/the-first-good-raspberry-pi-laptop/
3•Brajeshwar•14m ago•0 comments

Seas to Rise Around the World – But Not in Greenland

https://e360.yale.edu/digest/greenland-sea-levels-fall
2•Brajeshwar•14m ago•0 comments

Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•17m ago•0 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•righthand•20m ago•1 comment

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•21m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•21m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
3•vinhnx•22m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
3•tosh•27m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•32m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•36m ago•1 comment

How do I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•37m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•38m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
5•okaywriting•45m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•48m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•48m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•49m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•50m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•50m ago•0 comments

Expertise, AI, and the Work of the Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•51m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
4•pseudolus•51m ago•2 comments

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•55m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•55m ago•1 comment

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•57m ago•0 comments

π0.5: A VLA with open-world generalization

https://pi.website/blog/pi05
177•lachyg•9mo ago

Comments

beklein•9mo ago
This is amazing! As someone working with industrial robots, normally under strict environmental constraints and control, witnessing such real-world robotics progress truly excites me about the future!

By the way, they’ve open-sourced their π0 model (code and model weights). More information can be found here: https://github.com/Physical-Intelligence/openpi
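
In case it helps anyone get started, loading a pretrained policy looks roughly like the snippet below. This is a sketch from memory of the repo's README; the module paths, config name, and checkpoint URI are all assumptions that may have drifted, so verify against the repo before relying on them.

    # Sketch of loading a pretrained pi0 policy via openpi. Module paths,
    # the config name, and the checkpoint URI are assumptions -- check
    # the repo's README for the current versions.
    from openpi.policies import policy_config
    from openpi.shared import download
    from openpi.training import config as openpi_config

    cfg = openpi_config.get_config("pi0_fast_droid")
    ckpt = download.maybe_download("s3://openpi-assets/checkpoints/pi0_fast_droid")
    policy = policy_config.create_trained_policy(cfg, ckpt)

    obs = {}  # fill with camera images + robot state per the repo's examples
    result = policy.infer(obs)
    print(result["actions"])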

UltraSane•9mo ago
It seems robotics has advanced more in the last 3 years than the previous 20.
htrp•9mo ago
The torrent of funding helps here.
Tireings•9mo ago
ML helps here, as does the progress Nvidia has made with its robotics platform.
rurban•9mo ago
But mostly OpenCV, in its excellent C++ and Python variants. Not everything is modern ML heuristics; some classic AI is still needed.
UltraSane•9mo ago
The vision-language-action models, and the two-level design of a slow planning model plus a fast control model, seem to be a big breakthrough.
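
A minimal sketch of that two-level split, with stubs standing in for the real models (every name here is illustrative, not PI's actual stack):

    import time
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Observation:
        image: bytes
        joint_angles: List[float]

    def slow_planner(obs: Observation, instruction: str) -> str:
        # Stand-in for a large VLM call (~1 Hz): choose the next subgoal.
        return "move gripper toward the sponge"

    def fast_controller(obs: Observation, subgoal: str) -> List[float]:
        # Stand-in for a small action expert (~50 Hz): joint-angle deltas.
        return [0.0] * len(obs.joint_angles)

    def control_loop(get_obs: Callable[[], Observation],
                     apply_action: Callable[[List[float]], None],
                     instruction: str,
                     replan_period: float = 1.0) -> None:
        subgoal, last_plan = None, 0.0
        while True:
            obs = get_obs()
            now = time.monotonic()
            if subgoal is None or now - last_plan >= replan_period:
                subgoal = slow_planner(obs, instruction)   # slow, deliberate
                last_plan = now
            apply_action(fast_controller(obs, subgoal))    # fast, reactive
            time.sleep(0.02)                               # ~50 Hz inner loop
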
gs17•9mo ago
Is the robot platform they're using something they've developed themselves? The paper doesn't seem to mention any details outside of sensors and actuators.
lachyg•9mo ago
Off-the-shelf robots -- we've got our models running on a dozen-plus different robot types (and have this specific generalization demo working on multiple platforms too).
gs17•9mo ago
Great, would you happen to know what's used in this video?
modeless•9mo ago
Here are some of the suppliers for things seen in the videos:

https://arx-x.com/

https://x.com/GalaxeaDynamics

https://www.youtube.com/@HEXMOVEHexmove_Robotic

https://www.trossenrobotics.com/

npodbielski•9mo ago
So you are saying I can buy some robot and a GPU and have the robot fold my laundry? How much? :D
meisel•9mo ago
These variable-length arrays are getting quite advanced
matthewfcarlson•9mo ago
Ignore the haters. This is hilarious
layer8•9mo ago
Precisely my thoughts.
djoldman•9mo ago
I'm genuinely asking (not trying to be snarky)... Why are these robots so slow?

Is it a throughput constraint given too much data from the environment sensors?

Is it processing the data?

I'm curious about where the bottleneck is.

robopolicy•9mo ago
Part of it is that training of these VLAs currently happens on human teleop data, which limits speed (both for safety reasons and because of actual physical speed constraints in the teleoperation pipeline).

Let's see how things change once these pipelines follow the LLM recipe of using more than just human data…

dheera•9mo ago
Not a PI employee, but diffusion policies are like diffusion models for image generation: they generate actions from noise in multiple steps. With current compute you can't run 100+ Hz control loops with that kind of architecture.

Some combination of distillation, new architectures, and faster compute can eventually attack these problems. Historically, as long as something in tech has been shown to be possible, speed has almost always been a non-issue in the years afterwards.

For now even getting a robot to understand what to do in the physical world is a major leap from before.
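
To make the latency point concrete, here is a toy sketch of why multi-step denoising caps the control rate; the network call is a stub and the numbers are only illustrative:

    import numpy as np

    # Every control step needs K sequential forward passes of the network.
    K = 50        # denoising steps, a typical order of magnitude
    ACT_DIM = 7   # e.g. a 7-DoF arm

    def denoise_step(actions, obs, k):
        # Stand-in for one forward pass of the noise-prediction network.
        return actions * 0.9

    def sample_action(obs, rng):
        actions = rng.standard_normal(ACT_DIM)       # start from pure noise
        for k in reversed(range(K)):
            actions = denoise_step(actions, obs, k)  # K sequential net calls
        return actions

    print(sample_action(obs=None, rng=np.random.default_rng(0)))
    # If one forward pass takes 2 ms, a sample costs K * 2 ms = 100 ms,
    # i.e. a ~10 Hz ceiling -- nowhere near a 100+ Hz control loop.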

amelius•9mo ago
You're probably right, but for some tasks I suppose you need processing speed, for example bipedal walking.
davidguetta•9mo ago
That's not the reason. Pi0 was predicting at roughly 10 Hz, and each prediction was a temporal chunk of up to 50 points, so it could go up to 500 Hz.

It's slow because the original teleop is slow, and controllers learned through imitation are always a bit slower.

Source: I work on this (not at PI).
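
The arithmetic: inference at 10 Hz, each call emitting a chunk of 50 actions, gives 10 × 50 = 500 actions per second. A hypothetical playback loop (the `policy` and `robot` interfaces are illustrative):

    import time

    INFERENCE_HZ = 10
    CHUNK_LEN = 50
    EFFECTIVE_HZ = INFERENCE_HZ * CHUNK_LEN  # 500 actions per second

    def run(policy, robot, dt: float = 1.0 / EFFECTIVE_HZ) -> None:
        # A real system would predict the next chunk in parallel with
        # playback; it is shown sequentially here for clarity.
        chunk = policy.predict_chunk(robot.observe())
        while True:
            next_chunk = policy.predict_chunk(robot.observe())  # ~100 ms
            for action in chunk:
                robot.apply(action)   # 500 Hz playback within the chunk
                time.sleep(dt)
            chunk = next_chunk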

cloudbonsai•9mo ago
Another practical reason is that it's dangerous.

Pi0 uses ARX robot arms, which weigh 3-4 kg each. They can easily break things or harm people if you allow them to move fast.

davidguetta•9mo ago
Not really if you clamp the torque aggressively.

But yeah, in general the physical world is more dangerous than we tend to think.

michaelt•9mo ago
When you're operating your robot around humans, you want to be very confident it won't injure anyone. It'd be pretty bad if a bug in your code meant that, instead of putting the cast iron frying pan in the dishwasher, it sent the pan flying across the room.

One way of doing that is to write code with no bugs or unpredictable behaviour, a nigh-impossible feat - especially once you've got ML models in the mix.

Another option is to put a guard cage around your robot so nobody can enter pan-throwing distance without deactivating the robot first. But obviously that's not practical in a home environment.

Another option is just to go slowly all the time. The pan won't fly very far if the robot only moves 6 inches per second.
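
The "just go slowly" option is essentially a hard cap on commanded speed. A minimal sketch, assuming a Cartesian velocity command (0.15 m/s is roughly six inches per second; the interface is illustrative):

    import numpy as np

    MAX_SPEED = 0.15  # m/s, roughly "6 inches per second"

    def limit_velocity(v_cmd: np.ndarray, max_speed: float = MAX_SPEED) -> np.ndarray:
        speed = float(np.linalg.norm(v_cmd))
        if speed > max_speed:
            v_cmd = v_cmd * (max_speed / speed)  # keep direction, cap magnitude
        return v_cmd

    print(limit_velocity(np.array([1.0, 0.0, 0.0])))  # -> [0.15 0.   0.  ]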

reverius42•9mo ago
Putting the cast iron frying pan in the dishwasher would also be pretty bad.
idiotsecant•9mo ago
Maybe this robot is satisfying a rust production utility function. Don't be so bioist. All utility functions are beautiful.
jagged-chisel•9mo ago
Annoying perhaps. But not bad.
ethan_smith•9mo ago
The primary bottleneck is typically the motion planning system that must continuously solve complex optimization problems to ensure safe trajectories while avoiding collisions in dynamic environments.
vhartman•9mo ago
These models typically predict actions directly; there is no motion planning going on here.
ajhai•9mo ago
It is inference latency most of the time. These VLA models take in an image + state + text and spit out a set of joint angle deltas.

Depending on the model being used, we may get just one set of joint-angle deltas or a series of them. To complete a task, we capture images from the cameras and the current joint angles, and send them to the model along with the task text to get the joint-angle changes to apply. Once the joint angles are updated, we check whether the task is complete (this can come from the model too). We run this loop until the task is complete.

Combine this with the motion planning that has to happen to make sure the joint angles we get are safe and don't cause collisions with the surroundings, and you get the overall slowness. A sketch of the loop is below.
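
Here `camera`, `arm`, and `vla_model` are hypothetical placeholders, not any vendor's API:

    def run_task(task_text, camera, arm, vla_model, max_steps=1000):
        for _ in range(max_steps):
            image = camera.capture()
            state = arm.joint_angles()
            # One inference: image + proprioceptive state + task text in,
            # joint-angle deltas (plus a done flag) out.
            deltas, done = vla_model.infer(image, state, task_text)
            arm.apply_deltas(deltas)  # infer() latency dominates each pass
            if done:
                return True
        return False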

airstrike•9mo ago
I'm just a layman, but I can't see this design scaling. It's way too slow and "hard" for fine motor tasks like cleaning up a kitchen or being anywhere around humans, really.

I think the future is in a "softer" type of robot that can sense whether its fingers are pushing a cabinet door (or facing resistance) and adjust accordingly. A quick Google search turns up this example (an animated render), which is closer to what I imagine the ultimate solution will be: https://compliance-robotics.com/compliance-industry/

Human flesh is way too squishy for us to allow hard tools to interface with it, unless the human is in control. The difference between a blunt weapon and the robot from TFA is that the latter is very slow and on wheels.
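
The "sense resistance and adjust" behaviour is usually called compliance or admittance control. A toy sketch, with an illustrative gain and no claim about how the linked product works:

    import numpy as np

    # Measured contact force pushes the commanded position away from the
    # nominal target instead of fighting it.
    K_ADMIT = 0.002  # metres of yield per newton of contact force

    def compliant_target(nominal_pos: np.ndarray,
                         measured_force: np.ndarray) -> np.ndarray:
        return nominal_pos + K_ADMIT * measured_force

    # Pressing into a cabinet door with 20 N along -x backs the target
    # off by 4 cm instead of pushing harder.
    print(compliant_target(np.zeros(3), np.array([-20.0, 0.0, 0.0])))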

nullc•9mo ago
The development here is primarily in the model. If someone invents the 'brains' a robot needs to do useful domestic tasks then there will suddenly be a lot of incentive to build the right body for it.
airstrike•9mo ago
Right, but ISTM that building the right body is a much harder problem than people are willing to admit.

Software isn't constrained by the harsh truths of physical reality.

huydotnet•9mo ago
Amazing! On a fun note, I believe if a human kid were cleaning up the spill and threw the sponge into the sink like that, the kid would be in trouble. XD
scotty79•9mo ago
Cleaning a spill consists mostly of spreading it over the whole counter.
th0ma5•9mo ago
Do the general laws of demos apply here? That any automation shown is the extent of capabilities, not the start?
fwip•9mo ago
One thing I notice is that they specify that the robot has never seen the homes before, but certain objects, like the laundry baskets, are identical.

Doing your demo is significantly easier if you've already programmed/trained the robot to recognize the specific objects it has to interact with, even if those items are in different locations.

horhay•9mo ago
They also got these things working in corners of a location instead of stacking tasks across different areas of the same location. And even on these "one-area" task groups it can fail a good amount. Kudos to them for showing the failures, though.
dissahc•9mo ago
Isn't object recognition essentially solved? AI models were beating humans at image classification (in terms of error rate) back in 2016. Even if this particular model isn't the best at it, they can always call out to an API or use a secondary on-device VLM with stronger object recognition capabilities.
th0ma5•9mo ago
Thank you all I guess the answer is yes.
desertmonad•9mo ago
Finally, machines doing the work we don't want to do.
ajhai•9mo ago
Something I have been working on for a few months now: https://x.com/ajhai/status/1899528923303809217
bytesandbits•9mo ago
Most of it is open source. Their VLAs are based on Gemma models plus vision encoders, plus their own action experts. You can download and play around with or fine-tune their Pi0 VLAs from their servers directly (JAX format) or from the Hugging Face LeRobot safetensors port. They also have notebooks and code in their repo to get started with fine-tuning. Inference runs on a single RTX 4090, streamed over WiFi to the robot.
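
Loading the LeRobot port looks roughly like this; the module path, hub id, and batch keys are assumptions from memory, so double-check against the LeRobot docs before relying on them:

    # Assumed module path and hub id -- verify against the lerobot docs.
    from lerobot.common.policies.pi0.modeling_pi0 import PI0Policy

    policy = PI0Policy.from_pretrained("lerobot/pi0")

    batch = {
        "observation.images.top": ...,  # camera tensor(s); keys depend on config
        "observation.state": ...,       # joint angles
        "task": "fold the towel",
    }
    action = policy.select_action(batch)
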
amelius•9mo ago
OpenAI is among their investors, which makes me wonder how long their work remains "open".
yencabulator•9mo ago
VLA = vision-language-action, a kind of machine learning model
zx8080•9mo ago
> Investors
> We are grateful for the support of Bond, Jeff Bezos, Khosla Ventures, Lux Capital, OpenAI, Redpoint Ventures, Sequoia Capital, and Thrive Capital.