frontpage.

The Sun is twisting Mercury's crust in unexpected ways

https://bgr.com/science/the-sun-is-twisting-mercurys-crust-in-unexpected-ways/
1•Bluestein•20s ago•0 comments

Solving (Almost) cybersecurity once and for all

https://adaptive.live/blog/how-we-can-almost-solve-cyber-security-once-and-for-all
1•debarshri•39s ago•0 comments

I Love GitOps

https://newsletter.masterpoint.io/p/i-love-gitops
1•mooreds•59s ago•0 comments

What It's Like to Be 'Mind Blind'

https://time.com/6155443/aphantasia-mind-blind/
1•mucha•4m ago•0 comments

Embabel: Framework for Building AI Agents with Java

https://thenewstack.io/meet-embabel-a-framework-for-building-ai-agents-with-java/
1•andrewstetsenko•4m ago•0 comments

Epic Games and Qualcomm Are Bringing Fortnite to Windows 11 on Arm

https://www.thurrott.com/games/318482/epic-games-and-qualcomm-are-bringing-fortnite-to-windows-11-on-arm
1•mooreds•5m ago•0 comments

Marginalia mania: how 'annotating' books went from no-no to BookTok's next trend

https://www.theguardian.com/books/2025/jun/23/marginalia-mania-how-annotating-books-went-from-big-no-no-to-booktoks-next-trend
1•herbertl•5m ago•0 comments

The AI Revolution: Human-like interfaces, not intelligence

https://jaimefh.com/writing/ai_revolution_interfaces
1•jiwidi•6m ago•0 comments

Snyk Acquires Invariant Labs

https://snyk.io/news/snyk-acquires-invariant-labs-to-accelerate-agentic-ai-security-innovation/
2•od0•7m ago•0 comments

The Secret Rules of the Terminal

https://wizardzines.com/zines/terminal/
1•marvinborner•9m ago•0 comments

Scaling Pinterest ML Infrastructure with Ray: From Training to ML Pipelines

https://medium.com/pinterest-engineering/scaling-pinterest-ml-infrastructure-with-ray-from-training-to-end-to-end-ml-pipelines-4038b9e837a0
1•herbertl•11m ago•0 comments

Show HN: I built an AI thumbnail generator for YouTubers who can't design

https://thumbo.io
1•isacbuilds•11m ago•0 comments

Amish company embraced robots–then made an even bolder bet

https://fortune.com/2025/06/24/flextur-robots-automation-manufacturing-small-business/
2•Bluestein•11m ago•0 comments

AI doesn't have to reason to take your job

https://www.vox.com/future-perfect/417325/artificial-intelligence-apple-reasoning-openai-chatgpt
1•lr0•12m ago•0 comments

The Reenchanted World: On finding mystery in the digital age

https://harpers.org/archive/2025/06/the-reenchanted-world-karl-ove-knausgaard-digital-age/
1•herbertl•13m ago•0 comments

Adding to markwhen documents via SMS and email

https://docs.markwhen.com/meridiem/api/sms-email
1•koch•18m ago•0 comments

Alcohol-soaked star system could explain why life, including us, was able to form

https://www.livescience.com/space/exoplanets/alcohol-soaked-star-system-could-help-explain-why-life-including-us-was-able-to-form
1•Bluestein•18m ago•0 comments

Personal Copilot: Train Your Own Coding Assistant

https://huggingface.co/blog/personal-copilot
1•auraham•19m ago•0 comments

Agency is your secret edge

https://alanwu.xyz/posts/agency/
2•lunw•20m ago•0 comments

Stealthy ship hull cuts through waves like butter

https://news.engin.umich.edu/2025/06/stealthy-ship-hull-cuts-through-waves-like-butter/
1•gnabgib•20m ago•0 comments

What's Predictive in an AI Persona?

https://askrally.com/article/whats-predictive-in-a-persona
1•virtual_rf•21m ago•0 comments

The German automotive industry wants to develop open-source software together

https://www.vda.de/en/press/press-releases/2025/250624_PM_Automotive_industry_signs_Memorandum_of_Understanding
2•smartmic•22m ago•0 comments

I wrote 280 articles about web scraping. Here's their index grouped by tag

https://github.com/TheWebScrapingClub/ArticleIndex
2•PigiVinci83•22m ago•0 comments

LLMs can hoover up data from books, judge rules

https://www.theregister.com/2025/06/24/anthropic_book_llm_training_ok/
2•rntn•23m ago•0 comments

Cut Django Database Latency by 50-70ms with Native Connection Pooling

https://saurabh-kumar.com/articles/2025/06/cut-django-database-latency-by-50-70ms-with-native-connection-pooling/
1•selectnull•23m ago•0 comments

Show HN: Gitbasher – A simple bash utility to make Git easy to use

https://github.com/maxbolgarin/gitbasher
2•maxbolgarin•25m ago•0 comments

Biocide overdose blunder suspected in A321 dual-engine incident

https://www.flightglobal.com/safety/biocide-overdose-blunder-suspected-in-a321-dual-engine-incident/138004.article
4•worik•25m ago•0 comments

Cheapest DIY Microscope (1 min video)

https://www.youtube.com/shorts/SMjOA-P95CM
1•rmason•26m ago•0 comments

Owsla Manifesto – Can we fix Education?

https://owsla.io/manifesto
1•ChilledTonic•27m ago•0 comments

Strike Set Back Iran's Nuclear Program by Only a Few Months, U.S. Report Says

https://www.nytimes.com/2025/06/24/us/politics/iran-nuclear-sites.html
3•zzzeek•28m ago•7 comments

Gemini Robotics On-Device brings AI to local robotic devices

https://deepmind.google/discover/blog/gemini-robotics-on-device-brings-ai-to-local-robotic-devices/
119•meetpateltech•6h ago

Comments

suninsight•5h ago
This will not end well.
sajithdilshan•5h ago
I wonder what kind of guardrails (like the Three Laws of Robotics) there are to prevent the robots from going crazy while executing the prompts
hn_throwaway_99•5h ago
A power cord?
sajithdilshan•5h ago
what if they are battery powered?
msgodel•4h ago
Usually I put master disconnect switches on my robots just to make working on them safe. I use cheap toggle switches, though; I'm too cheap for the big red spinny ones.
pixl97•4h ago
[Robot learns to superglue the switch open]
msgodel•4h ago
It's only going to do that if you RL it with episodes that include people shutting it down for safety. The RL I've done with my models is all simulations that don't even simulate the switch.
pixl97•2h ago
Which will likely work only for on-machine AI, but it seems to me any very complicated actions/interactions with the world may require external interactions with LLMs that know these kinds of actions. Or in the future the models will be far larger and more expansive on-device, containing this kind of knowledge.

For example, what if you need to train the model to keep unauthorized people from shutting it off?

msgodel•2h ago
Having a robot near people with no master off switch sounds like a dumb idea.
bigyabai•4h ago
That's what we use twelve gauge buckshot for, here in America.
ctoth•5h ago
The laws of robotics were literally designed to cause conflict and facilitate strife in a fictional setting--I certainly hope no real goddamn system is built like that.

> To ensure robots behave safely, Gemini Robotics uses a multi-layered approach. "With the full Gemini Robotics, you are connecting to a model that is reasoning about what is safe to do, period," says Parada. "And then you have it talk to a VLA that actually produces options, and then that VLA calls a low-level controller, which typically has safety critical components, like how much force you can move or how fast you can move this arm."
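
A minimal sketch of the layered pattern that quote describes, assuming hypothetical reasoner/vla/controller objects (none of these names are Google's actual API): a reasoning model screens the request, a VLA proposes a motion, and a safety-critical low-level controller clamps force and speed before anything moves.

    # Illustrative only: hypothetical three-layer safety pipeline, not Google's API.
    MAX_FORCE_N = 20.0   # hard limits enforced at the lowest layer
    MAX_SPEED_MS = 0.5

    def execute(request, image, reasoner, vla, controller):
        # Layer 1: a reasoning model decides whether the task is safe at all.
        if not reasoner.is_safe(request):
            return "refused"
        # Layer 2: the VLA turns (image, instruction) into a candidate motion.
        action = vla.propose(image, request)
        # Layer 3: a safety-critical controller clamps force and speed.
        action.force = min(action.force, MAX_FORCE_N)
        action.speed = min(action.speed, MAX_SPEED_MS)
        return controller.apply(action)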

conception•4h ago
Of course someone will. The terror nexus doesn’t build itself, yet, you know.
hlfshell•3h ago
The generally accepted term for the research around this in robotics is Constitutional AI (https://arxiv.org/abs/2212.08073), which has been cited/experimented with in several robotics VLAs.
JumpCrisscross•22m ago
Is there any evidence we have the technical ability to put such ambiguous guardrails on LLMs?
asadm•2h ago
in practice, those laws are bs.
suyash•4h ago
What sort of hardware does the SDK run on? Can it run on a modern Raspberry Pi?
ethan_smith•4h ago
According to the blog post, it requires an NVIDIA Jetson Orin with at least 8GB RAM, and they've optimized for Jetson AGX Orin (64GB) and Orin NX (16GB) modules.
v9v•4h ago
Could you quote where in the blog post they claim that? CTRL+F "Jetson" gave no results in TFA.
moffkalast•2h ago
Yeah they didn't really mention anything, I was almost getting my hopes up that Google might be announcing a modernized Coral TPU for the transformer age, but I guess not. It's probably all just API calls to their TPUv6 data centers lmao.
martythemaniak•3h ago
You can think of these as essentially multi-modal LLMs, which is to say you can have very small/fast ones (SmolVLA - 0.5B params) that are good at specific tasks, and larger/slower more general ones (OpenVLA - a finetuned Llama2 7B). So an RPi could be used for some very specific tasks, but even the more general ones could run on beefy consumer hardware.
Toritori12•4h ago
Does anyone know how easy it is to join the "trusted tester program" and whether they offer modules that you can easily plug in to run the SDK?
martythemaniak•3h ago
I've spent the last few months looking into VLAs and I'm convinced that they're gonna be a big deal, ie they very well might be the "chatgpt moment for robotics" that everyone's been anticipating. Multimodal LLMs already have a ton of built-in understanding of images and text, so VLAs are just regular MMLLMs that are fine-tuned to output a specific sequence of instructions that can be fed to a robot.

OpenVLA, which came out last year, is a Llama2 fine-tune with extra image encoding that outputs a 7-tuple of integers. The integers are rotation and translation inputs for a robot arm. If you give a vision Llama2 a picture of an apple and a bowl and say "put the apple in the bowl", it already understands apples and bowls, and knows the end state should be the apple in the bowl, etc. What's missing is a series of tuples that will correctly manipulate the arm to do that, and the way they did it is through a large number of short instruction videos.

The neat part is that although everyone is focusing on robot arms manipulating objects at the moment, there's no reason this method can't be applied to any task. Want a smart lawnmower? It already understands "lawn", "mow", "don't destroy toy in path", etc.; it just needs a finetune on how to correctly operate a lawnmower. Sam Altman made some comments about having self-driving technology recently and I'm certain it's a ChatGPT-based VLA. After all, if you give ChatGPT a picture of a street, it knows what's a car, a pedestrian, etc. It doesn't know how to output the correct turn/go/stop commands, and it does need a great deal of diverse data, but there's no reason why it can't do it. https://www.reddit.com/r/SelfDrivingCars/comments/1le7iq4/sa...

Anyway, super exciting stuff. If I had time, I'd rig a snowblower with a remote control setup, record a bunch of runs and get a VLA to clean my driveway while I sleep.
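
A rough sketch of what a VLA as described above boils down to, using a hypothetical model.generate interface rather than OpenVLA's real API: the model reads an image plus an instruction and emits a 7-tuple of discretized deltas for the arm.

    # Illustrative sketch only; the model object and its generate() call are
    # hypothetical stand-ins, not the real OpenVLA interface.
    from PIL import Image

    def act(model, image_path, instruction):
        image = Image.open(image_path)
        # The VLA is just a multimodal LLM whose output vocabulary includes
        # discretized action tokens instead of (only) words.
        tokens = model.generate(image=image, prompt=instruction, max_new_tokens=7)
        # Decode into (dx, dy, dz, droll, dpitch, dyaw, gripper), the 7-tuple
        # the comment above describes.
        dx, dy, dz, droll, dpitch, dyaw, grip = [int(t) for t in tokens]
        return dx, dy, dz, droll, dpitch, dyaw, grip

    # A control loop would call act() on each camera frame and send the deltas
    # to the arm until the episode terminates.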

ckcheng•3h ago
VLA = Vision-language-action model: https://en.wikipedia.org/wiki/Vision-language-action_model

Not https://public.nrao.edu/telescopes/VLA/ :(

For completeness, MMLLM = Multimodal Large language model.

generalizations•2h ago
I will be surprised if VLAs stick around, based on your description. That sounds far too low-level. Better hand that off to the 'nervous system' / kernel of the robot - it's not like humans explicitly think about the rotation of their hip & ankle when they walk. Sounds like a bad abstraction.
Workaccount2•1h ago
I don't think transformers will be viable for self-driving cars until they can both:

1) Properly recognize what they are seeing without having to lean so hard on their training data. Go photoshop a picture of a cat and give it a 5th leg coming out of its stomach. No LLM will be able to properly count the cat's legs (they will keep saying 4 legs no matter how many times you insist they recount).

2) Be extremely fast at outputting tokens. I don't know where the threshold is, but it's probably going to be a non-thinking model (at first) and probably need something like Cerebras or a diffusion architecture to get there.
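
A back-of-the-envelope on point 2, with assumed numbers (seven action tokens per control step, a 30 Hz control loop) rather than anything from the article:

    # Assumed numbers for illustration only.
    action_tokens_per_step = 7   # e.g. one pose tuple per control step
    control_rate_hz = 30         # a modest closed-loop rate for driving
    required = action_tokens_per_step * control_rate_hz
    print(required, "tokens/sec just for actions")  # 210
    # Any "thinking" tokens multiply this, which is the point about needing
    # non-thinking models or very fast inference (Cerebras, diffusion).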

martythemaniak•38m ago
1. Well, based on Karpathy's talks on Tesla FSD, his solution is to actually make the training set reflect everything you'd see in reality. The tricky part is that if something occurs 0.0000001% of the time IRL and something else occurs 50% of the time, they both need to make up 5% of the training corpus. The thing with multimodal LLMs is that lidar/depth input can just be another input that gets encoded along with everything else, so for driving "there's a blob I don't quite recognize" is still a blob you have to drive around.

2. Figure has a dual-model architecture which makes a lot of sense: a 7B model that does higher-level planning and control and runs at 8 Hz, and a tiny 0.08B model that runs at 200 Hz and does the minute control outputs. https://www.figure.ai/news/helix
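
A toy sketch of that dual-rate layout, with hypothetical planner/controller/camera/motors objects; only the 8 Hz and 200 Hz figures come from the comment. The big model refreshes a plan slowly while the small model emits motor commands fast, reusing the latest plan in between.

    # Toy dual-rate control loop; names are hypothetical, not Figure's API.
    import time

    def run(planner, controller, camera, motors, duration_s=5.0):
        plan = None
        last_plan_t = 0.0
        start = time.monotonic()
        while time.monotonic() - start < duration_s:
            now = time.monotonic()
            # Slow path: the large model refreshes the latent plan at ~8 Hz.
            if now - last_plan_t >= 1 / 8:
                plan = planner.infer(camera.frame())
                last_plan_t = now
            # Fast path: the tiny model turns the latest plan into motor
            # commands at ~200 Hz.
            motors.apply(controller.step(plan, camera.frame()))
            time.sleep(1 / 200)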

baron816•3h ago
I’m optimistic about humanoid robotics, but I’m curious about the reliability issue. Biological limbs and hands are quite miraculous when you consider that they are able to constantly interact with the world, which entails some natural wear and tear, but then constantly heal themselves.
marinmania•2h ago
It gets either very exciting or very spooky thinking about the possibilities in the near future.

I had always assumed that such a robot would be very specific (like a cleaning robot) but it does seem like by the time they are ready they will be very generalizable.

I know they would require quite a few sensors and motors, but compared to self-driving cars their liability would be less and they would use far less material.

fragmede•1h ago
The exciting part comes when two robots are able to do repairs on each other.
pryelluw•1h ago
2 bots 1 bolt?
marinmania•1h ago
I think this is the spooky part. I feel dumb saying it, but is there a point where they are able to coordinate and build a factory to build chips/more of themselves? Or other things entirely?
didip•2h ago
I think those problems can be solved with further research in material science, no? Combine that with very responsive but low-torque servos, and I think this is a solvable problem.
UltraSane•1h ago
Consumable components could be automatically replaced by other robots.
zzzeek•1h ago
THANK YOU.

Please make robots. LLMs should be put to work for *manual* tasks, not art/creative/intellectual tasks. The goal is to improve humanity, not put us to work putting screws inside of iPhones.

(five years later)

what do you mean you are using a robot for your drummer

Workaccount2•1h ago
I continue to be impressed by how Google stealth-releases fairly groundbreaking products, and then (usually) just kind of forgets about them.

Rather than an advertising blitz and flashy press events, they just do blog posts that tech heads circulate, forget about, and then wonder 3-4 years later, "whatever happened to that?"

This looks awesome. I look forward to someone else building a start-up on this and turning it into a great product.

jagger27•58m ago
These are going to be war machines, make absolutely no mistake about it. On-device autonomy is the perfect foil to escape centralized authority and accountability. There’s no human behind the drone to charge for war crimes. It’s what they’ve always dreamed of.

Who’s going to stop them? Who’s going to say no? The military contracts are too big to say no to, and they might not have a choice.

The elimination of toil will mean the elimination of humans altogether. That's where we're headed. There will be no profitable life left for you, and you will be liquidated by "AI-Powered Automation for Every Decision"[0]. Every. Decision. It's so transparent. The optimists in this thread are baffling.

0: https://www.palantir.com/

mateus1•51m ago
MIT spinoff, Google-owned Boston Dynamics pledged not to militarize their robots, which is very hard to believe given they're backed by DARPA, the DoD/military investment arm.
jagger27•47m ago
Militarize is just bad marketing. Call them cleaning machines and put them to work on dirty things.
paxys•42m ago
Was owned by Google. Then Softbank. Now Hyundai.
JumpCrisscross•25m ago
> These are going to be war machines, make absolutely no mistake about it

Of course they will. Practically everything useful has a military application. I'm not sure why this is considered a hot take.