frontpage.

Carbon footprint from Israel's war on Gaza exceeds 100 countries

https://www.theguardian.com/world/2025/may/30/carbon-footprint-of-israels-war-on-gaza-exceeds-that-of-many-entire-countries
1•lr0•1m ago•0 comments

Paperclips

https://www.decisionproblem.com/paperclips/index2.html
1•tomrod•3m ago•0 comments

Built an open-source web scraper for AI agents – seeking feedback

https://github.com/any4ai/AnyCrawl
1•ntbperst•4m ago•1 comment

P-1's attempt to build an engineering brain

https://www.ibm.com/think/news/physical-ai-age-p-1-engineering-brain
2•sirregex•6m ago•0 comments

Living Tattoos for Buildings

https://www.tugraz.at/en/tu-graz/services/news-stories/tu-graz-news/singleview/article/lebende-tattoos-fuer-gebaeude
1•geox•8m ago•0 comments

XChat is rolling out with encryption, vanishing messages, ability to send files

https://twitter.com/elonmusk/status/1929238157872312773
3•tosh•10m ago•1 comment

gitsquash - Interactive CLI tool to squash Git commits

https://github.com/helloanoop/gitsquash
1•helloanoop•11m ago•1 comment

Good Writing

https://www.paulgraham.com/goodwriting.html
1•rbanffy•12m ago•0 comments

LLM-Powered Method Resolution with Synonllm

https://worksonmymachine.substack.com/p/llm-powered-method-resolution-with
1•Stwerner•15m ago•0 comments

Ask HN: Recommend data privacy tools to limit big tech from collecting our data

1•Desafinado•15m ago•1 comment

InstantAPI

https://web.instantapi.ai/
1•handfuloflight•15m ago•0 comments

OneDrive Gives Web Apps Full Read Access to All Files

https://www.securityweek.com/onedrive-gives-web-apps-full-read-access-to-all-files/
3•Bender•15m ago•0 comments

Chinese Hacking Group APT41 Exploits Google Calendar to Target Governments

https://www.securityweek.com/chinese-hacking-group-apt41-exploits-google-calendar-to-target-governments/
1•Bender•16m ago•0 comments

Google and DOJ tussle over how AI will remake the web in antitrust closing args

https://arstechnica.com/gadgets/2025/05/google-and-doj-tussle-over-how-ai-will-remake-the-web-in-antitrust-closing-arguments/
1•Bender•17m ago•0 comments

EtherTrip: Psychedelic Ethereum Galaxy Visualizer

https://shayanb.github.io/EtherTrip/
1•shayanbahal•23m ago•0 comments

Tilt Shift Generator Gallery

https://www.tiltshiftgenerator.com/gallery.php
1•susam•27m ago•0 comments

Autopoietic Networks (a few more examples)

https://gbragafibra.github.io/2025/05/27/autopoietic_nets2.html
1•Fibra•27m ago•0 comments

Capuchin monkeys develop 'fad' of abducting baby howlers, cameras reveal

https://phys.org/news/2025-05-capuchin-monkeys-bizarre-fad-abducting.html
1•wglb•28m ago•0 comments

Engagement = % of Humanity's Time Hijacked and Wasted

7•dwaltrip•36m ago•0 comments

Strategy to fabricate highly performing thin-film tin perovskite transistors

https://techxplore.com/news/2025-05-strategy-fabricate-highly-thin-tin.html
1•wglb•37m ago•2 comments

Show HN: OfflineLLM: Live Voice Chat with DeepSeek, Llama on iOS and VisionOS

https://offlinellm.bilaal.co.uk/
2•bilaal_dc5631•43m ago•0 comments

The Pedestrians Who Abetted a Hawk's Deadly Attack

https://www.theatlantic.com/science/archive/2025/05/hawk-new-jersey-traffic/682913/
1•twalichiewicz•43m ago•1 comment

Louisiana lawmakers push 'chemtrail' ban legislation through the House

https://www.fox8live.com/2025/05/30/louisiana-lawmakers-push-chemtrail-ban-legislation-through-house/
3•zzzeek•45m ago•3 comments

Apple will reportedly open up its local AI models to third-party apps

https://www.theverge.com/news/670868/apple-intelligence-ai-third-party-developer-access-model
2•handfuloflight•46m ago•0 comments

Thing cannot write computer programs

https://mastodon.social/@jcoglan/114608805416238733
4•sir_pepe•52m ago•0 comments

From Builder to Guide: What I Miss Most About Being "Just an Engineer"

https://flyingwhilebuilding.com/
1•flyingbuilding•54m ago•1 comment

Quaternions – Freya Holmer [video]

https://www.youtube.com/watch?v=PMvIWws8WEo
2•jalict•57m ago•0 comments

.

https://samwarnick.com/blog/making-the-bullpen-trading-card-game/
1•catskull•57m ago•1 comment

Show HN: You2Anki – Turn Videos into Anki Vocabulary Flashcards

https://you2anki.com/
3•isege•58m ago•2 comments

Huawei AI CloudMatrix 384 – China's Answer to Nvidia GB200 NVL72 – SemiAnalysis

https://semianalysis.com/2025/04/16/huawei-ai-cloudmatrix-384-chinas-answer-to-nvidia-gb200-nvl72/
2•rbanffy•58m ago•0 comments

Show HN: I built an AI agent that turns ROS 2's turtlesim into a digital artist

https://github.com/Yutarop/turtlesim_agent
29•ponta17•1d ago
I'm a grad student studying robotics, with a particular interest in the intersection of LLMs and mobile robots. Recently, I discovered how easily LangChain enables the creation of AI agents, and I wanted to explore how such agents could interact with simulated environments.

So, I built TurtleSim Agent, an AI agent that turns the classic ROS 2 turtlesim turtle into a creative artist.

With this agent, you can give plain English commands like “draw a triangle” or “make a red star,” and it will reason through the instructions and control the simulated turtle accordingly. I’ve included demo videos on GitHub. Behind the scenes, it uses an LLM to interpret the text, decide what actions are needed, and then call a set of modular tools (motion, pen control, math, etc.) to complete the task.
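
To make the "modular tools" part concrete, here is a hedged sketch of what a single pen-control tool might look like (names and structure are illustrative, not the repo's actual code; turtlesim's /turtle1/set_pen service and LangChain's @tool decorator are real):

    # Illustrative sketch of one modular tool; see the repo for the real code.
    import rclpy
    from rclpy.node import Node
    from turtlesim.srv import SetPen
    from langchain_core.tools import tool

    rclpy.init()
    node = Node("pen_tool_node")  # hypothetical node name
    pen_client = node.create_client(SetPen, "/turtle1/set_pen")

    @tool
    def set_pen_color(r: int, g: int, b: int) -> str:
        """Set the turtle's pen color; each channel is 0-255."""
        req = SetPen.Request()
        req.r, req.g, req.b = r, g, b
        req.width = 3   # pen thickness in pixels
        req.off = 0     # 0 = pen down (drawing), 1 = pen up
        pen_client.call_async(req)
        return f"pen set to rgb({r}, {g}, {b})"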

If you're interested in LLM+robotics, ROS, or just want to see a turtle become a digital artist, I'd love for you to check it out:

GitHub: https://github.com/Yutarop/turtlesim_agent

Looking ahead, I’m also exploring frameworks like LangGraph and MCP (Model Context Protocol) to see whether they might be better suited for more complex planning and decision-making tasks in robotics. If anyone here is familiar with these frameworks or working in this space, I’d love to connect or hear your thoughts.

Comments

dpflan•1d ago
Forgive me for asking, but I’m always curious about the definition of “agent”. What is an “agent” exactly? Is it a static prompt that is sent along with user input to an LLM service, which then handles that response? And then it’s done? Is an agent a prompted LLM call? Or some entity that changes its own prompt as it continues to exist?
karmakaze•1d ago
It depends on how you look at it. If the output 'it' is a drawing, then the agent is the thing doing the drawing on the user's behalf. In more detail, the outputs are commands, so then the agent would be what's generating those commands from the user's input. E.g. a web browser is a user agent that makes requests and renders resources that the user specifies.
ponta17•1d ago
Thanks for the thoughtful question! The term “agent” definitely gets used in a lot of different ways, so I’ll clarify what I mean here.

In this project, an agent is an LLM-powered system that takes a high-level user instruction, reasons about what steps are needed to fulfill it, and then executes those steps using a set of tools. So it’s more than a single prompted LLM call — the agent maintains a kind of working state and can call external functions iteratively as it plans and acts.

Concretely, in turtlesim_agent, the agent receives an input like “draw a red triangle,” and then:

1. Uses the LLM to interpret the intent,
2. Decides which tools to use (like move forward, turn, set pen color),
3. Calls those tools step-by-step until the task is done.
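
In LangChain terms, that loop can be wired up roughly like this minimal sketch (illustrative only, not turtlesim_agent's actual wiring; the tool objects and the model choice are assumptions):

    # Minimal sketch of the tool-calling loop. Assumes tools such as
    # set_pen_color (as in the earlier sketch), move_forward, and turn
    # are defined elsewhere with LangChain's @tool decorator.
    from langchain.agents import AgentExecutor, create_tool_calling_agent
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    prompt = ChatPromptTemplate.from_messages([
        ("system", "You control a turtlesim turtle. Plan and call tools to draw."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),  # intermediate tool calls go here
    ])

    llm = ChatOpenAI(model="gpt-4o")  # model choice is an assumption
    tools = [move_forward, turn, set_pen_color]
    agent = create_tool_calling_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools)

    executor.invoke({"input": "draw a red triangle"})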

Hope that clears it up a bit!

paxys•1d ago
To put it more simply, "agent" is now just a generic term to describe any middleware that sits between user input and a base LLM.
latchkey•1d ago
This really brings back memories. The first computer language I learned as a child was Logo. My grandfather gifted me a lesson from a local computer store where someone came out to his house and sat with me in front of his Apple II.

I was too young to understand the concepts around the math of steps or degrees. While the thought of programming on a computer was amazing (and I later became an engineer), I couldn't grasp Logo, got frustrated, and lost interest.

If I could have had something like this, I'm sure it would have made more sense to me earlier on. It makes me think about how this will affect the learning rate in a positive way.

pj_mukh•1d ago
Haha this is so incredibly cool.

One thing I might’ve missed: what are the “physics” of this universe? In the rainbow example the turtle seems to teleport between arcs?

ponta17•19h ago
Thanks! Great question.

TurtleSim itself doesn't simulate real-world physics — it allows instant position updates when needed. In this project, the goal was to create a digital turtle artist, not to replicate physical realism. So when the agent wants to draw something, it puts the pen down and moves physically (i.e., using velocity commands). But when it doesn't need to draw and just wants to move quickly to another position, it uses a teleport function I provided as a tool.

That's why in the rainbow example, you might see the turtle "jump" between arcs — it's skipping the movement to get to the next drawing point faster.
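
Roughly, the two modes look like this (a sketch; the helper names are hypothetical, but /turtle1/cmd_vel and /turtle1/teleport_absolute are turtlesim's real interfaces):

    # Sketch of the two movement modes; helpers are hypothetical.
    from geometry_msgs.msg import Twist
    from turtlesim.srv import TeleportAbsolute

    def draw_forward(cmd_vel_pub, speed=1.0):
        """Pen down: publish a velocity command so the turtle leaves a trail."""
        msg = Twist()
        msg.linear.x = speed        # forward speed; turtlesim integrates it
        cmd_vel_pub.publish(msg)    # publisher on /turtle1/cmd_vel

    def jump_to(teleport_client, x, y, theta=0.0):
        """Pen up: teleport instantly, e.g. to the start of the next arc."""
        req = TeleportAbsolute.Request()
        req.x, req.y, req.theta = float(x), float(y), float(theta)
        teleport_client.call_async(req)  # client for /turtle1/teleport_absolute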

moffkalast•1d ago
That's pretty cool, but I feel like all of the LLM integrations with ROS so far have sort of entirely missed the point in terms of useful applications. Endless examples of models sending bare-bones twist commands do a disservice to what LLMs are good at; it's also like swatting flies with a bazooka in terms of compute used.

Getting the robot to move from point A to point B is largely a solved problem with traditional probabilistic methods, while the niches where LLMs are the best fit are, I think, largely still unaddressed, e.g.:

- a pipeline from natural language commands to high-level commands ("fetch me a beer" to [send nav2 goal to kitchen, get fridge detection from yolo, open fridge with moveit, detect beer with yolo, etc.]); a toy sketch of this one follows the list

- using a VLM to add semantic information to map areas, e.g. have the robot turn around 4 times in a room, and have the model determine what's there so it can reference it by location and even know where that kitchen and fridge is in the above example

- system monitoring, where an LLM looks at ros2 doctor, htop, topic hz, etc. and determines if something's crashed or isn't behaving properly, and returns a debug report or attempts to fix it with terminal commands

- handling recovery behaviours in general, since a lot of the time when robots get stuck the resolution is simple: you just need something to take in the current situational information, reason about it, and pick one of the possible ways to resolve it
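
For the first item, a toy sketch of what such a planning layer could look like (everything here is hypothetical; the point is that the LLM only emits high-level steps, and existing stacks execute them):

    # Toy sketch of the "natural language -> high-level commands" idea.
    # All step names are hypothetical; nav2, yolo, and moveit would
    # actually execute each step.
    import json
    from langchain_openai import ChatOpenAI

    PROMPT = (
        "Turn the user's request into a JSON list of steps, each one of: "
        "nav2_goal(<location>), open(<object>), detect(<object>), pick(<object>). "
        "Reply with JSON only.\nRequest: {request}"
    )

    llm = ChatOpenAI(model="gpt-4o")

    def plan(request: str) -> list[str]:
        reply = llm.invoke(PROMPT.format(request=request))
        return json.loads(reply.content)

    # plan("fetch me a beer") might return something like:
    # ["nav2_goal(kitchen)", "open(fridge)", "detect(beer)", "pick(beer)"]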

ponta17•19h ago
Thanks a lot for the thoughtful feedback — I really appreciate it!

I think there might be a small misunderstanding regarding how the LLM is actually being used here (and in many agent-based setups). The LLM itself isn’t directly executing twist commands or handling motion; it’s acting as a decision-maker that chooses from a set of callable tools (Python functions) based on the task description and intermediate results.

In this case, yes — one of the tools happens to publish Twist commands, but that’s just one of many modular tools the LLM can invoke. Whether it’s controlling motion or running object detection, from the LLM’s point of view it’s simply choosing which function to call next. So the computational load really depends on what the tool does internally — not the LLM’s reasoning process itself.

Of course, I agree with your broader point: we should push toward more meaningful high-level tasks where LLMs can orchestrate complex pipelines — and I think your examples (like fetch-a-beer or map annotation via VLMs) are spot-on.

My goal with this project was to explore that decision-making loop in a minimal, creative setting — kind of like a sandbox for LLM-agent behavior.

Actually, I’m currently working on something along those lines using a TurtleBot3. I’m planning to provide the agent with tools that let it scan obstacles via 3D LiDAR and recognize objects through image processing, so that it can make more context-aware decisions.

Really appreciate the push for deeper use cases — that’s definitely where I want to go next!