

Ask HN: How do robotics teams manage data and debugging today?

10•Lazaruscv•4mo ago
Hi HN,

I’m working on a project in the robotics space and would love to get the community’s perspective.

The problem I’ve seen: robotics teams generate a massive amount of data (ROS2, MCAP, OpenLABEL, etc.), but debugging and analysis often means hours of digging through logs, building custom scripts, or fighting fragmented formats. For small and medium robotics companies, this can really slow down iteration.

I’m trying to understand:

• How do you/your team currently manage and analyze robot data?

• What are the biggest pain points you face (e.g. debugging failures, comparing test runs, searching across logs)?

• Have you tried tools like Foxglove/Rerun/etc.? What works, what doesn’t?

• If there was a solution that actually made this easier, what would it have to do for you to adopt it?

I also put together a short (5 min) survey here: https://forms.gle/x57UReg8Yj9Gx7qZ8

If you’re willing to share your experiences in more detail, it would really help shape what we’re building.

We’ll anonymize responses and share the aggregated insights back with the community once we’ve collected enough.

Thanks in advance — I know this is a niche problem, but I figured HN has some of the sharpest robotics engineers, founders, and tinkerers out there. Really curious to hear how you’re solving this today.

Comments

dapperdrake•4mo ago
Am willing to help with this as well. The math can be iffy.
Lazaruscv•4mo ago
Really appreciate the offer, we’d love to take you up on it. A lot of what we’re exploring right now comes down to signal analysis and anomaly detection in robotics data, which gets math-heavy fast (especially when combining time-series data from multiple sources). We’re setting up short user interviews with roboticists/devs to better map the pain points. Would you be open to a quick chat about the trickiest math/log parsing issues you’ve faced? It could help us avoid reinventing the wheel.
msadowski•4mo ago
Full disclosure, I work at Foxglove right now. Before joining, I spent over seven years consulting and had more than 50 clients during that period. Here are some thoughts:

* Combing through the syslogs to find issues is an absolute nightmare, even more so if you are told that the machine broke at some point last night

* Even if you find the error, it's not necessarily when something broke; it could have happened way before, but you just discovered it because the system hit a state that required it

* If combing through syslog is hard, try rummaging through multiple mcap files by hand to see where a fault happened

* The hardware failing silently is a big PITA - this is especially true for things that read analog signals (think PLCs)

Many of the above issues can be solved with the right architecture or tooling, but often the teams I joined didn't have it, and lacked the capacity to develop it.

At Foxglove, we make it easy to aggregate and visualize the data and have some helper features (e.g., events, data loaders) that can speed up workflows. However, I would say that having good architecture, procedures, and an aligned team goes a long way in smoothing out troubleshooting, regardless of the tools.

Lazaruscv•4mo ago
This is super insightful, thank you for laying it out so clearly. Your point about the error surfacing way after it first occurred is exactly the sort of issue we’re interested in tackling. Foxglove is doing a great job with visualization and aggregation; what we’re thinking is more of a complementary diagnostic layer that:

• Correlates syslogs with mcap/bag file anomalies automatically

• Flags when a hardware failure might have begun (not just when it manifests)

• Surfaces probable root causes instead of leaving teams to manually chase timestamps

From your experience across 50+ clients, which do you think is the bigger timesink: data triage across multiple logs/files or interpreting what the signals actually mean once you’ve found them?

msadowski•4mo ago
In my case, it’s definitely the data triage. Once I see the signal, I usually have ideas on what’s happening but I’ve been doing this for 11 years.

Maybe there could be value in signal interpretation for purely software engineers, but I reckon it would be hard for such a team to build robots.

Lazaruscv•4mo ago
Our current thinking is to focus heavily on automating triage across syslogs and bag/mcap files, since that’s where the hours really get burned, even for experienced folks. For interpretation, we see it more as an assistive layer (e.g., surfacing “likely causes” or linking to past incidents), rather than trying to replace domain expertise.

Do you think there are specific triage workflows where even a small automation (say, correlating error timestamps across syslog and bag files) would save meaningful time?
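
To make the question concrete, here is a minimal sketch of what that correlation might look like, assuming the Python `mcap` reader and ISO-timestamped syslog lines; the error regex, the 0.5 s gap threshold, and the ±2 s window are illustrative assumptions rather than a finished tool:

```python
# Sketch: correlate syslog ERROR timestamps with publish gaps in an MCAP file.
# Syslog format, thresholds, and window are placeholder assumptions.
import re
from datetime import datetime
from mcap.reader import make_reader

SYSLOG_ERROR = re.compile(r"^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}).*\bERROR\b")

def syslog_error_times(path):
    """Return UNIX timestamps (seconds) of syslog lines that mention ERROR."""
    times = []
    with open(path) as f:
        for line in f:
            m = SYSLOG_ERROR.match(line)
            if m:
                times.append(datetime.fromisoformat(m.group("ts")).timestamp())
    return times

def topic_gaps(mcap_path, min_gap_s=0.5):
    """Yield (topic, gap_start_s, gap_s) for unusually long gaps between messages."""
    last_seen = {}  # topic -> last log_time in seconds
    with open(mcap_path, "rb") as f:
        for _schema, channel, message in make_reader(f).iter_messages():
            t = message.log_time / 1e9
            prev = last_seen.get(channel.topic)
            if prev is not None and t - prev > min_gap_s:
                yield channel.topic, prev, t - prev
            last_seen[channel.topic] = t

def correlate(syslog_path, mcap_path, window_s=2.0):
    """Print topic gaps that start within +/- window_s of a syslog ERROR."""
    errors = syslog_error_times(syslog_path)
    for topic, start, gap in topic_gaps(mcap_path):
        if any(abs(start - e) <= window_s for e in errors):
            print(f"{topic}: {gap:.2f}s publish gap near syslog error at t={start:.2f}")
```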

msadowski•4mo ago
One thing that comes to mind is checking timestamps across sensors and other topics. Two cases stand out:

* I was setting up an Ouster lidar to use GPS time; I don't remember the details now, but it was reporting the time ~32 seconds in the past (probably some leap-seconds setting?)

* I had a ROS node misbehaving in some weird ways - it turned out there was a service call to insert something into a DB, and for some reason the DB started taking 5+ minutes to complete, which wasn't really appropriate for a blocking call

I think the timing is one thing that needs to be consistently done right on every platform. The other issues I came across were very application specific.
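
As a rough illustration of the kind of check that catches the first case, here is a sketch (not an existing tool from this thread) that compares each topic's publisher clock against the recorder clock using the Python `mcap` reader; the 1-second tolerance is an arbitrary assumption:

```python
# Sketch: flag topics whose publisher clock disagrees with the recorder clock.
# Uses mcap's per-message publish_time (publisher) vs log_time (recorder).
import statistics
from collections import defaultdict
from mcap.reader import make_reader

def clock_offsets(mcap_path, tolerance_s=1.0):
    """Return {topic: median publish-vs-record offset in seconds} for offenders."""
    offsets = defaultdict(list)
    with open(mcap_path, "rb") as f:
        for _schema, channel, message in make_reader(f).iter_messages():
            # Both timestamps are nanoseconds since the UNIX epoch.
            offsets[channel.topic].append(
                (message.publish_time - message.log_time) / 1e9
            )
    suspicious = {}
    for topic, vals in offsets.items():
        med = statistics.median(vals)
        if abs(med) > tolerance_s:
            suspicious[topic] = med
    return suspicious

# Example: a lidar publishing "32 seconds in the past" shows up as roughly -32.0 here.
# for topic, off in clock_offsets("run.mcap").items():
#     print(f"{topic}: publisher clock off by {off:+.1f}s")
```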

chfritz•4mo ago
This is something we are actively discussing in the Cloud Robotics Working Group right now. We've already had a number of sessions on this topic with various guest speakers. You can watch the recordings here: https://cloudroboticshub.github.io/meetings. Feel free to attend the upcoming meetings.
Lazaruscv•4mo ago
Thanks a lot for sharing this resource! I wasn't aware of the Cloud Robotics Working Group; those sessions look super relevant. I'll definitely check out the recordings and join future meetings. Our angle is very aligned: we're exploring how AI/automation can help with the time sink of debugging large-scale ROS/ROS 2 systems, especially when logs/bag files pile up. It'd be valuable to hear what the community feels is still missing, even with the current set of tools. Do you think there's space for a layer focused purely on automated error detection and root cause suggestions?
chfritz•4mo ago
"automated error detection" -- how do you want to do that? How would you define "error". Clearly you are not just proposing to detect "error" lines in the log, because that's trivial. But if you don't, then how would you define and detect errors and auto-root-cause them? Maybe we can discuss at one of the next meetings.
Lazaruscv•4mo ago
Errors are rarely explicit in robots; they're often emergent from complex interactions, like a silent drift in AMCL localization causing a downstream collision, or sporadic packet loss in DDS desynchronizing multi-robot coordination. We'd define errors dynamically through a mix of domain rules, unsupervised ML, and generative AI:

* Start with user-determined or auto-deduced invariants from "nominal" runs (e.g., "joint torque variance should never exceed 10% during unloaded motion," derived from historical MCAP bags); a minimal sketch of this follows after this list. This takes inspiration from model-based verification techniques in current ROS2 research, e.g., automated formal verification with model-driven engineering.

* Use lightweight, edge-optimized models (e.g., graph neural networks or variational autoencoders) to monitor multivariate time series from ROS topics (/odom, /imu, /camera/image_raw). Fuse visual and sensor input using multimodal LLMs (fine-tuned on, e.g., nuScenes or custom robot logs) to detect "silent failures", such as a LiDAR occlusion that never appears in the logs but is apparent from point-cloud entropy spikes cross-checked against camera frames.

* Use GenAI (e.g., GPT-4o or Llama variants) for NLP on logs, classifying ambiguous events like "nav stack latency increased" as predictors of failure. This predictive approach builds on the ROS Help Desk's GenAI model, which already demonstrates a 70-80% decrease in debugging time by surfacing issues before full failure.
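
Here is a minimal sketch of the invariant idea from the first bullet, assuming a hypothetical load_signal() helper that extracts a 1-D numpy array for a topic/field from an MCAP bag; the window size and the 10% margin are illustrative, not tuned values:

```python
# Sketch: derive a simple invariant from "nominal" runs and check a new run.
# load_signal() is a hypothetical helper, not an existing library function.
import numpy as np

def derive_variance_bound(nominal_runs, margin=0.10):
    """Upper bound on variance learned from nominal runs, plus a safety margin."""
    worst = max(float(np.var(run)) for run in nominal_runs)
    return worst * (1.0 + margin)

def check_run(signal, bound, window=200):
    """Yield (start_index, variance) for windows that violate the invariant."""
    for start in range(0, len(signal) - window, window):
        v = float(np.var(signal[start:start + window]))
        if v > bound:
            yield start, v

# nominal = [load_signal(bag, "/joint_states", "effort[0]") for bag in nominal_bags]
# bound = derive_variance_bound(nominal)
# for start, v in check_run(load_signal(new_bag, "/joint_states", "effort[0]"), bound):
#     print(f"invariant violated at sample {start}: variance {v:.3f} > {bound:.3f}")
```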

This isn't hypothetical; there are already PyTorch and ROS2 plugin prototypes with ~90% detection accuracy on Gazebo simulation failures, and dynamic covariance compensation (as used in recent AI-assisted ROS2 localization studies) handles noisy real-world data.

The automated detection pipeline would work roughly like this: the system receives live streams or bag files via a ROS2-compatible middleware (e.g., built on recent flexible integration layers for task orchestration), then processes them in streaming fashion:

* Map heterogeneous formats (MCAP, OpenLABEL, JSON logs) onto a temporal knowledge graph: nodes for components (sensors, planners), edges for causal dependencies and timestamps (see the sketch after this list). This enables holistic analysis rather than fragmented per-tool views.

* Route the data through Apache Flink or Kafka-backed ML pipelines for windowed detection. For instance, flag an "error" if a robot's velocity profile falls outside predicted physics models (using Control or PySDF libraries), even without explicit log entries, combining environmental context from BIM/APS data for vision use cases.

* Subsequently, employ uncertainty sampling with large language models to solicit user input on borderline scenarios, progressively fine-tuning the models. Benchmark results from SYSDIAGBENCH indicate that LLMs such as GPT-4 perform well here, correctly identifying robotics problems in 85% of cases across model scales.
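
A minimal sketch of the knowledge-graph step from the first bullet above, using networkx; the node/edge schema and the helper names are illustrative assumptions, not a fixed format:

```python
# Sketch: fold heterogeneous events into a temporal knowledge graph.
# Nodes are components; edges carry a timestamp and an event payload.
import networkx as nx

def build_graph():
    g = nx.MultiDiGraph()
    for comp in ("lidar", "camera", "planner", "controller"):
        g.add_node(comp, kind="component")
    return g

def add_event(g, src, dst, t_ns, payload):
    """Record that an event observed on `src` at t_ns may affect `dst`."""
    g.add_edge(src, dst, t=t_ns, payload=payload)

def events_before(g, dst, t_ns, horizon_ns):
    """Causal candidates: edges into dst within `horizon_ns` before t_ns."""
    for src, _dst, data in g.in_edges(dst, data=True):
        if t_ns - horizon_ns <= data["t"] <= t_ns:
            yield src, data

# g = build_graph()
# add_event(g, "lidar", "planner", t_ns=1_700_000_000_000, payload="entropy spike")
# list(events_before(g, "planner", t_ns=1_700_000_000_500, horizon_ns=1_000))
```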

I trust this provides some insight; we are currently testing a prototype that fuses these components into a real‑time observability framework for ROS2. Although still in its infancy, it already demonstrates encouraging accuracy on both simulated and real‑world data sets. I would appreciate your thoughts on sharpening the notion of “error” for multi‑agent or hybrid systems, particularly in contexts where emergent behavior makes it hard to distinguish between anomalies and adaptive responses.

chfritz•4mo ago
Thanks for sharing! You've clearly done your homework. Can you contact me, e.g., on LinkedIn? I'd love to explore with you whether what you want to build could benefit from the open-source framework we've built for developing new full-stack robotic capabilities (Transitive, https://transitiverobotics.com/docs/learn/intro/).