
Show HN: Convert your articles into videos in one click

https://vidinie.com/
1•kositheastro•2m ago•0 comments

Red Queen's Race

https://en.wikipedia.org/wiki/Red_Queen%27s_race
2•rzk•2m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
2•gozzoo•5m ago•0 comments

A Horrible Conclusion

https://addisoncrump.info/research/a-horrible-conclusion/
1•todsacerdoti•5m ago•0 comments

I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
2•tosh•6m ago•0 comments

From Zero to Hero: A Spring Boot Deep Dive

https://jcob-sikorski.github.io/me/
1•jjcob_sikorski•6m ago•0 comments

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•11m ago•1 comments

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•14m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•17m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•18m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•18m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•19m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•19m ago•1 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
2•alainrk•20m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•21m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
2•edent•24m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•27m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•27m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•33m ago•1 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
3•onurkanbkrc•34m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•34m ago•1 comments

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•37m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•40m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•40m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•40m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
2•mnming•40m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
4•juujian•42m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•44m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•46m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•49m ago•0 comments

Ask HN: How do robotics teams manage data and debugging today?

10•Lazaruscv•4mo ago
Hi HN,

I’m working on a project in the robotics space and would love to get the community’s perspective.

The problem I’ve seen: robotics teams generate a massive amount of data (ROS2, MCAP, OpenLABEL, etc.), but debugging and analysis often means hours of digging through logs, building custom scripts, or fighting fragmented formats. For small and medium robotics companies, this can really slow down iteration.

I’m trying to understand:

• How do you/your team currently manage and analyze robot data?

• What are the biggest pain points you face (e.g. debugging failures, comparing test runs, searching across logs)?

• Have you tried tools like Foxglove/Rerun/etc.? What works, what doesn’t?

• If there was a solution that actually made this easier, what would it have to do for you to adopt it?

I also put together a short (5 min) survey here: https://forms.gle/x57UReg8Yj9Gx7qZ8

If you’re willing to share your experiences in more detail, it would really help shape what we’re building.

We’ll anonymize responses and share the aggregated insights back with the community once we’ve collected enough.

Thanks in advance — I know this is a niche problem, but I figured HN has some of the sharpest robotics engineers, founders, and tinkerers out there. Really curious to hear how you’re solving this today.

Comments

dapperdrake•4mo ago
Am willing to help with this as well. The math can be iffy.
Lazaruscv•4mo ago
Really appreciate the offer; we'd love to take you up on it. A lot of what we're exploring right now comes down to signal analysis and anomaly detection in robotics data, which gets math-heavy fast (especially when combining time-series data from multiple sources). We're setting up short user interviews with roboticists/devs to better map the pain points. Would you be open to a quick chat about the trickiest math/log-parsing issues you've faced? It could help us avoid reinventing the wheel.
msadowski•4mo ago
Full disclosure, I work at Foxglove right now. Before joining, I spent over seven years consulting and had more than 50 clients during that period. Here are some thoughts:

* Combing through the syslogs to find issues is an absolute nightmare, even more so if you are told that the machine broke at some point last night

* Even if you find the error, it's not necessarily when something broke; it could have happened way before, but you just discovered it because the system hit a state that required it

* If combing through syslog is hard, try rummaging through multiple mcap files by hand to see where a fault happened

* The hardware failing silently is a big PITA - this is especially true for things that read analog signals (think PLCs)

Many of the above issues can be solved with the right architecture or tooling, but often the teams I joined didn't have it, and lacked the capacity to develop it.

At Foxglove, we make it easy to aggregate and visualize the data and have some helper features (e.g., events, data loaders) that can speed up workflows. However, I would say that having good architecture, procedures, and an aligned team goes a long way in smoothing out troubleshooting, regardless of the tools.

Lazaruscv•4mo ago
This is super insightful, thank you for laying it out so clearly. Your point about the error surfacing way after it first occurred is exactly the sort of issue we’re interested in tackling. Foxglove is doing a great job with visualization and aggregation; what we’re thinking is more of a complementary diagnostic layer that:

• Correlates syslogs with mcap/bag file anomalies automatically

• Flags when a hardware failure might have begun (not just when it manifests)

• Surfaces probable root causes instead of leaving teams to manually chase timestamps

From your experience across 50+ clients, which do you think is the bigger timesink: data triage across multiple logs/files or interpreting what the signals actually mean once you’ve found them?

msadowski•4mo ago
In my case, it's definitely the data triage. Once I see the signal, I usually have ideas on what's happening, but I've been doing this for 11 years.

Maybe there could be value in signal interpretation for pure software engineers, but I reckon it would be hard for such a team to build robots.

Lazaruscv•4mo ago
Our current thinking is to focus heavily on automating triage across syslogs and bag/mcap files, since that’s where the hours really get burned, even for experienced folks. For interpretation, we see it more as an assistive layer (e.g., surfacing “likely causes” or linking to past incidents), rather than trying to replace domain expertise.

Do you think there are specific triage workflows where even a small automation (say, correlating error timestamps across syslog and bag files) would save meaningful time?
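
To make that concrete, here's a rough sketch of the kind of correlation we have in mind, built on the mcap Python reader. The syslog regex, the /diagnostics topic, and the +/- 2 second window are illustrative assumptions, not a finished design:

    import re
    from datetime import datetime
    from mcap.reader import make_reader

    # Classic syslog lines carry no year; assume the current one for this sketch.
    SYSLOG_RE = re.compile(r"^(\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) \S+ .*(error|fail)", re.I)
    YEAR = datetime.now().year
    WINDOW_NS = 2_000_000_000  # correlate events within +/- 2 seconds (assumed)

    def syslog_error_times(path):
        """Nanosecond timestamps of error-ish syslog lines."""
        times = []
        with open(path) as f:
            for line in f:
                m = SYSLOG_RE.match(line)
                if m:
                    dt = datetime.strptime(f"{YEAR} {m.group(1)}", "%Y %b %d %H:%M:%S")
                    times.append(int(dt.timestamp() * 1e9))
        return times

    def correlate(syslog_path, mcap_path, topic="/diagnostics"):
        """Pair each syslog error with bag messages logged nearby in time."""
        errors = syslog_error_times(syslog_path)
        hits = []
        with open(mcap_path, "rb") as f:
            for _schema, channel, message in make_reader(f).iter_messages(topics=[topic]):
                for t in errors:
                    if abs(message.log_time - t) <= WINDOW_NS:
                        hits.append((t, channel.topic, message.log_time))
        return hits

Even something this crude turns "the machine broke last night" into a shortlist of bag offsets to inspect; a real version would also need to handle timezones, rotated logs, and clock drift between hosts.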

msadowski•4mo ago
One thing that comes to mind is checking the timestamps across sensors and other topics. Two cases stand out:

* I was setting up an Ouster lidar to use GPS time - I don't remember the details now, but it was reporting the time ~32 seconds in the past (probably some leap-seconds setting?)

* I had a ROS node misbehaving in some weird ways - it turned out there was a service call to insert something into a DB, and for some reason the DB started taking 5+ minutes to complete, which wasn't really appropriate for a blocking call

I think timing is one thing that needs to be consistently done right on every platform. The other issues I came across were very application-specific.
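
For what it's worth, a minimal sketch of the timestamp check I mean: compare each message's header stamp against the bag's receive time. The decode_stamp helper is a placeholder the caller would supply (e.g., built on the mcap-ros2-support decoders), and the 1-second threshold is arbitrary:

    from mcap.reader import make_reader

    SKEW_NS = 1_000_000_000  # flag stamps > 1 s away from receive time (arbitrary)

    def stamp_skew(mcap_path, decode_stamp):
        """Worst |header.stamp - receive time| per topic, in seconds.

        decode_stamp(schema, message) is a caller-supplied placeholder that
        returns the decoded header stamp in nanoseconds, or None for
        messages without a std_msgs/Header.
        """
        worst = {}
        with open(mcap_path, "rb") as f:
            for schema, channel, message in make_reader(f).iter_messages():
                stamp = decode_stamp(schema, message)
                if stamp is None:
                    continue
                skew = abs(stamp - message.log_time)
                worst[channel.topic] = max(worst.get(channel.topic, 0), skew)
        return {t: s / 1e9 for t, s in worst.items() if s > SKEW_NS}

A check like this would have flagged the lidar case immediately: that topic would show up with a ~32 s skew while everything else sat near zero.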

chfritz•4mo ago
This is something we are actively discussing in the Cloud Robotics Working Group right now. We've already had a number of sessions on this topic with various guest speakers. You can watch the recordings here: https://cloudroboticshub.github.io/meetings. Feel free to attend the upcoming meetings.
Lazaruscv•4mo ago
Thanks a lot for sharing this resource! I wasn’t aware of the Cloud Robotics Working Group, those sessions look super relevant. I’ll definitely check out the recordings and join future meetings. Our angle is very aligned: we’re exploring how AI/automation can help with the time sink of debugging large-scale ROS/ROS 2 systems, especially when logs/bag files pile up. It’d be valuable to hear what the community feels is still missing, even with the current set of tools. Do you think there’s space for a layer focused purely on automated error detection and root cause suggestions?
chfritz•4mo ago
"automated error detection" -- how do you want to do that? How would you define "error". Clearly you are not just proposing to detect "error" lines in the log, because that's trivial. But if you don't, then how would you define and detect errors and auto-root-cause them? Maybe we can discuss at one of the next meetings.
Lazaruscv•4mo ago
Errors are rarely explicit in robots; they're often emergent from complex interactions, like a silent drift in AMCL localization causing a downstream collision, or sporadic packet loss in DDS desynchronizing multi-robot coordination. We'd define errors dynamically through a mix of domain rules, unsupervised ML, and generative AI:

* Start with user-determined or auto-deduced invariants from "nominal" runs (e.g., "joint torque variance should never exceed 10% during unloaded motion," derived from historical MCAP bags). This takes inspiration from model-based verification techniques in current ROS2 research, e.g., automated formal verification with model-driven engineering. (A toy sketch of this kind of invariant check follows this list.)

* Use lightweight, edge-optimized models (e.g., graph neural networks or variational autoencoders) to monitor multivariate time series on ROS topics (/odom, /imu, /camera/image_raw). Fuse visual and sensor input using multimodal LLMs (fine-tuned on, e.g., nuScenes or custom robot logs) to detect "silent failures": e.g., a LiDAR occlusion that never shows up in the logs but is apparent from point-cloud entropy spikes cross-checked against camera frames.

* Utilize GenAI (e.g., versions of GPT-4o or Llama) for NLP on logs, classifying ambiguous events like "nav stack increased latency" as predictors of failure. This predictive approach builds on the ROS Help Desk's GenAI model, which already demonstrates a 70-80% decrease in debugging time by flagging issues before full failure.
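
To make the first bullet concrete, here's a minimal sketch of deriving a variance invariant from nominal runs and flagging violations. The window size, the 10% margin, and the toy torque data are illustrative assumptions, not our actual pipeline:

    import numpy as np

    WINDOW = 50  # samples per rolling window (assumes ~50 Hz torque readings)

    def rolling_variance(signal, window=WINDOW):
        """Variance of signal over a sliding window."""
        return np.array([signal[i:i + window].var()
                         for i in range(len(signal) - window + 1)])

    def derive_ceiling(nominal_runs, margin=1.10):
        """Variance ceiling from nominal runs plus a 10% margin (echoing the
        rule of thumb above; real values would come from historical MCAP bags)."""
        return margin * max(rolling_variance(run).max() for run in nominal_runs)

    def violations(run, ceiling):
        """Window start indices where the invariant is violated."""
        return np.flatnonzero(rolling_variance(run) > ceiling)

    # Toy data: three nominal runs, one run with an injected torque spike.
    rng = np.random.default_rng(0)
    nominal = [rng.normal(0.0, 0.1, 1000) for _ in range(3)]
    faulty = rng.normal(0.0, 0.1, 1000)
    faulty[600:650] += rng.normal(0.0, 1.0, 50)

    print("violated windows:", violations(faulty, derive_ceiling(nominal)))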

This is not just hypothesizing; there are already PyTorch and ROS2 plugin prototypes with ~90% detection accuracy on Gazebo simulation failures, and dynamic covariance compensation (as used in recent AI-facilitated ROS2 localization studies) handles noisy real-world data.

The automated detection pipeline would work roughly as follows: the system receives live streams or bag files via a ROS2-compatible middleware (e.g., built on recent flexible integration layers for task orchestration), then processes them in streaming fashion:

* Map heterogeneous formats (MCAP, OpenLABEL, JSON logs) onto a temporal knowledge graph: nodes for components (sensors, planners), edges for causal dependencies and timestamps. This enables holistic analysis, as opposed to fragmented per-tool views.

* Route the data through ML pipelines built on Apache Flink or Kafka for windowed detection. For instance, flag an "error" if a robot's velocity profile exceeds what a predicted physics model allows (via the Control or PySDF libraries), even without any explicit log entry, combining environmental context from BIM/APS data for vision use. (A framework-free sketch of this windowed check follows the list.)

* Subsequently, employ uncertainty sampling through large language models to solicit user input on borderline scenarios, progressively fine-tuning the models. Benchmark outcomes from SYSDIAGBENCH indicate that LLMs such as GPT-4 perform exceptionally well, correctly identifying robotic problems in 85% of cases across various model scales.
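
As a strawman for the velocity check in the second bullet, here's a framework-free sketch: plain Python stands in for the Flink/Kafka pipeline, and a single max-acceleration constant stands in for a real physics model:

    MAX_ACCEL = 2.0  # m/s^2, assumed platform limit (stand-in for a physics model)
    WINDOW = 20      # samples per tumbling window (assumed)

    def windowed_anomalies(samples, window=WINDOW, max_accel=MAX_ACCEL):
        """Flag windows whose implied acceleration exceeds the physics bound.

        samples is an iterable of (t_seconds, velocity) pairs, e.g. decoded
        from /odom; a real pipeline would consume a stream, not a list.
        """
        buf = []
        for sample in samples:
            buf.append(sample)
            if len(buf) == window:
                accels = [abs(v2 - v1) / (t2 - t1)
                          for (t1, v1), (t2, v2) in zip(buf, buf[1:])
                          if t2 > t1]
                if accels and max(accels) > max_accel:
                    # An "error" with no corresponding log line anywhere.
                    yield (buf[0][0], buf[-1][0], max(accels))
                buf = []

    # Toy stream: 100 Hz odometry with one physically impossible jump.
    stream = [(i * 0.01, 1.0) for i in range(200)]
    stream[50] = (0.50, 5.0)  # 4 m/s jump in 10 ms
    for t0, t1, a in windowed_anomalies(stream):
        print(f"anomaly in [{t0:.2f}s, {t1:.2f}s]: implied accel {a:.0f} m/s^2")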

I trust this provides some insight; we are currently testing a prototype that fuses these components into a real-time observability framework for ROS2. Although still in its infancy, it already demonstrates encouraging accuracy on both simulated and real-world data sets. I would appreciate your thoughts on sharpening the notion of "error" for multi-agent or hybrid systems, particularly in contexts where emergent behavior makes it hard to distinguish between anomalies and adaptive responses.

chfritz•4mo ago
Thanks for sharing! You've clearly done your homework. Can you contact me, e.g., on LinkedIn? I'd love to explore with you whether what you want to build could benefit from the open-source framework we've built for developing new full-stack robotic capabilities (Transitive, https://transitiverobotics.com/docs/learn/intro/).