
Show HN: A calculus course with an AI tutor watching the lectures with you

https://calculus.academa.ai/
1•apoogdk•27s ago•0 comments

Show HN: 83K lines of C++ – cryptocurrency written from scratch, not a fork

https://github.com/Kristian5013/flow-protocol
1•kristianXXI•5m ago•0 comments

Show HN: SAA – A minimal shell-as-chat agent using only Bash

https://github.com/moravy-mochi/saa
1•mrvmochi•5m ago•0 comments

Mario Tchou

https://en.wikipedia.org/wiki/Mario_Tchou
1•simonebrunozzi•6m ago•0 comments

Does Anyone Even Know What's Happening in Zim?

https://mayberay.bearblog.dev/does-anyone-even-know-whats-happening-in-zim-right-now/
1•mugamuga•7m ago•0 comments

The last Morse code maritime radio station in North America [video]

https://www.youtube.com/watch?v=GzN-D0yIkGQ
1•austinallegro•9m ago•0 comments

Show HN: Hacker Newspaper – Yet another HN front end optimized for mobile

https://hackernews.paperd.ink/
1•robertlangdon•10m ago•0 comments

OpenClaw Is Changing My Life

https://reorx.com/blog/openclaw-is-changing-my-life/
1•novoreorx•18m ago•0 comments

Everything you need to know about lasers in one photo

https://commons.wikimedia.org/wiki/File:Commercial_laser_lines.svg
1•mahirsaid•20m ago•0 comments

SCOTUS to decide if 1988 video tape privacy law applies to internet uses

https://www.jurist.org/news/2026/01/us-supreme-court-to-decide-if-1988-video-tape-privacy-law-app...
1•voxadam•21m ago•0 comments

Epstein files reveal deeper ties to scientists than previously known

https://www.nature.com/articles/d41586-026-00388-0
1•XzetaU8•29m ago•0 comments

Red teamers arrested conducting a penetration test

https://www.infosecinstitute.com/podcast/red-teamers-arrested-conducting-a-penetration-test/
1•begueradj•36m ago•0 comments

Show HN: Open-source AI powered Kubernetes IDE

https://github.com/agentkube/agentkube
1•saiyampathak•40m ago•0 comments

Show HN: Lucid – Use LLM hallucination to generate verified software specs

https://github.com/gtsbahamas/hallucination-reversing-system
1•tywells•42m ago•0 comments

AI Doesn't Write Every Framework Equally Well

https://x.com/SevenviewSteve/article/2019601506429730976
1•Osiris30•45m ago•0 comments

Aisbf – an intelligent routing proxy for OpenAI compatible clients

https://pypi.org/project/aisbf/
1•nextime•46m ago•1 comment

Let's handle 1M requests per second

https://www.youtube.com/watch?v=W4EwfEU8CGA
1•4pkjai•46m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
1•zhizhenchi•47m ago•0 comments

Goal: Ship 1M Lines of Code Daily

2•feastingonslop•57m ago•0 comments

Show HN: Codex-mem, 90% fewer tokens for Codex

https://github.com/StartripAI/codex-mem
1•alfredray•1h ago•0 comments

FastLangML: Context-aware lang detector for short conversational text

https://github.com/pnrajan/fastlangml
1•sachuin23•1h ago•1 comment

LineageOS 23.2

https://lineageos.org/Changelog-31/
2•pentagrama•1h ago•0 comments

Crypto Deposit Frauds

2•wwdesouza•1h ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
4•lostlogin•1h ago•0 comments

Framing an LLM as a safety researcher changes its language, not its judgement

https://lab.fukami.eu/LLMAAJ
1•dogacel•1h ago•0 comments

Is anyone interested in a creator economy startup?

1•Nejana•1h ago•0 comments

Show HN: Skill Lab – CLI tool for testing and quality scoring agent skills

https://github.com/8ddieHu0314/Skill-Lab
1•qu4rk5314•1h ago•0 comments

2003: What is Google's Ultimate Goal? [video]

https://www.youtube.com/watch?v=xqdi1xjtys4
1•1659447091•1h ago•0 comments

Roger Ebert Reviews "The Shawshank Redemption"

https://www.rogerebert.com/reviews/great-movie-the-shawshank-redemption-1994
2•monero-xmr•1h ago•0 comments

Busy Months in KDE Linux

https://pointieststick.com/2026/02/06/busy-months-in-kde-linux/
1•todsacerdoti•1h ago•0 comments

Model-Based GUI Automation (Springer SoSyM)

https://link.springer.com/article/10.1007/s10270-025-01319-9
1•jspinak•3mo ago

Comments

jspinak•3mo ago
Hi HN, author here.

I started building Brobot in 2018 to automate gameplay - I wanted to understand why my automation kept breaking. The more I dug in, the more I realized this was a fundamental problem in GUI automation itself.

Two problems kept surfacing:

1. Script fragility - automation breaks constantly from minor GUI changes

2. Inability to test - no way to verify automation works before deploying

Research in GUI testing shows that the vast majority of test failures come from UI changes, not actual bugs. Yet you can't write integration tests for traditional GUI automation. You just run it and hope.

The root cause: traditional automation uses sequential scripts (do A, then B, then C). Making this robust requires exponential code growth - a 30-state automation has 6.36 trillion possible paths. You can't test all paths, can't guarantee it works.
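To make the blow-up concrete, here is a small illustration (my own simplification, not the paper's exact counting model): if every step of a script can land in any of S states, the number of distinct paths grows exponentially in path length.

```python
# Illustration (a simplification, not the paper's exact model): count distinct
# state sequences of length n over `states` states, where each step can move
# to any of the other (states - 1) states -- exponential growth in n.

def path_count(states: int, n: int) -> int:
    """Distinct sequences of n visited states with no immediate repeats."""
    return states * (states - 1) ** (n - 1)

for n in (2, 5, 10):
    print(n, path_count(30, n))
```

Even this toy model makes the point: at 30 states, path counts pass the billions within a handful of steps, so enumerating or scripting every path explicitly is hopeless.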

Model-based GUI automation solves both problems by borrowing from robotics navigation. Instead of writing step-by-step scripts, you create a navigable map of the GUI. The framework handles pathfinding, state management, and error recovery automatically.
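A minimal sketch of the idea in generic Python (this is not the qontinui API; state names and the plain-dict graph are illustrative): model the GUI as a graph of states and transitions, and let a pathfinder plan the route instead of hard-coding the steps.

```python
from collections import deque

# Minimal sketch of model-based navigation (generic code, not the qontinui
# API). States are GUI screens; edges are transitions the framework knows
# how to execute.
GUI_MAP = {
    "main_menu": ["settings", "inventory"],
    "settings": ["main_menu"],
    "inventory": ["main_menu", "crafting"],
    "crafting": ["inventory"],
}

def find_path(graph, start, goal):
    """Breadth-first search: shortest transition sequence from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable from start

print(find_path(GUI_MAP, "main_menu", "crafting"))
# -> ['main_menu', 'inventory', 'crafting']
```

The script author only declares the map; "go to crafting" is a query against it, not a hand-written click sequence.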

Key results:

• Reduces complexity from exponential to polynomial (mathematically proven)

• Makes GUI automation testable for the first time (integration tests, path verification)

• Enables reliable visual APIs for RL agents

• Supports robust dataset generation for model training

• Works for games, business apps, web interfaces - any GUI

Over 7 years, I developed and formalized this approach through both mathematical theory and real-world validation. Springer SoSyM published it in late October.

Open-source implementation: https://github.com/qontinui

• qontinui (Python) - Core automation library (pip install qontinui)

• multistate (Python) - State machine (pip install multistate)

• qontinui-runner (Rust/TypeScript) - Desktop execution engine

• qontinui-api (Python/FastAPI) - REST API bridge (pip install qontinui-api)

Interactive docs & playground: https://qontinui.github.io/multistate/

Original Java version (Brobot, 2018-2025): https://github.com/jspinak/brobot

I'm also building a visual builder (qontinui-web, Feb 2026 launch) for no-code automation - point-and-click designer that creates JSON configs the runner executes locally. Available now in early access (breaking changes possible before launch, but migration tools provided for format changes).

The research provides the mathematical foundation; the Python stack lets you use it today (code-based or visual). I wanted to contribute something useful to the AI/RL community.

Demos:

• Mobile game image collection/labeling: https://jspinak.github.io/brobot/docs/tutorials/tutorial-bas...

• More examples: https://jspinak.github.io/brobot/

Paper: https://link.springer.com/article/10.1007/s10270-025-01319-9

Story behind the name: https://jspinak.github.io/brobot/docs/theoretical-foundation...

pushpeshkarki•3mo ago
Could we have more complex examples in the examples section, like actual gameplay automation rather than just basic UI navigation? This would help readers better understand the capabilities of the tool/framework. I would also like to know how the results are displayed to end users once the automation test suite finishes executing.
jspinak•3mo ago
Thanks for your questions! The mobile game demo (https://jspinak.github.io/brobot/docs/tutorials/tutorial-bas...) shows game automation and automated image collection and labeling to build a dataset for model training.

Here's the Qontinui Runner's action log during live automation: https://i.imgur.com/8R4d2Uf.png. Note the GO_TO_STATE action – that’s unique to model-based GUI automation. Instead of writing explicit navigation steps, you tell the framework "go to this state" and it handles pathfinding automatically.

You can see some actions failed (red X) - like "select to process corn". Traditional scripts would crash here. The model-based approach handles this differently: the next GO_TO_STATE call finds paths from wherever the GUI actually is (the current active states) to the desired state. So even when individual actions fail, the automation self-corrects on the next navigation.
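The recovery loop described above could be sketched like this (hedged: `detect`, `plan`, and `execute` are hypothetical stand-ins for visual state detection, pathfinding, and transition execution, not the actual runner API):

```python
# Hedged sketch of GO_TO_STATE-style self-correction. `detect`, `plan`, and
# `execute` are hypothetical stand-ins (visual state detection, pathfinding,
# transition execution) -- not the actual qontinui runner API.

def go_to_state(goal, detect, plan, execute, max_attempts=3):
    """Try to reach `goal`; after any failed transition, re-detect the GUI's
    actual state and re-plan from there instead of crashing."""
    for _ in range(max_attempts):
        current = detect()                  # where is the GUI *really*?
        if current == goal:
            return True
        path = plan(current, goal)          # route from the actual state
        if not path:
            return False                    # goal unreachable from here
        for step in path[1:]:
            if not execute(current, step):  # a transition may fail...
                break                       # ...so re-detect and re-plan
            current = step
    return detect() == goal
```

The key property is that a failed `execute` never dead-ends the run: the next iteration plans from wherever the GUI actually ended up.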

Important clarification: This isn't test automation (using bots to test applications). The breakthrough is making the AUTOMATION ITSELF testable, enabling standard software engineering practices in a domain where they were previously infeasible. You can write integration tests that verify your bot works correctly before running it live. Section 11 of the paper covers this (Appendix 3 has an example from Brobot; qontinui.io provides visual test output).
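In plain Python, testable automation could look something like this (a generic illustration of the idea, not the framework's actual test API from Section 11): because the automation is a declarative state graph, you can assert properties like reachability offline, before ever touching a live GUI.

```python
# Sketch: because the automation is a declarative state graph, ordinary
# integration tests can run against the model itself -- no live GUI needed.
# (Generic illustration; MODEL and these checks are not the framework's API.)
from collections import deque

MODEL = {
    "login": ["dashboard"],
    "dashboard": ["reports", "login"],
    "reports": ["dashboard"],
}

def reachable(graph, start):
    """All states reachable from `start` by following transitions."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def test_every_state_reachable_from_login():
    assert reachable(MODEL, "login") == {"login", "dashboard", "reports"}

def test_no_transition_to_unknown_state():
    for targets in MODEL.values():
        assert all(t in MODEL for t in targets)
```

Checks like these catch a broken model (an orphaned state, a dangling transition) in CI rather than as a mysterious runtime failure.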

The approach works for any GUI automation: gaming, visual APIs for RL agents, data collection, business automation, and yes, also software testing. I started with games (Brobot, 2018) because brittleness was most painful there.

Does that help clarify?