
AlphaGenome: AI for better understanding the genome

https://deepmind.google/discover/blog/alphagenome-ai-for-better-understanding-the-genome/
372•i_love_limes•11h ago•110 comments

Launch HN: Issen (YC F24) – Personal AI language tutor

227•mariano54•11h ago•205 comments

The time is right for a DOM templating API

https://justinfagnani.com/2025/06/26/the-time-is-right-for-a-dom-templating-api/
85•mdhb•6h ago•46 comments

Alternative Layout System

https://alternativelayoutsystem.com/scripts/#same-sizer
128•smartmic•6h ago•16 comments

Kea 3.0, our first LTS version

https://www.isc.org/blogs/kea-3-0/
52•conductor•5h ago•19 comments

How much slower is random access, really?

https://samestep.com/blog/random-access/
40•sestep•3d ago•7 comments

Fault Tolerant Llama training

https://pytorch.org/blog/fault-tolerant-llama-training-with-2000-synthetic-failures-every-15-seconds-and-no-checkpoints-on-crusoe-l40s/
27•Mougatine•3d ago•5 comments

Dickinson's Dresses on the Moon

https://www.theparisreview.org/blog/2025/06/20/dickinsons-dresses-on-the-moon/
12•Bluestein•3d ago•0 comments

Show HN: Magnitude – Open-source AI browser automation framework

https://github.com/magnitudedev/magnitude
60•anerli•7h ago•23 comments

Snow - Classic Macintosh emulator

https://snowemu.com/
203•ColinWright•17h ago•74 comments

Matrix v1.15

https://matrix.org/blog/2025/06/26/matrix-v1.15-release/
128•todsacerdoti•6h ago•37 comments

A Review of Aerospike Nozzles: Current Trends in Aerospace Applications

https://www.mdpi.com/2226-4310/12/6/519
68•PaulHoule•10h ago•32 comments

A new pyramid-like shape always lands the same side up

https://www.quantamagazine.org/a-new-pyramid-like-shape-always-lands-the-same-side-up-20250625/
622•robinhouston•1d ago•150 comments

Puerto Rico's Solar Microgrids Beat Blackout

https://spectrum.ieee.org/puerto-rico-solar-microgrids
347•ohjeez•1d ago•199 comments

Show HN: I built an AI dataset generator

https://github.com/metabase/dataset-generator
121•matthewhefferon•11h ago•24 comments

Introducing Gemma 3n

https://developers.googleblog.com/en/introducing-gemma-3n-developer-guide/
284•bundie•9h ago•131 comments

SigNoz (YC W21, Open Source Datadog) Is Hiring DevRel Engineers (Remote)(US)

https://www.ycombinator.com/companies/signoz/jobs/cPaxcxt-devrel-engineer-remote-us-time-zones
1•pranay01•7h ago

Shifts in diatom and dinoflagellate biomass in the North Atlantic over 6 decades

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0323675
43•PaulHoule•8h ago•2 comments

Collections: Nitpicking Gladiator's Iconic Opening Battle, Part I

https://acoup.blog/2025/06/06/collections-nitpicking-gladiators-iconic-opening-battle-part-i/
5•diodorus•3d ago•0 comments

Typr – TUI typing test with a word selection algorithm inspired by keybr

https://github.com/Sakura-sx/typr
40•Sakura-sx•3d ago•29 comments

Starcloud can’t put a data centre in space at $8.2M in one Starship

https://angadh.com/space-data-centers-1
57•angadh•6h ago•69 comments

The Business of Betting on Catastrophe

https://thereader.mitpress.mit.edu/the-business-of-betting-on-catastrophe/
67•anarbadalov•3d ago•31 comments

“My Malformed Bones” – Harry Crews’s Counterlives

https://harpers.org/archive/2025/07/my-malformed-bones-charlie-lee-harry-crews/
9•Caiero•3d ago•0 comments

Lateralized sleeping positions in domestic cats

https://www.cell.com/current-biology/fulltext/S0960-9822(25)00507-X?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS096098222500507X%3Fshowall%3Dtrue
104•EvgeniyZh•7h ago•50 comments

Memory safety is table stakes

https://www.usenix.org/publications/loginonline/memory-safety-merely-table-stakes
66•comradelion•6h ago•70 comments

Ambient Garden

https://ambient.garden
312•fipar•3d ago•56 comments

“Why is the Rust compiler so slow?”

https://sharnoff.io/blog/why-rust-compiler-slow
148•Bogdanp•6h ago•163 comments

Writing a basic Linux device driver when you know nothing about Linux drivers

https://crescentro.se/posts/writing-drivers/
424•sbt567•4d ago•59 comments

Access BMC UART on Supermicro X11SSH

https://github.com/zarhus/zarhusbmc/discussions/3
57•pietrushnic•11h ago•10 comments

Muvera: Making multi-vector retrieval as fast as single-vector search

https://research.google/blog/muvera-making-multi-vector-retrieval-as-fast-as-single-vector-search/
91•georgehill•15h ago•7 comments

Show HN: Magnitude – Open-source AI browser automation framework

https://github.com/magnitudedev/magnitude
59•anerli•7h ago
Hey HN, Anders and Tom here. We had a post about our AI test automation framework 2 months ago that got a decent amount of traction (https://news.ycombinator.com/item?id=43796003).

We got some great feedback from the community, with the most positive response being about our vision-first approach used in our browser agent. However, many wanted to use the underlying agent outside the testing domain. So today, we're releasing our fully featured AI browser automation framework.

You can use it to automate tasks on the web, integrate between apps without APIs, extract data, test your web apps, or as a building block for your own browser agents.

Traditionally, browser automation could only be done via the DOM, even though that’s not how humans use browsers. Most browser agents are still stuck in this paradigm. With a vision-first approach, we avoid relying on flaky DOM navigation and perform better on complex interactions found in a broad variety of sites, for example:

- Drag and drop interactions

- Data visualizations, charts, and tables

- Legacy apps with nested iframes

- Canvas- and WebGL-heavy sites (like design tools or photo editors)

- Remote desktops streamed into the browser

To interact accurately with the browser, we use visually grounded models to execute precise actions based on pixel coordinates. The model used by Magnitude must be smart enough to plan out actions but also able to execute them. Not many models are both smart *and* visually grounded. We highly recommend Claude Sonnet 4 for the best performance, but if you prefer open source, we also support Qwen-2.5-VL 72B.
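The grounded action flow described above can be sketched roughly as follows. This is an illustrative sketch only, not Magnitude's actual API: `locateTarget`, `clickByDescription`, and their signatures are hypothetical names standing in for the screenshot-to-coordinates loop.

```typescript
// Sketch of a vision-grounded action step (hypothetical names, not Magnitude's API).
// The model sees only pixels and answers with coordinates; the driver clicks them.

interface Point { x: number; y: number; }

// Stand-in for a visually grounded model call: given a screenshot and an
// instruction, return the pixel coordinates to act on. A real implementation
// would send the image + instruction to e.g. Claude Sonnet 4 or Qwen 2.5-VL.
async function locateTarget(_screenshotPng: Uint8Array, _instruction: string): Promise<Point> {
  return { x: 640, y: 360 }; // stubbed for illustration
}

async function clickByDescription(
  screenshot: () => Promise<Uint8Array>,
  click: (p: Point) => Promise<void>,
  instruction: string,
): Promise<Point> {
  const frame = await screenshot();                        // capture current pixels
  const target = await locateTarget(frame, instruction);   // model grounds the instruction
  await click(target);                                     // act on raw coordinates, no DOM selectors
  return target;
}
```

Because the loop never touches the DOM, the same step works identically on a canvas, an iframe, or a streamed remote desktop.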

Most browser agents never make it to production. This is because of (1) the flaky DOM navigation mentioned above, and (2) the lack of control most browser agents offer. The dominant paradigm is that you give the agent a high-level task plus tools and hope for the best. This quickly falls apart for production automations that need to be reliable and specific. With Magnitude, you have fine-grained control over the agent with our `act()` and `extract()` syntax, and you can mix it with your own code as needed. You also have full control of the prompts at both the action and agent level.

```ts
import { z } from 'zod';

// Magnitude can handle high-level tasks
await agent.act('Create an issue', {
  // Optionally pass data that the agent will use where appropriate
  data: {
    title: 'Use Magnitude',
    description: 'Run "npx create-magnitude-app" and follow the instructions',
  },
});

// It can also handle low-level actions
await agent.act('Drag "Use Magnitude" to the top of the in progress column');

// Intelligently extract data based on the DOM content, matching a provided zod schema
const tasks = await agent.extract(
  'List in progress issues',
  z.array(z.object({
    title: z.string(),
    description: z.string(),
    // Agent can extract existing data or new insights
    difficulty: z.number().describe('Rate the difficulty between 1-5'),
  })),
);
```

We have a setup script that makes it trivial to get started with an example; just run "npx create-magnitude-app". We’d love to hear what you think!

Repo: https://github.com/magnitudedev/magnitude

Comments

grbsh•7h ago
Why not just use Claude by itself? Opus and Sonnet are great at producing pixel coordinates and tool usages from screenshots of UIs. Curious as to what your framework gives me over the plain base model.
anerli•6h ago
Hey! To control a browser effectively, an agent framework needs systems both to interact with the browser and to pass relevant content from the page to the LLM. Our framework manages this agent loop in a way that enables flexible agentic execution that can mix with your own code, giving you control in a convenient way. The Claude and OpenAI computer-use APIs/loops are slower, more expensive, and tailored to a limited set of desktop automation use cases rather than robust browser automation.
KeysToHeaven•6h ago
Finally, a browser agent that doesn’t panic at the sight of a canvas
anerli•6h ago
Exactly :)
revskill•5h ago
Not sure about this because you're the author.
anerli•5h ago
Try it out and report back!
revskill•4h ago
No
legucy•4h ago
Classic new age hacker news hostility. Do you think this response adds anything?
owebmaster•1h ago
I do, cheap praise doesn't benefit the community and it might be astroturf. Constructive criticism would be more valuable - there are multiple similar projects like this posted here daily, and this one likely isn't the best.
anerli•57m ago
For context, we have no affiliation with KeysToHeaven (though we appreciate the comment). We do think our vision-first approach gives us a significant edge over other browser agents, though we probably could’ve made that aspect clearer in the title.
TheTaytay•27m ago
It's obvious this is the OP though. They are allowed to respond to favorable comments.
axlee•3h ago
Using this for testing instead of regular playwright must 10000x the cost and speed, doesn't it? At which points do the benefits outweigh the costs?
anerli•3h ago
I think it depends a lot on how much you value your own time, since it's quite time-consuming to write and update Playwright scripts. It's going to save you developer hours to write automations in natural language rather than messing around with and fixing selectors. It's also able to handle tasks that Playwright couldn't do at all, like extracting structured data from a messy/ambiguous DOM and adapting automatically to changing situations.

You can also use cheaper models depending on your needs; for example, Qwen 2.5 VL 72B is affordable and works well for most situations.

plufz•3h ago
But we can use an LLM to write that script, and give that agent access to a browser to find DOM selectors etc. And then we have a stable script where, if needed, we can manually fix any LLM bugs just once…? I’m sure there are use cases with messy selectors, as you say, but to me it feels like most cases are better covered by generating scripts.
anerli•2h ago
Yeah, we've thought about this approach a lot, but the problem is that if your final program is a brittle script, you're going to need a way to fix it again often, and then you're still depending on recurrently using LLMs/agents. So we think it's better to have the program itself be resilient to change, instead of you or your LLM assistant having to constantly ensure the program is working.
adenta•16m ago
I wonder if a nice middle ground would be:

- recording the Playwright script behind the scenes and storing it
- trying that as a “happy path” first attempt to see if it passes
- if it doesn’t pass, rebuilding it with the AI and vision models

Best of both worlds. The Playwright script is more of a cache than a test.
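This cache-then-fallback idea can be sketched as below. All names here (`runCached`, `Action`, the `replay` and `runWithAgent` callbacks) are hypothetical, not part of Magnitude; the point is only the control flow.

```typescript
// Sketch of a "happy path first" runner: replay a recorded action trace if one
// exists; on failure, fall back to the vision agent and cache the fresh trace.

type Action = { kind: 'click' | 'type'; detail: string };

async function runCached(
  task: string,
  cache: Map<string, Action[]>,
  replay: (actions: Action[]) => Promise<boolean>,    // deterministic, LLM-free replay
  runWithAgent: (task: string) => Promise<Action[]>,  // slow path: vision agent, returns its trace
): Promise<Action[]> {
  const recorded = cache.get(task);
  if (recorded && (await replay(recorded))) {
    return recorded;                       // happy path: cached script still works
  }
  const fresh = await runWithAgent(task);  // rebuild with the AI + vision model
  cache.set(task, fresh);                  // the script is a cache, not a test
  return fresh;
}
```

The LLM cost is then paid only on the first run and whenever the page changes enough to break the replay.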

rozap•3h ago
There are a number of these out there, and this one has a super easy setup and appears to Just Work, so nice job on that. I had it going and producing plausible results within a minute or so.

One thing I'm wondering is if there's anyone doing this at scale? The issue I see is that with complex workflows which take several dozen steps and have complex control flow, the probability of reaching the end falls off pretty hard, because if each step has a .95 chance of completing successfully, after not very many steps you have a pretty small overall probability of success. These use cases are high value because writing a traditional scraper is a huge pain, but we just don't seem to be there yet.

The other side of the coin is simple workflows, but those tend to be the workflows where writing a scraper is pretty trivial. This did work, and I told it to search for a product at a local store, but the program cost $1.05 to run. So doing it at any scale quickly becomes a little bit silly.

So I guess my question is: who is having luck using these tools, and what are you using them for?

One route I had some success with is writing a DSL for scraping and then having the llm generate that code, then interpreting it and editing it when it gets stuck. But then there's the "getting stuck detection" part which is hard etc etc.
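The compounding-failure arithmetic in the comment above is easy to make concrete: with per-step success probability p, an n-step workflow completes end-to-end with probability p^n.

```typescript
// End-to-end success probability of an n-step workflow where each step
// independently succeeds with probability p.
function workflowSuccess(p: number, steps: number): number {
  return Math.pow(p, steps);
}

// At 95% per-step reliability, a 30-step workflow finishes only ~21% of the time.
console.log(workflowSuccess(0.95, 30).toFixed(2)); // 0.21
```

This is why caching or hardening individual steps matters far more for long workflows than for short ones.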

anerli•3h ago
Glad you were able to get it set up quickly!

We're currently optimizing for reliability and quality, which is why we suggest Claude, but it can get expensive in some cases. Using Qwen 2.5-VL-72B will be significantly cheaper, though it may not always be as reliable.

Most of our usage right now is for running test cases, and people often seem to prefer Qwen for that use case, since test cases are typically clearer about how to execute.

Something that's top of mind for us is figuring out a good way to "cache" workflows that get taken. This way you can repeat automations either with no LLM or with a smaller/cheaper LLM. This would enable deterministic, repeatable flows that are also very affordable and fast. So even if each step on the first run is only 95% reliable, once the agent gets through it, it could repeat with 100% reliability.

TheTaytay•28m ago
I am desperately waiting for someone to write exactly this! Use the LLM to write the repeatable, robust script. If the script fails, THEN fall back to an LLM to recover and fix the script.
anerli•6m ago
Yeah, I think it's a little tricky to do this well and automatically, but it's essentially our goal: not necessarily literally writing a script, but storing the actions taken by the LLM, being able to repeat them, and adapting only when needed.
ewired•1h ago
It was interesting to find out that Qwen 2.5 VL can output coordinates like Sonnet 4, or does that use a different implementation?
anerli•1h ago
Both of them are "visually grounded," meaning that if you ask for the location of something in an image, they can output the exact x/y pixel coordinates. Not many models can do this, especially not many that are also large enough to reason through sequences of actions well.