frontpage.

Show HN: Make OpenClaw Respond in Scarlett Johansson’s AI Voice from the Film Her

https://twitter.com/sathish316/status/2020116849065971815
1•sathish316•46s ago•0 comments

CReact Version 0.3.0 Released

https://github.com/creact-labs/creact
1•_dcoutinho96•2m ago•0 comments

Show HN: CReact – AI Powered AWS Website Generator

https://github.com/creact-labs/ai-powered-aws-website-generator
1•_dcoutinho96•3m ago•0 comments

The rocky 1960s origins of online dating (2025)

https://www.bbc.com/culture/article/20250206-the-rocky-1960s-origins-of-online-dating
1•1659447091•8m ago•0 comments

Show HN: Agent-fetch – Sandboxed HTTP client with SSRF protection for AI agents

https://github.com/Parassharmaa/agent-fetch
1•paraaz•9m ago•0 comments

Why there is no official statement from Substack about the data leak

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
5•witnessme•13m ago•1 comment

Effects of Zepbound on Stool Quality

https://twitter.com/ScottHickle/status/2020150085296775300
2•aloukissas•17m ago•1 comment

Show HN: Seedance 2.0 – The Most Powerful AI Video Generator

https://seedance.ai/
1•bigbromaker•20m ago•0 comments

Ask HN: Do we need "metadata in source code" syntax that LLMs will never delete?

1•andrewstuart•26m ago•1 comment

Pentagon cutting ties w/ "woke" Harvard, ending military training & fellowships

https://www.cbsnews.com/news/pentagon-says-its-cutting-ties-with-woke-harvard-discontinuing-milit...
6•alephnerd•28m ago•2 comments

Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? [pdf]

https://cds.cern.ch/record/405662/files/PhysRev.47.777.pdf
1•northlondoner•29m ago•1 comment

Kessler Syndrome Has Started [video]

https://www.tiktok.com/@cjtrowbridge/video/7602634355160206623
1•pbradv•31m ago•0 comments

Complex Heterodynes Explained

https://tomverbeure.github.io/2026/02/07/Complex-Heterodyne.html
3•hasheddan•32m ago•0 comments

EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
3•ArtemZ•43m ago•5 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•44m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
2•LiamPowell•46m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
4•duxup•49m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•50m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•1h ago•1 comment

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•1h ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
3•savrajsingh•1h ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•1h ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•1h ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•1h ago•1 comment

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
2•g1raffe•1h ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•1h ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
3•rolph•1h ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•1h ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•1h ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•1h ago•1 comment

Robin: A multi-agent system for automating scientific discovery

https://arxiv.org/abs/2505.13400
151•nopinsight•8mo ago

Comments

peterclary•8mo ago
Will we have AIs doing an increasing amount of the research, theory and even publication, with human scientists increasingly relegated to doing experiments under their direction?
lgas•8mo ago
If so, it won't last long. At some point AI will be able to use robots to do the experiments itself.
TechDebtDevin•8mo ago
lmfao
florbnit•8mo ago
Closed-loop optimization is already a thing, and you don’t even need AI for it; good old Bayesian optimization is enough.
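A minimal sketch of what such a closed loop looks like, assuming scikit-optimize's ask/tell Optimizer as the driver and a simulated assay standing in for the real instrument (the run_assay function and the two process parameters are illustrative assumptions, not anything described in the thread):

    from skopt import Optimizer

    def run_assay(params):
        """Stand-in for a real lab measurement; returns a yield to maximize."""
        temperature, concentration = params
        return -((temperature - 65.0) ** 2) / 100.0 - (concentration - 1.2) ** 2

    # Search space: two hypothetical process parameters (temperature, concentration).
    opt = Optimizer(dimensions=[(20.0, 90.0), (0.1, 5.0)], base_estimator="GP")

    for _ in range(20):
        candidate = opt.ask()           # surrogate model proposes the next experiment
        outcome = run_assay(candidate)  # the "lab" runs it and reports the measurement
        opt.tell(candidate, -outcome)   # feed the result back (skopt minimizes, so negate)

    best = opt.get_result()
    print("best parameters:", best.x, "best objective (negated yield):", best.fun)

The ask/tell split is the point: the optimizer only ever sees (parameters, measurement) pairs, so the loop doesn't care whether run_assay is a simulator or a robot-operated instrument.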
photon_mz•8mo ago
Bayesian optimization doesn't have a 'world model' or intuition about which experiments to run next. Think about it: human scientists are very 'sample-efficient' at deciding which experiment (i.e. sample) to run, in ways that good old optimization isn't but LLMs could be.

thoughts?

florbnit•8mo ago
I feel like you missed the context of my comment. Someone suggested AI would do experiments, and someone responded with “lmfao” as a dismissal. I answered that we already have computers running experimental series even without AI. I’m not dismissing AI; I’m saying that we are already in a world where computers run experiments. People not in an industry that uses this would obviously not know.
TechDebtDevin•8mo ago
AI is just good old bayesian :|
dekhn•8mo ago
In practice this turns out to be extremely challenging. I've been through many labs with a ton of automated stuff that is... constantly being worked on by a range of 3rd party techs, rather than actually running in response to models.
postalrat•8mo ago
It makes me wonder if there is some easily automated or configurable experiment that is capable of revealing "new science".
lamename•8mo ago
Also on HN today "I got fooled by AI-for-science hype—here's what it taught me" https://news.ycombinator.com/item?id=44037941
hirenj•8mo ago
Not my subject area, but at least one other group looked at ABCA1, and judging from this abstract it has already been linked via GWAS; furthermore, the abstract concludes it doesn’t play a role (I haven’t looked at the data though).

I don’t know, but if we were to reframe this as some software to take a hit from a GWAS, look up the small molecule inhibitor/activator for it, and then do some RNA-seq on it, I doubt it would gain any interest.

https://iovs.arvojournals.org/article.aspx?articleid=2788418

starlust2•8mo ago
Wouldn't the fact that another group researched ABCA1 validate that the assistant did find a reasonable topic to research?

Ultimately we want effective treatments but the goal of the assistant isn't to perfectly predict solutions. Rather it's to reduce the overall cost and time to a solution through automation.

ClaraForm•8mo ago
Not if (a) it misses that a line of research was refuted 1-2 years ago, (b) the experiments it recommends (RNA-Seq) are a limited resource that requires a whole lab to be set up to act on them efficiently, and (c) the result of the work is genetic upregulation of a gene, which could mean just about anything.

Genetic regulation can at best let us know _involvement_ of a gene, but nothing about why. Some examples of why a gene might be involved: it's a compensation mechanism (good!), it modulates the timing of the actual critical processes (discovery worthy but treatment path neutral), it is causative of a disease (treatment potential found) etc...

We don't need pipelines for faster scientific thinking ... especially if the result is that experts will have to re-validate each finding. Most experts are truly limited by access to models or access to materials anyway. I certainly don't have a shortage of "good" ideas, and no machine will convince me they're wrong without doing the actual experiments. ;)

cflyingdutchman•8mo ago
This is a great framing - would you please expound on it a bit? Software is almost exclusively gated by the "thinking" step, except for very large language models, so it would be helpful to understand the gates ("access to models or access to materials") in more detail.
ijk•8mo ago
This is, I think, what I've been struggling to get across to people: while some domains have problems that you can test entirely in code, there are a lot more where the bottleneck is so resource-constrained in the physical world that an experiment-free researcher adds no value.

There's practically negative utility in detecting archeological sites in South America, for example: we already know about far more than we could hope to excavate. The ideas aren't the bottleneck.

There's always been an element of this in AI: RL is amazing if you have some way to get ground truth for your problem, and a giant headache if you don't. And so on. But I seem to have trouble convincing people that sometimes the digital is insufficient.

photochemsyn•8mo ago
This approach is very interesting, and one attention-catching datum is that their proposed compound, ripasudil, is now largely out of patent, with some caveats, per Google Patents and ChatGPT o3:

> 1999 - D. Western Therapeutics Institute (DWTI) finishes the discovery screen that produced K-115 = ripasudil and files the first PCT on 4-F-isoquinoline diazepane sulfonamides. (Earliest composition-of-matter priority; a 20-year term from a 1999 JP priority date takes you to 2019, before any extensions.)

> 2005 - Kowa (the licensee) files a follow-up patent covering the use of ripasudil for lowering intra-ocular pressure. U.S. counterpart US 8 193 193 issued 2012; nominal expiry 11 July 2026. (A method-of-use patent – can block generics in the U.S. even after the base substance expires).

Scanning the vast library of out-of-patent pharmaceuticals for novel uses has great potential for curing disease and reducing human suffering, but the for-profit pipeline in academic/corporate partnerships is notoriously uninterested in such research because it wants exclusive patents that justify profits well beyond a simple %-of-manufacturing-cost margin. Indeed, they'd probably try to make random patentable derivatives of the compound in the hope that the activity of the public-domain substance was preserved, and market that instead (see the Prontosil/sulfanilamide story of the 1930s, well related in Thomas Hager's 2006 book "The Demon Under the Microscope").

I suppose the user of these tools could restrict them to in-patent compounds, but that's ludicrously anti-scientific in outlook. In general it seems the more constraints are applied, the worse the performance.

Another issue is that this is a heavily studied area and the result is more incremental than novel. I'd like to see it tackle a question with much less background data - propose a novel, cheap, easily manufactured industrial catalyst for the conversion of CO2 to methanol.

ankit219•8mo ago
This is very cool.

One question I have about these orchestration-based multi-agent systems is out-of-domain generalization. Biotech and pharma is one domain where not all the latest research is out there in the public domain (hence big labs haven't trained models on it). Then there are many failed approaches (internal to each lab + tribal knowledge) which would not be known to the world outside. In both these cases, any model or system would struggle to be accurate (because the model is guessing about things it has no knowledge of). In-context learning can work, but it's hit and miss with larger contexts. And it's a workflow + output where errors are not immediately obvious, unlike with coding agents. I am curious to what extent you see this helping a scientist. Put another way, do you see this as a co-researcher a person can brainstorm with (which they currently do with ChatGPT), or do you expect a higher involvement in their day-to-day workflow? Sorry if this question is too direct.

greenflag•8mo ago
Someone has pointed out on X/Twitter that the "novel discovery" made by the AI system already has an entire review article written about the subject [0].

[0] https://x.com/wildtypehuman/status/1924858077326528991

woolion•8mo ago
This is the real problem with AI: it generates plausible-sounding slop in absurd quantity, for which verification is very expensive.

Beyond that, it would be interesting to check how wrong the AI version is compared to the ground truth (the published papers).

Current technology cannot do logic, and biology is even more perverse. For instance, suppose you ask to remove starch. If you degrade it, the starch is indeed removed. However, the point was most likely to remove sugar, and degrading the starch actually makes the sugar more readily bio-available. The relationships are complex, and there's a lot of implicit knowledge that isn't restated in every sentence.

It would be good if the effort going into hype ideas like that were redirected toward making a great tool to find and analyze the papers in a fairly reliable way (which would have prevented this blunder).

IanCal•8mo ago
That overview, as far as I can find, is talking about wet AMD; the claim for this is specifically dry AMD.

edit - and from the paper

> Notably, while ROCK inhibitors have been previously suggested for treatment of wet AMD and other retinal diseases of neovascularization, Robin is the first to propose their application in dry AMD for their effect on phagocytosis