I don’t know, but if we were to reframe this as some software to take a hit from a GWAS, look up the small molecule inhibitor/activator for it, and then do some RNA-seq on it, I doubt it would gain any interest.
https://iovs.arvojournals.org/article.aspx?articleid=2788418
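The loop being dismissed above (GWAS hit → known small-molecule modulator → RNA-seq readout) is simple enough to sketch in a few lines. Everything here is invented for illustration: the function name, the toy drug lookup, and the gene/compound pairs (ROCK1/ROCK2 and ripasudil, the example drug discussed elsewhere in this thread) stand in for a real ChEMBL-style database query.

```python
# Hypothetical sketch of the pipeline described above. All names and
# data sources are invented for illustration, not a real API.

def gwas_to_rnaseq(gwas_hits, drug_db):
    """For each GWAS hit with a known modulator, queue an RNA-seq experiment."""
    experiments = []
    for gene in gwas_hits:
        compound = drug_db.get(gene)  # stand-in for a ChEMBL-style lookup
        if compound is None:
            continue  # no known inhibitor/activator for this hit; skip it
        experiments.append({
            "gene": gene,
            "compound": compound,
            "assay": "bulk RNA-seq after treatment",
        })
    return experiments

# Toy inputs.
hits = ["ROCK1", "ROCK2", "UNKNOWN_LOCUS"]
db = {"ROCK1": "ripasudil", "ROCK2": "ripasudil"}
print(gwas_to_rnaseq(hits, db))
```

The point of the sketch is how little is in it: the loop automates lookup and scheduling, not any of the judgment about *why* a hit matters.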
Ultimately we want effective treatments but the goal of the assistant isn't to perfectly predict solutions. Rather it's to reduce the overall cost and time to a solution through automation.
Genetic regulation can at best let us know _involvement_ of a gene, but nothing about why. Some examples of why a gene might be involved: it's a compensation mechanism (good!), it modulates the timing of the actual critical processes (discovery worthy but treatment path neutral), it is causative of a disease (treatment potential found) etc...
We don't need pipelines for faster scientific thinking, especially if the result is that experts have to re-validate each finding. Most experts are in any case limited by access to models or materials, not by ideas. I certainly don't have a shortage of "good" ideas, and no machine will convince me they're wrong without doing the actual experiments. ;)
There's practically negative utility for detecting archeological sites in South America, for example: we already know about far more than we could hope to excavate. The ideas aren't the bottleneck.
There's always been an element of this in AI: RL is amazing if you have some way to get ground truth for your problem, and a giant headache if you don't. And so on. But I seem to have trouble convincing people that sometimes the digital is insufficient.
> 1999 - D. Western Therapeutics Institute (DWTI) finishes the discovery screen that produced K-115 (ripasudil) and files the first PCT on 4-F-isoquinoline diazepane sulfonamides. (Earliest composition-of-matter priority; a 20-year term from a 1999 JP priority date takes you to 2019, before any extensions.)
> 2005 - Kowa (the licensee) files a follow-up patent covering the use of ripasudil for lowering intra-ocular pressure. U.S. counterpart US 8,193,193 issued 2012; nominal expiry 11 July 2026. (A method-of-use patent – can block generics in the U.S. even after the base substance expires.)
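The term arithmetic in the quoted timeline can be checked directly. This is a rough sketch only: real expiry depends on filing vs. priority dates, terminal disclaimers, and extensions, none of which are modeled here.

```python
# Rough patent-term arithmetic for the timeline quoted above.
from datetime import date

def nominal_expiry(priority: date, term_years: int = 20) -> date:
    """Nominal expiry: priority date plus the standard 20-year term."""
    return priority.replace(year=priority.year + term_years)

# Exact 1999 priority day isn't given in the comment; Jan 1 is a placeholder.
base = nominal_expiry(date(1999, 1, 1))   # composition-of-matter term
use = date(2026, 7, 11)                   # US 8,193,193 method-of-use expiry

print(base.year)                 # 2019, matching the comment
print((use - base).days // 365)  # ~7 extra years of US method-of-use coverage
```

This is why the method-of-use patent matters commercially: it outlives the base composition-of-matter term by roughly seven years.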
Scanning the vast library of out-of-patent pharmaceuticals for novel uses has great potential for curing disease and reducing human suffering, but the for-profit pipeline in academic/corporate partnerships is notoriously uninterested in such research: they want exclusive patents that justify profits well beyond a simple %-of-manufacturing-cost margin. Indeed, they'd probably try to make random patentable derivatives of the compound, hope the activity of the public-domain substance was preserved, and market that instead (see the Prontosil/sulfanilamide story of the 1930s, well related in Thomas Hager's 2006 book "The Demon Under the Microscope").
I suppose the user of these tools could restrict them to in-patent compounds, but that's ludicrously anti-scientific in outlook. In general it seems the more constraints are applied, the worse the performance.
Another issue is this is a heavily studied area and the result is more incremental than novel. I'd like to see it tackle a question with much less background data - propose a novel, cheap, easily manufactured industrial catalyst for the conversion of CO2 to methanol.
One question I have about these orchestration-based multi-agent systems is out-of-domain generalization. Biotech and pharma is a domain where not all the latest research is in the public domain (hence the big labs haven't trained models on it). Then there are the many failed approaches (internal to each lab, plus tribal knowledge) that the outside world never learns about. In both cases, any model or system will struggle on accuracy, because it's guessing about things it has no knowledge of. In-context learning can work, but it's hit and miss with larger contexts. And it's a workflow and output where errors are not immediately obvious, unlike with coding agents.

I'm curious to what extent you see this helping a scientist. Put another way: do you see this as a co-researcher a person can brainstorm with (which they currently do with ChatGPT), or do you expect deeper involvement in their day-to-day workflow? Sorry if this question is too direct.