> To manufacture a protein, researchers typically need to order a corresponding DNA sequence from a commercial vendor, which they can then install in a cell.
... and the point of the security system, as I understand it, is to deny the purchase, on the grounds that the DNA sequence is somehow useful for something harmful:
> Those vendors use screening software to compare incoming orders with known toxins or pathogens. A close match will set off an alert.
But this supposes that being a "close match" is either necessary or sufficient evidence of a threat.
But presumably the entire point of the AI — the reason it would be applied to the task — is the premise that one or more completely different DNA sequences could be used to produce the same harmful protein, yes?
... And is that not where the research was already going with conventional techniques?
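That premise doesn't even require AI: the genetic code is degenerate, so most amino acids are encoded by several synonymous codons, and two DNA sequences can diverge substantially while producing an identical protein. A toy sketch (my own illustration, using a small hand-picked subset of the standard codon table — not any vendor's actual screening logic):

```python
# Illustrative only: naive DNA-level matching can miss sequences that
# encode the same protein, because the genetic code is degenerate.

# Small subset of the standard genetic code (codon -> amino acid).
CODON_TABLE = {
    "ATG": "M",                                          # Met (start)
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",      # Ala
    "CGT": "R", "CGC": "R", "CGA": "R", "CGG": "R",      # Arg
    "CTT": "L", "CTC": "L", "CTA": "L", "CTG": "L",      # Leu
}

def translate(dna: str) -> str:
    """Translate a DNA coding sequence into its amino-acid sequence."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))

def identity(a: str, b: str) -> float:
    """Fraction of positions that match between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Two different DNA sequences for the same short peptide, "MARL",
# built from synonymous codons.
seq1 = "ATG" + "GCT" + "CGT" + "CTT"
seq2 = "ATG" + "GCG" + "CGA" + "CTG"

assert translate(seq1) == translate(seq2) == "MARL"
print(f"DNA identity:     {identity(seq1, seq2):.0%}")                        # 75%
print(f"Protein identity: {identity(translate(seq1), translate(seq2)):.0%}")  # 100%
```

A DNA-level screen keyed on close sequence matches sees two fairly different orders here, yet the protein is the same — which is exactly why the screening would need to reason about the encoded product, not just the letters.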
> But Clore says monitoring gene synthesis is still a practical approach to detecting biothreats, since the manufacture of DNA in the US is dominated by a few companies that work closely with the government. By contrast, the technology used to build and train AI models is more widespread. “You can’t put that genie back in the bottle,” says Clore. “If you have the resources to try to trick us into making a DNA sequence, you can probably train a large language model.”
Right, the problem is still agency rather than knowledge. The threat of AI isn't so much the discovery of a novel synthesis as the social engineering required to gather the materials (and orchestrate the process).
zahlman•59m ago