mdp2021•1h ago
> Looks like Sam Altman regrets making ChatGPT available to the Pentagon. And he should, Dave Lee says
(«Dave Lee is Bloomberg Opinion's US technology columnist. He was previously a correspondent for the Financial Times and BBC News»)
Transcript:
> Sam Altman admitted his company looked sloppy and opportunistic when it agreed, in something of a hurry, to make ChatGPT available on the Pentagonʼs classified networks. Hereʼs why I agree with him that it was sloppy, and why I think heʼs made a significant mistake.
> You may have heard that Anthropic, the creator of Claude, has fallen out with the Department of Defense over the use of AI for surveillance and autonomous killing. Late last week, Sam Altman swooped in and made an agreement to replace Claude. Altman said he had secured protections against unethical use, but many thought heʼd been naive in making that deal, especially since heʼd made it so quickly, capitalizing on the chance to steal some business from a rival. But now itʼs backfiring. Since his announcement to work with the military, downloads of ChatGPT have fallen, while installs of the Claude app have surged by more than 200%, bringing it to the top of the app store charts after being outside the top 100 as recently as January. ChatGPT users on social media are angry with Altman over what they see as a capitulation to the Trump administration. Dario Amodei, the co-founder of Anthropic, has shown principle at a time when so few tech CEOs seem prepared to do so. However, if the Pentagon follows through on its threat to ban any American company with a military contract from doing business with Anthropic, that could destroy Anthropicʼs business. Anthropic has said it plans to fight that order in court, and legal observers believe it will probably prevail. Therefore, Sam Altmanʼs hasty moves might ultimately work in Anthropicʼs favor, gifting it millions more users and the kind of PR that money canʼt buy. The OpenAI CEO said he would treat the past few days as a learning experience.