US used Claude AI tool to capture Maduro, violating developer’s terms — WSJ
The developer Anthropic’s user guidelines prohibit the use of the technology to facilitate violence, the newspaper said
NEW YORK, February 14. /TASS/. During their operation to capture Venezuelan President Nicolas Maduro, the US military used the Claude AI tool despite the developer's prohibition on using the technology to facilitate violence, The Wall Street Journal (WSJ) reported, citing sources.
The developer Anthropic's user guidelines prohibit the use of the technology to facilitate violence, develop weapons, or conduct surveillance.
"We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise," an Anthropic spokesman was quoted as saying. "Any use of Claude—whether in the private sector or across government—is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance."
In late January, the WSJ reported that a contract between the Pentagon and Anthropic worth approximately $200 million was at risk of being cancelled over disagreements about the use of artificial intelligence technologies to surveil people on US territory.