Originally published at National Catholic Register

Amid the explosion of artificial intelligence (AI) in recent years, Catholics have consistently called for the inclusion of socially responsible safeguards, limits and ethical principles within the technology.

Now, a leading AI developer that is trying to do that has found itself in a major dispute with the U.S. government — stoking a heated debate over the ethical and moral dimensions of AI development.

Anthropic, a San Francisco startup, is the creator of Claude, a large language model (LLM)-based AI assistant that has already enjoyed wide adoption across many sectors of U.S. society, including thousands of businesses and schools. Founded in part by defectors from industry juggernaut OpenAI, Anthropic has positioned itself as the safe, responsible option in the AI ecosystem; its CEO, Dario Amodei, often gives interviews advocating for the development of “guardrails” to protect humanity from unchecked AI.

The U.S. government, meanwhile, has been exploring the use of AI in national defense since last year. Four major U.S. AI companies — Google, xAI, OpenAI and Anthropic — have been working with the Pentagon to varying degrees, with highly lucrative contracts on the table.

In part because of the relatively cautious approach on the part of its developers, Claude

Read more...