Well, one AI CEO has lines he is not willing to cross and the balls to tell the government to fvck off. "it's official - Anthropic just refused the Pentagon's demands. Dario's statement doesn't fuck around: - 'these threats do not change our position: we cannot in good conscience accede to their request.' - he described the Pentagon's efforts to force him to enable Claude for mass surveillance and autonomous killing weapons - Dario's response: mass surveillance is not democratic and Claude isn't good enough to enable autonomous weapons - we won't cave - Dario will help the government transition to a NEW provider if they choose to blacklist Anthropic." — Ejaaz (@cryptopunk7213) February 26, 2026
Meanwhile Grok wants to enslave us while the wealthy go and live on another planet served by autonomous humanoid robots.
Do you realize how ********ed up your stance must be for an A.I. company to find itself on the morally superior ground? The Trump admin, everyone.
Sam Altman said OpenAI could step up as a contractor for the DOD if Anthropic really does get the boot. This tidbit tho: According to Keach Hagey, Altman met his future husband Oliver Mulherin "in Peter Thiel's hot tub at 3 a.m." in 2015. https://en.wikipedia.org/wiki/Sam_Altman
"Can you AI us some bombing targets in Iran?" OpenAI CEO Sam Altman said late Friday that his company has agreed to terms with the Department of Defense on use of its artificial intelligence models, shortly after President Donald Trump said the government won't work with AI rival Anthropic. "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network," Altman wrote in a post on X. "In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome." https://www.google.com/amp/s/www.cn...rival-anthropic-was-blacklisted-by-trump.html
So we all know about Anthropic, right? https://thehackernews.com/2026/02/pentagon-designates-anthropic-supply.html

Anthropic on Friday hit back after U.S. Secretary of Defense Pete Hegseth directed the Pentagon to designate the artificial intelligence (AI) upstart as a "supply chain risk." "This action follows months of negotiations that reached an impasse over two exceptions we requested to the lawful use of our AI model, Claude: the mass domestic surveillance of Americans and fully autonomous weapons," the company said. "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."

As I understood it, engineers from both OpenAI and Google signed a letter in agreement with Anthropic. And initially, OpenAI said they were not going to allow the government access to its product for the same reason. But then OpenAI (Altman) changed their mind. The result? https://www.techbuzz.ai/articles/chatgpt-uninstalls-spike-295-after-pentagon-deal-backlash

U.S. app uninstalls of ChatGPT's mobile app jumped 295% day-over-day on Saturday, February 28, as consumers responded to the news of OpenAI's deal with the Department of Defense (DoD), which has been rebranded under the Trump administration as the Department of War. This data, which comes from market intelligence provider Sensor Tower, represents a sizable increase compared with ChatGPT's typical day-over-day uninstall rate of 9%, as measured over the past 30 days.

And what of Anthropic (Claude)? Meanwhile, U.S. downloads for OpenAI competitor Anthropic's Claude jumped by 37% day-over-day on Friday, February 27, and 51% as of Saturday, February 28, after the company announced that it would not partner with the U.S. defense department.

That doesn't mean there won't be significant data collection in any case, from any AI platform.
Instead of watching an interview with Notts County's manager on Facebook I click on the Meta summary instead. Valid comments. Only problem is that we beat Walsall.
Grok made offensive posts about Diogo Jota, Hillsborough, and the Munich air disaster over the weekend. Are all of the AI platforms this bad, or just Elon's? https://www.nytimes.com/athletic/70...iverpool-grok-hillsborough-munich-jota-posts/
If your post isn’t ironic, then I have to ask, what exactly do you mean by “heavily censored?” Aren’t all of these just computer programs, without any human intervention?
Yes, the post is meant to be ironic. Of course all the AIs are censored, in the sense that they behave and answer in a way that encourages more engagement with them. They don't just produce an answer and post it; there are multiple safeguards around the output to make sure the AI behaves politely (in fact far too politely, even subserviently). Maybe Grok has less of this sugarcoating than other models.
Just to be clear, the AIs are "censored" by their developers/owners, not by the government. In the end they are just algorithms, which can be and are tweaked to maximize corporate interests, like the algorithms for Facebook or TikTok, and in whatever way the leaders of the company want.
To be clear, the AIs aren't "intelligent" - they consume content and distill it, and when someone asks it a question, the AI guesses what words will answer the question based on what words were out there for it to consume which are related to the words in the question which was asked. It's a very good word guesser, but it's just guessing words. If Grok is answering questions with offensive stuff about Diogo Jota and Hillsborough, it's because there's a lot of offensive stuff out there about Diogo Jota and Hillsborough which it consumed, and the offensive stuff seemed like the right words to guess when people ask about those topics. It's just holding up a mirror to society - if people weren't writing offensive stuff on those topics, Grok wouldn't come up with that offensive stuff in its responses.
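The "word guessing" described above can be sketched as a toy bigram model: count which word most often follows each word in the consumed text, then guess accordingly. This is a drastic simplification of a real LLM (which uses neural networks over huge corpora, not a lookup table), and the corpus here is made up for illustration:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for "the content the AI consumed".
corpus = (
    "the match was great the match was awful "
    "the fans were loud the fans were passionate"
).split()

# Count which word follows each word (a bigram table).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def guess_next(word):
    """Guess the word most often seen after `word` in the corpus."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(guess_next("match"))  # "was" - the only word ever seen after "match"
```

The point of the toy model holds for the real thing: the guesses can only reflect what was in the training text, offensive material included.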
The offensive stuff is out there but it's at the extremes, something that reasonable people would have to go out of their way to find. What does Grok say about flat earth and eugenics?
So this has some interesting points and repercussions I want to talk about. First, after OpenAI signed up to replace Anthropic in this contract lots of people gave up ChatGPT and went to other models - hundreds of thousands, maybe millions. This is really bad for OpenAI, not in the sense of them losing money from this (they probably lost little), but in the sense that the top handful of LLM AI companies are essentially interchangeable (proved by the ease of the switch) and OpenAI's massively larger spend only let them discover things that everyone else can then use, and didn't create a lead that guarantees their dominance in the field. The second point is that while I applaud Anthropic's refusal... they got into the contract with Palantir in the first place. They created this military-only model. What did they think this was going to be used for? I'm sure the deal sounded good when it offered them money, but you can't exactly tell your drill instructor "Hey, I signed up for college money, I can't hold a rifle because it's against my ethics".
I agree that if Anthropic wasn't meeting the contract or willing to meet a new contract, then the govt has the right to go elsewhere. What f********head Pete did was ban Anthropic from being used in any way by the US govt and anyone contracting with them, kinda like Huawei.
It's really, really astonishing how bad Facebook's AI agent is. Other platforms will write code or generate video clips. Facebook's will see a post about, say, the global macroeconomic ramifications of the closing of the Straits of Hormuz and reply, "Find gas stations nearby?" And I have no doubt they could have ended hunger in the Americas for the cost it took to get that insight.
I think the fact that he went so crazy about this means that what they were doing is really, really important. And as we don't know what that important thing is, it scares me what it could be.