Depends… are they mass producing the kind of stuff that makes it into consumer electronics, or are those products being sidelined to make the kind of stuff that is really only applicable in giant server data centers?
Well shoot…I was looking forward to the hardware equivalent of buying Christmas decorations on December 26, or buying candy on November 1.
When I posted the tweet, I didn't think much about it, because it seemed geeky but normal, but then, after Val and Dave asked, I realized that it would require some tinkering with the CC website. I asked Google about it, and I found some information, which sounds about right:

Yes — and it already exists in multiple forms. Systems that produce a different number for each transaction are called dynamic card numbers, tokenized card numbers, or one-time (single-use) card numbers. They reduce fraud by ensuring intercepted numbers can't be reused.

How it works (high-level)
- Tokenization: The merchant sees a token (a surrogate number) instead of the real PAN. Tokens map to the real card in the issuer/processor system. Tokens can be single-use, merchant-specific, or limited to a time window or transaction amount.
- Dynamic ****** (dCVV): The card's PAN may be static but the ****** (security code) is generated dynamically (often by a chip or mobile app) per transaction or short time window.
- One-time virtual card numbers: The issuer or a third party generates a unique virtual card number (VCN) for each transaction or merchant; the VCN is routed to the underlying account and can be configured to expire immediately or after one use.
- EMV chip/tap with tokenization: Contactless and chip transactions often use cryptographic transaction counters and EMV cryptograms, so the data presented to the merchant changes every transaction even though the PAN may be the same.
- Cryptographic card schemes (e.g., 3-D Secure / EMV 3-D Secure): Add transaction-specific cryptographic proofs that bind the merchant, amount, and time to the authorization.

Implementations and examples
- Major card networks: Visa, Mastercard, and American Express support tokenization frameworks and dynamic ****** offerings to issuers and merchants.
- Issuers and fintechs: Many banks and fintechs (Capital One, Citi, Revolut, Privacy.com, Curve, etc.) provide virtual card numbers or single-use card functionality.
- Operating-system-level tokens: Apple Pay, Google Pay, and Samsung Pay use device-specific tokenization so the merchant never receives the real PAN; each device, and sometimes each transaction, uses different cryptograms.

Practical variants and trade-offs
- Per-transaction numbers (true one-time PAN): Maximum protection; requires issuer/processor support and may complicate recurring payments and merchant record-keeping.
- Merchant-scoped tokens: Unique token per merchant; prevents cross-merchant misuse but supports recurring billing with that merchant.
- Time-limited tokens: Work for a set window (hours/days), useful for e-commerce sessions.
- Dynamic ****** with static PAN: Easier to adopt for card-present and remote channels that accept dynamic ******; still exposes the PAN to merchants.
- User experience: Virtual cards and device-wallet tokens can be seamless, but true one-time PANs might interfere with returns, refunds, or merchant reconciliation unless the tokenization system supports mapping.

Now, how much control you have over it as a user is the real question, but I guess that if you are knowledgeable enough, you could automate logging in to your CC website/app and request that they issue a VCN each time, and do about the same for Uber. Or maybe not, and you just do it manually, but if it were me, it would mean saving myself $300/month, so why not do it?
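For anyone curious what the "mapping" part actually looks like, here is a toy Python sketch of the idea from the summary above. The class, merchant name, and card number are made up for illustration; real issuers obviously do this inside their own systems, not in thirty lines of Python.

```python
# Toy illustration only - not how any real issuer implements this.
# It just shows the mapping idea: the merchant sees a surrogate number,
# the issuer maps it back to the real card, and the token can be
# single-use, merchant-scoped, or time-limited.
import secrets
import time

class TokenVault:
    def __init__(self):
        self._tokens = {}  # surrogate number -> card mapping + constraints

    def issue(self, real_pan, merchant=None, ttl_seconds=None, single_use=True):
        """Create a surrogate card number with optional constraints."""
        surrogate = "9" + "".join(str(secrets.randbelow(10)) for _ in range(15))
        self._tokens[surrogate] = {
            "pan": real_pan,
            "merchant": merchant,          # None = any merchant
            "expires": time.time() + ttl_seconds if ttl_seconds else None,
            "single_use": single_use,
            "used": False,
        }
        return surrogate

    def authorize(self, surrogate, merchant):
        """Return the real PAN if the token is valid for this request, else None."""
        t = self._tokens.get(surrogate)
        if t is None:
            return None
        if t["single_use"] and t["used"]:
            return None                    # one-time number already spent
        if t["merchant"] and t["merchant"] != merchant:
            return None                    # merchant-scoped token used elsewhere
        if t["expires"] and time.time() > t["expires"]:
            return None                    # time-limited token expired
        t["used"] = True
        return t["pan"]

vault = TokenVault()
vcn = vault.issue("4111111111111111", merchant="Uber", single_use=True)
print(vault.authorize(vcn, "Uber"))        # real PAN on first use
print(vault.authorize(vcn, "Uber"))        # None - the number can't be reused
```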
You'd have to be pretty stupid to do that. Which is why Sam Altman is in the Middle East right now looking for stupid people with lots of money. The problem is OpenAI is spending otherworldly amounts of money to make their product better, and it's just not working. Their only idea how to fix that is to 10x the otherworldly spending. No one knows this better than Microsoft, which knows what's going on inside OpenAI. What they have now is good enough to be a toy, but not good enough to do business beyond simple tasks that wouldn't be worth it if they were charged a sustainable price for the product. Microsoft is pretty stupid, but they are not Oracle / Meta stupid. They aren't going to bet the future of the company on this.
Everything (things like memory, NAND, "graphics" cards) is packaged for use in servers, and it's just going to go in a landfill after futile attempts to find some other use for it. There is a lot of science going on with LLMs, but they use small versions (invariably Chinese models) that run on local hardware. They don't need massive datacenters.
Satya Nadella. The two companies are already integrated, with ChatGPT powering Copilot and OpenAI using Microsoft Azure.
I agree he's a dummy, but from his comments the last few days, it's clear even he can see something is wrong. He's essentially begging people to use AI or else it's going to fall apart. That means he knows 1) it's going in the direction of falling apart 2) not enough people use it and 3) he has no idea how to get people to use it. But ChatGPT doesn't look like it's going to be more than it is, and what it is is never going to pay back what was already spent, let alone continued spending. Just as a business decision it's nonviable to keep OpenAI going like they want.
But if OpenAI folds it won't be at its current value, so Microsoft can just continue what they're doing and absorb it. This is the company that spent $75.4 billion on Activision Blizzard, $19.7 billion on Nuance Communications, $8.5 billion on Skype and $7.2 billion on Nokia.
Eh, even if it is not a bubble, there is going to be consolidation. The shakeout is inevitable, not all of these models and companies are going to become profitable or even just sustainable.
The problem is the memory is installed directly on the board (or, for Blackwell, on the same package as the processing chips themselves) in Nvidia GPUs. Nothing in a datacenter can be used in a home computer.
I know this is a thread about AI news, not our experience with AI, but this is my first time ever using AI chatbots on purpose and I wanted to share. A long-term modeling project I've had is building a 1/72 Fouga Magister jet trainer in Israeli Flight Demonstration Team colors flying over a desert with a little figure below to simulate the "Damned for All Time" scene from Jesus Christ Superstar. I've spent years thinking about how to make that figure of Judas, and I still don't know. I've thought about sculpting it, but it seems way beyond me. There are people who make custom sculpts for gaming, but it's very expensive. There is an AI-based program called Meshy which takes 2D images and makes 3D digital models out of them which one can print, but it does a terrible job with the photos I have of Carl Anderson because it can't differentiate him from the ground, and the images I have are all cut off somewhere. But modelers have had luck asking chatbots to generate clean images and sending those to Meshy, so I decided to try it. I gave these chatbots the following prompt: "Create an image of Judas from the 1972 movie Jesus Christ Superstar kneeling on the ground. Use a transparent background." Here is what they gave me:

Nano Banana (Google): Fake transparent background, apocalyptic medieval Indian getup, kinda gets the emotion right.

Claude: Fake transparent background, utterly wrong, but it's so certain of the look it might be influenced by a theatre production.

Grok: Jedi Jesus? Wrongest of them all by quite a distance. At least he's clothed.

Copilot (Microsoft): I like it, despite being wrong. It's Shaft with a Judas flavor. Good position. And this is the only one to have a transparent background.

Didn't try ChatGPT because they want me to register before they generate an image. I wouldn't piss on them if they were on fire - I certainly won't give them my name and info.
ChatGPT says "We’re so sorry, but the prompt may violate our guardrails concerning similarity to third-party content. If you think we got it wrong, please retry or edit your prompt." I changed "of" to "like" but it didn't help.
So I extracted a photo of Carl Anderson as Judas in Jesus Christ Superstar and asked Google Gemini about it.
I guess Google Gemini thinks all black people look the same, thinking that Carl Anderson as Judas in the Biblical tale of Jesus is Jimmy Cliff as this.
OK, that is news to me. I've got to see if my card does the same thing. I'm thinking it might be good for subscriptions. Some are just ridiculously hard to cancel. I canceled Misfits three times before it took.
I don't know if it would work for a recurring subscription. Mine are one-time use, like if I'm buying something from a social media ad.
So, is AI doomed to fail? https://www.techbuzz.ai/articles/new-research-claims-ai-agents-are-mathematically-doomed-to-fail

Published without fanfare during the height of agent hype, "Hallucination Stations: On Some Basic Limitations of Transformer-Based Language Models" delivers a mathematical gut punch to the agentic AI vision. The paper, authored by former SAP CTO Vishal Sikka and his teenage prodigy son, claims to prove that LLMs are fundamentally incapable of carrying out computational and agentic tasks beyond a certain complexity. Even reasoning models that go beyond pure word prediction won't fix the problem, according to their analysis.

"There is no way they can be reliable," Sikka told Wired in a recent interview. The researcher, who studied under AI pioneer John McCarthy before his career at SAP, Infosys, and Oracle, now runs AI services startup Vianai. His verdict on agents running critical systems like nuclear power plants? Forget it. You might get one to file some papers and save time, but mistakes are inevitable.

-----------------------------------------------------------

The AI agent debate boils down to a tension between mathematical truth and economic inevitability. Sikka's paper proves what many suspected - that pure LLMs can't be perfectly reliable. OpenAI's own research confirms hallucinations are permanent. But the industry isn't building pure LLMs anymore. They're building hybrid systems with verification layers, guardrails, and domain-specific architectures. Whether that's enough to overcome fundamental mathematical limitations remains an open question. What's certain is that 2026 won't be "the year of the agent" either - but it'll be another year of more agents, incrementally better and more widely deployed. The massive automation of human cognitive activity is coming, mathematical proof or not. Whether that improves our work and lives, as Alan Kay suggests, won't be mathematically verifiable.
Today I had a chance to gauge the current state of the art in AI in a topic where I am an expert. I did freelance translations for about two decades but really haven't touched it for at least a decade. I agreed to help a relative translate a rental document, so I decided to see how well AI is doing. Previously, I would have had to scan the document, put it through character recognition software, and then use Google Translate. It would give me a decent starting point but nothing much higher than the low 90s. Oftentimes the hassle of those steps was not worth the trouble. This time I provided Gemini Pro with pictures of the document pages and used a short phrase to tell it to translate a Spanish-language rental contract to English. I made three minor corrections and some formatting adjustments in three pages' worth of text. It did the scanning and used professional-level judgment on the translation context. Of course, legal jargon is much more tightly defined than other contexts, but I thought that was pretty impressive.
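If anyone wants to run the same photos-in, translation-out workflow in bulk rather than through the app, here is a minimal sketch using Google's Python SDK. The model name, file names, and prompt wording are my own placeholders, not what was actually used above.

```python
# Rough sketch of the image-in, translation-out workflow described above.
# Assumes the google-generativeai package and an API key; the model name
# and file names here are placeholders.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

# Photos of the contract pages, one image per page.
pages = [Image.open(f"contract_page_{i}.jpg") for i in range(1, 4)]

prompt = (
    "Translate this Spanish-language rental contract to English. "
    "Preserve the clause numbering and formatting, and keep legal terms precise."
)

# The model handles both the character recognition and the translation in one call.
response = model.generate_content([prompt, *pages])
print(response.text)
```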
The problem with assessments like this one is that there are a whole host of tasks that fall in between "running a nuclear power plant" and "filing some papers" where A.I. might be of use. I agree we should keep A.I. away from critical systems. My main issue with the whole A.I. bubble is just that, the bubble aspect of it. Billions of dollars being pumped into it, unlikely to ever see a return on investment that is worthwhile. The grotesque, frankly deeply immoral hoarding of computing resources. I am not in the camp that believes A.I. is incapable of carrying out any worthwhile tasks. I have used the example of database analysis before; if you had told me even two years ago it would become as competent at that task as it currently is, I would have found it very hard to believe.
Yep, you're describing exactly what AI large language models are best at. But you still need a human to validate them.