I was referring to LLMs and how they work internally. I suppose it's a moot point whether they're 'intelligent' in any real sense. The Prof. Wooldridge guy says they lack problem-solving ability from first principles, so he seems to think they're doing versions of what he calls 'pattern recognition', and says the jury's out on this stuff. He's the 'Ashall Professor of the Foundations of Artificial Intelligence in the Department of Computer Science at the University of Oxford', so I'm guessing he knows something about the subject. Well, more than me, which, OK... isn't saying much.
It was stupid of him to say that "the moon was a distraction". A functional moon base is a needed logistics point because it's so much cheaper to launch from there.
How is it much cheaper? You have to get the fuel there. You have to build a launchpad. Presumably it's going to be maintained by humanoid robots for reasons.
It's a lot cheaper because you don't have to launch the fuel for the Earth-to-Mars leg up from Earth; you'd be getting it from the polar ice on the Moon. Set up a refinery on the Moon that splits the polar ice into liquid hydrogen and liquid oxygen, then launch that fuel from the Moon to a reusable Mars transit vehicle orbiting somewhere between the Moon and Earth. That way the only "expensive" part is launching the crew from Earth and getting them to the Mars transit vehicle.
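To put rough numbers on "cheaper": climbing out of the Moon's gravity well takes far less delta-v than climbing out of Earth's, and the rocket equation makes that gap exponential in propellant. Here's a minimal sketch using commonly cited ballpark delta-v figures and a typical hydrolox Isp; these are assumptions for illustration, not anybody's actual mission design:

```python
import math

G0 = 9.81           # standard gravity, m/s^2
ISP_HYDROLOX = 450  # rough vacuum Isp for LH2/LOX engines, seconds

def propellant_fraction(delta_v_ms: float, isp_s: float = ISP_HYDROLOX) -> float:
    """Tsiolkovsky rocket equation: fraction of liftoff mass that must be propellant."""
    mass_ratio = math.exp(delta_v_ms / (isp_s * G0))
    return 1.0 - 1.0 / mass_ratio

# Rough, commonly quoted delta-v figures (assumed, not mission-specific):
legs = [
    ("Earth surface -> LEO (incl. losses)", 9_400),  # m/s
    ("Moon surface -> lunar escape",        2_400),  # m/s, ~ lunar escape velocity
]

for label, dv in legs:
    frac = propellant_fraction(dv)
    print(f"{label}: {dv/1000:.1f} km/s, ~{frac:.0%} of liftoff mass is propellant")
```

Under those assumptions roughly 88% of a vehicle leaving Earth is propellant just to reach low orbit, versus about 42% to escape the Moon entirely. That gap is the whole case for sourcing the Mars-leg fuel on the Moon.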
Tesla is different from the other Mag 7 in that it has a lot of retail investors who are Musk-is-a-genius believers. So they buy the stock, it goes up, and funds buy because they know that retail will continue to buy, so the stock continues to go up, which causes more retail buying, etc. Of the Mag 7, only Apple is close, and since Jobs died, retail is acting more rationally, and their price is more accurate. Of the others, yeah, bubble, though I'm starting to see Nvidia diversify more, which is a good thing. Still a behemoth, but starting to move beyond the circle. Man, if OpenAI crashes, Oracle is screwed. But that is a discussion for another thread.
Part of the reason is something you can easily empathize with. Imagine you are taking a multiple-choice test in school. Wrong answers score zero, and blank answers score zero. You'd have to be stupid to leave an answer blank: random guessing alone gets you 25% of the filled-in answers right, maybe more if you pick "B" (it's usually "B"). That's essentially what AIs are doing. In their training, and in the benchmarks that judge how well they do, "I don't know" is scored the same as a wrong answer, so they are trained, and judged, to always try. And as a product that AI companies want people to use, they want the AI to always appear confident, because humans are innately trusting of confidence. There are other reasons as well.
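You can see the incentive in two lines of arithmetic. A toy sketch (the scoring schemes here are assumptions for illustration, not how any particular benchmark actually scores):

```python
# Expected value of guessing vs. abstaining under two assumed scoring schemes.

def expected_score(p_correct: float, wrong_penalty: float, blank_score: float = 0.0):
    """Return (expected points from guessing, points from leaving it blank)."""
    guess = p_correct * 1.0 + (1 - p_correct) * wrong_penalty
    return guess, blank_score

# 4-option multiple choice, pure random guess, no penalty for wrong answers:
guess, blank = expected_score(p_correct=0.25, wrong_penalty=0.0)
print(f"no penalty:   guess EV = {guess:+.2f}, blank = {blank:+.2f}")  # guessing wins

# Same question under old-SAT-style negative marking (-1/3 per wrong answer):
guess, blank = expected_score(p_correct=0.25, wrong_penalty=-1/3)
print(f"with penalty: guess EV = {guess:+.2f}, blank = {blank:+.2f}")  # now a wash
```

With no wrong-answer penalty, guessing strictly dominates abstaining, which is exactly the behavior being trained in; negative marking is one way test graders used to neutralize it.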
In the Peanuts comic strip, Charlie Brown’s famous strategy for multiple-choice tests is that if you don't know the answer, it’s usually "C". Thank you, AI
If there’s an “all of the above” option, that’s usually right. At least in the job-related trainings I’ve had.
Not to mention, we humans have been conditioned that computers/tech know the answer, so if an AI model says something like "I was unable to find that answer," then the human will move on to the next AI model that provides a lie.
Yeah, but we do that with people too. We invariably listen to those who claim they have the simple solution to a problem and ignore the people who go on and on about the difficulty, and no matter how often we get burned by doing that, we keep doing it again. It was even a point in an episode of Star Trek TNG: when Beverly Crusher was linked to Picard's mind, she could see that even when he had no solid idea what to do, he still gave the impression of confident knowledge. We follow a person who looks like they know what they are doing, so a person who wants to be followed is going to pretend they know what to do. Or, as I suspect is often the case, fool themselves into thinking they know what to do.
He's not even going to get that. If we go to the moon (if) I predict he's going to be overtaken by Blue Origin. His Starship lander idea is just so stupid.
Jimmy Ba and Tony Wu, two of the co-founders of xAI, both announced they were leaving the company this week. Other high-level people like Hand Gao, as well as lots of lower-level engineers, also left.