If you are not familiar with the technology, neural nets have one important trait: they are very time-, effort-, and energy-intensive to train, but the result is cheap to run. It doesn't take a lot of computing power (in a relative sense) for ChatGPT to answer a question. That's great for startup economics - a high barrier to entry and loads of profit on the tail end. Unfortunately, there's a problem where the ideal meets reality. And the reality is that AIs, as we know them now, need a lot of managing, modification, and constant maintenance. When thousands of young men devote their lives to trying to convince the AI that Hitler did nothing wrong, maybe there is no tail end. So we have this article, which says that OpenAI is spending $700,000 every single day to keep ChatGPT (just ChatGPT) running. And the numbers suggest they're moving further from profitability, not closer. https://www.firstpost.com/tech/news...ompany-700000-dollars-every-day-12986012.html
I saw pictures of ChatGPT making extensions of paintings beyond the frame of the picture. I wonder: if I gave it a scan of an old picture/portrait that has been damaged, with a crack running all over the face, could ChatGPT "repair" it? How did they do it with those paintings?
I don't know about OpenAI's tools (although I suspect they could), but there are other AI tools that already do this. Here's a free one (for a limited number of images): https://www.capcut.com/ Go to the web site, and about three pages down you get a rainbow list of some of the things it can do; in the middle is "Old photo restoration". Click on that, and you go to a page that lets you upload photo scans and have them cleaned of blemishes and even colorized. Then you can download the result.
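For what it's worth, "crack repair" is what image people call inpainting: you mark the damaged pixels and fill them in from the surrounding picture. The AI restorers above use learned models, but the core idea can be sketched in a few lines of plain Python. This is just an illustration of the classical neighbor-propagation approach, under my own made-up function names, not any product's actual algorithm:

```python
def inpaint(image, mask):
    """Naive classical inpainting sketch.
    image: 2D list of grayscale values.
    mask: 2D list of booleans, True where the pixel is damaged.
    Returns a repaired copy; the original is left untouched."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    # Set of pixels we still need to fill.
    unknown = {(y, x) for y in range(h) for x in range(w) if mask[y][x]}
    while unknown:
        progress = []
        for (y, x) in unknown:
            # Average the 4-connected neighbors whose values are known.
            neighbors = [img[ny][nx]
                         for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in unknown]
            if neighbors:
                progress.append((y, x, sum(neighbors) / len(neighbors)))
        if not progress:
            break  # a region with no known border at all; nothing to propagate
        for y, x, val in progress:
            img[y][x] = val
            unknown.discard((y, x))
    return img

# Example: a 1-pixel "crack" through a flat gray image.
scan = [[10, 10, 10],
        [10, 0, 10],
        [10, 10, 10]]
crack = [[False, False, False],
         [False, True, False],
         [False, False, False]]
repaired = inpaint(scan, crack)
print(repaired[1][1])  # → 10.0, filled in from the four neighbors
```

Real tools do this with much smarter statistics (or neural nets), which is why they can invent plausible texture instead of just smearing neighbors into the gap, but the mark-then-propagate structure is the same.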
So is AI going to replace everyone's job except mine and take over the world? Someday, yeah. But probably not anytime soon. It seems that the current generation of AIs is a bit of a fad, only being good for scams, cheating on school tests, and making spam. It's going to make our lives worse in some ways (especially next election, which will probably be awful), but it doesn't look like it's going to change the world much otherwise. I can say that because we are starting to get numbers on the general population's adoption of AI, and they don't look good. Bing hasn't gained any market share since they started giving away AI for free. The number of people visiting OpenAI's web site is down. Polls indicate people are just not interested. https://www.honest-broker.com/p/ugly-numbers-from-microsoft-and-chatgpt
The part in bold will change over the next couple of weeks when schools and colleges are back in session.
As a former defender of less-than-stellar ability, I of course have been compared to an orange traffic cone. Now, thanks to protests in San Francisco, I found something that, as an orange traffic cone, I could stop all the time: self-driving cars. https://www.npr.org/2023/08/26/1195695051/driverless-cars-san-francisco-waymo-cruise

Two people dressed in dark colors and wearing masks dart into a busy street on a hill in San Francisco. One of them hauls a big orange traffic cone. They sprint toward a driverless car and quickly set the cone on the hood. The vehicle's side lights burst on and start flashing orange. And then, it sits there immobile.

"All right, looks good," one of them says after making sure no one is inside. "Let's get out of here." They hop on e-bikes and pedal off.

All it takes to render the technology-packed self-driving car inoperable is a traffic cone. If all goes according to plan, it will stay there, frozen, until someone comes and removes it....

... Safe Street Rebel isn't the only group that's had issues with the autonomous vehicles. San Francisco's police and fire departments have also said the cars aren't yet ready for public roads. They've tallied 55 incidents where self-driving cars have gotten in the way of rescue operations in just the past six months. Those incidents include driving through yellow emergency tape, blocking firehouse driveways, running over fire hoses and refusing to move for first responders.
Legendary USMNT and LA Galaxy player, Danny Califf, was last seen resting on the hood of a self-driving car in San Francisco.
The Taylor Swift AI porn pics got me wondering how AI will get used in the upcoming election. I will be surprised if they (Russia, China, the GOP) don't start putting out negative AI-generated material about Biden.
We've had faked audio in the Slovakian election already: https://www.wired.com/story/slovakias-election-deepfakes-show-ai-is-a-danger-to-democracy/ and there were fake Biden robocalls in the New Hampshire primary telling people to stay home: https://www.msn.com/en-us/news/poli...oncerns-about-ai-in-2024-election/ar-BB1hdeNv Fake audio is easy now, if occasionally stilted and discernible that way. Fake video is around, but it's still not 100%. If you watch business YouTube, you've probably come across a scam video ad of the creator of Ethereum telling people to connect their crypto wallets to some link to double their holdings of that coin. It's fairly convincing motion-wise, but so blurry it looks like it was filmed on a 20-year-old webcam. But I guess a state actor can throw a lot more resources at it than a single scammer.
A self-driving Waymo car was vandalized and went up in flames in San Francisco's Chinatown Saturday night. Video shows group vandalizing Waymo driverless car in San Francisco's Chinatown (msn.com)

The Butlerian Jihad has begun...
I suppose AI is used in those translation tools too. Going by the quality of what these come up with, you might think AI is not evil, but plain stupid. I sometimes type in an English word to get a translation, and it comes up with something completely wrong. Then I type "xxx means" into the web browser, and I get an answer from some kind of online dictionary that gives me the meaning, etc.
Hey, next time you're at work, type in "AI Milf". Then walk away from your computer for some coffee...
Speaking of "xxx", the new AI talk is about how scientists are using AI to make visualizations for their papers. The thing that put this in the news is an image of a rat (which I won't post) with an absolutely gargantuan dck with completely fictional internal anatomy. The people who draw these kinds of things are very skilled, but very expensive, so it makes sense that the people writing papers would look for cheaper alternatives. But AI doesn't know anything about what it is drawing. It just knows, in a very general sense, the elements that make up a certain kind of drawing, so when you ask it to make that kind of drawing, it arranges those elements into a cohesive whole. That works for some things, but for scientific drawings it's still at a state where it is comically bad.