Just give it six months: https://www.dailystar.co.uk/news/la...n=continue_reading_button#amp-readmore-target
How will this influence the use of AI? Scientists calculate we will produce 180 zettabytes of information in 2025, compared with last year's 90 zettabytes. That 180 is the limit of what we can store, so everything will need scrutinizing before it is stored. Is AI getting limited by the lack of storage space, or will it be used to delete stuff by "deciding" what's relevant enough to keep? Hint: buy a lot of HDs/thumb drives now.
https://www.technologyreview.com/2021/05/27/1025453/artificial-intelligence-learning-create-itself-agi AI is learning how to create itself | MIT Technology Review "And the past year has seen a raft of projects in which AI has been trained on automatically generated data. Face-recognition systems are being trained with AI-generated faces, for example." So, if AI creates data itself, will it decide its own data is more important than your cloud pictures of your grandchildren, or your collection of kitten pictures, when storage space runs short?
The UK should bar technology developers from working on advanced artificial intelligence tools unless they have a licence to do so, Labour has said. Ministers should introduce much stricter rules around companies training their AI products on vast datasets of the kind used by OpenAI to build ChatGPT, Lucy Powell, Labour’s digital spokesperson, told the Guardian. Her comments come amid a rethink at the top of government over how to regulate the fast-moving world of AI, with the prime minister, Rishi Sunak, saying it could pose an “existential” threat to humanity. Powell said: “My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that’s governing how they are built, how they are managed or how they are controlled.” She suggested AI should be licensed in a similar way to medicines or nuclear power, both of which are governed by arms-length governmental bodies. “That is the kind of model we should be thinking about, where you have to have a licence in order to build these models,” she said. “These seem to me to be the good examples of how this can be done.” https://www.theguardian.com/technol...umans-in-two-years-says-uk-government-adviser
Uhm, how are you going to monitor/control that? It's not like you need special equipment to build AI applications. My nephew and niece have top-grade German university computer science qualifications and could do that stuff at home with a couple of computers linked together to work as a poor man's supercomputer.
https://www.eetasia.com/ai-cant-design-chips-without-people/ You can't be evil if you ain't got imagination.
This is an interesting article. Basically, ChatGPT 4.0 correctly said the number 17077 is a prime number 98% of the time back in March and 2% of the time now. ChatGPT 3.5 underwent the complete opposite change, going from 7% correct in March to 87% correct now. https://finance.yahoo.com/news/over-just-few-months-chatgpt-232905189.html The problem is that we can't really tell what the AI is keying on when it finds an answer. They make some subtle changes to ChatGPT to make its speech more natural, and its math goes to hell. Or gets a lot better. No one knows why. No one can know why. This is something we've known for a while. For example, we've long had AI that interprets what pictures represent. But you can take a picture of a stop sign and change one single pixel, and if it's the right pixel the AI will be completely confused.
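For what it's worth, you don't need an LLM to settle the 17077 question; a few lines of ordinary code answer it deterministically every time. A minimal sketch using plain trial division (perfectly adequate for numbers this small):

```python
def is_prime(n: int) -> bool:
    """Check primality by trial division up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print(is_prime(17077))  # → True: 17077 really is prime
```

The point of the contrast: this code gives the same answer every run, while the article's LLM flipped from 98% to 2% accurate on the identical question after an update.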
I received an AI phone call this week from my medical supplier. Before I could say hello, it launched into what I'm guessing was a rushed, toneless monologue on how to reorder, followed by two phone numbers that didn't have time to register. Then goodbye!!! It took me three calls and being passed along until I found the right department. I mentioned the call, and the rep admitted there was an ongoing adjustment. I told him he should be a diplomat. Laughing, he said all the receptionists were frustrated with their potential replacement.
You know, language occupies a lot of brainpower. It seems the growth of the human brain for the most part went into language processing. Maths sits in that same territory. So most humans are good at language or maths, mediocre at both, and only geniuses are good at both. In other words, it seems logical.
Talking about language and AI: I guess Google Translate was/is AI-based. Well, it wasn't very good in the first place, but recently it really got shittier and shittier. Sometimes it hasn't got a clue about real English words; it fails to recognize them. So, knowing for certain it's a real word, I looked it up on the internet with "meaning" added to the search, and presto, several sites give its meaning, phonetics, etc.
There used to be a bigsoccer poster called "Brushes Sand." "Brushes Sand" was how google translated the name of U.S. National team coach Bruce Arena.
For years in the DC United forums we referred to ourselves as ventilators, because that was how "fans" was translated in an article about Jaime Moreno.
To ChatGPT, there is no difference between language and math. It only does one thing: interpret the input and pick one word after another to make an output that its neural net thinks is best. If you ask it what one plus one is, it doesn't know. It only knows that the vast majority of pages on the internet indicate the likeliest answer is the word "two". It doesn't compute. It doesn't reference trusted sources. If there aren't a lot of pages saying 17077 is a prime number, then it can get confused by keying onto something else. Maybe in trying to get ChatGPT to emphasize pages with better English, they also pointed it away from pages with the wrong answer.
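That "pick the likeliest next word" idea can be sketched in a few lines. This is a toy bigram count model with an invented three-sentence corpus, vastly simpler than ChatGPT's neural net, but it shows the same failure mode: "two" wins only because it outnumbers "three" in the training text, with no arithmetic anywhere.

```python
from collections import Counter, defaultdict

# Tiny invented "internet": two pages say 1+1=2, one wrong page says 1+1=3.
corpus = (
    "one plus one is two . "
    "one plus one is two . "
    "one plus one is three . "
).split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def next_word(word: str) -> str:
    """Emit the statistically likeliest follower -- no computation, just counts."""
    return follows[word].most_common(1)[0][0]

print(next_word("is"))  # → "two", purely because it appears 2-to-1 in the corpus
```

Shift the corpus so the wrong pages outnumber the right ones and the model confidently answers "three"; that is roughly the sense in which an LLM "knows" arithmetic.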
That's on a different level. It's like saying because our brains run on chemical interactions we must be good at chemistry.
Well, AI indeed tries to kill you: "Pak'nSave AI meal bot suggests deadly and toxic spreads, supermarket says it will 'keep fine-tuning'" (NZ Herald on MSN.com, 14 hours ago).