This guy's got some interesting thoughts on AI and its uses going forward, including for Elmo's driverless cars... As you say, some of these questions can't be answered, and we need to think about which ones should be answered first.
L1 is 900,000 miles from Earth. It takes 4 to 5 seconds for a radio signal to reach that far and 4 months for a SpaceX rocket to get there.
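The round numbers in the post above are easy to sanity-check. A quick calculation (distance figure taken from the post; the speed of light is a standard constant):

```python
# Sanity check of the one-way radio delay to a point ~900,000 miles out.
SPEED_OF_LIGHT_MILES_PER_S = 186_282  # speed of light in vacuum
distance_miles = 900_000              # figure quoted in the post

one_way_delay_s = distance_miles / SPEED_OF_LIGHT_MILES_PER_S
round_trip_s = 2 * one_way_delay_s

print(f"one-way: {one_way_delay_s:.1f} s, round trip: {round_trip_s:.1f} s")
```

The one-way figure comes out a little under 5 seconds, consistent with the "4 to 5 seconds" claim.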
Or you could just put it in orbit and forget all about the Lagrangian points. The ISS seems to be doing okay without massive amounts of shielding.
Those super-sensitive instruments at L2 have to operate very near absolute zero. I don’t think that’s a constraint for a data center. The main silliness is the amount of energy and resources it would take to design, build, launch, assemble, and maintain such infrastructure vs. just doing it here on Earth.
Another question, if you put something in low Earth orbit the size of 68 football fields, would it affect the environment?
900k miles away? I don’t think so. EDIT: I just realized I misread you. I still don’t think it would be that big of a deal even in low Earth orbit.
I've been watching some vids about that, and it seems most people are of the opinion he's just trying to keep the roulette wheel spinning longer by putting the word 'space' into it, presumably in the hope it will pay off at some point or, even if it doesn't, that he'll have sold it off to some other mug by then.
Right, so I've gone down a bit of a rabbit-hole on this stuff (AKA 'doing a jitty'), and it seems to explain why Prof. Ann Pettifor said that AI (more accurately AGI, or artificial general intelligence) and LLMs are two distinct things. This guy explains how LLMs work when giving responses to questions and why, in his opinion, they'll hit the buffers at some point when using the scaling method...

This guy (Prof. Michael Wooldridge again) goes further into the implications... As he says, they will still be useful because they can learn what a typical customer service assistant, for instance, would say when asked a question. But they're not really 'intelligence' in any meaningful sense. They're what he calls 'a hack'. Well, atm, anyway.

Then I watched this one that spoke about how AI (this time Elmo's Grok thing) gets on when working on maths problems... I can't claim to follow the maths, so maybe the guy's talking out of his hat, but he seems to think recent claims by Elmo are false or, at best, somewhat misleading. He then goes on to say it improved quite quickly. But the thing is, can we necessarily rely on things like this to solve problems in the real world without extensive prompts and training?

It seems to me that, as things stand, these things are incredibly useful for some things but only of limited use for others. The other thing worth saying is that Elmo Musk can't be trusted... but we knew that anyway.
The execs pocketing the big bonuses, I mean lobbying for massive investments, are deliberately blurring the line between LLMs and AGI. Real AGI is decades away at least.
The daft thing is, LLMs and some of the other stuff can still be VERY useful: https://www.bbc.co.uk/news/technolo... https://prostatecanceruk.org/research/research-we-fund/ma-tia24-001

I don't know if it will be decades away but, AIUI, OpenAI will run out of money in 2027. We're already in 2027. They won't have anything working by then... at least, not anything that will be able to fund the amortisation and return for THAT level of investment.
They can be useful. Unfortunately if there's a gap in their knowledge they make shit up. Everything needs to be verified.
We've been there a long time now, certainly before ChatGPT 5, but 5 made it obvious to everyone. This is where the conversation was a long time ago. We already know they can't do this reliably: people can talk AIs into making unwarranted discounts and promises that the company then has to honor. We already know that "hallucinations" are baked into LLMs and can't be gotten rid of.

There is intelligence there, because it has a worldview. It does think word by word and pixel by pixel, but clearly it is more than that, because we get sentences and images that make sense and not just likely things slapped together that end up making meaningless noise. It's just that the intelligence it has is both largely undiscoverable and totally alien to us. There is an incredible amount of effort expended on putting a harness on the raw LLM thought to make it helpful and useful. The whole "Mecha-Hitler" event is an example of what happens when the harness isn't done right.

This is actually playing to LLMs' strengths, because there's a lot of math out there for them to digest. Where LLMs fall down is, first, in making statements based on information with very few references, and second, in trying to keep a long thread of thought going. For example, if you ask a chatbot to explain the strengths and weaknesses of a chess opening, it does a perfect job. If you ask it to play chess, it starts strong, but eventually it can't even play by the rules, let alone play well. There are thousands of books on chess opening strategies, but there's a near-infinite number of endgame positions with only a small percentage referenced, and after a while the internal memory of the LLM starts forgetting what it thought before.
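That "forgetting" failure mode can be pictured as a fixed-size context window: once the game outgrows it, the earliest moves silently fall out. A toy sketch (the window size and move list are invented for illustration; real limits are measured in tokens, not moves):

```python
from collections import deque

CONTEXT_LIMIT = 8  # toy "context window", measured in moves for simplicity

# deque with maxlen drops the oldest entries automatically, which is a
# crude stand-in for how early context stops influencing a long session
context = deque(maxlen=CONTEXT_LIMIT)

moves = [f"move_{i}" for i in range(1, 21)]  # a hypothetical 20-move game
for move in moves:
    context.append(move)

# By move 20, the "memory" no longer contains the opening at all:
print(list(context))  # only the last 8 moves survive
```

The model never announces that the opening has fallen out of scope; it just carries on from whatever is still in the window, which is why long games drift into illegal moves.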
I built myself an AI data analyst last week. It just goes into the data lake to look for the day's headline numbers and then reports on anything useful. At first it had a lot of hallucinations, so I tweaked the prompts a bit to calm it down. I am super impressed at how good it is.
I think that's the sort of thing this stuff is going to be good at. A data extraction and reporting tool. Whether we can call that 'intelligence' is a moot point I suppose.
Yeah, it's like a series of automations combined with conversational analysis.

Every day I dump the new data into the BigQuery data lake, and then run a query that acts as the prompt for Gemini to look at the latest data, plus its previous reports, and write a daily report. I've told it to mostly focus on what is new or interesting, and to devote only 20% to recommendations. It also 'knows' things from a list I created summarising the last 12 months of data.

Then a different automation in Looker Studio gets the report and sends it to me as a crude email, which I can read over my coffee. If there is anything interesting, I can log on to the data viz and see what is happening. I can also go into what is called "AI garden", where Gemini lives, and use it to ask follow-up questions.

At first the analyst was really dumb, as it could only see one day. But once it could see a whole week of data, it got much smarter.
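The daily-report loop described above can be sketched roughly like this. Everything here is a stand-in: `query_data_lake` and `ask_model` are stubs for the real BigQuery query and Gemini call, and the prompt wording, function names, and sample data are all invented, not the poster's actual setup:

```python
# Hedged sketch of a daily-report automation: fetch the day's data,
# build a prompt that includes the previous report and some fixed
# background facts, then hand it to a model. Stubs replace real services.

def query_data_lake(day):
    """Stub for the BigQuery step: return the day's headline numbers."""
    return {"day": day, "signups": 120, "revenue": 4300}

def ask_model(prompt):
    """Stub for the Gemini call: return a canned one-line report."""
    return f"Report generated from a prompt of {len(prompt)} characters."

def build_prompt(todays_data, yesterdays_report, historical_facts):
    # Mirrors the structure described in the post: new data, the previous
    # report for continuity, and a fixed list of historical context.
    return (
        "Focus mostly on what is new or interesting; "
        "devote at most 20% to recommendations.\n"
        f"Today's data: {todays_data}\n"
        f"Yesterday's report: {yesterdays_report}\n"
        f"Background facts: {historical_facts}\n"
    )

def daily_report(day, yesterdays_report, historical_facts):
    data = query_data_lake(day)
    prompt = build_prompt(data, yesterdays_report, historical_facts)
    return ask_model(prompt)  # in the real pipeline this gets emailed out

report = daily_report(
    "2025-06-01",
    "Signups flat, revenue up 3%.",
    ["Launched product X 12 months ago"],
)
print(report)
```

Feeding yesterday's report back into the prompt is what gives the "analyst" its week-over-week memory; nothing is stored in the model itself.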
It's an AI search engine, though; the only intelligence is what you put in and how it formats the output? I guess I'm struggling with the term artificial intelligence, as you're the only one providing the intelligence; then it goes off, does a search, and puts the results in a pretty format.
The intelligence is that it has to look at the underlying, complex data tables and identify what is happening (it doesn't get any viz to help it), and then write a report summarising what's going on. So it actually does have to understand what the data is about. Its memory is that it reads yesterday's report, and I tell it some historical facts in the prompt.

IMO it helps that all the products are Google products, so it understands the data to a higher level than I do. It knows everything about how to analyse the database, and can read SQL data.

IMO where this has huge potential is that you don't really need a human viz interface anymore, as those things are costly and complicated.