Originally Posted by Pokemon Trainer Sarah
Yeah you're right. I can't help feeling a bit worried about it but I'm trying not to xD We had a uni professor come and give us a talk last year about how large language models like ChatGPT work. He said it's all just statistics: the model predicts the most likely next word in a sequence based on the text it was trained on, so it really has no concept of whether things are true.

But the most interesting thing he said was that even the creators of these AIs don't fully understand why they work so well at predicting text, which means two things. When something goes wrong or the model says something unexpected, they can't easily explain why. And they can't really build on what they've created by tweaking it, because they don't understand the mechanism behind what they made. So to get true AI we would actually need to take a step back. Not sure how much of that is true, but the guy was a government advisor and researches AI stuff, so I guess it should be reliable. Pretty interesting!
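To make that "it's just statistics" idea concrete, here's a tiny toy sketch in Python (my own illustration, not anything from the talk): a bigram model that counts which word follows which in some sample text, then always picks the most frequent next word. Real LLMs use huge neural networks over tokens instead of raw counts, but the basic "predict the likely next word" idea is the same.

[CODE]
from collections import Counter, defaultdict

# Toy training text -- stands in for the model's training data.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (bigram statistics).
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = next_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a few words by repeatedly predicting the most likely continuation.
word = "the"
sentence = [word]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the cat sat on the"
[/CODE]

Notice the model has no idea whether "the cat sat on the mat" is actually true; it's just echoing the statistics of whatever text it was fed, which is exactly the point the professor was making.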