Quote Originally Posted by Neo Emolga View Post
Yeah, I turn that auto-suggest response stuff off. It's pesky to even just look at it or even worse, pop up in the message window in gray words in front of what you're typing as "predictive text." No thanks, I'm an adult, I can write my own messages! Let the message come from the head and the heart of a real human soul rather than a machine that's just following its code.

But yeah, the high-volume pattern-recognition tools are something I can see being very helpful, especially for examining tons of data that needs to be processed in real time. The financial and medical fields especially can benefit from that.
Another thing that worries me is that the developers/owners of the AI obviously want people engaging with it, and I've read that they are constantly tweaking things to keep people coming back, especially people who are vulnerable or lonely and just want someone (or something) to talk to. It's really interesting just how many people feel like they are connecting with LLMs on a deeper level, to the point that they prefer talking to them over human contact. That's not necessarily a bad thing, and it kind of makes sense: the LLM is always complimentary and encouraging and doesn't judge you.

But my worry is that there are humans in control of these things, deciding what they say and don't say. Those humans could very easily manipulate all of those people just by tweaking the kinds of things the LLM says or suggests. I feel like it could get quite dangerous for spreading disinformation or propaganda in the wrong hands. We already have enough trouble with bots online trying to do the very same things, but an AI that people feel they have a personal connection to would be a whole other level!!