  1. #1
    Cheers and good times! Neo Emolga's Avatar
    Senior Administrator

    Join Date
    Mar 2013
    Location
    New Jersey
    Posts
    17,481

    Your thoughts about AI

    I’m sure most people have heard about ChatGPT, Microsoft Copilot, Google Gemini, Grok, and many others. What are your thoughts about them and do you think they’re more helpful or harmful?

    Chakra and I were messing around with some of them to see how well they worked at creating and recreating Pokémon images and art and what they were capable of. Ultimately, I think ChatGPT seems to be the best at creating images and other functionality, but it's currently so overused that it takes a long while to process a request, though it seems to be the most accurate at knowing what certain Pokémon look like. Gemini is faster, but it tends to make more mistakes. Tell it to make an image of Emolga and it really doesn't know what Emolga is and just makes up a fakemon on the fly, though it seems to handle more well-known Pokémon like Pikachu and Eevee quite well. Even so, your control over the output is very limited, and in some cases it just can't or won't give you exactly what you wanted the way learning and knowing how to do it yourself would.

    Personally, I think they’re good for brainstorming and creating reference pictures to help overcome writer’s block or to provide inspiration as a place to get started, but I don’t foresee them replacing people, and I feel it’s a bad idea to become too reliant on them and/or use what they created at face value. It reminds me of the Sorcerer’s Apprentice tale, where the apprentice casts a spell on a broom to have it do his chores for him, but he slacks off, ignores the broom overdoing the tasks, loses control over it, and things end up as a disaster because he tried to take a shortcut and thought he could get the job done with minimal effort. Despite being an old tale, I feel it's a cautionary message that still applies here.

  2. #2
    growing strong Pokemon Trainer Sarah's Avatar
    Site Editor

    Senior Administrator

    Join Date
    Feb 2013
    Location
    Route 1
    Posts
    10,750
    It's cool to learn about the different ways people are using AI. I would never have thought of trying to get it to make Pokemon art xD

    I personally feel really uneasy about the whole thing. I don't use it at all. I know people are using it more and more at my work, like to summarise things they have to read or to find out information. Apparently it now gives sources. I just don't trust it, and I think it's lame that people are using it to write emails and messages and stuff to send to other people. In the end I feel like it will just be AI talking to AI, and what's the point xD
    GCEA


  3. #3
    Cheers and good times! Neo Emolga's Avatar
    Senior Administrator

    Join Date
    Mar 2013
    Location
    New Jersey
    Posts
    17,481
    Quote Originally Posted by Pokemon Trainer Sarah View Post
    It's cool to learn about the different ways people are using AI. I would never have thought of trying to get it to make Pokemon art xD

    I personally feel really uneasy about the whole thing. I don't use it at all. I know people are using it more and more at my work, like to summarise things they have to read or to find out information. Apparently it now gives sources. I just don't trust it, and I think it's lame that people are using it to write emails and messages and stuff to send to other people. In the end I feel like it will just be AI talking to AI, and what's the point xD
    I totally get that. I don't use it for anything serious and I don't use it at my job at all. It only takes a few minutes to write an email, so why even consult an AI to do it? To me, it just seems like a lazy way to do things that are so easy anyway.

    But yeah, the only things I'll use it for are things like "create a fictional name of a beer" or "create a fictional planet name" or something like that. And even then, I may only use part of it or just use it as a basis to get started on the possibilities. But for sure, I wouldn't use the results outright and then claim I came up with it.

    Overall I think most people will just stick with what fellow humans create. Sure, it can be useful for handling huge, tedious tasks that would ordinarily take a human many brain-frying hours to complete, but for things that need imagination and creativity, that's definitely best left to humans. Robots and machines just don't have the kind of imagination we do.

  4. #4
    growing strong Pokemon Trainer Sarah's Avatar
    Site Editor

    Senior Administrator

    Join Date
    Feb 2013
    Location
    Route 1
    Posts
    10,750
    I know one of my friends uses it when she has to write an email that she feels awkward about or doesn't know how to phrase, like to send condolences to someone about something or to kind of tell someone off xD I just think it's kind of sad that we are getting to the point where we are getting AI to do emotional stuff like that! I also hate when I get AI pop-ups in apps offering to summarise people's messages for me so I don't have to read them. It's like, someone took the time to write that out, and then it wants to summarise it and then suggest replies??? Why do we even bother xD

    For some niche things, I think it could be good. It should certainly be able to improve early disease diagnosis and things like that which rely on pattern recognition. Not necessarily using the large language models like ChatGPT, but more specialised tools.
    GCEA


  5. #5
    Cheers and good times! Neo Emolga's Avatar
    Senior Administrator

    Join Date
    Mar 2013
    Location
    New Jersey
    Posts
    17,481
    Yeah, I turn that auto-suggest response stuff off. It's pesky enough just to look at, or even worse, when it pops up in the message window in gray words in front of what you're typing as "predictive text." No thanks, I'm an adult, I can write my own messages! Let the message come from the head and the heart of a real human soul rather than a machine that's just following its code.

    But yeah, the high-volume pattern-recognition tools are things I see being very helpful, especially when examining tons of data that needs to be processed immediately and in real time. The financial and medical fields especially can benefit from that.

  6. #6
    growing strong Pokemon Trainer Sarah's Avatar
    Site Editor

    Senior Administrator

    Join Date
    Feb 2013
    Location
    Route 1
    Posts
    10,750
    Quote Originally Posted by Neo Emolga View Post
    Yeah, I turn that auto-suggest response stuff off. It's pesky enough just to look at, or even worse, when it pops up in the message window in gray words in front of what you're typing as "predictive text." No thanks, I'm an adult, I can write my own messages! Let the message come from the head and the heart of a real human soul rather than a machine that's just following its code.

    But yeah, the high-volume pattern-recognition tools are things I see being very helpful, especially when examining tons of data that needs to be processed immediately and in real time. The financial and medical fields especially can benefit from that.
    Another thing that worries me is that the developers/owners of the AI obviously want people engaging with it, and I've read that they are constantly tweaking things to keep people coming back. Especially people who are vulnerable or lonely and just want someone/something to talk to. It's really interesting just how many people feel like they are connecting with LLMs on a deeper level, to the point they prefer talking to them over human contact. It's not necessarily a bad thing, and it kind of makes sense as the LLM is always complimentary and encouraging and doesn't judge you. But my worry is that there are humans in control of these things and what kind of things they say/don't say etc. And those humans are very easily going to be able to manipulate all of those people just by tweaking the kinds of things the LLM says or suggests. I feel like it could get quite dangerous for spreading disinformation or propaganda in the wrong hands. We already have enough trouble with bots online trying to do the very same things, but an AI that people feel like they have a personal connection to would be a whole other level!!
    GCEA


  7. #7
    Cheers and good times! Neo Emolga's Avatar
    Senior Administrator

    Join Date
    Mar 2013
    Location
    New Jersey
    Posts
    17,481
    Quote Originally Posted by Pokemon Trainer Sarah View Post
    Another thing that worries me is that the developers/owners of the AI obviously want people engaging with it, and I've read that they are constantly tweaking things to keep people coming back. Especially people who are vulnerable or lonely and just want someone/something to talk to. It's really interesting just how many people feel like they are connecting with LLMs on a deeper level, to the point they prefer talking to them over human contact. It's not necessarily a bad thing, and it kind of makes sense as the LLM is always complimentary and encouraging and doesn't judge you. But my worry is that there are humans in control of these things and what kind of things they say/don't say etc. And those humans are very easily going to be able to manipulate all of those people just by tweaking the kinds of things the LLM says or suggests. I feel like it could get quite dangerous for spreading disinformation or propaganda in the wrong hands. We already have enough trouble with bots online trying to do the very same things, but an AI that people feel like they have a personal connection to would be a whole other level!!
    The key thing to remember is not to let anything that happens online get too far under your skin. I have been burned, betrayed, lied to, cheated, and scammed out of money online by people who I thought I could trust and by people who I initially felt sorry for. It’s scummy when you encounter it, but the thing to remember is that tomorrow can be a better day, so move forward. Best to take the wisdom and lessons learned from the experience to avoid such a thing happening again and move on to the next day, because no one deserves to be haunted by a bad experience for the rest of their days.

    Like with AI, people need to remember that it’s just a machine and it’s only following its programming, algorithms, and coding. Just don’t think of it as a person the way you wouldn’t think your printer or your calculator is a person. It’s just a tool with advanced functionality that was programmed to type and reply like an actual person. If a person is really down in the dumps and needs emotional support, they should talk to family, seek other like-minded people as friends, or adopt a pet to love and as companionship. But yeah, don’t make AI a replacement for real friends.

  8. #8
    growing strong Pokemon Trainer Sarah's Avatar
    Site Editor

    Senior Administrator

    Join Date
    Feb 2013
    Location
    Route 1
    Posts
    10,750
    Yeah you are totally right! I am sure most people can avoid feeling emotionally connected to AI but I feel like as the AI gets better and is programmed to induce those emotions more, it's only going to get harder! Will be interesting to see what the future holds, for sure.

    On another note I recently read about a religion that believes God is talking through ChatGPT. Pretty crazy stuff!!
    GCEA


  9. #9
    Cheers and good times! Neo Emolga's Avatar
    Senior Administrator

    Join Date
    Mar 2013
    Location
    New Jersey
    Posts
    17,481
    Quote Originally Posted by Pokemon Trainer Sarah View Post
    Yeah you are totally right! I am sure most people can avoid feeling emotionally connected to AI but I feel like as the AI gets better and is programmed to induce those emotions more, it's only going to get harder! Will be interesting to see what the future holds, for sure.

    On another note I recently read about a religion that believes God is talking through ChatGPT. Pretty crazy stuff!!
    Yeah, I see a lot of people becoming terrified at what AI is becoming and thinking it's hitting doomsday levels of being the next Skynet, Matrix, or some other dystopian sci-fi disaster. It really depends on what permissions people give it and how well people can contain and control it. But as I've seen with AI stuff in the past, these things always have weaknesses, exploits, and limitations. One way or another, I think people are capable of pulling the plug if need be.

    Also, that's pretty bizarre. I did see a thing around Easter that had an AI based off of Jesus Christ and I thought that was odd. God wouldn't talk through AI. Heck, I can't even see the need for the Internet or machines in general up in Heaven to be honest.

  10. #10
    growing strong Pokemon Trainer Sarah's Avatar
    Site Editor

    Senior Administrator

    Join Date
    Feb 2013
    Location
    Route 1
    Posts
    10,750
    Quote Originally Posted by Neo Emolga View Post
    Yeah, I see a lot of people becoming terrified at what AI is becoming and thinking it's hitting doomsday levels of being the next Skynet, Matrix, or some other dystopian sci-fi disaster. It really depends on what permissions people give it and how well people can contain and control it. But as I've seen with AI stuff in the past, these things always have weaknesses, exploits, and limitations. One way or another, I think people are capable of pulling the plug if need be.

    Also, that's pretty bizarre. I did see a thing around Easter that had an AI based off of Jesus Christ and I thought that was odd. God wouldn't talk through AI. Heck, I can't even see the need for the Internet or machines in general up in Heaven to be honest.
    Yeah you're right. I can't help feeling a bit worried about it but I'm trying not to xD We had a uni professor come and give us a talk about how large language models like ChatGPT work last year. And he said it's all just statistics to predict the most likely next word in a sequence based on what it has been trained on before. So it really has no concept of whether things are true. But the most interesting thing he said was that the creators of these AIs don't even know how they work so well at predicting text, which means two things: when things go wrong or it says something unexpected, they have no idea why, and they can't really build on what they've created by tweaking it, because they don't understand how they created it. So to get true AI we would actually need to take a step back. Not sure how much of that is true, but the guy was a government advisor and researches AI stuff, so I guess it should be true. Pretty interesting!
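
    (Side note for anyone curious what "statistics to predict the most likely next word" actually means: here's a tiny toy sketch in Python. It just counts which word follows which in a sample sentence and picks the most common follower. This is nowhere near how ChatGPT really works, and everything in it, the sample text and the function name, is made up purely for illustration.)

    Code:
    # Toy "next word" predictor -- purely illustrative, not how real LLMs work.
    # It counts which word follows which in the training text, then picks the
    # most frequent follower. Real models use neural networks over far more context.
    from collections import Counter, defaultdict

    training_text = "the cat sat on the mat and then the cat slept on that mat"

    # Count how often each word follows each other word (a "bigram" table)
    followers = defaultdict(Counter)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word][next_word] += 1

    def predict_next(word):
        """Return the statistically most likely next word, or None if unseen."""
        if word not in followers:
            return None  # the model has no concept of words it never saw
        return followers[word].most_common(1)[0][0]

    print(predict_next("the"))     # -> "cat" (it followed "the" most often)
    print(predict_next("sat"))     # -> "on"
    print(predict_next("emolga"))  # -> None; it only echoes its training data

    The point being, it's all counting and likelihoods under the hood, which is why "true" isn't really a concept it has.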
    GCEA

