• lloram239@feddit.de · 1 year ago

    LLMs are fundamentally different from human consciousness.

    They are also fundamentally different from a toaster. But that’s completely irrelevant. Consciousness is something you get when you put intelligence into an agent that has to move around in and interact with an environment. A chatbot has no use for that: it’s just there to mush through lots of data and produce some output. It doesn’t have to worry about its own existence, and shouldn’t.

    It simply returns the next most-likely word in a response.

    So does the all-knowing oracle that predicts next week’s lotto numbers. Being autocomplete does not limit its power.
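
    To be clear about what “returns the next most-likely word” means mechanically, here is a toy greedy-decoding loop (a hedged sketch using the Hugging Face transformers library with GPT-2 as an example; real chatbots sample with temperature, top-p, and so on rather than always taking the argmax):

    ```python
    # Toy sketch of autoregressive "autocomplete": repeatedly append the
    # single most likely next token. Assumes `torch` and `transformers`
    # are installed; "gpt2" is just a small example model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    input_ids = tokenizer("The oracle says:", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):
            logits = model(input_ids).logits   # scores over the whole vocabulary
            next_id = logits[0, -1].argmax()   # the single most likely next token
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))
    ```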

    LLMs are a dead end.

    There might be better or faster approaches, but it’s certainly not a dead end. It’s a building block. Add some long-term memory, bigger prompts, a bigger model, interaction with the Web, etc., and you can build a much more powerful piece of software than what we have today, without any real breakthrough on the AI side. GPT as it is today is already “good enough” for a scary number of things that used to be done exclusively by humans.
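
    As a purely hypothetical sketch of that “building block” idea, bolting naive long-term memory onto a bare completion call might look like the following; `complete()` and the memory scheme are stand-ins I made up, not any real product’s API:

    ```python
    # Hedged sketch: an LLM as a building block, wrapped with a crude
    # long-term memory. `complete()` is a hypothetical stand-in for any
    # completion backend (local model, hosted API, etc.).
    memory: list[str] = []  # past exchanges; the simplest possible "long-term memory"

    def complete(prompt: str) -> str:
        """Stand-in for a real LLM call; swap in an actual model or API here."""
        return "(model output would go here)"

    def chat(user_msg: str) -> str:
        context = "\n".join(memory[-5:])  # stuff recent memories back into the prompt
        prompt = f"Previous conversation:\n{context}\n\nUser: {user_msg}\nAssistant:"
        reply = complete(prompt)
        memory.append(f"User: {user_msg}\nAssistant: {reply}")
        return reply
    ```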

    • Veraticus@lib.lgbt (OP) · 1 year ago

      A chatbot has no use for that: it’s just there to mush through lots of data and produce some output. It doesn’t have to worry about its own existence, and shouldn’t.

      It literally can’t worry about its own existence; it can’t worry about anything because it has no thoughts or feelings. Adding computational power will not miraculously change that.

      Add some long-term memory, bigger prompts, a bigger model, interaction with the Web, etc., and you can build a much more powerful piece of software than what we have today, without any real breakthrough on the AI side.

      I agree this would be a very useful chatbot. But it is still not a toaster. Nor would it be conscious.

      • poweruser@lemmy.sdf.org · 1 year ago

        Suppose you were saying that about me. How would I prove you wrong? How could a thinking being express that it is actually sentient to meet your standards?

        • Veraticus@lib.lgbt (OP) · 1 year ago

          By telling me you are.

          If you ask ChatGPT if it is sentient, or has any thoughts, or experiences any feelings, what is its response?

          But suppose it’s lying.

          We also understand the math underlying it. Humans designed and constructed it; we know exactly what it is capable of and what it does. And there is nothing inside it that is capable of thought or feeling or even rationality.
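
          For instance, the core of the architecture is a handful of published, well-understood operations, such as scaled dot-product attention from the original transformer paper (the standard formula, quoted here for illustration):

          $$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V$$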

          It is a word generation algorithm. Nothing more.

          • Communist@beehaw.org · 1 year ago

            We also understand the math underlying it. Humans designed and constructed it; we know exactly what it is capable of and what it does

            This is false. Read about their emergent properties. We have no way of knowing when emergent properties will appear; we just notice them when they do.

      • Communist@beehaw.org · 1 year ago (edited)

        It literally can’t worry about its own existence; it can’t worry about anything because it has no thoughts or feelings. Adding computational power will not miraculously change that.

        Who cares? This has no real-world practical use case. Its thoughts are what it says; it doesn’t have a hidden layer of thoughts, which is frankly a feature to me. Whether it’s conscious or not has nothing to do with its level of functionality.