• superguy@lemm.ee · 11 months ago

          Yeah. I’m really annoyed by this trend of having programs that could function offline require connecting to a server.

        • boonhet@lemm.ee · edited · 11 months ago

          Imagine a standardized API where you provide your own LLM running locally, your own LLM running on your own server (for enthusiasts or companies), or a 3rd-party LLM service over the Internet, all backing an optional AI assistant that you can easily disable.

          Regardless of your DE, you could choose whether you want an AI assistant at all and where you want the model to run.
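
          A minimal sketch of what such a pluggable-backend API could look like; all class and method names here are hypothetical, not an existing standard or any particular DE's API:

          ```python
          # Hypothetical sketch only -- names are illustrative, not a real standard.
          from abc import ABC, abstractmethod


          class LLMBackend(ABC):
              """Anything that can answer a prompt, wherever it happens to run."""

              @abstractmethod
              def generate(self, prompt: str) -> str: ...


          class LocalBackend(LLMBackend):
              """Model running on the user's own machine."""

              def __init__(self, model_path: str):
                  self.model_path = model_path

              def generate(self, prompt: str) -> str:
                  # A real implementation would load and run the model here.
                  return f"[local:{self.model_path}] {prompt}"


          class RemoteBackend(LLMBackend):
              """Model reached over the network: your own server or a 3rd-party service."""

              def __init__(self, endpoint: str, api_key: str | None = None):
                  self.endpoint = endpoint
                  self.api_key = api_key

              def generate(self, prompt: str) -> str:
                  # A real implementation would POST the prompt to self.endpoint here.
                  return f"[remote:{self.endpoint}] {prompt}"


          class Assistant:
              """The DE-facing part: optional and backend-agnostic."""

              def __init__(self, backend: LLMBackend | None = None):
                  self.backend = backend  # None == assistant disabled

              def ask(self, prompt: str) -> str | None:
                  if self.backend is None:
                      return None  # user opted out; nothing leaves the machine
                  return self.backend.generate(prompt)


          # The user (or distro) picks the backend; the DE never needs to know which:
          assistant = Assistant(LocalBackend("/models/local.gguf"))          # fully offline
          # assistant = Assistant(RemoteBackend("https://llm.example.org"))  # own server / 3rd party
          # assistant = Assistant()                                          # disabled entirely
          print(assistant.ask("Summarize my open windows"))
          ```

          The point being that the desktop environment only ever codes against one interface, and the user decides whether anything leaves the machine at all.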

      • taanegl@beehaw.org · 11 months ago

        An open-source, locally run LLM that runs on the GPU or on dedicated open PCIe hardware and never touches the cloud…

    • PixxlMan@lemmy.world · 11 months ago

      To be fair, people don’t know what they want until they get it. In 2005, people would’ve asked for faster flip phones, not smartphones.

      I don’t have much faith in current-gen AI assistants actually being useful, but the fact that no one has asked for them doesn’t necessarily mean much.