Archive link

Silicon Valley has bet big on generative AI but it’s not totally clear whether that bet will pay off. A new report from the Wall Street Journal claims that, despite the endless hype around large language models and the automated platforms they power, tech companies are struggling to turn a profit when it comes to AI.

Microsoft, which has bet big on the generative AI boom with billions invested in its partner OpenAI, has been losing money on one of its major AI platforms. GitHub Copilot, which launched in 2021, was designed to automate parts of a coder’s workflow and, while immensely popular with its user base, has been a huge “money loser,” the Journal reports. The problem is that users pay a $10-a-month subscription fee for Copilot but, according to a source interviewed by the Journal, Microsoft lost an average of $20 per user during the first few months of this year. Some users cost the company an average of more than $80 per month, the source told the paper.
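
As a rough back-of-the-envelope (assuming, as the reporting suggests, that these are monthly, per-user figures), the numbers imply that serving a typical Copilot user costs Microsoft roughly $30 a month, and the heaviest users closer to $90:

```python
# Back-of-the-envelope using the Journal's reported figures
# (assumed to be monthly, per-user numbers).
subscription_fee = 10      # what a Copilot user pays per month
average_net_loss = 20      # Microsoft's reported average loss per user per month
heavy_user_net_loss = 80   # reported loss on the heaviest users

implied_cost_typical = subscription_fee + average_net_loss   # ~ $30/month
implied_cost_heavy = subscription_fee + heavy_user_net_loss  # ~ $90/month
print(implied_cost_typical, implied_cost_heavy)
```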

OpenAI’s ChatGPT, meanwhile, has seen an ever-declining user base while its operating costs remain incredibly high. A report from the Washington Post in June claimed that chatbots like ChatGPT lose money pretty much every time a customer uses them.

AI platforms are notoriously expensive to operate. Platforms like ChatGPT and DALL-E burn through an enormous amount of computing power, and companies are struggling to figure out how to reduce that footprint. At the same time, the infrastructure needed to run AI systems, such as powerful, high-priced AI chips, doesn’t come cheap, and the cloud capacity necessary to train and run those systems is expanding at a frightening rate. All of that energy consumption also means that AI is about as environmentally unfriendly as you can get.

  • Admiral Patrick@dubvee.org
    58 points · 11 months ago

    Good. Maybe the hype will finally die down soon and “AI” won’t be shoved into every nook, cranny, and Notepad app anymore.

    • Scrubbles@poptalk.scrubbles.tech
      43 points · 11 months ago

      I’ll say AI is a bit more promising, but all of this just really reminds me of the blockchain craze in 2017. Every single business wanted to add blockchain because the suits upstairs just saw it as free money. Technical people down below were like “yeah cool, but there’s no place for it”. At least AI can solve some problems, but business people again just think that it’s going to make them limitless money.

      • Turkey_Titty_city@kbin.social
        20 points · 11 months ago

        Before that it was ‘big data’. Remember that?

        Every 5 or so years the media needs some new tech to hype up to get people paranoid.

        • Lanthanae@lemmy.blahaj.zone
          31 points · 11 months ago

          “big data” runs the content recommendation algorithms of all the sites people use, which in turn have a massive influence on the world. It’s crazy to think “big data” was just a buzzword when it’s a tangible thing that affects you day-to-day.

          LLM powered tools are a heavy part of my daily workflow at this point, and have objectively increased my productive output.

          This is like the exact opposite of Bitcoin / NFTs. Crypto was something that made a lot of money but was useless. AI is something that is insanely useful but seems not to be making a lot of money. I do not understand what parallels people are finding between them.

          • psudo@beehaw.org
            3 points · 11 months ago

            The hype cycle. And just like then, even a reasonable read of the supposed benefits is going to leave most people very disappointed when it happens. And I’m glad you’re one of the people who have found a good use for LLMs, but you’re in the vocal minority, as far as I can tell.

            • Lanthanae@lemmy.blahaj.zone
              1 point · 11 months ago

              That’s a weird argument. Most technological advancements are directly beneficial to the work of only a minority of people.

              Nobody declares that it’s worthless to research and develop better CAD tools because engineers and product designers are a “vocal minority.” Software development and marketing are two fields where LLMs have already seen massive worth, and even if they’re a vocal minority, they’re not a negligible one.

              • psudo@beehaw.org
                1 point · 11 months ago

                I don’t see how it’s weird to point out that these things are failing to live up to their promises and helping only a fraction of the people claimed. And I can’t speak to marketing, but I can speak to software development, and it really is not having the impact claimed, at least in my professional network.

      • tal@lemmy.today
        17 points · 11 months ago

        Nah, blockchain has extremely limited applications.

        Generative AI legitimately does have quite a number of areas where it can be put to use. That doesn’t mean it can’t be oversold for a given application or that technical challenges won’t be disregarded, but it’s not super-niche.

        If you wanted to compare it to something that had a lot of buzz at one point, I’d use XML instead. XML does get used in a lot of areas, and it’s definitely not niche, but I remember when it was being heavily used in marketing as a sort of magic bullet for application data interchange some years back, and it’s not that.

      • bioemerl@kbin.social
        4 points · 11 months ago

        Technical people down below were like “yeah cool, but there’s no place for it”

        I think you might underestimate entertainment and creation. Right now I can imagine some character or scenario in my head and generate a little avatar with Stable Diffusion, then render it onto a live chat that (mostly) works.

        I’ve paid like 2k for a computer that enables this. It’s made money from me at least.

  • ryan@the.coolest.zone
    18 points · 11 months ago

    AI is absolutely taking off. LLMs are taking over various components of frontline support (service desks, tier 1 support). They’re integrated into various systems using LangChain-style pipelines to pull your data, knowledge articles, etc., and then respond to you based on that data.
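
    Roughly, the pattern looks something like the sketch below. The helper functions here are hypothetical stand-ins (not any specific vendor’s API) for a vector-store lookup and a hosted LLM call:

    ```python
    # Minimal sketch of the "pull data, then respond from it" pattern.
    # search_knowledge_base() and llm_complete() are hypothetical stand-ins.

    def search_knowledge_base(query: str, top_k: int = 3) -> list[dict]:
        # Stand-in for semantic search over knowledge articles.
        articles = [{"title": "Password reset", "body": "Use the self-service portal."}]
        return articles[:top_k]

    def llm_complete(prompt: str) -> str:
        # Stand-in for a call to a hosted language model.
        return "(model-drafted reply grounded in the prompt above)"

    def answer_ticket(question: str, user_record: dict) -> str:
        # 1. Retrieve the caller's data and the most relevant knowledge articles.
        context = "\n\n".join(
            f"{a['title']}: {a['body']}" for a in search_knowledge_base(question)
        )
        # 2. Ground the model in that context, then let it draft the reply.
        prompt = (
            "You are a tier-1 support agent. Answer only from the context below.\n"
            f"User profile: {user_record}\n"
            f"Knowledge articles:\n{context}\n\n"
            f"Question: {question}"
        )
        return llm_complete(prompt)

    print(answer_ticket("How do I reset my password?", {"name": "Sam", "site": "Folsom"}))
    ```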

    AI is primarily a replacement for workers, like how McDonald’s self-service ordering kiosks are a replacement for cashiers. Cheaper and more scalable, cutting out more and more entry-level (and outsourced) work. But unlike the kiosks, you won’t even see that the “Amazon tech support” you were kicked over to is an LLM instead of a person. You won’t hear that the frontline support tech you called for a product is actually an AI and text-to-speech model.

    There were jokes about the whole Wendy’s drive thru workers being replaced by AI, but I’ve seen this stuff used live. I’ve seen how flawlessly they’ve tuned the AI to respond to someone who makes a mistake while speaking and corrects themself (“I’m going to the Sacramento office – sorry, no, the Folsom office”) or bundles various requests together (“oh while you’re getting me a visitor badge can you also book a visitor cube for me?”). I’ve even seen crazy stuff like “I’m supposed to meet with Mary while I’m there, can you give me her phone number?” and the LLM routes through the phone directory, pulls up the most likely Marys given the caller’s department and the location the user is visiting via prior context, and asks for more information - “I see two Marys here, Mary X who works in Department A and Mary Y who works in Department B, are you talking about either of them?”

    It’s already here and it’s as invisible as possible, and that’s the end goal.

  • java@beehaw.org
    15 points · 11 months ago

    This is how investments in new technologies work. That’s such a non-story.

  • UrLogicFails@beehaw.org (OP)
    11 points · 11 months ago

    As someone who has always been skeptical of “AI,” I definitely hope corporations dial back their enthusiasm for it; but I think its value was never commercial so much as industrial.

    “AI” was not designed so consumers could see what it would look like to have Abraham Lincoln fighting a T-Rex without having to pay artists for their time. “AI” was designed so that could happen on a much larger enterprise scale (though it would probably be stock images of technology or happy people using technology instead).

    With this in mind, I think “AI” being a money pit won’t dissuade corporations, since they want the technology to be effective for themselves; they just want consumers to offset the costs.

    • Turkey_Titty_city@kbin.social
      2 points · 11 months ago

      Exactly. It will lead to improved automation for industrial processes, but it won’t ever be a consumer tech beyond improving your Siri results.

      It won’t replace jobs, any more than industrial robots in factories replaced them.

    • abhibeckert@beehaw.org
      1 point · 11 months ago

      “AI” was not designed so consumers could see what it would look like to have Abraham Lincoln fighting a T-Rex without having to pay artists for their time.

      Sure… AI can do that… but it can also be used for “here’s a photo of my head with trees in the background, remove the trees”.

      I could also do that as a human, but it’d take me hours to do a good job blending my hair into a transparent png without a green tinge. AI can do it in seconds.

      Just because a tool can be used to do useless things, doesn’t mean the tool is useless.

  • Franzia@lemmy.blahaj.zone
    9 points · 11 months ago

    Billionaires spend billions making some jobs just a bit more efficient, against the wishes of the people who do those jobs, and then cry when it doesn’t actually pay off that much.

  • Turkey_Titty_city@kbin.social
    8 points · 11 months ago

    ‘big data’ ‘crypto’ etc.

    AI is just the next ‘big thing’ that amounts to nothing. 90% of what anyone says in the press/media is total nonsense. And most AI researchers are downplaying the hype because they know it’s all bullshit, and AI will ultimately not be a major change any more than navigation systems in cars were. It is merely a convenience, and those who ‘rely’ on it will end up in trouble.

    It’s a complementary technology, not a revolution.

  • Knusper@feddit.de
    8 points · 11 months ago

    I feel like companies were all hoping to get in early, to get a solid chunk of the cake. Well, and then a lot more companies got in than anyone could have guessed, so the slices of the cake are a lot smaller.

    We’ll have to see what happens, though. It’s possible that the startups have to give up and only a few big fish remain. But if those have to increase prices to become profitable, this market will still be a lot smaller than people were hoping for.

  • ParsnipWitch@feddit.de
    5 points · 11 months ago

    Good, we as a society aren’t ready for these kinds of tools. AI would further increase the divide between people. One of the reasons is that it costs too much to run it.

    Everyone who can’t afford the hardware would be dependent on AI owned by corporations. And most people can’t even afford those fees. Since we build our society around (materialistic) “productivity”, I am sure AI would escalate how we treat people and whole countries that fall off the capitalism train. I hope the hype dies.

    • Dr Cog@mander.xyz
      7 points · 11 months ago

      Ok, but AI isn’t going away. So if these companies stop serving open access, the ONLY people that will use them will be the people who can afford the server/processing time.

      This article isn’t about usefulness of the models to normal people. It’s about profitability of the models to the corporations that serve them.

  • explodicle@local106.com
    4 points · 11 months ago

    Rather than debate every new technology for energy worthiness on a case-by-case basis up front, it would be more productive to direct our efforts towards better policy that internalizes the cost of pollution.

    “These new-fangled ‘lasers’ don’t even do anything useful for the energy they consume!”

  • idyllic_optimism@lemmy.today
    4 points · 11 months ago

    I’m not too well-versed on the subject, but don’t user interactions with LLMs also train them further? They make it sound like the product has already matured and they’re letting people use it for free.


  • kittenroar@beehaw.org
    3 points · 11 months ago

    Oh, no! Billionaires with short term, selfish thinking might lose money! What a tragedy.

  • twistedtxb@lemmy.ca
    2 points · 11 months ago

    With MS hinting at SaaS for Win12 with Copilot, this will be a good test.

    Nobody can profit off “monthly usage credits” or subscriptions.

    At least not until the tech becomes more affordable.

  • Gork@lemm.ee
    1 point · 11 months ago

    Could OpenAI refactor its code and algorithms to be more efficient? The more instructions per cycle (IPC) that can be performed in the same amount of time, the more costs can be reduced when that efficiency is employed at scale.

    That, and computers are always getting more energy efficient, so if those gains can keep up with or outpace user growth, then it might become a little more sustainable.
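
    As a toy illustration of that point (this isn’t OpenAI’s code, just the general idea that doing more useful work per cycle cuts serving costs at scale), here’s the same computation done element by element versus in one vectorized call:

    ```python
    import time
    import numpy as np

    # Same computation two ways: an interpreted per-element loop vs. a
    # vectorized call that keeps the hardware's arithmetic units busy.
    x = np.random.rand(5_000_000)

    start = time.perf_counter()
    total = 0.0
    for v in x:
        total += v * v
    loop_seconds = time.perf_counter() - start

    start = time.perf_counter()
    total_vectorized = float(np.dot(x, x))
    vector_seconds = time.perf_counter() - start

    print(f"loop:       {loop_seconds:.3f} s")
    print(f"vectorized: {vector_seconds:.4f} s")  # typically hundreds of times faster
    ```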

    • kglitch@kglitch.social
      1 point · 11 months ago

      My guess is specialized hardware. Similar to how mining crypto moved from CPU -> GPU -> ASIC, we’ll see LLM hardware that optimizes the currently-cpu-intensive pieces.

      • GreyBeard@lemmy.one
        3 points · 11 months ago

        Although you are right that more specialized hardware will come, NVIDIA has those already, and probably other manufacturers too. I’m not saying there isn’t a lot of room for improvement, but most of these AI models are simply matrix multiplications, and we are good at those.
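
        As a toy illustration of that last point (the shapes below are made up for the example, not any real model’s), the bulk of a transformer layer’s forward pass is a dense matrix multiplication, which is exactly the operation GPUs and tensor-style accelerators are built to do fast:

        ```python
        import numpy as np

        # One dense layer's forward pass, stripped to its core: activations @ weights.
        # The shapes are invented for this example; real LLM layers are just bigger.
        batch, hidden, ffn = 32, 4096, 16384

        activations = np.random.rand(batch, hidden).astype(np.float32)
        weights = np.random.rand(hidden, ffn).astype(np.float32)

        output = activations @ weights   # the matmul doing nearly all the work
        print(output.shape)              # (32, 16384)
        ```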