• dohpaz42@lemmy.world · 5 months ago

    Why do we call it hallucinating? Call it what it is: lying. If you want to be “nicer” about it: fabricating. “Google’s AI is fabricating more lies. No one dead… yet.”

    • Snot Flickerman@lemmy.blahaj.zone · 5 months ago

      To be fair, they call it a hallucination because hallucinations don’t have intent behind them.

      LLMs don’t have any intent. Period.

      A purposeful lie requires an intent to lie.

      Without any intent, it’s not a lie.

      I agree that “fabrication” is probably a better word for it, especially because it evokes the industrial computing processes required to build these outputs. It lets the word work as a double entendre: the result has been fabricated by industrial processes, and it is a fabrication in the sense of a falsehood made from nothing.

      • dohpaz42@lemmy.world · 5 months ago

        I did look up an article about it that basically said the same thing, and while I get that “lie” implies malicious intent, I agree with you that “fabricate” is better than “hallucinate.”

      • zout@fedia.io · 5 months ago

        LLMs may not have any intent, but companies do. In this case, Google decides to present the AI answer on top of the regular search results, knowing that AI can make things up. Maybe the AI isn’t lying, but Google definitely is. Even with the “everything is experimental, learn more” line, because if they really wanted you to learn more, they’d just give you the information instead of making you click again for it.

        • Snot Flickerman@lemmy.blahaj.zone · 5 months ago

          In other words, I agree with your assessment here. The petty, abject attempts by all these companies to produce the world’s first real “Jarvis” are all couched in “they didn’t stop to think if they should.”

          • zout@fedia.io · 5 months ago

            My actual opinion is that they don’t want to think about whether they should, because they already know the answer. The pressure to go public with a shitty model outweighs the responsibility to the people relying on the search results.

            • Snot Flickerman@lemmy.blahaj.zone · 5 months ago

              It is difficult to get a man to understand something when his salary depends on his not understanding it.

              -Upton Sinclair

              Sadly, same as it ever was. You are correct: they already know the answer, so they don’t want to consider the question.

              • dohpaz42@lemmy.world · 5 months ago

                There’s also the argument that “if we don’t do it, somebody else will,” and I kind of understand that, even though I also disagree with it.

          • marcos@lemmy.world · 5 months ago

            Oh, they absolutely should. A “Jarvis” would be great.

            But that thing they are pushing has absolutely no relation to a “Jarvis”.

    • gamermanh@lemmy.dbzer0.com · 5 months ago

      Because lies require intent to deceive, which the AI cannot have.

      They merely predict the most likely next thing to say, so “hallucination” is a fairly accurate description.
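
      For anyone who wants to see what “predict the most likely next thing” means mechanically, here is a minimal toy sketch in Python (a simple bigram counter, nothing like a real LLM, which scores a learned probability distribution; the corpus and function names are made up for illustration). The point is that the model only ranks likelihood; truth never enters the process.

      ```python
      # Toy next-token predictor (illustrative only, not any real LLM's
      # implementation): pick whichever token most often followed the
      # current one in the training text. Real models use learned neural
      # probabilities instead of raw counts, but the principle is the
      # same: likelihood, not truth.
      from collections import Counter, defaultdict

      def train_bigrams(text):
          """Count which token follows which in the training text."""
          tokens = text.split()
          follows = defaultdict(Counter)
          for prev, nxt in zip(tokens, tokens[1:]):
              follows[prev][nxt] += 1
          return follows

      def predict_next(follows, token):
          """Return the most frequent continuation, or None if unseen."""
          if token not in follows:
              return None
          return follows[token].most_common(1)[0][0]

      corpus = "the cat sat on the mat and the cat ate the fish"
      model = train_bigrams(corpus)
      print(predict_next(model, "the"))  # "cat": the likeliest word, not a verified fact
      ```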

    • lunarul@lemmy.world · 5 months ago

      It’s not lying or hallucinating. It’s describing exactly what it found in the search results. There’s a web page with that title from that date. The problem is that the web page is on Pinterest and the title is the result of aggressive SEO. These kinds of SEO practices are what has made Google largely useless for the past several years, and an AI based on those useless results will be just as useless.

    • flop_leash_973@lemmy.world · 5 months ago

      The most damning thing to call it is “inaccurate.” Nothing will drive the average person away from a company’s information-gathering products faster than associating them with being inaccurate more often than not. That is why they are inventing different things to call it. It sounds less bad to say “my LLM hallucinates sometimes” than it does to say “my LLM is inaccurate sometimes.”