• NigelFrobisher@aussie.zone · 6 hours ago

    At a beach restaurant the other night I kept hearing a loud American voice cut across all conversation, going on and on about “AI” and how it would get into all human “workflows” (new buzzword?). His confidence and volume were matched only by his obvious lack of understanding of how LLMs actually work.

      • AItoothbrush@lemmy.zip · 49 minutes ago

        AI itself too, I guess. Also, I have to point this out every time, but my username was chosen way before all this shit blew up in our faces. I’ve used this one on every platform for years.

    • Chaotic Entropy@feddit.uk · 5 hours ago

      Some people can only hear “AI means I can pay people less/get rid of them entirely” and stop listening.

      • anon_8675309@lemmy.world · 3 hours ago

        AI means C-level jobs should be on the block as well. The board could make decisions based on the AI’s output.

        • Knock_Knock_Lemmy_In@lemmy.world · 55 minutes ago

          The whole ex-McKinsey management layer is at risk. Whole teams of people who were dedicated to producing pretty slides with “action titles” for managers higher up the chain to consume and regurgitate are now having their lunch eaten by AI.

    • Blackmist@feddit.uk · 5 hours ago

      I’ve noticed that the people most vocal about wanting to use AI get very coy when you ask them what it should actually do.

    • Zement@feddit.nl · edited · 6 hours ago

      I really like the idea of an LLM being narrowly configured to filter and summarize data that comes in an irregular/organic form.

      You would have to run multiple instances in parallel, with different models and slightly different configurations, to reduce hallucinations (similar to sensor redundancy in industrial safety systems). But still, that alone is a game changer in “parsing the real world”. The catch: the energy needed to do this right (>= 3x) gets cut by removing the safety redundancy, because the hallucinations only become apparent somewhere down the line, and only sometimes.
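
      As a minimal sketch of that redundancy idea: the same prompt goes to several independent “models”, and an answer is accepted only if a quorum agrees, like voting between redundant sensors. The model functions here are hypothetical stand-ins; a real setup would call actual LLM APIs or the same model with different configurations.

      ```python
      from collections import Counter

      def redundant_extract(prompt, models, quorum=2):
          """Run the same prompt through several independent models and
          accept an answer only when a quorum of them agree, mirroring
          sensor redundancy in industrial safety systems."""
          answers = [model(prompt) for model in models]
          answer, votes = Counter(answers).most_common(1)[0]
          # No agreement: treat the result as a potential hallucination.
          return answer if votes >= quorum else None

      # Hypothetical stand-ins for real model calls.
      model_a = lambda p: "invoice total: 42.00"
      model_b = lambda p: "invoice total: 42.00"
      model_c = lambda p: "invoice total: 47.00"  # the odd one out

      result = redundant_extract("Extract the invoice total.",
                                 [model_a, model_b, model_c])
      # Two of three agree, so the outlier is outvoted.
      ```

      Dropping the quorum check is exactly the “remove the redundancy to save the 3x cost” shortcut: it works until the one run you kept is the one that hallucinated.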

      They poison their own well because they jump directly to the enshittification stage.

      So people talking about embedding it into workflows… hi… here I am! =D

      • AA5B@lemmy.world · 2 hours ago

        A buddy of mine has been doing this for months. As a manager, his first use case was summarizing the statuses of his team members into a team status. Arguably, hallucinations aren’t critical there.