  • LLMs alone won’t. Experts in the field seem to have different opinions on whether they will help get us there. What concerns me is that the issues and dangers of AGI also exist with advanced LLMs, and that research into them is being shelved because it gets in the way of profit. Maybe we’ll never be able to get to AGI, but we had better hope that if we do, we get it right the first time. How has that been going with the more primitive LLMs?

    Do we even know what the “right” AGI would be? We’re treading in dangerous waters.




  • Good questions.

    What sorts of scenarios involving the emergence of AGI do you think regulating the availability of LLM weights and training data (or of more closely regulating AI training, research, and development within the “closed source” shops like OpenAI) would help us avoid?

    Honestly, we might be too late for avoidance anyway, but it’s specifically research into the alignment problem that I think regulation could help with. Since these companies are still self-regulating, they’re free to do what OpenAI did with its alignment department. It’s akin to someone manufacturing a new chemical and not bothering with any research on side effects, only on what they can gain from it. Oh shit, never mind, that’s standard operating procedure, isn’t it, at least as long as the government isn’t around to stop it.

    And how does that threat compare to impending damage from climate change if we don’t reduce energy consumption + reliance on fossil fuels?

    Another topic that I personally think we’re doomed to ignore until things get so bad they affect more than just poor people and poor countries. How does it compare? Climate change and the probable directions it takes the planet are much more of a certainty than the unknowns of whether AGI is possible and what effects it could have. Interesting that we’re taking the same approach to both, even though climate change is the more obvious problem. Plus we’re profiting via greenwashing rather than making a concerted effort to mitigate what we still can.


  • No surprise, since there’s not a lot of pressure to impose any other regulation on the closed-source versions. Self-monitoring by a for-profit company always works out well…

    And for anyone saying “AGI won’t happen, there’s no danger”: what if, on the slightest chance, you’re wrong? Is the maddening rush to get the next product out, without any research into what we’re doing, worth a mistake? Sci-fi is fiction, but there are lessons in it too, and we’re ignoring them all because “that can’t happen” is stronger than “let’s be sure”.

    Besides, even with no AGI, humans alone can do huge damage with “bad” AI tools, and we’re not looking into that either.


  • I don’t think it’s that uncommon an opinion. An even simpler version is the constant repetition, over years now, of data breaches, often because of inferior protection. As an amateur website creator decades ago, I learned that storing plain-text passwords was a big no-no, so how are corporate IT departments still doing it? Even the non-technical person on the street rolls their eyes at such news, and yet it continues. CrowdStrike is just a more complicated version of the same thing.
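
    Just to show how low the bar is, here’s a minimal sketch of salted password hashing using only Python’s standard library (the parameter choices are my own illustration; a real system should use a vetted library like bcrypt or argon2):

    ```python
    # Minimal sketch: salted password hashing with the standard library.
    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)  # unique per user; stored alongside the hash
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)  # constant-time compare
    ```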




  • Understanding the variety of speech over a drive-thru speaker can be difficult even for a human with experience in the job. I can’t see the current level of voice recognition matching that, especially if it’s using LLMs to process whatever it manages to detect. If I’m placing a food order, I don’t need an LLM hallucination trying to fill in the blanks where the audio didn’t convert correctly to tokens or wasn’t in the training data.
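
    To make the concern concrete, here’s a toy sketch of the safer behavior: flag low-confidence words for confirmation instead of letting a generative model guess. The transcript data and threshold are invented for illustration, not from any real drive-thru system:

    ```python
    # Toy sketch: ask the customer about low-confidence ASR words instead of
    # letting a generative model invent ("hallucinate") replacements.
    CONFIDENCE_THRESHOLD = 0.6  # illustrative cutoff

    transcript = [  # (word, ASR confidence) -- made-up example data
        ("two", 0.97), ("cheese", 0.95), ("burgers", 0.92),
        ("no", 0.91), ("pickles", 0.41),  # garbled over the speaker
    ]

    def build_order(words):
        uncertain = [w for w, conf in words if conf < CONFIDENCE_THRESHOLD]
        if uncertain:
            # Don't guess -- ask.
            return f"Sorry, could you repeat the part about {', '.join(uncertain)}?"
        return " ".join(w for w, _ in words)

    print(build_order(transcript))
    ```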





  • Is it a physical HD (magnetic) and making noise? I had one fail years ago (fortunately my only failure so far), and by persistently retrying reads via a USB recovery drive, I managed to pull off enough of the data that was important. If it’s a newer SSD, that’s a different thing. That doesn’t mean all the data is gone, just that it’s a lot harder (read: $$$) to pull. Hopefully it’s just software or a loose cable.
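
    For what it’s worth, here’s a rough Python sketch of that “keep retrying” approach: read block by block, retry a few times, and zero-fill what won’t come back. The device path and block size are examples, it needs raw-device permissions, and GNU ddrescue is the proper tool for the real job:

    ```python
    # Rough sketch: image a failing drive with per-block retries.
    import os

    DEVICE = "/dev/sdb"      # example path for the failing drive
    IMAGE = "recovered.img"
    BLOCK = 512 * 1024       # read in 512 KiB chunks
    RETRIES = 3

    with open(DEVICE, "rb", buffering=0) as src, open(IMAGE, "wb") as dst:
        size = src.seek(0, os.SEEK_END)  # total device size in bytes
        offset = 0
        while offset < size:
            chunk = min(BLOCK, size - offset)
            data = None
            for _ in range(RETRIES):
                try:
                    src.seek(offset)
                    data = src.read(chunk)
                    break
                except OSError:
                    continue             # I/O error on a bad region: retry
            if data is None:
                data = b"\x00" * chunk   # unreadable after retries: zero-fill
            dst.write(data)
            offset += chunk
    ```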




  • The narrow-purpose models seem to be the most successful, which supports the idea that a general AI isn’t going to happen from LLMs alone. It’s interesting that hallucinations are seen as a problem, yet they’re probably part of why LLMs can be creative (much like humans). We shouldn’t want to stop them entirely, just control when they happen and be aware of when the AI is off the tracks. A group of different models working together and checking each other might work (and has probably already been tried; it’s hard to keep up).
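
    Something like this toy sketch, where the ask() function and model names are hypothetical stand-ins rather than any real API:

    ```python
    # Toy sketch: only trust an answer that a quorum of models agrees on.
    from collections import Counter

    def ask(model: str, prompt: str) -> str:
        """Hypothetical stand-in for calling a real model endpoint."""
        canned = {"model-a": "Paris", "model-b": "Paris", "model-c": "Lyon"}
        return canned[model]

    def consensus_answer(prompt: str, models: list[str], quorum: float = 0.66):
        answers = [ask(m, prompt) for m in models]
        best, count = Counter(answers).most_common(1)[0]
        if count / len(models) >= quorum:
            return best
        return None  # disagreement: treat the output as a possible hallucination

    print(consensus_answer("Capital of France?", ["model-a", "model-b", "model-c"]))
    ```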