• 3 Posts
  • 102 Comments
Joined 1 year ago
Cake day: July 13th, 2023

  • EnderMB@lemmy.world to Technology@lemmy.world: Who still uses pagers?

    People that work on-call do this, especially in tech or security.

    I’m considering making the switch because my paging calls come from a random set of phone numbers, so I cannot attach a specific ringtone to them. After a few horrible pages, you start to associate your phone going off with a world-ending experience, when it’s just your wife calling to ask if you want her to pick something up for you from the shop. A separate device that disassociates my phone from pain would be nice.



  • My only fear with the indie gaming industry is that many of them are starting to embrace the churn culture that has led AAA gaming down a dark path.

    I would love an app like Blind that allows developers on a game to anonymously call out the grinding culture of game development, alongside practices like firing workers before launch and leaving them out of the credits. Review games solely on how the devs treated their workers, and we might see some cool correlations between good games and good culture.


  • How is this going to work while OpenAI burns through an absolute ocean of cash to keep improving its services? Alongside this, a good software engineer or applied scientist can make close to $1m a year. While I do think professionals should earn what they’re worth to an employer, OpenAI still loses a ton of money.

    As someone who works in AI, I think most of us know it’s full of people trying to make a quick buck while investors stupidly throw money at it. OpenAI is ultimately the figurehead of this market, though, because at least the big companies can prop up their AI offerings with the money they make from shopping, cloud, ads, etc. The second OpenAI looks weak and needs money, the vultures will slice off a piece and we’ll see the AI market reduce to a whimper - just enough for tech to focus on the next grift.


  • Oh for sure, it’s not perfect, and IMO this is where the current improvements and research are going. If you’re relying on an LLM to hit hundreds of endpoints with complex contracts, it’s going to either hallucinate what it needs to do, or call several and go down the wrong path. I would imagine that most systems do this in a very closed way anyway, and will only show you what they want to show you. Logically speaking, for questions like “should I wear a coat today” they’ll need a service to check the weather in your location, and a service to get information about the user and their location.
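    The decomposition above can be sketched in miniature. This is purely illustrative - the service names, return values, and the coat rule are all invented stand-ins for whatever real services an assistant would actually call:

    ```python
    # Hypothetical sketch: decompose "should I wear a coat today"
    # into two service calls, then combine the results. All names
    # and data here are made up for illustration.

    def get_user_location(user_id):
        # Stand-in for a real user-profile service.
        return {"city": "London"}

    def get_weather(city):
        # Stand-in for a real weather service.
        return {"temp_c": 6, "raining": True}

    def answer_coat_question(user_id):
        location = get_user_location(user_id)
        weather = get_weather(location["city"])
        needs_coat = weather["temp_c"] < 12 or weather["raining"]
        return "Yes, take a coat." if needs_coat else "No coat needed."

    print(answer_coat_question("u123"))  # -> Yes, take a coat.
    ```

    The point is that the answer comes from composing services, not from the model alone.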


  • I work on LLMs for a big tech company. The misinformation on Lemmy is at best slightly disingenuous, and at worst people parroting falsehoods without knowing the facts. For that reason, take everything (even what I say) with a huge pinch of salt.

    LLMs do NOT just parrot back falsehoods; otherwise the “best” model would simply be whichever one had the “best” data and the best fit. The best way to think about an LLM is as a huge conductor of data AND guiding expert services. The content is derived from trained data, but it will also hit hundreds of different services to get context, find real-time info, disambiguate, etc. A huge part of LLM work is getting your models to basically say “this feels right, but I need to find out more to be correct”.

    With that said, I think you’re 100% right. Sadly, and I think I can speak for many companies here, knowing when you’re right is hard to get right, and LLMs are probably right a lot of the time even when the confidence in an answer is low. I would rather an LLM say “I can’t verify this, but here is my best guess” or “here’s a possible answer, let me go away and check”.
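    The hedging behaviour described here can be sketched as a simple confidence gate. This is not any real system’s API - the function, threshold, and scores are invented to show the shape of the idea:

    ```python
    # Illustrative sketch: gate the response on a confidence score
    # so low-confidence answers are hedged instead of stated as
    # fact. The threshold and scores are invented.

    CONFIDENCE_THRESHOLD = 0.8

    def respond(answer, confidence):
        # High confidence: state the answer plainly.
        if confidence >= CONFIDENCE_THRESHOLD:
            return answer
        # Low confidence: hedge rather than assert.
        return f"I can't verify this, but here is my best guess: {answer}"

    print(respond("Paris is the capital of France.", 0.97))
    print(respond("The 1967 budget was $3M.", 0.41))
    ```

    Getting the confidence score itself to be well calibrated is the hard part the comment is pointing at; the gate is trivial once you have it.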






  • An LLM is basically just an orchestration mechanism. Saying an LLM doesn’t do reasoning is like saying a step function can’t send an email. The step function can’t, but the lambda I’ve attached to it sure as shit can.

    ChatGPT isn’t just a model sat somewhere. There are likely hundreds of services working behind the scenes to coerce the LLM into getting the right result. That might be entity resolution, expert mapping, perhaps even techniques that will “reason”.
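    The step-function-and-lambda analogy can be sketched as a dispatch table: the “model” only picks an action, and attached handlers do the actual work. Every name here is hypothetical:

    ```python
    # Toy sketch of the orchestration idea: the LLM emits an action
    # name; the orchestrator dispatches it to a handler, much like a
    # step function delegating to a lambda. All names are invented.

    def resolve_entity(query):
        # Stand-in for an entity-resolution service.
        return f"resolved({query})"

    def lookup_fact(query):
        # Stand-in for a real-time fact-lookup service.
        return f"fact({query})"

    HANDLERS = {
        "entity_resolution": resolve_entity,
        "fact_lookup": lookup_fact,
    }

    def orchestrate(action, query):
        # The model chose `action`; the handler does the work.
        handler = HANDLERS.get(action)
        if handler is None:
            return "unknown action"
        return handler(query)

    print(orchestrate("fact_lookup", "capital of France"))
    ```

    On this view, asking whether the model “reasons” misses that the capability lives in the composed system, not in any one box.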

    The initial point is right, though. This ain’t AGI, not even close. It’s just your standard compositional stuff with a new orchestration mechanism that is better suited for long-form responses - and wild hallucinations…

    Source: Working on this right now.

    Edit: Imagine downvoting someone who literally works on LLMs for a living. Lemmy is a joke sometimes…