Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 1 Post
  • 40 Comments
Joined 1 year ago
Cake day: June 23rd, 2023


  • Except there’s nothing illegal about scraping all the content from websites (including news sites) and putting it into your own personal database. That is–after all–how search engines work.

    It’s only illegal if you then distribute said copyrighted material without the copyright owner’s permission. Because that’s what copyright is all about: Distribution.

    The news sites distributing the content in this case freely gave it to OpenAI’s crawlers. It’s not like they broke into these organizations in order to copy their databases of news articles.

    For the news sites to have a case they need to demonstrate that OpenAI is creating a “derivative work” using their copyrighted material. However, that’s going to be a tough sell to judges and/or juries, since the way LLMs work is not so different from the way humans do: they take in information and then produce similar information (by predicting the next word/symbol, given a series of tokens/a prompt).

    If you read all of Stephen King’s books, for example, you might be better at writing horror stories. You may even start writing in a similar style! That doesn’t mean you’re violating his copyright by producing similar stories.
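    The “predicting the next word, given a series of tokens” mechanism mentioned above can be sketched with a toy bigram model — a hypothetical miniature stand-in for a real LLM, just to show the prediction step:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which across a training corpus."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows: dict, prompt: str) -> str:
    """Return the most frequent next word given the prompt's last token."""
    last = prompt.split()[-1]
    candidates = follows.get(last)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

corpus = "the monster crept closer and the monster screamed"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "monster" follows "the" most often here
```

    A real LLM does the same thing with a neural network over billions of learned parameters instead of raw counts, but the interface — tokens in, most-likely next token out — is the same.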


  • Riskable@programming.dev to Technology@lemmy.world · The Cult of Microsoft (7 days ago)

    Ahaha! Microsoft employees are using AI to hallucinate their own performance reviews, and managers are using that very same AI to “review” said performance reviews. Which is exactly the dystopian vision of the future that OpenAI sells!

    What’s funny is that the “cult of Microsoft” is 100% bullshit, so the AI is being trained on bullshit, and as time goes on it’s being reinforced with its own hallucinated bullshit — because everyone is using it to bullshit the bullshitters in management who are demanding this bullshit!






  • As another (local) AI enthusiast I think the point where AI goes from “great” to “just hype” is when it’s expected to generate the correct response, image, etc on the first try.

    For example, telling an AI to generate a dozen images from a prompt then picking a good one or re-working the prompt a few times to get what you want. That works fantastically well 90% of the time (assuming you’re generating something it has been trained on).

    Expecting AI to respond with the correct answer when given a query > 50% of the time or expecting it not to get it dangerously wrong? Hype. 100% hype.

    It’ll be a number of years before AI is trustworthy enough not to hallucinate bullshit, or to generate the exact image you want on the first try.
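    The “generate a dozen and pick a good one” workflow above is best-of-N sampling. A minimal sketch, with a stub generator and a stub scoring function standing in for a real image model and human judgment (both are assumptions for illustration):

```python
import random

def generate(prompt: str, seed: int) -> str:
    """Stub for an image/text generator; a real model call goes here."""
    random.seed(seed)  # different seed -> different variant
    return f"{prompt} (variant {random.randint(0, 999)})"

def score(candidate: str) -> float:
    """Stub quality metric; in practice a human (or e.g. a CLIP score) decides."""
    return float(len(candidate))  # placeholder heuristic

def best_of_n(prompt: str, n: int = 12) -> str:
    """Generate n candidates and keep the highest-scoring one."""
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=score)

print(best_of_n("a castle at dusk"))
```

    The point is that one model call being unreliable doesn’t matter much when you can cheaply sample a dozen and curate — which is exactly why the “correct on the first try” expectation is where the hype breaks down.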


  • Just a point of clarification: Copyright is about the right of distribution. So yes, a company can just “download the Internet”, store it, and do whatever TF they want with it as long as they don’t distribute it.

    That’s the key: distribution. That’s why no one gets sued for downloading; they only ever get sued for uploading. Furthermore, the damages (if found guilty) are based on the number of copies that get distributed. That’s because copyright law hasn’t been updated in decades and 99% of it predates computers (especially all the important case law).

    What these lawsuits against OpenAI are claiming is that OpenAI is making a derivative work of the authors’/owners’ works. Which is kinda what’s going on, but also not really. Let’s say that someone asks ChatGPT to write a few paragraphs of something in the style of Stephen King… His “style” isn’t even copyrightable, so as long as it didn’t copy his works word-for-word, is it even a derivative? No one knows. It’s never been litigated before.

    My guess: No. It’s not going to count as a derivative work. Because it’s no different than a human reading all his books and performing the same, perfectly legal function.




  • Tom’s Hardware tested this software version of BitLocker last year and found it could slow drives by up to 45 percent.

    WTF‽ On Linux, full disk encryption overhead is minimal:

    While in pure I/O benchmarks like FIO there is an obvious impact to full disk encryption and other synthetic workloads, across the real-world benchmarks the performance impact of running under full disk encryption tended to be minimal

    https://www.phoronix.com/review/hp-devone-encrypt/5

    There’s like five million ways you can use disk encryption on Linux though, and not all of them are very performant. So keep that in mind if you see other benchmarks showing awful performance (use the settings Phoronix used).

    I suspect Microsoft made some poor decisions in regards to disk encryption (probably because of bullshit/insecure-by-design FIPS compliance) and now they’re stuck with them.
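    The kind of measurement behind those benchmark numbers can be shown in miniature: time a bulk data pass with and without a cipher step and report the slowdown. A toy sketch using a throwaway XOR “cipher” (not real crypto — real FDE uses hardware-accelerated AES-XTS) purely to illustrate the methodology:

```python
import time

def xor_encrypt(data: bytes, key: int = 0x5A) -> bytes:
    """Toy stand-in for a real cipher; XOR is NOT secure encryption."""
    return bytes(b ^ key for b in data)

def throughput(transform, payload: bytes, rounds: int = 5) -> float:
    """MB/s achieved by applying transform to payload, rounds times."""
    start = time.perf_counter()
    for _ in range(rounds):
        transform(payload)
    elapsed = time.perf_counter() - start
    return len(payload) * rounds / elapsed / 1e6

payload = bytes(256 * 1024)  # 256 KiB of zeros as a dummy workload
plain = throughput(lambda d: d, payload)      # identity = "no encryption"
encrypted = throughput(xor_encrypt, payload)  # encryption in the data path
print(f"overhead: {100 * (1 - encrypted / plain):.0f}%")
```

    Synthetic loops like this (or FIO) exaggerate the cost because they do nothing but push bytes; real-world workloads spend most of their time elsewhere, which is why Phoronix saw minimal impact outside of pure I/O benchmarks.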









  • Maybe we should take a page from the Trumpers here and declare it a conspiracy!

    The deep state doesn’t want people following Harris! They don’t want you to know about it. They think they know better than you!

    “Let me tell you, folks, I know how to follow people and this Twitter situation smells. I know all about smelling. Smells. Smelling. Smell… Ling! The word just sounds awful, right? They want you to smell things. They’re coming for your smells!”

    Haha, yeah… This is Elon Musk’s X.com we’re talking about. It’s just sheer incompetence and the usual buggy bullshit. We should expect this as normal X behavior at this point. Is anyone really surprised that X is suddenly throwing errors when users try basic functionality? Come on. The platform is garbage, and that’s not even taking into account the garbage present on the platform.