• 0 Posts
  • 33 Comments
Joined 9 months ago
Cake day: February 13th, 2024

  • Not sure what you’re trying to say either, but fascist speech built on lies is fascist recruitment. That is why autonomous anti-fascists are right to disrupt fascist recruitment events at universities: the state and moderates care more about maintaining order, so you have to disrupt the recruiting by any means.

    So if your argument is that “sunlight is the best disinfectant”, then no, it definitely isn’t. There is historical evidence to the contrary.



  • There is nothing to keep you from using factors of 1024 (except the slightly ludicrous prefixes “kibi” and “mebi”), but outside of low-level stuff like disk sectors or BIOS code, where you might want to use bit logic instead of division, it’s rather rare. I too started in the days when a division op was more costly than bit-level logic.

    I’d argue that user-facing applications are better off with base 1000, except where convention dictates otherwise. The majority of users don’t know, care, or need to care what bits and bytes do. It’s programmers who like the beauty of bit logic, not users. @mb_@lemm.ee
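The trade-off above can be sketched in a few lines. This is a minimal illustration (function names are my own, not from any library) of the two conventions: binary prefixes via 10-bit shifts versus SI prefixes via plain division.

```python
# Minimal sketch: formatting a byte count in base 1024 (binary prefixes)
# vs. base 1000 (SI prefixes). Function names are illustrative only.

def format_base_1024(n: int) -> str:
    """Binary prefixes (KiB, MiB, ...): each step is a 10-bit shift."""
    units = ["B", "KiB", "MiB", "GiB", "TiB"]
    i = 0
    while n >> 10 and i < len(units) - 1:
        n >>= 10          # divide by 1024 using bit logic
        i += 1
    return f"{n} {units[i]}"

def format_base_1000(n: int) -> str:
    """SI prefixes (kB, MB, ...): plain integer division by 1000."""
    units = ["B", "kB", "MB", "GB", "TB"]
    i = 0
    while n >= 1000 and i < len(units) - 1:
        n //= 1000
        i += 1
    return f"{n} {units[i]}"

print(format_base_1024(1536))       # 1 KiB (integer truncation)
print(format_base_1000(2_000_000))  # 2 MB
```

Note how the same byte count lands in different buckets depending on the convention, which is exactly why user-facing tools and disk vendors so often disagree about sizes.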








  • Nice post. A while back I read something on Reddit about a theory that technological advances always end up being used for the worst possible nightmare scenario, but I can’t find it now. Fundamentally I’m a technological optimist, but I can’t yet fully imagine the subtle systemic issues this will cause. Except for the rather obvious one:

    Algorithms on social media decide what people see, and that shapes their perception of the world. It is relatively easy to manipulate in subtle ways. If AI can learn to categorize the psychology of users and then “blindly anticipate” what and how they will respond to stimuli (memes / news / framing), then that will in theory allow for total control by means of psychological manipulation. I’m not sure how close we are to this. The counter-argument would be that AI or LLMs currently don’t understand at all what is going on, but public relations / advertising / propaganda works on an emotional level and doesn’t “have to make sense”. Emotional logic is much easier to categorize and generate. So even without a blatant evil master plan, just optimizing for max profit and max engagement could make the AI (dumbly) pursue a broad strategy that is more evil than any deliberate one.



  • Yeah. I think there is a kind of power grab under way. Social media companies will try to push the claim that they own the IP rights to the large bodies of text used for LLM training. This would then require that producers of LLM software acquire licensing rights, which will cost many millions, which in turn restricts the free use of LLMs and in general of any AI software that requires training data.

    The end result is that as the “means of production” become less based on human work, the “means of generation” and AI will be controlled by the capitalists. If you can turn something into a commodity (like knowledge, via patents and IP), you can control it. Leading to a darker timeline.