• 6 Posts
  • 20 Comments
Joined 1 year ago
Cake day: June 13th, 2023


  • My guess is that scale and influence have a lot to do with

    To break this down a little, first of all: “my guess.” You are guessing because the government that is literally enacting a speech restriction hasn’t explained its rationale for banning one potential source of disinformation vs. actual sources of disinformation. So you are left in the position of guessing. To put a finer point on it, you are in the position of assuming the government is acting with good intentions and doing the labor of searching for a justification that fits that assumption. Reminds me of the Iraq war, when so many conversations I had defaulted to “the government wouldn’t do this if they didn’t have a good reason.”

    I don’t like to be cynical, and I don’t want to be a “both sides, all politicians are corrupt” kind of guy, but I think it’s pretty clear there is every reason for cynicism in this case. This was just an unfortunate confluence of anti-Chinese fear, hostility toward young people, and big tech donations that resulted in the government banning a platform millions of Americans use to disseminate speech. But because Dems helped do it, so many people feel the need to reflexively defend it, even if that forces them to “guess” and make up rationales.

    As far as influence and reach, obviously that’s not in the bill. Influence is straight out: RT is highly influential in right-wing spaces. And numbers of users just goes to the profit potential that our good ol’ American firms are missing out on.

    If the US was concerned with propaganda or whatever, they could just regulate the content available on all platforms. They could require all platforms to have transparency around algorithms for recommending content. They could require oversight of how all social media companies operate, much like they do with financial firms or are trying to do with big AI platforms.

    But they didn’t. Because they are not attacking a specific problem, they are attacking a specific company.

    “Also RT has been removed from most broadcasters and App Stores in the US.”

    Broadcasters voluntarily dropped it after 2016; I think it’s still available on some, including Dish. As far as app stores, that’s just false: I just checked the Play Store and it’s right there, ready to download and fill my head with propaganda.


  • The US owns and regulates the frequencies TV and radio are broadcast on. The Internet is not the same. If the threat of foreign propaganda is the purpose, why can I download the official RT (Russia Today, a government-run propaganda outlet) app in the Play Store? If the US is worried about a foreign government spreading propaganda, why target the popular social media app that could theoretically (with no evidence it has happened yet) be used for propaganda, instead of the actual Russian propaganda app? Hell, I can download the South China Morning Post right from the Play Store, straight Chinese propaganda! There are also dozens of Chinese and other foreign-adversary-run social media platforms, and other apps that could “micro target political messaging campaigns,” available. So why did the US Congress single out one app for punishment?

    Money. The problem isn’t propaganda. The problem is money. The problem is TikTok is, or is on course to be, more popular than our American social media platforms. The problem is American firms are being outcompeted in the marketplace, and the government is stepping in to protect the American data-mining market. The problem is young people are trading their data for TikToks instead of handing it over to be sold to US advertising networks in exchange for YouTube Shorts and Instagram Stories. If the problem were propaganda, the US would go after propaganda. If the problem is just that a Chinese company offers a better product than US companies, then there’s no reason to draft nuanced legislation that goes after all potential foreign influence vectors; you just ban the one app that is hurting the share price of your donors.


  • While I appreciate the focus and mission, kind of I guess, you’re really going to set up shop in a country literally using AI to identify airstrike targets and handing the AI the decision over whether the anticipated civilian casualties are proportionate? https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

    And Israel is pretty authoritarian. Given recent actions against their Supreme Court and against journalists (Al Jazeera was outlawed, the Associated Press had cameras confiscated for sharing images with Al Jazeera, oh, and the offices of both have been targeted in Gaza), do you really think the right-wing Israeli government isn’t going to co-opt your “safe superintelligence” for its own purposes?

    Oh, and then there is the whole genocide thing. Your claims about concern for the safety of humanity ring more than a little hollow when you set up shop in a country actively committing genocide, or at the very least engaged in war crimes and crimes against humanity as determined by like every NGO and international body that exists.

    So Ilya is a shit head is my takeaway.




  • We had I think six eggs harvested and fertilized; of those, I think two made it to blastocyst, meaning the cells divided as they should by day five. The four that didn’t divide correctly were discarded. Did we commit four murders? Or does it not count if the embryo doesn’t make it to blastocyst? We did genetic testing on the two blastocysts: one came back normal and the other came back with all manner of horrible abnormalities. We implanted the healthy one and discarded the genetically abnormal one. I assume that was another murder. Should we have just stored it indefinitely? We would never use it and can’t destroy it, so what do we do? What happens after we die?

    I know the answer is probably it wasn’t god’s will for us to have kids, all IVF is evil, blah blah blah. It really freaks me out sometimes how much of the country is living in the 1600s.




  • I don’t know enough to know whether or not that’s true. My understanding was that Google Brain invented the transformer architecture with the paper “Attention Is All You Need.” Many, if not most, LLMs use a transformer architecture, though you’re probably right that a lot of them build on the open-source models OpenAI made available. The “generative” part just describes the model generating outputs (as opposed to classification and the like), and “pre-trained” just refers to the training process.
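    To make those three words concrete, here’s a toy sketch (numpy only, random weights, made-up sizes, nothing like a production model) of the idea: a single causal self-attention layer is the “transformer” part, predicting a next token is the “generative” part, and “pre-trained” just means the weights would be fit on raw text first, whereas here they’re random.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    vocab, d = 50, 16                    # toy vocabulary and embedding sizes
    E = rng.normal(size=(vocab, d))      # token embeddings (learned in a real model)
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    Wout = rng.normal(size=(d, vocab))   # projects hidden state back to vocab logits

    def softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=-1, keepdims=True)

    def next_token_logits(tokens):
        """The 'transformer' part: one causal self-attention layer."""
        x = E[tokens]                    # (seq, d)
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = q @ k.T / np.sqrt(d)    # (seq, seq) pairwise attention scores
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores[mask] = -1e9              # causal mask: positions can't see the future
        return (softmax(scores) @ v)[-1] @ Wout  # logits for the *next* token

    # The 'generative' part: sample the next token from those logits.
    # 'Pre-trained' would mean the weights above were already fit on raw text;
    # here they're random, so the continuation is gibberish.
    tokens = [3, 17, 42]
    probs = softmax(next_token_logits(tokens))
    tokens.append(int(rng.choice(vocab, p=probs)))
    print(tokens)
    ```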

    But again I’m a dummy so you very well may be right.


  • Putting aside the merits of trying to trademark “GPT,” which, like the examiner says, is a commonly used term for a specific type of AI (there are other open-source “GPT” models that have nothing to do with OpenAI), I just wanted to take a moment to appreciate how incredibly bad OpenAI is at naming things. Google has Bard and now Gemini. Microsoft has Copilot. Anthropic has Claude (which does sound like the name of an idiot, so not a great example). Voice assistants were Google Assistant, Alexa, Siri, and Bixby.

    Then OpenAI is like: ChatGPT. Rolls right off the tongue, so easy to remember, definitely feels like a personable assistant. And then they follow that up with custom “GPTs,” which is not only an unfriendly name but also confusing. If I try to use ChatGPT to help me make a GPT, it gets confused and we end up in a “who’s on first” style standoff. I’ve resorted to just forcing ChatGPT to do a web search for “custom GPT” so I don’t have to explain the concept to it each time.


  • Interesting perspective! I think you’re right in a lot of ways, not least that it’s too big and heavy now. I’d also be shocked if the next iPhone didn’t have an AI-powered Siri built in.

    I guess fundamentally I am skeptical that we’re all going to want screens around us all the time. I’m already tired of my smartwatch and phone buzzing me with notifications; do I really want popups in my field of vision? Do I want a bunch of displays hovering in front of me while I work? I just don’t know. It seems like it would be cool for a week or so, but I feel like it’d get tiring to have a computer on your face all day, even if they got the form factor way down.


  • Apple has always had a walled garden on iOS, and that didn’t stop them from becoming a giant in the US. Most people are fine with the App Store and don’t care about openness or the ability to do whatever they want with the device they “own.” Apple would probably love to have a walled garden for Macs as well, but knows that ship has sailed. Trying to force “spatial computing” (which this article incorrectly says was an Apple invention; it wasn’t, Microsoft used that term for its HoloLens years earlier) on everyone is a great way to move to a walled garden for all your computing, with Apple taking a 30% slice of each app sale. I doubt the average Apple user is going to complain about it either, so long as the apps they want to use are on the App Store.

    I think the bigger problem is that we’re in a world where most people, especially the generations coming up, want fewer screens in their lives, not more. Features like “Digital Wellbeing” are a market response to that trend, as are the thousands of apps and physical products meant to combat screen addiction. Apple is selling a future where you experience reality itself through a screen, and then you get the privilege of being able to clutter the real world with even more screens. I just don’t know that that’s a winner.

    It’s funny too, because at the same time AI promises a very different future where screens are less important. Tasks that require computers could be done by voice command or other minimal interfaces, because the computer can actually “understand” you. The Meta Ray-Ban glasses are more like this: you just exist in the real world and can call on AI to ask about the things you’re seeing or just other random questions. The Humane AI Pin is like that too (I doubt it will take off, but it’s an interesting idea about where the future is headed).

    The point is, all of these AI technologies are computers and screens getting out of your way so you can focus on what you’re doing in the real world, whereas Apple is trying to sell a world where you (as the Verge puts it) spend all day with an iPad strapped to your face. I just don’t see that selling; I don’t think anybody wants that world. VR games and the like are cool because you strap in for a single immersive experience, then take the thing off and go back to the real world. Apple wants you spending every waking moment staring at a screen, and that just sounds like it would suck.



  • I don’t use TikTok, but a lot of the concern is just overblown “China bad” stuff (the CCP does suck, but that doesn’t mean you have to be reactionary about everything Chinese).

    There is no direct evidence that the CCP has some back door to grab user data, or that it’s directing suppression of content. It’s just not a real thing. The fear mongering has been about what the CCP could force ByteDance to do, given their power over Chinese firms. ByteDance itself has been trying to reassure everyone that that wouldn’t happen, including by storing US user data on US servers out of reach of the CCP (theoretically anyway).

    You stopped hearing about this because that’s politics: newer, shinier things popped up to get people angry about. Montana tried banning TikTok and got slapped down on First Amendment grounds. Politicians lost interest, and so did the media.

    Now that’s not to say TikTok is great about privacy or anything. It’s just that they are the same amount of evil as every other social media company and tech company making money from ads.



  • Google scanned millions of books and made them available online. Courts ruled that was fair use because the purpose and interface didn’t lend themselves to actually reading the books in Google Books, just to searching them for information. If that is fair use, then I don’t see how training an LLM (which doesn’t retain an exact copy of the training data, at least in the vast majority of cases) isn’t fair use. You aren’t going to get an argument from me.

    I think most people who will disagree are reflexively anti AI, and that’s fine. But I just haven’t heard a good argument that AI training isn’t fair use.






  • During an earnings call on Tuesday, UPS CEO Carol Tomé said that by the end of its five-year contract with the Teamsters union, the average full-time UPS driver would make about $170,000 in annual pay and benefits, such as healthcare and pension benefits.

    The headline is sensationalized for sure. But the article itself actually makes the point that the tech workers are missing that the $170k figure includes both salary and benefits.

    “This is disappointing, how is possible that a driver makes much more than average Engineer in R&D?” a worker at the autonomous trucking company TuSimple wrote on Blind, an anonymous job-posting site that verifies users’ employment using their company email. “To get a base salary of $170k you know you need to work hard as an Engineer, this sucks.”

    It is important to note that the $170,000 figure represents the entire value of the UPS package, including benefits, and does not represent the base salary. Currently, UPS drivers make an average of around $95,000 per year with an additional $50,000 in benefits, according to the company. The median salary for an engineer in the US is $103,845, with a base pay of about $91,958, according to Glassdoor. And TuSimple research engineers can make between $161,000 and $250,000 in compensation, Glassdoor data shows.
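    Run the numbers and the outrage deflates: roughly $95,000 in wages plus $50,000 in benefits is about $145,000 in total compensation today, rising to around $170,000 by the end of the five-year contract. The figure Blind users were comparing against their base salaries was never a base salary.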

    On the whole, though, this is a useless article covering drama on Blind, wrapped up with a ragebait headline.