  • Wouldn’t be a huge change at this point. Israel has been using AI to determine targets for drone-delivered airstrikes for over a year now.

    https://en.m.wikipedia.org/wiki/AI-assisted_targeting_in_the_Gaza_Strip gives a high-level overview of Gospel and Lavender, and there are news articles in the references if you want to learn more.

    This is at least being positioned better than the ways Lavender and Gospel were used, but I have no doubt that it will be used to commit atrocities as well.

    For now, OpenAI’s models may help operators make sense of large amounts of incoming data to support faster human decision-making in high-pressure situations.

    Yep, that was how they justified Gospel and Lavender, too - “a human presses the button” (even though they’re not doing anywhere near enough due diligence).

    But it’s worth pointing out that the type of AI OpenAI is best known for comes from large language models (LLMs)—sometimes called large multimodal models—that are trained on massive datasets of text, images, and audio pulled from many different sources.

    Yes, OpenAI is well known for this, but they’ve also created other types of AI models (e.g., Whisper, their speech-to-text model). I suspect an LLM would be part of any solution they build, but that it wouldn’t be the full solution.
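    For illustration, here’s a minimal sketch of using Whisper via its open source Python package - the model size and audio file name are placeholders, and it requires ffmpeg on the PATH:

    ```python
    # Sketch: transcribing audio with OpenAI's open source Whisper model,
    # a speech-to-text model rather than an LLM.
    # Requires `pip install openai-whisper` and ffmpeg.
    import whisper

    model = whisper.load_model("base")          # placeholder model size
    result = model.transcribe("interview.mp3")  # placeholder audio file
    print(result["text"])
    ```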


  • Thanks for clarifying! I’ve heard nothing but praise for Kagi from its users, so that’s what I was assuming, but Searxng has also been great, so I wouldn’t have been too surprised if you’d compared them and found its results to be on par or better.

    By the way, if you’re self-hosting Searxng, you can add your own index. Searxng supports YaCy, which is an actively developed, open source search index and crawler that can be operated standalone or as part of a decentralized (P2P) network. Here are the Searxng docs for that engine. I can’t speak to its quality as I still haven’t set it up, though.
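    In case it’s useful, here’s a minimal sketch of querying a standalone local YaCy instance directly - the default port (8090) and the RSS-like JSON response shape are assumptions based on YaCy’s docs, so treat it as illustrative:

    ```python
    # Sketch: querying a local, standalone YaCy instance via its JSON search API.
    # Assumes YaCy's default port (8090) and its RSS-like JSON response shape.
    import requests

    resp = requests.get(
        "http://localhost:8090/yacysearch.json",
        params={"query": "self-hosted search"},
        timeout=10,
    )
    resp.raise_for_status()

    # Results arrive as an RSS-style channel with a list of items.
    for item in resp.json()["channels"][0]["items"]:
        print(item["title"], "-", item["link"])
    ```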



  • Your Passkeys have to be stored in something, but you don’t have to store them all in the same thing.

    If you store them with Microsoft’s Windows Hello, Apple Keychain, or Google Password Manager, all of which are closed source, then you have to trust MS/Apple/Google. However, Keychain is end-to-end encrypted (according to Apple) and Windows Hello is currently not synced to the cloud, so if you trust those claims, you don’t need to trust that they won’t misuse your data. I don’t know if Google’s offering is end-to-end encrypted, but I wouldn’t trust it either way.

    You can also store Passkeys in a password manager. Bitwarden is open source (though they did recently introduce a proprietary, source-available SDK), as is KeePassXC. 1Password isn’t open source but can store Passkeys as well.

    And finally, you can store Passkeys in a compatible security key, like the YubiKey 5 series keys, which can each store 100 Passkeys. Since the private keys can’t be extracted from the hardware, they’re basically immune to being stolen. Note that if your primary interest in Passkeys is the phishing resistance (near-perfect immunity to MitM attacks - see the sketch at the end of this comment), you can get that same benefit by using WebAuthn as a second factor. However, my experience has been that Passkey support is broader.

    Revoking Passkeys involves logging into the particular service and revoking them, just like changing your password. There isn’t a centralized way to do it as far as I’m aware - each Passkey is only used for a single service, after all. However, in the same way that some password managers will offer to automatically change your passwords, they might develop a similar feature for Passkeys.
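    To make the phishing-resistance point concrete, here’s a toy sketch of the challenge-response idea behind Passkeys/WebAuthn. It is not the real protocol (no CBOR, attestation, or signature counters) - just the origin-binding property that defeats MitM phishing. The key names and origin string are illustrative:

    ```python
    # Toy sketch of the challenge-response idea behind Passkeys/WebAuthn.
    # Not the real wire format - just the origin binding that defeats
    # man-in-the-middle phishing: a fake site can only obtain signatures
    # over its own origin, which the real server will reject.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Registration: the authenticator creates a key pair scoped to one origin.
    private_key = ed25519.Ed25519PrivateKey.generate()  # never leaves the device
    public_key = private_key.public_key()               # server stores this

    # Login: the server issues a fresh random challenge.
    challenge = os.urandom(32)

    # The authenticator signs the challenge together with the origin it
    # actually sees, so a signature obtained by a lookalike domain won't verify.
    signature = private_key.sign(challenge + b"https://example.com")

    # Server-side check against the expected origin.
    try:
        public_key.verify(signature, challenge + b"https://example.com")
        print("valid login for https://example.com")
    except InvalidSignature:
        print("rejected")
    ```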

  • Synthetic media should be required to be watermarked at the source

    Bit late for that (even in 2023). The best we could do now is something like public-key cryptography, with cameras embedding secret keys used to sign the images they capture. However:

    • That would require people to purchase new cameras (though phones could likely do this without new hardware, leveraging their secure enclaves to sign)
    • Depending on the implementation of the signing, even applying filters, color grading, or cropping could make an image stop matching. If you remove something from the background or make other overt changes, it’s definitely not going to match.
      • Adobe has a system for handling edits and attesting that no AI was used. Ideally, other major photo editing tools will do something similar. However, I don’t think it’s feasible to securely sign such an attestation history locally, so all such images would need to be uploaded to be signed remotely.
    • This won’t work for traditional art.

    For artists and photographers with old-school cameras (“old-school” meaning “doesn’t compute and sign a perceptual hash of the image”), something similar could still be done. Each such person can generate a public/private key pair for themselves and manually sign the images they’ve created. This depends on you trusting that specific artist, though, as opposed to trusting the manufacturer of the camera used.
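    A minimal sketch of that manual-signing idea, using Ed25519 from Python’s cryptography package (the file name is a placeholder). Note how even a one-bit edit - let alone cropping or filtering - breaks verification, which is the fragility described in the list above:

    ```python
    # Sketch: an artist signing an image with their own key pair. Any change
    # to the file bytes (crop, filter, recompression) breaks verification.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    private_key = ed25519.Ed25519PrivateKey.generate()  # artist keeps this secret
    public_key = private_key.public_key()               # artist publishes this

    image = open("artwork.png", "rb").read()  # placeholder file
    signature = private_key.sign(image)       # distributed alongside the image

    # Anyone can verify a downloaded copy against the artist's public key:
    public_key.verify(signature, image)       # passes silently if untouched

    # Even flipping a single bit invalidates the signature:
    tampered = image[:-1] + bytes([image[-1] ^ 1])
    try:
        public_key.verify(signature, tampered)
    except InvalidSignature:
        print("image was modified or isn't from this artist")
    ```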


  • This isn’t true, nor is it how this works, but there is a law being proposed that would sorta make it so: https://arstechnica.com/information-technology/2024/08/senates-no-fakes-act-hopes-to-make-unauthorized-digital-replicas-illegal/

    In the US, your likeness is protected under state law and case law rather than federal law, and I don’t know of any such law that imposes a responsibility upon sites like Twitter to take down violations upon your report in the same way that the DMCA does. Rather, those laws allow you to sue the entity who used your likeness for damages in civil court. That isn’t very useful to Jane when her ex-boyfriend uploads revenge porn of her, or to Kate when a random Twitter account deepfakes her face onto a nude.

    However, if a picture you hold the copyright to (like a selfie) is used as an input to an AI, arguably you have partial copyright to the output, as the AI-generated elements aren’t copyrightable and the output couldn’t have been created without your input. As such, I think it would be reasonable to issue a DMCA takedown request if someone posted a nonconsensual deepfake of you, on the grounds that you have a good faith belief that you hold copyright to it. However, if you didn’t take the picture used as an input yourself, you don’t have copyright to it, and therefore don’t have partial copyright to the output, either. If it’s a deepfake face swap, then whoever owns the copyright to the original scene image/video would also have partial copyright, and they could also issue a DMCA takedown request.


  • It’s like how they slapped ‘Smart’ on every tech product in the past decade. Even devices that are dumb as fuck are called ‘Smart’ devices.

    I’m not a big fan of “Smart” as a marketing term, either, but “Automatable” doesn’t exactly roll off the tongue, and “Connected” doesn’t really have the same appeal. That said, “smart” was used pretty consistently to refer to devices that could be controlled as part of a “smart home.” It wasn’t supposed to refer to a device that itself was intelligent, though.

    I always thought of AI as artificial consciousness, an unnatural and created-by-humans self-aware and self-thinking being.

    Sounds like you’re thinking of AGI (artificial general intelligence), or that your understanding is based on sci-fi as opposed to the academic discipline/field of research, which has been around since the 1950s.

    And yes, marketing is often inaccurate… but almost every instance I’ve seen where they say they’re using AI, they were.

    In fact stuff like ChatGPT would’ve made more sense to actually be called ‘Smart’ search engines instead of ‘AI’.

    IMO “Smart” would be more misleading than “AI,” even if “Smart” didn’t have an existing, unrelated meaning. I do think we could use better words - AI is such a broad category that it doesn’t say much to call a product “AI-powered.” Stable Diffusion and Llama use completely different types of AI, for example. But people broadly recognize the term (even if they don’t understand it properly) and the same can’t be said for terms like “LLM.”

    They might be technological achievements, but they’re not AI.

    You’re illustrating the AI effect - the “discounting of the behavior of an artificial-intelligence program as not ‘real’ intelligence.” AI is used in a ton of different ways that you likely don’t ever think about or even notice.

    I recommend reading over at least the introduction to the Artificial Intelligence article on Wikipedia before proclaiming that something that fits cleanly into the definition of AI isn’t AI.



  • It doesn’t matter if it’s emulated legally or not. They can issue a takedown for showing gameplay captured from an NES hooked up to a CRT if they want.

    A fair use defense has to be defended in court, and it’s not just about whether you’re right but also about whether you can afford to fight.

    It’s also not certain that a fair use defense would fly. One of the factors in determining fair use is market impact, and I suspect that Nintendo’s lawyers would argue that demonstrating that their games can be emulated - even if the specific games demoed aren’t being sold - has a negative market impact, since it makes people who might otherwise buy a Switch and a Nintendo Switch Online membership to play the officially emulated games less likely to do so.

  • Because your friend would just be doing it out of curiosity, not as part of an investigation.

    It doesn’t matter why my friend wants to use my phone. If a friend wants to use my computer, I log out and let them use a guest account that doesn’t have access to my private documents. That’s not an option on my phone. It doesn’t matter if my friend has a legitimate reason to want to look at something on my phone. Maybe they want to see a picture I took last Friday - if they tell me that, I’ll just pull it up and share it with them.

    If my phone has something on it that can help the police and the police tell me what they’re looking for, I can check my phone myself and share specifically that information with them.

    If my phone doesn’t have that information, I can tell them that, too.

    This is exactly the same as with my friends. The difference is that the police are much more likely to be antagonistic and much less likely to tell me what they want.

    If the police can’t articulate what they’re looking for, or if they don’t trust me enough to tell me what it is, then I DEFINITELY don’t trust them to look at my phone themselves. And heck, that’s true of my friends, too.

    If I hand a police officer my phone unlocked, what’s stopping them from hooking it up to GrayKey or Cellebrite or some other similar tool and dumping all the data from my phone without my knowledge - whether for legitimate or nefarious purposes? What stops them from doing this “out of curiosity”? This isn’t generally a risk with my friends, but it’s always a risk when dealing with the police.

    In the US, when there’s suspicion of police wrongdoing, the police investigate themselves (and either conclude that nobody did anything wrong, or that only one person did something wrong and everyone else is fine). It’s so bad that it’s a meme (“We’ve investigated ourselves and found no evidence of any wrongdoing”). But even if you don’t have the police investigating themselves in your country, it’s still the government doing that investigation. And nothing makes the government inherently trustworthy.

    As a private citizen of your country who is legitimately concerned that the police are retaining more data than necessary, could I visit the police station and ask them to give me supervised admin access to their computers (as well as the personal computers of anyone who might have had access to my device or to the data extracted from it), plus full access to the station itself in case there are any unaccounted-for computers, so that I can confirm the police aren’t overstepping? If not, why not? It’s not like the police have anything to hide, right? And the sooner the police cooperate and share that information with me, the sooner I can rest easy knowing that I and my fellow citizens haven’t been victimized by the police.

    Hopefully you see how ridiculous it is for me to expect someone to just give me access to all of that information. That’s actually less ridiculous than a police officer asking me to hand them my unlocked phone.

    As a private citizen, I have to trust that police and government officials are doing their jobs properly. If they don’t, I can have my privacy invaded or be framed for a crime, with no recourse and no real accountability. If I hand an officer my phone, I have to trust him, and I’m the only one risking anything. In the opposite scenario, if I overstep while they supervise me reviewing their systems, they can hold me accountable immediately.


  • I’m a cop and I can tell you that, at least in my country, you’d have no reason to not unlock your phone if you haven’t done anything.

    I can understand that in some countries cops can be seen as criminals (and are behaving like criminals), but I don’t think a generality should be made.

    It sounds like you’re saying that you would assume that someone had done something illegal if they refused to unlock their phone for you. It’s a bit ironic that you then immediately say that people shouldn’t generalize about cops behaving as criminals.

    I don’t let my friends go through my phone. Cop or not, why would I let a stranger?