• 0 Posts
  • 12 Comments
Joined 1 year ago
Cake day: July 16th, 2023

  • I think your take is a bit extreme.

    Currently, their statement (regardless of the questionable justification) is largely correct: no major C++ projects have been written in a safe subset, and no real work has really started yet. It isn’t practical.

    I do agree with you that a safe form of C++, once fully implemented and not frustrating to use, could easily become viable; the feature can be added. But that’s still years away from practical usage in large projects, and even when it’s done, many projects will stick to the older forms, making the transition slow and frustrating.

    The practical result is that he’s sort of right, if you just add the word “currently” to his statement.

    On the other hand, I do agree with you that Rust cannot be the sole answer to this problem either; it’s almost as impractical to rewrite codebases in Rust as in an as-yet unfinished safe form of C++. Only time and lots of effort can fix this problem.






  • I disagree: they are not talking about the online low-trust sources that will indeed undergo massive changes; they’re talking about organisations with chains of trust, and they make a compelling case that those won’t be affected as much.

    Not that you’re wrong either, but your points don’t really apply to their scenario. People who built their career in photography will have more to lose, and more opportunity to be discovered, so they really don’t want to play silly games when a single proven fake would end their career for good. It’ll happen, no doubt, but it’ll be rare and big news, a great embarrassment for everyone involved.

    Online discourse, random photos from events, anything without that chain of trust (or where the “chain of trust” is built by people who don’t actually care), that’s where this is a game changer.




  • Reasoning is obviously useful, but I’m not convinced it’s required to be a good driver. In fact, most driving decisions must be made rapidly; I doubt humans can be described as “reasoning” when we’re just reacting to events. Decisions that take long enough could be handed to a human (“should we rush for the ferry, or divert for the bridge?”). It’s only in the middle ground between the two that we will maintain this big advantage (“that truck ahead is bouncing around, I don’t like how the load is secured, so I’m going to back off”). That’s a big advantage, but how much of our time is spent with our minds fully focused and engaged anyway? Once we’re on autopilot, is there much reasoning going on?

    Not that I think this will be quick; I expect at least another couple of decades before self-driving cars can even start to compete with us outside of specific curated situations. And once they do, they’ll continue to fuck up royally whenever the situation is weird and outside their training, causing big news stories. The key question will be whether they can compete with humans on average, by outperforming us in quick responses and in consistently not getting distracted/tired/drunk.