• 0 Posts
  • 52 Comments
Joined 1 year ago
Cake day: July 28th, 2023







  • Punch cards? Stored correctly there’s no reason they couldn’t last many human lifetimes. But… Yeah it’ll take a while to encode everything.

    I would have thought that with modern technology we could come up with something like punch cards, but more space- and time-efficient: physical storage of data with only one write cycle, of course, but extremely durable. Even just the same system as punch cards, but using tiny holes punched very close together on a roll of paper. It could be punched or read by a machine at high speed (compared to a regular punch card; presumably still very slow compared to flash media).
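The idea can be sanity-checked with a back-of-envelope calculation. All the numbers below (hole pitch, tape dimensions) are invented assumptions for illustration, not specs of any real medium:

```python
# Hypothetical micro-punched tape: assumed numbers, purely illustrative.
hole_pitch_mm = 0.1                        # assumed 0.1 mm between hole centres
holes_per_mm2 = (1 / hole_pitch_mm) ** 2   # 100 holes (bits) per square mm

tape_width_mm = 25.4                       # assumed 1-inch-wide tape
tape_length_m = 100.0                      # assumed 100 m roll

area_mm2 = tape_width_mm * tape_length_m * 1000  # metres -> mm
capacity_mb = area_mm2 * holes_per_mm2 / 8 / 1e6  # bits -> megabytes

print(f"{capacity_mb:.0f} MB per roll")    # ~32 MB under these assumptions
```

Even a tenfold finer pitch only gets this to a few gigabytes per roll, which supports the point: durable, but nowhere near flash density.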




  • It’s an interesting point. If I need to confirm that I’m right about something I will usually go to the internet, but I’m still at the mercy of my own reading comprehension skills. These are perfectly good, but the more arcane the topic, and the more obtuse the language used in whatever resource I consult, the more likely I am to make a mistake. The resource I choose also has a dramatic impact - e.g. the Daily Mail vs the Encyclopaedia Britannica. I might be able to identify bias, but I also might not, especially if it conforms to my own. We expect a lot from LLMs that we cannot reliably do ourselves.










  • The Turing test is flawed: while it is supposed to test for intelligence, it really just tests for a convincing fake. Depending on how you set it up, I wouldn’t be surprised if a modern LLM could pass it, at least some of the time. That doesn’t mean they are intelligent (they aren’t), but I don’t think the Turing test is a good justification.

    For me the only justification you need is that they predict one word (or even one letter!) at a time. ChatGPT doesn’t plan a whole sentence out in advance; it works token by token. The input to each prediction is just everything so far, up to the last word. When it starts writing “As…” it has no concept of the fact that it’s going to write “…an AI language model” until it gets through those words.

    Frankly, given that fact it’s amazing that LLMs can be as powerful as they are. They don’t check anything, think about their answer, or even consider how to phrase a sentence. Everything they do comes from predicting the next token… An incredible piece of technology, despite its obvious flaws.
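The token-by-token loop described above can be sketched in a few lines. The “model” here is a trivial hard-coded bigram table, purely to show the loop structure; a real LLM scores every possible next token with a neural network, but the outer loop is the same:

```python
import random

# Toy bigram "model": maps a token to its possible successors.
# The table and tokens are invented for illustration only.
bigrams = {
    "as": ["an"],
    "an": ["ai"],
    "ai": ["language"],
    "language": ["model"],
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        choices = bigrams.get(tokens[-1])
        if not choices:
            break                              # no known continuation: stop
        tokens.append(random.choice(choices))  # commit to one token at a time
    return " ".join(tokens)

print(generate("as"))  # → "as an ai language model"
```

Note that `generate` never looks ahead: each appended token depends only on what has already been produced, which is exactly the constraint described above.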