I have a colour one, I think the white is pretty white tbh? Or, I mean, it looks like paper. Off-white like a book.
Yeah I misread before I commented, I didn’t know robot taxis were a thing, Jesus…
Surely you can just take over? You can’t expect the car to run people over for you lol
What always annoyed me was having to draw charts by hand. Just let me put the data in a computer for god’s sake, the rest of the working is there… I did actually write a Python function for one of my assignments, which was fine, but they told me not to do it for the exam.
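Something like this, roughly - not the actual assignment code, just a sketch with made-up numbers:

```python
# Minimal chart helper: hand it the data, let matplotlib do the drawing.
import matplotlib.pyplot as plt

def plot_results(x, y, xlabel="trial", ylabel="measurement"):
    """Plot the points with labelled axes and a grid."""
    fig, ax = plt.subplots()
    ax.plot(x, y, marker="o")
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    ax.grid(True)
    plt.show()

# The sort of data I'd otherwise be graphing by hand.
plot_results([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.0])
```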
Yes that’s what I’m thinking, some modernised physical data storage technique.
Punch cards? Stored correctly there’s no reason they couldn’t last many human lifetimes. But… Yeah it’ll take a while to encode everything.
I would have thought that with modern technology we could come up with something like punch cards but more space/time efficient. Physical storage of data - only one write cycle of course, but extremely durable. Even just the same system as punch cards but using tiny, tiny holes very close together on a roll of paper. Could be punched or read by a machine at high speed compared to a regular punch card (presumably still very slow compared to flash media).
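Rough sketch of the encoding in Python, for the sake of argument - the format is entirely made up (one byte per row, one hole per set bit, MSB first):

```python
# Toy "paper tape" codec: 'O' = hole, '.' = no hole, one byte per row.
def encode_tape(data: bytes) -> list[str]:
    """Turn each byte into a row of 8 hole positions, MSB first."""
    return ["".join("O" if (b >> i) & 1 else "." for i in range(7, -1, -1))
            for b in data]

def decode_tape(rows: list[str]) -> bytes:
    """Read the holes back into bytes."""
    return bytes(sum(1 << (7 - i) for i, c in enumerate(row) if c == "O")
                 for row in rows)

rows = encode_tape(b"Hi")
print("\n".join(rows))            # .O..O...  then  .OO.O..O
assert decode_tape(rows) == b"Hi"
```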
MBAs should be renamed Master’s in Bullshit Administration.
Check the other comment thread from the parent, there’s a discussion which goes into it.
It’s an interesting point. If I need to confirm that I’m right about something I will usually go to the internet, but I’m still at the mercy of my reading comprehension skills. These are perfectly good, but the more arcane the topic, and the more abstruse the language used in whatever resource I consult, the more likely I am to make a mistake. The resource I choose also has a dramatic impact - e.g. the Daily Mail vs the Encyclopaedia Britannica. I might be able to identify bias, but I also might not, especially if it conforms to my own. We expect a lot of LLMs that we cannot reliably do ourselves.
LLMs are just that - Ms, that is to say, models. And trite as it is to say - “all models are wrong, some models are useful”. We certainly shouldn’t expect LLMs to do things that they cannot do (i.e. possess knowledge), but it’s clear that they can do other things surprisingly effectively, particularly providing coding support to developers. Whether they do enough to warrant their energy/other costs remains to be seen.
My model has 100% recall and 50% precision, not bad eh?
But - that model would not have 99.9% accuracy.
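To spell it out with toy numbers - take the degenerate model that predicts "positive" for everything, on a balanced 50/50 dataset:

```python
# Predict "positive" for every example on a 50/50 dataset of 1,000 items.
tp, fp, fn, tn = 500, 500, 0, 0   # all positives caught, all negatives wrong

recall    = tp / (tp + fn)                   # 1.0 -> 100% recall
precision = tp / (tp + fp)                   # 0.5 -> 50% precision
accuracy  = (tp + tn) / (tp + fp + fn + tn)  # 0.5 -> nowhere near 99.9%
print(recall, precision, accuracy)
```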
Yeah I really can’t imagine any scenario where I want my mouse to… Help me prompt AI??
Recycle the garbage that comes out… Still more garbage out.
SteamDB has a pretty decent search. It’s not SQL, but the filters are a bit better.
I know how you feel tho - so few consumer orgs give us an advanced search worth its salt. I want to have (x AND y) OR z, or maybe x AND (y OR z)… Not whichever specific combination was preordained for me.
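Something like this is all I’m asking for - made-up catalogue data, just to show the boolean shape:

```python
# The filter I actually want: (x AND y) OR z, written out explicitly.
games = [
    {"name": "A", "tags": {"coop", "rpg"}, "price": 10},
    {"name": "B", "tags": {"rpg"},         "price": 60},
    {"name": "C", "tags": {"puzzle"},      "price": 0},
]

# (tagged "coop" AND tagged "rpg") OR free
hits = [g for g in games
        if ("coop" in g["tags"] and "rpg" in g["tags"]) or g["price"] == 0]
print([g["name"] for g in hits])   # ['A', 'C']
```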
Idk about latest. I deleted my FB account a while ago, but every time I see anything from there it’s bizarre AI generated crap.
Don’t forget about the soldiers with huge boots over their prosthetic legs. It’s their birthday, all 10,000 of them. So sad. Did you see the young man who made an impressive sculpture of a dog? Or how about the other 7 million of them? Dog sculpting is really taking off I hear.
ChatGPT is not designed to fool us into thinking it’s a human. It produces language with a specific tone & direct references to the fact it is a language model. I am confident that an LLM trained specifically to speak naturally could do it. It still wouldn’t be intelligent, in my view.
The Turing test is flawed, because while it is supposed to test for intelligence, it really just tests for a convincing fake. Depending on how you set it up, I wouldn’t be surprised if a modern LLM could pass it, at least some of the time. That doesn’t mean they are intelligent (they aren’t), but I don’t think the Turing test is a good justification.
For me the only justification you need is that they predict one word (or even letter!) at a time. ChatGPT doesn’t plan a whole sentence out in advance, it works token by token… The input to each prediction is just everything so far, up to the last word. When it starts writing “As…” it has no concept of the fact that it’s going to write “…an AI language model” until it gets through those words.
Frankly, given that fact it’s amazing that LLMs can be as powerful as they are. They don’t check anything, think about their answer, or even consider how to phrase a sentence. Everything they do comes from predicting the next token… An incredible piece of technology, despite its obvious flaws.
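Here’s the loop in toy form, if it helps - the “model” is just a made-up lookup table standing in for the network, but the control flow is the whole point:

```python
# Toy autoregressive loop: every prediction sees the whole prefix so far
# and emits exactly one token. Nothing is planned beyond that one token.
import random

MODEL = {
    ():                             {"As": 1.0},
    ("As",):                        {"an": 1.0},
    ("As", "an"):                   {"AI": 0.9, "example": 0.1},
    ("As", "an", "AI"):             {"language": 1.0},
    ("As", "an", "AI", "language"): {"model": 1.0},
}

def generate(max_tokens=10):
    tokens = []
    for _ in range(max_tokens):
        dist = MODEL.get(tuple(tokens))  # input = everything so far
        if dist is None:                 # no known continuation -> stop
            break
        nxt = random.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(nxt)               # commit one token, then repeat
    return " ".join(tokens)

print(generate())   # e.g. "As an AI language model"
```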
LLMs are just predictive text but bigger
This one was wild:
From picking up an object to mass murder lmao. Not even close!