Yet if I was helping my elders over the phone, I’d get all sorts of “What Windows key?”, “I can’t find that Control key”, or “I did that key, the plus key, and then my hand slipped and I minimized everything.”
We’re sorry (we got caught). Here’s a free identity protection scam to make you feel safe again.
I’m beginning to wonder whether identity theft protection is the next big thing to get into after self-storage. A bit of investment, then very little upkeep, and the breached companies keep that demand rolling in.
I read “free credit monitoring” as allowing your name to get on another list to be sold.
LLMs alone won’t. Experts in the field seem to have different opinions on whether they will help get us there. What concerns me is that the issues and dangers of AGI also exist with advanced LLM models, and that research into them is being shelved because it gets in the way of profit. Maybe we’ll never be able to get to AGI, but we had better hope that if we do, we get it right the first time. How has that been going with the more primitive LLMs?
Do we even know what the “right” AGI would be? We’re treading in dangerous waters.
So any entity that does something stupid enough that companies don’t want to associate with it anymore can sue for damages caused by its own mistake?
This is either very stupid or some 6-D chess move to make millions. Hmm…
Also, free market as long as it’s profitable for you, right?
Humans don’t live that long. That’s only about 1.5 million 30-minute videos, which isn’t a huge amount for a whole day’s worth of scraping.
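A back-of-the-envelope check of that figure, assuming an illustrative 85-year lifespan of nonstop viewing (the lifespan number is my own round assumption, not from the thread):

```python
# Rough lifetime-viewing arithmetic; 85 years is an assumed round number.
LIFESPAN_YEARS = 85
minutes_watched = LIFESPAN_YEARS * 365 * 24 * 60  # nonstop, no sleep
videos = minutes_watched / 30                     # 30-minute videos
print(f"{videos:,.0f} videos")                    # prints "1,489,200 videos"
```

So a whole human lifetime of continuous watching comes out to roughly 1.5 million half-hour videos.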
Good questions.
What sorts of scenarios involving the emergence of AGI do you think regulating the availability of LLM weights and training data (or of more closely regulating AI training, research, and development within the “closed source” shops like OpenAI) would help us avoid?
Honestly, we might be too late for avoidance anyway, but it’s specifically research into the alignment problem that I think regulation could help with. Since these companies are still self-regulating and free to do what OpenAI did with their department for that…it’s akin to someone manufacturing a new chemical and not bothering with any research on side effects, only on what they can gain from it. Oh shit, never mind, that’s standard operating procedure, isn’t it, at least as long as the government isn’t around to stop it.
And how does that threat compare to impending damage from climate change if we don’t reduce energy consumption + reliance on fossil fuels?
Another topic that I personally think we’re doomed to ignore until things get so bad they affect more than poor people and poor countries. How does it compare? Climate change and the directions it will probably take the planet are much more of a certainty than the unknowns of whether AGI is possible and what effects it could have. Interesting that we’re taking the same approach to both, even though one is the more obvious problem. Plus we’re profiting via greenwashing rather than making a concentrated effort to do effective things to mitigate what we can.
No surprise, since there’s not a lot of pressure to do any other regulation on the closed-source versions. Self-monitoring by a for-profit company always works out well…
And for any of the “AGI won’t happen, there’s no danger” crowd…what if, on the slightest chance, you’re wrong? Is the maddening rush to get the next product out, without any research on what we’re doing, worth a mistake? Sci-fi is fiction, but there are lessons there too, and we’re ignoring them all because “that can’t happen” is stronger than “let’s be sure”.
Besides, even with no AGI, humans alone can do huge damage with “bad” AI tools, and we’re not looking into that either.
I don’t think it’s that uncommon an opinion. An even simpler version is the constant repetition, over years now, of information breaches, often because of inferior protection. As an amateur website creator decades ago, I learned that storing plain-text passwords was a big no-no, so how are corporate IT departments still doing it? Even the non-tech person on the street rolls their eyes at such news, and yet it continues. CrowdStrike is just a more complicated version of the same thing.
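The decades-old fix alluded to here is to store a salted hash instead of the password itself. A minimal sketch using only Python’s standard library (parameter choices like the iteration count are illustrative, not a production recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Derive a salted hash; only the (salt, digest) pair is stored, never the password."""
    if salt is None:
        salt = os.urandom(16)  # a unique random salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Even if the stored `(salt, digest)` pairs leak in a breach, an attacker still has to brute-force each password individually, which is the whole point.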
Especially in situations like this, where it might well cost less to go back to the basics of better pay and training to create willing workers. Maybe the initial cost of automation was less than what they’d have to spend to improve things, but once you add in all the backtracking and the cost of mistakes, I doubt it.
That explains it. I read the title and wondered how they were doing pre-thought crime.
Understanding the variety of speech over a drive-thru speaker can be difficult even for a human with experience in the job. I can’t see the current level of voice recognition matching that, especially if it’s using LLMs to process what it managed to detect. If I’m placing a food order, I don’t need an LLM hallucination trying to fill in the blanks of what it didn’t convert correctly to tokens or wasn’t trained on.
Carbon monoxide also contributes to ozone breakdown, and there are additional manmade substances similar to CFCs, containing chlorine and bromine, that are still being leaked. Environmental changes in the Antarctic can also increase ozone depletion, as can longer-lasting cold air in the stratosphere (observed in the Arctic in 2020). The mention of emissions was just to suggest that smaller reactions can get lost in all the other problems we have created, although increasing wildfires are raising CO.
I didn’t see a mention in the paper of what the bump would amount to with the maximum amount of AlO2 distributed in the layers of the atmosphere where the reactions occur. When emissions are in the trillions of tons, I wonder if it would even be measurable.
At least the article came with the numbers. Given what I regularly read about all the pollutants we pump into the atmosphere daily, the numbers in this article for the material being atomized are…well, they’re very small in scale.
Basically, if a few hundred tons per year is hurting the ozone (and other things), just imagine what the billions of tons per year of emissions does.
Is it a physical (magnetic) HD, and is it making noise? I had one fail years ago (fortunately my only failure so far), and by persisting in trying to read it through a USB recovery drive, I managed to pull off enough of the data that was important. If it’s a newer SSD, that’s a different story. That doesn’t mean all the data is gone, just that it’s a lot harder (read: $$$) to pull. Hopefully it’s just software or a loose cable.
And Apple gets more usage. It’s a win-win for both companies.
That emphasizes my point. If someone feels they had always been a certain way in the past, even though they didn’t look it or act it in public, there is no “other side” of themselves. I’m not trying to change the vocabulary; it was just an observation about a word being used beyond its usual meaning. That’s how words evolve.
The narrow-purpose models seem to be the most successful, which supports the idea that a general AI isn’t going to emerge from LLMs alone. It’s interesting that hallucinations are seen as a problem yet are probably part of why LLMs can be creative (much like humans). We shouldn’t want to stop them entirely, just control when they happen and be aware of when the AI is off the rails. A group of different models working together and checking each other might work (and has probably already been tried; it’s hard to keep up).
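The cross-checking idea could be as simple as majority voting: ask several independent models the same question and flag the answer when they disagree. A toy sketch with stub functions standing in for real model calls (all names here are hypothetical):

```python
from collections import Counter

def cross_check(prompt, models):
    """Ask several independent models; return the top answer and whether a majority agreed."""
    answers = [model(prompt) for model in models]
    (top, count), = Counter(answers).most_common(1)
    agreed = count > len(answers) // 2  # simple majority; disagreement hints at hallucination
    return top, agreed

# Stub "models" standing in for real LLM calls:
model_a = lambda prompt: "Paris"
model_b = lambda prompt: "Paris"
model_c = lambda prompt: "Lyon"

answer, agreed = cross_check("Capital of France?", [model_a, model_b, model_c])
print(answer, agreed)  # prints "Paris True"
```

Real ensembles are fancier (weighted voting, a judge model, retrieval checks), but the principle is the same: no single model’s output is trusted on its own.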
The instructions were clear. Take him up, cut the line, come back for a refill.