

In machine learning, that task is referred to as classification
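For anyone who wants to see what that looks like in practice, here's a minimal sketch using scikit-learn's toy iris dataset (the dataset and model choice are just illustrative assumptions, not part of the original point):

```python
# A tiny classification example: fit a model that assigns each sample to a class.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))  # fraction of test samples labelled correctly
```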
I had all of these problems with Jellyfin and then I discovered Netflix (www.netflix.com/totallynotareferral)
The walls get hot, and you absorb the heat from the walls with a fluid. You use the fluid to heat water, the steam to drive a turbine, and the turbine to turn a permanent magnet inside a coil of wire. In addition, you can capture neutrons with a liquid metal (lithium), which heats the lithium, which heats the walls, which heats the water, which makes steam, which drives a turbine, which generates electricity.
If you poured water onto them, they wouldn’t explode. 100 million degrees Celsius doesn’t mean much when the plasma’s mass is so low compared to the mass of the water.
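Rough back-of-the-envelope numbers make the point (all parameters below are assumed ballpark tokamak-scale values, not figures for any specific reactor):

```python
# Compare the mass of a confined fusion plasma to the mass of ordinary water.
particle_density = 1e20   # particles per m^3, typical magnetic-confinement plasma (assumed)
plasma_volume = 800.0     # m^3, roughly ITER-sized vessel (assumed)
ion_mass = 3.34e-27       # kg per deuterium/tritium ion, order of magnitude

plasma_mass = particle_density * plasma_volume * ion_mass
water_mass = 1000.0       # kg, a single cubic metre of water for comparison

print(f"plasma mass: {plasma_mass * 1000:.3f} g")            # a fraction of a gram
print(f"water is ~{water_mass / plasma_mass:,.0f}x heavier")  # millions of times heavier
```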
Until you see companies selling liquid nitrogen generators, you’re not going to have to worry about anyone pushing quantum chips on the average consumer.
It prints in white text on a black background
We got extended memory now! Bill Gates doesn’t know what he’s talking about.
The people:
No, it’s recognizing that tinkering means different things now.
In the 80s and 90s, if you were learning computers, you had no choice but to understand how the physical machine worked and how software interacted with it. Understanding the operating system and scripting was required for essentially any task that wasn’t in the narrow collection of tasks covered by commercial software. There was essentially one path (or a bunch of closely related paths) for people interested in computers.
That just isn’t the case now. There are more options available, and many (most?) of them are built on top of software that abstracts away the underlying complexity. Now a person can use technology and never need to understand how it works. Smartphones are an excellent example of this: people learn to use iOS or Android without ever knowing how they work, dealing with the abstractions instead of the underlying bits used to create them.
For example, if you want to play games, you press a button in Steam and it installs. If you want to stream your gaming session to millions of people, you install OBS and enter your Twitch credentials. You don’t need to understand graphics pipelines, codecs, networking, load balancing, or worry about creating client-side applications for your users. Everything is already created for you.
There are more options available in technology and it is completely expected that people distribute themselves amongst those options.
I’ve noticed that a lot in newer users.
Even in technical fields, the users know how to use the software but they don’t understand anything beneath it. A lot of people got into computers via smartphones, where you are essentially locked out of anything below the application layer.
Best I can do is a hollow shell piloted by the most corrupt individuals, which only exists to the extent that it channels tax dollars into donors’ pockets.
It never was free thinking.
It styled itself in that way to capture the upper-middle-class market segment of people who wanted to use technology but couldn’t be bothered to learn how.
Like every tech company, they sell you the ability to access the fruits of technology in exchange for your privacy and, in doing so, ensure you never have the motivation to learn how to do it yourself.
Want to watch a movie? Don’t worry about learning about media files, players, codecs, etc. Just install this spyware on your phone and pay us $9 $12 $15 $19.99/mo and you’ll never have to learn.
You’re already on Lemmy, so most of you understand the stakes of signing up for corporate mediated technology. Just don’t use their products.
There are thousands of different diffusion models, and not all of them are trained on copyright-protected work.
In addition, substantially transformative works are allowed to use otherwise copyrighted content under the fair use doctrine.
It’s hard to argue that a model, a file containing the trained weight matrices, is in any way substantially similar to any existing copyrighted work. TL;DR: There are no pictures of Mickey Mouse in a GGUF file.
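If you want to verify that yourself, here's a minimal sketch that just lists what's inside a model file. It assumes the `gguf` Python package published by the llama.cpp project and a local file named `model.gguf`; both are assumptions for illustration:

```python
# Peek inside a GGUF file: it contains metadata plus named weight tensors,
# not images or source documents.
from gguf import GGUFReader  # assumes `pip install gguf` (llama.cpp's reader)

reader = GGUFReader("model.gguf")  # placeholder path

for tensor in reader.tensors:
    # Each entry is just a named array of numbers with a shape.
    print(tensor.name, tensor.shape)
```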
Fair use has already been upheld in the courts concerning machine learning models trained using books.
For instance, under the precedent established in Authors Guild v. HathiTrust and upheld in Authors Guild v. Google, the US Court of Appeals for the Second Circuit held that mass digitization of a large volume of in-copyright books in order to distill and reveal new information about the books was a fair use.
And, perhaps more pragmatically, the genie is already out of the bottle. The software and weights are already available and you can train and fine-tune your own models on consumer graphics cards. No court ruling or regulation will restrain every country on the globe and every country is rapidly researching and producing generative models.
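As a rough illustration of how low the barrier already is, something like the following pulls open weights and runs them locally with the Hugging Face transformers library (the model ID below is a placeholder assumption, not a recommendation; substitute any open model small enough for your hardware):

```python
# Minimal local text generation with openly distributed weights.
from transformers import pipeline

generator = pipeline("text-generation", model="some-org/some-open-model")  # placeholder ID
print(generator("Open models are", max_new_tokens=30)[0]["generated_text"])
```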
The battle is already over; the ship has sailed.
I left a flat spot on my door so people can knock
Companies that are incompetently led will fail and companies that integrate new AI tools in a productive and useful manner will succeed.
Worrying about AI replacing coders is pointless. Anyone who writes code for a living understands the limitations that these models have. It isn’t going to replace humans for quite a long time.
Language models are hitting some hard limitations, and we’re unlikely to see improvements continue at the same pace.
Transformers, Mixture of Experts, and some training-efficiency breakthroughs all happened around the same time, which gave the impression of an AI explosion, but the current models are already taking advantage of all of those gains, and we’re seeing pretty strong diminishing returns on larger training sets.
So language models, absent a new revolutionary breakthrough, are largely as good as they’re going to get for the foreseeable future.
They’re not replacing software engineers, at best they’re slightly more advanced syntax checkers/LSPs. They may help with junior developer level tasks like refactoring or debugging… but they’re not designing applications.
I know that it’s a meme to hate on generated images, but people need to understand just how much that ship has sailed.
Getting upset at generative AI is about as absurd as getting upset at CGI special effects or digital images. Both of these things were the subject of derision when they started being widely used. CGI was seen as a second rate knockoff of “real” special effects and digital images were seen as the tool of amateur photographers with their Photoshop tools acting as a crutch in place of real photography talent.
No amount of arguments from film purists or nostalgia for the old days of puppets and models in movies was going to stop computer graphics and digital image capture and manipulation. Today those arguments seem so quaint and ignorant that most people are not even aware there was a controversy.
Digital images and computer graphics have nearly completely displaced film photography and physical model-based special effects.
Much like those technologies, generative AI isn’t going away and it’s only going to improve and become more ubiquitous.
This isn’t the hill to die on no matter how many upvotes you get.
No, you can’t find any copyrighted text inside the model’s weights.