• 0 Posts
  • 93 Comments
Joined 2 years ago
Cake day: June 15th, 2023


  • The concept is real. I mean, anyone who thought “vibe coding” would be a viable career path for long enough to actually have a career was just not paying attention to reality.

    Right now it legitimately takes some expertise to get good results from AI coding. (Most people doing it now get, at best, convincingly passable results.) But the job of a “vibe coder” is much simpler than the job of a conventional programmer, and it will become increasingly simple to automate out the human’s role. It’s not like progress is going to suddenly stop. The fruit is hanging so low that it might as well be on the ground.

  • Totally agree, there’s a big hole in the current crop of applications. I think there’s not enough focus on the application side; developers want to do everything within the model itself, but LLMs are not an efficient way to store and retrieve large amounts of information.

    They’re great at taking a small to medium amount of information and formatting it in sensible ways. But that information should ideally come from an external, reliable source.
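    To make that concrete, here’s a toy sketch of that pattern (sometimes called retrieval-augmented generation). Everything here is made up for illustration: a real application would use a search index or vector store in place of the keyword lookup, and would send the assembled prompt to an actual LLM.

```python
# Toy sketch of the "external, reliable source" pattern: the application
# retrieves facts from outside the model, then asks the model only to
# phrase them. All names and data below are hypothetical.

KNOWLEDGE_BASE = {
    "capital_france": "Paris is the capital of France.",
    "boiling_point": "Water boils at 100 degrees Celsius at sea level.",
}

def retrieve(question: str) -> list[str]:
    """Toy keyword retriever; stands in for a real search/vector index."""
    words = set(question.lower().split())
    return [fact for fact in KNOWLEDGE_BASE.values()
            if words & set(fact.lower().split())]

def build_prompt(question: str) -> str:
    """Put retrieved facts into the prompt, so the model formats reliable
    information instead of recalling it from its own weights."""
    facts = "\n".join(retrieve(question)) or "(no relevant facts found)"
    return f"Answer using only these facts:\n{facts}\n\nQuestion: {question}"

print(build_prompt("What is the capital of France?"))
```

    The division of labor is the point: the external store is authoritative and easy to update, while the model only does what it’s good at, turning a small amount of information into readable text.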


  • I’d reframe this as: “Why AI is currently a shitshow”. I am optimistic about the future though. Open models you can run locally are getting better and better. Hardware is getting better and better. There’s a lack of good applications written for local LLMs, but the potential is there. They’re coming. You don’t have to eat whatever Microsoft puts in front of you. The future does not belong to Microsoft, OpenAI, etc.



  • I don’t know about Gab specifically, but yes, in general you can do that. OpenAI makes its base model available to developers via API. All of these chatbots, including the official ChatGPT instance on OpenAI’s web site, have what’s called a “system prompt”: directives and information that are not part of the base model itself. The companies usually try to hide the system prompt from users, viewing it as a kind of “secret sauce”, but in most cases the chatbots can be made to reveal it anyway.

    Anyone can plug into OpenAI’s API and make their own chatbot. I’m not sure what kind of guardrails OpenAI puts on the API, but so far I don’t think there are any techniques that are very effective in preventing misuse.

    I can’t tell you if that’s the ONLY thing that differentiates ChatGPT from this. ChatGPT is closed-source, so they could be using an entirely different model behind the scenes. But it’s similar, at least.



  • Does population decline worry you?

    “I mean, it’s super important. The population of all of the places we love is shrinking. In 50 years, 30 years, you’ll have half as many people in places that you love. Society will collapse. We have to solve it. It’s very critical.”

    Uhhh…what? There are a handful of countries with recent population decline, but most of the world is still growing even if growth rates are slowing. I’ve never seen any credible projections of catastrophic population decline.

  • In the context of video encoding, any manufactured/hallucinated detail would count as “loss”. Loss is anything that’s not in the original source. The loss you see in e.g. MPEG4 video usually looks like squiggly lines, blocky noise, or smearing. But if an AI encoder inserts a bear on a tricycle into the background, that also counts as a lossy compression artifact, even though it looks like detail rather than degradation.
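    A toy illustration of why invented detail still counts as loss. Frames are reduced to short brightness lists, and mean squared error stands in for a real quality metric like PSNR or SSIM; the numbers are made up.

```python
# "Loss" means deviation from the source: a sharp, plausible-looking
# hallucination scores worse than diffuse codec noise, because error is
# measured against the original, not against how "detailed" it looks.

def mse(original: list[int], reconstructed: list[int]) -> float:
    """Mean squared per-pixel error against the source frame."""
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)

source   = [10, 10, 10, 10, 10, 10]  # flat background in the original
blocky   = [10, 10, 12, 12, 10, 10]  # classic codec artifact: small, diffuse error
invented = [10, 10, 90, 90, 10, 10]  # AI inserts sharp "detail" absent from source

print(mse(source, blocky))
print(mse(source, invented))
```

    The blocky frame looks obviously degraded but measures close to the source; the hallucinated frame can look convincing while measuring far worse.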

    As for frame interpolation, it could definitely get better; the current algorithms are not good. But it’s unlikely to become more popular, since frame rate is generally viewed as an artistic matter rather than a technical one. For example, a lot of people hated the high frame rate in the Hobbit films even though it was natively high, shot with high-frame-rate cameras rather than produced by a kind-of-shitty algorithm after the fact.
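    A minimal sketch of why the naive approach looks bad: blending adjacent frames without motion estimation double-exposes a moving object instead of moving it. The frames here are toy 1-D brightness rows, not real video data.

```python
# Naive frame interpolation: a per-pixel weighted average of two frames.
# Because nothing estimates motion, a moving object shows up twice at
# half brightness ("ghosting") instead of appearing at an in-between position.

def blend(frame_a: list[int], frame_b: list[int], t: float = 0.5) -> list[float]:
    """Interpolated frame at time t, with no motion estimation."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

frame1 = [0, 0, 100, 0, 0]  # bright object at index 2
frame2 = [0, 0, 0, 100, 0]  # object has moved to index 3

print(blend(frame1, frame2))  # [0.0, 0.0, 50.0, 50.0, 0.0]
# Two half-bright ghosts, not one object halfway between positions.
# Better interpolators estimate motion vectors and warp pixels along them,
# which is harder and is where the current algorithms tend to fall down.
```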