Some of the recently reported ones have been traced back to Reddit shitposts. The hard thing they have to deal with is that the more authoritatively you wrote your Reddit comments, shitpost or not, the more upvotes you would get (at least that's what I felt was happening to my own writing over time as I used Reddit). That dynamic would mean Reddit is full of people who sound very, very confident in the joke positions they post about (and it's then compounded by all the upvotes).
Yeah. I was including Reddit shitposts in the “random shit they’ve shoveled into their latest and greatest LLM”. It’s nuts to me that they put basically no actual thought into the repercussions of using Reddit as a data set without anything to filter that data.
It’s beyond me why a corporation with so much to lose doesn’t have a narrow AI that simply checks whether its response is appropriate before providing it.
Won’t fix everything, but if I try this manually, ChatGPT pretty much always catches its own errors.
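Roughly what I mean, as a sketch: generate a draft, then make a second call asking the model to review its own draft before anything gets returned. I'm using the OpenAI Python SDK here purely as an example; the model name, prompts, and retry logic are all made up by me, and this obviously isn't how any vendor actually gates responses.

```python
# Minimal sketch of a "check your own answer before sending it" pass.
# Assumes the OpenAI Python SDK and an API key in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def draft_answer(question: str) -> str:
    # First pass: answer the question as usual.
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def self_check(question: str, answer: str) -> bool:
    # Second pass: ask the model to review its own draft, PASS or FAIL.
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (
                "Review the answer below for factual errors or jokes being "
                "repeated as fact. Reply with exactly PASS or FAIL.\n\n"
                f"Question: {question}\n\nAnswer: {answer}"
            ),
        }],
    )
    return resp.choices[0].message.content.strip().upper().startswith("PASS")

def answer_with_check(question: str, retries: int = 2) -> str:
    # Regenerate a few times if the checker flags the draft; otherwise bail out.
    for _ in range(retries + 1):
        answer = draft_answer(question)
        if self_check(question, answer):
            return answer
    return "I'm not confident in my answer to this one."
```

Doing it this way doubles the number of calls, which is probably part of why they don't, but it's the same trick as pasting the response back in and asking "are you sure?"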
That dynamic would mean reddit is full of people who sound very very confident in the joke position
A lot of the time, people on Reddit/Lemmy/the internet are very confident in their non-joking positions. Not sure if the same community exists here, but we had /r/confidentlyincorrect over on Reddit.
Yep. It’s gotta be hard to distinguish, because there are legitimately helpful and confidently correct people in Reddit posts too. There’s value there, but they have to figure out how to distinguish between good and shit takes.