The "is this AI?" fatigue is very grating, but it seems inescapable even if you're not scrolling algorithmic feeds. It permeates group chats, DMs, and even personal blogs. I have no idea how we solve this!
Ironically, it seems that some of the more "anti-AI" people I know are more likely to re-share AI-generated content without realising it, because they aren't keeping up with what today's AI output looks like.
I believe we will soon live in a future where content is fake by default and we validate authenticity by looking at the reputation of the source. Each time we read, listen to, or see new content, we will think: "Is this coming from a trustworthy source? If not, I won't believe a single word of it."
In this context, the more fake news and content we see, the better, because it will only get us there faster.
We are there already
Yes, but it's not common knowledge. The content disaster needs to be wide and strong, so that the impact is not negligible for most of the people.
Once AI gets good enough, we will be able to tailor it. It (the agent we personally use) will tell us what is and isn't AI-generated, if we wish. But until then, it's a disaster zone. A holocaust for integrity lol.
If you think this is naive and optimistic, ask yourself: what more valuable technology could there be than an AI agent that's legitimately accurate? It will replace search; it will replace GUIs. Just tell it what to do.
AI will always be bad at distinguishing AI from not-AI. If AI gets better, distinguishing gets harder.
It will be able to tell what ISN'T AI, and what is sloppy or cheap AI meant to make a quick buck.
All content and websites will need third-party verification stamps, like baseball cards get, or non-GMO veggies, humanely raised meat, "no animals were harmed in the making of this movie".
We already have greenwashing. Stamps will be pretty much useless.
> What finally will break people's brains (and I extrapolate that from my brain) is the decision fatigue that is growing, that we now have to figure out if a funny cat video is real...
Nah. People seem to forget social media has been fake for far longer than AI: sometimes Photoshop or editing, sometimes just deliberately mis-attributing a real photo for clicks. Heck, there's a whole "fake Asian videos" subreddit -- funny videos that, in another light, are brilliant sketch comedy, but that always portray themselves as real. Not to mention humans consume boatloads of knowingly fake things: movies, TV shows, cartoons, artwork, etc.
Oddly enough, the rise of AI may flip one of the most annoying things about social media: all the fake stuff that people think is real. A small subset of people (like myself) are less annoyed by the fake content and more annoyed by the people falling for it. Take those "fake Asian videos": I've seen a few that were well written, well acted, and even well filmed. If they were a comedy show on Netflix, they would be hilarious; my distaste is that they pretend to be real, and people often believe they are.
In the short term this is going to be way worse, as even discriminating people can't tell if something is fake. But the light at the end of the tunnel is when it hits such saturation that anything real is in the minority, everyone assumes everything is fake, and people keep on consuming anyway.
Also, if people start leaving Meta because of AI slop, Meta will simply figure out the amount of AI slop that maximizes engagement.
> So I prompted my brain
Is there anyone for whom this phrasing has a clarity advantage over "So I asked myself..."/"So I thought to myself..." ?
The people who reply to every post on the internet with "I asked ChatGPT and it said..."
People are reading human-generated text, blogs, and artisanal content more than ever before.
The whole point of consuming art is to appreciate the work that went into creating it. If something is generated without much effort, then what’s the point of looking at it?
Sure, LLMs can generate all the text and paintings in the world, but who’s knowingly consuming that? It’s garbage content, and people are already tired of this crap. That’s why I think writing blogs, poetry, and creating original work is more important than ever.
I agree, it is always good to be openly hostile to the new boss!
How did this get on the front page? Are there genuine users that feel this rambling is interesting or adds anything useful to what has already been said?
Yes, in a way. The rampant exaggeration of the usefulness of LLMs and the fact that clearly not enough "people in tech" are saying no to their managers stuffing every single product with something "AI"-ish means we definitely need to talk about it.
Ah, neat. Now it's the employees who are supposed to swim against the stream and jeopardize their jobs. I love naive activism.
If all those who know better, stay silent, then who will speak up in the end?
(Also, HN does not just consist of powerless employees who will be fired if they dare to voice concern.)
I liked that it was written by a human.
If that is the single criterion, I guess we need a new word: ape slop, maybe?
How do you know?
vibes
Okay, then read it to me.
AI bad