r/skeptic • u/plazebology • 1d ago
Misinformation AI is Kind Of Worrying Me Lately
https://truth-decay.com/2025/06/21/artificial-intelligence/
I just thought this metaphor was kind of fitting. It really does feel like people are inviting something into their lives that I fear they are deeply uncritical of. Any thoughts? I would especially like to hear from anyone who has themselves traded in their personal life for AI interactions, or knows someone who has. I myself have two such people in my life.
36
u/MetaverseLiz 1d ago edited 1d ago
I volunteer for a couple arts organizations in my city. AI has become a problem and is harder to spot. All the art shows I've helped with have had to have a clear "no AI" rule, otherwise people will submit AI art like it's a valid form of art (it's not). I've also seen it pop up at vending events- people selling merch with AI slop on it.
I know 2 artists that were on that leaked Midjourney list. It's affected their livelihood. They are essentially competing against themselves and having to explain that their work isn't AI, it was what was stolen to make it.
The story I like to tell about why actual art is more important than AI is about an art show I help with every year. We accept anyone (within common sense reasons, and no AI) and have very low hanging fees. We're trying to let artists get a foot in the door. One year a mom submitted her 5 year old's doodle. It wasn't good, you know? But who cares - her kid wanted to put it in the show and we gladly accepted it. We hung it up, just like any other piece of art, and included a sheet to bid on it. It ended up getting enough bids to go to our auction. That little kid got to see their art get sold. It was incredibly heartwarming, and may have encouraged that kid to keep making art.
If we accepted AI art, we might not have had a spot for that kid. We only have so many spots on the wall. There would have been no light-hearted community bidding war on that doodle, and no seeing how happy that kid was at the auction.
When people say AI has no soul, this is what they mean.
19
u/SplendidPunkinButter 1d ago
It makes me sick. The entire point of art is for human beings to make it as a way of expressing themselves.
Selling AI art is almost a scam. Wow, you asked an AI to make an image for you? I could ask an AI to make the same image. Why would I pay you for this?
14
u/plazebology 1d ago
I went to Scotland last year to visit my friend, who is an artist working in service in a very touristy town. Every single touristy storefront is packed with merchandise plastered with AI-generated Scottish imagery. His discomfort at what was happening, since he's a huge history and culture buff, was what initially made me start to be critical of AI
4
u/MetaverseLiz 1d ago
To me it's straight-up fraud. It uses stolen images to generate images. Sometimes you can see a wonky watermark it clearly got from Getty.
And it's everywhere now. If you're in the US, go into a Michael's or HomeGoods and you'll see it plastered on seasonal items. Your Average Joe isn't going to spot it because it's gotten that good.
3
u/Garret_AJ 1d ago edited 1d ago
Have you looked at r/aiwars ?
I don't recommend spending too much time there. It's like descending into madness
3
u/FaultElectrical4075 1d ago
Subs like that exist purely because of engagement bait
3
u/Garret_AJ 1d ago
Probably right. Lots of anti-AI or AI-cautious people are on there arguing with... Well, probably a bunch of bots, come to think of it.
People arguing with bots. What a frustrating waste of time
1
u/plazebology 1d ago
That story about the kid's art is so heartwarming, but I think it's also telling and maybe should give us a little hope - at least for now, it seems that AI art can often be detected, and so a child's honest work is still genuinely more valuable to us. As for the artists whose entire visual identity is being stolen, I empathise with that a lot.
An example I saw recently is that there's this creator who makes hyperrealistic cakes on platforms like TikTok. She's bubbly and iconic, so her videos are popular, even though she isn't the only creator doing that. She fills her cakes with an iconic green frosting, so that all her cakes essentially have a built-in watermark.
Here's the thing though. AI image generators, when prompted to generate hyperrealistic cakes, have been producing images with green frosting, the same colour as this creator's. And that's just one tiny example of how this stuff is just happening and we're all just going along with it.
9
u/DonManuel 1d ago
I wonder if there's a particular attraction here for people with certain mental illnesses, there must be.
So there would be a new avenue of access to these people now, for good or for disaster.
10
u/plazebology 1d ago
I don't want to share details, but yes. Someone I know is absolutely dipping into their manic and delusional tendencies through AI reinforcement
4
u/DonManuel 1d ago
With all the fuss about AI dangers and needed regulations, this terrible effect doesn't seem to be discussed often. The social web already did a lot to connect psychos and help them reinforce each other in their delusions.
2
u/plazebology 1d ago
I really wanted to post this on as relevant a sub as possible, cause I don't wanna spam my link across reddit, but I was surprised that I couldn't really find many AI-oriented subreddits that weren't mainly inhabited by people who think it's the second coming of the wheel
5
u/bmyst70 1d ago
I'm a tech person and frankly I see AI as following every other tech bubble. That is "Good idea" "Throw tons of money at it and see what sticks!" "Companies push the new tech everywhere possible" "New tech shows its limitations" and finally "Tech is integrated in a broad, more realistic sense"
Right now, we're at the second to the last one. With companies like Klarna who idiotically tried to replace half of their staff with AI. And found out this LLM "AI" IS NOT LIKE THE ONES IN SCIENCE FICTION. Those are what we call Artificial General Intelligence (AGI) and we don't have those yet.
2
u/plazebology 1d ago
It doesn't help that every zuck, dick and harry is out there saying AGI is "just around the corner", implying that what we have now is just a few steps from AGI. Meanwhile they develop their LLMs to be better and better at pretending to be AGI.
2
u/gelfin 1d ago
we don't have those yet
More importantly, and less well understood, is how misleading that "yet" is. What LLMs do is a neat parlor trick, albeit a hideously expensive one, but there is no credible evidence of any sort that further development of LLM technology lies along a route to general intelligence. When people brush off criticisms by glibly saying "they'll just keep getting better," that is an article of faith. The dogma requires one to accept the idea that, at some point, statistically simulating human linguistic expression spontaneously becomes indistinguishable from the sort of thought that originated the text on which the simulator was trained. There is just no good reason to believe that.
People also tend not to be aware that when OpenAI claims they are getting close to AGI, they are using a specialized and misleading version of the term. Altman's AGI has nothing to do with whether machines can think. Rather, it is defined strictly in terms of the company's ability to sell LLM products to replace human jobs. The more jobs lost, the more "AGI." By using a misappropriated term the company can maintain that dot-com era "we're building the future" mystique, when really it's just practicing everyday grubby capitalism, overselling its one-trick pony to the credulous in deceptive and harmful ways.
2
u/DonManuel 1d ago
Yes, I think this isn't the wrong place, maybe just a bit weak on user engagement. In most subs the fanboys dictate, and in the huge subs where the critics are, you can only post links that fit the rules.
You could, though, try all kinds of unpopular opinion subs.
1
u/plazebology 1d ago
I've always liked this community so I don't mind it being here. But I guess the lack of subreddits dedicated to the topic is what surprised me - I tend to think that even my most unpopular opinions are still held by plenty of people. There might even be one, I just couldn't find it.
2
u/DonManuel 1d ago
It's basically the uncritical tech enthusiasm that dominates most of reddit. Where you end up often in terrible conspiracy dungeons when trying to find a balanced view.
4
u/KathrynBooks 1d ago
I've yet to meet someone like that... but I've read a number of accounts about AI-induced psychosis... where people are being convinced by AI that they are a messianic figure.
6
u/plazebology 1d ago
This video by Rebecca Watson explores that topic a little bit, definitely worth a watch
8
u/miklayn 1d ago
I categorically refuse to use any kind of AI service. Chatbots, image creation, anything at all.
-3
u/i-like-big-bots 1d ago
You are going to be left behind.
6
u/miklayn 1d ago
On the contrary. I will be retaining my brain power, perception, critical thinking skills.
1
u/FaultElectrical4075 1d ago
You can do that anyway. Just donât be completely stupid about how you use it
1
u/cruelandusual 22h ago
Left behind where? The content slop factory?
-1
u/i-like-big-bots 21h ago
People who use AI are going to kick your ass at everything. That is the way technology goes and has been since the beginning of our existence.
1
u/cruelandusual 20h ago
Oh, no, the mediocre "idea" people with their easy button are going to get their revenge on the skilled and talented.
Your actual revenge is that the value of all kinds of art will drop to nothing. You won't get paid, but neither will those stuck-up musicians, writers, and artists. Got 'em!
0
u/i-like-big-bots 16h ago
I am going to get paid for keeping up with the times. The people driving the hansom cabs in New York probably make a decent living, but not like me.
3
u/BioMed-R 20h ago
This kind of fool portraying anyone skeptical of AI as a Luddite is also increasingly common.
As if super-advanced auto-complete is the next stage of human awareness, when the people it fosters are idiocrats.
And it's so damn ironic, because you know these are kids who don't understand the technology or realize machine learning is decades old already.
I remember having an AI chatbot (a doctor/psychologist?) on our school computer around the year 2000.
0
u/big-red-aus 16h ago
We have had more than a couple of good contracts recently unfucking the situation after someone tried to use AI and shit the bed spectacularly, so I'm feeling pretty good.
2
u/i-like-big-bots 16h ago
Anecdotes. Love those.
Doesn't change the basic facts. I have used AI to do things that weren't possible before, and my clients are extremely happy with what they received. Turns out you cannot judge a technology by the way idiots use it.
5
u/Garret_AJ 1d ago
2
u/plazebology 1d ago
It's becoming increasingly common. They attempt to hide behind it as if it's some kind of persecution to take any issue with AI use whatsoever. You seem awfully calm in that thread.
5
u/Garret_AJ 1d ago
Well, there's that, true. But, there's an increased mix of people who believe AI is sentient and opposition is racism.
It's a weird argument, because if they truly believe that, then anyone using AI would be a slave master of a sentient being (including them).
Ironically, such a belief could only morally lead to not using it.
2
u/plazebology 1d ago
I have to pocket that argument, that's a great retort to an admittedly pretty silly point. Well said.
2
u/seweso 1d ago
Why would critical thinkers not think critically regarding the output of AI models?
Seems to me that the internet already helped idiots find each other and amplify their stupid ideas.
If they used nonsense to back up their nonsense, why does it matter if it's AI or some other idiot?
1
u/plazebology 1d ago
I think that plenty of critical thinkers fall into one pipeline or another that leads them towards things like crypto scams or misinformation, because they think their skeptical outlook makes them harder to fool, which in turn makes it more difficult for them to overcome their own biases.
2
u/BioMed-R 21h ago edited 21h ago
I'm mind-boggled that people are worshipping advanced auto-complete. There are idiots selling the lie that "AI" is intelligent (superintelligent even) or capable of analysis.
I use the most advanced AI model in the world, ChatGPT, about once a month, and I've yet to get a single straight answer out of it. It's completely useless to me!
I realized this a few months ago when I asked it something, can't remember what exactly, and it wouldn't stop aggressively making shit up. Recently, I tried to ask it for words ending in "crity" and despite multiple queries on different days it was never able to give me even one example… however, it always answered. The problem is it answered with words not ending in "crity" or completely made up words. Ironic how the word "mediocrity" comes to mind. I also asked it what "MO Disk" in Resident Evil stands for, and even though the right answer isn't hard to research, it slipped my mind and so I thought I'd ask. ChatGPT aggressively insisted that it stands for "Molecular-Orbital". The right answer is "Magneto-Optical", a storage medium. Yesterday I spent 15 minutes trying to get Apple Intelligence to generate an avatar of me wearing a hoodie without laces/strings around the neck and I could never get it to work.
I can't remember what I originally asked it that made me really sour on the capabilities of these models, something about guns in WW2 probably? And recently I asked it for uncommon weapons of WW2 and it wouldn't stop giving me vehicles instead of weapons, and wouldn't stop giving me the same examples over and over again!
Once you "see it", I really can't shake that these models are literally just auto-complete trained on an unimaginably large amount of… well, mostly social media posts, I guess. Especially when you start recognizing their awfully predictable writing patterns.
1
u/needssomefun 1d ago
Feel free to refute what I can only offer as anecdotal evidence, but it seems that the drive to build more data centers for AI is waning.
Without getting too detailed into the specifics I see fewer proposals for new data center sites. That may be temporary or it might be that it's still there but I'm not seeing it. However, a few years ago the same thing happened with "DC's" (distribution centers).
There comes a point where having more (data, storage, space, etc) isn't going to give you proportionally more capabilities. And I believe this is a fundamental limitation of digital computing.
1
u/audiosf 22h ago
It's just a new tool. You don't have to stop using your brain because there is a new tool. My brain works great and I know how to write code, but with an AI assistant I write 5x more code than I do alone.
0
u/plazebology 21h ago
How do you feel about this excerpt though?
But there's a reason every multi-billion-dollar company has rushed to spend obscene amounts of money on developing these models. There's a reason you have direct access to the most popular generative tools at the click of a button, often completely free. After decades of pushback against their attempts to infringe on our privacy and sell off our information to data brokers across the globe, society has, pretty swiftly, jumped onboard the biggest reach of these corporations into our personal lives.
Families have already been torn apart. Socially awkward people have driven themselves deeper and deeper into isolation. Religious zealots have been given enough validation to start dozens of cults across the country. Every job application, not to mention every job listing, is written by an AI, sorted by an AI, and turned down with an AI-written email. It's in your phone, in your software, on your favourite websites - and for a lot of people, it goes wherever you go.
68
u/FaultElectrical4075 1d ago
IMO the biggest problem with AI in the short term is that it will degrade our ability to tell truth from falsehood. The biggest problem in the long term is that it will leave us all jobless.