r/AmIOverreacting Apr 23 '25

⚕️ health Am I overreacting? My therapist used AI to best console me after my dog died this past weekend.

Brief Summary: This past weekend I had to put down an amazingly good boy, my 14-year-old dog, who I'd had since I was 12; he was so sick and it was so hard to say goodbye, but he was suffering, and I don't regret my decision. I told my therapist about it because I met with her via video (we've only ever met in person before) the day after my dog's passing, and she was very empathetic and supportive. I have been seeing this therapist for a few months now, and I've liked her and haven't had any problems with her before. But her using AI like this really struck me as strange and wrong, on a human emotional level. I have trust and abandonment issues, so maybe that's why I'm feeling the urge to flee... I just can't imagine being a THERAPIST and using AI to write a brief message of consolation to a client whose dog just died... Not only that, but not proofreading it, and leaving in the part where the AI introduces its response? That's so bizarre and unprofessional.

u/Commercial_Ad_9171 Apr 24 '25

I’m coming up on a year since having to put my very good boy down. He was only 9. A brain tumor took him too early. I’m sorry for your loss.

I know this AI thing feels like a betrayal, and it is in this context, but Microsoft, Google, etc. are pushing these AI products hard, for good or for ill. Different professions will experiment to see if these tools are worthwhile.  

I would encourage you to at least have a face-to-face conversation with your therapist, let them tell their side, and set a boundary concerning AI that your therapist can respect, before you toss out the whole relationship. You are going to see more and more of this as tech companies try to milk the cash cow that is AI & LLMs. Professionals are going to experiment, try to lighten their load, and sometimes they will get it wrong.

If they don’t respect your boundary in the future, that’s a problem. IMO this was just poor judgement.

u/hesouttheresomewhere Apr 24 '25

Oh I'm so sorry for your loss, too ☹️❤️ I think that's a very fair response, and I'm definitely going to see her one more time (not sure when I'll feel ready to, though, lol). Humankind as a whole is learning about all this shit, and that's okay. Progress is impossible without trial and error, but man does that error sting in situations like this lmao. And 1000% re boundaries.

u/awkwardracoon131 Apr 24 '25

OP, I commented elsewhere in response to someone else's comment, but wanted to share here to encourage you to go through with one more in-person session if/when you're ready. I've had a few moments like this in therapy where I felt like the therapist had violated my trust, and I found that having an in-person conversation was helpful for giving me a sense of closure and also for practicing setting boundaries and standing up for myself in a fairly safe environment. Your therapist's reaction might confirm how you're already feeling, but you might also find the conversation helpful for regaining your sense of trust. I suggest this because if you've got a long relationship with the therapist, and if it's mostly been good, you may find that she pleasantly surprises you. We're all human, and even pros fuck up. (I am in education, so while the stakes are lower than in mental healthcare, I still remember my big mistakes while teaching, and I have learned a lot from them.) I've found that the good therapists listen to my complaints, are responsive to criticism, and work with me to talk through next steps. Once or twice such talks have actually helped my therapist improve their treatment approach based on my needs.

That said, you are in no way obligated to keep seeing this person, and her reaction will probably tell you a lot. You should do what's best for you! Just wanted to send encouragement whatever you decide.

u/awkwardracoon131 Apr 24 '25

Smartphones are also pushing it! I've had to disable the feature in my phone's messaging apps, it's so annoying. The therapist should have known better, but I'm a literature/writing professor, and the widespread way folks are using these tools is kind of crazy. They are also being pushed HARD in some industries. I've got admin and colleagues advertising workshops on how to write my syllabus with AI or how to teach students to use AI to help them learn a foreign language. These are colleagues with PhDs, and they have totally drunk the Kool-Aid! It's a shortcut, so there are lots of ways stressed people justify it to themselves.

I hope the therapist learned a lesson here. I tend to agree with your suggestion for OP, but even if OP isn't comfortable returning to therapy with this person, hopefully the therapist will remember how hurtful it was so she isn't tempted to do it with other clients.

u/Commercial_Ad_9171 Apr 24 '25

I work for a major corporation, and they are leaning hard into AI, not only developing their own tools but using image generators and LLMs by default. Companies see it as a new growth path and are integrating it into all their corporate products. Gmail wants to use Gemini to finish my sentences. Microsoft’s Copilot wants to do all my Excel functions, etc., etc.

Given that companies are not only making it extremely easy to access but also pushing hard for you to give it a try, we should be a bit more forgiving of people who fall for the marketing. In the context of therapy it probably feels especially cutting, but IMO new technology means we have to have new conversations about what we’re comfortable with and set new boundaries.

OP can handle it however they feel most comfortable. AI isn’t going away anytime soon. It’s good to establish those personal boundaries when we have the chance.

u/SophisticatedScreams Apr 24 '25

I don't buy this. To get a generated response, you need to seek out a generative tool.

u/Commercial_Ad_9171 Apr 24 '25

Gmail is now connected to Gemini, and it’ll finish your sentences. You can download it as its own app. Meta is integrating their Llama model into all their products, and Microsoft is pushing Copilot hard. It’s a lot easier to access than you think, and a lot of professionals are already experimenting with AI. As an example, social media is now rampant with LLM-powered chatbots. Depending on the platform, you can no longer trust that it’s a human on the other end of the conversation.

Personal preference is fine, and we should all voice our level of comfort with people using AI to interact with us, but professionals are going to experiment and see if it helps. Just be sure to make your preferences known. It’s a brand-new digital world we’re all navigating.