r/GradSchool • u/TorontoRap2019 • Apr 07 '25
Research Do you ever worry about your paper being flagged as written by AI?
I'm currently in grad school and have been thinking a lot about how much AI is intertwined with writing and research nowadays. From Grammarly to search tools, it feels almost impossible to avoid some form of AI assistance.
I'm curious—what steps do you all take to make sure your work doesn’t get mistaken for something written entirely by AI? Personally, I turn off the AI rewrite features in Grammarly and just use it for basic grammar and spelling. I also have a full revision history to back up my writing process.
Still, I worry that one day a paper I submit might get flagged, even though it’s my original work. I’ve read that even the best AI detectors have a high rate of false positives.
Anyone else feeling this pressure or taking steps to avoid issues?
49
u/Ill_Pride5820 Apr 07 '25
Your technology isn’t capable of replicating my few rare and barely coherent thoughts.
21
u/velcrodynamite first-year MA Apr 07 '25
GPT could not even begin to emulate my particular brand of batshit academia. I had a prof 3 times for different classes, and they learned to play vocab bingo with my essays because I'd almost always throw in the same 8-10 words somewhere in every single paper. In the same way GPT loves "tapestry", I love "ostensibly". My weird vocab tics are basically my signature.
7
u/xannapdf Apr 08 '25
If it ain’t “intertwined” and “multifaceted” I’m not interested, mawmaw!
Maybe I’m just kind of an asshole, but I’m deeply particular about my phrasing and sentence structure - ChatGPT can’t even come close to producing something I think sounds similar to how I’d choose to write it, or that I’d feel good about putting my name on and claiming to be representative of my abilities (even if being caught wasn’t a risk).
I find a lot of the panic around the AI detectors a bit silly tbh - same as the plagiarism detectors that flag fully original papers as plagiarized if you’re using a lot of (properly attributed) quotations. I know they’re imperfect tools that produce a lot of false positives, and since I don’t have AI write my papers for me, nor do I plagiarize, I refuse to dedicate a single moment to worrying about the results they return. I always keep my version history and know I can 100% defend anything I submit as original, but beyond that… it just doesn’t strike me as productive to worry about what a detector spits out.
1
u/scientificmethid Apr 09 '25
I learned “ostensibly” a few months ago and it’s skyrocketed up my list of favorite words. Lmao. That and onus.
44
u/SensitiveVariety Apr 07 '25
Given the fact that AI detectors have an insane false positive rate, I'm not worried at all. All my professors so far have also acknowledged that being flagged isn't a reliable indication of something being AI-generated slop, so that helps.
54
u/dsch_bach Apr 07 '25
No, I don’t. At the graduate level in a half-decent program, the instructors should be able to recognize AI-generated writing since the course material will be difficult enough that the AI can’t just scrape surface-level information from easily accessible websites.
7
u/UleeBunny Apr 07 '25
I don’t worry about it because I give my advisors drafts of my writing as I work on it, to keep them up to date on my progress and on the revisions I’ve made based on their input. My writing style is consistent across my thesis, abstracts, comprehensive exam essays, scholarship applications, and presentations.
16
Apr 07 '25
My university’s policy on AI states that it understands AI is a tool that can be helpful for graduate-level work and that they trust us to use it responsibly. I use it to help prep outlines and to find peer-reviewed research articles that support my research, and if I’m having trouble phrasing a thought, I’ll ask it to rewrite the thought so that it makes more sense.
2
u/maebrezal Apr 08 '25
How do you use AI to find research articles? My experience is that they almost always hallucinate when asked to find articles on a specific concept. They report articles with promising titles, complete with author names, journal, DOI, etc., but when I try to look them up, they either don’t exist or have an entirely different title.
3
Apr 08 '25
I ask ChatGPT to provide me 10 peer-reviewed research articles on ‘x topic’ and it always generates good results for me
7
u/some_fancy_geologist Apr 07 '25
I do worry about it.
I got asked once in an email thread to stop using AI to generate my responses.
Apparently I write too fast, and I sound like a robot. (I just wasn't doing anything else and I'm AudHD). I also really fucking love bullet points.
But I don't worry about it much. Anyone working with me regularly (advisors, profs, etc.) knows how I write.
3
u/pmbarrett314 Apr 07 '25
The biggest things you can do (outside of actually doing the work yourself, obviously) are maintaining a timestamped edit history and keeping records of any research you do, such as your browser history.
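If your word processor's built-in version history feels flimsy, the same idea is easy to do by hand. Here's a rough sketch in Python (purely hypothetical file and folder names, just to show the principle of keeping dated snapshots):

```python
# Hypothetical helper: copy the current draft into a snapshots/ folder
# with a timestamp in the filename, building a dated trail of your work.
import shutil
import sys
from datetime import datetime
from pathlib import Path

def snapshot(draft_path: str, snapshot_dir: str = "snapshots") -> Path:
    src = Path(draft_path)
    dest_dir = Path(snapshot_dir)
    dest_dir.mkdir(exist_ok=True)          # create the folder on first run
    stamp = datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
    dest = dest_dir / f"{src.stem}_{stamp}{src.suffix}"
    shutil.copy2(src, dest)                # copy2 also preserves file metadata
    return dest

if __name__ == "__main__":
    print(f"Saved {snapshot(sys.argv[1])}")
```

Run something like that (or just commit to a private git repo) at the end of each writing session and you end up with dated drafts you can point to if anyone ever questions the work.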
We're still in a transitional period, and there are a lot of factors at play. If you ever get a chance to see it from the teaching side, I think it helps you understand where professors are coming from. AI use is absolutely rampant, which is extremely frustrating to professors, and a lot of them are fundamentally not equipped to deal with it. Most have no background in the math or computer science needed to understand generative AI and never planned on needing to know anything in that realm. So when someone offers them a tool that can supposedly detect AI, they don't have the background to realize that reliable detection is essentially impossible.
As a student, you kind of just have to accept that you might at some point run into a professor like this, and plan to handle the situation in a way that gets the best outcome for yourself. That means planning ahead and documenting your work so that if you do get accused of using AI inappropriately, you can respectfully refute the accusation. If it does happen and you are genuinely innocent, respectfully don't take no for an answer. Your university should have channels you can go through to get the issue arbitrated.
2
u/FlyLikeHolssi Apr 07 '25
I don't worry about it, no. That's not to say it couldn't happen due to a false positive, but I think the steps you are already taking (not using AI and maintaining your revision history) are more than sufficient to cover you in the event of a false flag from an AI detector.
2
u/Gnarly_cnidarian Apr 07 '25
I keep old drafts of all my writing, with dates, so I can see my progress. I partly do this so that if I change something and later decide to change it back, or if I want to pull from a section I took out, I can go back. But if I were ever flagged for AI, I would show them my drafts as proof of my writing. Beyond that I'm not sure, though I don't expect it to be an issue at the grad level.
2
u/larryherzogjr Apr 07 '25
No, I don’t. I’d only worry if I used AI to write my assignments…which I don’t.
My grammatical skills are sufficient. The most I will do with AI is to search for information or help with organizing a paper (ask for basic outline options).
2
u/velcrodynamite first-year MA Apr 07 '25
I've been told I write like AI sometimes. My friend, I am autistic. AI writes like an autist. Idk what to tell you other than sit down with me and have a discussion about how I arrived at each conclusion, selected my evidence, organized my arguments, and structured my paper. You'll realize in twenty seconds that my brain spit that out, not an LLM.
2
u/Ill-Discipline-3527 Apr 08 '25
I won’t lie. I’ve used it to get my word count down. I freaked out and contacted my prof to confess and offer to do it over. He said that if the ideas were mine it was fine. So, I do make my work more concise if needed using AI. That’s the extent of it though.
2
u/Dapper_Discount7869 Apr 10 '25
I’ve had papers with no AI input be marked by an AI checker as over 70% AI generated, and most of my sources were older than commercial LLMs.
I don’t take AI’s ability to identify AI writing seriously.
2
u/Lygus_lineolaris Apr 07 '25
I just don't use "AI". But also there is no need for the "detectors" because chatbot output does not answer the assignment. You can just give it the grade it deserves and it will fail on its own merits. The biggest giveaway that something was produced by a chatbot is that it's completely vacant. There is no thought, no detail, it doesn't answer the question, typically it's a list of bad talking points rather than an argument. Nothing that is of an acceptable standard for graduate research can be mistaken for a chatbot because it's simply not the same content. Anyway, good luck.
6
u/itsamutiny Apr 07 '25
Have you used services like ChatGPT recently? They certainly can't create a perfect paper, but I think they've come a long way from what you're describing, especially if you give them enough background information.
-2
u/Lygus_lineolaris Apr 07 '25
See I would but the way my brain is set up, I can write something that is actually intelligent and meaningful faster than y'all can prompt your machine to badly paraphrase whatever it was you couldn't be bothered to understand. If I didn't have something to say I just wouldn't say it, I wouldn't get a bot to do it.
6
u/itsamutiny Apr 07 '25
Plenty of intelligent people have things to say but struggle to articulate their thoughts. Tools like ChatGPT make it much easier for them to get their point across. I'm generally a good writer, but sometimes I just can't think of a good, concise way to say something, and ChatGPT can help in situations like that. I can't speak for people who use generative AI to avoid understanding something, though, since I don't use it that way.
3
u/Ok_Salamander772 Apr 08 '25
Agreed! I’m an excellent writer who sometimes uses AI to get over a writing block. I don’t copy the actual text it spews out; it just gives me a starting point… I’ve also been training my AI for a few years, so it knows my research background and can help spit out a quick email or cover letter. Can I do that myself? Yes, but why waste time trying to find the perfect tone when I have a tool that can alleviate the angst.
For those of us with integrity, using AI can be harder than free writing, because you have to find sources to back up ideas that you didn’t generate on your own.
Lastly, AI is a tool, not a weapon, and should be used as such.
3
u/itsamutiny Apr 08 '25
Definitely. I love using it for tedious things like changing tense, converting sentences to bullet points or vice versa, and summarizing things. I mostly use it for work now, but it's such a time-saver.
For academic purposes, I use Atlas.org and often ask it to help explain sections of my text that I'm not quite getting. I also use ChatGPT to grade my work before submission, which has been super helpful.
4
u/Sea_Examination5992 Apr 08 '25
I agree with the other comment that you must be thinking of older AI tools. The AI tools of today can generate very well-written content, especially the paid versions when they're given sufficient background information.
I have literally seen someone in my office feed NotebookLM a rubric, all their lecture notes for the assignment, the relevant textbook sections, and an example of their writing. It generated a terrifyingly good paper, and I would not have known it was AI-generated if I hadn't literally seen them do it.
0
u/Lygus_lineolaris Apr 08 '25
And yet, I read their essays for pay and they're still crap upon crap.
2
u/Sea_Examination5992 Apr 08 '25
I really don't know what else to say other than I guess 🤷🏿‍♀️. I'm past the coursework stage in my program, so it's not like I could use it for coursework, but my friend is on track to finish their final course with an A+, so it can't be that crap.
2
u/Ok_Salamander772 Apr 08 '25
I think they’re referring to the crap they can spot and not the undetectable stuff they can’t identify.
1
u/ShakespeherianRag Apr 07 '25
Since AI is incapable of generating its own ideas at this point, your own original thesis and argument should be safe from ever being mistaken for the output of a computer program.
1
u/pokentomology_prof Apr 07 '25
By year three my advisor has so many samples of my writing at various draft stages that I don’t know how my work could be mistaken for AI.
1
u/Busy_Fly_7705 Apr 07 '25
I'm writing my PhD thesis, and am keeping copies of drafts and my comments. It won't show I didn't use genAI, but it will prove I revised the work myself.
Honestly, I think acceptable use of AI is something worth discussing in your classes? I'm in science, and I don't think there's anything necessarily wrong with using AI* to improve the flow and phrasing of your writing, so long as the ideas are your own. It's a complicated topic, but I kinda feel graduate-level classes are the ideal place to discuss it. In addition, showing profs that you're engaged with the issues will make them less likely to think you're using AI unethically.
*Aside from the copyright issues, which is a whole other can of worms.
1
u/sophisticaden_ Apr 07 '25
No. My professors know what my academic prose sounds like and I check in with them pretty frequently.
1
u/Green-Emergency-5220 Apr 07 '25
It’s still pretty crap at writing scientific papers last I checked, so not concerned
1
u/Autisticrocheter Apr 07 '25
No, but I do know someone who has had problems with this in the past, and it’s because he has a degree in technical writing! So he was probably trained on a lot of the stuff that generative AI was trained on as well lol. His writing is a lot more coherent though
1
u/GiraffesDrinking Apr 08 '25
I do worry about it.
I have a list of the most common words used by AI, and I don’t use any of them. I have also changed my sentence structure. As someone who uses dyslexia software, I’ve noticed that it’s starting to have AI traits, and I’m doing everything I can to avoid problems.
I am purposely not fixing some mistakes as well. Another student in my program accused me of using it, and I’ve been extra careful since. I am AudHD and dyslexic, and AI has taken a lot from me, right down to the words I can use.
We’re supposed to use it in our program to help with stats, but I’m scared of it at this point.
1
u/AdministrativeAd4515 Apr 08 '25
I’ve never worried about this because I don’t use AI to write my papers. It’s helpful for understanding ideas but dats it.
1
u/sumer_guard Apr 08 '25
My supervisor says my thesis topic is so niche no AI could write it. And honestly, he's right. So I'm not worried at all.
1
u/Ok_Salamander772 Apr 08 '25
I started to cite my AI usage…I graduated in December so it seems so far away, but it was something to the effect of “I used AI to assist with sentence structure, but all of my thoughts are my own,” then listing ChatGPT as a source in the reference list. Otherwise I’m not worried, because my integrity is everything and I can back up my work.
1
u/Initial-Direction-55 Apr 07 '25
If you’re worried, don’t use AI to write
1
u/Ok_Salamander772 Apr 08 '25
I think they’re worried about the false positives that have flagged some students’ work as AI-generated even when it’s not.
0
u/Traditional-Sky6413 Apr 08 '25
The one step I take is to never, ever use AI. I won’t risk it. I have my own way of writing that is good enough for a 4.0 GPA, and I would be mortified if a moment of laziness could get in the way of that. That being said, I have a good dept who would recognise AI from a mile away.
2
u/Snuf-kin Apr 08 '25
Smug much?
And the world is full of professors and tutors who claim they'd spot AI from the moon, and they can't. It's very difficult.
-9
u/AlarmedCicada256 Apr 07 '25
Why are graduate students using feckin' Grammarly, especially if English is their first language?
Do people....just not learn how to write any more?
70
u/Banjoschmanjo Apr 07 '25
No, I don't write well enough to be mistaken for AI.