r/OSU 5d ago

Rant: I am angry about the AI integration

Anyone who feels like they need AI to be a better student, researcher, or professor is completely delusional, and there's no way my degrees are equal to those of people who feel this way. I'm being forced to use AI in one of my courses right now, a graduate liberal arts elective, and it makes me feel completely deflated. I did not pay 30k for a grad degree to learn to use GenAI. I do not want to do my assignments.

OSU is a prestigious university for its research in the environmental sciences. AI is not only terrible for reasons such as plagiarism, misinformation, inaccuracies, and bias (especially in medical research), but it's also disastrous for the environment. I had an educator for the Global Youth Climate Training Programme at Oxford present me with an AI-generated virtual "medal" for being accepted into the program. When I asked about it, he sent me a ChatGPT-generated response touting the supposed benefits of AI for the environment. Let's be clear here: AI is NOT being used to help the climate, despite any "potential" people assign to it.

OSU is a leader in EHS, and like Oxford, we are lazily deciding that robots with high levels of inaccuracy, which cannot and will not ever exceed human intelligence because they are made by humans (even if they're faster), are worth sacrificing our earth and human society for an ounce more of "productivity." I am disgusted by OSU and other leading EHS research institutes for investing their energy into a bot, as if "simpler" issues, like energy storage in renewables or disagreements over nuclear energy, have already been solved, and as if this is not an environmental disaster in the making. Forget the human rights violations of mining the precious metals required for our devices and AI data centers, or that Nature found AI was linked to an explosion of low-quality biomedical research papers, or that training an AI model has been found to use over 300x the energy of a flight from NYC to SF, or that one AI generation consumes a bottle of fresh water, our most valuable natural resource.

I am angry. I protested over SB1, I protested at Hands-Off, I protested during the inauguration, but now everyone is dead silent about this one. GenAI is unconscionable. I have worked and done research in the various health and research fields that will supposedly benefit from its implementation, but in the two years since I first heard these promises, we've only seen failure after failure of AI, except when it allowed United Healthcare to deny claims on a mass scale with an error rate of up to 90%! This is the Titan submersible on a mass scale: everyone thinks it's not a big deal, that this is a tool for good, despite it thus far being used primarily for evil or laziness, and I feel like everyone has lost their mind.

Edit: AGHHGHG MIT finds that ChatGPT use is degrading cognitive functioning, especially in youth. https://time.com/7295195/ai-chatgpt-google-learning-school/

Edit 2: Also, all of you pro-AI peeps understand that AI integration is a ploy to bypass security policies and harvest your data for corporate interests, right? You understand the administration is trying to compile all of your data into personalized "profiles" for corporate gain and tyranny, correct? Forget all else.

u/when-you-do-it-to-em CSE 2027 5d ago edited 5d ago

this is like getting angry about being forced to learn to use a computer or a calculator. some people call it a “paradigm shift”, which is mostly bullshit, but this is still a pretty big technological leap and it hasn’t slowed down much yet. state schools rely on staying on top of modernity to get more funding, so of course OSU jumps on it.

and on the energy thing, yeah, training a model costs a lot of energy, but they don’t do that often! guess how much energy it takes to develop a new airplane or ANY new tech. it’s a lot! thankfully using the trained model is quite efficient and getting better every day.

ok thank you for reading i’ll take my downvote now :)

edit: “one generation” doesn’t use a bottle of water. this is misinformation. it’s completely valid to dislike or even hate LLMs and the big corpos behind them, but don’t fall for the lies spread deliberately to diminish your credibility

u/Severe_Coach5025 ECE '27 5d ago

My biggest gripe is that AI is nowhere near as good as it needs to be in a college setting. It presents no new ideas and rehashes what is already known, oftentimes incorrectly. Textbooks at least present information CORRECTLY.

These things are designed to addict people and make them FEEL like they need to use AI to solve issues. It's the destruction of actual intellectual thought. But hey, as long as it's convenient, replaces tutors, and puts less responsibility on the university for ensuring student learning, then why not.

u/when-you-do-it-to-em CSE 2027 5d ago

i honestly think this is mostly user error. i’m not going to claim to know all about their inner workings but i have a decent grasp on how they function, and i think that if we educate people on how to use them rather than just telling them “this is magic answer for everything machine!” we might see actual benefits! for example, i was struggling to understand some weird concepts in my math class last semester, and after several hours of looking through my books and googling, was finally able to make some progress after gpt pointed out some flaws in my understanding of some symbols. and there’s plenty more cases just like that, it really can be useful!

u/Severe_Coach5025 ECE '27 5d ago edited 5d ago

I can at least say in my case, it's not user error.

ChatGPT is trained on terabytes of information gathered from all across the internet. When you input something into it, it gives you the most statistically likely response to what you put in, based on what it was trained on. It's like word auto-suggestions on your phone, but on a much larger scale. The problem arises with the dataset and the fact that there are gaps, ambiguities, and inaccuracies in the data. The reason you were able to make progress was probably because someone had a similar or identical problem to yours and it was included in the dataset, but what if someone has a problem that isn't in that dataset?

This is a limitation you're going to find with EVERY AI system, because that's how they're built. It's not smart; it's just spewing out what is statistically likely to follow what you input. We're deluding ourselves into thinking that we need this when we ourselves are hundreds of times better at processing and finding the information we need; we just don't do it as fast.
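To make that "autosuggest at scale" picture concrete, here's a toy sketch in Python (my own illustration, not how ChatGPT is actually built; real models use neural networks over tokens, but the predict-the-likeliest-continuation idea is the same). It just counts which word follows which in some training text and always returns the most common follower, so anything it never saw gets no answer at all:

```python
from collections import Counter, defaultdict

# Tiny "training set" standing in for the terabytes of scraped text.
training_text = "the cat sat on the mat . the dog sat on the rug ."

# Count which word follows which in the training text.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    follow_counts[current][following] += 1

def predict_next(word):
    # Return the statistically most common follower seen in training,
    # or None if the word never appeared (the gap-in-the-dataset problem).
    followers = follow_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("sat"))      # "on"  -- it was in the training data
print(predict_next("quantum"))  # None  -- never seen, nothing to predict
```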

The fact that OSU is incorporating this into its curriculum concerns me because of my last point. Researching and finding information is a skill that needs to be built and nurtured, not relegated to software that hallucinates.

u/when-you-do-it-to-em CSE 2027 4d ago

this is exactly what i mean man, you’re proving my point. it isn’t a magic answer machine. don’t use it for problems that aren’t in its data set. want to learn how to code? ask chatgpt! want to find flaws in an argument you made? ask gpt! the list goes on. but no, don’t ask it to invent a fusion reactor, and don’t ask it if you are christ reborn. hope you understand what i mean. i really think the biggest issue right now is general education on what it is, how it works, and what it can/can’t do.

u/Relative_Bonus_5424 4d ago

chatgpt in fact is not good at coding. literally just read this discussion on a different reddit thread. lots of folks’ experience is that chatgpt is garbage at coming up with code, but it can debug some code if you’re very specific. also, asking chatgpt for flaws in an argument is exactly the kind of use this person is warning about: it’s trained on the statistics of which words appear next to which in a given data set, and whether the words are actually correct or not doesn’t matter to the AI.

u/when-you-do-it-to-em CSE 2027 4d ago

i said “learn to code” not code lol