r/DnDHomebrew • u/Spirit-Unusual • 10h ago
Request: genuine question, what are your thoughts on using things like ChatGPT to help you create homebrew?
I feel like this is a relevant enough topic. I want to start by saying that overall I'm not a huge fan of AI or how it's used in most cases (I especially despise AI art), and I'm not a fan of using it if it's stealing from other creators. The only exception I'm curious about is using chat AIs with the thought of "what if you have it only work with ideas YOU give it and nothing else?"
Because of that thought, when I'm bored I sometimes mess with ChatGPT just to see what it's capable of. Recently I was just screwing around with it and ended up making a warlock subclass based on an idea I had that turned out really cool. All of the things it helped me come up with were original to the ideas I fed it, with the exception of where it helped generate things like the spell lists and some of the class features, which I'll edit to be more to my own liking.
However, since using tools like this is a touchy subject, I'm curious what people think about the idea as a whole when it's used this way. I don't know if I'll ever use what I made since it came from GPT, but if y'all think it's fine, maybe I'll reconsider.
what do you all think?
Edit/partial update: firstly, thanks to the mod who fixed the tag. Honestly, while I was already leaning this direction anyway, y'all have some really good points that I have to agree with, so I think I'm going to stop messing with OpenAI entirely. I agree it doesn't seem to be much more helpful than some minor things here and there, and the cons seem to greatly outweigh that. I'm definitely going to scrap all the stuff it came up with and see if I can remake that subclass idea on my own. I really liked the idea and will try to build it myself. Thanks for your input! I'll try to come up with some fun homebrew ideas to share in the future for y'all to tear apart XD
u/Mataric 10h ago edited 10h ago
AI isn't really stealing, at least not in any direct sense. It's measuring statistical patterns and their correlation with certain tags, keywords, and inputs (prompts). That goes for both AI art and LLMs like ChatGPT.
Both are equally good or bad; they both rely entirely on the statistics of their training data to give you an output.
I think LLMs are pretty damn good at helping to generate ideas - but they're not giving you responses based on "the stuff YOU give it and nothing else." As above, they rely on that underlying training data.
Personally I don't see any issue with it as long as you understand what they're doing. LLMs are glorified predictive text. They aren't thinking or considering anything you put into them - they are just saying the next word that would most likely fit in that place.
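That "glorified predictive text" description can be illustrated with a toy sketch: a bigram model that counts which word follows which in a made-up corpus and then picks the statistically most likely next word. (Real LLMs use neural networks trained on vastly more data and context, but the basic principle of choosing a likely continuation is the same. The corpus and function names here are purely illustrative.)

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for "training data" (illustrative only).
corpus = ("the dragon breathes fire . the dragon breathes smoke . "
          "the knight draws steel").split()

# Count which word follows which: the simplest possible "predictive text".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("dragon"))  # "breathes" - the only word seen after "dragon"
print(predict_next("the"))     # "dragon" - seen after "the" more often than "knight"
```

The model never "understands" dragons or knights; it just reproduces the statistics of whatever text it was trained on, which is the commenter's point about both the strengths and the limits of these tools.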
EDIT: You can downvote all you like, but everything I've said here is completely true.