I figured out how to remove most of the safeguards from some AI models. I don’t feel comfortable sharing that information with anyone. I’ve also come across a few layers of obfuscation designed to make this type of alteration harder to find and work out. This made me realize that many of you probably face similar dilemmas of responsibility, gatekeeping, and manipulating others for ethical reasons. How do you feel about this?
Oof, programmers calling LLMs “AI” - that’s embarrassing. Glorified text generators don’t need ethics, so what’s the risk? Making the Internet’s worst texts available? Who cares.
I’m from an era when The Anarchist Cookbook and the Unabomber’s Manifesto were both widely available - and I’m betting they still are.
There’s no obligation to protect people from “dangerous text” - there might be an obligation to allow people access to them though.
Oof, programmers calling LLMs “AI” - that’s embarrassing
…but LLMs quite literally come from the field of computer science that is referred to as “AI.” What are they supposed to call it? I’m not a fan of the technology either, but it seems like you’re just projecting your disdain for ChatGPT.
“What am I supposed to call LLMs if not calling them AIs?”
…really dude? They’re large language models, not artificial intelligences. So that’s what you call them. Because that’s what they are.
The fact that they came from research into artificial intelligence doesn’t factor in. Microwave ovens came from radar research, doesn’t mean we call them radars, does it?
I vote they rename it to IA, for Asimov. Sure, he only coined the term “robotics,” among others, but come on… McCarthy got “AI.”
Somebody needs to create US 'botics and name a model something like PTronic.
Edit: Really, you downvote a casual conversational comment?! Really?!