It’s my understanding that LLMs are thoroughly unprivate, reporting everything they do and every input back to whoever made the LLM. So, wouldn’t it be easy for whoever owns the LLM to see what it’s being used for, and to refuse service to scammers?
It’s all fun and games until the scammers use AI themselves to massively scale their operations.
Good chance it’s happening already. Worst part is that both sides eat so much power.
It’s been widely reported that it’s already happening. They use phone banks to scam, they use AI to scam. If it’s out there, it’s being used to scam.
There are on-premises LLMs that run entirely on your own hardware, so nothing is reported back to anyone.
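For anyone wondering what “on-premises” means in practice, here is a minimal sketch, assuming a local Ollama install with a model already pulled ("llama3" is just a placeholder for whatever model you use). The prompt and the response stay on your own machine, so there is no provider upstream who could monitor usage or refuse service.

    # Minimal sketch: querying a locally hosted model via Ollama's HTTP API.
    # Assumes Ollama is running on its default port (11434) and a model has
    # been pulled; "llama3" is only a placeholder name.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": "Summarize this email.", "stream": False},
    )
    # The request never leaves localhost; nothing is sent to an outside server.
    print(resp.json()["response"])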