Yep, and even worse. Lemmy has absolutely NO controls for quality and minimal moderation tools or capabilities. It’s in a much worse position than Reddit.
If it’s not already happening (And I think it is), it will.
What tools are missing to moderate a LLM bot?
I am a mod: I receive a report from a user, check it, and ban. All done with the current tools we have.
This doesn’t scale.
This is how we did it on Reddit too.
That’s all manual effort currently. Someone has to report a problematic post, you have to manually look at it, manually decide if it should be removed, and manually remove it. I’m not the biggest fan of automatically removing content, but when someone posts 500 posts all at once, that manual effort is a pain.
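Even something as simple as a rate check on new posts would catch the 500-at-once case before it turns into 500 separate reports. Rough sketch of the idea only; the threshold, window, and data shapes here are made up for illustration, not anything Lemmy actually ships:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Illustrative flood check: flag any account that exceeds a post-rate
# threshold so a mod can review (or auto-hide) the whole batch at once.
MAX_POSTS = 20                  # assumed threshold
WINDOW = timedelta(minutes=10)  # assumed window

recent_posts = defaultdict(deque)  # author -> timestamps of recent posts

def record_post(author: str, posted_at: datetime) -> bool:
    """Record a new post; return True if the author looks like a flood."""
    posts = recent_posts[author]
    posts.append(posted_at)
    # Drop timestamps that have fallen outside the window.
    while posts and posted_at - posts[0] > WINDOW:
        posts.popleft()
    return len(posts) > MAX_POSTS

# Example: a burst of 500 posts trips the check almost immediately,
# instead of generating hundreds of reports to handle by hand.
now = datetime.now()
for i in range(500):
    if record_post("spam_account", now + timedelta(seconds=i)):
        print(f"flagged after {i + 1} posts")
        break
```

A mod would still make the final call; the point is just that the queue shows one flagged account instead of 500 individual posts.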