A machine learning librarian at Hugging Face just released a dataset composed of one million Bluesky posts, complete with when they were posted and who posted them, intended for machine learning research.
Daniel van Strien posted about the dataset on Bluesky on Tuesday.
“This dataset contains 1 million public posts collected from Bluesky Social’s firehose API, intended for machine learning research and experimentation with social media data,” the dataset description says. “Each post contains text content, metadata, and information about media attachments and reply relationships.”
The data isn’t anonymous. In the dataset, each post is listed alongside its author’s decentralized identifier, or DID; van Strien also made a search tool for finding users by their DID and published it on Hugging Face. A quick skim through the first few hundred of the million posts shows people doing normal types of Bluesky posting—arguing about politics, talking about concerts, saying stuff like “The cat is gay” and “When’s the last time yall had Boston baked beans?”—but the dataset has swept up a lot of adult content, too.
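For anyone who wants to see what’s actually in there, here is a minimal sketch of pulling the data down with the Hugging Face `datasets` library. The repo ID and column layout below are assumptions, not the confirmed names; check the dataset card on van Strien’s Hugging Face profile for the real ones.

```python
# Minimal sketch: load the dataset and skim a few rows.
# NOTE: the repo ID below is a placeholder/assumption -- substitute the
# actual dataset ID from the Hugging Face dataset card.
from datasets import load_dataset

ds = load_dataset("danielvanstrien/bluesky-1m-posts", split="train")  # assumed repo ID

# Per the dataset description quoted above, each row should carry the post
# text, the author's DID, timestamps, and reply/media metadata.
for post in ds.select(range(5)):
    print(post)
```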
pretty sure that isn’t legal unless the Bluesky TOS allows for this
Either way I’m still glad I don’t use Bluesky
Legality hasn’t stopped AI training in the past. I’d say they beg forgiveness instead of asking permission, but they don’t even do that lol
From the Bluesky TOS:
…
So looks like the Bluesky TOS simply doesn’t apply. Create a developer application and give it whatever training-friendly TOS you want.
It looks like they’re considering adding some equivalent of a robots.txt to express consent or non-consent for posts on ATProto, but of course, as they say:
“Bluesky won’t be able to enforce this consent outside of our systems. It will be up to outside developers to respect these settings.”
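In practice, “respecting these settings” would just mean scrapers voluntarily filtering themselves, something like the sketch below. The preference lookup and the `allow_ml_training` field are invented for illustration, since the ATProto consent mechanism is still only a proposal.

```python
# Hypothetical sketch only: ATProto's consent signal is still a proposal, so
# get_account_preferences() and the "allow_ml_training" field are made up
# here purely to show what voluntary compliance could look like.

def get_account_preferences(did: str) -> dict:
    """Placeholder: in reality this would fetch the user's (hypothetical)
    consent record from their account; here it just returns an empty dict."""
    return {}

def keep_for_training(post: dict) -> bool:
    """Exclude posts whose authors opted out -- assuming scrapers bother to check."""
    prefs = get_account_preferences(post["author_did"])  # assumed field name
    return prefs.get("allow_ml_training", False)  # treat "unspecified" as a no
```

The default-to-exclude behavior is the optimistic version; as the quote says, nothing outside Bluesky’s own systems forces a scraper to run this check at all.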
There are definitely bots mining fediverse content as well. When the Reddit exodus was ongoing, there were entire Lemmy instances with no users but bots. Not posting or reposting, just…watching and waiting, I guess.
Not that it’s much consolation; it’s just better to assume that nowhere is safe from being mined for AI training.
But why would you need bots to scrape data? Wouldn’t a script just do fine?
I think the worry is that they’re capable of doing to Lemmy what they did to Reddit: regurgitating scraped content or posting astroturfed content while appearing to be authentic users.
honestly that’s totally fair
The fediverse is an elegant solution. How do you stop people from monetizing your post history? You give it away for free.
GDPR has entered the chat