Summary
X, owned by Elon Musk, is suing to block California’s AB 2655, a law requiring social media platforms to remove “materially deceptive content” like deepfakes about politicians within 120 days of an election.
The lawsuit argues the law violates the First Amendment and Section 230 of the Communications Decency Act, claiming it would lead to broad censorship of political speech to avoid enforcement costs.
A similar California law, AB 2839, was blocked last month for overreaching into constitutionally protected speech, including parody and satire.
AB 2655 is set to take effect in 2025.
The law sounds reasonable. I’d argue that it provides additional definition to defamation laws. Deepfakes are meant to fool people into thinking the victim made a statement that they did not. That almost seems like the definition of slander.
Hmm.
I’d think it would also affect Lemmy instance operators if it entered into force.
The text and its scope would also be interesting, because I can’t see a practical way for, say, an instance operator in Bakersfield, California, to realistically evaluate the truth of claims about an election in, say, Malaysia, if the law extends to all elections. I suspect that even for California alone, acting as an arbiter of truth would be tough to do reasonably.
EDIT: Looking at the bill text, it probably does not currently apply, as it has a floor on the number of California users, and there aren’t yet enough users on the Threadiverse:
(h) “Large online platform” means a public-facing internet website, web application, or digital application, including a social media platform as defined in Section 22675 of the Business and Professions Code, video sharing platform, advertising network, or search engine that had at least 1,000,000 California users during the preceding 12 months.
It’s also interesting that traditional media apparently is not covered:
The bill would exempt from its provisions a broadcasting station and a regularly published online newspaper, magazine, or other periodical of general circulation that satisfy specified requirements.
It is apparently specific to elections in California.
My guess is that it’ll probably get overturned on some First Amendment challenge, but we shall see…
I don’t think it’s practical or even really possible to remove all fake content. The safer and far more effective solution IMO is to digitally sign authentic content instead, and assume that anything unsigned is fake.
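The sign-everything approach above can be sketched in a few lines. This is a hypothetical illustration, not any real provenance system: production schemes (e.g. C2PA) use public-key signatures, while this sketch uses an HMAC with a stand-in key just to stay within the Python standard library. The flow is the same: content that verifies is trusted, anything unsigned or tampered with is treated as fake.

```python
import hmac
import hashlib

# Hypothetical stand-in for a publisher's signing key. A real system would
# use an asymmetric keypair so platforms only need the public half.
PUBLISHER_KEY = b"demo-secret-key"

def sign(content: bytes, key: bytes = PUBLISHER_KEY) -> str:
    """Return a hex signature the publisher attaches to the content."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def is_authentic(content: bytes, signature: str, key: bytes = PUBLISHER_KEY) -> bool:
    """Verify the signature; unsigned or altered content fails."""
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

original = b"Candidate X said Y at the debate."
sig = sign(original)

assert is_authentic(original, sig)         # signed content verifies
assert not is_authentic(b"altered", sig)   # tampered content is rejected
```

The key design point is the default: instead of trying to prove content fake, the platform only has to check a signature, and everything that fails the check is presumed inauthentic.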
Technology is not the solution to a social problem. Big tech companies have an obligation to make it more difficult for state actors and extremists to multiply obviously false claims about elections and protected minorities.
That’s great in theory. How are you going to force foreign actors to do it?
“Russia, stop spamming Facebook with all that manipulative fake garbage”
“Oh OK you only had to ask. Sorry about that”
Tweak algorithms to limit the reach of new accounts, don’t allow Russians to buy ads or blue checkmarks, and have a team of moderators who act on known bad images, known bad IP addresses, and known bad account-creation patterns. If non-profit researchers are able to uncover botnets, there’s no reason billion-dollar companies can’t. It’s a cat-and-mouse game, but it’s not acceptable for these companies to put in zero effort. These companies are better funded than the Internet Research Agency.
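The layered checks described above can be sketched as a simple rule engine. Everything here is illustrative: the blocklists, the new-account window, and the function name are assumptions for the example, not any platform’s actual policy, and real systems would use perceptual hashes and IP ranges rather than exact matches.

```python
import hashlib
from datetime import datetime, timedelta

# Hypothetical blocklists, illustrating the "known bad" checks from the
# comment above. Real deployments would use perceptual image hashes and
# CIDR ranges instead of exact values.
KNOWN_BAD_IMAGE_HASHES = {hashlib.sha256(b"fake-ballot.png").hexdigest()}
KNOWN_BAD_IPS = {"203.0.113.7"}  # address from the documentation range
NEW_ACCOUNT_WINDOW = timedelta(days=7)  # assumed reach-limiting window

def flag_upload(image_bytes: bytes, uploader_ip: str,
                account_created: datetime) -> list[str]:
    """Return the moderation rules this upload trips (empty list = clean)."""
    reasons = []
    if hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_IMAGE_HASHES:
        reasons.append("known bad image")
    if uploader_ip in KNOWN_BAD_IPS:
        reasons.append("known bad IP")
    if datetime.now() - account_created < NEW_ACCOUNT_WINDOW:
        reasons.append("new account: limit reach")
    return reasons

# A day-old account re-uploading a known image from a listed IP trips all three.
print(flag_upload(b"fake-ballot.png", "203.0.113.7",
                  datetime.now() - timedelta(days=1)))
```

None of these rules is hard to evade individually, which is the cat-and-mouse point: the checks are cheap to run and raise the attacker’s cost, even if they can’t eliminate the problem.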
How would you rate the odds of all those things happening in order for your scenario to come true?
100% if you fine them 6% of their global revenue for refusing safety recommendations by the EU and independent auditors
https://digital-strategy.ec.europa.eu/en/policies/dsa-enforcement
There’s a reason why Elon Musk is running to Trump for help after the EU started suing him for breaking this law.
https://www.politico.eu/article/donald-trump-elon-musk-x-tech-social-media-politics-elections-eu/
If Trump can’t dodge EU disinformation laws, no one can.
I hope you’re right.
The EU has already implemented a similar law making disinformation illegal.
Some platforms are also obliged to prevent the dissemination of harmful data, which does not necessarily have to be illegal content under European Union law or the national laws of European Union member states. This is, in particular, the case of online intermediaries that have obtained the status of Very Large Online Platform (VLOP) or Very Large Online Search Engine (VLOSE) because they have an average number of monthly active users in the Union of at least 45 million and have therefore been qualified as such by the European Commission.
In the light of the DSA regulations, disinformation may potentially constitute primarily two systemic risks defined in the provisions of the Digital Services Act:
a) the risk relates to an actual or foreseeable negative impact on democratic processes, civic discourse and electoral processes, as well as on public security (recital 82),
b) the risk relates to an actual or foreseeable negative effect on the protection of public health, minors and serious negative consequences to a person’s physical and mental well-being, or on gender-based violence. Such risks may also stem from coordinated disinformation campaigns related to public health, or from online interface design that may stimulate behavioural addictions of recipients of the service (recital 83).
In turn, according to Article 37 of the DSA, providers of very large online platforms and very large online search engines at their own expense are obliged to undergo independent audits at least once a year to assess their compliance with the obligations set out, inter alia, in point 7 above.