Summary

X, owned by Elon Musk, is suing to block California’s AB 2655, a law requiring large social media platforms to remove “materially deceptive content,” such as deepfakes of politicians, posted within 120 days of an election.

The lawsuit argues the law violates the First Amendment and Section 230 of the Communications Decency Act, claiming it would lead to broad censorship of political speech to avoid enforcement costs.

A similar California law, AB 2839, was blocked last month for overreaching into constitutionally protected speech, including parody and satire.

AB 2655 is set to take effect in 2025.

  • randompasta@lemmy.today · 6 days ago

    The law sounds reasonable. I’d argue that it provides additional definition to defamation laws. Deepfakes are meant to fool people into thinking the victim made a statement that they did not. That almost seems like the definition of slander.

  • tal@lemmy.today · 6 days ago

    Hmm.

    I’d think that would also affect Lemmy instance operators if it were to enter into force.

    The text and its scope would also be interesting, because if it extends to all elections, I can’t see any realistic way for, say, an instance operator in Bakersfield, California, to evaluate the truth of claims about an election in, say, Malaysia. I suspect that even for California alone, acting as an arbiter of truth would be tough to do reasonably.

    EDIT: Looking at the bill text, it probably does not currently apply, as it has a floor on the number of California users, and there aren’t yet enough users on the Threadiverse:

    (h) “Large online platform” means a public-facing internet website, web application, or digital application, including a social media platform as defined in Section 22675 of the Business and Professions Code, video sharing platform, advertising network, or search engine that had at least 1,000,000 California users during the preceding 12 months.

    It’s also interesting that traditional media apparently is not covered:

    The bill would exempt from its provisions a broadcasting station and a regularly published online newspaper, magazine, or other periodical of general circulation that satisfy specified requirements.

    It is apparently specific to elections in California.

    My guess is that it’ll probably get overturned on some First Amendment challenge, but we shall see…

    • cygnus@lemmy.ca · 6 days ago

      I don’t think it’s practical or even really possible to remove all fake content. The safer and far more effective solution IMO is to digitally sign authentic content instead, and assume that anything unsigned is fake.
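
      In practice that’s roughly what content-provenance schemes like C2PA aim at: the publisher signs the media, and clients refuse to trust anything that doesn’t verify. A minimal sketch of just the signing/verification mechanics in Python, using Ed25519 from the `cryptography` library (key distribution, revocation, and deciding which publisher keys to trust are the genuinely hard parts and are hand-waved here):

      ```python
      # Sketch only: a publisher signs content with Ed25519; consumers verify.
      # Assumes the `cryptography` library; the trust registry mapping
      # publishers to public keys is out of scope.
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import (
          Ed25519PrivateKey, Ed25519PublicKey,
      )

      # The publisher (campaign, outlet, etc.) generates a keypair once
      # and distributes the public key out of band.
      publisher_key = Ed25519PrivateKey.generate()
      public_key = publisher_key.public_key()

      media = b"...raw video bytes..."
      signature = publisher_key.sign(media)  # shipped alongside the media

      def is_authentic(media: bytes, sig: bytes, pub: Ed25519PublicKey) -> bool:
          """Trust content only if its signature verifies; unsigned or
          tampered content fails closed."""
          try:
              pub.verify(sig, media)
              return True
          except InvalidSignature:
              return False

      print(is_authentic(media, signature, public_key))         # True
      print(is_authentic(media + b"x", signature, public_key))  # False
      ```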

      • Justin@lemmy.jlh.name · 6 days ago

        Technology is not the solution to a social problem. Big tech companies have an obligation to make it more difficult for state actors and extremists to multiply obviously false claims about elections and protected minorities.

        • cygnus@lemmy.ca · 6 days ago

          That’s great in theory. How are you going to force foreign actors to do it?

          “Russia, stop spamming Facebook with all that manipulative fake garbage”

          “Oh OK you only had to ask. Sorry about that”

    • Justin@lemmy.jlh.name · 6 days ago

      The EU has already implemented a similar law, the Digital Services Act (DSA), which obliges large platforms to counter disinformation:

      Some platforms are also obliged to prevent the dissemination of harmful data, which does not necessarily have to be illegal content under European Union law or the national laws of European Union member states. This is, in particular, the case of online intermediaries that have obtained the status of Very Large Online Platform (VLOP) or Very Large Online Search Engine (VLOSE) because they have an average number of monthly active users in the Union of at least 45 million and have therefore been qualified as such by the European Commission.

      In the light of the DSA regulations, disinformation may potentially constitute primarily two systemic risks defined in the provisions of the Digital Services Act:
      a) the risk relates to an actual or foreseeable negative impact on democratic processes, civic discourse and electoral processes, as well as on public security (recital 82),
      b) the risk relates to an actual or foreseeable negative effect on the protection of public health, minors and serious negative consequences to a person’s physical and mental well-being, or on gender-based violence. Such risks may also stem from coordinated disinformation campaigns related to public health, or from online interface design that may stimulate behavioural addictions of recipients of the service (recital 83).
      In turn, according to Article 37 of the DSA, providers of very large online platforms and very large online search engines at their own expense are obliged to undergo independent audits at least once a year to assess their compliance with the obligations set out, inter alia, in point 7 above.

      https://chambers.com/articles/the-digital-services-act-dsa-and-combating-disinformation-10-key-takeaways