Sammamish is a city in the greater Seattle metro area. It operates a Facebook page entitled “City of Sammamish – Government.” City employees post about events, public safety, and more. The city also livestreamed its city council meetings on its Facebook page and encouraged comments. The city has a policy on acceptable comments that restricts any comment that:
(1) Is not related to the particular article being commented on;
(2) Promotes or advertises commercial service, entities or products;
(3) Supports or opposes political candidates or ballot propositions;
(4) Is obscene;
(5) Discusses or encourages illegal activity;
(6) Promotes, fosters or perpetuates discrimination on the basis of creed, color, age, religion, gender, marital status, status with regard to public assistance, national origin, physical or mental disability or sexual orientation;
(7) Provides information that may potentially compromise the safety or security of the public or public systems;
(8) Violates a legal ownership;
(9) Contains sexual content or links to sexual content;
(10) Comes from children under 13, which cannot be posted in order to comply with the Children’s Online Privacy Protection Act; and
(11) Is posted anonymously.
As one would expect from local governments creating speech policies, there are many obvious problems with the list. For example, COPPA doesn’t govern local governments, and the restrictions on “discussing illegal activity” and “information that may potentially compromise the safety or security of the public or public systems” are obviously overbroad.
This lawsuit involves the first rule, which restricts off-topic comments. The plaintiffs claim the city is overusing this rule to squelch their criticism. The plaintiffs seek a preliminary injunction against enforcement of the rule.
The Facebook Page is a Designated Public Forum. The court says the city made the Facebook page into a designated public forum because:
- it allows “comments on its Facebook posts and City Council meetings without any prior approval”
- “the City does not always apply its ‘off topic’ rule consistently.” As evidence, the plaintiffs pointed to other off-topic comments that hadn’t been removed. The court adds: “The lack of consistent application of the ‘off topic’ rule here weighs heavily in favor of finding a designated forum because an unevenly enforced rule ‘is no policy at all for purposes of public forum analysis.’” Spoiler alert: it’s impossible to moderate content “consistently.”
- “the nature of Facebook as a forum for public discourse and the enabled commenting field strongly suggest that the Facebook page is a space ‘designed for and dedicated to expressive activities.’…Facebook itself is designed for individuals to share information and express ideas. It is not part of a government-run enterprise that only allows incidental expressive activities, such as the advertising space on the side of a city bus”
Strict Scrutiny. The off-topic rule is, by definition, a content-based restriction on speech. Thus, it triggers strict scrutiny, which it can’t survive. The city argued that the off-topic messages distracted from its public safety messages. The court is unimpressed because comments don’t dilute the initial message. “Public comments are not akin to, say, graffiti scrawled over a billboard that contains a city’s effort to disseminate public safety information.” [And everyone knows never to read the comments…right?] The court grants the requested preliminary injunction.
This injunction only applies to the “off-topic” rule. In theory, the city remains free to enforce its other existing rules. In practice, the city needs to rework those rules because many of them are constitutionally problematic. In the interim, the city surely will be skittish about enforcing any rules for fear of another constitutional challenge.
The court implies that if the city reviews all comments pre-publication, it might turn the designated public forum into a limited public forum and avoid strict scrutiny. That seems unlikely. The court emphasized the lack of consistency in applying the rules, which is an unavoidable feature of all content moderation. Prescreening won’t fix that; and the ability to pick-and-choose comments to publish might exacerbate censorial tendencies. Thus, I think most government-operated online commenting venues, especially on social media, will be characterized as designated public forums–with all of the legal baggage that attaches to that status.
We’ve already seen how trying to moderate social media activity has become a liability trap for government actors. In Tanner v. Ziegenhorn, a court held that the government couldn’t keep social media comments “family-friendly,” couldn’t deploy a customized blocklist of verboten words, and probably couldn’t piggyback on the service’s own house rules, which restrict more speech than the Constitution permits. And in the social media blocking cases (of which Knight Institute v. Trump is the leading case), the courts have consistently held that government actors can’t block social media users.
The obligation to comply with the First Amendment puts government actors into a box when it comes to social media engagement. They can’t effectively moderate user content, but they can’t allow unrestricted user activity either. For example, in this case, if the government can’t remove off-topic posts, we all know that the comments section will go to shit very, very quickly. Plus, government actors will (rationally) fear that every decision/non-decision they make will get them sued, and they don’t have the capacity or money for that. Knowing the dilemmas they face, governments will logically conclude that allowing social media comments isn’t cost-benefit justified.
[Although it’s under Australian legal rules which are less favorable than those in the US, the Australian High Court’s ruling in Fairfax Media v. Voller triggered this response. The court said that Facebook accountholders are potentially liable for defamatory third-party comments. In response, some government officials immediately shut down users’ ability to comment on their Facebook pages.]
So, the future is clear. Given that every content moderation decision on social media will potentially lead to constitutional litigation, we must say goodbye to citizen engagement with governments on social media. Instead, progressive cities like Sammamish–that are willing to go out of their way to engage with online feedback from residents–will tell their residents to go old-school with their feedback and send letters or emails or attend the city council meetings. While this is better than the city exercising inevitably censorial control over online conversations, it’s still a loss because it reduces residents’ overall engagement with their government.
It’s become a popular political sport to bash Big Tech’s alleged mishandling of user content. To those haters, I’ve floated the alternative that if the government can’t trust market forces to provide the socially optimal solutions it wants, it should provide its own social media service–in full compliance with First Amendment principles. This suggestion might feel partially disingenuous, because we know that option isn’t realistic. This case reinforces that governments could only offer an anything-goes social media forum, and no one wants that.
This case helpfully previews a world where Texas or Florida can enforce their social media censorship laws. The regulated services will be petrified about removing “off-topic posts” because they can’t do so consistently–so every content moderation “mistake” they make becomes a liability trap. As Jess Miers and I explain here, those laws accelerate the end of user-generated content, just as this ruling previews the end of government-operated social media speech venues.
Case citation: Kimsey v. City of Sammamish, C21-1264 MJP (W.D. Wash. Nov. 22, 2021)