Artificial Interruption

by Alexander Urbelis (alex@urbel.is)

On Moral Culpability and Algorithmic Accountability

Imagine it is February 1945.  A few weeks earlier, the Soviets liberated Auschwitz and discovered the indescribable torture and genocide that the Nazis had inflicted on Jews.  You are a general in the Royal Air Force advising Churchill.  The Nazis are retreating but still resisting.  A decisive blow to the industrial capacity of the Nazis in the city of Dresden could hasten the end of the war.  Dresden, however, is home to a great number of German civilians.  Firebombing Dresden, therefore, could result in the deaths of tens of thousands of civilians.

Between February 13 and 15, Dresden was bombed and burned, and approximately 25,000 German civilians perished.  I ask you, with so many foreseeable casualties, from a moral standpoint, does it matter what motivated the decision to bomb Dresden?

Philosophically, it should.  If the consequences of a choice are foreseeable and intentional, then the decision-maker, arguably, has more moral responsibility for the result than in a scenario where the consequence was foreseeable but unintentional.  On the basis of our World War II fact pattern, if the choice to firebomb Dresden was to exact revenge on German civilians for the atrocities that the Nazis caused, then the civilian deaths were foreseeable and intentional.  If the choice to firebomb Dresden was made for the purpose of destroying German infrastructure to expedite the end of World War II, then the civilian deaths were foreseeable but unintentional, and thus the bombing was less morally reprehensible than if revenge were the primary purpose of the bombing.

This is known as the doctrine of double effect.  The Catholic Church has used this reasoning over the ages to justify wars and actions that would, in and of themselves, violate the tenets of Christianity but which the Church believed ultimately served some greater good.

So much has been justified in the name of the greater good.  This type of reasoning, which ignores the foreseeable consequences of one's actions in the hope of achieving something admirable, appears to have been the driving force behind a great deal of the evil that arises from technology, and from social media in particular.  According to Facebook, its mission is to "Give people the power to build community and bring the world closer together."  A lofty and laudable end indeed.  But how many unintended consequences must the world endure while Facebook tries its best to "build community"?

Facebook has abused, misused, and left unguarded the personal details and data of hundreds of millions of people.  That combination of exploitation and neglect has led directly to foreign actors interfering with the democratic processes of the United States, the tipping of the scales in favor of Donald Trump in the 2016 election, the January 6 insurrection at the Capitol, and that's merely a glib review of some of the most egregious consequences felt in the United States.  Let us not forget that those outside the borders of the United States have paid a heavy price.  Indeed, online misinformation on Facebook has led to offline violence in Sri Lanka and Myanmar.  Investigating possible genocide and the displacement of 650,000 Rohingya Muslims, U.N. human rights experts claimed that Facebook was used to disseminate hate speech and exacerbate tensions.  Facebook "substantively contributed to the level of acrimony and dissension and conflict, if you will, within the public.  Hate speech is certainly of course a part of that.  As far as the Myanmar situation is concerned, social media is Facebook, and Facebook is social media," said Marzuki Darusman, chairman of the U.N. Independent International Fact-Finding Mission on Myanmar.

YouTube, a haven for misinformation and fringe groups, has also been a site of radicalization, which led to an eight-part New York Times investigative series entitled "Rabbit Hole" about one man's journey to extremism and back, all on the basis of YouTube videos.  Twitter, the platform of choice of then-President Trump, no doubt played a critical role in the call to arms of those zealots who stormed the Capitol, killed an officer of the U.S. Capitol Police, and tried to put a halt to the certification of the presidency of Joe Biden.

The world has paid a heavy price for these community-building experiments, and for years now, all of these platforms have had direct knowledge about the consequences of their actions, of their practices, and of their algorithms that compete with each other for maximum user engagement, i.e., eyeballs on their apps.  And while it cannot - and should not - be said that Facebook, YouTube, or Twitter intended to facilitate election interference, extremism, or large-scale religious violence, there are far too many instances of this sort to continue to countenance this type of moral hazard without accountability for consequences.

Moral culpability and legal liability, however, do not necessarily overlap, as a recent legal battle in the Ninth Circuit, Gonzalez v. Google, demonstrates.  This is a fascinating case with consolidated claims from several lawsuits.  In short, the families of victims of terrorist attacks in Paris, Istanbul, and San Bernardino filed claims against Google, Facebook, and Twitter, alleging that these platforms were secondarily liable for ISIS' acts of international terrorism.  Though procedurally and legally complicated, the thrust of the claims was that because terrorists used these platforms to communicate and publicize their views, which the platforms' algorithms would, in turn, affirmatively promote and recommend to other users, Google, Facebook, and Twitter should be liable, in part, for the consequences of the actions of their algorithms.

On the one hand, this seems like a reasonable position.  If someone designs a system to perform an act, and that act causes harm, then the designer of the system could reasonably be responsible for the degree of harm caused.  On the other hand, things are not so simple, in large part because of the highly politicized Section 230 of the Communications Decency Act.

Section 230 provides immunity by preventing a platform from being classified as a publisher.  In other words, simply because there may be white supremacist ideology promoted throughout Twitter, that does not mean that Twitter can be considered to be the publisher of that hateful ideology.  Fair enough so far, but if Twitter's algorithms suggest white supremacist content to budding racists or connect violent white supremacists with each other, and as a result of these connections, these violent white supremacists commit a hate crime, is that not altogether different from simply not being considered the publisher of the hateful content itself?

The barnacles of Section 230 case law expanding the notion of immunity do not consider this distinction.  In one such case, Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093 (9th Cir. 2019), a message board user looking to purchase heroin was put in touch with another user from whom he purchased heroin laced with fentanyl.  The laced heroin killed the purchaser, and his family sued, arguing that the message board used algorithms to analyze posts and recommend the connection that led to the purchaser's untimely death.  Section 230 was found to shield the message board from liability because the algorithm and other processes were content neutral, meaning that the message board did not go out of its way to treat posts about heroin differently from other content.  Similarly, in the Gonzalez case, because Google's algorithms do not treat ISIS or extremist content any differently than, e.g., content about knitting, they - and other social media platforms - enjoy immunity from lawsuits because of Section 230.
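To make the "content neutral" distinction concrete, here is a minimal, hypothetical sketch in Python - my own illustration, not anything drawn from the systems actually at issue in Dyroff or Gonzalez.  It connects users purely by how many words their posts share; nothing in it ever inspects what those words are about, which is precisely the property the courts found sufficient for immunity.

    import re
    from collections import Counter

    def tokenize(text):
        # Split a post into a set of lowercase words; no topic awareness at all.
        return set(re.findall(r"[a-z']+", text.lower()))

    def recommend_connections(user_posts, target_user, top_n=3):
        # Rank other users purely by word overlap with the target user's posts.
        target_words = set()
        for post in user_posts[target_user]:
            target_words |= tokenize(post)

        scores = Counter()
        for user, posts in user_posts.items():
            if user == target_user:
                continue
            for post in posts:
                # "Content neutral" scoring: only overlap counts, never subject matter.
                scores[user] += len(target_words & tokenize(post))

        return [user for user, score in scores.most_common(top_n) if score > 0]

    if __name__ == "__main__":
        posts = {
            "alice": ["looking to buy heroin tonight"],
            "bob":   ["i can sell heroin, message me"],
            "carol": ["new knitting patterns for winter"],
        }
        print(recommend_connections(posts, "alice"))  # ['bob']

Run against the toy data above, the same arithmetic that would pair two knitting enthusiasts pairs a would-be buyer with a seller; the neutrality of the code says nothing about the neutrality of its consequences.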

If you think this does not necessarily make sense, you are not alone.  Judge Berzon wrote an enlightening and thoughtful concurring opinion in Gonzalez.  What is noteworthy is that Judge Berzon did not disagree with the outcome of the case - she explained that she understood that the court was bound by earlier decisions, and that on the basis of those decisions, the right result was reached, but she joined "the growing chorus of voices calling for a more limited reading of the scope of Section 230 immunity."  She explained that, were she not bound by precedent, she would have held that Section 230 should protect platforms from being considered publishers only insofar as the term "publication" is commonly understood.  In fact, Judge Berzon explicitly urged her colleagues on the Ninth Circuit to reconsider whether platforms should have immunity for the actions of their own algorithms that promote or recommend content or connect users to each other.

Frankly, if Facebook, Twitter, or YouTube designs an algorithm that recommends extremist or racist content to a user based on that user's preferences, it is difficult to see how that could be considered an act of "publication."  That is a critical point because Section 230 immunity only prevents platforms from being labeled as publishers for the purposes of liability.

When a platform recommends - or even amplifies - anti-Muslim content, as Facebook did in Myanmar, that recommendation is the platform's own communication, a communication that the platform itself intended as a result of the algorithms it developed and the machine learning data on which those algorithms were trained.  What is more, these recommendations are not one-off events.  As Judges Berzon, Gould, and Katzmann all emphasized in the Gonzalez case, "[t]he cumulative effect of recommendations... envelops the user, immersing her in an entire universe filled with people, ideas, and events she may never have discovered on her own."

Therefore, whether an algorithm is content neutral or not should be of no moment if the consequence of an algorithm's operation is to expose users to extremist behaviors or ideas that could result in radicalization, acts of terrorism, hate crimes, or, in the case of the United States, an insurrection.

We would do well to remember the original purpose of Section 230, because partisan politics have deliberately misled and misinformed the public about it: the statute was meant to promote the development and evolution of the Internet by shielding computer service providers from certain claims that could arise from content passing through their networks.  I want to protect the original intent of Section 230 just as much as the next digital rights activist, but I do not support platforms relying on Section 230 immunity to shield them from the harmful effects of algorithms that they themselves designed and commercialized, and from which they have massively profited.

There is such overwhelming data about the harmful effects of algorithmically promoting content or social connections in the name of the laudable but seemingly never-actualized goal of community building that these harms are now eminently foreseeable.  And while they are not intentional, they are still harms that originate from the commercial activities of social media platforms.  As a moral matter, we can and should consider these platforms culpable for these foreseeable but unintended consequences.

I believe that for the law in this area to evolve organically and in the right direction, we should look not to legal precedent but to the same moral principles by which we have judged the actions of nations that have changed the course of history, precisely because the power and velocity of the information and communications on these platforms have already changed the course of history.  Unconstrained by case law, only Congress can do this, but both sides of the aisle are ironically too busy tweeting to the lowest common denominator of their political bases to recognize how badly bipartisan action is needed, not simply to protect U.S. interests, but to safeguard those more fragile democracies around the world that could be irrevocably harmed by a few tech giants obsessed with "community building."
