Defamation on social media

In an age when a piece of writing on the web by an obscure person can find a large audience in a very short period of time (in other words, go viral), what happens when someone, wanting to vent, writes a defamatory comment on a social media platform and thereby causes real damage to another person’s reputation? This is a cautionary tale from British Columbia, Canada.

Here follows the usual proviso: I am not a lawyer, and this piece is not intended as a discussion or analysis of the case and the law (though I would be grateful for any corrections); rather, it is intended as a commentary from the perspective of someone interested in the use of social media and the social and cultural issues associated with it.

Ad fontes

The full text is available on the CanLII website at www.canlii.org/en/bc/bcsc/doc/2016/2016bcsc686/2016bcsc686.html. It may be quite long, but it is very well written, describing the whole situation extremely clearly (far better than many judgments I have come across), and I would encourage any reader of this piece to peruse the document in its entirety, to which I fear I am doing an injustice by writing this article. While the names and places are a matter of public record and thus can be read in the document referenced above, I have decided to anonymize them by writing plaintiff and defendant in lieu of names, and I have not followed the convention for citing cases (name v. name). The main reason is that, were I to include the names, I might unfortunately be sending search engines signals to associate the victim (as I should perhaps call the plaintiff) further with the defamatory comments, which I would like to avoid.

Context: unneighbourly conduct

This case dealt not only with defamation, to which I shall come in a moment, but also with nuisance.

There was and remains a waterfall in the defendant’s garden, which from the given description seems rather impressive:

The structure is on two levels, with water flowing along its length of approximately 20 or 25 feet, and flowing over two waterfalls. The nuisance claim is largely based on the constant noise emanating from the water cascading over the rocks, which the plaintiff and his wife testified has disrupted their sleep.

The noise of the waterfall was a constant nuisance for the plaintiff and his wife, so much so that the plaintiff sometimes had to choose between enduring the heat by keeping the windows shut to seek relief from the noise, or bearing the noise to find respite from the heat.

In addition, the defendant’s dog used to wander over to the plaintiff’s garden and defecate. There were other instances of unneighbourly conduct, such as late-night loud parties, including an occasion when a quarter stick of dynamite was let off, and parking vehicles so as to block access to the plaintiff’s driveway.

The judge awarded the plaintiff $2,000 for the nuisance with regard to the waterfall as well as a permanent injunction to prohibit the operation of the waterfall between 10 pm and 7 am, and a further award of $500 in relation to the fouling dog. Thus far it sounds like a case of a neighbour from hell. I have to note parenthetically that the defendant ‘has now gotten rid of [the dog]’ (para. 40). What exactly happened to the poor creature whose misfortune was to have such an owner?

Defamatory and false accusation of paedophilia has serious consequences

Defamatory comments were made by the defendant in this context of unneighbourly conduct. Of all the groundless comments that a person can hurl at another, one of the most – perhaps the most – serious and devastating is to suggest that the defamed is a paedophile, especially when the defamed is in the teaching profession: this is what happened in this case. The plaintiff is a middle school music teacher who actively participated in extra-curricular activities, and thanks to him the music programme at the school grew significantly.

Untrue remarks can be ruinous. There have been many high-profile cases of horrific child abuse in recent times, across the globe, involving institutions and well-known individuals in positions of trust, and naturally parents are worried and sensitive to any allegations. Schools would not wish to take any risk either.

[The plaintiff] testified that he thinks it is unlikely that he could now get a job in another school district. (para. 36)

The plaintiff’s testimony was backed up by the principal of the school.

[The principal of the middle school where the plaintiff teaches] testified, not surprisingly, that allegations of impropriety towards students, even if unsubstantiated, can end a teacher’s career. Principals would avoid hiring a teacher against whom such allegations had been made, even if unsubstantiated. He testified that if he did not know [the plaintiff], he would not hire him, based on the kind of allegations that were made against him. (para. 38)

The plaintiff is now far more cautious in how he acts.

[The plaintiff] finds he is now constantly guarded in his interactions with students; for example, whereas before he would adjust a student’s fingers on an instrument, he now avoids any physical contact to shield himself from allegations of impropriety. He has cut back on his participation in extra-curricular activities. (para. 33)

Indeed, the plaintiff

has lost his love of teaching. (para. 33)

Venting on Facebook

So what was posted where and when? The following message was posted on Facebook on 9 June 2014 by the defendant.

My neighbour has mirrors hanging outside his home…[The plaintiff referred by the first name] also videotapes my kids in the backyard 24/7! Well [the plaintiff] … Meet my mirror! (para. 21)

Furthermore, the defendant added the following.

Some of you who know me well know I’ve had a neighbour videotaping me and my family in the backyard over the summers.... Under the guise of keeping record of our dog...

Now that we have friends living with us with their 4 kids including young daughters we think it’s borderline obsessive and not normal adult behavior...

Not to mention a red flag because [the plaintiff] works for the [local] school district on top of it all!!!!

The mirrors are a minor thing... It was the videotaping as well as his request to the city [...] to force us to move our play centre out of the covenanted forest area and closer to his property line that really, really made me feel as though this man may have a more serious problem. (para. 22)

The three claims made by the defendant in her Facebook post – that i) the plaintiff had installed some sort of video surveillance system, and ii) a mirror, to monitor the defendant’s property and spy on her family, and that iii) the plaintiff had asked the municipal authorities to force the defendant to move her play centre closer to the property boundary – were all untrue.

The defendant had over 2,000 Facebook friends, and her posts were public and thus visible to a very large number of people. In the 21-hour period following the posting, the post by the defendant generated 57 other posts (it is unclear what exactly is meant by posts, as the description suggests that the post was shared by others, with comments, to their own pages): 9 by the defendant and 48 by 36 friends. The post by the defendant was up for approximately 27½ hours. In the comments / posts that were generated

[the plaintiff] was expressly referred to as a “pedo”, “creeper”, “nutter”, “freak”, “scumbag”, “peeper” and a “douchebag”. (para. 24)

As the judge had noted earlier:

In totality, the posts on the defendant’s Facebook page made by the defendant and by others, in their natural meaning and by innuendo, bore the meaning that the plaintiff was a paedophile. (para. 3)

One of her friends even took it upon himself to inform the principal of the school about the post by e-mail. A parent whose children had been taught by the plaintiff, and who considered him to be an excellent teacher, saw the post when it appeared on her Facebook news feed because some of her friends had commented on it, even though she did not know the defendant, and she went to inform the plaintiff. There were comments by acquaintances of the plaintiff indicating that they had read the posts. The plaintiff could not and does not to this day know how many people have read the post by the defendant and the subsequent comments by the defendant and her friends.

The nature of Facebook and other social media platforms is such that even though the defendant belatedly deleted the post,

the deletion apparently accomplished nothing in respect of the copies of [the defendant’s] posts that had by this time proliferated over Facebook. This would have included copies that made their way on to the Facebook pages of the defendant’s “friends” who provided comments, and potentially other “friends” of hers whose own pages were set up to receive notifications of posts made by her. Copies would also have spread to the pages of any others with whom the initial posts had been “shared” (para. 32)

What is the defendant liable for?

Who should be held liable for posting, commenting on, and sharing (with comments) a defamatory Facebook post? There were different types of Facebook posts and comments, as well as an e-mail, in this case: i) the post made by the defendant on Facebook; ii) republication of the defendant’s posts on Facebook and by e-mail; and iii) defamatory remarks by others in reaction to the defendant’s post on Facebook. The plaintiff argued, and the defendant denied, that the defendant was liable for all three ways in which the plaintiff was defamed.

The first of these three modes is in a sense obvious, at least to a layperson like me, and there does not seem to be much room for controversy: the defendant had defamed the plaintiff, directly and by innuendo, in her own words with her original post and her subsequent replies. The far more interesting elements are the other two modes of defamation: republication, sharing, and dissemination within Facebook as well as via e-mail on the one hand, and comments by others in response to the defendant’s post and comments on the other. Should the defendant be liable for those as well?

Republication on Facebook and by e-mail

Drawing on The Law of Defamation in Canada by Raymond E. Brown (Brown on Defamation), the general rule appears to be that the original publisher is not liable for republication by third parties who are free agents, over whom the original publisher has no control and for whom the original publisher is not responsible, and where the original publisher neither authorized nor intended republication. However, the original publisher may be liable when he or she has i) intended or authorized another to publish the defamatory remarks on his or her behalf; ii) published the defamatory remarks to someone who is under a moral, legal, or social duty to repeat them to another party; or iii) published the defamatory remarks in a way in which republication is natural and probable.

Of the three criteria mentioned above, the second one probably does not apply in this case. The question then is whether the defendant had authorized or intended republication, and whether such repetition was natural and probable. The judge found that there was implicit authorization for widespread dissemination due to the nature of Facebook. The judge stated thus.

In my view the nature of Facebook as a social media platform and its structure mean that anyone posting remarks to a page must appreciate that some degree of dissemination at least, and possibly widespread dissemination, may follow. This is particularly true in the case of the defendant, who had no privacy settings in place and who had more than 2,000 “friends”. The defendant must be taken to have implicitly authorized the republication of her posts. There is evidence from which widespread dissemination of the defamation through republication may be inferred. (para. 83)

And the republication was natural and probable.

All of this republication through Facebook was the natural and probable result of the defendant having posted her defamatory remarks. [The defendant] is liable for all of the republication through Facebook. (para. 84)

Remember the person who took it upon himself to send an e-mail to the school principal? The defendant was also liable for that.

In my view, the implied authorization for republication that exists as a consequence of the nature of social media, and the structure of Facebook, is not limited to republication through the social media only. [The defendant] ought to have known that her defamatory statements would spread, not only through Facebook. She is liable for republication through the email on that basis. (para. 87)

In response to the defendant’s post on Facebook, the person sending the e-mail had effectively communicated to the defendant that he was going to disseminate the post: given the chronology of comments, in which the defendant replied and posted in response to later comments, it was deemed that the defendant had authorized its repetition and was thus liable as a publisher of the e-mail.

Third-party comments on the defendant’s post

The issue of the defendant’s liability for third-party comments on her post is also very interesting. In other words, should the defendant be liable for comments that were made by her friends in response to her original post on Facebook?

There is, according to the judge after referring to a number of cases,

support for there being a test for establishing liability for third party defamatory material with three elements: 1) actual knowledge of the defamatory material posted by the third party, 2) a deliberate act that can include inaction in the face of actual knowledge, and 3) power and control over the defamatory content. After meeting these elements, it may be said that a defendant has adopted the third party defamatory material as their own. (para. 108)

The defendant had control over her own post, and as she was checking her Facebook account, as evidenced by her comments in response to others’ comments, she had an obligation to delete, if necessary, her own post and comments in their entirety when her friends started to post defamatory comments. Moreover, it was not necessary for the defendant to have actual knowledge of the defamatory comments by others, since she ought to have anticipated and known that others would be making them in the circumstances of this case.

In a case heard at the Court of Appeal in New Zealand (available on the NZLII website at www.nzlii.org/cgi-bin/sinodisp/nz/cases/NZCA/2014/461.html), two different tests, called for convenience actual knowledge and ought to know, were outlined with regard to liability for third-party defamatory comments on Facebook posts. The actual knowledge test refers to a situation where the original poster on Facebook knows about the defamatory comments yet fails to remove them within a reasonable time, whereby it is inferred that the poster takes responsibility for them and thus becomes liable. In the ought-to-know test, the original poster on Facebook does not know, but ought to know, that defamatory remarks are likely to be posted. In other words, under the ought-to-know test the original poster becomes liable as the publisher of defamatory comments as soon as they are posted in response to the original post. The Court had concerns with the ought-to-know test, among which were that it places the original poster in a worse position than under the actual knowledge test, and that it imposes strict liability on the original poster: even if its application were restricted to cases where the original poster reasonably anticipated that defamatory remarks would be posted, it would be akin to imposing liability for negligence rather than for an intentional tort, which the tort of defamation is.

As mentioned earlier, the judge in this case in Canada applied the ought-to-know test. The judge limited the imposition of liability on the basis of the ought-to-know test – thus addressing the concerns of the New Zealand Court of Appeal –

to situations where the user’s original posts are inflammatory, explicitly or implicitly inviting defamatory comment by others, or where the user thereafter becomes an active participant in the subsequent comments and replies. [The defendant] qualifies under either of those grounds. (para. 117)

The plaintiff is awarded damages for defamation

The judge did not award aggravated damages (for which malice is required), but in his reasoning he was pretty damning of the defendant.

I do not find that the claim of malice has been made out. Taken in its entirety, the evidence of the defendant’s actions – her self-centred, unneighbourly conduct; her failure to respond reasonably to the plaintiff’s various complaints, particularly regarding her dog; and her thoughtless Facebook posts – point just as much to narcissism as to animosity. Her belief that the decorative mirror hung on the exterior of the plaintiff’s house was some sort of surveillance device was simply ridiculous, speaking, to be blunt, more of stupidity than malice. (para. 131)

Self-centred, thoughtless, narcissistic, and stupid: this perhaps is an apt description of the defendant. The defendant was saved, in one sense, by her own stupidity.

I have to wonder whether the defendant ever realized the damage she had caused. The seemingly complete lack of imagination and empathy needed to consider the likely consequences of her actions would point to the qualities listed above.

Prior to trial, [the defendant] made no apology to the plaintiff or his family. She deleted the offending posts from her Facebook page, but she has made no positive form of retraction or apology. She has done nothing to counter the effect of her posts having “gone viral”. She insinuated in her cross-examination of [the plaintiff’s wife] that she and her husband were unable to apologize because the [plaintiff]s had asked them not to come onto their property; she gave no explanation as to why a letter could not have been sent. (para. 39)

The judge awarded the plaintiff general damages for defamation of $50,000 and additional punitive damages of $15,000, a total of $65,000. The amount of damages is always a talking point: it was a very expensive and stupid act on the part of the defendant, but the award is arguably inadequate given the effect on the plaintiff.

As a side note, I am surprised that the defendant chose to appear without counsel. Given the gravity of the case, I would have thought it prudent to seek good legal representation.

Lessons to be learnt?

What are the lessons that users of social media platforms, and the platforms themselves, should draw from the case? In this age of extensive use of social media, any user can find himself or herself in the position of the defamer, the defamed, the propagator of defamation, or the publisher of defamation.

Individuals and defamation

The obvious starting point may be to state that it is wise not to post defamatory things. However, given the number of people on social media platforms who are just as self-centred, thoughtless, narcissistic, and stupid as the defendant in this case, and the amount of time people spend on them, the chances are that someone somewhere is making defamatory remarks right now, fuelled by anger, frustration, or perhaps alcohol. The defamer might be thinking that he or she is just venting on a social media platform, and therefore that it cannot be serious, underestimating the dreadfulness of the consequences. That is no excuse, as this case demonstrates, particularly with regard to the damage – probably irreparable – suffered by the plaintiff.

Conflicts arising from real-life encounters, as in this case, or from online exchanges can escalate into something nasty, and very quickly so. For the defamed, the damage is real and long-lasting. In many cases, the quickest way to find out about other people is to search for them on the internet and read what is written about them. Once a piece of wrong or defamatory information spreads across different places, it might be (and would in all likelihood prove to be) impossible to erase, and it can affect the future prospects of the individual. In this case, the defendant was known to the plaintiff, and there was no doubt that the account posting the defamatory remarks belonged to the defendant. On the one hand this made the legal process more straightforward for the plaintiff, since the defendant was easily identifiable; on the other hand, because of the presumably substantial overlap between the defendant’s real-life and Facebook connections, the defamation spread to people who were local and known to the plaintiff. There will be other cases where establishing the identity of the defamer becomes extremely difficult: the defamer might be anonymous or living in another jurisdiction. Legal processes are often stressful, costly, risky, and time-consuming. And even if they are successful, the damaged reputation might not be completely repairable.

How quickly and widely defamatory remarks on social media spread depends to a very large extent on third parties. For argument’s sake, suppose a defamer is extremely active in distributing defamatory remarks: if he or she has no or very few friends or followers, and no one engages with the messages by sharing, commenting on, or liking them, then the potential and actual audience is quite limited. It is those who share, comment, like, and otherwise engage with defamatory posts who determine the gravity of the situation by amplifying the message. Even if the original message is posted by someone without friends and followers, if it is picked up by someone with a large number of friends and followers, it can go viral. It is hard to say what compels people to take immediately to social media and spread false or defamatory information, but it is a generally observable phenomenon that many people rather unthinkingly and angrily share and comment on materials of dubious veracity, all the while feeling that they are doing the right thing. There must be some sort of herd mentality at work: those who become propagators of defamatory remarks do not seem to step back for a moment and consider the accuracy of the remarks or the likely consequences of spreading them. It is republication and propagation without a sense of responsibility.

The defamer does not have to start his or her own post on his or her own profile. The defamer can ‘hijack’ other people’s posts. Let’s say two people fall out, and one of them makes a defamatory comment on a mutual acquaintance’s post that is a picture of a cat. Could the original poster of the cat picture become liable after notification by the defamed person? Would failing to delete the comment, or even the whole post, be seen as acquiescence? It certainly would put the original poster of the cat picture, a bystander, in an extremely awkward situation.

In this case, only one person was pursued as the defamer and the publisher of defamation, responsible for republication and for comments on her own post. In other cases, the defamed will have to seek redress from multiple people. While there may be a single original post that is defamatory, the dissemination of the defamatory content can involve many others, and crucially the repeated materials can end up on different social media platforms. In this case, the defendant was held liable for republication; yet unlike her own posts and comments on her posts, the defendant did not have the power to delete posts by others or third-party comments made on others’ posts. In other words, the original poster loses control over the material once the defamatory remarks are shared by third parties on third-party web entities.

Social media platforms and defamation

What is the role of social media platforms with regard to this issue?

So much content is being created on social media that it would be impossible for the platforms to police posts and comments effectively. Indeed, social media platforms do not create content themselves, even though they make money from user data and content by selling advertising: they merely provide places where users can create, share, and engage with content. As such they absolve, or at least attempt to absolve, themselves of any responsibility and liability for what their users post.

Each social media platform has policies governing user conduct and behaviour, to which the user has agreed, allowing it to delete content or remove the user from the platform. Besides content that would be clearly illegal in many jurisdictions or ethically reprehensible, social media platforms reserve the right to remove content and terminate user accounts for posting materials that may well be described as socially unacceptable, as well as for commercial spam. Social media platforms have automated means of stopping abuse, but they also rely on user reports. It is worth pondering the point that social media platforms are daily determining, in very large numbers, the acceptability or otherwise of user posts, comments, and behaviour. Many social media platforms have more users than the population of many states, and arguably their decisions have a huge influence in shaping the contours of public discourse. It is a considerable amount of power exercised by a handful of companies.

For the defamed, if the priority is to have the defamatory remarks removed from social media platforms rather than to seek damages, the platforms’ internal complaints systems would be the first port of call. Defamation, slander, and libel may not be mentioned expressly in the terms and conditions as prohibited, except under the rubric of unlawful content or behaviour, but they would also fall under other disallowed content and behaviour, such as intimidation, harassment, bullying, revealing personal information, and so forth. When reporting, it helps to be precise and, where possible, to give a full reason, and it is probably wise to keep a record of what has happened: taking screenshots or recording the browser while making the report. In really grave situations, I might be tempted to print out everything, seek legal advice, and send the request for the removal of content to the social media platform by registered mail.

The reporting systems available on social media platforms are shrouded in mystery. Social media platforms claim and exercise their discretionary powers every single day, but very little is known about how the internal process works or who operates it. What are the processes and procedures? Do they respond in reasonable time? Who assesses the reports? Are those making the decision to remove or keep content well trained and supervised? Are they cognizant of the laws of different jurisdictions? Do they have a proper compliance policy that will withstand external scrutiny? Are they able to weigh different factors and come to an appropriate and proportionate response? One would expect consistency, and if I had to guess, the workers whose unenviable task it is to face all sorts of nasty things would have been provided with detailed guidelines, but I am not aware of any clear and transparent disclosure on this matter by social media platforms to inspire great confidence. As a wild speculation, my gut feeling (and it is nothing more than that) is that social media platforms would rather remove bad materials than risk litigation or adverse publicity by keeping them, though if contentious content is too eagerly removed they are accused of suppressing freedom of speech.

Based on a number of reports I have submitted, mostly with regard to content that is sexually explicit and / or clearly spam (how I encountered such content is another long story), and thus speaking solely from personal experience, I find that Facebook does a good job of acknowledging user reports submitted via its internal system and informing the reporter about the outcome of its decision. It may not act as I had hoped, but at least I am being informed. Reports to Twitter and Google+ do not generate any meaningful acknowledgement or clear indication as to what actions, if any, they have taken. A very sketchy impression suggests that Twitter is often more reluctant to remove bad materials than Facebook and Google+. I cannot really comment on other platforms.

In this case, the plaintiff did not sue Facebook, but in future there may be attempts to sue social media platforms as publishers, especially after a reasonable effort has been made to inform them of the defamatory comments, in view of the functions of social media platforms that indubitably and by design encourage sharing, commenting on, liking, and generally spreading content, as well as the fact that they benefit financially from the content created and proliferated on their platforms. There is likely to be more scrutiny and more calls for transparency, as more conflicts inevitably arise from the increased use of social media.

Beware of the power

It is trite but nevertheless true to say that the internet has changed how information is created and spread. Now almost anyone can be an author, editor, publisher, republisher, and propagator of defamation and other offensive and objectionable materials, and potentially reach millions of people. There is no need to own physical printing presses and distribution networks: free and easily accessible social media platforms give every user a chance at 15 minutes of notoriety or an expensive court case. There are things that social media platforms could do better, by being more responsive and more transparent in how they handle various types of abuse and unlawful content. Yet perhaps the most important lesson to be learnt is that individuals now have such potent destructive power to ruin others’ lives. The challenge is to raise awareness among adults, and to provide better education for the young.