Can You Sue for Defamation on Social Media?

Yes. Australian defamation law applies to publications on Facebook, Instagram, X (formerly Twitter), TikTok, LinkedIn, Reddit, YouTube, and every other social media platform in exactly the same way it applies to traditional media. If someone publishes a statement about you on social media that is false, identifies you, and causes or is likely to cause serious harm to your reputation, you may have a cause of action for defamation.

Social media defamation is now one of the most common categories of reputational harm in Australia. The speed and reach of social media mean that a single post can be seen by hundreds or thousands of people within hours. Unlike a spoken comment, social media leaves a permanent, searchable record — compounding the damage over time. Australian courts have consistently awarded substantial damages in social media defamation cases, reflecting the seriousness with which this form of publication is treated.

The Three Elements of a Social Media Defamation Claim

To succeed in a defamation claim based on a social media publication, a plaintiff must establish three elements:

  1. Publication. The statement was communicated to at least one person other than the plaintiff. On social media, this is almost always established: any post, comment, or message visible to others constitutes a publication.
  2. Identification. The statement identifies the plaintiff — either by name, by photograph, or by description that would enable a reasonable reader to understand the statement refers to them. Pseudonyms and coded references do not prevent identification if the plaintiff can be reasonably identified from the context.
  3. Defamatory meaning. The statement conveys an imputation that damages the plaintiff's reputation in the eyes of ordinary, reasonable members of the community. The test is objective: it does not matter what the publisher intended.

In addition, for publications after 1 July 2021, the plaintiff must establish the serious harm threshold — that the publication has caused, or is likely to cause, serious harm to the plaintiff's reputation. For corporations with fewer than 10 employees (which, along with not-for-profit corporations, are the only corporations that may sue for defamation), serious financial loss must be demonstrated.

Who Is Liable? Posters, Sharers, and Page Administrators

Multiple people may be liable for a single defamatory social media post:

The Original Poster

The person who authored and published the defamatory content is always the primary potential defendant. This includes anyone who writes a post, uploads a video, or publishes a comment on any platform.

Anyone Who Shares or Reposts

Sharing, retweeting, or reposting defamatory content is a fresh act of publication. Each person who shares the material can be independently liable. The law does not distinguish between creating content and amplifying it — republication is publication.

Page Administrators and Group Moderators

In Fairfax Media Publications Pty Ltd v Voller [2021] HCA 27, the High Court held by a 5:2 majority that media organisations that created and administered public Facebook pages were publishers of third-party comments posted on those pages. The Court held that any voluntary act of participation in the communication of defamatory content can give rise to liability, regardless of whether the defendant knew the specific content was defamatory.

This principle extends beyond media organisations. Anyone who administers a Facebook page, a community group, or a forum where defamatory comments are posted may be held liable as a publisher if their facilitation of the page amounts to a voluntary act of participation in the communication of the defamatory material.

The Digital Intermediary Defence: Section 31A

The Stage 2 uniform defamation reforms introduced a new section 31A into the Defamation Act 2005, creating a defence for digital intermediaries — including social media platforms, review sites, and forum hosts. The defence is now in force in New South Wales (from 1 July 2024), the ACT (from 1 July 2024), and Victoria (from 11 September 2024). Queensland has introduced its Defamation and Other Legislation Amendment Bill 2025 to adopt the same reforms; the remaining jurisdictions (SA, WA, NT) are yet to enact them.

Under section 31A, a digital intermediary has a defence if it can prove:

  1. It acted as a digital intermediary (provided a service enabling the publication of content by others);
  2. It had an accessible complaints mechanism (such as a webpage or email address for receiving complaints);
  3. If a written complaint was received, the intermediary took reasonable access prevention steps within seven days — meaning it removed or blocked the content; and
  4. The intermediary did not act with malice in providing the service.

The defence shifts the focus from the broad publisher liability recognised in Voller to a compliance-based framework. If the platform responds appropriately to a complaint, it may avoid liability. If it fails to act, or acts too slowly, the defence is lost.

For plaintiffs, this means that making a formal, written complaint to the platform in the correct format is now a critical procedural step. A valid complaint must include your name, a description of the defamatory material and its location, and a clear statement that you believe the post is defamatory. If the platform fails to take access prevention steps within seven days of receiving a valid complaint, it cannot rely on the section 31A defence.

Section 39A further empowers courts to order non-party digital intermediaries to remove or block defamatory content, even if the platform is not a defendant in the proceedings.

Recent Damages Awards in Social Media Cases

Australian courts have awarded substantial damages in social media defamation cases, reflecting the serious harm these publications can cause:

  • Greenwich v Latham [2024] FCA — Independent NSW MP Alex Greenwich was awarded $140,000 in damages after Upper House MP Mark Latham posted a homophobic tweet following the NSW state election. The Federal Court found the tweet conveyed an imputation of disgusting sexual conduct.
  • Makarios v Morelas [2026] FCA 156 — Archbishop Makarios was awarded $300,000 (including aggravated damages) for defamatory posts published on Facebook and YouTube by a former church member. The case demonstrated the significant damages available for online defamation campaigns.
  • Rodgers v Gooding [2023] QDC 115 — A Queensland woman was ordered to pay $279,000 in damages (including aggravated damages) for false allegations posted in a community Facebook group about her neighbours.
  • Musicki v de Tonnerre [2023] FCA 222 — A Melbourne surgeon was awarded $75,000 for a pseudonymous defamatory Google review, after obtaining preliminary discovery orders to unmask the anonymous poster.

These awards range from $75,000 for a single defamatory review to $300,000 or more for sustained online campaigns, with aggravated damages available where the publisher's conduct was improper or unjustifiable.

Platform-Specific Considerations

Facebook and Instagram (Meta)

Facebook and Instagram are the most common platforms in Australian social media defamation cases. Defamatory content can appear as posts, comments, stories, reels, or shared content. Meta's Community Standards prohibit certain forms of harmful content, but the platform's internal moderation is often slow or inconsistent. Under the Stage 2 reforms (in NSW, the ACT, and Victoria), a formal written complaint to Meta starts the seven-day window for taking access prevention steps. If Meta fails to act within that window, it cannot rely on the section 31A digital intermediary defence.

X (Twitter)

X permits pseudonymous accounts, which increases the prevalence of anonymous defamation. The Greenwich v Latham case confirms that tweets — even short ones — can carry serious defamatory imputations and ground substantial damages awards. Where the poster is anonymous, preliminary discovery may be available to compel the platform to disclose identifying information.

TikTok and YouTube

Video platforms present distinct evidentiary challenges. The defamatory meaning is conveyed through a combination of spoken words, visual imagery, captions, and context — all of which must be carefully analysed. Video content can spread rapidly through algorithmic recommendation, significantly expanding the audience and the resulting harm. Evidence preservation is critical: videos should be downloaded and screenshots taken of view counts, comments, and share metrics.

LinkedIn

LinkedIn defamation typically arises in professional or business contexts. A false post about a competitor's business practices, a former employee's conduct, or a professional's qualifications can cause direct financial harm through lost business opportunities. The professional context of LinkedIn often makes it easier to establish serious harm, as the audience is precisely the community in which the plaintiff's reputation is most valuable.

Reddit and Online Forums

Reddit and similar platforms are predominantly pseudonymous, which complicates enforcement but does not prevent it. Forum operators may face liability as publishers under Voller principles, subject to the digital intermediary defence where the Stage 2 reforms are in force. The relative anonymity of these platforms makes preliminary discovery particularly important.

Time Limits and the Single Publication Rule

A defamation claim must generally be commenced within one year from the date the material was first published. Under the single publication rule introduced by the Stage 1 reforms (effective 1 July 2021), the limitation period runs from the date the material was first uploaded or posted — not each time it is viewed or shared. This means that even if a post goes viral weeks after it was first published, the clock does not reset.

The court has a discretion to extend the limitation period to up to three years from the date of publication where it is satisfied it is just and reasonable to do so, but an extension is never guaranteed and should not be relied upon. If you become aware of defamatory social media content, prompt action is essential.

Before commencing proceedings, you must also serve a concerns notice on the publisher and allow at least 28 days for a response. This mandatory step applies to all defamation claims, including those arising from social media publications.

Practical Steps If You Have Been Defamed on Social Media

  1. Preserve the evidence immediately. Take full screenshots showing the post content, the author's name or profile, the date, the URL, and any visible engagement (likes, comments, shares). Content can be deleted or edited at any time. Record the evidence across multiple devices.
  2. Do not engage with the post. Commenting, liking, or sharing — even to dispute the content — can complicate your position and extend the chain of publication. Do not respond publicly.
  3. Report the content to the platform. Use the platform's built-in reporting mechanism. While this may or may not result in removal, it creates a contemporaneous record of your attempt to have the material taken down and, in jurisdictions where the Stage 2 reforms are in force, a formal written complaint triggers the seven-day removal clock under section 31A.
  4. Seek legal advice promptly. The one-year limitation period begins on the date of first publication. Early advice ensures you understand your options, preserve your rights, and avoid missing critical deadlines.
  5. Serve a concerns notice. If the publisher is identifiable, your lawyer will prepare and serve a concerns notice setting out the defamatory imputations and the harm caused. If the publisher is anonymous, preliminary discovery may be needed first.
  6. Consider injunctive relief. In urgent cases — particularly where the defamatory content is ongoing or escalating — interlocutory injunctions may be available to restrain further publication. The new statutory tort for serious invasions of privacy may provide an additional avenue for injunctive relief where private information has been disclosed.

How Matrix Legal Can Help

Social media defamation requires a combination of legal expertise, forensic awareness, and strategic judgment. The intersection of platform-specific complaint procedures, the digital intermediary defence, content removal strategies, and formal litigation demands careful navigation at every stage.

Mark Stanarevic and the Matrix Legal team advise on all aspects of social media and online defamation, including evidence preservation, platform complaints, concerns notices, preliminary discovery to identify anonymous posters, interlocutory injunctions, and formal proceedings for damages. Whether the defamation has occurred on Facebook, Instagram, TikTok, X, LinkedIn, or any other platform, request a free assessment or call 1800 950 627.

This article is general information and not legal advice. Defamation risk turns on the precise words used, the publication context, the audience, and available evidence.