Anyone who has performed a Google search in the past several years has likely noticed Google’s auto-complete function, which suggests search terms as the user types. For example, when a user recently entered the terms “Brann & Isaacson,” Google suggested “Brann & Isaacson law firm.” But what happens if, instead of helpful or innocuous suggestions, Google offers up something scandalous?
Google itself has consistently maintained that it has no direct control over its results or suggestions, which are generated automatically. Because Google’s suggestion algorithm reflects the frequency with which particular search terms are entered, however, it is a natural engine for spreading and legitimizing gossip. The more a rumor is repeated, re-tweeted or re-blogged, and the more it is searched, the louder it becomes and the more likely it is to surface among Google’s suggested auto-complete terms. Now a German court has ruled that Google can be held liable if its auto-complete function suggests defamatory search terms.
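For readers curious about the mechanics, the sketch below illustrates the basic idea of frequency-ranked suggestion: the more often a query is searched, the higher it ranks among completions for a matching prefix. Everything here is hypothetical and purely illustrative; Google’s actual system is proprietary and far more sophisticated.

```python
# Illustrative sketch only: a toy frequency-ranked auto-complete index.
# Names and structure are hypothetical, not Google's actual implementation.
from collections import Counter


class SuggestionIndex:
    def __init__(self):
        # Tracks how many times each full query has been searched.
        self.query_counts = Counter()

    def record_search(self, query: str) -> None:
        """Log a search; repeated queries accumulate weight."""
        self.query_counts[query.lower()] += 1

    def suggest(self, prefix: str, limit: int = 5) -> list[str]:
        """Return the most frequently searched queries matching the prefix."""
        prefix = prefix.lower()
        matches = [(q, n) for q, n in self.query_counts.items()
                   if q.startswith(prefix)]
        matches.sort(key=lambda pair: -pair[1])  # most-searched first
        return [q for q, _ in matches[:limit]]


index = SuggestionIndex()
for q in ["brann & isaacson law firm"] * 3 + ["brann & isaacson"]:
    index.record_search(q)
print(index.suggest("brann"))
# ['brann & isaacson law firm', 'brann & isaacson']
```

The point of the toy model is the legal one: nothing in this ranking distinguishes true associations from false ones, so a widely searched rumor can surface as a suggestion just as readily as a fact.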
In Decision VI ZR 269/12 (May 14, 2013), the German Federal Court of Justice (Bundesgerichtshof), Germany’s highest court of ordinary jurisdiction, considered a case brought against Google by two plaintiffs, a company and the company’s chairman, who had sued to remove auto-complete suggestions the plaintiffs considered defamatory. Users who searched for the plaintiffs’ names would see suggested search terms including “Scientology” and “Fraud.” The Court ruled that, while Google has no obligation to vet its auto-complete suggestions in advance, once it has been put on notice that a suggestion falsely implies a factual link between an individual or entity and terms with negative connotations, it must remove the offending terms and prevent similar suggestions from appearing in the future.
The case is making headlines in Germany, partly because of its potential effect on a high-profile lawsuit brought by Germany’s former first lady, who has sued Google under the same theory, alleging that Google’s suggested search terms gave credence to certain rumors about her past. At this blog, however, we will be paying particular attention to how the ruling plays out in a larger context, and whether any other jurisdictions adopt similar positions.
To date, courts in the United States have rejected attempts by private litigants to hold Google or other search engines liable for unwanted search results or suggestions. Section 230 of the Communications Decency Act gives interactive computer service providers broad immunity from liability for third-party-created content, providing that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” and that “no cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.” 47 U.S.C. §230(c)(1), (e)(3). This immunity makes it difficult to hold a search engine liable for defamation in the United States based on its search results. A woman in Wisconsin, who objects to the appearance of results and advertisements for impotency drugs in connection with her name, has instead filed a series of lawsuits against search engines alleging trademark violations and commercial misappropriation rather than defamation. Her efforts have thus far proven unsuccessful. (See, most recently, Stayart v. Google, 710 F.3d 719 (7th Cir. 2013).)