Report Harmful Content

Please follow our reporting wizard to get the information you need:

Where is the content posted?

Before you submit a report to us, it is essential that you have already reported the material to the social media service directly using their online tools (at least 48 hours ago). You can find information on how to do this by clicking on the tabs below:

Have you reported it to Facebook?

Have you reported it to LinkedIn?

Age Regulations

Under GDPR regulations, we are unable to accept information from children under the age of 13. If you are under 13 and need help removing harmful content online, please ask your parent or guardian to complete the following form on your behalf.

Continue to form

Please do not enter a link to any child sexual abuse imagery; report it here instead.

Please upload the response that you received when you submitted your report.

Have you reported it to Skype?

Have you reported it to Bing?

Have you reported it to Xbox Live?

Have you reported it to Minecraft?

Please find Skype's Terms of Use and information about how to submit a report below

Please find information below which will help you to submit a report

Information about reporting

If you have a concern about particular URLs or other information contained in search results, you may report these to Bing. Reporting a concern will not necessarily result in removal of a URL from search results. Bing limits removal of search results to a narrow set of circumstances and conditions to avoid restricting Bing users' access to relevant information.

Bing doesn't control the content that websites publish or that appears in Bing search results. To make sure content is removed from search results, your best option is to contact the webmaster for the website that published the content and request that it be deleted or removed. Even if Bing removes the URL from search results, it will continue to exist and can be discovered by going directly to the web address until the webmaster removes the content from their website.

Please find information about content Xbox prohibits and how to make a report below

Information to help you make a report on Minecraft

Please find links to Facebook's Community Standards below which will help you to submit a report

You do not have to have a Facebook account to report content on Facebook. 

Click here to find out how to report something on Facebook if you don't have an account or can't see the content

Facebook Checklist

Have you reported it to YouTube?

Have you reported it to YouTube Kids?

Have you reported it to Blogger?

Please find links to YouTube's Community Guidelines below which will help you to submit a report

Please find information on how to report inappropriate content

Please find information below which will help you to submit a report

Information about reporting

If you want to remove a photo, profile link, or webpage from Google Search results, you usually need to ask the website owner (webmaster) to remove the information.

Even if Google deletes the site or image from our search results, the webpage still exists and can be found through the URL to the site, social media sharing, or other search engines. This is why your best option is to contact the webmaster, who can remove the page entirely.

If a photo or information shows up in Google search results, it just means that the information exists on the Internet and it doesn’t mean that Google endorses it.

Please find links to Blogger's content policy below which will help you to submit a report

Have you reported it to Instagram?

Please find links to Instagram's Community Guidelines below which will help you to make a report

Instagram Checklist

Have you reported it to Snapchat?

Please find links to Snapchat's community guidelines below which will help you make a report

Snapchatters can report any safety concern within the app on their phone. However, you do not need a Snapchat account to make a report.

Click here to submit a report if you don't have an account.

Snapchat Checklist

Have you reported it to Twitter?

Please find links to the Twitter Rules below which will help you to submit a report

Twitter Checklist

Facebook's response to Bullying

Bullying happens in many places and comes in many different forms – from making statements that degrade someone's character, to posting inappropriate images, to threatening someone. We do not tolerate bullying on Facebook because we want the members of our community to feel safe and respected.

We will remove content that purposefully targets private individuals with the intention of degrading or shaming them. We recognise that bullying can be especially harmful to minors, and our policies provide heightened protection for minors because they are more vulnerable and susceptible to online bullying. In certain instances, we require the individual who is the target of bullying to report content to us before removing it.

Our Bullying Policies do not apply to public figures because we want to allow discourse, which often includes critical discussion of people who are featured in the news or who have a large public audience. Discussion of public figures nonetheless must comply with our Community Standards, and we will remove content about public figures that breaches other policies, including hate speech or credible threats.

Our Bullying Prevention Hub is a resource for teens, parents and educators seeking support and help with issues related to bullying and other conflicts. It offers step-by-step guidance, including on how to start important conversations for people being bullied, parents who have had a child being bullied or accused of bullying, and educators who have had students involved with bullying.

Click here to report bullying and harassment on Facebook

Snapchat's stance on harassment and bullying

We don’t tolerate bullying or harassment on Snapchat. Don’t Snap with the intention of making someone feel bad. If someone blocks you, it’s not okay to contact them from another account.

Click here to submit a report

Snapchat's stance on threats and violence

Never threaten to harm a person, group of people, or property. Don't post Snaps of gratuitous violence.

Click here to make a report to Snapchat

Snapchat's stance on impersonation

Don’t pretend to be someone you’re not, including your friends, celebrities, brands, or other organizations.

Click here to make a report to Snapchat

Snapchat's stance on adult content

We prohibit accounts that promote or distribute pornographic content. Breastfeeding and other depictions of nudity in non-sexual contexts are okay.

Click here to make a report to Snapchat

Snapchat's advice if you’re concerned about a user's safety

If you’re concerned about this Snapchatter’s wellbeing and feel comfortable communicating with them, please encourage them to seek help. They can use professional services such as a counsellor or therapist, talk to someone by calling a crisis hotline, or simply confide in a close friend or family member.  

Click here to share your concerns with Snapchat

Xbox Live Prohibited Content

Microsoft prohibits the following content on Xbox Live:

(Note: the list below gives just a few examples of content prohibited by the Microsoft Code of Conduct. Just because something isn't on this list doesn't mean it's OK.)

Content involving illegality. For example:

  • Gambling, piracy, child pornography, obscenity or criminal activity
  • Underage drinking, illegal drug use or socially irresponsible behaviour connected with drug use (e.g. drinking and driving)
  • Terrorism (e.g. bomb or other weapon making instructions)
  • Information that could help identity thieves (e.g. government-issued identification number)

Content that could harm or harass a person, including yourself, or an animal. For example:

  • Profane words or phrases
  • Suicide-related content
  • Negative speech (including hate speech or threats of harm) directed at people who belong to a group, including groups based on race, ethnicity, nationality, language, gender, age, disability, veteran status, religion or sexual orientation/expression
  • "Noise", which is excessive speech intended to interfere with or disrupt another person's or group's ability to enjoy a game or app on Xbox Live
  • Content showing or promoting animal abuse

Content that is controversial. For example:

  • Sexual, provocative, pornographic or adult content
  • Violent content
  • Controversial religious content
  • Anything involving notorious people or organisations
  • Anything involving sensitive events, current or historical

Content that is unauthorised. For example:

  • Images and other content you don't have permission to use
  • Illegitimately obtained videos

Content that promotes, or sounds or looks like words, phrases, puns, images or imagery that refer to any prohibited content

Skype code of conduct

Don’t do anything illegal.

Don’t engage in any activity that exploits, harms, or threatens to harm children.

Don’t send spam. Spam is unwanted or unsolicited bulk email, postings, contact requests, SMS (text messages), or instant messages.

Don’t publicly display or use the Services to share inappropriate Content or material (involving, for example, nudity, bestiality, pornography, graphic violence, or criminal activity) or Your Content or material that does not comply with local laws or regulations.

Don’t engage in activity that is false or misleading (e.g., asking for money under false pretenses, impersonating someone else, manipulating the Services to increase play count, or affect rankings, ratings, or comments) or libelous or defamatory.

Don’t circumvent any restrictions on access to or availability of the Services.

Don’t engage in activity that is harmful to you, the Services or others (e.g., transmitting viruses, stalking, communicating hate speech, or advocating violence against others).

Don’t infringe upon the rights of others (e.g., unauthorized sharing of copyrighted music or other copyrighted material, resale or other distribution of Bing maps, or photographs).

Don’t engage in activity that violates the privacy or data protection rights of others.

Don’t help others break these rules.

Enforcement. 

If you violate these Terms, we may, in our sole discretion, stop providing Services to you or we may direct Skype to close your Microsoft account or Skype account. We may also direct Skype to block delivery of a communication (like instant message) to or from the Services in an effort to enforce these Terms, or we may direct Skype to remove or refuse to publish Your Content for any reason. When investigating alleged violations of these Terms, Microsoft or Skype reserves the right to review Your Content in order to resolve the issue, and you hereby authorize such review. However, we cannot monitor the entire Services and make no attempt to do so.

How to make a report

Click here to find out about reporting abuse on Skype

How to stay safe on Minecraft in Multiplayer mode

The majority of harmful content reports in Minecraft relate to the online multiplayer part of the game. Please read Minecraft's How to Stay Safe Online guide to find out what you can do before playing online to help stay safer, and how to make a report if you receive abuse or other harmful content whilst playing the game.

YouTube's stance on harassment and cyberbullying

We want you to use YouTube without fear of being subjected to malicious harassment. In cases where harassment crosses the line into a malicious attack, it can be reported and the content will be removed. In other cases, users may be mildly annoying or petty and should simply be ignored.

Harassment may include:

  • Abusive videos, comments and messages
  • Revealing someone's personal information, including sensitive personally identifiable information such as social security numbers, passport numbers or bank account numbers
  • Maliciously recording someone without their consent
  • Deliberately posting content in order to humiliate someone
  • Making hurtful and negative comments/videos about another person
  • Unwanted sexualisation, which encompasses sexual harassment or sexual bullying in any form
  • Incitement to harass other users or creators

Click here to submit a report. Remember, you will need a Google account to do this if you don't already have one

YouTube's stance on threats

The YouTube community is important to us and we want to see it continue to flourish. To ensure that this is possible, content that makes threats of serious physical harm against a specific individual or defined group of individuals will be removed. People who threaten others may receive a strike on their account and their account may be terminated.

Click here to make a report. Remember you will need a Google account to be able to do this

YouTube's stance on Impersonation

Activities such as copying a user's channel layout, using a similar username or posing as another person in comments, emails or videos may be considered harassment. In cases where our team determines that an account was established to impersonate another channel or individual, the account will be removed. Impersonation can happen on YouTube in two ways: impersonation of a channel or an individual.

Impersonation of a channel

A user copies a channel's profile, background or text, and writes comments to make it look like somebody else's channel posted the comments. If you feel that your channel or another creator's channel is being impersonated, please visit our reporting tool.

Impersonation of an individual

A user creates a channel or video using another individual's real name, image or other personal information to deceive people into thinking they are someone else on YouTube. If you feel that you are being impersonated, report it using our webform.

What doesn't count as impersonation

Impersonation does not include channels or videos pretending to represent a business. In this case, you may want to consider submitting a legal complaint via our legal reporting forms.

YouTube's stance on vulgar language

Some language is not appropriate for younger audiences. The use of sexually explicit language or excessive swearing in your video or associated metadata may lead to your video being age-restricted.

Click here to make a report. Remember you will need a Google account in order to do this

YouTube's stance on nudity and sexual content

If a video is intended to be sexually provocative, it is less likely to be acceptable for YouTube.

What is and isn't allowed

Sexually explicit content like pornography is not allowed. Videos containing fetish content will be removed or age-restricted depending on the severity of the act in question. In most cases, violent, graphic or humiliating fetishes are not allowed to be shown on YouTube.

A video that contains nudity or other sexual content may be allowed if the primary purpose is educational, documentary, scientific or artistic, and it isn't gratuitously graphic. For example, a documentary on breast cancer would be appropriate, but posting clips out of context from the same documentary might not be. Remember that providing context in the title and description will help us and your viewers determine the primary purpose of the video.

Age-restricted content

In cases where videos do not cross the line but still contain sexual content, we may apply an age restriction so that only viewers over a certain age can view the content.

Videos containing nudity or dramatised sexual conduct may be age-restricted when the context is appropriately educational, documentary, scientific or artistic. Videos featuring individuals in minimal or revealing clothing may also be age-restricted if they're intended to be sexually provocative, but don't show explicit content.

Click here to find out how to report inappropriate content such as nudity and sexual content. Remember you will need a Google account to be able to do this

YouTube's stance on violent or graphic content

Real depictions of graphic or violent content

Increasingly, YouTube is becoming an outlet for citizen journalists, documentarians and other users to publish accounts of what is happening in their daily lives. It is inevitable that some of these videos will contain content that is violent or graphic in nature.

It's not okay to post violent or gory content that's primarily intended to be shocking, sensational or gratuitous. If a video is particularly graphic or disturbing, it should be balanced with additional context and information. If posting graphic content in a news, documentary, scientific or artistic context, please be mindful to provide enough information to help people understand what's going on. In some cases, content may be so violent or shocking that no amount of context will allow that content to remain on our platforms. Lastly, don't encourage others to commit specific acts of violence.

If the violence shown in your video is particularly graphic, please make sure to post as much information as possible in the title and metadata to help viewers understand what they are seeing. Providing documentary or educational context can help the viewer, and our reviewers, understand why they may be seeing the disturbing content.

For instance, a video by a citizen journalist which captures footage of protesters being beaten, uploaded with relevant information (date, location, context, etc.), would probably be allowed. However, posting the same footage without contextual or educational information may be considered gratuitous and may be removed from the site.

Dramatised depictions of graphic or violent content

Some people post videos that contain dramatised depictions of violence. Much like movies and TV, graphic or disturbing content that contains violence, gore or shocking content is not suitable for minors and will be age-restricted.

Click here to find out how to report inappropriate content including graphic or violent content. Remember you will need a Google account to make a report

YouTube's stance on harmful or dangerous content

While it might not seem fair to say you can't show something because of what viewers might do in response, we draw the line at content that intends to incite violence or encourage dangerous or illegal activities that have an inherent risk of serious physical harm or death.

Videos that we consider to encourage dangerous or illegal activities include instructional bomb making, choking games, hard drug use or other acts where serious injury may result. A video that depicts dangerous acts may be allowed if the primary purpose is educational, documentary, scientific or artistic (EDSA), and it isn't gratuitously graphic. For example, a news piece on the dangers of choking games would be appropriate, but posting clips out of context from the same documentary might not be.

Videos that incite others to commit acts of violence are strictly prohibited on YouTube. If your video asks others to commit an act of violence or threatens people with serious acts of violence, it will be removed from the site.

We are very sensitive to any harmful or dangerous content that involves minors. If your video shows a minor participating in a harmful or dangerous activity, do not post it. In the interest of protecting minors, we may age-restrict videos showing adults participating in activities that have a high risk of injury or death.

Click here to find out how to report inappropriate content including harmful or dangerous content. Remember you will need a Google account to do this.

Instagram's stance on bullying and harassment

Instagram does not tolerate bullying and harassment on its platform. If an account is established with the intent of bullying or harassing another person, or if a photo or comment is intended to bully or harass someone, it should be reported.

Click here to submit a report

Instagram's stance on credible threats

Respect other members of the Instagram community.

We want to foster a positive, diverse community. We remove content that contains credible threats or hate speech, content that targets private individuals to degrade or shame them, personal information meant to blackmail or harass someone, and repeated unwanted messages. We do generally allow stronger conversation around people who are featured in the news or have a large public audience due to their profession or chosen activities.

Serious threats of harm to public and personal safety aren't allowed. This includes specific threats of physical harm as well as threats of theft, vandalism, and other financial harm. We carefully review reports of threats and consider many things when determining whether a threat is credible.

Click here to make a report

Instagram's stance on Impersonation

Instagram takes safety seriously. If someone created an Instagram account pretending to be you, you can report it to them. Make sure to provide all the requested information, including a photo of your ID.

Instagram will only respond to reports sent to them from the person who's being impersonated or a representative of the person who's being impersonated (e.g. a parent). If someone you know is being impersonated, please encourage that person to report it.

If you have an account, click here to report impersonation

If you don't have an account click here to report impersonation

 

Instagram's stance on graphic content

We understand that many people use Instagram to share important and newsworthy events. Some of these issues can involve graphic images. Because so many different people and age groups use Instagram, we may remove videos of intense, graphic violence to make sure Instagram stays appropriate for everyone.

We understand that people often share this kind of content to condemn, raise awareness or educate. If you do share content for these reasons, we encourage you to caption your photo with a warning about graphic violence. Sharing graphic images for sadistic pleasure or to glorify violence is never allowed.

Click here to make a report

 

Instagram's stance on Nudity

We know that there are times when people might want to share nude images that are artistic or creative in nature, but for a variety of reasons, we don’t allow nudity on Instagram. This includes photos, videos, and some digitally-created content that show sexual intercourse, genitals, and close-ups of fully-nude buttocks. It also includes some photos of female nipples, but photos of post-mastectomy scarring and women actively breastfeeding are allowed. Nudity in photos of paintings and sculptures is OK, too.

People like to share photos or videos of their children. For safety reasons, there are times when we may remove images that show nude or partially-nude children. Even when this content is shared with good intentions, it could be used by others in unanticipated ways. You can learn more on our Tips for Parents page.

Click here to report nudity on Instagram

 

Instagram's stance on self-injury

The Instagram community cares for each other, and is often a place where people facing difficult issues such as eating disorders, cutting, or other kinds of self-injury come together to create awareness or find support. We try to do our part by providing education in the app and adding information in the Help Center so people can get the help they need.

Encouraging or urging people to embrace self-injury is counter to this environment of support, and we’ll remove it or disable accounts if it’s reported to us. We may also remove content identifying victims or survivors of self-injury if the content targets them for attack or humor.

How to report inappropriate content on YouTube Kids

Report inappropriate videos

Our teams take extensive precautions to ensure a family-friendly environment in the YouTube Kids app. However, if you encounter any inappropriate content, please help by flagging the video for review:

  1. Go to the Watch page of the video you’d like to report.
  2. Tap More (...) in the top corner of the video player.
  3. Tap Report.
  4. Select the reason for reporting the video (Inappropriate visuals, Inappropriate audio or Other). 

Flagged videos are reviewed 24 hours a day, seven days a week and will be removed from the YouTube Kids app if necessary. 

Information about videos on YouTube Kids

Videos in YouTube Kids

We’ve built the YouTube Kids app to be a family-friendly place for kids to explore their imagination and curiosity. Videos available in the app are determined by a mix of algorithmic filtering, user input and human review. Videos in YouTube Kids can be discovered in three ways. Some videos are pre-selected, while others are discovered by your child through search. The app can also recommend videos to your child based on what your child has watched. The pre-selected content goes through an additional level of quality control through some human review. Search and Recommended videos are selected by our algorithm without human review. Though our system has been tuned and tested rigorously, no algorithm is perfect, and even a perfect algorithm would not replace a parent’s judgement. This means that it’s possible that your child may find content in the app that you may not want your child to watch. If this happens, you can notify YouTube by flagging the video.  We use these flags to improve the app. 

For a more restricted experience, you can turn search off, allowing your child access only to pre-selected content. Turning search off reduces the chance of your child finding content that you don’t want them to watch.

LinkedIn's stance on Fake Profiles

Reporting Fake Profiles

Fake profiles are not allowed on LinkedIn. If you see a profile that you believe to be fake, you can flag the profile when viewing it.

Click here to find out how to report a fake profile on LinkedIn

LinkedIn's stance on graphic images and pornography

Be Professional

We ask our members to behave professionally by not being dishonest or inappropriate. We acknowledge the value of discussions around professional activities, but we do not want you to use LinkedIn to shock or intimidate others. It is not okay to share graphic images to shock others, and it is not okay to share obscene images or pornography on LinkedIn's service.

Click here to find out how to report inappropriate and offensive content such as graphic content on LinkedIn

LinkedIn's stance on bullying, harassment and threats

Be Nice

LinkedIn shouldn't be used to harm others. It is not okay to use LinkedIn's services to harass, abuse, or send other unwelcome communications to people. Do not use LinkedIn's services to promote or threaten violence or property damage.

Click here to find out how to report inappropriate or offensive content on LinkedIn

LinkedIn's stance on suicide and self injury

Suicide & Self Injury

If you're having thoughts about suicide or self-injury, or if you see posts or comments that suggest someone else may be having such thoughts, please know that help is available.

Some hotlines and other resources are listed at www.befrienders.org. Others can be found through an Internet search.

In addition, if you believe that someone is in imminent danger, please contact the police for immediate assistance. 

Click here to find out how to report inappropriate content on LinkedIn

Adult Content

Bing offers SafeSearch settings, which allow most users to set the type of filtering of adult content that they would like applied to their search results. Bing wants to avoid delivering content that can be offensive or harmful when it wasn’t requested. By default, in most countries or regions, SafeSearch is set to Moderate, which restricts visually explicit search results but doesn't restrict explicit text.

Different countries or regions may have different local customs, religious or cultural norms, or local laws regarding the display of adult content (or search results accessing adult content). This may affect default SafeSearch settings for Bing in some countries. We endeavor to reevaluate these settings as the relevant local laws, customs, and norms evolve.

Please remember that Bing will only review search results containing illegal adult content, or adult content that appears in search results when SafeSearch is turned on.

Click here to submit a report about adult content

Explicit content

Block explicit results 

You can filter explicit search results on Google, like pornography, using the SafeSearch setting. SafeSearch isn’t 100% accurate, but it helps you avoid explicit content.  

You can use SafeSearch as a parental control to help protect children from inappropriate search results on your phone, tablet, or computer.  

When SafeSearch is on, it helps block explicit images, videos, and websites from Google Search results. When SafeSearch is off, we'll provide the most relevant results for your search and may include explicit content when you search for it.

We do our best to keep the SafeSearch filter as thorough as possible, but sometimes explicit content, like porn and nudity, makes it through. If you have SafeSearch turned on, but still see inappropriate sites or images, let us know.

Click here to report offensive content appearing in SafeSearch filtered results

Harassment and threats

Do not harass or bully others. Anyone using Blogger to harass or bully may have the offending content removed or be permanently banned from the site. Online harassment is also illegal in many places and can have serious offline consequences. Don't threaten other people on your blog. For example, don't post death threats against another person or group of people and don't post content encouraging your readers to take violent action against another person or group of people.

Click here to report harassment and/or threats on Blogger

Impersonation

Impersonating others 

Please don't mislead or confuse readers by pretending to be someone else or pretending to represent an organization when you don't. We're not saying you can't publish parody or satire - just avoid content that is likely to mislead readers about your true identity.

Click here to report impersonation on Blogger

Violence and crude content

Violence

It's not okay to post violent or gory content that's primarily intended to be shocking, sensational, or gratuitous. If posting graphic content in a news, documentary, scientific, or artistic context, please be mindful to provide enough information to help people understand what's going on. In some cases, content may be so violent or shocking that no amount of context will allow that content to remain on our platforms. Lastly, don't encourage others to commit specific acts of violence.

Crude Content

Don't post content just to be shocking or graphic. For example, collections of close-up images of gunshot wounds or accident scenes without additional context or commentary would violate this policy.

Click here to report violence or crude content on Blogger

Adult content

We do allow adult content on Blogger, including images or videos that contain nudity or sexual activity. If your blog contains adult content, please mark it as 'adult' in your Blogger settings. We may also mark blogs with adult content where the owners have not. All blogs marked as 'adult' will be placed behind an 'adult content' warning interstitial. If your blog has a warning interstitial, please do not attempt to circumvent or disable the interstitial - it is for everyone’s protection.

Do not use Blogger as a way to make money on adult content. For example, don't create blogs that contain ads for or links to commercial porn sites.

We do not allow illegal sexual content, including image, video or textual content that depicts or encourages rape, incest, bestiality, or necrophilia.

Click here to report adult content on Blogger

Suicide prevention

Help

If you believe that someone else is in danger of suicide (and you have contact information for this user), contact your local law enforcement for immediate help.

Visit the International Association for Suicide Prevention site to find a crisis center in your area.

Abuse

You may not engage in the targeted harassment of someone, or incite other people to do so. We consider abusive behavior an attempt to harass, intimidate, or silence someone else’s voice.

Context matters 

Some Tweets may seem to be abusive when viewed in isolation, but may not be when viewed in the context of a larger conversation. While we accept reports of violations from anyone, sometimes we also need to hear directly from the target to ensure that we have proper context. 

The number of reports we receive does not impact whether or not something will be removed. However, it may help us prioritize the order in which it gets reviewed.

We focus on behavior 

We enforce policies when someone reports behavior that is abusive and targets an entire protected group and/or individuals who may be members. 

This targeting can happen in any manner (for example, @mentions, tagging a photo, and more).

Twitter strives to provide an environment where people can feel free to express themselves. If abusive behavior happens, we want to make it easy for people to report it to us. Multiple Tweets can be included in the same report, helping us gain better context, while investigating the issues to get them resolved faster.

Click here to find out how to report abusive behaviour on Twitter

Violence and threats

Violence 

You may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people. This includes, but is not limited to, threatening or promoting terrorism. You also may not affiliate with organizations that — whether by their own statements or activity both on and off the platform — use or promote violence against civilians to further their causes.

Take threats seriously 

If you believe you are in physical danger, contact the local law enforcement authorities who have the tools to address the issue.

If you decide to work with law enforcement, make sure to do the following:

  • Document the violent or abusive messages with print-outs or screenshots
  • Be as specific as possible about why you are concerned
  • Provide any context you have around who you believe might be involved, such as evidence of abusive behavior found on other websites
  • Provide any information regarding previous threats you may have received 

Click here to report abusive behaviour such as violent threats

Impersonation

You may not impersonate individuals, groups, or organizations in a manner that is intended to or does mislead, confuse, or deceive others. While you may maintain parody, fan, commentary, or newsfeed accounts, you may not do so if the intent of the account is to engage in spamming or abusive behavior. Read more about our impersonation policy.

Twitter accounts portraying another person in a confusing or deceptive manner may be permanently suspended under the Twitter impersonation policy.

An account will not be removed if:

  • The user shares your name but has no other commonalities, or
  • The profile clearly states it is not affiliated with or connected to any similarly-named individuals.

Accounts with similar usernames or that are similar in appearance (e.g. the same avatar image) are not automatically in violation of the impersonation policy. In order to be impersonation, the account must also portray another person in a misleading or deceptive manner.

Click here to submit an impersonation report on Twitter

Unwanted sexual advances

You may not direct abuse at someone by sending unwanted sexual content, objectifying them in a sexually explicit manner, or otherwise engaging in sexual misconduct.

Click here to find out how to report abuse including unwanted sexual advances on Twitter

Graphic violence and adult content

We consider graphic violence to be any form of gory media related to death, serious injury, violence, or surgical procedures. We consider adult content to be any media that is pornographic and/or may be intended to cause sexual arousal. Learn more about our media policy.

Twitter allows some forms of graphic violence and/or adult content in Tweets marked as containing sensitive media. However, you may not use such content in your profile or header images. Additionally, Twitter may sometimes require you to remove excessively graphic violence out of respect for the deceased and their families if we receive a request from their family or an authorized representative. 

We consider graphic violence to be any form of gory media related to death, serious injury, violence, or surgical procedures. Some examples include, but are not limited to, depictions of:

  • the moment at which someone dies
  • gruesome crime or accident scenes
  • bodily harm, torture, dismemberment, or mutilation

We consider adult content to be any media that is pornographic and/or may be intended to cause sexual arousal. Some examples include, but are not limited to, depictions of:

  • full or partial nudity (including close-ups of genitals, buttocks, or breasts)
    • Please note that exceptions may be made for artistic, medical, health, or educational content. Breastfeeding content does not need to be marked as sensitive. 
  • simulating a sexual act
  • intercourse or any sexual act (may involve humans, humanoid animals, cartoons, or anime)

Click here to learn how to report sensitive media such as graphic violence and adult content on Twitter

Suicide or self-harm

You may not promote or encourage suicide or self-harm. When we receive reports that a person is threatening suicide or self-harm, we may take a number of steps to assist them, such as reaching out to that person and providing resources such as contact information for our mental health partners.

If you or someone you know is at risk of self-harm or suicide, you should seek help as soon as possible by contacting agencies specializing in crisis intervention and suicide prevention.

After we assess a report of self-harm or suicide, Twitter will contact the reported user and let him or her know that someone who cares about them identified that they might be at risk. We will provide the reported user with available online and hotline resources and encourage them to seek help.

Click here to report suicide or self-harm on Twitter

Please find links to LinkedIn's Safety Centre articles below which will help you to submit a report

Have you reported it to Google Search?

Facebook's stance on Harassment

We do not tolerate harassment on Facebook. We want people to feel safe to engage and connect with their community. Our Harassment Policy applies to both public and private individuals because we want to prevent unwanted or malicious contact on the platform. Context and intent matter, and we allow people to share and re-share posts if it is clear that something was shared in order to condemn or draw attention to harassment. In addition to reporting harassing behaviour and content, we encourage people to use tools available on Facebook to help protect against harassment.

Click here to find out how to report harassment on Facebook

Facebook's stance on credible violence

We aim to prevent potential real-world harm that may be related to content on Facebook. We understand that people commonly express disdain or disagreement by threatening or calling for violence in facetious and non-serious ways. That's why we try to consider the language, context and details in order to distinguish casual statements from content that constitutes a credible threat to public or personal safety. In determining whether a threat is credible, we may also consider additional information such as a person's public visibility and vulnerability. We remove content, disable accounts and work with law enforcement when we believe that there is a genuine risk of physical harm or direct threats to public safety.

Click here to report abusive content such as credible violence directly on Facebook

Facebook's stance on Impersonation

Authenticity is the cornerstone of our community. We believe that people are more accountable for their statements and actions when they use their authentic identities. That's why we require people to connect on Facebook using the name that they go by in everyday life. Our Authenticity Policies are intended to create a safe environment where people can trust and hold each other accountable.

Do not Impersonate others by:

  • Using their images with the explicit aim to deceive others
  • Creating a profile assuming to be or speaking for another person or entity
  • Creating a Page that assumes to speak for another person or entity without authorisation when the authorised party objects to the content

Click here to report impersonation on Facebook 

Facebook's response to text-based unwanted sexual advances

We remove content that threatens or promotes sexual violence or exploitation. Where appropriate, we refer this content to law enforcement. 

If you receive any unwanted sexual comments or communication on Facebook, the best thing you can do is remove yourself from the conversation. If it doesn’t stop immediately, you should block the person and report them on the platform.

Click here to find out how to report abusive content such as unwanted sexual advances to Facebook.

Facebook's stance on graphic violence

We remove content that glorifies violence or celebrates the suffering or humiliation of others because it may create an environment that discourages participation. We allow graphic content (with some limitations) to help people raise awareness about issues. We know that people value the ability to discuss important issues such as human rights abuse or acts of terrorism. We also know that people have different sensitivities with regard to graphic and violent content. For that reason, we add a warning label to especially graphic or violent content so that it is not available to people under the age of 18, and so that people are aware of the graphic or violent nature before they click to see it.

Click here to find out how to report violence and graphic content on Facebook

How Facebook works to help prevent self-injury and suicide

In an effort to promote a safe environment on Facebook, we remove content that encourages suicide or self-injury, including real-time depictions that might lead others to engage in similar behaviour. Self-injury is defined as the intentional and direct injuring of the body, including self-mutilation and eating disorders. We want Facebook to be a space where people can share their experiences, raise awareness about these issues and support each other through difficult experiences, and so we allow people to discuss suicide and self-injury. We encourage people to offer and to seek support from one another in connection with these difficult topics.

We work with organisations around the world to provide assistance to people in distress. We also talk with experts in suicide and self-injury to help inform our policies and enforcement. For example, we have been advised by experts that we should not remove live videos of self-harm while there is an opportunity for loved ones and authorities to provide help or resources.

We remove any content that identifies and negatively targets victims or survivors of self-injury or suicide, either seriously, in humour or rhetorically. People can, however, share information about self-injury and suicide to draw attention to the issue and allow for discussion so long as they do not promote or encourage self-injury or suicide.

Click here to find out how to report self-injury and suicide on Facebook

Facebook's stance on adult nudity and sexual activity

We restrict the display of nudity or sexual activity because some people in our community may be sensitive to this type of content. Additionally, we default to removing sexual imagery to prevent the sharing of non-consensual or underage content. Restrictions on the display of sexual activity also apply to digitally created content unless it is posted for educational, humorous or satirical purposes.

Our Nudity Policies have become more nuanced over time. We understand that nudity can be shared for a variety of reasons, including as a form of protest, to raise awareness about a cause or for educational or medical reasons. Where such intent is clear, we make allowances for the content. For example, while we restrict some images of female breasts that include the nipple, we allow other images, including those depicting acts of protest, women actively engaged in breastfeeding and photos of post-mastectomy scarring. We also allow photographs of paintings, sculptures and other art that depicts nude figures.

Click here to find out how to report nudity on Facebook