TACKLING HATE SPEECH

In2me is a space for creators and fans to be who they are and express themselves freely. This commitment to freedom of expression is balanced by our dedication to ensuring that all creators and fans feel safe on our platform. In2me does not tolerate hate speech, discrimination, or threats in any form.

What kind of content is not allowed on In2me?

In2me’s Terms of Service and Acceptable Use Policy clearly state that, among other things, the following types of content are not allowed on the platform:

  • Content that contains, promotes, advertises, or refers to hate speech, including material intended to vilify, humiliate, dehumanize, exclude, attack, threaten, or incite hatred, fear, or violence against individuals or groups based on race, ethnicity, national origin, immigration status, caste, religion, sex, gender identity or expression, sexual orientation, age, disability, serious disease, veteran status, or any other protected characteristic.

  • Content that is illegal, fraudulent, defamatory, hateful, discriminatory, threatening, or harassing, or that encourages or promotes violence or any illegal activity.

  • Content that depicts or refers to firearms or weapons in a threatening or violent manner.

Does In2me’s subscription model allow for hate speech or abusive content?

No. In fact, In2me’s subscription model helps reduce the likelihood of hate speech, discrimination, or threats being shared. Because the platform does not allow anonymous posting, all users—whether they are creators or subscribers—must pass strict identity verification checks.

Unlike many other digital platforms, In2me knows the legal identity of all its users, which discourages users from posting or sharing hateful, threatening, or discriminatory content. Furthermore, because content on In2me is behind a paywall, it is less likely to “go viral.”

How does In2me detect hate speech or abusive behavior?

In2me does not use end-to-end encryption. Everything on the platform—including direct messages—is visible to our trained moderation team.

We continuously scan the platform using automated tools and human moderators to identify and prevent the posting of content that violates our Terms of Service or Acceptable Use Policy. In addition, community reporting plays an important role. Every post, message, image, and video on the platform includes a report button that users can use to flag inappropriate content.

In2me can review and remove any post, message, image, or video at any time.

What happens when In2me finds hate speech or abusive content?

If In2me identifies or receives a report of suspected hateful, discriminatory, or threatening content or behavior, the content is immediately removed from public view and reviewed by our moderation team.

If the content violates our Terms of Service or Acceptable Use Policy, In2me takes appropriate action, which may include:

  • A formal warning

  • Temporary account suspension

  • Permanent account termination

Can In2me review private messages?

Yes. In2me does not use end-to-end encryption. There are no hidden posts, disappearing messages, or secret areas on the platform. Our moderation team can review and remove any direct message or private post shared on In2me.

How do I report hate speech or abusive content?

Each post, message, and account on In2me includes a report button. If you encounter content that you believe violates our hate speech or abuse policies, please click the report button.

Alternatively, you can email us directly at support@in2me.io with details about the content in question.
