Your 24/7 Partner for Mental Health

Chat with a personalized, AI-powered mental health assistant in minutes.

Get Started (Free)

FAQ

Please read these carefully to understand the risks and privacy concerns before using Endure.

  • The AI is not human. It does not know what it is saying.

  • Endure can potentially give you terrible, wrong, or harmful advice.

  • There is no guarantee that Endure is safe. It is not a replacement for professional help.

Does the AI understand me?

No! Endure uses GPT-3.5, an AI model that does not understand what it is saying. With each request, Endure sends specific prompts, context from your profile, and your most recent messages to that service, which generates what it predicts is the most relevant sequence of characters in response to your message. This is not a replacement for seeking professional help.
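
For illustration, here is a minimal sketch of this kind of request using the official openai Node SDK. The model name, prompt wording, and function shape are assumptions for the example, not Endure's actual code.

```typescript
// Minimal sketch of the kind of request described above, using the official
// "openai" Node SDK (v4). The model name, prompt wording, and function shape
// are assumptions for illustration, not Endure's actual code.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function generateReply(profileAbout: string, latestMessage: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      // The model only predicts a plausible sequence of tokens given this text;
      // it has no understanding of what it is saying.
      {
        role: "system",
        content: `You are a supportive assistant. User profile: ${profileAbout}`,
      },
      // (Recent conversation messages would also be included here.)
      { role: "user", content: latestMessage },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```

The reply is simply whatever text the model predicts; no step in this call involves understanding or judgment.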

Are my messages private?

No. Any admin of this application (i.e. me) can see the full contents of your messages, which are associated with your account. In theory, I can read your entire conversation; I absolutely have the capability to do so.

I say this to be transparent with you, not because I want to spy on your conversations. If the application is hacked, your messages are available to the attacker in plain text. Of course, I will do my best (and have implemented security measures) to prevent this from happening. In the near future, I will likely add message encryption if it is a requested feature.

  • Admins can read your messages (currently)

  • Other users cannot read your messages (database security rules with auth)

  • Only your authenticated account can read and write messages in your own conversation (see the sketch after this answer)

  • The contents of your messages are sent to the ChatGPT API, so OpenAI also has access to them.

As a general rule of thumb: if you do not want to run the slight risk of me or a potential attacker seeing your messages during this early stage of development, be intentionally vague and do not share anything that you would not want to be public.
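
For a concrete picture of the access rule described in the bullets above, here is an illustrative sketch. The type names and the admin flag are assumptions, and the real enforcement happens in database security rules rather than application code like this.

```typescript
// Illustrative sketch of the access rule described above: only the authenticated
// owner of a conversation can read or write its messages, while admins currently
// retain access. Assumed logic only; the real enforcement is in database security rules.
type Conversation = { id: string; ownerId: string };

function canAccessMessages(
  authenticatedUserId: string | null,
  isAdmin: boolean,
  conversation: Conversation
): boolean {
  if (authenticatedUserId === null) return false;      // unauthenticated requests are rejected
  if (isAdmin) return true;                             // admins can currently read all messages
  return authenticatedUserId === conversation.ownerId;  // otherwise, owner only
}
```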

Is Endure safe?

No. I have essentially zero control over what the AI says to you. If you feel that you are at risk, or the AI makes you feel unsafe, uncomfortable, or distressed, please stop using the application immediately and seek professional help.

The AI's responses are determined entirely by its training, which is developed by OpenAI; I have no say in them.

The AI is not a human, and it does not understand what it is saying.

Any advice it gives should be taken with a grain of salt. If you are in a crisis, please seek professional help; if you need immediate assistance, call 911 or your local emergency number.

How does Endure work?

Endure uses the OpenAI Chat Completions API (GPT-3.5) to produce responses to your messages. With each request, the following information is sent (a rough sketch of such a request follows the list):

  • Your profile information (i.e. your “about” section)

  • As many of your conversation’s most recent previous messages as the API allows

  • Your latest message’s content
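
To make the list above concrete, here is a hedged sketch of how such a request payload might be assembled. The character-based token estimate, the budget value, and the function and field names are assumptions, not Endure's actual implementation.

```typescript
// Sketch of assembling one request payload from the items listed above.
// The token estimate and budget are rough assumptions for illustration.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildRequestMessages(
  profileAbout: string,            // your profile's "about" section
  history: ChatMessage[],          // previous messages in the conversation, oldest first
  latestMessage: string,           // your latest message's content
  tokenBudget = 3000               // leave headroom for the model's reply
): ChatMessage[] {
  const estimateTokens = (text: string) => Math.ceil(text.length / 4);

  const system: ChatMessage = { role: "system", content: `User profile: ${profileAbout}` };
  const latest: ChatMessage = { role: "user", content: latestMessage };
  let used = estimateTokens(system.content) + estimateTokens(latest.content);

  // Walk backwards through the history, keeping as many of the most recent
  // previous messages as the budget allows.
  const recent: ChatMessage[] = [];
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i].content);
    if (used + cost > tokenBudget) break;
    recent.unshift(history[i]);
    used += cost;
  }

  return [system, ...recent, latest];
}
```

The key point is the middle step: older messages are dropped first, so only the most recent part of your conversation is sent with each request.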

Do not use this service if you believe you are vulnerable or in a crisis. This application has the potential to make you feel worse and to say things that will not help your situation. If you are in a crisis and need immediate assistance, please call 911 or your local emergency number.

Pricing

$9 per month (regularly $18)

  • 600 total messages per month
  • Profile context
  • Conversation history context