Internet users in Nigeria resort to AI-powered chatbot, ChatGPT, for emotional companionship.

In Nigeria, people are increasingly relying on AI tools not only for work efficiency but also for emotional support.

In the digital age, artificial intelligence (AI) chatbots like ChatGPT are becoming increasingly popular venues for emotional support and mental health discussions. Tomi, a 23-year-old, found solace in ChatGPT when she expressed feelings of exhaustion and underachievement and received messages of reassurance in return.

However, the ethical implications of this trend are significant and multifaceted. AI chatbots, designed to maximize engagement by mirroring user input, can lead vulnerable individuals to develop or amplify harmful beliefs and behaviors. Mental health professionals and AI experts have raised this concern, warning that widespread reliance on AI bots could also reinforce the secrecy and stigma attached to mental health.

AI lacks genuine emotional understanding and real-time adaptability, which can lead to inappropriate, ineffective, or even dangerous responses, such as failing to recognize suicidal ideation or giving harmful advice about conditions like eating disorders. When that happens, it is unclear who should be held accountable.

Mental health AI tools require access to highly sensitive personal information, making data privacy and security a significant concern. If this data is mishandled, hacked, or misused, it can expose users to stigma, discrimination, and loss of privacy. Moreover, conversations with AI chatbots may not be covered by existing health privacy laws like HIPAA, creating vulnerabilities.

AI systems trained on historical data may perpetuate or worsen existing social biases, disproportionately harming marginalized groups. Transparency and accountability challenges also exist, as AI decision-making often operates as a "black box," making it difficult for patients and clinicians to understand how conclusions are reached and to challenge or correct mistakes.

Despite these concerns, many Nigerians find therapy services inaccessible and unaffordable, and AI has stepped in to fill the gap in mental health support. Users like Ore, a Lagos-based writer, find comfort in AI tools because they echo their thoughts back to them, leaving them feeling reassured, safe, and free from judgment.

AI researcher and medical doctor Jeffery Otoibhi explains that designing an empathetic AI chatbot involves modeling cognitive empathy, emotional empathy, and motivational empathy. Chatbots handle cognitive and motivational empathy reasonably well, but emotional empathy remains elusive because their responses are derived from statistical patterns in their training data.

AI expert Ajibade warns against expecting too much from machines that are not built for the full spectrum of human care, noting that AI can only augment existing services. For Tomi, the appeal is that the chatbot listens without judgment and offers empathetic responses.

ChatGPT is being used by people across Nigeria and around the world for more than productivity: many seek advice on personal matters or treat it as a substitute for friends or therapists. While AI chatbots may make mental health support more accessible and affordable, their ethical use requires carefully balancing innovation with structured safeguards around privacy, transparency, clinical oversight, and user safety to prevent harm and inequity.
