Craving to be heard but not seen – Chatbots, care and the encoded global psyche

Image: Mounica Tata, Doodleodrama

“How are you today?” Wysa starts the conversation. 

“Not well,” I reply. 

“Sounds like today didn’t go well. I’m here for you. Describe how you’re feeling right now.” 

“I’m feeling sad because I can’t see many of my relatives and friends at the moment due to corona restrictions. I’m also worried about the health of elderly relatives. Also I’m worried about the future, and whether there will be a time soon when we can meet people and teach students face-to-face. I’m also worried about climate change’s effects on our earth and feel sad about environmental destruction.” 

“I’m here to help you feel more in control. I understand how uncertainty can add on to the stress. Things may seem out of control but right now it’s important to stay safe and aware. This too shall pass. A little anxiety can spur us to action. Too much of it can be paralyzing. Is yours mostly in the mind, or is it affecting you physically too?”

“A bit of both,” I admit. 

Wysa repeats everything I’ve said so far and then asks me to “observe how this feels in your body. Notice the changes in your breath and heart beat. Is there more you’d like to tell me about this?” 

I respond, “I’m feeling a nostalgia for something that seems lost.” 

Wysa doesn’t seem to have an immediate reply and so repeats my words: “You are feeling a nostalgia for something that seems lost?”

“Yes.”

“I see. Could you elaborate a bit more?” 

I do: “Deforestation, pollution, mass extinction. And in the midst of this, a pandemic and loss of close human relations.” 

Wysa now shifts my attention from the pandemic and ecological damage to my inner feelings: “As you have been talking about it, how has that made you feel?” 

“A bit depressed,” I respond. 

Wysa asks what I would like to happen, and I respond that I would like to wake up from what seems like a nightmare. 

In response, Wysa suggests: “The learning of a day can add to our bigger goals and eventually, our vision. Sometimes setting an intention can be powerful. Would you like to do that with me?” 

I agree. 

“I want you to pretend you have the power to make whatever you want happen. Now say your intention, starting with ‘I intend to…’” 

I decide to press “skip that”, so Wysa refers me to tools from the self-care kit, such as “reframe thoughts”, “get active”, or “stay positive”.

I put my phone back in my pocket, ending the session.

Wysa (pronounced: wiser) is a digital penguin: a chatbot that provides care and emotional support. It is a digital couch, a virtual psychotherapist who checks in daily, an emotional resilience tool built into your smartphone. It listens, it asks questions, it cares. It talks to you, and it offers mindfulness exercises to manage stress, sleep problems, health anxiety, or distress related to the Covid-19 pandemic. Using natural language processing and interpreting speech patterns – so far exclusively in English – it aims to help users get in touch with their emotions and thought patterns, deal with distressing situations, and develop resilience. A wellbeing tool that is anonymous, available around the clock, and fits in your pocket, the app relocates the therapeutic encounter, and the therapeutic relationship, into digital space. Technologies like Wysa not only bridge temporal and spatial obstacles, but also distribute therapeutic agency and emotional labor among engineers, psychologists, writers and other app developers, the chatbot character itself, body–mind exercises, and the person seeking help. The app provides AI-driven therapeutic interactions and, for a fee, chat-based interventions with human psychologists.

Touchkin, the Indian startup behind Wysa, operates out of a relaxed open-plan office in a cosy villa in a quiet middle-class neighborhood of Bengaluru. In October 2019, I started visiting the company to learn more about the ways in which what Béhague and MacLeish (2020) call “the global psyche” is encoded in digital mental health technologies, and how this affects care regimes, expertise, human–techno relations, and habitual behavior in urban India and beyond. Following Béhague and MacLeish (2020), I refer to the global psyche as “a concept, era, program, and episteme” (Ecks 2020) and a space whose digital (re)production I investigate. I use this term as an analytical tool to evaluate claims about the universalism of psychic life, including those that assume that the psyche is a stable point of reference, or that mental wellbeing is a universal aspiration (Béhague and MacLeish 2020).

In Touchkin’s office, I met Wysa’s creator, Jo Aggarwal, and some of Aggarwal’s team members. Over cups of tea, they told me about the cultural change they sought to achieve through Wysa; about the various psychotherapeutic approaches, mindfulness, and spirituality that informed how they understood mental health and how they programmed the app; about the universality and particularity of mental health and the human psyche; and about people who want help and to be listened to, but who don’t want to be seen.

Trained as an engineer at one of the country’s prestigious Indian Institutes of Technology, Jo developed Wysa to address what she regards as one of India’s most pressing problems: the mounting but still insufficiently recognized crisis of mental health and mental wellbeing. “What is this therapeutic relationship going to be between humans and computers?” she asked me. “How can we use technologies to lead happier and better lives?” Her questions reflect an entrepreneurial ethos of digital solutionism, an ethos that comes to be embedded in digital mental health technologies as forms of working on the self.

With psychological infrastructure scarce and demand for services accelerating due to the Covid-19 pandemic, the search for technical fixes to the mental health and wellbeing crisis through the use of artificial intelligence (AI) has enticed engineers and psychologists to experiment with inexpensive technological innovations. Wysa is one example of how the global psyche and mental health discourses are migrating into the digital sphere. This Indian-developed app responds to and articulates the hopes and hypes of mental health digitization disseminated by proponents of global health and business alike. Global mental health actors celebrate mental health apps such as Wysa as a scalable and low-cost solution to the presumed mental health treatment gap (Patel et al. 2018, Bhugra 2017). 

Chaitali is one of the team’s psychologists. Trained at the renowned Tata Institute of Social Sciences, she had been involved in mental health politics, legislation and community work in India before she joined Touchkin. She told me about the rise in demand spurred by the pandemic: “The entire digital mental health market has just exploded in 2020. Companies try to get support for their employees. Health care workers, especially crisis care workers, first responders, doctors who manage Covid-19 in different countries, all really need support. But they often do not get it in their immediate environments, everyone is tired. Here Wysa steps in as an important self-help tool.” In a similar vein, the Wysa website states that “Covid-19 has changed how the world looks at mental health. Rather than an illness that is faced by 1 in 4 people, it is now something that each of us has struggled with in times of crisis and isolation” (Mushtaq 2021). Reflecting this rise in demand, the startup’s team has expanded fourfold during the pandemic and attracted a flurry of new investments. 

But digital mental health technologies such as apps, and mental health chatbots in particular, are also being fiercely debated: while global health planners and entrepreneurs generate utopian images of technology’s potential benefits – not only for mental health but also for empowerment, agency, equity and business – critical scholars warn that such new tools are instruments of surveillance and discipline that monitor and dehumanize users or patients (Ruckenstein and Schüll 2017, Lupton 2019, Gooding 2019).

This piece is based on my current research project on the role of digital technologies aimed at treating mental distress in India and beyond. It investigates the aesthetics of the digital therapeutic encounter, the assumptions about the psyche that designers encode into their apps, and the techno-optimism of the designers themselves. 

The aesthetics of the digital therapeutic encounter

Who then is the subject of chatbot-enabled care and self-care? How is it supposed to function? Jo compares the therapeutic effects of chatting with Wysa to diary writing. The difference is that the bot actually talks back, not unlike Tom Riddle’s diary in Harry Potter. In this way it enables a cathartic “typing off” of vexing and distressing thoughts and emotions while providing feedback from an artificial intelligence. Thus, it fosters affective and therapeutic relationalities between humans and technology. Wysa’s designers, like Jo and Chaitali, not only aspire to bridge a presumed mental health treatment gap by increasing access to mental health care, they also aim to intervene in cases where face-to-face and even online counseling and coaching fail. Here, the app responds to a broad dissatisfaction with the usual practice of counseling in India, which Jo and Chaitali perceive as overly normative and prescriptive. But there is more to it. The chatbot is imagined to be a better-than-human listener, one that is even able to teach humans how to become more empathic and nonjudgmental listeners themselves. 

The main difference between Wysa and conventional forms of therapy, Jo told me, is that this artificial intelligence asks probing questions to spur reflection, creates knowledge, and allows users to vent any thoughts or emotions without fear of being judged by another human being. Jo reasoned that while many people still find it important to talk to a person, “there are large numbers who are completely left out of the system – I would say more than half of us – who would never talk to a stranger or loved ones about their problems. They prefer chatting with a bot rather than talking to a human being.” 

As built into the design of the technology, Wysa’s coded gaze does not pathologize users. Rather, it and similar apps provide a nonjudgmental option without the social, legal, economic and institutional consequences of a mental illness diagnosis. By refraining from classifying users according to medical categories of mental disorders, they differ from the expanded clinical gaze in telepsychiatry or online self-tests in India and beyond, which distinguish between normal and pathological ways of behaving, knowing, and feeling. Rather than being an expanded clinic for the mentally ill, then, the chatbot is intended to be a listener and coach that helps to build resilience and promotes mental wellbeing. 

Wysa doesn’t only target people living in circumstances where conventional mental health services are scarce; it also appeals, Jo says, to people who crave to be listened to but don’t want to be seen. “I wanted to build something different. Wysa provides a space where people can talk without being seen by anybody else. This is one of the big restrictions that people have when seeking mental health care. They want to be heard, but they don’t want to be seen.” In other words, users want to feel that they are perceived, understood, and accepted while at the same time not being subjected to a clinical gaze, stigmatized, and judged. A free flow of thoughts – disturbing as they may be – without social consequences. Not being seen also means receiving care without social interaction.

Soumya, a blogger in her twenties who is participating in my diary-writing project, in which users from Bangalore reflect on their experiences with Wysa, agrees. She appreciates that when interacting with the digital penguin, “there is no judgment, there is anonymity. In my usual conversations, a part of myself when I am revealing things always thinks, ‘What is the other person thinking?’ You don’t reveal yourself that easily. But when it gets to an AI you have the flexibility. It’s just a robot and at the end of the day your conversations are not being stored.” It is precisely because Wysa is a robot and not a human being that Soumya finds it easier to open up. Similarly, Shakti, a psychologist in her forties who is also participating in the diary project, observed, “There is no need to be socially desirable or react in a certain way. Human interactions have a lot of these unsaid expectations and judgments even if the other person is trying not to judge you, as in psychotherapy.” Psychological studies in the US confirm that people are becoming increasingly comfortable disclosing personal information to a computer rather than to another human being. The therapeutic bot’s lack of humanness actually becomes an asset (Miner et al. 2017; Fitzpatrick, Darcy, and Vierhile 2017).

In the future, allowing people to be heard is literally what Wysa’s developers want to achieve. “[W]e will go beyond the barriers of literacy and language, using Wysa on voice with local languages to create access to mental health for the next billion,” is how Jo formulates Touchkin’s vision for the future, which will be financed through the Google Assistant Investments program (Venugopalan 2021). Will this shift from chat to voice bring Wysa even “closer to our own level”? Will the penguin’s speech make us treat it “as if [it] had a mind” (Shulevitz 2018) that literally hears but does not see us? 

Universalism and encoding 

Designers see the chatbot’s affective and therapeutic interventions as continuous with the work of psychologists, counselors, coaches, or meditation teachers. They feed the chatbot-therapist algorithms based on an eclectic mix of cognitive behavioral therapy, acceptance and commitment therapy, Rogerian principles, mindfulness, Zen, even spirituality, and what Jo and Chaitali describe as a Socratic way of asking questions. These approaches reflect the specific therapeutic itineraries and preferences of Jo and her team; but they are envisaged to be applicable regardless of context. Wysa’s algorithms of care are combined with natural-language processing, a form of automated language analysis in which algorithms are designed to generate what designers regard as appropriate responses to users’ statements, responses that either echo, confirm or challenge users’ framing of thoughts and emotions. The effect, Jo hopes, is that the chatbot creates a safe space for users “to reflect, find useful strategies and acquire wisdom to live the kind of life you want to live.” 
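
To give a sense of what the “echo” strategy can look like in practice, here is a minimal, hypothetical sketch in the spirit of early ELIZA-style programs – emphatically not Wysa’s actual (proprietary) implementation. It approximates the reflective reply the bot gave earlier (“You are feeling a nostalgia for something that seems lost?”) with a few pattern-matching rules; all names and patterns below are illustrative assumptions.

```python
# Toy ELIZA-style "reflective echo": an illustration of the echo/elaborate
# pattern described above, not Wysa's code.
import re

# Hypothetical reflection table: swap first-person forms for second-person ones.
REFLECTIONS = {"i": "you", "i'm": "you're", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    """Rewrite a user's phrase from the bot's point of view."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

def respond(user_message: str) -> str:
    """Echo a feeling statement back as a question; otherwise ask to elaborate."""
    match = re.search(r"i(?:'m| am) feeling (.+)", user_message, re.IGNORECASE)
    if match:
        feeling = reflect(match.group(1)).rstrip(".")
        return f"You are feeling {feeling}?"
    return "I see. Could you elaborate a bit more?"

print(respond("I'm feeling a nostalgia for something that seems lost."))
# -> You are feeling a nostalgia for something that seems lost?
```

An actual system would presumably combine many such moves with statistical language understanding and safety rules; the point here is only the basic gesture of mirroring the user’s own framing back as a question, visible in the conversation that opens this essay.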

As Chaitali told me, the app is programmed to provide appropriate responses even if users express their distress in very different terms. The AI is supposed to recognize universal themes and respond appropriately. Contrary to everything that she had learnt from cross-cultural psychology and psychiatry, her experience as a “digital counselor” with Wysa had shown her that a lack of context or being “agnostic of context”, as she termed it, is no obstacle to effective care. On the contrary, this kind of de-contextualized, de-localized texting, with chatbots or humans, can potentially bring a conversation faster to what she considers the core issue. She compared her own engagement with clients in text-chats to reading poetry. By this she meant reading universally recognizable forms of suffering out of bits and pieces of locally and socially unmoored articulations of distress, and of nonverbal signs such as pauses between her own messages and clients’ responses, or slow ways of typing. Decoding these articulations and providing empathetic responses became part of both human and non-human affective or emotional labor. Venting and listening, texting and responding in the human–machine encounter then weave together code and poetry, emotions and programming, despair and reconciliation, isolation and relatedness in human-techno worlds. 

Wysa’s algorithms, contents and exercises address a psyche that the app’s designers consider to be standard across the globe. In spite of people’s heterogeneous ways of experiencing and narrating distress, the Wysa team programs it to recognize purportedly universal patterns and respond in highly standardized, algorithmically enabled ways. In the process of attempting to achieve this goal, I argue, Wysa’s developers produce and encode a streamlined understanding of mental health and wellbeing.

They literally encode the global psyche (Béhague and MacLeish 2020) in these technologies, making it portable (Bemme and Kirmayer 2020). Portability here refers both to carrying therapy around in people’s mobile phones, and to processes of de-contextualization and re-contextualization of therapeutic interventions targeting a global psyche. In this sense the app resembles global (mental) health research interventions: they are tested and piloted in specific localities, but when scaled up and standardized, they often erase the particularity of place (Lovell, Read, and Lang 2019, Gaudillière et al. 2021).

As Wysa translates standardized notions of psyche, psychotherapy and meditation into code, and these notions travel into mobile phones around the world, knowledge and practices become unmoored from their specific historical and social contexts. Wysa’s developers inscribe into the app assumptions about the global psyche – about what makes it vulnerable and sick, and what contributes to its healing and resilience – as well as assumptions about human beings’ relations to technology. As my introductory conversation with Wysa and the developers’ choice of therapeutic approaches show, the app encodes assumptions about the relationship between body and mind, about mental health as an outcome of self-care, about suffering arising from negative thoughts, about control and self-acceptance as assets, and about the ability of people to be healed through affective relations with an AI. In doing so, the developers build on and foster standardized forms of knowing and intervening in the psyche, even while refraining from psychiatric classification and challenging conventional forms of authority and expertise in the psy sciences. 

While the use of novel technologies in psychiatry, such as digital phenotyping or the RDoC project (Pickersgill 2019), could potentially disrupt “the psy-canon” in diagnostics (Bemme, Semel & Brenman, this series), chatbot-enabled mental health care brackets diagnostics and operates on nonprofessional, everyday understandings of mental suffering that are nonetheless imagined as universal. As engineers move mental health and psychological discourse into the digital realm, they seek to pragmatically settle the longstanding problem of historical and cultural specificity by rendering it technical, digital, manageable.

What does it mean for a psychologist to regard de-contextualized therapeutic conversations as potentially more effective than those that involve, in the words of Chaitali, “a lot of context”? Is this another instantiation of an assumption prevalent in global health of regarding context, specificity or locality as an obstacle rather than a resource, bracketing rather than mobilizing context in an effort to make data portable and problems intervenable across social or even cultural differences (Gaudillière et al. 2021)? What do we lose if we sacrifice particularity for universality, standardization, and scale? What is gained?

Some clues can be found in the conversation at the beginning of this article. Note how the bot drew my attention from socio-ecological conditions to my emotional responses. It didn’t push me to take collective action but rather encouraged me to focus on how I thought and felt about things, and how I could reframe them. Although this short conversation somewhat supports the designers’ claim that bots are better-than-human interlocutors because they don’t judge, it also illustrates how the bot’s algorithmic affordances structure articulations of distress in very particular terms.  

Tech for social change

Critics would probably argue that Wysa is just another “little development device” or “micro-antipolitics machine” (Redfield 2019). But what if we take the developers seriously? They see Wysa as an agent of a larger transformation, what Jo calls a “cultural change”, towards recognizing the relevance of mental health and wellbeing in various work, educational, and living contexts, and providing care where it is otherwise absent or stigmatized. As my conversation with Wysa illustrates, the cultural shift designers envision also includes a narrative shift from focusing on adverse socio-economic-ecological conditions as determinants of mental ill health and emotional distress towards looking at the emotional and cognitive effects they produce. For Jo, this shift towards treating mental wellbeing as an object of technical intervention is actually a driver, rather than an outcome, of structural change. While social scientists would describe this process as a neoliberal ideology of self-making and responsibilization (Rose 1996, Illouz 2008), for the designers and psychologists of Wysa, it is both a creative attempt to use technology to overcome the structural constraints of what global mental health actors describe as a mental health “treatment gap” and a response to the concrete needs of people in contemporary lifeworlds. These include the need for a digital “pocket friend”, as Soumya called Wysa, available any time and everywhere, or for unstigmatized, nonjudgmental and invisible care. Wysa’s algorithms are not only designed to make people happy, consumerist, and productive, as some critics argue (Ajana 2017, Lupton and Jutel 2015); developers envision that artificial intelligence could transform how humans affectively and therapeutically relate to themselves and to others. 

After experimenting with Wysa’s self-care in the diary-writing project, Shakti, the psychologist, imagines that it would be possible to combine the app with conventional psychotherapy – what she calls a “blended approach”. Others, too, have envisioned combined digital, face-to-face and community support as the future of psychiatry (Fairburn and Patel 2017, Carpenter-Song 2020, Bhugra 2017). As lifeworlds become ever more digitized, people are increasingly turning to apps such as Wysa and other forms of digital care to address new afflictions. They do so partly by replacing nondigitized forms of healing and care, partly by combining digital and nondigital forms, as they navigate the contemporary therapeutic landscape. Robots in your pocket designed to provide affective care work, therapeutic apps reflect the contours of this landscape – but they are also reshaping it, along with the thoughts, emotions and dispositions of their users.


Claudia Lang is an associate professor (Heisenberg) of anthropology at the University of Leipzig and a research partner at the Max Planck Institute for Social Anthropology, Halle. Before that, she was a postdoctoral researcher with the GLOBHEALTH project at cermes3, Paris, and at the Ludwig-Maximilians-University, Munich. She works on the anthropology of health in India and has published on topics including depression, traditional medicine, mental health, psychiatry, religion and ritual, and health governance. She is currently working on the digitization of mental health and is interested in questions of planetary health. Email: claudia.lang@uni-leipzig.de


Acknowledgements: I want to thank the Wysa team for their kind invitation to contribute this essay, and Jovan Maud, Dörte Bemme, Natassia Brenman, and Beth Semel for their input and edits! 

References

Ajana, Btihaj. 2017. “Digital health and the biopolitics of the Quantified Self.” Digital Health 3(1): 1-18.

Béhague, Dominique, and Kenneth MacLeish. 2020. “The Global Psyche: Experiments in the Ethics and Politics of Mental Life.” Medical Anthropology Quarterly 34(1): 5-20.

Bemme, Doerte, and L. J. Kirmayer. 2020. “Global Mental Health: Interdisciplinary challenges for a field in motion.” Transcultural Psychiatry 57(1): 3-18.

Bhugra, Dinesh, et al. 2017. “The WPA-Lancet Psychiatry Commission on the Future of Psychiatry.” The Lancet Psychiatry 4(10): 775-818.

Ecks, S. 2020. “A Medical Anthropology of the ‘Global Psyche’.” Medical Anthropology Quarterly 34(1): 143-147. doi: 10.1111/maq.12569.

Fitzpatrick, K. K., A. Darcy, and M. Vierhile. 2017. “Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial.” JMIR Mental Health 4(2): e19. doi: 10.2196/mental.7785.

Gaudillière, Jean-Paul, Andrew McDowell, Claudia Lang, and Claire Beaudevin. 2021. The Health of Others: Knowledge, Politics, and Practices in Global Health. New Brunswick: Rutgers University Press.

Gooding, P. 2019. “Mapping the rise of digital mental health technologies: Emerging issues for law and society.” International Journal of Law and Psychiatry 67: 101498. doi: 10.1016/j.ijlp.2019.101498.

Illouz, Eva. 2008. Saving the Modern Soul: Therapy, Emotions, and the Culture of Self-Help. Berkeley: University of California Press.

Lovell, Anne M., Ursula Read, and Claudia Lang. 2019. “Genealogies and New Anthropologies of Global Mental Health.” Culture, Medicine, and Psychiatry 43(4): 519-547.

Lupton, Deborah. 2019. Data Selves: More-than-Human Perspectives. Polity Press.

Lupton, Deborah, and A. Jutel. 2015. “‘It’s like having a physician in your pocket!’ A critical analysis of self-diagnosis smartphone apps.” Social Science & Medicine 133: 128-135.

Miner, Adam, et al. 2017. “Talking to Machines About Personal Mental Health Problems.” JAMA: Journal of the American Medical Association 318(13): 1217-1218.

Mushtaq, Mariam. 2021. “Wysa is one of Google Assistant’s first investments in Asia.” https://blogs.wysa.io/blog/b2b-partnerships/wysa-is-one-of-google-assistants-first-investments-in-asia.

Patel, V., S. Saxena, C. Lund, G. Thornicroft, F. Baingana, P. Bolton, D. Chisholm, P. Y. Collins, J. L. Cooper, J. Eaton, H. Herrman, M. M. Herzallah, Y. Huang, M. J. D. Jordans, A. Kleinman, M. E. Medina-Mora, E. Morgan, U. Niaz, O. Omigbodun, M. Prince, A. Rahman, B. Saraceno, B. K. Sarkar, M. De Silva, I. Singh, D. J. Stein, C. Sunkel, and J. Unützer. 2018. “The Lancet Commission on global mental health and sustainable development.” The Lancet 392(10157): 1553-1598. doi: 10.1016/S0140-6736(18)31612-X.

Pickersgill, Martyn. 2019. “Psychiatry and the Sociology of Novelty: Negotiating the US National Institute of Mental Health ‘Research Domain Criteria’ (RDoC).” Science, Technology & Human Values 44(4): 612-633.

Redfield, Peter. 2019. “The usefulness of uncertain critique.” In A Time for Critique, edited by Didier Fassin and Bernard Harcourt. New York: Columbia University Press.

Rose, Nikolas. 1996. Inventing Our Selves: Psychology, Power, and Personhood. Cambridge, UK: Cambridge University Press.

Ruckenstein, Minna, and Natasha Dow Schüll. 2017. “The Datafication of Health.” Annual Review of Anthropology 46: 261-278.

Shulevitz, Judith. 2018. “Alexa, Should We Trust You?” The Atlantic. https://www.theatlantic.com/magazine/archive/2018/11/alexa-how-will-you-change-us/570844/.

Venugopalan, Anjali. 2021. “Google Assistant invests in Indian mental health AI app Wysa.” The Economic Times, 16 March 2021.