The cultural consequences of automating emotional labor

We exist in a feedback loop with our devices. The upbringing of conversational agents invariably turns into the upbringing of users. It’s impossible to predict what AI might do to our feelings. However, if we regard emotional intelligence as a set of specific skills – recognising emotions, distinguishing between different feelings and labelling them, using emotional information to guide thinking and behaviour – then it’s worth reflecting on what could happen once we offload these skills on to our gadgets.

Interacting with and via machines has already changed the way that humans relate to one another. For one, our written communication is increasingly mimicking oral communication. Twenty years ago, emails still existed within the boundaries of the epistolary genre; they were essentially letters typed on a computer. The Marquise de Merteuil in Les Liaisons Dangereuses (1782) could write one of those. Today’s emails, however, seem more and more like Twitter posts: abrupt, often incomplete sentences, thumbed out or dictated to a mobile device.

‘All these systems are likely to limit the diversity of how we think and how we interact with people,’ says José Hernández-Orallo, a philosopher and computer scientist at the Technical University of Valencia in Spain. Because we adapt our own language to the language and intelligence of our peers, Hernández-Orallo says, our conversations with AI might indeed change the way we talk to each other. Might our language of feelings become more standardised and less personal after years of discussing our private affairs with Siri? After all, the more predictable our behaviour, the more easily it can be monetised.

https://aeon.co/essays/can-emotion-regulating-tech-translate-across-cultures