Eye movements reveal our personality type: enhancing AI interpretations of non-verbal cues
Updated: Jun 3, 2020
New research shows that AI technology can be used to decipher ‘who we are’ simply by recording our eye movements. But can AI be trusted in clinical contexts?
Recent years have seen growing interest in non-verbal communication as a window onto an individual’s state of mind, including the decoding of facial expressions, physical gestures, posture, mimicry, micro-expressions, blink rate, eye movements, and the duration of eye contact. This line of research is becoming more popular in clinical situations, such as with patients with dementia who are unable to express themselves verbally. Despite all this attention, surprisingly little is known about the links between a person’s state of mind or personality and its external manifestations in real-world situations.
One related line of research has reported that eye movements are governed by our personality. A natural next question is whether, in turn, personality traits can be predicted from eye movements. This was the subject of a recent study published this week in Frontiers in Human Neuroscience, conducted at the University of South Australia in collaboration with the University of Stuttgart and the Max Planck Institute for Informatics.
It’s easy to see why this paper has made mainstream news this week. Using state-of-the-art machine learning, eye movements recorded during a brief walk and a stop at a shop reliably predicted four of the Big Five personality traits (neuroticism, extraversion, agreeableness, conscientiousness) as well as perceptual curiosity. These findings not only underline the role of personality in eye movement behaviour in naturalistic settings, but also identify eye movement predictors of specific personality traits. As well as supporting previous lab-based work (e.g. using eye movements to predict cognitive states), this study was the first to show that these results hold in real-world settings.
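To make the general approach concrete, here is a minimal sketch of how eye-movement features might be used to predict a personality-trait label. This is not the study’s actual pipeline: the feature names, the synthetic data, and the choice of a random-forest classifier are all illustrative assumptions.

```python
# Hypothetical sketch only: invented summary gaze features per participant,
# fed to a random-forest classifier with cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120  # invented number of participants

# Invented features: mean fixation duration (ms), saccade rate (per s),
# mean pupil diameter (mm), blink rate (per min).
X = np.column_stack([
    rng.normal(250, 40, n),
    rng.normal(3.0, 0.5, n),
    rng.normal(4.0, 0.4, n),
    rng.normal(15, 4, n),
])

# Invented binary target: high vs. low score on one trait questionnaire,
# loosely tied to fixation duration plus noise, just so there is signal.
y = (X[:, 0] + 10 * rng.normal(size=n) > 250).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

In a sketch like this, cross-validation matters: with small participant samples, reporting accuracy on the training data would badly overstate how well gaze features generalize to new people.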
STRONG IMPLICATIONS IN SOCIAL ROBOTICS
One exciting application of this technology is the contribution to social signal processing and social robots.
“This line of research might help to make human-machine interactions more natural. If robots/computers/machines can detect traits or mood states, they might adapt how they interact with humans,” says Dr. Tobias Loetscher, Senior Lecturer in Psychology at the University of South Australia and co-author of the published study.
When the Turing test was developed in the 1950s (the test that measures a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human), ‘communication’ was taken to mean primarily spoken (and written) language. The last few decades have revealed that non-verbal communication is at least as important in human-human interactions. For the development of convincing AI technologies, it is therefore essential that this kind of work comes to the forefront: reacting to non-verbal cues the way a human would is itself an exhibition of ‘intelligent’ behaviour. This sensitivity will increase the efficiency and flexibility of such AI systems – but no, a Westworld dystopia isn’t yet on the cards.
But for all the technological possibilities, some rather big questions lurk behind an otherwise sensational report on AI ability. Can we trust algorithms? Let’s not forget Google’s outrageous blunder in its 2015 image-recognition algorithm. And what about the use of AI when the stakes are even higher? Reports of over-policing of black neighbourhoods due to biased input data spring to mind, but perhaps the most pertinent discussion to be had is the use of AI in healthcare.
CAN AI BE TRUSTED IN CLINICAL CONTEXTS?
One of the most lucrative applications of AI technology is in healthcare. The value of AI has been demonstrated in several fields, including medical research and remote dental monitoring, for example. “I’m mainly interested in the clinical applications,” Dr. Loetscher told me. “Among other things, I’d like to use eye movements and AI to detect and diagnose clinical disorders such as mild cognitive impairments and dementias.” Supposedly, such technology could be used to measure changes in cognition or personality, which can be indicative of pathological brain changes. But is enough yet known about eye movements in such disorders to calibrate this AI technology appropriately?
There are other, more political questions surrounding the use of AI in disease diagnostics. Let’s not forget the third-party data concerns raised by DeepMind’s handling of patient records, or Google Flu Trends’ double failure. It’s clearly not a simple issue. “Discussion about regulation of AI and invasion of privacy needs to be intensified,” says Dr. Loetscher. “How can misuse be prevented?”
This leads to another question: could this AI eventually characterize someone’s personality more accurately than real-life clinicians or personality questionnaires? At the moment, probably not, and comparisons with the accuracy of human evaluations of personality have yet to be made. “If I had to guess, I would think that an expert clinician observing the participants in the ‘real world’ for a while would outperform our AI – at least for now,” says Dr. Loetscher. “Certain personality traits are probably easily recognizable if a person’s behaviour is observed for a while.”
This is also an issue to be approached with some caution; indeed, there have been misleading reports about the diagnostic viability of AI in patients with breast cancer. According to an article in Nature, AI tools are not tested, developed, or peer-reviewed with the same rigour as, for example, new drugs. The published paper, however, is careful to note that its findings still need to be qualified in other populations and during different activities, and it makes several suggestions for future research.
QUOTE SOURCE: PERSONAL COMMUNICATION