
Personalised learning or a flight of fancy?

From the time when I was a very young girl until probably my late twenties, I recall the intense irritation I felt when (usually) older men used to say to me something along the lines of, ‘Smile, it might not happen!’. Setting aside the horribly gendered implications (I was a child in the 1960s, when little girls were supposed to smile), I have the sort of mouth that in repose tips downwards. This conveys the demeanour of one who is unhappy, discontented or sour. I’m none of those things on a permanent basis. As an older woman now (and thus largely invisible) I find people comment less on my facial expression, but if they do it is invariably to express surprise that the picture they see on my face is not reliably consistent with the words they hear uttered from my mouth.

I know I am by no means unique in the south-pointing mouth department. A more notable example is Helen Clark, former Prime Minister of New Zealand. While being female, capable and staunch were probably more than sufficient to provoke much of the vitriol and bigotry she has been subjected to, I’ve often wondered whether being in possession of a mouth that naturally turns down at the edges has exaggerated the effect: a single anatomical feature ‘confirming’ the worst generalisations about her character.

And this bias, bothersome as it is, is at least a human bias. More troubling is the palpable excitement and enthusiasm in educational circles for affective computing, or emotion artificial intelligence (AI). Here, such bias can be compiled into and rapidly promulgated by machines. Affective computing proposes to identify, interpret and predict human affect, a major component of which is emotion. A range of data is used for this enterprise: automated detection of facial expression is an important source, but so are the analysis of speech and language, and physiological measurements such as pulse rate, blood flow and skin resistance.
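For the sake of concreteness, a minimal sketch follows. The names, signal channels and toy rule are my own hypothetical illustration, not any real product’s API; the point is simply that each channel is evidence from which emotion is inferred, and that whatever bias informs the rule is applied consistently, and at scale, by the machine.

```python
# Hypothetical sketch (illustrative names only, not a real system's API) of
# the signal channels described above, and of how a biased rule, once
# compiled into a machine, is applied consistently and at scale.

from typing import TypedDict

class AffectSignals(TypedDict):
    facial_expression: str       # label from an expression detector
    transcript: str              # speech/language to be analysed
    pulse_rate_bpm: float        # physiological measurements
    skin_resistance_kohm: float

def infer_emotion(signals: AffectSignals) -> str:
    """Every step here is an inference, not a measurement of emotion itself."""
    # A real system would fuse these channels with a trained model; the toy
    # rule below stands in for whatever bias the model has absorbed.
    if signals["facial_expression"] == "downturned mouth":
        return "unhappy"
    return "neutral"

print(infer_emotion({
    "facial_expression": "downturned mouth",
    "transcript": "I'm perfectly content, thanks.",
    "pulse_rate_bpm": 72.0,
    "skin_resistance_kohm": 150.0,
}))  # prints 'unhappy' despite the words: the human bias, now automated
```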

As the owner of a face that is regularly misinterpreted by others, I have more than a few misgivings about building algorithms that rely, in whole or in part, on identifying facial features associated with specific emotions. And setting aside the issue of troublesome facial features, how confident can any of us be in differentiating emotions? How to distinguish anger from fear? Melancholy from sorrow? Emotions manifest in subtle ways and can change in a moment.

“Orlando would fall into one of his moods of melancholy; the sight of the old woman hobbling over the ice might be the cause of it, or nothing […] For the philosopher is right who says that nothing thicker than a knife’s blade separates happiness from melancholy.” — Virginia Woolf

Speech and language can of course be used to gauge emotion. We use language in this way all the time – and we regularly get it wrong, even with context and a range of cues, such as tone of voice or body language, to draw on. When language is taken out of context or is incomplete, it may be impossible to discern the emotional state of the speaker. And, when language becomes data, context is the first casualty. If it is hard enough for people to correctly discern emotion or, more broadly, affective state, machines working from impoverished context will fare no better – and certainly not to within the tolerance of the philosopher’s knife blade.

If the use of facial features and linguistic analysis to automatically detect emotion is a bit wobbly, the foundation for physiological measurement is blancmange. Heart rate and electrical activity of the heart are easy to measure and ideal for telling us about the state of the human cardiovascular system. For example, an irregularly irregular pulse and the absence of P waves on an ECG usefully suggest atrial fibrillation in a patient. They are useful measures precisely because we can explain, in detail, the relationship between the cardiac cycle (by which blood is circulated through the body and lungs), the quality of the pulse felt and changes in the electrical activity of the heart. By contrast, readily accessible physiological measures are only general indicators of emotional state and may not be causally related to emotional state at all. A sympathetic fight-or-flight response can occur in a wide range of contexts. A pedagogenic fear of exams and anxiety that a boyfriend’s mother is coming to visit may induce similar biometrics, but their aetiologies are quite different and call for different interventions – assuming making an intervention is the point of taking the measures in the first place. Simple biometrics are not going to distinguish the two. Setting aside potential pathologies, a regular, rapid pulse and raised blood pressure in a student could equally be a reflection of plain old physiology: maybe the student came to class after a party, or perhaps they were just running late.
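To put the aetiology problem in concrete terms, here is a minimal sketch, with entirely hypothetical readings and a toy rule of my own invention: any deterministic classifier that sees only the biometric signal must return the same label for identical readings, whatever their actual cause.

```python
# Minimal sketch, assuming hypothetical readings: a classifier that sees only
# the biometric signal must return the same inference for identical readings,
# whatever their actual cause.

from dataclasses import dataclass

@dataclass(frozen=True)
class Biometrics:
    heart_rate_bpm: int     # pulse rate, beats per minute
    systolic_bp_mmhg: int   # systolic blood pressure

def infer_affect(reading: Biometrics) -> str:
    """Toy rule: elevated pulse and blood pressure -> 'anxious'."""
    if reading.heart_rate_bpm > 100 and reading.systolic_bp_mmhg > 140:
        return "anxious"
    return "calm"

# Three very different situations can produce the same reading:
causes = {
    "fear of exams": Biometrics(112, 148),
    "boyfriend's mother visiting": Biometrics(112, 148),
    "ran to class after a party": Biometrics(112, 148),
}

for cause, reading in causes.items():
    print(f"{cause}: {infer_affect(reading)}")
# All three print 'anxious': the signal alone cannot recover the cause,
# so it cannot tell us which intervention, if any, is appropriate.
```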

The American Psychological Association’s assessment of polygraph (‘lie detector’) tests makes a similar argument. Polygraphs employ physiological measurements to infer whether a subject is responding truthfully to statements. Their use is contested and, in most countries, they are not admissible in court. If we cannot be confident that simple physiological measures or biometrics can accurately and reliably indicate a binary motive – to speak the truth or to fabricate – why would we imagine they could distinguish complex, mercurial, multi-valued emotions?

Yet already, in a few schools and classrooms, biosensors are starting to appear: video cameras for facial expression detection, heart rate or blood flow monitors, and EEG skull caps. Data harvested from these devices, together with interaction data from apps used as part of coursework, is monitored and analysed with the aim of enhancing learning by influencing a student’s affective state. Except these devices don’t monitor emotion or affect – they provide data from which we, rightly or wrongly, infer the presence of an emotion. Nor do they provide actionable insights to teachers or to students – in the wild of the classroom they cannot hope to identify the source of any particular emotion with confidence. What they can do is help to shape behaviour.

The inimitable Audrey Watters’ latest piece on educational technology and the new behaviourism cuts to the quick of this issue:

[Proponents of affective educational technologies] will guide us – algorithmically, of course – to “good” academics and “good” thoughts and “good” feelings and “good” behavior, defining and designing, of course, what “good” looks like.

Her cautionary article points out that emotional and social learning, developing things like grit and a growth mindset, are being rapidly turned into technology products designed to change behaviour and to change it on a very personal level.

Personalised learning – understanding individual needs and responding appropriately and in a timely manner – has been on the AI in education agenda since Pressey and Skinner and their teaching machines. It has resurfaced periodically in the form of the Keller Plan, intelligent tutoring and adaptive testing, but has always been dogged by its behaviourist roots – until now.

The dominant discourse in education, which traces back to Dewey, eschews the worst of behaviourism’s excesses. It holds that formalised teaching and learning should be student-centred, inquiry-led, community-supported and attend to the whole student. Yet how better to attend to the whole student than to marshal biosensors, big data and machine learning to meet unique individual needs, including social and emotional needs? The result is what Ben Williamson has delightfully called Big Dewey, which combines progressivism with big data and analytics, and undergirds the drive towards personalised education.

But if we don’t understand the cause of an emotion how can we effect change to address it? We cannot. All we can do is effect change in the thing we are measuring – we can slow our pulse, modify our EEG and of course, smile at the teacher.

And as Watters convincingly argues, wittingly or otherwise, measurement, control, vested interest and commerce are at the heart of this project. The most alarming signs of what may lie just over the horizon include monitored classrooms, DIY neurofeedback kits and reports like this one from the Potomac Institute. But Big Dewey has issued his rallying cry and the pundits and politicos are taking note.

Perhaps 2017 has been the year for parody – for Big Dewey is surely a parody of both education and science – a mawkish, flatulent figure, resplendent in a silicon suit and topped with a jaunty skull cap, flapping his arms and willing his students to fly.


Clover’s triplets present their facial features. Photo: Nick Beckwith