Last night we went to a meeting of the Institute of Electrical and Electronics Engineers. Cade has been a member of IEEE--referred to by insiders as "I Triple E," which always makes me think of the redneck call to arms frequently heard around one of my father's Friday night bonfires--for many years, but this is the first local meeting we've attended. Normally, domestic demands and a general lack of interest in the subject matter keep us home, but last night's topic was too tempting, so we got Cade's dad to babysit and headed over to Cannon's for supper and education.
The speaker last night was a woman from MIT who heads a research program focused on designing software that recognizes affective (emotional) states. They've begun testing the software with people with autism (or "autistics," as Ms. Picard referred to them), with the idea that these empathic machines might help them interact more normally. Ms. Picard did a brief demonstration, fielded some questions, and took pains to emphasize the many strengths that people with autism--I just cannot refer to them as "autistics"--exhibit.
It was truly fascinating, and the research is exciting, but I can't help questioning the basic tenets of the work she described. Is it possible to train a machine to be empathic? Isn't empathy something that's fundamentally mysterious, and subjective, and full of everything that resists scientific inquiry? For example: assuming that cultural and social norms shape the way a person displays and interprets emotion, is it really possible to train a machine to recognize all of the various and sometimes subtle differences in affective communication? This would seem to require some mass generalizations (e.g., Japanese people have flat affects), which makes everyone uncomfortable--as evidenced by Ms. Picard's response to my question on this very subject, wherein she made some vague and slightly defensive comments about avoiding cultural stereotypes and didn't really answer my question at all.
The question being: can a machine really tell us how we're feeling? Do we really want it to?
Or maybe I'm the one feeling defensive, because that's supposed to be my job?