Thursday, May 31, 2007

Emotional Intelligence

Last night we went to a meeting of the Institute of Electrical and Electronics Engineers. Cade has been a member of IEEE--referred to by insiders as "I Triple E," which always makes me think of the redneck call to arms frequently heard around one of my father's Friday night bonfires--for many years, but this is the first local meeting we've attended. Normally, domestic demands and a general lack of interest in the subject matter prevent us from going, but last night's topic was too tempting, so we got Cade's dad to babysit and headed over to Cannon's for supper and education.

The speaker last night was a woman from MIT who heads a research program focused on designing software that recognizes affective (emotional) states. They've begun testing the software with people with autism (or "autistics," as Ms. Picard referred to them), with the idea that these empathic machines might help them interact more normally. Ms. Picard did a brief demonstration, fielded some questions, and took pains to emphasize the many strengths that people with autism--I just cannot refer to them as "autistics"--exhibit.

It was truly fascinating, and the research is exciting, but I can't help questioning the basic tenets of the research she described. Is it possible to train a machine to be empathic? Isn't empathy something that's fundamentally mysterious, and subjective, and full of everything that resists scientific inquiry? For example: assuming that cultural/social norms affect the way a person displays and interprets emotion, is it really possible to train a machine to recognize all of the various and sometimes subtle differences in affective communication? This would require some mass generalizations (e.g., that Japanese people have flat affects), which makes everyone uncomfortable--as evidenced by Ms. Picard's response to my question regarding this very subject, wherein she made some vague and slightly defensive comments about avoiding cultural stereotypes and didn't really answer my question at all.

The question being: can a machine really tell us how we're feeling? Do we really want it to?

Or maybe I'm the one feeling defensive, because that's supposed to be my job?

3 comments:

Ashley said...

Damn. I'm a dues-paying IEEE member, but I don't hear about this stuff. Maybe they still have me in Chicago.

I'm not familiar with Ms. Picard's work, but her MIT colleague, Pattie Maes, has done some amazing stuff.

Also there at MIT on a fellowship is Dale Joachim. Too bad Tulane Engineering and Computer Science was so weak that they had to destroy the department...so weak that the former faculty end up at MIT. But I digress...

Ashley said...

Oh, and the "emotional agent" thing has been 'hot' for about 10 years now...I've avoided that research on purpose, because most of the people doing it are loopy. Not all, most.

Leigh C. said...

If machines are ever imbued with the capacity for emotion, it will most certainly be a situation akin to that of the golems of the middle ages. People will be running around aghast at their creations that have moved beyond their control. An off-switch will be essential.

Ms MIT doesn't have kids, right? ;-)