AI Decoder: When Big Brother Enters The Classroom

Chinese schools are using artificial intelligence-empowered facial recognition to keep watch over students, and prod them to perform better.

They’re cheering. Should we?

And are our classrooms next?

Two old friends — a clueless but inquisitive journalist (Paul) and an expert who has spent decades studying AI and developing businesses based on it (Jeffrey) — "talked" about those questions in a Google doc conversation. An edited version of the conversation follows. (Click here to read about the Chinese experiment.)

Paul: What caught your eye about this story?

Jeffrey: It is more proof of how AI is empowering governments to gather data on citizens. It can now be done at a very different level of intensity.

China is certainly an early and enthusiastic user, but it is naive to think that the technology will stay there. It is already here under the usual guise of security — like cameras screening people at the Super Bowl and matching photos against suspect databases. The power is hard to resist.

I find this example horrifying and am glad that I didn’t face it when I was in school. It is punitive, and not how I think AI can be used to promote learning.

How specifically is AI working in this case? As opposed to simply watching someone with a camera?

Given the incredible resolution of cameras and a big body of work in visual analysis, there is a lot that can be done with the data, even detecting emotions. It’s not perfect (and that’s a big issue by the way). But it’s very good.

So is the AI part involving matching the data captured by the camera with other data on a mega scale?

Well specifically you have a library of “bored” looks, and you match against that.

You are essentially asking, "Is the face I'm looking at like the faces that suggest boredom?" It's called supervised learning, which requires a lot of human labelers.

Don’t forget that “boredom” is a human concept. Machines don’t know what boredom means.

These systems say boredom is anything that looks like that.

The point is that it is just amplifying a human judgment. Like the fact that you aren’t typing anymore indicates that you are probably bored.
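The "library of bored looks" idea can be sketched in a few lines. This is a toy illustration of the supervised-learning point Jeffrey is making: the system has no concept of boredom, only human-labeled examples to match against. The feature names and numbers here are invented for illustration; a real system would get features from a face-analysis model, not three hand-picked values.

```python
import math

# Human-labeled "library" of looks: (eyes_open, gaze_on_teacher, head_tilt).
# The labels encode a human judgment; the machine only measures similarity.
labeled_faces = [
    ((0.2, 0.1, 0.8), "bored"),
    ((0.3, 0.2, 0.7), "bored"),
    ((0.9, 0.9, 0.1), "engaged"),
    ((0.8, 0.7, 0.2), "engaged"),
]

def classify(face):
    """Label a new face with the label of its nearest labeled example (1-NN)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(labeled_faces, key=lambda ex: dist(ex[0], face))
    return label

print(classify((0.25, 0.15, 0.75)))  # near the "bored" examples, so: bored
```

A face that resembles the "bored" examples gets called bored — whatever the student is actually feeling.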

Is it AI’s “fault” that you might misinterpret a lack of typing as boredom (rather than, God forbid, pausing for thought)? Or is the problem inherent in the process?

The problem is inherent.

And people shouldn’t forget that the system will always have some error rate. Kind of like people. But human errors are different from the errors machines make.

I think that intervention at this classroom level is crazy and inhuman. It is deeply invasive and it will have a pretty high error rate. A lot of kids will get notes that are misguided.
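A rough back-of-envelope calculation shows why even a modest error rate produces a lot of misguided notes at classroom scale. Every number below is a hypothetical assumption, chosen only to illustrate how the volume of checks multiplies the error rate.

```python
# Back-of-envelope: mistaken "bored" flags per day in one classroom.
# All figures are illustrative assumptions, not measurements of any system.

students = 30
checks_per_hour = 60          # one snapshot per student per minute
hours_per_day = 6
false_positive_rate = 0.05    # 5% of engaged students misread as bored

checks_per_day = students * checks_per_hour * hours_per_day
misguided_flags = checks_per_day * false_positive_rate
print(misguided_flags)  # 540.0 mistaken flags per day
```

Even at 95 percent accuracy per check, a single classroom generates hundreds of false alarms a day.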

And, besides, boredom may be a good signal, like the teacher isn’t making sense.

And we just had a good example of data problems. I thought you weren’t typing.  But you were, just below what I could see.

My prediction is that this type of intervention will amplify bad aspects of school.

Then why would it be in China’s interest to do this?

Because of their philosophy and approach, which in my experience they have some conflicts about.

We have over-involved parents here, too. Like parents who can’t stop texting their kids in high school.

This is a similar kind of invasiveness. But here we have a machine, which means that the same bad thing will be scaled up and normalized.

One can easily imagine, in a metrics-mad world, writing standards for how involved students should be and measuring compliance with these somewhat unreliable systems.

But we sanctify numbers. We sanctify measurement. Boredom is actually not a bad thing always.

Interesting! Is this an extreme example of America’s data-driven fixation in school reform? Or perhaps of Singapore’s results-focused approach?

It is all of the same spirit. It stems from the factory model of education. And this is quality control.

The story mentions that at least one kid performs better simply because he knows the camera’s watching him. Is there an upside to that? Downside?

Of course there is one kid who says that. You have to realize that the issue here is scale. It is well known in education research that any change will boost some performance temporarily.

What happens next with the data collected in these learning environments? How might the state use it for better or worse?

I’m not convinced that classroom data like this will ever be useful. There are some interesting applications like recordings of classrooms that let a teacher see what they have talked about and what they might have missed.

What do teachers want? That’s the key.

Obviously the Chinese have used these facial recognition tools to monitor potential dissent and send hundreds of thousands of Uighurs to reeducation camps. Is there an AI solution — or any other solution — to prevent the technology from being misused that way?

No. That’s a human challenge.

You could imagine sci-fi disruptors, but realistically the Chinese will be able to do this for the foreseeable future. We have not established a language to talk about acceptable use.

It stems in part from the fact that there is as yet no legal structure for owning your own data. Facebook, Google et al. make money off of you, the product. You exchange that for a service.

As someone who believes that AI can help us live better lives — who works on developing those uses — do you see an upside to the use of facial recognition technology in classrooms or in society at large?

Facial recognition has lots of uses, visual recognition more broadly.

Here’s a surprising example: The military has explored using visual recognition on the tips of missiles to help them avoid hitting civilian targets. I’ve seen demos where the missile looks as though it is going to hit a bus and it aborts.

Obviously the downside is if that technology succeeds — it becomes way too easy to kill people, to avoid tougher diplomatic and foreign-policy choices, and to avoid feeling any human cost …

The military is actually very aware of this profound issue.

Facial recognition has also been used to help treat autistic children who have trouble recognizing cues.

That would suggest that you see promise as well as peril in the use of facial recognition in education. True?

Absolutely. When it comes to amplifying certain control/measure models, yes. When it comes to autonomous systems working with students, there is opportunity and, I think, value.

In the technology you’re developing — using AI to help people learn how to teach, or to learn ESL — is facial recognition a part?

We have experimented with this idea some. Again, not in a way to control the student but to assess whether we are being interesting or responding to what they need.

So you do have to know whether someone is actually looking bored, or truly being bored!

You can test it. Like what we humans do: “Am I boring you?” It lets the system ask questions.

And you would trust a person’s answers more than “objective” visual or other cues?

The goal is not to tell whether the student is bored but to find out whether they need some kind of attention. Sometimes apparent boredom might actually be anxiety because a student is not following.

We look for signals that tell us to learn something about the student’s state.
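One way to read "it lets the system ask questions" is to treat the classifier's output as a cue to inquire rather than a verdict to report. A minimal sketch of that design choice follows; the labels, confidence threshold, and responses are all hypothetical assumptions, not a description of any shipping product.

```python
# Sketch: an uncertain "bored" signal triggers a question, never a report.
# Threshold and response strings are illustrative assumptions.

def respond(label, confidence, threshold=0.9):
    """Turn a (label, confidence) estimate into a tutoring action."""
    if label != "bored":
        return "continue lesson"
    if confidence >= threshold:
        return "change activity"        # strong signal: adapt the material
    return "ask: am I boring you?"      # weak signal: check with the student

print(respond("bored", 0.6))  # prints: ask: am I boring you?
```

The point of the design is that a low-confidence signal leads the system to learn more about the student's state, not to flag the student.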

So bottom line: Where does AI go next in improving education? And what policy questions will that confront us with?

There are many ways. One big problem is that feedback is not available in many places; students there have no teachers and no support system. I think AI can help address this by assisting students who don’t have that access.


Comments

posted by: robn on October 2, 2018  2:13pm

AI is just a tool to give us better information to make better choices. Short of true sentience in AI, we’ll always have to choose.
The problem with the described concept is that, just like IRL, when students find out that they won’t get spanked, they’re going to misbehave.

posted by: Jill_the_Pill on October 2, 2018  7:31pm

“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”—Mark Twain

AI, like many algorithmic techniques, gives the illusion of knowing something objective or unbiased.  That illusion replaces real understanding and can discourage further efforts to learn more, assuming it’s all already settled.