On How to Measure Knowledge in the Age of AI…

In college, I took a class in German. I wasn’t very good at it, but I’m normally really good at studying and taking tests, so I figured I’d be fine.

But here’s the thing: a huge part of my grade for the class was an “oral exam,” which meant I just had to sit in my professor’s office and have a conversation with her for five minutes. She greeted me in German, and then asked me about my day and made small talk. I had to actually put together a coherent conversation in German, on the spot; no book, no notes.

There was no “hacking” my way around this. Either I knew it or I didn’t. It underscored for me that most traditional exams or assignments are just analogues for the real world – they’re contrived ways of trying to figure out if you actually know something. But having a conversation in German WAS the real world. This was the entire purpose of the class. It was time to put up or shut up.

(I assume the professor was sensitive to issues involving social phobia or neuro-divergence. She seemed like a smart, compassionate woman who had been doing this for a long time. My gut told me she had a couple decades of experience that could immediately tell her if someone was putting in the effort, regardless of their actual, extemporaneous proficiency.)

For many years, I taught a couple classes on content management at a European university. Pre-COVID, I visited a few times and taught in person, but the work was otherwise remote. This meant that all the students’ work was evaluated without my ever having met or had a conversation with them.

My last year teaching the course (I had taken on a co-teacher by that point), we got our first submission that was obviously generated by AI. We called the students out on it, and I believe it was resubmitted (my co-teacher was in charge of that), but it underscored to me that I was leaving the position at the right time. I’m just not in the mood to deal with any of that.

Still, it’s left me wondering: what is the equivalent of just sitting in a room and having a conversation with a professor in German? How would you find a method of evaluation like that, one which (1) exactly represents the point of what you’re learning, and (2) cannot be circumvented? What is the correct epistemological method?

Should we just have had a 30-minute conversation with each student, and asked them, “So, what have you learned?” and then maybe followed up with some questions, to gauge where their head is at? Would that have been fair? (We could, of course, have just assigned a CMS implementation or something, but that was beyond the scope of the course.)

I don’t have any role in hiring or managing, and I don’t teach anymore, so I guess I don’t have to worry so much about it. But I do wonder – in this new world, how will we effectively evaluate proficiency? How will we figure out if someone actually knows what they say they know?

When everything is remote, and AI exists, what’s the “five-minute conversation” test of our times?