An experiment is taking place at a Michigan university (Ferris State University), where two AI students have enrolled and will take the same route through courses as their fellow human students. The aim is to evaluate the ‘student experience’. Interesting idea.
We’ve had successful teaching assistants in Higher Education since 2016, along with adaptive, personalised teaching systems that educate everyone uniquely. We also know that current LLMs can crush high-stakes exams in HE, but this experiment attempts to track the learning experience of students, so the AI can make its own decisions, choose courses and generally have the agency of a real student.
There is a more fundamental idea here: the mystery shopper or evaluator for any service, whether in education, healthcare, retail, whatever. Run bot customers through the system and see what happens. This is a great way to identify flaws, redundant processes and risks. It can be built to include risk analysis methods to identify and quantify risks, as well as recommend improvements. The choice of 'student experience' is a bit of a get-out, though. Let's not tackle teaching and learning, let's see if they enjoy their gilded cage?
As a reflective experiment this has some merit. Much is made of the student experience, yet little is done to improve teaching and learning, with the lecture and essay still rock solid as core pedagogies. Placing a proxy learner in context, going through the motions, is interesting.
Careful what you wish for
They have to be careful what they wish for here. The AI will have a flawless digital memory, will not sleep, will be super quick at tasks, never distracted, can multitask and network, and never gets a hangover. Will it smash the exams? Will it have the urge to cheat?
The set-up will be important. Will they be modelled on typical student behaviour, where 40% don't turn up to lectures?
The AI students are called Ann and Fry – odd, as they say no genders are attributed. They are not robots but will listen through microphones, do assignments and eventually speak. They will have to go through the admissions process, then registration, and make decisions on what classes they want to take.
In HE, it would, I’d imagine, do a hatchet job on lectures (transience effect, cognitive overload, little interaction, poor slide design and poor teaching). It could compare live to recorded lecture experiences and, in its eyes, conclude that the convenience of recorded lectures wins hands down. It could critique the need for long absences during holiday breaks, or the odd idea of having to wait for months, even a full year, to resit an exam. It could notice the number of students consistently absent from lectures.
To be fair, it is being built in partnership with the U.S. Department of Defense, National Security Agency, Department of Homeland Security and Amazon Web Services. That's some pretty heavy firepower right there.
The lack of research on using AI to teach and learn has been puzzling, compared to studies on productivity in the workplace; such studies have been noticeable by their absence. Yet we have truckloads of papers, frameworks and reports on ethics and AI. I'd like to see two randomised groups, one with AI and the other without, to measure impact on learning and performance. In many ways that would be more useful.
Of course, we all know what will actually happen… you don’t need millions to see the weaknesses already… but more power to their elbow. The US is taking the AI bull by the horns, while others write endless reports on ethics… it’s easy to be a critic, much harder to do real stuff and test against reality.