How AI Could Transform the Way Schools Test Kids
PISA, the influential international assessment, expects to integrate AI into the design of its 2029 test. Piacentini said the Organization for Economic Cooperation and Development, which runs PISA, is exploring the possible use of AI in several realms.
- It plans to evaluate students on their ability to use AI tools and to recognize AI-generated information.
- It’s evaluating whether AI could help write test questions, which could be a major time and money saver for test creators. (Big test makers like Pearson are already doing this, he said.)
- It’s considering whether AI could score tests. According to Piacentini, there’s promising evidence that AI can accurately and effectively score even relatively complex student work.
- Perhaps most significantly, the organization is exploring how AI could help create tests that are “much more interesting and much more authentic,” as Piacentini puts it.
Using AI to design tests opens up all sorts of opportunities. Career and technical education students could be assessed on their practical skills through AI-driven simulations: Automotive students, for example, could work through a simulation that tests their ability to fix a car, Piacentini said.
Right now, those hands-on tests are incredibly labor-intensive and costly – “it’s almost like shooting a movie,” Piacentini said. But AI could help put such tests within reach for students and schools around the world.
AI-driven tests could also do a better job of assessing students’ problem-solving abilities and other skills, he said. Such tests might prompt students when they’ve made a mistake and nudge them toward a better way of approaching a problem. They could evaluate students on their ability to craft an argument and persuade a chatbot. And they could help tailor tests to a student’s specific cultural and educational context.
“One of the biggest problems that PISA has is when we’re testing students in Singapore, in sub-Saharan Africa, it’s a completely different universe. It’s very hard to build a single test that actually works for those two very different populations,” said Piacentini. But AI opens the door to “construct[ing] tests that are really made specifically for every single student.”
That said, the technology isn’t there yet, and educators and test designers need to tread carefully, experts warn. During a recent SXSW EDU panel, Nicol Turner Lee, director of the Center for Technology Innovation at the Brookings Institution, said any conversation about AI’s role in assessments must first acknowledge disparities in access to these new tools. (Editor’s note: The panel was moderated by Javeria Salman, one of the writers of this article.)
Many schools still use paper products and struggle with spotty broadband and limited digital tools, Turner Lee said: The digital divide is “very much part of this conversation.” Before schools begin to use AI for assessments, teachers will need professional development on how to use AI effectively and wisely, she said.
There’s also the issue of bias embedded in many AI tools. AI is often sold as if it’s “magic,” Amelia Kelly, chief technology officer at SoapBox Labs, a software company that develops AI voice technology, said during the panel. But it’s really “a set of decisions made by human beings, and unfortunately human beings have their own biases and they have their own cultural norms that are inbuilt.”
With AI at the moment, she added, you’ll get “a different answer depending on the color of your skin, or depending on the wealth of your neighbors, or depending on the native language of your parents.”
But the potential benefits for students and learning excite experts such as Kristen Huff, vice president of assessment and research at Curriculum Associates, where she helps develop online assessments. Huff, who also spoke on the panel, said AI tools could eventually not only improve testing but also “accelerate learning” in areas like early literacy, phonemic awareness and early numeracy skills. Teachers could integrate AI-driven assessments, especially AI voice tools, into their instruction in ways that are seamless and even “invisible,” Huff said, allowing educators to continually update their understanding of where students are struggling and to provide more accurate feedback.
PISA’s Piacentini said that while we’re just beginning to see the impact of AI on testing, the potential is great and the risks can be managed.
“I am very optimistic that it is more an opportunity than a risk,” said Piacentini. “There’s always this risk of bias, but I think we can quantify it, we can analyze it, in a better way than we can analyze bias in humans.”