Historically, argumentation, reasoning, and instruction have been inextricably linked. In a wide range of pedagogical contexts, instructors traditionally engage students in argumentation to make them better reasoners. “[A] good reasoner should be able to generate arguments, providing supportive reasons to the claims that are made…, consider arguments counter to his or her argument and be able to refute them or to re-evaluate one’s own position in reference to them….”
Given the vitality of this linkage, it is regrettable that across wide swaths of the educational scene, the use of argumentation as a gauge of students’ understanding is on the decline. Class sizes are too large for instructors to provide detailed feedback on students’ written arguments; more objective measures of learning, such as multiple-choice tests, are becoming the norm across the curriculum; and distance-learning environments are not necessarily conducive to robust argument.
Meanwhile, argumentation has become a “hot topic” of Artificial Intelligence research. In the last few years, AI Journal special issues have devoted hundreds of pages to computational models of argument, and new conference series devoted to that topic have arisen and prospered. Argumentation research has implications for the semantic web, multi-agent systems, social networks, decision support in application areas, and educational technology.
A growing number of researchers have focused on using computer technology to teach humans argumentation skills, either in general or in application areas. This research has yielded intellectual products such as computational models of argumentation schemes with critical questions geared to specific course content, techniques for integrating argumentation into human-computer interfaces via argument diagrams, and tools for engaging students in collaborative argument-making.
This focus on educational technologies for teaching argumentation skills comes just in time to assist two communities. First, these technologies may retard or even reverse the decline in reliance on argumentation as a pedagogical medium and means for gauging student understanding. To the extent that instructional argumentation systems reify pedagogically important elements of argumentation, enable students to practice argumentation skills outside of class, and provide tools for intelligently integrating source materials into arguments, they preserve and extend the efficacy of argumentation as an educational tool. Second, these educational technologies provide a practical context for evaluating AI’s new computational models of argument; if the models are robust, they should be the basis for instructional environments that help students learn, presumably an objectively measurable effect.
Or is it? Ironically, in evaluating student learning with argumentation systems, we often face a similar conundrum. Instructors can understand textual arguments, but they do not have time to grade them. Computers work fast, but they do not understand textual argumentation. To evaluate objectively how well the educational technologies work in teaching argumentation skills, it is tempting to use those same objective measures, for instance, multiple-choice tests, that are supplanting the more subjective, but arguably more probing, “measures” based on how well students argue.
As work progresses on new educational technologies for teaching argumentation skills, therefore, researchers need to focus on developing new techniques for assessing how well students learn argumentation skills. Fortunately, some are; researchers are inventing ingenious assessment techniques harnessing computer-supported peer review, the “diagnosing” of argument diagrams, and technological ways to enhance the ability of objective tests to probe the depth of students’ understanding.
Kevin D. Ashley
Professor of Law and Intelligent Systems
Senior Scientist, Learning Research and Development Center
University of Pittsburgh