Mathematical knowledge assessment on the World Wide Web

Dear ActiveMath Forum users,

This forum post is not directly about the ActiveMath project, so if you feel this message does not belong here, please delete it and forgive my intrusion.

I am currently working on a research project for an MSc degree in computer science at the Open University in the UK. In short, the project is about mathematical knowledge assessment on the Web. It comprises three parts, the first two of which are also covered by the ActiveMath project: authoring (1), representation (2) and comparison (3) of mathematical formulae on the Web.

Whereas there are plenty of resources and literature available on the first two parts, the third part is not quite so well covered. Basically, the idea is to compare a formula (equation) entered by a student with the solution given previously by the teacher (the author of the exercise), in order to determine whether the student’s answer is correct. As a simple example, imagine the answer given by the teacher is “x + 1” and the student’s answer is “1 + x”. For a human being it is obvious that the student is correct, but the goal of my research project is to find and implement an algorithm that can detect this automatically, without the intervention of the teacher.
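To make the idea concrete, here is a minimal sketch of one possible approach (not any particular system’s actual mechanism): treat two expressions as equivalent if they agree numerically at many randomly chosen points. All names are illustrative; a real system would parse the input safely rather than use Python’s eval, and a CAS could instead decide equivalence symbolically, e.g. by simplifying the difference to zero.

```python
import random

def equivalent(expr_a, expr_b, variables=("x",), trials=20, tol=1e-9):
    """Heuristic equivalence check: evaluate both expressions at random
    points and report whether they always agree within a tolerance.
    (Illustrative only; eval is unsafe on untrusted student input.)"""
    for _ in range(trials):
        env = {v: random.uniform(-10.0, 10.0) for v in variables}
        if abs(eval(expr_a, {}, env) - eval(expr_b, {}, env)) > tol:
            return False
    return True

print(equivalent("x + 1", "1 + x"))  # True: commutativity is detected
print(equivalent("x + 1", "x - 1"))  # False: the expressions differ by 2
```

Random sampling can in principle give false positives, which is why production systems delegate the comparison to a computer algebra system rather than testing points.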

The question I would like to ask here is whether there are plans to implement such a feature in the ActiveMath project, and if not, for what reasons. Furthermore, I would be very grateful for any hints or documentation that could aid me in my research.

Yours faithfully, Chris Leesch

Re:Mathematical knowledge assessment on the World Wide Web

Dear Chris,

Thank you for your posting. It shows how little is known to the world about ActiveMath’s semantic evaluation services. Within the exercise subsystem of ActiveMath we have a generic broker architecture for distributed evaluation services, which is able to connect to several Computer Algebra Systems and other domain reasoners in order to evaluate the learner’s answer in the context of a particular task. Semantic evaluation of the learner’s answers to the tasks in exercise steps works in ActiveMath now and is used in most of the exercises available in the system. Moreover, we can do complex vector evaluation, in which the parts of the user’s answer depend on each other. We use semantic evaluation services for generating exercises as well, and also provide them for any other computational purpose throughout the system. We also have an explorative CAS console in ActiveMath, in which the learner can enter any mathematical expression and evaluate it semantically.
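For illustration only, the broker idea described above might be sketched roughly like this (the class and function names are hypothetical, not ActiveMath’s actual API): a broker keeps a registry of evaluation back ends, such as a CAS or another domain reasoner, and dispatches each request to the one chosen for the task.

```python
from typing import Callable, Dict

# Hypothetical sketch of a broker dispatching evaluation requests to
# pluggable back ends; the real ActiveMath broker talks to external
# services, so every name here is illustrative only.
Backend = Callable[[str, str], bool]

class EvaluationBroker:
    def __init__(self) -> None:
        self._backends: Dict[str, Backend] = {}

    def register(self, name: str, backend: Backend) -> None:
        self._backends[name] = backend

    def evaluate(self, backend_name: str, answer: str, expected: str) -> bool:
        return self._backends[backend_name](answer, expected)

# Trivial back end: syntactic comparison after whitespace normalisation.
def syntactic_backend(answer: str, expected: str) -> bool:
    return answer.replace(" ", "") == expected.replace(" ", "")

broker = EvaluationBroker()
broker.register("syntactic", syntactic_backend)
print(broker.evaluate("syntactic", "x+1", "x + 1"))  # True
print(broker.evaluate("syntactic", "1+x", "x + 1"))  # False: not semantic
```

The point of the pattern is that a semantic back end (a CAS connection) can be registered alongside simpler ones without changing the exercise code that calls the broker.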

For more information about semantic evaluation in ActiveMath, see the following paper; for other papers and more complete information about the exercises in ActiveMath, see my dedicated exercise page.

As a general comment, I do not agree that the topic of evaluation of mathematical formulae on the web is insufficiently addressed. For some references, have a look at the OpenMath pages. The whole site is interesting, but particularly the section “Tools for connecting mathematical software systems”.

Also see the description of the project MONET (Mathematics On the Net).

Another research project worth knowing about is MOWGLI (Mathematics On the Web: Get it by Logic and Interfaces).

A powerful industrial-scale application for math on the web is, for example, MapleNet.

Another such giant is webMathematica.

You can certainly not ignore those in your research.

If you have any other questions, do not hesitate to post them.

George Goguadze Faculty of Computer Science University of Saarland

AM<->Mathematica 6.0.0 Connectivity?

George Goguadze, thank you for your response.

I wonder if you could please update me on the current state of the connectivity features to Mathematica, in particular to Mathematica 6.0.0, Wolfram’s new-generation release?

Re:Mathematical knowledge assessment on the World Wide Web

Dear George,

Thank you for your response. I have been studying the papers and web sites you mentioned, and from there followed the trails to quite a lot of information on the subject. I realized that I was indeed wrong about the subject not being sufficiently addressed.

Although this is certainly good news for the e-learning community, it takes away the basis for my master’s thesis. The LeActiveMath project is already far beyond the scope of what I was hoping to achieve. I will have to redefine my research question quickly by finding a niche in the process that a computer science student could look into.

Nonetheless, I agree with you about how little is known to the world about ActiveMath’s semantic evaluation services. Although I work in the e-learning field, as a software developer for the Ministry of Education in Luxembourg, I had not heard about them before.

Best regards,

Chris Leesch