Introduction to Dibyanshu Chatterjee


The Birth of a Confidence Predictor


While working on a knowledge tracing model, I found myself venturing into uncharted territory. I was using BERT, a bidirectional transformer encoder, to predict whether learners would answer questions correctly. But as I delved deeper, I found something unexpected: the same setup could be adapted to predict learners' confidence. This work is part of one of my recent studies, and the source code will be released around May 2024.


Preprocessing


The first step in the journey was preprocessing the data: combining various columns into a single string and tokenizing the text. The tokenizer used here is the one that ships with the BERT model, which is known for its effectiveness at capturing the context of words in a sentence. If a labels column was provided, the labels were added to the encodings.

function preprocess_data(data, labels_column)
    combine columns into a single string
    tokenize the text
    if labels_column is provided
        add labels to encodings
    return encodings
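In Python, the pseudocode above might look something like this. This is a minimal sketch, not the study's actual code: it assumes rows are represented as dicts and that `tokenizer` follows the HuggingFace call convention (returning a dict of encodings); the function and argument names are illustrative.

```python
from typing import Optional

def preprocess_data(data: list, tokenizer, labels_column: Optional[str] = None):
    """Combine each row's feature columns into one string, tokenize,
    and optionally attach labels. `data` is a list of dicts (one per row)."""
    texts, labels = [], []
    for row in data:
        # Join every non-label column value into a single input string
        parts = [str(v) for k, v in row.items() if k != labels_column]
        texts.append(" ".join(parts))
        if labels_column is not None:
            labels.append(row[labels_column])
    # Tokenize all rows at once (HuggingFace-style batch call, assumed)
    encodings = tokenizer(texts, truncation=True, padding=True)
    if labels_column is not None:
        encodings["labels"] = labels
    return encodings
```

In practice `tokenizer` would be something like `BertTokenizer.from_pretrained("bert-base-uncased")`, but any callable with the same interface works.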

Evaluation


Once the model was trained, it was time to evaluate its performance. The evaluation function made predictions and compared them to true labels. This step is essential for understanding how well the model has learned from the training data and how accurately it can make predictions.

function evaluate(model, data_loader)
    set model to evaluation mode
    for each batch in data_loader
        make predictions with model
        compare predictions to true labels
    return predictions and true_labels
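A PyTorch version of this loop could look as follows. Again a sketch under assumptions: each batch is a dict containing `input_ids` and a `labels` tensor, and the model returns an object with a `.logits` attribute (the HuggingFace convention for sequence classifiers).

```python
import torch

def evaluate(model, data_loader):
    """Run the model over a data loader and collect predicted classes
    alongside the true labels."""
    model.eval()  # switch off dropout / batch-norm updates
    predictions, true_labels = [], []
    with torch.no_grad():  # no gradients needed at evaluation time
        for batch in data_loader:
            labels = batch.pop("labels")
            outputs = model(**batch)  # assumed HuggingFace-style output
            preds = torch.argmax(outputs.logits, dim=-1)
            predictions.extend(preds.tolist())
            true_labels.extend(labels.tolist())
    return predictions, true_labels
```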

Results


The results were quite impressive. The confidence predictor achieved an AUC of 0.883, an RMSE of 0.748, and an accuracy of 0.846. These metrics indicate that the model was able to predict learners' confidence levels with high accuracy.
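The three reported metrics can be computed from true 0/1 labels and predicted probabilities with scikit-learn; the helper below is an illustrative sketch (the function name and the 0.5 decision threshold are my assumptions, not the study's code).

```python
import math
from sklearn.metrics import roc_auc_score, accuracy_score, mean_squared_error

def confidence_metrics(true_labels, predicted_probs, threshold=0.5):
    """Return (AUC, RMSE, accuracy) for binary labels and probabilities."""
    auc = roc_auc_score(true_labels, predicted_probs)
    rmse = math.sqrt(mean_squared_error(true_labels, predicted_probs))
    # Accuracy needs hard predictions, so threshold the probabilities
    preds = [1 if p >= threshold else 0 for p in predicted_probs]
    acc = accuracy_score(true_labels, preds)
    return auc, rmse, acc
```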


Ongoing Work on Knowledge Tracing Model


While the confidence predictor has shown promising results, work on the original knowledge tracing model is still underway. Preliminary results have been encouraging, with an AUC of 0.949, an RMSE of 0.476, and an accuracy of 0.949.


Conclusion


This journey has been full of surprises and valuable insights. The development of a novel confidence predictor from an original correctness prediction model shows how flexible and powerful these models can be. As we continue to explore this fascinating space, who knows what other surprises await? Stay tuned! 
