Key Takeaways for Researchers and Developers

Computer-Supported Collaborative Learning (CSCL) environments are often designed to support student collaboration within a single digital platform. Our system is novel in its multi-platform approach.

Students often find themselves working in multiple contexts in the classroom, and with the growth of technology in classrooms and the added variable of a global pandemic, those contexts have grown increasingly digital (e.g., a student might work face-to-face with a peer on one task and then move to an online discussion for homework).

Supporting Multiple Interaction Modes

Over the course of a multi-year study, we used this curriculum in in-person, fully online, and hybrid learning settings, combining several modes of instruction (e.g., synchronous/asynchronous) and collaboration across digital and physical classrooms (see system overview). Different instructional modes offered distinct affordances for students with varied preferences for participation.

What worked well in our approach was that supporting multiple interaction modes enabled students to participate in a way that lined up with their skills and preferences. For example, one student might prefer collaborating with peers during class, while another might be more comfortable participating asynchronously, taking their time to compose an answer.

In the future we would better support teachers in transitioning between instructional modes within a particular lesson, and in using the curriculum across multiple lessons. With only three weeks to adjust to the system design, students did not have adequate time to learn the multi-platform system, and switching from platform to platform required considerable class time and explanation to help students make the transition.

Assessing Student Motivation with Personas 

In developing this system, we implemented ways to support students' collaboration that adapted to aspects of their motivation. We used an interactive tool inspired by personas to dynamically assess motivation. The tool took student responses to a self-report questionnaire regarding motivational factors (like math self-concept), and then reflected those responses back to students in the form of an editable narrative prior to a collaborative activity (see system overview). We designed this interactive tool to improve on typical assessments of motivation: questionnaires alone, which do not capture dynamic aspects of motivation, and behavioral assessments by external observers, which can leave students without a sense of agency.
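
To make the mechanism concrete, here is a minimal Python sketch of how questionnaire responses might be reflected back as an editable narrative. The factor names, thresholds, and template wording are illustrative assumptions, not the exact instrument used in our system.

```python
# Illustrative templates only; real factor names and wording would come
# from the motivation questionnaire itself.
TEMPLATES = {
    "math_self_concept": {
        "high": "I feel confident working through math problems",
        "low": "I sometimes doubt my ability in math",
    },
    "social_comfort": {
        "high": "I enjoy discussing my ideas with classmates",
        "low": "I prefer to think things through on my own first",
    },
}

def build_persona(responses: dict[str, int], cutoff: int = 3) -> str:
    """Map 1-5 Likert responses to narrative fragments and join them
    into a first-person persona the student can then edit."""
    fragments = [
        TEMPLATES[factor]["high" if score > cutoff else "low"]
        for factor, score in responses.items()
        if factor in TEMPLATES
    ]
    return "Before today's activity: " + ", and ".join(fragments) + "."

# Example: reflect one student's survey back as a starting narrative.
print(build_persona({"math_self_concept": 4, "social_comfort": 2}))
```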

What worked well in our approach was that we found early evidence that personas assess motivation more accurately than initial student surveys do. Personas designed as a narrative are potentially easier to comprehend and complete than surveys that ask students to self-report on their motivation. This kind of tool is simple to use, and it gives students more visibility into system decisions and more control over reporting their motivational state.

In the future we would further investigate the potential benefits and drawbacks of encouraging students to edit their personas continuously over the course of the interaction. While the personas were designed as a dynamic assessment, we found little evidence that they captured changes in students' day-to-day motivations. Designing the persona in a more context-sensitive way (e.g., by changing persona narratives to represent the activity that students are immediately about to engage in) may better capture these subtle shifts.

Computational Modeling

We designed a logistic model to predict students' likelihood of giving constructive help during a specific learning activity, taking both the students' motivational factors (e.g., confidence in math, conscientiousness) and platform characteristics (e.g., public/private, synchronous/asynchronous) into account. Most models consider platform characteristics and student motivational factors separately; our model integrates those two types of predictors to better explain collaborative behaviors (see system overview).
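
As a rough illustration, the sketch below shows the shape of such a model in Python. The feature set matches the examples above, but the weights and bias are made-up values for demonstration, not our fitted coefficients.

```python
import numpy as np

def p_constructive_help(math_confidence, conscientiousness,
                        is_public, is_synchronous,
                        weights=None, bias=-1.0):
    """Probability a student offers constructive help, combining
    motivational factors with platform characteristics."""
    if weights is None:
        # Order: math confidence, conscientiousness, public?, synchronous?
        weights = np.array([0.8, 0.6, -0.5, 0.4])  # assumed values
    x = np.array([math_confidence, conscientiousness,
                  float(is_public), float(is_synchronous)])
    logit = weights @ x + bias
    return 1.0 / (1.0 + np.exp(-logit))  # logistic (sigmoid) link

# E.g., a confident, conscientious student in a private synchronous chat:
print(p_constructive_help(0.9, 0.7, is_public=False, is_synchronous=True))
```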

What worked well in our approach was that our modeling method successfully incorporated a rich qualitative analysis of student motivation to create a computational explanatory model from a small data set. We found that the model successfully explained student behavior on the data on which it was tuned, and that it transferred to new data with minimal additional adjustment. Because the model incorporates the features of the platforms themselves, it generalizes to other collaboration platforms (e.g., a different chat server).

In the future we would advance this work by developing additional methods for automatically tuning model parameters based on new data. Currently the model must be manually adjusted for a new context; having more data on student participation would improve our ability to tune it automatically.
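
One possible direction, sketched below with scikit-learn, is to warm-start the logistic model from its current coefficients and refit on each new batch of classroom data rather than adjusting weights by hand. The feature layout matches the sketch above, and the data here are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: math confidence, conscientiousness, public?, synchronous?
X_new = np.array([[0.9, 0.7, 0, 1],
                  [0.3, 0.5, 1, 0],
                  [0.6, 0.8, 1, 1],
                  [0.2, 0.4, 0, 0]])
y_new = np.array([1, 0, 1, 0])  # did the student give constructive help?

# warm_start reuses the previous fit's coefficients as the starting point,
# so each new batch of data nudges the model instead of retraining it
# from scratch.
model = LogisticRegression(warm_start=True, solver="lbfgs")
model.fit(X_new, y_new)
print(model.coef_, model.intercept_)
```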

Asset-Based Feedback

We designed badges as a way to provide positive feedback to students. The badges were awarded based on students' collaborative interactions; for example, when a student asks a question or provides an explanation, they receive a question badge or an explanation badge, respectively.
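
As an illustration, badge awarding can be expressed as a simple rule table mapping interaction types to badges. The interaction labels and badge names below are assumptions based on the examples above, not our system's actual identifiers.

```python
# Hypothetical mapping from logged interaction types to badge names.
BADGE_RULES = {
    "asked_question": "Question Badge",
    "gave_explanation": "Explanation Badge",
}

def award_badges(interactions: list[str]) -> list[str]:
    """Map a student's logged collaborative interactions to badges."""
    return [BADGE_RULES[act] for act in interactions if act in BADGE_RULES]

# E.g., a student who asked a question and then explained a solution:
print(award_badges(["asked_question", "gave_explanation"]))
```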