Reduce free-riding and low feedback quality with Participation Grading

Peer feedback is a proven educational approach for stimulating collaborative and active learning. In practice, however, the feedback students produce often remains superficial and lacks critical depth. To address this, teachers at Wageningen University & Research developed an innovative approach called Participation Grading, designed to increase the quality of peer feedback, raise participation, and offer a safe learning environment. In this article, we explore the approach: its development, implementation, and impact.

What’s wrong with peer feedback? 

No one can deny the benefits of peer feedback in improving students’ final work while promoting the transfer of lifelong skills. However, there are still hurdles to overcome. Implementation often faces two main issues: 1) low feedback quality – peer comments are superficial and insufficiently critical, and 2) free-riding – unmotivated students contribute less than their teammates.

A common solution is to grade peer feedback on quantitative aspects alone (timely delivery and the amount of feedback delivered). This approach is often criticized for neglecting the depth and quality of the feedback [4].

Instructors from Wageningen University & Research came up with a solution that increases students’ motivation to engage in the peer review process while enhancing the quality of the feedback produced. Let’s take a look at what Participation Grading is and how it works.

What is Participation Grading?

Participation Grading is grounded in the principle of “Virtual Action Learning” (VAL), which emphasizes that students demonstrate their competencies by delivering evidence of their learning activities [2] [3].

Therefore, the approach involves grading only each student’s best contributions to a peer feedback assignment, selected by the students themselves. These ‘best contributions’ serve as evidence of a student’s competence as a reviewer.

Why is Participation Grading effective?

Participation Grading is believed to: 

  1. Enhance the quality of peer feedback and the level of participation as it requires students to pay more attention to producing good feedback for their peers,
  2. Offer a safe learning environment: because students select their best contributions themselves, their errors or mistakes do not influence their grade,
  3. Deliver a scalable teaching method which can be applied in different learning contexts,
  4. Save teachers time when grading the peer feedback [4].

How to exercise Participation Grading?  

Participation Grading can be implemented without a specific learning platform. However, to optimize both the selection of best contributions and the grading process, Wageningen University & Research and FeedbackFruits partnered to develop platform support for Participation Grading, which is now available as a beta feature in the Group Member Evaluation tool.

A complete process of Participation Grading can be summarized as follows: 

  1. Assignment submission: Students complete and hand in the assignment, which can be in written, graphical, audio, video, or PowerPoint format.
  2. Peer review: Students provide peer feedback for their peers’ work based on a teacher-designed rubric. 
  3. Best contributions selection: From all the peer comments they produced, students select the ones they consider their best.
  4. Peer feedback processing: Students study the received reviews and respond to the reviewers’ comments (if required by teachers). Teachers provide feedback on both the original assignment and the peer feedback.
  5. Best contributions grading: Teachers grade the selected best contributions within the peer review context.
  6. Reflection: Students write reflections based on received feedback [4].

Participation Grading was adopted in four BSc and MSc courses at Wageningen University & Research. Each course involved a Peer Review and/or Online Discussion assignment to which Participation Grading was added in either a cross-over or nonequivalent-group design [4]. Details of the course architecture can be found in the table below.

| Course | Students | Content | Activity structure |
| --- | --- | --- | --- |
| 1. Presentation skills | 15–30 students, MSc program | Teaches relevant skills to deliver an academic presentation | Students give presentations and provide feedback on each other’s work |
| 2. Systematic reviews in Health and Society | 60 BSc students | Teaches knowledge and skills essential to produce a systematic literature review | Students work individually on a written assignment, which is peer reviewed within groups of 4 |
| 3. Food Quality Design | 160 BSc students | Introduces the principles behind effective operational quality systems in complex food production chains | Students work in groups of 4 to write reports, and give peer feedback on other groups’ work |
| 4. Introduction to Epidemiology and Public Health | 180 BSc students | Teaches basic concepts, measures, and study designs used in epidemiology and public health | Students engage in an online discussion |

Measuring the impact of Participation Grading: Use case  

At this point, you might wonder what the effects of Participation Grading were, and whether the approach was successful. From interviews with instructors at Wageningen University & Research, we produced a use case documenting the results and impact of Participation Grading. Here is that story.

In her bachelor’s course ‘Food Quality Design’, Dr. Cora Busstra wanted to help students develop problem-solving and critical thinking skills alongside subject-specific knowledge. She set up a group assignment in which students worked in groups of four to co-write an in-depth report and then provided peer comments on other groups’ writing. To enhance students’ participation and feedback quality, Dr. Busstra adopted the FeedbackFruits Peer Review tool with its beta feature, Participation Grading.

Within the Peer Review environment, students could submit their group report, individually give feedback on the work of another group, and select their best feedback comments for the instructor to grade. This guaranteed that errors or mistakes did not influence their course grades, as long as students did not select those comments as their best contributions. Students thus had the chance to iterate on and improve their feedback-giving skills without fear of being marked down. The implementation of Participation Grading proved to be a success, as Dr. Busstra acknowledged:

“Students felt they had improved their understanding of the topic by giving feedback using Best Contribution Grading [Participation Grading].” 

By enabling Participation Grading, the instructor created a safe learning environment in which students could learn from their mistakes and freely raise questions. Overall, this set both students and teachers up for an effective feedback process, one in which self-doubt and insecurity were minimised, and critical thinking and deep reflection were made easier.


[1] D. Nicol, A. Thomson and C. Breslin, “Rethinking feedback practices in higher education: a peer review perspective,” Assessment & Evaluation in Higher Education, vol. 39, no. 1, pp. 102–122, 2014.

[2] J.J.M. Baeten, “Virtual Action Learning: an educational concept on Collaborative Creation with ICT.” Amsterdam: KIT Publishers, 2011.

[3] J.J.M. Baeten, “The Power of Peer Feedback: Research on the Learning Process within Virtual Action Learning.” Delft: Eburon Academic Publishers, 2016.

[4] M. C. Busstra, F. K. Garcia, K. A. Hettinga, L. Huijgen, M. C. Gresnigt and B. Hintemann, “Improving peer review quality by grading the best contribution of each student: educational principle and evaluation design,” 2019 18th International Conference on Information Technology Based Higher Education and Training (ITHET), 2019, pp. 1-4, doi: 10.1109/ITHET46829.2019.8937372.

Read the full article at: https://feedbackfruits.com/blog