At least in a professional setting, my default position on assessment is that lower-stakes assessment should be the first option (where “low stakes” means there are no consequences for getting questions wrong, as with polling). Professionals should not need external incentives to take assessments seriously, and high-stakes assessments carry a lot of overhead. The point of assessment should be to help participants and instructors gauge participants’ level of understanding, and low-stakes assessments can usually do that. (High-stakes testing is still useful or even necessary sometimes; it just shouldn’t be the default.)
In Make It Stick, this gave me pause:
Make quizzing and practice exercises count toward the course grade, even if for very low stakes. Students in classes where practice exercises carry consequences for the course grade learn better than those in classes where the exercises are the same but carry no consequences. (p. 227)
Courses in professional environments don’t necessarily have grades, but that still raises the question of whether attaching some kind of consequence to practice questions in professional training would increase learning, as the authors assert.
Professionals who participate in training should understand the direct relationship between what they are learning and their ability to grow in their careers. To that extent, I’d imagine that external incentives or accountability shouldn’t be necessary. The exception is when professionals are in the training only because they have been required to attend, or to pick up credits for their licensure. In those cases external incentives might make sense, but there are also bigger issues to solve.