Enabling Collaboration and Video Assessment: Exposing Trends in Science Preservice Teachers’ Assessments

Within the context of improving education in the science, technology, engineering, and mathematics (STEM) fields, supporting underrepresented groups in STEM, K-20 and industry partnerships, social media use in education, and assessment lies the important concept of teacher reflection, practice, and improvement. As such, Rich and Hannafin (2009) called for “evidence of impact” of using video and reflection with preservice teachers (p. 64). Preservice and newer in-service teachers often struggle with reflection and, inherently, self-assessment. Interestingly, a large discrepancy also arises between the peer and self-assessment of end products. In developing culminating products such as videos and self-reflection documents, which are required by many teacher licensure programs, preservice teachers may benefit from additional peer support to enhance their self-assessment and reflection skills.

Currently, online video feedback systems contain a binary like or dislike judgment, with disjointed and unfocused open-response comments. These systems of peer assessment and feedback offer little constructive benefit to video creators. Viewers also face a challenge when providing summative assessment of videos. While the determination of binary feedback on a video may be a snap judgment, being able to define, describe, and justify the reasoning behind that judgment is challenging due to the number of variables and the multiple points of reference taken into account throughout the video’s duration.

Aggregated binary assessments, the sort typically available on video sharing sites, are analogous to students’ receiving a set of pass/fail grades from all of their teachers, each of whom uses their own personal and unique scoring rubric. While popular social media sites like YouTube and Facebook, as well as online learning sites like Coursera and Khan Academy, all allow discussions of video content, their decoupled free-response structures do not allow continuous formative assessment of the original content but, rather, a highly variable summative assessment based on the final opinion of the content viewer.
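To make this distinction concrete, consider a small, hypothetical illustration (the data below are invented for exposition, not drawn from the study): two videos with identical like ratios, and even identical mean ratings, can have very different quality profiles over their duration, which only continuous, time-indexed ratings can reveal.

```python
# Illustrative sketch: binary aggregation versus continuous, time-indexed ratings.
# All values are hypothetical.

from statistics import mean

# Binary feedback: 1 = like, 0 = dislike. Both videos aggregate to 60% likes.
video_a_likes = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
video_b_likes = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1]
print(mean(video_a_likes), mean(video_b_likes))  # 0.6 and 0.6 -- indistinguishable

# Continuous ratings (e.g., a 1-5 slider sampled once per interval) expose where
# each video is strong or weak, enabling formative rather than only summative feedback.
video_a_trace = [5, 5, 4, 2, 1]  # strong opening, weak ending
video_b_trace = [1, 2, 4, 5, 5]  # weak opening, strong ending
print(mean(video_a_trace), mean(video_b_trace))  # same mean (3.4), different shape
```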

A different approach or tool that gives continuous feedback could promote positive attitudes toward technology use, which Cullen and Greene (2011) have shown to predict intrinsic and extrinsic motivation. With this in mind, we created YouDemo, a web tool, and used it with preservice teachers to investigate various aspects of assessment. This study focuses on the discrepancies between peer and self-assessment, the relationship and bias between formative and summative assessment abilities, and the impact of assessing the work of peers and comparing it to one’s own self-assessment of similar work.

Purpose/Problem/Gap in Literature

Within the context of educational assessment, a binary scoring system provides a weak summative and nonconstructive evaluation of the overall product. The evaluation becomes a function of an individual viewer’s personal lens and is not based on a precisely defined metric (characteristic or quality) or metrics over the course of the whole work. Currently, video annotation is predominantly composed of tools that allow nonaggregating, text-based markup of videos. These tools include standalone PC applications like VCode (http://social.cs.uiuc.edu/projects/vcode.html) and ANVIL (http://www.anvil-software.org/), as well as online applications that are not typically freely accessible to teachers, like VideoPaper (https://vpb.concord.org/) and MediaNotes (http://www.cali.org/content/medianotes/). Presently, the only known free annotation tool is VideoANT (https://ant.umn.edu/), which allows text-based annotations to YouTube videos (Hosack, 2010).

In an academic setting, in the age of traditional online courses and massive open online courses (MOOCs), online video-based critiques and assessment by peers and mentors can lack the depth and richness of in-person critiques and debates (Rich & Hannafin, 2009). Practice with video assessment and self-reflection is critical, because many preservice teachers are now subject to edTPA requirements (Barron, 2015) and must submit teaching videos and showcase their ability to reflect and self-assess. Their video submissions are critical to their final edTPA scores.

The tool presented here, YouDemo.org, targets preservice teachers, their K-12 mentor teachers, and university professors who have an interest in critiquing peer videos and receiving aggregated evaluation feedback on their own videos. The tool links to existing YouTube videos, allows continuous critique of two metrics (or qualities), and provides users access to the aggregated assessment.

YouDemo enables the continuous assessment of two video-creator-defined metrics. For the remainder of this article, “video creators” refers to users who create or upload videos, while “video evaluators” or “assessors” are those who provide feedback on the videos. Creators can view the results of aggregated quantitative metric assessment as well as qualitative feedback provided by evaluators. Creators can then evaluate, reflect, and compare their own self-assessment with an aggregate of their peers’ anonymous assessments of their work. This process allows video creators to gain authentic summative and formative feedback on their videos, which promotes reflection and pedagogical questioning.
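A minimal sketch of this aggregation-and-comparison idea follows. The data shapes, function names, and values are invented for illustration and do not reflect YouDemo’s actual implementation: peer rating traces for one creator-defined metric are averaged interval by interval, and the creator’s own trace is compared against the anonymous peer aggregate.

```python
# Hypothetical sketch: averaging peer rating traces per interval and
# flagging gaps between self-assessment and the peer aggregate.

from statistics import mean

def aggregate_peer_ratings(traces: list[list[float]]) -> list[float]:
    """Average several equal-length rating traces interval by interval."""
    return [mean(interval) for interval in zip(*traces)]

# Three anonymous peers rate "content clarity" (1-5) at five intervals.
peer_traces = [
    [4, 4, 3, 2, 4],
    [5, 4, 3, 3, 5],
    [4, 3, 2, 2, 4],
]
self_trace = [5, 5, 4, 4, 5]  # the creator's own continuous self-assessment

peer_aggregate = aggregate_peer_ratings(peer_traces)
gaps = [round(s - p, 2) for s, p in zip(self_trace, peer_aggregate)]
print([round(p, 2) for p in peer_aggregate])  # [4.33, 3.67, 2.67, 2.33, 4.33]
print(gaps)  # positive values flag self-assessment running above the peer view
```

A consistently positive gap, as in this invented example, is exactly the kind of self-assessment discrepancy the study set out to expose.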

YouDemo provides a teaching mechanism for both formative and summative assessment that can support and enable learning at all levels of education. Additionally, the validity and reliability of tools or assignments used in the classroom are important assessment aspects, and YouDemo underwent this scrutiny. As stated by Mertler (2003),

Evidence must be continually gathered and examined in order to determine the degree of validity possessed by decisions. Three formal sources of evidence that support the existence of validity include content, criterion, and construct evidence. Content evidence relies on professional judgment; whereas, criterion and construct evidence rely on statistical analyses. Content evidence of validity is the most important source of evidence for classroom assessments. As with validity, reliability addresses assessment scores and their ensuing use. (p. 66)

Over the course of 5 years, we trialed the continuous evaluation and video data aggregation at three universities in North America. In order to assess the impact of the tool, we conducted a mixed methods study, in which a subset of the trial participants’ feedback on their own videos and on those of their peers was captured before and after its use.

Although continuous rating and evaluation of a target source is not a new concept, having been used in election debates (Yang & Park, 2014), behavior coding practices (Messinger, Mattson, Mahoor, & Cohn, 2012), and even emotional response to music videos (Soleymani, Pantic, & Pun, 2012), we found no connection to teaching. Thus, the new technology used during our study provided preservice teachers with the means of collecting peer assessment of any two instructor-selected video content qualities (such as content clarity, sound level, humor, evidence of knowledge collection, evidence of knowledge analysis, and others).
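As one way to picture how such continuous capture might be recorded, the sketch below stores timestamped samples of two instructor-selected metrics during playback. The class and field names are hypothetical illustrations of the idea, not the tool’s actual code.

```python
# Hypothetical data-capture sketch: two instructor-selected metrics sampled
# as timestamped values while a video plays.

from dataclasses import dataclass, field

@dataclass
class ContinuousRating:
    video_id: str
    metric_a: str  # e.g., "content clarity"
    metric_b: str  # e.g., "sound level"
    samples: list[tuple[float, int, int]] = field(default_factory=list)

    def record(self, playback_seconds: float, a_value: int, b_value: int) -> None:
        """Store the current positions of both rating sliders at a playback time."""
        self.samples.append((playback_seconds, a_value, b_value))

rating = ContinuousRating("yt:abc123", "content clarity", "sound level")
rating.record(0.0, 3, 4)   # sampled whenever the evaluator adjusts a slider
rating.record(30.0, 4, 4)
rating.record(60.0, 2, 5)
```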

Other potential use cases include K-20 teachers collecting critique feedback on student work from a class of students, K-20 students collecting critique feedback on their own work or a group’s work from a class or panel of teachers, or administrators collecting feedback on their own work, teacher work, or student work.

To the best of our knowledge, no other online or free tool exists that permits continuous assessment of videos. Furthermore, no tools exist that allow users to specify and enforce the metric, or criteria, that they want to have evaluated. The tool presented in this study, YouDemo, is a free tool for continuous, metric-focused evaluation of videos, enabling formative, anonymous peer assessment as well as experience in self-reflective practice.
