
Authentic Assessment Rubric Critique and Improvement

Whilst studying Assessment Evaluation and Learning (EDU5713) at USQ, I learnt to appreciate more fully the planning required for the evaluation of student work. This meant becoming familiar with the principles, theoretical frameworks and best practices (at the time) for constructing a quality assessment.

I also learnt how to differentiate between assessment and evaluation, measurement and testing, and different ways of reporting a student's outcomes. It was also very instructive, as a tertiary tutor, to become more aware of the frames of reference available for interpreting a given assessment, such as norm-referenced, criterion-referenced or ipsative perspectives. The following document is my critical essay on a rubric that I selected from one of my students' subjects.


Introduction

The pressure on educators to be accountable for the learning outcomes of students is a key issue in the literature on authentic assessment. The following paper will critique a rubric for an authentic assessment developed by Halonen and colleagues (2003) for use with first-year psychology students taking the subject The History of Psychology.

A convenience sample of postgraduate and undergraduate students (n = 5) who had taken at least one semester of psychology was recruited to collaborate on the critique of Halonen's rubric. Collaboration with students in the design of rubrics has been shown to be a critical way to enhance the cohesiveness of a group of people with common goals (Halonen, Bosack, Clay & McCarthy, 2003; Newhouse-Maiden & de Jong, 2004; University of NSW, 2003).

It is anticipated that this critique will enable a better understanding of my professional skills and of the complexities involved in designing rubrics, valuing authentic assessment and recognising the critical need to involve students in the design process.

Student Requirements

The rubric provided by Halonen and colleagues lacked clear student requirements for undertaking the production of a newspaper publication for an audience of first-year psychology students (see Appendix 1). To amend this, it is recommended that students be given access to a computer with Microsoft Word. It is not necessary for students to use Microsoft Publisher to create pages that resemble a newspaper format; Word will suffice. They will also need access to the internet, psychology databases for peer-reviewed articles and websites/blogs, a variety of newspapers to determine the layout desired for the final product, and a black-and-white printer. For differently abled students, an Eye-TV (from Access Ability Services) should be made available, as this is a mobile item. The classrooms in which the lectures and tutorials will be held must be wheelchair accessible. To accommodate different learning styles, podcasts, video and transcripts of lecture and tutorial notes must be made available on Blackboard Learn.

Details of Conditions

Halonen gives few details of the conditions in which students will be assessed. Students are informed that some class time will be utilised for work on their project, but that it is mostly an independent task requiring their own time in the library. This is ironic given that a stated purpose of the assessment is “to collaborate with others”, which takes some skilled instruction to achieve (Cohen, 1994).

Students are also made aware by Halonen that the final product will be displayed during a specified class time; however, it is not made clear whether the display resembles a presentation or is simply the newspaper made available for class members to view. And whilst the student groups may be called on to discuss their assessment, it is not known whether this is a formal or informal presentation. Thus the conditions were deemed to be ambiguous and confusing.

To rectify this, it is suggested that the research and preparation for publication take place within tutorial times, so that students have access to a tutor who can guide them in team collaboration skills if they choose to seek such guidance. Work-life-study balance was identified as a major stressor in most group-work projects, and making time available in class makes it less likely that differing lifestyle circumstances will impinge on students' ability to meet with their group.

The final psychology newspaper publication will be made available to university students of all disciplines and to community members at a cost of $1, which will be donated to a local charity negotiated by students prior to publication.

Instructions to the Examiner

The Halonen example had no explicit instructions for the examiner. It is suggested that the rubric to be used be addressed with the class in the first two tutorials of term. A discussion will be held in which students contribute to critiquing and refining the rubric suggested here. This is an opportunity for the examiner to make students aware of how the syllabus concepts of scientific inquiry in psychology relate to the assessment (Moskal & Leydens, 2000). In this way, students are enabled to be active learners.

Furthermore, the assessment strategy is to be divided into two distinct parts: a newspaper publication (completed as a group) and a critical reflection essay (completed individually). The written products are to be completed in the format described in the rubric (see Appendices 2 & 3).

Students are to be given access to examples of psychology newspapers produced in the real world (such as the PDF versions of articles at http://www.nimh.nih.gov/science-news). Time in class should be used for students to investigate different forms of online and offline psychology newspapers, with the examiner indicating strengths and weaknesses of the articles. This will enhance student self-efficacy and give them a better idea of how to monitor their own performance (Halonen et al., 2003; Gulikers, Bastiaens, & Kirschner, 2004).

Halonen also requires students to take part in a self-assessment of their performance in the group project. However, it is not evident that students will be exposed to critical reflection techniques and theories in class that would enable them to self-assess effectively. Thus the topic of critical reflection needs to be part of the syllabus for the subject, with tutorial time made available for students to discuss the concepts and processes of critical reflection.

Halonen requires that students receive peer feedback from group members on the perceived levels of collaboration within the group. This method of assessment was determined to be unfair because of the high subjectivity of the task and the numerous personal accounts, given by the critique group, of poor intra-group processes when left unsupervised. Instead, it is recommended that peer reviews be summarised and analysed by students within their critical reflection essay. In this way, psychology students can better compare their judgements of the goals met, and practise communicating in writing what they have learned from their experiences and how it is linked to the theories they are learning about in class (Biggs, 2001).

Justification for the Assessment Strategy

An authentic assessment provides alternative measures of student performance, drawing on real-world tasks so that students actively take part in meaningful learning activities (Palomba & Banta, 1999). Assessment that is made relevant to the lives and career goals of students provides for evaluation of effective outcomes. Halonen et al. state that the newspaper product is a real-life example which can be linked to student experiences to increase the likelihood that they will conceptualise and apply practical examples of problem solving and scientific thinking in the context of psychology. However, many aspects of the assessment were considered to be irrelevant.

For example, are the students undertaking a combined psychology and journalism course or simply an undergraduate degree in psychology? The language, argumentation and style of communication required of an undergraduate call for APA formatting and academic language. And although practising psychologists may be asked to create products for lay people, this assignment was to be delivered to first-year psychology students by first-year psychology students. Overall, the requirement to produce a newspaper article that “Expresses events in language appropriate for a newspaper audience” is unrealistic.

However, the critique group did feel that producing an academic-style psychology newspaper for an audience interested in psychology would be relevant, as it would mirror expectations in the workforce to produce newsletters for staff, clients or other academics.

Validity

Validity is the degree to which an instrument measures what it sets out to measure. One general form of validity used for an assessment instrument is content validity (Moskal & Leydens, 2000). Content validity draws on evidence showing the extent to which an assessment instrument demonstrates the student's knowledge of the content. The rubric used by Halonen has very low content validity: there is only one content-related criterion, “Accurate identification of events within the assigned year”.

As was pointed out within the critique group, “other newsworthy events” are irrelevant to the undergraduate outside the context of psychology. Instead, students should be encouraged to focus on socio-political events that would inform the reader, and themselves, about how the psychology theories of the assigned year were embedded in the culture and thinking of the time.

Reliability

Reliability is the consistency of scores on an assessment, so that a student can reasonably expect to receive the same score regardless of the time, place or marker of the assessment (Moskal & Leydens, 2000). Inter-rater reliability is a common measure of assessment reliability, and a well-designed rubric enables students to be marked by unbiased judgement, as different markers will give similar marks. Halonen did not use well-defined categories to clarify the scoring of the assessment, and so it has low reliability. No instructions are provided for the examiner to determine what marks to give; are they to make a check mark or assign a score out of 10, for example?
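To illustrate what checking inter-rater reliability could look like once the rubric's categories are well defined, the following short Python sketch (my own illustration; the category labels and marks are hypothetical and not taken from Halonen) computes Cohen's kappa, a common statistic for agreement between two markers scoring the same pieces of work.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa for two markers assigning nominal categories to the same items
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n               # p_o: observed agreement
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)  # p_e: agreement expected by chance
    return (observed - expected) / (1 - expected)

# Hypothetical grades two tutors might give the same five newspaper projects
tutor_1 = ["credit", "pass", "distinction", "pass", "credit"]
tutor_2 = ["credit", "credit", "distinction", "pass", "credit"]
print(round(cohens_kappa(tutor_1, tutor_2), 2))  # prints 0.69

A kappa close to 1 would suggest the categories are defined clearly enough for different markers to converge on the same marks; a value near 0 would suggest agreement no better than chance.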

It was determined within the critique group that the lack of well-defined categories made it highly likely that different examiners would give different marks. It was suggested that the final rubric criteria I decided upon should be tested and retested to determine reliability; however, for the current assignment this longitudinal design was not possible.

Another reliability issue is the appropriateness of the scoring rubric to the population of students to whom it is given. Given that Halonen et al. were scoring psychology undergraduates with items pertaining to journalism, the assessment is viewed as unsuitable for the population and thus unreliable. Making the changes suggested in the modified rubric will rectify this, as the items focus on conceptualisation, communication, problem-solving, group work and critical reflection within a psychology context.

Flexibility

The rubric set by Halonen is not a flexible assessment, in that the continuous assessment is assigned relative weights: 50% for the newspaper and class presentation, 30% for collaboration, and 20% for the self-assessment. This follows the norm in most universities for each form of assessment to be compulsory and given a fixed proportion of the overall assessment (Allsop, 2002).

However, for the redesigned rubric, each stage of the assessment will be considered in isolation and is potentially worth 100% of the assessment. Thus, a student will be awarded the higher of two results: the two pieces combined (each weighted at 50%), or the single piece with the higher grade. In this way, a student who did not perform well on one piece of assessment will not have their overall grade significantly affected, and some of the stress of reaching academic goals can be removed (Allsop, 2002).
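As a minimal sketch of this rule (my own illustration, using hypothetical marks out of 100 rather than figures taken from Allsop), the redesigned grading logic could be expressed as follows:

def final_grade(newspaper, reflection):
    # Flexible rule: award whichever is higher, the two pieces combined at
    # 50% each or the single stronger piece counted on its own.
    combined = 0.5 * newspaper + 0.5 * reflection
    best_single = max(newspaper, reflection)
    return max(combined, best_single)

print(final_grade(55, 82))  # prints 82: the weaker newspaper mark does not drag the overall grade down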

Fairness

Fairness in assessment is another critical factor used to minimise misinterpretation of results. The goal of fairness is to provide students of both genders and all backgrounds with an opportunity to do equally well in learning and in demonstrating their learning. Bias is a factor that systematically affects an entire group of students, as opposed to an individual student (Laboratory Network Program, 1993).

Halonen's rubric has bias in the context of the assessment, as some students may have done journalism subjects as part of their degree or in studies prior to their psychology degree. These students will know more about how to write in a journalistic manner and how to choose other newsworthy events. For this assessment task to be fair, its content, context and performance expectations need to be modified to reflect knowledge, values and experiences that are equally familiar and relevant to all the students (Laboratory Network Program, 1993). Making the final product an APA-style newspaper gives it an academic voice. Using a reflective piece of writing that draws on peer review is also a real-world expectation of psychologists in the workforce.

As the assessment is designed to evaluate understanding of new knowledge, ESL students will not be disadvantaged by potentially poor grammar, as the tutor will be available to help them, and other students, to enhance their skills in academic English writing.

Task Efficacy

Halonen et al. provide no details as to the task efficacy of their rubric. However, as an ideal assessment is one that enhances the competence of students in their educational practices (Mentkowski et al., 2000), it is concluded that Halonen's authentic assessment and rubric would have low task efficacy. The modified rubric has also not been tested; however, it is argued here that its task efficacy will be high, given the changes to the content and context of the assessment and its rubric. The modified rubric is more likely to help students make informed decisions about what they want to achieve as undergraduates in psychology, and how to go about achieving those goals, because they will be involved in identifying their own levels of quality in real-world tasks.

For the present assignment, the task of collaborating with a group of students was very effective, in that all were able to reflect on the goals of a rubric, authentic assessment for first-year psychology, and the value of student input in determining the criteria needed to develop high-quality knowledge, skills and competencies suited to the workforce.

Conclusion

In conclusion, the design of an integrated and comprehensive rubric for an authentic assessment in first-year psychology, such as the one suggested here, proved to be a complex though cooperative and collaborative adventure. The processes involved in creating a rubric were made dramatically clearer by the process of critiquing one.

The process of critique has provided better articulation of the objectives and outcomes of rubric design for the entire critique group, and has promoted the desire to learn more about valid, reliable, fair and practical rubric designs for all forms of assessment. It is anticipated that greater inclusion of students in determining rubrics, and the testing and retesting of authentic assessments, will help avoid student comments such as, “Grading and learning just don’t go together.”

References

Allsop, L. (2002). Flexibility in assessment. Retrieved April 17, 2010, from http://www.informingscience.org/proceedings/IS2002Proceedings/papers/Allso068Flexi.pdf

Biggs, J. (2001). The reflective institution: Assuring and enhancing the quality of teaching and learning. Higher Education, 41, 221-238.

Cohen, E. (1994). Designing groupwork: Strategies for the heterogeneous classroom. Retrieved April 14, 2010, from http://www.julieboyd.com.au/ILF/pages/members/cats/bkovervus/t_and_learn_pdfs/design_grpwork.pdf

Gulikers, J. T. M., Bastiaens, T. J., & Kirschner, P. A. (2004). Perceptions of authentic assessment: Five dimensions of authenticity. Paper presented at the Second Biannual Joint Northumbria/EARLI SIG Assessment Conference, Bergen, Norway.

Halonen, J. S., Bosack, T., Clay, S., & McCarthy, M. (2003). A rubric for learning, teaching, and assessing scientific inquiry in psychology. Teaching of Psychology, 30(3), 196-208.

Laboratory Network Program. (1993). A tool kit for professional developers: Alternative assessment. Portland, Oregon: Northwest Regional Educational Laboratory.

Mentkowski, M., et al. (2000). Learning that lasts: Integrating learning, development, and performance in college and beyond. San Francisco: Jossey-Bass.

Moskal, B. M., & Leydens, J. A. (2000). Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation, 7(10).

Newhouse-Maiden, & de Jong, T. (2004). Assessment for learning: Some insights from collaboratively constructing a rubric with postgraduate education students. Proceedings of the 13th Annual Teaching and Learning Forum, Seeking Educational Excellence. Perth: Murdoch University.

Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.

University of NSW. (2003). Learn about rubrics. Retrieved April 14, 2010, from
