The design and validation of an instrument for assessing undergraduate dissertations
Abstract
INTRODUCTION. The dissertation represents the culmination of undergraduate studies, in which students demonstrate mastery and integration of the content and skills they have developed. Its evaluation therefore requires complete and complex tools. The aim of this research is to design and validate a reliable instrument for evaluating the final dissertation of the Bachelor's Degree in Primary Education Teaching, avoiding the deficiencies and problems commonly observed in traditional rubrics. METHOD. A group of seven experts, from both school and university settings, was formed to develop this instrument. Different evaluation tools and essays were analyzed during its elaboration. The reliability of the resulting instrument was then studied by having six independent experts apply it to twenty dissertations and computing the intraclass correlation coefficient (ICC). RESULTS. The result is an instrument that combines the best properties of the rubric and the checklist, with a concrete and precise breakdown of all the performance indicators that make up the written part of a dissertation, and that assesses the elements considered central to a final manuscript: the content and its quality, and aspects of format and expression. The intraclass correlation analysis yielded excellent results on the measured scales, all above .90, which shows their high consistency. DISCUSSION. These results support confidence in the reliability of the tool. The instrument can clearly assist dissertation supervisors and members of the dissertation panel thanks to its clarity, objectivity and ease of use. It can likewise be a useful tool for students' learning and favour their self-regulation and self-evaluation, since it details all the indicators by which they will be evaluated.
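For readers who wish to reproduce the kind of reliability analysis the abstract describes, one common formulation is ICC(2,1) (two-way random effects, absolute agreement, single rater), computed from a subjects-by-raters score matrix. The sketch below is a hypothetical illustration of that formula, not the authors' actual computation; the data are invented:

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, k_raters) matrix of scores,
    e.g. 20 dissertations scored by 6 independent raters.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-dissertation means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented example: 3 dissertations, 2 raters in close agreement
scores = np.array([[9.0, 8.0],
                   [5.0, 6.0],
                   [1.0, 2.0]])
print(round(icc2_1(scores), 2))  # → 0.96
```

Values above .90, as reported in the abstract, are conventionally interpreted as excellent inter-rater reliability.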
References
An, Y. J. (2013). Systematic design of blended PBL: Exploring the design experiences and support needs of PBL novices in an online environment. Contemporary Issues in Technology and Teacher Education, 13(1), 61-79.
Baharuddin, A., Gharbaghi, A., Ahmad, M. H. y Rosli, M. S. (2013). A Check List for Evaluating Persuasive Features of Mathematics Courseware. International Education Studies, 6(9). http://dx.doi.org/10.5539/ies.v6n9p125
Bearman, M. y Ajjawi, R. (2018). Actor-network theory and the OSCE: formulating a new research agenda for a post-psychometric era. Advances in Health Sciences Education, 23(5), 1037-1049. https://doi.org/10.1007/s10459-017-9797-7
Bharuthram, S. (2015). Lecturers’ perceptions: the value of assessment rubrics for informing teaching practice and curriculum review and development. Africa Education Review, 12(13), 415-428. https://doi.org/10.1080/18146627.2015.1110907
Bharuthram, S. y Patel, M. (2017). Co-constructing a rubric checklist with first year university students: A self-assessment tool. Apples: Journal of Applied Language Studies, 11(4), 35-55. http://dx.doi.org/10.17011/apples/urn.201708073430
Bohórquez Gómez-Millán, M. y Checa Esquiva, I. (2019). Desarrollo de competencias mediante ABP y evaluación con rúbricas en el trabajo en grupo en Educación Superior. REDU. Revista de Docencia Universitaria, 17(2), 197-210.
Brookhart, S. M. (2013). How to Create and Use Rubrics for Formative Assessment and Grading. ASCD.
Cañadas, L. (2020). Evaluación formativa en el contexto universitario: oportunidades y propuestas de actuación. Revista Digital de Investigación en Docencia Universitaria, 14(2), e1214. https://doi.org/10.19083/ridu.2020.1214
Dawson, P. (2017). Assessment rubrics: towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), 347-360. https://doi.org/10.1080/02602938.2015.1111294
Gori, F. (2014). The European Language Portfolio and Languages for Specific Purposes: A project to develop “can do” descriptors focused on students' interests and motivation. Language Learning in Higher Education, 3(2), 305. https://doi.org/10.1515/cercles-2013-0016
Jonsson, A. (2014). Rubrics as a way of providing transparency in assessment. Assessment and Evaluation in Higher Education, 39(7), 840-852.
Kang, H., Thompson, J. y Windschitl, M. (2014). Creating opportunities for students to show what they know: The role of scaffolding in assessment tasks. Science Education, 98(4), 674-704. https://doi.org/10.1080/02602938.2013.875117
Kayapınar, U. (2014). Measuring essay assessment: Intra-rater and inter-rater reliability. Eurasian Journal of Educational Research, 57, 113-136. http://dx.doi.org/10.14689/ejer.2014.57.2
Mauri, T., Colomina, R. y De Gispert, I. (2014). Transformando las tareas de escritura colaborativa en oportunidades para aprender: ayuda educativa y uso de rúbricas en la Educación Superior. Cultura y Educación, 26(2), 298-348. https://doi.org/10.1080/11356405.2014.935111
McClearyCale, C. G. y Furtak, E. M. (2020). The SABEL Checklist: Science classroom assessments that work for emergent bilingual learners. Science Teacher, 87(9), 38-48. https://www.nsta.org/science-teacher/science-teacher-julyaugust-2020/saebl-checklist
McCollum, R. M. y Reed, E. T. (2020). Developing a Badge System for a Community ESL Class Based on the Canadian Language Benchmarks. Canadian Journal of Applied Linguistics, 23(2), 228-236. https://doi.org/10.37213/cjal.2020.30438
Meola, M. (2004). Chucking the Checklist: A Contextual Approach to Teaching Undergraduates Web-Site Evaluation. portal: Libraries and the Academy, 4(3), 331-344. https://doi.org/10.1353/pla.2004.0055
Merrill, M. D. (2020). A Syllabus Review Check-List to Promote Problem-Centered Instruction. TechTrends 64, 105-123. https://doi.org/10.1007/s11528-019-00411-4
Oakleaf, M. (2009). Using rubrics to assess information literacy: An examination of methodology and interrater reliability. Journal of the American Society for Information Science and Technology, 60(5), 969-983. https://doi.org/10.1002/asi.21030
Orden ECI/3857/2007 [Ministerio de Educación y Ciencia]. Por la que se establecen los requisitos para la verificación de los títulos universitarios oficiales que habiliten para el ejercicio de la profesión de Maestro en Educación Primaria. 27 de diciembre de 2007.
Orden ECI/3854/2007 [Ministerio de Educación y Ciencia]. Por la que se establecen los requisitos para la verificación de los títulos universitarios oficiales que habiliten para el ejercicio de la profesión de Maestro en Educación Infantil. 27 de diciembre de 2007.
Ortega, D., Carcedo, B. y Blanco, P. (2018). El Trabajo Fin de Grado en Didáctica de las Ciencias Sociales: líneas, materias y temáticas. Revista de Investigación en Didáctica de las Ciencias Sociales, 3, 35-51.
Pathirage, C. P., Haigh, R., Amaratunga, D., Baldry, D. y Green, C. M. (2004). Improving Dissertation Assessment [Conference presentation abstract]. Education in a Changing Environment, 13th-14th, University of Salford. http://eprints.hud.ac.uk/id/eprint/22716/
Pausch, L. M. y Popp, M. P. (1997, April). Assessment of information literacy: Lessons from the higher education assessment movement. Paper presented at the meeting of the Association of College and Research Libraries, Nashville, TN.
Pegalajar, M. C. (2021). La rúbrica como instrumento para la Evaluación de Trabajos Fin de Grado. REICE. Revista Iberoamericana sobre Calidad, Eficacia y Cambio en Educación, 19(3), 77-96. https://revistas.uam.es/reice/article/view/13134
Pita Fernández, S., Pértega Díaz, S. y Rodríguez Maseda, E. (2003). La fiabilidad de las mediciones clínicas: el análisis de concordancia para variables numéricas. Cad Aten Primaria, 10(4), 290-296.
Proyecto Tuning (2009). Una introducción a Tuning Educational Structures in Europe. Universidad de Deusto.
Real Decreto 1393 de 2007. Por el que se establece la ordenación de las enseñanzas universitarias oficiales. 29 de octubre de 2007. BOE-A-2007-18770.
Reddy, Y. M. y Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448. https://doi.org/10.1080/02602930902862859
Sánchez, I. D. (2021). Propuesta de validación de la homogeneidad de un modelo de rúbrica de evaluación del TFG en el grado de enfermería de la UCM [Doctoral dissertation, Universidad Complutense de Madrid].
Shin, S. y Cheon, J. (2019). Assuring Student Satisfaction of Online Education: A Search for Core Course Design Elements. International Journal on E-Learning, 18(2), 147-164.
Stufflebeam, D. L. (2000). Guidelines for developing evaluation checklists: the checklists development checklist (CDC). The Evaluation Center. Retrieved January 16, 2008.
Tomas, C., Whitt, E., Lavelle-Hill, R. y Severn, K. (2019, September). Modeling holistic marks with analytic rubrics. In Frontiers in Education (vol. 4, p. 89). Frontiers. https://doi.org/10.3389/feduc.2019.00089
Uzun, N. B., Alici, D. y Aktas, M. (2019). Reliability of the analytic rubric and checklist for the assessment of story writing skills: g and decision study in generalizability theory. European Journal of Educational Research, 8(4), 169-180. https://doi.org/10.12973/eu-jer.8.1.169
Valderrama, E., Rullán, M., Sánchez, F., Pons, J., Mans, C., Giné, F., Seco, G., Jiménez, L., Peig, E., Carrera, J., Moreno, A., García, J., Pérez, J., Vilanova, R., Cores, F., Renau, J. M., Tejero, J. y Bisbal, J. (2010). La evaluación de competencias en los Trabajos Fin de Estudios. IEEE-RITA, 5(3), 107-114.
Wiggins, G. (1998). Educative assessment. Jossey-Bass.
Zornoza-Gallego, C. y Vercher, N. (2021). Evaluación de competencias genéricas… Cuadernos Geográficos, 60(1), 119-138.