A Systematic Approach to Prepare Quality Rubrics

 

B. L. Gupta, Pratibha Bundela Gupta

Department of Management Education,

National Institute of Technical Teachers’ Training and Research, Bhopal, 462002, Madhya Pradesh, India.

Ex-Ph.D. Scholar, IPER, Bhopal, 462002, Madhya Pradesh, India.

*Corresponding Author E-mail: badrilalgupta72@gmail.com

 

ABSTRACT:

Outcome-based education philosophy calls for a demonstration of learning in real-life and near-real-life situations. Rubrics are commonly used as an outcome-based assessment tool in higher education institutions. Rubrics have enormous potential for the learning and development of students. The focus of educational programmes is shifting from teachers to learners and from assessment to learning and development. Rubrics are versatile tools for self and peer learning and assessment, and for expert assessment for learning, development, and competency certification. The quality of a rubric plays a significant role in learning, development and assessment. The paper describes a systematic approach to preparing quality rubrics. The paper is based on a literature review and the experiences of the authors. The steps discussed in this paper will guide course teachers in preparing valid and reliable rubrics with a purpose. The rubric given in Appendix I will act as a tool for selecting appropriate rubrics to improve the quality of educational programmes.

 

KEYWORDS: Formative assessment, Performance criteria, Performance levels, Descriptors, Quality of rubric, Validity.

 


 

INTRODUCTION:

The assessment of learning outcomes is very important for certification and creditization of academic achievement in the context of the National Education Policy 2020 (MHRD, 2020) and the National Credit Framework. In the 21st century, more focus is being given to formative assessment, which is used to improve the learning process and the levels of learning concerning learning outcomes. Earlier, conventional assessment tools were used for assessment and grading. In the 21st century, assessment is used to motivate learners to self-learn, peer-learn, and learn in teams to achieve the intended learning outcomes (Gupta and Gupta, 2021). David (2018) stated that rubrics encourage learner-centric approaches guiding individual learning processes. Rubrics guide teachers in designing and conducting instruction and activities for students to develop intended learning outcomes. Rubrics guide students in planning and implementing the learning process; performing tasks, activities, assignments, projects, and seminar presentations; and participating in discussions. Suryanti and Nurhuda (2021) concluded that using analytical rubrics with problem-based learning develops students' higher-order thinking skills. Reddy and Andrade (2010) stated that rubrics are used for multiple educational purposes, including evaluating and improving the educational programme. They stated that a rubric's quality lies in its validity, viz., it should measure (the demonstration of performance) what it is intended to measure (the achievement of learning outcomes). Robbert et al. (2017) concluded that using rubrics enhanced teachers' diagnostic skills and improved students' learning through formative feedback, self-regulation and self-efficacy.

 

Rubrics are used in all higher education disciplines for formative assessment to track progress on learning and to identify learning gaps concerning learning outcomes. They are used for summative evaluation to award marks and grades and to certify the attainment of learning outcomes. Rubrics are used to set the learning and assessment standards concerning learning outcomes. These are declared at the beginning of the academic year so that learners may strive to attain the learning outcomes. AICTE (2018) and UGC (2019) stressed the importance of using rubrics for formative and summative assessment. Educational institutions are expected to report the use of rubrics under innovations in assessment under criterion 2 of engineering programmes for accreditation purposes (NBA, 2013). Francis (2018) recommended discussing the rubrics with students along with learning resources and using them for assessment for better results.

 

Rubrics are precise measurement and assessment tools. They are used to assess critical learning outcomes or discipline-related core learning outcomes to ensure the attainment of learning outcomes at the intended level before certification. Gupta (2023b) stated that rubrics are innovative tools used in the learning process. They are scientifically designed, valid, reliable, and feasible tools.

 

Rubrics are prepared by newly recruited and experienced teachers, by trained and untrained teachers, and by experts in assessment. Teachers who do not prepare rubrics themselves select from the available rubrics and use them for various educational purposes, including assessment.

 

Rubrics require a scientific approach in their preparation. If a scientific approach is not followed, the rubrics' validity, reliability, relevance, processes, and purposes are questioned. Ahmet (2020) stressed the importance of the validity of the rubric, stating that it should be a built-in part of the rubric's frame. Anders and Svingby (2007) concluded that rubrics enhance the reliability and validity of assessment and facilitate feedback and self-assessment. Brookhart (2018) concluded that rubrics are expected to be descriptive and useful for learning, but many in use are not. Chen and Yi (2017) concluded that rubrics used for formative assessment enhance learning and the validity and reliability of assessment. Popham (1997) stated that 'rubrics have enormous contributions to instructional quality, but many rubrics are almost worthless'. Popham identified four flaws in rubrics, viz. task-specific evaluation criteria, excessively general evaluation criteria, dysfunctional detail in overly lengthy rubrics, and equating the test of a skill with the skill itself. Farrag (2020) stated that rubrics increased the reliability of assessment. Wolf and Stevens (2007) stated that rubrics advance students' learning, improve assessment, provide feedback, and provide vital information to improve the programme. Teresa (2023) stated that rubrics are well suited for assessment in project-based learning; project-based assessment encourages deeper, lifelong, and self-directed learning supported by rubrics. Rubrics are used as learning, feedback, and assessment tools to maintain uniformity, documentation, and transparency in the assessment process. In this paper, we have developed a rubric to assess the quality of the rubrics prepared or selected for educational purposes. Gupta and Gupta (2021) stated the preconditions for using rubrics.
These are: curriculum design using outcome-based education philosophy, clear articulation of course outcomes, preparation of a course plan, well-designed activities for students, proper selection of the rubric, and explanation of the rubric to students. Gupta and Gupta (2024) recommended designing and using different types of rubrics for various educational purposes. This paper is based on a comprehensive review of the literature and the experiences of the authors.

 

A SYSTEMATIC APPROACH FOR PREPARING AND SELECTING RUBRICS

Designing a rubric is a challenging activity for course teachers. Many questions need to be answered to make the rubric design activity manageable. Wolf and Stevens (2007) suggested various steps to develop a rubric, such as setting performance criteria, levels, and descriptors. UNC (2017) described the process of creating a rubric for assessing programme-level student learning outcomes, comprising steps such as aligning the activities in which students demonstrate the learning outcomes, deciding performance criteria, identifying the scale, determining the performance levels, and piloting the rubric. Gupta and Gupta (2021) reported the processes followed by different rubric developers in preparing rubrics. Some of the significant processes reported are:

·       Define learning goals, choose rubric type, define criteria, and prepare rubric.

·       A hierarchy of learning in different learning domains was used to develop the rubrics. 

·       Appropriate online rubrics were modified and used to fit the learning outcome assessment.

·       Brainstorming was conducted to prepare the rubric, and then, based on the ideas, the rubric was prepared.

·       The expertise of industry persons was used to generate the performance criteria and prepare the rubric.

·       Developers used logic and analytical skills to prepare the rubric.

·       They are prepared through discussion with trained faculty members.

 

CHECKS BEFORE PROCEEDING TO RUBRIC DESIGN

We have prepared a checklist of questions that should be responded to before designing the rubrics. The checklist is shown in Table 1.

 

Table 1. Check before proceeding to design a rubric

Questions (to be answered Yes or No):

·       Have you frozen the learning outcomes?

·       Do you have work experience in drawing situations from the world of work for designing tasks, activities, assignments, project work, and learning and development events for students?

·       Have you scientifically designed the task, activity, assignment, or event for which you will use the rubric?

·       Does the task create scope for practising the skills to be assessed using the rubric you will develop?

·       Have you decided on the purposes (self-learning, instruction, feedback, formative assessment, summative assessment, audit of a process or product) of designing the rubric?

·       Are you trained in the measurement, assessment, and evaluation of learning?

·       Are you trained in designing and using rubrics?

·       Have you planned the method of use of the rubric?

·       Do you have time to develop the rubric?

·       Do you have time to use the rubric for learning and assessment?

·       Do you know how to interpret the marks obtained from the assessment rubric?

·       Do you know a method of providing feedback to students to improve learning?

 

A SYSTEMATIC APPROACH

A systematic approach to preparing or selecting a rubric with a purpose is outlined in Figure 1 and briefly described. 

 

 

Figure 1. A systematic approach for preparing and selecting rubrics

Source: Authors

 

 

DECIDE LEARNING OUTCOMES

The learning outcomes are drawn from work situations, as the learners will apply their learning in real-life situations. UGC (2019) defined learning outcomes as ‘the intended results of education or the ability to do’. The course outcomes are derived from the performance indicators of the competencies, and the competencies are derived from the programme and programme-specific outcomes. The programme and programme-specific outcomes are derived from the programme educational objectives, as stated in the UGC evaluation and reform document. The learning outcomes are stated in measurable and observable terms using terminal action verbs.

 

The course outcomes are scientifically articulated using accurate performance-related technical terms. They are complete, compact and communicable, and do not contain redundant, vague, or confusing words. The course outcomes begin with broader, higher-level terminal action verbs; in other words, narrow, lower-level learning outcomes are encompassed within them. The course outcomes are stated in performance-oriented rather than theory-oriented terms. In many performance situations, conditions and criteria are mentioned at the end of course outcome statements to communicate the level of proficiency expected and the conditions under which performance happens. The course outcomes that are best developed and assessed using rubrics are selected for rubric development. The course outcomes are distinct from each other and complete in themselves. The course outcomes to be assessed using a rubric directly contribute to the programme and programme-specific outcomes. The selected course outcomes are validated and finalized. The indicators used to evaluate the quality of the course outcomes are shown in Figure 2.

 

Figure 2. Quality learning outcomes

Source: Authors

 

 

PURPOSE OF RUBRIC

The purpose of assessment is the first consideration in choosing an assessment tool from the wide range available, such as paper-and-pencil tests, checklists, rating scales, observation schedules, viva voce, interview schedules, and bipolar scales. Vercellotti (2021) stated that rubrics are used as assessment and learning tools but may not be appropriate for all learning and assessment situations. Only if the rubric is the most appropriate tool to serve the purpose should course teachers decide to design or select and use rubrics.

 

The purpose of the rubric is mentioned on the rubric so that it can be reused by different teachers for different batches of students. The purposes of rubrics are shown in Figure 3 and briefly described below:

·       Rubrics are commonly used for marking, grading, and certification of competency. Scientifically designed assessment rubrics are among the most accurate, valid, and reliable assessment instruments. These rubrics enhance the objectivity, transparency, and credibility of the assessment process.

·       Rubrics are used to improve the quality of the learning process and the attainment of learning outcomes. They are used for scaffolding to develop higher levels of learning in different learning domains, and for reflection in the learning process to revise previous learning, identify learning gaps, and undertake remedial learning exercises to close those gaps. They are also used as props and stimuli to create a leap-frogging effect in learning and development.

·       Rubrics are used in individualized, online, and distance learning to develop self-learning and self-assessment, self-efficacy, and lifelong learning skills. They are used to enhance the learning maturity of the students.

·       Rubrics are also used to improve the quality of educational programmes. Educational processes such as curriculum development, instruction, assessment, research, mentoring, training, guidance and counselling, and student services are standardized and implemented using rubrics. Rubrics are also used to audit and evaluate educational systems and processes.

·       Rubrics are used to enhance the quality of documentation and the transparency of the assessment processes.

 

Figure 3. Purposes of rubric

Source: Authors

 

SELECT THE CORRECT TYPE OF RUBRIC

A wide range of rubrics is available to serve different educational purposes, such as motivating students to learn, involving them in the learning process, diagnosing learning problems and gaps, taking corrective and preventive actions, and assessing learning outcomes and awarding grades (Gupta and Gupta, 2021). Selecting the appropriate type of rubric with a purpose from the various types available is a crucial step in rubric development. The various types of rubrics are shown in Figure 4 (Gupta and Gupta, 2021).

 

Figure 4. Types of rubrics

Source: Gupta and Gupta (2021)

 

 

The rubric design depends on the types and purposes of the rubrics. Dawson (2017) developed a framework comprising fourteen rubric design elements to develop a quality rubric and judge its quality.

 

 

 

DECIDE PERFORMANCE CRITERIA

Avila et al. (2019) stated that criteria are the concepts that the rubric intends to assess or aims to develop. The criteria indicate the standards of assessment and evaluation. The acceptable level of learning outcomes, or in other words the threshold level of performance, is incorporated in the articulation of the assessment criteria (Zenobia and Simone, 2019). Brookhart (2018) stated that criteria and performance level descriptors distinguish rubrics from other assessment tools, and concluded that ‘trivial or surface level criteria will not draw learning goals for students as clearly as substantive criteria’. The quality of the assessment criteria is decided based on the attributes shown in Figure 5.

 

 

Figure 5. Attributes of assessment criteria

Source: Authors

 

The criteria are always aligned with the learning outcomes. They are significant, in that they assess the major portion of the performance, and they are directly observable during the assessment process. A condition of necessity (without which the assessment will be incomplete) and sufficiency (not more than required, so that time and effort are not consumed unnecessarily in the assessment process) is followed in writing the criteria. Each criterion is complete in itself and distinct from the other criteria in the same rubric. The criteria indicate the threshold level of performance and are stated in measurable (quantitative and qualitative) terms indicating the performance. In the same rubric, different criteria may relate to different domains of learning or to the same domain, depending on the course outcomes. Since criteria may differ in importance, an appropriate weightage is assigned to each criterion. Each criterion is elaborated across a number of performance levels, typically three to five. The criteria are always manageable in number.
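The weightage idea above admits a simple computation: each criterion's level rating is scaled by its weight, and the weighted sum is normalised to a percentage. The following is a minimal illustrative sketch only, not a method prescribed in this paper; the criterion names, weights, and ratings are hypothetical examples.

```python
# Illustrative sketch (not from the paper): converting per-criterion rubric
# ratings into a single weighted percentage score.

def weighted_rubric_score(ratings, weights, max_level):
    """Return a 0-100 score from per-criterion level ratings.

    ratings: dict mapping criterion -> level achieved (1..max_level)
    weights: dict mapping criterion -> relative weightage of the criterion
    """
    total_weight = sum(weights.values())
    weighted = sum(weights[c] * ratings[c] / max_level for c in ratings)
    return 100.0 * weighted / total_weight

# Hypothetical four-level rubric for a seminar presentation.
ratings = {"content": 4, "organisation": 3, "delivery": 2}
weights = {"content": 3, "organisation": 2, "delivery": 1}
print(round(weighted_rubric_score(ratings, weights, max_level=4), 1))  # 83.3
```

A teacher can vary the weights to reflect the relative importance of the criteria without changing the rubric's levels or descriptors.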

 

Objectivity in the criteria of rubrics makes the course teachers' expectations clear to the students, so that they learn accordingly and prepare themselves for assessment with confidence. Objectivity in the criteria also ensures consistency in assessment when there are multiple assessors, as happens in affiliating university systems where thousands of students and hundreds of assessors are involved. Noll et al. (2021) raised the problem of consistency among different assessors in project-based learning and concluded that rubrics improve consistency among assessors. Benjamin et al. (2024) concluded, through a study conducted in higher education, that subjectivity and lack of clarity in criteria hinder learning and affect the validity of assessment as perceived by the students. The performance criteria are decided based on the purpose of the rubric.

·       If the rubrics are to be used for summative assessment, the criteria are designed at the threshold, or minimum acceptable, level of performance.

·       If the rubrics are to be used to measure the quality of a learning output, viz. a product, the criteria are set at the minimum acceptable level of product quality.

·       If the rubrics are to be used for process assessment, the criteria are set considering the standard steps and the acceptable level of performance in each step of the process.

·       If the rubrics are to be used for formative assessment, the criteria are set in relation to the core learning outcomes and performance details, for identifying learning gaps and taking corrective and preventive action based on the feedback.

·       If the rubrics are to be used for self- and peer-learning and assessment, the criteria are detailed into indicators or parameters that serve as props, stimuli, and scaffolds.

·       If the rubrics are to be used for grading linked to growth, sliding rubrics are prepared, linked to the learning portfolios of the students (Mahmood and Jacobo, 2019).

·       If the rubrics are to be used for feedback in self-learning and e-content, the criteria are detailed in the form of constructive feedback for further learning (Doug, 2021). The rubric creator identifies the potential learning gaps and suggests the feedback points.

·       If the rubrics are to be used for reflection, higher learning, and deep learning, the criteria are integrated with the stages of instruction or learning (Laetitia et al., 2021).

 

DECIDE PERFORMANCE LEVELS

Performance levels indicate a learner's attainment on a particular performance criterion. Learners undergo a learning process to gradually develop the intended level of ability on specific criteria. During the initial learning stage, their attainment may be much below the intended level, though they are on the right learning path. They then develop the theoretical ability, become able to apply it in real-life situations, and finally can deal with real-life situations independently and innovate to improve them. These are the levels of learning and performance. The levels indicate the degree of attainment, or proficiency, on a particular criterion. If the rubric is used for learning and feedback purposes, the levels are defined in terms of levels of learning; if it is used for summative assessment, proficiency levels are stated.

 

There is a common practice of using three to five levels, but considering the criticality of the learning outcomes and assessment criteria, more than five levels can also be used. Safety, for example, is critical in some disciplines, such as electrical and chemical engineering, and students need to learn and practise safety abilities. For formative and summative assessment rubrics, one may go up to seven levels to ensure achievement of the intended level of learning outcomes related to safety.

 

Chowdhury (2019) stated that the rubric scale can be described using various performance levels, and various nomenclatures of levels are available:

·       Excellent, good, average, poor, and absent are used to assess the quality of products, services, reports, and presentations.

·       Exemplary, proficient, partly proficient, and unsatisfactory are used to adhere to professional and social ethics, motor skills, and leadership skills.

·       Emerging, developing, proficient, and advanced are used for design, creative, interpersonal, grievance management, mentoring, and counselling skills.

·       Beginning, developing, competent, and accomplished are used in training and mentoring situations related to technical and managerial skills.

·       Complete, effective, efficient, and inventive are used in assessing the solution to a problem in a task.

·       For communication skills, criterion levels could be: the communication is complex, unclear, somewhat clear, mostly clear, or unusually clear.

Considering these levels, one may derive levels related to learning outcomes and criteria.

 

Quality performance levels are aligned with the learning outcomes and performance criteria being assessed. The performance levels for the assessment criteria are clearly articulated in quantitative and qualitative (measurable) terms, and in a way that differentiates the levels objectively. The highest performance level should be sufficiently demanding to make the rubric aspirational. The quality of the performance levels is decided based on the taxonomy of learning in different domains, the proficiency levels used in industry, the stages of learning, or the levels and classes of accuracy. The attributes of performance levels are stated in Figure 6.

 

 

Figure 6. Attributes of performance levels

Source: Authors

 

DECIDE DESCRIPTORS FOR EACH CRITERION UNDER EACH PERFORMANCE LEVEL

Descriptors are the rubric's building blocks for learning, development, assessment, and corrective action. It is a challenging task for rubric designers to articulate descriptors for each criterion at each level of performance, and articulating each descriptor in objective and measurable terms is difficult. The attributes used for articulating the descriptors are stated in Figure 7.

 

 

Figure 7. Attributes of descriptors

Source: Authors

 

Many rubric developers drop the idea of developing the rubric at this stage of articulating the descriptors. The articulation of descriptors is the differentiating point between a rubric and a rating scale or observation schedule. If, as a rubric developer, you cannot articulate descriptors for each criterion at each level of performance, you should opt for a rating scale instead of writing subjective, vague, and unmeasurable descriptors.
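The descriptor grid described above (each criterion crossed with each level) lends itself to a mechanical completeness check: every cell must be filled, and no two levels of a criterion should share the same descriptor. The following is an illustrative sketch only, not a tool from this paper; the level nomenclature and descriptor text are hypothetical examples.

```python
# Illustrative sketch (not from the paper): a rubric held as a
# criterion -> {level: descriptor} mapping, with a check that every
# criterion has a distinct descriptor at every level.

LEVELS = ["beginning", "developing", "competent", "accomplished"]  # hypothetical

rubric = {  # hypothetical descriptor text for one criterion
    "clarity": {
        "beginning": "Ideas are unclear and hard to follow.",
        "developing": "Ideas are partly clear.",
        "competent": "Ideas are mostly clear and ordered.",
        "accomplished": "Ideas are consistently clear and well ordered.",
    },
}

def check_rubric(rubric, levels):
    """Return a list of problems; an empty list means every cell is filled
    and no two levels of a criterion share the same descriptor."""
    problems = []
    for criterion, cells in rubric.items():
        missing = [lv for lv in levels if lv not in cells]
        if missing:
            problems.append(f"{criterion}: missing descriptors for {missing}")
        if len(set(cells.values())) != len(cells):
            problems.append(f"{criterion}: duplicate descriptors across levels")
    return problems

print(check_rubric(rubric, LEVELS))  # [] when the grid is complete
```

Such a check catches only structural gaps; whether each descriptor is objective and measurable still requires the human validation steps described in this paper.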

 

VALIDATION OF RUBRICS

In scientific assessment, validity, reliability, and usability are the crucial conditions to be satisfied. These conditions apply equally to rubrics. The process of rubric validation is shown in Figure 8.

 


 

Figure 8. Process of validation of rubrics

Source: Authors


 

The rubric is designed following the process and principles of design. It is then self-checked using the rubric evaluation criteria stated in this paper, and the design is improved based on the self-check results. Next, it is given to colleagues for peer validation; the peers may be asked to critically analyze the rubric against all the rubric evaluation criteria and provide feedback for improving its construct and content. The rubric is further refined based on the suggestions of the peer review. At this stage, it is given to experts for validation, and the experts' comments are incorporated to make it a draft rubric for trying out on students in actual learning and assessment situations. The rubric is further refined and finalized based on the try-out experiences. Such rubrics are published for wider use.

 

To ensure quality, standard summative evaluation rubrics are prepared by subject experts in workshop mode. In the summative evaluation process, standard rubrics are used by hundreds of assessors to assess thousands of students, and such rubrics become part of a rubric bank at the national level. Grainger (2021) recommended developing rubrics through a triad-based peer review process, which improves the learning, assessment, and feedback process. The approach resulted in reflective practices, professional learning opportunities, better student outcomes, improved accountability, and an improved assessment experience.

 

NOTES TO USERS:

The preparation of quality rubrics consumes the time and effort of teachers; even selecting a rubric from a rubric bank is time-consuming. Rubric developers should therefore put clear instructions on the rubric about the development process, the context of development, validity, experiences of using the rubric, and points to be considered for future use. Detailed instructions for users add value when rubrics are used for learning, development, and assessment purposes. The points on which notes are recorded are stated in Figure 9.

 

 

Figure 9. Notes to users

Source: Authors

 

Similarly, instructions for students on using the rubrics may be given to communicate the learning and assessment-related expectations. The course teachers discuss the rubric and clarify the learning and development expectations to the students at the beginning of the academic session.

 

CONCLUSION:

Rubrics are multipurpose educational tools used in higher education to achieve various educational purposes. They are used to develop and assess the critical learning outcomes of a course. The preparation of a scientific rubric takes time and requires subject expertise and world-of-work experience. Rubric developers require training and mentorship to develop sound rubrics with a purpose.

 

A scientifically developed and validated rubric may be standardized to develop and assess particular course outcomes in an educational programme at the national level. The model curricula for educational programmes in different disciplines and the textbooks available on national platforms may be benchmarked for developing the standard rubrics. Standard rubrics may save the time and effort of thousands of course teachers, facilitate quality assurance of students' learning, and maintain uniformity and transparency in learning and assessment at the national level. The competence of newly inducted teachers in outcome-based education and assessment may be developed and enhanced using these standard rubrics.

 

The systematic approach proposed in this paper may be used as training and mentoring input for teachers and rubric developers, as resource material for rubric developers, as an evaluation tool to judge rubric quality using the rubric given in Appendix I, and as an instrument to select appropriate rubrics from a rubric bank.

 

CONFLICT OF INTEREST:

The authors have no conflict of interest regarding this investigation.

 

REFERENCES:

1.      Ahmet, Çelik, and Selcuk Ozdemir. Tinkering learning in the classroom: An instructional rubric for evaluating 3D printed prototype performance, International Journal of Technology and Design Education. 2020; 30: 459-478. https://doi.org/10.1007/s10798-019-09512-w.

2.      AICTE. Examination reforms. All India Council for Technical Education, New Delhi. 2018

3.      Anders, Jonsson, and Gunilla Svingby. The Use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review. 2007; 2: 130-144. doi:10.1016/j.edurev.2007.05.002.

4.      Avila, C. O., Foss L., Bordini A. Debacco M. S., and Cavalheiro S. A. Evaluation rubric for computational thinking concepts, IEEE 19th International Conference on Advanced Learning Technologies. 2019. DOI:10.1109/ICALT.2019.00089.

5.      Benjamin, Taylor, Flora Kisby and Alice Reedy. Rubrics in higher education: an exploration of undergraduate students’ understanding and perspectives, Assessment and Evaluation in Higher Education. 2024. DOI: 10.1080/02602938.2023.2299330.

6.      Brookhart, S. M. Appropriate criteria: Key to effective rubrics. Frontiers in Education. 2018; 3(22): 1–12. doi: 10.3389/feduc.2018.00022.

7.      Chen, Beilei, and Zhang Yi. A rubric-based technology-enhanced assessment approach to improve students’ meta-cognitive awareness and learning achievement. The Sixth International Conference of Educational Innovation through Technology. 2017: 111-115.

8.      Chowdhury, F. Application of rubrics in the classroom: a vital tool for improvement in assessment, feedback and learning. International Education Studies. 2019; 12(1): 61–68. https://doi.org/10.5539/ies.v12n1p61.

9.      David, C. Leader. Student perceptions of the effectiveness of rubrics. Journal of Business and Educational Leadership. 2018; 8(1): 86–99.

10.   Dawson, P. Assessment rubrics: Towards clearer and more replicable design, Research and Practice. Assessment and Evaluation in Higher Education. 2017; 42(3): 347–360. http://www.tandfonline.com/doi/full/10.1080/02602938.2015.1111294.

11.   Doug, C., Gil R., Courtney S., Renée C., Juliette L., and Suzanne R. A novel rubric format for providing feedback on process skills to STEM undergraduate students, Journal of College Science Teaching. 2021; 50(6): 48-56.

12.   Farrag, S. G. Innovative assessment practice to improve teaching and learning in civil engineering, International Journal of Learning and Teaching. 2020; 6(2): 74-80, doi: 10.18178/ijlt.6.2.74-80.

13.   Francis, J. E. Linking rubrics and academic performance: an engagement theory perspective, Journal of University Teaching and Learning Practice. 2018; 15(1): 1-17. http://ro.uow.edu.au/jutlp/vol15/iss1/3.

14.   Grainger, P. Enhancing assessment literacies through development of quality rubrics using a triad-based peer review process. Journal of University Teaching and Learning Practice. 2021; 18(4): 1–13. https://ro.uow.edu.au/jutlp/vol18/iss4/4.

15.   Gupta, B. L. and Gupta, P. B. Rubrics as versatile educational tool for outcome-based education, Journal of Engineering and Technology Education. 2021; 15(2): 13–24.

16.   Gupta, B. L. and Gupta, P. B. Use of rubrics in educational programmes, International Conference on Work Integrated Learning (ICONWIL-24). 2024; 15-16 April 2024 BITs Pilani Hyderabad Campus.

17.   Gupta, B. L. Last thing first: rubrics as innovative tools for learning, University News. 2023; 61(12): 139–145.

18.   Gupta, B. L. and Gupta, P. B. A critical study on the use of rubrics in technical institutions of India, Indonesian Journal of Educational Assessment. 2021; 4(2): 20–33.

19.   Laetitia Monbeca, Namala Tilakaratnaa, Mark Brookea, Siew Tiang Laub, Yah Shih Chanb and Vivien Wub. Designing a rubric for reflection in nursing: a legitimation code theory and systemic functional linguistics-informed framework, Assessment and Evaluation in Higher Education. 2021; 46(8): 1157–1172. https://doi.org/10.1080/02602938.2020.1855414.

20.   Mahmood, D. and Jacobo, H. Grading for growth: using sliding scale rubrics to motivate struggling learners. Interdisciplinary Journal of Problem-Based Learning. 2019; 13(2): https://doi.org/10.7771/1541-5015.1844.

21.   MHRD. National Education Policy 2020. Ministry of Human Resource Development, Government of India, New Delhi. 2020.

22.   Noll, J., Rawska J. and Lilley M. Designing rubrics for consistency of marking in large STEM classes, 20th European Conference on e-Learning, University of Applied Sciences HTW. 2021. DOI: 10.34190/EEL.21.113.

23.   Popham, W. J. What’s Wrong – and What’s Right – with Rubrics. Educational Leadership. 1997; 55 (2): 72–75.

24.   Reddy, Y. M. and Andrade H. A review of rubric use in higher education. Assessment and Evaluation in Higher Education. 2010; 35(4): 435-448, DOI: 10.1080/02602930902862859.

25.   Robbert, S., Patricia B., Verena B., Thomas B. and Kurt H. Effect of a rubric for mathematical reasoning on teaching and learning. Instructional Science. 2017; 45: 603-622. DOI: 10.1007/s11251-017-9416-2.

26.   Suryanti, N., and Nurhuda. The effect of problem-based learning with an analytical rubric on the development of students’ critical thinking skills. International Journal of Instruction. 2021; 14(2): 665-684. https://doi.org/10.29333/iji.2021.14237a.

27.   Teresa, S. H.  Are your rubrics hitting the mark? An exploration of the use of rubrics for project-based learning in engineering, International Conference on Active Learning in Engineering Education, PAEE/ALE’2022. 2023: 258-265. https://www.huro.ua.es/images/PAEEALE2022/PAEE_ALE_2022_PROCEEDINGS.pdf.

28.   UGC. Evaluation reforms in higher educational institutions, University Grants Commission of India. 2019

29.   UNC. Using rubrics to assess student learning outcomes at the programme level, University of North Carolina, Office of the Institutional Research and Assessment. 2017

30.   Vercellotti, M. Beyond the rubric classroom assessment tools and assessment practice. The Electronic Journal for English as a Second Language. 2021; 25(3): 1-16.

31.   Wolf, K. and Stevens E. The role of rubrics in advancing and assessing student learning. The Journal of Effective Teaching. 2007; 7(1): 3-14.

32.   Zenobia, C. and Simone Ho. Good and bad practices in rubric: The perspectives of students and educators. Assessment and Evaluation in Higher Education. 2019; 44(4): 533–545. https://doi.org/10.1080/02602938.2018.1522528.

 

 

 

 

Received on 04.08.2024      Revised on 13.02.2025

Accepted on 16.04.2025      Published on 28.05.2025

Available online from May 31, 2025

Asian Journal of Management. 2025;16(2):139-146.

DOI: 10.52711/2321-5763.2025.00022

©A&V Publications. All rights reserved.

 

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.