One of Lev Vygotsky's foremost contributions was demonstrating the crucial role that language plays in cognitive development. Language includes verbal, non-verbal, sign and mathematical language and other symbolic systems, and is a powerful instrument for transmitting values, information and world views. Against this background, I will focus on assessment for the National Certificate (Vocational) (NCV) Level 2 English First Additional Language (EFAL 2). This class group is of mixed ability, its students have different school-leaving grades, and quite a few experience barriers to learning which were not disclosed during the registration process.
Their mother tongue also differs from the language of learning and teaching (LoLT), which is English. Also included are a brief discussion of the principles of assessment, a discussion of an assessment theory, an analysis of the Department of Higher Education and Training's (DHET) assessment guidelines, and a review of an assessment instrument.
For the purpose of this assignment, I will focus only on the following assessment principles: validity, reliability, fairness and inclusivity.
Validity is probably the most important issue when designing an assessment instrument. It describes the degree to which the assessor/facilitator can draw conclusions about the candidates based on the assessment's results. An assessment's validity is established for a specific purpose and may not hold for other purposes: an instrument used to make predictions about a candidate's oral communication skills, for example, may not be valid for predicting collaborative or problem-solving skills. Validity covers three aspects, namely face, construct and impact validity (Messick, 1989). Face validity deals with the appropriateness of the test content for the candidates and the level. Construct validity is concerned with the nature of the broader constructs being tested: memory recall, collaborative skills, verbal skills. Assessments with construct validity aim to help students develop decision-making, organisational, thinking and reasoning skills. A carefully designed assessment should have useful face and construct validity. The effect that the assessment process has on a candidate's behaviour is deemed to be impact validity.
The principle of reliability implies that all assessors should come to the same conclusion with regard to a candidate's competency or competencies (Popham, 2014). A marking guideline should accompany every assessment instrument and be scrutinised by the moderator during the pre-moderation process.
Adjustments may be made during moderation and/or the memorandum discussion, and the marking guideline should be amended accordingly. Depending on the instrument, assessment tools may include observation sheets, rubrics, model answers, comments, checklists and marks or grades. Sources of unreliability in assessment may include assessments which are either too long or too short, inconsistent intra- or inter-examiner reliability, and individual test items which are inadequate.
An assessment is considered fair if the candidates understand the assessment process, the process was agreed upon by the candidates and assessors, and the needs of the candidates are taken into account. It is important that candidates, especially those who experience barriers to learning, be accommodated, but the accommodation should not compromise the outcome. Amendments or adjustments should be made only after consulting a specialist. Any adjustments made should be documented in enough detail to enable another assessor to make a judgement.
A well-designed assessment can be valid and reliable, yet it must also be fair to all individual candidates and groups of candidates. Assessors may have differing marking biases, and candidates are likely to be treated unfairly in open-ended written tests if the marking guidelines do not make provision for alternate responses or if assessors assume that the candidates' responses are irrelevant.
The goal of the principle of inclusivity is to provide equal academic opportunities for all candidates. It takes into account that some candidates may experience barriers to learning which might put their capacity to meet the minimum assessment requirements at risk, and it allows for modifications and special provisions to be made where necessary. This does not imply that standards are lowered; rather, alternative methods are used to reach the same conclusion. In a diverse country such as South Africa, with eleven official languages, candidates may be disadvantaged by shortcomings in their English proficiency. Students whose mother tongue is not English tend to perform poorly compared to those whose mother tongue is English (Smith, 2011).
The why (purpose) of assessment and how it affects learning should be considered when designing an assessment instrument. Assessment is significant for two fairly different reasons. Firstly, it is an essential element of teaching and learning and is used to guide students in their studies. Formative assessment helps students with their academic development and facilitates learning, thus averting the negative effects associated with summative assessments. Students' views of what is rewarded and what is disregarded by summative assessments will have a considerable effect on their learning behaviour and therefore on the course outcomes. Secondly, assessments must be accurate for a number of reasons: inaccurate assessment would be unfair, accuracy is required for internal and external quality assurance, and it ensures that candidates who believe they have been unfairly judged or classified do not challenge the institution.
Summative assessments are administered at the end of a learning programme or after a specific period, e.g. a term, trimester, semester or academic year. Unlike formative assessments, they evaluate whether students have mastered the learning material. Assessment evidence allows teachers to ascertain whether the learning material, curriculum and teaching methods were effective, and standardised assessments ensure that all candidates are measured against set standards. However, very little empirical evidence is available to prove that summative assessments lead to improved student performance (Rosenshine, 2003; Yeh, 2007).
ASSESSMENT INSTRUMENT REVIEW
Contemporary teachers and policy makers prefer assessments which: display students' thinking and problem-solving abilities instead of discrete knowledge (Berlak et al., 1992; CSUP, 1992; Taylor, 1994; National Committee of Inquiry into Higher Education, 1997); directly inform teaching (Nichols, 1994); and signify meaningful, important and useful forms of human endeavour and competencies (Wiggins, 1989a).
The NCV EFAL 2 First Paper (DHET, 2019) is a summative assessment used to determine whether a student has reached the competency required to progress to the next level. The assessment is based on grading which is objective and externally verifiable. There is therefore a marking guideline, which also allows for alternate responses with a mark allocation, as scripts are marked internally at the different campuses across the country. Assessment requirements are available to teachers and must be disseminated to students, preferably at the beginning of the academic year. A list of instructions appears on the cover and the second page of the assessment. Candidates can see the subject, code, date and time on the cover page of the script; this helps them to ensure that they are writing the correct paper.
Prior to the assessment, candidates are provided with what is required: an examination permit, the date and time of the assessment, and the venue and seat allocation, to name but a few. If a candidate disputes the results, he/she may request a re-mark or re-check of the assessment and/or a viewing of the assessment. There is, however, a cost involved, which might exclude candidates who feel aggrieved but lack the economic means. If the internal moderator detects a variance of 5% in 50 of the moderated assessments, the assessor is compelled to remark the whole batch.
At the different examination centres, arrangements are made to ensure that the evidence submitted is in actual fact the candidate's own work. Examination rules are printed in the answer booklet and, to a certain extent, inform candidates to refrain from copying another candidate's responses and to present their own.