Advantages and Disadvantages of System Testing


Assessment and reporting are the means by which learning is monitored and stakeholders are informed of achievement. The educational aspect of assessment sees results used to identify strengths and weaknesses and to improve future learning, while the instrumental aspect involves grouping students according to achievement. Parents, teachers and students are chiefly interested in the educational function, whereas external stakeholders such as governments are concerned with the instrumental aspect.

Movement towards a global and digital economy requires skilled and knowledgeable school leavers, who are crucial for Australia’s social and economic prosperity. Governments therefore require schools to demonstrate student achievement at acceptable levels to justify their economic support. This accountability also ensures the community understands how funding and services are provided to schools. To provide this information, assessment must be undertaken on a national scale. As the information required differs from that required in the classroom, the assessment strategies differ in design, implementation and reporting. National assessment must be inexpensive, rapid and externally mandated, and results must be transparent and accessible.

Herein lie the issues with national testing. Authentic assessment is becoming popular in the classroom, testing real-life experiences and practical knowledge over numerous assessment tasks. In contrast, national tests assess students on a single occasion and rely on a ‘pen-and-paper’ mode of delivery, leading to debate over their validity.

Benefits of system-wide testing

Over the past 40 years, international and national testing has increased substantially. While early implementations assisted the selection of students for higher education, more recent national assessment is used to evaluate curriculum implementation. As different curricula operate throughout Australia and internationally, benchmarking has been developed to facilitate comparisons between countries or students and to identify strengths and weaknesses.

In Australia, the National Assessment Program (NAP) incorporates the annual NAP literacy and numeracy tests (NAPLAN) and three-yearly sample assessments in science literacy, civics and citizenship, and information and communication technology literacy. Most debate surrounds NAPLAN, so it is discussed further here.

NAPLAN proceeds under the direction of the Ministerial Council for Education, Early Childhood Development and Youth Affairs (MCEECDYA, previously MCEETYA) and is federally funded. It was developed to test skills “essential for every child to progress through school and life”. Each year, all students in Years 3, 5, 7 and 9 are assessed in reading, writing, language conventions and numeracy. NAPLAN endeavours to provide data enabling government to:

  • analyse how well schools are performing
  • identify schools with particular needs
  • determine where resources are most needed to lift attainment
  • identify best practice and innovation
  • conduct national and international comparisons of approaches and performance
  • develop a substantive evidence base on what works.

NAPLAN claims to achieve this by collecting a breadth of information that cannot be obtained from classroom assessment. Government benefits from analysis of such large data samples: outcomes for groups including males and females, Indigenous students and students of low socio-economic status provide an evidence base to inform policy development and resource allocation.

Comparing individual students to others in their state and to national benchmarks provides detailed information for teachers to inform future learning. Individual students can also be ‘mapped’ over time to identify areas of improvement or areas requiring intervention. In addition, national testing assists students moving schools, as it allows immediate identification of their learning level by their new school.


Strict guidelines surround the reporting of results to ensure these benefits are gained. The Government has committed to ensuring that public reporting:

  • focuses on improving performance and student outcomes
  • is both locally and nationally relevant
  • is timely, consistent and comparable.

If NAPLAN’s implementation follows these guidelines, it will provide great benefit to Australia. However, in these early stages of implementation it is important to consider the troubled experiences other countries have had with national testing.

Lessons to be learnt

National assessment was introduced in England in 1992 to establish national targets for education. Students are assessed at ages 7 and 11 in English and mathematics, and at 14 also in science. The ‘No Child Left Behind’ legislation was implemented in the USA in 2001 to reduce the disparity between the high and low ends of student achievement, focusing on literacy and numeracy. Students are assessed yearly between Years 3 and 8, and once between Years 9 and 12. Results are analysed on the basis of socio-economic and ethnic background and published as school league tables by the media, and federal funding is linked to school performance. The issues common to both cases are discussed below.

Because national testing is a topical issue, much of the literature is strongly coloured by its author’s opinion. Nonetheless, if or when the effects described below occur, they have the capacity to impact negatively on students, and they therefore need to be considered within the Australian context.

Narrowing of the curriculum

With funding linked to success, teachers are obliged to ensure students achieve the best possible results in assessed subjects, and can end up ‘teaching to the test’. Teachers who ‘produce’ successful students using this strategy are rewarded, deepening the problem. Within assessed subjects, increased class time is spent teaching students how to take tests and focusing on tested areas, leading to reduced emphasis on skills such as creativity and higher-order thinking. Furthermore, time spent on subjects that are not tested is reduced in preference for those that are. This type of teaching has been labelled ‘defensive pedagogy’ and leads to a narrowing of the curriculum.

Excluding low-achieving students

Reports suggest that some low-performing students are excluded from enrolment or suspended during testing to improve school performance. In one example, students with low scores were prevented from re-enrolling but were officially recorded as having withdrawn. Compounding this effect, successful schools then have further power to choose their students, widening the gap between low- and high-performing schools, in direct opposition to the reasons for implementing national assessment.

Disregarding high-achieving students

High-achieving students can also be adversely affected, as many results are reported only as the percentage of students achieving benchmarks. Priority is therefore given to students just below the benchmarks to ensure they reach them. This has been described as producing ‘cookie-cutter’ students, all with similar skills. As a result, students achieving above the benchmarks are not challenged, reducing motivation and causing disengagement.

Lowered self-esteem

In one study, student self-esteem in the three years after national testing was implemented was significantly lower than in the two years before. Furthermore, attainment in the national tests correlated with self-esteem, suggesting that both the pressure of testing and a student’s achievement can influence self-esteem.


Increased drop-out rates

Compared to schools of similar socio-economic background but without national testing, schools with national testing showed a significant increase in Year 8-10 students dropping out of school. This may be linked to pressure to suspend students, or to the reduced self-esteem and motivation associated with high-stakes testing.

Reporting of league tables

National testing results are often reported as ‘league tables’, presenting average scores that allow direct comparison between schools. However, the results tend to reflect socio-economic status rather than true achievement, and the depiction of schools as successes or failures leads to even further inequity between socio-economic groups. Importantly, the tables give no information about the cause of low achievement or the means for improvement, and therefore do not fulfil their intended purpose.

Recent trends have seen the publication of ‘value-added’ data adjusted for socio-economic status; however, the methods of calculation are not explicit, so their benefit is debatable.

Disparity from classroom assessment

Classroom assessment has become increasingly authentic, with students being assessed on real-world tasks, giving them the best possible chance of demonstrating their knowledge and skills. National testing opposes this model, assessing students on a single occasion and leaving teachers uncertain about appropriate pedagogy. Results obtained during classroom inspections of authentic styles of assessment have been shown to differ from those of national testing, raising questions over validity.

Ensuring reliability and validity in Australia

The issues described above need to be considered to ensure reliability and validity of national testing in the Australian context.

Reliability

Reliability refers to the consistency of assessment: results should be the same irrespective of when, where and how the assessment was taken and marked. The primary issue is marking consistency throughout Australia. Information technology facilitates accurate marking of simple answers, and Newton suggests that computer-based scoring algorithms for constructed responses also improve reliability. Moderation ensures all assessors use the same strategies, and marking by more than one person may also improve reliability. Moderation also assists in maintaining threshold levels over time.

Validity

Validity refers to whether an assessment tests what it was designed to test. Several forms are relevant:

  • Construct validity: the assessment is relevant, meaningful and fair and provides accurate information about student knowledge.
  • Content validity: the assessment is linked to a specific curriculum outcome.
  • Consequential validity: the assessment does not result in a specific group of students consistently performing poorly.
  • Concurrent validity: students receive similar results for similar tasks.

Debate arises over the capacity of national assessment to capture real-world tasks in meaningful contexts, or to demonstrate deep thinking and problem solving. Coming from diverse cultural and language backgrounds, Australian students bring a variety of experiences and beliefs to school and demonstrate learning differently. The single-occasion, ‘pen-and-paper’ delivery of national testing does not capture this diversity and can lead to anxiety.

This is particularly evident for students from Indigenous and low socio-economic backgrounds. One teacher suggested that the assessment is daunting and that skills valued in these students’ culture are not seen as relevant. The concept of a silent, individual examination is foreign given their cultural value of collaboration, and the numeracy assessments are unfair to students with low English literacy (G. Guymer, personal communication, April 2011). Much time is spent teaching students how to complete the forms, reducing teaching time that is already limited by low attendance.


The aspiration for equality in Australian education is clear. However, the evidence suggests that, rather than ‘closing the gap’, national testing may actually be widening it.

Reporting of results

In the past, rather than publishing league tables, Australia has ‘value-added’ to the data by grouping schools with similar characteristics, tracking individual students and identifying schools in need. However, this grouping has been challenged, as each school is essentially unique. To address this, the My School website was launched in 2010 (http://www.myschool.edu.au), publishing a profile for each school that includes information on staffing, facilities and financial resources. NAPLAN results are reported for each school against national averages, as well as against 60 schools with similar socio-economic characteristics throughout Australia.

Using results to improve learning

Despite the overwhelmingly negative response to national testing, it is unlikely to disappear. As such, using the results to improve student learning is the best response. Some methods that have been used successfully are described below.

Diagnostic application

Although not designed for this purpose, the results can be used to identify strengths and weaknesses for individuals or groups of students. By analysing specific questions, common errors can be identified and inadequacies in thinking inferred. In this way, national assessment results can serve as formative assessment to guide future teaching.

As students sit NAPLAN in Years 3, 5, 7 and 9, results for individual students can also be analysed over time to identify improvement or decline.

Consistency of schooling

Together with the National Curriculum, results from NAPLAN will help ensure students receive consistent schooling across Australia. This will reduce the difficulty associated with changing schools, as a student’s achievement level will be immediately accessible to the new school.

Incorporation of content in the classroom

Once the National Curriculum is implemented, NAP assessment tasks will be based on its content. As students will be exposed to this content during class, national testing should not pose an added burden for teachers. Teachers at Ramingining School ensure all worksheets incorporate question formats similar to those on NAPLAN tests, and in the primary school, tests are undertaken weekly in English or mathematics under test conditions (G. Guymer & B. Thomson, personal communication, April 2011). The school therefore does explicitly teach students to take the test.

Allocation of funding and resources

Arguably the most important outcome of national testing is to ‘identify schools with particular needs’ and ‘determine where resources are most needed to lift attainment’. Appropriate distribution of funding and resources will mean NAPLAN has delivered on these promises. In turn, there should be a ‘closing of the gap’ between low- and high-achieving schools, and a reduction in many of the issues discussed.

Hopefully, implementation of the National Curriculum will support the purposes of NAPLAN, together leading to equality for young Australians.
