A growing number of US universities are abandoning standardized tests as a key factor in the admissions process. Last month, Creighton University announced that it will no longer require applicants to submit ACT or SAT scores starting in 2020, joining Arizona State University, DePaul University, Drake University, the University of Arizona and the University of Chicago, among others.
Why are these respected institutions moving away from test scores that have been a cornerstone of assessing college readiness for nearly 60 years? The answer is simple: they are not an accurate measure of students' knowledge and potential.
Despite the growing movement away from treating the ACT and SAT as the be-all, end-all benchmarks of college readiness, more students than ever are taking them.
Why are the results so poor? Are underperforming test takers simply not ready for college? The answer to these questions is the very reason many in cognitive science believe these exams should not matter in the first place: point-in-time assessments are riddled with problems and do not give an accurate picture of real knowledge or potential. We have the technology to do better.
Think back to the exams you have taken in your life. What do you remember most? Is it the material being tested, or the anxiety you felt about how much of your future was riding on a single set of questions? That anxiety illustrates the underlying problem with the SAT and ACT and an inherent unfairness that disadvantages many students. In the end, many of the factors that affect exam scores, such as stress, anxiety, physical health and well-being, are not what we actually want to measure.
Perhaps most worrying is that many of these factors have more to do with who we are than with what we have learned. A full two-thirds of high school students have experienced an uncomfortable level of test anxiety at some point, with severe and chronic test anxiety affecting up to one in four.
More than 32 percent of adolescents have suffered from an anxiety disorder, and those numbers have risen alongside the prevalence of standardized testing. Research has also shown a strong correlation between exam performance and factors such as minority status and family income.
Given these disparities, it is no surprise that large-scale studies of thousands of students find that ACT and SAT scores are a poor predictor of success in college. The same studies have shown that high school GPA, a measure of performance over time and across many opportunities, is a far better indicator of future results and success.
This is a good start, but even the time-tested GPA has room for improvement. GPA's strength is that it is a long-term, data-driven measure, capturing knowledge through consistent data points across a student's entire high school career. Yet it, too, is influenced by high-stakes exams.
What these current standards of knowledge assessment are missing is the broad application of cognitive science (how we learn), technology (artificial intelligence and machine learning) and rich student data sets that adapt the learning experience to each individual. That is how to accurately assess real knowledge and potential: a GPA 2.0, if you like.
Educators, admissions officers and, most of all, students have much to gain from moving to a better model of assessing knowledge. Imagine if your courses could predict exactly when you would forget the types of chemical bonds you needed to master, or recognize that you had not yet mastered Shakespeare's signature literary devices, and could then deliver that material at exactly the right moment to build long-term memory and retention.
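The prediction described above can be sketched with the classic exponential forgetting curve. This is a minimal, hypothetical illustration, not any particular platform's algorithm; the stability values and the 80 percent threshold are assumptions chosen for the example.

```python
import math

def recall_probability(days_since_study: float, stability: float) -> float:
    """Estimated probability of recall after `days_since_study` days.

    `stability` is roughly how many days it takes recall to decay
    to ~37% (1/e) — stronger memories have larger stability.
    """
    return math.exp(-days_since_study / stability)

def days_until_review(stability: float, threshold: float = 0.8) -> float:
    """Days after study at which predicted recall falls to `threshold`."""
    # Solve e^(-t/s) = threshold  =>  t = -s * ln(threshold)
    return -stability * math.log(threshold)

# Example: a topic with 10-day memory stability should be reviewed
# after about 2.2 days to keep predicted recall above 80%.
print(round(days_until_review(stability=10.0, threshold=0.8), 1))  # → 2.2
```

A real system would estimate each student's stability per topic from their answer history; the point here is only that "when will this student forget?" is a computable question, not guesswork.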
Or imagine if a teacher could see a dashboard of how students were progressing toward mastery and use that insight to decide when to intervene and help the students who truly need it, rather than treating everyone the same.

The combination of cognitive science and technology can do much more than assess knowledge: it can help us learn more effectively in the first place. Cognitive scientists have spent decades carefully mapping the most effective techniques for building long-lasting memories, such as deeper engagement, challenging self-testing and optimally spaced reviews, as well as identifying common approaches, such as cramming, mnemonic devices and re-reading, that lead to poor retention. Unfortunately, the latter approaches remain extremely widespread in ACT and SAT preparation, despite having little lasting effect.
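The advantage of spaced reviews over cramming can be shown with a toy model. This is my illustration, not a published algorithm: assume each review strengthens a memory's stability in proportion to the gap since the last study session, after which recall decays exponentially until exam day.

```python
import math

def predicted_recall(review_days, exam_day, base_stability=2.0):
    """Predicted recall on `exam_day`, with first study on day 0.

    Toy assumption: each review adds the elapsed gap to stability,
    so widely spaced reviews strengthen the memory more.
    """
    stability, last = base_stability, 0.0
    for day in sorted(review_days):
        stability += day - last  # longer gaps build stronger memories
        last = day
    return math.exp(-(exam_day - last) / stability)

# Same three reviews, exam on day 30 — only the spacing differs.
crammed = predicted_recall([1, 2, 3], exam_day=30)
spaced = predicted_recall([7, 14, 21], exam_day=30)
print(f"crammed: {crammed:.2f}, spaced: {spaced:.2f}")  # spaced wins
```

The exact numbers are artifacts of the assumptions, but the qualitative result, that distributing the same amount of study time yields better retention, matches the decades of spacing-effect research the article cites.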
Today, the ACT and SAT carry a lot of weight. But as learning tools and data continue to improve the learning experience, they shouldn't. The real test will not be the ability to stay calm, fill in answer sheets and outsmart test day; it will be objectively tracked, long-term knowledge and understanding.
As we move to online, on-demand curricula and assessment, data on student performance, cognition and ability to learn will only grow, and standalone tests will mean less and less. That would be a perfect score.
Published March 16, 2019 – 19:30 UTC