TWG International Literacy Assessments Blog #2 April 2020

PISA 2018 assessment of reading literacy: innovations and continuity

by Dominique Lafontaine, member of the PISA reading expert group

From print to digital reading

Between 2000 and 2018, reading underwent a tremendous transformation in our societies. When the first framework for PISA 2000 was developed, less than 2% of the world’s population used the Internet. Today, the Internet pervades the lives of citizens in educational, professional and personal spheres. Laptops, tablets and, more recently, smartphones make information available all the time. Today’s students need to master digital tools and to develop new skills to deal with the increased complexity and quantity of information available online. If PISA was to retain its educational and societal relevance, its frameworks, assessments and questionnaires had to reflect this major change in the nature of reading, while maintaining enough continuity with previous cycles so that trends could be measured adequately.

What’s new in the assessment?

In 2018, a substantial revision took place: online reading now occupies a central place in the assessment. All the new units developed for 2018 are online reading units, with the specific features of electronic texts, navigation tools between pages and hyperlinks. Only the old units from 2000 and 2009, which are needed to link the results with previous cycles (“trend items”), are traditional print or static texts (without hyperlinks) with well-defined boundaries, even though they are delivered on computer.

The new framework has much in common with the 2000 and 2009 ones, retaining the three main reading processes [“Locate information”, “Understand” (instead of “Interpret” in 2000 and 2009) and “Evaluate and reflect”]. However, it is conceptualised to address the main differences between print and online reading:

  1. In online reading, the text is not given: readers have to build their own text, choosing which paths to follow and which to dismiss. Being offered so many options also creates possibilities for getting lost in the process.
  2. The PISA 2018 framework for reading literacy aims to address the additional complexities of online reading comprehension. Within the three processes listed above, subprocesses particularly linked with online reading have been added to the model: “Search and select relevant text” (in “Locate information”), and “Assess quality and credibility” and “Corroborate and handle conflict” (in “Evaluate and reflect”). Obviously, these subprocesses take on increased importance in a society in which reading online is a widespread practice.
  3. Additionally, PISA 2018 introduced a significant innovation in the design of the assessment. In previous PISA cycles, the reading tasks consisted of unrelated passages on a range of different topics. Students had to answer a set of discrete items on each passage and then move on to an unrelated passage. Consequently, there was no overarching purpose for reading other than to answer discrete questions. In contrast, PISA 2018 uses a scenario-based assessment approach, in which students are given an overarching purpose for reading a collection of thematically related texts in order to complete a higher-level task. The scenario offers a collection of goals, or criteria, that students use to search for information, evaluate sources, read for comprehension and/or integrate across texts. The collection of sources can be diverse and may include a selection from literature, e-mails, blogs, websites, policy documents, primary historical documents and so forth. Alongside these scenario-based units, traditional PISA reading units were kept to secure the trend measures.

In a nutshell, the innovations in the PISA 2018 assessment were quite substantial, and a legitimate question is how to compare “traditional” reading skills with the new skills and processes reflected by the three major innovations mentioned here. In the international reports and database, one can find two subscales labelled “single” and “multiple” sources. Comparing scores on these two subscales is the best way to approach the difference between students’ abilities with “traditional” skills and “online reading”. Items are classified as “single source” when respondents need, or are prompted, to read one text to answer a question, and as “multiple source” when they have to consult several texts (in the context of scenarios). On average across OECD countries, students performed somewhat better on the multiple-source subscale (490 vs 485), but in some countries the gap was larger. This is a topic of great interest for analyses aimed at understanding differences between and within countries, as well as for checking whether this gap varies according to gender or other variables available in the contextual questionnaires.
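For readers who want to explore this gap themselves, the sketch below shows how the country-level difference between the multiple-source and single-source subscales could be computed in Python (pandas) from the public PISA 2018 student file. It is a minimal sketch, not an official analysis: the file name and the subscale variable names (PV1RTSN–PV10RTSN, PV1RTML–PV10RTML) are assumptions to be checked against the PISA 2018 codebook, and proper standard errors would additionally require the BRR replicate weights, which are omitted here.

```python
# Minimal sketch: multiple-source minus single-source gap per country.
# Assumptions: the SPSS student file name and the subscale variable
# names (PV{i}RTSN, PV{i}RTML) must be verified against the codebook.
import numpy as np
import pandas as pd

PVS = range(1, 11)  # PISA provides ten plausible values per scale
cols = (["CNT", "W_FSTUWT"]              # country code, final student weight
        + [f"PV{i}RTSN" for i in PVS]    # single-source subscale (assumed name)
        + [f"PV{i}RTML" for i in PVS])   # multiple-source subscale (assumed name)

df = pd.read_spss("CY07_MSU_STU_QQQ.sav", usecols=cols)  # hypothetical file name

def weighted_mean(group, col):
    return np.average(group[col], weights=group["W_FSTUWT"])

def subscale_mean(group, stem):
    # Point estimate: average the weighted mean over the ten plausible values
    return np.mean([weighted_mean(group, f"PV{i}{stem}") for i in PVS])

gap = df.groupby("CNT").apply(
    lambda g: subscale_mean(g, "RTML") - subscale_mean(g, "RTSN")
)
print(gap.sort_values())
```

Splitting the same computation by gender or by other questionnaire variables only requires adding those variables to the groupby key.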

What’s new in the contextual questionnaires?

In line with these new developments in online reading assessment, all the reading-related non-cognitive, teaching, learning and curriculum constructs and questions in the students’ and teachers’ questionnaires also had to cover not only print but also online reading.

Several questions in PISA 2000 and 2009 already addressed students’ motivation and engagement, reading practices and metacognitive strategies. Regarding motivation, two new scales were developed for 2018: a self-efficacy scale and a self-concept scale. The typical questions about reading practices (frequency of reading fiction and non-fiction books, comics, magazines, newspapers) were kept to allow trends to be measured. New questions were added to estimate to what extent students prefer using digital devices, paper or both when they read books and when they want to be informed about the news. Regarding metacognitive strategies, a new scenario was developed to estimate students’ strategic knowledge about how to deal with an email that could be spam.

Several questions addressing teaching practices in language lessons were newly developed for PISA 2018, some of them in the students’ questionnaire, others in the two optional teachers’ questionnaires. (There is a “general” teacher questionnaire and a specific questionnaire targeting teachers in charge of language/literature lessons.) These questions address, on the one hand, practices supporting reading engagement and, on the other, practices enhancing reading skills, metacognitive strategies and opportunity-to-learn in reading, both on paper and online. On that basis, it is possible to examine differences in reading teaching practices between and within countries, and especially to study to what extent students are given opportunities to learn online reading strategies at school.

In the first three volumes of the international reports released by the OECD, very little information related to these new reading contextual questions is available, apart from some information about changes in reading practices. This reflects the OECD dissemination strategy, which in 2018 put a strong focus on well-being and general dispositional attributes not related to the reading domain. No specific report dedicated to reading is scheduled before 2021, which is a pity. However, as all the data are available in the international database, this is an opportunity for researchers in the field of reading literacy to develop their own analyses.
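As an illustration of such secondary analyses, the sketch below estimates, per country, the weighted correlation between an enjoyment-of-reading index and overall reading performance from the same public student file. It is again a hedged sketch under assumptions: JOYREAD is an assumed variable name for the enjoyment index, the file name is hypothetical, and a full analysis would use all ten plausible values and the replicate weights rather than the single-PV shortcut taken here.

```python
# Sketch: weighted correlation between enjoyment of reading (assumed
# variable JOYREAD) and overall reading performance, per country.
# PV1READ is the first plausible value for overall reading; a complete
# analysis would combine all ten plausible values and the BRR weights.
import numpy as np
import pandas as pd

df = pd.read_spss("CY07_MSU_STU_QQQ.sav",  # hypothetical file name
                  usecols=["CNT", "W_FSTUWT", "JOYREAD", "PV1READ"])
df = df.dropna(subset=["JOYREAD", "PV1READ"])

def weighted_corr(g):
    # Weighted Pearson correlation using the final student weight
    w = g["W_FSTUWT"]
    x, y = g["JOYREAD"], g["PV1READ"]
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    sx = np.sqrt(np.average((x - mx) ** 2, weights=w))
    sy = np.sqrt(np.average((y - my) ** 2, weights=w))
    return cov / (sx * sy)

print(df.groupby("CNT").apply(weighted_corr).sort_values())
```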

Link to the framework http://www.oecd.org/education/pisa-2018-assessment-and-analytical-framework-b25efab8-en.htm

Link to the results https://www.oecd.org/pisa/publications/pisa-2018-results.htm
