Making decisions about edtech

School leaders deserve to know what value an edtech product will bring to existing teaching and learning experiences. Dr Fiona Aubrey-Smith shares the questions to ask and how to challenge the evidence, to ensure edtech purchases fit the requirements of a school

As school leaders, we are undoubtedly becoming better at using research evidence to inform our decision-making, both individually and collectively. However, 42 per cent of buying decisions are still based on informal ‘word of mouth’ recommendations from other schools (NFER, 2018), which suggests we still have a long way to go.

There is an increasing number of sources of evidence to draw upon when making buying decisions about edtech. Whilst historically many suppliers have produced case studies from advocate schools and soundbites from enthusiasts, many now recognise the need for more robust evidence of impact. School leaders deserve to know what value an edtech product adds to existing teaching and learning experiences.

Edtech suppliers are increasingly working in partnership with academic researchers to undertake objective analysis – identifying precisely how their products make a direct impact on improving teaching and learning. Suppliers are also following the trend in online retail of providing customer ratings: ventures such as EdTech Impact have been set up where suppliers list their products and existing customers provide validated reviews based on pre-determined criteria. In addition, sources of support such as Educate provide schools with comprehensive guidance about what to consider.

Getting it right

As school leaders, it is vital that we interpret the evidence presented to us – challenging bias within it and being absolutely clear on what it might mean for our school, our teachers and, most importantly, for our students.

Every school has its own unique flavour – a combination of size, catchment, strategic priorities, characteristics of teaching and learning, improvement or innovation priorities, policies, experience and expertise of staff, and a great many other variables. Even within the same school, a department, phase, year group or class can have a very different personality to its neighbour. We must remember that these kinds of variables affect the relationship between a particular product and a particular school – or, more precisely, the relationship between a particular product, the teachers and children using it, and the specific context in which they are using it (Aubrey-Smith, 2021).

So when receiving recommendations – whether from other schools, comparison websites or supplier marketing materials – you are encouraged to ask a range of questions, as follows.

Firstly, ask what proportion of staff and students are using the product – and why those staff and those students are the ones using it. This will help to surface the other influences affecting its successful use.

What prompted the decision to use this particular product, and which others were considered? This will help to surface whether it’s the general concept of the product that is perceived as successful – such as automated core subject quizzes – or whether it is the specific product itself.

Ask how long the product has been in use – and, if it has been renewed, what informed that decision. This will help surface how embedded the product is.

Find out what other improvement strategies have been implemented since this product was introduced to the school – either whole-school or within the particular subject, phase or department. This will help surface whether any improvements seen relate to the product, to other teaching and learning strategies, or to a combination of both.

Once students are used to using the product, ask what evidence there is that their learning translates into the same levels of mastery in other contexts. For example, if they score ‘x’ or do ‘y’ when using the product, can you be confident that they would later score ‘x’ or do ‘y’ when applying the same skill in an unrelated context? In other words, are the attainment increases about the child’s knowledge, or the child’s familiarity with the product?

Consider what evidence there is of students’ long-term knowledge or skill retention – over a week, a term, a year and beyond. Note that this is not the same as progression through units of work – it is about retaining knowledge over time. Is the product securing long-term knowledge, or targeting short-term test preparation or skill validation?

Challenging evidence

Part of a school becoming an effective professional environment for all staff is about everyone engaging meaningfully with available evidence, and embedding specific types of strategic thinking and evaluative focus into practice (Twining & Henry, 2014). In other words, all of us using robust evidence to inform our thinking, and being clear on how we use that evidence meaningfully, to make future decisions.

There are three key lines of enquiry which will help you to challenge evidence meaningfully:

Firstly, recognise that correlation is not the same as causation. In other words, just because a school using a product saw improved attainment outcomes, increased engagement, reduced workload or improved accountability measures, it does not follow that it was the product that led to this.

Most schools implementing a new product do so as part of a broader strategy focused on specific improvement priorities. One would therefore expect improvements to be seen regardless of which product was chosen, because of the underlying strategic prioritisation given to the issue. Instead, focus on how the product changes behaviours – such as increased precision within teaching and learning dialogue. This is where meaningful impact will be found.

Secondly, note that for every research finding that argues for one approach, there will be research elsewhere arguing for something different. Your role is to identify which research relates most closely to your specific context. You can do this by asking: who produced the material that I am reading? What bias might they have? Have they acknowledged that bias and shown how they have mitigated it?

Ask what evidence led to their recommendations. What data are the findings based on – and are these large-scale but surface-level, or smaller-scale and probed more meaningfully?

Ask what their vision is for teaching and learning, and how this aligns with the vision of what good learning and good teaching look like in your own school.

The third thing to do is to plan for impact before you commit to investing. A vital part of decision-making is planning from the outset how you will evaluate what works and why. You will then remain forensically focused on what matters most to your school throughout procurement, implementation and review, and be able to identify and recalibrate when ideas do not work as intended, so that future practice improves. Guskey (2016) encourages us to think about impact through five levels: reactions to something, personal learning about it, consequent organisational change, embedding ideas within new practices, and finally creating a positive impact on the lives of all those involved. These apply to both teachers and students (as well as leaders, parents and other stakeholders, depending on the product).

Embedding meaningful review of the impact of your product choice connects your intentions to the lived experiences of the students whose needs and future you are serving.
The two vital questions that you will want to ask yourself and your team are: what evidence is there that our intentions for this product are being lived out in reality by our young people? And what evidence is there that our provision (through this product) is making a tangible difference to how students view themselves, their learning and their future?

Improving the quality of teaching

Finally, any decision made in school should always be rooted in improving the quality of teaching and learning. This can easily be lost amongst conversations about requirements and procurement.

To help with this, identify three to five “personas” – short descriptions of the people who the product is ultimately intended to support. For example:
High Attaining Pupil Premium Students; Working-Class Boys in KS2; KS3 Girls Disengaged with STEM; Children with EAL in KS1.

At every point, keep coming back to these personas – how would each product, feature, piece of research, impact finding or sample of evidence support those specific students?

That way, we keep a forensic eye on what matters most – our students and their learning.

About the author
Named by Education Business as one of the 50 most influential people in education (2021), Dr Fiona Aubrey-Smith is an award-winning teacher and leader with a passion for supporting those who work with children and young people. As Director of One Life Learning, Fiona works with schools and trusts, professional learning providers and edtech companies. She is also an Associate Lecturer at The Open University, a Founding Fellow of the Chartered College of Teaching and sits on the board of a number of multi academy and charitable trusts. Fiona is also a sought-after speaker, panellist and author for publications and events addressing education, pedagogy and education technology.