Language & Life
The dawning of a new age of oracy
"How can I know what I think until I see what I say?"
Often attributed to E.M. Forster, Graham Wallas, André Gide or W.H. Auden, this question has resonated with me for many years. It captures the idea that language tends to construct our reality rather than merely describe it.
The goal of my website is to help me make sense of some things I have spent much of my life thinking about, professionally and personally.
This homepage is focused on my thoughts about artificial intelligence, its impact on our humanity, and more specifically on the future of education and how our assessment systems urgently need to evolve. It's in five short sections (with dropdown headings) and takes 10-15 minutes to read.
I hope it is of some value to you.🙏🏼
Dr Jonathan (aka Joff) Wheeldon
Artificial intelligence or authentic stupidity*?
When I started to create this website in 2023, ChatGPT had been declared the latest 'game-changer' in large language models (LLMs). LLMs are the best known products of the branch of artificial intelligence known as 'generative AI'. Two years on, there is now fierce competition between LLM providers, and their tools are getting stronger every day. They are having a huge impact on the evolution of our species, in particular:
how teaching, learning and research are most efficiently conducted
how knowledge is produced, authenticated and ranked for value and relevance
how power and influence are distributed throughout society
With only basic online access, it is now very easy for any of us to pose any question (orally or in writing) and to generate plausible, eloquent textual answers. We can convert the output instantly into slick presentational formats, including convincing human dialogue such as a podcast. The style, tone and vocabulary of this output can easily be adjusted to cater for different audiences and age-groups.
For those who already have some knowledge, skills and a critical disposition to review the output, the productivity gains are spectacular. Coding time can be reduced from hours to minutes. Desktop research, high quality reports and presentational materials can be produced in hours rather than days.
LLMs are playing a wide range of roles: research assistant, editor, personal tutor, image-generator, even foreign-language conversational partner. We can ask the LLM to take on a particular persona or character type which suits our needs. We might still be some years away from the physical embodiment of the artificial friend (AF) envisaged by Kazuo Ishiguro in the brilliant Klara and the Sun (2021), but it is now very easy for us to imagine chatbots quickly becoming our mentors, coaches and non-clinical therapists. Although many people lost patience with the first wave of personal AI assistants (Siri and Alexa), the latest chatbots such as ChatGPT, Gemini, Claude and DeepSeek are likely to proliferate much more widely, with huge implications for bespoke learning, productivity and human relations.
This is marvellous, but sometimes the critical-thinking skills and the deep-learning processes are conveniently bypassed. If we are not careful we can convince ourselves (and others) that we are more informed or intelligent than we really are. At least for a while. Those who just see LLMs as making life easier and simply pass off raw output as their own work, without editing, adding personal perspective, or any process of critical review, are playing a dangerous game. A game that has harmful consequences for them and for all of us. Taking advantage of generative AI is hard work: it demands more of our brain power, not less.
The best output of generative AI is formed through an iterative co-production. That is to say, the quality of LLM output relies upon the thoughtfulness of the person asking the questions, assessing the integrity of the 'answers' and then asking more questions. Prompt engineering is the term used for the process of writing instructions to get the best results from LLMs. It requires creativity and critical thinking and is therefore fully aligned with mainstream teaching and learning goals. We should be putting it front and centre of digital literacy, a term that should now be familiar across many elements of the school curriculum.
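For readers who like to see ideas made concrete, the iterative co-production described above can be sketched in a few lines of Python. This is only an illustration: the `ask_llm` function is a hypothetical placeholder (a real implementation would call a chat-completion API), and the point is simply the shape of the loop — the human reviews each answer and drives the next, sharper prompt.

```python
# A minimal sketch of iterative co-production ('prompt engineering'):
# draft, review, refine. ask_llm is a placeholder, not a real API call.

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; here it just echoes the prompt."""
    return f"[model response to: {prompt}]"

def co_produce(question: str, rounds: int = 3) -> list[str]:
    """Run an iterative loop in which each answer is critically reviewed
    and a sharper follow-up prompt is issued for the next round."""
    transcript = []
    prompt = question
    for i in range(rounds):
        answer = ask_llm(prompt)
        transcript.append(answer)
        # The human's critical review shapes the next, more precise prompt.
        prompt = (f"Revise your previous answer (round {i + 1}): "
                  "cite sources, flag uncertainty, simplify the language.")
    return transcript

history = co_produce("Why must assessment systems evolve alongside generative AI?")
print(len(history))  # prints 3 -- one answer per round of review
```

The quality of the final answer depends less on any single prompt than on the number and thoughtfulness of the review rounds — which is precisely why prompt engineering is a creative and critical-thinking skill rather than a mechanical one.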
Our role in improving generative AI
The vast sources of data trawled by LLMs exclude (for now) much valuable scientific and cultural knowledge protected by organisational firewalls, and by copyright and data protection laws. But beware - I am not confident that last sentence is accurate. What is really happening in the training of LLMs 'behind the scenes' is quite opaque and secretive. Whether or not all of that higher quality 'curated' knowledge becomes accessible to LLMs, we should be cautious in thinking of the output as attributable to an LLM, even though we must acknowledge its contribution. The human 'prompt-engineer' is a co-author in this collaborative process and must remain at least partially accountable for what is produced, especially when the LLM output is re-shared more widely.
Bear in mind that, directly or indirectly, we are part of a fine-tuning process known as Reinforcement Learning from Human Feedback (RLHF). By actively engaging with generative AI, we have a much better chance of evolving alongside it in a good way. This is well expressed by Ethan Mollick** in his excellent book Co-Intelligence (2024) which helps us think quite profoundly about how AI can be an extension and evolution of our humanity.
With the metaphor of centaurs and cyborgs Mollick makes a useful distinction between the ways in which people are using LLMs. Centaur work has a clear line between person and machine, allocating tasks based on the capability of each. By contrast, cyborgs iteratively and seamlessly blend person and machine, building on each other's contribution. This makes it difficult to clearly attribute authorship. The cyborg approach is now very common in desktop research and writing, including the development of reports and image-enhanced presentation materials.
Amongst many other thought-provoking insights, Mollick promotes the responsibilities we all have. These include adapting our organisations, our managerial practices and our training programmes in an equitable way to avoid mass redundancy of the human workforce. Of most relevance to teachers and education leaders, he explains why (after decades of promises and false starts with 'edtech') classroom learning is now changing quite radically. Transformation will occur in a positive way as long as teacher and student each remain 'the human in the loop'.
To wrap up this section, we should remember that the output of LLMs is not always accurate or wise. They can create hallucinations***. Even so, they will almost certainly become more and more plausible and relied upon. As a result, the term literacy is being redefined. Or perhaps it is more accurate to say that what we understand by the word literacy is being culturally renegotiated to recognise the agility and pragmatism needed to thrive in a world transformed by AI. One thing is very clear - sooner or later, young people need to develop confidence, skills and maturity in making the most of LLMs. This includes understanding their limitations and the dangers of learning shortcuts.
Most urgently, our educational assessment methods must evolve, both to support children and to ensure the integrity of their learning processes.
Footnotes
*Ben Elton's ironic AI antonym 'authentic stupidity' neatly expresses the risk caused by replacing skills such as critical thinking, emotional intelligence or plain old 'common sense', with a lazy or premature reliance on artificial intelligence.
** Mollick's newsletter oneusefulthing.org is a good resource for monitoring developments in generative AI, amongst many others such as The Neuron.
*** Hallucinations refer to the occasions when LLMs create misleading answers. There are many collaborative initiatives measuring the accuracy of LLMs. Humanity's Last Exam is a good illustration.
Hope versus fear - AI Fight Club*
Along with hope, AI is also generating fear. Only time will tell if (and how far) we are falling into the black hole of artificial general intelligence (AGI) or so-called 'technological singularity'. These are the terms used to describe the state where superhuman AI is irreversibly out of control. A state where a majority of humans have been charmed into being the serfs of dominant and super-efficient AI, and without the skills or the will to challenge our 'oppressor'. We might not even be aware of our oppression. For people of my generation, this scenario was very well captured by the 1999 movie The Matrix.
It is easy to imagine, because in making life easier AI can also make us lazier. The most proactive and energised minds will be enhanced by AI and will develop new types of cognitive faculties that give them competitive advantage in productivity and creativity. But many others could go the opposite way. Increasingly dependent on handy apps for just about everything, and becoming addicted to behaviourally-manipulative algorithms, our minds may lose capacity and focus. Without the compulsion to question our own lack of agency, we may become blissfully unaware of our diminished free will. Worse still, our human 'resources' i.e. our skills, passions, diversity, creativity and ingenuity, may become regarded by AI as obsolete, disruptive or worthless. In this vision, there is an inevitable existential crisis for humanity.
Of course this is not a new fear. Aldous Huxley's masterpiece Brave New World (1932) is highly relevant to discussions of being oblivious to our soft oppression through a singular political and cultural system. And despite its pluralistic intent, so is Francis Fukuyama's famous 1992 claim about 'the end of history' following the fall of the Soviet Union and the Berlin Wall. The assertion of an ultimate and permanent victory of liberal democracy and free-market capitalism over competing totalitarian fascist or communist ideologies was a premature claim to a dominant political and economic hegemony or singularity. One into which the 'developed' world has been seduced by the promises of technological 'progress' and by:
improvements in standards of living that are often superficial, inequitably distributed and not synonymous with happiness;
addictive and cynically-exploited consumerism;
managerialism, an organisational ideology that generally prioritises corporate interests over individual ones;
the obsession with GDP growth, such a blunt measuring tool that takes no account of impact on collective wellbeing or on the environment.
In an AGI future, which values and principles will dominate? Will there be a challenge, or a 'correction' of these imperfections of capitalism, AI's current champion and collaborator? Or will they become further embedded and no longer be perceived as imperfections? In a meritocracy that rewards those who are able to take advantage of AI to become 'super-human', how will the humans who get left behind be treated?
Every age has its 'opium of the masses' and today's tech billionaires are no different from the religious and political leaders of previous centuries. Whether or not these leaders are noble in their intent, they have a common desire to address or exploit some profound human needs and systemic shortcomings. What is playing out right now may not be exactly what Huxley had in mind in his ironic Brave New World (with its freely available happiness-inducing drug soma, and its social and genetic engineering), but we can see some frightening parallels with his dystopian vision. In this context, and with some irony, I asked ChatGPT to come up with some ideas for SOMA as an acronym with current-day relevance. The response came immediately, and showed good understanding of why I was asking the question. Here are ChatGPT's suggestions:
Stream Of Mesmerizing Algorithms
Scroll, Observe, Manipulate, Addict
Social Oblivion, Mechanical Absorption
Sensory Overload, Mindless Attraction
Screen-Oriented Mental Autopilot
Sedated Online, Mesmerized Always
Scary, but the biggest risk to our future is probably not AI being out of human control, but rather being undemocratically controlled by any of the three 'C's: countries, corporations or criminals.
In late January 2025 we saw some dramatic tension between two of these three 'C's. With some fanfare, the newly inaugurated US president Donald Trump proclaimed America's continuing dominance of the AI sector through the Stargate Project, a $500 billion OpenAI collaboration with a Japanese investment firm and an Emirates sovereign wealth fund. Almost simultaneously, China released DeepSeek, which along with other impressive Chinese LLMs, demonstrates that competitive LLMs can be produced at a tiny fraction of the cost of their American rivals. In response, almost US$1 trillion was wiped off technology stocks in one day, with chipmaker Nvidia losing $600 billion, the biggest one-day loss ever recorded.
In the subsequent days there were claims and counterclaims that these LLMs had been copying each other. Few reports seemed to grasp the irony of tech billionaires being outraged at the illegal or unethical copying of products whose core capability is built on riding roughshod over intellectual property rights. Such loss of copyright protection is causing deep anxiety for the creative and knowledge industries. It may be difficult advice to follow in an informed way, but we should all remain conscious of the consequences of these developments in our decisions to use LLMs.
Amongst many more recent books presenting visions of an AI future are Ray Kurzweil's latest The Singularity is Nearer (2024), Yanis Varoufakis' Technofeudalism (2023), Mustafa Suleyman's The Coming Wave (2023) and Yuval Noah Harari's Homo Deus (2015). Each one captures valuable insights and hypotheses based on similar concerns.
Whether post-singularity existence turns out to be heaven or hell is a profound 21st century question that we should all spend at least some time pondering rather than fighting about. But it is a question that will not be resolved anytime soon. In the meantime we must do what we can to research, debate and determine the optimal role of AI in education to give the best chance for an acceptable outcome for humanity.
To end this section, the publisher's synopsis of Tom Chatfield's Wise Animals (2024) neatly reconciles the hope versus fear dilemma:
'Human children know no more of modern technology than their ancestors did of older technologies thousands of years ago, and develop in relation to the technologies of their time.
We co-evolve with technology as individuals as we have as a species over thousands of years.
Rather than see technology as a threat, this humanist contribution to the debate proposes that we are neither masters nor victims of our technologies. They are part of who we are, and our future – and theirs – is in our hands'.
Footnote
* AI Fight Club is a metaphor coined by Henry Farrell to describe the polarised positions around the future of AI. See his article at Programmablemutter.com 17/12/2024.
The human response - a new age of oracy
An equally impactful consequence of LLMs (in conjunction with the extraordinary pace of evolution in voice AI such as voice-typing, editing tools and automated voice generation) is that the distinction between literacy and its educational 'poor cousin' oracy is becoming blurred.
The opportunities are great, especially in levelling the playing-field for neurodiverse youngsters, or for those whose talents do not easily align with traditional learning and assessment methods. Pupils previously thought to be struggling with reading, or incapable of producing good written output, can now use this brilliant new technology and take pride in demonstrating their intelligence and creativity. They are reminding us just how articulate they are.
Meanwhile, there is a wider risk that all of us are reading and thinking less (or at least less patiently, diligently and critically), whilst copying, pasting and messaging much more rapidly. Texting, instant messaging and 'tweeting' are arguably nearer to speaking than to writing in the conventional sense. The conversational qualities, phonetic shortcuts and emojis of WhatsApp, X and other social media formats bear strong resemblance to oral communications. We now have a generation of young adults who are highly effective in these fast-paced protocols of communication, often to the detriment of attention to detail. Many struggle with the protocols of more thoughtful and considered correspondence such as email. The big question is: how concerned should we be about this?
Speech is natural and came first.* We are born with the neural wiring for speech, but reading and writing are less ‘natural’ skills, only acquired through education and through evolutionary technologies such as the alphabet, printing and publishing. As far back as the 4th century BC, Socrates (in Plato’s Phaedrus) tells of the Egyptian God Theuth's debate with King Thamus on the pros and cons of granting the Egyptian people the benefits of writing. On the one hand, the technology of writing extends memory and shares wisdom. On the other hand it will:
‘create forgetfulness in the learners’ souls, because they will not use their memories...they will appear to be omniscient and will generally know nothing.’
Hence emerged an enduring but sometimes chafing alliance between documented wisdom and rhetorical eloquence. Can true knowledge live in static, unresponsive text, or does it only emerge dialogically from a relationship between active human minds through spoken discourse?
In more recent centuries (since the invention of printing and publishing systems) mastering the skills of consuming, re-producing and expanding the 'canon' of curated text has dominated the academic and professional pathways to success. This is sometimes referred to as the age of the linear literary mind.
But we can now see the beginning of the end of the age of information eco-systems built on such reverence for curated printed text. This is partly because of the proliferation of non-curated and self-published electronic text. We are inundated by hyperlinked documents, web pages, slides, emails and multiple channels of news, promotions and instant messaging. We are all authors now.
So it has become impossible for us to read everything that appears on our screens, making it very difficult to differentiate the wheat from the chaff. This goes some way to explaining the enormous popularity of podcasts, vlogs and social media influencers. They are the latest channels of curation, with relatively low barriers to entry and minimal quality-assurance mechanisms.
With such growing reluctance to read long-form text, there is now a huge demand for it to be synthesised and summarised by LLMs. This can be easily achieved by speaking instructions into our devices, with the summaries being spoken back to us, if desired. It should be no surprise that there is this desire to develop more productive and dynamic oral interactions with technology, and with 'big data' in particular.
But oracy is not just about speaking. It is the ability to articulate ideas, actively listen, develop understanding and engage in dialogue with others through spoken language. I explore some of the consequences of this trajectory towards a new age of oracy in the website section "All the World's a Stage" - Securing Human Roles in an AI Future. The excellent report of the Oracy Education Commission (October 2024) is also well worth a read.
In education, our curriculum, assessment and teacher-training methods are terribly slow in adapting to these evolving aspects of our culture. This puts pressure on teachers to figure things out for themselves, and to experiment. Some are resistant to change. Others embrace generative AI, not only to produce/adapt learning materials, but also to support the assessment of their pupils' needs and their output. But meaningful and equitable progress in the profession of teaching is hampered when the institutional systems of learning and assessment show no signs of evolving to accommodate the ways in which the world is changing.
The sector must get over its fear and build pro-actively on the increasing evidence that AI augments human teaching rather than replacing it. When developed and applied rigorously, AI and other Edtech not only enhance learning, but also make teaching a more sustainable and rewarding career-choice. Technology can free teachers up to focus on the more human aspects of their roles: creativity, ingenuity, empathy. With more time to identify and nurture the strengths of each child and to respond to their differing learning needs, they can better guide them to a more resilient and self-determining future of lifelong learning. In my view this is the most impactful strategy for addressing the teacher recruitment and retention crisis.
Conversely, the greatest risk to the teaching profession is to ignore this evidence, passively allowing learners to develop the habits of using AI more as an 'answer-machine' than as a learning tool. Whenever I hear people complain of students using LLMs to cheat, I worry that we are missing the big opportunity and misdirecting our efforts.
We should not delude ourselves that AI in education is primarily a technical problem for the technologists to solve. It is first and foremost a teaching, learning and assessment challenge that the education profession needs to own. Of course there are complex aspects of the challenge which are cultural, political and technological, but we must not use that as an excuse to disown the challenge.
Like any new technology, the worthy and aspirational goals of generative AI will be shaped, and potentially corrupted, by the demands of those investing in it. Or by those influencing its application in everyday life. For all the reasons above, leaders in education have a responsibility to contribute to the development of quality assurance processes and governance structures for the effective and ethical use of AI in the areas over which we have influence.
Footnote
* I am indebted to Nicholas Carr's brilliant book The Shallows (2010) for some of the content in this paragraph. His book Superbloom - How Technologies of Connection Tear Us Apart is one of the most anticipated books of 2025.
Holistic oral assessment - the 'authentic human' solution
As mentioned above in the context of prompt-engineering, it seems likely that those with strong traditional literacy skills (including critical thinking) are more likely to thrive in an LLM world, while those without them risk falling further behind. So it is more important than ever that we do not fail our young people in these aspects of their education. Going further, to monitor the unintended consequences of AI, we must ensure that new channels and tools for interactive oral presentation and assessment are developed and applied wherever and whenever we encourage the use of AI in the classroom. Rather than worrying about students using LLMs to 'cheat' we must explore ways to encourage the increased learning-efficiency benefits of LLMs.
This includes finding more effective ways to assess learning and skills. Given the obvious difficulties in differentiating between work produced by humans and by LLMs, oral assessment may become the most reliable way to assess the integrity of students' understanding and their learning capacity. This implies quite a fundamental rethink about how to deploy the human resources of teaching and learning in new integrated methods of formative, summative and (inter)nationally recognised and accredited standards of assessment.
But such a major change in the assessment system requires open-mindedness, professional and political consensus and the resources to develop new solutions. After 12 years in the education sector, I am more realistic about the obstacles and what it will take to overcome them.
Much has been written, and for many years, about the glaring obsolescence of the current examinations system. In his 2025 BETT keynote address, Stephen Fry described it as a system that 'tests for ignorance and punishes us'. As a one-time English A-level teacher, I can personally attest to the weaknesses in GCSE English, as powerfully described in ASCL's much-cited 2019 report The Forgotten Third. Realistically there will be no substantial improvements in national summative assessment methods until we can embrace a more comprehensive reform of our learning and assessment ecology to accommodate the new age of oracy and generative AI.
Society tends to value most highly those things it can measure. In this respect I'm a strong supporter of the Rethinking Assessment movement. It might seem like putting the cart before the horse to drive reform of the curriculum from the assessment system. But being pragmatic, it will be the reform of assessment that will most likely stimulate the necessary behavioural shifts in the notoriously change-averse education sector.
The future of authorship, authority and the integrity of human knowledge
Technological evolution is awesome. It can enlighten and empower, but it can also obscure, deceive and exclude. The authorial identity, reliability and integrity of what is published now seem more fragile and fluid than they did when I was young. Levels of trust in previously respected media brands (e.g. broadcasters and newspapers) and in journalism more generally seem to be declining, especially amongst the young. In this environment LLMs are a competing source of uncurated wisdom which can be both alluring and unsettling.
Mistrust arises not necessarily because we are better informed. Nor is it because we are thinking more critically. It has more to do with the sheer volume of competing sources of human stimuli, peer-pressure and FOMO, whether these stimuli are for our education, our recreation or other forms of consumption. This is one of the themes in Johann Hari's thought-provoking book Stolen Focus (2022). Unable to cope with the overwhelming number of choices, we welcome the diversions created by clever and attractive design and/or by manipulative algorithms which shape and feed our preferences. This occurs even if we suspect that our behaviour is becoming compulsive or addictive, or that our biases (conscious or unconscious) are being cynically exploited and compounded. This is also partly what David Courtwright has described as 'limbic capitalism' in his 2019 book The Age of Addiction - How Bad Habits Became Big Business.
These kinds of developments over the last half-century sadly run counter to all of the more positive and constructive progress in human civilization. We must recognise that as much as technology can make us smarter, it can equally make us lazier. It can contribute to complexity-aversion and its dark consequences. I can recall quite vividly how, in the first decade of the 21st century, as more voices were heard, it looked as though the internet and new media channels might contribute to a more pluralistic, well-informed and equitable world. The so-called Arab Spring was a good example of this optimism. With hindsight it was over-optimistic. Evgeny Morozov's The Net Delusion (2011) provided a scathing critique of those who proclaimed the internet as an inherently democratizing phenomenon. His illustrations of the internet's susceptibility to the corruption and abuse of 'knowledge' have sadly proven accurate. Concerns about the transparency and credibility of authorship, and about the processes for validating competing claims on 'truth', continue to increase. Educators have a responsibility to face this challenge head-on. We must not miss the opportunity to develop proactive strategies to build on generative AI rather than being blindsided by it.
To conclude my reflections with a question, I find Michel Foucault’s words prescient and particularly apt for a 21st century world where meaning and relevance are ever harder to extract from the exploding babble of texts, images, sounds and claims on truth. In his 1969 publication What is an Author? he writes:
'The author is therefore the ideological figure by which one marks the manner in which we fear the proliferation of meaning'.
Fear is a key word here. How much meaning do we want, or indeed, how much can any of us cope with?
Enough doom and gloom. I'm more optimistic than the last few paragraphs might suggest. As I move towards a more reflective stage in my career, I hope to make at least a tiny contribution to a more coherent, stable and equitable world.
If you feel that we might usefully share ideas or collaborate in some way, please don't be shy about reaching out. Send me an email or connect via LinkedIn, indicating that you've looked at this site.
Dr Jonathan (aka Joff) Wheeldon
Other website sections (overview)
My story - a star-crossed tale of words and numbers
I cannot expect anyone to take any notice of what I write without sharing something of who I am.
This is especially so given my unconventional career path:
an English literature and language graduate
starting out in the financial services sector and qualifying as a Chartered Accountant
senior finance roles in music and film (PolyGram, Universal, EMI) with 7 years living overseas in New York, Madrid and Los Angeles
Finance Director of Lord Andrew Lloyd Webber's The Really Useful Group and Really Useful Theatres
doctoral research (sponsored by EMI) and a Visiting Lecturer at Henley Business School
Global CFO/COO and Board member of Macmillan Publishers (Nature, Palgrave, Pan, Picador, Macmillan Education...)
a two-year, mid-life stint as an A-level English teacher in a state 6th form college
and finally a move into educational governance, strategy and risk for some large school groups.
This section is rather longer than I had originally intended, but I found the process of editing my life into some kind of coherent narrative quite enjoyable.
"All the world's a stage..." - securing human roles in an AI future
Over the past 15 years I've been asked for, or volunteered, advice to youngsters (pupils, nephews/nieces, godchildren, children of friends and colleagues), and I thought it would be good to try to capture some of what I've learned from those experiences. The section contains activities to enable structured conversations and the development of plans of action.
It is primarily aimed at 17-30 year-olds who are uncertain or anxious about their futures, and also at their parents/carers and at teachers with responsibility for skills & careers.
There is a particular emphasis on identifying the right individual career 'nutrients' that align with our personalities. This can make careers much more rewarding and sustainable in the long-term.
It also explains the blurring of the boundary between literacy and oracy and how we must encourage our youngsters (and our education system) to embrace and take advantage of human strengths in an AI future.
It is largely shaped by what I would like to have known when I was 17, so inevitably has some subjectivity. Whilst it's rather advanced for the average 17 year-old, I do think that anyone capable of going into Higher Education ought to be able to appreciate it to some extent.
Link here to 'All the world's a stage': securing human roles in an AI future
"Unsex me here..." - an essay on language, gender and identity
This is a complex topic where words and their meanings have become terribly confused. This has contributed to considerable discord between people.
Beyond gender identity, I consider the future of gender relations, the stability of which is an essential contributor to human survival and happiness.
As a contribution to the long-awaited guidance from government, I reflect on the role played by teachers, especially English teachers, in making sense of this sensitive area of cultural evolution. It emphasises the need to promote generative dialogue and rigour in the way we speak and write about the challenges.
Link here to Unsex me here: a pathway to generative dialogue.
The cultural industries
I spent most of my career working in industries referred to as 'cultural': music, film, theatre, and publishing.
My research into the struggles of the cultural industries to adapt to the digital revolution has been published as a book: Patrons, Curators, Inventors & Thieves (Palgrave, 2014).
The intention is to help people understand why organisations and industries can be so resistant to change, and why copyright law might usefully be reformed.
It continues to be relevant and some updated summary extracts are included - link here.
The education system
Book summaries: the site started life as a place to host the educational book summaries I wrote in early 2022. The selection is based on recommendations from colleagues and includes my view of the implications of the books for those who have responsibility for educational strategy. People tell me they find the summaries very useful, so I intend to continue to produce them. Link here.
Educational reform (coming soon): this section will capture my perspective on the current state of the English education sector, largely informed by my time teaching and subsequently (since 2015) advising schools on governance, strategy, risk and technology. My hypothesis is that the education system is so 'stuck' because the reform debate rarely gets past the first hurdle of agreeing which problems we are trying to solve. I therefore aim to unpack the problems and focus on the choices and compromises.
Teaching (coming soon): In 2013 I took a career break to do a PGCE and I spent a couple of years teaching A-level English in a state 6th form college. My colleagues and pupils seemed to find my materials engaging, especially those on Shakespeare and Chaucer, so I intend to provide them here along with some insights about what it's like to start to teach later in life. In the meantime, some of my teaching experiences are contained in the My Story section, link here.