Language, learning & life
The next chapter of our evolution - let's keep it human 🤞
"How can I know what I think until I see what I say?"
Often attributed to E.M. Forster, Graham Wallas, André Gide or W.H. Auden, this question has resonated with me for many years. It captures the idea that language tends to construct our reality rather than merely describe it. With that in mind, I created this website to help me make sense of some things I have spent much of my life thinking about, professionally and personally.
This homepage is focused on my thoughts about generative artificial intelligence (AI) and its impact on our humanity. More specifically, and drawing on my own experience, it considers the future of education and how our assessment systems need to evolve.
The astonishing acceleration of LLM usage (e.g. ChatGPT, Gemini, Claude, DeepSeek) makes the need to rethink assessment urgent. Meaningful change in education will (in my view) only occur when we integrate formative and summative assessment in new ways that capture the breadth of human potential more holistically. This means recognising and accrediting creativity, critical thinking, self-motivation to learn and collaboration in problem-solving. Oracy skills and re-imagined oral assessment across many fields will likely play a big part in the process. If we can get assessment right, the competing priorities of an overloaded curriculum that tries to cover all bases become much less problematic. Knowledge gaps will always exist, but we will at least have set children up with stronger foundations to thrive.
Right now, younger generations are understandably losing focus and confidence in their economic future. But if our education system can help them to discover and take pride in sustainable career nutrients, and to align these with their strengths and personalities, there is hope. ⛅
There are just five short sections (with dropdown headings), taking 10-15 minutes to read. I hope it is of some value to you.
Feel free to share the link. 🙏🏼
Dr Jonathan (aka Joff) Wheeldon
Artificial intelligence or authentic stupidity*?
When I started to create this website in 2023, ChatGPT had been declared the latest 'game-changer' in large language models (LLMs). LLMs are the best-known products of the branch of artificial intelligence known as 'generative AI'.
Two years on, there is now fierce competition between LLM providers, and their tools are getting stronger every day. They are having a huge impact on the evolution of our species, in particular on:
how teaching, learning and research are most efficiently conducted
how knowledge is produced, authenticated and ranked for value and relevance
how power and influence are distributed throughout society
With only basic online access, it is now very easy for any of us to have a highly intelligent conversation with an LLM. We can pose any question (orally or in writing), and it will produce plausible and eloquent answers. We can even define the character of the artificial persona and guide it to suit the types of interaction we would like to have. We can convert its responses instantly into slick presentational formats, including convincing human dialogue such as a podcast. The style, tone and vocabulary of this output can easily be adjusted to cater for different audiences and age groups.
For those who already have the knowledge, skills and critical disposition to review the output, the productivity gains are spectacular. Desktop research, high-quality reports, presentations and teaching materials can be produced in hours rather than days. Coding time can be reduced from hours to minutes.
LLMs are playing a wide range of roles: research assistant, editor, personal tutor, image-generator, even foreign-language conversational partner. We might still be some years away from the physical embodiment of the artificial friend (AF) envisaged by Kazuo Ishiguro in the brilliant Klara and the Sun (2021), but it is now very easy for us to imagine chatbots quickly becoming our mentors, coaches and non-clinical therapists. Although many people lost patience with the first wave of personal AI assistants (Siri and Alexa), the latest chatbots such as ChatGPT, Gemini, Claude and DeepSeek are likely to proliferate much more widely, with huge implications for bespoke learning, productivity and human relations.
This is marvellous, but sometimes critical-thinking skills and deeper learning processes are conveniently bypassed. If we are not careful, we can convince ourselves (and others) that we are more informed or intelligent than we really are. At least for a while.
Amongst many other concerns, the future of the 'essay', one of the most powerful formats for demonstrating how we wrestle and reconcile complex human thoughts into meaning, is now in question. Traditional forms of school homework and learning assessment face the same threat of obsolescence.
Those who see LLMs simply as making life easier and pass off raw output as their own work, without editing, personal perspective or critical review, are playing a foolish game. A game that has harmful consequences for them and for all of us. If we want to use generative AI effectively for learning, teaching, creating and problem-solving, we should be using our brains more, not less. Otherwise, we may soon find ourselves the observers rather than the participants in intelligent life on earth.
We must impress upon students (and remind ourselves) that generative AI is not a simple shortcut to deep knowledge and wisdom. The best output of generative AI is formed through an iterative co-production. That is to say, the quality of LLM output relies upon the thoughtfulness of the person asking the questions, assessing the integrity of the 'answers' and then asking more questions. Prompt engineering is the term used for the process of writing instructions to get the best results from LLMs. It requires creativity and critical thinking and is therefore fully aligned with mainstream teaching and learning goals. We should be putting it front and centre of digital and media literacy, a critical skillset that addresses a multitude of risks to young people.
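For readers who like to see what this iterative co-production looks like in practice, here is a minimal sketch using the openai Python client. The model name, persona and questions are purely illustrative; the point is structural. The human sets the ground rules up front, keeps the full conversation history, and follows every answer with a more probing question.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The 'character profile' of the persona is set up front, as described above.
messages = [{
    "role": "system",
    "content": "You are a patient tutor. State your assumptions "
               "and say clearly when you are unsure."
}]

def ask(question: str) -> str:
    # Keep the whole history so each follow-up can probe earlier answers.
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o-mini",
                                              messages=messages)
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

# The human stays in the loop: ask, assess the answer, then ask again.
print(ask("Summarise the case for oral assessment in schools."))
print(ask("Now argue against it. What evidence would weaken the case?"))
```

The quality of what comes back depends entirely on the thoughtfulness of those two questions and of whatever the human asks next.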
Our role in improving generative AI
The vast sources of data trawled by LLMs exclude (for now) much valuable scientific and cultural knowledge protected by organisational firewalls, and by copyright and data protection laws. But beware - I am not confident that last sentence is accurate. What is really happening in the training of LLMs 'behind the scenes' is quite opaque and secretive. Whether or not all of that higher quality 'curated' knowledge becomes accessible to LLMs, we should be cautious in thinking of the output as attributable to an LLM, even though we must acknowledge its contribution. The human 'prompt-engineer' is a co-author in this collaborative process and must remain at least partially accountable for what is produced, especially when the LLM output is re-shared more widely.
Bear in mind that, directly or indirectly, we are part of a fine-tuning process known as Reinforcement Learning from Human Feedback (RLHF). By actively engaging with generative AI, we have a much better chance of evolving alongside it in a good way. This is well expressed by Ethan Mollick** in his excellent book Co-Intelligence (2024), which helps us think quite profoundly about how AI can be an extension and evolution of our humanity.
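For the technically curious, the heart of RLHF can be glimpsed in a toy sketch. This is an illustration of the standard preference-learning step, not any provider's actual pipeline: a reward model is trained to score human-preferred responses above rejected ones, and that reward model is then used to steer the LLM.

```python
import torch
import torch.nn.functional as F

# Stand-in reward model: in reality a large network scoring whole responses.
reward_model = torch.nn.Linear(768, 1)

# Pretend embeddings of a human-preferred and a rejected response
# to the same prompt (random numbers here, purely for illustration).
chosen = torch.randn(16, 768)
rejected = torch.randn(16, 768)

# Bradley-Terry style loss: it falls as the model learns to rank the
# human-preferred answer above the rejected one.
loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
loss.backward()  # gradients derived from human preferences shape the model
print(float(loss))
```

Every thumbs-up or thumbs-down we give a chatbot feeds data of this kind back into the process.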
With the metaphor of centaurs and cyborgs Mollick makes a useful distinction between the ways in which people are using LLMs. Centaur work has a clear line between person and machine, allocating tasks based on the capability of each. By contrast, cyborgs iteratively and seamlessly blend person and machine, building on each other's contribution. This makes it difficult to clearly attribute authorship. The cyborg approach is now very common in desktop research and writing, including the development of reports and image-enhanced presentation materials.
Amongst many other thought-provoking insights, Mollick highlights the responsibilities we all have. These include adapting our organisations, our managerial practices and our training programmes in an equitable way to avoid mass redundancy of the human workforce. Of most relevance to teachers and education leaders, he explains why (after decades of promises and false starts with 'edtech') classroom learning is now changing quite radically. Transformation will occur in a positive way as long as teacher and student each remain 'the human in the loop'.
To wrap up this section, we should remember that the output of LLMs is not always accurate or wise. They can create hallucinations***. Even so, they will almost certainly become more and more plausible and relied upon. As a result, the term literacy is being redefined. Or perhaps it is more accurate to say that what we understand by the word literacy is being culturally renegotiated to recognise the agility and pragmatism needed to thrive in a world transformed by AI. One thing is very clear - sooner or later, young people need to develop confidence, skills and maturity in making the most of LLMs. This includes understanding their limitations and the dangers of learning shortcuts.
Most urgently, our educational assessment methods must evolve, both to support children and to ensure the integrity of their learning processes.
Footnotes
*Ben Elton's ironic AI antonym 'authentic stupidity' neatly expresses the risk caused by replacing skills such as critical thinking, emotional intelligence or plain old 'common sense', with a lazy or premature reliance on artificial intelligence.
** Mollick's newsletter oneusefulthing.org is a good resource for monitoring developments in generative AI, amongst many others such as The Neuron.
*** Hallucinations refer to the occasions when LLMs create misleading answers. There are many collaborative initiatives measuring the accuracy of LLMs. Humanity's Last Exam is a good illustration.
Hope versus fear - AI Fight Club*
Along with hope, AI is also generating fear. Only time will tell if (and how far) we are falling into the black hole of artificial general intelligence (AGI) or the so-called 'technological singularity'. These are the terms used to describe the state where superhuman AI is irreversibly out of control; a state where the majority of humans have been charmed into being the serfs of a dominant and super-efficient AI, without the skills or the will to challenge our 'oppressor'. We might not even be aware of our oppression.
For people of my generation, this scenario was very well captured by the 1999 movie The Matrix.
Dividing the best from the rest?
It is easy to imagine a new kind of social divide, because in making life easier AI can also make us lazier. The most proactive and energised minds will be enhanced by AI and will develop new types of cognitive faculties that give them a competitive advantage in productivity and creativity.
But many others could go the opposite way. Increasingly dependent on handy apps for just about everything, and becoming addicted to behaviourally-manipulative algorithms, our minds may lose capacity and focus. Without the compulsion to question or to develop our own agency, we may become blissfully unaware of our diminished free will. Worse still, our human 'resources', i.e. our skills, passions, diversity, creativity and ingenuity, may come to be regarded by AI as obsolete, disruptive or worthless. In this vision, there is an inevitable existential crisis for humanity.
Of course this is not a new fear. Aldous Huxley's masterpiece Brave New World (1932) is highly relevant to discussions of being oblivious to our soft oppression through a singular political and cultural system. And despite its pluralistic intent, so is Francis Fukuyama's famous 1992 claim about 'the end of history' following the fall of the Soviet Union and the Berlin Wall. His assertion of an ultimate and permanent victory of liberal democracy and free-market capitalism over competing totalitarian fascist or communist ideologies was a premature claim to a dominant political and economic hegemony or singularity. It is one into which the 'developed' world has been seduced by the promises of technological 'progress' and by:
improvements in standards of living that are often superficial, inequitably distributed and far from being synonymous with happiness;
addictive and cynically-exploited consumerism;
managerialism, an organisational ideology that generally prioritises corporate interests over individual ones;
the obsession with GDP growth, a blunt measuring tool that takes no account of impact upon collective wellbeing or upon the environment.
In an AGI future, which values and principles will dominate? Will there be a challenge and/or a 'correction' of these imperfections of capitalism, AI's current champion and collaborator?
Or will they become further embedded and no longer be perceived as imperfections?
In a meritocracy that rewards those who are able to take advantage of AI to become super-productive or even 'superhuman', how will the humans who cannot keep up be treated?
Every age has its 'opium of the masses' and today's tech billionaires are no different from the religious and political leaders of previous centuries. Whether or not these leaders are noble in their intent, they have a common desire to address or exploit some profound human needs and systemic shortcomings. What is playing out right now may not be exactly what Huxley had in mind in his ironic Brave New World (with its freely available happiness-inducing drug soma, and its social and genetic engineering), but we can see some frightening parallels with his dystopian vision.
In this context, and with some irony, I asked ChatGPT to come up with some ideas for SOMA as an acronym with current-day relevance. The response came immediately, and showed good understanding of why I was asking the question. Here are ChatGPT's suggestions:
Stream Of Mesmerizing Algorithms
Scroll, Observe, Manipulate, Addict
Social Oblivion, Mechanical Absorption
Sensory Overload, Mindless Attraction
Screen-Oriented Mental Autopilot
Sedated Online, Mesmerized Always
Scary, but the biggest risk to our future is probably not AI being out of human control, but rather being undemocratically controlled by any of the three 'C's: countries, corporations or criminals.
In late January 2025 we saw some dramatic tension between two of these three 'C's. With some fanfare, the newly inaugurated US president Donald Trump proclaimed America's continuing dominance of the AI sector through the Stargate Project, a $500 billion OpenAI collaboration with a Japanese investment firm and an Emirati sovereign wealth fund. Almost simultaneously, the Chinese company DeepSeek released its latest model, which along with other impressive Chinese LLMs demonstrates that competitive LLMs can be produced at a tiny fraction of the cost of their American rivals. In response, almost US$1 trillion was wiped off technology stocks in one day, with chipmaker Nvidia losing $600 billion, the biggest one-day loss ever recorded.
In the subsequent days there were claims and counterclaims that these LLMs had been copying each other. Few reports seemed to grasp the irony of tech billionaires being outraged at the illegal or unethical copying of products whose core capability is built on riding roughshod over intellectual property rights. Such loss of copyright protection is causing deep anxiety for the creative and knowledge industries. It may be difficult advice to follow in an informed way, but we should all remain conscious of the consequences of these developments in our decisions to use LLMs.
Amongst many more recent books presenting visions of an AI future are Ray Kurzweil's latest The Singularity is Nearer (2024), Yanis Varoufakis' Technofeudalism (2023), Mustafa Suleyman's The Coming Wave (2023) and Yuval Noah Harari's Homo Deus (2015). Each one captures valuable insights and hypotheses based on similar concerns.
Whether post-singularity existence turns out to be heaven or hell is a profound 21st century question that we should all spend at least some time pondering rather than fighting about. But it is a question that will not be resolved anytime soon. In the meantime we must do what we can to research, debate and determine the optimal role of AI in education to give the best chance for an acceptable outcome for humanity.
To end this section, the publisher's synopsis of Tom Chatfield's Wise Animals (2024) neatly reconciles the hope versus fear dilemma:
'Human children know no more of modern technology than their ancestors did of older technologies thousands of years ago, and develop in relation to the technologies of their time.
We co-evolve with technology as individuals as we have as a species over thousands of years.
Rather than see technology as a threat, this humanist contribution to the debate proposes that we are neither masters nor victims of our technologies. They are part of who we are, and our future – and theirs – is in our hands'.
Footnote
* AI Fight Club is a metaphor coined by Henry Farrell to describe the polarised positions around the future of AI. See his article at Programmablemutter.com 17/12/2024.
The human response - a new age of oracy
Another profound consequence of LLMs (in conjunction with the extraordinary pace of evolution in voice AI such as voice-typing, editing tools and automated voice generation) is that the distinction between literacy and its educational 'poor cousin' oracy is becoming blurred.
The opportunities are great when these technologies make us more productive or efficient in our communications. They are also levelling the playing field for neurodiverse youngsters, and for those whose talents do not easily align with traditional learning and assessment methods. Pupils previously thought to be struggling with reading, or incapable of producing good written output, can now use voice AI and take pride in demonstrating their intelligence and creativity. They are reminding us just how articulate they are.
We should keep in mind that the conversational qualities, phonetic shortcuts and emojis of WhatsApp, X and other social media formats bear strong resemblance to oral communications. More than a decade ago, John McWhorter coined the term 'fingered speech' to make the point that texting and instant messaging are arguably nearer to speaking than to writing in the conventional sense.
Literacy and oracy are two sides of the same coin in the currency that humans need in order to thrive in an AI future. But one risk of the blurring of literacy and oracy is that we read and think less (or at least less patiently, diligently and critically), whilst copying, pasting, and typing/dictating messages much more rapidly. Across electronic media, words are often moving faster than meaning and understanding.
We now have a generation of young adults skilled in fast-paced, informal channels and protocols of communication. These can deliver productivity gains. They can also come at the expense of clarity, attention to detail and understanding. Many younger people admit to disliking or struggling with the protocols of more carefully considered correspondence such as email*. The big question is: how concerned should we be about this?
Posing a more precise question, with reference to Daniel Kahneman's influential 2011 book Thinking, Fast and Slow: has social media triggered a generational shift towards instinctive, unconscious and emotional thinking and away from more deliberative, logical and analytical thinking? If so, what are the long term consequences for human relations and for human wisdom in the age of AI?
A return to an oral tradition?
When considering the questions above I try to suppress my personal concerns (as a one-time English teacher) about declining standards in long-form writing skills. After all, speech is natural and came first.** We are born with the neural wiring for speech, but reading and writing are less 'natural' skills, acquired only through education and through evolutionary technologies such as the alphabet, printing and publishing. As far back as the 4th century BC, Socrates (in Plato's Phaedrus) tells of the Egyptian god Theuth's debate with King Thamus on the pros and cons of granting the Egyptian people the benefits of writing. On the one hand, the technology of writing extends memory and shares wisdom. On the other hand, it will:
‘create forgetfulness in the learners’ souls, because they will not use their memories...they will appear to be omniscient and will generally know nothing.’
This may be the very earliest reference to what is now known as cognitive offloading: 'why do I need to burden my brain by retaining knowledge when I can easily look things up?'
Hence emerged an enduring but sometimes chafing alliance between documented wisdom and oral eloquence. Does the value of knowledge live in static, unresponsive text, or does it only emerge dialogically from a relationship between active, well-informed human minds through spoken discourse? The answer lies somewhere in between, but precisely where will depend on cultural context and whether we are introverts or extroverts, engineers or imagineers, lovers or fighters.
In more recent centuries (since the invention of printing and publishing systems) mastering the skills of consuming, re-producing and expanding the 'canon' of curated published text has dominated the academic and professional pathways to success. This is sometimes referred to as the age of the linear literary mind.
But we can now see the beginning of the end of the age of information eco-systems built on such reverence for curated printed text. This is partly because of the proliferation of non-curated and self-published electronic text. We are inundated by hyperlinked documents, web pages, bulleted slides, emails and multiple channels of news, promotions and instant messaging. We are all authors now.
So it has become impossible for us to read everything that appears on our screens, making it much more difficult to separate the wheat from the chaff. This goes some way to explaining the enormous popularity of podcasts, vlogs and social media influencers. They are the latest channels of curation, with relatively low barriers to entry and minimal quality-assurance mechanisms.
With such growing reluctance to read long-form text, there is now a huge demand for it to be synthesised and summarised by LLMs. This can be easily achieved, including by speaking instructions into our devices. It should be no surprise that there is this desire to develop more productive and dynamic oral interactions with technology, and with 'big data' in particular.
But oracy is not just about speaking. It is the ability to articulate ideas, actively listen, develop understanding and engage in dialogue with others through spoken language. If humanity really is reverting to the oral roots of our species, we certainly need to 'up our game'. At the very least we need to put oracy on an equal footing with literacy in schools and reflect this parity in our assessment and accreditation systems.
I explore some of the consequences of this trajectory towards a new age of oracy, along with the importance of finding sustainable career nutrients, in the website section "All the World's a Stage" - Securing Human Roles in an AI Future. The excellent report of the Oracy Education Commission (October 2024) is also well worth a read.
Stepping up to the challenge
Our curriculum, assessment and teacher-training methods are terribly slow in adapting to all the evolving aspects of our culture. This puts pressure on individual teachers to figure things out for themselves, and to experiment. Some are resistant to change. Others are embracing generative AI, mostly to produce/adapt learning materials, but also (in more advanced usage) to support the assessment of their pupils' needs and their output. But meaningful and equitable progress in the profession of teaching is hampered when the institutional systems of learning and assessment show no signs of evolving to accommodate the ways in which the world is changing.
The sector must get over its fear and build proactively on the increasing evidence that AI augments human teaching rather than replacing it. When developed and applied rigorously, AI and other edtech not only enhance learning, but also make teaching a more sustainable and rewarding career choice. Technology can free teachers up to focus on the more human aspects of their roles: creativity, ingenuity, empathy and dialogue. With more time to identify and nurture the strengths of each child and to respond to their differing learning needs, teachers can better guide children towards a more resilient and self-determining future of lifelong learning. In my view this is the most impactful strategy for addressing the teacher recruitment and retention crisis.
Conversely, the greatest risk to the teaching profession is to ignore this evidence, passively allowing learners to develop the habits of using AI more as an 'answer-machine' than as a learning tool. Whenever I hear people complain of students using LLMs to cheat, I worry that we are missing the big opportunity and misdirecting our efforts.
We should not delude ourselves that AI in education is primarily a technical problem for the technologists to solve. It is first and foremost a teaching, learning and assessment challenge that the education profession needs to own. Of course there are complex aspects of the challenge which are cultural, political, legal and technological, but we must not use that as an excuse to disown the challenge.
As with any new technology, the worthy and aspirational goals of generative AI will be shaped, and potentially corrupted, by the demands of those investing in it, or by those influencing its application in everyday life. For all the reasons above, leaders in education have a responsibility to contribute to the development of quality-assurance processes and governance structures for the effective and ethical use of AI in the areas over which we have influence.
Footnotes
* I am referring to the email protocols based on traditional physical correspondence such as letters, though this may be dying out.
** I am indebted to Nicholas Carr's brilliant book The Shallows (2010) for some of the content in this paragraph. His book Superbloom - How Technologies of Connection Tear Us Apart is one of the most anticipated books of 2025.
Holistic oral assessment - the 'authentic human' solution
As mentioned above in the context of prompt engineering, it seems likely that those with strong traditional literacy skills (including critical thinking) will thrive in an LLM world, while those without them will struggle. So it is more important than ever that we do not fail our young people in these aspects of their education. Going further, to monitor the unintended consequences of AI, we must ensure that new channels and tools for interactive oral presentation and assessment are developed and applied wherever and whenever we encourage the use of AI in the classroom. Rather than worrying about students using LLMs to 'cheat', we must explore ways to harness the learning-efficiency benefits of LLMs.
This includes imagining more effective ways to assess learning and skills in order to capture the breadth of human capabilities and potential. Creativity, collaboration, self-motivation to learn, and rigorous exploration of possibilities are all essential human qualities that have previously been excluded from our formal assessment systems.
Given the obvious difficulties in differentiating between work produced by humans and by LLMs, oral assessment and collaborative projects may become the most reliable way to assess the integrity of students' understanding and their learning and problem-solving capacity. This implies quite a fundamental rethink about how to deploy the human resources of teaching and learning in new integrated methods of formative, summative and (inter)nationally recognised and accredited standards of assessment. It also requires us to think about which knowledge and skills we are assessing. Whatever the specifics of the solutions may be, measuring meaningful and sustainable human co-intelligence with AI should be one of our highest priorities. Aligning our strengths, personalities and values with the right career nutrients will be the key to engaging young people in this endeavour.
But such a major change in the assessment system requires open-mindedness, professional and political consensus and the resources to develop new solutions. After 12 years in the education sector, I am more realistic about the obstacles and what it will take to overcome them.
Much has been written, and for many years, about the glaring obsolescence of the current examinations system. In his 2025 BETT keynote address, Stephen Fry described it as a system that 'tests for ignorance and punishes us'. As a one-time English A-level teacher, I can personally attest to the weaknesses in GCSE English, as powerfully described in ASCL's much-cited 2019 report The Forgotten Third. Realistically there will be no substantial improvements in national summative assessment methods until we can embrace a more comprehensive reform of our learning and assessment ecology to accommodate the new age of oracy and generative AI.
Society tends to value most highly those things it can measure. In this respect I'm a strong supporter of the Rethinking Assessment movement. It might seem like putting the cart before the horse to drive reform of the curriculum from the assessment system. But being pragmatic, it will be the reform of assessment that will most likely stimulate the necessary behavioural shifts in the notoriously change-averse education sector.
The future of authorship, authority and the integrity of human knowledge
Technological evolution is awesome. It can enlighten and empower, but it can also obscure, deceive and exclude. The authorial identity, reliability and integrity of what is published now seem more fragile and fluid than they did when I was young. Levels of trust in previously respected media brands (e.g. broadcasters and newspapers) and in journalism more generally seem to be declining, especially amongst the young. In this environment LLMs are a competing source of uncurated wisdom which can be both alluring and unsettling.
Mistrust arises not necessarily because we are better informed. Nor is it because we are thinking more critically. It has more to do with the sheer volume of competing sources of human stimuli, peer-pressure and FOMO, whether these stimuli are for our education, our recreation or other forms of consumption. This is one of the themes in Johann Hari's thought-provoking book Stolen Focus (2022). Unable to cope with the overwhelming number of choices, we welcome the diversions created by clever and attractive design and/or by manipulative algorithms which shape and feed our preferences. This occurs even if we suspect that our behaviour is becoming compulsive or addictive, or that our biases (conscious or unconscious) are being cynically exploited and compounded. This is also partly what David Courtwright has described as 'limbic capitalism' in his 2019 book The Age of Addiction - How Bad Habits Became Big Business.
These kinds of developments over the last half-century sadly run counter to the more positive and constructive progress in human civilisation. We must recognise that as much as technology can make us smarter, it can equally make us lazier. It can contribute to complexity-aversion and its dark consequences. I can recall quite vividly how, in the first decade of the 21st century, as more voices made themselves heard, it looked as though the internet and new media channels might contribute to a more pluralistic, well-informed and equitable world. The so-called Arab Spring was a good example of this optimism. With hindsight it was over-optimistic. Evgeny Morozov's The Net Delusion (2011) provided a scathing critique of those who proclaimed the internet an inherently democratising phenomenon. His illustrations of the internet's susceptibility to the corruption and abuse of 'knowledge' have sadly proven accurate. Concerns about the transparency and credibility of authorship, and about the processes for validating competing claims on 'truth', continue to increase. Educators have a responsibility to face this challenge head-on. We must not miss the opportunity to develop proactive strategies to build on generative AI rather than being blindsided by it.
To conclude my reflections with a question, I find Michel Foucault’s words prescient and particularly apt for a 21st century world where meaning and relevance are ever harder to extract from the exploding babble of texts, images, sounds and claims on truth. In his 1969 publication What is an Author? he writes:
'The author is therefore the ideological figure by which one marks the manner in which we fear the proliferation of meaning'.
Fear is a key word here. How much meaning do we want, or indeed, how much can any of us cope with?
Enough doom and gloom. I'm more optimistic than the last few paragraphs might suggest. As I move towards a more reflective stage in my career, I hope to make at least a tiny contribution to a more coherent, stable and equitable world.
If you feel that we might usefully share ideas or collaborate in some way, please don't be shy about reaching out. Send me an email or connect via LinkedIn, indicating that you've looked at this site.
Dr Jonathan (aka Joff) Wheeldon
Other website sections (overview)
My story - a star-crossed tale of words and numbers
I cannot expect anyone to take any notice of what I write without sharing something of who I am.
This is especially so given my unconventional career path:
an English literature and language graduate,
starting out in the financial services sector and qualifying as a Chartered Accountant
15 years of senior management roles in music and film (PolyGram, Universal, EMI) including 7 years living overseas in New York, Madrid and Los Angeles
Finance Director of Lord Andrew Lloyd Webber's The Really Useful Group (including Really Useful Theatres and See Tickets)
doctoral research (sponsored by EMI) on the digital revolution in the music industry, and a Visiting Lecturer at Henley Business School
Global CFO/COO and Board member of Macmillan Publishers (Nature, Palgrave, Pan, Picador, Macmillan Education...)
a two-year, mid-life stint as an A-level English teacher in a state 6th form college
and finally a move into educational governance, strategy and risk for some large schools groups.
This section is rather longer than I had originally intended, but I found the process of editing my life into some kind of coherent narrative quite enjoyable.
"All the world's a stage..." - securing human roles in an AI future
Over the past 15 years I've been asked for, or volunteered, advice to youngsters (pupils, nephews/nieces, godchildren, children of friends and colleagues), and I thought it would be good to capture some of what I've learned from those experiences. The resulting section contains activities to enable structured conversations and the development of plans of action.
It is primarily aimed at 17-30 year-olds who are uncertain or anxious about their futures. Also at their parents/carers, and at teachers with responsibilities for skills & careers.
There is a particular emphasis on identifying the right individual career 'nutrients' that align with our personalities. This can make careers much more rewarding and sustainable in the long-term.
It also explains the blurring of the boundary between literacy and oracy and how we must encourage our youngsters (and our education system) to embrace and take advantage of human strengths in an AI future.
It is largely shaped by what I would like to have known when I was 17, so inevitably has some subjectivity. Whilst it's rather advanced for the average 17-year-old, I do think that anyone capable of going into Higher Education ought to be able to appreciate it to some extent.
Link here to 'All the world's a stage': securing human roles in an AI future
"Unsex me here..." - an essay on language, gender and identity
This is a complex topic where words and their meanings have become terribly confused. This has contributed to considerable discord between people.
Beyond gender identity, I consider the future of gender relations, the stability of which is an essential contributor to human survival and happiness.
As a contribution to the long-awaited guidance from government, I reflect on the role played by teachers, especially English teachers, in making sense of this sensitive area of cultural evolution. The essay emphasises the need to promote generative dialogue and rigour in the way we speak and write about the challenges.
Link here to Unsex me here: a pathway to generative dialogue.
The cultural industries
I spent most of my career working in industries referred to as 'cultural': music, film, theatre, and publishing.
My research into the struggles of the cultural industries to adapt to the digital revolution has been published as a book: Patrons, Curators, Inventors & Thieves (Palgrave, 2014).
The intention is to help people understand why organisations and industries can be so resistant to change, and why copyright law might usefully be reformed.
It continues to be relevant and some updated summary extracts are included - link here.
The education system
Book summaries: the site started life as a place to host the educational book summaries I wrote in early 2022. The selection is based on recommendations from colleagues, and each summary includes my view of the implications of the book for those who have responsibility for educational strategy. People tell me they find the summaries very useful, so I intend to continue to produce them. Link here.
Educational reform (coming soon): this section will capture my perspective on the current state of the English education sector, largely informed by my time teaching and subsequently (since 2015) advising schools on governance, strategy, risk and technology. My hypothesis is that the education system is so 'stuck' because the reform debate rarely gets past the first hurdle of agreeing which problems we are trying to solve. I therefore aim to unpack the problems and focus on the choices and compromises.
Teaching (coming soon): In 2013 I took a career break to do a PGCE and I spent a couple of years teaching A-level English in a state 6th form college. My colleagues and pupils seemed to find my materials engaging, especially those on Shakespeare and Chaucer, so I intend to provide them here along with some insights about what it's like to start to teach later in life. In the meantime, some of my teaching experiences are contained in the My Story section, link here.