High school students ask the toughest AI questions
Dispatches from an AI Q&A with students at The Haverford School
I was recently approached by Matt Ator, a high school mathematics teacher at The Haverford School in Philadelphia, Pennsylvania. Matt had assigned my book, Mathematical Intelligence, to his students as part of an enrichment course. Soon after, I joined Matt and his students for a live online Q&A.
This piece gathers a handful of selected questions, along with my responses. The questions reflect a degree of discernment and inquisitiveness, and genuine regard for human progress, that is woefully lacking in large parts of the AI industry (and the media coverage that surrounds it).
The kids are more than okay, and their questions are worthy of your consideration.
With huge thanks to Matt for his forward-looking approach to school mathematics, and for arranging the event.
Your book discusses human advantages over artificial intelligence. Which one of these gaps between humans and machines do you think will be bridged most quickly, if at all, in the coming decades?
The ‘Sparks of AGI’ paper gives examples of where GPT-4 is able to solve Olympiad-level maths problems, which suggests mathematical reasoning may be emerging in large language models (LLMs). At the same time, chatbots continue to flounder with more rudimentary examples, and as long as this remains the case, I would be hesitant to conclude that reasoning has emerged at all.
A more likely trajectory is that LLMs combine with other approaches to AI to overcome their logical blind spots. We’re already seeing how ChatGPT is being integrated with plugins such as Wolfram Alpha to help give it access to robust, hard-coded repositories of knowledge. We’re also likely to see some hybrid of pattern-matching LLMs with formal systems such as Lean, which seek to encode mathematical knowledge from the ground up. If LLMs can ‘learn’ how to switch between formal and informal mathematical arguments, then the game really could change for how we use technology to do mathematics.
How can math classes adapt to educate students about AI?
I am bemused by proposals to ban chatbots from the classroom, as if students won’t be drawn to them anyway. We have to wake up to the fact that AI tools are now mainstream, and are only growing more powerful. We must reckon with how they are encroaching on our own ways of thinking.
Generative AI presents a wonderful teachable moment. That chatbots lurch from producing solutions to highly complex problems to making inexplicable errors makes them a fascinating object of study. Their erratic behaviour means we can never take their outputs for granted; we have to interrogate their claims, check their assumptions, and account for leaps of logic. Guess what - this is exactly what mathematics is about! So I’d like to see a purposeful embrace of these tools, because they can be incredibly useful as thinking partners, so long as we don’t outsource too much of our own thinking to them.
Math(s) class never really adapted to calculators or computers - the curriculum held firm to a rigid focus on routine procedures that come so readily to machines. With the advent of generative AI, we should urgently consider how these tools can augment our own ways of thinking and problem solving, and reflect on where our own human strengths lie. I hope one of the positive outcomes of AI is that it forces an upgrade of the school mathematics curriculum into an altogether more creative subject that is less prone to automation. Whether the appetite is there among policymakers remains to be seen!
Are the threats that advanced AI (and potentially AGI) pose to privacy and security any greater than those posed by earlier technologies such as the internet?
Creators of AI speak excitedly about the exponential nature of their technology - this can be defined in many ways, such as the amount of computing power, the size of language models, or the amount of data they are trained on. We must acknowledge that the risk profile of these technologies is proceeding along the same upward curve.
Through the surveillance capitalism business model of social media, Big Tech has been only too willing to upend our notions of democracy and truth by incentivising the spread of misinformation and extracting our private data for the sake of profit. Chatbots represent a dramatic continuation of this trend: first, their very existence depends on large-scale borrowing (some would say theft) of human-generated content. Moreover, with their propensity to spew nonsensical outputs (and with such unabashed confidence), we can expect a rise in misinformation.
In terms of security, much airtime is being given to the most apocalyptic scenarios of misaligned AI. An even more sobering thought, for me, is that even with the technologies that have already been released into the wild, there is scope for malicious human actors to cause immeasurable damage. We do not need AGI for AI to cause catastrophic harm. The only way to mitigate these risks is for our public institutions to catch up to the frantic pace of technological development.
What do you have to say about AI that veers off from its intended purpose like the Microsoft Bing AI?
We should not be the least bit surprised by strange behaviours of chatbots - they are, after all, simply making predictions on what words should come next in a sequence. Unless there are adequate guardrails in place, we have to be very mindful in how we use their outputs. There are some contexts, like programming, where the occasional mishap is easily identified and remedied. In other cases, such as writing essays or summarising documents, we have to work much harder to pick up their blind spots.
The headline-grabbing warnings of ‘AI takeover’ represent the most dramatic view of how AI can veer off course. The usual version of this narrative is that AI will become so smart that, in pursuing its goals, it will produce sub-goals we had not anticipated, which could cause us untold damage. In one example, a superintelligent AI that is tasked with eradicating poverty may do this by wiping out humanity itself - no humans, no poverty!
This does all seem rather speculative to me at this stage and, at the very least, I would question whether this degree of misalignment is really a signal of ‘superintelligence’, or whether it shows just how profoundly stupid and mindless these systems can be. Either way, we need to tread carefully when unleashing these tools onto society. Transparency - in particular, being able to explain how these systems arrive at their decisions - should be a basic prerequisite for doing so.
In your opinion, where is the line drawn between simulated emotions and real emotions?
I love this question, and it already hints at my answer. That there is a line is the most important recognition of all. Experience has shown us that humans are easily duped by artificial systems. Even primitive chatbots like ELIZA, developed in the 1960s as a virtual psychotherapist that parroted the subject’s inputs back in the form of a question, convinced many people that they were conversing with an actual human. More recently, scores of people (mostly men) have professed love for Replika’s generative chatbot, while a Google engineer infamously proclaimed the company’s LaMDA chatbot sentient.
The tendency of humans to anthropomorphise - to see ourselves in our digital creations - troubles me. If I am interacting with a chatbot, I want to be clear that I am engaging with a (powerful and useful) tool, not a creature that has feelings or any sense of subjective experience. We have to remember that these tools belong to corporations whose primary motive is not rooted in our emotional wellbeing. On the contrary, their bottom line may depend on exploiting our emotional vulnerabilities.
If we are to deploy chatbots as tutors or therapists or medical experts or financial advisors or whatever else, we need to apply giant labels that remind us of the distinctions between authentic human-human interaction and tame approximations of it. This will become even more urgent as generative AI becomes multimodal and we find it harder to distinguish between synthetic personalities and real ones.
Where do you think the potential of AI ends? Do you envision AI's advancement to eventually flatline?
Anyone who speculates on the long-term trajectory of AI is on a hiding to nothing, because the field is notoriously difficult to predict. I would always caution against declaring fundamental limits of AI, but I would also be sceptical of claims that our current path inevitably leads us to a world of superintelligence and singularity.
I would also reject the dichotomy that AI has to end in utopia or dystopia; our likely destination is somewhere in between. The question we should be focused on is whether the pace at which AI is being allowed to develop is appropriate. There is widespread recognition that things are moving too fast, and that government and public institutions are not adapting at anywhere near the speed needed to prevent great harms arising from AI.
The reckless abandon with which some technologists pursue their inventions, prioritising scientific inquiry ahead of human interest, appals me. So I hope there is a flatlining of deployment at the very least, motivated by concerns for human safety.
What are some things that humans can do to quell the anxiety surrounding progression of computers and artificial intelligence? It seems like inaccurate media coverage contributes to this problem a lot.
Now that AI has tipped into the mainstream, there is every opportunity to roll up your sleeves and have a play. Just interfacing with a chatbot (of which there are now many) will tear away some of the mystery surrounding these tools, and open your eyes to the range of applications at your disposal. On one level, it is undeniably cool to integrate these tools into your everyday projects. AI can be a formidable thought partner, if kept in check.
I would steer clear of AI discussions on social media, and indeed much of mainstream media. Ignore every thread that promises ‘X ways AI will transform your life’. Instead, seek out a broad church of thoughtful writers and thinkers on places like Substack who focus on the harder questions of how AI will land on society, and who are willing to engage the nuances of how this technology can both help and hinder human progress.
Finally, take a breath. The AI apocalypse is not as imminent as some would have you believe. There’s a long road to travel and the impact of AI will ultimately be determined by how we humans choose to shape its development. You have a role to play; make it count.