A recent side quest took me down a rabbit hole of opinions on the impact of AI on humanity. Most of the musings I found – delivered via old-fashioned Google search – focused on employment, economies and egos. While they ranged from fear to FOMO and pretty much everything in between, all seemed to be looking just a bit too far ahead for my side quest to reach a conclusion that resonated with my own thoughts.

As I typed yet another query into Google, trying to force the trusty old search engine to deliver something closer to what I was looking for, I found not a conclusion but a jolting realisation. When yet another OK-but-not-great answer arrived in a split second, I realised how much the ubiquity of AI has already subtly changed my own default actions in a way that has nothing to do with employment or economies. In an almost out-of-body experience, I recognised how – instead of using my (legendary amongst friends and family) skill of crafting the search query that would help Google deliver the best result available – I had lazily submitted a broader question, unconsciously falling into the habit of trusting that the code behind the AI mode would fill in the gaps I left in my sloppily worded question.

The realisation brought back a conversation I had recently over dinner with a top leader of a massive multinational company. He was very excited about going on an AI training course the following week, and I could sense that he was about to make the same mistake so many others are making in the AI FOMO frenzy: not being crystal clear about the exact business case (and everything that went with it in a pre-AI world) for building a much-hyped AI agent before trusting the sparkling new bot to run off and do … stuff.
Critical thinking as an antidote to an AI-informed world isn’t a new thought, but it’s usually mentioned in the context of verifying the answers so very confidently presented by generative and agentic AI – or, sometimes, in the context of AI governance and ethics during the system design phase.
We might also need to apply critical thinking to ourselves more than to what is being generated or “agented” by what I’ve heard so many professionals call their new best friend. Critical thinking – for me, at least – starts with a pause. A pause during which we can check what the true purpose of any activity is. Are we doing something to ease pressure? To free up time? To bridge a knowledge gap? To soothe insecurities? To save face? To leapfrog progress? To keep up with the proverbial Joneses?

The pause should also allow us to consider the trade-offs of the activity. Will leapfrogging progress perhaps expose a lack of depth later on? Will the time saved during the initial activity be gobbled up later by fact-checking or fixing? Will the rapidly constructed and followed advice become a case study in how the fable of the hare and the tortoise remains true even in the face of incredibly impressive technological advancements?

Only once we’ve paused properly, and with purpose, are we ready for the next step: making a decision. It seems obvious, but it’s another unmissable element of critical thinking. We often tend to act from default rather than from decision, unless we consciously question our own actions while they are still impulses or considerations. In the example above, I noticed that my default had become asking lazy questions, simply because AI makes it easier for me to get away with them.

The challenge now is to pause before doing something as seemingly simple as typing a question; to evaluate the true purpose of the question; to consider whether asking a collection of databases – search-based or LLM-based – is truly the best way of finding an answer; to decide on the next course of action; and then to apply true critical-thinking theory to the answers presented by whichever tool I end up using. No wonder we’re looking for an easier way! Or that’s what it is comfortable to tell ourselves.
In reality, it takes much longer to type out an explanation of critical thinking than it actually takes the brain to go through the entire complex process.
Perhaps it’s time to append “default in, default out” to the saying “garbage in, garbage out.” To get better answers, we have to ask better questions – by allowing critical thinking to come to our rescue sooner rather than later.

DISCLOSURES
My policy on the use of AI is available here. Disclosure on use of generative AI for this piece of content: