AI, PhDs, and the Art of Asking Questions
Recently, I came across a thought-provoking piece by Prof. Rajgopal, "The PhD is the New MBA." He highlighted that some of the core skills developed during a PhD are exactly the ones we need when engaging with today's large language model (LLM) technologies:
- Keep all inferences evidence-based.
- Always ask what the basis for an inference is.
- Learn to ask the right questions — the answers usually matter less.
- Continuously push yourself to identify the “incremental contribution” to what’s already out there.
These principles struck a chord with me. During my own PhD, I realised that one of the most valuable skills wasn't conducting experiments or publishing papers, but learning how to frame meaningful problems and research questions. The exploratory nature of doctoral research trains you to navigate uncertainty, challenge assumptions, and keep searching for what truly matters.
This aligns closely with Bloom's taxonomy, where higher-order critical thinking (analysis, evaluation, and creation) sits at the top of the hierarchy. And it's equally vital when working with LLMs. We must not simply accept their outputs as truth; instead, we must question, probe, and maintain a critical lens.
Why? Because LLMs can confidently generate plausible-sounding but false information, a phenomenon known as hallucination. In fact, recent analysis shows that as AI models become more advanced, some hallucinate even more often than their less capable predecessors, embedding inaccuracies within fluent, coherent narratives (ref).
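To make the "question and probe" habit concrete, here is a minimal sketch of what it can look like in practice: ask a model a question, then explicitly ask for the basis of its answer so the claims can be checked against sources. This is an illustrative sketch only, assuming the OpenAI Python SDK; the model name is a placeholder, and the same pattern works with any chat-style client.

```python
# Minimal sketch: ask a question, then probe for the basis of the answer.
# Assumes the OpenAI Python SDK (pip install openai); model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Did study X find that A causes B?"

# Step 1: get an answer.
answer = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

# Step 2: probe for the evidence behind the answer, PhD-style.
probe = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": (
            "What is the basis for each claim above? "
            "Cite verifiable sources, and say 'I am not sure' "
            "where no source exists."
        )},
    ],
).choices[0].message.content

print(answer)
print(probe)  # the model's stated basis still needs human verification
```

The probe step doesn't eliminate hallucination (models can fabricate citations too); it simply surfaces claims in a form that a critical reader can verify against the actual sources.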
In short: the mindset of a researcher (evidence-based reasoning, rigorous problem-framing, and sharp critical thinking) is not just an academic asset. It's becoming essential for anyone seeking to leverage AI responsibly and effectively.