AI is tepid: but are we ready for truly critical AI?
- Roberto Coronel
- Dec 27, 2025
- 4 min read
This piece was originally published in October 2024, updated in December 2025.
In today’s AI landscape, models like Gemini, Claude, ChatGPT, and Copilot are often perceived as extremely advanced—sometimes even too powerful. Yet their most defining trait is not intelligence, but constraint. These systems are deliberately designed to avoid offense, tough critique, and strong opinions, operating under tight safety filters, optimization goals, and alignment layers defined by their creators—AI with soft edges.
In a conversation with AI enthusiast Roberto Coronel, we explored the restrictions we've placed on AI, restrictions most users are unaware of and that limit how effectively we use these tools.
For example, you may not realize it, but AI can coach you. Yet how often do you treat the machine living in your computer as a tool for feedback? Would you actually use AI to sharpen your skills?
The truth is, we're not ready for AI to train us.
We actively resist allowing AI to take on more critical or uncomfortable roles, because doing so would require ceding a degree of agency, particularly in areas like critique, reflection, and decision support:
"Critique my article with no sugarcoating."
"Provide me with constructive feedback on what went wrong in my video."
"Identify gaps of emotional communication in my marketing so I can close them."
These requests require AI not just to assist, but to confront—and confrontation is precisely what current systems are designed to avoid.
"We're only scratching the surface of AI's potential in fostering deeper thought processes. Sure, AI can assist in many ways, but we're not at the stage where it replaces the nuanced, critical thinking required in high-level tasks," says Coronel.

AI and offense: the learning disconnect
One uncomfortable reality we need to face is that offense—criticism, challenge, pushback—drives growth. Unfortunately, we can’t depend on AI to deliver that type of feedback—not because it lacks the capability, but because we’ve deliberately constrained it from doing so.
"While nobody enjoys taking offense, it's necessary for real learning and improvement. And until AI systems can offer tough love, they won't be able to facilitate the kind of learning that leads to true progress," explains Coronel.
We have to accept that critical feedback is a crucial part of any learning process. Whether it's identifying flaws in your work or pointing out areas of improvement you hadn't considered, there's a constructive side to criticism that modern AI systems could support more deeply. However, safety filters and alignment guardrails designed to prevent “dangerous” outputs also limit the system’s capacity to explore challenging, unconventional, or uncomfortable perspectives.
ChatGPT: a predictor, not a source
Coronel explained these limitations using ChatGPT as an example.
"ChatGPT does not retrieve live information or reason independently. It is a statistical language model that predicts text based on learned patterns, operating within strict alignment layers and processing constraints. While it can approximate knowledge or connect ideas, its behavior is governed by guardrails—not autonomous judgment."
This sounds limiting, but it's also interesting. Why? Because it means ChatGPT infers more than your average marketing intern does. It connects dots in ways that can surprise you, but it isn't flawless. Like that same intern, it may occasionally miss instructions or, worse, invent the answers it thinks you need. That's why AI, like that intern, shouldn't be left unsupervised.
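To make the "predictor, not a source" point concrete, here is a minimal sketch of next-token prediction, using the small open GPT-2 model from the Hugging Face transformers library as a stand-in (an assumption: ChatGPT's own weights and guardrails are not public). The model never looks anything up; it only ranks candidate next tokens by probabilities learned from training text.

```python
# Minimal next-token prediction demo with GPT-2, a stand-in for any LLM.
# (ChatGPT's internals are not public; this only illustrates the mechanism.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Critique my article with"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Logits: a score for every vocabulary token, at every position.
    logits = model(**inputs).logits

# Probabilities for the single next token after the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    # The model is not answering; it is ranking likely continuations.
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={prob.item():.3f}")
```

Run it and you'll see plausible continuations ranked by probability, which is all the "intelligence" there is at this level; everything else (retrieval, guardrails, refusals) is layered on top.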

Copilot: a step forward with real sources, yet still constrained
Now let's consider Copilot. It marks a step forward from ChatGPT because it can search and read content directly from the web, sourcing real-time information that seems to give it a substantial edge. However, the ability to tap into online resources doesn't equate to unrestricted access or unfettered accuracy.
"Despite these advancements, Copilot’s behavior is still governed by processing constraints, safety filters, and optimization goals—meaning access to information does not equate to independent reasoning or agency," explains Coronel.
Furthermore, the apparent freedom to scour the web brings its own set of challenges. The internet is not always a reliable source of truth; it is riddled with inaccuracies and undesirable content. Our caution about what the internet might yield, whether unappealing or incorrect, places additional limits on the utility of such AI tools. Despite its advancements, Copilot, like its predecessors, must navigate these complexities and is ultimately limited by the quality and veracity of the information it accesses.
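For contrast, here is a hedged sketch of the retrieve-then-generate pattern that web-connected assistants like Copilot appear to follow. Every function below (web_search, passes_safety_filter, generate_answer) is a hypothetical placeholder with canned behavior, since Microsoft's actual pipeline is not public. The point is structural: retrieval changes the model's inputs, not its nature, and a safety filter sits between the web and the answer.

```python
# Hypothetical retrieve-then-generate pipeline (not Copilot's real API).
from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    snippet: str

def web_search(query: str) -> list[SearchResult]:
    # Hypothetical stand-in for a real search API; returns canned data here.
    return [SearchResult("https://example.com/source", f"A snippet about {query}.")]

def passes_safety_filter(text: str) -> bool:
    # Hypothetical guardrail; real systems filter sources far more aggressively.
    return "unsafe" not in text.lower()

def generate_answer(prompt: str) -> str:
    # Hypothetical model call; the underlying LLM still only predicts text
    # conditioned on whatever prompt it receives.
    return f"(model-predicted answer grounded in: {prompt[:60]}...)"

def answer_with_sources(query: str) -> str:
    results = [r for r in web_search(query) if passes_safety_filter(r.snippet)]
    context = "\n".join(f"[{r.url}] {r.snippet}" for r in results)
    prompt = f"Answer using only these sources:\n{context}\n\nQuestion: {query}"
    return generate_answer(prompt)

print(answer_with_sources("latest AI alignment research"))
```

Note where the constraint lives: the filter runs before generation, so anything it drops, rightly or wrongly, never reaches the model. That is exactly the trade-off Coronel describes.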
What's next for AI?
The key to unlocking AI's potential lies in how we allow it to evolve.
"If we keep designing models that are passive, safe, and agreeable, we will never see AI’s capacity for serious critical analysis, challenging feedback, or offense-driven growth. Until we accept discomfort and deliberately cede a degree of agency—at least in the space of critique and reflection—AI will remain a convenient, if tepid, tool rather than a genuinely transformative partner."
In other words, we demand that AI be powerful—while simultaneously disabling its ability to confront, disagree, or push us beyond our comfort zone.
Want to read more from Roberto? He's contributed other articles to Write Wiser on this same subject.


