FEB 09, 2026

Why AI skepticism in professional work is worth reconsidering

By Bogdan Aioanei, CEO of DIALOG AI

There is a growing divide in professional circles between those who have embraced AI tools and those who remain deeply skeptical of them. Many executives are now mandating AI adoption across their organizations, a top-down approach that often misfires. But the impulse behind it is understandable, because something genuinely significant is happening, and dismissing it outright carries real professional risk.

Some of the most capable people I know hold a firm conviction that AI is a passing trend — the next iteration of blockchain hype. I've been hesitant to challenge them, because many of them are sharper than I am. But their arguments often don't hold up under scrutiny, and they deserve a direct response. In the meantime, extraordinarily talented professionals are spending time on work that AI tools already handle more efficiently, largely out of principle.

If all progress on large language models stopped today, they would still represent one of the most consequential developments in professional productivity in a generation.

An important caveat: the observations that follow are grounded primarily in knowledge work — fields like software development, data analysis, research, operations, and other domains where professionals spend significant time gathering information, producing structured output, and iterating on results. In creative fields such as fine art, music composition, and literary writing, the picture is more nuanced, and I'm inclined to defer to practitioners in those areas. But in domains defined by structured problem-solving, the case for AI is strong and getting stronger.

Understanding the current state of AI tools

Much of the skepticism I encounter is rooted in outdated experience. If your last serious attempt at using AI was a conversation with a chatbot six months ago — or, worse, an early encounter with a code completion tool two years ago — you are not evaluating what practitioners are actually using today.

The current generation of AI tools operates through what are broadly called "agents." These are systems that don't just respond to a single prompt; they explore your working environment, produce and revise output, execute tasks, and iterate based on feedback. In a software context, that means reading codebases, running compilers and tests, and correcting their own mistakes. In a research context, it might mean pulling information from multiple sources, cross-referencing findings, and producing structured summaries. In an operational context, it could mean analyzing logs, identifying anomalies, and proposing remediation steps.

The underlying architecture of these agents is not, itself, particularly exotic. It is straightforward systems integration — connecting an AI model to real-world tools, feedback loops, and verification mechanisms. The effectiveness of an agent has as much to do with how well the surrounding workflow is designed as it does with the underlying model's capabilities.
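To make that concrete, here is a minimal sketch of such a loop in Python. Everything in it is illustrative: the action format and the model_call and run_tool placeholders are assumptions standing in for whatever model provider and tools a real system would wire in.

```python
# Minimal sketch of an agent loop: ask the model for an action,
# execute it against real tools, feed the observation back, repeat.
# model_call and run_tool are hypothetical placeholders, not any vendor's API.

from dataclasses import dataclass, field


@dataclass
class AgentState:
    """Accumulated context the agent carries between iterations."""
    goal: str
    history: list = field(default_factory=list)


def model_call(state: AgentState) -> dict:
    """Placeholder: send the goal plus history to a language model and
    get back either a tool request or a final answer."""
    raise NotImplementedError("wire up your model provider here")


def run_tool(name: str, args: dict) -> str:
    """Placeholder: execute a real-world tool (run tests, search the
    codebase, query a database) and return its output as ground truth."""
    raise NotImplementedError("wire up your tools here")


def run_agent(goal: str, max_steps: int = 20) -> str:
    """Loop until the model declares the task done or we run out of steps."""
    state = AgentState(goal=goal)
    for _ in range(max_steps):
        action = model_call(state)
        if action["type"] == "finish":
            return action["result"]                    # model says it is done
        observation = run_tool(action["tool"], action["args"])
        state.history.append((action, observation))    # the feedback loop
    raise RuntimeError("agent did not converge within max_steps")
```

Everything interesting lives in the tools and feedback wired into that loop, not in the loop itself.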

If your mental model of AI assistance is still "ask a chatbot a question and paste the answer somewhere," it is no surprise that you and the people advocating for these tools are talking past each other.

The practical case for AI adoption

AI tools excel at absorbing the tedious, high-volume, low-complexity work that fills a disproportionate share of most professionals' days. They drastically reduce the time spent searching for information, navigating documentation, and handling routine procedures. Most importantly, they don't get fatigued and they don't procrastinate.

Consider any project you've wanted to start but didn't. You scoped out the first steps, felt the weight of all the preliminary setup and research, and set it aside — for a day, a year, or indefinitely. AI tools can handle much of that initial overhead, often delivering you to precisely the point where the work becomes intellectually engaging and productive.

There is a secondary benefit that is easy to overlook. In most professional roles, there is a constant temptation to occupy yourself with low-priority but comfortable tasks — reorganizing files, reformatting documents, fine-tuning processes that are already adequate — as a way of avoiding the harder, more important work. When an AI tool can handle those tasks in the background, the comfortable hiding places disappear. You are left facing the work that actually requires your judgment, expertise, and creativity. That is, ultimately, a good thing.

Addressing the common objections

"You don't really understand the output"

This concern assumes that professionals using AI tools are accepting output uncritically. That has never been the standard in any serious workflow. You have always been responsible for the quality of what you deliver, regardless of where it originated. AI-generated output should be reviewed, edited, and validated with the same rigor you would apply to work produced by a colleague or a contractor.

If anything, reviewing AI output is often easier than reviewing human output, because it tends to be more predictable and uniform in structure. If a professional cannot evaluate and refine the kind of straightforward output an AI tool produces, that is a gap in their own capabilities — not an indictment of the tool.

"AI hallucinates"

This was a legitimate concern in earlier iterations of these tools, and it remains relevant in some contexts. But in most professional workflows, hallucination is increasingly a solved problem — not because the models have stopped making errors, but because well-designed agent systems include verification steps. They check their own output against real-world constraints: compilers, test suites, databases, linting tools, and other forms of ground truth. When an error is detected, the system corrects itself.
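As an illustration of what such a verification step can look like in a software workflow, here is a small sketch in Python. The pytest invocation is real; apply_draft and ask_model_to_fix are hypothetical placeholders for however a given agent edits files and calls its model.

```python
# Sketch of a verification loop: a draft change is only accepted once the
# test suite (ground truth) passes; failures are fed back for self-correction.

import subprocess


def run_tests() -> tuple[bool, str]:
    """Ground truth: run the project's test suite with pytest."""
    proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr


def apply_draft(draft: str) -> None:
    """Placeholder: write the model's proposed change to the working tree."""
    raise NotImplementedError("depends on how your agent edits files")


def ask_model_to_fix(draft: str, failure_log: str) -> str:
    """Placeholder: return a revised draft given the failing test output."""
    raise NotImplementedError("wire up your model provider here")


def verified_change(draft: str, max_attempts: int = 5) -> str:
    """Accept a draft only after the tests pass; otherwise retry with feedback."""
    for _ in range(max_attempts):
        apply_draft(draft)
        passed, log = run_tests()
        if passed:
            return draft                        # verified against ground truth
        draft = ask_model_to_fix(draft, log)    # self-correction on failure
    raise RuntimeError("no passing draft within max_attempts")
```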

You will only notice this happening if you watch the detailed process logs. In practice, the end result is what matters, and modern systems are remarkably good at self-correction.

"The quality is mediocre"

This objection deserves a more nuanced response than it usually receives.

First, mediocrity is underrated. Not every piece of work demands excellence. A significant portion of professional output is routine, and producing adequate work quickly is often more valuable than producing exceptional work slowly. AI tools raise the floor of quality across an enormous volume of tasks. That alone is transformative.

Second, the claim that AI output is uniformly mediocre is increasingly inaccurate. These tools often possess a broader repertoire of techniques and approaches than any individual practitioner. They can draw on patterns from vast bodies of knowledge in ways that complement — and sometimes surpass — human intuition.

Third, the role of the professional is not diminished. AI tools handle the initial production; the professional provides curation, judgment, refinement, and direction. This is not a new dynamic. Senior professionals have always operated this way when managing teams — guiding less experienced contributors toward better outcomes. The difference is that AI tools are faster, cheaper, and available on demand.

"It threatens craftsmanship"

There is a meaningful distinction between craft as a personal pursuit and craft as a professional obligation. Professionals who take pride in elegant, meticulously refined work are not wrong to value that. But in a professional context, the goal is to solve problems effectively for the people who depend on your work. Spending excessive time perfecting output that is already adequate is not craftsmanship — it is a form of procrastination.

AI tools absorb the routine work and clear a path to the areas where your judgment, values, and expertise genuinely matter. Far from eliminating craftsmanship, they create more space for it — in the places where it counts.

"It will never achieve general intelligence"

This is irrelevant to the practical question. Whether or not AI ever achieves anything resembling general intelligence has no bearing on whether current tools are useful today. The hype surrounding artificial general intelligence is a distraction. What matters is whether the tools work, and in a growing number of professional contexts, they do.

"It threatens jobs"

This is the most serious objection, and it deserves an honest answer. AI tools very likely will displace some professional roles, particularly those defined primarily by routine information processing and production. That is a real and legitimate concern, and no amount of optimistic rhetoric about "new kinds of work" should paper over it.

But it is also worth acknowledging that technology-driven displacement is not new, and many of the professionals most resistant to AI have spent their careers building tools that automated other people's work. The discomfort of being on the other side of that dynamic is understandable, but it does not constitute an argument against the technology itself.

"There are serious intellectual property concerns"

This is a legitimate legal and ethical question, and one that courts and legislatures are actively working through. Professionals should take it seriously, and organizations should seek appropriate legal counsel on how AI tools interact with proprietary information and licensing obligations.

That said, intellectual property concerns are not unique to AI, and in many professional domains, the practical implications are less dramatic than the rhetoric suggests. AI tools synthesize and transform information in ways that are often more removed from their source material than the output of a human professional working from the same references.

The broader picture

When I began drafting this piece, I described the current state of AI tools as a way of establishing common ground. But the pace of development is such that any specific description of capabilities becomes outdated almost immediately.

Professionals who have embraced these tools are not just using them for individual tasks. They are integrating them into asynchronous workflows — delegating multiple streams of work to AI agents, reviewing the results in batches, and achieving levels of throughput that would have been unimaginable a short time ago. The productivity gap between those who have adopted these tools and those who have not is widening rapidly.

I have spoken with colleagues across a range of industries who describe the same experience: those on their teams who have embraced AI are operating at a fundamentally different level of productivity. These are not breathless futurists or venture capitalists with an agenda. They are practitioners reporting what they observe in their daily work.

There are still plenty of tasks where AI tools are unreliable or inappropriate. Critical decisions, sensitive contexts, and novel problems at the frontier of a domain still require unassisted human judgment. But the share of professional work that falls into those categories is smaller than most skeptics believe.

A call for honest engagement

I am not a futurist, and I am not arguing that AI will solve every problem or that its adoption is without risk. I am arguing that the reflexive skepticism I see in many professional communities is not well-founded, and that it is preventing talented people from realizing significant gains in their own work.

AI is receiving a level of attention comparable to what smartphones received in 2008 — perhaps less than what the internet received in the mid-1990s. That level of attention seems proportionate.

My expectation is that this will become clearer over the coming year. The dismissive posture toward AI tools — treating them as toys or gimmicks — is increasingly difficult to maintain in the face of what practitioners are actually accomplishing with them. When the skeptics do come around, and many will, their expertise and critical thinking will make these tools dramatically more effective than they are today.

The most productive path forward is not uncritical enthusiasm or reflexive dismissal. It is honest engagement with what these tools can and cannot do, grounded in current reality rather than outdated experience or speculative fears. The professionals who take that path will be the ones best positioned to thrive in what comes next.