AI Can’t Automate a Process That Doesn’t Exist

If your company is struggling to leverage AI on an enterprise level, there may be a simple explanation: you haven’t made the boring decisions yet.

I’ve watched the same pattern play out with consulting clients over the past 18 months. The make-or-break factor is whether the team has made the basic strategic decisions required to automate.

Failure comes in different forms. Maybe the CEO gives hyper-specific feedback on individual pieces that can’t be generalized into guidelines. Or the VP of Marketing disagrees with the CEO’s taste but won’t push back directly. Maybe a writer has never written for the vertical before, or the content team is doing everything manually, chasing their tails trying to navigate internal landmines. When you’re understaffed and under-resourced, low-stakes decisions on individual deliverables can seem like an afterthought not worth the time and effort.

But if you can’t decide what good looks like, AI won’t help you get there any faster.

This applies to any process a company tries to automate—sales enablement, customer support, product ops. But I see it most clearly in content because that’s where I most often focus.

The Real Blocker Isn’t the Tech

AI adoption advice often focuses on picking the right tools, writing better prompts, or getting buy-in. But in practice, the companies that can’t adopt AI successfully often haven’t made the boring foundational decisions required to automate work in the first place.

Before you can automate content workflows, you need answers to unglamorous questions: Who owns which tasks? What does “good” look like? Who can approve AI output? What’s the actual process from start to finish? These aren’t trick questions, but most teams can’t answer them consistently.

Without clear answers, you get organizational gridlock. Executives are disappointed that AI isn’t delivering the promised ROI. Managers are caught between their boss’s directives and their team’s reality. Contributors spin their wheels redoing monotonous work manually. Everyone ends up frustrated.

Five Boring Decisions Every AI-Enabled Workflow Needs

These aren’t AI prompts—they’re basic operational decisions that need to be made before expecting AI to help improve output.

1. The What and the Why

What exactly needs to get done? For a content project, it’s not “create an article” or “write a case study” but “create a customer story that proves our impact.” What are the building blocks that get you there? What framework will hit the mark on repeat?

2. Task Definition & Ownership

Rarely is the content team the owner of the proprietary information that will make a brand story matter. Where can they get the data they need? Who is the subject matter expert? Who owns the final output? Until you can answer these with specificity, you’re not ready to automate.

3. Quality Standards

What does “good” look like? What are the non-negotiables versus the nice-to-haves? Who decides if something meets the bar? If the answer is “the CEO reviews everything,” you don’t have a process—you have a bottleneck.

4. The Actual Process

What are the steps from start to finish? What tasks can AI cover, and where do humans need to take over? What’s the handoff between steps? What requires executive sign-off versus manager discretion versus individual judgment? If you can’t draw this on a whiteboard, you can’t automate it.

5. Success Metrics

How do you know if your process is working? What would make you stop using AI for this task? How often do you revisit these decisions? Without agreed-upon metrics, you’ll spend more time debating whether it’s working than actually using it.

When AI Exposes Dysfunction

Companies have always gotten away with undefined processes because they had people who could navigate the chaos. A good writer learns to read the room. They figure out that the CEO wants data-driven case studies while the VP prefers emotional customer stories. They develop a sixth sense for what will pass review and what won’t. They become expert translators between competing visions of “good.”

This works—sort of. It’s inefficient and burns people out, but content gets produced.

AI can’t do that. It can’t read a room or triangulate between three competing definitions of quality. It needs clear instructions, documented standards, and someone to have actually made a decision.

When you try to automate before that’s in place, the dysfunction becomes visible. Reviewers can’t hide behind “I’ll know it when I see it.” The problem was never the writer or the tool; it’s that these decisions get skipped as too tactical, too in the weeds, and contributors are left to figure it out on their own.

But AI won’t figure it out for you. Its output will fail and make it obvious that the decisions were never made.

What the Other Side Looks Like

The companies that get this right don’t look dramatically different from the outside. They don’t have fancier tools or bigger budgets. What they do have is a documented definition of what “good” looks like—not in someone’s head, but written down, agreed upon, and referenced regularly.

They have clear ownership: who provides the inputs, who reviews the output, and who has final say. They have a process a new team member could follow on day one without needing six months of institutional knowledge.

Once that foundation exists, AI becomes a multiplier instead of a mirror of dysfunction. Content production accelerates. Quality stays consistent because the standards are explicit, not intuited. The team stops spending energy on political navigation and starts spending it on improvement and iteration. The humans focus on the work that actually requires their judgment—the strategic framing, the creative instinct, the editorial eye—and the system can handle the rest.

That’s what happens when you make the boring decisions first and build a content engine on a strong foundation.

—Meghan Graham
