Commentary from Greg @ Bain & Co. The theme echoes what is becoming clear elsewhere: businesses are deploying AI on employee desktops, but far less operational deployment is targeted at productivity growth.
______________


Every other week we'll provide updates on the latest value levers and trends operators are asking us about in Technology and Software. If there are things you want to hear more about, shoot us a note.
Let's start with the easy answer: yes, AI is real, and we have clients achieving 10-25% EBITDA improvement (top and bottom line). However, for every success story there are dozens of situations where management teams are not seeing results. We call these situations "the trap of micro-productivity." For our initial "Inside Software" note, we figured we'd share a few themes:

Personal AI is everywhere. Not surprising, but personal AI is becoming a workplace utility. Employees view consumer AI the way they view Word, Excel, or PowerPoint. As one exec put it: "In three years, we won't be measuring the ROI on giving access to enterprise LLM licenses—we'll simply accept them as how work gets done, not dissimilar to how we think about Word, Excel, PowerPoint, Google Docs, and more."

Despite rapid adoption of personal AI, only enterprise-wide solutions deployed at scale are creating underwritable impact, and many of those efforts are failing. Operators highlight a number of reasons:

- Many pilots, but not enough scaling: Lots of teams are handing out licenses and encouraging "experimentation," but without a system to identify the use cases that really drive value and to build them in a scalable way, companies get stuck in "micro-productivity" (e.g., an individual saves time, but the gain doesn't scale to peers or functions).

- Putting AI on top of bad process: Whether it is code writing, blog writing, or many other activities, companies are layering AI on top of processes that need to be reinvented. A classic example: marketers use AI to accelerate blog writing, but the process still takes 8-10 weeks because the bottleneck isn't the writing, it's all the sign-offs and rewrites before the blog gets posted. We have seen companies take these processes from 8 weeks to 5 days, but they did so by fully rethinking the process and using AI to accelerate it.
- Lack of clarity on objectives: For South Park aficionados, this might feel familiar: the "underpants gnomes" plan, where Step 2 is a total mystery. If you haven't seen it, imagine a business strategy with a big blank in the middle; that's how many companies approach AI today. Step 1: Use AI. Step 2: ? Step 3: Efficiency! Without clear objectives, teams apply AI to "everything" but lack clarity on the results they are trying to drive.

- Talent + architecture: These two go hand in hand. Business leaders get stuck because they don't know how to build solutions, and technologists get stuck because they lack the context of the situation. Without these two teams working together, most pilots remain just pilots.

What this means for leaders: While this list is by no means exhaustive, we have found that management teams achieving AI success have a number of elements in place:

- Leadership moment: Set the ambition, hold the targets, and close the back doors. Put clocks on outcomes (e.g., "cost-to-serve down X% in 6-12 months").

- Pick use cases, not functions, and think end-to-end: Don't do "AI in Marketing"; solve lead-to-opportunity or content-to-campaign. For example, a marketing team can auto-reply to inbound leads with agents, but if sales can't absorb the follow-ups, you just balloon the backlog and blow SLAs. Fix the whole flow, not just the shiny task.

- Re-imagine your process, then layer in AI: Collapse handoffs, trim approvals, and codify the new way of working. We've watched a marketing asset cycle go from 3 weeks to 3 days after the process was rebuilt and then automated.

- Bring operators and architects together: Many companies assign their AI efforts either to operators, who understand real use cases, or to enterprise architects, who understand how to build scalable solutions. To be successful you need both. We have seen companies further increase their odds of success by bringing in third-party partners to supplement internal talent and accelerate speed to results.
- Balance data readiness with results: While AI requires rich data context, too many organizations stall in cleanup efforts with little to show for them. Leaders must pursue pragmatic approaches: piloting with existing data while building scalable foundations over time.

TL;DR: run two plays at once. Keep cheering on personal productivity (you'll never perfectly measure it), and stand up a small set of end-to-end process rebuilds with operators, architecture, and pragmatic data. That's when AI stops being a cool demo and starts showing up in results.

We hope you found this initial email interesting. We encourage replies and are glad to discuss these themes further.

Best,
Greg Callahan & Bain's Software Practice
Greg Callahan, Partner & Global Leader of Software Practice, Boston

