On this episode of the Friday 15, Andy & Brian talk with Optimizely’s Trevor Pope about whether practitioners should frame AI initiatives in terms of their impact on costs or their impact on revenue. (You might be surprised to hear that it can be both…)
FAQ
Q: Should B2B companies prioritize using AI to save money or make money? A: According to the Master B2B community poll, 41% said making money is the top priority, 30% said saving money, and 27% said improving customer experience. However, as Andy Hoar observed, “Making money is the reason people start the process. Saving money is why they renew the process.” Revenue gains are harder to attribute, while cost savings are much easier to prove with straightforward A/B testing.
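That “easier to prove” point comes down to statistics: an A/B test yields a clean before/after comparison you can test for significance. As a minimal sketch (not from the episode, using made-up conversion numbers), a standard two-proportion z-test is often all the attribution a cost-savings claim needs:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-test: did variant B outperform control A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool the conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value from the standard normal CDF (via math.erf)
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical traffic: control converts 400/10,000, variant 470/10,000
z, p = two_proportion_z(400, 10_000, 470, 10_000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

With numbers like these, p falls below the conventional 0.05 threshold, which is exactly the kind of straightforward proof a cost-savings initiative can produce and a revenue-attribution story usually cannot.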
Q: Where are B2B companies actually seeing AI cost savings? A: The biggest savings are coming from team efficiency — specifically, enabling non-developers to do work that previously required expensive specialists. Trevor Pope shared that companies are using AI to write code for website experiments, run SEO/GEO analysis, and handle results interpretation without hiring outside consultants or waiting for developer resources. One key finding: individuals can now run entire experimentation programs that previously required larger teams, with companies saving around 25% of people’s time on specific tasks.
Q: Can you give a real example of AI generating revenue in B2B? A: Pope shared that a midsize distributor increased sales by $250,000 in just 90 days by using AI-powered experimentation to optimize their product detail pages. The key insight was about shipping — their customers’ purchasing decisions were driven not by product price but by shipping costs and delivery timing. By optimizing how shipping information was presented (and even raising product prices while lowering shipping costs), they won significantly more sales. They never would have discovered this without systematic experimentation.
Q: Why is the enterprise-wide productivity impact of AI so much lower than task-level gains? A: According to Wharton research cited in the episode, generative AI shows average labor cost savings of 25% for specific tasks, but the enterprise-wide impact drops to just 5%. Andy Hoar argued this is the enterprise’s fault, not AI’s — companies are plugging AI into existing processes without rethinking how they work. As he put it, “People are not changing how they work. They just plug in the AI and it makes a marginal to no difference because they’re not thinking differently about how to use the technology.”
Q: What does the data say about AI-ready companies vs. laggards? A: BCG found that AI-ready companies achieve five times the revenue increases compared to laggards. Additionally, 82% of top-performing companies target growth and innovation with AI, whereas only 50% of average companies do. BCG also estimated that companies successfully scaling AI across revenue-generating functions see revenue increases of up to 10%.
Q: How is Optimizely using AI differently than traditional A/B testing? A: While Optimizely has offered experimentation tools for over a decade, their current AI capabilities (powered by Gemini) go well beyond traditional A/B testing. Their AI tool, Opal, combines general AI intelligence with Optimizely’s proprietary marketing data to suggest experiment ideas, recommend variations based on past performance, prevent duplication of failed tests, and — critically — analyze results automatically. The analysis piece is especially valuable because interpreting test results has historically been the biggest bottleneck in experimentation programs.


