
I have written extensively for this publication on the significant change-management and business process challenges that legacy companies face as they undertake organizational transformation to prepare for an AI future. But what about digitally native businesses, founded during the peak of the Internet era? What opportunities and challenges do these digitally native firms confront as they navigate the next great wave of technology-driven transformation?
SurveyMonkey is a U.S.-based software company best known for its online platform that enables individuals and organizations to create surveys and forms, collect responses, and analyze data to make decisions. Founded in Silicon Valley in 1999, the company sold a majority stake in 2009 to a group of investors that included Bain Capital. Dave Goldberg, a former Yahoo executive, was named CEO and led the company through a period of rapid growth, reaching a $2 billion valuation. Today, led by CEO Eric Johnson, SurveyMonkey serves users in 190+ countries and processes millions of survey responses daily. Its products are widely used across business, education, healthcare, government, and non-profits worldwide.
I recently discussed what AI transformation looks like for an Internet-era company with Meenal Iyer, who serves as senior vice president for data and analytics at SurveyMonkey. As head of enterprise data and AI strategy, Iyer leads an integrated 30+ person team spanning data engineering, analytics, and machine learning. She comments, “This structure matters because it keeps us focused on end-to-end impact rather than disconnected pieces of work.” Iyer continues, “I’m responsible for the foundation: the platform, architecture, tooling, and governance that make AI production ready.” She adds, “I’m also responsible for how AI connects to the business: how we prioritize investments, where AI can change outcomes, and how we measure whether our AI investments are working.”
Iyer has spent her career leading data organizations that convert business complexity into measurable growth and competitive advantage in fintech, retail, and travel. “A large part of my role is to close the gap between technological experimentation and business impact,” says Iyer. “Many organizations are stuck in AI experimentation, with interesting pilots.” Some Chief Data and AI Officers refer to this common challenge as “pilot-itis”. Iyer responds, “Everything we build has to tie back to business impact. It is not enough to build a demo. It must be something the business can use.” She adds, “My mandate is to move us past the demo stage — to make data and AI part of how the company operates, not something that just sits alongside it.”
Making AI Production-Ready at SurveyMonkey
To ensure that AI is production-ready across the company, SurveyMonkey is focused on three core priorities, as Iyer explains. The first is putting intelligence at the point of decision, so sales, marketing, and customer success aren’t hunting for insights; they’re getting them in the tools they already use. The second is embedding data and AI into workflows, not layering it on top. Iyer notes, “If it’s not part of how work gets done, it doesn’t get used.”
The third element, says Iyer, is activating something that’s unique to SurveyMonkey: over two decades of data on how people think, respond, and make decisions, captured across industries, at a global scale. Iyer explains, “We’ve seen more than 100 billion questions answered on our platform. That’s volume, but it’s also a deep understanding of how to ask, interpret, and act on human input.”
While many organizations have made data more accessible, fewer have made it usable in real decisions, contends Iyer. “This is the gap we’re focused on closing.” She explains that SurveyMonkey is thinking about this evolution in phases:
The first phase is Access, says Iyer, which she describes as conversational intelligence layers: making data accessible through natural language and meeting people where they already work. She explains, “In our case, that’s tools like Slack. That’s what our current architecture delivers today: a multi-domain agent where marketing, product, sales, and customer success each have their own agents, all operating in the flow of work.”

The second phase is Cognitive Intelligence, which, Iyer explains, is where the system answers questions and starts connecting signals across the business. Iyer elaborates, “Patterns that no single team would see on its own begin to surface when you can bring product, marketing, sales, and customer data together in context.” She adds that this is also where things tend to break if the foundation isn’t solid. Iyer notes, “If your definitions aren’t aligned or your data isn’t consistent, AI amplifies it. That’s why the context layer and decision logic underneath have to be airtight. That’s the work we’re focused on right now.”

The third phase is Automated Intelligence, which Iyer describes as high-confidence, governed processes that are continuously measured and improve over time. This is the phase where certain decisions can be made with minimal human intervention and AI starts to function more like a strategic partner to the business, notes Iyer.
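The multi-domain agent setup Iyer describes, where each business function has its own agent operating in the flow of work, can be sketched as a simple routing layer. Everything below is a hypothetical illustration; none of these function or domain names come from SurveyMonkey's actual system.

```python
# Hypothetical sketch of the "multi-domain agent" idea: each business domain
# has its own agent, and a thin router sends a question arriving from the flow
# of work (e.g. a Slack message) to the right one. All names are invented.
from typing import Callable, Dict


def marketing_agent(question: str) -> str:
    return f"[marketing] answering: {question}"


def sales_agent(question: str) -> str:
    return f"[sales] answering: {question}"


# Registry of per-domain agents; a real system would also cover product
# and customer success, per Iyer's description.
AGENTS: Dict[str, Callable[[str], str]] = {
    "marketing": marketing_agent,
    "sales": sales_agent,
}


def route(domain: str, question: str) -> str:
    # A production system would classify the domain from the message itself;
    # here the caller supplies it explicitly to keep the sketch minimal.
    agent = AGENTS.get(domain)
    if agent is None:
        return f"No agent registered for domain '{domain}'"
    return agent(question)
```

The point of the sketch is the shape, not the sophistication: the agents are specialized per domain, and the user never leaves the tool they already work in.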
Iyer highlights two notable examples of how SurveyMonkey is moving AI into production:
The first is a health score model used by SurveyMonkey’s customer success team. It continuously monitors signals, contextualizes them against account history and product behavior, and surfaces prioritized recommendations directly to customer success teams, in the tools they already use. Iyer explains, “The human still makes the final call, but they’re doing it with far better context, at the moment it matters. It’s a simple example of what it means to bring intelligence to where work actually happens.”

The second is what SurveyMonkey calls profile pulse, an agent the sales team uses. Before customer meetings, sales teams had been investing significant time preparing for each account. As Iyer notes, this level of context is critical, but it becomes harder to scale across a large customer base. The profile pulse agent handles that now, directly in Slack, which account executives (AEs) are already using. Iyer comments, “The agent pulls from multiple sources, synthesizes the information, and delivers a structured brief that the AE can refine and use immediately.”
Iyer concludes, “Both of these are examples of the same shift: moving data and AI out of experimentation and into dependable infrastructure that drives measurable business outcomes. The governance infrastructure and trust models we’re building today are what will make this level of automation safe and effective when we reach it.”
Ensuring Business Value from Data and AI Investments at SurveyMonkey
Measuring the business value of its investment in data, analytics, and AI is part of every business decision at SurveyMonkey. As Iyer explains, “Instead of measuring AI activity, we measure what actually changes in the business.” She continues, “The question I consistently ask my team is simple: what’s different because of what we built? I’m not asking how many models we shipped or experiments we ran. I am asking whether it’s driving outcomes the business can see and use.”
SurveyMonkey has developed what the company refers to as its ATV model for building toward business value: Adoption first, then Trust, then Value, where:
Adoption is the foundation that sets the infrastructure: getting data flowing reliably across the business, with the right governance and privacy controls in place.

Trust comes next: shared definitions, including the semantic layer, the business glossary, and the enterprise metrics people actually believe in.

Value is manifested in better decisions, faster workflows, and real business impact.
To keep activities grounded in business outcomes, SurveyMonkey applies a framework called GET, which stands for Growth, Effectiveness, and Trust:
Growth is demonstrated by revenue and new opportunities.

Effectiveness is measured by productivity, efficiency, and total cost of ownership.

Trust means risk reduction and platform credibility that drives adoption across the business. How often are AI-driven recommendations acted on without being overridden? How does that change over time? Iyer notes, “This is one of the clearest signals of whether your AI is actually working or just running.”
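The trust signal Iyer calls out, the share of AI recommendations acted on without a human override, tracked over time, is straightforward to compute. The sketch below is a hypothetical illustration with invented event records, not SurveyMonkey's instrumentation.

```python
# Hypothetical calculation of Iyer's trust signal: the fraction of AI-driven
# recommendations accepted without a human override, tracked per period so a
# rising trend is visible. Event records are invented for illustration.

def acceptance_rate(events: list) -> float:
    # Each event: {"period": "2024-Q1", "overridden": bool}
    if not events:
        return 0.0
    accepted = sum(1 for e in events if not e["overridden"])
    return accepted / len(events)


def rate_by_period(events: list) -> dict:
    # Group events by period, then compute the acceptance rate for each;
    # a rate rising over time suggests the AI is "working, not just running".
    periods = {}
    for e in events:
        periods.setdefault(e["period"], []).append(e)
    return {p: acceptance_rate(evts) for p, evts in sorted(periods.items())}
```

For example, one override and one acceptance in Q1 followed by only acceptances in Q2 would show the rate climbing, which in Iyer's framing is trust being established over time.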
Iyer notes that every initiative goes through a 30/60/90-day value review. She explains, “Instead of progress updates, we focus on decision points. If something isn’t delivering value on day 30 or 60, we adjust or stop.” Iyer continues, “Every major initiative is anchored to a metric a business leader actually cares about, whether that’s revenue impact, cost reduction, time-to-insight, or conversion improvement. Every AI initiative is tied to one of those outcomes before it is resourced.”
“This is a great example of the ATV model playing out at the human level: adoption happens through good experiences, trust gets established over time, and value follows from both,” notes Iyer. “That’s safer and more sustainable, and it makes adoption stick. It compounds over time.” Iyer concludes, “Think about the monetization, the AI capabilities, and the data products. That’s the gap we see across most enterprises today. Most companies try to jump straight to value. But if people aren’t using the data consistently, or don’t believe in it, you don’t get there.”
Enabling Organizational Adoption of AI at SurveyMonkey
Data and AI business adoption begins with organizational leadership at SurveyMonkey. “Leadership alignment is one of the real competitive advantages in what we’ve been able to build,” says Iyer. “When I talk about moving AI from experimentation to infrastructure, it’s a whole company agenda and not tied merely to the data team. And this only works when leadership is aligned around this outcome.” Iyer adds, “At SurveyMonkey, this alignment is very real. AI is treated as a company-level priority instead of a back-office function. You can see it in how our leaders engage with our work.”
One of the most effective capabilities that SurveyMonkey has built is its AI Ambassadors’ Program, a cross-functional group of AI champions embedded across the business. Iyer explains, “They’re testing tools, running experiments, and surfacing use cases within a structured, governed environment, ensuring that experimentation scales responsibly and aligns with our privacy and data standards.” She continues, “They bring real problems back to the data and AI team and help translate solutions into how work actually happens. It works as a two-way channel. We push new capabilities out, and they pull real problems back in.” Iyer adds, “This keeps the work grounded in what teams actually need, not just what is technically possible.”
Iyer notes that programs like AI Ambassadors rely on active executive sponsorship. “Leaders across marketing, sales, and customer success are permitting participation and asking for more, faster,” says Iyer. “This same alignment carries through into how we’re building AI into the product.” Iyer adds, “Building our semantic context layer, multi-agent orchestration, and AI observability requires sustained executive commitment. This is what allows us to move from isolated use cases to results that are scalable and repeatable.”
SurveyMonkey also runs a prompt-to-pattern pipeline, where individual experiments, once proven, get standardized and scaled across the business. Iyer explains, “Someone in marketing builds a useful prompt workflow. A customer success analyst automates something that was taking hours each week.” She continues, “When something works, we put the right governance around it and move it into the enterprise AI platform as a reusable asset. That’s how you scale culture without losing control.”
Iyer sums up, “We look at adoption across programs like our AI Ambassadors’ initiative, usage of our pulse agents, and how often individual experiments get scaled into reusable, governed assets. Those are leading indicators that the culture is actually shifting.” Iyer concludes, “Most organizations wait until significant investment has already been made before evaluating ROI. We measure while we can still change the outcome.”
Factoring in the Human Element at SurveyMonkey
Human opinions matter in the age of AI. “Judgment still sits with the human. We’ve removed the prep work that gets in the way of it. This is what makes our AI different,” says Iyer. “Our AI is trained on data and grounded in context shaped by 25+ years of survey science. That’s our advantage.” She notes, “Most organizations say data is a strategic asset. But data only becomes strategic when people trust it.”
Iyer elaborates, “AI only works when it’s operating on shared context and decision logic, not just pattern-matching on raw input. We use data to generate insights and build it directly into our product and how the business operates. Nothing gets better until you get curious.” She adds, “What we’ve invested in is making our context explicit, so core business definitions are consistent across teams and systems. Without this, AI just scales inconsistency faster. The impact has been measurable.” She sums up, “We’ve seen hours saved that can now be spent engaging with customers.”
“People build confidence over time, with humans in the loop at every meaningful decision point. As trust grows, so does the level of autonomy. The best decisions come from built-in listening,” reflects Iyer. She concludes, “An AI system nobody uses has no value, no matter how sophisticated it is!”