Featuring Dharmateja Priyadarshi Uddandarao, Senior Data Scientist at Amazon and expert in advanced behavioral analytics.
Introduction
In the era of big data, human behavior can increasingly be modeled and predicted through analytics. Behavior analytics refers to the systematic study of human actions via data collection and analysis. By leveraging statistics and machine learning, organizations are transforming qualitative behavioral insights into actionable economic inputs – turning clicks and interactions into forecasts of revenue, churn, and other business KPIs. “We can now quantify patterns in customer behavior and simulate outcomes, rather than relying on intuition,” says Dharmateja Priyadarshi Uddandarao, a data science expert leading this charge. These techniques are revolutionizing how companies understand consumers and optimize their P&Ls (Profit & Loss statements) by informing product strategy and decision-making.
Dharmateja Priyadarshi Uddandarao exemplifies this new breed of analytics leader. He holds a Master’s degree in Analytics with a major in Statistical Modeling from Northeastern University and a bachelor’s degree in Computer Science from NIT Trichy. A seasoned practitioner in advanced analytics and data science, Mr. Uddandarao has held key positions at Amazon, Aetna, and Capital One. His work spans e-commerce, finance, and healthcare analytics, all centered on using data-driven models to deliver economic value for the business.
We sat down with Dharmateja Uddandarao to discuss his work on counterfactual behavior modeling, and how generative AI combined with causal graphs enables businesses to simulate interventions (like UI changes, pricing shifts, or feature rollbacks) and forecast consumer responses. In this interview, Mr. Uddandarao shares insights on the science behind the framework, its real-world applications, and why it marks a significant innovation in behavioral analytics.
Ethan Lee: Human behavior has traditionally been seen as difficult to quantify. How can data science and statistical techniques help model human behavior in a business context? Why is this so significant for companies today?
Dharmateja Uddandarao: Great question. Modeling human behavior is now very feasible because of the vast amounts of interaction data we collect and the advances in AI. By applying data science, we can find patterns that predict what users will do next – almost like turning consumer psychology into a quantitative model. This is hugely significant for businesses: if you can anticipate and understand customer actions, you can design better products and interventions. For example, instead of guessing whether a new feature will engage users, a company can use data-driven models to predict it. It moves decision-making from gut instinct to evidence-based forecasts. And importantly, it makes behavior an economic input – something you can factor into revenue projections or customer lifetime value. In short, analytics lets businesses treat user behavior as a science, which means fewer surprises and more optimized outcomes.
Ethan Lee: Can you explain in simple terms how forecasting user behavior works and what makes it unique?
Dharmateja: Absolutely. The core idea is to marry causal inference with generative AI. We start by building a structural causal model – essentially a directed graph of cause-and-effect relationships between key factors: user interactions, product features, engagement metrics, and so on. This causal graph encodes our domain knowledge (for example, “a faster onboarding flow causes higher user activation” or “showing personalized recommendations increases time spent”). Once we have that, we introduce a transformer-based generative model on top of it. This generative AI (similar to the models behind chatbots and GPTs) is trained on historical user interaction data but conditioned on the causal variables. In practice, that means the AI can generate realistic user behavior sequences, like simulating a series of user actions over time, for a given hypothetical scenario.
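To make that architecture concrete, here is a minimal sketch in PyTorch of a causal graph paired with a sequence model conditioned on the intervenable causal variables. The graph edges, variable names, and model dimensions are illustrative assumptions for this article, not the production system Mr. Uddandarao describes.

```python
import torch
import torch.nn as nn

# Illustrative causal graph (assumed domain knowledge, not a production graph):
# each key lists the variables it directly influences.
CAUSAL_GRAPH = {
    "onboarding_steps": ["user_activation"],
    "recs_widget":      ["time_spent"],
    "monthly_price":    ["churn"],
    "user_activation":  ["time_spent", "retention_7d"],
    "time_spent":       ["retention_7d"],
}

class CausalConditionedTransformer(nn.Module):
    """Sequence model over user actions, conditioned on the intervenable
    causal variables (here: onboarding steps, widget flag, monthly price)."""
    def __init__(self, n_actions=50, n_causal_vars=3, d_model=64):
        super().__init__()
        self.action_emb = nn.Embedding(n_actions, d_model)
        # Project the causal-variable vector into the embedding space and
        # prepend it as a conditioning token.
        self.causal_proj = nn.Linear(n_causal_vars, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.next_action = nn.Linear(d_model, n_actions)

    def forward(self, actions, causal_vars):
        # actions: (batch, seq_len) action ids; causal_vars: (batch, n_causal_vars)
        cond = self.causal_proj(causal_vars).unsqueeze(1)      # (batch, 1, d_model)
        x = torch.cat([cond, self.action_emb(actions)], dim=1)
        h = self.encoder(x)
        return self.next_action(h[:, -1])   # logits for the next user action
```

The design choice to carry the causal variables as a conditioning token is what lets the same trained model generate behavior under different hypothetical scenarios simply by changing that vector.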
Ethan Lee: What kind of scenarios or interventions can we simulate? Could you give a couple of concrete examples of how a product team might use this?
Dharmateja: There are many practical use cases. Think of any change a product team might be considering – our framework lets you test it virtually first. For example, user onboarding is crucial for apps. Suppose a mobile app is debating a change to its onboarding flow (say, reducing the number of steps to sign up). We can simulate that onboarding flow change and forecast its impact on user activation and 7-day retention. The model would generate user journey data under the new onboarding design and predict metrics like activation rate or drop-off. This helps the team see if the simplified flow would likely improve retention or if there might be unintended consequences.
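As a hedged sketch of that kind of what-if rollout, the snippet below builds on the model above; the `simulate_metric` helper, the action ids, and the scenario values are hypothetical names and numbers chosen for illustration.

```python
import torch

model = CausalConditionedTransformer()   # in practice, trained on historical logs

def simulate_metric(model, causal_vars, target_action, n_users=10_000, horizon=30):
    """Roll out simulated user journeys under a fixed causal scenario and
    return the share of users whose journey contains target_action."""
    actions = torch.zeros(n_users, 1, dtype=torch.long)  # start-of-journey token
    cond = causal_vars.expand(n_users, -1)
    with torch.no_grad():
        for _ in range(horizon):
            logits = model(actions, cond)
            nxt = torch.distributions.Categorical(logits=logits).sample()
            actions = torch.cat([actions, nxt.unsqueeze(1)], dim=1)
    return (actions == target_action).any(dim=1).float().mean().item()

ACTIVATED = 7  # assumed action id for "user activated"
# do(onboarding_steps = 3) vs. today's 5-step flow; the scenario vector is
# [onboarding_steps, recs_widget on/off, monthly_price], all values illustrative.
baseline = simulate_metric(model, torch.tensor([[5.0, 1.0, 9.99]]), ACTIVATED)
shorter  = simulate_metric(model, torch.tensor([[3.0, 1.0, 9.99]]), ACTIVATED)
print(f"Predicted activation lift from the shorter flow: {shorter - baseline:+.1%}")
```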
Another example is feature rollbacks or removals. Imagine an e-commerce site introduced a new recommendation widget on the homepage. It’s supposed to increase engagement, but you worry it might also be distracting users from making purchases. We can effectively ask, “What if we removed that recommendation feature?” The causal generative model would simulate user behavior in the absence of the widget and estimate the difference in purchase rate or time spent. If the simulation shows, say, a 2% sales lift when the feature is removed, that’s a clue something about the widget is negatively impacting conversions. In fact, in one of my studies we noted that individual UI elements in a larger product launch can have unintended negative effects even if the overall launch is positive.
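The rollback question maps onto the same sketch: flip the assumed widget flag in the scenario vector and compare an assumed purchase action, again with purely illustrative ids and values.

```python
# Counterfactual rollback: do(recs_widget = 0), holding everything else fixed.
PURCHASE = 12  # assumed action id for a completed purchase
with_widget    = simulate_metric(model, torch.tensor([[5.0, 1.0, 9.99]]), PURCHASE)
without_widget = simulate_metric(model, torch.tensor([[5.0, 0.0, 9.99]]), PURCHASE)
print(f"Estimated purchase-rate change if removed: {without_widget - with_widget:+.1%}")
```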
We can also simulate things like pricing shifts. Let’s say a streaming service considers raising its monthly price – we could model how users might respond (e.g. reduced sign-ups or higher churn) by generating behavior under the higher price scenario. All these what-if analyses let product managers preview the outcome of an intervention before committing to it.
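A pricing what-if follows the same pattern; the price points and the churn action id below are assumptions for illustration.

```python
# do(monthly_price = 12.99) vs. today's 9.99 (values assumed).
CHURN = 3  # assumed action id marking a cancellation event
churn_now    = simulate_metric(model, torch.tensor([[5.0, 1.0,  9.99]]), CHURN)
churn_raised = simulate_metric(model, torch.tensor([[5.0, 1.0, 12.99]]), CHURN)
print(f"Simulated churn: {churn_now:.1%} at $9.99 vs {churn_raised:.1%} at $12.99")
```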
Ethan Lee: How does this approach compare to traditional methods like A/B testing or standard forecasting? Does it replace them, or complement them?
Dharmateja: It’s a complement and an evolution of those methods. A/B testing is the gold standard for measuring causal impact by actually implementing changes for a subset of users. However, A/B tests can be expensive, time-consuming, and sometimes infeasible: you might not want to test a potentially harmful change on real users. Simulation fills that gap by letting you try dozens of virtual experiments quickly. For example, instead of running 10 different A/B tests for various price points or UI tweaks, you can simulate those 10 scenarios and narrow down to the most promising ones to test in reality. This saves time and reduces risk.
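That screening step is just a loop over scenario vectors; the candidates and their values below are illustrative, reusing the hypothetical `simulate_metric` helper from earlier.

```python
import torch

# Rank candidate interventions by a simulated target metric, then graduate
# only the leaders to live A/B tests.
RETENTION_7D = 9  # assumed action id
candidates = {
    "3-step onboarding": [3.0, 1.0,  9.99],
    "no recs widget":    [5.0, 0.0,  9.99],
    "price +$1":         [5.0, 1.0, 10.99],
    "price +$3":         [5.0, 1.0, 12.99],
}
scores = {name: simulate_metric(model, torch.tensor([v]), RETENTION_7D)
          for name, v in candidates.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:>18s}: simulated 7-day retention {s:.1%}")
```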
Compared to standard predictive forecasting, which might use time-series models or machine learning to project metrics, our approach provides causal insight. A typical forecast might tell you “sales will drop next quarter” but not tell you why or what to do about it. Because we model the drivers (via causal graphs) and can simulate changes, we can answer “sales will drop unless we improve feature engagement by X%”, or “if we discount prices, we could counteract the drop.” It’s a more prescriptive form of analytics. We also found the counterfactual generative method can be more accurate than traditional uplift models or forecasts, likely because it captures the true cause-effect signals rather than spurious correlations. In tests on web and app datasets, it outperformed conventional forecasting techniques in predicting user behavior under interventions. And importantly, it produces interpretable outputs – the causal graphs and pathways help explain why a metric would change, which is something neither black-box predictive models nor raw A/B test results alone usually give you.
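For a feel of what that interpretable, driver-level estimation looks like in practice, the sketch below uses the open-source DoWhy library with an assumed dataset and graph. It illustrates generic backdoor-adjusted effect estimation, not the specific framework discussed in this interview.

```python
import pandas as pd
from dowhy import CausalModel

df = pd.read_csv("user_metrics.csv")   # hypothetical per-user metrics log
model = CausalModel(
    data=df,
    treatment="feature_engagement",
    outcome="sales",
    # Assumed graph: price confounds both engagement and sales.
    graph="digraph { price -> feature_engagement; price -> sales; "
          "feature_engagement -> sales; }",
)
estimand = model.identify_effect()   # surfaces the causal pathway to adjust for
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print(estimate.value)                # expected sales change per unit of engagement
```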
Ethan Lee: Let’s talk about the bigger picture. How can techniques like this impact product strategy and business performance? Can it really change how companies make decisions?
Dharmateja: I believe so, yes. By having a tool to forecast consumer responses to potential actions, companies can be much more strategic. It’s like having a crystal ball for “if we build it, will they come?”. In terms of product strategy, this means teams can proactively design with outcomes in mind. You’re not just launching features and praying they work – you’re choosing the ones that the model predicts will hit your target metrics. That leads to more efficient use of resources (engineering effort, marketing spend, and so on), focusing on changes that drive impact.
For business performance and P&L, the effects are direct. Every product decision eventually shows up in revenue, costs, or customer value. Suppose our simulation suggests that a UI change could boost conversion by 3% – that translates to dollars on the bottom line, and a product manager can build a business case for it. Or conversely, if an idea looks likely to hurt engagement, the company avoids a potential loss. Over time, using such models could improve key metrics like customer retention, lifetime value, and profitability because you’re essentially optimizing the business with foresight.
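The business-case arithmetic behind a lift like that is straightforward; the traffic, conversion, and order-value figures below are illustrative assumptions.

```python
# Back-of-envelope business case for a simulated +3% relative conversion lift.
monthly_visitors    = 500_000
baseline_conversion = 0.020    # 2.0% of visitors purchase today (assumed)
avg_order_value     = 60.00    # dollars (assumed)
relative_lift       = 0.03     # the simulated UI change: +3% relative

baseline_revenue = monthly_visitors * baseline_conversion * avg_order_value
lifted_revenue   = baseline_revenue * (1 + relative_lift)
print(f"Projected incremental revenue: ${lifted_revenue - baseline_revenue:,.0f}/month")
# -> $18,000/month under these assumptions
```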
Ethan Lee: Thank you, Dharmateja, for sharing your insights. It’s fascinating to see how analytical techniques can turn human behavior into a strategic asset.
Dharmateja: Thank you – it was a pleasure to discuss this. I’m glad to see growing interest in bridging AI with causal thinking. That fusion is where a lot of the next big innovations will happen, and I’m excited to be part of it.
