In the high-stakes world of global finance, innovation is often viewed through the lens of speed. However, for Anant Somvanshi, the most critical component of the current technological revolution isn’t just how fast we can deploy Artificial Intelligence, but how safely and responsibly we can do so. With over 20 years of experience at the intersection of quantitative analysis and risk management, Somvanshi has become a leading voice in the movement for Trusted AI.
I sat down with Anant to discuss his journey from statistical modeling to the helm of enterprise-wide AI governance, and why the “black box” of AI must finally be opened for the sake of the modern consumer.
The New Standard: Beyond Traditional Risk
Peter: Anant, you’ve spent your career in some of the largest financial institutions in the world. How has the nature of “risk” changed since you started?
Anant Somvanshi: When I began my career, risk management was primarily about numbers and historical patterns—predicting credit defaults or identifying transaction fraud using traditional statistical models. Today, the landscape is fundamentally different. We aren’t just managing financial risk; we are managing algorithmic risks and dealing with black-box nature of models.
As we’ve moved into the era of Generative AI, the risks have become more nuanced. We are now dealing with potential “hallucinations,” where a model generates false information with total confidence, or “prompt injection,” where external actors try to manipulate a model’s output. My role is to ensure that as the industry adopts these tools, we have a centralized governance framework that is every bit as sophisticated as the technology itself.
Peter: You talk a lot about “Trusted AI.” That’s a term we hear frequently, but what does it look like in practice?
Anant Somvanshi: To me, Trusted AI isn’t a vague concept; it’s a measurable set of standards. In my current work, I’ve operationalized what I call the Pillars of Trust: Security, Safety, Privacy, Fairness, Transparency, Explainability, Reliability, and Resiliency.
In practice, this means that before any AI system is deployed, it undergoes a rigorous Trusted AI Assessment. We test for bias—ensuring the model doesn’t unfairly disadvantage specific groups—and we demand explainability, meaning we must be able to describe why a model reached a certain conclusion. If a model’s decision-making process is a “black box” that we can’t explain to a regulator or a customer, then it’s not ready for production.
Bridging the Gap: From Data Scientist to Governance Leader
Peter: Your background is deeply quantitative, with a Master of Statistics and years spent as a Statistical Analyst. How does that technical foundation influence your approach to policy?
Anant Somvanshi: It’s vital. You cannot govern what you do not understand. My early years—whether it was analyzing clinical trial data in the healthcare sector or building fraud detection scorecards—gave me a “under the hood” perspective.
Because I’ve built these models myself, I know where the “weak points” usually hide. I can speak the language of the data scientists who are building the latest GenAI tools, but I can also translate those technical risks into business and regulatory terms for senior leadership. Governance isn’t just about saying “no”; it’s about providing a structured, repeatable path to “yes” that is defensible and safe.
Peter: You’ve had a significant impact on the industry’s response to fraud. How did that work prepare you for AI governance?
Anant Somvanshi: Fraud is the ultimate cat-and-mouse game. Whether I was managing application fraud for credit card portfolios or building infrastructure to mitigate credit abuse, the goal was always the same: stay one step ahead of the threat while protecting the customer experience.
I’ve spent years providing “independent challenges” to model assumptions. In the world of AI, that mindset is more important than ever. We need to be the “skeptical peer” to the technology—constantly asking, “What are the limitations? What happens when this fails?” That proactive stance is what allowed me to lead large-scale transformations, like the 150-member Analytics Center of Excellence I helped establish earlier in my career.
The Future: Ethics as a Competitive Advantage
Peter: Looking ahead, where do you see the industry going? Is there a tension between innovation and regulation?
Anant Somvanshi: I actually think the two are complementary. In a regulated industry, trust is your most valuable asset. If customers don’t trust the AI-driven advice or services they receive, they will leave.
The future belongs to the institutions that can scale AI adoption responsibly. This means moving toward Responsible AI—frameworks that account for stereotyping and harmful content right from the design phase. We are moving away from ad-hoc oversight and toward institutionalized governance bodies—committees and risk panels that ensure cross-functional alignment between legal, compliance, and technology teams.
Peter: Finally, what is the most rewarding part of your work?
Anant Somvanshi: It’s the ability to empower a whole enterprise to use a new technology with confidence. When we deliver training and awareness programs that help employees understand these standards, we aren’t just checking a box. We are building a culture of accountability. To see a complex, high-performing AI system go live and know it has been vetted against the highest ethical standards—that is incredibly fulfilling.