Ep. 30 - Scaling Product Security In The AI Era with Teja Myneedu

In this episode, we dive into the depths of application and product security with Teja Myneedu, Sr. Director, Security and Trust @ Navan. Teja shares his philosophy on achieving security at scale, discussing challenges and approaches, especially in the AI era. Teja's career spans over two decades on the front lines of product security at hyper-growth companies like Splunk. He currently operates at the complex intersection of FinTech and corporate travel, where his responsibilities include securing financial transactions and ensuring the physical duty of care for global travelers.

Key Takeaways

Below are some key takeaways from this episode:

  • Evolving Stance on Security by Obscurity - Teja noted that his professional opinion on security by obscurity has changed over time. He now believes that "security by obscurity is not a bad thing" if it helps protect the enterprise. The goal should be to maximize protection incrementally, embracing "good improvement over perfect any day," and maintaining a sense of urgency to secure the perimeter now rather than delaying fixes while striving for an ideal solution.

  • Prioritization and "Band-Aid" Fixes - With attackers increasingly leveraging AI to quickly discover surface area, Teja asserts that it is crucial to take immediate defensive action, such as deploying a Web Application Firewall (WAF) rule, to "plug the bleeding" right away, even if it risks providing justification for not addressing the underlying root cause.
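As an illustration of the kind of "band-aid" Teja describes, a WAF can virtually patch a vulnerable endpoint while the root-cause fix is still in flight. The sketch below uses ModSecurity rule syntax; the endpoint path and rule ID are hypothetical, and a real rule would be scoped to the specific exploit pattern rather than blocking the whole path.

```
# Hypothetical virtual patch: deny requests to a vulnerable endpoint
# until the underlying code fix ships. Path and id are placeholders.
SecRule REQUEST_URI "@beginsWith /api/v1/export" \
    "id:100001,phase:1,deny,status:403,log,msg:'Virtual patch: export endpoint'"
```

The trade-off Teja flags applies directly here: a rule like this stops the bleeding in minutes, but it can also become a reason the underlying vulnerability never gets fixed.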

  • Flawed Prioritization - Prioritization is viewed as inherently flawed, serving mainly to match allocated time with available work, and should be customized to a company's specific risk profile (e.g., availability is paramount for a payment gateway) rather than relying on industry standards.

  • The Context Problem - Teja mentioned that the primary difficulty for security engineers writing fixes is not the security vulnerability itself, but the lack of tribal knowledge about downstream dependencies, which can lead to broken service-to-service calls or other unintended consequences. AI promises to bridge this gap by enabling engineers to discover organizational context and create PRs that are closer to production-ready.

  • Gathering Context - The majority of context (80%) can be derived from the code repository and deployment infrastructure. The remaining, harder 20% involves organizational knowledge like ownership, team changes, product intent, and business case prioritization.

  • Novel Threats from LLMs and Agents - The integration of LLMs introduces new risks beyond the known threats like prompt injection. The primary novel concern is the blurring of the trust boundary as LLMs are used as decision-making engines that dynamically determine business logic.
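One common way to manage the blurred trust boundary Teja describes is to treat the model's "decision" as untrusted input: the LLM may propose the next action, but nothing executes until the proposal is validated against a fixed allow-list. A minimal sketch, with hypothetical action names:

```python
# Treat LLM output as untrusted input: the model proposes an action,
# but only actions on a fixed allow-list ever reach business logic.
# Action names here are hypothetical.
ALLOWED_ACTIONS = {"lookup_booking", "issue_refund", "escalate_to_human"}

def execute_llm_decision(llm_output: str) -> str:
    """Validate a model-proposed action before it crosses the trust boundary."""
    action = llm_output.strip().lower()
    if action not in ALLOWED_ACTIONS:
        # Fail closed: an unrecognized "decision" is rejected, not executed.
        raise ValueError(f"Rejected unrecognized action: {action!r}")
    return action
```

The point is that the trust boundary stays where it always was: model output is handled like any other external input, even when it is shaping business logic.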

  • Authorization Challenge - This shift reignites and amplifies the challenge of identity and authorization. Authorization, already a complex problem in microservices architectures, is put "on steroids" when an agent acts on a user's behalf with delegated access and privileges.
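The delegation problem above can be sketched as an intersection check: an agent acting for a user should never exercise more privilege than the delegating user holds, so the effective grant is the intersection of the user's scopes and the agent's delegated scopes, not their union. A minimal sketch with hypothetical scope names:

```python
# Delegation-aware authorization sketch: an agent's effective privileges
# are the intersection of what the user holds and what was delegated.
# Scope names are hypothetical.
def agent_is_authorized(user_scopes: set[str],
                        agent_scopes: set[str],
                        required_scope: str) -> bool:
    """Allow only if both the user's grant and the agent's delegated
    grant include the required scope."""
    effective = user_scopes & agent_scopes
    return required_scope in effective
```

This is why agents amplify the problem: every authorization decision now has two principals to reason about instead of one.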

We hope you tune in and, if you like the episode, please do subscribe!


If you like the content and don't want to miss out on new posts, enter your email and hit the Subscribe button below. I promise I won't spam. Only premium content!