“In 2026, the biggest shift isn’t that systems are more complex—it’s that directors are expected to understand what that complexity means for people’s rights, and they can’t hide behind jargon anymore,” states Modupe Akintan, a Privacy and AI Engineer whose work now stretches from technical architectures to board‑level conversations about risk and accountability. Her formulation captures a turning point in corporate governance: AI and data‑driven systems have moved from back‑office tools to strategic assets, and with that move, cybersecurity and privacy have been recast as questions of fiduciary duty as much as network defense.
For Modupe, who trained in computer and network security at Stanford before moving into roles at Apple, Amazon, and policy and standards bodies, the task is to turn highly technical concerns into something directors can interrogate and act on. “If a board can’t ask informed questions about how an AI system uses data, or who’s accountable when it fails, then we haven’t done our job as engineers,” she says. It is this insistence on translation that has made her an increasingly prominent figure in a field trying to keep pace with both regulatory scrutiny and technological acceleration.
From Lab Work To Risk Language
Modupe’s field of expertise sits deliberately at the intersection of privacy engineering, AI governance, cybersecurity, and technology policy. Rather than treating these as separate domains, she approaches them as facets of a single problem: how to identify, assess, and mitigate privacy, security, and societal risks as AI‑enabled systems are designed, deployed, and scaled. That problem has become more pressing as regulators move toward principle‑based, risk‑oriented frameworks that expect organizations to justify not only what their systems do, but why they do it that way.
Her early research at Stanford’s Empirical Security Research Group focused on third‑party risk management, evaluating how vendor scoring models translate into real‑world trust decisions. In practice, those scores influence procurement, outsourcing, and even regulatory disclosures, yet the methodologies behind them are often opaque. “We found that a lot of ‘risk understanding’ boiled down to accepting someone else’s model without really interrogating it,” she recalls. That experience shaped her conviction that risk metrics must be explainable not only to security teams but to executives and boards who ultimately own the consequences.
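To see why such scores can obscure as much as they reveal, consider a minimal sketch of the kind of weighted scoring model many vendor-rating services use. Every name, weight, and threshold below is a hypothetical illustration, not the methodology of any particular provider or of the Stanford group's research.

```python
# Hypothetical vendor risk score: a weighted average of observable
# signals, collapsed into a letter grade. Every number here is an
# editorial choice by whoever built the model -- exactly the kind of
# embedded judgment that often goes uninterrogated.

FACTOR_WEIGHTS = {
    "patching_cadence": 0.40,   # why 0.40? the model's author decided
    "tls_configuration": 0.25,
    "breach_history": 0.20,
    "dns_hygiene": 0.15,
}

def vendor_risk_score(signals: dict[str, float]) -> float:
    """Combine per-factor scores (0.0 = worst, 1.0 = best) into one number."""
    return sum(FACTOR_WEIGHTS[f] * signals.get(f, 0.0) for f in FACTOR_WEIGHTS)

def letter_grade(score: float) -> str:
    """Collapse the score into the single grade a procurement team sees."""
    if score >= 0.85:
        return "A"
    if score >= 0.70:
        return "B"
    return "C"

score = vendor_risk_score({
    "patching_cadence": 0.9,
    "tls_configuration": 0.8,
    "breach_history": 0.5,   # a past incident, heavily discounted by the weights
    "dns_hygiene": 0.7,
})
print(letter_grade(score))  # "B" -- the weights, not the incident, drive the outcome
```

A board that sees only the "B" has implicitly accepted the weights, which is precisely the dynamic her research flagged.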
Technical Mastery In An AI Governance Era
The decade’s defining trend has been the convergence of cybersecurity, privacy, and AI governance into a single, board‑visible surface. Guidance now frames AI governance as a set of frameworks, policies, and practices meant to ensure responsible, safe, and compliant development and use of AI systems, with explicit emphasis on human rights, accountability, and robust security controls. Boards are urged to assign clear responsibility for AI oversight, embed AI risks in enterprise risk registers, and demand policies that spell out acceptable use, monitoring, and escalation paths.
Modupe’s technical background allows her to meet these demands with specificity. With a master’s degree in computer science specializing in computer and network security, a privacy engineering certificate, and foundational security certifications, she operates fluently at the protocol, architecture, and control design levels. Yet she is quick to note that technical mastery must be paired with governance literacy. “We’re past the era when you could just say ‘we encrypted it’ and consider the conversation over,” she says. “Encryption, access controls, logging—those are ingredients. Boards need to know what story those ingredients tell about accountability.”
High-Level Privacy Leadership Inside Big Tech
At Amazon, Modupe serves as a Privacy and AI Engineer, focusing on high-level privacy, AI governance, and risk management for data‑driven systems. She describes her remit in deliberately broad terms, steering clear of specific internal tools or proprietary systems: translating regulatory and compliance requirements into practical implementation guidance. The work, she suggests, involves helping teams understand how obligations from data‑protection law, AI regulations, and security standards should shape design decisions long before a system reaches production.
As a member of the Cloud Security Alliance’s AI Safety and Data Privacy Engineering Working Group, she helps develop privacy‑by‑design guidance across the machine‑learning lifecycle, aligning technical best practices with emerging AI safety initiatives and data privacy standards. The CSA’s AI Safety Initiative, which draws in government agencies and major cloud providers, aims to create a “north star” for AI best practices that can complement public regulation. “It’s not enough to have one company doing the right thing in isolation,” she says. “Boards need confidence that there are shared baselines—industry standards they can point to when they challenge their own organizations.”
Policy Fellowship And The Politics Of Risk
Modupe has also built a substantial policy portfolio. As Director of Partnerships at the Paragon Policy Fellowship, she worked to connect technologists with policymakers, scoping applied projects and collaborating with government partners on issues such as AI governance, surveillance, and platform accountability. The fellowship’s description of itself as a hands‑on, project‑based experience reflects her view that technical expertise must be tested against real institutional constraints.
She is also a Fellow of CHAIRES, which focuses on AI, human rights, and emerging technologies, and a member of the Center for AI and Digital Policy’s AI Policy Clinic, where she contributes to analysis and recommendations on global AI governance. Those roles place her in conversations where risk is framed as more than just a probability‑times‑impact calculation; it encompasses structural harms, inequities, and the long‑term consequences of embedding AI into public services and critical infrastructure.
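The calculation being pushed past is the classic expected-loss formula. A minimal sketch, with invented figures, shows both what it captures and what it cannot express:

```python
# The classic risk formula: expected annual loss = probability x impact.
# Both inputs below are illustrative, not drawn from any real assessment.
breach_probability = 0.05          # 5% chance per year
breach_impact_usd = 2_000_000      # direct cost if the breach happens

expected_loss = breach_probability * breach_impact_usd
print(f"${expected_loss:,.0f} per year")   # $100,000 per year

# What the single number cannot express: who bears the harm, whether it
# compounds over time, and whether it is reversible. A $100,000 expected
# loss looks identical whether the exposure falls on the company or on
# the people whose data sits in the system.
```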
How Technical Voices Reach The Boardroom
Recent guidance on AI oversight makes explicit what was previously vague: boards are expected to understand, at a high level, how AI systems operate, the risks they pose, and how governance structures manage those risks. Analysts note that board oversight of AI has increased sharply since 2024, but also warn that relying solely on audit committees may not provide the proactive, technical scrutiny required for high‑impact AI deployments. Organizations are urged to create dedicated AI or technology governance committees and ensure that directors have sufficient expertise to challenge management on issues such as bias testing, data protection, and security vulnerabilities.
Modupe sees privacy engineers and similar specialists as conduits in this process. On the “Working in Tech” podcast, she described part of her job as helping companies understand how their privacy structures are set up and how to communicate them to leadership. “You need people who can sit with engineers one day and with executives the next, and not change the facts when the audience changes,” she says. In practice, that means turning technical controls into narratives about reduced regulatory exposure, improved incident response, and more credible public commitments.
A Critic’s View: Influence Or Insulation?
Yet the very presence of technically sophisticated privacy experts in board conversations has prompted a counter‑argument. “There is a risk that privacy engineers become a kind of insulation layer for boards,” warns a governance advisor involved in AI and cybersecurity briefings, who asked not to be named to speak freely. “Directors can feel reassured by the language of frameworks and working groups without fully confronting whether their business models are compatible with the privacy promises they are making.”
The critic points to reports showing that AI‑related risk has become a clear operational concern linked to disclosures, litigation, and reputational damage, while board oversight structures remain uneven. Some companies have created dedicated AI committees; others still rely on periodic updates to bodies designed for retrospective financial review.
Changing Defaults, Not Just Language
Modupe acknowledges the concern but argues that it misreads what meaningful influence looks like. “If my role is just to provide nicer language around the same risk, then I’ve failed,” she says. “The whole point of bringing technical mastery into governance is to make certain options harder—to change the defaults so that the easiest path is also the most responsible one.” In her view, real board‑level influence manifests in decisions such as shortening data‑retention windows, declining opaque data‑sharing deals, or delaying the deployment of AI features until governance safeguards are in place.
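What "changing the defaults" can look like in practice is a retention policy that fails closed: data expires unless someone documents a reason to keep it. The sketch below is a hypothetical illustration of that pattern, not a description of any system she has worked on.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical retention default: short retention applies automatically,
# and any extension requires an explicit, documented exception. Doing
# nothing is the most privacy-protective path.

DEFAULT_RETENTION = timedelta(days=30)

@dataclass
class RetentionException:
    dataset: str
    extra_days: int
    justification: str   # must be filled in and reviewed by governance
    approved_by: str

def expiry(created_at: datetime, exception: RetentionException | None = None) -> datetime:
    """Return when a record must be deleted. Without an approved
    exception, the 30-day default applies on its own."""
    keep_for = DEFAULT_RETENTION
    if exception is not None:
        keep_for += timedelta(days=exception.extra_days)
    return created_at + keep_for

record_created = datetime(2026, 1, 10, tzinfo=timezone.utc)
print(expiry(record_created))  # 2026-02-09: deletion happens unless someone acts
```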
Her service on conference program committees, IEEE initiatives, and AI safety working groups is part of that effort to move the baseline. By helping shape what counts as best practice, she aims to give boards something more concrete than aspiration to latch onto. “Standards, charters, risk methodologies—these are levers,” she says. “They let you say, ‘this isn’t just my opinion; this is where the field is going.’”
What Influence Is For
Asked how she measures whether technical mastery has truly translated into boardroom‑level privacy influence, Modupe’s answer is characteristically spare. “Influence only matters if it shows up in the systems people actually use,” she says. “If, five or ten years from now, ordinary users can move through AI‑driven services without feeling constantly watched or powerless, that won’t be because boards heard a few good presentations—it will be because they said no at the right moments.”
For her, the endgame is not to make directors fluent in every technical detail, but to ensure they understand the stakes well enough to demand proof rather than promises. “Technical mastery should make it impossible to hide the trade‑offs,” she reflects. “If I can help a board see exactly what they’re signing off on—who it protects, who it exposes—then privacy stops being a line in a report and becomes a real constraint on how we build.”
