AI in Executive Compensation: What Proxy Filings Reveal About Leadership Behaviors

[Figure: Three-panel framework diagram for AI in executive compensation, showing the S-curve change model, continuous whitewater adaptability, and selflessness (empathy, humility, and service).]

The proxy filings are coming in. Boards are paying executives for AI. The 2025-2026 DEF 14A season surfaced at least 14 S&P-listed companies that formally embedded AI-specific objectives into executive incentive programs. Those companies include Microsoft, Salesforce, IBM, Textron, and Ralph Lauren. Equilar’s analysis on the Harvard Corporate Governance Blog documents the trend in detail. Weightings range from 5% in annual cash incentive plans to 20% in long-term equity awards. Some companies replaced their ESG component with an AI adoption metric. Others introduced strategic multipliers tied directly to their AI platform strategy.

This is not a passing fad. AI in executive compensation is now a governance signal, and boards that haven’t addressed it will face increasing scrutiny from institutional investors within the next two proxy seasons. But the filings leave something out. Most of these programs measure AI output and adoption, not the leadership behaviors that make AI transformation actually work. That gap is where executive pay gets it wrong, and where HR and compensation leaders have the most important work ahead of them.

The 14 Companies at a Glance

| Company | Plan type | AI weighting/structure |
| --- | --- | --- |
| Microsoft (MSFT) | Annual + equity | AI platform shift as core program rationale; no standalone metric |
| Salesforce (CRM) | Annual + equity | Strategic multiplier tied to Agentforce and Data Cloud strategy |
| Qorvo (QRVO) | LTIP (PBRSUs) | 20% standalone objective: AI tools deployment for productivity |
| Juniper Networks (JNPR) | Annual (AIP) | 10% standalone: “Win the AI Opportunity” — AI clusters and data centers |
| Recursion Pharma (RXRX) | Annual | ~16.7% weighting: lead the data and AI revolution in therapeutic R&D |
| Cognizant (CTSH) | Annual (ACI) | ~10% strategic initiatives basket includes gen AI development |
| Ralph Lauren (RL) | Annual (STI) | AI scorecard replaces citizenship/sustainability as a strategic goal modifier |
| Unity Software (U) | Equity (RSUs) | Supplemental RSUs for Chief AI Officer tied to AI initiative leadership |
| Hyatt Hotels (H) | Annual (MBO) | AI modernization as an individual business goal; no public weighting |
| S&P Global (SPGI) | Annual | Data and technology category at 150% weighting; AI drives outperformance |
| Halozyme (HALO) | Annual | Operational efficiency goal: executing three AI initiatives |
| Textron (TXT) | Annual (2026) | 5% AI adoption and utilization, qualitative — replaces ESG component |
| IBM | LTIP | AI strategy execution linked to the relative TSR modifier |

Source: 2025-2026 DEF 14A proxy filings via SEC EDGAR; Equilar analysis on the Harvard Corporate Governance Blog, March 2, 2026.

What the Proxy Filings Actually Say About AI in Executive Compensation

The language across the 14 companies varies significantly. Textron will incorporate an AI adoption and utilization component weighted at 5% beginning in 2026, explicitly replacing the ESG component, and the Committee will assess performance qualitatively. Ralph Lauren is introducing AI metrics as a scorecard, replacing citizenship and sustainability objectives as the strategic goal modifier in its short-term incentive plan for FY2026. Qorvo established a standalone long-term incentive objective, weighted at 20%, focused on AI tool deployment to enhance organizational productivity. Salesforce restructured its entire incentive architecture around its Agentforce and Data Cloud strategy, linking executive cash and equity directly to AI execution.

These are not cosmetic changes. The compensation committee decisions behind them signal that boards now view AI in executive compensation as a strategic governance imperative, not a nice-to-have metric. Yet most programs share a common structural flaw: they measure adoption, deployment, revenue tied to AI products, or qualitative progress. What they rarely measure is whether the executive demonstrated the leadership behaviors required to sustain transformation beyond the initial implementation wave. Equilar’s March 2026 analysis on the Harvard Corporate Governance Blog documents verbatim DEF 14A language on AI in executive compensation from several of these same companies. AI-linked incentive design is accelerating across sectors. Read the full analysis: AI as a Performance Metric: What Companies Are Disclosing Now.

Furthermore, the Forbes employer rankings tell a revealing secondary story. Microsoft, Salesforce, IBM, and Hyatt also rank among the top 50 in Forbes America’s Best Large Employers 2026. Each has among the most sophisticated AI compensation links in the dataset. The correlation is not perfect, and causation cannot be claimed. Still, the pattern suggests that companies building AI into pay strategy may also be building cultures that attract and retain talent at scale.

The S-Curve Problem Boards Are Not Solving

Every compensation consultant knows the S-curve. An organization builds, dips briefly, ramps up, plateaus, and then declines. The instinct at every stage is to hold the line and optimize what is working before committing to the next curve. That instinct is lethal in an AI-driven environment.

The companies that survive transformation are not the ones that managed decline gracefully. They are the ones that started the next S-curve before they could see the plateau. That requires fundamentally different executive behavior. It also requires AI in executive compensation programs designed to incentivize it.

Most boards are still structuring AI incentives around the ramp phase: deploying tools, generating adoption data, and driving revenue from AI products. That framing treats AI-linked incentives as a deployment problem rather than a leadership problem. Qorvo’s 20% LTIP weighting, for example, focuses on “operational efficiencies and cost reduction,” a ramp-phase goal. That is reasonable for FY2025. By FY2028, however, any committee still measuring the same adoption objectives will have missed the behavioral inflection entirely. By then, the question will not be whether the executive deployed AI. The question will be whether they built an organization capable of moving to the next curve before the current one peaks.

Boards need to build that expectation now, while AI deployment still has momentum. Waiting until the plateau is visible in the numbers is too late.

Continuous Whitewater: The Behavior Boards Are Not Buying

I learned the concept of “continuous whitewater” years ago. Coined by organizational theorist Peter Vaill, it remains the best description I have encountered of what executive leadership looks like in a genuine transformation environment. The image is exact: you are never in calm water. Rapids do not end. What changes is whether your team fights the current or learns to navigate it, whether the organization treats every disruption as an emergency or as the operating environment.

AI transformation is continuous whitewater. When an AI in executive compensation program measures only quarterly deliverables, it is designed for calm water. The technology is not stabilizing. Workforce implications are not resolving. Moreover, the regulatory environment is evolving faster than most compensation committees can track. Building an executive team that thrives in that environment is categorically different from building one that executes against a static strategic plan.

Most executive incentive plans are structured for the latter. They define annual objectives, measure quarterly, and reward attainment. That structure is suited for a calm-water environment. Applying it unchanged to AI transformation produces a specific failure mode: executives optimize for the measurable deliverable and under-invest in the adaptive capacity that the next wave will require.

AI in executive compensation needs to reward continuous whitewater navigation, not just the completion of a specific AI initiative. IBM’s approach links AI execution to relative total shareholder return through a modifier structure, which comes closest to capturing this dynamic. Relative TSR over a multi-year period measures whether the executive built something durable. It is a stronger signal than any single-quarter AI deployment metric.

Selflessness as the First Behavior: Why It Is Harder Than It Sounds

If I were designing an AI transformation behavior framework, I would start with selflessness. Most compensation committees would look at me blankly if I said it.

Here is what I mean. Designing AI in executive compensation to reward selflessness requires a clear-eyed recognition. Transformation demands that executives hand power, information, and decision rights to people below them and to systems outside their control. It requires them to stop protecting functional empires, fund initiatives that shrink their own roles, and champion successors who will eventually outpace them. The executives who cannot do that will slow-walk AI transformation regardless of what the incentive plan says.

Selflessness in this context is not a soft cultural value. It is a specific, observable leadership behavior with measurable consequences. Does the executive fund an AI capability that reduces their team headcount and their budget authority? Do they actively develop the talent that will replace them? Do they share data and tooling across functions when doing so reduces their own operational control? These are decisions that incentive programs can track if boards are willing to ask the questions.

Compensation committees can score selflessness directly using three observable metrics. First, the percentage of the AI budget allocated to cross-functional initiatives that the executive does not own or control. Second, documented successor development plans for at least two direct reports with AI-technical skill profiles that exceed the executive’s own. Third, a reduction in functional headcount or budget authority directly attributable to AI deployment that the executive personally championed. Each of these is auditable. None requires subjective judgment. Together, they turn selflessness from a leadership philosophy into a compensation committee scoring rubric.
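To show how mechanically this rubric could work, here is a minimal sketch. The function name, point caps, and weightings are entirely hypothetical; they illustrate that the three metrics above can be combined into a single auditable score, not how any committee actually scores them.

```python
# Hypothetical selflessness scoring rubric -- illustrative only.
# Point caps and weights are assumptions, not drawn from any proxy filing.

def score_selflessness(cross_functional_budget_pct: float,
                       documented_successors: int,
                       championed_own_reduction: bool) -> float:
    """Combine the three auditable inputs into a 0-100 score."""
    score = 0.0
    # Metric 1: share of AI budget allocated to initiatives the
    # executive does not own or control (capped at 40 points).
    score += min(cross_functional_budget_pct, 40.0)
    # Metric 2: documented successor plans for direct reports with
    # AI-technical skills exceeding the executive's (15 pts each, max 2).
    score += 15.0 * min(documented_successors, 2)
    # Metric 3: self-championed reduction in headcount or budget
    # authority attributable to AI deployment (all-or-nothing, 30 pts).
    score += 30.0 if championed_own_reduction else 0.0
    return score

print(score_selflessness(35.0, 2, True))  # -> 95.0
```

Because each input is a documented fact rather than a judgment call, the same inputs always produce the same score, which is what makes the rubric defensible in a committee record.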

Cognizant’s approach is instructive here. Its Compensation Committee updated strategic initiatives to include a focus on “gen AI development and advancement” in a basket alongside other strategic priorities. That framing creates space to ask what AI advancement actually requires beyond adoption metrics. The committee has the latitude to assess whether the executive built the conditions for AI success, not just whether they hit a deployment milestone.

Building an AI Behavior Framework Into Executive Pay

The Governance Timing Window

The practical challenge for compensation committees designing AI in executive compensation programs is that behavior-based objectives are harder to defend to institutional investors than numerical metrics. Here is the nuance most boards are missing. Glass Lewis is the only major proxy advisor with a dedicated 2026 AI governance section; its 2026 Proxy Season Preview calls AI governance “the defining theme of the 2026 proxy season.” The ISS 2026 Americas Policy Updates contain zero AI references in the main 35-page document. BlackRock and Vanguard are equally indirect.

That means the scrutiny risk today is real but narrow. The window to build defensible AI behavior frameworks before prescriptive voting policies arrive is open. Boards that move now will be ahead of the scrutiny curve. Retrofitting incentive plan language when Glass Lewis formalizes expectations in 2027 or 2028 is a losing position. Qualitative assessments carrying material weight will draw scrutiny under existing pay-for-performance frameworks, but there are design principles that bridge that gap.

First, tie AI transformation behaviors to outcomes that are already measurable. Selflessness is difficult to score directly, but the outcomes of selflessness (cross-functional AI adoption rates, internal AI capability development, successor readiness) are measurable with reasonable precision. Build those as leading indicators within a broader AI performance objectives framework.

Designing for Six-Month Objective Resets

Second, abandon multi-year performance periods for AI behavior objectives. Rethink annual cycles, too. Here is the argument most compensation consultants are not making yet: AI transformation is not a slow technology absorption like email or the internet. Those were tools that augmented human workers while organizational structure stayed largely intact. AI is a workforce substitution event. An executive managing 20 people today may lead 5 humans and 15 AI agents within 18 months. Their organizational footprint, budget authority, decision rights, and headcount metrics can all shift dramatically inside a single performance year. A 3-year LTIP cycle locks the committee into measuring an organization that no longer exists. Even a 12-month window is likely stale before the review period closes.

Span of control is no longer a static number. Compensation frameworks built on annual headcount assumptions are already measuring the wrong organization.

The right design for AI in executive compensation: 6-month performance windows with documented, mandatory objective resets. Not optional. Mandatory. At each reset, the committee re-anchors objectives to the executive’s actual current workforce. How many humans, how many AI agents, and what decisions have shifted to autonomous systems? Each reset requires documented answers before new objectives are set. One governance guardrail is essential. The committee needs a documented rationale standard. It must distinguish a legitimate objective reset from an executive using rapid AI change as cover to escape accountability. Without that guardrail, 6-month resets become an exit ramp, not an agility feature.
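A minimal sketch of what a documented reset record and its rationale guardrail could look like, assuming a plain data record per reset. The field names, the 25-word rationale threshold, and the validation rule are my assumptions for illustration, not a prescribed governance standard.

```python
from dataclasses import dataclass, field

# Hypothetical 6-month objective-reset record -- illustrative only.
@dataclass
class ObjectiveReset:
    period: str                          # e.g. "2026-H2"
    humans: int                          # current human reports
    ai_agents: int                       # AI agents under the executive
    decisions_shifted: list = field(default_factory=list)  # now autonomous
    rationale: str = ""                  # committee-documented reason

def validate_reset(reset: ObjectiveReset, min_rationale_words: int = 25) -> bool:
    """Guardrail: reject a reset without a substantive documented
    rationale, so rapid AI change cannot become an accountability
    exit ramp."""
    return len(reset.rationale.split()) >= min_rationale_words

reset = ObjectiveReset(period="2026-H2", humans=5, ai_agents=15,
                       decisions_shifted=["tier-1 support routing"],
                       rationale="Reorg")
print(validate_reset(reset))  # -> False: one-word rationale fails the guardrail
```

The design point is that the workforce snapshot (humans, agents, shifted decisions) and the rationale travel together in one auditable record, so the committee can later show why each re-anchoring happened.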

Separating Deployment From Transformation Leadership

Third, separate AI deployment objectives from AI transformation leadership objectives. Deployment is a milestone. Transformational leadership is a sustained behavior pattern. Compensation committees that conflate them will reward the easy work and miss the hard work every time. Tools like SimplyMerit support structured documentation of evolving incentive frameworks. As objective definitions change across performance cycles, the record stays traceable and auditable, not buried in a spreadsheet from three cycles ago.

Finally, for organizations that are not public companies and lack a proxy disclosure requirement, the same principles apply. The 14 companies in the 2025-2026 DEF 14A filings are the governance advance signal. Private companies and nonprofit organizations should treat them as early warning data, not as a public-company-only concern. Understanding where the labor market is heading is equally essential context. See Labor Market Intelligence: An HR Guide for 2026 for the broader workforce picture.

Actionable Guidance for Smaller and Private Organizations

Not every company filing a DEF 14A is your client. If you lead HR or compensation strategy at a smaller or privately held organization, the large-cap proxy data still gives you a direct action map. Use these approaches now, before your company scales into the scrutiny window.

  1. Start simple and qualitative. Copy the Textron or Halozyme model: add a 5% to 10% AI adoption and utilization MBO or scorecard objective. Measure it through employee training hours, tools deployed, or documented cost savings. No complexity required.
  2. Swap, do not add. Follow Ralph Lauren’s lead and replace an existing metrics category with an AI scorecard. Zero new framework complexity, and it signals strategic alignment to any board or investor already reading proxy trends.
  3. Tie to productivity and efficiency. Qorvo’s framing works for any size organization: “deployment of AI tools to enhance productivity and reduce costs.” Use it directly. That language works in any employment agreement or annual incentive plan.
  4. Create role-specific equity for your AI lead. If you are hiring or promoting an AI-focused leader, Unity’s Chief AI Officer equity model is the right template. Structure supplemental RSUs around AI initiative milestones and long-term capability development.
  5. Use internal documents as your proxy filing. Even without SEC disclosure requirements, put the language in your internal compensation policy, employment agreements, or board minutes. Private equity investors and acquirers increasingly look for evidence that leadership accountability for AI transformation is formally structured.
  6. Nonprofit organizations: add one sentence. A short statement in Schedule O or Schedule J Part III is enough. It signals to grantmakers and regulators that your executive bonus pool includes progress on AI efficiency. One sentence is sufficient to establish the intent.
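To make the arithmetic behind points 1 and 2 concrete, here is a hedged sketch of how a small AI MBO component flows into an annual incentive payout. The target bonus, component names, weightings, and attainment levels are invented for illustration; they mirror the 5% to 10% swap guidance above, not any specific company's plan.

```python
# Illustrative annual incentive calculation with a 10% AI adoption
# component swapped in for an existing metrics category.
# All numbers are hypothetical.

def annual_incentive(target_bonus: float, weights: dict, attainment: dict) -> float:
    """Payout = target bonus x sum of (weight x attainment) per component."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return target_bonus * sum(weights[k] * attainment[k] for k in weights)

weights    = {"financial": 0.70, "strategic": 0.20, "ai_adoption": 0.10}
attainment = {"financial": 1.00, "strategic": 0.90, "ai_adoption": 1.20}

print(round(annual_incentive(200_000, weights, attainment), 2))  # -> 200000.0
```

Note how the 120% AI attainment offsets the 90% strategic result: a small weighting still moves the payout, which is exactly the signaling effect the swap is meant to create.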

Key Takeaways

  • At least 14 S&P-listed companies embedded AI-specific objectives into executive incentive programs in the 2025-2026 proxy season, with weightings from 5% to 20%.
  • Most proxy programs measure AI output and adoption, not the leadership behaviors that sustain transformation.
  • The S-curve problem demands that executives start the next change cycle before the current plateau is visible; few incentive programs reward that.
  • Continuous whitewater is the operating environment for AI transformation, and executive pay structures designed for calm-water environments will produce the wrong behaviors.
  • Selflessness (sharing power, funding disruptive capabilities, and developing successors) is the foundational behavior that AI transformation requires, and it is the one least likely to appear in a standard incentive plan.
  • AI is not tool adoption. It is workforce substitution. Compensation frameworks built on annual headcount assumptions and multi-year LTIP cycles are already measuring the wrong organization. Six-month windows with mandatory resets are the right design. A span of 20 can become 5 humans and 15 AI agents within 18 months.

Quick Implementation Checklist: Building AI Behaviors Into Executive Pay

  1. Audit the current executive incentive plan for AI-related objectives. Do any exist? If yes, assess whether they measure output, adoption, or behavior.
  2. Define the three to five leadership behaviors your AI transformation specifically requires (start with selflessness, adaptive capacity, and S-curve awareness).
  3. Identify measurable leading indicators for each behavior (e.g., cross-functional AI adoption rates, internal capability investment, successor readiness scores).
  4. Separate deployment milestones from transformation leadership objectives in incentive plan design.
  5. Replace multi-year performance cycles for AI behavior objectives with 6-month windows and mandatory documented resets. Re-anchor objectives at each reset to the executive’s actual current workforce composition, including the human-to-AI agent ratio.
  6. Build a qualitative assessment framework for the Compensation Committee that can withstand ISS scrutiny. Document the scoring rubric before the performance period begins.
  7. Schedule a proxy peer analysis for the next season to benchmark your AI compensation design against the emerging governance standard.

Frequently Asked Questions

For Compensation Professionals

Q: How many companies have formally added AI metrics to executive compensation programs?

A: Based on 2025-2026 DEF 14A proxy filings available on SEC EDGAR, at least 14 S&P-listed companies have embedded AI-specific objectives into executive incentive programs. These include Microsoft, Salesforce, IBM, Textron, Ralph Lauren, Qorvo, Juniper Networks, Recursion Pharmaceuticals, Cognizant, Unity Software, Hyatt Hotels, S&P Global, Halozyme, and Kforce. The metric weightings range from 5% to 20%, and most are still in early design stages.

Q: What is the most common structure for AI in executive compensation programs?

A: The most common structure is a qualitative strategic objective in annual incentive plans. Weightings typically fall between 5% and 16.7% within a broader strategic priorities basket. Some companies, like Qorvo, are also using objectives-based PBRSUs in their long-term incentive plans. Salesforce represents a more aggressive approach, restructuring the entire incentive architecture around its AI product strategy.

Q: Should AI objectives in executive pay be quantitative or qualitative?

A: Both are defensible, but the right choice depends on what you are measuring. AI deployment and adoption milestones can include quantitative metrics such as revenue from AI products, headcount trained, and tools deployed. Transformational leadership behaviors are better assessed qualitatively with a documented rubric. The risk of pure qualitative assessment is ISS scrutiny; the risk of pure quantitative assessment is that executives optimize for the metric without building sustainable capability. Therefore, a hybrid structure with quantitative guardrails and qualitative leadership assessment typically works best.

For Executives and HR Leaders

Q: What is “continuous whitewater” and why does it matter for AI transformation?

A: Continuous whitewater is a leadership concept coined by organizational theorist Peter Vaill, describing an operating environment of unending rapid change, one where there is no calm period between disruptions. In the context of AI transformation, it reflects the reality that the technology, workforce implications, and competitive dynamics are not stabilizing. Executives who thrive in continuous whitewater build adaptive organizations rather than optimized ones. Compensation programs that reward only plan attainment will systematically underperform in this environment.

Q: What is the S-curve problem in AI transformation leadership?

A: The S-curve describes the lifecycle of any growth strategy: an organization builds, ramps, plateaus, and declines. The executives who drive lasting transformation are those who begin the next S-curve before the current plateau is visible. In the context of AI, that inflection point is arriving faster than most compensation committees are designed to track. An executive managing 20 people today may be accountable for 5 humans and 15 AI agents within 18 months. Annual performance cycles measure an organization that may no longer exist by review time. That is why MorganHR recommends 6-month performance windows with mandatory objective resets. Each reset re-anchors objectives to the executive’s actual current workforce, not the one that existed when objectives were first set.

Q: How does selflessness function as a leadership behavior in AI transformation?

A: Selflessness in AI transformation means willingly reducing one’s own authority, budget control, or headcount in service of broader capability. Specifically, it involves funding AI initiatives that shrink the executive’s own team, championing successors who may outpace them technically, and sharing data across boundaries that previously protected functional empires. These behaviors are observable, have measurable downstream outcomes, and are directly correlated with AI transformation success. They are also notably the behaviors that standard incentive programs most consistently fail to reward.

Regulatory and Compliance Considerations

Q: Are there SEC disclosure requirements for AI-related executive compensation objectives?

A: There is currently no SEC rule specifically requiring disclosure of AI objectives within executive compensation programs. Companies that include AI metrics in incentive plans must disclose them in the CD&A section of the DEF 14A, consistent with Item 402 requirements under Regulation S-K. The SEC’s cybersecurity and human capital disclosure rules have also set a precedent for expanded technology-related disclosure. Companies should consult legal counsel on the appropriate level of specificity when disclosing qualitative AI performance assessments.

Q: What role does institutional investor scrutiny play in AI executive compensation design?

A: The landscape is more nuanced than most boards realize. Glass Lewis is the most direct voice. Its 2026 Benchmark Policy Guidelines include a dedicated “Artificial Intelligence” section. The 2026 Proxy Season Preview calls AI governance “the defining theme of the 2026 proxy season.” Glass Lewis focuses on board oversight and disclosure. Compensation plan design is not yet its target.

By contrast, ISS’s 2026 Americas Policy Updates contain zero references to AI in the main 35-page document. BlackRock’s 2026 guidelines reference “emergent and disruptive technology” only in the context of board advisory access, while Vanguard’s 2026 policies are entirely silent on AI. The practical implication is important. Qualitative AI objectives in executive pay will face scrutiny if they carry significant weight without a documented scoring rubric. Today’s pressure comes from Glass Lewis’s board oversight angle, not a dedicated compensation voting policy. That window will close. Compensation committees should therefore build and disclose a clear assessment framework now, before prescriptive policies arrive.

For Teams Using Compensation Technology

Q: How can compensation technology support the administration of AI transformation incentive objectives?

A: As executive incentive plan designs evolve, especially when AI objectives shift from deployment milestones to behavior-based leadership assessments, maintaining a clean, auditable record of objective definitions, scoring rubrics, and performance results becomes critical. Compensation administration tools like SimplyMerit support that discipline by removing the spreadsheet-based processes that make incentive plan documentation fragile and inconsistent across cycles. When a compensation committee needs to demonstrate to an institutional investor how AI performance was assessed three years ago, the answer should not be buried in a shared drive.


Call to Action

If your executive incentive program does not yet address AI transformation leadership, not just AI adoption, you are already behind the governance curve. Let’s talk. MorganHR designs AI in executive compensation programs that reward the behaviors transformation actually requires, not the milestones easiest to count. Contact us or request a SimplyMerit walkthrough to see how structured compensation administration keeps your incentive framework audit-ready as your objectives evolve.


About the Author: Laura Morgan

As a founder and owner of MorganHR, Inc., Laura Morgan has been helping organizations identify and solve their business problems through innovative HR programs and technology for more than 30 years. Known as a hands-on, people-first HR leader, Laura specializes in the design and implementation of compensation programs as well as programs that support excellence in the areas of performance management, equity, wellness, and more.