Your Chief Financial Officer asks a simple question: “Why does this position pay $85,000 when the survey shows $78,000?” Questions like this almost always trace back to differences in compensation research methods.
You confidently cite your source. Then she pulls up a different survey showing $92,000. Now she’s questioning everything you’ve presented for the past year.
Welcome to the compensation research crisis nobody’s talking about.
The compensation profession has a dirty secret: most practitioners have never learned how to actually research. Instead, they’ve learned to consume pre-packaged data and present it with authority. That worked fine when survey sources were limited and relatively aligned. Today, with dozens of data providers showing wildly different numbers for the same role, data regurgitation isn’t expertise—it’s a liability. Salary surveys can vary by 10-25% or more due to differences in data collection timing, participant pools, and inclusion of total rewards like equity (HRDataHub).
The education system failed compensation professionals on this front. Universities don’t teach data source evaluation. Certification programs focus on applying data, not validating it (WorldatWork, SHRM). WorldatWork’s Certified Compensation Professional (CCP) emphasizes analytics but not in-depth source critique, while SHRM certifications cover HR principles with limited emphasis on data validation. HR technology delivers insights without revealing methodology. The result? A profession full of smart people who know what the data says but can’t explain why different sources disagree by 25%.
This isn’t about becoming a research scientist. It’s about developing a systematic approach that moves you from “the data says” to “here’s what I know, how I know it, and where the gaps exist.” Because the HR Director who can explain why data conflicts exist earns more credibility than the one who simply picks the most favorable number.
The Knowledge Framework Compensation Professionals Need
Effective compensation research requires distinguishing between different levels of knowledge certainty. Most practitioners operate as if all information has equal validity. Consequently, they present survey data with the same confidence as regulatory requirements, creating credibility problems when stakeholders question their sources.
Start with this five-level hierarchy:
Level 1: I Know – Verifiable facts with primary source documentation. For example, your state’s minimum wage, your company’s actual pay data, specific provisions in the Equal Pay Act. These facts don’t require interpretation or trust in third-party aggregators. You can point to the statute, the payroll report, or the signed offer letter.
Level 2: I Know How I Know – Information with transparent methodology and replicable processes. Market data from surveys where you understand the participant criteria, job matching protocols, data cuts, and aging calculations. You can explain to your CFO exactly which companies contributed data, how jobs were leveled, and why outliers were included or excluded (ERI).
Level 3: I Think I Know – Reasonable inferences based on partial information or indirect indicators. For instance, estimating competitor pay based on job postings, Glassdoor reviews, and recruiting feedback. You have multiple data points suggesting a range, but you can’t verify the underlying sources or methodology. Nevertheless, the convergence of signals gives you confidence within boundaries.
Level 4: I Think I Know Why – Your professional judgment about causation or trends. When you see a turnover spike in a role after a market adjustment, you might infer that pay compression caused retention problems. Maybe that’s accurate. Alternatively, the real issue could be management changes, workload increases, or market shifts. You have a hypothesis worth testing, not a conclusion worth stating as fact.
Level 5: I Heard – Secondhand information, anecdotes, or industry rumors. “I heard Amazon is paying $150K for this role.” “Someone said the new pay transparency law requires salary bands in all job postings” (GovDocs). These statements might contain truth, but they’re not actionable without verification. Treat them as research starting points, not ending points.
The critical skill is knowing which level you’re operating at for any given statement and communicating that uncertainty appropriately. When you tell your CEO “the market rate is $85,000,” are you at Level 1 (you have clean data from named participants) or Level 3 (you’re triangulating from multiple imperfect sources)? The number might be the same, but your credibility depends on not overstating your confidence.
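To make the hierarchy operational rather than aspirational, it can help to tag every claim with its level before it goes into a deck. Here is a minimal sketch in Python; the class names and example claims are illustrative assumptions, not part of any standard compensation tool:

```python
from dataclasses import dataclass
from enum import IntEnum

class KnowledgeLevel(IntEnum):
    """The five-level hierarchy, from most to least certain."""
    I_KNOW = 1                # verifiable fact, primary source in hand
    I_KNOW_HOW_I_KNOW = 2     # transparent, replicable methodology
    I_THINK_I_KNOW = 3        # inference from converging partial signals
    I_THINK_I_KNOW_WHY = 4    # causal hypothesis, not yet tested
    I_HEARD = 5               # unverified secondhand information

@dataclass
class Claim:
    statement: str
    level: KnowledgeLevel
    verification: str  # where a stakeholder could check this

claims = [
    Claim("State minimum wage is $16.28/hr", KnowledgeLevel.I_KNOW,
          "State statute, effective date on file"),
    Claim("Market rate for SWE II is about $85K", KnowledgeLevel.I_THINK_I_KNOW,
          "Triangulated from two surveys and job postings"),
]

# Anything at Level 3 or weaker gets flagged before it is presented as fact.
for claim in claims:
    if claim.level >= KnowledgeLevel.I_THINK_I_KNOW:
        print(f"Verify before presenting: {claim.statement} ({claim.level.name})")
```

The tooling doesn’t matter; what matters is that every number carries its certainty level and verification path with it.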
Most compensation mistakes don’t come from bad data. Instead, they come from treating Level 3 or Level 4 information as if it’s Level 1, then getting defensive when stakeholders identify the gaps you pretended didn’t exist (iMercer).
“Most compensation mistakes don’t come from bad data; they come from treating reasonable inferences as verifiable facts, then getting defensive when stakeholders identify gaps you pretended didn’t exist.” – MorganHR
Why Data Presents Itself—And Why That’s Dangerous
The compensation technology ecosystem has optimized for consumption, not comprehension. Survey vendors provide clean dashboards with percentile calculations, aging factors, and peer group comparisons. HRIS platforms surface “market insights” without revealing their data sources. Consequently, compensation analysts operate like consumers scrolling through Netflix recommendations—accepting what the algorithm serves without questioning how it works (KornFerry).
This creates three specific problems:
First, practitioners lose the ability to detect data quality issues. When you receive a spreadsheet showing 127 data points for Software Engineer II, you should immediately ask: How many unique companies contributed? What’s the distribution across industries, company sizes, and geographies? Are any data points outliers that skew the percentiles? If your survey vendor won’t answer these questions clearly, you don’t have reliable data—you have marketing material disguised as research (ERI, iMercer). A minimal screening sketch follows these three problems.
Second, you can’t explain discrepancies to stakeholders. Your CFO doesn’t care that “the survey says” when she’s holding a different survey with different numbers. She wants to understand why the sources disagree. Does one report cash compensation while the other includes equity? Is one national while the other is regional? Do they define the job differently? Without understanding methodology, you’re just picking numbers that support your preferred outcome.
Third, you miss the unknown unknowns. When data presents itself, you only see what the vendor chose to show you. You don’t know what questions weren’t asked, which companies opted out, or what data got excluded as “outliers.” The most important insights often live in the gaps—but you can’t investigate gaps you don’t know exist.
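Some of these quality checks are mechanical enough to script before a number ever reaches a dashboard. The sketch below uses hypothetical data, and the 1.5 × IQR fence is one common outlier convention rather than a universal rule:

```python
import statistics

def screen_sample(data_points):
    """Basic quality screen for one survey cut.

    data_points: list of (company, base_salary) tuples.
    """
    salaries = sorted(salary for _, salary in data_points)
    companies = {company for company, _ in data_points}

    # 1.5 x IQR is a common outlier convention, not the only defensible one.
    q1, _, q3 = statistics.quantiles(salaries, n=4)
    iqr = q3 - q1
    outliers = [s for s in salaries
                if s < q1 - 1.5 * iqr or s > q3 + 1.5 * iqr]

    return {
        "data_points": len(salaries),
        "unique_companies": len(companies),
        "median": statistics.median(salaries),
        "flagged_outliers": outliers,
    }

sample = [("A", 76000), ("B", 78000), ("C", 80000), ("C", 82000),
          ("D", 83000), ("E", 84000), ("F", 85000), ("F", 86000),
          ("G", 88000), ("H", 145000)]
print(screen_sample(sample))
# 10 data points but only 8 companies; the $145,000 entry is flagged for review.
```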
Effective compensation research requires actively seeking information, not passively consuming it. That means starting with questions, not answers. What do I need to know? Where would that information exist? How can I verify what I’m told? Who has an incentive to present this information favorably?
MorganHR’s approach to data transparency reflects this philosophy—showing clients the underlying participant data, calculation methods, and confidence assessments rather than just serving up numbers. Because compensation planning built on opaque data is compensation planning built on hope, not evidence.
The Six-Question Research Protocol
Professional researchers follow systematic protocols to evaluate information quality and identify knowledge gaps. Compensation practitioners need the same discipline (AIHR). Therefore, before accepting any data point or making any recommendation, work through this six-question sequence:
Question 1: What exactly am I claiming to know? Be specific. “Market rate for Software Engineers” is too vague. “75th percentile base salary for Software Engineer II with 3-5 years experience in mid-sized technology companies in the Bay Area as of Q4 2024” is specific enough to evaluate. Most compensation errors start with imprecise claims that can’t be verified or falsified.
Question 2: What is my primary source? Who originally collected or generated this information? Not who reported it—who created it. If you’re citing a news article about a compensation trend, the journalist isn’t the primary source. The research report, survey data, or regulatory filing they cited is the primary source. Always trace back to the origin. Moreover, primary sources should be named, dated, and accessible.
Question 3: What methodology produced this information? How was the data collected? Who participated? What definitions were used? How were outliers handled? What assumptions underlie calculations? Methodology determines validity. A “national average” derived from 12 companies in three states isn’t a national average—it’s a small regional sample with selection bias.
Question 4: What are the known limitations? Every data source has constraints. Survey data ages. Job descriptions vary across companies. Sample sizes may be small for niche roles. Geographic cuts may blend high-cost and low-cost markets. Acknowledging limitations doesn’t undermine your credibility—it demonstrates your analytical rigor. Stakeholders trust advisors who explain trade-offs more than advisors who present certainty (iMercer).
Question 5: What alternative explanations or data exist? Professional researchers actively seek disconfirming evidence. If one survey shows $85K and another shows $92K, don’t just pick one and ignore the other. Investigate why they differ. If both are methodologically sound, the truth might be “market rate depends on company profile” rather than a single number. Looking for contradictions prevents confirmation bias (ScienceDirect).
Question 6: What don’t I know that could change my conclusion? This is the unknown unknowns question. What information, if it existed, would make you revise your recommendation? Are there factors you haven’t investigated? Questions you haven’t asked? Sources you haven’t consulted? The most important research skill is recognizing when you don’t have enough information to be confident.
This protocol doesn’t require hours of work for every data point. However, it does require asking these questions consciously rather than accepting presented information uncritically. Over time, evaluating sources becomes automatic—you develop instincts for what’s credible and what requires deeper investigation.
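One way to make the protocol conscious rather than optional is to keep it as a structured record that must be complete before a number leaves your desk. A minimal sketch, with hypothetical field values:

```python
# Field names mirror the six questions; the values here are hypothetical.
research_record = {
    "claim": "P75 base, SWE II, 3-5 yrs, mid-size tech, Bay Area, Q4 2024",
    "primary_source": "Vendor survey, 2024 edition (named, dated, accessible)",
    "methodology": "214 participants; leveled to vendor job codes; "
                   "outliers above P90 excluded",
    "known_limitations": "Bay Area cut blends SF and San Jose; collected Q2",
    "alternatives": "Second survey runs ~8% higher; it includes equity",
    "open_unknowns": "No visibility into which participants dropped out",
}

unanswered = [question for question, answer in research_record.items()
              if not answer]
if unanswered:
    raise ValueError(f"Protocol incomplete; still unanswered: {unanswered}")
```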
Where to Look When Survey Data Conflicts
Compensation professionals encounter conflicting data constantly. One survey shows $78K, another shows $85K, Glassdoor shows $92K, and job postings range from $70-95K (Rush). Most practitioners respond by either picking the number that fits their budget or averaging everything together. Both approaches are intellectually lazy and strategically dangerous (OutSolve).
Instead, use disagreement as a research opportunity. Conflicting data reveals important market dynamics that single sources obscure. Here’s where to investigate:
Primary survey documentation. Don’t rely on executive summaries or vendor presentations. Request the full methodology section, participant lists, and job descriptions. Understand exactly what’s being measured. A 20% variance between sources often disappears when you discover one includes bonuses and the other doesn’t, or one is national while the other is regional (ERI).
Participant profiles and data cuts. Who contributed data? How many data points exist? What’s the size, industry, and location distribution? A survey of 200 technology companies yields different results than a survey of 50 financial services firms and 150 healthcare organizations. Geography matters enormously—national averages that blend San Francisco with Des Moines hide more than they reveal (BLS).
Job matching details. How did the survey define this job? What were the key responsibilities, reporting relationships, and required qualifications? If you’re matching your Senior Financial Analyst to survey data for Senior Financial Analyst, do those actually represent the same scope? Job titles mean nothing—job content determines market rate (iMercer).
Compensation philosophy assumptions. Some surveys explicitly remove outliers above the 90th percentile. Others include them. Some age data monthly. Others use annual factors. Some separate base and variable pay. Others combine them. These choices dramatically affect reported percentiles. Make sure you understand what you’re comparing (ERI).
Recent market movements. Compensation data ages the moment it’s collected. If you’re using 2023 survey data in late 2024, you need external validation of whether the market moved. Review recent news, M&A activity, funding rounds, mass layoffs, and labor market reports. A survey showing $85K might be accurate for 2023 but completely wrong for current hiring. (A sample aging calculation follows this list.)
Alternative data sources. Cross-reference surveys with job posting data, recruiter intelligence, employee self-reports (Glassdoor, Levels.fyi), and your own offer acceptance rates (Rush). These sources have different biases, but triangulation reveals patterns. When job postings consistently exceed survey data, you’re likely looking at an accelerating market. When they’re below, you might have selection bias in your survey.
Company-specific factors. Your organization isn’t the survey average. You might pay premium compensation because you’re a high-growth company or lag the market because you offer exceptional benefits. Understanding your competitive position requires knowing why you pay what you pay relative to the data, not just matching survey percentiles (AIHR).
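Of these checks, aging is the most mechanical and the easiest to make explicit. One common convention, though vendors differ, is to compound an annual market-movement rate over the months since the survey’s effective date; treat the rate itself as a stated assumption:

```python
def age_salary(reported, annual_movement, months_elapsed):
    """Compound an assumed annual market-movement rate over elapsed months.

    Vendors differ: some publish their own aging factors, some age monthly,
    some not at all. The annual_movement rate is an explicit assumption here.
    """
    return reported * (1 + annual_movement) ** (months_elapsed / 12)

# An $85,000 rate collected 14 months ago, assuming 3.5% annual movement:
print(round(age_salary(85000, 0.035, 14)))  # about 88481
```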
The goal isn’t perfect data—it doesn’t exist. Rather, the goal is understanding the range of reasonable estimates and the factors driving variation. When you can explain to your CFO “here’s why three sources show different numbers, here’s what that tells us about the market, and here’s why I recommend this approach,” you’ve moved from data regurgitation to professional judgment.
The Hardest Part: Making the Call When Data Still Conflicts
Here’s the moment that separates compensation analysts from compensation leaders: you’ve done the research, evaluated the sources, understood the methodologies, and the data still doesn’t align. Survey A says $82K, Survey B says $88K, job postings range $75-95K, and your recruiter insists you need $90K to compete. Now what?
Most compensation professionals freeze at this exact point. They’ve been trained to find “the answer” in data, not to synthesize conflicting information into a defensible recommendation. Consequently, they either pick the most convenient number, average everything together (which is mathematically absurd when sources measure different things), or throw their hands up and ask their boss to decide. None of these approaches demonstrate professional judgment.
Making the call requires weighing multiple factors your research has revealed:
First, assess source quality and relevance. Not all data deserves equal weight. A survey of 200 comparable companies with transparent methodology matters more than aggregated job posting data with unknown selection bias. Recent information matters more than aged data in a moving market. Geographic specificity matters more than national averages when you’re hiring in a particular location. Your research should have revealed which sources are most credible for your specific context—now use that assessment (ScienceDirect). A weighting sketch appears after these four factors.
Second, consider your organization’s competitive strategy. High-growth startups competing for scarce technical talent probably need to price near the high end of the range. Established companies with strong employer brands and exceptional benefits might compete effectively at the 50th percentile of base salary. Organizations trying to shift their talent profile toward higher performers benefit from paying above market. Market data informs your decision; business strategy determines your decision.
Third, factor in your risk tolerance and adjustment flexibility. If you’re wrong on the low side, what happens? Can you make a quick correction if the role doesn’t fill or the hire quits quickly? If you’re wrong on the high side, what’s the budget impact and internal equity effect? Sometimes the “safe” choice is pricing slightly high to secure talent quickly. Other times, it’s pricing conservatively and being prepared to adjust. Your recommendation should account for the consequences of being wrong in either direction.
Fourth, acknowledge remaining uncertainty explicitly. The strongest recommendations don’t pretend certainty exists. Instead, they say: “Based on these sources with these methodologies, I estimate market rate between $84-90K. I recommend $87K because we’re hiring in a tight market and need to fill quickly, but we should monitor offer acceptance rates and be prepared to adjust if we’re seeing resistance. Here’s the business case for this positioning, and here are the triggers that would indicate we need to reconsider.”
This is professional judgment in action. You’ve done the research. You understand the data quality and limitations. You’ve weighed competing factors. You’ve made a recommendation with clear rationale and defined conditions that would change your thinking. You’re not claiming perfect knowledge—you’re demonstrating systematic thinking and strategic reasoning.
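For the source-weighting part of this judgment, writing the weights down beats blending numbers in your head. The sketch below uses illustrative weights, and it is only meaningful after you have confirmed the sources measure the same thing (same pay elements, geography, and job scope):

```python
# Illustrative weights only; the point is to make your weighting explicit
# and reviewable, not to claim these particular values are correct.
sources = [
    # (label, estimate, weight reflecting methodology quality and relevance)
    ("Survey A (transparent methodology, comparable peers)", 82000, 0.40),
    ("Survey B (broader industry mix)", 88000, 0.30),
    ("Job postings (selection bias, but current)", 90000, 0.20),
    ("Recruiter intelligence (anecdotal)", 90000, 0.10),
]

total_weight = sum(weight for _, _, weight in sources)
blended = sum(est * weight for _, est, weight in sources) / total_weight
low = min(est for _, est, _ in sources)
high = max(est for _, est, _ in sources)

print(f"Blended estimate: ${blended:,.0f} (range ${low:,}-${high:,})")
# Blended estimate: $86,200 (range $82,000-$90,000)
```

The blended figure is a starting point for judgment, not the answer; the weights are exactly what a skeptical stakeholder should be able to challenge.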
The compensation professionals who struggle most with this step are those who believe their job is finding the “right answer” in data. There is no right answer. There are multiple reasonable answers depending on strategy, risk tolerance, and business context. Your job isn’t to find the answer—it’s to make the best recommendation given available information and strategic priorities, then monitor whether reality confirms or contradicts your assumptions (WTWCO).
This is genuinely hard. It requires confidence to make calls when data conflicts, humility to acknowledge limitations, strategic thinking to connect compensation decisions to business outcomes, and communication skills to explain your reasoning to skeptical stakeholders. But this is exactly what earns you a seat at the strategic table rather than remaining a data administrator.
The compensation professionals commanding six-figure salaries aren’t the ones with the most survey subscriptions. They’re the ones who can take messy, conflicting information and synthesize it into clear recommendations with sound business rationale. They’re comfortable saying “here’s what I know, here’s what I don’t know, here’s what I recommend and why, and here’s how we’ll know if we need to adjust.” That’s expertise.
From Data Consumer to Compensation Expert
The distinction between a compensation analyst and a compensation expert isn’t credentials or years of experience. It’s the ability to evaluate information critically, identify knowledge gaps proactively, and communicate uncertainty honestly.
Data consumers accept what they’re given. Experts question sources, understand methodology, and synthesize conflicting information into actionable recommendations. Data consumers say, “The survey shows $85K.” Experts say, “Based on these three sources with these limitations, I estimate the market rate between $82-88K, and here’s why I recommend positioning at $86K given our hiring needs and competitive posture.”
This shift requires developing three specific capabilities:
First, cultivate healthy skepticism. Question convenient data. When a number perfectly aligns with your budget or confirms your hypothesis, investigate harder. Confirmation bias is the enemy of good research. The most valuable data often contradicts your assumptions—pay attention to those moments (ScienceDirect).
Second, build a research network. Compensation expertise isn’t individual knowledge—it’s knowing who to ask. Develop relationships with peers at other companies, survey vendors, consultants, recruiters, and industry experts. When you encounter questions you can’t answer, knowing who has relevant expertise saves enormous time. Furthermore, cross-company conversations reveal market dynamics no survey captures.
“Compensation expertise isn’t about having the most survey subscriptions—it’s about evaluating source quality, synthesizing conflicting data, and making defensible recommendations when certainty doesn’t exist.” – MorganHR
Third, document your research process. Don’t just save conclusions—save your methodology, sources, assumptions, and limitations. When someone questions your recommendation six months later, you need to reconstruct your thinking. Additionally, documentation creates institutional knowledge that survives turnover. Your successor shouldn’t have to start from scratch (Astron).
The compensation profession needs practitioners who can research, not just read. Because the data isn’t going to get cleaner or more aligned. Technology will continue fragmenting information sources. AI will generate more data faster. Pay transparency laws will surface more conflicts (GovDocs). The competitive advantage will belong to professionals who can navigate complexity, evaluate quality, and synthesize insights—not those who simply regurgitate whatever presentation lands in their inbox.
Key Takeaways
- Use the five-level knowledge framework to distinguish between verifiable facts, transparent methodology, reasonable inferences, professional judgment, and unverified information—then communicate each appropriately.
- Apply the six-question research protocol before accepting any compensation data: define your claim precisely, identify primary sources, understand methodology, acknowledge limitations, seek alternative explanations, and recognize the knowledge gaps.
- Treat conflicting data as an opportunity by investigating participant profiles, job matching details, methodology differences, and market movements rather than simply picking convenient numbers.
- Master the synthesis step by weighing source quality, competitive strategy, risk tolerance, and remaining uncertainty to make defensible recommendations when data conflicts persist.
- Develop critical research capabilities: question convenient data, build expert networks, and document your research process to move from data consumption to professional expertise.
Quick Implementation Checklist
- Audit your current data sources – List every survey, database, and information source you use; document what you know about methodology and what you don’t
- Apply the knowledge framework – Take your last three recommendations and categorize each supporting claim by certainty level (I Know; I Know How I Know; I Think I Know; I Think I Know Why; I Heard)
- Request full methodology – For your primary survey vendor, ask for complete participant lists, job matching protocols, data cuts, and calculation methods
- Cross-reference one conflicting data point – Find a role where you have multiple sources showing different numbers, then systematically investigate why using the six-question protocol
- Practice making judgment calls – Take one compensation decision with conflicting data and write out your recommendation, including source quality assessment, strategic considerations, risk factors, and explicit uncertainty
- Build your research network – Identify three compensation peers at other companies and one external expert (consultant, vendor, recruiter) you can consult on methodology questions
- Document one complete research process – For your next significant recommendation, create a written record of sources, methodology evaluation, assumptions, limitations, and decision rationale
- Develop a source evaluation template – Create a standard form you complete for every new data source, covering methodology, limitations, appropriate use cases, and confidence level (a minimal example follows this checklist)
- Schedule quarterly data validation – Set recurring reviews to check if your primary sources remain current, relevant, and methodologically sound for your organization’s needs.
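A minimal version of such a source evaluation template, sketched as a Python dataclass with suggested rather than standard field names:

```python
from dataclasses import dataclass

@dataclass
class SourceEvaluation:
    """One completed form per data source; field names are suggestions."""
    source_name: str
    methodology_summary: str      # who participated, how jobs were matched
    known_limitations: list[str]  # aging, geography blends, sample size...
    appropriate_uses: list[str]   # e.g., directional signal vs. pricing
    confidence: str               # map to the five-level framework
    next_review_date: str         # supports the quarterly validation above

glassdoor_eval = SourceEvaluation(
    source_name="Glassdoor self-reports",
    methodology_summary="Unverified employee submissions; unknown sampling",
    known_limitations=["self-selection bias", "no job leveling"],
    appropriate_uses=["directional signal", "cross-check against surveys"],
    confidence="Level 3: I Think I Know",
    next_review_date="2025-Q2",
)
```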
Frequently Asked Questions
How much time should I spend researching each compensation decision?
Scale research effort to decision impact. For individual adjustments under $5K, verify your primary source methodology once and apply consistently. For market studies affecting 50+ employees or executive compensation, invest hours in cross-referencing sources and validating assumptions. The key is systematic evaluation, not exhaustive investigation for every data point (WTWCO).
What if my budget doesn’t allow multiple survey sources?
Single-source research is still research when you understand the methodology thoroughly. Request detailed documentation from your vendor. Supplement with free resources like Bureau of Labor Statistics data, job posting analytics, and peer networking (BLS). Focus on understanding your source’s limitations rather than pretending limitations don’t exist.
How do I handle situations where research reveals we’re significantly off-market?
Present findings with context: data sources, methodology, confidence level, and market factors driving the gap. Provide options ranging from immediate correction to phased adjustment to strategic positioning. Include business case analysis showing retention risk and recruitment challenges. Research reveals problems; your expertise recommends solutions.
Should I share my research methodology with hiring managers and executives?
Absolutely. Transparency builds credibility. When managers understand why you recommended a number, they’re more likely to trust other recommendations. Use simplified explanations focusing on key methodology points and limitations. Avoid overwhelming detail, but never hide your analytical process.
How do I develop research skills if my compensation education didn’t cover this?
Start with one source and master its methodology completely. Join compensation associations and attend methodology workshops. Read academic papers on survey design and sampling. Practice the six-question protocol on low-stakes decisions. Build gradually—research skills improve with deliberate application, not innate talent.
What’s the biggest research mistake compensation professionals make?
Treating all data as equally valid. A survey of 500 companies with transparent methodology deserves more weight than job posting data or Glassdoor self-reports. Conversely, current job posting data might better reflect an accelerating market than year-old survey data. Weigh sources by relevance and quality; don’t average everything together (Glassdoor).
How do I respond when executives want definitive answers but data is uncertain?
Reframe uncertainty as risk management. Instead of “I don’t know,” say “Here’s what we know, what we’re confident about, and what remains uncertain—here’s my recommendation with contingency plans if we’re wrong.” Executives respect advisors who acknowledge complexity while still providing direction.
Can AI tools help with compensation research methods?
AI excels at finding sources, identifying patterns, and drafting methodology questions. However, it can’t evaluate data quality, detect subtle sampling bias, or apply professional judgment about your specific context. Use AI to accelerate research tasks, but never outsource critical thinking about data validity and applicability (Zayla).
Stop regurgitating compensation data and start researching it. Schedule a consultation to discuss how MorganHR helps organizations build research-driven compensation strategies that stand up to stakeholder scrutiny—or explore SimplyMerit to see how transparent data methodology should work.