
The most dangerous sentence in compensation: ‘ChatGPT says I should be making…’

by Todd Humber

Nearly three million times a day, an American worker types a question into ChatGPT and waits to find out what they’re worth.

That number comes from OpenAI itself, published recently in a report framing the trend as a kind of democratization — workers finally getting the wage information they need, when they need it, without the awkwardness of asking a colleague or the frustration of sifting through scattered salary pages. The company even introduced WorkerBench, a new benchmarking tool that tested its model against U.S. government wage data and pronounced the results highly accurate.

It’s a compelling story. It’s also one that should make every compensation professional reach for a very strong cup of coffee.

Here’s the thing about wage information: it looks simple on the surface and isn’t. The number ChatGPT returns when someone asks what a supply chain manager earns in Winnipeg is not a number drawn from a rigorous, methodologically sound compensation survey. It is, at best, a synthesis of whatever publicly available data the model encountered during training — job postings, salary aggregators, anecdotal forums, outdated studies.

At worst, it’s a confident-sounding figure that happens to be wrong. The OpenAI report itself acknowledges the benchmark was tested against national occupation and metro-level medians, not the granular, industry-specific, role-calibrated data that professional compensation surveys painstakingly collect.

The salary survey is not glamorous. That’s the point

Traditional salary surveys are slow. They can be expensive. They require employers to submit actual, real payroll data, which is then validated, cleaned, and analyzed by people who specialize in exactly this kind of work.

Robert Half, WTW, Mercer, the Conference Board, Normandin Beaudry, Hays, Eckler and others all pour substantial resources into giving HR professionals the most accurate data possible. They're not perfect, but they represent something AI currently cannot: verified, structured, contextualized information about what real employers are actually paying real people in specific roles, at specific levels, in specific geographies and industries.

The OpenAI report notes, almost as a point of pride, that wage searches are highest where pay is “harder to benchmark, more negotiable, or more important to career mobility.” In other words, the tool is most popular precisely where the answers are most complex.

That is not a reassuring data point.

For workers using ChatGPT to do a quick gut-check before a job interview, the stakes are manageable. They might walk in slightly misinformed about market rates, negotiate a little high or low, and life goes on. That’s not ideal, but it’s recoverable.

The scenario that should worry HR departments is different. It's the one where a harried compensation manager, under pressure to move quickly on a budget cycle, starts leaning on AI-generated figures to shortcut the benchmarking process. Or where a small or mid-sized employer without access to premium survey data decides that a few ChatGPT queries are close enough.

It won’t be close enough.

And the people most likely to pay the price for that gap are not the ones setting the budgets.

Pay equity doesn’t survive on approximations

Canada is in the middle of a serious, overdue reckoning on pay equity. Federal pay equity legislation came into force in 2021, and provincially, the work of identifying and closing compensation gaps is ongoing and deeply technical. It requires disaggregated data. It requires job evaluation methodologies that hold up to scrutiny. It requires, above all, that the numbers being used are grounded in something more than a language model’s best guess.

We're making decent strides through recent legislative efforts such as pay transparency, but that progress only holds if the underlying numbers are sound.

Pay equity analysis is not a task for a tool that, by OpenAI’s own description, is still working toward accuracy “beyond national benchmarks.” Systemic pay gaps don’t close when you’re working from approximate data. They calcify.

There is a broader pattern worth looking at here. AI is genuinely useful for a wide range of HR tasks — finessing job descriptions, summarizing policy documents, screening large applicant pools, synthesizing engagement survey results. These are areas where speed and scale matter and where a degree of imprecision is tolerable.

Compensation is different. Compensation is where imprecision compounds. A job evaluation done wrong in year one affects every merit increase, every promotion band, every equity audit that follows.

OpenAI’s framing of ChatGPT as a labour market resource is understandable from where they sit. They are building a product, and workers genuinely do struggle to find pay information. But the solution to wage opacity is not a faster, more confident version of guessing. It’s better data infrastructure, more transparent employer pay practices, and compensation professionals who have the resources and the standing to do their work properly.

What the three-million-messages-a-day figure actually tells us is that workers are hungry for information that the labour market has historically withheld from them. That’s a real problem worth solving. But the answer to a broken system isn’t a shortcut around expertise — it’s a genuine commitment to the kind of rigorous, human-led analysis that makes compensation fair, defensible, and durable.

A machine can tell you what it thinks you’re worth. The question is whether you want your employer deciding your salary based on the same answer.
