Economists like to draw triangles. In trade policy, you cannot have high tariffs, no retaliation, and unchanged prices all at once. In monetary policy, a central bank cannot simultaneously fix interest rates, fix the money supply, and promise complete stability. A similar triangle exists for hiring under unequal starting conditions, and most arguments about employment equity slip through its cracks.
When companies rely on algorithms to allocate scarce jobs, they gravitate toward three attractive goals: strong efficiency (choosing the candidates most likely to perform well), strong representativeness (outcomes that roughly mirror each group’s share of the applicant pool), and strong formal neutrality (mechanically applying the same rules to everyone).
The problem is simple but unpleasant: you cannot have all three at once. You can choose any two, but the third must give way. That is the “fairness trilemma,” and once you see it, much of the confusion around hiring algorithms and diversity, equity, and inclusion efforts starts to look less like a mystery and more like standard price theory. The formal statement and proof are in my research paper “The Fairness Trilemma: An Impossibility Theorem for Algorithmic Governance”.
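In compressed form, the structure looks like this. The notation below is a simplified sketch I am using for this post, not the paper’s exact statement: a selection rule h(s, g) decides whether to hire an applicant with predicted score s from group g, with k slots to fill.

```latex
% Simplified sketch of the trilemma's three properties (exposition-only notation).
\begin{aligned}
\textbf{Efficiency:}         &\quad h \text{ hires exactly the } k \text{ applicants with the highest scores } s;\\
\textbf{Representativeness:} &\quad \Pr(g \mid h = 1) = \Pr(g) \text{ for each group } g;\\
\textbf{Neutrality:}         &\quad h(s, A) = h(s, B) \text{ for every score } s.
\end{aligned}
```

When the groups’ score distributions differ, efficiency plus neutrality force a single common threshold, and a common threshold overselects the higher-scoring group, so representativeness fails; symmetric arguments show that each other pair of properties sacrifices the remaining one.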
The old promise
For a while, the story many companies told about hiring was simple. Prejudice lived in people’s heads; inefficiency lived in intuitive judgment. The solution was obvious: standardization, automation, and measurement. Replace discretion with data, and hiring would become both fairer and more effective.
This story fueled a wave of investment in DEI programs and algorithmic recruitment tools. Vendors promised something very attractive in both public policy and corporate governance: moral improvement without trade-offs. Better outcomes for disadvantaged groups, no loss of performance, and fewer uncomfortable conversations about discretion and power.
Algorithmic recruitment systems were sold as the way out of this quagmire. They ingest resumes and applications, learn what predicts performance, and, where needed, mathematically constrain the model to be “fair.”
However, algorithms do not eliminate discretion. They relocate it into model design, data selection, and the definition of “fairness” itself, and they tend to relocate it to places that are harder to see and harder to hold to account.
A parable of the three corners
The now famous story of Amazon’s experimental recruitment algorithm is a useful parable. Trained on past resumes and hiring decisions, the system learned that applicants whose profiles resembled past male hires were more likely to score well for technical roles. In practice, it downgraded resumes that appeared “female-coded,” reflecting the male-dominated tech workforce it had learned from.
In a narrow technical sense, the model had not failed. It optimized predictive performance given its data, and it applied the same scoring rule to every applicant. It was efficient and formally neutral. What it could not do was generate representative results from unrepresentative data.
At that point, the company faced three options that map cleanly onto the trilemma. It could keep the model and accept unequal outcomes (efficiency and neutrality, weak representation); add fairness constraints that pull the results toward parity while still selecting the best candidates within each group, at some cost in overall predictive accuracy (efficiency and representation, weak neutrality); or reintroduce human judgment and overrides to correct the pattern (representation restored through discretion, again at the cost of formal neutrality). Amazon eventually scrapped the system.
Something similar happened with HireVue’s AI video interviews. The company touted automated analysis of facial expressions, tone, and word choice as a way to standardize early screening and reduce bias. Critics noted that these signals correlate with disability status, neurodiversity, and demographic background in ways that are hard to justify as job-relevant. Under mounting pressure, HireVue discontinued facial analysis entirely.
In both cases, it was not the idea of screening itself that failed. What failed was the belief that measurement could be neutral in a world of unequal starting conditions, and that efficiency, representation, and neutrality could be had “for free” from a suitably clever model.
A toy model
A simple model makes the structure clear. Imagine a company that must fill a fixed number of positions from two groups of applicants, A and B. Applicants in both groups are scored by a predictive model that estimates their probability of success. Because of unequal starting conditions such as quality of schooling, prior experience, and background, Group A has a higher average predicted success rate than Group B. The company considers a single threshold rule: hire everyone whose predicted success score exceeds a predetermined level.
If base rates are unequal, one rule cannot do all three of the following at once: choose the candidates with the highest expected performance, match hires from Group A and Group B approximately to their shares of the applicant pool (or population), and apply the same threshold to everyone. If the company insists on strong efficiency and strong neutrality, it sets one common threshold. Hires are then drawn disproportionately from Group A, which has higher predicted scores, and representation diverges from group shares.
If it insists on strong efficiency and strong representativeness, it must soften neutrality with group-specific thresholds or weights, hiring more Group B applicants while still selecting the best within each group. But A and B applicants with the same score are now treated differently.
If the company insists on strong representation and strong neutrality (the same rule for everyone, similar hiring rates across groups), it cannot simply choose the candidates with the highest overall scores. Some high-scoring applicants go unhired and some lower-scoring applicants are hired, at a cost in efficiency, unless the underlying inequalities are addressed upstream. The simulation sketch below makes these trade-offs concrete.
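Here is a minimal simulation of the toy model in Python. Everything in it is an illustrative assumption: the Gaussian score distributions, the group sizes, and the hiring budget are numbers I chose for exposition, not estimates from any real hiring system. It compares the three rules just described.

```python
# A minimal simulation of the toy model: two groups with unequal predicted
# scores, a fixed hiring budget, and three allocation rules.
import numpy as np

rng = np.random.default_rng(0)
n_a, n_b, slots = 700, 300, 100          # applicant pool and number of openings
score_a = rng.normal(0.60, 0.15, n_a)    # Group A: higher mean predicted success
score_b = rng.normal(0.50, 0.15, n_b)    # Group B: lower mean (unequal starting conditions)
scores = np.concatenate([score_a, score_b])
group = np.array(["A"] * n_a + ["B"] * n_b)

def report(name, hired):
    """Print mean predicted score of the hires and Group B's share of them."""
    mask = np.zeros(len(scores), dtype=bool)
    mask[hired] = True
    share_b = (group[mask] == "B").mean()
    print(f"{name:28s} mean score {scores[mask].mean():.3f}   Group B share {share_b:.2f}")

# 1) Efficiency + neutrality: one common threshold (the top `slots` overall).
#    Hires skew toward Group A, so representation suffers.
report("common threshold", np.argsort(scores)[-slots:])

# 2) Efficiency + representation: group-specific cutoffs that fill each
#    group's proportional quota with its own best scorers. Equally scored
#    A and B applicants are now treated differently (neutrality suffers).
quota_b = round(slots * n_b / (n_a + n_b))
best_a = np.argsort(score_a)[-(slots - quota_b):]
best_b = n_a + np.argsort(score_b)[-quota_b:]      # offset B's indices into `scores`
report("group-specific thresholds", np.concatenate([best_a, best_b]))

# 3) Representation + neutrality: one score-blind rule (a uniform lottery).
#    Hire rates equalize in expectation, but mean predicted success drops.
report("uniform lottery", rng.choice(len(scores), slots, replace=False))
```

On a typical run, the pattern is exactly the trilemma: the common threshold gives Group B well below its 30 percent pool share, the group-specific thresholds restore that share at a modest cost in mean predicted score, and the lottery matches the share while giving up the most predicted performance.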
This is the fairness trilemma in its simplest form. You can pick any two corners of the triangle, but the third will move away from you. And the impossibility is not primarily about machine learning; it is about allocating scarce slots under unequal conditions.
Scarcity does not go away; it moves
Economists have seen this movie before. Consider rent control. Imposing a price ceiling below the market-clearing level does not eliminate scarcity; it moves it. Scarcity resurfaces as queuing, non-price screening, side payments, and reduced quality. Landlords who cannot allocate through rent allocate through waiting lists, personal networks, and discretion. Empirical work such as Diamond, McQuade, and Qian’s study of rent control in San Francisco documents this pattern.
Hiring systems work in much the same way. Constrain one allocation mechanism, and scarcity finds another channel. When allocation by performance metrics is constrained in the name of equity, organizations reach for committees, exceptions, holistic reviews, and opaque overrides to hit their numbers. Each move preserves two corners of the trilemma by relaxing the third. Policy constraints redirect scarcity; they do not eliminate it.
What companies should do
Once we accept that efficiency, representation, and formal neutrality cannot all be maximized at once, the question changes. Instead of asking, “How can we eliminate bias without trade-offs?” companies need to ask, “Which margin are we willing to soften, and where will discretion live?”
A more honest approach to fairness and inclusion in hiring algorithms does at least three things. It states its priorities and designs governance around those choices. It places discretion where it can be monitored, in structured committees, documented overrides, and review processes, rather than burying value judgments in model design or opaque fairness metrics. And it stops selling algorithms as a silver bullet: a model cannot eliminate the fundamental trade-offs created by unequal starting conditions. At best, it can make clear where the constraints bind and what each choice costs.
The goal is not perfection. It is legitimacy: decide openly where the trilemma will bind in a given situation, and take responsibility for the outcome.
