Computer recruiters

America's first law regulating AI bias in hiring takes effect this week

While the law aims for transparency, critics say it may not be enough to protect against AI bias
To whom AI may concern.
Photo: Unsplash

Artificial intelligence isn’t simply changing how we do our jobs: It’s also deciding whether we get jobs at all. Companies are increasingly incorporating algorithmic tools into their hiring processes, from software that reads our resumes to AI bots that score our first interviews. Now in New York City, new legislation will help determine how much say AI can have in job applications.

This week, regulators from the city’s Department of Consumer and Worker Protection will start enforcing a first-of-its-kind law aimed squarely at AI bias in the workplace. The law requires more transparency from employers that use AI and algorithmic tools to make hiring and promotion decisions; it also mandates that companies undergo annual audits for potential bias inside the machine. Enforcement begins on July 5.

While it’s a stride toward transparency about how AI decides who gets a job, some critics say the law—along with others that follow it—isn’t enough to protect candidates from bias in hiring. As AI’s capabilities race ahead, regulators need to run faster to keep up.

How does New York City’s AI hiring law affect job candidates?

First passed in 2021, the city’s hiring law stipulates that companies using AI and other algorithm-based tools for hiring and promotions must tell candidates they do so. Candidates can also ask what personal data is being collected.

US Equal Employment Opportunity Commission chair Charlotte Burrows suggested in a hearing this year that as many as four out of five American employers use some kind of automated technology to make employment decisions. Those tools aren’t limited to secret in-house projects: Vendors sell plenty of off-the-shelf software, adjusting the algorithms to a company’s ideal hiring profile. There’s now an AI for every step in the hiring process, including:

  • Automated resume screeners that read job applications and recommend the best candidates for an open role
  • Matchmaking algorithms that scour millions of job postings to recommend roles to candidates—and vice versa
  • Social media scrapers that collect data on applicants to compile personality profiles based on what they’ve found online
  • AI-based chatbots that ask candidates questions about their qualifications, then decide if they’ll proceed in the interview process
  • Algorithmic video platforms that have candidates answer interview questions on camera, record their replies, transcribe their responses, and analyze their vocal or facial patterns for subjective traits like “openness” or “conscientiousness”
  • Logic games that purport to identify qualities like “risk-taking” or “generosity”

“It’s difficult to paint a complete picture because there are so many different vendors that offer these tools to big companies,” Ben Winters, who leads the AI and human rights project at the Electronic Privacy Information Center (EPIC), told Quartz this spring. The sheer scale of the market makes the tools hard to track, and harder to regulate.

That matters when the makers of these tools don’t want to reveal what’s inside their technology—and in some cases, even the companies using them don’t fully understand what the algorithm is really prioritizing. AI-powered hiring technology has been found to make erratic, arbitrary, and often discriminatory decisions that rule against job candidates.

Take the algorithmic video software that generated different personality scores when a candidate put on glasses or a headscarf, or the AI interview tool that couldn’t transcribe regional accents in the UK. Algorithms have been found to drop women for tech roles, rate German speakers as proficient in English, or simply favor candidates named Jared.

Under New York City’s law, companies must hire independent auditors annually to review their AI tools for bias. Audits hinge on an “impact ratio,” which compares the rate at which the tool selects candidates across legally protected groups defined by race, ethnicity, and gender—although, notably, not people with disabilities or older workers. Employers that violate the law face fines.
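For a sense of the arithmetic involved, here is a minimal sketch of how an impact ratio might be computed, assuming the common definition of each group’s selection rate divided by the selection rate of the most-selected group. The group names and applicant counts below are purely illustrative, not drawn from any real audit.

```python
# Hypothetical sketch of an impact-ratio calculation. Assumes the
# common definition: a group's selection rate divided by the rate of
# the most-selected group. All labels and numbers are illustrative.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants that the tool advanced."""
    return selected / applicants

# Illustrative pools: (candidates advanced, total applicants) per group.
groups = {
    "group_a": (48, 100),
    "group_b": (30, 100),
    "group_c": (12, 50),
}

rates = {name: selection_rate(s, n) for name, (s, n) in groups.items()}
baseline = max(rates.values())  # the most-selected group sets the bar

for name, rate in rates.items():
    ratio = rate / baseline
    # The EEOC's long-standing "four-fifths rule" treats ratios below
    # 0.8 as a signal of potential adverse impact; New York City's law
    # requires publishing the ratios rather than setting a threshold.
    status = "flag" if ratio < 0.8 else "ok"
    print(f"{name}: rate {rate:.2f}, impact ratio {ratio:.2f} ({status})")
```

In this toy example, group_a sets the baseline with a 48% selection rate, so group_b (30%) lands at a 0.62 ratio and group_c (24%) at 0.50, both below the four-fifths benchmark auditors often use as a reference point.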

What are critics saying about the AI hiring law?

But advocates for ethical uses of AI critique the law and its standards of measurement, saying it doesn’t go far enough to meaningfully protect job candidates. Unless the law is airtight, they charge, developers can find loopholes to pass—or bypass—audits.

“There’s a real concern that good governance tools, like audits and impact assessments regarding artificial intelligence programs, become this administrative wand-waving in front of your face,” Winters said.

He cited an example involving video interview platform HireVue. After EPIC filed a Federal Trade Commission complaint against HireVue (pdf) for its use of algorithms and facial recognition technology, the company hired independent auditors to investigate bias in its products. HireVue issued a release saying auditors had found no bias risks. But when you read the audit, Winters said, it doesn’t state that the tool is unbiased—just that the auditors didn’t have enough information to decide.

“I don’t hold out any hope that [the law] will give us any information,” he added, “or really allow people to increase the equity around their experience with hiring.”

New York City’s law is no stranger to critique. After its passage under the de Blasio administration in 2021, the law rode out packed public hearings and a flood of public comments on its implementing rules; enforcement was postponed twice, first from this January to April, then from April to July, to take the feedback into account. Some public interest advocates say revisions to the law bent to business interests opposing regulation.

“What could have been a landmark law was watered down to lose effectiveness,” Alexandra Givens, president of the Center for Democracy & Technology, told the New York Times, citing narrow phrasing that leaves many uses of AI out of audits. “My biggest concern is that this becomes the template nationally when we should be asking much more of our policymakers.”

Where else are laws being written to regulate AI in hiring?

While New York City’s law is the first to address bias in AI-based hiring tools, other US jurisdictions have also moved to regulate automated hiring technology. Illinois and Maryland were early to oversee AI video hiring software, with both requiring that job candidates consent to be evaluated by an algorithm. Meanwhile, states including California, New Jersey, New York, and Vermont, along with the District of Columbia, are at work on laws regulating the use of AI in hiring.

And lawmakers in Europe have approved the EU AI Act, a sweeping set of rules that will sort algorithmic technologies into categories of risk, allowing the European Union to limit high-risk technology—and outright ban software it deems “unacceptable.”

So how can candidates trust the AI being used in their job interviews?

Other advocates for ethics in AI are optimistic that, despite their shortcomings, laws tackling hiring algorithms force some meaningful transparency for job candidates. “All of [the laws] are flawed,” Mona Sloane, a senior research scientist at NYU’s Center for Responsible AI, told Quartz this spring. But they also make AI regulation more concrete—and rein in an algorithmic Wild West. If regulators really want the laws to work, Sloane and others point to one way to close loopholes: resources.

“There is no funding available to think concretely about innovation and compliance,” Sloane said. To change that, she argued, regulators—not just the companies contracting audits—must give auditors the access and resources they need to carry out their work. Only then can candidates trust that audits of a company’s hiring process rely on truly independent tools, techniques, and protocols. Otherwise, job seekers and regulators will have to take AI tools—and the companies that use them—at their word.