New York City to regulate the use of artificial intelligence in hiring

by SuperiorInvest

European lawmakers are finishing work on an AI act. The Biden administration and leaders in Congress have their own plans for reining in AI. Sam Altman, the chief executive of OpenAI, creator of the AI sensation ChatGPT, recommended in Senate testimony last week the creation of a federal agency with oversight and licensing authority. And the topic came up at the Group of 7 summit in Japan.

Amid sweeping plans and promises, New York City has emerged as a modest pioneer in AI regulation.

The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

City law requires companies using AI hiring software to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies will be fined for violations.

New York City’s targeted approach represents an important front in AI regulation. Experts say that at some point the general principles developed by governments and international organizations must be translated into details and definitions. Who is affected by technology? What are the benefits and harms? Who can intervene and how?

“Without a specific use case, you’re not able to answer these questions,” said Julia Stoyanovich, associate professor at New York University and director of its Center for Responsible Artificial Intelligence.

But even before it took effect, the New York law was a magnet for criticism. Public interest advocates say it doesn’t go far enough, while business groups say it’s impractical.

Complaints from both camps point to the challenge of regulating AI, which is advancing at a rapid pace with unknown consequences, stirring excitement and anxiety.

Unpleasant compromises are inevitable.

Ms. Stoyanovich worries that the city’s law has loopholes that could weaken it. “But it’s a lot better than not having a law,” she said. “And until you try to regulate, you won’t learn how.”

The law applies to companies with workers in New York, but labor experts expect it to affect practices nationally. At least four states — California, New Jersey, New York and Vermont — and the District of Columbia are also working on laws to regulate artificial intelligence in hiring. And Illinois and Maryland have passed laws restricting the use of specific artificial intelligence technologies, often for workplace surveillance and job applicant screening.

The New York law arose out of a clash of sharply opposing viewpoints. The City Council approved it during the final days of Mayor Bill de Blasio’s administration. Rounds of hearings and public comments, more than 100,000 words, came later — overseen by the city’s Department of Consumer and Worker Protection, the rulemaking agency.

The result, according to some critics, is too accommodating to business interests.

“What could have been a landmark law has been watered down to make it ineffective,” said Alexandra Givens, president of the Center for Democracy and Technology, a policy and civil rights organization.

That’s because the law defines an “automated employment decision tool” as technology used “to substantially support or replace discretionary decision making,” she said. Rules adopted by the city appear to interpret that wording narrowly, so that artificial intelligence software will require an audit only if it is the sole or primary factor in hiring decisions or is used to override a human, Ms. Givens said.

That leaves out the main way automated software is used, she said, with a hiring manager always making the final choice. The potential for AI-driven discrimination, she said, typically comes in screening hundreds or thousands of candidates down to a handful, or in targeted online recruiting to build a pool of candidates.

Ms. Givens also criticized the law for limiting the kinds of groups measured for unfair treatment. It covers bias based on gender, race and ethnicity, but not discrimination against older workers or people with disabilities.

“My biggest concern is that this becomes a pattern at a national level when we should be asking a lot more from our politicians,” Ms. Givens said.

City officials said the law was narrowed to make it focused and enforceable. The council and the worker protection agency heard many voices, including public interest activists and software companies. The goal, officials said, was to weigh the trade-offs between innovation and potential harm.

“This is a significant regulatory achievement toward ensuring that AI technology is used ethically and responsibly,” said Robert Holden, who chaired the Council’s Technology Committee at the time of the bill’s passage and remains a member of the committee.

New York City is struggling to address new technology in the context of federal workplace laws with hiring guidelines that date back to the 1970s. The Equal Employment Opportunity Commission’s main rule states that no selection practice or method used by employers should have a “disparate impact” on a group protected by law, such as women or minorities.

Business groups criticized the law. In a filing this year, the Software Alliance, a trade group that includes Microsoft, SAP and Workday, said the requirement for independent AI audits is “not feasible” because “the control environment is nascent,” lacking standards and professional oversight bodies.

But the nascent field is a market opportunity. Experts say AI auditing will only grow, and it is already attracting law firms, consultants and start-ups.

Companies that sell artificial intelligence software to help make hiring and promotion decisions have generally embraced regulation. Some have already passed external audits. They see the requirement as a potential competitive advantage, providing evidence that their technology is expanding the pool of job applicants for companies and increasing opportunities for workers.

“We believe we can meet the law and show what good AI looks like,” said Roy Wang, general counsel at Eightfold AI, a Silicon Valley startup that makes software that helps hiring managers.

The New York City law also takes an approach to regulating AI that may become the norm. A key measurement required by the law is an “impact ratio,” a calculation of the software’s effect on a protected group of job applicants. The law does not address how an algorithm makes its decisions, a concept known as “explainability.”
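To make that calculation concrete, here is a minimal sketch of how an impact ratio can be computed. The group names and applicant counts are hypothetical; the ratio shown (each group’s selection rate divided by the rate of the most-selected group) follows the city’s published bias-audit rules, and the 0.8 cutoff is the EEOC’s long-standing “four-fifths” guideline rather than a threshold set by the New York law.

```python
# Hypothetical applicant counts per group (illustrative numbers only).
applicants = {
    "group_a": {"selected": 48, "total": 100},
    "group_b": {"selected": 30, "total": 100},
}

# Selection rate: the share of each group's applicants who advance.
rates = {g: c["selected"] / c["total"] for g, c in applicants.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    # Impact ratio: this group's rate relative to the most-selected group.
    impact_ratio = rate / top_rate
    # 0.8 is the EEOC "four-fifths" rule of thumb, not a threshold
    # set by the New York City law itself.
    flag = " <- below the four-fifths guideline" if impact_ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f}{flag}")
```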

In life-affecting applications like recruiting, critics say, people have a right to an explanation of how a decision was made. But the AI behind ChatGPT-style software is increasingly complex, perhaps putting the goal of explainable AI out of reach, some experts say.

“The focus becomes the output of the algorithm, not the operation of the algorithm,” said Ashley Casovan, executive director of the Responsible AI Institute, which develops certifications for the safe use of AI applications in the workplace, healthcare and finance.
