The Labor Department is tasked with leading the development of artificial intelligence guidelines for employers under a sweeping executive order signed by President Joe Biden on Monday.
The order directs the department, in consultation with other federal agencies, to craft a set of standards to guide companies in mitigating the risks of AI adoption in the workplace, including displacement of jobs and increased collection of employee data. The standards must be published within six months.
“AI must be developed and assessed within clearly articulated frameworks that put human values — not corporate profits — at its center,” AFL-CIO President Liz Shuler said in a statement welcoming Biden’s order.
In a global survey released by McKinsey in August, 40% of C-suite executives said their organizations were planning to increase their investment in AI in the wake of recent advancements, particularly the rise of a “generative” form of the technology capable of producing text, images or other content based on data used to “train” it.
“The expected business disruption from gen AI is significant, and respondents predict meaningful changes to their workforces,” McKinsey said in a report on the findings. “They anticipate workforce cuts in certain areas and large reskilling efforts to address shifting talent needs.”
Goldman Sachs estimates, using data on occupational tasks in both the U.S. and Europe, that generative AI technologies could eventually automate up to one-fourth of current work.
IBM CEO Arvind Krishna has said the company expects to pause hiring for certain non-customer-facing roles it thinks could be replaced with AI in coming years, Bloomberg News reported in May. “I could easily see 30% of that getting replaced by AI and automation over a five-year period,” he told the publication.
Meanwhile, the risks associated with the technology are gaining increased attention in Washington.
“We face a genuine inflection point in history, one of those moments where the decisions we make in the very near term are going to set the course for the next decades,” Biden said at a White House event just before signing his new order.
Besides calling for the Labor Department to create new AI standards for employers, the EO directs the National Institute of Standards and Technology within the Commerce Department to establish guidelines to promote “safe, secure, and trustworthy” AI systems.
In another section, the Federal Trade Commission, an independent agency, is “encouraged” to consider exercising its existing rulemaking authority to ensure fair competition in the AI marketplace and to protect consumers and workers from AI-related harms.
Biden said the order builds on “critical steps” the administration has previously taken to address AI-related concerns.
In September, the White House announced that eight technology companies, including Salesforce, Adobe, IBM and Nvidia, had joined a group committed to a set of voluntary safeguards for the technology. Those commitments include developing “robust technical mechanisms,” such as watermarking systems, to ensure that users know when content is AI-generated; publicly reporting their AI systems’ capabilities, limitations and areas of appropriate and inappropriate use; and prioritizing research on the societal risks AI systems can pose, including avoiding harmful bias and discrimination and protecting privacy.
In May, the White House convened a listening session with workers, researchers, labor and civil rights leaders, and policymakers on the use of related technologies by employers to monitor, evaluate and manage their workers.
Biden’s latest move drew a mixed reaction from the U.S. Chamber of Commerce.
“The Chamber appreciates the priorities outlined in the Executive Order, such as attracting highly skilled workers, bolstering resources needed for intra-government coordination, and speeding up the development of standards,” Tom Quaadman, executive vice president of the Chamber’s Technology Engagement Center, said in a statement.
Still, he warned that a plan with short, overlapping timelines for agency-required action “endangers necessary stakeholder input, thereby creating conditions for ill-informed rulemaking and degrading intra-government cooperation.”
Quaadman also cautioned agencies such as the FTC and the Consumer Financial Protection Bureau against viewing the order “as a license to do as they please” rather than acting within the limits of their congressional mandates.