President Joe Biden’s executive order on artificial intelligence is setting up a tug of war between those who fear agencies empowered under it will overstep their bounds and those who worry the government won’t do enough.
Last month’s order requires multiple departments to collect public comments, draw up new regulations and prepare a slew of reports. It hands significant responsibilities to the Homeland Security and Commerce departments, including the National Institute of Standards and Technology, which is charged with developing safety standards.
The secretary of Homeland Security is directed to establish an advisory AI Safety and Security Board to improve security. The Defense, Veterans Affairs, and Health and Human Services departments must develop regulations for responsible use of AI in their respective fields.
Photo caption: President Joe Biden hands Vice President Kamala Harris the pen he used to sign a new executive order regarding artificial intelligence on Oct. 30 at the White House.
The order directs the Federal Trade Commission, the Consumer Financial Protection Bureau and the Federal Housing Finance Agency to draw up regulations to address bias and other harms by artificial intelligence systems. The FTC must also examine whether it could enforce fair competition among AI companies using existing authorities.
It sets a timeline of three to nine months for the various agencies and departments to produce several reports. The agencies must also solicit public comment before drawing up new regulations and identify new funding opportunities for AI in several fields.
The volume of activity is drawing attention from a spectrum of special interests. The U.S. Chamber of Commerce, which represents the largest U.S. companies, welcomed the executive order, saying that it could help the United States set a global standard for AI safety while funding a slew of new projects.
But Jordan Crenshaw, a senior vice president at the chamber, said he was concerned about multiple new regulations as well as the number of public comments required by various agencies. He said agencies like the FTC, CFPB and FHFA, “which have already been shown to have exceeded their authority for trying to grab power, may use this (order) as a justification to continue how they have operated.”
Crenshaw gave the example of the FTC’s consideration of rules on commercial surveillance and data security, in which it asks the public whether such rules could be applied across the economy. He said any attempt by the FTC to impose such broad rules, without clear new authorities granted by Congress, could run afoul of the Supreme Court’s so-called major questions doctrine. The court, in West Virginia v. EPA, ruled in 2022 that the EPA went too far in its attempt to regulate greenhouse gas emissions without explicit authority from Congress.
Biden’s order creates as many as 90 different requests for comments by the various agencies that are tasked with drawing up regulations, Crenshaw said. “And with very short comment time frames, we might get comment overload and stakeholders may actually miss opportunities to weigh in just because of the massive amount of comments that we have to track,” he said.
Some digital rights groups fear that the order could result in little oversight.
“Biden has given the power to his agencies to now actually do something on AI,” Caitlin Seeley George, managing director at Fight for the Future, a nonprofit group that advocates for digital rights, said in an email. “In the best case scenario, agencies take all the potential actions that could stem from the executive order, and use all their resources to implement positive change for the benefit of everyday people.”
“But there’s also the possibility that agencies do the bare minimum, a choice that would render this executive order toothless and waste another year of our lives while vulnerable people continue to lose housing and job opportunities, experience increased surveillance at school and in public, and be unjustly targeted by law enforcement, all due to biased and discriminatory AI,” she said.
NIST is likely to play a pivotal role in creating new safety standards on AI.
Vice President Kamala Harris announced at the Global Summit for AI Safety in the U.K. last week that under Biden’s order NIST will establish an AI Safety Institute, which she said “will create rigorous standards to test the safety of AI models for public use.” But a study of NIST’s physical and financial needs mandated by Congress and completed by the National Academies of Sciences, Engineering, and Medicine in February found serious deficiencies at the agency.
Congress appropriated $1.65 billion for NIST in fiscal 2023. A spokesperson for NIST did not respond to questions on whether the agency plans to seek an increase in funding to meet the new requirements under the order.
But NIST will likely need to double its team of AI experts to 40 people to implement the president’s order, said Divyansh Kaushik, the associate director for Emerging Technologies and National Security at the Federation of American Scientists, who has studied NIST’s needs.
The agency will also need about $10 million “just to set up the institute” announced by Harris, Kaushik said. “They don’t have that money yet.”
Trends in AI ethics before and after ChatGPT

Computational systems demonstrating logic, reasoning, and understanding of verbal, written, and visual inputs have been around for decades. But development has sped up in recent years with work on so-called generative AI by companies such as OpenAI, Google, and Microsoft.
When OpenAI announced the launch of its generative AI chatbot ChatGPT in 2022, the system quickly gained more than 100 million users, earning it the fastest adoption rate of any piece of computer software in history.
With the rise of AI, many are embracing the technology’s possibilities for facilitating decision-making, speeding up information gathering, reducing human error in repetitive tasks, and enabling 24-7 availability for various tasks. But ethical concerns are also growing. Private companies are behind much of the development of AI, and for competitive reasons, they’re opaque about the algorithms they use in developing these tools. The systems make decisions based on the data they’re fed, but where that data comes from isn’t necessarily shared with the public.
Users don’t always know whether they’re using AI-based products, or whether their personal information is being used to train AI tools. Some worry that the data could be biased and lead to discrimination, disinformation, and, in the case of AI-based software in automobiles and other machinery, accidents and deaths.
The federal government is on its way to establishing regulatory powers to oversee AI development in the U.S. to help address these concerns. The National AI Advisory Committee recommends that companies and government agencies create Chief Responsible AI Officer roles, whose occupants would be encouraged to enforce a so-called AI Bill of Rights. The committee, established through a 2020 law, also recommends embedding AI-focused leadership in every government agency.
In the meantime, an independent organization called AIAAIC has taken up the torch in making AI-related issues more transparent. Magnifi, an AI investing platform, analyzed ethics complaints collected by AIAAIC regarding artificial intelligence dating back to 2012 to see how concerns about AI have grown over the last decade. Complaints originate from media reports and submissions reviewed by the AIAAIC.
A significant chunk of the public struggles to understand AI and fears its implications

Many consumers are aware when they’re interacting with AI-powered technology, such as when they ask a chatbot questions or get shopping recommendations based on past purchases. However, they’re less aware of how widespread these technologies have become.
When Pew Research surveyed Americans in December 2022 and asked whether they knew about six specific examples of how AI is used, only 3 in 10 adults knew all of them. The examples included how email services use AI to organize inboxes, how wearable fitness trackers rely on AI, and how security cameras might recognize faces. This limited understanding of how AI shows up in daily life shapes Americans’ attitudes toward the technology: Pew found that 38% of Americans are more concerned than excited about the increased use of AI.
As AI works its way into consumer tech, concerns grow to a fever pitch

Concerns about AI initially focused on social media companies and their algorithms—like the 2014 Facebook study in which the company’s researchers manipulated 700,000 users’ feeds without their knowledge, or algorithms spreading disinformation and propaganda during the 2020 presidential election.
The viral adoption of ChatGPT and multimedia creation tools over the last year has fueled concerns about AI’s effects on society, particularly around plagiarism, racism, sexism, bias, and the proliferation of inaccurate data.
In September 2022, an AIAAIC complaint against Upstart, a consumer lending company that used AI, cited racial discrimination in determining loan recipients. Other complaints focus on a lack of ethics used in training AI tools.
In June 2023, Adobe users and contributors filed an AIAAIC complaint about Adobe’s Firefly AI art generator, saying the company was unethical when it failed to inform them it used their images to train Firefly.
Government, technology, and media emerge as leading industries of concern

While the AIAAIC data set is imperfect and subjective, it’s among the few sources to track ethical concerns with AI tools. Many of the government agencies that have embraced AI—particularly law enforcement—have found themselves on the receiving end of public complaints. Examples include facial recognition technology that caused wrongful arrests in Louisiana and a quickly scrapped 2022 San Francisco Police Department policy that would have allowed remote-controlled robots to kill suspects.
Not surprisingly, many citizens and organizations have concerns about technology companies’ use of AI amid the rise of chatbots. Some complaints, involving ChatGPT and Google Bard, center on plagiarism and inaccurate information, which can reflect poorly on individuals and companies and spread misinformation.
The automotive industry is another sector where major players like Tesla leverage AI in their sprint toward autonomous vehicles. Tesla’s Autopilot software is the subject of much scrutiny, with the National Highway Traffic Safety Administration reporting the software has been connected with 736 crashes and 17 fatalities since 2019.
The optimistic case for AI’s future is rooted in the potential for scientific, medical, and educational advancements

As the federal government works toward legislation that establishes clearer regulatory powers to oversee AI development in the U.S. and ensure accountability, many industries ranging from agriculture and manufacturing to banking and marketing are poised to see major transformations.
The health care sector is one field gaining attention for how AI may significantly improve health outcomes and advance human society. The 2022 release of a technology that can predict protein shapes is helping medical researchers better understand diseases, for example. AI can also help pharmaceutical companies create new medications faster and more cheaply by speeding up the data analysis involved in the search for potential new drug molecules.
AI has the potential to benefit the lives of millions of patients as it fuels the expansion of telemedicine: it could help expand access to health care; assist with the diagnosis, treatment, and management of chronic conditions; and help more people age at home, all while potentially lowering costs.
Scientists also see potential for new insights by leveraging AI’s ability to crunch data and speed up scientific discovery. One example is Earth-2, a project that uses an AI weather prediction tool to better forecast extreme weather events and help people prepare for them. In education, experts believe AI tools could improve learning accessibility for underserved communities and help develop more personalized learning experiences.
In the financial sector, experts say AI raises considerable ethical concerns. Gary Gensler, the head of the U.S. Securities and Exchange Commission, told the New York Times that herding behavior (everyone relying on the same information), faulty advice, and conflicts of interest could spell economic disaster if not preempted. “You’re not supposed to put the adviser ahead of the investor, you’re not supposed to put the broker ahead of the investor,” Gensler said. To address those concerns, the SEC put forward a proposal that would regulate platforms’ use of AI, prohibiting them from putting their business needs before their customers’ best interests.
Story editing by Jeff Inglis. Copy editing by Kristen Wegrzyn.
This story originally appeared on Magnifi and was produced and distributed in partnership with Stacker Studio.