
How U.S. Federal Agencies Are Using Existing Laws to Regulate AI

Artificial Intelligence (AI) is advancing rapidly, offering unprecedented benefits but also posing significant risks. Yet comprehensive AI-specific legislation remains elusive in the U.S. As a result, federal agencies are turning to existing laws to regulate AI's impact in areas such as consumer protection, civil rights, and national security.

Leveraging Existing Laws

Since new AI-specific regulations have yet to be enacted, federal agencies are increasingly relying on their current regulatory frameworks to manage AI-related risks. Agencies such as the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), and the Department of Labor are already applying long-standing laws in creative ways to address AI challenges.

  • Federal Trade Commission (FTC): The FTC is using its mandate to prevent deceptive business practices and unfair competition to tackle AI-related fraud and bias. For instance, the agency has investigated AI's role in data privacy violations and discrimination in consumer lending practices (Law.com; Law & Policy Hub).
  • Consumer Financial Protection Bureau (CFPB): The CFPB is applying existing consumer protection laws to AI. It has warned that AI must comply with regulations like the Equal Credit Opportunity Act to ensure that AI-driven lending does not discriminate against protected groups (Law.com).
  • Civil Rights and AI: Several agencies are also addressing AI's implications for civil rights. The Department of Justice (DOJ) and the Equal Employment Opportunity Commission (EEOC) have committed to ensuring that AI technologies, particularly in hiring, do not perpetuate discriminatory practices prohibited under laws like Title VII of the Civil Rights Act (Privacy Pros; Holistic AI).

The Biden Administration’s Approach

In addition to federal agencies, the Biden administration has issued executive orders aimed at curbing AI risks. These orders focus on safe and responsible AI deployment, directing agencies to take immediate steps under existing laws while new regulations are drafted. The October 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, for example, directs federal agencies to assess the impact of AI on areas such as data privacy, public safety, and employment (KPMG; Law & Policy Hub).

Congressional Action

Congress has held multiple hearings addressing AI's implications for intellectual property, civil rights, and democracy, but progress toward comprehensive AI legislation remains slow. In the meantime, targeted bills such as the "No AI Fraud Act" and the "AI Disclosure Act" have been introduced to address deepfakes and to require disclosure when content is generated by AI (Holistic AI).

Challenges and the Road Ahead

While existing laws provide a stopgap, many experts argue that they are not a long-term solution. The pace at which AI is evolving will eventually outstrip the ability of existing frameworks to manage its risks. A more robust legal framework designed specifically for AI is needed to address emerging challenges such as bias in algorithms, AI's role in employment decisions, and threats to national security (KPMG).

Conclusion

Federal agencies are stepping up their efforts to regulate AI by creatively applying existing laws. While this has provided some temporary solutions, the complexity and rapid growth of AI technology require comprehensive and tailored legislation to ensure that its benefits are realized while mitigating its risks.

TopicLake Insights Publication. AI Assisted.