Opinion: The relationship between AI and policymakers is currently a chaotic mess. But I predict that by 2028, we’ll see the emergence of AI-literate policymakers who understand the technology’s potential and risks, leading to more effective and balanced regulations. How will this transformation unfold?
Key Takeaways
- By Q4 2027, expect to see at least three major pieces of legislation in the US aimed at regulating AI’s impact on employment, with a focus on retraining programs for displaced workers.
- The EU’s AI Act, initially passed in 2024, will undergo significant revisions by mid-2026 to address loopholes identified in its initial implementation, specifically regarding generative AI.
- To prepare for increased AI regulation, businesses should conduct internal audits of their AI systems by the end of 2026, focusing on data privacy, algorithmic bias, and transparency.
The Current State: A Regulatory Vacuum
Right now, the reality is that most policymakers are playing catch-up. They’re grappling with a technology that’s evolving faster than they can understand it. I’ve seen this firsthand at industry roundtables. Last year, I attended a meeting at the Fulton County Government Center where the discussion around AI in local government was… rudimentary, to put it kindly. There was a lot of talk about “the cloud” and not much about the actual implications of using AI in, say, predictive policing or resource allocation.
This knowledge gap leads to two major problems: knee-jerk reactions and regulatory capture. Knee-jerk reactions happen when policymakers, faced with public pressure or a scary headline, rush to implement regulations without fully understanding the consequences. Think about the initial responses to social media a decade ago – a lot of poorly thought-out laws that were later walked back or struck down. Regulatory capture, on the other hand, occurs when industry insiders, those with deep pockets and technical expertise, influence policy to benefit their own interests. It’s a constant battle to ensure regulations serve the public good, not just corporate bottom lines. According to a report by the Center for Public Integrity, lobbying efforts around AI regulation have increased by 300% since 2023.
Prediction 1: The Rise of the AI-Savvy Politician
This regulatory vacuum won’t last. The increasing pervasiveness of AI in everyday life—from personalized medicine to automated transportation—will force policymakers to become more informed. We’ll see a new generation of politicians and regulators emerge, individuals with backgrounds in STEM fields or a genuine interest in understanding the technology.
How will this happen? Several ways. First, universities are already starting to offer specialized programs for policymakers on AI ethics and governance. Second, expect to see more “tech fellowships” that place experts in government agencies. I predict that by 2028, every major congressional committee will have at least one staffer with a PhD in computer science or a related field. Third, AI literacy initiatives will become more common, with organizations like the Electronic Frontier Foundation offering training and resources to policymakers.
Prediction 2: Sector-Specific Regulation
General AI regulation is a fool’s errand. It’s too broad, too vague, and ultimately ineffective. Instead, expect to see a shift towards sector-specific regulations that address the unique challenges and risks posed by AI in different industries.
For example, in healthcare, we’ll see stricter regulations around the use of AI in diagnostics and treatment, focusing on issues like data privacy and algorithmic bias. The FDA is already working on guidelines for AI-powered medical devices, but expect those guidelines to become much more prescriptive in the coming years. In finance, regulators will focus on preventing AI-driven market manipulation and ensuring fair lending practices. The SEC is likely to introduce new rules requiring firms to disclose their use of AI in trading algorithms. And in the realm of criminal justice, expect intense scrutiny of AI-powered surveillance technologies and predictive policing algorithms. The ACLU of Georgia has already filed several lawsuits challenging the use of facial recognition technology by local law enforcement.
This sector-specific approach will allow regulators to tailor their policies to the specific risks and benefits of AI in each industry, leading to more effective and targeted regulation.
Prediction 3: International Cooperation and Competition
AI is a global technology, and its regulation will require international cooperation. But let’s be realistic: it’ll also be a source of international competition. Countries that develop effective and balanced AI regulations will attract investment and talent, while those that stifle innovation with overly restrictive policies will fall behind.
The European Union’s AI Act, passed in 2024, is a prime example of this dynamic. While some applaud the EU for taking a proactive approach to AI regulation, others worry that its strict rules will stifle innovation and give other countries, like the United States and China, a competitive advantage. According to a report by the Brookings Institution, the AI Act could cost the EU billions of dollars in lost investment.
Expect to see a complex interplay of cooperation and competition in the coming years, with countries working together on some issues (like data privacy and cybersecurity) while competing on others (like AI research and development). The G7 is likely to play a key role in coordinating international AI policy, but ultimately, each country will need to develop its own approach based on its unique values and priorities.
Addressing the Counterarguments
Some argue that any regulation of AI will stifle innovation and that we should let the technology develop freely. I disagree. Unfettered AI development poses significant risks to society, from job displacement to algorithmic bias to the potential for misuse. The argument that regulation always hinders innovation is simply not supported by the evidence. In many cases, regulation can actually foster innovation by creating a level playing field and providing clear guidelines for companies to follow.
Others argue that AI is too complex to regulate effectively and that policymakers will never be able to keep up with the pace of technological change. While it’s true that AI is a complex technology, that doesn’t mean we shouldn’t try to regulate it. We regulate all sorts of complex technologies, from nuclear power to pharmaceuticals. The key is to focus on principles-based regulation that sets broad goals and standards, rather than trying to micromanage the technology itself.
What’s more, failure to regulate now will only make the problem worse later. As AI becomes more deeply integrated into our lives, the potential for harm will only increase. We need to start building the regulatory frameworks now to ensure that AI is used for good, not ill.
We had a case study last year that exemplifies the need for clear guidance. A client, a small fintech startup in Atlanta, was developing an AI-powered loan application system. They were struggling to comply with existing fair lending laws, as the AI algorithms were inadvertently discriminating against certain demographic groups. Without clear guidance from regulators, they were essentially flying blind, unsure of how to ensure their system was fair and compliant. This situation highlights the urgent need for policymakers to provide clear and actionable guidance to companies developing and deploying AI systems.
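The startup’s core problem, detecting whether an approval model disadvantages a protected group, can be made concrete with a standard screening metric: the four-fifths rule, under which a group’s approval rate should be at least 80% of the highest group’s rate. The sketch below uses hypothetical data and generic group labels; a real fair-lending audit would run on the lender’s actual decisions and legally defined protected classes.

```python
# Sketch of a disparate-impact check using the four-fifths rule.
# All data below is hypothetical.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Return groups whose approval rate falls below `threshold`
    times the best-performing group's rate, with their ratios."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical audit sample: 10 applicants per group,
# 8/10 approved in group A, 5/10 in group B.
sample = [("A", True)] * 8 + [("A", False)] * 2 \
       + [("B", True)] * 5 + [("B", False)] * 5

flagged = disparate_impact(sample)
print(flagged)  # group B flagged: its rate is 62.5% of group A's
```

A check like this is only a first-pass screen, not a legal determination, but it is exactly the kind of test a startup could run internally while waiting for regulators to spell out what compliance looks like.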
The time for action is now. Call your representatives in the Georgia State House (404-656-5000) and demand they prioritize AI literacy and responsible regulation. The future of our society depends on it.
What specific skills will AI-savvy policymakers need?
They’ll need a solid understanding of machine learning principles, data privacy, algorithmic bias, and cybersecurity. They also need to be able to critically evaluate the claims made by AI developers and understand the potential social and economic impacts of the technology.
How will sector-specific AI regulations be enforced?
Existing regulatory agencies, such as the FDA, SEC, and FTC, will likely be responsible for enforcing AI regulations within their respective domains. They may need to develop new expertise and tools to effectively monitor and audit AI systems.
What are the biggest challenges to international cooperation on AI regulation?
Differing values, priorities, and legal systems pose significant challenges to international cooperation. Some countries may prioritize economic growth over ethical considerations, while others may have stricter data privacy laws. Finding common ground will require compromise and a willingness to bridge cultural and political divides.
How can businesses prepare for increased AI regulation?
Businesses should conduct internal audits of their AI systems, focusing on data privacy, algorithmic bias, and transparency. They should also invest in AI ethics training for their employees and engage with policymakers to help shape the regulatory landscape.
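One lightweight way to operationalize such an audit is to keep a structured inventory record per AI system covering the three areas above: data privacy, algorithmic bias, and transparency. The field names and checks below are illustrative, not a regulatory standard.

```python
from dataclasses import dataclass

@dataclass
class AISystemAudit:
    """Illustrative internal audit record for one deployed AI system.
    Fields are hypothetical, not drawn from any regulation."""
    system_name: str
    purpose: str
    personal_data_used: list         # data-privacy review
    bias_tests_run: dict             # bias review: test name -> passed?
    decision_logic_documented: bool  # transparency review
    human_override_available: bool

    def open_issues(self):
        """List unresolved audit items for this system."""
        issues = []
        if self.personal_data_used and not self.decision_logic_documented:
            issues.append("personal data used without documented decision logic")
        issues += [f"failed bias test: {t}"
                   for t, ok in self.bias_tests_run.items() if not ok]
        if not self.human_override_available:
            issues.append("no human override for automated decisions")
        return issues

audit = AISystemAudit(
    system_name="loan-screening-v2",
    purpose="pre-screen consumer loan applications",
    personal_data_used=["income", "zip code"],
    bias_tests_run={"four-fifths rule": False},
    decision_logic_documented=True,
    human_override_available=True,
)
print(audit.open_issues())  # one unresolved item: the failed bias test
```

Even a simple record like this gives a company something concrete to show regulators, and a running list of gaps to close before rules arrive.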
What role will AI play in the regulatory process itself?
AI can be used to automate regulatory compliance, monitor AI systems for violations, and identify emerging risks. However, it’s important to ensure that AI is used responsibly and ethically in the regulatory process, with appropriate human oversight.
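As a toy illustration of that monitoring role, a compliance scanner might walk a decision log and flag entries that violate declared rules. The rules and log format here are hypothetical; a real system would encode actual regulatory requirements.

```python
# Toy compliance monitor: scan a decision log for rule violations.
# Rules and log schema are hypothetical illustrations.

RULES = [
    ("denial must include a stated reason",
     lambda e: e["decision"] != "deny" or bool(e.get("reason"))),
    ("human-review status must be recorded",
     lambda e: "human_reviewed" in e),
]

def scan(log):
    """Return (entry index, rule name) for every violation found."""
    violations = []
    for i, entry in enumerate(log):
        for name, check in RULES:
            if not check(entry):
                violations.append((i, name))
    return violations

log = [
    {"decision": "approve", "human_reviewed": False},
    {"decision": "deny", "human_reviewed": True},  # no reason recorded
]
print(scan(log))  # flags entry 1 for the missing denial reason
```

The human oversight mentioned above enters when a flag fires: the scanner surfaces the violation, and a person decides what it means and what to do about it.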
It’s time to stop treating AI like magic and start treating it like the powerful tool it is. Demand that your elected officials take AI seriously, educate themselves, and develop thoughtful regulations that protect our society while fostering innovation. The future is not predetermined – it’s up to us to shape it.