Last Thursday, Meta announced Llama 3, the latest version of its large language model (LLM). The new model comes with what Meta claims is a “high-quality” training data set and new computer-programming capabilities. Chris Cox, Meta’s chief product officer, predicts that future versions will feature advanced reasoning driven by “multimodality.” Meta’s ambitions grab the headlines, but these systems are so complex that Cox and his colleagues’ predictions are likely to be off the mark; Llama 2, for example, struggled to understand basic context. Regulators must recognize the complexity of the artificial intelligence ecosystem and allow developers to adjust and improve their models throughout the deployment process.
Researchers, engineers, businesses, academic institutions, and government agencies are collaborating across disciplines and industries to integrate AI into a wide range of complex sociotechnical and economic systems, as ChatGPT and Gemini illustrate. Developing these foundation models requires collaboration among linguists, computer scientists, engineers, and companies with the computational power and data needed to build and train them. The process also requires funders to finance development, scientists from other fields such as sociology and ethics, and companies to put the models to work in customer-facing applications such as websites.
The resulting ecosystems are extraordinarily complex, exhibiting the hallmarks of complex systems: incomplete knowledge, uncertainty, unpredictability, asynchrony, and non-decomposability. Because of the sheer number of interconnected components, the behavior of the system as a whole cannot easily be predicted or controlled. The proliferation of AI applications therefore poses new challenges for understanding, explaining, and controlling emergent behaviors in coupled systems. For example, LLMs are prone to “hallucinations,” reporting false information (independent research suggests this happens roughly 20 percent of the time, even with the most “truthful” systems currently available). Because of the models’ complexity, even their creators cannot explain why and how any particular falsehood is produced, making it difficult to devise systematic ways of detecting or preventing such behavior.
Governance of complex systems such as AI ecosystems requires policymakers to account for differing perspectives, unintended consequences, and unpredictable emergent behaviors, both of the systems themselves and of the humans who interact with them. It may not even be clear where regulatory responsibility lies, as applications may be developed and deployed across multiple jurisdictions and their effects may span many sectors. At a minimum, effective governance requires coordination and cooperation among multiple stakeholders. That coordination is itself complex, because the ecosystem is constantly changing as new systems are developed, new applications are deployed, and more experience is gained.
To date, EU and U.S. regulations have been premised on risk and risk management, aiming to assure society that AI will be developed and deployed in a manner deemed “safe.”
The EU rules build on the continent’s experience with regulations that ensure product safety and protect individuals from known harms, such as privacy violations, associated with certain uses of AI. Systems are classified according to the perceived risk they pose. Prohibited AI includes systems that manipulate individuals’ behavior in certain undesirable ways and certain technologies (e.g., biometric data processing, facial recognition) used in given situations.
High-risk AI, which requires extensive documentation, auditing, and pre-certification, covers applications already subject to existing EU product-safety legislation (e.g., toys, protective equipment, agricultural and forestry vehicles, civil aviation, and the interoperability of rail systems), as well as widely used applications where physical safety is a priority (e.g., critical infrastructure) or where there is a risk of psychological or economic harm (e.g., access to education, employment, or services). Low-risk applications, which must meet only transparency obligations, perform narrow procedural tasks or aim to improve the outcomes of human decision-making, with the final decision remaining under the control of a human decision-maker.
The U.S. Office of Management and Budget’s rules on government use of AI are less restrictive and prescriptive than the EU’s, but they too address AI risks and the governance and innovation issues directly related to the use of AI by government agencies. Specifically, the emphasis is on addressing risks that “result from reliance on AI output to inform, influence, determine, or implement agency decisions and actions” in any way “that may impair the effectiveness, safety, fairness, equity, transparency, accountability, appropriateness, or legality” of those decisions and actions.
In both cases, the risks addressed arise almost exclusively from specific products, activities, decisions, or uses of AI, rather than from the complex ecosystem in which AI operates. The regulated scope is narrowed to a specific set of circumstances, actions, actors, and outcomes that are already widely known and presumed controllable. Even the EU’s prohibited uses are limited to specific outcomes that have already been largely identified and described. And in each case, a specific individual must take ultimate responsibility for the AI and for its regulatory reporting and management.
Neither regulation addresses the complexity, uncertainty, unpredictability, asynchrony, and non-decomposability of the ecosystem in which AI operates. Indeed, what stands out is that “complexity” and “uncertainty” are plainly not taken into account. Nor does either regime appear to accommodate the extensive multi-stakeholder collaboration and multiple perspectives needed to manage complex, dynamic systems.
Perhaps it is time for some regulatory humility: recognizing what these regulations can and cannot accomplish. They cannot guarantee safety as AI development and deployment proceed. They do not acknowledge what we know, what we do not know, and what we cannot know, given the bounded rationality of the humans who oversee them. They seek to manage only a subset of risks that have already been identified or predicted. As we gain experience with the new ecosystems in operation, we should expect surprises: unanticipated emergent behaviors and discoveries of things we did not previously know or understand.
The question is how to deal with such a situation. We need leadership in the discussion of how our society should evolve in the face of such unavoidable uncertainty. Existing, backward-looking regulatory efforts are unlikely to be sufficient, and it is hard to imagine an alternative to this large and complex undertaking, which must necessarily look forward toward an inherently uncertain future.
Bronwyn Howell is an adjunct senior fellow at the American Enterprise Institute, where she focuses on the regulation, development, and implementation of new technologies.