As artificial intelligence upends the tech industry, regulators and lawmakers are scrambling to respond. Microsoft’s departure from the OpenAI board, a Senate hearing on AI privacy, and experts calling for new regulatory approaches highlight the complex challenges facing the AI industry and its overseers.
Microsoft cuts ties with OpenAI board
As regulators on both sides of the Atlantic step up pressure on AI partnerships, Microsoft has reportedly given up its observer seat on OpenAI’s board of directors, a position the tech giant’s legal team argued provided insight without compromising OpenAI’s independence.
The move comes as the European Commission and US regulators are scrutinizing the close relationship between the two AI giants. The EU has reluctantly acknowledged that observer status does not threaten OpenAI’s autonomy, but is still seeking a third-party opinion on the deal.
Microsoft’s withdrawal from the board seat, which it first secured amid the drama over OpenAI’s leadership last November, appears aimed at heading off regulatory action. As AI continues to transform the tech industry, the move highlights the tightrope big tech companies must walk: balancing cooperation and independence under the scrutiny of global regulators.
The partnership between Microsoft and OpenAI, valued at more than $10 billion, is a cornerstone of both companies’ AI strategies. It has allowed Microsoft to integrate cutting-edge AI into its own products while providing OpenAI with significant computing resources. The collaboration has yielded high-profile products such as ChatGPT and the image generator DALL-E, sparking both excitement and concern about the rapid advancement of AI.
Senate dives into AI privacy issue
The Senate Commerce Committee is set to tackle the thorny issue of AI-driven privacy concerns in a hearing scheduled for Thursday (July 11).
The United States has been slow to enact privacy legislation despite being home to the tech giants driving AI innovation. In the meantime, states and other countries have tried to fill the void, creating a patchwork of regulations that companies find increasingly difficult to navigate.
The American Privacy Rights Act, a bipartisan effort, seemed poised to move forward but stalled last month when House Republican leaders applied the brakes. The bill aims to give consumers more control over their data, including the ability to opt out of targeted advertising and data transfers.
Thursday’s hearing is scheduled to feature testimony from legal and technology policy experts, including representatives from the University of Washington and Mozilla. As AI’s reach expands, pressure is growing on Congress to act. The question remains: Can lawmakers keep up with the breakneck pace of technological advances?
AI safety and competition: regulators face a tightrope walk
In a rapidly evolving AI environment, Brookings Institution fellows Tom Wheeler and Blair Levin are urging federal regulators to strike a delicate balance. As the Federal Trade Commission (FTC) and Department of Justice (DOJ) step up their antitrust scrutiny of AI collaborations, the two experts argued in a Monday (July 8) op-ed that promoting both competition and safety is not only important but achievable.
Wheeler and Levin propose a new regulatory approach, taking inspiration from sectors such as finance and energy. Their model features three key elements: an oversight process to develop evolving safety standards, market incentives to reward companies that exceed these standards, and rigorous monitoring of compliance.
To ease antitrust concerns, the authors point to past precedent in which governments have authorized collaboration with competitors in the national interest. They suggest that the FTC and DOJ issue a joint policy statement, similar to one issued on cybersecurity in 2014, to make it clear that legitimate AI safety collaborations do not raise antitrust red flags.
The proposal comes amid growing fears about the potential risks of AI and the concentration of power in the hands of a few large tech companies. Wheeler and Levin argue that a new approach is urgently needed because developments in AI are outpacing traditional regulatory frameworks.
Their proposals aim to strike a balance between unleashing AI’s potential and protecting the public interest. As policymakers grapple with these challenges, the authors’ recommendations could provide a roadmap for fostering a competitive, yet responsible, AI ecosystem.