The UK’s Competition and Markets Authority (CMA) has concluded its initial review of generative AI, announced back in May, with a report containing seven proposed principles intended to ensure “consumer protection and healthy competition are at the heart of responsible development and use of foundation models” (FMs), as it puts it.
The principles considered by the competition watchdog, as it begins its second round of stakeholder engagement on the potential market impact of AI, are:
- Accountability: “FM developers and deployers are accountable for outputs provided to consumers”
- Access: “Ongoing ready access to key inputs, without unnecessary restrictions”
- Diversity: “Sustained diversity of business models, including both open and closed”
- Choice: “Sufficient choice for businesses so they can decide how to use FMs”
- Flexibility: “Having the flexibility to switch and/or use multiple FMs according to need”
- Fair dealing: “No anti-competitive conduct including self-preferencing, tying or bundling”
- Transparency: “Consumers and businesses are given information about the risks and limitations of FM-generated content so they can make informed choices”
The competition watchdog has drawn on its experience of regulating for market competitiveness, together with preliminary research and feedback from AI stakeholders, to produce this first draft of pro-innovation principles. The move follows instructions from the UK government for existing regulators to consider AI’s impact on their patch. Down the line, the CMA could promote such a list as best practice for avoiding competition complaints at the cutting edge of AI.
Nothing is set in stone yet, though, with another update on its thinking in this area due in early 2024. So watch this space.
Scoping AI Impact
“There is a lot at stake in this foundation models market, for both competition and consumers. If the market works well, the best products win out, and so do consumers and businesses. But if it doesn’t, people can really lose out and businesses can struggle to compete. So with this review… we wanted to be on the front foot as much as possible — trying to understand what’s going on rather than coming in after the fact,” said Will Hayter, senior director of the CMA’s Digital Markets Unit (DMU), in an interview with TechCrunch.
“There are tons of potential benefits from these models. But, of course, some risks too. And we think those gains or losses could materialize very quickly. So we’ve tried to focus on some potentially positive outcomes and some potentially less positive outcomes. And then we’ve really thought about the types of drivers that push in one direction or the other.”
On foundation models themselves, the CMA said its focus is on large-scale AI models that can be adapted, through a process of fine-tuning, into downstream consumer applications — so its interest here is the specific role such AI plays in the supply chain for others who develop consumer-facing apps and services.
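For readers unfamiliar with the term, here is a minimal, purely illustrative sketch of the fine-tuning step the CMA is describing: a downstream developer adapting a pretrained model to a specific consumer-facing task. Nothing below comes from the CMA’s report — the tiny stand-in “backbone”, the task head and the toy data are all hypothetical, chosen only to show the pattern.

```python
# Illustrative sketch only: adapting a pretrained model to a downstream task.
# The tiny "backbone", the task head and the toy data are all hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a pretrained foundation model supplied by an upstream developer
# (in reality: billions of parameters, trained on broad data).
backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 64))

# The downstream developer adds a small task-specific head, e.g. a two-class
# classifier behind a consumer-facing feature.
head = nn.Linear(64, 2)
model = nn.Sequential(backbone, head)

# Fine-tuning: all weights are updated, but at a small learning rate so the
# pretrained knowledge is adapted rather than overwritten.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Toy stand-in for the application's own (possibly proprietary) training data.
features = torch.randn(32, 128)
labels = torch.randint(0, 2, (32,))

for step in range(20):  # a short fine-tuning loop
    logits = model(features)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The adapted model is what ends up inside the consumer-facing app or service.
```

In practice the backbone would be a large foundation model from an upstream lab, and the downstream training data might well be proprietary — which is exactly the access question the CMA flags later in its report.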
Hayter confirmed it is too early for the CMA to have an established view on how this still rapidly developing AI technology might impact markets, or to say whether foundation model makers could be future candidates for bespoke regulation under the UK’s planned “pro-competition” overhaul of the rules applied to Big Tech via so-called “Strategic Market Status” (a long-trailed digital regulation reboot recently revived by Prime Minister Rishi Sunak) — telling us: “I think it would be really wrong to prejudge and try and predict at this point.”
But the regulator is clearly keen to be proactive when it comes to cutting-edge technologies with such huge potential for impact.
“We think it’s important to try to move the market toward some of these positive outcomes. And that’s what the proposed set of principles is trying to achieve,” Hayter said. “But I would really emphasize that at this stage they are just that: proposed.
“We’ve done the report, but it means we can now go out and discuss both the content and the principles with different types of organizations… to see how those principles can be refined and adopted to drive those more positive outcomes. So we’re just at the beginning of the conversation, which we’re looking forward to.”
The CMA received some 70+ responses to a call for input ahead of the review. Hayter would not be drawn on a breakdown of where this feedback came from but said the regulator had “heard from a variety of organizations — (AI) labs, large companies, some civil society, academia, a range of experts”, as well as conducting its own research to feed the report.
“We have pulled in a lot of diverse inputs. But, again, what I’m really looking forward to now is that, now we have these initial thoughts and these principles, we can really use that as a framework for the next round of conversations — and see how we might be able to work collaboratively with a variety of organizations to try and help get the market to the best possible place,” he added.
The UK government set out its own principles to guide the development of AI in its policy white paper in March. There is some overlap between the two lists (the government’s five principles for AI are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress), but the CMA’s proposed principles are specifically targeted at potential risks falling within its competition and consumer protection purview.
It’s also worth noting that the review doesn’t look at the full spectrum of potential consumer concerns — the CMA notes, for example, that issues like security and data protection aren’t considered in this initial work. Here it seems keen to stay well within its regulatory lane (issues like data security and privacy fall more clearly to the Information Commissioner’s Office, which has also been issuing guidance for generative AI developers).
Asked about this potential gap, the CMA stressed that it will also work with other UK regulators looking into AI under the government’s plan for developing context-specific guidance — pointing to the Digital Regulation Cooperation Forum (established by the CMA, ICO and Ofcom in July 2020, with the FCA joining as a full member in April 2021) as playing a key role in any relevant joint work.
Targeted approach
A big question facing competition regulators is how to balance allowing novel AI technologies (and business models) to evolve against the sense of urgency created by the technology’s scale and power — and the need for speed in dealing with what could be a fresh wave of problems brewing in digital markets that have, for years, been characterized by issues like tipping and unfair dealing, while consumers have faced exploitative business models pushed through dominant platforms operating under their own self-serving T&Cs.
The UK’s answer to such concerns is a reboot of its domestic competition regime that adds a proactive system of bespoke rules which the DMU, where Hayter is a senior director, will be able to apply to the most powerful platforms.
The European Union already has its own flavor of digital competition reform up and running (the Digital Markets Act, which applies to Alphabet, Amazon, Apple, ByteDance, Meta and Microsoft), and Germany has, for some years, operated its own updated competition rules aimed at Big Tech. The UK therefore lags behind peers in addressing the market power of tech giants.
On foundation models, Hayter suggested that some potentially negative scenarios for market outcomes could stem from problems similar to those seen with the current generation of Big Tech. However, he said there is much uncertainty over how the AI power play will shake out — whether the market will tip into another wave of AI-fueled concentration (powered by a few dominant foundation model makers) or, conversely, into vibrant competition as businesses tap into the power of FMs.
According to him, both scenarios are possible.
“You could see a situation where these models help new challengers take on the larger players, and that would be great… it could challenge those positions of market power,” he argued. “On the flip side, of course, depending on some of these issues that we’ve highlighted — things like how access to key inputs can be controlled — you could get the opposite situation, where these foundation models actually help companies in currently strong positions to entrench those positions further.
“It’s all really going to depend on the specific context and the specific market — and you might well see one scenario in one market and another scenario in another — so we really, really need to focus on the evidence, the reality and the specifics of the market, and be prepared to take action where necessary. But not jump in too soon.”
While the review might be seen as an early intervention by the CMA to get a grip on emerging AI developments, Hayter’s top-line message is one of reassurance to the industry: the UK regulator will not rush to rein in the cutting edge.
Any future regulation (or set of affirmed principles) would need to be “very narrowly targeted,” he stressed.
“We certainly need to do a lot of monitoring, and I think any kind of regulation… whether it’s for competition or other reasons, needs to address specific questions and problems based on actual evidence,” he added. “This report certainly doesn’t suggest jumping in and regulating… it’s trying to identify the kinds of things that will help realize the maximum potential of the technology, and the kinds of things to look out for.”
“We’re trying very hard not to second-guess what’s going on here,” he added. “We’re trying to work out… what are the specific drivers that will push in one direction or the other. So we highlight access to data, and the driver we explored in that area was whether access to proprietary data would be important. We’ve heard that, at the moment, there’s a fair amount of publicly available data to train the models on, but it’s possible that, over time, proprietary data will play a bigger role — and then, as you say, it could play into the hands of companies that have big banks of proprietary data. But we don’t think that’s the case at this point, and it may not happen — it depends on how the market develops. And likewise on access to computation.
“This is obviously a very important input to these models, which is why we’ve highlighted access as one of these key principles. And… there’s work ongoing elsewhere, with Ofcom considering access to public cloud services, which it has provisionally proposed referring to us for a market investigation. So we’re waiting to see the outcome of that.”
“There is a broader question about the ability of the current framework to respond to new developments,” he told us during the interview. “You’ve seen us explain in a number of contexts that the existing tools — namely competition and consumer law enforcement — can sometimes be a little slow to respond to these kinds of developments, which is why we’re keen to help get the Digital Markets, Competition and Consumers Bill through Parliament. But just to re-emphasize… that means targeting very specific problems, with a deliberately high bar for taking any action — that’s the concept of Strategic Market Status in that framework.
“The Digital Markets, Competition and Consumers Bill — the Strategic Market Status framework — gives us a broad framework to be able to address some of these issues that arise in digital markets. But we certainly don’t want to rush in and do anything here (with FMs), because we still think this market can develop in a positive direction, especially if these principles are supported.”