Wednesday, March 29, 2023

Moral panic and AI regulation...

Open letters asking for bans strike me as the wrong approach: an unrepresentative few denying the many things they may want. It is a power play and profoundly undemocratic. I remember people demanding that we stop and block access to Wikipedia in schools and universities, and some actually did it... blocked access to a knowledge base. Why? Power - they saw themselves as the sole purveyors of knowledge. The blockers now are largely in academia, as this is a technology they fear. They see themselves as overseers and it threatens their status.

Blocking technology is sometimes a churlish attempt to hold onto power. I note that one minute they despise Elon Musk, then suddenly see him as a saviour! Fickle bunch. We are months into the release of ChatGPT and have hardly seen the end of civilisation. The release was deliberate, to test with a large number of real users across the globe. That worked, and GPT-4 is miles better due to feedback and human training. I note that most of the examples I see on Twitter are still from GPT-3.5.

You’d think, from the moral panic around AI, that no one was doing anything around ethics. Every man, woman and their dog is chucking out advice, frameworks, papers, rules, opinions and pronouncements on AI, as if they were the first to see the ethical problems. Much of it is not ‘ethics’ at all, as there is barely a mention of the benefits. That is a big problem, as the net benefits also need to be weighed in making an overall judgement. This is, of course, normal. Every major shift in technology gets this reaction - writing (read Plato), printing, calculators, the internet, Wikipedia, social media, computer games, smartphones… whenever a new tectonic plate rubs up against the old one, the old is subsumed beneath it and there is seismic activity, even a few volcanic outbursts!

We sometimes forget that there is a great deal of existing law and regulation that covers technology and its use. In addition to existing regulation, huge teams have been working on new regulation in dozens of countries, as well as political blocs like the EU. There has also been communication and alignment between them.

For example, if you have an AI solution to a real clinical problem, you need to certify it as it develops, through some pretty tough regulatory standards for Software as a Medical Device (SaMD). You cannot launch the product or service without jumping through these hoops, which are demanding and expensive. There is also GDPR and many other country-specific laws.


In the US, it is pretty much specific use cases at the moment, at state level, with little Federal law. The proposed Algorithmic Accountability Act of 2022 would require companies to assess the impacts of AI, and there is more proposed regulation going through the process as we speak.

The White House's 'Blueprint for an AI Bill of Rights' has five principles:

  1. Safe and Effective Systems: You should be protected from unsafe or ineffective systems.

  2. Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.

  3. Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.

  4. Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.

  5. Alternative Options: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

There is much talk of an 'AI Bill of Rights’, but these are still regulatory guidelines, a blueprint for legislation. They get quite specific in certain areas, what the EU would call ‘high risk’ areas, such as HR, money lending and surveillance. That, I think, is the right approach, as there is a massive baby-and-bathwater problem here: being so strict on legislation that the benefits of AI are not realised.


The EU have been hard at it for several years now, since 2018, and although they tend to suffer from technocratic hubris, they have taken an angle that is pragmatic and easy to understand.

They have published proposals for a regulation called the Artificial Intelligence Act. It has some good stuff around the usual suspects - data quality, transparency, human oversight and accountability - and rightly tackles sector-specific issues. But its big idea, which is reasonable, is to classify systems by risk and regulate accordingly. The classification identifies the level of risk an AI system could pose, with four tiers:

  1. unacceptable 

  2. high

  3. limited

  4. minimal

Minimal-risk systems will be unaffected, and that is right, as people have been working for decades to do good, innovative work and that should continue. The others will be subject to scrutiny and reasonable regulation. The problem is that it can't cope with new products and ideas, as EU law is set in stone, unlike common law, which is more flexible. There are already signs that they will regulate so hard that innovation will be stifled. It is the EU, so it will tend towards overregulation, and the EU is only 5.7% of the world's population, so let’s not imagine that it holds all the cards here. In truth, the EU is not a powerhouse in AI; the innovation and products are coming from the US. The EU law is expected in 2024.

The Council of Europe have also been publishing a large number of discussion documents in the field, including several in education.


China has a much more aggressive attitude towards regulation, with some good focus on preventing fake news and fraud against the elderly, but the fiery dragon has a long tail of problems. These laws are already in place, and any foreign companies operating in China must comply.

Its algorithmic recommendation regulations went into effect in March 2022. They tackle both general and specific issues: information service norms and user rights protection. They also demand audits and transparency, with a focus on protecting users, especially minors and the elderly, from data harvesting - this, I think, is enlightened. They are also keen to avoid monopolies and want control over algorithmic manipulation, so are very specific with their targets:

Article 13 prohibits the algorithmic generation of fake news and requires online news service providers to be licensed (the sting in the tail).

Article 19 offers protection to the elderly by requiring online service providers to address the needs of older users, especially around fraud.

Other targets include manipulating traffic numbers and promoting addictive content, hence the limiting of screen time for young people. It is here that things get very strange, as there are ‘ethics’ rules around ‘Upholding mainstream value’ (Government ethics), ‘Vigorously disseminating positive energy' (Government propaganda) and the ‘Prevention or reduction of controversies or disputes’ (toe the line - straightforward censorship).


Unlike the EU, the UK is taking its time and leaving sector-specific bodies to do their work within the existing law. I think this is right. We do not want to regulate out innovation. The AI sector is now strong and growing. They’re taking a de minimis approach, being careful and flexible. There are no statutory laws as yet, although GDPR is there.

They have just published a white paper outlining five principles that the regulators should consider to enable the safe and innovative use of AI in the industries they monitor:

  1. Safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed

  2. Transparency and "explainability": organisations developing and deploying AI should be able to communicate when and how it is used and explain a system's decision-making process in an appropriate level of detail that matches the risks posed by the use of AI

  3. Fairness: AI should be used in a way which complies with the UK's existing laws, for example on equalities or data protection, and must not discriminate against individuals or create unfair commercial outcomes

  4. Accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes

  5. Contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI

We will see a slow, sensible and pragmatic approach, sensitive to new developments.


There is a geo-political AI race and that affects regulation. It is likely, in my opinion, that we will get US-European alignment, to keep us competitive in AI. There is an EU-US Trade and Technology Council also looking at alignment. I think we will see a parting of the ways between the US/Europe and China. Another scenario is that the EU overegg everything and go it alone without the US. This would be a big mistake and just push the EU further behind in harvesting the benefits of AI. The UK, post-Brexit, has the freedom to make more flexible choices.

However, one can see an interesting synthesis taking place, where we do the following:

1. Take an Occam's razor approach to regulation from the UK: the minimum number of legal regulations to meet our goals

2. Adopt the EU idea of graded regulation, proportionate to the risk posed and to the size and function of organisations, to protect innovation

3. Make sure the regulation is flexible enough to cope quickly with new advances in the technology

4. Take the Chinese approach of specific targets, such as protecting minors and preventing fraud against the elderly

5. Have a unified global body issue a set of guidelines first, then cascade these back to nation states.



There is of course the WEF, there’s always the WEF, one of my least favourite organisations. There’s much rhetoric around the Fourth Industrial Revolution - it is neither industrial nor the fourth - and it is often infantile. There are also a lot of long academic reports that are out of date before they are printed. I wouldn’t hold my breath.
