America innovates, China implements, EU regulates
The EU's AI obsession is regulation. That's fine, and I have little criticism of the direction of such regulation, apart from the usual bureaucracy. What I do find depressing is the dampening effect this has on actual effort. Thankfully, our own little UK 'AI and Ethics' group was more like a Parish Council: a rather amateurish academic attempt to tell companies how to run their business, by people who don't know much about business. It amounted to little more than a rather dull checklist. This is good news, as it remains unknown and largely ignored. AI is not as good as you think it is and not as bad as you fear.
In AI for Learning in Higher Education, we have several world-class companies in the UK. One has received a seven-figure investment from a US university but has literally zero UK customers. Our effort in this area is largely third-rate AI and Ethics commentary. In AI itself, however, we have a ton of talent.
Where does this leave the UK? We should diverge from the EU here. In fact, we already have: DeepMind and other AI companies in the UK looked to the US, not Europe or China, for investment and markets. Similarly in my own field, AI for Learning, there is little UK-EU commercial or M&A activity. It is almost all UK-US. We need to stay innovative and look to those countries not obsessed by negativity around AI and Ethics to move forward.
The investment community in London and the US is well connected, and most of the deals are on that axis. This has increased post-Brexit, with even more alignment. The EU is linguistically diverse and much messier in terms of marketing and implementation. Few companies see the EU as their target market, preferring the much bigger US market, which is more aligned linguistically, culturally and financially.
China has made the investment and is actually forging ahead with AI for Learning. I've written about this here. They have a strategic view, with huge government targets and investments, that is markedly different from the EU's. We have already seen the emergence of large-scale projects in schools and universities. On the other hand, their attitude towards social scoring and surveillance technology leaves them open to criticism.
Appendix - EU legislation
As I say, the proposed EU legislation is OK, and has been leaked (probably deliberately). It is, as expected, bureaucratic, with lots of quangos being set up; a typical piece of EU overkill. Some of it, however, is eminently sensible, and Google and others have been asking for this for some time. This is the right level for such discussions, if it is aligned with other efforts from the IEEE and so on.
1. Yet another Board! The European Artificial Intelligence Board (one representative for each of the EU27 countries, a representative of the Commission, and the European Data Protection Supervisor)
2. Digital Hubs and Testing Facilities to be set up
3. Member states need inspection bodies for assessment and certification (third parties, for 5 years)
4. High-risk AI systems to be tested before release
5. High risk is, for example, face recognition used for physical safety decisions in healthcare, transport or energy
6. Authorisation required for use of biometric identification in the public domain
7. Rules on exploitation of data
8. Ban on manipulation of human behaviour (to people's detriment)
9. Prevents mass surveillance
10. Disclosure for deep fakes
11. Voice agents cannot pretend to be human
12. Emotion recognition has to be made explicit to the user
13. Ban on ranking of social behaviour (as in China)
14. Self-assessment requirements for AI used for the purpose of determining access or assigning persons to educational and vocational training institutions
15. Fines on a GDPR scale
16. The aim is to prevent abuses, with sizeable fines of up to 4% of global revenue
17. Notable exceptions for military and safeguarding public security
18. SMEs to get privileged access
19. Exemptions for training data
20. Notable get-outs for member states (national security worries)