The ethics of Artificial Intelligence - the next step?

You don't need any intelligence - artificial or otherwise - to guess the AmeriKat's favorite activity
Two weeks ago, techUK hosted the Digital Ethics Summit 2017, with a specific focus on Artificial Intelligence - an emerging hot topic for IP lawyers as we start grappling with increasingly interesting IP issues in this space.  Attendees from industry, academia and professional services were treated to talks and panel discussions by leaders in the AI sector.  As part of a series of posts on future technology trends facing IP lawyers (both for our clients and for us), the AmeriKat asked Alex Calver (Bristows) to report on the highlights from the morning session.  Over to Alex:
"Introduction
The AI sector is moving fast, with most experts revising down their estimates of when AI will infiltrate most, if not all, areas of life.  Given the powerful nature of this (and other) technology, discussions about the role of ethics in AI have been rife in 2017.  However, this Summit aimed to push the discussion on to the next step.  We need to foresee future problems.  We need to establish how to implement an ethics framework.  The catchphrase of the morning was “2018 is the Year of Doing”.

Martha Lane Fox (LastMinute.com, DotEveryone) noted that the lack of trust in the tech industry is a huge obstacle for the implementation of AI.  It is particularly worrying that this distrust is based on current technologies (data breaches from social media companies, for example); future technologies which could have more far-reaching consequences pose an even bigger difficulty.  In this setting, Fox argued that Britain needs to carve out its future role in the tech landscape.  We cannot compete with the funding available in the US, or the sheer quantity of data collected in China.  We need to lead in the practical implementation of such technologies, i.e. standard-setting.  Diversity and transparency are key to achieving this: gender equality, education of children, and an understanding of the supply chain (similar to what has happened in the clothing industry, but something that is harder in tech) are all important factors in improving trust in the sector.

Panel Discussion
A panel discussion followed, ‘What is the current AI landscape and how to ensure ethical foresight?’, with speakers Prof Luciano Floridi, Dr Stephen Cave, Dr Clare Craig, George Zarkadakis and Rob McCargow.  Floridi opened the discussion, noting four key issues that need to be kept in mind as we implement AI:
  •  Delegation: to whom or what are we delegating this technology?
  •  Responsibility: how does that person/thing keep control?
  •  Manipulation: are we aware of the extent to which this technology will affect our lives?
  •  Prudence: there is a risk that humans become reliant on technology; if something goes wrong with that tech, will a human be able to fix it?
The panellists went on to consider the irony that, despite the institutional distrust in technology, businesses will have to adopt it soon.  To ease the acceptance of AI, and therefore ensure its success, they used the analogy of the 5p plastic bag charge here in the UK.  Acceptance of the change (and the consequent decline in plastic bag use) was achieved once regulation and corporate participation harnessed the good intentions of the public.  The regulators and corporate entities provided the nudge to the ‘willing-in-theory’ public.
The panellists discussed what shape the regulatory framework could take.  They opined that the ‘agile’ methodology generally used to introduce new software technologies (where incremental tweaks and improvements are made to a product following release, upon finding faults and glitches) simply might not be appropriate for AI.  Given the power of the technology and the potential unintended consequences, there is simply no scope for ‘trials’ where the consequences might be huge.  For example, the ‘beta’ tech products we often see released into the public arena today would be wholly unacceptable if they could have adverse consequences for human life.  This scenario would exacerbate the public trust problem, and could lead to a swift rejection of a potentially beneficial technology.
Offering solutions to this dilemma, the panel considered pharmaceutical-style testing - something at the opposite end of the spectrum to technology’s so-called ‘move fast and break things’ approach.  Whilst such a rigorous framework allows confidence in safe products, it was highlighted that adopting the pharma approach in tech would stifle the rate of innovation.  It is also largely incompatible with the current funding and development structure of the tech industry.
Finally, the panellists discussed the need for a consistent, intelligible discourse to effectively explain new technologies to the general public.  The conversation needs to be diverse: it should include philosophers, historians, psychologists as well as engineers.  Only with this diversity of voice and expertise can a well-rounded framework be built.
In our analysis of future issues facing AI, we often hover in the middle of the general and the specific.  We only think about what we can visualise (e.g. the trolley problem) rather than other less tangible effects (e.g. obesity as a result of AI).  Instead, analysis needs to take place both in general, universal terms and in specific terms.  Only then can we begin to frame the future world in which AI is seamlessly integrated.

Matt Hancock MP (Department for Digital, Culture, Media and Sport)
A brief talk from the Minister in charge of Digital Policy explained the function of the new Government Centre for Data Ethics & Innovation.  The Centre will engage in public communication; it will propose measures to regulators, without being a regulator itself; and it will work across various government departments, such as DCMS, Transport, Health, and Business, Energy & Industrial Strategy, in order to ensure the consistent discourse mentioned above.
Hancock commented that the UK is well-positioned to be a world leader in AI standards and ethics.  It has a level of freedom to develop a practical framework that will, in turn, carry influence around the world.  By contrast, the US Government, with its separation of powers, simply cannot be this nimble.

Keynotes
The rest of the day included three keynote speeches by Microsoft’s Dr Carolyn Nguyen, the ICO’s Elizabeth Denham (see here), and Nuffield’s Chief Executive Tim Gardam (see here).  Unfortunately there is no copy of Nguyen’s talk, but Denham’s and Gardam’s are well worth a read."
For more on AI applications, here is one dear to the AmeriKat's heart involving Google, cat photos and DNA.  