Technological development in the field of machine learning and its real-life applications (in speech and image recognition, autonomous driving and the like) has been driven by the ingestion of relevant data sets - i.e. training signals analysed by a computer system to produce an output which has some real-world utility.

The process of machine learning is traditionally carried out using one of the following models:

(i) supervised learning - where the system is fed categorised input data and "taught" or guided by human intervention in order to reach a known end result (e.g. driving images fed to a system which learns to correctly classify visual aspects of road signs - see the illustrative sketch after this list); or

(ii) reinforcement learning - where a machine "agent" carries out different actions in an environment, receiving a "reward" for certain outcomes (which it then seeks to repeat) in order to maximise its cumulative reward.
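
To make the supervised case concrete, here is a minimal, illustrative sketch - it is not drawn from any real system, and the feature vectors and labels are invented stand-ins for processed driving images (using the scikit-learn library):

```python
# Minimal sketch of supervised learning: a model is trained on
# human-labelled examples and then classifies new, unseen inputs.
# All data below is invented for illustration only.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature vectors (e.g. simplified summaries of road-sign
# images), each paired with a human-assigned label.
features = [
    [0.9, 0.1, 0.0],
    [0.1, 0.8, 0.1],
    [0.8, 0.2, 0.1],
    [0.2, 0.7, 0.0],
]
labels = ["stop", "speed_limit", "stop", "speed_limit"]

model = RandomForestClassifier(random_state=0)
model.fit(features, labels)  # the human-guided "teaching" step

# The trained system classifies an input it has never seen before.
print(model.predict([[0.85, 0.15, 0.05]]))  # -> ['stop']
```

The fit step is where the human "teaching" happens: every training example carries a known answer supplied in advance.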

Both of these methods require humans to play a central role in "teaching" the system, by specifying the targets or rewards to be aimed for.

Beyond these, a third method - unsupervised learning - is rapidly gaining traction in the software industry because of the cost and speed savings it offers in the software development process.

Unsupervised learning is a method in which the system itself finds otherwise undetectable patterns in a data set which has not been classified by humans and where no "target" per se has been set.

This avoids the costly process of having humans comb through input data to classify or annotate it, a process which represents a substantial portion of the development effort involved in a machine learning application.
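
By contrast with the supervised sketch above, a minimal sketch of the unsupervised case (again scikit-learn, with made-up, unlabelled data points) shows the system finding a grouping entirely on its own - no human annotation, and no pre-set "target":

```python
# Minimal sketch of unsupervised learning: k-means clustering is given
# unlabelled data and finds groupings by itself. The data is invented.
from sklearn.cluster import KMeans

# Unlabelled data points - no human has classified or annotated these.
data = [
    [1.0, 1.1], [0.9, 1.0], [1.1, 0.9],   # one natural grouping
    [8.0, 8.2], [7.9, 8.1], [8.1, 7.9],   # another
]

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

# The system has partitioned the data without ever being told what
# the "right answer" looks like.
print(model.labels_)  # e.g. [0 0 0 1 1 1]
```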

Does this solve the problem of bias? 

A central problem considered by academics, regulators and lawyers alike is the potential for automated systems to contain (unwanted) bias. Ethical codes and principles of trustworthiness, diversity and transparency (see the recent ICO and Alan Turing Institute guidance on AI decision-making) have all been put forward as ways to counter the potential for bias, or otherwise to identify and mitigate its effects.

However, one view is that a maker's mark is (largely) unavoidable - if humans design a system, then the system will reflect the biases of the organisational teams, designers, data scientists and engineers who put it together.

So, does the use of "unsupervised learning" remove the legal risks arising from biased AI decision-making?

Unfortunately not.

Although human bias may be removed from the learning process, the data set itself may be biased, resulting in a biased system. If characteristic A is statistically correlated with outcome B in the data set, then the system will probably learn that connection and make prejudicial judgments, which humans must then unpick - putting us back at square one by reintroducing the potential for human bias.
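
The point is easy to demonstrate with a hedged sketch (all variables and figures below are invented - the "postcode flag" simply stands in for characteristic A): a model trained on data in which that flag happens to correlate with the outcome will absorb the correlation, without any human having chosen it.

```python
# Illustrative sketch of the correlation problem: the data is invented,
# and the postcode flag stands in for "characteristic A" in the text.
from sklearn.linear_model import LogisticRegression

# Each row: [postcode_flag, unrelated_noise_feature]. In this (biased)
# data set, the flag happens to correlate perfectly with the outcome.
X = [
    [1, 0.2], [1, 0.5], [1, 0.1], [1, 0.9],   # flag set   -> outcome 0
    [0, 0.3], [0, 0.8], [0, 0.4], [0, 0.6],   # flag unset -> outcome 1
]
y = [0, 0, 0, 0, 1, 1, 1, 1]

model = LogisticRegression().fit(X, y)

# The learned weight on the flag shows the correlation has been
# absorbed into the model - a "prejudicial connection" no human chose.
print(model.coef_)
print(model.predict([[1, 0.5]]))  # -> [0], driven by the flag alone
```

Detecting and unpicking that learned weight is precisely the human intervention described above.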

What about ownership? 

Autonomous or unsupervised AI agents capable of creating new ideas and forms of expression raise a myriad of conceptual issues for copyright lawyers and academics worldwide, principally because international law says that copyright should operate for the benefit of the author - a natural person (Art 2(6) Berne Convention).

Back in 1988, the UK admirably sought to tackle the issue and "recognise the advent—not here yet, but coming—of artificial intelligence" by enacting the Copyright, Designs and Patents Act 1988, which recognises "computer-generated" works (defined in s178) and deems their author to be the person "by whom the arrangements necessary for the creation of the work are undertaken" (s9(3)).

That provision may have worked back then, but it does not conceptually "fit" with the sophistication of the semi-autonomous AI systems of today, where in reality there may be very little connection between the human involved and the output generated by the system.

Academic debate has continued as to whether copyright protection for AI-generated works is even necessary. However, the real-world value of AI creations (and the associated need for legal protection) is difficult to ignore, as more and more examples of market value being attributed to these works arise over time.

One thing is clear: the legal issues relating to AI creative endeavours and the outputs of "unsupervised learning" are likely to keep lawyers busy for years to come.