Innovation

AI, a world to be discovered

The evolution of computer technologies and automated systems has produced a large number of “smart” machines. The real frontier of artificial intelligence, however, is that of autonomous machine learning mechanisms. Pros and cons of the various systems, and a focus on reinforcement learning algorithms

by Alessio Bolognesi
May - June 2022

It is difficult to find an unambiguous definition of Artificial Intelligence in the literature, but there is general agreement on one basic assumption: AI is a very broad field, and it will be one of the most debated topics in the years to come.

But let's try to be clear: the most common definition of Artificial Intelligence is any technique that attempts to imitate human behaviour. It is therefore understandable that most of the software systems and algorithms that have been used reliably on machines for years fall broadly within this definition. Whether on vehicles or industrial robots, for example, simple conditional algorithms, as well as those based on statistical or Bayesian approaches, fall under the above definition.
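To make the breadth of that definition concrete, here is a minimal sketch of a purely conditional (rule-based) decision function; the sensor readings and thresholds are invented for illustration, but even logic this simple "imitates human behaviour" in the broad sense given above.

```python
# A minimal rule-based sketch (hypothetical sensor names and thresholds):
# a plain chain of conditions already falls under the broadest definition of AI.

def obstacle_response(distance_m: float, speed_kmh: float) -> str:
    """Decide a manoeuvre from two sensor readings using fixed rules."""
    if distance_m < 2.0:
        return "stop"            # too close: halt immediately
    if distance_m < 10.0 and speed_kmh > 5.0:
        return "slow_down"       # obstacle ahead while moving: decelerate
    return "continue"            # no action needed

print(obstacle_response(distance_m=8.0, speed_kmh=12.0))  # -> "slow_down"
```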

When we talk about AI nowadays, we actually more often mean a subset of it: Machine Learning (ML), i.e. a set of techniques that allow a computer or, in general, a system to learn certain things without being explicitly programmed to do so. We are already getting closer to the point, but first we need to add something to this definition, which can easily be found on the web. A Machine Learning algorithm is able to learn certain characteristics from a set of annotated input data, i.e. information to which someone has virtually 'stuck' a label to guide the algorithm in identifying recognisable elements and classifying the data.

Let us take an example: if a system has to learn to distinguish cars, planes and ships in photographs, it will be necessary to feed an algorithm - appropriately selected according to the type of data it will have to process - a certain number of images of planes, cars and ships, each carrying a label identifying which type of vehicle is represented. The algorithm will slowly learn that there are vehicles that have wings, a fuselage and a tail, and will call them 'aircraft'; it will determine that cars have four wheels, a bonnet, and so on, and classify them as such. We can conclude that a machine learning algorithm learns to classify elements belonging to a well-defined domain, and it will be all the better at doing so the larger, more complete and better annotated the data set used to train it.

All this has a consequence: outside of what it has learnt (in the previous example, to recognise cars, planes and ships in an image), a system based on Machine Learning is bound to fail. Not only that: once trained, such a system is absolutely static (its behaviour and performance do not change unless it is trained again) and deterministic (the same input will always correspond to the same output). So, at least for this first subset of Artificial Intelligence, we must dispel a myth: a system based on ML (and thus on AI) is not a system that evolves autonomously over time. It does not learn by itself. In fact, we speak of 'supervised learning'.
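As a rough illustration of supervised learning, the sketch below trains a small scikit-learn classifier on a toy annotated data set. The hand-written numeric features (wings, wheels, and so on) stand in for what a real system would have to extract from photographs; the repeated identical predictions show the static, deterministic behaviour described above.

```python
# A minimal supervised-learning sketch on invented, hand-labelled data.
from sklearn.tree import DecisionTreeClassifier

# Toy annotated dataset: each row is [has_wings, n_wheels, floats_on_water]
X = [
    [1, 3, 0], [1, 3, 0],   # planes
    [0, 4, 0], [0, 4, 0],   # cars
    [0, 0, 1], [0, 0, 1],   # ships
]
y = ["plane", "plane", "car", "car", "ship", "ship"]   # the labels ('annotations')

model = DecisionTreeClassifier().fit(X, y)   # training phase: learning stops here

# Once trained, the system is static and deterministic: the same input
# always produces the same output until the model is trained again.
print(model.predict([[0, 4, 0]]))   # -> ['car']
print(model.predict([[0, 4, 0]]))   # -> ['car'] (identical, every time)
```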

But are there systems that learn by themselves? Basically yes, but some i's need to be dotted. There is an even smaller and more specialised subset of ML, and thus of AI, and it is the one that almost everyone, perhaps unconsciously, refers to today: Deep Learning (DL) and neural networks. The basic concept is not too different from that of machine learning, but with one major difference: the data is not annotated, i.e. it is not labelled to tell the machine what it should learn. Neural network-based systems learn autonomously, during training, to extract features from the data they are given. So, in effect, they autonomously learn to recognise certain things. Whether they do so in the right way, or in the way we would like, however, is by no means obvious or evident. Several factors influence learning, but to a large extent it again depends on the quality, number and distribution of the data supplied as input to the neural network.
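A hedged sketch of the idea of learning features without labels: a tiny autoencoder (built here with scikit-learn's MLPRegressor on synthetic data invented for the example) is trained only to reconstruct its own input, so its small hidden layer must discover a compact representation of the data on its own, with no annotations involved.

```python
# A minimal label-free feature-learning sketch: an autoencoder on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))        # 200 unlabelled samples, 8 raw measurements

# Train the network to reproduce its own input: no labels are supplied.
autoencoder = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
autoencoder.fit(X, X)

# After training, the 3-unit hidden layer holds features the network chose itself
# (ReLU applied to the first layer's weighted sums).
hidden = np.maximum(0, X @ autoencoder.coefs_[0] + autoencoder.intercepts_[0])
print(hidden.shape)                  # -> (200, 3): a learned, compact representation
```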

We speak of neural networks because DL is based on minimal 'processing' units, the perceptrons, which, from a logical point of view, work similarly to a human neuron. Each of them learns to recognise a well-defined feature or detail. DL algorithms learn in an 'unsupervised' manner but, again, a neural network-based system 'evolves' - or rather learns autonomously - only during the training/learning phase. After that phase, and after being tested, the result is still a deterministic system, even if it is extremely good at doing what it was trained to do: recognising infestations on a plant from images, for example, or extracting knowledge from a mass of big data. Normally such a system is much better and more efficient than any traditional algorithm developed to do the same thing, but it is still 'static'.
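The following minimal sketch shows what a single perceptron does, with inputs and weights invented purely for illustration: it combines its inputs through a weighted sum and 'fires' when a threshold is exceeded, which is the loose analogy with a biological neuron mentioned above.

```python
# A minimal single-perceptron sketch (illustrative weights and inputs only).
import numpy as np

def perceptron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> int:
    """Return 1 if the weighted sum of the inputs plus the bias exceeds zero, else 0."""
    return int(np.dot(inputs, weights) + bias > 0)

# One unit 'specialised' in a single detail, e.g. responding to a strong edge in an image patch.
inputs  = np.array([0.8, 0.1, 0.4])      # three incoming signals
weights = np.array([1.5, -0.7, 0.3])     # values that training would normally adjust
bias    = -0.5

print(perceptron(inputs, weights, bias))  # -> 1: the unit activates
```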

There is, however, an even narrower category that actually 'evolves'.

These are systems based on 'reinforcement learning' algorithms, i.e. algorithms that, to put it very simply, learn from their mistakes. Each decision/action has a specific effect that generates a 'reward', which tells the system whether it has done something good or not, so that it can correct itself and converge towards a positive result. A vehicle, for example, will perform manoeuvres, bumping into various obstacles, until, through the mechanism of rewards, it learns to move without hitting anything. In this case, taken to the extreme, one could market a machine that is pre-trained but able to refine its behaviour over the course of its existence ... but would anyone really want that in our industry?
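As a toy illustration of reinforcement learning, the sketch below runs tabular Q-learning on an invented one-dimensional 'corridor': the agent is penalised when it bumps into the wall and rewarded when it reaches the goal, and that reward signal gradually corrects its behaviour, much like the vehicle in the example above.

```python
# A minimal reinforcement-learning sketch: tabular Q-learning on a made-up corridor.
import random

N_STATES, GOAL = 6, 5            # positions 0..5, goal at the right end
ACTIONS = [-1, +1]               # move left or move right
alpha, gamma, epsilon = 0.5, 0.9, 0.2
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # one value per (state, action) pair

random.seed(0)
for episode in range(200):
    state = 0
    while state != GOAL:
        # epsilon-greedy choice: mostly exploit what has been learned, sometimes explore
        a = random.randrange(2) if random.random() < epsilon else Q[state].index(max(Q[state]))
        nxt = state + ACTIONS[a]
        if nxt < 0:                       # bumped into the left wall
            reward, nxt = -1.0, state
        elif nxt == GOAL:                 # reached the goal
            reward = 1.0
        else:
            reward = -0.1                 # small cost for every move
        # Q-learning update: the reward just received corrects the stored estimate
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# Learned policy for states 0-4; expected: all 1, i.e. "move right" everywhere.
print([q.index(max(q)) for q in Q[:GOAL]])
```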

A very strong implication of reinforcement learning when applied to a machine is that it cannot be trained except in a closed and protected environment or, even better, in a simulated one. During the training phase its behaviour will be erroneous and unpredictable.

This long introduction serves to understand how far we are from the concept that seems to terrorise everyone (including the EU Commission): that machines learn and evolve autonomously. From what has just been written, it is easy to see how this is only true in a very small portion of cases.

Yet a new regulation, the AI Act, has been proposed, aimed at placing strong controls on the field of AI, and it will also affect agricultural machinery. This is because the safety components of machines and vehicles are, in general, defined as high-risk systems and, if based on AI, will have to be certified by notified bodies. At least three glaring weaknesses emerge from this last sentence. The first is that the original Commission document refers to Artificial Intelligence tout court, i.e. potentially to any software programme or algorithm that falls under the most general definition (almost all of them, in the worst case). The second lies in the lack of expertise in the field of AI across the various industries, and thus the difficulty of finding competent centres that can carry out certifications. The third is that, by their very nature, ML and DL systems are closely dependent on the data with which they are trained, and the algorithms used are the subject of ongoing research: is it therefore really possible to define standard certification protocols? Well, experts are rather unanimous in answering 'no', or 'only with great difficulty', to this last question.

If we then think of our own industry, the application of ML or DL algorithms to agricultural machinery safety systems should certainly be monitored, but lately we forget a little too often all the constraints and regulations a manufacturer already has to fulfil in order to make its products safe. Accurate risk analyses are carried out and, even where an AI algorithm is used, we have seen above that the resulting system placed on the market will still be a deterministic system that does not evolve autonomously after commissioning. Honestly, the future in which that will happen is probably very remote. FederUnacoma, together with CEMA, has been extremely active over the last few months in drafting a number of proposed amendments to the AI Act, aimed at ensuring that its application to our industry remains reasonable. We would not like to find ourselves, in the coming years, having to have a robust and well-tested automated driving system certified by a third party just because it uses an algorithm based on statistical approaches which, as currently proposed, would fall under the Act's definition of AI. This does not mean that a safety-relevant system implementing AI solutions that allow it to keep learning after deployment cannot, or should not, be tested and certified (on the basis of which criteria and standards will be a matter of debate for years to come).

The reality, in the writer's opinion, is that the AI Act is designed to address very important ethical and social problems that may arise from the use of Artificial Intelligence. It is intended to prevent an algorithm from making decisions that can categorise a human being, steer the behaviour of a society, and so on. The intent is certainly laudable, and such goals, related to human beings and their rights, should certainly be pursued. But this does not mean treating AI as an 'evil ogre' even in cases where the ogre in question is about as dangerous as Shrek.

