Artificial Intelligence: what is it?
3 April 2023 / by Boris Landoni
We give an overview of the solutions adopted for the interaction between a robot and its environment, and of how they fit into the concept of Machine Learning and the wider one of AI.
Amateur and educational robots, and in particular self-propelled robots (called "rovers"), move autonomously in the environment using sensors that commonly have a digital output; by this we mean that they report only the presence or absence of obstacles, not the distance from them.
With this type of interaction with the environment, the driving algorithm can hardly be called AI (Artificial Intelligence); it remains in the field of traditional programming. To clarify this premise, we need to outline the nature of AI, so we will briefly introduce this much-discussed concept.
Artificial Intelligence: what is it?
Everyone knows that a computer can be idealized as a small box that receives data, from the keyboard or through the selection with the mouse, and produces the result of processing this data on the screen or printer.
In this type of use the human being, who enters the data and receives the result, is the fundamental interface of the computer.
But in different situations it is possible, and even more useful, to eliminate this human mediation and make the computer act on the basis of inputs collected directly from the environment.
In this case, generally, the computer will produce an output that will act on the environment and man will be just a spectator who collects the results in terms of changes made to his reality.
In general, this is precisely the future that computer science and electronics imagine for the computer.
It is no coincidence that the electronics industry has focused heavily on the production of sensors and actuators.
An autonomous system of this kind must, by definition, be "intelligent", in the sense that it must behave in such a way as to achieve a purpose for which it was created (Fig. 1).
Fig. 1 Intelligent autonomous system.
Let’s now focus on the algorithm that links the output functionally to the input of an intelligent autonomous system.
The ability to meet its objectives more or less well, regardless of input variability, defines the degree of "intelligence" of the algorithm. In reality, even a very sophisticated algorithm cannot make up for poor input quality, which in turn leads to poor output quality.
This also applies to humans, in whom sensory deprivation prevents adequate development or disrupts the behavior of an adult.
Conversely, a sophisticated sensory system needs a sophisticated algorithm to express the potential of the autonomous system.
Therefore, a digital ON/OFF input does not allow particularly intelligent behavior; it is necessary to move at least to sensors with a continuous (quasi-analog) response.
In the case of self-contained moving systems, for example, a sensor providing distance values at different angles is the minimum necessary for simple orientation and search tasks; but once we have such sensors, the problem immediately arises of interfacing them with an algorithm running on a digital computer.
So, how do we manage this rich amount of information? Endless chains of "if ... then" are not enough, so we must turn to other paradigms. And this is where we begin to talk about Artificial Intelligence.
First, we need to make a basic distinction between these two types of algorithms: programmed algorithms and learning algorithms.
Programmed algorithms
Fuzzy logic systems are algorithms that in a sense adapt continuous sensors to the digital world of computers.
Fuzzy logic works by simulating a virtual machine whose logic is not Boolean but continuous, that is, one that handles variables whose degree of truth varies continuously between 0 and 1.
It has also been used successfully on microcontrollers, because it allows continuous sensors to be managed efficiently even with limited hardware.
This technique is the easiest way to create simplified automatic control systems, for example, or to control self-propelled robots with complex sensors and continuous response.
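As an illustration of the idea, here is a minimal sketch of a fuzzy controller mapping an obstacle distance to a robot speed. All names, membership shapes and thresholds are invented for the example, not taken from any specific product.

```python
def ramp_down(x, a, b):
    """Membership that is 1 below a, falls linearly to 0 at b."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def ramp_up(x, a, b):
    """Membership that is 0 below a, rises linearly to 1 at b."""
    return 1.0 - ramp_down(x, a, b)

def tri(x, a, b, c):
    """Triangular membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def speed_from_distance(d_cm):
    """Three fuzzy rules: near -> stop, medium -> half speed, far -> full speed.
    Defuzzify with a weighted average of the crisp speeds each rule votes for."""
    near = ramp_down(d_cm, 10, 30)   # degree of "obstacle is near"
    mid  = tri(d_cm, 10, 30, 50)     # degree of "obstacle is at medium range"
    far  = ramp_up(d_cm, 30, 50)     # degree of "obstacle is far"
    num = near * 0.0 + mid * 0.5 + far * 1.0
    den = near + mid + far
    return num / den
```

Note how the output varies continuously with the input: at 40 cm the "medium" and "far" rules are both partly true, and the result is a blend of their two speeds rather than an abrupt ON/OFF switch.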
Have we entered the field of AI? Not necessarily: it depends on the complexity of the algorithm that uses fuzzy logic. Defining the scope of AI is indeed complex. As a first approximation, we can say that a system falls within AI if its behavior simulates human behavior (see the "Turing Test"). But then it becomes a question of defining "human behavior".
We can settle this by saying that human behavior is basically adaptive, that is, able to adapt its response as the environmental response varies.
Examples of AI applications are expert systems and heuristic search programs; these are algorithms that still process digital data, and are therefore not suited to managing sensors that provide analog signals.
Expert systems attempt to simulate the behavior of a human being through a large set of Boolean "if ... then" rules (called "knowledge bases"), obtained by questioning human experts about every possible input situation.
Clearly this approach is limited to very circumscribed slices of reality; it was one of the first attempts at AI, achieved some significant results, but is now being superseded. Heuristic algorithms likewise have narrow, specialized fields of application, such as game programs.
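To make the "knowledge base" idea concrete, here is a minimal forward-chaining rule engine. The rules and fact names are invented for the example; a real expert system would hold thousands of such rules elicited from human experts.

```python
# Knowledge base: each rule maps a set of required facts to a conclusion.
RULES = [
    ({"temperature_high", "pressure_high"}, "open_relief_valve"),
    ({"temperature_high"}, "start_fan"),
    ({"start_fan", "fan_broken"}, "raise_alarm"),
]

def infer(facts, rules):
    """Forward-chain: fire every rule whose conditions are all known,
    adding its conclusion to the fact set, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

The limitation described above is visible here: the system can only ever conclude what a human already encoded as a rule, and every new input situation requires new rules.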
Algorithms that learn
Programmed algorithms presuppose a human being who constructs and implements the relationship between input and output, which is often very complex.
Learning algorithms, on the contrary, require the human being to construct and implement a system capable of representing this relationship on its own, according to some general directives whose nature differentiates the various types of learning.
Since learning is characteristic of human beings (but also of other living beings), here we can say with greater confidence that we have entered the field of AI.
Here too we must distinguish between two types: supervised learning and unsupervised learning.
Supervised learning
To learn, these algorithms need a series of consistent input/output examples; based on these examples they build a correspondence, possibly complex and generally statistical, between the values of the sensors and the correct output.
Fig. 2 proposes an example of supervised training.
Then, after training, they can respond appropriately to any set of input values, beyond those seen in the examples.
These algorithms are usually based on "neural network" systems. Even the "Deep Learning" systems (Fig. 3), very fashionable today, are of this type. Pattern and image recognition is the most common application.
Compared to early neural networks, modern systems have incorporated a series of processing stages and filters that allow the network to better extract knowledge from the examples.
It is evident that in these applications, while the sensory input remains complex, the output can also be a simple warning to a human operator rather than an actuator drive.
The strong point of this type of neural-network algorithm is the efficient way it adjusts its parameters whenever an error appears at the output; this mechanism is called "Error Back Propagation".
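The parameter-adjustment idea can be sketched with the delta rule, which is the single-neuron special case of error backpropagation: the output error, scaled by the slope of the activation function, is propagated back to each weight. The task (learning the logical OR from examples), the learning rate and the epoch count are all choices made for this illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Supervised examples: inputs and the correct output (logical OR).
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # synaptic weights, adjusted by training
b = 0.0          # bias
LR = 1.0         # learning rate

for _ in range(2000):
    for (x1, x2), target in DATA:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = target - y
        # Backpropagate: gradient of the squared error through the sigmoid.
        delta = err * y * (1.0 - y)
        w[0] += LR * delta * x1
        w[1] += LR * delta * x2
        b    += LR * delta

def predict(x1, x2):
    return sigmoid(w[0] * x1 + w[1] * x2 + b)
```

After training, the neuron responds correctly even though no rule was ever written by hand; the input/output relationship was extracted from the examples alone.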
Fig. 2 Supervised training.
Fig. 3 Learning with evolutionary algorithm.
Unsupervised learning
Now things get more difficult: there is no longer a human being doing the training, so the system must acquire all its information from the environment, including the error it makes at every step, since the error is what allows it to correct its response and improve its performance.
But on what criterion is the error, or the goodness of the answer, defined, if examples are no longer available?
To clarify this, let us look to nature: how does a living being modify its behavior to adapt to environmental conditions with maximum efficiency?
In two ways: by evolution and by reinforcement. The first acts over very long times and at the population level, while the second acts during the individual's life, trying to satisfy one or more internal stimuli "wired in" by evolution, typically hunger and the urge to procreate.
The imitation of natural evolution has produced genetic or evolutionary algorithms (Fig. 4), which involve testing a population of intelligent systems and selecting the best according to a "fitness" criterion established a priori.
The starting population is produced with random parameters for each individual system: for example, random weights for the synapses of the neural network implementing the algorithm. From the best individuals a subsequent population of children is generated by applying small random mutations to the parameters. Population after population, the behavior of individuals improves, even if reaching optimal behavior is not guaranteed.
For practical reasons, this technique is typically used in a virtual world with simulated individuals; but nothing prevents reusing the best resulting algorithms to manage a physical system, provided of course that the artificial-life simulation was satisfactorily realistic.
Fig. 4 Learning by reinforcement.
But living beings, and humans in particular, also learn by trial and error during their lifetime; in this paradigm we must therefore imagine an algorithm that corrects the action of the intelligent system based on one or more criteria of satisfaction.
These algorithms are called "reinforcement" algorithms because they try to strengthen the actions that lead toward the goal (or goals) while making the system forget the unproductive ones, and they must do this for each of the almost endless possible input values (or, if you prefer, states of the system).
It is clear that, once again, we need algorithms that extract and synthesize classes of actions for classes of sensory input.
These are the most complex systems, and no general, optimal solution to them yet exists. In fact, in addition to the agent itself, we also need a system that assesses each situation in terms of how well the objectives are being achieved.
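A classic concrete instance of this idea is tabular Q-learning, sketched below for an invented toy world: a 5-cell corridor where the agent is rewarded only on reaching the rightmost cell. The reward function plays the role of the "assessment system" mentioned above; all parameters are illustrative.

```python
import random

random.seed(0)

N = 5                                    # states 0..4; the goal is state 4
Q = [[0.0, 0.0] for _ in range(N)]       # Q[state][action]; 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1    # learning rate, discount, exploration

def greedy(s):
    return max((0, 1), key=lambda a: Q[s][a])

for episode in range(500):
    s = 0
    while s != N - 1:
        # Mostly exploit what was learned, sometimes explore at random.
        a = random.randrange(2) if random.random() < EPSILON else greedy(s)
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == N - 1 else 0.0  # reinforcement only at the goal
        # Reinforce: move Q toward reward + discounted best future value.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [greedy(s) for s in range(N)]
```

After enough trial-and-error episodes, the learned policy chooses "right" in every non-goal state: productive actions have been reinforced, unproductive ones forgotten, with no human-provided examples at all.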
The development of these algorithms could even lead, probably in the future, to self-awareness in autonomous intelligent systems, the so-called "strong AI", as distinguished from "weak AI", which concerns intelligent behavior in limited tasks, such as neural networks, deep learning and so on.
ToF sensors
Returning to our "rover": without getting into the complex management of images from a camera mounted on it, even a "radar"-like system would already provide information rich enough for intelligent movement.
An ultrasonic range finder of acceptable cost has neither the resolution nor the field of view necessary: you would have to mount it on a rotating system, typically a servo, to perform a continuous scan, or use many of them to cover a wide angle.
Lately, however, new types of distance sensor have appeared, based on the time of flight (ToF) of a laser beam; they have a relatively low price (< 20€) and are more accurate than an ultrasonic system, partly because the measuring angle can be much narrower.
A first product is the one based on the VL6180 (by ST), of which Futura Elettronica distributes a test circuit (breakout board, shown in Fig. 5) with the code "BREAKOUT017".
The small board does nothing but adapt the supply voltage and the logic levels of the I²C bus; it can be powered at 5V, and the communication bus logic levels can also be 5V, while the chip itself is compatible with 0/3V levels.
The connection with Arduino or other microcontroller is made through the I²C serial bus and by using a library that can be downloaded from the product page.
The library makes operation very simple: once the sensor is initialized, you simply ask it for the distance it detects at that moment.
The maximum range reaches almost 15cm, and there is a certain immunity to variations in ambient brightness and in the reflectance of objects.
The laser emission is in the infrared (850nm), its detection angle is about 25 degrees, and the response time is at most 15ms.
To attempt continuous detection over 180 degrees, we can place the sensor on a servo and perform a two-way scan. Unfortunately, a very cheap servo (GS-9018) does not have accurate absolute positioning, and the scan changes according to the direction of rotation.
You can then try a stepper motor, the cheap SM28BYJ (code SM2003DB), which is also quieter and more reliable for continuous scanning.
The result was better, but the problem of the absolute starting position remains: not having position feedback like the servo, the movement must be managed by software starting from a home position that must be initialized by hand.
The "scanner" was verified using a cardboard template, represented by the thick line in Fig. 6.
In Fig. 7 you can see the difference between using a GS-9018 servo and a stepper like SM28BYJ for scanning.
However, even in the first case, by applying an "offset" on the return scan you get a good result. In both cases it is advisable to scan with angular steps of 1.5° or less and apply an average.
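The offset-and-average correction just described can be sketched as follows. The function, its parameters and the sample data are invented for the illustration; the offset value itself would be measured once for the specific servo.

```python
def merge_sweeps(forward, backward, offset_deg, step_deg=1.5):
    """forward: (angle_deg, distance) pairs taken scanning left-to-right;
    backward: the same taken right-to-left, whose angles lag by a fixed
    mechanical offset. Shift the return sweep by that offset, then average
    all readings that fall in the same angular bucket."""
    corrected = [(ang - offset_deg, d) for ang, d in backward]
    buckets = {}
    for ang, d in list(forward) + corrected:
        key = round(ang / step_deg)          # quantize to the scan step
        buckets.setdefault(key, []).append(d)
    return {key * step_deg: sum(v) / len(v)  # averaged distance per angle
            for key, v in sorted(buckets.items())}
```

Averaging the two passes also smooths out single-reading noise, which helps given the sensor's 25-degree detection cone.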
This makeshift scanner could be used on a very small two-wheeled "rover", given its limited range: for example, a desktop teaching robot, provided the scanner itself is also kept small.
Fig. 6 ToF detection on servo or stepper (averaged and corrected).
Fig. 7 a) scan with cheap servo; b) scan with stepper motor.
The second ToF product is more sophisticated and theoretically reaches 150/200cm.
With such obstacle visibility, it is conceivable to use it on a self-propelled robot moving on a floor or on the ground.
The chip that implements it is the VL53L0X (again by ST), and the corresponding breakout board (again produced by Futura Elettronica) is practically identical to the one just described, but is marked "BREAKOUT018".
Let’s see how it might work within a continuous scan.
The emitted laser is at a slightly higher infrared wavelength (940nm), but above all the chip has a more powerful integrated microprocessor that processes more readings, providing a result cleaned of noise.
As a result, the response times are longer: about 30ms in the basic mode, and up to 300ms in "High accuracy" mode.
The library is different from the previous one and allows more tuning possibilities. Finally, the detection angle is again about 25 degrees.
However, tests using the library have shown some limitations. First of all, a test as a range meter found a satisfactory measurement only up to a distance of just over a meter, with an error of 3cm in this range. The "Long Range" configuration did not change the situation. In any case, scanning over 180° with the distance limited to 120cm, the results were more approximate than with the previous ToF.
Fig. 8 shows an example of a scan for the detection of a perimeter, represented by the thicker line.
With these premises, navigation can count on only an approximate and imprecise vision of the scene ahead; since the chip has wide margins of configuration, it is not excluded that a thorough study and adjustment of the library and its parameters could improve performance.
But a more complete and usable product is the one described in the next paragraph.
Fig. 8 Scanning with ToF VL53L0X.
Lidar sensors
The development of research aimed at creating autonomous cars has brought to market a series of products with radar-like functionality.
Among these, LIDARs (Laser Imaging Detection and Ranging) have become very popular and have begun to appear in the hobby field as well. They are essentially more sophisticated ToF sensors integrated with a scanning system.
One of these is the YDLIDAR X4 by "EAI", visible in Fig. 9 and distributed by Futura Elettronica with the code FR704.
It is aimed at the amateur market and costs slightly more than 100€.
It measures about 7 × 10 cm with a height of about 5 cm. A silent stepper motor rotates the head that carries the laser emitter and the receiver.
It has an angular resolution of half a degree and a range from 10 cm to 10 m, with an accuracy better than 1 cm (5 mm on average).
It rotates at about 7 revolutions per second, thus taking about 5,000 distance readings per second, and communicates over a 128,000-baud serial link using a proprietary application protocol.
However, a library is provided for its use.
Fig. 9 Lidar YDLIDAR X4.
As can be seen from Fig. 10, scanning even at metric distances provides a plausible map of the scanned environment.
In any case, algorithms and software capable of making effective use of this amount of quasi-analog sensory input must be available.
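The first step of any such software is usually converting the lidar's polar readings into Cartesian points to build a map like the one in Fig. 10. Here is a minimal sketch; the angle convention and the treatment of invalid readings (distance 0, a common convention for lidars, assumed here rather than taken from the X4 protocol) are illustrative choices.

```python
import math

def scan_to_points(scan):
    """Convert (angle_deg, distance) pairs from one lidar revolution into
    (x, y) points in the sensor's frame, with angle 0 straight ahead
    (along +x) and angles growing counterclockwise."""
    points = []
    for angle_deg, dist in scan:
        if dist <= 0:
            continue                      # skip invalid/no-echo readings
        a = math.radians(angle_deg)
        points.append((dist * math.cos(a), dist * math.sin(a)))
    return points
```

From such a point cloud the navigation algorithm can then extract walls, openings and obstacles, which is exactly the kind of rich, continuous input the learning algorithms discussed earlier are designed to exploit.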