Artificial Intelligence: A new way to connect (us)

For as long as I can remember, I have felt a certain unease around the concept of Artificial Intelligence and what it really means. It is a notion that can sound loud, quaint, even overblown. At its very mention, we immediately imagine ourselves traversing a dystopian world of cyborgs and flying cars.

But the truth is that, if we look for a precise definition of the term, we can establish that it is a computer's ability to perform cognitive tasks that we associate with the human mind: reasoning, solving problems independently, and even perceiving the world.

This is how artificial intelligence has made it possible to make certain human behaviors tangible through neural models, bringing machines closer to the way we perceive the world. Along these lines, AI as a resource sets us searching for ways to objectify knowledge, generate correlations and detect opportunities, all from an anthropocentric perspective.

But where are the rest of the species that make up this very biodiverse world? Is it possible to connect with them?

We exist on a planet with enormous biodiversity, where plants account for around 80% of all living biomass. Unfortunately, we know that humanity has contributed to the loss of 83% of wild mammals and half of all plants. Although in recent years sustainability seems to have entered the collective mind, we are still a long way from repairing this damage.

It is clear that human beings have endowed themselves with a true superiority complex, but the truth is that we currently represent only 0.01% of the living matter on planet Earth. Even so, our technology remains biased, derived from this human gaze, without valuing the interspecies perception that should guide these advances.

So, what if we directed these neural models toward listening to and processing the rich signals of other species? Would it be possible to train a bio-hybrid model?

Clearly this is an ambitious scenario, but it is also a necessary one given the critical global context in which we find ourselves. However difficult, our mission should be coexistence between organisms in order to regenerate and recover the biodiversity we have lost, a responsibility that, to this day, continues to be ignored.

From very early on, we are taught that plants are "vegetable" beings, a term now used with a rather negative connotation regarding the ability to understand and perceive the world. But what the vast majority of people do not know is that plants have a great capacity to connect with, understand and adapt to their environment. Is their persistence throughout the planet's existence a mere coincidence?

This is where artificial intelligence enters the equation, since scientific studies have shown similarities between human neural communication and the electrochemical signaling that plants perform.

Communicating is vital to every living being: it allows us to avoid danger, to accumulate experience, to know our own body and the environment. Is there any reason why this simple mechanism should be denied to plants?

Mancuso, S., & Viola, A. (2015). Brilliant Green: The Surprising History and Science of Plant Intelligence. (J. Benham, Trans.). Washington, DC: Island Press.

Despite the fact that the trajectory of plants on this planet demonstrates a perceptual capacity, reflected in their remarkable adaptability throughout their existence, technology has remained biased toward exalting human perception. Instead of pursuing a more transversal understanding and allowing ourselves to be nurtured by other species, we insist on separating ourselves from the rest of the world, as if we were the only living beings capable of communicating and shaping the ecosystem.

It may seem that nature and technology are isolated areas, but the reality is that they have tremendous potential to coexist, potential reflected in trends such as biomimicry.

But where does artificial intelligence fit into this whole equation? 

Let's take a step back and recall artificial intelligence's ability to model, build and train systems based on data: the data feeds the model and perfects it. The same happens when learning a new language. A language is a way of communicating, and if we break communication down to its simplest form, we can define it as the transfer of information from a sender to a receiver.

Based on this premise, it is possible to understand our relationship with the environment, and with other species, as a form of communication. A few years ago, many did not believe in or imagine the rapid growth of artificial intelligence and its applications. Today, it could be the key to understanding and connecting with other organisms, particularly plants.

With their senses, plants gather information about their environment and orient themselves in the world. Plants are able to measure dozens of different parameters and process a great many data.

Mancuso, S., & Viola, A. (2015). Brilliant Green: The Surprising History and Science of Plant Intelligence. (J. Benham, Trans.). Washington, DC: Island Press.

Artificial intelligence could well become a means of communication between different species, helping us understand the realities of each one and enrich our knowledge. It has the potential to find associations and patterns in the electrophysiological responses of plants, laying the foundations for a more complete understanding between them and us.

AI, as a technology, offers us the possibility of going further to close the communication gaps that we maintain with other species on the planet. This mission, although late, could allow us to heal, to some extent, part of the debt and damage that we have caused on the planet in the name of progress.



VOYAGER: First real-time tests conclude successfully

After several months of work and remote meetings forced by the COVID-19 pandemic, the human team behind VOYAGER met on Friday, November 20 at Noi Hotel to take part in a real-time demonstration. The meeting became an opportunity to analyze the current scope of the project and its future projections.

Álvaro Riquelme, doctor and Product Manager of the system, was in charge of showcasing the proposal’s processing capabilities using speech recognition, deep learning and cloud computing. This technology, applied by UNIT, is responsible for generating a structured electronic medical record for each patient, centralizing and systematizing important information regarding their diagnosis and treatment.

"The creation of this interdisciplinary team has allowed us to shape VOYAGER based on knowledge from various disciplines, helping us to understand the problem and land on the solution concept we are presenting today in the form of a demo," explained Dr. Riquelme.

VOYAGER: A public-private collaboration

Aldo Diez de Medina, director of San Vicente de Tagua Tagua's Hospital, highlighted the importance of the collaboration developed by UNIT, Roche and the establishment he runs. "Companies are usually focused on generating profits, so it is pleasant to be able to see the evolution of the system and work with companies that are supporting public health," said the director.

Alex Pozo, medical technologist and director of Supplies at the hospital, referred to the importance of improving treatment guidelines for these types of diagnoses, a goal that VOYAGER is on track to meet.

“It is a tool based on artificial intelligence and machine learning to manage the treatment of chronic patients. We are talking about people over 70 years of age, a patient profile that often struggles to adhere to treatment and needs a constant support network in order to comply with it,” Pozo explained.

For her part, Cecilia Acuña, Innovation Lead & Patient Outcomes Consultant at Roche Diagnostics, said: “I was very impressed with the voice data entry system. I think it is a tool that has the potential to revolutionize the work of the clinical area.”

“Nowadays a big complaint from doctors is the large amount of time they must dedicate to administrative tasks. VOYAGER solves this problem, allowing health professionals to dedicate that time to what really matters: first of all, talking with patients and then, of course, treating them,” pointed out Acuña.

The project uses artificial intelligence to improve clinical care protocols for patients with multifactorial diseases, whose condition is essentially chronic. Among the most common pathologies is diabetes, which has been one of the major focuses of the project.

"For us, these projects are fundamental because they are a way to innovate and develop a complete and comprehensive solution, something that helps not only the patient, but also the health team responsible for their treatment," said Andrea Vergara, New Business Models Manager of the Diabetes area of ​​Roche.

In this way, the piloting of the system within the Creasphere program is aimed at being a success story worldwide, to benefit millions of patients with diabetes and other multifactorial diseases in adherence to their treatment.



UNIT Art Lab: A look at technology through art

Since prehistoric times, art and its many expressions have been tools to examine, analyze and record what happens around us. In a similar way, science has contributed tremendously to the understanding of different phenomena throughout history. Yet the two disciplines have always seemed to work as opposites. Until now, that is. This is where Art Lab comes in.

The subjective dimension of art and the rigorous nature of science come together in UNIT Art Lab. The project is promoted by UNIT, a company dedicated to the development of artificial intelligence solutions, and aims to generate new perspectives on technology and the fate of life on Earth.

Along these lines, digital platforms offer us the possibility of creating imaginary spaces or environments that traditional art has barely managed to explore. This is why they are the protagonists of this project.

To achieve this, data analysis tools are crossed with media arts techniques. The result is a series of visualizations and figures that the visual artist Sergio Mora-Díaz created from the data of more than 600 patients, who were monitored for two years to evaluate the evolution of their blood coagulation based on the INR indicator. The data comes from the VOYAGER project promoted by UNIT.

“My artistic work is closely linked to space and, mainly, to the generation of experiences. Being able to discover new technologies to propose new types of sensory experiences is a great opportunity that I am very happy to be a part of,” explains Mora-Díaz.

“Much of my work is based on the use of algorithms, that is, mathematical processes, to create geometric figures or interactive environments. For example, through sensors capable of capturing information from the environment and translating it into light, sound or image,” the artist points out.

“Universal intelligence includes art as its most influential means of expression, since it connects and articulates, in different ways, creative thinking, vision and our intimate sensations of the world around us. In this way, Art Lab lets us reach further into the transcendent thinking of our community, complementing our rational base of analytical tools and software,” assures Juan Larenas, CEO of UNIT.

Open call

Are you an artist? Would you like to be part of this experiment? You can find more information at the following link.



Face Recognition: a constantly updated technology

Face recognition refers to technology capable of determining the identity of subjects in images or videos. It is a non-invasive biometric system, and the techniques behind it have varied enormously over the years.

During the 1990s, traditional methods relied on handcrafted features such as textures and edge descriptors: Gabor filters, Local Binary Patterns (LBP), Histograms of Oriented Gradients (HOG) and the Scale-Invariant Feature Transform (SIFT) are some examples. These served as the basis for more complex representations, built by encoding and transforming the features with techniques such as Principal Component Analysis (PCA) or Linear Discriminant Analysis (LDA). Aspects such as luminosity, pose or expression can be managed through these parameters.
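To make the idea of a handcrafted descriptor concrete, here is a minimal NumPy sketch of an LBP-style feature. It is a simplified version of the descriptor described in the literature (no uniform-pattern reduction, no spatial grid of histograms), and the 64×64 random array merely stands in for a cropped face image:

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour Local Binary Pattern histogram.
    `gray` is a 2-D array of pixel intensities."""
    c = gray[1:-1, 1:-1]  # centre pixels (borders are skipped)
    # the 8 neighbours of each centre pixel, clockwise from top-left
    neighbours = [gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:],
                  gray[1:-1, 2:], gray[2:, 2:], gray[2:, 1:-1],
                  gray[2:, :-2], gray[1:-1, :-2]]
    # each pixel gets an 8-bit code: bit i is set if neighbour i >= centre
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << bit
    # the normalised histogram of codes is the texture descriptor
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64))  # stand-in for a face crop
descriptor = lbp_histogram(face)
print(descriptor.shape)  # (256,) -- one bin per possible LBP code
```

Because the code compares each pixel only to its neighbours, the descriptor is fairly robust to uniform changes in luminosity, which is exactly the kind of invariance these handcrafted methods were designed for.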

In the past, no single technique could fully master every scenario. One of the best results was presented in the study "Blessing of dimensionality: High-dimensional feature and its efficient compression for face verification", which achieved 95% accuracy on the Labeled Faces in the Wild (LFW) database. This indicates that the existing methods were insufficient to extract a representation of faces that was invariant to real-world changes.

How does facial recognition work today?

In recent years, traditional methods have been replaced by approaches based on deep learning, chiefly Convolutional Neural Networks (CNNs). Their main advantage is that they can “learn”, from large databases, the best features with which to represent the data, that is, the faces themselves.

An example of this is the DeepFace network, which in 2014 achieved state-of-the-art performance on the famous LFW database. With this, it was able to approach human performance in an unrestricted scenario (DeepFace: 97.35% vs. humans: 97.53%), by training a 9-layer model on 4 million face images. Inspired by this work, research shifted its focus to deep learning methods, which reached 99.8% in just three years.

Facial recognition systems are usually made up of the following stages:

  1. Face detection: A query image is entered into the system. A detector finds the position of the face in the query image and returns the coordinates of that position.
  2. Face alignment: Its goal is to scale and crop the image in the same way for all faces, using a set of reference points.
  3. Face representation: The pixels of the face image are transformed into a compact and discriminative representation, that is, a feature vector. This representation can be obtained with classical methods or with models based on deep learning. Ideally, all face images of the same subject should map to similar feature vectors.
  4. Face matching: The face images of registered individuals make up a database called a gallery, where each image is represented as a feature vector. Most methods calculate the similarity between the query image's feature vector and the vectors in the gallery, using the cosine distance or the L2 distance. The smallest distance indicates which individual the queried face belongs to.
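The matching stage can be sketched in a few lines. The 4-dimensional vectors below are toy stand-ins for the feature vectors a real representation stage would produce, and the gallery names are, of course, hypothetical:

```python
import numpy as np

def cosine_distance(a, b):
    """1 minus the cosine similarity between two feature vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def match(query_vec, gallery):
    """Return the identity in `gallery` (name -> feature vector)
    whose vector is closest to `query_vec` in cosine distance."""
    distances = {name: cosine_distance(query_vec, vec)
                 for name, vec in gallery.items()}
    return min(distances, key=distances.get)

# toy gallery of enrolled identities
gallery = {"alice": np.array([0.9, 0.1, 0.0, 0.1]),
           "bob":   np.array([0.1, 0.8, 0.2, 0.0])}
query = np.array([0.85, 0.15, 0.05, 0.1])  # a new photo's feature vector
print(match(query, gallery))  # -> alice
```

Cosine distance ignores the magnitude of the vectors and compares only their direction, which is why it pairs well with the normalized embeddings that deep models typically produce; the L2 distance mentioned above works the same way structurally, just with a different distance function.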