Artificial Intelligence: A new way to connect (us)
For as long as I can remember, I have felt a certain unease around the concept of Artificial Intelligence and its real meaning. It is a notion that can sound grandiose, quaint, even overblown: at its very mention, we immediately picture ourselves traversing a dystopian world of cyborgs and flying cars.
But the truth is that, if we look for a precise definition of the term, we can establish that it is a computer's ability to perform cognitive tasks we associate with the human mind: the ability to reason, to solve problems independently, and even the capacity for perception.
In this way, artificial intelligence has made it possible to render certain human behaviors tangible through neural models, bringing us closer to an understanding of how we perceive the world. Along these lines, AI as a resource pushes us to objectify knowledge, generate correlations and detect opportunities, all from an anthropocentric perspective.
But where are the rest of the species that make up this very biodiverse world? Is it possible to connect with them?
We exist on a planet of enormous biodiversity, where plants account for roughly 80% of all living biomass. Unfortunately, we know that humanity has contributed to the loss of 83% of wild mammals and half of all plant species. Although in recent years sustainability seems to have entered the collective mind, we are still a long way from repairing this damage.
It is clear that human beings have developed a true superiority complex, but the truth is that we currently represent only 0.01% of the living matter on planet Earth. Even so, the aims and efforts of our technology remain biased by this human gaze, without valuing the interspecies perception that should guide these advances.
So, what if we directed these neural models toward listening to and processing the rich signals of other species? Would it be possible to train a bio-hybrid model?
This is clearly an ambitious scenario, but also a necessary one given the critical global context in which we find ourselves. However difficult, our mission should be coexistence among organisms in order to regenerate and recover lost biodiversity, a responsibility that, to this day, continues to be ignored.
From very early on, we are taught that plants are "vegetable" beings, a term used today with a rather negative connotation regarding the capacity to understand and perceive the world. But what most people do not know is that plants have a great capacity to connect with, understand and adapt to their environment. Is their persistence throughout the planet's existence a mere coincidence?
This is where artificial intelligence enters the equation: scientific studies have shown similarities between human neural communication and the electrochemical signaling that plants perform.
Communicating is vital to every living being: it allows us to avoid danger, to accumulate experience, to know our own body and the environment. Is there any reason why this simple mechanism should be denied to plants?
Mancuso, S., & Viola, A. (2015). Brilliant Green: The Surprising History and Science of Plant Intelligence. (J. Benham, Trans.). Washington, DC: Island Press.
Although the trajectory of plants on this planet demonstrates a perceptual capacity that has translated into remarkable adaptability throughout their existence, technology has remained biased toward amplifying human perception. Instead of building a more transversal understanding and allowing ourselves to learn from other species, we insist on separating ourselves from the rest of the world, as if we were the only living beings capable of communicating and shaping the ecosystem.
It may seem that nature and technology are isolated areas, but the reality is that they have tremendous potential to coexist, potential reflected in trends such as biomimicry.
But where does artificial intelligence fit into this whole equation?
Let's take a step back and recall artificial intelligence's capacity to build and train models from data: data feeds a model and refines it. The same happens when learning a new language. A language is a way of communicating, and if we simplify what communication is, we can define it as the transfer of information from a sender to a receiver.
Based on this premise, it is possible to understand our relationship with the environment, and with other species, as a form of communication. A few years ago, many did not believe in or even imagine the rapid growth of artificial intelligence and its applications. Today, it could be the key to understanding and connecting with other organisms, particularly plants.
With their senses, plants gather information about their environment and orient themselves in the world. Plants are able to measure dozens of different parameters and process a great many data.
Mancuso, S., & Viola, A. (2015). Brilliant Green: The Surprising History and Science of Plant Intelligence. (J. Benham, Trans.). Washington, DC: Island Press.
Artificial intelligence could well become a means of communication between species, helping us understand each other's realities and enrich our knowledge. It has the potential to find associations and patterns in the different electrophysiological responses of plants, laying the foundations for a more complete understanding between humans and plants.
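To make this concrete, here is a minimal sketch of what such pattern-finding could look like: a classifier trained on simple statistical features of plant electrophysiological recordings, labeled by the stimulus that produced them. The data, labels and feature choices are illustrative assumptions, not an actual model from any study cited here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical dataset: each row is a plant electrophysiological
# recording (voltage samples over time), labeled by the stimulus
# that produced it (0 = light change, 1 = touch, 2 = wounding).
rng = np.random.default_rng(0)
n_recordings, n_samples = 300, 1000
signals = rng.normal(size=(n_recordings, n_samples))  # placeholder signals
labels = rng.integers(0, 3, size=n_recordings)        # placeholder labels

def extract_features(signal):
    """Reduce a raw voltage trace to a few descriptive statistics."""
    return [
        signal.mean(),                  # baseline level
        signal.std(),                   # overall variability
        np.abs(np.diff(signal)).max(),  # sharpest transient
        (signal > signal.mean() + 2 * signal.std()).sum(),  # spike count
    ]

X = np.array([extract_features(s) for s in signals])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

# The classifier learns associations between signal features and stimuli.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

With real recordings, the interest would lie in which features separate stimuli; with the placeholder data above, the pipeline simply demonstrates the shape of the approach.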
AI, as a technology, offers us the chance to go further and close the communication gaps we maintain with other species on the planet. This mission, although late, could allow us to repair, to some extent, part of the debt and damage we have caused to the planet in the name of progress.
Climate change: How can AI help us curb this global crisis?
According to a study published in the academic journal Nature, human activity is responsible for 25% to 40% more methane emissions than previously estimated. Methane is one of the most powerful greenhouse gases: it is approximately 28 times more effective than carbon dioxide at trapping heat in the atmosphere, making it a major contributor to climate change.
For this reason, the gas is currently responsible for about a quarter of global warming. While it is generated naturally by animals, volcanoes and wetlands, it is also a by-product of oil and gas production. The problem is equally present in the mining industry, given the fossil fuels used for both production and transport.
Using fossil fuels: on the way to greater efficiency
Under the 2015 Paris Agreement, 195 countries pledged to limit the global temperature rise to 2.0°C, and ideally to no more than 1.5°C. This objective has motivated, in part, the decarbonization of multiple industries. This apparent shift in mindset will no doubt increase pressure from governments, investors and society to reduce emissions from the mining sector.
Currently, the industry is responsible for 4 to 7% of greenhouse gas emissions worldwide: about 1% comes from the CO2 generated by mining operations and their energy consumption, while fugitive methane emissions from coal mining are estimated at a further 3 to 6%.
The negative impact of these emissions has long been documented, both abroad and in Chile. In early 2020, a study revealed that the rise in the planet's temperature was partially responsible for the devastating fires recorded in Australia. In Chile, the diverse climates found across the territory have been affected in multiple ways, especially in rainfall, a factor that hurts key industries such as agriculture.
COSMOS
UNIT, a company dedicated to the development of artificial intelligence solutions, addresses this problem through COSMOS, a project that seeks to optimize fuel use in transport-intensive industries, especially mining.
The platform reduces fuel consumption, and with it GHG emissions, through artificial intelligence models that predict consumption, optimize performance and detect anomalies in fuel use. Early detection also gives operators feedback on incorrect operational practices, in pursuit of operational excellence and efficiency.
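As an illustration of the anomaly-detection idea (not COSMOS's actual implementation, which is not public), here is a minimal sketch using an Isolation Forest over per-trip fuel records; the feature names and data are assumptions:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-trip records: [distance_km, load_tons, fuel_liters].
rng = np.random.default_rng(42)
distance = rng.uniform(50, 300, size=500)
load = rng.uniform(10, 100, size=500)
fuel = 0.3 * distance + 0.1 * load + rng.normal(0, 2, size=500)
trips = np.column_stack([distance, load, fuel])

# A few anomalous trips with disproportionate fuel use
# (e.g., leaks, excessive idling, or bad driving practices).
anomalies = np.array([[120.0, 40.0, 90.0], [80.0, 20.0, 70.0]])
data = np.vstack([trips, anomalies])

# Isolation Forest flags records that are easy to isolate, i.e., outliers.
model = IsolationForest(contamination=0.01, random_state=0).fit(data)
flags = model.predict(data)  # -1 = anomaly, 1 = normal
print("flagged trips:", np.where(flags == -1)[0])
```

Flagged trips could then be surfaced to operators as feedback, in the spirit of the early-detection loop described above.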
UNIT Art Lab: A look at technology through art
Since ancient times, art and its many expressions have been tools to examine, analyze and record what happens around us. In a similar way, science has contributed enormously to our understanding of different phenomena throughout history. Yet the two disciplines have always seemed to work as opposites. Until now, that is. This is where Art Lab comes in.
The subjective dimension of art and the rigorous nature of science come together in UNIT Art Lab. The project, promoted by UNIT, a company dedicated to the development of artificial intelligence solutions, aims to generate new views on technology and the fate of life on Earth.
Along these lines, digital platforms offer us the possibility of creating imaginary spaces or environments that traditional art has barely managed to explore. This is why they are the protagonists of this project.
To achieve this, data analysis tools are crossed with media arts techniques. The result is a series of visualizations and figures that visual artist Sergio Mora-Díaz created from the data of more than 600 patients, who were monitored for two years to track the evolution of their blood coagulation via the INR (International Normalized Ratio) indicator. The data comes from the VOYAGER project, also promoted by UNIT.
“My artistic work is closely linked to space and, mainly, to the generation of experiences. Being able to discover new technologies to propose new types of sensory experiences is a great opportunity that I am very happy to be a part of,” explains Mora-Díaz.
"Much of my work is based on the use of algorithms, that is, mathematical data, to be able to create geometric figures or interactive environments. For example, through sensors capable of capturing information from the environment and translating it into light, sound or image”, the artist points out.
“Universal intelligence includes art as its most influential means of expression, since it connects and articulates, in different ways, creative thinking, vision and our innermost sensations of the world around us. In this way, Art Lab lets us reach further into the transcendent thinking of our community, complementing our rational base of analytical tools and software,” says Juan Larenas, CEO of UNIT.
Open call
Are you an artist? Would you like to be part of this experiment? You can find more information at the following link.
Face Recognition: a constantly updated technology
Face recognition refers to technology capable of determining the identity of subjects appearing in images or videos. It is a non-invasive biometric system, and the techniques behind it have varied enormously over the years.
During the 1990s, traditional methods relied on handcrafted features such as textures and edge descriptors: Gabor filters, Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG) and the Scale-Invariant Feature Transform (SIFT) are some examples. These served as the basis for more complex representations built through feature encoding and transformation, such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), among others. Factors such as illumination, pose or expression can be managed through these parameters.
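For a sense of what these handcrafted descriptors look like in practice, here is a minimal sketch extracting LBP and HOG features from a face crop using scikit-image; the image source is an arbitrary stand-in, and real systems of that era stacked encoding steps such as PCA on top:

```python
import numpy as np
from skimage import color, data, transform
from skimage.feature import hog, local_binary_pattern

# Any grayscale face crop works; here we cut one from a bundled image.
face = color.rgb2gray(data.astronaut())[30:180, 150:300]
face = transform.resize(face, (128, 128))
face_u8 = (face * 255).astype(np.uint8)

# LBP: each pixel is encoded by comparing it with its 8 neighbors at
# radius 1; the histogram of codes summarizes local texture.
lbp = local_binary_pattern(face_u8, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

# HOG: histograms of gradient orientations over small cells capture
# local edge structure.
hog_vec = hog(face, orientations=9, pixels_per_cell=(16, 16),
              cells_per_block=(2, 2))

print("LBP histogram:", np.round(lbp_hist, 3))
print("HOG feature length:", hog_vec.shape[0])
```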
At the time, no single technique could fully master all scenarios. One of the best results was presented in the study "Blessing of dimensionality: High-dimensional feature and its efficient compression for face verification", which achieved 95% accuracy on the Labeled Faces in the Wild (LFW) dataset. This indicated that existing methods were insufficient to extract a representation of faces invariant to real-world changes.
How does facial recognition work today?
In recent years, traditional methods have been replaced by approaches based on deep learning, most of them built on Convolutional Neural Networks (CNNs). The main advantage of deep learning methods is that they can "learn", from large databases, the best features with which to represent the data, that is, to represent the faces.
An example of this is the DeepFace network, which in 2014 achieved state-of-the-art performance on the famous LFW dataset, approaching human performance in an unconstrained scenario (DeepFace: 97.35% vs. humans: 97.53%). It did so by training a 9-layer model on 4 million face images. Inspired by this work, research shifted toward deep learning methods, and accuracy reached 99.8% within just three years.
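As a minimal sketch of the idea (far smaller than DeepFace itself, whose architecture is not reproduced here), the toy model below maps a face crop to a compact embedding vector; the layer sizes and input resolution are illustrative assumptions:

```python
import torch
import torch.nn as nn

class TinyFaceEmbedder(nn.Module):
    """Toy CNN mapping a 64x64 grayscale face to a 128-d embedding."""
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global pooling
        )
        self.head = nn.Linear(64, embedding_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.head(self.features(x).flatten(1))
        # L2-normalize so cosine similarity reduces to a dot product.
        return nn.functional.normalize(z, dim=1)

# One fake grayscale face crop of 64x64 pixels.
face = torch.randn(1, 1, 64, 64)
embedding = TinyFaceEmbedder()(face)
print(embedding.shape)  # torch.Size([1, 128])
```

Trained with a suitable loss on millions of labeled faces, such a network learns embeddings where images of the same person land close together, which is what the matching stage below relies on.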
Facial recognition systems are usually made up of the following stages:
- Face detection: A query image enters the system. A detector finds the position of the face in the query image and returns its coordinates.
- Face alignment: Its goal is to scale and crop the image in the same way for all faces, using a set of reference points.
- Face representation: The pixels of the face image are transformed into a compact and discriminative representation, that is, a feature vector. This representation can be obtained with classical methods or with models based on deep learning. Ideally, all face images of the same subject should map to similar feature vectors.
- Face matching: The face images of registered individuals make up a database called a gallery, where each face image is represented as a feature vector. Most methods compute the similarity between the query image's feature vector and the vectors in the gallery using the cosine distance or the L2 distance; the smallest distance indicates which individual the query face belongs to (see the sketch below).
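A minimal sketch of the matching stage, assuming embeddings are already L2-normalized (as produced, for instance, by the toy embedder above); the gallery contents are made-up placeholders:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two L2-normalized feature vectors."""
    return 1.0 - float(a @ b)

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# Hypothetical gallery: one normalized feature vector per enrolled subject.
rng = np.random.default_rng(1)
gallery = {name: normalize(rng.normal(size=128))
           for name in ["alice", "bob", "carol"]}

# A query resembling "bob": his gallery vector plus a little noise.
query = normalize(gallery["bob"] + 0.1 * rng.normal(size=128))

# Match: pick the gallery identity at the smallest cosine distance.
distances = {name: cosine_distance(query, vec) for name, vec in gallery.items()}
best = min(distances, key=distances.get)
print(best, round(distances[best], 3))  # "bob", at a small distance
```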
Professional growth: How can AI help you build your career?
Everyone has dreamed of being their own boss, an ideal into which we invest resources and countless hours of study. Although this is undoubtedly the basis of a successful career and subsequent professional growth, nowadays there are multiple tools that cross different fields of expertise, helping us pursue our objectives and take the lead in the job market.
Currently, every data science student, teacher and professional has formed an opinion about Artificial Intelligence (AI), its uses, applications and limitations. It is a discipline in constant expansion, and its scope is increasingly transversal; even those in other academic branches and professions are beginning to take more than a passing interest in these emerging technologies.
According to MIT Sloan Research, more than 90% of the largest companies globally are using AI to improve their customer interaction protocols. In other words, we are entering a new decade, one that will definitely be defined by data. As a consequence, the demand for professionals dedicated to these disciplines will be much more intense.
New opportunities
US-based management consulting firm McKinsey & Company estimates that 13% of current work activities performed in occupations requiring a college or advanced degree could be displaced. Specifically, the study found that 75 to 375 million people around the world may need to change their field of work by 2030.
Contrary to the popular view that these tools mainly cause job losses, the new data-driven era offers multiple new opportunities. In this changing landscape, technologies such as AI and Machine Learning (ML) will lead the demand for professionals. But where to start?
Who to follow
Keep up to date with the latest news from the world of artificial intelligence by following some of the most respected voices in the AI world on social media.
Bob Swan, Intel Corporation
Jen-Hsun “Jensen” Huang, Nvidia
Demis Hassabis, DeepMind Technologies
Jeff Bezos, Amazon
Juan Larenas, UNIT
Free resources for professional growth
There are many freely accessible resources that can help you forge your new career in AI. Here we recommend a few.
Elements of AI
The Elements of AI is a series of free online courses created by Reaktor and the University of Helsinki. They combine theory with practical exercises so you can learn at your own pace.
MIT Artificial Intelligence Course
Available through the official YouTube channel of the Massachusetts Institute of Technology, it is aimed at professionals with basic knowledge of AI.
Google Machine Learning Crash Course
Although it does not require prior knowledge, we recommend having some experience with Python programming. The course also includes supplementary resources to help you keep learning.
Stanford Machine Learning Course
The popular online course platform Coursera offers this course from the renowned Stanford University, focused on acquiring practical knowledge of key aspects of AI.
Would you like to know your level of AI and Machine Learning? Put your skills to the test with this free test from PixelTests.
Artificial Intelligence: Why do facial recognition systems fail?
Unlike password-protected systems, our biometric information is widely available and relatively easy to obtain. As a result, some types of attacks are easy to mount and can succeed if no countermeasures are in place. In particular, facial recognition systems can be compromised using one of the following:
- A photograph
- A video
- A 3D face model
Various methods have been developed to deal with face-image spoofing. They can be divided into two approaches: dynamic features and static features.
Dynamic feature approaches seek to detect motion in a video sequence by analyzing the trajectory of specific segments of the face, which reveals valuable information for discriminating between real faces and static copies. Typical methods include eye-blink detection; head and face gestures (nodding, smiling, or looking in different directions); and face and gaze tracking through flow estimation. These techniques are highly effective at detecting photo attacks, but less so against videos.
To improve performance against video attacks, specific liveness-detection methods for video have been developed: exploring the 3D structure of the scene by analyzing many 2D images with different head poses; context-based analysis that exploits the non-facial information in the samples, such as motion cues in the scene (background vs. foreground movement); and others. Modified versions of Local Binary Patterns (LBP) are also used, mostly to exploit the temporal information present in the video, or to contrast dynamic textures against rigid objects such as photos and masks.
The search for solutions
One way to tackle the problem is to focus on liveness detection. This requires a spatio-temporal representation that combines facial appearance with its dynamics. Here, the key lies in an LBP-based spatio-temporal representation, given its proven performance in modeling face movement, recognizing facial expressions, and recognizing dynamic textures.
How is spoofing in facial recognition detected?
The LBP operator for texture analysis is defined as a grayscale-invariant texture measure, derived from a general definition of texture in a local neighborhood. It is a powerful texture descriptor whose properties for real-world applications include its discriminative power, computational simplicity, and tolerance to monotonic grayscale changes.
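To see what the operator actually computes, here is a naive sketch of the basic 8-neighbor LBP code for a single 3x3 patch (production code would vectorize this over the whole image, as libraries such as scikit-image do):

```python
import numpy as np

def lbp_code(patch: np.ndarray) -> int:
    """LBP code of the center pixel of a 3x3 patch: each of the 8
    neighbors contributes one bit, set when neighbor >= center."""
    center = patch[1, 1]
    # Neighbors ordered clockwise starting at the top-left corner.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(n >= center) << i for i, n in enumerate(neighbors))

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))  # 241: bits set where a neighbor >= the center (6)
```

Because the code depends only on comparisons against the center pixel, any monotonic change in grayscale (say, uniformly brighter lighting) leaves it unchanged, which is exactly the tolerance described above.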
The LBP operator was initially conceived for spatial information. However, its use has been extended to spatio-temporal representations for dynamic texture analysis, giving rise to the Volume Local Binary Pattern (VLBP) operator.
VLBP captures the dynamic texture of a video, which is represented as a volume (X, Y, T), where X and Y denote the spatial coordinates and T the frame index; the neighborhood of each pixel is thus defined in three dimensions. The volume can also be described by orthogonal planes, giving way to what is known as LBP-TOP (LBP from Three Orthogonal Planes). Here the XY, XT and YT planes are defined; LBP maps are extracted for each plane, denoted XY-LBP, XT-LBP and YT-LBP, and then concatenated into a single LBP representation centered on a pixel of the volume.
In the LBP-TOP operator, the radius of the LBP algorithm along the X axis is denoted Rx, along the Y axis Ry, and along the T axis Rt.
The number of neighboring points in the XY, XT and YT planes is PXY, PXT and PYT, respectively. The operator type can vary per plane: uniform patterns (u2) or rotation-invariant uniform patterns (riu2).
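The sketch below is a simplified take on this, an assumption-laden approximation rather than the full per-pixel volume operator: it computes a uniform LBP map on the central slice of each orthogonal plane of a synthetic video volume and concatenates the three histograms:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_top_histogram(volume: np.ndarray, P: int = 8, R: int = 1) -> np.ndarray:
    """Concatenated LBP histograms from the central XY, XT and YT
    slices of a (T, Y, X) video volume. Simplified sketch: the full
    LBP-TOP operator aggregates codes around every pixel of the volume."""
    t, y, x = (s // 2 for s in volume.shape)
    planes = [volume[t, :, :],  # XY plane: one full frame
              volume[:, y, :],  # XT plane: one row followed over time
              volume[:, :, x]]  # YT plane: one column followed over time
    hists = []
    for plane in planes:
        plane_u8 = (plane * 255).astype(np.uint8)
        codes = local_binary_pattern(plane_u8, P, R, method="uniform")
        # "uniform" codes take values 0 .. P+1, hence P+2 bins.
        h, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        hists.append(h)
    return np.concatenate(hists)  # XY-LBP | XT-LBP | YT-LBP

video = np.random.rand(75, 64, 64)      # placeholder 75-frame face sequence
print(lbp_top_histogram(video).shape)   # (30,) = 3 planes x 10 bins
```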
Unlike photographs, real faces are non-rigid objects whose facial muscle contractions produce temporal deformations, for example in the eyelids and lips. It is therefore assumed that specific facial motion patterns should be detectable when a living human is observed by a frontal camera, while moving a photograph in front of a camera produces distinctive motion patterns that do not match those of a genuine face.
The anti-spoofing methodology consists of the following stages:
- Each frame of the original sequence is converted to grayscale and run through a face detector.
- The detected faces are geometrically normalized to 64 × 64 pixels. To reduce face-detector noise, the same bounding box is used for every frame in each set of frames processed with the LBP-TOP operator.
- The LBP operator is applied in each plane (XY, XT and YT) and the histograms are calculated and then concatenated.
- A binary classifier determines whether the sequence comes from a real face or an attack.
Each video, whether of an actual attack or a genuine access, is transformed into a grayscale 3D array representing the spatio-temporal distribution (X, Y, T). The videos are then divided into sequences of 75 frames, and a face detection algorithm is applied to the center frame of each sequence.
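Putting the stages together, here is a minimal sketch of the final classification step: an SVM trained on LBP-TOP histograms (as produced, say, by a feature extractor like the lbp_top_histogram sketch above). The data is synthetic and merely stands in for real and attack sequences:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder features: one 30-bin LBP-TOP histogram per 75-frame
# sequence, labeled 1 for a genuine access and 0 for an attack.
rng = np.random.default_rng(3)
real = rng.dirichlet(np.ones(30), size=100)        # stand-in histograms
attack = rng.dirichlet(np.ones(30) * 2, size=100)  # slightly different stats
X = np.vstack([real, attack])
y = np.array([1] * 100 + [0] * 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Binary SVM deciding real vs. spoof from the concatenated histograms.
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```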
This method is useful for preventing simple attacks (such as photographs), but it is not recommended against more complex ones: since it targets temporal variations, it can easily be defeated with a mask. That is why combining methods is always suggested when building a robust biometric system.
For more information and the code of the developed project visit the project on GitHub.
How to apply AI tools for health innovation?
Since its inception, the development of artificial intelligence has faced considerable scrutiny, and even some mistrust, from the scientific community and especially the general public. However, the constant advances of AI tools have sought to overcome these obstacles and find solutions to humanity's great problems.
In November 2018, the Duke University Health System Emergency Department launched "Sepsis Watch." The tool was designed through deep learning to help professionals in the area detect the first signs of one of the leading causes of hospital death worldwide: infections and their overwhelming ability to wreak havoc on the human body.
The dreaded sepsis occurs when an infection triggers inflammation throughout the body, which can cause immediate, and multiple, organ failure. Fever, shortness of breath, low blood pressure, a fast heartbeat and mental confusion are just some of its symptoms. Although its effects are extremely harmful, it can be treated if diagnosed early. However, this is easier said than done, since its early signs are often confused with other ailments.
Sepsis Watch is the product of three and a half years of development, during which medical records were digitized and 32 million data points were analyzed. Subsequently, the Duke University team focused on designing a simple interface so that the tool could be used in the form of an iPad app. The app checks each patient's information and assigns them a rating based on their probability of developing the condition. Once a doctor confirms the diagnosis, an immediate treatment strategy is put in place.
The result has been a drastic reduction in patient deaths from sepsis. The AI tool is currently part of a federally registered clinical trial, whose preliminary results will be available by 2021.
VOYAGER: AI Tools solution for the health area made in Chile
Like sepsis, arterial hypertension, Alzheimer's disease, schizophrenia, retinitis pigmentosa, asthma and diabetes mellitus are pathologies with high mortality rates according to the WHO. Due to the complexity of their diagnosis, their treatment normally follows rigid protocols whose results may vary from one patient to another.
VOYAGER, developed by UNIT, focuses on dramatically improving the management of these so-called multifactorial diseases. Through artificial intelligence, the system can process data collected by voice interfaces to fully understand each patient's status and perform predictive, automated monitoring of their treatment.
As with Sepsis Watch, this translates into more efficient diagnoses and identification of higher-risk cases, directly impacting the fatality rates of these diseases. In concrete terms, VOYAGER's goal is to reduce serious hospitalizations by 50% among those suffering from diabetes, cerebrovascular disease, hypertension and even obesity, in both public and private health care.