UNIT Art Lab: A look at technology through art

Since ancient times, art in its many expressions has been a tool to examine, analyze and record what happens around us. Science, in turn, has contributed enormously to our understanding of natural phenomena throughout history. Yet the two disciplines have long seemed to work as opposites. Until now, that is: this is where Art Lab comes in.

The subjective dimension of art and the rigorous nature of science come together in UNIT Art Lab. The project is promoted by UNIT, a company dedicated to the development of artificial intelligence solutions, and aims to generate new perspectives on technology and the fate of life on Earth.

Along these lines, digital platforms offer us the possibility of creating imaginary spaces or environments that traditional art has barely managed to explore. This is why they are the protagonists of this project.

To achieve this, data analysis tools are crossed with media arts techniques. The result is a series of visualizations and figures that visual artist Sergio Mora-Díaz created from the data of more than 600 patients, who were monitored for two years to track the evolution of their blood coagulation through the INR indicator. The data comes from the VOYAGER project, promoted by UNIT.

“My artistic work is closely linked to space and, above all, to the generation of experiences. Being able to discover new technologies with which to propose new types of sensory experiences is a great opportunity that I am very happy to be a part of,” explains Mora-Díaz.

“Much of my work is based on the use of algorithms, that is, mathematical data, to create geometric figures or interactive environments; for example, through sensors capable of capturing information from the environment and translating it into light, sound or image,” the artist points out.

“Universal intelligence includes art as its most influential means of expression, since it connects and articulates, in different ways, creative thinking, vision and our intimate sensations of the world around us. In this way, Art Lab lets us reach further into the transcendent thinking of our community, complementing our rational base of analytical tools and software,” says Juan Larenas, CEO of UNIT.

Open call

Are you an artist? Would you like to be part of this experiment? You can find more information at the following link.


Face Recognition: a constantly updated technology

Face recognition refers to technology capable of establishing the identity of subjects in images or videos. It is a non-invasive biometric system, and the techniques behind it have varied enormously over the years.

During the 1990s, traditional methods used handcrafted features such as texture and edge descriptors: Gabor filters, Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG) and the Scale-Invariant Feature Transform (SIFT) are some examples. These served as the basis for more complex representations, built through feature encoding and transformation techniques such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), among others. Aspects such as illumination, pose or expression can be managed through these representations.

In the past, no single technique could fully master all scenarios. One of the best results achieved is the one presented in the study "Blessing of dimensionality: High-dimensional feature and its efficient compression for face verification", where 95% accuracy was reached on the Labeled Faces in the Wild (LFW) database. This indicates that the existing methods were insufficient to extract a representation of faces invariant to real-world changes.

How does facial recognition work today?

In recent years, traditional methods have been replaced by others based on deep learning, built on Convolutional Neural Networks (CNNs). The main advantage of deep learning methods is that they can “learn”, from large databases, the best features with which to represent the data, that is, the faces.

An example of this is the DeepFace network, which in 2014 achieved state-of-the-art performance on the famous LFW database. With this, it approached human performance in an unconstrained scenario (DeepFace: 97.35% vs. humans: 97.53%), training a 9-layer model on 4 million face images. Inspired by this work, the focus of research shifted towards deep learning methods, which reached 99.8% in just three years.

Facial recognition systems are usually made up of the stages shown in the following figure:

  1. Face detection: A query image is entered into the system. A detector finds the position of the face in the query image and returns its coordinates.
  2. Face alignment: Its goal is to scale and crop the image in the same way for all faces, using a set of reference points.
  3. Face representation: The pixels of the face image are transformed into a compact, discriminative representation, that is, a feature vector. This representation can be achieved using classical methods or deep learning models. Ideally, all face images of the same subject should map to similar feature vectors.
  4. Face matching: The face images of registered individuals make up a database called a gallery. Each face image in the gallery is represented as a feature vector. Most methods compute the similarity between the query feature vector and each gallery vector, using the cosine distance or the L2 distance; the gallery vector with the smallest distance indicates which individual the query face belongs to.
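The matching stage can be sketched in a few lines. The following is a minimal illustration only: a toy gallery of random 128-dimensional embeddings stands in for the output of a real face encoder, so just the nearest-neighbor logic is meaningful.

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two feature vectors (0 means same direction)."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def match_face(query: np.ndarray, gallery: dict) -> tuple:
    """Return the gallery identity whose feature vector is closest to the query."""
    best_id, best_dist = None, float("inf")
    for identity, vec in gallery.items():
        d = cosine_distance(query, vec)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id, best_dist

# Toy example: random embeddings stand in for a real face representation stage.
rng = np.random.default_rng(0)
gallery = {name: rng.normal(size=128) for name in ["alice", "bob"]}
query = gallery["alice"] + rng.normal(scale=0.05, size=128)  # noisy re-capture
identity, dist = match_face(query, gallery)
```

In a production system, a distance threshold would also be applied, so that faces not enrolled in the gallery are rejected instead of matched to the nearest stranger.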


Artificial Intelligence: Why do facial recognition systems fail?

Unlike password-protected systems, our biometric information is widely available and relatively easy to obtain. Therefore, some types of attacks are easy to mount and can succeed if no countermeasures are in place. In particular, facial recognition systems can be compromised using one of the following:

  • A photograph
  • A video
  • A 3D face model

Various methods have been developed to deal with face-image spoofing. They can be divided into two approaches: dynamic features and static features.

Dynamic feature approaches seek to detect motion in a video sequence by analyzing the trajectory of specific segments of the face, which reveals valuable information for discriminating between real faces and static copies. Typical methods include eye-blink detection; head and face gestures (nodding, smiling, or looking in different directions); and face and gaze tracking through flow estimation. These techniques are highly effective at detecting photo attacks, but less effective against videos.

To improve performance against video attacks, liveness-detection methods specific to video have been developed: for example, exploiting the 3D structure of videos by analyzing a large number of 2D images with different head poses; or context-based analysis that leverages the non-facial information available in the samples, such as motion characteristics of the scene (background versus foreground motion). Modified versions of Local Binary Patterns (LBP) are also used, mostly to exploit the temporal information present in the video or to analyze dynamic textures in contrast with rigid objects such as photos and masks.

The search for solutions

One way to tackle the problem is to focus on liveness detection. This requires a spatio-temporal representation that combines facial appearance with its dynamics. The key lies in using an LBP-based spatio-temporal representation, given its proven performance in modeling face movement, recognizing facial expressions, and recognizing dynamic textures.

How is spoofing in facial recognition detected?

The LBP operator for texture analysis is defined as a grayscale-invariant texture measure, derived from a general definition of texture in a local neighborhood. It is a powerful texture descriptor, and its strengths for real-world applications include its discriminative power, computational simplicity, and tolerance to monotonic grayscale changes.
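As a minimal sketch, the basic 3×3 LBP operator can be implemented as follows. Note in particular that the codes are unchanged by a constant brightness shift, which illustrates the tolerance to monotonic grayscale changes mentioned above:

```python
import numpy as np

def lbp_codes(image: np.ndarray) -> np.ndarray:
    """Basic 3x3 LBP: each inner pixel gets an 8-bit code, one bit per
    neighbor that is greater than or equal to the center pixel."""
    h, w = image.shape
    center = image[1:h-1, 1:w-1]
    codes = np.zeros(center.shape, dtype=np.uint8)
    # Offsets of the 8 neighbors, clockwise from the top-left corner.
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = image[1+dy:h-1+dy, 1+dx:w-1+dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(image: np.ndarray) -> np.ndarray:
    """Normalized 256-bin histogram of LBP codes: the texture descriptor."""
    hist = np.bincount(lbp_codes(image).ravel(), minlength=256)
    return hist / hist.sum()
```

Because each bit only records whether a neighbor is at least as bright as the center, adding a constant to the whole image leaves every code, and therefore the descriptor, unchanged.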

The LBP operator was initially conceived for spatial information. However, its use has been extended to spatio-temporal representations for dynamic texture analysis, giving rise to the Volume Local Binary Pattern (VLBP) operator.

VLBP captures the dynamic texture of a video, represented as a volume (X, Y, T), where X and Y denote the spatial coordinates and T the frame index; the neighborhood of each pixel is therefore defined in three dimensions. The volume can be described by orthogonal planes, giving rise to what is known as LBP-TOP, or LBP on Three Orthogonal Planes. Here the XY, XT and YT planes are defined; from them, the LBP maps of each plane are extracted, denoted XY-LBP, XT-LBP and YT-LBP, and then concatenated to obtain the LBP representation centered at a pixel of the volume, as shown in the figure.

LBP in three orthogonal planes. (a) The planes intersect one pixel. (b) LBP histograms of each plane. (c) Concatenation of the histograms.

In the LBP-TOP operator, the radius of the LBP algorithm on the X axis is denoted Rx, on the Y axis it is denoted Ry and on the T axis it is denoted by Rt.

The number of neighboring points in the XY, XT and YT planes is PXY, PXT and PYT, respectively. The type of operator can vary per plane: uniform patterns (u2) or rotation-invariant uniform patterns (riu2).
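A simplified sketch of the LBP-TOP descriptor follows. For brevity it takes a single central slice of each orthogonal plane and uses the basic 3×3 operator for all three; the full operator aggregates histograms over all slices, with per-plane radii Rx, Ry, Rt and neighbor counts PXY, PXT, PYT as described above.

```python
import numpy as np

def _lbp_hist(plane: np.ndarray) -> np.ndarray:
    """Normalized 256-bin histogram of basic 3x3 LBP codes for one 2-D plane."""
    h, w = plane.shape
    center = plane[1:h-1, 1:w-1]
    codes = np.zeros(center.shape, dtype=np.uint8)
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for bit, (dy, dx) in enumerate(offsets):
        codes |= (plane[1+dy:h-1+dy, 1+dx:w-1+dx] >= center).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()

def lbp_top(volume: np.ndarray) -> np.ndarray:
    """Concatenate LBP histograms from the XY, XT and YT planes
    of a grayscale video volume with shape (T, Y, X)."""
    t, y, x = volume.shape
    planes = [
        volume[t // 2, :, :],  # XY plane: one frame (appearance)
        volume[:, y // 2, :],  # XT plane: horizontal motion over time
        volume[:, :, x // 2],  # YT plane: vertical motion over time
    ]
    return np.concatenate([_lbp_hist(p) for p in planes])  # 3 x 256 = 768 dims
```

The XY histogram encodes appearance, while the XT and YT histograms encode how texture evolves over time, which is exactly the information that separates a living face from a moving photograph.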

Unlike photographs, real faces are non-rigid objects, with contractions of the facial muscles that produce temporal deformations, for example of the eyelids and lips. It is therefore assumed that specific facial movement patterns should be detected when a living human is observed by a frontal camera. A photograph moved in front of a camera produces distinctive motion patterns that do not match those of a genuine face.

The figure presents the anti-spoofing methodology, which consists of the following stages:

LBP-TOP-based anti-spoofing method block diagram.
  1. Each frame of the original sequence is converted to grayscale and run through a face detector.
  2. The detected faces are geometrically normalized to 64 × 64 pixels. To reduce the noise of the face detector, the same bounding box is used for every frame in the set processed by the LBP-TOP operator.
  3. The LBP operator is applied in each plane (XY, XT and YT), and the histograms are calculated and then concatenated.
  4. A binary classifier is used to decide whether the sequence is a real access or an attack.
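The final stage is an ordinary binary classifier trained on the concatenated histograms. As an illustration only, here is a tiny logistic-regression classifier trained on synthetic feature vectors standing in for LBP-TOP descriptors of real accesses and attacks; an actual system would use features from the pipeline above and typically a stronger classifier such as an SVM.

```python
import numpy as np

def train_logistic(X: np.ndarray, y: np.ndarray, lr: float = 0.5, steps: int = 500) -> np.ndarray:
    """Fit logistic regression by gradient descent; returns weights plus bias."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))  # predicted P(real)
        w -= lr * Xb.T @ (p - y) / len(y)                     # gradient step
    return w

def predict(w: np.ndarray, X: np.ndarray) -> np.ndarray:
    """1 = real access, 0 = attack."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30))) > 0.5).astype(int)

# Synthetic stand-ins for LBP-TOP descriptors of real accesses vs. photo attacks.
rng = np.random.default_rng(3)
real = rng.normal(loc=0.0, scale=1.0, size=(100, 8))
fake = rng.normal(loc=1.5, scale=1.0, size=(100, 8))
X = np.vstack([real, fake])
y = np.array([1] * 100 + [0] * 100)
w = train_logistic(X, y)
accuracy = (predict(w, X) == y).mean()
```

Any binary classifier fits here; the important point is that liveness detection reduces to standard supervised classification once the spatio-temporal features are extracted.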

Each video, whether an attack or a real access, is transformed into a grayscale 3D array representing the spatial distribution X, Y, T. The videos are then divided into sequences of 75 frames, and a face detection algorithm is applied to the center frame of each sequence.

This method is useful for preventing simple attacks (such as photographs), but it is not recommended for more complex ones. Since the method's objective is to identify temporal variations, it can easily be defeated with a mask. That is why combining methods is always suggested when building a robust biometric system.

For more information and the project's code, visit the project on GitHub.


Global Initiative: UNIT among the selected projects for Startup Creasphere

UNIT, a company dedicated to the creation of universal intelligence products, is among the eleven projects selected for Batch 5 of Startup Creasphere. The initiative, under the premise "Transforming Healthcare Together", provides the opportunity to develop a pilot project focused on innovation for the health industry.

Currently one of the largest innovation platforms for digital health solutions, Startup Creasphere was founded two years ago in Munich by Roche and Plug and Play, both responsible for the initial growth of companies such as Google, Paypal and Dropbox. In 2019, SANOFI and Lonza joined as partners to continue expanding the scope of the call internationally.

Batch 5, of which UNIT is a part as the sole representative of Latin America, is the most recent selection of projects that will begin this process of development and acceleration under the mentorship of the founding partners Roche and Plug and Play.

One of UNIT's greatest goals is to continue to challenge the limits of artificial intelligence applications in the industry. In line with this objective, they currently use tools such as data science, machine learning, mathematical modeling, deep learning and language recognition to develop products focused on solving the needs of people in a wide range of fields.

“For UNIT, it is essential to participate in this program to carry out scientifically and technologically validated pilot tests. These will allow us to verify the results obtained for the benefit of each patient, as well as to strengthen our business model by partnering with world-class companies,” explains Jordaj Zuleta, Chief Design Officer at UNIT.

"We seek to establish a solid presence in low and middle-income countries, where the public health systems have similar challenges to those that we experience in Chile in terms of patient adherence and the quality of treatment itself," the executive pointed out.

VOYAGER

Did you know that multifactorial diseases are currently one of the main causes of death in Chile and worldwide? High blood pressure, Alzheimer's, schizophrenia, rhinitis, asthma and diabetes mellitus are just some of these pathologies.

These health issues are produced by the combination of multiple environmental factors and mutations in several genes, generally on different chromosomes. Part of their complexity is that they do not follow common genetic inheritance patterns, which makes their diagnosis and treatment even more difficult. Even when they can be detected, they are usually treated through rigid protocols, which do not always produce positive results across different patients.

VOYAGER, developed by UNIT, is a product focused on improving the management of these diseases, seeking to reduce the most serious cases and save every human life possible. Its name is inspired by the space probes that have traveled farthest into the Universe. Through artificial intelligence, the system processes data collected through voice interfaces to fully understand the state of each person's disease and carry out predictive, automated monitoring of their treatment.

This translates into more efficient diagnoses and identification of higher risk cases, directly impacting the fatality rate of these diseases. In concrete terms, its goal is to reduce serious hospitalizations by 50% for those with diabetes, cerebrovascular diseases, hypertension and even obesity. Currently, VOYAGER has applications in the pharmaceutical and clinical world, both public and private.