The Best Machine Learning Research So Far

The uses of machine learning are expanding rapidly. Already in 2019, significant research has explored new frontiers for the technology. Gathered below is a list of some of the most exciting research undertaken in the realm of machine learning so far this year.

Transfer Learning for Vision-Based Tactile Sensing

The capacity of computers to mimic human sensory abilities has progressed unevenly; of the five senses, touch has perhaps seen the slowest development. To address these shortcomings, researchers Carmelo Sferrazza and Raffaello D’Andrea have published a paper entitled “Transfer Learning for Vision-Based Tactile Sensing,” in which they advocate the use of “soft optical (or vision-based) tactile sensors,” which “combine low cost, ease of manufacture and minimal wiring.” Their approach relies largely on computer vision to train the tactile models and to aid the tactile sensors in object identification. Using a soft gel-based sensor paired with a camera and a trained network, they “enabled the use of cameras to sense the force distribution on soft surfaces, by means of the deformation that elastic materials undergo when subject to force.”
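
To give a sense of how such a pipeline fits together, here is a minimal sketch (not the authors' code) of the transfer-learning idea: a CNN pretrained on natural images is reused as a feature extractor, and only a small new head is trained to map camera images of the deformed gel to a force map. The class name, grid size, and shapes below are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the transfer-learning idea:
# reuse a CNN pretrained on natural images as a feature extractor and
# train only a small new head to regress a force map from camera images
# of the deformed gel. Class name, grid size and shapes are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class TactileForceNet(nn.Module):
    def __init__(self, grid_size=20):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")   # pretrained features
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.head = nn.Linear(512, grid_size * grid_size)     # new regression head
        self.grid_size = grid_size

    def forward(self, x):
        feats = self.features(x).flatten(1)                   # (batch, 512)
        return self.head(feats).view(-1, self.grid_size, self.grid_size)

model = TactileForceNet()
for p in model.features.parameters():                         # freeze the pretrained part
    p.requires_grad = False

images = torch.randn(4, 3, 224, 224)                          # stand-in camera views of the gel
pred_forces = model(images)                                   # predicted force maps
target_forces = torch.zeros_like(pred_forces)                 # dummy ground truth for the demo
loss = nn.functional.mse_loss(pred_forces, target_forces)
```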

Self-Supervised Learning of Face Representations for Video Face Clustering

As face-recognition technology has improved in recent years, researchers have begun to reconsider the bounds of this technology and how it can be applied. For systems designed to study video, studies have moved beyond simply recognizing main characters to using knowledge of faces to analyze stories. In their recent paper on the topic, “Self-Supervised Learning of Face Representations for Video Face Clustering,” a team of researchers from the University of Toronto notes that “the ability to predict which characters appear when and where facilitates a deeper video understanding that is grounded in the storyline.” To this end, the researchers have developed a self-supervised model that relies on existing datasets (i.e., face databases such as YouTube Faces) and a limited amount of training to create highly accurate face-clustering models. These models “can leverage dynamic generation of positive/negative constraints based on ordered face distances and do not have to only rely on track-level information that is typically used.” The decreased reliance on complex and time-intensive model training points to greater potential use for video analysis in the future.
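
Conceptually, that constraint mining can be illustrated with a short sketch. This is not the paper's implementation: it simply sorts pairwise distances between face embeddings, treats the closest pairs as likely positives and the farthest as likely negatives, and computes a contrastive loss over them, with all names and values chosen for illustration.

```python
# Rough conceptual sketch (assumptions, not the paper's implementation):
# sort pairwise distances between face embeddings, treat the closest pairs
# as positives and the farthest as negatives, and compute a contrastive
# loss over the mined pairs -- no identity labels required.
import torch
import torch.nn.functional as F

def mine_constraints(embeddings, n_pairs=32):
    """Pick likely same-person (closest) and different-person (farthest) pairs."""
    d = torch.cdist(embeddings, embeddings)          # all pairwise distances
    n = embeddings.size(0)
    iu = torch.triu_indices(n, n, offset=1)          # unique pairs only
    order = d[iu[0], iu[1]].argsort()
    pos = iu[:, order[:n_pairs]]                     # smallest distances -> positives
    neg = iu[:, order[-n_pairs:]]                    # largest distances -> negatives
    return pos, neg

def contrastive_loss(embeddings, pos, neg, margin=1.0):
    d_pos = (embeddings[pos[0]] - embeddings[pos[1]]).norm(dim=1)
    d_neg = (embeddings[neg[0]] - embeddings[neg[1]]).norm(dim=1)
    return d_pos.pow(2).mean() + F.relu(margin - d_neg).pow(2).mean()

faces = F.normalize(torch.randn(200, 128), dim=1)    # stand-in face embeddings
pos, neg = mine_constraints(faces)
loss = contrastive_loss(faces, pos, neg)             # loss over the mined constraints
```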

Competitive Training of Mixtures of Independent Deep Generative Models

Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) are the most prominent model types used for unsupervised learning, but each has significant drawbacks: VAEs have trouble producing high-quality samples when trained on natural images, while GANs require a prohibitive amount of training. A recent research project undertaken by a team of researchers at the Max Planck Institute seeks to ameliorate the shortcomings of each model by drawing on the strengths of both, taking “a general approach to train multiple models in parallel which focus on independent parts of the training distribution.” In the paper “Competitive Training of Mixtures of Independent Deep Generative Models,” which summarizes their findings, they write that more intuitive use of models, or the contemporaneous use of multiple model types, will create more robust environments for model training, allow data to be used more fully, and “could shed some light on how to perform model selection on the fly.”
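
The competitive part of the training can be sketched in a few lines. The example below uses plain autoencoders rather than the paper's VAEs and GANs, purely to keep it short: each batch element is routed to whichever model currently reconstructs it best, and only that model is updated on it, so the models come to specialize on different parts of the data. All sizes and hyperparameters are placeholders.

```python
# Simplified sketch of competitive training under my own assumptions:
# plain autoencoders stand in for the paper's generative models. Each
# sample is assigned to the model that currently explains it best, and
# only that model trains on it, so models specialize on different data.
import torch
import torch.nn as nn

def make_ae(dim=64):
    return nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, dim))

models = [make_ae() for _ in range(3)]
opts = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in models]

x = torch.randn(128, 64)                             # one training batch
with torch.no_grad():
    # Per-sample reconstruction error of every model: shape (n_models, batch)
    errs = torch.stack([(m(x) - x).pow(2).mean(dim=1) for m in models])
    winner = errs.argmin(dim=0)                      # competitive assignment

for k, (m, opt) in enumerate(zip(models, opts)):
    xk = x[winner == k]                              # samples this model won
    if len(xk) == 0:
        continue
    loss = (m(xk) - xk).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```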

Can You Trust This Prediction? Auditing Pointwise Reliability After Learning

As machine learning becomes more deeply integrated into daily business operations, the desire to test the reliability and accuracy of predictive models increases as well. While most measures of accuracy focus on removing errors from the training process, few options exist to assess the accuracy of models already in use. To remedy this, professors Peter Schulam and Suchi Saria of Johns Hopkins have proposed an auditing algorithm called the resampling uncertainty estimator (RUE), which “estimates the amount that a prediction would change if the model had been fit on different training data.” The purpose of this new algorithm, per its creators, is to “help to increase the adoption of machine learning in high-stakes domains such as medicine.” In their resulting research paper, “Can You Trust This Prediction? Auditing Pointwise Reliability After Learning,” they note that because of the liabilities involved in such areas, machine learning must be auditable for accuracy both before and after adoption. Developments such as RUE should expedite the adoption of machine learning in these fields.
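
The quantity RUE estimates can be illustrated with a brute-force stand-in. The sketch below is not the RUE algorithm itself (which approximates the effect of resampled training sets without refitting the model); it simply refits a placeholder model on bootstrap resamples of the training data and measures how much the prediction for a single point moves. Dataset, model, and sizes are all assumptions.

```python
# Not RUE itself, but a brute-force illustration of the quantity it
# estimates: how much a single prediction changes when the model is fit
# on bootstrap-resampled training data. Data and model are placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # synthetic training features
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=200)
x_new = rng.normal(size=(1, 5))                      # the point we want to audit

preds = []
for _ in range(100):
    Xb, yb = resample(X, y)                          # bootstrap the training set
    preds.append(Ridge().fit(Xb, yb).predict(x_new)[0])

# A wide spread means the prediction is sensitive to the training sample,
# i.e. it should be trusted less.
print(f"prediction = {np.mean(preds):.3f} +/- {np.std(preds):.3f}")
```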

Conclusion

Machine learning has already led to the automation of menial tasks in areas like finance and HR. Now, as research aims to make the technology more reliable, accurate, and widely available, we may see more tasks automated in areas ranging from advertising to medicine. Where do you think the machine learning revolution will lead next?
