Skoltech researchers’ work to be presented at conference on artificial intelligence

This year’s acceptance rate is 22%; the conference takes place in Barcelona on December 5-11

MOSCOW, December 2. /TASS/. Two papers from the Computer Vision Group at the Skolkovo Institute of Science and Technology (Skoltech), led by Professor Victor Lempitsky, have been accepted to NIPS, the largest global forum on artificial intelligence and machine learning, the Skoltech press service said. The papers, which passed a rigorous selection process, are devoted to applications of deep neural networks to image analysis.

"Year after year, it becomes more difficult to publish at NIPS," Professor Lempitsky commented. "The fact that two papers from our group have been accepted this year attests to the high level of research in our group. These studies belong to quite different areas of research and have many practical applications."

NIPS (Neural Information Processing Systems) is the leading world forum for research in artificial intelligence and machine learning, held annually since 1987. Papers submitted to the conference undergo rigorous anonymous peer review. This year’s acceptance rate is 22%; the conference takes place in Barcelona on December 5-11.

Russian scientists’ research papers

The Skoltech Computer Vision Group develops computer systems that retrieve and organize the information contained in images. For this purpose, the researchers develop new machine learning techniques that can adapt to the diversity of visual information in the modern world. One class of methods actively studied in Lempitsky’s group is deep learning.

The first paper to be presented at NIPS is co-authored by Skoltech Master's graduate Oleg Grinchuk and PhD student Vadim Lebedev. The work offers an alternative to existing classes of visual markers such as QR codes and bar codes. The researchers propose generating markers with a synthesizer network, while a second network decodes them. During training, the parameters of both networks are optimized in parallel, which improves both the robustness of decoding and the aesthetic properties of the markers.

As a result, markers designed by deep learning can be rendered in virtually any visual style, including that of a particular brand or organization. The new markers thus look quite different from the usual black-and-white QR codes and other existing systems on the market. Examples of such markers can be found on the project page.
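The joint-training idea described above can be illustrated with a deliberately tiny sketch: a one-layer "synthesizer" maps a bit string to a marker, additive noise stands in for the camera, and a one-layer "decoder" recovers the bits, with both sets of weights updated by the same gradient step. All names, dimensions, and layer choices here are illustrative assumptions; the actual paper uses deep convolutional networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_pix = 8, 32                 # toy message length and "marker" size

# One-layer stand-ins for the synthesizer and decoder networks (hypothetical scale)
W_syn = rng.normal(0.0, 0.1, (n_pix, n_bits))
W_dec = rng.normal(0.0, 0.1, (n_bits, n_pix))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, losses = 0.5, []
for step in range(3000):
    bits = rng.integers(0, 2, (16, n_bits)).astype(float)   # random messages
    marker = np.tanh(bits @ W_syn.T)                        # synthesize markers
    noisy = marker + rng.normal(0.0, 0.1, marker.shape)     # simulated camera noise
    probs = sigmoid(noisy @ W_dec.T)                        # decode bit probabilities

    p = np.clip(probs, 1e-7, 1 - 1e-7)                      # cross-entropy loss
    losses.append(-np.mean(bits * np.log(p) + (1 - bits) * np.log(1 - p)))

    # A single gradient step updates BOTH networks: this is the joint optimization
    d_logits = (probs - bits) / bits.size                   # cross-entropy gradient
    g_dec = d_logits.T @ noisy
    d_marker = (d_logits @ W_dec) * (1.0 - marker ** 2)     # backprop through tanh
    g_syn = d_marker.T @ bits
    W_dec -= lr * g_dec
    W_syn -= lr * g_syn
```

After training, the synthesizer produces markers that survive the simulated noise, which is the property that makes the real markers robust to imaging conditions.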

The second paper accepted to NIPS was authored by PhD student Evgeniya Ustinova and concerns training neural networks that build compact data representations suitable for image-based search. The research team introduced a new criterion for training such networks that requires substantially less effort to tune learning parameters and yields representations that enable more accurate search and recognition.
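The article does not detail the team's new criterion, but the setup it improves on can be sketched with a classic pairwise criterion: embeddings of matching images are pulled together, while embeddings of non-matching images are pushed at least a margin apart. The function below is a generic illustration of this family of criteria, not the method from the paper.

```python
import numpy as np

def contrastive_loss(embeddings, labels, margin=1.0):
    """Average pairwise loss: attract same-label pairs, repel different-label
    pairs until they are at least `margin` apart (a classic criterion, shown
    only to illustrate the training setup)."""
    n, total, pairs = len(embeddings), 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(embeddings[i] - embeddings[j])
            if labels[i] == labels[j]:
                total += d ** 2                     # pull positives together
            else:
                total += max(0.0, margin - d) ** 2  # push negatives apart
            pairs += 1
    return total / pairs

# A well-separated embedding scores lower than a mixed-up one
good = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 0.0], [3.1, 0.0]])
bad = np.array([[0.0, 0.0], [3.0, 0.0], [0.1, 0.0], [3.1, 0.0]])
labels = [0, 0, 1, 1]
```

In the actual work, the embeddings would be produced by a deep network, and the criterion's gradient would be backpropagated into its weights; search then reduces to nearest-neighbor lookup in the compact embedding space.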