Neural networks to remain the most researched AI approach in 2019 for Facebook and Google, based on 2018 publications

By Gabriel, 31 Dec 2018, updated 10 Feb 2019

In 2018, the Google and Facebook Artificial Intelligence research labs have again invested heavily in neural networks. It is still the most researched approach to AI, concentrating approximately 2/3 of their research effort based on the number of publications, a trend that is likely to continue in 2019.


Neural networks in general, and more specifically their advanced variants (like deep neural networks and convolutional neural networks), concentrated most of the Artificial Intelligence effort in 2018. This is the impression you can easily get when reading news about breakthroughs in the area over the last couple of years. I wanted to investigate whether that is actually the case for the researchers in the labs, and by how much.

A bit of context

Recent AI breakthroughs…

During our decade, the 2010s, Artificial Intelligence has achieved a number of significant breakthroughs. Examples illustrating that are IBM Watson beating 2 Jeopardy! champions in 2011, AlexNet's significant win at the ImageNet competition in 2012, and DeepMind's AlphaGo beating 2 Go champions in 2016 and 2017.

…have made more funding available…

Those successes and the publicity around them brought the spotlight back to this field of computer science… along with considerable funding! One example among many: DeepMind was founded in 2010 and acquired by Google in 2014 for $500 million. That's a lot of money for a company that was not even four years old and had only 75 employees at the time.

…and triggered an ever-increasing research effort…

In 2018, the large technology companies have again increased their effort on Artificial Intelligence research projects. While this is not new, the trend continued in 2018 at a strong pace. Google AI's publication output stagnated between 300 and 400 per year over the period 2009-2015, then increased sharply in 2016, 2017 and 2018 to reach 634 publications in 2018, i.e. nearly a 20% increase per year. Looking at publications about core AI only (i.e. filtering out from the Google AI publications database some less AI-related research areas like Economics and Electronic Commerce or Privacy), the increase is even larger: around 37% per year over the period 2016-2018. The Facebook AI Research lab is younger (2013) and publishes less, but its effort has increased at a much faster rate: from approximately 20 publications a year in 2015 and 2016 to 75 in 2017 and 164 in 2018; that is, more than triple in 2017 and double again in 2018!
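As a quick sanity check of those growth figures, here is the arithmetic in a short Python sketch, using only the publication counts quoted above:

```python
# Quick arithmetic check of the growth rates quoted above.
def yearly_growth(start, end, years):
    """Average (compound) yearly growth rate between two counts."""
    return (end / start) ** (1 / years) - 1

# Google AI: from the 300-400 plateau in 2015 to 634 publications in 2018,
# i.e. somewhere between 17% and 22% per year depending on the start value.
print(f"{yearly_growth(400, 634, 3):.0%}")  # -> 17%
print(f"{yearly_growth(350, 634, 3):.0%}")  # -> 22%

# Facebook AI Research: roughly 20 (2016), 75 (2017), 164 (2018).
print(f"x{75 / 20:.1f} in 2017")   # -> x3.8, "more than triple"
print(f"x{164 / 75:.1f} in 2018")  # -> x2.2, "double again"
```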

…beneficial to the neural networks approach

Quite logically, the research area in Artificial Intelligence that benefits the most from all this effort is the one responsible for those recent successes. Neural network is a generic term covering multiple machine learning techniques. Among them, deep artificial neural networks are the workhorse of the recent AI successes, the most bankable, and the current best hope for achieving even more in the foreseeable future, judging by the orientation taken by the AI research labs of big technology companies. The study below shows just how big this trend was in 2018 and, most likely, will be in 2019.

The study

I have reviewed the 2018 publications available on these 3 websites:

* the Google AI publications page
* the DeepMind publications page
* the Facebook AI Research publications page

I took the example of Google and Facebook because they are major actors in the area and because they openly publish a considerable amount of work, making it easier to review.

The grand total is 677 publications! That's a lot to review. On the Google AI and Facebook AI publication websites, each publication's research area is always given (machine learning, computer vision…) but the approach taken unfortunately isn't :( It took me a couple of late nights to browse through those websites, read the titles and, most of the time, the abstracts; on rare occasions I had to open the actual publication, assess the approach and aggregate that in the document linked below. It took me roughly 20 hours!

The classification

The classification I chose is simple in the extreme: there are 3 categories:

Neural Network

Artificial Neural Network (ANN) is used here as a generic term to describe all research works using algorithms loosely modeled on the human brain. The approach is not new (Wikipedia cites work on ANNs as early as 1943!) but it has gone through a lot of successful developments lately.

Programmatically, an ANN in its simplest form can be described as a collection of connected nodes (like biological neurons). Each node exchanges information, i.e. numeric values, with its neighbours: these are the inputs and outputs of the node (like neurotransmitters in a synapse for real neurons). Each node runs a mathematical function to compute its outputs based on its inputs, some local parameters and connection weights. Typically, nodes are aggregated into multiple layers: only the nodes of the first layer receive the external signal, such as text, image or sound (like the optic nerve for the biological brain), and only the outputs of the nodes in the last layer are considered as the ANN output (like the hand movement for a human). Multiple layers can be traversed in between. The arrangement of the nodes and the choice of local parameters and connection weights are more or less random at the ANN's creation. The network is then trained by successively sending different signals, measuring the network output and modifying the parameters and weights to improve that output against a desirable output (imitating the process of knowledge acquisition in a human). More precise and/or up-to-date definitions can be found on the Internet (see below for some good readings).
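As a minimal illustrative sketch of the mechanism just described (connected nodes in layers, more or less random initial weights, training by repeated small adjustments against a desired output), here is a tiny network learning the XOR function in Python. The layer sizes, learning rate and iteration count are arbitrary choices for the example, not taken from any of the reviewed publications:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training signals and the desired outputs: the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Connection weights start out more or less random, as described above.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # first (hidden) layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # last (output) layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate: how much the weights move at each step
for step in range(5000):
    # Forward pass: each layer computes its outputs from its inputs.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # Compare the network output with the desired output...
    err = out - y

    # ...and adjust parameters and weights to reduce the error
    # (backpropagation with a squared-error loss).
    d_out = err * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0)

print(out.round(2))  # after training: close to [[0], [1], [1], [0]]
```

Modern variants stack many more layers and far more nodes, but the training loop is conceptually the same.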

Today the state-of-the-art ANN variants are, just to name a few:

* Deep Neural Network
* Convolutional Neural Network
* Generative Adversarial Network
* Autoencoder

All these approaches have in common that an ANN is at the core of the implementation.

Other

Well, anything else! Mainly it contains some work on the symbolic approach, also known as GOFAI (“Good Old-Fashioned Artificial Intelligence”), and some work on other statistical learning approaches (like hidden Markov models or genetic algorithms). It also contains a few publications that I wasn't able to classify because the abstract, and sometimes the actual publication document, was too hard for me to understand! :(

Tools

As I was going through the list, I noticed a lot of publications that were not about a specific approach, nor a specific domain, but about building tools for researchers to accurately benchmark and compare their programs: putting together big, documented databases of images, translated text, annotated videos… Indeed, a common concern with every new improvement is to measure it quickly and precisely against current programs. That is what those tools are for, and they can be used to benchmark any program regardless of its approach.

I chose to put those publications in their own category and to exclude them from this analysis of approaches, as sketched below.
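For clarity, here is the aggregation done for each lab, sketched in Python with hypothetical counts; the real counts are in the spreadsheet linked in the results section below:

```python
# Per-lab aggregation: the "Tools" category is excluded from the
# denominator. These counts are hypothetical placeholders, not the
# real figures (those are in the linked spreadsheet).
counts = {"Neural Network": 120, "Other": 60, "Tools": 20}

classified = counts["Neural Network"] + counts["Other"]  # Tools left out
nn_share = counts["Neural Network"] / classified
print(f"Neural network share: {nn_share:.0%}")  # -> 67%
```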

This classification is not perfect. Some algorithms just don't fit into these categories, like the decision-making algorithm Monte Carlo tree search, notably used in game playing, and some research mixes multiple approaches. But regardless, I think it gives a good idea of just how omnipresent neural networks are today.

The results

It appears that 61% of all 2018 AI publications at Google AI are based on the neural network approach, compared with 65% at DeepMind (Google) and 68% at Facebook AI Research.

Considering that I couldn't assess the approach of some of the research, these numbers could be even higher.

You can download the full result here: 2018 AI publications analysis (spreadsheet).

In fact, neural networks are so omnipresent that the term is sometimes not even used in the abstract of a publication, nor is the name of one of its more modern variants; it is simply implied that this is the approach used. Instead we find words like “train models”, “the model”, “deep models” or “training” in the abstract. Only a careful reading of the paper, or an analysis of the codebase, reveals what the approach is.
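To illustrate that last point, here is a naive sketch of the kind of scan I was doing by hand; the keyword list and the sample abstract are made up for this example, not taken from any real publication:

```python
# Naive illustration of the manual scan described above: words that hint
# at the neural network approach even when "neural network" never appears.
# The keyword list and the sample abstract are invented for this example.
HINTS = ("neural", "train", "training", "the model", "deep model")

def hints_at_neural_network(abstract: str) -> bool:
    text = abstract.lower()
    return any(hint in text for hint in HINTS)

abstract = ("We train deep models on a large annotated corpus and show "
            "that the model outperforms previous systems.")
print(hints_at_neural_network(abstract))  # -> True
```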

It seems to me that there is another reason why this approach is so popular now, besides its recent successes: ANNs are specifically suited to organizations possessing extremely large sets of data, like Google and Facebook. Put differently: for the Big Four tech companies (Google, Apple, Facebook, Amazon), if the question is “What can I do with artificial intelligence given these huge sets of data?”, artificial neural networks are a good answer.

Further reading