Mario Krenn of the Max Planck Institute for the Science of Light and colleagues trained an artificial intelligence model on 143,000 papers published on the arXiv preprint server between 1994 and 2021, all covering areas related to artificial intelligence.
The researchers then used a natural language processing tool to extract keywords and phrases from the papers' titles and abstracts, producing a list of nearly 65,000 key concepts.
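The article does not name the extraction tool, but keyword extractors such as RAKE (Rapid Automatic Keyword Extraction) are commonly used for this job. Below is a minimal RAKE-style sketch in plain Python; the example abstract and the toy stop-word list are invented for illustration, and this is a sketch of the general technique, not the researchers' actual pipeline.

```python
# Minimal RAKE-style keyword extraction (illustrative sketch): candidate
# phrases are maximal runs of non-stop-words, scored by summed
# word degree / word frequency.
import re
from collections import defaultdict

STOPWORDS = {  # tiny illustrative stop-word list; real tools use hundreds
    "a", "an", "and", "are", "as", "by", "for", "from", "in", "is",
    "of", "on", "that", "the", "this", "to", "we", "with",
}

def extract_key_phrases(text):
    words = re.findall(r"[a-zA-Z][a-zA-Z-]*", text.lower())
    # Split the word stream into candidate phrases at stop-words.
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                phrases.append(tuple(current))
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(tuple(current))
    # Score each word by degree (co-occurrence inside phrases) / frequency,
    # then score each phrase as the sum of its word scores.
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase)
    scores = {p: sum(degree[w] / freq[w] for w in p) for p in set(phrases)}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical abstract text, for illustration only.
abstract = ("We train a graph neural network on a semantic network of "
            "artificial intelligence concepts to predict emerging links.")
for phrase in extract_key_phrases(abstract)[:5]:
    print(" ".join(phrase))
```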
These concepts became the nodes of a semantic network, with edges drawn between concepts that appear together in papers. The network shows how the AI research field has changed over time and how scholars have combined ideas to open up new areas of interest. Ten machine-learning methods then used the semantic network to predict which pairs of concepts that had not yet been studied together would become linked within five years.
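To make the setup concrete, here is a sketch of such a concept network and one classic link-prediction heuristic, the Jaccard coefficient, built with Python's networkx library. The concepts and co-occurrences are invented, and the ten methods the researchers compared were more sophisticated than this simple baseline.

```python
# Sketch of a concept co-occurrence network with a simple link-prediction
# heuristic. Nodes are concepts; an edge means two concepts appeared
# together in at least one paper. All data here is invented.
import networkx as nx

G = nx.Graph()
papers = [  # each paper contributes the concepts found in its abstract
    ["neural network", "image classification", "dropout"],
    ["neural network", "reinforcement learning"],
    ["reinforcement learning", "robotics"],
    ["image classification", "transfer learning", "neural network"],
]
for concepts in papers:
    for i, u in enumerate(concepts):
        for v in concepts[i + 1:]:
            G.add_edge(u, v)  # co-occurrence in a paper creates a link

# Score currently unconnected pairs: a high Jaccard coefficient (shared
# neighbours / total neighbours) suggests the pair may be linked soon.
candidates = [(u, v) for u in G for v in G
              if u < v and not G.has_edge(u, v)]
for u, v, score in sorted(nx.jaccard_coefficient(G, candidates),
                          key=lambda t: -t[2]):
    print(f"{u!r} -- {v!r}: {score:.2f}")
```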
By testing against historical data, the AI was able to predict which of those unstudied concept pairs would appear together in at least three papers within five years, with an accuracy of more than 99.5 percent. The researchers suggest that the method could be used to anticipate future hot topics or to help develop artificial intelligence with human-like understanding.
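A common way to run such a historical test is to freeze the network at a cutoff year, score the candidate pairs, and compare the ranking against what actually happened five years later. The sketch below illustrates that idea with scikit-learn's ROC-AUC metric, a standard score for link prediction; the predictor scores and outcomes are invented for illustration, not the study's data.

```python
# Sketch of backtesting a link predictor against history: score candidate
# concept pairs using only the network as of a cutoff year, then compare
# with which pairs actually became linked in the following five years.
# Scores and outcome labels below are invented for illustration.
from sklearn.metrics import roc_auc_score

# Predictor's scores for candidate (still-unlinked) concept pairs in 2016.
scores = [0.91, 0.15, 0.78, 0.05, 0.62, 0.33, 0.88, 0.12]

# Ground truth from 2021: 1 if the pair became linked, 0 otherwise.
became_linked = [1, 0, 1, 0, 1, 0, 1, 0]

# AUC = probability that a randomly chosen positive pair outscores a
# randomly chosen negative pair; 0.5 is chance, 1.0 is a perfect ranking.
print(f"AUC: {roc_auc_score(became_linked, scores):.3f}")
```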
Gabrielle Pereira of the London School of Economics in the United Kingdom said, "We think this paper largely reflects the current way of thinking in computer science and artificial intelligence."