Artificial intelligence – specifically, machine learning – is a part of everyday life for computer and smartphone users. From autocorrecting typos to recommending new music, machine learning algorithms can help make life easier. They can also make mistakes.

It can be difficult for computer scientists to figure out what went wrong in such cases. That's because many machine learning algorithms learn from information and make their predictions inside a virtual "black box," leaving few clues for researchers to follow.

A group of computer scientists at the University of Maryland has developed a promising new approach for interpreting machine learning algorithms. Unlike previous efforts, which typically sought to "break" the algorithms by removing key words from inputs to produce wrong answers, the UMD group instead reduced the inputs to the bare minimum required to yield the correct answer. On average, the researchers got the correct answer with an input of about three words.
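The reduction procedure is simple to sketch in code. The following is a minimal, hypothetical illustration of greedy input reduction in Python, assuming a generic model object whose predict method returns an answer and a confidence score; it is a sketch of the general idea, not the authors' exact implementation.

    # Minimal sketch of greedy input reduction (illustrative only).
    # Assumes a hypothetical `model.predict(tokens)` that returns
    # (answer, confidence) for a list of input words.

    def reduce_input(model, tokens):
        """Drop words one at a time while the model's answer stays the same."""
        original_answer, _ = model.predict(tokens)
        current = list(tokens)

        while len(current) > 1:
            best_candidate, best_confidence = None, -1.0
            # Try removing each remaining word; keep the removal that
            # preserves the original answer with the highest confidence.
            for i in range(len(current)):
                candidate = current[:i] + current[i + 1:]
                answer, confidence = model.predict(candidate)
                if answer == original_answer and confidence > best_confidence:
                    best_candidate, best_confidence = candidate, confidence
            if best_candidate is None:
                break  # any further removal changes the answer
            current = best_candidate

        return current  # a minimal input that still yields the original answer

Applied to the question-answering examples described below, a procedure like this can shrink a full question down to one or two seemingly uninformative words without changing the model's answer.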

In some cases, the researchers' model algorithms provided the correct answer based on a single word. Frequently, that word or phrase appeared to have little obvious connection to the answer, revealing important insights into how certain algorithms react to specific language. Because many algorithms are programmed to give an answer no matter what – even when prompted with nonsensical input – the results could help computer scientists build more effective algorithms that can recognize their own limitations.

The researchers will present their work on November 4, 2018, at the 2018 Conference on Empirical Methods in Natural Language Processing.

"Black box models work better than simple models, such as solutions of trees, but even those who wrote the primary code can not exactly do what's going on," said Boyd-Grabber of Jordan, and Associate Professor of Computer Science UMD. "When these models mean returning the wrong or unwanted answers, why is it so acute? So we try to express the minimum result that could make the right result. The average input was three words, but we could have one case in some cases."

In one example, the researchers entered a photo of a sunflower and the text-based question "What color is the flower?" as inputs into a model algorithm. These inputs yielded the correct answer of "yellow." After rephrasing the question into several shorter combinations of words, the researchers found that they could get the same answer with "flower" as the only text input to the algorithm.

In another, more complex example, the researchers used the prompt, "In 1899, John Jacob Astor IV invested $100,000 for Tesla to further develop and produce a new lighting system. Instead, Tesla used the money to fund his Colorado Springs experiments."

They then asked the algorithm, "What did Tesla spend Astor's money on?" and received the correct answer, "Colorado Springs experiments." Reducing this input to the single word "did" yielded the same correct answer.

The work reveals important insights about the rules machine learning algorithms apply to problem solving. Many real-world issues with algorithms arise when an input that makes sense to humans produces a nonsensical answer. By showing that the opposite is also possible – that nonsensical inputs can produce correct, sensible answers – Boyd-Graber and his colleagues demonstrate the need for algorithms that can recognize when they are answering a nonsensical question with a high degree of confidence.

"The bottom line is that all these favorable cars can be quite stupid," Boida-Grabber, who also holds the UMIACS Institute for Higher Education (UMIACS), and UMD College Co-Chairs, Research and Language Science Center. "When computer scientists want to prepare these models, we tend to show only real questions or real punishment, we do not notice that they do not have unwanted phrases or words, and the models do not know that they should be confused with these examples" .

Most algorithms will force themselves to provide an answer, even with insufficient or conflicting data, according to Boyd-Graber. This could be at the heart of some of the incorrect or nonsensical outputs generated by machine learning algorithms – both in the model algorithms used for research and in the real-world algorithms that help us by flagging spam email or suggesting alternate driving routes. Understanding more about these errors could help computer scientists find solutions and build more reliable algorithms.
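One simple way to picture the alternative is a system that is allowed to decline to answer. The snippet below is an illustrative sketch only, not the authors' method, reusing the hypothetical model.predict interface from the earlier example: it returns an answer only when the model's confidence clears a threshold.

    # Illustrative sketch (not the authors' method): abstain instead of
    # forcing an answer when the model's confidence is low.

    def answer_or_abstain(model, tokens, threshold=0.5):
        answer, confidence = model.predict(tokens)
        if confidence < threshold:
            return "I can't make sense of this input."
        return answer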

"We show that models can be prepared to know that they should be confused," Boidah-Grabber said. "Then they can come and say," I'll show you that I did not understand. "

In addition to Boyd-Graber, UMD-affiliated researchers involved in this work include undergraduate researcher Eric Wallace; graduate students Shi Feng and Pedro Rodriguez; and former graduate student Mohit Iyyer (M.S. '14, Ph.D. '17, computer science).

The research paper, "Pathologies of Neural Models Make Interpretations Difficult," by Shi Feng, Eric Wallace, Alvin Grissom II, Pedro Rodriguez, Mohit Iyyer and Jordan Boyd-Graber, will be presented at the 2018 Conference on Empirical Methods in Natural Language Processing on November 4, 2018.

This work was supported by the Defense Advanced Research Projects Agency (Award No. HR0011-15-C-011) and the National Science Foundation (Award No. IIS1652666). The content of this article does not necessarily reflect the views of these organizations.
