Advocate of artificial intelligence

Dr. Amos Azaria

Department of Computer Science

Dr. Amos Azaria began computer programming at the age of 6 and was a finalist in the International Bible Contest at 12. Now he hopes to win the Indy Autonomous Challenge at the Indianapolis Motor Speedway.

“My professional interests involve human-agent interaction, machine learning, deep learning, reinforcement learning and natural language processing. That may sound very sophisticated, but basically, the main task of AI is to deal with the drudgery of dull, mundane and dangerous tasks that people don’t want to do.”
Dr. Amos Azaria

Dr. Amos Azaria grew up in Beit Gamliel, a small agricultural community near Rehovot. He began computer programming at the age of 6, and admits that "it was only much later that I figured out that you can actually do that for a living and get paid for it!" After earning undergraduate and graduate degrees from the Technion and Bar-Ilan University, he spent two years as a post-doc at Carnegie Mellon University in Pittsburgh, returning to Israel in 2016 to join the Department of Computer Science at Ariel University.

His research involves human-agent interaction, machine learning, deep learning, reinforcement learning and natural language processing. In reinforcement learning, algorithms, or "agents," interact with their environment and decide what actions to take. "In autonomous vehicles, for example, the agent needs to decide how to steer the car, slow down, avoid collisions, etc.," explains Dr. Azaria. "Initially, the agent does not know which actions to take, but over time it learns to improve on its initially random actions. In a driverless-vehicle simulation, for example, the agent may initially learn from the actions of a human driver; as the learning progresses, it begins to learn from its own actions. If the simulated vehicle hits a pedestrian, the agent is penalized, but if the car safely reaches its destination, it is rewarded. Thus, the agent learns to take actions that yield more rewards and fewer penalties."
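
The trial-and-error loop Dr. Azaria describes can be sketched with tabular Q-learning on a toy one-dimensional "road" (all names and numbers here are illustrative, not taken from his work): the agent is penalized for speeding past a hazard cell and rewarded for reaching the destination.

```python
import random

N_STATES, HAZARD, GOAL = 5, 2, 4
ACTIONS = (0, 1)                       # 0 = drive fast, 1 = slow down
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Advance one road cell; speeding through the hazard cell ends badly."""
    nxt = state + 1
    if nxt == HAZARD and action == 0:
        return nxt, -1.0, True         # "hit a pedestrian": penalty, episode over
    if nxt == GOAL:
        return nxt, 1.0, True          # safe arrival: reward, episode over
    return nxt, (-0.01 if action == 1 else 0.0), False  # slowing costs a little time

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(500):                   # episodes of trial and error
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:  # explore: random action
            action = random.choice(ACTIONS)
        else:                          # exploit: best action found so far
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        target = reward + (0.0 if done else GAMMA * max(q[(nxt, a)] for a in ACTIONS))
        q[(state, action)] += ALPHA * (target - q[(state, action)])
        state = nxt
```

After training, the learned values make the agent slow down just before the hazard cell rather than speed through it, exactly the reward-and-penalty dynamic described above.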

In addition to creating a model for measuring conditions outside the vehicle, Dr. Azaria is collaborating with Dr. Oren Musicant on a model for evaluating drivers' stress, using sensors fitted inside the car to measure physiological stress responses such as heart rate and skin conductance. With this model, he hopes to use reinforcement-learning agents to learn to drive in a manner that avoids stressing passengers. "Even if a near-collision is avoided at the last moment, we want to prevent getting into near-collision situations altogether. All this is happening while the vehicle is traveling at very high speed."

Dr. Azaria and his students are collaborating with Prof. Zvi Shiller and his team from the Department of Mechanical Engineering on applying reinforcement learning to develop algorithms for the safe and efficient operation of autonomous cars. The team has its sights set on competing in the Indy Autonomous Challenge, scheduled for October 2021, in which thirty finalist teams from universities around the world, including MIT, will compete for the $1 million prize. Ariel University's entry is being coordinated by Gabriel Hartmann, who began working on the project during his master's degree and is continuing it in his PhD, both under the joint supervision of Dr. Azaria and Prof. Shiller. His thesis deals with autonomous driving at optimal (maximal) speed while maintaining vehicle stability, concentrating mainly on emergency vehicles and emergency situations (averting accidents). Since Hartmann's topic is so closely related to race cars, he is the "main engine" of Ariel University's Indy project. AU's entry will run software with a dual approach: reinforcement learning, plus a kind of "envelope" that protects the system from taking unsafe actions, such as speeding, by analytically determining the maximum speed at which the car should drive. The competing cars will be actual race cars reaching speeds of 200 km per hour.
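
The "envelope" idea, clamping whatever the learned policy proposes to an analytically derived limit, might look like this in outline. The cornering formula v = sqrt(mu * g * r) is standard vehicle physics for a flat curve; the function names and parameter values are illustrative, not the team's actual software.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def max_safe_speed(curve_radius_m, friction_coeff=0.9):
    """Analytic cornering limit for a flat curve: v = sqrt(mu * g * r)."""
    return math.sqrt(friction_coeff * G * curve_radius_m)

def enveloped_command(policy_speed_mps, curve_radius_m):
    """Let the learned policy propose a speed, but never exceed the analytic limit."""
    return min(policy_speed_mps, max_safe_speed(curve_radius_m))
```

On a 200 m radius curve the analytic limit is about 42 m/s (roughly 151 km/h), so a policy requesting 55 m/s would be clamped, while a request below the limit passes through unchanged.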

Photo: Car at Ariel University's Mobile Lab

Other areas of Dr. Azaria’s research:

Criticality: Dr. Azaria is working with another of his PhD students, Yitzhak Spielberg, on an aspect of reinforcement learning called "criticality": assessing how critical a situation is. In their research, game participants rate the level of criticality of various situations, and this information is fed to the learning agents, which then learn faster than agents that do not receive it. "Just by knowing which situations are more critical than others, our algorithms were more effective than those that did not get this information," Dr. Azaria explains. "This may be compared to a student learning to drive. Instead of the driving teacher slamming on the brakes when the student is about to drive over a hole in the pavement, the instructor warns the student to pay attention and avoid the imminent danger. Thus, the student learns how to handle such a situation. If, however, the student fails to take the proper action, she still learns from the outcome: she drove over the hole because she ignored the instructor's warning." This method is applicable to all types of reinforcement learning.
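
One simple way such human criticality labels could be exploited (a hypothetical illustration of the general idea, not the published algorithm) is to scale the size of each learning update by how critical people judged the state to be:

```python
def criticality_weighted_update(q, state, action, td_error,
                                criticality, base_alpha=0.1):
    """Scale a temporal-difference update by a human-assessed criticality
    score in [0, 1]: mistakes in critical states reshape the policy faster."""
    alpha = base_alpha * (1.0 + criticality)
    q[(state, action)] = q.get((state, action), 0.0) + alpha * td_error
    return q
```

With criticality 1.0 the update step doubles relative to an ordinary state, so the agent needs fewer repetitions of a dangerous situation to learn from it.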

Ride sharing: One of Azaria's students, Chaya Levinger (co-advised with Dr. Noam Hazon), is developing an application that efficiently computes how to fairly split the cost of a shared ride. The calculation is complicated because some passengers travel farther than others, under varying traffic conditions. To solve this, they use a concept called the Shapley value. "We have developed an algorithm that computes the Shapley value, or a value that comes very close to it. It basically considers what each person's cost would be without the other riders and computes each passenger's fair share of the ride." The method outperforms current state-of-the-art systems.
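
For small groups the Shapley value can be computed exactly by averaging each passenger's marginal cost over every order in which the riders could join the trip. The brute-force sketch below uses a toy cost model (the ride costs as much as the farthest drop-off along one route); it illustrates the concept only, and is not the scalable algorithm from this research.

```python
import math
from itertools import permutations

def shapley_shares(passengers, coalition_cost):
    """Average each passenger's marginal cost over all join orders."""
    shares = dict.fromkeys(passengers, 0.0)
    for order in permutations(passengers):
        served = []
        for p in order:
            before = coalition_cost(served)
            served.append(p)
            shares[p] += coalition_cost(served) - before
    n_orders = math.factorial(len(passengers))
    return {p: total / n_orders for p, total in shares.items()}

# Hypothetical drop-off distances (km) along a single route.
dropoffs = {"A": 3.0, "B": 5.0, "C": 8.0}
cost = lambda group: max((dropoffs[p] for p in group), default=0.0)
```

Here the fair split comes out to 1.0, 2.0 and 5.0 for A, B and C: each stretch of road is split evenly among the riders who use it, and C alone pays for the final stretch.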

Another student, David Zar, is working on what information a ride-sharing agent should present to users about the alternatives, to convince them that a shared ride is ultimately the best option. This may include information such as how much the fare would have cost had the passenger driven her own car, or how much time she is saving by sharing the ride instead of taking a bus.

Dictionary encoding: Another of Dr. Azaria's students, Keren Nivasch, is applying reinforcement learning to dictionary-based data compression, which shortens messages by replacing dictionary words with compact codes. Every time it encounters a new word, the agent learns to decide whether or not to enter it into the dictionary.
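
The decision the agent faces can be sketched with a toy word-level dictionary coder, where a pluggable policy (here a simple length heuristic, standing in for the learned agent) decides which new words earn a dictionary slot:

```python
def encode(words, should_add=lambda word, dictionary: len(word) >= 3):
    """Known words are emitted as short integer codes; unknown words are
    emitted literally, and the policy decides whether to add them."""
    dictionary, output = {}, []
    for word in words:
        if word in dictionary:
            output.append(dictionary[word])   # repeat word: emit compact code
        else:
            output.append(word)               # new word: emit literal
            if should_add(word, dictionary):
                dictionary[word] = len(dictionary)
    return output, dictionary
```

For example, encode("the cat saw the cat".split()) emits the two repeated words as codes 0 and 1; a learned policy would try to reserve slots for the words whose repetition saves the most space.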

Vocal lie detector: In a collaborative project with a researcher from Taiwan, participants play a card game during which an agent determines, from the tone of voice in their responses, whether they are telling the truth or lying. Evgeny Neiterman, the student carrying out this study, has shown that using the developed "lie detector" component makes the agent perform significantly better.

Detecting malicious content: Another student, Merav Chkroun, is developing a way for chatbots to learn whether some of the content they are "taught" is malicious, as was the case with Microsoft's "Tay" chatbot, which was shut down in 2016 after less than 24 hours.

When asked if he believes that artificial intelligence will enable computers to control the world, Dr. Azaria divided the issue into two scenarios:

  1. Machines will become so intelligent that they will decide to control the world.
    “The fact of the matter is that computers have already taken over the world in a way, be it the internet, banking, social media, etc. There are people who have used, are using and will use computers for corrupt purposes. The question is, do computers themselves have desires? If computers had any desire to take over, they could have done so long ago, since the internet controls everything. But why would a computer want to take over the world? Computers can only carry out what they have been programmed to do; the only ‘desires’ they have are to perform those tasks. The real question is how computers understand what we tell them to do. Computers are neither smart nor dumb, so it is possible that commands will somehow be misinterpreted. If, in theory, a computer misinterprets its task and, instead of constructing an automobile, develops a way to attack people, it would not be difficult to simply disarm it. We are still very far from any kind of superintelligence that is trying to take over the world.”

  2. The second scenario, which Dr. Azaria believes to be much more realistic, is that machine learning can be used to design weapons, for example, which, if they fall into the wrong hands, may be used against innocent civilians. “This problem is most likely unavoidable and may occur with any technology, but it can be mitigated by developing better technology to counter it. On the positive side, technologies such as drones and robots have the capability of helping save lives.”

When he’s not working at the computer, Dr. Azaria’s favorite pastime is photography. He particularly enjoys photographing his 6 children, who reside with him and his wife in Ariel.