Machine ‘social network’ to update Air Force AI robots on the fly


Artificial intelligence is becoming increasingly capable, but early releases may not be quite ready for the real world: Microsoft had to withdraw its Tay chatbot in 2016 after a group of users tricked the AI into making racist and sexist comments. The same problem applies to military systems such as smart drones, but with more devious adversaries and more serious consequences. The US Air Force is looking for a proactive solution.

“Real-world examples are limited, and there are a large number of non-benign conditions,” Dr. Lisa Dolev, CEO of AI specialist Qylur Intelligent Systems, told Forbes. “That combination can cause problems.”

The US Air Force needs a way to update and retrain AI software when it runs into trouble in the field, and Qylur has been awarded a Phase I Small Business Innovation Research contract by the AFWERX Directorate, which is tasked with accelerating the Air Force’s adoption of new technology. Qylur will provide a means to quickly and efficiently update fielded AI systems using its Social Network of Intelligent Machines, or SNIM AI®, which manages updates for AI-based systems such as drones and ground robots.

SNIM AI identifies problems in the AI model and helps solve them with data collected by all the machines connected to the system.

No machine-learning system can be trained on every possible scenario, so it may fail when it encounters something new. Dolev gives the example of an object-recognition system seeing snow for the first time, which makes everything in the scene look different. She says such problems are inevitable with the types of systems the Air Force deploys because of the limited training data available.

“It’s not like learning to recognize a cat, where you have endless data from the Internet. You’re working with small, noisy data sets,” Dolev says. “It’s a chaotic environment.”

The problem with trained systems is called AI model drift, and it is well known in the business world. But while commercial operations can tolerate a model misbehaving for days at a time, anything in the defense world needs to be fixed as quickly as possible.

“In our system we have drift monitors embedded in the loop so we can see if anything is wrong. If it’s wrong, we have to go back and retrain,” says Dolev.
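The article does not describe how SNIM AI’s drift monitors work internally, but the general idea can be illustrated with a simple statistical check: compare the confidence scores the model produces in the field against a baseline recorded during validation, and raise a flag when the two distributions diverge. The sketch below is a minimal, hypothetical version of such a monitor; the class name, window size, threshold, and choice of a Kolmogorov–Smirnov-style statistic are all illustrative assumptions, not Qylur’s implementation.

```python
# Hypothetical drift monitor sketch: compares the model's live confidence
# scores against a baseline captured at validation time and flags the
# model for retraining when the distributions diverge. All names and
# threshold values are illustrative assumptions.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_scores, window=500, threshold=0.2):
        self.baseline = sorted(baseline_scores)  # confidences seen during validation
        self.window = deque(maxlen=window)       # rolling window of field confidences
        self.threshold = threshold               # maximum tolerated KS distance

    def observe(self, confidence):
        """Record one field prediction's confidence; return True if drift is detected."""
        self.window.append(confidence)
        return (len(self.window) == self.window.maxlen
                and self._ks_distance() > self.threshold)

    def _ks_distance(self):
        # Two-sample Kolmogorov-Smirnov statistic: the largest gap between
        # the two empirical cumulative distribution functions.
        field = sorted(self.window)
        max_gap = 0.0
        for x in self.baseline + field:
            cdf_base = sum(1 for v in self.baseline if v <= x) / len(self.baseline)
            cdf_field = sum(1 for v in field if v <= x) / len(field)
            max_gap = max(max_gap, abs(cdf_base - cdf_field))
        return max_gap
```

In a setup like this, a machine whose monitor returns True would flag its model for the retraining step Dolev describes.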

SNIM AI allows data from all the connected machines to be combined and used for retraining, so when one machine encounters snow, the system can pull together every image of snow-covered objects captured across the fleet. The updated identification model is then tested and verified by human operators before being pushed back out to selected machines in the field.
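As a rough illustration of that workflow (pool field data fleet-wide, retrain, then gate deployment behind a human check), the sketch below shows one plausible shape. Every function, field, and data structure here is an invented stand-in; the article does not describe SNIM AI’s actual interfaces.

```python
# Hypothetical sketch of the retrain-and-verify loop described above:
# pool drift-related samples from every connected machine, retrain a
# candidate model, and deploy it only after a human operator signs off.
# All names and interfaces are invented for illustration.
def pool_field_samples(fleet_uploads, trigger):
    """Gather every sample across the fleet whose tags match the drift trigger."""
    return [sample
            for uploads in fleet_uploads.values()   # machine_id -> list of samples
            for sample in uploads
            if trigger in sample["tags"]]

def retrain_and_verify(retrain, current_model, samples, human_approves):
    """Retrain on pooled data, but keep the old model unless a human approves."""
    candidate = retrain(current_model, samples)
    return candidate if human_approves(candidate) else current_model

# Example wiring with stand-in data and functions:
fleet_uploads = {
    "drone-01": [{"tags": ["snow", "vehicle"], "image": b"..."}],
    "robot-07": [{"tags": ["snow", "building"], "image": b"..."}],
}
snow_samples = pool_field_samples(fleet_uploads, "snow")
new_model = retrain_and_verify(
    retrain=lambda model, data: {"base": model, "extra_samples": len(data)},
    current_model={"base": "v1"},
    samples=snow_samples,
    human_approves=lambda m: True,  # stand-in for the mandatory field sanity test
)
```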

Dolev says they don’t deploy everything to everyone. The limited computing resources on board drones and other mobile systems mean they are not burdened with data they don’t need. SNIM AI is ‘mission adaptive’: snow updates, for example, will not be pushed to machines operating in the tropics.
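‘Mission adaptive’ presumably means an update carries metadata about the conditions it addresses, so only machines whose mission profile matches receive it. Here is a toy version of that routing; the dataclass fields and environment tags are assumptions for illustration only.

```python
# Hypothetical mission-adaptive routing sketch: an update is tagged with
# the environments it addresses, and only machines whose mission profile
# overlaps those tags receive it. Fields and tags are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Machine:
    machine_id: str
    environments: frozenset  # e.g. {"arctic", "urban"}

@dataclass(frozen=True)
class ModelUpdate:
    version: str
    environments: frozenset  # conditions this update addresses

def select_recipients(fleet, update):
    """Return only the machines whose mission environment needs this update."""
    return [m for m in fleet if m.environments & update.environments]

fleet = [
    Machine("drone-01", frozenset({"arctic"})),
    Machine("drone-02", frozenset({"tropics"})),
]
snow_update = ModelUpdate("v2.1-snow", frozenset({"arctic"}))

# Only drone-01 gets the snow retrain; drone-02 in the tropics is skipped.
assert [m.machine_id for m in select_recipients(fleet, snow_update)] == ["drone-01"]
```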

Dolev has plenty of experience in the field, having started 17 years ago, when the idea of embedding artificial intelligence into drones and other systems seemed like magic to some. Qylur’s products are used in the commercial security sector, where they automatically detect explosives and other threats in public places. The need for fast, responsive updates drove Dolev to develop SNIM AI.

While SNIM AI could in theory be fully automated, Dolev has a strict policy of keeping human oversight in the process.

“I insist on not just looking at the math and results, but a mandatory physical ‘sanity test’ to verify that what we see in the lab matches what we see in the field,” says Dolev.

Keeping human common sense in the update loop guards against the risk that the AI will retrain itself in a way that makes the problem worse. AI may be smart, but it is notoriously brittle and prone to strange errors, so Dolev’s approach offers some measure of security. This will only become more important as AI gains momentum: businesses, and the military, are scrambling to deploy systems so they don’t get left behind, and those autonomous machines must be safe and reliable.

As the Tay chatbot showed, any system can encounter adversaries who deliberately try to trip it up, feed it misleading data, or confront it with situations not covered by its training data. This is especially true in defense, where there is already research into forms of camouflage designed to fool automatic identification algorithms. An entire specialist field of ‘counter-AI warfare’ could emerge, aimed at finding vulnerabilities that make AI systems fail in ways their operators never anticipated.

Being able to exploit an AI’s weaknesses can provide a decisive advantage, but not if those exploits are quickly identified and corrected. Rapid update processes like SNIM AI will help keep the Air Force’s autonomous machines a step ahead of its adversaries.

