Open Framework to Protect Machine Learning (ML) Systems from Adversarial Attacks

Adversarial machine learning is a technique that attempts to fool models by supplying deceptive input, most commonly with the aim of causing the machine learning model to malfunction.

Now, Microsoft, in collaboration with IBM, MITRE, NVIDIA, and a host of other tech companies, has launched an open framework called the Adversarial ML Threat Matrix to help security analysts detect, respond to, and remediate adversarial attacks against machine learning (ML) systems.

The initiative is perhaps the first attempt to organize the different techniques adversaries use to subvert ML systems, and it is all the more crucial as artificial intelligence (AI) and ML are deployed in a growing variety of novel applications.

What Are Adversarial Attacks and Defenses in Deep Learning?

The rapid development of AI and deep learning (DL) techniques makes it critical to ensure the security and robustness of deployed algorithms. The vulnerability of DL algorithms to adversarial samples has been widely documented, with carefully fabricated inputs leading DL models to misbehave in various ways.

As such, adversarial attack and defense techniques are attracting increasing attention from both the ML and security communities. Threat actors can not only abuse the technology to run malware, but also leverage it to trick machine learning models into making incorrect decisions, posing a threat to the safety of AI applications.
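
To make this concrete, here is a minimal, hypothetical sketch of the classic Fast Gradient Sign Method (FGSM), one widely documented way such deceptive inputs are fabricated; the `model`, `image`, and `label` objects are placeholder assumptions, not something defined by the Threat Matrix itself.

```python
# Minimal FGSM sketch (PyTorch): nudge an input in the direction of the loss
# gradient so that a trained classifier misclassifies it.
# `model`, `image` (batched tensor), and `label` are illustrative placeholders.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # loss w.r.t. the true label
    loss.backward()                              # gradient of the loss w.r.t. the input
    # Step in the sign of the gradient to increase the loss, then clamp to a valid pixel range.
    adversarial = torch.clamp(image + epsilon * image.grad.sign(), 0.0, 1.0)
    return adversarial.detach()
```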

Security researchers have also documented so-called model-inversion attacks, in which access to a model is abused to infer information about its training data. Notably, most machine learning techniques are designed for specific problem sets in which the training data are assumed to come from the same statistical distribution (IID), an assumption that adversarially crafted inputs deliberately violate.
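
As a rough illustration of the idea (under assumed white-box access and purely illustrative hyperparameters), a model-inversion attack can be sketched as gradient ascent on an empty input to recover a "typical" example of a chosen class:

```python
# Hypothetical model-inversion sketch (PyTorch): with access to a trained
# classifier, optimize a blank input to maximize the score of a target class,
# reconstructing a representative sample and leaking training-data information.
# `model`, the input shape, and the hyperparameters are assumptions for illustration.
import torch

def invert_class(model, target_class, shape=(1, 3, 64, 64), steps=500, lr=0.1):
    x = torch.zeros(shape, requires_grad=True)   # start from an empty input
    optimizer = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        loss = -logits[0, target_class]          # maximize the target-class score
        loss.backward()
        optimizer.step()
        x.data.clamp_(0.0, 1.0)                  # keep the input in a valid range
    return x.detach()
```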

What Does the Adversarial ML Threat Matrix Bring to the Table?

The Adversarial ML Threat Matrix aims to address threats arising from the weaponization of data, with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE have vetted as effective against ML systems.

Organizations can thus use the Adversarial ML Threat Matrix to test their own AI models' resilience by simulating attack scenarios built from a list of known tactics: gaining access to the environment, contaminating training data, executing unsafe ML models, and exfiltrating sensitive information via model-stealing attacks.
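
The model-stealing tactic in particular can be pictured with a small, hypothetical sketch: an attacker queries a victim model's prediction endpoint and trains a local surrogate on the returned labels. The `query_victim` function, the surrogate architecture, and the data source below are all illustrative assumptions, not part of the framework.

```python
# Hypothetical model-stealing sketch (PyTorch): repeatedly query a victim
# model's prediction API and fit a local surrogate to its answers, effectively
# exfiltrating the model's behavior.
import torch
import torch.nn as nn

def steal_model(query_victim, query_loader, num_classes, epochs=5):
    # A small surrogate classifier for 28x28 single-channel inputs (an assumption).
    surrogate = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                              nn.ReLU(), nn.Linear(128, num_classes))
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x in query_loader:                     # attacker-chosen query inputs
            with torch.no_grad():
                y = query_victim(x).argmax(dim=1)  # labels leaked by the victim API
            optimizer.zero_grad()
            loss_fn(surrogate(x), y).backward()
            optimizer.step()
    return surrogate
```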

The overall goal, however, is for the framework to help security analysts orient themselves in these new and emerging threat scenarios and stay abreast of threat actors.
