Classifier Resistant to Adversarial Example Attacks

A novel algorithm that provides security by training multiple classifiers on randomly partitioned class labels.

The Challenge

Deep learning neural networks have had phenomenal success with complex problem-solving applications.

But their susceptibility to adversarial attacks remains a primary safety and security concern for companies and nation-states alike.

Delivering a robust deep learning algorithm capable of thwarting state-of-the-art adversarial examples/images with high accuracy remains an open problem for artificial intelligence research and development.

The Solution

We have developed a novel algorithm that provides security against adversarial examples/images by training multiple classifiers on randomly partitioned class labels.
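
As a minimal sketch of the partitioning step (the parameter choices, names, and random scheme below are ours for illustration; the patented design may differ), each of several classifiers is trained against its own random grouping of the original class labels:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

NUM_CLASSES = 10      # e.g. MNIST or CIFAR-10
NUM_CLASSIFIERS = 5   # illustrative ensemble size (our assumption)
NUM_PARTS = 4         # group labels per random partition (our assumption)

def random_partition(num_classes: int, num_parts: int) -> np.ndarray:
    """Randomly assign each original class to one of `num_parts` groups."""
    return rng.integers(0, num_parts, size=num_classes)

# partitions[i][c] is the group label that classifier i is trained to
# predict for an input whose true class is c.
partitions = [random_partition(NUM_CLASSES, NUM_PARTS)
              for _ in range(NUM_CLASSIFIERS)]
```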

Classifiers are trained using meta-features derived from the outputs for each random partition of the class labels. This results in a much larger combined label space.
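
To see why the label space grows, note that the ensemble's joint output is one group label per classifier. Continuing the illustrative sketch above:

```python
# Joint outputs are tuples of group labels, one per classifier, so the
# combined label space has NUM_PARTS ** NUM_CLASSIFIERS points.
# With the illustrative values above: 4 ** 5 = 1024 combined labels,
# versus only 10 meaningful classes.
label_space_size = NUM_PARTS ** NUM_CLASSIFIERS
print(label_space_size)  # 1024
```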

Our approach maps meaningful classes to a considerably smaller subset of the label space. This significantly reduces the probability of adversarial examples/images being assigned valid random labels.
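
Continuing the same sketch, each meaningful class corresponds to one "codeword": the tuple of group labels it receives across all partitions. Any joint output outside that small valid set can be rejected:

```python
# Each original class c maps to one codeword: the tuple of group labels
# it receives under every partition.
codewords = {tuple(int(p[c]) for p in partitions): c
             for c in range(NUM_CLASSES)}

def decode(group_predictions):
    """Map the ensemble's joint output back to an original class.

    Only (at most) NUM_CLASSES of the 1024 possible tuples are valid,
    so a perturbation that disturbs the classifiers' outputs is likely
    to land on an invalid tuple and be rejected.
    """
    return codewords.get(tuple(group_predictions))  # None => reject
```

Under these illustrative numbers, a perturbation that drives the ensemble to an effectively random joint output receives a valid label with probability at most 10/1024, i.e. under 1%. (A practical scheme would also need to handle the rare case of two classes colliding on the same codeword.)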

The algorithm is highly robust: to ensure an adversarial example/image receives a valid label, attackers must develop noise-optimisation techniques that work against multiple classifier outputs simultaneously.

Our novel algorithm has produced excellent results against Carlini & Wagner (L2) and Projected Gradient Descent attacks, while maintaining high accuracy on the MNIST (>97%) and CIFAR-10 (>80%) datasets.

Intellectual Property Status

  • Filed. Awaiting publication.
  • UK Patent Application No 2117796.9
  • Classifier Resistant to Adversarial Example Attacks
  • University of Newcastle upon Tyne

The Opportunity

Application Description: A randomised labelling- and partitioning-based method to defend against adversarial examples.

We seek a partner to invest in R&D and develop the technology into a solution to adversarial attacks, with the aim of mass deployment through product, process, or service offerings.

Enquiries regarding further technical and product development, or licensing opportunities, are encouraged.

The technique could support:

  • Autonomous vehicles
  • Image recognition
  • Malware and intrusion detection
  • Surveillance

Contact