Search results for David Miller (1-2 of 2)
Journal Articles
When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time
Publisher: Journals Gateway
Neural Computation (2019) 31 (8): 1624–1670.
Published: 01 August 2019
Abstract
A significant threat to the recent wide deployment of machine learning–based systems, including deep neural networks (DNNs), is adversarial learning attacks. The main focus here is on evasion attacks against DNN-based classifiers at test time. While much work has focused on devising attacks that make small perturbations to a test pattern (e.g., an image) that induce a change in the classifier's decision, until recently there has been a relative paucity of work defending against such attacks. Some works robustify the classifier to make correct decisions on perturbed patterns. This is an important objective for some applications and for natural adversary scenarios. However, we analyze the possible digital evasion attack mechanisms and show that in some important cases, when the pattern (image) has been attacked, correctly classifying it has no utility: this holds when the image to be attacked is (even arbitrarily) selected from the attacker's cache and when the sole recipient of the classifier's decision is the attacker. Moreover, in some application domains and scenarios, it is highly actionable to detect the attack irrespective of correctly classifying in the face of it (with classification still performed if no attack is detected). We hypothesize that adversarial perturbations are machine detectable even if they are small. We propose a purely unsupervised anomaly detector (AD) that, unlike previous works, (1) models the joint density of a deep layer using highly suitable null hypothesis density models (matched in particular to the nonnegative support for rectified linear unit (ReLU) layers); (2) exploits multiple DNN layers; and (3) leverages a source and destination class concept, source class uncertainty, the class confusion matrix, and DNN weight information in constructing a novel decision statistic grounded in the Kullback-Leibler divergence. Tested on MNIST and CIFAR image databases under three prominent attack strategies, our approach outperforms previous detection methods, achieving strong receiver operating characteristic area under the curve detection accuracy on two attacks and better accuracy than recently reported for a variety of methods on the strongest (CW) attack. We also evaluate a fully white box attack on our system and demonstrate that our method can be leveraged to strong effect in detecting reverse engineering attacks. Finally, we evaluate other important performance measures such as classification accuracy versus true detection rate and multiple measures versus attack strength.
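As a rough illustration of the general idea of a density-based detector on deep-layer activations (not the paper's actual ADA statistic, which combines multiple layers, source/destination class modeling, and a Kullback-Leibler-based decision statistic), the sketch below fits a per-class null density on clean penultimate-layer features and flags test inputs whose likelihood under the predicted class's null model is unusually low. The function names and the diagonal Gaussian-mixture null are assumptions made for illustration only.

```python
# Hedged sketch of a density-based anomaly detector on deep-layer activations.
# This is NOT the paper's ADA statistic; the names (fit_null_models, anomaly_score)
# and the simple Gaussian-mixture null model are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_null_models(features, labels, n_components=5, seed=0):
    """Fit one null-hypothesis density per class on clean (unattacked) deep-layer features."""
    models = {}
    for c in np.unique(labels):
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                              random_state=seed)
        gmm.fit(features[labels == c])
        models[c] = gmm
    return models

def anomaly_score(models, feature, predicted_class):
    """Low likelihood under the predicted (destination) class's null model => suspicious."""
    return -models[predicted_class].score_samples(feature.reshape(1, -1))[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for ReLU-layer activations of clean training data (nonnegative support).
    feats = np.abs(rng.normal(size=(1000, 16)))
    labels = rng.integers(0, 10, size=1000)
    models = fit_null_models(feats, labels)

    clean = np.abs(rng.normal(size=16))
    perturbed = clean + 3.0            # crude stand-in for an adversarially shifted activation
    print("clean score:    ", anomaly_score(models, clean, predicted_class=3))
    print("perturbed score:", anomaly_score(models, perturbed, predicted_class=3))
```

In practice the score would be thresholded against a value calibrated on held-out clean data, with classification performed only when no attack is detected.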
Journal Articles
Hierarchical, Unsupervised Learning with Growing via Phase Transitions
Publisher: Journals Gateway
Neural Computation (1996) 8 (2): 425–450.
Published: 15 February 1996
Abstract
We address unsupervised learning subject to structural constraints, with particular emphasis placed on clustering with an imposed decision tree structure. Most known methods are greedy, optimizing one node of the tree at a time to minimize a local cost. By contrast, we develop a joint optimization method, derived from information-theoretic principles and closely related to known methods in statistical physics. The approach is inspired by the deterministic annealing algorithm for unstructured data clustering, which was based on maximum entropy inference. The new approach is founded on the principle of minimum cross-entropy, using informative priors to approximate the unstructured clustering solution while imposing the structural constraint. The resulting method incorporates supervised learning principles applied in an unsupervised problem setting. In our approach, the tree “grows” by a sequence of bifurcations that occur while optimizing an effective free energy cost at decreasing temperature scales. Thus, estimates of the tree size and structure are naturally obtained at each temperature in the process. Examples demonstrate considerable improvement over known methods.
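For orientation, the sketch below shows the unstructured deterministic-annealing baseline the abstract builds on (maximum-entropy soft clustering at a decreasing temperature), not the paper's tree-structured minimum cross-entropy method. All names and parameter values are illustrative assumptions; at high temperature the centroids coincide, and they separate (the "phase transitions") as the temperature is lowered.

```python
# Minimal sketch of deterministic-annealing clustering (the unstructured baseline
# referenced by the abstract), not the paper's tree-structured method.
import numpy as np

def deterministic_annealing(X, n_clusters=4, T_start=5.0, T_min=0.05, cool=0.9, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    # Start all centroids near the data mean; clusters "split" as T decreases.
    centers = X.mean(axis=0) + 1e-3 * rng.normal(size=(n_clusters, X.shape[1]))
    T = T_start
    while T > T_min:
        for _ in range(iters):
            # E-step: Gibbs (maximum-entropy) association probabilities at temperature T.
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            logits = -d2 / T
            logits -= logits.max(axis=1, keepdims=True)
            p = np.exp(logits)
            p /= p.sum(axis=1, keepdims=True)
            # M-step: centroids minimize the effective free energy at this temperature.
            centers = (p.T @ X) / (p.sum(axis=0)[:, None] + 1e-12)
        T *= cool
    return centers, p

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Four well-separated 2D blobs as toy data.
    X = np.vstack([rng.normal(loc=m, scale=0.3, size=(100, 2))
                   for m in ([0, 0], [3, 0], [0, 3], [3, 3])])
    centers, p = deterministic_annealing(X)
    print(np.round(centers, 2))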