George Kesidis
1-3 of 3 results
Journal Article
Neural Computation (2021) 33 (5): 1329–1371. Published: 13 April 2021.
Abstract:
Backdoor data poisoning attacks add mislabeled examples with an embedded backdoor pattern to the training set, so that the classifier learns to predict a target class whenever the backdoor pattern is present in a test sample. Here, we address post-training detection of scene-plausible perceptible backdoors, a type of backdoor attack that can be fashioned relatively easily, particularly against DNN image classifiers. A post-training defender does not have access to the potentially poisoned training set; it has only the trained classifier and some unpoisoned examples that need not be training samples. Without the poisoned training set, the only information about a backdoor pattern is encoded in the DNN's trained weights. This detection scenario is of great import for legacy and proprietary systems, cell phone apps, and training outsourcing, where the user of the classifier does not have access to the entire training set. We identify two important properties of scene-plausible perceptible backdoor patterns, spatial invariance and robustness, based on which we propose a novel detector using the maximum achievable misclassification fraction (MAMF) statistic. We detect whether the trained DNN has been backdoor-attacked and infer the source and target classes. Our detector outperforms existing detectors and, coupled with an imperceptible backdoor detector, helps achieve post-training detection of most evasive backdoors of interest.
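The MAMF statistic is described only at a high level in the abstract. The sketch below is a rough, hypothetical illustration of the underlying idea, not the authors' implementation: the function name, the patch-sliding scheme, the stride, and the classifier interface are all assumptions. It estimates how strongly a candidate perceptible pattern drives clean source-class images toward a putative target class.

```python
import numpy as np

def mamf_like_statistic(classifier, source_images, patch, target_class, stride=8):
    """Hypothetical MAMF-style estimate: slide a candidate backdoor patch over
    clean source-class images and return the largest fraction of images the
    classifier sends to the putative target class at any single location.

    classifier:    callable mapping a batch (N, H, W, C) -> predicted labels (N,)
    source_images: float array (N, H, W, C) of clean source-class images
    patch:         float array (h, w, C) candidate perceptible backdoor pattern
    """
    _, H, W, _ = source_images.shape
    h, w, _ = patch.shape
    best_fraction = 0.0
    for top in range(0, H - h + 1, stride):
        for left in range(0, W - w + 1, stride):
            stamped = source_images.copy()
            stamped[:, top:top + h, left:left + w, :] = patch  # embed the pattern
            preds = classifier(stamped)
            best_fraction = max(best_fraction, float(np.mean(preds == target_class)))
    return best_fraction  # a value near 1.0 for some (source, target) pair is suspicious
```

A detector built on such a statistic would scan all (source, target) class pairs and flag the pair with an anomalously high value; the actual method also exploits the spatial invariance and robustness properties noted above.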
Journal Article
Neural Computation (2019) 31 (8): 1624–1670. Published: 01 August 2019.
Abstract:
A significant threat to the recent, wide deployment of machine learning–based systems, including deep neural networks (DNNs), is adversarial learning attacks. The main focus here is on evasion attacks against DNN-based classifiers at test time. While much work has focused on devising attacks that make small perturbations to a test pattern (e.g., an image) that induce a change in the classifier's decision, until recently there has been a relative paucity of work defending against such attacks. Some works robustify the classifier to make correct decisions on perturbed patterns. This is an important objective for some applications and for natural adversary scenarios. However, we analyze the possible digital evasion attack mechanisms and show that in some important cases, correctly classifying an attacked pattern (image) has no utility: for example, when the image to be attacked is (even arbitrarily) selected from the attacker's cache and the sole recipient of the classifier's decision is the attacker. Moreover, in some application domains and scenarios, it is highly actionable to detect the attack irrespective of whether the attacked sample is correctly classified (with classification still performed if no attack is detected). We hypothesize that adversarial perturbations are machine detectable even if they are small. We propose a purely unsupervised anomaly detector (AD) that, unlike previous works, (1) models the joint density of a deep layer using highly suitable null hypothesis density models (matched in particular to the nonnegative support of rectified linear unit (ReLU) layers); (2) exploits multiple DNN layers; and (3) leverages a source and destination class concept, source class uncertainty, the class confusion matrix, and DNN weight information in constructing a novel decision statistic grounded in the Kullback-Leibler divergence. Tested on the MNIST and CIFAR image databases under three prominent attack strategies, our approach outperforms previous detection methods, achieving strong receiver operating characteristic area under the curve (ROC AUC) detection accuracy on two attacks and better accuracy than recently reported for a variety of methods on the strongest (Carlini-Wagner, CW) attack. We also evaluate a fully white-box attack on our system and demonstrate that our method can be leveraged to strong effect in detecting reverse-engineering attacks. Finally, we evaluate other important performance measures, such as classification accuracy versus true detection rate and multiple measures versus attack strength.
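The actual detector combines several ingredients (null models matched to nonnegative ReLU support, multiple layers, class confusion, DNN weight information, and a KL-based decision statistic). The sketch below is only a minimal, single-layer stand-in under stated assumptions: deep-layer features are assumed precomputed, and a Gaussian mixture on log-transformed activations is used purely as a placeholder for the paper's null density models.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_null_models(layer_features, predicted_labels, n_classes=10, n_components=5):
    """Fit one per-class null density model on clean deep-layer activations.
    (Assumes each class has enough clean samples to fit a small mixture.)"""
    models = {}
    for c in range(n_classes):
        feats = np.log1p(layer_features[predicted_labels == c])  # crude nonnegativity handling
        models[c] = GaussianMixture(n_components=n_components).fit(feats)
    return models

def anomaly_score(models, test_features, test_predictions):
    """Low log-likelihood under the predicted class's null model -> suspicious."""
    scores = np.empty(len(test_features))
    for i, (x, c) in enumerate(zip(test_features, test_predictions)):
        scores[i] = -models[int(c)].score_samples(np.log1p(x)[None, :])[0]
    return scores  # threshold chosen on held-out clean data for a target false-positive rate
```

This captures only the "anomaly under a per-class null density" intuition; the paper's multi-layer, KL-divergence-based statistic is considerably richer.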
Journal Article
Neural Computation (2012) 24 (7): 1926–1966. Published: 01 July 2012.
Abstract:
We introduce new inductive, generative semisupervised mixtures with more finely grained class label generation mechanisms than in previous work. Our models combine advantages of semisupervised mixtures, which achieve label extrapolation over a component, and nearest-neighbor (NN)/nearest-prototype (NP) classification, which achieves accurate classification in the vicinity of labeled samples or prototypes. For our NN-based method, we propose a novel two-stage stochastic data generation, with all samples first generated using a standard finite mixture and then all class labels generated, conditioned on the samples and their components of origin. This mechanism entails an underlying Markov random field, specific to each mixture component or cluster. We invoke the pseudo-likelihood formulation, which forms the basis for an approximate generalized expectation-maximization model learning algorithm. Our NP-based model overcomes a problem with the NN-based model that manifests at very low labeled fractions. Both models are advantageous when within-component class proportions are not constant over the feature space region “owned by” a component. The practicality of this scenario is borne out by experiments on UC Irvine data sets, which demonstrate significant gains in classification accuracy over previous semisupervised mixtures and also overall gains over k-nearest-neighbor (KNN) classification. Moreover, for very small labeled fractions, our methods, on the whole, outperform supervised linear and nonlinear kernel support vector machines.
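The NN/NP label-generation mechanisms and the pseudo-likelihood-based generalized EM algorithm are not reproduced here. As a point of reference only, the sketch below implements the standard semisupervised mixture baseline that the article improves upon: fit a mixture on all samples (labeled and unlabeled), estimate per-component class proportions from the labeled subset, and classify by posterior-weighted component voting. Function and variable names are assumptions, not the article's.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def semisup_mixture_classify(X_all, X_lab, y_lab, X_test, n_components=10, n_classes=3):
    """Baseline semisupervised mixture: label extrapolation over a component."""
    gmm = GaussianMixture(n_components=n_components).fit(X_all)  # labeled + unlabeled
    resp_lab = gmm.predict_proba(X_lab)                          # (n_lab, J) responsibilities
    # Per-component class proportions from the labeled subset (Laplace-smoothed).
    class_given_comp = np.ones((n_components, n_classes))
    for c in range(n_classes):
        class_given_comp[:, c] += resp_lab[y_lab == c].sum(axis=0)
    class_given_comp /= class_given_comp.sum(axis=1, keepdims=True)
    # Classify test points by posterior-weighted component class votes.
    resp_test = gmm.predict_proba(X_test)                        # (n_test, J)
    return np.argmax(resp_test @ class_given_comp, axis=1)
```

The article's models refine exactly the weak point of this baseline: they allow class proportions to vary within the region "owned by" a component rather than assuming a single proportion per component.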