AAAI 2016 - Vision

Total: 35

#1 Learning to Answer Questions from Image Using Convolutional Neural Network

Authors: Lin Ma ; Zhengdong Lu ; Hang Li

In this paper, we propose to employ a convolutional neural network (CNN) for the image question answering (QA) task. Our proposed CNN provides an end-to-end framework with convolutional architectures for learning not only the image and question representations, but also their inter-modal interactions, to produce the answer. More specifically, our model consists of three CNNs: one image CNN to encode the image content, one sentence CNN to compose the words of the question, and one multimodal convolution layer to learn their joint representation for classification in the space of candidate answer words. We demonstrate the efficacy of our proposed model on the DAQUAR and COCO-QA datasets, two benchmark datasets for image QA, where its performance significantly outperforms the state of the art.
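
As a rough illustration of this three-CNN design, the PyTorch sketch below wires a toy image encoder, a 1-D sentence CNN over word embeddings, and a multimodal convolution over the stacked representations into an answer-word classifier. All layer sizes, and the stack-then-convolve fusion, are our own placeholder choices, not the paper's configuration.

```python
# Hedged sketch of the three-component design: image CNN, sentence CNN,
# and a multimodal convolution for answer-word classification.
import torch
import torch.nn as nn

class ImageQACNN(nn.Module):
    def __init__(self, vocab_size, num_answers, emb_dim=128, img_dim=256):
        super().__init__()
        # Image CNN: a toy stand-in for the deep image encoder.
        self.image_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, img_dim),
        )
        self.proj = nn.Linear(img_dim, 128)   # align image dim with question dim
        # Sentence CNN: 1-D convolutions over word embeddings compose the question.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.sent_cnn = nn.Sequential(
            nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten(),
        )
        # Multimodal convolution over the two stacked modality vectors.
        self.fuse = nn.Sequential(
            nn.Conv1d(2, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, num_answers)

    def forward(self, image, question_ids):
        img = self.proj(self.image_cnn(image))                       # (B, 128)
        q = self.sent_cnn(self.embed(question_ids).transpose(1, 2))  # (B, 128)
        joint = torch.stack([img, q], dim=1)                         # (B, 2, 128)
        return self.classifier(self.fuse(joint))                     # answer logits

logits = ImageQACNN(vocab_size=10000, num_answers=1000)(
    torch.randn(2, 3, 64, 64), torch.randint(0, 10000, (2, 12)))
```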

#2 Domain-Constraint Transfer Coding for Imbalanced Unsupervised Domain Adaptation

Authors: Yao-Hung Hubert Tsai ; Cheng-An Hou ; Wei-Yu Chen ; Yi-Ren Yeh ; Yu-Chiang Frank Wang

Unsupervised domain adaptation (UDA) deals with the task in which labeled training data and unlabeled test data are collected from the source and target domains, respectively. In this paper, we address the practical and challenging scenario of imbalanced cross-domain data. That is, we do not assume that the number of labels is the same across domains, and we allow the data in each domain to be collected from multiple datasets/sub-domains. To solve this imbalanced domain adaptation task, we propose a novel algorithm, Domain-constraint Transfer Coding (DcTC). DcTC is able to exploit latent sub-domains within and across data domains, and it learns a common feature space for joint adaptation and classification. Without assuming balanced cross-domain data as most existing UDA approaches do, we show that our method performs favorably against state-of-the-art methods on multiple cross-domain visual classification tasks.

#3 Learning Cross-Domain Neural Networks for Sketch-Based 3D Shape Retrieval

Authors: Fan Zhu ; Jin Xie ; Yi Fang

Sketch-based 3D shape retrieval, which returns a set of relevant 3D shapes for a user's input sketch query, has been receiving increasing attention in both the graphics and vision communities. In this work, we address the sketch-based 3D shape retrieval problem with a novel Cross-Domain Neural Networks (CDNN) approach, which we further extend to Pyramid Cross-Domain Neural Networks (PCDNN) by incorporating a hierarchical structure. In order to alleviate the discrepancies between sketch features and 3D shape features, a neural network pair that forces identical representations at the target layer for instances of the same class is trained for sketches and 3D shapes, respectively. By constructing cross-domain neural networks at multiple pyramid levels, a many-to-one relationship is established between a 3D shape feature and sketch features extracted at different scales. We evaluate the effectiveness of both the CDNN and PCDNN approaches on the extended large-scale SHREC 2014 benchmark and compare them with other well-established methods. Experimental results suggest that both CDNN and PCDNN outperform the state of the art, and that PCDNN further improves over CDNN thanks to its hierarchical structure.
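
The sketch below illustrates the core pairing idea under stated assumptions: two small networks, one per domain, trained with a contrastive-style loss that pulls same-class sketch/shape pairs toward identical target-layer representations. The architectures, input dimensions, and the specific margin loss are illustrative stand-ins for the paper's design.

```python
# Hedged sketch of a cross-domain network pair with a pairing loss.
import torch
import torch.nn as nn

sketch_net = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
shape_net  = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 128))

def cross_domain_loss(sketch_feats, shape_feats, same_class, margin=1.0):
    """Pull same-class pairs together; push different-class pairs apart."""
    d = (sketch_net(sketch_feats) - shape_net(shape_feats)).pow(2).sum(dim=1)
    pos = same_class * d                           # force identical representations
    neg = (1 - same_class) * torch.clamp(margin - d.sqrt(), min=0).pow(2)
    return (pos + neg).mean()

loss = cross_domain_loss(torch.randn(8, 512), torch.randn(8, 1024),
                         torch.randint(0, 2, (8,)).float())
loss.backward()
```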

#4 Face Video Retrieval via Deep Learning of Binary Hash Representations

Authors: Zhen Dong ; Su Jia ; Tianfu Wu ; Mingtao Pei

Retrieving faces from large collections of videos is an attractive research topic with a wide range of applications. Its main challenges are large intra-class variations and tremendous time and space complexity. In this paper, we develop a new deep convolutional neural network (deep CNN) to learn discriminative and compact binary representations of faces for face video retrieval. The network integrates feature extraction and hash learning into a unified optimization framework for optimal compatibility between the feature extractor and the hash functions. In order to better initialize the network, we propose low-rank discriminative binary hashing to pre-learn the hash functions during the training procedure. Our method achieves excellent performance on two challenging TV-series datasets.
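
For the retrieval side, a minimal numpy sketch is given below, assuming an embedding has already been learned: binary codes are taken as the sign of a projection, and candidates are ranked by Hamming distance. The random projection is a placeholder for the deep CNN and its pre-learned hash functions.

```python
# Hedged sketch of sign-based binary coding and Hamming-distance ranking.
import numpy as np

rng = np.random.default_rng(0)
proj = rng.normal(size=(128, 48))                # placeholder for the deep CNN

def to_binary(feats):
    return (feats @ proj > 0).astype(np.uint8)   # 48-bit codes per face

def hamming_rank(query_code, db_codes):
    dists = (db_codes != query_code).sum(axis=1)
    return np.argsort(dists)                     # most similar first

db = to_binary(rng.normal(size=(1000, 128)))
order = hamming_rank(to_binary(rng.normal(size=(1, 128)))[0], db)
```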

#5 Group Cost-Sensitive Boosting for Multi-Resolution Pedestrian Detection

Authors: Chao Zhu ; Yuxin Peng

As an important yet challenging problem in computer vision, pedestrian detection has achieved impressive progress in recent years. However, the significant performance decline with decreasing resolution is a major bottleneck of current state-of-the-art methods. For the popular boosting-based detectors, one of the main reasons is that low-resolution samples, which are usually more difficult to detect than high-resolution ones, are treated with equal cost in the boosting process. As a consequence, they are more easily rejected in early stages and can hardly be recovered in later stages, ending up as false negatives. To address this problem, we propose a new multi-resolution detection approach based on a novel group cost-sensitive boosting algorithm, which extends the popular AdaBoost by exploring different costs for different resolution groups in the boosting process and places more emphasis on the low-resolution group in order to better handle the detection of hard samples. The proposed approach is evaluated on the challenging Caltech pedestrian benchmark and outperforms other state-of-the-art methods on different resolution-specific test sets.
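
A minimal numpy sketch of the group cost-sensitive idea follows, assuming decision stumps as weak learners: each sample carries a cost reflecting its resolution group, and the boosting weight update inflates misclassified high-cost (low-resolution) samples faster than standard AdaBoost would. The exact update rule and cost values are illustrative, not the paper's formulation.

```python
# Hedged sketch of cost-sensitive AdaBoost with per-group costs.
import numpy as np

def group_cost_adaboost(X, y, group_cost, n_rounds=50):
    """y in {-1,+1}; group_cost[i] > 1 for hard (low-resolution) samples."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(n_rounds):
        # Exhaustively pick the best stump (feature, threshold, polarity).
        best = None
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                for s in (1, -1):
                    pred = s * np.sign(X[:, f] - t + 1e-12)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, s)
        err, f, t, s = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = s * np.sign(X[:, f] - t + 1e-12)
        # Cost-sensitive update: mistakes on high-cost samples grow faster.
        w *= np.exp(-alpha * y * pred * np.where(pred != y, group_cost, 1.0))
        w /= w.sum()
        stumps.append((f, t, s)); alphas.append(alpha)
    return stumps, alphas

def predict(X, stumps, alphas):
    score = sum(a * s * np.sign(X[:, f] - t + 1e-12)
                for (f, t, s), a in zip(stumps, alphas))
    return np.sign(score)

X = np.random.randn(200, 5)
y = np.sign(X[:, 0] + 0.1 * np.random.randn(200))
cost = np.where(np.random.rand(200) < 0.3, 2.0, 1.0)  # pretend 30% are low-res
stumps, alphas = group_cost_adaboost(X, y, cost, n_rounds=10)
acc = (predict(X, stumps, alphas) == y).mean()
```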

#6 Structured Output Prediction for Semantic Perception in Autonomous Vehicles

Authors: Rein Houthooft ; Cedric De Boom ; Stijn Verstichel ; Femke Ongenae ; Filip De Turck

A key challenge in the realization of autonomous vehicles is the machine's ability to perceive its surrounding environment. This task is tackled through a model that partitions vehicle camera input into distinct semantic classes, by taking into account visual contextual cues. The use of structured machine learning models is investigated, which not only allow for complex input, but also arbitrarily structured output. Towards this goal, an outdoor road scene dataset is constructed with accompanying fine-grained image labelings. For coherent segmentation, a structured predictor is modeled to encode label distributions conditioned on the input images. After optimizing this model through max-margin learning, based on an ontological loss function, efficient classification is realized via graph cuts inference using alpha-expansion. Both quantitative and qualitative analyses demonstrate that by taking into account contextual relations between pixel segmentation regions within a second-degree neighborhood, spurious label assignments are filtered out, leading to highly accurate semantic segmentations for outdoor scenes.

#7 Transductive Zero-Shot Recognition via Shared Model Space Learning

Authors: Yuchen Guo ; Guiguang Ding ; Xiaoming Jin ; Jianmin Wang

Zero-shot recognition (ZSR) aims to learn recognition models for novel classes without labeled data. It is a challenging task that has drawn considerable attention in recent years. The basic idea is to transfer knowledge from seen classes via shared attributes. This paper focuses on transductive ZSR, i.e., we have unlabeled data for the novel classes. Instead of learning models for seen and novel classes separately, as in existing works, we put forward a novel joint learning approach which learns a shared model space (SMS) for models such that knowledge can be effectively transferred between classes using the attributes. An effective algorithm is proposed for optimization. We conduct comprehensive experiments on three benchmark datasets for ZSR. The results demonstrate that the proposed SMS significantly outperforms state-of-the-art related approaches, which validates its efficacy for the ZSR task.

#8 Face Model Compression by Distilling Knowledge from Neurons

Authors: Ping Luo ; Zhenyao Zhu ; Ziwei Liu ; Xiaogang Wang ; Xiaoou Tang

Recent advanced face recognition systems are built on large deep neural networks (DNNs) or their ensembles, which have millions of parameters. However, the expensive computation of DNNs makes their deployment difficult on mobile and embedded devices. This work addresses model compression for face recognition, where the learned knowledge of a large teacher network or its ensemble is utilized as supervision to train a compact student network. Unlike previous works that represent the knowledge by the softened label probabilities, which are difficult to fit, we represent the knowledge by using the neurons at the higher hidden layer, which preserve as much information as the label probabilities but are more compact. By leveraging the essential characteristics (domain knowledge) of the learned face representation, a neuron selection method is proposed to choose the neurons that are most relevant to face recognition. Using the selected neurons as supervision to mimic the single networks of DeepID2+ and DeepID3, which are state-of-the-art face recognition systems, a compact student with a simple network structure achieves better verification accuracy on LFW than its respective teachers. When using an ensemble of DeepID2+ as the teacher, the mimicked student outperforms it while achieving a 51.6 times compression ratio and a 90 times speed-up in inference, making this cumbersome model applicable on portable devices.
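
The sketch below conveys the supervision scheme under stated assumptions: a compact student regresses a selected subset of the teacher's top hidden-layer activations rather than softened label probabilities. Both networks are toy MLPs, and selecting neurons by activation variance is our stand-in for the paper's face-specific neuron selection method.

```python
# Hedged sketch of distillation from selected hidden neurons.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 512))
student = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 64))

# Select "informative" teacher neurons; variance over a probe batch is a
# placeholder for the paper's face-specific selection criterion.
with torch.no_grad():
    acts = teacher(torch.randn(256, 512))
    keep = acts.var(dim=0).topk(64).indices

def distill_loss(x):
    with torch.no_grad():
        target = teacher(x)[:, keep]        # selected teacher neurons
    return nn.functional.mse_loss(student(x), target)

opt = torch.optim.SGD(student.parameters(), lr=0.01)
loss = distill_loss(torch.randn(32, 512))
loss.backward(); opt.step()
```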

#9 MC-HOG Correlation Tracking with Saliency Proposal

Authors: Guibo Zhu ; Jinqiao Wang ; Yi Wu ; Xiaoyu Zhang ; Hanqing Lu

Designing effective features and handling the model drift problem are two important aspects of online visual tracking. For feature representation, gradient and color features are the most widely used, but how to effectively combine them for visual tracking is still an open problem. In this paper, we propose a rich feature descriptor, MC-HOG, which leverages rich gradient information across multiple color channels or color spaces. The MC-HOG features are then embedded into the correlation tracking framework to estimate the state of the target. To handle the model drift problem caused by occlusion or distractors, we propose saliency proposals as prior information to provide candidates and reduce background interference. In addition to the saliency proposals, a ranking strategy is proposed to determine the importance of these proposals by exploiting the learned appearance filter, historically preserved object samples, and the distracting proposals. In this way, the proposed approach can effectively explore color-gradient characteristics and alleviate the model drift problem. Extensive evaluations on the benchmark dataset show the superiority of the proposed method.

#10 Co-Occurrence Feature Learning for Skeleton Based Action Recognition Using Regularized Deep LSTM Networks

Authors: Wentao Zhu ; Cuiling Lan ; Junliang Xing ; Wenjun Zeng ; Yanghao Li ; Li Shen ; Xiaohui Xie

Skeleton based action recognition distinguishes human actions using the trajectories of skeleton joints, which provide a very good representation for describing actions. Considering that recurrent neural networks (RNNs) with Long Short-Term Memory (LSTM) can learn feature representations and model long-term temporal dependencies automatically, we propose an end-to-end fully connected deep LSTM network for skeleton based action recognition. Inspired by the observation that the co-occurrences of the joints intrinsically characterize human actions, we take the skeleton as the input at each time slot and introduce a novel regularization scheme to learn the co-occurrence features of skeleton joints. To train the deep LSTM network effectively, we propose a new dropout algorithm which simultaneously operates on the gates, cells, and output responses of the LSTM neurons. Experimental results on three human action recognition datasets consistently demonstrate the effectiveness of the proposed model.
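
One way to picture the co-occurrence regularization is as a group-sparsity penalty on the LSTM's input-to-hidden weights, as in the hedged PyTorch sketch below: grouping each neuron's weights by joint encourages neurons to attend to small, co-occurring joint subsets. The grouping layout and penalty form are illustrative assumptions, not the paper's exact regularizer or dropout scheme.

```python
# Hedged sketch: group-sparse penalty on LSTM input-to-hidden weights,
# grouped per skeleton joint (3 coordinates each).
import torch
import torch.nn as nn

num_joints, hidden = 20, 128
lstm = nn.LSTM(input_size=3 * num_joints, hidden_size=hidden, batch_first=True)

def co_occurrence_penalty(lstm, lam=1e-4):
    # weight_ih_l0 has shape (4*hidden, 3*num_joints): gate units x inputs.
    W = lstm.weight_ih_l0.view(4 * hidden, num_joints, 3)
    # L2 over each (unit, joint) block, L1 across blocks -> group sparsity.
    return lam * W.norm(dim=2).sum()

x = torch.randn(8, 50, 3 * num_joints)           # batch of skeleton sequences
out, _ = lstm(x)
loss = out.mean() + co_occurrence_penalty(lstm)  # placeholder task loss + reg
loss.backward()
```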

#11 Diversified Dynamical Gaussian Process Latent Variable Model for Video Repair

Authors: Hao Xiong ; Tongliang Liu ; Dacheng Tao

Videos can be stored on different media. However, media such as film and hard disks can suffer from unexpected data loss, for instance from physical damage. Repairing missing or damaged pixels is therefore essential for video maintenance and preservation. Most methods seek to fill in missing holes by synthesizing similar textures from local or global frames. However, this can introduce incorrect context, especially when the missing hole or the number of damaged frames is large. Furthermore, simple texture synthesis can introduce artifacts in undamaged and recovered areas. To address the aforementioned problems, we propose the diversified dynamical Gaussian process latent variable model (D2GPLVM), which accounts for the variety in existing videos by introducing a diversity-encouraging prior over the inducing points. The aim is to ensure that the trained inducing points, a smaller set drawn from all observed undamaged frames, are diverse and robust enough for context-aware and artifact-free video repair. The objective function of our model is initially not analytically tractable and is solved by variational inference. Experimental testing illustrates the robustness and effectiveness of our method for damaged video repair.

#12 Discrete Image Hashing Using Large Weakly Annotated Photo Collections

Authors: Hanwang Zhang ; Na Zhao ; Xindi Shang ; Huanbo Luan ; Tat-seng Chua

We address the problem of image hashing by learning binary codes from large, weakly supervised photo collections. Due to the explosive growth of user-generated media on the Web, this problem is becoming critical for large-scale visual applications such as image retrieval. While most existing hashing methods fail to address this challenge well, our method shows promising improvement due to two key advantages. First, we formulate a novel hashing objective that can effectively mine implicit weak supervision by collaborative filtering. Second, we propose a discrete hashing algorithm with efficient optimization that overcomes the inferior optimization of obtaining binary codes from relaxed real-valued solutions. In this way, our method can be considered a weakly supervised discrete hashing framework which jointly learns image semantics and the corresponding binary codes. Training on one million weakly annotated images, our experimental results demonstrate that image retrieval using the proposed hashing method outperforms other state-of-the-art methods on image and video benchmarks.

#13 Video Semantic Clustering with Sparse and Incomplete Tags

Authors: Jingya Wang ; Xiatian Zhu ; Shaogang Gong

Clustering tagged videos into semantic groups is important but challenging due to the need for jointly learning correlations between heterogeneous visual and tag data. The task is made more difficult by inherently sparse and incomplete tag labels. In this work, we develop a method for accurately clustering tagged videos based on a novel Hierarchical-Multi-Label Random Forest model capable of correlating structured visual and tag information. Specifically, our model exploits hierarchically structured tags at different levels of semantic abstractness, together with multiple tag statistical correlations, and thus discovers more accurate semantic correlations among different video data, even with highly sparse/incomplete tags.

#14 Deep Quantization Network for Efficient Image Retrieval

Authors: Yue Cao ; Mingsheng Long ; Jianmin Wang ; Han Zhu ; Qingfu Wen

Hashing has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval. Supervised hashing improves the quality of hash coding by exploiting semantic similarity between data pairs and has received increasing attention recently. In most existing supervised hashing methods for image retrieval, an image is first represented as a vector of hand-crafted or machine-learned features and then quantized in a separate step that generates the binary codes. However, this may produce suboptimal hash coding, since the quantization error is not statistically minimized and the feature representation is not optimally compatible with hash coding. In this paper, we propose a novel Deep Quantization Network (DQN) architecture for supervised hashing, which learns an image representation for hash coding and formally controls the quantization error. The DQN model comprises four key components: (1) a sub-network with multiple convolution-pooling layers to capture deep image representations; (2) a fully connected bottleneck layer to generate a dimension-reduced representation optimal for hash coding; (3) a pairwise cosine loss layer for similarity-preserving learning; and (4) a product quantization loss for controlling the hashing quality and the quantizability of the bottleneck representation. Extensive experiments on standard image retrieval datasets show that the proposed DQN model yields substantial improvements over the latest state-of-the-art hashing methods.
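
A hedged PyTorch sketch of components (3) and (4) follows: a pairwise cosine loss that pushes cosine similarity toward the pair label, plus a product quantization loss measuring each bottleneck sub-vector's distance to its nearest codeword. Codebook sizes, the loss forms, and the weighting are illustrative placeholders.

```python
# Hedged sketch of a pairwise cosine loss and a product quantization loss.
import torch
import torch.nn.functional as F

M, K, D = 4, 16, 64          # subspaces, codewords per subspace, bottleneck dim
codebooks = torch.randn(M, K, D // M, requires_grad=True)

def pairwise_cosine_loss(z, sim):
    """sim[i, j] = 1 for similar pairs, 0 otherwise."""
    zn = F.normalize(z, dim=1)
    cos = zn @ zn.t()                        # cosine similarities in [-1, 1]
    return ((cos - (2 * sim - 1)) ** 2).mean()

def product_quantization_loss(z):
    sub = z.view(z.size(0), M, D // M)       # split into M sub-vectors
    # Squared distance of each sub-vector to every codeword in its codebook.
    d = ((sub.unsqueeze(2) - codebooks.unsqueeze(0)) ** 2).sum(-1)  # (B, M, K)
    return d.min(dim=2).values.mean()        # distance to nearest codeword

z = torch.randn(8, D, requires_grad=True)    # bottleneck outputs (placeholder)
sim = (torch.rand(8, 8) > 0.5).float()
loss = pairwise_cosine_loss(z, sim) + 0.1 * product_quantization_loss(z)
loss.backward()
```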

#15 Recognizing Actions in 3D Using Action-Snippets and Activated Simplices

Authors: Chunyu Wang ; John Flynn ; Yizhou Wang ; Alan Yuille

Pose-based action recognition in 3D is the task of recognizing an action (e.g., walking or running) from a sequence of 3D skeletal poses. This is challenging because of variations due to different ways of performing the same action and inaccuracies in the estimation of the skeletal poses. The training data is usually small, and hence complex classifiers risk over-fitting it. We address this task using action-snippets, which are short sequences of consecutive skeletal poses capturing the temporal relationships between poses in an action. We propose a novel representation for action-snippets, called activated simplices. Each activity is represented by a manifold which is approximated by an arrangement of activated simplices. A sequence (of action-snippets) is classified by selecting the closest manifold and outputting the corresponding activity. This simple classifier helps avoid over-fitting but significantly outperforms state-of-the-art methods on standard benchmarks.
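
To make the classification rule concrete, the numpy sketch below computes the distance from a snippet to a simplex (the convex hull of a few vertices) by projected gradient with a standard probability-simplex projection, and labels the snippet by its nearest simplex. The simplices here are random placeholders; learning the activated simplices is the paper's contribution and is not reproduced.

```python
# Hedged sketch of nearest-simplex classification via projected gradient.
import numpy as np

def project_to_probability_simplex(v):
    """Euclidean projection of v onto {c : c >= 0, sum(c) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1
    rho = np.nonzero(u > css / np.arange(1, len(v) + 1))[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0)

def distance_to_simplex(x, V, steps=200):
    """V: (dim, k) matrix of simplex vertices; distance from x to conv(V)."""
    c = np.full(V.shape[1], 1.0 / V.shape[1])
    lr = 0.9 / (2 * np.linalg.norm(V, 2) ** 2 + 1e-12)   # safe step size
    for _ in range(steps):
        c = project_to_probability_simplex(c - lr * 2 * V.T @ (V @ c - x))
    return np.linalg.norm(V @ c - x)

def classify(x, class_simplices):
    return min(class_simplices,
               key=lambda cls: min(distance_to_simplex(x, V)
                                   for V in class_simplices[cls]))

rng = np.random.default_rng(0)
simplices = {"walk": [rng.normal(size=(30, 4))], "run": [rng.normal(size=(30, 4))]}
label = classify(rng.normal(size=30), simplices)
```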

#16 Path Following with Adaptive Path Estimation for Graph Matching

Authors: Tao Wang ; Haibin Ling

Graph matching plays an important role in many areas of computer vision. It is a well-known, generally NP-hard problem that has been investigated for decades. Among the many algorithms for graph matching, those utilizing the path following strategy have exhibited state-of-the-art performance. However, the main drawback of this category of algorithms lies in their high computational burden. In this paper, we propose a novel path following strategy for graph matching that aims to improve its computational efficiency. We first propose a path estimation method to reduce the computational cost at each iteration, and subsequently an adaptive step-length method to accelerate convergence. The proposed approach can be integrated into any algorithm that utilizes the path following strategy. To validate our approach, we compare it with several recently proposed graph matching algorithms on three benchmark image datasets. Experimental results show that our approach significantly improves the computational efficiency of the original algorithms while offering similar or better matching results.

#17 Zero-Shot Event Detection by Multimodal Distributional Semantic Embedding of Videos

Authors: Mohamed Elhoseiny ; Jingen Liu ; Hui Cheng ; Harpreet Sawhney ; Ahmed Elgammal

We propose a new zero-shot event detection method based on multimodal distributional semantic embedding of videos. Our model embeds object and action concepts, as well as other available modalities, from videos into a distributional semantic space. To our knowledge, this is the first zero-shot event detection model built on top of distributional semantics, and it extends distributional semantics in the following directions: (a) semantic embedding of multimodal information in videos (with a focus on the visual modalities); (b) semantic embedding of concept definitions; and (c) retrieval of videos by free-text event query (e.g., "changing a vehicle tire") based on their content. We first embed the video into the multimodal semantic space and then measure its similarity to the event query in free-text form. We validated our method on the large TRECVID MED (Multimedia Event Detection) challenge. Using only the event title as a query, our method improved over the state of the art, which uses large event descriptions, from 12.6% to 13.5% in MAP and from 0.73 to 0.83 in ROC-AUC. It is also an order of magnitude faster.
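
The scoring scheme can be pictured as in the numpy sketch below: a video is embedded as the confidence-weighted average of its detected concepts' word vectors, and ranked by cosine similarity to the embedded free-text query. The toy random word vectors and the plain weighted average are stand-ins for the distributional semantic model and the paper's full multimodal embedding.

```python
# Hedged sketch of zero-shot video scoring in a word-vector space.
import numpy as np

word_vec = {w: np.random.randn(50) for w in
            ["change", "vehicle", "tire", "jack", "wheel", "cook", "pan"]}

def embed(words, weights=None):
    v = sum((weights[i] if weights else 1.0) * word_vec[w]
            for i, w in enumerate(words))
    return v / (np.linalg.norm(v) + 1e-12)

def score_video(concepts, confidences, query):
    """concepts: detected concept names; confidences: detector scores."""
    v = embed(concepts, confidences)
    q = embed(query.split())
    return float(v @ q)                      # cosine similarity (unit vectors)

s = score_video(["jack", "wheel", "tire"], [0.9, 0.7, 0.8],
                "change vehicle tire")
```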

#18 Large Scale Similarity Learning Using Similar Pairs for Person Verification

Authors: Yang Yang ; Shengcai Liao ; Zhen Lei ; Stan Li

In this paper, we propose a novel similarity measure and introduce an efficient strategy to learn it using only similar pairs for person verification. Unlike existing metric learning methods, we consider both the difference and the commonness of an image pair to increase its discriminativeness. Under a pair-constrained Gaussian assumption, we show how to obtain the Gaussian priors (i.e., the corresponding covariance matrices) of dissimilar pairs from those of similar pairs. The application of a log-likelihood ratio makes the learning process simple and fast, and thus scalable to large datasets. Additionally, our method handles heterogeneous data well. Results on challenging face verification datasets (LFW and PubFig) and a person re-identification dataset (VIPeR) show that our algorithm outperforms state-of-the-art methods.
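
The numpy sketch below illustrates the flavor of such a Gaussian log-likelihood-ratio score under stated assumptions: similar-pair differences and dissimilar-pair differences are modeled as zero-mean Gaussians, and the score is a quadratic form in their inverse covariances. Deriving the dissimilar-pair covariance as twice the data covariance is our simplification, not the paper's pair-constrained derivation.

```python
# Hedged sketch of a Gaussian log-likelihood-ratio similarity score.
import numpy as np

def fit_llr(similar_pairs, all_feats):
    diffs = np.array([a - b for a, b in similar_pairs])
    S_sim = np.cov(diffs.T)                  # covariance of similar-pair diffs
    # Simplifying assumption: dissimilar-pair diffs behave like differences
    # of independent samples, giving ~ 2x the data covariance.
    S_dis = 2.0 * np.cov(all_feats.T)
    return np.linalg.inv(S_dis) - np.linalg.inv(S_sim)

def llr_score(A, x1, x2):
    e = x1 - x2
    return float(e @ A @ e)                  # higher => more likely same person

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 8))
pairs = [(feats[i], feats[i] + 0.1 * rng.normal(size=8)) for i in range(50)]
A = fit_llr(pairs, feats)
print(llr_score(A, feats[0], feats[0] + 0.05), llr_score(A, feats[0], feats[1]))
```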

#19 Pose-Guided Human Parsing by an AND/OR Graph Using Pose-Context Features

Authors: Fangting Xia ; Jun Zhu ; Peng Wang ; Alan Yuille

Parsing humans into semantic parts is crucial to human-centric analysis. In this paper, we propose a human parsing pipeline that uses pose cues, e.g., estimates of human joint locations, to provide pose-guided segment proposals for semantic parts. These segment proposals are ranked using standard appearance cues, deep-learned semantic features, and a novel pose feature called pose-context. The proposals are then selected and assembled using an And-Or graph to output a parse of the person. The And-Or graph is able to deal with large variability in human appearance due to pose, choice of clothing, etc. We evaluate our approach on the popular Penn-Fudan pedestrian parsing dataset, show that it significantly outperforms the state of the art, and perform diagnostics to demonstrate the effectiveness of the different stages of our pipeline.

#20 Metric Embedded Discriminative Vocabulary Learning for High-Level Person Representation

Authors: Yang Yang ; Zhen Lei ; Shifeng Zhang ; Hailin Shi ; Stan Li

A variety of encoding methods for the bag-of-words (BoW) model have been proposed to encode local features in image classification. However, most of them are unsupervised and simply employ k-means to form the visual vocabulary, which reduces the discriminative power of the features. In this paper, we propose metric-embedded discriminative vocabulary learning for high-level person representation, with application to person re-identification. A new and effective term is introduced which aims at pulling instances of the same person closer while pushing those of different persons farther apart in the metric space. With the learned vocabulary, we utilize a linear coding method to encode image-level (or holistic) features to extract the high-level person representation. Unlike traditional unsupervised approaches, our method can exploit the relationships (same or not) among persons. Since the linear coding has an analytic solution, it is easy to obtain the final high-level features. Experimental results on person re-identification demonstrate the effectiveness of the proposed algorithm.

#21 DARI: Distance Metric and Representation Integration for Person Verification

Authors: Guangrun Wang ; Liang Lin ; Shengyong Ding ; Ya Li ; Qing Wang

The past decade has witnessed the rapid development of feature representation learning and distance metric learning, yet the two steps are often discussed separately. To explore their interaction, this work proposes an end-to-end learning framework called DARI, i.e., Distance metric And Representation Integration, and validates its effectiveness on the challenging task of person verification. Given training images annotated with labels, we first produce a large number of triplet units, each containing three images: one person and matched/mismatched references. For each triplet unit, the distance disparity between the matched pair and the mismatched pair is to be maximized. We solve this objective by building a deep architecture of convolutional neural networks. In particular, the Mahalanobis distance matrix is naturally factorized into one top fully-connected layer that is seamlessly integrated with the bottom layers representing the image feature. The image feature and the distance metric can thus be simultaneously optimized via one-shot backward propagation. On several public datasets, DARI shows very promising performance on re-identifying individuals across cameras under various challenges, and it outperforms other state-of-the-art approaches.
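
A hedged PyTorch sketch of the integration follows: a placeholder feature network is topped with a single bias-free fully-connected layer whose weight W induces a Mahalanobis metric (M = W^T W), and a triplet hinge loss maximizes the disparity between matched and mismatched distances. The architecture and margin are illustrative.

```python
# Hedged sketch of a triplet loss over a factorized Mahalanobis metric.
import torch
import torch.nn as nn

feature_net = nn.Sequential(nn.Linear(256, 128), nn.ReLU())  # stand-in for CNN
metric_fc = nn.Linear(128, 64, bias=False)      # top FC layer: metric M = W^T W

def triplet_loss(anchor, pos, neg, margin=1.0):
    fa, fp, fn_ = (metric_fc(feature_net(t)) for t in (anchor, pos, neg))
    d_pos = (fa - fp).pow(2).sum(1)              # matched-pair distance
    d_neg = (fa - fn_).pow(2).sum(1)             # mismatched-pair distance
    # Hinge on the disparity between matched and mismatched pairs.
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

loss = triplet_loss(torch.randn(16, 256), torch.randn(16, 256),
                    torch.randn(16, 256))
loss.backward()    # one backward pass optimizes features and metric jointly
```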

#22 Multi-View 3D Human Tracking in Crowded Scenes

Author: Xiaobai Liu

This paper presents a robust multi-view method for tracking people in 3D scenes. Our method distinguishes itself from previous works in two aspects. First, we define a set of binary spatial relationships for individual subjects or pairs of subjects that appear at the same time, e.g., being left or right, or being closer to or farther from the camera. These binary relationships directly reflect the relative positions of subjects in the 3D scene and thus should persist during inference. Second, we introduce a unified probabilistic framework that exploits the binary spatial constraints for simultaneous 3D localization and cross-view human tracking. We develop a cluster Markov chain Monte Carlo method to search for the optimal solution. We evaluate our method on both public video benchmarks and a newly built multi-view video dataset. Comparative results show that our method achieves state-of-the-art tracking results and meter-level 3D localization on challenging videos.

#23 Unsupervised Co-Activity Detection from Multiple Videos Using Absorbing Markov Chain

Authors: Donghun Yeo ; Bohyung Han ; Joon Hee Han

We propose a simple but effective unsupervised learning algorithm to detect a common activity (co-activity) from a set of videos, formulated in a principled way using an absorbing Markov chain. In our algorithm, a complete multipartite graph is first constructed, where vertices correspond to subsequences extracted from videos using a temporal sliding window and edges connect vertices originating from different videos; the weight of an edge is proportional to the similarity between the features of its two end vertices. We then extend the graph structure by adding edges between temporally overlapping subsequences within a video, exploiting temporal locality to handle variable-length co-activities, and create an absorbing vertex connected from all other nodes. The proposed algorithm identifies a subset of subsequences as the co-activity by efficiently estimating absorption times in the constructed graph. The great advantage of our algorithm lies in the fact that it naturally handles more than two videos and can identify multiple instances of a co-activity with variable lengths in a video. Our algorithm is evaluated extensively on a challenging dataset and demonstrates outstanding performance both quantitatively and qualitatively.
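
The absorption-time computation at the heart of the method is compact, as the numpy sketch below shows: with a row-stochastic transient block Q and one absorbing vertex, the expected steps to absorption are t = (I - Q)^{-1} 1. How edge weights are derived from subsequence similarities and how vertices are finally selected follow the paper; the uniform absorption probability and the threshold here are illustrative.

```python
# Hedged sketch of expected absorption times in an absorbing Markov chain.
import numpy as np

def absorption_times(W, absorb_prob=0.1):
    """W: symmetric similarity matrix between subsequences (zero diagonal)."""
    n = W.shape[0]
    # Transient block Q: each vertex keeps (1 - absorb_prob) of its mass for
    # the graph walk and sends the rest to the absorbing vertex.
    Q = (1 - absorb_prob) * W / W.sum(axis=1, keepdims=True)
    return np.linalg.solve(np.eye(n) - Q, np.ones(n))   # t = (I - Q)^{-1} 1

rng = np.random.default_rng(1)
W = rng.random((6, 6)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
t = absorption_times(W)
co_activity = t > t.mean()   # keep vertices that stay in the chain longest
```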

#24 Reading Scene Text in Deep Convolutional Sequences

Authors: Pan He ; Weilin Huang ; Yu Qiao ; Chen Loy ; Xiaoou Tang

We develop a Deep-Text Recurrent Network (DTRN) that regards scene text reading as a sequence labeling problem. We leverage recent advances in deep convolutional neural networks to generate an ordered, high-level feature sequence from a whole word image, avoiding the difficult character segmentation problem. A deep recurrent model, built on long short-term memory (LSTM), is then developed to robustly recognize the generated CNN sequences, departing from most existing approaches, which recognize each character independently. Our model has a number of appealing properties in comparison to existing scene text recognition methods: (i) it can recognize highly ambiguous words by leveraging meaningful context information, allowing it to work reliably without either pre- or post-processing; (ii) the deep CNN features are robust to various image distortions; (iii) it retains the explicit order information of the word image, which is essential for discriminating word strings; and (iv) the model does not depend on a pre-defined dictionary, so it can process unknown words and arbitrary strings. It achieves impressive results on several benchmarks, advancing the state of the art substantially.
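
The pipeline can be sketched in PyTorch as below: a small convolutional encoder turns a word image into an ordered, column-wise feature sequence, and a bidirectional LSTM produces per-column character logits. All layer sizes are placeholders, and the decoding of the logits into a word string (as well as the paper's exact CNN) is omitted.

```python
# Hedged sketch of a CNN-to-RNN scene text reader.
import torch
import torch.nn as nn

class DeepTextNet(nn.Module):
    def __init__(self, num_chars=37):       # e.g. 26 letters + 10 digits + blank
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(64 * 8, 128, bidirectional=True, batch_first=True)
        self.head = nn.Linear(256, num_chars)

    def forward(self, img):                     # img: (B, 1, 32, W)
        f = self.cnn(img)                       # (B, 64, 8, W/4)
        seq = f.permute(0, 3, 1, 2).flatten(2)  # one feature vector per column
        out, _ = self.rnn(seq)                  # context over the whole word
        return self.head(out)                   # per-column character logits

logits = DeepTextNet()(torch.randn(2, 1, 32, 100))   # shape (2, 25, 37)
```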

#25 Concepts Not Alone: Exploring Pairwise Relationships for Zero-Shot Video Activity Recognition

Authors: Chuang Gan ; Ming Lin ; Yi Yang ; Gerard Melo ; Alexander G. Hauptmann

Vast quantities of videos are now being captured at astonishing rates, but the majority of them are not labelled. To cope with such data, we consider the task of content-based activity recognition in videos without any manually labelled examples, also known as zero-shot video recognition. To achieve this, videos are represented in terms of detected visual concepts, which are then scored as relevant or irrelevant according to their similarity with a given textual query. In this paper, we propose a more robust approach to scoring concepts in order to alleviate many of the brittleness and low-precision problems of previous work. We jointly consider semantic relatedness, visual reliability, and discriminative power, and, to handle noise and non-linearities in the ranking scores of the selected concepts, we propose a novel pairwise order matrix approach for score aggregation. Extensive experiments on the large-scale TRECVID Multimedia Event Detection data show the superiority of our approach.