Alzheimer's disease (AD) is a genetically complex neurodegenerative disease that leads to irreversible brain damage, severe cognitive decline, and ultimately death. A number of clinical trials and study initiatives have been set up to investigate AD pathology, producing large amounts of high-dimensional, heterogeneous data (biomarkers) for analysis. This paper focuses on combining clinical features from different modalities, including medical imaging and cerebrospinal fluid (CSF) measures, to diagnose AD and predict potential progression. Due to privacy and legal issues involved in clinical research, the study cohort (number of patients) is relatively small compared to the thousands of available biomarkers (predictors). We propose a hybrid pathological analysis model that integrates manifold learning and the random vector functional-link (RVFL) network so as to better extract discriminative information from limited training data. Furthermore, we model (current and future) cognitive health as a regression problem on age. By comparing the difference between predicted age and actual age, we show statistically significant differences between pathological stages. Verification tests are conducted on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Extensive comparisons are made against different machine learning algorithms, namely Support Vector Machine (SVM), Random Forest (RF), Decision Tree, and Multilayer Perceptron (MLP). Experimental results show that our proposed algorithm outperforms these baselines, which indicates promising robustness for practical clinical deployment.
We present a general solution for building task-oriented dialogue systems for online shopping, aiming to assist online customers in completing various purchase-related tasks, such as searching for products and answering questions, through natural-language conversation. As a pioneering work, we show what existing NLP techniques, data resources, and crowdsourcing can offer, and how they can be leveraged, to build such task-oriented dialogue systems for e-commerce use. To demonstrate its effectiveness, we integrate our system into a mobile online shopping app. To the best of our knowledge, this is the first time a Chinese-language AI bot has been practically deployed in an online shopping scenario with millions of real consumers. Interesting and insightful observations are presented in the experimental section, based on analysis of human-bot conversation logs. Several open challenges are also pointed out as future directions.
Effective human-aware robots should anticipate their user’s intentions. During hand-eye coordination tasks, gaze often precedes hand motion and can serve as a powerful predictor of intent. However, cooperative tasks in which a semi-autonomous robot serves as an extension of the human hand have rarely been studied in the context of hand-eye coordination. We hypothesize that accounting for anticipatory eye movements, in addition to the movements of the robot, will improve intent estimation. This research compares various machine learning methods for predicting intent from gaze-tracking data during robotic hand-eye coordination tasks. We found that, with proper feature selection, accuracies exceeding 94% and AUCs greater than 91% are achievable with several classification algorithms, but that anticipatory gaze data did not improve intent prediction.
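A typical pipeline for this kind of study extracts per-trial gaze features and cross-validates standard classifiers on them, reporting accuracy and AUC. The sketch below shows that workflow on synthetic data; the feature names, effect sizes, and data are illustrative assumptions, not the paper's dataset or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
intent = rng.integers(0, 2, size=n)  # binary intent label per trial

# Hypothetical gaze features: dwell time and fixation count shift with intent,
# saccade amplitude is deliberately uninformative.
X = np.column_stack([
    rng.normal(loc=intent * 1.5, scale=1.0),  # dwell time on target
    rng.normal(size=n),                       # saccade amplitude (noise)
    rng.normal(loc=intent * 1.0, scale=1.0),  # fixations near target
])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, intent, cv=5, scoring="accuracy").mean()
auc = cross_val_score(clf, X, intent, cv=5, scoring="roc_auc").mean()
```

Swapping in other estimators (SVM, logistic regression, gradient boosting) in place of the random forest is a one-line change, which is how the "several classification algorithms" comparison in the abstract would typically be run.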
Imagining the future helps anticipate and prepare for what is coming. This has great importance to many, if not all, human endeavors. In this paper, we develop the Planning Projector system prototype, which applies the plan-recognition-as-planning technique both to explain the observations derived from analyzing relevant news and social media, and to project a range of possible future state trajectories for human review. Unlike the plan recognition problem, where a set of goals, and often a plan library, must be given as part of the input, the Planning Projector system takes as input the domain knowledge, a sequence of observations derived from the news, a time horizon, and the number of trajectories to produce. It then computes the set of trajectories by applying a planner capable of finding a set of high-quality plans to a transformed planning problem. The Planning Projector prototype integrates several components, including: (1) knowledge engineering: the process of encoding the domain knowledge from domain experts; (2) data transformation: the problem of analyzing and transforming the raw data into a sequence of observations; (3) trajectory computation: characterizing the future state projection problem and computing a set of trajectories; and (4) user interface: clustering and visualizing the trajectories. We evaluate our approach qualitatively and conclude that the Planning Projector helps users understand future possibilities so that they can make more informed decisions.
Robots operating in the real world must be able to handle both discrete and continuous change. Many robot behaviors can be controlled through numeric parameters (called control variables), which affect the rate of continuous change. Previous approaches capable of reasoning efficiently with control variables impose severe restrictions that limit the expressivity of the problems they can solve. A broad class of robotic applications requires, for example, convex quadratic constraints on state and control variables that are jointly constrained and that affect multiple state variables simultaneously. However, extending prior approaches is not straightforward, since these characteristics are non-linear and hard to scale. We introduce cqScotty, a heuristic forward-search planner that solves these problems efficiently. While naive formulations of consistency checks are not convex and do not scale, cqScotty uses an efficient convex formulation, in the form of a Second-Order Cone Program (SOCP), that is very fast to solve. We demonstrate the scalability of our approach on three new realistic domains.
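The building block of an SOCP is the second-order cone constraint ||Ax + b||₂ ≤ cᵀx + d, which is convex and cheap to evaluate. The sketch below is not cqScotty's formulation; it is only a hedged illustration of checking whether a candidate assignment of control variables satisfies one such constraint, with all matrices and values invented for the example.

```python
import numpy as np

def soc_satisfied(A, b, c, d, x, tol=1e-9):
    """Check the second-order cone constraint ||A x + b||_2 <= c^T x + d."""
    return np.linalg.norm(A @ x + b) <= c @ x + d + tol

# Illustrative constraint on two control variables: ||x||_2 <= 1 (unit disk).
A = np.eye(2)
b = np.zeros(2)
c = np.zeros(2)
d = 1.0

inside = soc_satisfied(A, b, c, d, np.array([0.6, 0.6]))   # norm ~0.85 -> True
outside = soc_satisfied(A, b, c, d, np.array([0.9, 0.9]))  # norm ~1.27 -> False
```

In a full planner, many such constraints over states and controls would be handed to a dedicated SOCP solver (e.g., via a modeling library such as CVXPY) rather than checked pointwise; the point of the convex formulation is that feasibility and optimization over the whole trajectory remain tractable.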
Configuration planning is a form of task planning that takes into consideration both causal and information dependencies in goal achievement. This type of planning is interesting, for instance, in smart-home environments, which contain various sensors and robots that provide services to the inhabitants. Requests for information, for instance from an activity recognition system, should cause the smart home to configure itself in such a way that all requested information will be provided when it is needed. This paper addresses temporal configuration planning, in which information availability and goals are linked to temporal intervals that are subject to constraints. Our solutions are based on constraint-based planning, which uses different types of constraints to model different types of knowledge. We propose and compare two approaches to configuration planning. The first models information via the conditions and effects of planning operators and essentially reduces configuration planning to constraint-based temporal planning. The second solves information dependencies separately from task planning and optimizes the cost of reaching individual information goals. We compare these approaches in terms of the time it takes to solve problems and the quality of the solutions they provide.
While recent advances in computer vision have caused object recognition rates to spike, there is still much room for improvement. In this paper, we develop an algorithm to improve object recognition by integrating human-generated contextual information with vision algorithms. Specifically, we examine how interactive systems such as robots can utilize two types of context information: verbal descriptions of an environment and human-labeled datasets. We propose a re-ranking schema, MultiRank, for object recognition that can efficiently combine such information with computer vision results. In our experiments, we achieve accuracy improvements of up to 9.4% and 16.6% using the oracle and the detected bounding boxes, respectively, over the vision-only recognizers. We conclude that our algorithm has the ability to make a significant impact on object recognition in robotics and beyond.
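The abstract does not spell out MultiRank's scoring rule, so the sketch below shows only the generic idea of context-aware re-ranking: blend vision confidence scores with a context prior (e.g., derived from a verbal description of the scene) and re-sort the candidate labels. The labels, scores, and the mixing weight `alpha` are all illustrative assumptions.

```python
# Vision-only confidence scores for candidate labels in one bounding box.
vision_scores = {"mug": 0.40, "bowl": 0.35, "keyboard": 0.25}

# Context prior, e.g., from a verbal description like "objects on a desk".
context_prior = {"mug": 0.5, "bowl": 0.1, "keyboard": 0.4}

alpha = 0.6  # illustrative weight on vision vs. context
combined = {label: alpha * vision_scores[label]
                   + (1 - alpha) * context_prior[label]
            for label in vision_scores}

# Re-rank candidates by the combined score.
ranked = sorted(combined, key=combined.get, reverse=True)
# ranked -> ['mug', 'keyboard', 'bowl']
```

Note how the context prior demotes "bowl" (plausible to the vision model, implausible on a desk) below "keyboard" even though vision ranked it second; that reordering effect is the essence of re-ranking with human-generated context.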
Interaction between multiple agents requires some form of coordination and a level of mutual awareness. When computers and robots interact with people, they need to recognize human plans and react appropriately. Plan and goal recognition techniques have focused on identifying an agent's task given a sufficiently long action sequence. However, by the time the plan and/or goal is recognized, it may be too late to compute an interactive response. We propose an integration of planning with probabilistic recognition in which each method uses intermediate results from the other as a guiding heuristic, both for recognizing the plan/goal in progress and for computing the interactive response. We show that, like the underlying recognition method, these interaction problems can be compiled into classical planning problems and solved using off-the-shelf planners. In addition to the methodology, this paper introduces problem categories for different forms of interaction, an evaluation metric for the benefits of interaction, and extensions to the recognition algorithm that make its intermediate results more useful while the plan is in progress.