Seminar Advances in Artificial Intelligence
Artificial Intelligence is a central discipline of modern computer science. Research at the Institute of Artificial Intelligence focuses on knowledge-based techniques, i.e., how we can formalize and reason about knowledge within intelligent systems. Knowledge-based techniques complement learning-based techniques, e.g., to provide far-ranging foresight based on map knowledge for autonomous vehicles, or to plan the actions an intelligent agent needs to perform in order to reach a certain goal. The seminar introduces students to selected research topics in this area.
Participation and Requirements
Each participant is assigned a scientific article, which they summarize and present to the other students. In addition, participants write a short summary, survey related work on their assigned topic, peer-review the work of their fellow students, and actively discuss the topics presented in the course. The language of the seminar is English.
Schedule
The seminar consists of (at least) one initial meeting for the whole course and individual meetings with the topic supervisors. The final presentations take place in blocks at the end of the semester. Exact dates and times are still to be scheduled.
Topics
To deal with the dynamic nature of data streams, a stream is often segmented into so-called "windows" that represent static subparts of the streaming data. Typically, these windows have a fixed size and are defined by a user independently of the actual stream content. This paper introduces a new type of data-dependent window, called "frames", that actively considers and adapts to the processed data when segmenting the stream, thus providing more flexibility and expressiveness.
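The contrast can be sketched in a few lines of Python. The threshold-based frame condition below is purely illustrative (the paper defines frames more generally); it only shows how a data-dependent boundary differs from a fixed-size one.

```python
# Minimal sketch (not the paper's implementation): a fixed-size tumbling
# window versus a data-dependent "frame" that closes when the observed values
# spread too far apart. The spread-based policy is an illustrative example of
# a frame condition, not the paper's definition.
from typing import Iterable, Iterator, List


def tumbling_windows(stream: Iterable[float], size: int) -> Iterator[List[float]]:
    """Segment the stream into consecutive windows of a fixed, user-chosen size."""
    window: List[float] = []
    for item in stream:
        window.append(item)
        if len(window) == size:
            yield window
            window = []
    if window:
        yield window


def frames(stream: Iterable[float], max_spread: float) -> Iterator[List[float]]:
    """Segment the stream into data-dependent frames: a frame closes as soon as
    the spread of its values would exceed max_spread, so boundaries adapt to
    the data instead of being fixed in advance."""
    frame: List[float] = []
    for item in stream:
        if frame and max(frame + [item]) - min(frame + [item]) > max_spread:
            yield frame
            frame = []
        frame.append(item)
    if frame:
        yield frame


data = [1.0, 1.2, 1.1, 5.0, 5.2, 5.1, 1.0]
print(list(tumbling_windows(data, 3)))  # boundaries ignore the data
print(list(frames(data, 1.0)))          # boundaries follow value changes
```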
The Resource Description Framework (RDF) is a standardized data model for representing machine-processable knowledge within the Semantic Web. Stream reasoning deals with reasoning over streams of RDF data. This article presents a technique for incrementally maintaining logical consequences over windows of RDF data streams. The technique exploits time information to determine expired and new logical consequences. The provided experimental evidence shows that the approach significantly reduces the time required to compute valid inferences at each window change.
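As a rough illustration of the time-annotation idea (an assumed simplification, not the article's actual algorithm), one can tag every derived triple with the expiration time of its earliest-expiring premise; on a window change, expired consequences can then be dropped without recomputing everything. The single transitivity-style rule below stands in for a full RDFS rule set.

```python
# Sketch only: derived triples carry the minimum expiration timestamp of
# their premises, so a window change can retract exactly what has expired.
from typing import Dict, Tuple

Triple = Tuple[str, str, str]


def derive(facts: Dict[Triple, float]) -> Dict[Triple, float]:
    """Close the facts under one illustrative rule, (a p b), (b p c) -> (a p c).
    A derived triple inherits the earliest expiration among its premises; if a
    triple has several derivations, the latest-expiring one is kept."""
    inferred = dict(facts)
    changed = True
    while changed:
        changed = False
        for (a, p, b), t1 in list(inferred.items()):
            for (b2, p2, c), t2 in list(inferred.items()):
                if b == b2 and p == p2:
                    exp = min(t1, t2)
                    if inferred.get((a, p, c), -1.0) < exp:
                        inferred[(a, p, c)] = exp
                        changed = True
    return inferred


def slide(inferred: Dict[Triple, float], now: float,
          new_facts: Dict[Triple, float]) -> Dict[Triple, float]:
    """On a window change: drop expired triples, add the new input triples,
    and re-close only the surviving material."""
    alive = {t: exp for t, exp in inferred.items() if exp > now}
    alive.update(new_facts)
    return derive(alive)


w = derive({("a", "sub", "b"): 3.0, ("b", "sub", "c"): 5.0})
# ("a","sub","c") expires at 3.0, together with its earliest premise.
w = slide(w, now=3.0, new_facts={("c", "sub", "d"): 8.0})
print(w)  # keeps ("b","sub","c") and derives ("b","sub","d")
```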
The paper presents incremental evaluation algorithms that compute changes to materialized views in relational and deductive database systems in response to changes (insertions, deletions, and updates) to the base relations. The view definitions can be given in SQL or Datalog and may use UNION, negation, aggregation (e.g., SUM, MIN), linear recursion, and general recursion. The first algorithm is based on counting: it tracks the number of alternative derivations (counts) for each derived tuple in a view. The second algorithm, called Delete and Rederive (DRed), maintains views incrementally also for recursive views (negation and aggregation are permitted).
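The counting idea can be sketched for a single nonrecursive join view V(x, z) :- R(x, y), S(y, z); the code below is an illustrative simplification, not the paper's full algorithm.

```python
# Sketch of counting-based view maintenance: the materialized view stores how
# many derivations support each tuple; signed deltas adjust the counts, and a
# tuple leaves the view exactly when its count drops to zero.
from collections import Counter
from typing import List, Tuple


def view_delta(dR: List[Tuple[str, str, int]],
               S: List[Tuple[str, str]]) -> Counter:
    """Count changes to V caused by a signed delta dR against R:
    each dR entry is (x, y, +1) for an insertion or (x, y, -1) for a deletion."""
    delta: Counter = Counter()
    for x, y, sign in dR:
        for y2, z in S:
            if y == y2:
                delta[(x, z)] += sign
    return delta


counts: Counter = Counter()                      # derivation counts per view tuple
S = [("b", "c"), ("b", "d")]
counts.update(view_delta([("a", "b", +1)], S))   # load R = {("a","b")}
counts.update(view_delta([("a", "b", -1)], S))   # then delete it again
view = {t for t, n in counts.items() if n > 0}   # the view: tuples with count > 0
print(view)                                      # set() -- both derivations are gone
```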
Materialisation refers to both the result and the process of extending a set of facts by all consequences that follow from a given set of inference rules. Since this computation can be quite expensive, a materialisation is typically updated incrementally whenever the original facts change, rather than being recomputed from scratch each time. This article introduces the Backward/Forward (B/F) algorithm, which combines backward and forward chaining to update Datalog materialisations. Compared to other approaches, such as DRed, B/F is especially efficient when many facts have several alternative derivations.
http://www.cs.ox.ac.uk/boris.motik/pubs/mnph15incremental-BF.pdf
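The intuition behind B/F can be sketched with a propositional toy example (an assumed simplification; the real algorithm works on full Datalog and interleaves backward and forward steps far more carefully): after a deletion, a consequence is kept if backward chaining still finds a proof from the remaining facts.

```python
# Sketch of the backward-chaining check at the heart of the B/F intuition.
from typing import FrozenSet, List, Optional, Set, Tuple

Rule = Tuple[FrozenSet[str], str]  # (body atoms, head atom)


def provable(atom: str, base: Set[str], rules: List[Rule],
             visited: Optional[Set[str]] = None) -> bool:
    """Backward chaining: an atom holds if it is a base fact or if the body of
    some rule deriving it is provable (visited guards against cyclic rules)."""
    visited = visited or set()
    if atom in base:
        return True
    if atom in visited:
        return False
    return any(head == atom and
               all(provable(b, base, rules, visited | {atom}) for b in body)
               for body, head in rules)


rules: List[Rule] = [(frozenset({"p", "q"}), "r"), (frozenset({"s"}), "r")]
base = {"p", "q", "s"}
base.discard("p")       # delete a base fact that supported "r"
# DRed would over-delete "r" and then rederive it; the B/F idea is to confirm
# "r" directly by backward chaining, which still finds the proof via "s".
print(provable("r", base, rules))  # True
```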
Dialogue management (DM) is a difficult problem. This article presents OntoVPA, an Ontology-Based Dialogue Management System (DMS) for Virtual Personal Assistants, whose features are offered as generic solutions to core DM problems, such as dialogue state tracking, anaphora and coreference resolution. OntoVPA is the first commercially available, fully implemented DMS that employs ontologies and ontology-based rules for (a) domain model representation and reasoning, (b) dialogue representation and state tracking, and (c) response generation. OntoVPA is a declarative, knowledge-based system which can be customized to a new VPA domain by modifying and exchanging ontologies and rule bases, with very little to no conventional programming required.
OntoVPA—An Ontology-Based Dialogue Management System for Virtual Personal Assistants (SpringerLink)
IWSDS2017_paper_29.pdf (uni-ulm.de)
Even though dialogue systems have many applications, they are often limited to single-turn interactions, which makes personalization, customization, or context-dependent conversations difficult. To address this problem, the article describes an approach that uses domain-independent planning to automatically create dialogue plans that serve as goal-oriented guidance in multi-turn dialogues.
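The general idea of casting multi-turn dialogue as planning can be illustrated with a toy encoding (the action names and the breadth-first search below are placeholders for a real planning domain and a domain-independent planner):

```python
# Sketch: states are sets of dialogue facts, actions are system moves with
# preconditions and effects, and BFS finds a shortest dialogue plan.
from collections import deque
from typing import FrozenSet, List, Optional, Tuple

Action = Tuple[str, FrozenSet[str], FrozenSet[str]]  # (name, preconditions, effects)

ACTIONS: List[Action] = [
    ("greet",        frozenset(),                    frozenset({"greeted"})),
    ("ask_origin",   frozenset({"greeted"}),         frozenset({"origin_known"})),
    ("ask_date",     frozenset({"greeted"}),         frozenset({"date_known"})),
    ("offer_ticket", frozenset({"origin_known", "date_known"}),
                     frozenset({"ticket_offered"})),
]


def plan(init: FrozenSet[str], goal: FrozenSet[str]) -> Optional[List[str]]:
    """Breadth-first search for a shortest sequence of dialogue actions."""
    queue = deque([(init, [])])
    seen = {init}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, pre, eff in ACTIONS:
            if pre <= state:
                nxt = state | eff
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None


print(plan(frozenset(), frozenset({"ticket_offered"})))
# ['greet', 'ask_origin', 'ask_date', 'offer_ticket']
```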
This paper is concerned with finding a winning strategy that determines the arguments a proponent should assert during a dialogue such that it will successfully persuade its opponent of some goal arguments, regardless of the strategy employed by the opponent. By restricting the strategies for the proponent to simple ones and by modelling this as a planning problem, it is possible to use an automated planner to generate optimal simple strategies for realistically sized problems. These strategies guarantee with a certain probability that the proponent will be successful no matter which arguments the opponent chooses to assert.
This paper describes a system that applies technologies related to the Internet of Things (IoT), like wearable devices or wireless sensors, in order to collect and analyze healthcare-relevant data of a person. In particular, the novel usage of a fuzzy classifier allows for more accurate diagnoses, while its implementation based on a field-programmable gate array (FPGA) results in lower execution times compared to other approaches.
A new healthcare diagnosis system using an IoT-based fuzzy classifier with FPGA (researchgate.net)
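The fuzzy-classification idea can be illustrated with a toy example (illustrative thresholds, nothing like the paper's FPGA design): a sensor reading receives graded memberships in overlapping classes instead of a single hard label.

```python
# Sketch of fuzzy classification with triangular membership functions; the
# heart-rate ranges are made-up illustrative values, not the paper's.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


def classify_heart_rate(bpm: float) -> dict:
    """Fuzzy memberships of a heart-rate reading in three overlapping classes."""
    return {
        "low":    triangular(bpm, 30.0, 45.0, 60.0),
        "normal": triangular(bpm, 50.0, 75.0, 100.0),
        "high":   triangular(bpm, 90.0, 130.0, 180.0),
    }


reading = classify_heart_rate(95.0)
print(reading)                         # graded degrees: normal 0.2, high 0.125
print(max(reading, key=reading.get))   # crisp diagnosis after defuzzification
```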
Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, the authors propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. They also introduce a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem.
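LIME's core idea fits in a short from-scratch sketch (this is not the lime library's API): perturb the instance, query the black-box model, weight the perturbed samples by proximity, and fit a weighted linear model whose coefficients serve as the local explanation.

```python
# From-scratch sketch of a LIME-style local surrogate; black_box can be any
# callable returning a prediction score.
import numpy as np


def lime_explain(black_box, x: np.ndarray, n_samples: int = 500,
                 kernel_width: float = 1.0) -> np.ndarray:
    """Fit a proximity-weighted linear surrogate around x; its coefficients
    indicate how much each feature drives the prediction locally."""
    rng = np.random.default_rng(0)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # local perturbations
    y = np.array([black_box(z) for z in Z])                  # query the black box
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)                # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])              # intercept column
    sw = np.sqrt(w)                                          # weighted least squares
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                                         # per-feature weights


model = lambda z: float(z[0] > 0.5)                # an opaque classifier
print(lime_explain(model, np.array([0.6, -1.0])))
# feature 0 receives a much larger weight than feature 1
```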
Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, this paper presents a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, new methods are presented that show improved computational performance and/or better consistency with human intuition than previous approaches.
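For models with only a handful of features, Shapley values can be computed exactly by brute force, which makes the definition concrete (the shap library uses far more efficient estimators; the code below is only a baseline sketch).

```python
# Exact Shapley values by subset enumeration: a feature's importance is its
# average marginal contribution over all orderings, with "missing" features
# replaced by a background value.
from itertools import combinations
from math import factorial

import numpy as np


def shapley_values(model, x: np.ndarray, background: np.ndarray) -> np.ndarray:
    n = x.size

    def value(S):  # model output with only the features in S taken from x
        z = background.copy()
        z[list(S)] = x[list(S)]
        return model(z)

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi


model = lambda z: 2.0 * z[0] + z[1]  # toy additive model
print(shapley_values(model, np.array([1.0, 1.0]), np.zeros(2)))  # [2. 1.]
```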
Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, the authors propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. Experiments demonstrate that ORB is two orders of magnitude faster than SIFT while performing as well in many situations. Its efficiency is tested on several real-world applications, including object detection and patch-tracking on a smartphone.
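ORB is available in OpenCV, so the pipeline can be tried directly (cv2 is assumed installed as opencv-python; the image paths are placeholders). Since ORB descriptors are binary strings, they are matched with the Hamming distance, which is a large part of why matching is so much cheaper than with SIFT's floating-point descriptors.

```python
# Detecting and matching ORB features with OpenCV.
import cv2

img1 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)            # detector + binary descriptor
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching under the Hamming distance, keeping mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance: {matches[0].distance}")
```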
This article presents ORB-SLAM2, a complete SLAM system for monocular, stereo, and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard CPUs in a wide variety of environments, from small hand-held indoor sequences to drones flying in industrial environments and cars driving around a city. The back end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. The system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches to map points for zero-drift localization. An evaluation on 29 popular public sequences shows that the proposed method achieves state-of-the-art accuracy and is in most cases the most accurate SLAM solution.