Industry

Roland Fleischhacker, Dr. Sonja Kabicher-Fuchs

As one of the largest property management companies in Europe, Stadt Wien - Wiener Wohnen (WW) manages approximately 220,000 community-owned apartments, 47,000 parking spaces and 5,500 shops. More than half a million tenants, about a quarter of Vienna's population, generate 1.5 million customer inquiries to the contact center per year. The reported customer issues are manifold, ranging from technical defects, suggestions, information requests and complaints to commercial matters concerning rent and operating costs. This large variety of topics, and the proper selection of the associated handling procedures, remains a major challenge for the contact center employees, particularly since some of the business processes initiated by the call center incur very high costs.

To increase the quality and speed of concern identification, WW implemented the cognitive decision system DEEP.assist, which went live in June 2014. With DEEP.assist, the call center agent only has to type in the caller's statements as normal German sentences. In doing so, the agent documents the business case while the system analyses the meaning of the text in real time, so that solution proposals appear already while the agent is writing. A key challenge in problem solving was the fact that callers often do not describe the specific problem but rather articulate its symptoms. With the help of chains of associations, DEEP.assist is able to identify the underlying concern even from very unusual descriptions.
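The idea of following chains of associations from a caller's words to a known concern can be sketched as a breadth-first walk over an association graph. The graph, the concern labels and the function below are invented for illustration; DEEP.assist's actual chains are far larger and language-aware.

```python
from collections import deque

# Hypothetical association graph: each term points to related concepts.
ASSOCIATIONS = {
    "puddle": ["water"],
    "dripping": ["water"],
    "water": ["leak"],
    "leak": ["plumbing defect"],
    "cold": ["radiator"],
    "radiator": ["heating defect"],
}

# Hypothetical set of concerns the contact center can act on.
CONCERNS = {"plumbing defect", "heating defect"}

def identify_concern(words, max_depth=4):
    """Follow chains of associations from the caller's words to a concern."""
    queue = deque((w, 0) for w in words)
    seen = set()
    while queue:
        term, depth = queue.popleft()
        if term in CONCERNS:
            return term
        if term in seen or depth >= max_depth:
            continue
        seen.add(term)
        for nxt in ASSOCIATIONS.get(term, []):
            queue.append((nxt, depth + 1))
    return None

# A caller describing only a symptom still reaches the concern:
print(identify_concern(["there", "is", "a", "puddle"]))  # plumbing defect
```

The point of the sketch is that "puddle" never mentions plumbing; the chain puddle -> water -> leak -> plumbing defect recovers the concern from the symptom.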

Miroslav Líška, Marek Šurek

At present it is very difficult to work with government data effectively: a great deal of effort is spent just integrating the various datasets. The data are often of low quality, i.e. inconsistent or incomplete, and published in different formats, all of which limits their integration and utilization. The linked data method currently seems to be the most promising approach to government data integration: data are annotated with ontologies, so they can easily be linked with semantics and processed with reasoners to infer additional content. When government data are both linked and open, great business value can be produced. On the one hand, the data integration process becomes more effective and precise; on the other hand, any software project can benefit from including linked open data in its solutions.
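The combination described above, linking datasets via shared semantics and letting a reasoner infer additional content, can be illustrated with a toy in-memory triple set. The two datasets, their IRIs and the entities are invented; only the `rdf:type`/`rdfs:subClassOf`/`owl:sameAs` vocabulary follows the standard names.

```python
# Two invented datasets that share no identifiers directly,
# but are connected through an owl:sameAs link.
dataset_a = {
    ("ex:Bratislava", "rdf:type", "ex:City"),
    ("ex:City", "rdfs:subClassOf", "ex:Place"),
}
dataset_b = {
    ("gov:BA", "owl:sameAs", "ex:Bratislava"),
    ("gov:BA", "gov:population", "475503"),
}

def infer(triples):
    """Apply two entailment rules repeatedly until nothing new appears."""
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for s, p, o in triples:
            # rdfs9: instances of a class are instances of its superclasses
            if p == "rdf:type":
                for s2, p2, o2 in triples:
                    if p2 == "rdfs:subClassOf" and s2 == o:
                        new.add((s, "rdf:type", o2))
            # owl:sameAs: copy statements between equal individuals
            if p == "owl:sameAs":
                for s2, p2, o2 in triples:
                    if s2 == o:
                        new.add((s, p2, o2))
        if not new <= triples:
            triples |= new
            changed = True
    return triples

merged = infer(dataset_a | dataset_b)
# The government record gov:BA is now also known to be an ex:Place,
# a fact stated in neither dataset on its own.
```

A production setup would of course use an RDF store and an OWL reasoner; the sketch only shows why the merged, linked datasets entail more than either one alone.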

This presentation describes the semantic web adoption process for Slovak government data. First, an initial formal proposal of semantic standards for Slovak government data [SK-SEM2013] is presented. Second, we show how the URI became the key element of the Slovak semantic standards. Third, a new approach to semantic standards is presented, covering their base properties, i.e. approved ontologies and a method for URI creation. Finally, a concrete example of government linked data is presented: Slovpedia, the Slovak linked open data database, and Pharmanet, a Slovpedia client that provides NLP-based extraction of drug interactions, extended with inferencing.

Michiel Hildebrand

CultuurLINK is a Web application for linking vocabularies, developed to support the cultural heritage community with vocabulary alignment. While several earlier tools support fully automatic alignment of vocabularies, these are difficult to apply effectively in practice. With CultuurLINK, the user guides the system step by step through the alignment process: with the graphical strategy editor, the user builds a case-specific link strategy out of building blocks that provide operations such as filters and string comparison. The output of each step is directly available for the user to inspect and evaluate manually before deciding which step to take next. Links are exported as SKOS triples, and the definition of the link strategy provides the provenance of these links. CultuurLINK is a new, free service for the Dutch cultural heritage community and part of the national roadmap for digital heritage, which aims to establish a digital infrastructure connecting collections from all over the Netherlands to each other and to the rest of the world.
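A link strategy built from such building blocks can be sketched as a small pipeline: a filter step followed by a string-comparison step whose output are SKOS-style links. The vocabularies, URIs and function names below are invented for illustration, not CultuurLINK's actual API.

```python
# Two tiny invented vocabularies: URI -> preferred label.
source = {"src:1": "Windmill", "src:2": "windmill ", "src:3": "Tulip"}
target = {"tgt:a": "windmill", "tgt:b": "Clog"}

def filter_nonempty(vocab):
    """Building block 1 (filter): drop concepts with empty labels."""
    return {uri: label for uri, label in vocab.items() if label.strip()}

def string_match(src, tgt):
    """Building block 2 (string comparison): case-insensitive exact match."""
    index = {label.strip().lower(): uri for uri, label in tgt.items()}
    links = []
    for uri, label in src.items():
        hit = index.get(label.strip().lower())
        if hit:
            links.append((uri, "skos:exactMatch", hit))
    return links

# Chain the blocks; each step's output can be inspected before
# deciding which block to apply next.
links = string_match(filter_nonempty(source), filter_nonempty(target))
```

Exporting `links` as SKOS triples, together with a record of which blocks produced them, mirrors the provenance idea described above.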

Interactive session: We look forward to discussing your own use cases for searching and linking vocabularies, and to showing how this technology can be used in other domains.

Lieke Verhelst

Organisations that develop semantic models have a lot to think about. The envisioned benefits from semantic solutions have to compete with many factors of uncertainty that threaten project results. Challenges lie not only in the evident scarcity of knowledge, skills and tools but also in other factors such as business objectives and requirements. 

In this talk Lieke Verhelst shares her long experience as a semantic modeller. Side by side with subject matter experts, she has constructed semantic models and infrastructures for the environment, construction and education sectors. She will point out the common pitfalls she has seen during the ontology development process and, while illustrating these, answer the question of why SKOS is the key to success for semantic solution projects.

Julien Gonçalves

The development of Big Data technologies offers new perspectives in building powerful disambiguation systems. New approaches can be imagined to discover and normalize non-controlled vocabularies such as named entities.

In this presentation, I will explain how Reportlinker.com, an award-winning market research solution, developed an inference engine based on supervised analysis to disambiguate the names of companies found in a corpus of unstructured documents.

Through several examples, I will explain the main steps of our approach:
- The discovery of non-verified facts (hypotheses) using a large volume of data
- The transformation of hypotheses into verified facts, using an iterative graph processing system
- The construction of a relational graph to attach new context to each normalized concept.
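The three steps above can be sketched with invented data: co-occurrence hypotheses are promoted to verified facts once they accumulate enough supporting evidence, and the resulting graph groups company-name variants into one normalized concept. Thresholds, names and counts are all hypothetical.

```python
from collections import defaultdict

# Step 1: hypotheses discovered in the corpus, with evidence counts.
hypotheses = {
    ("IBM", "International Business Machines"): 17,
    ("IBM", "Ibm Corp."): 9,
    ("IBM", "Intel"): 1,  # spurious co-occurrence, weak evidence
}

EVIDENCE_THRESHOLD = 3

# Step 2: promote sufficiently supported hypotheses to verified facts.
facts = [pair for pair, count in hypotheses.items()
         if count >= EVIDENCE_THRESHOLD]

# Step 3: build the relational graph; each connected component
# becomes one normalized company concept.
graph = defaultdict(set)
for a, b in facts:
    graph[a].add(b)
    graph[b].add(a)

def component(start):
    """Collect all name variants reachable from `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

print(sorted(component("IBM")))
```

In a real pipeline the promotion step would run iteratively on a graph processing system, with each pass adding facts that let further hypotheses be verified.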

Juan Sequeda

Our customer is one of the fastest growing organizations in the $30B Multi-level Marketing (MLM) industry. The customer has been managing their business for over four months with a relational database solution that has unfortunately been misaligned with internal data requirements.

Due to the lack of documentation and understanding of the misaligned solution, the company was not able to generate quarterly business and sales reports. For example, a simple question such as “How many orders were placed in May 2015?” meant different things to different people and departments within the organization.

In this presentation we will discuss how semantic technologies play a key role in addressing this problem. We will highlight how we bootstrap an Enterprise Ontology from a relational database and how we create a virtual Semantic Data Warehouse by mapping the relational database to the Enterprise Ontology, without having to physically move the data.
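The mapping idea can be sketched with a direct relational-to-ontology mapping in the spirit of R2RML: rows are exposed as triples on demand rather than materialized, which is what makes the warehouse "virtual". The table, the ontology IRIs and the data below are invented.

```python
import sqlite3

# A tiny invented relational source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, placed_on TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "2015-05-02"), (2, "2015-05-20"), (3, "2015-06-01")])

def triples():
    """Map each row to triples of a (hypothetical) Enterprise Ontology,
    generated lazily so no data is physically moved."""
    for oid, placed in conn.execute("SELECT id, placed_on FROM orders"):
        s = f"ex:order/{oid}"
        yield (s, "rdf:type", "ex:Order")
        yield (s, "ex:placedOn", placed)

# The contested business question, answered against one shared ontology:
may_orders = [s for s, p, o in triples()
              if p == "ex:placedOn" and o.startswith("2015-05")]
print(len(may_orders))  # 2
```

Once "Order" and "placedOn" have a single agreed meaning in the ontology, the May 2015 question has exactly one answer, regardless of which department asks it.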

Ilian Uzunov, Georgi Georgiev

To serve their daily readership of 2.2 million and realise their 'digital-first' strategy, the Financial Times chose Ontotext's GraphDB Enterprise Edition. The installation of Ontotext's RDF database has pushed the state of the art by increasing the scalability, reliability and availability of semantic technology. Additionally, Ontotext provided a number of NLP-services leveraging the semantic database to provide concept extraction, disambiguation and personalised recommendation service for FT's customers.

Our talk will discuss the problems facing the implementation of semantic-driven approaches in an enterprise environment, the lessons learned and how those problems were overcome to help a world-renowned news publisher provide innovative new products and services.

Interactive session: Join us for an in-depth technology demonstration and take the hot seat on our panel session.

Hans-Christian Schmitz

The Multilateral Interoperability Programme (MIP) is a multinational military standardization committee with participants from 24 member nations and NATO. It develops interoperability specifications for Command and Control Information Systems (C2IS). A key product is the MIP Information Model (MIM) that serves as a standard for information exchange for multiple echelons in joint and combined operations. Technically, the MIM is based on UML, extended by so-called UML profiles that constitute the MIM meta model. The MIM refers to various legacy data models and is under continuous development for enabling interoperability under changing operational requirements. To ensure model soundness and consistency, it comes with a suite of sophisticated tools for semantic analysis and configuration management. It seeks to close the gap between the domain expert on the one hand and the software implementer on the other hand, enabling model-driven software development. To this end, several transformations for the MIM have been defined. Among them is a transformation to OWL2. The derivation of an OWL ontology from the MIM makes it possible to add domain knowledge that cannot be expressed adequately with UML. The OWL-transformation is thus an important step in constructing a commonly agreed upon, extensible C2 ontology.
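The flavour of such a UML-to-OWL2 transformation can be conveyed with a toy version: UML classes become `owl:Class` declarations, generalizations become `rdfs:subClassOf` axioms, and attributes become datatype properties, emitted as Turtle. The MIM's actual transformation covers a full UML profile; the model and class names here are invented.

```python
# A minimal UML-like model (invented): classes, generalizations
# as (subclass, superclass) pairs, and typed attributes.
uml_model = {
    "classes": ["Unit", "AirUnit"],
    "generalizations": [("AirUnit", "Unit")],
    "attributes": [("Unit", "name", "string")],
}

def to_owl(model, ns="mim"):
    """Emit Turtle-style OWL2 axioms for a UML-like model."""
    lines = []
    for cls in model["classes"]:
        lines.append(f"{ns}:{cls} a owl:Class .")
    for sub, sup in model["generalizations"]:
        lines.append(f"{ns}:{sub} rdfs:subClassOf {ns}:{sup} .")
    for cls, attr, dtype in model["attributes"]:
        lines.append(f"{ns}:{attr} a owl:DatatypeProperty ; "
                     f"rdfs:domain {ns}:{cls} ; rdfs:range xsd:{dtype} .")
    return "\n".join(lines)

print(to_owl(uml_model))
```

The payoff described above comes after this step: once the model lives in OWL, domain knowledge that UML cannot express adequately (e.g. class disjointness or property restrictions) can be added directly to the ontology.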

Cedric Lopez

To support marketing strategy and competitive intelligence, industries need to monitor the Web to gather and make sense of a large amount of information. This information is scattered, and it takes humans a long time to analyze the different resources and compile the gathered knowledge in an intelligent way. SMILK is a joint laboratory between the Inria research institute and the VISEO company studying the strong coupling of algorithms and linguistic models at a semantic level, the extraction and disambiguation of knowledge guided by Web resources, and the combination of various ways of reasoning (logical inference, approximation and similarity, etc.).

In this context, we will present a prototype gathering the results obtained so far to enrich the user's knowledge while browsing the Web, using Natural Language Processing, open Web data and social networks. Our presentation will focus on the demonstration of an easy-to-install browser plugin that enriches the user's experience with data-gathering and sense-making functionalities applied in real time to the page being viewed.

Carlo Trugenberger

Potential uses of machine intelligence in health care applications have made big waves in the news recently. While the focus has been mainly on diagnostic help, there is another realm where the potential is staggering: speeding up drug research. I will describe a first concrete example, a pilot project of Merck using the InfoCodex software, in which semantic machine intelligence was successfully used to comb through large quantities of biomedical research papers in search of hidden correlations pointing to new biomarkers for diabetes and obesity.
