First Workshop on Semantic AI

Time: 
Tuesday, September 13, 2022 - 14:30

Organized by:

  • Sebastian NEUMAIER, Data Intelligence Research Group, St. Pölten University of Applied Sciences, Austria
  • Martin KALTENBÖCK, Semantic Web Company GmbH, Austria
  • Marta SABOU, Institute for Data, Process and Knowledge Management, Vienna University of Economics and Business, Austria

AI approaches based on machine learning have become increasingly popular across all sectors. However, experience shows that AI initiatives often fail due to the lack of appropriate data or low data quality. Furthermore, state-of-the-art AI models are largely opaque and suffer from a lack of transparency and explainability. Semantic AI approaches combine methodologies from statistical AI and symbolic AI based on semantic technologies such as knowledge graphs as well as natural language processing, while incorporating mechanisms for explainable AI. Semantic AI requires technical and organizational measures, which are implemented along the whole data lifecycle. While the individual aspects of semantic AI are being studied in their respective research communities, a dedicated community focusing on their combination is yet to be established. The proposed workshop intends to contribute to this endeavour.

More details

Call for Papers

  • Submission of workshop papers: June 13, 2022, 11:59 pm, Hawaii time
  • Notification of acceptance: July 15, 2022, 11:59 pm, Hawaii time

Topics relevant to this workshop include – but are not limited to – the following aspects of semantic AI:

  • Classification of types of emerging semantic AI approaches, e.g. those listed below
  • Novel design patterns for semantic AI systems
  • System development life-cycle stages when creating semantic AI systems
  • Neuro-symbolic AI, i.e., methods combining statistical AI based on machine learning and symbolic AI based on semantic technologies
  • Machine teaching and other use of knowledge graphs in supervised machine learning to improve model quality and robustness
  • Semi-supervised learning, i.e., combination of supervised and unsupervised machine learning, e.g., in natural language processing
  • Distant supervision, i.e., use of semantic rules and other heuristics for data labelling
  • Few- and zero-shot learning, i.e., adapting to unseen classes given only a few or no examples at all
  • Semantics-based approaches to explainable AI, human in the loop in AI to improve trustworthiness
  • Knowledge graphs and other semantic technologies for automated data quality management
  • Reports on benefits/limitations of combining machine learning and semantic technologies, possibly from the perspective of concrete application domains and tasks
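One topic above, distant supervision, can be illustrated with a minimal sketch: semantic rules and a simple lookup act as a stand-in for a knowledge graph to produce noisy labels without manual annotation. The city set and rule below are hypothetical illustrations, not part of any specific system.

```python
# Minimal sketch of distant supervision: labelling text snippets with a
# semantic heuristic instead of manual annotation. KNOWN_CITIES is a
# hypothetical stand-in for a knowledge graph lookup.

KNOWN_CITIES = {"Vienna", "Graz", "Linz"}

def label_snippet(snippet: str) -> str:
    """Heuristically label a snippet as LOCATION or OTHER."""
    tokens = snippet.replace(",", " ").split()
    if any(tok in KNOWN_CITIES for tok in tokens):
        return "LOCATION"
    return "OTHER"

snippets = [
    "The workshop takes place in Vienna",
    "Submissions are due in June",
]
labels = [label_snippet(s) for s in snippets]
print(labels)  # noisy labels usable as weak supervision for an ML model
```

The resulting labels are noisy by design; the point of distant supervision is that large quantities of such cheaply obtained labels can still train useful models.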

Background

AI approaches based on machine learning (ML) have become increasingly popular across all sectors. However, experience shows that AI initiatives often fail due to the lack of appropriate data or low data quality. Furthermore, state-of-the-art AI models are largely opaque and suffer from a lack of transparency and explainability. This means that even if the underlying mathematical principles of these methods are understood, it is often unclear why a particular prediction has been made and whether meaningful and grounded patterns have led to this prediction. Thus, there is a risk that the AI learns biases from the data or makes its decisions based on wrong or ambiguous information.

Semantic AI approaches combine methodologies from ML-based AI and symbolic AI based on semantic technologies such as knowledge graphs (KG) as well as natural language processing (NLP), while incorporating mechanisms for explainable AI (XAI). Semantic AI requires technical and organizational measures, which are implemented along the whole data lifecycle. KGs, being one of the core elements of symbolic AI, provide a human-understandable and machine-processable way to model and reason over complex relationships of entities of interest. They also provide means for more automated data quality management.

The interaction of ML-based and KG-based approaches can manifest in different forms, including the provision of rich background information for the input data by interlinking to existing external KGs, using the real-world domain knowledge in the KG to guide the training process of ML models, and enhancing causal inference to verify and/or assist the prediction. This close interconnection is not only capable of increasing the prediction performance of the AI model, but also of improving its robustness. Furthermore, the interaction between ML and KG leads to more understandable and interpretable predictions, as the ML approaches can be exploited to symbolise their intrinsic knowledge by generating new entities and relations within the KG and thus extending it, and the KG can be leveraged to generate human-understandable explanations for automatically produced predictions.
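The first interaction form mentioned above, providing rich background information by interlinking input data to an external KG, can be sketched as follows. The in-memory triple store and entity names here are hypothetical placeholders for a real external KG such as Wikidata.

```python
# Minimal sketch of KG-based feature enrichment: attach background facts
# about a mentioned entity to an input record before ML training.
# The KG dictionary below is a hypothetical in-memory stand-in.

KG = {  # subject -> {predicate: object}
    "aspirin": {"type": "drug", "targets": "COX enzymes"},
    "ibuprofen": {"type": "drug", "targets": "COX enzymes"},
}

def enrich(record: dict) -> dict:
    """Return the record extended with KG facts as extra features."""
    facts = KG.get(record.get("entity", ""), {})
    return {**record, **{f"kg_{p}": o for p, o in facts.items()}}

enriched = enrich({"entity": "aspirin", "text": "patient took aspirin"})
print(enriched)
```

In a real system the lookup would be a SPARQL query or entity-linking step against an external KG, and the added facts would serve as additional input features or training constraints for the ML model.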

While the neuro-symbolic nature of semantic AI systems supports the understandability and interpretability of the produced predictions, more focused approaches originating from the field of XAI enable truly explainable decisions. XAI focuses on integrating explainability mechanisms in complex black-box models, either post-hoc (on already trained models) or during training (self-learned explainability). The integration and close interaction of these mechanisms with the neuro-symbolic system shall overcome existing limitations of state-of-the-art XAI methods and provide explanations which are directly formulated in a domain-specific language.

Registration to this workshop

You need admission to the SEMANTICS 2022 conference to attend this workshop.
You may choose a full conference pass or a single day workshop ticket at our ticket store.
Please enter your ticket ID and email to register for this workshop.

Please enter the ID in the format XXXX-XXXX-XXXX; e.g. 5364-3318-8955