Volker Tresp is a true veteran in the field of machine learning and semantic technologies. Having received his MSc and PhD from Yale University in 1986 and 1989, respectively, he has been leading research teams at Siemens for the last 27 years. The spin-off company Panoratio, which provides big data as a service for B2B solution providers, and the coordination of the first nationally funded Big Data project are just a few achievements of his life-long involvement in semantic research. His current interest focuses on Statistical Relational Learning, which combines machine learning with relational data models and first-order logic, and enables machine learning in knowledge bases.
At SEMANTiCS 2016, Volker will deliver his keynote on the timely topic of “Learning with Memory Embeddings and its Application in the Digitalization of Healthcare”.
Q1: Which application areas for semantic technologies do you perceive as most promising?
The most successful and convincing applications currently are knowledge graphs, e.g., the Google Knowledge Graph. The latter contains several tens of billions of triples and is used in search, document understanding, and much more. It demonstrates what is possible if data quality issues can be resolved, if a lot of (manual) effort and care is spent on modeling, and if there are clearly defined applications that drive the development of the semantic basis. Semantic projects often fail if there is no clear use case that drives the development and provides a measurable quality score.
Semantics for its own sake is bound to fail, since in general there is no unique "natural" semantic ground-truth description of a domain. The only clear measure is usefulness in applications. Semantic technologies are great but are no silver bullet: data integration is known to be a complex, difficult and labor-intensive process, which is why many companies provide services here (e.g., in data warehouse development) and make good money. In our work on machine learning with knowledge graphs, we derive embeddings, or latent representations, that describe domain entities and can then be used in applications. This approach is to some degree more robust to issues with modeling quality.
Q2: What is your vision of semantic technologies and artificial intelligence?
I am convinced that artificial intelligence, if ever achieved, will consist of an assembly of subsystems which all solve different tasks and which might act and function quite differently. So there is no "intelligence" emerging out of one single paradigm.
I see semantic knowledge graphs playing important roles in memory functions, such as semantic and episodic memory. Both in the brain and in technical solutions, they can form a basis on top of which other intelligent functionalities, such as planning and logical reasoning, can be developed.
Q3: How do you personally contribute to the advancement of semantic technologies?
With surprisingly simple constructs, semantic technologies supply structure and background knowledge. In line with our prior work, we have always looked for opportunities to apply statistical machine learning in semantics. We developed matrix and tensor decomposition approaches that we and other groups have used successfully for statistical machine learning in semantic knowledge graphs. Currently we are studying how semantic knowledge graphs can be used to model temporal data and how they might form a mathematical basis for cognitive forms of semantic and episodic memory.
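To give a flavor of what such factorization approaches do, here is a minimal toy sketch. It is a hypothetical illustration, not the actual models developed by Tresp's group: it trains DistMult-style latent vectors (a simple bilinear relative of tensor decompositions such as RESCAL) for the entities and relations of a four-entity knowledge graph, so that observed triples come to score higher than random corruptions. All names and hyperparameters are invented for the example.

```python
import numpy as np

# Toy knowledge graph as (subject, predicate, object) index triples.
entities = ["Berlin", "Germany", "Paris", "France"]
relations = ["capital_of"]
triples = [(0, 0, 1), (2, 0, 3)]  # Berlin capital_of Germany, Paris capital_of France

rng = np.random.default_rng(0)
dim = 8
E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

def score(s, p, o):
    """DistMult score: sum_k E[s,k] * R[p,k] * E[o,k]."""
    return float(np.sum(E[s] * R[p] * E[o]))

# SGD on a logistic loss; triples with a randomly corrupted object
# serve as negative examples.
lr = 0.1
for epoch in range(500):
    for s, p, o in triples:
        o_neg = rng.choice([i for i in range(len(entities)) if i != o])
        for obj, label in [(o, 1.0), (o_neg, 0.0)]:
            pred = 1.0 / (1.0 + np.exp(-score(s, p, obj)))
            g = pred - label  # d(loss)/d(score)
            gE_s = g * R[p] * E[obj]
            gR_p = g * E[s] * E[obj]
            gE_o = g * E[s] * R[p]
            E[s] -= lr * gE_s
            R[p] -= lr * gR_p
            E[obj] -= lr * gE_o

print(score(0, 0, 1))  # observed fact, should score clearly positive
print(score(0, 0, 3))  # corrupted fact: "Berlin capital_of France"
```

The learned vectors are the "latent representations" mentioned above: because they are fit to reproduce the observed triples statistically rather than symbolically, they tolerate a degree of noise and modeling imperfection in the underlying graph.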
Q4: How did semantic technology help you discover what your surname Tresp actually meant (dreist, from the village Dreisten)? Did it change your self-perception?
It is a debated question whether the semantics of your name influences your character or your life in general. I only learned recently that "Tresp" might have had its origin in “dreist” (English: brazen, audacious). Do I now feel motivated to adapt my personality accordingly?