ML2R became the Lamarr Institute.
The ML2R scientists conduct research on Machine Learning (ML) solutions and thus develop the technological foundations for Artificial Intelligence. We view ML as a service of the future that is oriented toward people and earns the trust of its users. Our solutions, based on both data and knowledge, deliver reliable and verifiable results. We design ML applications that meet the highest quality standards, conserve resources, and run on a wide variety of platforms, from mobile devices to quantum computers.
Trustworthy Machine Learning
Trustworthy Machine Learning aims to utilize robust and verifiable methods to increase trust in ML and AI applications and allow for the certification of these technologies.
Machine Learning (ML) and Artificial Intelligence (AI) applications are increasingly becoming an integral part of our everyday lives. In the process, the technologies are also affecting areas that directly impact people’s lives and health, such as medicine, autonomous driving, or human resources management. Especially in these fields of application, it is necessary that decisions are fair and free of any bias, that the procedures are robust, and that the results are verifiable.
What is Trustworthy Machine Learning?
With their research on trustworthy Machine Learning, ML2R scientists aim to strengthen trust in ML and AI applications. The basis of this work is the development of robust and verifiable ML methods. In addition, the researchers are working on the design of modular and hybrid ML applications that enable the use of reusable, standardized ML components.
To this end, researchers are taking the following approaches:
- Ensuring explainability of AI and ML applications by making operating principles and results interpretable for human experts.
- Increasing the transparency of technologies to make them safe and reliable for end users who do not have scientific expertise.
- Enabling AI safety and certification by testing and evaluating the entire AI process – from the selection of training data to its use in practice.
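The first of these approaches can be illustrated with a deliberately simple case: a linear model is interpretable by design, because each learned weight directly states how strongly a feature influences the prediction. The sketch below is our own illustration with generic feature names and simulated data, not ML2R code:

```python
import numpy as np

# Illustrative only: fit a linear model to simulated data and read off
# its weights. Each coefficient is directly interpretable by a human
# expert as the strength and direction of a feature's influence.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                # three generic input features
true_w = np.array([2.0, -1.0, 0.0])          # ground truth used to simulate data
y = X @ true_w + 0.1 * rng.normal(size=200)  # noisy targets

w, *_ = np.linalg.lstsq(X, y, rcond=None)    # ordinary least-squares fit

for name, coef in zip(["feature_a", "feature_b", "feature_c"], w):
    print(f"{name}: {coef:+.2f}")            # signs and magnitudes explain the model
```

The recovered weights closely match the ground truth, so a domain expert can verify at a glance that feature_a drives the prediction up, feature_b drives it down, and feature_c is irrelevant.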
Measuring Trustworthiness of an AI Application
An additional approach is the development of evaluation and quality criteria for AI. Criteria for compliance with ethical standards, data protection, and privacy play a central role here. Mathematical methods and proofs are used to make quantitative statements about quality, to specify quality criteria for ML methods, and to describe the accuracy of the results.
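As a minimal illustration of such a quantitative quality statement (a standard textbook result, not a specific ML2R method), a Hoeffding bound turns a measured test accuracy into a probabilistic guarantee on the true accuracy:

```python
import math

# Illustrative sketch: with probability at least 1 - delta, the true
# accuracy is at least the empirical accuracy minus sqrt(ln(1/delta) / (2n)).
def accuracy_lower_bound(correct: int, n: int, delta: float = 0.05) -> float:
    empirical = correct / n
    margin = math.sqrt(math.log(1.0 / delta) / (2 * n))
    return empirical - margin

# 940 of 1,000 test samples classified correctly:
print(accuracy_lower_bound(940, 1000))
```

For 940 correct predictions out of 1,000 test samples, the bound certifies an accuracy of at least about 0.90 at the 95% confidence level, which is the kind of provable statement certification requires.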
Supporting End Users
The researchers at ML2R also have those users in mind who would like to use AI without having to delve deeply into the mathematical basics of ML. To this end, ML2R scientists are working on forms of representation that contain the essential information of a Machine Learning method in a compact and comprehensible way, similar to the package insert of a drug or the care instructions for an item of clothing: What application scenarios is the AI intended for? What requirements are necessary and how accurate are the expected results?
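Such a "package insert" could, for instance, be a small structured record answering exactly these questions. The sketch below is purely illustrative; the field names and the example model are our own assumptions, not an ML2R standard:

```python
from dataclasses import dataclass, field

# Hypothetical "package insert" for an ML method; all fields are our
# own illustrative choices.
@dataclass
class ModelFactSheet:
    name: str
    intended_use: str                    # which application scenarios?
    requirements: list = field(default_factory=list)   # what inputs are needed?
    expected_accuracy: str = "unknown"                 # how accurate are results?
    not_intended_for: list = field(default_factory=list)

sheet = ModelFactSheet(
    name="DeliveryDelayPredictor",
    intended_use="Estimating parcel delivery delays from traffic and weather data",
    requirements=["GPS position of vehicle", "current weather feed"],
    expected_accuracy="mean absolute error below 15 minutes on held-out data",
    not_intended_for=["safety-critical routing decisions"],
)
print(sheet.name, "-", sheet.intended_use)
```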
Highlight
Hybrid Machine Learning
Hybrid ML integrates knowledge from multiple sources into learning systems to produce reliable results even with small or uncertain data sets.
ML2R researchers are working on methods to make heterogeneous data and complex knowledge usable for Machine Learning. To this end, they are investigating methods to process knowledge in a uniform manner and to uncover structures in data. In addition, data-based forms of ML, such as Deep Learning using Artificial Neural Networks, are linked with other forms of learning that are grounded in explicit and logical knowledge.
What is hybrid Machine Learning?
Hybrid ML aims to address both practical and fundamental problems of Machine Learning and Artificial Intelligence. The hybrid approach refers to combining classical, data-based ML methods with knowledge-based methods. ML applications that consist of flexibly deployable, reusable modules offer great opportunities to combine different methods. In developing hybrid ML solutions, researchers draw not only on concepts from computer science, but also on knowledge and methods from other disciplines, such as mathematics, statistical physics, linguistics, and psychology.
What is the Research Focus?
Research on hybrid ML can be divided into three main sections:
- Informed Machine Learning incorporates data-independent domain knowledge into training algorithms or model classes to improve training time and generalization in real-world applications.
- Representation Learning aims to uncover and explain semantically meaningful and causal factors of variations in data. This makes it possible to supplement or replace missing or insufficient training data with the help of machine-generated data.
- Theoretical Machine Learning is concerned with the fundamental investigation of ML algorithms and learning procedures. It thus provides the theoretical basis for statements on interpretability, precision of results and validity ranges.
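The first of these directions, Informed Machine Learning, can be illustrated with a toy example: prior physical knowledge (free fall follows s = 0.5·g·t² with g ≈ 9.81 m/s²) is built into a least-squares estimator as a regularizer, so that even three noisy measurements yield a sensible model. The measurement data and the regularization strength below are invented for illustration:

```python
import numpy as np

# Toy informed-ML sketch: estimate w in s = w * t**2 from few noisy
# measurements, shrinking the estimate toward the known physics value.
t = np.array([0.5, 1.0, 1.5])        # measurement times in seconds (invented)
s = np.array([1.9, 4.2, 12.5])       # noisy fall distances in meters (invented)

w_prior = 0.5 * 9.81                 # domain knowledge: w should be near g/2

def fit_w(lmbda):
    """Closed-form ridge-style estimate of w, pulled toward the
    physics prior with strength lmbda (lmbda = 0: data only)."""
    return (np.sum(t**2 * s) + lmbda * w_prior) / (np.sum(t**4) + lmbda)

print("data only     :", fit_w(0.0))
print("data + physics:", fit_w(10.0))
```

With the physics prior, the estimate moves from the data-only value toward the known coefficient g/2 ≈ 4.905, which is exactly the effect informed ML exploits when training data are scarce or noisy.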
In Practice: Logistics
ML methods often work particularly well when they can learn from large amounts of data. The situation is very different when only a few data points, or none at all, are available. In logistics, for example, weather conditions, roadworks, or accidents can cause exceptional situations for which no sufficiently large data sets exist.
This example also illustrates another challenge of ML: data and knowledge stem from different sources and come in diverging formats. Containers used to transport goods are increasingly equipped with sensors that transmit information about their current position as well as the pressure and temperature inside. This sensor data must then be linked with knowledge from other information sources, such as traffic planning systems and weather data. Sometimes knowledge is available only in the form of the experience or intuition of human experts. For applications in logistics, systems therefore have to be designed so that they make complex relationships in logistics planning transparent to humans while at the same time making optimal use of the potential of ML.
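The fusion of sensor data, external knowledge sources, and codified expert experience described here can be sketched in a few lines. All records, field names, and rules below are invented for illustration:

```python
# Invented example data: container sensor readings from one source...
sensor_readings = [
    {"time": "2024-05-01T08:00", "container": "C-17", "temp_c": 6.2, "pos": "Dortmund"},
    {"time": "2024-05-01T09:00", "container": "C-17", "temp_c": 7.9, "pos": "A1 km 42"},
]
# ...weather knowledge from a second source, keyed by the same timestamps...
weather = {
    "2024-05-01T08:00": {"condition": "rain"},
    "2024-05-01T09:00": {"condition": "storm warning"},
}
# ...and human expert experience codified as explicit rules.
expert_rules = {"storm warning": "reroute via A2"}

for reading in sensor_readings:
    condition = weather.get(reading["time"], {}).get("condition", "unknown")
    action = expert_rules.get(condition, "continue as planned")
    print(reading["container"], reading["pos"], condition, "->", action)
```

Even this toy version shows the core difficulty: the three sources must agree on keys and formats before any learning system can use them together.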
Highlight
Resource-Aware Machine Learning
Resource-aware ML enables computations to be performed reliably using Machine Learning, even on small devices such as smartphones or directly within sensors.
The developers at ML2R are working on making Machine Learning available even on devices with restricted computing power and limited energy and memory resources. For safety-critical applications in particular, ML methods must be designed in such a way that data can be reliably processed within a given time frame. In addition, the researchers are optimizing ML applications to require as little energy and computing resources as possible.
What is Resource-Aware Machine Learning?
Resource-aware ML provides methods that minimize the consumption of energy, storage space, and computing capacity. These resource-aware ML methods are able to process large amounts of data efficiently and quickly without compromising on the quality of the results. With the growing heterogeneity of modern computer architectures, it is also becoming increasingly important that ML procedures are adapted to specific execution platforms and optimized for them.
In which Fields of Application is Resource Efficiency particularly in Demand?
Modern, automated production plants generate large amounts of data alongside their actual output, for example operating data from machines or measurement results from sensors. Evaluating this large body of data holds enormous potential for optimization: processes can be accelerated, quality improved, and machines maintained predictively. In the past, powerful computing centers were often used for such Big Data applications.
However, the increasing linkage of physical and virtual worlds in the “Internet of Things” (IoT) requires decentralized data processing. Particularly for applications in Industry 4.0 and logistics, it is not always possible – for example, due to a lack of bandwidth for data transmission – to send all data to a central computer. Instead, the devices that record the data, such as sensors, must already conduct parts of the processing and analysis.
Limiting the Consumption of Energy, Storage and Computing Capacity
A first step in the development of resource-efficient ML methods is the analysis of resource requirements. The properties of the algorithms used must be considered as well as the specific implementations on different computer architectures.
A further approach is to develop hardware and software optimized for specific learning tasks. FPGAs (Field Programmable Gate Arrays), which can be used as energy-efficient, flexibly configurable circuits in various application scenarios, are particularly suitable for IoT applications.
Finally, one research approach is to simplify ML algorithms so that they can be executed with less memory and computing capacity. This often involves approximation methods, which must be theoretically well-founded and accompanied by specified error bounds.
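A classic example of such an approximation with a provable error bound is uniform 8-bit quantization of model weights: rounding each weight to a grid with step size q guarantees a per-weight error of at most q/2. The sketch below (our illustration with random stand-in weights) checks that bound empirically:

```python
import numpy as np

# Illustrative stand-in for trained model weights.
rng = np.random.default_rng(0)
weights = rng.normal(size=1000)

lo, hi = weights.min(), weights.max()
q = (hi - lo) / 255                           # step size of the 8-bit grid
codes = np.round((weights - lo) / q).astype(np.uint8)   # 8x smaller than float64
restored = codes.astype(np.float64) * q + lo  # approximate reconstruction

max_err = np.abs(weights - restored).max()
print(f"guaranteed bound: {q / 2:.5f}, observed max error: {max_err:.5f}")
```

The observed maximum error never exceeds the theoretical bound q/2, so the saving in memory comes with a precise, certifiable statement about the loss in accuracy.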
Highlight
Machine Learning on Quantum Computers
Quantum computers can compute a variety of solution paths in parallel and thus have the potential to process information faster and handle more complex tasks than classical digital computers.
Classical computers reach their limits time and again. This is particularly evident in applications from the fields of Big Data and Artificial Intelligence, when it is a matter of processing large amounts of data, calculating many different solution paths or selecting the optimal result from a great number of options. Today, such tasks can often only be accomplished with a tremendous amount of time and computing effort, and some are even practically unsolvable because the calculations would take years.
How does Machine Learning work on Quantum Computers?
Quantum computers have the potential to overcome the fundamental limitations of classical computers. Today, quantum computing is more than just a theoretical concept. After considerable investments in research and development, the first technical implementations of quantum computers exist, which will also be available to researchers at ML2R in the future.
Using Quantum Effects for Faster Calculations
Quantum computers use quantum effects such as superposition or entanglement for information processing and can thus, in principle, deliver results faster. While a digital computer computes with bits, a quantum computer works with qubits, which, unlike classical bits, can assume not only one of two possible states, but also any superposition of the two.
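The effect of superposition can be reproduced with a tiny state-vector simulation (standard textbook material, not ML2R code): a Hadamard gate maps a qubit starting in the state |0> to an equal superposition of |0> and |1>:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                            # basis state |0>
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                   # superposition (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2         # Born rule: measurement probabilities

print("amplitudes:", state)        # both amplitudes are 1/sqrt(2) ≈ 0.707
print("P(0), P(1):", probs)        # both 0.5: the qubit holds both states at once
```

A classical bit must commit to 0 or 1; the simulated qubit carries both possibilities simultaneously, which is the property quantum algorithms exploit to follow multiple solution paths in parallel.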
What are Current Research Questions?
Research projects at ML2R address the question of how these quantum effects can best be used for Machine Learning. It has already been shown that Machine Learning methods can be adapted for quantum computers in such a way that multiple solution paths are followed simultaneously. Some quantum algorithm-based methods are significantly faster than classical computations. Even a single quantum computer can thus find solutions faster than several classical computers operating in a cluster.