What is machine reasoning?
While machine learning is typically applied to learn complex functions from vast amounts of data, such as classifying images with supervised learning or mastering the game of Go with reinforcement learning, machine reasoning can help us integrate intent into the process.
For humans, learning is the physical process of acquiring knowledge that allows us to structure behaviors, build new skills, and form beliefs.
However, human intelligence is not defined solely by the ability to learn; it is clearly conditioned by knowledge. What we know and what we believe usually determine our decisions. But what gives relevance to what we know, so that it can bear on our decisions and actions? What allows us to adapt and respond in different situations?
It is the power of the mind to represent and reason by adopting an intentional stance towards concepts, things, their properties and connections. It is the power of thinking.
The shortcomings of machine learning
Machine reasoning can help us overcome some of the shortcomings of machine learning.
Machine Learning is able to process large volumes of data and capture the hidden patterns needed to predict outcomes effectively. The algorithms behind it are in a sense deterministic, even in their unsupervised form: they tackle a pre-determined problem with clear inputs and expected outputs.
For many early applications and use cases, this dependence on large volumes of data has not posed a problem, as both the questions and the data were generally available. However, we continuously face situations where there is simply not enough data, or where it is difficult or costly to acquire or move the appropriate datasets, increasing the need for techniques like Federated Learning.
Machine Learning is also less effective when exposed to data outside the distribution it was trained on. This is due to a poor ability to generalize: an inability to re-use or transfer previously acquired experience, for example across problems that we humans would consider only slightly different from the original, or when encountering novel input samples.
To maximize human trust and improve decision quality, there is a need for transparency in machine-driven decision making. Machine Learning is very capable of producing predictions, decisions or state-transition sequences, but these rarely correspond to humanly comprehensible reasoning steps or semantics. By building on top of this base we can further ensure aspects of responsible AI: interpretability, explainability and auditability.
From machine learning to machine reasoning
Continuing what machine learning started, machine reasoning can be seen as an attempt to implement abstract thinking as a computational system.
The technologies considered part of the machine reasoning group are driven by facts and knowledge managed by logic. Domain modelling is used to capture concepts and entities, their relations, and behaviors in a machine-processable form. Symbolic models are difficult to create, requiring both expert understanding of the domain and proficiency in the modelling techniques, but they are usually modular, maintainable and easily interpretable by a human. Due to their declarative nature, symbolic representations lend themselves to re-use across multiple tasks, promoting data efficiency. These representations tend to be high-level and abstract, which facilitates generalization, and their language-like, propositional character makes them amenable to human understanding. One of the main challenges then becomes the effective integration of statistical learning and symbolic reasoning, in ways that allow the strengths of each approach to complement the weaknesses of the other.
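To make the idea of a declarative domain model concrete, here is a minimal sketch in Python: facts and a Horn-style rule stated declaratively, with naive forward chaining deriving what follows from them. The toy domain (network devices) and all names are illustrative assumptions, not part of any particular reasoning system.

```python
# Illustrative toy domain model: ground facts as tuples, plus one
# Horn-style rule. Variables are strings starting with "?".
FACTS = {
    ("isa", "router", "network_device"),
    ("isa", "network_device", "device"),
    ("has_part", "router", "routing_table"),
}

RULES = [
    # isa is transitive: isa(X, Y) and isa(Y, Z) imply isa(X, Z)
    ([("isa", "?x", "?y"), ("isa", "?y", "?z")], ("isa", "?x", "?z")),
]

def unify(pattern, fact, bindings):
    """Match a pattern tuple against a ground fact, extending the bindings."""
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if b.get(p, f) != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def match_all(premises, facts, bindings):
    """Yield every binding that satisfies all premises against the fact set."""
    if not premises:
        yield bindings
        return
    first, rest = premises[0], premises[1:]
    for fact in facts:
        b = unify(first, fact, bindings)
        if b is not None:
            yield from match_all(rest, facts, b)

def forward_chain(facts, rules):
    """Derive all facts that logically follow from the asserted ones."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new_facts = set()
        for premises, conclusion in rules:
            for bindings in match_all(premises, derived, {}):
                new = tuple(bindings.get(t, t) for t in conclusion)
                if new not in derived:
                    new_facts.add(new)
        if new_facts:
            derived |= new_facts
            changed = True
    return derived

closure = forward_chain(FACTS, RULES)
print(("isa", "router", "device") in closure)  # True: inferred, not asserted
```

Note how the rule, not the code, carries the domain knowledge: swapping in a different fact base or rule set re-uses the same machinery, which is the data-efficiency and modularity argument made above.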
Interpreting and using domain models by machine is characteristic of machine reasoning technologies. The models are associated with mathematical semantics and algorithms, for example computing all facts that logically follow from those already asserted but are not explicitly stated. A reasoner that uses the domain model as a guide to find an optimal path (with respect to some metric) between any two given states is called a planner.
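A planner in this sense can be sketched as search over the state graph induced by the domain model. The minimal example below uses breadth-first search, so the returned plan is optimal with respect to step count; the STRIPS-style action format and the toy tea-making domain are illustrative assumptions.

```python
from collections import deque

# Illustrative toy domain: states are frozensets of propositions; each
# action is (name, preconditions, facts added, facts deleted), STRIPS-style.
ACTIONS = [
    ("boil_water", {"have_kettle"}, {"hot_water"}, set()),
    ("add_tea",    {"hot_water"},   {"tea_ready"}, {"hot_water"}),
]

def plan(initial, goal, actions):
    """Breadth-first search: return a shortest action sequence to the goal."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # all goal propositions hold
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:       # action is applicable in this state
                nxt = frozenset((state - delete) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                    # goal unreachable from the initial state

print(plan({"have_kettle"}, {"tea_ready"}, ACTIONS))
# → ['boil_water', 'add_tea']
```

As in the fact-derivation case, the domain model (the action descriptions) is declarative data: the same search procedure plans over any domain expressed in this form.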