NEURO-SYMBOLIC AI


Integrating deep learning and symbolic structures. Alan Turing Institute Interest Group.

ABOUT

This interest group, which currently organises virtual seminars, has the following aims:

Identify foundations of neuro-symbolic AI (logical semantics, embedding techniques, correctness, robustness, generalisability, transferability)

Explore and survey methods for integrating learning and reasoning

Identify applications in robotics and commonsense reasoning

Survey languages and implemented tools

Organize workshops with academic and industry leaders

Better articulate how neuro-symbolic AI fits the broader goals of AI


CONTACT

@AAMAS


SCIENCE


AI has vast potential, some of which has been realised through developments in deep learning. However, it has become clear that these approaches have reached an impasse: such “sub-symbolic” or “neuro-inspired” techniques work well only for certain classes of problems and are generally opaque to both analysis and understanding. “Symbolic” AI techniques, by contrast, based on rules, logic and reasoning, are not as efficient as “sub-symbolic” approaches but behave far better in terms of transparency, explainability, verifiability and, indeed, trustworthiness. A new direction, “neuro-symbolic” AI, has been proposed to combine the efficiency of “sub-symbolic” AI with the transparency of “symbolic” AI. This combination could provide a new wave of AI tools and systems that are both interpretable and elaboration-tolerant, and that integrate reasoning and learning in a very general way.
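
As a rough illustration of what such an integration can look like (a minimal Python sketch, not drawn from any particular tool, with all names hypothetical): a stand-in “neural” component emits soft class probabilities, and a symbolic rule layer turns them into facts and forward-chains explicit rules over them, so that every conclusion carries an inspectable derivation.

# Minimal neuro-symbolic sketch (illustrative only; all names hypothetical).
# A stand-in "neural" perception module emits class probabilities; a symbolic
# rule layer thresholds them into facts and applies explicit if-then rules,
# so each conclusion comes with a human-readable derivation.

def neural_perception(frame_id):
    """Stand-in for a trained network: returns label -> probability."""
    # In a real system this would be a network's softmax output.
    return {"red_light": 0.93, "pedestrian": 0.12, "green_light": 0.04}

RULES = [
    # (premises, conclusion) -- a tiny propositional rule base.
    (("red_light",), "must_stop"),
    (("pedestrian",), "must_stop"),
    (("green_light", "clear_road"), "may_proceed"),
]

def symbolic_reasoner(probs, threshold=0.5):
    """Turn soft outputs into facts, then forward-chain RULES to a fixed point."""
    facts = {label for label, p in probs.items() if p >= threshold}
    derivations = []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                derivations.append(f"{' & '.join(premises)} -> {conclusion}")
                changed = True
    return facts, derivations

facts, why = symbolic_reasoner(neural_perception("frame_001"))
print("facts:", sorted(facts))   # ['must_stop', 'red_light']
print("explanation:", why)       # each inference step is inspectable

The sub-symbolic part carries the perceptual load; the rule layer is what makes the decision interpretable, which is precisely the division of labour described above.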


This approach provides a bridge between low-level, data-intensive perception and high-level, logical reasoning, and promises a future generation of AI tools that are not only efficient but also transparent, reliable and trustworthy. It offers an opportunity to step beyond the current orthodoxy of “data-driven” machine learning and to develop a hybrid approach that is far more acceptable to the public (since transparency and explainability are straightforward to provide), to regulators (since verifiability and assurance are both viable within the “symbolic” components) and to industry (since it can help move practical AI and autonomous systems out of their current “dead end” towards broader and more sophisticated applicability). Without a step change in the way AI systems are devised, not only AI tools but also “driverless” cars, domestic robots, and robots deployed in remote environments will continue to under-deliver, and without hybrid approaches of this kind that step change will remain unlikely.
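
To make the verifiability point concrete, the same kind of toy rule base can be checked exhaustively: because the symbolic layer is a finite set of explicit rules over a finite vocabulary, a safety property can be established by enumerating every possible input, something that is not tractable for a raw network. Again, this is a hypothetical sketch, not an endorsed method.

# Hypothetical verification sketch: the symbolic layer is a finite rule base
# over a finite vocabulary, so safety properties can be checked by exhaustive
# enumeration of all 2^|VOCAB| possible perceptual inputs.
from itertools import chain, combinations

VOCAB = ["red_light", "green_light", "pedestrian", "clear_road"]
RULES = [
    (("red_light",), "must_stop"),
    (("pedestrian",), "must_stop"),
    (("green_light", "clear_road"), "may_proceed"),
]

def derive(facts):
    """Forward-chain RULES to a fixed point over an initial fact set."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

def verify(prop):
    """Check a property on every possible combination of perceived facts."""
    worlds = chain.from_iterable(
        combinations(VOCAB, r) for r in range(len(VOCAB) + 1))
    return all(prop(derive(w)) for w in worlds)

# Safety property: whenever a red light is perceived, the system must stop.
print(verify(lambda f: "red_light" not in f or "must_stop" in f))  # True

A check of this kind is feasible precisely because the symbolic component is explicit and finite; it is this property that makes verifiability and assurance viable for the “symbolic” side of a hybrid system.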




