Date: November 21, 2024
Time: 12:00 PM to 01:00 PM
Location: Virtual
Toward Trustworthy AI/ML in 6G Networks through Explainable Reasoning
This talk emphasizes the importance of trustworthy Artificial Intelligence (AI) in 6G networks in response to growing global attention to AI governance. Notable initiatives, such as the White House's Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, DARPA's Assured Neuro-Symbolic Learning and Reasoning and eXplainable AI (XAI) programs, and the European Union's AI Act, highlight the increasing regulatory focus on AI transparency and responsibility. As 6G networks transition from AI-native to automation-native, explainability and trustworthiness become critical, especially in mission-critical and high-stakes applications. Traditional post-hoc explainability methods, which aim to explain AI decisions after they are made, are no longer adequate in complex network environments. Instead, in-hoc explainability, or explanation-guided learning, in which explanations guide the learning process itself, is emerging as a crucial approach for establishing trust in AI systems from the ground up. Integrating explanatory mechanisms directly within AI learning models enables transparent decisions and enhances learning. Furthermore, incorporating neuro-symbolic approaches, which combine neural networks with symbolic reasoning, provides a robust framework for tackling the increasing complexity of 6G networks. Together, these approaches enable AI systems to make more explainable, contextually guided decisions, boosting trust and performance while mitigating the risks associated with black-box AI models.
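For readers unfamiliar with the in-hoc idea, the short sketch below gives one rough illustration. It assumes PyTorch, a toy network-KPI classifier, and a hypothetical feature-relevance mask supplied by a domain expert (none of these come from the talk itself), and it penalizes input-gradient attributions on features marked irrelevant so that explanations shape training rather than being produced only after the fact. It is a minimal sketch of one explanation-guided technique, not the speaker's specific method.

# Minimal sketch of explanation-guided (in-hoc) training, assuming PyTorch.
# The classifier, data, and "relevant" mask are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Toy batch: 8 samples with 16 KPI features; labels are random for illustration.
x = torch.randn(8, 16, requires_grad=True)
y = torch.randint(0, 2, (8,))
# Hypothetical expert prior: only the first 4 features should drive decisions.
relevant = torch.zeros(16)
relevant[:4] = 1.0

for step in range(100):
    optimizer.zero_grad()
    logits = model(x)
    task_loss = ce(logits, y)

    # Input-gradient attributions of the class scores with respect to the inputs.
    grads = torch.autograd.grad(logits.sum(), x, create_graph=True)[0]

    # Explanation penalty: discourage attribution mass on irrelevant features.
    expl_loss = (grads * (1.0 - relevant)).pow(2).mean()

    # The explanation term guides learning alongside the task objective.
    (task_loss + 0.1 * expl_loss).backward()
    optimizer.step()

In this kind of setup, the weight on the explanation term (0.1 here, an arbitrary choice) controls how strongly prior knowledge about relevant features constrains the model during training.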
Learn more and register here:
https://events.vtools.ieee.org/m/443359