Improving cybersecurity through explainable artificial intelligence: a systematic literature review

The rapid adoption of artificial intelligence (AI) in cybersecurity has introduced critical challenges in interpretability, trust, and regulatory compliance. This systematic literature review examines how Explainable AI (XAI) bridges the gap between advanced threat detection and human understanding by enhancing the transparency of AI-driven security systems. The study synthesizes research across five key domains: technical foundations of XAI, human-AI collaboration, regulatory compliance, adversarial robustness, and scalability. Findings reveal that XAI techniques such as Shapley Additive Explanations (SHAP) and attention mechanisms improve analysts' trust and decision-making while mitigating bias and supporting compliance with legal mandates such as the General Data Protection Regulation (GDPR). However, trade-offs between explainability and performance persist, necessitating future work on real-time XAI and standardized evaluation metrics. The review underscores XAI's transformative potential in fostering resilient, accountable cybersecurity frameworks.
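
To make the SHAP finding concrete, the minimal sketch below (an illustration under assumed conditions, not a method drawn from any reviewed study) shows how an analyst might attribute an intrusion-detection model's alerts to individual network-flow features using the open-source shap library; the feature names and synthetic data are hypothetical placeholders.

```python
# Minimal, hypothetical sketch: explaining a threat-detection classifier with SHAP.
# Assumes scikit-learn and the shap package are installed; the data and feature
# names are synthetic placeholders, not taken from the reviewed literature.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic "network flow" data: label 0 = benign, 1 = malicious (placeholders).
feature_names = ["duration", "bytes_sent", "bytes_received",
                 "failed_logins", "distinct_ports", "packet_rate"]
X, y = make_classification(n_samples=2000, n_features=len(feature_names),
                           n_informative=4, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes per-feature Shapley-value attributions for each sample.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:200])

# Depending on the shap version, tree models return either a list (one array per
# class) or one array with a trailing class axis; keep the "malicious" class.
if isinstance(shap_values, list):
    shap_values = shap_values[1]
elif shap_values.ndim == 3:
    shap_values = shap_values[..., 1]

# Rank features by mean absolute attribution: the kind of global summary an
# analyst could consult to see why the model tends to raise alerts.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name:>15}: {score:.4f}")
```

In practice, such a global ranking would typically accompany per-alert explanations (for example, SHAP force or waterfall plots) rather than replace them.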

Shadrack Oriaro
Robert Morris University
United States
soost77@mail.rmu.edu


Sushma Mishra
Robert Morris University
United States
mishra@rmu.edu