Toward Smarter Cybersecurity: Leveraging AI for Software Understanding

This study investigates how artificial intelligence (AI) can enhance offensive cybersecurity through improved software understanding and vulnerability discovery. As software ecosystems grow more complex and interconnected, the ability to analyze systems, particularly those without accessible source code, is increasingly vital. The research reviews current literature on AI-assisted methods, including fuzzing, symbolic execution, and behavioral modeling, drawing on technical publications and guidance from agencies such as CISA, NSA, and DARPA. It identifies key advances as well as the legal and operational barriers that hinder broader adoption, particularly restrictions on using commercial AI platforms for security testing and gaps in system-of-systems analysis. To extend the findings, the paper proposes a conceptual Internet of Things (IoT) cybersecurity lab for evaluating these AI tools in a controlled, research-oriented environment. The study concludes that cross-disciplinary collaboration and stronger governance frameworks are needed to support lawful, scalable AI integration in offensive security. The work contributes to ongoing efforts to align AI innovation with ethical cybersecurity practices.

Alan Stines
Middle Georgia State University
United States
alan.stines@mga.edu