Introduction
In modern software development, Static Application Security Testing (SAST) plays a vital role in identifying vulnerabilities before code ever runs. By examining source code, bytecode, or binaries, SAST tools help developers uncover logic flaws, insecure practices, and coding errors early in the lifecycle—long before they can become production-level security risks. These tools have long been trusted allies in secure software engineering, forming a cornerstone of DevSecOps pipelines.
However, the traditional SAST approach is evolving. The emergence of AI-driven analysis—especially through large language models (LLMs)—has opened new opportunities for automated reasoning about code. Unlike pattern-based scanners, LLMs can understand context, infer developer intent, and detect subtle issues that would otherwise go unnoticed. This blend of static analysis and artificial intelligence promises smarter, more adaptable tools capable of learning from real-world coding practices rather than relying solely on rigid rule sets.
Yet, while the capabilities of cloud-based AI services are impressive, they come at a cost—both financial and operational. Continuous remote inference on proprietary or sensitive code can be expensive, slow, and risky from a data privacy standpoint. For many teams, sending large codebases to external APIs simply isn’t an option. The need for local, cost-efficient, and privacy-preserving AI analysis is becoming increasingly clear.
That’s where tools like AeyeGuard_cmd come in. Designed as a simple yet reliable and capable static code analyzer, AeyeGuard_cmd harnesses the power of a local LLM to perform deep, context-aware analysis without ever exposing your code to external servers. It combines the transparency and control of traditional SAST with the intelligence of modern AI—offering developers a practical way to bring the benefits of language models into their secure development environments.
Structure of the solution
The architecture of AeyeGuard_cmd reflects its experimental nature and the evolution of its design goals. At the foundation lies the main program, AeyeGuard_cmd.py, which serves as the central coordinator. When executed, it takes as input the path of a target directory and recursively navigates through its structure. For each file encountered, it checks the extension to verify whether the language is supported and, if so, delegates the task to a dedicated, language-specific analyzer.
Each supported language is handled by its own specialized module:
AeyeGuard_cs.py for C#, AeyeGuard_java.py for Java, and AeyeGuard_react.py for React frontend files written in TSX or JSX.
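To make the dispatch concrete, here is a minimal sketch of how such a coordinator can map extensions to analyzers. The table and function names are illustrative, not the actual internals of AeyeGuard_cmd.py:

import os
import subprocess
import sys

# Hypothetical extension-to-analyzer table; the real mapping lives
# inside AeyeGuard_cmd.py and may be structured differently.
ANALYZERS = {
    ".cs": "AeyeGuard_cs.py",
    ".java": "AeyeGuard_java.py",
    ".tsx": "AeyeGuard_react.py",
    ".jsx": "AeyeGuard_react.py",
}

def analyze_directory(root: str) -> None:
    # Recursively walk the target directory.
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            # Check the extension to see whether the language is supported.
            analyzer = ANALYZERS.get(os.path.splitext(name)[1].lower())
            if analyzer is None:
                continue  # unsupported language, skip the file
            # Delegate the file to its dedicated, language-specific analyzer.
            subprocess.run([sys.executable, analyzer, os.path.join(dirpath, name)])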
This architecture originally emerged from the initial idea of analyzing single files rather than entire projects. As the concept evolved, the modular approach was retained—resulting in separate programs that cooperate through a simple orchestration layer. While this structure can introduce a small degree of overhead, it offers significant advantages: a complete decoupling between the analyzers, clear separation of logic, and the flexibility to run each analyzer independently when needed. This makes AeyeGuard_cmd both an experimental playground and a practical framework for exploring LLM-assisted static analysis across multiple programming languages.
Development approach
The development of AeyeGuard_cmd followed an iterative and exploratory process that blended human guidance with AI-assisted coding. Most of the implementation was generated using Claude Code, while I focused on designing and maintaining the specification documents stored in the docs folder. These specifications served as the foundation for a structured workflow: I provided Claude with detailed prompts instructing it to follow the written requirements precisely while generating the code modules.
Once the code was produced, each component underwent a line-by-line review. Every function and logic block was carefully verified, refined, and, when necessary, manually adjusted or re-generated through targeted prompts. This hands-on revision ensured that the resulting code remained both correct and maintainable. The final phase involved real-world testing, running the analyzers on diverse codebases to validate their reliability, optimize prompt effectiveness, and uncover practical edge cases.
This hybrid approach—where human intent defines structure and AI accelerates implementation—proved highly effective for rapid prototyping. It allowed the project to evolve naturally from conceptual design to a working experimental tool, balancing the creativity of AI-assisted development with the precision of manual engineering.
Local LLMs
AeyeGuard_cmd is designed to interact seamlessly with LM Studio through the LangChain framework, which handles prompt orchestration and model communication. The decision to rely on a local LLM rather than a cloud-based one stems primarily from the goal of maintaining zero operational cost and full data privacy. By keeping all inference processes on the local machine, developers can run advanced static analysis without network latency, API fees, or the risk of exposing proprietary source code to external services.
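As a rough illustration, wiring LangChain to LM Studio can look like the following. This is a minimal sketch assuming LM Studio's default OpenAI-compatible endpoint at http://localhost:1234/v1; the prompt and model identifier are placeholders, not AeyeGuard_cmd's actual prompts:

from langchain_openai import ChatOpenAI

# LM Studio serves an OpenAI-compatible API on the local machine;
# no real key is needed, but the client expects a non-empty value.
llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",
    model="qwen/qwen3-coder-30b",  # placeholder identifier
    temperature=0,  # deterministic output suits static analysis
)

# Inference never leaves the machine: no API fees, no code exposure.
response = llm.invoke("Review this Java method for security flaws: ...")
print(response.content)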
In practical testing, the tool has been evaluated on real-world projects using the Qwen/Qwen3-Coder-30B model, which currently serves as the default configuration. However, flexibility remains a key design principle: by simply providing the --model option at runtime, users can specify any other model available in LM Studio. This makes AeyeGuard_cmd adaptable to a wide range of hardware setups and model preferences, while still preserving the same reliable analysis workflow.
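For example, pointing the analysis at a different locally hosted model is a single flag (the model name below is purely illustrative):

python AeyeGuard_cmd.py /path/to/codebase --model <your-local-model-name>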
Program execution
To run the program, first create a Python virtual environment, then install the dependencies by running
pip install -r requirements.txt
and finally execute the static analysis of your codebase by running
python AeyeGuard_cmd.py /path/to/codebase
To see all available command line options, consult the README.md file.
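For reference, the full sequence on Linux or macOS, including the virtual environment setup mentioned above, typically looks like this (venv ships with Python; the activation command varies by shell):

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python AeyeGuard_cmd.py /path/to/codebase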
To facilitate experimentation and validation, AeyeGuard_cmd includes a dedicated tests folder containing a shell script named test_multilang.sh. This script automates the analysis of a set of vulnerable example code samples across multiple programming languages, demonstrating how the tool operates in realistic scenarios. By running the script, users can observe the complete output of the analyzers, including detected issues, reasoning traces, and the interaction flow between the main program and the specialized language modules.
This simple yet effective testing setup allows developers to evaluate the behavior of the analyzers without additional configuration, making it an ideal starting point for understanding AeyeGuard_cmd’s capabilities and validating its integration with the local LLM environment.
To run this test, execute the following commands:
cd tests
./test_multilang.sh
Download of the complete code
The complete code is available on GitHub.
These materials are distributed under the MIT license; feel free to use, share, fork, and adapt them as you see fit.
Please also feel free to submit pull requests and bug reports to the GitHub repository, or contact me through the social media channels listed on the contact page.
FAQ
What is AeyeGuard_cmd and what makes it different from other static analyzers?
AeyeGuard_cmd is an experimental static code analyzer that leverages a local large language model (LLM) through LangChain and LM Studio. Unlike traditional SAST tools that rely on rule-based scanning, it uses AI reasoning to detect potential issues in the code. Its main distinction lies in running completely offline, offering privacy, zero API costs, and full control over the analysis environment.
Which programming languages does AeyeGuard_cmd currently support?
The current version supports C#, Java, and React (TSX/JSX). Each language has its own specialized analyzer — AeyeGuard_cs.py, AeyeGuard_java.py, and AeyeGuard_react.py — managed by the central program AeyeGuard_cmd.py. The modular design also makes it easy to add support for new languages in future versions.
Why does AeyeGuard_cmd use a local LLM instead of a cloud-based one?
The choice of a local LLM ensures that all analysis happens securely on the developer’s machine, without sending proprietary or sensitive code to external servers. It also eliminates ongoing API usage costs and avoids the latency typical of remote inference, providing both economic and practical benefits.
Which model is used by default, and can I change it?
By default, AeyeGuard_cmd is configured to use the Qwen/Qwen3-Coder-30B model via LM Studio. However, users can easily switch to any locally hosted model by specifying it with the --model option when running the program. This flexibility allows for performance tuning and compatibility with different hardware setups.
How can I test AeyeGuard_cmd and see its output in action?
Inside the tests folder, you’ll find a script named test_multilang.sh. Running this script executes a series of analyses on vulnerable example code, demonstrating how the tool detects and reports issues across different languages. It’s a great way to understand the tool’s workflow and evaluate its real-world performance.
