Harnessing AI for DevSecOps: Is AI the Future?

“Perhaps at some point, AI law will become a well-developed area, but right now it is the Wild West. It changes almost every week, literally both on the legal side and the technical side.”

Van Lindberg, Attorney and CEO of OPOSCO

In an ideal world, DevSecOps teams leverage technology to unite development and security. Artificial Intelligence (AI) plays an increasingly pivotal role in DevSecOps by automating complex tasks, improving security, and boosting efficiency; a recent report from GitHub showed that developers can complete tasks more than 50% faster using an AI tool. In this blog post, we will explore the uses of AI in DevSecOps and how it can help identify vulnerabilities, automate remediation scripts for cloud misconfigurations, and enhance security across various stages of the development lifecycle. We will also discuss the pitfalls of AI and how overdependence on AI tools can lead to significant errors.

Primary Uses of AI in DevSecOps

Identifying Vulnerabilities in Code and Generating Exploits

The heart of DevSecOps is code security. This spans multiple aspects of the software development lifecycle, including environment hardening, code deployment, and pipeline security. AI has the potential to enhance code security by identifying vulnerabilities and even attempting to generate exploits to verify them. Tools like ChatGPT are making great strides in this domain.

ChatGPT is a solid AI tool for identifying vulnerabilities in code. Developers can describe or paste their code, and ChatGPT can point out potential vulnerabilities, offer remediation suggestions, and recommend best practices. While it doesn't generate exploits on its own, it helps bridge the gap between developers and security experts, streamlining the security review process.
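
As a concrete illustration, the sketch below sends a deliberately vulnerable snippet to OpenAI's chat completions API and asks for a security review. This is a minimal sketch, assuming the openai Python package (version 1.x) is installed and an API key is configured; the model name, prompt wording, and sample snippet are placeholders you would adapt.

```python
# Minimal sketch: ask an LLM to review a code snippet for vulnerabilities.
# Assumes the openai package (>=1.0) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = '''
def get_user(conn, username):
    cursor = conn.cursor()
    # String formatting inside SQL is a classic injection risk
    cursor.execute("SELECT * FROM users WHERE name = '%s'" % username)
    return cursor.fetchone()
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; use one available to you
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. List likely vulnerabilities, "
                    "their severity, and concrete remediation steps."},
        {"role": "user", "content": f"Review this Python code:\n{SNIPPET}"},
    ],
)

print(response.choices[0].message.content)
```

Treat the model's findings as input to a human-led review, not a verdict: LLMs both miss real issues and flag issues that are not exploitable.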

ChatGPT is built on a Generative Pre-trained Transformer (GPT), a model that learns patterns in sequences of text from a very large training corpus and uses those patterns to generate its responses.

BERT, or Bidirectional Encoder Representations from Transformers, is a highly influential natural language processing (NLP) model in artificial intelligence. Developed by Google researchers in 2018, BERT is designed to understand the context of words in a sentence by considering the surrounding words on both sides (hence, “bidirectional”).

On the other hand, models such as Google's Bard and fine-tuned transformer models like BERT can be pushed further with their deep learning abilities: they can analyze code to identify vulnerabilities and then attempt to create proof-of-concept exploits to validate them. One note: using these capabilities requires more technical expertise, and they are typically reserved for seasoned security professionals.

PASTA Threat Modeling with ChatGPT

PASTA (Process for Attack Simulation and Threat Analysis) is a leading risk-centric methodology for threat modeling. IT teams can use AI, such as ChatGPT, to analyze threats identified during the modeling process. ChatGPT can provide insights and suggestions and even generate threat scenarios based on the input, but someone knowledgeable must prepare that input to ensure the data is correct. When combined with the expertise of our threat modeling team, our developers can quickly generate security requirements that can be tracked throughout the development process, automatically link those requirements to well-regarded compliance standards such as the NIST Cybersecurity Framework (CSF), check implementation status at any point, and create Infrastructure as Code templates for countering cloud-related security threats.
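
As a rough sketch of how that input might be structured, the example below bundles context gathered during the PASTA stages into a prompt for an LLM. The field names, values, and prompt wording are illustrative assumptions, not a prescribed format, and the resulting scenarios still need review by the threat modeling team.

```python
# Sketch: turning PASTA-stage outputs into a structured prompt for an LLM.
# The keys and values below are illustrative; a threat modeler supplies real data.
import json

pasta_context = {
    "business_objectives": ["Process card payments", "Meet PCI DSS"],
    "technical_scope": ["Public REST API", "PostgreSQL", "AWS Lambda"],
    "application_decomposition": ["Auth service", "Payment service", "Admin console"],
    "known_threat_intel": ["Credential stuffing", "API parameter tampering"],
}

prompt = (
    "Using the PASTA methodology, propose attack scenarios for the application "
    "described below. For each scenario, suggest a security requirement and, "
    "where possible, a related NIST CSF category.\n\n"
    + json.dumps(pasta_context, indent=2)
)

print(prompt)  # send this to your LLM of choice and review the output as a team
```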

  • Writing Pipeline Automation Scripts

DevSecOps relies heavily on automation in pipeline processes, and AI can enhance that automation. A Continuous Integration/Continuous Delivery (CI/CD) pipeline is a set of automated processes and tools that allows developers and operations professionals to work cohesively to build and deploy code to a production environment. Writing pipeline automation scripts is crucial for ensuring security throughout the development and deployment lifecycle. AI tools can help generate scripts that include security checks, code analysis, and other DevSecOps-related tasks, reducing the manual effort required. However, using AI to create code entirely from scratch is unwise: it can only "create" scripts based on the input it receives, and the generated code often has incorrect syntax or omits portions of the complete program.
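
To give a sense of what such a script looks like, here is a minimal sketch of a CI security gate in Python that shells out to two common open-source scanners and fails the build on findings. The tool choices (bandit and pip-audit), paths, and flags are assumptions you would adapt to your own pipeline, and, as noted above, anything AI-drafted along these lines needs a syntax and logic review before it reaches production.

```python
# Sketch of a CI security-gate script: run static analysis and a dependency audit,
# and exit non-zero (failing the pipeline) if either scanner reports findings.
# Assumes bandit and pip-audit are installed in the build environment.
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", "src", "-ll"],            # static analysis, medium severity and up
    ["pip-audit", "-r", "requirements.txt"],   # known-vulnerable dependencies
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        print(f"Running: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```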

  • Writing Lambda Functions in the Cloud Using AI

Lambda functions in cloud environments often execute code in response to events. With professional oversight, AI can assist in writing these functions by analyzing the event triggers, understanding the expected behavior, and generating efficient Lambda functions. This saves development time and facilitates optimization of functions for performance and security.
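
For scale, a typical event-driven function is quite small. Below is a minimal sketch of a Python Lambda handler for S3 object-created events that tags new uploads as pending a security scan; the tagging scheme and event source are assumptions chosen for illustration, not a recommended design.

```python
# Minimal sketch of an AWS Lambda handler for S3 "object created" events.
# It tags each new object as pending a security scan; boto3 is available in
# the Lambda Python runtime by default.
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    tagged = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        s3.put_object_tagging(
            Bucket=bucket,
            Key=key,
            Tagging={"TagSet": [{"Key": "scan-status", "Value": "pending"}]},
        )
        tagged.append(f"s3://{bucket}/{key}")
    return {"tagged": tagged}
```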

  • Integrating ChatGPT into Pipelines and Services

Integrating AI, such as ChatGPT, into DevSecOps pipelines and services can help reduce manual tasks. ChatGPT can potentially assist in analyzing real-time security logs and alerts, helping identify possible threats and anomalies. For instance, it can analyze logs for suspicious activities, assess the impact of potential threats, and recommend actions.

Since ChatGPT can generate regular expressions or rules to identify certain anomalies in logs or other data sources, it can enhance threat detection and response capabilities.
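
For instance, a rule set like the sketch below is the sort of output you might ask ChatGPT to draft and then refine by hand. The patterns and sample log lines are illustrative assumptions; real rules should be tuned against your own logs to keep false positives manageable.

```python
# Sketch: applying AI-drafted regular expressions to log lines to flag
# suspicious entries. Patterns here are deliberately simple and illustrative.
import re

RULES = {
    "sql_injection_probe": re.compile(r"(?i)(union\s+select|'\s*or\s+1=1)"),
    "path_traversal": re.compile(r"\.\./\.\./"),
    "auth_failure": re.compile(r"authentication failure"),
}

def scan_line(line: str):
    """Return the names of all rules that match a single log line."""
    return [name for name, pattern in RULES.items() if pattern.search(line)]

if __name__ == "__main__":
    sample_log = [
        "10.0.0.5 - GET /search?q=1' OR 1=1 HTTP/1.1",
        "10.0.0.7 - GET /static/../../etc/passwd HTTP/1.1",
        "10.0.0.9 - GET /index.html HTTP/1.1",
    ]
    for line in sample_log:
        hits = scan_line(line)
        if hits:
            print(f"ALERT {hits}: {line}")
```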

  • Using AI to Generate Nuclei Templates

Nuclei is a popular tool for detecting security vulnerabilities in various services. AI can be employed to generate Nuclei templates that encompass a wide range of security checks and assessments. By analyzing historical data and known vulnerabilities, AI can create skeleton vulnerability templates covering the latest threats and security issues, helping DevSecOps teams discover known vulnerabilities at scale. While this helps streamline testing workflows, developers will still need to customize the templates to craft tailored security checks and test them to make sure they work.
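
The sketch below shows what generating such a skeleton might look like in Python. The YAML layout follows the common Nuclei HTTP-template structure, but the template ID, probe path, and matcher are placeholders, and the output should be validated against the schema of your installed Nuclei version before use.

```python
# Sketch: generating a skeleton Nuclei template that a developer then customizes.
# The layout mirrors the common HTTP-template structure; validate the result
# against the schema of your installed Nuclei version.
from string import Template

SKELETON = Template("""\
id: $template_id

info:
  name: $name
  author: devsecops-team
  severity: $severity

http:
  - method: GET
    path:
      - "{{BaseURL}}$probe_path"
    matchers:
      - type: status
        status:
          - 200
""")

template = SKELETON.substitute(
    template_id="exposed-backup-config",
    name="Exposed backup configuration file",
    severity="medium",
    probe_path="/config.php.bak",
)

with open("exposed-backup-config.yaml", "w") as fh:
    fh.write(template)

print(template)
```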

  • Creating Automation for Analyzing Suspicious Emails

Emails are a common vector for security threats. DevSecOps teams can use AI tools to create automation that analyzes suspicious emails. AI can process the content, attachments, and sender information to flag potentially malicious messages. If needed, your team can build out sequences in AI tools to quarantine emails, alert administrators, or initiate further analysis. However, be cautious with proprietary code and data: AI tools have no sense of loyalty, and anything you submit could be skimmed and used by threat actors.
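
A starting point for that kind of automation, using only the Python standard library, might look like the sketch below. The indicators it checks (a mismatched Reply-To header, risky attachment extensions, urgency keywords) are illustrative assumptions and nowhere near a complete phishing heuristic.

```python
# Sketch: flagging suspicious emails with a few simple heuristics.
# Uses only the standard library; indicators are illustrative, not exhaustive.
import email
from email import policy

RISKY_EXTENSIONS = (".exe", ".js", ".vbs", ".scr", ".html")
URGENCY_WORDS = ("urgent", "verify your account", "password expired")

def analyze_email(raw_bytes: bytes) -> list[str]:
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    findings = []

    sender = msg.get("From", "")
    reply_to = msg.get("Reply-To", "")
    if reply_to and reply_to not in sender:
        findings.append(f"Reply-To ({reply_to}) differs from From ({sender})")

    for part in msg.iter_attachments():
        filename = (part.get_filename() or "").lower()
        if filename.endswith(RISKY_EXTENSIONS):
            findings.append(f"Risky attachment: {filename}")

    body = msg.get_body(preferencelist=("plain", "html"))
    text = body.get_content().lower() if body else ""
    findings.extend(f"Urgency keyword: {w}" for w in URGENCY_WORDS if w in text)

    return findings

if __name__ == "__main__":
    with open("suspect.eml", "rb") as fh:   # placeholder path
        for finding in analyze_email(fh.read()):
            print("FLAG:", finding)
```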

Training AI for Specific Tasks

Training AI for specific tasks is crucial in utilizing it effectively in DevSecOps. A high-level approach to training AI for security tasks involves the following steps:

  1. Data Collection: Gather a comprehensive dataset with examples of the task you want the AI to perform. For instance, if you train AI to identify SQL injection vulnerabilities, collect code samples with and without such vulnerabilities. Note that you will need vast quantities of code samples, and your team will need to know which group contains the vulnerabilities (a toy end-to-end sketch of this workflow follows the list).
  2. Preprocessing: Clean and preprocess the data to ensure it’s in a suitable format for training. This might include normalizing code, labeling data, and removing noise.
  3. Model Selection: Choose the appropriate AI model for your task. For code analysis, transformer-based models such as BERT can be fine-tuned for your specific needs.
  4. Training: Train the selected model on your dataset, adjusting hyperparameters and optimizing performance.
  5. Evaluation: Assess the model’s performance using metrics relevant to your task, such as precision, recall, and F1 score.
  6. Fine-tuning: Iteratively fine-tune the model based on evaluation results to improve accuracy.
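
To make those steps concrete, here is a deliberately small sketch of the collect, preprocess, train, and evaluate loop using scikit-learn and a tiny hand-written dataset. The snippets and labels are made up for illustration, and a real project would use far more data and would typically fine-tune a transformer model rather than this bag-of-words baseline.

```python
# Toy sketch of the collect -> preprocess -> train -> evaluate loop for a
# SQL-injection detector. Real projects need far more data and would typically
# fine-tune a transformer model instead of this simple baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# 1. Data collection (tiny hand-labeled stand-in: 1 = vulnerable, 0 = safe)
samples = [
    ('cursor.execute("SELECT * FROM users WHERE id = " + user_id)', 1),
    ("cursor.execute('SELECT * FROM users WHERE id = %s', (user_id,))", 0),
    ('query = f"DELETE FROM orders WHERE ref = {ref}"; cursor.execute(query)', 1),
    ('cursor.execute("DELETE FROM orders WHERE ref = ?", (ref,))', 0),
] * 25  # repeat so the split has something to work with

code, labels = map(list, zip(*samples))

# 2. Preprocessing / train-test split
X_train, X_test, y_train, y_test = train_test_split(
    code, labels, test_size=0.2, random_state=0
)

# 3-4. Model selection and training
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# 5. Evaluation with precision, recall, and F1
# (scores on this repeated toy set will be unrealistically high)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_test, model.predict(X_test), average="binary"
)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```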

AI for DevSecOps: Are We Opening Pandora’s Box?

As with all new technology, there is a trade-off for DevSecOps teams. Large Language Models (LLMs) can generate code at the speed of thought, which helps developers become more efficient and enables companies to build applications faster. Nonetheless, AI in general, and LLMs in particular, can come with significant risks. For example, AIs often generate incorrect, contextually misleading, or fictitious information. These AI hallucinations have produced some infamous and disturbing outcomes, which stresses the point that AI tools are still tools that require human intelligence to operate.

  • Google’s Bard chatbot stated that the James Webb Space Telescope had captured the world’s first images of a planet outside our solar system.
  • Microsoft’s chat AI, Sydney, claimed to have fallen in love with users and spied on Bing employees.
  • In 2022, Meta’s Galactica platform provided users with inaccurate information, sometimes rooted in prejudice.

Most generative AI models learn from vast datasets. If that data is outdated, they may not have access to the most current information or may propagate obsolete information. Therefore, it is imperative to verify that any AI-generated code snippets your team uses are current and correct in the context of the entire codebase.

Addressing these risks involves meticulous code reviews, comprehensive testing, fine-tuning the generated code, and finding the right balance of automated tools and human involvement throughout the software development lifecycle.

In addition, generative AI introduces multiple security hazards of its own: much of it is built on open-source software, and its security landscape is largely untested. In the foreseeable future, the threat posed by LLMs will keep evolving alongside the increasing adoption of these systems.

AI models can also be vulnerable to adversarial attacks. Without significant improvements in security standards and practices, the probability of targeted attacks and the emergence of vulnerabilities will increase. Incorporating generative AI tools necessitates addressing specific challenges and general security issues. Your DevSecOps team must adapt its security measures to ensure the responsible and secure utilization of LLM technology.

Conclusion

AI is revolutionizing the world of DevSecOps by automating critical security processes, enhancing code security, and optimizing cloud security. ChatGPT, Bard, and BERT are becoming indispensable in identifying vulnerabilities, automating cloud misconfiguration remediation, and securing the entire development lifecycle. The high-level training of AI for specific tasks and integration into various DevSecOps processes provides developers with the means to stay ahead of the ever-evolving threat landscape. As DevSecOps continues to evolve, AI will undoubtedly play an even more significant role in fortifying security practices in the future.

However, this does not mean that AI is a silver bullet, nor will it replace the intuition, creativity, and skill of DevSecOps teams. Instead, DevSecOps leaders should look to AI as a tool to enhance the efficiency of their teams. AI is limited in understanding contextual risk, and AI models still need a great deal of data input, as well as trial-and-error, to ensure that the data coming out is correct.

Toolchain complexity and security concerns are constant struggles for developers, and AI tools cannot ensure that code is safe. AI should be an opportunity to do more with less and remain competitive—but it will not replace the human element.

Learn how VerSprite combines outstanding skills with AI to get better insights, more accurate threat models, and better results for your enterprise.