The DevOps Approach to Automating C2 Infrastructure (Part One)

Author: Ramiro Molina, Senior OffSec Security Consultant

Time is always at a premium in red team exercises. Red teaming and DevOps go hand in hand: you can’t have one without the other.

This is why automating the deployment of a C2 infrastructure is so important: it is a repetitive task at the start of every red teaming engagement. Automating it lets our team save time, get our hands dirty in the trenches sooner, and keep the maximum amount of time for planning attacks and performing manual testing.

C2, or C&C, stands for Command and Control. During an offensive campaign, our consultants maintain, choreograph, and control compromised assets through a C2 infrastructure which, after it is automatically spun up, is customized ad hoc for the engagement. This setup keeps communications to and from the compromised assets secure, concealed, and reliable.

In this post, we’ll walk you through a DevOps approach to building a functional C2 infrastructure in an automated way, using Covenant servers, Terraform, and other tools.


DevOps

When we think of infrastructure automation, we usually think of provisioning: obtaining and configuring the components required to run specific applications. It’s just like mise en place in cooking.

Now, when it comes to Infrastructure as Code (IaC), there are many popular tools:

  • Terraform
  • CloudFormation
  • Ansible
  • Chef
  • Puppet
  • SaltStack, and more


We chose Terraform for our build. Terraform is an Infrastructure as Code (IaC) tool created by HashiCorp for DevOps engineers, with many features that suit our deployment:

  • Designed infrastructure can be reused multiple times.
  • Terraform is platform-agnostic and supports multiple cloud providers such as AWS, Azure, GCP, DigitalOcean, and more.
  • Plan, create, and destroy infrastructure with a single command (see the workflow sketched after this list).
  • All configuration files can be managed using Git.
  • Maintain different workspaces (separate complete deployments) for different engagements.
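
For context, the day-to-day workflow behind these bullet points boils down to a handful of commands. Below is a minimal sketch; the workspace name is a hypothetical example:

# Initialize the working directory and download the required providers (once per project)
terraform init

# Optional: keep each engagement in its own isolated workspace and state
# ("engagement-acme" is a hypothetical name)
terraform workspace new engagement-acme

# Preview the changes, build the infrastructure, and tear it down when finished
terraform plan
terraform apply
terraform destroy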


Once the required infrastructure, such as VMs, has been deployed to your cloud provider, you still have additional tasks, such as installing software and making configuration changes. Terraform is not explicitly designed for this job, but a few “tricks” can be played to perform those tasks. Dedicated configuration-management tools like Chef, Puppet, and Ansible are better suited; in this instance, however, we use simple bash scripts to complete the setup.


Architecture Design for DevOps

For our architecture, we will use the most common model recommended by Cobalt Strike’s author, Raphael Mudge. A simple setup based on this architecture requires:

  • A C2 framework: we chose Covenant, with one server and one HTTPS listener.
  • Redirectors: since it is best practice not to expose your C2 server directly, redirectors receive the incoming traffic on its behalf and filter it based on specific rules you provide. In this case, we used one HTTPS redirector server, which sits in front of the Covenant server.


The following diagram shows the architecture.

[Architecture diagram: Grunts communicate over HTTPS with the redirector, which filters the traffic and forwards valid requests to the Covenant team server.]


Terraform for DevOps

Terraform supports many providers, each of which requires its own configuration. A provider is an abstraction of a vendor’s API that gives Terraform access to its services, and you can leverage multiple providers simultaneously to build your infrastructure.

We used the Amazon Web Services (AWS) provider. AWS requires you to install the AWS Command Line Interface (CLI), complete the initial configuration, and create a profile with your Access and Secret keys to interact with its services.
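
For reference, this is how such a profile can be created with the AWS CLI; “versprite” simply matches the profile name used in our provider configuration below:

# Create a named profile; the CLI prompts for the Access Key, Secret Key,
# default region, and output format
aws configure --profile versprite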

Terraform lets you define your infrastructure configuration in one or more *.tf files.

You will need to download and install Terraform from the official Terraform site. On GNU/Linux, you can add the repository or download the statically linked binary.
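
On Ubuntu or Debian, for example, installation from HashiCorp’s apt repository looks roughly like this (following HashiCorp’s documented steps; adjust for your distribution):

# Add HashiCorp's signing key and apt repository, then install Terraform
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform

# Verify the installation
terraform version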


Terraform Scripts

Our example Terraform folder structure is the following:

.
├── main.tf
├── variables.tf
├── terraform.tfvars
├── security_groups.tf
├── https_redirector.tf
├── team_server_covenant.tf
├── dns.tf
├── outputs.tf
├── config/
│   ├── authorized_keys
│   └── nginx-https.conf
└── scripts/
    ├── https_redirector_setup.sh
    └── teamserver_covenant_setup.sh

We will use the “main.tf” file to define the providers and the basic networking needed for our infrastructure. First, we will define the AWS provider and another provider to help us set DNS records. For our example, we will use Namecheap’s provider.

######################################################################
##### Main
######################################################################

terraform {
  required_providers {
    # Setup the Amazon Web Services (AWS) provider
    # https://registry.terraform.io/providers/hashicorp/aws/latest/docs
    aws = {
      source = "hashicorp/aws"
      version = "4.54.0"
    }
  
    # Setup a provider for DNS
    # https://registry.terraform.io/providers/namecheap/namecheap/latest
    namecheap = {
      source = "namecheap/namecheap"
      version = ">= 2.0.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
    profile = "versprite"
    region = var.region
}
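
Before planning or applying anything, the providers declared above must be installed locally, which is done with the standard init command in the project folder:

# Download and install the AWS and Namecheap provider plugins
terraform init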

Then we created several AWS networking components to enable internal networking between our EC2 instances.

######################################################################
##### Networking
######################################################################

# 1. Create a new VPC
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "Main VPC"
  }
}

# 2. Create a VPC Internet Gateway.
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/internet_gateway
resource "aws_internet_gateway" "internet_gateway" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "Main Internet Gateway"
  }
}

# 3. Create the main Subnet with Default Route to Internet Gateway
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/subnet
resource "aws_subnet" "main" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "${var.region}a"
  map_public_ip_on_launch = true
  depends_on              = [aws_internet_gateway.internet_gateway]

  tags = {
    Name = "Public Subnet"
  }
}

# 4. Create the Route Table for Main Subnet
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route_table
resource "aws_route_table" "route_table_public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.internet_gateway.id
  }

  tags = {
    Name = "Public Route Table"
  }
}

# 5. Link the Main Subnet and Public Route Table
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route_table_association
resource "aws_route_table_association" "route_table_as_public" {
  subnet_id      = aws_subnet.main.id
  route_table_id = aws_route_table.route_table_public.id
}

Then we generated a “variables.tf” file that defines several variables for use across the Terraform scripts. We provided the values for these variables in another file to keep the configuration settings in one place.

variable "ami" {
  description = "The AMI base for the instances"
  type = string
}

variable "region" {
  description = "The AWS Region for deployment"
  type = string
  default = "us-east-1"
}

variable "ssh_ipv4_cidr_blocks" {
  description = "The IPv4 CIDR blocks that will be allowed to SSH into the servers"
  type = list(string)
}

variable "https_redirector_domain" {
  description = "The domain for the HTTPS redirector"
  type = string
}

variable "https_redirector_hostname" {
  description = "The hostname for the HTTPS redirector"
  type = string
}

variable "https_redirector_domain_host" {
  description = "The fqdn for the HTTPS redirector"
  type = string
}

variable "ssh_pub_key_aws" {
  description = "The name of the public SSH key configured in AWS used when creating the EC2 instances."
  type = string
}

variable "ssh_priv_key_local" {
  description = "The filename in your local ~/.ssh/ folder of the private SSH key that can connect to the AWS hosts for provisioning."
  type = string
  validation {
    condition     = fileexists(pathexpand("~/.ssh/${var.ssh_priv_key_local}"))
    error_message = "The file containing the private SSH key for provisioning was not found in the folder: ~/.ssh/."
  }
}

variable "user_agent" {
  description = "User-agent used by the client (i.e. Grunt, Beacon, etc)"
  type = string
}

variable "namecheap_username" {
  description = "Namecheap API username"
  type = string 
}

variable "namecheap_api_key" {
  description = "Namecheap API Key"
  type = string 
}

variable "namecheap_api_ip" {
  description = "Namecheap API allowed IP address"
  type = string 
}


Terraform for DevOps: An In-depth Look

Since we have already defined the values of the variables in the “terraform.tfvars” file, we won’t be asked to specify them on the command line. This file includes the AWS “ami” and “region” variables; validate that they match the values shown in the AWS EC2 Console. We used an Ubuntu 22.04 LTS image, and our team recommends keeping it updated to the latest version so that the VM package upgrade step goes faster later.
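
As an alternative to hardcoding the AMI ID, a data source can resolve the newest Ubuntu 22.04 image at plan time. This is a sketch of that approach rather than what our scripts use; the filter follows Canonical’s published AMI naming scheme:

# Hypothetical alternative: look up the latest Ubuntu 22.04 LTS AMI dynamically
data "aws_ami" "ubuntu_jammy" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

# Instances would then reference data.aws_ami.ubuntu_jammy.id instead of var.ami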

In the AWS EC2 console, we set up an SSH public key to connect to the new instances. Once complete, we supply its name in the “ssh_pub_key_aws” variable. At the same time, we provide the name of the file containing the corresponding private key in our local “~/.ssh/” folder.
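
If you do not have a key pair yet, one way to create and register it is sketched below; the key name matches our “terraform.tfvars” example:

# Generate a local SSH key pair for provisioning
ssh-keygen -t ed25519 -f ~/.ssh/terraform_vs_deploy

# Import the public half into AWS under the name Terraform will reference
aws ec2 import-key-pair --profile versprite \
    --key-name terraform_vs_deploy \
    --public-key-material fileb://~/.ssh/terraform_vs_deploy.pub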

This key pair is used to connect to the newly created EC2 instances and complete the deployment. To properly harden these instances, it’s best to whitelist access to administrative services, like SSH, so that only our external IP addresses can reach them. Using CIDR notation in the “ssh_ipv4_cidr_blocks” variable, we can provide different IP ranges or individual addresses.

Since we wanted to create a DNS A record automatically using a Terraform provider, we included the domain and host names for the redirector in the “https_redirector_domain”, “https_redirector_hostname”, and “https_redirector_domain_host” variables. This file also holds the authentication details for accessing the Namecheap DNS API.

Finally, we provided the User-Agent our Grunts (agents) will use in the “user_agent” variable, so that the redirector only allows requests with this specific header to pass and reach the Covenant listening service. We used the User-Agent configured by the default Covenant HTTP listeners, but this is simply a matter of preference.

# The AMI base for the instances available at the selected AWS region
ami = "ami-0557a15b87f6559cf" # Canonical, Ubuntu, 22.04 LTS, amd64 jammy image build on 2023-02-08

# The AWS Region for deployment
region = "us-east-1"

# The name of the public SSH key configured in AWS used when creating the EC2 instances.
ssh_pub_key_aws = "terraform_vs_deploy"

# The allowed range(s) to access the private services for management.
#ssh_ipv4_cidr_blocks = ["0.0.0.0/0"] # Use this for any source IP
ssh_ipv4_cidr_blocks = [ "11.22.33.44/32" ]

# Redirector domains
https_redirector_domain = "yourdomain.com"
https_redirector_hostname = "test"
https_redirector_domain_host = "test.yourdomain.com"

# User-Agent used by the client (e.g., Grunt, Beacon, etc.). Remember this must match your setup in the C2 server listener
user_agent = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"

# The filename in your local ~/.ssh/ folder of the private SSH key that can connect to the AWS hosts for provisioning.
# You need to have this file locally.
ssh_priv_key_local = "terraform_vs_deploy"

# DNS - NameCheap API Settings
namecheap_username = "your_username"
namecheap_api_key = "your_api_key"
namecheap_api_ip = "your_allowed_ip_address"

We then prepared another file we named “security_groups.tf”, where we defined the inbound and outbound firewall rules for our instances. For the redirector, we allowed ingress traffic to ports 80 and 443 TCP from any source IP, and to port 22 TCP (SSH) only from our whitelisted IP, as per the configuration above. We allowed egress traffic to any destination.

######################################################################
##### *Redirector* Security Group
######################################################################

# Security group for the HTTPS Redirector, allows incoming Web traffic
resource "aws_security_group" "https_redirector_sg" {
  name        = "https_redirector_sg"
  description = "Allow HTTP(S) inbound traffic"
  vpc_id      = aws_vpc.main.id

  ingress {
    description      = "HTTPS"
    from_port        = 443
    to_port          = 443
    protocol         = "TCP"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    description      = "HTTP"
    from_port        = 80
    to_port          = 80
    protocol         = "TCP"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    description      = "SSH"
    from_port        = 22
    to_port          = 22
    protocol         = "TCP"
    cidr_blocks      = var.ssh_ipv4_cidr_blocks
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
  }

  tags = {
    Name = "https_redirector_sg"
  }
}

For our Covenant server, we allowed ingress traffic to port 443 TCP only from our redirector server, and to port 22 TCP only from our whitelisted source IP address. Egress traffic is allowed to any destination.

######################################################################
##### *Team Server* Security Group
######################################################################

# Security group for the Team Server, allows incoming HTTPS traffic from the redirector(s)
resource "aws_security_group" "teamserver_sg" {
  name        = "teamserver_security-group"
  vpc_id      = aws_vpc.main.id

  ingress {
    description     = "TCP 443"
    from_port       = 443
    to_port         = 443
    protocol        = "TCP"
    security_groups = [aws_security_group.https_redirector_sg.id]
  }

  # Restrict access to the management services to our allowed source IPs.
  ingress {
    description      = "SSH"
    from_port        = 22
    to_port          = 22
    protocol         = "TCP"
    cidr_blocks      = var.ssh_ipv4_cidr_blocks
  }

  # The team server has unlimited access to the internet
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "teamserver_sg"
  }
}


HTTPS Redirector

In the “https_redirector.tf” file, we created a network interface for the HTTPS redirector and associated it with its security group. Then we provided the settings for the EC2 instance.

To provision the created instances (installing and configuring the software we will need), we used a series of provisioners that allow us to upload files and bash scripts and execute them remotely.

The crucial file is “config/nginx-https.conf”, the template for the NGINX configuration, which filters the HTTPS traffic and forwards it to the Covenant server only if the correct User-Agent is included in the request. The “scripts/https_redirector_setup.sh” file is also uploaded to the instance and executed remotely using the “remote-exec” provisioner; this bash script updates the instance packages, installs NGINX, and configures it by applying the corresponding config file. For both the redirector and the Covenant server, we uploaded “config/authorized_keys”, which contains the SSH public keys of our team members so that they can all access these servers.

# Network interface for the HTTPS Redirector
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/network_interface
resource "aws_network_interface" "https_redirector_nic" {
  subnet_id       = aws_subnet.main.id
  private_ips     = [cidrhost("10.0.1.0/24", 10)]
  security_groups = [aws_security_group.https_redirector_sg.id]

  tags = {
    Name = "https_redirector_nic"
  }
}

# HTTPS redirector instance
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
resource "aws_instance" "https_redirector" {
  ami               = var.ami
  instance_type     = "t2.micro"
  availability_zone = "${var.region}a"
  key_name          = var.ssh_pub_key_aws

  network_interface {
    device_index         = 0
    network_interface_id = aws_network_interface.https_redirector_nic.id
  }

  # Connection needed for the provisioners
  # https://www.terraform.io/language/resources/provisioners/connection
  connection {
    type        = "ssh"
    user        = "ubuntu"
    host        = self.public_ip
    private_key = file(pathexpand("~/.ssh/${var.ssh_priv_key_local}"))
  }

  # https://www.terraform.io/language/resources/provisioners/file
  provisioner "file" {
    content     = "https_redirector - ${self.public_ip}\n"
    destination = "/home/ubuntu/instance"
  }

  provisioner "file" {
    source      = "config/authorized_keys"
    destination = "/home/ubuntu/authorized_keys"
  }

  # Redirector HTTPS config file
  provisioner "file" {
    # https://www.terraform.io/language/functions/templatefile
    content = templatefile("config/nginx-https.conf", {
      domain       = var.https_redirector_domain_host,
      user_agent   = var.user_agent,
      teamserver   = cidrhost("10.0.1.0/24", 50),
    })
    destination = "/home/ubuntu/nginx-https.conf"
  }

  provisioner "file" {
    source      = "scripts/https_redirector_setup.sh"
    destination = "/home/ubuntu/https_redirector_setup.sh"
  }

   provisioner "remote-exec" {
    inline = [
      "sudo bash /home/ubuntu/https_redirector_setup.sh",
      "rm /home/ubuntu/https_redirector_setup.sh"
    ]
  }

  tags = {
    Name = "https_redirector"
  }
}
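
Keep in mind that the file and remote-exec provisioners only run when the instance is first created. If you later change the NGINX template or the setup script, one simple way to re-run the provisioning is to recreate the instance:

# Force recreation (and therefore re-provisioning) of the redirector
terraform apply -replace="aws_instance.https_redirector"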

Below is the “config/nginx-https.conf” template file, where we filter requests to allow only those that include the specified User-Agent header value.

server {
    # SSL configuration
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    ssl_certificate /home/ubuntu/cert.pem;
    ssl_certificate_key /home/ubuntu/key.pem;

    root /var/www/html;
    server_name ${domain};

    location ~ ^.*$ {
        # Check that the request matches the User-Agent configured for the client (e.g., Grunt, Beacon, etc.)
        if ($http_user_agent != "${user_agent}") {
            return 404;
        }
        proxy_pass https://${teamserver};
        proxy_redirect off;
        proxy_ssl_verify off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 50M;
    }
}
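
Once the redirector is up, the filtering can be sanity-checked with curl: a request carrying the configured User-Agent should be proxied to Covenant, while any other request should receive a 404. The hostname and User-Agent below come from our “terraform.tfvars” example:

# Without the expected User-Agent, NGINX should return 404
curl -k -s -o /dev/null -w "%{http_code}\n" https://test.yourdomain.com/

# With the configured User-Agent, the request is proxied to the Covenant listener
curl -k -s -o /dev/null -w "%{http_code}\n" \
     -A "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36" \
     https://test.yourdomain.com/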

The “scripts/https_redirector_setup.sh” file updates the instance, creates a self-signed certificate, installs some tools including NGINX, and configures that service.

#!/bin/bash

# Add the rest of the team members' public SSH keys to grant access
cat /home/ubuntu/authorized_keys >> /home/ubuntu/.ssh/authorized_keys
rm /home/ubuntu/authorized_keys

# Update package repository
apt-get update -q &>/dev/null

# Upgrade packages
echo "[INFO] Upgrading installed packages"
sudo apt-get upgrade -y &>/dev/null
echo "[INFO] Packages Upgraded"

# Install Nginx packages
apt-get -y install nginx &>/dev/null
echo "[INFO] Installed Nginx"

# Install other tools
apt-get -y install ncat net-tools &>/dev/null
echo "[INFO] Installed other Tools"

# Generate a self-signed certificate for the HTTPS service
openssl req -x509 \
    -sha256 -days 365 \
    -nodes \
    -newkey rsa:2048 \
    -subj "/CN=demo.coolstartup.io/C=US/L=San Francisco" \
    -keyout key.pem -out cert.pem &>/dev/null
echo "[INFO] Generated self-signed Certificate"

# Apply the HTTPS redirector production configuration
mv /home/ubuntu/nginx-https.conf /etc/nginx/sites-available/default

# Basic hardening: disable server tokens in nginx (version info disclosure)
sed -i 's/# server_tokens off;/server_tokens off;/' /etc/nginx/nginx.conf

# Restart Nginx to update the config
systemctl restart nginx &>/dev/null
# Enable nginx on boot
systemctl enable nginx &>/dev/null


Covenant Server

The Covenant server is defined in the “team_server_covenant.tf” file. We prepared a network interface for it, associated with its own security group, and added the settings for the EC2 instance as well. Since the Covenant server requires more resources, we increased the instance type to a “t2.medium”.

We uploaded the “scripts/teamserver_covenant_setup.sh” bash script and executed it remotely with the “remote-exec” provisioner.

# Network interface for the Team Server
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/network_interface
resource "aws_network_interface" "teamserver_nic" {
  subnet_id       = aws_subnet.main.id
  private_ips     = [cidrhost("10.0.1.0/24", 50)]
  security_groups = [aws_security_group.teamserver_sg.id]

  tags = {
    Name = "teamserver_nic"
  }
}

# Covenant Team Server instance
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
resource "aws_instance" "teamserver" {
  ami               = var.ami
  instance_type     = "t2.medium"
  availability_zone = "${var.region}a"
  key_name          = var.ssh_pub_key_aws

  network_interface {
    device_index         = 0
    network_interface_id = aws_network_interface.teamserver_nic.id
  }

  # Connection needed for the provisioners
  # https://www.terraform.io/language/resources/provisioners/connection
  connection {
    type        = "ssh"
    user        = "ubuntu"
    host        = self.public_ip
    private_key = file(pathexpand("~/.ssh/${var.ssh_priv_key_local}"))
  }

  # https://www.terraform.io/language/resources/provisioners/file
  provisioner "file" {
    content     = "teamserver - ${self.public_ip}\n"
    destination = "/home/ubuntu/instance"
  }

  provisioner "file" {
    source      = "config/authorized_keys"
    destination = "/home/ubuntu/authorized_keys"
  }

  provisioner "file" {
    source      = "scripts/teamserver_covenant_setup.sh"
    destination = "/home/ubuntu/teamserver_covenant_setup.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo bash /home/ubuntu/teamserver_covenant_setup.sh",
      "rm /home/ubuntu/teamserver_covenant_setup.sh"
    ]
  }

  tags = {
    Name = "teamserver"
  }
}

The “scripts/teamserver_covenant_setup.sh” bash script file updates the instance and installs some tools for the Covenant server. In this case, it uses Docker, clones Covenant’s GitHub repository, applies changes to the default source code of the Grunt Stager and Executor, and then builds and executes the container.

#!/bin/bash

# Add the rest of the team members' public SSH keys to grant access
cat /home/ubuntu/authorized_keys >> /home/ubuntu/.ssh/authorized_keys
rm /home/ubuntu/authorized_keys

# Update package repository
apt-get update &>/dev/null

# Upgrade packages
echo "[INFO] Upgrading installed packages"
sudo apt-get upgrade -y &>/dev/null
echo "[INFO] Packages Upgraded"

# Install docker
apt-get -y install docker.io &>/dev/null
echo "[INFO] Docker packages installed"

# Install other tools
apt-get -y install unzip ncat net-tools &>/dev/null
echo "[INFO] Installed other Tools"

# Clone Covenant Repo
git clone -q --recurse-submodules https://github.com/cobbr/Covenant /home/ubuntu/Covenant
echo "[INFO] Cloned Covenant Repo"

# Make Covenant modifications
# Bug fix in the default GruntHTTP Stager and Executor to enable TLS 1.2 (https://github.com/cobbr/Covenant/issues/263)
sudo sed -i "s/ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3 | SecurityProtocolType.Tls;/ServicePointManager.SecurityProtocol = (SecurityProtocolType)3072 | SecurityProtocolType.Ssl3 | SecurityProtocolType.Tls;/g" /home/ubuntu/Covenant/Covenant/Data/Grunt/GruntHTTP/GruntHTTP*.cs

# Build and Run Docker instance
echo "[INFO] Building Covenant docker image"
docker build -q -t covenant /home/ubuntu/Covenant/Covenant/
echo "[INFO] Running Covenant Image"
docker run -d --restart unless-stopped -p 127.0.0.1:7443:7443 -p 80:80 -p 443:443 --name covenant -v /home/ubuntu/Covenant/Covenant/Data:/app/Data covenant
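
Since the container binds the admin interface to 127.0.0.1:7443 on the instance, it is only reachable through the SSH port forward shown in our outputs below. After forwarding, you browse to the interface and register the initial Covenant user (the placeholder IP is whatever the deployment prints):

# Forward the Covenant admin interface to your workstation
ssh -L 7443:localhost:7443 ubuntu@<teamserver_public_ip> -N

# Then browse to https://127.0.0.1:7443 and create the initial Covenant user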


DNS Provider Configuration in DevOps

We used Namecheap as the provider to create an A record on our test domain. We defined this configuration in the “dns.tf” file, along with the details to access Namecheap’s API. With these settings, the provider automatically creates a new A record in the specified domain and points it to our redirector’s public IP address.

# Namecheap API credentials
provider "namecheap" {
  user_name   = var.namecheap_username
  api_user    = var.namecheap_username
  api_key     = var.namecheap_api_key
  client_ip   = var.namecheap_api_ip
  use_sandbox = false
}

resource "namecheap_domain_records" "https_redirector_domain" {
  domain = var.https_redirector_domain
  mode   = "OVERWRITE"

  record {
    hostname = var.https_redirector_hostname
    type     = "A"
    address  = aws_instance.https_redirector.public_ip
  }
}
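
Note that the “OVERWRITE” mode replaces the domain’s existing records, so it is best used on a domain dedicated to the engagement (the provider also supports a “MERGE” mode). After the apply, the record can be checked once DNS has propagated:

# Confirm the new A record resolves to the redirector's public IP
dig +short A test.yourdomain.com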


Output Information

In the “outputs.tf” file, we display useful information about the deployment once it is complete, such as the external IP addresses for the Covenant server and Redirector. We use that information to connect and manage those instances.

# Information displayed once the deployment is completed
output "HTTPS_Redirectors_Information" {
  description = "HTTPS Redirector Information:"
  value = join("\n",
    [
      "Redirector Public IP: ${aws_instance.https_redirector.public_ip}",
      "Redirector SSH: ssh ubuntu@${aws_instance.https_redirector.public_ip}",
      "Redirector FQDN: ${var.https_redirector_domain_host}",
    ]
  )
}

output "Team_Server_Information" {
  description = "Covenant Team Server Information"
  value = join("\n",
    [
      "Covenant Team Server Public IP: ${aws_instance.teamserver.public_ip}",
      "Covenant Team Server SSH (Allowed Range ${join(", ",var.ssh_ipv4_cidr_blocks)}): ssh ubuntu@${aws_instance.teamserver.public_ip}",
      "Covenant Team Server Port Forwarding: ssh -L 7443:localhost:7443 ubuntu@${aws_instance.teamserver.public_ip} -N"
    ]
  )
}
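
If you need these connection details again later, Terraform can re-print them from the stored state at any time without touching the infrastructure:

# Re-display the deployment information
terraform output
terraform output Team_Server_Information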


DevOps Services

Protect your business now with DevOps services from the expert team at VerSprite, your cybersecurity professionals.

Contact VerSprite today to get started.