JIT Happens: Creating a malware analysis lab for Smartloader

SBT Content Engineers 28/02/2025

We need a suitable environment in which to detonate the malware in a repeatable fashion. This environment should allow us to substitute our mock APIs and hosts to ensure the malware operates as close to real as possible.

The malware performs multiple checks to see if it is running in a sandbox, so the environment needs to look realistic if we want results that are as close to real-world behaviour as possible.

Requirements

We need the ability to simulate any networked services the malware attempts to use. From our initial triage in (Blog post), we can see multiple HTTP-based services, which we can simulate with an Nginx installation, and multiple DNS queries, which we can serve with a BIND DNS server.

Each C2 request will need a custom response, so we must create a WSGI (Web Server Gateway Interface) service to respond to them. Luckily, we have decrypted the C2 communications and can see it is a simple service.

To ensure networking works properly, we will need a way to allocate an IP address to our VM (or future VMs). For this, we will use a router with DHCP (Dynamic Host Configuration Protocol) enabled, which will handle network routing and address allocation.

Network Diagram

To enable all of this, we will use an on-premises hypervisor and multiple VMs, giving us control of the full network path and all required services.

NAT Router

This will be any COTS (Commercial Off-The-Shelf) routing OS that provides access to the Internet. By using NAT to translate our internal host IPs to routable IPs on the Internet, this host also allows us to block any outgoing traffic we are not happy with.

Debian Host

This will be our simulation host. It is based on Debian 12 (purely out of familiarity, not for any technical reason; for your own lab, you can choose any OS you are comfortable working in).

This host uses dummy interfaces to provide simulated network services for HTTP, DNS, and fake IP addresses. It routes any traffic not intended for itself upstream to the NAT router, allowing us to change which services we are simulating dynamically.

Target Router

This is a simple COTS routing OS that is configured to provide routing and networking to any Target hosts via DHCP. This allows us to change all target hosts' DNS servers and IP addresses in one place.

Target Windows Machine

This is our Windows host for detonating the malware. It will have tools to download and execute the malware, plus a suite of dynamic analysis tools such as Procmon, Wireshark, Frida, and API Monitor, which will allow us to capture and analyze the further actions taken by the malware.

IP Addressing

We also need to plan the IP addresses that will make the network work.

We have chosen the 172.16.0.0/24 range (RFC 1918 private space) for our malware network and the 203.0.113.0/24 range for hosted services; the latter is reserved for documentation (TEST-NET-3, RFC 5737), so it will never collide with real Internet hosts, yet it doesn't look like a private network to the malware.

Host             Interface   IP
NAT Router       WAN         <Provided by DHCP>
NAT Router       LAN         192.168.1.1/24
Internal Router  WAN         203.0.113.2/24
Internal Router  LAN         172.16.0.1/24
Debian Host      eth0        203.0.113.3/24
Debian Host      c2          87.120.36.50/32
Debian Host      ipapi       203.0.113.4/32
Debian Host      files       203.0.113.5/32
Target Host      ethernet    DHCP
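As a sanity check, neither range is routable on the public internet, which Python's ipaddress module can confirm (the two sample addresses below are taken from the plan above):

```python
import ipaddress

# Addresses from the plan above: one from the malware LAN (RFC 1918)
# and one from the simulated internet (RFC 5737 TEST-NET-3).
lan_gw = ipaddress.ip_address("172.16.0.1")
ipapi = ipaddress.ip_address("203.0.113.4")

# Neither is globally routable, so even a misconfigured route cannot
# leak lab traffic to the real hosts these addresses stand in for.
print(lan_gw.is_global, ipapi.is_global)  # False False
```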

Building the hosts

We start by building the basic networking devices, working inwards from the internet connection. This gives each host internet access while we install third-party software.

Other building strategies are possible, but they involve either moving VMs between internet-connected networks or hosting a local mirror of packages inside the lab network.


External router

OPNsense Web GUI


We start off with our internet-connected router. This is the default configuration of OPNsense, and if we set the interface connected to the internet as the WAN interface, the defaults provide us with NAT for free.

Adding the Debian host to the LAN side of this subnet gives it internet and other services (this will depend on your ISP and local configuration).

Debian Host

The Debian host is the heart of our malware lab. We need to first ensure that the networking works correctly and that we can access the wider internet from our host.

Firstly, we installed a fresh Debian 12 image on our host using the default configurations, which gives us a non-gui environment with the basics installed.

Host Setup

The first thing to set up is our network stack; we can do this by editing /etc/network/interfaces with the following configuration:

# Ensure local config is loaded
source /etc/network/interfaces.d/*

# Loopback interface
auto lo
iface lo inet loopback
# This is the simulated internet
allow-hotplug ens18
iface ens18 inet static
  address 203.0.113.3/24
  up ip route add 172.16.0.0/24 via 203.0.113.2 dev ens18
  
# This is the upstream NAT Router network
allow-hotplug ens19
iface ens19 inet dhcp

# Dummy interface for C2 traffic
auto c2
iface c2 inet static
  address 87.120.36.50/32
  pre-up ip link add c2 type dummy
  down ip link del c2
  
# Dummy interface for spoofing ip-api
auto ipapi
iface ipapi inet static
  address 203.0.113.4/32
  pre-up ip link add ipapi type dummy
  down ip link del ipapi
  
# Dummy interface for serving generic files
auto files
iface files inet static
  address 203.0.113.5/32
  pre-up ip link add files type dummy
  down ip link del files

We also now have to enable IPv4 forwarding on our host, which we can do by uncommenting the net.ipv4.ip_forward=1 line in /etc/sysctl.conf.
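Once uncommented, the relevant line in /etc/sysctl.conf reads:

```
# /etc/sysctl.conf
net.ipv4.ip_forward=1
```

Running sysctl -p applies the setting immediately, without waiting for a reboot.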

After configuring this, we can now issue the command systemctl restart networking and check if our configuration works:

If everything is correct, we now have the right networking configuration and should be able to ping google.com. This also means we can install and update Debian packages.

Configuring Services

Before we start installing packages, we need to ensure we are running the latest and greatest packages on our host. So, we will issue the commands apt update and apt upgrade to update our running system.

DNS (BIND/named)

We then install the bind9 package with apt install bind9.

We change the forwarders stanza in /etc/bind/named.conf.options to forward all requests to 8.8.8.8.
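The edited stanza in /etc/bind/named.conf.options ends up looking something like this (only the forwarders block changes; the rest of the Debian defaults stay as they are):

```
options {
        directory "/var/cache/bind";

        forwarders {
                8.8.8.8;
        };

        dnssec-validation auto;
        listen-on-v6 { any; };
};
```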

The next step is ensuring we have the correct zones for our BIND server; we can edit /etc/bind/named.conf.local to add our new zone files.

zone "ip-api.com" {
  type master;
  file "/etc/bind/ipapi.db";
}

zone "github.com" {
  type master;
  file "/etc/bind/github.db";
}

We can now create our new zone files, which are all plaintext and easy to edit, starting with ipapi.db

This will give us an SOA (Start of Authority) record, multiple A records pointing to where we are simulating the service, and finally, a wildcard record handling all subdomains.

$TTL 86400
@ IN  SOA ns.ip-api.com. admin.ip-api.com. 2024022401 3600 1800 604800 86400
@ IN  NS  ns
ip-api.com. IN  A 203.0.113.4
ns  IN  A 203.0.113.3
admin IN  A 203.0.113.3
www IN  A 203.0.113.4
* IN  A 203.0.113.4

We can also create github.db following the same style.

$TTL 86400
@ IN  SOA ns.github.com. admin.github.com. 2024022401 3600 1800 604800 86400
@ IN  NS  ns
github.com. IN  A 203.0.113.5
ns  IN  A 203.0.113.3
admin IN  A 203.0.113.3
* IN  A 203.0.113.5

We can now safely start and enable BIND with the following command systemctl enable --now named and then test the configuration with the dig command.

This shows that our wildcard record works and is returning the correct addresses.

Web Server (Nginx)

We can now start simulating the web services; for this, we need a server that handles HTTP requests.

In our environment, we will use Nginx, free and open-source software that can act as an HTTP server (among other roles); we can install it with the command apt install nginx.

Once installed, we need to configure a site for each of the services we are hosting; we will start with /etc/nginx/sites-available/ipapi

server {
  listen 203.0.113.4:80;
  server_name ip-api.com;
  root /var/www/ip-api;
  index index.html;
  
  location / {
    autoindex on;
    try_files $uri $uri/ =404;
  }
  access_log /var/log/nginx/ipapi.log;
}

This configuration tells nginx to listen on 203.0.113.4:80 and sets the root directory to /var/www/ip-api, which is what nginx will use to return content to clients. The access_log directive tells nginx to log all requests to this server to /var/log/nginx/ipapi.log, which we can use to debug later if required.

Next up to configure is /etc/nginx/sites-available/github

server {
  listen 203.0.113.5:80;
  server_name github.com;
  root /var/www/github;
  index index.html;
  
  location / {
    autoindex on;
    try_files $uri $uri/ =404;
  }
  access_log /var/log/nginx/github.log;
}

This configuration is similar to the ipapi but uses a different listen address, root folder, and log file.

The final and more complicated service is the C2 server. We are planning on using Flask to simulate it as it requires a dynamic response to requests. (Any script/code can achieve this; Flask was just faster for us to create.)

Using Flask in a production environment normally requires a WSGI server to host the code. We will use Gunicorn, which by default listens on 127.0.0.1:8000. So, we will configure Nginx to forward all requests to this location and configure the Flask application later.

server {
  client_max_body_size 50M;
  listen 87.120.36.50:80;
  server_name 87.120.36.50;
  index index.html;
  root /var/www/c2;
  location / {
    proxy_pass http://127.0.0.1:8000;
  }
  access_log /var/log/nginx/c2.log;
}

This configuration has some extra changes:

  • client_max_body_size This adjusts the maximum size a request body can have; as the malware uploads images to the C2, we may need to increase this from the default of 1 megabyte.
  • proxy_pass This directive tells nginx to forward the request to another service and then send the result back to the client.

We can now enable the services by creating a symbolic link in /etc/nginx/sites-enabled/ pointing to the configurations we have just created, then enabling nginx.

ln -s /etc/nginx/sites-available/c2 /etc/nginx/sites-enabled/c2
ln -s /etc/nginx/sites-available/github /etc/nginx/sites-enabled/github
ln -s /etc/nginx/sites-available/ipapi /etc/nginx/sites-enabled/ipapi
systemctl enable --now nginx

Configuring ip-api

To emulate this service, we just need to host static files in /var/www/ip-api; the URL identified in the previous blog post was http://ip-api.com/json/.

We start by creating the directory structure with mkdir -p /var/www/ip-api/json (the -p switch creates parent directories if they don't already exist).

We can now take the response from ip-api and place it in /var/www/ip-api/json/index.html

sensitive data redacted

We can check with curl to see if we have simulated it correctly:
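For reference, ip-api's /json endpoint returns a flat JSON object. A stand-in payload can be generated with a few lines of Python; every value below is invented for illustration (the field names follow ip-api's documented /json format, not the redacted data from our capture):

```python
import json
from pathlib import Path

# Placeholder response in ip-api's /json format; all values are
# invented, not the redacted data from the real capture.
mock_response = {
    "status": "success",
    "country": "United Kingdom",
    "countryCode": "GB",
    "city": "London",
    "isp": "Example ISP",
    "query": "203.0.113.50",
}

def write_mock(webroot: str) -> Path:
    """Drop the payload where nginx serves http://ip-api.com/json/ from."""
    out = Path(webroot) / "json" / "index.html"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(mock_response))
    return out

# write_mock("/var/www/ip-api")  # run as root on the Debian host
```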

Configuring GitHub

GitHub is a massive dynamic service with many features. However, our malware only uses it as a file storage service, so to emulate it we just need to match the paths of the files it tries to download.

To do this, we identify files downloaded by the malware, download them to the server, and match the path.

For example, hxxp[://]github[.]com/aidagluglu/files/raw/refs/heads/master/rhsc_e[.]txt gets converted to the local file path: /var/www/github/aidagluglu/files/raw/refs/heads/master/rhsc_e.txt.

We start by creating the path mkdir -p /var/www/github/aidagluglu/files/raw/refs/heads/master/, then download the original file to that directory.
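The URL-to-path mapping is mechanical, so a small helper (our own convenience function, not part of the original tooling) can generate the local targets for each downloaded file:

```python
from urllib.parse import urlparse

def github_url_to_local_path(url: str, webroot: str = "/var/www/github") -> str:
    """Map a github.com download URL onto the local nginx webroot,
    so the file lands at the exact path the malware will request."""
    return webroot + urlparse(url).path

path = github_url_to_local_path(
    "http://github.com/aidagluglu/files/raw/refs/heads/master/rhsc_e.txt")
print(path)
# /var/www/github/aidagluglu/files/raw/refs/heads/master/rhsc_e.txt
```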

Configuring the C2 application

The final step is to code and configure the simulated Smartloader C2; we have created this in a GitHub repo, which we will clone and prepare for execution.

We install the prerequisites python3-venv and git using apt, then clone the repository to /opt/Smartloader.

The next step is to set up our Python venv so we can run the application. Running python3 -m venv venv creates a Python virtual environment in the venv directory.

After this, we activate the virtual environment in our shell, then install the requirements with pip install -r requirements.txt.

The first step after this is to ensure that the Flask application still runs:
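Our actual simulator lives in the repo, but the overall shape is easy to sketch. The stand-in below uses only the bare WSGI interface from the standard protocol (the /checkin endpoint and its responses are invented for illustration and are not Smartloader's real protocol); gunicorn can serve anything with this signature:

```python
def app(environ, start_response):
    """Tiny WSGI application standing in for the C2 simulator.
    The endpoint name and responses are illustrative only."""
    path = environ.get("PATH_INFO", "/")
    if path == "/checkin":  # hypothetical beacon endpoint
        status, body = "200 OK", b"OK"
    else:
        status, body = "404 Not Found", b"Not Found"
    start_response(status, [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Saved as wsgi.py, this is exactly what a command like gunicorn wsgi:app expects to import.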

Fantastic, it appears to be working. We can now create a systemd service file to serve the application, starting by editing /etc/systemd/system/c2.service

[Unit]
Description=Smartloader C2 Service
After=network.target

[Service]
User=root
Group=www-data
WorkingDirectory=/opt/Smartloader
Environment="PATH=/opt/Smartloader/venv/bin"
ExecStart=/opt/Smartloader/venv/bin/gunicorn -w 4 wsgi:app
Restart=always

[Install]
WantedBy=multi-user.target

We can now enable our C2 service by running the command systemctl enable --now c2.service, and our service will be online!

Our configuration for the Debian host is now complete; we can move on to the internal router.

Internal Router

We start our initial configuration by installing VyOS, a simple routing OS, which will allow us to configure network services such as DHCP and routing.

Installing VyOS

We start off by downloading the latest rolling release of VyOS from their website.

We create a new VM in our hypervisor with 2GB of RAM and a single-core CPU.

Once booted, we need to install VyOS to the underlying hard drive to ensure configuration persists across reboots. We start by opening a console and logging in with the credentials vyos:vyos.

We can now issue the install image command, which will ask us to confirm whether we would like to continue. We type y and continue the installation.

We create a secure password for our VyOS installation and choose the defaults for all configuration options.

We can now safely issue the reboot command, remove the ISO from the virtual CD drive, and boot it into our fresh VyOS installation.

We now need to configure the interfaces and the DHCP scopes.

Interface Configuration

We can now use the show interfaces command to display the MAC addresses for the identified interfaces. We can cross-reference these with the configuration in the hypervisor to ensure we give them the correct addresses.

We can see that the internal network interface is BC:24:11:A8:4C:7D and the external is BC:24:11:43:76:25; we map these to eth1 and eth0 respectively.

Using the set interfaces ethernet <interface> address <address> command (in configure mode), we can assign our IP addresses.
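Based on the address plan above, and assuming the eth0/eth1 mapping we just identified, the commands look like this:

```
configure
set interfaces ethernet eth0 address 203.0.113.2/24
set interfaces ethernet eth1 address 172.16.0.1/24
commit
save
exit
```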

After this, we need to ensure that all traffic is sent to the upstream Debian host to ensure network connectivity.
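A single static default route pointing at the Debian host (203.0.113.3, per the address plan) covers this:

```
configure
set protocols static route 0.0.0.0/0 next-hop 203.0.113.3
commit
save
exit
```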

The internal network also needs a DHCP scope to provide addresses to our target hosts; we issue the following commands to set it up:

configure
set service dhcp-server shared-network-name 'LAN' subnet 172.16.0.0/24 subnet-id 1
set service dhcp-server shared-network-name 'LAN' subnet 172.16.0.0/24 option default-router 172.16.0.1
set service dhcp-server shared-network-name 'LAN' subnet 172.16.0.0/24 option name-server 203.0.113.3
set service dhcp-server shared-network-name 'LAN' subnet 172.16.0.0/24 lease 86400
set service dhcp-server shared-network-name 'LAN' subnet 172.16.0.0/24 range 1 start 172.16.0.2
set service dhcp-server shared-network-name 'LAN' subnet 172.16.0.0/24 range 2 stop 172.16.0.254
commit
save
exit

We can now test by putting a generic host in the Internal network, checking the DHCP configuration provided, and checking network connectivity.

Target Host

To create the target host, we used a basic install of Windows 10 x64 from the Microsoft website and created a basic user named malware.

After it was installed, we downloaded and ran the installers for the analysis tools listed earlier (Procmon, Wireshark, Frida, and API Monitor).

After these were all installed, we could snapshot our VM (Hypervisor specific) and prepare to download our malware.

Putting it all together

We are now fully prepared to download and execute the malware, perform our analysis, and report our findings. Later blog posts will show our deep dives into static and dynamic analysis.

About SBT

Security Blue Team is a leading online defensive cybersecurity training provider with over 100,000 students worldwide, training security teams across governments, military units, law enforcement agencies, managed security providers, and many more industries. If you're looking to take your skills to the next level, our Blue Team Level 2 (BTL2) certification offers comprehensive hands-on training designed for cybersecurity professionals ready to take the next step in their defensive security careers.

Contributors

David Elliott & Gareth Baddams, SBT Content Engineers

Read the previous post in this series here
