SOCKS Proxy Tunnelling and Covert Channel Detection: When Legitimate Protocols Carry C2 Traffic

The Problem

Modern C2 frameworks do not rely on exotic protocols. Cobalt Strike, Havoc, and Sliver all pivot to the same legitimate transports that enterprise networks depend on — HTTPS, DNS, ICMP — because those channels are almost never blocked and rarely inspected beyond basic flow metadata. The network controls that would detect traditional C2 traffic (unusual ports, plaintext command protocols, known bad IPs) are simply irrelevant when the attacker is speaking SOCKS5 through an existing HTTPS beacon or encoding commands in DNS query payloads that route through Cloudflare.

The threat is not theoretical. In a typical post-exploitation scenario using Cobalt Strike’s socks command, the C2 operator issues one command on their teamserver. The teamserver opens a SOCKS5 listener on a local port; traffic sent to that listener is relayed through the existing HTTPS C2 channel to the Beacon on the victim host, which makes the actual onward connections. The attacker’s browser or tooling connects to the SOCKS5 port on the teamserver, and from that point every connection they initiate routes through the victim host as a transparent proxy. The victim’s network egress controls see HTTPS traffic to a CDN. The attacker sees the victim’s internal network. Sliver implements the same capability with socks5 start, and Havoc’s tunnel module does the same over its HTTPS malleable C2 profile.

This matters because it converts a single compromised host — perhaps a developer laptop or a workstation with no sensitive data — into a full network pivot point. From that pivot, the attacker can reach internal services that have no external exposure: Active Directory, internal GitLab, Kubernetes API servers, database clusters. The firewall rules protecting those services assume that external traffic is blocked. They do not account for the attacker routing through a trusted internal host that already has network access.

DNS-over-HTTPS compounds the problem differently. Traditional DNS-based C2 (tools like iodine, dnscat2) is detectable because it generates unusual DNS traffic: high query volumes, long subdomain labels, TXT record queries from workstations, NXDomain responses for encoded data. Corporate DNS resolvers log this and EDR tools alert on it. DoH collapses the detection surface by moving DNS queries to HTTPS on port 443 to a small set of known-good IP addresses — Cloudflare’s 1.1.1.1, Google’s 8.8.8.8. The C2 framework establishes its DNS queries as HTTPS requests to these resolvers. Your DNS monitoring sees nothing. Your HTTPS inspection sees a connection to Cloudflare, which has an unblemished reputation. The DNS query itself, containing the encoded C2 subdomain, is invisible inside the TLS session.
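To see why resolver-side monitoring goes blind, it helps to look at what a DoH query actually is on the wire: an ordinary DNS message carried as the body of an HTTPS POST with content type application/dns-message (RFC 8484). A minimal sketch of that encoding using only the standard library — the domain and transaction ID are illustrative, and the query is built but never sent:

```python
import struct

def build_dns_query(qname: str, qtype: int = 16) -> bytes:
    """Encode a DNS query in wire format (RFC 1035). qtype 16 = TXT."""
    header = struct.pack(">HHHHHH",
                         0x1234,   # transaction ID (illustrative)
                         0x0100,   # flags: standard query, recursion desired
                         1, 0, 0, 0)  # QDCOUNT=1, AN/NS/AR counts = 0
    question = b""
    for label in qname.rstrip(".").split("."):
        question += bytes([len(label)]) + label.encode("ascii")
    question += b"\x00" + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

# A dnscat2-style query: hex-encoded C2 data in the leftmost label.
wire = build_dns_query("68656c6c6f2d7461736b696e67.c2.example.com")

# Over DoH, this byte string is simply the body of:
#   POST https://cloudflare-dns.com/dns-query
#   Content-Type: application/dns-message
# The corporate resolver never sees it; network sensors see only TLS to a
# high-reputation IP on port 443.
```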

ICMP tunnelling is the oldest technique in this space but remains effective. Tools like ptunnel-ng and icmptunnel embed TCP sessions in ICMP echo request/reply pairs. The ICMP data field can carry up to 65,507 bytes in a single IPv4 packet, though tunnelling tools typically stay near the path MTU to avoid fragmentation. A working TCP-over-ICMP session looks like ping traffic. Firewalls that pass ICMP for diagnostic purposes pass the entire covert channel. Custom C2 implementations do the same: the agent on the victim host listens on a raw socket, receives ICMP echo requests from the C2 server, extracts the embedded payload, and forwards it to the target service. The C2 server assembles the responses and presents a working socket to the operator.
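The mechanics are simple enough to sketch: an ICMP echo request is an 8-byte header (type, code, checksum, identifier, sequence) followed by an arbitrary data field, and a tunnel puts its ciphertext where ping puts timestamp padding. A hypothetical encoder using only the standard library:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f">{len(data)//2}H", data))
    total = (total >> 16) + (total & 0xFFFF)  # fold carries back in
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """ICMP type 8 (echo request) with an arbitrary payload — the packet
    shape ptunnel-ng-style tools use to carry tunnelled TCP data."""
    header = struct.pack(">BBHHH", 8, 0, 0, ident, seq)  # checksum zeroed
    csum = icmp_checksum(header + payload)
    return struct.pack(">BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(0x4242, 1, b"A" * 1024)
# A 1032-byte ICMP packet: far above the 32-56 byte payloads of real ping,
# which is exactly what a dsize threshold keys on.
```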

HTTP CONNECT tunnelling sits in a slightly different category — it exploits HTTP proxies rather than firewalls. An outbound HTTP CONNECT request to an attacker-controlled hostname looks legitimate at the proxy level because CONNECT is the standard mechanism for carrying HTTPS through an HTTP proxy. The proxy establishes the tunnel and then steps aside; it logs only the CONNECT, not the tunnel’s contents. Even where TLS inspection (SSL bump) is deployed and the proxy terminates the TLS session inside the tunnel, a C2 framework can wrap its traffic in a second, inner TLS layer that the proxy does not terminate. That inner session is opaque.

What makes all of these hard to block is that each uses a legitimate protocol on a legitimate port to a legitimate or high-reputation destination. What makes them detectable is that their traffic patterns differ from legitimate use in measurable ways — if you have the telemetry and are looking for the right signals.

Threat Model

A C2 operator who has established a foothold on a single corporate workstation via HTTPS beaconing issues socks 1080 in the Cobalt Strike teamserver. The workstation becomes a SOCKS5 proxy. The operator’s tooling routes through it to reach internal services: the Kubernetes API server (port 6443), an internal Vault instance (port 8200), Active Directory LDAP (port 389). The corporate firewall sees only the existing HTTPS connection between the workstation and the C2 server. The operator achieves lateral movement without any new network connections from external addresses.

Malware bypasses corporate DNS monitoring by configuring itself to use DoH via 1.1.1.1 over HTTPS. The corporate DNS resolver never sees the malicious domain lookups. Domain reputation checks, DNS-based threat intelligence feeds, and DNS query logging all produce nothing. The malware connects to its C2 domains, receives tasking, and exfiltrates data — all appearing as HTTPS connections to Cloudflare.

An ICMP tunnel runs from a compromised server in a cloud environment where the security group allows ICMP for health checking. The attacker’s C2 server sends ICMP echo requests to the compromised instance. The instance processes the embedded commands and sends ICMP echo replies with the output embedded in the data field. The security group’s ICMP rule is entirely legitimate; to every monitoring system in the path, the operation appears as normal ping traffic, and no alert fires.

An HTTP CONNECT request passes through the corporate HTTP proxy to a C2 server on port 443. The proxy permits the CONNECT because port 443 is on its permitted port list and the destination is not on any blocklist. Even where the proxy performs TLS inspection, an inner TLS layer wrapped by the C2 framework keeps the tunnel’s contents opaque. The operator’s C2 traffic flows through a corporate-sanctioned proxy, logged only as a CONNECT request to a destination the proxy considers acceptable.

Hardening Configuration

1. Suricata Rules for SOCKS5 and ICMP Tunnelling

The SOCKS5 handshake has a specific byte structure: the first byte is the version (0x05), the second byte is the number of authentication methods (NMETHODS), and the following bytes list the supported methods. This structure is recognisable in packet payload regardless of the port. Most C2 tooling runs SOCKS5 on non-standard ports — port 1080 is rarely used because it is obvious. Rules that match only port 1080 will miss nearly all C2 SOCKS5 traffic.
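The byte structure being matched is easy to verify by hand. A hypothetical helper that classifies the first application-data bytes of a TCP stream the same way the signatures do — note that the two-byte |05 01| greeting pattern is also the prefix of a CONNECT request, which is why the CONNECT rule anchors on four bytes:

```python
def classify_socks5(data: bytes) -> str:
    """Classify the first application-data bytes of a TCP stream."""
    # Full request prefix: VER=5, CMD=1 (CONNECT), RSV=0, ATYP=1 (IPv4)
    if data[:4] == b"\x05\x01\x00\x01":
        return "socks5-connect-ipv4"
    # Greeting: VER=5, NMETHODS, then exactly NMETHODS method bytes
    if len(data) >= 2 and data[0] == 0x05 and len(data) == 2 + data[1]:
        return "socks5-greeting"
    return "other"

# Greeting offering one auth method (0x00 = no auth):
classify_socks5(b"\x05\x01\x00")
# CONNECT to 192.168.1.10:6451:
classify_socks5(b"\x05\x01\x00\x01\xc0\xa8\x01\x0a\x19\x33")
```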

# /etc/suricata/rules/covert-channels.rules
# Note: Suricata requires each rule to occupy a single line.

# SOCKS5 handshake on any port except 1080
# Matches version byte + NMETHODS=1 in first two bytes, to_server direction
alert tcp any any -> any !1080 (msg:"SOCKS5 handshake on non-standard port"; flow:established,to_server; content:"|05 01|"; depth:2; threshold:type limit, track by_src, count 3, seconds 60; classtype:policy-violation; sid:9200001; rev:2;)

# SOCKS5 CONNECT request (method 0x01 = CONNECT, address type 0x01 = IPv4)
# Fires on the actual tunnel establishment request after auth negotiation
alert tcp any any -> any !1080 (msg:"SOCKS5 CONNECT request on non-standard port"; flow:established,to_server; content:"|05 01 00 01|"; depth:4; threshold:type limit, track by_src, count 1, seconds 60; classtype:policy-violation; sid:9200002; rev:2;)

# Large ICMP echo request payload — normal ping is 32-56 bytes (Windows/Linux defaults)
# ptunnel-ng and icmptunnel use near-maximum ICMP payload sizes
alert icmp any any -> any any (msg:"ICMP echo request with large payload - possible tunnelling"; itype:8; dsize:>100; threshold:type limit, track by_src, count 5, seconds 60; classtype:policy-violation; sid:9200003; rev:2;)

# ICMP echo reply with large payload — tunnelling tools send data both directions
alert icmp any any -> any any (msg:"ICMP echo reply with large payload - possible tunnelling"; itype:0; dsize:>100; threshold:type limit, track by_src, count 5, seconds 60; classtype:policy-violation; sid:9200004; rev:2;)

# DoH bypass: direct HTTPS connection to Cloudflare resolver IPs
# Legitimate DoH clients present cloudflare-dns.com (or one.one.one.one) as SNI
# Connections without a matching SNI or with bare-IP SNI suggest programmatic DoH
alert tls any any -> [1.1.1.1,1.0.0.1] 443 (msg:"TLS to Cloudflare DoH IP without expected SNI - possible C2 DoH bypass"; flow:established,to_server; tls.sni; content:!"cloudflare-dns.com"; content:!"one.one.one.one"; threshold:type limit, track by_src, count 1, seconds 300; classtype:policy-violation; sid:9200005; rev:2;)

alert tls any any -> [8.8.8.8,8.8.4.4] 443 (msg:"TLS to Google DoH IP without expected SNI - possible C2 DoH bypass"; flow:established,to_server; tls.sni; content:!"dns.google"; threshold:type limit, track by_src, count 1, seconds 300; classtype:policy-violation; sid:9200006; rev:2;)

# HTTP CONNECT to non-standard ports (attackers sometimes use CONNECT to non-443 ports)
alert http any any -> any any (msg:"HTTP CONNECT tunnel to non-HTTPS port"; http.method; content:"CONNECT"; http.uri; content:!":443"; threshold:type limit, track by_src, count 3, seconds 60; classtype:policy-violation; sid:9200007; rev:2;)

These rules fire at the packet level. The SOCKS5 byte matching is specific enough to avoid most false positives — accidentally matching |05 01| in application-level traffic is unlikely because the match requires the bytes to appear at the very start of the application data (depth:2). The ICMP payload threshold of 100 bytes catches standard tunnelling tools while avoiding false positives from network diagnostic tools that occasionally send larger pings. Adjust the dsize threshold based on your baseline — run without alerting for a week and look at the distribution of ICMP payload sizes in your environment first.
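That baselining step can be done offline before the rules go live. A small sketch that takes a list of observed ICMP payload sizes (however you export them from a week of captures) and reports the distribution, so the dsize threshold sits above the legitimate tail rather than being guessed:

```python
import statistics

def payload_size_baseline(sizes: list[int]) -> dict[str, float]:
    """Summarise observed ICMP payload sizes as percentiles so a dsize
    threshold can be chosen above the legitimate tail."""
    qs = statistics.quantiles(sizes, n=100)  # 99 cut points, qs[k] = p(k+1)
    return {
        "p50": qs[49],
        "p95": qs[94],
        "p99": qs[98],
        "max": float(max(sizes)),
    }

# Illustrative data: mostly default Linux pings (56B payload) plus a few
# larger diagnostics. Real input would come from your own capture export.
sizes = [56] * 900 + [64] * 80 + [120] * 15 + [1400] * 5
print(payload_size_baseline(sizes))
```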

2. Zeek Scripts for Behavioural Anomaly Detection

Zeek’s application-layer analysis catches patterns that packet-level rules miss. The SOCKS protocol analyser identifies proxy connections regardless of the byte pattern’s depth in the stream, and Zeek’s connection model gives you the source and destination context that a raw Suricata alert lacks.

# /usr/share/zeek/site/covert-channels/socks-detection.zeek

@load base/protocols/socks
@load base/utils/site
@load base/frameworks/notice

module CovertChannels;

export {
    redef enum Notice::Type += {
        SOCKS_Internal_Pivot,
        SOCKS_Unusual_Port,
        ICMP_Beaconing_Detected,
        ICMP_Large_Payload
    };
}

# Alert when a SOCKS tunnel targets an internal (RFC1918) address.
# This is the lateral movement pivot scenario: attacker using victim host
# to reach internal services they cannot reach directly.
event socks_request(c: connection, version: count, request_type: count,
                    sa: SOCKS::Address, p: port, user: string)
{
    if ( version != 5 )
        return;

    # The target may be an address or a hostname; only addresses can be
    # checked against the private ranges.
    if ( sa?$host && Site::is_private_addr(sa$host) )
    {
        NOTICE([$note=SOCKS_Internal_Pivot,
                $conn=c,
                $msg=fmt("SOCKS5 pivot to internal address: %s:%s via %s",
                         sa$host, p, c$id$orig_h),
                $identifier=cat(c$id$orig_h, sa$host),
                $suppress_for=5min]);
    }
}

# Alert when SOCKS is observed on non-standard ports.
# C2 tooling almost never uses port 1080 — too obvious.
event connection_state_remove(c: connection)
{
    # The conn record's service set is only populated once the SOCKS
    # analyser has confirmed the protocol, so check at connection teardown
    # rather than at establishment.
    if ( "socks" in c$service && c$id$resp_p != 1080/tcp )
    {
        NOTICE([$note=SOCKS_Unusual_Port,
                $conn=c,
                $msg=fmt("SOCKS proxy on non-standard port %s from %s",
                         c$id$resp_p, c$id$orig_h),
                $identifier=cat(c$id$orig_h, c$id$resp_p),
                $suppress_for=10min]);
    }
}

# ICMP beaconing detection: look for regular timing intervals from a single source.
# Ptunnel-ng and custom C2 ICMP implementations send packets at fixed intervals
# to maintain the session — this regularity is the detection signal.

global icmp_event_times: table[addr] of vector of double
    &create_expire=10min;

event icmp_echo_request(c: connection, info: icmp_info,
                        id: count, seq: count, payload: string)
{
    local src = c$id$orig_h;
    local len = |payload|;

    # Flag large payloads immediately — legitimate ping is 32-56 bytes
    if ( len > 100 )
    {
        NOTICE([$note=ICMP_Large_Payload,
                $conn=c,
                $msg=fmt("ICMP echo request with %d-byte payload from %s",
                         len, src),
                $identifier=cat(src, len),
                $suppress_for=5min]);
    }

    # A table &default does not insert on read, so initialise explicitly
    if ( src !in icmp_event_times )
        icmp_event_times[src] = vector();

    icmp_event_times[src] += time_to_double(network_time());

    local times = icmp_event_times[src];
    local n = |times|;

    # Need at least 6 data points for meaningful interval analysis
    if ( n < 6 )
        return;

    # Compute intervals between consecutive packets
    local intervals: vector of double = vector();
    local i = 0;
    while ( i < n - 1 )
    {
        intervals += (times[i+1] - times[i]);
        ++i;
    }

    # Compute mean and variance manually (Zeek doesn't expose statistics builtins)
    local sum = 0.0;
    for ( idx in intervals )
        sum += intervals[idx];
    local mean = sum / |intervals|;

    local sq_sum = 0.0;
    for ( idx in intervals )
        sq_sum += (intervals[idx] - mean) * (intervals[idx] - mean);
    local variance = sq_sum / |intervals|;
    local stdev = sqrt(variance);

    # Coefficient of variation < 0.1 indicates highly regular timing.
    # Legitimate interactive ping (user running ping manually) has CV > 0.3
    # because humans don't keep perfect 1s intervals. C2 beaconing does.
    local cv = ( mean > 0.001 ) ? stdev / mean : 999.0;

    if ( cv < 0.1 )
    {
        NOTICE([$note=ICMP_Beaconing_Detected,
                $conn=c,
                $msg=fmt("Regular ICMP from %s: mean_interval=%.2fs CV=%.3f over %d packets",
                         src, mean, cv, n),
                $identifier=cat(src),
                $suppress_for=10min]);

        # Reset after alerting to avoid duplicate suppression masking escalation
        delete icmp_event_times[src];
    }
}

Load this script from /usr/share/zeek/site/local.zeek:

@load covert-channels/socks-detection

The CV threshold of 0.1 requires tuning for your environment. Network time variations on busy segments will introduce jitter even in attacker beaconing. If you see false positives on legitimate monitoring tools (SNMP polling, regular ping-based health checks), add the known monitoring source IPs to a whitelist table and skip the analysis for them. The large-payload threshold is less tunable — legitimate ping traffic above 100 bytes is uncommon in most enterprise environments.

3. DNS Volume Baseline and DoH Bypass Detection

The DoH bypass problem has two detection angles: detecting direct connections to DoH resolver IPs (which Suricata handles), and detecting the volume drop in traditional DNS traffic that occurs when DoH is adopted on a host. The second signal is more reliable for mature C2 implementations that are careful to avoid connecting directly to resolver IPs.

#!/usr/bin/env python3
"""
doh-detection.py — Detect DNS-over-HTTPS bypass attempts via two methods:
1. Direct HTTPS connections to known DoH resolver IPs (from firewall/proxy logs)
2. Statistically significant drops in per-host DNS query volume (from DNS resolver logs)
"""

import ipaddress
import statistics
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Known DoH resolver IP addresses.
# These are the IPs that browsers and C2 tools contact when bypassing
# the system resolver to use DoH directly.
DOH_RESOLVER_IPS: frozenset[str] = frozenset({
    "1.1.1.1",           # Cloudflare primary
    "1.0.0.1",           # Cloudflare secondary
    "2606:4700:4700::1111",  # Cloudflare IPv6
    "8.8.8.8",           # Google primary
    "8.8.4.4",           # Google secondary
    "2001:4860:4860::8888",  # Google IPv6
    "9.9.9.9",           # Quad9 primary
    "149.112.112.112",   # Quad9 secondary
    "208.67.222.222",    # OpenDNS primary
    "208.67.220.220",    # OpenDNS secondary
    "94.140.14.14",      # AdGuard primary
    "94.140.15.15",      # AdGuard secondary
})

# Expected SNI values for legitimate DoH connections.
# Browsers using DoH will present these SNIs; direct IP connections or
# C2 tools often present no SNI or the bare IP address.
EXPECTED_DOH_SNIS: dict[str, set[str]] = {
    "1.1.1.1": {"cloudflare-dns.com", "one.one.one.one"},
    "1.0.0.1": {"cloudflare-dns.com", "one.one.one.one"},
    "8.8.8.8": {"dns.google"},
    "8.8.4.4": {"dns.google"},
    "9.9.9.9": {"dns.quad9.net", "dns9.quad9.net"},
    "149.112.112.112": {"dns.quad9.net", "dns9.quad9.net"},
}


@dataclass
class TLSConnection:
    src_ip: str
    dst_ip: str
    dst_port: int
    tls_sni: Optional[str]
    timestamp: datetime


@dataclass
class DNSEvent:
    src_ip: str
    query: str
    timestamp: datetime
    resolver: str  # which resolver received the query


def detect_doh_bypass(conn: TLSConnection) -> Optional[dict]:
    """
    Detect direct HTTPS connections to DoH resolver IPs that lack
    the expected SNI for that resolver.

    Legitimate DoH from browsers: SNI is cloudflare-dns.com, dns.google, etc.
    Legitimate DoH from OS resolver: the system typically uses the hostname,
    not the IP, so the SNI is present.
    C2 tools using DoH to bypass monitoring: frequently connect to the bare IP,
    resulting in no SNI or the IP as the SNI value.
    """
    if conn.dst_port != 443:
        return None
    if conn.dst_ip not in DOH_RESOLVER_IPS:
        return None

    sni = conn.tls_sni or ""
    expected_snis = EXPECTED_DOH_SNIS.get(conn.dst_ip, set())

    # No SNI at all — highly suspicious for a connection to a DoH resolver
    if not sni:
        return {
            "type": "doh_bypass_no_sni",
            "src": conn.src_ip,
            "dst": conn.dst_ip,
            "reason": "TLS connection to DoH resolver IP with no SNI",
            "severity": "high",
        }

    # SNI is the IP address itself — programmatic connection, not browser
    try:
        ipaddress.ip_address(sni)
        return {
            "type": "doh_bypass_ip_sni",
            "src": conn.src_ip,
            "dst": conn.dst_ip,
            "sni": sni,
            "reason": "TLS connection to DoH resolver IP with IP address as SNI",
            "severity": "high",
        }
    except ValueError:
        pass

    # SNI doesn't match any expected hostname for this resolver
    if expected_snis and sni not in expected_snis:
        return {
            "type": "doh_bypass_unexpected_sni",
            "src": conn.src_ip,
            "dst": conn.dst_ip,
            "sni": sni,
            "expected_snis": list(expected_snis),
            "reason": f"Unexpected SNI {sni!r} for DoH resolver {conn.dst_ip}",
            "severity": "medium",
        }

    return None


def detect_dns_volume_anomaly(
    host: str,
    baseline_queries: list[int],   # per-hour query counts over baseline period
    current_queries: list[int],    # per-hour query counts for current period
    drop_threshold: float = 0.6,   # alert if volume drops by >60%
    min_baseline_volume: int = 10, # ignore hosts with low query volumes
) -> Optional[dict]:
    """
    Detect hosts whose traditional DNS query volume has dropped significantly.
    When a host switches to DoH, its queries disappear from the corporate
    resolver but appear as HTTPS connections to DoH IPs. A sustained volume
    drop (not a spike, not weekend noise) is the signal.

    Use hourly bucketing to smooth out natural variation. Require at least
    72 hours of baseline data before alerting to avoid false positives from
    hosts that were legitimately quiet.
    """
    if len(baseline_queries) < 72:
        return None  # Not enough baseline data

    baseline_mean = statistics.mean(baseline_queries)
    if baseline_mean < min_baseline_volume:
        return None  # Host is normally quiet; drop is not meaningful

    if not current_queries:
        return None

    current_mean = statistics.mean(current_queries)
    drop_ratio = max(0.0, (baseline_mean - current_mean) / baseline_mean)

    if drop_ratio > drop_threshold:
        return {
            "type": "dns_volume_drop",
            "host": host,
            "baseline_mean_hourly": round(baseline_mean, 1),
            "current_mean_hourly": round(current_mean, 1),
            "drop_pct": round(drop_ratio * 100, 1),
            "reason": (
                f"DNS query volume dropped {drop_ratio*100:.0f}% from baseline "
                f"({baseline_mean:.0f}/hr → {current_mean:.0f}/hr). "
                "Host may be using DoH to bypass resolver monitoring."
            ),
            "severity": "medium",
        }

    return None


def analyse_dns_tunnel_patterns(events: list[DNSEvent]) -> list[dict]:
    """
    Even when using DoH, some C2 DNS tunnels leave detectable traces
    if the framework falls back to traditional DNS for any reason.
    Detect high query rates, long subdomain labels, and entropy anomalies.
    """
    findings = []
    by_host: dict[str, list[DNSEvent]] = defaultdict(list)
    for e in events:
        by_host[e.src_ip].append(e)

    for src_ip, host_events in by_host.items():
        # High query rate: more than 100 DNS queries in 5 minutes
        # Groups events into 5-minute windows
        window_counts: dict[int, int] = defaultdict(int)
        for e in host_events:
            bucket = int(e.timestamp.timestamp() // 300)
            window_counts[bucket] += 1

        max_rate = max(window_counts.values(), default=0)
        if max_rate > 100:
            findings.append({
                "type": "dns_high_query_rate",
                "src": src_ip,
                "max_per_5min": max_rate,
                "reason": f"{max_rate} DNS queries in a single 5-minute window",
                "severity": "medium",
            })

        # Long subdomain labels: dnscat2 and iodine encode data in subdomain labels.
        # Legitimate labels rarely exceed 30 characters; the 50-character threshold
        # below leaves margin for long but benign CDN and telemetry hostnames.
        for e in host_events:
            labels = e.query.rstrip(".").split(".")
            for label in labels[:-2]:  # Skip the registered domain and TLD
                if len(label) > 50:
                    findings.append({
                        "type": "dns_long_label",
                        "src": src_ip,
                        "query": e.query,
                        "label_length": len(label),
                        "reason": f"DNS label of {len(label)} chars suggests encoding",
                        "severity": "high",
                    })
                    break

    return findings

Run detect_dns_volume_anomaly from a scheduled job that reads per-host query counts from your DNS resolver logs (Unbound, BIND, or Windows DNS). Pipe the output to your SIEM. The volume-drop detection works best when combined with connection logs showing that the same host started making HTTPS connections to DoH resolver IPs around the same time — the correlation makes the finding high confidence rather than just medium.
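That correlation step can be sketched directly on the finding dictionaries the two detectors above emit. A hypothetical combiner that upgrades a volume-drop finding to high severity when the same host also produced a DoH-bypass finding in the same window (field names follow the detectors in this script):

```python
def correlate_doh_findings(volume_findings: list[dict],
                           tls_findings: list[dict]) -> list[dict]:
    """Upgrade dns_volume_drop findings to high severity when the same
    host also triggered a doh_bypass_* finding."""
    doh_hosts = {f["src"] for f in tls_findings
                 if f.get("type", "").startswith("doh_bypass")}
    out = []
    for f in volume_findings:
        f = dict(f)  # copy so the caller's findings are not mutated
        if f.get("type") == "dns_volume_drop" and f.get("host") in doh_hosts:
            f["severity"] = "high"
            f["reason"] = f.get("reason", "") + \
                " Correlated with direct DoH resolver connections from the same host."
        out.append(f)
    return out
```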

4. Enforce DNS Routing Through Corporate Resolver

Blocking direct DNS and DoH connections forces all DNS queries through the corporate resolver where they are inspected, logged, and subject to domain reputation filtering. This is the single most effective control against DoH-based C2 bypass.

#!/bin/bash
# dns-enforcement.sh — iptables rules to force all DNS through corporate resolver
# Apply on all managed Linux hosts via Ansible or a systemd oneshot service.
# Corporate resolver: 10.0.0.53

# Flush existing DNS enforcement rules if re-running
iptables -D OUTPUT -j DNS_ENFORCEMENT 2>/dev/null
iptables -F DNS_ENFORCEMENT 2>/dev/null
iptables -X DNS_ENFORCEMENT 2>/dev/null

iptables -N DNS_ENFORCEMENT

# Block direct UDP/TCP DNS to known public resolvers.
# (The nat-table redirect below rewrites these destinations before the
# filter table sees them; these DROPs are defence in depth in case the
# nat rules are removed.)
for RESOLVER in 8.8.8.8 8.8.4.4 1.1.1.1 1.0.0.1 9.9.9.9 149.112.112.112 208.67.222.222 208.67.220.220; do
    iptables -A DNS_ENFORCEMENT -p udp --dport 53 -d "${RESOLVER}" -j DROP
    iptables -A DNS_ENFORCEMENT -p tcp --dport 53 -d "${RESOLVER}" -j DROP
done

# Block direct HTTPS to DoH resolver IPs (forces DoH through the proxy
# where the SNI is inspected, or blocks it if the proxy restricts DoH IPs)
for DOH_IP in 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4 9.9.9.9 149.112.112.112; do
    iptables -A DNS_ENFORCEMENT -p tcp --dport 443 -d "${DOH_IP}" -j REJECT \
        --reject-with tcp-reset
done

# Redirect any surviving DNS queries to the corporate resolver.
# Delete first so re-running the script does not duplicate the nat rules.
iptables -t nat -D OUTPUT -p udp --dport 53 ! -d 10.0.0.53 \
    -j DNAT --to-destination 10.0.0.53:53 2>/dev/null
iptables -t nat -D OUTPUT -p tcp --dport 53 ! -d 10.0.0.53 \
    -j DNAT --to-destination 10.0.0.53:53 2>/dev/null
iptables -t nat -A OUTPUT -p udp --dport 53 ! -d 10.0.0.53 \
    -j DNAT --to-destination 10.0.0.53:53
iptables -t nat -A OUTPUT -p tcp --dport 53 ! -d 10.0.0.53 \
    -j DNAT --to-destination 10.0.0.53:53

# Apply the enforcement chain
iptables -A OUTPUT -j DNS_ENFORCEMENT

echo "DNS enforcement rules applied."

This does not stop all DoH — an attacker can stand up their own DoH resolver on port 443 on a clean IP that is not in the blocklist. But it eliminates the lowest-effort bypass (connecting directly to Cloudflare or Google DNS over HTTPS) and forces the attacker to invest in their own infrastructure, which has a higher likelihood of appearing in threat intelligence feeds.

5. Squid HTTP Proxy Configuration for CONNECT Inspection

# /etc/squid/squid.conf — relevant sections for CONNECT tunnel control

# Define permitted SSL destination ports
acl SSL_ports port 443

# Define CONNECT method
acl CONNECT method CONNECT

# Deny CONNECT to non-HTTPS ports — attackers sometimes use CONNECT to
# route to C2 servers on ports like 8443, 8080, or 4444
http_access deny CONNECT !SSL_ports

# CONNECT destination allowlist: only permit CONNECT to destinations
# that are in the corporate approved list OR pass domain reputation check.
# Everything else is logged and blocked.
acl approved_connect_destinations dstdomain "/etc/squid/approved-connect-domains.txt"
external_acl_type reputation_check ttl=300 %DST /usr/local/bin/reputation-check.sh
acl reputable_destination external reputation_check

http_access deny CONNECT !approved_connect_destinations !reputable_destination

# Log all CONNECT requests separately for security analysis
# Format: timestamp src_ip dst_host:port method status bytes
logformat connect_log %{%Y-%m-%dT%H:%M:%S}tg %a %ru %rm %>Hs %<st
access_log /var/log/squid/connect_tunnel.log connect_log CONNECT

# Alert on CONNECT to destinations with no prior HTTPS visits from the same source.
# This requires an external ACL helper that tracks connection history.
# A host that only ever sends CONNECT (never GET/POST) is suspicious.
external_acl_type connect_only_host ttl=60 %SRC /usr/local/bin/connect-only-check.sh
acl suspicious_connect_only external connect_only_host
http_access deny suspicious_connect_only

The connect-only-check.sh helper queries a Redis instance that the proxy updates on every non-CONNECT request. If a source IP has no non-CONNECT entries in the last 24 hours and suddenly issues a CONNECT, the helper returns OK — the suspicious_connect_only ACL matches — and the deny rule blocks the request. This catches C2 tools that establish a CONNECT tunnel as their very first proxy interaction, without any prior browsing behaviour to blend in with.
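The helper's shape is dictated by Squid's external ACL protocol: one request token per line on stdin, OK or ERR per line on stdout. A sketch of the connect-only-check logic in Python, with the Redis lookup stubbed out as a plain function — the key scheme is an assumption, not anything Squid defines:

```python
import sys

def has_recent_browsing(src_ip: str, store: dict[str, int]) -> bool:
    """Stand-in for the Redis lookup. Hypothetical key scheme: the proxy
    increments nonconnect:<src_ip> (with a 24h TTL) on every non-CONNECT
    request; here a dict plays the role of Redis."""
    return store.get(f"nonconnect:{src_ip}", 0) > 0

def handle_line(line: str, store: dict[str, int]) -> str:
    """Squid sends one %SRC token per line. Returning OK means the
    suspicious_connect_only ACL matches, so the deny rule fires."""
    src_ip = line.strip()
    return "ERR" if has_recent_browsing(src_ip, store) else "OK"

if __name__ == "__main__":
    store: dict[str, int] = {}  # replace with a redis.Redis client in production
    for line in sys.stdin:
        print(handle_line(line, store), flush=True)
```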

6. ICMP Payload Size Baseline and Statistical Anomaly Detection

#!/usr/bin/env python3
"""
icmp-tunnel-detector.py — Statistical analysis of ICMP traffic to detect tunnelling.
Reads ICMP events from a JSON-lines log — e.g. one emitted by a custom Zeek
script (Zeek has no icmp.log in its default configuration) or a PCAP processor.
"""

import json
import statistics
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ICMPEvent:
    src_ip: str
    dst_ip: str
    icmp_type: int   # 8 = echo request, 0 = echo reply
    payload_size: int
    timestamp: float
    seq: int


@dataclass
class HostICMPProfile:
    events: list[ICMPEvent] = field(default_factory=list)

    def intervals(self) -> list[float]:
        if len(self.events) < 2:
            return []
        times = sorted(e.timestamp for e in self.events)
        return [times[i+1] - times[i] for i in range(len(times) - 1)]

    def payload_sizes(self) -> list[int]:
        return [e.payload_size for e in self.events]

    def coefficient_of_variation(self, values: list[float]) -> Optional[float]:
        if len(values) < 3:
            return None
        mean = statistics.mean(values)
        if mean < 0.001:
            return None
        return statistics.stdev(values) / mean

    def analyse(self) -> list[dict]:
        findings = []

        if len(self.events) < 5:
            return findings

        # Payload size analysis
        sizes = self.payload_sizes()
        mean_size = statistics.mean(sizes)
        max_size = max(sizes)

        if mean_size > 100:
            findings.append({
                "check": "large_mean_payload",
                "mean_payload_bytes": round(mean_size, 1),
                "max_payload_bytes": max_size,
                "packet_count": len(self.events),
                "detail": (
                    f"Mean ICMP payload of {mean_size:.0f}B across {len(self.events)} "
                    "packets. Legitimate ping is 32-56B on Linux/Windows defaults."
                ),
                "severity": "medium" if mean_size < 500 else "high",
            })

        # Timing regularity: beaconing produces very consistent intervals
        intervals = self.intervals()
        cv = self.coefficient_of_variation([float(i) for i in intervals])

        if cv is not None and cv < 0.1 and len(self.events) >= 8:
            mean_interval = statistics.mean(intervals)
            findings.append({
                "check": "regular_timing",
                "cv": round(cv, 4),
                "mean_interval_seconds": round(mean_interval, 2),
                "packet_count": len(self.events),
                "detail": (
                    f"ICMP interval CV={cv:.3f} (< 0.1 threshold). "
                    f"Mean interval {mean_interval:.1f}s over {len(self.events)} packets. "
                    "Highly regular timing indicates automated beaconing, not interactive ping."
                ),
                "severity": "high",
            })

        # Bidirectional volume balance: ptunnel-ng balances request/reply payloads.
        # Legitimate ping: reply payload == request payload (echoed back).
        # Tunnelling: reply carries different (response) data than the request.
        req_sizes = [e.payload_size for e in self.events if e.icmp_type == 8]
        rep_sizes = [e.payload_size for e in self.events if e.icmp_type == 0]

        if req_sizes and rep_sizes:
            mean_req = statistics.mean(req_sizes)
            mean_rep = statistics.mean(rep_sizes)
            # In legitimate ping, echo reply == echo request (the payload is echoed).
            # If they differ significantly, the data field is being used asymmetrically.
            if abs(mean_req - mean_rep) / max(mean_req, mean_rep, 1) > 0.2:
                findings.append({
                    "check": "asymmetric_payload",
                    "mean_request_bytes": round(mean_req, 1),
                    "mean_reply_bytes": round(mean_rep, 1),
                    "detail": (
                        "ICMP echo request and reply payloads differ significantly. "
                        "Legitimate ping echoes the request payload back verbatim. "
                        "Asymmetry suggests independent data in each direction."
                    ),
                    "severity": "high",
                })

        return findings


def analyse_icmp_log(log_path: str) -> list[dict]:
    """
    Process a Zeek icmp.log written as JSON (LogAscii::use_json=T).
    Groups echo events by source IP, runs the per-host profile analysis,
    and returns a flat list of findings.
    """
    profiles: dict[str, HostICMPProfile] = defaultdict(HostICMPProfile)
    alerts = []

    with open(log_path) as f:
        for line in f:
            if line.startswith("#"):
                continue
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue

            event = ICMPEvent(
                src_ip=record.get("id.orig_h", ""),
                dst_ip=record.get("id.resp_h", ""),
                icmp_type=int(record.get("itype", -1)),
                payload_size=int(record.get("v", 0)),  # 'v' is payload length in zeek icmp.log
                timestamp=float(record.get("ts", 0)),
                seq=int(record.get("seq", 0)),
            )

            if event.icmp_type in (0, 8):  # Only echo request and reply
                profiles[event.src_ip].events.append(event)

    for src_ip, profile in profiles.items():
        findings = profile.analyse()
        for finding in findings:
            alerts.append({"src_ip": src_ip, **finding})

    return alerts


if __name__ == "__main__":
    import sys

    log = sys.argv[1] if len(sys.argv) > 1 else "/var/log/zeek/current/icmp.log"
    for finding in analyse_icmp_log(log):
        print(json.dumps(finding))

Run this as a cron job or a Zeek log rotation hook, feeding output to your SIEM or alerting pipeline. The asymmetric payload check is the most reliable individual signal — it fires on ptunnel-ng sessions immediately because ptunnel-ng sends outbound data in ICMP requests and inbound (server response) data in ICMP replies, producing systematically different payload sizes in each direction.

Expected Behaviour

When an attacker initiates a ptunnel-ng session through a firewall that permits ICMP, Suricata fires sid:9200003 on the first ICMP echo request with a payload above 100 bytes. Zeek logs the connection in icmp.log with the payload size. After six or more packets, the Zeek script’s beaconing analysis computes a CV below 0.1 and raises an ICMP_Beaconing_Detected notice. The Python analyser, running against icmp.log at the end of the hour, additionally fires asymmetric_payload because ptunnel-ng’s request and reply payloads carry independent data.

When a Cobalt Strike agent executes socks 54321, the SOCKS5 handshake flows over the existing HTTPS channel — Suricata does not see it because it is encrypted. But when the operator connects to the SOCKS5 listener on the teamserver side and routes a connection through the tunnel, the victim host opens a new TCP connection to the internal target. Zeek’s SOCKS analysis logs that connection, and if the requested destination is an RFC1918 address, the socks_request event handler raises the SOCKS_Internal_Pivot notice. That notice is the detection: not the original SOCKS5 handshake, but the proxied connection from the victim to the internal target.
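The RFC1918 test behind that notice is a few lines of logic. An illustrative standard-library sketch (not the article’s Zeek code — the function name is mine):

```python
import ipaddress

def is_internal_pivot(dst_ip: str) -> bool:
    """Flag destinations in private (RFC1918) or link-local space.

    Loopback is excluded: connections to 127.0.0.1 are local tooling,
    not a pivot into the internal network.
    """
    try:
        addr = ipaddress.ip_address(dst_ip)
    except ValueError:
        return False  # malformed field in the log record
    return addr.is_private and not addr.is_loopback

print(is_internal_pivot("10.12.0.5"))   # True — internal target
print(is_internal_pivot("104.16.1.1"))  # False — public CDN address
```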

When a host’s DoH bypass is blocked by iptables with REJECT --reject-with tcp-reset, the connection attempt receives an immediate TCP RST. The C2 tool or malware receives Connection reset by peer. Depending on the tool’s error handling, it either falls back to traditional DNS (which is logged at the corporate resolver) or fails to resolve domains. In both cases, the blocking produces a detectable outcome: either a fallback that appears in DNS logs, or C2 communication failure visible as a beacon timeout.
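The REJECT behaviour is easy to verify from a host subject to the egress rule. A hypothetical probe (the default target is an assumption — substitute whichever DoH endpoint your rule covers) that classifies how the connection attempt fails:

```python
import socket

def probe_doh_egress(host: str = "1.1.1.1", port: int = 443,
                     timeout: float = 3.0) -> str:
    """Classify the outcome of a direct TCP connection to a DoH endpoint."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"        # blocking rule is NOT in effect
    except ConnectionResetError:
        return "reset"           # REJECT --reject-with tcp-reset, as expected
    except ConnectionRefusedError:
        return "refused"         # something answered with a clean refusal
    except (socket.timeout, TimeoutError):
        return "dropped"         # a DROP rule rather than REJECT
    except OSError:
        return "unreachable"     # routing or ICMP-unreachable failure

# Example: probe_doh_egress() returning "reset" confirms the rule is active.
```

The distinction between "reset" and "dropped" matters operationally: REJECT fails fast and pushes the malware toward its fallback path immediately, while DROP leaves it retrying until a timeout.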

Trade-offs

Blocking direct HTTPS to Cloudflare’s 1.1.1.1 and Google’s 8.8.8.8 affects browsers with DoH enabled — Firefox’s Trusted Recursive Resolver (TRR) mode attempts to use Cloudflare by default, and Chrome’s Secure DNS feature does the same. Users with these browser features enabled will see DNS resolution failures for sites that were reachable before. The resolution is to push browser configuration via Group Policy or managed browser profiles that disable DoH (set Firefox network.trr.mode to 5 — disabled — via your MDM, or set Chrome’s DnsOverHttpsMode to off via Workspace policy). Do not block the IPs without also configuring the browsers, or you will generate a help-desk spike and pressure to remove the control.
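For Firefox, the equivalent enterprise policy can be shipped as a policies.json fragment (the deployment path and locking behaviour depend on your MDM — treat this as a sketch, not a complete policy file):

```json
{
  "policies": {
    "DNSOverHTTPS": {
      "Enabled": false,
      "Locked": true
    }
  }
}
```

Chrome’s managed equivalent is the DnsOverHttpsMode policy set to "off". Deploy the policy before, or at the same time as, the IP block.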

ICMP monitoring rather than blocking is the correct posture. Blocking ICMP breaks ping, traceroute, and path MTU discovery (which depends on ICMP type 3 “fragmentation needed” messages), causing subtle TCP performance degradation for connections through paths with a reduced MTU. The right approach is to monitor and alert on anomalous ICMP while leaving it open. If an alert fires and a tunnel is confirmed, block the specific source-destination pair during containment — do not implement a blanket ICMP block.

SOCKS5 detection on arbitrary ports produces false positives in environments where developers run local SOCKS proxies for debugging or where applications use the SOCKS protocol for legitimate purposes. The Suricata rule’s depth:2 and threshold settings reduce but do not eliminate this. When integrating these rules, baseline for two weeks before enabling automated responses. Use the rules in alert-only mode initially and tune the byte signature against your traffic profile.

The Squid connect-only-host check must fail gracefully when the Redis backend is unavailable — write the external ACL helper to fail open (return OK on Redis errors) so that a backend outage does not block all HTTPS traffic through the proxy. Monitor the helper’s response times as well: a slow helper stalls every CONNECT request it evaluates, producing proxy latency spikes that are difficult to diagnose.

Failure Modes

Monitoring only port 1080 for SOCKS5 traffic. Cobalt Strike’s socks command, Sliver’s socks5 start, and Havoc’s tunnel module all accept an arbitrary operator-chosen port, and operators routinely pick ports that blend with existing traffic — 443, 8443, or 3128 (a common proxy port). A Suricata rule scoped to port 1080 catches essentially no real C2 SOCKS traffic and creates false confidence.
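A port-agnostic approach keys on the protocol rather than the port: the SOCKS5 client greeting has a fixed shape (RFC 1928) wherever it appears. An illustrative sketch of the check, applied to the first client payload of any TCP flow:

```python
def looks_like_socks5_greeting(payload: bytes) -> bool:
    """Match the RFC 1928 client greeting: version 0x05, method count,
    then exactly that many auth-method bytes."""
    if len(payload) < 3 or payload[0] != 0x05:
        return False
    nmethods = payload[1]
    # A well-formed greeting is exactly 2 + nmethods bytes long.
    return nmethods >= 1 and len(payload) == 2 + nmethods

print(looks_like_socks5_greeting(b"\x05\x01\x00"))        # no-auth greeting
print(looks_like_socks5_greeting(b"\x05\x02\x00\x02"))    # no-auth + user/pass
print(looks_like_socks5_greeting(b"GET / HTTP/1.1\r\n"))  # not SOCKS5
```

The strict length check is what keeps the false-positive rate tolerable: plenty of binary protocols start with a 0x05 byte, but very few also satisfy the greeting’s length invariant.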

No DNS query volume baseline before enabling DoH detection. The volume-drop detection is meaningless without a baseline. If you deploy the detection on day one without historical data, you have no reference point. Hosts with naturally low DNS query volumes (batch systems, network appliances) will appear as anomalies. Collect at least 30 days of per-host DNS query counts before enabling volume-anomaly alerting.
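Once the baseline exists, the volume-drop check itself is simple. A hypothetical sketch — the thresholds and data shapes here are illustrative choices, not the article’s pipeline:

```python
import statistics

def dns_volume_drop(baseline_counts: list[int], today: int,
                    min_days: int = 30, drop_ratio: float = 0.2) -> bool:
    """Flag a host whose query count fell below drop_ratio of its median
    baseline. Requires min_days of history to avoid day-one false positives."""
    if len(baseline_counts) < min_days:
        return False  # not enough history — do not alert
    median = statistics.median(baseline_counts)
    if median < 50:
        return False  # naturally quiet host (batch system, appliance)
    return today < median * drop_ratio

history = [1200, 1350, 980, 1100] * 8          # 32 days of normal volume
print(dns_volume_drop(history, today=40))      # True — consistent with DoH bypass
print(dns_volume_drop(history, today=1000))    # False — normal day
print(dns_volume_drop(history[:10], today=0))  # False — insufficient history
```

The median (rather than the mean) keeps one-off spikes — patch days, vulnerability scans — from inflating the baseline, and the minimum-volume floor suppresses the quiet hosts called out above.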

Passing ICMP through the firewall without payload inspection. A security group rule of ALLOW ICMP FROM 0.0.0.0/0 with no payload size restriction passes every ICMP tunnel without leaving any signal. The Suricata and Zeek detections only fire if the sensor sees the traffic — a network TAP or traffic mirror feeding the IDS is mandatory. If your IDS does not receive ICMP traffic (common in environments where the IDS sits inline on the HTTP proxy path only), you have no visibility into ICMP tunnelling.

HTTP CONNECT traffic not logged separately. The default Squid access log mixes CONNECT entries with GET, POST, and other methods in a single log stream. Without filtering or separate log targets for CONNECT, the CONNECT entries are buried in millions of normal requests. By the time an analyst queries for CONNECT to an unusual destination, the session is long over. Separate CONNECT logs (as shown in the Squid configuration above) allow near-real-time alerting on anomalous tunnel establishment.

Treating DoH resolver IPs as static. New DoH resolvers are deployed regularly. NextDNS, AdGuard, and regional ISP resolvers all operate DoH endpoints on IP addresses absent from common blocklists, and an attacker running their own DoH resolver on a VPS with a clean IP bypasses IP-based blocking entirely. IP blocking catches opportunistic, low-sophistication C2 that relies on Cloudflare or Google; it does not stop a determined attacker with their own infrastructure. Pair it with TLS inspection that validates the SNI and certificate chain on HTTPS traffic to known DoH IPs, and invest in behaviour-based detection (DNS volume drops, ICMP timing analysis) that works regardless of which IP the attacker uses.