
Malware Research and Development 

Advanced Analysis of Malware Design, Behavior, and Detection Evasion

Introduction: The Discipline Imperative

The fundamental misunderstanding plaguing red team operations is treating malware deployment as a capability maximization problem rather than a risk minimization problem. Most security literature depicts malware as aggressive, noisy, and designed for immediate control. The operational reality is fundamentally different.

Advanced threat actors succeed not because their malware is technically superior, but because it operates under disciplined constraints. Success is measured in invisibility duration, not capability breadth.

The fundamental distinction: Red teams don't choose between capability and stealth—they choose between short-term, high-impact operations and long-term, low-impact persistence. Operational objectives determine which is appropriate.


This research examines the trade-offs defining advanced malware:

  • How red teams prioritize persistence over privilege escalation
  • Why stealth-oriented malware deliberately removes capabilities
  • Programming language selection as operational risk management
  • Legitimate service abuse: attacker advantages vs. detection surfaces
  • C2 design under detection pressure
  • Practical detection opportunities

Operational Disclaimer

This analysis is for authorized defensive and offensive security research only. All concepts are restricted to isolated lab environments with proper authorization. Any deployment outside controlled research environments is prohibited and unsupported.


Part 1: Red Team Operational Constraints

Tier 1 - Undetected Persistence: Maintain access without triggering detection (10/10 importance)

Tier 2 - Environmental Awareness: Map network topology, identify defenses, locate targets (9/10 importance)

Tier 3 - Trigger-Based Execution: Act when conditions align with objectives (8/10 importance)

Tier 4 - Privilege Escalation: Often deprioritized in favor of stealth (5/10 importance)

Tier 5 - Expanded Capability: Additional tools only when necessary (3/10 importance)

This prioritization contradicts commodity malware thinking, which emphasizes escalation and immediate capability expansion. Real adversaries operate under different constraints—they assume detection will occur and plan for maintaining access afterward.


Part 2: Design Trade-Offs in Advanced Malware

Low-level languages (C, Rust, Assembly):

  • Minimal behavioral noise → reduces detection surface
  • Small binaries → easier to manage
  • Direct system calls → tight control
  • Trade-off: Slower development, higher stability risk

High-level languages (Python, Go, C#, Java):

  • Rapid development → faster iteration
  • Cross-platform compatibility
  • Trade-off: Behavioral noise (GC pauses, threads, runtime overhead), larger binaries expose more code

Language choice is an operational risk decision, not a technical one. Each creates distinct behavioral signatures defenders can baseline and detect.


Part 3: Command-and-Control Architecture

Modern C2 has shifted from dedicated servers (easily detectable) to legitimate service abuse—a change that defines contemporary APT operations.

Why Dedicated C2 Servers Failed

Traditional infrastructure created obvious detection signals:

  • Domain reputation systems flag suspicious domains
  • Geo-IP analysis identifies server locations
  • Network monitoring systems recognize C2 communication patterns
  • Sinkholing redirects traffic to defenders
  • CISO awareness of infrastructure patterns became standard

Modern Approach: Legitimate Service Abuse

Advanced threat actors now systematically abuse trusted, legitimate platforms:

Cloud Platforms (AWS, Azure, Google Cloud):

  • S3 buckets for command delivery
  • Blob storage for data exfiltration
  • Reasoning: Built-in encryption, trusted infrastructure, indistinguishable from legitimate business traffic

Developer Tools (GitHub, GitLab):

  • Repositories for command encoding
  • CI/CD runners for command execution
  • Reasoning: Expected traffic in security environments, encoded within legitimate development protocols

Communication Services (Discord, Slack, Telegram):

  • Slash commands and webhooks for C2 channels
  • Bot-based operator communication
  • Reasoning: End-to-end encryption, operator mobility, traffic expected on developer machines

Content & Messaging Platforms (Twitter/X, Pastebin, Reddit):

  • Steganographic command encoding in public posts
  • Reasoning: High traffic volume, legitimacy assumed

The Detection Paradox

Legitimate service abuse creates a counterintuitive opportunity:

What Attackers Gain:

  • Near-universal firewall bypass (trusted services are rarely blocked outright)
  • Built-in encryption (HTTPS from trusted providers)
  • Scale and anonymity (millions of legitimate connections)

What Attackers Lose:

  • Cloud platform logging and audit trails
  • Service provider abuse detection systems
  • API rate limiting and behavioral anomalies
  • Account metadata and creation patterns

Defender Strategy: Rather than blocking services (impossible), focus on behavioral anomalies within services:

  • Unusual API usage patterns (bulk downloads, unusual timing)
  • Account creation → immediate activity correlation
  • Authentication anomalies (locations, devices, timings)
  • Communication patterns indicating coordination

Part 4: Venom—Rust-Based Educational Malware Simulation

What is Venom?

Venom is a transparent, auditable collection of Rust-based C2 simulations designed for authorized security research. It demonstrates how real APT malware balances capability, stealth, and persistence using minimal code and legitimate service abuse.

Unlike obfuscated offensive tools, Venom's source code is readable rather than hidden, making it well suited for studying both attacker design patterns and defender detection strategies.

Purpose:

  • Red teamers: Study realistic APT trade-offs (small footprint, native APIs, coordinated C2)
  • Blue teamers: Test detection strategies against realistic signals and timelines
  • Both: Understand what attackers must do operationally and what signals they necessarily create

How Venom Works

Core Architecture:

  • Language: Native Rust binary (no interpreter/runtime dependency)
  • Agent ID: Deterministic, cached (repeatable multi-agent scenarios)
  • Execution: Headless (CREATE_NO_WINDOW), output via service attachments
  • Capabilities: GDI+ for screenshots, Media Foundation for webcam, native Windows APIs
  • Multi-agent: Coordinated command execution with atomic agent selection
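
The deterministic, cached agent ID can be sketched as a hash over stable host attributes. This is a hypothetical derivation for illustration; Venom's actual scheme may differ, and DefaultHasher's algorithm is not guaranteed stable across Rust releases.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical derivation: hash stable host attributes so the same
// machine always yields the same ID (Venom's real scheme may differ).
fn agent_id(hostname: &str, username: &str) -> String {
    let mut h = DefaultHasher::new(); // fixed-key hasher, deterministic per build
    hostname.hash(&mut h);
    username.hash(&mut h);
    format!("{:016x}", h.finish())
}

fn main() {
    // Same inputs, same ID: multi-agent lab scenarios become repeatable,
    // and the identity survives reboots.
    assert_eq!(
        agent_id("LAB-VM-01", "researcher"),
        agent_id("LAB-VM-01", "researcher")
    );
}
```

For defenders, this determinism is itself a signal: the same ID reappearing across reboots distinguishes a cached identity from a per-execution random one.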

Venom Models: Two Service Abuse Approaches

Visit biki.com.np/lab to access both working implementations:

Model 1: Discord-Based C2

How it works:

  • Agent polls Discord slash commands for operator directives
  • Commands executed locally, results posted to Discord channel
  • Multi-agent coordination via broadcast commands
  • All communication appears as normal Discord bot traffic
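
The poll-execute-report cycle above can be reduced to a minimal loop. The Channel trait and MockChannel below are illustrative stand-ins for the Discord API, not Venom's actual interface:

```rust
// Illustrative stand-in for the real transport (the Discord API).
trait Channel {
    fn fetch_command(&mut self) -> Option<String>;
    fn post_result(&mut self, output: &str);
}

struct MockChannel {
    queue: Vec<String>,
    posted: Vec<String>,
}

impl Channel for MockChannel {
    fn fetch_command(&mut self) -> Option<String> {
        self.queue.pop()
    }
    fn post_result(&mut self, output: &str) {
        self.posted.push(output.to_string());
    }
}

// One iteration of the agent loop: poll, execute locally, report back.
// (A real agent sleeps a jittered interval between iterations.)
fn poll_once(ch: &mut impl Channel) {
    if let Some(cmd) = ch.fetch_command() {
        let result = format!("executed: {cmd}"); // real agent runs the shell here
        ch.post_result(&result);
    }
}

fn main() {
    let mut ch = MockChannel {
        queue: vec!["whoami".into()],
        posted: vec![],
    };
    poll_once(&mut ch);
    assert_eq!(ch.posted, vec!["executed: whoami".to_string()]);
}
```

The detection relevance of this shape: every iteration is an outbound request whether or not a command is waiting, which is what makes the polling periodicity observable.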

Operator Commands:

| Command | Purpose | Output Location |
|---------|---------|-----------------|
| ls | List all connected agents | Discord message |
| interact <id> | Select target agent | Local selection |
| cmd <command> | Execute shell command | Discord attachment if >2000 chars |
| cmd-all <command> | Broadcast to all agents | Per-agent Discord attachments |
| capture | Screenshot (GDI+) | Discord attachment (s.png) |
| cam | Webcam frame (Media Foundation) | Discord attachment (cap.png) |

Detection Surface:

  • Discord bot token usage on non-developer machines
  • Multiple agent registrations from same network
  • Batch API calls indicating cmd-all broadcasts
  • GDI+ API calls followed by Discord uploads

Model 2: GitHub-Based C2

How it works:

  • Agent clones private GitHub repository for commands
  • Commands encoded in repository files/branches
  • Results pushed back to repository as issue/PR comments
  • All communication looks like legitimate Git operations

Operator Commands:

| Command | Purpose | Output Location |
|---------|---------|-----------------|
| ls | List all connected agents | Pushed as commits or posted as PR comments |
| interact <id> | Select target agent | Local selection in memory |
| cmd <command> | Execute shell command | Pushed as commits or posted as PR comments |
| cmd-all <command> | Broadcast to all agents | Per-agent commits or PR comments |

Detection Surface:

  • Unusual Git push frequency from user endpoints
  • Repository activity at odd hours
  • Private repository access from VPN/unusual locations
  • Commit messages with suspicious encoding patterns

Getting Started with Venom

Access the lab:

  1. Visit biki.com.np/lab
  2. Choose model:
    • Discord Model - Easier setup, immediate feedback
    • GitHub Model - More stealthy, event-driven
  3. Clone or download code (no git clone needed—direct access on site)
  4. Review source code (fully auditable, all logic visible)

Prerequisites:

  • Windows 10/11 or Linux VM (isolated, no corporate network access)
  • Rust toolchain (optional for modification)
  • Discord server OR GitHub private repository
  • Monitoring tools (Procmon, API Monitor, Sysmon for visibility)

Detection Surfaces: What to Hunt For

Level 1 - Service/API Telemetry

Discord Model:

  • Multiple bot interactions from same IP/user
  • Unusual API timing patterns (simultaneous commands)
  • Account creation → immediate activity
  • Batch command patterns (cmd-all broadcasts)

GitHub Model:

  • Unusual Git push frequency from endpoints
  • Repository operations at non-business hours
  • Private repository clones from VPN locations
  • Commits with suspicious encoding or binary patterns

Level 2 - Behavioral Correlation

  • Simultaneous execution across hosts within 30 seconds
  • Periodic polling patterns (Discord or Git polling)
  • Agent dormancy (idle but maintaining connection)
  • Coordinated multi-agent commands across timeline
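
The periodic-polling signal can be approximated with a crude inter-arrival check. Timestamps are assumed sorted, and the jitter threshold is an illustrative assumption, not production tuning:

```rust
// Beacon sketch: near-constant gaps between a host's requests to one
// service suggest automated polling rather than a human. Timestamps
// are assumed sorted; the jitter threshold is illustrative.
fn looks_periodic(timestamps: &[u64], max_jitter: u64) -> bool {
    if timestamps.len() < 4 {
        return false; // too few samples to judge
    }
    let gaps: Vec<u64> = timestamps.windows(2).map(|w| w[1] - w[0]).collect();
    let min = *gaps.iter().min().unwrap();
    let max = *gaps.iter().max().unwrap();
    max - min <= max_jitter
}

fn main() {
    // Agent polling every ~60 s with small jitter:
    assert!(looks_periodic(&[0, 60, 121, 180, 241], 5));
    // Human browsing: irregular gaps.
    assert!(!looks_periodic(&[0, 12, 300, 305, 900], 5));
}
```

Real beacon detectors use variance or frequency analysis over longer windows; the point here is only that a fixed polling interval is hard for an agent to hide.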

Level 3 - Identity Tracking

  • New agent IDs appearing in logs
  • Agent ID persistence across reboots
  • Subnet-clustered agent IDs (lateral movement indicator)
  • Service account usage patterns inconsistent with role

Lab Exercises: Practical Detection Research

Exercise 1: Single-Agent Detection

Setup:

  1. Deploy Venom agent (Discord or GitHub model)
  2. Run basic commands: ls, cmd whoami, capture
  3. Capture telemetry: Process creation, file writes, network traffic

Objective: Build detector that flags:

  • GDI+ API calls from Rust binary
  • TEMP file write (s.png) immediately following API calls
  • Discord/GitHub API activity within 5 seconds

Expected result: HIGH confidence detection of screenshot capability
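
A toy version of this detector can be written as a sequence match over a time-ordered event trace. The event types and the 5-second window mirror the objective above; everything else is an assumption for illustration:

```rust
// Simplified telemetry event types for the screenshot chain.
#[derive(PartialEq)]
enum Ev {
    GdiCall,       // GDI+ API call observed
    TempPngWrite,  // TEMP file write (s.png)
    ServiceUpload, // Discord/GitHub API activity
}

// Flag the sequence GDI+ call -> TEMP .png write -> service upload,
// all within `window` seconds. Trace assumed time-ordered.
fn screenshot_sequence(events: &[(Ev, u64)], window: u64) -> bool {
    for (i, &(ref a, t0)) in events.iter().enumerate() {
        if *a != Ev::GdiCall {
            continue;
        }
        let mut saw_write = false;
        for &(ref b, t) in &events[i + 1..] {
            if t - t0 > window {
                break;
            }
            match b {
                Ev::TempPngWrite => saw_write = true,
                Ev::ServiceUpload if saw_write => return true,
                _ => {}
            }
        }
    }
    false
}

fn main() {
    let trace = [
        (Ev::GdiCall, 0),
        (Ev::TempPngWrite, 1),
        (Ev::ServiceUpload, 3),
    ];
    assert!(screenshot_sequence(&trace, 5));

    // Missing the GDI+ call: no alert.
    let benign = [(Ev::TempPngWrite, 0), (Ev::ServiceUpload, 2)];
    assert!(!screenshot_sequence(&benign, 5));
}
```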


Exercise 2: Multi-Agent Timeline Correlation

Setup:

  1. Deploy 3+ agents (separate VMs)
  2. Use cmd-all systeminfo across all agents
  3. Capture process creation logs from all hosts

Objective: Build correlation rule that detects:

  • Same command executed on 3+ hosts
  • Execution timing within 30 seconds
  • Command parent process unusual for user

Expected result: VERY HIGH confidence detection of coordinated attack
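
The correlation rule can be sketched as a sliding-window count of distinct hosts per command. This is a simplification of what a SIEM rule would express; the thresholds mirror the objective above:

```rust
use std::collections::{HashMap, HashSet};

// A process-creation event: (host, command line, timestamp in seconds).
type Event<'a> = (&'a str, &'a str, u64);

// Flag commands seen on `min_hosts` or more distinct hosts within
// `window` seconds: the signal a cmd-all broadcast necessarily creates.
fn coordinated<'a>(events: &[Event<'a>], window: u64, min_hosts: usize) -> Vec<&'a str> {
    let mut by_cmd: HashMap<&str, Vec<(&str, u64)>> = HashMap::new();
    for &(host, cmd, ts) in events {
        by_cmd.entry(cmd).or_default().push((host, ts));
    }
    let mut hits = Vec::new();
    for (cmd, mut seen) in by_cmd {
        seen.sort_by_key(|&(_, ts)| ts);
        for i in 0..seen.len() {
            let t0 = seen[i].1;
            let hosts: HashSet<&str> = seen[i..]
                .iter()
                .take_while(|&&(_, ts)| ts - t0 <= window)
                .map(|&(h, _)| h)
                .collect();
            if hosts.len() >= min_hosts {
                hits.push(cmd);
                break;
            }
        }
    }
    hits
}

fn main() {
    let events = [
        ("host1", "systeminfo", 100),
        ("host2", "systeminfo", 110),
        ("host3", "systeminfo", 125), // three hosts within 25 s
        ("host1", "notepad.exe", 400),
    ];
    assert_eq!(coordinated(&events, 30, 3), vec!["systeminfo"]);
}
```

The unusual-parent-process check would be layered on top; the time-and-host clustering alone carries most of the confidence.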


Exercise 3: Service Model Adaptation

Discord model:

  1. Disable screenshot capability (modify code)
  2. Rebuild and redeploy
  3. Re-run detection rules
    4. Question: Which signals survive? Which disappear?

GitHub model:

  1. Change command encoding (ROT13 instead of Base64)
  2. Redeploy
  3. Re-run detection
  4. Question: How does encoding affect detection?

Expected result: Understanding that detection must focus on immutable constraints, not specific implementation details
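
The encoding swap in the GitHub variant is trivial to implement, which is exactly the exercise's point. A ROT13 sketch (any reversible encoding would do):

```rust
// ROT13 over ASCII letters; everything else passes through unchanged.
// ROT13 is its own inverse, so the same function decodes.
fn rot13(s: &str) -> String {
    s.chars()
        .map(|c| match c {
            'a'..='z' => (((c as u8 - b'a' + 13) % 26) + b'a') as char,
            'A'..='Z' => (((c as u8 - b'A' + 13) % 26) + b'A') as char,
            _ => c,
        })
        .collect()
}

fn main() {
    let encoded = rot13("cmd whoami");
    assert_eq!(encoded, "pzq jubnzv");
    assert_eq!(rot13(&encoded), "cmd whoami"); // involution: decode == encode
}
```

A detector keyed to Base64 character patterns in commit contents stops firing after this change; push frequency, odd-hours activity, and multi-agent timing survive, because those come from what the agent must do, not how it encodes.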


Exercise 4: Cross-Model Comparison

Compare both models:

  • Discord: How does real-time C2 vs. event-driven differ?
  • GitHub: How does file-based command encoding vs. API-based differ?
  • Detection: Which is harder to detect? Why?
  • Mitigation: How would you block each differently?

Expected result: Appreciation for attacker trade-offs between operational convenience and detection avoidance

Why Venom Is Valuable for Defense Research

Auditability:
Open source, readable code—instrument directly without reverse-engineering. Modify capabilities and observe how detection signals change.

Repeatability:
Deterministic agent IDs and clear command sequences make experiments reproducible. Run the same scenario across ten labs and get identical results.

Realism:
Uses actual Windows APIs (GDI+, Media Foundation) and the legitimate C2 channels that real attackers abuse.

Observable:
Creates the exact signals real attackers create. Unlike theoretical discussions, you can measure, correlate, and detect Venom's behavior directly.

Flexible:
Two service models let you understand different attacker approaches. Modify code to test variations (timing, encoding, multi-stage).

CRITICAL LEGAL NOTICE

Venom is for authorized lab environments only. Use is restricted to:

  • Systems you own
  • Systems with explicit written authorization
  • Controlled research environments

Any unauthorized use is illegal and unsupported.


Conclusion: Feedback Loop Between Offense & Defense

Offensive and defensive capabilities form a continuous feedback loop; they are not opposites.

Red teams that understand detection surfaces build more realistic operations.
Blue teams that understand attacker constraints build more effective detection.

Plan operations assuming detection will occur. Focus on maintaining access after detection. Design for persistence under known defenses. Add capability only when necessary.

Understand what attackers must do. Find signals their necessary actions create. Focus detection on immutable constraints, not avoidable behaviors. Use timeline correlation, not point-in-time alerts.

The Real Insight:

The most advanced threats are defined by what they don't do.
The most advanced defenses are defined by what they correlate.

The future belongs not to those with the most sophisticated tools, but to those who best understand trade-offs between capability, stealth, and persistence.

Discipline, patience, understanding—these are the real weapons in sophisticated threat operations.

Correlation, baseline, behavioral analysis—these are the real defenses in sophisticated threat detection.
