Introduction
sandtrace is a Rust security tool for Linux that combines malware sandboxing, credential file monitoring, codebase auditing, and whitespace obfuscation scanning in a single binary.
> *Wormsign never lies.* — The surface tremor that reveals a hidden threat before it strikes.
What sandtrace does
| Command | Purpose |
|---|---|
| `sandtrace run` | Sandbox untrusted binaries with syscall tracing + 8-layer isolation |
| `sandtrace watch` | Monitor credential files for suspicious access in real time |
| `sandtrace audit` | Scan codebases for hardcoded secrets, supply-chain threats, steganography |
| `sandtrace sbom` | Generate a CycloneDX SBOM from manifests and lockfiles |
| `sandtrace scan` | Fast parallel filesystem sweep for whitespace obfuscation |
| `sandtrace init` | Initialize `~/.sandtrace/` config and rules |
Use cases
- Audit your codebase before every commit or CI run — catch hardcoded AWS keys, Stripe tokens, JWTs, and 27+ credential patterns before they ship.
- Generate an SBOM for a package tree or monorepo — emit a CycloneDX inventory from npm, Cargo, and Python requirement files.
- Sandbox untrusted installs — run `npm install` or `pip install` inside an 8-layer isolation envelope and get a full JSONL syscall trace of what it tried to do.
- Monitor credential files — get real-time alerts when an unexpected process touches `~/.aws/credentials`, `~/.ssh/id_rsa`, or any of 14 monitored credential locations.
- Detect obfuscation attacks — scan for whitespace obfuscation techniques used in supply-chain attacks: trailing spaces, content hidden past column 200, zero-width unicode, and homoglyph substitution.
- CI/CD gating — output SARIF for GitHub Code Scanning or JSON for custom pipelines, with exit codes that fail the build on high/critical findings.
Philosophy
sandtrace follows the "wormsign" philosophy: detect the surface tremor that reveals a hidden threat before it strikes. Rather than trying to be a comprehensive security platform, it focuses on the specific attack vectors that are most commonly exploited in modern software supply chains.
Each subcommand operates independently and can be adopted incrementally — start with audit in CI, add watch on developer machines, and use run when evaluating untrusted packages.
Getting Started
Requirements
- Rust 1.87+ (for building from source)
- Linux 5.13+ (Landlock v1 support for `sandtrace run`)
- Linux 5.3+ (`PTRACE_GET_SYSCALL_INFO` for `sandtrace run`)

`sandtrace audit`, `sandtrace scan`, and `sandtrace watch` work on any Linux kernel. The kernel version requirements above only apply to the `sandtrace run` sandbox.
Install
cargo build --release
cp target/release/sandtrace ~/.cargo/bin/
Initialize
sandtrace init
This creates ~/.sandtrace/ with default configuration and rules:
| File | Purpose |
|---|---|
| `config.toml` | Global settings — redaction markers, custom patterns, thresholds |
| `rules/credential-access.yml` | 14 credential file monitoring rules |
| `rules/supply-chain.yml` | 4 supply-chain attack detection rules |
| `rules/exfiltration.yml` | Data exfiltration detection rules |
Use sandtrace init --force to overwrite existing files.
Quick examples
Audit a project for secrets
sandtrace audit ./my-project
Scan your home directory for obfuscation
sandtrace scan
Watch credential files
sandtrace watch --alert desktop
Sandbox an npm install
sandtrace run --allow-path ./project --output trace.jsonl npm install
Generate SARIF for GitHub Code Scanning
sandtrace audit ./my-project --format sarif > sandtrace.sarif
sandtrace audit
Scans source code for hardcoded secrets, supply-chain threats, steganographic payloads, and unicode obfuscation.
Usage
sandtrace audit ./my-project # Terminal output
sandtrace audit ./my-project --format json # JSON for CI pipelines
sandtrace audit ./my-project --format sarif > r.sarif # SARIF for GitHub Code Scanning
sandtrace audit ./my-project --severity high # Only high + critical
sandtrace audit ./my-project --rules ./my-rules/ # Custom rules directory
Flags
| Flag | Default | Description |
|---|---|---|
| `TARGET` | (required) | Directory to scan |
| `--format` | `terminal` | Output format: `terminal`, `json`, `sarif` |
| `--severity` | `low` | Minimum severity: `info`, `low`, `medium`, `high`, `critical` |
| `--rules` | `~/.sandtrace/rules/` | Rules directory |
| `--no-color` | `false` | Disable colored output |
| `-v` / `-vv` | — | Increase verbosity |
Exit codes
| Code | Meaning |
|---|---|
| 0 | Clean — no findings at or above the minimum severity |
| 1 | High findings detected |
| 2 | Critical findings detected |
Exit codes make sandtrace audit easy to use as a CI gate — any non-zero exit fails the build.
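The documented exit-code contract can be sketched in a few lines (an illustration only, assuming findings are records with a `severity` field — not sandtrace's internal data model):

```python
# Sketch of the documented contract: 0 = clean, 1 = high findings
# present, 2 = critical findings present. Critical takes precedence.
# The `findings` shape here is hypothetical.
def gate_exit_code(findings):
    severities = {f["severity"] for f in findings}
    if "critical" in severities:
        return 2
    if "high" in severities:
        return 1
    return 0

print(gate_exit_code([]))                      # clean: passes the gate
print(gate_exit_code([{"severity": "high"}]))  # fails the gate
print(gate_exit_code([{"severity": "critical"},
                      {"severity": "high"}]))  # critical wins over high
```

In a CI pipeline any non-zero return fails the job, so no extra scripting is needed beyond running the command.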
Examples
Audit with high severity filter
sandtrace audit ./my-project --severity high
Only reports findings with severity high or critical.
JSON output for scripting
sandtrace audit ./my-project --format json | jq 'length'
SARIF for GitHub Code Scanning
sandtrace audit . --format sarif > sandtrace.sarif
Upload the SARIF file using the github/codeql-action/upload-sarif@v4 action. See CI/CD Integration for a full workflow example.
Custom rules directory
sandtrace audit ./my-project --rules ./my-custom-rules/
Override the default rules directory. See Custom Rules for the YAML rule format.
Built-in detection rules
See Detection Rules for the full list of 50+ built-in patterns that sandtrace audit checks, including 30 obfuscation rules across 3 tiers.
sandtrace sbom
Generate a CycloneDX SBOM for a project or monorepo by discovering common package manifests and lockfiles.
Usage
sandtrace sbom ./my-project
sandtrace sbom ./my-project --output bom.json
sandtrace sbom ./workspace --no-pretty
Options
| Flag | Default | Description |
|---|---|---|
| `TARGET` | (required) | Directory to inspect |
| `--format` | `cyclonedx-json` | Output format |
| `-o`, `--output FILE` | stdout | Write JSON to a file |
| `--no-pretty` | `false` | Emit compact JSON |
What it detects
Current SBOM generation supports:
- `npm-shrinkwrap.json`, `package-lock.json`, `package-lock.yaml`, and `package.json` for npm projects
- `pnpm-lock.yaml` for pnpm projects
- `yarn.lock` for Yarn projects
- `Cargo.lock` and `Cargo.toml` for Rust projects
- `requirements.txt`, `poetry.lock`, `uv.lock`, `pylock.toml`, `pyproject.toml`, `Pipfile.lock`, and `Pipfile` for Python projects
- `conda-lock.yml`, `conda-lock.yaml`, `explicit.txt`, `explicit-*.txt`, `*-explicit.txt`, `environment.yml`, and `environment.yaml` for Conda environments
- `composer.lock` and `composer.json` for Composer projects
- `Gemfile.lock`, `Gemfile`, and `.gemspec` for Ruby projects
- `go.sum` and `go.mod` for Go projects
- `mix.lock` and `mix.exs` for Elixir projects
- `bun.lock` for Bun projects, with `bun.lockb` falling back to `package.json`
- `deno.json`, `deno.jsonc`, and `deno.lock` for Deno projects, including `npm:`, `jsr:`, and remote URL imports
- `pom.xml`, `gradle.lockfile`, and Gradle build files for Java projects
- `packages.lock.json` and `.csproj` for .NET projects
- `Package.resolved` and `Package.swift` for Swift projects
When a text lockfile is present, sandtrace sbom prefers resolved package versions. When only a manifest is available, it emits the dependency with a sandtrace:version_spec property when the version is a range or other unresolved specifier.
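The resolved-versus-unresolved decision can be sketched with a simple heuristic (an illustration only; the exact specifier parsing sandtrace uses is not shown here):

```python
import re

# Hypothetical heuristic: treat a version as resolved only if it is a
# plain exact version; anything with range operators or wildcards keeps
# the documented sandtrace:version_spec property instead.
EXACT = re.compile(r"^\d+(\.\d+)*([.+-][0-9A-Za-z.-]+)?$")

def component_properties(version):
    if EXACT.match(version):
        return {"version": version, "properties": {}}
    return {"version": version,
            "properties": {"sandtrace:version_spec": version}}

print(component_properties("1.2.3"))   # exact version: no extra property
print(component_properties("^1.2.3"))  # range: carries sandtrace:version_spec
```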
Output
The command emits CycloneDX 1.5 JSON with:
- metadata about the scanned application
- discovered library components
- a root dependency list for direct dependencies when they can be inferred
Example:
sandtrace sbom . --output bom.json
jq '.bomFormat, .metadata.component.name, (.components | length)' bom.json
Notes
- Hidden and generated directories such as `node_modules`, `target`, `vendor`, `.git`, and common cache folders are skipped during discovery.
- The current implementation focuses on inventory generation, not vulnerability matching. Use `sandtrace audit` and `sandtrace run` for behavioral and static detection.
sandtrace scan
Fast parallel filesystem sweep using a rayon thread pool and ignore-aware directory walking. Detects whitespace obfuscation techniques used in supply-chain attacks.
Usage
sandtrace scan # Scan $HOME for 50+ consecutive whitespace chars
sandtrace scan /tmp # Scan specific directory
sandtrace scan /tmp -n 20 # Lower threshold to 20 chars
sandtrace scan /tmp -v # Show line previews
sandtrace scan /tmp --max-size 5000000 # Skip files over 5MB
Flags
| Flag | Default | Description |
|---|---|---|
| `TARGET` | `$HOME` | Directory to scan |
| `-n`, `--min-whitespace` | 50 | Minimum consecutive whitespace characters to flag |
| `-v`, `--verbose` | `false` | Show line preview for each finding |
| `--max-size` | 10000000 | Maximum file size in bytes |
| `--no-color` | `false` | Disable colored output |
Skipped directories
The following directories are automatically skipped during scanning:
node_modules, .git, vendor, .pnpm, dist, build, .cache, __pycache__, .venv, venv, .tox
How it works
sandtrace scan uses rayon for parallel directory walking. Each file is checked line-by-line for runs of consecutive whitespace characters (spaces and tabs) that exceed the threshold. This detects whitespace obfuscation attacks where malicious payloads are hidden in whitespace at the end of source lines or past column 200.
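The per-line check can be sketched like this (a simplified illustration of the technique, not sandtrace's exact implementation):

```python
import re

def longest_whitespace_run(line):
    """Length of the longest run of consecutive spaces/tabs in a line."""
    runs = re.findall(r"[ \t]+", line)
    return max((len(r) for r in runs), default=0)

def flag_line(line, threshold=50):
    """Flag a line whose longest whitespace run meets the threshold."""
    return longest_whitespace_run(line) >= threshold

benign = "let x = 1;  // comment"
hidden = "let x = 1;" + " " * 60 + "eval(payload)"
print(flag_line(benign))  # False
print(flag_line(hidden))  # True
```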
Examples
Quick scan with default threshold
sandtrace scan
Scans your entire home directory for lines with 50+ consecutive whitespace characters.
Targeted scan with lower threshold
sandtrace scan ./vendor -n 20 -v
Scans a vendor directory with a lower threshold and shows line previews for each finding.
sandtrace watch
Monitors sensitive files via inotify and matches access events against YAML rules. Alerts through multiple channels when unexpected processes access credential files.
Usage
sandtrace watch # stdout alerts
sandtrace watch --alert desktop # Desktop notifications
sandtrace watch --alert webhook:https://hooks.slack.com/services/T00/B00/XXX
sandtrace watch --alert stdout --alert desktop # Multiple channels
sandtrace watch --paths /opt/secrets/ # Watch additional paths
sandtrace watch --daemon --pid-file /tmp/st.pid # Run as daemon
Flags
| Flag | Default | Description |
|---|---|---|
| `--rules` | `~/.sandtrace/rules/` | Rules directory |
| `--paths` | — | Additional paths to monitor (repeatable) |
| `--alert` | `stdout` | Alert channel (repeatable) |
| `--daemon` | `false` | Fork to background |
| `--pid-file` | — | PID file for daemon mode |
| `--no-color` | `false` | Disable colored output |
| `-v` / `-vv` | — | Increase verbosity |
Alert channels
| Channel | Format | Description |
|---|---|---|
| stdout | `--alert stdout` | Print to console |
| desktop | `--alert desktop` | Desktop notification (notify-rust) |
| webhook | `--alert webhook:<url>` | HTTP POST to webhook URL |
| syslog | `--alert syslog` | System log |
Multiple alert channels can be combined by repeating the --alert flag.
How it works
- On startup, sandtrace reads all YAML rules from the rules directory.
- It registers inotify watches on all file paths defined in the rules.
- When a file access event fires, it checks the accessing process against the rule's `excluded_processes` list.
- If the process is not in the allowlist, an alert is dispatched to all configured channels.
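The allowlist decision can be sketched as follows (illustrative only; the rule shape is modeled on the `excluded_processes` field described above):

```python
# Hypothetical in-memory rule shape: a watched path plus the processes
# that are expected to touch it.
rule = {
    "path": "~/.ssh/id_rsa",
    "excluded_processes": ["ssh", "ssh-agent", "git"],
}

def should_alert(rule, accessing_process):
    """Alert unless the accessing process is on the rule's allowlist."""
    return accessing_process not in rule["excluded_processes"]

print(should_alert(rule, "ssh"))   # expected access: no alert
print(should_alert(rule, "curl"))  # unexpected access: alert
```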
Examples
Desktop + webhook alerts
sandtrace watch --alert desktop --alert webhook:https://hooks.slack.com/services/T00/B00/XXX
Daemon mode
sandtrace watch --daemon --pid-file /tmp/sandtrace-watch.pid --alert syslog
Runs in the background and logs to syslog. Use the PID file to stop the daemon:
kill $(cat /tmp/sandtrace-watch.pid)
Monitor additional paths
sandtrace watch --paths /opt/vault/creds/ --paths /etc/ssl/private/
Built-in watch rules
See Watch Rules for the full list of 19 built-in rules that monitor credential files, supply-chain directories, and exfiltration attempts.
sandtrace run
Executes untrusted binaries inside an 8-layer isolation sandbox with ptrace-based syscall tracing.
Usage
sandtrace run --trace-only -vv /bin/ls /tmp # Trace only (no enforcement)
sandtrace run --allow-path ./project --output trace.jsonl npm install # With filesystem restriction
sandtrace run --policy policies/strict.toml ./untrusted # Custom policy
sandtrace run --allow-net curl https://example.com # Allow network
Flags
| Flag | Default | Description |
|---|---|---|
| `--policy` | — | TOML policy file |
| `--allow-path` | — | Allow filesystem access to path (repeatable) |
| `--allow-net` | `false` | Allow network access |
| `--allow-exec` | `false` | Accepted by the CLI; reserved for future child execution controls |
| `-o`, `--output` | — | JSONL output file |
| `--timeout` | 30 | Kill process after N seconds |
| `--trace-only` | `false` | Disable enforcement (no Landlock/seccomp) |
| `--follow-forks` | `true` | Trace child processes (enabled by default) |
| `--no-color` | `false` | Disable colored output |
| `-v` / `-vv` / `-vvv` | — | Verbosity level |
Sandbox layers
Applied in order after fork():
| Layer | Mechanism | What it does |
|---|---|---|
| 1 | User namespace | Gain capabilities without privilege escalation |
| 2 | Mount namespace | Restrict filesystem view |
| 3 | PID namespace | Hide host processes |
| 4 | Network namespace | Isolate network (unless --allow-net) |
| 5 | PR_SET_NO_NEW_PRIVS | Prevent privilege escalation via setuid |
| 6 | Landlock LSM | Kernel-level filesystem access control |
| 7 | seccomp-bpf | Block dangerous syscalls at kernel level |
| 8 | ptrace TRACEME | Signal readiness to tracer, raise SIGSTOP |
Layers 6-7 are skipped when --trace-only is set.
Always-blocked syscalls
The following syscalls are always blocked by the seccomp-bpf filter, regardless of policy:
kexec_load, kexec_file_load, reboot, swapon, swapoff, init_module, finit_module, delete_module, acct, pivot_root
JSONL output format
When using --output, each line is a self-contained JSON object:
Syscall event
{
"event_type": "syscall",
"timestamp": "2025-01-01T12:00:00Z",
"pid": 1234,
"syscall": "openat",
"syscall_nr": 257,
"args": {"raw": [0, 140234567, 0, 0, 0, 0]},
"return_value": 3,
"success": true,
"duration_us": 42,
"action": "allow",
"category": "file_read"
}
Process lifecycle
{"event_type": "process", "kind": "spawned", "timestamp": "2025-01-01T12:00:00Z", "parent_pid": 1234, "child_pid": 1235}
{"event_type": "process", "kind": "exited", "timestamp": "2025-01-01T12:00:01Z", "pid": 1234, "exit_code": 0}
Trace summary
{"event_type": "summary", "timestamp": "2025-01-01T12:00:01Z", "total_syscalls": 4521, "unique_syscalls": 23, "denied_count": 2, "process_count": 3, "duration_ms": 1200, "exit_code": 0}
Examples
Trace a command without enforcement
sandtrace run --trace-only -vv /bin/ls /tmp
Useful for understanding what syscalls a binary makes before writing a policy.
Sandbox npm install
sandtrace run --allow-path ./my-project --output npm-trace.jsonl npm install
Restricts filesystem access to the project directory, traces all syscalls, and writes results to a JSONL file for analysis.
Use a policy file
sandtrace run --policy examples/strict.toml ./untrusted-binary
See Policies for the TOML policy format and example policies.
sandtrace init
Initializes the ~/.sandtrace/ directory with default configuration and rule files.
Usage
sandtrace init # Create config and rules (skip if exists)
sandtrace init --force # Overwrite existing files
What it creates
~/.sandtrace/
├── config.toml # Global settings
└── rules/
├── credential-access.yml # 14 credential file monitoring rules
├── supply-chain.yml # 4 supply-chain attack detection rules
└── exfiltration.yml # Data exfiltration detection rules
config.toml
The global configuration file. Controls:
- Rules directory paths
- Default alert channels for watch mode
- Obfuscation scanner thresholds
- Custom credential and IOC patterns
- Redaction markers for false positive suppression
See Configuration for the full reference.
rules/credential-access.yml
14 rules that define which credential files to monitor and which processes are allowed to access them. Used by sandtrace watch.
See Watch Rules for the full rule list.
rules/supply-chain.yml
4 rules that detect suspicious writes to dependency directories (node_modules, pip config, cargo registry) from unexpected processes.
rules/exfiltration.yml
Rules that detect potential data exfiltration patterns, such as unexpected curl/wget outbound requests during build or install operations.
Flags
| Flag | Default | Description |
|---|---|---|
| `--force` | `false` | Overwrite existing files |
When to re-initialize
Run sandtrace init --force after upgrading sandtrace to get the latest default rules and configuration. Your custom rules in additional directories will not be affected.
Configuration
sandtrace is configured through ~/.sandtrace/config.toml, created by sandtrace init.
config.toml reference
# Primary rules directory
rules_dir = "~/.sandtrace/rules"
# Additional rule directories (e.g. community packs)
additional_rules = ["~/.sandtrace/community-rules"]
# Default alert channels for watch mode
default_alerts = ["stdout"]
# Obfuscation scanner thresholds
[obfuscation]
max_trailing_spaces = 20 # Flag lines with more trailing spaces than this
steganographic_column = 200 # Flag content found past this column
enable_typosquat = false # Enable typosquatting detection (Levenshtein distance 1)
known_internal_prefixes = [] # Package prefixes for dependency confusion detection
# known_internal_prefixes = ["@mycompany/", "internal-"]
# Custom patterns (appended to built-in patterns)
[[custom_patterns]]
id = "cred-internal-api"
description = "Internal API key found in source"
severity = "high" # critical, high, medium, low, info
pattern = 'INTERNAL_[A-Z0-9]{32}'
# IOC example: literal string match
[[custom_patterns]]
id = "ioc-c2-domain"
description = "Known C2 domain found in source"
severity = "critical"
match_type = "literal"
pattern = "malware-c2.example.com"
tags = ["ioc", "c2"]
See examples/config.toml for a fully commented example.
Settings
pattern_files
Array of additional TOML files containing [[custom_patterns]] entries. Patterns from these files are merged with custom_patterns defined in config.toml at load time. Missing files log a warning but do not cause an error.
pattern_files = ["~/.sandtrace/npm-malware.toml"]
This is useful for large, auto-generated pattern sets (e.g. the npm malware IOC feed) that you don't want mixed into your hand-curated config file.
rules_dir
Path to the primary rules directory. Defaults to ~/.sandtrace/rules.
additional_rules
Array of additional rule directory paths. Rules from all directories are merged. Useful for community rule packs or team-specific rules.
default_alerts
Default alert channels for sandtrace watch when no --alert flag is specified. Options: stdout, desktop, syslog, webhook:<url>.
obfuscation
Thresholds and feature flags for the obfuscation detection scanner used by sandtrace audit:
- max_trailing_spaces — Lines with more trailing spaces than this are flagged (default: 20).
- steganographic_column — Content found past this column is flagged as potentially hidden (default: 200).
- enable_typosquat — Enable typosquatting detection, which flags package names that are 1 Levenshtein edit distance from popular npm/pip packages (default: false). Disabled by default to avoid false positives on private packages.
- known_internal_prefixes — List of package name prefixes that indicate internal/private packages. Used by the dependency confusion rule to flag internal-looking packages without a private registry configured (default: empty).
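The typosquat check can be sketched as a distance-1 comparison against a popular-package list (illustrative; the list below is a stand-in, not sandtrace's actual dataset):

```python
def within_one_edit(a, b):
    """True if a and b are within Levenshtein distance 1."""
    if a == b:
        return True
    if abs(len(a) - len(b)) > 1:
        return False
    for i in range(max(len(a), len(b))):
        if a[:i] + a[i + 1:] == b or b[:i] + b[i + 1:] == a:
            return True  # one insertion/deletion apart
        if len(a) == len(b) and a[:i] + a[i + 1:] == b[:i] + b[i + 1:]:
            return True  # one substitution apart
    return False

POPULAR = ["requests", "lodash", "express"]  # stand-in list

def looks_typosquatted(name):
    # Exact matches are the real package, not a squat.
    return any(name != p and within_one_edit(name, p) for p in POPULAR)

print(looks_typosquatted("request"))  # True: one deletion from "requests"
print(looks_typosquatted("flask"))    # False: not near any popular name
```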
Backward compatibility: existing config files using `[shai_hulud]` will continue to work.
custom_patterns
Add custom detection patterns that are appended to the built-in set. Each pattern supports three match types (`regex`, `literal`, or `filename`) and accepts the following fields:
| Field | Required | Default | Description |
|---|---|---|---|
| `id` | Yes | — | Unique identifier for the rule |
| `description` | Yes | — | Human-readable description |
| `severity` | Yes | — | `critical`, `high`, `medium`, `low`, or `info` |
| `pattern` | Yes | — | Pattern to match (regex, literal string, or filename) |
| `match_type` | No | `"regex"` | `"regex"`, `"literal"`, or `"filename"` |
| `file_extensions` | No | `[]` | Only scan files with these extensions (empty = all) |
| `tags` | No | `[]` | Tags for categorization (e.g. `"ioc"`, `"malware"`) |
See Custom Rules — IOC Rules for IOC examples.
Redaction markers
Lines containing redaction markers are skipped during audit to prevent false positives. Default markers include placeholder, your_token, changeme, process.env, {{ ., ${, and more.
Add your own markers:
redaction_markers = [
# ... default markers ...
"test_fixture_value",
"my_custom_marker",
]
Inline suppression
Suppress a specific finding by adding a comment on the line above it:
// @sandtrace-ignore
const EXAMPLE_KEY = "AKIAIOSFODNN7EXAMPLE";
Both @sandtrace-ignore and sandtrace:ignore are recognized. The suppression applies only to the immediately following line.
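The suppression rule can be sketched as a one-line lookahead (illustrative; real comment handling may differ):

```python
MARKERS = ("@sandtrace-ignore", "sandtrace:ignore")

def suppressed_lines(source):
    """Return 1-based line numbers whose findings should be suppressed."""
    lines = source.splitlines()
    return {
        i + 2  # suppression applies only to the line after the marker
        for i, line in enumerate(lines)
        if any(m in line for m in MARKERS)
    }

code = '''// @sandtrace-ignore
const EXAMPLE_KEY = "AKIAIOSFODNN7EXAMPLE";
const OTHER_KEY = "AKIAIOSFODNN7EXAMPLE";'''
print(suppressed_lines(code))  # {2}: line 3 is still reported
```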
Cloud Ingest
Sandtrace Cloud is split into two parts:
- the CLI uploader in `sandtrace`
- the ingest workload in `sandtrace-ingest`
The CLI produces and uploads audit, run, and SBOM payloads when SANDTRACE_API_KEY is set. The ingest workload receives those machine-facing payloads, validates them, persists them, and exposes lightweight read APIs for recent records and dashboard summaries. The stable contract is documented in docs/cloud-ingestion-spec.md.
Why this is separate
The ingest workload is intentionally separate from the product UI.
- `sandtrace` stays a local-first CLI
- `sandtrace-ingest` stays machine-facing and write-heavy
- a future Laravel app or dashboard can sit on top of the normalized records instead of handling raw uploads directly
For sandtrace run, the recommended long-term product model is a separate hosted execution add-on rather than forcing privileged tracing into standard CI runners. See Hosted Runtime Analysis.
Service endpoints
Current endpoints exposed by sandtrace-ingest:
| Method | Path | Purpose |
|---|---|---|
| GET | `/healthz` | Liveness check |
| GET | `/v1/admin/api-keys` | List hashed API keys from Postgres |
| GET | `/v1/admin/api-key-events` | List admin API key lifecycle events from Postgres |
| POST | `/v1/admin/api-keys` | Mint a new API key in Postgres |
| POST | `/v1/admin/api-keys/{api_key_hash}` | Deactivate an API key |
| DELETE | `/v1/admin/api-keys/{api_key_hash}` | Permanently remove an inactive API key |
| POST | `/v1/admin/api-keys/{api_key_hash}/rotate` | Replace an active API key and return a new plaintext key once |
| POST | `/v1/ingest/audit` | Accept an audit upload |
| POST | `/v1/ingest/run` | Accept a run upload |
| POST | `/v1/ingest/sbom` | Accept an SBOM upload |
| GET | `/v1/ingest/audits` | List recent audit index records |
| GET | `/v1/ingest/runs` | List recent run index records |
| GET | `/v1/ingest/sboms` | List recent SBOM index records |
| GET | `/v1/ingest/audit/{id}` | Fetch one audit record and payload |
| GET | `/v1/ingest/run/{id}` | Fetch one run record and payload |
| GET | `/v1/ingest/sbom/{id}` | Fetch one SBOM record and payload |
| GET | `/v1/projects/overview` | Return one row per visible project with latest activity and current SBOM alert counts |
| GET | `/v1/sbom/inventory` | Return package inventory for one SBOM or commit |
| GET | `/v1/sbom/timeline` | Return commit-level SBOM history with package-change and security-alert counts |
| GET | `/v1/sbom/diff` | Return package additions, removals, and version changes between two SBOMs |
| GET | `/v1/sbom/alerts` | Return direct-package additions and direct version-change alerts from the latest SBOM comparison |
| GET | `/v1/sbom/advisories` | Query OSV for vulnerability matches on packages from one SBOM or commit |
| GET | `/v1/sbom/security-alerts` | Return vulnerable direct-package additions and vulnerable direct version changes from the latest SBOM comparison |
| GET | `/v1/sbom/security-alerts/history` | Return persisted vulnerable package-change history with filters for project, commit, kind, and package identity |
| GET | `/v1/dashboard/overview` | Return dashboard-ready aggregate counts |
API Versioning Policy
All ingest API endpoints are prefixed with /v1/. This section defines when and how the version changes.
Compatibility guarantees for /v1/
- Additive changes are non-breaking. New fields in response JSON, new optional query parameters, and new endpoints under `/v1/` can be added without a version bump. Clients must ignore unknown fields.
- Removing or renaming a response field is breaking. This requires a new version (`/v2/`).
- Changing the type of an existing field is breaking (e.g., string → number, object → array).
- Changing the meaning of an existing field is breaking.
- Removing an endpoint is breaking. Deprecated endpoints remain available for at least 90 days after deprecation notice.
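A client honoring the "ignore unknown fields" rule can be sketched as (the response shape and field names here are hypothetical):

```python
# Tolerant /v1/ client parsing: extract only known fields and silently
# ignore anything the server adds in a later additive release.
KNOWN_FIELDS = ("id", "org_slug", "project_slug", "created_at")

def parse_record(payload):
    return {k: payload[k] for k in KNOWN_FIELDS if k in payload}

response = {
    "id": "a1",
    "org_slug": "acme",
    "project_slug": "web",
    "created_at": "2025-01-01T12:00:00Z",
    "new_server_field": "added in a later release",  # safely ignored
}
print(parse_record(response))
```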
When to create /v2/
A new API version is warranted when:
- The SBOM schema changes in a way that alters existing field semantics
- Authentication model changes (e.g., replacing Bearer tokens with a different scheme)
- A fundamental change to the ingest payload format
Deprecation process
- Add a `Sunset` HTTP header to deprecated endpoints with the removal date
- Log warnings when deprecated endpoints are called
- Document the migration path in release notes
- Maintain deprecated endpoints for a minimum of 90 days
CLI-to-cloud compatibility
The CLI (sandtrace audit --upload, sandtrace sbom --upload) and the ingest service must stay compatible across releases. The CLI always targets the latest API version it was built against. When a breaking change is introduced:
- The new CLI version targets `/v2/`
- The ingest service supports both `/v1/` and `/v2/` simultaneously
- Older CLI versions continue working against `/v1/` until sunset
Current status
All endpoints are /v1/. No breaking changes are planned.
Environment variables
CLI uploader
| Variable | Purpose |
|---|---|
| `SANDTRACE_API_KEY` | Enables upload from `sandtrace audit`, `sandtrace run`, and `sandtrace sbom` |
| `SANDTRACE_CLOUD_URL` | Base URL for the ingest service |
| `SANDTRACE_CLOUD_TIMEOUT_MS` | Upload timeout budget |
| `SANDTRACE_CLOUD_ENVIRONMENT` | Logical environment label |
| `SANDTRACE_CLOUD_RAW_TRACE` | Raw trace policy flag parsed by the client |
Ingest service
| Variable | Purpose |
|---|---|
| `SANDTRACE_INGEST_BIND` | Bind address, default `127.0.0.1:8080` |
| `SANDTRACE_INGEST_ADMIN_TOKEN` | Bearer token required for admin API key endpoints |
| `SANDTRACE_INGEST_ADMIN_SUBJECT` | Label stored in API key lifecycle events, default `admin-token` |
| `SANDTRACE_INGEST_DIR` | Storage root, default `./var/ingest` |
| `SANDTRACE_INGEST_DATABASE_URL` | Optional Postgres DSN for normalized metadata records |
| `SANDTRACE_INGEST_KEYS_FILE` | JSON file of API key principals |
| `SANDTRACE_INGEST_API_KEYS` | Comma-separated fallback key list |
| `SANDTRACE_INGEST_ORG` | Fallback org slug when using env-only keys |
| `SANDTRACE_INGEST_PROJECT` | Fallback project slug when using env-only keys |
| `SANDTRACE_INGEST_ACTOR` | Fallback actor label when using env-only keys |
| `SANDTRACE_OSV_API_URL` | Optional OSV API base URL, default `https://api.osv.dev` |
| `SANDTRACE_OSV_CACHE_TTL_HOURS` | Advisory cache freshness window in hours, default 24 |
Principal file format
Use a JSON file when you want multiple orgs or projects on one ingest instance.
Example: examples/ingest-principals.json
[
{
"api_key": "st_dev_acme_web_123",
"org_slug": "acme",
"project_slug": "web",
"actor": "ci"
}
]
Local end-to-end flow
1. Start the ingest service
SANDTRACE_INGEST_KEYS_FILE=examples/ingest-principals.json \
cargo run --bin sandtrace-ingest
2. Send an audit upload
SANDTRACE_API_KEY=st_dev_acme_web_123 \
SANDTRACE_CLOUD_URL=http://127.0.0.1:8080 \
sandtrace audit .
3. Send a run upload
SANDTRACE_API_KEY=st_dev_acme_web_123 \
SANDTRACE_CLOUD_URL=http://127.0.0.1:8080 \
sandtrace run --trace-only /bin/true
4. Send an SBOM upload
SANDTRACE_API_KEY=st_dev_acme_web_123 \
SANDTRACE_CLOUD_URL=http://127.0.0.1:8080 \
sandtrace sbom . --output bom.json
5. Query recent ingests
curl -H "Authorization: Bearer st_dev_acme_web_123" \
http://127.0.0.1:8080/v1/ingest/audits
curl -H "Authorization: Bearer st_dev_acme_web_123" \
http://127.0.0.1:8080/v1/ingest/runs
curl -H "Authorization: Bearer st_dev_acme_web_123" \
http://127.0.0.1:8080/v1/ingest/sboms
6. Query dashboard summary
curl -H "Authorization: Bearer st_dev_acme_web_123" \
http://127.0.0.1:8080/v1/dashboard/overview
7. Mint an API key
curl -H "Authorization: Bearer dev-admin-token" \
-H "Content-Type: application/json" \
-d '{"org_slug":"acme","project_slug":"worker","actor":"ci"}' \
http://127.0.0.1:8080/v1/admin/api-keys
8. Rotate an API key
curl -X POST \
-H "Authorization: Bearer dev-admin-token" \
http://127.0.0.1:8080/v1/admin/api-keys/<api_key_hash>/rotate
9. Delete an inactive API key
curl -X DELETE \
-H "Authorization: Bearer dev-admin-token" \
http://127.0.0.1:8080/v1/admin/api-keys/<api_key_hash>
10. Query API key lifecycle events
curl -H "Authorization: Bearer dev-admin-token" \
"http://127.0.0.1:8080/v1/admin/api-key-events?org_slug=acme&limit=20"
Docker Compose stack
Use docker-compose.ingest.yml when you want a local Postgres-backed stack without installing Rust or Postgres directly on the host.
docker compose -f docker-compose.ingest.yml up --build
The stack starts:
- `postgres` on `127.0.0.1:5432`
- `sandtrace-ingest` on `127.0.0.1:8080`
It uses:
- `Dockerfile.ingest` for the ingest service image
- `examples/ingest-principals.json` for API key principals
- a named volume for raw payload files and a separate named volume for Postgres data
Storage model today
Today the ingest workload stores:
- raw accepted payloads as JSON files
- normalized index records as JSON files
- records partitioned by authenticated `org_slug`
If SANDTRACE_INGEST_DATABASE_URL is set, normalized index records are also written to Postgres and the read endpoints prefer Postgres for list, detail, and dashboard queries. Raw payloads remain on disk.
With Postgres enabled, the ingest service also maintains:
- `organizations`
- `projects`
- `ingest_api_keys`
API keys are stored as SHA-256 hashes, not plaintext. Principals loaded from SANDTRACE_INGEST_KEYS_FILE or the fallback env vars are upserted into those tables on startup, and request authorization prefers the database-backed keys before falling back to in-memory config.
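The hash-at-rest model can be sketched as follows (a minimal illustration of SHA-256 key storage, not the service's actual schema or auth path):

```python
import hashlib
import secrets

def mint_key(prefix="st"):
    """Return (plaintext, stored_hash); plaintext is shown only once."""
    plaintext = f"{prefix}_{secrets.token_hex(16)}"
    return plaintext, hashlib.sha256(plaintext.encode()).hexdigest()

def authorize(presented_key, stored_hashes):
    """Hash the presented key and compare against stored hashes."""
    digest = hashlib.sha256(presented_key.encode()).hexdigest()
    return digest in stored_hashes

plaintext, stored = mint_key()
print(authorize(plaintext, {stored}))       # True: hash matches
print(authorize("st_wrong_key", {stored}))  # False: unknown key
```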
When Postgres auth is enabled, the database is authoritative for request auth. The file or env principals are treated as startup seed data, so deactivated or rotated keys stop working immediately even if they originally came from SANDTRACE_INGEST_KEYS_FILE.
Bootstrapping is non-destructive: it inserts missing keys, but it does not reactivate inactive hashes or mark keys as recently used on startup.
The admin endpoints return plaintext API keys only once at creation time. Subsequent reads expose only the stored hash and metadata.
Rotation follows the same rule: the replacement plaintext key is only returned by the rotate response, and the replaced key is marked inactive.
Deletion is only allowed for inactive keys so an admin cannot accidentally hard-delete the only active credential for a project without first revoking it.
Keys with a project_slug are project-scoped for reads. Keys without a project_slug can read records across the whole organization.
The service also records API key lifecycle events for created, deactivated, rotated, and deleted. Those events are stored in Postgres and can be queried through /v1/admin/api-key-events for operational auditing.
This is enough for local evaluation and API-contract testing, but not the intended production storage model.
Production direction
The expected next step is:
- API keys stored in a real auth table
- normalized records in Postgres
- raw payloads or optional raw traces in object storage
- Laravel or another product app reading normalized records for customer-facing dashboards
SBOM handling
SBOMs need a different treatment from audit and run because the generated CycloneDX document is already the portable artifact customers expect to export, diff, and enrich later.
The current cloud flow is:
- `sandtrace sbom` uploads the raw CycloneDX JSON when `SANDTRACE_API_KEY` is set.
- The ingest layer stores that raw SBOM unchanged for evidence and export use.
- The ingest layer stores normalized SBOM summary records keyed by org, project, commit, and SBOM hash.
- When `SANDTRACE_INGEST_DATABASE_URL` is configured, the ingest layer also writes normalized package rows into Postgres.
- The read API serves package inventory views and commit diffs from those normalized rows when available, with file-backed fallback when they are absent.
- The product layer can use those records for “new package introduced” alerts and future advisory enrichment.
Today that alert surface is exposed as GET /v1/sbom/alerts, which compares the latest SBOM to the previous SBOM for each visible project and emits only:
- new direct packages
- direct package version changes
On-demand advisory enrichment is exposed as GET /v1/sbom/advisories. It queries OSV for the selected SBOM or commit and returns package-to-vulnerability matches.
When SANDTRACE_INGEST_DATABASE_URL is configured, advisory results are cached in Postgres by package query key. The response summary includes:
- `cache_hits`
- `fresh_queries`
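For illustration, a response summary carrying those counters might look like the following (the `summary` envelope and the values are assumptions; only the two field names come from the contract above):

```json
{
  "summary": {
    "cache_hits": 12,
    "fresh_queries": 3
  }
}
```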
Security-focused change detection is exposed as GET /v1/sbom/security-alerts. It compares the latest SBOM to the previous SBOM for each visible project, uses the cached OSV advisory layer, and emits only:
- `new_vulnerable_direct_package`
- `vulnerable_direct_version_change`
Persisted alert history is exposed as GET /v1/sbom/security-alerts/history. When Postgres is enabled, the ingest service writes those alerts at SBOM ingest time and serves them back without re-querying OSV. If the persisted table is empty, the history route backfills it from normalized SBOM package rows and the OSV cache before returning results. The history endpoint supports filters for:
- `project_slug`
- `kind`
- `from_git_commit`
- `to_git_commit`
- `package_identity`
Commit history for UI timelines is exposed as GET /v1/sbom/timeline. It returns one record per visible SBOM upload with:
- `component_count`
- `direct_dependency_count`
- `diff_base_git_commit`
- `package_alert_count`
- `security_alert_count`
That gives the product app a single read for “what changed on this commit” without stitching together inventory, diff, and alert endpoints client-side.
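As a sketch, one timeline record could look like this (field names are the ones listed above; the values and exact envelope are illustrative):

```json
{
  "git_commit": "c13aa82903ea336cf3f21bdf2d930dc1a41f65cf",
  "component_count": 412,
  "direct_dependency_count": 38,
  "diff_base_git_commit": "b7d1c6e2a9f04d3c8e5a1f20b64c7d9e3a8f5012",
  "package_alert_count": 2,
  "security_alert_count": 1
}
```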
Project landing views are exposed as GET /v1/projects/overview. It returns one row per visible project with:
- latest activity timestamp
- upload counts for `audit`, `run`, and `sbom`
- latest audit, run, and SBOM index records
- current package-change alert count for the latest SBOM
- current vulnerable package-change alert count for the latest SBOM
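A hypothetical overview row, with field names invented here for illustration (the actual response shape is defined by the read API):

```json
{
  "project_slug": "web",
  "latest_activity_at": "2026-03-17T20:03:20Z",
  "upload_counts": { "audit": 14, "run": 3, "sbom": 14 },
  "latest_sbom_package_alert_count": 2,
  "latest_sbom_security_alert_count": 1
}
```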
The contract and next persistence step live in docs/cloud-ingestion-spec.md under POST /v1/ingest/sbom.
Manual `sandtrace run` Workflow
When the shared GitHub workflow runs on hosted runners it only executes `audit` and `sbom`. The `run` command requires a privileged host that allows ptrace and namespace creation. Follow these steps once, then reuse them as needed.
1. Provision a privileged host
- Use a VM, dedicated container, or self-hosted GitHub runner that you control.
- Ensure the kernel allows `CAP_SYS_PTRACE` and that a quick probe like the following exits without errors (an example check; it verifies the container can read the kernel's Yama ptrace setting):

docker run --rm --cap-add=SYS_PTRACE ubuntu:24.04 sh -c 'cat /proc/sys/kernel/yama/ptrace_scope'
- On that host install `sandtrace` (v0.3.0) or copy `/usr/local/bin/sandtrace` from this repo.
2. Run a command through the sandbox
sandtrace run --allow-exec --timeout 60 --trace-only=false -- echo hi
- `--allow-exec` lets the traced process spawn children.
- `--timeout` avoids hanging forever.
- `--trace-only` should stay `false` so enforcement runs, matching your CI needs.
If the run succeeds, sandtrace prints JSONL to stdout. Save it (e.g. run.jsonl).
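Once the trace is saved, even plain shell tools give a quick first pass. This sketch assumes each JSONL event carries a `syscall` field; the actual trace schema may name things differently:

```shell
# Fabricated three-line trace standing in for a real run.jsonl
# (the "syscall" field name is an assumption about the schema).
printf '%s\n' \
  '{"syscall":"openat","path":"/etc/ld.so.cache"}' \
  '{"syscall":"connect","addr":"203.0.113.7:443"}' \
  '{"syscall":"openat","path":"/home/dev/.ssh/id_rsa"}' > run.jsonl

# Count events per syscall name to spot unexpected activity at a glance.
grep -o '"syscall":"[a-z0-9_]*"' run.jsonl | sort | uniq -c | sort -rn
```

On a real trace the same one-liner surfaces which syscalls dominate, which is often enough to notice an install script that opens credential files or dials out.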
3. Upload to Sandtrace Cloud
curl -X POST https://ingest.sandtrace.cloud/v1/ingest/runs \
-H "Authorization: Bearer $SANDTRACE_API_KEY" \
-H "Content-Type: application/json" \
-d @run.jsonl
- Use the org-scoped API key created via the ingest admin API.
- If the request succeeds, the response contains `run_id`; the “cloud” dashboard shows the new run under the corresponding project.
4. Document the process
Add a short note to your repo’s README or internal wiki describing:
- Which host/runners can execute `sandtrace run`.
- The command above plus any extra flags (e.g., `--allow-net`).
- That `audit` + `sbom` stay on GitHub-hosted runners while runs go to the privileged host.
When a new teammate needs sandtrace run, point them at this guide.
Hosted Runtime Analysis
Hosted Runtime Analysis is the recommended product model for `sandtrace run`.
`audit` and `sbom` fit standard CI runners. `run` does not. It depends on ptrace, namespace creation, and a tightly controlled Linux environment. That makes it a separate operational product, not just another step in the default GitHub workflow.
Product shape
Recommended packaging:
- base plan: `audit` + `sbom`
- add-on: Hosted Runtime Analysis
- enterprise add-on: dedicated isolated runner pool with stronger tenancy controls
Recommended positioning:
- base plan catches static package risk before merge
- hosted runtime analysis executes package install or setup commands in Sandtrace-managed workers
- customers get runtime telemetry without owning ptrace-capable CI infrastructure
Why this should be separate
`sandtrace run` is not reliable on:
- GitHub-hosted runners
- many WSL environments
- locked-down containers without full namespace and ptrace support
It is reliable on:
- native privileged Linux hosts
- Sandtrace-managed isolated workers
- customer self-hosted runners that meet the sandbox requirements
That makes `run` a good premium capability:
- it costs real infrastructure to operate
- it needs queueing and scheduling
- it has a different support and security profile from `audit` and `sbom`
Customer workflow
Default flow
- Customer installs the GitHub integration or reusable workflow.
- Standard CI runs `sandtrace audit` and `sandtrace sbom`.
- Customer enables Hosted Runtime Analysis for selected repos.
- Sandtrace receives a runtime job request on selected events.
- Sandtrace checks out the repo in an isolated privileged worker.
- Sandtrace executes the configured command through `sandtrace run`.
- Results are uploaded to `sandtrace-ingest` and shown in Sandtrace Cloud.
- GitHub receives a check result or PR comment.
First supported triggers
Ship the simplest useful set first:
- manual “Run hosted analysis” button from the product UI
- pull request to protected branch
- push to default branch
Keep the runtime trigger narrow at first:
- dependency manifest changes
- lockfile changes
- install script changes
Repo-level configuration
Each repo that enables Hosted Runtime Analysis needs a small configuration record:
- command to execute, such as `pnpm install` or `npm ci`
- working directory
- timeout
- branch rules
- event rules
- whether outbound network is allowed
- whether child process execution is allowed
Good first defaults:
- command: package-manager install command inferred from repo files
- timeout: `300` seconds
- branch rules: protected branches and pull requests
- network: enabled only when the install process needs it
- process execution: enabled
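Put together, a repo-level configuration record could serialize much like the execution payload used later in the orchestrator spec (the `branch_rules` and `event_rules` field names here are illustrative):

```json
{
  "command": ["npm", "ci"],
  "working_directory": ".",
  "timeout_seconds": 300,
  "branch_rules": ["main", "release/*"],
  "event_rules": ["pull_request", "push"],
  "allow_network": true,
  "allow_exec": true
}
```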
Architecture
Hosted Runtime Analysis should be built as a service layer around the existing ingest pipeline.
The first concrete implementation boundary is documented in Runtime Orchestrator Spec.
Components
- `sandtrace-web`: billing, repo settings, user controls, results UI
- `runtime-orchestrator`: accepts jobs, schedules workers, tracks status
- `runtime-workers`: ephemeral privileged Linux workers that execute `sandtrace run`
- `sandtrace-ingest`: accepts normalized `run` uploads and serves read APIs
- queue and metadata store: job state, retries, metering, worker assignment
Execution path
- Product UI or GitHub event requests a hosted runtime job.
- `runtime-orchestrator` validates plan entitlements and repo settings.
- Orchestrator creates a queued job.
- A worker claims the job.
- Worker fetches repo contents with a GitHub App installation token.
- Worker executes the configured command through `sandtrace run`.
- Worker uploads the resulting `run` payload to `sandtrace-ingest`.
- Orchestrator marks the job complete and publishes check/status output back to GitHub.
Worker requirements
Workers should be:
- native Linux
- ephemeral per job
- privileged enough for `ptrace` and namespace creation
- isolated from one another
- configured with strict egress policy
- short-lived with guaranteed teardown
Security model
Hosted Runtime Analysis is higher risk than static scanning and should be designed that way from the start.
Required controls
- one fresh worker per job
- no shared writable workspace between jobs
- installation-token checkout instead of long-lived repo credentials
- short-lived upload credentials
- strict timeout and kill behavior
- upload only the normalized `run` result and selected evidence
- explicit retention policy for raw traces or evidence slices
Recommended controls
- outbound network policy per job
- package registry allowlists
- environment variable injection policy
- encryption for evidence at rest
- audit log for who triggered each run
Billing model
Recommended packaging:
- base plan includes `audit` and `sbom`
- Hosted Runtime Analysis is an add-on
- enterprise tier can upgrade to dedicated workers
Recommended metering:
- base add-on fee
- plus usage by runtime minute or completed run
This matches the actual cost model better than folding run into the base plan.
UI changes
Sandtrace Cloud should expose Hosted Runtime Analysis as a clearly separate capability.
Billing and plan UI
- add-on enabled or disabled
- monthly usage summary
- run-minute or run-count consumption
- upgrade CTA when disabled
Repo settings UI
- enable hosted runtime analysis for this repo
- command to execute
- branch and event rules
- timeout
- network policy
Results UI
- list of hosted runtime jobs
- current job status
- run detail page
- verdict, suspicious events, and evidence summary
- links from project pages into runtime results
MVP scope
The first version should stay intentionally narrow.
Include
- shared worker pool only
- one Linux base image
- one command per repo
- manual trigger plus pull-request trigger
- upload results into the existing `run` cloud views
- basic GitHub check status output
Exclude
- customer-provided base images
- private networking
- long-lived workers
- multi-step runtime pipelines
- arbitrary secrets passthrough
- non-Linux workers
Recommended rollout
- Keep `audit` and `sbom` in the current reusable GitHub workflow.
- Treat local `sandtrace run` as an advanced developer workflow.
- Build Hosted Runtime Analysis as a paid Sandtrace-managed execution path.
- Add GitHub App support for repo installation, status checks, and job triggering.
- Keep the existing ingest service as the storage and read boundary for results.
Current recommendation
Until Hosted Runtime Analysis exists, the practical support model is:
- CI: `audit` + `sbom`
- local or self-hosted privileged Linux: `run`
- WSL: best-effort only, not a supported `run` platform
That keeps the current product reliable while leaving a clear path to a premium hosted execution model.
Runtime Orchestrator Spec
This document defines the first implementation boundary for Hosted Runtime Analysis.
`sandtrace-ingest` already accepts normalized `run` uploads. The missing piece is the service that decides when to execute a hosted runtime job, how a worker claims it, and how the result is handed off to ingest.
Scope
This spec covers:
- job submission
- job state transitions
- worker lease behavior
- runtime execution payloads
- result upload handoff into `sandtrace-ingest`
- minimal database schema
This spec does not cover:
- customer billing calculations
- dedicated worker pools
- custom base images
- private networking
- full GitHub App design
Core model
Job lifecycle
Each hosted runtime execution is a `runtime_job`.
Required high-level states:
- `queued`
- `running`
- `uploaded`
- `failed`
- `canceled`
Optional internal states that are useful but not required on day one:
- `lease_acquired`
- `checking_out`
- `executing`
- `uploading`
The public API should expose only the high-level states unless debugging requires more detail.
State rules
- new jobs start as `queued`
- only a worker with an active lease can move a job to `running`
- a job becomes `uploaded` only after `sandtrace-ingest` acknowledges the run payload
- terminal states are `uploaded`, `failed`, and `canceled`
- terminal jobs cannot be resumed; retries create a new job row linked to the original
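The state rules imply a transition graph along these lines (a sketch; lease expiry behavior follows the lease rules later in this spec):

```
queued          --(worker lease)-->        running
running         --(ingest acknowledges)--> uploaded   [terminal]
running         --(failure)-->             failed     [terminal]
queued/running  --(cancel)-->              canceled   [terminal]
running         --(lease expiry)-->        queued or failed, per retry policy
failed          --(retry)-->               new queued job linked via retry_of_job_ulid
```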
Job submission API
The orchestrator should accept a single create-job request from either the product UI or a GitHub-triggered integration layer.
POST /v1/runtime/jobs
Creates a hosted runtime job.
Example request:
{
"org_slug": "sandtrace",
"project_slug": "web",
"source": {
"kind": "github",
"repo_url": "https://github.com/cc-consulting-nv/web.git",
"owner": "cc-consulting-nv",
"repo": "web",
"ref": "refs/heads/main",
"git_commit": "c13aa82903ea336cf3f21bdf2d930dc1a41f65cf",
"pull_request_number": 98
},
"execution": {
"working_directory": ".",
"command": [
"pnpm",
"install"
],
"timeout_seconds": 300,
"allow_network": true,
"allow_exec": true
},
"trigger": {
"kind": "pull_request",
"actor": "github-app"
}
}
Example response:
{
"job_id": "rtj_01kkygkagq0jk17bx6y1w8c3df",
"status": "queued",
"created_at": "2026-03-17T20:00:00Z"
}
Validation rules
- org must have Hosted Runtime Analysis enabled
- repo/project must be enabled for hosted runtime analysis
- `command` must be non-empty
- `timeout_seconds` must be within the allowed plan limit
- `repo_url` and `git_commit` must be present
Job query API
GET /v1/runtime/jobs
Lists jobs for a visible org or project.
Recommended filters:
- `project_slug`
- `status`
- `trigger_kind`
- `git_commit`
- `limit`
GET /v1/runtime/jobs/{job_id}
Returns the job record, current status, and last event summary.
POST /v1/runtime/jobs/{job_id}/cancel
Cancels a job if it is still queued or running.
If a worker already holds a lease, the worker should observe the cancellation signal and stop execution as soon as possible.
Worker lease API
Workers should not scan the database directly for jobs. Use a lease endpoint so orchestration policy stays centralized.
POST /v1/runtime/leases
Claims one queued job and returns a worker lease plus the full execution payload.
Example request:
{
"worker_id": "wrk_01kkygmfjf4j9s26hzd0h93j0r",
"pool": "shared-linux",
"capabilities": {
"linux": true,
"ptrace": true,
"namespaces": true
}
}
Example response:
{
"lease_id": "rtl_01kkygn4z3fkef0g4tgm6g3b1j",
"job": {
"job_id": "rtj_01kkygkagq0jk17bx6y1w8c3df",
"org_slug": "sandtrace",
"project_slug": "web",
"source": {
"repo_url": "https://github.com/cc-consulting-nv/web.git",
"owner": "cc-consulting-nv",
"repo": "web",
"ref": "refs/heads/main",
"git_commit": "c13aa82903ea336cf3f21bdf2d930dc1a41f65cf"
},
"execution": {
"working_directory": ".",
"command": [
"pnpm",
"install"
],
"timeout_seconds": 300,
"allow_network": true,
"allow_exec": true
}
},
"lease_expires_at": "2026-03-17T20:05:00Z"
}
Lease rules
- a lease is exclusive to one worker
- a lease must expire automatically
- workers must renew the lease while running long jobs
- expired leases return the job to `queued` or `failed`, depending on retry policy
- lease expiry should emit a job event
POST /v1/runtime/leases/{lease_id}/heartbeat
Renews the lease expiry while the job is still healthy.
POST /v1/runtime/leases/{lease_id}/complete
Marks worker execution complete and provides the ingest handoff details.
Example request:
{
"result": {
"status": "uploaded",
"ingest_run_id": "run_20260317143329_bd39b25f6f31",
"uploaded_at": "2026-03-17T20:03:20Z"
}
}
POST /v1/runtime/leases/{lease_id}/fail
Marks the job failed and includes failure metadata.
Example request:
{
"result": {
"status": "failed",
"reason": "sandbox_apply_failed",
"message": "Namespace creation failed: EPERM"
}
}
Result upload handoff
The worker should upload the final `run` result to `sandtrace-ingest` using the existing ingest contract instead of inventing a second result store.
Worker flow
- worker claims lease
- worker checks out repo
- worker executes `sandtrace run`
- worker uploads the normalized `run` payload to `POST /v1/ingest/run`
- worker records the returned `run_id`
- worker completes the lease with `status=uploaded`
Required run upload metadata
The worker-generated upload should include:
- `org_slug`
- `project_slug`
- `repo_url`
- `git_commit`
- `command`
- `trigger_kind`
- `worker_id`
- `job_id`
`job_id` should be preserved inside the run payload metadata so the UI can link a hosted runtime job to the stored run record.
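Reusing the identifiers from the examples earlier in this spec, that metadata block might look like this (exact placement inside the run payload is left to the ingest contract):

```json
{
  "org_slug": "sandtrace",
  "project_slug": "web",
  "repo_url": "https://github.com/cc-consulting-nv/web.git",
  "git_commit": "c13aa82903ea336cf3f21bdf2d930dc1a41f65cf",
  "command": ["pnpm", "install"],
  "trigger_kind": "pull_request",
  "worker_id": "wrk_01kkygmfjf4j9s26hzd0h93j0r",
  "job_id": "rtj_01kkygkagq0jk17bx6y1w8c3df"
}
```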
Minimal database schema
The first implementation only needs three tables.
runtime_jobs
Suggested columns:
- `id`
- `job_ulid`
- `org_slug`
- `project_slug`
- `source_kind`
- `repo_url`
- `repo_owner`
- `repo_name`
- `git_ref`
- `git_commit`
- `pull_request_number`
- `trigger_kind`
- `trigger_actor`
- `working_directory`
- `command_json`
- `timeout_seconds`
- `allow_network`
- `allow_exec`
- `status`
- `retry_of_job_ulid`
- `ingest_run_id`
- `failure_reason`
- `failure_message`
- `created_at`
- `started_at`
- `finished_at`
Indexes:
- `(org_slug, project_slug, created_at desc)`
- `(org_slug, git_commit)`
- `(status, created_at)`
- unique `(job_ulid)`
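As a sketch, the suggested columns and indexes translate to roughly this Postgres DDL (all types, nullability, and defaults here are assumptions, not part of the spec):

```sql
-- Sketch only: column types and defaults are assumptions for a Postgres backend.
CREATE TABLE runtime_jobs (
    id                  BIGSERIAL PRIMARY KEY,
    job_ulid            TEXT NOT NULL UNIQUE,
    org_slug            TEXT NOT NULL,
    project_slug        TEXT NOT NULL,
    source_kind         TEXT NOT NULL,
    repo_url            TEXT NOT NULL,
    repo_owner          TEXT,
    repo_name           TEXT,
    git_ref             TEXT,
    git_commit          TEXT NOT NULL,
    pull_request_number INTEGER,
    trigger_kind        TEXT NOT NULL,
    trigger_actor       TEXT,
    working_directory   TEXT NOT NULL DEFAULT '.',
    command_json        JSONB NOT NULL,
    timeout_seconds     INTEGER NOT NULL,
    allow_network       BOOLEAN NOT NULL DEFAULT FALSE,
    allow_exec          BOOLEAN NOT NULL DEFAULT FALSE,
    status              TEXT NOT NULL DEFAULT 'queued',
    retry_of_job_ulid   TEXT,
    ingest_run_id       TEXT,
    failure_reason      TEXT,
    failure_message     TEXT,
    created_at          TIMESTAMPTZ NOT NULL DEFAULT now(),
    started_at          TIMESTAMPTZ,
    finished_at         TIMESTAMPTZ
);

CREATE INDEX ON runtime_jobs (org_slug, project_slug, created_at DESC);
CREATE INDEX ON runtime_jobs (org_slug, git_commit);
CREATE INDEX ON runtime_jobs (status, created_at);
```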
runtime_job_events
Suggested columns:
- `id`
- `job_ulid`
- `event_type`
- `actor_kind`
- `actor_id`
- `payload_json`
- `created_at`
Purpose:
- audit trail
- debugging
- timeline rendering
runtime_worker_leases
Suggested columns:
- `id`
- `lease_ulid`
- `job_ulid`
- `worker_id`
- `pool`
- `status`
- `leased_at`
- `expires_at`
- `completed_at`
Indexes:
- unique `(lease_ulid)`
- `(job_ulid, status)`
- `(worker_id, status)`
Retry policy
The first version should stay conservative.
- no automatic retry for upload failures without operator review
- allow one automatic retry for worker crash or lease expiry
- do not retry permanent validation failures
- retries create a new `runtime_jobs` row with `retry_of_job_ulid` set
UI implications
The product UI will need these read shapes later:
- recent hosted jobs per project
- job detail by `job_id`
- status badge for `queued`, `running`, `uploaded`, `failed`, `canceled`
- link from job detail to the uploaded run detail when `ingest_run_id` exists
That means the orchestrator should preserve `ingest_run_id` and terminal failure details from the first version onward.
MVP recommendations
The first implementation should:
- support only GitHub-backed jobs
- use one shared Linux worker pool
- allow one command per repo
- support only manual and pull-request triggers
- upload only the final normalized run payload
Do not add these yet:
- customer-provided worker images
- multi-step pipelines
- arbitrary environment variable passthrough
- private networking
- non-GitHub source providers
Relationship to current product behavior
Until the orchestrator exists:
- `audit` and `sbom` stay in standard CI
- `run` stays local or self-hosted on a privileged Linux environment
This spec is the bridge from that model to a hosted paid add-on.
Audit Detection Rules
`sandtrace audit` checks codebases against 50+ built-in detection rules in three categories: credential patterns, obfuscation detection (original rules plus three tiers), and supply-chain threats.
Credential patterns
| Rule ID | Severity | What it finds |
|---|---|---|
cred-aws-key | Critical | AWS Access Key IDs (AKIA...) |
cred-private-key | Critical | RSA, EC, DSA, OpenSSH private keys |
cred-github-token | Critical | GitHub PATs (ghp_, gho_, ghu_, ghs_, ghr_) |
cred-slack-token | Critical | Slack tokens (xoxb-, xoxp-, xoxa-, xoxr-, xoxs-) |
cred-stripe-key | Critical | Stripe API keys (sk_live_, pk_live_, sk_test_, pk_test_) |
cred-jwt-token | High | JWT tokens (eyJ...) |
cred-generic-password | High | Hardcoded password = "..." assignments |
cred-generic-secret | High | Hardcoded secret, token, api_key assignments |
Obfuscation detection
These rules detect code obfuscation techniques used in supply-chain attacks. They are organized into three tiers by sophistication.
Original rules
| Rule ID | Severity | What it finds |
|---|---|---|
obfuscation-trailing-whitespace | High | Excessive trailing whitespace (>20 chars) |
obfuscation-hidden-content | Critical | Content hidden past column 200 |
obfuscation-invisible-chars | Critical | Zero-width unicode characters (U+200B, U+FEFF, U+2060, etc.) |
obfuscation-base64 | Medium | Large base64-encoded blobs in source files |
obfuscation-homoglyph | High | Cyrillic/Greek homoglyphs mixed with ASCII |
Tier 1 — Encoding & string manipulation
| Rule ID | Severity | What it finds |
|---|---|---|
obfuscation-hex-escape | Medium | Chains of 3+ hex escape sequences (\x63\x75\x72\x6c). Skips .c/.h/.cpp files. |
obfuscation-unicode-escape | Medium | Chains of 3+ unicode escapes (\u0065\u0076\u0061\u006C). Skips .json files. |
obfuscation-string-concat | High | String concatenation hiding dangerous function names ('ev' + 'al') |
obfuscation-charcode | High | String.fromCharCode() and PHP chr() concatenation chains |
obfuscation-bracket-notation | High | Bracket notation hiding dangerous functions (window['ev' + 'al']) |
obfuscation-constructor-chain | Critical | .constructor.constructor() chains — almost exclusively malicious |
obfuscation-git-hook-injection | Critical | Suspicious content in .git/hooks/ (curl, wget, eval, pipe-to-shell) |
obfuscation-php-variable-function | High | PHP variable functions storing dangerous names ($fn = 'system') |
Tier 2 — Advanced obfuscation
| Rule ID | Severity | What it finds |
|---|---|---|
obfuscation-atob-chain | High | Nested atob(atob(...)) or large atob() payloads |
obfuscation-polyglot | Critical | Binary magic bytes (PNG/JPEG/PDF/ELF/MZ) in source file extensions |
obfuscation-symlink-attack | Critical | Symlinks targeting .ssh, .aws, .gnupg, /etc/shadow, .env, etc. |
obfuscation-filename-homoglyph | High | Cyrillic/Greek characters in filenames mixed with ASCII |
obfuscation-rot13 | Medium/High | PHP str_rot13() calls; elevated to High when decoding to dangerous functions |
obfuscation-template-literal | High | Adjacent template literal fragments in JS/TS (${'ev'}${'al'}) |
obfuscation-php-create-function | High | create_function() — deprecated PHP dynamic code execution |
obfuscation-php-backtick | Critical | PHP backtick execution operator (equivalent to shell_exec()) |
obfuscation-python-dangerous | High | __import__('os'), pickle.loads(), exec(compile()), marshal.loads() |
Tier 3 — Supply chain
| Rule ID | Severity | What it finds |
|---|---|---|
obfuscation-typosquat | High | Package names 1 edit distance from popular npm/pip packages. Requires enable_typosquat = true. |
obfuscation-dependency-confusion | High | Internal-looking packages (-internal, -private, @company/) without .npmrc |
obfuscation-install-script-chain | Critical | node -e, python -c, hidden dir refs, env var URLs in install scripts |
obfuscation-php-preg-replace-e | Critical | preg_replace() with /e modifier — executes replacement as PHP code |
obfuscation-suspicious-dotfile | Medium | Unknown dotfiles in source directories (src/, lib/, app/, etc.) |
obfuscation-proxy-reflect | Medium | new Proxy() / Reflect.apply() metaprogramming in JS/TS |
obfuscation-json-eval | Critical | eval(, Function(, javascript:, <script in .json files |
obfuscation-encoded-shell | Critical | `echo B64 \| base64 -d \| sh`-style pipelines that decode and execute encoded shell payloads
What are obfuscation attacks?
Obfuscation attacks hide malicious code using encoding, string manipulation, binary polyglots, or visual tricks. These techniques make payloads invisible to code review while remaining executable. Common vectors include:
- Encoding — hex escapes, unicode escapes, charcode construction, base64 nesting
- String splitting — concatenation (`'ev'+'al'`), bracket notation, template literals, ROT13
- Binary tricks — polyglot files (PNG header + JS payload), constructor chain exploits
- Filesystem — symlinks to sensitive files, homoglyph filenames, git hook injection, suspicious dotfiles
- Supply chain — typosquatting, dependency confusion, malicious install scripts, preg_replace /e
These rules detect the surface indicators that suggest malicious content is hiding in plain sight.
Supply-chain detection
| Rule ID | Severity | What it finds |
|---|---|---|
supply-chain-suspicious-script | Critical | package.json postinstall/preinstall scripts with curl, wget, eval(, base64, pipe-to-shell |
Custom IOC patterns
In addition to built-in rules, you can add custom indicators of compromise (IOCs) as detection rules. See Custom Rules — IOC Rules for examples of matching known malicious domains, file hashes, IP addresses, and filenames.
Severity levels
| Level | Meaning | Exit code |
|---|---|---|
critical | Confirmed secret or active threat | 2 |
high | Likely secret or dangerous pattern | 1 |
medium | Suspicious but may be intentional | 0 |
low | Worth reviewing | 0 |
info | Informational only | 0 |
Use --severity to filter the minimum level reported:
sandtrace audit . --severity high # Only high + critical
sandtrace audit . --severity medium # Medium and above
Suppressing false positives
See Configuration — Redaction Markers and Configuration — Inline Suppression for ways to suppress known false positives.
Watch Rules
`sandtrace watch` uses YAML rules to monitor credential files and detect suspicious access patterns. Built-in rules cover 19 detection scenarios across three categories.
Credential access rules (14 rules)
These rules alert when processes outside the expected allowlist access sensitive files:
| Rule ID | Files Monitored | Allowed Processes |
|---|---|---|
cred-access-aws | ~/.aws/credentials, ~/.aws/config | aws, terraform, pulumi, cdktf |
cred-access-ssh | ~/.ssh/id_*, ~/.ssh/config | ssh, scp, sftp, ssh-agent, git |
cred-access-gpg | ~/.gnupg/* | gpg, gpg2, gpg-agent, git |
cred-access-npm | ~/.npmrc, ~/.config/npm/* | npm, npx, pnpm, yarn, node |
cred-access-docker | ~/.docker/config.json | docker, dockerd, containerd |
cred-access-kube | ~/.kube/config | kubectl, helm, k9s, kubectx |
cred-access-gcloud | ~/.config/gcloud/* | gcloud, terraform, pulumi |
cred-access-azure | ~/.azure/* | az, terraform, pulumi |
cred-access-pgpass | ~/.pgpass | psql, pg_dump, pg_restore, pgcli |
cred-access-volta | ~/.volta/* | volta, node, npm, npx |
cred-access-triton | ~/.triton/* | triton, node |
cred-access-netrc | ~/.netrc | curl, wget, git, ftp |
cred-access-git-credentials | ~/.git-credentials | git, git-credential-store |
cred-access-cpln | ~/.config/cpln/* | cpln, node |
Supply-chain rules (4 rules)
These rules detect writes to dependency directories from processes outside the expected package managers:
| Rule ID | What it detects |
|---|---|
supply-chain-node-modules | Direct write to node_modules outside npm/yarn/pnpm |
supply-chain-npmrc-write | Unexpected modification of .npmrc |
supply-chain-pip-conf | Unexpected modification of pip configuration |
supply-chain-cargo-registry | Direct modification of Cargo registry cache |
Exfiltration rules (1 rule)
| Rule ID | What it detects |
|---|---|
exfil-curl-unknown | curl/wget outbound requests during build/install |
How rules work
Each watch rule defines:
- File paths to monitor via inotify
- Excluded processes (the allowlist) that are expected to access those files
- Access types to watch for (read, write, or both)
- Alert channels and message templates
When a file access event occurs, sandtrace checks the accessing process name against the rule's excluded processes list. If the process is not in the allowlist, an alert fires.
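The decision itself is simple. Here is a minimal shell sketch of the allowlist check (illustrative only; the real implementation reacts to inotify events inside sandtrace, and `is_excluded` is a hypothetical helper name):

```shell
# Return 0 when the accessing process is on the rule's allowlist.
is_excluded() {
  proc="$1"; shift
  for allowed in "$@"; do
    [ "$proc" = "$allowed" ] && return 0
  done
  return 1
}

# A process outside the AWS rule's allowlist triggers an alert.
accessing_process="curl"
if is_excluded "$accessing_process" aws terraform pulumi cdktf; then
  echo "expected access by $accessing_process"
else
  echo "ALERT: unexpected access by $accessing_process"
fi
# prints: ALERT: unexpected access by curl
```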
Adding custom watch rules
See Custom Rules for the YAML format to write your own watch rules.
Custom Rules
Write custom YAML detection rules for sandtrace watch to monitor additional files and processes.
YAML rule format
rules:
- id: cred-access-custom-vault
name: Custom Vault File Access
severity: high
description: Unexpected process accessed vault credentials
detection:
file_paths:
- "/opt/vault/creds/*"
excluded_processes:
- vault
- consul
access_types: [read, write]
alert:
channels: [stdout, desktop]
message: "{process_name} (PID: {pid}) accessed {path}"
tags: [credential, vault]
enabled: true
Rule fields
| Field | Required | Description |
|---|---|---|
id | Yes | Unique rule identifier |
name | Yes | Human-readable name |
severity | Yes | critical, high, medium, low, or info |
description | Yes | What the rule detects |
detection.file_paths | Yes | Glob patterns for files to monitor |
detection.excluded_processes | Yes | Process names that are allowed (the allowlist) |
detection.access_types | Yes | Array of read, write, or both |
alert.channels | No | Override alert channels for this rule |
alert.message | No | Custom alert message template |
tags | No | Tags for filtering and categorization |
enabled | No | Set to false to disable (default: true) |
Template variables
Use these variables in the alert.message field:
| Variable | Description |
|---|---|
{process_name} | Name of the process that triggered the alert |
{pid} | Process ID |
{path} | File path that was accessed |
Where to place custom rules
Custom rules can be placed in:
- `~/.sandtrace/rules/` — the default rules directory
- Any directory listed in `additional_rules` in your `config.toml`
# config.toml
additional_rules = [
"~/.sandtrace/community-rules",
"/opt/team-rules/sandtrace",
]
All rules from all directories are merged at startup.
IOC Rules
The `custom_patterns` configuration in `config.toml` supports three match types that make it easy to add indicators of compromise (IOCs) without writing regex.
Match types
| Type | Description | Use case |
|---|---|---|
regex | Regular expression matched against file content (default) | Custom credential formats |
literal | Exact string match against file content (case-insensitive) | IOC domains, IPs, hashes |
filename | Match against file names/paths (case-insensitive) | Known malicious filenames |
C2 domains
[[custom_patterns]]
id = "ioc-c2-domain-1"
description = "Known C2 domain: evil-payload.example.com"
severity = "critical"
match_type = "literal"
pattern = "evil-payload.example.com"
tags = ["ioc", "c2"]
[[custom_patterns]]
id = "ioc-c2-domain-2"
description = "Known C2 domain: data-exfil.example.net"
severity = "critical"
match_type = "literal"
pattern = "data-exfil.example.net"
tags = ["ioc", "c2"]
Malicious file hashes
# SHA256 hash
[[custom_patterns]]
id = "ioc-malware-sha256-1"
description = "Known malware SHA256: Trojan.GenericKD"
severity = "critical"
match_type = "literal"
pattern = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
tags = ["ioc", "malware", "sha256"]
# MD5 hash
[[custom_patterns]]
id = "ioc-malware-md5-1"
description = "Known malware MD5: Backdoor.Agent"
severity = "critical"
match_type = "literal"
pattern = "d41d8cd98f00b204e9800998ecf8427e"
tags = ["ioc", "malware", "md5"]
Suspicious IP addresses
[[custom_patterns]]
id = "ioc-c2-ip-1"
description = "Known C2 IP address"
severity = "critical"
match_type = "literal"
pattern = "198.51.100.42"
tags = ["ioc", "c2", "ip"]
Known malicious filenames
[[custom_patterns]]
id = "ioc-tool-mimikatz"
description = "Mimikatz credential dumping tool"
severity = "high"
match_type = "filename"
pattern = "mimikatz"
file_extensions = ["exe", "dll", "ps1"]
tags = ["ioc", "tool", "credential-theft"]
[[custom_patterns]]
id = "ioc-webshell"
description = "Common webshell filename"
severity = "critical"
match_type = "filename"
pattern = "c99shell"
file_extensions = ["php", "asp", "jsp"]
tags = ["ioc", "webshell"]
Bulk IOC import
For large IOC lists, generate config.toml entries programmatically. Example with a domains list:
# Convert a plain-text IOC list to TOML config entries
while IFS= read -r domain; do
id=$(echo "$domain" | tr '.' '-' | tr -cd 'a-z0-9-')
cat <<EOF
[[custom_patterns]]
id = "ioc-domain-${id}"
description = "IOC domain: ${domain}"
severity = "critical"
match_type = "literal"
pattern = "${domain}"
tags = ["ioc", "c2"]
EOF
done < domains.txt >> ~/.sandtrace/config.toml
Automated IOC feeds
sandtrace can automatically ingest indicators of compromise from external threat feeds. The pattern_files config option lets you load auto-generated pattern files without cluttering your main config.toml.
npm malware feed (OpenSSF / OSV)
The scripts/update-npm-iocs.sh script downloads the OpenSSF malicious-packages dataset (published in OSV format) and generates a TOML file of [[custom_patterns]] entries for every known-malicious npm package.
Requirements: curl, unzip, jq
Usage:
# Download and generate (default output: ~/.sandtrace/npm-malware.toml)
./scripts/update-npm-iocs.sh
# Custom output path
./scripts/update-npm-iocs.sh /path/to/output.toml
Then add the generated file to your config.toml:
pattern_files = ["~/.sandtrace/npm-malware.toml"]
Run on a schedule with cron:
# Update npm malware patterns daily at 3 AM
0 3 * * * /path/to/sandtrace/scripts/update-npm-iocs.sh
The generated file contains one [[custom_patterns]] entry per malicious package, with match_type = "literal" and file_extensions = ["json"] so it only scans package.json, package-lock.json, and similar files.
Examples
Monitor a custom secrets directory
rules:
- id: cred-access-app-secrets
name: Application Secrets Access
severity: critical
description: Unexpected process accessed application secrets
detection:
file_paths:
- "/opt/app/secrets/*"
- "/opt/app/.env"
excluded_processes:
- myapp
- supervisor
access_types: [read, write]
alert:
channels: [stdout, webhook]
message: "ALERT: {process_name} (PID: {pid}) accessed {path}"
tags: [credential, application]
enabled: true
Monitor database config files
rules:
- id: cred-access-mysql
name: MySQL Config Access
severity: high
description: Unexpected process accessed MySQL credentials
detection:
file_paths:
- "~/.my.cnf"
- "/etc/mysql/debian.cnf"
excluded_processes:
- mysql
- mysqld
- mysqldump
- mysqlsh
access_types: [read]
alert:
channels: [stdout]
message: "{process_name} read MySQL config at {path}"
tags: [credential, database]
enabled: true
Policies
TOML policy files configure the sandbox for sandtrace run. They define filesystem access, network permissions, syscall filters, and resource limits.
Policy format
[filesystem]
allow_read = ["/usr", "/lib", "/lib64", "/etc/ld.so.cache", "/dev/null", "/proc/self"]
allow_write = ["./output"]
allow_exec = []
deny = ["/home/*/.ssh", "/etc/shadow", "**/.env"]
[network]
allow = false
[syscalls]
deny = ["mount", "ptrace", "reboot"]
log_only = ["mprotect", "mmap"]
[limits]
timeout = 30
Sections
[filesystem]
| Field | Description |
|---|---|
allow_read | Paths the sandboxed process can read |
allow_write | Paths the sandboxed process can write to |
allow_exec | Paths from which the sandboxed process can execute binaries |
deny | Paths that are always blocked, even if matched by an allow rule |
Path patterns support globs (*, **). Deny rules take precedence over allow rules.
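For example, a minimal sketch of that precedence (paths illustrative): even though the broad allow matches everything under `/home`, the deny entries still win.

```toml
[filesystem]
# broad allow: the whole home directory is readable...
allow_read = ["/home"]
# ...but deny takes precedence, so these stay blocked
deny = ["/home/*/.ssh", "/home/**/.env"]
```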
[network]
| Field | Description |
|---|---|
allow | Whether to allow network access (true/false) |
When false, the sandbox creates an isolated network namespace with no external connectivity.
[syscalls]
| Field | Description |
|---|---|
deny | Syscalls to block (returns EPERM) |
log_only | Syscalls to log but allow |
Note: The always-blocked syscalls are blocked regardless of policy configuration.
[limits]
| Field | Description |
|---|---|
timeout | Kill the process after N seconds |
Example policies
Example policies are included in the examples/ directory:
| File | Description |
|---|---|
strict.toml | Minimal filesystem access, no network, blocked dangerous syscalls |
permissive.toml | Broad read access, trace-focused |
npm_audit.toml | Tuned for npm install sandboxing |
pnpm_audit.toml | Tuned for pnpm install sandboxing |
composer_audit.toml | Tuned for composer install sandboxing |
Usage
sandtrace run --policy examples/strict.toml ./untrusted-binary
sandtrace run --policy examples/npm_audit.toml npm install
Policy flags can be combined with CLI flags. CLI flags (--allow-path, --allow-net, etc.) are merged with policy file settings, with CLI flags taking precedence.
Writing your own policies
Start from one of the example policies and customize:
- Start strict — begin with `strict.toml` and add only what the binary needs.
- Use trace-only first — run with `--trace-only` to see what the binary accesses, then write a policy based on the trace.
- Deny sensitive paths — always deny `~/.ssh`, `~/.aws`, `~/.gnupg`, and `.env` files.
- Log before blocking — use `log_only` for syscalls you're unsure about before adding them to `deny`.
CI/CD Integration
sandtrace integrates into CI/CD pipelines through SARIF output (for GitHub Code Scanning) and JSON output (for custom pipelines).
GitHub Actions with SARIF
Upload findings directly to GitHub Code Scanning:
name: Security Audit
on:
push:
branches: [main]
pull_request:
jobs:
sandtrace:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install sandtrace
run: |
cargo install --path .
sandtrace init
- name: Run sandtrace audit
run: sandtrace audit . --format sarif > sandtrace.sarif
- name: Upload SARIF
uses: github/codeql-action/upload-sarif@v3
if: always()
with:
sarif_file: sandtrace.sarif
SARIF findings appear in the Security tab of your GitHub repository under Code scanning alerts.
JSON output for custom pipelines
Use JSON output with exit codes for custom CI logic:
- name: Security audit
run: |
sandtrace audit . --format json --severity high > findings.json || code=$?
if [ "${code:-0}" -eq 2 ]; then echo "Critical findings detected"; exit 1; fi
Exit codes
| Code | Meaning | CI action |
|---|---|---|
0 | Clean | Pass |
1 | High findings | Fail (or warn, depending on your policy) |
2 | Critical findings | Always fail |
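The table above translates into shell gating like the following sketch. Here `run_audit` is a stub standing in for `sandtrace audit .` so the snippet is self-contained; the `|| code=$?` pattern keeps the command from aborting a `set -e` shell before the code is inspected.

```shell
# stub: pretend the audit exited with code 1 (high findings)
run_audit() { return 1; }

run_audit || code=$?
case "${code:-0}" in
  0) verdict="pass" ;;
  1) verdict="fail-high" ;;       # or "warn", depending on your policy
  2) verdict="fail-critical" ;;
  *) verdict="error" ;;
esac
echo "$verdict"
```

Swap the stub for the real command and `exit 1` on the failing verdicts to gate the build.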
Severity gating
Control which severity levels fail your build:
# Fail on critical only
sandtrace audit . --severity critical
# Fail on high and critical
sandtrace audit . --severity high
# Report everything (never fails on medium/low/info alone)
sandtrace audit . --severity low
Pre-commit hook
Run sandtrace as a git pre-commit hook:
#!/bin/sh
# .git/hooks/pre-commit
sandtrace audit . --severity high --format terminal
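To install the hook, write it into `.git/hooks/` and mark it executable (the `mkdir -p` is a no-op in a real checkout, where the directory already exists):

```shell
mkdir -p .git/hooks   # already present in a real git checkout
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
sandtrace audit . --severity high --format terminal
EOF
chmod +x .git/hooks/pre-commit
```

Git runs the hook before each commit and aborts the commit if it exits non-zero, so the audit's exit codes gate commits automatically.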
JSON output schema
sandtrace audit --format json emits a JSON array of finding objects. Summary counts are written to stderr, and CI gating should use the command exit code.
[
{
"file_path": "src/config.rs",
"line_number": 42,
"rule_id": "cred-aws-key",
"severity": "critical",
"description": "AWS Access Key ID found",
"matched_pattern": "AKIA[0-9A-Z]{16}",
"context_lines": [
"const AWS_KEY = \"<redacted>\";"
]
}
]
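Because the output is a plain JSON array, standard tooling works on it directly; for instance, counting critical findings with `jq` (assuming `jq` is installed; the `findings.json` content is inlined here so the example is self-contained):

```shell
# sample findings, in the schema shown above
cat > findings.json <<'EOF'
[
  {"file_path": "src/config.rs", "line_number": 42,
   "rule_id": "cred-aws-key", "severity": "critical"},
  {"file_path": "src/db.rs", "line_number": 7,
   "rule_id": "cred-generic", "severity": "high"}
]
EOF
# count findings at critical severity
critical=$(jq '[.[] | select(.severity == "critical")] | length' findings.json)
echo "critical findings: $critical"
```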
Architecture
Source tree
sandtrace/
├── Cargo.toml
├── rules/ # Built-in YAML detection rules
│ ├── credential-access.yml # 14 credential file monitoring rules
│ ├── supply-chain.yml # 4 supply-chain attack detection rules
│ └── exfiltration.yml # Data exfiltration detection rules
├── examples/ # Example policy and config files
│ ├── config.toml # Annotated global config
│ ├── strict.toml # Strict sandbox policy
│ ├── permissive.toml # Permissive sandbox policy
│ └── *_audit.toml # Package manager audit policies
├── src/
│ ├── main.rs # CLI entry point + dispatch
│ ├── cli.rs # clap derive structs + validation
│ ├── config.rs # Global config (~/.sandtrace/config.toml)
│ ├── error.rs # thiserror error hierarchy
│ ├── event.rs # SyscallEvent, AuditFinding, Severity
│ ├── init.rs # `init` subcommand
│ ├── scan.rs # `scan` subcommand (rayon parallel sweep)
│ ├── process.rs # Process tree tracking via /proc
│ ├── rules/
│ │ ├── mod.rs # RuleRegistry, rule loading
│ │ ├── matcher.rs # File access + process matching
│ │ ├── schema.rs # YAML schema definitions
│ │ └── builtin.rs # Built-in rule definitions
│ ├── audit/
│ │ ├── mod.rs # Audit orchestrator (parallel via rayon)
│ │ ├── scanner.rs # Credential + supply-chain patterns
│ │ └── obfuscation.rs # Steganography + obfuscation detection
│ ├── watch/
│ │ ├── mod.rs # Watch orchestrator (tokio async)
│ │ ├── monitor.rs # inotify file monitoring
│ │ └── handler.rs # Event handler + rule matching
│ ├── alert/
│ │ ├── mod.rs # AlertRouter + AlertDispatcher trait
│ │ ├── stdout.rs # Console alerts
│ │ ├── desktop.rs # Desktop notifications (notify-rust)
│ │ ├── webhook.rs # HTTP webhook (reqwest)
│ │ └── syslog.rs # Syslog alerts
│ ├── policy/
│ │ ├── mod.rs # Policy struct, path/syscall evaluation
│ │ ├── parser.rs # TOML policy parsing
│ │ └── rules.rs # PolicyEvaluator
│ ├── sandbox/
│ │ ├── mod.rs # SandboxConfig, apply_child_sandbox()
│ │ ├── namespaces.rs # User, mount, PID, network namespaces
│ │ ├── landlock.rs # Landlock LSM filesystem control
│ │ ├── seccomp.rs # seccomp-bpf syscall filtering
│ │ └── capabilities.rs # Capability dropping, NO_NEW_PRIVS
│ ├── tracer/
│ │ ├── mod.rs # Main ptrace event loop
│ │ ├── decoder.rs # Syscall argument decoding
│ │ ├── memory.rs # Tracee memory access
│ │ ├── state.rs # Per-PID state tracking
│ │ ├── syscalls.rs # Syscall categorization
│ │ └── arch/
│ │ ├── mod.rs # Architecture trait
│ │ ├── x86_64.rs # x86_64 syscall table
│ │ └── aarch64.rs # ARM64 syscall table
│ └── output/
│ ├── mod.rs # OutputManager + OutputSink trait
│ ├── jsonl.rs # JSONL writer
│ └── terminal.rs # Colored terminal output
└── tests/
├── integration.rs # Integration tests
└── fixtures/binaries/ # Test binaries (fork, mount, read, connect)
Module overview
CLI layer (main.rs, cli.rs)
Entry point using clap derive macros. Parses arguments, loads config, and dispatches to the appropriate subcommand.
Config (config.rs)
Loads and validates ~/.sandtrace/config.toml. Provides defaults for all optional fields.
Audit (audit/)
Parallel codebase scanner using rayon. The scanner.rs module handles credential and supply-chain pattern matching via regex. The obfuscation.rs module handles whitespace obfuscation, zero-width unicode, and homoglyph detection.
Scan (scan.rs)
Standalone parallel filesystem sweep using rayon and ignore-aware directory walking. Focused specifically on detecting consecutive whitespace runs.
Watch (watch/)
Async file monitoring using tokio and inotify. The monitor.rs module manages inotify watches, and handler.rs matches events against YAML rules.
Rules (rules/)
YAML rule loading, parsing, and matching. The RuleRegistry merges rules from all configured directories. The matcher.rs module handles file path globbing and process name matching.
Alert (alert/)
Pluggable alert dispatch using the AlertDispatcher trait. Implementations for stdout, desktop notifications (via notify-rust), HTTP webhooks (via reqwest), and syslog.
Policy (policy/)
TOML policy parsing and evaluation. The PolicyEvaluator determines whether a given filesystem access or syscall should be allowed, denied, or logged.
Sandbox (sandbox/)
Linux namespace and security module setup. Applied in order: user namespace, mount namespace, PID namespace, network namespace, NO_NEW_PRIVS, Landlock, seccomp-bpf, ptrace.
Tracer (tracer/)
ptrace-based syscall tracing. The main event loop handles PTRACE_SYSCALL stops, decodes arguments via architecture-specific tables (x86_64, aarch64), and applies policy decisions.
Output (output/)
Pluggable output using the OutputSink trait. JSONL writer for machine-readable traces and colored terminal output for human consumption.
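A JSONL trace line might look like the following; the field names here are purely illustrative, not the actual `SyscallEvent` schema.

```json
{"pid": 4242, "syscall": "openat", "args": ["AT_FDCWD", "/etc/passwd", "O_RDONLY"], "result": 3, "decision": "allow"}
```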
Security
Important notes
sandtrace is a prototype security tool. Review these notes before using it in production environments.
Kernel requirements
sandtrace run (sandbox)
| Requirement | Kernel setting | Minimum version |
|---|---|---|
| Unprivileged user namespaces | kernel.unprivileged_userns_clone=1 (Debian/Ubuntu-specific sysctl; mainline kernels allow this by default) | Linux 3.8+ |
| Landlock LSM | Built-in or module loaded | Linux 5.13+ (Landlock v1) |
| PTRACE_GET_SYSCALL_INFO | — | Linux 5.3+ |
| YAMA ptrace scope | kernel.yama.ptrace_scope <= 1 | — |
Check your kernel settings:
sysctl kernel.unprivileged_userns_clone
sysctl kernel.yama.ptrace_scope
cat /sys/kernel/security/lsm # Should include "landlock"
sandtrace watch
Requires inotify support (available in all modern Linux kernels).
sandtrace audit / sandtrace scan
No special kernel requirements. Works on any Linux system with Rust 1.87+.
Limitations
- The sandbox is defense-in-depth, not a security boundary guarantee. A determined attacker with kernel exploits could escape the sandbox. Use sandtrace as one layer in a defense-in-depth strategy, not as your sole isolation mechanism.
- ptrace-based tracing has overhead. Sandboxed processes will run slower than native execution due to syscall interception. This is acceptable for auditing untrusted packages but not suitable for production workloads.
- seccomp-bpf filters are process-wide. Once applied, they cannot be relaxed — only made more restrictive. This is by design.
- Landlock restrictions are cumulative. Like seccomp, Landlock rules can only be made more restrictive after initial application.
Security model
sandtrace's sandbox applies multiple independent isolation layers. Each layer provides protection even if other layers are bypassed:
- Namespace isolation prevents the sandboxed process from seeing or affecting the host.
- Landlock provides kernel-enforced filesystem access control.
- seccomp-bpf blocks dangerous syscalls at the kernel level.
- ptrace tracing provides visibility into all syscall activity.
The combination means an attacker would need to bypass multiple independent kernel security mechanisms to escape.
Responsible use
- Test sandtrace thoroughly in your environment before production use.
- Keep your kernel updated for the latest security patches.
- Review sandbox traces to understand what untrusted code is doing before allowing it in production.
- Use strict policies by default and only relax restrictions when you understand why they're needed.