Testing Your Installation¶
Use the fakedata tool to generate synthetic data and validate your ByteFreezer proxy installation.
Installation¶
# Clone and build fakedata
git clone https://github.com/bytefreezer/fakedata.git
cd fakedata
go build -o fakedata .
# Or download pre-built binary
curl -L https://github.com/bytefreezer/fakedata/releases/latest/download/fakedata-linux-amd64 -o fakedata
chmod +x fakedata
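Either way, a quick sanity check that the binary runs (this assumes the usual --help flag, which is not documented on this page):
./fakedata --help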
Quick Validation¶
Test UDP Input¶
# Start your proxy with UDP listener on port 5000
# Then send test data:
fakedata udp --host <proxy-ip> --port 5000 --rate 10 --count 100
# Expected: 100 JSON events sent, verify in proxy logs
Test TCP Input¶
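TCP works the same way as UDP. The example below assumes the proxy's TCP listener is on port 5001, matching the multi-protocol test later on this page:
# Start your proxy with TCP listener on port 5001
# Then send test data:
fakedata tcp --host <proxy-ip> --port 5001 --rate 10 --count 100
# Expected: 100 JSON events sent, verify in proxy logs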
Test Syslog Input¶
# RFC 3164 format
fakedata syslog --host <proxy-ip> --port 514 --rfc 3164 --rate 10 --count 100
# RFC 5424 format
fakedata syslog --host <proxy-ip> --port 514 --rfc 5424 --rate 10 --count 100
Test sFlow Input¶
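The exact sFlow flags are not documented on this page. Assuming fakedata exposes an sflow subcommand with the same common flags as the other generators, a test might look like this (6343 is the conventional sFlow port):
# Assumes an sflow subcommand with the shared --host/--port/--rate/--count flags
fakedata sflow --host <proxy-ip> --port 6343 --rate 10 --count 100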
Test IPFIX Input¶
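Likewise for IPFIX, assuming an ipfix subcommand with the same flags (4739 is the standard IPFIX port):
# Assumes an ipfix subcommand with the shared --host/--port/--rate/--count flags
fakedata ipfix --host <proxy-ip> --port 4739 --rate 10 --count 100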
Message Queue Testing¶
NATS (Embedded Server - No Dependencies)¶
The simplest option: this runs an embedded NATS server, with no Docker or external installation required:
# 1. Start embedded NATS server and publish test data (single command!)
fakedata nats-server \
--port 4222 \
--subject bytefreezer.events \
--rate 10 \
--count 100
# 2. Configure proxy to connect to nats://localhost:4222 subject "bytefreezer.events"
NATS (External Server)¶
If you prefer a standalone NATS server:
# 1. Start NATS server
docker run -d --name nats -p 4222:4222 nats:latest
# 2. Configure proxy to consume from NATS subject "bytefreezer.events"
# 3. Send test data
fakedata nats \
--servers nats://localhost:4222 \
--subject bytefreezer.events \
--rate 10 \
--count 100
Kafka / Redpanda¶
# 1. Start Redpanda (Kafka-compatible, lighter weight)
docker run -d --name redpanda \
-p 9092:9092 \
vectorized/redpanda
# 2. Configure proxy to consume from topic "bytefreezer-events"
# 3. Send test data
fakedata kafka \
--brokers localhost:9092 \
--topic bytefreezer-events \
--rate 10 \
--count 100
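Optionally, to confirm records are landing on the topic independently of the proxy, you can read a few back with rpk, which is bundled in the Redpanda image (flags may vary by Redpanda version):
# Consume a handful of records from the test topic
docker exec -it redpanda rpk topic consume bytefreezer-events --num 5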
AWS SQS (LocalStack)¶
# 1. Start LocalStack
docker run -d --name localstack \
-p 4566:4566 \
localstack/localstack
# 2. Create test queue
aws --endpoint-url=http://localhost:4566 \
sqs create-queue --queue-name test-queue
# 3. Configure proxy with queue URL and LocalStack endpoint
# 4. Send test data
fakedata sqs \
--queue-url http://localhost:4566/000000000000/test-queue \
--endpoint http://localhost:4566 \
--region us-east-1 \
--rate 10 \
--count 100
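To sanity-check delivery independently of the proxy, you can ask LocalStack for the approximate queue depth (standard AWS CLI call; if the proxy is already consuming, the count may be near zero):
aws --endpoint-url=http://localhost:4566 \
  sqs get-queue-attributes \
  --queue-url http://localhost:4566/000000000000/test-queue \
  --attribute-names ApproximateNumberOfMessages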
AWS Kinesis (LocalStack)¶
# 1. Start LocalStack (if not already running)
docker run -d --name localstack \
-p 4566:4566 \
localstack/localstack
# 2. Create test stream
aws --endpoint-url=http://localhost:4566 \
kinesis create-stream \
--stream-name test-stream \
--shard-count 1
# 3. Configure proxy with stream name and LocalStack endpoint
# 4. Send test data
fakedata kinesis \
--stream test-stream \
--endpoint http://localhost:4566 \
--region us-east-1 \
--rate 10 \
--count 100
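Similarly, you can confirm the stream exists and is active before and after sending (standard AWS CLI call):
aws --endpoint-url=http://localhost:4566 \
  kinesis describe-stream-summary \
  --stream-name test-stream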
End-to-End Validation¶
Step 1: Verify Proxy is Running¶
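One way to confirm the proxy is up before sending anything, using the listener and metrics ports from the examples on this page (adjust to your configuration):
# Confirm the UDP listener is bound
ss -tuln | grep 5000
# Confirm the metrics endpoint responds
curl http://<proxy-ip>:8080/metrics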
Step 2: Send Test Data¶
# Send 1000 events at 100/sec
fakedata udp \
--host <proxy-ip> \
--port 5000 \
--rate 100 \
--count 1000
Step 3: Verify Data Flow¶
Check the following:
- Proxy logs - Should show events received
- Spooling directory - Raw files should appear in {spool_dir}/{tenant}/{dataset}/raw/ (spot-check commands below)
- Queue directory - Batched .gz files in {spool_dir}/{tenant}/{dataset}/queue/
- Receiver - Should receive forwarded batches
- S3/MinIO - Data should appear in the bucket
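For a quick spot check on the proxy host, list the spool and queue paths directly (substitute your configured spool directory, tenant, and dataset):
ls -lh {spool_dir}/{tenant}/{dataset}/raw/
ls -lh {spool_dir}/{tenant}/{dataset}/queue/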
Step 4: Check Metrics¶
# Proxy metrics
curl http://<proxy-ip>:8080/metrics
# Look for:
# - bytefreezer_proxy_messages_received_total
# - bytefreezer_proxy_bytes_received_total
# - bytefreezer_proxy_batches_sent_total
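To pull just the proxy counters out of the full metrics dump, a simple filter works:
curl -s http://<proxy-ip>:8080/metrics | grep bytefreezer_proxy_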
Load Testing¶
Sustained Load Test¶
# 10,000 events/sec for 60 seconds (600,000 total events)
fakedata udp \
--host <proxy-ip> \
--port 5000 \
--rate 10000 \
--count 600000
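While the sustained test runs, you can watch the received-message counter (metric name from Step 4 above) to confirm the proxy keeps up:
watch -n 5 'curl -s http://<proxy-ip>:8080/metrics | grep bytefreezer_proxy_messages_received_total'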
Burst Test¶
# Quick burst: 50,000 events/sec for 10 seconds
fakedata udp \
--host <proxy-ip> \
--port 5000 \
--rate 50000 \
--count 500000
Multi-Protocol Test¶
Run multiple generators in parallel to test concurrent input handling:
# Terminal 1: UDP
fakedata udp --host <proxy-ip> --port 5000 --rate 1000
# Terminal 2: TCP
fakedata tcp --host <proxy-ip> --port 5001 --rate 1000
# Terminal 3: Syslog
fakedata syslog --host <proxy-ip> --port 514 --rate 1000
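If you prefer a single shell, the same generators can be backgrounded. Note that these invocations omit --count, so stop them explicitly when done:
fakedata udp --host <proxy-ip> --port 5000 --rate 1000 &
fakedata tcp --host <proxy-ip> --port 5001 --rate 1000 &
fakedata syslog --host <proxy-ip> --port 514 --rate 1000 &
# Stop all three generators when finished
kill %1 %2 %3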
Sample Data Format¶
All generators produce JSON events with this structure:
{
"timestamp": "2024-01-15T10:23:45.123456789Z",
"source_ip": "192.168.1.100",
"dest_ip": "8.8.8.8",
"source_port": 54321,
"dest_port": 443,
"username": "admin",
"action": "login",
"status": "success",
"process": "sshd",
"bytes_sent": 1234,
"bytes_recv": 5678,
"duration_ms": 150,
"session_id": "sess_123456789"
}
After proxy ingestion, each record includes an additional BfTs (ByteFreezer Timestamp) field: a Unix timestamp in milliseconds recording when the proxy received the data.
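For example, the sample record above might look like this after ingestion (the BfTs value is illustrative; it reflects when the proxy received the record, not when it was generated):
{
  "timestamp": "2024-01-15T10:23:45.123456789Z",
  ...
  "BfTs": 1705314225231
}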
Troubleshooting¶
No Data Received¶
- Check the proxy is listening: ss -tuln | grep <port>
- Check firewall rules allow traffic
- Verify correct host/port in fakedata command
- Check proxy logs for errors
Data Received but Not Forwarded¶
- Check spooling directory permissions
- Verify receiver endpoint is reachable
- Check proxy queue directory for pending files
- Review proxy logs for forwarding errors
High Drop Rate¶
- Reduce the send rate: --rate 100
- Check system resources (CPU, memory, disk I/O)
- Increase proxy buffer sizes in configuration
- Consider scaling horizontally