# Response Modes
Understanding the three response modes: Log, Challenge, and Block
BotSigged provides three response modes to handle detected threats. Each mode offers different trade-offs between user friction and security.
## Overview
| Mode | Action | User Impact | Use Case |
|---|---|---|---|
| Log | Record the detection | None | Monitoring, analytics |
| Challenge | Require proof-of-work | Brief delay | Suspected bots, rate limiting |
| Block | Prevent form submission | Denied access | Known bad actors |
## Log Mode
Log mode is passive monitoring. BotSigged tracks all sessions and calculates risk scores, but takes no action against the user.
### When to Use
- Initial deployment: Understand your traffic before taking action
- Analytics: Track bot vs. human ratios over time
- Low-risk pages: Public content where bots cause minimal harm
- Gathering data: Build cohort intelligence before enabling enforcement
### Implementation

```js
botsigged.onScoreUpdate((data) => {
  // Log to your analytics
  analytics.track('session_scored', {
    classification: data.classification,
    bot_score: data.bot_score,
    session_id: data.session_id
  });

  // No action taken - purely observational
});
```
Log mode is always active. Even when using Challenge or Block modes, all sessions are logged for analysis.
## Challenge Mode
Challenge mode requires suspicious sessions to complete a proof-of-work computation before proceeding. This adds a small computational cost that deters automated attacks while remaining invisible to most humans.
### How It Works

1. BotSigged detects a suspicious session based on classification or cohort risk
2. A cryptographic puzzle is sent to the client
3. The client must compute a hash that meets the difficulty target
4. Once solved, the session can proceed
5. The server verifies the solution before accepting form submissions
### The Proof-of-Work Challenge

The challenge uses SHA-256 hashing with an adjustable difficulty:

```js
// Client receives challenge
{
  challenge_id: "abc123",
  prefix: "botsigged:1735520000:random123",
  difficulty: 18 // Number of leading zero bits required
}

// Client must find a nonce such that:
// SHA256(prefix + nonce) starts with 18 zero bits
```
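The SDK's `challenge()` call performs this search for you, but as a rough sketch of what the client is doing under the hood (the helper names here are ours, not part of the SDK):

```js
// Count leading zero bits of a hash (helper for the sketch below)
function leadingZeroBits(bytes) {
  let bits = 0;
  for (const byte of bytes) {
    if (byte === 0) { bits += 8; continue; }
    bits += Math.clz32(byte) - 24; // clz32 counts over 32 bits; a byte fills the low 8
    break;
  }
  return bits;
}

// Brute-force nonce search using the Web Crypto API
async function solveChallenge(prefix, difficulty) {
  const encoder = new TextEncoder();
  for (let nonce = 0; ; nonce++) {
    const digest = await crypto.subtle.digest('SHA-256', encoder.encode(prefix + nonce));
    if (leadingZeroBits(new Uint8Array(digest)) >= difficulty) {
      return nonce; // submitted alongside challenge_id as the proof
    }
  }
}
```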
Difficulty levels:
- 16 bits: ~65K iterations, <100ms on modern devices
- 18 bits: ~262K iterations, 100-500ms typical
- 20 bits: ~1M iterations, 500ms-2s typical
- 22 bits: ~4M iterations, 2-5s typical
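These figures follow directly from the difficulty: a solver needs about 2^difficulty hash attempts on average, so you can estimate latency for a target device from its hash rate. A quick sketch (the hash rate below is illustrative, not a measured number):

```js
// Average proof-of-work cost is ~2^difficulty hash attempts
function expectedSolveMs(difficulty, hashesPerSecond) {
  return (2 ** difficulty / hashesPerSecond) * 1000;
}

expectedSolveMs(18, 1e6); // ~262ms at 1M hashes/sec (illustrative rate)
```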
### Impact on Bots vs. Humans
| Actor | Impact |
|---|---|
| Humans | Brief pause (usually unnoticeable) |
| Scrapers | Massive slowdown - must solve for each request |
| Bot farms | Economic deterrent - CPU cost per session |
| Headless browsers | Works but adds significant overhead |
### When to Use
- Neutral classifications: Scrapers, headless_fetch, suspicious sessions
- Rate limiting: Slow down high-volume actors without blocking
- Soft enforcement: When you’re not certain the traffic is malicious
- High-value forms: Signups, checkouts, contact forms
### Implementation

```js
// Automatic challenge for suspicious traffic
botsigged.configure({
  challengeMode: 'auto',
  challengeThreshold: 50 // Score threshold to trigger
});

// Or manual triggering
botsigged.onScoreUpdate(async (data) => {
  const neutral = ['scraper', 'headless_fetch', 'suspicious'];
  if (neutral.includes(data.classification)) {
    const solved = await botsigged.challenge();
    if (!solved) {
      // Failed to solve - likely a low-resource bot
      disableFormSubmission();
    }
  }
});
```
### Server-Side Verification

Always verify challenges server-side:

```js
app.post('/api/submit', async (req, res) => {
  const { sessionId, challengeProof } = req.body;

  // Verify the proof-of-work was completed
  const verified = await botsigged.verifyChallengeProof(sessionId, challengeProof);
  if (!verified) {
    return res.status(403).json({ error: 'Challenge not completed' });
  }

  // Process the submission
});
```
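`verifyChallengeProof` handles the check for you. If you ever need to verify a proof by hand, it comes down to a single hash computation; a minimal sketch, assuming the proof carries the winning nonce and you know the original prefix:

```js
const crypto = require('crypto');

// Count leading zero bits of a hash buffer
function leadingZeroBits(buf) {
  let bits = 0;
  for (const byte of buf) {
    if (byte === 0) { bits += 8; continue; }
    bits += Math.clz32(byte) - 24; // clz32 counts over 32 bits; a byte fills the low 8
    break;
  }
  return bits;
}

// Recompute the hash and check it meets the difficulty target
function verifyProof(prefix, nonce, difficulty) {
  const hash = crypto.createHash('sha256').update(prefix + nonce).digest();
  return leadingZeroBits(hash) >= difficulty;
}
```

This asymmetry is the point of proof-of-work: solving costs roughly 2^difficulty hashes, while verifying costs one.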
## Block Mode
Block mode prevents form submissions entirely for malicious actors. This is the strongest enforcement option.
### How It Works

1. Session is classified as malicious (bad_bot, stealth_bot, etc.)
2. BotSigged disables form submission on the client
3. Server-side validation rejects requests from blocked sessions
4. User sees an error message (customizable)
### When to Use
- Malicious classifications: bad_bot, stealth_bot, bad_agent, bad_scraper
- Malicious cohorts: Sessions from fingerprints/IPs with violation history
- Attack patterns: Sessions exhibiting known attack signatures
- Repeat offenders: Sessions that failed challenges multiple times (see the escalation sketch below)
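Challenge-failure escalation isn't shown in the earlier snippets, but it is straightforward to layer on. A hypothetical sketch: the failure counter and threshold are our own bookkeeping, and `result.session_id` is an assumed field on the challenge result:

```js
// Hypothetical escalation: block after repeated failed challenges.
// The Map and threshold are our own bookkeeping, not SDK features.
const challengeFailures = new Map();

botsigged.onChallenge((result) => {
  if (result.solved) return;

  const fails = (challengeFailures.get(result.session_id) || 0) + 1;
  challengeFailures.set(result.session_id, fails);

  if (fails >= 3) {
    botsigged.blockForms({
      message: 'Session blocked after repeated failed challenges.'
    });
  }
});
```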
### Client-Side Blocking

```js
botsigged.onScoreUpdate((data) => {
  // abusive_human is challenged rather than blocked,
  // since it may be a false positive (see the decision matrix below)
  const malicious = ['bad_bot', 'stealth_bot', 'bad_agent', 'bad_scraper'];

  if (malicious.includes(data.classification)) {
    // Disable all forms
    botsigged.blockForms({
      message: 'Session blocked due to suspicious activity.'
    });
  }
});
```
### Server-Side Blocking (Required)
Client-side blocking can be bypassed. Always enforce server-side:
```js
app.post('/api/checkout', async (req, res) => {
  const session = await botsigged.getSession(req.body.sessionId);

  // Check classification
  const blocked = ['bad_bot', 'stealth_bot', 'bad_agent', 'bad_scraper'];
  if (blocked.includes(session.classification)) {
    return res.status(403).json({
      error: 'Access denied',
      code: 'BOT_BLOCKED'
    });
  }

  // Check cohort risk
  if (session.cohort_risk === 'malicious') {
    return res.status(403).json({
      error: 'Access denied',
      code: 'COHORT_BLOCKED'
    });
  }

  // Process checkout...
});
```
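If several routes need the same checks, factoring them into middleware keeps the enforcement in one place. A sketch (the middleware name and factoring are ours, not an SDK convention):

```js
// Hypothetical middleware wrapping the same classification and
// cohort checks so every protected route enforces them consistently
function enforceBlocking(blockedClassifications) {
  return async (req, res, next) => {
    const session = await botsigged.getSession(req.body.sessionId);

    if (blockedClassifications.includes(session.classification) ||
        session.cohort_risk === 'malicious') {
      return res.status(403).json({ error: 'Access denied', code: 'BOT_BLOCKED' });
    }

    next();
  };
}

app.post(
  '/api/checkout',
  enforceBlocking(['bad_bot', 'stealth_bot', 'bad_agent', 'bad_scraper']),
  (req, res) => {
    // Process checkout...
  }
);
```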
### Graceful Degradation

Consider showing a human-friendly message:

```js
botsigged.blockForms({
  message: 'Unable to complete this action. If you believe this is an error, please contact support.',
  showContactLink: true
});
```
## Choosing a Response Mode

### Decision Matrix

| Classification | Recommended Mode |
|---|---|
| human | Log only |
| search_engine | Log only (skip billing) |
| known_agent | Log + optional rate limit |
| scraper | Challenge |
| headless_fetch | Challenge |
| suspicious | Challenge |
| bad_bot | Block |
| stealth_bot | Block |
| bad_agent | Block |
| bad_scraper | Block |
| abusive_human | Challenge (may be false positive) |
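In code, this matrix is just a lookup table. A sketch of wiring it to score updates (the table and mode names are ours, not an SDK API):

```js
// Decision matrix as a lookup table (illustrative, not an SDK API)
const RESPONSE_MODE = {
  human: 'log',
  search_engine: 'log',
  known_agent: 'log',
  scraper: 'challenge',
  headless_fetch: 'challenge',
  suspicious: 'challenge',
  bad_bot: 'block',
  stealth_bot: 'block',
  bad_agent: 'block',
  bad_scraper: 'block',
  abusive_human: 'challenge' // may be a false positive, so challenge
};

botsigged.onScoreUpdate(async (data) => {
  const mode = RESPONSE_MODE[data.classification] || 'log';
  if (mode === 'challenge') {
    await botsigged.challenge();
  } else if (mode === 'block') {
    botsigged.blockForms({ message: 'Session blocked.' });
  }
  // 'log' falls through: every session is logged regardless
});
```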
### Progressive Enforcement

Start with logging, then gradually increase enforcement:

1. Week 1-2: Log mode only
   - Understand your baseline traffic
   - Identify false positive patterns
2. Week 3-4: Enable challenges for neutral classifications
   - Monitor solve rates
   - Adjust difficulty based on results
3. Week 5+: Enable blocking for malicious classifications
   - Watch for customer complaints
   - Fine-tune thresholds as needed
## Configuration Example

```js
botsigged.configure({
  // Response mode settings
  responseMode: 'progressive',

  // Challenge settings
  challengeClassifications: ['scraper', 'headless_fetch', 'suspicious'],
  challengeDifficulty: 18,

  // Block settings
  blockClassifications: ['bad_bot', 'stealth_bot', 'bad_agent', 'bad_scraper'],
  blockCohortRisk: 'malicious',

  // Callbacks
  onChallenge: (session) => console.log('Challenging session:', session.id),
  onBlock: (session) => console.log('Blocking session:', session.id)
});
```
## Monitoring Response Modes

Track the effectiveness of your response modes:

```js
// Example metrics to track
const metrics = {
  total_sessions: 0,
  challenged_sessions: 0,
  solved_challenges: 0,
  challenge_solve_rate: 0,
  blocked_sessions: 0,
  false_positive_reports: 0
};

botsigged.onScoreUpdate((data) => {
  metrics.total_sessions++;
});

botsigged.onChallenge((result) => {
  metrics.challenged_sessions++;
  if (result.solved) {
    metrics.solved_challenges++;
  }
  // Solve rate: solved challenges out of challenges issued
  metrics.challenge_solve_rate =
    metrics.solved_challenges / metrics.challenged_sessions;
});

botsigged.onBlock(() => {
  metrics.blocked_sessions++;
});
```
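The Explorer breaks solve rates down by classification. If you want the same cut locally, you can correlate score updates with challenge results yourself; a sketch, assuming `result.session_id` is available on challenge results (not shown in the SDK examples above):

```js
// Per-classification solve rates. session_id on the challenge
// result is an assumption; adjust to your SDK version.
const sessionClass = new Map();
const solveStats = new Map(); // classification -> { challenged, solved }

botsigged.onScoreUpdate((data) => {
  sessionClass.set(data.session_id, data.classification);
});

botsigged.onChallenge((result) => {
  const cls = sessionClass.get(result.session_id) || 'unknown';
  const stats = solveStats.get(cls) || { challenged: 0, solved: 0 };
  stats.challenged++;
  if (result.solved) stats.solved++;
  solveStats.set(cls, stats);
});
```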
In the BotSigged Explorer, you can view:
- Response mode distribution over time
- Challenge solve rates by classification
- Block rates by cohort
- User complaints and appeals