# Session Classifications
Understanding BotSigged's granular session classification system
BotSigged uses a multi-dimensional classification system to categorize sessions based on user agent, behavior, automation signals, and cohort risk. This provides more actionable insights than a simple bot/human binary.
## Classification Dimensions
Each session is evaluated across four dimensions:
### 1. UA Category

Derived from the User-Agent string:

- `browser` - Standard web browsers (Chrome, Firefox, Safari, etc.)
- `search_engine` - Search engine crawlers (Googlebot, Bingbot, etc.)
- `ai_agent` - AI/LLM crawlers (GPTBot, ClaudeBot, etc.)
- `fetch_tool` - HTTP clients (curl, wget, Python requests, etc.)
- `unknown` - Unrecognized or empty user agents
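As an illustration of this bucketing (not the actual BotSigged matcher), a few ordered regex checks are enough; the pattern lists and the helper name `categorizeUA` are assumptions for this sketch:

```javascript
// Illustrative UA categorization - not the real BotSigged implementation.
// Crawler patterns are checked before the browser pattern because crawler
// UAs usually also contain "Mozilla".
const UA_PATTERNS = [
  { category: 'search_engine', re: /googlebot|bingbot|duckduckbot|yandexbot/i },
  { category: 'ai_agent',      re: /gptbot|claudebot|ccbot|perplexitybot/i },
  { category: 'fetch_tool',    re: /curl|wget|python-requests|httpie|libwww/i },
  { category: 'browser',       re: /mozilla|chrome|safari|firefox|edg/i },
];

function categorizeUA(userAgent) {
  if (!userAgent) return 'unknown';
  for (const { category, re } of UA_PATTERNS) {
    if (re.test(userAgent)) return category;
  }
  return 'unknown';
}
```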
### 2. Behavior Presence

Based on SDK signals received:

- `interactive` - Has mouse activity, form interactions, or clicks
- `passive` - Has scroll or page view events only
- `none` - No behavioral signals received
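For illustration, the three tiers can be derived from the set of signal types received; the signal names below are assumptions for this sketch, not the SDK's actual event names:

```javascript
// Illustrative mapping from received SDK signal types to behavior presence.
// Signal names ('mouse_move', 'click', etc.) are assumed, not the real names.
function behaviorPresence(signals) {
  const interactive = ['mouse_move', 'click', 'form_input'];
  const passive = ['scroll', 'page_view'];
  if (signals.some((s) => interactive.includes(s))) return 'interactive';
  if (signals.some((s) => passive.includes(s))) return 'passive';
  return 'none';
}
```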
### 3. Automation Score

Derived from behavioral analysis (0-100 score):

- `human` - Score < 40
- `suspicious` - Score 40-69
- `bot` - Score >= 70
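The score bands translate directly into a small helper (a sketch; this function is not part of the SDK):

```javascript
// Maps a 0-100 automation score to the level names used above.
function automationLevel(score) {
  if (score >= 70) return 'bot';
  if (score >= 40) return 'suspicious';
  return 'human';
}
```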
### 4. Cohort Risk

Based on fingerprint/IP cohort history:

- `benign` - Good history, no violations
- `suspicious` - Minor violations or approaching limits
- `malicious` - Rate limits exceeded, attack patterns detected
## Classification Quadrant
The two most important dimensions for browser sessions are Automation Level (human-like vs bot-like behavior) and Risk Level (benign vs malicious intent).
- Bottom-left (Trusted): Human-like behavior with benign intent → Allow
- Top-left (Risky): Human-like behavior but suspicious/malicious intent → Challenge
- Bottom-right (Neutral): Bot-like behavior but benign intent → Monitor/Rate limit
- Top-right (Malicious): Bot-like behavior with malicious intent → Block
See the Classification Quadrant visualization at the bottom of this page for how all 11 classifications map across these dimensions.
Cohort Risk modifies these: a human with malicious cohort becomes abusive_human, and any bot with malicious cohort becomes a “bad” variant.
Declared agents (AI crawlers, fetch tools) bypass this quadrant - they’re classified by their UA and cohort risk alone.
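For browser sessions, the quadrant reduces to a small lookup; `quadrantAction` and the action strings are illustrative, not SDK API:

```javascript
// Quadrant-to-action mapping for browser sessions, as described above.
// Names and return values are illustrative.
function quadrantAction(automation, risk) {
  const humanLike = automation === 'human';
  const benign = risk === 'benign';
  if (humanLike && benign) return 'allow';  // Bottom-left: Trusted
  if (humanLike) return 'challenge';        // Top-left: Risky
  if (benign) return 'monitor';             // Bottom-right: Neutral
  return 'block';                           // Top-right: Malicious
}
```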
## Final Classifications
The four dimensions combine into one of 11 final classifications:
### Trusted (Green)
| Classification | Description | Typical Action |
|---|---|---|
| Human | Interactive behavior, human-like patterns, benign cohort | Allow |
| Search Engine | Declared search engine crawler (Googlebot, etc.) | Allow, skip billing |
| Known Agent | Declared AI agent with benign behavior | Allow with monitoring |
### Neutral (Amber)
| Classification | Description | Typical Action |
|---|---|---|
| Scraper | Declared fetch tool with benign cohort | Allow with rate limits |
| Headless Fetch | Browser UA but no behavioral signals | Monitor, may challenge |
| Suspicious | Interactive but anomalous behavior patterns | Challenge or monitor |
### Malicious (Red)
| Classification | Description | Typical Action |
|---|---|---|
| Bad Bot | Bot-like behavior with malicious cohort | Block or challenge |
| Stealth Bot | Bot-like behavior trying to appear human | Block or challenge |
| Bad Agent | Declared AI agent exceeding rate limits | Block |
| Bad Scraper | Fetch tool with malicious cohort | Block |
| Abusive Human | Human-like behavior but malicious cohort | Challenge |
## Classification Flow

```
User Agent Analysis
         │
         ▼
┌──────────────────┐
│ search_engine?   │──yes──▶ search_engine
└────────┬─────────┘
         │ no
         ▼
┌──────────────────┐
│ ai_agent?        │──yes──▶ cohort malicious? ──yes──▶ bad_agent
└────────┬─────────┘                  │
         │ no                         └──no──▶ known_agent
         ▼
┌──────────────────┐
│ fetch_tool?      │──yes──▶ cohort malicious? ──yes──▶ bad_scraper
└────────┬─────────┘                  │
         │ no                         └──no──▶ scraper
         ▼
┌──────────────────┐
│ no behavior?     │──yes──▶ headless_fetch
└────────┬─────────┘
         │ has behavior
         ▼
┌──────────────────┐
│ automation=bot?  │──yes──▶ cohort malicious? ──yes──▶ bad_bot
└────────┬─────────┘                  │
         │ no                         └──no──▶ stealth_bot
         ▼
┌──────────────────┐
│ automation=      │──yes──▶ suspicious
│ suspicious?      │
└────────┬─────────┘
         │ no (human-like)
         ▼
       human
```
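The flow above can be condensed into a single function. This is a sketch of the documented decision order, not the BotSigged engine itself; the input shape simply mirrors the four dimensions:

```javascript
// Walks the decision tree in the order shown in the diagram above.
// Inputs correspond to the four dimensions: UA category, behavior
// presence, automation level, and cohort risk.
function classify({ uaCategory, behavior, automation, cohort }) {
  const bad = cohort === 'malicious';
  if (uaCategory === 'search_engine') return 'search_engine';
  if (uaCategory === 'ai_agent') return bad ? 'bad_agent' : 'known_agent';
  if (uaCategory === 'fetch_tool') return bad ? 'bad_scraper' : 'scraper';
  if (behavior === 'none') return 'headless_fetch';
  if (automation === 'bot') return bad ? 'bad_bot' : 'stealth_bot';
  if (automation === 'suspicious') return 'suspicious';
  return bad ? 'abusive_human' : 'human';
}
```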
## Responding to Classifications
### In the SDK

```javascript
botsigged.onScoreUpdate((data) => {
  switch (data.classification) {
    case 'human':
    case 'search_engine':
    case 'known_agent':
      // Trusted - allow normally
      break;

    case 'scraper':
    case 'headless_fetch':
    case 'suspicious':
      // Neutral - consider rate limiting or challenges
      showCaptcha();
      break;

    case 'bad_bot':
    case 'stealth_bot':
    case 'bad_agent':
    case 'bad_scraper':
    case 'abusive_human':
      // Malicious - block or heavily restrict
      blockAccess();
      break;
  }
});
```
### Server-Side Verification
Always verify classifications server-side for sensitive operations:
```javascript
// API endpoint
app.post('/api/checkout', async (req, res) => {
  const session = await botsigged.getSession(req.body.sessionId);
  const blocked = ['bad_bot', 'stealth_bot', 'bad_agent', 'bad_scraper'];

  if (blocked.includes(session.classification)) {
    return res.status(403).json({ error: 'Access denied' });
  }

  // Process checkout...
});
```
## Rate Limits and Classification Changes
Classifications can change during a session based on:
- Behavioral signals - As more interactions occur, the automation score updates
- Cohort violations - If the session’s fingerprint or IP cohort exceeds rate limits
- Attack detection - If patterns matching known attacks are detected
For example, a known_agent that exceeds rate limits will be reclassified as bad_agent.
### Rate Limit Thresholds
Default cohort rate limits (configurable per site):
| Cohort Type | Requests/Min | Requests/5Min | Sessions/Hour |
|---|---|---|---|
| Fingerprint | 60 | 200 | 20 |
| Canvas Hash | 300 | 1000 | - |
| WebGL Hash | 300 | 1000 | - |
| IP Network (/24) | 100 | 400 | - |
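When tuning these per site, the defaults in the table could be represented as a configuration object like the following; the field names here are illustrative, not the real configuration schema:

```javascript
// Default cohort rate limits from the table above (illustrative schema).
// Cohorts without a session cap simply omit sessionsPerHour.
const cohortRateLimits = {
  fingerprint: { perMinute: 60,  per5Minutes: 200,  sessionsPerHour: 20 },
  canvasHash:  { perMinute: 300, per5Minutes: 1000 },
  webglHash:   { perMinute: 300, per5Minutes: 1000 },
  ipNetwork24: { perMinute: 100, per5Minutes: 400 },
};
```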
## Explorer Filtering
In the BotSigged Explorer, you can filter sessions by classification:
- Use the Classification checkboxes to show specific types
- Classifications are grouped by severity (Trusted, Neutral, Malicious)
- Combine with score ranges and date filters for detailed analysis
## Classification Quadrant
This visualization shows how classifications map across automation level and risk dimensions.