Security Detection & Response Maturity Model
Reset
Tuning & Optimization
Use Case Implementation
Data Normalization
Use Case Discovery
Automation & Response
Data Onboarding & Governance
Security Analysis & Engineering
Security SIEM SME
SIEM Engineering
Security Engineering
Create and Deploy Playbooks
Authentication Logs (Identity & Access)
Enable or Create Detections
Data Governance
Suppress or Exclude Noisy Detections
Authentication Datamodel
Security Use Case Workshop
Common Information Model (CIM)
Endpoint & Malware Datamodel
Endpoint Security Logs
Create Response Plans & Workflows
Create Allow Lists and Filters
Assign Risk Objects & Set Base Score (RBA)
SIEM Replacement Workshop
Network Traffic, IDS & Session Datamodel
Network Security Logs
Observed Indicator Management
Create Risk Factors (RBA)
Track Detection Efficacy Over Time
Security Detection Optimization Workshop
Enrichment (Assets, Identities, Threat Intel)
Enrichment Datamodels
Enrich with MITRE Annotations
Adjust Throttling and Scheduling
Integrate with External Systems and Tools
Risk-Based Alerting (RBA) Workshop
Web, Email, Updates
Cloud, Email Activity & Audit Logs
Automate Enrichment Actions (SOAR)
Test & Validate
Tune Risk Scoring Thresholds (RBA)
SOAR Implementation Workshop
DNS / DHCP Logs
Network Resolution Datamodel
Use Cases Analysis Dashboard
Automated Alert Routing by Use Case (SOAR)
Vulnerabilities Datamodel
Threat Vulnerability Management Logs
Field Mapping and Classification (SOAR)
Custom Data Model
Custom Data Onboarding
LEGEND
Low
Data Source Priority
Medium
Critical
High
This task is meant to be continuous
Required Skill Set
Adjust Throttling and Scheduling
Optimize how often detections run and how frequently alerts can be triggered by applying throttling and scheduling controls. This ensures detections provide timely insights without overwhelming analysts or systems with redundant alerts.
Why we need it?
- Prevents duplicate or excessive alerts from the same activity, reducing alert fatigue.
- Balances performance and timeliness by scheduling detections to run at the right intervals.
- Improves efficiency by surfacing only actionable alerts while suppressing noise.
- Ensures detections align with business risk tolerance (e.g., high-risk rules run more frequently than low-risk ones).
- Preserves Splunk system resources, keeping searches performant and cost-efficient.
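The suppression idea behind throttling can be sketched in a few lines of Python. This is an illustrative model, not Splunk's implementation: the detection key and the 60-minute window are hypothetical choices.

```python
from datetime import datetime, timedelta

class AlertThrottler:
    """Suppress repeat alerts for the same key within a time window."""
    def __init__(self, window_minutes=60):
        self.window = timedelta(minutes=window_minutes)
        self.last_fired = {}  # key -> timestamp of the last emitted alert

    def should_fire(self, key, now):
        last = self.last_fired.get(key)
        if last is not None and now - last < self.window:
            return False  # still inside the suppression window
        self.last_fired[key] = now
        return True

t = AlertThrottler(window_minutes=60)
base = datetime(2024, 1, 1, 9, 0)
fired = [t.should_fire("brute_force:10.0.0.5", base + timedelta(minutes=m))
         for m in (0, 10, 45, 90)]
# Only the first event and the one 90 minutes later fire.
```

Repeat hits inside the window are swallowed, which is exactly how throttling keeps one noisy source from generating dozens of identical alerts.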
Network Security Logs - Critical
Why we need it?
They provide visibility into traffic flows, firewall activity, and intrusion attempts, making them essential for detecting command-and-control, data exfiltration, and lateral movement. Network logs also validate segmentation policies, support compliance reporting, and maximize ROI by ensuring threats are detected even when endpoints are compromised.
Example
- Palo Alto Networks – Next-gen firewall logs for application, user, and threat visibility.
- Fortinet FortiGate – Unified firewall logs covering traffic, VPN, and threat prevention.
- Cisco ASA / Firepower – Firewall and IPS logs for connection, intrusion, and malware detection.
- Check Point – Security gateway logs for firewall, VPN, and advanced threat protections.
Vulnerabilities Data Model - Low
Why the CIM exists?
The CIM (Common Information Model) helps you normalize your data to match a common standard:
- Common Field Names & Tags – Standardizes equivalent events across sources/vendors.
- Schema-on-the-fly – CIM applies at search time without altering raw machine data.
- Normalization – Once data is normalized, you can build unified reports, correlation searches, and dashboards.
- App Integration – Normalized data powers dashboards in Splunk ES, PCI App, and other CIM-compliant apps.
- Visibility – Only data normalized to CIM fields and tags appears in these dashboards and reports.
Why we need it?
- Context, not Detection – vulnerability logs don’t directly generate detections, they enrich investigations.
- Dependency on Other DMs – findings only gain value when correlated with Auth, Endpoint, or Asset/Identity data.
- Periodic, Not Real-Time – scans are scheduled, making them less actionable for immediate threat detection.
- Compliance Support – useful for audits and demonstrating patch/remediation progress.
- Adds ROI via Enrichment – helps prioritize alerts on vulnerable systems but provides limited standalone security coverage.
Automated Alert Routing by Use Case
Route alerts to the right team, queue, or analyst based on the use case, severity, source system, and required authority. Splunk SOAR auto-creates tickets, escalates critical threats, and measures routing quality to remove bottlenecks and speed accurate responses.
Who is it for?
- SOC triage teams and incident responders.
- IR leads and on-call coordinators.
- IT/Cloud/Network remediation teams.
- ITSM owners and security leaders needing throughput metrics.
Why we need it?
- Delivers the right alerts to the right people at the right time, reducing delays.
- Cuts misclassification and rework; accelerates MTTR.
- Scales triage without adding headcount; improves analyst utilization.
- ROI: fewer manual handoffs, faster resolution, and better use of existing tools and teams.
What you get?
- Automatic ticket creation for the correct remediation team.
- Severity- and source-aware routing logic (queues, on-call, geo).
- Auto-escalation of critical threats to senior handlers.
- Routing performance dashboards (misroutes, SLAs, cycle time).
- Full audit trail of ownership and handoffs
Observed Indicator Management
Leverage SOAR to capture and track every indicator observed in events (IPs, domains, hashes, URLs, emails). Centralizing these observables enables KPI calculation, historical correlation/retro-hunt, and the growth of an internal threat-intel corpus—turning raw indicators into actionable context across cases and detections.
Who is it for?
- SOC analysts and incident responders.
- Threat intel analysts and hunters.
- Detection engineers / SIEM–SOAR platform owners.
- Security leaders needing program KPIs and trends.
Why we need it?
- Turns raw IOCs into actionable intel via enrichment and history.
- Exposes low-and-slow campaigns through cross-case/time correlation.
- Quantifies efficacy (prevalence, hit rate, recidivism, time-to-block) to tune detections and controls.
- Accelerates response by pushing block/allow decisions from a single source of truth.
- ROI: fewer redundant investigations, shorter dwell time, and lower analyst workload.
What you get?
- Central, enriched indicator catalog (IPs, domains, hashes, URLs, emails).
- Historical correlation/retro-hunt across cases and time.
- Lightweight workflows and integrations to push IOCs to SIEM/EDR/firewalls.
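The core of a central indicator catalog is deduplication plus first-seen/last-seen and prevalence tracking. A minimal sketch (the record shape is an assumption, not a SOAR schema):

```python
from datetime import datetime

class IndicatorCatalog:
    """Central observable store: tracks first/last seen and prevalence per IOC."""
    def __init__(self):
        self.indicators = {}  # (type, normalized value) -> record

    def observe(self, ioc_type, value, case_id, when):
        key = (ioc_type, value.lower())  # normalize so Evil.example == evil.example
        rec = self.indicators.setdefault(key, {
            "first_seen": when, "last_seen": when, "cases": set()})
        rec["first_seen"] = min(rec["first_seen"], when)
        rec["last_seen"] = max(rec["last_seen"], when)
        rec["cases"].add(case_id)
        return rec

    def prevalence(self, ioc_type, value):
        rec = self.indicators.get((ioc_type, value.lower()))
        return len(rec["cases"]) if rec else 0

cat = IndicatorCatalog()
cat.observe("domain", "Evil.example", "case-1", datetime(2024, 1, 1))
cat.observe("domain", "evil.example", "case-2", datetime(2024, 2, 1))
# The same domain surfacing in two cases a month apart is the
# cross-case correlation signal a retro-hunt looks for.
```

Prevalence across cases and time is what turns a one-off IOC into evidence of a low-and-slow campaign.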
Create and Deploy Playbooks
Design and implement automated response playbooks for common incident types such as phishing, malware, and brute-force attacks. These playbooks define standardized workflows, automate repetitive investigative and containment steps, and evolve through version control to ensure continuous improvement. By operationalizing repeatable processes, playbooks accelerate response, reduce analyst workload, and strengthen SOC efficiency.
Who is it for?
- Security Operations Center (SOC) teams managing high alert volumes.
- Incident responders handling repetitive investigations.
- Security leaders seeking faster, more consistent response at scale.
Why we need it?
- Standardizes response to common incidents (phishing, malware, brute-force).
- Automates repetitive tasks (WHOIS lookups, IP reputation checks, host isolation).
- Reduces mean time to detect (MTTD) and mean time to respond (MTTR).
- Improves consistency and accuracy by following predefined steps.
- Enables version control to track improvements and adapt to new threats.
- ROI: Increases SOC efficiency by freeing analysts from manual work, allowing focus on high-value investigations and reducing operational costs.
What you get?
- Automated workflows for common incidents.
- Faster, consistent investigations.
- Evolving playbooks with version control.
- Improved SOC efficiency and ROI.
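A playbook is essentially an ordered, versioned sequence of steps sharing a working context. The sketch below is a toy phishing playbook; the step names, the reputation set, and the version string are all illustrative stand-ins for real SOAR actions and integrations:

```python
# Hypothetical phishing playbook: ordered steps share a context dict,
# and a version string supports the change tracking described above.

def extract_urls(alert, ctx):
    ctx["urls"] = alert.get("urls", [])

def check_reputation(alert, ctx):
    # stand-in for a real reputation lookup (e.g. a threat-intel API call)
    known_bad = {"http://evil.example/login"}
    ctx["bad_urls"] = [u for u in ctx["urls"] if u in known_bad]

def contain(alert, ctx):
    # containment step: quarantine only if something malicious was found
    ctx["quarantined"] = bool(ctx["bad_urls"])

PHISHING_PLAYBOOK = {"version": "1.2.0",
                     "steps": [extract_urls, check_reputation, contain]}

def run_playbook(playbook, alert):
    ctx = {}
    for step in playbook["steps"]:
        step(alert, ctx)
    return ctx

result = run_playbook(PHISHING_PLAYBOOK,
                      {"urls": ["http://evil.example/login", "http://ok.example"]})
```

Every alert of the same type flows through identical steps, which is where the consistency and MTTR gains come from.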
Network Data Model - Critical
Why the CIM exists?
The CIM (Common Information Model) helps you normalize your data to match a common standard:
- Common Field Names & Tags – Standardizes equivalent events across sources/vendors.
- Schema-on-the-fly – CIM applies at search time without altering raw machine data.
- Normalization – Once data is normalized, you can build unified reports, correlation searches, and dashboards.
- App Integration – Normalized data powers dashboards in Splunk ES, PCI App, and other CIM-compliant apps.
- Visibility – Only data normalized to CIM fields and tags appears in these dashboards and reports.
Why it’s critical?
- Enables Threat Detections – command-and-control, data exfiltration, beaconing, lateral movement.
- Prevents Blind Spots – ensures complete visibility into traffic flows, protocols, and session activity.
- Ensures Consistency – normalizes logs from firewalls, proxies, IDS/IPS, and DNS/DHCP sources.
- Supports Compliance & Governance – provides audit-ready evidence of network activity and policy enforcement.
- Powers ES Dashboards – accurate MITRE ATT&CK mapping, correlation searches, and KPIs for network visibility.
- Maximizes ROI – turns raw network telemetry into actionable insights for detection and investigation
Splunk Common Information Model (CIM)
The Splunk Common Information Model (CIM) is a standardized framework that ensures data from different sources is treated consistently, making it easier to extract value and insights. Delivered as an add-on, CIM includes prebuilt data models, documentation, and tools to normalize data for efficient and accurate analysis at search time.
Why we need it?
- ROI & scale: One detection, many sources. Normalizing fields (via CIM) lets a single analytic work across Windows, Linux, VPN, EDR, cloud, etc.—cutting duplicate rule-building and maintenance.
- Faster investigations: Consistent field names power shared dashboards/playbooks, speeding analyst triage and reducing handoffs.
- Better performance: CIM-aligned data models enable tstats searches and data model acceleration, shrinking search time and compute costs.
- Portability & resilience: Content from Splunk ES, ESCU, and community sources “just works” when data is CIM-mapped, lowering time-to-value.
- Clear reporting: Standard fields make MITRE coverage, KPIs, and risk scoring reliable across diverse data.
Example:
- Authentication model: Map src, user, app, and event codes into CIM → one “suspicious logon” rule covers AD, AzureAD, Okta, and VPN instead of four vendor-specific rules.
- Endpoint model: Normalize process fields (process_name, parent_process, hash) → a single “living-off-the-land binary” detection runs across Sysmon, Carbon Black, and CrowdStrike without rewrites.
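The "one rule, many sources" idea can be sketched as per-source rename maps. The Windows and Okta field names below are real-world-style examples, but the maps themselves are simplified illustrations of what CIM add-ons do at search time:

```python
# Hypothetical vendor-to-CIM field maps: each source's native field names
# are renamed to the CIM Authentication model's names (user, src, ...).
FIELD_MAPS = {
    "windows": {"TargetUserName": "user", "IpAddress": "src"},
    "okta":    {"actor.alternateId": "user", "client.ipAddress": "src"},
}

def normalize(source, event):
    """Rename a raw event's fields to CIM names; unknown fields pass through."""
    mapping = FIELD_MAPS[source]
    return {mapping.get(k, k): v for k, v in event.items()}

win = normalize("windows", {"TargetUserName": "alice", "IpAddress": "10.1.2.3"})
okta = normalize("okta", {"actor.alternateId": "alice",
                          "client.ipAddress": "10.1.2.3"})
# Both events now expose the same `user` and `src` fields, so a single
# "suspicious logon" rule can cover both sources without rewrites.
```

Once both sources emit `user` and `src`, one correlation search covers them all, which is the ROI argument in the first bullet above.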
Enrichments - High
Why we need it?
- Assets & Identities (A&I) – Provide business context by linking events to users, devices, and critical systems. This enables smarter detections, reduces false positives, and helps prioritize alerts based on what matters most to the organization.
- Threat Intelligence – Adds external context such as known bad IPs, domains, or hashes. It improves detection of emerging threats, speeds investigations, and strengthens proactive defense.
Together, enrichments transform raw events into context-rich, actionable insights that improve accuracy, reduce noise, and maximize ROI.
Example
- LDAP – Directory logs for user authentication, group membership, and access control.
- Azure – Cloud identity and activity logs for sign-ins, apps, and resources.
- CMDB – Asset inventory providing system ownership, classification, and criticality.
- Recorded Future – Threat intelligence feed with malicious IPs, domains, and indicators.
Field Mapping and Classification
Normalize event data by mapping SIEM fields to your SOAR schema and types. In Splunk environments, this typically means translating CIM fields to the SOAR event format (e.g., CEF/platform schema), setting the right data types, and applying transforms. e.g., via Splunk App for SOAR Export, map source_ip → sourceAddress (type: ip); in XSOAR, map “XDR Detection Time” → detection_time using TimeStampToDate.
Who is it for?
- Detection engineers and SOAR automation/playbook authors.
- SIEM/SOAR platform owners and integrations engineers.
- SOC analysts who rely on correct context for ad-hoc actions.
Why we need it?
- Ensures playbooks receive the expected fields and correct types, reducing failures.
- Speeds development and maintenance through abstraction and modularity.
- Improves context for enrichment and ad-hoc actions, boosting accuracy.
- ROI: faster time-to-value, fewer integration fixes, and more reliable automation.
What you get?
- A standardized field dictionary across SIEM and SOAR.
- Reusable mappers/classifiers with type-correct fields (IP, user, domain, timestamp).
- Built-in transforms (e.g., timestamp conversions) that speed playbook development.
- Less brittle playbooks that work across data sources and tools
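A reusable mapper pairs each SIEM field with a SOAR target field and a type-correcting transform. The mapper table below is a hypothetical sketch of the `source_ip → sourceAddress` and timestamp-conversion examples above, not an actual export configuration:

```python
from datetime import datetime, timezone
import ipaddress

def to_ip(v):
    """Validate and canonicalize an IP; raises on malformed input."""
    return str(ipaddress.ip_address(v))

def to_iso(epoch):
    """Convert an epoch-seconds string to an ISO-8601 UTC timestamp."""
    return datetime.fromtimestamp(float(epoch), tz=timezone.utc).isoformat()

# Hypothetical mapper: CIM field -> (SOAR field, type transform).
MAPPER = {
    "src":   ("sourceAddress",       to_ip),
    "dest":  ("destinationAddress",  to_ip),
    "user":  ("destinationUserName", str),
    "_time": ("detection_time",      to_iso),
}

def map_event(cim_event):
    out = {}
    for cim_field, (soar_field, transform) in MAPPER.items():
        if cim_field in cim_event:
            out[soar_field] = transform(cim_event[cim_field])
    return out

mapped = map_event({"src": "10.0.0.8", "user": "bob", "_time": "1700000000"})
# mapped["sourceAddress"] is a validated IP; mapped["detection_time"]
# is an ISO-8601 UTC string a playbook can consume directly.
```

Putting validation in the transform is what makes downstream playbooks less brittle: a malformed IP fails loudly at mapping time instead of silently breaking an automation step later.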
Test & Validate
Test & Validate ensures that newly created detections work as intended before they are fully deployed. This step involves running simulations or replaying historical data to confirm the detection’s accuracy, performance, and alignment with its intended MITRE ATT&CK coverage. It helps verify that alerts trigger under the right conditions, minimize false positives, and provide sufficient context for investigation.
Why we need it?
- Ensures detections and risk rules work as intended before they are put into production.
- Reduces false positives and false negatives, improving analyst efficiency and trust in alerts.
- Validates alignment with business context and MITRE ATT&CK coverage, proving real value rather than just theoretical detection.
- Provides measurable ROI by confirming that time spent building detections results in actionable, high-quality alerts.
- Helps identify gaps and tuning opportunities early, preventing wasted resources and alert fatigue.
Example - Brute Force Detection Validation
- A new detection is created to identify brute-force logon attempts.
- During validation, the team simulates an attack by generating multiple failed logons against a controlled test account.
- Instead of flooding the system with hundreds of alerts, the detection correlates the activity.
- A single, high-priority alert is raised, clearly flagging the brute-force attempt.
- The validation confirms the rule works as intended: it identifies real threats while reducing noise for analysts.
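The correlation behavior being validated can be sketched directly: many failures collapse into one alert. The threshold, window, and field names here are illustrative choices, not the production rule:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def correlate_failed_logons(events, threshold=10, window=timedelta(minutes=5)):
    """Raise one alert per (src, user) whose failed-logon count in the
    window reaches the threshold, instead of one alert per event."""
    alerts = []
    buckets = defaultdict(list)  # (src, user) -> recent failure timestamps
    for ev in sorted(events, key=lambda e: e["time"]):
        if ev["action"] != "failure":
            continue
        key = (ev["src"], ev["user"])
        buckets[key].append(ev["time"])
        # keep only timestamps still inside the window
        buckets[key] = [t for t in buckets[key] if ev["time"] - t <= window]
        if len(buckets[key]) == threshold:  # fire exactly once at the threshold
            alerts.append({"src": ev["src"], "user": ev["user"],
                           "count": threshold, "severity": "high"})
    return alerts

# Simulated validation run: 200 failed logons against one test account.
base = datetime(2024, 1, 1, 12, 0)
events = [{"time": base + timedelta(seconds=i), "action": "failure",
           "src": "203.0.113.7", "user": "svc-test"} for i in range(200)]
alerts = correlate_failed_logons(events)
# 200 failures collapse into a single high-priority alert.
```

This mirrors the validation outcome described above: the simulated attack produces one actionable alert rather than hundreds of duplicates.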
Cloud, Email Activity & Audit Logs - High
Why we need it?
Cloud logs track user actions, configuration changes, and resource usage across platforms. Email activity logs provide visibility into communication patterns to detect phishing, account compromise, and misuse. Audit logs record access and administrative actions, ensuring compliance and maintaining a trusted audit trail.
Example
- AWS CloudTrail – Tracks user activity and API calls across AWS services.
- Azure Activity Logs – Records sign-ins, resource changes, and admin actions in Azure.
- Google Workspace Admin Logs – Captures user and admin activity across Gmail, Drive, and Workspace apps.
- Office 365 Audit Logs – Provides visibility into user actions, email activity, and admin operations in Microsoft 365.
Create Risk Factors (RBA)
Risk factors are attributes applied to assets, identities, or detections that increase or decrease their overall risk score in Splunk’s Risk-Based Alerting framework. Examples include criticality of a server, privileged accounts, repeat offenses, or known vulnerabilities. By creating and applying these factors, detections can be weighted with business and threat context, producing smarter and prioritized alerts.
Why we need it?
- Contextual Prioritization – ensures that alerts tied to critical assets or privileged users are escalated faster.
- Noise Reduction – prevents low-value alerts from overwhelming analysts by weighting what truly matters.
- Business Alignment – ties detections directly to organizational risk (e.g., compliance systems, financial apps).
- Investigation Acceleration – allows analysts to focus first on high-risk entities, improving efficiency.
- Maximized ROI – makes detections smarter, ensuring that time and resources are spent on the threats with the greatest potential impact.
How Risk Factoring Works
Risk factors in Splunk ES are rules that adjust risk scores dynamically based on user, asset, or event characteristics. Instead of relying only on raw detection scores, risk factors modify them, raising or lowering scores depending on context such as asset criticality, user role, or event category.
Final Risk Score = (Business Impact × Detection Accuracy) × Risk Factor
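A worked example of the formula makes the weighting concrete. The impact values and the 1.5 multiplier are illustrative numbers, not Splunk defaults:

```python
def final_risk_score(business_impact, detection_accuracy, risk_factor=1.0):
    """Final Risk Score = (Business Impact × Detection Accuracy) × Risk Factor."""
    return (business_impact * detection_accuracy) * risk_factor

# A medium-confidence detection (accuracy 0.6) on a critical finance
# server (impact 80), boosted by a hypothetical "privileged account"
# risk factor that multiplies risk by 1.5:
score = final_risk_score(80, 0.6, risk_factor=1.5)   # 72.0
# The same detection on a low-value lab host (impact 20), no extra factor:
baseline = final_risk_score(20, 0.6)                 # 12.0
```

The same detection produces a score six times higher on the critical asset, which is exactly the contextual prioritization the bullets above describe.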
Threat Vulnerability Management Logs - Low
Why we need it?
They provide visibility into system and application weaknesses, track remediation progress, and validate patch effectiveness. These logs are essential for reducing attack surface, prioritizing high-risk exposures, demonstrating compliance, and strengthening overall security posture.
Example:
- Scan Results – Data from tools like Qualys, Tenable, or Rapid7 showing discovered vulnerabilities, their severity, and affected assets.
- Patch Status Logs – Records of applied or missing patches on systems.
- Asset Risk Scores – Prioritization data indicating which vulnerabilities pose the highest risk.
- Remediation Activity Logs – Tracking fixes, patch deployments, or configuration changes.
- Exception & Waiver Logs – Documentation of approved exceptions for certain vulnerabilities.
- Historical Trends – Logs showing vulnerability counts and severity changes over time.
Security Detection Optimization Workshop
What is it?
A 5-day expert-led workshop that reviews data quality, CIM status, and detections against MITRE ATT&CK. We provide tuning recommendations, identify RBA opportunities, and help reduce costs without losing visibility.
Why we do it?
To maximize the value of existing detections by closing gaps, reducing noise, and ensuring data sources and alerts are aligned with business priorities.
Who is this for?
Customers with existing detections who need help identifying gaps, tuning rules, validating data source value, and reducing alert noise—refining content for accuracy and alignment with business priorities.
What you get
- Coverage – detections by data source & MITRE tactics
- Optimization – tune noisy alerts & costly searches
- RBA – evaluate and refine implementation
- Data Value – surface high-value, drop low-value sources
- Foundations – improve onboarding, CIM, and best practices
Authentication and Access Logs - Critical
Why we need it?
Onboarding authentication and access logs into the SIEM provides visibility into user logins, access attempts, and identity-related activities across on-premises and cloud environments. This data is essential for detecting suspicious behavior, investigating incidents, enforcing compliance, and supporting identity-based threat detection.
Example
- Windows Security Logs – Event ID 4624 (successful login), Event ID 4625 (failed login)
- Active Directory – User account creation, privilege changes, group membership modifications
- Azure AD / Entra ID – Conditional access policy evaluations, MFA prompts, risky sign-ins
- Okta – Authentication success/failure, application access, MFA events
- Ping Identity – Single sign-on activity, authentication errors, policy denials
- Duo – MFA approvals, push notification denials, enrollment activity
Use Cases Analysis Dashboard
As part of the Use Case Implementation stage, we provide an executive overview dashboard that consolidates all active and planned use cases into a single view. Designed for leadership and stakeholders, this dashboard highlights progress, coverage, and priorities, turning complex detection details into clear, actionable insights for informed decision-making.
Why we need it?
- Translates technical detection work into business-relevant insights for leadership.
- Provides visibility into what’s already covered, what’s planned, and where gaps remain.
- Enables data-driven decisions on security priorities, investments, and resource allocation.
- Demonstrates ROI by showing measurable progress in detection and response maturity.
- Builds confidence with executives and stakeholders that security efforts align with business risks.
Authentication Data Model - Critical
Why the CIM exists?
The CIM (Common Information Model) helps you normalize your data to match a common standard:
- Common Field Names & Tags – Standardizes equivalent events across sources/vendors.
- Schema-on-the-fly – CIM applies at search time without altering raw machine data.
- Normalization – Once data is normalized, you can build unified reports, correlation searches, and dashboards.
- App Integration – Normalized data powers dashboards in Splunk ES, PCI App, and other CIM-compliant apps.
- Visibility – Only data normalized to CIM fields and tags appears in these dashboards and reports.
Why it’s critical?
- Enables Core Detections – brute force, credential stuffing, suspicious logins, privilege escalation
- Prevents Blind Spots – avoids missing or misclassified authentication events
- Ensures Consistency – normalizes logins across sources (AD, LDAP, Azure AD, Okta, VPN, etc.)
- Supports Compliance – reliable records for audits and reporting
- Powers ES Dashboards – accurate KPIs, correlation searches, and risk-based alerting
- Maximizes ROI – turns raw login data into actionable security insights
DNS / DHCP Logs - Medium
Why we need it?
DNS logs reveal domain queries and connections, making them valuable for detecting phishing, malware callbacks, and data exfiltration. DHCP logs link IP addresses to specific devices and users, providing attribution and context for investigations. Together, they connect network activity to identities, strengthen detections, and support compliance with audit-ready visibility.
Example
- Infoblox – Centralized DNS, DHCP, and IPAM logs for network visibility.
- Microsoft DNS – Tracks domain queries and name resolution in Windows environments.
- Cisco Umbrella – Cloud-based DNS security logs for blocking malicious domains.
- AWS Route 53 Logs – Monitors DNS queries and routing activity in AWS.
- Windows DHCP – Assigns and logs IP addresses to devices for user attribution.
Custom Applications & Data Logs - Low
Why we need it?
It enables organizations to bring in unique, non-standard data sources that are critical to their environment but not covered by default integrations. By parsing, mapping, and normalizing this data into the SIEM’s models (e.g., Splunk CIM), custom onboarding ensures complete visibility, closes detection gaps, and maximizes ROI—turning otherwise siloed data into actionable intelligence for threat detection, compliance, and investigations.
Example
- Industry Applications – Epic (healthcare), SAP (finance/ERP), Oracle E-Business Suite.
- Cloud / SaaS Apps – ServiceNow, Salesforce, Workday.
- Homegrown / Legacy Systems – In-house web portals, legacy authentication servers, proprietary databases.
- IoT / OT Devices – Building management systems, SCADA logs, smart sensors.
- Security Tools – Smaller niche firewalls, custom honeypots, or third-party monitoring systems without Splunk TA support.
Risk-Based Alerting (RBA) Workshop
What is it?
A 3-day expert-led workshop that evaluates existing detections, maps them to MITRE ATT&CK, reviews risk scoring, and identifies ways to cut alert fatigue, improve prioritization, and boost visibility.
Why we do it?
To replace noisy, alert-heavy workflows with smarter, risk-based detections that cut wasted effort, accelerate investigations, and maximize ROI by focusing resources on the threats that matter most.
Who is this for?
Organizations that want to move beyond simple alert scoring to smarter, context-driven detections. Ideal for customers ready to reduce noise, prioritize threats intelligently, and strengthen their security posture with a modern risk-based approach.
What do you get?
- A risk scoring framework tailored to your environment
- Mapped detections to MITRE ATT&CK
- Assets and identities normalization to enrich detections and risk context
- A clear path to enable smarter, context-driven detections
Custom Data Model - Low
Why the CIM exists?
The CIM (Common Information Model) helps you normalize your data to match a common standard:
- Common Field Names & Tags – Standardizes equivalent events across sources/vendors.
- Schema-on-the-fly – CIM applies at search time without altering raw machine data.
- Normalization – Once data is normalized, you can build unified reports, correlation searches, and dashboards.
- App Integration – Normalized data powers dashboards in Splunk ES, PCI App, and other CIM-compliant apps.
- Visibility – Only data normalized to CIM fields and tags appears in these dashboards and reports.
Why we need it?
- Close Visibility Gaps – capture logs from industry apps, legacy systems, IoT/OT, or niche tools not covered by Splunk CIM.
- Tailored to the Business – normalize data that reflects unique business processes, compliance needs, or sector-specific threats.
- Enable Correlation – bring custom sources into alignment with standard data models for dashboards, RBA, and investigations.
- Support Compliance – integrate specialized logs (e.g., healthcare, finance, government) to meet regulatory reporting requirements.
- Maximize ROI – ensure all ingested data, even proprietary or custom, is actionable for detections and investigations rather than siloed.
Enrichments Data Models - High
Why the CIM exists?
The CIM (Common Information Model) helps you normalize your data to match a common standard:
- Common Field Names & Tags – Standardizes equivalent events across sources/vendors.
- Schema-on-the-fly – CIM applies at search time without altering raw machine data.
- Normalization – Once data is normalized, you can build unified reports, correlation searches, and dashboards.
- App Integration – Normalized data powers dashboards in Splunk ES, PCI App, and other CIM-compliant apps.
- Visibility – Only data normalized to CIM fields and tags appears in these dashboards and reports.
Why it’s Important?
- Enables Context-Aware Detections – ties events to users, devices, and critical assets for smarter detection and prioritization.
- Supports Risk-Based Alerting (RBA) – essential for weighted risk scoring and accurate threat prioritization.
- Improves Investigations – accelerates triage by linking alerts directly to owners and impacted systems.
- Meets Compliance Needs – provides identity-to-activity mapping for audit trails and accountability.
- Maximizes ROI – ensures ingested data is enriched with context, driving higher detection accuracy and efficiency.
- Strengthens Threat Intel Correlation – enables faster matching of IOCs to specific assets and identities.
- Prioritizes Threats Intelligently – helps analysts focus on attacks targeting the most critical systems and users
Data Governance
Defines the policies, ownership, and controls that keep security data accurate, complete, and compliant across its lifecycle. It sets classification and access rules, retention/archival and lineage, and enforces quality (schema, timestamps, deduplication) and source integrity. Data governance improves the return on investment (ROI) of a Security Information and Event Management (SIEM) system by ensuring the data flowing into it is high-quality, necessary, and consistent.
Why we need it?
- Trustworthy detections & reporting: Enforces completeness, accuracy, and consistent data flow.
- Compliance assurance: Clear retention, access, lineage, and audit trails.
- Minimizes storage and licensing fees: Data governance policies prevent the ingestion of "dark data" that has no security value.
- Reduced risk: Source integrity checks, ownership, and change control prevent gaps.
Example:
- Data Completeness — Verify all required fields are present; no gaps.
- Data Accuracy — Validate/normalize timestamps, IPs, and usernames.
- Retention & Archival — Set policy-aligned retention; archive for audit/forensics.
- Duplicate Log Prevention — De-duplicate events to keep reporting/compliance clean.
- Source Integrity Checks — Monitor source health/tamper to ensure data integrity.
SIEM Replacement Workshop
What is it?
A tailored 14-day engagement designed to help organizations evaluate and plan their transition to a new SIEM platform. The workshop combines the Architecture Workshop and the Security Use Case Workshop.
Why we do it?
- Modernize security with Splunk’s integrated platform
- Align architecture and detections to business risks
- Accelerate value with prescriptive roadmaps
- Maximize ROI by eliminating noise and unused data
Who is this for?
Organizations moving away from legacy SIEMs due to cost, scale, ROI, or limited detection capabilities.
What do you get?
The engagement covers everything in the Security Use Case Workshop, plus:
- Architecture – scalable design aligned to best practices
- Data Sources – onboarding methods, enrichment, retention strategy
- Detections – roadmap mapped to MITRE ATT&CK
- Optimization – eliminate noise, maximize ROI
- Implementation Plan – tasks, timelines, prerequisites
Create Allow Lists and Filters
Define trusted entities, behaviors, or conditions to exclude from risk scoring, reducing false positives and focusing on genuine threats. Establish and maintain controlled allow lists and filtering rules to exclude known, trusted, or acceptable activities from triggering detections. This step ensures that recurring legitimate behaviors do not generate unnecessary alerts or risk scores.
Example:
- Allow list corporate VPN IP ranges to avoid flagging normal remote logins.
- Exclude file transfers from approved backup servers.
- Filter admin account activity from known IT maintenance windows.
Why we need it?
- Prevents repetitive false positives by recognizing approved users, hosts, applications, or processes.
- Reduces noise and alert fatigue, allowing analysts to focus on true suspicious activity.
- Improves accuracy and credibility of detections by clearly distinguishing between normal business operations and potential threats.
- Enhances ROI by saving time and resources that would otherwise be wasted on investigating non-issues.
- Supports a scalable and mature detection framework where business context is continuously incorporated.
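The VPN-range and backup-server examples above can be sketched as a simple pre-detection filter. The network range and hostnames are hypothetical placeholders:

```python
import ipaddress

# Hypothetical allow list: trusted VPN range and approved backup hosts.
ALLOWED_NETWORKS = [ipaddress.ip_network("198.51.100.0/24")]   # corporate VPN
ALLOWED_HOSTS = {"backup01.corp.example", "backup02.corp.example"}

def is_allowed(event):
    """True when the event matches a trusted source and should be excluded."""
    if event.get("host") in ALLOWED_HOSTS:
        return True
    src = event.get("src")
    if src:
        addr = ipaddress.ip_address(src)
        return any(addr in net for net in ALLOWED_NETWORKS)
    return False

events = [{"src": "198.51.100.23"},            # VPN login: filtered out
          {"host": "backup01.corp.example"},   # approved backup transfer
          {"src": "203.0.113.9"}]              # unknown source: kept
suspicious = [e for e in events if not is_allowed(e)]
# Only the unknown source survives the filter and reaches the detection.
```

Keeping the allow list as data (rather than hard-coded conditions in each rule) is what makes it maintainable as the business context changes.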
Web, Email, Updates Data Models - High
Why the CIM exists?
The CIM (Common Information Model) helps you normalize your data to match a common standard:
- Common Field Names & Tags – Standardizes equivalent events across sources/vendors.
- Schema-on-the-fly – CIM applies at search time without altering raw machine data.
- Normalization – Once data is normalized, you can build unified reports, correlation searches, and dashboards.
- App Integration – Normalized data powers dashboards in Splunk ES, PCI App, and other CIM-compliant apps.
- Visibility – Only data normalized to CIM fields and tags appears in these dashboards and reports.
Why They Are Important?
- Enables Threat Detections – reveals malicious browsing, drive-by downloads, and data exfiltration over HTTP/HTTPS.
- Supports Compliance – tracks acceptable use and data protection policies
- Improves Investigations – supplies URL, domain, and content detail to accelerate triage
- Strengthens Threat Intel Correlation – matches IOCs like domains and URLs to web activity
- Detects Phishing & Compromise – identifies suspicious senders, links, and attachments
- Tracks Patch Compliance – visibility into OS and software update status across the enterprise
- Improves Security Posture – closes gaps attackers exploit in unpatched systems
- Enhances Threat Intel Context – links unpatched systems to known exploits in threat feeds
Enable or Create Detections
The first step in use case implementation within the TDIR framework. This stage focuses on building or activating analytic rules (rule-based, anomaly, risk-based, or threat-intel driven) that align to prioritized threats and business risks. By establishing these analytic rules in Splunk, we create actionable visibility, feed the risk framework, and lay the foundation for investigation and response. The TDIR (Threat Detection, Investigation, and Response) framework brings detection, investigation, and response together in one unified workflow, allowing security teams to spot, analyze, and remediate threats faster without the friction of switching between disconnected tools.
Why we need it?
- Translates business risks into actionable detections.
- Provides early visibility into threats before they escalate.
- Ensures alignment with MITRE ATT&CK and industry best practices.
- Builds measurable security value from day one of implementation.
- Forms the baseline for continuous tuning, optimization, and automation.
Track Detection Efficacy Over Time
Continuously measure how well detections perform by monitoring their accuracy, volume, and contribution to risk scores over an extended period. This process identifies which detections consistently add value, which require tuning, and which may need to be retired or replaced.
Example:
Why we need it?
- Ensures detections remain effective as threats, environments, and business priorities evolve.
- Highlights high-value detections that consistently surface true threats, proving ROI to leadership.
- Identifies noisy or ineffective detections early, reducing wasted analyst effort.
- Supports continuous improvement by providing evidence for tuning, refinement, or retirement.
- Builds confidence with stakeholders that the detection program is measurable, adaptive, and aligned with real-world risk.
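One common way to quantify efficacy over time is per-detection precision from triage dispositions. A sketch, assuming a hypothetical triage-log format; the detection names and dispositions are invented:

```python
from collections import defaultdict

# Hypothetical triage log: (detection_name, disposition) per alert over a quarter.
triage_log = [
    ("brute_force_auth", "true_positive"),
    ("brute_force_auth", "true_positive"),
    ("brute_force_auth", "false_positive"),
    ("usb_mass_storage", "false_positive"),
    ("usb_mass_storage", "false_positive"),
]

def efficacy(log):
    """Return per-detection precision (TP / total alerts) to flag tuning candidates."""
    counts = defaultdict(lambda: {"tp": 0, "total": 0})
    for name, disposition in log:
        counts[name]["total"] += 1
        counts[name]["tp"] += disposition == "true_positive"
    return {name: c["tp"] / c["total"] for name, c in counts.items()}

report = efficacy(triage_log)
tuning_candidates = [n for n, p in report.items() if p < 0.5]  # noisy detections
```

Detections that never produce true positives surface as candidates for tuning or retirement.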
Endpoint Data Model - Critical
Why the CIM exists?
The CIM (Common Information Model) helps you normalize your data to match a common standard:
- Common Field Names & Tags – Standardizes equivalent events across sources/vendors.
- Schema-on-the-fly – CIM applies at search time without altering raw machine data.
- Normalization – Once data is normalized, you can build unified reports, correlation searches, and dashboards.
- App Integration – Normalized data powers dashboards in Splunk ES, PCI App, and other CIM-compliant apps.
- Visibility – Only data normalized to CIM fields and tags appears in these dashboards and reports.
Why it’s critical?
- Enables Key Detections – malware execution, process injection, persistence, lateral movement.
- Prevents Gaps – ensures all endpoint activity (processes, file changes, registry, network) is captured and normalized.
- Ensures Consistency – unifies logs from EDR tools (CrowdStrike, SentinelOne, Carbon Black, Defender, etc.).
- Supports Compliance & Forensics – reliable endpoint evidence for investigations and audits.
- Powers ES Dashboards – accurate visibility into host activity and mapped MITRE ATT&CK coverage.
- Maximizes ROI – converts raw telemetry into actionable security insights across multiple endpoint tools
Endpoint Security Logs - Critical
Why we need it?
Endpoint logs provide in-depth visibility into system and user activities, making them essential for detecting malware, privilege escalation, and lateral movement. They enable faster investigations and response, while also supporting compliance audits and proving security controls are effective. By focusing on high-value data, endpoint logs maximize ROI and strengthen overall security posture.
Example
- Windows Security Event Logs – Core OS logs for authentication, access, and policy changes.
- Sysmon – Detailed process, registry, and network telemetry for advanced threat detection.
- CrowdStrike Falcon – EDR with behavioral analytics and threat intelligence.
- SentinelOne – AI-driven EDR/XDR with automated detection and response.
- Microsoft Defender – Native Microsoft endpoint protection integrated with Windows and cloud.
- Carbon Black – Endpoint telemetry and forensics for advanced threat hunting and compliance.
Assign Risk Entities & Set Base Score
Define and register critical entities, such as users, hosts, applications, and IPs, as risk objects within Splunk, and establish their baseline risk score. This creates the foundation for risk-based alerting by linking detections and signals back to specific entities, providing business context, reducing noise from isolated alerts, and enabling teams to prioritize response on the entities that matter most to the organization.
How to calculate the Base Risk Score
Why we need it?
- Provides a consistent way to track and measure risky behavior across multiple data sources.
- Ensures detections don’t fire in isolation but contribute to an entity’s overall risk picture.
- Helps security teams prioritize investigations based on cumulative risk rather than individual alerts.
- Establishes the baseline for tuning thresholds and reducing false positives.
- Forms the backbone of the TDIR workflow by connecting analytics → risk scoring → investigation.
During the workshop, we assign each detection a Base Risk Score by multiplying its business impact (e.g., disruption, data loss, compliance risk) by its accuracy, both rated 1–5. This prioritizes detections that pose the highest, most reliable risks. For example, an unauthorized database access detection with impact 4 and accuracy 4 yields a score of 16, ranking it higher in triage for faster response.
Base Risk Score = Business Impact × Detection Accuracy
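The scoring rule above is simple enough to express directly. The unauthorized-database-access case comes from the text; the second detection name is hypothetical:

```python
def base_risk_score(business_impact: int, detection_accuracy: int) -> int:
    """Base Risk Score = Business Impact x Detection Accuracy, each rated 1-5."""
    for rating in (business_impact, detection_accuracy):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must be between 1 and 5")
    return business_impact * detection_accuracy

# The example from the text: impact 4, accuracy 4 -> score 16.
detections = {
    "unauthorized_db_access": base_risk_score(4, 4),
    "failed_login_burst": base_risk_score(2, 3),
}
triage_order = sorted(detections, key=detections.get, reverse=True)  # highest first
```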
Network Resolution & Sessions Data Models - Medium
Why the CIM exists?
The CIM (Common Information Model) helps you normalize your data to match a common standard:
- Common Field Names & Tags – Standardizes equivalent events across sources/vendors.
- Schema-on-the-fly – CIM applies at search time without altering raw machine data.
- Normalization – Once data is normalized, you can build unified reports, correlation searches, and dashboards.
- App Integration – Normalized data powers dashboards in Splunk ES, PCI App, and other CIM-compliant apps.
- Visibility – Only data normalized to CIM fields and tags appears in these dashboards and reports.
Why They Are Important?
- Support Attribution – DHCP ties IPs to devices/users, DNS shows domain queries, but both rely on context from higher-priority data models.
- Enhance Threat Detection – useful for spotting C2, phishing, and suspicious domain activity, but rarely actionable alone.
- Assist Investigations – provide supporting evidence to trace activity back to hosts and users.
- Compliance Value – offer audit trails of network resolution and address assignments, though not always mandated.
- ROI Consideration – valuable for enrichment and context, but lower direct detection impact compared to Authentication, Endpoint, or A&I logs.
Suppress or Exclude Noisy Detections
Refine detections by filtering, suppressing, or excluding conditions that generate excessive false positives or low-value alerts. This ensures that only relevant, high-fidelity signals are promoted into the risk framework and surfaced to analysts.
Example:
Why we need it?
- Reduces alert fatigue and prevents analysts from being overwhelmed by noise.
- Improves efficiency by ensuring time is spent on true threats, not chasing benign activity.
- Increases confidence in the detection framework, making alerts more actionable.
- Strengthens ROI by aligning detection output with meaningful business risk.
- Creates a sustainable maturity model where detections are continuously tuned for accuracy and relevance.
Integrate with External Systems and Tools
Turn Splunk SOAR into the SOC’s “central nervous system” by integrating across security, IT, and business platforms. Tight, bi-directional connections eliminate silos and enable end-to-end automation—from detection to ticketing to remediation and reporting.
Who is it for?
Why we need it?
- SOC analysts, incident responders, and automation engineers.
- Security leaders seeking scale and measurable outcomes.
- IT operations teams (network, endpoint, cloud) and service management owners
- Eliminates swivel-chair work and manual handoffs; accelerates MTTR.
- Creates closed-loop response with auditable, consistent workflows.
- Scales the SOC via automation while reducing errors and manual overhead.
- ROI: fewer manual steps, faster remediation, better utilization of security investments
What you get?
- Direct actioning with SIEMs, EDRs, firewalls, and email gateways.
- Seamless ITSM linkage for ticket creation, updates, and closure.
- Cloud security control (accounts, identities, resources) via API ties.
- Push of enriched data to dashboards/BI for program reporting and KPIs
Tune Risk Scoring Thresholds
Refine and adjust the thresholds at which cumulative risk scores trigger alerts or investigations. By tuning these thresholds, security teams ensure that risk-based alerting surfaces meaningful threats while minimizing noise from low-value activity.
Example:
Why we need it?
- Ensures alerts reflect true business risk instead of raw event volume.
- Reduces false positives by preventing low-impact or low-confidence activity from escalating unnecessarily.
- Prioritizes high-risk entities, helping analysts focus on the most important threats first.
- Improves ROI by aligning detection output with organizational risk appetite and tolerance.
- Supports continuous maturity by adapting thresholds as the environment, threat landscape, and business priorities evolve.
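Threshold tuning operates on cumulative entity risk rather than individual events. A minimal sketch, with a hypothetical threshold and an invented 24-hour window of risk events:

```python
from collections import defaultdict

ALERT_THRESHOLD = 100  # tunable: raise to cut noise, lower to catch more

# Hypothetical risk events attributed to entities over the last 24 hours.
risk_events = [
    {"entity": "alice", "score": 30},   # suspicious login
    {"entity": "alice", "score": 50},   # privilege change
    {"entity": "alice", "score": 40},   # unusual data access
    {"entity": "bob", "score": 20},     # one-off low-value signal
]

def entities_over_threshold(events, threshold):
    """Sum risk per entity and return those whose cumulative score crosses the bar."""
    totals = defaultdict(int)
    for e in events:
        totals[e["entity"]] += e["score"]
    return sorted(name for name, total in totals.items() if total >= threshold)

alerts = entities_over_threshold(risk_events, ALERT_THRESHOLD)  # alice: 120 >= 100
```

Bob's isolated low-value signal never alerts; Alice's accumulated behavior does, which is the point of risk-based alerting.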
Enrich with MITRE Annotations
Map each analytic rule and its associated risk events to the MITRE ATT&CK framework by tagging relevant tactics and techniques. This enrichment adds context to detections, making it easier to understand adversary behaviors, identify coverage gaps, and communicate risk in a standardized language. Within RBA, MITRE annotations ensure that entity risk scores reflect not just raw alerts, but how those activities align to known attacker tactics, ultimately improving prioritization and guiding more effective investigations and responses.
Why we need it?
- Standardized language: MITRE annotations align detections to a globally recognized framework, making threats easier to explain across teams and to leadership.
- ROI & efficiency: Helps prove security value by showing clear coverage against known adversary tactics, reducing wasted effort on low-value alerts.
- Better prioritization: Adds context to risk scores, ensuring alerts linked to critical tactics (e.g., privilege escalation, lateral movement) are escalated quickly.
- Detect low-and-slow activity: By tracking MITRE tactics and techniques across time, teams can identify extended attack campaigns that might evade traditional single-alert detection.
- Strategic visibility: Highlights coverage gaps and supports roadmap planning for new detections, ensuring continuous improvement.
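Once detections carry ATT&CK technique tags, coverage gaps fall out of a set difference. The detections and the required-coverage list here are illustrative, not a recommended baseline:

```python
# Hypothetical detections annotated with MITRE ATT&CK technique IDs.
detections = {
    "brute_force_auth": ["T1110"],      # Brute Force
    "new_scheduled_task": ["T1053"],    # Scheduled Task/Job (persistence)
    "lsass_memory_read": ["T1003"],     # OS Credential Dumping
}

# Techniques the program has decided it must cover (illustrative subset).
required_coverage = {"T1110", "T1003", "T1021"}  # T1021 = Remote Services

covered = {t for techniques in detections.values() for t in techniques}
gaps = sorted(required_coverage - covered)  # techniques with no detection yet
```

The same tags feed RBA, so risk events linked to critical tactics can be weighted and escalated accordingly.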
Security Use Case Workshop
Who is this for?
What do you get?
Why we do it?
What is it?
- New Splunk customers
- Rebuilding detections
- Need use case guidance
- Want a clear roadmap
- Faster time-to-value
A focused 5-day workshop to establish security visibility, assess client needs, and deliver a tailored, prioritized use case roadmap.
- Use Case Roadmap – customized and prioritized detections road map mapped to MITRE ATT&CK
- Gap Analysis – identify missing detections and coverage gaps
- Data Sources – required logs, onboarding methods, enrichment needs
- Maximize ROI from existing tools and data
- Gain clear visibility into critical assets and threats
- Prioritize use cases for highest business impact
- Reduce risk with actionable, business-aligned detections
SOAR Implementation Workshop
What is it?
Why we do it?
Who is this for?
What do you get?
A 5-day expert-led workshop that designs SOAR playbooks, integrates with existing tools, and automates response workflows to reduce response times and improve security operations.
To help organizations scale their response, cut manual effort, and maximize ROI by improving speed, consistency, and efficiency in security operations.
Customers looking to automate repetitive tasks, improve analyst productivity, and strengthen their security posture by integrating and orchestrating response across their existing tools.
By the end of the workshop, your team will have up to three playbook designs and an applicable SOAR workbook to jumpstart automation implementation.
Automate Enrichment Actions
Automate context gathering so analysts have the facts before they open a case. Splunk SOAR pulls intelligence from threat feeds, internal logs, identity/CMDB systems, and third-party APIs, then annotates the case—speeding triage and improving decision accuracy.
Who is it for?
Why we need it?
- SOC analysts and incident responders.
- Threat hunters and detection engineers.
- SecOps/IR managers seeking faster, consistent triage
- Cuts triage time from minutes/hours to seconds.
- Improves accuracy and reduces unnecessary escalations.
- Eliminates manual, error-prone lookups across tools.
- ROI: higher analyst throughput and lower response costs
What you get?
- Automatic reputation checks (domains, IPs, hashes) from TIPs.
- User and asset context from identity/CMDB systems.
- On-demand queries for related logs in Splunk.
- Cases auto-annotated with enriched, searchable context
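The enrichment flow above can be sketched as a pipeline step that annotates a case before an analyst sees it. The threat feed and identity lookups are local stand-ins for real TIP and CMDB queries; the values are invented:

```python
# Hypothetical local lookups standing in for TIP and identity/CMDB queries.
THREAT_FEED = {"198.51.100.7": "known_c2"}                       # reputation verdicts
IDENTITY_DB = {"alice": {"dept": "finance", "privileged": True}}

def enrich_case(case: dict) -> dict:
    """Annotate a case with indicator reputation and user context before triage."""
    enriched = dict(case)
    enriched["ip_reputation"] = THREAT_FEED.get(case["src_ip"], "unknown")
    enriched["user_context"] = IDENTITY_DB.get(case["user"], {})
    return enriched

case = enrich_case({"src_ip": "198.51.100.7", "user": "alice"})
# The analyst opens the case with the verdict and context already attached.
```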
Create Response Plans & Workflows
Define structured response plans and step-by-step workflows for different incident scenarios. These plans outline roles, responsibilities, escalation paths, and communication channels, ensuring a coordinated and consistent response across teams. By standardizing workflows, organizations reduce confusion during incidents, improve collaboration, and accelerate recovery.
Example:
Who is it for?
- SOC teams coordinating incident response.
- Incident managers and security leaders overseeing response processes.
- Cross-functional stakeholders (IT, Legal, HR, PR) involved in major incidents.
Why we need it?
- Coordinates roles, escalation, and communications to reduce chaos during incidents.
- Cuts MTTR by giving teams clear, pre-approved steps.
- Improves cross-team collaboration (SOC, IT, Legal, HR, PR) and decision speed.
- Strengthens compliance and auditability with documented, repeatable processes.
- ROI: Less downtime, fewer misfires/escalations, and lower incident handling costs.
What you get?
- Clear, predefined response steps for key incident types.
- Aligned roles, responsibilities, and escalation paths.
- Consistent cross-team coordination during incidents.
- Faster recovery and reduced business impact.
12.2025 Security Detection & Response Maturity Model
jhansen
Created on October 14, 2025
Transcript
Adjust Throttling and Scheduling
Optimize how often detections run and how frequently alerts can be triggered by applying throttling and scheduling controls. This ensures detections provide timely insights without overwhelming analysts or systems with redundant alerts.
Example:
Why we need it?
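Alert throttling can be sketched as a per-key cooldown: the same detection/entity pair may alert at most once per window. The one-hour window is a hypothetical tuning choice, not a recommended default:

```python
# Suppress repeat alerts for the same (detection, entity) key within a window.
THROTTLE_SECONDS = 3600  # one alert per key per hour (tunable)

_last_fired = {}

def should_alert(detection: str, entity: str, now: float) -> bool:
    """Fire only if this key has not alerted within the throttle window."""
    key = (detection, entity)
    last = _last_fired.get(key)
    if last is not None and now - last < THROTTLE_SECONDS:
        return False  # throttled: redundant alert dropped
    _last_fired[key] = now
    return True

results = [
    should_alert("brute_force", "alice", 0),     # first hit fires
    should_alert("brute_force", "alice", 600),   # 10 minutes later: throttled
    should_alert("brute_force", "alice", 4000),  # next hour: fires again
]
```

Scheduling works the same way at the search level: run the detection often enough to be timely, but no more often than analysts can consume its output.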
Network Security Logs - Critical
Why we need it?
They provide visibility into traffic flows, firewall activity, and intrusion attempts, making them essential for detecting command-and-control, data exfiltration, and lateral movement. Network logs also validate segmentation policies, support compliance reporting, and maximize ROI by ensuring threats are detected even when endpoints are compromised.
Example
Vulnerabilities Data Model - Low
Why the CIM exists?
The CIM (Common Information Model) helps you normalize your data to match a common standard:
Why we need it?
Automated Alert Routing by Use Case
Route alerts to the right team, queue, or analyst based on the use case, severity, source system, and required authority. Splunk SOAR auto-creates tickets, escalates critical threats, and measures routing quality to remove bottlenecks and speed accurate responses.
Who is it for?
Why we need it?
What you get?
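Routing by use case reduces to a lookup from alert attributes to a destination queue. The routing table and queue names below are hypothetical:

```python
# Hypothetical routing table: use case and severity decide the destination queue.
ROUTES = {
    ("phishing", "high"): "ir_oncall",
    ("phishing", "low"): "soc_tier1",
    ("malware", "high"): "ir_oncall",
}
DEFAULT_QUEUE = "soc_tier1"

def route_alert(alert: dict) -> str:
    """Pick a queue from use case + severity, falling back to the default queue."""
    return ROUTES.get((alert["use_case"], alert["severity"]), DEFAULT_QUEUE)

queue_a = route_alert({"use_case": "phishing", "severity": "high"})  # escalated
queue_b = route_alert({"use_case": "dlp", "severity": "medium"})     # default queue
```

In practice the same decision also drives ticket creation and SLA timers in the ITSM system.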
Observed Indicator Management
Leverage SOAR to capture and track every indicator observed in events (IPs, domains, hashes, URLs, emails). Centralizing these observables enables KPI calculation, historical correlation/retro-hunt, and the growth of an internal threat-intel corpus—turning raw indicators into actionable context across cases and detections.
Who is it for?
Why we need it?
What you get?
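A minimal observable store illustrates the retro-hunt idea: every indicator is recorded with its case, so a new sighting immediately surfaces prior history. The case IDs and domain are invented:

```python
from collections import defaultdict

class IndicatorStore:
    """Track every observed indicator with the cases it appeared in."""

    def __init__(self):
        self._sightings = defaultdict(list)

    def record(self, indicator: str, case_id: str):
        """Log one sighting of an indicator in a case."""
        self._sightings[indicator].append(case_id)

    def history(self, indicator: str):
        """Retro-hunt: which past cases already saw this indicator?"""
        return list(self._sightings.get(indicator, []))

store = IndicatorStore()
store.record("evil.example.com", "CASE-101")
store.record("evil.example.com", "CASE-117")
hits = store.history("evil.example.com")  # prior sightings add instant context
```

Sighting counts per indicator also feed KPIs and seed an internal threat-intel corpus over time.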
Create and Deploy Playbooks
Design and implement automated response playbooks for common incident types such as phishing, malware, and brute-force attacks. These playbooks define standardized workflows, automate repetitive investigative and containment steps, and evolve through version control to ensure continuous improvement. By operationalizing repeatable processes, playbooks accelerate response, reduce analyst workload, and strengthen SOC efficiency.
Example:
Who is it for?
Why we need it?
What you get?
Network Data Model - Critical
Why the CIM exists?
The CIM (Common Information Model) helps you normalize your data to match a common standard:
Why it’s critical?
Splunk Common Information Model (CIM)
The Splunk Common Information Model (CIM) is a standardized framework that ensures data from different sources is treated consistently, making it easier to extract value and insights. Delivered as an add-on, CIM includes prebuilt data models, documentation, and tools to normalize data for efficient and accurate analysis at search time.
Example:
Why we need it?
Priority: Low
Enrichments - High
Why we need it?
- Assets & Identities (A&I) – Provide business context by linking events to users, devices, and critical systems. This enables smarter detections, reduces false positives, and helps prioritize alerts based on what matters most to the organization.
- Threat Intelligence – Adds external context such as known bad IPs, domains, or hashes. It improves detection of emerging threats, speeds investigations, and strengthens proactive defense.
Together, enrichments transform raw events into context-rich, actionable insights that improve accuracy, reduce noise, and maximize ROI.
Example
Field Mapping and Classification
Normalize event data by mapping SIEM fields to your SOAR schema and types. In Splunk environments, this typically means translating CIM fields to the SOAR event format (e.g., CEF/platform schema), setting the right data types, and applying transforms. For example, via the Splunk App for SOAR Export, map source_ip → sourceAddress (type: ip); in XSOAR, map “XDR Detection Time” → detection_time using TimeStampToDate.
Who is it for?
Why we need it?
What you get?
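The source_ip → sourceAddress example from the description can be sketched as a map of target names plus declared types, with simple coercion. The schema map below is illustrative, not the actual Splunk App for SOAR Export configuration:

```python
import ipaddress
from datetime import datetime, timezone

# Hypothetical SIEM-to-SOAR schema map: target field name plus expected type.
FIELD_MAP = {
    "source_ip": ("sourceAddress", "ip"),
    "detection_time": ("detection_time", "timestamp"),
}

def to_soar_event(siem_event: dict) -> dict:
    """Rename fields per the map and coerce values to the declared types."""
    out = {}
    for field, value in siem_event.items():
        target, ftype = FIELD_MAP.get(field, (field, "string"))
        if ftype == "ip":
            value = str(ipaddress.ip_address(value))  # validates the address
        elif ftype == "timestamp":
            value = datetime.fromtimestamp(value, tz=timezone.utc).isoformat()
        out[target] = value
    return out

soar_event = to_soar_event({"source_ip": "192.0.2.10", "detection_time": 0})
```

Typed, correctly named fields are what let downstream playbooks act on events without per-source special cases.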
Test & Validate
Test & Validate ensures that newly created detections work as intended before they are fully deployed. This step involves running simulations or replaying historical data to confirm the detection’s accuracy, performance, and alignment with its intended MITRE ATT&CK coverage. It helps verify that alerts trigger under the right conditions, minimize false positives, and provide sufficient context for investigation.
Why we need it?
Example - Brute Force Detection Validation
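The brute-force validation named above might look like this as a replay test: synthetic events are run through the rule to confirm it fires on the attack pattern and stays silent on benign noise. The threshold, event shapes, and addresses are hypothetical:

```python
# Replay synthetic events to confirm a brute-force rule fires only when intended.
FAILED_LOGIN_THRESHOLD = 5   # hypothetical rule: >= 5 failures from one source

def brute_force_detected(events):
    """The detection under test: count failed logins per src_ip."""
    failures = {}
    for e in events:
        if e["action"] == "failure":
            failures[e["src_ip"]] = failures.get(e["src_ip"], 0) + 1
    return {ip for ip, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD}

# Positive replay: six failures from one host must trigger.
attack = [{"src_ip": "203.0.113.9", "action": "failure"}] * 6
# Negative replay: scattered failures below threshold must stay silent.
benign = [{"src_ip": f"10.0.0.{i}", "action": "failure"} for i in range(4)]

validated = (brute_force_detected(attack) == {"203.0.113.9"}
             and brute_force_detected(benign) == set())
```

Both a positive and a negative replay are needed: the first proves the alert triggers, the second proves it will not flood analysts with false positives.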
Cloud, Email Activity & Audit Logs - High
Why we need it?
Cloud logs track user actions, configuration changes, and resource usage across platforms. Email activity logs provide visibility into communication patterns to detect phishing, account compromise, and misuse. Audit logs record access and administrative actions, ensuring compliance and maintaining a trusted audit trail.
Example
Create Risk Factors
Risk factors are attributes applied to assets, identities, or detections that increase or decrease their overall risk score in Splunk’s Risk-Based Alerting framework. Examples include criticality of a server, privileged accounts, repeat offenses, or known vulnerabilities. By creating and applying these factors, detections can be weighted with business and threat context, producing smarter and prioritized alerts.
Why we need it?
How Risk Factoring works
Risk factors in Splunk ES are rules that adjust risk scores dynamically based on user, asset, or event characteristics. Instead of relying only on raw detection scores, risk factors modify them, raising or lowering scores depending on context such as asset criticality, user role, or event category.
Final Risk Score = (Business Impact × Detection Accuracy) × Risk Factor
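The final-score formula can be applied directly; the factor values below (a privileged account raising risk, a lab asset lowering it) are illustrative:

```python
def final_risk_score(business_impact, detection_accuracy, risk_factors):
    """Final Risk Score = (Business Impact x Detection Accuracy) x product of factors."""
    score = business_impact * detection_accuracy
    for factor in risk_factors:
        score *= factor  # each factor scales the base score up or down
    return score

# Hypothetical factors applied to the base score of 16 from the earlier example.
privileged_hit = final_risk_score(4, 4, [1.5])   # privileged account: 16 x 1.5
lab_host_hit = final_risk_score(4, 4, [0.5])     # low-value lab asset: 16 x 0.5
```

The same detection thus scores differently depending on who or what it fired on, which is what makes the resulting alerts context-driven.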
Threat Vulnerability Management Logs - Low
Why we need it?
They provide visibility into system and application weaknesses, track remediation progress, and validate patch effectiveness. These logs are essential for reducing attack surface, prioritizing high-risk exposures, demonstrating compliance, and strengthening overall security posture.
Example:
Priority: Low
Security Detection Optimization Workshop
What is it?
Why we do it?
Who is this for?
What you get
Customers with existing detections who need help identifying gaps, tuning rules, validating data source value, and reducing alert noise—refining content for accuracy and alignment with business priorities.
A 5-day expert-led workshop that reviews data quality, CIM status, and detections against MITRE ATT&CK. We provide tuning recommendations, identify RBA opportunities, and help reduce costs without losing visibility.
To maximize the value of existing detections by closing gaps, reducing noise, and ensuring data sources and alerts are aligned with business priorities.
Authentication and Access Logs - Critical
Why we need it?
Onboarding authentication and access logs into the SIEM provides visibility into user logins, access attempts, and identity-related activities across on-premises and cloud environments. This data is essential for detecting suspicious behavior, investigating incidents, enforcing compliance, and supporting identity-based threat detection.
Example
Use Cases Analysis Dashboard
As part of the Use Case Implementation stage, we provide an executive overview dashboard that consolidates all active and planned use cases into a single view. Designed for leadership and stakeholders, this dashboard highlights progress, coverage, and priorities, turning complex detection details into clear, actionable insights for informed decision-making.
Why we need it?
Authentication Data Model - Critical
Why the CIM exists?
The CIM (Common Information Model) helps you normalize your data to match a common standard:
Why it’s critical?
DNS / DHCP Logs - Medium
Why we need it?
DNS logs reveal domain queries and connections, making them valuable for detecting phishing, malware callbacks, and data exfiltration. DHCP logs link IP addresses to specific devices and users, providing attribution and context for investigations. Together, they connect network activity to identities, strengthen detections, and support compliance with audit-ready visibility.
Example
Custom Applications & Data Logs - Low
Why we need it?
It enables organizations to bring in unique, non-standard data sources that are critical to their environment but not covered by default integrations. By parsing, mapping, and normalizing this data into the SIEM’s models (e.g., Splunk CIM), custom onboarding ensures complete visibility, closes detection gaps, and maximizes ROI—turning otherwise siloed data into actionable intelligence for threat detection, compliance, and investigations.
Example
Priority: Low
Risk-Based Alerting (RBA) Workshop
What is it?
What do you get?
Who is this for?
Why we do it?
A 3-day expert-led workshop that evaluates existing detections, maps them to MITRE ATT&CK, reviews risk scoring, and identifies ways to cut alert fatigue, improve prioritization, and boost visibility.
Organizations that want to move beyond simple alert scoring to smarter, context-driven detections. Ideal for customers ready to reduce noise, prioritize threats intelligently, and strengthen their security posture with a modern risk-based approach.
To replace noisy, alert-heavy workflows with smarter, risk-based detections that cut wasted effort, accelerate investigations, and maximize ROI by focusing resources on the threats that matter most.
Custom Data Model - Low
Why the CIM exists?
The CIM (Common Information Model) helps you normalize your data to match a common standard:
Why we need it?
Enrichments Data Models - High
Why the CIM exists?
The CIM (Common Information Model) helps you normalize your data to match a common standard:
Why it’s Important?
Data Governance
Defines the policies, ownership, and controls that keep security data accurate, complete, and compliant across its lifecycle. It sets classification and access rules, retention/archival and lineage, and enforces quality (schema, timestamps, deduplication) and source integrity. Data governance improves the return on investment (ROI) of a Security Information and Event Management (SIEM) system by ensuring the data flowing into it is high-quality, necessary, and consistent.
Example:
Why we need it?
Priority: Low
SIEM Replacement Workshop
What is it?
What do you get?
Why we do it?
Who is this for?
Organizations moving away from legacy SIEMs due to cost, scale, ROI, or limited detection capabilities.
A tailored 14-day engagement designed to help organizations evaluate and plan their transition to a new SIEM platform. The workshop combines the Architecture Workshop and the Security Use Case Workshop.
The engagement covers everything in the Security Use Case Workshop, plus:
Create allow lists and filters
Create Allow Lists and Filters
Define trusted entities, behaviors, or conditions to exclude from risk scoring, reducing false positives and focusing on genuine threats.
Establish and maintain controlled allow lists and filtering rules to exclude known, trusted, or acceptable activities from triggering detections. This step ensures that recurring legitimate behaviors do not generate unnecessary alerts or risk scores.
Example:
Example:
Why we need it?
Web, Email, Updates Data Models - High
Why the CIM exists?
The CIM ( Common Information Model) helps you to normalize your data to match a common standard:
Why They Are Important?
Enable or Create Detections
The first step in use case implementation within the TDIR framework. This stage focuses on building or activating analytic rules, whether rule-based, anomaly, risk-based, or threat-intel driven that align to prioritized threats and business risks. By establishing these analytic rules in Splunk, we create actionable visibility, feed the risk framework, and lay the foundation for investigation and response.TDIR (Threat Detection, Investigation, and Response) framework brings threat detection, investigation, and response together in one unified workflow, allowing security teams to spot, analyze, and remediate threats faster without the friction of switching between disconnected tools.
Why we need it?
Track Detection Efficacy Over Time
Create Allow Lists and Filters
Define trusted entities, behaviors, or conditions to exclude from risk scoring, reducing false positives and focusing on genuine threats.
Continuously measure how well detections perform by monitoring their accuracy, volume, and contribution to risk scores over an extended period. This process identifies which detections consistently add value, which require tuning, and which may need to be retired or replaced.
Example:
Example:
Why we need it?
Endpoint Data Model - Critical
Why the CIM exists?
The CIM ( Common Information Model) helps you to normalize your data to match a common standard:
Why it’s critical?
Endpoint Security Logs - Critical
Why we need it ?
Endpoint logs provide in-depth visibility into system and user activities, making them essential for detecting malware, privilege escalation, and lateral movement. They enable faster investigations and response, while also supporting compliance audits and proving security controls are effective. By focusing on high-value data, endpoint logs maximize ROI and strengthen overall security posture.
Example
Assign Risk Entities & Set Base Score
Define and register critical entities—such as users, hosts, applications, and IPs as risk objects within Splunk, and establish their baseline risk score. This creates the foundation for risk-based alerting by linking detections and signals back to specific entities, providing business context, reducing noise from isolated alerts, and enabling teams to prioritize response on the entities that matter most to the organization.
How to calculate the Risk Base Score
Why we need it?
During the workshop, we assign each detection a Risk Base Score by multiplying its business impact (e.g., disruption, data loss, compliance risk) by its accuracy, both rated 1–5. This prioritizes detections that pose the highest, most reliable risks. For example, an unauthorized database access detection with impact 4 and accuracy 4 yields a score of 16, ranking it higher in triage for faster response.
Risk Base Score = Business Impact × Detection Accuracy
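The formula can be expressed as a small Python sketch (the function name `risk_base_score` is illustrative, not a Splunk API):

```python
def risk_base_score(business_impact: int, detection_accuracy: int) -> int:
    """Risk Base Score = Business Impact x Detection Accuracy, both rated 1-5."""
    for rating in (business_impact, detection_accuracy):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must be on the 1-5 scale")
    return business_impact * detection_accuracy

# The unauthorized-database-access example from the workshop: impact 4, accuracy 4.
score = risk_base_score(business_impact=4, detection_accuracy=4)
print(score)  # 16
```

Because both factors share the same 1–5 scale, the product ranges from 1 to 25, which gives a simple, comparable ranking across detections during triage.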
Network Resolution & Sessions Data Models - Medium
Why the CIM exists?
The CIM (Common Information Model) helps you normalize your data to match a common standard:
Why are they important?
Suppress or Exclude Noisy Detections
Refine detections by filtering, suppressing, or excluding conditions that generate excessive false positives or low-value alerts. This ensures that only relevant, high-fidelity signals are promoted into the risk framework and surfaced to analysts.
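A minimal Python sketch of the suppression idea, where each rule lists field values that mark an event as known noise. The rule and event shapes here are made-up illustrations, not Splunk's suppression syntax.

```python
def apply_suppressions(events, suppressions):
    """Return only the events that do not match any suppression rule.

    Each rule is a dict of field -> value; an event is suppressed when it
    matches every field of at least one rule.
    """
    def is_suppressed(event):
        return any(
            all(event.get(field) == value for field, value in rule.items())
            for rule in suppressions
        )
    return [event for event in events if not is_suppressed(event)]

# Suppress auth failures from a (hypothetical) vulnerability scanner at 10.0.0.5:
rules = [{"src_ip": "10.0.0.5", "signature": "auth_failure"}]
events = [
    {"src_ip": "10.0.0.5", "signature": "auth_failure"},  # scanner noise
    {"src_ip": "10.9.9.9", "signature": "auth_failure"},  # genuine signal
]
kept = apply_suppressions(events, rules)
```

Only the genuine signal survives, so downstream risk scoring never sees the scanner noise.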
Example:
Why we need it?
Integrate with External Systems and Tools
Turn Splunk SOAR into the SOC’s “central nervous system” by integrating across security, IT, and business platforms. Tight, bi-directional connections eliminate silos and enable end-to-end automation—from detection to ticketing to remediation and reporting.
Who is it for?
Why we need it?
What you get?
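As a sketch of one such integration hand-off, the snippet below builds a ticket payload a SOAR playbook might send to a ticketing system. The field names and queue value are illustrative, not a specific product's API schema.

```python
import json

def build_ticket_payload(alert, queue="SOC"):
    """Build a JSON payload for a (hypothetical) ticketing-system API call.

    `alert` is expected to carry 'severity', 'title', and 'entity' keys.
    """
    payload = {
        "queue": queue,
        "summary": f"[{alert['severity'].upper()}] {alert['title']}",
        "fields": {
            "source": "Splunk SOAR",
            "entity": alert["entity"],
        },
    }
    return json.dumps(payload)
```

In a real playbook this payload would be posted bi-directionally: the ticket ID comes back and is written onto the case, closing the loop between detection and remediation tracking.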
Tune Risk Scoring Thresholds
Refine and adjust the thresholds at which cumulative risk scores trigger alerts or investigations. By tuning these thresholds, security teams ensure that risk-based alerting surfaces meaningful threats while minimizing noise from low-value activity.
Example:
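A minimal Python sketch of how threshold tuning changes what surfaces (entity names and scores are made up for illustration):

```python
def entities_over_threshold(risk_events, threshold):
    """Aggregate risk scores per entity and return those at or above the alerting threshold."""
    totals = {}
    for entity, score in risk_events:
        totals[entity] = totals.get(entity, 0) + score
    return {entity: total for entity, total in totals.items() if total >= threshold}

events = [("host-a", 30), ("host-a", 40), ("host-b", 20), ("user-x", 90)]
# A threshold of 50 surfaces host-a and user-x; raising it to 80 keeps only user-x.
assert entities_over_threshold(events, 50) == {"host-a": 70, "user-x": 90}
assert entities_over_threshold(events, 80) == {"user-x": 90}
```

The tuning exercise is exactly this trade-off at scale: lower thresholds catch slow-burn risk accumulation, higher thresholds cut analyst noise.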
Why we need it?
Enrich with MITRE Annotations
Map each analytic rule and its associated risk events to the MITRE ATT&CK framework by tagging relevant tactics and techniques. This enrichment adds context to detections, making it easier to understand adversary behaviors, identify coverage gaps, and communicate risk in a standardized language. Within RBA, MITRE annotations ensure that entity risk scores reflect not just raw alerts, but how those activities align to known attacker tactics, ultimately improving prioritization and guiding more effective investigations and responses.
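A sketch of how such annotations might be attached to risk events. The detection-to-technique mapping below is a made-up illustration of the idea, not Splunk's built-in annotation mechanism.

```python
# Hypothetical mapping of detection names to MITRE ATT&CK annotations.
MITRE_ANNOTATIONS = {
    "Excessive Failed Logins": {"tactic": "Credential Access", "technique": "T1110"},
    "Suspicious PowerShell Execution": {"tactic": "Execution", "technique": "T1059"},
}

def annotate_risk_event(risk_event):
    """Attach ATT&CK tactic/technique tags to a risk event, if a mapping exists."""
    annotated = dict(risk_event)
    annotated["mitre"] = MITRE_ANNOTATIONS.get(risk_event["detection"], {})
    return annotated
```

With tags in place, coverage-gap questions ("which tactics have no detections?") reduce to comparing the mapped techniques against the ATT&CK matrix.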
Why we need it?
Security Use Case Workshop
Who is this for?
What do you get?
Why we do it?
What is it?
A focused 5-day workshop to establish security visibility, assess client needs, and deliver a tailored, prioritized use case roadmap.
SOAR Implementation Workshop
What is it?
Why we do it?
Who is this for?
What do you get?
A 5-day expert-led workshop that designs SOAR playbooks, integrates with existing tools, and automates response workflows to reduce response times and improve security operations.
To help organizations scale their response, cut manual effort, and maximize ROI by improving speed, consistency, and efficiency in security operations.
Customers looking to automate repetitive tasks, improve analyst productivity, and strengthen their security posture by integrating and orchestrating response across their existing tools.
By the end of the workshop, your team will have up to three playbook designs and an applicable SOAR workbook to jumpstart automation implementation.
Automate Enrichment Actions
Automate context gathering so analysts have the facts before they open a case. Splunk SOAR pulls intelligence from threat feeds, internal logs, identity/CMDB systems, and third-party APIs, then annotates the case—speeding triage and improving decision accuracy.
Who is it for?
Why we need it?
What you get?
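A simplified Python sketch of the enrichment flow. The lookup dicts stand in for the threat-intel and CMDB connector calls a real SOAR playbook would make; all names are illustrative.

```python
def enrich_alert(alert, intel_lookup, asset_lookup):
    """Annotate an alert with threat-intel and asset context before an analyst opens it.

    `intel_lookup` and `asset_lookup` stand in for TIP/CMDB API calls
    (in a real playbook these would be connector actions, not dicts).
    """
    enriched = dict(alert)
    enriched["intel"] = intel_lookup.get(alert.get("src_ip"), {"verdict": "unknown"})
    enriched["asset"] = asset_lookup.get(alert.get("dest_host"), {"criticality": "unknown"})
    return enriched

intel = {"203.0.113.9": {"verdict": "malicious"}}   # 203.0.113.0/24 is a documentation range
assets = {"db-prod-01": {"criticality": "critical"}}
case = enrich_alert({"src_ip": "203.0.113.9", "dest_host": "db-prod-01"}, intel, assets)
```

The analyst opens the case with verdict and asset criticality already attached, which is the triage-speed gain the playbook is after.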
Create Response Plans & Workflows
Define structured response plans and step-by-step workflows for different incident scenarios. These plans outline roles, responsibilities, escalation paths, and communication channels, ensuring a coordinated and consistent response across teams. By standardizing workflows, organizations reduce confusion during incidents, improve collaboration, and accelerate recovery.
Example:
Who is it for?
Why we need it?
What you get?
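A response plan can be modeled as ordered steps with roles and escalation paths. The sketch below shows one way to encode that structure; the scenario, roles, and step names are illustrative.

```python
RESPONSE_PLAN = {
    "scenario": "Ransomware on endpoint",
    "steps": [
        {"order": 1, "action": "Isolate host", "role": "SOC Tier 1", "escalate_to": None},
        {"order": 2, "action": "Collect triage artifacts", "role": "SOC Tier 2", "escalate_to": "IR Lead"},
        {"order": 3, "action": "Notify stakeholders", "role": "IR Lead", "escalate_to": "CISO"},
    ],
}

def next_step(plan, completed):
    """Return the first step whose order is not in `completed`, enforcing the defined sequence."""
    for step in sorted(plan["steps"], key=lambda s: s["order"]):
        if step["order"] not in completed:
            return step
    return None
```

Encoding the plan as data rather than prose is what lets a SOAR workbook track which step is active, who owns it, and where it escalates.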