
Prompt Engineering for Security Operations: Copy-Paste Templates for SOC Analysts

Practical prompt templates for SOC analysts using LLMs in daily operations, covering incident triage, log analysis, threat hunting, phishing analysis, malware summarization, and report writing.

Most SOC analysts using LLMs are getting mediocre results because they’re writing prompts the way they’d search Google. A vague question produces a vague answer. In security operations, vague answers waste time you don’t have during an active incident.

The difference between a useful LLM output and a useless one comes down to prompt structure. This guide gives you copy-paste-ready prompts for the tasks SOC analysts do every day, along with the principles behind why they work. These are tested against real security workflows, not theoretical exercises.


Prompt Structure Principles for Security Context

Before the templates, understand the framework. Every effective security prompt has four components:

1. Role assignment. Tell the LLM what expertise to apply. “You are a senior SOC analyst with 10 years of experience in enterprise threat detection” produces materially different output than a bare question. Role framing steers the model toward the vocabulary, depth, and analytical habits of that domain.

2. Context boundary. Define what the LLM knows about your environment. Include relevant details: what SIEM you run, what EDR is deployed, what compliance frameworks apply, what your network topology looks like at a high level. Without this, the model guesses, and its guesses are often wrong for your specific environment.

3. Structured output format. Specify exactly what format you want the response in. If you need a table, say so. If you need MITRE ATT&CK technique IDs, ask for them explicitly. If you need a timeline, define the format. LLMs follow format instructions reliably when you’re specific.

4. Constraint specification. Tell the model what NOT to do. “Do not speculate beyond what the provided log data supports” prevents hallucinated threat attribution. “Limit your analysis to the artifacts provided” keeps responses grounded. Constraints are as important as instructions.
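The four components can be assembled mechanically, which makes them easy to standardize across a team. A minimal sketch in Python (the function name and example values are illustrative, not a standard API):

```python
# Minimal sketch: assemble the four prompt components into one string.
# All names and example values here are illustrative.

def build_prompt(role: str, context: str, task: str, constraints: str,
                 output_format: str = "") -> str:
    """Combine role, context, task, format, and constraints into one prompt."""
    sections = [
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
    ]
    if output_format:
        sections.append(f"Output format: {output_format}")
    sections.append(f"Constraints: {constraints}")
    return "\n\n".join(sections)

prompt = build_prompt(
    role="You are a Tier 2 SOC analyst performing initial triage.",
    context="Our environment runs Splunk with CrowdStrike on endpoints.",
    task="Provide an initial triage assessment of the alert below.",
    constraints="Base your assessment only on the alert data provided.",
    output_format="Numbered list: severity, ATT&CK techniques, next steps.",
)
print(prompt.startswith("Role:"))  # → True
```

A helper like this is also a natural place to enforce team-wide constraints (for example, always appending the "flag any assumptions" instruction) so individual analysts can't forget them.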

With those principles in mind, here are the templates.


Prompt 1: Incident Triage and Priority Assessment

Use this when a new alert fires and you need a fast initial assessment before diving into investigation.

Role: You are a Tier 2 SOC analyst performing initial triage on a security alert.

Context: Our environment runs [SIEM name] with [EDR name] on endpoints.
We have [number] employees and operate in [industry]. Our critical assets
include [list key systems].

Alert data:
[Paste the alert details, including timestamp, source IP, destination,
user account, process name, and any IOC matches]

Task: Provide an initial triage assessment including:
1. Severity rating (Critical/High/Medium/Low) with justification
2. MITRE ATT&CK techniques that match this behavior
3. Three most likely explanations ranked by probability
4. Immediate investigation steps (specific queries or checks, not generic advice)
5. Escalation recommendation: yes/no with reasoning

Constraints: Base your assessment only on the alert data provided.
Flag any assumptions you make. Do not attribute to a specific threat
actor without high-confidence IOC matches.

Why this works: The structured output format forces the LLM to address each triage dimension systematically rather than producing a wall of text. The constraint against attribution without evidence prevents the model from confidently naming APT groups based on ambiguous indicators, which is a common failure mode.


Prompt 2: Log Analysis and Anomaly Identification

For when you have a block of raw logs and need to identify what’s abnormal.

Role: You are a senior log analyst specializing in [Windows Event Logs /
Linux syslog / firewall logs / cloud audit logs].

Context: The following logs are from [source system] covering the period
[start time] to [end time]. Normal business hours are [hours] [timezone].
The user/system in question is [role/function].

Logs:
[Paste log entries]

Task:
1. Identify any anomalous entries and explain why each is anomalous
2. Correlate related events into sequences (group by attack chain if applicable)
3. For each anomaly, provide the specific log field values that triggered concern
4. Suggest follow-up queries I should run in [SIEM name] to investigate further
   (provide actual query syntax, not pseudocode)

Output format: Table with columns: Timestamp | Log Entry Summary | Anomaly
Type | Severity | Recommended Follow-up Query

Why this works: Asking for specific query syntax in your actual SIEM gives you actionable output. The table format makes it scannable during an incident when you’re working multiple alerts simultaneously. Specifying the log type in the role assignment primes the model to apply the right analytical framework; Windows Event Log analysis and cloud audit log analysis require different expertise.


Prompt 3: Threat Hunting Query Generation

When you have a hypothesis and need detection queries written.

Role: You are a threat hunter writing detection queries for [Splunk SPL /
KQL / Sigma / Elasticsearch DSL].

Context: I'm hunting for [specific technique or behavior, e.g., "lateral
movement via WMI" or "data staging in temp directories"]. Our environment
uses [OS versions], [relevant tooling], and logs are indexed in [SIEM]
with the following sourcetypes/indices: [list relevant ones].

Task: Write three detection queries at different fidelity levels:
1. HIGH fidelity: Minimal false positives, catches only clear-cut instances
2. MEDIUM fidelity: Broader net, may require analyst review of results
3. LOW fidelity: Widest coverage, expect noise, use for baseline assessment

For each query:
- Provide the full query syntax ready to paste into the search bar
- Explain what each clause filters for
- List expected false positive scenarios
- Suggest tuning parameters (thresholds, exclusion lists) for production use

Constraints: Use only standard fields available in [sourcetype/index].
Do not reference custom fields unless I specify them.

Example output (abbreviated):

HIGH FIDELITY - WMI Lateral Movement Detection (Splunk SPL):

index=wineventlog sourcetype=WinEventLog:Security EventCode=4648
    Process_Name="*\\wmic.exe"
| stats dc(Target_Server_Name) as target_count by Account_Name, host
| where target_count > 1
| sort -target_count

Explanation: EventCode 4648 captures explicit credential use on the
source host. Filtering for wmic.exe as the calling process isolates WMI
remote execution launched with alternate credentials. The stats
aggregation surfaces accounts making WMI connections to multiple
distinct target servers, which is the lateral movement pattern.

Expected false positives: SCCM/ConfigMgr performing legitimate WMI queries
for inventory. System management tools using WMI for health checks.

Tuning: Add "NOT Account_Name IN (sccm_svc, configmgr_svc)" to exclude
known service accounts. Adjust count threshold based on your baseline.

Prompt 4: Phishing Email Analysis

For when a suspicious email lands in the abuse inbox and you need a structured breakdown.

Role: You are an email security analyst performing phishing triage.

Email headers:
[Paste full email headers]

Email body:
[Paste email body text]

URLs found in email:
[List any URLs]

Attachments:
[List filenames, sizes, hashes if available]

Task: Analyze this email and provide:
1. Verdict: Phishing / Suspicious / Legitimate, with confidence level
2. Sender analysis: SPF/DKIM/DMARC alignment, sending infrastructure assessment
3. Content indicators: urgency language, impersonation attempts, brand spoofing
4. URL analysis: domain age, reputation, redirect chains, hosting infrastructure
5. Attachment risk: file type risk assessment, known malicious hash matches
6. Recommended response actions for the SOC
7. User communication template (if phishing confirmed)

Constraints: Clearly separate confirmed findings from inferences. Mark
any assessment that requires sandbox detonation or further investigation
as "REQUIRES VERIFICATION."

Why this works: The structured verdict format matches how phishing triage workflows operate in practice. Requiring the model to separate confirmed findings from inferences prevents analysts from treating LLM guesses as confirmed indicators. The user communication template saves time on the downstream notification that has to happen when phishing is confirmed.
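One preprocessing step pays off here: parse the authentication results out of the raw headers before pasting them in, so the model reasons from clean values rather than a header blob. A sketch using Python's standard `email` module (the sample header and regex are simplified; real Authentication-Results headers vary by mail provider):

```python
# Sketch: extract SPF/DKIM/DMARC verdicts from an Authentication-Results
# header before pasting into the prompt. Sample message and regex are
# deliberately simple; real headers vary by provider.
import email
import re

raw = """From: billing@examp1e-invoices.com
To: user@example.com
Subject: Invoice overdue
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=examp1e-invoices.com; dkim=none; dmarc=fail
"""

msg = email.message_from_string(raw)
auth = msg.get("Authentication-Results", "")
results = dict(re.findall(r"(spf|dkim|dmarc)=(\w+)", auth))
print(results)  # → {'spf': 'fail', 'dkim': 'none', 'dmarc': 'fail'}
```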


Prompt 5: Malware Behavior Summarization

When you have sandbox output or a threat intel report and need a plain-language operational summary.

Role: You are a malware analyst translating technical analysis into
operational intelligence for SOC consumption.

Malware analysis data:
[Paste sandbox report, VirusTotal results, or threat intel write-up]

Task: Produce an operational summary structured as follows:

EXECUTIVE SUMMARY (2-3 sentences, non-technical, suitable for management)

TECHNICAL SUMMARY:
- Malware family and variant (if identifiable)
- Initial access vector
- Execution chain (parent process -> child processes)
- Persistence mechanisms installed
- C2 communication method and infrastructure
- Data targeted for collection/exfiltration
- Lateral movement capabilities

DETECTION OPPORTUNITIES:
- Network IOCs (IPs, domains, JA3 hashes, user-agent strings)
- Host IOCs (file paths, registry keys, scheduled tasks, service names)
- Behavioral signatures (process relationships, API call patterns)

RECOMMENDED ACTIONS:
- Immediate containment steps
- Indicators to add to blocklists
- Detection rules to deploy (provide Sigma or YARA rule if possible)

Constraints: If the provided data is insufficient to determine any field,
write "INSUFFICIENT DATA" rather than guessing. Do not attribute to a
threat actor without explicit evidence in the source material.

Prompt 6: Security Report Writing

For turning raw investigation notes into a formatted incident report.

Role: You are a senior incident responder writing a formal incident report
for [audience: management / legal / compliance / technical team].

Investigation notes:
[Paste your raw notes, timeline entries, findings]

Task: Write a formal incident report with these sections:
1. Executive Summary (3-5 sentences, business impact focus)
2. Incident Timeline (table format: timestamp, event, source, analyst action)
3. Technical Analysis (what happened, how, evidence supporting each claim)
4. Impact Assessment (systems affected, data at risk, business operations impact)
5. Root Cause Analysis (contributing factors, not just the initial vector)
6. Containment and Remediation Actions Taken
7. Recommendations (short-term fixes and long-term improvements)
8. Appendix: IOCs (table format suitable for sharing with partners)

Writing style: Factual, precise, no speculation presented as fact. Use
passive voice minimally. Every claim must reference a specific artifact
or log entry from the investigation notes.

Constraints: If the investigation notes don't support a conclusion in any
section, explicitly note what additional investigation is needed rather
than filling gaps with assumptions.

Why this works: Specifying the audience changes the output dramatically. A report for legal counsel emphasizes data exposure and regulatory notification timelines. A report for the technical team emphasizes TTPs and detection gaps. The constraint against speculation presented as fact is critical; incident reports become legal documents, and inaccurate claims create liability.


Prompt 7: MITRE ATT&CK Mapping

For mapping observed attacker behavior to the ATT&CK framework during or after an investigation.

Role: You are a threat intelligence analyst mapping observed attacker
behavior to the MITRE ATT&CK framework (Enterprise, version 15).

Observed behaviors:
[List specific attacker actions observed during the investigation,
with as much technical detail as available]

Task: For each observed behavior, provide:
1. ATT&CK Technique ID and name
2. Sub-technique ID if applicable
3. Confidence level (High/Medium/Low) that this mapping is accurate
4. Evidence from the provided data that supports this mapping
5. Related techniques the attacker likely also used but we may not
   have detected (with reasoning)

Output format: Table with columns: Observed Behavior | Technique ID |
Technique Name | Sub-technique | Confidence | Supporting Evidence

After the table, provide:
- A text-based ATT&CK Navigator layer showing which tactics were observed
- Gaps in the kill chain where we lack visibility
- Recommended data sources to close those visibility gaps
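If you want the Navigator layer as an importable file rather than a text description, the mapping table converts to JSON in a few lines. A sketch (the incident name and technique entries are illustrative, and the key names follow the Navigator layer JSON format — verify against the schema version your Navigator instance expects before importing):

```python
# Sketch: turn an ATT&CK mapping table into a Navigator layer file.
# Keys follow the Navigator layer JSON format; verify against the schema
# version your Navigator instance expects. Entries are illustrative.
import json

observed = [
    ("T1047", "WMI execution observed via wmiprvse child processes"),
    ("T1021.006", "WinRM connections from the compromised host"),
]

layer = {
    "name": "Incident observed techniques",  # illustrative name
    "domain": "enterprise-attack",
    "techniques": [
        {"techniqueID": tid, "score": 1, "comment": evidence}
        for tid, evidence in observed
    ],
}
with open("layer.json", "w") as f:
    json.dump(layer, f, indent=2)
```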

Prompt 8: Vulnerability Prioritization

For when a scan dumps hundreds of findings and you need to focus remediation.

Role: You are a vulnerability management analyst prioritizing remediation
for a [size] organization in [industry].

Context: Our environment includes [brief infrastructure description].
We follow [compliance framework] requirements. Our patch cycle is
[frequency]. Current active threats: [any active exploitation
campaigns relevant to your stack].

Vulnerability scan results:
[Paste scan output or summary — CVE IDs, affected systems, CVSS scores]

Task: Prioritize these vulnerabilities using this framework:
1. CRITICAL (patch within 48 hours): Actively exploited in the wild +
   affects internet-facing or critical systems
2. HIGH (patch within 7 days): High CVSS + known exploit exists OR
   affects critical systems
3. MEDIUM (patch within 30 days): Moderate CVSS, no known active exploitation
4. LOW (next patch cycle): Low impact, internal-only, no known exploit

For each vulnerability provide:
- CVE ID and affected system count
- Your priority tier with justification
- Whether CISA KEV lists this as actively exploited
- Compensating controls if immediate patching isn't possible
- Grouping with other vulnerabilities that can be patched together

Constraints: Do not rely solely on CVSS base scores. Factor in your
environmental context, exploit availability, and exposure.
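The four-tier framework above is simple enough to pre-compute before the data ever reaches the LLM, which keeps the model focused on judgment calls rather than arithmetic. A sketch of the tiering logic (field names and the CVSS thresholds are illustrative; wire it to your scanner's actual output):

```python
# Sketch of the four-tier framework above. Field names and CVSS
# thresholds are illustrative; adapt to your scanner's output.

def priority_tier(cvss: float, in_kev: bool, internet_facing: bool,
                  critical_system: bool, exploit_public: bool) -> str:
    """Return the remediation tier for one vulnerability."""
    if in_kev and (internet_facing or critical_system):
        return "CRITICAL (48h)"
    if cvss >= 7.0 and (exploit_public or critical_system):
        return "HIGH (7d)"
    if cvss >= 4.0:
        return "MEDIUM (30d)"
    return "LOW (next cycle)"

print(priority_tier(9.8, in_kev=True, internet_facing=True,
                    critical_system=False, exploit_public=True))
# → CRITICAL (48h)
```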

Prompt 9: Threat Intelligence Briefing Generation

For producing daily or weekly intel summaries from raw feeds.

Role: You are a CTI analyst preparing a threat intelligence briefing for
SOC leadership.

Raw intelligence:
[Paste threat intel feed entries, advisories, or news items from the
past 24h/week]

Our environment profile:
- Industry: [industry]
- Key technologies: [list major platforms, vendors, OS versions]
- Geographic presence: [regions]
- Previous incidents: [any relevant recent incidents]

Task: Produce a structured briefing:

PRIORITY ALERTS (directly relevant to our environment):
- Threat name, affected technology, our exposure, recommended action

WATCH LIST (potentially relevant, monitor for development):
- Threat name, relevance assessment, monitoring recommendation

INFORMATION ONLY (awareness, no immediate action):
- Brief summary of notable items

For each priority alert, include:
- Specific IOCs we should check against our environment immediately
- Detection queries for our SIEM
- Reference links to primary sources

Constraints: Only classify as PRIORITY if there is a direct connection
to technology in our environment profile. Do not inflate threat severity
for attention.

Prompt 10: Incident Response Playbook Step Generation

For when you need to quickly generate response procedures for a specific scenario.

Role: You are an incident response lead writing a response procedure for
your SOC team.

Scenario: [Describe the incident type — e.g., "confirmed ransomware
execution on a domain-joined workstation" or "compromised service account
with access to production databases"]

Environment context:
- EDR: [product]
- SIEM: [product]
- Identity provider: [product]
- Cloud platforms: [list]
- Communication channels: [Slack/Teams/etc.]
- Backup solution: [product]

Task: Write a step-by-step response procedure covering:

IMMEDIATE (first 15 minutes):
- Containment actions with specific commands/console steps
- Communication: who to notify, using what channel

SHORT-TERM (first 4 hours):
- Investigation steps with specific queries to run
- Evidence preservation actions
- Scope assessment procedures

RECOVERY (4-24 hours):
- Remediation steps
- Verification procedures before restoring service
- Monitoring enhancements to deploy

POST-INCIDENT:
- Documentation requirements
- Lessons learned meeting agenda items
- Detection improvements to implement

For each step, specify: the action, the tool/console to use, the
expected output, and the decision point (what to do if the result
is unexpected).

Getting Better Results Over Time

These templates are starting points. The analysts who get the most value from LLMs in security operations follow a few practices:

Save and iterate on prompts that work. When a prompt produces useful output, save it with the context that made it work. Build a team prompt library in your wiki or runbook system. What works for your environment won’t be identical to what works for someone else’s.

Chain prompts for complex investigations. Use the output of Prompt 1 (triage) as input context for Prompt 2 (log analysis). Feed the output of Prompt 5 (malware summary) into Prompt 6 (report writing). Each prompt in the chain adds structure and analysis that the next prompt builds on.
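The chaining pattern looks like this in code, with a generic `call_llm` placeholder standing in for whichever client your team actually uses:

```python
# Sketch of prompt chaining: triage output becomes context for log
# analysis. call_llm is a placeholder; swap in your provider's client.

def call_llm(prompt: str) -> str:
    # Placeholder: replace with your actual LLM API call.
    return f"[model response to {len(prompt)} chars of prompt]"

alert_data = "2024-05-01T03:12Z powershell.exe spawned by winword.exe ..."

triage_prompt = (
    "Role: You are a Tier 2 SOC analyst performing initial triage.\n"
    f"Alert data:\n{alert_data}\n"
    "Task: Provide severity, ATT&CK techniques, and next steps."
)
triage_output = call_llm(triage_prompt)

# Feed the triage assessment forward as context for log analysis.
log_prompt = (
    "Role: You are a senior log analyst.\n"
    f"Prior triage assessment:\n{triage_output}\n"
    "Logs:\n[paste logs]\n"
    "Task: Identify anomalous entries related to the triaged alert."
)
analysis_output = call_llm(log_prompt)
```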

Include your false positive context. If you know that a specific behavior is normal in your environment, say so in the prompt. “Note: our finance team regularly uses PowerShell scripts for automated reporting; this is authorized” prevents the LLM from flagging known-good activity that your baseline includes.

Specify your SIEM and tooling explicitly. An LLM asked to “write a detection query” will produce generic pseudocode. An LLM asked to “write a Splunk SPL query using the wineventlog sourcetype” will produce something you can paste into the search bar. The difference is in the specificity of your prompt.

Never trust LLM output on IOCs without verification. LLMs hallucinate IP addresses, domain names, and hash values. If the model produces an IOC that wasn’t in your input data, verify it independently before adding it to a blocklist. Blocking a hallucinated IP can disrupt legitimate services. This is non-negotiable.
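A cheap automated check catches the most dangerous case: an indicator in the model's output that never appeared in the data you supplied. A sketch for IP addresses (the regex is deliberately simple and will match some non-IP strings; extend the same idea to domains and hashes):

```python
# Sketch: flag any IP-like string in the model's output that never
# appeared in the supplied data -- a cheap hallucination check before
# anything reaches a blocklist. Regex is deliberately simple.
import re

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def unverified_ips(model_output: str, source_data: str) -> set[str]:
    """Return IPs cited by the model that are absent from the input."""
    return set(IP_RE.findall(model_output)) - set(IP_RE.findall(source_data))

logs = "Blocked outbound to 203.0.113.7 from host WS-042."
llm_answer = "Traffic to 203.0.113.7 and 198.51.100.9 indicates C2 activity."
print(unverified_ips(llm_answer, logs))  # → {'198.51.100.9'}
```

Anything this flags still needs a human look, but an empty result at least confirms the model stayed inside the evidence you gave it.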

The prompts above handle the workflows SOC analysts spend the most time on: triage, analysis, hunting, and reporting. Use them as-is for quick wins, then customize them to match the specific tools, data sources, and processes in your environment. The analysts who invest 30 minutes building a good prompt library save hours every week on the operational tasks that would otherwise eat into time better spent on actual threat hunting and investigation.
