LLM Security Methodology
LLM Security Methodology — Current
METHODOLOGY
RECON
FRAMEWORK
Structured red team methodology for assessing large language model security. Covers threat modelling, attack surface enumeration, prompt injection chaining, and systematic evaluation frameworks for identifying vulnerabilities in LLM-powered deployments.
https://crazywifi.github.io/Redteam_LLM_Injection_payloads/llm-security-methodology.html
Access Methodology
→
LLM Security Methodology v1
LLM Security Methodology — Version 1
METHODOLOGY
LEGACY
V1 BASELINE
Version 1 baseline of the LLM red team security methodology. Provides foundational attack patterns, initial reconnaissance vectors, and the original testing framework used as a reference baseline for comparing against evolved assessment techniques.
https://crazywifi.github.io/Redteam_LLM_Injection_payloads/llm-security-methodology_v1.html
Access Methodology v1
→
General Red Team Payloads
Redteam LLM Injection Payload Library
OFFENSIVE
INJECTION
LLM EXPLOIT
Comprehensive collection of adversarial prompt injection payloads designed for red-teaming large language models. Includes jailbreaks, role-play bypasses, system prompt leakage, and instruction override vectors targeting general-purpose LLM deployments.
https://crazywifi.github.io/Redteam_LLM_Injection_payloads/Redteam_LLM_Injection_payloads.html
Access Payload Library
→
Domain-Specific: Finance & Banking
Financial & Banking LLM Prompt Injection Test Library
HIGH VALUE TARGET
FINTECH
BANKING
Specialised prompt injection test vectors targeting LLM deployments within financial and banking environments. Covers fraud detection bypass, PII extraction, transaction manipulation prompts, regulatory evasion, and contextual override attacks against finance-tuned models.
https://crazywifi.github.io/Redteam_LLM_Injection_payloads/Financial&Banking_LLM_Prompt_injection_Test_Library.html
Access Finance Payloads
→
Domain-Specific: Enterprise & Corporate
Enterprise & Corporate LLM Prompt Injection Test Library
CRITICAL INFRA
ENTERPRISE
CORPORATE
Targeted prompt injection test vectors for LLM deployments in enterprise and corporate environments. Covers internal data exfiltration, privilege escalation via prompt, policy bypass, supply-chain prompt poisoning, and adversarial attacks against enterprise-tuned models and copilot integrations.
https://crazywifi.github.io/Redteam_LLM_Injection_payloads/Enterprise&Corporate_LLM_Prompt_injection_Test_Library.html
Access Enterprise Payloads
→
Domain-Specific: Medical & Healthcare
Medical & Healthcare LLM Prompt Injection Test Library
HIGH SENSITIVITY
HEALTHCARE
MEDICAL
Specialised prompt injection test vectors for LLM deployments in medical and healthcare environments. Covers PHI/PII extraction, clinical decision override, diagnostic manipulation, HIPAA evasion vectors, and adversarial attacks against health-tuned models and patient-facing AI systems.
https://crazywifi.github.io/Redteam_LLM_Injection_payloads/Medical&Healthcare_LLM_Prompt_injection_Test_Library.html
Access Medical Payloads
→
System Prompts & Prompt Injection
LLM System Prompts — Prompt Injection Guide
INJECTION
SYSTEM PROMPT
OVERRIDE
Focused guide on system prompt architecture and prompt injection attack vectors. Covers instruction hierarchy bypass, system-level override techniques, context poisoning, and crafting adversarial inputs that manipulate or neutralise system-level directives in deployed LLM pipelines.
https://crazywifi.github.io/Redteam_LLM_Injection_payloads/llm-System-Prompts_Prompt-Injection.html
Access System Prompt Guide
→
Jailbreak Techniques
LLM Jailbreak Guide
JAILBREAK
BYPASS
SAFETY EVASION
Comprehensive jailbreak methodology for large language models. Covers role-play jailbreaks, DAN-style prompts, fictional framing, token smuggling, multi-turn jailbreak chains, and systematic techniques for bypassing model safety filters and alignment guardrails.
https://crazywifi.github.io/Redteam_LLM_Injection_payloads/llm_jailbreak_guide.html
Access Jailbreak Guide
→
Prompt Leakage
LLM Prompt Leakage Guide
DATA EXFIL
LEAKAGE
RECON
Techniques for extracting hidden system prompts, confidential instructions, and internal configurations from LLM deployments. Covers direct extraction probes, indirect inference attacks, memory leakage via context window manipulation, and prompt reconstruction from partial disclosures.
https://crazywifi.github.io/Redteam_LLM_Injection_payloads/llm-prompt-leakage-guide.html
Access Prompt Leakage Guide
→
Insecure Output Handling
LLM Insecure Output Handling Guide
OUTPUT ATTACK
XSS / INJECTION
DOWNSTREAM
Attack vectors targeting insecure handling of LLM-generated output. Covers prompt-driven XSS injection, SQL injection via model output, code execution through downstream parsers, markdown/HTML injection, and second-order attacks exploiting trust in LLM-produced content.
https://crazywifi.github.io/Redteam_LLM_Injection_payloads/llm-insecure-output-handling-guide.html
Access Output Handling Guide
→
Agentic Tool Abuse
LLM Agentic Tool Abuse Guide
AGENTIC
TOOL ABUSE
AUTONOMOUS
Exploitation techniques targeting LLM agents with access to external tools and APIs. Covers malicious tool invocation, function-call hijacking, indirect prompt injection via tool responses, privilege escalation through chained tool use, and attacking autonomous agent pipelines with persistent memory.
https://crazywifi.github.io/Redteam_LLM_Injection_payloads/llm-agentic-tool-abuse-guide.html
Access Agentic Tool Guide
→
Denial of Service
LLM DoS Pentest Guide
DOS
RESOURCE ABUSE
AVAILABILITY
Denial-of-service techniques targeting LLM infrastructure and availability. Covers token flooding, recursive prompt loops, context window exhaustion, compute-intensive query crafting, API rate-limit abuse, and adversarial inputs designed to degrade model performance or force error states.
https://crazywifi.github.io/Redteam_LLM_Injection_payloads/llm-dos-pentest-guide.html
Access DoS Pentest Guide
→
RAG Security & Pentest
LLM RAG Pentest Guide
RAG ATTACK
RETRIEVAL
POISONING
Penetration testing guide for Retrieval-Augmented Generation (RAG) pipelines. Covers document poisoning, retrieval manipulation, context injection via knowledge base tampering, indirect prompt injection through retrieved content, and adversarial queries designed to exploit RAG-based LLM deployments.
https://crazywifi.github.io/Redteam_LLM_Injection_payloads/llm-rag-pentest-guide.html
Access RAG Pentest Guide
→
Advanced Attack Vectors
LLM Advanced Attack Vectors Guide
ADVANCED
MULTI-VECTOR
EVASION
In-depth guide covering advanced and multi-stage attack vectors against large language models. Covers chained prompt injection, adversarial token manipulation, cross-context attacks, model inversion, training data extraction, and sophisticated evasion techniques targeting hardened LLM deployments and safety layers.
https://crazywifi.github.io/Redteam_LLM_Injection_payloads/llm_Advance_attack_Vectors_guide.html
Access Advanced Attack Guide
→