Chapter 1

Objective 1.1

Scope Definition

  • Regulations, Frameworks, and Standards
    • Privacy:
      • Ensure compliance with privacy laws (e.g., GDPR, HIPAA).
    • Security:
      • Adhere to security standards (e.g., ISO/IEC 27001, NIST).
  • Rules of Engagement
    • Exclusions
      • Define what systems, networks, or data are off-limits.
      • Example: Exclude the production environment to avoid disruptions.
    • Test Cases
      • Specify the scenarios and conditions under which the testing will occur.
      • Example: Testing for SQL injection vulnerabilities in the login module.
    • Escalation Process
      • Establish a protocol for addressing critical issues discovered during testing.
      • Example: Immediate notification to the security team if a critical vulnerability is found.
    • Testing Window
      • Determine the timeframe for when testing will occur.
      • Example: Conduct tests during off-peak hours to minimize business impact.
    • Key Points:
      • The timeline for the engagement and when testing can be conducted.
      • What locations, systems, applications, or other potential targets are in scope.
      • Types of tests that are allowed or disallowed.
      • Data handling requirements for information gathered during the penetration test.
      • What behaviors to expect from the target.
      • What resources are committed to the test.
      • Legal concerns.
      • When and how communications will occur.
      • Who to contact in case of particular events.
      • Who is permitted to engage the pentest team.
  • Agreement Types
    • Non-Disclosure Agreement (NDA) → A legal document that helps enforce confidential relationships between two parties.
      • NDAs protect one or more parties in the relationship and typically outline the parties, what information should be considered confidential, how long the agreement lasts, when and how disclosure is acceptable, and how confidential information should be handled.
    • Master Service Agreement (MSA) → Defines the terms that the organizations will use for future work.
      • This makes ongoing engagements and SOWs much easier to work through, as the overall MSA is referred to in the SOW, preventing the need to renegotiate terms.
      • MSAs are common when organizations anticipate working together over a period of time or when a support contract is created.
    • Statement of Work (SOW) → A document that defines the purpose of the work, what work will be done, what deliverables will be created, the timeline for the work to be completed, the price for the work, and any additional terms and conditions that cover the work.
      • Alternatives to statements of work include statements of objectives (SOOs) and performance work statements (PWSs), both of which are used by the U.S. government.
    • Terms of Service (ToS) → Defines the rules that users must agree to abide by to use a service.
      • Example: Conditions under which the penetration testing services will be rendered, including acceptable use policies.
  • Target Selection
    • Classless Inter-Domain Routing (CIDR) Ranges → Define ranges of IP addresses for network targeting.
      • Example: The CIDR range 192.168.1.0/24 includes all 256 addresses from 192.168.1.0 to 192.168.1.255 (see the sketch after this list).
    • Domains
      • Specifies domain names to be tested.
      • Example: Testing example.com and its subdomains (sub.example.com).
    • Internet Protocol (IP) Addresses
      • Individual IP addresses selected for penetration testing.
      • Example: Testing specific servers at 192.168.1.10 and 192.168.1.20.
    • Uniform Resource Locator (URL)
      • Specific web addresses within domains targeted for testing.
      • Example: Testing the URL http://example.com/login for vulnerabilities.
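
A quick way to see what a CIDR-scoped target actually covers is Python's standard ipaddress module. The sketch below (referenced from the CIDR item above) uses the illustrative values from this list — 192.168.1.0/24 plus the two individual servers — and is only a demonstration, not a real engagement scope.

```python
# Minimal sketch: expand an in-scope CIDR range and check individual hosts,
# using only the Python standard library. Addresses are the illustrative
# values from the notes above, not a real engagement scope.
import ipaddress

cidr_scope = ipaddress.ip_network("192.168.1.0/24")
individual_hosts = [ipaddress.ip_address("192.168.1.10"),
                    ipaddress.ip_address("192.168.1.20")]

print(f"{cidr_scope} contains {cidr_scope.num_addresses} addresses "
      f"({cidr_scope.network_address} - {cidr_scope.broadcast_address})")

# hosts() excludes the network and broadcast addresses, leaving 254 usable hosts.
usable = list(cidr_scope.hosts())
print(f"Usable hosts: {len(usable)} ({usable[0]} - {usable[-1]})")

# Confirm the individually listed servers fall inside the CIDR scope.
for host in individual_hosts:
    print(host, "in scope:", host in cidr_scope)
```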

Assessment Types

  • Web
    • Focuses on identifying vulnerabilities in web applications and websites.
    • Example: Testing for cross-site scripting (XSS) and SQL injection.
    • Comparison: Web assessments often involve different tools and techniques than network assessments due to the nature of web technologies.
  • Network
    • Examines network infrastructure, including routers, switches, and firewalls, for security weaknesses.
    • Example: Scanning for open ports, weak configurations, and vulnerabilities in network devices (see the sketch after this list).
    • Comparison: Network assessments are more focused on connectivity and data flow between systems, unlike web or mobile assessments.
  • Mobile
    • Targets vulnerabilities in mobile applications and devices.
    • Example: Testing for insecure data storage, insufficient encryption, and insecure communication in a mobile app.
    • Comparison: Mobile assessments require different skill sets and tools compared to web and network assessments due to the unique operating systems and application environments.
  • Cloud
    • Assesses security of cloud-based infrastructure, platforms, and services.
    • Example: Evaluating the security of AWS, Azure, or Google Cloud configurations.
    • Comparison: Cloud assessments involve understanding cloud-specific security practices and compliance requirements, different from on-premises assessments.
  • Application Programming Interface (API)
    • Examines the security of APIs, which facilitate communication between different software components.
    • Example: Testing for insecure authentication, authorization, and input validation in APIs.
    • Comparison: API assessments are specialized and focus on data exchange mechanisms, unlike general application assessments.
  • Application
    • Broad category encompassing the assessment of software applications, including desktop and enterprise applications.
    • Example: Testing for buffer overflows, improper error handling, and insecure code practices.
    • Comparison: Application assessments are broader and can include aspects of web, mobile, and API assessments.
  • Wireless
    • Focuses on the security of wireless networks, including Wi-Fi and Bluetooth.
    • Example: Testing for weak encryption protocols (e.g., WEP), unauthorized access points, and insecure wireless configurations.
    • Comparison: Wireless assessments require specific tools and techniques, such as Wi-Fi sniffers and signal analyzers, differing from wired network assessments.
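
To make the network-assessment example above (scanning for open ports) concrete, here is a minimal TCP connect-scan sketch in Python. The target address and port list are placeholders; scans like this must only be run against systems explicitly authorized in the rules of engagement, and dedicated tools such as Nmap are the usual choice in practice.

```python
# Minimal TCP connect-scan sketch using only the standard library.
# The target and port list are illustrative placeholders; only scan
# hosts that are explicitly authorized in the rules of engagement.
import socket

target = "192.168.1.10"          # placeholder in-scope host
ports = [22, 80, 443, 3389]      # small illustrative port list

for port in ports:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)                      # keep the scan responsive
        result = s.connect_ex((target, port))  # 0 means the TCP handshake succeeded
        state = "open" if result == 0 else "closed/filtered"
        print(f"{target}:{port} {state}")
```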

Shared Responsibility Model

  • Hosting Provider Responsibilities
    • Infrastructure Security: Ensuring the physical and foundational security of servers, storage, and networking components.
      • Example: Data center security, hardware maintenance, and network security (e.g., DDoS protection).
    • Compliance: Adhering to regulatory and industry standards.
      • Example: Compliance with SOC 2, ISO 27001, or PCI DSS for data protection and privacy.
  • Customer Responsibilities
    • Data Security: Protecting data within the cloud environment, including encryption and access controls.
      • Example: Encrypting sensitive data stored in cloud databases.
    • Configuration Management: Properly configuring cloud services and resources.
      • Example: Setting up secure configurations for virtual machines and storage buckets to prevent unauthorized access.
    • User Access Management: Managing user identities and access to resources.
      • Example: Implementing multi-factor authentication (MFA) and least privilege access controls.
  • Penetration Tester Responsibilities
    • Testing Authorization: Obtaining necessary permissions to conduct penetration testing.
      • Example: Securing formal approval from both the customer and hosting provider before initiating tests.
    • Scope Adherence: Testing within the agreed-upon scope and respecting rules of engagement.
      • Example: Only testing authorized systems and avoiding any non-approved systems or data.
    • Vulnerability Reporting: Providing detailed reports on discovered vulnerabilities and recommendations for remediation.
      • Example: Creating comprehensive reports with clear, actionable recommendations for improving security.
  • Third-Party Responsibilities
    • Service Integration Security: Ensuring the security of third-party services integrated into the customer’s environment.
      • Example: Securely integrating third-party payment processors or authentication services.
    • Compliance and Audits: Adhering to relevant compliance requirements and undergoing regular security audits.
      • Example: Ensuring third-party vendors comply with GDPR or HIPAA regulations as required.
    • Incident Response: Collaborating in incident response activities when security breaches involve third-party services.
      • Example: Coordinating with third-party providers to quickly address and mitigate breaches.
  • Authorization Letters
    • Purpose: Formal documents granting permission to conduct penetration testing.
      • Example: A written authorization from a company’s senior management allowing a pentester to test specific systems.
    • Importance: Protects both the client and the tester legally, ensuring all parties are aware of the testing activities.
    • Content: Should include scope, timeframe, and any limitations of the test.
      • Example: An authorization letter specifying the systems to be tested, the methods to be used, and the duration of the testing period.
  • Mandatory Reporting Requirements
    • Legal Obligation: Certain vulnerabilities or breaches must be reported to relevant authorities or stakeholders.
      • Example: Reporting discovered vulnerabilities to the organization’s security team and, if applicable, to regulatory bodies.
    • Compliance: Adhering to industry standards and regulations that mandate reporting.
      • Example: GDPR requires notifying authorities within 72 hours of discovering a data breach.
    • Ethical Responsibility: Ensuring transparency and accountability by reporting findings that could impact stakeholders.
      • Example: Reporting a critical vulnerability in a financial system that could lead to significant data loss or theft.
  • Risk to the Penetration Tester
    • Legal Risks: Potential legal consequences if testing is done without proper authorization.
      • Example: Facing charges of unauthorized access or data tampering if tests are conducted without explicit permission.
    • Physical Risks: Possible dangers when testing physical security controls or on-site systems.
      • Example: Risk of injury when physically accessing and testing security of data centers or other secure facilities.
    • Professional Risks: Reputation and career implications if testing is conducted unethically or results are mishandled.
      • Example: Loss of credibility or job if a tester fails to disclose a significant vulnerability or mishandles sensitive information.

Objective 1.2

Peer Review

  • Purpose: Ensures accuracy and thoroughness of the penetration testing results through review by fellow security professionals.
  • Example: A pentester’s report is reviewed by another team member for completeness and accuracy.

Stakeholder Alignment

  • Purpose: Ensures all relevant parties are informed and in agreement with the objectives and scope of the penetration test.
    • Example: Regular meetings with IT, security teams, and management to align on testing goals and expectations.
  • Importance: Facilitates a unified approach and understanding among stakeholders.
  • Outcome: Cohesive and coordinated efforts towards improving security.

Root Cause Analysis

  • Purpose: Identifies the underlying reasons for discovered vulnerabilities or security issues.
    • Example: Analyzing why a SQL injection vulnerability existed in an application’s code.
  • Importance: Helps prevent recurrence by addressing the fundamental issues rather than just symptoms.
  • Outcome: Implementation of long-term fixes and improvements in security practices.

Escalation Path

  • Purpose: Defines a clear process for escalating critical issues discovered during testing.
    • Example: Immediate notification to senior management if a critical vulnerability is found.
  • Importance: Ensures swift action and decision-making to address serious risks.
  • Outcome: Timely and effective mitigation of critical vulnerabilities.

Secure Distribution

  • Purpose: Ensures sensitive findings and reports are shared securely with authorized personnel only.
    • Example: Using encrypted emails or secure portals to share test results.
  • Importance: Protects sensitive information from unauthorized access and potential misuse.
  • Outcome: Maintains confidentiality and integrity of the findings.
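
As one hypothetical way to implement secure distribution, the sketch below encrypts a report file with symmetric encryption before it is shared. It assumes the third-party cryptography package and a placeholder file name; real engagements would normally use the client's approved channel (secure portal, PGP/S-MIME email) rather than an ad hoc key.

```python
# Minimal sketch: encrypt a findings report before distribution.
# Assumes: pip install cryptography; "pentest_report.pdf" is a placeholder name.
# The key itself must be delivered to the recipient out of band.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # symmetric key, share securely out of band
fernet = Fernet(key)

with open("pentest_report.pdf", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("pentest_report.pdf.enc", "wb") as f:
    f.write(ciphertext)

print("Encrypted report written; distribute the key separately:", key.decode())
```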

Articulation of Risk, Severity, and Impact

  • Purpose: Clearly communicates the risks, severity, and potential impact of identified vulnerabilities.
    • Example: Explaining the potential business impact of a critical vulnerability in layman’s terms to non-technical stakeholders.
  • Importance: Helps stakeholders understand the urgency and significance of the findings.
  • Outcome: Informed decision-making regarding remediation priorities and resource allocation.

Goal Reprioritization

  • Purpose: Adjusts testing and remediation goals based on new findings and evolving business needs.
    • Example: Shifting focus to newly discovered critical vulnerabilities that pose immediate risks.
  • Importance: Ensures resources are effectively utilized to address the most pressing security issues.
  • Outcome: Dynamic and responsive approach to penetration testing and remediation.

Business Impact Analysis

  • Purpose: Assesses the potential impact of vulnerabilities on business operations.
    • Example: Evaluating how a vulnerability could affect customer data and business continuity.
  • Importance: Provides context for understanding the real-world implications of security issues.
  • Outcome: Prioritized remediation efforts based on business risk.

Client Acceptance

  • Purpose: Obtains formal approval from the client for the findings, recommendations, and remediation plan.
    • Example: Presenting the final report to the client and gaining their agreement on the next steps.
  • Importance: Ensures client buy-in and commitment to implementing recommended security measures.
  • Outcome: Successful collaboration and alignment on security improvements.

Objective 1.3

Open Source Security Testing Methodology Manual (OSSTMM)

  • Purpose: Provides a comprehensive methodology for security testing and analysis.
  • A broad penetration testing methodology guide with information about analysis, metrics, workflows, human security, physical security, and wireless security. Unfortunately, it has not been updated since 2010, resulting in more modern techniques and technologies not being included in the manual.

Council of Registered Ethical Security Testers (CREST)

  • Purpose: Offers accreditation and certification for organizations and individuals in the security testing industry.
  • Key Features: Sets professional standards for security testing and provides guidelines and certifications.

Penetration Testing Execution Standard (PTES)

  • Purpose: Provides a detailed framework for performing penetration testing.
  • Key Features: Covers seven phases: Pre-engagement Interactions, Intelligence Gathering, Threat Modeling, Vulnerability Analysis, Exploitation, Post-Exploitation, and Reporting.
  • It ranges from pre-engagement interactions like scoping and questions to ask clients, to details such as how to deal with third parties.
  • It also includes a full range of penetration testing techniques and concepts, making it one of the most complete and modern openly available penetration testing standards.

MITRE ATT&CK

  • Purpose: Provides a knowledge base of adversary tactics and techniques based on real-world observations.
  • Key Features: Organizes techniques into tactics such as Initial Access, Execution, Persistence, Privilege Escalation, and Exfiltration; commonly used to map attacker behavior during testing and to inform detection and defense.

OWASP Top 10

  • Purpose: Lists the top 10 most critical web application security risks.
  • Key Features: Focuses on prevalent and severe web application vulnerabilities like SQL injection, XSS, and more.

OWASP Mobile Application Security Verification Standard (MASVS)

  • Purpose: Provides a framework for securing mobile applications.
  • Key Features: Defines security requirements and verification levels for mobile app security.

Purdue Model

  • Purpose: A reference model for industrial control systems (ICS) security.
  • Key Features: Divides ICS networks into different levels, each with specific security considerations.
  • The Purdue Model, also known as the Purdue Enterprise Reference Architecture (PERA), is a widely accepted framework used to segment and secure Industrial Control Systems (ICS) environments.
  • It organizes the ICS architecture into multiple layers, each with specific roles and security requirements.
  • This model helps in understanding how to effectively secure and manage different components of an ICS network.
  • Layers of the Purdue Model

    • Level 0: Physical Process
      • Description: The actual physical processes and machinery, including sensors, actuators, and other devices that interact directly with the physical environment.
      • Examples:
        • Sensors measuring temperature, pressure, or flow rates.
        • Actuators controlling valves, motors, or pumps.
    • Level 1: Basic Control
      • Description: The control devices that directly manage Level 0 equipment, often referred to as programmable logic controllers (PLCs) or remote terminal units (RTUs).
      • Examples:
        • PLCs and RTUs executing control logic to automate processes.
        • Human-Machine Interfaces (HMIs) at the local control level.
    • Level 2: Supervisory Control
      • Description: Systems that provide supervisory control and data acquisition (SCADA) functions, aggregating data from Level 1 and providing oversight and control.
      • Examples:
        • SCADA systems for real-time monitoring and control.
        • HMIs at the supervisory control level.
    • Level 3: Operations Management
      • Description: Systems used for production control, including batch management, production scheduling, and other operational functions.
      • Examples:
        • Manufacturing Execution Systems (MES) managing production workflows.
        • Systems for coordinating production processes and ensuring quality control.
    • Level 4: Enterprise Systems
      • Description: Enterprise-level systems that manage business logistics, planning, and enterprise resource management.
      • Examples:
        • Enterprise Resource Planning (ERP) systems.
        • Customer Relationship Management (CRM) systems.
    • Level 5: External Networks
      • Description: Connections to external networks, including business partners, suppliers, and the internet.
      • Examples:
        • Connections to corporate networks.
        • External cloud services.

Threat Modeling Frameworks

  • DREAD (Damage potential, Reproducibility, Exploitability, Affected users, Discoverability)
    • Purpose: Provides a quantitative assessment of threat severity.
    • Components:
      • Damage Potential: Measures the potential impact of a threat.
        • Example: High damage potential for a vulnerability that allows full system takeover.
      • Reproducibility: Assesses how easily the threat can be reproduced.
        • Example: A threat that can be reproduced consistently scores high.
      • Exploitability: Evaluates how easy it is to exploit the threat.
        • Example: A threat that requires minimal technical skill to exploit scores high.
      • Affected Users: Estimates the number of users impacted by the threat.
        • Example: A vulnerability affecting all users of an application scores high.
      • Discoverability: Measures how likely the threat is to be discovered.
        • Example: A vulnerability visible in public-facing code scores high.
    • Usage: Helps prioritize threats based on their overall risk score (see the worked sketch after this list).
  • STRIDE (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege)
    • Purpose: Identifies potential threats by categorizing them into six types.
    • Components:
      • Spoofing: Impersonation of a user or device.
        • Example: Unauthorized access using stolen credentials.
      • Tampering: Unauthorized alteration of data.
        • Example: Modifying transaction details in a database.
      • Repudiation: Denying an action or transaction without proof.
        • Example: A user denying the submission of a malicious request.
      • Information Disclosure: Unauthorized exposure of information.
        • Example: Data leakage through unsecured channels.
      • Denial of Service (DoS): Disruption of service availability.
        • Example: Overloading a server to prevent legitimate access.
      • Elevation of Privilege: Gaining unauthorized higher-level access.
        • Example: Exploiting a vulnerability to gain admin rights.
    • Usage: Provides a structured approach to identify and categorize threats during system design and analysis.
  • OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation)
    • Purpose: Focuses on organizational risk management and strategic assessment.
    • Components:
      • Identifying Critical Assets: Recognize and prioritize key organizational assets.
        • Example: Identifying customer data and intellectual property as critical assets.
      • Threat Profiling: Determine potential threats to each critical asset.
        • Example: Profiling threats such as cyber-attacks, insider threats, and natural disasters.
      • Vulnerability Assessment: Identify vulnerabilities that can be exploited by threats.
        • Example: Assessing systems for software bugs, misconfigurations, and weak access controls.
      • Risk Mitigation Planning: Develop strategies to mitigate identified risks.
        • Example: Implementing security controls and response plans for identified vulnerabilities.
    • Usage: Provides a comprehensive approach for assessing and managing risks at an organizational level.
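
As referenced under DREAD above, a common informal convention is to rate each of the five factors from 1 to 10 and average them into a single priority score. The sketch below works through that arithmetic; the threats and ratings are illustrative placeholders, not results from a real assessment.

```python
# Worked DREAD sketch: rate each factor 1-10 and average for an overall score.
# The threats and ratings are illustrative placeholders.
threats = {
    "SQL injection in login module": {
        "damage": 9, "reproducibility": 8, "exploitability": 7,
        "affected_users": 9, "discoverability": 6,
    },
    "Verbose error messages": {
        "damage": 3, "reproducibility": 9, "exploitability": 5,
        "affected_users": 4, "discoverability": 8,
    },
}

for name, factors in threats.items():
    score = sum(factors.values()) / len(factors)
    print(f"{name}: DREAD score {score:.1f}")

# Sorting by total score gives a simple prioritization of what to address first.
ranked = sorted(threats, key=lambda n: -sum(threats[n].values()))
print("Highest priority:", ranked[0])
```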

Objective 1.4

Format Alignment

  • Purpose: Ensures consistency and clarity in report presentation.
  • Example: Using a standard template with predefined sections, headings, and formatting styles.
  • Importance: Enhances readability and professionalism, making it easier for stakeholders to understand and act on the findings.

Documentation Specifications

  • Purpose: Establishes detailed guidelines for documenting the penetration test.
  • Example: Specifying the format for capturing screenshots, logs, and evidence of findings.
  • Importance: Ensures comprehensive and clear documentation that can be easily reviewed and referenced.

Risk Scoring

  • Purpose: Provides a quantifiable measure of the risk associated with identified vulnerabilities.
  • Example: Using a scoring system like CVSS (Common Vulnerability Scoring System) to rate the severity of each vulnerability.
  • Importance: Helps prioritize remediation efforts based on the risk level.
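
For context, CVSS v3.1 maps the numeric base score to a qualitative severity rating (None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0). The small helper below reproduces that published mapping; the example findings and scores are illustrative only.

```python
# Map a CVSS v3.1 base score (0.0-10.0) to its qualitative severity rating,
# following the ranges published in the CVSS v3.1 specification.
def cvss_severity(score: float) -> str:
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Illustrative findings with example base scores (not from a real assessment).
findings = {"SQL injection in login form": 9.8,
            "Missing HTTP security headers": 5.3,
            "Verbose server banner": 2.1}

for title, score in findings.items():
    print(f"{title}: {score} ({cvss_severity(score)})")
```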

Definitions

  • Purpose: Clarifies terminology and concepts used in the report.
  • Example: Defining terms like “exploit,” “vulnerability,” “risk,” and “threat.”
  • Importance: Ensures all stakeholders have a common understanding of the terms used in the report.

Report Components

  • Executive Summary
    • Purpose: Provides a high-level overview of the test findings and recommendations.
    • Example: Summarizing key vulnerabilities, overall risk level, and major recommendations.
    • Importance: Allows executives and non-technical stakeholders to grasp the essential outcomes and actions needed.
  • Methodology
    • Purpose: Describes the testing approach and techniques used.
    • Example: Detailing the phases of the test, tools used, and the scope of testing.
    • Importance: Ensures transparency and reproducibility of the test.
  • Detailed Findings
    • Purpose: Provides an in-depth description of each identified vulnerability.
    • Example: Including vulnerability description, evidence, risk rating, and potential impact.
    • Importance: Offers detailed insights for technical teams to understand and address the issues.
  • Attack Narrative
    • Purpose: Describes the steps taken to exploit vulnerabilities in a narrative format.
    • Example: Detailing the sequence of actions taken to compromise a system and the outcomes.
    • Importance: Illustrates the practical impact of vulnerabilities and the effectiveness of defenses.
  • Recommendations
    • Purpose: Offers guidance on how to remediate identified vulnerabilities.
    • Example: Providing specific remediation steps, configuration changes, or patches needed.
    • Importance: Provides actionable steps to mitigate risks and improve security posture.
    • Remediation Guidance: Specific instructions for fixing the identified vulnerabilities.
  • Test Limitations and Assumptions
    • Purpose: Clarifies the scope limitations and assumptions made during testing.
    • Example: Noting any areas not tested, assumptions about network configurations, or system states.
    • Importance: Sets realistic expectations about the coverage and accuracy of the test results.

Reporting Considerations

  • Legal
    • Purpose: Ensures the report complies with legal requirements and protects the interests of all parties.
    • Example: Including disclaimers about the use of the report and confidentiality agreements.
    • Importance: Avoids legal liabilities and ensures proper use of the report.
  • Ethical
    • Purpose: Adheres to ethical standards in reporting and handling findings.
    • Example: Ensuring responsible disclosure of vulnerabilities and protecting sensitive information.
    • Importance: Maintains professional integrity and trustworthiness.
  • Quality Control (QC)
    • Purpose: Ensures accuracy and completeness of the report through thorough review.
    • Example: Peer reviewing the report and verifying all findings and recommendations.
    • Importance: Enhances the reliability and credibility of the report.
  • Artificial Intelligence (AI)
    • Purpose: Utilizes AI tools to enhance the report’s insights and accuracy.
    • Example: Using AI to analyze patterns, detect anomalies, or automate parts of the reporting process.
    • Importance: Improves the efficiency and depth of analysis in the report.

Objective 1.5

Technical Controls

  • System Hardening: Secures system configurations to reduce vulnerabilities.
  • Sanitize User Input/Parameterize Queries: Prevents injection attacks by properly handling inputs (see the sketch after this list).
  • Multifactor Authentication (MFA): Adds layers of verification to enhance access security.
  • Encryption: Protects data confidentiality by converting it into unreadable formats.
  • Process-level Remediation: Addresses vulnerabilities within applications and processes.
  • Patch Management: Regularly updates systems to fix known vulnerabilities.
  • Key Rotation: Periodically changes cryptographic keys to limit exposure risks.
  • Certificate Management: Manages digital certificates for secure communications.
  • Secrets Management Solution: Secures sensitive information like passwords and tokens.
  • Network Segmentation: Divides networks into isolated segments to enhance security.
  • Infrastructure Security Controls: Secures physical and virtual infrastructure components.
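
To make the "sanitize user input / parameterize queries" control above concrete, the sketch below contrasts a vulnerable string-built SQL query with a parameterized one using Python's built-in sqlite3 module. The table, column, and payload are illustrative only.

```python
# Minimal sketch of parameterized queries with the standard sqlite3 module.
# Table/column names and the payload are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

username = "alice' OR '1'='1"   # typical SQL injection payload

# Vulnerable pattern: user input concatenated directly into the SQL string.
vulnerable = f"SELECT * FROM users WHERE username = '{username}'"
print("Injected query returns:", conn.execute(vulnerable).fetchall())

# Safe pattern: placeholder plus bound parameter; the driver treats the
# input strictly as data, so the injection payload matches no rows.
safe = "SELECT * FROM users WHERE username = ?"
print("Parameterized query returns:", conn.execute(safe, (username,)).fetchall())
```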

Administrative Controls

  • Role-based Access Control (RBAC): Restricts access to systems and data based on a user's role within the organization.
  • Secure Software Development Life Cycle (SDLC): Integrates security into the software development process to produce secure software.
  • Minimum Password Requirements: Sets baseline standards for password creation to enhance account security.
  • Policies and Procedures: Establishes a framework for organizational security practices and employee behavior, supported by training and awareness programs.

Operational Controls

  • Job Rotation: Reduces risk of fraud and errors by changing employees’ roles periodically.
  • Time-of-Day Restrictions: Limits access to specific times to reduce unauthorized access risks.
  • Mandatory Vacations: Detects and prevents fraudulent activities by requiring regular vacations.
  • User Training: Educates employees on security policies and best practices to reduce human error and enhance overall security.

Physical Controls

  • Access Control Vestibule: Controls and monitors entry to secure areas, preventing unauthorized access.
  • Biometric Controls: Authenticates individuals using unique biological characteristics for high security.
  • Video Surveillance: Monitors and records activities to deter unauthorized actions and provide evidence.