Chapter 2

Objective 2.1

  • Firewall → A firewall is a network security device that monitors and controls incoming and outgoing network traffic based on predetermined security rules.
    • Placement:
      • Perimeter Firewall: Positioned at the network boundary to filter traffic between internal and external networks.
      • Internal Firewall: Placed within the network to segment and protect different network segments.
    • Configuration:
      • Rule Setting: Define rules to allow or block traffic based on IP addresses, ports, and protocols.
      • Logging and Monitoring: Enable logging to monitor traffic and detect suspicious activities.
      • Regular Updates: Keep firmware and rules updated to counteract new threats.
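
    To make rule setting concrete, here is a minimal, hypothetical first-match rule evaluator in Python; the rule fields and addresses are illustrative and not tied to any particular firewall product:

```python
from ipaddress import ip_address, ip_network

# Ordered rule table: first match wins, with an explicit default-deny at the end.
RULES = [
    {"src": "10.0.0.0/8", "port": 22,   "proto": "tcp", "action": "allow"},  # internal SSH
    {"src": "0.0.0.0/0",  "port": 443,  "proto": "tcp", "action": "allow"},  # public HTTPS
    {"src": "0.0.0.0/0",  "port": None, "proto": None,  "action": "deny"},   # default deny
]

def evaluate(src_ip: str, port: int, proto: str) -> str:
    """Return the action of the first rule matching the packet."""
    for rule in RULES:
        if ip_address(src_ip) not in ip_network(rule["src"]):
            continue
        if rule["port"] is not None and rule["port"] != port:
            continue
        if rule["proto"] is not None and rule["proto"] != proto:
            continue
        return rule["action"]
    return "deny"

print(evaluate("10.1.2.3", 22, "tcp"))      # allow (internal SSH rule)
print(evaluate("198.51.100.7", 23, "tcp"))  # deny (falls through to default)
```
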
  • Intrusion Prevention System (IPS):
    • Placement:
      • Inline Deployment: Positioned directly in the path of network traffic to actively block threats.
    • Configuration:
      • Signature Updates: Regularly update threat signatures.
      • Policy Configuration: Set policies to determine the action on detecting a threat (e.g., block, alert).
      • Integration: Integrate with other security tools for comprehensive threat management.
  • Intrusion Detection System (IDS):
    • Placement:
      • Network-based IDS (NIDS): Deployed at key points within the network.
      • Host-based IDS (HIDS): Installed on individual devices to monitor local activities.
    • Configuration:
      • Signature and Anomaly Detection: Configure for both known and unknown threat detection.
      • Alerting: Set up alerting mechanisms to notify administrators of potential threats.
      • Log Management: Ensure detailed logging for forensic analysis.
  • Vulnerability Scanner:
    • Placement:
      • Internal Scanner: Deployed within the network to identify internal vulnerabilities.
      • External Scanner: Placed outside the network to identify external vulnerabilities.
    • Configuration:
      • Regular Scans: Schedule scans to run at regular intervals.
      • Custom Policies: Configure scan policies tailored to the organization’s needs.
      • Integration: Integrate with patch management systems for remediation.
  • Virtual Private Network (VPN):
    • Placement:
      • VPN Gateway: Positioned at the network edge to handle VPN connections.
    • Configuration:
      • Encryption: Configure strong encryption algorithms and secure tunneling protocols (e.g., AES-256 over IPsec or TLS).
      • Authentication Methods: Implement robust authentication (e.g., multi-factor authentication).
      • Access Controls: Define access controls based on user roles.
  • Network Access Control (NAC):
    • Placement:
      • Edge Deployment: Positioned at network access points such as switches and wireless access points.
    • Configuration:
      • Policy Definition: Define policies for device compliance (e.g., antivirus, patches).
      • Quarantine: Configure quarantine networks for non-compliant devices.
      • Continuous Monitoring: Implement continuous monitoring of devices for compliance.
  • Web Application Firewall (WAF):
    • Placement:
      • In Front of Web Servers: Positioned in front of web servers to inspect incoming and outgoing traffic.
    • Configuration:
      • Rule Configuration: Define rules to block common web attacks (e.g., SQL injection, XSS).
      • Logging: Enable detailed logging for traffic analysis.
      • Updates: Regularly update rules and signatures.
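
    As a rough illustration of WAF rule configuration, the sketch below blocks requests matching crude SQL injection and XSS signatures. Real deployments rely on maintained rule sets such as the OWASP Core Rule Set rather than hand-written patterns like these:

```python
import re

# Illustrative signatures only; production WAFs (e.g., ModSecurity with the
# OWASP Core Rule Set) use far more sophisticated, regularly updated rules.
BLOCK_PATTERNS = [
    re.compile(r"(?i)union\s+select"),  # crude SQL injection signature
    re.compile(r"(?i)<script\b"),       # crude reflected-XSS signature
]

def inspect_request(query_string: str) -> str:
    """Return 'block' if any signature matches the request, else 'allow'."""
    for pattern in BLOCK_PATTERNS:
        if pattern.search(query_string):
            return "block"
    return "allow"

print(inspect_request("id=1 UNION SELECT password FROM users"))  # block
print(inspect_request("id=42"))                                  # allow
```
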
  • Proxy:
    • Placement:
      • Between Clients and Servers: Positioned between client devices and external servers.
    • Configuration:
      • Caching: Configure caching to improve performance.
      • Access Control: Implement access controls to restrict web access.
      • Logging: Enable logging for monitoring web activity.
  • Reverse Proxy:
    • Placement:
      • In Front of Web Servers: Positioned in front of web servers to handle client requests.
    • Configuration:
      • Load Balancing: Configure to distribute traffic across multiple servers.
      • SSL Termination: Implement SSL termination to offload encryption tasks.
      • Caching: Enable caching to improve response times.
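
    A minimal sketch of the load-balancing behavior a reverse proxy provides, using simple round-robin selection over a hypothetical backend pool; production proxies such as nginx or HAProxy add health checks and alternative algorithms (least-connections, weighted):

```python
import itertools

# Hypothetical backend pool behind the reverse proxy.
BACKENDS = ["10.0.1.10:8080", "10.0.1.11:8080", "10.0.1.12:8080"]
_rotation = itertools.cycle(BACKENDS)

def pick_backend() -> str:
    """Round-robin selection: each request goes to the next server in turn."""
    return next(_rotation)

for _ in range(4):
    print(pick_backend())  # ...10, ...11, ...12, then wraps back to ...10
```
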
  • API Gateway:
    • Placement:
      • In Front of APIs: Positioned in front of API endpoints.
    • Configuration:
      • Rate Limiting: Implement rate limiting to control the number of API requests.
      • Authentication and Authorization: Set up mechanisms to authenticate and authorize API consumers.
      • Monitoring: Enable monitoring and logging of API usage.
  • Taps:
    • Placement:
      • In-Line with Network Links: Positioned directly on network links to capture traffic.
    • Configuration:
      • Non-Intrusive: Ensure non-intrusive capturing without affecting network performance.
      • Aggregation: Aggregate traffic for centralized monitoring.
      • Security: Secure captured data to prevent unauthorized access.
  • Collectors:
    • Placement:
      • Distributed Across Network: Deployed on key network nodes and devices.
    • Configuration:
      • Source Configuration: Configure sources from which logs are collected.
      • Centralized Storage: Set up centralized storage for collected data.
      • Integration: Integrate with SIEM systems for analysis.
  • Content Delivery Network (CDN):
    • Placement:
      • Globally Distributed: Deployed across multiple geographic locations.
    • Configuration:
      • Content Caching: Configure caching of static content to improve load times.
      • Load Distribution: Implement load distribution to balance traffic.
      • Security Features: Enable security features like DDoS protection and SSL.

Availability and Integrity Design Considerations

  • Load Balancing → Load balancing is the process of distributing network or application traffic across multiple servers to ensure no single server becomes overwhelmed, thereby improving availability and performance.
  • Recoverability → Ability to restore systems, applications, and data to a previous state after a failure or disaster.
  • Interoperability → Refers to the ability of different systems, applications, and services to work together seamlessly.
    • Ex. A healthcare system using HL7 standards and APIs to ensure interoperability between electronic health record (EHR) systems and laboratory information systems.
  • Geographical Considerations → Geographical considerations involve planning for the physical location of systems and data to optimize performance, compliance, and disaster recovery.
  • Vertical vs. Horizontal Scaling → Scaling refers to the ability to increase the capacity of a system to handle more load. Vertical scaling (scaling up) involves adding more power (CPU, RAM) to an existing server, while horizontal scaling (scaling out) involves adding more servers to a system.
  • Persistence vs. Non-Persistence → Persistence is the ability of data and applications to retain their state across sessions, while non-persistence describes systems that do not retain state, resetting after each session.

Objective 2.2

Security Requirements Definition

  • Functional Requirements → Functional security requirements specify what a system should do to ensure security. These requirements outline specific behaviors and actions that the system must perform to maintain its security posture.
    • Ex. A functional requirement for a banking application might specify that user login sessions must expire after 10 minutes of inactivity to protect against unauthorized access.
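
    A minimal sketch of how the session-expiry requirement above might be enforced in code; the function name and timeout constant are illustrative:

```python
import time

IDLE_TIMEOUT_SECONDS = 10 * 60  # the "10 minutes of inactivity" requirement

def session_is_valid(last_activity_ts: float) -> bool:
    """Return False once the session has been idle longer than the timeout."""
    return (time.time() - last_activity_ts) <= IDLE_TIMEOUT_SECONDS

# A session last active 11 minutes ago fails the check and must re-authenticate.
print(session_is_valid(time.time() - 11 * 60))  # False
print(session_is_valid(time.time() - 60))       # True
```
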
  • Non-Functional Requirements → Non-functional security requirements define the quality attributes, performance, and constraints of the security mechanisms in a system. These requirements ensure that the system’s security measures are effective and sustainable.
    • Ex. A non-functional requirement might state that the system must detect and log 95% of all access attempts within one second to ensure timely responses to potential security incidents.
  • Security vs. Usability Trade-Off → The security vs. usability trade-off involves balancing the need for robust security measures with the need to maintain a user-friendly experience. Strong security often introduces complexity that can impact usability, and vice versa.
    • Ex. Implementing multi-factor authentication (MFA) improves security but may inconvenience users; balancing this could involve offering low-friction authentication methods (e.g., biometrics).

Software Assurance

  • Static Application Security Testing (SAST) → SAST is a method of analyzing source code or binaries to identify security vulnerabilities without executing the application.
    • Ex. A SAST tool scanning a Java application’s source code and identifying SQL injection vulnerabilities before the code is deployed.
  • Dynamic Application Security Testing (DAST) → DAST involves testing a running application to identify vulnerabilities by simulating external attacks.
    • Ex. A DAST tool simulating attacks on a web application to identify vulnerabilities like cross-site scripting (XSS).
  • Interactive Application Security Testing (IAST) → IAST combines elements of SAST and DAST by analyzing applications in real-time during normal operation to identify vulnerabilities.
    • Real-time Analysis: Provides real-time security insights.
    • Context-aware: Offers detailed context about the application’s state during vulnerabilities.
    • Integration: Can be integrated with development and testing workflows.
    • Ex. An IAST tool monitoring a web application during testing and identifying an insecure data handling practice.
  • Runtime Application Self-Protection (RASP) → RASP protects applications by detecting and blocking attacks in real-time while the application is running.
    • Steps: Deploy the RASP agent, monitor application execution, and block attacks as they occur.
    • Functionalities: Immediate protection, self-defending applications, and detailed attack logging.
    • Ex. A RASP tool embedded in a web application that detects and blocks an SQL injection attempt in real-time.
  • Vulnerability Analysis → Vulnerability analysis involves identifying, categorizing, and assessing vulnerabilities in an application or system.
    • Ex. A vulnerability analysis revealing several high-severity vulnerabilities in a web application, leading to prioritized remediation.
  • Software Composition Analysis (SCA) → SCA identifies and manages security risks in the open-source and third-party components used in an application.
    • Steps: Scan components, identify vulnerabilities, and manage risks.
    • Dependency Management: Tracks and manages dependencies.
    • License Compliance: Ensures compliance with open-source licenses.
    • Security Visibility: Offers visibility into the security of all components.
    • Ex. An SCA tool identifying a vulnerable version of a library used in an application and suggesting an upgrade to a secure version.
  • Software Bill of Materials (SBoM) → SBoM is a comprehensive list of all components, libraries, and modules that make up a software application.
    • Ex. An organization maintaining an SBoM for its software products to ensure transparency and manage supply chain risks.
  • Formal Methods → Formal methods involve using mathematical and logical techniques to specify, develop, and verify software systems.
    • Ex. Using formal methods to verify the correctness of an algorithm used in a critical safety system, ensuring it behaves as expected under all conditions.

Continuous Integration/Continuous Deployment (CI/CD)

  • Coding Standards and Linting → Coding standards are guidelines and best practices for writing code, ensuring consistency, readability, and maintainability. Linting involves using tools to automatically check the code for adherence to these standards and potential errors.
    • Ex. Using ESLint to check JavaScript code against predefined coding standards in every pull request.
  • Branch Protection → Branch protection involves implementing rules and policies to protect important branches (e.g., main, master) from unintended changes, ensuring code quality and stability.
    • Ex. Requiring at least two code reviews and passing CI checks before merging changes into the main branch.
  • Continuous Improvement → Continuous improvement is an ongoing effort to enhance processes, tools, and practices in the CI/CD pipeline to increase efficiency, quality, and performance.
    • Ex. Regularly reviewing CI/CD pipeline metrics and implementing automation to reduce build times and increase test coverage.
  • Testing Activities → Testing activities in CI/CD involve various types of tests to ensure code quality, functionality, and performance before deployment. These tests include canary, regression, integration, automated test and retest, and unit tests.
    • Canary Testing:
      • A technique where a new software version is gradually rolled out to a small subset of users before a full deployment, to detect any issues early.
      • Steps:
        • Deploy Incrementally: Release new code to a small subset of users.
        • Monitor Feedback: Collect performance and error metrics.
        • Gradual Rollout: Gradually increase the user base if no issues are detected.
      • Functionalities:
        • Risk Mitigation: Reduces risk by limiting exposure to new changes.
        • Real-time Validation: Validates changes in a live environment.
      • Example: Deploying a new feature to 5% of users and monitoring for errors before a full rollout.
    • Regression Testing:
      • The process of re-testing software after changes (e.g., updates or fixes) to ensure that the new code does not negatively affect existing functionality.
      • Steps:
        • Identify Test Cases: Select test cases that cover existing functionalities.
        • Automate Tests: Automate regression tests in the CI/CD pipeline.
        • Run Tests: Execute regression tests after every code change.
      • Functionalities:
        • Stability: Ensures new changes do not break existing functionalities.
        • Automation: Provides automated validation of past functionalities.
        • Example: Running automated regression tests on an e-commerce application to ensure checkout functionality remains unaffected by new updates.
    • Integration Testing:
      • Testing in which individual software modules are combined and tested as a group to ensure they interact as expected.
      • Steps:
        • Define Test Scenarios: Identify scenarios that test the interaction between components.
        • Automate Tests: Implement automated integration tests.
        • Run Tests: Execute integration tests in the CI/CD pipeline.
      • Functionalities:
        • Component Interaction: Validates that different components work together as expected.
        • Early Detection: Identifies issues in the integration phase.
      • Example: Testing the integration between the user authentication service and the payment gateway in a web application.
    • Automated Test and Retest:
      • The use of automated tools to execute tests repeatedly, often used in continuous integration/continuous deployment (CI/CD) pipelines to ensure that changes do not introduce new bugs.
      • Steps:
        • Create Test Scripts: Develop automated test scripts.
        • Integrate with CI/CD: Integrate automated tests into the CI/CD pipeline.
        • Retest: Automatically retest after every code change or deployment.
      • Functionalities:
        • Consistency: Ensures consistent and repeatable testing.
        • Efficiency: Reduces manual testing effort and speeds up feedback.
      • Example: Automated retesting of critical workflows after each deployment in a CI/CD pipeline.
    • Unit Testing:
      • The testing of individual components or functions of a software application in isolation from the rest of the system to verify that each part works correctly.
      • Unit testing verifies that a particular block of code performs the exact action intended and produces the exact output expected.
      • Steps:
        • Write Unit Tests: Develop unit tests for individual components or functions.
        • Automate Execution: Automate unit tests to run with every code change.
        • Analyze Results: Review unit test results to identify and fix issues.
      • Functionalities:
        • Isolated Testing: Tests individual components in isolation.
        • Early Detection: Catches issues early in the development cycle.
      • Example: Writing and automating unit tests for a function that calculates user discounts in an e-commerce application.
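
    A short illustration of the discount example above, assuming a hypothetical `user_discount` business rule and Python's built-in unittest framework:

```python
import unittest

def user_discount(total: float, is_member: bool) -> float:
    """Hypothetical business rule: members get 10% off orders of $100 or more."""
    if is_member and total >= 100:
        return round(total * 0.10, 2)
    return 0.0

class TestUserDiscount(unittest.TestCase):
    def test_member_over_threshold_gets_discount(self):
        self.assertEqual(user_discount(150.0, True), 15.0)

    def test_non_member_gets_no_discount(self):
        self.assertEqual(user_discount(150.0, False), 0.0)

    def test_member_under_threshold_gets_no_discount(self):
        self.assertEqual(user_discount(99.99, True), 0.0)

if __name__ == "__main__":
    unittest.main()  # runs with every code change in the CI pipeline
```
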
  • Supply Chain Risk Management
    • Software Supply Chain Risk Management → Managing risks associated with the acquisition, integration, and deployment of software components from external sources.
      • Steps:
        • Identify Dependencies: Catalog all third-party software components.
        • Evaluate Vendors: Assess the security practices and reliability of software vendors.
        • Monitor and Audit: Continuously monitor and audit software components for vulnerabilities.
        • Patch Management: Ensure timely application of patches and updates.
      • Functionalities:
        • Transparency: Maintain visibility into software dependencies.
        • Risk Assessment: Evaluate the potential risks posed by third-party software.
        • Security Assurance: Ensure software components are secure and reliable.
      • Ex. Using a Software Composition Analysis (SCA) tool to identify vulnerabilities in open-source libraries and manage their updates.
    • Hardware Supply Chain Risk Management → Managing risks associated with the acquisition, integration, and deployment of hardware components from external sources.
      • Steps:
        • Vendor Assessment: Evaluate the security and reliability of hardware vendors.
        • Component Validation: Verify the authenticity and integrity of hardware components.
        • Supply Chain Monitoring: Monitor the supply chain for potential risks, such as counterfeit components.
        • Incident Response: Develop and implement a response plan for hardware-related incidents.
      • Functionalities:
        • Authentication: Ensure the authenticity of hardware components.
        • Integrity Checking: Verify that hardware components have not been tampered with.
        • Continuous Monitoring: Monitor the supply chain for emerging threats.
      • Ex. Implementing a process to verify the integrity of hardware components using cryptographic techniques before deployment.
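
    One common cryptographic integrity check is comparing a component's firmware image against a vendor-published digest. A minimal sketch follows; the file path and expected digest are placeholders, and real programs typically also verify digital signatures:

```python
import hashlib

def verify_firmware(path: str, expected_sha256: str) -> bool:
    """Compare a firmware image's SHA-256 digest to the vendor-published value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large images don't have to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Expected value would come from the vendor's signed release notes:
# verify_firmware("firmware.bin", "<vendor-published sha256 hex>")
```
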

Hardware Assurance

  • Certification and Validation Process → Hardware assurance through certification and validation involves evaluating and verifying that hardware components meet specific security, quality, and performance standards. This process ensures that hardware is reliable, secure, and free from tampering or defects.
    • Ex. A manufacturer certifies its processors with the Trusted Computing Group (TCG) to ensure they meet rigorous security and reliability standards.

End-of-Life (EOL) Considerations

  • End-of-life considerations encompass the strategies and actions taken when a product is no longer supported by the manufacturer, ensuring security, compliance, and minimal disruption during the transition.
  • Steps:
    • Assessment: Identify and assess products nearing EOL.
    • Notification: Inform stakeholders about EOL timelines and implications.
    • Support and Maintenance: Plan for continued support and security measures.
    • Replacement Planning: Develop a strategy for replacing or upgrading EOL products.
    • Data Migration: Ensure safe migration of data from EOL products.
    • Disposal: Securely dispose of EOL hardware or decommission software.
  • Ex. A company plans for the end-of-life of its Windows 7 workstations by upgrading to Windows 10 before the EOL date to ensure continued support and security.

Objective 2.3

Attack Surface Management and Reduction

  • Attack surface management and reduction involve identifying, assessing, and mitigating potential entry points for attackers within an organization’s IT infrastructure.
  • Vulnerability Management → A process of identifying, evaluating, treating, and reporting on security vulnerabilities in systems and software.
    • Ex. Using a vulnerability scanner like Nessus to identify and patch vulnerabilities in a network.
  • Hardening → The process of securing a system by reducing its surface of vulnerability.
    • This involves configuring system settings and implementing security controls to minimize potential attack vectors.
    • Ex. Hardening a web server by disabling unused ports and services, and applying secure configurations according to best practices.
  • Defense-in-Depth → A security strategy that employs multiple layers of defense to protect against potential threats. Each layer serves as a backup in case one defensive measure fails.
    • Ex. Implementing a defense-in-depth strategy that includes firewalls, network segmentation, antivirus software, and encryption.
  • Legacy Components within an Architecture → Legacy components are outdated or obsolete hardware and software systems that are still in use within an organization’s IT infrastructure.
    • Ex. Using virtual patching and network segmentation to secure a legacy database system until it can be replaced.

Detection and Threat-Hunting Enablers

  • Detection and threat-hunting enablers are critical components that enhance an organization’s ability to identify, monitor, and respond to potential threats.
  • Centralized Logging → Centralized logging involves aggregating log data from various sources (e.g., servers, applications, network devices) into a single, centralized system for easier analysis and monitoring.
    • Ex. Using a SIEM (Security Information and Event Management) system like Splunk or LogRhythm to centralize and analyze logs from web servers, firewalls, and endpoints.
  • Continuous Monitoring → An ongoing observation of an organization’s IT environment to detect and respond to security threats and vulnerabilities in real-time.
    • Ex. Using an EDR (Endpoint Detection and Response) solution like CrowdStrike Falcon to continuously monitor endpoint activities for suspicious behavior.
  • Alerting → Alerting involves setting up notifications to inform security teams of potential security incidents or anomalies detected within the IT environment.
    • Ex. Configuring a SIEM system to send email alerts to the security team when unusual login activities are detected.
  • Sensor Placement → Sensor placement involves strategically deploying sensors throughout the IT environment to capture and monitor security-relevant data.
    • Ex. Deploying network intrusion detection sensors at the network perimeter and key internal segments to monitor for malicious traffic.
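
    As a small illustration of feeding a central collector, the sketch below forwards application logs over syslog using Python's standard library. The collector hostname is a placeholder; commercial SIEMs such as Splunk also provide their own forwarders:

```python
import logging
import logging.handlers

# Forward application logs to a central collector/SIEM listening on syslog
# (UDP port 514 by default). The hostname below is a placeholder.
handler = logging.handlers.SysLogHandler(address=("siem.example.internal", 514))
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))

logger = logging.getLogger("webapp.auth")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.warning("Failed login for user=%s from ip=%s", "jdoe", "203.0.113.45")
```
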

Information and Data Security Design

  • Classification Models → Classification models are frameworks used to categorize data based on its sensitivity and importance, defining how data should be handled and protected.
    • Ex. A company classifies its data into four levels: public, internal, restricted, and confidential. Public data is freely accessible, while confidential data is heavily restricted and encrypted.
  • Data Labeling → Data labeling involves assigning labels or tags to data that indicate its classification level, ownership, and other relevant attributes.
    • Ex. Using a data classification tool to automatically label documents containing personal identifiable information (PII) as “confidential” and apply appropriate access controls.
  • Tagging Strategies → Tagging strategies involve the systematic use of metadata tags to organize, manage, and protect data. Tags can include information about data classification, ownership, usage, and security requirements.
    • Ex. Implementing a tagging strategy where all financial data is tagged with “financial” and “restricted,” ensuring it is stored securely and only accessible by authorized personnel.
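
    A brief sketch of applying such tags programmatically, here with boto3 object tagging on S3; the bucket, key, and tag values are hypothetical, and the call requires AWS credentials:

```python
import boto3

s3 = boto3.client("s3")

# Tag an object so downstream controls (lifecycle rules, access policies,
# DLP scans) can key off its classification.
s3.put_object_tagging(
    Bucket="corp-finance-data",
    Key="reports/q3-earnings.xlsx",
    Tagging={"TagSet": [
        {"Key": "classification", "Value": "restricted"},
        {"Key": "category", "Value": "financial"},
    ]},
)
```
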

Data Loss Prevention (DLP)

  • At Rest → DLP at rest involves protecting data stored on devices, servers, databases, or other storage media.
    • Ex. Encrypting a company’s customer database and restricting access to it using role-based access control (RBAC).
  • In Transit → DLP in transit refers to protecting data as it moves across networks, whether between devices, within internal networks, or over the internet.
    • Ex. Using TLS to secure email communications and prevent interception of sensitive information.
  • Data Discovery → Data discovery involves locating, identifying, and classifying sensitive data across the organization’s data repositories.
    • Ex. Using a data discovery tool to scan company servers and identify files containing personally identifiable information (PII).

Hybrid Infrastructures

  • Hybrid infrastructure combines on-premises data centers, private clouds, and public clouds to create a cohesive and flexible IT environment.
  • Ex. A company uses a hybrid infrastructure where critical applications run on-premises for better control and compliance, while development and testing workloads are hosted on a public cloud to take advantage of scalability and cost savings.

Third-Party Integrations

  • Third-party integrations refer to the incorporation of external services, applications, or systems into an organization’s existing infrastructure to extend capabilities and improve efficiency.
  • Ex. Integrating a third-party payment gateway (like PayPal or Stripe) into an e-commerce platform to handle online transactions securely and efficiently.

Control Effectiveness

  • Control effectiveness refers to the degree to which security controls achieve their intended objectives and mitigate risks to an acceptable level.
  • Assessments:
    • Definition: Evaluating the design and operation of security controls.
    • Steps:
      • Define assessment criteria.
      • Conduct control reviews.
      • Document findings and recommend improvements.
    • Example: Regularly reviewing access control mechanisms to ensure only authorized personnel have access to sensitive data.
  • Scanning:
    • Definition: Using automated tools to identify vulnerabilities and weaknesses in systems.
    • Steps:
      • Schedule regular scans.
      • Analyze scan results.
      • Remediate identified issues.
    • Example: Running a vulnerability scan on network devices to detect and patch security flaws.
  • Metrics:
    • Definition: Quantitative measures used to evaluate the performance of security controls.
    • Steps:
      • Define relevant metrics.
      • Collect and analyze data.
      • Use metrics to inform decision-making.
    • Example: Tracking the number of security incidents detected and responded to within a specified time frame.

Objective 2.4

Provisioning/De-provisioning

  • Provisioning is the process of creating and granting access to new accounts.
  • De-provisioning involves revoking access and removing accounts when they are no longer needed.
  • Credential Issuance → A process of providing users with the necessary authentication information, such as usernames and passwords, to access systems and applications.
    • Ex. An IT department generates a unique username and password for a new employee and securely sends the credentials via a secure email or a secure portal.
  • Self-Provisioning → Allows users to create and manage their own accounts and access rights through an automated system, often within defined policies and guidelines.
    • Ex. A company allows employees to use a self-service portal to request access to specific applications, which are then approved based on predefined policies.

Federation

  • Federation is a trust arrangement in which multiple organizations share identity information, allowing users authenticated by one identity provider to access services across organizational boundaries.

Single sign-on (SSO)

  • An authentication process that allows a user to access multiple applications with one set of login credentials.
  • Ex. A user logs into their company’s SSO portal and gains access to email, HR systems, and other internal applications without re-entering their credentials.

Conditional Access

  • Conditional access grants or denies access based on contextual signals evaluated at sign-in, such as user identity, device health, location, and session risk.

Identity Provider

  • An identity provider (IdP) is a system that creates, maintains, and manages identity information and provides authentication services within a federation or SSO system.
  • Ex. A company uses an IdP to authenticate employees accessing internal and external applications.

Service Provider

  • A service provider (SP) is an entity that provides services or applications to users and relies on an identity provider to authenticate users.
  • Ex. An online application that allows users to log in using their corporate credentials managed by an external IdP.

Attestations

  • Attestations are statements or assertions made by a trusted entity (like an identity provider) about a user’s identity or attributes.
  • Verify Attributes: Provide verified information about users.
  • Trust-Based: Rely on the trustworthiness of the asserting entity.
  • Enhance Security: Ensure user information is accurate and trustworthy.
  • Ex. An identity provider asserts that a user has a specific role within their organization, which is used to grant access to certain resources.

Policy Decision and Enforcement Points

  • Policy decision points (PDP) and policy enforcement points (PEP) are components in an access control system.
  • PDPs decide if a user should be granted access, while PEPs enforce that decision.
  • Policy Decision Point (PDP): Evaluates access requests against policies.
  • Policy Enforcement Point (PEP): Enforces access decisions made by PDPs.
  • Centralized Control: Separates decision-making from enforcement for better control.
  • Ex. A PDP evaluates if a user can access a secure application based on their role, and the PEP enforces this decision by allowing or denying access.
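
    A toy illustration of the PDP/PEP split: the decision function only evaluates policy, while the enforcement function acts on that decision. The policy contents and names are invented for the example:

```python
# Minimal illustration of separating decision (PDP) from enforcement (PEP).
POLICIES = {"secure-app": {"allowed_roles": {"admin", "finance"}}}

def pdp_decide(user_role: str, resource: str) -> bool:
    """Policy Decision Point: evaluate the request against policy."""
    policy = POLICIES.get(resource, {})
    return user_role in policy.get("allowed_roles", set())

def pep_enforce(user_role: str, resource: str) -> str:
    """Policy Enforcement Point: act on the PDP's decision."""
    return "access granted" if pdp_decide(user_role, resource) else "access denied"

print(pep_enforce("finance", "secure-app"))  # access granted
print(pep_enforce("guest", "secure-app"))    # access denied
```
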

Access Control Models

  • Frameworks that define how access rights are granted and enforced, including role-based access control (RBAC), attribute-based access control (ABAC), mandatory access control (MAC), and discretionary access control (DAC).

Logging and Auditing

  • Logging → Logging involves the continuous recording of events, activities, and transactions within a system or network to provide a detailed record of actions and changes.
    • Ex. A server logs every user login attempt, including successful and failed attempts, along with the timestamp and IP address of the user.
  • Auditing → Auditing is the systematic examination and evaluation of logs and other records to ensure compliance with policies, detect anomalies, and improve security posture.
    • Ex. An auditor reviews the access logs of a financial system to ensure that only authorized personnel accessed sensitive financial data and investigates any anomalies.
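
    A minimal sketch of the login-attempt logging described above, using Python's standard logging module to produce timestamped, filterable entries:

```python
import logging

logging.basicConfig(
    filename="auth.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",  # timestamped entries for later audit
)

def record_login_attempt(username: str, source_ip: str, success: bool) -> None:
    outcome = "SUCCESS" if success else "FAILURE"
    # Failed attempts are logged at WARNING so auditors and alert rules can filter on them.
    level = logging.INFO if success else logging.WARNING
    logging.log(level, "login %s user=%s ip=%s", outcome, username, source_ip)

record_login_attempt("alice", "192.0.2.10", True)
record_login_attempt("alice", "198.51.100.7", False)
```
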

Public Key Infrastructure (PKI) Architecture

  • A framework that enables secure, encrypted communication and authentication over networks.
  • It uses a pair of cryptographic keys, public and private, along with digital certificates to validate identities and ensure data integrity.
  • Certificate Extensions → Certificate extensions provide additional information about the certificate and its intended use, enhancing the basic functionality of a digital certificate.
    • Ex. A certificate extension may indicate that the certificate can be used for both email protection and client authentication.
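
    The Extended Key Usage extension carries exactly the email-protection and client-authentication information in the example above. A short sketch of reading it with the third-party cryptography package; the certificate file is hypothetical:

```python
# Requires the third-party `cryptography` package.
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

def check_extended_key_usage(pem_data: bytes) -> None:
    cert = x509.load_pem_x509_certificate(pem_data)
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
    except x509.ExtensionNotFound:
        print("no Extended Key Usage extension present")
        return
    print("client auth allowed:     ", ExtendedKeyUsageOID.CLIENT_AUTH in eku)
    print("email protection allowed:", ExtendedKeyUsageOID.EMAIL_PROTECTION in eku)

# check_extended_key_usage(open("user_cert.pem", "rb").read())  # hypothetical file
```
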
  • Certificate Types → Different types of certificates are used within a PKI to serve various purposes, each providing a specific function or level of assurance.
    • Ex. An organization uses an end-entity certificate to secure its web server and a code signing certificate to validate its software updates.
  • Online Certificate Status Protocol (OCSP) Stapling → OCSP stapling is a method to provide real-time certificate status information to clients, improving performance and security.
    • Ex. A web server includes a current OCSP response when presenting its certificate, allowing clients to quickly verify its validity.
  • Certificate Authority/Registration Authority (CA/RA) → A Certificate Authority (CA) issues and manages digital certificates, while a Registration Authority (RA) assists the CA by handling registration and identity verification of certificate applicants.
    • Ex. A CA issues a digital certificate to an employee after the RA verifies their identity through company records and personal identification.
  • Templates → Templates are predefined configurations for creating certificates, ensuring consistency and adherence to organizational policies.
    • Ex. An organization uses a template to issue employee certificates with predefined attributes, such as validity period and key usage.
  • Deployment/Integration Approach → The deployment and integration approach outlines how PKI components are implemented and integrated into an organization’s existing infrastructure.
    • Ex. An organization integrates PKI with its existing Active Directory to manage user certificates and implement single sign-on (SSO) capabilities.

Access Control Systems

  • Access control systems are mechanisms that restrict access to resources based on user identity and predefined policies.
  • Physical → Physical access control systems manage access to physical spaces such as buildings, rooms, and secured areas through various methods like keycards, biometrics, and security guards.
    • Ex. An office building uses a keycard system where employees must swipe their keycards at entry points to gain access to different floors and rooms.
  • Logical → Logical access control systems regulate access to computer systems, networks, and data through user authentication and authorization mechanisms.
    • Ex. A company network requires employees to log in with their username and password, with additional access to sensitive data protected by multi-factor authentication.

Objective 2.5

Cloud Access Security Broker (CASB)

  • A CASB is a policy enforcement point that sits between cloud service consumers and cloud providers to enforce security policies such as authentication, encryption, and data loss prevention.

Shadow IT Detection

  • Shadow IT refers to the use of IT systems, devices, software, applications, and services without explicit IT department approval.
  • Ex. Using a CASB to monitor and detect unauthorized use of cloud services by employees, identifying unsanctioned applications being accessed.

Shared Responsibility Model

  • A security framework that delineates the responsibilities of cloud service providers and customers in securing cloud environments.
  • Provider Responsibilities: Secure the cloud infrastructure, including hardware, software, networking, and facilities.
  • Customer Responsibilities: Secure everything they put in the cloud, including data, applications, and operating systems.
  • Collaboration: Both parties work together to ensure overall security.
  • Ex. In AWS, AWS is responsible for the security of the cloud (physical infrastructure), while the customer is responsible for securing their data and applications within the cloud.

CI/CD Pipeline

  • A method to automate the process of software delivery, enabling continuous integration, continuous delivery, and continuous deployment.
  • Ex. Using Jenkins to automate the CI/CD pipeline for deploying web applications, ensuring faster and more reliable software releases.

Terraform

  • An open-source infrastructure as code (IaC) tool that allows users to define and provision data center infrastructure using a high-level configuration language.
  • Infrastructure as Code: Define infrastructure using declarative configuration files.
  • Provisioning: Automate the creation and management of infrastructure.
  • Scalability: Easily scale infrastructure up or down as needed.
  • Ex. Using Terraform scripts to provision and manage AWS resources such as EC2 instances, S3 buckets, and VPCs.

Ansible

  • An open-source automation tool used for IT tasks such as configuration management, application deployment, and task automation.
  • Agentless: Operates without needing agents on target machines.
  • Playbooks: Uses YAML files to describe automation tasks.
  • Scalability: Manages large-scale environments efficiently.
  • Ex. Using Ansible playbooks to automate the deployment and configuration of web servers across multiple environments.

Package Monitoring

  • The practice of monitoring software packages for vulnerabilities, updates, and compliance.
  • Ex. Using tools like Snyk or Dependabot to monitor and manage dependencies in a project, ensuring they are secure and up-to-date.

Container Security

  • The process of implementing security measures to protect containerized applications and their environments.
  • Image Security: Use trusted base images and scan for vulnerabilities.
  • Runtime Security: Monitor container behavior and enforce security policies.
  • Network Security: Implement network segmentation and control access.
  • Ex. Using tools like Aqua Security or Twistlock to scan Docker images for vulnerabilities and monitor running containers for suspicious activities.

Container Orchestration

  • Automating the deployment, management, scaling, and networking of containers.
  • Ex. Using Kubernetes to orchestrate and manage containerized applications, ensuring high availability and scalability.

Serverless Computing

  • Serverless computing is a cloud computing execution model where the cloud provider dynamically manages the allocation and provisioning of servers.
  • Users can run code without managing the underlying infrastructure.
  • Workloads → Workloads in serverless computing refer to the tasks or processes that are executed by serverless functions.
    • These workloads can vary widely, from simple data processing tasks to complex, event-driven applications.
    • Ex. Processing images uploaded to an S3 bucket using a serverless function to resize and store them in a different bucket.
  • Functions → Functions in serverless computing are small, single-purpose pieces of code that execute in response to events. They are the core component of serverless architectures.
    • Ex. An AWS Lambda function that triggers when a new record is added to a DynamoDB table, processes the record, and sends a notification.
  • Resources → Resources in serverless computing refer to the cloud infrastructure components and services that serverless functions interact with or depend on.
    • Ex. An AWS Lambda function that processes data from an S3 bucket and stores results in a DynamoDB table, using API Gateway to expose the function as an HTTP endpoint.
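
    A minimal sketch of the DynamoDB-triggered function described above, following AWS's documented stream event shape; the processing step is left as a placeholder:

```python
# Hypothetical AWS Lambda handler triggered by a DynamoDB stream.
import json

def lambda_handler(event, context):
    for record in event.get("Records", []):
        if record.get("eventName") == "INSERT":
            new_item = record["dynamodb"]["NewImage"]
            # ... process the new record, e.g., send a notification ...
            print("new record:", json.dumps(new_item))
    return {"statusCode": 200}
```
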

API Security

  • Authorization → Authorization in API security refers to the process of determining if a user or system has the appropriate permissions to access or perform actions on resources.
    • Ex. Using OAuth 2.0 to grant a web application access to a user’s Google Drive files, specifying that the application can only read files and not modify them.
  • Logging → Logging involves recording API interactions, including requests, responses, and errors, to monitor, troubleshoot, and audit API activities.
    • Ex. Using AWS CloudWatch Logs to collect and monitor API request logs for an application, setting up alerts for suspicious activities like failed login attempts.
  • Rate Limiting → Rate limiting controls the number of API requests a client can make within a specific timeframe to protect the API from abuse and ensure fair usage.
    • Ex. Implementing rate limits to allow a maximum of 1000 API requests per hour per user to prevent abuse and ensure service availability.
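
    A common way to implement such a limit is a token bucket, refilled at the average allowed rate. A self-contained sketch follows; in production the per-user state would live in a shared store such as Redis:

```python
import time

class TokenBucket:
    """Token-bucket limiter: refills steadily, rejects requests when empty.
    1000 requests/hour works out to a refill rate of ~0.28 tokens/second."""

    def __init__(self, capacity: int = 1000, refill_per_sec: float = 1000 / 3600):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Add tokens for the time elapsed since the last request, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should return HTTP 429 Too Many Requests

bucket_per_user = {}  # in production this state lives in a shared store like Redis

def handle_request(user_id: str) -> int:
    if user_id not in bucket_per_user:
        bucket_per_user[user_id] = TokenBucket()
    return 200 if bucket_per_user[user_id].allow() else 429
```
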

Cloud vs. Customer-Managed

  • Encryption Keys → Encryption keys are used to encrypt and decrypt data to protect it from unauthorized access.
    • In a cloud environment, the management of these keys can either be handled by the cloud provider (cloud-managed) or by the customer (customer-managed).
    • Cloud-Managed Encryption Keys → Cloud-managed encryption keys are created, stored, and managed by the cloud service provider.
      • Customers use these keys to encrypt data, but the management and rotation of keys are handled by the provider.
      • Ex. Using AWS S3 with server-side encryption managed by AWS Key Management Service (KMS), where AWS handles key management and rotation.
      • Pros:
        • Reduced Administrative Burden: Cloud provider handles all aspects of key management.
        • Automatic Key Rotation: Providers often offer automatic key rotation features.
        • Integrated Security: Cloud providers have robust security practices and compliance certifications.
      • Cons:
        • Limited Control: Less control over key management and rotation.
        • Shared Responsibility: Security is shared between customer and provider.
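
    A brief sketch of the S3-with-KMS example above using boto3: the caller only names the key, and AWS handles its storage and rotation. The bucket name and key alias are hypothetical, and the call requires AWS credentials:

```python
import boto3

s3 = boto3.client("s3")

# Server-side encryption with an AWS KMS key: the provider stores, rotates,
# and controls use of the key; the caller only names it.
s3.put_object(
    Bucket="corp-archive",
    Key="customers/export.csv",
    Body=b"id,name\n1,Alice\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/corp-data-key",
)
```
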
    • Customer-Managed Encryption Keys → Customer-managed encryption keys are created, stored, and managed by the customer. This approach gives customers full control over key lifecycle and access policies.
      • Ex. Using Azure Key Vault to create and manage encryption keys for encrypting data stored in Azure Blob Storage.
      • Pros:
        • Full Control: Complete control over key management and policies.
        • Custom Policies: Ability to implement custom key management practices.
        • Enhanced Security: Can meet stricter compliance and security requirements.
      • Cons:
        • Increased Administrative Burden: Requires more effort to manage keys and policies.
        • Manual Rotation: Key rotation and lifecycle management are the customer’s responsibility.
  • Licenses → Licenses are agreements that allow customers to use specific software, services, or resources.
    • In the context of cloud and customer-managed environments, licenses can be managed by either the cloud provider or the customer.
    • Cloud-Managed Licenses → Cloud-managed licenses are included in the cloud service offerings, where the cloud provider handles the acquisition, management, and compliance of software licenses.
      • Ex. Using Office 365 where Microsoft handles all software licensing, updates, and compliance as part of the subscription.
      • Pros:
        • Simplified Management: The provider handles all licensing aspects.
        • Included Costs: Licenses are included in the subscription or service fee.
        • Automated Updates: Software updates and compliance are managed by the provider.
      • Cons:
        • Limited Control: Less control over license management and updates.
        • Fixed Costs: Costs are tied to the service subscription model.
    • Customer-Managed Licenses → Customer-managed licenses are acquired, managed, and renewed by the customer. This approach provides customers with control over their software licenses.
      • Ex. Purchasing and managing software licenses for on-premises applications like Adobe Creative Suite.
      • Pros:
        • Full Control: Greater flexibility and control over licenses and their usage.
        • Custom Agreements: Ability to negotiate terms and conditions with vendors.
        • Tailored Licensing: Can manage licenses specific to organizational needs.
      • Cons:
        • Administrative Effort: Requires more work for managing licenses and compliance.
        • Separate Costs: Licensing costs are additional and separate from cloud service costs.

Cloud Data Security Considerations

  • Data Exposure → Data exposure refers to situations where sensitive information is accessible to unauthorized individuals or entities, either accidentally or maliciously.
    • Ex. A cloud database with publicly accessible settings that exposes customer personal information to the internet.
  • Data Leakage → Data leakage occurs when sensitive information unintentionally leaves the organization or is exposed to unauthorized parties.
    • Ex. Sensitive information being exposed through misconfigured cloud storage buckets.
  • Data Remanence → Data remanence refers to the residual data left on storage media after deletion or decommissioning, which can potentially be recovered by unauthorized parties.
    • Ex. Data on decommissioned hard drives that could be recovered using data recovery tools.
  • Unsecured Storage Resources → Unsecured storage resources are cloud storage services or resources that are not properly secured, exposing data to unauthorized access.
    • Ex. An S3 bucket configured with public read access, allowing unauthorized users to access stored files.

Cloud Control Strategies

  • Proactive Controls → Proactive controls aim to prevent security incidents before they occur by identifying and mitigating risks early.
    • Ex. Implementing automated vulnerability scans and proactive monitoring.
  • Detective Controls → Detective controls focus on identifying security incidents and breaches as soon as they occur.
    • Ex. Using centralized logging and security information and event management (SIEM) tools.
  • Preventative Controls → Preventative controls reduce the likelihood of security incidents by blocking unwanted actions before they can occur.
    • Ex. Configuring access controls, encryption, and implementing firewall rules.

Customer-to-Cloud Connectivity

  • Customer-to-cloud connectivity refers to the methods and mechanisms used to establish and manage secure connections between a customer’s on-premises environment and cloud service providers.
  • Ex. Setting up a Virtual Private Network (VPN) connection to securely connect an on-premises network to a cloud service.

Cloud Service Integration

  • Cloud service integration refers to the process of connecting various cloud services and applications to work together seamlessly.
  • Ex. Integrating AWS Lambda functions with Amazon S3 and DynamoDB to process data events.

Cloud Service Adoption

  • Cloud service adoption involves the process of selecting, implementing, and managing cloud services to meet organizational needs.
  • Ex. Adopting a cloud-based CRM solution for managing customer relationships.

Objective 2.6

Continuous Authorization

  • Continuous authorization involves ongoing evaluation and validation of user and device access permissions to ensure they remain valid over time.
  • Ex. Using a Security Information and Event Management (SIEM) system to continuously monitor and review user activities and adjust access permissions based on real-time threats.
  • Ensures access permissions are continually reviewed.

Context-Based Re-authentication

  • Context-based re-authentication requires users to re-authenticate based on changes in their context or behavior, ensuring that access remains secure under varying conditions.
  • Ex. Requiring users to re-authenticate if they attempt to access sensitive information from a new device or location.
  • Reduces the risk of unauthorized access based on changes in context.
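
    A minimal sketch of such a trigger: any change in device or coarse location since the session was established forces re-authentication, and selected sensitive resources always require step-up. All names and fields are illustrative:

```python
SENSITIVE_RESOURCES = {"payroll", "customer-pii"}

def requires_reauth(session: dict, request: dict) -> bool:
    """Force re-authentication on context change or sensitive-resource access."""
    if request["device_id"] != session["device_id"]:
        return True
    if request["country"] != session["country"]:
        return True
    return request["resource"] in SENSITIVE_RESOURCES and not session.get("step_up_done")

session = {"device_id": "laptop-01", "country": "US", "step_up_done": False}
print(requires_reauth(session, {"device_id": "laptop-01", "country": "US", "resource": "wiki"}))     # False
print(requires_reauth(session, {"device_id": "phone-99", "country": "US", "resource": "wiki"}))      # True
print(requires_reauth(session, {"device_id": "laptop-01", "country": "US", "resource": "payroll"}))  # True
```
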

Network Architecture

  • Network Segmentation → Network segmentation involves dividing a network into smaller, isolated segments to limit the scope of security breaches and improve overall network security.
    • Ex. Dividing a network into separate segments for users, applications, and servers to control access and contain potential threats.
  • Micro-segmentation → Micro-segmentation is the practice of creating isolated, smaller network segments within a larger segment to enforce granular security controls.
    • Provide more granular access controls and limit the lateral movement of threats.
    • Ex. Implementing policies that restrict communication between different applications or services within a single network segment.
  • VPN → A virtual private network creates an encrypted tunnel between endpoints, protecting traffic between network segments and remote users.
  • Always-On VPN → A VPN configuration that establishes and maintains the tunnel automatically whenever the device has connectivity, so traffic is never sent unprotected.

API Integration and Validation

  • API integration involves connecting different systems or applications to enable data exchange and functionality.
  • API validation ensures that APIs operate securely and as expected, protecting against potential security risks.
  • Ex. Integrating a third-party payment gateway into your application while validating the API for secure transactions and proper error handling.

Asset Identification, Management, and Attestation

  • Asset identification, management, and attestation involve discovering, classifying, managing, and verifying the integrity of assets in an IT environment.
  • Objective: Maintain an accurate inventory of assets, manage them securely, and perform attestation to ensure compliance and integrity.
  • Ex. Identifying all hardware and software assets in your environment, managing them through a centralized system, and performing regular audits for compliance and security.

Security Boundaries

  • Security boundaries are points or layers in an architecture where security controls are applied to protect data and system components.
  • These boundaries help define where to implement policies and controls to ensure a Zero Trust security model.
  • Data Perimeters → Data perimeters define the boundaries around data to ensure its security and integrity. In a Zero Trust model, data perimeters help to manage and protect data access and movement.
    • Objective: Establish boundaries to protect data from unauthorized access and ensure data security.
    • Approach: Define and enforce access controls, encryption, and monitoring at the data level.
    • Ex. Creating a data perimeter around sensitive customer information to control access and ensure data protection.
  • Secure Zones → Secure zones are isolated areas within a network that are protected by security controls to safeguard different types of data or services.
    • Objective: Create isolated areas for different security needs to manage risks and protect sensitive resources.
    • Approach: Design and implement secure zones with appropriate controls and access mechanisms.
    • Ex. Creating a secure zone for the finance department to ensure that financial data is isolated from other parts of the organization.
  • System Components → System components are the individual elements of a network or application infrastructure that need to be protected as part of the overall security strategy.
    • Objective: Ensure that all system components are secure and operate according to security policies.
    • Approach: Apply security measures to individual components and manage their interactions.
    • Ex. Securing components like servers, databases, and applications by implementing appropriate security measures and controls.

Deperimeterization

  • Deperimeterization refers to the practice of shifting security controls from the traditional network perimeter to a more granular, identity-based approach that enforces security policies at the level of users, devices, and applications.
  • Secure Access Service Edge (SASE) → SASE is a security framework that integrates network and security functions into a unified cloud-delivered service to support the needs of modern, distributed workforces.
    • Objective: Provide secure, scalable access to applications and resources from anywhere, without relying on traditional network perimeters.
    • Approach: Combine SD-WAN and security services (like secure web gateways, CASB, and firewall as a service) into a single, cloud-native platform.
    • Ex. Using a SASE solution to provide secure, scalable access to cloud applications for remote employees.
  • Software-Defined Wide Area Network (SD-WAN) → SD-WAN is a technology that simplifies the management of WAN networks by abstracting and virtualizing network functions.
    • Objective: Enhance WAN management for improved performance, reliability, and security.
    • Approach: Use centralized management to optimize connectivity and apply security policies across the WAN.
    • Ex. Deploying SD-WAN to connect branch offices with headquarters and cloud services in a cost-effective and secure manner.
  • Software-Defined Networking (SDN) → SDN is a network architecture approach that separates the network control plane from the data plane to enable more flexible and programmable network management.
    • Objective: Improve network management through centralized control and automation.
    • Approach: Use SDN to manage network resources dynamically and apply security policies.
    • Ex. Using SDN to dynamically adjust network resources for different applications and enforce security policies.

Defining Subject-Object Relationships

  • In a Zero Trust architecture, subject-object relationships refer to the interactions between entities (subjects), such as users or devices, and the resources or services (objects) they want to access.
  • Properly defining these relationships involves ensuring that access controls, authentication, and authorization mechanisms are in place to enforce security policies effectively.
  • Common models for defining these relationships include role-based access control (RBAC) and attribute-based access control (ABAC).
  • Policy Enforcement Points (PEPs) and Policy Decision Points (PDPs) → PEPs are components that enforce security policies, while PDPs evaluate and decide on access requests based on policies.
    • Objective: Separate the decision-making and enforcement of access control policies.
    • Approach: Use PEPs to enforce policies and PDPs to make decisions.
    • Ex. A firewall (PEP) enforces access control rules decided by a security policy server (PDP).
  • Zero Trust Network Access (ZTNA) → ZTNA is a security model where access to resources is granted based on strict verification processes rather than relying on perimeter security.
    • Objective: Provide secure access to resources based on verification of every request.
    • Approach: Ensure all access requests are verified and authorized regardless of the request’s origin.
    • Ex. Using a ZTNA solution to verify a user’s identity and device security posture before granting access to corporate applications.