Admins use it to locate potential problems between two systems
hping → This command is similar to the ping command, but it can send probes using TCP, UDP, or ICMP packets
Useful for identifying whether a firewall is blocking ICMP traffic
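The idea can be sketched in Python with the scapy library (an illustration only, not hping itself): send a plain ICMP echo and a TCP SYN probe to the same host and compare which one gets an answer. The target address is hypothetical.

```python
# Illustration with scapy (not hping itself): compare an ICMP echo with a
# TCP SYN probe against the same host. Requires root privileges to send
# raw packets; the target address is hypothetical.
from scapy.all import IP, ICMP, TCP, sr1

target = "192.0.2.10"  # hypothetical target

# Plain ICMP echo request (what ping sends)
icmp_reply = sr1(IP(dst=target) / ICMP(), timeout=2, verbose=False)

# TCP SYN "ping" to port 80 -- often answered even when ICMP is filtered,
# which is how hping-style probes reveal a firewall that blocks ICMP
syn_reply = sr1(IP(dst=target) / TCP(dport=80, flags="S"), timeout=2, verbose=False)

print("ICMP answered:   ", icmp_reply is not None)
print("TCP SYN answered:", syn_reply is not None)
```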
theHarvester → Passive recon CLI tool → Uses OSINT methods to gather data such as emails, employee names, host IPs, & URLs
It queries popular search engines & gives you a report
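A rough sketch of driving theHarvester from a Python script; the domain is hypothetical, and the -d (domain) and -b (data source) options, as well as the executable name, should be verified against the installed version.

```python
# Hedged sketch: invoking theHarvester via subprocess; executable name and
# options may differ between versions -- verify locally.
import subprocess

result = subprocess.run(
    ["theHarvester", "-d", "example.com", "-b", "bing"],  # hypothetical domain
    capture_output=True,
    text=True,
)
print(result.stdout)  # emails, hosts, and names gathered from OSINT sources
```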
sn1per → Automated scanner used for vulnerability assessment & to gather info on targets during a penetration test
scanless → Python-based CLI tool that runs port scans through third-party websites, so the scan doesn't originate from your own IP address
dnsenum → Enumerate DNS records for domains
It can perform many Domain Name System (DNS)-related functions, including querying A records, nameservers, and MX records, as well as performing zone transfers, Google searches for hosts and subdomains, and net range reverse lookups.
It can run in an automated fashion
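The kind of lookups dnsenum automates can be sketched with the dnspython library (dnsenum itself is a separate CLI tool); the domain is hypothetical.

```python
# Sketch of dnsenum-style record enumeration using the dnspython library
# (assumed installed); the target domain is hypothetical.
import dns.resolver

domain = "example.com"

for record_type in ("A", "NS", "MX"):
    try:
        answers = dns.resolver.resolve(domain, record_type)
        for rdata in answers:
            print(f"{record_type}: {rdata}")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"{record_type}: no records found")
```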
Cuckoo → Open-source automated malware analysis system / sandbox → Runs suspicious files in an isolated environment & reports on their behavior
Preparation → This phase occurs before an incident & provides guidance to personnel on how to respond to an incident
Identification → Verify whether it is an actual incident or not
Containment → After identifying an incident, security personnel attempt to isolate or contain it
This protects critical systems while maintaining business operations
The goal of isolation is to prevent the problem from spreading to other areas of the network
Eradication → After containing the incident, it's necessary to remove all components of the attack
Includes deleting or disabling compromised accounts
Recovery → During the recovery process, admins return all affected systems to normal operation & verify they are operating normally
Lessons Learned → After personnel handle an incident, security personnel perform the lessons learned review
This incident may provide some valuable lessons, & the organization might modify procedures or add additional controls to prevent a recurrence of the incident
syslog → This protocol specifies a general log entry format & details on how to transport log entries
Originators → Any system that sends syslog messages
Collector → Originators send syslog log entries to the collector → syslog server
Syslog protocol only specifies how to format the syslog messages & send them to the collector
Linux systems include the syslogd daemon, which is the service that handles syslog messages → /etc/syslog.conf → /var/log/syslog
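A minimal originator sketch using Python's standard logging.handlers.SysLogHandler to send one message to a collector over UDP/514; the collector hostname is hypothetical.

```python
# Minimal syslog originator: send a log message to a remote collector over
# UDP port 514 using the standard library. Collector hostname is hypothetical.
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address=("syslog.example.com", 514))
logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("User admin logged in from 10.0.0.5")  # delivered to the collector
```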
Syslog-ng → Extends syslogd, allowing a system to collect logs from any source
It provides correlation, the ability to route log entries, & rich content-based filtering capabilities
It supports TCP & TLS
Rsyslog → An improvement over syslog-ng → Adds the ability to send log entries directly into database engines
It supports TCP & TLS
NXLog → Log Management Tool similar to rsyslog & syslog-ng → Supports Linux & Windows
It functions as a log collector & can be integrated with SIEM systems
journalctl → Command that queries & displays log entries from the systemd journal on a Linux system
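A hedged sketch of pulling recent journal entries into Python by calling journalctl with JSON output; the unit name is hypothetical, and the flags shown (-u, -n, -o json, --no-pager) should be verified on the target system.

```python
# Read recent journal entries for one service as JSON lines and parse them.
# Unit name is hypothetical; verify journalctl options on the target system.
import json
import subprocess

proc = subprocess.run(
    ["journalctl", "-u", "sshd.service", "-n", "20", "-o", "json", "--no-pager"],
    capture_output=True,
    text=True,
)

for line in proc.stdout.splitlines():
    entry = json.loads(line)
    print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))
```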
Bandwidth Monitors → By comparing captures taken at different times, investigators can determine changes in network traffic.
PRTG and Cacti are both network monitoring tools that can provide bandwidth monitoring information.
Bandwidth monitors can help identify exfiltration, heavy and abnormal bandwidth usage, and other information that can be helpful for both incident identification and incident investigations.
NetFlow → A feature available on many routers & switches that can collect IP traffic statistics & send them to a NetFlow collector
NetFlow analysis software allows admins to view & analyze the network traffic
NetFlow data provides detailed information about the network traffic → Metadata (not packet payloads) → source and destination IP addresses, ports, protocols, timestamps, and the amount of data transferred
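A sketch (not a real NetFlow parser) of the metadata fields a single flow record carries; all values are hypothetical.

```python
# Sketch of the metadata a single flow record carries -- no packet payloads,
# just who talked to whom, when, over which ports/protocol, and how much.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    start_time: datetime
    end_time: datetime
    bytes_transferred: int

flow = FlowRecord(
    src_ip="10.0.0.5",          # hypothetical internal host
    dst_ip="203.0.113.80",      # hypothetical external host
    src_port=49152,
    dst_port=443,
    protocol="TCP",
    start_time=datetime(2024, 1, 1, 9, 0, 0),
    end_time=datetime(2024, 1, 1, 9, 5, 0),
    bytes_transferred=250_000_000,  # an unusually large transfer worth a closer look
)
print(flow)
```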
sFlow → A sampling protocol → Provides traffic information based on a preconfigured sample rate
Ex. It may capture 1 packet out of 10 packets & send this sample data to the collector
As it captures & sends only sampled data, it is less likely to impact the device's performance, allowing it to work on devices that handle high volumes of traffic
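A small worked example of 1-in-N sampling: the collector sees only the sampled packets, then scales the sampled byte count back up by the sample rate to estimate total traffic.

```python
# Worked example of 1-in-10 sampling: estimate total traffic from a sample.
import random

SAMPLE_RATE = 10          # 1 packet sampled out of every 10
actual_bytes = 0
sampled_bytes = 0

for _ in range(100_000):                     # simulated packets
    size = random.randint(64, 1500)          # random packet size in bytes
    actual_bytes += size
    if random.randrange(SAMPLE_RATE) == 0:   # ~1-in-10 chance of being sampled
        sampled_bytes += size

estimated_bytes = sampled_bytes * SAMPLE_RATE
print(f"actual:    {actual_bytes}")
print(f"estimated: {estimated_bytes}")       # close to actual, at a fraction of the cost
```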
IP Flow Information Export (IPFIX) → Similar to NetFlow v9 → A replacement for NetFlow
Legal Hold → Refers to a court order to maintain different types of data as evidence
A legal hold overrides the organization's normal data retention policy → data subject to the hold must be preserved even if it would otherwise be deleted
Admissibility → When collecting documentation & evidence, it’s essential to follow specific procedures to ensure that the evidence is admissible in a court of law
Chain of custody → A process that provides assurances that evidence has been controlled & appropriately handled after collection
Forensics experts establish chain of custody when they first collect the evidence
It provides a record of every person who was in possession of a physical asset collected as evidence → Chain of custody forms list every person who has worked with or made contact with the evidence that is part of an investigation
A proper chain of custody procedure ensures that evidence presented in a court of law is the same evidence that security professionals collected
A well-documented chain of custody can help establish provenance for data, proving where it came from, who handled it, and how it was obtained.
Provenance → Refers to tracing something back to its origin
The provenance of a forensic artifact includes the chain of custody, including ownership and acquisition of the artifact, device, or image
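A sketch of the information a chain-of-custody log captures for one evidence item; the field names and values are hypothetical, not a standard form.

```python
# Hypothetical chain-of-custody record: every change of possession is logged
# with who, what action, when, and where. Field names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CustodyEntry:
    handler: str        # person taking possession
    action: str         # e.g., "collected", "transferred", "analyzed"
    timestamp: datetime
    location: str

@dataclass
class EvidenceItem:
    tag_id: str                      # matches the physical evidence tag
    description: str
    custody_log: list[CustodyEntry] = field(default_factory=list)

item = EvidenceItem(tag_id="EV-0042", description="Seized laptop hard drive")
item.custody_log.append(
    CustodyEntry("A. Analyst", "collected", datetime.now(timezone.utc), "Server room 3")
)
item.custody_log.append(
    CustodyEntry("B. Examiner", "transferred", datetime.now(timezone.utc), "Forensics lab")
)
```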
Tags → A tag is placed on evidence items when they are identified
Sequence of Events
Timestamps
Time Offset → Provides info about how the timestamps were recorded, such as the time zone or offset from UTC used by the system (see the sketch after this list)
Reports → After analyzing all the relevant evidence, digital forensics experts create a report documenting their findings
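Sketch referenced from the Time Offset item above: normalizing a local-time log entry to UTC using a recorded offset (UTC-05:00 is just an example) so events from different systems line up on one timeline.

```python
# Apply a recorded time offset so timestamps from different systems can be
# compared on one UTC timeline. The UTC-05:00 offset is an example value.
from datetime import datetime, timedelta, timezone

local_tz = timezone(timedelta(hours=-5))                  # offset noted by the examiner
local_event = datetime(2024, 3, 1, 14, 30, 0, tzinfo=local_tz)

utc_event = local_event.astimezone(timezone.utc)
print(utc_event.isoformat())   # 2024-03-01T19:30:00+00:00
```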
Order of Volatility → Refers to the order in which you should collect evidence
You should collect evidence starting with the most volatile & moving to the least volatile
Order of volatility from most to least:
Registers, Cache → The contents of CPU cache and registers are extremely volatile, since they are changing all of the time. Literally, nanoseconds make the difference here. An examiner needs to get to the cache and register immediately and extract that evidence before it is lost.
Routing Table, ARP Cache, Process Table, Kernel Statistics, Memory
Temporary File Systems
Disk
Remote Logging and Monitoring Data that is Relevant to the System in Question
Physical Configuration, Network Topology, and Archival Media
Old breakdown (from most to least volatile):
Cache → Data in cache memory including the processor & hard drive cache
RAM → Data in RAM used by OS & applications
Swap / Pagefile → Swap space (the pagefile) is an extension of RAM stored on the system's hard drive
Disk → Data files stored on local disk drives & they remain there after rebooting
Attached Devices → USB drives & other attached devices also hold data after the system is powered down
Network → Servers & shared folders accessible by users & used to store log files
Data Acquisition
Snapshot → Forensic experts use snapshots (such as VM snapshots) to capture the state of a system for forensic analysis
Artifacts → Forensic artifacts are pieces of data on a device that regular users are typically unaware of, but that digital forensic experts can identify & extract
Web History
Recycle Bin
Windows Error Reporting
Remote Desktop Protocol (RDP) cache
When artifacts are acquired as part of an investigation, they should be logged and documented as part of the evidence related to the investigation.
Right to Audit Clauses → Allow customers to hire an auditor & review the cloud provider's records
Auditing helps the customer ensure that the cloud provider is implementing adequate security
Many cloud service providers do not allow customer-driven audits, either by the customer or a third party. They also commonly prohibit vulnerability scans of their production environment to avoid service outages.
Instead, many provide third-party audit results in the form of a service organization controls (SOC) report or similar audit artifact.
Regulatory Jurisdiction → The company must comply with the relevant laws of every jurisdiction where it operates & where its data is stored
Data Breach Notification Laws → These laws require organizations to notify affected customers about a data breach & take steps to mitigate the loss