Understand your Network – Simplifying Netstat with the Network Top Processes Program!

In this post, we uncover network secrets with netstat and introduce a powerful Network Top Processes program that builds on netstat’s abilities to identify the most heavily used active network connections.

Just in case you missed it, we’ve covered netstat in our Networking Forensics Basics post and included it in our Basic Network Commands Reference Guide.

Understanding the established connections on your network is a key step toward building reliable and efficient network infrastructure. Netstat can display a list of all current network connections on your computer, both incoming and outgoing, including the following information:

Local Address – the local end of the connection (IP address and port number).

Foreign Address – the remote end of the connection, i.e., what you are connecting to (IP address and port number).

Protocol – the communication protocol used for each connection, such as TCP (Transmission Control Protocol) or UDP (User Datagram Protocol).

State – the current state of each connection, such as:

ESTABLISHED (actively exchanging data)

LISTENING (waiting for incoming connections)

TIME_WAIT (waiting for the connection to fully close)

However, running netstat can produce an overwhelming amount of information. By understanding the top processes active on your network, you can significantly enhance your ability to monitor and manage network connections effectively. And that is exactly why we’ve scripted a program to make it happen!

The Network Top Processes Program will do the following:

  1. Summarize the top 10 network processes. 
  2. Provide a count of the top active connections. 
  3. Show detailed information about each process, including the PID, local address, remote address, and connection status.
  4. Display a graphical representation of the connection counts. 

How does the Network Top Processes Program run? 

Below are the steps the program takes to execute: 

Step 1 `import psutil`: the Process and System Utilities library, used to gather information about established network connections, their status, and their associated processes.
Step 2 `import matplotlib.pyplot as plt`: matplotlib is imported to create the bar chart.
Step 3 `from tabulate import tabulate`: the tabulate library helps format and display tabular data neatly.
Step 4 The function `get_top_processes_with_connections_info()` is defined to gather and display information about the top processes with the most network connections.
Step 5 `psutil.net_connections(kind='inet') + psutil.net_connections(kind='inet6')` gets the internet connections on the computer, for both IPv4 and IPv6.
Step 6 `process_connections`: a dictionary mapping each process name (key) to its number of connections (value).

Step 7 Iterating through `connections`: the code loops through each network connection and checks whether its status is “ESTABLISHED.” If it is, it retrieves the associated process name using `psutil.Process(pid).name()` and updates the `process_connections` dictionary to count the number of connections for each process.

Step 8 Sorting the processes: the dictionary `process_connections` is then sorted in descending order by the number of connections each process has, creating a list of tuples called `sorted_processes`.
Step 9 Displaying the top processes: the code prints a table using the `tabulate` library, showing the top 10 processes with the most connections. The table has two columns: “Process” (the name of the process) and “Connections” (the number of connections it has).
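The counting and sorting logic of Steps 6–8 can be sketched in plain Python, using a hypothetical sample of (process name, status) pairs in place of live psutil data:

```python
# Stand-ins for live psutil data: (process name, connection status) pairs.
sample_connections = [
    ("chrome", "ESTABLISHED"),
    ("chrome", "ESTABLISHED"),
    ("sshd", "ESTABLISHED"),
    ("chrome", "TIME_WAIT"),   # skipped: not ESTABLISHED
    ("python", "ESTABLISHED"),
    ("chrome", "ESTABLISHED"),
]

# Steps 6-7: map each process name to its count of ESTABLISHED connections
process_connections = {}
for name, status in sample_connections:
    if status == "ESTABLISHED":
        process_connections[name] = process_connections.get(name, 0) + 1

# Step 8: sort descending by connection count into a list of tuples
sorted_processes = sorted(
    process_connections.items(), key=lambda item: item[1], reverse=True
)
print(sorted_processes)  # [('chrome', 3), ('sshd', 1), ('python', 1)]
```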
Step 10 Getting detailed connection information: for each of the top processes, the code prints a more detailed table that shows:

PID – process ID

LADDR – local address with its port

RADDR – remote address with its port

STATUS – connection status

Step 11 Creating a bar chart: The bar chart represents the top processes and the number of connections they have.
Step 12 `if __name__ == "__main__":` — this block ensures that the `get_top_processes_with_connections_info()` function is called only when the program is run as the main script.
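Putting the steps together, a skeleton of the program might look like the snippet below. This is a sketch, not the exact script: psutil, tabulate, and matplotlib are third-party dependencies (`pip install psutil tabulate matplotlib`), so those imports are kept inside the main function and only a pure helper sits at module level.

```python
def top_processes(name_status_pairs, limit=10):
    """Count ESTABLISHED connections per process and return the top
    (name, count) pairs, highest count first."""
    counts = {}
    for name, status in name_status_pairs:
        if status == "ESTABLISHED":
            counts[name] = counts.get(name, 0) + 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:limit]

def get_top_processes_with_connections_info():
    # Third-party imports kept local so the helper above works without them.
    import psutil
    import matplotlib.pyplot as plt
    from tabulate import tabulate

    # Steps 5-7: gather IPv4 + IPv6 connections and resolve process names
    connections = (psutil.net_connections(kind="inet")
                   + psutil.net_connections(kind="inet6"))
    pairs = []
    for conn in connections:
        if conn.status == "ESTABLISHED" and conn.pid:
            try:
                pairs.append((psutil.Process(conn.pid).name(), conn.status))
            except psutil.NoSuchProcess:
                pass  # process exited between snapshot and lookup

    # Steps 8-9: rank and print the top 10 as a table
    ranked = top_processes(pairs)
    print(tabulate(ranked, headers=["Process", "Connections"]))

    # Step 11: bar chart of connection counts per process
    if ranked:
        names, counts = zip(*ranked)
        plt.bar(names, counts)
        plt.ylabel("Established connections")
        plt.show()

# Step 12 equivalent: call get_top_processes_with_connections_info()
# when running this file as a script.
```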

By running this script, you can gain valuable insights into your network’s top processes and connections. It allows you to monitor network performance, identify resource-intensive processes, detect unusual network activity, and optimize network usage.

Run the program and let us know what you think! 

The Art of Transcription: Decoding Videos for Deeper Insights in Digital Forensics

In the world of digital forensics, where every piece of evidence holds crucial significance, video transcription emerges as a valuable tool. The ability to accurately transcribe the audio content of videos is a pivotal aspect of investigations, aiding in evidence documentation, analysis, and interpretation.

In this article, we will explore the fundamental importance of video transcription and introduce an interactive program that empowers you to transcribe videos firsthand. Discover the compelling reasons behind video transcription and unlock the ability to actively transcribe videos with our innovative solution!

Importance

Overall, video transcription enhances the efficiency and effectiveness of digital forensics investigations by providing a searchable, analyzable, and shareable representation of video content. Here are some key reasons why video transcription is important in this field:

Evidence Documentation: Video transcription helps in documenting and preserving evidence found in videos. Transcribing the audio content of a video provides a written record of the conversations, actions, and other important details depicted in the video. This transcription can be used as evidence in court or for further analysis.

Searchability: Transcribing videos enables investigators to search for specific keywords or phrases within the transcript. This expedites the investigative process by enabling swift identification and retrieval of pertinent information. It also helps in identifying connections, patterns, or keywords that might be crucial to the case.

Analysis and Interpretation: Video transcription provides a text-based representation of the video’s content, making it easier to analyze and interpret the information. Investigators can review the transcript multiple times, annotate it, and extract valuable insights that may not be immediately apparent by simply watching the video.

Cross-referencing: Video transcription allows for easy cross-referencing with other digital evidence. By transcribing multiple videos or combining video transcripts with other types of transcriptions (such as text messages or chat logs), investigators can identify correlations, inconsistencies, or connections.

Accessibility and Collaboration: Video transcription makes video content accessible to a wider audience. It allows investigators, attorneys, or other stakeholders to review the video’s content without having to watch the entire video repeatedly. Transcriptions can also be easily shared and collaborated upon, enabling multiple experts to analyze and contribute to the investigation.

Metadata Verification: In some cases, video transcriptions can assist in verifying the accuracy and authenticity of video metadata. By comparing the content of the video with the associated transcript, investigators can help determine if the video was tampered with or altered, strengthening the evidential value of the video.

 

Transcribe Video Program

To assist with your investigations of video evidence files, here is a program that transcribes a given video file to text: Transcribe_Video.py

The program will prompt you to enter the path to the video file you want to transcribe and the path where you want to save the transcription. The transcription will be saved in the specified text file.

Make sure to install the necessary dependencies by running the following commands before executing the code:

pip install moviepy 

pip install SpeechRecognition

brew install ffmpeg (macOS via Homebrew; on Linux, use your distribution’s package manager, e.g., apt install ffmpeg)

 

The program relies on the Google Web Speech API for speech recognition, so an active internet connection is necessary.

The following is a description of how Transcribe_Video.py functions: 

The script imports the following libraries:

‘os’ for file operations

‘moviepy.editor’ from the moviepy library for video processing

‘speech_recognition’ as ‘sr’ for speech recognition capabilities.

The ‘transcribe_video’ function takes two parameters: ‘video_path’ (the path to the video file) and ‘output_path’ (the path to save the transcription). It performs the following steps:

  1. It loads the video file using ‘moviepy.editor.VideoFileClip()’ and extracts the audio.
  2. The audio is saved as a temporary WAV file using the ‘write_audiofile()’ method. (Specified ‘codec=”pcm_s16le”’ to ensure compatibility with the SpeechRecognition library). 
  3. The SpeechRecognition library is used to perform the speech recognition. The ‘sr.Recognizer()’ object is created, and the audio file is opened using ‘sr.AudioFile()’. The audio is then recorded using the ‘record()’ method.
  4. The recorded audio is passed to the ‘recognize_google()’ method to perform speech recognition. The resulting transcription is stored in the ‘transcription’ variable.
  5.  The temporary audio file is deleted using ‘os.remove()’ to clean up.
  6.  The transcription is saved to the specified output file using `open()` in write mode. The content is written using the ‘write()’ method of the file object.
  7.  Finally, a message is printed to indicate the location where the transcription is saved.

In the main section of the code, the program prompts the user to enter the path to the video file they want to transcribe. It checks if the file exists using ‘os.path.isfile()’ and displays an error message if the path is invalid. 

The program then prompts the user to enter the path where they want to save the transcription (including the file name and extension). If the file path is valid, the `transcribe_video` function is called with the provided video file path and output file path.
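Condensed, the flow described above might look like the sketch below. The names mirror the description, but the published Transcribe_Video.py may differ in detail, and moviepy plus SpeechRecognition must be installed for it to run (imports are kept inside the function so the file loads even without them).

```python
import os

def transcribe_video(video_path, output_path):
    # Third-party imports kept local; install with:
    #   pip install moviepy SpeechRecognition
    from moviepy.editor import VideoFileClip
    import speech_recognition as sr

    # Steps 1-2: extract the audio track to a temporary WAV file
    audio_path = "temp_audio.wav"
    clip = VideoFileClip(video_path)
    clip.audio.write_audiofile(audio_path, codec="pcm_s16le")

    # Steps 3-4: record the WAV data and send it to Google Web Speech
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:
        audio = recognizer.record(source)
    transcription = recognizer.recognize_google(audio)  # needs internet access

    # Steps 5-7: clean up the temp file and write the transcript
    os.remove(audio_path)
    with open(output_path, "w") as f:
        f.write(transcription)
    print(f"Transcription saved to {output_path}")
```

In the main section, the video path entered by the user would be checked with `os.path.isfile()` before `transcribe_video()` is called.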

Conclusion 

As investigations increasingly rely on digital evidence, videos play a crucial role in capturing vital information and events. By transcribing these videos into text, forensic examiners gain a comprehensive and searchable record of the audio content, facilitating evidence documentation, preservation, and analysis. Ultimately, transcription enhances the efficiency, accuracy, and reliability of video data for digital forensics investigations.

Beyond the Surface: The Hidden World of Email Headers in Digital Forensics

Email headers are the metadata that accompany an email and provide vital information about its origin, path, and delivery. 

By default, email headers are not visible to the recipient when viewing an email through a typical webmail interface. This is primarily to simplify the user experience and avoid overwhelming users with technical information.

However, Digital forensics experts rely on these headers to investigate cybercrimes, track email sources, analyze communication patterns, and verify the integrity of email messages.

Email header information 

The email header contains a range of information, below is a list of the data it typically stores:

Return-Path: Specifies the email address to which bounced or undeliverable messages should be returned.

Received: A chain of headers indicating the servers or systems the email passed through during transmission, which includes timestamps, IP addresses, and hostnames.

Delivered-To: Specifies the email address or mailbox where the message was delivered.

Received-SPF: Indicates the result of the Sender Policy Framework (SPF) check, which verifies if the email’s origin server is authorized to send emails for the claimed domain.

Authentication-Results: Provides the results of various email authentication methods, such as SPF, DKIM, and DMARC.

DKIM-Signature: Contains the cryptographic signature generated by the sending domain to verify the integrity and authenticity of the email.

DomainKey-Signature: A deprecated method similar to DKIM for verifying the authenticity of the email.

From: Specifies the email address and, optionally, the name of the sender.

Reply-To: Indicates the email address to which replies should be sent, which may differ from the sender’s address.

To: Primary recipient’s email address.

Cc: Lists the email addresses of additional recipients who receive a copy of the email.

Bcc: Similar to Cc, but the email addresses of Bcc recipients are hidden from other recipients.

Subject: The subject line or title of the email.

Date: Indicates the date and time when the email was sent.

Message-ID: A unique identifier assigned by the email server to the message.

In-Reply-To: Specifies the message ID of the email to which the current email is a reply.

References: Contains a list of message IDs referring to previous related emails in a conversation.

MIME-Version: Specifies the version of the Multipurpose Internet Mail Extensions (MIME) standard used for encoding the email.

Content-Type: Describes the type of content within the email, such as plain text or HTML.

Content-Transfer-Encoding: Indicates the encoding method used for transferring the content.

X-Priority: Specifies the priority level of the email.

Importance: Indicates the importance level of the email, such as low, normal, or high.

User-Agent: Identifies the email client or software used to send the email.

X-Mailer: Specifies the software or program used to send the email.

X-Originating-IP: Indicates the IP address of the device or server from which the email originated.

X-Sender: Specifies the email address of the sender.

X-Original-Sender: Indicates the original email address of the sender, which may be different from the From address.

X-AntiAbuse: Contains information related to anti-abuse measures taken by the email system.

X-AntiAbuse-Source: Indicates the source of potential abuse, such as the originating IP address.

X-AntiAbuse-UserAgent: Specifies the user agent or software used by the sender to compose the email.
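Several of these fields can be read programmatically with Python’s standard `email` module; the raw message below is a made-up example.

```python
from email import message_from_string

# A minimal, fabricated message for illustration
raw = """\
Return-Path: <bounce@example.com>
From: Alice <alice@example.com>
To: bob@example.com
Subject: Quarterly report
Date: Fri, 23 Jun 2023 10:15:00 -0700
Message-ID: <abc123@mail.example.com>

Body text here.
"""

msg = message_from_string(raw)
print(msg["From"])         # Alice <alice@example.com>
print(msg["Message-ID"])   # <abc123@mail.example.com>
print(msg["Return-Path"])  # <bounce@example.com>
```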

Digital Forensic Focus 

Email headers are of great importance in digital forensics investigations. Email headers serve as a valuable source of information that can help in verifying the authenticity of emails, tracking the flow of communication, and attributing email messages to specific individuals or entities. Below are a couple of key areas to focus on when conducting investigations involving email headers:

Email Source Identification: Email headers provide information about the source of an email, including the IP addresses and domains of the sending and receiving mail servers. This information can help trace the origin of an email and identify potential sources of malicious activity.

Timestamp Analysis: Email headers include timestamps that indicate when the email was sent, received, and delivered. These timestamps can be crucial in establishing timelines, determining the sequence of events, and correlating email communications with other digital evidence.
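For timeline work, the standard library can normalize Date and Received timestamps into comparable datetimes; the values below are illustrative.

```python
from email.utils import parsedate_to_datetime

# Hypothetical sent time (from a Date header) and delivery time
# (from the last Received header)
sent = parsedate_to_datetime("Fri, 23 Jun 2023 10:15:00 -0700")
received = parsedate_to_datetime("Fri, 23 Jun 2023 17:15:42 +0000")

# Offset-aware datetimes compare correctly across time zones:
# 10:15:00 -0700 is 17:15:00 UTC, so the delivery delay is 42 seconds.
delay = received - sent
print(delay.total_seconds())  # 42.0
```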

Email Routing Information: Email headers contain details about the mail servers involved in the delivery of an email. Forensic investigators can analyze this routing information to understand the path the email took and identify any other involved parties. This can be useful in tracing the route of malicious emails or identifying potential points of compromise.

Message Integrity Verification: Email headers often include cryptographic signatures, such as DKIM (DomainKeys Identified Mail) and SPF (Sender Policy Framework). These signatures can be used to verify the authenticity and integrity of the email, ensuring that it has not been tampered with during transit.

Email Metadata Analysis: Email headers provide metadata about the email, such as the email addresses of the sender and recipient(s), subject lines, and message identifiers. This metadata can be analyzed to establish communication patterns, identify relationships between individuals, and reconstruct email conversations or threads.

Tracking Email Forwarding and Redirection: Email headers may contain information about email forwarding, redirection, or replies. Forensic investigators can examine these headers to understand the flow of information, track the path of email messages, and identify any alterations or manipulations of the email chain.

Header Manipulation Detection: Email headers can be analyzed to detect any attempts at header manipulation. This can help identify spoofed emails, phishing attempts, or email fraud schemes.
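As a toy illustration of one such check, the snippet below flags a From address whose domain disagrees with the Return-Path domain. Real detection relies on much more (SPF/DKIM/DMARC results, the Received chain, and so on), and the addresses here are invented.

```python
from email.utils import parseaddr

def domains_mismatch(from_header, return_path_header):
    """True when both headers carry an address but their domains differ."""
    from_domain = parseaddr(from_header)[1].rpartition("@")[2].lower()
    rp_domain = parseaddr(return_path_header)[1].rpartition("@")[2].lower()
    return bool(from_domain and rp_domain) and from_domain != rp_domain

print(domains_mismatch("CEO <ceo@bigcorp.com>", "<bounce@bigcorp.com>"))     # False
print(domains_mismatch("CEO <ceo@bigcorp.com>", "<x9f2@mailer-phish.net>"))  # True
```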

Analysis of Error Messages and Bouncebacks: Email headers contain information about delivery status notifications, bounce messages, and any encountered errors during the delivery process. This information can be used to gather evidence of email delivery issues, identify potential tampering or interference, or trace the existence of intermediary mail servers.

Email Header Script 

I wrote a script that presents an interface allowing users to browse and select an email file, extract its header information, and display the extracted contents in a user-friendly manner.

To access the script, you can download it from the following GitHub repository: Email Header.py

Below are the instructions to execute the program:

1. Execute the script.

2. The GUI window will appear, and you can proceed by clicking the “Browse” button.

3. Choose an email file in .eml format using the file dialog that opens.

4. The script will extract the header information from the selected email file.

5. A new GUI window will open, displaying the extracted header information.

Note: This script assumes that the email file is in UTF-8 encoding. If your email files are encoded differently, you may need to adjust the encoding accordingly in the extract_header_info function.

To run this script, make sure you have the following prerequisites:

    1. Python 3 installed, which should include the tkinter module by default for GUI functionality.
    2. Ensure that the PIL (Python Imaging Library) module is installed. If it’s not already installed, you can install it by running the following command: `pip3 install pillow`.

Email Parsing Details: 

The extract_header_info() function takes the filename as an argument. It opens the selected email file, reads its contents, and uses email.message_from_file() to parse the email message.

The function then creates an empty dictionary called header_info to store the header information. It iterates over the headers in the email message (msg._headers) and extracts the name and value of each header.

The function attempts to decode the header value using decode_header(). If the value is encoded, it decodes it using the appropriate encoding (typically UTF-8). The decoded value is then added to the header_info dictionary with the header name as the key.
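Here is `decode_header()` in action on a MIME encoded-word header value, the same mechanism the script’s decoding step relies on:

```python
from email.header import decode_header

# "Héllo" encoded as an RFC 2047 encoded word (UTF-8, base64)
encoded = "=?utf-8?b?SMOpbGxv?="

# decode_header() returns a list of (value, charset) parts; bytes parts
# are decoded with their reported charset (defaulting to UTF-8 here).
decoded = "".join(
    part.decode(charset or "utf-8") if isinstance(part, bytes) else part
    for part, charset in decode_header(encoded)
)
print(decoded)  # Héllo
```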

Conclusion

In conclusion, email headers play a crucial role in digital forensics investigations. They provide valuable information for tracing the origin, path, and authenticity of an email. Email headers allow forensic analysts to identify the sender, recipient, and intermediate servers involved in the transmission, helping to establish timelines, track the email’s route, and verify its integrity. Additionally, email headers can reveal crucial details such as IP addresses, cryptographic signatures, and authentication results, which are vital in investigating cybercrimes, phishing attempts, and other fraudulent activities. Therefore, the analysis of email headers is a fundamental component of digital forensics, enabling investigators to unravel the evidence and uncover insights essential for resolving cases and ensuring the integrity of electronic communications.

Navigating the Aftermath: Dealing with the Impact of an Application Security Breach

The recent security breach experienced by CalPERS serves as a reminder of the ever-present threat to sensitive data. 

As an individual who is directly impacted by the breach, I have been personally affected and left with a multitude of questions. It is important to delve into the specifics of the incident, comprehensively understand the scope of the breach, and explore the measures being undertaken to address the situation and prevent future occurrences.

Let’s talk about it. 

What is CalPERS?

CalPERS stands for the California Public Employees’ Retirement System. It is a public pension fund that provides retirement and health benefits to public employees, retirees, and their beneficiaries in the state of California.

Structure of the Attack 

CalPERS uses a third-party vendor, PBI Research Services/Berwyn Group (PBI), for its MOVEit Transfer Application. This application allows payments and benefits to be sent to users.

CalPERS received a notification from PBI on June 6, 2023, regarding a “zero-day” vulnerability discovered in their MOVEit Transfer Application. This vulnerability resulted in unauthorized third-party access, enabling the downloading of our data.

A “zero-day” vulnerability is a security flaw or weakness in a system or device that is discovered by an attacker before the developer or vendor becomes aware of it. The name stems from the software vendor having zero days of advance notice to address and patch the vulnerability.

Extent of Breach

It has been confirmed that sensitive personal information of individuals currently receiving monthly benefit payments through CalPERS (as of Spring 2023) has been unlawfully accessed and downloaded. The compromised data included the following categories of personally identifiable information (PII):

      • Names
      • Dates of birth
      • Social Security numbers
      • Family names of individuals, such as spouses, domestic partners, children, etc.
      • Work history information

CalPERS Response 

CalPERS stated that PBI’s initial communication “did not provide sufficient detail as to the scope of the data that was impacted and the individuals to which that data belonged”. Additionally, “as soon as we received additional information, CalPERS officials moved quickly to set up new security procedures, secure credit monitoring and identity theft protection services for our members” [1].

Issues with How CalPERS Addressed the Breach

There were two main issues with how CalPERS addressed the breach: 

  1. Not knowing the extent of the breach.

This lack of knowledge led to a two-week delay before CalPERS informed the public of the attack. Additionally, relying on a third party means depending on that business to remain vigilant, promptly communicate updates, and keep the affected company informed about the breach.

  2. Stating “new security procedures” were in place.

Given the limited understanding of the third-party’s internal structure, how can CalPERS ensure the implementation of appropriate security measures?

Breach Details

The Clop group was identified as having exploited a vulnerability in the MOVEit Transfer application before a patch was deployed, meaning the group likely injected malicious software to gain unauthorized access to sensitive information. The injected software could have taken the form of malware, viruses, worms, or other malicious code designed to exploit vulnerabilities and compromise the security of the targeted system.

Insufficient elaboration has been provided regarding the specifics of the attack and the safeguards implemented to prevent further exploitation. This lack of information leaves numerous questions unanswered and creates uncertainty regarding the level of protection in place.

Identifying The Potential Issue

The MOVEit Transfer Application allows for the transfer of mass information, meaning the following vulnerabilities can pose security risks: 

      1. Insecure Data Transmission – Without secure communication protocols, sensitive information transferred between devices or servers can be intercepted by attackers.
      2. Lack of Encryption – Failure to apply strong encryption leaves data vulnerable during transmission or storage.
      3. Weak Authentication/Authorization – Improper authentication and authorization controls can allow unauthorized users to gain access.
      4. Code Injection Attacks – Applications that do not validate user inputs are open to code injection attacks, where malicious code is injected into the app’s codebase.
      5. Inadequate Session Management – Insecure session management can lead to session hijacking, enabling unauthorized individuals to access and control user sessions.
      6. Insufficient Error Handling – Applications that do not handle errors properly may inadvertently leak sensitive information or give attackers insights into the app’s infrastructure.
      7. Data Storage Vulnerabilities – Weak or insecure storage mechanisms can leave mass information susceptible to unauthorized access, retrieval, or modification.
      8. Inadequate Input Validation – Lack of proper input validation can allow attackers to exploit vulnerabilities such as SQL injection or cross-site scripting (XSS) to gain access to mass information.
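To make the input-validation risk concrete, here is a minimal sketch, using the standard library’s sqlite3 and fabricated data, of how a parameterized query keeps attacker-supplied input out of the SQL, while naive string concatenation does not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "alice' OR '1'='1"

# Parameterized query: the input is bound as data, never parsed as SQL
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
print(rows)  # [] -- no user is literally named "alice' OR '1'='1"

# Vulnerable pattern: string concatenation lets the input rewrite the query
unsafe = "SELECT secret FROM users WHERE name = '" + malicious + "'"
print(conn.execute(unsafe).fetchall())  # [('s3cret',)] -- every row leaks
```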

Mitigation Techniques

To address and minimize these vulnerabilities, application developers should adhere to a comprehensive application development lifecycle that prioritizes robust security protocols. This lifecycle should encompass the following essential stages:

      1. Follow secure coding practices, including input validation, proper error handling, and safe memory management. Frameworks and libraries with built-in security features can help apply these measures.
      2. Secure network communication by using secure communication protocols to protect data transmitted between the app and backend servers. Additionally, implement certificate pinning to ensure the authenticity of the servers.
      3. Implement encryption for data transmission and secure storage. Ensure proper access controls and enforce data separation to minimize the impact of a potential breach.
      4. Enforce strong authentication and authorization measures to verify users’ identities and authorize their access to specific functionality and data. This can be strengthened with session management controls, secure password storage techniques, and multi-factor authentication.
      5. Regularly update and patch the app, focusing on its underlying framework, libraries, and operating system.
      6. Conduct security testing and code review, including penetration testing, to identify vulnerabilities and weaknesses.
      7. Stay informed about emerging threats and best practices in app security. Promote security awareness among app developers by providing training and resources on secure coding practices, common vulnerabilities, and emerging threats.
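One concrete piece of the authentication measures above, secure password storage, can be sketched with the standard library alone: salted, iterated hashing instead of plaintext, and a constant-time comparison when verifying. The function names and iteration count here are illustrative choices, not a prescribed implementation.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=600_000):
    """Return (salt, digest) for storage; the plaintext is never stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored_digest, iterations=600_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```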

Conclusion 

Rebuilding trust in the aftermath of a security incident is crucial for maintaining a strong relationship between a service provider and its users. In the case of the MOVEit Transfer incident, it is essential for PBI to take proactive steps to address concerns and provide transparency to regain trust. One effective approach is the release of a comprehensive technical security report. Such a report should include the following elements:

      1. Incident Overview
      2. Vulnerability Assessment
      3. Impact Assessment
      4. Remediation Plan
      5. Future Plans / Lessons Learned

Reference:

1. PBI Data Breach – Frequently asked questions. CalPERS. (2023, June 23). https://www.calpers.ca.gov/page/home/pbi

Network Forensics Basics

Network Connections

A computer network functions by establishing connections between devices, enabling them to exchange data and communicate with each other. To establish a network connection, there is a combination of hardware and software components. 

Hardware: Enables devices to establish network connections for data transmission and reception.

Software: Implements protocols that define communication standards, ensuring proper data transmission and verification processes.

This process is defined within the layers of the Open Systems Interconnection (OSI) model, created by the International Organization for Standardization (ISO) to establish a framework for the functions within a network system. Below is an overview of the OSI model:

 

OSI Model Reference Guide.

 

Network Forensics

Network forensics involves the analysis of network activity, logs, and data to investigate and respond to security incidents, identify potential threats, and gather evidence for investigations.

Network forensics finds diverse applications across various contexts and scenarios, serving distinct purposes such as:

Red Team (offensive approach) – The emphasis is placed on proactively addressing vulnerabilities by identifying weaknesses through activities such as penetration testing, exploitation, and simulated hacking, with the ultimate goal of prevention.

Blue Team (defensive approach) – The core objective is to uphold network security by employing monitoring techniques and conducting traffic analysis. These measures ensure ongoing protection while also facilitating post-incident investigations to reveal the specifics of security events and gain valuable insights into system activities.

Regardless of the specific application, it is crucial to have a comprehensive understanding of the underlying network infrastructure before utilizing the skills in any context.

Basic network commands

Basic network commands are essential for analysis, troubleshooting and configuration purposes. Below are the basic networking commands that can be utilized to better understand a network:

Basic Network Commands Reference Guide. 

 

Network data sources

When adopting a Blue Team approach, there are various data sources within a network that can be gathered and analyzed to conduct a comprehensive investigation. These data sources include:

Source: Intrusion Detection Systems (IDS)  

Benefit: Monitors network traffic for signs of unauthorized access or suspicious activity and provides alerts and notifications to protect against potential cyber threats.

Source: Firewall

Benefit: The logs from a firewall contain a record of network traffic and security events, providing information about incoming and outgoing connections, blocked or allowed traffic, and potential security incidents.

Source: Security Information and Event Management (SIEM) 

Benefit: The management system combines security event monitoring, log collection, and analysis for centralized visibility and effective monitoring of network incidents.

Source: Packet Sniffers

Benefit: Captures packets transferred on a network, allowing inspection of the data exchanged between devices for security analysis and/or network optimization purposes.

Source: Network Forensics Analysis Tools (NFAT) 

Benefit: Specialized software that monitors network traffic to enhance security, identify threats, and actively protect a network.

Network Collection tools 

Multiple tools are available to gather network related data. The following is a compilation of commonly used tools:

Tcpdump – packet capture tool used for analyzing and inspecting network traffic run via the command line. 

Nmap – network scanning tool used for assessing and discovering hosts, open ports, and services on computer networks.

Wireshark – network protocol analyzer used for capturing and examining live network traffic. 

SolarWinds – network management software that provides tools for monitoring, analyzing, and optimizing network performance and security.

Network Miner – network forensic analysis tool that extracts and displays valuable information from captured network traffic, aiding in the identification of potential security threats and incidents.

Conclusion

In network forensics, investigations revolve around the collection and analysis of network data following a security incident. Before conducting any assessment, it is crucial to comprehend the network’s configuration. Once a thorough understanding of the network’s infrastructure is obtained, the process of data collection and analysis can commence.