Have you ever heard of the term "Penetration Testing"? Most people who work in network security are familiar with it. In short, it is a comprehensive attack simulation carried out against your SD-WAN; because it concerns network security, it naturally also involves firewalls.
Before discussing penetration testing, let's first cover computer networks in general.
Getting to Know About Network
First of all, let's get to know computer networks. A computer network refers to connected computing devices (such as laptops, desktops, servers, smartphones, and tablets) and an ever-expanding array of IoT devices (such as cameras, door locks, doorbells, appliances, audio/visual systems, thermostats, and various sensors) that communicate with each other.
You may already use such networks every day without realizing it. Sending files from your computer to your smartphone via Bluetooth, or playing a game over a LAN cable with the friend next door, are both very simple examples of a network.
Modern networks provide more than just connectivity. As organizations transform themselves digitally, their networks become critical to that transformation and to their success. The network architectures evolving to meet this need require software and hardware that can be properly secured.
Software-defined networking (SDN) responds to the new requirements of the digital era: network architectures are becoming more programmable, automated, and open. In a software-defined network, traffic routing is controlled centrally through software-based mechanisms, which helps the network react quickly to changing conditions.
Understanding Computer Networks
A computer network is a set of devices, called nodes, connected via links. A node can be a computer, printer, or any other device capable of sending or receiving data. The link that connects the nodes is known as the communication channel.
Networks also use distributed processing, in which tasks are shared among multiple computers: instead of one computer handling the entire task, each computer handles a subset of it.
Computer networks can be divided into two types: wired networks and wireless networks.
"Wired" refers to any physical transmission medium: copper cable, twisted pair, and fiber-optic cable are all options. Wired networks use cables to connect devices such as laptops or desktop PCs to the Internet or to another network.
"Wireless" means the medium consists of electromagnetic (EM) waves or infrared waves, and every wireless device has an antenna or sensor. Cell phones, wireless sensors, TV remotes, satellite dish receivers, and laptops with WLAN cards are all examples of wireless devices. For data or voice communication, wireless networks use radio-frequency waves instead of wires.
Although their overall goals are similar, different types of networks serve different purposes. Networks are currently classified in the broad categories below.
Based on geography, networks can be divided into three types:
Local Area Network
A LAN is a collection of connected devices in one physical location, such as a home or office. LANs can be small or large, ranging from a home network with a single user to a large corporate network with thousands of users and devices. LANs can include both wired and wireless devices.
Regardless of its size, a special characteristic of a LAN is that it connects devices that are within a limited area.
Advantages of LAN:

- Simple and relatively cheap
- Resource sharing
- Collaboration between client and server
- Shared access to software programs
- Data protection
- Fast communication

Disadvantages of LAN:

- Information security issues can arise
- Limited coverage area
- If the server fails, all connected devices may be affected
- Installing a large LAN can be difficult and expensive
- Data sharing through external sources poses a risk
Metropolitan Area Network
A Metropolitan Area Network (MAN) covers an entire city, sitting between LAN and WAN technologies, and is very similar to LAN technology. The following are the advantages and disadvantages of a MAN in more detail.
Advantages of MAN:

- Provides higher security than a WAN
- Covers a wider area than a LAN
- Enables cost-effective sharing of common resources such as printers
- Connects fast LANs together easily, because the links are easy to implement
- Requires fewer resources than a WAN, which saves implementation costs
- The dual buses used in a MAN can transmit data in both directions simultaneously
- Provides a good backbone for large networks and greater access to a WAN
- Usually covers several city blocks or an entire city
- Improves data-handling efficiency
- Increases data-transfer speed
- Saves on the installation costs of building a wide area network

Disadvantages of MAN:

- More cable is needed to connect a MAN from one place to another
- Slower data rate compared to a LAN
- Difficult to secure the network against hackers, and this gets harder as the network grows
- Large networks are difficult to manage
- Installation requires skilled network technicians and administrators, increasing overall installation and management costs
- Higher cost than a LAN
- The network does not carry over when moved to another city or area
Wide Area Network
What exactly is a WAN? In short, a WAN is a large communications network that connects locations across a wide geographic area, including cities, states, countries, and continents.
Enterprise WANs are often created for a single organization and are usually private.
Advantages of WAN:

- Centralized IT infrastructure
- Increased bandwidth
- Increased privacy
- Eliminates the need for ISDN
- Guaranteed uptime
- Cuts costs and increases profits

Disadvantages of WAN:

- High setup cost
- Security concerns
- Maintenance issues
Based on the Distribution of Information Sources
If you distinguish networks by the distribution of information sources, they can be divided into two types:
Centralized Network

A centralized network architecture is built around a single server that handles all major processing. Less powerful workstations connect to the server and send their requests to it instead of executing them directly. This can include applications, data stores, and utilities.
Some of the main advantages of centralized network management are consistency, efficiency, and affordability.
Network administrators are under pressure to keep machines patched and up-to-date, so having one central server controlling the entire network means less IT management time and less admin. In addition, all data on a centralized network must go through one place, making it very easy to track and collect data across the network.
Centralized networks do have drawbacks; for example, the single point of failure can be a risk factor for an organization. If the central (or master) server goes down, the individual "client" machines connected to it cannot process user requests. The impact of such a failure depends on how much processing the server handles: if the client machines do little more than send requests, system availability can be seriously compromised.
They also offer limited scalability. Because all applications and processing power live on a single server, the only way to scale the network is to add more storage, I/O bandwidth, or processing power to that server, which may not be a cost-effective solution in the long run.
Distributed Network

In computing terms, a distributed network architecture distributes workloads among multiple machines instead of relying on a single central server. This trend has grown out of the rapid advancement of desktop and laptop computers, which now offer performance far beyond the needs of most business applications, meaning the extra computing power can be used for distributed processing.
Distributed networks offer a variety of benefits over more conventional centralized networks, including increased system reliability, scale, and privacy.
One of the most important benefits is that there is no single point of failure, because individual user machines do not rely on one central server to handle all processes. Distributed networks are also much easier to scale: you simply add more machines to the network to add more computing power.
In addition, the distributed network architecture allows for greater privacy, as information does not pass through a single point and instead passes through a number of different points, making it more difficult to trace across the network.
On the downside, decentralized networks require more machines, which means more maintenance and potential problems, which in turn means an additional burden on your IT resources.
Based on Relationships Between Computers
Based on the relationships between computers, networks can be divided into two types:
Client-server network

A client-server network is the medium through which clients access the resources and services of a central computer, over either a LAN or a WAN such as the Internet.
Advantages of client-server networks:

- All files are stored in a central location
- Network peripherals are controlled centrally
- Backups and network security are controlled centrally
- Users can access centrally controlled shared data

Disadvantages of client-server networks:

- A special network operating system is required
- Servers are expensive to buy
- Specialist staff such as a network manager are required
- If any part of the network fails, major disruptions can occur
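The client-server model described above can be sketched with Python's standard socket module. This is a minimal, hypothetical demo (the file name and reply format are made up for illustration): a server thread owns the shared resource and a client asks it for a file by name.

```python
import socket
import threading

def serve_one(srv: socket.socket) -> None:
    """Central server: accept one client and answer its request."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024)            # e.g. the name of a shared file
        conn.sendall(b"served: " + request)  # the server fulfils the request

def request_resource(port: int, name: bytes) -> str:
    """Client: ask the central server for a resource and return the reply."""
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(name)
        return cli.recv(1024).decode()

# The server owns the shared resources; clients only send requests to it.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve_one, args=(srv,), daemon=True).start()

reply = request_resource(port, b"shared-file.txt")
print(reply)  # served: shared-file.txt
srv.close()
```

If the central server stops listening, no client request can be answered, which is exactly the single-point-of-failure trade-off discussed later in this article.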
Peer to peer network
In a peer-to-peer network, all computers on the network are equal, with each workstation providing access to its own resources and data. It is a simple type of network in which computers can communicate with each other and share whatever is on, or attached to, their computers with other users.
Advantages of peer-to-peer networks:

- No network operating system is needed
- No expensive server is required, because each workstation is used to access files
- No specialist staff such as network technicians are needed, since each user sets their own permissions for the files they want to share
- Much easier to set up than a client-server network, as no special knowledge is required
- If one computer fails, it does not disrupt the rest of the network; it only means that its files are unavailable to other users at that time

Disadvantages of peer-to-peer networks:

- Because each computer may be accessed by others, performance can slow down for its user
- Files and folders cannot be backed up centrally
- Files and resources are not organized into specific "shared areas"; they are stored on individual computers and may be hard to find if the owner does not have a logical filing system
- Keeping viruses off the network is the responsibility of each individual user
- Users often do not need to log on to their workstations, which weakens security
The Purpose of the Network
Computer networks add power, functionality, and flexibility to any computing environment. Once available exclusively in government and university settings, computer networks now extend from offices to homes and directly into our pockets, with cell phones and music players.
There are several purposes for building a network, including the following:
Resource Sharing

Computer networks allow the sharing of network resources, such as printers, dedicated servers, backup systems, input devices, and Internet connections. By sharing resources, unique equipment such as scanners, color printers, or high-speed copiers can be made available to all network users simultaneously without being moved, eliminating the need for expensive duplicate equipment.
What’s more, specific shared resources can be targeted to deliver documents or results directly to the office or department that needs them.
Ease of Administration
IT personnel and computer network administrators love network systems because they allow IT professionals to maintain uniform versions of software, protocols, and security measures across hundreds or thousands of individual computers from a single IT management station.
Instead of upgrading each computer in the company individually, network administrators can initiate upgrades from the server and automatically duplicate the updates across the network simultaneously, enabling everyone in the company to work with uniform software, resources, and procedures.
File and Data Sharing
At one time, file sharing consisted mostly of saving documents to a flash disk that could be physically transferred to another computer by hand. With a network, however, files can be shared instantly across the network, either with a single user or with hundreds.
Employees across departments can collaborate on documents, exchange background material, revise spreadsheets, and make simultaneous additions and updates to a single central customer database without generating conflicting versions.
Distributing Computing Power
Organizations that demand tremendous computing power benefit from computer networking by distributing computing tasks across multiple computers across the network, breaking complex problems into hundreds or thousands of smaller operations, which are then shared across individual computers.
Each computer in the network performs its own operation on the larger part of the problem and returns the results to the controller, which collects the results and draws conclusions that the computer cannot solve on its own.
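The controller/worker pattern just described can be sketched in Python. In this simplified example, threads stand in for the separate machines of a real network so the snippet stays self-contained; the logic of splitting a problem into pieces and recombining partial results is the same idea.

```python
from concurrent.futures import ThreadPoolExecutor

def worker(chunk):
    """Each 'machine' performs its own operation on one piece of the problem."""
    return sum(x * x for x in chunk)

def controller(data, n_workers=4):
    """Break the problem into pieces, hand them to workers, combine results."""
    size = -(-len(data) // n_workers)  # ceiling division: near-equal pieces
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(worker, chunks))  # pieces run in parallel
    return sum(partials)  # the controller collects and combines the results

data = list(range(1, 1001))
total = controller(data)
print(total == sum(x * x for x in data))  # True: same answer as one machine
```

On a real network the workers would be separate computers reached over the link, but the controller's job of chunking, dispatching, and aggregating stays the same.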
Data Protection

Preventing the loss of critical data saves businesses worldwide countless millions of dollars every year. Networking shared computers allows users to distribute copies of critical information across multiple locations, ensuring that critical information is not lost when a single computer in the network fails.
By leveraging a central backup system both on and off site, unique documents and data can be automatically collected from every computer on the network and backed up securely in case of physical damage to the computer or accidental deletion.
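A central backup routine of the kind described can be sketched in a few lines of Python. This is a minimal, hypothetical example (the file names are invented): it copies a document into a backup folder and records a SHA-256 checksum so a later restore can be verified against corruption or accidental changes.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def backup_file(src: Path, backup_dir: Path) -> str:
    """Copy a file into the backup location and return its SHA-256 checksum."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / src.name
    shutil.copy2(src, dest)  # copy contents and metadata
    return hashlib.sha256(dest.read_bytes()).hexdigest()

# Demo with a temporary "workstation" file and a central backup folder.
with tempfile.TemporaryDirectory() as tmp:
    tmp = Path(tmp)
    doc = tmp / "report.txt"
    doc.write_text("quarterly figures")
    checksum = backup_file(doc, tmp / "central-backup")
    # The original and the backup now hash identically.
    same = checksum == hashlib.sha256(doc.read_bytes()).hexdigest()
    print(same)  # True
```

A real backup system would walk every machine on the network and store the copies off-site, but the copy-then-verify step is the core of it.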
Communication

Computer networks also enable organizations to maintain complex internal communication systems. Network email can be sent instantly to all users, and voicemail systems can be hosted over the network. System-wide collaborative scheduling software and program-management tools enable employees to coordinate meetings and work activities for maximum effectiveness, while keeping managers and co-workers informed of plans and progress.
Penetration Testing (Pentest)
Now let's turn to Penetration Testing, or pentest. A penetration test is a simulated cyber attack against your computer system, performed to check for exploitable vulnerabilities. In the context of web application security, penetration testing is commonly used to augment a web application firewall (WAF).
Pentests can involve attempting to breach any number of application systems (e.g., application programming interfaces (APIs), frontend/backend servers) to uncover vulnerabilities, such as unsanitized inputs that are susceptible to code injection attacks.
The insights provided by penetration testing can be used to fine-tune your WAF security policies and patch detected vulnerabilities.
Why Should You Do Penetration Testing
In essence, a pentest is performed to test what would happen if the computer network were hit by a cyber attack.
Hacking through corporate security protections usually takes a lot of time and skill. However, today’s technological advances make it easier for criminals to find the organization’s most vulnerable points. The goal of penetration testing is to help businesses know where they are most likely to face an attack and proactively shore up those vulnerabilities before they are exploited by hackers.
Steps in Penetration Testing
Now that you know what a pentest is, what steps does penetration testing involve? The steps are as follows:
1. Planning and Reconnaissance
The first penetration step involves planning to simulate a malicious attack – the attack is designed in a way that helps gather as much information on the system as possible.
2. Scanning

Based on the findings of the planning stage, penetration testers use scanning tools to explore system and network vulnerabilities.
3. Gaining System Access
After understanding system vulnerabilities, pen testers then infiltrate the infrastructure by exploiting security vulnerabilities.
4. Persistent Access
This pentest step identifies the potential impact of exploiting vulnerabilities by exploiting access rights.
5. Analysis and Reporting
This is the result of a penetration test. As part of the final stage, the security team prepares a detailed report that describes the entire penetration testing process.
Penetration Testing Method
Pentesting assignments are classified by the level of knowledge and access granted to the pentester at the start of the engagement. The most frequently used methods are the following:
Black Box Testing
In black box testing, penetration testers are put in the role of the average hacker, with no internal knowledge of the target system. Testers are not provided with architecture diagrams or any source code that is not publicly available. Black box testing determines the vulnerabilities in a system that can be exploited from outside the network.
Gray Box Testing
While black box testing examines the system from an outsider’s point of view, gray box testing has a level of user access and knowledge, potentially with higher privileges on the system. Gray box pentesters typically have knowledge of network internals, possibly including design and architecture documentation and internal accounts to networks.
White Box Testing
Then there is white box testing, which goes under several different names, including clear-box, open-box, auxiliary, and logic-based testing. It sits at the opposite end of the spectrum from black box testing: penetration testers are given full access to source code, architecture documentation, and so on.
Targeted Testing

Targeted testing involves the company's IT team working closely with external professionals to determine the vulnerabilities of the company's systems. The testing is performed openly, so the teams can compare their findings and work out solutions that strengthen the systems against potential attacks.
External Testing

Next is external testing, carried out when a company wants to know the vulnerabilities of its externally facing devices and servers, such as firewalls, email servers, and web servers. Its purpose is to determine how vulnerable the system is to external attackers.
Internal Testing

A company may decide to test its systems to find out how far a disgruntled employee could access unauthorized information. This task is performed by qualified personnel from behind the firewall; this is internal testing.
Blind Test

This procedure mimics a real cyber attack, even though the company has authorized it. The information provided to the testers is limited, and the ethical hackers have to discover most of the company's information themselves, just as unethical hackers would.
Double Blind Test
This kind of testing is similar to blind testing, except that hardly anyone in the organization, including the security personnel, is aware of the ongoing activity.
How Often Penetration Testing Should Be Done
Penetration testing should be performed regularly, because cyber attacks keep evolving, so security must be kept up to date as well.
In practice, penetration testing can be divided into five stages that must be carried out in order:
Planning and Reconnaissance
The first stage is planning and reconnaissance which involves:
- Define the scope and objectives of the test, including the system to be handled and the test method to be used.
- Gather intelligence (e.g., network and domain names, email servers) to better understand how targets work and their potential vulnerabilities.
Scanning

The next step is to understand how the target application will respond to various intrusion attempts. This is usually done using:
- Static analysis: Examines application code to estimate its behavior at run time. These tools can scan the entire code in one go.
- Dynamic analysis: Inspects an application's code while it is running. This is a more practical way of scanning, as it provides a real-time view of the application's behavior.
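One very small part of the scanning stage, checking which TCP ports accept connections, can be sketched with Python's socket module. This simplified, hypothetical example scans only a local listener that it creates itself; never scan hosts you are not authorized to test.

```python
import socket

def scan_ports(host: str, ports) -> list:
    """Try a TCP connection to each port; an accepted connection means open."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports

# Demo: open a listening socket locally so the scan has something to find.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
listener.listen(1)
known_open = listener.getsockname()[1]

found = scan_ports("127.0.0.1", [known_open])
print(found == [known_open])  # True: the open port was detected
listener.close()
```

Real scanning tools such as Nmap do far more (service fingerprinting, OS detection, timing control), but a connect scan like this is the basic building block.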
Gaining Access

This stage uses web application attacks, such as cross-site scripting, SQL injection, and backdoors, to uncover the target's vulnerabilities. Testers then try to exploit these vulnerabilities, typically by escalating privileges, stealing data, intercepting traffic, and so on, to understand the damage they can cause.
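The SQL injection attack mentioned above can be demonstrated safely with an in-memory SQLite database. The table, the login helpers, and the payload below are hypothetical, made up purely for illustration; the point is the contrast between pasting user input into a query string and using a parameterized query.

```python
import sqlite3

# In-memory demo database with one user table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name: str, password: str) -> bool:
    """Unsanitized input is pasted straight into the query string."""
    query = (f"SELECT * FROM users WHERE name = '{name}' "
             f"AND password = '{password}'")
    return db.execute(query).fetchone() is not None

def login_safe(name: str, password: str) -> bool:
    """Parameterized query: the driver treats input as data, not SQL."""
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return db.execute(query, (name, password)).fetchone() is not None

payload = "' OR '1'='1"  # a classic injection string
print(login_vulnerable("alice", payload))  # True  - the attacker gets in
print(login_safe("alice", payload))        # False - the injection fails
```

In the vulnerable version, the payload turns the WHERE clause into a condition that is true for every row, which is exactly the kind of unsanitized-input flaw a pentester looks for at this stage.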
Maintaining Access

The goal of this stage is to see whether the vulnerability can be used to achieve a persistent presence in the exploited system, long enough for a bad actor to gain in-depth access. The idea is to imitate advanced persistent threats, which often remain in a system for months in order to steal an organization's most sensitive data.
Analysis and Reporting

The penetration test results are then compiled into a report detailing:
- The specific vulnerabilities that were exploited
- The sensitive data that was accessed
- The amount of time the pen tester was able to remain in the system undetected
Standardization in Pen Testing
Pentesting has a standard, the Penetration Testing Execution Standard (PTES), which serves as a reference for its implementation and is divided into several stages. The first is Pre-engagement Interactions, or Planning. At this stage, the scope of the pentest, the time period, the legal documents (contracts), and the number of teams required must all be discussed, including whether or not employees are notified in advance that a pentest will take place.
So, penetration testing is clearly how you can probe any gaps in your network, so that you can close those loopholes before an unwanted party attacks through them.
A pentest is a starting point for preventing cyber attacks. If you are looking for a party that can perform the best penetration testing using a variety of methods, NetData is a solution you can rely on for penetration testing of the highest quality.