6.2 Conduct security control testing

  • Vulnerability assessment
  • Penetration testing
  • Log reviews
  • Synthetic transactions
  • Code review and testing
  • Misuse case testing
  • Test coverage analysis
  • Interface testing
  • Breach attack simulations
  • Compliance checks

Vulnerability Assessment

How do I describe a vulnerability so that we have a consistent and uniform description that everybody can understand?  We can use the Security Content Automation Protocol (SCAP), which allows different systems to share vulnerability data.

Components of SCAP

  • Common Vulnerabilities and Exposures (CVE)

    • What do we call the vulnerability?

  • Common Vulnerability Scoring System (CVSS)

    • How do we rank the vulnerabilities by severity?

  • Common Configuration Enumeration (CCE)

    • How do we name system configuration issues?

  • Common Platform Enumeration (CPE)

    • How do we name operating systems?

  • Extensible Configuration Checklist Description Format (XCCDF)

    • How do we specify security checklists?

  • Open Vulnerability and Assessment Language (OVAL)

    • How do we describe security test procedures?
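Two of the SCAP pieces above are easy to illustrate in code.  As a sketch (not part of SCAP tooling itself), the standard CVE naming format and the CVSS v3 qualitative severity ratings (None 0.0, Low 0.1–3.9, Medium 4.0–6.9, High 7.0–8.9, Critical 9.0–10.0) can be captured in a few lines of Python:

```python
import re

# CVE identifiers follow a documented format: CVE-<year>-<4+ digit sequence>
CVE_PATTERN = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_valid_cve_id(identifier: str) -> bool:
    """Check that a string matches the standard CVE naming format."""
    return bool(CVE_PATTERN.match(identifier))

def cvss_severity(score: float) -> str:
    """Map a CVSS v3 base score (0.0-10.0) to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(is_valid_cve_id("CVE-2021-44228"))  # True
print(cvss_severity(9.8))                 # Critical
```

This is why a scanner can say "Critical" about a finding from any vendor: the score-to-rating mapping is standardized.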

We might use a scanning tool to automate vulnerability scanning

  • It is more thorough, because it can check everything on our network

  • It is faster than a human

  • We can automatically generate reports once the scan is complete

  • We can schedule scans at night or at regular intervals

Network Discovery Scan

  • A scan to discover what is on our network

  • Scans all items on a network in a range of IP addresses

  • Can check for devices with a scan tool and see if they have open ports

  • TCP SYN Scanning

    • We send each device a SYN packet asking to open a new connection, watch for a SYN/ACK reply, and never complete the handshake (a “half-open” scan)

    • This is a much faster scan, but it requires raw-socket privileges on the scanning machine, and some devices or firewalls won’t respond to it

  • TCP Connect Scanning

    • We send each device a SYN and actually follow through with opening the full connection (completing the three-way handshake)

    • We can fall back to this when we lack the raw-socket privileges needed for a half-open SYN scan, because a full connect uses the ordinary operating system socket interface

  • Xmas Scanning

    • We send a packet that is flagged with FIN (saying to close the connection), PSH (immediately forward the packet to the application layer, don’t hold it in the buffer until more packets come in), and URG (urgent)

    • Normally, if we send a valid packet to an open port, the port will accept it, and if we send a packet to a closed port, it will be rejected.

    • When we send an Xmas packet to an open port, the open port will silently drop the packet due to the invalid flag combination.  When we send an Xmas packet to a closed port, the system will respond with an RST packet.  By evaluating which ports answer with RST packets, the hacker can tell which ports are closed and infer that the silent ones are open (or filtered).

We can use nmap to perform the scan.  nmap reports each port as being in one of these states:

  • Open – the port on the remote system is open.  That means some application on the remote system is accepting connection requests

  • Closed – the port on the remote system is closed.  That means that we have access to the port, but no applications on the remote system are actively listening for connection requests

  • Filtered – we don’t know if the port is open or closed because the firewall has blocked the request

  • Ideally, ports should be Open only when a legitimate application requires incoming connections, and Filtered otherwise

  • A hacker might use nmap to scan a system for vulnerabilities and see what services or applications are running on it
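A TCP connect scan can be sketched with Python’s standard socket module.  This toy only distinguishes open from closed; a real tool such as nmap also detects the filtered state, which needs more careful timeout handling:

```python
import socket

def connect_scan(host, ports, timeout=0.5):
    """A minimal TCP connect scan using the ordinary socket API.

    "open" means the three-way handshake completed (something is listening);
    "closed" means the host refused or the probe failed.  nmap would also
    report "filtered" when a firewall silently drops the probe.
    """
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                results[port] = "open" if s.connect_ex((host, port)) == 0 else "closed"
            except OSError:
                results[port] = "closed"
    return results

# Demo: scan a port we control.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
print(connect_scan("127.0.0.1", [port]))  # the listening port shows as "open"
listener.close()
```

Because it uses the ordinary socket interface, this style of scan needs no special privileges, which is exactly the TCP connect trade-off described above.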

Network Vulnerability Scan

  • This is a deeper scan than just seeing what ports are open

  • We also check for actual vulnerabilities (security holes) in the system

  • A Vulnerability Scan tool might contain a database of known vulnerabilities that it can scan against

    • False positive – we detect a vulnerability that doesn’t exist

    • False negative – we fail to detect an existing vulnerability

  • If we had valid credentials for the system, we could run an even deeper scan – an authenticated scan

  • We might scan the web or Wi-Fi for additional vulnerabilities

Web Application Scan – some things we can scan

  • SQL injection attempts

  • Scan of all web server applications

  • Scan of all web application code any time the code is changed

  • Database vulnerability scan
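To see what a web application scanner is probing for, here is a sketch of a SQL injection attempt against a toy login function; the table, credentials, and payload are invented for the demo, using Python’s built-in sqlite3:

```python
import sqlite3

# Build a throwaway in-memory database with one user table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(username: str, password: str) -> bool:
    """Vulnerable: builds the query by string concatenation."""
    query = ("SELECT COUNT(*) FROM users WHERE username = '" + username +
             "' AND password = '" + password + "'")
    return db.execute(query).fetchone()[0] > 0

def login_safe(username: str, password: str) -> bool:
    """Safe: uses parameterized queries, so input is treated as data."""
    query = "SELECT COUNT(*) FROM users WHERE username = ? AND password = ?"
    return db.execute(query, (username, password)).fetchone()[0] > 0

payload = "' OR '1'='1"   # classic injection payload a scanner would try
print(login_unsafe("alice", payload))  # True  -- authentication bypassed
print(login_safe("alice", payload))    # False -- payload treated as a literal
```

A scanner reports a vulnerability when the injected payload changes the application’s behavior, as it does for the concatenated query here.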

Vulnerability Workflow – what happens when we detect a vulnerability?

  • Detection – identify the vulnerability during a scan

  • Validation – once we detect the vulnerability, we need to confirm that it is real (not a false positive)

  • Remediation – we need to fix the vulnerability.  We might change the configuration, install a patch, or apply some other workaround.  We should give priority to the most severe vulnerabilities, especially if the chances of them being exploited are high.
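The prioritization step of the workflow can be sketched as sorting findings by severity and exploit likelihood; the CVE IDs, scores, and likelihood values below are invented placeholders:

```python
# Each finding carries a CVSS score and an exploit-likelihood estimate
# (all values here are made up for illustration).
findings = [
    {"id": "CVE-2023-0001", "cvss": 9.8, "exploit_likelihood": 0.9},
    {"id": "CVE-2023-0002", "cvss": 5.3, "exploit_likelihood": 0.2},
    {"id": "CVE-2023-0003", "cvss": 7.5, "exploit_likelihood": 0.7},
]

def remediation_order(findings):
    """Sort findings so the most severe, most likely to be exploited come first."""
    return sorted(findings,
                  key=lambda f: (f["cvss"], f["exploit_likelihood"]),
                  reverse=True)

for f in remediation_order(findings):
    print(f["id"], f["cvss"])
```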

Penetration Testing

In a penetration test, we are not just looking for vulnerabilities.  If we find a vulnerability, we also try to exploit it.  In a vulnerability scan, we notice that there is an “open door” to the system.  In a penetration test, we try to enter through the door and walk around inside.  We might also try to break in through a locked door to see if we can get through and/or if the alarm sounds.

The steps

  • Planning – we need to agree with the owner of the system that we are performing the penetration test.  Sometimes, security staff are not aware of our planned test (we might not want them to be aware because we are also testing their effectiveness), but we want to have explicit authorization to perform the test.  Tests can be illegal without agreement and can disrupt the existing systems.

  • Information gathering – we gather information about the target system so that we can plan what areas we will target

  • Vulnerability scanning – we check for weaknesses as before

  • Exploitation – we use tools to penetrate the system

  • Reporting – we report back the results of our test

  • There are several test types

    • A White Box test is when the tester has full knowledge of the inner workings of the system.  This is also known as a Known Environment test.  The tester attempts to penetrate any item that he believes is weak.  A white box tester will understand how data flows through the application and will be able to take advantage of all the different routes.

    • A Gray Box test is when the tester has some knowledge of the inner workings of the system.  This is also known as a Partially Known Environment test.  The tester interacts with the system just like a normal user but can also eliminate areas that are a waste of time to test.  A Gray Box test can be the most efficient form of testing because the tester can focus effort on the most likely attack points.

    • A Black Box test is when the tester has no knowledge of the inner workings of the system.  This is also known as an Unknown Environment test.  The tester interacts with the system just like a normal (or malicious) user, attempting different inputs to damage or infiltrate the system.  A Black Box test simulates the normal operation of the application or system but does not detect all vulnerabilities.

Software Testing

Why do we need to test software?

  • It has access to the operating system, and possibly direct access to the hardware

  • The software handles critical information, trade secrets, personal health information, credit card numbers, or even classified data.  What if the software has a security hole that causes it to leak data?

  • If the software stops working, the business might stop working.  We need to make sure that the software is reliable.

  • Code Review is a type of software review where we evaluate the source code

    • Can be a peer review

    • Other people review our code to make sure it does what it is supposed to; if it doesn’t, it goes back to the developer for rework

    • We might use the Fagan Inspection process when the code is critical

    • Fagan Inspection Steps

      • 1. Planning

      • 2. Overview

      • 3. Preparation

      • 4. Inspection

      • 5. Rework

      • 6. Follow-up

    • Otherwise, we just use peer review that involves another developer or a senior developer reviewing the code.  We might also use an automated tool before or instead of having a person review it.

  • Static Testing

    • We analyse the source code or the compiled application to detect flaws

    • We are not actually testing the software as it operates
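As one illustration of static testing, Python’s ast module can walk source code without ever running it and flag call sites that an analyser might consider risky; the list of risky calls here is an assumption for the demo:

```python
import ast

# Calls that a static analyser might flag as dangerous (illustrative list).
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str):
    """Walk the syntax tree of Python source and report risky call sites."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            hits.append((node.func.id, node.lineno))
    return hits

sample = "x = eval(user_input)\nprint(x)\n"
print(find_risky_calls(sample))  # [('eval', 1)]
```

Note that the sample code is only parsed, never executed – that is what makes the analysis static.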

  • Dynamic Testing

    • We check the software while it is actually running

    • That means we install it on hardware that is designed for the task

    • We might use testers who don’t know anything about the source code

    • We make transactions through the system

  • Fuzz Testing

    • Dynamic testing with many different inputs

    • We try to make the system crash by giving it inputs that it does not expect

    • Mutation Fuzzing – we take input values from the software and change them slightly

    • Generational Fuzzing – we use intelligent data models to create inputs

    • Fuzz testing is not thorough, but it can show us how the software behaves under stress
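A minimal mutation-fuzzing sketch: the parser under test, the seed input, and the mutation strategy below are all invented for illustration, but the loop shape (mutate a valid input, feed it in, count crashes) is the core of the technique:

```python
import random

def parse_record(data: bytes) -> str:
    """A toy parser under test: expects 'NAME:VALUE' in ASCII."""
    text = data.decode("ascii")          # crashes on non-ASCII bytes
    name, value = text.split(":")        # crashes if ':' count != 1
    return name.strip()

def mutate(seed: bytes, flips: int = 3) -> bytes:
    """Mutation fuzzing: start from valid input and flip a few random bytes."""
    data = bytearray(seed)
    for _ in range(flips):
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
    return bytes(data)

seed = b"temperature:72"
crashes = 0
random.seed(1)                 # fixed seed so the run is repeatable
for _ in range(1000):
    try:
        parse_record(mutate(seed))
    except Exception:
        crashes += 1           # each crash is a potential robustness bug
print(f"{crashes} of 1000 mutated inputs crashed the parser")
```

Most mutated inputs crash this fragile parser, which is exactly the signal a fuzzer hunts for; a robust parser would reject malformed input gracefully instead of raising.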

  • Interface Testing

    • We are testing the software to make sure that it works well with other software programs

    • Many times, an organization will use several different programs, and they must exchange data.  We need to make sure that each program communicates properly

    • Application Programming Interface (API) – we should test the APIs to make sure that they function

    • User Interface (UI) – we might have a GUI or a command-line.  We need to make sure that the user interface displays the correct information, and that it functions, and that it does not display information that it shouldn’t (based on the user’s security level for example).

      We should also make sure that it is user friendly.

    • Physical Interface – we should verify that the physical interface also functions.  Physical interfaces are more likely to be found on medical and industrial equipment.
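An interface test between two hypothetical components that exchange JSON might look like the sketch below; both functions and the field names are assumptions for the demo:

```python
import json

# Two hypothetical components that exchange order records as JSON.
def export_order(order_id: int, amount_cents: int) -> str:
    """Component A: serialize an order for transmission."""
    return json.dumps({"order_id": order_id, "amount_cents": amount_cents})

def import_order(payload: str) -> dict:
    """Component B: parse and validate an incoming order."""
    order = json.loads(payload)
    for field in ("order_id", "amount_cents"):
        if field not in order:
            raise ValueError(f"missing required field: {field}")
    return order

# Interface test: whatever A exports, B must accept unchanged.
roundtrip = import_order(export_order(17, 499))
assert roundtrip == {"order_id": 17, "amount_cents": 499}
print("interface round-trip OK")
```

The point of the test is the contract between the two programs, not either program in isolation: if A renames a field, the round-trip check fails immediately.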

  • Misuse Testing

    • People will try to misuse or abuse the software

    • We want to try and see if they can
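A misuse case test can be sketched as deliberately feeding hostile or careless inputs to a function and confirming each one is rejected; the withdraw function and its amounts are invented for the demo:

```python
def withdraw(balance_cents: int, amount_cents: int) -> int:
    """Hypothetical account function: return the new balance after a withdrawal."""
    if amount_cents <= 0:
        raise ValueError("withdrawal amount must be positive")
    if amount_cents > balance_cents:
        raise ValueError("insufficient funds")
    return balance_cents - amount_cents

# Misuse cases: inputs a hostile or careless user might try.
misuse_cases = [-100, 0, 10_001]   # negative, zero, and overdraft attempts
for amount in misuse_cases:
    try:
        withdraw(10_000, amount)
        print(f"FAIL: misuse case {amount} was accepted")
    except ValueError:
        print(f"OK: misuse case {amount} was rejected")
```

Where normal testing asks “does it work when used correctly?”, misuse testing asks “does it fail safely when used incorrectly?” – e.g. a negative withdrawal that would otherwise credit the account.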

  • Test Coverage

    • We are not able to fully test any application because there are too many possible inputs and ways for it to be attacked

    • The cost of testing might be more than the benefit we derive

    • We perform a test coverage analysis to figure out how well we have tested

    • Test Coverage = Number of use cases tested (number of inputs tested) / total number of cases (total number of possible inputs)

    • There are possibly an infinite number of total cases, so we need more practical ways to measure coverage

      • Branch Coverage – have we tested every “if” statement under all possible conditions (if, if else, and else)

      • Condition Coverage – have we tested every logical test?

      • Function Coverage – have we tested every function?

      • Loop Coverage – has every loop been tested (while, for, do while) and under every condition (run once, run several times, and never run)?

      • Statement Coverage – has every line of code been executed?
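A toy illustration of the coverage formula applied to branch coverage.  Real tools (such as coverage.py) instrument the code; this sketch simply uses each branch’s distinct return value as a marker:

```python
def classify(score: int) -> str:
    """Function under test with three branches."""
    if score >= 90:
        return "A"
    elif score >= 60:
        return "pass"
    else:
        return "fail"

def branch_coverage(func, inputs, expected_branches):
    """Run each input and report the fraction of known branches exercised."""
    seen = {func(i) for i in inputs}   # each return value marks one branch
    return len(seen & expected_branches) / len(expected_branches)

branches = {"A", "pass", "fail"}
print(branch_coverage(classify, [95], branches))          # one of three branches
print(branch_coverage(classify, [95, 70, 10], branches))  # 1.0
```

One input exercises a single branch (coverage 1/3); a test set that hits all three conditions reaches full branch coverage.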

Website Monitoring

We might want to monitor a website to see if it stays online.  There are several types of monitoring

  • Passive Monitoring – we capture the actual network traffic between users and the server so that we can see what is happening.  This is also called Real User Monitoring (RUM) because it records the sessions of real users.  Passive monitoring captures real issues, but only after they have taken place.

  • Synthetic Monitoring – also called active monitoring.  We run scripted, artificial transactions against the website, exercising it the way a user would.  We might be able to detect issues before real users encounter them, but we are not capturing real user data.

We might use both techniques at the same time.
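A synthetic transaction can be sketched with the standard library.  Here a throwaway local web server stands in for the real site, and one scripted check fetches a page and verifies the response, exactly what a synthetic monitor would do on a schedule:

```python
import threading
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"OK")
    def log_message(self, *args):     # keep the demo quiet
        pass

# Stand-in for the production site: a local server on an OS-chosen port.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def synthetic_check(url: str, timeout: float = 2.0) -> bool:
    """One synthetic transaction: fetch the page and verify the response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200 and resp.read() == b"OK"
    except OSError:
        return False   # site unreachable -- a monitor would raise an alert

healthy = synthetic_check(url)
print(healthy)   # True while the server is up
server.shutdown()
```

Run on a timer, a check like this surfaces an outage before a real user reports it, which is the whole value proposition of active monitoring.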