8.3 Assess the effectiveness of software security

  • Auditing and logging of changes
  • Risk analysis and mitigation

An audit asks what is normal and what is abnormal.  We run an audit to make sure that the rules are being followed.  As discussed earlier, there are specific audit frameworks that must be followed.  An audit must be planned; we can’t just jump in and look at random items, because the result won’t be thorough.  We might conduct an audit at random, on a regular schedule, or after an incident has taken place.

An audit is useless if the system is not capable of logging, or if logging is disabled; in that case, the only finding the audit can produce is that the logging rules are not being followed.  One simple check is to make some changes and see whether they show up in the logs.

Can we identify the following?

  • What happened?

  • Who did it?

  • When did they do it?

  • Where did they do it (which system)?

  • How did they do it?
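A log entry that answers all five questions might look like the following sketch.  The field names and sample values are illustrative, not a standard schema:

```python
import json
from datetime import datetime, timezone

def audit_record(what, who, where, how):
    """Build a log entry answering what, who, when, where, and how."""
    return {
        "what": what,                                    # the action taken
        "who": who,                                      # the user account
        "when": datetime.now(timezone.utc).isoformat(),  # UTC timestamp
        "where": where,                                  # the system involved
        "how": how,                                      # the method or tool used
    }

entry = json.dumps(audit_record(
    what="deleted customer record 4417",
    who="jsmith",
    where="db-server-02",
    how="admin web console",
))
print(entry)
```

If any of these fields cannot be reconstructed from the logs, the audit has found a gap in the logging rules.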

When we develop an audit of a software program, we must consider all of the phases of the software development life cycle:

  • Requirements

    • Did we document the requirements correctly?

    • Were the requirements approved by management?

    • Does each requirement have a legitimate need?

    • Have we established a clear budget and schedule for the software development?

    • Is security a clear part of the requirements?

    • Do any new features create additional attack surfaces?

    • Do any new features introduce additional regulatory requirements?

  • Design

    • Have we completed a formal risk assessment for the design?

    • Did we create a design that takes into account all of the requirements?

    • Does our design comply with applicable laws (data sovereignty, privacy, etc.)?

    • Does our design force us to rely on a specific vendor for hardware, software, libraries, or other components?  Some scenarios include

      • Buying proprietary hardware like a fire alarm that can only be serviced by a specific manufacturer.

      • Storing data in a Microsoft SQL Server database, which uses a proprietary storage format.  This makes it difficult to move our data to another platform.

      • Relying on special products maintained by a cloud service provider such as Salesforce or Microsoft Azure Active Directory.

  • Implementation

    • Have we conducted penetration tests against the application and security reviews of the source code?

    • Have we evaluated the security and authenticity of each library included in our application?

    • Have we checked the source code against databases of common vulnerabilities and exposures?

    • Have we implemented all of the requirements in the source code?

    • Have we followed the design when writing the source code?

    • Does the software store data securely, or is the data subject to unauthorized modification?

    • Do unauthorized individuals have access to the source code?
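Evaluating the authenticity of a library can be as simple as comparing its cryptographic digest against the value published by its maintainer.  A minimal sketch, using Python's `hashlib` (the file name and contents here are stand-ins; a real check would use the digest from the vendor's release page):

```python
import hashlib

def verify_library(path, expected_sha256):
    """Return True if the file's SHA-256 digest matches the published value.
    A mismatch means the file was corrupted or tampered with and must not
    be linked into the application."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Demonstration with a stand-in file (path and digest are hypothetical).
with open("example-lib.bin", "wb") as f:
    f.write(b"library contents")
known_good = hashlib.sha256(b"library contents").hexdigest()
print(verify_library("example-lib.bin", known_good))  # True
```

The same idea extends to verifying digital signatures on packages, which additionally proves who published the library.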

  • Verification

    • Does the software work the way that it is supposed to?

    • Does the software incorporate all of the requirements?

    • Have we tested the software the way that a legitimate user would use it?

    • Have we tested the software the way that a malicious user would use it?

  • Operation and Maintenance

    • Does the software function the way that it is supposed to?

    • Does the software store data the way that it is supposed to?

    • Does the software produce logs, and does it log all of the data that is required?

    • Do end users receive cybersecurity awareness training?

    • Are we keeping track of new patches and vulnerabilities?  Are we regularly patching the application?
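Keeping track of patches and vulnerabilities starts with comparing an inventory of installed components against published advisories.  A minimal sketch follows; the component names, versions, and advisory data are made up, and a real program would pull advisories from a live feed such as the NVD:

```python
# Inventory of installed components (illustrative names and versions).
installed = {"openssl": "1.1.1k", "zlib": "1.2.11", "libxml2": "2.9.14"}

# Versions with known vulnerabilities, per published advisories (made up).
advisories = {
    "openssl": {"1.1.1k", "1.1.1j"},
    "zlib": {"1.2.11"},
}

# Anything in the inventory that appears in an advisory needs patching.
needs_patching = [
    name for name, version in installed.items()
    if version in advisories.get(name, set())
]
print(sorted(needs_patching))  # ['openssl', 'zlib']
```

Running a check like this on a schedule turns "are we keeping track?" from a hope into a verifiable process.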

Risk Analysis

We must produce a risk analysis for the software.  There are four areas that we can look at.

  • Data

    • The data must travel through the software and reside on various servers and storage appliances.  We must make sure that these devices are physically and logically secure.

    • We must make sure that the data is encrypted at rest, in transit, and in use (if possible).  We must also make sure that the level of encryption is high enough to meet the requirements of applicable laws or regulations.

    • We must ensure that unauthorized individuals do not have an opportunity to view or modify the data.
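One concrete logical-security check is whether the files holding the data are readable or writable by accounts other than the owner.  A POSIX-style sketch in Python (the file name is hypothetical):

```python
import os
import stat

def world_accessible(path):
    """Return True if 'other' users can read or write the file, i.e. if
    unauthorized individuals could view or modify the data."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IROTH | stat.S_IWOTH))

# Create a data file, then restrict it to the owner only.
with open("customer-data.db", "w") as f:   # hypothetical data file
    f.write("sensitive records")
os.chmod("customer-data.db", 0o600)        # owner read/write only
print(world_accessible("customer-data.db"))  # False
```

Checks like this belong in the audit alongside the encryption requirements above: encryption protects the data's content, while permissions limit who can reach it at all.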

  • New Technology

    • New technology has the potential to provide us with new capabilities, but it is also risky because we do not yet know how to evaluate it.  It may contain many vulnerabilities that we are unaware of.

    • The people who implement the technology must understand how to use it and how to maintain it.  They must also understand how to secure it.

    • If the manufacturer of the technology is new, there is a risk that they could go out of business, in which case we will not have any support.

  • Systems

    • The physical hardware that our software runs on could fail.

    • We must ensure that the hardware is backed up in accordance with the standards outlined earlier.  The software, operating system, configuration, and user data must all be backed up.

    • If our software runs on a legacy system, and the legacy system fails, there is a risk that we will not be able to repair or replace that system.

    • If our systems run in the cloud, there is a risk that the cloud service provider suffers a data breach or an outage.

  • Code

    • Every change to the code introduces risk.  It is impossible to detect every possible vulnerability, because new ones are constantly being discovered.  Code that is secure today, but that relies on an operating system API, driver, or library, may become insecure tomorrow if a vulnerability is discovered or introduced in that API, driver, or library.

    • Code is also intellectual property.  It may be protected by patents, copyrights, and/or trade secret law.  The loss or unauthorized disclosure of the code could be catastrophic.