ISC2 CSSLP Exam Questions

1.

Which of the following mechanisms for protecting sensitive data from exposure relies on a lookup table stored in a secure environment?

  • Tokenization

  • Data minimization

  • Data masking

  • Anonymization

Correct answer: Tokenization

Some methods by which organizations can protect data from unauthorized access and disclosure include:

  • Data Minimization: Data minimization involves collecting, processing, and storing the minimum data required. It is the most effective data protection mechanism because an organization can’t breach or misuse data that it doesn’t have.
  • Data Masking: Data masking involves hiding part or all of the sensitive data, such as replacing most of a credit card number with asterisks on a receipt.
  • Tokenization: Tokenization replaces sensitive data with a random token value in insecure locations. The original values can be looked up as needed in a table kept in a secure environment (see the sketch after this list).
  • Anonymization: Anonymization involves removing any data from a record that can be used to uniquely identify an individual. This is difficult as even combinations of non-identifying characteristics can be combined to uniquely identify an individual.
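
Below is a minimal Python sketch of the tokenization approach described above. The in-memory dictionary stands in for the lookup table that would, in practice, live in a hardened, access-controlled environment; the class and method names are illustrative, not taken from any specific product.

```python
import secrets

class TokenVault:
    """Illustrative token vault mapping random tokens back to original values."""

    def __init__(self):
        self._vault = {}  # token -> original sensitive value (the lookup table)

    def tokenize(self, sensitive_value: str) -> str:
        # The token is random, with no mathematical relationship to the data,
        # so it can be stored in insecure locations without exposing anything.
        token = secrets.token_hex(16)
        self._vault[token] = sensitive_value
        return token

    def detokenize(self, token: str) -> str:
        # Only code with access to the secure vault can recover the real value.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")  # token is safe to store elsewhere
print(vault.detokenize(token))                 # lookup restores the original value
```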

2.

Which of the following is the term for ensuring that software requirements are met?

  • Verification

  • Validation

  • Qualification

  • Acceptance

Correct answer: Verification

Verification ensures that the software meets its specified requirements (building the product right).

Validation ensures that the requirements themselves are correct and complete (building the right product).

3.

Risk to software itself is classified as which of the following?

  • Technical risk

  • Business risk

  • Exploitation risk

  • Inherent risk

Correct answer: Technical risk

Technical risk is the risk to the software itself posed by attacks against it.

Business risk is the risk posed to the business by attacks against software and the resulting loss of functionality.

4.

Which of the following can provide insight into how well an organization works to fix issues with contractual, legal, and regulatory compliance?

  • Audit Reports

  • Past Incident Reports

  • Policies and Procedures

  • Security Architecture Documentation

Correct answer: Audit Reports

Some considerations when evaluating an organization’s security track record include:

  • Past Incidents: How has the organization handled past security incidents?
  • Audit Reports: Are there repeated audit findings that indicate that problems don’t get fixed?
  • Policies and Procedures: What policies and procedures does the organization have in place?

5.

Unit testing is an example of which of the following types of testing?

  • White-box

  • Black-box

  • Gray-box

  • Red-box

Correct answer: White-box

Application security testing can be performed in a few different ways, including:

  • White-Box: White-box testing is performed with knowledge of an application’s internals. It can achieve higher test coverage than black-box testing. Unit testing is an example of white-box testing (a minimal example follows this explanation).
  • Black-Box: Black-box testing is performed without knowledge of an application’s internals, sending inputs to the application and observing the responses. It may identify the vulnerabilities most likely to be exploited by an attacker. Penetration testing is an example of black-box testing.
  • Gray-Box: Gray-box testing sits between white-box and black-box testing. For example, a tester may be granted the same level of knowledge and access as an advanced user but not access to system documentation.

Red-box testing is a fabricated term.
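
As a concrete illustration of unit testing as white-box testing, the sketch below uses Python’s built-in unittest module to exercise a hypothetical validate_password function; the tests are written with direct knowledge of the function’s internal rules.

```python
import unittest

def validate_password(password: str) -> bool:
    """Hypothetical function under test: require 12+ characters and a digit."""
    return len(password) >= 12 and any(ch.isdigit() for ch in password)

class ValidatePasswordTests(unittest.TestCase):
    # White-box tests: each case targets a specific internal rule.
    def test_rejects_short_password(self):
        self.assertFalse(validate_password("Ab1"))

    def test_rejects_password_without_digit(self):
        self.assertFalse(validate_password("NoDigitsHereAtAll"))

    def test_accepts_compliant_password(self):
        self.assertTrue(validate_password("CorrectHorse42Battery"))

if __name__ == "__main__":
    unittest.main()
```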

6.

Which of the following is NOT a method of risk ranking?

  • STRIDE

  • Delphi

  • Average

  • PxI

Correct answer: STRIDE

Methods of risk ranking include:

  • Delphi Ranking: In Delphi ranking, each team member independently and privately provides a ranking (Minimal, Severe, or Critical) for each threat. This provides insight into the consensus on the severity of various risks.
  • Average Ranking: Average ranking assigns numeric values to each risk category and averages the results. One common risk ranking methodology is DREAD.
  • Probability x Impact (PxI): PxI ranking multiplies the probability that a risk will materialize by the impact if it does (a worked example follows below).

STRIDE (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege) is a model for categorizing types of security threats, not a method of ranking risks.
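
As a worked illustration of the PxI and average ranking methods above, with made-up probability and impact figures:

```python
# Probability x Impact (PxI): hypothetical threats scored with a 0-1 probability
# and a 1-10 impact; a larger product means a higher-ranked risk.
threats = {
    "SQL injection": {"probability": 0.5, "impact": 9},
    "Log tampering": {"probability": 0.25, "impact": 8},
}
for name, t in threats.items():
    print(name, t["probability"] * t["impact"])  # 4.5 and 2.0

# Average ranking: DREAD-style category scores (1-10) averaged per threat.
dread = {"Damage": 8, "Reproducibility": 6, "Exploitability": 7,
         "Affected users": 9, "Discoverability": 5}
print(sum(dread.values()) / len(dread))  # 7.0
```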

7.

What is the term for how an application addresses unexpected events?

  • Error handling

  • Exception management

  • Error mitigation

  • Exception handling

Correct answer: Error handling

Error handling is how an application responds when something unexpected occurs.

Exception management is writing code to implement error handling.
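
A minimal Python sketch of the distinction: the try/except block below is exception-management code written to implement the application’s error-handling behavior (how it responds when parsing fails unexpectedly).

```python
import json
import logging

def load_config(raw: str) -> dict:
    """Return parsed configuration, falling back to a safe default on bad input."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        # Error-handling policy: log the problem and fail to a known-good state
        # rather than crashing or leaking internal details to the user.
        logging.warning("Invalid configuration supplied: %s", exc)
        return {"debug": False}

print(load_config('{"debug": true}'))  # {'debug': True}
print(load_config("not valid json"))   # {'debug': False}
```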

8.

Which of the following is MOST closely linked to user authentication?

  • Missing Defense Functions

  • Buffer Overflows

  • Canonical Form

  • Output Validation Failures

Correct answer: Missing Defense Functions

Poor input validation is the cause of many of the most common vulnerabilities. Some common errors include:

  • Buffer Overflows: Buffer overflow vulnerabilities occur when a program attempts to write more data to a memory location than fits in the allocated space. Buffer overflows can be exploited to overwrite critical data stored in memory or execute malicious code.
  • Canonical Form: Data can be encoded in various ways, such as URL or Base64 encoding, and software commonly converts it to canonical form before processing it. If input validation inspects the data before it is canonicalized, an attacker can use an alternative encoding to slip malicious input past those checks.
  • Missing Defense Functions: User authentication and authorization functions help to control access to privileged functionality and sensitive data. If these defenses are missing or can be bypassed, it places application security at risk.
  • Output Validation Failures: The output from one function or application may be used as input by another. Application output should also be sanity-checked for errors that could cause problems down the line.
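
To illustrate the missing defense functions point that this question turns on, here is a minimal sketch (with hypothetical names) of an authorization check guarding a privileged function; omitting or allowing a bypass of such a check is exactly the kind of gap described above.

```python
from functools import wraps

class AuthorizationError(Exception):
    pass

def require_role(role):
    """Defense function: verify the caller's role before privileged actions."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", []):
                raise AuthorizationError(f"{user.get('name')} lacks role '{role}'")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_account(user, account_id):
    return f"account {account_id} deleted by {user['name']}"

print(delete_account({"name": "alice", "roles": ["admin"]}, 42))
# delete_account({"name": "bob", "roles": ["viewer"]}, 42) raises AuthorizationError
```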

9.

Which of the following organizations develops open standards relevant to information security, such as SAML and AVDL?

  • OASIS

  • OWASP

  • NIST

  • ISO

Correct answer: OASIS

The Organization for the Advancement of Structured Information Standards (OASIS) develops open standards for information security. Some relevant OASIS standards include:

  • Application Vulnerability Description Language (AVDL)
  • Security Assertion Markup Language (SAML)
  • eXtensible Access Control Markup Language (XACML)
  • Key Management Interoperability Protocol (KMIP) Specification
  • Universal Description, Discovery, and Integration (UDDI)
  • Web Services (WS-*) Security

10.

At which stage of the information lifecycle is classification a significant concern?

  • Generation

  • Retention

  • Disposal

  • Usage

Correct answer: Generation

Information Lifecycle Management (ILM) addresses various aspects of data management, including security. The information lifecycle has three main stages:

  • Generation: When data is created, it should be appropriately classified and stored.
  • Retention: While data is needed and in active use, it should be encrypted while at rest and in transit and be protected by access controls.
  • Disposal: When data is no longer needed, it should be disposed of securely.

Usage is not a stage of the information lifecycle.

11.

Which of the following aspects of trusted computing is used to create a secure root of trust for a computer?

  • TPM

  • TCB

  • TPB

  • TMB

Correct answer: TPM

A Trusted Platform Module (TPM) is a dedicated hardware component designed to be tamper-resistant. It holds cryptographic keys that cannot be extracted from it and acts as a secure root of trust for the system.

The trusted computing base (TCB) is the set of components of a computer (hardware, firmware, and software) on which its security depends. Components outside the TCB can misbehave without violating the system’s security policy.

TPB and TMB are not acronyms associated with trusted computing.

12.

Which of the following defines which vulnerabilities MUST be fixed in software before release?

  • Bug bar

  • Bug threshold

  • Criticality threshold

  • Criticality bar

Correct answer: Bug bar

A bug bar defines the severity threshold at or above which a bug must be fixed before release (e.g., all critical bugs).
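
A minimal sketch of how a bug bar might be enforced in a release pipeline; the severity scale and threshold are illustrative, not from any particular tool.

```python
# Illustrative bug bar: any open bug at or above the bar blocks the release.
SEVERITY_ORDER = ["low", "medium", "high", "critical"]
BUG_BAR = "critical"

open_bugs = [
    {"id": 101, "severity": "medium"},
    {"id": 102, "severity": "critical"},
]

def release_blockers(bugs, bar=BUG_BAR):
    threshold = SEVERITY_ORDER.index(bar)
    return [b for b in bugs if SEVERITY_ORDER.index(b["severity"]) >= threshold]

blockers = release_blockers(open_bugs)
if blockers:
    print(f"Release blocked by {len(blockers)} bug(s) at or above the bug bar")
```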

13.

An organization is concerned about leaks of confidential data to systems that are not cleared for it. It should focus on developing rules that prevent which of the following?

  • Write-down

  • Read-up

  • Write-up

  • Read-down

Correct answer: Write-down

Bell-LaPadula is a confidentiality protection model that combines attributes of Mandatory Access Control (MAC) and Discretionary Access Control (DAC). Its Simple Security Rule prevents reading data at a higher level of classification ("read up"), while its * property prevents writing data to a system with a lower classification level ("write down").

Biba is an integrity model designed to protect higher-integrity, more trustworthy data from being corrupted by lower-integrity data. Its no write-up rule prevents a subject from writing data to an object at a higher integrity level. Its second rule concerns reading down: in the low-water-mark variant, a subject that reads or processes data from a lower integrity level has its own integrity level lowered as a result.
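
A minimal sketch of the Bell-LaPadula rules described above, using an illustrative ordering of classification levels:

```python
# Illustrative classification levels, ordered lowest to highest.
LEVELS = ["public", "confidential", "secret", "top_secret"]

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple Security Rule: no read up (the subject must dominate the object)."""
    return LEVELS.index(subject_level) >= LEVELS.index(object_level)

def can_write(subject_level: str, object_level: str) -> bool:
    """* property: no write down (the object must dominate the subject)."""
    return LEVELS.index(object_level) >= LEVELS.index(subject_level)

print(can_read("secret", "top_secret"))    # False: reading up is blocked
print(can_write("secret", "confidential")) # False: writing down (a leak) is blocked
print(can_write("confidential", "secret")) # True: writing up is permitted
```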

14.

Which of the following can be exploited to allow an attacker to bypass input validation and sanitization checks?

  • Canonical Form

  • Buffer Overflow

  • Missing Defense Functions

  • Output Validation Failures

Correct answer: Canonical Form

Poor input validation is the cause of many of the most common vulnerabilities. Some common errors include:

  • Buffer Overflows: Buffer overflow vulnerabilities occur when a program attempts to write more data to a memory location than fits in the allocated space. Buffer overflows can be exploited to overwrite critical data stored in memory or execute malicious code.
  • Canonical Form: Data can be encoded in various ways, such as URL or Base64 encoding, and software commonly converts it to canonical form before processing it. If input validation inspects the data before it is canonicalized, an attacker can use an alternative encoding to slip malicious input past those checks (see the sketch after this list).
  • Missing Defense Functions: User authentication and authorization functions help to control access to privileged functionality and sensitive data. If these defenses are missing or can be bypassed, it places application security at risk.
  • Output Validation Failures: The output from one function or application may be used as input by another. Application output should also be sanity-checked for errors that could cause problems down the line.
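
To make the canonicalization issue above concrete, the sketch below shows a path check that runs on the raw, still-encoded input and therefore misses an encoded traversal sequence; decoding to canonical form first closes the gap.

```python
from urllib.parse import unquote

def naive_check(path: str) -> bool:
    # Flawed: validation runs before the data is canonicalized.
    return ".." not in path

def safer_check(path: str) -> bool:
    # Decode to canonical form first, then validate.
    return ".." not in unquote(path)

attack = "%2e%2e/%2e%2e/etc/passwd"      # URL-encoded "../../etc/passwd"
print(naive_check(attack))   # True  -> the encoded traversal slips past the check
print(safer_check(attack))   # False -> caught once the input is canonicalized
```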

15.

Which of the following testing techniques is MOST likely to cause an application to crash?

  • Fuzzing

  • Scanning

  • Penetration testing

  • Simulations

Correct answer: Fuzzing

Software testers may use various techniques to identify potential issues in an application, including:

  • Fuzzing: Fuzzing involves sending malformed and invalid inputs to an application in an attempt to trigger an error. Errors could indicate issues with an application’s logic, and crashes could highlight a flaw in error handling (a minimal sketch follows this list).
  • Penetration Testing: Penetration testing is a human-driven activity in which pen testers duplicate the tools and techniques of real cybercriminals to identify the issues and flaws that an attacker is likely to use.
  • Scanning: Scanners automatically interact with an application to learn information or identify vulnerabilities. For example, network scanners can identify active hosts on a network and the network-connected services that they run, while OS fingerprinting scanners try to identify the operating system a host is running. Vulnerability scanners look for vulnerabilities in an application based on various lists (OWASP, CVEs, PCI DSS, etc.).
  • Simulations: Simulations involve performing testing within a simulated environment that resembles the production environment. Simulation testing can help with identifying configuration issues, usability problems, and similar issues before putting an app into production.
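
A minimal sketch of the fuzzing idea referenced in the first bullet: random byte strings are fed to a parsing routine, and any exception other than the expected, handled ones is recorded as a potential crash or error-handling flaw. The target function is a stand-in for real application code.

```python
import json
import random

def parse_message(data: bytes) -> dict:
    """Stand-in for the application code under test."""
    return json.loads(data.decode("utf-8"))

random.seed(0)
crashes = []
for _ in range(1000):
    fuzz_input = bytes(random.randrange(256) for _ in range(random.randrange(1, 64)))
    try:
        parse_message(fuzz_input)
    except (ValueError, UnicodeDecodeError):
        pass                      # expected, handled error paths
    except Exception as exc:      # anything else points to an error-handling flaw
        crashes.append((fuzz_input, exc))

print(f"{len(crashes)} unexpected crashes out of 1000 fuzzed inputs")
```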

16.

Which of the following input validation errors is commonly used to achieve malicious code execution?

  • Buffer Overflow

  • Canonical Form

  • Missing Defense Functions

  • Output Validation Failures

Correct answer: Buffer Overflow

Poor input validation is the cause of many of the most common vulnerabilities. Some common errors include:

  • Buffer Overflows: Buffer overflow vulnerabilities occur when a program attempts to write more data to a memory location than fits in the allocated space. Buffer overflows can be exploited to overwrite critical data stored in memory or execute malicious code.
  • Canonical Form: Data can be encoded in various ways, such as URL or Base64 encoding, and software commonly converts it to canonical form before processing it. If input validation inspects the data before it is canonicalized, an attacker can use an alternative encoding to slip malicious input past those checks.
  • Missing Defense Functions: User authentication and authorization functions help to control access to privileged functionality and sensitive data. If these defenses are missing or can be bypassed, it places application security at risk.
  • Output Validation Failures: The output from one function or application may be used as input by another. Application output should also be sanity-checked for errors that could cause problems down the line.

17.

An attacker manages to slip in a malicious commit that adds a backdoor to an application. This is a failure of which of the following?

  • Code Repository Security

  • Build Environment Security 

  • Cryptographically Hashed, Digitally Signed Components

  • Secure Transfer

Correct answer: Code Repository Security

Ensuring the authenticity and integrity of third-party code and components is essential to protecting against supply chain attacks where malicious or vulnerable functionality is inserted by an attacker with access to a vendor/supplier’s systems. Steps that organizations can take include:

  • Secure Transfer: Software should be transferred over secure channels (e.g., TLS-encrypted) and should be digitally signed to ensure authenticity and integrity.
  • System Sharing/Interconnections: Organizations often have direct connections to third-party systems, such as cloud-hosted infrastructure. Risks of these connections that should be addressed include attacks across this connection (in either direction) and loss of availability of remote systems.
  • Code Repository Security: Code repositories should be protected against unauthorized and potentially malicious modifications to code. Code should only be added after it is fully scanned, and records of commit histories should be protected against tampering.
  • Build Environment Security: With DevOps, build environments involve continuous integration, delivery, and deployment, where frequent small changes are made to code due to internal or third-party code updates. The build pipeline should be secured to ensure that it can’t be tampered with and that any issues (such as vulnerabilities) cause a failed build rather than allowing malicious or vulnerable code into production.
  • Cryptographically Hashed, Digitally Signed Components: Digital signatures ensure the authenticity and integrity of the signed data. Requiring third-party components to be digitally signed whenever possible helps to verify the correctness of this external code.
  • Right to Audit: An organization may impose requirements on third-party suppliers as part of its risk management procedures. This should include the right to audit to ensure that these requirements are being followed.
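
As an illustration of the cryptographically hashed components point (the hashing half only), the sketch below verifies a downloaded third-party component against a digest published by the supplier; the file name and expected digest are hypothetical.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: the expected digest should come from a signed manifest or
# signed release notes, not from the same (possibly compromised) download site.
EXPECTED = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

if sha256_of("third_party_component.tar.gz") != EXPECTED:
    raise RuntimeError("Component hash mismatch: possible tampering")
```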

18.

Which of the following anti-tampering strategies helps to protect against malicious code being injected into an application during the development process?

  • Version control

  • Code signing

  • Obfuscation

  • Encryption

Correct answer: Version control

Anti-tampering solutions are designed to protect software against malicious modifications. Some common techniques include:

  • Code Signing: Valid digital signatures can only be created with knowledge of the appropriate private key. Signing the application's code ensures that it is authentic and has not been modified since the signature was generated.
  • Version Control/Revision Control: Version control systems (like git) record every change to software. These systems allow comments to be applied to commits, limitations on who can commit, comparisons between versions, and reversion to a previous version of a file or release.
  • Obfuscation: Code obfuscation is designed to make production code more difficult to decompile and understand. Obfuscation can help to protect against tampering because it makes it more difficult to determine where and how to change the code.

Encryption is not a common anti-tampering solution because code must be decrypted before it can be executed.

19.

Which of the following regulations is focused on preventing fraud?

  • SOX

  • GLBA

  • HIPAA

  • HITECH

Correct answer: SOX

The Sarbanes-Oxley Act of 2002 (SOX) is an anti-fraud regulation developed in response to corporate accounting scandals. Publicly traded companies are required to have integrity protections for accounting data so the accuracy of reported financial data can be verified.

The Gramm-Leach-Bliley Act (GLBA), also known as the Financial Modernization Act of 1999, is intended to protect personal financial information (PFI).

The Health Insurance Portability and Accountability Act (HIPAA) is a U.S. regulation that covers protected health information (PHI). The Health Information Technology for Economic and Clinical Health (HITECH) Act is a related law governing the use of electronic health records.

20.

An organization allows all users to have administrator-level access on their own computers to ensure that they can do their jobs. This is a failure of:

  • Least privilege

  • Psychological acceptability

  • Separation of duties

  • Fail secure

Correct answer: Least privilege

The principle of least privilege states that users, applications, etc. should only have the access and privileges needed to do their jobs. Most users don't need admin privileges, so this practice creates security risks.

Separation of duties refers to the fact that critical processes (such as approving payments) should be split across multiple people to protect against fraud, social engineering, etc.

Fail secure means that a system should default to a secure state if something goes wrong, rather than an insecure one. For example, magnetic locks on a secure area should be locked if they lose power.

Psychological acceptability indicates that users are more likely to comply with security requirements that are easy to use and transparent.