ISC2 CSSLP Exam Questions
101.
Which of the following is NOT one of the test/audit areas of the OSSTMM?
- Application
- Data Networks
- Wireless
- Human
Correct answer: Application
The Open Source Security Testing Methodology Manual (OSSTMM) was developed by the Institute for Security and Open Methodologies (ISECOM) and uses analytical metrics to assess operational security. It is organized into five sections, which serve as its test/audit areas:
- Data Networks: Information security controls
- Telecommunication: Telecommunications networks
- Wireless: Wireless networks and mobile devices
- Physical: Physical access controls, building security, and perimeter controls
- Human: Social engineering controls, user security awareness training, and end-user security controls
102.
Which type of software requirements deals with how the software will fit into an organization's larger IT infrastructure?
- Non-functional
- Functional
- User-driven
- Environmental
Correct answer: Non-functional
The two types of software requirements are:
- Functional: Functional requirements describe how the software is supposed to do its job. These include business requirements, IT requirements (deployment environment, database, infrastructure, etc.), corporate coding standards, and security requirements. Functional requirements are often described in use cases or user stories.
- Non-Functional: Non-functional requirements include operational and deployment requirements. These describe how the software will fit into an organization’s IT infrastructure and interact with other software and systems.
User-driven and environmental are not the two main types of software requirements.
103.
Which stage of the pen testing process provides the MOST value to the customer?
- Reporting
- Reconnaissance
- Removal of Evidence
- Attack and Exploitation
Correct answer: Reporting
The four main steps in the penetration testing process are:
- Reconnaissance: The pen tester explores the target system, identifying active systems and potential vulnerabilities. Vulnerability scanning may be a part of this stage.
- Attack and Exploitation: After identifying a vulnerability, the attacker exploits it to gain access to the target system. From there, they might perform additional reconnaissance and exploitation of vulnerabilities to move laterally through the target network and achieve the objectives of the engagement.
- Removal of Evidence: After a penetration test is complete, a tester should clean up after themselves, restoring systems to their original state.
- Reporting: A penetration test is intended to provide the customer with insight into the vulnerabilities in their systems, making reporting the most valuable stage of the process to the customer. A pen test report should describe actions taken, findings, and recommended mitigations at a minimum.
104.
What is the term for something put in place to manage the risk posed by a threat and that is classified as preventative, detective, corrective, or compensating?
- Control
- Mitigation
- Defense
- Protection
Correct answer: Control
Controls are measures put in place to detect, prevent, or mitigate the risks posed by a threat. They are commonly classified as preventative, detective, corrective, or compensating.
105.
Which of the following incorporates risk assessments in each phase of the project, allowing developers to cut their losses?
- Spiral
- Scrum
- XP
- Waterfall
Correct answer: Spiral
Scrum is an Agile development method in which participants are classified as pigs or chickens and have defined roles in project development. Development is broken into sprints designed to implement specific features.
XP is a people-centric approach to development that iteratively storyboards and implements user requirements.
Waterfall is a predictive development methodology with a linear, sequential process through stages with no backtracking. In Waterfall, identifying issues early is critical, as it is difficult to fix problems after the fact.
The Spiral model combines elements of Waterfall and prototyping models. It incorporates risk assessments at each of its phases, enabling a team to minimize sunk costs on a failed project.
106.
Firewalls and access controls are examples of which type of security control?
- Preventative
- Detective
- Responsive
- Proactive
Correct answer: Preventative
The three main ways to manage security risks in production include:
- Prevention: Blocking a security incident from occurring. Examples of preventative controls include firewalls, access controls, and encryption.
- Detection: Identifying a security incident that requires mitigation. Detective controls include audit logs, honeypots, and intrusion detection systems (IDS).
- Response: Mitigating an identified security incident. Incident response efforts are supported by backups, incident response teams (IRTs), and computer forensics.
Proactive security actions would involve threat hunting or similar activities.
107.
A security patch reopened a previously fixed vulnerability. This should have been caught in which of the following types of testing?
- Regression testing
- Integration testing
- Continuous testing
- Failure mode testing
Correct answer: Regression testing
Changes to an application’s code can break functional or non-functional requirements. Regression testing is designed to ensure that code still meets requirements after an update.
Not all errors in an application will cause a crash. Failure mode testing involves ensuring that erroneous inputs cause a failure and that the fault is properly handled.
Applications are deployed in environments alongside other applications and systems. Integration testing ensures that a system as a whole (including multiple different applications) achieves its intended purpose.
Continuous testing processes build automated testing into development pipelines. This ensures that issues are identified and addressed as early as possible.
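As a small illustration of the regression-testing idea above, the sketch below re-tests a previously fixed vulnerability after each change; the function name, its behavior, and the specific checks are assumptions made only for this example.

```python
import unittest

# Hypothetical function that was previously patched to reject path traversal input.
# Its name and behavior are assumptions for illustration only.
def sanitize_filename(name: str) -> str:
    if ".." in name or name.startswith("/"):
        raise ValueError("path traversal attempt rejected")
    return name

class TestPathTraversalRegression(unittest.TestCase):
    """Regression tests: re-run after every patch so the old vulnerability
    cannot silently reappear."""

    def test_rejects_parent_directory_references(self):
        with self.assertRaises(ValueError):
            sanitize_filename("../../etc/passwd")

    def test_rejects_absolute_paths(self):
        with self.assertRaises(ValueError):
            sanitize_filename("/etc/shadow")

    def test_allows_normal_filenames(self):
        self.assertEqual(sanitize_filename("report.txt"), "report.txt")

if __name__ == "__main__":
    unittest.main()
```

If the security patch in the question had been covered by tests like these, re-running the suite would have flagged the reopened vulnerability before release.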
108.
Which of the following is MOST easily integrated into automated CI/CD pipelines?
- IAST
- DAST
- SAST
- RASP
Correct answer: IAST
A few different types of tools exist for software security analysis. These include:
- Static Application Security Testing (SAST): SAST or static analysis tools analyze the source code of an application for vulnerabilities. Since they use source code, they can be applied earlier in the SDLC than other tools that require a running application. Additionally, they provide better test coverage and can pinpoint an error within an application’s code. However, SAST tools are language-specific and cannot identify some types of vulnerabilities that are only detectable in running code.
- Dynamic Application Security Testing (DAST): DAST or dynamic analysis tools test a running application for vulnerabilities by sending it malicious or anomalous inputs and analyzing its behavior or responses. DAST can be cheaper than SAST, often has fewer false positives, and can identify issues that are only apparent at runtime. However, it has poorer code coverage, cannot pinpoint where an issue exists within the code (only that it does exist), and requires a running application (making it only usable later in the SDLC).
- Interactive Application Security Testing (IAST): IAST solutions use instrumentation to gain internal visibility of a running application while running tests against it. IAST solutions can pinpoint vulnerabilities in an application and are more easily integrated into CI/CD pipelines. However, IAST can be more expensive, slows code execution, and is a less mature solution.
- Runtime Application Self-Protection (RASP): RASP uses instrumentation to monitor and protect an application in production. Based on visibility into inputs, outputs, and application behavior, RASP can identify and block even zero-day attacks against an application. However, RASP does increase the size and complexity of the application that it protects.
109.
Which of the following is designed to eliminate single points of failure in security?
- Defense in depth
- Fail secure
- Complete mediation
- Least common mechanism
Correct answer: Defense in depth
Defense in depth means that multiple layers of security should be used so that a failure of one layer doesn't leave the system vulnerable.
Fail secure means that a system should default to a secure state if something goes wrong, rather than an insecure one. For example, magnetic locks on a secure area should be locked if they lose power.
Complete mediation ensures that access controls can't be bypassed by checking them on every request, not just the first one.
Least common mechanism advises against sharing mechanisms or functions in code among users or processes with different privilege levels, because it is difficult to keep those paths separate. Instead, each should have its own mechanism.
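To make the complete mediation principle described above concrete, here is a minimal sketch in which the permission model and helper names are hypothetical: the access check runs on every request rather than being cached after the first one.

```python
# Hypothetical permission table for illustration only.
PERMISSIONS = {("alice", "payroll.db"): "read"}

def check_permission(user: str, resource: str, action: str) -> bool:
    return PERMISSIONS.get((user, resource)) == action

def read_resource(user: str, resource: str) -> str:
    # Complete mediation: the authorization check executes on EVERY call,
    # not just the first time this user touches the resource.
    if not check_permission(user, resource, "read"):
        raise PermissionError(f"{user} may not read {resource}")
    return f"contents of {resource}"
```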
110.
The end result of an assessment against the ISO/IEC Common Criteria is called what?
- Evaluation Assurance Level
- Security Target
- Security Benchmark
- Evaluation Result
Correct answer: Evaluation Assurance Level
ISO/IEC 15408: Information technology — Security techniques — Evaluation criteria for IT security defines the Common Criteria used to evaluate security technologies. The Target of Evaluation (TOE) has an associated Security Target (ST) describing its security properties, and each type of solution (firewall, operating system, etc.) has a Protection Profile (PP) it must meet. The end result of an evaluation is one of the following Evaluation Assurance Levels (EALs):
- EAL 1: Functionally Tested
- EAL 2: Structurally Tested
- EAL 3: Methodically Tested and Checked
- EAL 4: Methodically Designed, Tested, and Reviewed
- EAL 5: Semiformally Designed and Tested
- EAL 6: Semiformally Verified Design and Tested
- EAL 7: Formally Verified Design and Tested
111.
Which of the following manages how hardware, software, documentation, interfaces, and patching are set up?
- Configuration control
- Version control
- Revision control
- Baseline control
Correct answer: Configuration control
Configuration control manages the configuration of hardware, software, documentation, interfaces, and patching.
Version control involves managing the versions of and changes to the files in a codebase. Revision control is related to version control and involves defining and labeling each release. Baseline control is part of configuration management and includes change accounting and library management.
112.
A program uses cryptography that relies on the attacker not knowing the details of the algorithms used. This is a violation of which of the following?
- Open Design
- Component Reuse
- Psychological Acceptability
- Least Common Mechanism
Correct answer: Open Design
Some of the key security design principles include:
- Open Design: Also known as Kerckhoffs’s Principle, the principle of open design states that a system should not rely on security through obscurity. For example, in an encryption algorithm the only secret should be the key; all details of the algorithm itself can be known to an attacker without compromising the security of the system (see the sketch after this list).
- Least Common Mechanism: Least common mechanism states that different processes with different privilege levels should not use the same function or mechanism because it is more difficult to keep these paths separate. Instead, each process should have its own mechanism.
- Psychological Acceptability: If users don’t understand a security control or feel that it obstructs their work, they’ll attempt to work around it, undermining it. Security functionality should be user-friendly and transparent to the user.
- Component Reuse: Don’t reinvent the wheel. The use of secure, high-quality components rather than custom code can improve the efficiency and security of software and reduce the attack surface.
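As a small illustration of open design, the sketch below uses the publicly documented Fernet scheme from the third-party Python cryptography package (an assumption about available tooling, not something the question requires): the algorithm is fully published, and security rests entirely on keeping the key secret.

```python
# Open design / Kerckhoffs's principle: the Fernet scheme (built on AES and HMAC)
# is publicly documented; only the key must remain secret.
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # the ONLY secret in the system
f = Fernet(key)

token = f.encrypt(b"payroll record")   # ciphertext can be exposed safely
print(f.decrypt(token))                # recovers b"payroll record" using the key
```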
113.
Which of the following is NOT an example of data that must be kept secret?
- Digital certificate
- SSH key
- OAuth token
- API key
Correct answer: Digital certificate
SSH keys, OAuth tokens, and API keys are all secrets that must be protected. A digital certificate, in contrast, is intended for public distribution and is used to publish public key information.
114.
Which of the following can quantify the cost that an attack has to an organization?
- SLE
- ARO
- ALE
- SRO
Correct answer: SLE
Single Loss Expectancy (SLE) estimates the loss from a single occurrence of a threat and is calculated as the product of the asset value (AV) and the exposure factor (EF).
Annual Rate of Occurrence (ARO) estimates the number of times that a specific threat will materialize each year.
Annual Loss Expectancy (ALE) estimates the loss caused by a threat across an entire year. It is calculated as the product of SLE and ARO.
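A quick worked example may help; the dollar amounts, exposure factor, and occurrence rate below are hypothetical figures chosen only for illustration.

```python
# Hypothetical figures for illustration only.
asset_value = 100_000      # AV: value of the asset in dollars
exposure_factor = 0.25     # EF: fraction of the asset lost per incident
aro = 2                    # ARO: expected incidents per year

sle = asset_value * exposure_factor   # Single Loss Expectancy = AV x EF
ale = sle * aro                       # Annual Loss Expectancy = SLE x ARO

print(f"SLE = ${sle:,.2f}")   # SLE = $25,000.00
print(f"ALE = ${ale:,.2f}")   # ALE = $50,000.00
```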
115.
Which of the following types of software defects describes issues that are exploitable by an attacker?
- Vulnerabilities
- Flaws
- Bugs
- Errors and Faults
Correct answer: Vulnerabilities
Defects in software can be classified into five categories:
- Flaws: Design errors
- Bugs: Implementation errors
- Behavioral Anomalies: The application does not operate properly
- Errors and Faults: Outcome-based issues originating elsewhere
- Vulnerabilities: Issues that can be exploited by an attacker
116.
Which of the following cloud characteristics is LEAST related to resource availability in the cloud?
- On-Demand Self-Service
- Resource Pooling
- Rapid Elasticity
- Measured Service
Correct answer: On-Demand Self-Service
The five characteristics of the cloud are:
- On-Demand Self-Service: Customers can deploy solutions and make changes with minimal service provider involvement
- Broad Network Access: High-bandwidth connectivity exists to the cloud backend and cloud services are accessible over the network
- Resource Pooling: Cloud tenants share a pool of resources, which are allocated on an as-needed basis
- Rapid Elasticity: Cloud tenants can rapidly gain access to pooled resources, which can be reallocated when no longer needed
- Measured Service: Cloud customers' resource usage is monitored, and they are billed based on their usage
117.
The process of deploying code updates to fix functionality and security issues is called what?
- Patch management
- Update management
- Hotfix management
- Service pack management
Correct answer: Patch management
Patch management is the practice of applying updates to fix security and functionality issues. Key elements of patch management are ensuring that update code is secured against malicious modification and testing patches to ensure that they fix the issue and don’t break anything else (regression testing).
118.
Deleting sensitive fields from production data before using it for testing falls under which of the following categories?
- Sanitization
- Aggregation
- Tokenization
- Minimization
Correct answer: Sanitization
Production data can be useful for testing but should be properly anonymized. Some anonymization techniques include:
- Aggregation: Aggregation combines data from multiple different subjects to remove any identifiable information.
- Sanitization: Sanitization involves removing potentially sensitive data from records (a short sketch follows this list).
- Tokenization: Tokenization replaces sensitive data with a non-sensitive token that represents it on untrusted systems.
- Minimization: Minimization involves collecting, storing, and processing as little sensitive data as possible.
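The following minimal sketch shows sanitization of production records before they are handed to a test environment; the record layout and the list of sensitive fields are assumptions made for the example.

```python
# Hypothetical record layout and sensitive-field list for illustration.
SENSITIVE_FIELDS = {"ssn", "credit_card", "email"}

def sanitize(record: dict) -> dict:
    """Return a copy of the record with sensitive fields removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

production_row = {"id": 42, "name": "A. Customer", "ssn": "123-45-6789", "email": "a@example.com"}
test_row = sanitize(production_row)
print(test_row)   # {'id': 42, 'name': 'A. Customer'}
```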
119.
Which of the following describes all of the tests for a set of related requirements?
- Test suite
- Test case
- Test script
- Test harness
Correct answer: Test suite
Test suites are groups of tests. For example, multiple tests focused on performance may be collected into a test suite.
A test case describes a particular requirement to be tested and how an application will be tested against that requirement. A test script automates the process of implementing a test case, providing repeatability and speeding the testing process. A test harness brings together the systems under test and the tools, data, and configurations used during testing.
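To make the suite/case distinction concrete, the sketch below groups two related test cases into a suite using Python's unittest; the function under test and the requirement it reflects are assumptions made for the example.

```python
import unittest

# Hypothetical function under test.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# Each test method is a test case covering one requirement.
class DiscountRequirements(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)

# A test suite collects the related test cases so they run together.
def discount_suite() -> unittest.TestSuite:
    return unittest.TestLoader().loadTestsFromTestCase(DiscountRequirements)

if __name__ == "__main__":
    unittest.TextTestRunner().run(discount_suite())
```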
120.
Which type of flow control mechanism helps protect the privacy of one of the communicating parties?
- Proxy
- Firewall
- Queue
- Load balancer
Correct answer: Proxy
Flow control manages the movement of data between various systems, applications, etc. Important flow control tools include:
- Firewalls: Firewalls enforce corporate policy by inspecting network traffic and permitting or blocking it based on rules. Firewalls come in various forms, including packet-filtering, stateful, and next-generation.
- Proxies: Proxies act as a middleman in traffic flows, protecting the privacy and security of the source or destination of the traffic.
- Queues: Queuing protects against network congestion and against overloading legacy clients by creating a backlog when senders transmit faster than the recipient can handle.
Load balancers are not a common flow control mechanism.