ISC2 CISSP Exam Questions

Page 9 of 50

161.

When using a Redundant Array of Independent Disks (RAID), which RAID level both stripes and mirrors data across a set of drives?

  • 10

  • 0

  • 1

  • 5

Correct answer: 10

RAID-10 is a combination of RAID 1 and 0. Sets of drives are grouped into two separate RAID-1 groups. Each RAID-1 group is viewed as a volume in a RAID-0. This creates striping across the RAID-1 groups.

RAID levels:

  • RAID-0 - Data is striped across a set of drives without parity. This increases usable storage and read/write speed, but it also increases the risk of data loss: if one drive fails, the entire array fails.
  • RAID-1 - Data is mirrored between two identical drives. This provides redundancy; however, usable storage is reduced to 50% of the total storage.
  • RAID-5 - Data is striped across a set of drives, with parity distributed among them. This allows a single drive to fail without causing the array to fail. This provides redundancy, but usable storage is reduced by one drive's worth of storage.
  • RAID-6 - Similar to RAID-5, but two sets of parity are written. This allows two drives to fail without causing the array to fail. This provides redundancy, but usable storage is reduced by two drives' worth of storage.
  • RAID-10 - A combination of RAID-1 and RAID-0. Usable storage is reduced to 50% of the total storage.
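The striping-over-mirrors arrangement can be sketched in a few lines of Python. This is purely illustrative (the drive names and four-drive layout are invented for the example, not part of any real storage driver):

```python
# Illustrative sketch: how RAID-10 maps logical blocks onto four
# hypothetical drives by striping (RAID-0) across two mirrored
# RAID-1 pairs.

def raid10_layout(num_blocks, drives=("D1", "D2", "D3", "D4")):
    """Map each logical block to the pair of drives holding its two copies."""
    # Drives are grouped into RAID-1 mirrored pairs...
    pairs = [drives[i:i + 2] for i in range(0, len(drives), 2)]
    layout = {}
    for block in range(num_blocks):
        # ...and RAID-0 striping alternates blocks across the pairs.
        pair = pairs[block % len(pairs)]
        layout[block] = tuple(pair)  # the block is written to BOTH drives in the pair
    return layout

layout = raid10_layout(4)
for block, copies in layout.items():
    print(f"block {block}: mirrored on {copies[0]} and {copies[1]}")
```

Running this shows block 0 mirrored on D1/D2, block 1 on D3/D4, and so on: each pair can lose one member without data loss, while striping across the pairs provides the speed benefit of RAID-0.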

162.

Which of the following protection solutions blocks access to certain websites based on their URL or content?

  • Web security gateway

  • Data loss prevention

  • Intrusion detection system

  • Data rights management

Correct answer: Web security gateway

A web security gateway blocks access to certain websites based on their URL or content, which can often be set by content category (e.g. gambling, social media, games). Web security gateways are sometimes integrated into proxy servers or next-generation firewalls.

An intrusion detection system monitors network communications for anomalous traffic and indicators of compromise. Data Loss Prevention (DLP) identifies, monitors, and prevents unauthorized data transfers or leaks. It safeguards sensitive information from being shared or accessed by unauthorized users, mitigating data breaches and compliance risks. Data Rights Management (DRM) controls access, usage, and distribution of digital content. It ensures authorized users adhere to predefined permissions, safeguarding intellectual property and sensitive information across various platforms and devices.
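The core function of a web security gateway, category-based URL filtering, can be sketched in Python. The category database and block policy below are invented for illustration; real gateways rely on vendor-maintained categorization feeds:

```python
# Minimal sketch of category-based URL filtering, the core function of a
# web security gateway. Hostnames, categories, and policy are hypothetical.
from urllib.parse import urlparse

URL_CATEGORIES = {                       # stand-in for a categorization database
    "casino.example.com": "gambling",
    "social.example.com": "social media",
    "news.example.com": "news",
}
BLOCKED_CATEGORIES = {"gambling", "social media"}  # policy set by the administrator

def is_blocked(url: str) -> bool:
    """Return True if the URL's host falls into a blocked content category."""
    host = urlparse(url).hostname
    category = URL_CATEGORIES.get(host, "uncategorized")
    return category in BLOCKED_CATEGORIES

print(is_blocked("https://casino.example.com/play"))  # True  (gambling)
print(is_blocked("https://news.example.com/today"))   # False (news)
```

A production gateway would also inspect page content, not just the URL, and would typically sit inline as a proxy so that blocked requests never reach the destination.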

163.

A string of 512 characters is entered as input into an application field that does not perform input validation. The variable associated with that field only has the capacity for 8 characters. What attack type would MOST COMMONLY be applied to exploit this weakness?

  • Buffer overflow

  • Memory leak

  • Session hijacking

  • Output encoding

Correct answer: Buffer overflow

A buffer overflow attack would most commonly be applied to exploit this weakness. A buffer overflow occurs when an application buffer is populated with data that exceeds the capacity allocated to it, causing the excess data to "overflow" into adjacent memory (thereby overwriting it with the excess data). Buffer overflow attacks exploit this to overwrite memory locations that contain application code with malicious code that is incorporated into the excess data.

A memory leak describes the waste condition that results when an application fails to release memory it has allocated but no longer needs. A memory leak is not itself an attack type, but it can sometimes be exploited by malicious actors performing denial-of-service attacks. Session hijacking is an attack that uses captured authentication or transaction details from a session to assume the identity of, and act on behalf of, one of the parties in that session. Output encoding is not an attack type but an application security technique used to prevent attacks. Output encoding converts certain characters within website form inputs (e.g. ') into their HTML character entity reference equivalents (e.g. &apos;) to ensure these characters are processed as data (and not potentially misinterpreted as programming syntax).

164.

Of the following, which identification device provides the BEST tamper resistance coupled with complex encryption?

  • Smart card

  • Synchronous token

  • Radio Frequency Identification (RFID) badge

  • Cipher lock

Correct answer: Smart card

Smart cards are credit card-sized devices that contain a microprocessor. A smart card typically contains an encrypted private key issued through a Public Key Infrastructure (PKI) system that the authenticating environment trusts. When the smart card is inserted into a reader, the user must enter a Personal Identification Number (PIN) before the smart card releases the private key. Smart cards can be programmed to wipe themselves if a PIN is entered incorrectly too many times.

A synchronous token is incorrect because it is not protected with encryption, and anyone with physical access can view the code it displays. A Radio Frequency Identification (RFID) badge is incorrect because RFID badges are generally not protected with encryption. A cipher lock is incorrect because it is not protected with encryption; it is opened by entering a PIN.

165.

What type of testing BEST identifies potential security flaws in a software’s design?

  • Misuse case testing

  • Interface testing

  • Fuzz testing

  • Bug testing

Correct answer: Misuse case testing

Misuse case testing is used to help identify potential security flaws in a software’s design by examining how software could be abused or manipulated into doing something malicious.

Interface testing is incorrect because it specifically examines a software’s interfaces, such as Application Programming Interface (API), Graphical User Interface (GUI), and physical interface. Fuzz testing is incorrect because it only tests user input. Bug testing is a fabricated term.

166.

Aiyana is a Data Analyst correlating traffic patterns with weather patterns. She accesses all 50 US states' Department of Transportation records and imports them into a system. What is this system MOST LIKELY called?

  • Data warehouse

  • Database

  • Open Database Connectivity (ODBC)

  • Big data

Correct answer: Data warehouse

A data warehouse is generally used for reporting and analysis across a large dataset from multiple sources. A data warehouse may need all the data converted into the same format before it can be analyzed.

A database is a single, structured collection of data; relational databases organize data into tables. In this example, a single database would hold the records from one of the 50 US states. Big data refers to very large collections of mostly unstructured data of widely varying types, such as documents, spreadsheets, movies, and pictures. In the big data world, the closest answer to this question would have been a data lake; both data warehouses and data lakes exist to make data available for analysis. Open Database Connectivity (ODBC) is a standard programming interface for connecting applications to databases, not a storage system.
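A toy extract-transform-load flow makes the warehouse idea concrete. The per-state records, schemas, and traffic numbers below are invented for illustration, and SQLite stands in for what would normally be a dedicated warehouse platform:

```python
# Toy ETL sketch: pulling per-state records from separate sources (each in
# its own format) into one warehouse table so analysis can span all sources.
import sqlite3

# Each "source" stands in for one state DOT's records, in a different format.
texas_records = [("I-35", 120000), ("I-10", 98000)]   # (road, daily_traffic) tuples
ohio_records = [{"route": "I-71", "volume": 64000}]   # dicts with different keys

warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE traffic (state TEXT, road TEXT, daily_volume INTEGER)")

# Transform both source formats into the warehouse's common schema.
for road, vol in texas_records:
    warehouse.execute("INSERT INTO traffic VALUES ('TX', ?, ?)", (road, vol))
for rec in ohio_records:
    warehouse.execute("INSERT INTO traffic VALUES ('OH', ?, ?)", (rec["route"], rec["volume"]))

# Analysis now runs across every source at once.
for state, total in warehouse.execute(
        "SELECT state, SUM(daily_volume) FROM traffic GROUP BY state ORDER BY state"):
    print(state, total)
```

The "transform into a common schema" step is exactly the format conversion the explanation above refers to.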

167.

The primary data owner has been given the responsibility to handle all data and associated assets with due care and due diligence. As a responsible person, the data owner knows to be careful with how all types of data are shared and used but considers these two terms to be the same.

What is the difference between due care and due diligence?

  •  Due care is the act of taking steps to initially secure assets and information. Due diligence is ensuring due care is effective.

  • Due care is a knowledge-based approach to asset security. Due diligence is a physical approach.

  • Due diligence is a proactive measure to ensure due care is properly implemented.

  • Due care is the act of ensuring due diligence is being followed with all assets.

Correct answer: Due care is the act of taking steps to initially secure assets and information. Due diligence is ensuring due care is effective.

Due care is acting upon assets to ensure their security, while due diligence is ensuring this security measure is doing what it should. A data owner can create policies and rules for data usage and assign the ability to share information with a data controller, which would be an example of due care. The data owner then watching over the data controller and ensuring all policies and procedures are being followed would be an example of due diligence.

Due diligence is a secondary measure meant to ensure the due care process is being followed and is effective. The two terms are often treated as synonymous, but they become distinct when data roles come into play and data is shared among other entities.

168.

Which of these is NOT one of the primary goals of integrity?

  • To prevent unauthorized data access

  • To prevent unauthorized users from making system modifications

  • To prevent authorized users from making improper modifications

  • To maintain internal and external consistency

Correct answer: To prevent unauthorized data access

Preventing unauthorized data access is not an integrity goal but a confidentiality goal.

Integrity goals include the consistency of data and the prevention of unauthorized modifications by unauthorized and authorized users. Clark and Wilson defined these three ‘incorrect’ answers as the three goals of integrity in their security model that was published in 1987.

Rule 1: Prevent unauthorized users from making any modifications.

Rule 2: Prevent authorized users from making improper modifications.

Rule 3: Maintain internal and external consistency.

169.

After patching, many corporate systems crash due to issues with the patch. This MOST likely indicates a failure during which stage of the patch management process?

  • Test Patches

  • Evaluate Patches

  • Approve Patches

  • Verify Patch Deployment

Correct answer: Test Patches

Patches are updates designed to fix vulnerabilities. Some of the key steps in a patch management program include:

  • Evaluate Patches: Determine whether the organization needs to deploy the patch
  • Test Patches: Test patches on a non-production system to ensure that they do their job and don’t create other issues
  • Approve Patches: Approve tested patches for deployment in production, potentially via a change management process
  • Deploy Patches: Deploy patches to production systems, potentially using automation
  • Verify Patch Deployment: Test patches to ensure that they were applied correctly and fix the vulnerability

170.

A medium-sized business that provides services for the government is building its Disaster Recovery Plan. Their lead information security manager is working with the team to determine the threats that they must address with their plan.  Which of the following BEST helps an organization to identify and prioritize risks?

  • A Business Impact Analysis (BIA)

  • A quantitative risk analysis

  • A qualitative risk analysis

  • Threat modeling

Correct answer: A Business Impact Analysis (BIA)

A BIA, or Business Impact Analysis, is a critical process used by organizations to assess the potential impacts of disruptions on their operations. It identifies and quantifies the financial, operational, and reputational consequences of various threats such as natural disasters, cyber-attacks, or supply chain disruptions. During a BIA, key business processes and their dependencies are analyzed, and the potential downtime, data loss, and recovery time objectives are determined. The findings help organizations prioritize their resources, develop business continuity and disaster recovery plans, and make informed decisions to mitigate risks and ensure continuity during adverse events.

Quantitative risk analysis is a method used to assess and measure risks in numerical terms, typically involving probabilities and potential impact. It involves data-driven approaches to quantify the likelihood of risks occurring and the potential magnitude of their consequences, enabling better-informed decision-making and risk prioritization.
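The standard quantitative formulas, Single Loss Expectancy (SLE = asset value × exposure factor) and Annualized Loss Expectancy (ALE = SLE × annualized rate of occurrence), are simple enough to work through directly. The asset value and rates below are invented example figures:

```python
# Quantitative risk analysis in miniature: the standard SLE/ARO/ALE formulas.

def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """Return (SLE, ALE) for an asset."""
    sle = asset_value * exposure_factor   # Single Loss Expectancy per incident
    ale = sle * annual_rate               # expected loss per year
    return sle, ale

# A $200,000 server, 25% damaged per incident, expected once every 4 years:
sle, ale = annualized_loss_expectancy(200_000, 0.25, 0.25)
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")  # SLE = $50,000, ALE = $12,500
```

The ALE figure is what lets decision-makers compare the annual cost of a risk against the annual cost of a control.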

Qualitative risk analysis is an assessment method that focuses on identifying and prioritizing risks based on their qualitative characteristics. It involves subjective evaluation, such as high, medium, or low, to determine the likelihood and potential impact of risks. This approach helps in understanding risks qualitatively and aids in risk management planning. Both quantitative and qualitative risk analyses are included in a BIA.

Threat modeling is a structured approach used to identify and assess potential threats and vulnerabilities in a system or application. It helps organizations understand potential attack scenarios and prioritize security measures to proactively mitigate risks.

171.

Daisy has been working with the Business Continuity (BC) teams to identify potential threats that have not been addressed appropriately or at all by the corporation. They have just identified a threat that has a low likelihood of occurrence and a low impact score. What would be the best response to this threat?

  • Risk acceptance

  • Risk transfer

  • Risk reduction

  • Risk avoidance

Correct answer: Risk acceptance

When a threat is not likely to be realized and will have little impact, an organization should document and accept the risk. Risk acceptance does not mean choosing to ignore the risk but rather concluding that doing something about the risk is more costly than the risk itself.

Risk transfer involves sharing the burden of the threat with another party. The most common example is an insurance policy, but contracts and End User License Agreements (EULAs) are others. Risk reduction, or risk mitigation, includes tools or actions taken to reduce the chance or impact of the threat, such as an Intrusion Prevention System (IPS), encryption, or policies. Risk avoidance means not engaging in a risky activity in the first place, or stopping one once it is identified.
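The four responses are often summarized as a likelihood/impact decision table. The mapping below is a common rule of thumb sketched in Python, not a formal standard; real risk programs also weigh the cost of each treatment:

```python
# Illustrative decision table mapping likelihood/impact to a typical risk
# response. Thresholds are a rule-of-thumb sketch, not a formal standard.

def risk_response(likelihood: str, impact: str) -> str:
    if likelihood == "low" and impact == "low":
        return "accept"    # document the risk; treating it costs more than the risk
    if likelihood == "low" and impact == "high":
        return "transfer"  # e.g., insurance for rare but costly events
    if likelihood == "high" and impact == "low":
        return "reduce"    # controls such as an IPS, encryption, policies
    return "avoid"         # stop or redesign the risky activity

print(risk_response("low", "low"))    # accept -- the scenario in the question
print(risk_response("high", "high"))  # avoid
```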

172.

Malware analysis can require the use of debuggers and decompilers for the purposes of breaking down and analyzing code. Many organizations develop their own custom set-ups consisting of these very programs, so users have one convenient place to conduct their work. What would a downloaded virtual machine consisting of pre-selected malware analysis programs BEST describe?

  • Integrated development environment

  • Hypervisor

  • Code repository

  • Dynamic link library

Correct answer: Integrated development environment

An integrated development environment is a pre-built environment composed of the programs useful for the task at hand. An example of this would be Kali Linux, an operating system preloaded with various red- and blue-team tools.

A hypervisor is what controls a platform for virtualization. The hypervisor alone won't provide the tools for the job; it is the specific operating system downloaded onto it that provides such tools. Code repositories are collections of code made by many people, typically open for the use of others; an example is GitHub. Dynamic Link Libraries (DLLs) are shared libraries of compiled code and data that programs load at runtime, most commonly on Windows. Rather than reimplementing common functionality, an application simply calls the functions a DLL exports.

173.

Which of the following is the MOST fundamental component of a Virtual Private Network (VPN)?

  • Encryption

  • Authentication

  • The Internet

  • Authorization

Correct answer: Encryption

A Virtual Private Network (VPN) is usually referred to as an encrypted tunnel. VPN technologies that can be used today include Transport Layer Security (TLS), Secure Shell (SSH), and Internet Protocol Security (IPSec). Traditionally, tunnels encapsulate one protocol inside of another. For example, when there are legacy systems that need to connect across the Internet, the legacy protocol packets can be wrapped or placed inside of an IP packet for delivery. That is not quite what VPNs do today, but the name stuck.

Authentication is the process of verifying that a user is who they claim to be. This is done through one of three factors of authentication: something you know, something you have, and something you are. Authentication is important before granting access to systems or data, but it is not a component of a VPN; rather, it is something done once the VPN is connected. The Internet is incorrect because VPNs can traverse or connect across the Internet, or any other network; it is not part of the VPN, but rather what the VPN connects across. Authorization is the process of granting permissions to the authenticated user. This is done by the end system within the application or software, above the layers described by the Open Systems Interconnection (OSI) model.

174.

Of the following, which form of Redundant Array of Independent Disks (RAID) provides the MOST fault tolerance?

  • RAID 6 & 10

  • RAID 6 & 0

  • RAID 10 & 5

  • RAID 5 & 0

Correct answer: RAID 6 & 10

Redundant Array of Independent Disks (RAID) 6 protects against two drive failures. RAID 10 may protect against two drive failures, depending on which drives fail.

If four drives at 100GB each are used, you get the following results for each RAID level:

  • RAID-0 – 400GB of usable space with no fault-tolerance.
  • RAID-1 – 100GB of usable space with 3x fault-tolerance.
  • RAID-10 – 200GB of usable space with ~2x fault-tolerance.
  • RAID-5 – 300GB of usable space with 1x fault-tolerance.
  • RAID-6 – 200GB of usable space with 2x fault-tolerance.
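The usable-capacity figures above follow from the standard formulas for each level, which can be reproduced directly (drive count and size match the four-drive, 100GB example):

```python
# Standard usable-capacity rules for common RAID levels, reproducing the
# four-drive, 100GB-per-drive figures above.

def raid_usable_gb(level, drives=4, size_gb=100):
    total = drives * size_gb
    return {
        "0": total,                 # striping only, no redundancy
        "1": size_gb,               # every drive mirrors the same data
        "5": total - size_gb,       # one drive's worth of parity
        "6": total - 2 * size_gb,   # two drives' worth of parity
        "10": total // 2,           # mirrored pairs, then striped
    }[level]

for level in ("0", "1", "10", "5", "6"):
    print(f"RAID-{level}: {raid_usable_gb(level)}GB usable")
```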

175.

What BEST validates that security controls have been implemented and work as expected?

  • Security audit

  • Vulnerability scan

  • Patch management

  • Change management

Correct answer: Security audit

Security audits are performed to validate that security controls have been implemented and work as desired. Security audits generally compare results against an external standard.

Patch management, vulnerability scan, and change management are incorrect because they do not validate that controls have been implemented. Patch management is the process of ensuring that patches are found, installed, and managed over time. These patches fix vulnerabilities. A vulnerability is a weakness or a flaw and can be found by doing a vulnerability scan. Change management is the process of controlling, tracking, and documenting alterations to an environment.

176.

Which feature of a biometric access control system is considered MOST crucial to its success?

  • A low crossover error rate

  • Customization

  • Storage capacity

  • Logging

Correct answer: A low crossover error rate

The point at which biometric type 1 errors (false rejection rate) and type 2 errors (false acceptance rate) are equal is the Crossover Error Rate (CER). When a biometric device is too sensitive, type 1 errors (false negatives) are more common. When a biometric device is not sensitive enough, type 2 errors (false positives) are more common. The best scenario is a low CER. This means that the device very rarely has a false acceptance or a false rejection.
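The trade-off can be shown numerically. The sensitivity settings and error rates below are invented to illustrate the shape of the curves: raising sensitivity lowers the false acceptance rate but raises the false rejection rate, and the CER is where the two meet:

```python
# Sketch of locating the Crossover Error Rate (CER) from hypothetical
# FAR/FRR measurements taken at different sensitivity settings.

# (sensitivity, false_acceptance_rate, false_rejection_rate)
measurements = [
    (1, 0.20, 0.01),
    (2, 0.10, 0.03),
    (3, 0.05, 0.05),   # FAR == FRR here: this is the CER
    (4, 0.02, 0.09),
    (5, 0.01, 0.20),
]

# The CER is the setting where the two error rates are equal (or closest).
cer_setting, far, frr = min(measurements, key=lambda m: abs(m[1] - m[2]))
print(f"CER ~ {far:.2f} at sensitivity setting {cer_setting}")
```

Comparing CER values is how two biometric devices are typically ranked: the device with the lower CER makes fewer errors overall.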

Logging records whether someone was granted access each time the biometric system is used. Logging what happens is essential so that when something breaks, or there is a breach, it is possible to sort out what happened; that makes logging critical when there is a failure, not critical to the system's success. The storage capacity of a biometric system is usually not an issue; it might be of some concern regarding the logs, but the logs from just the biometric system should not cause a capacity problem. Being able to customize the settings on the biometric system can be important, but it is not usually crucial to its success.

177.

Which of the following documents would MOST LIKELY reference the importance of following institutional policies and outline sanctions for violating them?

  • Compliance policy

  • Service Level Agreement (SLA)

  • Playbook

  • Runbook

Correct answer: Compliance policy

A compliance policy would most likely reference the importance of following institutional policies and outline sanctions for violating them. Compliance policies are an essential addition to policy portfolios. Employees' compliance with institutional policies is vital for organizations to maintain consistency in the goods and services they provide while further ensuring that the organization itself remains compliant with laws, regulations, and contractual obligations.

Service Level Agreements (SLAs) use agreed-upon standards of measurement to establish minimum thresholds for acceptable service performance. SLAs are typically made between service providers and clients, whether internal (e.g. between different business units in an organization) or external (i.e. between the organization and a third-party provider), to ensure the quality of the services they have contracted to receive. While SLAs sometimes define sanctions if acceptable performance isn't met, they do not highlight the importance of following other policies. Playbooks & runbooks do not relate to policy compliance but are utilized to support incident response automation. Playbooks and runbooks document the step-by-step activities required to verify whether a detected security event is an actual incident and the step-by-step response activities needed to contain any such incidents.

178.

An organization has determined that one of its security vulnerabilities is failing to think as an attacker to mitigate risk. The Chief Information Security Officer (CISO) wants to compare the company's asset inventory with potential threats and then reduce each risk in a step-by-step manner. Which of the following would MOST LIKELY benefit the CISO for this project?

  • MITRE ATT&CK Matrix

  • Honey nets

  • Current event podcasts

  • Vulnerability scans

Correct answer: MITRE ATT&CK Matrix

The MITRE ATT&CK Matrix would most likely benefit the CISO, as it would provide a step-by-step guide on how attackers exploit vulnerabilities and can be easily compared with the company's vulnerabilities. With that, the CISO can then go step-by-step through attack phases and mitigate those, perhaps doing so in an order based on the most critical aspects first.

Honey nets are useful for seeing what attackers are after, but only once a breach actually occurs, and even then what is learned is limited to what the attacker actually does. Podcasts may be interesting and informative sources, but they lack the credibility of a national knowledge base or other proven methods of mitigating attacks. Vulnerability scans would certainly be informative, but they can produce false positives or be misconfigured. Vulnerability scanners may also miss issues if a firewall hides certain ports or services, so it's important to combine scanning with the knowledge of the practitioner and the MITRE ATT&CK Matrix.

179.

An organization has been informed of an upcoming third-party audit on behalf of a large and well-known auditing company. Management has prepared their organization for this, conducting preliminary internal audits using their own penetration testers and a Breach Attack Simulation (BAS) system. With all potential issues remediated, management wants to determine if anything else should be prepared prior to this audit. What should management MOST LIKELY prepare prior to this third-party audit?

  • Non-disclosure agreements

  • Add new systems to the network

  • Add cloud infrastructure

  • Edit security policies for employees

Correct answer: Non-disclosure agreements

Non-disclosure agreements should be prepared prior to an audit so that all parties are bound to keep findings confidential and acknowledge that a reasonable time frame will be given to remediate any vulnerabilities found. This is done not only to conduct business in an ethical manner, but also to avoid jeopardizing the business by exposing unremediated weaknesses and opening the doors to an increase in threats.

Adding new systems to the network and adding cloud infrastructure would increase the attack surface of a business, potentially voiding any results of the preliminary test. The organization would not want to edit security policies, as it's been stated that all potential issues have already been remediated.

180.

When deleting a file is not enough to satisfy an organization's data destruction policy, what BEST ensures the data cannot be restored, but the media can be reused?

  • Purging

  • Erasing

  • Clearing

  • Destruction

Correct answer: Purging

Purging is defined in NIST SP 800-88 as applying physical or logical techniques that render target data recovery infeasible using state-of-the-art laboratory techniques.

Erasing is another word for deleting. Clearing is defined in NIST SP 800-88 as the application of logical techniques to sanitize data in all user-addressable storage locations, protecting against simple, non-invasive data recovery techniques. It is typically applied through the standard Read and Write commands to the storage device, such as by rewriting with a new value or using a menu option to reset the device to the factory state. Destruction is the most secure method; however, it destroys the media so it cannot be reused.
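A single-pass overwrite, the "rewriting with a new value" technique mentioned above, can be sketched in Python. This is an illustration of the concept only: real sanitization must account for wear leveling, journaling filesystems, and bad blocks, and should be performed with vetted tools rather than a script like this:

```python
# Illustrative single-pass overwrite of a file's contents before deletion,
# in the spirit of the NIST SP 800-88 "clear" technique. A sketch only --
# not a substitute for vetted sanitization tools.
import os
import tempfile

def overwrite_and_delete(path):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)   # replace every byte with zeros
        f.flush()
        os.fsync(f.fileno())      # push the overwrite to the device
    os.remove(path)

# Demonstration on a throwaway file:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"sensitive data")
    name = tmp.name
overwrite_and_delete(name)
print(os.path.exists(name))  # False
```

Note that on SSDs an overwrite like this may never reach the original flash cells, which is why NIST SP 800-88 distinguishes clearing from purging in the first place.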