ISC2 CCSP Exam Questions
181.
Simone works in the Information Technology (IT) department, where they have been analyzing the golden images used to start new virtual servers. Using a software tool to analyze an image in a running virtual environment, they detected two fixes that need to be applied as a result of a recently released CVE notice. It has been determined that there is a fix from the vendor that they can apply.
What would be the next action they should take?
- Patching
- Store a new golden image
- Run a new scan
- Confirm the Common Vulnerability Score
Correct answer: Patching
Patching is used to fix bugs found in software, apply security vulnerability fixes, introduce new software features, and more. Ideally, patches are tested and validated before being rolled out, but a patch cannot be tested until it has been applied somewhere. Of the options given, patching is therefore the logical next step; patching the golden image does not prevent testing it before the image is replaced.
Once it is patched, it is good to test it to ensure that everything is still working as it should be. Part of this could involve running a new scan. Once it is verified as good, a new golden image is stored and made available for use.
When a CVE is released, a score is given to it based on the Common Vulnerability Scoring System (CVSS). This is arguably a good thing to check, but the software tool that pointed to the CVE should also show the CVSS score.
182.
Which of the following tasks is typically easier for operators of private cloud environments?
- Scheduling maintenance downtime
- Scaling infrastructure
- Onboarding new tenants
- Encrypting data
Correct answer: Scheduling maintenance downtime
Private cloud deployments reduce challenges related to multi-tenancy. For example, scheduling maintenance downtime with one organization is typically simpler than scheduling downtime when multiple organizations are involved.
Private clouds are dedicated to a single organization, so onboarding new tenants is incorrect.
Encrypting data and scaling infrastructure are not typically considered easier for a private cloud operator.
183.
Lightweight operating systems like Ubuntu Core and the Zephyr real-time operating system are MOST LIKELY to be used in which applications?
- Internet of Things
- Physical servers
- Virtual machines
- Blockchain
Correct answer: Internet of Things
The Internet of Things (IoT) refers to non-traditional devices (e.g., lamps, refrigerators, or machines in a manufacturing environment) having access to the internet to perform various processes. IoT devices typically run lightweight operating systems due to limited resources on smart devices.
Virtual machines are constructed in the cloud through the use of hypervisors running on top of servers. Virtual machines can run many different operating systems, but lightweight operating systems like Ubuntu Core and the Zephyr real-time operating system are not particularly common for VMs. Similarly, physical servers can run a wide variety of operating systems, and "server" operating systems such as Windows Server 2019 or Ubuntu 22.04 LTS are more typically associated with them.
Containers package applications more lightly than hypervisor-based virtual machines do.
Blockchain is a technology that creates an immutable (i.e., unchangeable) record. It is used in things like cryptocurrency, and a blockchain can serve as a permanent record of almost any transaction.
184.
A cloud architect needs to ensure a seamless transition to the Disaster Recovery (DR) site given a disaster. What must the architect have in place to accomplish this?
- Failover mechanism
- Redundant internet providers
- Numerous hypervisors
- Web Application Programming Interface (API)
Correct answer: Failover mechanism
A failover mechanism must be in place for there to be a seamless transition between the primary site and the DR site if there's a disaster.
It may be necessary to have multiple internet providers to be able to access the public cloud, which is a good thing to include in the design of a corporation's network and cloud architecture. The question, though, points to a seamless transition, which is a closer match to a failover mechanism.
Numerous hypervisors will not ensure a smooth transition to the DR site. It may be necessary to have different types of hypervisors in a cloud data center but that does not ensure the transition between sites.
A web API is software that enables applications and websites to exchange data and functionality; it does not provide failover.
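The core idea of a failover mechanism can be sketched in a few lines: a health check on the primary site determines whether traffic is routed to the primary or to the DR site. This is a minimal illustration, not a real DR implementation; the site names and the boolean health result are hypothetical.

```python
# Minimal failover sketch: route traffic to the DR site when the
# primary site's health check fails. Site names are hypothetical.

def select_site(primary_healthy: bool) -> str:
    """Return the site that should serve traffic right now."""
    return "primary-site" if primary_healthy else "dr-site"

# Normal operation: traffic stays on the primary site.
print(select_site(True))   # primary-site

# Disaster: the health check fails, so traffic fails over to DR.
print(select_site(False))  # dr-site
```

Real failover mechanisms automate this decision (e.g., via DNS or load-balancer health checks) so the transition is seamless, with no human in the loop.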
185.
Which of the following is a major difference between public and private cloud environments?
- Multitenancy
- On-Demand Self-Service
- Broad Network Access
- Resource Pooling
Correct answer: Multitenancy
The six common characteristics of cloud computing include:
- Broad Network Access: Cloud services are widely available over the network, whether using web browsers, secure shell (SSH), or other protocols.
- On-Demand Self-Service: Cloud customers can redesign their cloud infrastructure at need, leasing additional storage or processing power or specialized components and gaining access to them on-demand.
- Resource Pooling: Cloud customers lease resources from a shared pool maintained by the cloud provider at need. This enables the cloud provider to take advantage of economies of scale by spreading infrastructure costs over multiple cloud customers.
- Rapid Elasticity and Scalability: Cloud customers can expand or contract their cloud footprint at need, much faster than would be possible if they were using physical infrastructure.
- Measured or Metered Service: Cloud providers measure their customers’ usage of the cloud and bill them for the resources that they use.
- Multitenancy: Public cloud environments are multitenant, meaning that multiple different cloud customers share the same underlying infrastructure. Private cloud environments are single-tenant environments used by a single organization.
186.
Which of the following emerging technologies REDUCES the amount of computation performed on cloud servers?
- Edge Computing
- Artificial Intelligence
- Blockchain
- TEE
Correct answer: Edge Computing
Cloud computing is closely related to many emerging technologies. Some examples include:
- Machine Learning and Artificial Intelligence (ML/AI): Machine learning is a subset of AI and includes algorithms that are designed to learn from data and build models to identify trends, perform classifications, and other tasks. Cloud computing is linked to the rise of ML/AI because it provides the computing power needed to train the models used by ML/AI and operate these technologies at scale.
- Blockchain: Blockchain technology creates an immutable digital ledger in a decentralized fashion. It is used to support cryptocurrencies, track ownership of assets, and implement various other functions without relying on a centralized authority or single point of failure. Cloud computing is related to blockchain because many of the nodes used to maintain and operate blockchain networks run on cloud computing platforms.
- Internet of Things (IoT): IoT systems include smart devices that can perform data collection or interact with their environments. These devices often have poor security and rely on cloud-based servers to process collected data and issue commands back to the IoT systems (which have limited computational power, etc.).
- Edge and Fog Computing: Edge and fog computing move computations from centralized servers to devices at the network edge, enabling faster responses and less usage of bandwidth and computational power by cloud servers. Edge computing performs computing on IoT devices, while fog computing uses gateways at the edge to collect data from these devices and perform computation there.
- Confidential computing: With confidential computing, cryptography is used to protect data in the cloud. A trusted execution environment (TEE) enables data decryption only for specific authorized access attempts.
187.
The software development team is working with the information security team through the Software Development Lifecycle (SDLC). The information security manager is concerned that the team is rushing through the phase of the lifecycle where the most technical mistakes could be made. Which phase is that?
- Development
- Requirements
- Testing
- Planning
Correct answer: Development
During the development or coding phase of the SDLC, the plans and requirements are turned into an executable programming language. As this is the phase where coding takes place, it is most likely the place where technical mistakes would be made.
Technical mistakes could be made in the planning or requirements phase, although more architectural problems are likely to occur.
Testing is technical and mistakes can be made during testing, but it is more likely that the testing is not as complete as needed.
188.
Which of the following event attributes provides a quick way of identifying anomalous events?
- Geolocation
- User Identity
- IP Address
- MAC Address
Correct answer: Geolocation
An event is anything that happens on an IT system, and most IT systems are configured to record these events in various log files. When implementing logging and event monitoring, event logs should include the following attributes to identify the user:
- User Identity: A username, user ID, globally unique identifier (GUID), process ID, or other value that uniquely identifies the user, application, etc. that performed an action on a system.
- IP Address: The IP address of a system can help to identify the system associated with an event, especially if the address is a unique, internal one. With public-facing addresses, many systems may share the same address.
- Geolocation: Geolocation information can be useful to capture in event logs because it helps to identify anomalous events. For example, a company that doesn’t allow remote work should have few (if any) attempts to access corporate resources from locations outside the country or region.
The CCSP doesn't identify the MAC address as an important attribute to include in event logs.
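The geolocation check described above can be sketched as a simple filter over event logs: flag any event whose country of origin is outside the set the company expects. The event records, country codes, and allow-list below are hypothetical, for illustration only.

```python
# Flag log events whose geolocation falls outside the countries the
# company expects. Events and the allow-list are hypothetical.

ALLOWED_COUNTRIES = {"US"}  # e.g., a company that doesn't allow remote work

events = [
    {"user": "alice", "ip": "203.0.113.7", "country": "US"},
    {"user": "bob", "ip": "198.51.100.2", "country": "RU"},
]

def anomalous(event_list, allowed=ALLOWED_COUNTRIES):
    """Return events originating outside the allowed countries."""
    return [e for e in event_list if e["country"] not in allowed]

for e in anomalous(events):
    print(f"Anomalous login: {e['user']} from {e['country']}")
```

This is why geolocation is a quick anomaly indicator: a single set-membership test separates expected from unexpected events, whereas usernames and IP addresses require more context to judge.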
189.
Which of the following is MOST related to Infrastructure as Code (IaC)?
- Configuration Management and Change Management
- Redundancy
- Scheduled Downtime and Maintenance
- Logging and Monitoring
Correct answer: Configuration Management and Change Management
Some best practices for designing, configuring, and securing cloud environments include:
- Redundancy: A cloud environment should not include single points of failure (SPOFs) where the outage of a single component brings down a service. High availability and duplicate systems are important to redundancy and resiliency.
- Scheduled Downtime and Maintenance: Cloud systems should have scheduled maintenance windows to allow patching and other maintenance to be performed. This may require a rotating maintenance window to avoid downtime.
- Isolated Network and Robust Access Controls: Access to the management plane should be isolated using access controls and other solutions. Ideally, this will involve the use of VPNs, encryption, and least privilege access controls.
- Configuration Management and Change Management: Systems should have defined, hardened default configurations, ideally using infrastructure as code (IaC). Changes should only be made via a formal change management process.
- Logging and Monitoring: Cloud environments should have continuous logging and monitoring, and vulnerability scans should be performed regularly.
190.
An administrator working in a data center noticed that the humidity level was 80% relative humidity. What threat could this cause to systems?
- Condensation may form, causing water damage
- Excess electrostatic discharge could damage systems
- Systems may overheat and fry internal components
- 80% relative humidity is within the ideal range, so it does not pose any risk to systems
Correct answer: Condensation may form, causing water damage
The American Society of Heating, Refrigeration, and Air Conditioning Engineers (ASHRAE) recommends that data centers have a moisture level of 40-60 percent relative humidity. Having the humidity level too high could cause condensation to form and damage systems. Having the humidity level too low could cause an excess of electrostatic discharge, which may cause damage to systems.
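The ASHRAE band and the two failure modes on either side of it can be captured in a short classification function. This is a sketch of the reasoning only; the function name and return strings are invented for illustration.

```python
# ASHRAE-recommended band for data centers: 40-60% relative humidity.
LOW, HIGH = 40, 60

def humidity_risk(rh: float) -> str:
    """Classify the threat posed by a relative-humidity reading."""
    if rh > HIGH:
        return "condensation (water damage)"
    if rh < LOW:
        return "electrostatic discharge"
    return "within recommended range"

# The administrator's 80% reading falls above the band.
print(humidity_risk(80))  # condensation (water damage)
```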
191.
A medical corporation is going to use lab results, test results, and other data to determine the effectiveness of one of their vaccines. Since the US Health Insurance Portability and Accountability Act (HIPAA) demands that medical data be protected, the corporation will remove all direct identifiers from the records to protect the patients. Because some of the information considered indirect identifiers may be relevant, they are going to leave it in place.
Which of the following is this called?
- De-identification
- Anonymization
- Encryption
- Tokenization
Correct answer: De-identification
Data de-identification is the process of removing direct identifiers. Bill 64 in Quebec, Canada, defines this quite clearly.
Anonymization is removing the direct and indirect identifiers.
Tokenization replaces the data with another value (a token) and is commonly used by services like Apple Pay, Google Pay, and PayPal. The token can be exchanged back for the original value, unlike de-identification and anonymization, which are permanent removal methods.
Encryption effectively obscures or obfuscates the data. This can also be undone with decryption.
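The difference between de-identification and anonymization comes down to which identifier classes are removed. A minimal sketch, using an invented patient record and hypothetical field names:

```python
# Toy patient record; field names and values are hypothetical.
record = {
    "name": "Jane Doe",        # direct identifier
    "ssn": "123-45-6789",      # direct identifier
    "zip_code": "90210",       # indirect identifier
    "age": 47,                 # indirect identifier
    "vaccine_result": "effective",
}

DIRECT = {"name", "ssn"}
INDIRECT = {"zip_code", "age"}

def de_identify(rec):
    """Remove direct identifiers only; indirect identifiers stay."""
    return {k: v for k, v in rec.items() if k not in DIRECT}

def anonymize(rec):
    """Remove both direct and indirect identifiers."""
    return {k: v for k, v in rec.items() if k not in DIRECT | INDIRECT}

print(de_identify(record))  # keeps zip_code, age, vaccine_result
print(anonymize(record))    # keeps only vaccine_result
```

The scenario in the question matches `de_identify`: direct identifiers are stripped, while potentially relevant indirect identifiers remain.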
192.
Ben is part of an incident response (IR) team that has found that a bad actor compromised a database full of personal information regarding their customers. The team must now conduct a thorough forensic investigation to figure out exactly what was compromised, how, and hopefully by whom.
Which of the following can provide information regarding the runtime state of a running virtual machine?
- Virtual Machine Introspection (VMI)
- Hashing and digital signatures
- Digital forensics
- Technical readiness
Correct answer: Virtual Machine Introspection (VMI)
VMI is a technique that allows the runtime state of a running virtual machine to be monitored. It tracks events such as interrupts and memory writes, allowing the memory contents of a running virtual machine to be collected.
Hashing and digital signatures can be used to provide evidence that the digital evidence has not been changed or modified.
Digital forensics is the process of collecting digital forensic evidence and examining it. Digital forensic science includes the analysis of media, software, and networks.
Technical readiness would be getting ready to perform evidence collection and analysis when needed in the future.
193.
A large consulting firm has a hybrid cloud environment. They have a private cloud that they manage on their premises, and they use a large public cloud provider for some of their Platform and Software as a Service (PaaS and SaaS) needs. Their security operations center (SOC) has been processing a few high-priority indications of compromise (IoC) that appear to point to a live incident.
For their response, what should they do?
- Observe, Orient, Decide, Act
- Reconnaissance, Execution, Evasion, Collection
- Reconnaissance, Delivery, Exploitation
- Sense, Categorize, Respond
Correct answer: Observe, Orient, Decide, Act
The OODA loop is Observe, Orient, Decide, and Act. This is a common incident response concept. The OODA loop is iterative: after completing one cycle, individuals continuously loop back to the beginning to gather new information, reassess the situation, and make further decisions and actions. The loop emphasizes the importance of speed, adaptability, and learning from feedback to maintain a competitive advantage and effectively respond to dynamic and uncertain situations.
"Sense, categorize, and respond" is taken from the Cynefin Framework and is used for clear situations with fixed constraints. Incident response is typically not clear enough at the onset to fit into the "clear" category of the Cynefin Framework.
Kill chains describe the path that bad actors take in their attacks. They are good to be familiar with in their entirety:
- The Lockheed-Martin Kill Chain is a comprehensive cybersecurity strategy that helps organizations identify and prevent advanced cyber attacks at various stages of the attack process. The concept is based on the idea of a chain, where each stage represents a link in the chain that can be broken or disrupted, effectively stopping the cyber attack from being successful. The stages are Reconnaissance, Weaponization, Delivery, Exploitation, Installation, Command & Control, and Actions on Objectives.
- The MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework is a comprehensive knowledge base that describes the various Tactics, Techniques, and Procedures (TTPs) used by adversaries during cyberattacks. It provides a structured and standardized way of understanding and categorizing the different stages of an attack. Its tactics, in order, are Reconnaissance, Resource Development, Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Collection, Command and Control, Exfiltration, and Impact.
194.
Which of the following is a law that protects the privacy rights of people in Canada?
- PIPEDA
- HIPAA
- GDPR
- Personal Data Protection Act No. 25,326
Correct answer: PIPEDA
The Personal Information Protection and Electronic Documents Act (PIPEDA) is a Canadian law that requires the protection of personal data.
The General Data Protection Regulation, or GDPR, is a regulation and law that affects all countries in the European Union (EU) and the European Economic Area. The purpose of the GDPR is to protect data on all natural persons within the EU. If the person is within the EU or EEA and their data is collected from anywhere in the world, their data must be protected according to GDPR.
Personal Data Protection Act No. 25,326 is a similar law in Argentina.
HIPAA is a U.S. law that governs protected health information (PHI).
195.
Which of the following BEST describes the "create" phase of the cloud data lifecycle?
- The creation of new or the alteration of existing content
- The creation of new content
- The creation of new content stored on a hard disk drive (HDD)
- The creation or modification of content stored onto a solid state drive (SSD)
Correct answer: The creation of new or the alteration of existing content
The Cloud Security Alliance (CSA) defined the create phase of the data lifecycle as the creation of new or the alteration of existing content in its Security Guidance v4.0 document. The CCSP exam is a joint venture between ISC2 and the CSA, so it is worth knowing what the CSA says. You may disagree with this definition, since most people would put the alteration of content in the use phase, but knowing the CSA's wording makes it possible to work through exam questions.
If it is stored on a HDD or SSD, that means that data has moved from the create phase into the store phase. The question only involves the create phase.
196.
At which stage of the IAM process does the system determine whether a user should be granted access to a particular resource?
- Authorization
- Federation
- Authentication
- Audit
Correct answer: Authorization
Identity and access management (IAM) services have four main practices, including:
- Identification: The user uniquely identifies themself using a username, ID number, etc. In the cloud, identification may be complicated by the need to connect on-prem and cloud IAM systems via federation or identity as a service (IDaaS) offering.
- Authentication: The user proves their identity via passwords, biometrics, etc. Often, authentication is augmented using multi-factor authentication (MFA), which requires multiple types of authentication factors to log in.
- Authorization: The user is granted access to resources based on assigned privileges and permissions. Authorization is complicated in the cloud by the need to define policies for multiple environments with different permissions models. A cloud access security broker (CASB) solution can help with this.
- Accountability: Monitoring the user’s actions on corporate resources is accomplished in the cloud via logging, monitoring, and auditing.
An audit is not a separate IAM practice and is covered in accountability.
Federation enables different organizations to share resources using their own identities and authentication mechanisms.
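Conceptually, authorization is a lookup that happens after identification and authentication: given an already-authenticated user, should this access be granted? A minimal sketch, with an invented permission table:

```python
# Hypothetical permission table mapping authenticated users to the
# resources they are allowed to access.
permissions = {
    "alice": {"reports", "dashboards"},
    "bob": {"dashboards"},
}

def authorize(user: str, resource: str) -> bool:
    """Authorization step: decide whether this user may access this resource."""
    return resource in permissions.get(user, set())

print(authorize("alice", "reports"))  # True
print(authorize("bob", "reports"))    # False
```

Real cloud environments layer policies, roles, and multiple permission models on top of this basic decision, which is why tools like CASBs exist, but the question is asked and answered at this access-decision stage.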
197.
An application uses application-specific access control, and users must authenticate with their own credentials to gain their allowed level of access to the application. A bad actor accessed corporate data after having stolen credentials. According to the STRIDE threat model, what type of threat is this?
- Spoofing identity
- Broken authentication
- Insufficient due diligence
- Tampering with data
Correct answer: Spoofing identity
The STRIDE threat model has six threat categories: Spoofing identity, Tampering with data, Repudiation, Information disclosure, Denial of service, and Elevation of privilege (STRIDE). A bad actor logging in with a legitimate user's stolen credentials is identity spoofing. Ensuring that credentials are protected in transmission and when stored by any system is critical, and using Multi-Factor Authentication (MFA) is essential to prevent this. If you have any internet-accessible account (your bank, Amazon, etc.), you should enable MFA. The same advice is true in the cloud.
Broken authentication is the entry on the OWASP top 10 list that includes identity spoofing (and more). The question is about STRIDE.
Insufficient due diligence is a cloud problem (and elsewhere) when corporations do not think carefully before putting their systems and data into the cloud and ensuring all the right controls are in place.
Tampering with data could occur once the bad actor is logged in as a user, but the question does not go that far. It is not necessary for someone to log in to tamper with data.
198.
A company offers integrated security services for a cloud environment. Which of the following BEST describes their role?
- Cloud Service Partner
- Cloud Service Provider
- Cloud Service Broker
- Cloud Customer
Correct answer: Cloud Service Partner
Some of the important roles and responsibilities in cloud computing include:
- Cloud Service Provider: The cloud service provider offers cloud services to a third party. They are responsible for operating their infrastructure and meeting service level agreements (SLAs).
- Cloud Customer: The cloud customer uses cloud services. They are responsible for the portion of the cloud infrastructure stack under their control.
- Cloud Service Partners: Cloud service partners are distinct from the cloud service provider but offer a related service. For example, a cloud service partner may offer add-on security services to secure an organization’s cloud infrastructure.
- Cloud Service Brokers: A cloud service broker may combine services from several different cloud providers and customize them into packages that meet a customer’s needs and integrate with their environment.
- Regulators: Regulators ensure that organizations — and their cloud infrastructures — are compliant with applicable laws and regulations. The global nature of the cloud can make regulatory and jurisdictional issues more complex.
199.
Carrie is the information security team lead working with the data architects. They are working to ensure that once data is entered into the database, it will retain that exact value and will not be changed or corrupted.
What mechanism could Carrie use to verify the integrity of the data over time?
- SHA-3
- AES
- KMS
- PKCS
Correct answer: SHA-3
SHA-3 is the only hash function listed.
Hashing is a process that can be used to verify the integrity of data. If you use the same hashing algorithm on the same data time and time again, the hash value that is generated will be the same. If the data is changed, the hash value will be different, revealing that the integrity of the data has been compromised.
AES is a symmetric encryption algorithm. While encryption can protect the data in the database, it does not prove the integrity of the data.
A key management service (KMS) is a tool used to store and protect cryptographic keys.
PKCS (Public-Key Cryptography Standards) is a family of standards for public-key cryptography, including formats for managing keys. Managing crypto keys is not related to proving integrity; cryptography of that kind is usually used to protect confidentiality and possibly authenticity.
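The hash-based integrity check described above is straightforward to demonstrate with Python's standard library, which provides SHA3-256 via `hashlib`. The record contents below are invented for illustration.

```python
import hashlib

def sha3_digest(data: bytes) -> str:
    """Return the SHA3-256 hex digest of the data."""
    return hashlib.sha3_256(data).hexdigest()

# Hash the record when it is first stored and keep the digest.
original = b"patient_id=1001,result=negative"
baseline = sha3_digest(original)

# Later: re-hash and compare. Unchanged data yields the same digest.
assert sha3_digest(original) == baseline

# Any change, even a single byte, yields a different digest,
# revealing that integrity has been lost.
tampered = b"patient_id=1001,result=positive"
print(sha3_digest(tampered) == baseline)  # False
```

In practice the baseline digests would themselves need to be stored somewhere tamper-resistant (or signed), since an attacker who can rewrite both the data and its hash defeats the check.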
200.
Which of the following is NOT a physical or environmental control?
- Intrusion Prevention System (IPS)
- Intrusion Detection System (IDS)
- Biometric lock
- Uninterruptible Power Supply (UPS)
Correct answer: Intrusion Prevention System (IPS)
An intrusion prevention system helps protect a network from malicious activity and intrusions, and therefore, is not considered a physical or environmental control.
IDSs do exist in physical security: a sensor on a door or window that alerts when it is opened is a type of IDS. A biometric lock is a physical control even though it involves biometrics. A UPS is a battery that provides a power source if there's a power outage, which is considered a physical control.