CompTIA Cloud+ Study Guide. Ben Piper
92 A. The tracert and traceroute commands are useful for network path troubleshooting. These commands show the routed path a packet of data takes from source to destination. You can use them to determine whether routing is working as expected or whether there is a route failure in the path. The other options are all incorrect because they do not provide network path data.
93 B, D. The Windows command-line utility nslookup resolves domain names to IP addresses. The Linux equivalent is the dig command. The other options are not valid for the solution required in the question.
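The name-to-address lookup that nslookup and dig perform can also be done programmatically. Below is a minimal Python sketch using only the standard library; the hostname is illustrative, and real DNS queries would of course depend on your resolver configuration.

```python
import socket

def resolve(hostname):
    """Return the sorted IPv4 addresses a name resolves to --
    the same name-to-IP mapping that nslookup and dig report."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # the IP address is the first element of the sockaddr.
    return sorted({info[4][0] for info in infos})

# "localhost" resolves locally, so this works even without Internet access.
print(resolve("localhost"))
```

Unlike dig, this uses the operating system's resolver (including the hosts file) rather than querying a DNS server directly.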
94 B. The Windows Remote Desktop Protocol allows for remote connections to a Windows graphical user desktop.
95 C. The tcpdump utility allows a Linux system to capture live network traffic, and it is useful in monitoring and troubleshooting. Think of tcpdump as a command-line network analyzer. The dig and nslookup commands show DNS resolution but do not display the actual packets going across the wire. netstat shows connection information and is not DNS-related.
96 E. In a data center, terminal servers are deployed and have several serial ports, each cabled to a console port on a device that is being managed. This allows you to make an SSH or a Telnet connection to the terminal server and then use the serial interfaces to access the console ports on the devices to which you want to connect. The other options do not provide serial port connections.
97 C. Infrastructure security is the hardening of the facility and includes the measures outlined in the question, such as nondescript facilities, video surveillance, and biometric access.
98 C, E. Common remote access tools include RDP, SSH, and terminal servers. IDSs/IPSs are for intrusion detection and prevention, and DNS is for domain name–to–IP address mappings; neither is a utility for remote access.
99 C. A secure Internet-based connection would be a VPN.
100 A. Logging into systems is referred to as authentication. Also, the question references multifactor authentication (MFA) as part of the system.
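A common second factor in MFA is a time-based one-time password (TOTP), the "something you have" factor behind most authenticator apps. Here is a minimal sketch of RFC 6238 TOTP generation in Python using only the standard library; the shared secret below is the RFC test value, not a real credential.

```python
import hmac, hashlib, struct, time

def totp(secret, for_time=None, step=30, digits=6):
    """Generate an RFC 6238 time-based one-time password.

    The moving factor is the number of `step`-second intervals
    since the Unix epoch, run through HMAC-SHA1 (RFC 4226 HOTP).
    """
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC test secret and time value; yields the RFC 4226 test vector "287082".
print(totp(b"12345678901234567890", for_time=59))
```

The server stores the same secret and accepts a code only if it matches the current (or an adjacent) time window, so a stolen password alone is not enough to authenticate.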
Chapter 1 Introducing Cloud Computing Configurations and Deployments
THE FOLLOWING COMPTIA CLOUD+ EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
1.1 Compare and contrast the different types of cloud models.
    Deployment models
        Public
        Private
        Hybrid
        Community
        Cloud within a cloud
        Multicloud
        Multitenancy
    Service models
        Infrastructure as a service (IaaS)
        Platform as a service (PaaS)
        Software as a service (SaaS)
    Advanced cloud services
        Internet of Things (IoT)
        Serverless
        Machine learning/Artificial intelligence (AI)
    Shared responsibility model
1.3 Explain the importance of high availability and scaling in cloud environments.
    Hypervisors
        Affinity
        Anti-affinity
    Regions and zones
    High availability of network functions
        Switches
        Routers
        Load balancers
        Firewalls
    Scalability
        Auto-scaling
1.4 Given a scenario, analyze the solution design in support of the business requirements.
    Environments
        Development
        Quality assurance (QA)
        Staging
        Production
    Testing techniques
        Vulnerability testing
        Penetration testing
        Performance testing
        Regression testing
        Functional testing
        Usability testing
3.1 Given a scenario, integrate components into a cloud solution.
    Application
        Serverless
4.1 Given a scenario, configure logging, monitoring, and alerting to maintain operational status.
    Monitoring
        Baselines
        Thresholds
4.3 Given a scenario, optimize cloud environments.
    Placement
        Geographical
        Cluster placement
        Redundancy
        Colocation
4.4 Given a scenario, apply proper automation and orchestration techniques.
    Automation activities
        Routine operations
        Updates
        Scaling
Introducing Cloud Computing
You'll begin your CompTIA Cloud+ (CV0-003) certification exam journey with a general overview of cloud computing. With a strong understanding of cloud terminology and architectures, you'll better understand the details of the cloud, which in turn means that you'll be better prepared for the Cloud+ exam and be effective when planning, deploying, and supporting cloud environments.
Let's start by briefly looking at where cloud sits in the broader scope of the IT world. Before cloud computing, organizations had to acquire the IT infrastructure needed to run their applications. Such infrastructure included servers, storage arrays, and networking equipment like routers and firewalls.
Options for where to locate this infrastructure were limited. An organization with an ample budget might build an expensive data center consisting of the following:
Racks to hold servers and networking equipment
Redundant power sources and backup batteries or generators
Massive cooling and ventilation systems to keep the equipment from overheating
Network connectivity within the data center and to outside networks such as the Internet
Organizations that are less fiscally blessed might rent physical space from a colocation (colo) facility, which is just a data center that leases rack space to the general public. Because colo customers lease only as much space as they need—be it a few rack units, an entire rack, or even multiple racks—this option is much cheaper than building and maintaining a data center from scratch. Customer organizations just have to deal with their own IT equipment and software. To put it in marketing terms, think of a colocation facility as “data center as a service.”
Cloud computing takes the concept of a colocation facility and abstracts it even further. Instead of acquiring your own IT equipment and leasing space for it from a colo, an organization can simply use a cloud service provider. In addition to providing and managing the data center infrastructure, a cloud service provider (or just provider for short) also handles the IT hardware infrastructure—namely servers, storage, and networking. The consumer of the cloud services pays fees to use the provider's equipment, and such fees are typically based on usage and billed monthly. In this chapter, we'll dive further into the details of how cloud computing works and how an organization might use the cloud instead of—or in addition to—a traditional data center or colo model.
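At its core, a usage-based bill is just metered consumption multiplied by unit rates. The toy Python sketch below illustrates the idea; the rates and billable dimensions are invented for illustration and do not reflect any real provider's pricing.

```python
# Hypothetical unit rates for three metered dimensions (illustrative only).
RATES = {
    "vm_hours": 0.05,           # $ per virtual machine hour
    "storage_gb_months": 0.02,  # $ per GB-month of storage
    "egress_gb": 0.09,          # $ per GB of outbound data transfer
}

def monthly_bill(usage):
    """Sum metered usage times unit rate for each billable dimension."""
    return round(sum(RATES[dim] * amount for dim, amount in usage.items()), 2)

# One VM running all month (720 hours), 100 GB stored, 50 GB transferred out.
print(monthly_bill({"vm_hours": 720, "storage_gb_months": 100, "egress_gb": 50}))
```

The key contrast with a data center or colo is that when usage drops to zero, so does the bill; there is no sunk hardware cost.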
Related to the data center versus cloud distinction, there are two terms that you need to know. On-premises (on-prem) hosting refers to an organization hosting its own hardware, be it in a data center or a colo. In contrast, cloud computing is an example of off-premises (off-prem) hosting, as the hardware resources are not controlled by the organization that uses them. To make this distinction easy to remember, just equate on-prem with data center and off-prem with cloud.
This book will reference the National Institute of Standards and Technology (NIST) SP 800-145 publication (https://doi.org/10.6028/NIST.SP.800-145) as the main source of cloud computing definitions. NIST defines cloud computing as follows:
… a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Pay close attention to that last sentence. The reference to “minimal management effort or service provider interaction” is code for automation. Unlike a traditional data center or colocation facility,