Monday 19 February 2018

Understanding Cisco Cloud Fundamentals Objective 5.5

Describe the various Cisco storage network devices

Cisco MDS family

Cisco® MDS 9500 and 9700 Series Multilayer Directors are director-class SAN switches designed for deployment in large, scalable enterprise clouds to enable business transformation. Layering a comprehensive set of intelligent features onto a high-performance, protocol-independent switch fabric, the Cisco MDS 9000 Family addresses the stringent requirements of large data center storage environments – uncompromising high availability, security, scalability, ease of management, and transparent integration of new technologies – for extremely flexible data center SAN solutions. Sharing the same operating system and management interface with other Cisco data center switches, the Cisco MDS 9000 Family enables easy deployment of unified fabrics with high-performance Fibre Channel and Fibre Channel over Ethernet (FCoE) connectivity to achieve low total cost of ownership (TCO).

Cisco Nexus family

The Cisco Nexus family of switches is designed to meet the stringent requirements of the next-generation data center. Not simply bigger or faster, these switches offer the following characteristics:
  • Infrastructure that can be scaled cost-effectively and that helps you increase energy, budget, and resource efficiency.
  • Transport that can navigate the transition to 10 Gigabit Ethernet and unified fabric and can also handle architectural changes such as virtualization, Web 2.0 applications, and cloud computing.
  • Operational continuity to meet your need for an environment where system availability is assumed and maintenance windows are rare, if not eliminated entirely.
All these switches use NX-OS Software, an operating system designed specifically for the data center and engineered for high availability, scalability, and flexibility.
  • Cisco Nexus 7000 Series – Modular switches with zero service loss architecture meet Gigabit Ethernet and 10 Gigabit Ethernet needs and future unified fabric requirements.
  • Cisco Nexus 5000 Series – Rack switches deliver high-performance, low-latency 10 Gigabit Ethernet, Data Center Ethernet (DCE), and Fibre Channel over Ethernet (FCoE).
  • Cisco Nexus 1000v Switch – This software switch integrates directly with the server hypervisor to deliver VN-Link virtual machine-aware network services.

UCS Invicta (Whiptail)

Cisco UCS Invicta Series Solid State Systems harness the power of flash technology to accelerate data-intensive application workloads. By bringing dramatically faster I/O into the computing domain, the Cisco UCS Invicta Series allows businesses to transact, process, and analyze more data in less time. In addition to accelerating applications, Cisco UCS Invicta eliminates labor-intensive processes and simplifies the data center, allowing for maximum operational efficiency and sustained business advantage.
Cisco UCS Invicta Series products include:
  • The Cisco UCS Invicta C3124SA Appliance for I/O acceleration in medium-scale environments
  • The Cisco UCS Invicta Scaling System for organizations requiring enterprise-class scale, capacity, management, and performance.
Stay tuned!!! Thank you for your support, and please subscribe to our YouTube channel, "Youngccnaguru lab".

Understanding Cisco Cloud Fundamentals Objective 5.4

Describe basic NAS storage concepts

Shares / mount points

A CIFS share is a named access point in a volume that enables CIFS clients to view, browse, and manipulate files on a file server.
After an NFS server makes its file systems available to remote clients (exporting), those file systems are attached to the client's operating system and made available to the user (mounting).
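As a quick, hedged illustration, the short Python sketch below reads /proc/mounts on a Linux client to list which NFS or CIFS shares are currently mounted and where (the server name and export path in the comment are made-up examples):

    # List NFS and CIFS mount points visible to a Linux client.
    # Example /proc/mounts line (hypothetical server and export):
    #   filer01:/vol/projects /mnt/projects nfs rw,vers=3 0 0
    NAS_FS_TYPES = {"nfs", "nfs4", "cifs", "smb3"}

    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mount_point, fs_type, options = line.split()[:4]
            if fs_type in NAS_FS_TYPES:
                print(f"{fs_type:5} share {device} is mounted at {mount_point}")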

Permissions

Permissions are generally handled with access control lists (ACLs), much as on a Unix system: the user ID (uid) and group ID (gid) are supplied along with the permission controls. If the NAS device is compatible with Windows Active Directory, then AD can be used to control access.
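To make the uid/gid point concrete, here is a minimal Python sketch (the path and the numeric IDs are hypothetical, and it assumes sufficient privileges on a mounted share) that applies ownership and permission bits to a file on a NAS mount:

    import os
    import stat

    # Hypothetical file on a mounted NAS share.
    path = "/mnt/projects/report.txt"

    # Assign a numeric user ID and group ID (example values).
    os.chown(path, 1001, 2001)          # uid=1001, gid=2001

    # Owner may read/write, group may read, others get no access.
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)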
Q. Do you know what the NFS protocol is designed for?
A. The NFS protocol is designed to be independent of the computer, operating system, network architecture, and transport protocol. 
Stay tuned for our next blog post, where we will explain the various Cisco storage network devices, and please subscribe to our YouTube channel, "Youngccnaguru lab".

Understanding Cisco Cloud Fundamentals Objective 5.3

Describe basic SAN storage concepts

Initiator

An initiator is an application or production system end-point (typically a server) that is capable of initiating a SCSI session, sending SCSI commands and I/O requests. Initiators are identified by unique addressing methods (for example, a Fibre Channel WWPN or an iSCSI qualified name).

Target

A target is a storage system end-point that provides a service of processing SCSI commands and I/O requests from an initiator. A target is created by the storage system's administrator and is identified by unique addressing methods. A target, once configured, consists of zero or more logical units (LUNs).

Zoning

Zones are the basic form of data path security in a Fibre Channel environment. Zones are used to define which end devices (two or more) in a fabric can communicate with each other. Zones are grouped together into zone sets. For the zones to be active, the zone set to which the zones belong needs to be activated. Individual zone members can be part of multiple zones. Zones can be part of multiple zone sets. Multiple zone sets can be defined in a fabric. At any given time, only one zone set can be active.
If zoning is not activated in a fabric, all the end devices are part of the default zone. If zoning is activated, any end devices that are not part of an active zone are part of the default zone. The default zone policy is set either to deny (none of the end devices that are part of the default zone can communicate with each other) or permit (all the devices that are part of the default zone can communicate with each other).
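The rules above boil down to a simple membership check, sketched below in Python with invented zone names and WWPNs: two end devices can communicate only if they share a zone in the active zone set, and devices left in the default zone fall back to the default-zone policy.

    # Hypothetical active zone set: zone name -> set of member WWPNs.
    active_zoneset = {
        "Z_HOST1_ARRAY1": {"10:00:00:00:c9:11:11:11", "50:06:01:60:aa:aa:aa:aa"},
        "Z_HOST2_ARRAY1": {"10:00:00:00:c9:22:22:22", "50:06:01:60:aa:aa:aa:aa"},
    }
    DEFAULT_ZONE_PERMIT = False   # the usual default-zone policy is deny

    def can_communicate(wwpn_a, wwpn_b):
        zoned_a = any(wwpn_a in members for members in active_zoneset.values())
        zoned_b = any(wwpn_b in members for members in active_zoneset.values())
        if not zoned_a and not zoned_b:
            # Both devices sit in the default zone; the policy decides.
            return DEFAULT_ZONE_PERMIT
        # Otherwise they must share at least one zone in the active zone set.
        return any(wwpn_a in z and wwpn_b in z for z in active_zoneset.values())

    print(can_communicate("10:00:00:00:c9:11:11:11", "50:06:01:60:aa:aa:aa:aa"))  # True
    print(can_communicate("10:00:00:00:c9:11:11:11", "10:00:00:00:c9:22:22:22"))  # False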

VSAN

A virtual SAN (VSAN) is a logical grouping of ports in a single switch or across multiple switches that function like a single fabric. A VSAN is isolated from other VSANs in terms of traffic, security, and fabric services. Because of this, changes made to one VSAN do not affect the remaining VSANs, even though they may be present in the same physical SAN infrastructure hardware. Using VSANs, multiple logical SANs can be hosted on a physical SAN hardware infrastructure. A VSAN lends itself to SAN island consolidation on a higher port density physical switch, along with traffic isolation and increased security. Once a VSAN is created, it has all the properties and functions of a SAN.
Multiple VSANs can be defined on a physical switch. Each VSAN requires its own domain ID. A single VSAN can span up to 239 physical switches (a Fibre Channel standards limit). At the current time, a maximum of 256 VSANs are supported on a physical switch.
Using VSANs provides some important advantages:
  • VSAN traffic stays within the VSAN boundaries. Devices can be part of just one VSAN.
  • VSANs allow you to create multiple logical SAN instances on top of a physical SAN infrastructure. This allows for the consolidation of multiple SAN islands onto a physical infrastructure, which minimizes the hardware that needs to be managed.
  • Each VSAN has its own set of fabric services, which allows the SAN infrastructure to be scalable and highly available.
  • Additional SAN infrastructure resources such as VSAN ports can be added and changed as needed without impacting VSAN ports that are already a part of the SAN infrastructure. Moving ports between VSANs is as simple as assigning the port to a different VSAN.
VSANs are numbered from 1 through 4094. VSAN 1 and VSAN 4094 are predefined and have very specific roles. The user-specified VSAN range is from 2 through 4093. VSAN 1 is the default VSAN that contains all ports by default. VSAN 1 is used as a management VSAN. VSAN 4094 is the isolated VSAN into which all orphaned ports are assigned. Devices that are part of VSAN 4094 cannot communicate with each other.
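Those numbering rules are easy to capture in a few lines of Python; the sketch below simply classifies a VSAN ID before it would be configured:

    def classify_vsan(vsan_id):
        """Classify a VSAN ID according to the numbering rules above."""
        if vsan_id == 1:
            return "default VSAN (all ports belong here until reassigned)"
        if vsan_id == 4094:
            return "isolated VSAN (orphaned ports; members cannot communicate)"
        if 2 <= vsan_id <= 4093:
            return "user-configurable VSAN"
        return "invalid VSAN ID (valid range is 1-4094)"

    for vsan in (1, 10, 4093, 4094, 5000):
        print(vsan, "->", classify_vsan(vsan))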

LUN

A LUN is a logical reference to a portion of a storage subsystem. A LUN can comprise a disk, a section of a disk, a whole disk array, or a section of a disk array in the subsystem. This logical reference, when it is assigned to a server in your SAN, acts as a physical disk drive that the server can read and write to. Using LUNs simplifies the management of storage resources in your SAN, because they serve as logical identifiers through which you can assign access and control privileges.
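Because a LUN is just a logical identifier, access control is often little more than a mapping from initiators to the LUN numbers they are allowed to see. The Python sketch below models that idea with hypothetical WWPNs and LUN numbers:

    # Hypothetical LUN masking table: initiator WWPN -> set of visible LUNs.
    lun_masking = {
        "10:00:00:00:c9:11:11:11": {0, 1},   # database server sees LUN 0 and 1
        "10:00:00:00:c9:22:22:22": {2},      # backup server sees LUN 2 only
    }

    def visible_luns(initiator_wwpn):
        """Return the LUNs presented to this initiator (empty set if unknown)."""
        return lun_masking.get(initiator_wwpn, set())

    print(visible_luns("10:00:00:00:c9:11:11:11"))   # {0, 1}
    print(visible_luns("10:00:00:00:c9:33:33:33"))   # set()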
Stay tuned!!! We will be sharing more with you in our next blog posts, and please subscribe to our YouTube channel, "Youngccnaguru lab", for more lab-related information.

Understanding Cisco Cloud Fundamentals Objective 5.2

Describe the difference between all the storage access technologies

Difference between SAN AND NAS; block and file

A storage area network (SAN) is storage connected in a fabric (usually through a switch) so that there can be easy access to storage from many different servers. From the server application and operating system standpoint, there is no visible difference in the access of data for storage in a SAN or storage that is directly connected. A SAN supports block access to data just like directly attached storage.
Network-attached storage (NAS) is really remote file serving. Rather than using the software on your own file system, the file access is redirected using a remote protocol such as CIFS or NFS to another device (which is operating as a server of some type with its own file system) to do the file I/O on your behalf. This enables file sharing and centralization of management for data.
So from a system standpoint, the difference between SAN and NAS is that SAN is for block I/O and NAS is for file I/O. One additional thing to remember when comparing SAN vs. NAS is that NAS does eventually turn the file I/O request into a block access for the storage devices attached to it.
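The difference is visible from the host's point of view: block storage shows up as a raw device that the operating system formats and reads in fixed-size chunks, while file storage is reached through a path on a remote share. The Python sketch below contrasts the two; the device node and mount point are hypothetical, and reading a raw device normally requires root privileges.

    import os

    # Block access: read the first 4 KB of a raw SAN-attached device.
    # /dev/sdb is a hypothetical LUN presented over Fibre Channel or iSCSI.
    fd = os.open("/dev/sdb", os.O_RDONLY)
    first_block = os.read(fd, 4096)
    os.close(fd)
    print("read", len(first_block), "bytes from the raw block device")

    # File access: read a file through a NAS share mounted at a path.
    # /mnt/share is a hypothetical NFS or CIFS mount point.
    with open("/mnt/share/notes.txt") as remote_file:
        print(remote_file.read())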
Block level storage
Anyone who has used a Storage Area Network (SAN) has probably used block level storage before. Block level storage presents itself to servers using industry standard Fibre Channel and iSCSI connectivity mechanisms. In its most basic form, think of block level storage as a hard drive in a server except the hard drive happens to be installed in a remote chassis and is accessible using Fibre Channel or iSCSI.
When it comes to flexibility and versatility, you can’t beat block level storage. In a block level storage device, raw storage volumes are created, and then the server-based operating system connects to these volumes and uses them as individual hard drives. This makes block level storage usable for almost any kind of application, including file storage, database storage, virtual machine file system (VMFS) volumes, and more. You can place any kind of file system on block level storage. So, if you’re running Windows, your volumes will be formatted with NTFS; VMware servers will use VMFS.
Although file level storage devices are more often used to share files with users, block storage can serve the same purpose: create a block-based volume, attach it to a server running an operating system, and share files out using that operating system's native capabilities. Remember, when you use a block-based volume, you are basically working with a blank hard drive that you can use for anything.
When it comes to backup, many storage devices include replication-type capabilities, but you still need to think about how to protect your workloads. With this type of storage, it’s not unusual for an organization to be able to use operating system native backup tools or third-party backup tools such as Data Protection Manager (DPM) to back up files. Since the storage looks and acts like a normal hard drive, special backup steps don’t need to be taken.
With regard to management complexity, block-based storage devices tend to be more complex than their file-based counterparts; this is the tradeoff you get for the added flexibility. Block storage device administrators must:
  • Carefully manage and dole out storage on a per server basis.
  • Manage storage protection levels (i.e., RAID).
  • Track storage device performance to ensure that performance continues to meet server and application needs.
  • Manage and monitor the storage communications infrastructure (generally iSCSI or Fibre Channel).
From a use case standpoint, there are a lot of applications that make use of this block-level shared storage, including:
  • Databases. This is especially true when you want to cluster databases, since clustered databases need shared storage.
  • Exchange. Although Microsoft has made massive improvements to Exchange, the company still does not support file level or network-based (as in, CIFS or NFS) storage. Only block level storage is supported.
  • VMware. Although VMware can use file level storage via Network File System (NFS), it’s very common to deploy VMware servers that use shared VMFS volumes on block level storage.
  • Server boot. With the right kind of storage device, servers can be configured to boot from block level storage.
File level storage
Although block level storage is extremely flexible, nothing beats the simplicity of file level storage when all that’s needed is a place to dump raw files. After all, simply having a centralized, highly available, and accessible place to store files and folders remains the most critical need in many organizations. These file level devices — usually Network Attached Storage (NAS) devices — provide a lot of space at what is generally a lower cost than block level storage.
File level storage is usually accessible using common file level protocols such as SMB/CIFS (Windows) and NFS (Linux, VMware). In the block level world, you need to create a volume, deploy an OS, and then attach to the created volume; in the file level world, the storage device handles the files and folders on the device. This also means that, in many cases, the file level storage device or NAS needs to handle user access control and permissions assignment. Some devices will integrate into existing authentication and security systems.
On the backup front, file level storage devices sometimes require special handling since they might run non-standard operating systems, so keep that in mind if you decide to go the file level route.
With the caveat that you may need to take some steps with regard to authentication, permissions, and backup, file level-only devices are usually easier to set up than block level devices. In many cases, the process can be as simple as walking through a short configuration tool and moving forward.
If you’re looking for storage that screams — that is, if you need high levels of storage performance — be very careful with the file level option. In most cases, if you need high levels of performance, you should look at the block level options. Block level devices are generally configurable for capacity and performance. Although file-level devices do have a performance component, capacity is usually the bigger consideration.
File level use cases are generally:
  • Mass file storage. When your users simply need a place to store files, file-level devices can make a lot of sense.
  • VMware (think NFS). VMware hosts can connect to storage presented via NFS in addition to using block level storage.

Block technologies

Fibre Channel
Fibre Channel is a technology for transmitting data between computer devices at data rates of 16 Gbps and beyond. Fibre Channel is especially suited for connecting computer servers to shared storage devices and for interconnecting storage controllers and drives. Because it is far faster than parallel SCSI, Fibre Channel has largely replaced the Small Computer System Interface (SCSI) as the transmission interface between servers and clustered storage devices. Fibre Channel is also more flexible; devices can be as far as ten kilometers (about six miles) apart if optical fiber is used as the physical medium. Optical fiber is not required for shorter distances, however, because Fibre Channel also works over copper cabling such as coaxial cable and twisted pair.
Fibre Channel offers point-to-point, switched, and loop interfaces. It is designed to interoperate with SCSI, the Internet Protocol (IP) and other protocols, but has been criticized for its lack of compatibility – primarily because (like in the early days of SCSI technology) manufacturers sometimes interpret specifications differently and vary their implementations.
The world wide name (WWN) in the switch is equivalent to the Ethernet MAC address. As with the MAC address, you must uniquely associate the WWN to a single device. The principal switch selection and the allocation of domain IDs rely on the WWN. The WWN manager, a process-level manager residing on the switch’s supervisor module, assigns WWNs to each switch.
FC addresses (FCIDs) are assigned by a switch when a node logs in to the fabric, based on the switch's internal representation of its ports. An FCID is a 24-bit address made up of an 8-bit Domain_ID (identifying the switch), an 8-bit Area_ID, and an 8-bit Port_ID.
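A small Python sketch makes the 24-bit layout easier to see (the example FCID is made up):

    def split_fcid(fcid):
        """Split a 24-bit Fibre Channel ID into Domain_ID, Area_ID and Port_ID."""
        domain_id = (fcid >> 16) & 0xFF
        area_id = (fcid >> 8) & 0xFF
        port_id = fcid & 0xFF
        return domain_id, area_id, port_id

    # Hypothetical FCID 0x0A1B2C -> domain 0x0A, area 0x1B, port 0x2C.
    domain, area, port = split_fcid(0x0A1B2C)
    print(f"Domain_ID=0x{domain:02X}  Area_ID=0x{area:02X}  Port_ID=0x{port:02X}")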
iSCSI
iSCSI is a transport layer protocol that describes how Small Computer System Interface (SCSI) packets should be transported over a TCP/IP network.
iSCSI, which stands for Internet Small Computer System Interface, works on top of the Transport Control Protocol (TCP) and allows the SCSI command to be sent end-to-end over local-area networks (LANs), wide-area networks (WANs) or the Internet. IBM developed iSCSI as a proof of concept in 1998, and presented the first draft of the iSCSI standard to the Internet Engineering Task Force (IETF) in 2000. The protocol was ratified in 2003.
iSCSI works by transporting block-level data between an iSCSI initiator on a server and an iSCSI target on a storage device. The iSCSI protocol encapsulates SCSI commands and assembles the data in packets for the TCP/IP layer. Packets are sent over the network using a point-to-point connection. Upon arrival, the iSCSI protocol disassembles the packets, separating the SCSI commands so the operating system (OS) sees the storage as a local SCSI device that can be formatted as usual. Today, some of iSCSI's popularity in small to midsize businesses (SMBs) has to do with the way server virtualization makes use of storage pools. In a virtualized environment, the storage pool is accessible to all the hosts within the cluster, and the cluster nodes communicate with the storage pool over the network through the use of the iSCSI protocol.
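iSCSI targets listen on TCP port 3260 by default, so even a tiny Python sketch can check whether a target portal is reachable before the initiator software attempts a login (the portal address below is a hypothetical example; a real session would continue with the iSCSI login exchange):

    import socket

    portal_ip = "192.0.2.50"     # hypothetical iSCSI target portal
    ISCSI_PORT = 3260            # well-known TCP port for iSCSI

    try:
        with socket.create_connection((portal_ip, ISCSI_PORT), timeout=3):
            print(f"TCP connection to {portal_ip}:{ISCSI_PORT} succeeded")
    except OSError as error:
        print(f"portal not reachable: {error}")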
FCoE
FCoE (Fibre Channel over Ethernet) is a storage protocol that enables Fibre Channel communications to run directly over Ethernet. FCoE makes it possible to move Fibre Channel traffic across existing high-speed Ethernet infrastructure and converges storage and IP protocols onto a single cable transport and interface.
The goal of FCoE is to consolidate input/output (I/O) and reduce switch complexity, as well as to cut back on cable and interface card counts. Adoption of FCoE has been slow, however, due to a scarcity of end-to-end FCoE devices and a reluctance on the part of many organizations to change the way they implement and manage their networks.
Traditionally, organizations have used Ethernet for TCP/IP networks and Fibre Channel for storage networks. Fibre Channel supports high-speed data connections between computing devices that interconnect servers with shared storage devices and between storage controllers and drives. FCoE shares Fibre Channel and Ethernet traffic on the same physical cable or lets organizations separate Fibre Channel and Ethernet traffic on the same hardware.
FCoE uses a lossless Ethernet fabric and its own frame format. It retains Fibre Channel’s device communications but substitutes high-speed Ethernet links for Fibre Channel links between devices.
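On the wire, FCoE frames are ordinary Ethernet frames carrying the FCoE EtherType 0x8906 (the FCoE Initialization Protocol, FIP, uses 0x8914). The Python sketch below parses an untagged Ethernet header and checks for that EtherType; the byte string is a fabricated example, not captured traffic.

    import struct

    FCOE_ETHERTYPE = 0x8906   # Fibre Channel over Ethernet
    FIP_ETHERTYPE = 0x8914    # FCoE Initialization Protocol

    def ethertype_of(frame):
        """Return the EtherType of an untagged Ethernet frame (bytes 12-13)."""
        _dst, _src, ethertype = struct.unpack("!6s6sH", frame[:14])
        return ethertype

    # Fabricated 14-byte Ethernet header carrying the FCoE EtherType.
    sample_header = bytes.fromhex("0efc00000001" "0efc00000002" "8906")
    if ethertype_of(sample_header) == FCOE_ETHERTYPE:
        print("frame carries an encapsulated Fibre Channel frame")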
FCIP
Fibre Channel over IP (FCIP or FC/IP, also known as Fibre Channel tunneling or storage tunneling) is an Internet Protocol (IP)-based storage networking technology developed by the Internet Engineering Task Force (IETF). FCIP mechanisms enable the transmission of Fibre Channel (FC) information by tunneling data between storage area network (SAN) facilities over IP networks; this capability facilitates data sharing over a geographically distributed enterprise. One of two main approaches to storage data transmission over IP networks, FCIP is among the key technologies expected to help bring about rapid development of the storage area network market by increasing the capabilities and performance of storage data transmission.

File Technologies

CIFS
The Common Internet File System (CIFS) is a protocol that gained rapid popularity around the turn of the millennium (the year 2000) as vendors worked to establish an Internet Protocol-based file-sharing protocol. At its peak, CIFS was widely supported by operating systems (OSes) such as Windows, Linux and Unix.
CIFS uses the client/server programming model. A client program makes a request of a server program (usually in another computer) to access a file or to pass a message to a program that runs in the server computer. The server takes the requested action and returns a response.
CIFS is a public or open variation of the original Server Message Block (SMB) protocol developed and used by Microsoft. Like the SMB protocol, CIFS runs at a higher level and uses the Internet’s TCP/IP protocol. CIFS was viewed as a complement to existing Internet application protocols such as the File Transfer Protocol (FTP) and the Hypertext Transfer Protocol (HTTP). Today, CIFS is widely regarded as an obsolete protocol. Although some OSes still support CIFS, newer versions of the SMB protocol — such as SMB 2.0 and SMB 3.0 — have largely taken the place of CIFS.
Some capabilities of the CIFS protocol include:
  • The ability to access files that are local to the server and to read and write to them
  • File sharing with other clients using special locks
  • Automatic restoration of connections in case of network failure
  • Unicode file names
NFS
The Network File System (NFS) is a client/server application that lets a computer user view, and optionally store and update, files on a remote computer as though they were on the user's own computer. The user's system needs an NFS client and the other computer needs the NFS server. Both also require TCP/IP, since the NFS server and client use TCP/IP to send the files and updates back and forth. (However, the User Datagram Protocol, UDP, which comes with TCP/IP, is used instead of TCP with earlier versions of NFS.)
NFS was developed by Sun Microsystems and has been designated a file server standard. Its protocol uses the Remote Procedure Call (RPC) method of communication between computers.
Using NFS, the user or a system administrator can mount all or a portion of a file system (which is a portion of the hierarchical tree in any file directory and subdirectory, including the one you find on your PC or Mac). The portion of your file system that is mounted (designated as accessible) can be accessed with whatever privileges go with your access to each file (read-only or read-write).
Stay tuned for our next blog post, where we will be learning about basic SAN storage concepts, and please subscribe to our YouTube channel, "Youngccnaguru lab".

Understanding Cisco Cloud Fundamentals Objective 5.1

Describe storage provisioning concepts

Thick

Thick provisioning is a type of storage allocation in which the amount of storage capacity on a disk is pre-allocated on physical storage at the time the disk is created. This means that creating a 100GB virtual disk actually consumes 100GB of physical disk space, which also means that the physical storage is unavailable for anything else, even if no data has been written to the disk.

Thin

Thin provisioning is a method of optimizing the efficiency with which the available space is utilized in storage area networks (SAN). Thin provisioning operates by allocating disk storage space in a flexible manner among multiple users, based on the minimum space required by each user at any given time.
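A file-level analogy makes the thick-versus-thin difference concrete: a thick-provisioned disk is like a file whose full size is written out up front, while a thin-provisioned disk is like a sparse file that consumes physical blocks only as data is written. The Python sketch below assumes Linux-style sparse-file behaviour; the file names and size are arbitrary.

    import os

    ONE_MIB = 1024 * 1024
    size = 100 * ONE_MIB                      # a 100 MiB "virtual disk"

    # "Thick": write every byte, so physical space is consumed immediately.
    with open("thick.img", "wb") as f:
        f.write(b"\0" * size)

    # "Thin": seek to the end and write one byte; most filesystems create
    # a sparse file and allocate blocks only when data actually lands in it.
    with open("thin.img", "wb") as f:
        f.seek(size - 1)
        f.write(b"\0")

    for name in ("thick.img", "thin.img"):
        st = os.stat(name)
        print(f"{name}: apparent size {st.st_size // ONE_MIB} MiB, "
              f"allocated {st.st_blocks * 512 // ONE_MIB} MiB")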

RAID

RAID (redundant array of independent disks; originally redundant array of inexpensive disks) provides a way of storing the same data in different places (thus, redundantly) on multiple hard disks (though not all RAID levels provide redundancy). Placing data on multiple disks lets input/output (I/O) operations overlap in a balanced way, improving performance. Because using more disks increases the chance that some drive will fail, lowering the array's overall mean time between failures (MTBF), data is also stored redundantly to preserve fault tolerance.
RAID arrays appear to the operating system (OS) as a single logical hard disk. RAID employs the technique of disk mirroring or disk striping, which involves partitioning each drive’s storage space into units ranging from a sector (512 bytes) up to several megabytes. The stripes of all the disks are interleaved and addressed in order.
RAID 0: This configuration has striping but no redundancy of data. It offers the best performance but no fault-tolerance.
RAID 1: Also known as disk mirroring, this configuration consists of at least two drives that duplicate the storage of data. There is no striping. Read performance is improved since reads can be serviced by either disk. Write performance is the same as for single disk storage.
RAID 2: This configuration uses striping across disks with some disks storing error checking and correcting (ECC) information. It has no advantage over RAID 3 and is no longer used.
RAID 3: This technique uses striping and dedicates one drive to storing parity information. The embedded ECC information is used to detect errors. Data recovery is accomplished by calculating the exclusive OR (XOR) of the information recorded on the other drives. Since an I/O operation addresses all drives at the same time, RAID 3 cannot overlap I/O. For this reason, RAID 3 is best for single-user systems with long record applications.
RAID 4: This level uses large stripes, which means you can read records from any single drive, allowing overlapped I/O for read operations. Since all write operations have to update the dedicated parity drive, however, write I/O cannot be overlapped. RAID 4 offers no advantage over RAID 5.
RAID 5: This level is based on block-level striping with parity. The parity information is striped across each drive, allowing the array to function even if one drive were to fail. The array’s architecture allows read and write operations to span multiple drives. This results in performance that is usually better than that of a single drive, but not as high as that of a RAID 0 array. RAID 5 requires at least three disks, but it is often recommended to use at least five disks for performance reasons.
RAID 5 arrays are generally considered to be a poor choice for use on write-intensive systems because of the performance impact associated with writing parity information. When a disk does fail, it can take a long time to rebuild a RAID 5 array. Performance is usually degraded during the rebuild time and the array is vulnerable to an additional disk failure until the rebuild is complete.
RAID 6: This technique is similar to RAID 5 but includes a second parity scheme that is distributed across the drives in the array. The use of additional parity allows the array to continue to function even if two disks fail simultaneously. However, this extra protection comes at a cost. RAID 6 arrays have a higher cost per gigabyte (GB) and often have slower write performance than RAID 5 arrays.
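The parity arithmetic behind RAID 5 (and the XOR-based recovery mentioned for RAID 3) can be demonstrated in a few lines of Python; the "stripes" here are just tiny byte strings used for illustration.

    def xor_blocks(*blocks):
        """Byte-wise XOR of equally sized blocks."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, value in enumerate(block):
                result[i] ^= value
        return bytes(result)

    # Data stripes on three drives of a four-drive RAID 5 set.
    d0 = b"AAAA"
    d1 = b"BBBB"
    d2 = b"CCCC"

    # Parity stripe written to the fourth drive.
    parity = xor_blocks(d0, d1, d2)

    # Simulate losing drive 1: its data is the XOR of everything that survives.
    recovered_d1 = xor_blocks(d0, d2, parity)
    print(recovered_d1 == d1)          # True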

Disk pools

A disk pool is a software definition of a group of disk units on your system.
A disk pool does not necessarily correspond to the physical arrangement of disks. Conceptually, each disk pool on your system is a separate pool of disk units for single-level storage. The system spreads data across the disk units within a disk pool. If a disk failure occurs, you need to recover only the data in the disk pool that contained the failed disk unit.
Your system may have many disk units attached to it for disk pool storage. To your system, they look like a single disk unit of storage. The system spreads data across all disk units. You can use disk pools to separate your disk units into logical subsets. When you assign the disk units on your system to more than one disk pool, each disk pool can have different strategies for availability, backup and recovery, and performance.
Disk pools provide a recovery advantage if the system experiences a disk unit failure resulting in data loss. If this occurs, recovery is only required for the objects in the disk pool that contained the failed disk unit. System objects and user objects in other disk pools are protected from the disk failure.

Stay tuned!!! More information will be shared through our blog, and please subscribe to our YouTube channel, "Youngccnaguru lab".
