
Friday, March 28, 2008

Free tutorial on File Systems and Network Attached Storage (NAS): Local File Systems, Databases and Journaling; Volume Manager

Disk subsystems provide block-oriented storage. For end users and for higher applications, the handling of blocks addressed via cylinders, tracks and sectors is very cumbersome. File systems therefore represent an intermediate layer in the operating system that provides users with the familiar directories (or folders) and files and stores these on the block-oriented storage media, so that the blocks remain hidden from the end users. This chapter introduces the basics of file systems and shows the role that they play in connection with storage networks. It first describes the fundamental requirements that are imposed upon file systems (Section 4.1). Then network file systems, file servers and the Network Attached Storage (NAS) product category are introduced (Section 4.2). We then show how shared disk file systems can achieve significantly higher performance than classical network file systems (Section 4.3). The chapter concludes with a comparison of block-oriented storage networks (Fibre Channel SAN, iSCSI SAN) and Network Attached Storage (NAS).

Local file systems and databases

File systems form an intermediate layer between block-oriented hard disks and applications, with a volume manager often being used between the file system and the hard disk (Figure 4.1). Together, these manage the blocks of the disk and make them available to users and applications via the familiar directories and files. File systems and volume managers provide their services to numerous applications with various load profiles. This means that they are generic applications; their performance is not generally optimized for a specific application. Database systems such as DB2 or Oracle can get around the file system and manage the blocks of the hard disk themselves (Figure 4.2). Although this can increase the performance of the database, it makes the management of the database more difficult. In practice, therefore, database systems are usually configured to store their data in files that are managed by a file system. If more performance is required for a specific database, database administrators generally prefer to pay for higher-performance hardware rather than reconfigure the database to store its data directly upon the block-oriented hard disks.

Journaling

In addition to the basic services, modern file systems provide three functions: journaling, snapshots and dynamic file system expansion. Journaling is a mechanism that guarantees the consistency of the file system even after a system crash. To this end, the file system first writes every change to a log file that is invisible to applications and end users, before making the change in the file system itself. After a system crash the file system only has to run through the end of the log file in order to restore the consistency of the file system. In file systems without journaling, typically older file systems like Microsoft's FAT32 file system or the UFS file system that is widespread in Unix systems, the consistency of the entire file system has to be checked after a system crash (file system check); in large file systems this can take several hours. In such file systems it can therefore take several hours after a system crash, depending upon the size of the file system, before the data and thus the applications are back in operation.
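As a rough illustration of this write-ahead idea, here is a minimal sketch in Python. The JournalingStore class, its file names and its JSON record format are all invented for this example; a real file system journals metadata (and sometimes data) blocks, not JSON records.

```python
import json
import os

class JournalingStore:
    """Toy key-value store illustrating write-ahead journaling.

    Every change is appended to a journal *before* it is applied to
    the data file, so after a crash the journal can be replayed to
    restore a consistent state.
    """

    def __init__(self, data_path="data.json", journal_path="journal.log"):
        self.data_path = data_path
        self.journal_path = journal_path
        self.data = {}
        if os.path.exists(data_path):
            with open(data_path) as f:
                self.data = json.load(f)
        self._replay_journal()  # crash recovery: run through the log

    def _replay_journal(self):
        if not os.path.exists(self.journal_path):
            return
        with open(self.journal_path) as f:
            for line in f:
                try:
                    entry = json.loads(line)
                except ValueError:
                    break  # torn write at the crash point; stop here
                self.data[entry["key"]] = entry["value"]
        self._checkpoint()

    def put(self, key, value):
        # 1. Log the intended change and force it to disk.
        with open(self.journal_path, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
            f.flush()
            os.fsync(f.fileno())
        # 2. Only then apply the change to the data itself.
        self.data[key] = value
        self._checkpoint()

    def _checkpoint(self):
        with open(self.data_path, "w") as f:
            json.dump(self.data, f)
        open(self.journal_path, "w").close()  # truncate the journal
```

Recovery here only has to replay (at most) the tail of the log, which is exactly why a journaled file system comes back up in seconds rather than running a full file system check.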

Snapshots provide the same function as the instant copies that are familiar from disk subsystems (cf. Section 2.7.1). Snapshots freeze the state of a file system at a given point in time. Applications and end users can access the frozen copy via a special path. As is the case for instant copies, the creation of the copy takes only a few seconds. Likewise, when creating a snapshot, care should be taken to ensure that the state of the frozen data is consistent.

Table 4.1 compares instant copies and snapshots. An important advantage of snapshots is that they can be realized with any hardware. On the other hand, instant copies within a disk subsystem place less load on the CPU and the buses of the server, thus leaving more system resources for the actual applications.

Table 4.1 Snapshots are hardware-independent; however, they load the server's CPU

                       Instant copy                        Snapshot
Place of realization   Disk subsystem                      File system
Resource consumption   Loads the disk subsystem's          Loads the server's CPU
                       controller and its buses            and all buses
Availability           Depends upon the disk subsystem     Depends upon the file system
                       (hardware-dependent)                (hardware-independent)
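To give a feel for how a snapshot can freeze a state in seconds without copying all the data, here is a minimal copy-on-write sketch in Python. The CowStore class and its methods are invented for this illustration; real file systems apply the same idea at the level of disk blocks.

```python
class CowStore:
    """Toy store illustrating copy-on-write snapshots.

    Taking a snapshot only saves a reference to the current state,
    so it completes in (near) constant time; data is copied lazily,
    on the first write after the snapshot.
    """

    def __init__(self):
        self.live = {}         # current, writable state
        self.snapshots = {}    # name -> frozen view
        self._diverged = True  # no snapshot yet; write in place

    def snapshot(self, name):
        self.snapshots[name] = self.live  # freeze: no data copied
        self._diverged = False

    def write(self, key, value):
        if not self._diverged:
            # First write after a snapshot: copy, so the frozen
            # view stays untouched (real file systems copy blocks).
            self.live = dict(self.live)
            self._diverged = True
        self.live[key] = value

store = CowStore()
store.write("config", "v1")
store.snapshot("before-upgrade")
store.write("config", "v2")
print(store.snapshots["before-upgrade"]["config"])  # -> v1
print(store.live["config"])                         # -> v2
```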

Volume manager

The volume manager is an intermediate layer within the operating system between the file system or database and the actual hard disks. The most important basic function of the volume manager is to aggregate several hard disks to form a large virtual hard disk and to make just this virtual hard disk visible to higher layers. Most volume managers provide the option of breaking this virtual disk back down into several smaller virtual hard disks and of enlarging or reducing these (Figure 4.3). This virtualization within the volume manager makes it possible for system administrators to react quickly to the changed storage requirements of applications such as databases and file systems.

Depending upon its implementation, the volume manager can provide the same functions as a RAID controller (Section 2.4) or an intelligent disk subsystem (Section 2.7). As with snapshots, functions such as RAID, instant copies and remote mirroring are here realized in a hardware-independent manner in the volume manager. Likewise, a RAID controller or an intelligent disk subsystem can take the pressure off the resources of the server if the corresponding functions are moved to the storage devices. The realization of RAID in the volume manager places load not only on the server's CPU but also on its buses.
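A minimal sketch of this aggregation idea in Python follows. The Disk and LinearVolumeManager classes are invented for illustration, and the mapping shown is simple concatenation; a real volume manager additionally offers striping, mirroring and online resizing.

```python
class Disk:
    """A physical disk: a fixed number of equally sized blocks."""
    def __init__(self, name, num_blocks):
        self.name = name
        self.num_blocks = num_blocks

class LinearVolumeManager:
    """Concatenates several disks into one large virtual disk and
    translates virtual block numbers into (disk, block) addresses."""
    def __init__(self, disks):
        self.disks = disks

    @property
    def size(self):
        return sum(d.num_blocks for d in self.disks)

    def map_block(self, virtual_block):
        for disk in self.disks:
            if virtual_block < disk.num_blocks:
                return disk.name, virtual_block
            virtual_block -= disk.num_blocks
        raise ValueError("virtual block beyond end of volume")

vm = LinearVolumeManager([Disk("disk0", 1000), Disk("disk1", 2000)])
print(vm.size)              # -> 3000 blocks in one virtual disk
print(vm.map_block(500))    # -> ('disk0', 500)
print(vm.map_block(1500))   # -> ('disk1', 500)
```

The higher layers only ever see the virtual block numbers, which is what lets the administrator rearrange the physical disks underneath without reconfiguring the file system or database above.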

Thursday, March 27, 2008

Learn more on the basics: Interoperability of Fibre Channel SAN

Fibre Channel SANs are currently being used successfully in production environments. Nevertheless, interoperability is an issue with Fibre Channel SAN, as with all new cross-manufacturer technologies. When discussing the interoperability of Fibre Channel SAN we must differentiate between the interoperability of the underlying Fibre Channel network layer, the interoperability of the Fibre Channel application protocols, such as FCP (SCSI over Fibre Channel), and the interoperability of the applications running on the Fibre Channel SAN.

The interoperability of Fibre Channel SAN stands or falls with the interoperability of FCP. FCP is the protocol mapping of the FC-4 layer that maps the SCSI protocol onto a Fibre Channel network (Section 3.3.8). FCP is a complex piece of software that can only be implemented in the form of a device driver. The implementation of hardware-near device drivers is in itself a task that attracts errors as if by magic. The developers of FCP device drivers must therefore test extensively and thoroughly.

Two general conditions make it more difficult to test FCP device drivers. First, the server initiates the data transfer by means of the SCSI protocol; the storage device only responds to the requests of the server. However, the idea of storage networks is to consolidate storage devices, i.e. for many servers to share a few large storage devices. With storage networks, therefore, a single storage device must be able to serve several parallel requests from different servers simultaneously. For example, it is typical for a server to be exchanging data with a storage device just when another server is scanning the Fibre Channel SAN for available storage devices. This situation requires end devices to be able to multitask. When testing multitasking systems the race conditions between the tasks to be performed come to bear: a delay of just a few milliseconds can lead to a completely different test result.

The second difficulty encountered during testing is the large number of components that come together in a Fibre Channel SAN. Even when a single server is connected to a single storage device via a single switch, there are numerous possible combinations that cannot all be tested. If, for example, a Windows server is selected, there is still the choice between NT, 2000 and 2003, each with different service packs. Several manufacturers offer several different models of Fibre Channel host bus adapter card for the server. If we take into account the various firmware versions for the Fibre Channel host bus adapter cards, we already have more than 50 combinations before we even select a switch.

Companies want to use their storage network to connect servers and storage devices from various manufacturers, some of which are already present. The manufacturers of Fibre Channel components (servers, switches and storage devices) must therefore perform interoperability tests in order to guarantee that these components work with devices from third-party manufacturers. Right at the top of the priority list are those combinations that are required by most customers, because this is where the expected profit is highest. The result of the interoperability tests is a so-called support matrix. It specifies, for example, which storage device supports which server model with which operating system versions and Fibre Channel cards. Manufacturers of servers and storage devices often limit the Fibre Channel switches that can be used. Therefore, before building a Fibre Channel SAN you should carefully check whether the manufacturers in question state that they support the planned configuration. If the desired configuration is not listed, you can negotiate with the manufacturer about paying a surcharge to secure manufacturer support. Although non-supported configurations can work very well, if problems occur you are left without support in critical situations. If in any doubt you should therefore look for alternatives right at the planning stage.

All this seems absolutely terrifying at first glance. However, manufacturers now support a large number of different configurations. If the manufacturers' support matrices are taken into consideration, robust Fibre Channel SANs can now be operated. The operation of up-to-date operating systems such as Windows NT/2000, AIX, Solaris, HP-UX and Linux is particularly unproblematic.
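To make the idea of the support matrix mentioned above concrete, here is a small hypothetical sketch in Python. All entries and device names are invented; a real support matrix is far larger and is published and maintained by the manufacturer.

```python
# Hypothetical support matrix: which (server OS, HBA model, switch)
# combinations a fictitious storage device vendor claims to support.
SUPPORT_MATRIX = {
    ("Windows 2000 SP4", "HBA-A 2Gbit", "Switch-X"): "supported",
    ("Windows 2000 SP4", "HBA-B 2Gbit", "Switch-X"): "supported",
    ("Solaris 9",        "HBA-A 2Gbit", "Switch-Y"): "supported",
    # everything else is untested and therefore unsupported
}

def check_configuration(os_version, hba, switch):
    """Return the support status of a planned configuration."""
    status = SUPPORT_MATRIX.get((os_version, hba, switch))
    return status if status else "not supported: negotiate or redesign"

print(check_configuration("Windows 2000 SP4", "HBA-A 2Gbit", "Switch-X"))
print(check_configuration("AIX 5.2", "HBA-A 2Gbit", "Switch-X"))
```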

Fibre Channel SANs are based upon Fibre Channel networks. The incompatibility of the fabric and arbitrated loop topologies and the networking of fabrics and arbitrated loops have already been discussed in Section 3.4.3. Within the fabric, the incompatibility of Fibre Channel switches from different manufacturers should also be mentioned. At the end of 2003 we still recommend that only the switches and directors of a single manufacturer be used when installing a Fibre Channel SAN. Routing between switches and directors of different manufacturers may work as expected, and basic functions of the fabric topology such as aliasing, name server and zoning work well across different vendors in so-called 'compatibility modes'. Bear in mind, however, that there is still only a very small installed base of mixed switch vendor configurations. A standard has been passed that addresses the interoperability of these basic functions, so it is now just a matter of time before they work across every manufacturer's products. For new functions such as SAN security, inter-switch link trunking or B-Ports, however, teething troubles with interoperability must once again be expected.

In general, applications can be subdivided into higher applications, which model and support the business processes, and system-based applications such as file systems, databases and back-up systems. The system-based applications are of particular interest from the point of view of storage networks and storage management. The compatibility of network file systems such as NFS and CIFS is now taken for granted and hardly ever queried. As storage networks penetrate into the field of file systems, cross-manufacturer standards are becoming ever more important in this area too. A first offering is the Network Data Management Protocol (NDMP, Section 7.9.4) for the back-up of NAS servers. Further down the road we also expect customer demand for cross-vendor standards in the emerging field of storage virtualization (Chapter 5).

The subject of interoperability will preoccupy manufacturers and customers in the field of storage networks for a long time to come. Virtual Interface Architecture (VIA), InfiniBand and Remote Direct Memory Access (RDMA) are emerging technologies that must also work in a cross-manufacturer manner. The same applies to Internet SCSI (iSCSI) and its variants such as iSCSI Extensions for RDMA (iSER). iSCSI transmits the SCSI protocol via TCP/IP and, for example, Ethernet. Just like FCP, iSCSI has to serialize the SCSI protocol bit by bit and map it onto a complex network topology. Interoperability will therefore also play an important role in iSCSI.
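As a rough illustration of what serializing the SCSI protocol means, here is a toy sketch in Python that packs a simplified SCSI-like read command into bytes for transport over TCP/IP. The field layout is invented and far simpler than the real iSCSI PDU format defined in RFC 3720.

```python
import struct

def pack_toy_read_command(lun, lba, num_blocks):
    """Pack a simplified SCSI-like READ command into bytes.

    The layout (1-byte opcode, 1-byte LUN, 8-byte LBA, 4-byte
    block count) is invented for illustration; real iSCSI wraps a
    genuine SCSI CDB in a 48-byte Basic Header Segment.
    """
    OPCODE_READ = 0x28
    return struct.pack(">BBQI", OPCODE_READ, lun, lba, num_blocks)

def unpack_toy_read_command(payload):
    opcode, lun, lba, num_blocks = struct.unpack(">BBQI", payload)
    return {"opcode": hex(opcode), "lun": lun,
            "lba": lba, "blocks": num_blocks}

pdu = pack_toy_read_command(lun=0, lba=123456, num_blocks=8)
print(len(pdu), "bytes on the wire:", pdu.hex())
print(unpack_toy_read_command(pdu))
```

Both sides must agree on every byte of such a layout, for every command and every error case, which is why cross-vendor interoperability of protocol mappings like FCP and iSCSI is hard-won.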

Wednesday, March 26, 2008

Basics: Hardware components for Fibre Channel SAN

Within the scope of this book we can only introduce the most important product groups. An overview of specific products or a detailed description of individual products is not worthwhile due to the short product cycles. This section mentions once again some product groups that have been discussed previously and introduces some that have not yet been discussed.

It is self-evident that servers and storage devices are connected to a Fibre Channel network. In the server this can be achieved by fitting the host bus adapter cards (HBAs) of different manufacturers, with each manufacturer offering different HBAs with differing performance features. In storage devices the same HBAs are normally used; however, the manufacturers of storage devices restrict the selection of HBAs. Of course, cables and connectors are required for cabling. In Section 3.3.2 we discussed different copper and fiber-optic cables and their properties. Various connector types are currently on offer for all cable types. It may sound banal, but in practice the installation of a Fibre Channel SAN is sometimes delayed because the connectors on the cable do not fit the connectors on the end devices, hubs and switches, and a suitable adapter is not to hand.

A further, initially improbable but important device is the so-called Fibre Channel-to-SCSI bridge. As the name suggests, a Fibre Channel-to-SCSI bridge creates a connection between Fibre Channel and SCSI (Figure 3.30). These bridges have two important fields of application. First, old storage devices often cannot be converted from SCSI to Fibre Channel. If the old devices are still functional, they can continue to be used in the Fibre Channel SAN through the deployment of a Fibre Channel-to-SCSI bridge. Second, new tape libraries in particular often initially support only SCSI; the conversion to Fibre Channel is often not planned until later. With a Fibre Channel-to-SCSI bridge the newest tape libraries can be operated directly in a Fibre Channel SAN and Fibre Channel connections retrofitted as soon as they become available. Unfortunately, the manufacturers have not agreed upon a consistent name for this type of device: in addition to Fibre Channel-to-SCSI bridge, terms such as SAN router or storage gateway are also common.

The switch is the control centre of the fabric topology. It provides routing, aliasing, name server and zoning functions. Fibre Channel switches support both cut-through routing and the buffering of frames. In new switches, a number of ports between eight and about 250 and a data transfer rate of 200 MByte/s should currently (2003) be viewed as standard. In Fibre Channel SANs that have already been installed, however, there is a large base of switches that still work at 100 MByte/s.


Resilient, enterprise-class switches are commonly referred to as 'directors', named after the switching technology used in mainframe ESCON cabling. Like Fibre Channel switches, they provide routing, aliasing, name server and zoning functions. Fibre Channel directors are designed to avoid any single point of failure, having, for instance, two backplanes and two controllers. Current directors (2003) have between 64 and 256 ports.

Designing a SAN often raises the question of whether several complementary switches or a single director should be preferred. As described, directors are more fault-tolerant than switches, but they are more expensive per port. Therefore, designers of small entry-level SANs commonly choose two complementary Fibre Channel switches, with mutual traffic fail-over in case of a switch or an I/O path failure (Figure 3.31). Designers of larger Fibre Channel SANs often favour directors due to the number of ports currently available per device and the resulting layout simplicity. However, this argument in favour of directors is becoming obsolete, since switches with a greater number of ports are available today as well.

SANs running especially critical applications, e.g. stock market banking or flight control, would use complementary directors with mutual traffic fail-over, even though these directors already avoid internal single points of failure. This is like wearing braces in addition to a belt: it protects against double or triple failures. In less critical cases, a single director or a dual complementary switch solution will be considered sufficient. If we disregard the number of ports and the cost, the decision for a switch or a director in an Open Systems Fibre Channel network primarily comes down to the fault-tolerance of an individual component.


For the sake of simplicity we will use the term 'Fibre Channel switch' throughout this book in place of 'Fibre Channel switch or Fibre Channel director'.

A hub simplifies the cabling of an arbitrated loop. Hubs are transparent from the point of view of the connected devices. This means that hubs pass on the signals of the connected devices; in contrast to a Fibre Channel switch, however, the connected devices do not communicate with the hub. Hubs change the physical cabling from a ring to a star shape. Hubs bridge across defective and switched-off devices, so that the physical ring is maintained for the other devices. The arbitrated loop protocol is located above this cabling.

Hubs are divided into unmanaged hubs, managed hubs and switched hubs. Unmanaged hubs are the cheap version: they can only bridge across switched-off devices. They can neither intervene in the event of protocol infringements by an end device nor indicate the state of the hub or the arbitrated loop to the outside world. This means that an unmanaged hub cannot itself notify the administrator if one of its components is defective. A very cost-conscious administrator can build a small SAN from PC systems, JBODs and unmanaged hubs. However, the upgrade path to a large Fibre Channel SAN is difficult: in larger Fibre Channel SANs it is questionable whether the economical purchase costs compensate for the higher administration costs.

In contrast to unmanaged hubs, managed hubs have administration and diagnosis functions like those that are a matter of course in switches and directors. Managed hubs monitor the power supply, the serviceability of fans, the temperature and the status of the individual ports. In addition, some managed hubs can, whilst remaining invisible to the connected devices, intervene in higher Fibre Channel protocol layers, for example to deactivate the port of a device that frequently sends invalid Fibre Channel frames. Managed hubs, like switches and directors, can inform the system administrator about events via serial interfaces, Telnet, HTTP and SNMP (see also Chapter 8).

Finally, the switched hub is midway between a hub and a switch. In addition to the properties of a managed hub, a switched hub allows several end devices to exchange data at full bandwidth. Fibre Channel switched hubs are cheaper than Fibre Channel switches, so in some cases they represent a cheap alternative to switches. However, it should be noted that only 126 devices can be connected together via hubs and that services such as aliasing and zoning are not available. Furthermore, the protocol cost for the connection or removal of a device in a loop is somewhat higher than in a fabric (keyword: 'Loop Initialisation Primitive Sequence', 'LIP').

Finally, so-called link extenders should also be mentioned. Fibre Channel supports a maximum cable length of several tens of kilometres (Section 3.3.2).

A link extender can increase the maximum cable length of Fibre Channel by transmitting Fibre Channel frames using MAN/WAN techniques such as ATM, SONET or TCP/IP (Figure 3.32). When using link extenders it should be borne in mind that long distances between end devices significantly increase the latency of a connection. Time-critical applications such as database transactions should therefore not run over a link extender. On the other hand, Fibre Channel SANs with link extenders offer new possibilities for applications such as back-up, data sharing and asynchronous data mirroring.
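For a back-of-the-envelope feel for this latency, the following Python snippet assumes a signal speed in fibre of roughly 200,000 km/s (about 5 microseconds per kilometre), a commonly used approximation; protocol and equipment overheads come on top of this.

```python
# Propagation delay over fibre, assuming ~200,000 km/s signal speed
# (a common rule of thumb: about 5 microseconds per kilometre).
SPEED_KM_PER_S = 200_000

def round_trip_ms(distance_km):
    """Round-trip propagation delay in milliseconds (one request,
    one response), ignoring protocol and equipment overheads."""
    return 2 * distance_km / SPEED_KM_PER_S * 1000

for km in (10, 100, 1000):
    print(f"{km:>5} km: {round_trip_ms(km):.2f} ms round trip")
# 10 km adds about 0.1 ms per I/O, but 1000 km adds about 10 ms,
# which is painful for synchronous database transactions.
```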

Fibre Channel SAN is a comparatively new technology. In many data centres in which Fibre Channel SANs are used, it is currently (2003) more likely that there will be several islands of small Fibre Channel SANs than one large Fibre Channel SAN (Figure 3.33). Over 80% of the installed Fibre Channel SANs consist of no more than four Fibre Channel switches. A server can only access data stored on a different SAN indirectly, via the LAN and a second server. The reasons for the islands of small Fibre Channel SANs are that they are simpler to manage than one large Fibre Channel SAN and that it was often unnecessary to install a large one.

Originally, Fibre Channel SAN was used only as an alternative to SCSI cabling. Until now, the possibility of flexibly dividing the capacity of a storage device between several servers (storage pooling) and the improved availability of dual SANs have been the main reasons for the use of Fibre Channel SANs. Both can be realized very well with several small Fibre Channel SAN islands. However, more and more applications are now exploiting the possibilities offered by a Fibre Channel SAN.


Applications such as back-up (Chapter 7), remote data mirroring and data sharing over Fibre Channel SAN and storage virtualization (Chapter 5) require that all servers and storage devices are connected via a single SAN. Incidentally, the connection of Fibre Channel SAN islands to form one large SAN could be a field of application in which a Fibre Channel director is preferable to a Fibre Channel switch (Figure 3.34). As yet these connections are generally not critical. In the future, however, this could change (extreme case: virtualization over several data centres). In our opinion these connection points between two storage networks tend to represent a single point of failure, so they should be designed to be particularly fault-tolerant.

