
Tuesday, September 2, 2008

How to Make Money Online: Learn Secrets for FREE from Experts

Many of you are searching for a way to make money online. Here is a simple, EASY and FREE way to learn how to make money online.

You can make money online if you have a service or a product that can be sold, or you can earn a little from simple things like writing articles and creating content. With those things alone you might make just a few hundred dollars a month. But if you go through this link, "Search Engine Optimization", you can make a lot more money, because Search Engine Optimization brings you hundreds of visitors who are actively looking for the service or product you are selling.

This is FREE, which is why I am writing about it. Go here: "FREE Secrets to Make Money Online".

This is not some cheap ebook. They will send you a video DVD, along with a lot more, for almost FREE, and this DVD contains several videos that explain how to make money online.

Go here, order it for FREE and watch the video; you will see that what they are giving away for FREE is worth a thousand dollars.

This product is from the industry-leading team called StomperNet. Lots of people pay them to get the same secrets.

------
Subject: "Stomping the Search Engines 2" and "The Net Effect"
for HOW MUCH?

Hey

Andy Jenkins has finally given me the all-clear to spill the
beans on this insane offer that StomperNet has cooked up.

Tomorrow, Sept. 3rd at 3pm Eastern, you can get StomperNet's
big daddy expert SEO Video Course, "Stomping the Search Engines
2"... for FREE.

That's right. FREE.

All you need to do is just TRY their new monthly printed Action
Journal called "The Net Effect" - and guess what?...

You get the PREMIER ISSUE of "The Net Effect" for FREE TOO!

You don't pay one penny more than Shipping and Handling unless
you LOVE it and want to get issue 2 a month from now.

That's NUTS. They are betting the FARM that you will LOVE this
stuff and stick around for more. That takes GUTS, and HUGE
confidence in the quality of their stuff.

But then again, it's StomperNet. I've SEEN the stuff, and can
vouch. It would be worth FULL PRICE.

But for FREE? You'd be FOOLISH not to check this out.

Don't believe it? Watch this video they've released to the
public. No fooling - this is a FOR-REAL DEAL.

https://member.stompernet.net/?r=1324&i=68

This MIGHT just change your online business fortunes...
forever.

P.S. There's no hint of scarcity here - they've got tons of
BOTH products ready to ship. But still - be there EARLY. If I
hadn't already gotten my "insider" review copy, I'd be the
FIRST one on this page tomorrow.

Friday, March 28, 2008

Free Tutorials on File Systems and Network Attached Storage (NAS): Local File Systems, Databases and Journaling, Volume Manager


File systems form an intermediate layer between block-oriented hard disks and applications, with a volume manager often being used between the file system and the hard disk (Figure 4.1). Together, these manage the blocks of the disk and make them available to users and applications via the familiar directories and files. Disk subsystems provide block-oriented storage; for end users and for higher applications the handling of blocks addressed via cylinders, tracks and sectors is very cumbersome. File systems therefore represent an intermediate layer in the operating system that provides users with the familiar directories (or folders) and files and stores these on the block-oriented storage media, hiding the blocks from end users.

This chapter introduces the basics of file systems and shows the role they play in connection with storage networks. It first describes the fundamental requirements that are imposed upon file systems (Section 4.1). Then network file systems, file servers and the Network Attached Storage (NAS) product category are introduced (Section 4.2). We then show how shared disk file systems can achieve significantly higher performance than classical network file systems (Section 4.3). The chapter concludes with a comparison of block-oriented storage networks (Fibre Channel SAN, iSCSI SAN) and Network Attached Storage (NAS).

LOCAL FILE SYSTEMS AND DATABASES

File systems and volume managers provide their services to numerous applications with various load profiles. This means that they are generic applications; their performance is not optimized for one specific application. Database systems such as DB2 or Oracle can therefore get around the file system and manage the blocks of the hard disk themselves (Figure 4.2). Although this can increase the performance of the database, it makes the management of the database more difficult. In practice, therefore, database systems are usually configured to store their data in files that are managed by a file system. If more performance is required for a specific database, database administrators generally prefer to pay for higher-performance hardware rather than reconfigure the database to store its data directly on the block-oriented hard disks.

Journaling

In addition to their basic services, modern file systems provide three further functions: journaling, snapshots and dynamic file system expansion. Journaling is a mechanism that guarantees the consistency of the file system even after a system crash. To this end, the file system first of all writes every change to a log file that is invisible to applications and end users, before making the change in the file system itself. After a system crash the file system only has to replay the end of the log file in order to restore its consistency. In file systems without journaling, typically older file systems such as Microsoft's FAT32 or the UFS file system that is widespread on Unix systems, the consistency of the entire file system has to be checked after a system crash (file system check); in large file systems this can take several hours. In file systems without journaling it can therefore take several hours after a system crash, depending upon the size of the file system, before the data and thus the applications are back in operation.
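The write-ahead idea described above can be summarized in a few lines of code. The following is only a minimal sketch in Python: the file names, the JSON record format and the recovery-by-full-replay are invented for illustration; real journaling file systems log changes to their internal metadata structures and replay only the tail of the log.

```python
# Minimal sketch of journaling (write-ahead logging).
# File names and record format are invented for illustration.
import json, os

JOURNAL = "fs.journal"      # hypothetical log file, invisible to applications
DATA    = "fs.data.json"    # hypothetical "file system" state

def apply_change(state, change):
    state[change["file"]] = change["content"]

def write(change):
    # 1. Append the change to the journal and force it to stable storage first.
    with open(JOURNAL, "a") as j:
        j.write(json.dumps(change) + "\n")
        j.flush()
        os.fsync(j.fileno())
    # 2. Only then apply the change to the file system structures themselves.
    state = json.load(open(DATA)) if os.path.exists(DATA) else {}
    apply_change(state, change)
    json.dump(state, open(DATA, "w"))

def recover():
    # After a crash, replay the journal instead of checking the whole file system.
    state = json.load(open(DATA)) if os.path.exists(DATA) else {}
    if os.path.exists(JOURNAL):
        for line in open(JOURNAL):
            apply_change(state, json.loads(line))
        json.dump(state, open(DATA, "w"))
    return state

write({"file": "/etc/motd", "content": "hello"})
print(recover())
```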

Snapshots provide the same function as the instant copies familiar from disk subsystems (cf. Section 2.7.1). Snapshots freeze the state of a file system at a given point in time. Applications and end users can access the frozen copy via a special path. As with instant copies, the creation of the copy takes only a few seconds. Likewise, when creating a snapshot, care should be taken to ensure that the state of the frozen data is consistent.

Table 4.1 compares instant copies and snapshots. An important advantage of snapshots is that they can be realized with any hardware. On the other hand, instant copies within a disk subsystem place less load on the CPU and the buses of the server, thus leaving more system resources for the actual applications.

Volume manager

The volume manager is an intermediate layer within the operating system between the file system or database and the actual hard disks. The most important basic function of the volume manager is to aggregate several hard disks to form a large virtual hard disk and to make just this virtual hard disk visible to higher layers. Most volume managers provide the option of breaking this virtual disk back down into several smaller virtual hard disks and enlarging or reducing these (Figure 4.3). This virtualization within the volume manager makes it possible for system administrators to react quickly to the changing storage requirements of applications such as databases and file systems.

Table 4.1 Snapshots are hardware-independent; however, they load the server's CPU
• Place of realization: instant copy – disk subsystem; snapshot – file system
• Resource consumption: instant copy – loads the disk subsystem's controller and its buses; snapshot – loads the server's CPU and all buses
• Availability: instant copy – depends upon the disk subsystem (hardware-dependent); snapshot – depends upon the file system (hardware-independent)

The volume manager can, depending upon its implementation, provide the same functions as a RAID controller (Section 2.4) or an intelligent disk subsystem (Section 2.7). As with snapshots, functions such as RAID, instant copies and remote mirroring are here realized in a hardware-independent manner in the volume manager. Likewise, a RAID controller or an intelligent disk subsystem can take the pressure off the server's resources if the corresponding functions are moved to the storage devices. The realization of RAID in the volume manager loads not only the server's CPU but also its buses.
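The core aggregation idea, several physical disks presented as one large virtual disk whose block numbers are mapped back to a physical disk and block, can be modelled in a few lines. This is a toy Python sketch, not how any particular volume manager is implemented; the class names, disk names and sizes are invented.

```python
# Toy model of a volume manager: physical disks are concatenated into one
# large virtual disk; virtual block numbers map to (physical disk, block).
class PhysicalDisk:
    def __init__(self, name, blocks):
        self.name = name
        self.blocks = blocks          # capacity in blocks
        self.data = {}                # block number -> payload

class VolumeManager:
    def __init__(self, disks):
        self.disks = disks

    def capacity(self):
        return sum(d.blocks for d in self.disks)

    def _locate(self, vblock):
        # Walk the disks in order until the virtual block falls inside one.
        for disk in self.disks:
            if vblock < disk.blocks:
                return disk, vblock
            vblock -= disk.blocks
        raise IndexError("virtual block out of range")

    def write(self, vblock, payload):
        disk, pblock = self._locate(vblock)
        disk.data[pblock] = payload

    def read(self, vblock):
        disk, pblock = self._locate(vblock)
        return disk.data.get(pblock)

vm = VolumeManager([PhysicalDisk("sda", 1000), PhysicalDisk("sdb", 500)])
vm.write(1200, b"hello")              # lands on the second disk, block 200
print(vm.capacity(), vm.read(1200))
```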

Thursday, March 27, 2008

Learn More on the Basics of Fibre Channel SAN Interoperability


Fibre Channel SANs are currently being used successfully in production environments. Nevertheless, interoperability is an issue with Fibre Channel SAN, as with all new cross-manufacturer technologies. When discussing the interoperability of Fibre Channel SAN we must differentiate between the interoperability of the underlying Fibre Channel network layer, the interoperability of the Fibre Channel application protocols, such as FCP (SCSI over Fibre Channel), and the interoperability of the applications running on the Fibre Channel SAN.

The interoperability of Fibre Channel SAN stands and falls by the interoperability of FCP. FCP is the protocol mapping of the FC-4 layer, which maps the SCSI protocol onto a Fibre Channel network (Section 3.3.8). FCP is a complex piece of software that can only be implemented in the form of a device driver. The implementation of low-level device drivers alone is a task that attracts errors as if by magic. The developers of FCP device drivers must therefore test extensively and thoroughly.

Two general conditions make it more difficult to test the FCP device driver. First, the server initiates the data transfer by means of the SCSI protocol; the storage device only responds to the requests of the server. However, the idea of storage networks is to consolidate storage devices, i.e. for many servers to share a few large storage devices. Therefore, with storage networks a single storage device must be able to serve several parallel requests from different servers simultaneously. For example, it is typical for a server to be exchanging data with a storage device just when another server is scanning the Fibre Channel SAN for available storage devices. This situation requires end devices to be able to multitask. When testing multitasking systems the race conditions between the tasks to be performed come to bear: just a few milliseconds' delay can lead to a completely different test result. The second difficulty encountered during testing is the large number of components that come together in a Fibre Channel SAN. Even when a single server is connected to a single storage device via a single switch, there are numerous possibilities that cannot all be tested. If, for example, a Windows server is selected, there is still the choice between NT, 2000 and 2003, each with different service packs. Several manufacturers offer several different models of Fibre Channel host bus adapter card for the server. If we take into account the various firmware versions for the Fibre Channel host bus adapter cards we find that we already have more than 50 combinations before we even select a switch.

Companies want to use their storage network to connect servers and storage devices from various manufacturers, some of which are already present. The manufacturers of Fibre Channel components (servers, switches and storage devices) must therefore perform interoperability tests in order to guarantee that these components work with devices from third-party manufacturers. Right at the top of the priority list are those combinations that are required by most customers, because this is where the expected profit is highest. The result of the interoperability test is a so-called support matrix. It specifies, for example, which storage device supports which server model with which operating system versions and Fibre Channel cards. Manufacturers of servers and storage devices often limit the Fibre Channel switches that can be used. Therefore, before building a Fibre Channel SAN you should carefully check whether the manufacturers in question state that they support the planned configuration. If the desired configuration is not listed, you can negotiate with the manufacturer regarding the payment of a surcharge to secure manufacturer support. Although non-supported configurations can work very well, if problems occur you are left without support in critical situations. If in any doubt you should therefore look for alternatives right at the planning stage.

All this seems absolutely terrifying at first glance. However, manufacturers now support a number of different configurations. If the manufacturers' support matrices are taken into consideration, robust Fibre Channel SANs can now be operated. The operation of up-to-date operating systems such as Windows NT/2000, AIX, Solaris, HP-UX and Linux is particularly unproblematic.

Fibre Channel SANs are based upon Fibre Channel networks. The incompatibility of the fabric and arbitrated loop topologies and the networking of fabrics and arbitrated loops has already been discussed in Section 3.4.3. Within the fabric, the incompatibility of the Fibre Channel switches from different manufacturers should also be mentioned. At the end of 2003 we still recommend that when installing a Fibre Channel SAN only the switches and directors of a single manufacturer be used. Routing between switches and directors of different manufacturers may work as expected, and basic functions of the fabric topology such as aliasing, the name server and zoning work well across different vendors in so-called 'compatibility modes'; bear in mind, however, that there is still only a very small installed base of mixed switch vendor configurations. A standard has been passed that addresses the interoperability of these basic functions, so it is now just a matter of time before they work across every manufacturer's products. However, for newer functions such as SAN security, inter-switch-link trunking or B-Ports, teething troubles with interoperability must once again be expected.

In general, applications can be subdivided into higher applications that model and support the business processes and system-based applications such as file systems, databases and back-up systems. The system-based applications are of particular interest from the point of view of storage networks and storage management. The compatibility of network file systems such as NFS and CIFS is now taken for granted and hardly ever queried. As storage networks penetrate into the field of file systems, cross-manufacturer standards are becoming ever more important in this area too. A first offering is the Network Data Management Protocol (NDMP, Section 7.9.4) for the back-up of NAS servers. Further down the road we also expect customer demand for cross-vendor standards in the emerging field of storage virtualization (Chapter 5).

The subject of interoperability will preoccupy manufacturers and customers in the field of storage networks for a long time to come. Virtual Interface Architecture (VIA), InfiniBand and Remote Direct Memory Access (RDMA) are emerging technologies that must also work in a cross-manufacturer manner. The same applies to Internet SCSI (iSCSI) and its variants such as iSCSI Extensions for RDMA (iSER). iSCSI transmits the SCSI protocol via TCP/IP and, for example, Ethernet. Just like FCP, iSCSI has to serialize the SCSI protocol bit-by-bit and map it onto a complex network topology. Interoperability will therefore also play an important role in iSCSI.

Wednesday, March 26, 2008

Free Tutorials on Fibre Channel SAN Interoperability and IP Storage



IP STORAGE

Fibre Channel SANs are currently (2003) being successfully implemented in production environments. Nevertheless, the industry is at pains to establish storage networks based upon IP (IP storage) and Ethernet as an alternative to Fibre Channel. This section first introduces various protocols for the transmission of storage data traffic via TCP/IP (Section 3.5.1). Then we explain to what extent TCP/IP and Ethernet are at all suitable as transmission techniques for storage networks (Section 3.5.2). Finally, we discuss a migration path from SCSI and Fibre Channel to IP storage (Section 3.5.3).

Monday, March 24, 2008

FREE TUTORIALS ON THE FIBRE CHANNEL PROTOCOL STACK: FLOW CONTROL AND SERVICE CLASSES


FC-1: 8b/10b encoding, ordered sets and link control protocol

FC-1 defines how data is encoded before it is transmitted via a Fibre Channel cable (8b/10b encoding). FC-1 also describes certain transmission words (ordered sets) that are required for the administration of a Fibre Channel connection (link control protocol).

8b/10b encoding

In all digital transmission techniques, transmitter and receiver must synchronize their clock-pulse rates. In parallel buses the bus rate is transmitted via an additional data line. By contrast, in the serial transmission used by Fibre Channel only one data line is available, through which the data is transmitted. This means that the receiver must regenerate the transmission rate from the data stream.

The receiver can only synchronize its clock at points where there is a signal change in the medium. In simple binary encoding (Figure 3.11) this is only the case when the signal changes from '0' to '1' or from '1' to '0'. In Manchester encoding there is a signal change for every bit transmitted. Manchester encoding therefore creates two physical signals for each bit transmitted, so it requires a transfer rate that is twice as high as that for binary encoding. Fibre Channel, like many other transmission techniques, therefore uses binary encoding, because at a given rate of signal changes more bits can be transmitted than with Manchester encoding.

The problem with this approach is that the signal steps arriving at the receiver are not always the same length (jitter). This means that the signal at the receiver is sometimes a little longer and sometimes a little shorter (Figure 3.12); in the escalator analogy, the escalator bucks. Jitter can cause the receiver to lose synchronization with the received signal. If, for example, the transmitter sends a sequence of ten zeros, the receiver cannot decide whether it is a sequence of nine, ten or eleven zeros. If we nevertheless wish to use binary encoding, we have to ensure that the data stream generates signal changes frequently enough that jitter cannot strike. The so-called 8b/10b encoding represents a good compromise. 8b/10b encoding converts an eight-bit byte to be transmitted into a ten-bit transmission character, which is sent via the medium instead of the eight-bit byte. For Fibre Channel this means, for example, that a useful transfer rate of 100 MByte/s requires a raw transmission rate of 1 Gbit/s instead of 800 Mbit/s. Incidentally, 8b/10b encoding is also used for the Enterprise System Connection Architecture (ESCON), Serial Storage Architecture (SSA), Gigabit Ethernet and InfiniBand. Finally, it should be noted that 10 Gigabit Fibre Channel uses the 64b/66b encoding variant for a certain cable type (single lane with serial transmission).

Expanding the eight-bit data bytes to ten-bit transmission characters gives rise to the following advantages:

• In 8b/10b encoding, of all available ten-bit characters, only those are selected that, in any combination of ten-bit characters, generate a bit sequence containing at most five consecutive zeros or five consecutive ones. A signal change therefore takes place at the latest after five signal steps, so that the clock synchronization of the receiver is guaranteed (a short sketch after this list illustrates this run-length property).

• A bit sequence generated using 8b/10b encoding has a uniform distribution of zeros and ones. This has the advantage that only small direct currents flow in the hardware that processes the 8b/10b-encoded bit sequence, which makes the realization of Fibre Channel hardware components simpler and cheaper.

• Further ten-bit characters are available that do not represent eight-bit data bytes. These additional characters can be used for the administration of a Fibre Channel link.
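As promised above, here is a small Python sketch that checks the two properties the first two bullets attribute to 8b/10b transmission characters: a limited run length and a roughly balanced number of zeros and ones. It is only a property check on a 10-bit pattern, not an 8b/10b encoder, and the disparity bound of ±2 is an assumption of the sketch.

```python
# Check two properties of 8b/10b transmission characters described above:
# no run of more than five identical bits, and a near-equal count of zeros
# and ones (disparity of at most +/-2). This is not an 8b/10b encoder.
def max_run_length(bits):
    longest, current = 1, 1
    for prev, cur in zip(bits, bits[1:]):
        current = current + 1 if cur == prev else 1
        longest = max(longest, current)
    return longest

def disparity(bits):
    return bits.count(1) - bits.count(0)

def looks_like_valid_character(bits):
    assert len(bits) == 10
    return max_run_length(bits) <= 5 and abs(disparity(bits)) <= 2

print(looks_like_valid_character([0,0,1,1,1,1,1,0,1,0]))   # run of five ones: OK
print(looks_like_valid_character([0]*10))                   # ten zeros: rejected
```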

Ordered sets

Fibre Channel aggregates four ten-bit transmission characters to form a 40-bit transmission word. The Fibre Channel standard differentiates between two types of transmission word: data words and ordered sets. Data words represent a sequence of four eight-bit data bytes. Data words may only stand between a Start-of-Frame delimiter (SOF delimiter) and an End-of-Frame delimiter (EOF delimiter). Ordered sets may only stand between an EOF delimiter and an SOF delimiter, with SOFs and EOFs themselves being ordered sets. All ordered sets have in common that they begin with a certain transmission character, the so-called K28.5 character. The K28.5 character includes a special bit sequence that does not occur elsewhere in the data stream. The input channel of a Fibre Channel port can therefore use the K28.5 character to divide the continuous incoming bit stream into 40-bit transmission words when initializing a Fibre Channel link or after the loss of synchronization on a link.

Link control protocol

With the aid of ordered sets, FC-1 defines various link-level protocols for the initialization and administration of a link. The initialization of a link is the prerequisite for data exchange by means of frames. Examples of link-level protocols are the initialization and arbitration of an arbitrated loop.
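Returning to the K28.5 character described above: because its bit sequence does not occur elsewhere in the data stream, a port can use it to re-align the incoming bit stream on 40-bit word boundaries. The following Python sketch illustrates that idea only; the two 10-bit patterns used for K28.5 are the commonly documented encodings for the two running disparities and, like the made-up input stream, should be treated as assumptions of the sketch rather than a statement of the standard.

```python
# Sketch: align an incoming bit stream on 40-bit transmission words by
# searching for a K28.5 character (patterns assumed, see lead-in).
K28_5 = ["0011111010", "1100000101"]

def align(bitstream):
    """Return the bit offset of the first K28.5 and the 40-bit words from there."""
    for offset in range(len(bitstream) - 9):
        if bitstream[offset:offset + 10] in K28_5:
            words = [bitstream[i:i + 40]
                     for i in range(offset, len(bitstream) - 39, 40)]
            return offset, words
    return None, []

# A made-up stream: a few stray bits, then an ordered set starting with K28.5.
stream = "101" + K28_5[0] + "0101010101" * 3
offset, words = align(stream)
print(offset, words[0])
```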

3.3.4 FC-2: data transfer

FC-2 is the most comprehensive layer in the Fibre Channel protocol stack. It determines how larger data units (for example, a file) are transmitted via the Fibre Channel network. It regulates the flow control, which ensures that the transmitter only sends data at a speed that the receiver can process. And it defines various service classes that are tailored to the requirements of various applications.

Exchange, sequence and frame

FC-2 introduces a three-layer hierarchy for the transmission of data (Figure 3.13). At the top layer a so-called exchange defines a logical communication connection between two end devices. For example, each process that reads and writes data could be assigned its own exchange. End devices (servers and storage devices) can simultaneously maintain several exchange relationships, even between the same ports. Different exchanges help the FC-2 layer to deliver the incoming data quickly and efficiently to the correct receiver in the higher protocol layer (FC-3).

A sequence is a larger data unit that is transferred from a transmitter to a receiver. Within an exchange, sequences are transferred one after another. FC-2 guarantees that sequences are delivered to the receiver in the same order in which they were sent from the transmitter, hence the name 'sequence'. Furthermore, sequences are only delivered to the next protocol layer up when all frames of the sequence have arrived at the receiver (Figure 3.13). A sequence could represent the writing of a file or an individual database transaction.

A Fibre Channel network transmits control frames and data frames. Control frames contain no useful data; they signal events such as the successful delivery of a data frame. Data frames transmit up to 2112 bytes of useful data. Larger sequences therefore have to be broken down into several frames. Although it is theoretically possible to agree upon different maximum frame sizes, this is hardly ever done in practice.

A Fibre Channel frame consists of a header, useful data (payload) and a CRC checksum (Figure 3.14). In addition, the frame is bracketed by a Start-of-Frame delimiter (SOF) and an End-of-Frame delimiter (EOF). Finally, six fill words must be transmitted on a link between two frames. In contrast to Ethernet and TCP/IP, Fibre Channel is an integrated whole: the layers of the Fibre Channel protocol stack are so well harmonized with one another that the ratio of payload to protocol overhead is very efficient, at up to 98%. The CRC checking procedure is designed to recognize all transmission errors provided the underlying medium does not exceed the specified error rate of 10^-12. Error correction takes place at sequence level: if a frame of a sequence is transmitted incorrectly, the entire sequence is retransmitted. At gigabit speed it is more efficient to resend a complete sequence than to extend the Fibre Channel hardware so that individual lost frames can be resent and inserted at the correct position. The underlying protocol layer must maintain the specified maximum error rate of 10^-12 so that this procedure remains efficient.
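To make the fragmentation and the payload-to-overhead ratio concrete, here is a small Python sketch. Only the 2112-byte maximum payload comes from the text above; the per-frame overhead figures in the sketch (24-byte header, 4-byte CRC, 4-byte SOF, 4-byte EOF, six 4-byte fill words between frames) are approximations used for illustration, which is why the result lands near, not exactly at, the quoted 98%.

```python
# Sketch: fragment a sequence into data frames of at most 2112 payload bytes
# and estimate the payload-to-wire-bytes ratio. Overhead values approximated.
MAX_PAYLOAD = 2112
PER_FRAME_OVERHEAD = 24 + 4 + 4 + 4 + 6 * 4   # header, CRC, SOF, EOF, fill words

def fragment(sequence: bytes):
    return [sequence[i:i + MAX_PAYLOAD] for i in range(0, len(sequence), MAX_PAYLOAD)]

def efficiency(sequence: bytes) -> float:
    frames = fragment(sequence)
    wire_bytes = sum(len(f) + PER_FRAME_OVERHEAD for f in frames)
    return len(sequence) / wire_bytes

seq = bytes(1_000_000)                 # a 1 MB sequence, e.g. part of a file
print(len(fragment(seq)), "frames, efficiency ~%.1f%%" % (100 * efficiency(seq)))
```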

Flow control

Flow control ensures that the transmitter only sends data at a speed at which the receiver can receive it. Fibre Channel uses the so-called credit model for this. Each credit represents the capacity of the receiver to receive one Fibre Channel frame. If the receiver grants the transmitter a credit of '4', the transmitter may send the receiver only four frames. The transmitter may not send further frames until the receiver has acknowledged the receipt of at least some of the transmitted frames. FC-2 defines two different mechanisms for flow control: end-to-end flow control and link flow control (Figure 3.15). In end-to-end flow control, two end devices negotiate the end-to-end credit before the data exchange; end-to-end flow control is realized on the host bus adapter cards of the end devices. By contrast, link flow control takes place at each physical connection and is achieved by the two communicating ports negotiating the buffer-to-buffer credit. This means that link flow control also takes place at the Fibre Channel switches.
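The credit rule just described, at most as many unacknowledged frames in flight as the granted credit, can be captured in a few lines. This Python sketch is a simplified model; the class and method names are invented, and a real port tracks credit per link or per connection in hardware.

```python
# Sketch of the credit model: the receiver grants a credit; the transmitter may
# have at most that many unacknowledged frames outstanding.
class CreditLink:
    def __init__(self, credit):
        self.credit = credit          # e.g. the negotiated buffer-to-buffer credit
        self.outstanding = 0

    def can_send(self):
        return self.outstanding < self.credit

    def send(self, frame):
        if not self.can_send():
            raise RuntimeError("no credit left - transmitter must wait")
        self.outstanding += 1
        # ... the frame would be put on the wire here ...

    def acknowledge(self, n=1):
        # The receiver signals that it has freed n receive buffers.
        self.outstanding = max(0, self.outstanding - n)

link = CreditLink(credit=4)
for i in range(4):
    link.send(f"frame {i}")
print(link.can_send())                # False: credit exhausted
link.acknowledge(2)
print(link.can_send())                # True again after two acknowledgements
```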

Service classes

The Fibre Channel standard defines six different service classes for data exchange between end devices. Three of these classes (Class 1, Class 2 and Class 3) are realized in products available on the market, with hardly any products providing the connection-oriented Class 1. Almost all new Fibre Channel products (host bus adapters, switches, storage devices) support the service classes Class 2 and Class 3, which realize a packet-oriented service (datagram service). In addition, Class F serves for the data exchange between the switches within a fabric.

Class 1 defines a connection-oriented communication connection between two node ports: a Class 1 connection is opened before the transmission of frames. This specifies a route through the Fibre Channel network. Thereafter, all frames take the same route through the Fibre Channel network, so frames are delivered in the sequence in which they were transmitted. A Class 1 connection guarantees the availability of the full bandwidth; a port thus cannot send any other frames while a Class 1 connection is open.

Class 2 and Class 3, on the other hand, are packet-oriented services (datagram services): no dedicated connection is built up; instead the frames are individually routed through the Fibre Channel network. A port can thus maintain several connections at the same time, and several Class 2 and Class 3 connections can share the bandwidth.

Class 2 uses end-to-end flow control and link flow control. In Class 2 the receiver acknowledges each received frame (acknowledgement, Figure 3.16). This acknowledgement is used both for end-to-end flow control and for the recognition of lost frames. A missing acknowledgement leads to the immediate recognition of transmission errors by FC-2, which are then immediately signalled to the higher protocol layers. The higher protocol layers can thus initiate error correction measures straight away (Figure 3.18). Users of a Class 2 connection can demand the delivery of the frames in the correct order.

Class 3 achieves less than Class 2: frames are not acknowledged (Figure 3.17). This means that only link flow control takes place, not end-to-end flow control. In addition, the higher protocol layers must notice for themselves whether a frame has been lost. The loss of a frame is indicated to higher protocol layers by the fact that an expected sequence is not delivered because it has not yet been completely received by the FC-2 layer. A switch may discard Class 2 and Class 3 frames if its buffer is full. Due to greater time-out values in the higher protocol layers it can take much longer to recognize the loss of a frame than is the case in Class 2 (Figure 3.19).

We have already stated that in practice only Class 2 and Class 3 are important. In practice the service classes are hardly ever explicitly configured, meaning that in current Fibre Channel SAN implementations the end devices themselves negotiate whether they communicate by Class 2 or Class 3. From a theoretical point of view the two service classes differ in that Class 3 sacrifices some of the communication reliability of Class 2 in favour of a less complex protocol. Class 3 is currently the most frequently used service class. This may be because current Fibre Channel SANs are still very small, so that frames are very seldom lost or overtake each other. The linking of current Fibre Channel SAN islands into a large SAN could lead to Class 2 playing a greater role in future due to its faster error recognition.

FREE TUTORIAL ON STORAGE LUN MASKING AND AVAILABILITY OF DISK SUBSYSTEMS

So-called LUN masking brings us to the third important function, after instant copy and remote mirroring, that intelligent disk subsystems offer over and above RAID. LUN masking limits the access to the hard disks that the disk subsystem exports to the connected servers.

A disk subsystem makes the storage capacity of its internal physical hard disks available to servers by permitting access to individual physical hard disks, or to virtual hard disks created using RAID, via its connection ports. Based upon the SCSI protocol, all hard disks, physical and virtual, that are visible outside the disk subsystem are also known as LUNs (Logical Unit Numbers).

Without LUN masking every server would see all hard disks that the disk subsystem provides. Consider a disk subsystem without LUN masking to which three servers are connected: each server sees all hard disks that the disk subsystem exports. As a result, considerably more hard disks are visible to each server than necessary. In particular, each server also sees the hard disks that are required by applications running on a different server. This means that the individual servers must be configured very carefully. In Figure 2.23 an erroneous formatting of the disk LUN 3 of server 1 would destroy the data of the application that runs on server 3. In addition, some operating systems are very greedy: when booting up they try to grab every hard disk that is written with the signature (label) of a foreign operating system. Without LUN masking, therefore, the use of the hard disks must be very carefully configured in the operating systems of the participating servers.

LUN masking brings order to this chaos by assigning the externally visible hard disks to servers; it limits the visibility of exported disks within the disk subsystem. Each server now sees only the hard disks that it actually requires. LUN masking thus acts as a filter between the exported hard disks and the accessing servers. It is no longer possible to destroy data that belongs to applications running on another server. Configuration errors are still possible, but the consequences are no longer so devastating. Furthermore, configuration errors can now be traced more quickly, since the information is bundled within the disk subsystem instead of being distributed over all servers.

We differentiate between port-based LUN masking and server-based LUN masking. Port-based LUN masking is the 'poor man's LUN masking'; it is found primarily in low-end disk subsystems. In port-based LUN masking the filter only works at the granularity of a port, so all servers connected to the disk subsystem via the same port see the same disks. Server-based LUN masking offers more flexibility: every server sees only the hard disks assigned to it, regardless of which port it is connected via or which other servers are connected via the same port.
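Conceptually, server-based LUN masking is just a filter table between exported LUNs and accessing servers. The following Python sketch illustrates that idea; the server names, LUN numbers and table layout are invented for illustration and do not correspond to any particular disk subsystem's configuration interface.

```python
# Sketch of server-based LUN masking: each server only sees the LUNs assigned
# to it, no matter which port it is connected through.
EXPORTED_LUNS = {1, 2, 3, 4, 5}

MASKING_TABLE = {
    # server (e.g. identified by its WWN) -> LUNs it is allowed to see
    "server1": {1, 2},
    "server2": {3},
    "server3": {4, 5},
}

def visible_luns(server: str) -> set:
    """Filter the exported LUNs down to the set assigned to this server."""
    return EXPORTED_LUNS & MASKING_TABLE.get(server, set())

# Port-based masking would key the table on the subsystem port instead of the
# server, so every server behind the same port would see the same LUNs.
print(visible_luns("server1"))   # {1, 2}
print(visible_luns("server4"))   # set(): an unknown server sees nothing
```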
 AVAILABILITY OF DISK SUBSYSTEMS
Disk subsystems are assembled from standard components which have a limited fault-tolerance. In this chapter we have shown how these standard components are combined in order to achieve a level of fault-tolerance for the entire disk subsystem that lies significantly above the fault-tolerance of the individual components. Today, disk subsystems can be constructed so that they can withstand the failure of any component without data being lost or becoming inaccessible. We can also say that such disk subsystems have no 'single point of failure'. The following list describes the individual measures that can be taken to increase the availability of data:

• The data is distributed over several hard disks using RAID processes and supplemented by further data for error correction. After the failure of a physical hard disk, the data of the defective hard disk can be reconstructed from the remaining data and the additional data.

• Individual hard disks store the data using the so-called Hamming code. The Hamming code allows data to be correctly restored even if individual bits are changed on the hard disk (a short sketch after this list illustrates single-bit correction with a Hamming code). Self-diagnosis functions in the disk controller continuously monitor the rate of bit errors and the physical variables (temperature sensors, spindle vibration sensors). In the event of an increase in the error rate, hard disks can be replaced before data is lost.

• Each internal physical hard disk can be connected to the controller via two internal I/O channels. If one of the two channels fails, the other can still be used.

• The controller in the disk subsystem can be realized by several controller instances. If one of the controller instances fails, one of the remaining instances takes over the tasks of the defective instance.

• Other auxiliary components such as power supplies, batteries and fans can often be duplicated so that the failure of one of the components is unimportant. When connecting the power supply it should be ensured that the various power cables are at least connected through different fuses. Ideally, the individual power cables would be supplied via different external power networks; however, in practice this is seldom realizable.

• Server and disk subsystem are connected together via several I/O channels. If one of the channels fails, the remaining ones can still be used.

• Instant copies can be used to protect against logical errors. For example, it would be possible to create an instant copy of a database every hour. If a table is 'accidentally' deleted, the database could revert to the last instant copy in which the database is still complete.

• Remote mirroring protects against physical damage. If, for whatever reason, the original data can no longer be accessed, operation can continue using the data copy that was generated using remote mirroring.

This list shows that disk subsystems can guarantee the availability of data to a very high degree. Despite everything, it is in practice sometimes necessary to shut down and switch off a disk subsystem. In such cases it can be very tiresome to co-ordinate all project groups to a common maintenance window, especially if these are distributed over different time zones.

Further important factors for the availability of an entire IT system are the availability of the applications or the application server itself and the availability of the connection between application servers and disk subsystems. Chapter 6 shows how multipathing can improve the connection between servers and storage systems and how clustering can increase the fault-tolerance of applications.
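As referenced in the list above, here is a small Python sketch of single-bit error correction using the classic Hamming(7,4) code. It only illustrates the principle mentioned in the text; actual disk drives use much longer and more powerful codes, and the bit layout below is just the textbook arrangement.

```python
# Hamming(7,4): encode four data bits with three parity bits, then correct
# any single flipped bit. Textbook illustration only.
def encode(d):                       # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4                # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4                # parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # position of the flipped bit, 0 if none
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1         # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]  # recover d1..d4

word = encode([1, 0, 1, 1])
word[5] ^= 1                         # flip one bit "on the disk"
print(decode(word))                  # [1, 0, 1, 1] - the error was corrected
```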

Know more about Remote Mirroring and Instant Copies (intelligent disk subsystems)

Remote Mirroring
Instant copies are excellently suited for copying data sets within a disk subsystem. However, they can only be used to a limited degree for data protection. Although data copies generated using instant copy protect against application errors (accidental deletion of a file system) and logical errors (errors in the database program), they do not protect against the failure of a disk subsystem. Something as simple as a power failure can prevent access to production data and data copies for several hours, and a fire in the disk subsystem would destroy original data and data copies alike. For data protection, therefore, the proximity of production data and data copies is fatal.

Remote mirroring offers protection against such catastrophes. Modern disk subsystems can now mirror their data, or part of their data, independently to a second disk subsystem that is a long way away. The entire remote mirroring operation is handled by the two participating disk subsystems. Remote mirroring is invisible to application servers and does not consume their resources. However, remote mirroring requires resources in the two disk subsystems and in the I/O channel that connects them, which means that reductions in performance can sometimes make their way through to the application.

Consider, for example, an application that is designed to achieve high availability using remote mirroring. The application server and the disk subsystem, plus the associated data, are installed in the primary data centre. The disk subsystem independently mirrors the application data by means of remote mirroring onto a second disk subsystem that is installed 50 kilometres away in the back-up data centre. Remote mirroring ensures that the application data in the back-up data centre is always kept up to date.
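The write path just described, the primary subsystem forwarding every write to its remote partner without involving the application server, can be sketched as follows. This Python sketch models only the synchronous case and all names are invented; real subsystems mirror over a dedicated I/O channel and also offer asynchronous variants.

```python
# Sketch of synchronous remote mirroring: the primary disk subsystem forwards
# every write to a second subsystem and acknowledges the server only when
# both copies are up to date. Handled entirely by the subsystems themselves.
class DiskSubsystem:
    def __init__(self, name, mirror=None):
        self.name = name
        self.blocks = {}
        self.mirror = mirror          # remote partner subsystem, if any

    def write(self, block, payload):
        self.blocks[block] = payload
        if self.mirror is not None:
            # The application server is not involved in this step and spends
            # none of its own resources on the mirroring.
            self.mirror.blocks[block] = payload
        return "ack"                  # acknowledged only after both writes

backup  = DiskSubsystem("backup data centre")
primary = DiskSubsystem("primary data centre", mirror=backup)

primary.write(42, b"customer record")
print(backup.blocks[42])              # the copy 50 km away is already current
```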
INTELLIGENT DISK SUBSYSTEMS

Intelligent disk subsystems represent the third level of complexity for controllers, after JBODs and RAID arrays. The controllers of intelligent disk subsystems offer additional functions over and above those offered by RAID. In the disk subsystems that are currently available on the market these functions are usually instant copy, remote mirroring and LUN masking.

Instant copies can practically copy data sets of several terabytes within a disk subsystem in a few seconds. Virtual copying means that disk subsystems fool the attached servers into believing that they are capable of copying such large data quantities in such a short space of time; the actual copying process takes significantly longer. However, the same server, or a second server, can access the practically copied data after a few seconds. Instant copies are used, for example, for the generation of test data, for the back-up of data and for the generation of data copies for data mining. Based upon the case study in Section 1.3 it was shown that when copying data using instant copies, attention should be paid to the consistency of the copied data. Sections 7.8.5 and 7.10.3 discuss in detail the interaction of applications and storage systems for the generation of consistent instant copies.

There are numerous alternative implementations of instant copy. One thing that all implementations have in common is that the pretence of being able to copy data in a matter of seconds costs resources. All realizations of instant copy require controller computing time and cache, and they place a load on internal I/O channels and hard disks. The different implementations of instant copy force the performance down at different times. However, it is not possible to choose the most favourable implementation alternative depending upon the application used, because real disk subsystems only ever realize one implementation alternative of instant copy.

Figure 2.18 illustrates the principle: server 1 works on the original data (1); the original data is practically copied in a few seconds (2); then server 2 can work with the data copy while server 1 continues to operate with the original data (3).

In the following, two implementation alternatives will be discussed that function in very different ways. At one extreme the data is permanently mirrored (RAID 1 or RAID 10). Upon the copy command both mirrors are separated; the separated mirror can then be used independently of the original. After the separation of the mirror, however, the production data is no longer protected against the failure of a hard disk. Therefore, to increase data protection, three mirrors are often kept prior to the separation (three-way mirror), so that the production data is still mirrored after the separation of the copy.

At the other extreme, no data at all is copied up front; blocks are only copied after the instant copy has been requested. To achieve this, the controller administers two data areas, one for the original data and one for the data copy generated by means of instant copy. The controller must ensure that during write and read access operations to original data or data copies the blocks in question are written to or read from the correct data area. In some implementations it is permissible to write to the copy, in some not. Some implementations copy just the blocks that have actually changed (partial copy); others copy all blocks as a background process until a complete copy of the original data has been generated (full copy).
In the following, the case differentiations made by the controller will be investigated in more detail, based upon the example from Figure 2.18. We first consider access by server 1 to the original data. Read operations are completely unproblematic; they are always served from the area of the original data. Handling write operations is trickier. If a block is changed for the first time since the generation of the instant copy, the controller must first copy the old block to the data copy area so that server 2 can continue to access the old data set. Only then may it write the changed block to the original data area. If a block that has already been changed in this manner has to be written again, it must be written to the original data area; the controller must not back up the previous version of the block to the data copy area again, because otherwise the correct version of the block would be overwritten.

The case differentiations for access by server 2 to the data copy generated by means of instant copy are somewhat simpler. In this case, write operations are unproblematic: the controller always writes all blocks to the data copy area. On the other hand, for read operations it has to establish whether the block in question has already been copied or not. This determines whether it has to read the block from the original data area or read it from the data copy area and forward it to the server.
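The case differentiation described above maps almost directly onto code. The following Python sketch models the space-efficient variant with an original area, a data copy area and a record of which blocks have already been backed up; class and method names are invented for illustration and real controllers work on cache and disk blocks rather than Python dictionaries.

```python
# Sketch of the instant-copy case differentiation described above.
class InstantCopy:
    def __init__(self, original):
        self.original = original      # block number -> payload
        self.copy_area = {}           # blocks saved or written for the copy
        self.saved = set()            # blocks already backed up once

    # --- access by server 1 to the original data -------------------------
    def read_original(self, block):
        return self.original.get(block)

    def write_original(self, block, payload):
        if block not in self.saved:
            # First change since the instant copy: back up the old block so
            # that server 2 still sees the old data set.
            if block in self.original:
                self.copy_area[block] = self.original[block]
            self.saved.add(block)
        # A block changed again is written to the original area only; backing
        # it up again would overwrite the correct (old) version in the copy.
        self.original[block] = payload

    # --- access by server 2 to the data copy -----------------------------
    def write_copy(self, block, payload):
        self.copy_area[block] = payload   # always goes to the data copy area
        self.saved.add(block)             # never back this block up later

    def read_copy(self, block):
        # Read from the copy area if the block was copied or written there,
        # otherwise fall back to the (unchanged) original area.
        if block in self.copy_area:
            return self.copy_area[block]
        return self.original.get(block)

ic = InstantCopy({1: b"old-1", 2: b"old-2"})
ic.write_original(1, b"new-1")
print(ic.read_original(1), ic.read_copy(1))   # b'new-1' b'old-1'
```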
 
Buy VMware Interview Questions & Storage Interview Questions for $150: 100+ interview questions with answers, plus additional free bonus reference materials. You can download immediately, even if it is 1 AM; you will receive the download link immediately after payment completion. You can pay using credit card or PayPal.
----------------------------------------- Get 100 Storage Interview Questions.
500+ Software Testing Interview Questions with Answers are also available; please email roger.smithson1@gmail.com if you are interested in buying them. 200 Storage Interview Questions word file @ $97.

VMware Interview Questions with Answers, $100; fast download immediately after payment. Get 100 Technical Interview Questions with Answers for $100.
------------------------------------------ For $24 get 100 VMware Interview Questions only (no answers).
VMware Interview Questions - 100 questions from people who attended technical interviews related to VMware virtualization jobs ($24 - questions only) ------------------------------------------- Virtualization Video Training | How to Get High Salary Jobs | Software Testing Tutorials | Storage Job Openings | Interview Questions
