Thursday, March 27, 2008

How to Migrate from SCSI and Fibre Channel to IP Storage and InfiniBand

Anyone investing in storage networks wants to protect this investment for as long as possible. This means first of all investing in technologies that solve today's problems; second, investing in technologies that have a long life cycle ahead of them; and finally remembering that it is not enough to purchase hardware and software: staff must also be trained appropriately and experience gathered in production environments.

For anyone who wishes to use storage networks today (2003) it is almost impossible to avoid Fibre Channel. IP storage may have a great deal of potential, but there are only a few products on the market. Anyone implementing IP storage today will be forced to tie themselves to one manufacturer: it is unlikely that products from different manufacturers will be interoperable until the relevant standards have been passed and the cross-vendor interoperability of IP storage components has been tested. IP storage is therefore suitable only in exceptional cases, for pilot installations, or as an extension to an existing Fibre Channel SAN. In environments where no databases are connected via storage networks, Network Attached Storage (NAS, Section 4.2.2) provides an alternative to a Fibre Channel SAN.

In the coming one to two years (2004 and 2005) Fibre Channel will remain the only option for storage networks with high performance requirements. During this period, in our estimation, numerous IP storage products will come onto the market that represent cheap and production-ready alternatives to a Fibre Channel SAN for storage networks with low and medium performance requirements. For high performance requirements we will have to wait for appropriate iSCSI host bus adapters, which handle a large part of the protocol stack on the network card and thus free up the server CPU.

For local storage networks, Fibre Channel is currently the right choice in almost all situations. It is the only technology for storage networks that is used very successfully on a large scale in production environments; the comprehensive use of IP storage, on the other hand, is yet to come. Nevertheless, you can invest in Fibre Channel components today with an easy mind: they can still be operated after a subsequent transition to IP storage, for example by means of iSCSI-to-Fibre Channel gateways. Even today, despite the lack of a standard, FCIP is suitable for connecting two Fibre Channel SANs over a TCP/IP route.

However, FCIP components from the same manufacturer must be used at both ends of the connection. Due to the teething troubles described, iSCSI is suitable only as an expansion of existing Fibre Channel SANs for certain sub-requirements.

Figure 3.43 shows a possible migration path from Fibre Channel SAN to IP storage. In the first stage, iSCSI-to-Fibre Channel gateways and FCIP-to-Fibre Channel gateways are required so that, for example, a server connected via iSCSI can back up its data over the storage network onto a tape library connected via Fibre Channel. Currently (2003) it looks like a good idea to invest further in existing Fibre Channel infrastructure and additionally to try out iSCSI, and the integration of iSCSI and Fibre Channel, in pilot projects. Only after an extended period of use in practice will it become clear whether IP storage represents an alternative to a Fibre Channel SAN and to what extent IP storage will establish itself alongside Fibre Channel. Only when it has been proven in practice that IP storage (iSCSI in Figure 3.42) also fulfils the highest performance requirements will IP storage possibly marginalize Fibre Channel over time.
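To make the encapsulation concrete: iSCSI carries ordinary SCSI commands as PDUs inside a TCP byte stream, and building and shipping those PDUs is exactly the work that an iSCSI host bus adapter takes off the server CPU. The following Python sketch packs a SCSI READ(10) command into a deliberately simplified, iSCSI-style PDU; the header layout here is a toy illustration of the idea, not the exact RFC 3720 wire format, and all function names are ours.

    import struct

    def build_read10_cdb(lba: int, blocks: int) -> bytes:
        """SCSI READ(10) command descriptor block: opcode 0x28,
        4-byte logical block address, 2-byte transfer length."""
        return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

    def build_toy_iscsi_pdu(cdb: bytes, lun: int, task_tag: int) -> bytes:
        """Wrap a SCSI CDB in a simplified iSCSI-style header.

        Real iSCSI (RFC 3720) prepends a 48-byte Basic Header Segment;
        this toy header merely illustrates the encapsulation principle.
        """
        opcode = 0x01                                  # "SCSI Command"
        header = struct.pack(">BxHQI", opcode, len(cdb), lun, task_tag)
        return header + cdb.ljust(16, b"\x00")         # pad CDB field

    pdu = build_toy_iscsi_pdu(build_read10_cdb(lba=2048, blocks=8),
                              lun=0, task_tag=1)
    print(f"{len(pdu)}-byte PDU: {pdu.hex()}")

    # A software initiator would now write this PDU into a TCP connection
    # to the target's well-known iSCSI port 3260; an iSCSI HBA performs
    # the same packing and TCP processing in hardware instead.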

INFINIBAND

In the near future, Fibre Channel and Ethernet will support transmission rates of 10 Gbit/s and above. Consequently, the host I/O bus in the computer must be able to transmit data at the same rate. However, like all parallel buses, the transmission rate of the PCI bus can only be increased to a limited degree (Section 3.3.2). InfiniBand is an emerging I/O technology that will probably supersede the PCI bus in high-end servers.

InfiniBand replaces the PCI bus with a serial network (Figure 3.44). In InfiniBand, devices communicate by means of messages, with an InfiniBand switch forwarding the data packets to the receiver in question. Communication is full duplex, and a transmission rate of 2.5 Gbit/s in each direction is supported. Taking into account the fact that, like Fibre Channel, InfiniBand uses 8b/10b encoding, this yields a net data rate of 250 MByte/s per link and direction. InfiniBand also makes it possible to bundle four or twelve links, so that a transmission rate of 10 Gbit/s (1 GByte/s net) or 30 Gbit/s (3 GByte/s net) is achieved in each direction.
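The arithmetic behind these figures is easy to check. A minimal Python sketch (the function name is ours, not from any InfiniBand library):

    def net_rate_mbyte_s(gross_gbit_s: float, links: int = 1) -> float:
        """Net payload rate per direction for 8b/10b-encoded links.

        8b/10b encoding puts 10 bits on the wire for every 8 payload
        bits, so only 80% of the gross signalling rate carries data.
        """
        net_gbit_s = gross_gbit_s * links * 8 / 10   # strip encoding overhead
        return net_gbit_s * 1000 / 8                 # Gbit/s -> MByte/s

    for links in (1, 4, 12):
        print(f"{links:2d} link(s): {2.5 * links:4.1f} Gbit/s gross -> "
              f"{net_rate_mbyte_s(2.5, links):5.0f} MByte/s net per direction")
    # Prints 250, 1000 and 3000 MByte/s: the 250 MByte/s, 1 GByte/s and
    # 3 GByte/s figures quoted above.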

It can be expected that InfiniBand will initially only be used in high-end servers and that the PCI bus will, for now, remain the choice for all other computers. As a medium, InfiniBand defines various copper and fiber-optic cables. A maximum length of 17 metres is specified for copper cable and up to 10,000 metres for fiber-optic cable.

There are also plans to realize InfiniBand directly on the circuit board using conductor tracks.

The end points in an InfiniBand network are called channel adapters. InfiniBand differentiates between Host Channel Adapters (HCAs) and Target Channel Adapters (TCAs). HCAs bridge between the InfiniBand network and the system bus to which the CPUs and the main memory (RAM) are connected. TCAs connect InfiniBand networks to peripheral devices that are attached via SCSI, Fibre Channel or Ethernet. In comparison to PCI, HCAs correspond to the PCI bridge chips, and TCAs correspond to Fibre Channel host bus adapter cards or Ethernet network cards.

InfiniBand has the potential to completely change the architecture of servers and storage devices. Network cards and host bus adapter cards can be located 100 metres apart, which means that mainboards with CPU and memory, network cards, host bus adapter cards and storage devices can all be installed as physically separate, decoupled devices that are connected together over a network. Today it is still unclear which of the three transmission technologies will prevail in which area. Figure 3.45 shows what such an interconnection of CPU, memory, I/O cards and storage devices might look like. The computing power of the interconnection is provided by two CPU & RAM modules that are connected via a direct InfiniBand link for the benefit of lightweight interprocess communication. Peripheral devices are connected via the InfiniBand network.

In the example, a tape library is connected via Fibre Channel and the disk subsystem is connected directly via InfiniBand. If the computing power of the interconnection is no longer sufficient, a further CPU & RAM module can be added. Intelligent disk subsystems are becoming more and more powerful, and InfiniBand facilitates fast communication between servers and storage devices that reduces the load on the CPU. It is therefore at least theoretically feasible for subfunctions such as the caching of file systems or the lock synchronization of shared-disk file systems to be implemented directly on the disk subsystem or on special processors (Chapter 4).

Right from the start, the InfiniBand protocol stack was designed so that it could be realized efficiently. A conscious decision was made to specify only performance features that could be implemented in hardware. Nevertheless, the InfiniBand standard incorporates performance features such as flow control, zoning and various service classes. However, we assume that, as with Fibre Channel, not all parts of the InfiniBand standard will be realized in products.
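As a thought experiment, the interconnection of Figure 3.45 can be modelled in a few lines of Python. Everything below is our own illustrative sketch rather than a real InfiniBand API: channel adapters exchange messages through a switch, HCAs front the CPU & RAM modules, and a TCA bridges to the Fibre Channel-attached tape library.

    from dataclasses import dataclass, field

    @dataclass
    class Switch:
        """Forwards messages between channel adapters, much as an
        InfiniBand switch forwards packets to the receiver in question."""
        ports: dict = field(default_factory=dict)

        def attach(self, name: str, adapter: "ChannelAdapter") -> None:
            self.ports[name] = adapter
            adapter.fabric = self

        def forward(self, dst: str, message: bytes) -> None:
            self.ports[dst].receive(message)

    @dataclass
    class ChannelAdapter:
        name: str
        fabric: "Switch | None" = None

        def send(self, dst: str, message: bytes) -> None:
            self.fabric.forward(dst, message)

        def receive(self, message: bytes) -> None:
            print(f"{self.name} received {len(message)} bytes")

    class HCA(ChannelAdapter):
        """Host Channel Adapter: bridges the fabric to CPU and RAM."""

    class TCA(ChannelAdapter):
        """Target Channel Adapter: bridges the fabric to SCSI, Fibre
        Channel or Ethernet peripherals."""

    # Two CPU & RAM modules, a disk subsystem attached directly via
    # InfiniBand, and a tape library behind a Fibre Channel TCA
    # (all device names are made up for the sketch).
    fabric = Switch()
    fabric.attach("cpu1", HCA("cpu-ram-module-1"))
    fabric.attach("cpu2", HCA("cpu-ram-module-2"))
    fabric.attach("disk", ChannelAdapter("disk-subsystem"))
    fabric.attach("tape", TCA("fc-tape-gateway"))

    fabric.ports["cpu1"].send("disk", b"write block 42")  # I/O traffic
    fabric.ports["cpu1"].send("cpu2", b"lock request")    # interprocess message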
