Sunday, December 7, 2008

High Salary Software jobs Interview Process Maps Interview preparation Tips Resume preparation Tips Salary Negotiation Videos Tutorials

Google Interview Process How many rounds of interview to get hired by Google

Microsoft Interview Process Mindmap Technical rounds in Microsoft software Job interview process

IBM Hiring Process Interview rounds how to get software job in IBM

SUN Microsystems Hiring Process Interview rounds how to get software job in SUN

HP Interview Process How To Get Job in HP company

Accenture Software Job Interview Process from Resume screening to Job offer

Oracle Interview Process Mindmap explaining how many technical rounds to clear to get a job in Oracle Software company

Dell Software Technologies Company Interview Process How To Get Job in Dell company

Siemens Company Interview Process How To Get Job in Siemens company

BOSCH Company Interview Process How To Get Job in BOSCH company

ACS Company Interview Process How To Get Job in ACS company

Working at Google Seattle US London UK Zurich Germany Videos

Patni Computer Services PCS Interview Process How To Get Job in PCS Software company

Aditi Company Interview Process How To Get Job in Aditi company

Symphony Company Interview Process How To Get Job in Symphony company

Mphasis Company Interview Process How To Get Job in Mphasis company

Caritor Company Interview Process How To Get Job in Caritor company

HCL Interview Process Map: a detailed video will be sent if you join the email newsletter below or on the right side

Hexaware Technologies Company Interview Process How To Get Job in Hexaware company

MindTree Company Interview Process How To Get Job in MindTree company

ABB Company Interview Process How To Get Job in ABB company

3i Infotech Company Interview Process How To Get Job in 3i Infotech company

Interview Tips Top 10 Interview Tips which can make a huge difference in you getting the job offer

Interview Preparation Tips Step by Step guidance what you must do to get your dream job

Resume Preparation Tips Step by Step guidance what you must do to get your dream Job

Salary Negotiation How to negotiate a better Salary & When to negotiate for a Better Salary package

Salary Advisor talks about how Software employees need to think of long-term goals & how Employers need to value Talent

Salary of Software Developers How much Salary a Software programmer can get in Silicon Valley

Project Schedules & Project Proposals: What's important when you want to present & get approval

Software Jobs

Tuesday, September 2, 2008

How to Make Money Online Learn Secrets for FREE from Experts

Many of you search for a way to make money online. Here is a simple, EASY & FREE way to learn how to make money online.

You can make money online if you have a service or a product that can be sold, or you can earn money from simple things like writing articles, creating content, etc. With all of those things you might make just a few hundred dollars a month. But if you go through this link,
"Search Engine Optimization", you can make a lot more money, since with Search Engine Optimization you can get hundreds of visitors who are specifically looking for the service or product you are selling.

This is FREE, hence I am writing about it. Go here: "FREE Secrets to Make Money Online".

This is not some cheap ebook: they are going to send you a video DVD, along with a lot more, for almost FREE, and this DVD has several videos which explain how to make money online.

Go here, order for FREE, and watch this video; you will see that this thing they are giving away for FREE is worth a thousand dollars.

This product is from the industry-leading team called StomperNet. Lots of people pay them to get the same secrets.

------
Subject: "Stomping the Search Engines 2" and "The Net Effect"
for HOW MUCH?

Hey

Andy Jenkins has finally given me the all-clear to spill the
beans on this insane offer that StomperNet has cooked up.

Tomorrow, Sept. 3rd at 3pm Eastern, you can get StomperNet's
big daddy expert SEO Video Course, "Stomping the Search Engines
2"... for FREE.

That's right. FREE.

All you need to do is just TRY their new monthly printed Action
Journal called "The Net Effect" - and guess what?...

You get the PREMIER ISSUE of "The Net Effect" for FREE TOO!

You don't pay one penny more than Shipping and Handling unless
you LOVE it and want to get issue 2 a month from now.

That's NUTS. They are betting the FARM that you will LOVE this
stuff and stick around for more. That takes GUTS, and HUGE
confidence in the quality of their stuff.

But then again, it's StomperNet. I've SEEN the stuff, and can
vouch. It would be worth FULL PRICE.

But for FREE? You'd be FOOLISH not to check this out.

Don't believe it? Watch this video they've released to the
public. No fooling - this is a FOR-REAL DEAL.

https://member.stompernet.net/?r=1324&i=68

This MIGHT just change your online business fortunes...
forever.

P.S. There's no hint of scarcity here - they've got tons of
BOTH products ready to ship. But still - be there EARLY. If I
hadn't already gotten my "insider" review copy, I'd be the
FIRST one on this page tomorrow.

Thursday, April 17, 2008

Citrix Interview Questions


1. What are the requirements for a Citrix server installation?

2. What is the data store?

3. What is a data collector?

4. What is the LHC?

5. What is client lockdown?

6. What is the printer terminology in Citrix?

7. How is the data store used with a database?

8. What are the differences between the Citrix versions?

9. What different load evaluators are available in Citrix?

10. How do you implement policies in Citrix?

11. What will you check when a user is not able to launch a Citrix application?

12. What is IMA?

 

Windows Sysadmin Interview Questions

1. What is the Active Directory schema?

2. What are the domain functional levels in Windows Server 2003?

3. What are the forest functional levels in Windows Server 2003?

4. What is a global catalogue server?

5. How can we raise the domain functional & forest functional levels in Windows Server 2003?

6. Which is the default protocol used in directory services?

7. What is IPv6?

8. What is the default domain functional level in Windows Server 2003?

9. What are the physical & logical components of ADS?

10. In which domain functional level can we rename the domain name?

11. What is multimaster replication?

The Active Directory schema contains formal definitions of every object class that can be created in an Active Directory forest; it also contains formal definitions of every attribute that can exist in an Active Directory object. Active Directory stores and retrieves information from a wide variety of applications and services. So that it can store and replicate data from a potentially infinite variety of sources, Active Directory standardizes how data is stored in the directory. By standardizing how data is stored, the directory service can retrieve, update, and replicate data while ensuring that the integrity of the data is maintained.

The schema itself is a set of rules that defines the structure of Active Directory: it contains the definitions of all object classes that can be stored in AD, along with detailed information about their attributes. The schema master is the domain controller on which changes to these definitions are made.
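As a purely conceptual illustration (the class names, attribute names and validation logic below are invented for this sketch and are not Microsoft's actual schema or API), the schema can be thought of as a set of class and attribute definitions against which every new directory object is checked:

```python
# Purely conceptual sketch (invented names, not Microsoft's schema or API):
# the schema defines which object classes exist and which attributes each
# class must or may carry; the directory rejects objects that do not conform.

SCHEMA = {
    "user":  {"must": {"sAMAccountName", "objectClass"}, "may": {"mail", "telephoneNumber"}},
    "group": {"must": {"cn", "objectClass"},             "may": {"member", "description"}},
}

def validate(object_class, attributes):
    """Raise ValueError if the object does not conform to the schema."""
    definition = SCHEMA.get(object_class)
    if definition is None:
        raise ValueError("unknown object class: %s" % object_class)
    unknown = set(attributes) - definition["must"] - definition["may"]
    missing = definition["must"] - set(attributes)
    if unknown or missing:
        raise ValueError("unknown attributes %s, missing mandatory attributes %s" % (unknown, missing))

validate("user", {"sAMAccountName": "jdoe", "objectClass": "user", "mail": "jdoe@example.com"})
```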

What is a global catalogue server?

A global catalogue server is a domain controller: it hosts a master, searchable database that contains information about every object in every domain in a forest. The global catalogue contains a complete replica of all objects in Active Directory for its host domain, and a partial replica of all objects in Active Directory for every other domain in the forest.

It has two important functions:

i) It provides group membership information during logon and authentication.

ii) It helps users locate resources in Active Directory.

1.      The two technologies in DFS are as follows:

DFS Replication. New state-based, multimaster replication engine that is optimized for WAN environments. DFS Replication supports replication scheduling, bandwidth throttling, and a new byte-level compression algorithm known as remote differential compression (RDC).

DFS Namespaces. Technology that helps administrators group shared folders located on different servers and present them to users as a virtual tree of folders known as a namespace. DFS Namespaces was formerly known as Distributed File System in Windows 2000 Server and Windows Server 2003.

1. DNS (Domain Name System):

DNS is mainly used to resolve a host name (FQDN, Fully Qualified Domain Name) to an IP address and an IP address to a host name. DNS is mainly used on the Internet and is organized hierarchically. (See the short lookup sketch after this list.)

2. DHCP (Dynamic Host Configuration Protocol):

DHCP is used to provide IP addresses dynamically to client machines. If a client is not able to find a DHCP server, the client machine falls back to APIPA (the APIPA range is 169.254.0.1 - 169.254.255.254).

3. HUB and SWITCH:

A switch is more expensive than a hub. On a hub, if more than one user tries to send a packet at a time, a collision will occur; a switch forwards the packets without such collisions and is full duplex. On a hub, the maximum bandwidth is 100 Mbps and that bandwidth is shared by all of the PCs connected to the hub. On a switch, data can be sent in both directions simultaneously, so the maximum available bandwidth is 200 Mbps (100 Mbps each way), and there are no other PCs with which the bandwidth must be shared.


Friday, April 4, 2008

Know more about the extension of the SNIA Shared Storage Model to tape functions: logical and physical structure of tapes and extension of the model


The SNIA Shared Storage Model described previously concentrates upon the modelling of disk-based storage architectures. In a supplement to the original model, the SNIA Technical Council defines the necessary extensions for the description of tape functions and back-up architectures.

The SNIA restricts itself to the description of tape functions in the Open Systems environment, since the use of tapes in the mainframe environment is very difficult to model and differs fundamentally from the Open Systems environment. In the Open Systems field, tapes are used almost exclusively for back-up purposes, whereas in the field of mainframes tapes are used much more diversely. Therefore, the extension of the SNIA model concerns itself solely with the use of tape in back-up architectures. Only the general use of tapes in shared storage environments is described in the model.

The SNIA does not go into more depth regarding the back-up applications themselves. We have already discussed network back-up in Chapter 7. More detailed information on tapes can be found in Section 9.2.1. First of all, we want to look at the logical and physical structure of tapes from the point of view of the SNIA Shared Storage Model (10.3.1). Then we will consider the differences between disk and tape storage (10.3.2) and how the model is extended for the description of the tape functions (10.3.3).

 Logical and physical structure of tapes

Information is stored on tapes in so-called tape images, which are made up of the following logical components (Figure 10.20):

• Tape extent A tape extent is a sequence of blocks upon the tape. A tape extent is comparable with a volume in disk storage. The IEEE Standard 1244 (Section 9.5) also uses the term volume but it only allows volumes to reside exactly on one tape and not span multiple tapes.

• Tape extent separator The tape extent separator is a mark for the division of individual tape extents.

• Tape header The tape header is an optional component that marks the start of a tape.

• Tape trailer The tape trailer is similar to the tape header and marks the end of a tape. This, too, is an optional component.

In the same way as logical volumes of a volume manager extend over several physical disks, tape images can also be distributed over several physical tapes. Thus, there may be precisely one logical tape image on a physical tape, several logical tape images on a physical tape, or a logical tape image can be distributed over several physical tapes. So-called tape image separators are used for the subdivision of the tape images (Figure 10.21).

 Differences between disk and tape

At first glance, disks and tapes are both made up of blocks, which are put together to form long sequences. In the case of disks these are called volumes, whilst in tapes they are called extents. The difference lies in the way in which they are accessed, with disks being designed for random access, whereas tapes can only be accessed sequentially. Consequently, disks and tapes are also used for different purposes. In the Open Systems environment, tapes are used primarily for back-up or archiving purposes. This is completely in contrast to their use in the mainframe environment, where file structures – so-called tape files – are found that are comparable to a file on a disk. There is no definition of a tape file in the Open Systems environment, since several files are generally bundled to form a package, and processed in this form, during back-up and archiving.

This concept is, therefore, not required here.
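To make the logical tape layout described above concrete, here is a minimal Python sketch (the class names and the separator marker are assumptions for illustration, not part of the SNIA document): a tape image with an optional header and trailer, and extents divided by extent separators.

```python
# Minimal sketch (assumed names) of the logical tape layout: header, tape
# extents separated by extent separators, and a trailer.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TapeExtent:
    blocks: List[bytes]              # a sequence of blocks, only accessed sequentially

@dataclass
class TapeImage:
    header: Optional[bytes] = None   # optional mark for the start of a tape
    extents: List[TapeExtent] = field(default_factory=list)
    trailer: Optional[bytes] = None  # optional mark for the end of a tape

    def serialize(self) -> List[bytes]:
        """Flatten the image into the block sequence that is written to the medium."""
        out: List[bytes] = []
        if self.header is not None:
            out.append(self.header)
        for i, extent in enumerate(self.extents):
            out.extend(extent.blocks)
            if i < len(self.extents) - 1:
                out.append(b"<EXTENT-SEPARATOR>")   # tape extent separator
        if self.trailer is not None:
            out.append(self.trailer)
        return out

image = TapeImage(header=b"<HDR>", extents=[TapeExtent([b"b0", b"b1"]), TapeExtent([b"b2"])], trailer=b"<TRL>")
print(image.serialize())
```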

 Extension of the model

The SNIA Shared Storage Model must take into account the differences in structure and application between disk and tape and also the different purposes for which they are used. To this end, the file/record layer is expanded horizontally. The block layer, which produces the random access to the storage devices in the disk model, is exchanged for a sequential access block layer for the sequential access to tapes. The model is further supplemented by the following components (Figure 10.22):

• Tape media and tape devices Tape media are the storage media upon which tape images are stored. A tape device is a special physical storage resource which can process removable tape media. This differentiation between media and devices is particularly important in the context of removable media management (Chapter 9). The applicable standard, IEEE 1244, denotes a tape medium as a cartridge and a tape device as a drive.

• Tape applications The SNIA model concentrates upon the use of tapes for back-up and archiving. Special tape applications, for example, back-up software, are used for back-up. This software can deal with the special properties of tapes.

• Tape format system In the tape format system, files or records are compressed into tape extents and tape images. Specifically in the Open Systems environment, the host generally takes over this task. However, access to physical tape devices does not always have to go through the tape format system. It can also run directly via the extent aggregation layer described below or directly on the device.

• Extent aggregation layer The extent aggregation layer works in the same way as the block aggregation layer (Section 10.1.7), but with extents instead of blocks. However, in contrast to the random access of the block aggregation layer, access to the physical devices takes place sequentially.

Like the access paths, the data flows between the individual components are shown as arrows.
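A rough sketch of what such an extent aggregation layer does (the class and tape names are invented for illustration): extents stored on several physical tapes are concatenated into one logical tape image, and a sequential position in the image is resolved to a physical tape and an offset on it.

```python
# Rough sketch with invented names: an extent aggregation layer concatenates
# extents stored on several physical tapes into one logical tape image and
# resolves a sequential position in the image to (physical tape, offset).

class ExtentAggregation:
    def __init__(self, segments):
        # segments: list of (physical tape id, number of blocks of the extent
        # on that tape), in the order in which they form the logical image.
        self.segments = segments

    def resolve(self, logical_block):
        """Map a block offset in the logical tape image to a physical tape and offset."""
        remaining = logical_block
        for tape_id, length in self.segments:
            if remaining < length:
                return tape_id, remaining
            remaining -= length
        raise IndexError("logical block lies beyond the end of the tape image")

aggregation = ExtentAggregation([("tape-01", 1000), ("tape-02", 500)])
print(aggregation.resolve(1200))   # ('tape-02', 200)
```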

Know more on Asymmetric file services: NAS/file server metadata manager Object-based storage device (OSD)


A file server metadata manager (Figure 10.18) works in the same way as asymmetric storage virtualization on file level (Section 5.7.2):

• Hosts and storage devices are connected via a storage network.

• A metadata manager positioned outside the data path stores all file position data, i.e. metadata, and makes this available to the hosts upon request.

• Hosts and metadata manager communicate over an expanded file-oriented protocol.

• The actual user data then flows directly between hosts and storage devices by means of a block-oriented protocol.

This approach offers the advantages of fast, direct communication between host and storage devices, whilst at the same time offering the advantages of data sharing on file level. In addition, in this solution the classic file sharing services can be offered in a LAN over the metadata manager.

 Object-based storage device (OSD)

The SNIA Shared Storage Model defines the so-called object-based storage device (OSD). The idea behind this architecture is to move the position data of the files and the access rights to a separate OSD. OSD offers the same advantages as a file sharing solution, combined with increased performance due to direct access to the storage by the hosts, and central metadata management of the files. The OSD approach functions as follows (Figure 10.19):

• An OSD device exports a large number of byte vectors instead of the LUNs used in block-oriented storage devices. Generally, a byte vector corresponds to a single file.

• A separate OSD metadata manager authenticates the hosts and manages and checks the access rights to the byte vectors. It also provides appropriate interfaces for the hosts.

• After authentication and clearance for access by the OSD metadata manager, the hosts access the OSD device directly via a file-oriented protocol. This generally takes place via a LAN, i.e. a network that is not specialized for storage traffic.
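The control/data-path split described above can be sketched roughly as follows (the classes and object identifiers are invented for illustration and do not reflect the actual OSD command set): the metadata manager authorizes the host, and only then does the host read the byte vector directly from the OSD device.

```python
# Simplified sketch (invented interfaces, not the OSD command set): the metadata
# manager authorizes the host; the host then reads the byte vector directly
# from the OSD device, so user data never passes through the manager.

class OSDMetadataManager:
    def __init__(self, rights):
        self.rights = rights                      # host -> set of object ids it may access

    def authorize(self, host, object_id):
        return object_id in self.rights.get(host, set())

class OSDDevice:
    def __init__(self, objects):
        self.objects = objects                    # object id -> byte vector (roughly, one file)

    def read(self, object_id):
        return self.objects[object_id]

manager = OSDMetadataManager({"host-a": {"file-42"}})
device = OSDDevice({"file-42": b"payload of file 42"})

host, object_id = "host-a", "file-42"
if manager.authorize(host, object_id):            # control path via the metadata manager
    print(device.read(object_id))                 # data path directly to the OSD device
```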

 

Free tutors on Storage network attached block storage Block storage aggregation in a storage device: Network attached block storage with metadata and File server controller: NAS heads



The connection from storage to host via a storage network can be represented in the Shared Storage Model as shown in Figure 10.12. In this case:

• Several hosts share several storage devices.

• Block-oriented protocols are generally used.

• Block aggregation can be used in the host, in the network and in the storage device.

 Block storage aggregation in a storage device: SAN appliance

Block aggregation can also be implemented in a specialized device or server of the storage network in the data path between hosts and storage devices, as in the symmetric storage virtualization (Figure 10.13, Section 5.7.1). In this approach:

• Several hosts and storage devices are connected via a storage network.

• A device or a dedicated server – a so-called SAN appliance – is placed in the data path between hosts and storage devices to perform block aggregation, and data and metadata traffic flows through this.

 Network attached block storage with metadata server: asymmetric block services

The asymmetric block services architecture is identical to the asymmetric storage virtualization approach (Figure 10.14, Section 5.7.2):

• Several hosts and storage devices are connected over a storage network.

• Host and storage devices communicate with each other over a protocol on block level.

• The data flows directly between hosts and storage devices.

• A metadata server outside the data path holds the information regarding the position of the data on the storage devices and maps between logical and physical blocks.
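A toy sketch of this asymmetric division of labour (the class names are assumptions for illustration, not a SNIA-defined interface): the metadata server answers only the question of where a logical block lives, while the host transfers the data itself directly to the storage device.

```python
# Toy sketch with assumed names: the metadata server maps logical blocks to
# (device, physical block) outside the data path; the host then writes the
# data directly to the storage device.

class MetadataServer:
    def __init__(self, mapping):
        self.mapping = mapping                    # logical block -> (device name, physical block)

    def locate(self, logical_block):
        return self.mapping[logical_block]

class StorageDevice:
    def __init__(self):
        self.blocks = {}

    def write(self, physical_block, data):
        self.blocks[physical_block] = data

devices = {"disk-a": StorageDevice(), "disk-b": StorageDevice()}
metadata = MetadataServer({0: ("disk-a", 17), 1: ("disk-b", 4)})

device_name, physical = metadata.locate(1)        # metadata traffic, outside the data path
devices[device_name].write(physical, b"payload")  # user data flows directly to the device
```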

 Multi-site block storage

Figure 10.15 shows how data replication between two locations can be implemented by means of WAN techniques. The data can be replicated on different layers of the model using different protocols:

• between volume managers on the host;

• between specialized devices in the storage network; or

• between storage systems, for example disk subsystems.

If the two locations use different network types or protocols, additional converters can be installed for translation.

 File server

A file server (Section 4.2) can be represented as shown in Figure 10.16. The following points are characteristic of a file server:

• the combination of server and normally local, dedicated storage;

• file sharing protocols for the host access;

• normally the use of a network, for example, a LAN, that is not specialized to the storage traffic;

• optionally, a private storage network can also be used for the control of the dedicated storage.

 File server controller: NAS heads

In contrast to file servers, NAS heads (Figure 10.17, Section 4.2.2) have the following properties:

• They separate storage devices from the controller on the file/record layer, via which the hosts access.

• Hosts and NAS heads communicate over a file-oriented protocol.

• The hosts use a network for this that is generally not designed for pure storage traffic, for example a LAN.

• When communicating downwards to the storage devices, the NAS head uses a block-oriented protocol.

NAS heads have the advantage over file servers that they can share the storage systems with other hosts that access them directly. This makes it possible for both file and block services to be offered by the same physical resources at the same time. In this manner, IT architectures can be designed more flexibly, which in turn has a positive effect upon scalability.

Thursday, April 3, 2008

Free tutors on Clustering Storage, data and information The service subsystem examples of disk-based storage architectures


A cluster is defined in the SNIA Shared Storage Model as a combination of resources with the objective of increasing scalability, availability and management within the shared storage environment (Section 6.4.1). The individual nodes of the cluster can share their resources via distributed volume managers (multi-node LVM) and cluster file systems (Figure 10.8, Section 4.3).

 Storage, data and information

The SNIA Shared Storage Model differentiates strictly between storage, data and information. Storage is space – so-called containers – provided by storage units, on which the data is stored. The bytes stored in containers on the storage units are called data. Information is the meaning – the semantics – of the data. The SNIA Shared Storage Model names the following examples in which data–container relationships arise (Table 10.1).

 Resource and data sharing

In a shared storage environment, in which the storage devices are connected to the host via a storage network, every host can access every storage device and the data stored upon it (Section 1.2). This sharing is called resource sharing or data sharing in the SNIA model, depending upon the level at which the sharing takes place (Figure 10.9). If exclusively the storage systems – and not their data content – are shared, then we talk of resource sharing. This is found in the physical resources, such as disk subsystems and tape libraries, but also within the network. Data sharing denotes the sharing of data between different hosts. Data sharing is significantly more difficult to implement, since the shared data must always be kept consistent, particularly when distributed caching is used. Heterogeneous environments also require additional conversion steps in order to convert the data into a format that the host can understand. Protocols such as NFS or CIFS are used in the more frequently used data sharing within the file/record layer. For data sharing in the block layer, server clusters with shared disk file systems or parallel databases are used.

 The service subsystem

Up to now we have concerned ourselves with the concepts within the layers of the SNIA Shared Storage Model. Let us now consider the service subsystem (Figure 10.10). Within the service subsystem we find the management tasks which occur in a shared storage environment and which we have, for the most part, already discussed in Chapter 8. In this connection, the SNIA Technical Council mention:

• discovery and monitoring

• resource management

• configuration

• security

• billing (charge-back)

• redundancy management, for example, by network back-up

• high availability

• capacity planning.

The individual subjects are not yet dealt with in more detail in the SNIA Shared Storage Model, since the required definitions, specifications and interfaces are still being developed (Section 8.7.3). At this point we expressly refer once again to the checklist in Appendix B, which reflects a cross-section of the questions that crop up here.

EXAMPLES OF DISK-BASED STORAGE ARCHITECTURES

In this section we will present a few examples of typical storage architectures and their properties, advantages and disadvantages, as they are represented by the SNIA in the Shared Storage Model. First of all, we will discuss block-based architectures, such as the direct connection of storage to the host (Section 10.2.1), connection via a storage network (Section 10.2.2), symmetric and asymmetric storage virtualization in the network (Section 10.2.3 and Section 10.2.4) and a multi-site architecture such as is used for data replication between several locations (Section 10.2.5). We then move on to the file/record layer and consider the graphical representation of a file server (Section 10.2.6), a NAS head (Section 10.2.7), the use of metadata controllers for asymmetric file level virtualization (Section 10.2.8) and an object-based storage device (OSD), in which the position data of the files and their access rights is moved to a separate device, a solution that combines file sharing with increased performance due to direct file access and central metadata management of the files (Section 10.2.9).

 Direct attached block storage

Figure 10.11 shows the direct connection from storage to the host in a server-centric architecture. The following properties are characteristic of this structure:

• No connection devices, such as switches or hubs, are needed.

• The host generally communicates with the storage device via a protocol on block level.

• Block aggregation functions are possible both in the disk subsystem and on the host.

Learn more on Combination of the block and file/record layers, Access paths, Caching and Access control


Figure 10.5 shows how block and file/record layer can be combined and represented in the SNIA shared storage model:

• Direct attachment The left-hand column in the figure shows storage connected directly to the server, as is normally the case in a server-centric IT architecture (Section 1.1).

• Storage network attachment

In the second column we see how a disk array is normally connected via a storage network in a storage-centric IT architecture, so that it can be accessed by several host computers (Section 1.2).

• NAS head (NAS gateway) The third column illustrates how a NAS head is integrated into a storage network between SAN storage and a host computer connected via LAN.

• NAS server The right-hand column shows the function of a NAS server with its own dedicated storage in the SNIA Shared Storage Model.

Access paths

Read and write operations of a component on a storage device are called access paths in the SNIA Shared Storage Model. An access path is descriptively defined as the list of components that are run through by read and write operations to the storage devices and responses to them. If we exclude cyclical access paths, then a total of eight possible access paths from applications to the storage devices can be identified in the SNIA Shared Storage Model (Figure 10.6):

1. Direct access to a storage device.

2. Direct access to a storage device via a block aggregation function.

3. Indirect access via a database system.

4. Indirect access via a database system based upon a block aggregation function.

5. Indirect access via a database system based upon a file system.

6. Indirect access via a database system based upon a file system, which is itself based upon a block aggregation function.

7. Indirect access via a file system.

8. Indirect access via a file system based upon a block aggregation function.

 Caching

Caching is the method of shortening the access path of an application – i.e. the number of the components to be passed through – to frequently used data on a storage device. To this end, the data accesses to the slower storage devices are buffered in a faster cache storage. Most components of a shared storage environment can have a cache. The cache can be implemented within the file/record layer, within the block layer or in both.

In practice, several caches working simultaneously on different levels and components are generally used. For example, a read cache in the file system may be combined with a write cache on a disk array and a read cache with pre-fetching on a hard disk (Figure 10.7). In addition, a so-called cache-server (Section 5.7.2), which temporarily stores data for other components on a dedicated basis in order to reduce the need for network capacity or to accelerate access to slower storage, can also be integrated into the storage network. However, the interaction between several cache storages on several components means that consideration must be given to the consistency of data. The more components that use cache storage, the more dependencies arise between the functions of individual components. A classic example is the use of a snapshot function on a component in the block layer, whilst another component stores the data in question to cache in the file/record layer. In this case, the content of the cache within the file/record layer, which we will assume to be consistent, and the content of a volume on a disk array that is a component of the block layer can be different. The content of the volume on the array is thus inconsistent. Now, if a snapshot is taken of the volume within the disk array, a virtual copy is obtained of an inconsistent state of the data. The copy is thus unusable. Therefore, before the snapshot is made within the block layer, the cache in the file/record layer on the physical volume must be destaged, so that it can receive a consistent copy later.
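The required ordering can be sketched as follows (a toy model with invented classes, not a real file system or disk array API): the file/record-layer cache is destaged to the volume first, and only then is the block-layer snapshot taken, so that the copy reflects a consistent state.

```python
# Toy model with invented classes: the file/record-layer cache must be destaged
# to the volume before the block-layer snapshot, otherwise the snapshot captures
# an inconsistent state of the data.

class Volume:
    def __init__(self):
        self.blocks = {}

    def snapshot(self):
        return dict(self.blocks)                  # virtual point-in-time copy of the volume

class FileSystemCache:
    def __init__(self, volume):
        self.volume = volume
        self.dirty = {}                           # blocks written by applications, not yet on the volume

    def write(self, block, data):
        self.dirty[block] = data                  # held only in the file/record-layer cache

    def destage(self):
        self.volume.blocks.update(self.dirty)     # flush the cache contents down to the volume
        self.dirty.clear()

volume = Volume()
cache = FileSystemCache(volume)
cache.write(0, b"new data")

cache.destage()                                   # required ordering: destage first ...
print(volume.snapshot())                          # ... then take the snapshot of a consistent state
```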

 Access control

Access control is the name for the technique that arranges the access to data of the shared storage environment. The term access control should thus be clearly differentiated from the term access path, since the mere existence of an access path does not include the right to access. Access control has the following main objectives:

• Authentication

Authentication establishes the identity of the source of an access.

• Authorization

Authorization grants or refuses actions to resources.

• Data protection

Data protection guarantees that data may only be viewed by authorized persons.

All access control mechanisms ultimately use a form of secure channel between the data on the storage device and the source of an access. In its simplest form, this can be a check to establish whether a certain host is permitted to have access to a specific storage device. Access control can, however, also be achieved by complicated cryptographic procedures, which are secure against the most common external attacks. When establishing a control mechanism it is always necessary to trade off the necessary protection and efficiency against complexity and performance sacrifices. In server-centric IT architectures, storage devices are protected by the guidelines on the host computers and by simple physical measures. In a storage network, the storage devices, the network and the network components themselves must be protected against unauthorized access, since in theory they can be accessed from all host computers. Access control becomes increasingly important in a shared storage environment as the number of components used, the diversity of heterogeneous hosts and the distance between the individual devices rise. Access controls can be established at the following points of a shared storage environment:

• On the host In shared storage environments, access controls comparable with those in server-centric environments can be established at host level. The disadvantage of this approach is, however, that the access rights have to be set on all host computers. Mechanisms that reduce the amount of work by the use of central instances for the allocation and distribution of rights must be suitably protected against unauthorized access. Database systems and file systems can be protected in this manner. Suitable mechanisms for the block layer are currently being planned. The use of encryption technology for the host's network protocol stack is in conflict with performance requirements. Suitable offload engines, which process the protocol stack on the host bus adapter themselves, are available for some protocols.

• In the storage network Security within the storage network is achieved in Fibre Channel SANs by zoning and virtual storage networks (Virtual SAN (VSAN), Section 3.4.2) and in Ethernet-based storage networks by so-called virtual LANs (VLAN). This is always understood to be the subdivision of a network into virtual subnetworks, which permit communication between a number of host ports and certain storage device ports. These guidelines can, however, also be defined on finer structures than ports.

• On the storage device The normal access control procedure on SAN storage devices is the so-called LUN masking, in which the LUNs that are visible to a host are restricted. Thus, the computer sees only those LUNs that have been assigned to it by the storage device (Section 2.7.3).
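A minimal sketch of the LUN masking idea (the mapping below is a hypothetical configuration, not a vendor API): the storage device presents to each host only the LUNs assigned to it.

```python
# Minimal sketch of LUN masking (hypothetical configuration, not a vendor API):
# the storage device only presents to each host the LUNs assigned to it.

LUN_MASKING = {
    "host-a": {0, 1},        # host-a may see LUN 0 and LUN 1
    "host-b": {2},           # host-b may only see LUN 2
}

def visible_luns(host, all_luns):
    """Return the subset of LUNs that the storage device presents to this host."""
    return all_luns & LUN_MASKING.get(host, set())

print(visible_luns("host-a", {0, 1, 2, 3}))   # {0, 1}
print(visible_luns("host-c", {0, 1, 2, 3}))   # set() - an unknown host sees nothing
```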

KNOW MORE ON THE SNIA SHARED STORAGE MODEL, THE LAYERS AND THE MODEL


The SNIA Shared Storage Model defines four layers (Figure 10.2):

I. Storage devices

II. Block aggregation layer

III. File/record layer

IIIa. Database

IIIb. File system

IV. Applications

Applications are viewed as users of the model and are thus not described in the model. They are, however, implemented as a layer in order to illustrate the point in the model to which they are linked. In the following we'll consider the file/record layer (Section 10.1.6), the block layer (Section 10.1.7) and the combination of both (Section 10.1.8) in detail.

10.1.6 The file/record layer

The file/record layer maps database records and files on the block-oriented volume of the storage devices. Files are made up of several bytes and are therefore viewed as byte vectors in the SNIA model. Typically, file systems or database management systems take over these functions. They operate directories of the files or records, check the access, allocate storage space and cache the data (Chapter 4). The file/record layer thus works on volumes that are provided to it from the block layer below. Volumes themselves consist of several arranged blocks, so-called block vectors. Database systems map one or more records, so-called tuple of records, onto volumes via tables and table spaces:

Tuple of records −→ tables −→ table spaces −→ volumes

In the same way, file systems map bytes onto volumes by means of files:

Bytes −→ files −→ volumes

Some database systems can also work with files, i.e. byte vectors. In this case, block vectors are grouped into byte vectors by means of a file system – an additional abstraction level. Since an additional abstraction level costs performance, only smaller databases work in a file-oriented manner. In large databases the additional mapping layer of byte to block vectors is dispensed with for performance reasons. The functions of the file/record layer can be implemented at various points (Figure 10.3, Section 5.6):

• Exclusively on the host In this case, the file/record layer is implemented entirely on the host. Databases and the host-based file systems work in this way.

• Both in the client and also on a server component The file/record layer can also be implemented in a distributed manner. In this case the functions are distributed over a client and a server component. The client component is realized on a host computer, whereas the server component can be realized on the following devices:

• NAS/file server A NAS/file server is a specialized host computer usually with a locally connected dedicated storage device (Section 4.2.2).

• NAS head A host computer that offers the file serving services, but which has access to external storage connected via a storage network. NAS heads correspond with the devices called NAS gateways in our book (Section 4.2.2). In this case, client and server components work over network file systems such as NFS or CIFS (Section 4.2).

The block layer

The block layer differentiates between block aggregation and the block-based storage devices. The block aggregation in the SNIA model corresponds to our definition of the virtualization on block level (Section 5.5). SNIA thus uses the term 'block aggregation' to mean the aggregation of physical blocks or block vectors into logical blocks or block vectors. To this end, the block layer maps the physical blocks of the disk storage devices onto logical blocks and makes these available to the higher layers in the form of volumes (block vectors). This either occurs via a direct (1 : 1) mapping, or the physical blocks are first aggregated into logical blocks, which are then passed on to the upper layers in the form of volumes (Figure 10.4). In the case of SCSI, the storage devices of the storage device layer exist in the form of one or more so-called logical units (LU). Further tasks of the block layer are the labelling of the logical units using so-called logical unit numbers (LUNs), caching and – increasingly in the future – access control.

Block aggregation can be used for various purposes, for example:

• Volume/space management The typical task of a volume manager is to aggregate several small block vectors to form one large block vector. On SCSI level this means aggregating several logical units to form a large volume, which is passed on to the upper layers such as the file/record layer (Section 4.1.4).

• Striping In striping, physical blocks of different storage devices are aggregated to one volume. This increases the I/O throughput of the read and write operations, since the load is distributed over several physical storage devices (Section 2.5.1).

• Redundancy In order to protect against failures of physical data carriers, RAID (Section 2.5) and remote mirroring (Section 2.7.2) are used. Snapshots (instant copies) can also be used for the redundant storage of data (Section 2.7.1).

The block aggregation functions of the block layer can be realized at different points of the shared storage environment (Section 5.6):

• On the host Block aggregation on the host is encountered in the form of a logical volume manager software, in device drivers and in host bus adapters.

• On a component of the storage network The functions of the block layer can also be realized in connection devices of the storage network or in specialized servers in the network.

• In the storage device Most commonly, the block layer functions are implemented in the storage devices themselves, for example, in the form of RAID or volume manager functionality.

In general, various block aggregation functions can be combined at different points of the shared storage environment. In practical use, RAID may, for example, be used in the disk subsystem with additional mirroring from one disk subsystem to another via the volume manager on the host computer (Section 4.1.4). In this setup, RAID protects against the failure of physical disks of the disk subsystem, whilst the mirroring by means of the volume manager on the host protects against the complete failure of a disk subsystem. Furthermore, the performance of read operations is increased in this set-up, since the volume manager can read from both sides of the mirror (Section 2.5.2).
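As a small illustration of the block aggregation used for striping (a simplified round-robin layout with invented device names, not a specific product's algorithm), consecutive logical blocks of a volume can be mapped to alternating physical devices like this:

```python
# Simplified round-robin striping sketch (invented device names, not a specific
# product's layout): consecutive logical blocks of one volume are spread over
# several physical devices, so consecutive accesses hit different devices.

def stripe_map(logical_block, devices, stripe_size=1):
    """Map a logical block number of a striped volume to (device, physical block)."""
    stripe_index = logical_block // stripe_size
    device = devices[stripe_index % len(devices)]
    physical_block = (stripe_index // len(devices)) * stripe_size + logical_block % stripe_size
    return device, physical_block

disks = ["disk-0", "disk-1", "disk-2"]
for block in range(6):
    print(block, stripe_map(block, disks))
# 0 -> ('disk-0', 0), 1 -> ('disk-1', 0), 2 -> ('disk-2', 0), 3 -> ('disk-0', 1), ...
```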

Free Tutors on the SNIA Shared Storage Model: Graphical representations, An elementary overview and The components


The SNIA Shared Storage Model further defines how storage architectures can be graphically illustrated. Physical components are always represented as three-dimensional objects, whilst functional units should be drawn in two-dimensional form. The model itself also defines various colours for the representation of individual component classes. In the black and white format of the book, we have imitated these using shades of grey. A coloured version of the illustrations to this chapter can be found on our home page. Thick lines in the model represent the data transfer, whereas thin lines represent the metadata flow between the components.

 An elementary overview

The SNIA Shared Storage Model first of all defines four elementary parts of a shared storage environment (Figure 10.1):

1. File/record layer The file/record layer is made up of database and file system.

2. Block layer The block layer encompasses the storage devices and the block aggregation. The SNIA Shared Storage Model uses the term 'aggregation' instead of the often ambiguously used term 'storage virtualization'. In Chapter 5, however, we used the term 'storage virtualization' to mean the same thing as 'aggregation' in the SNIA model, in order to avoid ambiguity.

3. Services subsystem The functions for the management of the other components are defined in the services subsystem.

4. Applications Applications are not discussed further by the model. They will be viewed as users of the model in the widest sense.

 The components

The SNIA Shared Storage Model defines the following components:

• Interconnection network The interconnection network represents the storage network, i.e. the infrastructure, that connects the individual elements of a shared storage environment with one another. The interconnection network can be used exclusively for storage access, but it can also be used for other communication services. Our definition of a storage network (Section 1.2) is thus narrower than the definition of the interconnection network in the SNIA model. The network must always provide a high-performance and easily scalable connection for the shared storage environment. In this context, the structure of the interconnection network – for example redundant data paths between two components to increase fault-tolerance – remains just as open as the network techniques used. It is therefore a prerequisite of the model that the components of the shared storage environment are connected over a network without any definite communication protocols or transmission techniques being specified.

In actual architectures or installations, Fibre Channel, Fast Ethernet, Gigabit Ethernet, InfiniBand and many other transmission techniques are used (Chapter 3). Communication protocols such as SCSI, Fibre Channel FCP, TCP/IP, RDMA, CIFS or NFS are based upon these.

• Host computer Host computer is the term used for computer systems that draw at least some of their storage from the shared storage environment. According to SNIA, these systems were often omitted from classical descriptive approaches and not viewed as part of the environment. The SNIA shared storage model, however, views these systems as part of the entire shared storage environment because storage-related functions can be implemented on them. Host computers are connected to the storage network via host bus adapters or network cards, which are operated by means of their own drivers and software. Drivers and software are thus taken into account in the SNIA Shared Storage Model. Host computers can be operated fully independently of one another or they can work on the resources of the storage network in a compound, for example, a cluster.

• Physical storage resource All further elements that are connected to the storage network and are not host computers are known by the term 'physical storage resource'. This includes simple hard disk drives, disk arrays, disk subsystems and controllers plus tape drives and tape libraries. Physical storage resources are protected against failures by means of redundant data paths (Section 6.3.1), replication functions such as snapshots and mirroring (Section 2.7) and RAID (Section 2.5).

• Storage device A storage device is a special physical storage resource that stores data.

• Logical storage resource The term 'logical storage resource' is used to mean services or abstract compositions of physical storage resources, storage management functions or a combination of these. Typical examples are volumes, files and data movers.

• Storage management functions The term 'storage management function' is used to mean the class of services that monitor and check (Chapter 8) the shared storage environment or implement logical storage resources. These functions are typically implemented by software on physical storage resources or host computers.

Wednesday, April 2, 2008

Free tutors on the SNIA Shared Storage Model and The functional approach


The fact that there is a lack of any unified terminology for the description of storage architectures has already become apparent at several points in previous chapters. There are thus numerous components in a storage network which, although they do the same thing, are called by different names. Conversely, there are many systems with the same name, but fundamentally different functions. A notable example is the term 'data mover' relating to server-free back-up (Section 7.8.1) in storage networks. When this term is used it is always necessary to check whether the component in question is one that functions in the sense of the 3rd-party SCSI Copy Command or, for example, a software component of back-up software on a special server, which implements the server-free back-up without 3rd-party SCSI. This example shows that the type of product being offered by a manufacturer and the functions that the customer can ultimately expect from this product are often unclear. This makes it difficult for customers to compare the products of individual manufacturers and find out the differences between the alternatives on offer. There is no unified model for this with clearly defined descriptive terminology. For this reason, in 2001 the Technical Council of the Storage Networking Industry Association (SNIA) introduced the so-called Shared Storage Model in order to unify the terminology and descriptive models used by the storage network industry. Ultimately, the SNIA wants to use the SNIA Shared Storage Model to establish a reference model, which will have the same importance for storage architectures as the seven-tier OSI model has for computer networks. In this chapter, we would first like to introduce the disk-based Shared Storage Model (Section 10.1) and then show, based upon examples (Section 10.2), how the model can be used for the description of typical disk storage architectures. In Section 10.3 we introduce the extension of the SNIA model to the description of tape functions. We then discuss examples of tape-based back-up architectures (Section 10.4). Whilst describing the SNIA Shared Storage Model we often refer to text positions in this book where the subject in question is discussed in detail, which means that this chapter also serves as a summary of the entire book.

THE MODEL

In this book we have spoken in detail about the advantages of the storage-centric architecture in relation to the server-centric architecture. The SNIA sees its main task as being to communicate this paradigm shift and to provide a forum for manufacturers and developers so that they can work together to meet the challenges and solve the problems in this field. In the long run, an additional reason for the development of the Shared Storage Model by SNIA was the creation of a common basis for communication between the manufacturers who use the SNIA as a platform for the exchange of ideas with other manufacturers. Storage-centric IT architectures are called shared storage environments by the SNIA. We will use both terms in the following. First of all, we will describe the functional approach of the SNIA model (Section 10.1.1) and the SNIA conventions for graphical representation (Section 10.1.2). We will then consider the model (Section 10.1.3), its components (Section 10.1.4) and the layers 'file/record layer' and 'block layer' in detail (Section 10.1.5 to Section 10.1.8). Then we will introduce the definitions and representation of concepts from the SNIA model, such as access paths (Section 10.1.9), caching (Section 10.1.10), access control (Section 10.1.11), clustering (Section 10.1.12), data (Section 10.1.13) and resource and data sharing (Section 10.1.14). Finally, we will take a look at the service subsystem (Section 10.1.15).

The functional approach

The SNIA Shared Storage Model first of all describes functions that have to be provided in a storage-centric IT architecture. This includes, for example, the block layer or the file/record layer. The SNIA model describes both the tasks of the individual functions and also their interaction. Furthermore, it introduces components such as server ('host computer') and storage networks ('interconnection network'). Due to the separation of functions and components, the SNIA Shared Storage Model is suitable for the description of various architectures, specific products and concrete installations. The fundamental structures, such as the functions and services of a shared storage environment, are highlighted. In this manner, functional responsibilities can be assigned to individual components and the relationships between control and data flows in the storage network worked out. At the same time, the preconditions for interoperability between individual components and the type of interoperability can be identified. In addition to providing a clear terminology for the elementary concepts, the model should be simple to use and, at the same time, extensive enough to cover a large number of possible storage network configurations. The model itself describes, on the basis of examples, possible practicable storage architectures and their advantages and disadvantages. We will discuss these in Section 10.2 without evaluating them or showing any preference for specific architectures. Within the model definition, however, only a few selected examples will be discussed in order to highlight how the model can be applied for the description of storage-centred environments and further used.

Know more about THE IEEE 1244 STANDARD FOR REMOVABLE MEDIA MANAGEMENT Operational characteristics of the media manager


From the point of view of the client, the media manager works as a server that waits for MMP commands, which the client sends via a TCP/IP connection. The media manager executes these commands, generates appropriate responses and sends these back to the clients. All commands are given unambiguous task identifiers. The responses contain the task identifier of the command in question. The response takes place in two stages. First, the successful receipt of the command is acknowledged. In a second response, the application is informed whether the command has been successfully executed and which responses the system has supplied.

Example 1: An application wants to mount the volume with the name back-up-1999-12-31. To this end, it sends the following command to the media manager:

mount task["1"] volname["back-up-1999-12-31"] report[MOUNTLOGICAL."MountLogicalHandle"];

The media manager has recognized the command and accepted it for processing and therefore sends the following response:

response task["1"] accepted;

Now the media manager will transport the cartridge containing the volume into a drive to which the application has access. Once the cartridge has been successfully inserted, a response is generated that could look like this:

response task["1"] success text["/dev/rmt0"];

The media manager stores all commands in a task queue until all resources required for execution are available. Once all the resources are available, the media manager removes the command from the task queue and executes it. If several commands are present that require the same resources, the media manager selects the next command to be carried out on the basis of priorities or on a first come, first served basis. All other commands remain in the task queue until the resources in question become free again. In this manner, libraries, drives and also cartridges can be shared. Commands that are in the task queue can be removed again using the Cancel command.
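As a rough illustration of how a client might speak this text-based protocol over TCP/IP (the media manager address, port and the simple response handling below are assumptions for this sketch; a real IEEE 1244 media manager defines its own connection and session handling), the mount command from Example 1 could be sent like this:

```python
# Bare-bones sketch of a client sending the MMP mount command over TCP/IP.
# The media manager address, port and the simple response handling are
# assumptions for illustration only.
import socket

MEDIA_MANAGER = ("mediamanager.example.com", 9000)   # assumed address and port

command = (
    'mount task["1"] volname["back-up-1999-12-31"] '
    'report[MOUNTLOGICAL."MountLogicalHandle"];'
)

with socket.create_connection(MEDIA_MANAGER) as conn:
    conn.sendall(command.encode("ascii"))
    accepted = conn.recv(4096).decode("ascii")   # e.g. response task["1"] accepted;
    result = conn.recv(4096).decode("ascii")     # e.g. response task["1"] success text["/dev/rmt0"];
    print(accepted)
    print(result)
```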

Operational characteristics of the library and drive managers

The library manager receives the media manager's commands via the library management protocol (LMP) and converts these into the specific commands for the hardware in question. From the point of view of the media manager, a unified abstract interface that conceals the properties of the hardware in question thus exists for all libraries. New hardware can thus be integrated into the management system using a suitable library manager without having to make changes to the whole system. Accordingly, drive manager implementations of the abstract drive management protocol (DMP) are interfaces for a certain drive hardware. However, drive management must also take into account the specific properties of the various client platforms upon which the applications that want to use the media management system run. If such an application is running on a UNIX-compatible platform, the drive manager must provide the corresponding name of a device special file for access to the drive. Under Windows, such a drive manager must supply a Windows-specific file name, such as \\.\TAPE0.

Privileged and non-privileged clients

The media manager carries out requests from clients that want to take advantage of the media management services. From the point of view of the media manager there are privileged and non-privileged clients:

• Non-privileged clients, such as back-up systems, can only handle objects for which they have been granted an appropriate authorization.

• Privileged clients, usually administrative applications, may perform all actions and manipulate all objects. They serve primarily to include non-privileged applications in the system and to establish suitable access controls.

The IEEE 1244 data model (Table 9.1)

In addition to the architecture and the protocols for communication, the standard also describes a complete data model, which includes all objects, and their attributes, that are necessary for the representation of the media management system. Objects can be provided with additional application-specific attributes. The object model can thus be dynamically and flexibly adapted to the task at hand, without changes being necessary to the underlying management system.

Media Management Protocol (MMP)

The media management protocol (MMP) is used by the applications to make use of the media management services of an IEEE 1244-compatible system. MMP is a text-based protocol, which exchanges messages over TCP/IP. The syntax and semantics of the individual protocol messages are specified in the MMP specification IEEE 1244.3. MMP permits applications to allocate and mount volumes, read and write metadata and to manage and share libraries and drives platform-independently. Due to the additional abstraction levels, the application is decoupled from the direct control of the hardware. Thus, applications can be developed independently of the capability of the connected hardware and can be made available to a large number of different types of removable media.

Table 9.1 The most important objects of the IEEE 1244 data model

APPLICATION: Authorized client application. Access control is performed on the basis of applications. User management is not part of this standard, since it is assumed that it is not individual users, but applications that already manage their users, that will use the services of the media management system.

AI: Authorized instances of a client application. All instances of an application have unrestricted access to resources that are assigned to the application.

LIBRARY: Automatic or manually operated libraries.

LM: Library manager. Library managers know the details of a library. The library manager protocol serves as a hardware-independent interface between media manager and library manager.

BAY: Part of a LIBRARY (contains DRIVEs and SLOTs).

SLOT: Individual storage space for CARTRIDGEs within a BAY.

SLOTGROUP: Group of SLOTs to represent a magazine, for example, within a LIBRARY.

SLOTTYPE: Valid types for SLOTs, for example 'LTO', 'DLT', '3480', 'QIC' or 'CDROM'.

DRIVE: Drives, which can accept CARTRIDGEs for writing or reading.

DRIVEGROUP: Groups of drives.

DRIVEGROUPAPPLICATION: This object makes it possible for applications to access drives in a DRIVEGROUP. This connection can be assigned a priority so that several DRIVEGROUPs with different priorities are available. The media manager selects a suitable drive according to priority.

DM: Drive manager. Drive managers know the details of a drive and make this available to the media manager. The drive manager protocol serves as a hardware-independent interface between media manager and drive manager.

CARTRIDGE: Removable data carrier; media.

CARTRIDGEGROUP: Group of CARTRIDGEs.

CARTRIDGEGROUPAPPLICATION: This object makes it possible for applications to access CARTRIDGEs in a CARTRIDGEGROUP. This connection can be assigned a priority so that several CARTRIDGEGROUPs with different priorities are available. The media manager selects a suitable CARTRIDGE according to priority in order to allocate a VOLUME, if no further entries are made.
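As mentioned above, every object can carry additional application-specific attributes besides its standard attributes. A minimal Python sketch of such an extensible object model might look as follows; the attribute names are examples, not the normative IEEE 1244 names.

class ManagedObject:
    def __init__(self, object_type, name, **standard_attrs):
        self.object_type = object_type     # e.g. "CARTRIDGE", "DRIVE", "LIBRARY"
        self.name = name
        self.attrs = dict(standard_attrs)  # standard attributes of the object

    def set_application_attribute(self, key, value):
        # Applications may attach their own metadata without any change
        # to the underlying management system.
        self.attrs[key] = value

cart = ManagedObject("CARTRIDGE", "VOL001", cartridge_type="LTO", sides=1)
cart.set_application_attribute("backup.retention_days", 90)  # application-specific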

FREE TUTORS ABOUT THE IEEE 1244 STANDARD FOR REMOVABLE MEDIA MANAGEMENT SYSTEM ARCHITECTURE

As early as 1990, the IEEE Computer Society set up the 1244 project for the development of standards for storage systems. The Storage System Standards Working Group was also established with the objective of developing a reference model for mass storage systems (Mass Storage System Reference Model/MSSRM). This reference model has significantly influenced the design of some storage systems that are in use today. The model was then revised a few times and in 1994 released as the IEEE Reference Model for Open Storage Systems Interconnection (OSSI). Finally, in the year 2000, after further revisions, the 1244 Standard for Media Management Systems was released. This standard consists of a series of documents that describe a platform-independent, distributed management system for removable media. It also defines both the architecture for a removable media management system and its interfaces towards the outside world.

The architecture makes it possible for software manufacturers to implement very scalable, distributed software systems, which serve as generic middleware between application software and library and drive hardware. The services of the system can thus be consolidated in a central component and from there made available to all applications. The specification paid particular attention to platform independence, and the heterogeneous environment of current storage networks was thus taken into account amazingly early on.

Systems that build upon this standard can manage different types of media. In addition to the typical media for the computer field such as magnetic tape, CD, DVD or optical media, audio and video tapes, files and video disks can also be managed. In actual fact, there are no assumptions about the properties of a medium in IEEE 1244-compliant systems. Their characteristic features (number of sides, number of partitions, etc.) must be defined for each media type that the system is to support. There is a series of predefined types, each with their own properties. This open design makes it possible to specify new media types and their properties at any time and to add them to the current system.

In addition to neutrality with regard to media types, the standard permits the management of both automatic and manually-operated libraries. An operator interface, which is also documented, and with which messages are sent to the appropriate administrators of a library, serves this purpose. In the following sections we wish to examine more closely the architecture and functionality of a system based upon the IEEE standard.

Media management system architecture

The IEEE 1244 standard describes a client/server architecture (Figure 9.9). Applications such as network back-up systems take on the role of the client that makes use of the services of the removable media management system. The following components are individually defined:

 • a media management component, which serves as a central repository for the metadata and provides mechanisms for controlling and co-ordinating the use of media, libraries and drives;

• a library manager component, which controls the library hardware on behalf of the media manager and transmits the properties and the content of the library to the media manager;

• a drive manager component, which manages the drive hardware on behalf of the media manager and transmits the properties of the drives to the media manager.

In addition, the standard defines the interfaces for the communication with these components:

• the Media Management Protocol (MMP) for the communication between application (client) and media manager (server);

• the Library Management Protocol (LMP) for the communication between library manager and media manager;

• the Drive Management Protocol (DMP) for the communication between drive manager and media manager.

These protocols use TCP/IP as the transport layer. As in the popular Internet applications HTTP, FTP or SMTP, commands are sent via TCP/IP in the form of text messages. Such protocols can be implemented and used equally simply on different platforms.

The advantage of this approach is that the media manager component can be implemented as a generic application, i.e. independently of the specific library and drive hardware used. The differences, in particular in the control of the hardware, are encapsulated in the library manager or drive manager for the hardware in question. For a new tape library, therefore, only a new library manager component needs to be implemented that converts the specific interface of the library into the library management protocol, so that the library can be linked into an existing media manager installation. A small sketch of this encapsulation is shown below.

The next sections describe how communication takes place between clients and servers and how the media manager processes the commands.
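A minimal Python sketch of this encapsulation, assuming a hypothetical library model, could look as follows; the class and method names are illustrative and do not reproduce the actual LMP operations.

from abc import ABC, abstractmethod

class LibraryManager(ABC):
    """Hardware-independent view of a library, as seen by the media manager."""

    @abstractmethod
    def mount(self, cartridge_id, drive_id):
        """Move a cartridge into a drive."""

    @abstractmethod
    def inventory(self):
        """Return the cartridges currently present in the library."""

class AcmeTapeLibraryManager(LibraryManager):
    """Hypothetical manager for one concrete library model."""

    def mount(self, cartridge_id, drive_id):
        # Here the vendor-specific robotics commands would be issued.
        print(f"ACME robot: move {cartridge_id} into {drive_id}")

    def inventory(self):
        return ["VOL001", "VOL002"]  # dummy data for the sketch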

Learn more on Life cycle management

The life cycle of a cartridge describes a series of states that a cartridge can take on over the course of time. Essentially, the following events, which lead to state transitions, can be observed in the life cycle:

• Initialization Cartridges are announced to the system.

• Allocation of access rights Applications are permitted to access certain cartridges.

• Use Cartridges are used for reading and writing.

• Deallocation The data on a cartridge is no longer needed; the storage space can once again be made available to other applications.

• Retirement The cartridge has reached the end of its life cycle and is removed from the system.

These events directly yield a series of states for cartridges.

States of media

In addition to the location of the media, it is also important that storage administrators are aware of the state of the media. For example, a cartridge may not be removed from the system until no more logical volumes are allocated to it. Otherwise, there is a danger of data loss, because back-up software can no longer access this volume. During its life cycle (Figure 9.8) a cartridge can take on the following states (a small state-transition sketch follows the list):

• Undefined (unknown) No further information is known about a cartridge. This is the initial state of a cartridge before it is taken into a management system.

• Defined A cartridge is announced to the system. Information about the type, cartridge label, etc. is given, and the cartridge is thus described.

• Available In addition to the information that a cartridge exists in the system, information about whether, and where, data can still be written to the cartridge is also important. If this information is known, the cartridge is available for applications.

• Allocated In order that an application can use a cartridge, suitable storage space on a cartridge must be allocated by the system. This allocation of storage space generally leads to the placing of a volume on a partition of a cartridge. The application should be able to freely choose the identification of the volume. In general, the state of the first volume placed determines the state of the entire cartridge. To this end, as soon as the first volume has been placed on a cartridge, the state of the cartridge is set to 'allocated'.

• Deallocated If the storage space (the volume) is no longer required by an application, the volume can be deallocated. The system should then delete all information about the volume. The application should continue to be able to reallocate the same storage space. If the storage space is to be made available to other applications, a cartridge must be recycled.

• Recycled Depending upon system configuration, once all volumes have been removed from a cartridge the entire storage space can be made available to other applications.

• Purged A cartridge, and all information about this cartridge, is completely removed from the system. In general, this state is reached at the end of a cartridge's life cycle. Typically, this state exists only for a short time, since the cartridge is immediately placed in the undefined state.
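The following Python sketch models these states and the main transitions between them as a simple state machine. The transition table is a deliberate simplification for illustration; the complete rules are given by the standard.

from enum import Enum, auto

class CartridgeState(Enum):
    UNDEFINED = auto()
    DEFINED = auto()
    AVAILABLE = auto()
    ALLOCATED = auto()
    DEALLOCATED = auto()
    RECYCLED = auto()
    PURGED = auto()

# event -> (required current state, resulting state); simplified transition table
TRANSITIONS = {
    "define":     (CartridgeState.UNDEFINED,   CartridgeState.DEFINED),
    "make_ready": (CartridgeState.DEFINED,     CartridgeState.AVAILABLE),
    "allocate":   (CartridgeState.AVAILABLE,   CartridgeState.ALLOCATED),
    "deallocate": (CartridgeState.ALLOCATED,   CartridgeState.DEALLOCATED),
    "recycle":    (CartridgeState.DEALLOCATED, CartridgeState.RECYCLED),
    "purge":      (CartridgeState.RECYCLED,    CartridgeState.PURGED),
}

def apply_event(state, event):
    required, result = TRANSITIONS[event]
    if state is not required:
        raise ValueError(f"event '{event}' not allowed in state {state.name}")
    return result

state = CartridgeState.UNDEFINED
for event in ("define", "make_ready", "allocate"):
    state = apply_event(state, event)
print(state)  # CartridgeState.ALLOCATED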

Policy-based life cycle management

Certain tasks should be automated so that as little manual intervention as possible is required during the management of the data carriers. In life cycle management, some tasks positively demand to be performed automatically. These include:

• the monitoring of retention periods;

• transportation to the next storage location (movement or rotation);

• the copying of media when it reaches a certain age;

• the deletion of media at the end of the storage period;

• the recycling or automatic removal of the cartridge from the system.

The individual parameters for the automated tasks are specified by suitable policies. Individual cartridges, or groups of cartridges, are assigned suitable policies.
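A policy of this kind could be sketched in Python roughly as follows; the field names and thresholds are assumptions made only for this illustration.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class LifeCyclePolicy:
    retention_days: int          # how long the data must be kept
    offsite_after_days: int      # when to move the cartridge to offline storage
    copy_after_days: int         # copy media once they reach a certain age
    recycle_when_expired: bool   # recycle automatically at the end of retention

def actions_for(written_on, policy, today=None):
    """Derive the automated actions that are currently due for one cartridge."""
    today = today or date.today()
    age = today - written_on
    due = []
    if age >= timedelta(days=policy.offsite_after_days):
        due.append("move to offsite storage")
    if age >= timedelta(days=policy.copy_after_days):
        due.append("copy to fresh media")
    if age >= timedelta(days=policy.retention_days) and policy.recycle_when_expired:
        due.append("recycle cartridge")
    return due

policy = LifeCyclePolicy(retention_days=365, offsite_after_days=30,
                         copy_after_days=180, recycle_when_expired=True)
print(actions_for(date(2008, 1, 1), policy, today=date(2008, 3, 1)))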

Tutors on Monitoring management of removable media

The large number of devices and media that have to be monitored in a data centre makes it almost impossible for monitoring to be performed exclusively by administrators. Automatic control of the system, or at least of parts of it, is therefore absolutely necessary for installations above a certain size. For removable media, in particular, it is important that monitoring is well constructed because in daily operation there is too little time to verify every back-up. If errors creep in whilst the system is writing to tape this may not be recognized until the data needs to be restored – when it is too late. If there is no second copy, the worst conceivable incident for the data centre has occurred: data loss!

Modern tape drives permit very good monitoring of their state. This means that the number of read-write errors that cannot be rectified by the built-in firmware, and also the number of load operations, are stored in the drive. Ideally, this data will be read by the management system and stored so that it is available for further evaluations. A further step would be to have this data automatically analyzed by the system. If certain error states are reached, actions can be triggered automatically so that at least no further error states are permitted. Under certain circumstances, errors can even be rectified automatically, for example by switching a drive off and back on again. In the worst case, it is only possible to mark the drive as defective so that it is not used further. In these tasks, too, a mechanism controlled by means of rules can help and significantly take the pressure off the administrator.

The data stored on the drives not only provides information on the drives themselves, but also on the loaded tapes. This data can be used to implement tape quality management, which, for example, monitors the error rates during reading and writing and, if necessary, copies the data to a new tape if a certain threshold is exceeded.
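The following Python sketch shows what such rule-based monitoring might look like; the counter names and thresholds are invented for illustration and would in practice come from the drive's log data and from site policy.

def evaluate_drive(counters, max_hard_errors=10, max_loads=5000):
    """Return automatic actions suggested by drive-reported counters."""
    actions = []
    if counters.get("unrecovered_rw_errors", 0) > max_hard_errors:
        actions.append("mark drive defective and stop using it")
    elif counters.get("recovered_rw_errors", 0) > 100:
        actions.append("power-cycle the drive and schedule cleaning")
    if counters.get("load_count", 0) > max_loads:
        actions.append("notify administrator: drive nearing end of life")
    return actions

def evaluate_tape(error_rate, copy_threshold=0.01):
    """Tape quality management: copy data off tapes whose error rate is too high."""
    return ["copy data to a new tape"] if error_rate > copy_threshold else []

print(evaluate_drive({"unrecovered_rw_errors": 12, "load_count": 6000}))
print(evaluate_tape(0.02))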

Reporting

In addition to media management and drive and library sharing, a powerful system requires the recording of all actions. For certain services it is even a legal requirement that so-called security audits are performed. Therefore, all actions must be precisely logged. In addition, the log data must be protected against manipulation in an appropriate manner. With the aid of a powerful interface, it should be possible to request data including the following:

• When was a cartridge incorporated into the system?

• Who allocated which volume to which cartridge when?

• Who accessed which volume when?

• Was this volume just read or also written?

• Which drive was used?

• Was this an authorized access or was access refused?

The following requirements should be fulfilled by the reporting module of a removable media management system:

• Audit trails As already mentioned, it should be possible to obtain a complete list of all accesses to a medium. Individual entries in this list should give information about who accessed a medium, for how long, and with what access rights.

• Usage statistics Data about when the drives were used, and for how long they were used, is important in order to make qualitative statements about the actual utilization of all drives. At any point in time, were sufficient drives available to carry out all mount requests? Are more drives available than the maximum number needed at the same time over the last twelve months? The answers to such questions can be found in the report data (a small sketch of such an evaluation follows this list). Like the utilization of the drives, the available storage space is, of course, also of interest. Was enough free capacity available? Were there bottlenecks?

• Error statistics Just like the data on the use of resources, data regarding the errors that occurred during use is also of great importance for the successful use of removable media. Have the storage media of manufacturer X caused fewer read-write errors in the drives of manufacturer Y than the media of manufacturer Z? Appropriate evaluations help considerably in the optimization of the overall performance of a system.

• Future planning Predictions for the future can be made from the above-mentioned statistics. How will the need for storage grow? How many drives will be used in twelve months? And how many slots and cartridges? A management system should be able to help in the search for answers to these questions. In order that future changes can also be simply carried out, the addition of further drives or cartridges must also be possible without any problems and must not require any changes to the existing applications that use the management services.
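As an illustration of such an evaluation, the following Python sketch derives the peak number of drives that were in use at the same time from logged mount intervals; the record format is an assumption made for this example.

def peak_concurrent_mounts(mount_records):
    """mount_records: iterable of (mount_time, unmount_time) pairs."""
    events = []
    for start, end in mount_records:
        events.append((start, +1))  # a drive becomes busy
        events.append((end, -1))    # a drive becomes free
    busy = peak = 0
    for _, delta in sorted(events):  # ties: unmounts (-1) are processed first
        busy += delta
        peak = max(peak, busy)
    return peak

# Example: three mounts, at most two of them overlapping, need only two drives.
records = [(1, 5), (2, 3), (6, 8)]
print(peak_concurrent_mounts(records))  # 2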

Free tutors on Media tracking Grouping, pooling Drive pools

As an integral part of a disaster recovery solution, a management system must ensure that all removable media plus the appropriate metadata remain in the system until they are deleted by an appropriately authorized user. Thus, under no circumstances may tapes be 'lost' or removed from the system without authorization. Furthermore, it must be possible to determine the storage location of each medium at all times. If access is available to the media online, for example in automatic tape libraries in which tapes can be automatically identified by the reading of a barcode label by a scanner, a suitable audit can be performed at any time. In such an audit, the content of the inventory is compared with the real existing tapes (a small sketch of such an audit follows at the end of this section). Such libraries also automatically report the opening of a door. After the door has been closed, an audit should once again be automatically performed in order to ensure that no media have been removed from the system without authorization.

If a part of the media is withdrawn from direct access, either to a well-protected safe or to another manually-operated library, this storage place must be managed with appropriate care by the responsible administrators. Ideally, the management software provides an interface for this vaulting. In order to increase the reliability of a disaster recovery concept and to fulfil statutory provisions, a two-stage or multi-stage strategy made up of online and offline storage is often pursued. Storage media are first written in automatic libraries, then stored offline for a certain period of time and subsequently either taken out of circulation or reused (Figure 9.5). Media that are in transit from an online library to an offline storage place must be identified.

A management system for removable media should serve both as a central repository for all resources and as a universal interface for applications. It should not be possible for any application to withdraw itself from the control of the system and access or move media in an uncontrolled manner. Only thus is it actually guaranteed that all media in the system can be located at any time.

Such a central interface is always in danger of becoming a single point of failure. It is therefore very wise to use appropriate measures to guarantee a high level of availability for the entire solution. Consequently, care must be taken to ensure that all components of the entire system, from the hardware through the operating systems used with the media management to the back-up software, are designed to have a high level of availability. The hardware often offers suitable options. Drives and media changers are available with redundant power supplies and redundant access paths. Modern operating systems such as AIX or Solaris can automatically use such redundantly designed access paths in the event of a fault.
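A library audit of the kind described above essentially compares two sets: the recorded inventory and the barcodes actually scanned. A minimal Python sketch with invented volume names:

def audit_library(recorded_inventory, scanned_barcodes):
    """Compare the stored inventory with the barcodes actually found in the library."""
    recorded = set(recorded_inventory)
    scanned = set(scanned_barcodes)
    return {
        "missing": recorded - scanned,     # possibly removed without authorization
        "unexpected": scanned - recorded,  # present but unknown to the system
    }

result = audit_library({"VOL001", "VOL002", "VOL003"}, {"VOL001", "VOL003", "VOL099"})
print(result)  # {'missing': {'VOL002'}, 'unexpected': {'VOL099'}}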

Grouping, pooling

Systems for the management of removable media must be able to deal with a great many media and facilitate access for many applications. In order to plan and execute access control in a sensible manner and to guarantee its effective use, it is necessary to combine cartridges and drives into groups. Grouping also allows budgets for storage capacities or drives to be created, which are made available to the applications.

Scratch pools

A scratch pool contains unused cartridges that are available to authorized applications so that they can place volumes upon them. As soon as an application has placed a volume upon a cartridge from such a scratch pool, this cartridge is no longer available to all other applications and is thus removed from the scratch pool.

If several scratch pools are available, they can not only be grouped for different media, it is also possible to define groups for certain application purposes. For example, an administrator should be able to make a separate pool of cartridges available for important back-up jobs, whilst cartridges from a different scratch pool are used for 'normal' back-up jobs. The back-up application can choose which pool it would like to have a cartridge from. If it is possible to assign priorities to scratch pools, on the basis of which the management system can decide from which pool a new cartridge will be provided, then such a request can be automated to a certain degree without the back-up application having to know all available pools. To this end, the request for a new tape must be given an appropriate priority, whereupon the management system searches for a cartridge from a scratch pool with a suitably high priority.

In addition to the requirement that all cartridges can be located at all times, a management system for removable media should be capable of offering free storage space to an application at any time. Only thus can back-up windows actually be adhered to. As is the case for media tracking, in order to fulfil this requirement a high-availability solution covering all levels, from the hardware to the application software, should be pursued.

In addition, scratch pools can contain the cartridges from two or more libraries (Figure 9.6). They also offer the guarantee that, even in the event of the failure of individual libraries, cartridges in the other libraries will remain usable. It is precisely in this case that the advantages of a storage network, together with an intelligent management system for removable media, fully come to bear in the optimal utilization of resources that are distributed throughout the entire system.

In order to be able to react flexibly to changes, scratch pools should be dynamically expandable. To this end, an administrator must be able to make additional storage space available to the system dynamically, whether by the connection of a new library or by the addition of previously unused cartridges. Ideally, this can be achieved without making changes to the applications that have previously accessed the scratch pool.

An adjustable minimum size (low water mark) makes the management of a scratch pool easier. If this threshold is reached, measures must be taken to increase the size of the pool, as otherwise there is the danger that the system will cease to be able to provide free storage space in the foreseeable future. The management system can help here by flexibly offering more options. Many actions are possible here, from the automatic enlargement of the scratch pools – as long as free media are available in the libraries – to a 'call home' function, in which an administrator is notified.

If a cartridge has several partitions, then it would occasionally be desirable to collect just the free partitions – rather than the complete cartridges – into a scratch pool. Then it would be possible to manage free storage capacity with a finer granularity and thus achieve an optimal utilization of the total amount of available storage capacity. Since, however, the individual partitions of a medium cannot be accessed at the same time, a cartridge is currently generally managed and allocated to an application as the smallest unit of a scratch pool. As capacity increases, however, the additional use of partitions for this purpose may also be required.
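The following Python sketch outlines priority-based allocation from scratch pools together with a low-water-mark check; the pool structure and field names are assumptions for illustration.

def take_cartridge(scratch_pools, min_priority=0):
    """scratch_pools: list of dicts with 'name', 'priority', 'cartridges', 'low_water_mark'."""
    warnings = []
    # Try the pool with the highest priority first.
    for pool in sorted(scratch_pools, key=lambda p: p["priority"], reverse=True):
        if pool["priority"] < min_priority or not pool["cartridges"]:
            continue
        cartridge = pool["cartridges"].pop()
        if len(pool["cartridges"]) < pool["low_water_mark"]:
            warnings.append(f"pool '{pool['name']}' has fallen below its low water mark")
        return cartridge, warnings
    raise RuntimeError("no free cartridge available in any suitable scratch pool")

pools = [{"name": "important", "priority": 10, "cartridges": ["VOL010"], "low_water_mark": 2},
         {"name": "normal", "priority": 1, "cartridges": ["VOL020", "VOL021"], "low_water_mark": 1}]
print(take_cartridge(pools))  # ('VOL010', ["pool 'important' has fallen below its low water mark"])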

Drive pools

The mere fact that cartridges are available does not actually mean that the storage space can be used. In addition, drives must be available that can mount the cartridges for reading or writing. Similarly to cartridges, it is also possible to combine drives into pools. A pool of high-priority drives can, for example, always be kept back to fulfil mount requests if all other drives are fully utilized. In order to save the applications from having to know and request all drive pools, it is a good idea to have a priority attribute that is used by the management system to automatically locate a drive with an appropriate priority.

If several libraries are available, drive pools should include drives from several libraries (Figure 9.7). This ensures that drives are still available even if one library has failed. At the very least, this helps when writing new data if free cartridges are still available.
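A corresponding Python sketch for drive pools, in which drives belonging to a failed library are skipped and higher-priority pools are tried first; the pool structure and names are again invented for illustration.

def select_drive(drive_pools, failed_libraries=()):
    """drive_pools: list of dicts with 'priority' and 'drives' as (drive_id, library_id) pairs."""
    for pool in sorted(drive_pools, key=lambda p: p["priority"], reverse=True):
        for drive_id, library_id in pool["drives"]:
            if library_id not in failed_libraries:
                return drive_id
    return None  # no drive available; the mount request has to wait

pools = [{"priority": 10, "drives": [("drv1", "libA")]},
         {"priority": 5, "drives": [("drv2", "libB"), ("drv3", "libA")]}]
print(select_drive(pools, failed_libraries={"libA"}))  # drv2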
