
SAN overview

Why use a SAN?

In this section we describe the main motivators that drive SAN implementations, and present some of the key benefits that this technology can bring to a data-dependent business.

The problem

Distributed clients and servers are frequently chosen to meet specific application needs. They may, therefore, run different operating systems (such as Windows Server, differing flavors of UNIX, VMware vSphere, VMS, and so on) and different database software (for example, DB2®, Oracle, Informix®, SQL Server). Consequently, they have different file systems and different data formats.

Managing this multi-platform, multivendor, networked environment has become increasingly complex and costly. Multiple vendors' software tools, and appropriately skilled human resources, must be maintained to handle data and storage resource management on the many differing systems in the enterprise. Surveys published by industry analysts consistently show that management costs associated with distributed storage are much greater, up to 10 times more, than the cost of managing consolidated or centralized storage. This includes the costs of backup, recovery, space management, performance management, and disaster recovery planning.

Disk storage is often purchased from the processor vendor as an integral feature, and it is difficult to establish whether the price you pay per gigabyte (GB) is competitive compared to the market price of disk storage. Disks and tape drives that are directly attached to one client or server cannot be used by other systems, leading to inefficient use of hardware resources. Organizations often find that they have to purchase more storage capacity, even though free capacity is available on other platforms.

Additionally, it is difficult to scale capacity and performance to meet rapidly changing requirements, such as the explosive growth in server, application, and desktop virtualization, and the need to manage information over its entire life cycle, from conception to intentional destruction.

Information stored on one system cannot readily be made available to other users, except by creating duplicate copies and moving the copy to storage that is attached to another server. Movement of large files of data may result in significant degradation of LAN/WAN performance, causing conflicts with mission-critical applications. Multiple copies of the same data may lead to inconsistencies between one copy and another. Data spread across multiple small systems is difficult to coordinate and share for enterprise-wide applications, such as e-business, Enterprise Resource Planning (ERP), Data Warehouse, and Business Intelligence (BI).

Backup and recovery operations across a LAN may also cause serious disruption to normal application traffic. Even using fast Gigabit Ethernet transport, sustained throughput from a single server to tape is about 25 GB per hour. It would take approximately 12 hours to fully back up a relatively moderate departmental database of 300 GB. This may exceed the available window of time in which the backup must be completed, and it may not be a practical solution if business operations span multiple time zones.

It is increasingly evident to IT managers that these characteristics of client/server computing are too costly and too inefficient. The islands of information resulting from the distributed model of computing do not match the needs of the enterprise.
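As a rough check of the arithmetic above, the small Python sketch below estimates a backup window from a sustained throughput figure; the numbers are the illustrative ones from the text, and all other overheads are ignored.

def backup_window_hours(data_gb, throughput_gb_per_hour):
    """Hours needed to move data_gb at a given sustained rate, ignoring all other overheads."""
    return data_gb / throughput_gb_per_hour

print(backup_window_hours(300, 25))   # -> 12.0 hours for a 300 GB database at 25 GB/hour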

New ways must be found to control costs, improve efficiency, and simplify the storage infrastructure to meet the requirements of the modern business world.

The requirements

With this scenario in mind, we can think of a number of requirements that today's storage infrastructures should meet. Some of the most important are:

Unlimited and just-in-time scalability. Businesses require the capability to flexibly adapt to rapidly changing demands for storage resources without performance degradation.

System simplification. Businesses require an easy-to-implement infrastructure with the minimum of management and maintenance. The more complex the enterprise environment, the more costs are involved in terms of management. Simplifying the infrastructure can save costs and provide a greater return on investment (ROI).

Flexible and heterogeneous connectivity. The storage resource must be able to support whatever platforms are within the IT environment. This is essentially an investment-protection requirement that allows you to configure a storage resource for one set of systems and later reassign part of the capacity to other systems on an as-needed basis.

Security. This requirement guarantees that data from one application or system does not become overlaid or corrupted by other applications or systems. Authorization also requires the ability to fence off one system’s data from other systems.

Encryption. When sensitive data is stored, it must be readable or writable only by authorized systems, and if for any reason the storage system is stolen, the data must never be readable from it.

Hypervisors. Support for server, application, and desktop virtualization hypervisor features for cloud computing.

Speed. Storage networks and devices must be capable of managing the large numbers of gigabytes and the intensive I/O required by each business and industry.

Availability. This requirement implies both protection against media failure and ease of data migration between devices, without interrupting application processing. It certainly implies improvements to backup and recovery processes: attaching disk and tape devices to the same networked infrastructure allows for fast data movement between devices, which provides enhanced backup and recovery capabilities such as the following (a short sketch contrasting the two copy modes appears after this list):

– Serverless backup. This is the ability to back up your data without using the computing processor of your servers.

– Synchronous copy. This makes sure your data is at two or more places before your application goes to the next step.

– Asynchronous copy. This makes sure your data is at two or more places within a short time. It is the disk subsystem that controls the data flow.
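As a minimal illustration of the difference between these two copy modes, the hypothetical Python sketch below models the primary and secondary subsystems as plain dictionaries. It is not any vendor's replication API; only the ordering of acknowledgements it shows matches the descriptions above.

import queue
import threading

class SynchronousCopy:
    """Write returns only after the data is at both places."""
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary

    def write(self, block_id, data):
        self.primary[block_id] = data
        self.secondary[block_id] = data   # the application waits for this step
        return "ack"                      # only now does the application continue

class AsynchronousCopy:
    """Write returns after the primary copy; the secondary catches up shortly after."""
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary
        self._pending = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, block_id, data):
        self.primary[block_id] = data
        self._pending.put((block_id, data))   # the subsystem controls the data flow
        return "ack"                          # the application continues immediately

    def _drain(self):
        while True:
            block_id, data = self._pending.get()
            self.secondary[block_id] = data   # the second place lags by a short time

Under synchronous copy the application waits for both writes, so the secondary never lags; under asynchronous copy the application is acknowledged after the primary write, and the subsystem drains the pending updates to the secondary shortly afterward.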


In the next section, we discuss the use of SANs as a response to these business requirements.

How can we use a SAN?

The key benefits that a SAN might bring to a highly data-dependent business infrastructure can be summarized into three rather simple concepts: infrastructure simplification, information lifecycle management, and business continuity. They are an effective response to the requirements presented in the previous section, and are strong arguments for the adoption of SANs.

These three concepts are briefly described as follows.

Infrastructure simplification

There are four main methods by which infrastructure simplification can be achieved: consolidation, virtualization, automation, and integration.

Consolidation

Concentrating systems and resources into locations with fewer, but more powerful, servers and storage pools can help increase IT efficiency and simplify the infrastructure. Additionally, centralized storage management tools can help improve scalability, availability, and disaster tolerance.

Virtualization

Storage virtualization helps make complexity nearly transparent and, at the same time, can offer a composite view of storage assets. This may help reduce capital and administrative costs, while giving users better service and availability. Virtualization is designed to help make the IT infrastructure more responsive, scalable, and available.
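As a rough, hypothetical sketch of that composite view, the Python below presents one logical pool built from extents on several physical arrays. The class and array names are illustrative assumptions, not any particular product's virtualization layer.

from dataclasses import dataclass, field

@dataclass
class PhysicalArray:
    name: str
    free_gb: int                                   # unallocated capacity on this array

@dataclass
class VirtualVolume:
    name: str
    extents: list = field(default_factory=list)    # (array name, extent size in GB)

class VirtualizationLayer:
    """Presents many arrays as one pool and carves virtual volumes out of it."""
    def __init__(self, arrays):
        self.arrays = arrays

    def create_volume(self, name, size_gb, extent_gb=10):
        volume, remaining = VirtualVolume(name), size_gb
        for array in self.arrays:
            while remaining > 0 and array.free_gb >= extent_gb:
                array.free_gb -= extent_gb
                volume.extents.append((array.name, extent_gb))
                remaining -= extent_gb
        if remaining > 0:
            raise RuntimeError("not enough free capacity in the pool")
        return volume

pool = VirtualizationLayer([PhysicalArray("array-A", 50), PhysicalArray("array-B", 50)])
vdisk = pool.create_volume("vdisk1", 80)
print(vdisk.extents)   # one 80 GB volume, transparently backed by extents on both arrays

The host sees a single volume; where its extents physically live, and whether they are later migrated, stays hidden behind the virtualization layer.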

Automation

Choosing storage components with autonomic capabilities can improve availability and responsiveness, and help protect data as storage needs grow. Once day-to-day tasks are automated, storage administrators may be able to spend more time on critical, higher-level tasks unique to a company's business mission.

Integration

Integrated storage environments simplify system management tasks and improve security. When all servers have secure access to all data, your infrastructure may be better able to respond to your users' information needs.

Figure 2-1 illustrates the consolidation movement from the distributed islands of information toward a single, and, most importantly, simplified infrastructure.

Information lifecycle management

Information has become an increasingly valuable asset, but as the amount of information grows, it becomes increasingly costly and complex to store and manage it. Information lifecycle management (ILM) is a process for managing information through its life cycle, from conception until intentional disposal, in a manner that optimizes storage, and maintains a high level of access at the lowest cost.

A SAN implementation makes it easier to manage the information lifecycle as it integrates applications and data into a single-view system, in which information resides, and can be managed more efficiently.

IBM Tivoli® Productivity Center for Data was specially designed to support ILM.
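To make the placement idea concrete, here is a minimal, hypothetical age-based policy of the kind ILM implies; the tier names and thresholds are purely illustrative assumptions, not taken from any product.

def choose_tier(days_since_last_access, retention_expired):
    """Pick the cheapest storage tier that still meets the information's access needs."""
    if retention_expired:
        return "destroy"           # intentional disposal at the end of the life cycle
    if days_since_last_access <= 30:
        return "online-disk"       # fast, expensive storage for active data
    if days_since_last_access <= 365:
        return "nearline-disk"     # cheaper disk for infrequently accessed data
    return "tape-archive"          # lowest cost per GB for dormant data

print(choose_tier(days_since_last_access=400, retention_expired=False))   # -> "tape-archive"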

Business continuity

It goes without saying that the business climate in today's on demand era is highly competitive. Customers, employees, suppliers, and business partners expect to be able to tap into their information at any hour of the day from any corner of the globe. Continuous business operations are no longer optional; they are a business imperative for becoming successful and maintaining a competitive advantage. Businesses must also be increasingly sensitive to issues of customer privacy and data security, so that vital information assets are not compromised. Factor in legal and regulatory requirements, the inherent demands of participating in the global economy, and accountability, and all of a sudden the lot of an IT manager is not a happy one.

Nowadays, with natural disasters seemingly occurring with greater frequency, a disaster recovery (DR) plan is essential, and implementing the correct SAN solution can help not only with real-time recovery techniques but can also reduce the recovery time objective (RTO) of your current DR plan. There are many vendor-specific solutions on the market, such as VMware Site Recovery Manager for business continuity, that require a SAN running in the background.

It is little wonder that a sound and comprehensive business continuity strategy has become a business imperative, and SANs play a key role in it. By deploying a consistent and safe infrastructure, they make it possible to meet any availability requirement.
