Data storage market: forecasts for 2013
In 2013, new ways of using flash memory, cloud storage and software will fundamentally change how future storage systems are built.
Cloud storage, flash storage, storage virtualization software, disaster recovery and a host of other technologies are well known to solution providers. But 2013 will be a year of transition: customers will think seriously about how to implement (or change) these technologies to increase performance, simplify administration and shrink the footprint of corporate storage.
Flash memory is being used more and more widely for storage. But how, exactly? And what about the cloud? Customers and VARs are asking themselves these questions. And most importantly: what are the real costs?
These questions will be debated across the industry in 2013. Let's dwell on them.
Climate change? Yes, when it comes to DR
Hurricane Sandy may not have convinced everyone of climate change once and for all, but its lessons will become a strong sales argument for disaster recovery systems in 2013.
Organizations in the northeastern USA that had good disaster recovery systems managed to resume operations quickly. Expect solution providers and DR service providers to invoke climate change, which raises the probability of stronger and more frequent tropical storms, hurricanes and tornadoes, to lend more weight to their offerings and sell them to customers at a profit.
Flash storage: vendors will be ready…
A host of startups has already begun preparing the market for the transition to storage built entirely on flash memory, but the large vendors have so far hesitated with such offerings. That will change in 2013, as major suppliers begin to roll out all-flash storage systems.
EMC will probably be first, thanks to its 2012 purchase of XtremIO. HP has also shown its readiness, releasing an SSD-only version of its 3PAR array. HP, NetApp, Hitachi Data Systems and maybe even Oracle will join this fight, buying or investing in these startups as soon as possible. The investments have already begun: one large storage vendor (unnamed) has taken an equity stake in WhipTail.
… but the market will not be
While vendors will start promoting all-SSD storage arrays in 2013 to demonstrate their readiness for the future, customers will hesitate to move to these very expensive devices. Those who are after higher performance will choose hybrid models that combine ordinary hard drives with an integrated flash cache, providing the necessary balance of performance at a relatively low price.
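To illustrate why hybrid arrays hit that balance, here is a minimal sketch (the block numbers, cache size and workload are invented for the example): a small flash LRU read cache in front of slow disks absorbs most reads of a skewed workload, so only a fraction of I/O ever touches the hard drives.

```python
from collections import OrderedDict

class FlashCache:
    """Toy hybrid array: a small flash LRU read cache in front of slow disks."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()          # block id -> True, ordered by recency
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)   # refresh recency on a hit
            self.hits += 1
            return "flash"                  # served at SSD latency
        self.misses += 1
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
        self.cache[block] = True            # promote the block into flash
        return "disk"                       # served at HDD latency

# A skewed workload: a few hot blocks dominate, plus a tail of cold reads.
cache = FlashCache(capacity_blocks=8)
workload = [0, 1, 2, 3] * 25 + list(range(100, 120))
for b in workload:
    cache.read(b)
hit_rate = cache.hits / (cache.hits + cache.misses)
print(f"hit rate: {hit_rate:.0%}")  # -> hit rate: 80%
```

Here a flash cache holding only 8 blocks serves 80% of reads at SSD speed, which is the economics behind the hybrid approach: most of the performance for a fraction of the flash.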
Still, adoption will grow
In 2012 various implementations of flash storage appeared, and their influence on the storage market became much more noticeable and many-sided. In 2013 customers will increasingly face the question: where should flash storage go? In the server? In the network? Or in the storage array? From there it is one step to all-SSD arrays.
This opens the door for solution providers to offer customers the whole variety of storage technologies based on their specific needs, and to think together with them about how to get maximum performance without exhausting a limited budget. As new flash storage technologies appear, customers and VARs will conclude that there is no single universal answer, and the technology will gradually work its way into their data centers in different forms.
Traditional storage: a plateau ahead
In 2013 customers will ask themselves: do they really need high-capacity storage when new technologies that can offload tons of data from local data centers are increasingly on offer?
- Cloud. Ideal for storing intermediate or temporary data (which can later be discarded) or for ever-growing archives (even if those copies will never be needed).
- Data reduction technologies. They may come in software or hardware, with a separate license or without one, but one thing matters: these new approaches (deduplication, replication, thin provisioning and automated tiering) will place data most efficiently, strip out unnecessary megabytes and add another layer of intelligent management and protection.
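As an illustration of how deduplication strips out those unnecessary megabytes, here is a toy content-addressed chunk store. The chunk size and workload are invented for the example; real arrays do this inline, often in firmware, with far more sophisticated chunking.

```python
import hashlib

class DedupStore:
    """Toy block-level deduplication: identical chunks are stored only once."""
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}     # sha256 digest -> chunk bytes (physical copy)
        self.logical = 0     # bytes the clients believe they wrote

    def write(self, data):
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            self.logical += len(chunk)
            # Content addressing: a repeated chunk maps to the same key.
            self.chunks[hashlib.sha256(chunk).hexdigest()] = chunk

    @property
    def physical(self):
        return sum(len(c) for c in self.chunks.values())

store = DedupStore()
backup = b"A" * (4096 * 100)    # highly redundant data, e.g. a nightly backup
for _ in range(3):              # three full backups of the same content
    store.write(backup)
print(store.logical, store.physical)  # -> 1228800 4096
```

Three full backups of redundant data land as a single physical chunk; the ratio between the logical and physical figures is exactly the "data reduction" vendors advertise.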
The total storage capacity sold in 2013 will probably grow, but that growth will come mainly from cloud storage service providers, while ordinary business users will most likely moderate their appetite for storage.
Storage software: the main hope
Storage software sales already grew much faster than hardware sales in 2012, and this trend will continue into 2013 and beyond as business users try harder to manage the capacity they already have.
Storage in a typical data center will never be homogeneous, given the endless acquisitions and mergers, one-off project purchases, aggressive pricing, energetic sales reps and so on. But ensuring proper data replication, migrating data from one platform to another, or trying to turn storage into a service becomes harder and harder as proprietary operating systems erect their borders.
Software that uses virtualization and other "consolidation" technologies will be the key to removing the barriers to managing all available storage simply and logically.
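The idea can be sketched as a thin abstraction layer: present incompatible vendor arrays behind one interface and route volumes by policy. The class names and placement policy below are hypothetical simplifications; real storage virtualization products are far more elaborate.

```python
class VendorAArray:
    """One vendor's array with its own native addressing."""
    def __init__(self):
        self.store = {}
    def read(self, volume, block):
        return self.store.get((volume, block))
    def write(self, volume, block, data):
        self.store[(volume, block)] = data

class VendorBArray:
    """A second vendor: different internals, same operations."""
    def __init__(self):
        self.luns = {}
    def read(self, volume, block):
        return self.luns.get(volume, {}).get(block)
    def write(self, volume, block, data):
        self.luns.setdefault(volume, {})[block] = data

class VirtualPool:
    """Presents heterogeneous arrays as one logical pool."""
    def __init__(self, backends):
        self.backends = backends
    def _route(self, volume):
        # Simple deterministic placement policy: hash the volume name.
        return self.backends[hash(volume) % len(self.backends)]
    def write(self, volume, block, data):
        self._route(volume).write(volume, block, data)
    def read(self, volume, block):
        return self._route(volume).read(volume, block)

pool = VirtualPool([VendorAArray(), VendorBArray()])
pool.write("vol1", 0, b"hello")
print(pool.read("vol1", 0))  # the caller never sees which array holds the data
```

The point of the pattern is that replication, migration or storage-as-a-service can then be implemented once, in the pool layer, instead of per proprietary platform.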
Proprietary storage: the big loser
While software will help business users manage all their storage as a whole, organizations will increasingly ask: why buy proprietary storage at all?
And this will hit the large storage vendors hard. Not only do the old, established storage providers use file systems incompatible with competitors' arrays; different storage lines from the same vendor often do not interoperate with each other either.
A vendor can try to "lock" a customer into its technology alone, but this strategy draws more and more criticism and provokes a response: customers discover that they can use good software that optimizes simple, inexpensive, open storage, or even a storage cloud, and still get the performance, security and ease of administration that previously only a few large vendors offered.
Buy!
The wave of acquisitions and mergers in the storage market will not subside. Special attention will go to:
- All-flash array developers. This segment of the industry is full of startups advancing the development of SSD-only arrays. EMC bought XtremIO when hardly anyone had heard of the company, paving the way for other similar deals.
- Storage software developers. Customers have embraced the idea of buying complete solutions, and hardware vendors (Dell, HP, HDS and IBM) answered with software acquisitions in 2012. More such deals can be expected in the new year, including large ones: say, CommVault or Symantec's storage business.
Cloud storage: good, but expensive
Cloud storage technology has reached the point where almost any function, from simple data storage to data protection or Big Data processing, can be performed in a cloud.
However, the cost of cloud storage per gigabyte is still several times higher than on hard drives, SSDs or tape drives. This will remain a significant factor slowing broad adoption of cloud storage services, leaving the cloud for occasional applications such as disaster recovery or Big Data processing: tasks that require high capacity but arise only episodically. In such cases the usual alternative is to buy more capacity than will actually be used, which drives up total cost of ownership (TCO).
Is it simple to store data in a cloud? Yes, but not in 2013, and probably not even in 2014, despite the promises of some vendors.
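The trade-off can be made concrete with a back-of-the-envelope calculation. All prices below are illustrative assumptions chosen for the sketch, not real 2013 market figures; the point is the shape of the comparison, not the numbers.

```python
# All prices are illustrative assumptions, not real 2013 market figures.
CLOUD_PER_GB_MONTH = 0.10    # $ per GB per month for cloud storage
DISK_PER_GB = 0.05           # $ per GB, one-time, for raw HDD capacity
OVERPROVISION = 2.0          # on-premises arrays are bought larger than needed
MONTHS = 36                  # a three-year planning horizon

def cloud_cost(gb):
    """Recurring cloud bill over the planning horizon."""
    return gb * CLOUD_PER_GB_MONTH * MONTHS

def onprem_cost(gb):
    """One-time purchase of overprovisioned disk (power and admin ignored)."""
    return gb * OVERPROVISION * DISK_PER_GB

need_gb = 10_000
print(f"cloud:   ${cloud_cost(need_gb):,.0f}")
print(f"on-prem: ${onprem_cost(need_gb):,.0f}")
```

Even with a 2x overprovisioning penalty, recurring per-gigabyte pricing overtakes a one-time disk purchase quickly, which is why the article expects the cloud to stay confined to episodic, capacity-hungry workloads for now.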
Better be prepared for BYOD
Interest in the flow of personal mobile devices into the organization (BYOD) will put a new burden on storage administrators, who will face the task of learning to manage all the data that users create and exchange, inside and outside the organization, from devices running iOS, Android, Windows 8/RT/Phone, BB10 and webOS.
This is no simple question. Organizations will have to draw a fine line between enabling access to corporate data from personal devices and IT administrator control over who may have such access and how, while at the same time providing transparent data availability to users regardless of the specific platform.
In 2013 it will be important to get ahead of this trend before business users, for whom mobile devices have already become the main tool at work and the means of leisure at home, force the issue.
The mismatch between legacy storage and virtualization deepens
The conventional storage systems in use today were designed more than 20 years ago around the requirements of physical infrastructure. But as data centers become ever more virtualized, the mismatch between what such storage systems can do and what virtual environments need grows deeper. Although some industry players try to adapt virtualization tools to legacy storage through APIs, or to modify such systems for virtual environments, neither approach can bridge the Grand Canyon-deep chasm between two incompatible technologies. The solution requires storage systems designed specifically for virtual environments.
Overprovisioned storage will not change the situation
Storage systems usually account for more than 60% of the cost of deploying VMware products. Why? Because general-purpose disk systems cope poorly with the random I/O streams of virtual environments. As a result, many companies overprovision systems for their users. Extra disk drives not only fail to solve the fundamental problem; they needlessly create excess storage capacity, take up data center space, consume energy and demand management. In 2013 more companies will realize that storage costs exceed the expected economic benefit of virtualization and prevent it from being realized effectively.
New-generation storage systems will demand new metrics
Storage systems are often compared by the number of input/output operations per second (IOPS) they perform. As new-generation systems grow in popularity, expect companies to use other criteria to gauge storage efficiency: for non-virtualized tasks with the most demanding performance requirements, IOPS per dollar; for virtualized tasks serving specific applications, the storage cost of solving a given task; and for unstructured data, the cost of storing 1 GB or 1 TB of data.
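These metrics are simple ratios, as a quick sketch shows. The two arrays below are invented for illustration; the interesting part is that the same pair of systems wins on different metrics.

```python
def iops_per_dollar(iops, price_usd):
    """Performance efficiency: for non-virtualized, performance-critical tasks."""
    return iops / price_usd

def dollars_per_gb(price_usd, capacity_gb):
    """Capacity efficiency: for unstructured data."""
    return price_usd / capacity_gb

# Hypothetical arrays for illustration only, not real vendor figures.
arrays = {
    "all-flash": {"price": 100_000, "iops": 200_000, "capacity_gb": 10_000},
    "hybrid":    {"price":  40_000, "iops":  40_000, "capacity_gb": 50_000},
}

for name, a in arrays.items():
    print(f"{name}: {iops_per_dollar(a['iops'], a['price']):.2f} IOPS/$, "
          f"${dollars_per_gb(a['price'], a['capacity_gb']):.2f}/GB")
# -> all-flash: 2.00 IOPS/$, $10.00/GB
# -> hybrid: 1.00 IOPS/$, $0.80/GB
```

The all-flash array is twice as efficient per performance dollar yet over ten times more expensive per gigabyte, which is exactly why a single ranking by raw IOPS misleads.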
Ease of management in virtual environments: the key success factor for storage
In 2013 ease of storage management in virtual environments will be recognized as a key condition of success. That, in turn, will bring to market more products with storage management functions for virtual machines.
Flash memory becomes the main line of development. But flash alone is not enough
As flash memory prices fall, more and more companies will use it in their storage systems. But by itself it will not meet most enterprises' expectations for performance, simplicity and data management. Moreover, although a flash-based array can and should deliver a large number of IOPS, it is far more important that the IOPS figure match the performance and latency requirements actually imposed on it. Flash vendors will need not just to ship a mass-market product but to give it distinctive features, chief among them management capabilities for virtual environments.
Overcoming misconceptions about VDI
Although some old misconceptions about virtual desktop infrastructure (VDI) persist (for example, that VDI supposedly saves money, or conversely that VDI is too expensive because the storage costs too much and is too hard to manage), the industry debunked these myths long ago. Meanwhile, more and more enterprises are beginning to understand what VDI really is. They have found new VDI users and deployed the infrastructure successfully, contrary to conventional wisdom. Storage efficiency achieved through flash memory and management tools designed for virtual environments has finally made VDI economically viable. New tasks are being set for VDI, and in 2013 more and more companies can be expected to adopt it, guided by this new approach.
Quality of service comes to the forefront
Virtualization demands different storage systems: ones that account for the I/O patterns of virtual environments and automatically manage quality of service per virtual machine rather than per logical device or volume. Moreover, operating at the virtual machine level allows data to be managed along the whole chain, up to the specific application. Flash memory makes it possible to build dense storage systems capable of serving thousands of virtual machines in just a few racks. With such high density, tools that enforce quality of service will play an important role, and they must be clear to administrators and simple to use.
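Per-VM quality of service is commonly enforced with rate-limiting mechanisms such as a token bucket, one bucket per virtual machine instead of one per LUN or volume. The sketch below is a simplified, hypothetical model of that idea, not any vendor's implementation; time is passed in explicitly so the behavior is deterministic.

```python
class TokenBucket:
    """Admit or throttle I/Os for one VM; each VM gets its own bucket."""
    def __init__(self, iops_limit, burst, now=0.0):
        self.rate = float(iops_limit)   # tokens (I/Os) refilled per second
        self.capacity = float(burst)    # maximum burst allowance
        self.tokens = float(burst)
        self.last = now

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True                 # admit this I/O
        return False                    # throttle: the VM exceeded its share

# One bucket per virtual machine, not per LUN or volume.
buckets = {"vm-app": TokenBucket(iops_limit=500, burst=50),
           "vm-batch": TokenBucket(iops_limit=100, burst=10)}

# A burst of 100 I/Os from vm-batch at the same instant: only its burst
# allowance of 10 is admitted; the rest would be queued or rejected,
# leaving vm-app's budget untouched.
admitted = sum(buckets["vm-batch"].allow(now=0.0) for _ in range(100))
print(admitted)  # -> 10
```

Because each VM draws on its own bucket, a noisy neighbor exhausts only its own allowance: exactly the per-VM (rather than per-volume) service guarantee the article calls for.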
2013: the year of intelligent storage software
In 2012 the takeovers of XtremIO and Texas Memory Systems by EMC and IBM respectively drew great attention to the flash memory market. The challenge is not simply to build products that support flash, but mainly to use flash sensibly and economically through intelligent control of data access. The focus will shift from building basic, inexpensive flash memory to intelligent software that integrates well with the application layer and lets administrators concentrate not so much on what is stored as on managing the data of the applications running on virtual machines.
Software-defined data centers will grow more attractive
The talk about software-defined data centers stems from the desire to build infrastructure that is fundamentally more flexible, scalable and economical. That desire is beginning to dominate data center planning. Architects will try to design infrastructure that understands the load applications create and can allocate resources automatically according to application requirements. Instead of designing data centers that assign excess resources to tasks, the software-defined data center concept assumes more effective use and distribution of every element of the infrastructure, from servers to networked storage systems.
Storage will follow servers and networks
As the software-defined data center concept grows more attractive, it will become clearer how far storage systems must advance to fit the programmable-control model. Today storage lags behind servers and networks, which are already approaching this model, and as a result it becomes the main pain point. The industry cannot hope that software-defined storage will appear simply by adding new features or connection points to existing legacy storage architectures. Virtual environments require purpose-built storage systems. Enterprises expecting to reap the full benefit of software-defined data centers will have to deploy storage systems designed for virtual environments, capable of delivering the simplicity of management and flexibility inherent in them.
Source: CRN/USA and eWeek