Wednesday, November 2, 2011

Storage vMotion of a Virtualized SQL Server Database

On the question of Storage vMotion's impact on performance: the official paper with test results, Storage vMotion of a Virtualized SQL Server Database.

Conclusions:

There are two main factors that can influence the svMotion of virtual disks in a vSphere virtual infrastructure: the I/O access patterns on the virtual disk being migrated and the underlying storage infrastructure. While the former is hard to change, careful consideration of the latter can help achieve a better svMotion experience and minimize the impact on applications running in a virtual infrastructure where VM storage is migrated. Based on the knowledge gained during the tests, the following best practices can help when planning an infrastructure capable of supporting live migration of VM storage:

1. Random access patterns on virtual disks interfere with the sequential access of svMotion and negatively affect its I/O throughput, which can increase migration time significantly. If such virtual disks need to be migrated, schedule the migration during periods of low or no I/O activity on the virtual disk (see the sketch after this list).

2. Sequential access patterns of a VM on its own virtual disk (for example, writes to log files) generally don't affect the sequential access pattern of svMotion. Even with more than one sequential stream on the virtual disk, most modern arrays can use the I/O prefetching logic in their firmware to improve I/O performance. Such virtual disks can be migrated even when there is some I/O activity on them.

However, if the VM's I/O access to its virtual disk is significant, the svMotion traffic will have to contend with the VM's access for I/O bandwidth, which can reduce throughput for both traffic flows. In such situations, it is better to schedule svMotion for periods when the existing I/O load subsides.

3. svMotion moves data in 64KB chunks. Any I/O operation from the VM that uses a small request size may see higher access latency because of svMotion's large blocks. If the application in the VM is very sensitive to increased access latency, consider scheduling svMotion during periods of low or no I/O activity on the virtual disk.

4. Most applications that rely on random access patterns to fetch data from physical storage media may not benefit from the storage array's read cache. In such situations, administrators tend to configure (if permitted) a significant share of the array cache for write access patterns. This may limit the amount of data the array can prefetch for svMotion, which can increase disk migration time. Keeping enough buffer space to hold the prefetched svMotion data may help reduce the disk migration time.

5. In certain situations, such as the experiments discussed in this paper, moving a virtual disk from an array with newer hardware, newer firmware, and a larger cache to an older array can be faster than the other way around.

6. Faster storage media such as solid-state disks (SSDs) provide faster access to small blocks even in the presence of requests for larger blocks. By using SSD-backed services in the array (for example, storage pools built on SSDs or an SSD-based secondary cache), the impact on the application's performance can be reduced when migrating its disks.
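To make points 1 and 3 concrete, here is a minimal sketch of how such a "migrate when quiet" policy could be automated against vCenter with pyVmomi (the vSphere Python SDK). This is only an illustration, not anything from the paper: the vCenter address, credentials, the VM name sql-vm, the target datastore name, and the 100-IOPS threshold are all invented placeholders.

```python
# A "migrate when quiet" sketch: poll the VM's virtual-disk IOPS and start
# Storage vMotion only once activity drops below a threshold.
# Assumptions (not from the paper): pyVmomi is installed; the vCenter
# address, credentials, VM name, datastore name, and the 100-IOPS
# threshold are placeholders to replace.
import ssl
import time

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def find_object(content, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        for obj in view.view:
            if obj.name == name:
                return obj
        return None
    finally:
        view.Destroy()


def disk_iops(content, vm):
    """Sum recent averaged read+write IOPS over the VM's virtual-disk instances."""
    perf = content.perfManager
    wanted = {"virtualDisk.numberReadAveraged.average",
              "virtualDisk.numberWriteAveraged.average"}
    counter_ids = [c.key for c in perf.perfCounter
                   if "%s.%s.%s" % (c.groupInfo.key, c.nameInfo.key,
                                    c.rollupType) in wanted]
    spec = vim.PerformanceManager.QuerySpec(
        entity=vm,
        metricId=[vim.PerformanceManager.MetricId(counterId=cid, instance="*")
                  for cid in counter_ids],
        maxSample=3,       # the last ~60 s of realtime samples
        intervalId=20)     # vCenter realtime sampling interval is 20 s
    total = 0.0
    for entity_metric in perf.QueryPerf(querySpec=[spec]):
        for series in entity_metric.value:
            samples = [v for v in series.value if v >= 0]  # -1 means no data
            if samples:
                total += sum(samples) / len(samples)
    return total


def main():
    ctx = ssl._create_unverified_context()  # lab convenience; verify certs in prod
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        vm = find_object(content, vim.VirtualMachine, "sql-vm")
        target_ds = find_object(content, vim.Datastore, "target-datastore")

        # Best practices 1 and 3: wait for a quiet period before migrating.
        while disk_iops(content, vm) > 100:  # threshold is an assumption
            time.sleep(60)

        # Storage vMotion proper: relocate only the storage, keep the host.
        task = vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target_ds))
        while task.info.state not in (vim.TaskInfo.State.success,
                                      vim.TaskInfo.State.error):
            time.sleep(5)
        print("svMotion finished with state:", task.info.state)
    finally:
        Disconnect(si)


if __name__ == "__main__":
    main()
```

Polling realtime counters once a minute is coarse, but it matches the spirit of the paper's advice: don't start the bulk 64KB copy while the guest is pushing heavy random I/O at the same disk.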
