Midrange MegaLaunch by EMC #Speed2Lead
What if Life Was Faster?
As always, EMC knows how to make an entrance. September 4 has finally arrived, and it was the worst-kept secret in storage land: we all knew it had something to do with new VNXes. So fasten your seatbelt, it's here: brand-new multicore CPUs and flash, running the new VNX2 operating software. EMC has rewritten parts of the code stack, and the new VNX2 arrays show a big speed increase over the existing range. This VNX generation also has active/active storage processors for higher performance, compared to the previous generation's active/passive design. Let's go into the details.
EMC XtremSW Cache intelligent caching software leverages server-based PCIe flash technology EMC XtremSF to reduce latency and accelerate throughput for dramatic application performance improvement. XtremSW Cache accelerates reads and protects data by using a write-through cache to the networked storage to deliver persistent high availability and disaster recovery. The result is a networked infrastructure that is dynamically optimized for performance, intelligence, and protection for both physical and virtual environments. XtremSW Cache accelerates block I/O reads for those applications that require the highest IOPS and the lowest response time. The software caches the most frequently referenced data in the server on XtremSF, shrinking storage access time while offloading the I/O processing from the storage array.
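The write-through behavior described above can be sketched in a few lines. This is a toy model for illustration only, not EMC's implementation: reads are served from a local (server-flash) cache when possible, while every write also goes straight through to the backing array, so the networked storage stays the single source of truth and retains its availability and disaster-recovery properties.

```python
class WriteThroughCache:
    """Toy write-through read cache (illustrative sketch, not XtremSW code)."""

    def __init__(self, backing_store):
        self.backing = backing_store   # stands in for the networked array
        self.cache = {}                # stands in for server-side flash (XtremSF)
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:        # cache hit: no round trip to the array
            self.hits += 1
            return self.cache[block]
        self.misses += 1               # cache miss: fetch from the array and keep a copy
        data = self.backing[block]
        self.cache[block] = data
        return data

    def write(self, block, data):
        self.backing[block] = data     # write-through: the array is always updated
        self.cache[block] = data       # keep the local copy coherent
```

The key design point is in `write`: because data is never acknowledged from the cache alone, losing the server-side flash loses no data, which is why write-through caching preserves high availability.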
Version 2.0 of XtremSW Cache will be generally available on August 30. The caching software can now be deployed in a greater number of environments, allowing more users to take advantage of its powerful performance benefits. XtremSW Cache 2.0 offers further integration with EMC VMAX and VNX arrays. XtremSW Cache delivers enhanced interoperability with VMware vCenter features like HA and DRS and is now supported in IBM AIX environments. The software can now be used with any server flash hardware and will support Oracle RAC environments via distributed cache coherency. Finally, XtremSW Management now delivers a single point of management for users deploying multiple XtremSW Cache instances.
XtremSW Cache 2.0 now provides much greater integration with EMC arrays (VMAX) such as:
- Users can manage XtremSW Cache directly from Unisphere
- When used with VMAX, additional features such as prefetching are enabled
- Full-track reads improve I/O rates by up to 25%
- Optimized read miss moves the read cache tier to the host, increasing IOPS by as much as 2.5x
XtremSW Cache 2.0 now provides much greater integration with EMC arrays (VNX) such as:
- Users can manage XtremSW Cache directly from Unisphere Remote
- The single point of management also delivers performance and health monitoring, discovery, and configuration for XtremSW Cache
The Evolution of FLASH in Arrays
Six new VNX platforms designed to shift from “flash as an add-on” to “flash optimized”:
- VNX 5200 - main range entry
- VNX 5400 - main range mid-tier
- VNX 5600 - mid range entry
- VNX 5800 - mid range mid-tier
- VNX 7600 - high-end mid-capacity
- VNX 8000 - high-end large-capacity
Multicore Code Path Optimization - MCx
Multicore & FLASH Optimized for Today’s Virtual Environments. More Virtual Machines per system.
EMC is introducing new multicore processing technology, MCx, to increase performance. EMC promises that MCx reaches more than 1 million transactions per second (IOPS), nearly five times the IOPS of the current VNX. That alone is a huge improvement for today's increasingly heavy workloads.
MCx also features dynamic core utilization, leveling the various tasks of the VNX, including Block and File, I/O, RAID, cache, data services, and management, across the multiple processor cores rather than running each task on its own dedicated core, which increases performance.
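The difference between pinning each task type to its own core and leveling all work across cores can be illustrated with a shared worker pool. This is a conceptual sketch only (the task names are made up for illustration), not how MCx itself is built:

```python
from concurrent.futures import ThreadPoolExecutor

def handle(task):
    # Stand-in for Block, File, RAID, cache, or management work.
    return "done:" + task

# Hypothetical mix of task types arriving at the storage processor.
tasks = ["block-io", "file-io", "raid", "cache", "mgmt", "block-io"]

# Dynamic core utilization, conceptually: any worker can pick up any task
# type, so load levels itself across the pool instead of one task type
# saturating its dedicated core while the others sit idle.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle, tasks))
```

With a pinned-per-task design, a burst of block I/O would queue on a single core; with the shared-pool approach above, the burst spreads across all available workers.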
VNX is designed for flash. Adding deduplication to flash lowers cost, and adding FAST to flash lowers cost as well; combining deduplication with FAST amplifies the savings to the maximum level of capacity efficiency. Unfortunately, block deduplication is not real-time: it occurs after the data is written and is throttled to minimize the impact on host I/O.
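Post-process (deferred) block deduplication, as described above, can be sketched as a later pass over already-written blocks. This is a toy illustration of the general technique, not the VNX algorithm: each block is hashed, the first copy of each unique content is kept, and duplicates are collapsed into references to that single stored copy.

```python
import hashlib

def dedup_pass(blocks):
    """Toy post-process deduplication pass (illustrative sketch)."""
    store = {}   # content hash -> the one stored copy of that content
    refs = []    # per logical block, a reference into the store
    for data in blocks:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in store:
            store[digest] = data   # first time we see this content: keep it
        refs.append(digest)        # duplicates become cheap references
    return store, refs

# Three logical 4 KiB blocks, two of which have identical content.
blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]
store, refs = dedup_pass(blocks)
```

Because the pass runs after the writes have landed, host I/O latency is unaffected; the trade-off is that duplicate data temporarily consumes capacity until the pass catches up, which is why such a pass is typically throttled.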
But that's not all...