What’s more, between 2000 and 2010, the number of servers
worldwide multiplied by a factor of six, while the amount of
storage increased by a factor of 69, thanks to server virtualization, according to researchers at IBM.
In July, Computerworld polled dozens of storage administrators to find out how server virtualization has complicated their
work lives. Our findings yielded this list of five top headaches.
But fear not: IT analysts and virtualization veterans offer their
advice on how to deal with each challenge.
1 Storage Performance Slowdowns and I/O Bottlenecks

IT administrators are painfully aware that storage performance is growing at a much slower rate than computing power. So when it comes to virtualization, it’s no surprise that I/O bottlenecks and slow storage performance are the No. 1 problem for one-third of the administrators who responded to the Computerworld poll.
“Virtualization lets you do a whole lot of workloads on one
physical piece of hardware, but there’s lots of different I/O
[operations] mixed into the I/O stream, so it makes disks work
harder and caching less effective,” says Jeff Boles, senior analyst
at Taneja Group in Phoenix. “Virtualization lets us easily do
more than our compute power is capable of.”
HOW TO DEAL: The solution to the I/O bottleneck depends
on where the problem lies: in the network or in the storage
domain. Most often, it’s in the storage environment, because
improvements in storage capability have lagged behind those of all
other infrastructure. “You have a very slow, creeping, linear progression of storage capability. Rotating disks can only go so fast.
Part of the problem is visibility. Administrators can’t see what’s
going on inside the storage environment, so they don’t know how
to fix it. Fortunately, we’re getting some tools that can help you
figure out that problem and address it [more easily],” Boles says.
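Even without a vendor tool, administrators can get a rough first look from the operating system’s own counters. As a minimal sketch (Linux-specific, assuming the standard /proc/diskstats field layout; the function names are illustrative, not from any product mentioned here), the following samples per-device sector counters and converts the delta between two samples into throughput:

```python
# Hedged sketch: DIY storage visibility on Linux via /proc/diskstats.
# Per the kernel's iostats documentation, after major/minor/name the
# fields include sectors read (4th) and sectors written (8th), and a
# "sector" here is a fixed 512-byte unit regardless of device geometry.

SECTOR_BYTES = 512

def parse_diskstats(text):
    """Map device name -> (sectors_read, sectors_written)."""
    stats = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 14:        # old-format lines have 14 fields total
            continue
        name = fields[2]
        stats[name] = (int(fields[5]), int(fields[9]))
    return stats

def throughput(before, after, interval_s):
    """Per-device (read_MB_s, write_MB_s) between two samples."""
    rates = {}
    for dev, (r1, w1) in after.items():
        r0, w0 = before.get(dev, (0, 0))
        rates[dev] = (
            (r1 - r0) * SECTOR_BYTES / interval_s / 1e6,
            (w1 - w0) * SECTOR_BYTES / interval_s / 1e6,
        )
    return rates
```

In practice you would read /proc/diskstats twice with a sleep in between and feed both snapshots to `throughput`; a device whose write rate plateaus while backup windows grow is a candidate bottleneck.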
Fibre Channel customers, for instance, might use Virtual
Instruments’ performance monitoring tool for storage area networks (SANs) to optimize performance and availability. Other
storage vendors delivering visibility tools include NetApp, which
recently acquired Akorri and its predictive tool for the virtual
infrastructure, and EqualLogic, which has a graphical user interface that lets customers monitor storage system performance.
Boston-based ad agency Arnold Worldwide virtualized
most of its servers five years ago. Chris Elam, senior systems
engineer, remembers when he first started doing backups and
noticed that backup throughput was dropping and that
backup times were growing. But visibility tools on the firm’s
Dell Compellent SAN alerted Elam to the problem. He added
more drives to increase I/O operations per second, and Compellent now spreads the data among the drives.
As an extra precaution, Arnold Worldwide’s IT staff set most
replications to take place during off-hours, except for those
involving its production file servers, which it replicates during
the day because data changes constantly. “That’s an I/O hit we
are willing to take,” Elam says, adding that customer service
is most important. “It’s one thing if backups take longer; it’s
another thing if users start to complain [about slow systems].”
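The scheduling logic Arnold Worldwide describes — replicate most datasets off-hours, but let production file servers replicate around the clock — is simple to sketch. The helper below is hypothetical, not the agency’s actual tooling; the window bounds and the set of always-on datasets are assumed values, and note the window wraps past midnight:

```python
# Hedged sketch: decide whether a dataset may replicate right now.
# The 20:00-06:00 window and the ALWAYS_REPLICATE set are illustrative
# assumptions, not values from the article.
from datetime import time

ALWAYS_REPLICATE = {"prod-file-server"}   # data changes constantly; take the I/O hit

def in_window(now, start=time(20, 0), end=time(6, 0)):
    """True if `now` falls inside the off-hours window.

    Handles windows that wrap past midnight (start > end).
    """
    if start <= end:
        return start <= now < end
    return now >= start or now < end

def should_replicate(dataset, now):
    """Production file servers replicate continuously; everything else
    waits for the off-hours window."""
    if dataset in ALWAYS_REPLICATE:
        return True
    return in_window(now)
```

The design choice mirrors Elam’s trade-off: a longer backup is acceptable, a daytime I/O storm that makes users complain is not, so only the datasets that truly change constantly are exempted from the window.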
Performance is another important consideration in the I/O
equation. “It’s really important that administrators start to
think about the I/O density and performance they need given
the amount of infrastructure they have,” Boles says. “Workload
density has massively increased in the data center. Now you
have 30 workloads in a single rack [running virtual servers].”
I/O density can be increased through the use of solid-state
drives and similar technologies, more effective caching, or
auto-tiering. Also, I/O will only increase as the enterprise adds more
servers within a single storage system. Scale-out technologies can
help scale performance as well as capacity. “Small and medium-size
business customers can look at [tools from] Scale Computing,
for example. The midrange customer could look at EqualLogic,
and the enterprise could look at NetApp and 3Par,” Boles says.
2 More Complicated Data Backup and Disaster Recovery

More than a quarter (27%) of the respondents in the Computerworld poll said that server virtualization has complicated backup and disaster recovery.
One of the biggest mistakes here is trying to protect a virtual infrastructure with traditional backup methods, according to Boles.
With traditional backup, “the degradation in backup performance is more than linear as you scale the number
of virtual machines on a piece of hardware. You’re effectively creating a blender for backup contention as you’re trying to protect these
virtual servers overnight. You try to do 10 backups simultaneously
on this one physical server, and you’ve got a lot of combat going on
inside that server for memory, CPU, network and storage,” he says.
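One common way to tame that contention is simply to cap how many backup jobs run against a single physical server at once, so ten simultaneous streams become a staggered queue. This is a hedged sketch of that idea using a per-host semaphore; the `backup_vm` callable and the limit of two concurrent jobs are illustrative assumptions, not any vendor’s defaults:

```python
# Hedged sketch: throttle concurrent backup jobs per physical host so
# virtual machines sharing one box don't all fight for its memory, CPU,
# network and storage at the same time.
import threading
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_PER_HOST = 2   # assumed limit; tune to the host's I/O headroom

def make_throttled_runner(backup_vm):
    """Wrap `backup_vm(vm)` so at most MAX_CONCURRENT_PER_HOST copies
    run per physical host at any moment."""
    gates = {}                 # one semaphore per physical host
    gates_lock = threading.Lock()

    def run(vm, host):
        with gates_lock:       # create the host's gate on first use
            gate = gates.setdefault(
                host, threading.Semaphore(MAX_CONCURRENT_PER_HOST))
        with gate:             # blocks until a slot on this host frees up
            return backup_vm(vm)

    return run
```

Submitting all the night’s jobs to a thread pool and calling `run(vm, host)` for each then lets jobs for different hosts proceed in parallel while each individual host stays under its cap.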
Complicating matters are workload mobility tools, such
as VMware’s Storage vMotion, that let users relocate virtual
machine disk files between and across shared storage locations.
“Now you have to keep a backup going in relation to these virtual
servers that are going to be moving around, and possibly run into
other bottlenecks. That can be a serious headache,” says Boles.
HOW TO DEAL: A handful of vendors are building backup
and recovery tools for the virtual environment that run within
the virtual infrastructure. That way, the vendors can capture