
Get a hands-on look at a virtualized desktop infrastructure

Virtualized desktop infrastructure (VDI) can offer solid benefits such as centralized administration, user self-service and resource consolidation. Take a look at one company’s VDI experience.

I recently
visited a fellow system administrator and friend at his organization to have
lunch and get caught up on current events. During the discussion my friend
(whom I’ll call Derek) mentioned that his company had launched a virtualized
desktop infrastructure they called DWI, short for “Developer Workstation
Infrastructure.”

Intrigued, I asked Derek for details on their
implementation and our discussion became an article for Tech Pro Research as a
result. Derek was even kind enough to set me up with a test account in his
environment and gather some interesting screenshots.

Here's the situation

"So, we previously had several dozen developers running Red Hat Enterprise Linux 5 Server on their workstations – mainly Dell Precision 490, T5400 and T5500 desktops," Derek told me. "They ran tools like Squirrel, Eclipse and AccuRev. The developers also had Windows VMs running within VMware Player so they could access Microsoft Office, Windows drive mappings and so forth. The virtual machine was really incidental to their primary desktop functions, which were to compile code, run builds and otherwise develop software for the company.

"These workstations are powerful, but they're also expensive – and many had upwards of 16 or even 32 GB of RAM. The real pain point was that these machines HAD to be kept running for long periods of time so that compiling and builds could finish. If a machine had a problem, like losing power or needing to be patched, it was really inconvenient for the developers. They had far too many interruptions in their coding, and it got to be too cumbersome to keep this expensive hardware running.

"Not only that, but the developers frequently wanted other Red Hat installations – both 5 and 6 – for different tools and purposes, or they moved from group to group and required different workstation builds, so we spent a lot of time building physical systems. This was compounded by the fact that developers were being hired on a pretty frequent basis, so we were setting up new machines over and over. We finally hit a tipping point and realized it was time to utilize the benefits of virtualization."

Here's the solution

Derek and his team decided to take advantage of Red Hat virtualization to build a VDI environment that meets their developers' needs using centralized virtual machines based on a template, or "gold image," that could be customized or refreshed. Their environment runs on a Red Hat Enterprise Virtualization cluster and allows for isolation of tasks, a scalable architecture and a consistent platform.

The expensive workstations were replaced with Windows 7 laptops, largely Dell Latitude E6330 systems. In an ironic turnabout, the developers now run Windows 7 as their primary workstation but connect to the VDI system's user portal via a browser and a special client application called SPICE (Simple Protocol for Independent Computing Environments), which provides a Remote Viewer akin to Remote Desktop in Windows – except that it connects over a dedicated network path. All laptops hook into docking stations with dual monitors; this company has used multiple monitors since its inception for better productivity.

Users log in with their Active Directory credentials (via a Lightweight Directory Access Protocol connection to the company domain controllers) and now create and administer their own virtual machines, including snapshots, cloning and deletion. They can have multiple Red Hat Enterprise Linux 5 or 6 virtual machines depending on their needs. These virtual machines offer plenty of storage, multi-core CPUs and 16 GB or more of RAM. Permissions are set up so developers can administer only the VMs they have been granted access to. They can perform their work in their virtual machines without having to reboot or worry about local workstation problems, and the company saves money by providing mid-level laptops rather than pricey desktops to the programmers.
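
Incidentally, the same directory-backed login and per-user VM visibility are exposed through RHEV-M's REST API, not just the browser portal. Here is a minimal sketch in Python using the requests library; the hostname, account and password are hypothetical placeholders, and the "Filter: true" header (which scopes results to the caller's own permissions) reflects the documented RHEV 3.x API behavior rather than anything specific to Derek's environment.

# List the virtual machines a developer is allowed to see, authenticating
# with the same Active Directory credentials (user@domain) as the portal.
import requests
import xml.etree.ElementTree as ET

RHEVM = "https://rhevm.example.com/api"          # hypothetical RHEV-M address
AUTH = ("dsmith@corp.example.com", "Secret123")  # placeholder AD credentials

resp = requests.get(
    RHEVM + "/vms",
    auth=AUTH,
    headers={"Accept": "application/xml", "Filter": "true"},  # Filter: only VMs this user can access
    verify=False,  # illustration only; point verify= at the RHEV-M CA certificate in practice
)
resp.raise_for_status()

for vm in ET.fromstring(resp.content).findall("vm"):
    print(vm.findtext("name"), "-", vm.findtext("status/state"))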

This sounds enticing, but it's important to keep in mind that a poorly planned VDI migration merely shifts costs from the front end (user workstations) to the back end (the virtualization environment), so getting the most bang for the buck from the servers is a key priority. The servers should offer plenty of capacity, but that capacity should either be put to use or held in reserve for future growth rather than sit idle.

Back-end system specs

The Red Hat Enterprise Virtualization Manager (RHEV-M) environment runs on several Dell PowerEdge R620/R820 servers. There is one RHEV manager and several hypervisor servers to share the load. Each hypervisor has four Ethernet connections: two for the RHEV/VM network and two for the SPICE network, with one of each pair going to a different Brocade switch.

The servers have the following:

  • 2 CPU sockets with 8 cores per socket
  • 500 GB of RAM with 10 GB of swap (275 GB of swap on some), or 260 GB of RAM with 145 GB of swap
  • 33 TB of shared storage on an EMC SAN via fiber, using the following volumes:
         o Two 600 GB volumes (300 GB free)
         o Fifteen 2,140 GB volumes (average 900 GB free)
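
To put the earlier point about back-end capacity in concrete terms, here is a rough, back-of-the-envelope calculation using the figures above. The 20 percent headroom reserved for the hypervisor, and the assumption that every developer VM gets 16 GB of RAM, are my own illustrative choices rather than figures from Derek.

# Rough capacity estimate: how many 16 GB developer VMs fit on one hypervisor?
HOST_RAM_GB = 500          # per the specs above
VM_RAM_GB = 16             # typical developer VM described in the article
RESERVE_FRACTION = 0.20    # assumed headroom for the hypervisor and usage spikes

usable_ram = HOST_RAM_GB * (1 - RESERVE_FRACTION)
vms_per_host = int(usable_ram // VM_RAM_GB)
print(f"{usable_ram:.0f} GB usable -> roughly {vms_per_host} x {VM_RAM_GB} GB VMs per host")
# With these assumptions: 400 GB usable, or about 25 VMs per hypervisor before any memory overcommit.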

The following software is in use:

  • Red Hat 4.4.7-4 with RHEV Hypervisor 6.5-20140324.0.el6ev
  • Red Hat Enterprise Virtualization Manager 3.3.1-0.48.el6ev
  • Linux kernel 2.6.32-431.11.2.el6.x86_64

Checking out the User Portal

The user
portal allows users to log in from any browser to access or administer their
virtual machines. Here’s how they go about it:


Figure A

Once users log into their domain they are presented with a
screen similar to the following, which shows their virtual machines:


Figure B

In the above example, I have a virtual machine with my name on it. If I had multiple virtual machines, these would also be listed. Note the four buttons to the right of my VM name; these allow me to run, shut down, suspend or power off the VM, respectively. There is also a "Console" button, which lets me access the VM (more on that in a bit). The button to the right of that offers console configuration options:


Figure C

Since they
use the SPICE client (and a browser plug-in) this is set appropriately, but it’s
interesting to note the other options such as VNC or Remote Desktop.
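
Those run, shut down, suspend and power off buttons correspond to action sub-resources on each VM in RHEV-M's REST API. The sketch below is illustrative only: the VM id and credentials are placeholders, and the action names follow the documented RHEV 3.x API rather than anything captured from Derek's environment.

# Trigger the same lifecycle actions as the portal buttons by POSTing an
# empty <action/> document to the VM's action URLs.
import requests

RHEVM = "https://rhevm.example.com/api"           # hypothetical RHEV-M address
AUTH = ("dsmith@corp.example.com", "Secret123")   # placeholder credentials
VM_ID = "00000000-0000-0000-0000-000000000000"    # placeholder id from GET /api/vms

def vm_action(action):
    """action is one of: start, shutdown, suspend, stop (stop is the hard power-off)."""
    resp = requests.post(
        f"{RHEVM}/vms/{VM_ID}/{action}",
        data="<action/>",
        auth=AUTH,
        headers={"Content-Type": "application/xml"},
        verify=False,  # illustration only
    )
    resp.raise_for_status()
    return resp

vm_action("shutdown")   # graceful guest shutdown, like the portal's shut down button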

Getting back
to the main screen, if I click my VM it shows me more information on the bottom
of the screen:


Figure D

You can see the virtual machine specs and relevant details listed here; I have 16 GB of memory, 8 CPU cores and two monitors.

Clicking
“Network Interfaces” displays the following:


Figure E

Pretty
vanilla stuff here, since it’s just the one NIC, but I can view network details
such as speed and MAC address.

The “Disks”
tab shows me this view:


Figure F

This displays the virtual disk details, type of provisioning, creation date and so forth.

The “Snapshots” tab will
allow users to work with virtual machine snapshots:


Figure G

This provides an incredibly useful set of tools: users can handle their own snapshots, making changes to their virtual machines and safely backing them out if needed. Derek informed me this capability alone has offered countless benefits to the company. Emergency rebuilds are a thing of the past.
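
The same self-service snapshots can be scripted; here is a minimal sketch, assuming the snapshots sub-collection documented for the RHEV 3.x REST API (the VM id, credentials and description are placeholders).

# Create a snapshot before a risky change, then list the VM's existing snapshots.
import requests
import xml.etree.ElementTree as ET

RHEVM = "https://rhevm.example.com/api"           # hypothetical RHEV-M address
AUTH = ("dsmith@corp.example.com", "Secret123")   # placeholder credentials
VM_ID = "00000000-0000-0000-0000-000000000000"    # placeholder VM id

snapshot_xml = "<snapshot><description>before toolchain upgrade</description></snapshot>"
resp = requests.post(f"{RHEVM}/vms/{VM_ID}/snapshots", data=snapshot_xml, auth=AUTH,
                     headers={"Content-Type": "application/xml"}, verify=False)
resp.raise_for_status()

snaps = requests.get(f"{RHEVM}/vms/{VM_ID}/snapshots", auth=AUTH,
                     headers={"Accept": "application/xml"}, verify=False)
for snap in ET.fromstring(snaps.content).findall("snapshot"):
    print(snap.findtext("description"))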

Clicking "Permissions" shows who has access to this virtual machine; in this case, only me:


Figure H

The “Events” tab shows all
events pertaining to virtual machine activity such as console session usage and
startup details:


Figure I

“Applications” shows details
about what’s running on the VM (but not the OS level, which is Red Hat
Enterprise Linux 6):


Figure J

The
“Monitor” tab shows the CPU, Memory and Network usage:


Figure K
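
The numbers behind the Monitor tab are also available as per-VM statistics in the REST API. A hedged sketch follows; the individual statistic names vary between versions, so treat the output as illustrative.

# Dump the raw statistics behind the Monitor tab (CPU, memory and network counters).
import requests
import xml.etree.ElementTree as ET

RHEVM = "https://rhevm.example.com/api"           # hypothetical RHEV-M address
AUTH = ("dsmith@corp.example.com", "Secret123")   # placeholder credentials
VM_ID = "00000000-0000-0000-0000-000000000000"    # placeholder VM id

resp = requests.get(f"{RHEVM}/vms/{VM_ID}/statistics", auth=AUTH,
                    headers={"Accept": "application/xml"}, verify=False)
resp.raise_for_status()

for stat in ET.fromstring(resp.content).findall("statistic"):
    name = stat.findtext("name")                   # e.g. memory.used, cpu.current.total
    value = stat.findtext("values/value/datum")
    print(f"{name} = {value}")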

And finally
the “Sessions” tab will show who is logged into the VM:


Figure L

That’s not
all, however. Clicking “Resources” in the left-side toolbar shows me what’s
happening with my virtual CPUs, memory, storage and disks/snapshots:


Figure M

Getting back to the Virtual Machines tab, clicking the "Console" button launches my VM via the SPICE client's Remote Viewer:


Figure N



Once I log in I will see the following KDE desktop (I could also specify GNOME), which of course behaves just like a real physical machine:


Figure O
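
Behind the Console button, the portal essentially hands the Remote Viewer the VM's SPICE display address and port, and those details are visible on the VM resource itself. The snippet below is a sketch under the same placeholder assumptions as before; the layout of the display element follows the documented RHEV 3.x API.

# Look up the SPICE display details that the Remote Viewer connects to.
import requests
import xml.etree.ElementTree as ET

RHEVM = "https://rhevm.example.com/api"           # hypothetical RHEV-M address
AUTH = ("dsmith@corp.example.com", "Secret123")   # placeholder credentials
VM_ID = "00000000-0000-0000-0000-000000000000"    # placeholder VM id

resp = requests.get(f"{RHEVM}/vms/{VM_ID}", auth=AUTH,
                    headers={"Accept": "application/xml"}, verify=False)
resp.raise_for_status()

display = ET.fromstring(resp.content).find("display")
print("protocol:", display.findtext("type"))       # e.g. spice
print("address: ", display.findtext("address"))    # host on the dedicated SPICE network
print("ports:   ", display.findtext("port"), "/", display.findtext("secure_port"))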

What if I
want to create new VMs? Simple – back at the main user portal page there is a
“New VM” link.


Figure P

Once I click this link, I am presented with the following dialog box:


Figure Q

From here I
can specify the setup details, including what template to use (“Blank” is the
standard Red Hat Enterprise Linux 6 template), whether it is optimized for
server or desktop, and the network interface details. Clicking “Console” lets
me set the monitors to be used (two). “Advanced Options” provide memory/CPU
settings, time zone, high availability and boot options. Once I create the VM
it will show up in the user portal where I can log in and work with it.
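
The New VM dialog maps fairly directly onto a POST to the vms collection in the REST API. A minimal sketch follows, under the same assumptions as the earlier snippets; the VM name, cluster name, memory size and CPU topology here are illustrative values, not Derek's actual configuration.

# Create a new VM from the "Blank" template, roughly what the New VM dialog submits.
import requests
import xml.etree.ElementTree as ET

RHEVM = "https://rhevm.example.com/api"           # hypothetical RHEV-M address
AUTH = ("dsmith@corp.example.com", "Secret123")   # placeholder credentials

vm_xml = (
    "<vm>"
    "<name>dsmith-rhel6-build</name>"              # placeholder VM name
    "<cluster><name>DWI</name></cluster>"          # illustrative cluster name
    "<template><name>Blank</name></template>"
    "<memory>17179869184</memory>"                 # 16 GiB expressed in bytes
    "<cpu><topology sockets='1' cores='8'/></cpu>"
    "<type>server</type>"                          # the dialog's "optimized for server"
    "</vm>"
)

resp = requests.post(RHEVM + "/vms", data=vm_xml, auth=AUTH,
                     headers={"Content-Type": "application/xml"}, verify=False)
resp.raise_for_status()

created = ET.fromstring(resp.content)
print("Created VM", created.findtext("name"), "with id", created.get("id"))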

Checking out the admin portal

The user
portal offers flexibility and self-service options for business employees (and
their system administrators) to get their work done in a fast-paced
environment. However, it’s also worth taking a look at the separate admin
portal to this environment, which provides the Linux administrators with even
more VDI power. Let’s step through some screenshots to illustrate what they can
do. Note: virtual machine and domain names have been blocked out in order to
preserve confidentiality.


Figure R

Upon logging into the admin portal, you can see the tabs across the top as well as the options on the left, which provide insight and control features. The tabs are where most of the action takes place, so I'll cover each one.

The first, “Data Centers,” shows the virtual data
centers – in this case the unused default and the DWI desktop environment. More
could be added here as the company’s needs expand.

The "Clusters" tab provides a view of the clusters that are running:


Figure S

A cluster is a container that holds hosts (hypervisors) and VMs.
Clusters have logical networks assigned, use the storage domains attached to
the datacenter, and can share templates.

“Hosts”
shows the actual physical servers which have been allocated to this setup,
along with their status and resource usage:


Figure T
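
The admin-side collections are reachable over the same REST API, given administrator credentials. The following sketch lists the hypervisor hosts with their state and installed memory; the admin account shown is a placeholder, and the field names follow the documented hosts collection.

# Admin-side view: list hypervisor hosts with their status and memory.
import requests
import xml.etree.ElementTree as ET

RHEVM = "https://rhevm.example.com/api"           # hypothetical RHEV-M address
ADMIN_AUTH = ("admin@internal", "Secret123")      # placeholder admin credentials

resp = requests.get(RHEVM + "/hosts", auth=ADMIN_AUTH,
                    headers={"Accept": "application/xml"}, verify=False)
resp.raise_for_status()

for host in ET.fromstring(resp.content).findall("host"):
    print(host.findtext("name"),
          host.findtext("status/state"),          # e.g. up, maintenance
          host.findtext("memory"), "bytes of RAM")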

Next up we
have “Networks” which is self-explanatory:


Figure U

Then there
is “Storage” which can tell you at a glance how much space has been allocated
and is being used.


Figure V

“Disks”
takes the storage concept further by showing all the virtual disks in use along
with their status (remember, the host names have been blocked out since these
contain user identity details):


Figure W

Now we come to "Virtual Machines," which really gets into the nitty-gritty of what's going on here:


Figure X

This screen
not only shows virtual machine details but many administrative controls as
well. VMs can be added, removed, migrated to other hosts, snapshotted, turned
into templates, and more.

“Pools” and “Volumes” do not
show anything since these have not been set up, but the “Templates” tab reveals
the available virtual machine template which can be used to set up new images:


Figure Y

Finally, the
“Users” tab shows the domain users authorized to log into this environment.

Issues with VDI

VDI has been a great solution for this organization, but there have been some challenges as well. Developers have reported issues with copy and paste, with stretching or resizing their virtual machine screen across two monitors, with SPICE client crashes and – most interestingly of all – with a weird authentication problem whereby logging into the virtual machine passes their credentials to Active Directory several times (nine at last count), meaning a single mistyped password can lock users out almost immediately. All of these issues are being actively investigated with Red Hat support and will eventually be resolved, but it's important to note that there is almost always some turbulence with any new endeavor. All issues are tracked and discussed weekly with development managers so the problems can be checked off and the IT department can ensure users are satisfied with the delivered solutions.

The
centralization of resources works well for this company and having multiple
hypervisors lets them spread their virtual machines among several servers – but
this does not eliminate the need for virtual machine and server patching, of
course. Reboots are still a part of life – though much more effectively planned
out now – and careful analysis and monitoring of the resources involved is
critical to ensure the programmers can do their jobs without overloading the
hardware.

In summary

The key to a
successful virtualization implementation is making sure that your benefits
outweigh your drawbacks. Derek’s company no longer has to pay several thousand
dollars for a workstation for a new hire… but they also must maintain a fairly
expensive clustered server arrangement to provide that new hire with sufficient
programming resources. It has paid off well for them due to meticulous analysis
and budgeting of the hardware and software behind their Red Hat Enterprise
Virtualization platform and the reliability/scalability they now enjoy. However, without careful management the project could easily have gone off track and wound up consuming more capital and staff labor.

Derek advised me:
“Take a look at your ‘before’ and ‘after’ figures before tackling something
like this, and don’t forget to account for future growth. Always do an in-house
evaluation rather than blindly relying on vendor promises, and most
importantly, get buy-in from your users and department heads before giving the
thumbs up. You’ll be the one building it, but they’re the ones who will be
using it on a daily basis. Virtualization made sense for us because our
developers have high-end requirements, but I don’t think we would have even
considered putting their generic Windows VMs in an environment like this; there
was no need since those ran fine on the local workstations so that would have
just been wasted money.”
