Articles from Jason Brooks

Trying out oVirt's Probabilistic Optimizer

Next week in Los Angeles, I'll be giving a talk at the SCALE 13x conference on oVirt's new OptaPlanner-powered scheduling adviser.

Martin Sivák wrote a great post about the feature a couple of months ago, but didn't cover its installation process, which still has a few rough edges.

Read on to learn how to install the optimizer, and start applying fancy probabilistic fu to your oVirt VM launches and migrations.
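For the impatient, the install boils down to pulling the optimizer packages onto your engine machine. The package names below are my best recollection of how the optimizer is split up, so treat this as a rough sketch and check the oVirt 3.5 repos for the real list:

    # Rough sketch only -- package names are my assumption, verify
    # against the oVirt 3.5 repos before running:
    yum install ovirt-optimizer ovirt-optimizer-ui
    service ovirt-engine restart   # UI plugins get picked up on an engine restart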

Read More »

RDO Quickstart: Doing the Neutron Dance

RDO, the community-oriented OpenStack distribution for CentOS, Fedora, and their kin, is super-easy to get up and running, as a recently posted YouTube video illustrates:

At the end of the process, you'll have a single-node RDO installation on which you can create VM instances and conduct various experiments. You can even associate your VMs with floating IP addresses, which connect these instances to the "Public" network that's auto-configured by the installer.

BUT, that's where things stop being super-easy, and start being super-confusing. The auto-configured Public network I just mentioned will only allow you to access your VMs from the single RDO machine hosting those VMs. RDO's installer knows nothing about your specific network environment, so coming up with a more useful single-node OpenStack installation takes some more configuration.
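To give a flavor of the fix, here's the sort of thing the full post walks toward, sketched with the neutron CLI of that era: swap the installer's demo public network for an external network that matches your own LAN. The subnet, gateway, and address pool below are placeholders for your environment, and you'll still need to wire the host's external bridge to a real NIC:

    # Sketch only -- addresses are placeholders for your own network; this
    # assumes you've removed the installer's demo network first:
    . keystonerc_admin
    neutron net-create ext-net --router:external=True
    neutron subnet-create ext-net 192.168.1.0/24 --name ext-subnet \
      --enable_dhcp=False --gateway 192.168.1.1 \
      --allocation-pool start=192.168.1.200,end=192.168.1.220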

Read More »

Fedora 21: Fedora Goes Atomic

This week, Fedora 21 (a.k.a. the release that must not be named) hit FTP mirrors everywhere, with a feature list led by a new organizational structure for the distribution. Fedora is now organized into three separate flavors: Workstation, Server, and Cloud.

Fedora's Cloud flavor is further divided into a "traditional" base image for deploying the distribution on your cloud of choice, and an Atomic Host image into which Fedora's team of cloud wranglers has herded a whole series of futuristic operating system technologies.

Applications: Fedora Atomic is built to host applications in docker containers, which provide a simple-to-use means of getting at all the workload-hosting goodness that's built into Linux, but that tends to require some assembly.
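Running a workload on an Atomic host looks just like it does anywhere else docker runs; by way of illustration only (the image and port choices here are arbitrary):

    # Illustration only -- any image from the Docker Hub will do:
    sudo docker run -d -p 8080:80 --name web nginx
    curl http://localhost:8080   # nginx's welcome page, served from the container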

Read More »

Up and Running with oVirt 3.5, Part Two

Two weeks ago in this space, I wrote about how to deploy the virtualization, storage, and management elements of the new oVirt 3.5 release on a single machine. Today, we're going to add two more machines to the mix, which will enable us to bring down one machine at a time for maintenance while allowing the rest of the deployment to continue its virtual machine hosting duties uninterrupted.

We'll be configuring two more machines to match the system we set up in part one, installing and configuring CTDB to provide HA failover for the NFS share where the hosted engine lives, and expanding our single-brick Gluster volumes into replicated volumes that span all three of our hosts.
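The Gluster piece of that, to give you a taste, amounts to probing the new hosts and growing each volume to a three-way replica. Here's a sketch, with my own volume name and brick paths standing in for whatever you used in part one:

    # Sketch -- volume name and brick paths are assumptions from my own lab:
    gluster peer probe host2.example.com
    gluster peer probe host3.example.com
    gluster volume add-brick engine replica 3 \
      host2.example.com:/gluster/engine/brick \
      host3.example.com:/gluster/engine/brick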

Before proceeding, I'll say that this converged virtualization and storage scenario is a leading-edge sort of thing. Many of the ways you might use oVirt and Gluster are available in commercially-supported configurations using RHEV and RHS, but at this time, this sort of oVirt+Gluster mashup isn't one of them. With that said, my test lab has been set up like this for the past six or seven months, and it's worked reliably for me.

Read More »

Up and Running with oVirt 3.5

Last week, version 3.5 of oVirt, the open source virtualization management system, hit FTP mirrors sporting a slate of fixes and enhancements, including a new-look user interface, and support for using CentOS 7 machines as virtualization hosts.

As with every new oVirt release, I'm here to suggest a path to getting up and running with the project on single server, with an option for expanding to additional machines in the future. First, though, a quick rundown of the different single-machine options for trying out oVirt:

  • oVirt Live ISO: A LiveCD image that you can burn onto a blank CD or copy onto a USB stick to boot from and run oVirt. This is probably the fastest way to get up and running, but once you're up, this is probably your lowest-performance option, and not suitable for extended use or expansion.
  • oVirt All in One plugin: Run the oVirt management server and virtualization host components on a single machine with local storage. This is a more permanent version of the Live ISO approach, and had been my favored kick-the-tires option until the rise of…
  • oVirt Hosted Engine: The self-hosted engine approach consists of an oVirt virtualization host that serves up its own management engine. This route is a bit more complicated than those above, but I like it because:
    • oVirt 3.5 supports CentOS 7 as a virtualization host, but not as a host for the management engine. Running oVirt Engine in a separate VM allows you to put CentOS 7 on your metal, and keep CentOS 6 around for the engine.
    • With the All-in-One approach, your management engine is married to the machine it's installed on, limiting your expansion options. The Hosted Engine can move among hosts.

For this howto, I'll be walking through the steps you can follow to get oVirt 3.5 up and running on a single machine with a self-hosted engine, and with self-hosted storage, courtesy of GlusterFS.
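The heart of the process is the hosted engine setup tool, which interviews you about storage, networking, and the engine VM and then does the heavy lifting. In rough outline on the CentOS 7 host (the release RPM URL is the one documented for oVirt 3.5; double-check it against the project site before relying on it):

    # Rough outline only -- verify the release RPM URL on ovirt.org:
    yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
    yum install ovirt-hosted-engine-setup screen
    screen
    hosted-engine --deploy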

In my next post, I'll describe how to add two more machines to the mix to give yourself an installation hardy enough to bring a machine down for updates and maintenance without everything grinding to a halt.

If you have access to good external NFS or iSCSI storage to use with your oVirt exploration, I'll point out where you can skip the GlusterFS bits and use your external storage resource.

IMPORTANT NOTE:

I want to stress that this converged virtualization and storage scenario is a bleeding-edge configuration. Many of the ways you might use oVirt and Gluster are available in commercially-supported configurations using RHEV and RHS, but at this time, this oVirt+Gluster mashup isn't one of them. What's more, this configuration is not "supported" by the oVirt project proper, a state that should change somewhat in oVirt 3.6, which is set to include an official converged setup option.

I do use this converged setup in my own lab, and it does work reliably for me, but for a multi-host setup it's crucial to use three (not two) gluster replicas, and it's important that you use CTDB, or something like it, to provide for automated IP failover. While it may seem reasonable to simply use "localhost" as the NFS mount point for the hosted engine storage, and rely on Gluster to handle the replication between the servers, this will lead to split brain issues.
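To make the CTDB part concrete, the pieces amount to two small files on each host plus a floating IP that clients mount, with the addresses here invented for illustration:

    # Sketch -- node IPs, floating IP, and interface name are placeholders:
    echo 10.0.0.11 >  /etc/ctdb/nodes
    echo 10.0.0.12 >> /etc/ctdb/nodes
    echo 10.0.0.13 >> /etc/ctdb/nodes
    echo "10.0.0.100/24 eth0" > /etc/ctdb/public_addresses
    # Mount the hosted engine storage via a name that resolves to 10.0.0.100,
    # never via localhost, so the mount follows the floating IP on failover.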

Read More »

GNOME Boxes 3.14, Unboxed

Welcome to the inaugural edition of Boxes of Boxes, a bi-monthly virtualization, containerization, and turduckenization column. Given the title and subject matter of this column, and the fact that version 3.14 of the GNOME desktop environment has recently shipped, I decided to take a look at the project's built-in application for running virtual machines: GNOME Boxes. I took GNOME Boxes for a spin on Fedora 21 alpha, which also shipped recently, sporting GNOME 3.14 as its default desktop environment.

The GNOME 3.14 release notes point to support for Debian as a newly added "express installation" target for GNOME Boxes, so I started off by pointing the app at a Debian Wheezy installation ISO I'd downloaded. The express installation feature suggests a set of sane defaults for virtual disk size and VM memory, asks for a password, and promptly cooks up a fresh VM instance. It worked as expected with the installation I'd kicked off.

Read More »

oVirt 3.4, Glusterized

oVirt's Hosted Engine feature, introduced in the project's 3.4 release, enables the open source virtualization system to host its own management server, which means one fewer required machine, and more self-sufficiency for your oVirt installation.

While a self-sufficient oVirt installation has been achievable for some time using the project's "All-in-One" method of running an oVirt virtualization host and management server together on one machine, the Hosted Engine feature allows multiple machines to partake in the hosting duties, eliminating any one host as a single point of failure.

The Hosted Engine feature relies on NFS storage to house the management VM. Running an NFS server on one of our virtualization hosts would make that host a new single point of failure, which means we need either to tap an external NFS filer (the approach I took in the walkthrough I posted here recently) or we need to figure out how to make our oVirt hosts serve up their own, replicated NFS storage.

In this post, I'm going to walk through that latter option – setting up a pair of CentOS 6 machines to serve as oVirt virtualization hosts that together provide the NFS storage required for the Hosted Engine feature, using Gluster to supply the replicated storage and its NFS export, and CTDB to provide a virtual IP address mount point for that storage.

Read More »

Up and Running with oVirt 3.4

Last week, the oVirt Project delivered a new version of its open source virtualization management system, complete with a feature I've eagerly awaited for the past two years. The feature, called Hosted Engine, enables oVirt admins to host the system's management server (aka the engine) on one of the virtualization hosts it manages.

While oVirt was designed to run across separate management and virtualization hosts, it's been possible from early on (version 3.0) to hack up a machine to serve both roles. In subsequent releases, the project embraced and refined this configuration, turning it into an easy-to-use All-in-One (AIO) installation plugin.

The problem with AIO is that it leaves you with one of your most important workloads (the oVirt engine) stuck running on a single piece of hardware, where it can't easily be moved around – a very un-virt scenario. Hosted Engine gives those of us interested in getting oVirt rolling on a single server a new deployment option, and one that promises to scale out more nicely than is possible with the AIO plugin.

In this post, I'm going to walk through the installation and first steps of a basic oVirt install using the Hosted Engine feature.

Read More »

RDO, Gluster, oVirt Test Days this Month, and Thoughts on Fedora.next

In the open source software world, every day can be a test day, but there’s plenty to be gained when community members converge on an IRC channel to nick themselves on a project’s cutting edge. This month, there’s a handful of test days on tap:

RDO: February 4th & 5th

Following the OpenStack Icehouse milestone 2 release, the RDO project is holding a pair of test days on February 4th and 5th. For...

Read More »

CentOS SIG and Variant Activity

The CentOS Project is increasing its efforts to empower contributors to produce and collaborate on new CentOS Variants, in which groups of contributors combine the CentOS core with newer or otherwise custom components to suit that group’s needs.

Xen4CentOS, which combines CentOS 6 with components from the Xen project and the "longterm maintenance" release of the Linux kernel, is an example of an...

Read More »

Gluster and oVirt Test Days Coming Up

If you’re a fan of scale-out storage, datacenter virtualization, or (like me) a mixture of the two, you’ll want to mark your calendar for this pair of upcoming test days for the Gluster and oVirt projects.

This weekend, from Friday the 17th at midnight UTC to Monday the 20th at midnight UTC, the Gluster project is having a test weekend (aka Glusterfest) for its 3.5 release. There’s a breakdown...

Read More »

oVirt 3.3, Glusterized

The All-in-One install I detailed in Up and Running with oVirt 3.3 includes everything you need to run virtual machines and get a feel for what oVirt can do, but the downside of the local storage domain type is that it limits you to that single All-in-One (AIO) node.

You can shift your AIO install to a shared storage configuration to invite additional nodes to the party, and oVirt has supported...

Read More »

Testing oVirt 3.3 with Nested KVM

We’re nearing the release of oVirt 3.3, and I’ve been testing out all the new features — and using oVirt to do it, courtesy of nested KVM.

KVM takes advantage of virtualization-enabling hardware extensions that most recent processors provide. Nested KVM enables KVM hypervisors to make these extensions available to their guest instances.

Nested KVM typically takes a bit of configuration to...
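For an Intel box, the knobs in question look roughly like this (AMD hosts use kvm_amd and its own nested option instead; the modprobe.d file name below is just my habit):

    # Enable nested virt in the kvm_intel module (file name is arbitrary):
    echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm_intel.conf
    modprobe -r kvm_intel && modprobe kvm_intel   # no VMs can be running
    cat /sys/module/kvm_intel/parameters/nested   # should now report Y
    # Then expose the virt extensions to the guest, for example with
    # <cpu mode='host-passthrough'/> in the guest's libvirt XML.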

Read More »

GlusterFest Test Day and the Gluster Test Framework

The first beta of glusterfs 3.4 is scheduled for release tomorrow, and the project plans to greet this new beta with GlusterFest: a 24-hour test day, starting at 8pm PDT May 7/03:00 UTC May 8.

Since I plan on participating in the testing, I thought it'd be a good idea to study up on Gluster's new test framework. You can learn all about the test framework in the video below, but I'll also walk you...
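If you want to poke at the framework before watching the video, it lives in the glusterfs source tree; as best I recall the layout, a run looks something like this (script and test paths may have shifted, so check the repo):

    # From a glusterfs source checkout -- paths from memory, verify in the repo:
    git clone https://github.com/gluster/glusterfs.git && cd glusterfs
    ./run-tests.sh                    # the full suite; budget plenty of time
    prove -vf tests/basic/volume.t    # or run a single TAP-style test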

Read More »

OpenStack Summit highlights amazing open source outcomes

Last month, I attended my first OpenStack Summit as part of a team from Red Hat helping to launch a new community distribution of the popular open source infrastructure as a service (IaaS) project.

I came away from the Summit impressed with the size and velocity of OpenStack. The conference drew some 3000 users, developers, and members of the vendor community, roughly twice the draw from the previous...

Read More »