About jbrooks

Jason Brooks is the Manager of Community Infrastructure in the Open Source Program Office. Follow him on Twitter at @jasonbrooks.

Trying out oVirt’s Probabilistic Optimizer

Next week in Los Angeles, I’ll be giving a talk at the SCALE 13x conference on oVirt’s new OptaPlanner-powered scheduling adviser.

Martin Sivák wrote a great post about the feature a couple of months ago, but didn’t cover its installation process, which still has a few rough edges.

Read on to learn how to install the optimizer, and start applying fancy probabilistic fu to your oVirt VM launches and migrations.

Continue reading

RDO Quickstart: Doing the Neutron Dance

RDO, the community-oriented OpenStack distribution for CentOS, Fedora, and their kin, is super-easy to get up and running, as a recently posted YouTube video illustrates:

At the end of the process, you’ll have a single-node RDO installation on which you can create VM instances and run various experiments. You can even associate your VMs with floating IP addresses, which connect these instances to the "Public" network that’s auto-configured by the installer.
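For reference, an RDO all-in-one install of this era boiled down to just a few commands. The release RPM URL shifts over time, so treat this as a sketch of the process rather than the canonical recipe (check the RDO quickstart page for the current URL):

```
# Enable the RDO repository (URL is illustrative; it may have moved)
# and install the packstack installer.
sudo yum install -y https://rdo.fedorapeople.org/rdo-release.rpm
sudo yum install -y openstack-packstack

# Run an all-in-one OpenStack installation on this machine.
packstack --allinone
```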

But that’s where things stop being super-easy and start being super-confusing. The auto-configured Public network I just mentioned only allows you to access your VMs from the single RDO machine hosting them. RDO’s installer knows nothing about your specific network environment, so arriving at a more useful single-node OpenStack installation takes some more configuration.
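The cure, in rough strokes, is to swap the installer’s isolated public network for one that matches your actual LAN. Here’s a minimal sketch using the Neutron CLI of the day; all the 192.168.1.x values are stand-ins for your own network’s addresses:

```
# Load the admin credentials that packstack writes out.
source ~/keystonerc_admin

# After removing the installer's auto-configured public network,
# create an external network and a subnet mirroring the real LAN.
neutron net-create public --router:external=True
neutron subnet-create public 192.168.1.0/24 --name public_subnet \
  --gateway 192.168.1.1 --disable-dhcp \
  --allocation-pool start=192.168.1.100,end=192.168.1.150
```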

Continue reading

Fedora 21: Fedora Goes Atomic

This week, Fedora 21 (a.k.a. the release that must not be named) hit FTP mirrors everywhere, with a feature list led by a new organizational structure for the distribution. Fedora is now organized into three separate flavors: Workstation, Server, and Cloud.

Fedora’s Cloud flavor is further divided into a "traditional" base image for deploying the distribution on your cloud of choice, and an Atomic Host image into which Fedora’s team of cloud wranglers has herded a whole series of futuristic operating system technologies.

Applications: Fedora Atomic is built to host applications in docker containers, which provide a simple-to-use means of getting at all the workload-hosting goodness that’s built into Linux, but that tends to require some assembly.
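To make "simple-to-use" concrete, here’s what hosting a containerized service on an Atomic host looks like; the nginx image is just an arbitrary example workload, not anything specific to Fedora Atomic:

```
# Pull and start a containerized web server in the background;
# nginx is a stand-in for whatever application you want to host.
sudo docker run -d --name web -p 8080:80 nginx

# Confirm the container is running and answering requests.
sudo docker ps
curl http://localhost:8080/
```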

Continue reading

Up and Running with oVirt 3.5, Part Two

{:align="right"} Two weeks ago in this space, I wrote about how to deploy the virtualization, storage, and management elements of the new oVirt 3.5 release on a single machine. Today, we’re going to add two more machines to the mix, which will enable us to bring down one machine at a time for maintenance while allowing the rest of the deployment to continue its virtual machine hosting duties uninterrupted.

We’ll be configuring two more machines to match the system we set up in part one, installing and configuring CTDB to provide HA failover for the NFS share where the hosted engine lives, and expanding our single-brick Gluster volumes to replicated volumes that will span all three of our hosts.
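For a taste of the volume-expansion step, converting a single-brick volume into a three-way replica is a one-liner per volume. The volume, host, and brick names below are placeholders for whatever you used in part one:

```
# Turn a single-brick volume into a replica-3 volume by adding a
# brick from each of the two new hosts (names and paths are examples).
gluster volume add-brick engine replica 3 \
  host2.example.com:/gluster/engine/brick \
  host3.example.com:/gluster/engine/brick

# Verify the new brick count and replica level.
gluster volume info engine
```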

Before proceeding, I’ll say that this converged virtualization and storage scenario is a leading-edge sort of thing. Many of the ways you might use oVirt and Gluster are available in commercially supported configurations using RHEV and RHS, but at this time, this sort of oVirt+Gluster mashup isn’t one of them. With that said, my test lab has been set up like this for the past six or seven months, and it’s worked reliably for me.

Continue reading

Up and Running with oVirt 3.5

{:align="right"} Last week, version 3.5 of oVirt, the open source virtualization management system, hit FTP mirrors sporting a slate of fixes and enhancements, including a new-look user interface, and support for using CentOS 7 machines as virtualization hosts.

As with every new oVirt release, I’m here to suggest a path to getting up and running with the project on a single server, with an option for expanding to additional machines in the future. First, though, a quick rundown of the different single-machine options for trying out oVirt:

  • oVirt Live ISO: A LiveCD image that you can burn onto a blank CD or copy onto a USB stick to boot from and run oVirt. This is probably the fastest way to get up and running, but once you’re up, it’s also your lowest-performance option, and not suitable for extended use or expansion.
  • oVirt All in One plugin: Run the oVirt management server and virtualization host components on a single machine with local storage. This is a more permanent version of the Live ISO approach, and had been my favored kick-the-tires option until the rise of…
  • oVirt Hosted Engine: The self-hosted engine approach consists of an oVirt virtualization host that serves up its own management engine. This route is a bit more complicated than those above, but I like it because:
    • oVirt 3.5 supports CentOS 7 as a virtualization host, but not as a host for the management engine. Running oVirt Engine in a separate VM allows you to put CentOS 7 on your metal, and keep CentOS 6 around for the engine.
    • With the All-in-One approach, your management engine is married to the machine it’s installed on, limiting your expansion options. The Hosted Engine can move among hosts.

For this howto, I’ll be walking through the steps you can follow to get oVirt 3.5 up and running on a single machine with a self-hosted engine, and with self-hosted storage, courtesy of GlusterFS.
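In skeletal form, and assuming the Gluster-backed storage is already prepared, the hosted engine setup comes down to installing the setup package from the oVirt 3.5 repository and running the interactive installer. The repo RPM URL below follows the pattern in use at the time and may have moved since:

```
# Enable the oVirt 3.5 repository and install the hosted engine tooling.
sudo yum install -y http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
sudo yum install -y ovirt-hosted-engine-setup

# Walk through the interactive deployment; when prompted for storage,
# point it at the NFS path backing the engine VM.
sudo hosted-engine --deploy
```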

In my next post, I’ll describe how to add two more machines to the mix to give yourself an installation hardy enough to bring a machine down for updates and maintenance without everything grinding to a halt.

If you have access to good external NFS or iSCSI storage to use with your oVirt exploration, I’ll point out where you can skip the GlusterFS bits and use your external storage resource.

IMPORTANT NOTE:

I want to stress that this converged virtualization and storage scenario is a bleeding-edge configuration. Many of the ways you might use oVirt and Gluster are available in commercially-supported configurations using RHEV and RHS, but at this time, this oVirt+Gluster mashup isn’t one of them. What’s more, this configuration is not "supported" by the oVirt project proper, a state that should change somewhat in oVirt 3.6, which is set to include an official converged setup option.

I do use this converged setup in my own lab, and it does work reliably for me, but for a multi-host setup it’s crucial to use three (not two) Gluster replicas, and it’s important that you use CTDB, or something like it, to provide for automated IP failover. While it may seem reasonable to simply use "localhost" as the NFS mount point for the hosted engine storage, and rely on Gluster to handle the replication between the servers, this will lead to split-brain issues.
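To make the virtual IP point concrete, the core of a minimal CTDB configuration is two small files: one listing the cluster’s fixed node addresses, and one defining the floating address that clients mount. Everything below is an example layout, not a complete setup:

```
# /etc/ctdb/nodes -- one fixed IP per participating host
192.168.1.11
192.168.1.12
192.168.1.13

# /etc/ctdb/public_addresses -- the floating IP that NFS clients mount,
# with its netmask and the interface CTDB should assign it to
192.168.1.200/24 eth0
```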

Continue reading

GNOME Boxes 3.14, Unboxed

Welcome to the inaugural edition of Boxes of Boxes, a bi-monthly virtualization, containerization, and turduckenization column. Given the title and subject matter of this column, and the fact that version 3.14 of the GNOME desktop environment has recently shipped, I decided to take a look at the project’s built-in application for running virtual machines: GNOME Boxes. I took GNOME Boxes for a spin on Fedora 21 alpha, which also shipped recently, sporting GNOME 3.14 as its default desktop environment.

The GNOME 3.14 release notes point to support for Debian as a newly added "express installation" target for GNOME Boxes, so I started off by pointing the app at a Debian Wheezy installation ISO I’d downloaded. The express installation feature suggests a set of sane defaults for virtual disk size and VM memory, asks for a password, and promptly cooks up a fresh VM instance. The feature worked as expected with the installation I kicked off.

Continue reading

oVirt 3.4, Glusterized

oVirt’s Hosted Engine feature, introduced in the project’s 3.4 release, enables the open source virtualization system to host its own management server, which means one fewer required machine, and more self-sufficiency for your oVirt installation.

While a self-sufficient oVirt installation has been achievable for some time using the project’s "All-in-One" method of running an oVirt virtualization host and management server together on one machine, the Hosted Engine feature allows multiple machines to partake in the hosting duties, eliminating any one host as a single point of failure.

The Hosted Engine feature relies on NFS storage to house the management VM. Running an NFS server on one of our virtualization hosts would make that host a new single point of failure, which means we need either to tap an external NFS filer (the approach I took in the walkthrough I posted here recently) or we need to figure out how to make our oVirt hosts serve up their own, replicated NFS storage.

In this post, I’m going to walk through that latter option: setting up a pair of CentOS 6 machines to serve as oVirt virtualization hosts that together provide the NFS storage required for the Hosted Engine feature, using Gluster for the replicated NFS storage and CTDB to provide a virtual IP address mount point for it.
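In rough outline, the storage half of that recipe looks like the sketch below: a two-way replicated Gluster volume served over Gluster’s built-in NFS server, mounted via the CTDB-managed virtual IP. Hostnames, brick paths, and addresses are all illustrative; the full walkthrough has the real details:

```
# Create and start a replica-2 volume from a brick on each host
# (hostnames and brick paths are examples).
gluster volume create engine replica 2 \
  host1.example.com:/gluster/engine/brick \
  host2.example.com:/gluster/engine/brick
gluster volume start engine

# Mount over NFS (Gluster's NFS server speaks NFSv3 only) using the
# CTDB virtual IP, so either host can fail without breaking the mount.
mount -t nfs -o vers=3 192.168.1.200:/engine /mnt/engine
```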

Continue reading

Up and Running with oVirt 3.4

Last week, the oVirt Project delivered a new version of its open source virtualization management system, complete with a feature I’ve eagerly awaited for the past two years. The feature, called Hosted Engine, enables oVirt admins to host the system’s management server (aka the engine) on one of the virtualization hosts it manages.

While oVirt was designed to run across separate management and virtualization hosts, it’s been possible from early on (version 3.0) to hack up a machine to serve both roles. In subsequent releases, the project approved and refined this installation option into an easy-to-use All-in-One (AIO) installation plugin.

The problem with AIO is that it leaves you with one of your most important workloads (the oVirt engine) stuck running on a single piece of hardware, where it can’t easily be moved around: a very un-virt scenario. Hosted Engine gives those of us interested in getting oVirt rolling on a single server a new deployment option, and one that promises to scale out more nicely than is possible with the AIO plugin.

In this post, I’m going to walk through the installation and first steps of a basic oVirt install using the Hosted Engine feature.

Continue reading

RDO, Gluster, oVirt Test Days this Month, and Thoughts on Fedora.next

In the open source software world, every day can be a test day, but there’s plenty to be gained when community members converge on an IRC channel to nick themselves on a project’s cutting edge. This month, there’s a handful of test days on tap:

RDO: February 4th & 5th

Following the OpenStack Icehouse milestone 2 release, the RDO project is holding a pair of test days on February 4th and 5th. For more information, and to indicate that you’ll be participating, head to the test day page on the RDO wiki.

Gluster: February 14th – 16th

The beta3 packages for Gluster 3.5 are expected to hit FTP this week, with a GlusterFest testing weekend to follow. This one is still somewhat TBD, so check the Gluster project site and mailing lists for updates and confirmation.

oVirt: February 11th and February 19th

oVirt 3.4 is nigh, with a projected general availability date of February 24th.

Leading up to the final 3.4 release, there’s a pair of test days scheduled to shake out as many issues as possible. For more information, see the 3.4 test days page on the oVirt project wiki.

Fedora QA Tips & Fedora.next

The Fedora Project isn’t holding any test days for February, due in part to a longer-than-normal lead-in period for Fedora.next, an umbrella term for the "shape of Fedora in the post-F20 future."

However, Fedora QA titan Adam Williamson offered up a great list of suggested testing activities during the pre-F21 lull that’s well worth reading and acting on.

And, speaking of Fedora.next and Adam Williamson, Adam wrote a very insightful post about the new initiative and Fedora’s near future, based on the recent discussions of these matters in the Fedora community.

CentOS SIG and Variant Activity

The CentOS Project is increasing its efforts to empower contributors to produce and collaborate on new CentOS Variants, in which groups of contributors combine the CentOS core with newer or otherwise custom components to suit that group’s needs.

Xen4CentOS, which combines CentOS 6 with components from the Xen project and the "longterm maintenance" release of the Linux kernel, is an example of an existing variant project. For more on variants, refer to the CentOS Project site and the CentOS and Variants section of our FAQ.

The contributor groups behind variants are called Special Interest Groups. For more on SIGs, refer to the CentOS Project wiki.

The CentOS Project put out a call for the formation of new SIGs and variants last week, and has fielded a healthy response from the community. Several projects have expressed interest in forming a Cloud SIG, and there’s been interest in Web Hosting, Documentation, and other SIG themes, as well.

If you’re interested in proposing a SIG or variant, or would like to learn more, drop a line to centos-devel@centos.org, ask on IRC in #centos on the Freenode network, or tune in next week for a chat about establishing the Cloud SIG at the CentOS Office Hours session on 23 January 2014 at 16:00 UTC.

Below is a rundown of the new variant and SIG activity on the centos-devel mailing list since last week:

Cloud SIG and Variant Discussion

Cloud-Infra SIG creation request

We’re looking to create an easy to use distribution of RDO/OpenStack built on CentOS. Our understanding is that we need to first create a SIG and then we’re able to create 1 or more variants.

What we’d like to do:

  • Provide all the dependencies that are either not in base CentOS or are too old in the base CentOS in a single location (maybe a distinct yum repo)

  • Be able to build and sign packages needed to run RDO/OpenStack within the CentOS infrastructure

  • Be able to generate a LiveCD in the CentOS infrastructure that allows people to get up and running quickly.

  • Provide install media for people that do not want to use a LiveCD.

— Mike Burns on behalf of The RDO Team

OpenNebula Variant and Cloud SIG proposals

We have been involved with CentOS for the past year making a stable Cloud Management Platform [2]. I would like to hereby propose a new CentOS variant, namely the OpenNebula variant.

This variant would add three roles to the CentOS installation:

  • OpenNebula Frontend

  • OpenNebula Node KVM

  • OpenNebula Node Xen

— Jaime Melis from the OpenNebula project

Cloud SIG interest from CloudStack

Reading the call for SIGs, there is definitely interest from CloudStack to participate and create a CloudStack CentOS variant for both instances and head/hypervisor nodes. Our default image template is already a CentOS template and our best quick start guide is based on CentOS. We also have a community-run yum repo for all our packages.

I see interest to create a Cloud Image for CloudStack clouds as well as creating variants for our management server and our hypervisor setup.

— Sebastien from the Apache CloudStack project

A formal request to create a Eucalyptus Special Interest Group

…I would like to formally request the formation of a Eucalyptus SIG…

We build Eucalyptus, a cloud infrastructure application, around CentOS and have for 3 years now. We have an installer that is derived from Anaconda. We’ve sponsored CentOS events, and we deliver CentOS images to our users. Our packages are currently in a standalone repository, but we would be happy to merge these into whatever CentOS repository emerges (EPEL? Some new version of EPEL? CentOS core itself? I’m unclear on where this sits currently.)

— Greg DeKoenigsberg from Eucalyptus

oVirt and CentOS Cloud SIG

Cloud SIG seems very relevant to oVirt. We’re looking forward to assisting with oVirt support as well. We have some CentOS fans in the oVirt community, so this should work well for everyone.

Currently we have our live oVirt [1] built using CentOS. So having you guys reviewing it would be beneficial.

For the standard el platform, we need some help with dependencies. Here’s a short list which we would like to see go in over & above 6.5:

  • qemu-kvm compiled with RHEV flags

  • Support of librbd, libgfapi in qemu-kvm & libvirt, including sanlock & kernel modules

  • Network namespaces, VXLAN, GRE support in the IP stack (kernel through iproute, dnsmasq, etc)

  • The cloud-init version being used in RHEL 6.5.

  • Support of cgroups

  • selinux policies

— Doron Fediuck from oVirt

Unified Cloud SIG

There seems to be some consensus in various different threads that what we need is a single consolidated Cloud SIG effort, and decide, over time, if it’s sensible to split into project-specific SIGs. Easier to start consolidated than try to figure out how to merge later.

— Rich Bowen from Red Hat

Other Variants and SIGs under Discussion

NethServer as CentOS Variant and SME SIG Proposal

We’d like to share with you our experience and our ideas for future developments. You could think of NethServer as CentOS with some extra packages, particularly a powerful and extensible web interface that simplifies common administration tasks. NethServer is for the sysadmin who appreciates the effectiveness of a user interface that saves time compared to direct configuration-file modification, and for users who want to approach CentOS without having Linux skills.

— Alessio Fattorini from NethServer

Create the CentOS Hosting SIG

One of the proposed Future SIGs (http://wiki.centos.org/SpecialInterestGroup) is the Hosting (or "Web Hosters") SIG. Since Web Hosters are one of the key and core users of CentOS, this seems like a SIG that should be started sooner, rather than later 🙂

I’d like to propose that such a SIG be started.

— Jim Jagielski

Documentation SIG

Has there been any interest or progress in the suggested Documentation SIG?

— Philip Mather

VoIP SIG

I’d like to put my hand up to be part of the VoIP SIG.

After mentioning this in IRC, I’ve also had two other people contact me privately (puzzled and JHogarth) with their interest too.

Currently we (FreePBX) build ‘FreePBX Distro’, which is an up-to-date CentOS distro, with a couple of known-broken packages upgraded (drbd, pacemaker) or replaced (asterisk).

— Rob Thomas from FreePBX

ClearCenter Marketplace for CentOS Variant

ClearCenter and ClearFoundation are interested in starting a CentOS variant called ‘ClearCenter Marketplace for CentOS’. This will allow various CentOS services, EPEL packages, and third-party applications to be easily managed and configured under CentOS.

We are also interested in being part of a SIG centered around ‘Server Management.’ Please let us know the next steps. We’d like to get started right away and we are willing to participate in the process of helping to set up shop. Let us know how we can serve.

— David Loper from ClearFoundation