Red Hat's mission: To be the catalyst in communities of customers, contributors, and partners creating better technology the open source way.

Red Hat Community News

oVirt New Year's Resolution: Building Better Community Tools

When I was a journalist, one of the things that inevitably happened every year was the year-end crush of stories that highlighted the "best of" the closing year and the predictions for the year ahead.

So who am I to break with tradition?

My work with the oVirt community began exactly 385 days ago, and in that time, there have been two point releases for this virtual datacenter management platform: 3.4 and 3.5. Each release has gone off without a major hitch, although each was delayed by blocker bugs that affected major new features for that particular release.

Downloads of the software continue to climb, with an average of 4,633 downloads of oVirt Engine per month from the main resources.ovirt.org download site.

The development community is active as well. Data as of yesterday indicates the number of commits is up eight percent over the past 365 days, though the number of total developers is down three percent in the same period. On the user side, metrics from our mailing lists show mailing list posters up by 13% in the past 365 days.

That last stat is both good and not-so good, in that it shows there are more people interested in the oVirt project, but they still may be too reliant on the mailing list (and the #ovirt IRC channel, according to the 10% growth there). This reliance is bittersweet: oVirt as a community has historically been very happy with the response rate and interaction between developers and users. But this growth may also mean that other forms of help, such as documentation and forums, are lacking. If documentation is clear and concise, one line of reasoning goes, then there's less of a need to ask a question on a mailing list.

This leads into the first prediction for oVirt's new year: We are going to get our documentation act together. There will be input from anyone in the community who wants to contribute, but right now the basic plan is to shift away from ovirt.org's MediaWiki platform to something static and version-managed. There is more than one idea on how to go about this, but one option is to hold the actual content for the site in a distributed version control system (such as a git repository) in a format that is easy to edit and consumable by oVirt's downstream commercial products, such as Red Hat Enterprise Virtualization and Wind River Open Virtualization.

oVirt is fortunate to be able to tap into the resources of Shaun McCance, a documentation expert with Red Hat who is helping us figure out what exactly will work for oVirt moving forward. For instance, one of the benefits of MediaWiki is that it is easily editable (that's the whole point of a wiki, after all). So how do we get the ease of access to content that MediaWiki provides into a system that relies on version-controlled files in a format such as AsciiDoc or Markdown (just to name two possibilities)? This is the kind of question Shaun will be helping us with, and I personally am excited to have someone of his caliber on board.
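To make the idea concrete, here is a minimal sketch (not an official oVirt tool) of what a version-controlled documentation build could look like: a short Python script that walks a checked-out git repository of Markdown files and renders them into static HTML pages. The directory layout and file names are assumptions for illustration, and it relies on the third-party markdown package.

    import sys
    from pathlib import Path

    import markdown  # third-party package: pip install markdown

    def build_site(source_dir, output_dir):
        """Render every Markdown file under source_dir to HTML in output_dir."""
        src = Path(source_dir)
        out = Path(output_dir)
        for md_file in src.rglob("*.md"):
            html_body = markdown.markdown(md_file.read_text(encoding="utf-8"))
            target = out / md_file.relative_to(src).with_suffix(".html")
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_text(
                "<html><body>\n%s\n</body></html>\n" % html_body,
                encoding="utf-8",
            )
            print("rendered", md_file, "->", target)

    if __name__ == "__main__":
        # e.g. python build_docs.py docs/ public_html/  (paths are hypothetical)
        build_site(sys.argv[1], sys.argv[2])

In practice the community would likely reach for an existing static site generator rather than a hand-rolled script, but the workflow is the same either way: edit a file, commit it, and let the site rebuild from the repository.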

Such changes will probably bring changes to the oVirt web site, too. Many users have lamented the reliance on "old-school" mailing lists, preferring instead the ease and searchability of forums. This could be a good opportunity, then, to add something like an "Ask" forum, such as the one over at Project Atomic. There will also be a new oVirt community blog, possibly in a "Planet" format.

In general, the oVirt community this past year has been all about identifying our strengths and building an infrastructure that plays to them. Look for more workshops, more use-case studies, and more specific ways to deploy and manage oVirt, including ways to integrate oVirt with other open source tools, such as RDO, ManageIQ, Gluster, Ceph, and Docker, to name a few.

oVirt is great software, and it will continue to have a great community. Moving into 2015, there will be new and better tools for that community to use.

We look forward to having you continue the journey.

View article »

CentOS Project Rolling Builds

Something has been in the works in the CentOS Project over the past few months: what we're calling 'rolling builds'.

Generally a rolling build is where a software project makes regular builds of the latest code (for example, every month, week, or day). Typically all the updates or changes to the software are included in the build.

For CentOS Linux, this means rolling in all the latest updates from upstream Red Hat Enterprise Linux for each rolling build. The CentOS Project produces installable images (ISO files) of CentOS Linux, generic cloud images for popular service providers, the official Docker image available via the Docker Hub, and an image for use with Project Atomic.

Project leader Karanbir Singh described it this way in his announcement:

CentOS Linux rolling builds are point in time snapshot media rebuild from original release time, to include all updates pushed to mirror.centos.org's repositories. This includes all security, bugfix, enhancement and general updates for CentOS Linux. Machines installed from this media will have all these updates pre-included and will look no different when compared with machines installed with older media that have been yum updated to the same point in time. All rpm/yum repos remain on mirror.centos.org with no changes in either layout or content.

The aim is to update and release a new set of these files at the end of every month. There may also be interim and test builds, as well as builds released in response to a security issue, such as the recent Heartbleed and Shellshock vulnerabilities.

As the release cycles progress, we'll be pulling in more images, such as CentOS Linux 7 live media, and probably future releases coming from project special interest groups (SIGs). The SIGs provide additional software on top of the CentOS Linux platform, which may include changing out components in the base distro. It will be a great benefit to these SIGs and their user communities to have rolling builds of this software, as it is often representative of leading-edge project work that many are interested in using, such as OpenStack and software-defined storage and networking.

As it stands now, these rolling builds are not the same as the nightly snapshot builds common in some open source projects; the CentOS Project will often need a few days to test before release. Regardless of when the release actually happens (at the end of one month or the start of another), the name and datestamp on the build will reflect the month in which it was built.

In this second month of rolling builds, the following images were included:

Follow CentOS on Twitter at @CentOS and learn more about the dojos by following @CentOSEvents. You can also keep up with the CentOS community on G+, Facebook, YouTube, and in IRC.

View article »

CentOS Dojo Brussels CfP and Bangalore Dojo Report

Planning for the CentOS Dojo Brussels event next month is underway and organizers are still lining up talks. The dojo will be held on January 30th at IBM Client Center Brussels (yes, that's right before FOSDEM kicks off). If you're interested in speaking, contact the centos-promo@centos.org mailing list. Keep an eye on the CentOS wiki events page to see what other 2015 events are in the works.

The Red Hat Bangalore office hosted the first CentOS Dojo in India last month. Lalatendu Mohanty helped organize the dojo and he posted an event report on his blog, so check it out.

Although the Bangalore CentOS Dojo talks were not recorded, the project's YouTube channel does have recordings from other 2014 events. In addition to the YouTube channel, you can follow CentOS on Twitter at @CentOS and learn more about the dojos by following @CentOSEvents. You can also keep up with the CentOS community on G+ and Facebook, and in IRC.

View article »

Report: First CentOS Dojo in the Netherlands

Hosted by Schuberg Philis at their facilities near Amsterdam, the first CentOS Dojo in the Netherlands took place on December 2nd. Virtually everyone who had RSVPed was there from the start. The crowd was an interesting, diverse mix of people with various backgrounds from across the Netherlands; clearly CentOS caters to a wide audience. Fueled by fresh coffee, orange juice, and cookies, everybody eased into socializing in a relaxed atmosphere.

When it was time to start, Karanbir Singh did the honors of kicking off the first Dutch Dojo. He told a captivated audience about how he got involved in the CentOS project in the early days, how he became the Project Leader, how the project evolved over the years, and then finished with how the project joined forces with Red Hat.

After Karanbir's opening, Chmouel Boudjnah talked about Docker integration with OpenStack. Chmouel started by explaining how Docker integrates with OpenStack on an architectural level and how it all fits together. Next he showed a live demo, installing DevStack with Docker integration and deploying a container. Although the Docker Hypervisor project still lives out-of-tree, Chmouel's demo showed that it holds a lot of promise.

Next up was Vincent Batts, who gave us an excellent introduction to Docker. Although everybody in the audience had heard of Docker, far fewer had hands-on experience with it, so Vincent's presentation was spot-on. He explained the technical details of Docker, showed what a Dockerfile looks like, how to build a Docker image, and how to launch a Docker container, and finished by showing how information is exchanged between Docker containers and the host. After Vincent's presentation, attendees broke for an excellent lunch and more socializing.

After lunch, I gave my talk about the history, present status, and future of high availability in OpenStack, focusing on the networking component, which has a major impact on HA. I explained how the networking part evolved, got replaced, and evolved some more; its HA capabilities; and what to expect in the near future. Given the progress made in Juno and the upcoming Kilo release, deploying the OpenStack Juno release is advisable if you want to benefit from networking high-availability features.

Next, Karanbir talked about the new CentOS community infrastructure. He told us about the build systems currently in place, the amazing amount of builds and tests that are performed, the expansion of the infrastructure with public git repositories and the community continuous integration environment, and how the community is most welcome to contribute. The expanded CentOS Community Infrastructure will definitely help the Special Interest Groups (SIGs) with making their deliverables available to the community at large.

Andreas Thienemann gave the last presentation of the day. He offered interesting arguments that Docker containers are a step backwards, and he invited everybody to share their thoughts. A lively discussion about the (non-)merits of Docker ensued, with people on both sides of the fence. Although there was agreement that Docker is a great tool for development, the same could not be said about using Docker at scale for critical workloads in production. The recent CoreOS Rocket announcement left most attendees taking a wait-and-see approach.

Overall it was a great day, with interesting people in a nice location. A big thanks to both Schuberg Philis and Red Hat for sponsoring the first Dutch Dojo, to the speakers for presenting, and to the Dutch CentOS community members for participating.

Follow @CentOS on Twitter for the latest project news and @CentOSEvents to stay updated on Dojos.

View article »

Open Source Is Just Another Way of Doing Good Business

Red Hat has been doing what it's doing for quite a while now, and so far, it seems to be working out pretty well. Every once in a while, though, along comes a little independent validation about the viability of open source in the business world that deserves to be called out.

The most recent example is from The New York Times, specifically an article that (rightly) highlights the need for data analytics and shepherding in the big data arena. As you might have guessed, I am personally in agreement with this point, because for a long time in big data there has been a whole lot of data gathering and not a lot of data analysis. What analysis there has been has been rife with inane conclusions ("consumers love Product X!") and the occasional cool data analysis that makes you actually think about the world around you.

Given that it's human beings that are managing and looking at this data, that's to be expected, I suppose. We live in a world of mediocrity punctuated by flashes of brilliance. It's these flashes that tend to be remembered and have a legacy, and in this same article, there's an undercurrent that highlights – in a matter-of-fact tone – the brilliance of open source.

"Take Cask, a start-up in Silicon Valley founded in 2011, backed by leading venture capitalists and led by former Facebook and Yahoo engineers. In late September, the promising young company changed both its name and its business model — moving to supplying open-source software and trying to make money on technical support and consulting rather than on proprietary products."

This one paragraph is what made the point so well: just a nonchalant outline of what Cask, a Hadoop applications development company, has had to do to be nimble in the boomtown atmosphere of big data right now. No explanations beyond this paragraph of what open source is or why it is the spawn of demons/greatest thing since sliced bread. Just the facts, and how Cask is faring with its call to change course.

The article is not about Cask, mind you; it's about how companies can generate revenue around big data. But the mentions of other companies that rely on open source software in their business models, such as Hortonworks and Cloudera, only serve to make the point. Organizations that are building on open source technologies are the ones making money in this space, which is a far cry from the traditional proprietary "lock it down" path many startups have tried in the past.

In the course of rounding up successful and potentially successful companies in big data, Steve Lohr's piece has implicitly highlighted the benefits of open source at the same time. And it's the lack of fanfare that makes the point that much stronger.

View article »

Fedora 21 Makes Headlines

Fedora 21 officially rolled out on Tuesday, December 9th, and made a lot of headlines. Congratulations to the Fedora community on an exciting new release that's getting rave reviews. Word on the street is that Fedora 21 was well worth the wait.

To download Fedora 21 and see the official documentation, visit getfedora.org.

Here are a few of the articles that explain what you can expect with the latest (greatest) Fedora release:

And on YouTube:

Have you kicked the tires on Fedora 21 yet? Let us know what you think.

View article »

Fedora 21: Fedora Goes Atomic

This week, Fedora 21 (a.k.a. the release that must not be named) hit FTP mirrors everywhere, with a feature list led by a new organizational structure for the distribution. Fedora is now organized into three separate flavors: Workstation, Server, and Cloud.

Fedora's Cloud flavor is further divided into a "traditional" base image for deploying the distribution on your cloud of choice, and an Atomic Host image into which Fedora's team of cloud wranglers has herded a whole series of futuristic operating system technologies.

Applications: Fedora Atomic is built to host applications in docker containers, which provide a simple-to-use means of getting at all the workload-hosting goodness that's built into Linux, but that tends to require some assembly.
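As a rough illustration of how little ceremony is involved, here is a minimal sketch of launching a containerized command with the Docker SDK for Python. It assumes the docker Python package is installed and a Docker daemon is running locally; nothing here comes from the Fedora Atomic documentation itself.

    import docker  # Docker SDK for Python: pip install docker

    # Connect to the local Docker daemon using environment defaults.
    client = docker.from_env()

    # Run a short-lived Fedora container, capture its output, and clean it up.
    output = client.containers.run(
        "fedora",                       # image name (pulled if not present)
        "echo Hello from a container",  # command to run inside the container
        remove=True,                    # delete the container once it exits
    )
    print(output.decode().strip())

On an Atomic host the same workload would typically be managed by systemd or an orchestrator rather than launched by hand, but the container is the unit being shipped either way.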

Read More »

RDO Packaging

When we started the RDO project back in April of 2013, the main focus was on producing a distribution of OpenStack that made it easy to deploy on CentOS, Fedora, and Red Hat Enterprise Linux. While we put time into making it easy for the community around that distribution to grow and support itself, most of the technical work was done inside Red Hat, and there were parts of it that weren't very visible to the community.

It's time to prioritize opening up the RDO development process and making the technical governance of the project available to the entire community.

A month ago in Paris, at the OpenStack Summit, 40 or 50 RDO enthusiasts gathered to discuss the RDO community and what we can do to make it more inclusive. The number one thing that was asked for was more documentation around the process, and transparency into the CI results, so that everyone can see what's going on and know where they can jump in.

Read More »

FUEL Project Wins 2014 Manthan Award

Congratulations to the FUEL project community for winning a 2014 Manthan Award in the e-Localisation category last week. The Manthan Award is an annual award recognizing exceptional digital content creation in South Asia. Chandrakant Dhutadmal, a FUEL project core member, accepted the award on behalf of the project.

Initiated by Red Hat, the FUEL (Frequently Used Entries for Localization) Project is the largest repository of standard linguistic resources in the field of free and open source software. The FUEL project community works to create standard linguistic and technical resources, such as standardized terminology and translation style. The FUEL GILT Conference is the largest FOSS localization industry conference.

Learn more about the project in our recent report on the 2014 FUEL GILT Conference, which was held in India in November.

View article »

Foreman 1.7 Rolls Out

December brings a new major release of Foreman, the systems management tool incorporating provisioning and config management support, with a range of new features and fixes.

Smart class matchers are used to supply data to Puppet dynamically, based on host attributes or facts. They now offer better control over default values, plus the ability to merge hashes and arrays across matchers.
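To picture what merging across matchers means, here is a small Python sketch (purely conceptual, with hypothetical data, not Foreman code): each matcher contributes a hash, and instead of the single highest-priority match winning outright, all matching hashes are combined, with higher-priority matchers overriding individual keys.

    # Conceptual sketch of "merge across matchers"; matchers listed lowest priority first.
    matchers = [
        ("os=RedHat",              {"ntp_servers": ["0.pool.ntp.org"], "timezone": "UTC"}),
        ("domain=example.com",     {"ntp_servers": ["ntp.example.com"]}),
        ("fqdn=web01.example.com", {"timezone": "Europe/Amsterdam"}),
    ]

    def merged_value(matchers):
        """Combine every matching hash instead of returning only the best match."""
        result = {}
        for _condition, value in matchers:   # low priority first...
            result.update(value)             # ...so later (higher-priority) matchers override keys
        return result

    print(merged_value(matchers))
    # {'ntp_servers': ['ntp.example.com'], 'timezone': 'Europe/Amsterdam'}

Foreman's actual behavior (including how arrays are concatenated and how default values participate) is configurable and more involved; the sketch only shows the core idea that several matchers can contribute to a single parameter value.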

Read More »