Jenkins-as-Code: Creating Jenkins jobs with text, not clicks

This is the first in a series of posts on how we upped our Jenkins game by treating Jenkins jobs as code, rather than pointing-and-clicking to create jobs.

In this series, I’ll cover:

  • the problems we had as our Jenkins use scaled throughout the organization
  • the target conditions we wished to achieve
  • how we addressed those problems using the job-dsl-plugin along with some sugar on top
  • what the development workflow looks like
  • what a realistic set of jobs looks like for a sample project
  • the sugar we built on top of job-dsl-plugin
  • how we encouraged adoption of this approach across teams
  • how this can be used as a complement to the new pipeline jobs in Jenkins 2.x

Credit

Before I even get started, I want to be very clear that I had very little to do with any of this. On our team, these people did all the hard work, notably David G and Dan D for initial experiments; and especially Irina M for ultimately executing on the vision, to whom I am eternally grateful. And none of this would be possible without the heroes behind the job-dsl-plugin.

The problem

At work, we have:

  • 5 Jenkins servers, in 2 separate hosting environments; none of these can communicate with one another
  • Most Jenkins jobs run in 1 of those hosting environments, not both. But some run in both.
  • Hundreds of Jenkins jobs across all these environments
  • Of the jobs that run in only 1 environment, many run on multiple Jenkinses in that environment, with slight differences (e.g., in dev all projects deploy on SCM change; in prod, most projects deploy manually and prompt for a tag to deploy)
  • No control over the hosting/environment situation
  • Dozens of developers, working on dozens of projects
  • Significant growth in number of developers, demand for automation, and consequently number of Jenkins jobs
  • A very small group of folks who know Jenkins well

We also have:

  • A fantastic team of people
  • An organizational commitment, with leadership support, to solving the problems described above

For the jobs that were duplicated with slight differences per environment, we found ourselves doing a significant amount of redundant pointing-and-clicking in different Jenkinses. In addition, we were creating a lot of snowflake jobs that did similar things differently, because of silos, skill gaps, and an absence of consistency and standards. We were witnessing job configuration drift both between environments and between teams.

In practice, it looked like this:

“Why does this deploy job do [Thing A] in dev, but [Thing A+] in prod?”

“Why does this app deploy [this way], but this other app which is structurally the same deploy [that way]?”

“Who wants to build this [some job useful everywhere] we need in all 5 environments?”

“What’s our policy for discarding old builds? Because these jobs retain for 30 days, these for 50 builds, and most just retain forever.”

“Why do these jobs use Extended email, and these use plain email?”

“I really, really wish every job would have failure claiming turned on by default. Why the hell is that an option, anyways?”

And on and on. In other words, we accumulated a lot of organizational deployment technical debt, and we were not happy.

The solution: job-dsl-plugin

I’ll spare you the history and cut to where we landed. We realized we couldn’t solve the multi-environment problem; that is our infrastructure reality. And our automations team isn’t big enough to police hundreds of Jenkins jobs across multiple environments and turn into the consistency-enforcement team, nor would we want it to. We wanted to continue to empower all developers to use Jenkins, and we wanted to satisfy our own needs for increased consistency. We wanted to make it easy to do the right thing. After a several-month discovery phase investigating solutions to the problems above, we ended up adopting an approach to creating Jenkins jobs centered on the job-dsl-plugin.

This enabled us to:

  1. use text to create Jenkins jobs
  2. store those jobs in source control
  3. easily code necessary differences per environment
  4. more easily see and eradicate unnecessary differences in jobs across environments
  5. easily create these jobs in multiple Jenkinses, with a bit of config
  6. “make it easy to do the right thing”… providing the consistent defaults we wanted, for free
  7. simplify the small handful of jobs where we wanted “the one and only one way to do this thing”
  8. foster knowledge sharing and discovery for job configuration
  9. perform peer review of Jenkins job configuration

In short, we treat our Jenkins jobs as configuration-as-code.

Real-world usage

Some really high-level info just to make what comes below grokkable until I get to the nitty-gritty details:

  1. Jobs are configured in text, using Groovy. No, you do not need to become a Groovy expert, retool, or learn a whole new language. The syntax is very basic, and the API viewer makes it trivial to copy/paste snippets for job configuration.
  2. These jobs, contained in one or more .groovy files, are kept in source control.
  3. One or more “seed jobs” are manually configured to pull those .groovy files from source control and then “process” them, turning the Groovy text into Jenkins jobs (truth: we even automate the creation of seed jobs; more in a future post).
  4. Nearly all of that happens via the job-dsl-plugin; the only exception is creating the seed jobs.
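
To make the seed-job idea concrete: a seed job can itself be expressed in job-dsl (only the very first one has to be clicked together by hand). This is a minimal sketch, and the repo URL, polling schedule, and script path are all hypothetical:

```groovy
// Hypothetical seed job, expressed in job-dsl itself.
// It polls SCM and processes every .groovy job script in the repo.
job('seed-job') {
    scm {
        git('https://github.com/example-org/jenkins-jobs.git')  // made-up repo
    }
    triggers {
        scm('H/15 * * * *')  // poll roughly every 15 minutes
    }
    steps {
        dsl {
            external('jobs/**/*.groovy')  // the job scripts kept in source control
            removeAction('DELETE')        // delete jobs whose scripts were removed
        }
    }
}
```

The `removeAction('DELETE')` bit is what makes deleting a job as simple as deleting its script from the repo.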

I also want to mention now — and I’ll repeat this a lot — that the job-dsl API viewer is pure friggin gold: https://jenkinsci.github.io/job-dsl-plugin/

Apologies in advance for starting a few steps ahead and leaving out a lot of hand-wavey stuff for now. I want to begin with the end in mind. I’ll fill in all the gaps later, I promise. As you’re reading along wondering “What’s this Builder stuff? How do these actually turn into Jenkins jobs?”, trust me, I’ll fill it all in.

I’ll start with the dead simplest job you can create with job-dsl. This is, say, step 0. Then I’ll do what everyone hates and rocket ahead to step 10.

View the code on Gist.

When the seed job runs and pulls that config from SCM, it’ll create a job named “example-job-from-job-dsl”, pulling from a github repo, triggered by a push, with a gradle step, and archiving artifacts.
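
A minimal job-dsl script along those lines looks roughly like this; the repo name, Gradle task, and artifact path are placeholder values, not the actual Gist contents:

```groovy
// Sketch of the simplest useful job-dsl job; names below are placeholders.
job('example-job-from-job-dsl') {
    scm {
        github('example-org/example-project')  // hypothetical GitHub repo
    }
    triggers {
        githubPush()  // build when a push arrives
    }
    steps {
        gradle('clean build')
    }
    publishers {
        archiveArtifacts('build/libs/*.jar')
    }
}
```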

Now, truth be told, at work we don’t use straight up “job” but instead have Builders that wrap job, and add some useful defaults for us that we want on all our jobs.

Here’s a fairly representative sample of what a simple Jenkins job looks like for us, in code. Ignore the “BaseJobBuilder” business for now, as it’s just some sugar on top of job-dsl-plugin that adds some sane (for us) defaults:

View the code on Gist.

When the seed job for this job runs, it results in a Jenkins job named “operations-jira-restart”, configured to pull from a git repo, with a shell step of “fab restart_jira”. The “BaseJobBuilder” bits add some other things that we want to exist in all of our jobs (log rotation, failure claiming, and so on).
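
For illustration only, here is the rough shape such a script might take. I’ll cover the real Builder internals in a later post; the constructor arguments and repo URL below are entirely made up:

```groovy
// Hypothetical sketch: BaseJobBuilder wraps job-dsl's job() and layers on
// our defaults (log rotation, failure claiming, and so on).
new BaseJobBuilder(
    name: 'operations-jira-restart',
    description: 'Restarts Jira via fabric',   // invented description
).build(this).with {
    scm {
        git('https://github.example.com/operations/jira.git')  // made-up repo
    }
    steps {
        shell('fab restart_jira')
    }
}
```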

Here’s another, using a different Builder with a bit more config added: the “SiteMonitorJobBuilder”.

View the code on Gist.

This crazy job (it pains me that it even needs to exist in our environments) will result in a Jenkins job named “jenkins-outbound-connectivity-check”, which we have configured in every single one of our Jenkinses. It runs hourly, makes HTTP requests against configured URLs, and pulls from an external and an internal GitHub repo, to confirm that the Jenkins instance can talk to all the things it should talk to.
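
Again, purely illustrative (SiteMonitorJobBuilder’s interface, the URLs, and the repos here are invented):

```groovy
// Invented sketch of SiteMonitorJobBuilder usage; not the real interface.
new SiteMonitorJobBuilder(
    name: 'jenkins-outbound-connectivity-check',
    urls: ['https://github.com', 'https://github.example.com'],  // endpoints to probe
).build(this).with {
    triggers {
        cron('H * * * *')  // hourly
    }
    steps {
        // confirm we can reach both the external and the internal GitHub
        shell('git ls-remote https://github.com/example-org/ping.git HEAD')
        shell('git ls-remote https://github.example.com/ops/ping.git HEAD')
    }
}
```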

I included this example because it demonstrates how easy it is for us now to solve one of the problems above, namely, how to change a job easily that runs in multiple Jenkinses. If we want to change how this connectivity check runs — or, heck, even delete it entirely — we just change it in code and push to SCM. The seed job responsible for this will run, and update the job in all our Jenkinses.

Next up: The job-dsl foundation and the sugar on top

Now that I’ve covered the problems we needed to solve, and a very high level look at our solution, I’ll go more in depth in the next post, covering the job-dsl-plugin and the Builders we’ve added on top of it.

Relevant links


We're in a revolution

We are in the middle of a political revolution, but it's not the one Bernie Sanders called for. 

Instead our political system has re-formed around the limits of Twitter and cable news. 

It didn't start with Twitter. And it won't end with Twitter.

Of course Trump rose to the top, he's the master of the mindless soundbite, and that's what Twitter excels at. 

Now that journalism has figured out Trump, they just repeat the question, over and over, and he responds with the same meaningless words. Let's see what comes after this. But this is new behavior. Politics never worked like this before.

Hopefully historians of politics and journalism are keeping careful notes of the transformation that's happening now. You want change. Keep your eyes and ears open. 


About and Timeline

Introduction

A bit over two months ago, I started a crowdfunding campaign for qutebrowser, with the goal of working full-time on adding QtWebEngine support to it, which will bring more stability, security and features.

I asked for 3000€ to fund a month of full-time work before starting my studies in September. The campaign took off more than I'd have ever imagined and was almost funded in the first 24h.

At the end of the campaign, I got two months of full-time work funded. I'm now close to starting those awesome two months and have set up this blog as a work log for what I'm doing, inspired by that of the git annex assistant.

I also submitted this blog to planet python, planet pytest and planet Qt - if you're reading this via one of those, fear not: I have dedicated tags for them, and will only tag posts which actually seem relevant, so you won't see daily posts there.

Timeline

My full-time work is planned to start tomorrow. I have some other obligations until September, so there will be some days in between where I won't be working on qutebrowser, but other things related to either Python or my studies.

This is the tentative schedule:

  • June 6th - 10th: qutebrowser (days 1-5)
  • June 13th - 15th: qutebrowser (days 6-8)
  • June 16th - 29th I'll be in Freiburg for the development sprint on pytest (which qutebrowser is using too), and giving a 3-day training for it.
  • June 30th - July 1st: qutebrowser (days 9-10)
  • July 4th - 8th: qutebrowser (days 11-15)
  • July 11th - 15th: qutebrowser (days 16-20)
  • July 17th - 24th I'll be in Bilbao at EuroPython giving another training about pytest and hopefully learning a lot in all the awesome talks.
  • July 25th - 29th: qutebrowser (days 21-25)
  • August 1st - 5th: qutebrowser (days 26-30)
  • August 8th - 11th: qutebrowser (days 31-34)
  • August 12th I'll be travelling to Cologne for Evoke, a demoparty I'm visiting every year (let me point out this has nothing to do with political demos, go check the wikipedia article :P).
  • August 15th - 19th: qutebrowser (days 35-39)
  • August 22nd - September 2nd I'll be busy with a math preparation course at the university I'll be going to.
  • September 5th - 9th: qutebrowser (day 40 and some buffer)

Plans

The work required to get QtWebEngine to run can roughly be divided into four steps:

  • Preparation: Writing end-to-end tests for all important features, merging some pull requests which are still open, doing a last release without any QtWebEngine support at all, and organizing/shipping t-shirts/stickers for the crowdfunding. A lot of this already happened over the past few months, but I still expect this to take the first few days.
  • Refactoring: Since I plan to keep QtWebKit support for now, I'll refactor the current code so there's a clear abstraction layer over all backend-specific code. This will also make it easier to add support for a new backend (say, Servo) in the future. Since this will probably break a lot in the initial phase, this work will happen in a separate branch. As soon as the current QtWebKit backend works fine again, that'll be merged and QtWebEngine support will be in the main branch behind a feature switch.
  • Basic browsing: The next step is to get basic browsing with --backend webengine working. This means you'll already be able to surf, but things like adblocking, settings, automatic insert mode, downloads or hints will show an error or simply not work yet.
  • Everything else: All current features which are implementable with QtWebEngine will work, others will be clearly disabled (a few obscure settings might be missing with --backend webengine for example). See the respective issue for a breakdown of features which will probably require some extra work.

Frequently asked questions

When will I be able to use QtWebEngine?:

This depends on what features you need, and how fast I'll get them to work. Estimating how long the steps outlined above will take is quite difficult, but I hope you'll have something to try after the first week or so.

Also note you'll need a fairly recent Qt (5.6, maybe even 5.7, which isn't released yet), at least at the beginning, because QtWebEngine is missing some important features in older versions.

Is QtWebEngine ready?:

It certainly wasn't when it was first released with Qt 5.4 in December 2014.

That's also why I spent a lot of time writing tests for existing features instead of trying to start working on QtWebEngine support.

Nowadays, with Qt 5.5/5.6/5.7, things certainly look better, and I believe I'll be able to implement all important features. However, I'll need to rewrite some code in JavaScript, as there's no C++ API (and thus no Python API) for all the functionality QtWebKit had.

Long story short: it's by no means a drop-in replacement (as initially claimed by Qt), but with a recent enough QtWebEngine, most users won't notice the functionality I can't implement at all, and things are getting better and better.

How is this blog made?:

Using spacemacs, writing ReStructuredText, storing it in a git repo, processing it with Pelican, the Monospace theme and the thumbnailer plugin.

Definitely a better workflow than WordPress ;)


Introducing The DWF project: Vulnerability reporting done the open source way


At Red Hat, we strongly believe that IT innovation is born of open, transparent and collaborative communities. From cloud computing to Linux containers, we believe the datacenter of the future isn’t built in a proprietary lab, but in public code repositories and community project sites. Our viewpoint, however, isn’t limited to just building open source offerings - it also applies to providing security for these technologies.


Information Technology Specialist (Applications Software) (Software Developer)

Job Announcement Number: 16-CFPB-427-P
Location Name: Washington DC, District of Columbia; Location Negotiable After Selection, United States
Agency: Consumer Financial Protection Bureau
Occupation Code: Information Technology Management
Pay Plan: CN
Appointment Duration: Term (not to exceed 2 years; may be extended up to 2 additional years)
Opening Date: Thursday, June 2, 2016
Closing Date: Thursday, June 9, 2016
Job Status: Term
Salary: $78,107.00 to $171,985.00 / Per Year
Pay Grade(s): 52 to 53
Who May Apply: U.S. citizens; no prior Federal experience is required.
Job Summary: This position is located in the Consumer Financial Protection Bureau (CFPB), office of the Chief Operations Officer, Chief Information Officer (CIO). The CIO is working to redesign technology in government with a focus on elegant design, open source, open data, and current and cutting-edge software technologies and approaches. There are a total of three vacancies. Incumbents will have the option to be duty stationed at CFPB headquarters in Washington, DC or at their home of record. The pay employees receive will depend on their assigned duty location. An employee's pay may include a geographical pay differential depending on that duty location. The pay shown for this job depicts the range covering the LOWEST LEVEL WITHOUT geographical pay differential up to the HIGHEST LEVEL INCLUDING the highest geographical differential.

Don’t Mangle that Language, Mango It!


Whether you’re brushing up on a second language, practicing English as a second language or just learning a few phrases for a summer trip abroad, the library has got you covered. Most of our customers know the library offers books, CDs and DVDs to help you learn a foreign language. But you can also use our powerful language learning tool, Mango Languages, without ever leaving your home. All you need is your free Fairfax County Public Library card to get started with more than 70 online language courses. Each lesson covers practical, real-life conversations presented by native speakers in a clear and simple format.

Accessing Mango is easy:

  1. Go to the library's website at http://www.fairfaxcounty.gov/library.
  2. Select Online Resources in the menu on the left.
  3. Select the orange Languages icon in the e-Research section.
  4. Select Log On and enter your valid Fairfax County Public Library card number.
  5. Create a profile and log in to access the Mango language course of your choice.

You can use Mango as a guest, but you’ll need to create a profile and log in to track your progress through completed lessons. Once you’ve created an account, Mango is accessible 24/7 and on the go as well. Download the Mango mobile app and learn a language whenever and wherever you choose. Soon you’ll be saying thank you in a whole new way.

-Rebecca Wolff, Centreville Library