BI-Ops – 3/5 – Build (CI)

BI-Ops – 1/5 – What we are trying to do
BI-Ops – 2/5 – Source Control (Git)
BI-Ops – 3/5 – Build (CI)
BI-Ops – 4/5 – Deploy DB, SSIS and SSAS (CD)
BI-Ops – 5/5 – Deploy PowerBI (CD)

Now that we store our glorious code in Git and make sure some poor trainee will be able to go over it in 10 years (and maybe even understand it, if we ever get around to documenting it fully), actually delivering it will help us justify our salaries better.

Azure Dev Ops

There, you have it. We use it. It’s awesome. Goodbye.

Ok, for those who want more than the TL;DR version, let’s dive into it a bit.

One package

My intent when diving into CI/CD, and in the end into what we now call BIOps, was to have our code bundled into a single package that moves from step to step in the deployment pipeline. One package that we can archive and pick off the shelf if we need to deploy it again. And one package so that we are sure that what we have in UAT is what ends up in production, after customer validation.

So, how do we build that package?

Pipelines

We build one pipeline per project.

We run the pipeline after completing the Pull Request from the develop branch into release. But what does the pipeline actually look like?
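Our pipelines are built in the classic editor, but for the curious, a minimal YAML sketch of the same trigger would look something like this (branch name taken from our setup; everything else is Azure Pipelines defaults):

```yaml
# Run the pipeline whenever a commit lands on release,
# i.e. right after the PR from develop is completed.
trigger:
  branches:
    include:
      - release
```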

Here is what our default DB, SSIS & SSAS pipeline looks like.

We run the pipeline on a hosted build server agent that we share with our C# friends. It has Visual Studio Build Tools with SQL Server Data Tools set up.

Getting the source code

We run off Azure DevOps Git, as discussed previously. Just select the team project, the repo for the project we are working on and the branch (release here as we just did a PR into it and will be going into UAT and Production afterwards).

We like to clean our directories just to prevent surprises. Our builds are not huge and we can afford to wait a few extra seconds as our code builds. And XKCD taught us that compilation was always a great time.
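For reference, here is a hypothetical YAML equivalent of that clean checkout (in the classic editor, “Clean” and “Tag sources” are checkboxes on the source step; the tagging option has no single-line YAML counterpart, so only the clean is shown):

```yaml
steps:
  # Fetch the repo and wipe the working directory first,
  # so leftovers from a previous build can't surprise us.
  - checkout: self
    clean: true
```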

We tag our sources so that we know where our code went afterwards. And we also go for badges because we like to show off that our code compiles.

Honestly, this step is pretty straightforward.

Agent Job

We just use default settings here, nothing really to see.

Building DBs

We don’t point to the VS solution file as some projects don’t build well (SSIS, I’m looking at you here). So we just select all the *.sqlproj files in the Database directory of the project to build all databases at once and generate one DACPAC for each.

MSBuild is on the latest version. We only have 2019 on the agent anyway. And we point to the x64 architecture because everything runs on x64 now. All the rest is default.

This step is going to generate one DACPAC per DB in the $(build.sourcesdirectory) folder on the agent. We’ll get back to this later when we get ready to package the artifact.
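A YAML sketch of that DB build step, under the assumption that the databases live in a Database directory as described (folder layout is our convention, not a requirement):

```yaml
steps:
  # Build every SQL database project found under Database/,
  # producing one DACPAC per *.sqlproj in $(Build.SourcesDirectory).
  - task: MSBuild@1
    inputs:
      solution: 'Database/**/*.sqlproj'
      msbuildVersion: 'latest'
      msbuildArchitecture: 'x64'
```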

Building SSIS

SSIS doesn’t build well using MSBuild; it forces us to use devenv.com instead. Rather than doing anything complex, we just grabbed the new and free Microsoft SSIS tasks from the Azure DevOps Marketplace.

We point to our .dtproj project file to build.

As we don’t have configurations for different environments we don’t specify anything for that.

And then we want the output to go directly to the $(Build.ArtifactStagingDirectory) directory, ready to be published.
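Sketched in YAML, the Marketplace SSIS build step might look like the following. The project path is a made-up example, and the exact task and input names may differ depending on the version of Microsoft’s SSIS DevOps Tools extension you install:

```yaml
steps:
  # Build the SSIS project via the Marketplace task (which wraps
  # devenv.com) and drop the output straight into the staging directory.
  - task: SSISBuild@1
    inputs:
      projectPath: 'SSIS/MyProject/MyProject.dtproj'   # hypothetical path
      outputPath: '$(Build.ArtifactStagingDirectory)/SSIS'
```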

Build SSAS

For SSAS, we use exactly the same step as for the DB, except that this time we point to the .smproj project file in the repo.

And now, we are done with the build tasks themselves. All the rest is preparation for packaging and cleanup.

Artifact creation

The SSIS task lets us select an output directory, so we used $(Build.ArtifactStagingDirectory). But that is not the case for the DBs and SSAS, so we need to move the files ourselves.

We created two copy tasks: one to pick up the DACPACs for the DBs and one for the slightly longer list of SSAS outputs, moving them from the default $(build.sourcesdirectory) output to our $(build.artifactstagingdirectory) artifact staging directory.

For SSAS, we use the following contents parameter:

**\bin\**\*.deploymenttargets
**\bin\**\*.asdatabase
**\bin\**\*.deploymentoptions

And as you probably noticed, each type of object gets its own folder at the target.
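The SSAS copy, sketched as a YAML Copy Files task (the contents patterns are the ones listed above; the SSAS subfolder name is our own convention):

```yaml
steps:
  # Gather the three SSAS deployment files from the bin output
  # and stage them in their own folder inside the artifact.
  - task: CopyFiles@2
    inputs:
      SourceFolder: '$(Build.SourcesDirectory)'
      Contents: |
        **\bin\**\*.deploymenttargets
        **\bin\**\*.asdatabase
        **\bin\**\*.deploymentoptions
      TargetFolder: '$(Build.ArtifactStagingDirectory)/SSAS'
```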

Once done, we use the default Publish Artifact task, publishing the contents of the staging directory and hosting it in Azure Pipelines, meaning in the database rather than on a shared folder. This just makes it easier to manage for us.
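In YAML, that default publish step would be roughly (the artifact name drop is the Azure DevOps default, not something we chose):

```yaml
steps:
  # Publish everything staged so far as one artifact,
  # hosted in Azure Pipelines ("Container") rather than a file share.
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'drop'
      publishLocation: 'Container'
```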

Finally, we have a Clean Agent task, also from the Azure DevOps Marketplace, that cleans up all directories to make sure our agent server doesn’t run out of space. Just being civil here, as it is a shared resource.

The Artifact

In the end, when running the build, we end up with one single package that you can browse and from which you can even download each individual element (just in case you aren’t sure it is real).

Build Naming

One last thing we changed is the default build name. We decided to simply reference it with the date and a number, as Azure DevOps also references the pipeline name. Here is our naming format:

$(date:yyyyMMdd)-b$(rev:rr)

We use bxx for the build number because we might rerun the build on the same date. We went for b rather than just a number because we do the same for releases: to prevent confusion, we end up with something like Pipeline-BuildDate-bxx-rxx to identify a release, differentiating the build iteration from the release iteration.
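In a YAML pipeline, the same build number format is set with the name property at the top of the file (the classic editor has an equivalent Options field):

```yaml
# Build number: date, then a two-digit run counter prefixed with "b",
# e.g. 20240115-b01; Rev resets when the rest of the name changes.
name: $(Date:yyyyMMdd)-b$(Rev:rr)
```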

Conclusion

I remember the first time our whole pipeline ran with success: we cheered. Now it’s just business as usual. But looking back, it solved so many issues for us:

  • We can save our artifacts in one common location.
  • We don’t need to document how to build anymore; the pipeline is self-explanatory.
  • People don’t need to spend time building. They just commit, PR, run the pipeline and go drink coffee/tea/water. By the time they are back, it’s done.
  • Newcomers don’t need to bother with how to compile their code so it’s ready to deploy with environment specifics in mind. They don’t even need to understand everything that happens under the hood, so they can focus on the business value first. That is really a huge step forward for young people who joined us.

This succession of tasks seems simple, but it really is at the core of our delivery cycle. I hope it is at the core of yours as well. And if it isn’t, I hope reading about our experience will help secure and smooth out your future deployments.
