Friday 28 December 2012

@JoinTable

I recently had a requirement to record who is ignoring whom on my website.

So, I have a Person class, and a Person can ignore other Persons.

That is a ManyToMany relationship, and thus requires a coupling table (a join table, in JPA terms).

In my case, there's the mm_usertable database table and a mm_ignore database table. The mm_ignore table is a classic coupling table in that it has only two fields and both of them are foreign keys to the mm_usertable.

In JPA the @JoinTable annotation is used specifically for these situations.

If more properties apply to the relationship (i.e. there are more fields in the coupling table), you have no choice but to turn the coupling table into a real JPA Entity. That is the case, for example, if you wish to indicate that a relationship has been deleted, or to describe the relationship further: family, work, etc.
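Back to the simple case. Here is a sketch of how the mapping ends up looking for me. The table names are the ones above; the two foreign-key column names (person_id and ignored_person_id) are just my placeholders here:

    import java.util.HashSet;
    import java.util.Set;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.JoinColumn;
    import javax.persistence.JoinTable;
    import javax.persistence.ManyToMany;
    import javax.persistence.Table;

    @Entity
    @Table(name = "mm_usertable")
    public class Person {

        @Id
        @GeneratedValue
        private Long id;

        // mm_ignore is the coupling table: two foreign keys, both
        // pointing into mm_usertable.
        @ManyToMany(targetEntity = Person.class)
        @JoinTable(name = "mm_ignore",
                   joinColumns = @JoinColumn(name = "person_id"),
                   inverseJoinColumns = @JoinColumn(name = "ignored_person_id"))
        private Set<Person> ignoredPersons = new HashSet<Person>();

        public Set<Person> getIgnoredPersons() {
            return ignoredPersons;
        }
    }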

Good Lord, I mean, there's more annotations here than anything else!

The targetEntity property seems very important; without it, the mapping won't work. Probably because generic collections are not reified: the type arguments are not available at runtime, since generics are implemented using erasure.
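A quick illustration of what erasure means in practice (reusing the Person class from the mapping above):

    import java.util.HashSet;
    import java.util.Set;

    public class ErasureDemo {
        public static void main(String[] args) {
            Set<Person> persons = new HashSet<Person>();
            Set<String> strings = new HashSet<String>();
            // The type arguments are gone at runtime: both sets are
            // instances of exactly the same class.
            System.out.println(persons.getClass() == strings.getClass()); // prints true
        }
    }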

Wednesday 5 December 2012

Continuous Delivery


On the 30th of November 2012, I was able to attend a training on Continuous Delivery, courtesy of Xebia. I received the offer as a member of the NLJUG.

It is the first training I've attended where they provide you with an entire OS image, so everyone can get started quickly. I guess they're professionals who have done this before. They're using VirtualBox for this. Of course, it does mean you have to download a 1.3 GB image beforehand.

The technologies involved include:
  • Subversion as version control system
  • Java for the programming (although no programming was actually done during the workshop)
  • Maven for building
  • Jenkins for executing automated build and packaging activities
  • FitNesse for automated testing
  • DeployIT for automated deploying
  • Apache as the low-level web server
  • JBoss as JEE application server
  • MySQL as the database
One note of criticism: the workshop mail mentioned working with Selenium, but it came up only once, and briefly at that, during the entire day.

VirtualBox


I was much impressed with VirtualBox. It enabled me to start without having to configure a lot of software on my native OS (Fedora 16 x86_64).


Bear in mind that you need the kernel headers installed, so the VirtualBox kernel modules can be built. Also bear in mind, something which had me stumped at first, that the latest kernel headers are not necessarily those of the kernel you are running at the moment. Upgrade your kernel as well, and reboot.

That fixed it for me.

So in no time I was running a 32-bit Ubuntu Precise virtual machine on my Fedora 16.

The Workshop


The workshop started off with a presentation by Mark van Holsteijn, principal consultant at Xebia, on what we were actually going to do and the advantages of doing it this way.

A lot of organisations release their applications by hand. In a lot of cases it involves an integration/systems engineer following a thick document containing a long sequence of steps to be performed. This has the following disadvantages:
  • it is time consuming
  • complexity is high
  • it's error prone: things get forgotten, things go wrong (especially when an integration engineer is stuck with a "non-standard" environment/customer/hardware/etc.)
  • it costs manpower, and manpower, unlike computing power, is expensive
  • time between releases is long
  • a lot of new functionality is incorporated in a new release, increasing the number of possible bugs found in production
  • time to market is long
Continuous Delivery is all about bringing your JEE applications to production fast, flawlessly and completely automatically: from development, through JUnit testing, functional testing, systems/integration testing and staging, to production. This is called an end-to-end Continuous Delivery deployment pipeline.

It fits the Agile way of working: the very first Agile principle is to satisfy the customer through early and continuous delivery of valuable software.

So basically every check-in into svn is a release. It turns out that big releases of functionality are always more bug-ridden than many releases that each add a little functionality. It is an example of the old mantra "release early, release often".

A quick, incomplete first attempt at a new feature will also provide valuable information on whether your customers actually want the functionality: you can tell by checking whether it is actually used.

Jenkins

By the end of the workshop, we had built the pipeline shown in the picture below.
The workshop briefly explained all of the following Jenkins plugins we were going to use. They can be downloaded from jenkins-plugin-hub.heroku.com.
Parameterized Trigger
This was used to trigger another job from the current job, and to provide information for that next job, for example revision numbers. Essential if you wish to use the same build between jobs. In our workshop, the parameter PL_SVN_REVISION was provided by the first job as one of the post-build actions, and passed on to the next jobs.
In the next jobs, the checkbox "This build is parameterized" was checked and the parameter to be imported was entered. The parameter could then be used as ${PL_SVN_REVISION}. In our case it was frequently used:
  • in source code version management, making sure the proper revision was checked out from svn, using "http://10.20.20.20/svn/sample-app/cd-fitnesse-runner@${PL_SVN_REVISION}",
  • in the build parameters, as " -Drevision=${PL_SVN_REVISION}", and
  • in the name of the build
Rebuilder
A simple plugin that rebuilds a parameterised build, by means of a button in the options of the build in question.
Maven Repository Server
turns Jenkins into a Maven repository server. The repository is defined in the settings.xml file of your Jenkins installation; in our case it was appropriately called "jenkins". You can tell Maven to use the profile defined in the settings.xml file by passing "-Pjenkins" as a parameter. The downstream jobs can define an Upstream Maven Repository in the Build Environment; we set it to "../everything" to get all build artifacts from previous jobs.
Build Pipeline
provides a view of upstream and downstream connected jobs that typically form a build pipeline. A screenshot is provided above.
Environment Injector
useful for configuring your build environment per job
Throttle Concurrent Builds
ensures we cannot run more than a specific number of builds in parallel. We haven't configured it, as it is not really an issue in our current setup.
Priority Sorter
allows the build queue to be sorted based on pre-assigned priorities for each job. For example, a smoke test gets high priority.
Promoted Builds
A good way to distinguish good builds from bad builds, for example in cases where a single job is not indicative of the health of the overall build. A build could get promoted once the JUnit testing, integration testing and staging jobs have completed successfully.
Build Name Setter
An excellent little plugin that allows descriptive names for builds instead of the default Jenkins build "#1", build "#2", etc. It can be defined in the build environment of your job. In our case it was defined as "#${BUILD_NUMBER} - rev ${ENV,var="SVN_REVISION"}", so we got "#1 - rev 34" as the name.
Deployit
creates and uploads deployment packages (DAR, Deployment ARchive, yet another abbreviation for a specific kind of jar) using artifacts in the Jenkins workspace. A very important post-build action in our deploy job.
Wall Display
for display on the big screen on the wall, to see which builds are running, in what state, etc. Management will love it.
The first job, which triggers the subsequent jobs, is set to "Poll SCM" every minute (cron syntax * * * * *) to check for changes in SVN. So any change is automatically picked up and triggers a build.

FitNesse

FitNesse is:
  • a software development collaboration tool
  • a software testing tool
  • a wiki
  • a web server
It is possible to run FitNesse standalone, in which case it starts its own small web server; something like "java -jar fitnesse-standalone.jar -p 8080" does the trick (the port number is up to you).

But during the workshop we also integrated it into Jenkins.
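To give a flavour of the Java side of FitNesse: the wiki test tables are backed by small fixture classes. A minimal sketch, modelled on FitNesse's classic division example (so the class and its methods are illustrative, not something we wrote in the workshop):

    // FitNesse fills in the inputs through the setters, then compares
    // the return value of quotient() with the expected value in the
    // wiki table.
    public class Division {
        private double numerator;
        private double denominator;

        public void setNumerator(double numerator) {
            this.numerator = numerator;
        }

        public void setDenominator(double denominator) {
            this.denominator = denominator;
        }

        public double quotient() {
            return numerator / denominator;
        }
    }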

DeployIT

DeployIT is an application developed by XebiaLabs. A fairly large amount of time was spent on it during the workshop, but I don't blame them; they have to make a living too.

It can automatically deploy an application to multiple application servers, using a set of standard scripts (which can be modified if desired). It can be run from Jenkins, through a plugin they developed.

The advantages are that it is a generic component and that it has knowledge of most application servers and how to deploy applications to them. It can also determine what needs to be deployed: whether a lot has changed or only a little, and accordingly deploy a little or deploy everything. It also provides monitoring.

You will have to create a DAR (Deployment ARchive) package, which DeployIT can then use to deploy to different servers. The advantage is that the package is environment independent.

DeployIT uses OverThere, an open source library which knows how to connect to hosts running different operating systems, and how to execute operations once there.
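To give an idea of what OverThere does, here is a rough sketch based on the examples in its README at the time. The host name and credentials are made up, and I am quoting the option constants from memory, so treat the details as approximate:

    import com.xebialabs.overthere.CmdLine;
    import com.xebialabs.overthere.ConnectionOptions;
    import com.xebialabs.overthere.OperatingSystemFamily;
    import com.xebialabs.overthere.Overthere;
    import com.xebialabs.overthere.OverthereConnection;
    import com.xebialabs.overthere.ssh.SshConnectionBuilder;
    import com.xebialabs.overthere.ssh.SshConnectionType;

    public class OverthereDemo {
        public static void main(String[] args) {
            ConnectionOptions options = new ConnectionOptions();
            options.set(ConnectionOptions.ADDRESS, "unix-box.example.com"); // made-up host
            options.set(ConnectionOptions.USERNAME, "demo");                // made-up credentials
            options.set(ConnectionOptions.PASSWORD, "secret");
            options.set(ConnectionOptions.OPERATING_SYSTEM, OperatingSystemFamily.UNIX);
            options.set(SshConnectionBuilder.CONNECTION_TYPE, SshConnectionType.SFTP);

            // Connect over ssh and execute a command on the remote host.
            OverthereConnection connection = Overthere.getConnection("ssh", options);
            try {
                connection.execute(CmdLine.build("uname", "-a"));
            } finally {
                connection.close();
            }
        }
    }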

One of the tabs, "Repository", shows you the "world image" of DeployIT. There it is configured which application server resides where, etc.

It is possible to add SQL scripts to the deployment package, but SQL scripts, as usual, are a different thing altogether, because they deal with 'state'.

DeployIT can automatically roll back an application to the previous version if the deployment failed, but this is of course not automatically possible with SQL scripts. It is possible to add special rollback scripts that are executed against the database upon a rollback to a previous version of the application.

DeployIT has built-in security to determine whether someone, once logged on, has the appropriate rights to deploy somewhere.

One of the features I really, really like about DeployIT is that it provides a central place recording which revisions are installed and on which application servers they are deployed. No more doubt about what is installed where.

Tools


Some tools were not addressed at all, but I managed to make a note of them anyway:
GreenPepper
is a tool integrating executable specifications and automated functional testing into software development processes, thus reducing any ambiguity related to the expression of needs between all actors.
Selenium
automates browsers
JMeter
load test functional behavior and measure performance

Requirements

There are some requirements to performing builds this way.

Your automated tests have to be very good, very thorough and fairly complete. Otherwise the whole process breaks down.

You and your team have to organise yourselves around products and align on common goals. It's very important that the systems and integration teams are on board with this.