Friday, 28 December 2012

@JoinTable

I recently had a requirement to register who is ignoring whom on my website.

So, I have a Person class, who can ignore other Persons.

That is a ManyToMany relationship, and thus requires a coupling table.

In my case, there's the mm_usertable database table and a mm_ignore database table. The mm_ignore table is a classic coupling table in that it has only two fields and both of them are foreign keys to the mm_usertable.

In JPA the @JoinTable annotation is used specifically for these situations.

In case you have more properties that apply to the relationship (i.e. there are more fields in the coupling table) you have no choice but to make a real JPA Entity of the coupling table. For example, if you wish to indicate that a relationship has been deleted, or wish to further describe the relationship (family, work, etc.).

Good Lord, I mean, there's more annotations here than anything else!

The targetEntity property seems very important; without it, the mapping won't work. Probably because generic collections are not reified: generics are implemented using erasure, so the element type is not available at runtime.
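A minimal sketch of how this mapping could look, assuming the join columns are called person_id and ignored_person_id (the actual column names aren't given above, so treat those as illustrative):

```java
import java.util.HashSet;
import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.JoinTable;
import javax.persistence.ManyToMany;
import javax.persistence.Table;

@Entity
@Table(name = "mm_usertable")
public class Person {

    @Id
    private Long id;

    // targetEntity is spelled out explicitly; the generic type of the
    // collection is erased and may not be available at runtime.
    // The join column names below are assumptions, not the real schema.
    @ManyToMany(targetEntity = Person.class)
    @JoinTable(name = "mm_ignore",
               joinColumns = @JoinColumn(name = "person_id"),
               inverseJoinColumns = @JoinColumn(name = "ignored_person_id"))
    private Set<Person> ignoring = new HashSet<Person>();
}
```

Since mm_ignore contains nothing but the two foreign keys, no separate entity is needed: the Set<Person> is all the application ever sees.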

Wednesday, 5 December 2012

Continuous Delivery


On the 30th of November 2012, I was able to attend a training on Continuous Delivery, courtesy of Xebia. I received the offer, as a member of the NLJUG.

It is the first training I've attended where they provide you with an entire OS image, for an easy start of the training. I guess they're professionals who have worked with this before. They're using VirtualBox for this. Of course, it does mean you have to download a 1.3 GB image beforehand.

The technologies involved:
  • Subversion as version control system
  • Java for the programming (although no programming was actually done during the workshop)
  • Maven for building
  • Jenkins for executing automated build and packaging activities
  • FitNesse for automated testing
  • DeployIT for automated deploying
  • Apache as the low-level web server
  • JBoss as JEE application server
  • MySQL as the database
A note of criticism. The workshop mail mentioned working with Selenium, but it was only mentioned once, and briefly at that, during the entire day.

VirtualBox


I was much impressed with VirtualBox. It enabled me to start without having to configure a lot of software on my native OS (Fedora 16 x86_64).


Bear in mind, you need the kernel headers installed. Also bear in mind, something which had me stumped at first, that the latest kernel headers are not necessarily of the kernel you are running at the moment. Upgrade your kernel as well, and reboot.

That fixed it for me.

So in no time I was running Ubuntu Precise (32-bit) in a virtual machine on my Fedora 16.

The Workshop


The workshop started off with a presentation by Mark van Holsteijn, principal consultant of Xebia, regarding what we were actually going to do and what the advantages were of doing it this way.

A lot of organisations create their applications by hand. In a lot of cases it involves an integration/systems engineer following a thick document containing a lot of steps to be performed in sequence. This has the following disadvantages:
  • it is time consuming
  • complexity is high
  • it's error prone, things get forgotten, things go wrong (especially in those cases where an integration engineer is stuck with a "non standard" environment/customer/hardware/etc)
  • it costs manpower, and manpower, contrary to computing power, is expensive
  • time between releases is long
  • a lot of new functionality is incorporated in a new release, increasing the number of possible bugs found in production
  • time to market is long
Continuous Delivery is all about bringing your Java EE applications to production fast, flawlessly, and completely automatically. So from development, to JUnit testing, functional testing, systems/integration testing, to staging, to production. This is called an end-to-end Continuous Delivery deployment pipeline.

It fits the Agile way of working: satisfy the customer through early and continuous delivery of valuable software.

So basically every check-in into svn is a release. It turns out that big releases of functionality are always more bug-ridden than a lot of releases with small added functionality. It is an example of the old mantra "release early, release often".

A quick, incomplete first attempt at a new feature will also provide valuable information on whether your customers actually want the functionality. You can tell by checking if the new functionality is actually used.

Jenkins

At the end of the workshop, we ended up with the pipeline in the picture below.
The workshop briefly explained all the following Jenkins plugins we were going to use. They can be downloaded from jenkins-plugin-hub.heroku.com.
Parameterized Trigger
This was used to trigger another job from the current job, and to provide information for this next job, for example revision numbers. Essential if you wish to use the same build between jobs. In our workshop, the parameter PL_SVN_REVISION was provided by the first job as one of the post build actions, and passed onto the next jobs.
In the next jobs, the checkbox "This build is parameterized" was checked and the parameter to be imported entered. The parameter could then be used as ${PL_SVN_REVISION}. In our case it was frequently used for:
  • source code version management, making sure the proper revision was checked out in svn, using "http://10.20.20.20/svn/sample-app/cd-fitnesse-runner@${PL_SVN_REVISION}",
  • in the build parameters, " -Drevision=${PL_SVN_REVISION}" and
  • in the name of the build
Rebuilder
A simple plugin that rebuilds a parameterised build, by means of a button in the options of the build in question.
Maven Repository Server
changes Jenkins into a Maven repository server. The repository is defined in the settings.xml file of your Jenkins installation; in our case it was appropriately called "jenkins". You can tell Maven to use the profile defined in the settings.xml file by passing "-Pjenkins" as a parameter. The downstream jobs can define an Upstream Maven Repository in the Build Environment. We set it to "../everything" to get all build artifacts from previous jobs.
Build Pipeline
provides a view of upstream and downstream connected jobs that typically form a build pipeline. A screenshot is provided above.
Environment Injector
useful for configuring your buildenvironment per job
Throttle Concurrent Builds
so we cannot build more than a specific number of builds in parallel. We haven't configured it, as it is not really an issue in our current setup.
Priority Sorter
allows for the build queue to be sorted based on pre-assigned priorities for each job. For example, a smoke test has high priority.
Promoted Builds
A good way to distinguish good builds from bad builds, for example in such cases that a single job is not indicative of the health of the overall build. A build could get promoted if the jobs of junit testing, integration testing, and staging completed successfully.
Build Name Setter
An excellent little plugin that allows descriptive names for builds instead of the default Jenkins build "#1", build "#2", etc. It can be defined in the build environment of your job. In our case it was defined as "#${BUILD_NUMBER} - rev ${ENV,var="SVN_REVISION"}", so we got "#1 - rev 34" as name.
Deployit
creates and uploads deployment packages (DAR, deployment archive, yet another variation on the jar) using artifacts in the Jenkins workspace. A very important post-build action in our deploy job.
Wall Display
for displaying on a big screen on the wall, to see which builds are running and in what state, etc. Management will love it.
The first job, that triggers subsequent jobs, is set to "Poll SCM" about every minute (cron syntax * * * * *) to check for changes in SVN. So any changes are automatically picked up and also trigger a build.

FitNesse

FitNesse is:
  • a software development collaboration tool
  • a software testing tool
  • a wiki
  • a web server
It is possible to run FitNesse standalone; it will start a small web server.

But during the workshop we also integrated it into Jenkins.

DeployIT

DeployIT is an application developed by Xebia Labs. A fairly large amount of time was spent on it during the workshop, but I don't blame them. They have to make a living too.

It can automatically deploy an application to multiple application servers, using a set of standard scripts (which can be modified if desired). It is executable from Jenkins, through a plugin they developed.

The advantages are that it is a general component and that it has knowledge of most application servers and how to deploy applications to them. It can also determine what needs to be deployed: whether a lot or a little has changed, and deploy only the difference or deploy everything. It also includes monitoring.

You will have to create a DAR (Deployment Archive) package that can be used by DeployIT to deploy to different servers. The advantage is that the package is environment independent.

DeployIT uses OverThere, an open-source library that knows how to connect to different operating systems, and how to execute operations once there.

One of the tabs, "Repository", shows you the "world image" of DeployIT. There you configure which application server is where, etc.

It is possible to add SQL scripts to the deployment package, but SQL scripts, as usual, are a different thing altogether, because they deal with 'state'.

DeployIT can automatically roll back applications to previous versions if the deployment failed, but this is of course not automatically possible with SQL scripts. It is possible to add special rollback scripts that are executed on the database upon a rollback to a previous version of the application.

DeployIT has built-in security to determine whether someone, once logged on, has the appropriate rights to deploy somewhere.

One of the features I really really like about DeployIT is that it provides a central place that records which revisions are installed on which application servers. No more doubt about what is installed where.

Tools


Some tools were not addressed in any way, but I managed to make notes of them anyway:
GreenPepper
is a tool integrating executable specifications and automated functional testing into software development processes, thus reducing any ambiguity related to the expression of needs between all actors.
Selenium
automates browsers
jmeter
load test functional behavior and measure performance

Requirements

There are some requirements in performing builds this way.

Your automated tests have to be very good, very thorough and fairly complete. Otherwise the whole process breaks down.

You and your team have to organise yourself around products and align on common goals. It's very important that the systems and integration teams are onboard with this.

Tuesday, 20 November 2012

Tuesday, 13 November 2012

The network was the computer?


What happened to http://thenetworkisthecomputer.com/[1]? It used to point to a Tribute website regarding Sun, but now it just points to Oracle.

I guess all things pass.

P.S. Startling how back then (1984[3]!!) "The network is the computer" could nowadays be translated as "All your stuff's in the Cloud" (2006[4], 22 years later). I guess they were ahead of their time.

References

[1] a Tribute to Sun Microsystems
http://www.thenetworkisthecomputer.com/
[2] Screenshot : a Tribute to Sun Microsystems
http://dawhois.com/site/thenetworkisthecomputer.com.html
[3] Celebrate 25 years of SPARC Innovation
http://www.oracle-downloads.com/sparc25info/
[4] The Google Podium transcript
http://www.google.com/press/podium/ses2006.html

Sunday, 4 November 2012

J-Fall 2012 Report

J-Fall 2012[1] has come and gone, and I write a small blurb on what I managed to learn at each session.

It was a great J-Fall. I had a lot of fun and learned a lot from listening to some of the people, who were so passionate about the particular field they worked in. I hope next time it will be equally interesting. Or even better! Keep it up, y'all!!!

It's always interesting to see any big event that has a line for the mens room, instead of the ladies room.

Your Product Owner is Just Better at Pretending

Speaker(s): Erwin van der Koogh

It was extremely early in the morning, and I had to catch the very first train to the Conference, or I would miss out.

Some statistics. Of all the features of a software product, only 40% are actually used by the customer.

In which case it is very important to decide what to include in the product and what not to. Also, it might seem obvious who your customer is, but sometimes things are not as cut and dried.

The powerful example of Facebook was discussed. The Customers of Facebook are not the users, but the advertisers. Advertising is how Facebook makes (hopefully enough) money. The users are the Product. They are the reason Facebook can charge for advertising. This will make you look at Facebook in a whole other light.

Another statistic: if your source code were to be eliminated and you had to write it again, but with the knowledge and experience you have now, how long would that take? Just as long? Twice as long? Half as long? It turns out around 1/3 to 1/4 of the time spent on a project is spent coding. Yet that is the only part that is currently being considered for efficiency in all the Software Development Methods we have gotten used to.

A third point I wish to highlight was documentation. Nobody (it seems) reads documentation. I always say that I prefer bad documentation to no documentation. I hold the optimistic view that, where documentation is bad, the incentive to fix it is greater than the incentive to be the first to write documentation where there is none.

The speaker did not share my opinion. He mentioned that bad documentation, if there is no hint that it is in fact bad, could be assumed to be correct and thus provide misplaced confidence in what you are doing, leading to larger problems later on.

One other example is to show people that there's a new Upgrade/Component/Widget/whatever available on your website that does X for them. Then place a link under it to a 404 page. Then run statistics on how often that 404 page was accessed. It gives you an idea of how much a feature is wanted, without writing any actual code. A lot of these ideas were expressed here.

I was sufficiently intrigued that I think there's more than one blogpost in here.

Keynote - Oracle


Stephen Chin @steveonjava, Java Technology Ambassador and JavaOne Content Chair, was late for the Keynote. He was on a Nighthacking tour on his bike through Europe, attending all the major conferences.

But he did manage to get there, riding into the conference room, on said bike and biking gear.

They made quite a show of it.

Find more complete info on Geertjan's Blog.

I really liked the picture of 20 Raspberry Pis in an 8U unit.

Java EE 7 Platform Overview and Highlights

Speaker(s): David Delabassee


They are hard at work for the new version building new APIs and improving old ones.

new APIs:
  • JSONP presentation view
  • JSON API for Java
  • java.net.websocket
  • Batch Applications for Java 1.0
  • Java temporary caching -> distributed across nodes?

old APIs to be updated:
  • REST + hypermedia + client API
  • JMS
  • Bean Validation
  • JSF 2.2
  • github.com/jersey./hol-sse-websocket

The Aquarium

javaee-spec.java.net

Some of the Specifications are not quite finished yet. So if you have something to say about how things should be implemented/work, you still have a chance to mention it.

Java EE Multi-tenancy in Practice

Speaker(s): Frans van Buul

Good stuff. Multi-tenancy here is having multiple customers in the same database, and your application is minimally impacted.

It concerns adding a column to the tables (which can be done automatically by Oracle and by PostgreSQL Enterprise) that indicates the tenant.

The idea is to have just one database, where every table can be "viewed" by tenants. Tenants will only see their own rows, and not those of other tenants. This way, the impact on the code is minimal.

In a worst-case scenario, we have to do all this stuff ourselves. This means that the code will be littered with a lot of if statements to double-check if the proper tenant is inserted, etc.

In Hibernate it is possible to "fix" the problem (though in Hibernate 5.0 there might be a built-in solution) with an Interceptor (or a @PrePersist) that sets the tenant properly on the entity by looking up the proper tenant in the context. A good way of retrieving the tenant is by using the hostname, for example holidayinn.localhost or novotel.localhost.

What I find most admirable is the fact that he did make some mistakes, managed to find out where he made them, and fixed them.
Mistakes made were:
- Tomcat selected instead of GlassFish, and Tomcat has a web profile, which isn't everything he needs

MySQL Workbench was awesomeness! He generated the entire database from scratch every time, by executing a script created by MySQL Workbench!

The combination: NetBeans, GlassFish, MySQL and EclipseLink.

NetBeans never ceases to amaze me: the quick way in which an entire project can be started and generated in a matter of minutes.

EclipseLink supports a special annotation to get the tenant system working.
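The annotation in question is EclipseLink's @Multitenant. A sketch of how it is typically used (the entity and column names here are my own illustration, not the speaker's actual code):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import org.eclipse.persistence.annotations.Multitenant;
import org.eclipse.persistence.annotations.MultitenantType;
import org.eclipse.persistence.annotations.TenantDiscriminatorColumn;

@Entity
@Multitenant(MultitenantType.SINGLE_TABLE)
@TenantDiscriminatorColumn(name = "TENANT_ID")
public class Reservation {

    @Id
    private Long id;

    // EclipseLink maintains the TENANT_ID column itself and adds a
    // tenant filter to every query; the entity code stays tenant-free.
}
```

The current tenant is then supplied through the persistence property "eclipselink.tenant-id" when the EntityManager is created, which could be derived from the hostname as described above.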

One question from the audience was that there are small and big companies, and that the big companies, with their many permutations, would push the little companies out of the cache.

In the case of Frans, his organisations were all on the same scale and he didn't see this happening in his practice.

My opinion: the little companies do not need to be in the cache, as their involvement isn't that big to begin with. They can wait for their data a bit longer.

In general, a very tricky issue is the second-level cache. It is possible to do all this by hand in MySQL, but you basically "screw" with the primary keys. The primary key is partially "hidden" from the application, the tenant part to be precise. So the primary key known to the system might also be the primary key of another record (but with a different tenant). This is a problem with caching, as the cache might provide the wrong instance. It is best in that case to just turn the second-level cache off.
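The pitfall can be pictured with a hypothetical mapping (all names here are mine, purely for illustration) in which the real database key is (tenant, id) while the application only ever looks at id:

```java
import java.io.Serializable;
import javax.persistence.Embeddable;

// Hypothetical composite key. The database uniquely identifies a row
// by (tenantId, id), but application code treats "id" alone as the key.
// Tenant A and tenant B can therefore both own a row with id = 42, and
// a second-level cache keyed on id alone may hand one tenant a record
// belonging to the other.
@Embeddable
public class TenantedKey implements Serializable {
    private String tenantId; // the part "hidden" from the application
    private Long id;         // the part the application considers the key
}
```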

A question from the audience was: just make the primary key an autogenerated id, problem solved. Frans agreed in principle, but what if you have a customer that wishes to port his local database into your tenanted database? You'd have to resequence all the primary keys before inserting them into your database. This is a complicated and error-prone process. Not to mention that this must be done on a production database, during a time of low load, so basically at night.

Microsoft Keynote


Microsoft explained the advances made with their Windows Azure, their cloud solution. Apparently it is no longer Windows centered, but can support a number of different technologies. Even Linux was mentioned!

Also deploying to a staging environment, testing, and then switching the staging and production environments at the load-balancer level to go live within a second was nice.

Hands-on Lab: RRRADDD! ... Really Ridiculously Rapid Application Development (Domain-Driven)

Speaker(s): Dan Haywood, Jeroen van der Wal

It concerns an Apache Incubation project called Apache Isis. It is still in its infancy but looks nice.

It reminds me most of something we're building at work. It uses annotations for displaying your data on web pages without much hassle.

Hands-on Lab: MongoDB

Speaker(s): Maikel Alderhout

Database evolution in short:
  • 1990: Oracle RDBMS
  • 2000: RDBMS and OLAP/BI
  • 2010: NoSQL Hadoop

Several trends have a big impact on the database landscape:
  • data volume, type & use
  • agile development
  • new hardware architectures, cloud, commodity

4 categories of NoSQL are available:
  • key-value stores
  • document based
  • columnfamily/bigtable clones
  • graph databases

MongoDB is a scalable, high-performance NoSQL database of the document-based type.
  • no transactions
  • JSON documents stored in binary form
  • replication/sharding/durability.

No strict data schema. Examples are Twitter and Foursquare. Foursquare is actually running on MongoDB.

j:true => this is the model used by relational databases: changes are stored in a journal on disk. MongoDB can be set to whichever level you want:
  • async (default), instant feedback: "got the message, I'll get around to it"
  • w=1, "I'll remember it"
  • j:true, "Wrote down what I need to do"
  • w=majority, "Wrote everything down"
  • w="<tag>", "Wrote everything down multiple times"

The command line client during the lab felt like working somewhere between SQL and calling javascript functions.

The situation:
  • RDBMS -> a lot of functionality, very little flexibility
  • memcache -> little functionality, a lot of flexibility
MongoDB for the most part goes a long way towards RDBMS feel.

Scala Through the Eyes of Java (8)

Speaker(s): Urs Peter

Urs Peter is a Speaker/Trainer of Xebia and provides courses in Scala, and it shows.

Scala started out in 2003. It was created at EPFL by Martin Odersky and is commercially backed by the company Typesafe.

It comes with the following frameworks: Scala + Akka + Play.

Some points that came up:
  • syntax lightweight (helps?)
  • val = final
  • var = field
  • none = ?
  • operators are just methods (except == and a few others)
  • types are inferred.
  • no more NullPointerExceptions
  • functional programming (functions are first-class citizens)
  • object oriented programming (all the way, no native types)
  • multiple inheritance by means of traits.

An example was given using spaceships, always cool.
  • base, has the following traits
    • shield
    • gun
    • medic
  • commander, has the following traits
    • shield
    • gun
  • fighter, has the following trait
    • gun
  • mechanic, has the following traits
    • shield
    • medic

Dutch Scala Enthousiasts

Scala for the Impatient.

Shadaj Laddad

References

[1] J-Fall 2012
http://www.nljug.org/jfall/

Wednesday, 24 October 2012

J-Fall 2012

The NLJUG[1] is once again organising J-Fall 2012[2]. It takes place on the 31st of October 2012 in Nijkerk, in the Dutch province of Gelderland.

I shall be visiting, and see what new things I can learn.

I hope to write some blogs about it. As such I thought I'd post my current programme here.
Time: 08:00 - 08:50 Early Bird sessions
Title: Your Product Owner is Just Better at Pretending
Speaker(s): Erwin van der Koogh

Time: 09:20 - 10:10 General Session
Title: Keynote - Oracle

Time: 10:40 - 11:30 Parallelsessions
Title: Java EE 7 Platform Overview and Highlights
Speaker(s): David Delabassee

Time: 11:35 - 12:25 Parallelsessions
Title: Java EE Multi-tenancy in Practice
Speaker(s): Frans van Buul

Time: 13:35 - 14:20 General Session
Title: Keynote

Time: 14:25 - 15:15 Parallelsessions
Title: Hands-on Lab: RRRADDD! ... Really Ridiculously Rapid Application Development (Domain-Driven)
Speaker(s): Dan Haywood, Jeroen van der Wal

Time: 15:45 - 16:35 Parallelsessions
Title: Hands-on Lab: MongoDB
Speaker(s): Maikel Alderhout

Time: 16:40 - 17:30 Parallelsessions
Title: Scala Through the Eyes of Java (8)
Speaker(s): Urs Peter

References

[1] NLJUG - Nederlandse Java Users Group
http://www.nljug.org
[2] J-Fall 2012
http://www.nljug.org/jfall/

Saturday, 20 October 2012

Vector Deprecated?

Someone asked me to explain a comment of mine where I mentioned that the Vector class isn't used (much) any more.

Back in the old days, when I was programming and needed a collection (there were no generics in sight yet), I'd use the Vector class as a convenient way to maintain a list of objects. Vector, at the time, could only be filled with instances of supertype Object, so I got stuck with a lot of casting down to the appropriate type I wanted.
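The casting pain is easy to show side by side with the modern generic alternative (a small sketch):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

public class VectorVsList {
    public static void main(String[] args) {
        // Pre-generics style: the Vector holds Objects, so every
        // retrieval needs an explicit downcast.
        Vector oldStyle = new Vector();
        oldStyle.add("hello");
        String s1 = (String) oldStyle.get(0);

        // Since J2SE 5.0, the type parameter removes the cast.
        List<String> modern = new ArrayList<String>();
        modern.add("hello");
        String s2 = modern.get(0);

        System.out.println(s1 + " " + s2);
    }
}
```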

The Java class Vector has been a part of the JDK since version 1.0.

In J2SE 1.2 (JDK 1.2) the Collections framework was introduced.

The following javadoc comments can be found on the Vector class.
As of the Java 2 platform v1.2, this class was retrofitted to implement the List interface, making it a member of the Java Collections Framework. Unlike the new collection implementations, Vector is synchronized. If a thread-safe implementation is not needed, it is recommended to use ArrayList in place of Vector.[1]

Also the following text is displayed in Netbeans if you do try to use the Vector class.[4]
"This inspection reports any uses of java.util.Vector or java.util.Hashtable. While still supported, these classes were made obsolete by the JDK1.2 collection classes, and should probably not be used in new development."

Basically, what I'm saying is that the Vector class was implemented before the Collections framework. As such it does not follow the Collections framework naming scheme of List, Set and Map. It also has a number of methods, kept for backwards compatibility, that are not to be found in any of the other collection classes.

If you do need synchronization, Vector is indeed synchronized, yet it is synchronized on each and every operation. This is fine if you do not need to execute a lot of operations as one atomic unit. But, for instance, if you need to iterate over all the items in the Vector, it is always better to acquire a lock before the iteration and release it afterwards.

So, in short:
  • use ArrayList, if you do not care for the synchronization aspect of Vector
  • use Collections.synchronizedList[2] if you need an array that is synchronized
  • use CopyOnWriteArrayList if you are dealing with multiple concurrent read operations and only a few write operations *)
  • use a native array, if you do not care for synchronization and performance is an issue.
*) be careful, as a write/add on a CopyOnWriteArrayList (as the name says) will create an entirely new array populated with the old items and the new/added item. For large arrays this might be slow.
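The snapshot behaviour of CopyOnWriteArrayList is easy to demonstrate (a small sketch): an iterator obtained before a write never sees the written element, and never throws a ConcurrentModificationException either.

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CopyOnWriteDemo {
    public static void main(String[] args) {
        List<String> list = new CopyOnWriteArrayList<String>();
        list.add("a");
        list.add("b");

        // The iterator works on a snapshot of the backing array.
        Iterator<String> it = list.iterator();
        list.add("c"); // copies the entire backing array

        int seen = 0;
        while (it.hasNext()) {
            it.next();
            seen++;
        }
        System.out.println("snapshot saw " + seen + ", list now has " + list.size());
    }
}
```

This prints "snapshot saw 2, list now has 3": the add performed after iterator() is invisible to the snapshot.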

In the javadoc of synchronizedList there is a good example of iterating over it. I shall just write it down here for completeness.
List list = Collections.synchronizedList(new ArrayList());
      ...
synchronized (list) {
    Iterator i = list.iterator(); // Must be in synchronized block
    while (i.hasNext())
        foo(i.next());
}
As always in these things, the requirements you have, determine what kind of Collection you need to use.

Updates


23/10/2012: Updated because of the comment below on CopyOnWriteArrayList.

28/01/2013: Update: the same goes for Hashtable (old) and HashMap (new).

References

[1] Vector
http://docs.oracle.com/javase/7/docs/api/java/util/Vector.html
[2] public static <T> List<T> synchronizedList(List<T> list)
http://docs.oracle.com/javase/7/docs/api/java/util/Collections.html#synchronizedList%28java.util.List%29
[3] Why is Java Vector class considered obsolete or deprecated?
http://stackoverflow.com/questions/1386275/why-is-java-vector-class-considered-obsolete-or-deprecated
[4] Is Vector an obsolete collection class?
http://www.coderanch.com/t/515352/java-developer-SCJD/certification/Vector-obsolete-collection-class
[5] Vector or ArrayList -- which is better?
http://www.javaworld.com/javaqa/2001-06/03-qa-0622-vector.html

Collections Empty List Factory Method

You can create an empty list like so:
List<String> s = Collections.emptyList();
However, the following doesn't work:
setStrings(Collections.emptyList());
Because the compiler cannot determine what the type should be at compile time. In the first example type inference comes to the rescue. In the second example, not so much.

What, however, does work is the following syntax I came across:
setStrings(Collections.<String> emptyList());
It's a little odd at first, but looking at the source code of emptyList() reveals:
public static final <T> List<T> emptyList() {
    return (List<T>) EMPTY_LIST;
}
The <T> in front of List<T> declares the type parameter that is to be inferred, or, as above, supplied explicitly.

Please do not use EMPTY_LIST directly, if possible. It doesn't use generics.

References

[1] Java: Collections.emptyList() returns a List<Object>?
http://stackoverflow.com/questions/306713/java-collections-emptylist-returns-a-listobject


Friday, 6 July 2012

The Future Is Now


I was sitting in the train, with my laptop on my lap. I was watching the film "I, Robot"[1] with Will Smith (to wit[2]: Will Smith was the actor in the film, he wasn't sitting next to me watching. (And we wonder why computers haven't learned human speech yet.)) It's extremely loosely based on Isaac Asimov's[3] stories.

There was this scene, where he walks up to a house, flashes his Police Detective Badge in front of a Scanner, and the door opens and a voice welcomes him to the house. It was really awesome, with flashy laserlights scanning the badge, the badge lighting up, and everything.

Well, I closed the laptop, got off the train and upon exiting the station held my wallet against the wall scanner. The scanner went *bleep*, indicated the cost of the train trip and my remaining balance, and I went merrily on my way.

Then what I just did struck me.

It leaves me with only one question...

Where are we going?

P.S. And when did Science Fiction stop having original ideas?

References

[1] I, Robot
http://en.wikipedia.org/wiki/I,_Robot_%28film%29
[2] to wit
http://www.worldwidewords.org/qa/qa-wit1.htm
[3] Isaac Asimov
http://en.wikipedia.org/wiki/Isaac_Asimov

Monday, 25 June 2012

Creating A Binary Abacus


Dr. Emmett L. Brown: “Please excuse the crudity of this model, I didn't have time to build it to scale or to paint it.”
Every once in a while, it helps me to do something that is totally unrelated to the field of Software Design. In my case I like to make the occasional wooden thing. In this case, related to my previous blogpost, I decided to build a binary abacus.

Just to elaborate: a binary abacus is an abacus where each row has only one bead (which simplifies the design, even if it complicates the arithmetic for normal people). A picture of an abacus with all eight beads live, giving a maximum value of 255, is visible on the left.

Ingredients


  • some wood, 1.8 cm thickness
  • a cylindrical piece of wood, 15 mm thick, 1 meter long, which is too long, really.
  • a cylindrical piece of wood, 3 mm thick, 1 meter long, which is only just enough, so be careful there
  • wooden beads, 15 mm diameter, exactly 8.
  • 4 or 6 screws (I used 40mm/3.5mm screws)

Utensils:
  • a hacksaw
  • old-fashioned wood saw
  • glue (ended up not using it after all, depends on what you want)
  • sandpaper
  • tape measure (essential)
  • drill (3 mm and 4 mm)
  • pencil
  • screwdriver
  • paper
  • eraser
  • hammer

Schematic


The Making Of


Use the hacksaw to saw the cylindrical pieces of wood to the proper length. The support beams should be 11.5 cm. The bead rows should be 12.5 cm. Do not worry overmuch if the bead rows are a little too short or a little too long, a couple of mm isn't going to make much difference. Be careful though, the wood is quite brittle.

Cut up a piece of paper in the proper dimensions and then use that to indicate where to saw the wood, with the woodsaw.


Saw the wood into the proper shapes. You should end up with two identical rough wooden Trapezoids.

Drill the sixteen 3 mm holes at 20 mm intervals, approximately 7 mm into the wood.

Drill the 4 (or 6) 4 mm holes all the way through, for the support beams.

Use the sandpaper on both trapezoids to get the rough edges off. Go nuts: the more you sand, the better the result.

Drill 3mm holes into the opposite ends of the two (or three) support beams, to guide the screws.

Erase all the pencil drawings you did on the wood.

Fix all the bead rows into place on one of the trapezoids. You should be able to do this using only your hands.

Add the support beams to the trapezoid.

Add the beads to the bead rows. Important! You don't want to find out that you have a perfect abacus but forgot to add the beads.

Have fun trying to get them properly aligned with the other trapezoid.

Screw the support beams securely together, after everything seems properly fixed.

Give small, soft taps with a hammer (covered in cloth) to make sure the bead rows are secure.

Good luck!

Notes

Make sure that the beads can slide smoothly over the 3mm thick cylindrical piece of wood.

The hard part was getting all the rows to align properly against the holes in the wood.

I ended up using no glue at all, as the thing seemed pretty solid eventually.

Expansions


Possible extensions to the design are a red bead to indicate data overflow and some indications on the side of what's what. I was thinking along the lines of (from top to bottom) LSB 1, 2, 4, 8, 16, 32, 64, MSB 128.

In the example above, I used two supports at the bottom of the abacus. For maximum strength, you'd ideally wish to have a third support at the top somewhere.

And of course, some paint or some varnish will help.




Binary Abacus - Counting


Tuesday, 19 June 2012

Binary prefix

Apparently there's a new naming convention for amounts of bits and bytes.[1] One that I've never heard of before.

Apparently,
  • 1024 bytes is now called a kibibyte (KiB),
  • 1024² bytes is now called a mebibyte (MiB) and
  • 1024³ bytes is now called a gibibyte (GiB)

However, Wikipedia does mention that it is not in much common use. Thank God.
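For what it's worth, these binary prefixes are easy to compute with a left shift; a small sketch (the class name is mine):

```java
public class BinaryPrefix
{
    public static void main(String[] args)
    {
        long kib = 1L << 10; // 1024 bytes, one kibibyte
        long mib = 1L << 20; // 1024², one mebibyte
        long gib = 1L << 30; // 1024³, one gibibyte
        System.out.println(kib); // 1024
        System.out.println(mib); // 1048576
        System.out.println(gib); // 1073741824
    }
}
```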

References


[1] Binary prefix
http://en.wikipedia.org/wiki/Binary_prefix

Sunday, 10 June 2012

A (hopefully) simple Explanation of the Binary numeral system


Introduction


Hi! I am a software developer, and I can count to 31 with the fingers of just one hand.

Most people can count to 5 with their fingers.

How does that work? Well, imagine that you are counting with the fingers of your hand. Let's say you wish to indicate the number 1. No doubt you'd use your index finger. But it is of course also possible to use your thumb. When you think about it there are five different ways to indicate that you mean 1. This is terribly inefficient.

Instead of having 5 different ways to indicate the number 1, we can assign different numbers to the different fingers.

What you are, in fact, doing is no longer using how many fingers to determine the number, but which fingers.

This is what we all have been doing with the decimal system since we could count. When I ask you if there is a difference between the numbers 21 and 12, you say of course. But why is that? We are still using the same two digits, are we not? The answer is in the position of the two numbers.

The decimal system, the system we are dealing with every day, is called a positional notation with a radix of 10. The finger counting system explained above is called the binary numeral system, which is a positional notation with a radix of 2.
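To illustrate the radix-2 positional notation, Java can parse and print binary numbers out of the box; a small sketch (the class name is mine):

```java
public class BinaryDemo
{
    public static void main(String[] args)
    {
        // "which fingers" are up: 10101 means 16 + 4 + 1
        System.out.println(Integer.parseInt("10101", 2)); // 21
        // and back again: 21 written with a radix of 2
        System.out.println(Integer.toBinaryString(21)); // 10101
    }
}
```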

An Abacus

Whilst old people still know what an Abacus is, young people have no clue. An example of an Abacus using the Decimal numeral system that everybody knows is visible at the top of this article.

“The abacus is a device, usually of wood, having a frame that holds rods with freely-sliding beads mounted on them.”[1]

A Binary Abacus would look something like this[2]:

In the example above, one row on the abacus represents a “bit”, a value that is either 0 (no bead) or 1 (a bead). The combination of the 8 rows together makes 256 combinations of beads possible. This is called a “byte”.

In the picture, all beads are "on", so the maximum value is displayed, 255.

Why?

Why do computers use a binary numeral system? Why not have them use a decimal numeral system, like we do? Wouldn't that make it easier?
That would make it easier for us, not for the computers, and computers are already among the most complicated machines we have. There are some advantages to computers dealing with binary numbers:
  • binary is easy for a computer: either it's on or off (yes or no, true or false, 1 or 0, electricity or no electricity). There's no ambiguity about it.
  • it is fault tolerant, which means that a computer can easily determine whether the value was 1 or 0, even if there has been some problem. After all, there are only two options. If a transistor received 1.3 V instead of 1.5 V, the value is still a definite 1.
  • it is simpler: no difficult hardware needs to be designed to take care of all the intermediate digits we use, like 5, 6 and 3.

Note

When you think about it, the binary numeral system is also the smallest possible numeral system. Any fewer digits and you would not be able to count at all!

As you see, radix 2 means that a binary numeral system always deals with powers of 2. The numbers 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 are numbers any Software Designer knows by heart. It also explains why a number of things dealing with computers are always expressed in powers of 2. Examples of these are the 32-bit and 64-bit processors and the need to always express memory in multiples of 1024 instead of the 1000 we are used to.
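Those powers of 2 can be generated with a simple left shift; a small sketch (class and method names are mine):

```java
public class PowersOfTwo
{
    // shifting 1 left by i positions doubles it i times: 1 << i equals 2 to the power i
    static int powerOfTwo(int i)
    {
        return 1 << i;
    }

    public static void main(String[] args)
    {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i <= 10; i++)
        {
            sb.append(powerOfTwo(i)).append(' ');
        }
        System.out.println(sb.toString().trim());
        // prints: 1 2 4 8 16 32 64 128 256 512 1024
    }
}
```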

An in-depth article and the history of the binary numeral system are available at Wikipedia.[3]

Also, if anybody has any other insights to add to this article, I'd welcome it. I consider this article to be in permanent development.

Appendix A

The following numeral systems are currently in use:

name         radix  used
binary       2      often
octal        8      rarely/none
decimal      10     most used
hexadecimal  16     heavily
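Java's Integer.toString(int, int) can print the same number in each of these radices; a small sketch (the class name is mine):

```java
public class RadixDemo
{
    public static void main(String[] args)
    {
        int n = 255;
        System.out.println(Integer.toString(n, 2));  // 11111111
        System.out.println(Integer.toString(n, 8));  // 377
        System.out.println(Integer.toString(n, 10)); // 255
        System.out.println(Integer.toString(n, 16)); // ff
    }
}
```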

Appendix B

The multitudes of 2:

number of bits  unit                                     comments
1               bit                                      often
2               -                                        -
4               nibble, nybble or even nyble, semioctet  half a byte, corresponds to a single hexadecimal digit. An example of “ha ha only serious”[4]
8               byte, octet                              the number of bits in a byte
16              word, short                              very old processors were only 16 bits
32              long                                     processors
64              -                                        modern processors
128             -                                        -
256             -                                        -
512             -                                        -
1024            kilo                                     as in kilobits and kilobytes
1024²           mega                                     megabytes
1024³           giga                                     gigabytes
1024⁴           tera                                     terabytes

References


[1] A Brief History of the Abacus
http://www.ee.ryerson.ca:8080/~elf/abacus/history.html
[2] Creating A Binary Abacus
http://randomthoughtsonjavaprogramming.blogspot.nl/2012/06/creating-binary-abacus.html
[3] Binary numeral system
http://en.wikipedia.org/wiki/Binary_numeral_system
[4] Ha ha only serious
http://www.catb.org/jargon/html/H/ha-ha-only-serious.html

Wednesday, 2 May 2012

Another weird if statement

Found the following code at my work.

if (order.getOrderCode() == null)
{
    simpleOrder = true;
}
else 
{
    if (order.getOrderCode().length() == 0) 
    {
        simpleOrder = true;
    }
}

Refactor.

Note: There are plenty of libraries that offer very good single static operations to replace the if condition, but that's not important right now.
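For what it's worth, one possible refactoring collapses the nested ifs into a single condition; a sketch (class and method names are mine, not from the original code):

```java
public class OrderCheck
{
    // true when the order code is absent or empty:
    // exactly what the nested ifs above compute
    static boolean isSimpleOrder(String orderCode)
    {
        return orderCode == null || orderCode.isEmpty();
    }

    public static void main(String[] args)
    {
        System.out.println(isSimpleOrder(null)); // true
        System.out.println(isSimpleOrder(""));   // true
        System.out.println(isSimpleOrder("A1")); // false
    }
}
```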

Monday, 9 April 2012

JDK7 EJB3.1 and Netbeans Project (Part III) - Testing

Part I - Introduction, Part II - Hibernate and Transactions, Part III - Testing

Netbeans + TestNG


I have been trying out TestNG and JMockit in the new Netbeans. [1]

TestNG has become a standard part of Netbeans, that is, if you download the Nightly builds[2] of Netbeans. There is no longer a need to install the plugin from contrib. I used it, and I was suitably impressed.


In Bugzilla you can file bugs under "java/TestNG".

JMockit


I do like adding JMockit to my testing, because I've gotten used to mocking all the classes that I am not interested in and providing their behavior in the tests. It is especially handy for avoiding the need for a database.

In this case, this was easy as pie. Just download[3] and add the jmockit.jar to the testing libraries, and away I went.


Results


An example of what the results look like in Netbeans can be viewed at [2]. I do like the generated test reports; they provide a good deal more information on what is going wrong.


References


[1] Netbeans - TestNG
http://wiki.netbeans.org/TestNG
[2] Netbeans - Nightly Builds
http://bits.netbeans.org/download/trunk/nightly/latest/
[3] JMockit
http://code.google.com/p/jmockit/
The JMockit Testing Toolkit Tutorial
http://jmockit.googlecode.com/svn/trunk/www/tutorial.html
Unit Testing With TestNG and JMockit
http://java.dzone.com/articles/unit-testing-with-testng-and-j

Wednesday, 21 March 2012

1Z1-805 Upgrade to Java SE 7 Programmer


Woohoo!

I have successfully completed the "1Z1-805 Upgrade to Java SE 7 Programmer" Exam, taken on the 6th of January 2012. My score was 70%. The required passing score was 60%.

Beta Exams? What's that?

No doubt you've noticed that the exam I took is called the "1Z1-805 Upgrade to Java SE 7 Programmer", while the one that can be taken at Oracle is called "1Z0-805 Upgrade to Java SE 7 Programmer".[1] The numbers are different.
It turns out that new exams have a "Beta" period.

“Beta exams are pre-production exams used by Oracle to evaluate new exam questions with the participation of a live audience.”[2]

Taking the exam during this Beta period has the following advantages/disadvantages:
  • reduced cost.
    I had to pay 25 € instead of 125 €
  • length.
    While the exam usually takes about an hour, the Beta exam takes three hours.
  • exam number.
    Beta Exam numbers begin with "1Z1"; real Exam numbers begin with "1Z0".
  • number of questions.
    The number of questions is usually more than 175.
  • validity.
    Both normal exams and beta exams are, when passed, perfectly valid.
  • grading.
    Due to the nature of the Beta exams, grading takes a great deal more time. I had my results in on March 20 2012. In my case this translates to 2½ months waiting. *
  • practice.
    As these new exams concern new software practices/technologies and the like, you might have to dig a little deeper to find out the ins and outs of the material to study. There are few to no practice exams yet, though the Oracle (Java) website has some good texts.[3]
  • limited period.
    As the Beta exam can only be taken during a specific period, there is a deadline for you to prepare. Once the official exam goes live, the Beta exam is gone.

*) Update 15 April 2013: the reason the grading takes a great deal more time is that you are graded based on your answers to the questions that will make it to the official Production Exam. Since that exam is not available/created until 10 weeks later, that explains the delay.

References

[1] Upgrade to Java SE 7 Programmer
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?lang=US&p_exam_id=1Z0_805
[2] Oracle Certification Program Beta Exams
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=182&lang=US
[3] JDK 7 Adoption Guide
http://docs.oracle.com/javase/7/docs/webnotes/adoptionGuide/index.html

Monday, 12 March 2012

Math

I was experimenting with the new MathML, a markup language for displaying equations in your browser.

There are, apparently, a lot of different ways of displaying equations. Most of them are JavaScript libraries[3][4].

I'm opting for "a standard", described at [1].

Thank heavens that it is possible to use MathML (3.0) without having to use XHTML and the other requirements of MathML 2.0.[2]

Unfortunately, its support in browsers is still sorely lacking. For example, long division is one of the new things in MathML, and it is either not well supported, or I am doing something seriously wrong.

Examples


The Quadratic Formula


x = (-b ± √(b² − 4ac)) / 2a

Long Division


[The MathML long-division example is omitted here: it renders as a garbled string of Arabic-Indic digits in most browsers.]

Well, the long division example shows in my browser as utter junk. I hope someone else has more luck viewing it. Let me know. The examples in [5] also show non-working long division.

Browsers


Math           MSIE 9  Chrome 17  FF 10  FF 8  FF 6  Opera 11
Equations      X       X          O      O     O     X
Long division  X       X          X      X     X     X

[1] MathML 3.0
http://www.w3.org/Math/
[2] MathML 3.0 Spec
http://www.w3.org/TR/MathML3/
[3] Math Jax
http://www.mathjax.org/
[4] Html5MathML
http://html5mathml.googlecode.com/svn/trunk/test1.html
"left/\right"
a notation that is commonly used in the Netherlands for Long Division[2]
MathML, Firefox, and Firemath
http://lwn.net/Articles/440313/
[5] MathML 3.0 Spec Test Suite
http://www.w3.org/Math/testsuite/build/mathml3/frameset-full.xhtml
An introduction to MathML
http://www.ibm.com/developerworks/xml/library/x-mathml3/
Blogging Mathematics
http://holdenweb.blogspot.com/2011/11/blogging-mathematics.html
Mozilla MathML Project
https://developer.mozilla.org/en/Mozilla_MathML_Project
Firefox Mathml Demo
http://www.mozilla.org/projects/mathml/demo/
Firemath - Editor - Plugin for Firefox
http://www.firemath.info/

Wednesday, 29 February 2012

JDK7 EJB3.1 and Netbeans Project (Part II) - Hibernate and Transactions

Part I - Introduction, Part II - Hibernate and Transactions, Part III - Testing

Hibernate LazyInitializationException


In a Model-View-Controller pattern, the part that deals primarily with transactions and Hibernate is the Model. This means the View, which needs the data to render the result to the user, is outside the transaction; in Hibernate this often causes LazyInitializationExceptions, especially when traversing to proxies of collections inside the entities. Several ways to prevent this are described in the Open Session In View article.

They are summarized below.
  1. use an interceptor: when the server is hit, automatically start a transaction; when the result is transmitted back, automatically close/commit the transaction
  2. make sure the Model provides all the data to the View, so the View does not run into the LazyInitializationException
  3. have the View open a new transaction to retrieve the data, after the Model is finished (which is a really, really bad idea)
  4. have the framework deal with it
I prefer the last option: have the framework deal with it. At work, for example, this is done by using JBoss Seam, and I must say, I've never had to deal with LazyInitializationExceptions.

Enterprise Java Beans - The Old Way


The good part of Enterprise Java Beans is that they provide transaction support at the container level, so you, as a developer, do not need to be concerned with it. The bad part is that accessing an Enterprise Java Bean requires either another Enterprise Java Bean or a call to the InitialContext, like in the code below.

/**
 * Retrieve my gamebean.
 */

private GameBeanLocal lookupGameBeanLocal()
{
    GameBeanLocal gbl = null;
    try
    {
        javax.naming.Context c = new InitialContext();
        gbl = (GameBeanLocal) c.lookup("java:global/game/game-ejb/GameBean!mmud.beans.GameBeanLocal");
    } catch (NamingException ne)
    {
        itsLog.throwing(this.getClass().getName(), "lookupGameBeanLocal", ne);
        throw new RuntimeException(ne);
    }
    itsLog.exiting(this.getClass().getName(), "lookupGameBeanLocal");
    if (gbl == null)
    {
        throw new NullPointerException("unable to retrieve GameBean");
    }
    return gbl;
}
This is the code usually used in the WAR file of your EAR file to contact your Enterprise Java Beans. Any Hibernate entities the EJBs return suffer from the LazyInitializationException.

Enterprise Java Beans 3.1


But now there's Enterprise Java Beans 3.1, which solves this problem with the following new items:
  • EJBs can be contained inside your WAR
  • Context and Dependency Injection works in most (more) cases

For example, the following Enterprise Java Bean was put inside the WAR, annotated with REST annotations, and it uses Hibernate entities.

/**
 * Comment Enterprise Bean, maps to a Comment Hibernate Entity.
 * @author mr. Bear
 */

@Stateless
@Path("/comments")
public class CommentBean
{
    @PersistenceContext(unitName = "myDataSource")
    private EntityManager em;

    @EJB
    JobBean jobBean;

    protected EntityManager getEntityManager()
    {
        return em;
    }

    public CommentBean()
    {
    }

    @POST
    @Consumes(
    {
        "application/xml", "application/json"
    })
    public void create(Comment entity)
    {
        getEntityManager().persist(entity);
    }

    @PUT
    @Consumes(
    {
        "application/xml", "application/json"
    })
    public void edit(Comment entity)
    {
        getEntityManager().merge(entity);
    }

    @DELETE
    @Path("{id}")
    public void remove(@PathParam("id") Long id)
    {
        getEntityManager().remove(find(id));
    }

    @GET
    @Path("{id}")
    @Produces(
    {
        "application/xml", "application/json"
    })
    public Comment find(@PathParam("id") Long id)
    {
        return getEntityManager().find(Comment.class, id);
    }
}

The Entity has appropriate annotations to indicate it can be converted to JSON and/or XML.
/**
 * Comment Entity mapped to the Comment table in the database.
 * @author mr. bear
 */

@Entity
@Table(name = "Comment")
@XmlRootElement
public class Comment implements Serializable
{
    private static final long serialVersionUID = 1L;
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Basic(optional = false)
    @Column(name = "id")
    private Long id;
    @Size(max = 255)
    @Column(name = "author")
    private String author;
    @Basic(optional = false)
    @NotNull
    @Column(name = "submitted")
    @Temporal(TemporalType.TIMESTAMP)
    private Date submitted;
    @Lob
    @Size(max = 65535)
    @Column(name = "comment")
    private String comment;
    @JoinColumn(name = "galleryphotograph_id", referencedColumnName = "id")
    @ManyToOne(optional = false)
    private GalleryPhotograph galleryphotographId;

    public Comment()
    {
    }

    public Comment(Long id)
    {
        this.id = id;
    }

    public Comment(Long id, Date submitted)
    {
        this.id = id;
        this.submitted = submitted;
    }

    public Long getId()
    {
        return id;
    }

    public void setId(Long id)
    {
        this.id = id;
    }

    public String getAuthor()
    {
        return author;
    }

    public void setAuthor(String author)
    {
        this.author = author;
    }

    public Date getSubmitted()
    {
        return submitted;
    }

    public void setSubmitted(Date submitted)
    {
        this.submitted = submitted;
    }

    public String getComment()
    {
        return comment;
    }

    public void setComment(String comment)
    {
        this.comment = comment;
    }

    @JsonIgnore
    @XmlTransient
    public GalleryPhotograph getGalleryphotographId()
    {
        return galleryphotographId;
    }

    public void setGalleryphotographId(GalleryPhotograph galleryphotographId)
    {
        this.galleryphotographId = galleryphotographId;
    }
}
And, voilà, no more LazyInitializationExceptions, no more retrieving EJBs through the InitialContext, no more EARs containing WARs and EJB JARs.

Infinite Recursion


One of the problems that occur, when you do NOT have any LazyInitializationExceptions, is Infinite Recursion. This happens when your Hibernate entities refer to each other, and in a REST service, Jersey tries to flatten the structure into JSON or XML for transmission.

This could be the case, in the example above, if there was a collection of comments in galleryphotograph, and a reference to the respective galleryphotograph in the comments.

In order to solve this, make sure to use @XmlTransient and @JsonIgnore in the appropriate places.

Conclusion


The last paragraph, "Can't this be done easier", in the Open Session In View article is awesome. It provides the answer: the framework should handle all the transaction management, instead of you having to provide it yourself.

And now this time has come! The new EJB 3.1 version allows you to put EJBs right there in your WAR! Either as a separate JAR file, or as class files. The same classloader will pick them up and you can use them in your classes via Dependency Injection as much as you like!

It does mean there is no modularization, but in my experience modularization is only a requirement for the exceptionally high-end big projects.

References

Open Session In View
https://community.jboss.org/wiki/OpenSessionInView
Data Transfer Objects
http://martinfowler.com/eaaCatalog/dataTransferObject.html
Wikipedia : Data Transfer Object
http://en.wikipedia.org/wiki/Data_transfer_object
Java Persistence With Hibernate
Christian Bauer, Gavin King
Is Java EE 6 War The New EAR? The Pragmatic Modularization And Packaging
http://www.adam-bien.com/roller/abien/entry/is_java_ee_6_war

Sunday, 19 February 2012

JDK7 EJB3.1 and Netbeans Project (Part I) - Introduction

Part I - Introduction, Part II - Hibernate and Transactions, Part III - Testing

Introduction


I've tasked myself with learning the new things available in JDK 7 and EJB 3.1 and how they integrate with Netbeans. In order to do so, in my experience, it is most gratifying to pick up a new project using these new technologies.

In this case, as I was wondering at the time what to do with my old photographs, I decided to start up a project called YourPersonalPhotographOrganiser, which is nothing more than a simple Photo Gallery.

You can find the netbeans project on github at https://github.com/maartenl/YourPersonalPhotographOrganiser. Just check it out into your ~/NetBeansProjects/YourPersonalPhotographOrganiser directory, and see how far you get.

It's a work in progress, but it's at the stage where there's something more or less workable. Let me remind you that this software is for use at your own risk. Use it locally, as there is NO security (neither authentication nor authorization) implemented at the moment.

Requirements


  1. simple database, easy to make changes directly, if so required
  2. used for home use
  3. no authentication or authorization required
  4. helps me to understand JDK 7, Glassfish and EJB 3.1, by using all the new stuff in there.
  5. absolutely NO changing of the photographs, all changes are done in java, in memory, in glassfish.*
  6. flexible in where these photographs are located (no need to keep them in the webdir, for example)

*) I've had too many instances where:
  1. changing files from webinterface is a security risk, and requires proper access rights.
  2. changing files causes the extra data put into the jpegs by photo cameras to be discarded
  3. changing files potentially causes deterioration of the quality of the jpegs
  4. changing files has sometimes caused the file to be damaged in some way
  5. changing files makes it impossible to determine if the photo is already present in your collection

Technical

Some of the (new) stuff that is being used.
  1. JDK7 (Look for "JDK7" in the sourcecode)
    1. multiple catch
    2. try-with-resources
    3. new switch statement
    4. diamond-notation
    5. filevisitor interface
    6. Path class usage
  2. EJB 3.1
    1. no local interfaces on beans
    2. EJBs inside the WAR, no longer is an EAR required
    3. Improved Context and Dependency Injection
  3. Netbeans IDE 7.0.1.
  4. GlassFish Server Open Source Edition 3.1.1 (build 12).
  5. JPA (Hibernate)
  6. REST (Jersey)
  7. MySQL
  8. JQuery
  9. HTML, CSS, JavaScript and AJAX
  10. JSON

Database Schema

The database schema below shows the used Hibernate Entities. They have the same name as the tables. The database script below should run without errors on your average MySQL database.
drop table if exists Log;
drop table if exists Tag;
drop table if exists Comment;
drop table if exists GalleryPhotograph;
drop table if exists Gallery;   
drop table if exists Photograph;
drop table if exists Location;

create table Location (
 id bigint not null auto_increment primary key,
 filepath varchar(512)
);

create table Photograph (
 id bigint not null auto_increment primary key,
 location_id bigint not null,
 filename varchar(255),
 relativepath varchar(1024),
 taken timestamp,
 hashstring varchar(1024),
 filesize bigint,
 angle int,
 foreign key (location_id) references Location (id)
);

create table Gallery (
 id bigint not null auto_increment primary key,
 name varchar(80),
 description text,
 creation_date timestamp not null default current_timestamp,
 parent_id bigint,
 highlight bigint,
 sortorder int not null,
 foreign key (parent_id) references Gallery (id),
 foreign key (highlight) references Photograph (id)
);

create table GalleryPhotograph (
 id bigint not null auto_increment primary key,
 gallery_id bigint not null,
 photograph_id bigint not null,
 name varchar(255),
 description text,
 sortorder bigint,
 foreign key (gallery_id) references Gallery (id),
 foreign key (photograph_id) references Photograph (id)
);

create table Comment (
 id bigint not null auto_increment primary key,
 galleryphotograph_id bigint not null,
 author varchar(255),
 submitted timestamp,
 comment text,
 foreign key (galleryphotograph_id) references GalleryPhotograph (id)
);

create table Tag (
 tagname varchar(80) not null,
 photograph_id bigint not null,
 primary key (tagname, photograph_id),
 foreign key (photograph_id) references Photograph (id)
);

create table Log (
 id bigint not null auto_increment primary key,
 jobdate timestamp not null default current_timestamp,
 joblog blob not null
);

-- only allows a photograph to appear once in a gallery
create unique index unique_per_photograph_per_gallery
on GalleryPhotograph (gallery_id, photograph_id);

Update 1: Moved angle field from GalleryPhotograph over to Photograph
Update 2: It's nice to have a script for creating the database, but an ORM can automatically generate the proper tables for you if you like.
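To illustrate Update 2: with Hibernate as the JPA provider, a single property in persistence.xml lets the ORM generate the tables from the entities. A minimal sketch; the persistence unit name matches the one used in CommentBean, but the datasource JNDI name is illustrative, not from the project:

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="myDataSource" transaction-type="JTA">
    <!-- the JNDI name below is an example, adjust to your Glassfish setup -->
    <jta-data-source>jdbc/yourpersonalphotographorganiser</jta-data-source>
    <properties>
      <!-- let Hibernate create or update the schema from the entities -->
      <property name="hibernate.hbm2ddl.auto" value="update"/>
    </properties>
  </persistence-unit>
</persistence>
```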