Jon's Blog

Coder, ex-BBC. Cycling obsessive and writer for cyclosport.org

XML Processing With Scala

Given Scala's built-in support for XML and its more concise syntax for dealing with iterables and collections, I was interested to see what some of our Java XML parsing code, written using dom4j, could look like in the Scala world.

Background

The idea is that we have a number of assets, an asset representing, for example, a story, an index page, a picture gallery or a page with media. Each asset has its own XML representation and is constructed from common snippets such as item-meta, page-options, media etc. See story.xml.

Our requirement is to take this XML, our own internal representation of assets in our content store, transform it into our object model and serve it up as JSON or XML. You may ask why, but isolating our internal format means we are free to change it without affecting the external representations of our assets, and it also means we can easily customise the output and its format.

XML processing in Java is typically done with SAX or DOM using a library like dom4j. We opted for the readability and ease of use of DOM over SAX, and used something loosely based on the Strategy design pattern: a ParserFactory allocates the correct asset parser based on the type of asset, and that parser parses the XML and creates our Java object model. XML and JSON generation is handled by JAXB and Jackson.
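For a flavour of the Scala side, here is a minimal sketch. The asset and headline names are made up (item-meta and media come from the snippets mentioned above), so treat it as a stand-in for our real schema rather than the actual parser; it shows how far XML literals and the \ and \\ selectors cut the boilerplate:

import scala.xml.Elem

object StoryParser {

  // A cut-down stand-in for story.xml; the real asset schema is richer.
  val story: Elem =
    <asset type="story">
      <item-meta>
        <headline>An example headline</headline>
      </item-meta>
      <media href="/images/example.jpg"/>
      <media href="/images/example2.jpg"/>
    </asset>

  def main(args: Array[String]): Unit = {
    val assetType = (story \ "@type").text                    // attribute lookup
    val headline  = (story \ "item-meta" \ "headline").text   // direct children
    val mediaRefs = (story \\ "media").map(m => (m \ "@href").text) // any depth

    println(s"$assetType: $headline [${mediaRefs.mkString(", ")}]")
  }
}

The equivalent dom4j code needs explicit traversal and casting; in Scala the selectors return a NodeSeq, which behaves like any other collection.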

Some Bash Fu With Curl, Grep, Awk and Pipes

I wanted to test out the HTTP response codes from a number of our API endpoints. There are numerous ways to approach this, but I thought it would be fun to use the command line, and it turned out pretty straightforward.

First off I created a file with some URLs in it, e.g.

http://www.google.co.uk
http://www.bbc.co.uk
http://www.arsenal.com

And ran the following command:

[root@pal ~]# grep "$1" urls.txt | awk '{print "curl --write-out \""$0"=http-%{http_code}\\n\" --silent --output /dev/null "$0}' | sh >> responses.txt

This takes each line (a URL), builds a curl command for it, and appends the URL and its HTTP response code to a file called responses.txt:

http://www.google.co.uk=http-200
http://www.bbc.co.uk=http-200
http://www.arsenal.com=http-302

You can then grep to count the occurrences of each HTTP response code, e.g.

[root@pal ~]# grep -c http-200 responses.txt
2

In reality I ran the following command, as I wanted to do something a bit more complicated with curl by specifying certs and custom headers:

grep "$1" urls.txt | awk '{print "curl --cert /etc/pki/my-pem.pem --cacert /etc/my-ca.pem -H \"Accept:application/json\" -H \"X-Candy-Audience:Domestic\" -H \"X-Candy-Platform:HighWeb\" --write-out \""$0"=http-%{http_code}\\n\" --silent --output /dev/null "$0}' | sh >> responses.txt

I'm not suggesting this is the best way to do this, but it's nice not to need any frameworks or libraries. If I want to do this more regularly I will probably use a Cucumber feature and set up a job in our CI environment, not forgetting a nice-looking report, but for now this is fine.

Creating a HTTP Response Version Provider With Apache CXF Filters

I've been using Apache CXF, which has some nice support for building JSR-311-compliant JAX-RS services, and recently had a requirement to send back a custom version header as part of our API responses.

It is generally accepted that versioning APIs is a good thing, and adopting the usual Major.Minor.Patch contract covers our needs; we then need a standard method of reporting and documenting the APIs and their versions. Without getting into the ins and outs of how you should version an API, we have a company-wide recommendation of adding a custom header, X-API-Version: d.d.d, to each response. I have to say I'm in favour of this over embedding the version in the URL, and it is also the method adopted by Sun's Cloud API, which was commonly held to be a benchmark implementation of REST.

Using Apache CXF filters led to a fairly elegant and simple solution, which could equally be applied to add any other additional headers, but I found the documentation a little confusing so decided to write a post on it.
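The shape of the solution is sketched below using CXF's ResponseHandler filter interface from the JSR-311-era API. The class name and version value here are illustrative; in reality the version string would be injected from configuration or the build rather than hard-coded:

import javax.ws.rs.core.Response;

import org.apache.cxf.jaxrs.ext.ResponseHandler;
import org.apache.cxf.jaxrs.model.OperationResourceInfo;
import org.apache.cxf.message.Message;

public class VersionResponseHandler implements ResponseHandler {

    private static final String VERSION_HEADER = "X-API-Version";

    // Illustrative; in practice this comes from configuration or the build.
    private final String version = "1.0.0";

    public Response handleResponse(Message message, OperationResourceInfo ori,
                                   Response response) {
        // Rebuild the outgoing response with the version header appended.
        return Response.fromResponse(response)
                       .header(VERSION_HEADER, version)
                       .build();
    }
}

The handler is then registered as a provider on the JAX-RS endpoint (for example via jaxrs:providers in the Spring configuration), after which every response picks up the header.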

Useful XQuery

As I mentioned in my previous blog post, I've recently been doing some work building RESTful APIs backed by a MarkLogic XML content store, using XQuery for document retrieval. That has led me to the following useful snippets of XQuery, which I thought I would share. They could easily be altered to fit slightly different requirements.

List all the document URIs in the database
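Assuming the database's URI lexicon is enabled, cts:uris() does this in one call; without the lexicon you can walk every document instead:

(: with the URI lexicon enabled :)
cts:uris()

(: without the lexicon :)
for $doc in fn:collection()
return xdmp:node-uri($doc)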


List all the document URIs in the database based on some criteria
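One way to express that (the exact predicates depend on your URIs) is to push the directory restriction into the lexicon lookup and filter the rest in the FLWOR:

for $uri in cts:uris((), (), cts:directory-query("/sitemap/", "infinity"))
where fn:ends-with($uri, "sitemap.xml")
  and fn:not(fn:contains($uri, "archive"))
return $uri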


In this case we are restricting to documents that exist in the /sitemap/ directory, end with sitemap.xml and do not contain archive in the URI.

Delete all the documents in the database
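The brute-force version simply iterates over every document; there is no undo, so use with care:

for $doc in fn:collection()
return xdmp:document-delete(xdmp:node-uri($doc))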


Delete all the documents in the database based on some criteria
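The same lexicon-driven pattern as the listing query above, deleting each match instead of returning it:

for $uri in cts:uris((), (), cts:directory-query("/content/", "infinity"))
where fn:ends-with($uri, "NEWSML.xml")
return xdmp:document-delete($uri)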


In this case we are deleting documents that are in the /content/ directory and whose URI ends with NEWSML.xml.

Query a document by an element with a certain value
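With illustrative names (asset and id stand in for whatever is in your schema), a plain XPath predicate does the job:

fn:doc()//asset[id = "12345"]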


This could easily be altered to match on any element or attribute in your XML.

Actually a much more efficient way of doing the same is:
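Using the same illustrative names, cts:search() resolves the matches from the universal index instead of filtering every document through XPath:

cts:search(fn:collection(), cts:element-value-query(xs:QName("id"), "12345"))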

Evaluating MarkLogic XQuery Performance

I've recently been doing some work building RESTful APIs backed by a MarkLogic XML content store, using XQuery for document retrieval. This post details the steps involved in tuning what was deemed to be the simplest of queries for optimum performance, using some useful MarkLogic extensions for profiling.

Original Query
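The query was of roughly this shape (asset and the id attribute are illustrative names, not our real schema): an XPath attribute predicate evaluated over the documents in the database.

fn:doc()//asset[@id = "9a8b7c"]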


XQuery for looking up documents based on the value of a given attribute in the XML, using XPath. (What could be simpler!)

Evaluating Performance


By default, MarkLogic indexes the document structure, and attribute values are indexed as part of the universal index.

1. Adding xdmp:query-meters

By adding xdmp:query-meters() to the query we get some immediate feedback about how the query performs, including elapsed time and the number of fragments and documents that were selected. Altering the query as below gives us some interesting metrics: the query is taking nearly 2 seconds.
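Keeping the illustrative names from the original query, returning xdmp:query-meters() alongside the result gives the metrics for that evaluation:

let $result := fn:doc()//asset[@id = "9a8b7c"]
return ($result, xdmp:query-meters())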

Immediately something looks a bit suspicious, as all the documents in the database are being selected, which would indicate that the query is not making effective use of MarkLogic's indexes.