A bit of brief background: we have a number of assets, each representing, for example, a story, an index page, a picture gallery or a page with media. Each Asset has its own XML representation and is constructed from common snippets such as item-meta, page-options and media. See story.xml.
Our requirement is to take this XML, which is the internal representation we use for assets in our content store, and transform it into our object model, serving it up as JSON or XML. You may ask why: isolating our internal format means we are free to change it without affecting the external representations of our assets, and it also means we can easily customise the output and its format.
XML processing in Java is typically done with SAX or DOM using a library like dom4j. We opted for the readability and ease of use of DOM over SAX, and used something loosely based on the Strategy design pattern: a ParserFactory allocates the correct type of asset parser based on the type of asset, which parses the XML and creates our Java object model. XML and JSON generation is handled by JAXB and Jackson.
The recommendation in the Scala documentation is to implement a toXML method on the class and a corresponding fromXML method inside a companion object, which takes care of creating said object, e.g. a Story class which extends an Asset class.
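The original listing has been lost from this page; below is a minimal sketch of the shape such a class and companion object might take, assuming the scala.xml library is available. The field names here are assumptions, not the original code.

```scala
import scala.xml.{Elem, Node}

// Hypothetical minimal model; names and fields are assumptions, not the original code
abstract class Asset

case class Story(id: String, headline: String) extends Asset {
  // Serialise this Story back to our internal XML representation
  def toXML: Elem =
    <story id={id}>
      <headline>{headline}</headline>
    </story>
}

object Story {
  // The companion object takes care of creating the object from XML
  def fromXML(node: Node): Story =
    Story(
      id = (node \ "@id").text,
      headline = (node \ "headline").text
    )
}
```

Round-tripping `Story.fromXML(story.toXML)` should yield an equal Story, which makes this style very easy to unit test.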
Parsing and iterating over elements is much neater and more concise than the Java equivalent, e.g. a byline is made up of a name, a title and a list of Person objects.
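The listing is missing; here is a hedged sketch of the byline parsing described, with hypothetical element and field names (person, role etc. are assumptions):

```scala
import scala.xml.{Elem, Node}

case class Person(name: String, role: String) {
  def toXML: Elem = <person><name>{name}</name><role>{role}</role></person>
}

object Person {
  def fromXML(node: Node): Person =
    Person((node \ "name").text, (node \ "role").text)
}

case class Byline(name: String, title: String, people: List[Person])

object Byline {
  def fromXML(node: Node): Byline =
    Byline(
      name  = (node \ "name").text,
      title = (node \ "title").text,
      // .toList on the NodeSeq of <person> nodes, then map each to a Person
      people = (node \ "person").toList.map(Person.fromXML)
    )
}
```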
In my opinion this is where you start to see the real power of Scala: by calling .toList on the sequence of person nodes you can then map it to a list of Person objects. Person has its own corresponding toXML implementation as well. We remove those horrible null checks and cumbersome for loops and, my favourite, all the static XPath constants. No need to worry about namespaces either.
Here is the equivalent Java for parsing the byline and person elements, and this doesn’t even include the POJO for the byline. It is much more cumbersome and messy, and you get no feel for what the corresponding structure of the XML will look like when we deserialize a byline.
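The original Java listing is also lost; a hedged reconstruction of the kind of DOM code being contrasted might look like this (element names and the class name are assumptions):

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

// Hypothetical reconstruction of the verbose Java equivalent; names are assumptions
class BylineParser {

    static List<String> parsePersonNames(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));

        List<String> names = new ArrayList<>();
        NodeList people = doc.getElementsByTagName("person");
        // The null checks and index-based loops Scala lets us avoid
        if (people != null) {
            for (int i = 0; i < people.getLength(); i++) {
                Element person = (Element) people.item(i);
                NodeList nameNodes = person.getElementsByTagName("name");
                if (nameNodes != null && nameNodes.getLength() > 0) {
                    names.add(nameNodes.item(0).getTextContent());
                }
            }
        }
        return names;
    }
}
```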
We are able to eliminate the existing ParserFactory class and have an AssetFactory which allocates the right type of Asset, e.g.
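The listing is gone; a sketch of how such a factory might pattern match on the root element’s label (the asset types and element names are assumptions):

```scala
import scala.xml.Node

// Hypothetical minimal model; names are assumptions, not the original code
abstract class Asset
case class Story(id: String) extends Asset
case class Gallery(id: String) extends Asset

object AssetFactory {
  // Allocate the right type of Asset based on the root element's name
  def fromXML(node: Node): Option[Asset] = node.label match {
    case "story"   => Some(Story((node \ "@id").text))
    case "gallery" => Some(Gallery((node \ "@id").text))
    case _         => None
  }
}
```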
I think the code is more testable now as well, and I prefer the readability of this style of test using something like ScalaTest. I created my own convenience trait, XmlDataSpec, which is essentially a way to minimise the number of mixins used in our tests, and FixtureTestUtils gives us a way to load in XML fixtures. You can load in snippets from file or inline the XML element you wish to test, and it all just seems more natural, readable and less verbose than the Java equivalent.
I’ve put the sample code on GitHub, here. You’ll need SBT installed; once you have checked the project out, simply run sbt test to see the unit tests. In addition, Main.scala is a simple test harness for processing a story. Please remember I’m not a Scala expert. I’m pretty sure there is a way to improve the double .toList calls in Media.scala using zip, and some people may suggest that some of the functions inside .toList aren’t readable, but I think they are if you are familiar with Scala.
Gists for the code are available here.
I’ve seen a lot of complaints about the current implementation of scala.xml, but for simple XML representations and parsing I think it works really well and is much more readable than the Java equivalent. The performance is equal to, if not better than, dom4j.
First off I created a file with some URLs in it, e.g.
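The original file contents are lost; it presumably looked something like this (the URLs are placeholders):

```shell
# Create a file with one URL per line (example URLs are placeholders)
cat > urls.txt <<'EOF'
https://example.com/
https://example.com/news
https://example.com/sport
EOF
```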
And ran the following command:
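The command itself is missing from the page; a hedged reconstruction of a loop with the behaviour described below (file names as in the post, URLs are placeholders so the snippet is self-contained):

```shell
# Placeholder input so the snippet stands alone
printf 'https://example.com/\nhttps://example.com/news\n' > urls.txt

# For each line (url), pass it to curl and write "url status-code" to responses.txt.
# -s silences progress output, -o /dev/null discards the body,
# -w '%{http_code}' prints just the HTTP response code.
while read -r url; do
  echo "$url $(curl -s -o /dev/null -w '%{http_code}' "$url")"
done < urls.txt > responses.txt
```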
This takes each line (url) and passes the argument to curl, before outputting the URL and the HTTP response code for that URL into a file called responses.txt.
You can then grep to count the number of different http response codes e.g.
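The grep commands are gone; a sketch of the idea, with a placeholder responses.txt inlined so the example is self-contained:

```shell
# Sample responses.txt so the example stands alone (contents are placeholders)
printf '%s\n' \
  'https://example.com/ 200' \
  'https://example.com/news 200' \
  'https://example.com/old 404' > responses.txt

# Count occurrences of a given response code
grep -c ' 200$' responses.txt
grep -c ' 404$' responses.txt

# Or summarise all codes at once
awk '{print $2}' responses.txt | sort | uniq -c
```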
In reality I ran the following command, as I wanted to do something a bit more complicated with curl by specifying certs and custom headers.
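That command is also lost; a hedged reconstruction of the shape it might have taken. The header value and the commented-out cert paths are placeholders, not the originals:

```shell
# Placeholder input so the snippet stands alone
printf 'https://example.com/\nhttps://example.com/news\n' > urls.txt

# As before, but with a custom header; for client certs you would additionally
# pass something like:  --cert /path/to/client.pem --key /path/to/client.key
while read -r url; do
  echo "$url $(curl -s -o /dev/null -w '%{http_code}' \
    -H 'Accept: application/xml' "$url")"
done < urls.txt > responses.txt
```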
I’m not suggesting this is the best way to do this, but it is nice not to need any frameworks or libraries. If I want to do this more regularly I will probably use a Cucumber feature and set up a job in our CI environment, not forgetting a nice-looking report, but for now this is fine.
It is generally accepted that versioning APIs is a good thing, and adopting the general contract of Major.Minor.Patch covers our needs; now we need a standard method of reporting and documenting the APIs and their versions. Before we get into the ins and outs of how you should version an API: we have a company-wide recommendation of adding a custom response header, X-API-Version: d.d.d, to each response. I have to say I’m in favour of this over embedding the version in the URL, and it is also the method adopted by Sun’s Cloud API, which was commonly held to be a benchmark implementation of REST.
Using Apache CXF filters led to this fairly elegant and simple solution, which could be applied to add any kind of additional header, but I found the documentation a little confusing so decided to write a post on it.
Perhaps you want to do custom logging or additional processing but, more interestingly, a response handler implementation can optionally overwrite or modify the application Response, or modify the output message; it is the first of these we are particularly interested in.
By implementing the ResponseHandler interface:
And overriding handleResponse:
Starting off with the simplest scenario, which for me is that the X-API-Version header should be present in the response. Normally I’d start with the assert statement and work backwards using my Eclipse shortcuts. I’ve skipped ahead a few steps here, but hopefully you get the idea.
So remember, we are just doing the simplest thing possible to make the test pass, which in this case is to add the new header to the final response.
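The listings for these steps are missing from the page. The essence of the implementation can be sketched with plain-JDK stand-ins so it runs without CXF on the classpath; every name here is hypothetical, and the real handler implements org.apache.cxf.jaxrs.ext.ResponseHandler and works with javax.ws.rs.core.Response.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-in for the JAX-RS Response (headers map + entity body)
class StubResponse {
    final Map<String, List<Object>> metadata = new HashMap<>();
    Object entity;
}

class VersionResponseHandler {
    private final String version;

    VersionResponseHandler(String version) {
        this.version = version;
    }

    // Mirrors handleResponse: copy the original response (as Response.fromResponse
    // would, shallowly) and add the X-API-Version header to the copy.
    StubResponse handleResponse(StubResponse original) {
        StubResponse copy = new StubResponse();
        copy.entity = original.entity;           // preserve the message body
        copy.metadata.putAll(original.metadata); // preserve existing headers
        List<Object> values = new ArrayList<>();
        values.add(version);
        copy.metadata.put("X-API-Version", values);
        return copy;
    }
}
```

In the real CXF filter the copy-and-add step is `Response.fromResponse(response).header("X-API-Version", version).build()`.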
We obviously want to preserve the original response object and message body on the outgoing response, so I add a test for that now, introducing some mocks to mock out the original response object.
fromResponse performs a shallow copy of the existing Response.
You can see the tidied up finished test class and implementation here:
Plug it into your CXF JAX-RS providers Spring configuration along with any other providers, using your Maven project version number.
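The original configuration listing is lost; a hedged sketch of what the jaxrs:server providers block might look like, assuming the surrounding beans file already declares the jaxrs namespace. The bean class name is hypothetical, and ${project.version} assumes Maven resource filtering is injecting the project version:

```xml
<jaxrs:server id="apiServer" address="/">
  <jaxrs:serviceBeans>
    <ref bean="myResource"/>
  </jaxrs:serviceBeans>
  <jaxrs:providers>
    <!-- hypothetical provider bean; version filtered in from the Maven build -->
    <bean class="com.example.VersionResponseHandler">
      <constructor-arg value="${project.version}"/>
    </bean>
  </jaxrs:providers>
</jaxrs:server>
```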
In this case we are restricting to documents that exist in the /sitemap/ directory, end with sitemap.xml and do not contain archive in the URI.
In this case we are deleting documents that are in the /content/ directory and whose URI ends with NEWSML.xml.
This could be easily altered to match on any element or attribute in your xml.
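The two queries described above are missing from the page; hedged XQuery sketches of the operations, assuming the URI lexicon is enabled so cts:uri-match can be used:

```xquery
(: Select sitemap documents: in /sitemap/, ending sitemap.xml, no "archive" in the URI :)
for $uri in cts:uri-match("/sitemap/*sitemap.xml")
where fn:not(fn:contains($uri, "archive"))
return $uri

(: Delete documents in /content/ whose URI ends with NEWSML.xml :)
for $uri in cts:uri-match("/content/*NEWSML.xml")
return xdmp:document-delete($uri)
```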
Actually a much more efficient way of doing the same is:
Slightly more advanced…
Very useful for getting a view of the size of your documents
This is how the results are returned
Why is this useful, I hear you say: it demonstrates how to use xdmp:node-replace and xdmp:node-insert-child.
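The original listing is gone; a hedged sketch of the two update functions, where the document URI and element names are assumptions:

```xquery
(: Hypothetical document and element names :)
let $story := fn:doc("/content/example.xml")/story
return (
  (: swap the old headline element for a new one :)
  xdmp:node-replace($story/headline, <headline>Updated headline</headline>),
  (: append a new child element to the story :)
  xdmp:node-insert-child($story, <byline>New byline</byline>)
)
```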
XQuery for looking up documents based on the value of a given attribute in the XML using XPath (what could be simpler!).
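The query itself is lost; a hedged reconstruction of the kind of lookup described, consistent with the discussion that follows (the uuid attribute and its value are placeholders):

```xquery
/*[/*/@uuid = "abc-123"]
```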
By default MarkLogic indexes the document structure, and attributes are indexed as part of the universal index.
By adding xdmp:query-meters() to the query we get some immediate feedback about how the query performs, including elapsed time and the number of fragments and documents that were selected. Altering the above query as below gives us some interesting metrics: the query is taking nearly 2 seconds.
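The altered query is missing; a sketch of the idea, appending xdmp:query-meters() to the hypothetical lookup from above:

```xquery
(: Return the results together with the query's performance meters :)
let $result := /*[/*/@uuid = "abc-123"]
return ($result, xdmp:query-meters())
```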
Immediately something looks a bit suspicious, as all the documents in the database are being returned, which indicates that the query is not making effective use of MarkLogic’s indexes.
This can be verified with xdmp:estimate(), purely focusing on the XPath part of the query e.g.
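The estimate call is missing from the page; against the hypothetical query above it would look like:

```xquery
(: Estimate how many fragments the index-lookup phase alone would select :)
xdmp:estimate(/*[/*/@uuid = "abc-123"])
```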
The evaluator sees the XPath expression above and uses index lookups to match some sequence of fragments in the database. xdmp:estimate() gives an estimate of the number of documents in a sequence and is directed at the index-lookup phase, i.e. “search”.
Next, the evaluator will fetch those matching fragment(s), if any, from the database. Now we are back in the evaluation phase. It will check to make sure the nodes really match: this is known as “filtering”. Then it will evaluate the entire XPath.
So what we are saying for the XQuery above is that the number of matching fragments is all the documents in the database, which then get filtered; we are not making use of any of the available MarkLogic indexing, which means the query is very inefficient.
This further verifies that all the documents in the database are being selected and that we are not fully leveraging indexes.
/* accesses the entire database and returns every root element in the database, but we do it a second time in the predicate, which is very expensive.
Changing the XPath to below and re-running the above steps results in a much more positive result, and look how quick the query is!
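The improved XPath is also missing; following the hypothetical example above, anchoring the predicate to the node being tested (rather than re-querying the database inside it) makes the lookup index-resolvable:

```xquery
/*[@uuid = "abc-123"]
```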
So far we have done our query evaluation ignoring the final piece, which is to plug in the MarkLogic xinc:node-expand function, which resolves any XIncludes in the results.
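The expanded query is lost; a hedged sketch combining XInclude expansion with the hypothetical lookup and meters from above:

```xquery
import module namespace xinc = "http://marklogic.com/xinclude"
    at "/MarkLogic/xinclude/xinclude.xqy";

(: Expand any XInclude references in the matching document, then report meters :)
xinc:node-expand(/*[@uuid = "abc-123"]),
xdmp:query-meters()
```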
Not cached – 6 seconds!!
Cached – 2 seconds
With our new optimised query we can see that the time is much reduced. xinc:node-expand is a MarkLogic extension, so we can’t really do much about the performance of the function itself; however, it is interesting to see how much additional time it adds to the processing, even for a fully optimised query.
From the above it is easy to see that the majority of the query time is spent in the xinc:node-expand function, but we have increased the overall performance dramatically.
Even what is deemed the simplest of XQuery/XPath expressions might be inefficient. MarkLogic won’t tell you how to fix your XQuery/XPath, but it will provide insight into whether your query is utilising indexes and how it is actually running.