Lightweight Application Monitoring with Hawtio and JMX

In my previous life developing solutions for a biometrics company, we invested quite a bit in monitoring tools to know the status and performance of the cluster of servers on which our application was deployed. The tool of choice was RHQ 4.x, and I must say it served us well in many places. But it came with its own complexity, and having discovered the simple but quite efficient Hawtio, I think I have a new permanent member of my arsenal for software delivery and monitoring.

Hawtio is a modular web console that lets you monitor any JVM-based application via JMX, with the help of Jolokia, an agent that exposes JMX over JSON to enable REST-like invocation of MBeans. All you need to do to get Hawtio working on JBoss 7/WildFly is to follow the instructions here.
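To get a feel for what Jolokia buys you: once deployed, reading a JMX attribute becomes a plain HTTP GET that returns JSON. The request below reads the JVM's built-in memory bean; the context path depends on how Jolokia/Hawtio is deployed, and the numbers in the response are purely illustrative:

```
GET http://localhost:8080/hawtio/jolokia/read/java.lang:type=Memory/HeapMemoryUsage

{
  "value": { "init": 268435456, "used": 104857600, "committed": 268435456, "max": 1073741824 },
  "status": 200
}
```

Any MBean your application registers becomes reachable the same way, which is exactly what Hawtio's JMX view builds on.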

My main interest was in profiling the performance of Hibernate within my Java EE application deployed in JBoss 7, and Markus Eisele’s post on using Hawtio to display Hibernate statistics was just on point. Basically, you can follow his instructions and make the following modifications for JBoss 7/WildFly.

1. Include these two classes from Markus’s post in your project: StatisticsService and DelegatingStatisticsService
2. Add a @Singleton @Startup EJB that registers the statistics MBean:

import java.lang.management.ManagementFactory;
import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanRegistrationException;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.NotCompliantMBeanException;
import javax.management.ObjectName;
import org.hibernate.SessionFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Startup
@Singleton
public class HibernateMBeanRegistrar {

    private static final Logger logger = LoggerFactory.getLogger(HibernateMBeanRegistrar.class);

    @Resource(lookup = "java:jboss/MySessionFactory")
    private SessionFactory sessionFactory;

    // @PostConstruct (not @Inject) ensures this runs once the singleton starts up
    @PostConstruct
    public void register() {
        try {
            MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();
            ObjectName on = new ObjectName("Hibernate:type=statistics,application=hibernatestatistics");

            StatisticsService mBean = new DelegatingStatisticsService(sessionFactory.getStatistics());
            mbeanServer.registerMBean(mBean, on);
            logger.info("Hibernate Statistics MBean registered successfully ...");
        } catch (MalformedObjectNameException | InstanceAlreadyExistsException
                | MBeanRegistrationException | NotCompliantMBeanException ex) {
            logger.error("Failed to register Hibernate statistics MBean", ex);
        }
    }
}

3. Add these properties to your persistence.xml:

<property name="hibernate.generate_statistics" value="true"/>
<property name="hibernate.session_factory_name" value="java:jboss/MySessionFactory" /> 

4. Deploy your application alongside the hawtio-no-slf4j-x.x.x.war, as stated in the instructions on installing Hawtio for JBoss 7/WildFly. I renamed the file to hawtio.war to make life easier.
5. Once both Hawtio and your application are deployed, navigate to http://localhost:8080/hawtio. Click on JMX and you should see a Hibernate -> statistics -> hibernatestatistics node. Clicking on that node should show you something like Markus’s view here. You can even add it to the dashboard if you wish, so you get up-to-the-minute figures as your application runs.

 

Hibernate Statistics

In my case it showed that although my entities were being cached in the second-level cache, query caching somehow wasn’t working. So I’ve got work to do.

Hawtio has a lot of plugins to display content from other sources, from Elasticsearch to log files. You should definitely give it a shot. Thanks, Markus, for the Hawtio intro.

Specifying Field Analyzers using Index Templates in Elasticsearch

I’ve been playing around with Elasticsearch recently, and I must say I’m quite impressed with it. However, I’ve had my fair share of poring over the internet to deal with specific challenges in making my application content easily searchable, and one of them was preventing certain fields from being analyzed.

These fields are typically ids that should be stored as they are, not broken down by an analyzer. Some of these ids are UUIDs, others are application-specific ones with hyphen ("-") separators. To match both possibilities, the whole id being supplied or only a portion of it (which requires a wildcard search), these kinds of fields should not be analyzed but handled as-is.

Since all my ids typically have one or more variations of the word “id” somewhere in the mix, the simple solution was to provide an index template that says not to analyze any such field. The saviour was Elasticsearch’s index templates, and here is mine.

{
    "carewex_template": {
        "template": "*",
        "settings": {
            "index.number_of_shards": 2
        },
        "mappings": {
            "_default_": {
                "_all": {"enabled": false},
                "_source": {"compress": true},
                "dynamic_templates": [
                    {
                        "primarykey_template": {
                            "match": "*id",
                            "include_in_all": true,
                            "mapping": {"type": "string", "index": "not_analyzed"},
                            "match_mapping_type": "string"
                        }
                    },
                    {
                        "otherIdkey_template": {
                            "match": "*Id",
                            "include_in_all": true,
                            "mapping": {"type": "string", "index": "not_analyzed"},
                            "match_mapping_type": "string"
                        }
                    },
                    {
                        "thirdIdkey_template": {
                            "match": "*ID",
                            "include_in_all": true,
                            "mapping": {"type": "string", "index": "not_analyzed"},
                            "match_mapping_type": "string"
                        }
                    }
                ]
            }
        }
    }
}

Placing this file as my_template.json in the config/templates directory was all I needed. Whenever I indexed a document, all matching fields were prevented from being analyzed, and my problem was solved.
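With such fields left unanalyzed, both lookup styles then work against the same field. A sketch of the two query shapes, where the field name and UUID are made up for illustration. Exact match of a whole id:

```json
{ "query": { "term": { "patientId": "550e8400-e29b-41d4-a716-446655440000" } } }
```

and a partial id via wildcard:

```json
{ "query": { "wildcard": { "patientId": "550e8400-*" } } }
```

Had the field been analyzed, the hyphenated id would have been split into tokens and neither query would behave as expected.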

So, have fun with Elasticsearch.

Copying a subset of files from a directory on CentOS

I had a huge number of XML files (somewhere close to 100k) and needed to copy just 1000 of them for a few tests on CentOS. I wasn’t interested in the order of the files; any 1000 would do.

After trawling all over the internet, I found this simple solution.

 find $targetDir -maxdepth 1 -type f |head -1000|xargs cp -t $destDir

Here $targetDir is the path to the folder containing the large set of files, and $destDir is the folder into which I wanted to copy the 1000 files.

Sweet and simple. Gleaned from here.
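One caveat: head -1000 takes the first 1000 files in whatever order find emits them, which is directory order rather than random. If a genuinely random sample matters, GNU coreutils’ shuf does the trick. Here is the same pipeline wrapped in a small function (the function name is mine, and it assumes filenames without embedded newlines):

```shell
# copy_sample SRC DST N: copy N randomly chosen regular files from SRC into DST
copy_sample() {
  find "$1" -maxdepth 1 -type f | shuf -n "$3" | xargs -r cp -t "$2"
}
```

So the random equivalent of the original one-liner would be copy_sample "$targetDir" "$destDir" 1000.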

Tweaking Oracle XE for XA Transactions in JBoss 6

I’m currently testing an application in an Oracle 10g XE environment and getting familiar with running Oracle alongside JBoss AS 6 to support XA transactions. After configuring my XA datasource as per JBoss’s documentation, I started up the JBoss server only to be greeted with the following exception.

ORA-12516: TNS:listener could not find available handler with matching protocol stack

As it turns out, my setting for the minimum connection pool size was more than what Oracle XE allows by default (i.e. 49). Thankfully, I found the perfect guide to solving this problem here. Thanks, Andrew.
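For completeness, the gist of that fix is to raise XE’s process limit as SYSDBA and restart the instance. The value 200 below is just an illustration; pick whatever your pool sizes actually need:

```sql
-- Oracle XE ships with a small process limit; sessions is derived from
-- processes, so raising processes is usually enough. Run as SYSDBA.
alter system set processes=200 scope=spfile;
-- then restart for the spfile change to take effect (SQL*Plus commands):
shutdown immediate;
startup;
```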

Having sweated that first part out, I restarted my application server again, only to get the next biggie.

ARJUNA-16027 Local XARecoveryModule.xaRecovery got XA exception XAException.XAER_RMERR: javax.transaction.xa.XAException
 at oracle.jdbc.xa.OracleXAResource.recover(OracleXAResource.java:638) 

It turned out that the user I was using to access my XE database was not XA enabled, with no ability to perform two-phase commits. So I logged into my sqlplus console and entered the following statements, making sure I connected as ‘sysdba’. Note: MYUSER is the name of the user performing the XA transactions.

grant select on pending_trans$ to MYUSER;
grant select on dba_2pc_pending to MYUSER;
grant select on dba_pending_transactions to MYUSER;
grant execute on dbms_system to MYUSER;
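For context, the XA datasource definition all of this supports looked roughly like the following *-ds.xml sketch for JBoss 6. The JNDI name, URL and credentials are illustrative, not my actual values:

```xml
<datasources>
    <xa-datasource>
        <jndi-name>MyOracleXADS</jndi-name>
        <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class>
        <xa-datasource-property name="URL">jdbc:oracle:thin:@localhost:1521:XE</xa-datasource-property>
        <xa-datasource-property name="User">MYUSER</xa-datasource-property>
        <xa-datasource-property name="Password">secret</xa-datasource-property>
        <!-- commonly recommended for Oracle XA on JBoss -->
        <isSameRM-override-value>false</isSameRM-override-value>
        <no-tx-separate-pools/>
        <min-pool-size>5</min-pool-size>
        <max-pool-size>20</max-pool-size>
    </xa-datasource>
</datasources>
```

Note the min-pool-size here; multiply it by the number of XA datasources and you quickly see how a default XE install runs out of processes.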

 

I smiled after that, because my app started up nicely. Let’s see what else Oracle has up its sleeves to terrorise my life with.

JBOSS AS 6 Startup: ConnectionFactory Not Bound

So here I was, minding my own business, migrating my Spring-based application from JBoss 5.1.0.GA to JBoss 6.1. On an ordinary day, JBoss 5 would have been fine for me (otherwise I’d move straight to 7), but RHQ doesn’t sit nicely with JBoss 5, and I didn’t want to risk deploying a major application to a customer on the freshly cut JBoss 7.

I got a very nasty surprise when, after creating my datasource definitions, configuring all my JMS queues, deploying my native libraries to their appropriate locations and starting up the server, I got the following exception.

 javax.naming.NameNotFoundException: ConnectionFactory not bound

And yet when I looked at the JNDIView in the JMX console under http://localhost:8080/jmx-console, the ConnectionFactory was bound to both the global namespace (/ConnectionFactory) and the “java” namespace (java:/ConnectionFactory). Looking at the console, I then noticed the following statements after the log statements showing my application had failed deployment.

16:36:38,045 INFO  [HornetQServerImpl] trying to deploy queue jms.queue.UpdateQueue

Hmm, it seems that the HornetQ inside JBoss 6 starts up the ConnectionFactory and queues/topics after everything else has started, not before. I still don’t know why JBoss designed it that way, seeing as some applications may need to connect to JMS resources immediately at startup.

A little “googling” about, and I found how to make the JMS resources bootstrap before any web application is deployed. Simply look for the following file under your JBoss 6 installation (replace my path with your appropriate location).

K:\dev\jboss-6.1.0.Final\server\default\deploy\jbossweb.sar\META-INF\jboss-beans.xml

Add the following declarations as dependencies to the “WebServer” MBean, and you should be fine.

<depends>org.hornetq:module=JMS,name="NettyConnectionFactory",type=ConnectionFactory</depends>
<depends>org.hornetq:module=JMS,name="InVMConnectionFactory",type=ConnectionFactory</depends>
<depends>org.hornetq:module=JMS,name="NettyThroughputConnectionFactory",type=ConnectionFactory</depends>

Now whip up your JBoss again, and you’ll notice that the queues/topics are deployed before any web application. You’re in business now.

Charts in JSF: OpenFaces, PrimeFaces and JSFLot

I’ve been playing around with a small application that needs to display results of data collection in a chart as a certain selection is made on the JSF page. So I set myself the task of looking around for libraries that could provide this charting support. One of the important considerations was that the library had to be compatible with Richfaces, since that was my default JSF library until further notice. It was a pity that Richfaces didn’t have one, because most of their components seem to be fully fleshed out, and I don’t tend to need other component libraries unless they lack something.

Primefaces

I’d already heard and read about Primefaces, and the reviews were quite positive. It comes with quite an impressive set of components and will fill in the holes that Richfaces has left quite nicely, with all the cool items like Accordions, Carousels, Docks (for MacOS fans), an IdleMonitor, ImageCropper and the rest. The documentation was also quite detailed, from PDF and HTML docs as well as forums, so I was bound to have a good time, or so I hoped. I “mavened” it and configured it alongside Richfaces without any complaint. So the hacking went on. The chart components were quite many and as cool as Primefaces always tends to be, and its model was quite easy to work with. All I needed was to create a Map<String,Integer> containing text and data points for a pie chart. For my needs, which was a bar chart, all that was needed was something like the following from their own documentation:

public class BirthDisplayBean {

    private List<Birth> births;

    public BirthDisplayBean() {
        births = new ArrayList<Birth>();
        births.add(new Birth(2004, 120, 52));
        births.add(new Birth(2005, 100, 60));
        births.add(new Birth(2006, 44, 110));
        births.add(new Birth(2007, 150, 135));
        births.add(new Birth(2008, 125, 120));
    }

    public List<Birth> getBirths() {
        return births;
    }
}

and then on the page

<p:lineChart value="#{chartBean.births}" var="birth" xfield="#{birth.year}">
    <p:chartSeries label="Boys" value="#{birth.boys}" />
    <p:chartSeries label="Girls" value="#{birth.girls}" />
</p:lineChart>

Sweet! Simply using my own model and basic collections, I had my data all ready to go.

Just when I was getting ready to enjoy splattering my pages with charts all over, I came across a problem. The number of points on which data is collected in my application is flexible, therefore I do not know beforehand the number of “series” that I have to display. Unfortunately, Primefaces assumes that I know them beforehand, in which case all I need is to specify each p:chartSeries with a label and value. Oops, spanner in the works!! I tried to use a ui:repeat to force it to render a dynamic number of p:chartSeries, but that didn’t work. So my honeymoon with Primefaces charts ended abruptly. But Primefaces is still in my web application classpath, waiting for the next interesting component I might think of using which Richfaces does not have. I suspect that will be sooner rather than later. Primefaces is way too cool to ignore.

JSFLot

My next search threw up an interesting result: JSFLot. It’s quite an interesting library focusing only on charting, relying mostly on JavaScript to render the chart and its content. It has support for pie, bar and line charts, which would meet most application needs. It doesn’t seem to have as big a community around it as the others yet, but its documentation was good enough for what it does best: charting. I only wished there was a downloadable version of the documentation, so I could take my time with it at home when I’m offline. In the end I had to use Scrapbook to grab a few pages, but that was good enough. It indeed has a very small footprint, with a jar around 245k. It has its own data model into which you have to stuff the results you want to display, so in that sense it is intrusive on your codebase. However, the model classes are quite simple and intuitive: XYDataPoint (an x and y data point), XYDataList (a series of x and y data points, plus other information about the series) and XYDataSetCollection (an aggregation of one or more series, i.e. XYDataLists). But nowadays, what JSF component library doesn’t call for some small intrusion to get you going?

I began digging into it and was getting some interesting results. The charts were quite clean and easy to label. But when I wanted to be a bit more dynamic and display different charts based on selections from a Seam DataModelSelection, it didn’t seem to refresh to show the changing data points of the different objects being displayed. I thought maybe it only worked with full page refreshes, so I resorted to using a normal <h:commandLink/> to make sure the whole page was refreshing and not doing any funky Richfaces ajax thingy. But no go. Seeing as I was spending too much time trying out all my Seam hacking skills, I decided instead to focus my energies on finding a different library that could fulfil my needs. Maybe I was being dumb and making some mistake somewhere, but time wasn’t on my side.

OpenFaces

Having had two heartbreaks, I went back to looking for a new JSF library love that could fulfil my need for dynamic series data, and interestingly, two weeks ago TheServerSide had an article about OpenFaces. Hmm, not heard of them; let’s see what they’ve got. It turns out they weren’t bad at all. Documentation comes as HTML bundled with the library download and contains everything you need to know to use it. They have quite a sweet implementation of DataTable, and their column-header sorting is far cleaner and more “pimped up” than the Richfaces DataTable, so I’ve switched my pages to use theirs, and I’m loving it. They also have a TreeTable, a cool way of using a table to display hierarchical structures, and a DayTable for showing scheduled events. All this, and it sat quite well with Seam and Richfaces.

Here there was support for pie, bar and line charts, which, though fewer than Primefaces’s plethora of charts, is more than enough for most purposes. Oh, and by the way, they could do dynamic series data quite well. All would have been rosy, except that I have to use their model to squeeze my data into. Well, it involved using two of OpenFaces’s model classes to contain my data, and coming from JSFLot’s three and all the disappointments, I could definitely live with that. So, like Primefaces, I could define my data points in a Map<String,Integer> structure, but unlike it, I’d put them in a PlainSeries, and then put all my PlainSeries in a PlainModel. Job done, we can all go have a beer now.

But then, what would software development be like if you had technologies which thought of the developer’s every need and met it even before he could think he needed it? That would be utopia, but then I’m still on this earth. I realised that I couldn’t specify a colour per data element, again because the number of data points I have to display is dynamic. I tried to use a property that generates a comma-separated list of random colours, one per data point, as a string, but the tag could not resolve EL when it came to reading the “colors” property. In the end I had to hard-code one colour for every element to save me from disgrace.

<o:barChartView labelsVisible="true" valueAxisLabel="No. of respondents" keyAxisLabel="Responses" colors="#800000"/>

In fact, neither could the valueAxisLabel nor the keyAxisLabel read from my locale files to determine the right text to show. Who in this day and age still hard-codes labels, when there is something called internationalization? OpenFaces, sit up!!! This is JSF, and here EL is king, not hardcoded text values.

In the meantime, I don’t have a choice. At least OpenFaces meets the really important requirement of showing charts from content which is dynamic and ajax-driven. I hope OpenFaces will wake up and realise that their new lover requires some additional pampering, but for now the relationship seems to be working. Who knows, if they do get better at the EL stuff, I just might consider moving from a relationship to a marriage.

One thing I’ve taken away from the experience, though: JSF has come a long way, for me to be able to have Richfaces, Primefaces and OpenFaces in one application. And my application is not even JSF 2, where the vendors are supposed to have worked on better integration paths for the component libraries. I’m waiting for Seam 3 to be fully released, and then I’ll switch everything to CDI, JPA 2 and JSF 2 without sacrificing any of my PDFing, Excel-ling, SMPCing (Seam Managed Persistence Context) and the like.

NetBeans 6.8 and Maven – The Perfect Combination

There have been quite a few things that have endeared me to the post-6.1 versions of NetBeans, but the most vital and outstanding reason we finally settled on NetBeans in our development office was its Maven support. It just works, and I’ll show you why we can’t do without NetBeans 6.8.

First off, I don’t have to make any extra effort to specify that I want to open a Maven project; I just click “Open Project”, navigate to the location of the checked-out project source, and it automatically detects that it’s a Maven project. That’s when all the magic sets in. And even during checkout from my Subversion repository, it automatically detects all Maven modules and opens them accordingly. No mucking about.

From an open project, you immediately get the normal NetBeans structure of “Source Packages” and “Test Packages”. You also get “Libraries” (a list of all dependencies of compile scope), “Runtime Libraries” (those of runtime scope), and “Test Libraries” (those of test scope). Nicely separated to make it easy to classify your dependencies. What is even more exciting is that transitive dependencies are coloured gray, while the brighter coloured jar files are the direct compile dependencies (as you can see from xstream.jar and xpp3_min.jar in the picture). So you know if that jar file causing annoying ClassLoader exceptions was not actually added by you directly, but by a dependency you specified. This colour coding has saved me quite some hours.

Adding dependencies is quite easy, either by directly editing the pom.xml file, or by right-clicking any of the appropriate “Libraries” folders and, from a menu of many other useful actions, clicking “Add Dependency” and doing a search using either the groupId or artifactId. The code completion provided when directly editing the pom is spot on, and adding via the wizard couldn’t be easier. It’s even intelligent enough to automatically set the selected scope to “runtime” when you right-click “Runtime Libraries”. Clicking the appropriate version of the artifact automatically populates the rest of the fields for you, and with the “Ok” button you are done. Interestingly, beside the artifact details is an indication of whether the artifact is to be downloaded from some repository or is available in your local .m2/repository folder. This is possible because NetBeans indexes your local repository, as well as remotely configured ones, which in my case is our local Nexus mirror tagged “central”.

Having added all your dependencies, you can then generate a dependency graph, which can be very useful in understanding where all the artifacts in your resultant build come from. Just right-click any of the “Libraries” nodes (or the module itself) and select “Show Dependency Graph”. Interestingly, you can also right-click a particular dependency and get its own dependency graph. Moving your mouse pointer over a dependency in the graph immediately shows you details of the artifact, i.e. groupId, artifactId, version, scope and type. If you have a million and one dependencies like we do in our project, the “Find” box at the top of the graph is very helpful in digging out the particular dependency you are interested in without having to wade through that minefield.

Whenever your project becomes a multi-module one, you can easily create a parent project, then use NetBeans to move your existing module under the newly created parent folder in your file system. NetBeans will recognise this immediately (provided the packaging of the newly created parent is “pom”) and will edit the parent pom with a new module declaration.

But what about when you reference a class which is not available in your classpath while coding? Well, among the other suggestions the IDE gives you is one to look in your Maven repositories. You then get a list of artifacts that match the class you have just referenced. Selecting the right version of the one you want and clicking “Add” automatically adds the artifact to your pom as a compile-scope dependency. Could life be any easier than this?

Even NetBeans’s “Search” box in the right-hand corner can be used to search through your Maven repositories and automatically download an artifact. Sweet.

Also, NetBeans has a Maven Repositories tab, which shows you a searchable index of both your local repository and other repositories that come pre-configured. You can add a new repository, search through existing ones, and update your local index of these repositories. When you find a dependency of interest under any node of these repositories, just right-click and select “Copy”. Go to your pom file, and under the dependencies declaration, right-click and paste. A new <dependency/> declaration is pasted with the appropriate groupId, artifactId and the rest. Smart.

If your project makes use of profiles, activating them during a build is so easy. Just click the drop-down menu on the toolbar, next to the “Build” toolbar item, and your profiles are available. This is very useful to me when I’m switching between running test cases and just building the code without executing tests.

And all of this is very flexible. In every project’s properties, there’s the opportunity to make all the changes you want, like changing which goals, properties and profiles are activated when you “Build” or “Clean and Build” a module. Whenever I add a dependency that is not available in my local repository, the module is highlighted with a suggestion icon. Right-clicking the module, you get “Show and Resolve Problems”, which enables you to force the download of the declared dependencies without waiting for a Maven build. Also, if your module is a web module that needs to run in a server, it prompts you to specify the appropriate server.

Interestingly, unless you make a lot of changes to the way your Maven modules are built, NetBeans hardly ever creates any IDE-specific files, and your project is virtually clean when you commit your changes back to source control. This is not quite the case for IDEs like IntelliJ (which takes a very long time reading and detecting the modules of your Maven project on first open) and Eclipse.

The availability of the settings.xml and profiles.xml (if defined) for direct editing in the IDE provides a truly integrated experience. In cases where I’m having issues with my local Nexus mirror, I can just comment out the mirror defined in settings.xml and download my artifacts directly from the internet. I don’t have to go looking for my .m2 folder in the file system to make this change.

What strikes me most is the fact that every other integration NetBeans provides just works. When working with JPA, entities are automatically detected and I get the usual suggestions for mapping etc. If it’s a Spring project, all my bean mapping files easily resolve their references. There’s nothing the IDE already provides that I lose by my project being a Maven project. You name it: Facelets, SVN, auto-redeploy when I make changes to a web project. Everything that’s supposed to work does indeed work.

And no matter how different your build is from Maven’s convention-over-configuration settings, NetBeans has never failed to read all my modules and build my multi-module project perfectly. I’ve had very frustrating experiences with M2Eclipse, especially its WTP support. It just hasn’t worked for us, though I never had any issues when I opened the same project with NetBeans (at least not since the last time I checked, which was last week with the 0.10 version of M2Eclipse).

So although I miss a few things, like using the Richfaces VPE for Facelets editing in Eclipse, I think I’d rather hand-code my Facelets with the excellent Facelets code assist in NetBeans 6.8, just so that when I’m finished and I click “Run”, it indeed runs without a hitch.

The only problem I have with the Maven support in NetBeans is the occasional attempt to download sources when I’ve never specified that sources should be downloaded alongside the artifacts. But that’s few and far between, and I can absolutely live with it.

I’ve been meaning to say this for a while, and now finally I’ve got the chance to do so. NetBeans with Maven just rocks. I look forward to even cooler features and improved support in NetBeans 6.9.