Sunday, 26 February 2017

PicketLink Authentication and Authorization with Custom Simple Schema

I was looking for a modern, CDI-enabled JEE security framework and found a very good one in PicketLink (PL), as it has pretty much everything. What's more, it combines with Apache DeltaSpike for authorization, which is cool.

However, I have a few problems associated with PicketLink's IDM structure, which we would normally use for this:
a) The ID in the IDM User class is defined as a String and not an Integer/Long. This is a problem. In most cases that we encounter, the ID is a numeric type and is referenced in several other tables. A numeric type comes in handy particularly when writing raw queries - which is needed when debugging production issues. The String ID in the IDM is also a generated one and results in a long hash - which is very difficult to track and use in these situations.

b) Secondly, how do I use PicketLink + DeltaSpike authorization, with all its goodies, against an existing database - which mostly has a numeric ID field?

c) Thirdly, could I have a much simpler table structure to deal with authentication and authorization? PicketLink's IDM is robust, but we may not need it in all cases.

There have been some successful solutions used by other people. The most common is to have a numeric ID, tie it in as an attribute in the IDM scheme of things, and then refer to that numeric value in other places.

In this post, I use a custom solution which I found useful and simple - which might come in handy for small application(s).

I define a typical model for user management - which is not related to the IDM schema - such as:



A simple one: a table to store the user info, and a separate related table to store the password (this is useful because, when bringing back the user from the DB, I don't need to bring back the password-related info). There is a master table for roles and another table to associate the roles with a user.

I write corresponding JPA entities for these tables. The numeric ID field becomes the primary key and is auto-generated via a sequence. Importantly, I don't have a relation from the AppUser entity to the UserPassword entity (it's the other way around) - simply because, as mentioned above, I don't want to bring back password information when I load a user. The same applies to roles too. One of the basic concepts to remember about security is that information should not be provided unless and until it is asked for (strictly on a need-to-know basis).
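A minimal sketch of how these entities might look (standard JPA annotations; the field and table names here are assumptions based on the description above - the actual code is in the repo):

@Entity
@Table(name = "app_user")
public class AppUser {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;            // numeric primary key, referenced by other tables

    private String loginName;   // the login id used by real users
    private String firstName;
    private String lastName;
    private String email;

    //getters and setters omitted
}

@Entity
@Table(name = "user_password")
public class UserPassword {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;

    @OneToOne
    private AppUser appUser;     // the relation is from password to user, not the other way around

    private String passwordHash; // SHA-512 hash - never loaded along with AppUser

    //getters and setters omitted
}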

Let us first create sample users and roles. I use the @Initializer class (as is done in many PL quickstarts). First, create the records for the 'role_master' and then create users with roles. For a user, this would insert 1 record into the user table, 1 record into the password table and 1 into the 'user_role' table.

First things first, how do we hash the password and store it in the DB? Had we been using the PL IDM schema, the PL API would have taken care of this.
I do almost the same as what IDM does (benefits of open source!) - make use of the org.picketlink.idm.credential.encoder.SHAPasswordEncoder class to hash the password (and then use JPA to store the hash). This class simply uses the java.security.MessageDigest class to get the SHA implementation (so, there is no big danger of depending on a PL implementation). We pass an argument of 512 to indicate the strength of the SHA algorithm.
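Roughly, producing the stored hash looks like this (the create()/verify() method names are recalled from the PL IDM API and the variable names are illustrative - treat them as assumptions; verify() is the one used later during login):

SHAPasswordEncoder encoder = new SHAPasswordEncoder(512); // 512 => SHA-512
String hash = encoder.create(plainTextPassword);          // hash the entered password
userPassword.setPasswordHash(hash);                       // then persist via JPA (UserPassword entity)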

(side note: the createUser method of the @Initializer class can also be used to create a user when you want to add users via the UI, and this method can be placed in a transaction).

Now, how do we authenticate when the user wants to log in to the application? We have the usual login.xhtml JSF page, which is the same as in any PL example. For authentication, the default PL mechanism relies on IDM, so we cannot use that. We need to write our own authenticator and wire it up with the PL ecosystem. A nice example is given at https://github.com/jboss-developer/jboss-picketlink-quickstarts/tree/master/picketlink-authentication-jsf

I follow the same approach and write the CustomAuthenticator class. Instead of the IDM schema, I use simple JPA queries to query the user and password tables with the user-entered user ID and password. As the user ID is the login name in my example, I use the findByLoginName query to check against the user table (note that the numeric ID we spoke about in the beginning is obviously not the login ID used by real users).

As for the password, we again use an org.picketlink.idm.credential.encoder.SHAPasswordEncoder instance and call its verify method after we get back the password hash from the table using JPA.

Now comes the most important part. If the authentication is successful, we set the status to AuthenticationStatus.SUCCESS. Once the status is set to success and the page is loaded, identity.isLoggedIn would be true and the user and admin links would be displayed.

Next, we have to set the account object. This account object must be of type org.picketlink.idm.model.basic.Account - this is in a way enforced by the API. Setting this account is both important and useful, as this is the account that is retrieved via the Identity object (which can be injected). It is the same idea as setting the user info in the session.

Now, because we are forced to use the IDM Account object (as the setAccount method takes that type), we create an instance of the org.picketlink.idm.model.basic.User class - as User extends Account. We then set the values of the User object (first name, last name, email, etc.). The most important piece of information is the ID. As outlined earlier, the ID within this IDM User object is a String, whereas the ID we use in our AppUser is a Long. So, we simply convert the Long to a String and set it as the User's ID.
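A rough sketch of the authenticator described in the last few paragraphs (BaseAuthenticator, DefaultLoginCredentials and @PicketLink are from the PL API as used in the quickstarts; the named queries and entity accessors are assumptions - see the repo for the real code; imports omitted):

@PicketLink
public class CustomAuthenticator extends BaseAuthenticator {

    @Inject
    private DefaultLoginCredentials credentials; // login name + password entered by the user

    @Inject
    private EntityManager em;

    @Override
    public void authenticate() {
        // look up the user and the stored hash (hypothetical named queries, error handling omitted)
        AppUser appUser = em.createNamedQuery("AppUser.findByLoginName", AppUser.class)
                .setParameter("loginName", credentials.getUserId())
                .getSingleResult();
        String storedHash = em.createNamedQuery("UserPassword.findHashByUser", String.class)
                .setParameter("user", appUser)
                .getSingleResult();

        SHAPasswordEncoder encoder = new SHAPasswordEncoder(512);
        if (encoder.verify(credentials.getPassword(), storedHash)) {
            // build the IDM User and carry our numeric ID as a String
            User idmUser = new User(appUser.getLoginName());
            idmUser.setId(String.valueOf(appUser.getId()));

            setStatus(AuthenticationStatus.SUCCESS);
            setAccount(idmUser);
        } else {
            setStatus(AuthenticationStatus.FAILURE);
        }
    }
}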

So, later on, wherever we need the ID, we can simply @Inject the Identity object and get the value. For example, to get the login name, we can do:
((User)identity.getAccount()).getLoginName()


Now, to the next important part - authorization:

Authorization checks are needed at two levels: one on the UI side, and the other as method-level checks on the server side. Apache DeltaSpike provides a CDI-enabled authorization module which blends nicely with PL; some examples can be found in the PL quickstarts.

We are going to deal with the UI-side authorization checks - these again are needed at two levels. The first is in the displayed UI, where a part of the UI is hidden/disabled for certain roles. The second is at the URL level.

Within the UI, there are a few types of authorization needs. One is showing/hiding or enabling/disabling parts of the UI for specific roles. Let's see how this works - it is actually very simple. Log in as 'user1' and go to the common page - you should see the 'Save' button in a disabled state. Now, log in as 'admin1' and go to the common page - you should see the 'Save' button enabled.
This is achieved simply using JSF EL. The EL expression calls the isAdmin method on a named bean 'authChecker' (AuthorizationChecker.java). The isAdmin method in turn queries the 'UserRole' table using a JPA query to check if the user has the specified role.
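Roughly, the bean looks like this (the UserRole query and the hasRole helper are assumptions based on the description above; the JSF page then uses an expression along the lines of disabled="#{not authChecker.admin}"):

@Named("authChecker")
@RequestScoped
public class AuthorizationChecker {

    @Inject
    private Identity identity;

    @Inject
    private EntityManager em;

    public boolean isAdmin() {
        return hasRole(AppRole.ADMIN.toString());
    }

    public boolean hasRole(String roleName) {
        // the numeric user id travels as a String inside the IDM User (see above)
        Long userId = Long.valueOf(((User) identity.getAccount()).getId());
        Long count = em.createQuery(
                "select count(ur) from UserRole ur where ur.userId = :userId and ur.roleName = :roleName",
                Long.class)
                .setParameter("userId", userId)
                .setParameter("roleName", roleName)
                .getSingleResult();
        return count > 0;
    }
}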

This is cool. Likewise, you can navigate to the admin page. The link to the admin page (menu) shown on the home page can also be shown/hidden based on the role. But what if the user copies the admin URL and pastes it into the address bar after logging in as the normal user 'user1'? We need URL-level authorization to handle this.

PL does support URL-level authorization in a simple manner. However, there are some issues here. Since our whole model is customized and not based on IDM, the out-of-the-box role-based authorization of URLs does not work, as we will see below.

When we build the security configuration, we can make use of the forPath() and authorizeWith() methods to specify URL authorizations. For example, we use the following in our code:
  .forPath("/faces/admin/*")
  .authorizeWith().role(AppRole.ADMIN.toString())
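For context, this is declared when the HTTP security configuration is built, roughly like the following (the observer/builder class names are recalled from the PL quickstarts - treat them as assumptions):

public class HttpSecurityConfiguration {

    public void configure(@Observes SecurityConfigurationEvent event) {
        SecurityConfigurationBuilder builder = event.getBuilder();

        builder
            .http()
                .forPath("/faces/admin/*")
                    .authorizeWith()
                        .role(AppRole.ADMIN.toString());
    }
}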

But how and where do we hook the role into PL? Remember that in the custom authenticator we wrote, we set the status and the account, but nowhere did we set the roles for the logged-in user. And there is no method on the PL Account class to set the roles for the user.

This is usually the norm - roles are not 'set' anywhere. Security works on need-to-know and deny-first principles, and so methods like hasRole are provided instead. Still, we need to figure out how this will work.

We can write a custom authorizer for URL level authorization too as follows:
.forPath("/faces/admin/*")
.authorizeWith().role(AppRole.ADMIN.toString()).authorizer(CustomPathAuthorizer.class)


This class needs to override the authorize method and return true/false. So, we can write our custom auth checks inside this method and return the value accordingly. This method takes a PathConfiguration as a parameter, and we can get the roles with the following call:

pc.getAuthorizationConfiguration().getAllowedRoles();

And then with that, we can do our custom auth check. More info can be found from org.picketlink.authorization.DefaultAuthorizationManager source.

However, when I implemented this and ran it, no matter which role I was logged in with, going to the admin URL directly simply failed with a 401/403. I began wondering if this was a bug and posted in the forum (https://developer.jboss.org/thread/272838).
Let's take a step back and see how the default PL behaviour works first. There are 4 authorizers built and added by default, which are called one by one to do the check (via the authorize method, which returns a boolean). If any of them returns false, authorization is denied.

When we add the custom authorizer, even though it is added as a 5th one, it seems one of the default 4 returns false and so we continue to get access denied. And all the default authorizers are tied to the IDM schema.

I found a workaround thanks to a tip in one of the forum messages: do not use the 'role()' method at all. I tried that, and the custom authorizer now works fine. Just use:
.authorizeWith().authorizer(CustomPathAuthorizer.class)

So, where do I specify the roles for the URL? Inside the authorizer, I can get the pathInfo from the request and then check it against my roles. For example, in the authorize method, I can get the path and then check against a List/Map to verify whether this path is allowed for the role of the logged-in user. As an example, I have used a trivial sample map of path to list of roles. For something more robust, we could probably keep this data in the database as well.

In the current sample, I just loop over the paths to match and then see if the logged-in user has the requisite role to go to the URL. To test this, log in as a user and then paste the admin URL directly in the browser. You should see an access denied page.
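A minimal sketch of such an authorizer (the PathAuthorizer method signature is recalled from the PL source and the path-to-roles map is a made-up example - treat both as assumptions; imports omitted):

public class CustomPathAuthorizer implements PathAuthorizer {

    // trivial in-memory mapping of protected path prefixes to allowed roles
    private static final Map<String, List<String>> PATH_ROLES =
            Collections.singletonMap("/admin", Arrays.asList(AppRole.ADMIN.toString()));

    @Inject
    private AuthorizationChecker authChecker; // the same bean used by the JSF EL checks

    @Override
    public boolean authorize(PathConfiguration pathConfiguration,
                             HttpServletRequest request, HttpServletResponse response) {
        String path = request.getPathInfo();
        for (Map.Entry<String, List<String>> entry : PATH_ROLES.entrySet()) {
            if (path != null && path.startsWith(entry.getKey())) {
                // allowed only if the logged-in user has one of the configured roles
                return entry.getValue().stream().anyMatch(authChecker::hasRole);
            }
        }
        return true; // path not protected by this authorizer
    }
}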

The complete sample code is available in my github repo.

Saturday, 4 June 2016

JavaFX TreeTableView Example with Different Entities

In an earlier post, I showed how to build a tree-table component (in Swing, with the SwingX JXTreeTable component) with 2 different entities (which have a HAS-A relationship). In this post, let us examine how to achieve the same with the JavaFX TreeTableView.

As indicated in my earlier post, in real scenario(s) we will need to deal with showing a tree-table view with different entities - for example, to display a department with the list of employees belonging to the department. These may share a relationship at the DB level, but it will be a HAS-A relationship at the object level. When we attempt to display this in a tree-table, note that the parent is one type of object and the children (leaves) another.

The official JavaFX TreeTableView tutorial available on the Oracle site actually does display a department and a list of employees as children. But if you look at the code, it uses only one object - an Employee object - throughout (even for the department). The department is just displayed through the trick of using the first object as the root (where the name is set and the email is left empty). This is good for illustration purposes, but it is important to know how to deal with 2 different objects for such scenario(s).

Let us first create the entities. I will be using the same attributes (except photo for Employee) from the earlier example.

public class Employee {

    private int id;
    private String name;
    private Date doj;

    public Employee(int id, String name, Date doj) {
        this.id = id;
        ...
    }
    //setters and getters not shown for brevity
}

public class Department {

    private int id;
    private String name;
    private List<Employee> employeeList;

    public Department(int id, String name, List<Employee> empList) {
        this.id = id;      
        ...
    }

    public List<Employee> getEmployeeList() {
        return employeeList;
    }

    public void setEmployeeList(List<Employee> employeeList) {
        this.employeeList = employeeList;
    }

    //other setters and getters
}

As JavaFX came into existence after Java 1.5 (the version that introduced generics), everything in JavaFX is generified. So, right from declaring the TreeTableView to adding columns, the entity type is specified everywhere. However, we will be dealing with 2 different entities, so we will not specify the entity type anywhere. We will just use the raw (non-generic) syntax for dealing with objects, like:
TreeTableView treeTableView = new TreeTableView(dummyRoot);
instead of
TreeTableView<Employee> treeTableView = new TreeTableView<>(dummyRoot);

As we are going to show a list of departments (where each department will have a list of employees), I am using a dummy root object (this root object can represent an Organization when we want to show the root), like:
final TreeItem dummyRoot = new TreeItem();

Now, I create the TreeTableView with this dummyRoot tree item. Then I call setShowRoot(false) to hide the root. Again, note the lack of generics here.
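In code, those two steps look like this (using the names from above):

TreeTableView treeTableView = new TreeTableView(dummyRoot); // raw type - no generics
treeTableView.setShowRoot(false); // hide the dummy root node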

Now, we need to add all our items to this root. We will deal with that later.

Let's now create the columns. We need to show 3 columns (id, name and doj) for Employee. For Department, the 3rd column will not display any value. First, I create the column, like:
TreeTableColumn idColumn = new TreeTableColumn("Id");

The next step is to specify how the column is going to fetch the value from the entity and display it.
We need to set the cellValueFactory by passing a javafx.util.Callback, which is type-parameterized as follows:
Callback<TreeTableColumn.CellDataFeatures<S,T>, ObservableValue<T>> 

The Callback interface has one method, call(), which we need to override. The call() method takes a TreeTableColumn.CellDataFeatures<S,T> as a parameter and returns an ObservableValue<T>. Here, S is the type of the object that represents the row and T is the type for that column. This will be called back whenever JavaFX decides to update its view.

As is the norm in JavaFX, we would normally generify the Callback with the object that we expect, say Employee, and the column type, say String. However, in our case, as the row object can be either an Employee or a Department, we will not be able to use generics. Likewise, the column may actually have an Integer type for the Employee object and a String type for the Department object (not in this example, but this is very much possible).

So, we will simply use Object as the type of the parameter to the call method. The skeleton of the implementation will look like:
idColumn.setCellValueFactory(new Callback() {
    @Override
    public Object call(Object obj) {
        //return an ObservableValue (without type) here
        return null;
    }
});

Remember that, as mentioned above, the incoming Object obj is actually of type TreeTableColumn.CellDataFeatures. So, we have to cast it and then call getValue(), which returns a TreeItem object:
((TreeTableColumn.CellDataFeatures)obj).getValue()

TreeItem represents each node in the tree and has a value attached to it - which is the actual value object. So, we have to call getValue() on the TreeItem object to get the domain object, which in our case can be an Employee or a Department:
Object dataObj = ((TreeTableColumn.CellDataFeatures)obj).getValue().getValue();

So, now we can do an instanceof check on dataObj to identify the correct type and then invoke the correct method, like:
((Department)dataObj).getId()

We need to return an ObservableValue from the method, so we will return a ReadOnlyStringWrapper, which takes a String value as an argument. So, we can return it like:
return new ReadOnlyStringWrapper(String.valueOf(((Department)dataObj).getId()));

The full call looks like:
idColumn.setCellValueFactory(new Callback() {
    @Override
    public Object call(Object obj) {
        Object dataObj = ((TreeTableColumn.CellDataFeatures)obj).getValue().getValue();
        if(dataObj instanceof Department) {
            return new ReadOnlyStringWrapper(String.valueOf(((Department)dataObj).getId()));
        }
        else if(dataObj instanceof Employee) {
            return new ReadOnlyStringWrapper(String.valueOf(((Employee)dataObj).getId()));
        }
        return null;
    }
});

We can use lambda expressions for the same, which I have done for the dojColumn:
dojColumn.setCellValueFactory((Object obj) -> {
    final Object dataObj = ((TreeTableColumn.CellDataFeatures)obj).getValue().getValue();
    if(dataObj instanceof Employee) {
        return new ReadOnlyStringWrapper(((Employee)dataObj).getDoj().toString());
    }
    return null;
});

Remember that there is no doj value for Department. So, we do an instanceof check only for Employee and return the employee's doj; otherwise, we simply return null. So, for the rows where a Department object is displayed, the doj column will simply be empty.

Once all the columns have been defined, we can add this to the tree as follows:
treeTableView.getColumns().setAll(idColumn, nameColumn, dojColumn);

As we created the TreeTableView with a dummyRoot TreeItem object, all we need to do to show data is add TreeItem objects as children to this root and, in a recursive/tree manner, add children of children. Remember that the Department object contains a List<Employee> within itself. We create two such Department objects and add them to another list called deptList. Adding data to the tree is accomplished as follows:
//add data to tree
List<Department> deptList = buildData();
deptList.stream().forEach((department) -> {
    final TreeItem deptTreeItem = new TreeItem(department);
    dummyRoot.getChildren().add(deptTreeItem);
    department.getEmployeeList().stream().forEach((employee) -> {
        deptTreeItem.getChildren().add(new TreeItem(employee));
    });
});


When I run the application, it looks like this:


Comparing with my earlier post on achieving the same in Swing, I have to say that this is much simpler in JavaFX, as I deal with individual columns one at a time. And I have to implement only one method, the equivalent of getValueAt(). I don't have to deal with the getChild() and getChildCount() methods as in Swing.

The complete source code for this sample can be found in my github repo.

Thursday, 10 December 2015

I18N Tutor

Several years back, I developed a small desktop application in Swing which helps the user (who is a Java developer) understand internationalization in Java. I called it a 'practical' tutor. Unlike usual text-based tutorials, being a Java application itself, it allows the developer to read as well as try out the examples side by side. I passed it on to many of my friends/colleagues and they all liked it.

The tutor explains formatting for numbers, decimal formats with patterns, currency and percentages. Along with providing input numbers, the developer can also change the locale and see how that changes the output of the formatters.
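As a quick illustration of the kind of formatting the tutor covers (plain JDK API, not code from the tutor itself):

import java.text.NumberFormat;
import java.util.Locale;

public class FormatDemo {
    public static void main(String[] args) {
        double amount = 1234567.891;
        // the same value, formatted for two different locales
        System.out.println(NumberFormat.getCurrencyInstance(Locale.US).format(amount));      // $1,234,567.89
        System.out.println(NumberFormat.getCurrencyInstance(Locale.GERMANY).format(amount)); // 1.234.567,89 €
        System.out.println(NumberFormat.getPercentInstance(Locale.US).format(0.25));         // 25%
    }
}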

Today, I am happy to have ported it to a web page running on Google App Engine. Do try it out and pass on the link if you like the same:
https://i18n-tutor.appspot.com

Monday, 12 October 2015

NetBeans Diff API Extracted as a Standalone Project

Inspired by Emilian Bold, who extracted the progress bar API out of the NetBeans source code as a separate OSS project, I started working with the aim of extracting the Diff API as a separate project. I had this thought in 2012 or so, and it has taken 3 years to actually do it (and only the textual part!).

In the meantime, I came to know that the amazing Geertjan had done something better at https://github.com/GeertjanWielenga/netbeans-visual-diff-standalone
(which I got to know via a post at http://forums.netbeans.org/viewtopic.php?t=63551&highlight=).

Nevertheless, I went ahead and completed the task that I started and posted the code in my github repo. You can take a look if you are interested. I also created a test project on github which you can follow to make use of this library.

I would also like to note down my experiences in turning a NetBeans module into a standalone OSS project, which might be useful for others:

My idea was to make the diff library work with just the JDK. So, wherever I encountered external dependencies, I replaced them, such as:

a) NbBundle replaced with ResourceBundle
NbBundle is used quite a lot. I replaced it with ResourceBundle, like:
ex: return ResourceBundle.getBundle("Bundle").getString("BuiltInDiffProvider.shortDescription");

b) org.openide.ErrorManager
This is used for logging exceptions. The Javadoc itself clearly suggests a simple alternative: "Rather then using the ErrorManager consider using JDK's Logger for reporting log events, unwanted exceptions, etc."

So, I followed the advice and did the same.
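For example, a logging call changes roughly along these lines (the owning class, exception variable and message here are illustrative):

//before (NetBeans API):
ErrorManager.getDefault().notify(ErrorManager.INFORMATIONAL, ex);

//after (plain JDK, java.util.logging):
Logger.getLogger(BuiltInDiffProvider.class.getName()).log(Level.INFO, "diff failed", ex);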

c) ImageUtilities.loadImage:
This was from the org.openide.util package (the ImageUtilities class) - the source shows that a class loader and ImageIO are used to load images, plus caching is provided.
I have simply used the classloader with ImageIO (without the caching).
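Something along these lines (the resource path and owning class are illustrative; IOException handling omitted):

//before (NetBeans API):
Image img = ImageUtilities.loadImage("org/netbeans/modules/diff/builtin/visualizer/icon.gif");

//after (plain JDK):
Image img = ImageIO.read(
        TextDiffVisualizer.class.getClassLoader()
                .getResource("org/netbeans/modules/diff/builtin/visualizer/icon.gif"));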

d) The API uses the NetBeans Lookup mechanism throughout, even though both the interfaces and the implementations are present in the same diff module. There are, however, two ways of using Lookup to find services. One is the use of the "@org.openide.util.lookup.ServiceProvider" annotation to mark the implementation. This is pretty straightforward and maps to the standard java.util.ServiceLoader mechanism.
The other way is also via Lookup, but the registration of services is done in an older manner (which I think is deprecated now). This is explained at http://wiki.netbeans.org/DevFaqLookupDefault and http://wiki.netbeans.org/DevFaqSystemFilesystem

How did I find this? After reading the above links, I happened to look at the manifest.mf file - which has a reference to "OpenIDE-Module-Layer: org/netbeans/modules/diff/mf-layer.xml"
So, I opened mf-layer.xml, where the DiffProvider implementations are registered and which also refers to other .settings file(s). These files in turn contain the implementation class names.
It took quite a bit of time to figure this out.

Initially, I started by trying to have the complete visual part as a separate project, but then decided to do only the textual part. Some of the code useful for this was present as part of the visual stuff, so, while working on this, I also extracted some inner classes into top-level classes.
For example, the TextDiffVisualizer class has a utility method 'differenceToLineDiffText' - which is very useful. I moved this out to a separate class named Util.

Finally, I completed this and posted the code in my github repo.

Friday, 14 August 2015

If I start a software company

If I start a software company...
I actually want to rant against some stuff being done in software companies...but instead of just sounding negative, I thought about what I would do if I run my own software company.
I list my thoughts about the same - first the rant and then on what I will do...

System:
I have always wondered about this. Executives would be given a high-powered, brand-new Mac, but developers would be given a normal system with 2 GB RAM and an okay processor. Most of an executive's work is in his/her head - the system is used mainly for mail access, documents, sheets and ppts. Isn't it?

I never really understood this. Particularly, coming from a Java background, such development typically requires more RAM. Usually, we will run the IDE, and also run the app server on our system, along with a database too. I remember when working for an MNC, we got a system with 1 GB RAM and it would take ages to deploy. A request to upgrade to 2 GB RAM would go through a long bureaucratic route, with the PM explaining the costs involved.

Isn't a good computer the most important tool for a developer? After the people themselves, it is the system where the most investment should go. And if people are the most important asset in the software industry (as most companies claim), should we not give them the best tools to work with? However fancy a workplace you provide, with good air conditioning, a good system is the most important thing for a developer - and need I mention that a slow, unresponsive system is the most annoying thing for a developer.

So, my first priority would be to give the best system to my developers.

Email, attachments, sharing of files:
These are actually inter-related.

Email:
Yes, the most trivial one, but the most important one too. In these days of everything-on-the-web, I just don't understand why companies spend money buying licenses for desktop mail clients. Why can't they just use OSS mail clients in that case? Wouldn't that save a lot of cost?
Or why not simply go for a solution like Gmail for business? You just need a browser for that.

Attachments & File Sharing:
I have done this several times. Let's say we are working on an estimate and prepare a sheet. I send it to a peer, attaching the sheet. They make changes, rename the file 'v2' and send it back to me. I then rename the file to something like file_ranga_v2.xls and take another copy. Very soon, there are numerous copies of the file on my hard disk (not to mention the versions circulating in emails). If I were to revisit this a month later, I would struggle to find out which is the latest version - I would have to look at the modified date to find out. Ditto for docs and ppts. Also, imagine the size of these attachments circulating around, filling up mailboxes. It soon becomes a pain.

I worked with Google Sheets for some time with a client and I was amazed at how easy it was to share sheets. No hassles, no multiple versions and no sending around attachments wasting space and network bandwidth. On top of that, everything is in the cloud and I could access it from my personal system or smartphone too. And it makes day-to-day operations easy to manage. Pretty simple, but I still see many companies (even new ones) opting for, say, MS-Exchange and then buying licenses for MS-Outlook. Pray, why?
(Not to mention the license cost for MS-Office - I forgot about that.)

Thinking about it, this also helps IS (Information Security) in a way - no data needs to be stored on individual machines.

Issue Tracking via Excel Sheets:
This is similar to the above one, and it is actually ironic. Many companies use excel sheets to manage issue tracking, timesheets etc. I thought excel sheets were what people used 10 years back. The irony is that we write software for many customers whose requirements go like: 'our folks are currently using excel sheets. we want to replace this with a system'!!
(I think the customers do not know that these companies themselves use an excel sheet and not a system - if they came to know about it, they would think twice about awarding such projects!)

And imagine doing this in the above-mentioned fashion - numerous versions circulating around in mailboxes. How do we derive metrics out of these? We will first have to identify the correct version and then gather data. Systems, on the other hand, generate reports with the click of a button, and are searchable too. And you don't have to re-invent the wheel - numerous OSS projects are available for such purposes in most technologies. In fact, we are spoilt for choice.

At the least, use something like Google sheets!!

Software and anti-OSS:
Some companies have a policy of no Open Source Software. Yes, it's true. However, you could see such companies using a lot of 'trial' software. I have seen many places where 'EditPlus' (a popular editor) is being used - without a license being bought. This particular software allows you to use it, but with an annoying warning window when it starts (which clearly states that you can use it only for 30 days). I don't get this. If it is so important for you, why don't you just buy a license? It is not just illegal to use software without buying a license. It is also unethical, unfair and disrespectful in my opinion - what these people are basically saying is: we earn money by writing software for our customer, but we will not respect the other company/person who has spent lots of time developing this software. If you feel the cost is prohibitive and you won't make any margin, then just don't take up such projects where there is a dependency on high-cost licensed software.
Or, just use OSS and drop all pretensions about it.

In my own company, I will make maximum use of OSS, right from the operating system. I have been using Ubuntu on my home machine for a few years and it is perfect for development - much, much better than Windows. I don't have to restart my system after an install, not to mention the breeze of installing stuff with apt-get. And to date, I have never seen a Windows equivalent of grep/egrep/find with its unmatched speed - try searching for a file in Windows.
This covers all technologies except Microsoft. Companies working on projects with MS technology should buy the appropriate license(s).

Tools/IDE:
I am a tools guy. I believe developers can be more productive using tools. I have seen many companies impose a tool/IDE on developers - they call it a 'standard'. And many companies also buy expensive software/IDEs, which is such a waste - when the same could be done with OSS tools.

If the build system is independent, why can't we allow the developers to use their favorite tools? And, the sheer number of tools available is very high.

In my own company, I will strive to choose a command-line based build system for the project - this would allow the developers to use their favorite tool to write code. In fact, this is a must. It also allows us to introduce CI, which helps in achieving higher quality and following agile practices.

Laptop/Desktop:
In an earlier life, when I worked at an MNC, I was given a desktop (yeah, 1 GB RAM). During conference calls, we would go over to a meeting room. I would need frequent access to my system to answer questions efficiently, but obviously I couldn't carry my desktop to the meeting room. It used to be a pain. Some companies solve this problem by having a desktop in meeting rooms and doing an RDC to the developer's system.
I would rather provide laptops to all my developers (and not just managers and leads). The cost difference is not much these days. The only problem with laptops is that they have to be guarded against theft.

VOIP calls:
Of course, MNCs have costly VOIP phones. I would prefer to use GTalk/Skype. I would also like my developers to take calls from home if need be - late evening or early morning calls. It is actually much more productive and helps with work-life balance.


I have tried to cover as many points as I could, in a random fashion. Of course, I have not said anything about getting projects. That is because I do not know anything about it. You might of course question me: 'dude, you need projects to run a software company'. True. I have no idea how to do that. But I do believe there are online avenues for this. Of course, it won't be easy.

The reason I have talked only about operational aspects is simply because these are the ones that affect me on a day-to-day basis. And I strongly believe that operational efficiency can make a major difference to an organization. You could ask what the USP of my software company is. True, not much. But operational efficiency can actually be the difference between the success and failure of project(s). At least, that is what I think.

Sunday, 21 December 2014

Swing JTable to JavaFX TableView

Having worked quite a bit with the Swing JTable, I explored how to work with the JavaFX TableView. In this post, I describe the differences between the two, which I hope will help Swing developers looking to migrate to JavaFX.

The complete source code is available in my github repo.

I will attempt to build a simple table which displays a list of employees. It will display varied information about each employee: id (int), name (String), salary (double), part-time flag (boolean) and doj (LocalDate).

Let us first define the Employee class which uses JavaFX property model (more on this later).

public class Employee {
    private SimpleIntegerProperty id;
    private SimpleStringProperty name;
    private SimpleDoubleProperty salary;
    private SimpleBooleanProperty partTime;
    private SimpleObjectProperty<LocalDate> doj;

    public Employee(int id, String name, double salary, LocalDate doj, boolean partTime) {
        this.id = new SimpleIntegerProperty(id);
        this.name = new SimpleStringProperty(name);
        this.salary = new SimpleDoubleProperty(salary);
        this.partTime = new SimpleBooleanProperty(partTime);
        this.doj = new SimpleObjectProperty<>(doj);
    }

    public String getName() {
        return name.get();
    }

    public void setName(String value) {
        name.set(value);
    }

    public StringProperty nameProperty() {
        return name;
    }

    //other setters and getters...
}

Now, let us write the JavaFX code to build the table. I am directly using the main class to build the GUI (without using FXML).

    @Override
    public void start(Stage stage) throws Exception {
        //define sample Employee objects
        final Employee emp1 = new Employee(1, "Ram", 23123.23, LocalDate.now(), false);
        final Employee emp2 = new Employee(2, "Krishna", 32398.76, LocalDate.now(), true);

        final ObservableList<Employee> data = FXCollections.observableArrayList(emp1, emp2);

        //initialise the TableView
        TableView<Employee> tableView = new TableView<>();        
        //define the columns in the table
        TableColumn idCol = new TableColumn("ID");
        idCol.setCellValueFactory(new PropertyValueFactory<>("id"));

        TableColumn nameCol = new TableColumn("Name");
        nameCol.setPrefWidth(100);
        nameCol.setCellValueFactory(new PropertyValueFactory<>("name"));

        //...likewise for other columns

        //add the columns to the table view
        tableView.getColumns().addAll(idCol, nameCol, salaryCol, partTimeCol, dojCol);

        //Load the data into the table
        tableView.setItems(data);

        StackPane root = new StackPane();
        root.getChildren().add(tableView);
        
        Scene scene = new Scene(root, 450, 300);

        stage.setTitle("JavaFX TableView Sample");
        stage.setScene(scene);
        stage.show();
    }

When I run this application, I get the following output:


Let me now talk about the differences between the Swing JTable and the JavaFX TableView. I will be numbering the differences throughout the post (a 'minus' against a number indicates that it is a negative point).

1) Generics
The first difference we notice is that the TableView is right-away generified, like:
private TableView<Employee> tableView = new TableView<>();

In 99% of cases, a table displays homogeneous data, so this is actually good. The Swing components were not generified (sure, generics came later, but even after they arrived, this change was made only in later versions, and not for JTable). Without this, in Swing there was always a kind of discomfort - a row was not openly identified with an object; it was just a physical row.

In JavaFX, due to this, loading data into the table is easy. We basically need to set an ObservableList on the table (more on this in point 8). The convenience class FXCollections is used to build the ObservableList - we can either build it directly from separate instances or from an existing collection:
final ObservableList<Employee> data = FXCollections.observableArrayList(emp1, emp2);
tableView.setItems(data);

2) Scrollpane
In Swing, when we add a JTable, we will not get any scrollbars. We need to wrap the JTable within a JScrollPane and actually add the scroll pane to the view. Only then will we get the scrollbars. In JavaFX, we get this right away (you can see the horizontal scrollbar in the image). Note that the component is named TableView and not just Table (probably due to the inherent scrollbar).

3) Column names and types:
In Swing, we will usually write a TableModel class which will provide the information about column names and types (via overridden getColumnName() and getColumnClass() methods).
As we don't write a model separately in JavaFX, we add columns directly when creating the table (yes, we don't need to write a separate model - more on that later).
This is done by creating a TableColumn instance and adding the same to the table, like:
TableColumn<Employee, Integer> idCol = new TableColumn<>("ID");

This declaration of the column provides information about the type of the column and also the name of the column (display name). And then we add this to the table, like:
tableView.getColumns().add(idCol);

4) Display of values:
In Swing, the getValueAt() method defined in the TableModel is queried and is used to display the data for the cells. So, we have to write the getValueAt() usually checking the column number and returning the appropriate value (Object). The toString() method will then be invoked and displayed (by the default renderer).

In JavaFX, this is made easier. After we create a TableColumn, we need to then set a 'cellValueFactory' which will calculate the value for the cell, like:
idCol.setCellValueFactory(new PropertyValueFactory<>("id"));

This probably uses reflection to invoke the getter method and get the value. Similar to the Swing JTable, toString() is invoked on the object and the value displayed. Note that reading and displaying the values is not tied to the JavaFX property model - this would work with any POJO (JavaBean model), except for the CheckBox control (which I think is a bug: the JavaFX TableView does not pick up the correct Boolean value when a POJO is used, and even with the JavaFX property model, the value is picked up and displayed correctly only when the xxxProperty() method is present in addition to the regular getter and setter).

In our Employee class, we have a LocalDate field, doj. When we add this to the table, we see that the date is displayed using its default toString() notation.

The other alternative is to set a cell value factory of our own. This is a little verbose, as we have to return an ObservableValue (the verbosity can be largely reduced with lambdas though).

5) Sorting
Sorting is supported right out-of-the-box. This is good as this was a pain point in Swing JTable.

6) Alternate Row colors and CSS:
The TableView by default uses alternate colors for rows. We can customize the colors using CSS.

-7) Alignment (Integer and Double):
In our Employee class, we have an Integer field (id) and a Double field (salary). When these are added to the TableView, they are not right-aligned, which is generally what is needed. In the Swing JTable, this would be done automatically, which was cool (achieving it in JavaFX is easy though).
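For example, one simple way (an inline style; this could also go into a CSS file) is:

salaryCol.setStyle("-fx-alignment: CENTER-RIGHT;"); // right-align the numeric column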

8) Updating the view:
In Swing, we would usually need to write a TableModel class (that usually extends AbstractTableModel). This class will provide all information about the data of the table (including column names, row count, and the row data themselves). We can load and show the data in the JTable with this. But, what happens when the data changes?
Normally, we would need to invoke the various 'fireXXX' methods from within this class, which in turn instruct the JTable to update its view. Most JTable beginners miss this, and it takes some learning to get it working.

The JavaFX TableView handles this right from the beginning. We simply supply the data to the table via an ObservableList and the rest is taken care of. From then on, whenever the data changes, the view gets updated automatically (this happens because the TableView 'observes' the passed-in ObservableList).
(Note that this works fine when using the JavaFX property model and PropertyValueFactory. If PropertyValueFactory is used with a regular POJO, this does not work.)

This is really good. Along with generified TableView, this means that we simply need to build the list and pass it by wrapping it in an ObservableList. Building the data is also more cleanly separated.

-9) Boolean values
In our Employee class, we have a Boolean field (partTime). When we add this to the TableView, the simple toString() representation of the value is displayed. In the Swing JTable, as soon as a column type is declared as Boolean, it right away shows the value as a check box, which is really cool. In JavaFX, we need to set a cell factory to do the same (see point 11, Editing).

10) Renderers:
Writing renderers for columns is similar to that of Swing. We write an implementation separately.

11) Editing
In Swing, we need to implement the setValueAt() method, where we set the value from the edited cell onto the actual object (this works in combination with the isCellEditable() method, which dictates which cells are editable). We can also add custom editors like JComboBox etc.

In JavaFX, we need to set a cell factory on the column to make it editable. This is made even easier by the ready availability of several implementations - TextFieldTableCell for String values, for example.

In our example, we simply set one for the nameColumn with the call 'TextFieldTableCell.forTableColumn()'. For the ID column though, as the data type is Integer and a TextField by default deals with String, we need to use a StringConverter. We can make use of the convenience method which takes an implementation of the abstract StringConverter class as an argument. Luckily, JavaFX comes with ready implementations like NumberStringConverter (which deals with java.lang.Number, from which the wrappers descend). We simply use them.

For the Boolean column, ideally we need a CheckBox, which is a different control. So, we use the available CheckBoxTableCell.
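Putting these together, the editing setup looks roughly like this (column variables as used earlier; the table itself must also be made editable):

tableView.setEditable(true);

nameCol.setCellFactory(TextFieldTableCell.forTableColumn());
idCol.setCellFactory(TextFieldTableCell.forTableColumn(new NumberStringConverter()));
salaryCol.setCellFactory(TextFieldTableCell.forTableColumn(new NumberStringConverter()));
partTimeCol.setCellFactory(CheckBoxTableCell.forTableColumn(partTimeCol));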

Note that editing works well with the JavaFX property model with minimal code. Otherwise, we would need to write custom editor code (when we use a plain POJO, the editing will still work and the table cell will display the edited value, but the edit will not be propagated back to the object!).

Everything works, including editing, except for the doj field. For the date, similar to the Double field, we need to use a converter. JavaFX comes with a DateStringConverter, but unfortunately it works with the older java.util.Date and not the LocalDate which we want to use. So, to make this work, we simply need to write a converter by extending StringConverter, like:
dojCol.setCellFactory(TextFieldTableCell.forTableColumn(new StringConverter<LocalDate>(){
    @Override
    public String toString(LocalDate object)
    {
        if(null == object)
            return "";
        return object.toString();
    }

    @Override
    public LocalDate fromString(String value)
    {
        if (value == null) {
            return null;
        }                
        return LocalDate.parse(value);
    }
}));

This is a very simple converter which works with the default LocalDate pattern, so while editing, the same pattern should be used. Using a DatePicker here would be cool, but that needs an altogether different implementation of TableCell.

The complete source code used in this post is available in my github repo.

Saturday, 26 July 2014

ServiceLoader API Sample

In this post, I am going to discuss the ServiceLoader API introduced in Java SE 6. This is an API to load services where the definition and the implementation of the services are decoupled. For example, I might define a service via an interface named Logger which has a method named log(String s). I might later write an implementation named FileLog which logs the messages to a file. Later on, someone else might write a DBLog which logs the messages to a database. Here, there are different implementations of the service available, and as a user, I can choose to use one of them.

The important point to note is that the person who defines the service need not be the one who provides implementations of it. There are several such usages in the JDK itself - for example, the ResultSet interface just defines the methods; different DB vendors provide implementations of this interface.

Now, the question is: how do we load and use an implementation without a direct dependency on it? That is where the ServiceLoader API comes in. Let us walk through a simple example to see how this works.

Let us first define the interface which is in a separate project:

public interface ServiceInterface {
    public String serviceMethod();
}

Now, let us write an implementation in a different project (this project would reference the interface definition project in its libraries or, in the case of Maven, declare it as a dependency):

public class ServiceProviderImpl1 implements ServiceInterface {
    @Override
    public String serviceMethod() {
        return "Sun";
    }
}

The above class is a sample implementation named 'impl1'. Likewise, let us write a different implementation (in a different project which again depends on the interface project):

public class ServiceProviderImpl2 implements ServiceInterface {
    @Override
    public String serviceMethod() {
        return "Moon";
    }
}

We have two different implementations of the interface now.

Problem:
Let us try and develop the client which will actually use one of the implementations:

import java.util.ServiceLoader;
import <blog package>;

public class App {
    public static void main( String[] args ) {
        ServiceInterface si;
        si = new ???();
        si.serviceMethod();
    }
}

In OOP, the general guideline is to program to interfaces. So, we declare a variable of type ServiceInterface (and not of an implementation type). Then, when creating an object, we have to instantiate one of the implementations. Whichever implementation we decide to use, we have to hardcode its class name here (in place of the question marks).
What if we want to change the implementation tomorrow? We will be forced to make a code change.

Note: In the above case, this client project has to include both the interface and impl1 in the referenced libraries (dependencies in Maven).

Solution:
The solution is to use the ServiceLoader API. First let us look at the client code:

import java.util.ServiceLoader;
import com.blogspot.javanbswing.sp.api.ServiceInterface;

public class ClientTest {
    public static void main( String[] args ) {
        ServiceLoader<ServiceInterface> serviceLoader
            = ServiceLoader.load(ServiceInterface.class);
        ServiceInterface api = serviceLoader.iterator().next();
        System.out.println("from " + api.serviceMethod());
    }
}

Now, we are using the ServiceLoader API. It has a load method which searches for implementations, and the returned ServiceLoader is iterable over all the available implementations. But in our case, as we have included only the impl1 dependency, the search will find just that implementation, and that will be the one used.
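If more than one implementation happens to be on the classpath, we could iterate over all of them instead of just taking the first:

ServiceLoader<ServiceInterface> serviceLoader = ServiceLoader.load(ServiceInterface.class);
for (ServiceInterface impl : serviceLoader) {
    System.out.println(impl.getClass().getName() + " -> " + impl.serviceMethod());
}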

But how does the API search for and find the implementation? For this, the implementation jars should follow a mechanism called the provider-configuration file. This is a simple text file which should be placed under the META-INF/services folder. The file should be named exactly after the fully qualified name of the service interface.

For example, in our case, the service interface name is 'ServiceInterface' and its fully qualified name is 'com.blogspot.javanbswing.sp.api.ServiceInterface'. We should create a file with this name. This file should in turn contain the fully qualified class name of the implementation. For example, in impl1, the implementation class name is 'ServiceProviderImpl1' and its fully qualified name is 'com.blogspot.javanbswing.sp.impl1.ServiceProviderImpl1'. This is the only text that should be present in the file. The same should be done for impl2.
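Concretely, the impl1 jar would contain a file named META-INF/services/com.blogspot.javanbswing.sp.api.ServiceInterface whose entire content is this single line:

com.blogspot.javanbswing.sp.impl1.ServiceProviderImpl1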

So, when we include 'impl1' as the dependency for the client project, ServiceLoader will look for this file under the 'META-INF/services' folder in the jar and then pick up the implementation class name from the text file.

Now, run the client program and you can see that 'Sun' is displayed. But, how do we switch the implementation? Simply change the dependency to impl2 (in the client's pom.xml). Now, run the program again and you should see 'Moon' displayed. 

In production, we will simply place the implementation jar based on what we want (place either the impl1.jar or impl2.jar in the classpath and we are done). So, without any code change in the client program, we can switch to a different implementation. Simple and cool!

The complete source code including the client is available in my github repo.