Only Software matters

Experiences in software development

Archive for July, 2011

UDIAS – The 5 levels of an ideal agile and automated testing strategy

Posted by Patroklos Papapetrou on July 24, 2011


Recently my team started working on a new project, which is actually a re-development of a legacy system on a new platform. We decided to move from a Windows-based application to a new web-based J2EE application. It is a project with the odds stacked against failure: we know the domain, we know the new technology, the team has been working together for many years, and there is only one restriction. We must use the same database schema, since it may be used in conjunction with the old Windows client. Good news, since we don't have to redesign the database; bad news, since we have to stick with some bad database smells of the past. In any case, we have the chance to complete a project on time with no serious risks. It is also a very good opportunity to apply in practice the testing strategy we have dreamed of all these years: a strategy that is fully automated, repeatable with a single key press, and covers all aspects of the system. We decided to use Jenkins as our build system, and in the following lines I briefly describe the testing strategy.

Unit Testing

Since it is a Java system we had to choose between the two famous unit-testing frameworks: JUnit and TestNG. For no particular reason (maybe because we are already experienced with it) we chose JUnit. All unit tests run after each commit in our first Jenkins job (let's call it app-trunk). If coverage falls below a predefined threshold (e.g. <75%) the build automatically fails, team members are notified, and the last committer(s) strive to fix the build.
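
To give a flavor of what these tests look like, here is a minimal JUnit 4 sketch; the formatter class is a trivial stand-in for real domain logic, so the example stays self-contained.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class FullNameFormatterTest {

    // trivial stand-in for real domain logic, so the sketch is self-contained
    static class FullNameFormatter {
        String format(String name, String surname) {
            return surname + " " + name;
        }
    }

    @Test
    public void formatsSurnameFirst() {
        // a plain unit test: no container, no database, runs on every commit
        FullNameFormatter formatter = new FullNameFormatter();
        assertEquals("Papadopoulos John", formatter.format("John", "Papadopoulos"));
    }
}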

Database Unit Testing / Database Integration Testing

You can name the next level of testing as you wish; I prefer to call it Database Integration Testing, since we test how well our system is integrated with the database. Remember that we had to keep the database unchanged, so we have a double challenge here: test our code (EJB3 entities) against an existing production database with real data, and test it against a database that is automatically created with sample data. If I am not mistaken, the only mature framework available for database testing is DBUnit. It provides some flexibility, but we had some more requirements, so we created a layer over it to automate things like data generation, testing of entity object manipulation, etc. Maybe in another post we will describe this layer in detail. For each entity we test functions like create, edit, find and delete, and some schema-related issues such as indexes, foreign keys and primary keys. Because some tables have many columns, we would like to be sure that all mappings (columns and relations) are correct. All of these tests run through an automated build job that is triggered whenever app-trunk succeeds. There are two matrix jobs in Jenkins that test our code against all supported database platforms, with existing and sample data. Obviously, if something goes wrong, all team members are again notified and try to fix the problem.
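
A database integration test in this style might look like the following DBUnit sketch; the in-memory JDBC URL, the dataset file name and the EMPLOYEE table are illustrative assumptions, not our actual layer (each matrix job supplies its own connection settings).

import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class EmployeeDatabaseIT {

    @Test
    public void employeeTableHoldsSampleData() throws Exception {
        // connection details are placeholders -- the matrix job injects the real ones
        Connection jdbc = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "");
        IDatabaseConnection connection = new DatabaseConnection(jdbc);

        // load sample data from a flat XML dataset and reset the table before the test
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/employees-sample.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);

        // verify that what we inserted is what the schema actually stored
        IDataSet actual = connection.createDataSet();
        assertEquals(dataSet.getTable("EMPLOYEE").getRowCount(),
                     actual.getTable("EMPLOYEE").getRowCount());

        connection.close();
    }
}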

Integration Testing

Our system runs on an application server, so we somehow need to test its behavior and how well it is integrated with services and frameworks like JSF and CDI. Here comes Arquillian (with its JSFUnit extension) to make our life much easier. The idea of writing these tests just like unit tests is brilliant, and since we don't have to learn a new framework we could adopt it quickly, from day one of our project. Each class that uses services within the container should be tested at this level. The same coverage rules as for unit testing apply here, and the integration tests run after all database integration test jobs succeed. We have once again created a matrix build job in Jenkins that runs all integration tests against the supported platforms and application servers.
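
A minimal Arquillian test could look like the sketch below; GreetingService is a hypothetical CDI bean, and exact package names may vary slightly between early Arquillian releases.

import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Test;
import org.junit.runner.RunWith;
import static org.junit.Assert.assertEquals;

@RunWith(Arquillian.class)
public class GreetingServiceIT {

    @Deployment
    public static JavaArchive createDeployment() {
        // package only what this test needs; beans.xml enables CDI in the archive
        return ShrinkWrap.create(JavaArchive.class)
                .addClass(GreetingService.class) // hypothetical CDI bean
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject
    private GreetingService service;

    @Test
    public void greetsInsideTheContainer() {
        // runs inside the container, so injection and interceptors are the real thing
        assertEquals("Hello, Patroklos!", service.greet("Patroklos"));
    }
}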

Acceptance Testing

Probably the hardest job, because this is the point where we have to test whether our system meets the end-users' requirements. It is also the step that requires the strongest hardware, since the software has to be deployed in a real environment and automatically tested for the most critical scenarios and the most used application flows. There are plenty of tools; however, we have chosen Selenium, because it has quite stable integration with Jenkins and a very powerful add-on for the Firefox browser. We try to cover in depth not all possible user screens, but those that are used the most. All acceptance tests run during our nightly build, once a day, and only if there has been a commit since the last run. Although it is very difficult to test our system in all different environments, we try to run the tests in the most commonly used ones. Obviously, acceptance testing does not end with the automated build job. It is the QA team's responsibility to perform a complete acceptance test of all scenarios, but we strongly believe that core functionality should always be tested automatically, to catch serious defects before the QA team does.
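
A smoke test for one critical flow might look like the following Selenium 2 (WebDriver) sketch; the URL, the element ids and the expected title are placeholders for the real application.

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import static org.junit.Assert.assertTrue;

public class LoginAcceptanceIT {

    private WebDriver driver;

    @Before
    public void openBrowser() {
        driver = new FirefoxDriver();
    }

    @Test
    public void userCanLogIn() {
        // the URL and element ids below are placeholders for the real screens
        driver.get("http://localhost:8080/app/login.xhtml");
        driver.findElement(By.id("username")).sendKeys("demo");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("loginButton")).click();
        assertTrue(driver.getTitle().contains("Dashboard"));
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}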

Stress Testing

Our application stores a large amount of data and is accessed by many users, so we have to ensure that its performance is acceptable to them and does not fall below some predefined thresholds. Without automation this would be very hard to achieve, so to conquer the last frontier of testing we use Apache JMeter. Although JMeter has no tight integration with Jenkins, we have created a separate project (JAR) that includes all performance and stress tests, also triggered during our nightly build job. Stress testing is not our number-one priority; however, at the end of each iteration we evaluate the usefulness of our existing tests and modify them accordingly if a serious requirements change occurred during the last iteration.
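
JMeter test plans are XML files built with its GUI, so instead of reproducing one here, the plain-Java sketch below illustrates the kind of threshold check our stress job enforces; the target URL, user count and threshold are made-up values.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SimpleStressCheck {

    public static void main(String[] args) throws Exception {
        final URL target = new URL("http://localhost:8080/app/"); // placeholder URL
        int concurrentUsers = 50;        // made-up load level
        long thresholdMillis = 2000;     // made-up response-time budget

        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        List<Future<Long>> timings = new ArrayList<Future<Long>>();

        // fire the same request from many threads and record each response time
        for (int i = 0; i < concurrentUsers; i++) {
            timings.add(pool.submit(new Callable<Long>() {
                public Long call() throws Exception {
                    long start = System.currentTimeMillis();
                    HttpURLConnection conn = (HttpURLConnection) target.openConnection();
                    conn.getResponseCode();
                    conn.disconnect();
                    return System.currentTimeMillis() - start;
                }
            }));
        }

        long worst = 0;
        for (Future<Long> timing : timings) {
            worst = Math.max(worst, timing.get());
        }
        pool.shutdown();

        System.out.println("Worst response time: " + worst + " ms");
        if (worst > thresholdMillis) {
            // failing loudly here is what makes the nightly build go red
            throw new IllegalStateException("Performance threshold exceeded");
        }
    }
}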

The UDIAS (unit, database, integration, acceptance, stress) testing strategy is probably not a silver bullet, but it covers all the different layers and views of an application. The time needed to get all the above build jobs running is significant, but the return on investment is worth the effort. I don't think there is anything more important in a system under development or maintenance than an automated testing plan.

Thanks for reading this post and feel free to rate it, post your comments or share it with others.

 

Posted in agile, ci, continuous integration, jenkins, software, testing | 4 Comments »

Why is there no standard for developing real modular web applications?

Posted by Patroklos Papapetrou on July 19, 2011


OSGi, SpringSource, JBoss Modules, J2EE... and the list never ends. All these technologies promise their end users/developers more or less the same thing: modular Java web applications(?). How many of us out there, though, have actually tried to develop a REAL modular software system in Java? How many of us have managed to get it done? You have probably noticed that I capitalized the word REAL, and this is not accidental. Let me briefly explain what I mean by "real modular Java system".

IMHO, a REAL software module should have parts for at least business logic, persistence, user interface and configuration. In an ideal scenario, a software module should be able to be plugged in and out of a running application without restarting it. For example, in a J2EE application server (such as JBoss or GlassFish) the running application (core system) is a .war file. Additional modules of the core system (containing the parts mentioned above) are developed and packaged as separate jar files. These jar files should be deployable to the application server and integrated with the core system (via extension points in the business logic AND in the user interface) without redeploying it. Can we do something like this?

Let's see what the related technologies suggest. I have been a fan of JBoss Modules since their first steps, and after the latest release of JBoss AS I was looking forward to seeing them in action. To be honest, JBoss Modules provide a very simple and convenient way to define dependencies between modules. Oops!! Did I write "modules"? What kind of modules are these? They can include business logic, configuration and a data model, but what about the user interface? Unfortunately nothing is mentioned about that, and after a little research I found that the concept of modularity in JBoss Modules does not include any user interface. What about OSGi, the most promising way of building modular applications? OSGi is nowadays supported by many application servers, and although its configuration looks like a dinosaur in the age of the industrial revolution, it seems to be a very nice approach: plenty of available services, a dozen frameworks to use and, of course, no reference to any user interface capabilities. On the other hand there is Vaadin, which integrates(?) well with OSGi to build modular web applications, as the relevant article implies. I wonder if there is a real, production modular enterprise application developed with OSGi and Vaadin. Spring is an independent framework by SpringSource, which is currently the leader in developing Java enterprise applications. Spring Dynamic Modules, in conjunction with OSGi, claims to be the most sophisticated way to build dynamic and modular web applications, but again I still feel that even Spring is not solving the problem I have raised. From my research experience, integrating the above technologies into a modular system is a hard task. Last but not least comes J2EE. The latest versions of the web and full profile specifications have transformed J2EE into a very powerful set of frameworks. JSF 2 and CDI, along with all the new features, have dramatically increased its popularity among Java developers. When I read this article for the first time, I believed that I had finally found a standard solution for modular web applications. Although each jar can contain all the different parts (business logic with EJB and CDI, persistence with JPA and its implementations, configuration, and user interface with JSF and its implementations), according to this issue there is no way to handle jars with JSF components as a separate module. One more disappointment. Modules in J2EE are supposed to be jars packaged in a single war. That was too close!! We have to wait, I guess, until the release of JSF 2.2, and some months more for support from the most well-known application servers.
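
To make the OSGi point concrete: a module's lifecycle hook is just a bundle activator, as in the sketch below (PayrollService and its implementation are invented names). It can register business services, but there is no standard hook for contributing user interface to the host application, which is exactly the gap discussed above.

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class PayrollModuleActivator implements BundleActivator {

    // called when the bundle is started inside a running OSGi container
    public void start(BundleContext context) throws Exception {
        // a module registers its business services here (names are hypothetical)...
        context.registerService(PayrollService.class.getName(),
                                new PayrollServiceImpl(), null);
        // ...but nothing standard lets it contribute pages or menus to the host UI
    }

    // called when the bundle is stopped; registered services go away with it
    public void stop(BundleContext context) throws Exception {
    }
}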

Since there is no standard for building modular systems, many well-known projects have developed their own module systems, based on one of the previously mentioned frameworks or from scratch. Jenkins, Atlassian's Jira and Sonar by SonarSource are all Java-based applications with a powerful module/plugin system. You develop your plugin (including the user interface) following some guidelines, and you deploy it as a single jar through a module/plugin manager. In most cases you need to restart the system, but I think this does not bother any administrator, since you can so easily increase the functionality of your software.
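
If you do roll your own, the discovery core of such a plugin system can be surprisingly small. The sketch below uses the JDK's own ServiceLoader; the Plugin interface and its methods are invented here for illustration. A restart picks up newly dropped jars, matching the restart-required behavior of the systems above.

import java.util.ServiceLoader;

// the extension point every plugin jar must implement (hypothetical interface)
interface Plugin {
    String name();
    void start();
}

public class PluginHost {

    public static void main(String[] args) {
        // discovers implementations listed in each jar's META-INF/services entry
        // for the Plugin interface -- drop a jar on the classpath, get a plugin
        for (Plugin plugin : ServiceLoader.load(Plugin.class)) {
            System.out.println("Starting plugin: " + plugin.name());
            plugin.start();
        }
    }
}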

So my dilemma is still valid. Should I start building my own module/plugin system, or should I wait for a standard to come along and boost the development of modular web applications worldwide? I wonder why this kind of standardization is not yet available and why we are obliged to try-catch-finally ( still without resources 🙂 ) among all these technologies and frameworks! Are we close to having a standard for developing real modular enterprise web applications or not?

Thanks for reading my thoughts, and I am looking forward to your comments.

Posted in java, software | Tagged: , , | 17 Comments »

Injecting Lists with CDI in Managed Beans and JSF components

Posted by Patroklos Papapetrou on July 10, 2011


It's been 18 months since the first official release of "JSR 299: Contexts and Dependency Injection for the Java EE platform", and Java is now more ready than ever to be compared (in dependency injection) with Google Guice, Spring or PicoContainer, because CDI includes the best features of all the above-mentioned frameworks. Moreover, JSR-299 is now a standard, so by following it you run no risk of finding yourself dependent on third-party solutions. Now you can enjoy dependency injection as a native/core feature of J2EE.

In this article I demonstrate how easy it has become, with CDI, to use and share common lists in managed beans as well as in JSF components. Let's assume that you would like to create and use an employee list (one that rarely changes) in a web application. Before CDI you would probably create an application-scoped global static bean with a private attribute employeeList (with the relevant setter and getter) and a method to initialize it. Whenever a bean had to access this list, you would write code like the following:

GlobalBean.getEmployeeList();

Quite ugly code, for three reasons: first, your code depends on the definition of the getEmployeeList method; second, you are obliged to go through the static class to get a reference to the list; and finally, there is no way to use this list directly within an XHTML page without accessing it through a managed bean.
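
(For completeness, the pre-CDI holder would look roughly like this sketch; the class body is illustrative, not code from a real application.)

import java.util.ArrayList;
import java.util.List;

public class GlobalBean {

    // shared, statically held list -- every caller couples itself to this class
    private static List<Employee> employeeList = new ArrayList<Employee>();

    public static void initEmployeeList(List<Employee> employees) {
        employeeList = employees;
    }

    public static List<Employee> getEmployeeList() {
        return employeeList;
    }
}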

Thanks to CDI, things become less complicated. First, take a look at the simple Employee class (it could just as well be an entity in a real application).

public class Employee {

    private String name;
    private String surname;
    private Long birthYear;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getSurname() {
        return surname;
    }

    public void setSurname(String surname) {
        this.surname = surname;
    }

    public Long getBirthYear() {
        return birthYear;
    }

    public void setBirthYear(Long birthYear) {
        this.birthYear = birthYear;
    }

    @Override
    public String toString() {
        return this.getSurname() + " " + this.getName();
    }
}

Now let's get to the interesting parts. With CDI we can create our own custom annotations and use them in the injected class exactly the same way as the predefined ones. In our case we have created the following annotation to annotate an employee list.

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.inject.Qualifier;

@Target({ElementType.METHOD, ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Qualifier
public @interface EmployeeList {
}

But wait: the annotation itself is completely useless if there is no method that somehow creates the employee list. The ApplicationInitializationBean, shown just below, includes a public method that returns an Employee list, and this method is annotated with @Produces, @Named and @EmployeeList. All three annotations are very important, and I will explain why. By annotating a method with @Produces we get a producer method, which acts as a source of bean instances: the method declaration itself describes the bean, and the container invokes the method to obtain an instance of the bean when no instance exists in the specified context. A producer can also be declared on a field.

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Named;

@ApplicationScoped
public class ApplicationInitializationBean implements Serializable {

    private static final long serialVersionUID = 1L;

    @Produces
    @Named(value="employeeNamedList")
    @EmployeeList
    public List<Employee> getEmployees(){
        return this.generateEmployees();
    }

    private List<Employee> generateEmployees(){

        List<Employee> employees = new ArrayList<Employee>(5);

        for (int i=1 ; i<=5 ; i++){
            Employee emp = new Employee();
            emp.setName("Name_" + i);
            emp.setSurname("Surname_" + i);
            emp.setBirthYear(Long.valueOf(1976) + i);
            employees.add(emp);
        }
        return employees;
    }
}

As you have noticed, we have also used the @Named annotation and the custom @EmployeeList; each has its own purpose. By using @Named we can access the list directly from any XHTML/JSF page by its provided name (in our case employeeNamedList), as shown in the following example:

<h:dataTable id="employeesTable" value="#{employeeNamedList}" var="employee">
    <h:column>
        <f:facet name="header">
            <h:outputText value="Surname" />
        </f:facet>
        <h:outputText value="#{employee.surname}" />
    </h:column>
    <h:column>
        <f:facet name="header">
            <h:outputText value="Name" />
        </f:facet>
        <h:outputText value="#{employee.name}" />
    </h:column>
</h:dataTable>

By using @EmployeeList we can inject the list into any other bean, as follows:

@Inject
@EmployeeList
List<Employee> employeeList;

As you can see, we don't care about who is responsible for holding the list, or even how it is being populated. We just use its name (in a JSF component) or its custom annotation (to inject it into a managed bean).

In conclusion, CDI provides a very powerful way to increase abstraction and keep your code clean and readable. Of course, the above example is not the only use of a producer method, and it is not limited to lists. You can create producer methods for any kind of object you would like to inject in your application.
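
For instance, a producer can expose configuration the same way; in this sketch the ApplicationSettings class and its values are invented for illustration:

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;

public class SettingsProducer {

    // hypothetical settings holder, purely for illustration
    public static class ApplicationSettings {
        private int maxResults;
        public int getMaxResults() { return maxResults; }
        public void setMaxResults(int maxResults) { this.maxResults = maxResults; }
    }

    @Produces
    @ApplicationScoped
    public ApplicationSettings produceSettings() {
        ApplicationSettings settings = new ApplicationSettings();
        settings.setMaxResults(50); // in real code this might come from a properties file
        return settings;
    }
}

Any bean can then simply declare @Inject ApplicationSettings settings; and the container calls the producer on demand.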

Thanks for reading this post and, as always, any comments are welcome and valuable!

P.S. You can find a working example of the above here

Posted in cdi, jsf, software | 6 Comments »

5+1 Sonar Plugins you must not miss

Posted by Patroklos Papapetrou on July 10, 2011


Sonar, in my humble opinion, is the leading system to help development teams track, manage and eventually enhance the overall quality of their code, and obviously of their software products/projects. To be honest, this is not a post describing either Sonar's features or the necessity of such a tool for every developer who respects his time and effort; if you want to read such an analysis, you can see my related post To Sonar or Not to Sonar. In this article I briefly present 5+1 plugins that every Sonar installation should have. I would like, though, to clarify some exclusions I made prior to my final choice: I have excluded all plugins that deal with additional languages and IDEs, to keep this post as objective as I can, and I have also excluded all commercial plugins, for obvious reasons. With these assumptions, I limited my selections to the following categories:
  • Additional Metrics
  • Governance
  • Integration
  • Visualization / Reporting
Sonar itself comes with a variety of features that cover most of the needs of a software development team. However, I consider the following plugins essential, especially for teams that have adopted, or are trying to adopt, agile practices. To be honest, it was very difficult to select only 6 plugins!!
1. Hudson / Jenkins plugin
Although Sonar analysis can easily be triggered from several build tools (Maven, Ant etc.), I strongly believe that its native integration with the most famous open-source CI server makes it an important part of the continuous integration/deployment practice. The configuration is extremely easy, and the proposed best practice is to trigger Sonar in nightly builds. Team members can track software quality day by day, automatically, without worrying about when a new analysis should run.
2. JaCoCo Plugin
Unit test results, with drill-down analysis, line and branch coverage, and running and failed tests, are features implemented in the Sonar core and cover in depth all aspects of the unit-testing practice. But, as there is always a 'but', what about integration tests? What if we want separate measures for unit and integration tests? Here comes the JaCoCo plugin to save our time and money. Although JaCoCo is an alternative to Cobertura (the default Sonar coverage tool), it can be configured to display metrics only for integration tests. There is a great article that explains in detail how we can use it and get the same analysis for integration tests as for unit tests.
3. Useless Code Plugin
It may look similar to the Sonar core feature named Duplicate Code, but it adds some more metrics, which I think are very useful, especially for large or legacy systems. In general it measures how many lines can be removed from your code: it reports the number of unused private methods that can be safely removed, and the number of unused protected methods that can be removed after some more careful code examination. Finally, it provides some more details about code duplication, reporting how the duplicate lines are formed (i.e. x blocks of y lines).
4. SIG Maintainability Model Plugin

This plugin, as its name implies, is an implementation of the Software Improvement Group (SIG) Maintainability Model. It reports a ranking, from -- (very bad) to ++ (very good), on the following base indicators: Analysability, Changeability, Stability and Testability. The core idea behind this ranking is to measure a series of base metrics, such as lines of code (LOC), duplications, unit tests, complexity and unit size. Each of these metrics is then factored into some of the mentioned indicators, and the final result represents the overall maintainability of the project. We can see the results of this analysis in a graphical (spider) presentation with all four axes of the model. With a glance at this graph you get a view, both global and detailed, of how easy it is to change and maintain your codebase. For me it is the first index I check every morning, and if something is not + or ++ then we have definitely done something wrong 😉

5. Quality Index Plugin
Have you ever wanted to check a single number (indicator) and understand how healthy your project is? I am sure you have!! Well, the Quality Index plugin is exactly what you are looking for. The plugin combines four weighted axes of quality (complexity, coding violations, style violations, test coverage) and produces a ranking between 0 (lowest) and 10 (highest). Moreover, it calculates a method complexity factor based on the complexity axis mentioned above. Have you ever tried to get a ranking of 10 with this plugin? I think it's worth the effort! 🙂

6. Technical Debt Plugin
Last, but not least, comes the plugin that reports the interest you have to pay as a developer, as a team, and as a company. Technical debt is a term coined by Ward Cunningham to remind us that if we don't pay our interest from time to time, it will surely and eventually make our software unmaintainable: hard to add new features to, or even to find the root cause of a defect in. The plugin, which has a very powerful configuration, represents technical debt in four dimensions:

  • Debt Ratio: the percentage of current technical debt relative to the maximum possible technical debt.
  • Cost to reimburse: the cost, in your currency, to pay all interest and clean up your code.
  • Work to reimburse: the same as above, measured in man-days.
  • Breakdown: the distribution across the following axes: Duplication, Violations, Complexity, Coverage, Documentation and Design.

Be sure to check its measures, to avoid finding yourself in a bad situation like spaghetti code 🙂

I am pretty sure there are plenty of other interesting Sonar plugins, so please feel free to post a comment with your own list.

Posted in open source, quality, software, sonar | Tagged: , , , | 3 Comments »

 