Rafael Sanches

June 20, 2012

Hazelcast + Spring + @Cacheable

Filed under: java, performance, programming, server-side, spring — mufumbo @ 2:12 am

Today I integrated Hazelcast with Spring's @Cacheable annotation. I chose Hazelcast over EhCache + Terracotta because it has a simpler configuration and there's no need to run another daemon, which makes the dev-environment setup easier.

Here is my Hazelcast Spring configuration, which is included in my main Spring configuration:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:hz="http://www.hazelcast.com/schema/spring"
        xmlns:cache="http://www.springframework.org/schema/cache"
    xsi:schemaLocation="http://www.springframework.org/schema/beanshttp://www.springframework.org/schema/beans/spring-beans-3.1.xsd
 http://www.hazelcast.com/schema/springhttp://www.hazelcast.com/schema/spring/hazelcast-spring-2.1.xsd
 http://www.springframework.org/schema/contexthttp://www.springframework.org/schema/context/spring-context-3.1.xsd
 http://www.springframework.org/schema/cachehttp://www.springframework.org/schema/cache/spring-cache-3.1.xsd
 "> 
    <cache:annotation-driven cache-manager="cacheManager" mode="proxy" proxy-target-class="true" />

    <bean
        class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"
        p:systemPropertiesModeName="SYSTEM_PROPERTIES_MODE_OVERRIDE">
        <property name="locations">
            <list>
                <value>classpath:/hazelcast-default.properties</value>
            </list>
        </property>
    </bean>

    <hz:hazelcast id="instance">
        <hz:config>
            <hz:group name="mygroup" password="mypassword" />
            <hz:network port="5700" port-auto-increment="false">
                <hz:join>
                    <hz:multicast enabled="true" />
                    <hz:tcp-ip enabled="true">
                        <hz:interface>127.0.0.1:5700</hz:interface>
                    </hz:tcp-ip>
                </hz:join>
                <hz:interfaces enabled="true">
                    <hz:interface>127.0.0.1</hz:interface>
                </hz:interfaces>
            </hz:network>

            <hz:map name="default">
                <hz:map-store enabled="true" write-delay-seconds="0"
                    class-name="com.mufumbo.server.cache.hazelcast.EmptyCacheMapLoader" />
            </hz:map>

            <hz:map name="null-map" />

            <hz:map name="app" backup-count="3" async-backup-count="1"
                time-to-live-seconds="10" max-size="100" eviction-percentage="50"
                cache-value="true" eviction-policy="LRU" merge-policy="hz.LATEST_UPDATE" />
        </hz:config>
    </hz:hazelcast>

    <hz:config id="liteConfig">
        <hz:lite-member>true</hz:lite-member>
    </hz:config>

    <!-- set hazelcast spring cache manager -->  
    <bean id="cacheManager" class="com.hazelcast.spring.cache.HazelcastCacheManager">
        <constructor-arg ref="instance" />
    </bean>
</beans>
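
The map-store entry above points at com.mufumbo.server.cache.hazelcast.EmptyCacheMapLoader, which isn't shown in the post. A minimal sketch of a no-op implementation that would satisfy that configuration might look like the following, assuming the Hazelcast 2.x com.hazelcast.core.MapStore interface; it never persists or pre-loads anything, so the map behaves as a pure in-memory cache:

package com.mufumbo.server.cache.hazelcast;

import java.util.Collection;
import java.util.Map;
import java.util.Set;

import com.hazelcast.core.MapStore;

// Sketch only: a no-op MapStore so the "default" map keeps entries in memory
// without ever touching a backing store.
public class EmptyCacheMapLoader implements MapStore<Object, Object> {

    public void store(Object key, Object value) {
        // intentionally empty: cache entries are not persisted
    }

    public void storeAll(Map<Object, Object> map) {
        // intentionally empty
    }

    public void delete(Object key) {
        // intentionally empty
    }

    public void deleteAll(Collection<Object> keys) {
        // intentionally empty
    }

    public Object load(Object key) {
        return null; // nothing to load from a backing store
    }

    public Map<Object, Object> loadAll(Collection<Object> keys) {
        return null; // no pre-loading
    }

    public Set<Object> loadAllKeys() {
        return null; // returning null disables initial key loading
    }
}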

Please notice the mode="proxy" proxy-target-class="true". Without that configuration, beans that have a superclass constructor and @Cacheable would not load. Note that this isn't a Hazelcast issue; it's a Spring AOP issue, and it happens even if you use SimpleCacheManager instead of the Hazelcast one.

I was considering mode="aspectj", but it was taking too much time to set up, so maybe another day.

ATTENTION: CGLIB proxies require the class to provide a default constructor, i.e. one without arguments. Otherwise you'll get an IllegalArgumentException: "Superclass has no null constructors but no arguments were given." This makes constructor injection impossible.
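
For reference, here is a minimal sketch of a bean that would be cached this way; the class and method names are made up for illustration, and the "app" cache name maps to the Hazelcast map of the same name declared in the configuration above. Note the explicit no-argument constructor required by CGLIB:

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ForumService {

    // CGLIB needs a no-argument constructor to build the subclass proxy.
    public ForumService() {
    }

    // Results are stored in the Hazelcast map named "app" declared above.
    @Cacheable(value = "app", key = "#forumId")
    public String findForumTitle(long forumId) {
        // Expensive lookup (database, remote call, ...) would go here.
        return "forum-" + forumId;
    }
}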

ATTENTION 2: Be careful if you have a BeanNameAutoProxyCreator matching a class that you flag as @Cacheable. In that case there is already a CGLIB proxy behind the bean, and the two cannot be combined. It was a hassle for me because I was using a BeanNameAutoProxyCreator to match all my *Service classes in order to create the JDO transactions on the create*, update* and save* methods. If you have the same problem, replace your BeanNameAutoProxyCreator configuration with an AOP configuration like:

<tx:annotation-driven transaction-manager="transactionManager" mode="proxy" proxy-target-class="true" />

    <aop:config>
        <!-- http://blog.espenberntsen.net/tag/pointcut/ -->
        <!-- For all the classes annotated with @Service -->
        <aop:pointcut id="serviceMethodsCut" expression="within(@org.springframework.stereotype.Service *)" /> 
        <aop:advisor advice-ref="txAdvice" pointcut-ref="serviceMethodsCut" />
    </aop:config>

    <aop:config>
        <!-- For all the methods annotated with @Transactional -->
        <aop:pointcut id="transactionalCut" expression="execution(@org.springframework.transaction.annotation.Transactional * *(..))" />
        <aop:advisor advice-ref="txAdvice" pointcut-ref="transactionalCut" />
    </aop:config>

    <tx:advice id="txAdvice" transaction-manager="transactionManager">
        <tx:attributes>
            <tx:method name="update*" propagation="REQUIRES_NEW" rollback-for="java.lang.Exception"/>
            <tx:method name="insert*" propagation="REQUIRES_NEW" rollback-for="java.lang.Exception"/>
            <tx:method name="create*" propagation="REQUIRES_NEW" rollback-for="java.lang.Exception"/>
            <tx:method name="delete*" propagation="REQUIRES_NEW" rollback-for="java.lang.Exception"/>
            <tx:method name="save*" propagation="REQUIRES_NEW" rollback-for="java.lang.Exception"/>
            <tx:method name="store*" propagation="REQUIRES_NEW" rollback-for="java.lang.Exception"/>
            <tx:method name="get*" propagation="REQUIRED" read-only="true" />
            <tx:method name="*" propagation="SUPPORTS" read-only="true" />
        </tx:attributes>
    </tx:advice>
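
With that advice in place, any bean annotated with @Service is matched by the pointcut above, and its transaction attributes are chosen purely by method name. A small hypothetical example:

import org.springframework.stereotype.Service;

// Matched by the "within(@Service *)" pointcut above; the method name decides
// which txAdvice attribute applies (example class, not from the original post).
@Service
public class AccountService {

    // "create*" -> REQUIRES_NEW, rolls back on any Exception
    public void createAccount(String name) {
        // persistence work here
    }

    // "get*" -> REQUIRED, read-only
    public String getAccountName(long id) {
        return "account-" + id;
    }

    // any other name -> SUPPORTS, read-only
    public boolean exists(long id) {
        return id > 0;
    }
}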

 

Also, remember to update your pom.xml with the AspectJ configuration:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-aspects</artifactId>
    <version>${org.springframework-version}</version>
</dependency>

<dependency>
    <groupId>aspectj</groupId>
    <artifactId>aspectjrt</artifactId>
    <version>1.5.4</version>
</dependency>

<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjweaver</artifactId>
    <version>1.6.12</version>
</dependency>

            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>aspectj-maven-plugin</artifactId>
                <version>1.4</version>
                <configuration>
                    <Xlint>warning</Xlint>
                    <complianceLevel>1.7</complianceLevel>
                    <source>1.7</source>
                    <target>1.7</target>
                    <encoding>UTF-8</encoding>
                    <aspectLibraries>
                        <aspectLibrary>
                            <groupId>org.springframework</groupId>
                            <artifactId>spring-aspects</artifactId>
                        </aspectLibrary>
                    </aspectLibraries>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>test-compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

February 2, 2012

Spring-mvc + Velocity + DCEVM

Filed under: java, performance, server-side, spring, tutorial — mufumbo @ 5:54 pm

Java web development can be frustrating at times. Things that slow me down:

  • Excessive pre-configuration to be able to deliver results
  • Many people use JSPs in a confusing way and end up with mixed patterns in the views.
  • Server restarts and deployment kill my productivity.

In order to improve dev speed I have been using these three technologies together:

  • Spring-MVC: This one binds controllers and views together using simple annotations, which makes it very easy to create well-defined controllers that run before the view is rendered.
  • Velocity: One of the simplest and most powerful template engines available. With it I can define clear and simple templates that can access and interact with Java objects at runtime.
  • DCEVM: Dynamic Code Evolution VM. A modification of the Java HotSpot(TM) VM that allows unlimited class redefinition at runtime. In our case it lets us deploy changes to Java classes without restarting the servlet container.

Since there are many tutorials on how to use these technologies individually, this post will only cover how to bind the three of them together. I would also suggest using Maven to glue the dependencies together.

Lately many people are excited about the Play Framework, which adds speed to Java development. Personally, I don't like being too tied to a framework, but it seems very good.

I would recommend the veloeclipse Eclipse plugin for syntax coloring of the templates.

Glueing Velocity With Spring

There are many tutorials covering this part, like this one from Velocity and this one from Spring, but none of them talks about WebappResourceLoader and how to use relative paths in the Velocity templates and in the controllers.

Here’s the spring configuration that I use:

    <bean id="viewResolver"
        class="org.springframework.web.servlet.view.velocity.VelocityViewResolver">
        <property name="cache" value="true" />
        <property name="prefix" value="" />
        <property name="suffix" value=".vm" />
        <property name="toolboxConfigLocation" value="/WEB-INF/velocity/tools.xml" />
        <property name="exposeRequestAttributes" value="true"/>
        <property name="exposeSessionAttributes" value="true"/>
        <property name="exposeSpringMacroHelpers" value="true"/>

        <property name="attributesMap">
            <map>
                <entry key="dateTool"><bean class="org.apache.velocity.tools.generic.DateTool" /></entry>
                <entry key="escapeTool"><bean class="org.apache.velocity.tools.generic.EscapeTool" /></entry>
            </map>
        </property>
    </bean>

    <bean id="velocityConfig"
        class="org.springframework.web.servlet.view.velocity.VelocityConfigurer">
        <property name="configLocation" value="/WEB-INF/velocity/velocity.properties" />
        <property name="resourceLoaderPath">
            <value>/</value>
        </property>
        <property name="velocityProperties">
           <props>
                 <prop key="contentType">text/html;charset=UTF-8</prop>
           </props>
          </property>
    </bean>

It's very important to use org.apache.velocity.tools.view.WebappResourceLoader in order to make development easier.

Using ClasspathResourceLoader makes development painful because, depending on your configuration, it either won't reload the templates when you change them, or it will refresh the entire webapp context in order to refresh a single template. That process can cost you minutes after each template change.

Here’s my configuration for velocity.properties:

runtime.log.invalid.reference = true
runtime.log.logsystem.class=org.apache.velocity.runtime.log.CommonsLogLogChute

input.encoding=UTF-8
output.encoding=UTF-8

directive.include.output.errormsg.start = 

directive.parse.max.depth = 10

velocimacro.library.autoreload = true
velocimacro.library = /VM_global_library.vm
velocimacro.permissions.allow.inline = true
velocimacro.permissions.allow.inline.to.replace.global = false
velocimacro.permissions.allow.inline.local.scope = false

velocimacro.context.localscope = false

runtime.interpolate.string.literals = true

resource.manager.class = org.apache.velocity.runtime.resource.ResourceManagerImpl
resource.manager.cache.class = org.apache.velocity.runtime.resource.ResourceCacheImpl

resource.loader = webapp, class

class.resource.loader.description = Velocity Classpath Resource Loader
class.resource.loader.class = org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader 

webapp.resource.loader.class = org.apache.velocity.tools.view.WebappResourceLoader
webapp.resource.loader.path = /WEB-INF/views/
webapp.resource.loader.cache = true
webapp.resource.loader.modificationCheckInterval = 2

Now you can start writing a simple Java controller:

@Controller
public class MyControllerClass {
    @RequestMapping(value = "/my-url-path/{myPathVariable}", method = RequestMethod.GET)
    public String processRequest(@PathVariable String myPathVariable, @RequestParam(required = false) Long editId, Model model) {
        model.addAttribute("world", "world");
        return "/pages/forum/admin/forum-create";
    }
}

Now you can create a simple Velocity view with a template like:

#parse("/parts/header.vm")

#parse("/parts/left-menu-p.vm")

Hello ${world}.

#parse("/parts/footer.vm")

Notice that no XML configuration was necessary, just the creation of the view and the controller. Also, thanks to DCEVM, there’s no need to restart the webapp after creating a new controller.

After configuring Spring MVC + Velocity, the most important part is to configure DCEVM so that we don't need to restart our Tomcat container after every change to the classpath.

Configuring DCEVM

First download the binary from http://ssw.jku.at/dcevm/binaries/

If you're on Windows or Linux, just open the jar and choose the JDK that you want the modified VM installed into. Notice that you can enable and disable DCEVM support for that JDK. On Linux, only the 32-bit JDK is supported.

If you're running Mac OS X you'll have to download the 32-bit version of the SoyLatte VM. It's available here: http://landonf.bikemonkey.org/static/soylatte/bsd-dist/javasrc_1_6_jrl_darwin/soylatte16-i386-1.0.3.tar.bz2

Unzip the soylatte VM under /Library/Java/JavaVirtualMachines/soylatte16-i386-1.0.3/

Now just run the dcevm-0.2.jar and choose /Library/Java/JavaVirtualMachines/soylatte16-i386-1.0.3/

After setting up the new VM, we need to set up the Eclipse project.

  1. Open the project properties and select the new JRE.
  2. Download Tomcat and create a new server in Eclipse, choosing Tomcat.
  3. Make sure that your Tomcat runs on the newly created JRE.
  4. Open the new server configuration and make sure "Automatically publish when resources change" is checked.
  5. Go to the server "Modules" tab and edit your project's web modules. Disable "Auto Reload". This is extremely important, since it will save you hours and days of restart time.
  6. Just run your Tomcat, and every time you make a change to your project it will be pushed to the server. No need to restart!

For a more comprehensive tutorial about how to configure Tomcat + Eclipse, please visit: How to Set Up Hot Code Replacement with Tomcat and Eclipse

If you like the DCEVM idea, also take a look at JRebel, which is even more powerful.

Also, if you use the DataNucleus JDO database, don't forget to install the JDO Eclipse plugin; that way your classes are enhanced on the fly after changes, so no restarts are needed for enhancement.

Configuring Run-Jetty-Run Eclipse Plugin

Another, simpler way is to install the Run Jetty Run plugin and run it on the new SoyLatte JVM. When creating your debug profile, remember to click on the JRE tab and choose the SoyLatte VM. Running a Jetty container with this plugin is 20 times faster than running "mvn jetty:run".

Install the plugin from their update site: http://run-jetty-run.googlecode.com/svn/trunk/updatesite

After the plugin is installed, remember to enable "Build Automatically" in Eclipse and disable the source scanner in the Jetty plugin:

[Screenshot: Run-Jetty-Run launch configuration with DCEVM and Velocity]

This makes it possible to save your Java files and Velocity templates without restarting the server. One nice tweak I made on my setup is adding the Spring and Velocity configuration directories to "Custom Scan Folder and Files", so every time I change any of those files the webapp is redeployed. Notice that redeploying the webapp with this plugin is 20 times faster than running "mvn jetty:run" from scratch.

June 13, 2011

Google Analytics lags on Android. How to make it more responsive!

Filed under: analytics, android, maintainability, performance — Tags: , , , — mufumbo @ 5:55 am

Google Analytics can be your best friend for tracking your mobile users' behavior. Unfortunately the current Android implementation has performance limitations, and the most problematic one is that it uses SQLite to store your events.

Everyone who wants to write a responsive app knows that you can't do SQLite operations on the UI thread. Having to wrap the Google Analytics calls in a separate thread can be painful, so I wrote a very simple helper that runs them on a background thread. I have many tracking events inside button-click handlers, and they were taking about 200 ms to execute, which is too much on the UI thread. It's also too much inside onCreate, because it makes opening the new activity take a long time.

This helper is also somewhat wrong because it maintains a static reference tied to the context. I do this in order to get better numbers for visits and "time on site". You can simply remove the static reference if you don't like that.

Notice that my implementation calls "Thread.sleep(3000);" before each tracking operation.
This is so that repeated Google Analytics SQLite operations don't compete with my app's own inserts and reads.

This lag happens because SQLite writes to internal storage, which can be very slow depending on many factors, including concurrent SQLite operations or simply internal storage that is low on free space.

I hope it helps someone. Here’s the complete code:

package com.mufumbo.android.helper;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import android.content.Context;
import android.util.Log;

import com.google.android.apps.analytics.GoogleAnalyticsTracker;

public class GAHelper {
    String activity;
    static GoogleAnalyticsTracker tracker;
    static int instanceCount = 0;
    long start;

    // Limit the number of events due to outofmemory exceptions of analytics sdk
    final static int MAX_EVENTS_BEFORE_DISPATCH = 200;
    static int eventCount = 0;

    static final ExecutorService tpe = Executors.newSingleThreadExecutor();

    public GAHelper(final Context c, final String activity) {
        this.activity = activity;
        instanceCount++;
        if (tracker == null) {
            tpe.submit(new Runnable() {
                @Override
                public void run() {
                    tracker = GoogleAnalyticsTracker.getInstance();
                    tracker.start(Constants.GOOGLE_ANALYTICS_ID, Constants.GOOGLE_ANALYTICS_DELAY, c.getApplicationContext());
                }
            });
        }
    }

    public void onResume() {
        this.trackPageView("/"+this.activity);
    }

    public synchronized void destroy () {
        instanceCount--;
        if (instanceCount <= 0) {
            tpe.submit(new Runnable() {
                @Override
                public void run() {
                    Log.i(Constants.TAG, "destroying GA");
                    if (tracker != null)
                        tracker.stop();
                    instanceCount = 0;
                }
            });
        }
    }

    protected void tick() throws InterruptedException {
        // Back off so repeated Analytics SQLite writes don't compete with the app's own I/O.
        Thread.sleep(3000);
        this.start = System.currentTimeMillis();
    }

    public void log (final String l) {
        if (Dbg.IS_DEBUG) {
            Dbg.debug("['"+(System.currentTimeMillis()-start)+"']["+eventCount+"] Logging on '"+this.activity+"': "+l);
            if (l.contains(" ")) {
                Log.e(Constants.TAG, "DO NOT TRACK WITH SPACES: "+l, new Exception());
            }
        }

    }

    public void trackClick(final String button) {
        checkDispatch();
        tpe.submit(new Runnable() {
            @Override
            public void run() {
                try {
                    tick();
                    tracker.trackEvent(
                            "clicks",  // Category
                            activity+"-button",  // Action
                            button, // Label
                            1);
                    log("trackClick:"+button);
                } catch (final Exception e) {
                    Log.e(Constants.TAG, "Error tracking", e);
                }
            }
        });
    }

    public void trackEvent (final String category, final String action, final String label, final int count) {
        checkDispatch();
        tpe.submit(new Runnable() {
            @Override
            public void run() {
                try {
                    tick();
                    tracker.trackEvent(
                            category,  // Category
                            action,  // Action
                            activity+"-"+label, // Label
                            count); // Value
                    log("trackEvent:"+category + "#"+action+"#"+label+"#"+count);
                } catch (final Exception e) {
                    Log.e(Constants.TAG, "Error tracking", e);
                }
            }
        });
    }

    public void trackPopupView (final String popup) {
        checkDispatch();
        tpe.submit(new Runnable() {
            @Override
            public void run() {
                try {
                    tick();
                    final String page = "/"+activity+"/"+popup;
                    tracker.trackPageView(page);
                    log("trackPageView:"+page);
                } catch (final Exception e) {
                    Log.e(Constants.TAG, "Error tracking", e);
                }
            }
        });
    }

    public void trackPageView (final String page) {
        checkDispatch();
        tpe.submit(new Runnable() {
            @Override
            public void run() {
                try {
                    tick();
                    tracker.trackPageView(page);
                    log("trackPageView:"+page);
                } catch (final Exception e) {
                    Log.e(Constants.TAG, "Error tracking", e);
                }
            }
        });
    }

    public void checkDispatch() {
        eventCount++;
        if (eventCount >= MAX_EVENTS_BEFORE_DISPATCH)
            dispatch();
    }

    public void dispatch(){
        eventCount = 0;
        tpe.submit(new Runnable() {
            @Override
            public void run() {
                try {
                    tick();
                    tracker.dispatch();
                    log("dispatched");
                } catch (final Exception e) {
                    Log.e(Constants.TAG, "Error dispatching", e);
                }
            }
        });
    }
}
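
Here is a minimal usage sketch (the activity and event names are hypothetical); every call returns immediately and the actual tracking work happens on the helper's single-threaded executor:

import android.app.Activity;
import android.os.Bundle;

// Hypothetical activity showing how the helper keeps tracking off the UI thread.
public class ForumActivity extends Activity {
    private GAHelper ga;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ga = new GAHelper(this, "forum"); // cheap: the tracker starts on the executor
    }

    @Override
    protected void onResume() {
        super.onResume();
        ga.onResume(); // queues a page view for "/forum"
    }

    public void onReplyButtonClicked() {
        ga.trackClick("reply"); // queues a click event, returns immediately
    }

    @Override
    protected void onDestroy() {
        ga.destroy(); // stops the tracker when the last instance goes away
        super.onDestroy();
    }
}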

October 18, 2009

RSS parsing optimization for bandwidth and processing time with SAX and httpclient – polling scripts

Filed under: android, maintainability, performance, programming — Tags: , , , — mufumbo @ 3:55 pm

My server had constant incoming traffic of 1.7 MB/s for a service that downloads RSS feeds from the internet and processes them. Basically it needs to return the latest updates of multiple RSS feeds. It's a very basic polling system, but it was downloading too much data for just 15,000 active users. The growth wasn't looking very feasible.

I was using the ROME Java library to parse the XML. So far so good; the problem was that it downloads the whole feed and processes all of it. For my application I don't need to download the whole RSS, just the new entries that I haven't downloaded yet.

The solution was to use a custom SAX RSS parser, looping through the <item> tags and identifying the <pubDate>. This way I can parse item by item and, if the current item is not newer than what I already have, abort the HTTP connection and stop the download of the feed. I wish ROME had an option to do that, like "stop processing when 'publishedDate' is earlier than..".
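
The class itself isn't posted, but a rough sketch of the idea could look like this; the names and the cutoff check are my own assumptions. The point is simply that throwing a SAXException from the handler aborts parsing, after which the caller can abort the HttpClient request so the rest of the feed is never downloaded:

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

// Sketch of a SAX handler that stops as soon as it reaches an <item> whose
// <pubDate> is not newer than the newest entry we already stored.
public class IncrementalRssHandler extends DefaultHandler {
    // RFC 822 date format commonly used by RSS pubDate elements
    private final SimpleDateFormat rfc822 =
            new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss Z", Locale.US);

    private final Date newestKnown; // newest pubDate we already have
    private final StringBuilder text = new StringBuilder();
    private boolean inItem;

    public IncrementalRssHandler(Date newestKnown) {
        this.newestKnown = newestKnown;
    }

    @Override
    public void startElement(String uri, String localName, String qName, Attributes atts) {
        if ("item".equals(qName)) {
            inItem = true;
        }
        text.setLength(0);
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        text.append(ch, start, length);
    }

    @Override
    public void endElement(String uri, String localName, String qName) throws SAXException {
        if (inItem && "pubDate".equals(qName)) {
            try {
                Date published = rfc822.parse(text.toString().trim());
                if (newestKnown != null && !published.after(newestKnown)) {
                    // Old entry reached: abort parsing (and the HTTP download).
                    throw new SAXException("stop: reached already-seen entries");
                }
            } catch (java.text.ParseException ignored) {
                // Unparseable date: keep going rather than lose entries.
            }
        } else if ("item".equals(qName)) {
            inItem = false;
            // here the collected item would be turned into a SyndEntry
        }
    }
}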

The impact on bandwidth usage and processing time was impressive.

If anyone is interested I can post and explain the Java class. It's compatible with com.sun.syndication.feed.synd and uses the SyndEntry and SyndFeed interfaces.

May 14, 2008

database replication tools

Filed under: performance, programming — Tags: — mufumbo @ 9:32 pm

Today I was searching about replication architectures and found a very interesting presentation: Portable Scale-Out Benchmarks for MySQL, which refers to GORDA – "Open Replication of Databases". The following tools are the result of my search on that topic:

ESCADA is an open-source implementation of the GORDA replication server interface. It provides a full range of database replication options across multiple database management systems, in a single interoperable and evolutive package. Target application scenarios include:

  • Asynchronous master-slave replication, the no-frills industry standard approach.
  • Consistent multi-master/update everywhere replication for scalable and high performance shared-nothing clusters.
  • Zero data-loss inter-cluster replication over WAN for mission critical applications and disaster recovery.

SEQUOIA is a database cluster middleware that allows any Java application to transparently access a cluster of databases through JDBC. You do not have to modify client applications, application servers or database server software. You just have to ensure that all database accesses are performed through JDBC.

SEQUOIA lets you achieve scalability, high availability and failover for database tiers. It instantiates the concept of a Redundant Array of Inexpensive Databases (RAIDb). The database is distributed and replicated among several nodes, and SEQUOIA load-balances the queries between these nodes. The server can be accessed from a generic JDBC driver used by the clients. The client drivers forward the SQL requests to the SEQUOIA controller, which balances them over a cluster of replicated databases (reads are load-balanced and writes are broadcast).

Slony-I is a “master to multiple slaves” replication system supporting cascading (e.g. – a node can feed another node which feeds another node…) and failover.

The big picture for the development of Slony-I is that it is a master-slave replication system that includes all features and capabilities needed to replicate large databases to a reasonably limited number of slave systems.

