Friday 30 July 2021

AWS Lambda with Java giving error: java.lang.UnsupportedClassVersionError: com/test/functions/HelloWorld has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0

In this short post I am going to explain a silly mistake I made, because of which I got the error below:


com/test/functions/HelloWorld has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0: java.lang.UnsupportedClassVersionError
java.lang.UnsupportedClassVersionError: com/test/functions/HelloWorld has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)

 

What was I doing?

I had been running AWS Lambda functions on the Java 8 runtime for a long time. Recently I got the following email from Amazon:

[Action Required] AWS Lambda is migrating the Java 8 runtime to the Amazon Corretto 8 JVM

According to the email, no action was required from my side: after a given date, AWS promised to migrate the Lambda runtime from OpenJDK to Amazon Corretto automatically.

What went wrong?

I considered this change a good opportunity to upgrade to Java 11 on Amazon Corretto instead of staying on the Java 8 runtime. I downloaded the Amazon Corretto JDK, compiled my Lambda jar, and uploaded the new jar file as the function code.

My laziness!
 
After uploading it I didn't test my function and assumed things were good. Some time later I was expecting alerts and events from my code but received none, so I went to CloudWatch, checked the logs, and found the error above.

Crucial step

The error message was clear: I was trying to run Java 11 compiled code (class file version 55.0) on the Java 8 runtime, which only understands class files up to version 52.0!

Yes, you got it right: I forgot to change the runtime setting below to Java 11 in the AWS Lambda configuration. After setting it to Java 11, my function started working fine.
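
If you prefer the CLI over the console, the same fix can be applied with the AWS CLI (a minimal sketch; the function name is a placeholder):

 # Switch the function's managed runtime to Java 11
 aws lambda update-function-configuration --function-name HelloWorld --runtime java11

You can verify the active runtime afterwards with aws lambda get-function-configuration.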


Learning

Be careful and QA before releasing to production!

Don't be overconfident in code, technology, and your past experience. All of them have a tendency to go wrong exactly when you don't expect it!


Goodbye,
see you later!

Saturday 6 June 2020

AWS EC2 + Tomcat (JSR-356) + Secure Websockets + Cloudflare is giving : java.io.IOException: Unable to unwrap data, invalid status [CLOSED]

In this article I want to discuss an error I faced while trying a websocket POC for a requirement. The requirement was to create a secure (wss) websocket server using Tomcat's implementation of JSR-356, reachable via a domain name URL. I used the technologies below to achieve this.


- AWS EC2 instance to run Tomcat.
- AWS Elastic IP for assigning a static IP to the EC2 instance.
- Tomcat 9.x server.
- Cloudflare DNS service to map the EC2 instance's static IP to our domain name URL.

Below are the steps to achieve this:

1) Write the websocket server (API) using Tomcat's implementation of JSR-356.

You will find many articles on the web on how to do this. For reference, take a look at this:
Websocket example using tomcat
In this example we create an echo websocket server which responds to the client with the same message it received. You can write the client in JavaScript, or you can use the Tomcat websocket client library for Java.
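
If you just want something to deploy quickly, here is a minimal sketch of such an echo endpoint using the JSR-356 annotations (class and path names are my own choices):

 import javax.websocket.OnMessage;
 import javax.websocket.Session;
 import javax.websocket.server.ServerEndpoint;

 // Minimal JSR-356 echo endpoint; Tomcat scans and deploys it
 // automatically when the class is packaged inside a web application.
 @ServerEndpoint("/echo")
 public class EchoEndpoint {

   // Called for every text frame from the client; returning a String
   // sends that value straight back over the same connection.
   @OnMessage
   public String onMessage(String message, Session session) {
     return message;
   }
 }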

2) Tomcat configuration for deployment: server.xml and the Cloudflare origin CA certificate.

As the main requirement is a secure websocket, I configured an HTTPS connector in Tomcat's server.xml file.

<Connector port="443" protocol="org.apache.coyote.http11.Http11Nio2Protocol"
           connectionTimeout="-1" maxConnections="-1" acceptCount="5000"
           maxThreads="15000" scheme="https" secure="true" SSLEnabled="true"
           clientAuth="false" sslProtocol="TLSv1.2" keystoreFile="/home/keystore"
           keystorePass="pswd123">
  <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
</Connector>

As we want to access this site via HTTPS, we need a valid signed certificate.
Here, the keystore holds the Cloudflare origin CA certificate generated from Cloudflare. I will explain this in detail in a later section.


3) AWS EC2 Instance configuration.


I assigned an Elastic IP to my AWS EC2 instance, and for the inbound rules I added HTTPS (port 443) to the instance's security group so that it can accept incoming requests over HTTPS/WSS.


4) Cloudflare DNS mapping and origin CA certificate.

In the Cloudflare DNS configuration section for my domain, I added a record with the following settings:


Name is what I want to access my API with, e.g. myapi.mydomain.com.
The IP address is the Elastic IP I associated with the EC2 instance.
Proxy status is "proxied". Let's discuss this proxied option a bit more.

One good feature of Cloudflare is that it provides free SSL for your site. Generally you have to pay for an SSL certificate for your domain, but if you are using Cloudflare you don't have to, because Cloudflare adds SSL to your site using the technique below.



We are using Full (Strict) SSL mode for our site, where Cloudflare generates an Origin CA certificate for us which is installed in the Tomcat server.xml configuration. Cloudflare treats it as a trusted certificate. In our case traffic is proxied via Cloudflare, so the browser sees a Cloudflare-signed certificate.

In summary: traffic between the browser and Cloudflare is secured with a certificate valid for the whole world, while traffic between Cloudflare and the origin server (Tomcat in our case) is signed with the Cloudflare Origin CA certificate, which only Cloudflare considers valid, not the outside world. These are the steps to generate and configure the Cloudflare origin CA certificate.


5) Running the application

After all this configuration and setup, I started the Tomcat server and tried to access my websocket API using a websocket client with this URL:

wss://myapi.mydomain.com?param1=hello
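
For reference, here is a minimal sketch of such a client using the JSR-356 client API (assuming the Tomcat websocket client jars are on the classpath; the class name is my own):

 import java.net.URI;
 import javax.websocket.ClientEndpoint;
 import javax.websocket.ContainerProvider;
 import javax.websocket.OnMessage;
 import javax.websocket.Session;
 import javax.websocket.WebSocketContainer;

 // Connects to the wss endpoint above, sends one message and prints the echo.
 @ClientEndpoint
 public class EchoClient {

   @OnMessage
   public void onMessage(String message) {
     System.out.println("Received: " + message);
   }

   public static void main(String[] args) throws Exception {
     WebSocketContainer container = ContainerProvider.getWebSocketContainer();
     URI uri = URI.create("wss://myapi.mydomain.com?param1=hello");
     try (Session session = container.connectToServer(EchoClient.class, uri)) {
       session.getBasicRemote().sendText("hello");
       Thread.sleep(5000); // give the echo a moment to arrive before closing
     }
   }
 }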

Once I connected to the websocket from the client, after 2-3 minutes the server would close the connection from its side, and I saw the following log in Tomcat:

java.io.IOException: Unable to unwrap data, invalid status [CLOSED]
java.io.IOException: java.io.IOException: Unable to unwrap data, invalid status [CLOSED]
  at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendMessageBlock(WsRemoteEndpointImplBase.java:315)
  at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendMessageBlock(WsRemoteEndpointImplBase.java:258)
  at org.apache.tomcat.websocket.WsSession.sendCloseMessage(WsSession.java:612)
  at org.apache.tomcat.websocket.WsSession.doClose(WsSession.java:497)
  at org.apache.tomcat.websocket.WsSession.doClose(WsSession.java:459)
  at org.apache.tomcat.websocket.server.WsHttpUpgradeHandler.upgradeDispatch(WsHttpUpgradeHandler.java:176)
  at org.apache.coyote.http11.upgrade.UpgradeProcessorInternal.dispatch(UpgradeProcessorInternal.java:54)
  at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:59)
  at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868)
  at org.apache.tomcat.util.net.Nio2Endpoint$SocketProcessor.doRun(Nio2Endpoint.java:1675)
  at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
  at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: java.io.IOException: Unable to unwrap data, invalid status [CLOSED]
  at org.apache.tomcat.util.net.SecureNio2Channel$1.completed(SecureNio2Channel.java:959)
  at org.apache.tomcat.util.net.SecureNio2Channel$1.completed(SecureNio2Channel.java:898)
  at java.base/sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:127)
  at java.base/sun.nio.ch.Invoker$2.run(Invoker.java:219)
  at java.base/sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
  ... 4 more

I searched a lot for the possible cause of this error but couldn't find anything that solved my problem. When I switched from HTTPS to HTTP it started working; connections were no longer dropped in that case. So after a lot of debugging and research, I began to suspect Cloudflare's "proxied" behavior for HTTPS, which might not play well with the websocket protocol, a full-duplex TCP connection between two systems; here Cloudflare acts as a proxy between the two systems when HTTPS is used. To confirm my suspicion I made the two changes below, and the error was resolved.

6) Resolution steps:

    1) Remove the Cloudflare proxied setting and make it DNS only.

I changed the proxy status in the DNS config from "proxied" to "DNS only". With this option Cloudflare stops acting as a proxy for our API URL mapped to the EC2 instance and only adds a DNS entry for routing.

The next question that may come to mind is: how will we have HTTPS support for our API if we don't use Cloudflare's built-in SSL support? The answer is the second step.

  2) Buy a valid SSL certificate for your domain and configure it in Tomcat.

I removed the Cloudflare origin CA certificate and installed the new certificate I bought from GoDaddy. There are also free options, such as FreeSSL and Let's Encrypt, where you can get a certificate for your site.

After applying the above two steps, my secure websocket server with a domain name URL started working fine, and the connection stayed up and running without a problem. Each websocket connection remained established, and no server-side drops or disconnects were observed after that.

Now the question is: does Cloudflare not support proxying for websockets? No, that is not the case. It is supported, as mentioned here: Link1, Link2. There it is clearly stated that a secure websocket connection in proxied mode should work without a problem. But I was not sure what caused the issue in my case. It might be that Tomcat's websocket library has an issue with this proxied mode, but that's a guess.

Update - Actual cause and resolution:
I found the actual reason in this Stack Overflow post:

https://stackoverflow.com/questions/39668410/whats-disconnecting-my-websocket-connection-cloudflare-apaches-mod-proxy

It is mentioned there that if the connection remains idle for 100 seconds, Cloudflare closes it. That's exactly what was happening in my case. Below are the possible solutions:

1) Buy their enterprise plan and change the timeout setting as described.
2) Add heartbeat/ping logic to your websocket API to keep the connection alive (a sketch follows below).
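
For the second option, here is a minimal sketch of a keep-alive helper (the class name and the 30 second interval are my own choices; anything comfortably under the 100 second timeout should work):

 import java.nio.ByteBuffer;
 import java.util.concurrent.Executors;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;
 import javax.websocket.Session;

 // Periodically pings a websocket session so the connection never sits
 // idle long enough for an intermediate proxy to close it.
 public class KeepAlive {

   private final ScheduledExecutorService scheduler =
       Executors.newSingleThreadScheduledExecutor();

   public void start(Session session) {
     scheduler.scheduleAtFixedRate(() -> {
       try {
         if (session.isOpen()) {
           // The payload is arbitrary; the peer replies with a pong frame.
           session.getBasicRemote().sendPing(ByteBuffer.wrap("ka".getBytes()));
         }
       } catch (Exception e) {
         e.printStackTrace();
       }
     }, 30, 30, TimeUnit.SECONDS);
   }

   public void stop() {
     scheduler.shutdown();
   }
 }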

That's it for now...
I would love to hear your experience on this topic.

Please post your comments and doubts!!!

Monday 10 July 2017

Dealing with exceptions that occur during the DataNucleus enhancement process in a Google App Engine + Java + JDO2 + Maven style project.

This article is very specific to legacy applications on the Google App Engine + Java + JDO2 + Maven stack. I am going to talk about a few exceptions you may face while running/deploying an application of this type. You may skip it if this scenario does not apply to you, but if you arrived at this page searching for exceptions related to DataNucleus enhancement for an App Engine project, then please read ahead.

Why did I start facing the issue?

Previously I used the IDE (Eclipse/IntelliJ) feature to create the Google App Engine project. In that case I was able to run and deploy the code perfectly fine, as the class enhancement process was taken care of by the IDE plugin for Google App Engine projects. Due to some deployment and build requirements, I then wanted to convert the project to a Maven project so that I could build/run/deploy it without depending on the IDE. I converted the traditional App Engine project to a traditional Maven project and tried running/deploying it. When the project was deployed and I tried to access it, I got the following exception for one of my entity/data classes:

1) Persistent class "Class com.ali.data.jdo.Student does not seem to have been enhanced.  You may want to rerun the enhancer and check for errors in the output." has no table in the database, but the operation requires it. Please check the specification of the MetaData for this class.

So this one is clear: you need to perform enhancement on your data/entity classes. The link below has more details on why bytecode enhancement is needed after compilation and the different ways to achieve it.
http://www.datanucleus.org/products/datanucleus/jdo/enhancer.html
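
To make the error concrete, here is a minimal sketch of the kind of JDO entity class the enhancer post-processes (the field names are illustrative, not from my original project):

 import javax.jdo.annotations.IdGeneratorStrategy;
 import javax.jdo.annotations.PersistenceCapable;
 import javax.jdo.annotations.Persistent;
 import javax.jdo.annotations.PrimaryKey;

 // Without bytecode enhancement, loading this class at runtime triggers
 // the "does not seem to have been enhanced" error shown above.
 @PersistenceCapable
 public class Student {

   @PrimaryKey
   @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
   private Long id;

   @Persistent
   private String name;

   public Long getId() { return id; }
   public String getName() { return name; }
   public void setName(String name) { this.name = name; }
 }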

Let's say you have selected one of the ways from the above URL for enhancement and you are still facing one of the exceptions below in the process.

2) "org.datanucleus.enhancer" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL 
"file:/C:/Users/<username>/.gradle/caches/modules-2/files-2.1/org.datanucleus/datanucleus-enhancer/3.1.1/b141c67d55cc19f14639f091b84e692e2198dc50/datanucleus-enhancer-3.1.1.jar" 
is already registered, and you are trying to register an identical plugin located at URL 
"file:/C:/Users/<username>/.gradle/appengine-sdk/appengine-java-sdk-1.9.3/lib/opt/tools/datanucleus

This error occurs when multiple copies of the same DataNucleus jar (here the enhancer jar) are present on the classpath. You need to remove one of them, depending on how you are compiling the classes.

3) DataNucleus enhancer failing with a "command line too long" error when starting the enhancer in forked mode.

I got this error while using the Maven plugin approach to enhance the classes.
http://www.datanucleus.org/servlet/jira/browse/NUCMAVEN-47
http://www.datanucleus.org/servlet/jira/browse/NUCMAVEN-44
Here you can see they claim it has been fixed in a newer version of the plugin, so I tried that and at least got rid of this error. After that I started getting the exception below.


4) org.datanucleus.exceptions.NucleusException: Error reading the Meta-Data input "Content is not allowed in prolog.
I was not able to find a solution for this exception, but at least I understood that it occurs because, while enhancing, the tool tries to read files of other formats (e.g. .xml, .md, .properties) from the classpath. So provide a path/package list to the plugin, or instruct it some other way which classes to enhance.

5) You have selected to use ClassEnhancer "ASM" yet the JAR for that enhancer does not seem to be in the CLASSPATH!
This one is very nasty, and believe me, even if you try all the possible things to fix this stubborn bug, it will not leave you. The DataNucleus enhancer needs the ASM jar on the classpath. Add the right version (maybe 5) and see if you can get rid of it. Also check for the presence of an older ASM jar on the classpath and remove it. I googled a lot for this bug but nothing worked out for me. Comment below in case something works for you.


The final hybrid solution I derived for my requirement:

I tried the Maven DataNucleus plugin approach, but it was not working for me.

Finally I decided to go with "manual invocation at the command line".

But as I said earlier, I wanted the build/run/deploy process automated. I achieved that with the following two steps:

1) Created a script containing the list of commands to perform manual enhancement on the selected data classes.

As I am on the Windows platform, I decided to put my manual enhancement java commands in a .bat file, which I integrate into the Maven build flow in step 2.

My enhance.bat script looks like this:
 java -cp target\hello-world-1.0\WEB-INF\classes;scripts\required-lib\* org.datanucleus.enhancer.DataNucleusEnhancer target\hello-world-1.0\WEB-INF\classes\com\ali\data\jdo\*.class  
 java -cp target\hello-world-1.0\WEB-INF\classes;scripts\required-lib\* org.datanucleus.enhancer.DataNucleusEnhancer target\hello-world-1.0\WEB-INF\classes\com\ali\common\jdo\*.class  

Here we run the DataNucleusEnhancer class on selected compiled packages.


  • target\hello-world-1.0\WEB-INF\classes : the compiled classes generated by Maven, which must be on the classpath to run this command.
  • scripts\required-lib\* : a folder I created in the root directory of the project; put all the dependency jars used by the project in this folder, and also add the datanucleus-enhancer-1.1.4 jar, which contains the main class DataNucleusEnhancer.
  • org.datanucleus.enhancer.DataNucleusEnhancer : the main class which is the entry point for performing the enhancement.
  • target\hello-world-1.0\WEB-INF\classes\com\ali\data\jdo\*.class : the packages containing the data/entity classes you want to enhance.


2) Added a plugin to the Maven pom.xml file which calls that script in the "process-classes" phase, which runs after compilation, once the class files are generated.

Plugin:

This is how I call the .bat script created in the first step from Maven:
 <plugin>  
         <artifactId>exec-maven-plugin</artifactId>  
         <groupId>org.codehaus.mojo</groupId>  
         <executions>  
           <execution>  
             <id>enhancer</id>  
             <phase>process-classes</phase>  
             <goals>  
               <goal>exec</goal>  
             </goals>  
             <configuration>  
               <executable>${project.basedir}/scripts/enhance.bat</executable>  
             </configuration>  
           </execution>  
         </executions>  
       </plugin>  

This processes the compiled classes, and the WAR file is then prepared from the enhanced classes.

For the JDO2 + Google App Engine combination you must use only the following jar versions:
asm-5.0.4.jar
datanucleus-core-1.1.5.jar
datanucleus-enhancer-1.1.4.jar
datanucleus-jpa-1.1.5.jar
datanucleus-appengine-1.0.10.jar
jdo2-api-2.3-eb.jar


Let me know if you have any queries or questions. Comment below in case you are stuck in a similar scenario and want a detailed solution for the approach I used.

Saturday 14 January 2017

Java interview question: pass by value or pass by reference?

Looking at the title, you may think this is just another article about Java object reference handling. But in this article I want to share my own experience, which was very surprising even to me: I have asked the question below to many experienced people in interviews, and 7 out of 10 gave the wrong answer to this very basic but important core Java concept.


Below is the code example I used to ask people:
 class A {  
   private int number;  
   int getNumber() {  
     return number;  
   }  
   void setNumber(int number) {  
     this.number = number;  
   }  
 }  
   
 public class App {  
   public static void main(String... arg) {  
     A obj = new A();  
     obj.setNumber(10);  
     function1(obj);  
     System.out.println(obj.getNumber());  
   }  
   
   private static void function1(A a) {  
     a.setNumber(12);  
     a = null;  
   }  
 }  
   


Try to guess the answer before scrolling down........






















7 out of 10 people told me that it would throw a NullPointerException, which is the incorrect answer.

Actual output:

 12

Let me explain what is happening here:

First, in the main method, we create an object of type A and set the value 10 on it.

After that we call function1, passing "obj" as the method argument, which is received by the reference "a". At this point both "obj" and "a" point to the same object in memory.

After that we set the value 12 on number via the reference "a".

Then we assign null to the reference "a". Only the reference becomes null; the object itself is untouched.

Here is the tricky part: after this, control returns to the main method, and we use "obj" to print the value of number. "obj" is not null and contains the modified value 12.

Key points to remember:
1) Java always passes method arguments by value. Yes, you read that right: it is always, always, always pass by value.
2) Java handles objects through the references we create.
3) Two or more references can point to the same object.


In this case the object reference value is copied and a new reference is created for function1. But it is still pass by value, never pass by reference: the object reference is copied, not the actual object, as a reference is the handle used to access an object in Java.
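
If you want one more way to convince yourself, here is a small sketch reusing class A from above: if Java were pass by reference, a swap method would work, but it doesn't.

 // Trying to swap two references has no effect on the caller, because
 // only copies of the reference values are swapped inside the method.
 public class SwapDemo {
   public static void main(String... arg) {
     A first = new A();
     first.setNumber(1);
     A second = new A();
     second.setNumber(2);
     swap(first, second);
     // Prints "1 2": the originals are untouched.
     System.out.println(first.getNumber() + " " + second.getNumber());
   }

   private static void swap(A x, A y) {
     A tmp = x;
     x = y;
     y = tmp;
   }
 }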

Why do people fall for the pass-by-reference answer?

Here you can see that the value changed to 12 in function1 is visible in the main method. This effect is the biggest reason why people think it is pass by reference. The value changed because both references point to the same object in memory.


Now let's modify function1 a little and see what happens:
 private static void function1(A a) {  
     a.setNumber(12);  
     a = null;  
     System.out.println(a.getNumber());  
   }  

The above program will throw a NullPointerException, because the reference "a" is now null and we are trying to call a method through a null reference.



Next I am showing you a similar example, but this time with a primitive type argument:
 public class App {  
   public static void main(String... arg) {  
     int a = 10;  
     function1(a);  
     System.out.println(a);  
   }  
   
   private static void function1(int b) {  
     b = 12;  
   }  
 }  

Here the outcome will be:

 10

The reason here is the same: as method arguments are passed by value, the value of "a" is copied into "b". Since these are primitives, the two variables occupy different memory locations, so any change to "b" has no effect on "a".

One last exercise for you, related to a wrapper class:
 public class App {  
   public static void main(String... arg) {  
     Integer a = 10;  
     function1(a);  
     System.out.println(a);  
   }  
   
   private static void function1(Integer b) {  
     b = 12;  
   
   }  
 }  


That's it for now...

Please post your comments and doubts!!!

Saturday 17 December 2016

Part 3: Build your own monitoring system using Riemann, Graphite, Collectd.

In the previous article, part 2, we discussed Graphite integration with Riemann. In this article I will give an overview of Collectd and show some more advanced stream processing options in Riemann.

Collectd Overview

Collectd is a daemon that gathers metrics from various sources, e.g. the operating system, applications, logfiles and external devices, and stores this information or makes it available over the network.

Collectd itself is a big topic and there are a lot of things you can achieve with it, but here I will discuss only the area of our interest: we will tell Collectd to send the metrics it collects to the Graphite server. Pretty amazing, right?

Collectd installation and Plugin concept

Download Collectd from this link according to the flavour of your Linux distribution.

In my case the steps are:

1) sudo apt-get install collectd
2) sudo service collectd start

Done. Collectd is installed and running.
Now let's take a look at a very important config file for Collectd.
In my case it is located at /etc/collectd/collectd.conf.
If you open this file you can see a list of plugins and the configuration for each plugin.

In Collectd we have the concept of plugins. We need different types of plugins to fetch different types of metrics and to perform monitoring activities. One example is the cpu plugin, which fetches CPU-related info about the system Collectd is running on; we will see the outcome of this plugin on Graphite very soon.
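
Enabling it is a one-liner in collectd.conf (a sketch; on most distributions the cpu plugin is already enabled by default):

 LoadPlugin cpu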

Now you must have figured out that if we want Collectd to forward these metrics to Graphite, there must be a plugin for that too. Oh yeah, your guess is right: we do have a plugin for it, write_graphite.
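
A minimal sketch of what that section of collectd.conf can look like (the node name and the prefix are my own choices):

 LoadPlugin write_graphite

 <Plugin write_graphite>
   <Node "graphing">
     Host "localhost"
     Port "2003"
     Protocol "tcp"
     Prefix "collectd."
   </Node>
 </Plugin>
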
Two things are happening here: first we load the write_graphite plugin, and second we provide the config for that plugin. The host of the Graphite server is localhost, as it is installed in the same VM. All Collectd-related graphs will be rendered in Graphite under the prefix we set here. After adding the Graphite-related plugin, save the file and restart the Collectd service.


Below is the outcome on the Graphite dashboard for Collectd: the metrics show up in the tree under the prefix configured above, ready to graph.

Here I will stop my discussion of Collectd and move to the last and most important section: Riemann stream processing.

Riemann stream processing examples

I will show you a few stream processing examples in Riemann.

1) Send email based on service status

Below is the configuration for sending mail via Gmail; you can do something similar with your own SMTP provider.
Add this configuration to your riemann.config file and restart Riemann.
 (def email (mailer {:host "smtp.gmail.com"  
             :port 465  
             :ssl true  
             :tls true  
             :user "myaccout@gmail.com"  
             :pass "mypassword"  
             :from "myaccout@gmail.com"}))  
 (streams  
   (where (state "critical")  
    (email "xyz@gmail.com")))  

Two things are happening here:
1) We declare the email-related configuration. This can vary depending on the SMTP provider.
2) We define a stream rule such that if the state of any service is "critical", a mail is sent to the given email id.

Lets send "critical" state from java code for our "fridge" service created in part1.
 RiemannClient c = RiemannClient.tcp("localhost", 5555);
 c.connect();
 c.event().
     service("fridge").
     state("critical").
     metric(10).
     tags("appliance", "cold").
     send().
     deref(5000, java.util.concurrent.TimeUnit.MILLISECONDS);

The mail duly arrived at xyz@gmail.com, formatted with Riemann's default mail template.
You can change the format and details of the email; I am leaving that part as an exercise for you.


2) Email the exception
Add the below stream processing rule to your riemann.config file.
 (streams  
   (where (service "exception-alert")  
    (email "xyz@gmail.com")))  

Let's send an exception from Java code:
 RiemannClient c = RiemannClient.tcp("localhost", 5555);
 c.connect();
 try {
   // some business logic
   throw new NullPointerException("NullPointer exception in your system..Somebody will be in trouble!!! ");
 } catch (Exception e) {
   c.event().
       service("exception-alert").
       state(e.getLocalizedMessage()). // you can send the full stack trace too
       tags("error", "exception", "failure").
       send().
       deref(5000, java.util.concurrent.TimeUnit.MILLISECONDS);
 }

Again, the mail arrived at xyz@gmail.com, this time containing the exception details.

What else can you do?
1) Send an email alert if some VM/service is down.
2) Filter and process streams depending on hostname, service name, metric value, service state, tag values, etc., and perform actions based on that.
3) Set threshold values for the metrics received and perform actions when a threshold is crossed, e.g. VM CPU is very high (above 95%), or some business-specific constraint value is violated (see the sketch below).
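
As an example of the third point, a minimal sketch of a threshold rule reusing the email output defined earlier (the service name is a placeholder; it depends on how your metrics are named):

 (streams
   (where (and (service "cpu-usage") (> metric 95))
     (email "xyz@gmail.com")))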

These are just a few examples. Check out the links I have posted at the end of the article for more Riemann capabilities.

Below is the updated architecture: the Java application sends events to Riemann, Riemann forwards the metric data to Graphite, and the Collectd daemon sends all generic system-level metrics directly to Graphite.


In this three-part series I have just scratched the surface of this area. There are a thousand different things and possibilities you can achieve with this monitoring framework.

Below are useful links for the different types of config, plugins and integrations you can use with Riemann, Graphite and Collectd. I have explained only a small fraction of what is possible; the rest you can add as per the needs and use cases of your system.

Riemann:
http://riemann.io/clients.html
http://riemann.io/howto.html

Graphite:
https://graphiteapp.org/#integrations
http://graphite.readthedocs.io/en/latest/tools.html
http://grafana.org/

Collectd:
https://collectd.org/
https://collectd.org/wiki/index.php/Plugin

This is the last article of this series.
Hope you have enjoyed it!!!

Please post your comments and doubts!!!

Part 2: Build your own monitoring system using Riemann, Graphite, Collectd.

In the previous article, part 1, we discussed Riemann installation and basic event sending from a Java application. In this article we will see Riemann's integration with Graphite. First, let's discuss why we need Graphite at all.

1) In Riemann, events are stored only until their TTL (time to live) expires. We need something to store events for the longer term, so that in the future we can look at the statistics and understand system behaviour at the time of failure or error scenarios.
2) Riemann is a stateless system and the riemann-dash dashboard is also stateless. There are ways to store the definitions of created dashboards, but they will still show live data only.

Graphite, on the other hand, has two great capabilities:
1) Storage
2) An easy and powerful dashboard UI.

Let's start with the Graphite installation.

Graphite Installation

At this link you can see 4 different ways of installing Graphite and the other components needed for it.
I am using the 4th way: installing from Synthesize.

Synthesize provides a script which automates the installation of all the necessary dependencies and components needed for Graphite. Note that the Synthesize installation method is available for Ubuntu 14.04 only; if you are using some other version or flavour of Linux, you should go with another method.

Installation steps in my case (assuming the Synthesize repository has already been cloned locally):
$ cd synthesize
$ sudo ./install
That's it! Done!

Open the Graphite dashboard in a browser to verify the installation.

Let's integrate Graphite with Riemann.

Riemann Graphite Integration

Open the riemann.config file. In my case it is located at /etc/riemann/riemann.config.
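
A minimal sketch of the relevant lines to add (Riemann ships a graphite output for exactly this; the host and service names match our setup from part 1):

 ; Forward selected services to the Graphite instance running locally.
 (def graph (graphite {:host "localhost"}))

 (streams
   (where (or (service "fridge") (service "jvm.nonheap.memory"))
     graph))
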
The newly added config for Graphite does two things. First, it provides the location of the Graphite VM; in my case it is the same machine, so I use localhost. Second, it adds stream processing rules: you can specify which services you want to render in Graphite. Here I declare both the "fridge" and "jvm.nonheap.memory" services, created in part 1, to be rendered on the Graphite dashboard.
On the dashboard you can see that Graphite stores the metrics, so you can configure a time/date range. One more thing to observe: Graphite creates a new folder level for each "." in a service name, so the jvm.nonheap.memory service appears as a jvm, nonheap, memory folder structure. You can organise and name your metrics accordingly.


What you can do next

Grafana is the next thing you can add to your stack. In simple words, Grafana is a dashboard that operates on the data stored in Graphite. Graphite will still be there, but Grafana can use Graphite's data and provide much better and more advanced dashboard options.
Explore more on this here: http://docs.grafana.org/

Below is the updated architecture:

Riemann processes the events and pushes the metric data associated with them to Graphite for storage. Graphite stores the data and displays it on its dashboard. Grafana can then leverage the data present in Graphite for further rendering.

That's it for now...

Part 3 is the next article in this series.

Please post your comments and doubts!!!