Deploy a Spring Boot Java app to Kubernetes on GCP (Google Kubernetes Engine)

Kubernetes is an open source project, which can run in many different environments, from laptops to high-availability multi-node clusters, from public clouds to on-premise deployments, and from virtual machine (VM) instances to bare metal.

You’ll use GKE, a fully managed Kubernetes service on Google Cloud Platform, to allow you to focus more on experiencing Kubernetes, rather than setting up the underlying infrastructure.

In this post, I will show you the steps to deploy a simple Spring Boot application to Kubernetes on Google Kubernetes Engine (GKE).

Before starting the actual deployment, make sure you have the following prerequisites –

GCP account – You need at least a free-tier GCP account, which you can create (credit card details required) at https://cloud.google.com/. The free trial is valid for three months.

GitHub project – a Spring Boot project on GitHub (https://github.com/AnupBhagwat7/gcp-examples)

Below are the steps to deploy the application to GKE –

  1. Package a simple Java app as a Docker container.
  2. Create your Kubernetes cluster on GKE.
  3. Deploy your Java app to Kubernetes on GKE.
  4. Scale up your service and roll out an upgrade.
  5. Access Dashboard, a web-based Kubernetes user interface.

1. GCP Setup

Go to the Google Cloud console (https://console.cloud.google.com/) and open Cloud Shell –

Run the following command in Cloud Shell to confirm that you are authenticated:

gcloud auth list

This command gives output similar to the following –

 Credentialed Accounts
ACTIVE  ACCOUNT
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`

Now run the command below to check which project is currently set for your GCP account –

gcloud config list project

If the project is not set, you can set it using the command below –

gcloud config set project <PROJECT_ID>
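The later Jib and docker commands reference the $GOOGLE_CLOUD_PROJECT environment variable. Cloud Shell usually sets it for you; a minimal sketch to derive it from your gcloud configuration if it is empty –

export GOOGLE_CLOUD_PROJECT=$(gcloud config get-value project)
echo $GOOGLE_CLOUD_PROJECT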

2. Get and run the Java application

Get the application source code from github –

git clone https://github.com/AnupBhagwat7/gcp-examples.git
cd gcp-demo-springboot-app

Now run the project in Cloud Shell –

mvn -DskipTests spring-boot:run

Once the application has started, click on Web Preview as shown below –

You will see your application open in the browser as below –

3. Package the Java app as a Docker container

Next, you need to prepare your app to run on Kubernetes. The first step is to define the container and its contents.

Take the following steps to package your application as a Docker image –

Step 1: Create the JAR deployable for the app

mvn -DskipTests package

Step 2: Enable Container Registry to store the container image that you’ll create

gcloud services enable containerregistry.googleapis.com

Step 3: Use Jib maven plugin to create the container image and push it to the Container Registry

mvn -DskipTests com.google.cloud.tools:jib-maven-plugin:build   -Dimage=gcr.io/$GOOGLE_CLOUD_PROJECT/gcp-demo-springboot-app.jar

Step 4: If all goes well, you should be able to see the container image listed in the console by navigating to CI/CD > Container Registry > Images. You now have a project-wide Docker image available, which Kubernetes can access and orchestrate as you'll see in the next steps.

Step 5: You can locally test the image with the following command, which will run a Docker container as a daemon on port 8080 from your newly created container image:

docker run -ti --rm -p 8080:8080 \
  gcr.io/$GOOGLE_CLOUD_PROJECT/gcp-demo-springboot-app.jar

Step 6: Use the Web Preview feature of Cloud Shell to check whether the Docker container started successfully. You will see the response in the browser –

4. Deploy your application to Google Kubernetes Engine

Step 1: Create a cluster

You’re ready to create your GKE cluster. A cluster consists of a Kubernetes API server managed by Google and a set of worker nodes. The worker nodes are Compute Engine VMs.

First, make sure that the related API features are enabled

gcloud services enable compute.googleapis.com container.googleapis.com

Create a cluster named springboot-java-cluster with two n1-standard-1 nodes using the command below –

gcloud container clusters create springboot-java-cluster \
  --num-nodes 2 \
  --machine-type n1-standard-1 \
  --zone us-central1-c

Creating the cluster takes a few minutes. You can see all your clusters by navigating to Kubernetes Engine > Clusters.

It’s now time to deploy your containerized app to the Kubernetes cluster. You’ll use the kubectl command line (already set up in your Cloud Shell environment).
The rest of the tutorial requires Kubernetes client and server version 1.2 or higher; kubectl version shows the versions currently in use.

Step 2: Deploy app to Kubernetes cluster

A Kubernetes deployment can create, manage, and scale multiple instances of your app using the container image that you created. Deploy one instance of your app to Kubernetes using the kubectl create deployment command.

kubectl create deployment springboot-java \
--image=gcr.io/$GOOGLE_CLOUD_PROJECT/gcp-demo-springboot-app.jar

To view the deployment that you created, simply run the following command:

kubectl get deployments

To view the app instances created by the deployment, run the following command:

kubectl get pods

At this point, you should have your container running under the control of Kubernetes, but you still have to make it accessible to the outside world.

Step 3: Allow external traffic

By default, the Pod is only accessible by its internal IP within the cluster. In order to make the springboot-java container accessible from outside the Kubernetes virtual network, you have to expose the Pod as a Kubernetes service.

In Cloud Shell, you can expose the deployment to the public internet either with the kubectl expose command and its --type=LoadBalancer flag, or by creating a LoadBalancer service directly as shown below. The LoadBalancer type is what triggers the creation of an externally accessible IP.

kubectl create service loadbalancer springboot-java --tcp=8080:8080

O/P: service/springboot-java created

The LoadBalancer service type specifies that you'll be using the load balancer provided by the underlying infrastructure. Note that the service targets the deployment's Pods by label rather than a single Pod, so it load-balances traffic across all Pods managed by the deployment (in this case, only one Pod, but you'll add more replicas later).
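If you prefer the kubectl expose form described above, the equivalent command is shown below; it is a sketch that creates the same kind of LoadBalancer service for the deployment –

kubectl expose deployment springboot-java --type=LoadBalancer --port=8080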

The Kubernetes Master creates the load balancer and related Compute Engine forwarding rules, target pools, and firewall rules to make the service fully accessible from outside of Google Cloud.

To find the publicly accessible IP address of the service, simply request kubectl to list all the cluster services.

kubectl get services

O/p:
NAME              TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
kubernetes        ClusterIP      10.3.240.1    <none>          443/TCP          44m
springboot-java   LoadBalancer   10.3.250.58   34.123.60.207   8080:32034/TCP   85s

Notice that there are two IP addresses listed for your service, both serving port 8080. One is the internal IP address that is only visible inside your Virtual Private Cloud. The other is the external load-balanced IP address; in the example output above it is 34.123.60.207. You should now be able to reach the service by pointing your browser to http://34.123.60.207:8080
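You can also verify from Cloud Shell with curl (replace the IP with the EXTERNAL-IP from your own kubectl get services output) –

curl http://34.123.60.207:8080/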

Step 4: Scale your application

One of the powerful features offered by Kubernetes is how easy it is to scale your app. Suppose that you suddenly need more capacity for your app. You can simply tell the deployment to manage a new number of replicas for your app instances.

kubectl scale deployment springboot-java --replicas=3

O/P: deployment.apps/springboot-java scaled

kubectl get deployment

NAME              READY   UP-TO-DATE   AVAILABLE   AGE
springboot-java   3/3     3            3           23m
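To confirm that the new replicas are running, you can list the Pods for this deployment; kubectl create deployment labels them with app=springboot-java –

kubectl get pods -l app=springboot-java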

Step 5: Roll out an upgrade to your service

At some point, the app that you deployed to production will require bug fixes or additional features. Kubernetes can help you deploy a new version to production without impacting your users.

You can launch the editor in Cloud Shell and update the controller to return a new value, as shown below –

Use Jib maven plugin to build and push a new version of the container image.

mvn -DskipTests package \
  com.google.cloud.tools:jib-maven-plugin:build \
  -Dimage=gcr.io/$GOOGLE_CLOUD_PROJECT/springboot-java:v2

In order to change the image for your running container, you need to update the existing springboot-java deployment so that it points to the new image gcr.io/PROJECT_ID/springboot-java:v2.

You can use the kubectl set image command to ask Kubernetes to deploy the new version of your app across the entire cluster one instance at a time with rolling updates.

kubectl set image deployment/springboot-java \
springboot-java=gcr.io/$GOOGLE_CLOUD_PROJECT/springboot-java:v2
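You can watch the rolling update progress with the rollout status command –

kubectl rollout status deployment/springboot-java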

Step 6: Roll back to the previous version

Perhaps the new version contained an error and you need to quickly roll it back. With Kubernetes, you can roll it back to the previous state easily. Roll back the app by running the following command:

kubectl rollout undo deployment/springboot-java
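Before or after rolling back, you can inspect the available revisions with the rollout history command –

kubectl rollout history deployment/springboot-java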

This marks the end of this tutorial. Thanks for following.

Github link –

https://github.com/AnupBhagwat7/gcp-examples/tree/main/gcp-demo-springboot-app

Deploy Your React App On Google Cloud Platform’s App Engine

In this post, I will show you the steps to deploy a simple React application to the GCP App Engine service.

Before starting the actual deployment, make sure you have the following prerequisites –

GCP account – You need at least a free-tier GCP account, which you can create (credit card details required) at https://cloud.google.com/. The free trial is valid for three months.

GitHub project – a simple hello world React project on GitHub (https://github.com/AnupBhagwat7/gcp-examples)

Below are the steps to deploy the application to App Engine –

1. Create your app in the GCP

Go to the Google Cloud console (https://console.cloud.google.com/) and click on App Engine –

You need to create a new project before you start using the cloud, as below –

Create new project

Once it is created, you will see the App Engine screen as below –

App engine dashboard

Now click on Create Application and select a region from the screen below –

region

Select the language as Node.js and the environment as Standard –

2. Clone our app from our GitHub repository

Now clone the application from GitHub using Cloud Shell, available at the top right corner –

Open cloud shell
git clone https://github.com/AnupBhagwat7/gcp-examples.git

3. Build our app for deployment

Now navigate to your project –

cd

cd gcp-examples/gcp-demo-react-app/

Now you need to install the dependencies required and then build the project –

npm install
npm run build

These commands create the build artifacts in the build folder.

4. Add an app.yaml to the root directory and deploy!

Delete all files and folders except app.yaml and the build folder. By the end of this step, the only things left should be the "build" folder and "app.yaml". That's all App Engine needs to run our app.

rm fileName - delete a single file

rm -r folderName - delete a folder and its contents recursively

app.yaml contains the following content –

runtime: nodejs12
handlers:
# Serve all static files with url ending with a file extension
- url: /(.*\..+)$
  static_files: build/\1
  upload: build/(.*\..+)$
# Catch all handler to index.html
- url: /.*
  static_files: build/index.html
  upload: build/index.html

This file provides the information required for application deployment.

To deploy the application, execute the command below in Cloud Shell –

gcloud app deploy

Just answer 'y' to any prompt that comes up.

Once the application is deployed, you will see the URL in the Cloud Shell console as below –

You can also see the application link on the App Engine dashboard –

Access the application in the browser –

Browser
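Alternatively, you can open the deployed application straight from Cloud Shell with the command below; it prints the app URL if a browser cannot be launched –

gcloud app browse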

Java 8 – Lambda Expression

What is Lambda expression in java 8?

Lambda expressions express instances of functional interfaces. A functional interface is an interface that has exactly one abstract method. Functional interfaces can have any number of default and static methods, but they must have only one abstract method.

Examples: the Comparator and Runnable interfaces.

Syntax:

(argument1, argument2) -> { System.out.println("Argument 1 is: " + argument1 + " Argument 2 is: " + argument2); }

@java.lang.FunctionalInterface
interface FunctionalInterface{
    //An abstract method
    void play(String sport);
    static void dance(){
        System.out.println("I am dancing");
    }
    default void run(){
        System.out.println("I am running");
    }

}


public class FunctionalInterfaceDemo{

    public static void main(String[] args) {

        FunctionalInterface fi = (game) -> {
            System.out.println("Lets play : "+ game);
        };

        fi.play("Badminton");
        fi.run();
        FunctionalInterface.dance();
    }
}

Output:

Lets play : Badminton
I am running
I am dancing

How does Lambda expression replace anonymous inner classes used before java 8?

Before Java 8, anonymous inner classes were used to implement functional interfaces. From Java 8 onwards, you can use lambda expressions instead. Below are some functional interfaces in Java –

@FunctionalInterface
public interface ActionListener extends EventListener 
{
    public void actionPerformed(ActionEvent e);  //Only One abstract method
}

@FunctionalInterface
public interface Runnable 
{
    public abstract void run();   //Only one abstract method
}

@FunctionalInterface
public interface Comparator<T> 
{
    int compare(T o1, T o2);       //Only one abstract method
}

We used anonymous inner classes to provide the implementation for the abstract methods present in the functional interfaces above.

Example 1: Anonymous Inner class before Java 8(Comparator):

Comparator<Student> idComparator = new Comparator<Student>() {
            @Override
            public int compare(Student s1, Student s2) {
                return s1.getID()-s2.getID();
            }
        };

Lambda Expression in java 8:

Comparator<Student> idComparator = (Student s1, Student s2) -> s1.getID()-s2.getID();
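For example, assuming a populated List<Student> named students (with the getID() accessor used above), this comparator can be passed straight to a sort –

// assumes a populated List<Student> named students
students.sort(idComparator);            // ascending by ID
students.sort(idComparator.reversed()); // descending by ID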

Example 2: Anonymous Inner class before Java 8(Runnable):

Runnable r = new Runnable() {   
            @Override
            public void run() {
                System.out.println("Runnable Implementation Using Anonymous Inner Class");
            }
        };

Lambda Expression in java 8:

Runnable r = () -> System.out.println("Runnable Implementation Using Lambda Expressions");

You might have noticed that lambdas instantiate functional interfaces and implement their abstract method in a single expression. Before Java 8, anonymous inner classes were used for this purpose, but they require more lines of code than actually needed. Lambdas let you write less code for the same task.

Where else can we use lambda expressions?

  • As we saw earlier, they can be used to implement functional interfaces.
  • They are also used extensively in the stream API (see the stream example after the output below).
  • forEach loop using a lambda expression –
public class FunctionalInterfaceDemo{

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        list.add(1);
        list.add(2);
        list.add(3);
        list.add(4);
        list.add(5);

        System.out.println("List using Lambda expression");
        list.forEach((n) -> System.out.println(n));

    }
}

Output:

List using Lambda expression
1
2
3
4
5
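Building on the stream API point above, here is a minimal sketch of lambdas used as stream operations (filter and map) –

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamLambdaDemo {

    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);

        // Lambdas as stream operations: keep the even numbers and square them
        List<Integer> evenSquares = list.stream()
                .filter(n -> n % 2 == 0)
                .map(n -> n * n)
                .collect(Collectors.toList());

        System.out.println("Even squares: " + evenSquares); // prints [4, 16]
    }
}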

Monitoring Spring boot application using Prometheus and Grafana

In continuation of the previous article, we will see how to monitor the Spring Boot application developed in the earlier post with Actuator. We have already seen Prometheus as a monitoring tool, but since Prometheus does not provide a rich visualization experience, we need another tool with a richer UI for visualizing application metrics.

You can use Grafana for better visualization.

Grafana

While Prometheus does provide some crude visualization, Grafana offers a rich UI where you can build up custom graphs quickly and create a dashboard out of many graphs in no time. You can also import many community built dashboards for free and get going.

Grafana can pull data from various data sources like Prometheus, Elasticsearch, InfluxDB, etc. It also allows you to set rule-based alerts, which then can notify you over Slack, Email, Hipchat, and similar.

Let's start by running Grafana, either installed locally on Windows or via Docker. Ready-made community dashboards (used later in this post) are listed at –

https://grafana.com/grafana/dashboards
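If you have Docker available, a common alternative is to start Grafana from its official image; the image name and port below are the Grafana defaults, adjust as needed –

docker run -d -p 3000:3000 --name grafana grafana/grafana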

If you visit http://localhost:3000, you will be redirected to a login page:

Login page

The default login credentials are admin/admin.

Since Grafana works with many data sources, we need to define which one we’re relying on. In our case we have to select Prometheus as our data source:

Datasource configuration

Select Prometheus from below provided options –

Now, add the URL that Prometheus is running on, in our case http://localhost:9090 and select Access to be through a browser.

At this point, we can save and test to see if the data source is working correctly:

You can search for JVM (Micrometer) dashboard on grafana website and provide its ID in next step to import that dashboard into grafana –

You can find the dashboard ID as highlighted in below screenshot –

As I mentioned earlier, Grafana has lots of pre-built dashboards. For Spring Boot projects, the JVM (Micrometer) dashboard is popular, and we are going to use it (ID = 4701) in this example.

Click on the highlighted plus icon and then Import –

Select the Prometheus data source created in the earlier steps, as highlighted in the screenshot below –

Once you click on Import, your dashboard is ready. You can have a look at all the metrics collected by Prometheus on a single dashboard –

I hope this tutorial helps you configure your Spring Boot application with Grafana.

In this article, we used Micrometer to reformat the metrics data provided by Spring Boot Actuator and expose it in a new endpoint. This data was then regularly pulled and stored by Prometheus, which is a time-series database. Ultimately, we’ve used Grafana to visualize this information with a user-friendly dashboard.

Monitoring an application’s health and metrics helps us manage it better, notice unoptimized behavior, and better understand its performance. This especially holds true when we’re developing a system with many microservices, where monitoring each service can prove to be crucial when it comes to maintaining our system.

Based on this information, we can draw conclusions and decide which microservice needs to scale if further performance improvements can’t be achieved with the current setup.

GitHub Download Link:

Spring mvc integration with prometheus Example

In this article, we will see how to monitor Spring MVC applications using Prometheus. You won't find much documentation for this setup on the web; Prometheus is easy to configure with Spring Boot, but configuring it with a Spring MVC based application requires some additional configuration –

Prometheus

Prometheus is a time-series monitoring application written in Go. It can run on a server, in a docker container, or as part of a Kubernetes cluster (or something similar). Prometheus collects, stores, and visualizes time-series data so that you can monitor your systems. You can tell Prometheus exactly where to find metrics by configuring a list of “scrape jobs”. Applications that are being monitored can provide metrics endpoints to Prometheus using any one of the many client libraries available; additionally, separate exporters can gather metrics from applications to make them available in Prometheus. Metrics get stored locally for 15 days, by default, and any Prometheus server can scrape another one for data. Additionally, remote storage is another option for Prometheus data – provided there is a reliable remote storage endpoint.

Benefits:

  • The option of “service discovery” allows Prometheus to keep track of all current endpoints effortlessly.
  • Outages are quickly detected .
  • The PromQL query language is incredibly flexible and Turing-complete.
  • There’s also a very low load on the services monitored (metrics get stored in memory as they get generated), allowing fewer resources to get used.
  • Additionally, Prometheus users can control traffic volumes, access metrics in the browser, and allow for easy reconfiguration.

Step 1 : Spring MVC application pom.xml configuration

The following Prometheus dependencies are required in the project's pom.xml –

<prometheus.version>0.6.0</prometheus.version> <!-- declared under <properties> -->
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient</artifactId>
    <version>${prometheus.version}</version>
</dependency>
<!-- Hotspot JVM metrics -->
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_hotspot</artifactId>
    <version>${prometheus.version}</version>
</dependency>
<!-- Exposition servlet -->
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_servlet</artifactId>
    <version>${prometheus.version}</version>
</dependency>
<!-- Pushgateway exposition -->
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_pushgateway</artifactId>
    <version>${prometheus.version}</version>
</dependency>
<!-- Spring MVC integration -->
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_spring_web</artifactId>
    <version>${prometheus.version}</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.5.2</version>
</dependency>

Step 2 : Spring MVC application web.xml configuration

We need to configure the MetricsServlet to expose the metrics of our Spring MVC application, as below –

<servlet>
    <servlet-name>PrometheusServlet</servlet-name>
    <servlet-class>io.prometheus.client.exporter.MetricsServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>PrometheusServlet</servlet-name>
    <url-pattern>/metrics</url-pattern>
</servlet-mapping>

Step 3: Add an interceptor class

This will intercept all requests coming into the application and record the metrics to be exposed to Prometheus –

package com.myjavablog.config;

import io.prometheus.client.Histogram;
import org.apache.log4j.Logger;
import org.springframework.web.method.HandlerMethod;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * @author anupb
 */
public class PrometheusMetricsInterceptor extends HandlerInterceptorAdapter {

    private static final Logger logger = Logger.getLogger(PrometheusMetricsInterceptor.class);

    // Histogram tracking request latency, labeled by handler name and HTTP method
    private static final Histogram requestLatency = Histogram.build()
            .name("service_requests_latency_seconds")
            .help("Request latency in seconds.")
            .labelNames("name", "method")
            .register();

    // One latency timer per request-handling thread
    private static final ThreadLocal<Histogram.Timer> timerThreadLocal = new ThreadLocal<>();

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        // Start the timer as soon as the request reaches the interceptor
        String name = this.getName(request, handler).toLowerCase();
        String method = request.getMethod().toUpperCase();
        timerThreadLocal.set(requestLatency.labels(name, method).startTimer());
        return super.preHandle(request, response, handler);
    }

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) throws Exception {
        super.postHandle(request, response, handler, modelAndView);
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
        super.afterCompletion(request, response, handler, ex);
        // Stop the timer and record the observed duration
        Histogram.Timer timer = timerThreadLocal.get();
        if (timer != null) {
            timer.observeDuration();
            timerThreadLocal.remove();
        }
    }

    @Override
    public void afterConcurrentHandlingStarted(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        super.afterConcurrentHandlingStarted(request, response, handler);
    }

    // Resolve a readable name for the handler: controller class + method, or the request URI
    private String getName(HttpServletRequest request, Object handler) {
        String name = "";
        try {
            if (handler instanceof HandlerMethod) {
                HandlerMethod method = (HandlerMethod) handler;
                String className = method.getBeanType().getName();
                name = className + "." + method.getMethod().getName();
            } else {
                name = request.getRequestURI();
            }
        } catch (Exception ex) {
            logger.error("getName", ex);
        }
        return name;
    }
}

Step 4: Add prometheus initialization configuration

This registers the default JVM collectors so their metrics are exposed to the Prometheus server –

package com.myjavablog.config;

import io.prometheus.client.hotspot.DefaultExports;
import org.apache.log4j.Logger;

import javax.annotation.PostConstruct;

public class PrometheusConfig {

    private static final Logger logger = Logger.getLogger(PrometheusConfig.class);

    @PostConstruct
    public void initialize() {
        logger.info("prometheus init...");
        // Register the default Hotspot (JVM) collectors: memory, GC, threads, class loading
        DefaultExports.initialize();
        logger.info("prometheus has been initialized...");
    }
}

Step 5: Add an interceptor to spring-mvc.xml

You first need to add the mvc namespace and its schema location, as below –

http://www.springframework.org/schema/mvc

http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd

Then add the interceptor registration below –

<mvc:interceptors>

<bean class="com.xx.config.PrometheusMetricsInterceptor"/>

</mvc:interceptors>

Step 6: Add configuration to applicationcontext.xml

<bean id="prometheusConfig" class="com.myjavablog.config.PrometheusConfig" init-method="initialize" />

Once all this configuration is done, you can add the application URL as a scrape target in Prometheus.
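As a reference, a scrape job for this Spring MVC application might look like the sketch below; the job name, host, and port are placeholders, while the /metrics path matches the MetricsServlet mapping from Step 2 –

  - job_name: 'spring-mvc-app'          # placeholder job name
    metrics_path: '/metrics'            # matches the MetricsServlet mapping above
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:8080']     # host:port where the MVC app runs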

These metrics are useful for monitoring your Spring MVC application.

Monitoring Spring boot applications using actuator and prometheus

In this article, we will see how to monitor Spring Boot applications using Spring Boot Actuator and Prometheus.

Prometheus

Prometheus is a time-series monitoring application written in Go. It can run on a server, in a docker container, or as part of a Kubernetes cluster (or something similar). Prometheus collects, stores, and visualizes time-series data so that you can monitor your systems. You can tell Prometheus exactly where to find metrics by configuring a list of “scrape jobs”. Applications that are being monitored can provide metrics endpoints to Prometheus using any one of the many client libraries available; additionally, separate exporters can gather metrics from applications to make them available in Prometheus. Metrics get stored locally for 15 days, by default, and any Prometheus server can scrape another one for data. Additionally, remote storage is another option for Prometheus data – provided there is a reliable remote storage endpoint.

Benefits:

  • The option of “service discovery” allows Prometheus to keep track of all current endpoints effortlessly.
  • Outages are quickly detected .
  • The PromQL query language is incredibly flexible and Turing-complete.
  • There’s also a very low load on the services monitored (metrics get stored in memory as they get generated), allowing fewer resources to get used.
  • Additionally, Prometheus users can control traffic volumes, access metrics in the browser, and allow for easy reconfiguration.

Part 1: Spring boot application configuration

We will create a simple Spring Boot REST application that exposes metrics to Prometheus. Please find the project structure of the application below –

Project Structure

Below is the pom.xml required for project –

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.db</groupId>
    <artifactId>spring-boot-prometheus</artifactId>
    <version>1.0-SNAPSHOT</version>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.9.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-registry-prometheus</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

We need the spring-boot-starter-actuator dependency; it exposes the endpoints of our application and makes them available to the Prometheus server. You can run the application and check the endpoints exposed by Actuator by hitting the link below in a browser –

http://localhost:8081/actuator
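The Prometheus endpoint also has to be exposed over the web. The post's URLs assume port 8081, so a typical (assumed) application.properties for this setup would be –

server.port=8081
management.endpoints.web.exposure.include=health,info,prometheus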

Micrometer

micrometer-registry-prometheus dependency is required to register metrics with prometheus server. It exposes Actuator metrics to external monitoring systems such as Prometheus, Netflix Atlas, AWS Cloudwatch, and many more.

Similarly, Micrometer automatically exposes /actuator/metrics data into something your monitoring system can understand. All you need to do is include that vendor-specific micrometer dependency in your application.

Micrometer is a separate open-source project and is not in the Spring ecosystem, so we have to explicitly add it as a dependency. Since we will be using Prometheus, its specific dependency (micrometer-registry-prometheus) is included in the pom.xml shown above.

Part 2: Prometheus Server Configuration

For this example, I will install the portable Prometheus distribution on a local system. You can download the Prometheus setup from the official Prometheus website –

https://prometheus.io/download/

You need to make the following changes in the prometheus.yml configuration file to scrape the metrics of your application. Please add the job below –

 # Details to connect Prometheus with the Spring Boot actuator endpoint to scrape the data
  # The job name is added as a label `job=<job_name>` to any time series scraped from this config.
  - job_name: 'spring-actuator'

    # Actuator endpoint to collect the data
    metrics_path: '/actuator/prometheus'

    # How frequently to scrape the data from the endpoint
    scrape_interval: 5s

    # Target endpoint. We are using Docker, so localhost will not work.
    # You can change it to localhost if you are not using Docker.
    static_configs:
    - targets: ['192.168.43.33:8081']

Run the Prometheus server by double-clicking the prometheus.exe file.

The Prometheus console will open and start gathering the metrics of your application. You can access the Prometheus server in a browser at http://localhost:9090

Targets for metrics
Classes loaded in JVM prometheus metrics
Number of Threads in different states

You can view the metrics of your application by selecting the various parameters provided by Prometheus, such as –

  • jvm_classes_loaded_classes
  • jvm_threads_live_threads
  • jvm_threads_states_threads
  • tomcat_global_request_seconds_count

These parameters are useful for monitoring your systems. In the next article, we will create a Prometheus server using Docker and expose the metrics.

GitHub Download Link:

Spring Cloud Eureka and Hystrix Circuit Breaker using Microservices

In this tutorial, we will use the microservice application created in a previous post (Microservices Example using Spring Cloud Eureka) and add the circuit breaker pattern using the Hystrix library in Java.

Using Hystrix in your application adds a defensive mechanism and makes applications more resilient and fault tolerant.

Tools Required –

  • Java 8
  • IntelliJ IDE

We have created three different applications as below –

  1. Eureka Service – This service registers every microservice; client microservices then look up the Eureka server to find the dependent microservices they need. Eureka Server comes from Netflix, and Spring Cloud offers a declarative way to register and invoke services using Java annotations.
  2. demo-server – This service will return a simple hello message.
  3. demo-client – It is similar to the standalone client service created in Bootiful Development with Spring Boot. It will consume the APIs provided by demo-server through the Eureka Service.

Hystrix documentation – https://github.com/Netflix/Hystrix/wiki

Microservices are deployed in the cloud. Since the cloud provides a distributed environment, there is a higher chance that some of your services will be down at some point in time. Your application can have several microservices that depend on each other, so one service calls another. If the destination service is down, the source normally gets an exception. With the help of Hystrix annotations, you can add a fallback mechanism and handle the exception in your services, which makes them more fault tolerant and resilient.

You need to add the dependency below to your demo-client service to enable the Hystrix circuit breaker pattern –

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-hystrix</artifactId>
	<version>1.2.5.RELEASE</version>
</dependency>

We just have to add a few annotations to handle the fallback or break the service call in case the destination service (demo-server) is down. We need to change the main class to enable the Hystrix circuit breaker –

package com.myjavablog.democlient;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.cloud.netflix.hystrix.EnableHystrix;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@EnableCircuitBreaker
@EnableDiscoveryClient
@SpringBootApplication
public class DemoClientApplication {

	public static void main(String[] args) {
		SpringApplication.run(DemoClientApplication.class, args);
	}
}

@Configuration
class Config{
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate(){
        return new RestTemplate();
    }

}

We also need to change the controller class to add a fallback method as below –

package com.myjavablog.democlient;

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
@RequestMapping("/demo/hello/client")
public class TestController {

    @Autowired
    public RestTemplate restTemplate;

    @GetMapping
    @HystrixCommand(fallbackMethod = "handleFallback")
    public String test(){
        String url = "http://demo-server/demo/hello/server";
        return restTemplate.getForObject(url, String.class);
    }

    public String handleFallback(){
        return "Fallback hello service";
    }
}

By default, Hystrix has a timeout of 1 second for every request, so we disable the timeout by setting the property below in the application.properties file –

hystrix.command.default.execution.timeout.enabled=false
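Alternatively, instead of disabling the timeout entirely, you can raise it with the standard Hystrix timeout property; the 5000 ms value below is just an example –

hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=5000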

Now I intentionally stop the demo-server microservice and then call the API to verify that the fallback works properly. You can see below that demo-server is no longer registered with Eureka –

Now when you call the service, it shows the fallback message as below –

GitHub Download Link:

Benefits of microservices

You start building microservices because they give you a high degree of flexibility and autonomy with your development teams, but you and your team quickly find that the small, independent nature of microservices makes them easily deployable to the cloud. Once the services are in the cloud, their small size makes it easy to start up large numbers of instances of the same service, and suddenly your applications become more scalable and, with forethought, more resilient.

A microservice architecture has the following characteristics:

  • Application logic is broken down into small-grained components with well defined boundaries of responsibility that coordinate to deliver a solution.
  • Each component has a small domain of responsibility and is deployed completely independently of one another. Microservices should have responsibility for a single part of a business domain. Also, a microservice should be reusable across multiple applications.
  • Microservices communicate based on a few basic principles (notice I said principles, not standards) and employ lightweight communication protocols such as HTTP and JSON (JavaScript Object Notation) for exchanging data between the service consumer and service provider.
  • The underlying technical implementation of the service is irrelevant because the applications always communicate with a technology-neutral protocol (JSON is the most common). This means an application built using a microservice application could be built with multiple languages and technologies.
  • Microservices—by their small, independent, and distributed nature—allow organizations to have small development teams with well-defined areas of responsibility. These teams might work toward a single goal such as delivering an application, but each team is responsible only for the services on which they’re working.

What’s a microservice?

Before the concept of microservices evolved, most web-based applications were built using a monolithic architectural style. In a monolithic architecture, an application is delivered as a single deployable software artifact. All the UI (user interface), business, and database access logic are packaged together into a single application artifact and deployed to an application server.
While an application might be deployed as a single unit of work, most of the time there will be multiple development teams working on the application. Each development team will have their own discrete pieces of the application they're responsible for, and oftentimes specific customers they're serving with their functional piece.

For example, consider a customer relationship management (CRM) application that involves the coordination of multiple teams, including the UI, customer master, data warehouse, and mutual funds teams. The figure below illustrates the basic architecture of this application.

Monolithic applications force multiple development teams to artificially synchronize their delivery because their code needs to be built, tested, and deployed as an entire unit.

The problem here is that as the size and complexity of the monolithic CRM application grew, the communication and coordination costs of the individual teams working on the application didn’t scale. Every time an individual team needed to make a change, the entire application had to be rebuilt, retested and redeployed.

A microservice is a small, loosely coupled, distributed service. Microservices allow you to take a large application and decompose it into easy-to-manage components with narrowly defined responsibilities. They help combat the traditional problems of complexity in a large code base by decomposing it into small, well-defined pieces. The key concept you need to embrace as you think about microservices is decomposing and unbundling the functionality of your applications so they're completely independent of one another. If we take the CRM application we saw in the figure above and decompose it into microservices, it might look like what's shown in the figure below. Each functional team completely owns its service code and service infrastructure. They can build, deploy, and test independently of each other because their code, source control repository, and infrastructure (app server and database) are now completely independent of the other parts of the application.

With a microservice architecture, the CRM application would be decomposed into a set of microservices completely independent of each other, allowing each development team to move at its own pace.

Microservices Example using Spring Cloud Eureka

In this tutorial, we will create a demo microservice setup using the Spring Boot framework in Java.

Tools Required

  • Java 8
  • IntelliJ IDE

We need to create three different applications as below –

  1. Eureka Service – This service registers every microservice; client microservices then look up the Eureka server to find the dependent microservices they need. Eureka Server comes from Netflix, and Spring Cloud offers a declarative way to register and invoke services using Java annotations.
  2. demo-server – This service will return a simple hello message.
  3. demo-client – It is similar to the standalone client service created in Bootiful Development with Spring Boot. It will consume the APIs provided by demo-server through the Eureka Service.

1. Eureka Service

We need to create new project in intelliJ editor by selecting below dependencies –

Project Setup
Maven dependencies

You need the configuration below to create the Eureka service, which allows other microservices to register with it.

package com.myjavablog;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@EnableEurekaServer
@SpringBootApplication
public class EurekaServiceApplication {

	public static void main(String[] args) {
		SpringApplication.run(EurekaServiceApplication.class, args);
	}

}

You need to create an application.yml to configure the EurekaService project as the registry service. Other services will then be able to register themselves with EurekaService, so you need the configuration below –

eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
    server:
      waitTimeInMsWhenSyncEmpty: 0
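The post does not show them, but the Eureka server also needs an application name and a port. Since the clients later point their defaultZone at port 8070, an assumed matching configuration would be –

spring:
  application:
    name: eureka-service

server:
  port: 8070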

Now you can run the application. Once it is up and running, open the Eureka server URL in a browser and verify that EurekaService is working. You will see the Eureka home page as below –

As no instances are registered with EurekaService yet, you can see the "No instances available" message highlighted above. Now we will create services that register with our Eureka server.

2. Demo Server

This is a simple REST-based web service that registers itself with EurekaService. It has a simple API that just returns a hello message. Create a project as shown below in the IntelliJ IDE –

Setup
Maven Dependencies required

Project structure is as shown below –

You need to include the Eureka Discovery Client dependency in your pom file.

Once the application is up and running, you can see the instance of demo-server registered on EurekaService as shown below –

3. Demo Client

In the same way as the demo server, we need to create the demo client, which will consume the services provided by demo-server through EurekaService.

demo-client has an autowired RestTemplate to call the services provided by demo-server.

package com.myjavablog.democlient;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
@RequestMapping("/demo/hello/client")
public class TestController {

    @Autowired
    public RestTemplate restTemplate;

    @GetMapping
    public String test(){
        String url = "http://demo-server/demo/hello/server";
        return restTemplate.getForObject(url, String.class);
    }
}

Also, you can see in the code above that we are using String url = "http://demo-server/demo/hello/server".

demo-server is the name specified in the application.yml file of the demo-server project, so instead of http://localhost:8071/demo/hello/server we use http://demo-server/demo/hello/server. (For this service name to resolve, the RestTemplate needs to be registered as a @LoadBalanced bean, as shown in the Hystrix post earlier.)

demo-server is the name registered on Eureka server for demo-server application as shown below –

spring:
  application:
    name: demo-server

server:
  port: 8071

eureka:
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://localhost:8070/eureka/
  instance:
    hostname: localhost

Once the application is up and running, you can see that demo-client is registered on the Eureka server as below –

When you call the APIs from the demo client, it internally calls the demo-server APIs looked up through the Eureka service, with client-side load balancing distributing the calls across instances. So even if you scale up your services in the future by creating multiple instances, you can keep using the service name registered on the Eureka server.

GitHub Download Link:
