• Large monolith architectures are broken down into many small services.
    • Each service runs in its own process.
    • The applicable cloud rule is one service per container.
  • Services are optimized for a single function.
    • There is only one business function per service.
    • The Single Responsibility Principle: A microservice should have one, and only one, reason to change.
  • Communication is through REST API and message brokers.
    • Avoid tight coupling introduced by communication through a database.
  • Continuous integration and continuous deployment (CI/CD) is defined per service.
    • Services evolve at different rates.
    • You let the system evolve but set architectural principles to guide that evolution.
  • High availability (HA) and clustering decisions are defined per service.
    • A single size or scaling policy is not appropriate for all services.
    • Some services don't need to scale at all, while others require auto-scaling up to large numbers of instances.

Lombok

Posted: June 9, 2018 in General, Java, Java8

Let's take a look at the following sample code.

import java.io.Serializable;
import java.util.Objects;

public class User implements Serializable {

    private long id;
    private String username;
    private String login;

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public String getLogin() {
        return login;
    }

    public void setLogin(String login) {
        this.login = login;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        User user = (User) o;
        return id == user.id &&
                Objects.equals(username, user.username) &&
                Objects.equals(login, user.login);
    }

    @Override
    public int hashCode() {

        return Objects.hash(id, username, login);
    }
}

A model class like this needs getters and setters for its instance variables, equals and hashCode implementations, constructors, and a toString implementation. So far this class has no business logic, and even without it, it is already 50+ lines of code. This is insane.

Lombok is used to reduce boilerplate code in model/data objects; for example, it can generate getters and setters for those objects automatically through annotations. The easiest way is to use the @Data annotation.

import java.io.Serializable;
import lombok.Data;

@Data
public class User implements Serializable {

    private long id;
    private String username;
    private String login;
}
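
With @Data in place, the generated methods can be used as if they were hand-written. A quick sketch (the sample values here are made up):

User user = new User();
user.setUsername("sam");           // setter generated by @Data
user.setLogin("sam@example.com");  // hypothetical login value
System.out.println(user);          // toString() generated by @Data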

How do you add Lombok to your Java project?

Using Gradle

dependencies {
    compileOnly('org.projectlombok:lombok:1.16.20')
}

Using Maven

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.16.20</version>
</dependency>
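
Depending on your setup, you may also want to give the Maven dependency <scope>provided</scope>, since Lombok is only needed at compile time; this mirrors the compileOnly configuration used for Gradle above.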

Tips to remember while using Lombok

  1. Don't mix business logic into Lombok-annotated classes.
  2. Use @Data for your data objects.
  3. Use @Value for immutable value objects.
  4. Use @Builder when you have an object with many fields of the same type (see the sketch after this list).
  5. Exclude generated classes from the Sonar report. If you are using Maven and Sonar, you can do this with the sonar.exclusions property.
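
As a quick illustration of tips 3 and 4, here is a minimal sketch (this Address class and its fields are made up for the example):

import lombok.Builder;
import lombok.Value;

// @Value makes the class immutable (private final fields, getters, all-args
// constructor, equals/hashCode/toString); @Builder generates a fluent builder,
// so the two String fields can't be accidentally swapped at the call site.
@Value
@Builder
public class Address {
    String street;
    String city;
}

// Usage:
// Address address = Address.builder().street("Main St").city("Springfield").build();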

How can we process really large collections efficiently? Traditionally, we use loops to iterate over a collection.

Let's say we need to iterate over a list of Person objects.

List<Person> personList = new ArrayList<>();
personList.add(new Person("Sam", 10));
personList.add(new Person("Smith", 9));
personList.add(new Person("Zayn", 2));
personList.add(new Person("Nathan", 1));

Using the forEach method

personList.forEach(person -> {
    System.out.println(" Person Name :: " + person.getName());
});

In Java 8, we have something new called "Stream". A stream represents a sequence of elements and supports different kinds of operations to perform computations upon those elements.

System.out.println("Traversing List using streams.");
personList.stream().forEach(person -> {
    System.out.println(person.getName());
});

Java 8's Stream API may seem a bit more verbose than the for-each loop for collections, and you may wonder what benefit can come from it.

The difference between the for-each loop and the Stream API (collection.stream()) in Java 8 is that we can easily introduce parallelism by switching to collection.parallelStream(), whereas with a for-each loop we would have to manage threads ourselves.

/** One of the goals of the Stream API in Java 8 is to let you spread processing
 *  across a system that has multiple CPUs.
 *  This multi-CPU processing is handled automatically by the Java runtime.
 *  All you need to do is turn your sequential stream into a parallel stream.
 */
System.out.println("Traversing List using parallel streams");
// note: output order is not guaranteed with a parallel stream
personList.parallelStream().forEach(person -> {
    System.out.println(person.getName());
});


Stream operations are either intermediate or terminal. Intermediate operations return a stream, so we can chain multiple intermediate operations together. Terminal operations are either void or return a non-stream result. In the example below, filter and sorted are intermediate operations, whereas forEach is a terminal operation. For a full list of the available stream operations, see the Stream Javadoc. Such a chain of stream operations is also known as an operation pipeline.

Predicate<Person> agePredicate = person -> person.getAge() > 5;
System.out.println("Traversing List using parallel streams and filters");
personList.parallelStream()
        .filter(agePredicate)
        // Person doesn't implement Comparable, so supply an explicit Comparator
        .sorted(Comparator.comparing(Person::getName))
        // with a parallel stream, use forEachOrdered instead if the sorted
        // order must be preserved in the output
        .forEach(person -> System.out.println(person.getName()));

Let's look at the different ways of creating a stream.

Arrays.asList("sam", "smith", "zayn")
        .stream()
        .findFirst()
        .ifPresent(System.out::println);

Stream.of("sam", "smith", "zayn")
        .findFirst()
        .ifPresent(s -> System.out.println(s));

Arrays.stream(new int[] {1, 2, 3})
        .average()
        .ifPresent(System.out::println);
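
Streams can also be created without an existing collection or array; for example (an addition not in the original examples, assuming java.util.stream.IntStream is imported):

// IntStream.range produces a stream of ints over a half-open range
IntStream.range(1, 4)
        .forEach(System.out::println); // prints 1, 2 and 3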

In addition to the new lambda syntax, Java SE 8 adds a number of new functional interfaces. One of the most useful is the Predicate interface, which has a single boolean method named test that you can use to wrap up your conditional processing and make conditional code a lot cleaner.

Go through the following example to get an understanding of the Predicate interface.

package com.suhas;

import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;


public class PredicateInterfaceTest {

    public static void main(String[] args) {

        List<Person> personList = new ArrayList<>();
        personList.add(new Person("Sam", 10));
        personList.add(new Person("Smith", 9));
        personList.add(new Person("Zayn", 6));
        personList.add(new Person("Nathan", 1));

        Predicate<Person> agePredicate = person -> person.getAge() > 5;

        Predicate<Person> namePredicate = person -> person.getName().equals("Zayn");

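        // and() combines two predicates: the name must be "Zayn" AND the age
        // must be under 5 (never true for this sample data, since Zayn is 6)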
        Predicate<Person> nameAndAgePredicate = namePredicate.and(person -> person.getAge() < 5);

        personList.forEach(person -> {
            if (agePredicate.test(person))
                System.out.println("Matching Record Found for Age Predicate :: " + person.getName());
            if (namePredicate.test(person))
                System.out.println("Matching Record Found for Name Predicate :: " +  person.getName());
            if (nameAndAgePredicate.test(person))
                System.out.println("Matching Record Found for Name and Age Predicate :: " +  person.getName());

        });

    }
}
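
With the sample data above, the combined predicate never matches (Zayn's age is 6, not under 5), so the program prints:

Matching Record Found for Age Predicate :: Sam
Matching Record Found for Age Predicate :: Smith
Matching Record Found for Age Predicate :: Zayn
Matching Record Found for Name Predicate :: Zayn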

Understanding the CAP theorem

Posted: June 3, 2018 in General

Finding the ideal database for your application is largely a choice between trade-offs. The CAP theorem, originally proposed by Eric Brewer in 2000, is one concept that can help you understand those trade-offs. It was originally conceptualized around networked shared data and is often used to generalize the trade-offs between different databases. The CAP theorem centers on three desirable properties: consistency (all users see the same data, no matter where they read it from), availability (users can always read from and write to the database), and partition tolerance (the database keeps working even when divided across a network).

The theorem states that you can guarantee at most two of the three properties simultaneously. So you can have an available partition-tolerant database, a consistent partition-tolerant database, or a consistent available database. One thing to note is that these properties are not necessarily exclusive of each other: you can have a consistent partition-tolerant database that still emphasizes availability, but you will sacrifice either part of your consistency or part of your partition tolerance.

Relational databases trend towards consistency and availability. Partition tolerance is something that relational databases typically don't handle very well; often you have to write custom code to handle partitioning a relational database. NoSQL databases, on the other hand, trend towards partition tolerance. They are designed with the idea that you're going to add more nodes to your database as it grows. CouchDB, for example, is an available partition-tolerant database.

That means the data is always available to read from and write to, and that you're able to add partitions as your database grows. In some instances, the CAP theorem may not apply to your application. Depending on the size of your application, CAP trade-offs may be irrelevant. If you have a small or low-traffic website, partitions may be useless to you, and in some cases consistency trade-offs may not be noticeable. For instance, the votes on a comment may not show up right away for all users.

This is fine as long as all votes are displayed eventually. The CAP theorem can be used as a guide for categorizing the tradeoffs between different databases. Consistency, availability, and partition tolerance are all desirable properties in a database. While you may not be able to get all three in any single database system, you can use the CAP theorem to help you decide what to prioritize.

Using Optional with Hibernate

Java 8 introduced Optional<T>, a container object which may or may not contain a non-null value. It's often used to indicate to a caller that a value might be absent and needs to be handled to avoid NullPointerExceptions.

With the release of Hibernate 5.2, we can use it in our persistence layer for optional entity attributes or when loading entities that may or may not exist.

Let's see how we can use Optional<T> to indicate optional attributes and query results that might be empty.

Consider a sample tour-booking app where I need to search for a tour package by region. The database design has two tables, 'TourPackage' and 'CustomerReview'. The 'TourPackage' entity is the root of our entity aggregate. A CustomerReview can be associated with each TourPackage, but it is not mandatory.

So when we search for a TourPackage by region, what if there is a tour package for which the optional customerReview attribute is null? With previous Java versions, the getCustomerReview() method would just return null, and the caller would need to know about the possible null value and handle it. With Java 8, you can return an Optional to make the caller aware of possible null values and to avoid NullPointerExceptions.

But if you just change the type of the customerReview attribute from CustomerReview to Optional<CustomerReview>, Hibernate isn’t able to determine the type of the attribute and throws a MappingException.

javax.persistence.PersistenceException: [PersistenceUnit: my-persistence-unit] Unable to build Hibernate SessionFactory

Caused by: org.hibernate.MappingException: Could not determine type for: java.util.Optional, at table: TourPackage, for columns: [org.hibernate.mapping.Column(customerReview)]

To avoid this exception, you have to use field-based access and keep CustomerReview as the type of the customerReview attribute. Hibernate is then able to determine the data type of the attribute, and the getter method can wrap the value in an Optional itself.
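
A minimal sketch of this pattern (mapping details simplified; the real entity will have more attributes):

import java.util.Optional;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToOne;

@Entity
public class TourPackage {

    // placing @Id on a field selects field-based access for the entity
    @Id
    @GeneratedValue
    private Long id;

    // keep the plain entity type on the field so Hibernate can map it
    @OneToOne
    private CustomerReview customerReview; // may be null in the database

    public Optional<CustomerReview> getCustomerReview() {
        // wrap the possibly-null value only in the getter
        return Optional.ofNullable(customerReview);
    }

    public void setCustomerReview(CustomerReview customerReview) {
        this.customerReview = customerReview;
    }
}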

Let's see how we can read the comment given by the customer.

Optional<CustomerReview> customerReview = tourPackage.getCustomerReview();
if (customerReview.isPresent()) {
    CustomerReview review = customerReview.get();
    System.out.print("Review Comment :: " + review.getComment());
}

Let's also take a look at the various methods of the Optional class.

Optional.empty() – Returns an empty Optional object.
Optional.of() – Returns an Optional with a non-null value. It throws a NullPointerException if the value is null.
Optional.ofNullable() – Returns an Optional with the given value, or an empty Optional if the value is null.
Optional#isPresent() – Returns true if a value is present in the Optional, otherwise false.
Optional#get() – Returns the value from the Optional if a value is present, otherwise throws a NoSuchElementException.
Optional#ifPresent() – Invokes a Consumer if a value is present, otherwise does nothing.
Optional#orElse() – Returns the value if present, otherwise returns the specified other value.
Optional#orElseGet() – Returns the value if present, otherwise invokes a Supplier that returns another value.
Optional#orElseThrow() – Returns the value if present, otherwise invokes a Supplier that creates and throws an exception.
Optional#filter() – Returns an Optional with the value if it is present and matches the given Predicate, otherwise returns an empty Optional.
Optional#map() – If a value is present, applies the given mapping Function to it and returns an Optional of the result, otherwise returns an empty Optional.
Optional#flatMap() – If a value is present, applies the given Optional-bearing mapping Function to it and returns that Optional, otherwise returns an empty Optional.
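
A quick sketch exercising a few of these methods (the values here are made up):

import java.util.Optional;

public class OptionalDemo {

    public static void main(String[] args) {
        Optional<String> empty = Optional.ofNullable(null);
        System.out.println(empty.orElse("anonymous")); // prints "anonymous"

        Optional<String> login = Optional.of("zayn");
        login.map(String::toUpperCase)        // transform the value if present
             .filter(s -> s.startsWith("Z"))  // keep it only if it matches
             .ifPresent(System.out::println); // prints "ZAYN"

        // fail loudly when a value is required
        String required = login.orElseThrow(IllegalStateException::new);
        System.out.println(required); // prints "zayn"
    }
}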


Spring Initializr

To build a Java application, the first step is to create a Java project. Most Java projects rely on third-party Java archive dependencies, and these third-party archives usually have dependencies of their own. On top of that, each version of a dependency relies on particular versions of other dependencies. Managing all of these dependencies is a nightmare that Java developers have nicknamed JAR hell. To avoid JAR hell, we use build dependency management systems like Maven or Gradle.

But even with Maven and Gradle, versioning between individual .jar files can be a nuisance. Spring Boot recognizes this and created the notion of a Spring Boot Starter, which bundles several dependencies into a grouping that is easier to manage. There are a lot, and I mean a lot, of Spring Boot Starter dependencies, so even cobbling together a project on your own can be difficult. This is where Spring Initializr comes to the rescue. Spring Initializr is a tool for creating Spring Boot Java projects by answering a series of questions and selecting check boxes to choose which features to include.

Initializr creates the package structure, the pom.xml for Maven or build.gradle for Gradle, and any required Java source classes.

Let's see how to use Spring Initializr.

Step 1: Go to https://start.spring.io/

Step 2: Choose a Java project with Maven and the latest Spring Boot version.

[Screenshot: the Spring Initializr home page]

Step 3: If you want to see more options, click the 'switch to full version' link at the bottom of the page.

[Screenshot: the Spring Initializr full version with additional options]

Step 4: Choose Spring Starter packages.

Now, we’re going to scroll past the Generate Project button and look at all of these Spring Starter packages, and from these we’re going to choose Web and within Web is Rest Repositories.

[Screenshot: selecting Web and Rest Repositories]

Then keep scrolling until you get to the SQL section, and choose JPA and H2.

[Screenshot: selecting JPA and H2]

Now we’re going to go back and click the Generate Project button

Spring Initializr will generate a zip file. Copy it to your working folder, unzip it there, and start working on your project. 🙂
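
For these selections, the generated pom.xml should include the spring-boot-starter-data-rest and spring-boot-starter-data-jpa starters, plus the com.h2database:h2 dependency (the exact contents vary with the Initializr and Spring Boot versions).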