Getting Started with Reactive Programming

Arjun Chauhan
Jul 17, 2023


Before the title misleads anyone, I would like to clarify that the term “Reactive” might be ambiguous to some. What I will be covering in this blog is the programming paradigm that deals with handling non-blocking asynchronous events, not the JS framework with a similar name.

What are Non blocking asynchronous calls?

Let’s say we have an API and we are handling a lot of requests. By design, synchronous calls are blocking: until we receive the response from the server, the application’s execution is blocked. In contrast, with asynchronous execution the application does not wait for the response from the server and is therefore non-blocking.

I wanted to be clear about why asynchronous calls are non-blocking by nature, because what we will be discussing in this blog is how reactive programming helps us handle asynchronous events in our application.

What are events?

Events have varying definitions depending on the context. Since we are discussing reactive programming today, I would describe events as occurrences that happen over time and can be observed and reacted to by the application.

As I mentioned earlier, reactive programming helps us deal with these events in a way that keeps our application responsive and scalable.

Getting Started

One of the main goals of reactive programming is to provide a way to handle asynchronous data streams and events in a more efficient and scalable way.

The state of the application in reactive programming is built around data streams, which are composed of events that happen over time. Changes in state, i.e. events, can be processed by the application asynchronously and the state updated accordingly.

Reactive programming also involves handling backpressure, which is a mechanism for controlling the flow of data in the application to prevent overload and ensure stability. This is particularly important when dealing with large volumes of data streams and events, which is often the case in large applications.

So how does reactive programming work?

Reactive programming makes use of publishers and subscribers to handle data and events efficiently.

Next, I will explain how reactive programming leverages the publisher-subscriber model.

  1. A publisher object is initialized that emits a stream of data or events over time.
  2. One or more subscriber objects are created that register with the publisher to receive notifications when new data is available. The subscribers can process the data asynchronously and update the state of the application accordingly. We can configure whether an event is consumed by one or multiple subscribers, depending upon our use case.
  3. We also have support for operators. Operators are functions that can be used to transform and combine data streams. They allow developers to create complex data flows by composing and transforming data streams using a set of functional programming techniques.
  4. Then, to make things scalable, we have schedulers. Schedulers are used to control the concurrency and parallelism of the application. They allow developers to control how data streams are processed and ensure that the application is responsive and scalable.

There are various frameworks available that provide an abstraction for reactive programming like RxJava, Reactor, Akka, and Spring WebFlux.

I would be going through one of them in today’s blog to provide a step-by-step guide as to how everything comes together.

Reactor

Reactor is a reactive programming library for building asynchronous and event-driven applications in Java. It is designed to handle large volumes of data streams and events in a scalable and efficient way. I have used Reactor myself, and the reason I chose it as the library for today’s demo is that it is easy to understand and very beginner friendly.

The main use of Reactor is to provide a set of tools and abstractions for building reactive applications. It provides a set of core components, such as Flux and Mono, which represent streams of data and events. These components can be combined and transformed using a set of operators, such as map, filter, and reduce, to create complex data flows.

Before we go any further, I would like to explain what the Mono and Flux mentioned above are.

Mono vs Flux

As mentioned previously, reactive programming works on the publisher-subscriber model. Both Mono and Flux provide an abstraction for publishing events.

The difference between them is that Mono can emit at most one event, whereas Flux can emit any number of events over time. So when we are sure that we only want to emit one event we can choose Mono; Flux can be used when we have to emit multiple events.
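
To make the distinction concrete, here is a minimal sketch (the values are just made-up sample data):

Mono<String> single = Mono.just("hello");           // emits at most one value, then completes
Flux<String> several = Flux.just("a", "b", "c");    // emits zero or more values, then completes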

Now I will give a short example illustrating every concept we have discussed so far.

Demo

Everything begins with the publisher emitting the event for the consumer to consume.

Like Java streams, both Flux and Mono follow lazy execution, i.e. the value of the expression is not computed until it is needed.

Let’s start by creating a simple flux.

  Flux<Integer> flux = Flux.just(1, 2, 3, 4, 5);

The “just” method allows us to create a Flux when we already have the values available. This is the simplest way of creating a Flux. Even though I am giving an example of Flux, the steps remain the same for Mono as well. Mono also has a “just” method, the only difference being that Mono is used for a single item, so its “just” accepts only one value.

Flux<String> mappedFlux = flux.map(i -> "Number: " + i);

The above flux gets converted to
"Number: 1", "Number: 2", "Number: 3", "Number: 4", "Number: 5"

Here we have used an operator to process our data. Remember, I mentioned the usage of operators previously. Now I will take the time to list a few common operators we can use; a quick sketch of two of them follows the list.

  1. Map: Transforms each item emitted by a data stream by applying a function to it.
  2. Filter: Filters items emitted by a data stream based on a predicate function.
  3. Reduce: Aggregates the items emitted by a data stream into a single value using an accumulator function.
  4. Merge: Combines multiple data streams into a single data stream.
  5. Concat: Concatenates multiple data streams into a single data stream in a specific order.
  6. FlatMap: Transforms each item emitted by a data stream into another data stream, and then flattens the resulting data streams into a single data stream.
  7. Zip: Combines the items emitted by multiple data streams into a single item using a combiner function.
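
To give a feel for how a couple of these look in code, here is a small sketch with made-up sample data (separate from the main demo):

Flux<String> names = Flux.just("Ann", "Bob");
Flux<Integer> ages = Flux.just(30, 25);

// Zip: pair up items from the two streams with a combiner function.
Flux<String> zipped = Flux.zip(names, ages, (name, age) -> name + " is " + age);
// emits "Ann is 30", "Bob is 25"

// FlatMap: map each item to its own stream, then flatten the results into one stream.
Flux<String> letters = names.flatMap(name -> Flux.fromArray(name.split("")));
// emits "A", "n", "n", "B", "o", "b"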

Let’s apply a few more operators to see how they work.

Flux<String> filteredFlux = mappedFlux.filter(s -> !s.endsWith("2") && !s.endsWith("4"));

The flux gets converted to

"Number: 1", "Number: 3", "Number: 5"

Let’s reduce the flux now into a single string.

Mono<String> reducedMono = filteredFlux.reduce((a, b) -> a + ", " + b);

The flux gets converted to

"Number: 1, Number: 3, Number: 5"

What these steps represent is the processing of the event. We began with a raw Flux of numbers, and we used operators to process the events.

A point to be noted is that I have added the converted output for explanation only; no computation actually happens at this point. As mentioned previously, Flux follows lazy execution. Until we subscribe to the Flux, no computation happens at all. And when we do subscribe, we have control over what we want to do with the events we receive.

To keep things easy, I will just print out the result to show how a subscriber works.

reducedMono.subscribe(System.out::println);

Output

"Number: 1, Number: 3, Number: 5"

This is a very basic example of how Flux works. We created an event stream, processed it using operators, and finally, when we called subscribe, the whole expression was evaluated.
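
For reference, here is how the whole demo might look as one self-contained class, assuming the reactor-core dependency is on the classpath (a sketch, not production code):

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ReactorDemo {
    public static void main(String[] args) {
        Flux<Integer> flux = Flux.just(1, 2, 3, 4, 5);

        Mono<String> reducedMono = flux
                .map(i -> "Number: " + i)                           // transform each item
                .filter(s -> !s.endsWith("2") && !s.endsWith("4"))  // keep only the odd numbers
                .reduce((a, b) -> a + ", " + b);                    // aggregate into one string

        // Nothing has run yet; subscribing triggers the whole pipeline.
        reducedMono.subscribe(System.out::println);
        // prints: Number: 1, Number: 3, Number: 5
    }
}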

Now you might wonder where schedulers come into play here. Let’s discuss that next.

Schedulers

Schedulers are used to control the concurrency and parallelism of the application. They allow developers to control how data streams are processed and ensure that the application is responsive and scalable.

Schedulers provide a way to specify where and how the processing of a data stream should occur. They can be used to run tasks on a separate thread, on a thread pool, or on a specific executor. Schedulers can also be used to control the order in which tasks are executed, and to introduce delays or timeouts.

Reactor provides several built-in schedulers that can be used to control the processing of data streams:

  1. Schedulers.immediate() : Runs tasks on the current thread.
  2. Schedulers.single(): Runs tasks on a single thread.
  3. Schedulers.parallel(): Runs tasks on a fixed-size thread pool.
  4. Schedulers.elastic(): Runs tasks on an unbounded thread pool that can grow or shrink dynamically (deprecated in recent Reactor versions in favour of Schedulers.boundedElastic()).

These can be used with methods like subscribeOn and publishOn to control where the events are evaluated.

If we want to add a scheduler in the above case, we can do so by updating the subscribe call.

reducedMono.subscribeOn(Schedulers.parallel()).subscribe(System.out::println);

In this example, subscribeOn is used to subscribe to the reduced Mono on a separate scheduler. We use the Schedulers.parallel() method to get a scheduler backed by a pool of worker threads. This allows us to process the data stream asynchronously, off the calling thread.

The output remains the same because we didn’t change the processing part of the execution.
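
Roughly speaking, subscribeOn controls where the source is subscribed (and therefore where the upstream work runs), while publishOn switches the thread for every operator that comes after it. Here is a small sketch to illustrate the difference (separate from the demo above; the printed thread names will vary from run to run):

Flux.range(1, 3)
    .map(i -> i * 2)                      // runs where the subscription happens
    .subscribeOn(Schedulers.single())     // subscribe (and run upstream work) on a single worker thread
    .publishOn(Schedulers.parallel())     // everything below this line runs on the parallel scheduler
    .map(i -> "value " + i)
    .subscribe(s -> System.out.println(Thread.currentThread().getName() + " -> " + s));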

Now we have discussed publishing and subscribing. We have also talked about the schedulers available in Reactor.

So are we missing something?

Well, indeed we are. We are missing backpressure handling.

Let’s discuss that now.

Backpressure Strategy

In Reactor, backpressure handling is a mechanism for controlling the flow of data in the application to prevent overload and ensure stability. It is particularly important when dealing with large volumes of data streams and events. This is something we have already discussed, so let’s see how we can actually do it in code.

In Reactor, backpressure handling can be implemented with a variety of strategies that Reactor has built in for us, exposed through the onBackpressure___ family of operators.

The ___ can be replaced with the strategy name. I will discuss a few of the strategies Reactor supports:

  1. Buffer: If more events arrive than the system can handle, the excess gets added to a buffer to be processed once the system becomes available. This is the default strategy.
  2. Drop: Drops incoming data when the Subscriber is unable to keep up with the rate of data production. This means we might lose events that arrive faster than we can process them.
  3. Error: Signals an error to the Subscriber when the incoming data cannot be processed in time.
  4. Latest: Keeps only the latest item emitted by the Publisher when the Subscriber is unable to keep up with the rate of data production. This is similar to Drop, but here we always keep the most recent item rather than the older ones.

Let’s have an example now.

First let’s create a flux.

Flux<Integer> flux = Flux.range(1, 1000);

Here we didn’t use “just”; we used the range method, which creates a Flux emitting a sequence of integers.

Let’s use the Drop strategy for this example.

Flux<Integer> droppedFlux = flux.onBackpressureDrop();

Here we used the drop strategy to handle backpressure: items that arrive while the Subscriber is unable to keep up with the rate of data production are simply discarded. (If we wanted to keep a bounded buffer of, say, 10 items instead, we could use onBackpressureBuffer(10).)

After we define the strategy for our case, we can go ahead and use operators like we discussed earlier.
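
As a slightly more end-to-end sketch (not part of the original demo), pairing a fast, time-based source with a slow subscriber makes the drop strategy visible; the exact values dropped will vary from run to run:

import java.time.Duration;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class BackpressureDemo {
    public static void main(String[] args) throws InterruptedException {
        Flux.interval(Duration.ofMillis(1))                            // fast, time-driven producer
            .onBackpressureDrop(d -> System.out.println("dropped " + d))
            .publishOn(Schedulers.single(), 8)                         // small prefetch, so demand runs out quickly
            .subscribe(v -> {
                try { Thread.sleep(10); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                System.out.println("consumed " + v);
            });

        Thread.sleep(2000);                                            // keep the JVM alive long enough to see some output
    }
}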

In this blog we discussed what reactive programming is and how we can use it. I have covered a basic example, and there are many more things that Reactor has support for. However, they were beyond the scope of this blog, since I wanted to introduce the reader to the topic without overwhelming them. In a later blog I may give a detailed example of how different operators and strategies can come together to help scale an application.

Thanks
