03 January 2021

JavaPeanuts reloaded - #askpietro

So, nine years later, here we are: a few published posts, many periods of silence - the longest lasting more than three years! - and many ideas for upcoming posts and content.

But the "Java" prefix no longer matches my professional life: I moved a long time ago to other technologies - .NET and Node.js above all - and my professional interests now range from DevOps to distributed systems, from backend to frontend, from Kotlin to C# 9 to TypeScript. And - last but not least - this Blogger-powered blog shows a bunch of limits in code formatting and layout customization, which are starting to bother me. So I decided to move to another technology stack

and switched to a less language-specific blog name: you can now find my (restyled!) old posts - and soon a series of new ones - on #askpietro. Keep in touch!

23 December 2020

Functional shell: a minimal toolbox


I already wrote a post about adopting a functional programming style in Bash scripts. Here I want to explore how to build a minimal, reusable functional toolbox for my bash scripts, avoiding redefining the basic functional bricks every time I need them.

So, in short: I wish I could write a script (say use-functional-bricks.sh) like the following:

#!/bin/bash
double () {
  expr $1 '*' 2
}

square () {
  expr $1 '*' $1
}

input=$(seq 1 6)
# map is not defined here: it comes from the "globally" available toolbox
square_after_double_output=$(map "square" $(map "double" $input))
echo "square_after_double_output $square_after_double_output"

sum() {
	expr $1 '+' $2
}

# reduce comes from the toolbox, too
sum=$(reduce 0 "sum" $input)
echo "The sum is $sum"

referring to "globally" available functions map and reduce (and maybe others, too), without rewriting them everywhere they are needed and without being bound to external script invocation.

I think we can solve the problem by combining three interesting features available in bash:

  • export functions from scripts (through export -f - see the quick check below)
  • execute scripts in the current shell's environment, through the source command
  • execute scripts when bash starts (typically via ~/.bashrc)
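
To see why export -f is the key ingredient here, consider this quick check (my own sketch; greet is a made-up function): a function exported with export -f is inherited by child bash processes, just like an exported variable.

#!/bin/bash
greet () {
  echo "hello $1"
}
export -f greet

# The child bash process inherits the exported function
# through the environment, exactly like an exported variable.
bash -c 'greet world'    # prints "hello world"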

So I wrote the following script (say functional-bricks.sh):

#!/bin/bash
# Apply the function named by $1 to each of the remaining arguments
map () {
	local f=$1
	shift
	for x
	do
		$f $x
	done
}
export -f map

# Fold the arguments from the left, starting from the accumulator $1,
# through the binary function named by $2
reduce () {
	local acc=$1
	local f=$2
	shift
	shift
	for curr
	do
		acc=$($f $acc $curr)
	done
	echo $acc
}
export -f reduce

and added the following line at the end of my user’s ~/.bashrc file:

. ~/common/functional-bricks.sh

and… voilà! Now map and reduce implemented in functional-bricks.sh are available in all my bash sessions - so I can use them in all my scripts!
And because seeing is believing… if I launch the script use-functional-bricks.sh defined above, I get the following output:

square_after_double_output 4
16
36
64
100
144
The sum is 21
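
The toolbox can grow following the same pattern. For example, a filter brick - my own sketch, not part of the toolbox above - could be added to functional-bricks.sh like this:

# Keep only the arguments for which the predicate $1 succeeds
filter () {
	local f=$1
	shift
	for x
	do
		# A predicate succeeds when its exit code is 0
		if $f $x
		then
			echo $x
		fi
	done
}
export -f filter

With a predicate like is_even () { [ $(($1 % 2)) -eq 0 ]; }, the call filter "is_even" $(seq 1 6) prints 2, 4 and 6.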

20 December 2020

Functional way of thinking: higher order functions and polymorphism


I think higher order functions are the functional way to polymorphism: just as in an OO language you can write a generic algorithm referring to an interface, through which specific behaviour gets plugged into the generic algorithm, you can follow the same "plug something specific into something generic" advice by writing a higher order function referring to a function signature.

Put another way, function signatures are the functional counterpart of OO interfaces.

This is a very simple concept with big implications for how you can design and organize your code. So, I think the best way to metabolize it is to get your hands dirty with higher order functions, in order to become familiar with thinking in terms of functions that consume and return (other) functions.

For example, you can try to reimplement simple higher order functions from a library like Lodash, Ramda or similar. What about implementing an after function that receives an integer n and another function f and returns a new function that invokes f when it is invoked for the n-th time?

function after(n, f) {
	return function() {
		n--
		// invoke f only on the n-th invocation of the returned function
		if(n === 0) {
			f()
		}
	}
}

You can use it like this:

const counter = after(5, () => console.log('5!'))
counter()
counter()
counter()
counter()
counter() // Writes '5!' to the console

So you have a simple tool for counting events, reacting to the n-th occurrence (and you honored the Single Responsibility Principle, too, separating the counting responsibility from the business behavior implemented by f). Each invocation of after creates a scope (more technically: a closure) for subsequent executions of the returned function - the value of n, or of variables defined in the lexical scope of after's invocation, is nothing different from the instance fields you can use in a class implementing an interface.
Generalizing this approach, you can implement subtle variations of the after function: you can for example write an every function that returns a function that calls the f parameter of the every invocation every n times:

function every(n, f) {
	let m = n
	return function() {
		m--
		if(m === 0) {
			// reset the countdown, so f is invoked every n invocations
			m = n
			f()
		}
	}
}
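
A quick usage check (my own example, in the same spirit as the one above):

const tick = every(3, () => console.log('tick'))
tick(); tick(); tick() // Writes 'tick'
tick(); tick(); tick() // Writes 'tick' again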

This is my way to see functional composition through higher order functions: another way to plug my specific, business-related behavior into a generic - higher order - piece of code, without reimplementing the generic algorithm the latter implements.

Bonus track: what is the higher order behaviour implemented by the following function?

function canYouGuessMyName (items, f) {
 return items.reduce((acc, curr) => ({ ...acc, [f(curr)]: (acc[f(curr)] || []).concat([curr]) }), {})
}
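
If you want to check your guess, here is a possible invocation (my own example) together with its result:

const result = canYouGuessMyName([1, 2, 3, 4, 5], n => n % 2 ? 'odd' : 'even')
console.log(result) // { odd: [1, 3, 5], even: [2, 4] }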


12 December 2020

Functions as first-class citizens: the shell-ish version


The idea of composing multiple functions together, passing one or more of them to another as parameters - generally referred to as using higher order functions - is a pattern I'm very comfortable with, since I read, about ten years ago, the very enlightening book Functional Thinking: Paradigm Over Syntax by Neal Ford. The main idea behind this book is that you can adopt a functional mindset programming in any language, whether it supports functions as first-class citizens or not. The examples in that book are mostly written in Java (version 5 or 6), a language that supports (something similar to) functions as first-class citizens only from version 8. As I said, it's more a matter of mindset than anything else.

So, a few days ago, during a lab of the Operating Systems course, while waiting for the solutions written by the students, I wondered if it is possible to take a functional approach, composing functions (or something similar…), in a (bash) shell script.

(More in detail, the problem triggering my thinking about this topic was: how to reuse a (not so much) complicated piece of code involving searching for files and iterating over them in two different use cases, which differed only in the action applied to each file.)

My answer was Probably yes!, so I tried to write some code and ended up with the solution below.

The main point is - imho - that just as in a language supporting functions as first-class citizens the bricks to be put together are functions, in a (bash) script the minimal bricks are commands: generally speaking, a command can be a binary or a script - but functions defined in (bash) scripts can be used as commands, too. After making this mental switch, it's not particularly difficult to find a (simple) solution:

action0.sh - An action to be applied to each element of a list

#!/bin/bash
echo "0 Processing $1"

action1.sh - Another action to be applied to each element of a list

#!/bin/bash
echo "1 Processing $1"

foreach.sh - Something similar to the List<T>.ForEach(Action<T>) method of the .NET standard library (it's actually a higher order program)

#!/bin/bash
action=$1
shift
for x
do
    $action $x
done

main.sh - The main program, reusing foreach's logic in multiple cases by passing different actions to the higher order program

#!/bin/bash
./foreach.sh ./action0.sh $(seq 1 6)
./foreach.sh ./action1.sh $(seq 1 6)

./foreach.sh ./action0.sh {A,B,C,D,E}19
./foreach.sh ./action1.sh {A,B,C,D,E}19

Following this approach, you can apply different actions to a bunch of files without duplicating the code that finds them… and you do so by applying a functional mindset to bash scripting!

In the same way it is possible to implement something like the classic map higher order function using functions in a bash script:

double () {
    expr $1 '*' 2
}

square () {
    expr $1 '*' $1
}

map () {
    f=$1
    shift
    for x
    do
        echo $($f $x)
    done
}

input=$(seq 1 6)
double_output=$(map "double" $input)
echo "double_output --> $double_output"
square_output=$(map "square" $input)
echo "square_output --> $square_output"
square_after_double_output=$(map "square" $(map "double" $input))
echo "square_after_double_output --> $square_after_double_output"

square_after_double_output, as expected, contains values 4, 16, 36, 64, 100, 144.

In conclusion… no matter what language you are using: using it functionally, composing bricks and higher order bricks together, is just a matter of mindset!


19 February 2019

Set of Responsibility and IoC

I recently read Pietro's post about a possible adaptation of the "Chain of Responsibility" pattern: the "Set of Responsibility". It is very similar to its "father", because each Handler handles the responsibility for a Request, but in this case it doesn't propagate the responsibility check to other handlers. There is responsibility, without a chain!

In this article I'd like to present the usage of this pattern with an IoC container, where the Handlers aren't added to the HandlerSet list but are provided by the container. In this way you can add a new responsibility to the system simply by adding a new Handler to the container, without changing other parts of the implemented code (e.g. the HandlerSet), in full compliance with the open/closed principle.

For the code I'll use the Spring Framework (Java), because it has a good IoC container and provides a set of classes to work with it. The inversion of control principle and dependency injection are first-class citizens in Spring.

Here is the solution adopted, with three responsibilities X, Y and Z, and a brief description of it. (The UML class diagram from the original post is omitted here.)

@Component
public class XHandler implements Handler {

    @Override
    public Result handle(Request request) {
       return ((RequestX) request).doSomething();
    }

    @Override
    public boolean canHandle(Request request) {
        return request instanceof RequestX;
    }
} 
The @Component annotation on XHandler tells Spring to instantiate an object of this type in the IoC container.
public interface HandlerManager {
    Result handle(Request request) throws NoHandlerException;
}
@Service
public class CtxHandlerManager implements HandlerManager {

    private ApplicationContext applicationContext;

    @Value("${base.package}")
    private String basePackage;

    @Autowired
    public CtxHandlerManager(ApplicationContext applicationContext) {
        this.applicationContext = applicationContext;
    }

    @Override
    public Result handle(Request request) throws NoHandlerException {
        Optional<Handler> handlerOpt = findHandler(request);
        if ( !handlerOpt.isPresent() ) {
            throw new NoHandlerException();
        }
        Handler handler = handlerOpt.get();
        return handler.handle(request);
    }

    private Optional<Handler> findHandler(Request request) {
        ClassPathScanningCandidateComponentProvider provider = createComponentScanner();

        for (BeanDefinition beanDef : provider.findCandidateComponents(basePackage)) {
            try {
                Class<?> clazz = Class.forName(beanDef.getBeanClassName());
                Handler handler = (Handler) this.applicationContext.getBean(clazz);
                //find responsible handler for the request
                if (handler.canHandle(request)) {
                    return Optional.ofNullable(handler);
                }
            } catch (ClassNotFoundException e) {
                e.printStackTrace();
            }
        }
        return Optional.empty();
    }

    private ClassPathScanningCandidateComponentProvider createComponentScanner() {
        ClassPathScanningCandidateComponentProvider provider
                = new ClassPathScanningCandidateComponentProvider(false);
        provider.addIncludeFilter(new AssignableTypeFilter(Handler.class));
        return provider;
    }
}
CtxHandlerManager works like a Handler dispatcher. The handle method finds the right Handler and calls its handle method, which invokes the doSomething method of the Request.

In the findHandler method I use the Spring ClassPathScanningCandidateComponentProvider class with an AssignableTypeFilter on the Handler class. I call findCandidateComponents on a base package (the value is set by the @Value Spring annotation) and, for each candidate, the canHandle method checks the responsibility. And that's all!

In the Sender class the HandlerManager implementation (CtxHandlerManager) is injected by the Spring IoC container by autowiring:


@Service
public class Sender {

    @Autowired
    private HandlerManager handlerProvider;

    public void callX() throws NoHandlerException {
        Request requestX = new RequestX();
        Result result = handlerProvider.handle(requestX);
        // ...
    }
}

This solution lets you add a new responsibility simply by creating a new Request implementation and a new Handler implementation to manage it. By applying the @Component annotation to the Handler you allow Spring to autodetect the class for dependency injection when annotation-based configuration and classpath scanning are used. On the next application startup the new class is provided by the IoC container, and it is used simply by invoking the HandlerManager.
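
For example, adding a hypothetical fourth responsibility W would require nothing more than the following sketch (RequestW and WHandler are made-up names):

@Component
public class WHandler implements Handler {

    // No existing class (CtxHandlerManager, Sender, ...) needs to change:
    // the container discovers this new Handler through classpath scanning.
    @Override
    public Result handle(Request request) {
        return ((RequestW) request).doSomething();
    }

    @Override
    public boolean canHandle(Request request) {
        return request instanceof RequestW;
    }
}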

In the next post I'd like to present a possible implementation of a component factory using the "Set of Responsibility" pattern in conjunction with another interesting pattern, the "Builder" pattern.


Happy coding   
Mauro

11 October 2018

Set of responsibility

So, three and a half years later... I'm back.

According to Wikipedia, the chain-of-responsibility pattern is a design pattern consisting of a source of command objects and a series of processing objects. Each processing object contains logic that defines the types of command objects that it can handle; the rest are passed to the next processing object in the chain.
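
In code, the classic chain structure looks something like this (my own C# sketch of the textbook pattern, not code from the original post):

// Classic chain of responsibility: each handler holds a reference to the next one.
abstract class ChainHandler {
    private readonly ChainHandler next;

    protected ChainHandler(ChainHandler next) {
        this.next = next;
    }

    public Result Handle(Request request) {
        if (CanHandle(request)) return DoHandle(request);
        if (next == null) throw new InvalidOperationException("Unhandled request");
        return next.Handle(request); // propagate along the chain
    }

    protected abstract bool CanHandle(Request request);
    protected abstract Result DoHandle(Request request);
}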

In some cases I want to benefit from the flexibility allowed by this pattern without being tied to the chain-based structure, e.g. when there is an IoC container involved: the Handlers in the pattern all share the same interface and each one holds a reference to the next, so it's difficult to leave their instantiation to the IoC container.

In such scenarios I use a variation of the classic chain of responsibility: there are still responsibilities, of course, but there is no chain out there.

I like to call my variation set of responsibility (or list of responsibility: see below for a discussion about this) - the structure is the one that follows (C# code):

interface Handler {
    Result Handle(Request request);
  
    bool CanHandle(Request request);
}


class HandlerSet {
    IEnumerable<Handler> handlers;

    HandlerSet(IEnumerable<Handler> handlers) {
        this.handlers = handlers;
    }

    Result Handle(Request request) {
        return this.handlers.Single(h => h.CanHandle(request)).Handle(request);
    }
}

class Sender {
    HandlerSet handler;
 
    Sender(HandlerSet handler) {
        this.handler = handler;
    }

    void FooBar() {
        Request request = ...;
        var result = this.handler.Handle(request);
    }
}

One interesting scenario in which I've applied this pattern is when the Handlers' input type Request hides a hierarchy of different subclasses and each Handler implementation is able to deal with a specific Request subclass: when using polymorphism is not viable, e.g. because those classes come from an external library and are not under our control, we can use set of responsibility to clean up the horrible code that follows:


class RequestX : Request {}

class RequestY : Request {}

class RequestZ : Request {}

class Sender {
    Result result = null;

    void FooBar() {
        Request request = ...;

        if(request is RequestX) {
            result = HandleX((RequestX)request);
        } else if (request is RequestY) {
            result = HandleY((RequestY)request);
        } else if (request is RequestZ) {
            result = HandleZ((RequestZ)request);
        }
    }
}

We can't avoid using the is operator and the cast, but we can hide them behind a polymorphic interface, adopting a design that conforms to the open-closed principle:

class Sender {
    HandlerSet handler;

    Sender(HandlerSet handler) {
        this.handler = handler;
    }

    void FooBar() {
        Request request = ...;
        var result = this.handler.Handle(request);
    }
}

class HandlerX : Handler {
    bool CanHandle(Request request) => request is RequestX;

    Result Handle(Request request) {
        return HandleX((RequestX)request);
    }
}

class HandlerY : Handler {
    bool CanHandle(Request request) => request is RequestY;

    Result Handle(Request request) {
        return HandleY((RequestY)request);
    }
}

class HandlerZ : Handler {
    bool CanHandle(Request request) => request is RequestZ;

    Result Handle(Request request) {
        return HandleZ((RequestZ)request);
    }
}

Adding a new Request subclass is now only a matter of adding a new implementation of the Handler interface (say, HandlerAA), without the need to touch existing code.

In cases like this one I use the name set of responsibility to stress the idea that exactly one handler of the set can handle a single, specific request (that's why I use the handlers.Single(...) method in the HandlerSet implementation).

When the order in which the handlers are tested matters, we can adopt a different selection strategy than Single(...): in this case I like to call the pattern variation list of responsibility.
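
A minimal sketch of that variation (my own code, assuming the handlers are registered in priority order):

class HandlerList {
    readonly List<Handler> handlers;

    public HandlerList(IEnumerable<Handler> handlers) {
        // Registration order expresses priority: earlier handlers win.
        this.handlers = handlers.ToList();
    }

    public Result Handle(Request request) {
        // First(...) instead of Single(...): more than one handler may match
        // the request, and the first matching one is applied.
        return this.handlers.First(h => h.CanHandle(request)).Handle(request);
    }
}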

When more than one handler can handle a specific request, we can think of variations of this pattern that select all the applicable handlers (i.e. those handlers whose CanHandle method returns true for the current request) and apply them to the incoming request.
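
A possible sketch for this case too (again my own code, reusing the Handler interface above and collecting all the results):

class BroadcastHandlerSet {
    readonly IEnumerable<Handler> handlers;

    public BroadcastHandlerSet(IEnumerable<Handler> handlers) {
        this.handlers = handlers;
    }

    public IEnumerable<Result> Handle(Request request) {
        // Every applicable handler is applied to the incoming request.
        return this.handlers
            .Where(h => h.CanHandle(request))
            .Select(h => h.Handle(request))
            .ToList();
    }
}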