Monoids are not démodé and Thursdays are actually the new Fridays

No, it’s not the title of the latest Coixet movie. When we talk about category-theory concepts like monoids, we tend to think they remain on the fringes as something merely experimental (even though purely functional) with no direct application in the real world. Something similar happened when you learnt square roots at school and there never seemed to be a proper moment to start using them in real life…

The use case

This morning, ah naïve me, I tried to make this work:

type Passengers = Int
type MaxCapacity = Passengers
type Plane = (Passengers, MaxCapacity)

val planes: List[Plane] =
  List(1 -> 1, 2 -> 3, 3 -> 3)

val (totalPassengers, totalCapacity) = 
  planes.sum 
//  ERROR: Could not find implicit value 
//  for parameter num: Numeric[(Int, Int)]

OK, fair enough: Scala needs something, some evidence, to know how to add integer tuples.
Before we start fighting with that ‘evidence’, let’s try to make it work in a more mechanical way:

val sum = ((0,0) /: planes){
    case ((totalPas,totalCap), (passengers, capacity)) => 
      (totalPas + passengers, totalCap + capacity)
  }

Okay, it works. But it should be simpler, so let’s get back to the idea of Numeric[N] evidence.

Implementing Numeric[N]

We need a Numeric[(A, B)], but before implementing it (it has a lot of abstract methods) let’s sweep under the rug all those methods we don’t really want to focus on in this example. To do so, let’s create an intermediate layer that leaves all those methods unimplemented (which is not the same as ‘abstract’):

trait LeanNumeric[T] extends Numeric[T] {
  override def fromInt(x: Int): T = ???
  override def toInt(x: T): Int = ???
  override def minus(x: T, y: T): T = ???
  override def times(x: T, y: T): T = ???
  override def negate(x: T): T = ???
  override def toLong(x: T): Long = ???
  override def toFloat(x: T): Float = ???
  override def toDouble(x: T): Double = ???
  override def compare(x: T, y: T): Int = ???
}

Let’s call this abomination LeanNumeric (it only contains the essentials for our example). Now we can define the method that generates the evidence for any Tuple2:

implicit def numeric[A, B](
  implicit nA: Numeric[A],
  nB: Numeric[B]): Numeric[(A, B)] = {
  new LeanNumeric[(A, B)]{
    override def zero = (nA.zero, nB.zero)
    override def plus(x: (A, B), y: (A, B)): (A, B) = {
      val (a1, b1) = x
      val (a2, b2) = y
      (nA.plus(a1, a2), nB.plus(b1, b2))
    }
  }
}

If we put the implicit into scope and run planes.sum again… boom! Magic.
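
For reference, once that implicit is in scope the original snippet compiles and folds both components of the tuples at once (the expected result follows from the plane list above):

planes.sum // (6, 7): 6 passengers and 7 seats of capacity in total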

Num…oid

We don’t have to master category theory to realize that Numeric[N] may be a thousand things, but it satisfies at least two properties:

  • The append operation: the sum – given n1 and n2 of type N, it returns a new N element. This feature alone (closure plus associativity) is enough to consider it a Semigroup.

  • And, additionally, the zero element: the identity for that sum.

Seriously? Isn’t it obvious enough? My dear friends, the monoid is back in town!
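
Before reaching for a library, here’s a minimal sketch (our own simplified traits, not scalaz’s exact definitions) of what those two properties look like in code:

// Closed and associative binary operation
trait MySemigroup[T] {
  def append(a: T, b: T): T
}

// A semigroup plus an identity element for append
trait MyMonoid[T] extends MySemigroup[T] {
  def zero: T
}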

Implementation with scalaz.Monoid

Bearing in mind that Numeric has (at least) these two properties, let’s re-implement the implicit using scalaz’s Monoid. We first define the monoid for integers and the tuple monoid, which requires a monoid for each type that makes up the tuple (easy peasy):

import scalaz._

implicit object IntMonoid extends Monoid[Int]{
  override def zero: Int = 0
  override def append(f1: Int, f2: => Int): Int = f1 + f2
}

implicit def tupleMonoid[A,B](
  implicit mA: Monoid[A],
  mB: Monoid[B]): Monoid[(A,B)] = {
  new Monoid[(A, B)] {
    override def zero: (A, B) = (mA.zero, mB.zero)
    override def append(f1: (A, B), f2: => (A, B)): (A, B) = {
      val (a1, b1) = f1
      lazy val (a2, b2) = f2
      (mA.append(a1,a2), mB.append(b1, b2))
    }
  }
}

So far so good, right?

After that, we implement the implicit that provides a Numeric as long as there’s a Monoid for the type:

implicit def numeric[T](
  implicit m: Monoid[T]): Numeric[T] = {
  new LeanNumeric[T]{
    override def zero = m.zero
    override def plus(x: T, y: T): T = m.append(x, y)
  }
}

planes.sum //(6,7)

And it’s awesomely easy to abstract over whatever T means (a tuple? a dog? …). As long as there’s a Monoid for it, you can define a LeanNumeric.

You can find the gist here.

See you in the next functional madness.

Peace out!

Scalera tip: Handling sticky implicit contexts

A couple of days ago (translation for the masses: like a month ago) I noticed Viktor Klang tweeting about removing annoying implicit evidence parameters from methods. Some of the things I read seemed so elegant that I felt forced to share a few related ideas with those of you who don’t follow him on Twitter (@viktorklang).

Setting some context

Imagine the typical polymorphic method where we need an execution context for evaluating some Future:

import scala.concurrent.{ExecutionContext, Future}

def myMethod[T]
  (element: T)
  (implicit ev: ExecutionContext): Future[Boolean] = ???

You could say it’s as typical as it is disgusting: having to repeat the exact same words in the next 10 method definitions: (implicit ev: ExecutionContext).
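
Something like this, over and over again (hypothetical method names, same imports as above):

def fetchUser(id: Int)(implicit ev: ExecutionContext): Future[Boolean] = ???
def fetchOrders(id: Int)(implicit ev: ExecutionContext): Future[Boolean] = ???
def fetchInvoices(id: Int)(implicit ev: ExecutionContext): Future[Boolean] = ???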

Playing with type alias

The happy idea that is being proposed is to define a type alias like the following one:

type EC[_] = ExecutionContext

This way, with a bit of syntactic sugar, we can re-define the method signature:

def myMethod[T:EC](element: T): Future[Boolean] = ???
myMethod("hi")

Beautiful, isn’t it?
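
If it helps to see why this works: a context bound is just sugar for an extra implicit parameter of the bound type applied to T, and our alias ignores that T. A rough sketch of the desugared form (the name is ours):

def myMethodDesugared[T](element: T)(implicit ev: EC[T]): Future[Boolean] = ???
// EC[T] is ExecutionContext whatever T is, so the compiler just looks for an ExecutionContext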

Some other possibilities

Non-polymorphic methods

In case our method isn’t parameterized, we have to add a bit of boilerplate (a wildcard for the type that parameterizes the method). In essence, the same principle applies:

def myMethod[_:EC](element: Int): Future[Boolean] = ???
myMethod(2)

Multiple implicit contexts

In the not-so-crazy case where we need several implicit parameters of different natures, we have to define as many type aliases as different implicit types we require:

type EC[_] = ExecutionContext
type MongoDB[_] = MongoDBDatabase

def myMethod[_:EC:MongoDB](element: Int): Future[Boolean] = ???

But what if …?

Multiple implicit parameters with same type

In case we have several implicit parameters that share the same type,

def myMethod
  (element: Int)
  (implicit ev1: ExecutionContext, ev2: ExecutionContext): Future[Boolean] = ???

it turns out that …

Well, by definition that’s impossible, given that it would incur an ambiguity issue when resolving implicits. It’s true that Scala allows this kind of signature, but we could only invoke it by passing the second parameter group explicitly:

myMethod(2)(ec1,ec2)

which is kind of…

Type-constructor implicit contexts

When we have implicit parameters that are also type constructors like List[T], Future[T], Option[T]

…well, it actually depends.

Case 1

If the type that parameterizes the method and the one that parameterizes the evidence are not related, there’s no big deal: we define another type alias and move on:

type EC[_] = ExecutionContext
type MongoDB[_] = MongoDBDatabase
type IntOpt[_] = Option[Int]
type StrList[_] = List[String]

def myMethod[_:EC:MongoDB:IntOpt:StrList](
  element: Int): Future[Boolean] = ???

Which would be equivalent to:

def myMethod(
  element: Int)(
  implicit ev1: ExecutionContext,
  ev2: MongoDBDatabase,
  ev3: Option[Int],
  ev4: List[String]): Future[Boolean] = ???

Case 2

If the type that parameterizes the method and the one that parameterizes the evidence have to match…

Well, that can’t be done with this trick: the syntactic sugar we’re using relies on the alias discarding its type parameter, so it can’t tie the evidence’s type to the method’s. Maybe it was too pretty for our bodies 🙂

See you in the next post. Peace out!

Scalera tip: Keep your actor’s state with no VAR at all!

It’s pretty well known that using VARs is, apart from unethical, evil itself, some kind of hell, it makes kittens die and many other things you’ve probably heard before, any of which could eventually be the cause of a painful, slow death.

The essence of functional programming is therefore immutability: every time I mutate an element, I actually generate a new one.

What about Akka actors?

When we talk about actors, we can define them as stateful computation units that sequentially process a message queue, reacting (or not) to each of these messages.

It’s always been said that, in order to keep state within an actor’s logic, it’s OK to use VARs:

no concurrency problems can happen: it’s the actor itself and nobody else that accesses that var, and it only processes one message at a time.

But maybe we could renounce this premise if we found some way to redefine the actor’s behavior based on a new state.

Mortal approach

If we follow the previously described philosophy, the very first (and more straight forward) approach for keeping some actor’s state would be pretty similar to the following:

import akka.actor.Actor

case object Increase

class Foo extends Actor {
  var state: Int = 0
  override def receive = {
    case Increase => state += 1
  }
}

Every time an Increase message arrives, we modify the state value by adding 1.
So far so easy, right?

Immutable approach

Nevertheless, we could define a receive function parameterized by some state, so that when a message arrives, this parameter is the state to take into account.

If the circumstances for mutating the state take place, we just invoke the become method, which modifies the actor’s behavior. In our case, that behavior mutation consists of changing the state value.

If we use the previously defined example:

class Foo extends Actor{
  def receive(state: Int): Receive = {
    case Increase =>
      context.become(
        receive(state + 1),
        discardOld = true)
  }
  override def receive = receive(0)
}

we can notice that the function defined by receive is parameterized by a state argument. When an Increase message arrives, we invoke become to modify the actor’s behavior, passing the new state to handle as an argument.

If we wanted some extra legibility, we could even abstract over any updatable-state actor:

trait SActor[State] extends Actor {
  val initialState: State
  def receive(state: State): Receive
  def update(state: State): Unit =
    context.become(
      receive(state),
      discardOld = true)
  override def receive =
    receive(initialState)
}

This way, we just have to define the initial state of the actor, a new parameterized receive function and a new update function that takes care of performing the proper become call, as explained before.
With all this in mind, we now have a cuter, brand new Foo actor:

class Foo extends SActor[Int] {
  val initialState = 0
  def receive(state: Int): Receive = {
    case Increase => update(state + 1)
  }
}

Potential hazardous issues

Please do notice that, in the featured example, we’ve used the second argument of become: discardOld = true. This argument determines whether the new behavior should be stacked on top of the old one or, au contraire, completely replace the previous behavior.

Let’s suppose we used discardOld = false. If every single time a new Increase message arrived we stacked a new behavior, we would end up with a wonderful overflow issue.
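
For illustration only, a sketch of that risky variant (reusing the Increase message and the Actor import from above): every message pushes a new behavior onto the actor’s behavior stack instead of replacing the old one.

class StackingFoo extends Actor {
  def receive(state: Int): Receive = {
    case Increase =>
      // discardOld = false keeps the old behavior underneath the new one
      context.become(receive(state + 1), discardOld = false)
  }
  override def receive = receive(0)
}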

See you in the next tip.

Peace out 🙂

More lazy values, the State monad and other stateful stuff

In the previous post, we talked about lazy evaluation in Scala. At the end of that post, we asked an interesting question: does a Lazy value hold a state?


In order to answer that question, we’ll try to define a type that could represent the Lazy values:

trait Lazy[T] {

  val evalF : () => T

  val value: Option[T] = None

}
object Lazy{
  def apply[T](f: => T): Lazy[T] =
    new Lazy[T]{ val evalF = () => f }
}

As you can see, our Lazy type is parameterized by some type T that represents the actual value type (Lazy[Int] would be the representation of a lazy integer).
Besides that, we can see that it’s composed of the two main features of a Lazy value:

  • evalF : a zero-parameter function that evaluates the contained T expression when its ‘apply’ method is invoked.
  • value : the result of interpreting the evalF function. This part denotes the state of the Lazy type, and it only admits two possible values: None (not evaluated yet) or Some(t) (already evaluated, holding the result).

We’ve also added a companion object that defines a constructor for Lazy instances: it receives a by-name parameter that becomes the result of the evalF function.


Now the question is: how do we join the evaluation function and the value it returns so we can make Lazy a stateful type? We define the ‘eval’ function this way:

trait Lazy[T] { lzy =>

  val evalF : () => T

  val value: Option[T] = None

  def eval: (T, Lazy[T]) = {
    val evaluated = evalF.apply()
    evaluated -> new Lazy[T]{ mutated =>
      val evalF = lzy.evalF
      override val value = Some(evaluated)
      override def eval: (T, Lazy[T]) = 
        evaluated -> mutated
    }
  } 

}

The ‘eval’ function returns a two-element tuple:

  • The value resulting from evaluating the expression that the lazy value stands for.
  • A new version of the Lazy value that contains the new state: the result of evaluating T.

If you take a closer look, what the ‘eval’ method does first is invoke the evalF function, so it can retrieve the T value that had remained unevaluated until that point.
Once that’s done, we return it together with the new version of the Lazy value. This new version (let’s call it the mutated version) will hold in its ‘value’ attribute the result of having invoked the evalF function. We also change its eval method, so that future invocations return the Lazy instance itself instead of creating new instances (because its state won’t actually change anymore, which is how Scala’s lazy definitions behave).
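
A quick usage sketch of this home-made Lazy (the println is only there to detect when evaluation happens):

val lzy = Lazy { println("evaluating"); 42 }
val (v1, lzy2) = lzy.eval // prints "evaluating", v1 == 42
val (v2, _) = lzy2.eval   // prints nothing this time, v2 == 42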

The interesting question that comes next is: is this an isolated case? Could anything else be defined as stateful? Let’s do an abstraction exercise.

Looking for generics: stateful stuff

Let’s think about a simple stack:

sealed trait Stack[+T]
case object Empty extends Stack[Nothing]
case class NonEmpty[T](head: T, tail: Stack[T]) extends Stack[T]

The implementation is really simple. But let’s focus on the Stack trait and on a hypothetical pop method that pops an element from the stack, returning it together with the rest of the stack:

sealed trait Stack[+T]{
  def pop(): (Option[T], Stack[T])
}

Does it sound familiar to you? It is mysteriously similar to

trait Lazy[T]{
  def eval: (T, Lazy[T])
}

isn’t it?

If we try to refactor to extract a common trait between Lazy and Stack, we could define a much more abstract type called State:

trait State[S,T] {
  def apply(s: S): (T, S)
}

Simple but pretty: the State trait is parameterized by two types: S (the state type) and T (the info or additional element returned by the specified state mutation). Though simple, it’s also a very common pattern when designing Scala systems. There’s always something that holds a certain state. And everything that has a state mutates. And if something mutates in a fancy and smart way… oh man.
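
Just to make the shape tangible, here’s a sketch (the name popState is ours) of the Stack’s pop expressed as one of these State values, reusing the constructors defined above:

def popState[T]: State[Stack[T], Option[T]] =
  new State[Stack[T], Option[T]] {
    def apply(s: Stack[T]): (Option[T], Stack[T]) = s match {
      case Empty                => (None, Empty)
      case NonEmpty(head, tail) => (Some(head), tail)
    }
  }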

That already exists…


All this story, which seems taken from a post-modern essay, has already been a subject of study for people… that study stuff. Without going into greater detail, in the ScalaZ library you can find the State monad which, apart from what was previously pointed out, comes fully equipped with composability and everything that being a monad entails (semigroup, monoid, …).

If we define our Lazy type with the State monad, we’ll get something similar to:

import scalaz.State

type Lazy[T] = (() => T, Option[T])

def Lazy[T](f: => T) = (() => f, None)

def eval[T] = State[Lazy[T], T]{
  case ((f, None)) => {
    val evaluated = f.apply()
    ((f, Some(evaluated)), evaluated)
  }
  case s@((_, Some(evaluated))) => (s, evaluated) 
}

Decrypting the Egyptian hieroglyph: given the State[S, T] monad, our S state will be a tuple composed of exactly what represents a lazy expression (as we described before):

type Lazy[T] = (() => T, Option[T])

  • A Function0 that represents the lazy evaluation of T
  • The T value, which may or may not have been evaluated yet

To build a Lazy value, we generate a tuple containing a function that wraps the expression passed as the by-name parameter of the Lazy method, and the None value (because the Lazy guy hasn’t been evaluated yet):

def Lazy[T](f: => T) = (() => f, None)

Last, but not least (it’s actually the most important part), we define the only state transition possible for this type: evaluation. This is the key when designing any State type: modeling what our S type stands for and the possible state transitions we want to consider.

In the case of the Lazy type, there are two possible situations: the expression hasn’t been evaluated yet (in that case, we evaluate it and return the same function along with the result) or it has already been evaluated (in that case we don’t change the state at all and just return the evaluation result):

def eval[T] = State[Lazy[T], T]{
  case ((f, None)) => {
    val evaluated = f.apply()
    ((f, Some(evaluated)), evaluated)
  }
  case s@((_, Some(evaluated))) => (s, evaluated) 
}


In order to check that we still have the features we initially described for the Lazy type (it’s evaluated at most once, and only when necessary, …), we check the following assertions:

var sideEffectDetector: Int = 0

val two = Lazy {
  sideEffectDetector += 1
  2
}

require(sideEffectDetector==0)

val (_, (evaluated, evaluated2)) = (for {
  evaluated <- eval[Int]
  evaluated2 <- eval[Int]
} yield (evaluated, evaluated2)).apply(two)

require(sideEffectDetector == 1)
require(evaluated == 2)
require(evaluated2 == 2)

Please do notice that, as we mentioned before, what is defined inside the for-comprehension are the transitions or steps the state will go through. That is, we define the mutations that any S state will suffer. Once the recipe is defined, we apply it to whatever initial state we want.
In this particular case, we define as initial state a lazy integer that will hold the value 2. To check how many times our Lazy guy is evaluated, we just add a very dummy var used as a counter. After that, we state inside our recipe that the state must mutate twice by using the eval operation. Afterwards we check that the expression of the Lazy block has been evaluated only once and that the returned value is the expected one.
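
The same eval recipe can be applied to any other initial Lazy state; a small illustrative check, using the same .apply call as above:

val three = Lazy(3)
val ((_, cached), result) = eval[Int].apply(three)
require(result == 3)
require(cached == Some(3))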

I wish you the best tea for digesting all this crazy story 🙂
Please feel free to add comments/threats at the end of this post or even on our gitter channel.

See you on next post.
Peace out!

Lazy values

Just in case you’ve been living in a hole for the last ten years and didn’t know: Scala allows managing lazy values.


In Scala, we can define a value that won’t be evaluated until it is explicitly invoked. For example:

lazy val myLazyInt: Int = { println("hi"); 2 }

As you can see, using the lazy notation, we’ve lazily defined an integer that stands for the literal 2 and also prints a ‘hi’ when it’s evaluated.
Apart from violating the biggest functional programming law (referential transparency) due to the insidious println, side effects, death, destruction, blah blah…


Notice that, if we execute the code block, the aforementioned println is not executed. The block is not evaluated until some other expression makes use of our lazy integer value:

val result = myLazyInt + 3
//woa! somebody printed 'hi' and I have a brand new 5 inside 'result'

Once myLazyInt is evaluated, its value won’t be calculated again, no matter how many times it’s invoked. Therefore, the mysterious greeting won’t salute us anymore:

lazy val myLazyInt: Int = { println("hi"); 2 }
myLazyInt
//"hi"
myLazyInt //nothing special happened now ...
myLazyInt //no matter how many times you invoke it...
myLazyInt //seriously, let it go...

Curious. The question that comes up is: if I define a lazy value and pass it as a method parameter, what happens? Is it evaluated at the very moment the method is invoked? Maybe inside the method? That depends on the way you define your method’s parameters.

Call by name vs. call by value

When defining a method, people usually define its parameters ‘by value’, meaning that we expect the parameter to be already evaluated when it is passed to the method:

def myMethod(someInteger: Int): Int = {
  println("begin")
  val result = someInteger + 2
  println("end")
  result
}

If we invoke our method with any integer:

val n = 3
val result = myMethod(n)
//"begin"
//"end"
require(result == 5)

We just print both traces and it’s no big deal. Nothing new so far.
What happens if we now pass our lazy value to the method? At which exact moment will it print the greeting? Before or after the method traces?
Let’s try:

myMethod(myLazyInt)
//"hi"
//"begin"
//"end"

It printed it before the method traces, which means that our lazy value was evaluated just before the method was invoked. Why does this happen? Because Scala, by default, needs the exact value of someInteger in order to execute myMethod.
That’s a pity if we want to keep myLazyInt lazy until the very last moment. How do we fix it? We pass the argument ‘by name’, that is, indicating how the value has to be resolved instead of explicitly passing the value:

def myMethod(someInteger: => Int): Int = {
  println("begin")
  val result = someInteger + 2
  println("end")
  result
}

This way (someInteger: => Int), we indicate that our method requires as parameter an expression that, in the end, returns an integer, and not an integer itself. If we now execute the method passing our not-yet-evaluated lazy value:

myMethod(myLazyInt)
//"begin"
//"hi"
//"end"

Voilà! We made it. The ‘hi’ trace is not printed until the exact value of our lazy guy is required inside the method.

Some other ways to express laziness

Another way to express a lazy evaluation, which could be extremely useful, is the Function0 type:

trait Function0[+R]{
  def apply(): R
}

It’s just a function that requires zero parameters and returns a single output value. It’s expressed as follows:

val f: () => Int =
  () => 2
f.apply() //2
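
One detail worth keeping in mind (a tiny illustrative check): unlike a lazy val, a Function0 caches nothing, so its body is re-evaluated on every apply:

val g: () => Int = () => { println("hi"); 2 }
g() // prints "hi" and returns 2
g() // prints "hi" again: no caching here, unlike lazy val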

And that’s pretty much everything… Having understood, in rough outline, how laziness works in Scala, let’s move on to more interesting questions: does a Lazy value represent something stateful?
The answer (or more extra questions) will be available in the following post.

Peace out!

Algebraic Data Types in Scala

What a delightful idea to come back from vacation with batteries fully charged and with some wacky ideas in our minds to write about. The best of these came under the very influence of the moon.

ADT?

An Algebraic Data Type (ADT from now on, so we can save money on every word in WordPress) is just a way to express a data type (Cat, Dog, Prevarication) based on an algebra. And when we say ‘algebra’, we mean sums and products of types (of Integers, Cats, Cars, Prevarications, …). For example:

Train = Locomotive + Wagon * Train

How does one read that? A train may be: a locomotive OR a wagon AND another train (which may, in turn, be a wagon and another train, which may in turn be a …).
Take a look at both the disjunction and the conjunction: the sum represents an OR and the product represents an AND (just like Boolean algebra).

It’s also worth noticing that from this type definition you can infer a recursive pattern. With the Train type, the base case is definitely the Locomotive and, in the recursive case, we have a wagon and another train. As we’ll see later, this pattern is very frequent and makes the type definition easier.

And how are sum and product represented in Scala?

The easiest way to represent the type sum (also called a co-product) in a paradigm with polymorphism support (in general), and in Scala (in particular), is plain inheritance. If we have the following case:

sealed trait Animal
case object Cat extends Animal
case object Dog extends Animal

we’re indeed expressing a type co-product:

Animal = Cat + Dog

that is, an Animal can only be either a Cat or a Dog.

Regarding the product, we could define it as the set of attributes that make up a certain type instance. For example,

case class Student(name: String, age: Int)

expressed as a product, would be as follows:

Student = String * Int

So, for building a Student instance, you need a String and an Int.

If we now try to materialize the previously described train model (with some extras), we’ll notice that

Wagon = String * Int
Train = Locomotive + Wagon * Train

is translated into Scala as

sealed trait Train
case object Locomotive extends Train
case class Wagon(model: String, passengers: Int)
case class Nexus(wagon: Wagon, next: Train) extends Train

So what is it good for?


…absolutely nothing, listen to me ♩♪♫♬.
If you think, my fellow, that this is stuff nobody uses, you haven’t thought about which structures of the Scala standard library are defined this way. Lists, for example, are defined (in essence) as:

trait List[+T]
case object Nil extends List[Nothing]
case class ::[T](head: T, tail: List[T]) extends List[T]

That is, a List can be either an empty list or an element followed by another list.
If we express that idea in terms of products and co-products:

List[T] = EmptyList[T] + NonEmptyList[T]
NonEmptyList[T] = T * List[T]

Please notice that the case of the empty list (Nil) has a bizarre but beautiful implementation in Scala.

If we tried to define an empty list for eeeeeeeeeevery single existing type, we would have to instantiate a Nil[Cat], a Nil[Dog], …
In order to avoid this and have a single Nil, we make it extend List[Nothing]; as you’ll probably remember from other posts, Nothing is a subtype of eeeeeeeeevery single existing type (both primitive and programmer-defined). If we add the fact that List[T] is covariant in T, we get a single Nil object that represents the empty list for all types. Awesome, right?
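
A quick illustration of that trick with the standard List (nothing fancy, just the compiler accepting the single Nil value everywhere):

val noStrings: List[String] = Nil // Nil: List[Nothing] conforms to List[String]
val noInts: List[Int] = Nil       // …and to List[Int], thanks to covariance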


Example: Even numbers

To get used to this new way of thinking, let’s take on the following challenge: how could we represent even numbers in Scala?

Requirements

If we’re not sophisticated enough and we trust a lil’ bit in runtime assertions, we could say that even numbers are defined as:

case class Even(value: Int) { 
  require(value%2==0, "it's not even")
}

But, if we try to create an Even with an odd integer number we’ll get a giant NOPE:

Even(1)
java.lang.IllegalArgumentException: requirement failed: it's not even
	at scala.Predef$.require(Predef.scala:233)
	at Even.<init>(<console>:7)

However, this assertion won’t be verified until runtime, when require is executed. Thus, our code could compile without being correct…
We can do much better…

Next(Next(…))

Another option is to assume (and we won’t discuss it) that zero is an even number, that we have infinite memory installed in our machine, that overflow errors don’t exist…


In that not-so-far-fetched case (…), we could define even numbers as:

sealed abstract class Even(val value: Int)
case object Zero extends Even(0)
case class Next(previousEven: Even) 
  extends Even(2 + previousEven.value)

So, if we have a method that generates reservations for the Love Boat, which requires an even number of participants, we can use our brand-new Even type:

def loveBoatReservation(
  peopleAmount: Even): Reservation = ???

Given that there’s no way to build an Even from an integer that is not even, we avoid uncomfortable situations at runtime where the number of people getting on the Love Boat is odd. Otherwise, someone could end up… forever alone.
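
For illustration, the only way to produce Even values is through the ADT’s constructors, so every value is even by construction:

val zero: Even = Zero              // value == 0
val two: Even  = Next(Zero)        // value == 2
val four: Even = Next(Next(Zero))  // value == 4
// loveBoatReservation(four) type-checks; there's simply no constructor for odd numbers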


Recursive ADTs and their techniques

Once the data type is defined, let’s suppose we want to implement the sum of even numbers:

def sum(e1: Even, e2: Even): Even = ???

We have several alternatives. One of them is the quick-and-dirty one:

def sum(e1: Even, e2: Even): Even = 
  new Even(e1.value + e2.value){}

But take a closer look: we’re totally ignoring the constructors we’ve defined. If we try to pattern match over the result:

val four = new Even(4){}
sum(Zero, four) match {
  case Zero => 
    //it never gets inside this case!
  case Next(Next(Zero)) => 
    //OMG! IT DOESN'T FIT HERE EITHER!
}
scala.MatchError: $anon$1@649f2009 (of class $anon$1)


The other technique (much more sophisticated, by the way) consists of invoking a recursive method that, on each call, decreases the second even number while increasing the first one. To do so, we make use of Next’s apply (constructor) and unapply (extractor) methods:

import scala.annotation.tailrec

def sum(e1: Even, e2: Even): Even = {
  @tailrec
  def rSum(ev1: Even, ev2: Even): (Even, Even) = {
    ev2 match {
      case Zero => (ev1, Zero)
      case Next(e) => rSum(Next(ev1), e)
    }
  }
  val (result, _) = rSum(e1, e2)
  result
}

Undeniably beautiful 🙂

Conclusions

Apart from becoming a lil’ bit crazier when reading back-from-vacation posts, we can extract several main conclusions from what we’ve read:

  • As we always say, every assertion that we can check at compile time instead of runtime saves time and headaches hunting bugs in production software (which is more expensive and more likely to make heads roll).
  • Constructors are THE key. If we define an ADT, we can ensure that the generated values of that type are correct by defining the proper constructors: Zero and Next. In both cases, we are certain that the even-number rules are satisfied.
  • Methods that operate over recursive data types tend to be recursive as well. And, apart from that, to generate values of the mentioned type (Even in our example) they should only use the existing constructors.

In a future post, we’ll talk about the relationship between the algebra of data types and the definition of formal grammars… or not.

Peace out!