Monoids are not démodé and Thursdays are actually the new Fridays

No, it’s not the title of the latest Coixet movie. When we talk about categories like monoids, we tend to think they remain on the fringe as something merely experimental (even if purely functional) with no direct application to the real world. Something similar happened when you learnt square roots at school and there never seemed to be a proper moment to use them in real life…

The use case

This morning, naïve as I am, I tried to make this work:

type Passengers = Int
type MaxCapacity = Passengers
type Plane = (Passengers, MaxCapacity)

val planes: List[Plane] =
  List(1 -> 1, 2 -> 3, 3 -> 3)

val (totalPassengers, totalCapacity) = 
  planes.sum 
//  ERROR: Could not find implicit value 
//  for parameter num: Numeric[(Int, Int)]

Ok, fair enough: Scala needs something, an evidence, to add integer tuples.
Before we start fighting with that ‘evidence’, let’s try to make it work in a more mechanical way:

// '/:' is foldLeft: starting from (0, 0), accumulate passengers and capacities
val sum = ((0,0) /: planes){
    case ((totalPas,totalCap), (passengers, capacity)) => 
      (totalPas + passengers, totalCap + capacity)
  }

Okay, it works. But it should be simpler, so let’s get back to the idea of Numeric[N] evidence.

Implementing Numeric[N]

We need a Numeric[(A,B)], but before implementing it (it has a lot of abstract methods) let’s sweep under the rug all those methods we don’t really want to focus on in this example. To do so, let’s create an intermediate layer that leaves all those methods unimplemented (which is not the same as ‘abstract’):

trait LeanNumeric[T] extends Numeric[T] {
  override def fromInt(x: Int): T = ???
  override def toInt(x: T): Int = ???
  override def minus(x: T, y: T): T = ???
  override def times(x: T, y: T): T = ???
  override def negate(x: T): T = ???
  override def toLong(x: T): Long = ???
  override def toFloat(x: T): Float = ???
  override def toDouble(x: T): Double = ???
  override def compare(x: T, y: T): Int = ???
}

Let’s call this abomination LeanNumeric (it only contains the essentials to develop our example). And now, we can define the method that generates the evidence for any Tuple2:

implicit def numeric[A, B](
  implicit nA: Numeric[A],
  nB: Numeric[B]): Numeric[(A, B)] = {
  new LeanNumeric[(A, B)]{
    override def zero = (nA.zero, nB.zero)
    override def plus(x: (A, B), y: (A, B)): (A, B) = {
      val (a1, b1) = x
      val (a2, b2) = y
      (nA.plus(a1, a2), nB.plus(b1, b2))
    }
  }
}

If we put the implicit in scope and run planes.sum again… boom! Magic.
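
For instance, a minimal sketch of what we get once the tuple evidence is in scope (the totals follow from the planes list above):

val (totalPassengers, totalCapacity) = planes.sum
// totalPassengers == 6, totalCapacity == 7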

Num…oid

We don’t have to master category theory to realize that Numeric[N] may be a thousand other things, but it satisfies at least two properties:

  • The append operation: the sum – given n1 and n2 of type N, it returns a new N element. This feature alone (together with closure and associativity) lets us consider it a Semigroup.

  • And, additionally, the zero element: the identity element of the sum.

Seriously? Isn’t it obvious enough? My dear friends, the monoid is back in town!
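
Just to make those two properties concrete before bringing in any library, here is a tiny sketch of the laws they imply for plain integer addition (the names are purely illustrative):

val zero = 0
def append(a: Int, b: Int): Int = a + b

require(append(zero, 42) == 42 && append(42, zero) == 42)   // zero is the identity
require(append(append(1, 2), 3) == append(1, append(2, 3))) // append is associative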

Implementation with scalaz.Monoid

Bearing in mind that Numeric has (at least) these two properties, let’s re-implement the implicit using scalaz’s Monoid. We first define the monoid for integers and the tuple monoid, which requires a monoid for each type that composes the tuple (easy peasy):

import scalaz._

implicit object IntMonoid extends Monoid[Int]{
  override def zero: Int = 0
  override def append(f1: Int, f2: => Int): Int = f1 + f2
}

implicit def tupleMonoid[A,B](
  implicit mA: Monoid[A],
  mB: Monoid[B]): Monoid[(A,B)] = {
  new Monoid[(A, B)] {
    override def zero: (A, B) = (mA.zero, mB.zero)
    override def append(f1: (A, B), f2: => (A, B)): (A, B) = {
      val (a1, b1) = f1
      lazy val (a2, b2) = f2
      (mA.append(a1,a2), mB.append(b1, b2))
    }
  }
}

So far so good, right?

After that, we implement the implicit that provides a Numeric as long as there’s a Monoid for the type:

implicit def numeric[T](
  implicit m: Monoid[T]): Numeric[T] = {
  new LeanNumeric[T]{
    override def zero = m.zero
    override def plus(x: T, y: T): T = m.append(x, y)
  }
}

planes.sum //(6,7)

And it’s awesomely easy to abstract over whatever T means (a tuple? a dog? …). As long as it’s a monoid, you can define a LeanNumeric.
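
As a quick sketch of that abstraction (assuming only the monoid-based instances above are in scope), the same machinery works for nested tuples, since tupleMonoid composes with itself:

val nested: List[(Int, (Int, Int))] =
  List((1, (2, 3)), (4, (5, 6)))

nested.sum // (5, (7, 9))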

You can find the gist here.

See you in the next functional madness.

Peace out!

Scalera tip: Handling sticky implicit contexts

A couple of days ago (translation for the masses: like a month ago) I noticed Viktor Klang tweeting about removing the annoying implicit evidences from method definitions. Some of the things I read seemed so elegant that I felt compelled to share a few related ideas with those of you who don’t follow him on Twitter (@viktorklang).

Setting some context

Imagine the typical polymorphic method where we need an execution context for evaluating some Future:

import scala.concurrent.{ExecutionContext, Future}

def myMethod[T]
  (element: T)
  (implicit ev: ExecutionContext): Future[Boolean] = ???

You could say it’s as typical as it is ugly: having to repeat the exact same words, (implicit ev: ExecutionContext), in the next 10 method definitions.

Playing with type alias

The happy idea being proposed is to define a type alias like the following:

type EC[_] = ExecutionContext

This way, thanks to the context-bound syntax sugar, we can re-define the method signature:

def myMethod[T:EC](element: T): Future[Boolean] = ???
myMethod("hi")

Beautiful, isn’t it?
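
Under the hood there’s nothing magical: the context bound is just sugar for an extra implicit parameter list, and since EC[T] is ExecutionContext for any T, both signatures below are equivalent (a sketch; myMethodSugared and myMethodDesugared are purely illustrative names):

def myMethodSugared[T: EC](element: T): Future[Boolean] = {
  val ec = implicitly[ExecutionContext] // how to grab the context inside the body
  Future(true)(ec)
}

def myMethodDesugared[T](element: T)(
  implicit ev: EC[T]): Future[Boolean] = ???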

Some other possibilities

Non-polymorphic methods

In case our method isn’t parameterized, we have to add a bit of boilerplate (a wildcard for the type that parameterizes the method), but in essence the same principle still works:

def myMethod[_:EC](element: Int): Future[Boolean] = ???
myMethod(2)

Multiple implicit contexts

In the not-so-crazy case in which we need several implicit parameters of different natures, we have to define as many type aliases as different parameter types we require:

type EC[_] = ExecutionContext
type MongoDB[_] = MongoDBDatabase

def myMethod[_:EC:MongoDB](element: Int): Future[Boolean] = ???

But what if …?

Multiple implicit parameters with same type

In case we have several implicit parameters that share the same type,

def myMethod
  (element: Int)
  (implicit ev1: ExecutionContext, ev2: ExecutionContext): Future[Boolean] = ???

it turns out that …

Well, by definition that’s impossible, given that it would cause an ambiguity issue when resolving implicits. It’s true that Scala allows this kind of signature, but we could only invoke the method by making the arguments of the second parameter group explicit:

myMethod(2)(ec1,ec2)

which is kind of…

Type-constructor implicit contexts

When we have implicit parameters that are also type constructors, like List[T], Future[T], Option[T]…

…well, it actually depends.

Case 1

If the type that parameterizes the method and the one that parameterizes the evidence are not related, there’s no big deal: we define another type alias and move on:

type EC[_] = ExecutionContext
type MongoDB[_] = MongoDBDatabase
type IntOpt[_] = Option[Int]
type StrList[_] = List[String]

def myMethod[_:EC:MongoDB:IntOpt:StrList](
  element: Int): Future[Boolean] = ???

Which would be equivalent to:

def myMethod(
  element: Int)(
  implicit ev1: ExecutionContext,
  ev2: MongoDBDatabase,
  ev3: Option[Int],
  ev4: List[String]): Future[Boolean] = ???

Case 2

If the type that parameterizes the method and the one that parameterizes the evidence have to match …

Well, it’s not possible with this trick: the syntax sugar requires the type that parameterizes the evidence to be exactly the method’s own type parameter, and our aliases simply throw that parameter away. Maybe it was all too pretty 🙂
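
For completeness, when the evidence genuinely has to follow the method’s own type parameter, a plain context bound (with no alias involved) still expresses it, just without the brevity of the trick; a hedged sketch with illustrative names:

def needsEvidenceFor[T: List](element: T): Future[Boolean] = ???
// equivalent to:
def needsEvidenceForExplicit[T](element: T)(
  implicit ev: List[T]): Future[Boolean] = ???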

See you in the next post. Peace out!

Scalera tip: Keep your actor’s state with no VAR at all!

It’s pretty well known that using vars is, apart from unethical, evil itself, some kind of hell; it makes kittens die, and many other things you’ve probably heard before that could eventually be the cause of a slow and painful death.

The essence of functional programming is therefore immutability: every time I mutate an element, I actually generate a new one.

What about Akka actors?

When we talk about actors, we can define them as stateful computation units that sequentially process a message queue, reacting (or not) to each of those messages.

It’s always been said that, in order to keep state within an actor’s logic, it was OK to use vars:

Concurrency problems can’t happen: it’s the actor itself and nobody else who accesses that var, and it only processes one message at a time.

But maybe we could renounce this premise if we find a way to redefine the actor’s behavior based on a new state.

Mortal approach

If we follow the previously described philosophy, the very first (and most straightforward) approach for keeping an actor’s state would look pretty similar to the following:

import akka.actor.Actor

case object Increase

class Foo extends Actor{
  var state: Int = 0
  override def receive = {
    case Increase => state += 1
  }
}

Every time an Increase message arrives, we modify the state value by adding 1.
So far so easy, right?

Immutable approach

Nevertheless, we could define a receive function parameterized by a certain state, so that when a message arrives, that parameter is the state to take into account.

If the circumstances to mutate the state take place, we just invoke the become method, which modifies the actor’s behavior. In our case, that behavior mutation consists of changing the state value.

If we use the previously defined example:

class Foo extends Actor{
  def receive(state: Int): Receive = {
    case Increase =>
      context.become(
        receive(state + 1),
        discardOld = true)
  }
  override def receive = receive(0)
}

we can notice that the receive function is parameterized by a state argument. When an Increase message arrives, we invoke become to modify the actor’s behavior, passing the new state to handle as an argument.

If we wanted some extra legibility, we could even abstract over any updatable-state actor:

trait SActor[State] extends Actor {
  val initialState: State
  def receive(state: State): Receive
  def update(state: State): Unit =
    context.become(
      receive(state),
      discardOld = true)
  override def receive =
    receive(initialState)
}

this way, we just have to define the actor’s initial state, a new parameterized receive function, and an update function that takes care of performing the proper become call, as explained before.
With all that in mind, we now have a cuter, brand new Foo actor:

class Foo extends SActor[Int] {
  val initialState = 0
  def receive(state: Int): Receive = {
    case Increase => update(state + 1)
  }
}
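
Just as a usage sketch (the actor system name and the fire-and-forget messages are purely illustrative), each Increase makes the actor swap its own behavior for one holding the incremented state:

import akka.actor.{ActorSystem, Props}

val system = ActorSystem("counters")
val foo = system.actorOf(Props(new Foo), "foo")

foo ! Increase // behavior becomes receive(1)
foo ! Increase // behavior becomes receive(2)

system.terminate()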

Potential hazardous issues

Please do notice that, in the featured example, we’ve used a second argument for become: discardOld = true. This argument determines whether the new behavior should be stacked on top of the old one or, on the contrary, should completely replace the previous behavior.

Let’s suppose we used discardOld = false. If every single time a new Increase message arrived we stacked a new behavior, we could end up with a wonderful overflow issue.
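
A hedged sketch of that failure mode (LeakyFoo is just an illustrative name): every message pushes yet another behavior onto the actor’s internal behavior stack and none is ever discarded, so memory grows with each Increase:

class LeakyFoo extends SActor[Int] {
  val initialState = 0
  def receive(state: Int): Receive = {
    case Increase =>
      // discardOld = false stacks the new behavior on top of the old one
      context.become(receive(state + 1), discardOld = false)
  }
}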

See you in the next tip.

Peace out 🙂

More lazy values, the State monad and other stateful stuff

In the previous post, we talked about lazy evaluation in Scala. At the end of that post, we asked an interesting question: does a Lazy value hold a state?


In order to answer that question, we’ll try to define a type that could represent the Lazy values:

trait Lazy[T] {

  val evalF : () => T

  val value: Option[T] = None

}
object Lazy{
  def apply[T](f: => T): Lazy[T] =
    new Lazy[T]{ val evalF = () => f }
}

As you can see, our Lazy type is parameterized by some type T that represents the actual value type (Lazy[Int] would be the representation of a lazy integer).
Besides that, we can see that it’s composed of the two main Lazy features:

  • evalF : a zero-parameter function that, when its ‘apply’ method is invoked, evaluates the contained T expression.
  • value : the result of interpreting the evalF function. This part denotes the state of the Lazy type, and it only admits two possible values: None (not evaluated yet) or Some(t) (already evaluated, holding the result).

We’ve also added a companion object that defines the Lazy instance constructor: it receives a by-name parameter that is returned as the result of the evalF function.


Now the question is: how do we join the evaluation function and the value it returns so we can make Lazy a stateful type? We define the eval function this way:

trait Lazy[T] { lzy =>

  val evalF : () => T

  val value: Option[T] = None

  def eval: (T, Lazy[T]) = {
    val evaluated = evalF.apply()
    evaluated -> new Lazy[T]{ mutated =>
      val evalF = lzy.evalF
      override val value = Some(evaluated)
      override def eval: (T, Lazy[T]) = 
        evaluated -> mutated
    }
  } 

}

The eval function returns a two-element tuple:

  • the value that results from evaluating the expression the lazy value stands for.
  • a new version of the Lazy value that contains the new state: the T evaluation result.

If you take a closer look, what the eval method does first is to invoke the evalF function so it can retrieve the T value that remained unevaluated until that point.
Once that’s done, we return it along with the new Lazy value version. This new version (let’s call it the mutated version) holds in its value attribute the result of having invoked the evalF function. In the same way, we change its eval method, so that future invocations return the Lazy instance itself instead of creating new instances (it won’t change its state again, which is how Scala’s lazy definitions behave).
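
A quick usage sketch of this hand-rolled Lazy (the counter is only there to show that evalF runs exactly once, provided we keep threading the mutated instance):

var evaluations = 0

val two = Lazy { evaluations += 1; 2 }
require(evaluations == 0) // nothing has been evaluated yet

val (v1, evaluatedTwo) = two.eval // forces the expression
val (v2, _) = evaluatedTwo.eval   // reuses the cached result
require(v1 == 2 && v2 == 2 && evaluations == 1)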

The interesting question that comes next is: is this an isolated case? Could anything else be defined as stateful? Let’s perform an abstraction exercise.

Looking for generics: stateful stuff

Let’s think about a simple stack:

sealed trait Stack[+T]
case object Empty extends Stack[Nothing]
case class NonEmpty[T](head: T, tail: Stack[T]) extends Stack[T]

The implementation is really simple. But let’s focus on the Stack trait and on a hypothetical pop method that pops an element from the stack, returning it together with the rest of the stack:

sealed trait Stack[+T]{
  def pop(): (Option[T], Stack[T])
}

Does it sound familiar to you? It is mysteriously similar to

trait Lazy[T]{
  def eval: (T, Lazy[T])
}

isn’t it?

If we try to refactor to extract a common trait between Lazy and Stack, we could define a much more abstract type called State:

trait State[S,T] {
  def apply(s: S): (T, S)
}

Simple but pretty: the State trait is parameterized by two types: S (the state type) and T (the info or additional element returned by the state mutation). Though it’s simple, it’s also a very common pattern when designing Scala systems. There’s always something that holds some state. And everything that has a state mutates. And if something mutates in a fancy and smart way… oh man.
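
To see the connection, here is a tiny sketch of pop written as a value of that home-made State type (Empty and NonEmpty come from the stack above; the implementation choices are mine):

def pop[T]: State[Stack[T], Option[T]] =
  new State[Stack[T], Option[T]] {
    def apply(s: Stack[T]): (Option[T], Stack[T]) = s match {
      case Empty             => (None, Empty)
      case NonEmpty(h, tail) => (Some(h), tail)
    }
  }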

That already exists…


All this story, which seems to be taken from a post-modern essay, has already been a subject of study for people… that study stuff. Without going into greater detail, in the scalaz library you can find the State monad which, apart from what was previously pointed out, comes fully equipped with composability and everything that being a monad implies (semigroup, monoid, …).

If we define our Lazy type with the State monad, we’ll get something similar to:

import scalaz.State

type Lazy[T] = (() => T, Option[T])

def Lazy[T](f: => T) = (() => f, None)

def eval[T] = State[Lazy[T], T]{
  case ((f, None)) => {
    val evaluated = f.apply()
    ((f, Some(evaluated)), evaluated)
  }
  case s@((_, Some(evaluated))) => (s, evaluated) 
}

Decrypting the Egyptian hieroglyph: given the State[S,T] monad, our S state will be a tuple composed of exactly what represents a lazy expression (as we previously described):

type Lazy[T] = (() => T, Option[T])
  • A Function0 that represents the lazy evaluation of T
  • The T value, which may or may not have been evaluated yet

To build a Lazy value, we generate a tuple with a function that stands for the expression denoted by the by-name parameter of the Lazy method, and the None value (because the Lazy guy hasn’t been evaluated yet):

def Lazy[T](f: => T) = (() => f, None)

Last, but not least (it’s actually the most important part), we define the only state transition possible for this type: the evaluation. This is the key when designing any State-based type: modelling what our S type stands for and the possible state transitions we want to consider.

In the case of the Lazy type, we have two possible situations: either the expression hasn’t been evaluated yet (in which case we evaluate it and return both the same function and the result), or the expression has already been evaluated (in which case we don’t change the state at all and just return the evaluation result):

def eval[T] = State[Lazy[T], T]{
  case ((f, None)) => {
    val evaluated = f.apply()
    ((f, Some(evaluated)), evaluated)
  }
  case s@((_, Some(evaluated))) => (s, evaluated) 
}


In order to check that we still keep the initial features we described for the Lazy type (it’s only evaluated once, and only when necessary, …), we check the following assertions:

var sideEffectDetector: Int = 0

val two = Lazy {
  sideEffectDetector += 1
  2
}

require(sideEffectDetector==0)

val (_, (evaluated, evaluated2)) = (for {
  evaluated <- eval[Int]
  evaluated2 <- eval[Int]
} yield (evaluated, evaluated2)).apply(two)

require(sideEffectDetector == 1)
require(evaluated == 2)
require(evaluated2 == 2)

Please do notice that, as we mentioned before, what we define inside the for-comprehension are the transitions or steps the state will go through. That is, we define the mutations that any S state will suffer. Once the recipe is defined, we apply it to the initial state we want.
In this particular case, we define as initial state a lazy integer that will hold the value 2. To check how many times our Lazy guy is evaluated, we just add a very dummy var that is used as a counter. After that, we state in our recipe that the state must mutate twice by using the eval operation. Finally, we check that the expression of the Lazy block has only been evaluated once and that the returned value is the expected one.

I wish you the best tea for digesting all this crazy story 🙂
Please, feel free to add comments/threats at the end of this post or even in our gitter channel.

See you on next post.
Peace out!