Ruminations of a Programmer
A programmer's blog - will deal with everything that relates to a programmer. Occasionally, it will contain some humour, some politics and some sport news.
http://debasishg.blogspot.com/

Domain Models - Late Evaluation buys you better Composition
Sun, 30 Jul 2017

In the <a href="http://debasishg.blogspot.in/2017/06/domain-models-early-abstractions-and.html">last post</a> we talked about early abstractions that allow you to design generic interfaces that are polymorphic in the type parameter. Unless you abuse the type system of a permissive language like Scala, and as long as you adhere to the principles of parametricity, this approach helps you implement abstractions that are reusable in various contexts. We saw this when we implemented the generic contract of the <code>mapReduce</code> function and its various specializations by supplying different concrete instances of the <code>Monoid</code> algebra.
<br/>
<br/>
In this post we will take a look at the other end of the spectrum in designing functional domain models. We will discuss the evaluation semantics of model behaviors - specifically, the problem of when to commit to concrete evaluation semantics. Consider the following definition of a domain service module ..
<pre class="brush: scala">
type ErrorOr[A] = Either[String, A]

trait PaymentService {
  def paymentCycle: ErrorOr[PaymentCycle]
  def qualifyingAccounts(paymentCycle: PaymentCycle): ErrorOr[Vector[Account]]
  def payments(accounts: Vector[Account]): ErrorOr[Vector[Payment]]
  def adjustTax(payments: Vector[Payment]): ErrorOr[Vector[Payment]]
  def postToLedger(payments: Vector[Payment]): ErrorOr[Unit]
}</pre>
Such definitions are quite common these days. We have a nice monadic definition that can be composed to implement larger behaviors out of smaller ones ..
<pre class="brush: scala">
def processPayments() = for {
  p <- paymentCycle
  a <- qualifyingAccounts(p)
  m <- payments(a)
  t <- adjustTax(m)
  _ <- postToLedger(t)
} yield ()
</pre>
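To see the short-circuiting behavior of this monadic composition concretely, here is a toy, self-contained version of the pipeline. The service bodies below are hypothetical stand-ins for illustration, not the post's real domain model:

```scala
// Either is right-biased: the first Left short-circuits the pipeline.
type ErrorOr[A] = Either[String, A]

def paymentCycle: ErrorOr[String] = Right("2017-07")
def qualifyingAccounts(cycle: String): ErrorOr[Vector[String]] = Right(Vector("a1", "a2"))
def payments(accounts: Vector[String]): ErrorOr[Vector[String]] =
  if (accounts.isEmpty) Left("no accounts") else Right(accounts.map("pay-" + _))

// happy path: every step is a Right, so the composition succeeds
val ok: ErrorOr[Vector[String]] = for {
  p <- paymentCycle
  a <- qualifyingAccounts(p)
  m <- payments(a)
} yield m

// failing step: payments returns a Left and the steps after it never run
val failed: ErrorOr[Vector[String]] = for {
  p <- paymentCycle
  m <- payments(Vector.empty)
} yield m
```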
Can we improve upon this design ?
<br/>
<br/>
<h2>Committing to the concrete early - the pitfalls ..</h2>
One of the defining aspects of reusable abstractions is the ability to run them under different contexts. This is one lesson that we learnt in the last post as well. Make the abstractions depend on the <i>least</i> powerful algebra. In this example our service functions return <code>Either</code>, which is a monad. But it's not necessarily the least powerful algebra in the context. Users may choose to use some other monad, or maybe even an applicative, to thread through the context of building larger behaviors. Why not keep the algebra unspecified at the service definition level and leave the specializations to implementations, or even to usages at the end of the world ? Here's what we can do ..
<pre class="brush: scala">
// algebra
trait PaymentService[M[_]] {
  def paymentCycle: M[PaymentCycle]
  def qualifyingAccounts(paymentCycle: PaymentCycle): M[Vector[Account]]
  def payments(accounts: Vector[Account]): M[Vector[Payment]]
  def adjustTax(payments: Vector[Payment]): M[Vector[Payment]]
  def postToLedger(payments: Vector[Payment]): M[Unit]
}</pre>
A top level service definition that keeps the algebra unspecified. Now if we want to implement a larger behavior with monadic composition, we can do this ..
<pre class="brush: scala">
// weaving the monad: defined within PaymentService[M], with cats
// Monad and its syntax in scope (import cats.Monad, cats.implicits._)
def processPayments()(implicit me: Monad[M]) = for {
  p <- paymentCycle
  a <- qualifyingAccounts(p)
  m <- payments(a)
  t <- adjustTax(m)
  _ <- postToLedger(t)
} yield p
</pre>
Note that we are using only the monadic bind in composing the larger behavior - hence the least powerful algebra that we can use is that of a <code>Monad</code>. And we express this exact constraint by publishing the requirement of an instance of a <code>Monad</code> for the type constructor <code>M</code>.
<br/>
<br/>
<h2>What about Implementation ?</h2>
Well, we could avoid the commitment to a concrete algebra in the definition of the service. What about the implementation ? One of the core issues with the implementation is how you handle errors. This is an issue that often makes you commit to a concrete type when you write the interpreter / implementation of a service contract. You may use <code>Failure</code> for a <code>Try</code> based implementation, or <code>Left</code> for an <code>Either</code> based implementation etc. Can we abstract over this behavior through a generic error handling strategy ? Some libraries like <a href="http://typelevel.org/cats/">cats</a> offer you abstractions like <code>MonadError</code> that help you implement error reporting functionality using generic monadic APIs. Here's how we can do this ..
<pre class="brush: scala">
class PaymentServiceInterpreter[M[_]](implicit me: MonadError[M, Throwable])
  extends PaymentService[M] {

  //..

  def payments(accounts: Vector[Account]): M[Vector[Payment]] =
    if (accounts.isEmpty)
      me.raiseError(new IllegalArgumentException("Empty account list"))
    else //..

  //..
}
</pre>
Note that we needed a monad with error handling capabilities, and we used <code>MonadError</code> for that. We have kept the error type in <code>MonadError</code> as <code>Throwable</code>, which may seem a bit unethical in the context of pure functional programming. But it's also true that many libraries (especially Java ones) and underlying abstractions like <code>Future</code> or <code>Try</code> play well with exceptions. Anyway, this is a digression that has nothing to do with the current topic of discussion. The main point is that you need to supply a <code>MonadError</code> for which you have instances.
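To make the error-handling contract concrete, here is a minimal, hand-rolled sketch of a MonadError-like algebra with an Either-based instance. This is a simplification of the cats typeclass, for illustration only; the `payments` body is a hypothetical stand-in:

```scala
// A hand-rolled MonadError-like algebra: raiseError injects a failure,
// handleErrorWith recovers from it, all against an abstract M[_].
trait MonadErr[M[_], E] {
  def pure[A](a: A): M[A]
  def flatMap[A, B](ma: M[A])(f: A => M[B]): M[B]
  def raiseError[A](e: E): M[A]
  def handleErrorWith[A](ma: M[A])(f: E => M[A]): M[A]
}

type ErrorOr[A] = Either[Throwable, A]

implicit val eitherME: MonadErr[ErrorOr, Throwable] = new MonadErr[ErrorOr, Throwable] {
  def pure[A](a: A) = Right(a)
  def flatMap[A, B](ma: ErrorOr[A])(f: A => ErrorOr[B]) = ma.flatMap(f)
  def raiseError[A](e: Throwable) = Left(e)
  def handleErrorWith[A](ma: ErrorOr[A])(f: Throwable => ErrorOr[A]) =
    ma.fold(f, a => Right(a))
}

// a service method that fails generically in M, in the spirit of the interpreter above
def payments[M[_]](accounts: Vector[String])(implicit me: MonadErr[M, Throwable]): M[Vector[String]] =
  if (accounts.isEmpty) me.raiseError(new IllegalArgumentException("Empty account list"))
  else me.pure(accounts.map("payment-for-" + _))

// the caller recovers from the failure without leaving the abstraction
val recovered: ErrorOr[Vector[String]] =
  eitherME.handleErrorWith(payments[ErrorOr](Vector.empty))(_ => eitherME.pure(Vector.empty))
```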
<br/>
<br/>
Here's how cats defines the trait <code>MonadError</code> ..
<pre class="brush: scala">
trait MonadError[F[_], E] extends ApplicativeError[F, E] with Monad[F] { //..
</pre>
.. and that's exactly what we will commit to. We are still dealing with a generic <code>Monad</code> even in the implementation, without committing to any concrete instance.
<br/>
<br/>
<h2>End of the World!</h2>
The basic reason we wanted to delay committing to the concrete instance was to allow the users the flexibility to choose their own implementations. This is what we call the principle of delayed evaluation. Abstract early, evaluate late, and decouple the concerns of building and evaluating the abstractions. We have already seen two of these principles at work - we will see that our design so far accommodates the third one as well, at least for some instances of <code>M</code>.
<br/>
<br/>
The user of our API has the flexibility to choose the monad as long as she supplies the <code>MonadError[M, Throwable]</code> instance. And we have many to choose from. Here's an example of the above service implementation in use that composes with another service in a monadic way and choosing the exact concrete instance of the <code>Monad</code> at the end of the world ..
<pre class="brush: scala">
import cats._
import cats.data._
import cats.implicits._

// monix task based computation
object MonixTaskModule {
  import monix.eval.Task
  import monix.cats._

  val paymentInterpreter = new PaymentServiceInterpreter[Task]
  val emailInterpreter = new EmailServiceInterpreter[Task]

  for {
    p <- paymentInterpreter.processPayments
    e <- emailInterpreter.sendEmail(p)
  } yield e
}

// future based computation
object FutureModule {
  import scala.concurrent.Future
  import scala.concurrent.ExecutionContext.Implicits.global

  val paymentInterpreter = new PaymentServiceInterpreter[Future]
  val emailInterpreter = new EmailServiceInterpreter[Future]

  for {
    p <- paymentInterpreter.processPayments
    e <- emailInterpreter.sendEmail(p)
  } yield e
}

// Try based computation
object TryModule {
  import scala.util.Try

  val paymentInterpreter = new PaymentServiceInterpreter[Try]
  val emailInterpreter = new EmailServiceInterpreter[Try]

  for {
    p <- paymentInterpreter.processPayments
    e <- emailInterpreter.sendEmail(p)
  } yield e
}
</pre>
<a href="http://monix.io">Monix</a> <code>Task</code> is an abstraction that decouples the building of the abstraction from its execution. So the <code>Task</code> that we get from building the composed behavior in the above example can be executed in a deferred way, depending on the requirements of the application. It can also be composed with other Tasks to build larger ones.
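The build/run decoupling can be illustrated with a toy suspended computation - a drastically simplified stand-in for `Task`, not Monix's actual implementation:

```scala
// A toy lazily-evaluated Task: `map`/`flatMap` build a description of the
// computation; nothing runs until run() is called at the end of the world.
final class LazyTask[A](thunk: () => A) {
  def run(): A = thunk()
  def map[B](f: A => B): LazyTask[B] = new LazyTask(() => f(run()))
  def flatMap[B](f: A => LazyTask[B]): LazyTask[B] = new LazyTask(() => f(run()).run())
}
object LazyTask {
  def delay[A](a: => A): LazyTask[A] = new LazyTask(() => a)
}

var effects = 0

val program: LazyTask[Int] = for {
  a <- LazyTask.delay { effects += 1; 20 }
  b <- LazyTask.delay { effects += 1; 22 }
} yield a + b

val effectsBeforeRun = effects   // still 0: building the program ran nothing
val result = program.run()       // evaluation happens only here
```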
<br/>
<br/>
<h2>Vertical Composition - stacking abstractions</h2>
When you have not committed to an implementation early enough, all you have is an unspecified algebra. You can do fun stuff like stacking abstractions vertically. Suppose we want to implement auditability in some of our service methods. Here we consider a simple strategy of logging as a means to audit the behaviors. How can we take an existing implementation and plug in the audit function selectively ? The answer is that we compose algebras .. here's an example that stacks the <code>Writer</code> monad on an already existing algebra to make the <code>payments</code> function auditable ..
<pre class="brush: scala">
// `?` below is kind-projector syntax for a partially applied type constructor
final class AuditablePaymentService[M[_]: Applicative](paymentService: PaymentService[M])
  extends PaymentService[WriterT[M, Vector[String], ?]] {

  def paymentCycle: WriterT[M, Vector[String], PaymentCycle] =
    WriterT.lift(paymentService.paymentCycle)

  def qualifyingAccounts(paymentCycle: PaymentCycle): WriterT[M, Vector[String], Vector[Account]] =
    WriterT.lift(paymentService.qualifyingAccounts(paymentCycle))

  def payments(accounts: Vector[Account]): WriterT[M, Vector[String], Vector[Payment]] =
    WriterT.putT(paymentService.payments(accounts))(accounts.map(_.no))

  //..
}

val auditablePaymentInterpreter = new AuditablePaymentService[Future](
  new PaymentServiceInterpreter[Future]
)
</pre>
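The essence of the stacking can be shown with a toy writer over `Either` - a drastically simplified sketch of what `WriterT` does, with hypothetical service steps: each step carries an audit log alongside the result, and the logs are concatenated as the steps compose.

```scala
// A toy writer over Either: the result is paired with a Vector[String] log.
type ErrorOr[A] = Either[String, A]

final case class WriterOr[A](run: ErrorOr[(Vector[String], A)]) {
  def map[B](f: A => B): WriterOr[B] =
    WriterOr(run.map { case (log, a) => (log, f(a)) })
  def flatMap[B](f: A => WriterOr[B]): WriterOr[B] =
    WriterOr(run.flatMap { case (log1, a) =>
      f(a).run.map { case (log2, b) => (log1 ++ log2, b) }
    })
}

def lift[A](ea: ErrorOr[A]): WriterOr[A] =
  WriterOr(ea.map(a => (Vector.empty[String], a)))   // no audit entry
def audit(msg: String): WriterOr[Unit] =
  WriterOr(Right((Vector(msg), ())))                 // audit entry only

val audited: WriterOr[Vector[String]] = for {
  accounts <- lift(Right(Vector("acc-1", "acc-2")): ErrorOr[Vector[String]])
  _        <- audit("paying accounts: " + accounts.mkString(","))
  pays     <- lift(Right(accounts.map("pay-" + _)): ErrorOr[Vector[String]])
} yield pays
```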
We took the decision to abstract the return type in the form of a type constructor early on, but committed to the specific type only during the actual usage of the service implementation. Early abstraction and late commitment to implementation make great composition possible, often in ways that may pleasantly surprise you later ..
http://debasishg.blogspot.com/2017/07/domain-models-late-evaluation-buys-you.html

Domain Models - Early Abstractions and Polymorphic Domain Behaviors
Sun, 25 Jun 2017

<div dir="ltr" style="text-align: left;" trbidi="on">Let's talk genericity, or generic abstractions. In the <a href="http://debasishg.blogspot.in/2017/06/domain-models-algebraic-laws-and-unit.html">last post</a> we talked about an abstraction, <code>Money</code>, which, by the way, was not generic. But we expressed some of the operations on <code>Money</code> in terms of a <code>Monoid[Money]</code>, where <code>Monoid</code> is a generic algebraic structure. By algebraic we mean that a <code>Monoid</code> <br />
<br />
<ol><li>is generic in types</li>
<li>offers operations that are completely generic on the types</li>
<li>has operations that honor the algebraic laws of left and right identity and associativity</li>
</ol><br />
But when we design a domain model, what does this really buy us ? We already saw in the earlier post how law-abiding abstractions save you from writing some unit tests just through generic verification of the laws using property-based testing. That's just a couple of lines in any of the available libraries out there. <br />
<br />
Besides reducing the burden of your unit tests, what does <code>Monoid[Money]</code> buy us in the bigger context of things ? Let's look at a simple operation that we defined in <code>Money</code> .. <br />
<br />
Just to recapitulate, here's the definition of <code>Money</code><br />
<br />
<pre class="brush: scala">class Money(val items: Map[Currency, BigDecimal]) { //..
}

object Money {
  final val zeroMoney = new Money(Map.empty[Currency, BigDecimal])

  def apply(amount: BigDecimal, ccy: Currency) = new Money(Map(ccy -> amount))

  // concrete naive implementation: don't do this
  def add(m: Money, n: Money) = new Money(
    (m.items.toList ++ n.items.toList)
      .groupBy(_._1)
      .map { case (k, v) =>
        (k, v.map(_._2).sum)
      }
  )
  //..
}</pre><br />
<code>add</code> is a naive implementation though it's possibly the most frequent one that you will ever encounter in domain models around you. It picks up the <code>Map</code> elements and then adds the ones with the same key to come up with the new <code>Money</code>.<br />
<br />
Why is this a naive implementation ?<br />
<br />
First of all it deconstructs the implementation of <code>Money</code>, instead of using the algebraic properties that the implementation may have. Here we implement <code>Money</code> in terms of a <code>Map</code>, which itself forms a <code>Monoid</code> under the operations defined by <code>Monoid[Map[K, V]]</code>. So why don't we use the monoidal algebra of a <code>Map</code> to implement the operations of <code>Money</code> ?<br />
<br />
<pre class="brush: scala">object Money {
  //..
  def add(m: Money, n: Money) = new Money(m.items |+| n.items)
  //..
}</pre><br />
<code>|+|</code> is a helper function that combines the two Maps in a monoidal manner. The concrete piece of code that you wrote in the naive implementation is now delegated to the implementation of the monoid algebra for a <code>Map</code>, in a completely generic way. The advantage is that this implementation needs to be written only once (and possibly someone else has already done it for you) and can be reused everywhere you use a <code>Map</code>. <b>Reusability of polymorphic code is not via documentation but by actual code reuse.</b> <br />
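To make the semantics of <code>|+|</code> on Maps concrete, here is a hand-rolled sketch of the monoidal merge. Cats derives this generically from the value monoid; the code below is only illustrative:

```scala
// Monoidal merge of two Maps: values under a common key are combined
// with the value monoid's operation (numeric addition here).
def combineMaps[K](m: Map[K, BigDecimal], n: Map[K, BigDecimal]): Map[K, BigDecimal] =
  n.foldLeft(m) { case (acc, (k, v)) =>
    acc.updated(k, acc.getOrElse(k, BigDecimal(0)) + v)
  }

val m1 = Map("USD" -> BigDecimal(1000), "AUD" -> BigDecimal(248))
val m2 = Map("USD" -> BigDecimal(230))
val merged = combineMaps(m1, m2)   // USD amounts added, AUD carried over
```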
<br />
On to some more reusability of generic patterns ..<br />
<br />
Consider the following abstraction that builds on top of <code>Money</code> ..<br />
<br />
<pre class="brush: scala">import java.time.OffsetDateTime
import Money._

import cats._
import cats.data._
import cats.implicits._

object Payments {
  case class Account(no: String, name: String, openDate: OffsetDateTime,
    closeDate: Option[OffsetDateTime] = None)

  case class Payment(account: Account, amount: Money, dateOfPayment: OffsetDateTime)

  // returns the Money for a credit payment, zeroMoney otherwise
  def creditsOnly(p: Payment): Money = if (p.amount.isDebit) zeroMoney else p.amount

  // compute valuation of all credit payments
  def valuation(payments: List[Payment]) = payments.foldLeft(zeroMoney) { (a, e) =>
    add(a, creditsOnly(e))
  }
  //..
}</pre><br />
<br />
<code>valuation</code> gives a standard implementation folding over the <code>List</code> that it gets. Now let's try to critique the implementation ..<br />
<br />
1. The function does a <code>foldLeft</code> on the passed in collection <code>payments</code>. The collection only needs the ability to be folded over, and <code>List</code> can do much more than that. We violate the <i>principle of using the least powerful abstraction</i> in the implementation. The function that implements the fold over the collection only needs to take a <code>Foldable</code> - that prevents misuse by a user who would otherwise feel like a child in a toy store, handed something far more grandiose than she needs.<br />
<br />
2. The implementation uses the <code>add</code> function of <code>Money</code>, which is nothing but a concrete wrapper over a monoidal operation. If we can replace this with something more generic then it will be a step forward towards a generic implementation of the whole function.<br />
<br />
3. If we squint a bit, we can shed some more light on the generic nature of all the components of this 2-line implementation. <code>zeroMoney</code> is the <code>zero</code> of a <code>Monoid</code>, <code>fold</code> is a generic operation of a <code>Foldable</code>, <code>add</code> is a wrapper over a monoidal operation, and <code>creditsOnly</code> is a mapping operation over every payment that the collection hands over to you. In summary, the implementation folds over a <code>Foldable</code>, mapping each element using a function and using the monoidal operation to collapse the <code>fold</code>.<br />
<br />
Well, it's actually a concrete implementation of a generic map-reduce function ..<br />
<br />
<pre class="brush: scala">def mapReduce[F[_], A, B](as: F[A])(f: A => B)
(implicit fd: Foldable[F], m: Monoid[B]): B =
fd.foldLeft(as, m.empty)((b, a) => m.combine(b, f(a)))</pre><br />
In fact the <code>Foldable</code> trait contains this implementation under the name <code>foldMap</code>, which makes our implementation of <code>mapReduce</code> even simpler ..<br />
<br />
<pre class="brush: scala">def mapReduce1[F[_], A, B](as: F[A])(f: A => B)
(implicit fd: Foldable[F], m: Monoid[B]): B = fd.foldMap(as)(f)</pre><br />
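The genericity can be exercised end to end with hand-rolled stand-ins for the typeclasses. These are simplified versions of the cats <code>Foldable</code> and <code>Monoid</code>, written here only to keep the example self-contained:

```scala
// simplified stand-ins for the cats typeclasses
trait Monoid2[B] { def empty: B; def combine(x: B, y: B): B }
trait Foldable2[F[_]] { def foldLeft[A, B](fa: F[A], b: B)(f: (B, A) => B): B }

def mapReduce[F[_], A, B](as: F[A])(f: A => B)
  (implicit fd: Foldable2[F], m: Monoid2[B]): B =
  fd.foldLeft(as, m.empty)((b, a) => m.combine(b, f(a)))

implicit val listFoldable: Foldable2[List] = new Foldable2[List] {
  def foldLeft[A, B](fa: List[A], b: B)(f: (B, A) => B): B = fa.foldLeft(b)(f)
}
implicit val intAddition: Monoid2[Int] = new Monoid2[Int] {
  def empty = 0
  def combine(x: Int, y: Int) = x + y
}

// map each word to its length, reduce with integer addition
val totalLength = mapReduce(List("domain", "model"))(_.length)
```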
And since <code>List</code> is a <code>Foldable</code>, our implementation of <code>valuation</code> becomes as generic as ..<br />
<br />
<pre class="brush: scala">object Payments {
  //..

  // generic implementation
  def valuation(payments: List[Payment]): Money = {
    implicit val m: Monoid[Money] = Money.MoneyAddMonoid
    mapReduce(payments)(creditsOnly)
  }
}</pre><br />
The implementation is generic and the type system will ensure that the <code>Money</code> that we produce can <i>only</i> come from the list of payments that we pass. In the naive implementation there's always a chance that the user subverts the type system and acts maliciously by plugging in some additional <code>Money</code> as the output. If you look at the type signature of <code>mapReduce</code>, you will see that the only way we can get a <code>B</code> is by invoking the function <code>f</code> on an element of <code>F[A]</code>. Since the function is generic on types, we cannot ever produce a <code>B</code> otherwise. Parametricity FTW.<br />
<br />
<code>mapReduce</code> is completely generic on types - there's no specific implementation that asks it to add the payments passed to it. This abstraction over operations is provided by the <code>Monoid[B]</code>. And the abstraction over the form of collection is provided by <code>Foldable[F]</code>. It's now no surprise that we can pass in any concrete operation or structure that honors the contracts of <code>mapReduce</code>. Here's another example from the same model ..<br />
<br />
<pre class="brush: scala">object Payments {
  //..

  // generic implementation
  def maxPayment(payments: List[Payment]): Money = {
    implicit val m: Monoid[Money] = Money.MoneyOrderMonoid
    mapReduce(payments)(creditsOnly)
  }
}</pre><br />
We want to compute the maximum credit payment amount from a collection of payments. A different domain behavior needs to be modeled but we can think of it as belonging to the same form as <code>valuation</code> and implemented using the same structure as <code>mapReduce</code>, only passing a different instance of <code>Monoid[Money]</code>. No additional client code, no fiddling around with concrete data types, just matching the type contracts of a polymorphic function. <br />
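The post elides the body of <code>MoneyOrderMonoid</code>, but the idea can be sketched over plain valuations: a max-monoid whose identity is the zero valuation (valid assuming credit amounts are non-negative). The names below are illustrative, not the post's actual code:

```scala
// max over non-negative credit valuations; BigDecimal(0) is the identity
val maxEmpty: BigDecimal = BigDecimal(0)
def maxCombine(x: BigDecimal, y: BigDecimal): BigDecimal = x.max(y)

// the same fold shape as valuation, with a different monoid plugged in
def maxPayment(amounts: List[BigDecimal]): BigDecimal =
  amounts.foldLeft(maxEmpty)(maxCombine)

val largest = maxPayment(List(BigDecimal(100), BigDecimal(450), BigDecimal(230)))
```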
<br />
Looks like our investment on an early abstraction of <code>mapReduce</code> has started to pay off. The domain model remains clean with much of the domain logic being implemented in terms of the algebra that the likes of Foldables and Monoids offer. I discussed some of these topics at length in my book <a href="https://www.manning.com/books/functional-and-reactive-domain-modeling">Functional and Reactive Domain Modeling</a>. In the next instalment we will explore some more complex algebra as part of domain modeling .. <br />
<br />
</div>http://debasishg.blogspot.com/2017/06/domain-models-early-abstractions-and.html

Domain models, Algebraic laws and Unit tests
Sun, 18 Jun 2017

In a domain model, when you have a domain element that forms an algebraic abstraction honoring certain laws, you can get rid of many of your explicitly written unit tests just by checking the laws. Of course, you have to squint hard and discover the lawful abstraction that hides behind your concrete domain element.<br />
<br />
Consider this simple abstraction for <code>Money</code> that keeps track of amounts in various currencies. <br />
<br />
<pre class="brush: scala">scala> import Money._
import Money._
// 1000 USD
scala> val m = Money(1000, USD)
m: laws.Money = (USD,1000)
// add 248 AUD
scala> val n = add(m, Money(248, AUD))
n: laws.Money = (AUD,248),(USD,1000)
// add 230 USD more
scala> val p = add(n, Money(230, USD))
p: laws.Money = (AUD,248),(USD,1230)
// value of the money in base currency (USD)
scala> p.toBaseCurrency
res1: BigDecimal = 1418.48
// debit amount
scala> val q = Money(-250, USD)
q: laws.Money = (USD,-250)
scala> val r = add(p, q)
r: laws.Money = (AUD,248),(USD,980)</pre><br />
The valuation of <code>Money</code> is done in terms of its base currency which is usually <code>USD</code>. One of the possible implementations of <code>Money</code> is the following (some parts elided for future explanations) ..<br />
<br />
<pre class="brush: scala">sealed trait Currency
case object USD extends Currency
case object AUD extends Currency
case object JPY extends Currency
case object INR extends Currency

class Money private[laws] (val items: Map[Currency, BigDecimal]) {
  def toBaseCurrency: BigDecimal =
    items.foldLeft(BigDecimal(0)) { case (a, (ccy, amount)) =>
      a + Money.exchangeRateWithUSD.getOrElse(ccy, BigDecimal(1)) * amount
    }

  override def toString = items.toList.mkString(",")
}

object Money {
  final val zeroMoney = new Money(Map.empty[Currency, BigDecimal])

  def apply(amount: BigDecimal, ccy: Currency) = new Money(Map(ccy -> amount))
  def add(m: Money, amount: BigDecimal, ccy: Currency) = ???

  final val exchangeRateWithUSD: Map[Currency, BigDecimal] =
    Map(AUD -> 0.76, JPY -> 0.009, INR -> 0.016, USD -> 1.0)
}</pre><br />
Needless to say we will have quite a number of unit tests that check for addition of <code>Money</code>, including the boundary cases of adding to <code>zeroMoney</code>.<br />
<br />
It's not very hard to see that the type <code>Money</code> forms a <code>Monoid</code> under the <code>add</code> operation. Or to speak a bit loosely we can say that <code>Money</code> is a <code>Monoid</code> under the <code>add</code> operation.<br />
<br />
A <code>Monoid</code> has <a href="https://en.wikibooks.org/wiki/Haskell/Monoids">laws</a> that every instance needs to honor - associativity, left identity and right identity. And when your model element needs to honor the laws of algebra, it's always recommended to include the verification of the laws as part of your test suite. Besides validating the sanity of your abstractions, one side-effect of verifying laws is that you can get rid of many of your explicitly written unit tests for the operation that forms the <code>Monoid</code>. They will be automatically verified when verifying the laws of <code>Monoid[Money]</code>.<br />
<br />
Here's how we define <code>Monoid[Money]</code> using <a href="http://typelevel.org/cats">Cats</a> ..<br />
<br />
<pre class="brush: scala">val MoneyAddMonoid: Monoid[Money] = new Monoid[Money] {
  def combine(m: Money, n: Money): Money = add(m, n)
  def empty: Money = zeroMoney
}</pre><br />
and the implementation of the previously elided add operation on <code>Money</code> using <code>Monoid</code> on <code>Map</code> ..<br />
<br />
<pre class="brush: scala">object Money {
  //..
  def add(m: Money, amount: BigDecimal, ccy: Currency) =
    new Money(m.items |+| Map(ccy -> amount))
  //..
}</pre><br />
Now we can verify the laws of <code>Monoid[Money]</code> using <a href="http://specs2.org">specs2</a> and <a href="http://scalacheck.org">ScalaCheck</a> and the helper classes that Cats offers ..<br />
<br />
<pre class="brush: scala">import cats._
import kernel.laws.GroupLaws
import org.scalacheck.{ Arbitrary, Gen }
import Arbitrary.arbitrary

class MoneySpec extends CatsSpec { def is = s2"""

  This is a specification for validating laws of Money

  (Money) should
     form a monoid under addition $e1
  """

  implicit lazy val arbCurrency: Arbitrary[Currency] = Arbitrary {
    Gen.oneOf(AUD, USD, INR, JPY)
  }

  implicit def moneyArbitrary: Arbitrary[Money] =
    Arbitrary {
      for {
        i <- Arbitrary.arbitrary[Map[Currency, BigDecimal]]
      } yield new Money(i)
    }

  def e1 = checkAll("Money", GroupLaws[Money].monoid(Money.MoneyAddMonoid))
}</pre>
<br>
<br>
and running the test suite will verify the Monoid laws for <code>Monoid[Money]</code> ..
<br>
<br>[info] This is a specification for validating laws of Money
<br>[info]
<br>[info] (Money) should
<br>[info] form a monoid under addition monoid laws must hold for Money
<br>[info] + monoid.associativity
<br>[info] + monoid.combineAll
<br>[info] + monoid.combineAll(Nil) == id
<br>[info] + monoid.combineAllOption
<br>[info] + monoid.combineN(a, 0) == id
<br>[info] + monoid.combineN(a, 1) == a
<br>[info] + monoid.combineN(a, 2) == a |+| a
<br>[info] + monoid.isEmpty
<br>[info] + monoid.leftIdentity
<br>[info] + monoid.rightIdentity
<br>[info] + monoid.serializable
<br>
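Even without specs2 and ScalaCheck, the essence of such a law check can be spot-checked by hand over a few sample values. The sketch below uses a simplified Map-based stand-in for <code>Money</code>; a property-based library generalizes exactly this to arbitrary generated inputs:

```scala
// Hand-rolled spot check of the monoid laws over sample values.
type MoneyRep = Map[String, BigDecimal]

val zero: MoneyRep = Map.empty

def add(m: MoneyRep, n: MoneyRep): MoneyRep =
  n.foldLeft(m) { case (acc, (k, v)) =>
    acc.updated(k, acc.getOrElse(k, BigDecimal(0)) + v)
  }

val samples: List[MoneyRep] = List(
  Map("USD" -> BigDecimal(1000)),
  Map("AUD" -> BigDecimal(248)),
  Map("USD" -> BigDecimal(-250), "JPY" -> BigDecimal(10))
)

// associativity: (a + b) + c == a + (b + c) for all sample triples
val associative = (for (a <- samples; b <- samples; c <- samples)
  yield add(add(a, b), c) == add(a, add(b, c))).forall(identity)

// left and right identity: zero + a == a == a + zero
val leftIdentity  = samples.forall(a => add(zero, a) == a)
val rightIdentity = samples.forall(a => add(a, zero) == a)
```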
<br>
In summary ..
<ul><li>strive to find abstractions in your domain model that are constrained by algebraic laws</li>
<li>check all laws as part of your test suite</li>
<li>you will find that you can get rid of quite a few explicitly written unit tests just by checking the laws of your abstraction</li>
<li>and of course use property based testing for unit tests</li>
</ul>In case you want to take a look at the full code base, it's there on my <a href="https://github.com/debasishg/pigeon/tree/master/laws-tango">Github repo</a>. In the next post we will take the next step towards modeling with generic algebraic code using the Monoid pattern from this example. Code written in parametric form without depending on specialized concrete types can be more robust, easier to test and easier to reason about. I have also discussed this at length in my book <a href="https://www.manning.com/books/functional-and-reactive-domain-modeling">Functional and Reactive Domain Modeling</a>. I plan to supplement the materials covered there with more examples and code patterns ..<br />
http://debasishg.blogspot.com/2017/06/domain-models-algebraic-laws-and-unit.html

Baking a π can teach you a bit of Parametricity
Sat, 13 Jun 2015

Even though I got my copy of Prof. Eugenia Cheng's awesome <a href="http://www.amazon.com/How-Bake-Pi-Exploration-Mathematics/dp/0465051715">How to Bake π</a> a couple of weeks back, I started reading it only over this weekend. I am only on page 19, enjoying all the stuff regarding cookies that Prof. Cheng uses to explain abstraction. This is a beautiful piece of explanation, and if you are a programmer you may get an extra mile out of the concepts she explains here. Let's see if we can unravel a few of them ..<br />
<br />
She starts with a real life situation such as:<br />
<br />
<blockquote>If Grandma gives you five cookies and Grandpa gives you five cookies, how many cookies will you have ?</blockquote><br />
Let's model this as boxes of cookies that you get from your Grandma and Grandpa; you need to count them and find the total. In Scala we may have something like the following ..<br />
<br />
<pre class="brush: scala">case class CookieBox(count: Int)</pre><br />
and we can define a function that gives you a <code>CookieBox</code> containing the total number of cookies from the 2 boxes that we pass to the function ..<br />
<br />
<pre class="brush: scala">def howManyCookies(gm: CookieBox, gp: CookieBox) = {
  CookieBox(gm.count + gp.count)
}</pre><br />
and we use <code>howManyCookies</code> to find the count ..<br />
<br />
<pre class="brush: scala">scala> val gm = CookieBox(5)
gm: CookieBox = CookieBox(5)
scala> val gp = CookieBox(5)
gp: CookieBox = CookieBox(5)
scala> howManyCookies(gm, gp)
res5: CookieBox = CookieBox(10)</pre><br />
.. so we have 10 cookies from our Grandma & Grandpa .. Perfect!<br />
<br />
The problem is .. the child answers: <i>"None, because I'll eat them all"</i>. To model this let's add a function <code>eat</code> to our <code>CookieBox</code> abstraction ..<br />
<br />
<pre class="brush: scala">case class CookieBox(count: Int) {
  // let's assume n < count for simplicity
  def eat(n: Int): CookieBox = CookieBox(count - n)
}</pre><br />
So instead of the correct way to answer the question, the child cheats and implements <code>howManyCookies</code> as ..<br />
<br />
<pre class="brush: scala">def howManyCookies(gm: CookieBox, gp: CookieBox) = {
  CookieBox(gm.eat(gm.count).count + gp.eat(gp.count).count)
}</pre><br />
and we get the following ..<br />
<br />
<pre class="brush: scala">scala> howManyCookies(gm, gp)
res6: CookieBox = CookieBox(0)</pre><br />
Prof. Cheng continues ..<br />
<br />
<blockquote>The trouble here is that cookies do not obey the rules of logic, so using math to study them doesn't quite work. .. We could impose an extra rule on the situation by adding "... and you're not allowed to eat the cookies". If you're not allowed to eat them, what's the point of them being cookies ?</blockquote><br />
This is profound indeed. When we are asked to count some stuff, it really doesn't matter whether they are cookies or stones or pastries. The only property we need here is the ability to add together the two things that we are handed. The fact that we have implemented <code>howManyCookies</code> in terms of <code>CookieBox</code> gives the little child the opportunity to cheat by using the <code>eat</code> function. More information is actually hurting us here - being concrete with data types creates more avenues for incorrect implementation.<br />
<br />
Prof. Cheng is succinct here when she explains ..<br />
<br />
<blockquote>We could treat the cookies as just things rather than cookies. We lose some resemblance to reality, but we gain scope and with it efficiency. The point of numbers is that we can reason about "things" without having to change the reasoning depending on what "thing" we are thinking about.</blockquote><br />
Yes, she is talking about generalization, being polymorphic over what we count. We just need the ability to add 2 "things", be it cookies, monkeys or anchovies. In programming we model this with parametric polymorphism, and use a universal quantification over the set of types for which we implement the behavior.<br />
<br />
<pre class="brush: scala">def howMany[A](gm: A, gp: A) = //..</pre><br />
We have made the implementation parametric and got rid of the concrete data type <code>CookieBox</code>. But how do we add the capability to sum the two objects and get the result ? You got it right - we already have an abstraction that makes this algebra available to a generic data type. Monoids FTW .. and it doesn't get simpler than this ..<br />
<br />
<pre class="brush: scala">trait Monoid[T] {
  def zero: T
  def append(t1: T, t2: T): T
}</pre><br />
<code>zero</code> is the identity element and <code>append</code> is an associative binary operation over two objects of the type. So given a monoid instance for our data type, we can model <code>howMany</code> in a completely generic way irrespective of whether <code>A</code> is a <code>CookieBox</code> or <code>Monkey</code>.<br />
<br />
<pre class="brush: scala">def howMany[A : Monoid](gm: A, gp: A): A = gm append gp</pre><br />
Implementing a monoid for <code>CookieBox</code> is also simple ..<br />
<br />
<pre class="brush: scala">object CookieBox {
implicit val CookieBoxMonoid = new Monoid[CookieBox] {
val zero = CookieBox(0)
def append(i: CookieBox, j: CookieBox) = CookieBox(i.count + j.count)
}
}
</pre>With the above implementation of <code>howMany</code>, the little child will not be able to cheat. By exposing a less informative, more abstract type we have made the implementation more robust and reusable across multiple data types.<br />
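To see the whole thing end to end, here is a self-contained sketch. It calls the monoid instance explicitly (the infix <code>gm append gp</code> above presumes an implicit syntax enrichment not shown), and the <code>Int</code> instance is an addition of mine for illustration:

```scala
// A self-contained sketch: the same generic howMany works for CookieBox
// and for plain Int, given a Monoid instance for each.
trait Monoid[T] {
  def zero: T
  def append(t1: T, t2: T): T
}

case class CookieBox(count: Int)

implicit val cookieBoxMonoid: Monoid[CookieBox] = new Monoid[CookieBox] {
  val zero = CookieBox(0)
  def append(i: CookieBox, j: CookieBox) = CookieBox(i.count + j.count)
}

implicit val intMonoid: Monoid[Int] = new Monoid[Int] {
  val zero = 0
  def append(i: Int, j: Int) = i + j
}

// explicit call on the instance, equivalent to the infix `gm append gp`
def howMany[A: Monoid](gm: A, gp: A): A = implicitly[Monoid[A]].append(gm, gp)

val cookies = howMany(CookieBox(3), CookieBox(4))  // CookieBox(7)
val ints    = howMany(3, 4)                        // 7
```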
<br />
Next time someone wants me to explain parametricity, I will point them to Page 19 of How to Bake π.http://debasishg.blogspot.com/2015/06/baking-can-teach-you-bit-of.htmlnoreply@blogger.com (Anonymous)8tag:blogger.com,1999:blog-22587889.post-7112394182068938457Wed, 25 Mar 2015 20:35:00 +00002015-03-26T02:05:31.939+05:30machine-learningRandomization and Probabilistic Techniques to scale up Machine Learning<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
Some time back I <a href="http://debasishg.blogspot.in/2015/01/probabilistic-techniques-data-streams.html">blogged</a> about the possibilities that probabilistic techniques and randomization bring on to the paradigm of stream computing. Big data architectures are concerned not only with high volume storage but also with low latency processing, and this is exactly where stream computing has a role to play. I discussed a few data structures like bloom filters, count min sketch and hyperloglog and algorithms like Locality Sensitive Hashing that use probabilistic techniques to reduce the search and storage space while processing huge volumes of data.<br />
<br />
Of late, I have been studying some of the theories behind machine learning algorithms and how they can be used in conjunction with the petabytes of data that we generate every day. And the same theme strikes here - there are algorithms that can model the perfect classifier, but you need randomization and probabilistic techniques to make them scale, even at the expense of a small amount of inaccuracy creeping into your model. In most cases we will see that the small inaccuracy that creeps into your algorithm because of probabilistic bounds can be compensated by the ability to process more data within the specified computational timeline. This is true even for some of the basic algorithms like matrix multiplication that form the core of machine learning models.<br />
<br />
The content of this post is nothing original or new - it just shares some of my thoughts on using approximation techniques in building machine learning classifiers.<br />
<br />
<h1>
Matrix Multiplication</h1>
<br />
Beyond these specialized data structures, randomization has been found to be quite effective for processing large data sets even in standard algorithmic problems like matrix multiplication, polynomial identity verification or min-cut identification in large graphs. In all such cases the best available deterministic algorithms have a computational complexity that works well for a small data set but doesn't scale with growing volumes of data.<br />
<br />
Consider a case where we are given 3 matrices, $A$, $B$ and $C$ and we need to verify if $AB = C$. The standard algorithm for matrix multiplication takes $\Theta(n^3)$ operations and there's also a sophisticated algorithm that works in $\Theta(n^{2.37})$ operations. Instead let's consider some randomization and choose a random vector $\bar{r} = (r_1, r_2, .. r_n) \in \{0, 1\}^n$. Now we can compute $AB\bar{r}$ by first computing $B\bar{r}$ and then $A(B\bar{r})$. And then we compute $C\bar{r}$. If we find $A(B\bar{r}) \neq C\bar{r}$, then $AB \neq C$. Otherwise we return $AB = C$. Instead of matrix-matrix multiplication our randomized algorithm uses matrix-vector multiplication, which can be done in $\Theta(n^2)$ operations the standard way.<br />
<br />
Obviously a $\Theta(n^2)$ algorithm has a lower computational complexity than $\Theta(n^3)$ and scales better with larger data sets. Now the question is: how accurate is this algorithm? Is it guaranteed to give the correct answer every time we run it? As with other probabilistic algorithms, there's a chance that our algorithm will return a wrong result. But as long as we can show that the chance is minimal and can be reduced by tuning some parameters, we should be fine.<br />
<br />
It can be shown that if $AB \neq C$ and if $\bar{r}$ is chosen uniformly at random from $\{0, 1\}^n$ then $Pr(AB\bar{r} = C\bar{r}) \leq 1/2$. But the trick is that we can run our randomized algorithm many times choosing $\bar{r}$ with replacement from $\{0, 1\}^n$. If for any of these trials we get $AB\bar{r} \neq C\bar{r}$, then we can conclude $AB \neq C$. And the probability that we get $AB\bar{r} = C\bar{r}$ for all $k$ trials despite $AB \neq C$ is $2^{-k}$. So for $100$ trials, the chance of error is $2^{-100}$, which we can see is really small. The detailed proof of this analysis can be found in the excellent book <a href="http://www.amazon.com/Probability-Computing-Randomized-Algorithms-Probabilistic/dp/0521835402">Probability and Computing</a> by Michael Mitzenmacher & Eli Upfal.<br />
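The scheme above (often credited to Freivalds) can be sketched in a few lines of Scala. This is an illustrative implementation - the names and the default trial count are my own choices:

```scala
// Freivalds' randomized check for AB = C. Each trial costs three
// matrix-vector products, i.e. Θ(n²); if AB ≠ C, a single trial detects
// the mismatch with probability ≥ 1/2, so k trials all pass despite
// AB ≠ C with probability ≤ 2^-k.
object Freivalds {
  type Mat = Vector[Vector[Double]]   // outer Vector indexes rows

  private def matVec(m: Mat, v: Vector[Double]): Vector[Double] =
    m.map(row => row.zip(v).map { case (a, b) => a * b }.sum)

  def probablyEqual(a: Mat, b: Mat, c: Mat, trials: Int = 30): Boolean = {
    val rng = new scala.util.Random
    (1 to trials).forall { _ =>
      // random 0/1 vector r; compare A(Br) against Cr
      val r = Vector.fill(b.head.length)(if (rng.nextBoolean()) 1.0 else 0.0)
      matVec(a, matVec(b, r)) == matVec(c, r)
    }
  }
}
```

For integer-valued matrices the comparison of `A(Br)` against `Cr` is exact; with general floating-point entries one would compare within a tolerance instead.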
<br />
Matrix multiplication is used heavily, especially in implementing machine learning classifiers. And if we can tolerate that little chance of error, we get an algorithm with lower computational complexity that scales much better.<br />
<br />
<h1>
Stochastic Gradient Descent</h1>
<br />
Consider another use case from core machine learning classifier design. Gradient descent is a standard way to minimize the empirical risk, which measures training set performance. The empirical risk is given by the following equation:<br />
$$E_n(f) = (1/n)\sum_i l(f_w(x_i),y_i)$$<br />
where $l$ is the loss function that measures the cost of predicting $f_w(x_i)$ when the actual answer is $y_i$, there are $n$ training examples, and $f_w(x)$ is the function parameterized by the weight vector $w$. Each iteration of gradient descent updates the weights $w$ on the basis of the gradient of $E_n(f_w)$ according to the following iterative step:<br />
<br />
$$w_{t+1} = w_t - \gamma (1/n) \sum_i \nabla_w Q(z_i, w_t)$$<br />
where $\gamma$ is an adequately chosen gain and $Q(z_i, w) = l(f_w(x_i), y_i)$ denotes the loss on the example $z_i = (x_i, y_i)$. Note that a single update step runs through all the training examples, and this gets repeated for every update step that you do before convergence. Compare this with Stochastic Gradient Descent (SGD), where the update step is given by the following:<br />
<br />
$$w_{t+1} = w_t - \gamma \nabla_w Q(z_t, w_t)$$<br />
Note that instead of running through all the examples and computing the exact gradient, SGD computes the gradient based on <i>one randomly picked example</i> $z_t$. So SGD makes a noisy approximation to the true gradient, but since it does not have to process all the examples in every iteration it scales better to large data sets. In his paper <a href="http://leon.bottou.org/publications/pdf/compstat-2010.pdf">Large Scale Machine Learning With Stochastic Gradient Descent</a>, Leon Bottou classifies the error in building the classifier into 3 components:<br />
<br />
<li><i>Approximation Error</i>, which comes from the fact that the family of functions from which we choose $f$ may not contain the optimal function $f^*$</li>
<br />
<br />
<li><i>Estimation Error</i>, which comes from the fact that we have only a finite number of training examples - it would vanish if we had an infinite number of them</li>
<br />
<br />
<li><i>Optimization Error</i>, which comes from the fact that we use an approximate optimization procedure to estimate the gradient rather than computing it exactly</li>
</div>
<br />
With batch gradient descent we will have a low optimization error, since every iteration runs through all the training examples to compute the exact gradient - clearly superior to SGD's noisy approximation. But SGD will have lower approximation and estimation errors, since it lets us process a larger data set within the stipulated computation time. So it's a tradeoff that we make in using SGD, but clearly we scale better with larger data sets.<br />
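To make the single-example update rule concrete, here is a minimal SGD sketch for one-dimensional least squares. The data set, gain and step count are made up for the illustration:

```scala
// A minimal SGD sketch for 1-D least squares: loss Q((x, y), w) = (w*x - y)^2.
// Each step follows the gradient of ONE randomly picked example - a noisy
// but cheap estimate of the full-batch gradient.
object SGD {
  def fit(examples: Vector[(Double, Double)],
          gamma: Double, steps: Int, seed: Long = 42L): Double = {
    val rng = new scala.util.Random(seed)
    var w = 0.0
    for (_ <- 1 to steps) {
      val (x, y) = examples(rng.nextInt(examples.length))
      val grad = 2.0 * (w * x - y) * x   // d/dw of (w*x - y)^2
      w -= gamma * grad
    }
    w
  }
}

// data drawn (noiselessly) from y = 3x: the estimate converges towards w = 3
val data = Vector.tabulate(500)(i => { val x = (i + 1) / 500.0; (x, 3.0 * x) })
val w = SGD.fit(data, gamma = 0.1, steps = 10000)
```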
<br />
<h1>
Singular Value Decomposition</h1>
<br />
Singular Value Decomposition is a dimensionality reduction technique to unearth a smaller number of intrinsic concepts from a high dimensional matrix by removing unnecessary information. It does so by projecting the original matrix on to lower dimensions such that the reconstruction error is minimized. What this means is that given a matrix $A$ we decompose it into lower dimensional matrices by removing the lesser important information. And we do this in such a way that we can reconstruct a fairly close approximation to $A$ from those lower dimensional matrices. In theory SVD gives the best possible projection in terms of reconstruction error (optimal low rank approximation). But in practice it suffers from scalability problems with large data sets. It generates dense singular vectors even if the original matrix is a sparse one and hence is computationally inefficient, taking cubic time in the size of the data.<br />
<br />
This can be addressed by another algorithm, CUR decomposition, which allows a larger reconstruction error in exchange for less computation time. CUR decomposes the original matrix into ones of lesser dimensions, but uses a randomized algorithm to select the columns and rows based on their probability distribution. Now it can be shown that the CUR reconstruction is just an additive term away from the SVD reconstruction - a probabilistic bound, subject to the condition that we select a specific number of columns and rows from $A$. The computational bound of CUR is linear in the size of the data set, which is much less than that of SVD (which, as I mentioned earlier, is cubic). This is yet another example where we apply randomization and probabilistic techniques to scale our algorithm better for larger data sets in exchange for a small amount of inaccuracy.<br />
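The norm-proportional column sampling that CUR-style decompositions rely on is easy to sketch. This is only the sampling step, not a full CUR implementation, and the names are my own:

```scala
// Norm-proportional column sampling, the randomized core of CUR-style
// decompositions. A matrix is represented as a Vector of its columns.
object ColumnSample {
  type Mat = Vector[Vector[Double]]   // outer Vector indexes columns

  // p_j = ||a_j||^2 / ||A||_F^2 : probability of picking column j
  def colProbs(a: Mat): Vector[Double] = {
    val sq = a.map(col => col.map(x => x * x).sum)
    sq.map(_ / sq.sum)
  }

  // draw c columns with replacement, rescaled by 1/sqrt(c * p_j) so that
  // C * C^T is an unbiased estimator of A * A^T
  def sample(a: Mat, c: Int, seed: Long = 42L): Mat = {
    val p = colProbs(a)
    val cum = p.scanLeft(0.0)(_ + _).tail
    val rng = new scala.util.Random(seed)
    Vector.fill(c) {
      val r = rng.nextDouble()
      val i = cum.indexWhere(_ >= r)
      val j = if (i < 0) a.length - 1 else i   // guard against fp round-off
      a(j).map(_ / math.sqrt(c * p(j)))
    }
  }
}
```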
<br />
These are only a few instances of probabilistic bounds being applied to solve real world machine learning problems. There are lots more. In fact I find that the scalability of machine learning correlates very directly with the application of probabilistic techniques to the model. As I mentioned earlier, the point of this post is to share some of my thoughts as I continue to learn techniques to scale up machine learning models. Feel free to share your ideas and thoughts in the comments.<br />
<div>
<br /></div>
</div>
http://debasishg.blogspot.com/2015/03/randomization-and-probabilistic.htmlnoreply@blogger.com (Anonymous)3tag:blogger.com,1999:blog-22587889.post-7594385420867612238Tue, 10 Feb 2015 19:45:00 +00002015-02-11T01:15:53.267+05:30dddfunctionalpatternsscalaFunctional Patterns in Domain Modeling - Composing a domain workflow with statically checked invariantsI have been doing quite a bit of domain modeling using functional programming mostly in Scala. And as it happens when you work on something for a long period of time you tend to identify more and more patterns that come up repeatedly within your implementations. You may ignore these as patterns the first time, get a feeling of mere coincidence the next time, but third time really gives you that aha! moment and you feel like documenting it as a design pattern. In course of my learnings I have started blogging on some of these patterns - you can find the earlier ones in the series in:<br />
<br />
<li><a href="http://debasishg.blogspot.in/2014/03/functional-patterns-in-domain-modeling.html">Functional Patterns in Domain Modeling - The Specification Pattern</a></li><br />
<li><a href="http://debasishg.blogspot.in/2014/04/functional-patterns-in-domain-modeling.html">Functional Patterns in Domain Modeling - Immutable Aggregates and Functional Updates</a></li><br />
<li><a href="http://debasishg.blogspot.in/2014/05/functional-patterns-in-domain-modeling.html">Functional Patterns in Domain Modeling - Anemic Models and Compositional Domain Behaviors</a></li><br />
<br />
In this continuing series of functional patterns in domain modeling, I will go through yet another idiom which has been a quite common occurrence in my explorations across various domain models. You will find many of these patterns explained in detail in my upcoming book on <a href="http://manning.com/ghosh2">Functional and Reactive Domain Modeling</a>, the early access edition of which has already been published by Manning.<br />
<br />
One of the things that I strive to achieve in implementing domain models is to use the type system to encode as much domain logic as possible. If you can use the type system effectively then you get the benefits of parametricity, which not only makes your code generic, concise and polymorphic, but also makes it self-testing. But that's another story which we can discuss in another post. In this post I will talk about a pattern that helps you design domain workflows compositionally, and also enables implementing domain invariants within the workflow - all done statically with a little help from the type system.<br />
<br />
As an example let's consider a loan processing system (simplified for illustration purposes) typically followed by banks issuing loans to customers. A typical simplified workflow looks like the following:<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPYRjDPShDDiudv3mK3mwa6uy4M225TCgYWtzXLEbEU_nZQe0uXvpTj27Fh01PLmj-gozDQy0prYNQNE7ddxLhTtHAwqFW3XYE9fzCinyGL83wSJHgwcmp90e4nzJXiCdrAvQMFA/s1600/Screen+Shot+2015-02-11+at+12.35.56+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPYRjDPShDDiudv3mK3mwa6uy4M225TCgYWtzXLEbEU_nZQe0uXvpTj27Fh01PLmj-gozDQy0prYNQNE7ddxLhTtHAwqFW3XYE9fzCinyGL83wSJHgwcmp90e4nzJXiCdrAvQMFA/s400/Screen+Shot+2015-02-11+at+12.35.56+AM.png" /></a></div><h1>The Domain Model</h1><br />
The details of each step are not important - we will focus on how we compose the sequence and ensure that the API verifies statically that the correct sequence is followed. Let's start with a domain model for the loan application - we will keep on enriching it as we traverse the workflow.<br />
<br />
<pre class="brush: scala">case class LoanApplication private[Loans](
// date of application
date: Date,
// name of applicant
name: String,
// purpose of loan
purpose: String,
// intended period of repayment in years
repayIn: Int,
// actually sanctioned repayment period in years
actualRepaymentYears: Option[Int] = None,
// actual start date of loan repayment
startDate: Option[Date] = None,
// loan application number
loanNo: Option[String] = None,
// emi
emi: Option[BigDecimal] = None
)</pre><br />
Note we have a bunch of attributes that are defined as optional and will be filled in later as the loan application traverses the workflow. Also we have declared the constructor <code>private[Loans]</code>, and we will have a smart constructor to create instances of the class. <br />
<br />
<h1>Wiring the workflow with Kleisli</h1><br />
Here are the various domain behaviors modeling the stages of the workflow .. I will be using the <a href="https://github.com/scalaz/scalaz">scalaz</a> library for the <code>Kleisli</code> implementation.<br />
<br />
<pre class="brush: scala">def applyLoan(name: String, purpose: String, repayIn: Int,
date: Date = today) =
LoanApplication(date, name, purpose, repayIn)
def approve = Kleisli[Option, LoanApplication, LoanApplication] { l =>
// .. some logic to approve
l.copy(
loanNo = scala.util.Random.nextString(10).some,
actualRepaymentYears = 15.some,
startDate = today.some
).some
}
def enrich = Kleisli[Option, LoanApplication, LoanApplication] { l =>
//.. may be some logic here
val x = for {
y <- l.actualRepaymentYears
s <- l.startDate
} yield (y, s)
l.copy(emi = x.map { case (y, s) => calculateEMI(y, s) }).some
}</pre><br />
<code>applyLoan</code> is the smart constructor that creates the initial instance of <code>LoanApplication</code>. The other two functions <code>approve</code> and <code>enrich</code> perform the approval and enrichment steps of the workflow. Note both of them return an enriched version of the <code>LoanApplication</code> within a Kleisli, so that we can use the power of <code>Kleisli</code> composition and wire them together to model the workflow ..<br />
<br />
<pre class="brush: scala">val l = applyLoan("john", "house building", 10)
val op = approve andThen enrich
op run l</pre><br />
When you have a sequence to model that takes an initial object and then applies a chain of functions, you can use plain function composition like <code>h(g(f(x)))</code>, the point free notation <code>(h compose g compose f)</code>, or the more readable order <code>(f andThen g andThen h)</code>. But in the above case we need to have effects along with the composition - we are returning <code>Option</code> from each stage of the workflow. So here instead of plain composition we need effectful composition of functions, and that's exactly what <code>Kleisli</code> offers. The <code>andThen</code> combinator in the above code snippet is actually a <code>Kleisli</code> composition aka function composition with effects.<br />
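To see what that effectful composition amounts to, here is a minimal hand-rolled Kleisli arrow specialized to <code>Option</code>. scalaz's version is generic in the monad; the names and stages here are mine:

```scala
// A hand-rolled Kleisli arrow for Option: andThen threads the value through,
// short-circuiting to None as soon as any stage fails.
case class KleisliOpt[A, B](run: A => Option[B]) {
  def andThen[C](next: KleisliOpt[B, C]): KleisliOpt[A, C] =
    KleisliOpt(a => run(a).flatMap(next.run))
}

// two Option-returning stages, analogous to approve and enrich
val parse = KleisliOpt[String, Int](s => s.toIntOption)
val recip = KleisliOpt[Int, Double](n => if (n != 0) Some(1.0 / n) else None)

val pipeline = parse andThen recip
val ok   = pipeline.run("4")    // Some(0.25)
val fail = pipeline.run("0")    // None - second stage fails
val bad  = pipeline.run("foo")  // None - first stage fails
```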
<br />
So we have everything the workflow needs, and clients use our API to construct workflows for processing loan applications. But one of the qualities of good API design is to make it difficult for the client to use the API in the wrong way. Consider what happens with the above design of the workflow if we invoke the sequence as <code>enrich andThen approve</code>. This violates the domain invariant that enrichment happens after approval - approval of the application generates information that the enrichment process needs. But because our types align, the compiler will be perfectly happy to let this semantically invalid composition pass through, and the error will be reported only at run time.<br />
<br />
Remembering that we have a static type system at our disposal, can we do better?<br />
<br />
<h1>Phantom Types in the Mix</h1><br />
Let's throw in some more types and see if we can tag in some more information for the compiler to help us. Let's tag each state of the workflow with a separate type ..<br />
<br />
<pre class="brush: scala">trait Applied
trait Approved
trait Enriched</pre><br />
Finally make the main model <code>LoanApplication</code> parameterized on a type that indicates which state it is in. And we have some helpful type aliases ..<br />
<br />
<pre class="brush: scala">case class LoanApplication[Status] private[Loans]( //..
type LoanApplied = LoanApplication[Applied]
type LoanApproved = LoanApplication[Approved]
type LoanEnriched = LoanApplication[Enriched]</pre><br />
These types will have no role in modeling domain behaviors - they will just be used to dispatch to the correct state of the sequence that the domain invariants mandate. The workflow functions need to be modified slightly to take care of this ..<br />
<br />
<pre class="brush: scala">def applyLoan(name: String, purpose: String, repayIn: Int,
date: Date = today) =
LoanApplication[Applied](date, name, purpose, repayIn)
def approve = Kleisli[Option, LoanApplied, LoanApproved] { l =>
l.copy(
loanNo = scala.util.Random.nextString(10).some,
actualRepaymentYears = 15.some,
startDate = today.some
).some.map(identity[LoanApproved])
}
def enrich = Kleisli[Option, LoanApproved, LoanEnriched] { l =>
val x = for {
y <- l.actualRepaymentYears
s <- l.startDate
} yield (y, s)
l.copy(emi = x.map { case (y, s) => calculateEMI(y, s) }).some.map(identity[LoanEnriched])
}</pre><br />
Note how we use the phantom types within the <code>Kleisli</code> and ensure statically that the sequence can flow only in one direction - that which is mandated by the domain invariant. So now an invocation of <code>enrich andThen approve</code> will result in a compilation error because the types don't match. So once again yay! for having the correct encoding of domain logic with proper types.http://debasishg.blogspot.com/2015/02/functional-patterns-in-domain-modeling.htmlnoreply@blogger.com (Anonymous)14tag:blogger.com,1999:blog-22587889.post-7998047863948966395Wed, 31 Dec 2014 19:20:00 +00002015-01-01T00:52:45.920+05:30data-structuresmachine-learningstream-processingProbabilistic techniques, data streams and online learning - Looking forward to a bigger 2015<div dir="ltr" style="text-align: left;" trbidi="on">
I look forward to 2015 as the year when randomized algorithms, probabilistic techniques and data structures become more pervasive and mainstream. The primary driving factors for this will be the growing prevalence of big data and the necessity to process it in near real time using minimal (or constant) memory bandwidth. You are given data streams where you will possibly see each data item only once in your lifetime, and you need to churn out analytics from them in real time. You cannot afford to store all of them in a database on disk, since that would incur an unrealistic performance penalty for serving queries in real time. And you cannot afford to store all the information in memory, even if you add RAM at will. You need to find clever ways to optimize your storage and employ algorithms and data structures that use sublinear space yet deliver information in real time.<br />
<br />
Many such data structures are already being used quite heavily for specialized processing of data streams ..<br />
<br />
<ul style="text-align: left;">
<li><a href="http://en.wikipedia.org/wiki/Bloom_filter">Bloom Filters</a> for set membership using sublinear storage</li>
<li><a href="http://research.google.com/pubs/pub40671.html">HyperLogLog</a> for efficient cardinality estimation</li>
<li><a href="http://debasishg.blogspot.in/2014/01/count-min-sketch-data-structure-for.html">Count Min Sketch</a> for estimating frequency related properties of a data stream</li>
</ul>
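As a flavor of how compact these structures are, here is a toy Bloom filter sketch. The double-hashing scheme and the parameters are illustrative choices of mine, not a production design:

```scala
// Toy Bloom filter: m bits, k probe positions per element derived from two
// base hashes (double hashing). Lookups can yield false positives but
// never false negatives.
class BloomFilter(m: Int, k: Int) {
  private val bits = new scala.collection.mutable.BitSet(m)

  private def positions(x: String): Seq[Int] = {
    val h1 = x.hashCode
    val h2 = x.reverse.hashCode | 1   // force h2 odd so the probes spread out
    (0 until k).map(i => math.floorMod(h1 + i * h2, m))
  }

  def add(x: String): Unit = positions(x).foreach(bits += _)
  def mightContain(x: String): Boolean = positions(x).forall(bits)
}

val bf = new BloomFilter(m = 1024, k = 3)
Seq("alice", "bob").foreach(bf.add)
val present = bf.mightContain("alice")  // true - members are never missed
val absent  = bf.mightContain("carol")  // almost surely false at this load
```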
<br />
These data structures are becoming more and more useful as we prepare to embrace and process larger data sets with fairly strict online requirements. And it has started making a difference. Take for example Impala, the open source analytic database from Cloudera that works on top of Hadoop. Impala's NDV aggregate function (number of distinct values) uses the HyperLogLog algorithm to estimate this number, in parallel, in a fixed amount of space. This <a href="https://gist.github.com/avibryant/8275649">blog post</a> has the details of the performance improvement that it offers in comparison to the standard distinct count. The immensely popular NoSQL store Redis also offers a HyperLogLog implementation that you can use to get an approximation on the cardinality of a set using randomization. Salvatore has the details <a href="http://antirez.com/news/75">here</a> on the implementation of HyperLogLog algorithm in Redis.<br />
<br />
The most important reason these algorithms and data structures are becoming popular is the increased focus on our <i>"online"</i> requirements. We are not only processing bigger and bigger data sets, we need results faster too. We just cannot afford to push all analytics to batch mode and expect results to come out after an overnight run. Various architectural paradigms like the <a href="http://en.wikipedia.org/wiki/Lambda_architecture">lambda architecture</a> also target this niche area. But before investing in such complex architectures, some neat data structures that use probabilistic techniques and randomization may often offer the much lighter weight solution that you are looking for.<br />
<br />
Consider processing the Twitter stream and generating analytics (of whatever form) <i>online</i>. This means that immediately after seeing one item in the feed you must be able to predict something and update your model at the same time. That is, you need to absorb the data you see in the feed and use it to update your model, yet you cannot store the entire firehose that you have seen so far. This is <i>online learning</i> and is the essence of techniques like <a href="http://en.wikipedia.org/wiki/Stochastic_gradient_descent">stochastic gradient descent</a> that help you do this - the model is capable of making up-to-date predictions after every data item that you see. John Myles White has an excellent <a href="https://www.hakkalabs.co/articles/streaming-data-analysis-and-online-learning/#!">presentation</a> on this topic.<br />
<br />
Consider this other problem of detecting similarities between documents. When you are doing this at Web scale you will have to deal with millions of documents to find the similar sets. There are techniques like <a href="http://en.wikipedia.org/wiki/MinHash">minhash</a> that enable you to compress documents into signature matrices. But even then the scale becomes too big to be processed and reported to the user in a meaningful amount of time. As an example (from <a href="http://www.mmds.org/">Mining Massive Datasets</a>), if you process 1 million documents using signatures of length 250, you still use 1000 bytes per document - the total comes to 1 gigabyte, which fits comfortably into the memory of a standard laptop. But when you check for similar pairs, you need to process (1,000,000 choose 2), or half a trillion, pairs of documents, which will take almost 6 days to compute on a laptop. Enter probabilistic techniques, and the <a href="http://www.mit.edu/~andoni/LSH/">locality sensitive hashing</a> (LSH) algorithm fits this problem like a charm. Detecting similarity is a problem that arises in recommender systems with collaborative filtering, and LSH can be used there as well. The basic idea of LSH as applied to similarity detection is to hash multiple times and identify candidate pairs that qualify for similarity checking. The idea is to reduce the search space using probabilistic techniques so that we can eliminate a class of candidates which have a very low chance of being similar.<br />
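The signature idea behind minhash can be sketched in a few lines: k random affine hashes, a signature of per-hash minima over a set, and signature agreement as a Jaccard estimator. The hash family and parameters here are illustrative choices of mine:

```scala
// MinHash: the fraction of positions where two signatures agree is an
// unbiased estimate of the Jaccard similarity of the underlying sets.
object MinHash {
  private val p = 2147483647L   // Mersenne prime 2^31 - 1

  // k random affine hash functions h(x) = (a*x + b) mod p
  def hashFns(k: Int, seed: Long): Vector[Int => Long] = {
    val rng = new scala.util.Random(seed)
    Vector.fill(k) {
      val (a, b) = (1L + rng.nextInt(1 << 30), rng.nextInt(1 << 30).toLong)
      (x: Int) => (a * x + b) % p
    }
  }

  // one signature entry per hash function: the minimum hash over the set
  def signature(s: Set[Int], hs: Vector[Int => Long]): Vector[Long] =
    hs.map(h => s.map(h).min)

  def estimateJaccard(s1: Set[Int], s2: Set[Int], k: Int = 256): Double = {
    val hs = hashFns(k, seed = 42L)
    val (g1, g2) = (signature(s1, hs), signature(s2, hs))
    g1.zip(g2).count { case (a, b) => a == b }.toDouble / k
  }
}

val a = (1 to 100).toSet
val b = (51 to 150).toSet                 // true Jaccard = 50/150 ≈ 0.33
val est = MinHash.estimateJaccard(a, b)   // close to 0.33, up to sampling error
```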
<br />
Here I have only scratched the surface of the areas where we apply randomization and probabilistic techniques to solve problems that are very real today. There are plenty of other areas in data mining, graph clustering, machine learning and big data processing where similar techniques are employed to reduce the curse of dimensionality and provide practical solutions at scale. 2014 has already seen a big surge in popularizing these techniques. I expect 2015 to be bigger and more mainstream in terms of their usage.<br />
<br />
Personally I have been exploring data stream algorithms a lot and have prepared a <a href="https://gist.github.com/debasishg/8172796">collection</a> of some useful references. Feel free to share in case you find it useful. I hope to do something more meaningful with stream processing data structures and online learning in 2015. Have a very happy and joyous new year ..<br />
<div>
<br /></div>
</div>
http://debasishg.blogspot.com/2015/01/probabilistic-techniques-data-streams.htmlnoreply@blogger.com (Anonymous)0tag:blogger.com,1999:blog-22587889.post-822205679628878691Mon, 03 Nov 2014 07:45:00 +00002014-11-03T13:17:02.301+05:30dddfunctionalscalaFunctional and Reactive Domain Modeling<div dir="ltr" style="text-align: left;" trbidi="on">
Manning has launched the MEAP of my <a href="http://manning.com/ghosh2">upcoming book</a> on Domain Modeling.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgVaMNp24AZcLDlZ5-Hii0A_Br9L97yOvQUiC1o2oDzugWzUflmLFj_t8X6h5FkFgbw1ZmuDxZEzGNXU55z8Qkuo4oLDyhVOHFmJc3wkI1X7-eqXGXgdbvbunJHHgGuA33nYjUaA/s1600/Screen+Shot+2014-11-01+at+1.00.08+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgVaMNp24AZcLDlZ5-Hii0A_Br9L97yOvQUiC1o2oDzugWzUflmLFj_t8X6h5FkFgbw1ZmuDxZEzGNXU55z8Qkuo4oLDyhVOHFmJc3wkI1X7-eqXGXgdbvbunJHHgGuA33nYjUaA/s1600/Screen+Shot+2014-11-01+at+1.00.08+PM.png" height="189" width="320" /></a></div>
<br />
<br />
The first time I was formally introduced to the topic was way back when I played around with Eric Evans' <a href="http://www.amazon.in/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215">awesome text</a> on the subject of Domain Driven Design. In the book he discusses various object lifecycle patterns like the Factory, Aggregate and Repository that help separate concerns when you are implementing the various interactions between the elements of the domain model. Entities are artifacts with identities, value objects are pure values, while services model the coarse level use cases of the model components.<br />
<br />
Traditionally we followed Eric's recommendations in our real world implementations and used the object oriented paradigm for modeling all interactions. We started talking about rich domain models and anemic domain models. The rich model espoused a richer agglomeration of state and behavior within the model, while the anemic model preferred to keep them decoupled. Martin Fowler sums this up in his <a href="http://www.martinfowler.com/bliki/AnemicDomainModel.html">post</a> on Anemic Domain Models ..<br />
<blockquote class="tr_bq">
"The basic symptom of an Anemic Domain Model is that at first blush it looks like the real thing. There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data."</blockquote>
<h2 style="text-align: left;">
</h2>
<h2 style="text-align: left;">
Go Functional</h2>
In <i>Functional and Reactive Domain Modeling</i> I look at the problem with a different lens. The primary focus of the book is to encourage building domain models using the principles of functional programming. It's an approach completely orthogonal to OO, and it focuses on verbs first (as opposed to nouns first in OO), algebra first (as opposed to objects in OO), function composition first (as opposed to object composition in OO), and lightweight objects as ADTs (instead of rich class models).<br />
<br />
The book starts with the basics of functional programming principles and discusses the virtues of purity and the advantages of keeping side-effects decoupled from the core business logic. The book uses Scala as the programming language and does an extensive discussion on why the OO and functional features of Scala are a perfect fit for modeling complex domains. Chapter 3 starts the core subject of functional domain modeling with real world examples illustrating how we can make good use of patterns like smart constructors, monads and monoids in implementing your domain model. The main virtue that these patterns bring to your model is genericity - they help you extract generic algebra from domain specific logic into parametric functions which are far more reusable and less error prone. Chapter 4 focuses on advanced usages like typeclass based design and patterns like monad transformers, kleislis and other forms of compositional idioms of functional programming. One of the primary focuses of the book is algebraic API design and developing an appreciation of the ability to reason about your model.<br />
<h2 style="text-align: left;">
</h2>
<h2 style="text-align: left;">
Go Reactive</h2>
The term <i>"reactive"</i> has recently become quite popular in describing systems that are responsive, scalable and adaptive. In implementing complex domain models, we find many areas which can be made more responsive by implementing them as non blocking operations instead of the standard blocking function calls. Using higher level concurrency primitives like actors, futures and data flow based computing we can compose asynchronous operations and increase the net throughput of the model. The second part of the book discusses how you can combine the principles of functional programming with the reactive way of implementing behaviors. The paradigm of application design using the principles of functional programming together with an asynchronous non-blocking mode of communication between the participating entities promises to be a potent combination towards developing performant systems that are relatively easy to manage, maintain and evolve. But designing and implementing such models needs a different way of thinking. The behaviors that you implement have to be composable using pure functions, and they can form the building blocks of bigger abstractions that communicate among themselves using non-blocking asynchronous message passing.<br />
<h2 style="text-align: left;">
</h2>
<h2 style="text-align: left;">
Functional meets Reactive</h2>
<i>Functional and Reactive Domain Modeling</i> takes you through the steps, teaching you how to think of the domain model in terms of pure functions and how to compose them to build larger abstractions. You will start learning with the basics of functional programming principles and gradually progress to the advanced concepts and patterns that you need to know to implement complex domain models. The book demonstrates how advanced FP patterns like algebraic data types, typeclass based design and isolation of side-effects can make your model compose for readability and verifiability. On the subject of reactive modeling, the book focuses on the higher order concurrency patterns like actors and futures. It uses the Akka framework as the reference implementation and demonstrates how advanced architectural patterns like <a href="http://martinfowler.com/eaaDev/EventSourcing.html">event sourcing</a> and <a href="http://martinfowler.com/bliki/CQRS.html">command-query-responsibility-segregation</a> can be put to great use while implementing scalable models. You will learn techniques that are radically different from the standard RDBMS based applications that are based on mutation of records. You’ll also pick up important patterns like using asynchronous messaging for interaction based on non-blocking concurrency and model persistence, which delivers the speed of in-memory processing along with suitable guarantees of reliability.<br />
<br />
Looking forward to an exciting journey with the book. I am sure you will find the topics that I discuss there interesting as well. And feel free to jump on to <a href="http://www.manning-sandbox.com/forum.jspa?forumID=935">AuthorOnline</a> and fire questions that we all can discuss. I am sure this will also lead to an overall improvement in the quality of the book.</div>
http://debasishg.blogspot.com/2014/11/functional-and-reactive-domain-modeling.htmlnoreply@blogger.com (Anonymous)1tag:blogger.com,1999:blog-22587889.post-3133487324467498659Mon, 12 May 2014 09:03:00 +00002014-05-12T14:33:21.750+05:30dddfunctionalscalaFunctional Patterns in Domain Modeling - Anemic Models and Compositional Domain BehaviorsI was looking at the <a HREF="http://www.slideshare.net/deanwampler/reactive-design-languages-and-paradigms">presentation</A> that Dean Wampler made recently regarding domain driven design, anemic domain models and how using functional programming principles helps ameliorate some of the problems there. There are some statements that he made which, I am sure, made many OO practitioners chuckle. They contradict popular beliefs that encourage OOP as the primary way of modeling using DDD principles.<br />
<br />
One statement that resonates a lot with my thought is <i>"DDD encourages understanding of the domain, but don't implement the models"</I>. DDD does a great job in encouraging developers to understand the underlying domain model and ensuring a uniform vocabulary throughout the lifecycle of design and implementation. This is what design patterns also do by giving you a vocabulary that you can freely exchange with your fellow developers without touching any bit of the underlying pattern's implementation.<br />
<br />
On the flip side of it, trying to implement DDD concepts using standard techniques of OO with joined state and behavior often gives you a muddled mutable model. The model may look rich in the sense that all concepts related to the particular domain abstraction are baked into the class you are modeling. But it makes the class fragile as well, since the abstraction becomes more locally focused, losing the global perspective of reusability and composability. As a result, when you try to compose multiple abstractions within the domain service layer, it becomes polluted with too much glue code that resolves the impedance mismatch between class boundaries.<br />
<br />
So when Dean claims <i>"Models should be anemic"</I>, I think he means to avoid this bundling of state and behavior within the domain object that gives you a false sense of richness in the model. He encourages the practice of building domain objects that carry only state, while behaviors are modeled as standalone functions.<br />
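Dean's point can be made concrete in a few lines of Scala. Here's a sketch of my own (the <code>Account</code> example is made up, not from Dean's talk): the domain object carries nothing but state, and the behaviors live as standalone functions in a module ..<br />
<br />
<pre class="brush: scala">// state-only domain object - no behavior bundled in
case class Account(no: String, balance: BigDecimal)

// behaviors as standalone functions, grouped in a module
object AccountService {
  def credit(a: Account, amount: BigDecimal): Account =
    a.copy(balance = a.balance + amount)

  def debit(a: Account, amount: BigDecimal): Either[String, Account] =
    if (amount > a.balance) Left("insufficient funds")
    else Right(a.copy(balance = a.balance - amount))
}

val a = Account("a-1", BigDecimal(100))
AccountService.credit(a, 50)   // balance becomes 150
AccountService.debit(a, 500)   // Left(insufficient funds)</pre><br />
Note how <code>debit</code> can make its failure mode explicit in the return type - something that a mutating method on a "rich" class usually hides behind an exception.<br />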
<br />
<blockquote class="twitter-tweet" lang="en"><p>Sometimes, the elegant implementation is just a function. Not a method. Not a class. Not a framework. Just a function.</p>— John Carmack (@ID_AA_Carmack) <a href="https://twitter.com/ID_AA_Carmack/statuses/53512300451201024">March 31, 2011</a></blockquote><script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script><br />
<br />
One other strawman argument that I come across very frequently is that bundling state and behavior by modeling the latter as methods of the class increases encapsulation. If you are still a believer of this school of thought, have a look at Scott Meyers' excellent <a HREF="http://www.drdobbs.com/cpp/how-non-member-functions-improve-encapsu/184401197">article</A> which he wrote as early as 2000. He eschews the view that a class is the right level of modularization and encourages more powerful module systems as better containers of your domain behaviors. <br />
<br />
As continuation of my series on functional domain modeling, we continue with the example of the earlier posts and explore the theme that Dean discusses ..<br />
<br />
Here's the anemic domain model of the Order abstraction ..<br />
<br />
<pre class="brush: scala">case class Order(orderNo: String, orderDate: Date, customer: Customer,
  lineItems: Vector[LineItem], shipTo: ShipTo,
  netOrderValue: Option[BigDecimal] = None, status: OrderStatus = Placed)</pre><br />
In the <a HREF="http://debasishg.blogspot.in/2014/04/functional-patterns-in-domain-modeling.html">earlier</A> <a HREF="http://debasishg.blogspot.in/2014/03/functional-patterns-in-domain-modeling.html">posts</A> we discussed how to implement the Specification and Aggregate Patterns of DDD using functional programming principles. We also discussed how to do functional updates of aggregates using data structures like Lens. In this post we will use these as the building blocks, use more functional patterns and build larger behaviors that model the ubiquitous language of the domain. After all, one of the basic principles behind DDD is to lift the domain model vocabulary into your implementation so that the functionality becomes apparent to the developer maintaining your model. <br />
<br />
The core idea is to validate the assumption that building domain behaviors as standalone functions leads to an effective realization of the domain model according to the principles of DDD. The base classes of the model contain only state, which is updated functionally. All domain behaviors are modeled through functions that reside within the module that represents the aggregate.<br />
<br />
Functions compose, and that's precisely how we will chain sequences of domain behaviors to build bigger abstractions out of smaller ones. Here's a small function that values an Order. Note it returns a <CODE>Kleisli</CODE>, which essentially gives us a composition over monadic functions. So instead of composing <CODE>a -> b</CODE> and <CODE>b -> c</CODE>, which we do with normal function composition, we can do the same over <CODE>a -> m b</CODE> and <CODE>b -> m c</CODE>, where <CODE>m</CODE> is a monad. Composition with effects, if you will. <br />
<br />
<pre class="brush: scala">def valueOrder = Kleisli[ProcessingStatus, Order, Order] { order =>
  val o = orderLineItems.set(
    order,
    setLineItemValues(order.lineItems)
  )
  o.lineItems.map(_.value).sequenceU match {
    case Some(_) => right(o)
    case _       => left("Missing value for items")
  }
}</pre><br />
But what does that buy us ? What exactly do we gain from these functional patterns ? It's the power to abstract over families of similar abstractions like applicatives and monads. Well, that may sound a bit rhetorical, and it needs a separate post to justify its use. Stated simply, these patterns encapsulate the effects and side-effects of your computation so that you can focus on the domain behavior itself. Have a look at the <CODE>process</CODE> function below - it's actually a composition of monadic functions in action. But all the machinery that does the processing of effects and side-effects is abstracted within the <CODE>Kleisli</CODE> itself so that the user level implementation is simple and concise. <br />
<br />
With <code>Kleisli</CODE> it's the power to compose over monadic functions. Every domain behavior has a chance of failure, which we model using the <code>Either</CODE> monad - here <code>ProcessingStatus</CODE> is just a type alias for this .. <code>type ProcessingStatus[S] = \/[String, S]</CODE>. Using the <code>Kleisli</CODE>, we don't have to write any code for handling failures. As you will see below, the composition reads just like that of normal functions - the design pattern takes care of the alternate flows.<br />
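To see concretely how this "composition with effects" plays out, here's a minimal, self-contained sketch - the names <code>Step</code>, <code>parse</code> and <code>checkPositive</code> are mine, and this is a simplification of what scalaz's <code>Kleisli</code> actually provides. Each step is a function <code>A => Either[String, B]</code>, and <code>andThen</code> threads the <code>Either</code> through so that the first <code>Left</code> short-circuits the rest of the pipeline ..<br />
<br />
<pre class="brush: scala">// a simplified stand-in for Kleisli over Either (Scala 2.13+)
case class Step[A, B](run: A => Either[String, B]) {
  // compose with another effectful step - a Left anywhere aborts the chain
  def andThen[C](next: Step[B, C]): Step[A, C] =
    Step(a => run(a).flatMap(next.run))
}

val parse = Step[String, Int](s => s.toIntOption.toRight(s"not a number: $s"))
val checkPositive = Step[Int, Int](n => if (n > 0) Right(n) else Left(s"not positive: $n"))

val pipeline = parse andThen checkPositive

pipeline.run("42")   // Right(42)
pipeline.run("-1")   // Left(not positive: -1)
pipeline.run("foo")  // Left(not a number: foo)</pre><br />
No failure handling appears at the use site - exactly the property that lets the domain behaviors here stay focused on the domain logic.<br />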
<br />
Once the <code>Order</CODE> is valued, we need to apply discounts to qualifying items. It's another behavior that follows the same pattern of implementation as <code>valueOrder</CODE>.<br />
<br />
<pre class="brush: scala">def applyDiscounts = Kleisli[ProcessingStatus, Order, Order] { order =>
  // set the discounts on the line items before checking them
  // (assuming a setLineItemDiscounts helper analogous to setLineItemValues)
  val o = orderLineItems.set(
    order,
    setLineItemDiscounts(order.lineItems)
  )
  o.lineItems.map(_.discount).sequenceU match {
    case Some(_) => right(o)
    case _       => left("Missing discount for items")
  }
}</pre><br />
Finally we check out the <code>Order </CODE>..<br />
<br />
<pre class="brush: scala">def checkOut = Kleisli[ProcessingStatus, Order, Order] { order =>
  val netOrderValue = order.lineItems.foldLeft(BigDecimal(0).some) { (s, i) =>
    s |+| (i.value |+| i.discount.map(d =>
      Tags.Multiplication(BigDecimal(-1)) |+| Tags.Multiplication(d)))
  }
  right(orderNetValue.set(order, netOrderValue))
}</pre><br />
And here's the service method that composes all of the above domain behaviors into the big abstraction. We don't have any object to instantiate. Just plain function composition that results in an expression modeling the entire flow of events. And it's the cleanliness of abstraction that makes the code readable and succinct.<br />
<br />
<pre class="brush: scala">def process(order: Order) = {
  (valueOrder andThen
   applyDiscounts andThen
   checkOut) =<< right(orderStatus.set(order, Validated))
}</pre>
In case you are interested in the full source code of this small example, feel free to take a peek at my <A HREF="https://github.com/debasishg/scala-snippets/blob/master/src/main/scala/aggregate.scala">github repo</A>.<br />
http://debasishg.blogspot.com/2014/05/functional-patterns-in-domain-modeling.htmlnoreply@blogger.com (Anonymous)3tag:blogger.com,1999:blog-22587889.post-5635474409431333107Sun, 06 Apr 2014 15:19:00 +00002014-04-06T20:49:49.264+05:30dddfunctionalscalaFunctional Patterns in Domain Modeling - Immutable Aggregates and Functional UpdatesIn the last <a href="http://debasishg.blogspot.in/2014/03/functional-patterns-in-domain-modeling.html">post</a> I looked at a pattern that enforces constraints to ensure domain objects honor the domain rules. But what exactly is a domain object ? What should be the granularity of an object that my solution model should expose so that it makes sense to a domain user ? After all, the domain model should speak the language of the domain. We may have a cluster of entities modeling various concepts of the domain. But only some of them can be published as abstractions to the user of the model. The others can be treated as implementation artifacts and are best hidden under the covers of the published ones.<br />
<br />
An aggregate in domain driven design is a published abstraction that provides a single point of interaction to a specific domain concept. Considering the classes I introduced in the last post, an <code>Order</code> is an aggregate. It encapsulates the details that an <code>Order</code> is composed of in the real world (well, only barely in this example, which is only for illustration purposes :-)). <br />
<br />
Note an aggregate can consist of other aggregates - e.g. we have a <code>Customer</code> instance within an <code>Order</code>. Eric Evans in his book on <a href="http://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215">Domain Driven Design</a> provides an excellent discussion of what constitutes an Aggregate.<br />
<br />
<h1>Functional Updates of Aggregates with Lens</h1><br />
This is not a post about Aggregates and how they fit in the realm of domain driven design. In this post I will talk about how to use some patterns to build immutable aggregates. Immutable data structures offer a lot of advantages, so build your aggregates from the ground up as immutable objects. Algebraic Data Type (ADT) is one of the patterns to build immutable aggregate objects. Primarily coming from the domain of functional programming, ADTs offer powerful techniques of pattern matching that help you match values against patterns and bind variables to successful matches. In Scala we use case classes as ADTs that give immutable objects out of the box ..<br />
<br />
<pre class="brush: scala">case class Order(orderNo: String, orderDate: Date, customer: Customer,
  lineItems: Vector[LineItem], shipTo: ShipTo, netOrderValue: Option[BigDecimal] = None,
  status: OrderStatus = Placed)
</pre><br />
Like all good aggregates, we need to provide a single point of interaction to users. Of course we can access all properties using accessors of case classes. But what about updates ? We can update the <code>orderNo</code> of an order like this ..<br />
<br />
<pre class="brush: scala">val o = Order( .. )
o.copy(orderNo = newOrderNo)</pre><br />
which gives us a copy of the original order with the new order no. We don't mutate the original order. But anybody having some knowledge of Scala will realize that this becomes pretty clunky when we have to deal with nested object updates. E.g. in the above case, <code>ShipTo</code> is defined as follows ..<br />
<br />
<pre class="brush: scala">case class Address(number: String, street: String, city: String, zip: String)
case class ShipTo(name: String, address: Address)</pre><br />
So, here you go in order to update the zip code of a <code>ShipTo</code> ..<br />
<br />
<pre class="brush: scala">val s = ShipTo("abc", Address("123", "Monroe Street", "Denver", "80233"))
s.copy(address = s.address.copy(zip = "80231"))</pre><br />
Not really pleasing, and it can get out of hand pretty quickly as the nesting gets deeper.<br />
<br />
In our domain model we use an abstraction called a <code>Lens</code> for updating Aggregates. In layman's terms, a lens is an encapsulated <code>get</CODE> and <code>set</CODE> combination. The <code>get</CODE> extracts a small part from a larger whole, while the <code>set </CODE>transforms the larger abstraction with a smaller part taken as a parameter. <br />
<br />
<pre class="brush: scala">case class Lens[A, B](get: A => B, set: (A, B) => A)</pre><br />
This is a naive definition of a <code>Lens</CODE> in Scala. Sophisticated <code>lens</CODE> designs go a long way to ensure proper abstraction and composition. scalaz provides one such implementation out of the box that exploits the similarity in structure between the <code>get</CODE> and the <code>set</CODE> to generalize the lens definition in terms of another abstraction named <code>Store</CODE>. As it happens so often in functional programming, <code>Store</CODE> happens to abstract yet another pattern called the <code>Comonad</CODE>. You can think of a <code>Comonad</CODE> as the inverse of a <code>Monad</CODE>. But in case you are more curious, and have wondered how lenses form <i>"the Coalgebras for the Store Comonad"</i>, have a look at the 2 papers <a href="http://r6research.livejournal.com/23705.html">here</a> and <a href="https://www.fpcomplete.com/user/tel/lenses-from-scratch">here</a>.<br />
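To get a feel for how such composition works under the hood, the naive <code>Lens</code> above can be equipped with an <code>andThen</code> in a few lines. This is only a sketch of the idea - scalaz's implementation is considerably more general ..<br />
<br />
<pre class="brush: scala">case class Lens[A, B](get: A => B, set: (A, B) => A) {
  // focus deeper: the composed get drills into the inner part, while the
  // composed set rebuilds the outer structure around the updated inner part
  def andThen[C](inner: Lens[B, C]): Lens[A, C] =
    Lens(
      a => inner.get(get(a)),
      (a, c) => set(a, inner.set(get(a), c))
    )
}

case class Address(number: String, street: String, city: String, zip: String)
case class ShipTo(name: String, address: Address)

val shipToAddress = Lens[ShipTo, Address](_.address, (s, a) => s.copy(address = a))
val addressZip    = Lens[Address, String](_.zip, (a, z) => a.copy(zip = z))

// composed lens: update the zip of a ShipTo without nested copy calls
val shipToZip = shipToAddress andThen addressZip

val s = ShipTo("abc", Address("123", "Monroe Street", "Denver", "80233"))
shipToZip.set(s, "80231")  // zip becomes "80231", everything else untouched</pre><br />
Compare this with the nested <code>copy</code> version earlier - the composed lens reads declaratively and scales to any depth of nesting.<br />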
<br />
Anyway for us mere domain modelers, we will use the <code>Lens</CODE> implementation as in scalaz .. here's a lens that helps us update the <code>OrderStatus</code> within an <code>Order</code> ..<br />
<br />
<pre class="brush: scala">val orderStatus = Lens.lensu[Order, OrderStatus] (
  (o, value) => o.copy(status = value),
  _.status
)</pre><br />
and use it as follows ..<br />
<br />
<pre class="brush: scala">val o = Order( .. )
orderStatus.set(o, Placed)</pre><br />
will change the <code>status</CODE> field of the <code>Order</code> to <code>Placed</code>. Let's have a look at some of the compositional properties of a lens which help us write readable code for functionally updating nested structures. <br />
<br />
<h1>Composition of Lenses</H1>First let's define some individual lenses ..<br />
<br />
<pre class="brush: scala">// lens for updating a ShipTo of an Order
val orderShipTo = Lens.lensu[Order, ShipTo] (
  (o, sh) => o.copy(shipTo = sh),
  _.shipTo
)

// lens for updating an address of a ShipTo
val shipToAddress = Lens.lensu[ShipTo, Address] (
  (sh, add) => sh.copy(address = add),
  _.address
)

// lens for updating a city of an address
val addressToCity = Lens.lensu[Address, String] (
  (add, c) => add.copy(city = c),
  _.city
)</pre><br />
And now we compose them to define a lens that directly updates the <code>city</CODE> of a <code>ShipTo</code> belonging to an <code>Order</code> ..<br />
<br />
<pre class="brush: scala">// compositionality FTW
def orderShipToCity = orderShipTo andThen shipToAddress andThen addressToCity</pre><br />
Now updating a <code>city</code> of a <code>ShipTo</code> in an <code>Order</code> is as simple and expressive as ..<br />
<br />
<pre class="brush: scala">val o = Order( .. )
orderShipToCity.set(o, "London")</pre><br />
The best part of using such compositional data structures is that it makes your domain model implementation readable and expressive to the users of your API. And yet your aggregate remains immutable.<br />
<br />
Let's look at another use case when the nested object is a collection. scalaz offers partial lenses that you can use for such composition. Here's an example where we build a lens that updates the value member within a <code>LineItem</code> of an <code>Order</code>. A <code>LineItem</code> is defined as ..<br />
<br />
<pre class="brush: scala">case class LineItem(item: Item, quantity: BigDecimal, value: Option[BigDecimal] = None,
  discount: Option[BigDecimal] = None)
</pre><br />
and an <code>Order</code> has a collection of <code>LineItem</code>s. Let's define a lens that updates the <code>value</code> within a <code>LineItem</code> ..<br />
<br />
<pre class="brush: scala">val lineItemValue = Lens.lensu[LineItem, Option[BigDecimal]] (
  (l, v) => l.copy(value = v),
  _.value
)</pre><br />
and then compose it with a partial lens that helps us update a specific item within a vector. Note how we convert our <code>lineItemValue</code> lens to a partial lens using the unary operator <code>~</code> ..<br />
<br />
<pre class="brush: scala">// a lens that updates the value in a specific LineItem within an Order
def lineItemValues(i: Int) = ~lineItemValue compose vectorNthPLens(i)</pre><br />
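If you prefer to see the idea without the scalaz machinery, here's a sketch of what a partial lens is - the names <code>PLens</code> and <code>vectorNth</code> are mine and the encoding differs from scalaz's. The <code>get</code> may fail with <code>None</code>, and the <code>set</code> fails when the focus doesn't exist ..<br />
<br />
<pre class="brush: scala">// a partial lens: both operations may fail when the focus is absent
case class PLens[A, B](getOpt: A => Option[B], setOpt: (A, B) => Option[A])

// focus on the i-th element of a Vector
def vectorNth[T](i: Int): PLens[Vector[T], T] =
  PLens(
    v => v.lift(i),
    (v, t) => if (v.isDefinedAt(i)) Some(v.updated(i, t)) else None
  )

val items = Vector("a", "b", "c")
vectorNth[String](1).setOpt(items, "B")  // Some(Vector(a, B, c))
vectorNth[String](5).setOpt(items, "X")  // None</pre><br />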
Now we can use this composite lens to functionally update the <code>value</CODE> field of each of the items in a <code>Vector</CODE> of <code>LineItem</CODE>s using some specific business rules ..<br />
<br />
<pre class="brush: scala">(0 to lis.length - 1).foldLeft(lis) { (s, i) =>
  val li = lis(i)
  lineItemValues(i).set(s, unitPrice(li.item).map(_ * li.quantity)).getOrElse(s)
}</pre><br />
In this post we saw how we can handle aggregates functionally and without any in-place mutation. This keeps the model pure and helps us implement domain models that have sane behavior even in concurrent settings, without any explicit use of locks and semaphores. In the next post we will take a look at how we can use such compositional structures to make the domain model speak the ubiquitous language of the domain - another pattern recommended by Eric Evans in domain driven design.<br />
<br />
http://debasishg.blogspot.com/2014/04/functional-patterns-in-domain-modeling.htmlnoreply@blogger.com (Anonymous)2tag:blogger.com,1999:blog-22587889.post-3331080191889588519Mon, 31 Mar 2014 06:27:00 +00002014-03-31T20:18:31.101+05:30dddfunctionalscalaFunctional Patterns in Domain Modeling - The Specification PatternWhen you model a domain, you model its entities and behaviors. As Eric Evans mentions in his book <a HREF="http://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215">Domain Driven Design</A>, the focus is on the domain itself. The model that you design and implement must speak the ubiquitous language so that the essence of the domain is not lost in the myriads of incidental complexities that your implementation enforces. While being expressive the model needs to be extensible too. And when we talk about extensibility, one related attribute is compositionality. <br />
<br />
Functions compose more naturally than objects, and in this post I will use functional programming idioms to implement one of the patterns that form the core of domain driven design - the <a HREF="http://en.wikipedia.org/wiki/Specification_pattern">Specification</A> pattern, whose most common use case is to implement domain validation. Eric's book on DDD says regarding the Specification pattern ..<br />
<blockquote>It has multiple uses, but one that conveys the most basic concept is that a SPECIFICATION can test any object to see if it satisfies the specified criteria.</blockquote>A specification is defined as a predicate, whereby business rules can be combined by chaining them together using boolean logic. So there's a concept of composition here, and we can talk about a Composite Specification when we talk about this pattern. Much of the DDD literature implements this using the Composite design pattern, commonly realized through class hierarchies and composition. In this post we will use function composition instead.<br />
<br />
<h1>Specification - Where ?</H1>One of the very common confusions that we have when we design a model is where to keep the validation code of an aggregate root or any entity, for that matter.<br />
<ul><li>Should we have the validation as part of the entity ? No, it makes the entity bloated. Also validations may vary based on some context, while the core of the entity remains the same.</LI>
<li>Should we have validations as part of the interface ? Maybe we consume JSON and build entities out of it. Indeed <i>some</I> validations can belong to the interface - don't hesitate to put them there.</LI>
<li>But the most interesting validations are those that belong to the domain layer. They are business validations (or specifications), which Eric Evans defines as something that <i>"states a constraint on the state of another object"</I>. They are business rules which the entity needs to honor in order to proceed to the next stage of processing.</LI> </UL>We consider the following simple example. We take an <code>Order</CODE> entity and the model identifies the following domain "specifications" that a new <code>Order</CODE> must satisfy before being thrown into the processing pipeline:<br />
<br />
<ol><li>it must be a <i>valid</I> order obeying the constraints that the domain requires, e.g. valid date, valid number of line items, etc.</LI>
<li>it must be <i>approved</I> by a valid approving authority - only then it proceeds to the next stage of the pipeline</LI>
<li>customer status check must be passed to ensure that the customer is not black-listed</LI>
<li>the line items from the order must be checked against inventory to see if the order can be fulfilled</LI> </OL>These are the separate steps that are to be done in sequence by the order processing pipeline as pre-order checks before the actual order is ready for fulfilment. A failure in any of them takes the order out of the pipeline and the process stops there. So the model that we design needs to honor the sequence as well as enforce all the constraints that apply at every step.<br />
<br />
An important point to note here is that none of the above steps mutate the order - so every specification gets a copy of the original <code>Order</CODE> object as input, on which it checks some domain rules and determines if it's suitable to be passed to the next step of the pipeline.<br />
<br />
<h1>Jumping on to the implementation ..</H1>Let's take down some implementation notes from what we learnt above ..<br />
<br />
<ul><li>The <code>Order</CODE> can be an immutable entity at least for this sequence of operations</LI>
<li>Every specification needs an order - can we pull some trick out of our hat that prevents cluttering the API with an <code>Order </CODE>instance passed to every specification in the sequence ?</LI>
<li>Since we plan to use functional programming principles, how can we model the above sequence as an <i>expression</I> so that our final result still remains composable with the next process of order fulfilment (which we will discuss in a future post) ?</LI>
<li>All these functions look like they have similar signatures - we need to make them compose with each other</LI> </UL>Before I present any more explanation or theory, here are the basic building blocks that implement the notes we took after talking to the domain experts ..<br />
<br />
<pre class="brush: scala">type ValidationStatus[S] = \/[String, S]
type ReaderTStatus[A, S] = ReaderT[ValidationStatus, A, S]

object ReaderTStatus extends KleisliInstances with KleisliFunctions {
  def apply[A, S](f: A => ValidationStatus[S]): ReaderTStatus[A, S] = kleisli(f)
}</pre><br />
<code>ValidationStatus</CODE> defines the type that we will return from each of the functions. It's either some status <code>S</CODE> or an error string that explains what went wrong. It's actually an <code>Either</CODE> type (right biased) as implemented in <a HREF="https://github.com/scalaz/scalaz">scalaz</A>.<br />
<br />
One of the things which we thought would be cool is to avoid repeating the <code>Order</CODE> parameter for every method when we invoke the sequence. And one of the idiomatic ways of doing it is to use the Reader monad. But here we already have a monad - <code>\/</CODE> is a monad. So we need to stack them together using a monad transformer. <code>ReaderT</CODE> does this job and <code>ReaderTStatus</CODE> defines the type that somehow makes our life easier by combining the two of them.<br />
<br />
The next step is an implementation of <code>ReaderTStatus</CODE>, which we do in terms of another abstraction called <code>Kleisli</CODE>. We will use the scalaz library for this, which implements <code>ReaderT</CODE> in terms of <code>Kleisli</CODE>. I will not go into the details of this implementation - in case you are curious, refer to this excellent <a HREF="http://eed3si9n.com/learning-scalaz/Composing+monadic+functions.html">piece</A> by Eugene.<br />
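If the monad transformer machinery feels opaque, here's a stripped-down sketch of what <code>ReaderTStatus</code> buys us - my own simplification, not the scalaz implementation, and it uses a cut-down <code>Order</code> purely for illustration. It's a wrapper around <code>A => Either[String, S]</code> with <code>map</code> and <code>flatMap</code>, so specifications sequence in a for-comprehension while silently sharing the same input ..<br />
<br />
<pre class="brush: scala">case class ReaderTStatus[A, S](run: A => Either[String, S]) {
  def map[T](f: S => T): ReaderTStatus[A, T] =
    ReaderTStatus(a => run(a).map(f))

  // the same environment `a` is passed to every step,
  // and a Left from any step aborts the rest
  def flatMap[T](f: S => ReaderTStatus[A, T]): ReaderTStatus[A, T] =
    ReaderTStatus(a => run(a).flatMap(s => f(s).run(a)))
}

case class Order(lineItems: List[String])  // cut down for illustration

def validate = ReaderTStatus[Order, Boolean] { order =>
  if (order.lineItems.isEmpty) Left("no line items") else Right(true)
}

def approve = ReaderTStatus[Order, Boolean](_ => Right(true))

def isValid(order: Order) =
  (for {
    _ <- validate
    a <- approve
  } yield a).run(order)

isValid(Order(List("item1")))  // Right(true)
isValid(Order(Nil))            // Left(no line items)</pre><br />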
<br />
So, how does one sample specification look like ?<br />
<br />
Before going into that, here are some basic abstractions, grossly simplified only for illustration purposes ..<br />
<br />
<pre class="brush: scala">// the base abstraction
sealed trait Item {
  def itemCode: String
}

// sample implementations
case class ItemA(itemCode: String, desc: Option[String],
  minPurchaseUnit: Int) extends Item
case class ItemB(itemCode: String, desc: Option[String],
  nutritionInfo: String) extends Item

case class LineItem(item: Item, quantity: Int)
case class Customer(custId: String, name: String, category: Int)

// a skeleton order
case class Order(orderNo: String, orderDate: Date, customer: Customer,
  lineItems: List[LineItem])</pre><br />
And here's a specification that checks some of the constraints on the <code>Order</CODE> object ..<br />
<br />
<pre class="brush: scala">// a basic validation
private def validate = ReaderTStatus[Order, Boolean] { order =>
  if (order.lineItems isEmpty) left(s"Validation failed for order $order")
  else right(true)
}</pre><br />
It's just for illustration and does not contain much domain rules. The important part is how we use the above defined types to implement the function. <code>Order</CODE> is not an explicit argument to the function - it's curried. The function returns a <code>ReaderTStatus</CODE>, which itself is a monad and hence allows us to sequence in the pipeline with other specifications. So we get the requirement of sequencing without breaking out of the expression oriented programming style.<br />
<br />
Here are a few other specifications based on the domain knowledge that we have gathered ..<br />
<br />
<pre class="brush: scala">private def approve = ReaderTStatus[Order, Boolean] { order =>
  right(true)
}

private def checkCustomerStatus(customer: Customer) = ReaderTStatus[Order, Boolean] { order =>
  right(true)
}

private def checkInventory = ReaderTStatus[Order, Boolean] { order =>
  right(true)
}</pre><br />
<h1>Wiring them together</H1>But how do we wire these pieces together so that we have the sequence of operations that the domain mandates and yet retain all the goodness of compositionality in our model ? It's actually quite easy since we have already done the hard work of defining the appropriate types that compose ..<br />
<br />
Here's the <code>isReadyForFulfilment</code> method that defines the composite specification and invokes all the individual specifications in sequence using a for-comprehension, which, as you all know, does the monadic bind in Scala and gives us the final expression that needs to be evaluated for the <code>Order</code> supplied.<br />
<br />
<pre class="brush: scala">def isReadyForFulfilment(order: Order) = {
  val s = for {
    _ <- validate
    _ <- approve
    _ <- checkCustomerStatus(order.customer)
    c <- checkInventory
  } yield c
  s(order)
}</pre>
<br />
So we have the monadic bind implement the sequencing without breaking the compositionality of the abstractions. In the next instalment we will see how this can be composed with the downstream processing of the order that will not only read stuff from the entity but mutate it too, of course in a functional way.http://debasishg.blogspot.com/2014/03/functional-patterns-in-domain-modeling.htmlnoreply@blogger.com (Anonymous)15tag:blogger.com,1999:blog-22587889.post-6027103926883320551Wed, 22 Jan 2014 19:46:00 +00002014-01-23T01:17:17.869+05:30count-min-sketcheventsourcingA Sketch as the Query Model of an EventSourced SystemIn my last <a href="http://debasishg.blogspot.in/2014/01/count-min-sketch-data-structure-for.html">post</a> I discussed the count-min sketch data structure that can be used to process data streams using sub-linear space. In this post I will continue with some of my thoughts on how count-min sketches can be used in a typical event sourced application architecture. An event sourcing system typically has a query model which provides a read-only view of how all the events are <i>folded</i> to produce a coherent view of the system. I have seen applications where the query model is typically rendered from a relational database. And the queries can take a lot of time to be successfully processed and displayed to the user if the data volume is huge. And when we are talking about Big Data, this is not a very uncommon use case.<br />
<br />
Instead of rendering the query from the RDBMS, quite a few types of them can be rendered from a count-min sketch using sub-linear space. Consider the use case where you need to report the most frequently occurring user-ids in a Twitter stream. The stream is continuous, huge and unending, and you get to see each item only once. As each item arrives, you parse out the user-id occurring in it and update the sketch. So each entry of the sketch contains the frequency of the user-ids that hash to that slot. And we can take the minimum of all the slots to which a user-id hashes in order to get an estimate of the frequency of that user-id. The details of how this works can be found in my last post.<br />
<br />
Consider the case where we need to find the heavy-hitters - those user-ids whose frequency exceeds a pre-determined threshold. For that, in addition to the sketch we can also maintain a data structure like a heap or a tree where we track the top-k heavy hitters. When a user-id appears, we update the sketch, get its estimated frequency from the sketch and, if it exceeds the threshold, also record it in the data structure. So at any point in time we can probe this accessory data structure to get the current heavy-hitters. Spark <a href="https://github.com/apache/incubator-spark/blob/master/examples/src/main/scala/org/apache/spark/streaming/examples/TwitterAlgebirdCMS.scala">examples</a> contain a sample implementation of this heavy hitters query from a Twitter stream using the CountMinSketchMonoid of <a href="https://github.com/twitter/algebird">Algebird</a>.<br />
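A rough sketch of that bookkeeping (all names invented for illustration): here an exact frequency map stands in for the sketch, whereas in a real implementation the frequency lookup would be the count-min sketch's point query.

```scala
import scala.collection.mutable

// Track heavy hitters alongside the (stand-in) sketch: update the counts,
// and record a user-id in the accessory structure once its estimated
// frequency crosses the threshold.
class HeavyHitters(threshold: Long) {
  private val freq = mutable.Map.empty[String, Long].withDefaultValue(0L)
  private val hitters = mutable.Set.empty[String]

  def update(userId: String): Unit = {
    freq(userId) += 1                 // corresponds to updating the sketch
    if (freq(userId) >= threshold)    // estimated frequency from the sketch
      hitters += userId               // record in the accessory structure
  }

  def current: Set[String] = hitters.toSet
}
```

Probing <code>current</code> at any point gives the user-ids whose estimated frequency has crossed the threshold so far.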
<br />
Can this be a viable approach of implementing the query model in an event sourced system if the use case fits the approximation query approach ? It can be faster, relatively cheap in space and can prove to be responsive enough to be displayed in dashboards in the form of charts or graphs.http://debasishg.blogspot.com/2014/01/a-sketch-as-query-model-of-eventsourced.htmlnoreply@blogger.com (Anonymous)3tag:blogger.com,1999:blog-22587889.post-2689121936470253763Sun, 19 Jan 2014 17:48:00 +00002014-01-21T17:37:18.123+05:30count-min-sketchstream-processingCount-Min Sketch - A Data Structure for Stream Mining ApplicationsIn today's age of Big Data, streaming is one of the techniques for low latency computing. Besides the batch processing infrastructure of map/reduce paradigm, we are seeing a plethora of ways in which streaming data is processed at near real time to cater to some specific kinds of applications. Libraries like <a href="http://storm-project.net">Storm</a>, <a href="http://samza.incubator.apache.org">Samza</a> and <a href="https://www.cs.duke.edu/~kmoses/cps516/dstream.html">Spark</a> belong to this genre and are starting to get their share of user base in the industry today.<br />
<br />
This post is not about Spark, Storm or Samza. It's about a data structure which is one of the relatively new entrants in the domain of stream processing, which is simple to implement, but has already proved to be of immense use in serving a certain class of queries over huge streams of data. I have been doing some readings about the application of such structures and thought of sharing them with the readers of my blog.<br />
<br />
<h1>Using Sublinear Space</h1><br />
Besides data processing, these tools also support data mining over streams, which includes serving specialized queries over data using limited space and time. Ok, so if we store all data as they come, we can always serve queries with O(n) space. But since we are talking about huge data streams, it may not even be possible to run algorithms on the full set of data - it simply will be too expensive. Even if we have the entire set of data in a data warehouse, processing the entire data set may take time and consume resources that we cannot afford, considering the fees charged under the evolving platform-as-a-service models of cloud based infrastructure. Also, since these algorithms work on data streams, there's a high likelihood that they will get to see the data only in a single pass. The bottom line is that we need algorithms that work in sub-linear space.<br />
<br />
Working on sublinear space implies that we don't get to store or see all data - hence an obvious conclusion from this will be the fact that we also don't get to deliver an accurate answer to some queries. We rely on some approximation techniques and deliver an accuracy with a reasonably high probability bound. We don't store all data, instead we store a lossy compressed representation of the data and deliver user queries from this subset instead of the entire set.<br />
<br />
One widely used technique for storing a subset of the data is Random Sampling, where the data stored is selected through some stochastic mechanism. There are various ways to determine which data we select for storing and how we build the estimator for querying the data. There are pros and cons with this approach, but it's one of the simplest ways to do approximation based queries on streaming data.<br />
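One classic instance of such a stochastic selection is reservoir sampling, which keeps a uniform random sample of k items from a stream of unknown length in a single pass and O(k) space. A quick sketch (my illustration, not tied to any particular library):

```scala
import scala.util.Random
import scala.collection.mutable.ArrayBuffer

// Reservoir sampling: after n items have been seen, every item of the
// stream survives in the reservoir with probability k / n.
def reservoirSample[A](stream: Iterator[A], k: Int, rng: Random): Vector[A] = {
  val reservoir = ArrayBuffer.empty[A]
  var n = 0
  for (item <- stream) {
    n += 1
    if (reservoir.size < k) reservoir += item   // fill the reservoir first
    else {
      val j = rng.nextInt(n)                    // uniform index in [0, n)
      if (j < k) reservoir(j) = item            // replace with probability k/n
    }
  }
  reservoir.toVector
}
```

The estimator built over such a sample answers queries about the full stream only approximately, which is the trade-off discussed above.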
<br />
There are a few other options like Histograms and Wavelet based synopses. But one of the most interesting data structures that have been developed in recent times is the <i>Sketch</i>, which uses summary based techniques for delivering approximation queries, gets around the typical problems that sampling techniques have, and is highly parallelizable in practice.<br />
<br />
An important class of sketch is one where the sketch vector (which is the summary information) is a linear transform of the input vector. So if we model the input as a vector we can multiply it by a sketch matrix to obtain the sketch vector that contains the synopses data that we can use for serving approximation queries. Here's a diagrammatic representation of the sketch as a linear transform of the input data.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3FLkp6QwHACdnmy4ubEg8uhyXEJ18ox3V5swlL9SabCq4uTgd56eMgu2Wcoz4ULnh1zI6_svHX8MNggzv2i49SVYM60qfkpJnUBpadfChQLF5cuBNZE9jRevgDvW2Bn9EkWnXSQ/s1600/cm.002.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3FLkp6QwHACdnmy4ubEg8uhyXEJ18ox3V5swlL9SabCq4uTgd56eMgu2Wcoz4ULnh1zI6_svHX8MNggzv2i49SVYM60qfkpJnUBpadfChQLF5cuBNZE9jRevgDvW2Bn9EkWnXSQ/s640/cm.002.png" /></a></div><br />
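The linearity is worth emphasizing, because it's what makes sketches updatable and mergeable. Here's a toy illustration with a made-up 2x4 sketch matrix (the numbers carry no meaning beyond the demonstration):

```scala
// A made-up 2x4 sketch matrix applied to a length-4 input vector
val sketchMatrix = Vector(
  Vector(1.0, 0.0, 1.0, 0.0),
  Vector(0.0, 1.0, 0.0, 1.0)
)

def sketch(input: Vector[Double]): Vector[Double] =
  sketchMatrix.map(row => row.zip(input).map { case (m, x) => m * x }.sum)

val a = Vector(3.0, 1.0, 4.0, 1.0)
val b = Vector(1.0, 2.0, 0.0, 5.0)

// Because the transform is linear, sketching the sum of two streams equals
// summing their sketches - the basis for merging distributed sketches
val merged   = sketch(a.zip(b).map { case (x, y) => x + y })
val combined = sketch(a).zip(sketch(b)).map { case (x, y) => x + y }
```

The equality of <code>merged</code> and <code>combined</code> is exactly the property that lets each node sketch its own partition of the stream.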
<br />
<h1>Count-Min Sketch</h1><br />
One of the most popular forms of the sketch data structure is the <a href="http://dimacs.rutgers.edu/%7Egraham/pubs/papers/cm-full.pdf">Count-Min Sketch</a> introduced by Cormode and Muthukrishnan in 2003. The idea is quite simple and the data structure is based on probabilistic algorithms to serve various types of queries on streaming data. The data structure is parameterized by two factors - ε and δ, where the error in answering the query is within a factor of ε with probability at least 1 - δ. So you can tune these parameters based on the space that you can afford and accordingly trade off the accuracy of the results that the data structure serves you.<br />
<br />
Consider a situation where you have a stream of data (typically modeled as a vector), like updates to stock quotes in a financial processing system, arriving continuously, which you need to process in order to report statistical queries on a real time basis. <br />
<br />
<li>We model the data stream as a vector <tt>a[1 .. n]</tt> and the updates received at time <tt>t</tt> are of the form (i<sub>t</sub>, c<sub>t</sub>) which mean that the stock quote for <tt>a[i<sub>t</sub>]</tt> has been incremented by c<sub>t</sub>. There are various models in which this update can appear as discussed in <a href="http://www.amazon.com/Data-Streams-Applications-Foundations-Theoretical/dp/193301914X">Data Streams: Algorithms and Applications</a> by Muthukrishnan which includes negative updates as well and the data structure can be tuned to handle each of these variants.</li><br />
<br />
<li>The core of the data structure is a 2 dimensional array <tt>count[w, d]</tt> that stores the synopses of the original vector and which is used to report results of queries using approximation techniques. Hence the total space requirement of the data structure is <tt>(w * d)</tt>. We can bound each of <tt>w</tt> and <tt>d</tt> in terms of our parameters ε and δ and control the level of accuracy that we want our data structure to serve.</li><br />
<br />
<li>The data structure uses hashing techniques to process these updates and report queries using sublinear space. So assume we have d <a href="http://en.wikipedia.org/wiki/Universal_hashing">pairwise-independent</a> hash functions <tt>{h<sub>1</sub> .. h<sub>d</sub>}</tt> that hash each of our inputs to the range <tt>(1 .. w)</tt>. Just for the more curious mind, pairwise independence is a method to construct a universal hash family, a technique that ensures lower number of collisions in the hash implementation.</li><br />
<br />
<li>When an update (i<sub>t</sub>, c<sub>t</sub>) comes for the stream, we hash i<sub>t</sub> through each of the hash functions <tt>h<sub>1</sub> .. h<sub>d</sub></tt> and increment, in each of the <tt>d</tt> rows of the array, the single entry that it hashes to. </li><br />
<br />
<tt><br />
for i = 1 to d<br />
&nbsp;&nbsp;v = h<sub>i</sub>(i<sub>t</sub>) // v is between 1 and w<br />
&nbsp;&nbsp;count[i, v] += c<sub>t</sub> // increment the cell count by c<sub>t</sub><br />
end<br />
</tt><br />
At any point in time if we want to know the approximate value of an element <tt>a[i]</tt> of the vector <tt>a</tt>, we can get it by computing the minimum of the values in the <tt>d</tt> cells of <tt>count</tt> to which <tt>i</tt> hashes. This can be proved formally. But the general intuition is that since we are using hash functions there's always a possibility of multiple <tt>i</tt>'s colliding on to the same cell and contributing additively to the value of the cell. Hence the minimum among all the hashed cells is the closest candidate to give us the correct result for the query.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-X05n34HwtMHOxjJSrpd2LE7H8GwsiDo_flQOoAFUJWnWLXiyIS2tWEjIzXeOJzkRZcfREJ7HDD6BSILA9M3rEkUXGeKX3anuW4dGiQwjBzZDLHJwGSwXK3uuZUMgCTTqT-wgnQ/s1600/cm.001.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-X05n34HwtMHOxjJSrpd2LE7H8GwsiDo_flQOoAFUJWnWLXiyIS2tWEjIzXeOJzkRZcfREJ7HDD6BSILA9M3rEkUXGeKX3anuW4dGiQwjBzZDLHJwGSwXK3uuZUMgCTTqT-wgnQ/s640/cm.001.png" /></a></div><br />
The figure above shows the processing of the updates in a Count-Min sketch. This is typically called the Point Query that returns an approximation of <tt>a[i]</tt>. Similarly we can use a Count-Min sketch to get approximation queries for ranges which is typically a summation over multiple point queries. Another interesting application is to serve inner product queries where the data structure is used to query inner products of 2 vectors, a typical application of this being the estimation of join sizes in relational query processing. The paper <a href="http://www.cise.ufl.edu/%7Eadobra/Publications/sigmod-2007-statistics.pdf">Statistical Analysis of Sketch Estimators</a> gives all details of how to use sketching as a technique for this. <br />
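Here's a minimal rendering of the update and point-query scheme described above. This is my own illustrative sketch, not the paper's reference code: MurmurHash3 with per-row seeds stands in for the pairwise-independent hash family, and the paper's parameter choice (w = ⌈e/ε⌉, d = ⌈ln(1/δ)⌉) tells you how to size it.

```scala
import scala.util.hashing.MurmurHash3

// A minimal count-min sketch over string-keyed updates
class CountMinSketch(w: Int, d: Int) {
  private val count = Array.ofDim[Long](d, w)

  // a different seed per row gives us d independent-looking hash functions
  private def bucket(row: Int, item: String): Int =
    (MurmurHash3.stringHash(item, row) & Int.MaxValue) % w

  // process an update (item, c): add c to one cell in each of the d rows
  def update(item: String, c: Long = 1L): Unit =
    (0 until d).foreach(row => count(row)(bucket(row, item)) += c)

  // point query: minimum over the d cells the item hashes to
  def estimate(item: String): Long =
    (0 until d).map(row => count(row)(bucket(row, item))).min
}
```

Since collisions can only add to a cell, the estimate never undercounts; with w and d derived from ε and δ as above, the overcount stays within εN with probability at least 1 - δ.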
<br />
Count-Min sketches have some great properties which make them a very useful data structure when processing distributed streams. They have associativity properties and can be modelled as monoids, and hence perform extremely well in a distributed environment where you can parallelize sketch operations. In a future post I will discuss some implementation techniques and how we can use count-min sketches to serve some useful applications over data streams. Meanwhile Twitter's <a href="https://github.com/twitter/algebird">algebird</a> and ClearSpring's <a href="https://github.com/addthis/stream-lib">stream-lib</a> offer implementations of Count-Min sketch and various other data structures applicable for stream mining applications.<br />
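The monoid structure is easy to see at the level of the raw count matrices: two sketches built with the same dimensions and hash functions merge by element-wise addition, with the all-zero matrix as the identity. A small sketch of that append:

```scala
// Sketches (with identical w, d and hash functions) merge element-wise;
// the all-zero matrix is the monoid identity
type Sketch = Vector[Vector[Long]]

def zero(d: Int, w: Int): Sketch = Vector.fill(d, w)(0L)

def append(a: Sketch, b: Sketch): Sketch =
  a.zip(b).map { case (ra, rb) =>
    ra.zip(rb).map { case (x, y) => x + y }
  }
```

This is exactly why libraries like Algebird expose the sketch as a monoid: partial sketches computed on different nodes can be folded together in any grouping.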
<br />
<br />
<br />
http://debasishg.blogspot.com/2014/01/count-min-sketch-data-structure-for.htmlnoreply@blogger.com (Anonymous)6tag:blogger.com,1999:blog-22587889.post-2348936043843108242Wed, 24 Jul 2013 04:37:00 +00002013-07-24T10:07:39.278+05:30akkaredisscalaScala Redis client goes non blocking : uses Akka IO<div dir="ltr" style="text-align: left;" trbidi="on">
scala-redis is getting a <a href="https://github.com/debasishg/scala-redis-nb">new non blocking version</a> based on a kernel implemented with the <a href="http://doc.akka.io/docs/akka/snapshot/scala/io.html">new Akka IO</a>. The result is that all APIs are non blocking and return a <code>Future</code>. We are trying to keep the API as close to the blocking version as possible. But bits rot and some of the technical debt needs to be repaid. We have cleaned up some of the return types which had unnecessary <code>Option[]</code>
wrappers, made some improvements and standardizations on the API type signatures, and are also working on making byte array manipulation faster using <code>akka.util.ByteString</code> at the implementation level. We also have plans of using the Akka IO pipeline for abstracting the various stages of handling the Redis protocol request and response.
<br />
<br />
As of today we have quite a bit ready for review by the users. The APIs may change a bit here and there, but the core APIs are up there. There are a few areas which have not yet been implemented like PubSub or clustering. Stay tuned for more updates on this blog ..
Here are a few code snippets that demonstrate the usage of the APIs ..
<br />
<br />
<h3>
Non blocking get/set</h3>
<pre class="brush: scala">@volatile var callbackExecuted = false
val ks = (1 to 10).map(i => s"client_key_$i")
val kvs = ks.zip(1 to 10)
val sets: Seq[Future[Boolean]] = kvs map {
  case (k, v) => client.set(k, v)
}
val setResult = Future.sequence(sets) map { r: Seq[Boolean] =>
  callbackExecuted = true
  r
}
callbackExecuted should be (false)
setResult.futureValue should contain only (true)
callbackExecuted should be (true)
callbackExecuted = false
val gets: Seq[Future[Option[Long]]] = ks.map { k => client.get[Long](k) }
val getResult = Future.sequence(gets).map { rs =>
  callbackExecuted = true
  rs.flatten.sum
}
callbackExecuted should be (false)
getResult.futureValue should equal (55)
callbackExecuted should be (true)
</pre>
<h3>
Composing through sequential combinators</h3>
<pre class="brush: scala">val key = "client_key_seq"
val values = (1 to 100).toList
val pushResult = client.lpush(key, 0, values:_*)
val getResult = client.lrange[Long](key, 0, -1)
val res = for {
  p <- pushResult.mapTo[Long]
  if p > 0
  r <- getResult.mapTo[List[Long]]
} yield (p, r)
val (count, list) = res.futureValue
count should equal (101)
list.reverse should equal (0 to 100)
</pre>
<h3>
Error handling using Promise Failure</h3>
<pre class="brush: scala">val key = "client_err"
val v = client.set(key, "value200")
v.futureValue should be (true)
val x = client.lpush(key, 1200)
val thrown = evaluating { x.futureValue } should produce [TestFailedException]
thrown.getCause.getMessage should equal ("ERR Operation against a key holding the wrong kind of value")
</pre>
Feedback welcome, especially on the APIs and their usage. All the code is on GitHub, with all tests in the test folder. Jisoo Park (<a href="twitter.com/guersam">@guersam</a>) has been doing an awesome job contributing to all the goodness that's there in the repo. Thanks a lot for all the help ..
</div>
http://debasishg.blogspot.com/2013/07/scala-redis-client-goes-non-blocking.htmlnoreply@blogger.com (Anonymous)6tag:blogger.com,1999:blog-22587889.post-4047485455524974789Mon, 22 Jul 2013 05:44:00 +00002013-07-22T11:14:13.005+05:30clojurelispracketThe Realm of Racket is an enjoyable read<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSeiFDDvqZIy2B7n94YdC3umTXF_yb_XO_M75YZjiD7Z8uVIqiyYkSCxAWzflm2vtVfumb9OP_yS-jWjkjQ9OeKNuqZPI-nGYkqLKYJM5bN9zN7ZHk2QWFYz62gsR4tlDtuE61wA/s1600/81oh1EuOHRL._SL1500_.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSeiFDDvqZIy2B7n94YdC3umTXF_yb_XO_M75YZjiD7Z8uVIqiyYkSCxAWzflm2vtVfumb9OP_yS-jWjkjQ9OeKNuqZPI-nGYkqLKYJM5bN9zN7ZHk2QWFYz62gsR4tlDtuE61wA/s320/81oh1EuOHRL._SL1500_.jpg" width="242" /></a></div>
<br />
There are many ways to write a programming language book. You can start introducing the syntax and semantics of the language in a naturally comprehensible sequence of complexity and usage. Or you can choose to introduce the various features of the language with real world examples using the standard library that the language offers. IIRC <a href="http://www.amazon.com/Accelerated-C-Practical-Programming-Example/dp/020170353X">Accelerated C++</a> by Andrew Koenig and Barbara Moo takes this route. I really loved this approach and enjoyed reading the book.<br />
<br />
Of course Matthias Felleisen is known for a third way of teaching a language - the fun way. <a href="http://www.amazon.com/The-Little-Schemer-4th-Edition/dp/0262560992/">The Little Schemer</a> and <a href="http://www.amazon.com/The-Seasoned-Schemer-Daniel-Friedman/dp/026256100X/">The Seasoned Schemer</a> have introduced a novel way of learning a language. <a href="http://realmofracket.com/">The Realm of Racket</a> follows a similar style of teaching the latest descendant of Lisp, one game at a time. The implementation of every game introduces the idioms and language features with increasing degrees of complexity. There's a nice progression which helps understanding the complex features of the language building upon the already acquired knowledge of the simpler ones in earlier chapters.<br />
<br />
The book begins with a history of the Racket programming language and how it evolved as a descendant of Scheme, how it makes programming fun and how it can be used successfully as an introductory language to students aspiring to learn programming. Then it starts with Getting Started with Dr Racket for the impatient programmer and explains the IDE that will serve as your companion for the entire duration of your playing around with the book.<br />
<br />
Every chapter introduces some of the new language features and either develops a new game or builds on some improvement of a game developed earlier. This not only demonstrates a real world usage of the syntax and semantics of the language but makes the programmer aware of how the various features interact as a whole to build complex abstractions out of simpler ones. The book also takes great pains to defer the complexity of the features to the right point so that the reader is not burdened upfront. e.g. Lambdas are introduced only when the authors have introduced all basics of programming with functions and recursion. Mutants are introduced only after teaching the virtues of immutability. For loops and comprehensions appear only when the book has introduced all list processing functions like folds, maps and filters. And then the book goes into great depth explaining why the language has so many variants of the for loop like <span style="font-family: Courier New, Courier, monospace;">for/list</span>, <span style="font-family: Courier New, Courier, monospace;">for/fold</span>, <span style="font-family: Courier New, Courier, monospace;">for*</span>, <span style="font-family: Courier New, Courier, monospace;">for/first</span>, <span style="font-family: Courier New, Courier, monospace;">for/last</span> etc. In this entire discussion of list processing, for loops etc., I would love to see a more detailed discussion on <span style="font-family: Courier New, Courier, monospace;">sequence</span> in the book. Sequence abstracts a large number of data types, but, much like Clojure, it introduces a new way of API design - a single <span style="font-family: Courier New, Courier, monospace;">sequence</span> to rule them all. API designers would surely like to have more of this sauce as part of their repertoire. 
Racket's uniform way of handling <span style="font-family: Courier New, Courier, monospace;">sequence</span> is definitely a potent model of abstraction as compared to Scheme or other versions of Lisp.<br />
<br />
The games developed progress in complexity and we can see the powers of the language being put to great use when the authors introduce <i>lazy evaluation</i> and <i>memoized computations</i> and use them to improve the Dice of Doom. Then the authors introduce distributed game development which is the final frontier that the book covers. It's truly an enjoyable ride through the entire series.<br />
<br />
The concluding chapter talks about some of the advanced features like classes, objects and meta-programming. Any Lisp book will be incomplete without a discussion of macros and language development. But I think the authors have done well to defer these features till the end. Considering the fact that this is a book for beginning to learn the language this sounded like a sane measure.<br />
<br />
However, as a programmer experienced in other languages and wanting to take a look at Racket, I would have loved to see some coverage on testing. Introducing a bit on testing practices, and maybe a unit testing library, would have made the book more complete.<br />
<br />
The style of writing of this book has an underlying tone of humor and simplicity, which makes it a thoroughly enjoyable read. The use of illustrations and comics take away the monotony of learning the prosaics of a language. And the fact that Racket is a simple enough language makes this combination with pictures very refreshing.<br />
<br />
On the whole, as a book introducing a language, The Realm of Racket is a fun read. I enjoyed reading it a lot and recommend it without reservations for your bookshelf.<br />
<br />
<br />
<br /></div>
http://debasishg.blogspot.com/2013/07/the-realm-of-racket-is-enjoyable-read.htmlnoreply@blogger.com (Anonymous)3tag:blogger.com,1999:blog-22587889.post-2712980585988306755Mon, 03 Jun 2013 06:40:00 +00002013-06-03T12:10:47.564+05:30dslfunctionalscalascalazEndo is the new fluent APII tweeted this over the weekend ..
<blockquote class="twitter-tweet">
<p>
a good title for a possible blog post .. endo is the new fluent API ..</p>
— Debasish Ghosh (@debasishg) <a href="https://twitter.com/debasishg/status/340885372735193089">June 1, 2013</a></blockquote>
<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
My last two blog posts have been about endomorphisms and how it combines with the other functional structures to help you write expressive and composable code. In <a HREF="http://debasishg.blogspot.in/2013/02/a-dsl-with-endo-monoids-for-free.html">A DSL with an Endo - monoids for free</A>, endos play with Writer monad and implement a DSL for a sequence of activities through monoidal composition. And in <a HREF="http://debasishg.blogspot.in/2013/03/an-exercise-in-refactoring-playing.html">An exercise in Refactoring - Playing around with Monoids and Endomorphisms</A>, I discuss a refactoring exercise that exploits the monoid of an endo to make composition easier.
Endomorphisms help you lift your computation into a data type that gives you an instance of a monoid. And the <code>mappend</code> operation of the monoid is function composition. Hence once you have the <code>Endo</code> for your type defined, you get a nice declarative syntax for the operations that you want to compose, resulting in a fluent API.
Just a quick recap .. endomorphisms are functions that map a type onto itself and offer composition over monoids. Given an endomorphism we can define an implicit monoid instance ..
<pre class="brush: scala">implicit def endoInstance[A]: Monoid[Endo[A]] = new Monoid[Endo[A]] {
  def append(f1: Endo[A], f2: => Endo[A]) = f1 compose f2
  def zero = Endo.idEndo
}
</pre>
I am not going into the details of this, which I discussed at length in my earlier posts. In this article I will sum up with yet another use case for making fluent APIs using the monoid instance of an Endo. Consider an example from the domain of securities trading, where a security trade goes through a sequence of transformations in its lifecycle through the trading process ..
Here's a typical <code>Trade</CODE> model (very very trivialified for demonstration) ..
<pre class="brush: scala">sealed trait Instrument
case class Security(isin: String, name: String) extends Instrument

case class Trade(refNo: String, tradeDate: Date, valueDate: Option[Date] = None,
  ins: Instrument, principal: BigDecimal, net: Option[BigDecimal] = None,
  status: TradeStatus = CREATED)
</pre>
Modeling a typical lifecycle of a trade is complex. But for illustration, let's consider these simple ones which need to be executed on a trade in sequence ..
<ol>
<li>Validate the trade</LI>
<li>Assign value date to the trade, which will ideally be the settlement date</LI>
<li>Enrich the trade with tax/fees and net trade value</LI>
<li>Journalize the trade in books</li>
</OL>Each of the functions takes a <code>Trade</CODE> and returns a copy of the <code>Trade</CODE> with some attributes modified. A naive way of doing that will be as follows .. <pre class="brush: scala">def validate(t: Trade): Trade = //..
def addValueDate(t: Trade): Trade = //..
def enrich(t: Trade): Trade = //..
def journalize(t: Trade): Trade = //..
</pre>
and invoke these methods in sequence while modeling the lifecycle. Instead we try to make it more composable and lift the function <code>Trade => Trade</CODE> within the <code>Endo</CODE> .. <pre class="brush: scala">type TradeLifecycle = Endo[Trade]
</pre>
and here's the implementation .. <pre class="brush: scala">// validate the trade: business logic elided
def validate: TradeLifecycle =
  ((t: Trade) => t.copy(status = VALIDATED)).endo

// add value date to the trade (for settlement)
def addValueDate: TradeLifecycle =
  ((t: Trade) => t.copy(valueDate = Some(t.tradeDate), status = VALUE_DATE_ADDED)).endo

// enrich the trade: add taxes and compute net value: business logic elided
def enrich: TradeLifecycle =
  ((t: Trade) => t.copy(net = Some(t.principal + 100), status = ENRICHED)).endo

// journalize the trade into book: business logic elided
def journalize: TradeLifecycle =
  ((t: Trade) => t.copy(status = FINALIZED)).endo
</pre>
Now endo has an instance of <code>Monoid</CODE> defined by scalaz and the <code>mappend</CODE> of <code>Endo</CODE> is function composition .. Hence here's our lifecycle model using the holy monoid of endo .. <pre class="brush: scala">def doTrade(t: Trade) =
  (journalize |+| enrich |+| addValueDate |+| validate).apply(t)
</pre>
It's almost the specification that we listed above in numbered bullets. Note the inside out sequence that's required for the composition to take place in proper order.
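To see the mechanics without scalaz on the classpath, here's a stripped-down, self-contained rendering of the same pipeline (the domain types are simplified stand-ins): <code>Endo[Trade]</code> unwraps to <code>Trade => Trade</code>, and <code>|+|</code> is just <code>compose</code>, which is what forces the inside out ordering.

```scala
// Simplified stand-ins for the post's domain types (status as a String,
// no instrument or dates), purely for illustration
case class Trade(refNo: String, principal: BigDecimal,
                 net: Option[BigDecimal] = None, status: String = "CREATED")

type TradeLifecycle = Trade => Trade   // Endo[Trade], unwrapped

def validate: TradeLifecycle     = _.copy(status = "VALIDATED")
def addValueDate: TradeLifecycle = _.copy(status = "VALUE_DATE_ADDED")
def enrich: TradeLifecycle       =
  t => t.copy(net = Some(t.principal + 100), status = "ENRICHED")
def journalize: TradeLifecycle   = _.copy(status = "FINALIZED")

// |+| on Endo is `compose`, so the rightmost function runs first
def doTrade(t: Trade): Trade =
  (journalize compose enrich compose addValueDate compose validate)(t)
```

The status transitions happen left to right in lifecycle order even though the expression reads right to left.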
<br/><br/><h3>Why not plain old composition ?</H3>
<br/>
A valid question. The reason - abstraction. Abstracting the composition within types helps you compose the result with other types, as we saw in my earlier blog posts. In one of them we built larger abstractions using the Writer monad with <code>Endo</code> and in the other we used the <code>mzero</code> of the monoid as a fallback during composition thereby avoiding any special case branch statements.
<br/><br/><h3>
One size doesn't fit all .. </H3>
<br/>
The endo and its monoid compose beautifully and give us a domain friendly syntax that expresses the business functionality in a nice succinct way. But it's not a pattern which you can apply everywhere where you need to compose a bunch of domain behaviors. Like every idiom, it has its shortcomings and you need different sets of solutions in your repertoire. For example the above solution doesn't handle any of the domain exceptions - what if the validation fails ? With the above strategy the only way you can handle this situation is to throw exceptions from the validate function. But exceptions are side-effects and in functional programming there are cleaner ways to tame the evil. And for that you need different patterns in practice. More on that in subsequent posts ..http://debasishg.blogspot.com/2013/06/endo-is-new-fluent-api.htmlnoreply@blogger.com (Anonymous)2tag:blogger.com,1999:blog-22587889.post-8500761222734440572Mon, 04 Mar 2013 05:30:00 +00002013-03-04T11:00:02.367+05:30An exercise in Refactoring - Playing around with Monoids and EndomorphismsA language is powerful when it offers sufficient building blocks for library design and adequate syntactic sugar that helps build expressive syntax on top of the lower level APIs that the library publishes. In this post I will discuss an exercise in refactoring while trying to raise the level of abstraction of a modeling problem. <br />
<br />
Consider the following modeling problem that I recently discussed in one of the Scala training sessions. It's simple but offers ample opportunities to explore how we can raise the level of abstraction in designing the solution model. We will start with an imperative solution and then incrementally work on raising the level of abstraction to make the final code functional and more composable.<br />
<br />
<b><i>A Problem of Transformation ..</I></B><br />
<br />
The problem is to compute the salary of a person through composition of multiple salary components calculated based on some percentage of other components. It's a problem of applying repeated transformations to a pipeline of successive computations - hence it can be generalized as a case study in function composition. But with some constraints as we will see shortly.<br />
<br />
Let's say that the salary of a person is computed as per the following algorithm :<br />
<br />
<ol><li>basic = the basic component of his salary</LI>
<li>allowances = 20% of basic</LI>
<li>bonus = 10% of (basic + allowances)</LI>
<li>tax = 30% of (basic + allowances + bonus)</LI>
<li>surcharge = 10% of (basic + allowances + bonus - tax)</LI> </OL>Note that the computation process starts with a basic salary and computes successive components, each taking its input from the previous computation of the pipeline. But there's a catch, which makes the problem a bit more interesting from the modeling perspective. Not all components of the salary are mandatory - of course the <i>basic</I> is mandatory. Hence the final components of the salary will be determined by a configuration object which can be like the following ..<br />
<br />
<pre class="brush: scala">// an item = true means the component should be activated in the computation
case class SalaryConfig(
  surcharge: Boolean = true,
  tax: Boolean = true,
  bonus: Boolean = true,
  allowance: Boolean = true
)
</pre><br />
So when we compute the salary we need to take care of this configuration object and activate the relevant components for calculation.<br />
<br />
<b><i>A Function defines a Transformation ..</I></B><br />
<br />
Let's first translate the above components into separate Scala functions ..<br />
<br />
<pre class="brush: scala">// B = basic + 20%
val plusAllowance = (b: Double) => b * 1.2
// C = B + 10%
val plusBonus = (b: Double) => b * 1.1
// D = C - 30%
val plusTax = (b: Double) => 0.7 * b
// E = D - 10%
val plusSurcharge = (b: Double) => 0.9 * b
</pre><br />
Note that every function computes the salary up to the stage which will be fed to the next component computation. So the final salary is really the chained composition of all of these functions <i>in a specific order</I> as determined by the above stated algorithm.<br />
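With every component active, the whole computation is ordinary left-to-right function composition. Repeating the four functions so the snippet stands alone:

```scala
// the four component functions from above
val plusAllowance = (b: Double) => b * 1.2
val plusBonus     = (b: Double) => b * 1.1
val plusTax       = (b: Double) => 0.7 * b
val plusSurcharge = (b: Double) => 0.9 * b

// andThen applies left to right, matching steps 2 through 5 of the algorithm
val fullSalary: Double => Double =
  plusAllowance andThen plusBonus andThen plusTax andThen plusSurcharge
```

What this plain pipeline cannot express is switching individual components off per <code>SalaryConfig</code> - which is exactly the constraint that makes the problem interesting.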
<br />
But we need to selectively activate and deactivate the components depending on the <code>SalaryConfig</CODE> passed. Here's the version that comes straight from the imperative mindset ..<br />
<br />
<b><i>The Imperative Solution ..</I></B><br />
<br />
<pre class="brush: scala">// no abstraction, imperative, using var
def computeSalary(sc: SalaryConfig, basic: Double) = {
var salary = basic
if (sc.allowance) salary = plusAllowance(salary)
if (sc.bonus) salary = plusBonus(salary)
if (sc.tax) salary = plusTax(salary)
if (sc.surcharge) salary = plusSurcharge(salary)
salary
}
</pre><br />
Straight, imperative, mutating (using var) and finally rejected by our functional mindset.<br />
<br />
<b><i>Thinking in terms of Expressions and Composition ..</I></B><br />
<br />
Think in terms of expressions (not statements) that compose. We have functions defined above that we could compose together and get the result. But, but .. the config, which we somehow need to incorporate as part of our composable expressions.<br />
<br />
So direct composition of functions won't work because we need some conditional support to take care of the config. How else can we have a chain of functions to compose ?<br />
<br />
Note that all of the above functions for computing the components are of type <code>(Double => Double)</CODE>. Hmm .. this means they are <i>endomorphisms</I>, which are functions that have the same argument and return type - <i>"endo"</I> means <i>"inside"</I> and <i>"morphism"</I> means <i>"transformation"</I>. So an endomorphism maps a type on to itself. <a HREF="https://github.com/scalaz/scalaz">Scalaz</A> defines it as ..<br />
<br />
<pre class="brush: scala">sealed trait Endo[A] {
/** The captured function. */
def run: A => A
//..
}
</pre><br />
But the interesting part is that there's a monoid instance for <code>Endo</CODE>, and that the associative <code>append</CODE> operation of this monoid is <i>function composition</I>. That's a mouthful .. so let's dissect what we just said ..
<br />
As you all know, a monoid is defined as "a semigroup with an identity", i.e.<br />
<br />
<pre class="brush: scala">trait Monoid[A] {
def append(m1: A, m2: A): A
def zero: A
}
</pre><br />
and <code>append</CODE> has to be associative.<br />
<br />
<code>Endo</CODE> forms a monoid where <code>zero</CODE> is the identity endomorphism and <code>append</CODE> composes the underlying functions. Isn't that what we need ? Of course we need to figure out how to sneak in those conditionals ..<br />
<br />
<pre class="brush: scala">implicit def endoInstance[A]: Monoid[Endo[A]] = new Monoid[Endo[A]] {
def append(f1: Endo[A], f2: => Endo[A]) = f1 compose f2
def zero = Endo.idEndo
}
</pre><br />
But we need to <code>append</CODE> the <code>Endo</CODE> only if the corresponding bit in <code>SalaryConfig</CODE> is true. Scala allows extending a class with custom methods and scalaz gives us the following as an extension method on <code>Boolean</CODE> ..<br />
<br />
<pre class="brush: scala">/**
* Returns the given argument if this is `true`, otherwise, the zero element
* for the type of the given argument.
*/
final def ??[A](a: => A)(implicit z: Monoid[A]): A = b.valueOrZero(self)(a)
</pre><br />
That's exactly what we need to have the following implementation of a functional <code>computeSalary</CODE> that uses monoids on Endomorphisms to compose our functions of computing the salary components ..<br />
<br />
<pre class="brush: scala">// compose using mappend of endomorphism
def computeSalary(sc: SalaryConfig, basic: Double) = {
val e =
sc.surcharge ?? plusSurcharge.endo |+|
sc.tax ?? plusTax.endo |+|
sc.bonus ?? plusBonus.endo |+|
sc.allowance ?? plusAllowance.endo
e run basic
}
</pre><br />
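The snippet above leans on scalaz for <code>Endo</code>, the <code>|+|</code> monoidal append and the <code>??</code> syntax on <code>Boolean</code>. For readers who want to run it without scalaz, here's a self-contained sketch with hand-rolled stand-ins - the names and shapes here (e.g. <code>onlyIf</code>) are mine, not scalaz's exact API:

```scala
// minimal stand-ins for the scalaz machinery used above
case class Endo[A](run: A => A) {
  // monoidal append for Endo = function composition
  def |+|(that: Endo[A]): Endo[A] = Endo(this.run compose that.run)
}

// the ?? idiom: the endo if the flag is set, the monoid zero otherwise
def onlyIf[A](flag: Boolean)(e: => Endo[A]): Endo[A] =
  if (flag) e else Endo(identity[A])   // identity is the zero of Endo

case class SalaryConfig(
  surcharge: Boolean = true,
  tax: Boolean = true,
  bonus: Boolean = true,
  allowance: Boolean = true)

val plusAllowance = Endo((b: Double) => b * 1.2)
val plusBonus     = Endo((b: Double) => b * 1.1)
val plusTax       = Endo((b: Double) => b * 0.7)
val plusSurcharge = Endo((b: Double) => b * 0.9)

def computeSalary(sc: SalaryConfig, basic: Double): Double = {
  val e = onlyIf(sc.surcharge)(plusSurcharge) |+|
          onlyIf(sc.tax)(plusTax) |+|
          onlyIf(sc.bonus)(plusBonus) |+|
          onlyIf(sc.allowance)(plusAllowance)
  e.run(basic)
}
```

With <code>SalaryConfig(tax = false)</code> the zero of the monoid (the identity endomorphism) silently takes the place of <code>plusTax</code> in the chain, so the pipeline never needs a conditional.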
<b><i>More Generalization - Abstracting over Types ..</I></B><br />
<br />
We can generalize the solution further and abstract over the type that represents the collection of component functions. In the above implementation we are picking each function individually and doing an <code>append</CODE> on the monoid. Instead we can abstract over a type constructor that allows us to fold the append operation over a collection of elements.<br />
<br />
<code>Foldable[]</CODE> is an abstraction which allows its elements to be folded over. Scalaz defines instances of <code>Foldable[]</CODE> typeclass for <code>List</CODE>, <code>Vector</CODE> etc. so we don't care about the underlying type as long as it has an instance of <code>Foldable[]</CODE>. And <code>Foldable[]</CODE> has a method <code>foldMap</CODE> that makes a <code>Monoid</CODE> out of every element of the <code>Foldable[]</CODE> using a supplied function and then folds over the structure using the <code>append</CODE> function of the <code>Monoid</CODE>.<br />
<br />
<pre class="brush: scala">trait Foldable[F[_]] { self =>
def foldMap[A,B](fa: F[A])(f: A => B)(implicit F: Monoid[B]): B
//..
}
</pre><br />
In our example, <code>f: A => B</CODE> is the <code>endo</CODE> function and the <code>append</CODE> is the <code>append</CODE> of <code>Endo</CODE> which composes all the functions that form the <code>Foldable[]</CODE> structure. Here's the version using <code>foldMap</CODE> ..<br />
<br />
<pre class="brush: scala">def computeSalary(sc: SalaryConfig, basic: Double) = {
val components =
List((sc.surcharge, plusSurcharge),
(sc.tax, plusTax),
(sc.bonus, plusBonus),
(sc.allowance, plusAllowance)
)
val e = components.foldMap(e => e._1 ?? e._2.endo)
e run basic
}
</pre><br />
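To see the <code>foldMap</code> mechanics without pulling in scalaz, here's the same fold specialized by hand to <code>List</code> and the <code>Endo</code> monoid. <code>foldMapEndo</code> is my name for this specialization; scalaz's <code>foldMap</code> is the generic version over any <code>Foldable</code> and any <code>Monoid</code>:

```scala
case class Endo[A](run: A => A)

// foldMap specialized to List and the Endo monoid: map every element
// to an Endo, then fold them together with composition, starting from
// the zero of the monoid (the identity endomorphism)
def foldMapEndo[A, B](as: List[A])(f: A => Endo[B]): Endo[B] =
  as.foldLeft(Endo(identity[B]))((acc, a) => Endo(acc.run compose f(a).run))

val components: List[(Boolean, Double => Double)] =
  List((true, (b: Double) => b * 0.9),   // surcharge
       (true, (b: Double) => b * 0.7),   // tax
       (true, (b: Double) => b * 1.1),   // bonus
       (true, (b: Double) => b * 1.2))   // allowance

val e = foldMapEndo(components) {
  case (flag, f) => if (flag) Endo(f) else Endo(identity[Double])
}
// e.run(basic) applies the active components, allowance first
```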
This is an exercise which discusses how to apply transformations on values when you need to model endomorphisms. Instead of thinking in terms of generic composition of functions, we exploited the types more and discovered that our transformations are actually endomorphisms. We then applied the properties of endomorphisms to model function composition as monoidal appends. The moment we modeled at a higher level of abstraction (endomorphisms rather than native functions), we could use the zero element of the monoid as the composable null object in the sequence of function transformations. <br />
<br />
In case you are interested I have the whole <a HREF="https://github.com/debasishg/scala-snippets/blob/master/src/main/scala/monoids.scala">working example</A> in my github repo.<br />
http://debasishg.blogspot.com/2013/03/an-exercise-in-refactoring-playing.htmlnoreply@blogger.com (Anonymous)10tag:blogger.com,1999:blog-22587889.post-5099860375622051800Fri, 15 Feb 2013 12:05:00 +00002013-02-15T17:35:15.330+05:30dslscalaA DSL with an Endo - monoids for freeWhen we design a domain model, one of the issues that we care about is abstraction of implementation from the user level API. Besides making the published contract simple, this also decouples the implementation and allows post facto optimization to be done without any impact on the user level API.<br />
<br />
Consider a class like the following ..<br />
<br />
<pre class="brush: scala">// a sample task in a project
case class Task(name: String)
// a project with a list of tasks & dependencies amongst the
// various tasks
case class Project(name: String,
startDate: java.util.Date,
endDate: Option[java.util.Date] = None,
tasks: List[Task] = List(),
deps: List[(Task, Task)] = List())
</pre><br />
We can always use the algebraic data type definition above to add tasks and dependencies to a project. But besides being cumbersome as a user level API, it also programs too close to the implementation. The user is coupled to the fact that we use a <code>List</code> to store tasks, making it difficult to swap in an alternate implementation in the future. We could offer a Builder-like OO interface with fluent APIs, but that adds to the verbosity of the implementation, makes builders mutable, and is generally more difficult to compose with other generic functional abstractions. <br />
<br />
Ideally we should be having a DSL that lets users create projects and add tasks and dependencies to them.<br />
<br />
In this post I will discuss a few functional abstractions that will stay behind from the user APIs, and yet provide the compositional power to wire up the DSL. This is a post inspired by this <a HREF="http://ocharles.org.uk/blog/posts/2013-02-12-quick-dsls-with-endo-writers.html">post</A> which discusses a similar DSL design using Endo and Writers in Haskell.<br />
<br />
Let's address the issues one by one. We need to <i>accumulate</I> tasks that belong to the project. So we need an abstraction that helps in this <i>accumulation</I> e.g. concatenation in a list, or in a set or in a Map .. etc. One abstraction that comes to mind is a <code>Monoid</CODE> that gives us an associative binary operation between two objects of a type that form a monoid. <br />
<br />
<pre class="brush: scala">trait Monoid[T] {
def append(m1: T, m2: T): T
def zero: T
}
</pre><br />
A <code>List</CODE> is a monoid with concatenation as the <code>append</code>. But since we don't want to expose the concrete data structure to the client API, we can talk in terms of monoids.<br />
<br />
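As a throwaway illustration (not part of the DSL that follows), here's what a concrete <code>List</code> instance of this <code>Monoid</code> looks like:

```scala
trait Monoid[T] {
  def append(m1: T, m2: T): T
  def zero: T
}

// List forms a monoid under concatenation, with Nil as the zero
def listMonoid[A]: Monoid[List[A]] = new Monoid[List[A]] {
  def append(m1: List[A], m2: List[A]) = m1 ++ m2
  def zero = Nil
}
```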
The other data structure that we need is some form of an abstraction that will offer us the writing operation into the monoid. A <code>Writer</CODE> monad is an example of this. In fact the combination of a <code>Writer</CODE> and a <code>Monoid</CODE> is potent enough to have such a DSL in the making. Tony Morris used this combo to implement a <a HREF="https://github.com/tonymorris/writer-monad/blob/master/src/docbook/WriterDemo.scala">logging functionality</A> ..<br />
<br />
<pre class="brush: scala">for {
a <- k withvaluelog ("starting with " + _)
b <- (a + 7) withlog "adding 7"
c <- (b * 3).nolog
d <- c.toString.reverse.toInt withvaluelog ("switcheroo with " + _)
e <- (d % 2 == 0) withlog "is even?"
} yield e
</pre>
We could use this same technique here. But we have a problem - <code>Project</CODE> is not a monoid and we don't have a definition of <code>zero</CODE> for a <code>Project</CODE> that we can use to make it a <code>Monoid</CODE>.
Is there something that would help us get a monoid from <code>Project</CODE> i.e. allow us to use <code>Project</CODE> in a monoid ?
<br><br>
Enter <code>Endo</CODE> .. an endomorphism which is simply a function that takes an argument of type <code>T</CODE> and returns the same type. In Scala, we can state this as ..
<pre class="brush: scala">sealed trait Endo[A] {
// The captured function
def run: A => A
//..
}
</pre><a HREF="https://github.com/scalaz/scalaz">Scalaz</A> defines <code>Endo[A]</CODE> and provides a lot of helper functions and syntactic sugars to use endomorphisms. Among its other properties, <code>Endo[A]</CODE> provides a natural monoid and allows us to use <code>A</CODE> in a <code>Monoid</CODE>. In other words, endomorphisms of <code>A</CODE> form a monoid under composition. In our case we can define an <code>Endo[Project]</CODE> as a function that takes a <code>Project</CODE> and returns a <code>Project</CODE>. We can then use it with a <code>Writer</CODE> (as above) and implement the accumulation of tasks within a <code>Project</CODE>.
<br><br>
<i>Exercise: Implement Tony Morris' logger without side-effects using an Endo.</i>
<br><br>
Here's how we would like to accumulate tasks in our DSL ..
<pre class="brush: scala">for {
_ <- task("Study Customer Requirements")
_ <- task("Analyze Use Cases")
a <- task("Develop code")
} yield a
</pre>
<br><br>
Let's define a function that adds a <code>Task</CODE> to a <code>Project</CODE> ..
<pre class="brush: scala">// add task to a project
val withTask = (t: Task, p: Project) => p.copy(tasks = t :: p.tasks)
</pre>
<br><br>
and use this function to define the DSL API <code>task</CODE> which makes an <code>Endo[Project]</CODE> and passes it as a <code>Monoid</CODE> to the <code>Writer</CODE> monad. In the following snippet, <code>(p: Project) => withTask(t, p)</CODE> is a mapping from <code>Project => Project</CODE>, which gets converted to an <code>Endo</CODE> and then passed to the <code>Writer</CODE> monad for adding to the task list of the <code>Project</CODE>.
<pre class="brush: scala">def task(n: String): Writer[Endo[Project], Task] = {
val t = Task(n)
for {
_ <- tell(((p: Project) => withTask(t, p)).endo)
} yield t
}
</pre><br>
The DSL snippet above is a monad comprehension. Let's add some more syntax to the DSL by defining dependencies of a Project. That's also a mapping from one <code>Project</CODE> state to another and can be realized using a similar function like <code>withTask</CODE> ..
<pre class="brush: scala">// add project dependency
val withDependency = (t: Task, on: Task, p: Project) =>
p.copy(deps = (t, on) :: p.deps)
</pre><br>
.. and add a function <code>dependsOn</CODE> to our DSL that allows the user to declare explicit dependencies amongst tasks. But this time, instead of making it a standalone function, we will make it a method of the class <code>Task</CODE>. This is only for getting some free syntactic sugar in the DSL. Here's the modified <code>Task</code> ADT ..
<pre class="brush: scala">case class Task(name: String) {
def dependsOn(on: Task): Writer[Endo[Project], Task] = {
for {
_ <- tell(((p: Project) => withDependency(this, on, p)).endo)
} yield this
}
}
</pre>Finally we define the last API of our DSL that glues together the building of the Project and the addition of tasks and dependencies without directly coupling the user to some of the underlying implementation artifacts.
<pre class="brush: scala">def project(name: String, startDate: Date)(e: Writer[Endo[Project], Task]) = {
val p = Project(name, startDate)
e.run._1(p)
}
</pre>And we can finally create a <code>Project</CODE> along with tasks and dependencies using our DSL ..
<pre class="brush: scala">project("xenos", now) {
for {
a <- task("study customer requirements")
b <- task("analyze usecases")
_ <- b dependsOn a
c <- task("design & code")
_ <- c dependsOn b
d <- c dependsOn a
} yield d
}
</pre>
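To make the moving parts concrete without scalaz, here's a cut-down sketch of the same idea: a tiny Writer whose log is hardwired to <code>Project => Project</code> (i.e. an <code>Endo[Project]</code>), accumulated by composition as the comprehension sequences actions. The simplified <code>Project</code> and the names (<code>W</code>, <code>tell</code> as a standalone, <code>dependsOn</code> as a function) are mine; scalaz's Writer is of course generic in the log's monoid:

```scala
case class Task(name: String)
case class Project(name: String,
                   tasks: List[Task] = Nil,
                   deps: List[(Task, Task)] = Nil)

// a minimal Writer: the log is an endomorphism on Project
case class W[A](log: Project => Project, value: A) {
  def map[B](f: A => B): W[B] = W(log, f(value))
  def flatMap[B](f: A => W[B]): W[B] = {
    val next = f(value)
    // earlier log runs first: (next.log compose log)(p) = next.log(log(p))
    W(next.log compose log, next.value)
  }
}

def tell(f: Project => Project): W[Unit] = W(f, ())

def task(n: String): W[Task] = {
  val t = Task(n)
  tell(p => p.copy(tasks = t :: p.tasks)).map(_ => t)
}

def dependsOn(t: Task, on: Task): W[Task] =
  tell(p => p.copy(deps = (t, on) :: p.deps)).map(_ => t)

// run the accumulated endomorphism against an empty project
def project(name: String)(w: W[_]): Project = w.log(Project(name))

val p = project("xenos") {
  for {
    a <- task("study customer requirements")
    b <- task("analyze usecases")
    _ <- dependsOn(b, a)
  } yield b
}
// p.tasks == List(Task("analyze usecases"), Task("study customer requirements"))
```

The user-facing surface is the same monadic comprehension as above; only the machinery behind <code>task</code> and <code>dependsOn</code> has been inlined.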
In case you are interested I have the whole <a HREF="https://github.com/debasishg/scala-snippets/blob/master/src/main/scala/endodsl.scala">working example</A> in my github repo.http://debasishg.blogspot.com/2013/02/a-dsl-with-endo-monoids-for-free.htmlnoreply@blogger.com (Anonymous)1tag:blogger.com,1999:blog-22587889.post-4489064465464541598Fri, 01 Feb 2013 06:57:00 +00002013-02-01T16:28:23.592+05:30DIpatternsscalaModular Abstractions in Scala with Cakes and Path Dependent TypesI have been trying out various options of implementing the <a HREF="http://lampwww.epfl.ch/~odersky/papers/ScalableComponent.html">Cake pattern</A> in Scala, considered to be one of the many ways of doing dependency injection without using any additional framework. There are other (more functional) ways of doing the same thing, one of which I <a HREF="http://debasishg.blogspot.in/2010/02/dependency-injection-as-function.html">blogged</A> about before and also <a HREF="http://vimeo.com/23547652">talked</A> about in a NY Scala meetup. But I digress ..<br />
<br />
Call it DI or not, the Cake pattern is one of the helpful techniques to implement modular abstractions in Scala. You weave your abstract components (aka traits), layering on the dependencies and commit to implementations only at the end of the world. I was trying to come up with an implementation that does not use self type annotations. It's not that I think self type annotations are kludgy or anything but I don't find them used elsewhere much besides the Cake pattern. And of course mutually recursive self annotations are a code smell that makes your system anti-modular.<br />
<br />
In the following implementation I use path dependent types, which became a regular feature in Scala 2.10. Incidentally the capability had long been available as an experimental feature, but came out in public only in 2.10. The consequence is that instead of self type annotations or inheritance I will be configuring my dependencies using composition.<br />
<br />
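For readers who haven't used path dependent types before, here's a minimal illustration, independent of the domain model that follows: each instance of an outer class carries its own inner type, and the compiler keeps the instances apart.

```scala
class Graph {
  class Node
  def newNode: Node = new Node
  // accepts only nodes belonging to *this* graph instance
  def connect(a: Node, b: Node): Boolean = true
}

val g1 = new Graph
val g2 = new Graph

val ok = g1.connect(g1.newNode, g1.newNode)  // compiles: both are g1.Node
// g1.connect(g2.newNode, g1.newNode)        // rejected: g2.Node is not g1.Node
```

The same mechanism is what will later force the auditing decorator to use the same <code>Balance</code> representation as the component it wraps.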
Let me start with some basic abstractions of a very simple domain model. The core component that I will build is a service that reports the portfolio of clients as a balance. The example has been simplified for illustration purposes - the actual real life model has a much more complex implementation.<br />
<br />
A <i>Portfolio</I> is a collection of <i>Balances</I>. A <i>Balance</I> is a position of an <i>Account</I> in a specific <i>Currency</I> as on a particular <i>Date</I>. Expressing this in simple terms, we have the following traits ..<br />
<br />
<pre class="brush: scala">// currency
sealed trait Currency
case object USD extends Currency
case object EUR extends Currency
case object AUD extends Currency
//account
case class Account(no: String, name: String, openedOn: Date, status: String)
trait BalanceComponent {
type Balance
def balance(amount: Double, currency: Currency, asOf: Date): Balance
def inBaseCurrency(b: Balance): Balance
}
</pre><br />
The interesting point to note is that the actual type of <code>Balance</CODE> has been abstracted in <code>BalanceComponent</CODE>, since various services may choose to use various representations of a Balance. And this is one of the layers of the Cake that we will mix finally ..<br />
<br />
Just a note for the uninitiated, a base currency is typically considered the domestic currency or accounting currency. For accounting purposes, a firm may use the base currency to represent all profits and losses. So we may have some service or component that would like to have the balances reported in base currency.<br />
<br />
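The implementations below refer to <code>baseCurrency</code> and <code>baseCurrencyFactor</code> without showing their definitions. Here's one plausible sketch, with USD as the base currency and a static conversion table; the factors are reverse-engineered from the REPL session at the end of the post, and a real system would of course source live rates:

```scala
sealed trait Currency
case object USD extends Currency
case object EUR extends Currency
case object AUD extends Currency

// assumed: USD is the accounting (base) currency
val baseCurrency: Currency = USD

// assumed static conversion factors into the base currency
val baseCurrencyFactor: Map[Currency, Double] =
  Map(USD -> 1.0, EUR -> 1.3, AUD -> 1.2)
```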
<pre class="brush: scala">trait Portfolio {
val bal: BalanceComponent
import bal._
def currentPortfolio(account: Account): List[Balance]
}
</pre><br />
Portfolio uses the abstract <code>BalanceComponent</CODE> and does not commit to any specific implementation. And the <code>Balance</CODE> in the return type of the method <code>currentPortfolio</CODE> is actually a path dependent type, made to look nice through the object import syntax.<br />
<br />
Now let's have some standalone implementations of the above components .. we are still not there yet to mix the cake ..<br />
<br />
<pre class="brush: scala">// report balance as a TUPLE3 - simple
trait SimpleBalanceComponent extends BalanceComponent {
type Balance = (Double, Currency, Date)
override def balance(amount: Double, currency: Currency, asOf: Date) =
(amount, currency, asOf)
override def inBaseCurrency(b: Balance) =
((b._1) * baseCurrencyFactor.get(b._2).get, baseCurrency, b._3)
}
// report balance as an ADT
trait CustomBalanceComponent extends BalanceComponent {
type Balance = BalanceRep
// balance representation
case class BalanceRep(amount: Double, currency: Currency, asOf: Date)
override def balance(amount: Double, currency: Currency, asOf: Date) =
BalanceRep(amount, currency, asOf)
override def inBaseCurrency(b: Balance) =
BalanceRep((b.amount) * baseCurrencyFactor.get(b.currency).get, baseCurrency, b.asOf)
}
</pre><br />
And a sample implementation of <code>ClientPortfolio</CODE> that adds logic without yet commiting to any concrete type for the <code>BalanceComponent</CODE>.<br />
<br />
<pre class="brush: scala">trait ClientPortfolio extends Portfolio {
val bal: BalanceComponent
import bal._
override def currentPortfolio(account: Account) = {
//.. actual impl will fetch from database
List(
balance(1000, EUR, Calendar.getInstance.getTime),
balance(1500, AUD, Calendar.getInstance.getTime)
)
}
}
</pre><br />
Similar to <code>ClientPortfolio</CODE>, we can have multiple implementations of <code>Portfolio</CODE> that report balances in various forms. So our cake has started taking shape. We have the <code>Portfolio</CODE> component and the <code>BalanceComponent</CODE> already woven in without any implementation. Let's add yet another layer to the mix, maybe for fun - a decorator for the <code>Portfolio</CODE>.<br />
<br />
We add <code>Auditing</CODE> as a component which can decorate *any* <code>Portfolio</CODE> component and report the balance of an account in base currency. Note that <code>Auditing</CODE> needs to abstract implementations of <code>BalanceComponent</CODE> as well as <code>Portfolio</CODE> since the idea is to decorate any <code>Portfolio</CODE> component using any of the underlying <code>BalanceComponent</CODE> implementations.<br />
<br />
Many cake implementations use self type annotations (or inheritance) for this. I will be using composition and path dependent types.<br />
<br />
<pre class="brush: scala">trait Auditing extends Portfolio {
val semantics: Portfolio
val bal: semantics.bal.type
import bal._
override def currentPortfolio(account: Account) = {
semantics.currentPortfolio(account) map inBaseCurrency
}
}
</pre><br />
Note how the <code>Auditing</CODE> component uses the same <code>Balance</CODE> implementation as the underlying decorated <code>Portfolio</CODE> component, enforced through path dependent types.<br />
<br />
And we have reached the end of the world without yet committing to any implementation of our components .. But now let's do that and get a concrete service instantiated ..<br />
<br />
<pre class="brush: scala">object SimpleBalanceComponent extends SimpleBalanceComponent
object CustomBalanceComponent extends CustomBalanceComponent
object ClientPortfolioAuditService1 extends Auditing {
val semantics = new ClientPortfolio { val bal = SimpleBalanceComponent }
val bal: semantics.bal.type = semantics.bal
}
object ClientPortfolioAuditService2 extends Auditing {
val semantics = new ClientPortfolio { val bal = CustomBalanceComponent }
val bal: semantics.bal.type = semantics.bal
}
</pre><br />
Try these out in your REPL and see how the two services behave in the same way, abstracting away all component implementations from the user ..<br />
<br />
<pre class="brush: scala">scala> ClientPortfolioAuditService1.currentPortfolio(Account("100", "dg", java.util.Calendar.getInstance.getTime, "a"))
res0: List[(Double, com.redis.cake.Currency, java.util.Date)] = List((1300.0,USD,Thu Jan 31 12:58:35 IST 2013), (1800.0,USD,Thu Jan 31 12:58:35 IST 2013))
scala> ClientPortfolioAuditService2.currentPortfolio(Account("100", "dg", java.util.Calendar.getInstance.getTime, "a"))
res1: List[com.redis.cake.ClientPortfolioAuditService2.bal.Balance] = List(BalanceRep(1300.0,USD,Thu Jan 31 12:58:46 IST 2013), BalanceRep(1800.0,USD,Thu Jan 31 12:58:46 IST 2013))
</pre><br />
The technique discussed above is inspired by the paper <a HREF="http://www.mathematik.uni-marburg.de/~rendel/hofer08polymorphic.pdf">Polymorphic Embedding of DSLs</A>. I have been using this technique for quite some time and I have discussed a somewhat similar implementation in my book <a HREF="http://manning.com/ghosh">DSLs In Action</A> while discussing internal DSL design in Scala.<br />
<br />
And in case you are interested in the full code, I have uploaded it on my <a HREF="https://github.com/debasishg/scala-snippets/blob/master/src/main/scala/cake.scala">Github</A>.http://debasishg.blogspot.com/2013/02/modular-abstractions-in-scala-with.htmlnoreply@blogger.com (Anonymous)20tag:blogger.com,1999:blog-22587889.post-1927998603084595620Mon, 14 Jan 2013 09:50:00 +00002013-01-14T15:20:48.761+05:30functionalhaskellmonadA language and its interpretation - Learning free monadsI have been playing around with free monads of late and finding them more and more useful in implementing separation of concerns between pure data and its interpretation. Monads generally don't compose. But if you restrict monads to a particular form, then you can define a sum type that composes. In the paper <a HREF="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.101.4131">Data Types a la carte</A>, Wouter Swierstra describes this form as <br />
<pre>data Term f a =
Pure a
| Impure (f (Term f a))
</pre>These monads consist of either pure values or an impure effect, constructed using <code>f</CODE>. When <code>f</CODE> is a functor, <code>Term f</CODE> is a monad - in fact the free monad on <code>f</CODE>, the free construction being left adjoint to the forgetful functor from monads to functors.<br />
<br />
I am not going into the details of what makes a monad <i>free</I>. I once asked this question on google+ and Edward Kmett came up with a beautiful explanation. So instead of trying to come up with a half assed version of the same, have a look at Ed's response <a href="https://plus.google.com/u/0/106871002817915335660/posts/g9LASrMjeFS">here</a> .<br />
<br />
In short, we can say that a free monad is the freest object possible that's still a monad. <br />
<br />
<h1>Composition</H1>Free monads compose and help you build larger abstractions which are pure data and yet manage to retain all the properties of a monad. Hmm .. this sounds interesting because now we can not only build abstractions but also let clients extend them through composition, exploiting the fact that the result is still a monad.<br />
<br />
A free monad is pure data, not yet interpreted, as we will see shortly. You can pass it to a separate interpreter (possibly multiple interpreters) which can do whatever you feel like with the structure. Your free monad remains pure while all impurities can be put inside your interpreters.<br />
<br />
<h1>And so we interpret ..</H1>In this post I will describe how I implemented an interpreter for a Joy like concatenative language that uses the stack for its computation. Of course it's just a prototype and addresses an extremely simplified subset, but you get the idea. The basic purpose is to explore the power of free monads and come up with something that can potentially be extended to a full blown implementation of the language.<br />
<br />
When designing an interpreter there's always the risk of conflating the language along with the concerns of interpreting it. Here we will have the 2 completely decoupled by designing the core language constructs as free monads. Gabriel Gonzalez has written a series of blog posts [<a HREF="http://www.haskellforall.com/2012/06/you-could-have-invented-free-monads.html">1</A>,<a HREF="http://www.haskellforall.com/2012/07/purify-code-using-free-monads.html">2</A>,<a HREF="http://www.haskellforall.com/2012/07/free-monad-transformers.html">3</A>] on the use of free monads which contain details of their virtues and usage patterns. My post is just an account of my learning experience. In fact after I wrote the post, I discovered a similar <a HREF="https://github.com/TikhonJelvis/Forth-nonsense/blob/master/Forth.hs">exercise</A> done for embedding Forth in Haskell - so I guess I'm not the first one to learn free monads using a language interpreter.<br />
<br />
Let's start with some code, which is basically a snippet of Joy like code that will run within our Haskell interpreter ..<br />
<pre>p :: Joy ()
p = do push 5
push 6
add
incr
add2
square
cube
end
</pre><br />
This is our wish list and at the end of the post we will see if we can interpret this correctly and reason about some aspects of this code. Let's not bother much about the details of the above snippet. The important point is that if we fire it up in ghci, we will see that it's pure data!<br />
<pre>*Joy> p
Free (Push 5 (Free (Push 6 (Free (Add (Free (Push 1 (Free (Add (Free (Push 1 (Free (Add (Free (Push 1 (Free (Add (Free (Dup (Free (Mult (Free (Dup (Free (Dup (Free (Mult (Free (Mult (Free End))))))))))))))))))))))))))))))
*Joy>
</pre><br />
We haven't yet executed anything of the above code. It's completely free to be interpreted and possibly in multiple ways. So, we have achieved this isolation - that the data is freed from the interpreter. You want to develop a pretty printer for this data - go for it. You want to apply semantics and give an execution model based on Joy - do it.<br />
<br />
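To make the "multiple interpreters" point concrete, here's a second interpreter over the same pure data - a pretty printer that renders the program instead of running it. This is my sketch, not from the original post; it assumes the <code>free</code> package's <code>Control.Monad.Free</code>, same as the rest of the post:

```haskell
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Free

data JoyOperator cont = Push Int cont
                      | Add cont
                      | Mult cont
                      | Dup cont
                      | End
                      deriving (Show, Functor)

type Joy = Free JoyOperator

push :: Int -> Joy ()
push n = liftF $ Push n ()

add :: Joy ()
add = liftF $ Add ()

end :: Joy a
end = liftF End

-- a second interpreter: renders the pure data as concatenative source
prettyPrint :: Joy a -> String
prettyPrint (Free (Push v cont)) = "push " ++ show v ++ "; " ++ prettyPrint cont
prettyPrint (Free (Add cont))    = "add; "  ++ prettyPrint cont
prettyPrint (Free (Mult cont))   = "mult; " ++ prettyPrint cont
prettyPrint (Free (Dup cont))    = "dup; "  ++ prettyPrint cont
prettyPrint (Free End)           = "end"
prettyPrint (Pure _)             = "<no end>"
```

So <code>prettyPrint (do {push 5; push 6; add; end})</code> yields <code>"push 5; push 6; add; end"</code> - same data, different interpretation.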
<h1>Building the pure data (aka Builder)</H1>Let's first define the core operators of the language .. <br />
<pre>data JoyOperator cont = Push Int cont
| Add cont
| Mult cont
| Dup cont
| End
deriving (Show, Functor)
</pre><br />
The interesting piece is the derivation of the <code>Functor</CODE>, which is required since the free monad is built over a functor. Keeping the technical mumbo jumbo aside, free monads are just a general way of turning functors into monads. So if we have a core operator <code>f</CODE> as a functor, we can get a free monad <code>Free f</CODE> out of it. And knowing something is a free monad helps you transform an operation over the monad (a monad homomorphism) into an operation over the functor (a functor homomorphism). We will see how this helps later .. <br />
<br />
The other point to note is that all operators take a continuation argument that points to the next operation in the chain. <code>End</CODE> is the terminal symbol and we plan to ignore anything that the user enters after an <code>End</CODE>.<br />
<br />
<code>Push</CODE> takes an <code>Int</CODE> and pushes into the stack, <code>Add</CODE> pops the top 2 elements of the stack and pushes the sum, <code>Mult</CODE> does the same for multiplication and <code>Dup</CODE> duplicates the top element of the stack. <code>End</CODE> signifies the end of program.<br />
<br />
Next we define the free monad over <code>JoyOperator</CODE> by using the <code>Free</CODE> data constructor, defined as part of <code>Control.Monad.Free</CODE> ..<br />
<br />
<pre>data Free f a = Pure a | Free (f (Free f a))</pre><br />
<pre>-- | The free monad over JoyOperator
type Joy = Free JoyOperator
</pre><br />
And then follow it up with some of the definitions of Joy operators as operations over free monads. Note that <code>liftF</CODE> lifts an operator (which is a <code>Functor</CODE>) into the context of a free monad. <code>liftF</CODE> has the following type ..<br />
<br />
<pre>liftF :: Functor f => f a -> Free f a</pre><br />
As a property, the free monad construction is left adjoint to the forgetful functor. The unlifting from the monad back to the functor is given by the <code>retract</CODE> function ..<br />
<br />
<pre>retract :: Monad f => Free f a -> f a</pre><br />
and needless to say <br />
<br />
<pre>retract . liftF = id</pre><br />
<pre>-- | Push an integer to the stack
push :: Int -> Joy ()
push n = liftF $ Push n ()
-- | Add the top two numbers of the stack and push the sum
add :: Joy ()
add = liftF $ Add ()
</pre><br />
.. and this can be done for all operators that we wish to support.<br />
<br />
Not only this, we can also combine the above operators and build newer ones. Remember we are working with monads and hence the *do* notation based sequencing comes for free ..<br />
<br />
<pre>-- | This combinator adds 1 to a number.
incr :: Joy ()
incr = do {push 1; add}
-- | This combinator increments twice
add2 :: Joy ()
add2 = do {incr; incr}
-- | This combinator squares a number
square :: Joy ()
square = do {dup; mult}
</pre><br />
Now we can have a composite program which sequences through the core operators as well as the ones we derive from them. And that's what we posted as our first example snippet of a target program.<br />
<br />
<h1>An Interpreter (aka Visitor)</H1>Once we have the pure data part done, let's try and build an interpreter that does the actual execution based on the semantics that we defined on the operators.<br />
<br />
<pre>-- | Run a joy program. Result is either an Int or an error
runProgram :: Joy n -> Either JoyError Int
runProgram program = joy [] program
where joy stack (Free (Push v cont)) = joy (v : stack) cont
joy (a : b : stack) (Free (Add cont)) = joy (a + b : stack) cont
joy (a : b : stack) (Free (Mult cont)) = joy (a * b : stack) cont
joy (a : stack) (Free (Dup cont)) = joy (a : a : stack) cont
joy _ (Free Add {}) = Left NotEnoughParamsOnStack
joy _ (Free Dup {}) = Left NotEnoughParamsOnStack
joy [] (Free End) = Left NotEnoughParamsOnStack
joy [result] (Free End) = Right result
joy _ (Free End) = Left NotEmptyOnEnd
joy _ Pure {} = Left NoEnd
</pre><br />
<code>runProgram</CODE> is the interpreter that takes a free monad as its input. Its implementation is quite trivial - it just matches on the recursive structure of the data and pushes the appropriate results on the stack. Now if we run our program <code>p</CODE> using the above interpreter we get the correct result ..<br />
<br />
<pre>*Joy> runProgram p
Right 7529536
*Joy>
</pre><br />
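By the way, if you would like to run all of this end to end without pulling in the <code>free</code> package, here's a condensed, self-contained sketch of the definitions used above - the Free monad is hand-rolled and the error handling slightly simplified, so treat it as an approximation of the post's code rather than the real thing ..<br />

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A hand-rolled Free monad (the post assumes the one from the `free` package)
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Free fa) = Free (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Free fg <*> x = Free (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a  >>= k = k a
  Free fa >>= k = Free (fmap (>>= k) fa)

liftF :: Functor f => f a -> Free f a
liftF = Free . fmap Pure

-- the instruction set of our little stack machine
data JoyOperator next = Push Int next | Add next | Mult next | Dup next | End
  deriving Functor

type Joy = Free JoyOperator

data JoyError = NotEnoughParamsOnStack | NotEmptyOnEnd | NoEnd
  deriving (Show, Eq)

push :: Int -> Joy ()
push v = liftF $ Push v ()

add, mult, dup, end :: Joy ()
add  = liftF $ Add ()
mult = liftF $ Mult ()
dup  = liftF $ Dup ()
end  = liftF End

-- derived combinators, exactly as in the post
incr, square :: Joy ()
incr   = do { push 1; add }
square = do { dup; mult }

-- the interpreter, with slightly simplified error cases
runProgram :: Joy n -> Either JoyError Int
runProgram = joy []
  where
    joy stack (Free (Push v cont))     = joy (v : stack) cont
    joy (a:b:stack) (Free (Add cont))  = joy (a + b : stack) cont
    joy (a:b:stack) (Free (Mult cont)) = joy (a * b : stack) cont
    joy (a:stack)   (Free (Dup cont))  = joy (a : a : stack) cont
    joy _ (Free (Add _))               = Left NotEnoughParamsOnStack
    joy _ (Free (Mult _))              = Left NotEnoughParamsOnStack
    joy _ (Free (Dup _))               = Left NotEnoughParamsOnStack
    joy [result] (Free End)            = Right result
    joy _        (Free End)            = Left NotEmptyOnEnd
    joy _        (Pure _)              = Left NoEnd

main :: IO ()
main = do
  print $ runProgram (do { push 5; incr; square; end })  -- Right 36
  print $ runProgram (do { push 5; end; incr })          -- Right 5
```

Note that the second program confirms the claim proved below: the two <code>incr</code>s after <code>end</code> are simply ignored.<br />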
<h1>Equational Reasoning</H1>Haskell being pure, we can prove some bits and pieces of our program as mathematical equations. At the beginning I said that <code>end</CODE> marks the end of the program and anything after <code>end</CODE> needs to be ignored. What happens if we do the following ..<br />
<br />
<pre>runProgram $ do {push 5; incr; incr; end; incr; incr}</pre><br />
If you guessed <code>Right 7</CODE>, you guessed correctly - all operations after <code>end</CODE> have been ignored. But can we prove that our program indeed does this ? The following is a proof that <code>end</CODE> indeed ends our program. Consider some operation <code>m</CODE> that follows <code>end</CODE> ..<br />
<br />
<pre>end >> m
-- definition of end
= (liftF End) >> m
-- m >> m' = m >>= \_ -> m'
= (liftF End) >>= \_ -> m
-- definition of liftF
= Free End >>= \_ -> m
-- Free m >>= f = Free (fmap (>>= f) m)
= Free (fmap (>>= \_ -> m) End)
-- fmap f End = End
= Free End
-- liftF f = Free f (f is a functor)
= liftF End
-- definition of end
= end
</pre><br />
So this shows that any operation we do after <code>end</CODE> is never executed.<br />
<br />
<h1>Improving the bind</H1>Free monads offer improved modularity by allowing us to separate the building of pure data from the (possibly) impure concerns of interpreting it with the external world. But a naive implementation leads to a penalty in runtime efficiency as Janis Voigtlander discusses in his paper <a HREF="http://www.iai.uni-bonn.de/~jv/mpc08.pdf">Asymptotic Improvement of Computations over Free Monads</A>. And Haskell's implementation of free monads had this inefficiency where the asymptotic complexity of substitution was quadratic because of left associative bind. Edward Kmett implemented the Janis trick and engineered a solution that gets over this inefficiency. I will discuss this in a future post. And if you would like to play around with the interpreter, the source is there in my <a HREF="https://github.com/debasishg/joy-free-monads/blob/master/joy_free.hs">github</A>.<br />
http://debasishg.blogspot.com/2013/01/a-language-and-its-interpretation.htmlnoreply@blogger.com (Anonymous)3tag:blogger.com,1999:blog-22587889.post-8922324363920234723Thu, 03 Jan 2013 05:36:00 +00002013-01-03T11:06:06.538+05:30corecursionfunctionalhaskellinductivestrict : recursion :: non-strict : co-recursionConsider the very popular algorithm that uses a tail recursive call to implement a <code>map</CODE> over a <code>List</CODE>. Here's the implementation in F# <br />
<br />
<pre>let map converter l =
    let rec loop acc = function
        | [] -> acc
        | hd :: tl -> loop (converter hd :: acc) tl
    List.rev (loop [] l)</pre><br />
Scala is also a statically typed functional programming language, though its standard library uses a completely different trick, based on mutability, to implement <code>map</CODE> over a sequence. Let's ignore that for the time being and imagine we are bona fide functional programmers.<br />
<br />
The above code uses a very common idiom of accumulate-and-reverse to implement a tail recursive algorithm. Though Scala stdlib does not use this technique and uses a mutable construct for this implementation, we could have done the same thing in Scala as well ..<br />
<br />
Both Scala and F# are languages with strict evaluation semantics as the default. What would a similar tail recursive Haskell implementation look like ?<br />
<br />
<pre>map' f xs = reverse $ go [] xs
  where
    go accum []     = accum
    go accum (x:xs) = go (f x : accum) xs
</pre><br />
Let's now have a look at the actual <a HREF="http://www.haskell.org/ghc/docs/latest/html/libraries/base/src/GHC-Base.html#map">implementation</A> of <code>map</CODE> in Haskell prelude ..<br />
<br />
<pre>map :: (a -> b) -> [a] -> [b]
map _ [] = []
map f (x:xs) = f x : map f xs
</pre><br />
Whoa! It's not tail recursive; instead it's a body recursive implementation. Why is that ? We have been taught that tail recursion is the holy grail since it takes constant stack space and all ..<br />
<br />
In a strict language the above implementation of <code>map</CODE> will be a bad idea since it uses linear stack space. On the other hand the initial implementation is tail recursive. But Haskell has non-strict semantics and hence the last version of map consumes only one element of the input and yields one element of the output. The earlier versions consume the whole input before yielding any output.<br />
<br />
In a lazy language what you need is to make your algorithm co-recursive. And this needs co-data. Understanding co-data or co-recursion needs a brief background of <i>co-induction</I> and how it differs from <i>induction</I>. <br />
<br />
When we define a list in Haskell as <br />
<br />
<pre>data [a] = [] | a : [a]
</pre><br />
it means that the set "List of a" is the <i>smallest</I> set such that [] is in List of a, and if xs is in List of a and x is in a, then x : xs is in List of a. This is the inductive definition of a List and we can use recursion to implement various functions over a List. Also, once we have a recursive definition, we can use structural induction to prove various properties of the data structure.<br />
<br />
If an inductive definition on data gives us the smallest set, a co-inductive definition on co-data gives us the largest set. Let's define a Stream in Haskell ..<br />
<br />
<pre>data Stream a = Cons a (Stream a)
</pre><br />
First to note - unlike the definition of a <code>List</CODE>, there's no special case for empty <code>Stream</CODE>. Hence no base case unlike the inductive definition above. And a "Stream of a" is the <i>largest</I> set consisting of a pair of an "a" and a "Stream of a". Which means that <code>Stream</CODE> is an infinite data structure i.e. the range (or co-domain) of a Stream is infinite. And we implement properties on co-inductive data structures using co-recursive programs. A co-recursive program is defined as one whose range is a type defined recursively as the greatest solution of some equation.<br />
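To make this concrete, here's a small self-contained sketch - the <code>Stream</CODE> type is the one above, while <code>smap</CODE>, <code>nats</CODE> and <code>stake</CODE> are names I'm making up for a corecursive map, a corecursive generator and a finite observer ..<br />

```haskell
-- the co-data type from above: no nil case, every Stream is infinite
data Stream a = Cons a (Stream a)

-- a corecursive map: the recursive call is guarded inside the Cons
-- constructor, so each element of the result is produced on demand
smap :: (a -> b) -> Stream a -> Stream b
smap f (Cons x xs) = Cons (f x) (smap f xs)

-- corecursively *generate* the infinite stream of naturals
nats :: Stream Integer
nats = go 0 where go n = Cons n (go (n + 1))

-- observe a finite prefix of the infinite structure
stake :: Int -> Stream a -> [a]
stake n _ | n <= 0 = []
stake n (Cons x xs) = x : stake (n - 1) xs

main :: IO ()
main = print $ stake 3 (smap (* 2) nats)   -- [0,2,4]
```

Note that <code>smap</CODE> never inspects the "size" of its input - there is no base case to reach - yet laziness lets us observe as much of its output as we please.<br />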
<br />
In terms of a little bit of mathematical concepts, if we model types as sets, then an infinite list of integers is the <i>greatest</I> set X for which there is a bijection X ≅ ℤ x X and a program that <i>generates</I> such a list is a co-recursive program. As its dual, the type of a finite list is the <i>least</I> set X for which X ≅ 1 + (ℤ x X), where 1 is a singleton set and + is the disjoint union of sets. Any program which <i>consumes</I> such an input is a recursive program. (Note the dualities in italics).<br />
<br />
Strictly speaking, the co-domain does not necessarily have to be infinite. If you have read my earlier <a HREF="http://debasishg.blogspot.in/2012/01/list-algebras-and-fixpoint-combinator.html">post</A> on initial algebra, co-recursion recursively defines functions whose co-domain is a <i>final data type</I>, dual to the way that ordinary recursion recursively defines functions whose domain is an <i>initial data type</I>.<br />
<br />
In case of primitive recursion (with the <code>List</CODE> data type above), you always frame the recursive step to operate on data which gets reduced in size in subsequent steps of recursive calls. It's not so with co-recursion since you may be working with co-data and infinite data structures. Since in Haskell, a <code>List</CODE> can also be infinite, using the above co-recursive definition of <code>map</CODE>, we can have<br />
<br />
<pre>head $ map (1+) [1..]
</pre><br />
which invokes <code>map</CODE> on an infinite list of integers. And here the co-recursive steps of map operate successively on sets of data which are no smaller than the earlier ones. So it's not tail recursion that makes an efficient implementation in Haskell; you need to make the co-recursive call within the application of a constructor.<br />
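Here's the same point in runnable form - <code>mapCo</CODE> is just the prelude-style definition under a hypothetical name, so it doesn't clash with Prelude's <code>map</CODE> ..<br />

```haskell
-- body recursive map: the recursive call sits under the (:) constructor,
-- so results are produced on demand, one cons cell at a time
mapCo :: (a -> b) -> [a] -> [b]
mapCo _ []       = []
mapCo f (x : xs) = f x : mapCo f xs

main :: IO ()
main = print $ take 3 (mapCo (+ 1) [1 ..])   -- [2,3,4]
-- the tail recursive accumulate-and-reverse version would diverge here,
-- since reverse has to consume the whole (infinite) input first
```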
<br />
As the first post of the new year, that's all friends. Consider this as the notes from someone learning the principles of co-inductive definitions. Who knew co-recursion is so interesting ? It will be more interesting once we get down into the proof techniques for co-recursive programs. But that's for a future post ..<br />
http://debasishg.blogspot.com/2013/01/strict-recursion-non-strict-co-recursion.htmlnoreply@blogger.com (Anonymous)1tag:blogger.com,1999:blog-22587889.post-6494088149187798898Fri, 28 Dec 2012 19:01:00 +00002012-12-29T00:31:59.733+05:30The Best of 2012 - Looking BackThe year 2012 was a bit different for me in terms of readings and enlightenment. While once again Scala proved to be the most dominant language that I used throughout the year, I also had some serious productive time and mind share to learning machine learning and related topics. Online courses at <A HREF="http://www.coursera.org">Coursera</A> were of immense help in this regard and proved to be the catalyst of inspiration. Being in India, staying within the comfort zones of my happy home, I could attend the classes of professors like Daphne Koller, Martin Odersky and Geoffrey Hinton. Thanks a lot Coursera for this enlightening experience ..<br />
<br />
<H1>online courses that I attended ..</H1><br />
<UL> <LI><A HREF="https://class.coursera.org/pgm/class/index">Probabilistic Graphical Models</A> (Daphne Koller)</LI>
<LI><A HREF="https://class.coursera.org/progfun-2012-001/class/index">Functional Programming Principles in Scala</A> (Martin Odersky)</LI>
<LI><A HREF="https://class.coursera.org/neuralnets-2012-001/class/index">Neural Networks for Machine Learning</A> (Geoffrey Hinton)</LI>
</UL><br />
<H1>the best of the papers that I read ..</H1><br />
I usually read lots of papers on my subjects of interest. They are too many to list here. But here are a few of them, mostly from the domain of programming languages and type theory. In this regard I must confess that I am not a programming language theory person. But I sincerely believe that you need to know some theory to make sense of its implementation. That's the reason I think programmers (yes, serious programmers using a decent programming language with a static type system) will do better to learn a bit of <A HREF="http://debasishg.blogspot.in/2012/07/does-category-theory-make-you-better.html">category</A> <A HREF="http://debasishg.blogspot.in/2012/08/category-theory-and-programming.html">theory</A>. <br />
<br />
As I mentioned at the beginning, currently my programming language of choice is Scala and I have been exploring how to write better code with Scala's multi-paradigm capabilities. Though I tend to use the functional power of Scala more than its object oriented features, there are quite a few idioms that I use where I find its object oriented capabilities useful. Scala 2.10 will now be released any time and I am even more excited to try out many of the new features that it will offer.<br />
<br />
Ok .. here are some of the papers that I enjoyed reading in 2012 ..<br />
<br />
<UL> <LI><A HREF="http://adam.chlipala.net/papers/CpdtJFR/">An Introduction to Programming and Proving with Dependent Types in Coq</A> - Adam Chlipala</LI>
<LI><A HREF="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.38.9875">Theorems for free!</A> - Philip Wadler</LI>
<LI><A HREF="https://personal.cis.strath.ac.uk/conor.mcbride/pub/Totality.pdf">Totality</A> - Conor McBride</LI>
<LI><A HREF="http://homepages.inf.ed.ac.uk/wadler/papers/frege/frege.pdf">Proofs are Programs: 19th Century Logic and 21st Century Computing</A> - Philip Wadler</LI>
<LI><A HREF="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.101.4131">Data types à la carte</A> - Wouter Swierstra</LI>
<LI><A HREF="http://dl.acm.org/citation.cfm?id=2364529">Agda-curious?: an exploration of programming with dependent types</A> - Conor McBride</LI>
<LI><A HREF="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.6279">Ideal Hash Trees</A> - Phil Bagwell</LI>
</UL><br />
<H1>a few good books ..</H1><br />
This is also a different list than what I expected at the beginning of the year. It's mostly due to my focus on machine learning and AI related topics that I took up in the course of my journey.<br />
<br />
<UL> <LI><A HREF="http://www.amazon.com/Causality-Reasoning-Inference-Judea-Pearl/dp/052189560X">Causality: Models, Reasoning and Inference</A> - Judea Pearl</LI>
<LI><A HREF="http://www.amazon.com/Probabilistic-Graphical-Models-Principles-Computation/dp/0262013193">Probabilistic Graphical Models: Principles and Techniques</A> - Daphne Koller</LI>
<LI><A HREF="http://www.amazon.com/World-Without-Time-Forgotten-Einstein/dp/0465092942">A World Without Time: The Forgotten Legacy of Godel and Einstein</A> - Palle Yourgrau</LI>
<LI><A HREF="http://www.amazon.com/Neural-Networks-Pattern-Recognition-Christopher/dp/0198538642">Neural Networks for Pattern Recognition</A> - Christopher M. Bishop</LI>
<LI><A HREF="http://www.amazon.com/Turing-Novel-Computation-Christos-Papadimitriou/dp/0262661918">Turing (A Novel about Computation)</A> - Christos H. Papadimitriou</LI>
<LI><A HREF="http://www.cs.cmu.edu/~rwh/plbook/book.pdf">Practical Foundations for Programming Languages</A> - Robert Harper (Read the freely available ebook)</LI>
<LI><A HREF="http://www.amazon.com/Haskell-Functional-Programming-International-Computer/dp/0201882957">Haskell: The Craft of Functional Programming (3rd Edition)</A> - Simon Thompson</LI>
</UL><br />
<H1>3 conferences in a year - my best so far ..</H1><br />
This was my second consecutive <a href="http://phillyemergingtech.com">PhillyETE</a> and I spoke on functional approach to domain modeling. In case you missed it earlier, here's the <A HREF="http://www.slideshare.net/debasishg/functional-and-event-driven-another-approach-to-domain-modeling">presentation</A> up on slideshare. I combined my PhillyETE trip with a trip to London to attend <a href="http://days2012.scala-lang.org/">ScalaDays 2012</a>. It was a great experience listening to 3 keynotes by Guy Steele, Simon Peyton Jones and Martin Odersky. I also got to meet a number of my friends on Twitter who rock the Scala community .. it was so nice getting to know the faces behind the twitter handles. Later in the year I also spoke at <a href="http://qconnewyork.com/">QCon NY</a> on the topic of <A HREF="http://www.slideshare.net/debasishg/qconny-12">domain modeling in the functional world</A>.<br />
<br />
One of the conferences that I plan to attend in 2013 is <a href="https://thestrangeloop.com/">The Strange Loop</a>. It has been one of the awesomest conferences so far and I hate to be tracking it only on Twitter every year. Keeping fingers crossed .. <br />
<br />
<H1>some plans for 2013 (Not Resolutions though!)</H1><br />
In no specific order ..<br />
<br />
<UL> <LI>Do more Haskell - Haskell is surely in for some big renaissance. I just spent a couple of hours of very productive time watching Edward Kmett explain his new lens library.</LI>
<LI>Continue Scala - obviously .. needs no explanation ..</LI>
<LI>Explore more of machine learning and work on at least one concrete project. I am now reading 2 solid books on ML theory - as I mentioned above I feel a bit uncomfortable learning implementations without any of the theory that drive them. While both of these books assume a solid background of mathematics and statistics (which I don't have), I am trying to crawl through them, greedily accumulating necessary math background as much as I can. Wish I could take a sabbatical leave for 1 year and finish both of them.</LI>
</UL><br />
Guess that's all .. See you all on the other side of 2013. Happy holidays and have a gorgeous 2013.http://debasishg.blogspot.com/2012/12/the-best-of-2012-looking-back.htmlnoreply@blogger.com (Anonymous)2tag:blogger.com,1999:blog-22587889.post-6185794315412664736Mon, 01 Oct 2012 06:03:00 +00002012-10-01T11:41:06.820+05:30functionalscalaTowards better refactoring support in IDEs for functional programming<br />
A couple of days back I was thinking about how we could improve the state of IDEs and let them give richer feedback to users, focusing on code improvement and refactoring. When you write Java programs, IntelliJ IDEA is possibly the best IDE you can have, with an extensive set of refactoring tools. But most of these are targeted towards reducing boilerplate and refactor based on structural changes only (e.g. replace constructor with builder, or use interfaces wherever possible, etc.). Can we improve the state of the art when we are using a semantically richer language like Scala or Haskell ? How about suggesting idioms that relate to the paradigm of functional programming and exploring some of the best practices suggested by the masters of the trade ?<br />
<br />
I always find many programmers use <code>Option.get</CODE> in Scala when they should be using higher order structures instead. Here's an example of the anti-pattern ..<br />
<br />
<pre class="brush: scala">if (oName.isDefined) oName.get.toUpperCase else "Name not present"</pre><br />
which should ideally be refactored to <br />
<br />
<pre class="brush: scala">oName.map(_.toUpperCase).getOrElse("Name not present")</pre><br />
The advantage with the latter is that it's more composable and can be pipelined to other higher order functions down the line without introducing any temporary variables.<br />
<br />
Many people also use pattern matching for destructuring an <code>Option</CODE> ..<br />
<br />
<pre class="brush: scala">oName match {
case Some(n) => n.toUpperCase
case _ => "Name not present"
}
</pre><br />
This is also not what idioms espouse and is almost equivalent to explicit null checking.<br />
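For comparison, the same refactoring reads naturally in Haskell too, using the <code>maybe</CODE> eliminator instead of destructuring by hand (<code>displayName</CODE> is a hypothetical helper, not from the post) ..<br />

```haskell
import Data.Char (toUpper)

-- fold over the Maybe with the `maybe` eliminator rather than
-- pattern matching on Just/Nothing explicitly
displayName :: Maybe String -> String
displayName = maybe "Name not present" (map toUpper)

main :: IO ()
main = do
  putStrLn $ displayName (Just "alice")   -- ALICE
  putStrLn $ displayName Nothing          -- Name not present
```

Just like the <code>map</CODE>/<code>getOrElse</CODE> version in Scala, this stays composable and introduces no temporary variables.<br />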
<br />
Can an IDE suggest such refactoring ? I <a HREF="https://twitter.com/debasishg/status/251891770424688640">tweeted</A> about my thoughts and almost immediately <a HREF="http://twitter.com/m_st">Mirko</A>, who is doing some great stuff on the Scala IDE, added this to his list of ideas. Nice!<br />
<br />
Here's another idea. How about suggesting some program transformations for functional programming ? I know Scala is not a pure functional language and we don't have effect tracking. I have seen people use side-effecting functions in <code>map</CODE>, which, I know is a strict no-no. But again, the compiler doesn't complain. Still we can have the IDE suggest some refactorings assuming the programmer uses purity as per recommendations. After all, these are suggestions only and can enlighten a programmer with some of the realizations of how to model based on the best practices.<br />
<br />
Consider this ..<br />
<br />
<pre class="brush: scala">(-1000000 to 1000000).map(_ + 1).filter(_ > 0)</pre><br />
Here the <code>map</CODE> is executed on all the elements and then <code>filter</CODE> processes the whole output. One option to optimize would be to defer the computations using <code>view</CODE> ..<br />
<br />
<pre class="brush: scala">(-1000000 to 1000000).view.map(_ + 1).filter(_ > 0)</pre><br />
But this doesn't reduce the load of computation on <code>map</CODE>. It just prevents the creation of a temporary list as an output of the <code>map</CODE>. We can apply a transformation called <b>filter promotion</B> that does the filtering first so that <code>map</CODE> can operate on a potentially smaller list. Of course this assumes that both functions are <i>pure</I> so that operations can be reordered - but that's what we should anyway ensure while doing functional programming .. right ? Here's the transformation ..<br />
<br />
<pre class="brush: scala">(-1000000 to 1000000).filter(((_: Int) > 0) compose ((_: Int) + 1)).map(_ + 1)</pre><br />
which can be further simplified into .. <br />
<br />
<pre class="brush: scala">(-1000000 to 1000000).filter((_: Int) >= 0).map(_ + 1)</pre><br />
.. and this has the same computation load on <code>filter</CODE> but potentially less load on <code>map</CODE>, since it now has to process a filtered list.<br />
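We can even state filter promotion as an equation - <code>filter p . map f = map f . filter (p . f)</code> for pure <code>f</code> and <code>p</code> - and sanity check it on a sample. The check below is in Haskell since the equation holds in any pure setting; <code>xs</code>, <code>lhs</code>, <code>rhs</code> and <code>simplified</code> are names invented for this check ..<br />

```haskell
-- a sample input range, standing in for the post's (-1000000 to 1000000)
xs :: [Int]
xs = [-10 .. 10]

lhs, rhs, simplified :: [Int]
lhs        = filter (> 0) (map (+ 1) xs)           -- original pipeline
rhs        = map (+ 1) (filter ((> 0) . (+ 1)) xs) -- filter promoted
simplified = map (+ 1) (filter (>= 0) xs)          -- predicate simplified

main :: IO ()
main = print (lhs == rhs && rhs == simplified)     -- True
```

A property based testing tool like QuickCheck would let us check this for arbitrary lists and (pure) functions rather than one sample.<br />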
<br />
Can we have such transformations available in the IDE ? I am a faithful user of vim, but with such enrichments I may consider going back to an IDE. This really has no limits. Consider Wadler's <a HREF="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.38.9875">free theorems</A>. Many times we tend to write functions whose arguments are more specialized in types than what's required. Generalization of such functions using polymorphic arguments can lead to better abstraction and verification through free theorems. An IDE can suggest all these and may also help generate properties for property based testing using tools like QuickCheck and ScalaCheck. But that's a discussion for another day ..<br />
<br />
P.S. Simon Thompson's Haskell: The Craft of Functional Programming (3rd edition) discusses lots of similar transformations in various chapters on reasoning about Haskell programs.<br />
http://debasishg.blogspot.com/2012/10/towards-better-refactoring-support-in.htmlnoreply@blogger.com (Anonymous)6tag:blogger.com,1999:blog-22587889.post-3953733830027547453Fri, 10 Aug 2012 03:52:00 +00002012-08-10T09:24:07.036+05:30category_theoryfunctionalhaskellscalaCategory Theory and Programming - Reinforcing the value of generic abstractionsOk .. so my last <a HREF="http://debasishg.blogspot.in/2012/07/does-category-theory-make-you-better.html">post</A> on the probable usefulness of Category Theory in programming generated an overwhelming response all over the place. It initiated lots of discussions, which are always welcome and offer a great conduit towards enlightenment for everyone. From all these discussions, we can summarize that there's no boolean answer to whether category theory has any usefulness in making you a better programmer. Category theory is abstract, probably more abstract than what our normal programmer's mind can comprehend. Here's what one <a HREF="https://news.ycombinator.com/item?id=4314522">mathematician's bias</A> tells us on HN ..<br />
<br />
<blockquote>"This may be a mathematician's bias, but Category Theory is somewhat underwhelming because you can't really do anything with it. It's actually too abstract--you can model basically every branch of mathematics with Categories, but you can't actually prove very much about them from this perspective. All you get is a different vocabulary to talk about results that you already knew. It's also less approachable because its objects of study are other branches of mathematics, whereas most other branches have number, vectors, and functions as standard objects."</blockquote><br />
Now let me digress a bit towards personal experience. I don't have a strong math background and have only started learning category theory. I don't get most of the mathy stuff that category theorists talk about, and much of it I cannot relate to my programming world. But the subset that Pierce talks about and which I can relate to when I write code gives me an alternate perspective to think about functional abstractions. And, as I mentioned in my last post, category theory focuses more on the morphisms than on the objects. When I am programming in a functional language, this has a strong parallel towards emphasizing how abstractions compose. Specifically point free style composition has its inspiration from category theory and the way it focuses on manipulating arrows across objects.<br />
<br />
<h1>Focusing on Generic Abstractions</H1>Category theory gives us Functors, Applicatives, Monads, Semigroups, Monoids and a host of other abstractions which we use extensively in our everyday programming. Or at least I try to. These have taught me to think in terms of generic abstractions. I will give you a practical example from my personal experience that served as a great eye opener towards appreciating the value of generic abstractions ..<br />
<br />
I program mostly in Scala and sometimes in Haskell. And one thing I need to do frequently (and it's not at all something unique to me) is apply a function <code>f</CODE> to a collection of elements (say of type <code>Foo</CODE>) with the caveat that the function may sometimes return no result. In Scala the obvious way to model this is for the function to return an <code>Option</CODE> type. Just for the record, <code>Option</CODE> is the equivalent of <code>Maybe</CODE> in Haskell and can be one of the two concrete types, <code>Some</CODE> (indicating the presence of a value) and <code>None</CODE> (indicating no value). So, applying <code>f: Foo => Option[Bar]</CODE> over the elements of a <code>List[Foo]</CODE>, gives me <code>List[Option[Bar]]</CODE>. My requirement was to convert the result into an <code>Option[List[Bar]]</CODE>, which would have a value only if all the <code>Foo</CODE>s in the <code>List</CODE> gave me <code>Some[Bar]</CODE>, and <code>None</CODE> otherwise. Initially I implemented this function as a specialized utility for the <code>Option</CODE> data type only. But then I realized that this function could very well be applicable for many other data types like <code>Either</CODE>, <code>List</CODE> etc. And this aha! moment came courtesy of <a HREF="https://github.com/scalaz/scalaz/">Scalaz</A> which has a function <code><a HREF="https://github.com/scalaz/scalaz/blob/master/core/src/main/scala/scalaz/MA.scala#L124">sequence</A></CODE> that does the exact same thing and works for all <code>Traversable</CODE> data structures (not just <code>List</CODE>s) containing anything that's an instance of an Applicative Functor. The Scala standard library doesn't have generic abstractions like <code>Monad</CODE>, <code>Monoid</CODE> or <code>Semigroup</CODE> and I constantly find myself reinventing them within real world domain models that I program. No wonder I have Scalaz as a default dependency in almost all of my Scala projects these days. <br />
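The Haskell counterpart is <code>sequenceA</code> from <code>Data.Traversable</code>, and its genericity is easy to see - the same function works for <code>Maybe</code>s inside a list, lists inside a list, or any Applicative inside any Traversable ..<br />

```haskell
-- sequenceA flips a Traversable of Applicatives inside out:
-- t (f a) -> f (t a)
import Data.Traversable (sequenceA)

main :: IO ()
main = do
  print $ sequenceA [Just 1, Just 2, Just 3]   -- Just [1,2,3]
  print $ sequenceA [Just 1, Nothing, Just 3]  -- Nothing
  -- the same function, with the list Applicative: all combinations
  print $ sequenceA [[1, 2], [3, 4]]           -- [[1,3],[1,4],[2,3],[2,4]]
```

One generic abstraction, written once, specialized for free by the type class instances in scope - exactly the point about thinking in terms of Applicatives rather than in terms of <code>Option</code>.<br />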
<br />
So much for generic abstractions .. Now the question is do I need to know Category Theory to learn the generic abstractions like Applicative Functors or Monads ? Possibly no, but that's because some of the benevolent souls in the programming community have translated all those concepts and abstractions to a language that's more approachable to a programmer like me. At the same time knowing CT certainly gives you a broader perspective. Like the above quote says, you can't pinpoint and say that I am using this from my category theory knowledge. But you can always get a categorical perspective and a common vocabulary that links all of them to the world of mathematical composition.<br />
<br />
<h1>Natural Transformation, Polymorphism and Free Theorems</H1>Consider yet another aha! moment that I had some time back when I was trying to grok <a HREF="http://en.wikipedia.org/wiki/Natural_transformation">Natural Transformations</A>, which are really morphisms between Functors. Suppose I have 2 functors <i>F</I> and <i>G</I>. Then a natural transformation η<i>: F → G</I> is a map that takes an object (in Haskell, a type, say, <i>a</I>) and returns a morphism from <i>F(a)</I> to <i>G(a)</I>, i.e. <br />
<br />
η <i>:: forall a. F a → G a</I><br />
<br />
Additionally, a natural transformation also needs to satisfy the following constraint ..<br />
<br />
For any <i>f :: A → B</I>, we must have <i>fmap<sub>G</SUB> f . η = η . fmap<sub>F</SUB> f</I><br />
<br />
Consider an example from Haskell, the function <code>reverse</CODE> on lists, which is defined as <br />
<br />
<code>reverse :: [a] -> [a]</CODE>, which we can more specifically state as <code>reverse :: forall a. [a] -> [a]</CODE> ..<br />
<br />
Comparing <code>reverse</CODE> with our earlier definition of η, we have <i>F ≡ []</I> and <i>G ≡ []</I>. And since for lists, <code>fmap</CODE> = <code>map</CODE>, we have the other criterion for natural transformation as <i>map f . reverse = reverse . map f</I>. And this is the <a HREF="http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.38.9875">free theorem</A> for reverse! <br />
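We can spot check this naturality condition on a sample - not a proof, of course, since the free theorem already guarantees it for every <code>f</code> and every list ..<br />

```haskell
-- the naturality square for reverse: map f . reverse == reverse . map f
main :: IO ()
main = do
  let xs = [1 .. 10] :: [Int]
      f  = (* 3)
  print $ (map f . reverse) xs == (reverse . map f) xs   -- True
```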
<br />
Not only this, we can get more insight from the application of natural transformation. Note the <i>forall</I> annotation above. It literally means that η (or <code>reverse</CODE>) works for <i>any</I> type <code>a</CODE>. That is, the function is not allowed to peek inside the arguments and its behavior does not depend on the <i>exact</I> type of <code>a</CODE>. We can freely substitute one type for another and yet have the same behavior of the function. Hence a natural transformation is all about polymorphic functions.<br />
<br />
<h1>Summary</H1>I agree that Category Theory is not a must read to do programming, particularly if you are not using a typed functional language. But often I find a category diagram or a categorical analysis reveals a lot of insight about an abstraction, particularly with respect to how it composes with other similar abstractions in the world.<br />http://debasishg.blogspot.com/2012/08/category-theory-and-programming.htmlnoreply@blogger.com (Anonymous)1tag:blogger.com,1999:blog-22587889.post-5226755050089626353Mon, 30 Jul 2012 04:55:00 +00002012-08-10T09:24:42.341+05:30category_theoryfunctionalhaskellscalaDoes category theory make you a better programmer ?How much of category theory knowledge should a working programmer have ? I guess this depends on what kind of language the programmer uses in his daily life. Given the proliferation of functional languages today, specifically typed functional languages (Haskell, Scala etc.) that embeds the typed lambda calculus in some form or the other, the question looks relevant to me. And apparently to <a HREF="http://www.cs.toronto.edu/~sme/presentations/cat101.pdf">a</A> <a HREF="http://reperiendi.wordpress.com/2007/11/03/category-theory-for-the-java-programmer/">few</A> <a HREF="http://lambda-the-ultimate.org/node/2604">others</A> as well. In one of his courses on <a HREF="http://www.cs.nott.ac.uk/~gmh/cat.html">Category Theory</A>, Graham Hutton mentioned the following points when talking about the usefulness of the theory : <br />
<br />
<ul><li>Building bridges—exploring relationships between various mathematical objects, e.g., Products and Functions</LI>
<li>Unifying ideas - abstracting from unnecessary details to give general definitions and results, e.g., Functors</LI>
<li>High level language - focusing on how things behave rather than what their implementation details are e.g. specification vs implementation</LI>
<li>Type safety - using types to ensure that things are combined only in sensible ways e.g. (f: A -> B, g: B -> C) => (g o f: A -> C)</LI>
<li>Equational proofs—performing proofs in a purely equational style of reasoning</LI> </UL>Many of the above points can be related to the experience that we encounter while programming in a functional language today. We use <i>Product </I>and <i>Sum </I>types, we use Functors to abstract our computation, we marry types together to encode domain logic within the structures that we build and many of us use <a HREF="http://www.amazon.com/Pearls-Functional-Algorithm-Design-Richard/dp/0521513383">equational reasoning</A> to optimize algorithms and data structures.<br />
<br />
But how much do we need to care about how category theory models these structures and how that model maps to the ones that we use in our programming model ?<br />
<br />
Let's start with the classical definition of a Category. [<a HREF="http://www.amazon.com/Category-Computer-Scientists-Foundations-Computing/dp/0262660717/ref=la_B001IR3BUA_1_3?ie=UTF8&qid=1343622949&sr=1-3">Pierce</A>] defines a Category as comprising of:<br />
<br />
<ol><li>a collection of <b>objects</B></LI>
<li>a collection of <b>arrows </B>(often called <b>morphisms</B>)</LI>
<li>operations assigning to each arrow <i>f</I> an object <i>dom f</I>, its <b>domain</B>, and an object <i>cod f</I>, its <b>codomain </B>(<i>f: A → B</I>, where <i>dom f = A</I> and <i>cod f = B</I></LI>
<li>a composition operator assigning to each pair of arrows <i>f</I> and <i>g</I> with <i>cod f = dom g</I>, a composite arrow <i>g o f: dom f → cod g</I>, satisfying the following associative law: for any arrows <i>f: A → B</I>, <i>g: B → C</I>, and <i>h: C → D</I>, <i>h o (g o f) = (h o g) o f</I></LI>
<li>for each object <i>A</I>, an <b>identity</B> arrow <i>id<sub>A</SUB>: A → A</I> satisfying the following identity law:</LI> for any arrow <i>f: A → B</I>, <i>id<sub>B</SUB> o f = f </I>and <i>f o id<sub>A</SUB> = f</I> </OL><h1>Translating to Scala</h1><br />
Ok, let's see how this definition maps to your daily programming chores. If we consider Haskell, there's a category of Haskell types called Hask, whose objects are the Haskell types. For this post, I will use Scala, and for all practical purposes assume that we use Scala's pure functional capabilities. In our model we consider the Scala types as forming the objects of our category. <br />
<br />
You define any function in Scala from <i>type A</I> to <i>type B</I> (<code>A => B</CODE>) and you have an example of a morphism. For every function we have a domain and a co-domain. In our example, <code>val foo: A => B = //..</CODE> we have the <i>type A</I> as the domain and the <i>type B</I> as the co-domain.<br />
<br />
Of course we can define composition of arrows or functions in Scala, as can be demonstrated with the following REPL session ..<br />
<br />
<pre class="brush: scala">scala> val f: Int => String = _.toString
f: Int => String = <function1>
scala> val g: String => Int = _.length
g: String => Int = <function1>
scala> f compose g
res23: String => String = <function1>
</pre><br />
and it's very easy to verify that the composition satisfies the associative law.<br />
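To make that concrete, here's a quick pointwise check of associativity as a plain Scala script, using some arbitrarily chosen functions (the names are illustrative):<br />
<br />
<pre class="brush: scala">// three composable arrows
val f: Int => String = _.toString
val g: String => Int = _.length
val h: Int => Boolean = _ % 2 == 0

// the two ways of grouping the composition
val p: Int => Boolean = h compose (g compose f)
val q: Int => Boolean = (h compose g) compose f

// both groupings agree on every sample input
assert((0 to 100).forall(n => p(n) == q(n)))
</pre><br />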
<br />
And now the <b>identity </B>law, which is, of course, a specialized version of composition. Let's define some functions and play around with the identity in the REPL ..<br />
<br />
<pre class="brush: scala">scala> val foo: Int => String = _.toString
foo: Int => String = <function1>
scala> val idInt: Int => Int = identity(_: Int)
idInt: Int => Int = <function1>
scala> val idString: String => String = identity(_: String)
idString: String => String = <function1>
scala> idString compose foo
res24: Int => String = <function1>
scala> foo compose idInt
res25: Int => String = <function1>
</pre><br />
Ok .. so we have the identity law of the Category verified above.<br />
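Strictly speaking, the REPL output above only shows that the types line up; the compiler cannot check equality of functions. But we can confirm the identity law pointwise on sample values, as in this small sketch:<br />
<br />
<pre class="brush: scala">val foo: Int => String = _.toString
val idInt: Int => Int = identity
val idString: String => String = identity

// id_B o f = f and f o id_A = f, checked on sample inputs
assert((1 to 50).forall(n => (idString compose foo)(n) == foo(n)))
assert((1 to 50).forall(n => (foo compose idInt)(n) == foo(n)))
</pre><br />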
<br />
<h1>Category theory & programming languages</h1><br />
Now that we understand the most basic correspondence between category theory and programming language theory, it's time to dig a bit deeper into some of the implicit correspondences. We will definitely come back to the more explicit ones very soon when we talk about products, co-products, functors and natural transformations.<br />
<br />
Do you really think that understanding category theory helps you understand programming language theory better? It all depends on how much of the *theory* you really care about. If you are doing enterprise software development and/or don't care to learn a language outside your comfort zone, then you may well come back with a resounding *no* as the answer. Category theory is a subject that provides a uniform model of set theory, algebra, logic and computation. And many of the concepts of category theory map quite nicely to structures in programming (particularly in a language that offers a decent type system and preferably has some underpinnings of the typed lambda calculus).<br />
<br />
Categorical reasoning helps you reason about your programs, if they are written using a typed functional language like Haskell or Scala. Some of the basic structures that you encounter in your everyday programming (like <i>Product </I>types or <i>Sum </I>types) have their correspondences in category theory. Analyzing them from a category theory point of view often illustrates various properties that we tend to overlook (or take for granted) while programming. And this is not coincidental. It has been shown that there's indeed a strong <a HREF="http://scienceblogs.com/goodmath/2006/08/10/from-lambda-calculus-to-cartes/">correspondence</A> between typed lambda calculus and cartesian closed categories. And Haskell is essentially an encoding of the typed lambda calculus.<br />
<br />
Here's an example of how we can explain the properties of a data type in terms of its categorical model. Consider the category of Products of elements and for simplicity let's take the example of cartesian products from the category of Sets. A cartesian product of 2 sets <i>A</I> and <i>B</I> is defined by:<br />
<br />
<i>A X B = {(a, b) | a ∈ A and b ∈ B}</I><br />
<br />
So we have the tuples as the objects in the category. What could be the relevant morphisms? In case of products, the applicable arrows (or morphisms) are the <b>projection functions</B> <i>π<sub>1</SUB>: A X B → A</I> and <i>π<sub>2</SUB>: A X B → B</I>. Now if we draw a category diagram where <i>A X B</I> is the product type, then for any object <i>C</I> with 2 functions <i>f: C → A</I> and <i>g: C → B</I>, the product morphism is represented by <i><f, g>: C → A X B</I> and is defined as <i><f, g>(x) = (f(x), g(x))</I>. Here's the diagram corresponding to the above category ..<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTkuDaRGIuHMZzDkYe6q1H4c0rvplMmTcpKYHLKq0V3pPASLEsGlK-nCTw7uJxyQcCTx43HuJI5CxnceyYwK5mUXJ2r3cJ1X_02nnPZKZZ75sOZEEg5w_ov9Gxj6l-0D0z_hP8gw/s1600/product.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="300" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTkuDaRGIuHMZzDkYe6q1H4c0rvplMmTcpKYHLKq0V3pPASLEsGlK-nCTw7uJxyQcCTx43HuJI5CxnceyYwK5mUXJ2r3cJ1X_02nnPZKZZ75sOZEEg5w_ov9Gxj6l-0D0z_hP8gw/s400/product.jpg" /></a></div><br />
and according to the category theory definition of a Product, the above diagram commutes. Note, by commuting we mean that for every pair of vertices <i>X</I> and <i>Y</I>, all paths in the diagram from <i>X</I> to <i>Y</I> are equal in the sense that each path forms an arrow and these arrows are equal in the category. So here commutativity of the diagram gives <br />
<i>π<sub>1</SUB> o <f, g> = f</I> and <br />
<i>π<sub>2</SUB> o <f, g> = g</I>.<br />
<br />
Let's now define each of the functions above in Scala and see how the results of commutativity of the above diagram map to the programming domain. As a programmer we use the projection functions (<code>_1</CODE> and <code>_2</CODE> in Scala's <code>Tuple2</CODE> or <code>fst</CODE> and <code>snd</CODE> on a Haskell pair) on a regular basis. The above category diagram, as we will see, gives some additional insights into the abstraction and helps understand some of the mathematical properties of how a cartesian product of Sets translates to the composition of functions in the programming model.<br />
<br />
<pre class="brush: scala">scala> val ip = (10, "debasish")
ip: (Int, java.lang.String) = (10,debasish)
scala> val pi1: ((Int, String)) => Int = (p => p._1)
pi1: ((Int, String)) => Int = <function1>
scala> val pi2: ((Int, String)) => String = (p => p._2)
pi2: ((Int, String)) => String = <function1>
scala> val f: Int => Int = (_ * 2)
f: Int => Int = <function1>
scala> val g: Int => String = _.toString
g: Int => String = <function1>
scala> val `<f, g>`: Int => (Int, String) = (x => (f(x), g(x)))
<f, g>: Int => (Int, String) = <function1>
scala> pi1 compose `<f, g>`
res26: Int => Int = <function1>
scala> pi2 compose `<f, g>`
res27: Int => String = <function1>
</pre><br />
So, as we claim from the commutativity of the diagram, we see that <code>pi1 compose `<f, g>`</CODE> is typewise equal to <code>f</CODE> and <code>pi2 compose `<f, g>`</CODE> is typewise equal to <code>g</CODE>. Now the definition of a Product in Category Theory says that the morphism between <i>C</I> and <i>A X B</I> is unique and that <i>A X B</I> is defined up to isomorphism. The uniqueness is indicated by the symbol <b>!</B> in the diagram. I am going to skip the proof, since it's quite trivial and follows from the definition of what a Product of 2 objects means. This makes sense intuitively in the programming model as well: we can have one unique type consisting of the Pair of A and B.<br />
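Once again the REPL only establishes that the types match. We can also verify pointwise that the diagram actually commutes, i.e. that composing each projection with <code>`<f, g>`</CODE> recovers the original function:<br />
<br />
<pre class="brush: scala">val f: Int => Int = _ * 2
val g: Int => String = _.toString
val `<f, g>`: Int => (Int, String) = x => (f(x), g(x))

val pi1: ((Int, String)) => Int = _._1
val pi2: ((Int, String)) => String = _._2

// commutativity: pi1 o <f, g> = f and pi2 o <f, g> = g
assert((1 to 50).forall(n => (pi1 compose `<f, g>`)(n) == f(n)))
assert((1 to 50).forall(n => (pi2 compose `<f, g>`)(n) == g(n)))
</pre><br />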
<br />
Now for some differences in semantics between the categorical model and the programming model. If you consider an eager (or eager-by-default) language like Scala, the Product type fails miserably in the presence of the <a HREF="http://james-iry.blogspot.in/2009/08/getting-to-bottom-of-nothing-at-all.html">Bottom</A> data type (_|_) represented by Nothing. In Haskell, a non-strict language, it also fails: a Product type needs to satisfy the equation <code>(fst(p), snd(p)) == p</CODE>, which breaks when we take Bottom (_|_) for <code>p</CODE>. So, the programming model remains true only when we eliminate the Bottom type from the equation. Have a look at this <a HREF="http://james-iry.blogspot.in/2011/05/why-eager-languages-dont-have-products.html#comment-201436254">comment from Dan Doel</A> in James Iry's blog post on sum and product types.<br />
<br />
This is an instance where a programmer can benefit from knowledge of category theory. It's actually a bidirectional win: knowledge of category theory deepens your understanding of data types in real life programming, and programming experience makes the theory more concrete. <br />
<br />
<h1>Interface driven modeling</h1><br />
One other aspect where category theory maps very closely to the programming model is its focus on the arrows rather than the objects. This corresponds to the notion of an <i>interface</I> in programming. Category theory typically <i>"abstracts away from elements, treating objects as black boxes with unexamined internal structure and focusing attention on the properties of arrows between objects"</I> [<a HREF="http://www.amazon.com/Category-Computer-Scientists-Foundations-Computing/dp/0262660717/ref=la_B001IR3BUA_1_3?ie=UTF8&qid=1343622949&sr=1-3">Pierce</A>]. In programming too, we encourage interface driven modeling, where the implementation is abstracted away from the client. When we talk about objects up to isomorphism, we focus solely on the arrows rather than what the objects are made of. Learning programming and category theory in an iterative manner serves to enrich your knowledge of both. If you know what a Functor means in category theory, then when you are designing something that looks like a Functor, you can immediately make it generic enough so that it composes seamlessly with all the other functors out there in the world.<br />
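As a sketch of what that genericity buys us (the trait and names here are illustrative, hand-rolled rather than taken from any particular library), a Functor in Scala is just an interface describing how arrows are mapped, and code written against it works uniformly for every instance:<br />
<br />
<pre class="brush: scala">trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

val listFunctor = new Functor[List] {
  def map[A, B](fa: List[A])(f: A => B): List[B] = fa map f
}

val optionFunctor = new Functor[Option] {
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa map f
}

// generic code that knows nothing about the container's internals
def double[F[_]](fa: F[Int])(functor: Functor[F]): F[Int] =
  functor.map(fa)(_ * 2)

assert(double(List(1, 2, 3))(listFunctor) == List(2, 4, 6))
assert(double(Option(21))(optionFunctor) == Some(42))
</pre><br />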
<br />
<h1>Thinking generically</h1><br />
Category theory talks about objects and morphisms and how arrows compose. A special kind of morphism is the <b>Identity</B> morphism, which maps to the identity function in programming. Think of the role that 0 plays for addition, or 1 for multiplication. Category theory generalizes this concept by using the same vocabulary (morphism) to denote both arrows that <i>do</I> some work and those that <i>don't</I>. And it sets this up nicely by saying that for every object <i>X</I>, there exists a morphism <i>id<sub>X</SUB> : X → X</I> called the identity morphism on <i>X</I>, such that for every morphism <i>f: A → B</I> we have <i>id<sub>B</SUB> o f = f = f o id<sub>A</SUB></I>. This concept of a generic zero has been a great lesson, at least for me, when I identify structures like monoids in my programming today.<br />
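Here's how that "generic zero" shows up when we identify a monoid (a minimal illustrative sketch; the trait is hand-rolled, not from a library). The identity element plays exactly the role of the identity morphism: combining with it changes nothing:<br />
<br />
<pre class="brush: scala">// an associative combine operation together with an identity (zero)
trait Monoid[A] {
  def zero: A
  def combine(x: A, y: A): A
}

val intAddition = new Monoid[Int] {
  val zero = 0
  def combine(x: Int, y: Int) = x + y
}

val intMultiplication = new Monoid[Int] {
  val zero = 1
  def combine(x: Int, y: Int) = x * y
}

// the identity laws mirror id_B o f = f = f o id_A
assert(intAddition.combine(intAddition.zero, 42) == 42)
assert(intMultiplication.combine(42, intMultiplication.zero) == 42)
</pre><br />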
<br />
<h1>Duality</h1><br />
In the programming model, many dualities are not explicit. Category theory has an explicit way of teaching you the dualities in the form of category diagrams. Consider the example of the Sum type (also known as Coproduct) and the Product type. We have an abundance of these in languages like Scala and Haskell, but programmers, particularly people coming from the imperative programming world, are often not aware of this duality. But have a look at the category diagram of the sum type <i>A + B</I> for objects <i>A</I> and <i>B</I> ..<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuch-6_tJUkAhtZn8tN3EyaD6GB7rxo-3p1oXJ_1JYwnSsYZBNEdlczThVuKi1o2PsajP-LDIC5oHobxgpb5sDpfLx47VuX1x8sb1N2bo45LEm7XS1BYUrG3880UcyamQZJdjIng/s1600/sum.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="300" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuch-6_tJUkAhtZn8tN3EyaD6GB7rxo-3p1oXJ_1JYwnSsYZBNEdlczThVuKi1o2PsajP-LDIC5oHobxgpb5sDpfLx47VuX1x8sb1N2bo45LEm7XS1BYUrG3880UcyamQZJdjIng/s400/sum.jpg" /></a></div><br />
It's the same diagram as the Product only with the arrows reversed. Indeed a Sum type <i>A + B</I> is the categorical dual of Product type <i>A X B</I>. In Scala we model it as the union type like <code>Either</CODE> where the value of the sum type comes either from the left or the right. Studying the category diagram and deriving the properties that come out of its commutativity helps understand a lot of theory behind the design of the data type.<br />
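The duality also shows up directly in code. The dual of the pairing <i><f, g>: C → A X B</I> is the copairing <i>[f, g]: A + B → C</I>, which for Scala's <code>Either</CODE> is just <code>fold</CODE>, with <code>Left</CODE> and <code>Right</CODE> playing the role of the injection arrows (a small sketch):<br />
<br />
<pre class="brush: scala">val f: Int => String = n => "int: " + n
val g: Boolean => String = b => "bool: " + b

// [f, g]: Either[Int, Boolean] => String, the categorical copair
val copair: Either[Int, Boolean] => String = _.fold(f, g)

// commutativity of the dual diagram: [f, g] o Left = f, [f, g] o Right = g
assert(copair(Left(10)) == f(10))
assert(copair(Right(true)) == g(true))
</pre><br />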
<br />
In the next part of this discussion I will explore some other structures like Functors and Natural Transformations and how they map to important concepts in programming which we use on a daily basis. So far, my feeling has been that if you use a typed functional language, a basic knowledge of category theory helps a lot in designing generic abstractions and making them compose with related ones out there in the world.http://debasishg.blogspot.com/2012/07/does-category-theory-make-you-better.htmlnoreply@blogger.com (Anonymous)23