<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">

  <title><![CDATA[Mastering FP and OO with Scala]]></title>
  <link href="http://blog.jaceklaskowski.pl/atom.xml" rel="self"/>
  <link href="http://blog.jaceklaskowski.pl/"/>
  <updated>2015-09-29T16:40:29-04:00</updated>
  <id>http://blog.jaceklaskowski.pl/</id>
  <author>
    <name><![CDATA[Jacek Laskowski]]></name>
    
  </author>
  <generator uri="http://octopress.org/">Octopress</generator>

  
  <entry>
    <title type="html"><![CDATA[Why Docker - Writing Docs Using Jekyll]]></title>
    <link href="http://blog.jaceklaskowski.pl/2015/08/17/why-docker-writing-docs-using-jekyll.html"/>
    <updated>2015-08-17T05:06:45-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2015/08/17/why-docker-writing-docs-using-jekyll</id>
<content type="html"><![CDATA[<p><strong>Spoiler</strong> I&rsquo;m so much into <a href="https://www.docker.com/">Docker</a> that I could sing songs about how much it has made my life easier. And soon you will be, too. Beware!</p>

<p>Just today I got a request to review changes to introduce <a href="http://jekyllrb.com/">Jekyll</a> as the documentation framework. I had proposed it myself earlier, so I knew what the outcome of the review would be - APPROVED.</p>

<p><em>But&hellip;</em></p>

<p>I also knew that the guy who proposed the change had been fighting with the installation of Ruby gems and Jekyll to get a complete working environment for the documentation system on his own laptop. He was on Linux while I&rsquo;m on Mac OS. He finally got it sorted out, but the final solution didn&rsquo;t satisfy me - he had installed the additional dependencies directly onto his local machine and suggested the very same steps in the README so others could follow them. I simply couldn&rsquo;t approve it. Sorry.</p>

<p>The alternative was to use Docker and the <a href="https://github.com/jekyll/docker-jekyll">docker-jekyll</a> image. It runs Jekyll in a self-contained container with no system-wide impact on the local machine. And it&rsquo;s officially supported and maintained by the Jekyll project itself.</p>

<!-- more -->


<h2>Using Jekyll inside Docker</h2>

<p>The way to use Jekyll without installing Ruby, gems and other dependencies locally is to use the <a href="https://github.com/jekyll/docker-jekyll">docker-jekyll</a> image. It&rsquo;s very safe because it&rsquo;s a self-contained image and I&rsquo;m not &ldquo;infecting&rdquo; my local working machine in any way (except for installing Docker and pulling the image, but that&rsquo;s acceptable).</p>

<p>Install Docker on your platform and do the following.</p>

<p>Go to the <code>docs</code> directory where your documentation lives and execute:</p>

<pre><code>docker run --rm -v $(pwd):/srv/jekyll -t -p 4000:4000 jekyll/stable jekyll build
</code></pre>

<p>It will generate the docs from the sources and save the output into the current working directory (under <code>_site</code>).</p>

<p>Serve the docs using <code>jekyll s</code>, which is the default command when spinning up a new docker-jekyll container.</p>

<pre><code>docker run --rm -v $(pwd):/srv/jekyll -t --name=jekyll -p 4000:4000 jekyll/stable
</code></pre>

<p>Mind the name for the container - <strong>jekyll</strong> - so it&rsquo;s easier to work with it later on.</p>

<pre><code>➜  docs git:(39fd9c9) ✗ docker run --rm --volume=$(pwd):/srv/jekyll -t --name=jekyll -p 4000:4000 jekyll/stable
Configuration file: /srv/jekyll/_config.yml
            Source: /srv/jekyll
       Destination: /srv/jekyll/_site
      Generating...
                    done.
 Auto-regeneration: enabled for '/srv/jekyll'
Configuration file: /srv/jekyll/_config.yml
    Server address: http://0.0.0.0:4000/
  Server running... press ctrl-c to stop.
</code></pre>

<p>Open <a href="http://0.0.0.0:4000/">http://0.0.0.0:4000/</a> and have fun!</p>

<p>Stop the container using <code>docker stop jekyll</code>.</p>

<pre><code>➜  docs git:(39fd9c9) ✗ docker ps
CONTAINER ID        IMAGE               COMMAND              CREATED             STATUS              PORTS                    NAMES
b6bb07d8b8aa        jekyll/stable       "/usr/bin/startup"   26 seconds ago      Up 26 seconds       0.0.0.0:4000-&gt;4000/tcp   jekyll
➜  docs git:(39fd9c9) ✗ docker stop jekyll
jekyll
➜  docs git:(39fd9c9) ✗ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
➜  docs git:(39fd9c9) ✗
</code></pre>

<p>Happy Dockering!</p>

<h2>Caveats on Mac OS (and possibly on Windows)</h2>

<p>On these platforms you are using <a href="https://docs.docker.com/machine/">Docker Machine</a> to work with Docker, and so the IP address for Jekyll&rsquo;s generated website is given by <code>docker-machine ip dev</code>, where <code>dev</code> is the name of the Docker machine instance.</p>
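<p>Putting the two together can be sketched as a tiny helper that builds the site URL from the IP that Docker Machine reports. Note that <code>site_url</code> is a hypothetical helper of mine, and <code>dev</code> is only the machine name used in this post - check yours with <code>docker-machine ls</code>:</p>

```shell
#!/bin/sh
# Sketch: compute the Jekyll site URL when Docker runs inside a Docker
# Machine VM (Mac OS, and possibly Windows). 'site_url' is a hypothetical
# helper, not part of docker-jekyll.
site_url() {
  ip="$1"                      # the IP reported by `docker-machine ip <name>`
  echo "http://${ip}:4000/"
}

# With Docker Machine available you would open the site like this:
#   open "$(site_url "$(docker-machine ip dev)")"
site_url 192.168.99.100
```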

<p>Also, you may face issues with docs not being regenerated after your changes, even though <code>jekyll serve</code> alone is supposed to handle it. It seems it does not with Docker on Mac OS. The workaround is to use <code>jekyll serve --force_polling</code>. The complete command line to run Jekyll inside a Docker container with your changes being picked up is as follows:</p>

<pre><code>docker run --rm -v $(pwd):/srv/jekyll -t --name=jekyll -p 4000:4000 jekyll/stable jekyll serve --force_polling
</code></pre>

<p>See <a href="https://github.com/jekyll/jekyll/issues/2926">https://github.com/jekyll/jekyll/issues/2926</a>.</p>

<p>Let me know what you think about the topic of the blog post in the <a href="#disqus_thread">Comments</a> section below or contact me at <a href="&#x6d;&#x61;&#x69;&#108;&#116;&#111;&#58;&#x6a;&#97;&#99;&#101;&#107;&#x40;&#x6a;&#x61;&#112;&#105;&#108;&#x61;&#46;&#112;&#x6c;&#46;">&#106;&#x61;&#99;&#x65;&#107;&#x40;&#106;&#97;&#112;&#x69;&#x6c;&#x61;&#46;&#x70;&#x6c;&#x2e;</a> Follow the author as <a href="https://twitter.com/jaceklaskowski">@jaceklaskowski</a> on Twitter, too.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Docker Your Scala Web Application (Play Framework)]]></title>
    <link href="http://blog.jaceklaskowski.pl/2015/07/24/docker-your-scala-web-application-play-framework.html"/>
    <updated>2015-07-24T17:03:07-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2015/07/24/docker-your-scala-web-application-play-framework</id>
    <content type="html"><![CDATA[<p>We&rsquo;re experimenting with <a href="https://www.docker.com/">Docker</a> in the <a href="http://deepsense.io/">DeepSense.io</a> project. There might be a case or two in the other Scala company in Warsaw - <a href="http://www.hcore.com/">HCore</a>. I&rsquo;ve also been noticing interest in using Docker in Scala projects in <a href="http://www.javeo.eu/">Javeo</a> where the <a href="http://www.meetup.com/WarszawScaLa/">Warsaw Scala Enthusiasts</a> meetups are taking place. The Docker space seems very hot for Scala developers in Warsaw, Poland. And these companies <em>are</em> hiring Scala developers!</p>

<p>I didn&rsquo;t know deploying Scala web applications could be so easy until the very recent Warsaw Scala Enthusiasts meetup, when <a href="http://www.meetup.com/WarszawScaLa/members/95521122/">Rafal Krzewski</a> introduced me to one of the two <a href="http://www.scala-sbt.org/">sbt</a> plugins for Docker - <a href="http://www.scala-sbt.org/sbt-native-packager/">sbt-native-packager</a> (the other is <a href="https://github.com/marcuslonnberg/sbt-docker">sbt-docker</a>, which some say is even better).</p>

<p>The blog post shows how easy it is to use Docker as a means of deploying Scala web applications written with Play Framework (which actually uses sbt-native-packager under the covers).</p>

<!-- more -->


<h2>Creating Play Framework web application</h2>

<p>Create a new web application using the Typesafe Activator tool and its <code>activator new</code> command:</p>

<pre><code>➜  docker-playground  activator new play-dockerized play-scala

Fetching the latest list of templates...

OK, application "play-dockerized" is being created using the "play-scala" template.

To run "play-dockerized" from the command line, "cd play-dockerized" then:
/Users/jacek/dev/sandbox/docker-playground/play-dockerized/activator run

To run the test for "play-dockerized" from the command line, "cd play-dockerized" then:
/Users/jacek/dev/sandbox/docker-playground/play-dockerized/activator test

To run the Activator UI for "play-dockerized" from the command line, "cd play-dockerized" then:
/Users/jacek/dev/sandbox/docker-playground/play-dockerized/activator ui
</code></pre>

<p><code>cd</code> to the <code>play-dockerized</code> directory and execute <code>sbt run</code> to start the application:</p>

<pre><code>➜  play-dockerized  sbt run
[info] Loading global plugins from /Users/jacek/.sbt/0.13/plugins
[info] Updating {file:/Users/jacek/.sbt/0.13/plugins/}global-plugins...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Loading project definition from /Users/jacek/dev/sandbox/docker-playground/play-dockerized/project
[info] Updating {file:/Users/jacek/dev/sandbox/docker-playground/play-dockerized/project/}play-dockerized-build...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Set current project to play-dockerized (in build file:/Users/jacek/dev/sandbox/docker-playground/play-dockerized/)
[info] Updating {file:/Users/jacek/dev/sandbox/docker-playground/play-dockerized/}root...
[info] Resolving jline#jline;2.12.1 ...
[info] Done updating.

--- (Running the application, auto-reloading is enabled) ---

[info] p.a.l.c.ActorSystemProvider - Starting application default Akka system: application
[info] p.c.s.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9000

(Server started, use Ctrl+D to stop and go back to the console...)
</code></pre>

<p>You should now be able to access <a href="http://localhost:9000">http://localhost:9000</a>. It&rsquo;s a vanilla <strong>Play Framework 2.4.2</strong> web application.</p>

<p><img class="left" src="http://blog.jaceklaskowski.pl/images/docker-play-youre-using-play-2-4-2.png" title="Play Framework's default welcome page" ></p>

<h2>Publishing Docker image - <code>docker:publishLocal</code></h2>

<p>Play comes with the <strong>sbt-native-packager</strong> plugin, so stop the previous command (using Ctrl+D) and execute <code>sbt docker:publishLocal</code>:</p>

<pre><code>➜  play-dockerized  sbt docker:publishLocal
...
[info] Digest: sha256:66638b21de4b16af589f54cbd3e2698919efd529583b732a593613f35e813f0b
[info] Status: Downloaded newer image for java:latest
[info]  ---&gt; 49ebfec495e1
[info] Step 1 : WORKDIR /opt/docker
[info]  ---&gt; Running in ac01dbacaf66
[info]  ---&gt; 271ea5c0bd1e
[info] Removing intermediate container ac01dbacaf66
[info] Step 2 : ADD opt /opt
[info]  ---&gt; 9c423c2d2f0c
[info] Removing intermediate container 3087077a2680
[info] Step 3 : RUN chown -R daemon:daemon .
[info]  ---&gt; Running in bd40460a5e7d
[info]  ---&gt; aeec9392fc83
[info] Removing intermediate container bd40460a5e7d
[info] Step 4 : USER daemon
[info]  ---&gt; Running in 461907ca0474
[info]  ---&gt; 4f0ad20b6a7f
[info] Removing intermediate container 461907ca0474
[info] Step 5 : ENTRYPOINT bin/play-dockerized
[info]  ---&gt; Running in 09aa91f09bc5
[info]  ---&gt; 7f2afe7c4918
[info] Removing intermediate container 09aa91f09bc5
[info] Step 6 : CMD
[info]  ---&gt; Running in 99c12a3604a3
[info]  ---&gt; 617942a5bc6f
[info] Removing intermediate container 99c12a3604a3
[info] Successfully built 617942a5bc6f
[info] Built image play-dockerized:1.0-SNAPSHOT
[success] Total time: 101 s, completed Jul 23, 2015 8:31:18 AM
</code></pre>

<p>That was the exact moment when I realized how clever the sbt-native-packager plugin is to leverage the well-known <code>publishLocal</code> task for publishing to the local Docker repository (the task is merely scoped to <code>docker</code> to change the way it works).</p>

<p>You&rsquo;ve just created a brand new Docker image, <code>play-dockerized:1.0-SNAPSHOT</code>. Use the <code>docker images</code> command to check it out:</p>

<pre><code>➜  play-dockerized  docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
play-dockerized     1.0-SNAPSHOT        617942a5bc6f        2 minutes ago       892.7 MB
</code></pre>

<h2>Docker my time!</h2>

<p>You can start a container off the <code>play-dockerized</code> image using the <code>docker run</code> command:</p>

<pre><code>➜  play-dockerized  docker run --name play-8080 -p 8080:9000 play-dockerized:1.0-SNAPSHOT
[info] - play.api.libs.concurrent.ActorSystemProvider - Starting application default Akka system: application
[info] - play.api.Play - Application started (Prod)
[info] - play.core.server.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
</code></pre>

<p>The command line options for <code>docker run</code> are <code>-p</code> to publish the container&rsquo;s port <code>9000</code> outside Docker&rsquo;s virtual network (it&rsquo;s available locally as <code>8080</code>) and <code>--name</code> to give the container a friendly name (instead of relying on a cryptic hash).</p>
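<p>The direction of the <code>-p</code> mapping is easy to get backwards, so here is a tiny sketch that makes it explicit - host port first, container port second (<code>publish_flag</code> is a hypothetical helper of mine, not a Docker command):</p>

```shell
#!/bin/sh
# Sketch: publish_flag builds the -p option for docker run; the first
# argument is the port on the (Docker) host, the second the port inside
# the container.
publish_flag() {
  echo "-p $1:$2"
}

# Play listens on 9000 inside the container; we publish it as 8080 locally:
publish_flag 8080 9000
```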

<p><img class="left" src="http://blog.jaceklaskowski.pl/images/docker-play-new-app-ready.png" title="Play Framework inside Docker" ></p>

<p>In another terminal execute <code>docker ps</code> to see the container running:</p>

<pre><code>➜  play-dockerized  docker ps -a
CONTAINER ID        IMAGE                          COMMAND                CREATED             STATUS              PORTS                    NAMES
511ca96e64a4        play-dockerized:1.0-SNAPSHOT   "bin/play-dockerized   10 minutes ago      Up 5 seconds        0.0.0.0:8080-&gt;9000/tcp   play-8080
</code></pre>

<p>Stop the container with <code>docker stop play-8080</code>. The Play Framework web app is no longer accessible. To start it again, execute <code>docker start play-8080</code>.</p>

<h2>Summary</h2>

<p>It&rsquo;s so easy to have a Docker image of a Play Framework/Scala web application that I can hardly believe I lived without it for so long. Once an application becomes a Docker image, you can use the other Docker commands to play with it - mainly to deploy the image to any environment and get a consistent, exact replica of the setup. I love it so much now. And you can deploy the image to <a href="https://hub.docker.com/">Docker Hub</a> (similarly to how you publish the sources of your excellent applications to GitHub).</p>

<p>Let me know what you think about the topic of the blog post in the <a href="#disqus_thread">Comments</a> section below or contact me at <a href="&#x6d;&#97;&#x69;&#x6c;&#x74;&#111;&#x3a;&#x6a;&#97;&#99;&#101;&#107;&#x40;&#x6a;&#x61;&#x70;&#x69;&#108;&#x61;&#46;&#112;&#108;&#46;">&#x6a;&#97;&#99;&#x65;&#x6b;&#x40;&#x6a;&#97;&#x70;&#x69;&#108;&#97;&#46;&#112;&#x6c;&#x2e;</a> Follow the author as <a href="https://twitter.com/jaceklaskowski">@jaceklaskowski</a> on Twitter, too.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Real-time Data Processing Using Apache Kafka and Spark Streaming (and Scala and Sbt)]]></title>
    <link href="http://blog.jaceklaskowski.pl/2015/07/20/real-time-data-processing-using-apache-kafka-and-spark-streaming.html"/>
    <updated>2015-07-20T15:02:39-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2015/07/20/real-time-data-processing-using-apache-kafka-and-spark-streaming</id>
    <content type="html"><![CDATA[<p>It&rsquo;s been a while since I worked with <a href="http://spark.apache.org/streaming/">Spark Streaming</a>. It was back then when I was working for a pet project that ultimately ended up as a Typesafe Activator template <a href="http://www.typesafe.com/activator/template/spark-streaming-scala-akka">Spark Streaming with Scala and Akka</a> to get people going with the technologies.</p>

<p>Time flies by very quickly and, as <a href="http://blog.jaceklaskowski.pl/2015/07/14/apache-kafka-on-docker.html">the other blog posts</a> <a href="http://blog.jaceklaskowski.pl/2015/07/19/publishing-events-using-custom-producer-for-apache-kafka.html">may have shown</a>, I&rsquo;m evaluating <a href="http://kafka.apache.org/">Apache Kafka</a> as a potential messaging and integration platform for my future projects. A lot is happening in the so-called <em>big data space</em> and Apache Kafka fits the bill so well in many dataflows around me. I&rsquo;m very glad it&rsquo;s mostly all <a href="http://www.scala-lang.org/">Scala</a>, which we all <em>love</em> and are spending our time with. Ain&rsquo;t we?</p>

<p>From <a href="http://spark.apache.org/docs/latest/streaming-programming-guide.html">Spark Streaming documentation</a> (Kafka bolded on purpose):</p>

<blockquote><p>Spark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams. Data can be ingested from many sources like <strong>Kafka</strong>, Flume, Twitter, ZeroMQ, Kinesis, or TCP sockets, and can be processed using complex algorithms expressed with high-level functions like map, reduce, join and window.</p></blockquote>

<p>Since Apache Kafka aims at being <strong>the central hub for real-time streams of data</strong> (see <a href="http://kafka.apache.org/documentation.html#uses">1.2 Use Cases</a> and <a href="http://www.confluent.io/blog/stream-data-platform-1/">Putting Apache Kafka To Use: A Practical Guide to Building a Stream Data Platform (Part 1)</a>) I couldn&rsquo;t deny myself the simple pleasure of giving it a go.</p>

<p>Buckle up and ingest <em>some</em> data using Apache Kafka and Spark Streaming! You surely <em>will</em> love the infrastructure (if you haven&rsquo;t already). Be sure to type fast to see the potential of the platform at your fingertips.</p>

<!-- more -->


<h2>The project (using sbt)</h2>

<p>Here is the sbt build file <code>build.sbt</code> for the project:</p>

<pre><code>val sparkVersion = "1.4.1"
scalaVersion := "2.11.7"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming" % sparkVersion,
  "org.apache.spark" %% "spark-streaming-kafka" % sparkVersion
)
</code></pre>

<p>It uses the latest released versions of <strong>Spark Streaming 1.4.1</strong> and <strong>Scala 2.11.7</strong>.</p>

<h2>Setting up Kafka broker</h2>

<p>This setup assumes you&rsquo;ve already installed Apache Kafka. You may want to use Docker (see <a href="http://blog.jaceklaskowski.pl/2015/07/14/apache-kafka-on-docker.html">Apache Kafka on Docker</a>) or <a href="http://blog.jaceklaskowski.pl/2015/07/19/publishing-events-using-custom-producer-for-apache-kafka.html">build Kafka from the sources</a>. Whichever approach you choose, start Zookeeper and Kafka.</p>

<h3>Starting Zookeeper</h3>

<p>I&rsquo;m using the version of Kafka built from the sources and <code>./bin/zookeeper-server-start.sh</code> that comes with it.</p>

<pre><code>➜  kafka_2.11-0.8.3-SNAPSHOT git:(trunk) ✗ ./bin/zookeeper-server-start.sh config/zookeeper.properties
...
[2015-07-21 06:51:39,614] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
</code></pre>

<h3>Starting Kafka broker</h3>

<p>Once Zookeeper is up and running (in the above case, listening on port 2181), run a Kafka broker.</p>

<pre><code>➜  kafka_2.11-0.8.3-SNAPSHOT git:(trunk) ✗ ./bin/kafka-server-start.sh config/server.properties
...
[2015-07-21 06:53:17,051] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -&gt; EndPoint(192.168.1.9,9092,PLAINTEXT) (kafka.utils.ZkUtils$)
[2015-07-21 06:53:17,058] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
</code></pre>

<p>There are merely two commands to boot the entire environment up and that&rsquo;s it.</p>

<h3>Creating topic - <code>spark-topic</code></h3>

<p><strong>A topic</strong> is where you&rsquo;re going to send messages to and where Spark Streaming is consuming them from later on. It&rsquo;s the communication channel between producers and consumers and you&rsquo;ve got to have one.</p>

<p>Create the <code>spark-topic</code> topic. The name is arbitrary - pick whatever makes you happy - but use it consistently in all the other places where it&rsquo;s used.</p>

<pre><code>➜  kafka_2.11-0.8.3-SNAPSHOT git:(trunk) ✗ ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic spark-topic
Created topic "spark-topic".
</code></pre>
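<p>Since the producer, the consumer and the Spark Streaming code all have to agree on the topic name, you may want to keep it in a single shell variable. A sketch (the <code>create_topic_cmd</code> helper is my own, not part of Kafka; it only builds the command so you can see its shape):</p>

```shell
#!/bin/sh
# Sketch: build the kafka-topics.sh invocation from a single TOPIC variable
# so every command in the demo uses the same topic name.
TOPIC=spark-topic

create_topic_cmd() {
  echo "./bin/kafka-topics.sh --create --zookeeper localhost:2181 \
--replication-factor 1 --partitions 1 --topic $TOPIC"
}

# Run it from the Kafka directory, e.g.:  eval "$(create_topic_cmd)"
create_topic_cmd
```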

<p>You may want to check out the available topics.</p>

<pre><code>➜  kafka_2.11-0.8.3-SNAPSHOT git:(trunk) ✗ ./bin/kafka-topics.sh --list --zookeeper localhost:2181
spark-topic
</code></pre>

<p>You&rsquo;re now done with setting up Kafka for the demo.</p>

<h3>(optional) Sending to and receiving messages from Kafka</h3>

<p>Apache Kafka comes with two shell scripts to send messages to and receive them from topics. They&rsquo;re <code>kafka-console-producer.sh</code> and <code>kafka-console-consumer.sh</code>, respectively. They both use the console (stdin/stdout) for input and output.</p>

<p>Let&rsquo;s publish a few messages to the <code>spark-topic</code> topic using <code>./bin/kafka-console-producer.sh</code>.</p>

<pre><code>➜  kafka_2.11-0.8.3-SNAPSHOT git:(trunk) ✗ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic spark-topic
hello
hi
^D
</code></pre>

<p>You can keep the producer running in one terminal and use another to consume the messages, or just send a couple of messages and close the session. Kafka persists messages for a period of time.</p>

<p>Consuming messages is as simple as running <code>./bin/kafka-console-consumer.sh</code>. Mind the option <code>--zookeeper</code>, which points to the Zookeeper instance where Kafka stores its configuration, and <code>--from-beginning</code>, which tells the consumer to read all persisted messages (not just those sent after it started).</p>

<pre><code>➜  kafka_2.11-0.8.3-SNAPSHOT git:(trunk) ✗ ./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic spark-topic --from-beginning
hello
hi
^DConsumed 2 messages
</code></pre>

<h2>Spark time!</h2>

<p>Remember <code>build.sbt</code> above that sets up the Scala/sbt project with the appropriate Scala version and Spark Streaming dependencies?</p>

<pre><code>val sparkVersion = "1.4.1"
scalaVersion := "2.11.7"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming" % sparkVersion,
  "org.apache.spark" %% "spark-streaming-kafka" % sparkVersion
)
</code></pre>

<p>To learn a little about the integration between Spark Streaming and Apache Kafka, you&rsquo;re going to use <code>sbt console</code> and type all the integration code line by line in the interactive console. You could write a simple Scala application instead, but I&rsquo;m leaving that to you as an exercise.</p>

<p>You may want to read the scaladoc of <a href="https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkConf">org.apache.spark.SparkConf</a> and <a href="https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.streaming.StreamingContext">org.apache.spark.streaming.StreamingContext</a> to learn about their purpose in the sample.</p>

<pre><code>scala&gt; import org.apache.spark.SparkConf
import org.apache.spark.SparkConf

scala&gt; val conf = new SparkConf().setMaster("local[*]").setAppName("KafkaReceiver")
conf: org.apache.spark.SparkConf = org.apache.spark.SparkConf@2f8bf5fc

scala&gt; import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.StreamingContext

scala&gt; import org.apache.spark.streaming.Seconds
import org.apache.spark.streaming.Seconds

scala&gt; val ssc = new StreamingContext(conf, Seconds(10))
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/07/21 09:08:39 INFO SparkContext: Running Spark version 1.4.1
...
ssc: org.apache.spark.streaming.StreamingContext = org.apache.spark.streaming.StreamingContext@2ce5cc3

scala&gt; import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.kafka.KafkaUtils

// Note the name of the topic in use - spark-topic
scala&gt; val kafkaStream = KafkaUtils.createStream(ssc, "localhost:2181","spark-streaming-consumer-group", Map("spark-topic" -&gt; 5))
kafkaStream: org.apache.spark.streaming.dstream.ReceiverInputDStream[(String, String)] = org.apache.spark.streaming.kafka.KafkaInputDStream@4ab601ac

// The very complex BIG data analytics
scala&gt; kafkaStream.print()

// Start the streaming context so Spark Streaming polls for messages
scala&gt; ssc.start
15/07/21 09:11:31 INFO ReceiverTracker: ReceiverTracker started
15/07/21 09:11:31 INFO ForEachDStream: metadataCleanupDelay = -1
...
15/07/21 09:11:31 INFO StreamingContext: StreamingContext started
</code></pre>

<p>Spark Streaming is now connected to Apache Kafka and consumes messages every 10 seconds. Leave it running and switch to another terminal.</p>

<p>Open a terminal to run a Kafka producer. You may want to use <code>kafkacat</code> (highly recommended) or the producer that comes with Apache Kafka - <code>kafka-console-producer.sh</code>.</p>
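<p>With <code>kafkacat</code> (assumed to be installed separately - it doesn&rsquo;t ship with Kafka) the producer is a one-liner: <code>-P</code> produces, <code>-b</code> names the broker and <code>-t</code> the topic. A sketch that only builds the command so you can see its shape:</p>

```shell
#!/bin/sh
# Sketch: the kafkacat equivalent of kafka-console-producer.sh.
# 'kafkacat_cmd' is a hypothetical helper of mine.
kafkacat_cmd() {
  echo "kafkacat -P -b localhost:9092 -t spark-topic"
}

# Pipe a message into it, e.g.:
#   echo "hey spark, how are you doing today?" | kafkacat -P -b localhost:9092 -t spark-topic
kafkacat_cmd
```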

<pre><code>➜  kafka_2.11-0.8.3-SNAPSHOT git:(trunk) ✗ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic spark-topic
hey spark, how are you doing today?
</code></pre>

<p>Switch to the terminal with Spark Streaming running and see the message printed out.</p>

<pre><code>15/07/21 09:12:00 INFO DAGScheduler: ResultStage 1 (print at &lt;console&gt;:18) finished in 0.016 s
15/07/21 09:12:00 INFO DAGScheduler: Job 1 finished: print at &lt;console&gt;:18, took 0.030530 s
-------------------------------------------
Time: 1437462720000 ms
-------------------------------------------
(null,hey spark, how are you doing today?)

15/07/21 09:12:00 INFO JobScheduler: Finished job streaming job 1437462720000 ms.1 from job set of time 1437462720000 ms
</code></pre>

<p>Congratulations! You&rsquo;ve earned the Spark Streaming and Apache Kafka integration badge! Close Spark Streaming&rsquo;s context using</p>

<pre><code>scala&gt; ssc.stop
</code></pre>

<p>or simply press <code>Ctrl+C</code>. Shut down Apache Kafka and Zookeeper, too. Done.</p>

<h2>(bonus) Building Apache Spark from the sources</h2>

<p>You could use the very latest version of Spark Streaming, where the latest and greatest development is going on - it lives on <a href="https://github.com/apache/spark/commits/master">the master branch</a>.</p>

<p>Following the official documentation <a href="http://spark.apache.org/docs/latest/building-spark.html">Building Spark</a><sup id="fnref:1"><a href="#fn:1" rel="footnote">1</a></sup>, execute the following two commands. Please note the versions in the build as it uses <strong>Scala 2.11.7</strong> and <strong>Hadoop 2.7.1</strong>.</p>

<pre><code>➜  spark git:(master) ./dev/change-scala-version.sh 2.11
➜  spark git:(master) ✗ ./build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.7.1 -Dscala-2.11 -DskipTests clean install
...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Spark Project Parent POM ........................... SUCCESS [  6.187 s]
[INFO] Spark Project Launcher ............................. SUCCESS [ 10.096 s]
[INFO] Spark Project Networking ........................... SUCCESS [  8.650 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  6.085 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [ 10.308 s]
[INFO] Spark Project Core ................................. SUCCESS [02:08 min]
[INFO] Spark Project Bagel ................................ SUCCESS [  6.750 s]
[INFO] Spark Project GraphX ............................... SUCCESS [ 15.942 s]
[INFO] Spark Project Streaming ............................ SUCCESS [ 32.429 s]
[INFO] Spark Project Catalyst ............................. SUCCESS [ 55.389 s]
[INFO] Spark Project SQL .................................. SUCCESS [ 56.297 s]
[INFO] Spark Project ML Library ........................... SUCCESS [01:05 min]
[INFO] Spark Project Tools ................................ SUCCESS [  4.702 s]
[INFO] Spark Project Hive ................................. SUCCESS [ 47.624 s]
[INFO] Spark Project REPL ................................. SUCCESS [  5.686 s]
[INFO] Spark Project YARN Shuffle Service ................. SUCCESS [  7.479 s]
[INFO] Spark Project YARN ................................. SUCCESS [ 11.903 s]
[INFO] Spark Project Assembly ............................. SUCCESS [ 59.155 s]
[INFO] Spark Project External Twitter ..................... SUCCESS [  7.177 s]
[INFO] Spark Project External Flume Sink .................. SUCCESS [  6.205 s]
[INFO] Spark Project External Flume ....................... SUCCESS [  9.151 s]
[INFO] Spark Project External Flume Assembly .............. SUCCESS [  1.896 s]
[INFO] Spark Project External MQTT ........................ SUCCESS [ 15.044 s]
[INFO] Spark Project External MQTT Assembly ............... SUCCESS [  3.593 s]
[INFO] Spark Project External ZeroMQ ...................... SUCCESS [  6.658 s]
[INFO] Spark Project External Kafka ....................... SUCCESS [ 11.207 s]
[INFO] Spark Project Examples ............................. SUCCESS [01:16 min]
[INFO] Spark Project External Kafka Assembly .............. SUCCESS [  5.115 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11:21 min
[INFO] Finished at: 2015-08-22T15:08:02+02:00
[INFO] Final Memory: 431M/1960M
[INFO] ------------------------------------------------------------------------
</code></pre>

<p>The jars for the version are at your command in the Maven local repository. Switch the version of Spark Streaming to <strong>1.5.0-SNAPSHOT</strong> and start over.</p>
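<p>Switching the project over can be sketched as regenerating <code>build.sbt</code> with the snapshot version. Note the <code>Resolver.mavenLocal</code> line is my assumption - it may be needed so sbt can find the jars that <code>mvn install</code> put into the local Maven repository:</p>

```shell
#!/bin/sh
# Sketch: rewrite build.sbt so the project uses the locally-built
# 1.5.0-SNAPSHOT artifacts instead of the released 1.4.1.
cat > build.sbt <<'EOF'
val sparkVersion = "1.5.0-SNAPSHOT"
scalaVersion := "2.11.7"

// assumption: lets sbt resolve the snapshot jars installed by `mvn install`
resolvers += Resolver.mavenLocal

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming" % sparkVersion,
  "org.apache.spark" %% "spark-streaming-kafka" % sparkVersion
)
EOF
```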

<h2>Summary</h2>

<p>As it turns out, setting up a working configuration of Apache Kafka and Spark Streaming is just a few steps away. There are a couple of places that need improvement, but what the article has shown could be a good starting point for other real-time big data analytics using <strong>Apache Kafka</strong> as <strong>the central hub for real-time streams of data</strong> that are then processed <strong>using complex algorithms</strong> in <strong>Spark Streaming</strong>.</p>

<p>Once the data&rsquo;s processed, Spark Streaming could publish the results to yet another Kafka topic and/or store them in Hadoop for later. It seems I&rsquo;ve got a very powerful setup and I&rsquo;m not yet fully aware of where I should apply it.</p>

<p>Let me know what you think about the topic<sup id="fnref:2"><a href="#fn:2" rel="footnote">2</a></sup> of the blog post in the <a href="#disqus_thread">Comments</a> section below or contact me at <a href="&#x6d;&#x61;&#x69;&#x6c;&#116;&#x6f;&#58;&#x6a;&#97;&#99;&#x65;&#x6b;&#x40;&#106;&#97;&#x70;&#105;&#x6c;&#x61;&#x2e;&#x70;&#x6c;&#x2e;">&#x6a;&#97;&#x63;&#x65;&#x6b;&#64;&#106;&#97;&#x70;&#x69;&#x6c;&#97;&#46;&#112;&#108;&#x2e;</a> Follow the author as <a href="https://twitter.com/jaceklaskowski">@jaceklaskowski</a> on Twitter, too.</p>
<div class="footnotes">
<hr/>
<ol>
<li id="fn:1">
<p><em>I really really wish the Apache Spark project hadn&rsquo;t migrated the build to Apache Maven from the top-notch interactive build tool - <a href="http://www.scala-sbt.org/">sbt</a></em><a href="#fnref:1" rev="footnote">&#8617;</a></p></li>
<li id="fn:2">
<p>pun intended<a href="#fnref:2" rev="footnote">&#8617;</a></p></li>
</ol>
</div>

]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Publishing Events Using Custom Producer for Apache Kafka]]></title>
    <link href="http://blog.jaceklaskowski.pl/2015/07/19/publishing-events-using-custom-producer-for-apache-kafka.html"/>
    <updated>2015-07-19T17:04:21-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2015/07/19/publishing-events-using-custom-producer-for-apache-kafka</id>
    <content type="html"><![CDATA[<p>I&rsquo;m a <a href="http://www.scala-lang.org/">Scala</a> proponent so when I found out that <a href="https://cwiki.apache.org/confluence/display/KAFKA/Client+Rewrite">the Apache Kafka team has decided to switch to using Java as the main language of the new client API</a> it was beyond my imagination. <a href="http://akka.io/docs/">Akka</a>&rsquo;s fine with their Java/Scala APIs and so I can&rsquo;t believe <a href="http://kafka.apache.org/">Apache Kafka</a> couldn&rsquo;t offer similar APIs, too. It&rsquo;s even more weird when one finds out that Apache Kafka itself is written in Scala. Why on earth did they decide to do the migration?!</p>

<p>In order to learn Kafka better, I developed a custom producer in Scala using the latest Kafka Producer API. I built Kafka from the sources, so I&rsquo;m using version <strong>0.9.0.0-SNAPSHOT</strong>. It was a pretty surprising experience, esp. when I ran across <a href="http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Future.html">java.util.concurrent.Future</a>, which seems so limited compared to what <a href="http://docs.scala-lang.org/overviews/core/futures.html">scala.concurrent.Future</a> offers. No <code>map</code>, <code>flatMap</code> or the like? So far I consider the switch to Java for the client API a big mistake.</p>
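<p>The missing combinators can be recovered with a small adapter. Below is a minimal sketch (the <code>asScala</code> helper and the <code>Demo</code> object are my own names, not part of any Kafka or Scala API) that wraps a blocking <code>java.util.concurrent.Future</code> &mdash; like the one returned by <code>producer.send(record)</code> &mdash; in a <code>scala.concurrent.Future</code>, so <code>map</code> and <code>flatMap</code> become available, at the cost of parking a thread on the blocking <code>get()</code>:</p>

```scala
import java.util.concurrent.{Callable, Executors, Future => JFuture}
import scala.concurrent.{blocking, Await, ExecutionContext, Future}
import scala.concurrent.duration._

object JFutureOps {
  // Wrap a blocking java.util.concurrent.Future in a scala.concurrent.Future
  // by running the blocking get() on the given execution context.
  def asScala[A](jf: JFuture[A])(implicit ec: ExecutionContext): Future[A] =
    Future(blocking(jf.get()))
}

object Demo extends App {
  implicit val ec: ExecutionContext = ExecutionContext.global
  val pool = Executors.newSingleThreadExecutor()

  // Stand-in for the java.util.concurrent.Future returned by producer.send(record)
  val jf: JFuture[Int] = pool.submit(new Callable[Int] { def call(): Int = 42 })

  // map/flatMap now compose as with any scala.concurrent.Future
  val doubled: Future[Int] = JFutureOps.asScala(jf).map(_ * 2)
  println(Await.result(doubled, 5.seconds))  // prints 84

  pool.shutdown()
}
```

<p>Note this still burns a thread per in-flight send; the producer&rsquo;s callback-taking variant of <code>send</code> could be bridged to a <code>Promise</code> instead, which would avoid the blocking altogether.</p>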

<p>Here comes the complete Kafka producer I&rsquo;ve developed in Scala. It&rsquo;s supposed to serve as a basis for my future development endeavours using the API that will ship in the upcoming 0.9.0.0 release.</p>

<!-- more -->


<h2>Custom KafkaProducer in Scala</h2>

<figure class='code'><figcaption><span></span></figcaption><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
<span class='line-number'>4</span>
<span class='line-number'>5</span>
<span class='line-number'>6</span>
<span class='line-number'>7</span>
<span class='line-number'>8</span>
<span class='line-number'>9</span>
<span class='line-number'>10</span>
<span class='line-number'>11</span>
<span class='line-number'>12</span>
<span class='line-number'>13</span>
<span class='line-number'>14</span>
<span class='line-number'>15</span>
<span class='line-number'>16</span>
<span class='line-number'>17</span>
<span class='line-number'>18</span>
<span class='line-number'>19</span>
<span class='line-number'>20</span>
<span class='line-number'>21</span>
<span class='line-number'>22</span>
<span class='line-number'>23</span>
<span class='line-number'>24</span>
<span class='line-number'>25</span>
<span class='line-number'>26</span>
<span class='line-number'>27</span>
<span class='line-number'>28</span>
<span class='line-number'>29</span>
<span class='line-number'>30</span>
<span class='line-number'>31</span>
<span class='line-number'>32</span>
<span class='line-number'>33</span>
<span class='line-number'>34</span>
<span class='line-number'>35</span>
<span class='line-number'>36</span>
</pre></td><td class='code'><pre><code class='scala'><span class='line'><span class="k">import</span> <span class="nn">java.util.concurrent.Future</span>
</span><span class='line'>
</span><span class='line'><span class="k">import</span> <span class="nn">org.apache.kafka.clients.producer.RecordMetadata</span>
</span><span class='line'>
</span><span class='line'><span class="k">object</span> <span class="nc">KafkaProducer</span> <span class="k">extends</span> <span class="nc">App</span> <span class="o">{</span>
</span><span class='line'>
</span><span class='line'>  <span class="k">val</span> <span class="n">topic</span> <span class="k">=</span> <span class="n">util</span><span class="o">.</span><span class="nc">Try</span><span class="o">(</span><span class="n">args</span><span class="o">(</span><span class="mi">0</span><span class="o">)).</span><span class="n">getOrElse</span><span class="o">(</span><span class="s">&quot;my-topic-test&quot;</span><span class="o">)</span>
</span><span class='line'>  <span class="n">println</span><span class="o">(</span><span class="n">s</span><span class="s">&quot;Connecting to $topic&quot;</span><span class="o">)</span>
</span><span class='line'>
</span><span class='line'>  <span class="k">import</span> <span class="nn">org.apache.kafka.clients.producer.KafkaProducer</span>
</span><span class='line'>
</span><span class='line'>  <span class="k">val</span> <span class="n">props</span> <span class="k">=</span> <span class="k">new</span> <span class="n">java</span><span class="o">.</span><span class="n">util</span><span class="o">.</span><span class="nc">Properties</span><span class="o">()</span>
</span><span class='line'>  <span class="n">props</span><span class="o">.</span><span class="n">put</span><span class="o">(</span><span class="s">&quot;bootstrap.servers&quot;</span><span class="o">,</span> <span class="s">&quot;localhost:9092&quot;</span><span class="o">)</span>
</span><span class='line'>  <span class="n">props</span><span class="o">.</span><span class="n">put</span><span class="o">(</span><span class="s">&quot;client.id&quot;</span><span class="o">,</span> <span class="s">&quot;KafkaProducer&quot;</span><span class="o">)</span>
</span><span class='line'>  <span class="n">props</span><span class="o">.</span><span class="n">put</span><span class="o">(</span><span class="s">&quot;key.serializer&quot;</span><span class="o">,</span> <span class="s">&quot;org.apache.kafka.common.serialization.IntegerSerializer&quot;</span><span class="o">)</span>
</span><span class='line'>  <span class="n">props</span><span class="o">.</span><span class="n">put</span><span class="o">(</span><span class="s">&quot;value.serializer&quot;</span><span class="o">,</span> <span class="s">&quot;org.apache.kafka.common.serialization.StringSerializer&quot;</span><span class="o">)</span>
</span><span class='line'>
</span><span class='line'>  <span class="k">val</span> <span class="n">producer</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">KafkaProducer</span><span class="o">[</span><span class="kt">Integer</span>, <span class="kt">String</span><span class="o">](</span><span class="n">props</span><span class="o">)</span>
</span><span class='line'>
</span><span class='line'>  <span class="k">import</span> <span class="nn">org.apache.kafka.clients.producer.ProducerRecord</span>
</span><span class='line'>
</span><span class='line'>  <span class="k">val</span> <span class="n">polish</span> <span class="k">=</span> <span class="n">java</span><span class="o">.</span><span class="n">time</span><span class="o">.</span><span class="n">format</span><span class="o">.</span><span class="nc">DateTimeFormatter</span><span class="o">.</span><span class="n">ofPattern</span><span class="o">(</span><span class="s">&quot;dd.MM.yyyy H:mm:ss&quot;</span><span class="o">)</span>
</span><span class='line'>  <span class="k">val</span> <span class="n">now</span> <span class="k">=</span> <span class="n">java</span><span class="o">.</span><span class="n">time</span><span class="o">.</span><span class="nc">LocalDateTime</span><span class="o">.</span><span class="n">now</span><span class="o">().</span><span class="n">format</span><span class="o">(</span><span class="n">polish</span><span class="o">)</span>
</span><span class='line'>  <span class="k">val</span> <span class="n">record</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">ProducerRecord</span><span class="o">[</span><span class="kt">Integer</span>, <span class="kt">String</span><span class="o">](</span><span class="n">topic</span><span class="o">,</span> <span class="mi">1</span><span class="o">,</span> <span class="n">s</span><span class="s">&quot;hello at $now&quot;</span><span class="o">)</span>
</span><span class='line'>  <span class="k">val</span> <span class="n">metaF</span><span class="k">:</span> <span class="kt">Future</span><span class="o">[</span><span class="kt">RecordMetadata</span><span class="o">]</span> <span class="k">=</span> <span class="n">producer</span><span class="o">.</span><span class="n">send</span><span class="o">(</span><span class="n">record</span><span class="o">)</span>
</span><span class='line'>  <span class="k">val</span> <span class="n">meta</span> <span class="k">=</span> <span class="n">metaF</span><span class="o">.</span><span class="n">get</span><span class="o">()</span> <span class="c1">// blocking!</span>
</span><span class='line'>  <span class="k">val</span> <span class="n">msgLog</span> <span class="k">=</span>
</span><span class='line'>    <span class="n">s</span><span class="s">&quot;&quot;&quot;</span>
</span><span class='line'><span class="s">       |offset    = ${meta.offset()}</span>
</span><span class='line'><span class="s">       |partition = ${meta.partition()}</span>
</span><span class='line'><span class="s">       |topic     = ${meta.topic()}</span>
</span><span class='line'><span class="s">     &quot;&quot;&quot;</span><span class="o">.</span><span class="n">stripMargin</span>
</span><span class='line'>  <span class="n">println</span><span class="o">(</span><span class="n">msgLog</span><span class="o">)</span>
</span><span class='line'>
</span><span class='line'>  <span class="n">producer</span><span class="o">.</span><span class="n">close</span><span class="o">()</span>
</span><span class='line'><span class="o">}</span>
</span></code></pre></td></tr></table></div></figure>


<h2>Building Kafka from the sources</h2>

<p>In order to run the client, you should first build Kafka from the sources and publish the jars to the local Maven repository, because the producer uses the very latest Kafka Producer API.</p>

<p>Building Kafka from the sources is as simple as executing <code>gradle -PscalaVersion=2.11.7 clean releaseTarGz install</code> in the directory where you cloned <a href="https://github.com/apache/kafka.git">the Kafka repo from GitHub</a> (<code>git clone https://github.com/apache/kafka.git</code>). The <code>install</code> task publishes the jars to the local Maven repository.</p>

<pre><code>➜  kafka git:(trunk) gradle -PscalaVersion=2.11.7 clean releaseTarGz install
Building project 'core' with Scala version 2.11.7
...
BUILD SUCCESSFUL

Total time: 1 mins 23.233 secs
</code></pre>

<p>I was building the distro against <strong>Scala 2.11.7</strong>.</p>

<p>Once done, <code>core/build/distributions/kafka_2.11-0.9.0.0-SNAPSHOT.tgz</code> is where you find the release package.</p>

<pre><code>➜  kafka git:(trunk) ls -l core/build/distributions/kafka_2.11-0.9.0.0-SNAPSHOT.tgz
-rw-r--r--  1 jacek  staff  17813003 29 wrz 08:32 core/build/distributions/kafka_2.11-0.9.0.0-SNAPSHOT.tgz
</code></pre>

<p>Unpack it and <code>cd</code> to it.</p>

<pre><code>➜  kafka git:(trunk) tar -zxf core/build/distributions/kafka_2.11-0.9.0.0-SNAPSHOT.tgz
➜  kafka git:(trunk) ✗ cd kafka_2.11-0.9.0.0-SNAPSHOT
➜  kafka_2.11-0.9.0.0-SNAPSHOT git:(trunk) ✗ ls -l
total 32
-rw-r--r--   1 jacek  staff  11358  9 lis  2014 LICENSE
-rw-r--r--   1 jacek  staff    162  9 lis  2014 NOTICE
drwxr-xr-x  26 jacek  staff    884 29 wrz 08:32 bin
drwxr-xr-x  16 jacek  staff    544 29 wrz 08:32 config
drwxr-xr-x  21 jacek  staff    714 29 wrz 08:32 libs
</code></pre>

<h2>Zookeeper up and running</h2>

<p>Running Zookeeper is the very first step (Kafka uses it to coordinate brokers and store cluster metadata). Use <code>./bin/zookeeper-server-start.sh config/zookeeper.properties</code>:</p>

<pre><code>➜  kafka_2.11-0.9.0.0-SNAPSHOT git:(trunk) ✗ ./bin/zookeeper-server-start.sh config/zookeeper.properties
[2015-09-29 12:26:41,011] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2015-09-29 12:26:41,014] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2015-09-29 12:26:41,014] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2015-09-29 12:26:41,014] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2015-09-29 12:26:41,014] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2015-09-29 12:26:41,036] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2015-09-29 12:26:41,036] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2015-09-29 12:26:41,301] INFO Server environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2015-09-29 12:26:41,301] INFO Server environment:host.name=172.20.36.184 (org.apache.zookeeper.server.ZooKeeperServer)
[2015-09-29 12:26:41,301] INFO Server environment:java.version=1.8.0_60 (org.apache.zookeeper.server.ZooKeeperServer)
...
[2015-09-29 12:26:41,333] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
</code></pre>

<h2>Kafka broker up and running</h2>

<p>In another terminal, start a Kafka broker using <code>./bin/kafka-server-start.sh config/server.properties</code> command:</p>

<pre><code>  ➜  kafka_2.11-0.9.0.0-SNAPSHOT git:(trunk) ✗ ./bin/kafka-server-start.sh config/server.properties
...
[2015-07-20 00:18:33,671] INFO starting (kafka.server.KafkaServer)
[2015-07-20 00:18:33,673] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2015-07-20 00:18:33,684] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2015-07-20 00:18:33,693] INFO Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.ZooKeeper)
[2015-07-20 00:18:33,694] INFO Client environment:host.name=192.168.1.9 (org.apache.zookeeper.ZooKeeper)
[2015-07-20 00:18:33,694] INFO Client environment:java.version=1.8.0_45 (org.apache.zookeeper.ZooKeeper)
[2015-07-20 00:18:33,694] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
...
[2015-09-29 13:18:49,919] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -&gt; EndPoint(192.168.99.1,9092,PLAINTEXT) (kafka.utils.ZkUtils$)
[2015-09-29 13:18:49,933] INFO Kafka version : 0.9.0.0-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser)
[2015-09-29 13:18:49,933] INFO Kafka commitId : 4e7db39556ba916c (org.apache.kafka.common.utils.AppInfoParser)
[2015-09-29 13:18:49,934] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
[2015-09-29 13:18:49,935] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
</code></pre>

<h2>Creating topic</h2>

<p>You&rsquo;re now going to create the <code>my-topic-test</code> topic that the custom producer is going to publish messages to. The name of the topic is arbitrary, but it should match what the custom producer uses.</p>

<pre><code>➜  kafka_2.11-0.9.0.0-SNAPSHOT git:(trunk) ✗ ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic my-topic-test
Created topic "my-topic-test".
</code></pre>

<p>Check out the topics available using <code>./bin/kafka-topics.sh --list --zookeeper localhost:2181</code>. You should see one.</p>

<pre><code>➜  kafka_2.11-0.9.0.0-SNAPSHOT git:(trunk) ✗ ./bin/kafka-topics.sh --list --zookeeper localhost:2181
my-topic-test
</code></pre>

<h2>kafka-publisher - Scala project</h2>

<p>Create a Scala project. The project is managed by sbt with the following <code>build.sbt</code>:</p>

<pre><code>val kafkaVersion = "0.9.0.0-SNAPSHOT"
scalaVersion := "2.11.7"

libraryDependencies += "org.apache.kafka" % "kafka-clients" % kafkaVersion
resolvers += Resolver.mavenLocal
</code></pre>

<p>Use the following <code>project/build.properties</code>:</p>

<pre><code>sbt.version=0.13.9
</code></pre>

<h2>Sending messages using KafkaProducer - <code>sbt run</code></h2>

<p>With the setup complete, you should now be able to execute <code>sbt run</code> to start the custom Scala producer for Kafka.</p>

<pre><code>➜  kafka-publisher  sbt run
[info] Loading global plugins from /Users/jacek/.sbt/0.13/plugins
[info] Loading project definition from /Users/jacek/dev/sandbox/kafka-publisher/project
[info] Set current project to kafka-publisher (in build file:/Users/jacek/dev/sandbox/kafka-publisher/)
[info] Compiling 1 Scala source to /Users/jacek/dev/sandbox/kafka-publisher/target/scala-2.11/classes...
[info] Running KafkaProducer
Connecting to my-topic-test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

offset    = 0
partition = 0
topic     = my-topic-test

[success] Total time: 4 s, completed Sep 29, 2015 4:21:14 PM
</code></pre>

<p>Executing <code>sbt run</code> again should show a different offset for the same partition and topic:</p>

<pre><code>➜  kafka-publisher  sbt run
[info] Loading global plugins from /Users/jacek/.sbt/0.13/plugins
[info] Loading project definition from /Users/jacek/dev/sandbox/kafka-publisher/project
[info] Set current project to kafka-publisher (in build file:/Users/jacek/dev/sandbox/kafka-publisher/)
[info] Running KafkaProducer
Connecting to my-topic-test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

offset    = 1
partition = 0
topic     = my-topic-test

[success] Total time: 1 s, completed Sep 29, 2015 4:21:47 PM
</code></pre>

<h2>Using kafkacat as a Kafka message consumer</h2>

<p>If you&rsquo;d really like to see the messages on the receiving side, I strongly recommend <a href="https://github.com/edenhill/kafkacat">kafkacat</a>, which, once installed, boils down to the following command:</p>

<pre><code>➜  ~  kafkacat -C -b localhost:9092 -t my-topic-test
hello at 20.07.2015 0:29:43
hello at 20.07.2015 0:30:46
</code></pre>

<p>It will read all the messages already published to the <code>my-topic-test</code> topic and print new ones as they arrive.</p>

<p>That&rsquo;s it. Congratulations!</p>

<h2>Summary</h2>

<p>The complete project is <a href="https://github.com/jaceklaskowski/kafka-producer">on GitHub in kafka-producer repo</a>.</p>

<p>You may also want to read <a href="http://kafka.apache.org/documentation.html#quickstart">1.3 Quick Start</a> in the official documentation of Apache Kafka.</p>

<p>Let me know what you think about the topic<sup id="fnref:1"><a href="#fn:1" rel="footnote">1</a></sup> of the blog post in the <a href="#disqus_thread">Comments</a> section below or contact me at <a href="&#109;&#x61;&#x69;&#x6c;&#116;&#x6f;&#58;&#x6a;&#97;&#x63;&#x65;&#x6b;&#x40;&#106;&#x61;&#x70;&#105;&#x6c;&#x61;&#46;&#112;&#108;&#46;">&#106;&#97;&#x63;&#101;&#107;&#x40;&#x6a;&#x61;&#x70;&#x69;&#x6c;&#97;&#x2e;&#112;&#x6c;&#46;</a> Follow the author as <a href="https://twitter.com/jaceklaskowski">@jaceklaskowski</a> on Twitter, too.</p>
<div class="footnotes">
<hr/>
<ol>
<li id="fn:1">
<p>pun intended<a href="#fnref:1" rev="footnote">&#8617;</a></p></li>
</ol>
</div>

]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Apache Kafka on Docker]]></title>
    <link href="http://blog.jaceklaskowski.pl/2015/07/14/apache-kafka-on-docker.html"/>
    <updated>2015-07-14T14:59:33-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2015/07/14/apache-kafka-on-docker</id>
    <content type="html"><![CDATA[<p><a href="http://kafka.apache.org/"><img class="left" src="http://blog.jaceklaskowski.pl/images/kafka_logo.png" title="Apache Kafka" ></a></p>

<p><a href="http://kafka.apache.org/">Apache Kafka</a> has always been high on my list of things to explore, but since there are quite a few things high on my list, Kafka couldn&rsquo;t actually make it to the very top. Until just recently, when I was asked to give <strong>the broker</strong> a try and see whether or not it meets a project&rsquo;s needs. Two projects, to be honest. You should have seen my face when I heard that.</p>

<p><a href="https://github.com/apache/kafka#apache-kafka">I compiled Apache Kafka from the sources</a>, connected it to <a href="https://spark.apache.org/streaming/">Spark Streaming</a> and even attempted to answer a few questions on StackOverflow (<a href="http://stackoverflow.com/q/31391782/1305344">How to use Kafka in Flink using Scala?</a> and <a href="http://stackoverflow.com/q/31344222/1305344">How to monitor Kafka broker using jmxtrans?</a>), not to mention reading tons of articles and watching videos about the tool. I developed pretty strong confidence about which use cases are the sweet spot for Apache Kafka.</p>

<p>With the team at <a href="http://www.codilime.com/">Codilime</a> I&rsquo;m developing the <a href="http://deepsense.io/">DeepSense.io</a> platform, where we have just used <a href="http://www.ansible.com/home">Ansible</a> to automate deployment. We&rsquo;ve also been evaluating <a href="https://www.docker.com/">Docker</a> and/or <a href="https://www.vagrantup.com/">Vagrant</a>. All to ease the deployment of DeepSense.io.</p>

<p>That&rsquo;s the moment when these two needs converged - exploring Apache Kafka and Docker (among other tools) for three separate projects! Amazing, isn&rsquo;t it? I could finally explore how Docker might ease exploration of products and deployment. I knew Docker could ease my developer life, but it&rsquo;s only now that I&rsquo;ve really seen it. I would now <em>dockerize</em> everything. When I was told about the images <a href="https://registry.hub.docker.com/u/wurstmeister/kafka/">wurstmeister/kafka</a> and <a href="https://registry.hub.docker.com/u/wurstmeister/zookeeper/">wurstmeister/zookeeper</a> I couldn&rsquo;t have been happier. Running Apache Kafka using Docker finally became a no-brainer and such a pleasant experience.</p>

<p>I then thought I&rsquo;d share the love so it&rsquo;s not only mine and others could benefit from it, too.</p>

<!-- more -->


<p>Since I&rsquo;m on <strong>Mac OS X</strong> the steps to run Apache Kafka using Docker rely on <a href="http://boot2docker.io/">boot2docker</a> - <em>a Lightweight Linux for Docker</em> for platforms that don&rsquo;t natively support Docker - aforementioned Mac OS X and Windows.</p>

<p>You&rsquo;re going to use the images <a href="https://registry.hub.docker.com/u/wurstmeister/kafka/">wurstmeister/kafka</a> and <a href="https://registry.hub.docker.com/u/wurstmeister/zookeeper/">wurstmeister/zookeeper</a>.</p>

<p>You can run containers off the images in the background or foreground. Depending on your Unix skills, it means one or two terminals. Let&rsquo;s use a separate terminal for each server - Apache Kafka and Apache Zookeeper. I&rsquo;m going to explain the role of Apache Zookeeper in another blog post.</p>

<p>Here come the steps to run Apache Kafka using Docker. It&rsquo;s assumed you&rsquo;ve got <code>boot2docker</code> and <code>docker</code> tools installed.</p>

<pre><code>➜  ~  boot2docker version
Boot2Docker-cli version: v1.7.1
Git commit: 8fdc6f5

➜  ~  docker --version
Docker version 1.7.1, build 786b29d
</code></pre>

<p>I&rsquo;m a big fan of <a href="http://brew.sh/">homebrew</a> and highly recommend it to anyone using Mac OS X. Plenty of ready-to-use packages are just <code>brew install</code> away, docker and boot2docker including.</p>

<h2>Running Kafka on two Docker images</h2>

<ol>
<li><p>(Mac OS X and Windows users only) Execute <code>boot2docker up</code> to start the tiny Linux core on Mac OS.</p>

<pre><code> ➜  ~  boot2docker up
 Waiting for VM and Docker daemon to start...
 .o
 Started.
 Writing /Users/jacek/.boot2docker/certs/boot2docker-vm/ca.pem
 Writing /Users/jacek/.boot2docker/certs/boot2docker-vm/cert.pem
 Writing /Users/jacek/.boot2docker/certs/boot2docker-vm/key.pem

 To connect the Docker client to the Docker daemon, please set:
     export DOCKER_HOST=tcp://192.168.59.104:2376
     export DOCKER_CERT_PATH=/Users/jacek/.boot2docker/certs/boot2docker-vm
     export DOCKER_TLS_VERIFY=1
</code></pre></li>
<li><p>(Mac OS X and Windows users only) Execute <code>$(boot2docker shellinit)</code> to have the terminal set up and let <code>docker</code> know where the tiny Linux core is running (via <code>boot2docker</code>). You have to do the step in any terminal you open to work with Docker so the <code>export</code>s above are set. Should you face communication issues with <code>docker</code> commands, remember the step.</p>

<pre><code>➜  ~  $(boot2docker shellinit)
Writing /Users/jacek/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/jacek/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/jacek/.boot2docker/certs/boot2docker-vm/key.pem
</code></pre></li>
<li><p>Run <code>docker ps</code> to verify the terminal is configured properly for Docker.</p>

<pre><code>➜  ~  docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
</code></pre>

<p> No containers are running at this time. It&rsquo;s going to change soon once you start the containers for Zookeeper first and then Kafka.</p></li>
<li><p><a href="https://hub.docker.com/u/jaceklaskowski/">Create an account on Docker Hub</a> and execute <code>docker login</code> to store the credentials. With the step you don&rsquo;t have to repeat them for <code>docker pull</code> to pull images off the public hub of Docker images. Think of the Docker Hub as the GitHub for Docker images. Refer to the documentation <a href="http://docs.docker.com/docker-hub/userguide/">Using the Docker Hub</a> for more up-to-date information.</p></li>
<li><p>Run <code>docker pull wurstmeister/zookeeper</code> to pull the Zookeeper image off Docker Hub (might take a few minutes to download):</p>

<pre><code>➜  ~  docker pull wurstmeister/zookeeper
Pulling repository wurstmeister/zookeeper
a3075a3d32da: Download complete
...
840840289a0d: Download complete
e7381f1a45cf: Download complete
5a6fc057f418: Download complete
Status: Downloaded newer image for wurstmeister/zookeeper:latest
</code></pre>

<p>You will see the hashes of the respective layers printed out to the console. That&rsquo;s expected.</p></li>
<li><p>Execute <code>docker pull wurstmeister/kafka</code> to pull the Kafka image off Docker Hub (might take a few minutes to download):</p>

<pre><code> ➜  ~  docker pull wurstmeister/kafka
 latest: Pulling from wurstmeister/kafka
 428b411c28f0: Pull complete
 ...
 422705fe88c8: Pull complete
 02bb7ca441d8: Pull complete
 0f9a08061516: Pull complete
 24fc32f98556: Already exists
 Digest: sha256:06150c136dcfe6e4fbbf37731a2119ea17a953c75902e52775b5511b3572aa1f
 Status: Downloaded newer image for wurstmeister/kafka:latest
</code></pre></li>
<li><p>Verify that the two images - <code>wurstmeister/kafka</code> and <code>wurstmeister/zookeeper</code> - are downloaded. From the command line execute <code>docker images</code>:</p>

<pre><code>➜  ~  docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
wurstmeister/kafka       latest              24fc32f98556        3 weeks ago         477.6 MB
wurstmeister/zookeeper   latest              a3075a3d32da        9 months ago        451 MB
</code></pre></li>
<li><p>You can now execute <code>docker run --name zookeeper -p 2181:2181 -t wurstmeister/zookeeper</code> in one terminal to boot Zookeeper up.</p>

<p> Remember <code>$(boot2docker shellinit)</code> if you&rsquo;re on Mac OS X or Windows.</p>

<pre><code> ➜  ~  docker run --name zookeeper -p 2181:2181 -t wurstmeister/zookeeper
 JMX enabled by default
 Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
 2015-07-17 19:10:40,419 [myid:] - INFO  [main:QuorumPeerConfig@103] - Reading configuration from: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
 ...
 2015-07-17 19:10:40,452 [myid:] - INFO  [main:ZooKeeperServer@773] - maxSessionTimeout set to -1
 2015-07-17 19:10:40,464 [myid:] - INFO  [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:2181
</code></pre>

<p> This gives you Zookeeper listening on port 2181. Check it out by telnetting to it at the Docker ip address (use <code>boot2docker ip</code> on Mac OS X).</p>

<pre><code> ➜  ~  telnet `boot2docker ip` 2181
 Trying 192.168.59.103...
 Connected to 192.168.59.103.
 Escape character is '^]'.
</code></pre></li>
<li><p>Execute <code>docker run --name kafka -e HOST_IP=localhost -e KAFKA_ADVERTISED_PORT=9092 -e KAFKA_BROKER_ID=1 -e ZK=zk -p 9092 --link zookeeper:zk -t wurstmeister/kafka</code> in another terminal.</p>

<p> Remember <code>$(boot2docker shellinit)</code> if you&rsquo;re on Mac OS X or Windows.</p>

<pre><code> ➜  ~  docker run --name kafka -e HOST_IP=localhost -e KAFKA_ADVERTISED_PORT=9092 -e KAFKA_BROKER_ID=1 -e ZK=zk -p 9092 --link zookeeper:zk -t wurstmeister/kafka
 [2015-07-17 19:32:35,865] INFO Verifying properties (kafka.utils.VerifiableProperties)
 [2015-07-17 19:32:35,891] INFO Property advertised.port is overridden to 9092 (kafka.utils.VerifiableProperties)
 [2015-07-17 19:32:35,891] INFO Property broker.id is overridden to 1 (kafka.utils.VerifiableProperties)
 ...
 [2015-07-17 19:32:35,894] INFO Property zookeeper.connect is overridden to 172.17.0.5:2181 (kafka.utils.VerifiableProperties)
 [2015-07-17 19:32:35,895] INFO Property zookeeper.connection.timeout.ms is overridden to 6000 (kafka.utils.VerifiableProperties)
 [2015-07-17 19:32:35,924] INFO [Kafka Server 1], starting (kafka.server.KafkaServer)
 [2015-07-17 19:32:35,925] INFO [Kafka Server 1], Connecting to zookeeper on 172.17.0.5:2181 (kafka.server.KafkaServer)
 [2015-07-17 19:32:35,934] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
 [2015-07-17 19:32:35,939] INFO Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.ZooKeeper)
 ...
 [2015-07-17 19:32:36,093] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
 [2015-07-17 19:32:36,095] INFO [Socket Server on Broker 1], Started (kafka.network.SocketServer)
 [2015-07-17 19:32:36,146] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
 [2015-07-17 19:32:36,172] INFO 1 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
 [2015-07-17 19:32:36,253] INFO Registered broker 1 at path /brokers/ids/1 with address 61c359a3136b:9092. (kafka.utils.ZkUtils$)
 [2015-07-17 19:32:36,270] INFO [Kafka Server 1], started (kafka.server.KafkaServer)
 [2015-07-17 19:32:36,318] INFO New leader is 1 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
</code></pre>

<p> You&rsquo;re now a happy user of Apache Kafka on your computer using Docker. Check the status of the containers using <code>docker ps</code>:</p>

<pre><code> ➜  ~  docker ps
 CONTAINER ID        IMAGE                    COMMAND                CREATED             STATUS              PORTS                                                 NAMES
 0b34a9927004        wurstmeister/kafka       "/bin/sh -c start-ka   2 minutes ago       Up 2 minutes        0.0.0.0:32769-&gt;9092/tcp                               kafka
 14fd32558b1c        wurstmeister/zookeeper   "/bin/sh -c '/usr/sb   4 minutes ago       Up 4 minutes        22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:32768-&gt;2181/tcp   zookeeper
</code></pre></li>
<li><p>Once you&rsquo;re done with your journey into Apache Kafka, <code>docker stop</code> the containers using <code>docker stop kafka zookeeper</code> (or <code>docker stop $(docker ps -aq)</code> if the only running containers are <code>kafka</code> and <code>zookeeper</code>).</p>

<pre><code> ➜  ~  docker stop kafka zookeeper
 kafka
 zookeeper
</code></pre>

<p> Running <code>docker ps</code> shows no running containers afterwards:</p>

<pre><code> ➜  ~  docker ps
 CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
</code></pre>

<p> There are no running containers since they&rsquo;re stopped now. They are still ready to be booted up again - use <code>docker ps -a</code> to see the ready-to-use containers:</p>

<pre><code> ➜  ~  docker ps -a
 CONTAINER ID        IMAGE                    COMMAND                CREATED             STATUS                        PORTS               NAMES
 7dde25ff7ec2        wurstmeister/kafka       "/bin/sh -c start-ka   15 hours ago        Exited (137) 16 seconds ago                       kafka
 b7b4b675b9c0        wurstmeister/zookeeper   "/bin/sh -c '/usr/sb   16 hours ago        Exited (137) 5 seconds ago                        zookeeper
</code></pre></li>
<li><p>(Mac OS X and Windows users only) Finally, stop <code>boot2docker</code> daemon using <code>boot2docker down</code>.</p></li>
</ol>


<h2>Summary</h2>

<p>With these two Docker images - <a href="https://registry.hub.docker.com/u/wurstmeister/kafka/">wurstmeister/kafka</a> and <a href="https://registry.hub.docker.com/u/wurstmeister/zookeeper/">wurstmeister/zookeeper</a> - you can run <strong>Apache Kafka</strong> without installing it, or the necessary components like Apache ZooKeeper, on your local workstation. You don&rsquo;t need to worry about upgrading the software and its dependencies other than Docker itself (and boot2docker if you&rsquo;re lucky enough to be on Mac OS). That saves you time on installation and keeps both your machine and the software functioning properly. Moreover, the Docker images can be deployed to other machines and guarantee a consistent environment for the software inside.</p>

<p>Let me know what you think about the topic<sup id="fnref:1"><a href="#fn:1" rel="footnote">1</a></sup> of the blog post in the <a href="#disqus_thread">Comments</a> section below or contact me at <a href="&#x6d;&#97;&#105;&#x6c;&#116;&#x6f;&#x3a;&#x6a;&#97;&#99;&#x65;&#107;&#64;&#106;&#x61;&#112;&#105;&#108;&#97;&#x2e;&#112;&#x6c;&#x2e;">&#x6a;&#97;&#x63;&#101;&#x6b;&#x40;&#106;&#97;&#112;&#105;&#x6c;&#x61;&#46;&#112;&#x6c;&#46;</a> Follow the author as <a href="https://twitter.com/jaceklaskowski">@jaceklaskowski</a> on Twitter, too.</p>
<div class="footnotes">
<hr/>
<ol>
<li id="fn:1">
<p>pun intended<a href="#fnref:1" rev="footnote">&#8617;</a></p></li>
</ol>
</div>

]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Ad Hoc Polymorphism in Scala With Type Classes]]></title>
    <link href="http://blog.jaceklaskowski.pl/2015/05/15/ad-hoc-polymorphism-in-scala-with-type-classes.html"/>
    <updated>2015-05-15T16:21:20-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2015/05/15/ad-hoc-polymorphism-in-scala-with-type-classes</id>
    <content type="html"><![CDATA[<p>My journey into the depths of <a href="http://www.scala-lang.org/">Scala</a> is in full swing. Not only can I learn the theory (with the group of <a href="http://warsawscala.pl">Warsaw Scala Enthusiasts</a>), but also apply it to commercial projects (with the Scala development teams of <a href="http://deepsense.io/">DeepSense.io</a> and <a href="http://www.hcore.com/">HCore</a>). Each day I feel I&rsquo;m getting better at using <strong>type system in Scala</strong> in a more concious and (hopefully) efficient manner.</p>

<p>This time I sank into <strong>type classes</strong>, which are a means of achieving <strong><em>ad hoc</em> polymorphism</strong> in Scala.</p>

<p>From <a href="http://en.wikipedia.org/wiki/Ad_hoc_polymorphism"><em>ad hoc</em> polymorphism</a> article on Wikipedia:</p>

<blockquote><p>In programming languages, <strong>ad hoc polymorphism</strong> is a kind of polymorphism in which polymorphic functions can be applied to arguments of different types, because a polymorphic function can denote a number of distinct and potentially heterogeneous implementations depending on the type of argument(s) to which it is applied.</p></blockquote>

<p>The blog post presents a way to implement the type classes concept in Scala.</p>

<p>p.s. I&rsquo;m yet to find out how much of it resembles <a href="http://clojure.org/multimethods">multimethods</a> in <a href="http://clojure.org/">Clojure</a> (which once helped introduce me to <strong>functional programming</strong>).</p>

<!-- more -->


<h2>Theory</h2>

<p>From the article <a href="http://en.wikipedia.org/wiki/Polymorphism_(computer_science)">Polymorphism (computer science)</a> on Wikipedia, you can read about three different kinds of polymorphism:</p>

<ul>
<li><strong>subtyping</strong></li>
<li><strong>parametric polymorphism</strong></li>
<li><strong>ad hoc polymorphism</strong></li>
</ul>


<p>The order matters (and differs from the Wikipedia article&rsquo;s) as I think that&rsquo;s exactly the order in which developers master them (in any programming language).</p>

<p>I reckon the Scala community uses the first two quite often in code, which contributes to how well they&rsquo;re understood and applied (except <a href="http://en.wikipedia.org/wiki/Covariance_and_contravariance_%28computer_science%29">variance</a>), and just the last one - ad hoc polymorphism - proves a major hurdle for many, myself included. It didn&rsquo;t click for me for a long time either, until I found sources of very concise and comprehensible material (see the <a href="#references">References</a> section below).</p>

<blockquote><p>In programming languages and type theory, polymorphism is the provision of a single interface to entities of different types.</p></blockquote>

<p>The difference is that with <em>subtyping</em> the type hierarchy is expressed explicitly using the <strong>extends</strong> keyword, and with <em>parametric polymorphism</em> using type parameters, while <em>ad hoc polymorphism</em> relies on <strong>implicit classes</strong> to <em>mix in</em> the behaviour (using traits). And that&rsquo;s pretty much all, technically.</p>

<p>Let&rsquo;s see it in Scala code.</p>

<h2>Practice</h2>

<h3>Algebraic data types (raw data)</h3>

<p>Let&rsquo;s assume you have the following type hierarchy (from <a href="https://youtu.be/sVMES4RZF-8">Tutorial: Typeclasses in Scala with Dan Rosen</a>):</p>

<pre><code>sealed trait Expression
case class Number(value: Int) extends Expression
case class Plus(lhs: Expression, rhs: Expression) extends Expression
case class Minus(lhs: Expression, rhs: Expression) extends Expression
</code></pre>

<p>No behaviour, just <em>algebraic data types</em>, often called <em>raw data</em>.</p>

<p>Think about how you&rsquo;d go about evaluating expressions, e.g. <code>Plus(Minus(Number(5), Number(3)), Number(18))</code>, i.e. how to make the following application compile and print <code>20</code>:</p>

<pre><code>object Main extends App {
  val expr = Plus(Minus(Number(5), Number(3)), Number(18))
  println(...) // do something with expr so it prints 20
}
</code></pre>

<p>The past me would change <code>sealed trait Expression</code> to include <code>def value: Int</code> and force the three case classes <code>Number</code>, <code>Plus</code> and <code>Minus</code> to follow suit. I wouldn&rsquo;t be surprised if you thought exactly the same. You shouldn&rsquo;t, however.</p>

<p>Think about the case where you must <strong>not</strong> change them as they could be provided as a library or be part of the language or be licensed or&hellip;I simply ask you not to.</p>

<h3>objects to apply behaviour</h3>

<p>Since you&rsquo;re in Scala, you could easily work around it with an object, say <code>object ExpressionEvaluator</code>, that would <em>pattern match</em> on types and do the heavy lifting, i.e. know what to do with each and every type:</p>

<pre><code>object ExpressionEvaluator {
  def value(expression: Expression): Int = expression match {
    case Number(value) =&gt; value
    case Plus(lhs, rhs) =&gt; value(lhs) + value(rhs)
    case Minus(lhs, rhs) =&gt; value(lhs) - value(rhs)
  }
}
</code></pre>

<p>That&rsquo;s quite inefficient and cumbersome, though. You&rsquo;d have to know about all of the implementations as well as what to do for each. It&rsquo;s nearly impossible to get right, complete and still flexible.</p>

<p>With the above object, you&rsquo;d write:</p>

<pre><code>object Main extends App {
  val expr = Plus(Minus(Number(5), Number(3)), Number(18))
  println(ExpressionEvaluator.value(expr)) // print the value
}
</code></pre>

<h3>implicits in Scala</h3>

<p>Let&rsquo;s do it the right way, so it&rsquo;s not <code>ExpressionEvaluator</code> that knows what to do with every <code>Expression</code>, but the expressions themselves do what they&rsquo;re supposed to do instead.</p>

<p>You may have heard about <strong>implicits</strong> in Scala. Perhaps, you may have used them, too. Think about a solution with implicit machinery so the following is possible:</p>

<pre><code>object Main extends App {
  val expr = Plus(Minus(Number(5), Number(3)), Number(18))
  println(expr.value) // print the value
}
</code></pre>

<p>One way in Scala would be to apply the <strong>Pimp my Library</strong> pattern, leveraging <code>implicit class</code>es to add the necessary methods as follows:</p>

<pre><code>implicit class ExpressionOps(e: Expression) {
  def value = ... // calculate the value
}
</code></pre>

<p>And have an <code>implicit class</code> per <code>case class</code>, and one for <code>sealed trait Expression</code>, too. The reason for the implicits is to add a <code>value</code> method to every type, i.e. implicitly convert types without <code>value</code> to ones that have it.</p>

<pre><code>implicit class ExpressionOps(e: Expression) {
  def value: Int = e match {
    case n : Number =&gt; n.value
    case p : Plus =&gt; p.value
    case m : Minus =&gt; m.value
  }
}

implicit class PlusOps(p: Plus) {
  def value: Int = p.lhs.value + p.rhs.value
}

implicit class MinusOps(m: Minus) {
  def value: Int = m.lhs.value - m.rhs.value
}
</code></pre>

<p>Note that since <code>Number</code> had <code>value</code> already, an implicit was not needed.</p>

<p>With the implicits in place, you can write:</p>

<pre><code>println(expr.value)  // prints 20
</code></pre>

<p>The implicit-based solution is far more flexible because calculating a value is the responsibility of an implicit in scope &ndash; a change in a single implicit, say <code>MinusOps</code>, would only change how <code>Minus.value</code> works (with no changes to the rest of the &ldquo;framework&rdquo;).</p>

<p>You&rsquo;re halfway to typeclasses!</p>

<h3>Partial ad hoc polymorphism</h3>

<p>Think about the case where you&rsquo;d have a library to calculate a value of <code>Valueable</code> values (no pun intended). Say, you have such a library that offers the following &ldquo;calculator&rdquo;:</p>

<pre><code>def calculate(v: Valueable) = ... // calculate the value of v
</code></pre>

<p>How would you go about converting <code>Expression</code>s to <code>Valueable</code>s so a value of <code>Expression</code> type would participate in the contract of <code>def calculate</code>?</p>

<p>You&rsquo;re right that whenever type conversion is needed in Scala, implicits have their say &ndash; you used them already to have <code>def value</code> for <code>Expression</code> type hierarchy. You&rsquo;re going to use them again with a very small change that has enormous impact. Doing so will introduce the type class design pattern.</p>

<p>The current solution relies on values having <code>def value</code> &ndash; it&rsquo;s like having a library that uses structural typing in Scala, which unfortunately uses reflection and hence is very costly performance-wise. Happily, you don&rsquo;t need structural types.</p>

<p>If you had a library that expects values of some type, say <code>trait Valueable</code>, to calculate a value, or some other library to JSONify them (using some <code>trait Json</code>), the previous solutions would fall short &ndash; you&rsquo;ve merely met the requirement of being able to call a <code>value</code> method on values, but the values don&rsquo;t belong to the single type hierarchy of some <code>trait Valueable</code> that the library uses.</p>

<p>Assume you have a library that works with <code>trait Valueable</code>-only values and there&rsquo;s a <code>calculate</code> method to work with them as follows:</p>

<pre><code>trait Valueable {
  def value: Int
}

def calculate(v: Valueable) = v.value
</code></pre>

<p>Note that the library knows nothing about the <code>Expression</code> type hierarchy. The <code>Expression</code> type hierarchy could not even have existed at the time <code>Valueable</code> did.</p>

<p>A more flexible and efficient solution is to <em>somehow</em> meld the <code>Expression</code> type hierarchy with <code>Valueable</code> and create an <em>is-a relationship</em>. No, no, you&rsquo;re not going to change the <code>Expression</code> trait in any way. You must <strong>not</strong> and you even <strong>can&rsquo;t</strong>, remember?</p>

<p>Welcome to <strong>typeclasses</strong> (also known as <strong>type class design pattern</strong>)!</p>

<p>In order to have <code>extends</code>-like semantics at runtime with no <code>extends Valueable</code> in the (closed and <code>sealed</code>) <code>Expression</code> type hierarchy, you change the <code>implicit class</code>es to mix in the common trait instead.</p>

<pre><code>implicit class ExpressionOps(e: Expression) extends Valueable {
  def value = e match {
    case n: Number =&gt; n.value
    case p: Plus =&gt; p.value
    case m: Minus =&gt; m.value
  }
}

implicit class PlusOps(p: Plus) {
  def value: Int = p.lhs.value + p.rhs.value
}

implicit class MinusOps(m: Minus) {
  def value: Int = m.lhs.value - m.rhs.value
}
</code></pre>

<p>With <code>ExpressionOps</code> extending <code>Valueable</code> as above, you can safely do <code>expr.value</code> for any <code>Expression</code> value for which an implicit exists in scope and be done with the task at hand. Notice how Scala &ldquo;executes&rdquo; <code>value</code> on the different types (that&rsquo;s based upon using the <code>implicit class</code>es for the types when <code>value</code> is needed).</p>

<pre><code>println(calculate(expr))  // prints 20
</code></pre>

<p>For what it&rsquo;s worth, the <code>calculate</code> method belongs to a library that knows nothing about the <code>Expression</code> type, and you can&rsquo;t change it to accept <code>Expression</code>s. That&rsquo;s exactly where typeclasses shine and are hard to beat.</p>

<p>This is the version of type classes pattern as described in the book <a href="http://shop.oreilly.com/product/0636920033073.do">Programming Scala, 2nd Edition</a> in &ldquo;Type Class Pattern&rdquo; section, page 156.</p>

<h3>Final solution: <code>Value[T]</code> type class</h3>

<p>You can still do better than that in Scala and that&rsquo;s what (<em>the real</em>) type class design pattern offers. This is the version of type class pattern as demonstrated in the video <a href="https://youtu.be/sVMES4RZF-8">Tutorial: Typeclasses in Scala with Dan Rosen</a>. You should really watch it.</p>

<p>Let&rsquo;s have a parameterized type <code>trait Value[T]</code> that&rsquo;s supposed to &ldquo;bridge&rdquo; the type hierarchies - <code>Expression</code> and <code>Valueable</code>:</p>

<pre><code>trait Value[T] {
  def value(t: T): Valueable
}
</code></pre>

<p>The fictitious Calculator library has <code>object Calculator</code> with a single <code>def calculate(v: Valueable): Int</code> method:</p>

<pre><code>object Calculator {
  def calculate(v: Valueable): Int = v.value
}
</code></pre>

<p>No <code>Expression</code>s, just <code>Valueable</code>s. That&rsquo;s where the type class pattern shines.</p>

<p>Create <code>object CalculatorEx</code> as follows:</p>

<pre><code>object CalculatorEx {
  def calculate[T : Value](t: T): Int =
    Calculator.calculate(implicitly[Value[T]].value(t))
}
</code></pre>

<p>It says that for any type <code>T</code> for which an implicit <code>Value[T]</code> instance exists, a value of <code>T</code> can be bridged into the <code>Valueable</code> type hierarchy (the instance is summoned inside the method via <a href="http://www.scala-lang.org/api/current/index.html#scala.Predef$">implicitly</a>).</p>
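<p>For reference, the context bound <code>[T : Value]</code> is syntactic sugar for an implicit parameter list; the method above could equivalently be written as follows (the trait and object definitions are repeated here only so the sketch stands alone):</p>

```scala
// Minimal definitions repeated so this desugaring sketch compiles standalone
trait Valueable { def value: Int }
trait Value[T] { def value(t: T): Valueable }
object Calculator { def calculate(v: Valueable): Int = v.value }

object CalculatorEx {
  // the context bound [T : Value] desugars to an implicit parameter:
  def calculate[T](t: T)(implicit ev: Value[T]): Int =
    Calculator.calculate(ev.value(t))
}
```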

<p>All in all, you need to throw in three more implicits for <code>Number</code>, <code>Plus</code> and <code>Minus</code> so they can be seen as <code>Valueable</code> and participate in <code>def calculate(v: Valueable): Int</code>-based library:</p>

<pre><code>implicit val number2Value = new Value[Number] {
  def value(n: Number): Valueable = new Valueable {
    override def value: Int = n.value
  }
}

implicit val plus2Value = new Value[Plus] {
  def value(p: Plus): Valueable = new Valueable {
    override def value: Int = p.lhs.value + p.rhs.value
  }
}

implicit val minus2Value = new Value[Minus] {
  def value(m: Minus): Valueable = new Valueable {
    override def value: Int = m.lhs.value - m.rhs.value
  }
}
</code></pre>

<p>These make it possible to calculate the value of the expression leveraging the fictitious Calculator library:</p>

<pre><code>println(CalculatorEx.calculate(expr))  // prints 20
</code></pre>

<p>That&rsquo;s it! You&rsquo;re done. If you&rsquo;ve followed along closely and have developed the &ldquo;framework&rdquo; on your own, you should have a complete understanding of the type class design pattern in Scala. Without the type class pattern, blending the <code>Expression</code> type hierarchy with <code>Valueable</code> would not have been possible! And that was the goal of the exercise.</p>
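<p>If you&rsquo;d like to run the final solution end to end, here&rsquo;s a self-contained sketch assembling the pieces above. Note one simplification on my part: it collapses the post&rsquo;s three per-type instances into a single recursive <code>Value[Expression]</code> instance (since <code>Value</code> is invariant, the expression has to be typed as <code>Expression</code> at the call site):</p>

```scala
sealed trait Expression
case class Number(value: Int) extends Expression
case class Plus(lhs: Expression, rhs: Expression) extends Expression
case class Minus(lhs: Expression, rhs: Expression) extends Expression

trait Valueable { def value: Int }

// the fictitious library: knows nothing about Expression
object Calculator { def calculate(v: Valueable): Int = v.value }

// the type class bridging any T into Valueable
trait Value[T] { def value(t: T): Valueable }

object CalculatorEx {
  def calculate[T: Value](t: T): Int =
    Calculator.calculate(implicitly[Value[T]].value(t))
}

object Instances {
  private def eval(e: Expression): Int = e match {
    case Number(v)  => v
    case Plus(l, r)  => eval(l) + eval(r)
    case Minus(l, r) => eval(l) - eval(r)
  }
  // one instance for the whole (sealed) hierarchy
  implicit val exprValue: Value[Expression] = new Value[Expression] {
    def value(e: Expression): Valueable = new Valueable {
      def value: Int = eval(e)
    }
  }
}

object Demo extends App {
  import Instances._
  val expr: Expression = Plus(Minus(Number(5), Number(3)), Number(18))
  println(CalculatorEx.calculate(expr)) // prints 20
}
```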

<h3>Follow up - All <code>Valueable</code></h3>

<p>Think about using other types, say <code>Int</code>, with the fictitious Calculator library so the following is possible:</p>

<pre><code>println(CalculatorEx.calculate(1))  // prints 1
</code></pre>

<p>As the Scala compiler says:</p>

<blockquote><p>could not find implicit value for evidence parameter of type Value[Int]</p></blockquote>

<p>All we need to do to blend <code>Int</code>s in as <code>Valueable</code>s is an implicit that does the conversion.</p>

<pre><code>implicit val int2Value = new Value[Int] {
  override def value(t: Int) = new Valueable {
    override def value: Int = t
  }
}
</code></pre>

<p>And that&rsquo;s it!</p>

<p>A final exercise would be to reuse the implicits and create more complex ones to support &ldquo;product&rdquo; types, like <code>Tuple</code>. I&rsquo;m leaving it as homework. Have fun!</p>
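<p>If you get stuck on the homework, here&rsquo;s a hint showing one possible direction (the minimal trait definitions are repeated so the sketch compiles standalone, and summing the two components is just one arbitrary choice of semantics):</p>

```scala
trait Valueable { def value: Int }
trait Value[T] { def value(t: T): Valueable }

object TupleInstances {
  implicit val int2Value: Value[Int] = new Value[Int] {
    def value(t: Int): Valueable = new Valueable { def value: Int = t }
  }
  // derive an instance for pairs from the instances of the components
  implicit def tuple2Value[A, B](implicit va: Value[A], vb: Value[B]): Value[(A, B)] =
    new Value[(A, B)] {
      def value(t: (A, B)): Valueable = new Valueable {
        def value: Int = va.value(t._1).value + vb.value(t._2).value
      }
    }
}
```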

<p>Let me know how the homework and the whole blog post went in the Comments section below. I&rsquo;d appreciate any comments to improve upon.</p>

<h2>References</h2>

<p>There are plenty of very good sources on the topic of type classes in general and in Scala, in particular, but the following have done wonders for me:</p>

<ul>
<li><a href="http://en.wikipedia.org/wiki/Ad_hoc_polymorphism">ad hoc polymorphism</a> article on Wikipedia</li>
<li><a href="http://en.wikipedia.org/wiki/Polymorphism_(computer_science)">Polymorphism (computer science)</a> article on Wikipedia</li>
<li><a href="https://youtu.be/sVMES4RZF-8">Tutorial: Typeclasses in Scala with Dan Rosen</a> video</li>
<li><a href="http://shop.oreilly.com/product/0636920033073.do">Programming Scala, 2nd Edition</a> book</li>
</ul>

]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Ditching Guice's @Singleton in Favour of Scala's (Companion) Object]]></title>
    <link href="http://blog.jaceklaskowski.pl/2015/05/09/ditching-guices-at-singleton-in-favour-of-scalas-companion-object.html"/>
    <updated>2015-05-09T08:57:19-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2015/05/09/ditching-guices-at-singleton-in-favour-of-scalas-companion-object</id>
    <content type="html"><![CDATA[<p><a href="https://twitter.com/akomarzewski">Arek Komarzewski</a> (a Scala developer in HCore) mentioned the following this Friday and made my day (and the whole week, too):</p>

<blockquote><p>I can now ditch Guice&rsquo;s @Singleton as I&rsquo;ve got a trait and the companion object combo (thanks to Scala).</p></blockquote>

<p>This time the blog post is without a complete working example. Not yet. It&rsquo;s to remind myself to prepare one (or be given one after the blog post is published &ndash; whatever comes first). I just think it needs to be said aloud to be heard and think about.</p>
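<p>Until such an example materializes, here&rsquo;s my own minimal sketch of what the trait-plus-companion-object combo might look like (hypothetical names, not Arek&rsquo;s actual code):</p>

```scala
// the interface you'd otherwise bind in a Guice module
trait Greeter {
  def greet(name: String): String
}

// the companion object is a JVM-managed, lazily-initialized singleton,
// so no @Singleton annotation and no injector are needed
object Greeter extends Greeter {
  def greet(name: String): String = s"Hello, $name"
}

// collaborators depend on the trait and default to the companion,
// which still lets tests pass in a stub implementation
class Service(greeter: Greeter = Greeter) {
  def welcome(name: String): String = greeter.greet(name)
}
```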

<!-- more -->


<p>And then, quite unexpectedly to me, <a href="https://www.playframework.com/">Play Framework</a> - <em>&ldquo;The High Velocity Web Framework For Java and Scala&rdquo;</em> - that&rsquo;s the web framework supported by Typesafe in their <a href="http://www.typesafe.com/products/typesafe-reactive-platform">Reactive Platform</a> has announced in <a href="https://www.playframework.com/documentation/2.4.x/Highlights24">What&rsquo;s new in Play 2.4</a>:</p>

<blockquote><p>In the Scala ecosystem, the approach to dependency injection is not generally agreed upon, with many competing compile time and runtime dependency injection approaches out there.</p>

<p>Play’s philosophy in providing a dependency injection solution is to be unopinionated in what approaches we allow, but to be opinionated to the approach that we document and provide out of the box. For this reason, we have provided the following:</p>

<p>An implementation that uses Guice out of the box</p></blockquote>

<p>I&rsquo;m very lucky to be able to pursue my understanding of Scala the programming language not only in my free time, but also in commercial projects as a full-time Scala developer and a technical leader (in <a href="http://deepsense.io/">DeepSense.io</a>) as well as supporting companies making the most out of Scala and open source software (in <a href="http://www.hcore.com/">HCore</a>) not to mention leading <a href="http://www.meetup.com/WarszawScaLa/">Warsaw Scala Enthusiasts</a> in <strong>Warsaw</strong>, <strong>Poland</strong>. The technical part of my life simply can&rsquo;t be any better! I&rsquo;m learning as well as teaching people using Scala as an object-oriented and functional programming language on JVM, and am also meeting up lots of Scala developers. I really wish I had more time to publish all the major breakthroughs in blog posts here.</p>

<p>Enough praising. On to real matters.</p>

<p>In <a href="http://deepsense.io/">DeepSense.io</a> we&rsquo;re using <a href="https://github.com/google/guice">Guice</a> as <em>a dependency injection framework</em>. It&rsquo;s also used in a few other projects where Scala is used. That often leads to my questioning the need for Guice or any dependency injection framework, since Scala the programming language itself offers enough features to let Guice go.</p>

<p>I think the issue lies in how most Scala developers (that I&rsquo;m meeting) think &ndash; they are former Java developers who see no reason to adapt to new approaches to tackling development problems. They simply lack the courage to dig deeper, let the past rest and welcome new solutions for a brighter future. And since Scala alone is still shaping itself and the Scala community is not clear on what to follow and what to forget about, that all makes the leave-the-past-welcome-the-new approach harder.</p>

<p>Just this week I had the pleasure of meeting two teams that both use Guice because nobody introduced viable alternatives (even when they use one already - Scala!). I don&rsquo;t consider myself skilled enough either, since I&rsquo;m pretty much a Guice newbie and hence unable to counter Guice&rsquo;s way of solving problems. It&rsquo;s just something inside me telling me that Guice is dragged along unnecessarily and gives a false perception of being the right fit for dependency injection-looking problems (even when it introduces more problems, mostly related to learning yet another framework, than it solves). I&rsquo;m glad Spring is far heavier, or it would likely have found its way into the projects, too.</p>

<p>It has worked fine for Arek and I&rsquo;m hoping it&rsquo;s going to work fine for me soon, too. I&rsquo;ll simply have to find it out and promote the right approach.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Using AutoPlugin in Sbt for Common Settings Across Projects in Multi-project Build]]></title>
    <link href="http://blog.jaceklaskowski.pl/2015/04/12/using-autoplugin-in-sbt-for-common-settings-across-projects-in-multi-project-build.html"/>
    <updated>2015-04-12T09:20:45-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2015/04/12/using-autoplugin-in-sbt-for-common-settings-across-projects-in-multi-project-build</id>
    <content type="html"><![CDATA[<p>What a joy to learn all the goodies sbt brings to the table and be given a chance to apply it right away to commercial projects in Scala!</p>

<p>I&rsquo;ve recently been assigned the task of creating a solution to share common settings across projects in a <a href="http://www.scala-sbt.org/0.13/tutorial/Multi-Project.html">multi-project build</a> in a Scala project managed by <a href="http://www.scala-sbt.org">sbt</a>. With the new feature of sbt - <a href="http://www.scala-sbt.org/0.13/docs/Plugins.html">autoplugins</a> - it was very easy to implement from day one.</p>

<!-- more -->


<h2>The solution = project/CommonSettingsPlugin.scala</h2>

<pre><code>import sbt._
import Keys._

object CommonSettingsPlugin extends AutoPlugin {
  override def trigger = allRequirements
  override lazy val projectSettings = Seq(
    organization  := "io.deepsense",
    version       := "0.1.0",
    scalaVersion  := "2.11.6",
    scalacOptions := Seq(
      "-unchecked", "-deprecation", "-encoding", "utf8", "-feature",
      "-language:existentials", "-language:implicitConversions"
    )
  )
}
</code></pre>

<p>Since the autoplugin is automatically triggered for all the projects in the multi-project build &ndash; <code>override def trigger = allRequirements</code> &ndash; all the projects have <code>organization</code>, <code>version</code>, <code>scalaVersion</code>, <code>scalacOptions</code> set. No other work is needed.</p>

<p>Simple and easy, and, what&rsquo;s tracked under another issue, Jenkins should now be able to execute <code>coverageReport</code> and <code>coverageAggregate</code> without troubles for the project!</p>

<p>As a bonus, you don&rsquo;t have to change anything in your sbt project to leverage the simplicity - just drop the code as <code>project/CommonSettingsPlugin.scala</code> and do <code>show scalaVersion</code>. From that moment on, all the (sub)projects in your multi-project build should have the same <code>scalaVersion</code> (unless you use a separate <code>build.sbt</code> for a project and set <code>scalaVersion</code> explicitly).</p>
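<p>For completeness, overriding a setting for just one subproject stays as simple as a line in that project&rsquo;s own <code>build.sbt</code> (a hypothetical example &ndash; settings declared there take precedence over the AutoPlugin&rsquo;s <code>projectSettings</code>):</p>

```scala
// project-a/build.sbt (hypothetical subproject)
// overrides the scalaVersion contributed by CommonSettingsPlugin
scalaVersion := "2.10.5"
```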

<h2>Lessons learnt</h2>

<p>The initial mistake was to create a separate sbt project just to keep the autoplugin&rsquo;s code.</p>

<p>From the very first day of the plugin&rsquo;s life it was so much different from the other projects in the build &ndash; it almost yelled out in pain at me not to leave it where it was initially. It was an sbt plugin and as such it was only meant to enhance sbt itself (not be a part of the commercial project).</p>

<p>What&rsquo;s even worse, up to the latest version of sbt, 0.13.8, all plugins (are doomed to) use Scala 2.10.4, which was an issue for the <a href="https://github.com/scoverage/sbt-scoverage">sbt-scoverage plugin</a> as it kept refusing to work with that version of Scala.</p>

<p>All in all, for a long time I seemed reluctant to even think of a better place for the plugin, even though I had actually known it.</p>

<p>A few discussions on <a href="https://gitter.im/sbt/sbt">the sbt channel on gitter</a> made my day after the very helpful and nice people there lent me a helping hand to find the final solution - the autoplugin went under <code>project</code> and&hellip;my life was so much brighter again!</p>

<p>As was later pointed out to me when we discussed the change in the team, the final solution was a mere 24-line removal (!)</p>

<p><img src="http://blog.jaceklaskowski.pl/images/using-autoplugin-patchset-gerrit.png" title="Patchset with AutoPlugin changes" ></p>

<p>It got <code>+2</code> from a teammate and has been merged to <code>master</code> without much hassle.</p>

<p>p.s. Shhh, the team thinks it was just me to have figured it out myself.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Code Review With Gerrit and Contributing to a Patch Set?]]></title>
    <link href="http://blog.jaceklaskowski.pl/2015/04/05/code-review-with-gerrit-and-contributing-to-a-patchset.html"/>
    <updated>2015-04-05T11:59:41-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2015/04/05/code-review-with-gerrit-and-contributing-to-a-patchset</id>
    <content type="html"><![CDATA[<p><a href="https://code.google.com/p/gerrit/"><img class="left" src="http://blog.jaceklaskowski.pl/images/gerrit-diffy.png" title="Gerrit Code Review" ></a></p>

<p>I&rsquo;m yet to appreciate <a href="https://code.google.com/p/gerrit/">gerrit</a> as a code review tool worth to learn (after having heard bad and good stories about its features and how it complements development workflows), but in my new team at <a href="http://www.codilime.com/">Codilime</a> where we develop&hellip;<em><a href="http://deepsense.io/">a revolutionary machine learning engine enabling your team to use state-of-the-art algorithms in a fraction of time!</a></em> that&rsquo;s the tool to conduct code reviews.</p>

<p>The blog post presents how I discovered a way to contribute to a patch set with my own changes. Use with caution as I&rsquo;m not really sure that&rsquo;s how gerrit should be used in a team.</p>

<!-- more -->


<h2>Very (lame) intro to gerrit</h2>

<p>From the website of <a href="https://code.google.com/p/gerrit/">gerrit</a>:</p>

<blockquote><p>Web based code review and project management for Git based projects.</p></blockquote>

<p>The takeaway is that it&rsquo;s the tool for <strong>code reviews</strong> for <strong>git</strong> projects. You <code>git push</code> your changes to <code>master</code> early and often &ndash; I&rsquo;d say it&rsquo;s even earlier and more often than I did following <a href="https://guides.github.com/introduction/flow/">the github workflow</a> with <em>pull requests</em> or <a href="https://about.gitlab.com/2014/09/29/gitlab-flow/">the gitlab workflow</a> with <em>merge requests</em>.</p>

<p>At GitHub and GitLab I usually <code>git rebase</code> changes to squash them all into one and send a pull/merge request.</p>

<p>In gerrit, you <code>git commit --amend</code> as a way to keep changes under a single code review request. The point is to keep the <code>Change-Id</code> intact so everything stays in a single code review session. Every <code>git push</code> following <code>git commit --amend</code> with a single <code>Change-Id</code> creates a new Patch Set until it&rsquo;s ready to be merged with <code>master</code>. These Patch Sets (under a single <code>Change-Id</code>) establish a sort of branch from which you send changes for code review.</p>

<p>Once the code looks good, i.e. it&rsquo;s <code>+1</code> twice by the team and verified (possibly by Jenkins), it can be submitted to the central git repository - preferably (private) one at GitHub.</p>

<h2>Drafts as branches - not for review</h2>

<p>There&rsquo;s a feature of gerrit called <a href="http://gerrit-documentation.googlecode.com/svn/ReleaseNotes/ReleaseNotes-2.3.html#_drafts">drafts</a> that&hellip;<em>will be for a change that is not meant for review (yet).</em> It&rsquo;s a good candidate for changes that other developers can contribute to (I might be mistaken here since I&rsquo;m new to gerrit).</p>

<h2>Contributing to a Patch Set</h2>

<p><strong>DISCLAIMER:</strong> What follows assumes that a patch set in draft status can be considered a branch to develop collaboratively with a few developers. I&rsquo;m not so sure it&rsquo;s a correct assumption.</p>

<p><img class="center" src="http://blog.jaceklaskowski.pl/images/gerrit-contributing-patchset-5.png" title="Patch Set" ></p>

<p>In gerrit, go to <strong>All > Open</strong> where you see all the available changes for code review. Pick one, preferably in <strong>Draft</strong> status (but others should work fine, too).</p>

<p>Expand the latest <strong>Patch Set</strong> and select <strong>checkout</strong> tab in <strong>Download</strong>. Copy the <code>git fetch ... &amp;&amp; git checkout FETCH_HEAD</code> command using the icon on the right (see the screenshot above).</p>

<p>Do your changes and once satisfied <code>git checkout -b new_branch_name</code> to create a new branch as <code>new_branch_name</code>.</p>

<p><code>git add</code> the changes and &ndash; <strong>very important</strong> &ndash; <code>git commit --amend</code> them with the <strong>Change-Id</strong> untouched. The Change-Id is how gerrit keeps track of what belongs to which development stream as a new patch set. You may also want to change the author with <code>git commit --amend --author</code> to mark the changes as yours.</p>

<p>Push the commit to <code>origin</code> (or whatever remote repository you use) with <code>git push origin &lt;new_branch_name&gt;:refs/drafts/master</code>. The <code>git push</code> command assumes that you&rsquo;re on the <code>master</code> branch.</p>
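<p>Putting the steps above together &ndash; the crucial bit is that the amended commit keeps the original <strong>Change-Id</strong>. A minimal sketch against a throwaway local repository (the Change-Id value below is made up for illustration):</p>

```shell
# Sketch only: a throwaway repo stands in for the change fetched from gerrit.
# The Change-Id value below is made up for illustration.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name You

# The fetched patch set: a commit whose message carries gerrit's Change-Id.
echo draft > docs.md
git add docs.md
git commit -q -m 'Initial draft

Change-Id: I0123456789abcdef0123456789abcdef01234567'

# Your contribution: edit, add, then amend -- leaving the Change-Id untouched
# (--no-edit keeps the message) and marking the changes as yours.
echo more >> docs.md
git add docs.md
git commit -q --amend --no-edit --author='You <you@example.com>'

# Still one commit with the same Change-Id: pushing it to refs/drafts/master
# would show up in gerrit as a new patch set of the same change.
git log -1 --format=%B | grep Change-Id
```

The amend replaces the previous commit rather than adding a new one, which is exactly why gerrit needs the Change-Id trailer to correlate the patch sets.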

<p>You&rsquo;re done!</p>

<p>See <a href="http://stackoverflow.com/q/24457418/1305344">How to change a patchset and push it as a new one?</a> where the knowledge was initially developed.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Daily Routines to Learn Scala With IntelliJ IDEA]]></title>
    <link href="http://blog.jaceklaskowski.pl/2015/03/28/daily-routines-to-learn-scala-with-intellij-idea.html"/>
    <updated>2015-03-28T07:54:28-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2015/03/28/daily-routines-to-learn-scala-with-intellij-idea</id>
    <content type="html"><![CDATA[<p><a href="http://blog.jetbrains.com/idea/2015/03/intellij-idea-14-1-is-here/"><img class="left" src="http://blog.jaceklaskowski.pl/images/scala-idea-intellij-idea-14-1.png" title="IntelliJ IDEA 14.1 is Here!" ></a> So, you&rsquo;ve got a moment to learn <a href="http://www.scala-lang.org/">Scala</a> and have <a href="https://www.jetbrains.com/idea/">IntelliJ IDEA</a> with <a href="https://plugins.jetbrains.com/plugin/?id=1347">Scala plugin</a> installed. Your wish is to <em>maximize</em> the mental outcome given the time at hand with <em>little to no effort</em> to set up a productive working environment. You may even think you may have gotten one, but, unless you&rsquo;re doing what I&rsquo;m describing here, you&rsquo;re actually far from truly having it. I&rsquo;m asking you to go <em>the extra mile</em>!</p>

<p>In this blog post I&rsquo;m introducing you to two modes in the recently-shipped <a href="http://blog.jetbrains.com/idea/2015/03/intellij-idea-14-1-is-here/">IntelliJ IDEA 14.1</a> &ndash; <strong>Full Screen</strong> and <strong>Distraction Free</strong> modes &ndash; and the few keystrokes I use in the development environment to have a comfortable place to learn Scala. I&rsquo;m sure you&rsquo;ll find a few ideas to improve your way into your own personal Scala nirvana.</p>

<p>Let&rsquo;s go minimalistic, full screen, distraction-free, mouse- and touchpad-less!</p>

<p>You may find the blog post <a href="http://blog.jetbrains.com/scala/2015/03/26/what-to-check-out-in-scala-plugin-1-4-x-for-intellij-idea-14-14-1/">What to Check Out in Scala Plugin 1.4.x for IntelliJ IDEA 14 &amp; 14.1</a> helpful, too.</p>

<p><em>Side note</em> It came as a complete surprise to me to have noticed that I&rsquo;ve been writing the blog post exactly a month after the last one.</p>

<!-- more -->


<h2>Why am I using IntelliJ IDEA to learn Scala?</h2>

<p>I&rsquo;m using IntelliJ IDEA daily.</p>

<p>I begin a day switching to the desktop where the IDE awaits my attention and keep it open (until a mandatory reboot following a system update). I was using other IDEs &ndash; NetBeans IDE or Eclipse IDE &ndash; in the past to develop applications in Java or Java EE, but things have changed since I switched my focus to Scala entirely.</p>

<p>The reason for the switch was to master the Scala language, not the other available IDEs, and given that IntelliJ IDEA has always been receiving positive marks, it&rsquo;s with me nowadays. When I need a full-blown IDE, it&rsquo;s IntelliJ IDEA with the Scala plugin. Period.</p>

<p>There&rsquo;s another tool that supports learning Scala beautifully &ndash; the <strong>Scala REPL</strong>. However, it&rsquo;s often too rudimentary and limiting; for quick rendezvous I prefer it with Sublime Text 3 and sbt. For more advanced sessions nothing beats the beloved IDE - IntelliJ IDEA.</p>

<p>I think it was <a href="http://www.nurkiewicz.com/">Tomasz Nurkiewicz</a> &ndash; <a href="http://blog.jetbrains.com/idea/2014/05/annotated-java-monthly-april-2014/">an IntelliJ IDEA expert</a> &ndash; who first showed the beauty of using IntelliJ IDEA mouse- and touchpad-less. Thanks Tomek!</p>

<h2>Minimalistic workspace</h2>

<p><a href="https://twitter.com/adrgrunt/status/552479034031239168">I remember the tweet from Adrian Gruntkowski</a> very well when the need to go minimalistic was first planted in my head. Adrian mentioned a user guide to set up a minimalistic workspace in IntelliJ IDEA (though it was for CursiveClojure) and the story began.</p>

<p><img class="center" src="http://blog.jaceklaskowski.pl/images/scala-idea-tweet-minimalistic-workspace.png" title="Adrian mentions minimalistic workspace" ></p>

<p>It took me a while to get used to it, but it was worth it! I just needed a mixture of <a href="http://www.sublimetext.com/3">Sublime Text 3</a> and IntelliJ IDEA as the Scala development environment, and I think I&rsquo;ve found mine already.</p>

<h2>A day with Scala and IntelliJ IDEA</h2>

<p>Be warned that I&rsquo;m working on Mac OS X with the <code>Mac OS X 10.5+</code> keymap so your mileage may vary.</p>

<p>The blog post assumes you&rsquo;ve got a Scala project imported or created from scratch already. I don&rsquo;t bother explaining how to do it. In either case, IntelliJ IDEA should open with a Scala project so switching between files makes sense.</p>

<h3>Minimalistic IntelliJ IDEA</h3>

<p>Follow <a href="https://cursiveclojure.com/userguide/ui.html">CursiveClojure UI</a> to have a minimalistic, clutter-free workspace. It boils down to turning off the toolbars &ndash; deselecting Toolbar and Navigation Bar in the View menu &ndash; and finally disabling the Editor Tabs. That&rsquo;s a very good start.</p>

<p>Start by pressing <code>Cmd + Ctrl + F</code> to enter full screen.</p>

<p>The IDE should now look like the following screenshot.</p>

<p><img class="center" src="http://blog.jaceklaskowski.pl/images/scala-idea-minimalistic-workspace.png" title="Minimalistic workspace following CursiveClojure UI" ></p>

<p>Press the <code>Shift</code> key twice to open the <strong>Search everywhere</strong> popup.</p>

<p><img class="center" src="http://blog.jaceklaskowski.pl/images/scala-idea-search-action-popup.png" title="Search action popup" ></p>

<p>You may also want to use <code>Cmd + Shift + A</code> to search for actions only.</p>

<p><img class="center" src="http://blog.jaceklaskowski.pl/images/scala-idea-search-actions-only.png" title="Search actions only popup" ></p>

<p>Type in <strong>pre mod</strong> or (better) <strong>preMod</strong> to execute the <strong>Enter Presentation Mode</strong> action. In IDEA 14.1 there&rsquo;s a far more productive mode &ndash; <strong>Distraction Free Mode</strong> &ndash; so type in <strong>distMod</strong> instead.</p>

<p><img class="center" src="http://blog.jaceklaskowski.pl/images/scala-idea-search-action-enter-distration-free-mode.png" title="Enter Distraction Free Mode" ></p>

<p>Either way &ndash; Presentation or Distraction Free mode &ndash; you&rsquo;ve got a clean desk and can concentrate on Scala much more easily (with the goodies of IntelliJ IDEA at your fingertips).</p>

<p>Following the advice in the empty workspace of IntelliJ IDEA, use <code>Cmd + E</code> to switch between files or <code>Cmd + Shift + E</code> to switch between files that were recently edited.</p>

<p><img class="center" src="http://blog.jaceklaskowski.pl/images/scala-idea-recently-edited-files.png" title="Recently Edited Files popup" ></p>

<p>Use <code>Cmd + O</code> to open traits or classes. Use <code>Cmd + Shift + O</code> to open any file like <code>build.sbt</code> or project resources.</p>

<p><img class="center" src="http://blog.jaceklaskowski.pl/images/scala-idea-enter-file-name-build-sbt.png" title="Enter file name popup" ></p>

<p>Last but not least, install the <a href="https://plugins.jetbrains.com/plugin/4230">BashSupport plugin</a> to have a fully-supported terminal inside IntelliJ IDEA. Use <code>Alt + F12</code> to open a terminal session. I use it to keep an sbt shell open running <code>~ test</code> so the tests execute every time the main and test sources change.</p>

<p><img class="center" src="http://blog.jaceklaskowski.pl/images/scala-idea-terminal-window-sbt-test.png" title="Terminal window with sbt test" ></p>

<p>I use the terminal to open Scala REPL when I need to try out a new API.</p>

<p><img class="center" src="http://blog.jaceklaskowski.pl/images/scala-idea-terminal-window-scala-repl.png" title="Terminal window with Scala REPL" ></p>

<p>There&rsquo;s also <code>Cmd + KeyUp</code> to select files from other directories in a project, or just switch to the Project view with <code>Cmd + 1</code>. You can use <code>Alt + F1</code> to select the target in which to view the currently open file.</p>

<p><img class="center" src="http://blog.jaceklaskowski.pl/images/scala-idea-change-directory-cmd-keyup.png" title="Cmd + KeyUp" ></p>

<h2>Summary</h2>

<p>There are plenty of ways and tools to help you learn Scala in a pleasant environment. I&rsquo;ve been using Sublime Text 3 and sbt for quite some time, and found myself very productive with this combo. In my case, though, <em>enough</em> turned out to mean <em>too lazy to learn more advanced developer tools</em>, i.e. IntelliJ IDEA.</p>

<p>Once I switched to IntelliJ IDEA and started using the features like Full Screen and Distraction Free modes with proper keystrokes, it became the development environment of choice to get full steam ahead into Scala.</p>

<p><a href="http://en.wikipedia.org/wiki/There_ain%27t_no_such_thing_as_a_free_lunch">There ain&rsquo;t no such thing as a free lunch</a>, and it does take time to hop onto new tooling and change habits. New things can often get in your way until you find them useful. As much as habits can help (speeding things up), they should not rule out alternatives and mark them worse by default. As <a href="http://www.forbes.com/sites/margiewarrell/2014/02/03/learn-unlearn-and-relearn/">Learn, Unlearn And Relearn: How To Stay Current And Get Ahead</a> puts it:</p>

<blockquote><p>Whatever the reasons, once the basics are covered, many people tend to stick with what they know and avoid situations or challenges where they may mess up or be forced to learn something new, thus creating a safe, secure and comfortable (and confining) world for themselves.</p></blockquote>

<p>Give the tips from the blog post a try and a month later you may well find them as pleasant as your own setup &ndash; or they may bring you even more joy! Let me know how it&rsquo;s worked out in the Comments section below.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Oh My Zsh's Plugins to Boost Happiness From Using Git]]></title>
    <link href="http://blog.jaceklaskowski.pl/2015/02/28/oh-my-zshs-plugins-to-boost-happiness-from-using-git.html"/>
    <updated>2015-02-28T14:26:40-05:00</updated>
    <id>http://blog.jaceklaskowski.pl/2015/02/28/oh-my-zshs-plugins-to-boost-happiness-from-using-git</id>
    <content type="html"><![CDATA[<p>From the website of the very useful and must-have addition to your terminal - <a href="http://ohmyz.sh/">Oh-My-Zsh</a>:</p>

<blockquote><p>Oh-My-Zsh is an open source, community-driven framework for managing your ZSH configuration. It comes bundled with a ton of helpful functions, helpers, plugins, themes, and a few things that make you shout&hellip;<strong>&ldquo;Oh My ZSH!&rdquo;</strong></p></blockquote>

<p>I&rsquo;m using the ZSH configuration framework so often that I can&rsquo;t believe I could&rsquo;ve lived without it for so long. Here&rsquo;s a collection of the plugins that cut the number of keystrokes while working with git repositories. To my great surprise there are quite a few plugins for git, and the list below is my humble summary &ndash; as much to get it remembered as to spread the word (I&rsquo;ve been meeting people unaware of Oh-My-Zsh far too often).</p>

<p><strong>FIXME</strong> Consider the blog post complete once this line&rsquo;s gone. It&rsquo;s always in a readable state, though.</p>

<!-- more -->


<h2>Oh Mom, there are 10 plugins for git!</h2>

<p>I was very surprised to have been presented with <strong>10</strong> plugins for git when I hit <code>TAB</code> to complete the <code>~/.oh-my-zsh/plugins/git</code> path:</p>

<ul>
<li><strong>git</strong> = (quoting README.md) <em>this plugin adds several git aliases and increase the completion function provided by zsh</em></li>
<li><strong>git-extras</strong> = ??? (<em>aka</em> unsure what it does - yet to be discovered)</li>
<li><strong>git-flow</strong> = (quoting the plugin&rsquo;s sources) <em>git-flow completion nirvana</em></li>
<li><strong>git-flow-avh</strong> = unsure how it&rsquo;s different from the <code>git-flow</code> plugin</li>
<li><strong>git-hubflow</strong> = unsure how it&rsquo;s different from the <code>git-flow</code> plugin</li>
<li><strong>git-prompt</strong> = (quoting the plugin&rsquo;s sources) <em>ZSH Git Prompt Plugin</em></li>
<li><strong>git-remote-branch</strong> = ??? (<em>aka</em> unsure what it does - yet to be discovered)</li>
<li><strong>gitfast</strong> = ??? (<em>aka</em> unsure what it does - yet to be discovered)</li>
<li><strong>github</strong> = ??? (<em>aka</em> unsure what it does - yet to be discovered)</li>
<li><strong>gitignore</strong> = ??? (<em>aka</em> unsure what it does - yet to be discovered)</li>
</ul>


<p>I <em>just</em> assume they are all plugins to ease my work with git repositories, and I&rsquo;m going to review them one by one and <em>amend</em> the blog post afterwards.</p>

<p>The commands I&rsquo;ve managed to master so far:</p>

<ul>
<li><code>gaa</code>   = <code>git add --all</code></li>
<li><code>gco</code>   = <code>git checkout</code></li>
<li><code>glo</code>   = <code>git log --oneline --decorate --color</code></li>
<li><code>glgga</code> = <code>git log --graph --decorate --all</code></li>
<li><code>gcm</code>   = <code>git checkout master</code></li>
<li><code>gd</code>    = <code>git diff</code></li>
<li><code>gdc</code>   = <code>git diff --cached</code></li>
<li><code>gc!</code>   = <code>git commit -v --amend</code></li>
<li><code>gcp</code>   = <code>git cherry-pick</code></li>
<li><code>gl</code>    = <code>git pull</code></li>
<li><code>gp</code>    = <code>git push</code></li>
<li><code>gst</code>   = <code>git status</code></li>
<li><code>grba</code>  = <code>git rebase --abort</code></li>
<li><code>grbi</code>  = <code>git rebase -i</code></li>
<li><code>grbc</code>  = <code>git rebase --continue</code></li>
<li><code>grh</code>   = <code>git reset HEAD</code></li>
<li><code>grhh</code>  = <code>git reset HEAD --hard</code></li>
<li><code>grv</code>   = <code>git remote -v</code></li>
<li><code>gwc</code>   = <code>git whatchanged -p --abbrev-commit --pretty=medium</code></li>
</ul>
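<p>By the way, nothing in these shortcuts is zsh-specific magic &ndash; if Oh-My-Zsh is not an option, plain git aliases get you most of the way. A hypothetical sketch (written to a throwaway config file rather than <code>~/.gitconfig</code>):</p>

```shell
# Hypothetical equivalents of a few of the shortcuts above as git aliases.
# A temp file stands in for ~/.gitconfig so nothing global is touched.
cfg=$(mktemp)
git config --file "$cfg" alias.st status
git config --file "$cfg" alias.co checkout
git config --file "$cfg" alias.lo 'log --oneline --decorate --color'

# With these in ~/.gitconfig you'd type `git st` instead of `git status`:
git config --file "$cfg" --get alias.st   # prints: status
```

The trade-off: git aliases only work after the <code>git</code> command itself, while the zsh aliases save those four keystrokes too.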


<p>Read the official <a href="https://github.com/robbyrussell/oh-my-zsh/wiki/Plugins">Plugins</a> page to learn how to enable plugins in your configuration.</p>

<p>My configuration includes the following plugins, <code>git</code> included:</p>

<pre><code>➜  ~  grep -e "plugins=(" ~/.zshrc | grep -e "^[^#]"
plugins=(git osx brew common-aliases)
</code></pre>

<p>I&rsquo;m totally aware that there&rsquo;s plenty of room for improvement here. Let me know what I&rsquo;m missing (and where I&rsquo;m wasting my time still). I&rsquo;d wholeheartedly appreciate any time savings.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[REST Clients (for Working With REST Microservices)]]></title>
    <link href="http://blog.jaceklaskowski.pl/2015/02/11/rest-clients-for-working-with-rest-microservices.html"/>
    <updated>2015-02-11T17:02:42-05:00</updated>
    <id>http://blog.jaceklaskowski.pl/2015/02/11/rest-clients-for-working-with-rest-microservices</id>
    <content type="html"><![CDATA[<p>It&rsquo;s so engaging to learn things while learning others. So is the case with the new architectural style <strong>REST microservices</strong> and developing them using <a href="http://www.scala-lang.org/">Scala</a> language and tools.</p>

<p>There are a lot of REST interactions &ndash; sending requests to and receiving responses from REST microservices &ndash; and although I don&rsquo;t really need to comprehend every bit of the communication format, I do need tools to work with them effectively, i.e. to be able to create and consume HTTP packets with JSON payloads with little to no effort.</p>

<!-- more -->


<h2>HTTPie (pronounced <em>aych-tee-tee-pie</em>)</h2>

<p>I&rsquo;ve been using <code>curl</code> for so long that I hardly remember when it all began. It&rsquo;s a very handy tool, alas too complex at times.</p>

<p>Not so long ago I stumbled upon another command-line tool called <a href="http://httpie.org">HTTPie</a>. Installation on Mac OS X with <code>brew</code> was just <code>brew install httpie</code> away.</p>

<pre><code>➜  ~  brew install httpie
Warning: httpie-0.9.1 already installed
</code></pre>

<p><code>HTTPie</code> <a href="http://httpie.org">describes itself as</a>:</p>

<blockquote><p>HTTPie (pronounced <em>aych-tee-tee-pie</em>) is a <strong>command line HTTP client</strong>. Its goal is to make CLI interaction with web services as <strong>human-friendly</strong> as possible. It provides a simple <code>http</code> command that allows for sending arbitrary HTTP requests using a simple and natural syntax, and displays colorized responses.</p></blockquote>

<p>The tool&rsquo;s so simple and intuitive that I still find it hard to believe people could keep using <code>curl</code>. I for one used <code>curl</code> for so long that I almost lost the ability to sense how much pain I was experiencing with it. I got used to it and didn&rsquo;t notice I was suffering enormously.</p>

<p><code>GET</code> requests are pretty similar in both tools (I still think <code>httpie</code> is simpler).</p>

<pre><code>http -v --json localhost:9000/api/person
</code></pre>

<p>Where the difference shows up is sending <code>POST</code> requests with a JSON payload as <code>application/json</code> (<code>-v</code> makes the communication verbose):</p>

<pre><code>➜  ~  http -v localhost:9000/api/person name=jacek
POST /api/person HTTP/1.1
Accept: application/json
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 17
Content-Type: application/json; charset=utf-8
Host: localhost:9000
User-Agent: HTTPie/0.9.1

{
    "name": "jacek"
}

HTTP/1.1 200 OK
Content-Length: 21
Content-Type: application/json; charset=UTF-8
Date: Thu, 12 Feb 2015 09:35:46 GMT
Server: tmprl

{
    "name": "jacek"
}
</code></pre>

<p>First, I don&rsquo;t have to state <em>explicitly</em> that I&rsquo;m sending a POST request, since the <code>key=value</code> pair (in the example above <code>name=jacek</code>, and there could be more) implies it. Less typing, fewer errors.</p>

<p>Second, I don&rsquo;t have to build the JSON payload myself, <code>key=value</code> pairs are enough to inform the tool to create one for me. Less typing, again.</p>
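<p>For comparison, a sketch of what the same request means in <code>curl</code> terms &ndash; the payload and headers you&rsquo;d otherwise spell out by hand (the command is only echoed here, since it assumes the local server from the example above is running):</p>

```shell
# The JSON body httpie derives from name=jacek, written out manually:
payload='{"name": "jacek"}'

# The roughly equivalent curl invocation (echoed, not executed, because it
# needs the example server on localhost:9000 to be up):
echo curl -s -X POST \
  -H 'Content-Type: application/json' \
  -d "$payload" \
  http://localhost:9000/api/person
```

Every piece of that command line &ndash; the method, the header, the quoting of the body &ndash; is something <code>httpie</code> infers for you.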

<p>Last but not least, colorized responses (aka formatted and colorized terminal output). That could&rsquo;ve made a difference and it did!</p>

<p>Just these three features made <code>httpie</code> an indispensable tool for me.</p>

<p><code>httpie</code> has changed how I test my REST microservices using JSON now. As is JSON to REST microservices, so is <code>httpie</code> to the new architecture.</p>

<h2>Postman</h2>

<p>It appears that command-line tools are not always at a premium, and having a few GUI tools might be of help at times.</p>

<p>Just a couple of days ago I saw <a href="http://www.getpostman.com/">Postman - REST Client</a> being used by a teammate. And it wouldn&rsquo;t have been anything special if I had not heard about the tool from another person during my presentation about REST microservices with JSON &ndash; just a day after I&rsquo;d seen it for the very first time (!) Coincidence? I don&rsquo;t think so!</p>

<p>Postman is a plugin for Google Chrome so the installation went smoothly. Just a click and it&rsquo;s done. You should give it a go, too (unless you&rsquo;ve already done so and have a few remarks to share in the comments to the blog post - I&rsquo;d appreciate them very much).</p>

<h2>Advanced REST client</h2>

<p>What made the situation more interesting was that even though Postman was mentioned a few times here and there, it seems that <a href="https://code.google.com/p/chrome-rest-client/">Advanced REST client</a> beats Postman by the number of stars - 6791 vs 2635. Why is that?! It&rsquo;s over 200%! How could it be that Postman is more famous (in my circles) without Advanced REST client even being mentioned?!</p>

<p><img class="center" src="http://blog.jaceklaskowski.pl/images/postman-rest-client-vs-advanced-rest-client.png" title="Postman REST client vs Advanced REST client" ></p>

<p>It&rsquo;s installed and I&rsquo;m going to give it a whirl, too.</p>

<p>What are your tools to work with REST microservices with JSON as the data format? Let me know in the comments!</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Sbt-dependency-graph for Easier Dependency Management in Sbt]]></title>
    <link href="http://blog.jaceklaskowski.pl/2014/11/29/sbt-dependency-graph-for-easier-dependency-management-in-sbt.html"/>
    <updated>2014-11-29T17:10:06-05:00</updated>
    <id>http://blog.jaceklaskowski.pl/2014/11/29/sbt-dependency-graph-for-easier-dependency-management-in-sbt</id>
    <content type="html"><![CDATA[<p>That&rsquo;s gonna be short and hopefully simple. If you&rsquo;re with <a href="http://www.scala-sbt.org/">sbt</a> you&rsquo;re going to like <a href="https://github.com/jrudolph/sbt-dependency-graph">sbt-dependency-graph</a> <em>plugin to create a dependency graph for your project</em> very much.</p>

<!-- more -->


<p>Edit <code>~/.sbt/0.13/plugins/sbt-dependency-graph.sbt</code> so it looks as follows:</p>

<pre><code>addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.7.4")
</code></pre>

<p>Edit <code>~/.sbt/0.13/global.sbt</code> so it looks as follows:</p>

<pre><code>net.virtualvoid.sbt.graph.Plugin.graphSettings
</code></pre>

<p>With these two files, open <code>sbt</code> or <code>activator</code> and execute <code>dependencyGraph</code> (I used <a href="https://github.com/jaceklaskowski/hello-slick-specs2">hello-slick-specs2</a> project):</p>

<pre><code>&gt; dependencyGraph
[info] Updating {file:/Users/jacek/dev/oss/hello-slick-specs2/}hello-slick-specs2...
[info] Resolving jline#jline;2.12 ...
[info] Done updating.
[info]                             +---------------------------+
[info]                             |hello-slick-specs2_2.11 [S]|
[info]                             |          default          |
[info]                             |            1.0            |
[info]                             +---------------------------+
[info]                                    |     |   |    |
[info]                ---------------------     |   |    ----------------------------
[info]                |                         |   -----------------               |
[info]                v                         v                   |               |
[info]           +---------+          +------------------+          |               |
[info]           |slf4j-nop|          |  slick_2.11 [S]  |          |               |
[info]           |org.slf4j|          |com.typesafe.slick|          |               |
[info]           |  1.7.7  |          |      2.1.0       |          |               |
[info]           +---------+          +------------------+          |               |
[info]                |                   |   ||      |             |               |
[info]      -----------                   |   ||      ---------     |               |
[info]      |  ----------------------------   ||              |     |               |
[info]      |  |             ------------------|              |     |               |
[info]      |  |             |                 |              |     |               |
[info]      v  v             v                 v              v     v               v
[info]  +---------+ +-----------------+ +------------+ +------------------+ +--------------+
[info]  |slf4j-api| |    slf4j-api    | |   config   | |  scala-library   | |      h2      |
[info]  |org.slf4j| |    org.slf4j    | |com.typesafe| |  org.scala-lang  | |com.h2database|
[info]  |  1.7.7  | |      1.6.4      | |   1.2.1    | |      2.11.1      | |   1.4.182    |
[info]  +---------+ |evicted by: 1.7.7| +------------+ |evicted by: 2.11.4| +--------------+
[info]              +-----------------+                +------------------+
[info] Note: The old tree layout is still available by using `dependency-tree`
[success] Total time: 0 s, completed Nov 29, 2014 11:19:30 PM
</code></pre>

<p>Neat, isn&rsquo;t it?</p>

<p>You may also want to execute <code>dependencyGraphMl</code>:</p>

<pre><code>&gt; dependencyGraphMl
[info] Wrote dependency graph to '/Users/jacek/dev/oss/hello-slick-specs2/target/dependencies-compile.graphml'
[success] Total time: 0 s, completed Nov 29, 2014 11:21:46 PM
</code></pre>

<p>Install <a href="http://www.yworks.com/en/products/yfiles/yed/">yEd</a> and open the graph:</p>

<pre><code>&gt; eval "open target/dependencies-compile.graphml" !
[info] ans: Int = 0
</code></pre>

<p><img src="http://blog.jaceklaskowski.pl/images/hello-slick-specs2-yed-graph.png" title="yEd graph of compile dependencies" ></p>

<p>I really wish I&rsquo;d known it earlier. It&rsquo;d surely have saved me a lot of time.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Loose Notes About Cassandra]]></title>
    <link href="http://blog.jaceklaskowski.pl/2014/09/25/loose-notes-about-cassandra.html"/>
    <updated>2014-09-25T03:05:39-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2014/09/25/loose-notes-about-cassandra</id>
    <content type="html"><![CDATA[<p>I&rsquo;ve got curious about <a href="http://cassandra.apache.org/">Cassandra</a> and more and more I&rsquo;ve been asking myself about the use cases it could be the most valuable solution for. With all the answers out there, I simply needed a place where I could <em>dump</em> the findings to ultimately build something useful as a single page (and perhaps ditch it to create something of a higher value).</p>

<p>So here they are, <em>loose notes</em> about Cassandra, to understand the value propositions of the database and where it could fit well. Should I ever be faced with the question of whether to use Cassandra or not, I may some day find an answer here (or know where to look for it).</p>

<!-- more -->


<p>My story with <a href="http://cassandra.apache.org/">Cassandra</a> began with the <a href="https://github.com/datastax/spark-cassandra-connector">Spark Cassandra Connector</a> project that enables <a href="https://spark.apache.org/">Apache Spark</a> to use Cassandra to have <a href="https://github.com/datastax/spark-cassandra-connector#spark-cassandra-connector-">lightning-fast cluster computing with Spark and Cassandra</a>.</p>

<p>The latest version of Cassandra is <strong>2.1.0</strong>.</p>

<h2>Step 1. Installation</h2>

<p>I installed Cassandra using <code>brew</code> on Mac OS X. It was a mere <code>brew install cassandra</code> and took a few secs.</p>

<pre><code>➜  ~  brew info cassandra
cassandra: stable 2.1.0
http://cassandra.apache.org
/usr/local/Cellar/cassandra/2.1.0 (3899 files, 92M) *
  Built from source
From: https://github.com/Homebrew/homebrew/blob/master/Library/Formula/cassandra.rb
==&gt; Caveats
If you plan to use the CQL shell (cqlsh), you will need the Python CQL library
installed. Since Homebrew prefers using pip for Python packages, you can
install that using:

  pip install cql

To have launchd start cassandra at login:
    ln -sfv /usr/local/opt/cassandra/*.plist ~/Library/LaunchAgents
Then to load cassandra now:
    launchctl load ~/Library/LaunchAgents/homebrew.mxcl.cassandra.plist
</code></pre>

<p>Execute <code>sudo pip install cql</code> to install the Python packages.</p>

<pre><code>➜  ~  sudo pip install cql
Downloading/unpacking cql
  Downloading cql-1.4.0.tar.gz (76kB): 76kB downloaded
  Running setup.py (path:/private/tmp/pip_build_root/cql/setup.py) egg_info for package cql

Requirement already satisfied (use --upgrade to upgrade): thrift in /Library/Python/2.7/site-packages (from cql)
Installing collected packages: cql
  Running setup.py install for cql

Successfully installed cql
Cleaning up...
</code></pre>

<p>That should be all needed to get you going.</p>

<h2>Step 2. Running the server</h2>

<p>Start an instance using <code>cassandra -f</code>.</p>

<pre><code>➜  ~  cassandra -f
objc[43878]: Class JavaLaunchHelper is implemented in both /Library/Java/JavaVirtualMachines/Current/Contents/Home/bin/java and /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home/jre/lib/libinstrument.dylib. One of the two will be used. Which one is undefined.
INFO  22:49:39 Hostname: japila.local
INFO  22:49:39 Loading settings from file:/usr/local/etc/cassandra/cassandra.yaml
...
INFO  22:47:52 Using Netty Version: [netty-buffer=netty-buffer-4.0.20.Final.1709113, netty-codec=netty-codec-4.0.20.Final.1709113, netty-codec-http=netty-codec-http-4.0.20.Final.1709113, netty-codec-socks=netty-codec-socks-4.0.20.Final.1709113, netty-common=netty-common-4.0.20.Final.1709113, netty-handler=netty-handler-4.0.20.Final.1709113, netty-transport=netty-transport-4.0.20.Final.1709113, netty-transport-rxtx=netty-transport-rxtx-4.0.20.Final.1709113, netty-transport-sctp=netty-transport-sctp-4.0.20.Final.1709113, netty-transport-udt=netty-transport-udt-4.0.20.Final.1709113]
INFO  22:47:52 Starting listening for CQL clients on localhost/127.0.0.1:9042...
INFO  22:47:52 Binding thrift service to localhost/127.0.0.1:9160
INFO  22:47:52 Listening for thrift clients...
</code></pre>

<h2>Step 3. Using cqlsh</h2>

<p>In another terminal, start the Cassandra shell using <code>cqlsh</code>:</p>

<pre><code>➜  ~  cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.1.0 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh&gt;
</code></pre>

<p>You may face the issue <a href="http://stackoverflow.com/q/25980774/1305344">Why does cqlsh fail with LookupError: unknown encoding?</a> which can be easily solved by setting <code>LC_ALL</code> appropriately. Mine&rsquo;s <code>pl_pl.utf-8</code>:</p>

<pre><code>export LC_ALL=pl_pl.utf-8
</code></pre>

<p>Copying <a href="http://wiki.apache.org/cassandra/GettingStarted">Step 4: Using cqlsh</a> from the official Getting Started documentation:</p>

<pre><code>CREATE KEYSPACE mykeyspace
WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };

USE mykeyspace;

CREATE TABLE users (
  user_id int PRIMARY KEY,
  fname text,
  lname text
);

INSERT INTO users (user_id,  fname, lname)
  VALUES (1745, 'john', 'smith');
INSERT INTO users (user_id,  fname, lname)
  VALUES (1744, 'john', 'doe');
INSERT INTO users (user_id,  fname, lname)
  VALUES (1746, 'john', 'smith');

SELECT * FROM users;
</code></pre>
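<p>To go one step further than the snippet above: querying by a non-key column such as <code>lname</code> requires a secondary index first. A sketch, to be run in the same <code>cqlsh</code> session:</p>

```sql
-- Without an index, WHERE on a non-key column is rejected by Cassandra.
CREATE INDEX ON users (lname);

-- Now rows can be looked up by last name:
SELECT * FROM users WHERE lname = 'smith';
```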
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Mistakes Introducing Slick for Database Access Under Play Framework 2.3.4 and 2.4.0-M1]]></title>
    <link href="http://blog.jaceklaskowski.pl/2014/09/12/mistakes-introducing-slick-for-database-access-under-play-framework-2-dot-3-4-and-2-dot-4-0-m1.html"/>
    <updated>2014-09-12T15:47:50-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2014/09/12/mistakes-introducing-slick-for-database-access-under-play-framework-2-dot-3-4-and-2-dot-4-0-m1</id>
    <content type="html"><![CDATA[<p>This is a summary of my attempt to run <a href="http://slick.typesafe.com/">Slick</a> under <a href="https://www.playframework.com/">Play Framework</a> <strong>2.3.4</strong> and <strong>2.4.0-M1</strong> that ultimately turned out to be quite a successful endeavour.</p>

<p>I’m currently using Play’s Anorm, and writing the queries myself was not something I much liked. Slick has been getting good press, so it was a no-brainer to pick it as an alternative.</p>

<!-- more -->


<h2>Mistake #1. Hurting myself with cutting edge versions</h2>

<p>Reading the documentation at <a href="http://slick.typesafe.com/">http://slick.typesafe.com/</a> was not very informative as there was no word about how to embed it in a Play Framework application. There should really be a &ldquo;Getting started with Slick and Play Framework&rdquo; tutorial since both are part of <a href="https://typesafe.com/platform">The Typesafe Reactive Platform</a>.</p>

<p>I added Slick to <code>build.sbt</code> as follows:</p>

<pre><code>"com.typesafe.slick" %% "slick" % "2.1.0"
</code></pre>

<p>And then I realised I’d have to do:</p>

<pre><code>Database.forURL("jdbc:h2:mem:test1", driver = "org.h2.Driver") withSession {
  implicit session =&gt;
  // &lt;- write queries here
}
</code></pre>

<p>but Play Framework gives me:</p>

<pre><code>DB.withConnection("sayenedb") { implicit c =&gt;
    ...
}
</code></pre>

<p>It was certainly not the way to follow. I needed a solution that would read the database configuration from Play&rsquo;s own configuration.</p>

<h2>Mistake #2. Using play-slick with Play 2.4.0-M1</h2>

<p>Time has come to give <a href="https://github.com/playframework/play-slick">play-slick</a> a serious try. The project aims to make <em>&ldquo;Slick a first-class citizen of Play 2.x.”</em></p>

<p>It took me a while to abandon the idea of running play-slick with Play 2.4.0-M1, given <a href="https://groups.google.com/d/msg/play-framework/m_bxuqgSKgk/Z4WgfUer19wJ">this announcement</a>:</p>

<blockquote><p>The purpose of this release is to get feedback about the approach to dependency injection that we&rsquo;re implementing in Play 2.4.  The old Play plugins mechanism is going to be deprecated.  For a detailed overview of the different styles of DI available in Play 2.4, please read here:</p>

<p><a href="https://www.playframework.com/documentation/2.4.x/ScalaDependencyInjection">https://www.playframework.com/documentation/2.4.x/ScalaDependencyInjection</a>
<a href="https://www.playframework.com/documentation/2.4.x/ScalaCompileTimeDependencyInjection">https://www.playframework.com/documentation/2.4.x/ScalaCompileTimeDependencyInjection</a>
<a href="https://www.playframework.com/documentation/2.4.x/JavaDependencyInjection">https://www.playframework.com/documentation/2.4.x/JavaDependencyInjection</a></p></blockquote>

<p>I reported <a href="https://github.com/playframework/play-slick/issues/208">an issue for play-slick</a> hoping the developers would notice the missing integration point and that support for Play 2.4 would arrive sooner rather than later.</p>

<h2>Mistake #3. Using custom db configuration in Play</h2>

<p>The application uses no <code>db.default</code> configuration in <code>application.conf</code> in Play - just a custom one. The result?</p>

<pre><code>[error] application -

! @6jg39b0pk - Internal server error, for (GET) [/tips/wgcategories] -&gt;

play.api.Configuration$$anon$1: Configuration error[Slick error : jdbc driver not defined in application.conf for db.default.driver key]
     at play.api.Configuration$.play$api$Configuration$$configError(Configuration.scala:94) ~[play_2.11-2.3.4.jar:2.3.4]
     at play.api.Configuration.reportError(Configuration.scala:743) ~[play_2.11-2.3.4.jar:2.3.4]
     at play.api.db.slick.Config$$anonfun$1.apply(Config.scala:64) ~[play-slick_2.11-0.8.0.jar:0.8.0]
     at play.api.db.slick.Config$$anonfun$1.apply(Config.scala:64) ~[play-slick_2.11-0.8.0.jar:0.8.0]
     at scala.Option.getOrElse(Option.scala:120) ~[scala-library-2.11.2.jar:na]
</code></pre>

<p>I had to add the following entries to work around it:</p>

<pre><code>db.default.driver=org.postgresql.Driver
db.default.url=${?DB_CONN}
</code></pre>

<p><code>DB_CONN</code> is a property I set on the command line at Play startup.</p>

<p>Using two datasources required following <a href="https://github.com/playframework/play-slick/wiki/ScalaSlickDrivers">Advanced drivers configuration</a> and applying the (in)famous Cake pattern.</p>
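<p>The Cake pattern mentioned above can be sketched in a few lines of plain Scala (hypothetical component names, no Slick involved): each component declares its dependencies through a self-type, and a single object mixes the concrete pieces together.</p>

```scala
// A minimal sketch of the Cake pattern (hypothetical names, no Slick):
// the repository component declares its dependency on a database
// component via a self-type instead of a constructor parameter.
trait DatabaseComponent {
  def databaseUrl: String
}

trait UserRepositoryComponent { this: DatabaseComponent =>
  class UserRepository {
    def describe: String = s"users stored at $databaseUrl"
  }
  lazy val users = new UserRepository
}

// One object wires the concrete pieces together.
object App extends UserRepositoryComponent with DatabaseComponent {
  val databaseUrl = "jdbc:postgresql://localhost/app"
}
```

With two datasources you would have two database components and choose the mix-in per environment.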

<h2>Mistake #4. Copying examples to production code - PostgreSQL is case sensitive for field names in quotes</h2>

<p>I had the following in my Component:</p>

<pre><code>def id = column[Int]("ID", O.PrimaryKey, O.AutoInc)
</code></pre>

<p>That generated a query with <code>"ID"</code> in the <code>select</code> clause, which in turn resulted in the following error:</p>

<pre><code>STATEMENT:  select s13."ID", s13."name" from “xxx" s13;
ERROR:  column s13.ID does not exist at character 8
</code></pre>

<p><a href="https://twitter.com/rgielen/status/510501297473462272">As Rene pointed out in his tweet</a>:</p>

<blockquote><p>@jaceklaskowski not true - #PostgreSQL follows SQL standard. Columns names are case insensitive unless created with quotation marks</p></blockquote>

<p>He was right - changing <code>ID</code> to <code>id</code> indeed fixed the issue.</p>

<pre><code>def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
</code></pre>

<h2>Mistake #5. [SI-3664] Explicit case class companion does not extend Function / override toString</h2>

<p>There&rsquo;s an issue reported against the Scala compiler - <a href="https://issues.scala-lang.org/browse/SI-3664">[SI-3664] Explicit case class companion does not extend Function / override toString</a> - that stands in the way of the <code>*</code> projection, i.e. <code>def * = ...</code>, in your table description:</p>

<pre><code>class Users(tag: Tag) extends Table[User](tag, "users") {
    def id    = column[Int]   ("id", O.PrimaryKey, O.AutoInc)
    def login = column[String]("login")

    def * = (id, login) &lt;&gt; (User.tupled, User.unapply)
}
</code></pre>

<p>For the <code>Users</code> class above the Scala compiler fails, reporting:</p>

<blockquote><p>value tupled is not a member of object model.User</p></blockquote>

<p>A solution is described in the <a href="http://slick.typesafe.com/doc/2.1.0/upgrade.html#mapped-tables">Mapped Tables</a> section of the <a href="http://slick.typesafe.com/doc/2.1.0/upgrade.html">UPGRADE GUIDES</a> document in the Slick manual:</p>

<blockquote><p>Note that <code>.tupled</code> is only available for proper Scala functions.<br/>
When using a case class, the companion object extends the correct function type by default, but only if you do not define the object yourself. In that case you should provide the right supertype manually.</p></blockquote>

<p>The mapping definition can look as follows:</p>

<pre><code>class Users(tag: Tag) extends Table[User](tag, "users") {
    def id    = column[Int]   ("id", O.PrimaryKey, O.AutoInc)
    def login = column[String]("login")

    def * = (id, login) &lt;&gt; ((User.apply _).tupled, User.unapply)
}
</code></pre>

<p>Note <code>(User.apply _).tupled</code>.</p>
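<p>A self-contained way to see both the compiler error and the fix, with a hypothetical <code>User</code> and no Slick required:</p>

```scala
// Once the companion object is defined explicitly, it no longer
// extends (Int, String) => User, so User.tupled does not compile.
case class User(id: Int, login: String)
object User {
  val guest = User(0, "guest")
}

// val f = User.tupled  // error: value tupled is not a member of object User
// Eta-expanding apply yields a proper Function2, which does have .tupled:
val fromTuple: ((Int, String)) => User = (User.apply _).tupled
val jacek = fromTuple((1, "jacek"))
```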

<h2>Mistake #6. JodaTime support</h2>

<p>For cases where you need to use JodaTime types in Slick you should use <a href="https://github.com/tototoshi/slick-joda-mapper#slick-joda-mapper">slick-joda-mapper</a>.</p>

<p>Otherwise you have to stick to <code>java.sql.Date</code>, <code>java.sql.Time</code>, <code>java.sql.Timestamp</code> as described in <a href="http://slick.typesafe.com/doc/2.1.0/schemas.html?highlight=date#table-rows">Table Rows</a> in the Slick documentation.</p>

<h2>Summary</h2>

<p>Using a non-default configuration is always a kind of minefield. Stay away from it unless you’re adventurous and have enough time and patience to fix issues along the way. You’ve been warned.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Getting Started With Play Framework and AngularJS - Day 1]]></title>
    <link href="http://blog.jaceklaskowski.pl/2014/09/08/getting-started-with-play-framework-and-angularjs-day-1.html"/>
    <updated>2014-09-08T06:42:49-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2014/09/08/getting-started-with-play-framework-and-angularjs-day-1</id>
    <content type="html"><![CDATA[<p>Let&rsquo;s face it &ndash; there are tons of very good tutorials about how to get started with <a href="https://www.playframework.com/">Play Framework</a> and <a href="https://angularjs.org/">AngularJS</a> to build modern web applications, and yet there are still quite a few people out there who keep asking me to write mine. Since writing tutorials is my way of clearing up my understanding of a topic, I found it very compelling to help myself and others use the two - Play and Angular - properly. Well, the <em>properly</em> part comes with your comments when I fix the parts that are outdated or plain wrong.</p>

<p>So, here it is &ndash; yet another tutorial series about developing web applications in Play Framework (with Scala) and AngularJS (in JavaScript). Let&rsquo;s get rolling!</p>

<p><strong>NOTE</strong> It&rsquo;s a work in progress. Watch this space until the note has disappeared and the blog post becomes feature-complete.</p>

<!-- more -->


<h2>Step 1. Installing Typesafe Activator</h2>

<p>Download Typesafe Activator from <a href="http://typesafe.com/activator">http://typesafe.com/activator</a> and install it in a directory. Add the directory to <code>PATH</code> so you&rsquo;ll be able to execute <code>activator</code> from any place in your file system without having to use the fully-qualified path.</p>

<p>Execute <code>activator ui</code> to open up the Activator UI in a browser. Go to <a href="http://localhost:8888">http://localhost:8888</a>.</p>

<p>See what&rsquo;s available in the UI and once you&rsquo;re satisfied, go to the command line where <code>activator ui</code> is running and press <code>Ctrl+C</code> to stop the UI process.</p>

<h2>Step 2. Creating hello-play-tutorial web application</h2>

<p>In a directory of your choice, execute <code>activator new hello-play-tutorial play-scala</code> to create a web application using <a href="https://typesafe.com/activator/template/play-scala">the Play Scala Seed template</a>.</p>

<p><strong>Pro-tip:</strong> Read the output from the command so you learn what you can do with <code>activator</code> that I&rsquo;m not going to cover in the article.</p>

<pre><code>➜  sandbox  activator new hello-play-tutorial play-scala

Fetching the latest list of templates...

OK, application "hello-play-tutorial" is being created using the "play-scala" template.

To run "hello-play-tutorial" from the command line, "cd hello-play-tutorial" then:
/Users/jacek/sandbox/hello-play-tutorial/activator run

To run the test for "hello-play-tutorial" from the command line, "cd hello-play-tutorial" then:
/Users/jacek/sandbox/hello-play-tutorial/activator test

To run the Activator UI for "hello-play-tutorial" from the command line, "cd hello-play-tutorial" then:
/Users/jacek/sandbox/hello-play-tutorial/activator ui
</code></pre>

<p>Change the working directory to <code>hello-play-tutorial</code> and run <code>activator run</code>. Wait until the message <code>(Server started, use Ctrl+D to stop and go back to the console...)</code> shows up.</p>

<pre><code>➜  sandbox  cd hello-play-tutorial
➜  hello-play-tutorial  activator run
[info] Loading global plugins from /Users/jacek/.sbt/0.13/plugins
[info] Loading project definition from /Users/jacek/sandbox/hello-play-tutorial/project
[info] Updating {file:/Users/jacek/sandbox/hello-play-tutorial/project/}hello-play-tutorial-build...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Set current project to hello-play-tutorial (in build file:/Users/jacek/sandbox/hello-play-tutorial/)
[info] Updating {file:/Users/jacek/sandbox/hello-play-tutorial/}root...
[info] Resolving jline#jline;2.11 ...
[info] Done updating.

--- (Running the application from SBT, auto-reloading is enabled) ---

[info] play - Listening for HTTP on /0:0:0:0:0:0:0:0:9000

(Server started, use Ctrl+D to stop and go back to the console...)
</code></pre>

<p>Visit <a href="http://localhost:9000">http://localhost:9000</a> to open up the web application in a browser.</p>

<p><strong>Pro-tip:</strong> Read the content of the welcome page so you know what the sample application offers beyond what is included in the article.</p>

<h2>Step 3. (optional) Deploying to a cloud service - CloudBees</h2>

<p>According to the official documentation of Play Framework in <a href="https://www.playframework.com/documentation/2.4.x/Deploying-to-CloudBees">Deploying to Cloudbees</a>:</p>

<blockquote><p>CloudBees support Play dists natively - with Jenkins and continuous deployment</p></blockquote>

<p>that is in turn confirmed in the official documentation of CloudBees in <a href="https://developer.cloudbees.com/bin/view/RUN/Playframework">RUN@cloud » Play Framework</a>:</p>

<blockquote><p>CloudBees includes first-class support for running Play! applications in the Cloud.</p></blockquote>

<p>Install <a href="http://developer.cloudbees.com/bin/view/RUN/BeesSDK">CloudBees SDK</a>. On Mac OS X it&rsquo;s as easy as <code>brew install cloudbees-sdk</code>.</p>

<p>Start with <code>bees init</code> and provide the necessary configuration before moving on to deploying the Play application.</p>

<p>Execute <code>bees app:create hello-play-tutorial -t play2</code> to configure the application on CloudBees.</p>

<pre><code>➜  hello-play-tutorial git:(master) bees app:create hello-play-tutorial -t play2
Application: jaceklaskowski/hello-play-tutorial
    url: hello-play-tutorial.jaceklaskowski.eu.cloudbees.net
</code></pre>

<p>Create a new git repository under <code>Repos</code> in the CloudBees Administrative Console, then execute <code>git init</code> followed by <code>git add .</code> and <code>git commit -m 'Initial commit'</code> in the directory.</p>

<pre><code>➜  hello-play-tutorial git:(master) git remote add cloudbees https://git.cloudbees.com/jaceklaskowski/hello-play-tutorial.git
➜  hello-play-tutorial git:(master) git push --mirror cloudbees
Counting objects: 30, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (25/25), done.
Writing objects: 100% (30/30), 1010.69 KiB | 0 bytes/s, done.
Total 30 (delta 0), reused 0 (delta 0)
To https://git.cloudbees.com/jaceklaskowski/hello-play-tutorial.git
 * [new branch]      master -&gt; master
➜  hello-play-tutorial git:(master) activator clean dist
...
➜  hello-play-tutorial git:(master) bees app:deploy -t play2 -a hello-play-tutorial target/universal/hello-play-tutorial-1.0-SNAPSHOT.zip
Deploying application jaceklaskowski/hello-play-tutorial (environment: ): target/universal/hello-play-tutorial-1.0-SNAPSHOT.zip
Application parameters: {containerType=play2}
........................uploaded 25%
........................uploaded 50%
........................uploaded 75%
........................upload completed
deploying application to server(s)...
Application jaceklaskowski/hello-play-tutorial deployed: http://hello-play-tutorial.jaceklaskowski.eu.cloudbees.net
</code></pre>

<p>Access <a href="http://hello-play-tutorial.jaceklaskowski.eu.cloudbees.net">http://hello-play-tutorial.jaceklaskowski.eu.cloudbees.net</a> (mind that yours is different) to see the application deployed and running.</p>

<p>Set up a Jenkins job so the application is deployed on every <code>git push cloudbees</code>.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[JSON in Play Framework With JsValue and Reads]]></title>
    <link href="http://blog.jaceklaskowski.pl/2014/09/02/json-in-play-framework-with-jsvalue-and-reads.html"/>
    <updated>2014-09-02T10:00:00-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2014/09/02/json-in-play-framework-with-jsvalue-and-reads</id>
    <content type="html"><![CDATA[<p>There are many ways to learn <a href="http://www.scala-lang.org/">the Scala programming language</a> and the vast number of libraries for it. Mine is to use the <a href="http://www.scala-sbt.org/">sbt</a> console in a customized project with the required dependencies, which sbt downloads automatically. Everything (analysing, downloading, setting up the CLASSPATH and such) is handled by the tooling itself, not me. Share your approach if it appears smarter.</p>

<p>In this installment, I&rsquo;m presenting an sbt build for learning the JSON API from the <a href="https://www.playframework.com/documentation/2.4.x/ScalaJson">play-json</a> module in the <a href="https://www.playframework.com/documentation/2.4.x/api/scala/index.html#play.api.libs.json.package">play.api.libs.json</a> package in <a href="https://www.playframework.com/">Play Framework 2.4.0-M1</a>.</p>

<!-- more -->


<p>It’s a code-centric version of the article <a href="https://www.playframework.com/documentation/2.4.x/ScalaJsonInception">JSON Macro Inception</a> from the official documentation of Play Framework.</p>

<p>Start a new activator/sbt project with the following build definition in <code>build.sbt</code>:</p>

<pre><code>scalaVersion := "2.11.2"

val playVersion = "2.4.0-M1"

libraryDependencies += "com.typesafe.play" %% "play-json" % playVersion
</code></pre>

<p>On the command line execute <code>sbt</code> and then, while in the sbt shell, <code>console</code>.</p>

<pre><code>&gt; console
[info] Starting scala interpreter...
[info]
Welcome to Scala version 2.11.2 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_20).
Type in expressions to have them evaluated.
Type :help for more information.
</code></pre>

<p>When you see the above (the Java version might differ depending on your configuration), you’re in an interactive development environment - the Scala REPL - with the play-json library and Scala 2.11.2. The world of JSON’s (almost) yours!</p>

<p><code>:pa</code> (a shortcut for <code>:paste</code>) enters paste mode so you can copy and paste entire Scala statements.</p>

<pre><code>scala&gt; :pa
// Entering paste mode (ctrl-D to finish)

import play.api.libs.functional.syntax._
import play.api.libs.json._

// Exiting paste mode, now interpreting.

import play.api.libs.functional.syntax._
import play.api.libs.json._

scala&gt; case class Person(name: String, age: Int, lovesChocolate: Boolean)
defined class Person

scala&gt; :pa
// Entering paste mode (ctrl-D to finish)

implicit val personReads = (
  (__ \ 'name).read[String] and
  (__ \ 'age).read[Int] and
  (__ \ 'lovesChocolate).read[Boolean]
)(Person)

// Exiting paste mode, now interpreting.

personReads: play.api.libs.json.Reads[Person] = play.api.libs.json.Reads$$anon$8@5c2b898d

scala&gt; val jsonStr = """{ "name" : "Jacek", "age" : 41, "lovesChocolate": true }"""
jsonStr: String = { "name" : "Jacek", "age" : 41, "lovesChocolate": true }

scala&gt; val json = play.api.libs.json.Json.parse(jsonStr)
json: play.api.libs.json.JsValue = {"name":"Jacek","age":41,"lovesChocolate":true}

scala&gt; val jacek: Person = json
&lt;console&gt;:18: error: type mismatch;
 found   : play.api.libs.json.JsValue
 required: Person
       val jacek: Person = json
                           ^

scala&gt; val jacek: Person = json.as[Person]
jacek: Person = Person(Jacek,41,true)

scala&gt; implicit val personReads = Json.reads[Person]
personReads: play.api.libs.json.Reads[Person] = play.api.libs.json.Reads$$anon$8@5e930aa2
</code></pre>

<p>With the playground you can play with JSON types in Play however you like. Start with the trait <a href="https://www.playframework.com/documentation/2.4.x/api/scala/index.html#play.api.libs.json.JsValue">play.api.libs.json.JsValue</a> and then learn what <a href="https://www.playframework.com/documentation/2.4.x/api/scala/index.html#play.api.libs.json.Reads">play.api.libs.json.Reads[T]</a> offers. They&rsquo;re the cornerstone of the JSON API in Play.</p>

<p>The entire code to paste (<code>:pa</code> or <code>:paste</code> in sbt console) follows. Note the simplifications codenamed <strong>JSON Inception</strong>.</p>

<pre><code>import play.api.libs.json._

case class Person(name: String, age: Int, lovesChocolate: Boolean)

val jsonStr = """{ "name" : "Jacek", "age" : 41, "lovesChocolate": true }"""

val json = play.api.libs.json.Json.parse(jsonStr)

implicit val personReads = Json.reads[Person]

val jacek: Person = json.as[Person]
</code></pre>

<p>Once you’re done, press <code>Ctrl+D</code> twice to exit <code>console</code> and the sbt shell afterwards.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Trait Init[Scope] in Sbt]]></title>
    <link href="http://blog.jaceklaskowski.pl/2014/07/22/trait-init-scope-in-sbt.html"/>
    <updated>2014-07-22T07:19:54-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2014/07/22/trait-init-scope-in-sbt</id>
    <content type="html"><![CDATA[<p>It&rsquo;s been my wish to master <a href="http://scala-lang.org/">Scala</a> recently and since I&rsquo;ve been spending more time with <a href="http://www.scala-sbt.org/">sbt</a> I&rsquo;ve made the decision to use one to master the other (in no particular order). There are quite a few sophisticated projects in Scala out there, but sbt is enough for my needs.</p>

<p>In order to pursue my understanding of sbt (and hence Scala itself) I&rsquo;ve been reading the sources, which honestly keep surprising me so often. It&rsquo;s almost every minute that I find myself scratching my head to digest a piece of sbt code. It&rsquo;s akin to when I was reading the source code of <a href="http://clojure.org/">Clojure</a> to learn the language. People can write complicated code and I wouldn&rsquo;t be surprised to hear sbt&rsquo;s sources belong to that category. I don&rsquo;t care, though. I&rsquo;m fine with the complexity, hoping the mental pain brings me closer to mastering Scala.</p>

<p>Today I picked the trait <a href="https://github.com/sbt/sbt/blob/0.13/util/collection/src/main/scala/sbt/Settings.scala#L41">sbt.Init</a> believing it&rsquo;d be an important step in my journey.</p>

<p><strong>NOTE</strong> It becomes feature-complete when the note disappears. Live with the few mistakes for now. Let me know what you think in the Comments section. The site is on GitHub so pull requests are warmly welcome, too. Thanks!</p>

<!-- more -->


<p>There’s the trait <a href="https://github.com/sbt/sbt/blob/0.13/util/collection/src/main/scala/sbt/Settings.scala#L41">sbt.Init</a>. I don’t really know what its purpose is and I hope to find out after a few Scala snippets. There’s just enough hope that I’ll master Scala while pursuing my understanding of sbt through the trait.</p>

<h2>Goal</h2>

<p>Create an instance of trait <code>Init[Scope]</code>.</p>

<h2>Solution</h2>

<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
<span class='line-number'>4</span>
<span class='line-number'>5</span>
</pre></td><td class='code'><pre><code class=''><span class='line'>val init = new Init[Int] {
</span><span class='line'>  def showFullKey: Show[ScopedKey[_]] = Show { (sk: ScopedKey[_]) =&gt; 
</span><span class='line'>    s"${sk.scope}:${sk.key}...${sk.scopedKey}"
</span><span class='line'>  }
</span><span class='line'>}</span></code></pre></td></tr></table></div></figure>


<p>Run <code>sbt</code> and then execute the command <code>consoleProject</code> to open sbt&rsquo;s Scala REPL with all the necessary types of sbt loaded.</p>

<h2>Mental issues encountered</h2>

<ol>
<li><p>I’m far from being able to easily distinguish type parameters, e.g. <code>Scope</code>, in parameterised types, e.g. <code>Init[Scope]</code>, from types themselves. When I see <code>Init[Scope]</code> my Java-trained eyes see a <code>Scope</code> type within an <code>Init</code> type, and although it stops making sense after a moment, that’s my initial thought.</p></li>
<li><p>The type <code>Show[ScopedKey[_]]</code> in the return type of <code>showFullKey</code> is another trait, <code>Show</code>, that comes with an <code>apply</code> supposed to return a <code>String</code> from a <code>ScopedKey[_]</code>. But hey, <code>ScopedKey[_]</code> is another type constructor, and things got more complex for me again. Happily, <code>Show</code> has a companion object with an <code>apply</code> method. The story ends well: <code>ScopedKey</code> is a final parameterized case class and the function parameter <code>f: T =&gt; String</code> in <code>Show</code> returns a <code>String</code>, so I merely followed the types and it <em>happened</em> to work fine. The Scala compiler is happy and so am I.</p></li>
</ol>
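<p>The pattern above can be re-created in a few lines of plain Scala (a stripped-down sketch with simplified types, not sbt&rsquo;s actual definitions): a function-like trait whose companion object lifts a plain <code>T =&gt; String</code> into an instance.</p>

```scala
// A stripped-down re-creation of the Show pattern (hypothetical,
// simplified types - not sbt's actual code): a function-like trait
// whose companion's apply wraps a plain T => String.
trait Show[T] {
  def apply(t: T): String
}

object Show {
  def apply[T](f: T => String): Show[T] = new Show[T] {
    def apply(t: T): String = f(t)
  }
}

// A simplified stand-in for sbt's ScopedKey.
final case class ScopedKey[T](scope: Int, key: T)

val showKey: Show[ScopedKey[_]] = Show { (sk: ScopedKey[_]) =>
  s"${sk.scope}:${sk.key}"
}
```

Calling <code>showKey(ScopedKey(5, "number"))</code> produces the same kind of <code>scope:key</code> rendering the snippet above builds.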


<h2>Summary</h2>

<p><code>Show</code> is a function type (with <code>apply</code>) that accepts <code>T</code> and returns <code>String</code>. In our case, <code>T</code> is <code>ScopedKey[_]</code> that’s&hellip;well&hellip;it’s yet to be understood.</p>

<h2>consoleProject in sbt</h2>

<p>If you want to see the code in action, execute <code>sbt consoleProject</code> and give the following a try:</p>

<figure class='code'><div class="highlight"><table><tr><td class="gutter"><pre class="line-numbers"><span class='line-number'>1</span>
<span class='line-number'>2</span>
<span class='line-number'>3</span>
<span class='line-number'>4</span>
<span class='line-number'>5</span>
<span class='line-number'>6</span>
<span class='line-number'>7</span>
<span class='line-number'>8</span>
<span class='line-number'>9</span>
<span class='line-number'>10</span>
<span class='line-number'>11</span>
<span class='line-number'>12</span>
<span class='line-number'>13</span>
<span class='line-number'>14</span>
<span class='line-number'>15</span>
<span class='line-number'>16</span>
</pre></td><td class='code'><pre><code class=''><span class='line'>// (attribute) key that points at Int value
</span><span class='line'>scala&gt; val number = AttributeKey[Int]("number", "number stringified")
</span><span class='line'>number: sbt.AttributeKey[Int] = number
</span><span class='line'> 
</span><span class='line'>scala&gt; val init = new Init[Int] {
</span><span class='line'>     |   def showFullKey: Show[ScopedKey[_]] = Show { (sk: ScopedKey[_]) =&gt;
</span><span class='line'>     |     s"${sk.scope}:${sk.key}...${sk.scopedKey}"
</span><span class='line'>     |   }
</span><span class='line'>     | }
</span><span class='line'>init: sbt.Init[Int] = $anon$1@1f95802
</span><span class='line'> 
</span><span class='line'>scala&gt; val sfk: Show[init.ScopedKey[_]] = init.showFullKey
</span><span class='line'>sfk: sbt.Show[init.ScopedKey[_]] = sbt.Show$$anon$1@7f54be72
</span><span class='line'>
</span><span class='line'>scala&gt; val s = sfk(init.ScopedKey[Int](scope=5, key=number))
</span><span class='line'>s: String = 5:number...ScopedKey(5,number)</span></code></pre></td></tr></table></div></figure>

]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Contributing to Open Source Projects on GitHub - Cheat Sheet]]></title>
    <link href="http://blog.jaceklaskowski.pl/2014/07/07/contributing-to-open-source-projects-on-github-cheat-sheet.html"/>
    <updated>2014-07-07T19:24:32-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2014/07/07/contributing-to-open-source-projects-on-github-cheat-sheet</id>
    <content type="html"><![CDATA[<p>I&rsquo;ve been contributing to many open source projects over the past couple of years and I found <a href="https://github.com/jaceklaskowski">GitHub</a> pleasantly helpful for continuing the gig in the years to come. I&rsquo;ve learnt a few techniques along the way (from many sources I&rsquo;m including after the sections they apply to).</p>

<p>I don&rsquo;t want to keep the techniques to myself, so this git/GitHub cheat sheet is supposed to help me remember the commands and help others learn from my mistakes (<em>aka</em> experience). It&rsquo;s so easy on GitHub that I keep wondering why it took me so long to learn. It mustn&rsquo;t take you as long.</p>

<p>Have fun contributing to open source projects as much as I do! <em>Pro publico bono.</em></p>

<p><strong>NOTE</strong> It becomes feature-complete when the note disappears. Live with the few mistakes for now. Let me know what you think in the Comments section. Pull requests are welcome, too. Thanks!</p>

<!-- more -->


<h2>Picking project</h2>

<p>You start your day hunting down the project you want to contribute to. Be adventurous and pick the one you&rsquo;ve always been dreaming about. This might be the day when the dream comes true.</p>

<p>I&rsquo;m more into Scala/sbt lately, so I stick with projects built with <a href="http://www.scala-sbt.org/">sbt</a>, as I can learn both while contributing.</p>

<h2>Cloning project</h2>

<p>Learning a project can take different approaches; reading the source code, or just building it and staying on the cutting edge, are a couple of examples.</p>

<p>In the project&rsquo;s repository on GitHub, on the right-hand side, there&rsquo;s this clone URL field. Select the protocol to use (HTTPS or SSH) and click the Copy to clipboard button.</p>

<p>In the terminal, execute the following command:</p>

<pre><code>git clone [clone URL]
</code></pre>

<p>It creates a directory with the project. The sources are yours now, master.</p>

<h2>Forking project</h2>

<p>Your very first step is to fork a project. Forking means creating your own copy of the project. On GitHub it&rsquo;s as easy as clicking the Fork button in the upper-right corner. Click it and select the account you want the fork to go to.</p>

<p>In the terminal, go to the project&rsquo;s directory and add your fork as a remote repository.</p>

<pre><code>git remote add [remote-name] [clone URL]
</code></pre>

<p>I tend to use my first name for <code>remote-name</code> so I know that my personal repository copy is under <code>jacek</code> nick.</p>

<h2>Branching project</h2>

<p>Developing a change for a project is the real thing. It can be a documentation page, a fix for an issue or whatever else the project holds.</p>

<p>The following command</p>

<pre><code>git checkout -b [branch-name]
</code></pre>

<p>creates <code>branch-name</code> and switches your current branch from <code>master</code> (usually) to it. Use a <code>wip/</code> prefix in <code>branch-name</code> to denote that the work is in progress, so people can review the changes before they get <em>squashed</em> and merged into master.</p>

<h2>Committing changes to project</h2>

<p>On a branch, do the following to commit your changes:</p>

<pre><code>git commit -am [commit-message]
</code></pre>

<p>There are some strict rules on how to write a proper <code>commit-message</code>. For now, don&rsquo;t worry about it too much. There are tougher things you will have to go through, and writing proper commit messages doesn&rsquo;t belong to that category&hellip;yet. It&rsquo;s more important to get you up to speed with contributing to a project than to do it without mistakes from day 0.</p>

<h2>Pushing changes to remote repo</h2>

<p>With the changes on the branch committed, it&rsquo;s time to show off on GitHub. Push the changes with the following command:</p>

<pre><code>git push [remote-name] [branch-name]
</code></pre>

<p>Using command completion can save you a lot of typing here. A decent shell like <a href="http://ohmyz.sh/">oh-my-zsh</a> is highly recommended (on Mac OS X at the very least).</p>

<p><code>remote-name</code> is the nick of the remote repository, e.g. <code>jacek</code> while <code>branch-name</code> is the name of the branch you&rsquo;re working on right now.</p>

<h2>Creating pull request on GitHub</h2>

<p>With the changes in the remote repository on GitHub, you should now be able to send a pull request to the <strong>origin</strong>al repo (usually called <strong>origin</strong>, but git lets you name it whatever you like).</p>

<p>GitHub shows the Pull Request button when your changes hit your repository that&rsquo;s a fork of the project. Click the button and fill out the blanks. GitHub uses your commit message as the title, which further eases the process.</p>

<p>Click Create and you&rsquo;ve just contributed to the project! Open Source Contributor badge unlocked! Congratulations.</p>

<h2>Squashing changes</h2>

<p>There might be times when your work in progress generates a stream of commits on a branch. Assuming the changes are already committed, the project maintainers may request <em>to squash the changes</em> so they ultimately go (aka <em>get merged</em>) into <code>master</code> as a single commit. Since a branch is usually about a single feature, it&rsquo;s often reasonable to have the feature merged in as a single change &ndash; it&rsquo;s self-contained and makes code review a little easier.</p>

<p>Use <code>git rebase -i [branch]</code>:</p>

<pre><code>git rebase -i origin/master
</code></pre>

<p>where <code>origin/master</code> is the name of the <code>master</code> branch of the project you forked and then branched for your changes from the remote <code>origin</code> repository.</p>

<p>Fix any merge conflicts while rebasing. When fixed, <code>git add</code> the files changed (because of the merge conflict) and run <code>git rebase --continue</code>.</p>

<p>You can always go back to the previous state (before doing <code>git rebase</code>) with <code>git rebase --abort</code>.</p>

<p>Squashing is worth the time since merging the changes into <code>master</code> later on becomes a no-brainer for the project maintainers.</p>

<p>Once you&rsquo;re done with modifying the history of the changes in your branch, run <code>git push -f</code> to push your changes forcefully. The <code>-f</code> option is needed because you&rsquo;ve changed the history of a public branch that others could&rsquo;ve already fetched and based their work on &ndash; a conflict may be coming. To prevent accidental overwrites, git rejects such a push unless you confirm that&rsquo;s what you really want to do. You&rsquo;ve been warned.</p>
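<p><code>git rebase -i</code> normally opens an editor with the todo list. As a non-interactive sketch of the same squash (GNU sed assumed; commit messages are examples), the todo list can be rewritten programmatically so every commit but the first becomes <code>squash</code>:</p>

```shell
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com
git config user.name You
git commit -q --allow-empty -m "initial"
for i in 1 2 3; do git commit -q --allow-empty -m "wip: step $i"; done
# Rewrite the rebase todo list: squash everything after the first pick;
# core.editor=true accepts the combined commit message as-is
GIT_SEQUENCE_EDITOR="sed -i '2,\$s/^pick/squash/'" \
  git -c core.editor=true rebase -q -i HEAD~3
git rev-list --count HEAD   # prints: 2 (initial + one squashed commit)
```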

<p>Useful links about git rebase:</p>

<ul>
<li><a href="http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html">squashing commits with rebase</a></li>
<li><a href="https://help.github.com/articles/about-git-rebase">About Git rebase</a></li>
<li><a href="https://github.com/edx/edx-platform/wiki/How-to-Rebase-a-Pull-Request">How to Rebase a Pull Request</a></li>
</ul>


<h2>Deleting remote and local branches</h2>

<p>When the work is over and all the changes are merged into <code>master</code>, you can safely delete the remote and local branches.</p>

<p>Once the work gets merged, GitHub asks you to delete the branch. Click the button under the pull request.</p>

<p>Delete the local branch with the command:</p>

<pre><code>git branch -D [branch-name]
</code></pre>

<p>where <code>branch-name</code> is the name of the branch you want to delete.</p>

<p>You need to switch to another branch first &ndash; git won&rsquo;t let you delete the branch you&rsquo;re currently on.</p>
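<p>A quick sketch of the deletion in a throwaway repository (the branch name is an example); the switch comes first because git refuses to delete the checked-out branch:</p>

```shell
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com
git config user.name You
git commit -q --allow-empty -m "initial"
git checkout -q -b wip/old-work
# Switch back to the previous branch, then delete the finished one
git checkout -q -
git branch -D wip/old-work
```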

<h2>Maintainers, use &ldquo;Closes #XXX&rdquo; to auto-close pull requests</h2>

<p>It&rsquo;s a feature of GitHub, mostly for project maintainers when they&rsquo;re merging pull requests into <code>master</code>.</p>

<p>When you&rsquo;re about to <code>git push</code> your local changes, <code>git commit</code> them with <strong>Closes #XXX</strong> as the last line of the commit message, where <strong>XXX</strong> is the pull request id. It will auto-close the pull request.</p>
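<p>For example (the pull request id <code>#123</code> is hypothetical), the closing keyword goes on the last line of the commit message:</p>

```shell
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com
git config user.name You
# The last line auto-closes pull request #123 once pushed (id is hypothetical)
git commit -q --allow-empty -m "Improve the docs

Closes #123"
git log -1 --format=%B
```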

<p>Useful links about the feature:</p>

<ul>
<li><a href="http://blog.spreedly.com/2014/06/24/merge-pull-request-considered-harmful/">&ldquo;Merge pull request&rdquo; Considered Harmful</a></li>
</ul>

]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Calculate Profit in Scala With foldLeft]]></title>
    <link href="http://blog.jaceklaskowski.pl/2014/06/26/calculate-profit-in-scala-with-foldLeft.html"/>
    <updated>2014-06-26T16:57:00-04:00</updated>
    <id>http://blog.jaceklaskowski.pl/2014/06/26/calculate-profit-in-scala-with-foldLeft</id>
<content type="html"><![CDATA[<p>With a credit in a foreign currency, one may want to hedge against the foreign currency getting stronger and hence increasing the cost of the credit.</p>

<p>Say you bought 589 CHF when it cost 3.4007 PLN, and then 593 CHF at 3.3704 PLN. How much would you profit if the selling price of CHF rose to 3.4107 PLN?</p>

<!-- more -->


<p>The question of how much profit you earned with a series of pairs <code>(quantity, price)</code> against a given CHF price can be calculated as follows:</p>

<pre><code>// profit = sum over (qty, price) pairs of qty * (currentPrice - price)
def calculateProfit(series: Seq[(Int, Double)], currentPrice: Double): Double =
  series.foldLeft(0.0) {
    case (acc, (qty, price)) =&gt; acc + (currentPrice - price) * qty
  }

val qtyPriceSeries = Seq((589,3.4007),(593,3.3704))
val currPrice = 3.4107

scala&gt; calculateProfit(qtyPriceSeries, currPrice)
res0: Double = 29.787899999999745
</code></pre>

<p>It gives you about 30 PLN.</p>

<p>It&rsquo;d be nice to have a series with the date when a given pair was made, and then compare it with other means of gaining profits. A web app developed in <a href="http://www.playframework.com/">Play</a> and deployed to <a href="https://www.heroku.com/">Heroku</a> or <a href="http://www.cloudbees.com/">CloudBees</a> might be of help, wouldn&rsquo;t it?</p>
]]></content>
  </entry>
  
</feed>
