<?xml version='1.0' encoding='UTF-8'?><?xml-stylesheet href="http://www.blogger.com/styles/atom.css" type="text/css"?><feed xmlns='http://www.w3.org/2005/Atom' xmlns:openSearch='http://a9.com/-/spec/opensearchrss/1.0/' xmlns:blogger='http://schemas.google.com/blogger/2008' xmlns:georss='http://www.georss.org/georss' xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr='http://purl.org/syndication/thread/1.0'><id>tag:blogger.com,1999:blog-8768401356830813531</id><updated>2026-03-31T02:27:36.739-04:00</updated><category term="haskell"/><category term="cabal"/><category term="happs"/><category term="heist"/><category term="ltmt"/><category term="scripting"/><category term="analysis"/><category term="ember"/><category term="javascript"/><category term="screencast"/><category term="snap"/><title type='text'>Software Simply</title><subtitle type='html'>software development, functional programming, haskell, etc</subtitle><link rel='http://schemas.google.com/g/2005#feed' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/posts/default'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default?redirect=false'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/'/><link rel='hub' href='http://pubsubhubbub.appspot.com/'/><link rel='next' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default?start-index=26&amp;max-results=25&amp;redirect=false'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><generator version='7.00' 
uri='http://www.blogger.com'>Blogger</generator><openSearch:totalResults>54</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-4897572765487639472</id><published>2021-08-26T10:23:00.000-04:00</published><updated>2021-08-26T10:23:36.667-04:00</updated><title type='text'>Dependent Types are a Runtime Maybe</title><content type='html'>&lt;p&gt;A while back I was discussing dependent types with someone and we ended up concluding that dependent types can always be replaced by a runtime Maybe.&amp;nbsp; This seemed to me then (and still does today) like a fairly surprising and provocative conclusion.&amp;nbsp; So I thought I&#39;d put the idea out there and see what people think.&lt;/p&gt;&lt;p&gt;Let&#39;s look at a few examples that are commonly used to illustrate dependent types:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Vectors of length N&lt;/li&gt;&lt;li&gt;Matrices of size &lt;span style=&quot;font-family: courier;&quot;&gt;m x n&lt;/span&gt;&lt;/li&gt;&lt;li&gt;Sorted lists&lt;/li&gt;&lt;li&gt;Height-balanced trees (trees where the heights of subtrees differ by at most one)&lt;/li&gt;&lt;/ul&gt;&lt;div&gt;The argument is roughly as follows.&amp;nbsp; All of these examples ultimately boil down to enforcing some kind of constraint on a data type.&amp;nbsp; The more powerful your dependent type system, the richer and more expressive the constraints you can enforce.&amp;nbsp; If we take this thought experiment to its logical conclusion, we end up with a dependent type system that allows us to enforce any constraint that can be computed.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The important realization here is that every one of the above dependent type examples is a constraint that can also be enforced by a smart constructor.&amp;nbsp; The smart constructor 
pattern is roughly this:&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;module Foo&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;&amp;nbsp; ( Foo&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;&amp;nbsp; , mkFoo&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;&amp;nbsp; -- Any other functionality that Foo supplies&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;&amp;nbsp; )&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;data Foo = ...&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;mkFoo :: FooInputs -&amp;gt; Maybe Foo&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;You can express all of the above dependent type constraints using this pattern. 
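For instance, here is a minimal sketch of the pattern instantiated for sorted lists (the names are illustrative, not from any particular library):

```haskell
-- A minimal sketch of the smart constructor pattern for sorted lists.
-- In a real module you would export only the SortedList type (not its
-- constructor) and mkSortedList, so the sortedness invariant cannot be
-- violated from outside.
import Data.List (sort)

newtype SortedList a = SortedList { getSorted :: [a] }
  deriving Show

-- Just when the invariant holds, Nothing otherwise.
mkSortedList :: Ord a => [a] -> Maybe (SortedList a)
mkSortedList xs
  | sort xs == xs = Just (SortedList xs)
  | otherwise     = Nothing
```
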
VecN can simply hold a vector and the length N constraint can be enforced in mkVecN.&amp;nbsp; Similarly, SortedList can simply hold a list and mkSortedList can sort its input and/or return Nothing if its input isn&#39;t sorted.&amp;nbsp; The smart constructor &lt;span style=&quot;font-family: courier;&quot;&gt;mkFoo&lt;/span&gt; can contain arbitrarily complex Turing-complete constraints and return a &lt;span style=&quot;font-family: courier;&quot;&gt;Just&lt;/span&gt; whenever they&#39;re satisfied or a &lt;span style=&quot;font-family: courier;&quot;&gt;Nothing&lt;/span&gt; if they&#39;re not.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The key difference between dependent types and a smart constructor is that with dependent types the constraint is enforced at compile time and with a smart constructor it is checked at runtime.&amp;nbsp; This suggests a rule of thumb for answering the question of whether you should use dependent types:&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;If the cost (TCO...i.e. 
the sum total of dev time, readability of the resulting code, and all the ongoing maintenance) of using a dependent type is less than the cost of handling the &lt;span style=&quot;font-family: courier;&quot;&gt;Nothing&lt;/span&gt; cases at runtime, then you should use a dependent type.&amp;nbsp; Otherwise, you should just use a smart constructor.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The interesting thing here is that Haskell gives us a fantastic set of tools for handling runtime Nothing values.&amp;nbsp; The &lt;span style=&quot;font-family: courier;&quot;&gt;Maybe&lt;/span&gt; type has instances of Functor, Applicative, and Monad, which allow us to avoid a lot of the code overhead of checking the failure cases and handling them appropriately.&amp;nbsp; It is often possible to front-load the checking of the constraint with a case statement near the top level:&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;case mkFoo inputs of&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;&amp;nbsp; Nothing -&amp;gt; handleError&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;&amp;nbsp; Just a -&amp;gt; handleSuccess a&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Then all the rest of your code will be working with &lt;span style=&quot;font-family: courier;&quot;&gt;Foo&lt;/span&gt;, which is structurally guaranteed to have the properties you want and allows the use of simplified code that doesn&#39;t bother checking the error conditions.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;My takeaway from this argument is that you should only reach for dependent types when dealing with situations where you can&#39;t front-load the error handling and the cost of having &lt;span style=&quot;font-family: courier;&quot;&gt;Maybe a&lt;/span&gt;&#39;s floating around your code exceeds the cost of 
the dependent type machinery.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;What do you think?&amp;nbsp; Am I missing something here?&amp;nbsp; I&#39;d love to see if anyone has practical examples of dependent types that can&#39;t be boiled down to this kind of smart constructor and runtime &lt;span style=&quot;font-family: courier;&quot;&gt;Maybe&lt;/span&gt;&amp;nbsp;or where the cost of doing so is exceptionally high.&lt;/div&gt;&lt;p&gt;&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/4897572765487639472/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/4897572765487639472' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/4897572765487639472'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/4897572765487639472'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2021/08/dependent-types-are-runtime-maybe.html' title='Dependent Types are a Runtime Maybe'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-6965724103852009671</id><published>2018-07-24T02:18:00.000-04:00</published><updated>2018-08-13T16:40:48.945-04:00</updated><title type='text'>Setting Up A Private Nix Cache</title><content type='html'>&lt;p&gt;I recently went through the process of setting up a private Nix binary cache. It was not obvious to me how to go about it at first, so I thought I would document what I did here. 
There are a few different ways of going about this that might be appropriate in one situation or another, but I’ll just describe the one I chose. I need to serve a cache for proprietary code, so I ended up using a cache served via SSH.&lt;/p&gt;
&lt;h2 id=&quot;setting-up-the-server&quot;&gt;Setting up the server&lt;/h2&gt;
&lt;p&gt;For my cache server I’m using an Amazon EC2 instance with NixOS. It’s pretty easy to create these using the public NixOS 18.03 AMI. I ended up using a t2.medium with 1 TB of storage, but the jury is still out on the ideal tradeoff of specs and cost for our purposes. YMMV.&lt;/p&gt;
&lt;p&gt;The NixOS AMI puts the SSH credentials in the root user, so log in like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ssh -i /path/to/your/key.pem root@nixcache.example.com&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To get your new NixOS machine working as an SSH binary cache there are two things you need to do: generate a signing key and turn on cache serving.&lt;/p&gt;
&lt;h3 id=&quot;generate-a-signing-key&quot;&gt;Generate a signing key&lt;/h3&gt;
&lt;p&gt;You can generate a public/private key pair simply by running the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;nix-store --generate-binary-cache-key nixcache.example.com-1 nix-cache-key.sec nix-cache-key.pub&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&quot;turn-on-ssh-nix-store-serving&quot;&gt;Turn on SSH Nix store serving&lt;/h3&gt;
&lt;p&gt;NixOS ships out of the box with a config option for enabling this, so it’s pretty easy. Just edit &lt;code&gt;/etc/nixos/configuration.nix&lt;/code&gt; and add the following lines:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  nix = {
    extraOptions = &amp;#39;&amp;#39;
      secret-key-files = /root/nix-cache-key.sec
    &amp;#39;&amp;#39;;

    sshServe = {
      enable = true;
      keys = [
        &amp;quot;ssh-rsa ...&amp;quot;
        &amp;quot;ssh-rsa ...&amp;quot;
        ...
      ];
    };
  };&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;extraOptions&lt;/code&gt; section makes the system aware of your signing key. The &lt;code&gt;sshServe&lt;/code&gt; section makes the local Nix store available via the &lt;code&gt;nix-ssh&lt;/code&gt; user. You grant access to the cache by adding your users’ SSH public keys to the keys section.&lt;/p&gt;
&lt;h2 id=&quot;setting-up-the-clients&quot;&gt;Setting up the clients&lt;/h2&gt;
&lt;p&gt;Now you need to add this new cache to your users’ machines so they can get cached binaries instead of building things themselves. The following applies to multi-user Nix setups where there is a Nix daemon that runs as root. This is now the default when you install Nix on macOS. If you are using single-user Nix, then you may not need to do all of the following.&lt;/p&gt;
&lt;p&gt;You need to have an SSH public/private key pair for your root user to use the private Nix cache. This makes sense because everything in your local Nix store is world-readable, so private cache access needs to be controlled on a per-machine basis, not a per-user basis.&lt;/p&gt;
&lt;h3 id=&quot;generating-a-root-ssh-key&quot;&gt;Generating a Root SSH Key&lt;/h3&gt;
&lt;p&gt;You’ll do the rest of this section as the root user, so start by entering a root shell and generating an SSH key pair with the following commands. After the ssh-keygen command, hit enter three times to accept the defaults. It is important that you not set a password for this SSH key because the connection will be made automatically and you won’t be able to type a password.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo su -
ssh-keygen -b 4096&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next ssh to the cache server. This will tell you that the authenticity of the server can’t be established and ask if you want to continue. Answer ‘yes’. After it connects and prompts you for a password, just hit CTRL-c to cancel.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ssh nixcache.example.com&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This has the effect of adding the server to your &lt;code&gt;.ssh/known_hosts&lt;/code&gt; file. If you didn’t do this, SSH would prompt you to verify the host’s authenticity, but since the Nix daemon calls SSH automatically, no one would be there to answer and the connection would fail.&lt;/p&gt;
&lt;p&gt;Now cat the public key file.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;cat ~/.ssh/id_rsa.pub&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Copy the contents of this file and send your key to the administrator of the nix cache.&lt;/p&gt;
&lt;h3 id=&quot;telling-nix-to-use-the-cache&quot;&gt;Telling Nix to use the cache&lt;/h3&gt;
&lt;p&gt;In your &lt;code&gt;$NIX_CONF_DIR/nix.conf&lt;/code&gt;, add your cache to the &lt;code&gt;substituters&lt;/code&gt; line and add the cache&#39;s public signing key (generated above with the nix-store command or given to you by your cache administrator) to the &lt;code&gt;trusted-public-keys&lt;/code&gt; line. It might look something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;substituters = ssh://nix-ssh@nixcache.example.com https://cache.nixos.org/
trusted-public-keys = nixcache.example.com-1:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa= cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you need to restart the nix daemon.&lt;/p&gt;
&lt;p&gt;On mac:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo launchctl stop org.nixos.nix-daemon
sudo launchctl start org.nixos.nix-daemon&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On linux:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo systemctl restart nix-daemon.service&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&quot;populating-the-cache&quot;&gt;Populating the Cache&lt;/h2&gt;
&lt;p&gt;To populate the Nix cache, use the &lt;code&gt;nix-copy-closure&lt;/code&gt; command on any Nix store path, such as the &lt;code&gt;result&lt;/code&gt; symlink created by a &lt;code&gt;nix-build&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;nix-copy-closure -v --gzip --include-outputs --to root@nixcache.example.com &amp;lt;nix-store-path&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After you copy binaries to the cache, you need to sign them with the signing key you created with the &lt;code&gt;nix-store&lt;/code&gt; command above. You can do that by running the following command on the cache server:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;nix sign-paths -k nix-cache-key.sec --all&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It’s also possible to sign packages on the machines that build them. This would require copying the private signing key around to other servers, so if you’re going to do that you should think carefully about key management.&lt;/p&gt;
&lt;h2 id=&quot;maintenance&quot;&gt;Maintenance&lt;/h2&gt;
&lt;p&gt;At some point it is likely that your cache will run low on disk space. When this happens, the &lt;code&gt;nix-collect-garbage&lt;/code&gt; command is your friend for cleaning things up in a gradual way that doesn&#39;t suddenly drop long builds on your users.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/6965724103852009671/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/6965724103852009671' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/6965724103852009671'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/6965724103852009671'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2018/07/setting-up-private-nix-cache.html' title='Setting Up A Private Nix Cache'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-3903335149023839301</id><published>2018-07-09T15:48:00.000-04:00</published><updated>2018-07-09T16:22:55.756-04:00</updated><title type='text'>Simplicity</title><content type='html'>I&#39;m reposting this here because it cannot be overstated.&amp;nbsp; I&#39;ve been saying roughly the same thing for a while now.&amp;nbsp; It&#39;s even the title of this blog!&amp;nbsp;&amp;nbsp;Drew DeVault sums it up very nicely in his opening sentences:&lt;br /&gt;
&lt;blockquote class=&quot;tr_bq&quot;&gt;
&lt;span style=&quot;background-color: white; color: #333333; font-family: sans-serif; font-size: 15.3333px;&quot;&gt;The single most important quality in a piece of software is simplicity. It’s more important than doing the task you set out to achieve. It’s more important than performance.&lt;/span&gt;&lt;/blockquote&gt;
Here&#39;s the full post:&amp;nbsp;&lt;a href=&quot;https://drewdevault.com/2018/07/09/Simple-correct-fast.html&quot;&gt;https://drewdevault.com/2018/07/09/Simple-correct-fast.html&lt;/a&gt;&lt;br /&gt;
&lt;br /&gt;
I would also like to add another idea.&amp;nbsp; It has been my observation that the attributes of &quot;smarter&quot; and &quot;more junior&quot; tend to be more highly correlated with losing focus on simplicity.&amp;nbsp; Intuitively this makes sense because smarter people will tend to grasp complicated concepts more easily.&amp;nbsp; Also, smarter people tend to be enamored by clever complex solutions.&amp;nbsp; Junior engineers usually don&#39;t appreciate how important simplicity is as much as senior engineers--at least I know I didn&#39;t!&amp;nbsp; I don&#39;t have any great wisdom about how to solve this problem other than that the first step to combating the problem is being aware of it.</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/3903335149023839301/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/3903335149023839301' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3903335149023839301'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3903335149023839301'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2018/07/simplicity.html' title='Simplicity'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-6553669363359033582</id><published>2018-03-17T09:20:00.000-04:00</published><updated>2018-03-17T11:38:53.571-04:00</updated><title type='text'>Fake: Generating Realistic Test Data in Haskell</title><content 
type='html'>On a number of occasions over the years I&#39;ve found myself wanting to generate realistic looking values for Haskell data structures.&amp;nbsp; Perhaps I&#39;m writing a UI and want to fill it in with example data during development so I can see how the UI behaves with large lists.&amp;nbsp; In this situation you don&#39;t want to generate a bunch of completely random unicode characters.&amp;nbsp; You want things that look plausible so you can see how it will likely look to the user with realistic word wrapping, etc.&amp;nbsp; Later, when you build the backend you actually want to populate the database with this data.&amp;nbsp; Passing around DB dumps to other members of the team so they can test is a pain, so you want this stuff to be auto-generated.&amp;nbsp; This saves time for your QA people because if you didn&#39;t have it, they&#39;d have to manually create it.&amp;nbsp; Even later you get to performance testing and you find yourself wanting to generate several orders of magnitude more data so you can load test the database, but you still want to use the same distribution so it continues to look reasonable in the UI and you can test UI performance at even bigger scale.&lt;br /&gt;
&lt;br /&gt;
Almost every time I&#39;ve been in this situation I thought about using QuickCheck&#39;s &lt;code&gt;&lt;a href=&quot;http://hackage.haskell.org/package/QuickCheck-2.11.3/docs/Test-QuickCheck-Arbitrary.html#t:Arbitrary&quot;&gt;Arbitrary&lt;/a&gt;&lt;/code&gt; type class.&amp;nbsp; But that never seemed quite right to me for a couple reasons.&amp;nbsp; &lt;strike&gt;First, &lt;code&gt;Arbitrary&lt;/code&gt; requires that you specify functions for shrinking a value to simpler values.&amp;nbsp; This was never something I needed for these purposes, so it seemed overkill to have to specify that infrastructure.&lt;/strike&gt;&amp;nbsp; EDIT: I was mistaken with this.&amp;nbsp; QuickCheck gives a default implementation for shrink.&amp;nbsp; Second, using &lt;code&gt;Arbitrary&lt;/code&gt; meant that I had to depend on QuickCheck.&amp;nbsp; This always seemed too heavy to me because I didn&#39;t need any of QuickCheck&#39;s property testing infrastructure.&amp;nbsp; I just wanted to generate a few values and be done.&amp;nbsp; For a long time these issues were never enough to overcome the activation energy needed to justify releasing a new package.&lt;br /&gt;
&lt;br /&gt;
More recently I realized that the biggest reason QuickCheck wasn&#39;t appropriate is because I wanted a different probability distribution than the one that QuickCheck uses.&amp;nbsp; This isn&#39;t about subtle differences between, say, a normal versus an exponential distribution.&amp;nbsp; It&#39;s about the bigger picture of what the probability distributions are accomplishing.&amp;nbsp; QuickCheck is significantly about fuzz testing and finding corner cases where your code doesn&#39;t behave quite as expected.&amp;nbsp; You &lt;i&gt;want&lt;/i&gt; it to generate strings with things like different kinds of quotes to verify that your code escapes things properly, weird unicode characters to check encoding issues, etc.&amp;nbsp; What I wanted was something that could generate random data that looked realistic for whatever kind of realism my domain needed.&amp;nbsp; These two things are complementary.&amp;nbsp; You don&#39;t just want one or the other.&amp;nbsp; Sometimes you need both of them at the same time.&amp;nbsp; Since you can only have one instance of the &lt;code&gt;Arbitrary&lt;/code&gt; type class for each data type, riding on top of QuickCheck wouldn&#39;t be enough.&amp;nbsp; This needed a separate library.&amp;nbsp; Enter &lt;a href=&quot;http://hackage.haskell.org/package/fake&quot;&gt;the &lt;code&gt;fake&lt;/code&gt; package&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
The &lt;code&gt;fake&lt;/code&gt; package provides a type class called &lt;code&gt;Fake&lt;/code&gt; which is a stripped down version of QuickCheck&#39;s &lt;code&gt;Arbitrary&lt;/code&gt; type class intended for generating realistic data.&amp;nbsp; With this we also include a random value generator called &lt;code&gt;FGen&lt;/code&gt; which eliminates confusion with QuickCheck&#39;s &lt;code&gt;Gen&lt;/code&gt; and helps to minimize dependencies.  The package does not provide predefined &lt;code&gt;Fake&lt;/code&gt; instances for Prelude data types because it&#39;s up to your application to define what values are realistic.&amp;nbsp; For example, an &lt;code&gt;Int&lt;/code&gt; representing age probably only needs to generate values in the interval (0,120].&lt;br /&gt;
&lt;br /&gt;
It also gives you a number of &quot;providers&quot; that generate various real-world things in a realistic way.&amp;nbsp; Need to generate plausible user agent strings?&amp;nbsp; We&#39;ve &lt;a href=&quot;https://github.com/mightybyte/fake/blob/0c95bc2f8fb6288551f10d1b47d46d2a85166047/src/Fake/Provider/UserAgent.hs#L17&quot;&gt;got you covered&lt;/a&gt;.&amp;nbsp; Want to generate US addresses with cities and zip codes that are actually valid for the chosen state?&amp;nbsp; Just import the &lt;a href=&quot;https://github.com/mightybyte/fake/blob/master/src/Fake/Provider/Address/EN_US.hs&quot;&gt;Fake.Provider.Address.EN_US module&lt;/a&gt;.&amp;nbsp; But that&#39;s not all.&amp;nbsp; Fake ships with providers that include:&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;Correctly formatted&amp;nbsp;&lt;a href=&quot;https://github.com/mightybyte/fake/blob/master/src/Fake/Provider/IdNumber/EN_US.hs&quot;&gt;US social security numbers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Various &lt;a href=&quot;https://github.com/mightybyte/fake/blob/master/src/Fake/Provider/Lang/EN_US.hs&quot;&gt;English parts of speech&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mightybyte/fake/blob/master/src/Fake/Provider/Person/EN_US.hs&quot;&gt;English names&lt;/a&gt;, including gender-appropriate first names&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mightybyte/fake/blob/master/src/Fake/Provider/PhoneNumber/EN_US.hs&quot;&gt;US phone numbers&lt;/a&gt; with valid area codes and prefixes&lt;/li&gt;
&lt;/ul&gt;
&lt;div&gt;
See the full list &lt;a href=&quot;https://github.com/mightybyte/fake/tree/master/src/Fake/Provider&quot;&gt;here&lt;/a&gt;.&lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div&gt;
I tried to focus on providers that I thought would be broadly useful to a wide audience.&amp;nbsp; If you are interested in a provider for something that isn&#39;t there yet, I invite more contributions!&amp;nbsp; Similar packages exist in a number of other languages, some of which are credited in fake&#39;s &lt;a href=&quot;https://github.com/mightybyte/fake/blob/master/README.md&quot;&gt;README&lt;/a&gt;.&amp;nbsp; If you are planning on writing a new provider for something with complex structure, you might want to look at some of those to see if something already exists that can serve as inspiration.&lt;br /&gt;
&lt;br /&gt;
One area of future exploration where I would love to see activity is something building on top of &lt;code&gt;fake&lt;/code&gt; that allows you to generate entire fake databases matching a certain schema and ensuring that foreign keys are handled properly.&amp;nbsp; This problem might be able to make use of &lt;code&gt;fake&lt;/code&gt;&#39;s full constructor coverage concept (described in more detail &lt;a href=&quot;http://softwaresimply.blogspot.com/2018/03/efficiently-improving-test-coverage.html&quot;&gt;here&lt;/a&gt;) to help ensure that all the important combinations of various foreign keys are generated.&lt;/div&gt;
</content><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/6553669363359033582'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/6553669363359033582'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2018/03/fake-generating-realistic-test-data-in.html' title='Fake: Generating Realistic Test Data in Haskell'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-9088681482957858316</id><published>2018-03-16T14:06:00.000-04:00</published><updated>2018-03-16T14:24:43.887-04:00</updated><title type='text'>Efficiently Improving Test Coverage with Algebraic Data Types</title><content type='html'>Think of a time you&#39;ve written tests for (de)serialization code of some kind, say for a data structure called &lt;code&gt;Foo&lt;/code&gt;.&amp;nbsp; If you were using the lowest level of sophistication you probably defined a few values by hand, serialized them, deserialized that, and verified that you ended up with the same value you started with.&amp;nbsp; In Haskell nomenclature we&#39;d say that you manually verified that &lt;code&gt;parse . render == id&lt;/code&gt;.&amp;nbsp; If you were a little more sophisticated, you might have used the &lt;a href=&quot;http://hackage.haskell.org/package/QuickCheck&quot;&gt;QuickCheck library&lt;/a&gt;&amp;nbsp;(or any of the numerous similar packages it inspired in other languages) to verify the&amp;nbsp;&lt;span style=&quot;font-family: monospace;&quot;&gt;parse . 
render == id&amp;nbsp;&lt;/span&gt;property for a bunch of randomly generated values.&amp;nbsp; The first level of sophistication is often referred to as unit testing.&amp;nbsp; The second frequently goes by the term property testing or sometimes fuzz testing.&lt;br /&gt;
&lt;br /&gt;
Both unit testing and property testing have some drawbacks.&amp;nbsp; With unit testing you have to write fairly tedious boilerplate of listing by hand all the values you want to test with.&amp;nbsp; With property testing you have to write a precise specification of how to generate the different cases randomly.&amp;nbsp; There is a package called &lt;a href=&quot;http://hackage.haskell.org/package/generic-arbitrary&quot;&gt;generic-arbitrary&lt;/a&gt; that can automatically derive these generators for you generically, but that approach often ends up testing many values that aren&#39;t giving you any increase in code coverage.&amp;nbsp; To see this, consider the example of testing this domain-specific data type:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;data MyError e a = Failure e | Success a&lt;/pre&gt;
&lt;br /&gt;
Here is what we get using generic-arbitrary to generate test values:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;λ import Test.QuickCheck
λ import Test.QuickCheck.Arbitrary.Generic
λ import Control.Monad
λ let g = genericArbitrary :: Gen (MyError Int Bool)
λ mapM_ print =&amp;lt;&amp;lt; generate (replicateM 20 g)
Success True
Success True
Success False
Success False
Success True
Failure 17
Failure 8
Failure (-29)
Failure (-14)
Success True
Failure 22
Failure (-2)
Success False
Success True
Success False
Failure (-15)
Success False
Success True
Failure (-5)
Failure (-5)&lt;/pre&gt;
&lt;br /&gt;
It is testing the exact same value multiple times.&amp;nbsp; But even if it had a mechanism for deduplicating the values, do you really need so many different test cases for the serialization of the &lt;code&gt;Int&lt;/code&gt; in the &lt;code&gt;Failure&lt;/code&gt; case?&amp;nbsp; I&#39;m testing my serialization code for the &lt;code&gt;MyError&lt;/code&gt; type, so I probably don&#39;t care about testing the serialization of &lt;code&gt;Int&lt;/code&gt;.&amp;nbsp; I would take it as a given that the libraries I&#39;m using got the serialization for &lt;code&gt;Int&lt;/code&gt; correct.&amp;nbsp; Granting that assumption, for the above case of testing &lt;code&gt;MyError Int Bool&lt;/code&gt;, I would really only care about two cases: the &lt;code&gt;Failure&lt;/code&gt; case and the &lt;code&gt;Success&lt;/code&gt; case.&amp;nbsp; Testing serializations for &lt;code&gt;Int&lt;/code&gt; and &lt;code&gt;Bool&lt;/code&gt; is outside the scope of my concern because from the perspective of my project they are primitives.&lt;br /&gt;
&lt;br /&gt;
What we really want to test is something I call &quot;full constructor coverage&quot;.&amp;nbsp; With this pattern of generation you only generate enough values to exercise each constructor in your type hierarchy once.&amp;nbsp; This gets you better code coverage while testing many fewer values.&amp;nbsp; If we wanted exhaustive testing, the number of values you&#39;d need to test is given by this relation:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;-- Product type
numCases (a, b) = numCases a * numCases b
-- Sum type
numCases (Either a b) = numCases a + numCases b
&lt;/pre&gt;
&lt;br /&gt;
This is precisely what the &quot;algebraic&quot; in &quot;algebraic data types&quot; is referring to.&amp;nbsp; For product types the number of inhabitants is multiplied, for sum types it is added.&amp;nbsp; However, if we&#39;re going for full constructor coverage, the number of values we need to test is reduced to the following relation:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;numCases (a, b) = max (numCases a) (numCases b)
numCases (Either a b) = numCases a + numCases b
&lt;/pre&gt;
&lt;br /&gt;
For complex types, replacing the multiplication with the max greatly reduces the number of values you need to test.&amp;nbsp; Here&#39;s what full constructor coverage looks like for MyError:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;λ import Fake
λ let g = unCoverage (gcover :: Coverage (MyError Int Bool))
λ mapM_ print =&amp;lt;&amp;lt; generate (sequence g)
Failure 75
Success False&lt;/pre&gt;
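To make the difference concrete, here is a small toy model of the two counting relations. Everything in it (the Shape type and both counting functions) is my own illustrative invention, not part of the fake package, and it assumes a primitive needs only one case under constructor coverage:

```haskell
-- Toy model of the two counting relations for algebraic data types.
data Shape = Prim | Sum Shape Shape | Prod Shape Shape

exhaustiveCases :: Shape -> Int
exhaustiveCases Prim       = 2  -- pretend each primitive has two inhabitants
exhaustiveCases (Sum a b)  = exhaustiveCases a + exhaustiveCases b
exhaustiveCases (Prod a b) = exhaustiveCases a * exhaustiveCases b

coverageCases :: Shape -> Int
coverageCases Prim       = 1  -- primitives are someone else's problem
coverageCases (Sum a b)  = coverageCases a + coverageCases b
coverageCases (Prod a b) = max (coverageCases a) (coverageCases b)

-- A record of three fields, each a two-constructor sum like MyError:
-- exhaustive testing needs 4 * 4 * 4 = 64 cases, coverage needs only 2.
threeFields :: Shape
threeFields = Prod field (Prod field field)
  where field = Sum Prim Prim
```

With multiplication replaced by max, adding fields to a record no longer multiplies the number of test cases.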
&lt;br /&gt;
At this point readers might be thinking, &quot;Who cares?&amp;nbsp; Computers are fast.&quot;&amp;nbsp; Well, QuickCheck defaults to generating 100 test cases for each property, but complex types can easily make this insufficient.&amp;nbsp; You can increase how many test cases it generates, but that will make your test suite slower, and you probably won&#39;t know what the right number is to get the level of coverage you are aiming for.&amp;nbsp; In my experience slower test suites can have tangible business impact: when a problem is encountered in production, you have to wait for the tests to pass before you can deploy a fix, which can lengthen downtime.&amp;nbsp; At the end of the day, naive randomized test case generation is very unlikely to score as well as the full constructor coverage method under the metric of lines of code covered per test case.&lt;br /&gt;
&lt;br /&gt;
The good news is that Haskell&#39;s algebraic data types give us exactly what we need to get full constructor coverage in a completely automatic way.&amp;nbsp; It is a part of&amp;nbsp;&lt;a href=&quot;https://github.com/mightybyte/fake/blob/master/src/Fake/Cover.hs&quot;&gt;the fake package&lt;/a&gt;&amp;nbsp;and you can use it today.&amp;nbsp; You can see example uses in the &lt;a href=&quot;https://github.com/mightybyte/fake/blob/0c95bc2f8fb6288551f10d1b47d46d2a85166047/test/Main.hs#L30&quot;&gt;test suite&lt;/a&gt;.&amp;nbsp; In a dynamically typed language you don&#39;t have knowledge of the constructors and forms they take.&amp;nbsp; All variables can hold any data, so you don&#39;t have any knowledge of structure to guide you.&amp;nbsp; Thanks to types, Haskell knows that structure.</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/9088681482957858316/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/9088681482957858316' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/9088681482957858316'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/9088681482957858316'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2018/03/efficiently-improving-test-coverage.html' title='Efficiently Improving Test Coverage with Algebraic Data Types'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' 
src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-3457684086515132068</id><published>2018-03-15T08:38:00.000-04:00</published><updated>2018-03-15T08:38:55.158-04:00</updated><title type='text'>Armor Your Data Structures Against Backwards-Incompatible Serializations</title><content type='html'>As almost everyone with significant experience managing production software systems should know, backwards compatibility is incredibly important for any data that is persisted by an application. If you make a change to a data structure that is not backwards compatible with the existing serialized formats, your app will break as soon as it encounters the existing format. Even if you have 100% test coverage, your tests still might not catch this problem. It’s not a problem with your app at any single point in time, but a problem with how your app evolves over time.&lt;br /&gt;
&lt;br /&gt;
One might think that wire formats which are only used for communication between components and not persisted in any way would not be susceptible to this problem. But these too can cause issues if a message is generated and a new version of the app is deployed before the message is consumed. The longer the message remains in a queue, Redis cache, etc., the higher the chances of this occurring.&lt;br /&gt;
&lt;br /&gt;
More subtly, if you deploy a backwards incompatible migration, your app may persist some data in the new format before it crashes when it receives the old format. This can leave your system in the horrible state where not only will it not work with the new code, but rolling back to the old code won’t work either because the old code doesn’t support the new serialized format! You have two incompatible serializations active at the same time! Proper migration systems can reduce the chances of this problem occurring, but if your system has any kind of queueing system or message bus, your migrations might not be applied to in-flight messages. Clearly we need something to help us protect against this problem. Enter &lt;a href=&quot;http://hackage.haskell.org/package/armor&quot;&gt;the armor package&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
Armor is a Haskell package that saves serialized versions of your data structures to the filesystem and tests that they can be correctly parsed. This alone, at a single point in time, verifies that &lt;code&gt;parse . render == id&lt;/code&gt;, which is a property that you usually want your serializations to have. But in addition to that, armor tracks a version number for your data structures and uses that to accumulate historical serialization formats over time. It stores the serialized bytes in &lt;code&gt;.test&lt;/code&gt; files that you check into source control. This protects against backwards-incompatible changes and at the same time avoids cluttering up your source code with old versions of the data structure.&lt;br /&gt;
&lt;br /&gt;
Armor does this while being completely agnostic to the choice of serialization library. In fact, it can support any number of different serialization formats simultaneously. For more information check out the &lt;a href=&quot;https://github.com/mightybyte/armor/blob/master/test/AppA.lhs&quot;&gt;literate Haskell tutorial&lt;/a&gt; in the test suite.&lt;br /&gt;
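For readers who want to see the underlying idea, here is a hand-rolled sketch of a golden-file check. The function name and file layout are my own invention, not armor&#39;s actual API, and armor additionally handles versioning and multiple serialization formats, which this toy does not:

```haskell
import System.Directory (doesFileExist)

-- On the first run, record the current serialized form on disk.
-- On later runs, require that the current parser still accepts
-- the previously recorded bytes.
goldenCheck :: FilePath -> (a -> String) -> (String -> Maybe a) -> a -> IO Bool
goldenCheck path render parse val =
  doesFileExist path >>= \exists ->
    if exists
      then fmap (maybe False (const True) . parse) (readFile path)
      else writeFile path (render val) >> pure True
```

Checking the recorded files into source control is what turns this into a backwards-compatibility test: old formats accumulate, and every future build must keep parsing them.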
&lt;h3&gt;
Credits&lt;/h3&gt;
&lt;div&gt;
Inspiration for this package came from &lt;a href=&quot;https://github.com/Soostone/safecopy-hunit&quot;&gt;Soostone&#39;s safecopy-hunit package&lt;/a&gt;.
&lt;br /&gt;
&lt;br /&gt;
Details were refined in production at &lt;a href=&quot;http://formation.ai/&quot;&gt;Formation&lt;/a&gt; (previously Takt).
&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/3457684086515132068/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/3457684086515132068' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3457684086515132068'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3457684086515132068'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2018/03/armor-your-data-structures-against.html' title='Armor Your Data Structures Against Backwards-Incompatible Serializations'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-4530921212754864352</id><published>2017-04-23T08:00:00.000-04:00</published><updated>2020-04-25T09:32:04.179-04:00</updated><title type='text'>Talk: Real World Reflex</title><content type='html'>I recently gave a talk at BayHac about some of the things I&#39;ve learned in building production Reflex applications.  If you&#39;re interested, you can find it here:
&lt;br /&gt;
&lt;a href=&quot;https://www.youtube.com/watch?v=dNBUDAU9sv4&quot;&gt;video&lt;/a&gt;
&lt;br /&gt;
&lt;a href=&quot;https://mightybyte.net/real-world-reflex/&quot;&gt;slides&lt;/a&gt;
&lt;br /&gt;
&lt;a href=&quot;https://github.com/mightybyte/real-world-reflex&quot;&gt;github&lt;/a&gt;</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/4530921212754864352/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/4530921212754864352' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/4530921212754864352'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/4530921212754864352'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2017/04/talk-real-world-reflex.html' title='Talk: Real World Reflex'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-3131554620317921613</id><published>2016-12-26T11:02:00.000-05:00</published><updated>2016-12-26T11:02:04.592-05:00</updated><title type='text'>On Haskell Documentation</title><content type='html'>The following started out as a response to &lt;a href=&quot;https://news.ycombinator.com/item?id=13257453&quot;&gt;a Hacker News comment&lt;/a&gt;, but got long enough to merit a standalone blog post.&lt;br /&gt;
&lt;br /&gt;
I think the root of the Haskell documentation debate lies in a pretty fundamental difference in how you go about finding, reading, and understanding documentation in Haskell compared to mainstream languages. &amp;nbsp;Just last week I ran into a situation that really highlighted this difference.&lt;br /&gt;
&lt;br /&gt;
I was working on creating &lt;a href=&quot;https://github.com/reflex-frp/reflex-dom-ace&quot;&gt;a Haskell wrapper around the ACE editor&lt;/a&gt;. &amp;nbsp;I initially wrote the wrapper some time ago and got it integrated into a small app. &amp;nbsp;Last week I needed ACE integration in another app I&#39;m working on and came back to the code. &amp;nbsp;But I ran into a problem...ACE automatically makes AJAX requests for JS files needed for pluggable syntax highlighters and themes. &amp;nbsp;But it was making the AJAX requests in the wrong place and I needed to tell it to request them from somewhere else. &amp;nbsp;Depending on how interested you are in this, you might try looking through the ACE documentation on your own before reading on to see if you can find the answer to this problem.&lt;br /&gt;
&lt;br /&gt;
When you go to the &lt;a href=&quot;https://ace.c9.io/&quot;&gt;ACE home page&lt;/a&gt;, the most obvious place to start seems to be &lt;a href=&quot;https://ace.c9.io/#nav=embedding&quot;&gt;the embedding guide&lt;/a&gt;. &amp;nbsp;This kind of guide seems to be what people are talking about when they complain about Haskell&#39;s documentation. &amp;nbsp;But this guide gave me no clues as to how to solve my problem. &amp;nbsp;The embedding guide then refers you to &lt;a href=&quot;https://ace.c9.io/#nav=howto&quot;&gt;the how-to guide&lt;/a&gt;. &amp;nbsp;That documentation didn&#39;t help me either. &amp;nbsp;The next place I go is the API reference. &amp;nbsp;I&#39;m no stranger to API references. &amp;nbsp;This is exactly what I&#39;m used to from Haskell! &amp;nbsp;I look at the &lt;a href=&quot;https://ace.c9.io/#nav=api&amp;amp;api=ace&quot;&gt;docs for the top-level Ace module&lt;/a&gt;. &amp;nbsp;There are only three functions here. &amp;nbsp;None of them is what I want. &amp;nbsp;They do have some type signatures that seem to help a little, but they don&#39;t tell me the type of the &lt;code&gt;edit&lt;/code&gt; function, which is the one that seems most likely to be what I want. &amp;nbsp;At this point I&#39;m dying for a hyperlink to the actual code, but there are none to be found. &amp;nbsp;To make a long story short, the thing I want is nowhere to be found in the API reference either.&lt;br /&gt;
&lt;br /&gt;
I only solved the problem when a co-worker who has done a lot of JS work found the answer &lt;a href=&quot;https://github.com/ajaxorg/ace/issues/1518&quot;&gt;buried in a closed GitHub issue&lt;/a&gt;. &amp;nbsp;There&#39;s even a comment on that issue by someone saying he had been looking for it &quot;for days&quot;. &amp;nbsp;The solution was to call &lt;code&gt;ace.config.set(&#39;basePath&#39;, myPath);&lt;/code&gt;. &amp;nbsp;This illustrates the biggest problem with tutorial/how-to documentation: they&#39;re always incomplete. &amp;nbsp;There will always be use cases that the tutorials didn&#39;t think of. &amp;nbsp;They also take effort to maintain, and can easily get out of sync over time.&lt;br /&gt;
&lt;br /&gt;
I found this whole experience with ACE documentation very frustrating, and came away feeling that I vastly prefer Haskell documentation. &amp;nbsp;With Haskell, API docs literally give you all the information needed to solve any problem that the API can solve (with very few exceptions). &amp;nbsp;If the ACE editor was written natively in Haskell, the basePath solution would be pretty much guaranteed to show up somewhere in the API documentation. &amp;nbsp;In my ACE wrapper you can find it &lt;a href=&quot;https://github.com/reflex-frp/reflex-dom-ace/blob/master/src/Reflex/Dom/ACE.hs#L114&quot;&gt;here&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
Now I need to be very clear: I am not saying that the types are enough, nor that the state of Haskell documentation is good enough. &amp;nbsp;There are definitely plenty of situations where it is not at all obvious how to wrangle the types to accomplish what you want to accomplish. &amp;nbsp;Haskell definitely needs to improve its documentation. &amp;nbsp;But this takes time and effort. &amp;nbsp;The Haskell community is growing but still relatively small, and resources are limited. &amp;nbsp;Haskell programmers should keep in mind that newcomers will probably be more used to tutorials and how-tos. &amp;nbsp;And I think newcomers should keep in mind that API docs in Haskell tell you a lot more than in other languages, and be willing to put some effort into learning how to use these resources effectively.</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/3131554620317921613/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/3131554620317921613' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3131554620317921613'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3131554620317921613'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2016/12/on-haskell-documentation.html' title='On Haskell Documentation'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16'
src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-4296098316873528918</id><published>2016-08-03T10:02:00.000-04:00</published><updated>2016-08-03T10:02:39.372-04:00</updated><title type='text'>How to Get a Haskell Job</title><content type='html'>&lt;p&gt;
Over and over again I have seen people ask how to get a full time job
programming in Haskell.  So I thought I would write a blog post with tips that
have worked for me as well as others I know who write Haskell professionally.
For the impatient, here&#39;s the tl;dr in order from easiest to hardest:
&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;IRC&lt;/li&gt;
&lt;li&gt;Local meetups&lt;/li&gt;
&lt;li&gt;Regional gatherings/hackathons&lt;/li&gt;
&lt;li&gt;Open source contributions&lt;/li&gt;
&lt;li&gt;Work where Haskell people work&lt;/li&gt;
&lt;/ol&gt;


&lt;p&gt;
First, you need to at least start learning Haskell on your own time.  You had
already started learning how to program before you got your first programming
job.  The same is true of Haskell programming.  You have to show some
initiative.  I understand that for people with families this can be hard.  But
you at least need to start.  After that, far and away the most important thing
is to interact with other Haskell developers so you can learn from them.  That
point is so important it bears repeating: &lt;strong&gt;interacting with experienced
Haskell programmers is by far the most important thing to do.&lt;/strong&gt;  Doing
this at a job would be the best, but there are other things you can do.
&lt;/p&gt;

&lt;p&gt;
1. IRC. Join the #haskell channel on Freenode. Lurk for a while and follow some
of the conversations. Try to participate in discussions when topics come up
that interest you. Don&#39;t be afraid to ask what might seem to be stupid
questions. In my experience the people in #haskell are massively patient and
willing to help anyone who is genuinely trying to learn.
&lt;/p&gt;

&lt;p&gt;
2. Local meetups. Check meetup.com to see if there is a Haskell meetup in a
city near you. I had trouble finding a local meetup when I was first learning
Haskell, but there are a lot more of them now. Don&#39;t just go to listen to the
talks. Talk to people, make friends. See if there&#39;s any way you can collaborate
with some of the people there.
&lt;/p&gt;

&lt;p&gt;
3. Larger regional Haskell events. Find larger weekend gatherings of Haskell
developers and go to them. Here are a few upcoming events that I know of off
the top of my head:
&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://www.meetup.com/Boston-Haskell/events/231606922/&quot;&gt;Hac Boston&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://wiki.haskell.org/Budapest_Hackathon_2016&quot;&gt;Budapest Hackathon&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://www.composeconference.org/2016-melbourne/unconference...&quot;&gt;Compose Melbourne&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://munihac.de/&quot;&gt;MuniHac&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://wiki.haskell.org/Hac_%CF%86&quot;&gt;Hac Phi&lt;/a&gt; (2016 is coming but hasn&#39;t been announced yet)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;
The first event like this that I went to was Hac Phi a few years back. Going
there majorly upped my game because I got to be around brilliant people, pair
program with some of them, and ultimately ended up starting the Snap Web
Framework with someone I met there. You might not have a local meetup that you
can go to, but you can definitely travel to go to one of these bigger weekend
events. I lived a few hours away from Hac Phi, but I know a number of people
who travel further to come. If you&#39;re really interested in improving your
Haskell, it is well worth the time and money. I cannot emphasize this enough.
&lt;/p&gt;

&lt;p&gt;
4. Start contributing to an open source Haskell project. Find a project that
interests you and dive in. Don&#39;t ask permission, just decide that you&#39;re going
to learn enough to contribute to this thing no matter what. Join their
project-specific IRC channel if they have one and ask questions. Find out how
you can contribute. Submit pull requests. This is by far the best way to get
feedback on the code that you&#39;re writing. I have actually seen multiple people
(including some who didn&#39;t strike me as unusually talented at first) start
Haskell and work their way up to a full-time Haskell job this way. It takes
time and dedication, but it works.
&lt;/p&gt;

&lt;p&gt;
5. Try to get a non-Haskell job at a place where lots of Haskell people are
known to work. Standard Chartered uses Haskell but is big enough to have
non-Haskell jobs that you might be able to fill. S&amp;P Capital IQ doesn&#39;t use
Haskell but has a significant number of Haskell people who are coding in Scala.
&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/4296098316873528918/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/4296098316873528918' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/4296098316873528918'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/4296098316873528918'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2016/08/how-to-get-haskell-job.html' title='How to Get a Haskell Job'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-6846295981852566348</id><published>2016-08-02T07:12:00.000-04:00</published><updated>2016-08-22T11:41:57.956-04:00</updated><title type='text'>Measuring Software Fragility</title><content type='html'>&lt;style&gt;
.hl { background-color: orange; }
&lt;/style&gt;

&lt;p&gt;
While writing &lt;a
href=&quot;https://www.reddit.com/r/haskell/comments/4ujg5i/what_are_your_thoughts_on_the_static_type/d5rn9pe&quot;&gt;this
comment on reddit&lt;/a&gt; I came up with an interesting question that I think might
be a useful way of thinking about programming languages. What percentage of
single non-whitespace characters in your source code could be changed to a
different character such that the change would pass your CI build system but
would result in a runtime bug? Let&#39;s call this the software fragility number
because I think that metric gives a potentially useful measure of how bug prone
your software is.
&lt;/p&gt;

&lt;p&gt;
At the end of the day software is a mountain of bytes and you&#39;re trying to get
them into a particular configuration.  Whether you&#39;re writing a new app from
scratch, fixing bugs, or adding new features, the number of bytes of source
code you have (similar to LOC, SLOC, or maybe the compressed number of bytes)
is a rough indication of the complexity of your project.  If we model programmer
actions as random byte mutations over all of a project&#39;s source and we&#39;re
trying to predict the project&#39;s defect rate, this software fragility number is
exactly the thing we need to know.
&lt;/p&gt;

&lt;p&gt;
Now I&#39;m sure many people will be quick to point out that this random mutation
model is not accurate. Of course that&#39;s true. But I would argue that in this way
it&#39;s similar to the efficient markets hypothesis in finance. Real world markets
are obviously not efficient (Google didn&#39;t become $26 billion less valuable
because the UK voted for Brexit). But the efficient markets model is still really
useful--and good luck finding a better one that everybody will agree on.
&lt;/p&gt;

&lt;p&gt;
What this model lacks in real world fidelity, it makes up for in practicality.
We can actually build an automated system to calculate a reasonable
approximation of the fragility number. All that has to be done is take a
project, randomly mutate a character, run the project&#39;s whole CI build, and see
if the result fails the build. Repeat this for every non-whitespace character in
the project and count how many characters pass the build. Since the character
was generated at random, I think it&#39;s reasonable to assume that any mutation
that passes the build is almost definitely a bug.
&lt;/p&gt;

&lt;p&gt;
Performing this process for every character in a large project would obviously
require a lot of CPU time. We could make this more tractable by picking
characters at random to mutate. Repeat this until you have done it for a large
enough number of characters and then see what percentage of them made it through
the build. Alternatively, instead of choosing random characters you could choose
whole modules at random to get more uniform coverage over different parts of the
language&#39;s grammar. There are probably a number of different algorithms that
could be tried for picking random subsets of characters to test. Similar to
numerical approximation algorithms such as Newton&#39;s method, any of these
algorithms could track the convergence of the estimate and stop when the value
gets to a sufficient level of stability.
&lt;/p&gt;
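The core of the estimator is easy to sketch. In the toy version below, buildPasses stands in for running your project&#39;s full CI build on the mutated source, and the mutation list stands in for whatever random sampling scheme you choose; all the names are hypothetical, not a real tool:

```haskell
-- Estimate the fragility number: the fraction of sampled mutations
-- that still pass the build (and are therefore almost certainly bugs).
fragility :: (String -> Bool) -> String -> [(Int, Char)] -> Double
fragility buildPasses src mutations =
  fromIntegral (length survivors) / fromIntegral (length mutations)
  where
    survivors = filter stillBuilds mutations
    stillBuilds (i, c) = buildPasses (mutate i c src)
    -- Replace the character at position i with c.
    mutate i c s = take i s ++ [c] ++ drop (i + 1) s
```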

&lt;p&gt;
Now let&#39;s investigate actual fragility numbers for some simple bits of example
code to see how this notion behaves. First let&#39;s look at some JavaScript
examples.
&lt;/p&gt;

&lt;p&gt;
It&#39;s worth noting that comment characters should not be allowed to be chosen for
mutation since they obviously don&#39;t affect the correctness of the program. So
the comments you see here have not been included in the calculations. Fragile
characters are highlighted in orange.
&lt;/p&gt;

&lt;pre&gt;
// Fragility 12 / 48 = 0.25
function &lt;span class=&quot;hl&quot;&gt;f&lt;/span&gt;(&lt;span class=&quot;hl&quot;&gt;n&lt;/span&gt;) {
  if ( &lt;span class=&quot;hl&quot;&gt;n&lt;/span&gt; &lt;span class=&quot;hl&quot;&gt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;hl&quot;&gt;2&lt;/span&gt; ) return &lt;span class=&quot;hl&quot;&gt;1&lt;/span&gt;;
  else return &lt;span class=&quot;hl&quot;&gt;n&lt;/span&gt; &lt;span class=&quot;hl&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;hl&quot;&gt;f&lt;/span&gt;(&lt;span class=&quot;hl&quot;&gt;n-1&lt;/span&gt;);
}
&lt;/pre&gt;

&lt;pre&gt;
// Fragility 14 / 56 = 0.25
function &lt;span class=&quot;hl&quot;&gt;g&lt;/span&gt;(&lt;span class=&quot;hl&quot;&gt;n&lt;/span&gt;) {
  var &lt;span class=&quot;hl&quot;&gt;p&lt;/span&gt; = &lt;span class=&quot;hl&quot;&gt;1&lt;/span&gt;;
  for (var &lt;span class=&quot;hl&quot;&gt;i&lt;/span&gt; = &lt;span class=&quot;hl&quot;&gt;2&lt;/span&gt;; &lt;span class=&quot;hl&quot;&gt;i&lt;/span&gt; &lt;span class=&quot;hl&quot;&gt;&amp;lt;&lt;/span&gt;= &lt;span class=&quot;hl&quot;&gt;n&lt;/span&gt;; &lt;span class=&quot;hl&quot;&gt;i&lt;/span&gt;++ ) {
    &lt;span class=&quot;hl&quot;&gt;p&lt;/span&gt; &lt;span class=&quot;hl&quot;&gt;*&lt;/span&gt;= &lt;span class=&quot;hl&quot;&gt;i&lt;/span&gt;;
  }
  return &lt;span class=&quot;hl&quot;&gt;p&lt;/span&gt;;
}
&lt;/pre&gt;

&lt;p&gt;
First I should say that I didn&#39;t write an actual program to calculate these. I
just eyeballed it and thought about what things would fail. I easily could have
made mistakes here. In some cases it may even be subjective, so I&#39;m open to
corrections or different views.
&lt;/p&gt;

&lt;p&gt;
Since JavaScript is not statically typed, every character of every identifier is
fragile--mutating them will not cause a build error because there isn&#39;t much of
a build. JavaScript won&#39;t complain, you&#39;ll just start getting undefined values.
If you&#39;ve done a significant amount of JavaScript development, you&#39;ve almost
definitely encountered bugs from mistyped identifier names like this. I think
it&#39;s mildly interesting that the recursive and iterative formulations of this
function both have the same fragility. I expected them to be different. But
maybe that&#39;s just luck.
&lt;/p&gt;

&lt;p&gt;
Numerical constants as well as comparison and arithmetic operators will also
cause runtime bugs. These, however, are more debatable: if you use the
random procedure I outlined above, you&#39;ll probably get a build failure because
the character would likely change to something syntactically incorrect.
In my experience, it seems like when you mistype an alpha character, it&#39;s likely
that the wrong character will also be an alpha character. The same seems to be
true for the classes of numeric characters as well as symbols. The method I&#39;m
proposing is that the random mutation should preserve the character class. Alpha
characters should remain alpha, numeric should remain numeric, and symbols
should remain symbols. In fact, my original intuition goes even further than
that by only replacing comparison operators with other comparison operators--you
want to maximize the chance that the new mutated character will cause a successful
build so the metric will give you a worst-case estimate of fragility. There&#39;s
certainly room for research into what patterns tend to come up in the real world
and other algorithms that might describe that better.
&lt;/p&gt;

&lt;p&gt;
Now let&#39;s go to the other end of the programming language spectrum and see what
the fragility number might look like for Haskell.
&lt;/p&gt;

&lt;pre&gt;
-- Fragility 7 / 38 = 0.18
f :: Int -&gt; Int
f n
  | n &lt;span class=&quot;hl&quot;&gt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;hl&quot;&gt;2&lt;/span&gt; = &lt;span class=&quot;hl&quot;&gt;1&lt;/span&gt;
  | otherwise = n &lt;span class=&quot;hl&quot;&gt;*&lt;/span&gt; f (n&lt;span class=&quot;hl&quot;&gt;-1&lt;/span&gt;)
&lt;/pre&gt;

&lt;p&gt;
Haskell&#39;s much more substantial compile time checks mean that mutations to
identifier names can&#39;t cause bugs in this example. The fragile characters here
are clearly essential parts of the algorithm we&#39;re implementing. Maybe we could
relate this idea to information theory and think of the number as a measure of
how much information is contained in the algorithm.
&lt;/p&gt;

&lt;p&gt;
One interesting thing to note here is the effect of the length of identifier
names on the fragility number. In JavaScript, long identifier names will
increase the fragility because all identifier characters can be mutated and will
cause a bug. But in Haskell, since identifier characters are not fragile, longer
names will lower the fragility score. Choosing to use single character
identifier names everywhere makes these Haskell fragility numbers the worst case
and makes JavaScript fragility numbers the best case.
&lt;/p&gt;
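&lt;p&gt;
The score is just a ratio, so the identifier-length effect is easy to see with
a tiny helper (hypothetical, not output from any real tool):
&lt;/p&gt;

```haskell
import Data.Char (isSpace)

-- Fragility score: fragile characters over all non-whitespace characters.
-- A hypothetical helper for illustration, not from any existing tool.
fragility :: Int -> String -> Double
fragility fragile src =
  fromIntegral fragile / fromIntegral (length (filter (not . isSpace) src))

-- The same algorithm written with short and with long identifier names.
-- In Haskell the count of fragile characters stays fixed while longer
-- names grow the denominator.
shortNames, longNames :: String
shortNames = "f n | n < 2 = 1 | otherwise = n * f (n-1)"
longNames  = "factorial n | n < 2 = 1 | otherwise = n * factorial (n-1)"
```

&lt;p&gt;
With the same count of fragile characters, &lt;code&gt;fragility 7 longNames&lt;/code&gt;
comes out lower than &lt;code&gt;fragility 7 shortNames&lt;/code&gt;.
&lt;/p&gt;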

&lt;p&gt;
Another point is that since I&#39;ve used single letter identifier names it is
possible for a random identifier mutation in Haskell to not cause a build
failure but still cause a bug. Take for instance a function that takes two Int
parameters x and y. If y was mutated to x, the program would still compile, but
it would cause a bug. My set of highlighted fragile characters above does not
take this into account because it&#39;s trivially avoidable by using longer
identifier names. Maybe this is an argument against one letter identifier names,
something that Haskell gets criticism for.
&lt;/p&gt;
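&lt;p&gt;
Concretely, that scenario looks like this (a hypothetical two-parameter
function, just for illustration):
&lt;/p&gt;

```haskell
-- Intended behavior: subtract the second argument from the first.
diff :: Int -> Int -> Int
diff x y = x - y

-- A one-character mutant: the final y became x.  Both names are in scope
-- with the right type, so this still compiles, but it silently returns 0.
diffMutant :: Int -> Int -> Int
diffMutant x y = x - x
```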

&lt;p&gt;
Here&#39;s the snippet of Haskell code I was talking about in the above reddit
comment that got me thinking about all this in the first place:
&lt;/p&gt;

&lt;pre&gt;
-- Fragility 31 / 277 = 0.11
data MetadataInfo = MetadataInfo
    { title       :: Text
    , description :: Text
    }

pageMetadataWidget :: MonadWidget t m =&gt; Dynamic t MetadataInfo -&gt; m ()
pageMetadataWidget i = do
    el &quot;&lt;span class=&quot;hl&quot;&gt;title&lt;/span&gt;&quot; $ dynText $ title &lt;$&gt; i
    elDynAttr &quot;&lt;span class=&quot;hl&quot;&gt;meta&lt;/span&gt;&quot; (mkDescAttrs . description &lt;$&gt; i) blank
  where
    mkDescAttrs desc =
      &quot;&lt;span class=&quot;hl&quot;&gt;name&lt;/span&gt;&quot; =: &quot;&lt;span class=&quot;hl&quot;&gt;description&lt;/span&gt;&quot; &lt;&gt;
      &quot;&lt;span class=&quot;hl&quot;&gt;content&lt;/span&gt;&quot; =: desc
&lt;/pre&gt;

&lt;p&gt;
In this snippet, the fragility number is probably close to 31 characters--the
number of characters in string literals. This is out of a total of 277
non-whitespace characters, so the software fragility number for this bit of code
is 11%. This is half the fragility of the JS code we saw above! And as I&#39;ve pointed
out, larger real world JS examples are likely to have even higher fragility. I&#39;m
not sure how much we can conclude about the actual ratios of these fragility
numbers, but at the very least it matches my experience that JS programs are
significantly more buggy than Haskell programs.
&lt;/p&gt;

&lt;p&gt;
The TDD people are probably thinking that my JS examples aren&#39;t very realistic
because none of them have tests, and that tests would catch most of the
identifier name mutations, bringing the fragility down closer to Haskell
territory. It is true that tests will probably catch some of these things. But
you have to write code to make that happen! It doesn&#39;t happen by default. Also,
you need to take into account the fact that the tests themselves will have some
fragility. Tests require time and effort to maintain. This is an area where this
notion of the fragility number becomes less accurate. I suspect that because the
metric only considers single character mutations, it will underestimate the
fragility of tests, since mutating a single character in a test will almost
always just cause a test failure.
&lt;/p&gt;

&lt;p&gt;
There seems to be a slightly paradoxical relationship between the fragility
number and DRY. Imagine our above JS factorial functions had a test that
completely reimplemented factorial and then tried a bunch of random values
QuickCheck-style. This would yield a fragility number of zero! Any single
character change in the code would cause a test failure. And any single
character change in the tests would also cause a test failure. Single character
changes can no longer be classified as fragile because we&#39;ve violated DRY. You
might say that the test suite shouldn&#39;t reimplement the algorithm--you should
just test specific cases like &lt;code&gt;f(5) == 120&lt;/code&gt;. But in an information
theory sense this is still violating DRY.
&lt;/p&gt;
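&lt;p&gt;
Here&#39;s a sketch of such a test in Haskell. I&#39;m using a plain property check
over a list of inputs rather than actual QuickCheck, just to keep it
self-contained:
&lt;/p&gt;

```haskell
-- The production code: factorial via a fold.
factorial :: Integer -> Integer
factorial n = product [1..n]

-- The test suite reimplements factorial independently (violating DRY)...
factorialSpec :: Integer -> Integer
factorialSpec n
  | n < 2     = 1
  | otherwise = n * factorialSpec (n - 1)

-- ...and checks agreement on a range of inputs.  Any single-character
-- mutation to either implementation makes this property fail.
prop_factorial :: Bool
prop_factorial = all (\n -> factorial n == factorialSpec n) [0..20]
```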

&lt;p&gt;
Does this mean that the fragility number is not very useful? Maybe. I don&#39;t
know. But I don&#39;t think it means that we should just throw away the idea. Maybe
we should just keep in mind that this particular formulation doesn&#39;t have much
to tell us about the fragility of more complex, coordinated multi-character
changes.
I could see the usefulness of this metric going either way. It could simplify
down to something not very profound. Or it could be that measurements of the
fragility of real world software projects end up revealing some interesting
insights that are not immediately obvious even from my analysis here.
&lt;/p&gt;

&lt;p&gt;
Whatever the usefulness of this fragility metric, I think the concept gets us
thinking about software defects in a different way than we might be used to. If
it turns out that my single character mutation model isn&#39;t very useful, perhaps
the extension to multi-character changes could be useful. Hopefully this will
inspire more people to think about these issues and play with the ideas in a way
that will help us progress towards more reliable software and tools to build it
with.
&lt;/p&gt;

&lt;p&gt;
EDIT: Unsurprisingly, I&#39;m not the first person to have thought of this.
It looks like it&#39;s commonly known as
&lt;a href=&quot;https://en.wikipedia.org/wiki/Mutation_testing&quot;&gt;mutation testing&lt;/a&gt;.
That Wikipedia article makes it sound like mutation testing is commonly thought
of as a way to assess your project&#39;s test suite.  I&#39;m particularly interested
in what it might tell us about programming languages...i.e. how much &quot;testing&quot;
we get out of the box because of our choice of programming language and
implementation.
&lt;/p&gt;

</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/6846295981852566348/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/6846295981852566348' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/6846295981852566348'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/6846295981852566348'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2016/08/measuring-software-fragility.html' title='Measuring Software Fragility'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-2074107047679825165</id><published>2015-08-18T00:40:00.001-04:00</published><updated>2016-08-02T00:30:17.583-04:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="cabal"/><title type='text'>&quot;cabal gen-bounds&quot;: easy generation of dependency version bounds</title><content type='html'>&lt;p&gt;
In my &lt;a href=&quot;http://softwaresimply.blogspot.com/2015/08/why-version-bounds-cannot-be-inferred.html&quot;&gt;last post&lt;/a&gt; I showed how release dates are not a good way of inferring version bounds.  The package repository should not make assumptions about what versions you have tested against.  You need to tell it.  But from what I&#39;ve seen there are two problems with specifying version bounds:
&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Lack of knowledge about how to specify proper bounds&lt;/li&gt;
&lt;li&gt;Unwillingness to take the time to do so&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;
Early in my Haskell days, the first time I wrote a cabal file I distinctly remember getting to the dependencies section and having no idea what to put for the version bounds.  So I just ignored them and moved on.  The result of that decision is that I can no longer build that app today.  I would really like to, but it&#39;s just not worth the effort to try.
&lt;/p&gt;

&lt;p&gt;
It wasn&#39;t until much later that I learned about the PVP and how to properly set bounds.  But even then, there was still an obstacle.  It can take some time to add appropriate version bounds to all of a package&#39;s dependencies.  So even if you know the correct scheme to use, you might not want to take the time to do it.
&lt;/p&gt;

&lt;p&gt;
Both of these problems are surmountable.  And in the spirit of doing that, I would like to propose a &quot;cabal gen-bounds&quot; command.  It would check all dependencies to see which ones are missing upper bounds and output correct bounds for them.  I have implemented this feature and it is available at &lt;a href=&quot;https://github.com/mightybyte/cabal/tree/gen-bounds&quot;&gt;https://github.com/mightybyte/cabal/tree/gen-bounds&lt;/a&gt;.  Here is what it looks like to use this command on the cabal-install package:
&lt;/p&gt;

&lt;pre&gt;
$ cabal gen-bounds
Resolving dependencies...

The following packages need bounds and here is a suggested starting point.
You can copy and paste this into the build-depends section in your .cabal
file and it should work (with the appropriate removal of commas).

Note that version bounds are a statement that you&#39;ve successfully built and
tested your package and expect it to work with any of the specified package
versions (PROVIDED that those packages continue to conform with the PVP).
Therefore, the version bounds generated here are the most conservative
based on the versions that you are currently building with.  If you know
your package will work with versions outside the ranges generated here,
feel free to widen them.


network      &gt;= 2.6.2 &amp;&amp; &lt; 2.7,
network-uri  &gt;= 2.6.0 &amp;&amp; &lt; 2.7,
&lt;/pre&gt;
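&lt;p&gt;
The rule behind these suggestions is simple: the lower bound is the version you
actually built against, and the upper bound bumps the second PVP component.
Here&#39;s a sketch of that rule (an illustration, not the actual code in my
branch):
&lt;/p&gt;

```haskell
import Data.List (intercalate)

-- The most conservative PVP-style range for the version you actually
-- built against: ">= A.B.C && < A.(B+1)".
pvpRange :: [Int] -> String
pvpRange v@(major:minor:_) =
  ">= " ++ showV v ++ " && < " ++ showV [major, minor + 1]
  where showV = intercalate "." . map show
pvpRange v = ">= " ++ intercalate "." (map show v)
```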

&lt;p&gt;
The user can then paste these lines into the build-depends section of their .cabal file.  They are formatted in a way that facilitates easy editing as the user finds more versions (either newer or older) that the package builds with.  This serves to both educate users and automate the process.  I think this removes one of the main frustrations people have about upper bounds and is a step in the right direction of getting more hackage packages to supply them.  Hopefully it will be merged upstream and be available in cabal-install in the future.
&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/2074107047679825165/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/2074107047679825165' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/2074107047679825165'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/2074107047679825165'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2015/08/cabal-gen-bounds-easy-generation-of.html' title='&quot;cabal gen-bounds&quot;: easy generation of dependency version bounds'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-3067513860784563859</id><published>2015-08-16T14:40:00.000-04:00</published><updated>2016-08-22T11:35:01.475-04:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="cabal"/><title type='text'>Why version bounds cannot be inferred retroactively (using dates)</title><content type='html'>&lt;p&gt;
In past debates about Haskell&#39;s &lt;a href=&quot;https://wiki.haskell.org/Package_versioning_policy&quot;&gt;Package Versioning Policy (PVP)&lt;/a&gt;, some have suggested that package developers don&#39;t need to put upper bounds on their version constraints because those bounds can be inferred by looking at what versions were available on the date the package was uploaded.  This strategy cannot work in practice, and here&#39;s why.
&lt;/p&gt;

&lt;p&gt;
Imagine someone creates a small new package called foo.  It&#39;s a simple package, say something along the lines of &lt;a href=&quot;http://hackage.haskell.org/package/formattable&quot;&gt;the formattable package&lt;/a&gt; that I recently released.  One of the dependencies for foo is &lt;a href=&quot;http://hackage.haskell.org/package/errors&quot;&gt;errors&lt;/a&gt;, a popular package supplying frequently used error handling infrastructure.  The developer happens to already have errors-1.4.7 installed on their system, so this new package gets built against that version.  The author uploads it to hackage on August 16, 2015 with no upper bounds on its dependencies.  Let&#39;s for simplicity imagine that errors is the only dependency, so the .cabal file looks like this:
&lt;/p&gt;

&lt;pre&gt;
name: foo
build-depends:
  errors
&lt;/pre&gt;

&lt;p&gt;
If we come back through at some point in the future and try to infer upper bounds by date, we&#39;ll see that on August 16, the most recent version of errors was 2.0.0.  Here&#39;s an abbreviated illustration of the picture we can see from release dates:
&lt;/p&gt;

&lt;p&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghKd2c9XFoqEFUuDyxcFkFvNOUhgTWQtlBPAcdgXQ_3WWMB_6_Yj-kH2o-dvZsLpyQ7_I9gIRSPNq1KEJVsqfWM0maph-tPCQUDsuw8keTbg-4qgKbT54zgISI0gTY9wMKrPKW87OOV1g/s1600/timeline1.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 0; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghKd2c9XFoqEFUuDyxcFkFvNOUhgTWQtlBPAcdgXQ_3WWMB_6_Yj-kH2o-dvZsLpyQ7_I9gIRSPNq1KEJVsqfWM0maph-tPCQUDsuw8keTbg-4qgKbT54zgISI0gTY9wMKrPKW87OOV1g/s640/timeline1.png&quot;&gt;&lt;/a&gt;&lt;/div&gt;
&lt;/p&gt;

&lt;p&gt;
If we look only at release dates, and assume that packages were building against the most recent version, we will try to build foo with errors-2.0.0.  But that is incorrect!  Building foo with errors-2.0.0 will fail because errors had a major breaking change in that version.  &lt;strong&gt;Bottom line: dates are irrelevant--all that matters is what dependency versions the author happened to be building against!&lt;/strong&gt;  You cannot assume that package authors will always be building against the most recent versions of their dependencies.  This is especially true if our developer was using the Haskell Platform or LTS Haskell because those package collections lag the bleeding edge even more.  So this scenario is not at all unlikely.
&lt;/p&gt;
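&lt;p&gt;
To see the heuristic fail, here&#39;s the scenario in a few lines of Haskell (the
release dates are made up purely for illustration):
&lt;/p&gt;

```haskell
-- Abbreviated release history for errors: (version, date as YYYYMMDD).
-- These dates are invented for the example.
releases :: [([Int], Int)]
releases =
  [ ([1,4,7], 20140301)
  , ([2,0,0], 20150601)
  ]

-- The date heuristic: assume a package was built against the newest
-- release available on its upload date.
latestAsOf :: Int -> [Int]
latestAsOf day = maximum [ v | (v, d) <- releases, d <= day ]
```

&lt;p&gt;
foo was uploaded on August 16, 2015, so the heuristic infers errors-2.0.0, even
though the author actually built against errors-1.4.7 and 2.0.0 contains a
breaking change.
&lt;/p&gt;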

&lt;p&gt;
It is also possible for packages to be maintaining multiple major versions simultaneously.  Consider large projects like the linux kernel.  Developers routinely do maintenance releases on 4.1 and 4.0 even though 4.2 is the latest version.  This means that version numbers are not always monotonically increasing as a function of time.
&lt;/p&gt;

&lt;p&gt;
I should also mention another point on the meaning of version bounds.  When a package specifies version bounds like this...
&lt;/p&gt;

&lt;pre&gt;
name: foo
build-depends:
  errors &amp;gt;= 1.4 &amp;amp;&amp;amp; &amp;lt; 1.5
&lt;/pre&gt;

&lt;p&gt;
...it is not saying &quot;my package will not work with errors-1.5 and above&quot;.  It is actually saying, &quot;I warrant that my package does work with those versions of errors (provided errors complies with the PVP)&quot;.  So the idea that &quot;&amp;lt; 1.5&quot; is a &quot;preemptive upper bound&quot; is wrong.  The package author is not preempting anything.  Bounds are simply information.  The upper and lower bounds are important things that developers need to tell you about their packages to improve the overall health of the ecosystem.  Build tools are free to do whatever they want with that information.  Indeed, cabal-install has a flag --allow-newer that lets you ignore those upper bounds and step outside the version ranges that the package authors have verified to work.
&lt;/p&gt;

&lt;p&gt;
In summary, the important point here is that you cannot use dates to infer version bounds.  You cannot assume that package authors will always be building against the most recent versions of their dependencies.  The only reliable thing to do is for the package maintainer to tell you explicitly what versions the package is expected to work with.  And that means lower &lt;strong&gt;and&lt;/strong&gt; upper bounds.
&lt;/p&gt;

&lt;p&gt;
Update:  Here is a situation that illustrates this point perfectly: &lt;a href=&quot;https://github.com/haskell-crypto/cryptonite/issues/96&quot;&gt;cryptonite issue #96&lt;/a&gt;.  &lt;a href=&quot;http://hackage.haskell.org/package/cryptonite-0.19&quot;&gt;cryptonite-0.19&lt;/a&gt; was released on August 12, 2016.  But &lt;a href=&quot;http://hackage.haskell.org/package/cryptonite-0.15.1&quot;&gt;cryptonite-0.15.1&lt;/a&gt; was released on August 22, 2016.  Any library published after August 22, 2016 that depends on cryptonite-0.15.1 would not be able to build if the solver used dates instead of explicit version bounds.
&lt;/p&gt;
</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/3067513860784563859/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/3067513860784563859' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3067513860784563859'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3067513860784563859'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2015/08/why-version-bounds-cannot-be-inferred.html' title='Why version bounds cannot be inferred retroactively (using dates)'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghKd2c9XFoqEFUuDyxcFkFvNOUhgTWQtlBPAcdgXQ_3WWMB_6_Yj-kH2o-dvZsLpyQ7_I9gIRSPNq1KEJVsqfWM0maph-tPCQUDsuw8keTbg-4qgKbT54zgISI0gTY9wMKrPKW87OOV1g/s72-c/timeline1.png" height="72" width="72"/><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-3292786195052339137</id><published>2015-06-03T13:02:00.000-04:00</published><updated>2015-09-01T13:45:59.439-04:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="cabal"/><title type='text'>The Problem with Curation</title><content type='html'>&lt;p&gt;Recently I received a question from a user asking about &quot;cabal hell&quot; when installing one of my packages.  The scenario in question worked fine for us, but for some reason it wasn&#39;t working for the user.  
When users report problems like this they usually do not provide enough information for us to solve it.  So then we begin the sometimes arduous back and forth process of gathering the information we need to diagnose the problem and suggest a workaround or implement a fix.&lt;/p&gt;

&lt;p&gt;In this particular case luck was on our side and the user&#39;s second message just happened to include the key piece of information.  The problem in this case was that they were using stackage instead of the normal hackage build that people usually use.  Using stackage locks down your dependency bounds to a single version.  The user reporting the problem was trying to add additional dependencies to his project and those dependencies required different versions.  Stackage was taking away degrees of freedom from the dependency solver (demoting it from the driver seat to the passenger seat).  Fortunately in this case the fix was simple: stop freezing down versions with stackage.  As soon as the user did that it worked fine.&lt;/p&gt;

&lt;p&gt;This highlights the core problem with package curation: it is based on a closed-world assumption.  I think that this makes it not a viable answer to the general question of how to solve the package dependency problem.  The world that many users will encounter is not closed.  People are constantly creating new packages.  Curation resources are finite and trying to keep up with the world is a losing battle.  Also, even if we had infinite curation resources and zero delay between the creation of a package and its inclusion in the curated repository, that would still not be good enough.  There are many people working with code that is not public and therefore cannot be curated.  We need a more general solution to the problem that doesn&#39;t require a curator.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/3292786195052339137/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/3292786195052339137' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3292786195052339137'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3292786195052339137'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2015/06/the-problem-with-curation.html' title='The Problem with Curation'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' 
src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-4704775857752483121</id><published>2014-12-19T02:01:00.000-05:00</published><updated>2015-05-14T13:52:29.428-04:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="haskell"/><category scheme="http://www.blogger.com/atom/ns#" term="ltmt"/><title type='text'>LTMT Part 3: The Monad Cookbook</title><content type='html'>&lt;h3&gt;Introduction&lt;/h3&gt;

&lt;p&gt;The previous &lt;a href=&quot;http://softwaresimply.blogspot.com/2012/04/less-travelled-monad-tutorial-part-1.html&quot;&gt;two&lt;/a&gt; &lt;a href=&quot;http://softwaresimply.blogspot.com/2012/04/ltmt-part-2-monads.html&quot;&gt;posts&lt;/a&gt; in my Less Traveled Monad Tutorial series have not had much in the way of directly practical content.  In other words, if you only read those posts and nothing else about monads, you probably wouldn&#39;t be able to use monads in real code.  This was intentional because I felt that the practical stuff (like do notation) had adequate treatment in other resources.  In this post I&#39;m still not going to talk about the details of do notation--you should definitely read about that elsewhere--but I am going to talk about some of the most common things I have seen beginners struggle with and give you cookbook-style patterns that you can use to solve these issues.&lt;/p&gt;

&lt;h3&gt;Problem: Getting at the pure value inside the monad&lt;/h3&gt;

&lt;p&gt;This is perhaps the most common problem for Haskell newcomers.  It usually manifests itself as something like this:&lt;/p&gt;

&lt;pre&gt;
main = do
    lineList &lt;- lines $ readFile &quot;myfile.txt&quot;
    -- ... do something with lineList here
&lt;/pre&gt;

&lt;p&gt;That code generates the following error from GHC:&lt;/p&gt;

&lt;pre&gt;
    Couldn&#39;t match type `IO String&#39; with `[Char]&#39;
    Expected type: String
      Actual type: IO String
    In the return type of a call of `readFile&#39;
&lt;/pre&gt;

&lt;p&gt;Many newcomers seem puzzled by this error message, but it tells you EXACTLY what the problem is.  The return type of readFile has type IO String, but the thing that is expected in that spot is a String.  (Note: String is a synonym for [Char].)  The problem is, this isn&#39;t very helpful.  You could understand that error completely and still not know how to solve the problem.  First, let&#39;s look at the types involved.&lt;/p&gt;

&lt;pre&gt;
readFile :: FilePath -&gt; IO String
lines :: String -&gt; [String]
&lt;/pre&gt;

&lt;p&gt;Both of these functions are defined in &lt;a href=&quot;https://downloads.haskell.org/~ghc/7.6.2/docs/html/libraries/base-4.6.0.1/Prelude.html&quot;&gt;Prelude&lt;/a&gt;.  These two type signatures show the problem very clearly.  readFile returns an IO String, but the lines function is expecting a String as its first argument.  IO String != String.  Somehow we need to extract the String out of the IO in order to pass it to the lines function.  This is exactly what do notation was designed to help you with.&lt;/p&gt;

&lt;h3&gt;Solution #1&lt;/h3&gt;

&lt;pre&gt;
main :: IO ()
main = do
    contents &lt;- readFile &quot;myfile.txt&quot;
    let lineList = lines contents
    -- ... do something with lineList here
&lt;/pre&gt;

&lt;p&gt;This solution demonstrates two things about do notation.  First, the left arrow lets you pull things out of the monad.  Second, if you&#39;re not pulling something out of a monad, use &quot;let foo =&quot;.  One metaphor that might help you remember this is to think of &quot;IO String&quot; as a computation in the IO monad that returns a String.  A do block lets you run these computations and assign names to the resulting pure values.&lt;/p&gt;

&lt;h3&gt;Solution #2&lt;/h3&gt;

&lt;p&gt;We could also attack the problem a different way.  Instead of pulling the result of readFile out of the monad, we can lift the lines function into the monad.  The function we use to do that is called liftM.&lt;/p&gt;

&lt;pre&gt;
liftM :: Monad m =&gt; (a -&gt; b) -&gt; m a -&gt; m b
liftM :: Monad m =&gt; (a -&gt; b) -&gt; (m a -&gt; m b)
&lt;/pre&gt;

&lt;p&gt;The associativity of the -&gt; operator is such that these two type signatures are equivalent.  If you&#39;ve ever heard Haskell people saying that all functions are single argument functions, this is what they are talking about.  You can think of liftM as a function that takes one argument, a function (a -&gt; b), and returns another function, a function (m a -&gt; m b).  When you think about it this way, you see that the liftM function converts a function of pure values into a function of monadic values.  This is exactly what we were looking for.&lt;/p&gt;

&lt;pre&gt;
main :: IO ()
main = do
    lineList &lt;- liftM lines (readFile &quot;myfile.txt&quot;)
    -- ... do something with lineList here
&lt;/pre&gt;

&lt;p&gt;This is more concise than our previous solution, so in this simple example it is probably what we would use.  But if we needed to use contents in more than one place, then the first solution would be better.&lt;/p&gt;

&lt;h3&gt;Problem: Making pure values monadic&lt;/h3&gt;

&lt;p&gt;Consider the following program:&lt;/p&gt;

&lt;pre&gt;
import Control.Monad
import System.Environment
main :: IO ()
main = do
    args &lt;- getArgs
    output &lt;- case args of
                [] -&gt; &quot;cat: must specify some files&quot;
                fs -&gt; liftM concat (mapM readFile fs)
    putStrLn output
&lt;/pre&gt;

&lt;p&gt;This program also has an error.  GHC actually gives you three errors here because there&#39;s no way for it to know exactly what you meant.  But the first error is the one we&#39;re interested in.&lt;/p&gt;

&lt;pre&gt;
    Couldn&#39;t match type `[]&#39; with `IO&#39;
    Expected type: IO Char
      Actual type: [Char]
    In the expression: &quot;cat: must specify some files&quot;
&lt;/pre&gt;

&lt;p&gt;Just like before, this error tells us exactly what&#39;s wrong.  We&#39;re supposed to have an IO something, but we only have a String (remember, String is the same as [Char]).  It&#39;s not convenient for us to get the pure result out of the readFile functions like we did before because of the structure of what we&#39;re trying to do.  The two patterns in the case statement must have the same type, so that means that we need to somehow convert our String into an IO String.  This is exactly what the return function is for.&lt;/p&gt;

&lt;h3&gt;Solution: return&lt;/h3&gt;

&lt;pre&gt;
return :: Monad m =&gt; a -&gt; m a
&lt;/pre&gt;

&lt;p&gt;This type signature tells us that return takes any type a as input and returns &quot;m a&quot;.  So all we have to do is use the return function.&lt;/p&gt;

&lt;pre&gt;
import Control.Monad
import System.Environment
main :: IO ()
main = do
    args &lt;- getArgs
    output &lt;- case args of
                [] -&gt; return &quot;cat: must specify some files&quot;
                fs -&gt; liftM concat (mapM readFile fs)
    putStrLn output
&lt;/pre&gt;

&lt;p&gt;The &#39;m&#39; that the return function wraps its argument in, is determined by the context.  In this case, main is in the IO monad, so that&#39;s what return uses.&lt;/p&gt;
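&lt;p&gt;You can watch the context do this work by giving the exact same return
expression different type annotations:&lt;/p&gt;

```haskell
-- The same expression, three different monads.  Nothing about
-- `return "hi"` itself picks the monad; the surrounding context
-- (here, the type annotation) does.
inIO :: IO String
inIO = return "hi"

inMaybe :: Maybe String
inMaybe = return "hi"

inList :: [String]
inList = return "hi"
```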

&lt;h3&gt;Problem: Chaining multiple monadic operations&lt;/h3&gt;

&lt;pre&gt;
import System.Environment
main :: IO ()
main = do
    [from,to] &lt;- getArgs
    writeFile to $ readFile from
&lt;/pre&gt;

&lt;p&gt;As you probably guessed, this function also has an error.  Hopefully you have an idea of what it might be.  It&#39;s the same problem of needing a pure value when we actually have a monadic one.  You could solve it like we did in solution #1 on the first problem (you might want to go ahead and give that a try before reading further).  But this particular case has a pattern that makes a different solution work nicely.  Unlike the first problem, you can&#39;t use liftM here.&lt;/p&gt;

&lt;h3&gt;Solution: bind&lt;/h3&gt;

&lt;p&gt;When we used liftM, we had a pure function lines :: String -&gt; [String].  But here we have writeFile :: FilePath -&gt; String -&gt; IO ().  We&#39;ve already supplied the first argument, so what we actually have is writeFile to :: String -&gt; IO ().  And again, readFile returns IO String instead of the pure String that we need.  To solve this we can use another function that you&#39;ve probably heard about when people talk about monads...the bind function.&lt;/p&gt;

&lt;pre&gt;
(=&lt;&lt;) :: Monad m =&gt; (a -&gt; m b) -&gt; m a -&gt; m b
(=&lt;&lt;) :: Monad m =&gt; (a -&gt; m b) -&gt; (m a -&gt; m b)
&lt;/pre&gt;

&lt;p&gt;Notice how the pattern here is different from the first example.  In that example we had (a -&gt; b) and we needed to convert it to (m a -&gt; m b).  Here we have (a -&gt; m b) and we need to convert it to (m a -&gt; m b).  In other words, we&#39;re only adding an &#39;m&#39; onto the &#39;a&#39;, which is exactly the pattern we need here.  Here are the two patterns next to each other to show the correspondence.&lt;/p&gt;

&lt;pre&gt;
writeFile to :: String -&gt; IO ()
                     a -&gt;  m b
&lt;/pre&gt;

&lt;p&gt;From this we see that &quot;writeFile to&quot; is the first argument to the =&lt;&lt; function.  readFile from :: IO String fits perfectly as the second argument to =&lt;&lt;, and then the return value is the result of the writeFile.  It all fits together like this:&lt;/p&gt;

&lt;pre&gt;
import System.Environment
main :: IO ()
main = do
    [from,to] &lt;- getArgs
    writeFile to =&lt;&lt; readFile from
&lt;/pre&gt;
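&lt;p&gt;Incidentally, bind also comes in a left-to-right flavor, (&gt;&gt;=), which is just =&lt;&lt; with its arguments flipped.  Some people find that it reads more naturally because the data flows left to right.  Here is the same program written with it (the list pattern in the lambda is as partial as the one in the do block, so it still crashes if you pass the wrong number of arguments):&lt;/p&gt;

```haskell
import System.Environment (getArgs)

-- Same behavior as the do-notation version: get the two command-line
-- arguments, then feed the contents of "from" into writeFile applied to "to".
main :: IO ()
main = getArgs >>= \[from,to] -> readFile from >>= writeFile to
```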

&lt;p&gt;Some might point out that this third problem is really the same as the first problem.  That is true, but I think it&#39;s useful to see the varying patterns laid out in this cookbook style so you can figure out what you need to use when you encounter these patterns as you&#39;re writing code.  Everything I&#39;ve said here can be discovered by carefully studying the &lt;a href=&quot;https://downloads.haskell.org/~ghc/7.6.2/docs/html/libraries/base-4.6.0.1/Control-Monad.html&quot;&gt;Control.Monad module&lt;/a&gt;.  There are lots of other convenience functions there that make working with monads easier.  In fact, I already used one of them: mapM.&lt;/p&gt;

&lt;p&gt;When you&#39;re first learning Haskell, I would recommend that you keep the documentation for Control.Monad close by at all times.  Whenever you need to do something new involving monadic values, odds are good that there&#39;s a function in there to help you.  I would not recommend spending 10 hours studying Control.Monad all at once.  You&#39;ll probably be better off writing lots of code and referring to it whenever you think there should be an easier way to do what you want to do.  Over time the patterns will sink in as you form new connections between different concepts in your brain.&lt;/p&gt;

&lt;p&gt;It takes effort.  Some people do pick these things up more quickly than others, but I don&#39;t know anyone who just read through Control.Monad and then immediately had a working knowledge of everything in there.  The patterns you&#39;re grappling with here will almost definitely be foreign to you because no other mainstream language enforces this distinction between pure values and side effecting values.  But I think the payoff of being able to separate pure and impure code is well worth the effort.&lt;/p&gt;
</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/4704775857752483121/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/4704775857752483121' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/4704775857752483121'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/4704775857752483121'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2014/12/ltmt-part-3-monad-cookbook.html' title='LTMT Part 3: The Monad Cookbook'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-6845679301464459801</id><published>2014-11-25T15:58:00.003-05:00</published><updated>2014-11-25T15:59:18.066-05:00</updated><title type='text'>Announcing C◦mp◦se :: Conference</title><content type='html'>Since most of my content is about Haskell, I would like to take this opportunity to inform my readers of a new conference that I and the other co-organizers of the New York Haskell Meetup are hosting at the end of January.  It&#39;s called C◦mp◦se, and it&#39;s a conference for typed functional programmers.  Check out the website at &lt;a href=&quot;http://www.composeconference.org&quot;&gt;http://www.composeconference.org/&lt;/a&gt;.  We recently issued a &lt;a href=&quot;http://www.composeconference.org/call/index.html&quot;&gt;call for papers&lt;/a&gt;.  I know it&#39;s short notice, but the deadline is November 30.  
If you have something that you think would be interesting to typed functional programmers, we&#39;d love to hear from you.  Along with the conference we&#39;ll also be having one day be a less formal hackathon/unconference.  If you would like to give a tutorial/demo at the unconference, email us at info@composeconference.org.</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/6845679301464459801/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/6845679301464459801' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/6845679301464459801'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/6845679301464459801'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2014/11/cmpse-conference.html' title='Announcing C◦mp◦se :: Conference'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-2057597500022751501</id><published>2014-08-10T17:33:00.001-04:00</published><updated>2014-08-10T17:33:52.049-04:00</updated><title type='text'>Field Accessors Considered Harmful</title><content type='html'>It&#39;s pretty well known these days that Haskell&#39;s field accessors are rather cumbersome syntactically and not composable. &amp;nbsp;The lens abstraction that has gotten much more popular recently (thanks in part to Edward Kmett&#39;s lens package) solves these problems. 
&amp;nbsp;But I recently ran into a bigger problem with field accessors that I had not thought about before. &amp;nbsp;Consider the following scenario. &amp;nbsp;You have a package with code something like this:&lt;br /&gt;
&lt;br /&gt;
&lt;code&gt;data Config = Config { configFieldA :: [Text] }&lt;/code&gt;&lt;br /&gt;
&lt;br /&gt;
So your Config data structure gives your users getters and setters for field A (and any other fields you might have). &amp;nbsp;Your users are happy and life is good. &amp;nbsp;Then one day you decide to add a new feature and that feature requires expanding and restructuring Config. &amp;nbsp;Now you have this:&lt;br /&gt;

&lt;pre&gt;data MConfig = MConfig { mconfigFieldA :: [Text] }
data Config = Config { configMC :: MConfig
                     , configFieldX :: Text
                     , configFieldY :: Bool }&lt;/pre&gt;&lt;br /&gt;

This is a nice solution because your users get to keep the functionality over the portion of the Config that they are used to, and they still get the new functionality. &amp;nbsp;But now there&#39;s a problem: you&#39;re still breaking your users, because configFieldA has changed names to mconfigFieldA and now refers to the MConfig structure instead of Config. &amp;nbsp;If configFieldA were an ordinary function, you could preserve backwards compatibility by creating another function:&lt;br /&gt;
&lt;br /&gt;
&lt;code&gt;configFieldA = mconfigFieldA . configMC&lt;/code&gt;&lt;br /&gt;
&lt;br /&gt;
But alas, that won&#39;t work here because configFieldA is not a normal function. &amp;nbsp;It is a special field accessor generated by GHC and you know that your users are using it as a setter. &amp;nbsp;It seems to me that we are at an impasse. &amp;nbsp;It is completely impossible to deliver your new feature without breaking backwards compatibility somehow. &amp;nbsp;No amount of deprecated cycles can ease the transition. &amp;nbsp;The sad thing is that it seems like it should have been totally doable. &amp;nbsp;Obviously there are some kinds of changes that understandably will break backwards compatibility. &amp;nbsp;But this doesn&#39;t seem like one of them since it is an additive change. &amp;nbsp;Yes, yes, I know...it&#39;s impossible to do this change without changing the type of the Config constructor, so that means that at least that function will break. &amp;nbsp;But we should be able to minimize the breakage to the field accessor functions, and field accessors prevent us from doing that.&lt;br /&gt;
&lt;br /&gt;
However, we could have avoided this problem. &amp;nbsp;If we had a bit more foresight, we could have done this.&lt;br /&gt;
&lt;br /&gt;
&lt;code&gt;module Foo (mkConfig, configFieldA) where&lt;br /&gt;
&lt;br /&gt;
data Config = Config { _configFieldA :: [Text] }&lt;br /&gt;
&lt;br /&gt;
mkConfig :: [Text] -&amp;gt; Config&lt;br /&gt;
mkConfig = Config&lt;br /&gt;
&lt;br /&gt;
configFieldA = lens _configFieldA (\c a -&amp;gt; c { _configFieldA = a })&lt;br /&gt;
&lt;/code&gt;
&lt;br /&gt;
This would allow us to avoid breaking backwards compatibility by continuing to export appropriate versions of these symbols. &amp;nbsp;It would look something like this.&lt;br /&gt;
&lt;pre&gt;module Foo
  ( MConfig
  , mkMConfig
  , mconfigFieldA
  , Config
  , mkConfig
  , configFieldA
  , configMC
  -- ...
  ) where

data MConfig = MConfig { _mconfigFieldA :: [Text] }
data Config = Config { _configMC :: MConfig
                     , _configFieldX :: Text
                     , _configFieldY :: Bool }

mkMConfig = MConfig

mkConfig a = Config (mkMConfig a) &quot;&quot; False

mconfigFieldA = lens _mconfigFieldA (\c a -&amp;gt; c { _mconfigFieldA = a })
configMC = lens _configMC (\c mc -&amp;gt; c { _configMC = mc })

-- The rest of the field lenses here

configFieldA = configMC . mconfigFieldA&lt;/pre&gt;

Note that the type signatures for mkConfig and configFieldA stay exactly the same. &amp;nbsp;We weren&#39;t able to do this with field accessors because they are not composable. &amp;nbsp;Lenses solve this problem for us because they are composable and we have complete control over their definition.&lt;br /&gt;
&lt;br /&gt;
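&lt;p&gt;To make this concrete, here is a compilable sketch of the idea.  It defines minimal van Laarhoven lenses from scratch using only base (in real code you would get lens, view, and set from the lens package), and it uses String in place of Text so the example has no extra dependencies:&lt;/p&gt;

```haskell
{-# LANGUAGE RankNTypes #-}

-- Minimal van Laarhoven lenses, built from base only so the sketch is
-- self-contained; in practice these come from the lens package.
import Data.Functor.Const (Const(..))
import Data.Functor.Identity (Identity(..))

type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

lens :: (s -> a) -> (s -> a -> s) -> Lens s a
lens get setr f s = fmap (setr s) (f (get s))

view :: Lens s a -> s -> a
view l = getConst . l Const

set' :: Lens s a -> a -> s -> s
set' l a = runIdentity . l (\_ -> Identity a)

-- The restructured types from the post (String standing in for Text).
data MConfig = MConfig { _mconfigFieldA :: [String] }
data Config = Config { _configMC :: MConfig
                     , _configFieldX :: String
                     , _configFieldY :: Bool }

mconfigFieldA :: Lens MConfig [String]
mconfigFieldA = lens _mconfigFieldA (\c a -> c { _mconfigFieldA = a })

configMC :: Lens Config MConfig
configMC = lens _configMC (\c mc -> c { _configMC = mc })

-- The backwards-compatible accessor: ordinary function composition
-- recovers the same Lens Config [String] that users relied on before
-- the restructuring.
configFieldA :: Lens Config [String]
configFieldA = configMC . mconfigFieldA

main :: IO ()
main = do
  let c = Config (MConfig ["a"]) "" False
  print (view configFieldA c)                            -- prints ["a"]
  print (view configFieldA (set' configFieldA ["b"] c))  -- prints ["b"]
```

The key point is the last definition: because lenses compose with plain (.), configFieldA keeps exactly the type users depended on before the restructuring.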
For quite some time now I have thought that I understood the advantage that lenses give you over field accessors. &amp;nbsp;Discovering this added ability of lenses in helping us preserve backwards compatibility came as a pleasant surprise. &amp;nbsp;I&#39;ll refrain from opining on how this should affect your development practices, but I think it makes the case for using lenses in your code a bit stronger than it was before.
</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/2057597500022751501/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/2057597500022751501' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/2057597500022751501'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/2057597500022751501'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2014/08/field-accessors-considered-harmful.html' title='Field Accessors Considered Harmful'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-2432647583227639482</id><published>2014-07-13T19:45:00.002-04:00</published><updated>2020-12-11T16:32:39.301-05:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="cabal"/><title type='text'>Haskell Best Practices for Avoiding &quot;Cabal Hell&quot;</title><content type='html'>&lt;div&gt;DEC 2020 UPDATE: A lot has changed since this post was written.&amp;nbsp; Much of &quot;cabal hell&quot; is now a thing of the past due to cabal&#39;s more recent purely functional &quot;nix style&quot; build infrastructure.&amp;nbsp; Some of the points here aren&#39;t really applicable any more, but many still are.&amp;nbsp; I&#39;m updating this post with strikethroughs for the points that are outdated.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;I posted this as a reddit comment and it was really well received, so I thought I&#39;d post it here so it would 
be more linkable. &amp;nbsp;A lot of people complain about &quot;cabal hell&quot; and ask what they can do to solve it. &amp;nbsp;There are definitely things about the cabal/hackage ecosystem that can be improved, but on the whole it serves me quite well. &amp;nbsp;I think a significant amount of the difficulty is a result of how fast things move in the Haskell community and how much more reusable Haskell is than other languages.&lt;br /&gt;
&lt;br /&gt;
With that preface, here are my best practices that seem to make Cabal work pretty well for me in my development.&lt;br /&gt;
&lt;br /&gt;
1. &lt;strike&gt;I make sure that I have no more than the absolute minimum number of packages installed as --global. &amp;nbsp;This means that I don&#39;t use the Haskell Platform or any OS haskell packages. &amp;nbsp;I install GHC directly. &amp;nbsp;Some might think this casts too much of a negative light on the Haskell Platform. &amp;nbsp;But everyone will agree that having multiple versions of a package installed at the same time is a significant cause of build problems. &amp;nbsp;And that is exactly what the Haskell Platform does for you--it installs specific versions of packages. &amp;nbsp;If you use Haskell heavily enough, you will invariably encounter a situation where you want to use a different version of a package than the one the Haskell Platform gives you.&lt;/strike&gt;&amp;nbsp; The --global flag is not applicable any more now that we have the new v2-* commands.&lt;div&gt;&lt;br /&gt;
2. Make sure ~/.cabal/bin is at the front of your path. &amp;nbsp;Hopefully you already knew this, but I see this problem a lot, so it&#39;s worth mentioning for completeness.&lt;br /&gt;
&lt;br /&gt;
3. Install happy and alex manually. &amp;nbsp;These two packages generate binary executables that you need to have in ~/.cabal/bin. &amp;nbsp;They don&#39;t get picked up automatically because they are executables and not package dependencies.&lt;br /&gt;
&lt;br /&gt;
4. Make sure you have the most recent version of cabal-install. &amp;nbsp;There is a lot of work going on to improve these tools. &amp;nbsp;The latest version is significantly better than it used to be, so you should definitely be using it.&lt;br /&gt;
&lt;br /&gt;
5. Become friends with &quot;rm -fr ~/.ghc&quot;. &amp;nbsp;This command cleans out your --user repository, which is where you should install packages if you&#39;re not using a sandbox. &amp;nbsp;It sounds bad, but right now this is simply a fact of life. &amp;nbsp;The Haskell ecosystem is moving so fast that packages you install today will be out of date in a few months if not weeks or days. &amp;nbsp;&lt;strike&gt;We don&#39;t have purely functional nix-style package management yet, so removing the old ones is the pragmatic approach. &amp;nbsp;Note that sandboxes accomplish effectively the same thing for you. &amp;nbsp;Creating a new sandbox is the same as &quot;rm -fr ~/.ghc&quot; and then installing to --user, but has the benefit of not deleting everything else you had in --user.&lt;/strike&gt;&amp;nbsp; &lt;b&gt;UPDATE: Removing the .ghc directory is still potentially useful to know but much less of an issue now.&lt;/b&gt;&lt;br /&gt;
&lt;br /&gt;
6. &lt;strike&gt;If you&#39;re not working on a single project with one harmonious dependency tree, then use sandboxes for separate projects or one-off package compiles.&lt;/strike&gt;&amp;nbsp; Sandboxes have been deprecated in favor of the new build approach.&lt;br /&gt;
&lt;br /&gt;
7. Learn to use --allow-newer. &amp;nbsp;Again, things move fast in Haskell land. &amp;nbsp;If a package gives you dependency errors, then try --allow-newer and see if the package will just work with newer versions of dependencies.&lt;br /&gt;
&lt;br /&gt;
8. Don&#39;t be afraid to dive into other people&#39;s packages. &amp;nbsp;&quot;cabal unpack&quot; makes it trivial to download the code for any package. &amp;nbsp;From there it&#39;s often trivial to make manual changes to version bounds or even small code changes. &amp;nbsp;&lt;strike&gt;If you make local changes to a package, then you can either install it to --user so other packages use it, or you can do &quot;cabal sandbox add-source /path/to/project&quot; to ensure that your other projects use the locally modified version.&lt;/strike&gt; &amp;nbsp;If you&#39;ve made code changes, then help out the community by sending a pull request to the package maintainer. &amp;nbsp;Edit: bergmark mentions that unpack is now &quot;cabal get&quot; and &quot;cabal get -s&quot; lets you clone the project&#39;s source repository.&lt;br /&gt;
&lt;br /&gt;
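&lt;p&gt;For example, the workflow for point 8 looks roughly like this (the package name here is hypothetical, and exact flags vary a bit between cabal-install versions):&lt;/p&gt;

```shell
# Fetch the source of a package from Hackage ("cabal unpack" in older versions)
cabal get somepackage-1.2.3
cd somepackage-1.2.3
# ...edit the version bounds in somepackage.cabal, or make small code fixes...

# Or clone the package's upstream source repository instead:
cabal get -s somepackage
```
&lt;br /&gt;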
9. If you can&#39;t make any progress from the build messages cabal gives you, then try building with -v3. &amp;nbsp;I have encountered situations where cabal&#39;s normal dependency errors are not helpful. &amp;nbsp;Using -v3 usually gives me a much better picture of what&#39;s going on and I can usually figure out the root of the problem pretty quickly.&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/2432647583227639482/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/2432647583227639482' title='12 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/2432647583227639482'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/2432647583227639482'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2014/07/haskell-best-practices-for-avoiding.html' title='Haskell Best Practices for Avoiding &quot;Cabal Hell&quot;'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>12</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-843302166349634140</id><published>2014-05-16T13:20:00.000-04:00</published><updated>2015-09-01T13:45:59.444-04:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="cabal"/><title type='text'>Implicit Blacklisting for Cabal</title><content type='html'>&lt;p&gt;I&#39;ve been thinking about all the Haskell PVP discussion that&#39;s been going on lately.  It should be no secret by now that I am a PVP proponent.  
I&#39;m not here to debate the PVP in this post, so for this discussion let&#39;s assume that the PVP is a good thing and should be adopted by all packages published on Hackage.  More specifically, let&#39;s assume this to mean that every package should specify upper bounds on all dependencies, and that most of the time these bounds will be of the form &quot;&lt; a.b&quot;.&lt;/p&gt;

&lt;p&gt;Recently there has been discussion about problems encountered when packages that have not been using upper bounds change and start using them.  The recent &lt;a href=&quot;https://github.com/haskell/HTTP/issues/64&quot;&gt;issue&lt;/a&gt; with the HTTP package is a good example of this.  Roughly speaking the problem is that if foo-1.2 does not provide upper bounds on its dependency bar, the constraint solver is perpetually &quot;poisoned&quot; because foo-1.2 will always be a candidate even long after bar has become incompatible with foo-1.2.  If later foo-3.9 specifies a bound of bar &lt; 0.5, then when bar-0.5 comes out the solver will try to build with foo-1.2 even though it is hopelessly old.  This will result in build errors since bar has long since changed its API.&lt;/p&gt;

&lt;p&gt;This is a difficult problem.  There are several immediately obvious approaches to solving the problem.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Remove the offending old versions (the ones missing upper bounds) from Hackage.&lt;/li&gt;
&lt;li&gt;Leave them on Hackage, but mark them as deprecated/blacklisted so they will not be chosen by the solver.&lt;/li&gt;
&lt;li&gt;Go back and retroactively add upper bounds to the offending versions.&lt;/li&gt;
&lt;li&gt;Start a new empty Hackage server that requires packages to specify upper bounds on all dependencies.&lt;/li&gt;
&lt;li&gt;Start a new Hackage mirror that infers upper bounds based on package upload dates.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All of these approaches have problems.  The first three are problematic because they mess with build reproducibility.  The fourth approach fragments the community and in the very best case would take a lot of time and effort before gaining adoption.  The fifth approach has problems because correct upper bounds cannot always be inferred by upload dates.&lt;/p&gt;

&lt;p&gt;I would like to propose a solution I call implicit blacklisting.  The basic idea is that for each set of versions with the prefix a.b.c Cabal will only consider a single one: the last one.  This effectively means that all the lower versions with the prefix a.b.c will be implicitly blacklisted.  This approach should also allow maintainers to modify this behavior by specifying more granular version bounds.&lt;/p&gt;

&lt;p&gt;In our previous example, suppose there were a number of 0.4 versions of the bar package, with 0.4.3.3 being the last one.  In this case, if foo specified a bound of bar &lt; 0.5, the solver would only consider 0.4.3.3.  0.4.3.2 and 0.4.3.1 would not be considered.  This would allow us to completely hide a lack of version bounds by making a new patch release that only bumps the d number.  If that release had problems, we could address them with more patch releases.&lt;/p&gt;

&lt;p&gt;Now imagine that for some crazy reason foo worked with 0.4.3.2, but 0.4.3.3 broke it somehow.  Note that if bar is following the PVP, that should not be the case.  But there are some well-known cases where the PVP can miss things and there is always the possibility of human error.  In that situation, foo should specify a bound of bar &lt; 0.4.3.3, and the solver should respect that bound and consider only 0.4.3.2.  But 0.4.3.1 would still be ignored as before.&lt;/p&gt;
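&lt;p&gt;For concreteness, such a granular bound is just an ordinary constraint in foo&#39;s cabal file (foo and bar being the hypothetical packages of the running example):&lt;/p&gt;

```cabal
library
  build-depends: base &lt; 5
               , bar  &lt; 0.4.3.3
```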

&lt;p&gt;Implicit blacklisting has the advantage that we don&#39;t need any special support for explicitly marking versions as deprecated/blacklisted.  Another advantage is that it does not cause any problems for people who had locked their code down to using specific versions.  If foo specified an exact version of bar == 0.4.3.0, then that will continue to be chosen.  Implicit blacklisting also allows us to leave everything in hackage untouched and fix issues incrementally as they arise with the minimum amount of work.  In the above issue with HTTP-4000.0.7, we could trivially address it by downloading that version, adding version bounds, and uploading it as HTTP-4000.0.7.1.&lt;/p&gt;

&lt;p&gt;All in all, I think this implicit blacklisting idea has a number of desirable properties and very few downsides.  It fixes the problem using nothing but our existing infrastructure: version numbers.  It doesn’t require us to add new concepts like blacklisted/deprecated flags, out-of-band “revision” markers to denote packages modified after the fact, etc.  But since this is a complicated problem I may very well have missed something, so I&#39;d like to hear what the community thinks about this idea.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/843302166349634140/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/843302166349634140' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/843302166349634140'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/843302166349634140'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2014/05/implicit-blacklisting-for-cabal.html' title='Implicit Blacklisting for Cabal'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-8946206495463550866</id><published>2014-01-28T00:27:00.000-05:00</published><updated>2014-01-28T00:52:58.800-05:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="ember"/><category scheme="http://www.blogger.com/atom/ns#" term="javascript"/><title type='text'>Ember.js is driving me crazy</title><content type='html'>&lt;p&gt;
For the past few months I&#39;ve been working on a project with a fairly complex
interactive web interface.  This required me to venture into the wild and
unpredictable jungle of Javascript development.  I was totally unprepared for what I
would find.  Soon after starting the project it became clear that just using
JQuery would not be sufficient for my project.  I needed a higher level
Javascript framework.  After doing a little research I settled on Ember.js.  
&lt;/p&gt;

&lt;h2&gt;The Zombie Code Apocalypse&lt;/h2&gt;

&lt;p&gt;
Ember was definitely a big improvement over straight JQuery, and allowed
me to get some fairly complex UI behavior working very quickly.  But recently I&#39;ve
run into some problems.  The other day I had a UI widget defined like this:
&lt;/p&gt;

&lt;pre&gt;
App.FooController = Ember.ObjectController.extend({
    // ...
});
App.FooView = Ember.View.extend({
    // ...
});
&lt;/pre&gt;

&lt;p&gt;
It was used somewhere on the page, but at some point I decided that the widget
was no longer needed, so I commented out the widget&#39;s markup.  I wasn&#39;t sure
whether we would ultimately keep the widget or not, so I opted to keep the above javascript code
for the controller and view around for awhile so it would be easily available
if I later decided to re-enable that UI element.
&lt;/p&gt;

&lt;p&gt;
Everything seemed to work fine until a few days later when I noticed that
another one of my controls, Bar, was not being populated with data.  After
spending hours trying to figure out the problem, I finally happened to comment
out the unused code for the Foo widget and the problem went away.  WTF?!?  Why
should this have anything to do with the functioning of a completely unrelated
widget?  This makes absolutely no sense to me, and it completely violates the
natural assumption that the controller and view for two completely unrelated
controls would have no impact on each other. I would have liked to know the underlying cause, but I didn&#39;t want to waste time with it, so I just removed the code and moved on.
&lt;/p&gt;

&lt;h2&gt;Spontaneously Changing Values&lt;/h2&gt;

&lt;p&gt;
Maybe a week later I ran into another problem.  Some data was changing
when I didn&#39;t expect it to.  I looked everywhere I could think of that might
affect the data, but couldn&#39;t find anything.  Again, I spent the better part
of a day trying to track down the source of this problem.  After awhile I was
getting desperate, so I started putting print statements all over the place.
I discovered that the data was changing in one particular function.  I
examined it carefully but couldn&#39;t find any hint of this data being impacted.
Eventually I isolated the problem to the following snippet:
&lt;/p&gt;

&lt;pre&gt;
console.log(this.get(&#39;foo&#39;));
this.set(&#39;bar&#39;, ...);
console.log(this.get(&#39;foo&#39;));
&lt;/pre&gt;

&lt;p&gt;
The first log line showed foo with a value of 25.  The second log line showed
foo with a value of 0.  This is utter madness!  I set one field, and a
completely different one gets changed!  In what world does this make any shred
of sense?  This time, even when I actually figured out where the problem was
happening I still couldn&#39;t figure out how to solve it.  At least the first
time I could just comment out the offending innocuous lines.  Here I narrowed
down the exact line that&#39;s causing the problem, but still couldn&#39;t figure out
how to fix it.  Finally I got on the #emberjs IRC channel and learned that
Ember&#39;s set function has special behavior for values in the content field,
which foo was a part of.  I was able to fix this problem by initializing the
bar field to null.  WAT?!?
&lt;/p&gt;

&lt;p&gt;
I was in shock.  This seemed like one of the most absurd behaviors I&#39;ve encountered in
all my years of programming.  Back in the C days you could see some crazy
things, but at least you knew that array updates and pointer arithmetic could
be dangerous and possibly overwrite other parts of memory.  Here there&#39;s no
hint.  No dynamic index that might overflow.  Just what we thought was a
straightforward getter and setter for a static field in a data type.
&lt;/p&gt;

&lt;h2&gt;Blaming Systems, Not People&lt;/h2&gt;

&lt;p&gt;
Before you start jumping all over me for all the things I did wrong, hear me
out.  I&#39;m not blaming the Ember developers or trying to disparage Ember.
Ember.js is an amazing library and my application wouldn&#39;t exist without it or
something like it.  I&#39;m just a feeble-minded Haskell programmer and not
well-versed in the ways of Javascript.  I&#39;m sure I was doing things that
contributed to the problem.  But that&#39;s not the point.  I&#39;ve been around long
enough to realize that there are probably good justifications for why the above
behaviors exist.  The Ember developers are clearly way better Javascript
programmers than I will ever be.  There&#39;s got to be a better explanation.
&lt;/p&gt;

&lt;p&gt;
Peter Senge, in his book &lt;a
href=&quot;http://www.amazon.com/The-Fifth-Discipline-Practice-Organization/dp/0385517254/ref=sr_1_1?ie=UTF8&amp;qid=1384332386&amp;sr=8-1&amp;keywords=the+fifth+discipline&quot;&gt;The
Fifth Discipline&lt;/a&gt;, talks about &lt;a
href=&quot;https://en.wikipedia.org/wiki/Beer_distribution_game&quot;&gt;the beer
distribution game&lt;/a&gt;.  It&#39;s a game that has been played thousands of times
with diverse groups of people in management classes all over the world.  The
vast majority of people who play it perform very poorly.  Peter points out
that we&#39;re too quick to attribute a bad outcome to individual people when it
should instead be attributed to the structure of the system in which those
people were operating.  This situation is no different.
&lt;/p&gt;

&lt;p&gt;
Like the beer distribution game, Javascript is a complex system.  The above
anecdotes demonstrate how localized well-intentioned decisions by different
players resulted in a bad outcome.  The root of the problem is the system we
were operating in: an impure programming language with weak dynamic typing.  In a
different system, say the one we get with Haskell, I can conclusively say that
I never would have had these problems.  Haskell&#39;s purity and strong static
type system provide a level of safety that is simply unavailable in Javascript
(or any other mainstream programming language for that matter).
&lt;/p&gt;

&lt;h2&gt;The Godlike Refactoring&lt;/h2&gt;

&lt;p&gt;
In fact, this same project gave us another anecdote supporting this claim.
The project&#39;s back end is several thousand lines of Haskell code.  I wrote all
of the back end code, and since we have a pretty aggressive roadmap with
ambitious deadlines, the code isn&#39;t exactly pretty.  There are a
couple of places with some hairy logic.  A few weeks ago we needed to do a
major refactoring of the back end to support a new feature.  I was too busy
with other important features, so another member of the team worked on the
refactoring.  He had not touched a single line of the back end code before
that point, but thanks to Haskell&#39;s purity and strong static type system he was able
to pull off the entire refactoring single-handedly in just a couple of hours.
And once he got it compiling, the application worked the first time.  We are
both convinced that this feat would have been impossible without strong static types.
&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;
I think there are a couple of interesting points worth thinking about here.  First, the API chosen by Ember only hid the complexity; it didn&#39;t reduce it.  What seemed to be a simple get() method was actually a more complex system with some special cases.  The system was more complex than the API indicated.  It&#39;s useful to compare the true complexity of a problem with the complexity of the exposed API.
&lt;/p&gt;

&lt;p&gt;
The second point is that having the ability to make categorical statements about API behavior is very important.  We use this kind of reasoning all the time, and the more of it we can do, the fewer assumptions we have to question when something isn&#39;t behaving as we expect.  In this case, I made the seemingly reasonable categorical assumption that unused class definitions would have no effect on my program.  But for some reason that I still don&#39;t understand, it was violated.  I also made the categorical assumption that Ember&#39;s get() and set() methods worked like they would in a map.  But that assumption didn&#39;t hold up either.  I encounter assumptions that don&#39;t hold up all the time.  Every programmer does.  But rarely are they so deeply and universally held as these.
&lt;/p&gt;

&lt;p&gt;
So what can we learn from this?  In The Fifth Discipline, Senge goes on to
talk about the importance of thinking with a systems perspective; about how we
need to stop blaming people and focus more on the systems involved.  I think it&#39;s telling how in my 5 or 6 years of Haskell programming I&#39;ve never seen a bug as crazy as these two that I encountered after working only a few months on a significant Javascript project.  Haskell with it&#39;s purity and strong static type system allows me to make significantly more confident categorical statements about what my code can and cannot do.  That allows me to more easily build better abstractions that actually reduce complexity for the end user instead of just hiding it away in a less frequented area.
&lt;/p&gt;
</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/8946206495463550866/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/8946206495463550866' title='24 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/8946206495463550866'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/8946206495463550866'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2014/01/emberjs-is-driving-me-crazy.html' title='Ember.js is driving me crazy'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>24</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-8173928468034318557</id><published>2012-12-20T17:20:00.000-05:00</published><updated>2012-12-20T23:03:49.889-05:00</updated><title type='text'>Haskell Web Framework Matrix</title><content type='html'>&lt;p&gt;
A comparison of the big three Haskell web frameworks on the most informative two axes I can think of.
&lt;/p&gt;

&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; standalone=&quot;no&quot;?&gt;
&lt;!-- Created with Inkscape (http://www.inkscape.org/) --&gt;

&lt;svg
   xmlns:dc=&quot;http://purl.org/dc/elements/1.1/&quot;
   xmlns:cc=&quot;http://creativecommons.org/ns#&quot;
   xmlns:rdf=&quot;http://www.w3.org/1999/02/22-rdf-syntax-ns#&quot;
   xmlns:svg=&quot;http://www.w3.org/2000/svg&quot;
   xmlns=&quot;http://www.w3.org/2000/svg&quot;
   xmlns:sodipodi=&quot;http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd&quot;
   xmlns:inkscape=&quot;http://www.inkscape.org/namespaces/inkscape&quot;
   width=&quot;453.58551&quot;
   height=&quot;172.62325&quot;
   id=&quot;svg2&quot;
   version=&quot;1.1&quot;
   inkscape:version=&quot;0.48.4 r9939&quot;
   inkscape:export-filename=&quot;matrix.png&quot;
   inkscape:export-xdpi=&quot;90&quot;
   inkscape:export-ydpi=&quot;90&quot;
   sodipodi:docname=&quot;matrix.svg&quot;&gt;
  &lt;defs
     id=&quot;defs4&quot; /&gt;
  &lt;sodipodi:namedview
     id=&quot;base&quot;
     pagecolor=&quot;#ffffff&quot;
     bordercolor=&quot;#666666&quot;
     borderopacity=&quot;1.0&quot;
     inkscape:pageopacity=&quot;0.0&quot;
     inkscape:pageshadow=&quot;2&quot;
     inkscape:zoom=&quot;1.4&quot;
     inkscape:cx=&quot;255.16729&quot;
     inkscape:cy=&quot;91.35219&quot;
     inkscape:document-units=&quot;px&quot;
     inkscape:current-layer=&quot;layer1&quot;
     showgrid=&quot;false&quot;
     inkscape:window-width=&quot;1291&quot;
     inkscape:window-height=&quot;834&quot;
     inkscape:window-x=&quot;285&quot;
     inkscape:window-y=&quot;121&quot;
     inkscape:window-maximized=&quot;0&quot;
     fit-margin-top=&quot;0&quot;
     fit-margin-left=&quot;0&quot;
     fit-margin-right=&quot;0&quot;
     fit-margin-bottom=&quot;0&quot; /&gt;
  &lt;metadata
     id=&quot;metadata7&quot;&gt;
    &lt;rdf:RDF&gt;
      &lt;cc:Work
         rdf:about=&quot;&quot;&gt;
        &lt;dc:format&gt;image/svg+xml&lt;/dc:format&gt;
        &lt;dc:type
           rdf:resource=&quot;http://purl.org/dc/dcmitype/StillImage&quot; /&gt;
        &lt;dc:title&gt;&lt;/dc:title&gt;
      &lt;/cc:Work&gt;
    &lt;/rdf:RDF&gt;
  &lt;/metadata&gt;
  &lt;g
     inkscape:label=&quot;Layer 1&quot;
     inkscape:groupmode=&quot;layer&quot;
     id=&quot;layer1&quot;
     transform=&quot;translate(-91.350095,-181.76484)&quot;&gt;
    &lt;path
       style=&quot;fill:none;stroke:#000000;stroke-width:1.23979723px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1&quot;
       d=&quot;m 544.31572,286.45912 c -443.56231,0 -452.345726,0 -452.345726,0&quot;
       id=&quot;path2985-2&quot;
       inkscape:connector-curvature=&quot;0&quot; /&gt;
    &lt;text
       xml:space=&quot;preserve&quot;
       style=&quot;font-size:20px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans&quot;
       x=&quot;268.57141&quot;
       y=&quot;213.79079&quot;
       id=&quot;text3027&quot;
       sodipodi:linespacing=&quot;125%&quot;&gt;&lt;tspan
         sodipodi:role=&quot;line&quot;
         id=&quot;tspan3029&quot;
         x=&quot;268.57141&quot;
         y=&quot;213.79079&quot;&gt;DSLs&lt;/tspan&gt;&lt;/text&gt;
    &lt;text
       xml:space=&quot;preserve&quot;
       style=&quot;font-size:20px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans&quot;
       x=&quot;388.57141&quot;
       y=&quot;213.79074&quot;
       id=&quot;text3031&quot;
       sodipodi:linespacing=&quot;125%&quot;&gt;&lt;tspan
         sodipodi:role=&quot;line&quot;
         id=&quot;tspan3033&quot;
         x=&quot;388.57141&quot;
         y=&quot;213.79074&quot;&gt;Combinators&lt;/tspan&gt;&lt;/text&gt;
    &lt;path
       style=&quot;fill:none;stroke:#000000;stroke-width:0.7641905px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1&quot;
       d=&quot;m 237.14285,182.14694 c 0,168.52199 0,171.85906 0,171.85906&quot;
       id=&quot;path2985-6&quot;
       inkscape:connector-curvature=&quot;0&quot; /&gt;
    &lt;text
       xml:space=&quot;preserve&quot;
       style=&quot;font-size:20px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans&quot;
       x=&quot;420.7966&quot;
       y=&quot;259.32928&quot;
       id=&quot;text3069&quot;
       sodipodi:linespacing=&quot;125%&quot;&gt;&lt;tspan
         sodipodi:role=&quot;line&quot;
         id=&quot;tspan3071&quot;
         x=&quot;420.7966&quot;
         y=&quot;259.32928&quot;&gt;Snap&lt;/tspan&gt;&lt;/text&gt;
    &lt;path
       style=&quot;fill:none;stroke:#000000;stroke-width:1.23979723px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1&quot;
       d=&quot;m 544.31572,225.21932 c -443.56231,0 -452.345725,0 -452.345725,0&quot;
       id=&quot;path2985-2-2&quot;
       inkscape:connector-curvature=&quot;0&quot; /&gt;
    &lt;text
       xml:space=&quot;preserve&quot;
       style=&quot;font-size:20px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans&quot;
       x=&quot;392.21262&quot;
       y=&quot;328.86053&quot;
       id=&quot;text3091&quot;
       sodipodi:linespacing=&quot;125%&quot;&gt;&lt;tspan
         sodipodi:role=&quot;line&quot;
         id=&quot;tspan3093&quot;
         x=&quot;392.21262&quot;
         y=&quot;328.86053&quot;&gt;Happstack&lt;/tspan&gt;&lt;/text&gt;
    &lt;text
       xml:space=&quot;preserve&quot;
       style=&quot;font-size:20px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans&quot;
       x=&quot;264.58078&quot;
       y=&quot;330.79901&quot;
       id=&quot;text3095&quot;
       sodipodi:linespacing=&quot;125%&quot;&gt;&lt;tspan
         sodipodi:role=&quot;line&quot;
         id=&quot;tspan3097&quot;
         x=&quot;264.58078&quot;
         y=&quot;330.79901&quot;&gt;Yesod&lt;/tspan&gt;&lt;/text&gt;
    &lt;path
       style=&quot;fill:none;stroke:#000000;stroke-width:0.7641905px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1&quot;
       d=&quot;m 347.90704,182.14694 c 0,168.52199 0,171.85906 0,171.85906&quot;
       id=&quot;path2985-6-7&quot;
       inkscape:connector-curvature=&quot;0&quot; /&gt;
    &lt;flowRoot
       xml:space=&quot;preserve&quot;
       id=&quot;flowRoot4019&quot;
       style=&quot;font-size:16px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans&quot;
       transform=&quot;translate(91.350095,181.76484)&quot;&gt;&lt;flowRegion
         id=&quot;flowRegion4021&quot;&gt;&lt;rect
           id=&quot;rect4023&quot;
           width=&quot;131.42857&quot;
           height=&quot;16.428572&quot;
           x=&quot;5&quot;
           y=&quot;-27.376755&quot; /&gt;&lt;/flowRegion&gt;&lt;flowPara
         id=&quot;flowPara4025&quot;&gt;NS&lt;/flowPara&gt;&lt;/flowRoot&gt;    &lt;text
       xml:space=&quot;preserve&quot;
       style=&quot;font-size:20px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans&quot;
       x=&quot;99.921524&quot;
       y=&quot;249.24522&quot;
       id=&quot;text4045&quot;
       sodipodi:linespacing=&quot;125%&quot;&gt;&lt;tspan
         sodipodi:role=&quot;line&quot;
         id=&quot;tspan4047&quot;
         x=&quot;99.921524&quot;
         y=&quot;249.24522&quot;&gt;Some things&lt;/tspan&gt;&lt;tspan
         sodipodi:role=&quot;line&quot;
         x=&quot;99.921524&quot;
         y=&quot;274.24524&quot;
         id=&quot;tspan4049&quot;&gt;dynamic&lt;/tspan&gt;&lt;/text&gt;
    &lt;text
       xml:space=&quot;preserve&quot;
       style=&quot;font-size:20px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans&quot;
       x=&quot;100.48793&quot;
       y=&quot;314.6738&quot;
       id=&quot;text4051&quot;
       sodipodi:linespacing=&quot;125%&quot;&gt;&lt;tspan
         sodipodi:role=&quot;line&quot;
         id=&quot;tspan4053&quot;
         x=&quot;100.48793&quot;
         y=&quot;314.6738&quot;&gt;Everything&lt;/tspan&gt;&lt;tspan
         sodipodi:role=&quot;line&quot;
         x=&quot;100.48793&quot;
         y=&quot;339.6738&quot;
         id=&quot;tspan4055&quot;&gt;type-safe&lt;/tspan&gt;&lt;/text&gt;
  &lt;/g&gt;
&lt;/svg&gt;

&lt;p&gt;Note that this is not intended to be a definitive statement of what is and isn&#39;t possible in each of these frameworks.  As I&#39;ve &lt;a href=&quot;http://stackoverflow.com/questions/5645168/comparing-haskells-snap-and-yesod-web-frameworks/5650715#5650715&quot;&gt;written elsewhere&lt;/a&gt;, most of the features of each of the frameworks are interchangeable and can be mixed and matched.  The idea of this matrix is to reflect the general attitude each of the frameworks seem to be taking, because sometimes generalizations are useful.&lt;/p&gt;
</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/8173928468034318557/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/8173928468034318557' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/8173928468034318557'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/8173928468034318557'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2012/12/haskell-web-framework-matrix_20.html' title='Haskell Web Framework Matrix'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-3245363211447893989</id><published>2012-11-08T10:46:00.002-05:00</published><updated>2020-12-11T16:38:34.221-05:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="cabal"/><title type='text'>Using Cabal With Large Projects</title><content type='html'>&lt;p&gt;DEC 2020 UPDATE: This post is mostly out of date.&amp;nbsp; The main things that cabal-meta and cabal-dev provided have now been either integrated into cabal itself or been made obsolete by subsequent improvements.&lt;/p&gt;&lt;p&gt;In the last post we talked about basic cabal usage.  That all works fine as long as you&#39;re working on a single project and all your dependencies are in hackage.  When Cabal is aware of everything that you want to build, it&#39;s actually pretty good at dependency resolution.  
But if you have several packages that depend on each other and you&#39;re working on development versions of these packages that have not yet been released to hackage, then life becomes more difficult.  In this post I&#39;ll describe my workflow for handling the development of multiple local packages.  I make no claim that this is the best way to do it.  But it works pretty well for me, and hopefully others will find this information helpful.&lt;/p&gt;

&lt;p&gt;Consider a situation where package B depends on package A and both of them depend on bytestring.  Package A has wide version bounds for its bytestring dependency while package B has narrower bounds.  Because you&#39;re working on improving both packages you can&#39;t just do &quot;cabal install&quot; in package B&#39;s directory because the correct version of package A isn&#39;t on hackage.  But if you install package A first, Cabal might choose a version of bytestring that won&#39;t work with package B.  It&#39;s a frustrating situation because you end up having to worry about dependency issues that Cabal should be handling for you.&lt;/p&gt;
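&lt;p&gt;To make the conflict concrete, here is a sketch of the two dependency sections involved.  The package names and version bounds are hypothetical, chosen only to illustrate the shape of the problem:&lt;/p&gt;

```text
-- package-a.cabal: wide bytestring bounds (no upper bound)
library
  build-depends: base == 4.*
               , bytestring >= 0.9

-- package-b.cabal: depends on A, with a narrower bytestring bound
library
  build-depends: base == 4.*
               , package-a == 0.2.*
               , bytestring == 0.10.*
```

&lt;p&gt;If you install A on its own, Cabal is free to pick bytestring 0.9.x, which satisfies A but violates B&#39;s bound.  Only a solver that sees both packages in the same run can settle on a bytestring version in the 0.10 series that works for everything.&lt;/p&gt;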

&lt;p&gt;The best solution I&#39;ve found to the above problem is &lt;a href=&quot;http://hackage.haskell.org/package/cabal-meta&quot;&gt;cabal-meta&lt;/a&gt;.  It lets you specify a sources.txt file in your project root directory with paths to other projects that you want included in the package&#39;s build environment.  For example, I maintain the snap package, which depends on several other packages that are part of the Snap Framework.  Here&#39;s what my sources.txt file looks like for the snap package:&lt;/p&gt;

&lt;pre&gt;./
../xmlhtml
../heist
../snap-core
../snap-server
&lt;/pre&gt;

&lt;p&gt;My development versions of the other four packages reside in the parent directory on my local machine.  When I build the snap package with &lt;code&gt;cabal-meta install&lt;/code&gt;, cabal-meta tells Cabal to look in these directories in addition to whatever is in hackage.  If you do this initially for the top-level package, it will correctly take into consideration all your local packages when resolving dependencies.  Once you have all the dependencies installed, you can go back to using Cabal and ghci to build and test your packages.  In my experience this takes most of the pain out of building large-scale Haskell applications.&lt;/p&gt;
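&lt;p&gt;The resulting workflow is short.  This is a sketch; it assumes cabal-meta is already installed and that the checkouts are laid out as in the sources.txt above:&lt;/p&gt;

```shell
# From the top-level package directory (here: snap);
# sources.txt lists ./ plus the local dependency checkouts.
cat sources.txt
# One dependency-resolution run over hackage plus the local directories:
cabal-meta install
# Day-to-day iteration can then go back to plain cabal and ghci.
```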

&lt;p&gt;Another tool that is frequently recommended for handling this large-scale package development problem is cabal-dev.  cabal-dev allows you to sandbox builds so that differing build configurations of libraries can coexist without causing problems like they do with plain Cabal.  It also has a mechanism for handling this local package problem above.  I personally tend to avoid cabal-dev because in my experience it hasn&#39;t played nicely with ghci.  It tries to solve the problem by giving you the &lt;code&gt;cabal-dev ghci&lt;/code&gt; command to execute ghci using the sandboxed environment, but I found that it made my ghci workflow difficult, so I prefer using cabal-meta which doesn&#39;t have these problems.&lt;/p&gt;

&lt;p&gt;I should note that cabal-dev does solve another problem that cabal-meta does not.  There may be cases where two packages simply cannot coexist in the same Cabal &quot;sandbox&quot; because their sets of dependencies are incompatible.  In that case, you&#39;ll need cabal-dev&#39;s sandboxes instead of the single user-level package repository used by Cabal.  I am usually only working on one major project at a time, so this problem has never been an issue for me.  My understanding is that people are currently working on adding this kind of local sandboxing to Cabal/cabal-install.  Hopefully this will fix my complaints about ghci integration and make cabal-dev unnecessary.&lt;/p&gt;

&lt;p&gt;There are definitely things that need to be done to improve the cabal tool chain.  But in my experience working on several large Haskell projects, both open and proprietary, I have found that the current state of Cabal combined with cabal-meta (and maybe cabal-dev) does a reasonable job of handling large project development within a very fast moving ecosystem.&lt;/p&gt;

</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/3245363211447893989/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/3245363211447893989' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3245363211447893989'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3245363211447893989'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2012/11/using-cabal-with-large-projects.html' title='Using Cabal With Large Projects'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-1856103746939161061</id><published>2012-11-02T13:09:00.000-04:00</published><updated>2015-09-01T13:45:59.411-04:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="cabal"/><title type='text'>A Practical Cabal Primer</title><content type='html'>&lt;p&gt;I&#39;ve been doing full-time Haskell development for almost three years now, and while I recognize that Cabal has been painful to use at times, the current reality is that Cabal does what I need it to do and for the most part stays out of my way.  In this post, I&#39;ll describe the Cabal best practices I&#39;ve settled on for my Haskell development.&lt;/p&gt;

&lt;p&gt;First, some terminology.  GHC is the de facto Haskell compiler, Hackage is the package database, Cabal is a library providing package infrastructure, and cabal-install is a command line program (confusingly called &quot;cabal&quot;) for building and installing packages, and for downloading them from and uploading them to Hackage.  This isn&#39;t a tutorial for installing Haskell, so I&#39;ll assume that you at least have GHC and cabal-install&#39;s &quot;cabal&quot; binary.  If you have a very recent release of GHC, then you&#39;re asking for problems.  At the time of this writing GHC 7.6 is a few months old, so don&#39;t use it unless you know what you&#39;re doing.  Stick to 7.4 until maintainers have updated their packages.  But do make sure you have the most recent versions of Cabal and cabal-install, because they have improved significantly.&lt;/p&gt;

&lt;p&gt;cabal-install can install packages globally or per-user.  You usually have to have root privileges to install globally.  Installing as user puts packages in your home directory: executable binaries go in $HOME/.cabal/bin, and libraries go in $HOME/.ghc.  Other than the packages that come with GHC, I install everything as user.  This means that when I upgrade cabal-install with &quot;cabal install cabal-install&quot;, the new binary won&#39;t take effect unless $HOME/.cabal/bin is at the front of my path.&lt;/p&gt;
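&lt;p&gt;On a Unix-like system the path adjustment is one line in your shell profile.  This is a sketch; the directory shown is the default cabal-install location:&lt;/p&gt;

```shell
# Put the user-level cabal bin directory first so freshly installed
# binaries (including cabal itself) shadow any older copies.
export PATH="$HOME/.cabal/bin:$PATH"
# The first PATH entry should now be the user-level bin directory:
echo "$PATH" | cut -d: -f1
```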

&lt;p&gt;Now I need to get the bad news over with up front.  Over time your local Cabal package database will grow until it starts to cause problems.  Whenever I&#39;m having trouble building packages, I&#39;ll tinker with things a little to see if I can isolate the problem, but if that doesn&#39;t work, then I clean out my package repository and start fresh.  On linux this can be done very simply with &lt;code&gt;rm -fr ~/.ghc&lt;/code&gt;.  Yes, this feels icky.  Yes, it&#39;s suboptimal.  But it&#39;s simple and straightforward, so either deal with it, or quit complaining and help us fix it.&lt;/p&gt;

&lt;p&gt;I&#39;ve also seen people say that you should delete the ~/.cabal directory.  Most of the time that is bad advice.  If you delete .cabal, you&#39;ll probably lose your most recent version of cabal-install, and that will make life more difficult.  Deleting .ghc completely clears out your user package repository, and in my experience is almost always sufficient.  If you really need to delete .cabal, then I would highly recommend copying the &quot;cabal&quot; binary somewhere safe and restoring it after you&#39;re done.&lt;/p&gt;

&lt;p&gt;Sometimes you don&#39;t need to go quite so far as to delete everything in ~/.ghc.  For more granular control over things, use the &quot;ghc-pkg&quot; program.  &quot;ghc-pkg list&quot; shows you a list of all the installed packages.  &quot;ghc-pkg unregister foo-2.3&quot; removes a package from the list.  You can also use unregister without the trailing version number to remove every installed version of that package.  If there are other packages that depend on the package you&#39;re removing, you&#39;ll get an error.  If you really want to remove it, use the --force flag.&lt;/p&gt;

&lt;p&gt;If you force unregister a package, then &quot;ghc-pkg list&quot; will show you all the broken packages.  If I know that there&#39;s a particular hierarchy of packages that I need to remove, then I&#39;ll force remove the top one, and then use ghc-pkg to tell me all the others that I need to remove.  This is an annoying process, so I only do it when I think it will be quicker than deleting everything and rebuilding it all.&lt;/p&gt;
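&lt;p&gt;The ghc-pkg commands from the last two paragraphs look like this in practice.  The package name and version are placeholders:&lt;/p&gt;

```shell
# Show every package registered in the global and user databases:
ghc-pkg list
# Unregister one specific version of a package:
ghc-pkg unregister foo-2.3
# Drop the version number to unregister every installed version;
# --force proceeds even when other packages depend on it:
ghc-pkg unregister foo --force
# A second listing now flags the dependents broken by the removal:
ghc-pkg list
```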

&lt;p&gt;So when do you need to use ghc-pkg?  Typically I only use it when something breaks that I think should build properly.  However, I&#39;ve also found that having multiple versions of a package installed at the same time can sometimes cause problems.  This can show up when the package I&#39;m working on uses one version of a library, but 
when I&#39;m experimenting in ghci a different version gets loaded.  When this happens you may get perplexing error messages for code that is actually correct.  In this situation, I&#39;ve been able to fix the problem by using ghc-pkg to remove all but one version of the library in question.&lt;/p&gt;

&lt;p&gt;If you&#39;ve used all these tips and you still cannot install a package even after blowing away ~/.ghc, then there is probably a dependency issue in the package you&#39;re using.  Haskell development is moving at a very rapid pace, so the upstream package maintainers may not be aware or have had time to fix the problem.  You can help by alerting them to the problem, or better yet, including a patch to fix it.&lt;/p&gt;

&lt;p&gt;Often the fix may be a simple dependency bump.  These are pretty simple to do yourself.  Use &quot;cabal unpack foo-package-0.0.1&quot; to download the package source and unzip it into the current directory.  Then edit the .cabal file, change the bounds, and build the local package with &quot;cabal install&quot;.  Sometimes I will also bump the version of the package itself and then use that as the lower bound in the local package that I&#39;m working on.  That way I know it will be using my fixed version of foo-package.  Don&#39;t be afraid to get your hands dirty.  You&#39;re literally one command away from hacking on upstream source.&lt;/p&gt;
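&lt;p&gt;The dependency-bump workflow is only a few commands.  The package name and version here are placeholders for whatever package is failing to build:&lt;/p&gt;

```shell
# Download and unzip the package source into the current directory:
cabal unpack foo-package-0.0.1
cd foo-package-0.0.1
# Relax the offending version bound (and optionally bump the package
# version so your local projects can require the fixed release):
$EDITOR foo-package.cabal
# Build and install the patched package into your user database:
cabal install
```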

&lt;p&gt;For the impatient, here&#39;s a summary of my tips for basic cabal use:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install the most recent versions of cabal-install&lt;/li&gt;
&lt;li&gt;Don&#39;t install things with --global&lt;/li&gt;
&lt;li&gt;Make sure $HOME/.cabal/bin is at the front of your path&lt;/li&gt;
&lt;li&gt;Don&#39;t be afraid to use &lt;code&gt;rm -fr ~/.ghc&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Use ghc-pkg for fine-grained package control&lt;/li&gt;
&lt;li&gt;Use &quot;cabal unpack&quot; to download upstream code so you can fix things yourself&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Using these techniques, I&#39;ve found that Cabal actually works extremely well for small scale Haskell development--development where you&#39;re only working on a single package at a time and everything else is on hackage.  Large scale development, where you&#39;re developing more than one local package, requires another set of tools.  But fortunately we already have some that work reasonably well.  I&#39;ll discuss those in my next post.&lt;/p&gt;

</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/1856103746939161061/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/1856103746939161061' title='6 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/1856103746939161061'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/1856103746939161061'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2012/11/a-practical-cabal-primer.html' title='A Practical Cabal Primer'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>6</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-3199987078325252240</id><published>2012-11-01T18:01:00.001-04:00</published><updated>2015-09-01T13:45:59.430-04:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="cabal"/><category scheme="http://www.blogger.com/atom/ns#" term="haskell"/><title type='text'>Why Cabal Has Problems</title><content type='html'>&lt;p&gt;Haskell&#39;s package system, henceforth just &quot;Cabal&quot; for simplicity, has gotten some harsh press in the tech world recently.  I want to emphasize a few points that I think are important to keep in mind in the discussion.&lt;/p&gt;

&lt;p&gt;First, this is a &lt;i&gt;hard&lt;/i&gt; problem.  There&#39;s a reason the term &quot;DLL hell&quot; existed long before Cabal.  I can&#39;t think of any package management system I&#39;ve used that didn&#39;t generate quite a bit of frustration at some point.&lt;/p&gt;

&lt;p&gt;Second, the Haskell ecosystem is also moving very quickly.  There&#39;s the ongoing iteratees/conduits/pipes debate of how to do IO in an efficient and scalable way.  Lenses have recently seen major advances in the state of the art.  There is tons of web framework activity.  I could go on and on.  So while Hackage may not be the largest database of reusable code, the larger ones like CPAN that have been around for a long time are probably not moving as fast (in terms of advances in core libraries).&lt;/p&gt;

&lt;p&gt;Third, I think Haskell has a unique ability to facilitate code reuse even for relatively small amounts of code.  The web framework scene demonstrates this fairly well.  As I&#39;ve said before, even though there are three main competing frameworks, libraries in each of the frameworks can be mixed and matched easily.  For example, web-routes-happstack provides convenience code for gluing together the web-routes package with happstack.  It is 82 lines of code.  web-routes-wai does the same thing for wai with 81 lines of code.  The same thing could be done for Snap with a similar amount of code.&lt;/p&gt;

&lt;p&gt;The languages with larger package repositories like Ruby and Python might also have small glue packages like this, but they don&#39;t have Haskell&#39;s powerful static type system.  This means that when a Cabal build fails because of dependency issues, you&#39;re catching an interaction much earlier than you would have caught it in the other languages.  This is what I&#39;m getting at when I say &quot;unique ability to facilitate code reuse&quot;.&lt;/p&gt;

&lt;p&gt;When you add Haskell&#39;s use of cross-module compiler optimizations to all these previous points, I think it makes a compelling case that the Haskell community is at or near the frontier of what has been done before even though we may be a ways away in terms of raw number of packages and developers.  Thus, it should not be surprising that there are problems.  When you&#39;re at the edge of the explored space, there&#39;s going to be some stumbling around in the dark and you might go down some dead end paths.  But that&#39;s not a sign that there&#39;s something wrong with the community.&lt;/p&gt;

&lt;p&gt;Note: The first published version of this article made some incorrect claims based on incorrect information about the number of Haskell packages compared to the number of packages in other languages.  I&#39;ve removed the incorrect numbers and adjusted my point.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/3199987078325252240/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/3199987078325252240' title='4 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3199987078325252240'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3199987078325252240'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2012/11/why-cabal-has-problems.html' title='Why Cabal Has Problems'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>4</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-3906362517288553492</id><published>2012-04-17T08:30:00.000-04:00</published><updated>2013-11-17T16:10:22.043-05:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="haskell"/><category scheme="http://www.blogger.com/atom/ns#" term="ltmt"/><title type='text'>LTMT Part 2: Monads</title><content type='html'>&lt;p&gt;In part 1 of this tutorial we talked about types and kinds. Knowledge of kinds will help to orient yourself in today&#39;s discussion of monads. &lt;/p&gt;
&lt;p&gt;What is a monad? When you type &amp;quot;monad&amp;quot; into &lt;a href=&quot;http://hayoo.info&quot;&gt;Hayoo&lt;/a&gt; the first result takes you to the documentation for the type class Monad. If you don&#39;t already have a basic familiarity with type classes, you can think of a type class as roughly equivalent to a Java interface. A type class defines a set of functions involving a certain data type. When a data type defines all the functions required by the type class, we say that it is an instance of that type class. When a type Foo is an instance of the Monad type class, you&#39;ll commonly hear people say &amp;quot;Foo is a monad&amp;quot;. Here is a version of the Monad type class.&lt;/p&gt;
&lt;pre class=&quot;sourceCode literate haskell&quot;&gt;class Monad m where
    return :: a -&amp;gt; m a
    (=&amp;lt;&amp;lt;) :: (a -&amp;gt; m b) -&amp;gt; m a -&amp;gt; m b&lt;/pre&gt;&lt;p&gt;(Note: If you&#39;re the untrusting type and looked up the real definition to verify that mine is accurate, you&#39;ll find that my version is slightly different. Don&#39;t worry about that right now. I did it intentionally, and there is a method to my madness.)&lt;/p&gt;&lt;p&gt;This basically says that in order for a data type to be an instance of the Monad type class, it has to define the two functions &lt;code&gt;return&lt;/code&gt; and &lt;code&gt;(=&amp;lt;&amp;lt;)&lt;/code&gt; (pronounced &amp;quot;bind&amp;quot;) that have the above type signatures. What do these type signatures tell us? Let&#39;s look at &lt;code&gt;return&lt;/code&gt; first. We see that it returns a value of type &lt;code&gt;m a&lt;/code&gt;. This tells us that &lt;code&gt;m&lt;/code&gt; has the kind signature &lt;code&gt;m :: * -&amp;gt; *&lt;/code&gt;. So whenever we hear someone say &amp;quot;Foo is a monad&amp;quot; we immediately know that &lt;code&gt;Foo :: * -&amp;gt; *&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;In part 1, you probably got tired of me emphasizing that a type is a context. When we look at return and bind, this starts to make more sense. The type &lt;code&gt;m a&lt;/code&gt; is just the type &lt;code&gt;a&lt;/code&gt; in the context &lt;code&gt;m&lt;/code&gt;. The type signature &lt;code&gt;return :: a -&amp;gt; m a&lt;/code&gt; tells us that the return function takes a plain value &lt;code&gt;a&lt;/code&gt; and puts that value into the context &lt;code&gt;m&lt;/code&gt;. So when we say something is a monad, we immediately know that we have a function called return that lets us put arbitrary other values into that context.&lt;/p&gt;
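To make this concrete, here is a sketch (my own, not from the post; the class and its methods are renamed myReturn/myBind so they don't clash with the Prelude's Monad) of how Maybe fits a class with exactly these signatures:

```haskell
-- A hypothetical copy of the flipped-bind class above, renamed
-- to avoid clashing with the Prelude's Monad.
class MyMonad m where
  myReturn :: a -> m a
  myBind   :: (a -> m b) -> m a -> m b

-- Maybe is the context of possible absence. myReturn puts a plain
-- value into that context; myBind unwraps the value (if present)
-- and applies the function, which re-wraps the result.
instance MyMonad Maybe where
  myReturn x        = Just x
  myBind _ Nothing  = Nothing
  myBind f (Just x) = f x
```

Note that `myBind` is the only place the `Just` wrapper is opened, and the function it applies must itself produce a wrapped result.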
&lt;p&gt;Now, what about bind? It looks much more complicated and scary, but it&#39;s really pretty simple. To see this, let&#39;s get rid of all the &lt;code&gt;m&lt;/code&gt;&#39;s in the type signature. Here&#39;s the before and after.&lt;/p&gt;
&lt;pre class=&quot;sourceCode literate haskell&quot;&gt;&lt;code class=&quot;sourceCode haskell&quot;&gt;&lt;span class=&quot;ot&quot;&gt;before ::&lt;/span&gt; (a &lt;span class=&quot;ot&quot;&gt;-&amp;gt;&lt;/span&gt; m b) &lt;span class=&quot;ot&quot;&gt;-&amp;gt;&lt;/span&gt; m a &lt;span class=&quot;ot&quot;&gt;-&amp;gt;&lt;/span&gt; m b
&lt;span class=&quot;ot&quot;&gt;after  ::&lt;/span&gt; (a &lt;span class=&quot;ot&quot;&gt;-&amp;gt;&lt;/span&gt; b) &lt;span class=&quot;ot&quot;&gt;-&amp;gt;&lt;/span&gt; a &lt;span class=&quot;ot&quot;&gt;-&amp;gt;&lt;/span&gt; b&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The type signature for &lt;code&gt;after&lt;/code&gt; might look familiar. It&#39;s exactly the same as the type signature for the &lt;code&gt;($)&lt;/code&gt; function! If you&#39;re not familiar with it, Haskell&#39;s &lt;code&gt;$&lt;/code&gt; function is just syntax sugar for function application. &lt;code&gt;(f $ a)&lt;/code&gt; is exactly the same as &lt;code&gt;(f a)&lt;/code&gt;. It applies the function &lt;code&gt;f&lt;/code&gt; to its argument &lt;code&gt;a&lt;/code&gt;. It is useful because it has very low precedence and is right associative, so it is a nice syntax sugar that allows us to eliminate parenthesis in certain situations. When you realize that &lt;code&gt;(=&amp;lt;&amp;lt;)&lt;/code&gt; is roughly analogous to the concept of function application (modulo the addition of a context &lt;code&gt;m&lt;/code&gt;), it suddenly makes a lot more sense.&lt;/p&gt;
&lt;p&gt;So now what happens when we look at bind&#39;s type signature with the &lt;code&gt;m&lt;/code&gt;&#39;s back in? &lt;code&gt;(f =&amp;lt;&amp;lt; k)&lt;/code&gt; applies the function &lt;code&gt;f&lt;/code&gt; to the value &lt;code&gt;k&lt;/code&gt;. However, the crucial point is that &lt;code&gt;k&lt;/code&gt; is a value wrapped in the context &lt;code&gt;m&lt;/code&gt;, but &lt;code&gt;f&lt;/code&gt;&#39;s parameter is an unwrapped value &lt;code&gt;a&lt;/code&gt;. From this we see that the bind function&#39;s main purpose is to pull a value out of the context &lt;code&gt;m&lt;/code&gt;, apply the function &lt;code&gt;f&lt;/code&gt; to it, and return the result wrapped in the context &lt;code&gt;m&lt;/code&gt; again.&lt;/p&gt;
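Here is a short sketch of the analogy in action in the Maybe context (the halve helper is my own hypothetical example, not from the post):

```haskell
-- A hypothetical context-aware function: succeeds only on even input.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- ($) applies a plain function to a plain value:
plain :: Int
plain = negate $ 10                    -- -10

-- (=<<) applies a context-aware function to a value in the context:
inContext :: Maybe Int
inContext = halve =<< Just 10          -- Just 5

-- Like ($), (=<<) is right associative, so chains read naturally:
chained :: Maybe Int
chained = halve =<< halve =<< Just 12  -- Just 3

-- A failure anywhere in the chain propagates to the final result:
failed :: Maybe Int
failed = halve =<< Just 7              -- Nothing
```

The last two examples show why the re-wrapping matters: each step's output is in the context, so it is ready to be the next step's input.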
&lt;p&gt;The monad type class does not provide any mechanism for unconditionally pulling a value out of the context. The only way to get access to the unwrapped value is with the bind function, but bind does this in a controlled way and requires the function to wrap things up again before the result is returned. This behavior, enabled by Haskell&#39;s strong static type system, provides complete control over side effects and mutability.&lt;/p&gt;
&lt;p&gt;Some monads do provide a way to get a value out of the context, but the choice of whether to do so is completely up to the author of said monad. It is not something inherent in the concept of a monad.&lt;/p&gt;
&lt;p&gt;Monads wouldn&#39;t be very fun to use if all you had was return, bind, and derived functions. To make them more usable, Haskell has a special syntax called &amp;quot;do notation&amp;quot;. The basic idea behind do notation is that there&#39;s a bind between every line, and you can do &lt;code&gt;a &amp;lt;- func&lt;/code&gt; to unwrap the return value of func and make it available to later lines with the identifier &#39;a&#39;.&lt;/p&gt;
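For instance, here is a sketch (the addM function is my own hypothetical example) of a do block alongside the explicit binds it stands for:

```haskell
-- Adding two numbers in the Maybe context using do notation.
addM :: Maybe Int -> Maybe Int -> Maybe Int
addM mx my = do
  x <- mx
  y <- my
  return (x + y)

-- The same function with the binds written out by hand: each
-- "a <- func" line becomes a lambda fed to (=<<).
addM' :: Maybe Int -> Maybe Int -> Maybe Int
addM' mx my = (\x -> (\y -> return (x + y)) =<< my) =<< mx
```

Both versions return `Just 3` for `addM (Just 1) (Just 2)`, and `Nothing` if either argument is `Nothing`.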
&lt;p&gt;You can find a more detailed treatment of do notation elsewhere. I hear that &lt;a href=&quot;http://learnyouahaskell.com/a-fistful-of-monads&quot;&gt;Learn You a Haskell&lt;/a&gt; and &lt;a href=&quot;http://book.realworldhaskell.org/read/monads.html&quot;&gt;Real World Haskell&lt;/a&gt; are good.&lt;/p&gt;
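As a taste of the derived functions mentioned above, here is a sketch of how the standard sequence function could be written using nothing but return and bind (renamed mySequence to avoid the Prelude clash):

```haskell
-- Turn a list of values-in-context into a list of values, in context,
-- using only the two Monad primitives.
mySequence :: Monad m => [m a] -> m [a]
mySequence []       = return []
mySequence (m : ms) = (\x -> (\xs -> return (x : xs)) =<< mySequence ms) =<< m
```

In the Maybe context, `mySequence [Just 1, Just 2, Just 3]` is `Just [1,2,3]`, while a single `Nothing` in the list makes the whole result `Nothing`.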
&lt;p&gt;In summary, a monad is a certain type of context that provides two things: a way to put things into the context, and function application within the context. There is no way to get things out. To get things out, you have to use bind to take yourself into the context. Once you have these two operations, there are lots of other more complicated operations built on the basic primitives that are provided by the API. Much of this is provided in &lt;a href=&quot;http://www.haskell.org/ghc/docs/7.0-latest/html/libraries/base-4.3.1.0/Control-Monad.html&quot;&gt;Control.Monad&lt;/a&gt;. You probably won&#39;t learn all this stuff in a day. Just dive in and use these concepts in real code. Eventually you&#39;ll find that the patterns are sinking in and becoming clearer.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/3906362517288553492/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/3906362517288553492' title='7 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3906362517288553492'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/3906362517288553492'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2012/04/ltmt-part-2-monads.html' title='LTMT Part 2: Monads'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' 
src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>7</thr:total></entry><entry><id>tag:blogger.com,1999:blog-8768401356830813531.post-5191607664469709891</id><published>2012-04-16T08:30:00.000-04:00</published><updated>2012-04-17T18:49:47.755-04:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="haskell"/><category scheme="http://www.blogger.com/atom/ns#" term="ltmt"/><title type='text'>The Less Travelled Monad Tutorial: Understanding Kinds</title><content type='html'>&lt;p&gt;This is part 1 of a monad tutorial (but as we will see, it&#39;s more than your average monad tutorial). If you already have a strong grasp of types, kinds, monads, and monad transformers, and type signatures like &lt;code&gt;newtype RST r s m a = RST { runRST :: r -&amp;gt; s -&amp;gt; m (a, s) }&lt;/code&gt; don&#39;t make your eyes glaze over, then reading this won&#39;t change your life. If you don&#39;t, then maybe it will.&lt;/p&gt;
&lt;p&gt;More seriously, when I was learning Haskell I got the impression that some topics were &amp;quot;more advanced&amp;quot; and should wait until later. Now, a few years in, I feel that understanding some of these topics earlier would have significantly sped up the learning process for me. If there are other people out there whose brains work somewhat like mine, then maybe they will be able to benefit from this tutorial. I can&#39;t say that everything I say here will be new, but I haven&#39;t seen these concepts organized in this way before.&lt;/p&gt;
&lt;p&gt;This tutorial is not for absolute beginners. It assumes a basic knowledge of Haskell including the basics of data types, type signatures, and type classes. If you&#39;ve been programming Haskell for a little bit, but are getting stuck on monads or monad transformers, then you might find some help here.&lt;/p&gt;
&lt;pre class=&quot;sourceCode literate haskell&quot;&gt;&lt;code class=&quot;sourceCode haskell&quot;&gt;&lt;span class=&quot;kw&quot;&gt;data&lt;/span&gt; &lt;span class=&quot;dt&quot;&gt;Distance&lt;/span&gt; &lt;span class=&quot;fu&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;dt&quot;&gt;Dist&lt;/span&gt; &lt;span class=&quot;dt&quot;&gt;Double&lt;/span&gt;
&lt;span class=&quot;kw&quot;&gt;data&lt;/span&gt; &lt;span class=&quot;dt&quot;&gt;Mass&lt;/span&gt; &lt;span class=&quot;fu&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;dt&quot;&gt;Mass&lt;/span&gt; &lt;span class=&quot;dt&quot;&gt;Double&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This code defines data types called Distance and Mass. The name on the left side of the equals sign is called a &lt;strong&gt;type constructor&lt;/strong&gt; (or sometimes shortened to just &lt;strong&gt;type&lt;/strong&gt;). The Haskell compiler automatically creates functions from the names just to the right of the equals sign. These functions are called &lt;strong&gt;data constructors&lt;/strong&gt; because they construct the types Distance and Mass. Since they are functions, they are also first-class values, which means they have types as seen in the following ghci session.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ ghci ltmt.hs
GHCi, version 7.0.3: http://www.haskell.org/ghc/  :? for help
ghci&amp;gt; :t Dist
Dist :: Double -&amp;gt; Distance
ghci&amp;gt; :t Mass
Mass :: Double -&amp;gt; Mass
ghci&amp;gt; :t Distance
&amp;lt;interactive&amp;gt;:1:1: Not in scope: data constructor `Distance&amp;#39;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We see here that Dist and Mass are functions that return the types Distance and Mass respectively. Frequently you&#39;ll encounter code where the type and the constructor have the same name (as we have here with Mass). Distance, however, illustrates that these are really two separate entities. Distance is the type and Dist is the constructor. Distance itself is a type, not a value, so it has no type of its own and the &amp;quot;:t&amp;quot; command fails for it.&lt;/p&gt;
&lt;p&gt;Now, we need to pause for a moment and think about the meaning of these things. What is the Distance type? Well, when we look at the constructor, we can see that a value of type Distance contains a single Double. The constructor function doesn&#39;t actually do anything to the Double value in the process of constructing the Distance value. All it does is create a new context for thinking about a Double, specifically the context of a Double that we intend to represent a distance quantity. (Well, that&#39;s not completely true, but for the purposes of this tutorial we&#39;ll ignore those details.) Let me repeat that. &lt;strong&gt;A type is just a context.&lt;/strong&gt; This probably seems so obvious that you&#39;re starting to wonder about me. But I&#39;m saying it because I think that keeping it in mind will help when talking about monads later.&lt;/p&gt;
&lt;p&gt;Now let&#39;s look at another data declaration.&lt;/p&gt;
&lt;pre class=&quot;sourceCode literate haskell&quot;&gt;&lt;code class=&quot;sourceCode haskell&quot;&gt;&lt;span class=&quot;kw&quot;&gt;data&lt;/span&gt; &lt;span class=&quot;dt&quot;&gt;Pair&lt;/span&gt; a &lt;span class=&quot;fu&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;dt&quot;&gt;MkPair&lt;/span&gt; a a&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This one is more interesting. The type constructor Pair takes an argument. The argument is some other type &lt;code&gt;a&lt;/code&gt; that is used in some part of the data constructor. When we look to the right side, we see that the data constructor is called MkPair and it constructs a value of type &lt;code&gt;Pair a&lt;/code&gt; from two values of type &lt;code&gt;a&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;MkPair :: a -&amp;gt; a -&amp;gt; Pair a&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The same thing we said for the above types Distance and Mass applies here. The type constructor &lt;code&gt;Pair&lt;/code&gt; represents a context. It&#39;s a context representing a pair of values. The type &lt;code&gt;Pair Int&lt;/code&gt; represents a pair of Ints. The type &lt;code&gt;Pair String&lt;/code&gt; represents a pair of strings. And on and on for whatever concrete type we use in the place of &lt;code&gt;a&lt;/code&gt;.&lt;/p&gt;
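To see the context idea in code, here is a sketch (the both function is my own, not from the post) of a function that works uniformly over any Pair, whatever the element type:

```haskell
data Pair a = MkPair a a
  deriving (Show, Eq)

-- Apply a function to both halves of the pair, staying in the
-- Pair context. Works for Pair Int, Pair String, and so on.
both :: (a -> b) -> Pair a -> Pair b
both f (MkPair x y) = MkPair (f x) (f y)
```

For example, `both (+1) (MkPair 1 2)` gives `MkPair 2 3`, and `both length (MkPair "ab" "cde")` gives `MkPair 2 3` as well.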
&lt;p&gt;Again, this is all very straightforward. But there is a significant distinction between the two type constructors Pair and Distance. Pair requires a type parameter, while Distance does not. This brings us to the topic of kinds. (Most people postpone this topic until later, but it&#39;s not hard to understand and I think it helps to clarify things later.) You know those analogy questions they use on standardized tests? Here&#39;s a completed one for you:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;values : types    ::   types : kinds&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Just as we categorize &lt;strong&gt;values&lt;/strong&gt; by &lt;strong&gt;type&lt;/strong&gt;, we categorize &lt;strong&gt;type constructors&lt;/strong&gt; by &lt;strong&gt;kind&lt;/strong&gt;. GHCi lets us look up a type constructor&#39;s kind with the &amp;quot;:k&amp;quot; command.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ghci&amp;gt; :k Distance
Distance :: *
ghci&amp;gt; :k Mass
Mass :: *
ghci&amp;gt; :k Dist
&amp;lt;interactive&amp;gt;:1:1: Not in scope: type constructor or class `Dist&amp;#39;
ghci&amp;gt; :k Pair
Pair :: * -&amp;gt; *
ghci&amp;gt; :k Pair Mass
Pair Mass :: *&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In English we would say &amp;quot;Distance has kind *&amp;quot;, and &amp;quot;Pair has kind * -&amp;gt; *&amp;quot;. Kind signatures look similar to type signatures because they are. When we use Mass as Pair&#39;s first type argument, the result has kind *. The Haskell report defines kind signatures with the following two rules.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The symbol * represents the kind of all nullary type constructors (constructors that don&#39;t take any parameters).&lt;/li&gt;
&lt;li&gt;If k1 and k2 are kinds, then k1-&amp;gt;k2 is the kind of types that take one parameter that is a type of kind k1 and return a type of kind k2.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As an exercise, see if you can work out the kind signatures for the following type constructors. You can check your work with GHCi.&lt;/p&gt;
&lt;pre class=&quot;sourceCode literate haskell&quot;&gt;&lt;code class=&quot;sourceCode haskell&quot;&gt;&lt;span class=&quot;kw&quot;&gt;data&lt;/span&gt; &lt;span class=&quot;dt&quot;&gt;Tuple&lt;/span&gt; a b &lt;span class=&quot;fu&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;dt&quot;&gt;Tuple&lt;/span&gt; a b
&lt;span class=&quot;kw&quot;&gt;data&lt;/span&gt; &lt;span class=&quot;dt&quot;&gt;HardA&lt;/span&gt; a &lt;span class=&quot;fu&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;dt&quot;&gt;HardA&lt;/span&gt; (a &lt;span class=&quot;dt&quot;&gt;Int&lt;/span&gt;)
&lt;span class=&quot;kw&quot;&gt;data&lt;/span&gt; &lt;span class=&quot;dt&quot;&gt;HardB&lt;/span&gt; a b c &lt;span class=&quot;fu&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;dt&quot;&gt;HardB&lt;/span&gt; (a b) (c a &lt;span class=&quot;dt&quot;&gt;Int&lt;/span&gt;)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Before reading further, I suggest attempting to figure out the kind signatures for HardA and HardB because they involve a key pattern that will come up later.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Welcome back. The first example is just a simple extension of what we&#39;ve already seen. The type constructor Tuple has two arguments, so its kind signature is &lt;code&gt;Tuple :: * -&amp;gt; * -&amp;gt; *&lt;/code&gt;. Also if you try :t you&#39;ll see that the data constructor&#39;s type signature is &lt;code&gt;Tuple :: a -&amp;gt; b -&amp;gt; Tuple a b&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;In the case of the last two types, it may be a little less obvious. But they build on each other in fairly small, manageable steps. For HardA, in the part to the left of the equals sign we see that there is one type parameter &#39;a&#39;. From this, we can deduce that HardA&#39;s kind signature is something of the form &lt;code&gt;? -&amp;gt; *&lt;/code&gt;, but we don&#39;t know exactly what to put at the question mark. On the right side, all the individual arguments to the data constructor must have kind *. If &lt;code&gt;(a Int) :: *&lt;/code&gt;, then the type &#39;a&#39; must be a type constructor that takes one parameter. That is, it has kind &lt;code&gt;* -&amp;gt; *&lt;/code&gt;, which is what we must substitute for the question mark. Therefore, we get the final kind signature &lt;code&gt;HardA :: (* -&amp;gt; *) -&amp;gt; *&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;HardB is a very contrived and much more complex case that exercises all the above simple principles. From &lt;code&gt;HardB a b c&lt;/code&gt; we see that HardB has three type parameters, so its kind signature has the form &lt;code&gt;HardB :: a -&amp;gt; b -&amp;gt; c -&amp;gt; *&lt;/code&gt;. On the right side the &lt;code&gt;(a b)&lt;/code&gt; tells us that &lt;code&gt;b :: *&lt;/code&gt; and &lt;code&gt;a :: * -&amp;gt; *&lt;/code&gt;. The second part &lt;code&gt;(c a Int)&lt;/code&gt; means that c is a type constructor with two parameters whose first parameter is a, which has the kind signature we described above. So this gives us &lt;code&gt;c :: (* -&amp;gt; *) -&amp;gt; * -&amp;gt; *&lt;/code&gt;. Now, substituting all these in, we get &lt;code&gt;HardB :: (* -&amp;gt; *) -&amp;gt; * -&amp;gt; ((* -&amp;gt; *) -&amp;gt; * -&amp;gt; *) -&amp;gt; *&lt;/code&gt;.&lt;/p&gt;
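One way to check these answers without :k (my own sketch; the Wrap helper and the example values are hypothetical) is to instantiate each parameter with a concrete constructor of the right kind and see that the definitions typecheck:

```haskell
data Tuple a b   = Tuple a b
data HardA a     = HardA (a Int)
data HardB a b c = HardB (a b) (c a Int)

-- A helper of kind (* -> *) -> * -> *, usable as HardB's third parameter.
data Wrap f x = Wrap (f x)

-- These definitions only compile if the kinds work out as derived above.
tupleEx :: Tuple Int Bool           -- Tuple :: * -> * -> *
tupleEx = Tuple 1 True

hardAEx :: HardA Maybe              -- Maybe :: * -> *
hardAEx = HardA (Just 5)

hardBEx :: HardB Maybe Int Wrap
hardBEx = HardB (Just 3) (Wrap (Just 4))
```

In the last example the field types work out to `Maybe Int` and `Wrap Maybe Int`, exactly the juxtaposition pattern described below.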
&lt;p&gt;The point of all this is to show that when you see juxtaposition of type constructors (something of the form &lt;code&gt;(a b)&lt;/code&gt; in a type signature), it is telling you that the context a is a non-nullary type constructor and b is its first parameter.&lt;/p&gt;
&lt;p&gt;Continue on to &lt;a href=&quot;http://softwaresimply.blogspot.com/2012/04/ltmt-part-2-monads.html&quot;&gt;Part 2 of the Less Travelled Monad Tutorial&lt;/a&gt;&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://softwaresimply.blogspot.com/feeds/5191607664469709891/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/8768401356830813531/5191607664469709891' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/5191607664469709891'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/8768401356830813531/posts/default/5191607664469709891'/><link rel='alternate' type='text/html' href='http://softwaresimply.blogspot.com/2012/04/less-travelled-monad-tutorial-part-1.html' title='The Less Travelled Monad Tutorial: Understanding Kinds'/><author><name>mightybyte</name><uri>http://www.blogger.com/profile/15198998578494149797</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>3</thr:total></entry></feed>