
Object Oriented Haskell

You may recall, earlier this year I wrote about object orientation in C. The basic idea being that “Object Oriented Programming” is more a mindset than a language feature. You can do object orientation and access control in C using free-floating functions and opaque structs. Well, guess what? You can do Object Oriented Programming in Haskell as well!

As a quick recap, if you didn’t read "C With Classes", for our purposes there are three components that need to be present to be considered “an object”: data consolidation, access control, and method calling. Inheritance is also important to real “Object Oriented Programming” (or OOP, as I’ll start calling it), but is really just gravy. Inheritance is largely supported by typeclasses in Haskell, so I won’t be going into it today.

Functional OOP

What would OOP look like in Haskell? Thanks to the magic of higher-order functions, we can actually very closely approximate what you’d see in a traditional OOP language, such as Java or C++.

Data

This part is trivial: there is literally a data keyword for this purpose. You should have this down by now.

Access Control

Access Control can be accomplished by way of the module keyword. Simply expose what you’d like to be public, and don’t expose your private members. Obviously, if you have private fields in your “Class”, you should make a factory function for your class instead of exposing its constructor. This is all pretty standard stuff.
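To make that concrete, here is a minimal sketch of a module wrapper for the Employee type we’re about to define (the module name, export list, and the newEmployee factory are my own illustration; newEmployee is the factory used in the GHCi session later on):

module Employee
    ( Employee      -- the type is exported...
    , newEmployee   -- ...but only this factory can build one; the constructor stays hidden
    , name
    , employeeId
    , title
    ) where

-- The factory simply forwards to the hidden record constructor.
newEmployee :: String -> Integer -> String -> Employee
newEmployee = Employee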

Method Calls

This is an area where Haskell shines, and C shows its ugly side. In Haskell, we can actually create a method call operator, and end up with something that looks very much like the traditional Class.method calling convention that we’re used to!

The Method Call Operator

First, we need some contrived classes. Let’s go with everybody’s favorite: the Employee!

data Employee = Employee {name :: String, employeeId :: Integer, title :: String}

Nothing fancy or non-standard here. We created an Employee “Class” in the traditional Haskell way. Due to our use of record syntax, we already have three getters: name, employeeId, and title.

Let’s make another function, so that we can have a method that takes arguments:

isSeniorTo :: Employee -> Employee -> Bool
isSeniorTo s f = (employeeId s) > (employeeId f)

There is something extremely important to note about this function: the last argument is the object, not the first as it was in “C With Classes”. The reason for this is to allow us to partially apply this function; this will be crucial.

Now, let’s give it a shot to make sure everything is working:

*ghci> let boss = newEmployee "Chris" 1 "Author"
*ghci> let notBoss = newEmployee "Geoffrey" 2 "Associate"
*ghci> name boss
"Chris"
*ghci> isSeniorTo notBoss boss
True

All looks well: we create two Employee objects, and since Chris’s employee ID is lower than Geoffrey’s, the isSeniorTo method returns True. But this is all very wonky still; let’s create that method call operator already!

(>-) :: a -> (a -> b) -> b
(>-) o f = f o

Since . and -> are already spoken for, I’ve gone with >- for my method call operator. It takes a value of some arbitrary type a and a function that takes that same type a and returns a type b; finally, a type b is returned. The definition of this function couldn’t be simpler: we call the function on the passed-in object! Let’s see this in action:

*ghci> notBoss >- title
"Associate"
*ghci> boss >- isSeniorTo notBoss
True

This is just about as elegant as it gets; using the method call operator we just defined, we call a method on an object! In the case of isSeniorTo, we partially apply the function with all but the last argument, then the method call operator is able to use it just as any other function.

But something doesn’t sit right. Isn’t the definition of isSeniorTo a bit smelly? It calls methods on objects; shouldn’t it use the method call operator? Now that we have our method call operator, let’s fix that function:

isSeniorTo :: Employee -> Employee -> Bool
isSeniorTo s f = (s >- employeeId) > (f >- employeeId)

There, that’s better. isSeniorTo now properly calls methods on s and f, and all is again well in the world.

Here’s The Kicker

You may be experiencing some deja vu from all of this. It feels like we’ve done this sort of thing before. But where? You may recall this little operator:

*ghci> :t (>>=)
(>>=) :: Monad m => m a -> (a -> m b) -> m b

That’s right, the monadic bind operator looks suspiciously like our method call operator. But does it behave the same?

*ghci> 1 >- (+) 1
2
*ghci> Just 1 >>= (\a -> return $ a + 1)
Just 2

Let’s ignore for a second that we had to use a monad for the bind operator, and note that the two basically did the same thing. In fact, to further illustrate the point, let’s make a Class monad:

data Class a = Class a deriving (Show)

instance Monad Class where
    (Class a) >>= f = f a
    return = Class
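One caveat: on GHC 7.10 and later, Monad has Applicative (and therefore Functor) as a superclass, so to actually compile this instance today you’d also need something along these lines:

instance Functor Class where
    fmap f (Class a) = Class (f a)

instance Applicative Class where
    pure = Class
    (Class f) <*> (Class a) = Class (f a)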

Unlike most monads, which do important work, our Class monad does nothing but prove a point. However, you should note that the implementation of the bind operator is essentially the same as the implementation of the method call operator. Now, let’s see our monad in action!

*ghci> let boss = Employee "Chris" 1 "Author"
*ghci> boss >- name
"Chris"
*ghci> Class boss >>= return . name
Class "Chris"

As you can see, they essentially do the same thing. It turns out that Haskell had classes all along!

Of course, this isn’t exactly true. I’d say that the State monad serves many of the same purposes as objects in OOP languages. However, the semantics of things like Reader, Maybe, and IO have little in common with objects. But much like we implemented objects in Haskell, the various OOP languages are adding monads to their standard libraries.

Indeed, OOP and functional programming are not supported by languages; they are supported by programmers. Some languages may make one or the other easier, but the differences are small, and get smaller as time passes.

List as a Monad

Much ink has been spilled on the subject of “Lists as Monads”. Most of it covers just the fact that they are monads, of course, not what that means for you. This doesn’t really seem surprising to me; there are better monads to use to describe how monads work. Additionally, lists have their own special functions for working with them; you don’t fmap a list, you just call map.

Presumably, Lists as Monads is something that seasoned Monadimancers understand. Maybe it’s some sort of rite of passage. But no more. Today I’ll shed light on the arcane mysteries of Lists as Monads.

Motivation

But first we need a problem to solve. For my example I’ll be using the cartesian product of two lists. The cartesian product of two sets (we’ll be using lists) is defined as:

A × B is the set of all ordered pairs (a, b) such that a is a member of set A and b is a member of set B.

…translated into Haskell we should get a function with the following signature:

cartProd :: [a] -> [b] -> [(a, b)]

How might this function look? Well, we’d need three functions. The first function should have the signature:

cartProd'' :: a -> b -> (a, b)

This function is pretty straightforward: it pairs an a and a b. Next we’d need the function:

cartProd' :: [b] -> a -> [(a, b)]

This function maps (cartProd'' a) over the list of bs, producing the list [(a, b)]. Finally, we need a function to tie it all together:

cartProd :: [a] -> [b] -> [(a, b)]

This function partially applies cartProd' with the list of bs, maps that over the list of as, then concats the resulting list of lists. An implementation might look like this:

cartProd :: [a] -> [b] -> [(a, b)]
cartProd l r = concat $ map (cartProd' r) l
    where
        cartProd' :: [b] -> a -> [(a, b)]
        cartProd' r l = map (cartProd'' l) r
            where
                cartProd'' :: a -> b -> (a, b)
                cartProd'' l r = (l, r)

Did you go cross-eyed? Not exactly the most concise function that’s ever been written. Surely there is a better way…

Binding Our Lists

You may remember a few weeks back I talked about using the Maybe Monad. The gist of that post was that if you’re inside a do block, you can treat your Maybe as if it were just a plain value. It turns out that we can do something similar with Lists.

The Monad instance for [] is somewhat complicated, but its operation is straightforward. If >>= takes a Monad a, a function that takes an a and returns a Monad b, and returns a Monad b, what happens if we bind a simple function to [1]?

Prelude> [1] >>= (\a -> return $ a + 1)
[2]

…ok, so it added 1 to 1, and stuffed it into a list. Seems pretty straightforward. Let’s try something a little more complicated:

Prelude> [1, 2] >>= (\a -> return $ a + 1)
[2,3]

…so this time it added 1 to each list node, as if we’d called map ((+) 1) [1, 2]. Let’s try something else:

Prelude> [1] >>= (\a -> [(a + 1), (a - 1)])
[2,0]

…this time we tried it in reverse. We bound [1] to a function that returns a list with two elements, and the resulting list contained two elements. Again, nothing groundbreaking here, but what if we do both?

Prelude> [1, 2] >>= (\a -> [(a + 1), (a - 1)])
[2,0,3,1]

…now we get a list with four elements: 1+1, 1-1, 2+1, and 2-1. To replicate this behavior we can’t just map the lambda over the original list; we also need a call to concat. Let’s expand this out a bit further:

Prelude> [1, 2] >>= (\a -> [(a + 1), (a - 1)]) >>= (\b -> [b, b])
[2,2,0,0,3,3,1,1]

…all simple functions, but if we were to try to do this without the use of List’s monadic functions it’d become a mess like my cartProd function. Speaking of which…

The Better Way

Getting back to our original topic. Now that we have a feel for how List’s monadic interface works, how could we re-implement cartProd to not be terrible? Simple:

cartProd :: [a] -> [b] -> [(a, b)]
cartProd l r = do
    l' <- l
    r' <- r
    [(l', r')]
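Before admiring it too much, it’s worth seeing what the do block is hiding. It desugars to roughly the nested binds below, and since the list version of >>= behaves like flip concatMap, that in turn is the same map-and-concat we were doing by hand (cartProdBind and cartProdConcatMap are just names I made up for this illustration):

-- the do block above, written as explicit binds
cartProdBind :: [a] -> [b] -> [(a, b)]
cartProdBind l r = l >>= \l' -> r >>= \r' -> [(l', r')]

-- the same thing, with the binds spelled as concatMap
cartProdConcatMap :: [a] -> [b] -> [(a, b)]
cartProdConcatMap l r = concatMap (\l' -> concatMap (\r' -> [(l', r')]) r) l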

I’m often hesitant to toot my own horn and call something I wrote “elegant”, but it’s hard to argue that this function isn’t. In three short lines, I managed to re-implement that nested monstrosity I wrote before. There is one fairly massive gotcha here…

In the first and second lines of the do block, we bind our two lists to labels. It’s important to note that the order of these matters: the lists will be cycled through from the bottom up, meaning that for each element of l, all elements of r will be evaluated. For example, using our cartProd:

Prelude> cartProd [1,2] [3,4]
[(1,3),(1,4),(2,3),(2,4)]

1 is paired with 3, then 1 is paired with 4, then 2 is paired with 3 and 2 is paired with 4. Were we to swap the order in which we bind l and r to look like this:

cartProd :: [a] -> [b] -> [(a, b)]
cartProd l r = do
    r' <- r
    l' <- l
    [(l', r')]

Then the output would look like this:

Prelude> cartProd [1,2] [3,4]
[(1,3),(2,3),(1,4),(2,4)]

1 is paired with 3, then 2 is paired with 3, then 1 is paired with 4, then 2 is paired with 4. Before, the results were grouped by the first element of the tuple; here they are grouped by the second. For our purposes, both results are valid per the definition of the cartesian product, but that may not hold for you, so be careful.
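As an aside, list comprehensions desugar to the same machinery, so yet another equivalent way to write cartProd is the one-liner below; the same ordering rules apply to the order of the generators:

cartProd :: [a] -> [b] -> [(a, b)]
cartProd l r = [ (l', r') | l' <- l, r' <- r ]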

Maybe I Should Be In The Maybe Monad

If you’ve spent any time with Haskell, then you’ve surely encountered Maybe. Maybe is Haskell’s version of testing your pointer for NULL, only it’s better because it’s impossible to accidentally dereference a Nothing.

You’ve also probably thought it was just so annoying. You test your Maybe a to ensure it’s not Nothing, but you still have to go about getting the value out of the Maybe so you can actually do something with it.

The Problem

Let’s forgo the usual contrived examples and look at an actual problem I faced. While working on the Server Console, I was faced with dealing with a query string. The end of a query string contains key/value pairs. Happstack conveniently decodes these into this type:

[(String, Input)]

I needed to write a function to look up a key within this list, and return its value. As we all know, there’s no way to guarantee that a given key is in the list, so the function must be able to handle this. There are a few ways we could go about this, but this seems to me to be an ideal place to use a Maybe. Suppose we write our lookup function like so:

lookup :: Request -> String -> Maybe String

This is logically sound, but now we have an annoying Maybe to work with. Suppose we’re working in a ServerPart Response. We might write a response function like so:

handler :: ServerPart Response
handler = do
    req <- askRq
    paths <- return $ rqPaths req
    page <- return $ lookup req "page_number"
    case page of
        Nothing -> mzero
        (Just a) -> do
            items <- return $ lookup req "items_per_page"
            case items of
                Nothing -> mzero
                (Just b) -> h' paths a b

Yucky! After each call to lookup, we check to see if the call succeeded. This gives us a giant tree that’s surely pushing off the right side of my blog page. There must be a better way.

Doing It Wrong

Shockingly, this is not the best way to do this. It turns out that writing our functions in the Maybe monad is the answer. Take the following function:

hTrpl :: Request -> Maybe ([String], String, String)
hTrpl r = do
    paths <- return $ rqPaths r
    page <- lookup r "page_number"
    items <- lookup r "items_per_page"
    return (paths, page, items)

… now we can re-write handler like so:

handler :: ServerPart Response
handler = do
    req <- askRq
    triple <- return $ hTrpl req
    case triple of
        Nothing -> mzero
        (Just (a, b, c)) -> h' a b c

Much better, right? But why don’t we have to test the return values of lookup? The answer to that question lies in the implementation of Maybe‘s >>= operator:

instance Monad Maybe where
    (Just x) >>= k = k x
    Nothing >>= _ = Nothing

Recall that do notation is just syntactic sugar around >>= and a whole bunch of lambdas. With that in mind, you can see that we are actually binding functions together. Per Maybe‘s bind implementation, if you bind Just a to a function, it calls the function on a. If you bind Nothing to a function, it ignores the function, and just returns Nothing.
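To make that concrete, the hTrpl do block above desugars to roughly this chain of binds:

hTrpl :: Request -> Maybe ([String], String, String)
hTrpl r =
    return (rqPaths r) >>= \paths ->
    lookup r "page_number" >>= \page ->
    lookup r "items_per_page" >>= \items ->
    return (paths, page, items)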

What this means for us is that so long as we’re inside the Maybe monad, we can pretend all functions return successful values. Maybe allows us to defer testing for failure! The first time a Nothing is returned, functions stop getting called, so we don’t even have to worry about performance losses from not immediately returning from the function! So long as we’re inside of Maybe, there will be Peace On Earth. We code our successful code branch, and then when all is said and done and the dust has settled, we can see if it all worked out.

Next time you find yourself testing a Maybe more than once in a function, ask yourself: should I be in the Maybe monad right now?

Yet Another Aeson Tutorial

Lately I’ve been trying to make sense of Aeson, a prominent JSON parsing library for Haskell. If you do some googling you’ll see two competing notions: 1) Aeson is a powerful library, but its documentation is terrible, and 2) about 10,000 Aeson tutorials. Even the Aeson hackage page has a huge “How To Use This Library” section with several examples. These examples usually take the form of:

“Make a type for your JSON data!”

data Contrived = Contrived
    { field1 :: String
    , field2 :: String
    } deriving (Show)

“Write a FromJSON instance!”

instance FromJSON Contrived where
    parseJSON (Object v) = Contrived
        <$> v .: "field1"
        <*> v .: "field2"

“Yay!!”

I’ll spare you the long form; other people have done it already and I’m sure you’ve seen it before. What I quickly noticed was that this contrived example wasn’t quite cutting it, and I’ve had some challenges to overcome. I’d like to share some of the wisdom I’ve accumulated on this subject.

Nested JSON

The first problem I’ve run into is nested JSON. Let’s take a look at an example:

{ "error": { "message": "Message describing the error", "type": "OAuthException", "code": 190 , "error_subcode": 460 } }

That is an example of an exception that can get returned by any Facebook Graph API call. You’ll notice that the exception data is actually contained in a nested JSON object. If passed to a parseJSON function, the only field retrievable by operator .: is “error”, which returns the JSON object. We could define two types and two instances for this like:

data EXTopLevel = EXTopLevel
    { getEx :: FBException
    } deriving (Show)

data FBException = FBException
    { exMsg :: String
    , exType :: String
    , exCode :: Int
    , exSubCode :: Maybe Int
    } deriving (Show)

instance FromJSON EXTopLevel where
    parseJSON (Object v) = EXTopLevel
        <$> v .: "error"

instance FromJSON FBException where
    parseJSON (Object v) = FBException
        <$> v .: "message"
        <*> v .: "type"
        <*> v .: "code"
        <*> v .:? "error_subcode"

In this case, you could decode to an EXTopLevel, and call getEx to get the actual exception. However, it doesn’t take a doctor of computer science to see that this is silly. Nobody needs the top-level object, and this is a silly amount of boilerplate. The solution? We can use our friend the bind operator. Aeson’s Parser type is a Monad, and its bind function allows us to drill down into nested objects. We can re-implement that mess above simply as:

data FBException = FBException
    { exMsg :: String
    , exType :: String
    , exCode :: Int
    , exSubCode :: Maybe Int
    } deriving (Show)

instance FromJSON FBException where
    parseJSON (Object v) = FBException
        <$> (e >>= (.: "message"))
        <*> (e >>= (.: "type"))
        <*> (e >>= (.: "code"))
        <*> (e >>= (.:? "error_subcode"))
        where e = (v .: "error")

Much better right? I thought so too.
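If you want to see it in action, here is roughly what a GHCi session might look like, assuming the sample error above has been saved to a hypothetical file called error.json and the module defining FBException is loaded:

*ghci> import Data.Aeson
*ghci> import qualified Data.ByteString.Lazy as BL
*ghci> json <- BL.readFile "error.json"
*ghci> decode json :: Maybe FBException
Just (FBException {exMsg = "Message describing the error", exType = "OAuthException", exCode = 190, exSubCode = Just 460})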

Types With Multiple Constructors

The next problem I’d like to talk about is using types with multiple constructors. Let’s take a look at an example:

{ "value": "EVERYONE" }

When creating a new album on Facebook, the API needs you to set a privacy level on the album. This setting is set using this JSON object. This would seem to be a trivial case for Aeson:

data Privacy = Privacy
    { value :: String
    } deriving (Show)

instance FromJSON Privacy where
    parseJSON (Object v) = Privacy
        <$> v .: "value"

Unfortunately, the following is not a valid privacy setting:

"{ "value": "NOT_THE_NSA" }

However, our Privacy type would allow that. In reality, this should be an enumeration:

data Privacy = Everyone | AllFriends | FriendsOfFriends | Self

But how would you write a FromJSON instance for that? The method we’ve been using doesn’t work, and parseJSON takes and returns magical internal types that you can’t really do anything with. I was at a loss for a while, and even considered using the method I posted above. Finally, the answer hit me. Like many things in Haskell, the answer was stupidly simple; just define a function to create the privacy object, and use that in the parseJSON function instead of the type constructor! My solution looks like this:

instance FromJSON Privacy where
    parseJSON (Object v) = createPrivacy <$> v .: "value"

createPrivacy :: String -> Privacy
createPrivacy "EVERYONE" = Everyone
createPrivacy "ALL_FRIENDS" = AllFriends
createPrivacy "FRIENDS_OF_FRIENDS" = FriendsOfFriends
createPrivacy "SELF" = Self
createPrivacy _ = error "Invalid privacy setting!"

If parseJSON needs a single function to create your type, and your type has more than one constructor, you can wrap your constructors in one function that picks and uses the correct one. That function doesn’t need to know about Aeson at all; you just map it over the parser for the underlying value with <$>, and Aeson takes care of the rest.
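As a quick sanity check, here is roughly how this behaves at GHCi, assuming OverloadedStrings is enabled and that Privacy also derives Show so the result can be printed:

*ghci> import Data.Aeson
*ghci> :set -XOverloadedStrings
*ghci> decode "{ \"value\": \"EVERYONE\" }" :: Maybe Privacy
Just Everyone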

Monads In Clinical Terms

Last week, we briefly discussed Monads. If you haven’t read that post, it would probably be helpful to do so now. To recap, I talked about two things. First and foremost I talked about how monads are horrific and terrifying. Secondly I talked about the use of the bind operator, and do notation in Haskell.

Today I’ll be discussing, at a very low level, what exactly a Monad is. I feel that there is a general notion that all Monads are similar to each other, and serve some common purpose; I know this notion has been causing issues for me. However, I’m slowly coming around to the idea that they really aren’t. Let’s consider some of the monads in Haskell. We have:

  • The List monad, which is a data structure.
  • The Maybe monad, which represents the result of a calculation that may have failed.
  • The Either monad, which represents a value, or an error.

I know what you may be thinking: “These all look like data structures!” I had this same thought, but let’s look at some more monads:

  • The IO monad, which represents some communication with the “outside world”.
  • The Writer monad, which represents logging.
  • The State monad, which represents program state.

…and then that notion falls apart. If you tried really hard, you might make a case that these are all data structures. However, they’re really not. So, what exactly is a monad? At the very basic level, a monad is two functions, and three laws. Let’s take a look at these.

Two Functions

I touched on this a bit last week, but in order for something to be a monad, it must implement two functions: bind and return. Let’s take a look at these.

Bind

Any monad typically contains a value, and some “decoration”. Bind takes a monad and a function that takes a value and returns a monad; it calls that function on the value contained in the old monad, producing a new monad. Finally, the logic contained within the bind function “reconciles” the old monad and new monad’s decorations, returning the final resulting monad. Conceptually, this looks something like this:

[Diagram: a conceptual view of a monad as a value wrapped in “decoration”]

Let’s take a look at Haskell’s bind operator’s function signature:

(>>=) :: Monad m => m a -> (a -> m b) -> m b

In Haskell, bind is implemented as an operator: >>=. This takes a monad containing an “a” and a function that takes an “a” and returns a monad containing a “b”; finally, it returns a monad containing a “b”. It is up to the author of a monad to determine what it means to “reconcile the decoration”, taking into account the laws I’m going to talk about in a minute.

return

Compared to bind, return is child’s play. Simply put, return takes a value, and creates a new monad with the most basic amount of decoration to still be valid. This is functionally equivalent to the pure function provided by Applicative Functors.
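For reference, here is its signature, along with a couple of quick examples of the “minimal decoration” idea:

return :: Monad m => a -> m a

Prelude> return 1 :: Maybe Int
Just 1
Prelude> return 1 :: [Int]
[1]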

Three Laws

While you can make any old thing a member of the Monad typeclass, it’s not actually a monad unless it obeys the law. There are three properties that a monad must satisfy in order to be a valid monad:

Left identity

Formally defined as:

return a >>= k == k a

This law states that calling return on some value, and then binding the resulting monad to some function should have the same result as just calling that function on the value. The idea here is that if return truly gives a value minimal decoration, that this decoration should not change the resulting monad of a function when bound. Let’s see how Maybe satisfies this law. Here is Maybe‘s instance definition:

instance Monad Maybe where
    (Just x) >>= k = k x
    Nothing >>= _ = Nothing
    return k = Just k

When Maybe‘s return implementation is called, it wraps the argument in a Just. Maybe has two bind implementations, one for Just and one for Nothing. If a Just is bound to a function, the function will be called on the value contained in the Just, and the resulting Maybe is the result of the bind function. Therefore, if the function returns a Just, bind returns that Just. If the function returns Nothing, then bind returns that Nothing.

As you can see, return always returns Just k, therefore Maybe satisfies the law of Left Identity.
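To make that concrete, here is a quick check at GHCi (k here is just a stand-in function I made up for the one in the law):

Prelude> let k x = Just (x + 1)
Prelude> return 3 >>= k
Just 4
Prelude> k 3
Just 4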

Right Identity

Formally defined as:

m >>= return == m

return is a function that takes a value, and returns a monad. However, return is only supposed to wrap the value in a minimal monad. Therefore, the monad created by return should have no effect on the “decoration reconciliation” performed by bind. Let’s see how this works with Maybe.

As you can see from the implementation posted above, Maybe has two bind implementations: one for Just, and one for Nothing. If Nothing is bound to any function, the result is automatically Nothing. If Just k is bound to a function, the result is the result of the function call. The result of Maybe‘s return function is to wrap the value in a Just. Therefore, for both bind implementations, the law holds.
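Again, a quick check at GHCi:

Prelude> Just 3 >>= return
Just 3
Prelude> (Nothing :: Maybe Int) >>= return
Nothing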

Associativity

Formally defined as:

m >>= (\x -> k x >>= h) == (m >>= k) >>= h

At first this one seems complicated, but it’s really not. First, we need to understand what the word “Associativity” means. In math, the Associative Property is a property of certain binary operations. If an operation is associative, then it does not matter how the operations are grouped; the result will be the same.

Addition is associative:

(2 + 3) + 4 = 9
2 + (3 + 4) = 9

…however subtraction is not associative:

(2 - 3) - 4 = -5
2 - (3 - 4) = 3

Simply put, bind must behave like addition, and not subtraction with regards to how it can be nested.
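Here is what that looks like for Maybe, using two stand-in functions k and h:

Prelude> let k x = Just (x + 1)
Prelude> let h x = Just (x * 2)
Prelude> (Just 3 >>= k) >>= h
Just 8
Prelude> Just 3 >>= (\x -> k x >>= h)
Just 8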

When You Put It Like That…

Doesn’t seem so complicated now, does it? Did all that make sense to you? Congratulations, you get monads! Now, the hard part is figuring out how to use some of the crazy monads that come in Haskell’s standard library. This is a subject I hope to help with over the coming months.

However, for the time being if you have a data type, and you can implement bind and return in a way that obeys the monad laws, then your type is a monad. Don’t let anybody tell you otherwise!

A Trip To The Magical Land Of Monads

Monads can be a tricky topic. On the surface, they’re not hard at all. Taking Maybe for instance:

Prelude> Just 1
Just 1
Prelude> Nothing
Nothing

That couldn’t possibly be easier. Something is either Just [something], or it is Nothing. However, Monad world is a magical world of gotchas, unenforceable rules, and magical syntax. I had been planning on writing a post on Monads, much like I did with Applicatives and Functors. While researching this topic, I’ve determined that I’m not qualified to speak on this subject yet.

I am qualified to discuss the usage of Monads. I feel that saying you are going to “learn how to do Monads” is similar to saying you are going to “learn how to do data structures”. Data structures are similar to each other in that they serve a common purpose: to contain data. However, a Vector is nothing like a Linked List, which is nothing like a Hash Map.

Similarly, a Maybe is nothing like an IO, which is nothing like a Writer. While they are all Monads, they serve different purposes. Today, I’d like to lay some groundwork on the topic and talk about binding functions.

On Magical Syntax

Much like Functor and Applicative, Monad brings functions that allow you to use normal values with monadic values. Chief among them is the >>= operator, which is called the bind operator. (The name is important for monads in general, but I won’t get into that today; just know that the operator’s name is bind.) The signature for >>= is:

(>>=) :: Monad m => m a -> (a -> m b) -> m b

As you can see, it takes a Monad that contains an a, and a function that takes an a and returns a Monad containing a b; it then returns a Monad containing a b. Basically, it calls the function on the value contained in the monad, does whatever additional action is appropriate for the monad, and returns a monad containing the result. In short: it is the Monad counterpart of fmap, except that the function you supply returns a monadic value itself. Let’s take a look at a quick example for Maybe:

Prelude> Just 1 >>= (\a -> Just $ a + 5)
Just 6

As you can see, we bind Just 1 to a function that takes a value, adds it to 5, and wraps the result in a Just, which results in Just 6. Bind will correctly handle Nothing as well:

Prelude> Nothing >>= (\a -> Just $ a + 5)
Nothing

Still with me? Good, because things are about to get magical.

Do Notation

Monads are so special, they have their own magical syntax! When working with monads, you may use do notation. What does do notation look like? Let’s take a look at a sample function:

justAdd :: (Num a) => a -> a -> Maybe a
justAdd a b = Just $ a + b

This function takes 2 numbers, adds them, and wraps the sum up in a Maybe. Nothing earth shattering here. Let’s take a look at how to work with these using bind:

justAddDemoBind = justAdd 2 2 >>= (justAdd 4)

justAddDemoBindNothing = Nothing >>= (justAdd 4)

I’ve defined 2 simple functions here. The first calls justAdd 2 2, which returns Just 4. The function (justAdd 4) is then applied to it using the bind operator, which will return Just 8. The second attempts to apply the function (justAdd 4) to Nothing. Since the bind operator is smart enough to handle this, the final result of this function is Nothing. Simple, really. Now, let’s see do notation in action:

justAddDemoDo = do
    first <- justAdd 2 2
    justAdd 4 first

justAddDemoDoNothing = do
    first <- Nothing
    justAdd 4 first

Looks like a completely different language, right? In fact, these two functions do the exact same thing as the previous two. The purpose of do notation is to make working with Monads easier; in practice, if your logic is non-trivial, using the bind operator directly leaves you with hugely nested statements. To see what’s going on, let’s break justAddDemoDo down:

justAddDemoDo = do

In the first line, we open our do block.

first <- justAdd 2 2

In the second line, we call justAdd 2 2, and assign the result to the name first. Notice that <- operator? It works basically the same as it does in list comprehensions: it does the assigning.

justAdd 4 first

Finally, we add 4 to first, resulting in Just 8, which is the value returned from the function. Notice that we’re treating first, a value that came out of a Maybe, as a regular value. This is part of the magic of do notation. In fact, if this line were written as justAdd first 4, it would have worked just the same.

Another very important thing to note is that the last line of a do block must be an expression of the monadic type, not a binding! GHCi will throw a fit if it’s not.

The Pervasiveness Of Do

As you can see, under the hood, do notation just uses the >>= operator. You can also see that it is much simpler and cleaner looking than using the bind operator. However, that doesn’t mean that do is better. Like many things, it comes down to personal choice.
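For completeness, the do version desugars to roughly the following; it is just justAddDemoBind with the lambda written out:

justAddDemoDo = justAdd 2 2 >>= (\first -> justAdd 4 first)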

When reading literature on Haskell, you should be prepared to interpret do blocks, and usage of the bind operator. Like any tool, do and bind are not always the correct one. Picking the correct tool for the job is part of being a programmer. Hopefully this tutorial gave you enough familiarity to be able to use both.