# Of Semicolons and Sadness

Lately, I’ve been writing a lot of Haskell. If you’ve been following the blog, you likely are aware of this. However, I’ve been feeling the itch to try something different. I’ve been dabbling in things here and there, but for some reason I just can’t settle into a groove. It seems that Haskell has ruined programming for me.

Why? Is it the functional purity of Haskell? Is it the way that lazy evaluation changes the way you think about programming? Is it the sublime type system that catches whole classes of bugs at compile time? No, I’m afraid not. My issue is much more basic: semicolons and angle brackets.

## Your Mission

…should you choose to accept it: write a function `fun` that does the following: it takes an integer argument `acc`, plus two arguments of the **same** generic type. The function must enforce that the two generic arguments cannot be changed. It calls another function, `given`, which takes two arguments of the same generic type that also cannot be changed. `given` does *something*, then returns an integer as a result. `given` can fail!

After calling `given`: if it failed, return `acc - 1`; if it succeeded, return its result plus `acc`.

## In C

C is a pretty simple language. I like simplicity (it’s the whole premise of this post, right?), but C’s simplicity turns into overly verbose code in the face of the requirements of `fun`. Here is the signature of `given`:

```
// please ensure that lhs and rhs have the same type
int given(int * result,
          void const * const lhs,
          void const * const rhs)
```

So, what’s going on here? In C, the dominant idiom for indicating success or failure is to return an error code, thus we return an `int`. However, this isn’t our return value! We use the out parameter `result` instead. Next, we have `lhs` and `rhs`, which have the confusing type `void const * const`. This reads as “constant pointer to constant void”, and ensures that neither where the pointer points nor the value it points to can change (thus documenting the requirement that the function not change its arguments!). Finally, we have our comment asking the caller to pretty please ensure that `lhs` and `rhs` have the same type. There is no way in C to ensure two `void *` are actually the same type, so this is the best we can do… Now, let’s see my implementation of `fun`:

```
// please ensure that lhs and rhs have the same type
int fun(int acc,
        void const * const lhs,
        void const * const rhs)
{
    int res = 0;
    if (given(&res, lhs, rhs))
    {
        return acc + res;
    }
    else
    {
        return acc - 1;
    }
}
```

We have the same confusing sort of signature, except here the return value is our actual return value, and `acc` is not an out parameter (which is really obvious, right?).

Inside of this function, we allocate `res` on the stack and initialize it to 0. Then we call `given` with the *address of* `res`, along with `lhs` and `rhs`; taking the address of something requires yet another special symbol. The call happens inside an if condition because `given` returns 0 on failure and *something* on success.

Some of this noise can be reduced, but overall it’s pretty minimal. The use of `void` pointers and our type-checking comment are troubling, but that’s not really the point of this post. However, the many semicolons, curly braces, and commas add a lot of clutter.

Noise level: Moderate

Strangeness: High

Confidence that there are no bugs in my implementation: Very Low

## In C++

Next, let’s check out some C++. Here is the signature of `given`:

```
// throws an std::exception on failure!
template <typename T> int given(const T & lhs,
                                const T & rhs)
```

To me, this is kind of a wash compared to the C implementation. On one hand, we have this `template <typename T>` silliness; on the other hand, we don’t have an out parameter, we only have one `const` per argument, and the compiler can actually enforce that `lhs` is of the same type as `rhs`. Next, the implementation:

```
template <typename T> int fun(int acc,
                              const T & lhs,
                              const T & rhs)
{
    try
    {
        return acc + given<T>(lhs, rhs);
    }
    catch (std::exception & e)
    {
        return acc - 1;
    }
}
```

Here we have that same yucky sort of signature, and we have these `try` and `catch` blocks. Personally, I don’t care for exceptions, but they are a tried and true strategy accepted by the majority of the programming community, and who am I to dispute that? What I do dislike is the fact that we have no indication that `given` can throw; we had to read a comment. Additional weirdness here is the call to `given`: `given<T>(lhs, rhs)`. Why do I have to write `<T>`?

The usual complaints about curly braces and semicolons apply here as well.

Noise Level: High

Strangeness: Low

Confidence that there are no bugs in my implementation: Low

## In Rust

A newcomer to the scene is Rust. Rust is a low-level systems programming language with a very strong emphasis on safety. Excellent, right? Let’s see the signature of `given`:

`fn given<T>(lhs: &T, rhs: &T) -> Option<i64>`

…eew. We have twice as many angle brackets as the C++ signature did, which is unfortunate. We’ve also added some colons. On the plus side, we have option types in Rust, so no icky exceptions! Let’s see my implementation of `fun`:

```
fn fun<T>(acc: i64, lhs: &T, rhs: &T) -> i64
{
    let res = given(&lhs, &rhs);
    let ret = match res {
        Some(x) => acc + x,
        None => acc - 1,
    };
    ret
}
```

…and things get strange. Let’s go through this line by line:

`fn fun<T>(acc: i64, lhs: &T, rhs: &T) -> i64`: A bit noisier than I’d like, but perfectly reasonable.

`let res = given(&lhs, &rhs);`: Here we call `given`, and for some reason I have to put an `&` in front of `lhs` and `rhs`. At least rustc inferred the type for us!

`let ret = match res { ... };`: Yay, Rust has pattern matching! It’s a bit unfortunate about all those curly braces and semicolons though…

`Some(x) => acc + x,`: Nothing wrong here, in a world where commas and such are a thing…

`None => acc - 1,`: And here we have the last pattern of this match block. Why is there a comma there?! For reasons that science will never explain, idiomatic Rust puts a comma after even the last entry in things like this! This oddity extends to structs and enums as well…

`ret`: Finally, we return. No, `ret` isn’t a keyword; it’s the variable I bound to the result of my pattern match. Notice the lack of a semicolon: in Rust, the last expression in a function is the return value, and **does not take a semicolon!** However, Rust has a `return` statement for early returns, and that does take a semicolon. Yes, you could use a `return` statement here, but according to the docs, doing so makes you a bad person.

Noise Level: Very High

Strangeness: High

Confidence that there are no bugs in my implementation: Reasonable

## In Haskell

Finally, let’s take a look at what I consider to be a reasonable syntax: Haskell. The signature of `given` is as follows:

`given :: a -> a -> Maybe Int`

Once you learn to read Haskell, you’ll see how simple this is. `given` is a function that takes an `a`, another `a`, and returns a `Maybe Int`. Next, let’s see my implementation of `fun`:

```
fun :: Int -> a -> a -> Int
fun acc lhs rhs = case given lhs rhs of
                    Just res -> acc + res
                    _ -> acc - 1
```

Not counting the optional type signature, this is three lines. The first is our pattern match, which calls `given`. The second line handles success, and the third handles failure. There is no return statement here; the result of the expression is the return value. And unlike Rust, there’s no special case that breaks the mold.
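To play with this at home, here’s a minimal sketch where `given` is a made-up stand-in (the post leaves its implementation abstract), so that `fun` can actually be loaded into GHCi:

```haskell
-- Hypothetical stand-in for `given`: always succeeds with 42.
-- (Any function with this fully polymorphic type must ignore its arguments.)
given :: a -> a -> Maybe Int
given _ _ = Just 42

fun :: Int -> a -> a -> Int
fun acc lhs rhs = case given lhs rhs of
                    Just res -> acc + res
                    _ -> acc - 1
```

With this stand-in, `fun 10 "left" "right"` evaluates to `52`.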

Noise Level: Low

Strangeness: Low (unless your definition of strange is “doesn’t look like C”)

Confidence that there are no bugs in my implementation: High

## Which Brings Me To My Point

Honestly, I believe that the reason for all of this is: “We’ll make our language look like C so that we don’t scare people away!” Personally, I believe that this is a sad state of affairs.

Strictly speaking, I don’t mean to criticize. C, C++, and Rust are all fine languages. They do what they set out to do, and do it well. Many people successfully use these languages every day! They are all languages worth knowing and languages worth using. However, there is no reason for all these glyphs and extra keywords!

My top feature for the new version of C++: Get rid of all uses of the angle bracket!

My top feature for the new version of Rust: Get rid of those trailing commas and the return statement!

Of course this will never happen, and for good reason: It will cause massive breakages in everybody’s codebases. But a man can dream…

# Checking Our Heads With Liquid Haskell

Consider this function:

```
headInTail :: [a] -> [a]
headInTail l = (tail l) ++ [head l]
```

Pretty straightforward, right? It takes a list, extracts the head and sticks it in the tail. Surely you’ve written something like this before. It should be fine, right?

```
*Main> headInTail [1,2,3]
[2,3,1]
```

…checks out. Let’s try a few more:

```
*Main> headInTail "hello"
"elloh"
*Main> headInTail ["cat"]
["cat"]
```

…good. And the moment we’ve all been waiting for:

```
*Main> headInTail []
*** Exception: Prelude.tail: empty list
```

Oh yeah, `head` and `tail` don’t work on empty lists… Normally, we have some choices on how to proceed here. We could wrap the function in a `Maybe`:

```
maybeHeadInTail :: [a] -> Maybe [a]
maybeHeadInTail [] = Nothing
maybeHeadInTail l = Just $ headInTail l
```

…which introduces an annoying `Maybe` to deal with, just to stick our heads in our tails. Or, we could just do something with the empty list:

```
headInTail :: [a] -> [a]
headInTail [] = []
headInTail l = (tail l) ++ [head l]
```

…but what if returning the empty list isn’t the *correct* thing to do?

Another choice is to document this behavior (as `head` and `tail` do), and just never call `headInTail []`. But how can we guarantee that we never attempt to call this function on an empty list? Shouldn’t this be the type system’s problem?

Unfortunately, not all is roses and puppies. Sometimes the type system cannot help us. Sometimes somebody thought it’d be a good idea to use Haskell’s Evil exception system. Whatever the case, there are tools to help us.

## Liquid Haskell

Liquid Haskell is a static analysis tool built to catch just these sorts of problems: it lets us define invariants which the tool then enforces. Liquid Haskell is a research project that is still in development. As such, it has some rough spots; however, it’s still very much capable of helping us with our problem here. But before we begin, we need to get the tool installed.

To install the tool, execute:

`cabal install liquidhaskell`

…simple, right? Unfortunately, we’re not quite done: we also need to install an SMT solver, which Liquid Haskell uses under the hood. Currently, the tool defaults to Z3. I’m not sure how to use a different solver (and Z3 works just fine), so I suggest you use Z3. You’ll have to build Z3 and place the binary somewhere on the `PATH`. After this is done, and assuming your `.cabal/bin` directory is also on the `PATH`, testing your source file is as simple as:

`liquid [FILE].hs`

## Let’s Have A Look

Create a haskell source file that contains the following:

```
headInTail :: [a] -> [a]
headInTail l = (tail l) ++ [head l]
```

After that’s done, let Liquid Haskell analyze your file:

`liquid [YOUR_FILE].hs`

A bunch of stuff should scroll by, then in a second you’ll see something similar to the following:

```
**** UNSAFE *********************************************
Doop.hs:5:22: Error: Liquid Type Mismatch
   Inferred type
     VV : [a] | VV == l && len VV >= 0
   not a subtype of Required type
     VV : [a] | len VV > 0
   In Context
     VV : [a] | VV == l && len VV >= 0
     l : [a] | len l >= 0
```

If you go to the line and column indicated, you’ll find the argument to `tail`. Conveniently, it seems that Liquid Haskell comes pre-loaded with definitions for some library functions. Normally, you’d have to define those yourself. In fact, let’s do just that.

The next logical step is to write a *specification* for our function. A specification is a statement about what sorts of values the function can accept. Add the following to your source file, on the line above the type signature for `headInTail`:

`{-@ headInTail :: {l:[a] | len l > 0} -> [a] @-}`

If you re-run `liquid` on your source file, you’ll see that the warning goes away, and the program now indicates that your source is “SAFE”. So, what does this refinement mean?

Basically, these refinements are machine-checked comments. They have no impact on the program; they exist only for Liquid Haskell. Think of a refinement as an enhanced type signature. Like a normal type signature, we start with the name of the function, then two colons. This part, however:

`{l:[a] | len l > 0}`

…should be new. Basically, this part says that the list should not be empty. You should read it as “l is a [a] *such that* len l is greater than zero.” A lot of the notation used by Liquid Haskell comes from formal logic. Let’s break this down some more:

`l:[a]`

Here we bind a symbol, `l`, to the first list argument. At any point to the right of this binding, until the end of the scope defined by `{}`, we can reference this symbol.

`{... | ...}`

The pipe symbol indicates that we are going to make some statement about the type on the left-hand side.

`len l > 0`

Here we state that the length of `l` must be greater than 0. It looks like we are calling a function, and we sort of are; `len` is a *measure*, which is a special function that can be used in specifications. However, the subject of measures is a post for another day.
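Refinements compose across your own functions, too. As a hedged sketch (reusing the same `len` measure as above), we can write specification-checked versions of `head` and `tail` and build `headInTail` on top of them; the incomplete patterns are fine because the refinements rule out the empty list:

```haskell
-- The {-@ ... @-} annotations are Liquid Haskell specifications;
-- to GHC they are ordinary comments.

{-@ safeHead :: {l:[a] | len l > 0} -> a @-}
safeHead :: [a] -> a
safeHead (x:_) = x

{-@ safeTail :: {l:[a] | len l > 0} -> [a] @-}
safeTail :: [a] -> [a]
safeTail (_:xs) = xs

{-@ headInTail :: {l:[a] | len l > 0} -> [a] @-}
headInTail :: [a] -> [a]
headInTail l = safeTail l ++ [safeHead l]
```

GHC happily compiles the incomplete patterns; it’s `liquid` that proves the missing cases are unreachable from any call site satisfying the specifications.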

You may now be thinking: “Well this is all well and good, but what’s to stop me from calling this function on an empty list?” To answer that, let’s implement main:

```
main =
    do i <- getLine
       putStrLn $ headInTail i
```

Add this to your source file, run `liquid [YOUR_FILE].hs`, and you’ll notice that Liquid Haskell has a problem with your attempt to call `headInTail`:

```
**** UNSAFE *********************************************
Doop.hs:3:29: Error: Liquid Type Mismatch
   Inferred type
     VV : [Char] | VV == i && len VV >= 0
   not a subtype of Required type
     VV : [Char] | len VV > 0
   In Context
     VV : [Char] | VV == i && len VV >= 0
     i : [Char] | len i >= 0
```

Liquid Haskell is telling you that it can’t prove that the length of `i` is greater than 0. If you execute your main function, you should see that it works as expected: type in a string, and it’ll do the right thing; push enter right away, and you’ll get an exception.

```
*Main> main
hello
elloh
*Main> main
*** Exception: Prelude.tail: empty list
```

…ick… Let’s fix this:

```
main =
    do i <- getLine
       case i of
           [] -> putStrLn "Get your head checked."
           _ -> putStrLn $ headInTail i
```

Now if you analyze your file, Liquid Haskell should be happy. Honestly, you should be happy too: the tool caught this problem for you. The code didn’t go off to testing broken, and the bug certainly didn’t escape testing unfound. Your program is now objectively better:

```
*Main> main
isn't that neat?
sn't that neat?i
*Main> main
Get your head checked.
```

# Maybe I Should Be In The Maybe Monad

If you’ve spent any time with Haskell, then you’ve surely encountered `Maybe`. `Maybe` is Haskell’s version of testing your pointer for `NULL`, only it’s better because it’s impossible to accidentally dereference a `Nothing`.

You’ve also probably thought it was just so *annoying*. You test your `Maybe a` to ensure it’s not `Nothing`, but you still have to go about getting the value out of the `Maybe` so you can actually do something with it.
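The standard library does ship helpers that shrink that test-then-extract dance a little. `maybe` (in the Prelude) and `fromMaybe` (from `Data.Maybe`) each collapse it into one call:

```haskell
import Data.Maybe (fromMaybe)

-- Collapse a Maybe Int into a String: one branch for Nothing, one for Just.
describe :: Maybe Int -> String
describe = maybe "no value" (\x -> "got " ++ show x)

-- Extract the value, falling back to a default when it's Nothing.
orZero :: Maybe Int -> Int
orZero = fromMaybe 0
```

So `describe (Just 3)` is `"got 3"`, and `orZero Nothing` is `0`. Handy, but as we’ll see, the monad instance gives us something better still.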

## The Problem

Let’s forgo the usual contrived examples and look at an actual problem I faced. While working on the Server Console, I had to deal with a query string. The end of a query string contains key/value pairs, which Happstack conveniently decodes into this type:

`[(String, Input)]`

I needed to write a function to look up a key within this list and return its value. As we all know, there’s no way to guarantee that a given key is in the list, so the function must be able to handle that. There are a few ways we could go about this, but it seems to me an ideal place to use a `Maybe`. Suppose we write our lookup function like so:

`lookup :: Request -> String -> Maybe String`

This is logically sound, but now we have an annoying `Maybe` to work with. Suppose we’re working in a `ServerPart Response`. We might write a response function like so:

```
handler :: ServerPart Response
handler = do req <- askRq
             paths <- return $ rqPaths req
             page <- return $ lookup req "page_number"
             case page of Nothing -> mzero
                          (Just a) -> do items <- return $ lookup req "items_per_page"
                                         case items of Nothing -> mzero
                                                       (Just b) -> h' paths a b
```

Yucky! After each call to `lookup`, we check to see if the call succeeded. This gives us a giant tree that’s surely pushing off the right side of my blog page. There must be a better way.

## Doing It Wrong

Shockingly, this is not the best way to do this. It turns out that writing our functions in the `Maybe` monad is the answer. Take the following function:

```
hTrpl :: Request -> Maybe ([String], String, String)
hTrpl r = do paths <- return $ rqPaths r
             page <- lookup r "page_number"
             items <- lookup r "items_per_page"
             return (paths, page, items)
```

…now we can re-write `handler` like so:

```
handler :: ServerPart Response
handler = do req <- askRq
             triple <- return $ hTrpl req
             case triple of Nothing -> mzero
                            (Just (a, b, c)) -> h' a b c
```

Much better, right? But why don’t we have to test the return values of `lookup`? The answer lies in the implementation of `Maybe`’s `>>=` operator:

```
instance Monad Maybe where
    (Just x) >>= k = k x
    Nothing >>= _ = Nothing
```

Recall that `do` notation is just syntactic sugar around `>>=` and a whole bunch of lambdas. With that in mind, you can see that we are actually binding functions together. Per `Maybe`’s bind implementation, if you bind `Just a` to a function, it calls the function on `a`. If you bind `Nothing` to a function, it ignores the function and just returns `Nothing`.

What this means for us is that so long as we’re inside the `Maybe` monad, we can pretend all functions return successful values. `Maybe` allows us to defer testing for failure! The first time a `Nothing` is returned, functions stop getting called, so we don’t even have to worry about performance losses from not immediately returning from the function! So long as we’re inside of `Maybe`, there will be Peace On Earth. We code our successful branch, and then when all is said and done and the dust has settled, we can see if it all worked out.
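The Prelude’s own `lookup :: Eq a => a -> [(a, b)] -> Maybe b` already returns a `Maybe`, so we can see this deferral in a tiny, self-contained sketch:

```haskell
-- Add the values stored under "a" and "b".
-- If either key is missing, the whole block is Nothing;
-- there are no explicit failure checks anywhere.
pairSum :: [(String, Int)] -> Maybe Int
pairSum kvs = do
    a <- lookup "a" kvs
    b <- lookup "b" kvs
    return (a + b)
```

`pairSum [("a", 1), ("b", 2)]` gives `Just 3`, while `pairSum [("a", 1)]` gives `Nothing`: the failed second lookup short-circuits the rest of the block.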

Next time you find yourself testing a `Maybe` more than once in a function, ask yourself: should I be in the `Maybe` monad right now?

# Monads In Clinical Terms

Last week, we briefly discussed monads. If you haven’t read that post, it would probably be helpful to do so now. To recap, I talked about two things: first and foremost, how monads are horrific and terrifying; secondly, the use of the bind operator and `do` notation in Haskell.

Today I’ll be discussing, at a very low level, what exactly a monad is. There is a general notion that all monads are similar to each other and serve some common purpose; I know this notion has been causing issues for me, and I’m slowly coming around to the idea that they really aren’t alike at all. Let’s consider some of the monads in Haskell. We have:

- The `List` monad, which is a data structure.
- The `Maybe` monad, which represents the result of a calculation that may have failed.
- The `Either` monad, which represents either a value or an error.

I know what you may be thinking: “These all look like data structures!” I had this same thought, but let’s look at some more monads:

- The `IO` monad, which represents some communication with the “outside world”.
- The `Writer` monad, which represents logging.
- The `State` monad, which represents program state.

…and then that notion falls apart. If you tried *really hard*, you might make a case that these are all data structures. However, they’re really not. So, what exactly is a monad? At the very basic level, a monad is two functions, and three laws. Let’s take a look at these.

## Two Functions

I touched on this a bit last week, but in order for something to be a monad, it must implement two functions: `bind` and `return`. Let’s take a look at each.

#### Bind

Any monad typically contains a value and some “decoration”. Bind takes a monad and a function that takes a value and returns a monad; it calls that function on the value contained in the old monad, then the logic within bind “reconciles” the old and new monads’ decorations, producing the final resulting monad.

Let’s take a look at the signature of Haskell’s bind operator:

`(>>=) :: m a -> (a -> m b) -> m b`

In Haskell, bind is implemented as an operator: `>>=`. It takes a monad containing an “`a`”, plus a function that takes an “`a`” and returns a monad containing a “`b`”, and finally returns a monad containing a “`b`”. It is up to the author of a monad to determine what it means to “reconcile the decoration”, taking into account the laws I’m going to talk about in a minute.

#### return

Compared to bind, `return` is child’s play. Simply put, `return` takes a value and creates a new monad with the most basic amount of decoration needed to still be valid. It is functionally equivalent to the `pure` function provided by Applicative Functors.
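Concretely, the same `return` call produces different “minimal decoration” depending on which monad you ask for:

```haskell
-- return at Maybe: the minimal decoration is a Just.
justThree :: Maybe Int
justThree = return 3      -- Just 3

-- return at the list monad: the minimal decoration is a singleton list.
listThree :: [Int]
listThree = return 3      -- [3]
```

Same function, same argument; only the type annotation decides what “minimally decorated” means.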

## Three Laws

While you can make any old thing a member of the `Monad` typeclass, it’s not actually a monad unless it obeys the law. There are three properties that a monad must satisfy in order to be a valid monad:

#### Left identity

Formally defined as:

`return a >>= k == k a`

This law states that calling `return` on some value, and then binding the resulting monad to some function, should have the same result as just calling that function on the value. The idea here is that if `return` truly gives a value minimal decoration, then that decoration should not change the monad a bound function produces. Let’s see how `Maybe` satisfies this law. Here is `Maybe`’s instance definition:

```
instance Monad Maybe where
    (Just x) >>= k = k x
    Nothing >>= _ = Nothing
    return k = Just k
```

When `Maybe`’s `return` implementation is called, it wraps the argument in a `Just`. `Maybe` has two bind implementations: one for `Just` and one for `Nothing`. If a `Just` is bound to a function, the function is called on the value contained in the `Just`, and the resulting `Maybe` is the result of the bind. Therefore, if the function returns a `Just`, bind returns that `Just`; if the function returns `Nothing`, bind returns that `Nothing`.

As you can see, `return` always produces a `Just`, and binding that `Just` to a function just calls the function on the wrapped value; therefore `Maybe` satisfies the law of left identity.
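We can also check the law for a sample function in plain Haskell:

```haskell
-- A sample function to bind with: doubles its argument inside a Just.
k :: Int -> Maybe Int
k x = Just (x * 2)

-- Left identity: return a >>= k is the same as k a.
leftIdentityHolds :: Bool
leftIdentityHolds = (return 3 >>= k) == k 3   -- both sides are Just 6
```

Of course, evaluating one instance isn’t a proof; it’s just a sanity check of the reasoning above.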

#### Right Identity

Formally defined as:

`m >>= return == m`

`return` is a function that takes a value and returns a monad. However, `return` is only supposed to wrap the value in a minimal monad. Therefore, the monad created by `return` should have no effect on the “decoration reconciliation” performed by bind. Let’s see how this works with `Maybe`.

As you can see from the implementation posted above, `Maybe` has two bind implementations: one for `Just`, and one for `Nothing`. If `Nothing` is bound to any function, the result is automatically `Nothing`, so `Nothing >>= return` is `Nothing`. If `Just k` is bound to a function, the result is the result of the function call; since `Maybe`’s `return` wraps its value in a `Just`, we get `Just k` right back. Therefore, for both bind implementations, the law holds.
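Again, a quick sanity check of the law against both shapes of `Maybe`:

```haskell
-- Right identity: m >>= return == m, for both Just and Nothing.
rightIdentityHolds :: Bool
rightIdentityHolds = (Just 3 >>= return) == Just (3 :: Int)
                  && ((Nothing :: Maybe Int) >>= return) == Nothing
```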

#### Associativity

Formally defined as:

`m >>= (\x -> k x >>= h) == (m >>= k) >>= h`

At first this one seems complicated, but it’s really not. First, we need to understand what the word “associativity” means. In math, the associative property is a property of certain binary operations: if an operation is associative, then it does not matter how the operands are grouped; the result will be the same.

Addition is associative:

```
(2 + 3) + 4 = 9
2 + (3 + 4) = 9
```

…however subtraction is not associative:

```
(2 - 3) - 4 = -5
2 - (3 - 4) = 3
```

Simply put, bind must behave like addition, and not subtraction with regards to how it can be nested.
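Here is the associativity law checked against `Maybe` for one set of sample functions:

```haskell
m :: Maybe Int
m = Just 2

-- Two sample functions to chain with bind.
k, h :: Int -> Maybe Int
k x = Just (x + 3)
h x = Just (x * 4)

-- Both groupings produce Just 20.
associativityHolds :: Bool
associativityHolds = (m >>= (\x -> k x >>= h)) == ((m >>= k) >>= h)
```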

## When You Put It Like That…

Doesn’t seem so complicated now, does it? Did all that make sense to you? Congratulations, you *get* monads! Now, the hard part is figuring out how to use some of the crazy monads that come in Haskell’s standard library. This is a subject I hope to help with over the coming months.

However, for the time being: if you have a data type, and you can implement bind and `return` in a way that obeys the monad laws, then your type is a monad. Don’t let anybody tell you otherwise!

# A Trip To The Magical Land Of Monads

Monads can be a tricky topic. On the surface, they’re not hard at all. Take `Maybe`, for instance:

```
Prelude> Just 1
Just 1
Prelude> Nothing
Nothing
```

That couldn’t possibly be easier. Something is either `Just [something]`, or it is `Nothing`. However, Monad world is the magical world of gotchas, unenforceable rules, and magical syntax. I had been planning on writing a post on monads, much like I did with Applicatives and Functors. While researching the topic, I determined that I’m not yet qualified to speak on it.

I am, however, qualified to discuss the usage of monads. I feel that saying you are going to “learn how to do monads” is similar to saying you are going to “learn how to do data structures”. Data structures are similar to each other in that they serve a common purpose: to contain data. However, a Vector is nothing like a Linked List, which is nothing like a Hash Map.

Similarly, a `Maybe` is nothing like an `IO`, which is nothing like a `Writer`. While they are all monads, they serve different purposes. Today, I’d like to lay some groundwork on the topic and talk about binding functions.

## On Magical Syntax

Much like `Functor` and `Applicative`, `Monad` brings functions that allow you to use normal values with monadic values. `Monad` brings the `>>=` operator, called the **bind** operator. (The name is important for monads in general, but I won’t get into that today; just know that the operator’s name is bind.) The signature for `>>=` is:

`(>>=) :: m a -> (a -> m b) -> m b`

As you can see, it takes a monad containing an `a`, plus a function that takes an `a` and returns a monad containing a `b`, and finally returns a monad containing a `b`. Basically, it calls the function on the value contained in the monad, does whatever additional action is appropriate for the monad, and returns a monad containing the result. In short: it is `fmap` for monads. Let’s take a look at a quick example for `Maybe`:

```
Prelude> Just 1 >>= (\a -> Just $ a + 5)
Just 6
```

As you can see, we bind `Just 1` to a function that takes a value, adds 5 to it, and wraps the result in a `Just`, giving us `Just 6`. Bind will correctly handle `Nothing` as well:

```
Prelude> Nothing >>= (\a -> Just $ a + 5)
Nothing
```

Still with me? Good, because things are about to get magical.

## Do Notation

Monads are so special, they have their own magical syntax! When working with monads, you may use `do` notation. What does `do` notation look like? Let’s take a look at a sample function:

```
justAdd :: (Num a) => a -> a -> Maybe a
justAdd a b = Just $ a + b
```

This function takes two numbers, adds them, and wraps the sum up in a `Maybe`. Nothing earth-shattering here. Let’s take a look at how to work with it using bind:

```
justAddDemoBind = justAdd 2 2 >>= (justAdd 4)
justAddDemoBindNothing = Nothing >>= (justAdd 4)
```

I’ve defined two simple functions here. The first calls `justAdd 2 2`, which returns `Just 4`; the function `(justAdd 4)` is then applied to it using the bind operator, which returns `Just 8`. The second attempts to apply the function `(justAdd 4)` to `Nothing`. Since the bind operator is smart enough to handle this, the final result is `Nothing`. Simple, really. Now, let’s see `do` notation in action:

```
justAddDemoDo = do
    first <- justAdd 2 2
    justAdd 4 first

justAddDemoDoNothing = do
    first <- Nothing
    justAdd 4 first
```

Looks like a completely different language, right? In fact, these two functions do the exact same thing as the previous two. The purpose of `do` notation is to make working with monads easier; in practice, if your logic is non-trivial, you end up with hugely nested statements without it. To see what’s going on, let’s break `justAddDemoDo` down:

`justAddDemoDo = do`

In the first line, we open our `do` block.

`first <- justAdd 2 2`

In the second line, we call `justAdd 2 2` and assign the result to the name `first`. Notice that `<-` operator? It works basically the same as it does in list comprehensions: it does the assigning.

`justAdd 4 first`

Finally, we add 4 to `first`, resulting in `Just 8`, which is the value returned from the function. It seems like we’re treating the value contained in our monad as a regular value; this is part of the magic of `do` notation. In fact, if this line were written as `justAdd first 4`, it would have worked.

Another very important thing to note is that **the last line of a do block must be an expression that returns a Monad!** GHCI will throw a fit if it’s not.
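To see that the sugar really is just `>>=` and a lambda, here is (roughly) what `justAddDemoDo` desugars to:

```haskell
justAdd :: (Num a) => a -> a -> Maybe a
justAdd a b = Just $ a + b

-- The do block `first <- justAdd 2 2; justAdd 4 first`,
-- written out as a bind and a lambda.
justAddDemoDesugared :: Maybe Integer
justAddDemoDesugared = justAdd 2 2 >>= (\first -> justAdd 4 first)
```

Evaluating `justAddDemoDesugared` gives `Just 8`, exactly like the `do` version.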

## The Pervasiveness Of Do

As you can see, under the hood `do` notation just uses the `>>=` operator. You can also see that it is much simpler and cleaner looking than using the bind operator directly. However, that doesn’t mean that `do` is *better*. Like many things, it comes down to personal choice.

When reading literature on Haskell, you should be prepared to interpret `do` blocks and usage of the bind operator. Like any tool, `do` and bind are not always the correct one; picking the correct tool for the job is part of being a programmer. Hopefully this tutorial gave you enough familiarity to be able to use both.

# The Point Of Applicative Functors

A few weeks ago, we talked about Functors. In case you’ve forgotten, you might want to click that link and read the post, but long story short: Functors allow you to call functions that take “bare” values on “dressed up” values.

Functors had one big weakness though: you can only call a function that takes one argument. I justified this using function currying, but there is another way. Today, we’ll be talking about that other way: Applicative functors.

## Let’s Recap

Functors use a function called `fmap` to map a function over a functor: the function is called on the value within the functor, and a new functor containing the result is returned. It looks like this:

```
Prelude> fmap show $ Just 1
Just "1"
```

I `fmap`’d `show` over `Just 1`, which returns `Just "1"`. Let’s try another one:

```
Prelude> fmap (+ 1) $ Just 1
Just 2
```

Here, we’ve used function currying to `fmap (+ 1)` over `Just 1`, which returns `Just 2`. This is all fine and good in a contrived situation such as adding 1 to 1, but what about the real world? As you probably know, you often don’t have some nice constants ready to use in your function. There must be a better way…

## Applicatives To The Rescue

To put it in layman’s terms, Applicative Functors are the middle ground. Let’s take a look at another example:

```
Prelude> let example = fmap (+) $ Just 1
Prelude> :t example
example :: Maybe (Integer -> Integer)
```

If we `fmap (+)` over `Just 1`, the result is a function that takes an Integer and returns an Integer, wrapped in a `Maybe`. To do anything useful with this, we need a way to call that wrapped function on something else. That’s what Applicative Functors do for us. Let’s take a look at part of the class definition of `Applicative`:

```
class (Functor f) => Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
```

We have two functions here: `pure`, which takes a value and wraps it in a minimal Functor, and `(<*>)`, which takes a functor that contains a function and applies it to another functor that contains a value, returning a functor that contains the result. In a nutshell, this allows us to call multi-argument functions with functors. There’s one more function exported by `Control.Applicative` that bears mentioning: `(<$>)`. `(<$>)` is basically an alias for `fmap` that allows us to use `fmap` as an infix operator:

```
Prelude> fmap (+1) $ Just 2
Just 3
Prelude> (+1) <$> Just 2
Just 3
```

As you can see, they function identically. Feel free to use `(<$>)`; just know that it requires you to `import Control.Applicative`. Anyways, with that out of the way, here’s how you use an applicative. For this example, I will be adding 2 to 2. But 2 will be wrapped in a `Maybe`, requiring extra steps:

```
Prelude> (+) <$> Just 2 <*> pure 2
Just 4
```

Let’s talk about what just happened. First, I `fmap`’d `(+)` over `Just 2` using the `(<$>)` operator. This returns a function that takes an Integer and returns an Integer, wrapped in a `Maybe`. Next, I used the `(<*>)` operator to call that function on `pure 2`, which returns `Just 4`.

You may be wondering why I used `pure 2` instead of `Just 2`. Sure, `Just 2` would have worked, but `pure 2` would have worked for any type of Applicative. Let’s take a look at another example:

```
Prelude> (+) <$> [1,2,99,100] <*> pure 2
[3,4,101,102]
```

As you can see, this is the exact same expression, except I used a List instead of a Maybe. This is where `pure` comes in handy: it allows you to write more generic code. `pure` works for any type of Applicative, therefore you needn’t concern yourself with what kind of Applicative is being passed into your function. You can rest easy knowing that it will work regardless.
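To make that concrete, here’s a small sketch (the name `plusTwo` is mine, not from any library) of a function that works for *any* Applicative precisely because it uses `pure`:

```haskell
import Control.Applicative

-- Add 2 to a number wrapped in any Applicative.
-- pure lifts the constant 2 into whatever Applicative fa happens to be.
plusTwo :: (Applicative f, Num a) => f a -> f a
plusTwo fa = (+) <$> fa <*> pure 2
```

`plusTwo (Just 2)` gives `Just 4`, and `plusTwo [1,2,99,100]` gives `[3,4,101,102]`, with no changes to the function.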

## Hopefully That All Made Sense

Applicatives are a pretty abstract concept, and it can be difficult to visualize how they can be useful. This is a topic that I don’t feel is documented very well. It’s a topic I had trouble with for a while.

Since I suspect that most readers will find this post with a Google search term like “what is the point of applicative functors”, hopefully I’ve shed some light on the topic for you.

# What Functors Can Do For You

One of the major aspects of Functional Programming, its lack of side effects, can also be one of its major stumbling blocks. Sure, taking a value as a parameter and using it without creating a variable is a simple matter for basic types, but what about more complicated structures? How does one go about unwrapping a structure, operating on its value, and re-wrapping the value in said structure without losing its data? One answer is a convoluted set of functions that take way too many arguments just to preserve the state. Or, you could use a functor.

## So I Overload Operator()?

No, we aren’t talking about C++ classes with an `operator()` implementation to make them look like functions. Functors in Haskell are a completely different animal. The easiest way to explain them is by example, so let’s take a look at the `Maybe` monad:

```
data Maybe a = Nothing | Just a
  deriving (Eq, Ord)
```

As you can see, `Maybe` has two data constructors: `Nothing` and `Just something`. `Maybe` is used to represent a calculation that may have failed. If the calculation failed, `Nothing` is returned. If it succeeded, `Just whatever` is returned. This is all fine and good, but `Just whatever` isn’t the same thing as `whatever`; it is encapsulated in a `Maybe`, and must be extracted if it is to be used.

How do you do that? Well, first you have to account for a `Nothing` value. Assuming you have a `Just`, you have to extract its value, probably using a new function. Then you have to use the value. Afterwards, you probably want to put it back into the `Maybe`. Sounds like a lot of work, right? Let’s take a look at what `Functor` has to offer us:

```
class Functor f where
  fmap :: (a -> b) -> f a -> f b
```

So, a functor defines one function (actually it defines more, but they have default implementations and can be ignored in 99% of cases): `fmap`, which takes a function from type `a` to type `b` and a functor containing an `a`, and returns a functor containing a `b`. Basically, `fmap` takes a function, calls it on the value inside the passed-in functor, and returns a functor containing the result.

## Maybe This Is Helpful?

Let’s take a look at how this works with `Maybe`. Here is `Maybe`’s `Functor` implementation:

```
instance Functor Maybe where
  fmap _ Nothing = Nothing
  fmap f (Just a) = Just (f a)
```

Basically, if `fmap` is called on a `Nothing`, `Nothing` is returned. Otherwise, the passed-in function is called on the value contained in the `Just`, and a new `Just` is returned with the result. This is a much more elegant way to deal with structures: there is no checking required, because `fmap` knows how to do the right thing.
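To see both equations of that instance at work, here’s a quick sketch of my own (the names `just5` and `nada` are made up):

```haskell
-- The Just case: (+2) is applied to the contained value.
just5 :: Maybe Int
just5 = fmap (+2) (Just 3)

-- The Nothing case: (+2) is never called; Nothing propagates.
nada :: Maybe Int
nada = fmap (+2) Nothing
```

`just5` is `Just 5`, and `nada` is still `Nothing` — no explicit checking anywhere.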

## A Bit Limiting

You may be thinking to yourself, “Isn’t this a bit limiting? I can only `fmap` using a function that takes one argument!” You’d be right, if not for Haskell’s partial function application. Say we want to add 2 to the `Int` contained in a `Just`. If we didn’t have partial function application, we would have to do something ugly like this:

`fmap (\a -> a + 2) (Just 2)`

… or maybe something truly awful like this:

```
addTwo :: Int -> Int
addTwo a = a + 2

-- ...defined solely so that we can write: fmap addTwo (Just 2)
```

Luckily for us, we live in a just and righteous world where we can just partially apply `+`:

`fmap (+2) (Just 2)`

While this is a trivial example, I feel it adequately illustrates the point; you can `fmap` a function that takes any number of arguments if you partially apply it!

This leads me to an important point that most literature leaves out: you must apply arguments from left to right, **the order of arguments is important!** When you’re writing functions, think about how users might want to partially apply them. Take these two functions for example:

```
padString1 :: String -> Int -> String
padString1 s 0 = s
padString1 s n = padString1 (s ++ "-") (n - 1)

padString2 :: Int -> String -> String
padString2 0 s = s
padString2 n s = padString2 (n - 1) (s ++ "-")
```

Both of these functions do the same thing: they take a `String` and an `Int`, and append n `-`’s to the string. The difference is how they can be partially applied: `padString1` has the `String` first, so it can only be partially applied with the `String`. Conversely, `padString2` has the `Int` first, so it can only be partially applied with the `Int`. For this reason, when creating functions, take time to consider these issues. There’s probably a whole blog post to be written about not messing this up. Maybe when I feel qualified, I’ll write it! For now, just keep it in mind.

## Sounds Familiar

You’ve heard this word before, right? Map… But where?

```
instance Functor [] where
  fmap = map
```

That’s right, `List`s are `Functor`s! The `fmap` implementation for `List` just calls the `map` function that you’re probably familiar with. (If you need a refresher: it takes a function and a list, and returns a list with the function called on all members of the original list.)

In fact, most structures in Haskell are `Functor`s. All `Monad`s in the standard library are `Functor`s, so it’s a good bet that you can `fmap` them. Either way, judicious use of `fmap` and `Functor`s can make your code much cleaner and more manageable.