James Bowen

BayHac 2018!

BayHac.png

This week we’ll be taking a quick breather from our work on deploying our Haskell code. Instead, I’ll give a brief overview of BayHac, the Bay Area Haskell Hackathon, which took place a week ago, from April 27-29. It was hosted once again by Formation (formerly Takt). Many Haskellers from the Bay Area and beyond met up, hacked, and discussed ideas.

Presentations

This year there was a larger focus on projects and hacking, and less on presentations. But there were still a few short talks each morning. I was only able to make one set of these, but that included some very interesting topics. A couple speakers discussed some of the theoretical aspects of Haskell’s containers. One went through the idea of free objects, a generalization of free monads as seen on this blog. Another speaker discussed ways to perform type-level validation within Postgres.

And speaking of databases, Travis Athougies gave an overview of his Beam database library. This library has some awesome semantics. It might force me to re-think my habit of defaulting to Persistent, so it's definitely worth a look!

Finally, I gave a short overview of some of the work I did last year with Tensorflow and dependent types. I’ll post a link to the presentation as soon as it’s up. But in the meantime, you can check out the full blog series to learn more!

Nix and HNix

I spent most of my time at the Hackathon trying to set up Nix so that I could work on HNix. Nix is a functional package manager with incredible reliability. In theory, we could use it for any language. But it shares many conceptual ideas with Haskell, so many Haskellers have adopted it. In particular, if you do frontend web programming with GHCJS, you’ll want to use Nix instead of Stack.

Several people at the Hackathon worked on HNix, a Haskell implementation of Nix. The work was well organized by John Wiegley, who put in a lot of time parceling out tasks so that newcomers could contribute to the codebase.

Having a Windows laptop, I wasn’t able to contribute a whole lot to the project (Nix only runs on *nix systems). Instead, I let myself be a guinea pig to see if I could get Nix working on the Windows Subsystem for Linux. My efforts were unsuccessful, though Jonas Chevalier of Tweag insists it’s possible.

Codeworld

The last talk I saw came from Chris Smith, who gave an overview of Haskell Codeworld, an educational tool for math and programming. This project in particular caught my attention for a couple reasons. First, I’ve developed a passion for teaching Haskell to beginners and showing it’s not so hard. But even I tend not to focus on teaching Haskell as a first language. Chris’s idea is to teach Haskell to middle school kids who have never written code before.

His primary intention is to teach mathematics. Since Haskell has such a mathematical view of programming, it's a natural fit. He cited an interesting finding from an academic study: children’s success in calculus depends a lot on their understanding of functions. Those who view functions as a mere series of steps to compute tend to struggle. But there's another, more accurate way to view functions: as expressing a fundamental relationship between sets. Students who view functions this way have a better chance of flourishing.
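
To put that idea in Haskell terms, here's a tiny illustrative example (the function name is my own invention):

-- This definition reads as a relationship between two sets: every Int is
-- paired with exactly one Int, its square. There are no mutable steps to
-- trace through; the definition simply states what the function IS.
squareOf :: Int -> Int
squareOf n = n * n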

This research suggests Haskell is a great choice as a primary programming language for kids! It matches the latter definition, while object-oriented languages teach the former idea. Codeworld is a cool project, so check it out and see if you can help in any way!

Conclusion

Next week, we’ll conclude our series on deploying Haskell code by looking at Github’s API. It has some neat little tricks we can play to enhance our development experience.

Events like BayHac show that there are a lot of different ways to get involved in the Haskell community. See if you can find one in your city! And don’t worry if you’ve never written Haskell before! The Haskell community is very welcoming! Check out our Beginners Checklist to get started!

James Bowen

Dockerizing our Haskell App

containers.jpg

Last week, we explored how to automate the deployment of our Haskell app with Circle CI. Every time we push a branch, Circle CI will load our code onto a container, build it, and run any tests we have. We also configured Heroku to deploy our new code whenever the master branch passed the build.

Our system had a couple weaknesses though. First, it was a bit of a hassle that our configuration required us to download the Stack program every time. Setting up Stack required about half the commands in our Circle config! The second weakness was that we built our code twice on each deploy. First, the Circle container would build it. Then Heroku would also compile it. This week, we’ll solve these problems using Docker images.

Using Docker Images

Last week we used a vanilla Circle container. We can start simplifying our configuration by using a pre-existing Docker image instead. Remember the start of our build_project section? It looked like this:

jobs:
  build_project:
    machine: true

The machine keyword tells Circle to use an unconfigured Linux box. Since it had nothing on it, we needed to download and install Stack ourselves. However, Circle also allows us to use Docker images as the starting point for our machines. We’ll use an image from the Haskell Docker repository. These each have a particular version of GHC installed, and the later ones also come with Stack. These images lag behind GHC releases a little bit. So we’ll use GHC 8.0.2, and update our stack.yaml file to use LTS 9.21, the latest version for this GHC. Here’s how we write our Circle configuration to use this image:

jobs:
  build_project:
    docker:
      - image: haskell:8.0.2

Now we can radically simplify the rest of the file! Stack and GHC will be pre-installed, so we can remove all the steps related to those. We’ll also remove the caching step on the installed Stack directory. This leaves us with the following configuration file:

version: 2
jobs:
  build_project:
    docker:
      - image: haskell:8.0.2
    steps:
      - checkout
      - restore_cache:
          keys:
            - stack-work-{{ checksum "stack.yaml" }}-{{ checksum "HaskellTestApp.cabal" }}
      - run: stack setup
      - run: stack build
      - run: stack test
      - save_cache:
          key: stack-work-{{ checksum "stack.yaml" }}-{{ checksum "HaskellTestApp.cabal" }}
          paths:
            - ".stack-work"

workflows:
  version: 2
  build_and_test:
    jobs:
      - build_project

Making Our Own Docker Image

Now our builds are a little more efficient, but we haven’t solved the bigger problem in our system. In the rest of this article, we’ll use Docker to create a new image with our code built on it. Then we can push this image to Heroku instead of re-building our code with the buildpack.

To do this, we’ll fold some of the existing Circle configuration work into Docker itself. To start, we need to define a Dockerfile at the root of our project. This file specifies the commands Docker needs to run to create an image with our code and run the server. Here’s what ours looks like:

# Use the existing Haskell image as our base
FROM haskell:8.0.2

# Checkout our code onto the Docker container
WORKDIR /app
ADD . /app

# Build and test our code, then install the "run-server" executable
RUN stack setup
RUN stack build --test --copy-bins

# Expose a port to run our application
EXPOSE 80

# Run the server command
CMD ["run-server"]

The first important part is that we “inherit” from the Haskell Docker image we were using on Circle with the FROM directive. Then we run our setup command and build the project. The arguments we pass to stack build run the tests and install our executables. Finally, the CMD directive runs the server when the container starts.
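
For reference, the run-server executable here is just the Servant server we’ve been building in this series. Here’s a minimal sketch of what its main might look like, assuming Warp and the PORT environment variable from our Heroku setup (myAPI and myServer stand in for the app’s actual API type and handlers):

import Network.Wai.Handler.Warp (run)
import Servant (serve)
import System.Environment (getEnv)

main :: IO ()
main = do
  -- Heroku injects the port at runtime, so we read it rather than hard-coding one.
  port <- read <$> getEnv "PORT"
  run port (serve myAPI myServer)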

Building Our Docker Image on Circle

To actually save a Docker image on a remote repository, we’ll need to make a Docker account. We don’t need to create our own repository, since we’ll end up storing our image on a Heroku repository.

We no longer need to run Stack commands as part of our Circle configuration. Docker handles them for us. We can go back to using a normal machine, as Docker also handles using the Haskell image. Here’s the core of our configuration on Circle:

jobs:
  build_project:
    machine: true
    steps:
      - checkout
      - run: echo $DOCKER_PASSWORD | docker login \
                                     --username=$DOCKER_USERNAME \
                                     --password-stdin
      - run: docker pull \
               registry.heroku.com/$HEROKU_APP/web:$CIRCLE_BRANCH
      - run: docker build -t \
               registry.heroku.com/$HEROKU_APP/web:$CIRCLE_BRANCH .
      - run: docker push \
               registry.heroku.com/$HEROKU_APP/web:$CIRCLE_BRANCH

The key commands are obviously the four docker commands. First, we log into our Docker account using our credentials as environment variables. Next, we’ll pull the existing image off the Heroku image repository tied to our app. We don’t need to do anything to set this repository up, but we’ll need to configure the app to use it below. Then we build our container and tag it with our current branch name. As long as this succeeds, we’ll push this new image back to our Docker repository.

Heroku Integration

To use this image on Heroku, we’ll need to update the “Deploy” section of our app again from the dashboard. Instead of using Circle CI, we’ll use the Heroku registry option. Now our successful builds will push our code up to our Heroku registry. Then Heroku updates our app automatically! Plus, there will now be no need for us to rebuild the code on Heroku!

There’s one more caveat though. To push and pull from the Heroku registry, we also need to log into Heroku from our Circle machine. Circle CI version 2 doesn’t yet have built-in support for this, so it’s a little tricky. On our own machine, we would log into Heroku using the CLI with the heroku login command. But we can’t use that command with stdin the way we can with Docker’s login command.

But we can replicate the end result of logging in with a little script. Logging into Heroku creates a file called ~/.netrc that stores our credentials. We can write a short script that generates this file ourselves:

#! /bin/bash

cat > ~/.netrc << EOF
machine api.heroku.com
  login $HEROKU_LOGIN
  password $HEROKU_PASSWORD
machine git.heroku.com
  login $HEROKU_LOGIN
  password $HEROKU_PASSWORD
EOF

heroku container:login

We run the final heroku container:login command to actually connect to the registry. Note that the $HEROKU_PASSWORD environment variable should contain your Heroku API key, NOT your Heroku password. We call the variable PASSWORD because the HEROKU_API_KEY environment variable is special; having it set prematurely can cause problems with the CLI.

With this script saved as setup_heroku.sh, we can call it from our Circle script like so:

jobs:
  build_project:
    machine: true
    steps:
      - checkout
      - run: bash .circleci/setup_heroku.sh
      ...

Now everything should work! Our app should be automatically deployed to Heroku without re-compilation!

Conclusion

We’ve now made our deployment process a lot more efficient. First we used a Docker Haskell image to avoid manually downloading Stack. Then we created our own Docker image off of this, and pushed it to a registry. Once we connected our Heroku app to this registry, we no longer needed to compile it there. Next week, we’ll conclude this series by using a similar process to push our code to AWS instead of Heroku.

Now that you can deploy your code, you can make whatever Haskell apps you want! Download our Production Checklist to get some more ideas for libraries you can use in your apps.

And if you’ve never used Haskell before, download our Beginners Checklist to get started!

James Bowen

Deploying Confidently: Haskell and Circle CI

circle_haskell.png

In last week’s article, we deployed our Haskell code to the cloud using Heroku. Our solution worked, but the process was also very basic and very manual. Let’s review the steps we would take to deploy code on a real project with this approach.

  1. Make a pull request against master branch
  2. Merge code into master
  3. Pull master locally, run tests
  4. Manually run git push heroku master
  5. Hope everything works fine on Heroku

This isn’t a great approach. Wherever there are manual steps in our development process, we’re likely to forget something. This will almost always come around to bite us at some point. In this article, we’ll see how we can automate our development workflow using Circle CI.

Getting Started with Circle

To follow along with this article, you should already have your project stored on Github. As soon as you have this, you can integrate with Circle easily. Go to the Circle Website and login with Github. Then go to “Add Project”. You should see all your personal repositories. Clicking your Haskell project should allow you to integrate the two services.

Now that Circle knows about our repository, it will try to build whenever we push code up to Github. But we have to tell Circle CI what to do once we’ve pushed our code! For this step, we’ll need to create a config file and store it as part of our repository. Note we’ll be using Version 2 of the Circle CI configuration. To define this configuration, we first create a folder called .circleci at the root of our repository. Then we make a YAML file called config.yml.

In Circle V2, we specify “workflows” for the Circle container to run through. To keep things simple, we’ll limit our actions to take place within a single workflow. We specify the workflows section at the bottom of our config:

workflows:
  version: 2
  build_and_test:
    jobs:
      - build_project

Now at the top, we’ll again specify version 2, and then lay out a bare-bones definition of our build_project job.

version: 2
jobs:
  build_project:
    machine: true
    steps:
      - checkout
      - run: echo "Hello"

The machine section indicates a default Circle machine image we’re using for our project. There’s no built-in Haskell machine configuration we can use, so we’re using a basic image. Then for our steps, we’ll first check out our code, and then run a simple echo command. Let’s now consider how we can get the Stack utility onto this machine so we can actually build our code.

Installing Stack

So right now our Circle container has no Haskell tools. This means we'll need to do everything from scratch. This is a useful learning exercise. We’ll learn the minimal steps we need to take to build a Haskell project on a Linux box. Next week, we’ll see a shortcut we can use.

Luckily, the Stack tool handles most of our problems for us, but we first have to download it. So after checking out our code, we’ll run several different commands to install Stack. Here’s what they look like:

steps:
  - checkout
  - run: wget https://github.com/commercialhaskell/stack/releases/download/v1.6.1/stack-1.6.1-linux-x86_64.tar.gz -O /tmp/stack.tar.gz
  - run: sudo mkdir /tmp/stack-download
  - run: sudo tar -xzf /tmp/stack.tar.gz -C /tmp/stack-download
  - run: sudo chmod +x /tmp/stack-download/stack-1.6.1-linux-x86_64/stack
  - run: sudo mv /tmp/stack-download/stack-1.6.1-linux-x86_64/stack /usr/bin/stack

The wget command downloads Stack off Github. If you’re using a different version of Stack than we are (1.6.1), you’ll need to change the version numbers of course. We’ll then create a temporary directory to unzip the actual executable to. Then we use tar to perform the unzip step. This leaves us with the stack executable in the appropriate folder. We’ll give this file executable permissions, and then move it onto the machine’s path. Then we can use stack!

Building Our Project

Now we’ve done most of the hard work! From here, we’ll just use the Stack commands to make sure our code works. We’ll start by running stack setup. This will download whatever version of GHC our project needs. Then we’ll run the stack test command to make sure our code compiles and passes all our test suites.

steps:
  - checkout
  - run: wget …
  ... 
  - run: stack setup
  - run: stack test

Note that Circle expects our commands to finish with exit code 0. This means if any of them has a non-zero exit code, the build will be a “failure”. This includes our stack test step. Thus, if we push code that fails any of our tests, we’ll see it as a build failure! This spares us the extra steps of running our tests manually and “hoping” they’ll work on the environment we deploy to.
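
To make that concrete, here's a minimal sketch of a test suite using hspec (the module and spec names are made up). If the assertion fails, the test executable exits with a non-zero code, stack test propagates it, and Circle marks the build as failed:

module Main where

import Test.Hspec

main :: IO ()
main = hspec $
  describe "arithmetic" $
    it "adds small numbers correctly" $
      (1 + 1) `shouldBe` (2 :: Int)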

Caching

There is a pretty big weakness in this process right now. Every Circle container we make starts from scratch. Thus we’ll have to download GHC and all the different libraries our code depends on for every build. This means you might need to wait 30-60 minutes to see if your code passes depending on the size of your project! We don’t want this. So to make things faster, we’ll tell Circle to cache this information, since it won’t change on most builds. We’ll take the following two steps:

  1. Only download GHC when stack.yaml changes (since the LTS might have changed). This involves caching the ~/.stack directory
  2. Only re-download libraries when either stack.yaml or our .cabal file changes. For this, we’ll cache the .stack-work directory.

For each of these, we’ll make an appropriate cache key. At the start of our build process, we’ll attempt to restore these directories from the cache based on particular keys. As part of each key, we’ll use a checksum of the relevant file.

steps:
  - checkout
  - restore_cache:
      keys:
        - stack-{{ checksum "stack.yaml" }}
  - restore_cache:
      keys:
        - stack-{{ checksum "stack.yaml" }}-{{ checksum "project.cabal" }}

If these files change, the checksum will be different, so Circle won’t be able to restore the directories. Then our other steps will run in full, downloading all the relevant information. At the end of the process, we want to then make sure we’ve saved these directories under the same key. We do this with the save_cache command:

steps:
  …
  - run: stack test
  - save_cache:
      key: stack-{{ checksum "stack.yaml" }}
      paths:
        - "~/.stack"
  - save_cache:
      key: stack-{{ checksum "stack.yaml" }}-{{ checksum "project.cabal" }}
      paths:
        - ".stack-work"

Now the next builds won’t take as long! There are other ways we can make our cache keys. For instance, we could use the Stack LTS as part of the key, and bump this every time we change which LTS we’re using. The downside is that there’s a little more manual work required. But this work won’t happen too often. The positive side is that we won’t need to re-download GHC when we add extra dependencies to stack.yaml.

Deploying to Heroku

Last but not least, we’ll want to actually deploy our code to heroku every time we push to the master branch. Heroku makes it very easy for us to do this! First, go to the app dashboard for Heroku. Then find the Deploy tab. You should see an option to connect with Github. Use it to connect your repository. Then make sure you check the box that indicates Heroku should wait for CI. Now, whenever your build successfully completes, your code will get pushed to Heroku!

Conclusion

You might have noticed that there’s some redundancy with our approaches now! Our Circle CI container will build the code. Then our Heroku container will also build the code! This is very inefficient, and it can lead to deployment problems down the line. Next week, we’ll see how we can use Docker in this process. Docker fully integrates with Circle V2. It will simplify our Circle config definition. It will also spare us from needing to rebuild all our code on Heroku again!

With all these tools at your disposal, it’s time to finally build that Haskell app you always wanted to! Download our Production Checklist to learn some cool libraries you can use!

If you’ve never programmed in Haskell before, hopefully you can see that it’s not too difficult to use! Download our Haskell Beginner’s Checklist and get started!

James Bowen

For All the World to See: Deploying Haskell with Heroku

deployment_1.jpg

In several different articles now, we’ve explored how to build web apps using Haskell. See for instance, our Haskell Web Series and our API integrations series. But all this is meaningless in the end if we don’t have a way to deploy our code so that other people on the internet can find it! In this next series, we’ll explore how we can use common services to deploy Haskell code. It’ll involve a few more steps than code in more well-supported languages!

If you’ve never programmed in Haskell at all, you’ve got a few things to learn before you start deploying code! Download our Beginners Checklist for tips on how to start learning! But maybe you’ve done some Haskell already, and need some more ideas for libraries to use. In that case, take a look at our Production Checklist for guidance!

Deploying Code on Heroku

In this article, we’re going to focus on using the Heroku service to deploy our code. Heroku allows us to do this with ease. We can get a quick prototype out for free, making it ideal for Hackathons. Like most platforms though, Heroku is easiest to use with more common languages. Heroku can automatically detect Javascript or Python apps and take the proper steps. Since Haskell isn’t used as much, we’ll need one extra specification to get Heroku support. Luckily, most of the hard work is already done for us.

Buildpacks

Heroku uses the concept of a “buildpack” to determine how to turn your project into runnable code. You’ll deploy your app by pushing your code to a remote repository. Then the buildpack will tell Heroku how to construct the executables you need. If you specify a Node.js project, Heroku will find your package.json file and download everything from NPM. If it’s Python, Heroku will install pip and do the same thing.

Heroku does not have any default buildpacks for Haskell projects. However, there is a buildpack on Github we can use (star this repository!). It will tell our Heroku container to download Stack, and then use Stack to build all our executables. So let’s see how we can build a rudimentary Haskell project using this process.

Creating Our Application

We’ll need to start by making a free account on Heroku. Then we’ll download the Heroku CLI so we can connect from the terminal. Use the heroku login command and enter your credentials.

Now we want to create our application. In your terminal, cd into the directory that has your Haskell Stack project. Make sure it’s also a Github repository already. It’s fine if the repository is only local for now. Run this command to create your application (replace haskell-test-app with your desired app name):

heroku create haskell-test-app \
  -b https://github.com/mfine/heroku-buildpack-stack

The -b argument specifies our buildpack. We'll pull it from the specified Github repository. If this works, you should be able to go to your Heroku dashboard and see an entry for your new application. You’ll have a Heroku domain for your project that you can see on project settings.

Now we need to make a Procfile. This tells Heroku the specific binary we need to run to start our web server. Make sure you have an executable in your .cabal file that starts up the server. Then in the Procfile, you’ll specify that executable under the web name:

web: run-server

Note though that you can’t use a hard-coded port! Heroku will choose a port for you. You can get it by retrieving the PORT environment variable. Here’s what your code might look like:

runServer :: IO ()
runServer = do
  port <- read <$> getEnv "PORT"
  run port (serve myAPI myServer)

Now you’ll need to “scale” the application to make sure it has at least a single machine to run on. From your repository, run the command:

heroku ps:scale web=1

Finally, we need to push our application to the Heroku container. To do this, make sure the Heroku CLI added a git remote named heroku to your repository. You can check this with the following command:

git remote -v

It should show the heroku remote listed twice, once for fetch and once for push. If it doesn’t exist, you can add it like so:

heroku git:remote -a haskell-test-app

Then you can finish up by running this command:

git push heroku master

You should see terminal output indicating that Heroku recognizes your application. If you wait long enough, you'll start to see the Stack build process. If you have any environment variables for your project, set them from the app dashboard. You can also set variables with the following command:

heroku config:set VAR_NAME=var_value

Once our app finishes building, you can visit the URL Heroku gives you. It should look like https://your-app.herokuapp.com. You’ve now deployed your Haskell code to the cloud!

Weaknesses

There are a few weaknesses to this system. The main one is that our entire build process takes place on the cloud. This might seem like an advantage, and it has its perks. Haskell applications can take a LONG time to compile though. This is especially true if the project is large and involves Template Haskell. Services like Heroku often have timeouts on their build process. So if compilation takes too long, the build will fail. Luckily, the containers will cache previous results. This means Stack won't have to keep re-downloading all the libraries. So even if our first build times out, the second might succeed.

Conclusion

This concludes part 1 of our Haskell Deployment series. We’ll see the same themes quite a bit throughout this series. It’s definitely possible to deploy our Haskell code using common services. But we often have to do a little bit more work to do so. Next week we’ll see how we can automate our deployment process with Circle CI.

Want some more tips on developing web applications with Haskell? Download our Production Checklist to learn about some other libraries you can use! For a more detailed explanation of one approach, read our Haskell Web Skills series.

James Bowen

Next up on MMH!

Exciting news! We’ve spent this week growing Monday Morning Haskell’s permanent content a bit more. Last week we finished our series on API integrations by looking at the Mailchimp service. But don’t fret if you missed it! It’s now available as a full series on the Advanced section of the website. Feel free to take a look and enjoy all the new Haskell tools at your disposal.

On the Beginner side of things, we’ve also added a new series on testing and profiling our code! First you’ll learn a little bit about the process of test driven development. Then you’ll learn some neat libraries for implementing it in Haskell. You’ll also see how to test performance in addition to correctness with profiling and the Criterion library.

Existing Series

As a reminder, here are all the existing series we have on the blog. For beginners:

  1. Liftoff Series - If you’ve never written any Haskell before, start here!
  2. The Haskell Brain - A few articles on overcoming some of Haskell’s psychological hurdles
  3. Functional Data Structures - Any Haskell author has to talk about monads at some point. This is our series teaching monads from the ground up. We start with other structures like functors that are easier to understand.

Then the more advanced topics include:

  1. Haskell Web Skills - Learn libraries for many skills including database management and writing a server.
  2. Haskell and AI - See why Haskell is a good fit for Machine Learning and AI. Then examine some of the libraries we can use to make it happen!
  3. Parsing with Haskell - Haskell is renowned for its parsing capabilities. Learn why by looking at three of the many parsing libraries Haskell offers to us.

What’s Next

Our API integrations series focuses on connecting to other helpful services. But most of these are only helpful in the first place if you have your Haskell code deployed on the internet. Since Haskell is still not common, many hosting services don’t support it well. In the next few weeks, we’ll look at how we can use sites like Heroku and AWS to deploy our Haskell code. We’ll also see a few other tricks we can use to enhance our deployment pipeline.

And remember, if you’ve never written Haskell before, now’s the best time to start! Download our Beginners Checklist and start your journey!

If you’ve toyed around with Haskell a bit but aren’t sure what to try next, you’re in luck! Take a look at our Production Checklist! It’ll give you some fresh ideas of libraries to learn and apply to your projects.

James Bowen

Connecting to Mailchimp...from Scratch!

mailing_list.png

Welcome to the third and final article in our series on Haskell API integrations! We started this series off by learning how to send and receive text messages using Twilio. Then we learned how to send emails using the Mailgun service. Both of these involved applying existing Haskell libraries suited to the tasks. This week, we’ll learn how to connect with Mailchimp, a service for managing email subscribers. Only this time, we’re going to do it a bit differently.

There are a couple different Haskell libraries out there for Mailchimp. But we’re not going to use them! Instead, we’ll learn how we can use Servant to connect directly to the API. This should give us some understanding of how to write one of these libraries. It should also make us more confident about integrating with any API of our choosing!

To follow along with the code for this article, check out the mailchimp branch on Github! It’ll show you all the imports and compiler extensions you need!

The topics in this article are quite advanced. If any of it seems crazy confusing, there are plenty of easier resources for you to start off with!

  1. If you’ve never written Haskell at all, see our Beginners Checklist to learn how to get started!
  2. If you want to learn more about the Servant library we’ll be using, check out my talk from BayHac 2017 and download the slides and companion code.
  3. Our Production Checklist has some further resources and libraries you can look at for common tasks like writing web APIs!

Mailchimp 101

Now let’s get going! To integrate with Mailchimp, you first need to make an account and create a mailing list! This is pretty straightforward, and you’ll want to save three pieces of information. First is the base URL for the Mailchimp API. It will look like this:

https://{server}.api.mailchimp.com/3.0

Where {server} should be replaced by the region that appears in the URL when you log into your account. For instance, mine is: https://us14.api.mailchimp.com/3.0. You’ll also need your API Key, which appears in the “Extras” section under your account profile. Then you’ll also want to save the name of the mailing list you made.

Our 3 Tasks

We’ll be trying to perform three tasks using the API. First, we want to derive the internal “List ID” of our particular Mailchimp list. We can do this by analyzing the results of calling the endpoint at:

GET {base-url}/lists

It will give us all the information we need about our different mailing lists.

Once we have the list ID, we can use that to perform actions on that list. We can for instance retrieve all the information about the list’s subscribers by using:

GET {base-url}/lists/{list-id}/members

We’ll add an extra count param to this, as otherwise we'll only see the results for 10 users:

GET {base-url}/lists/{list-id}/members?count=2000

Finally, we’ll use this same basic resource to subscribe a user to our list. This involves a POST request and a request body containing the user’s email address. Note that all requests and responses will be in the JSON format:

POST {base-url}/lists/{list-id}/members

{
  "email_address": "person@email.com",
  "status": "subscribed"
}

On top of these endpoints, we’ll also need to add basic authentication to every API call. This is where our API key comes in. Basic auth requires us to provide a “username” and “password” with every API request. Mailchimp doesn’t care what we provide as the username. As long as we provide the API key as the password, we’ll be good. Servant will make it easy for us to do this.

Types and Instances

Once we have the structure of the API down, our next goal is to define wrapper types. These will allow us to serialize our data into the format demanded by the Mailchimp API. We’ll have four different newtypes. The first will represent a single email list in a response object. All we care about is the list name and its ID, which we represent with Text:

newtype MailchimpSingleList = MailchimpSingleList (Text, Text)
  deriving (Show)

Now we want to be able to deserialize a response containing many different lists:

newtype MailchimpListResponse =
  MailchimpListResponse [MailchimpSingleList]
  deriving (Show)

In a similar way, we want to represent a single subscriber and a response containing several subscribers:

newtype MailchimpSubscriber = MailchimpSubscriber
  { unMailchimpSubscriber :: Text }
  deriving (Show)

newtype MailchimpMembersResponse =
  MailchimpMembersResponse [MailchimpSubscriber]
  deriving (Show)

The purpose of using these newtypes is so we can define JSON instances for them. In general, we only need FromJSON instances so we can deserialize the response we get back from the API. Here’s what our different instances look like:

instance FromJSON MailchimpSingleList where
  parseJSON = withObject "MailchimpSingleList" $ \o -> do
    name <- o .: "name"
    id_ <- o .: "id"
    return $ MailchimpSingleList (name, id_)

instance FromJSON MailchimpListResponse where
  parseJSON = withObject "MailchimpListResponse" $ \o -> do
    lists <- o .: "lists"
    MailchimpListResponse <$> forM lists parseJSON

instance FromJSON MailchimpSubscriber where
  parseJSON = withObject "MailchimpSubscriber" $ \o -> do
    email <- o .: "email_address" 
    return $ MailchimpSubscriber email

instance FromJSON MailchimpMembersResponse where
  parseJSON = withObject "MailchimpMembersResponse" $ \o -> do
    members <- o .: "members"
    MailchimpMembersResponse <$> forM members parseJSON

And last, we need a ToJSON instance for our individual subscriber type. This is because we’ll be sending that as a POST request body:

instance ToJSON MailchimpSubscriber where
  toJSON (MailchimpSubscriber email) = object
    [ "email_address" .= email
    , "status" .= ("subscribed" :: Text)
    ]

Defining a Server Type

Now that we've defined our types, we can go ahead and define our actual API using Servant. This might seem a little confusing. After all, we’re not building a Mailchimp server! But by writing this API, we can use the client function from the servant-client library. This will derive all the client functions we need to call into the Mailchimp API. Let’s start by defining a combinator that will describe our authentication format using BasicAuth. Since we aren’t writing any server code, we don’t need a “return” type for our authentication.

type MCAuth = BasicAuth "mailchimp" ()

Now let’s write the lists endpoint. It has the authentication, our string path, and then returns us our list response.

type MailchimpAPI =
  MCAuth :> "lists" :> Get '[JSON] MailchimpListResponse :<|>
  ...

For our next endpoint, we need to capture the list ID as a parameter. Then we’ll add the extra query parameter related to “count”. It will return us the members in our list.

type MailchimpAPI =
  …
  MCAuth :> "lists" :> Capture "list-id" Text :>
    QueryParam "count" Int :> Get '[JSON] MailchimpMembersResponse

Finally, we need the “subscribe” endpoint. This will look like our last endpoint, except without the count parameter and as a post request. Then we’ll include a single subscriber in the request body.

type MailchimpAPI =
  …
  MCAuth :> "lists" :> Capture "list-id" Text :>
    ReqBody '[JSON] MailchimpSubscriber :> Post '[JSON] ()

mailchimpApi :: Proxy MailchimpAPI
mailchimpApi = Proxy :: Proxy MailchimpAPI

Now with servant-client, it’s very easy to derive the client functions for these endpoints. We define the type signatures and use client. Note how the type signatures line up with the parameters that we expect based on the endpoint definitions. Each endpoint takes the BasicAuthData type. This contains a username and password for authenticating the request.

fetchListsClient :: BasicAuthData -> ClientM MailchimpListResponse
fetchSubscribersClient :: BasicAuthData -> Text -> Maybe Int
  -> ClientM MailchimpMembersResponse
subscribeNewUserClient :: BasicAuthData -> Text -> MailchimpSubscriber
  -> ClientM ()
( fetchListsClient :<|>
  fetchSubscribersClient :<|>
  subscribeNewUserClient) = client mailchimpApi

Running Our Client Functions

Now let’s write some helper functions so we can call these functions from the IO monad. Here’s a generic function that will take one of our endpoints and call it using Servant’s runClientM mechanism.

runMailchimp :: (BasicAuthData -> ClientM a) -> IO (Either ServantError a)
runMailchimp action = do
  baseUrl <- getEnv "MAILCHIMP_BASE_URL"
  apiKey <- getEnv "MAILCHIMP_API_KEY"
  trueUrl <- parseBaseUrl baseUrl
  let userData = BasicAuthData "username" (pack apiKey)
  manager <- newTlsManager
  let clientEnv = ClientEnv manager trueUrl
  runClientM (action userData) clientEnv

First we read our environment variables and create a network connection manager. Then we run the client action against the ClientEnv. Not too difficult.

Now we’ll write a function that will take a list name, query the API for all our lists, and give us the list ID for that name. It will return an Either value since the client call might actually fail. It calls our list client and filters through the results until it finds a list whose name matches. We’ll return an error value if the list isn’t found.

fetchMCListId :: Text -> IO (Either String Text)
fetchMCListId listName = do
  listsResponse <- runMailchimp fetchListsClient
  case listsResponse of
    Left err -> return $ Left (show err)
    Right (MailchimpListResponse lists) ->
      case find nameMatches lists of
        Nothing -> return $ Left "Couldn't find list with that name!"
        Just (MailchimpSingleList (_, id_)) -> return $ Right id_ 
  where
    nameMatches :: MailchimpSingleList -> Bool
    nameMatches (MailchimpSingleList (name, _)) = name == listName

Our function for retrieving the subscribers for a particular list is more straightforward. We make the client call and either return the error or else unwrap the subscriber emails and return them.

fetchMCListMembers :: Text -> IO (Either String [Text])
fetchMCListMembers listId = do
  membersResponse <- runMailchimp 
    (\auth -> fetchSubscribersClient auth listId (Just 2000))
  case membersResponse of
    Left err -> return $ Left (show err)
    Right (MailchimpMembersResponse subs) -> return $
      Right (map unMailchimpSubscriber subs)

And our subscribe function looks very similar. We wrap the email up in the MailchimpSubscriber type and then we make the client call using runMailchimp.

subscribeMCMember :: Text -> Text -> IO (Either String ())
subscribeMCMember listId email = do
  subscribeResponse <- runMailchimp (\auth ->
    subscribeNewUserClient auth listId (MailchimpSubscriber email))
  case subscribeResponse of
    Left err -> return $ Left (show err)
    Right _ -> return $ Right ()

The SubscriberList Effect

Since the rest of our server uses Eff, let’s add an effect type for our subscription list. This will help abstract away the Mailchimp details. We’ll call this effect SubscriberList, and it will have a constructor for each of our three actions:

data SubscriberList a where
  FetchListId :: SubscriberList (Either String Text)
  FetchListMembers ::
    Text -> SubscriberList (Either String [Subscriber])
  SubscribeUser ::
    Text -> Subscriber -> SubscriberList (Either String ())

fetchListId :: (Member SubscriberList r) => Eff r (Either String Text)
fetchListId = send FetchListId

fetchListMembers :: (Member SubscriberList r) =>
  Text -> Eff r (Either String [Subscriber])
fetchListMembers listId = send (FetchListMembers listId)

subscribeUser :: (Member SubscriberList r) =>
  Text -> Subscriber -> Eff r (Either String ())
subscribeUser listId subscriber =
  send (SubscribeUser listId subscriber)

Note we use our wrapper type Subscriber from the schema.
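
The schema type itself isn't shown in this article, but here's a minimal sketch of the wrapper, consistent with how we use it below (a newtype around the subscriber's email with a subscriberEmail accessor):

newtype Subscriber = Subscriber { subscriberEmail :: Text }
  deriving (Show)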

To complete the puzzle, we need a function to convert this action into IO. Like all our different transformations, we use runNat on a natural transformation:

runSubscriberList :: (Member IO r) =>
  Eff (SubscriberList ': r) a -> Eff r a
runSubscriberList = runNat subscriberListToIO
  where
    subscriberListToIO :: SubscriberList a -> IO a
    ...

Now for each constructor, we’ll call into the helper functions we wrote above. We’ll add a little extra logic to unwrap the Mailchimp-specific types we used and to handle errors.

runSubscriberList :: (Member IO r) =>
  Eff (SubscriberList ': r) a -> Eff r a
runSubscriberList = runNat subscriberListToIO
  where
    subscriberListToIO :: SubscriberList a -> IO a
    subscriberListToIO FetchListId = do
      listName <- pack <$> getEnv "MAILCHIMP_LIST_NAME"
      fetchMCListId listName
    subscriberListToIO (FetchListMembers listId) = do
      membersEither <- fetchMCListMembers listId
      case membersEither of
        Left e -> return $ Left e
        Right emails -> return $ Right (Subscriber <$> emails)
    subscriberListToIO (SubscribeUser listId (Subscriber email)) =
      subscribeMCMember listId email

Modifying the Server

The last step of this process is to incorporate the new effects into our server. Our aim is to replace the simplistic Database effect we were using before. This is a snap. We’ll start by substituting our SubscriberList into the natural transformation used by Servant:

transformToHandler ::
  (Eff '[SubscriberList, Email, SMS, IO]) :~> Handler
transformToHandler = NT $ \action -> do
  let ioAct = runM $ runTwilio (runEmail (runSubscriberList action))
  liftIO ioAct

We now need to change our other server functions to use the new effects. In both cases, we’ll need to first fetch the list ID and handle any failure before proceeding with the other operation. Here’s how we subscribe a new user:

subscribeHandler :: (Member SubscriberList r) => Text -> Eff r ()
subscribeHandler email = do
  listId <- fetchListId 
  case listId of
    Left _ -> error "Failed to find list ID!"
    Right listId' -> do
      _ <- subscribeUser listId' (Subscriber email)
      return ()

Finally, we send an email like so, combining last week’s Email effect with the SubscriberList effect we just created:

emailList :: (Member SubscriberList r, Member Email r) =>
  (Text, ByteString, Maybe ByteString) -> Eff r ()
emailList content = do
  listId <- fetchListId 
  case listId of
    Left _ -> error "Failed to find list ID!"
    Right listId' -> do
      subscribers <- fetchListMembers listId'
      case subscribers of
        Left _ -> error "Failed to find subscribers!"
        Right subscribers' -> do
          _ <- sendEmailToList
            content (subscriberEmail <$> subscribers')
          return ()

Conclusion

That wraps up our exploration of Mailchimp and our series on integrating APIs with Haskell! In part 1 of this series, we saw how to send and receive texts using the Twilio API. Then in part 2, we sent emails to our users with Mailgun. Finally, we used the Mailchimp API to more reliably store our list of subscribers. We even did this from scratch, without the use of a library like we had for the other two effects. We used Servant to great effect here, specifying what our API would look like even though we weren’t writing a server for it! This enabled us to derive client functions that could call the API for us.

This series combined tons of complex ideas from many other topics. If you were a little lost trying to keep track of everything, I highly recommend you check out our Haskell Web Skills series. It’ll teach you a lot of cool techniques, such as how to connect Haskell to a database and set up a server with Servant. You should also download our Production Checklist for some more ideas about cool libraries!

And of course, if you’re a total beginner at Haskell, hopefully you understand now that Haskell CAN be used for some very advanced functionality. Furthermore, we can do so with incredibly elegant solutions that separate our effects very nicely. If you’re interested in learning more about the language, download our free Beginners Checklist!

James Bowen

Mailing it out with Mailgun!

emails.jpg

Last week, we started our exploration of the world of APIs by integrating Haskell with Twilio. We were able to send a basic SMS message, and then create a server that could respond to a user’s message. This week, we’re going to venture into another type of effect: sending emails. We’ll be using Mailgun for this task, along with the Hailgun Haskell API for it.

You can take a look at the full code for this article by looking at the mailgun branch on our Github repository. If this article sparks your curiosity for more Haskell libraries, you should download our Production Checklist!

Making an Account

To start with, we’ll obviously need a Mailgun account. Signing up is free and straightforward. It will ask you for an email domain, but you don’t need one to get started. As long as you’re in testing mode, you can use a sandbox domain they provide to host your mail server.

With Twilio, we had to specify a “verified” phone number that we could message in testing mode. Similarly, you will also need to designate a verified email address. Your sandboxed domain will only be able to send to this address. You’ll also need to save a couple pieces of information about your Mailgun account. In particular, you need your API Key, the sandboxed email domain, and the reply address for your emails to use. Save these as environment variables on your local system and remote machine.

Basic Email

Now let’s get a feel for the Hailgun code by sending a basic email. All of this occurs within the IO monad. We ultimately want to use the function sendEmail, which requires both a HailgunContext and a HailgunMessage:

sendEmail
  :: HailgunContext
  -> HailgunMessage
  -> IO (Either HailgunErrorResponse HailgunSendResponse)

We’ll start by retrieving our environment variables. With our domain and API key, we can build the HailgunContext we’ll need to pass as an argument.

import Data.ByteString.Char8 (pack)

sendMail :: IO ()
sendMail = do
  domain <- getEnv "MAILGUN_DOMAIN"
  apiKey <- getEnv "MAILGUN_API_KEY"
  replyAddress <- pack <$> getEnv "MAILGUN_REPLY_ADDRESS"
  -- Last argument is an optional proxy
  let context = HailgunContext domain apiKey Nothing
  ...

Now to build the message itself, we’ll use a builder function hailgunMessage. It takes several different parameters:

hailgunMessage
 :: MessageSubject
 -> MessageContent
 -> UnverifiedEmailAddress -- Reply Address, just a ByteString
 -> MessageRecipients
 -> [Attachment]
 -> Either HailgunErrorMessage HailgunMessage

These are all very easy to fill in. The MessageSubject is Text and then we’ll pass our reply address from above. For the content, we’ll start by using the TextOnly constructor for a plain text email. We’ll see an example later of how we can use HTML in the content:

sendMail :: IO ()
sendMail = do
  …
  replyAddress <- pack <$> getEnv "MAILGUN_REPLY_ADDRESS"
  let msg = mkMessage replyAddress
  …
  where
    mkMessage replyAddress = hailgunMessage
      "Hello Mailgun!"
      (TextOnly "This is a test message.")
      replyAddress
      ...

The MessageRecipients type has three fields. First are the direct recipients, then the CC’d emails, and then the BCC’d users. We're only sending to a single user at the moment. So we can take the emptyMessageRecipients item and modify it. We’ll wrap up our construction by providing an empty list of attachments for now:

where
  mkMessage replyAddress = hailgunMessage
    "Hello Mailgun!"
    (TextOnly "This is a test message.")
    replyAddress
    (emptyMessageRecipients { recipientsTo = ["verified@mail.com"] } )
    []

If there are issues, the hailgunMessage function can throw an error, as can the sendEmail function itself. But as long as we check these errors, we’re in good shape to send out the email!

createAndSendEmail :: IO ()
createAndSendEmail = do
  domain <- getEnv "MAILGUN_DOMAIN"
  apiKey <- getEnv "MAILGUN_API_KEY"
  replyAddress <- pack <$> getEnv "MAILGUN_REPLY_ADDRESS"
  let context = HailgunContext domain apiKey Nothing
  let msg = mkMessage replyAddress
  case msg of
    Left err -> putStrLn ("Making failed: " ++ show err)
    Right msg' -> do
      result <- sendEmail context msg'
      case result of
        Left err -> putStrLn ("Sending failed: " ++ show err)
        Right resp -> putStrLn ("Sending succeeded: " ++ show resp)

Notice how it’s very easy to build all our functions up when we start with the type definitions. We can work through each type and figure out what it needs. I reflect on this idea some more in this article on Compile Driven Learning, which is part of our Haskell Brain Series for newcomers to Haskell!

Effify Email

Now we’d like to incorporate sending an email into our server. As you’ll note from looking at the source code, I revamped the Servant server to use free monads. There are many different effects in our system, and this helps us keep them straight. Check out this article for more details on free monads and the Eff library. To start, we want to describe our email sending as an effect. We’ll start with a simple data type that has a single constructor:

data Email a where
  SendSubscribeEmail :: Text -> Email (Either String ())

sendSubscribeEmail :: (Member Email r)
  => Text -> Eff r (Either String ())
sendSubscribeEmail email = send (SendSubscribeEmail email)

Now we need a way to peel the Email effect off our stack, which we can do as long as we have IO. The transformation will mimic the email-sending code we already wrote, except that we now take the user’s email address as an input!

runEmail :: (Member IO r) => Eff (Email ': r) a -> Eff r a
runEmail = runNat emailToIO
  where
    emailToIO :: Email a -> IO a
    emailToIO (SendSubscribeEmail subEmail) = do
      domain <- getEnv "MAILGUN_DOMAIN"
      apiKey <- getEnv "MAILGUN_API_KEY"
      replyEmail <- pack <$> getEnv "MAILGUN_REPLY_ADDRESS"
      let context = HailgunContext domain apiKey Nothing
      case mkSubscribeMessage replyEmail (encodeUtf8 subEmail) of
        Left err -> return $ Left err
        Right msg -> do
          result <- sendEmail context msg
          case result of
            Left err -> return $ Left (show err)
            Right resp -> return $ Right ()

Extending our SMS Handler

Now that we’ve properly described sending an email as an effect, let’s incorporate it into our server! We’ll start by writing another data type that will represent the potential commands a user might text to us. For now, it will only have the “subscribe” command.

data SMSCommand = SubscribeCommand Text

Now let’s write a function that will take their message and interpret it as a command. If they text subscribe {email}, we’ll send them an email!

messageToCommand :: Text -> Maybe SMSCommand
messageToCommand messageBody = case splitOn " " messageBody of
  ["subscribe", email] -> Just $ SubscribeCommand email
  _ -> Nothing
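
As a quick sanity check, here's what we expect this parser to do (assuming OverloadedStrings; printing the results in GHCi would also require a Show instance on SMSCommand):

-- messageToCommand "subscribe person@email.com"
--   => Just (SubscribeCommand "person@email.com")
-- messageToCommand "what is this?"
--   => Nothing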

Now we’ll extend our server handler to reply. If we interpret their command correctly, we’ll send the email! Otherwise, we’ll send them back a text saying we couldn’t understand them. Notice how our SMS effect and Email effect are part of this handler:

smsHandler :: (Member SMS r, Member Email r)
  => IncomingMessage -> Eff r ()
smsHandler msg = 
  case messageToCommand (body msg) of
    Nothing -> sendText (fromNumber msg) 
      "Sorry, we didn't understand that request!"
    Just (SubscribeCommand email) -> do
      _ <- sendSubscribeEmail email
      return ()

And now our server will be able to send the email when the user "subscribes"!

Attaching a File

Let’s make our email a little more complicated. Right now we’re only sending a very basic email. Let’s modify it so it has an attachment. We can build an attachment by providing a path to a file as well as a string describing it. To get this file, our message making function will need the current running directory. We’ll also change the body a little bit.

mkSubscribeMessage :: ByteString -> ByteString -> FilePath -> Either HailgunErrorMessage HailgunMessage
mkSubscribeMessage replyAddress subscriberAddress currentDir = 
  hailgunMessage
    "Thanks for signing up!"
    content
    replyAddress 
    (emptyMessageRecipients { recipientsTo = [subscriberAddress] })
    -- Notice the attachment!
    [ Attachment 
        (rewardFilepath currentDir)
        (AttachmentBS "Your Reward")
    ]
  where
    content = TextOnly "Here's your reward!"

rewardFilepath :: FilePath -> FilePath
rewardFilepath currentDir = currentDir ++ "/attachments/reward.txt"

Now when our user signs up, they’ll get whatever attachment file we’ve specified!

HTML Content

To show off one more feature, let’s change the content of our email so that it contains some HTML instead of only text! In particular, we’ll give them the chance to confirm their subscription by clicking a link to our server. All that changes here is that we’ll use the TextAndHTML constructor instead of TextOnly. We do want to provide a plain text interpretation of our email in case HTML can’t be rendered for whatever reason. Notice the use of the <a> tags for the link:

content = TextAndHTML 
   textOnly
   ("Here's your reward! To confirm your subscription, click " <> 
     link <> "!")
  where
    textOnly = "Here's your reward! To confirm your subscription, go to "
       <> "https://haskell-apis.herokuapp.com/api/subscribe/"
       <> subscriberAddress
       <> " and we'll sign you up!"
    link = "<a href=\"https://haskell-apis.herokuapp.com/api/subscribe/"
      <> subscriberAddress <> "\">this link</a>"

Now we’ll add another endpoint that will capture the email as a parameter and save it to a database. The Database effect very much resembles the one from the Eff article. It’ll save the email in a database table.

type ServerAPI = "api" :> "ping" :> Get '[JSON] String :<|>
  "api" :> "sms" :> ReqBody '[FormUrlEncoded] IncomingMessage
    :> Post '[JSON] () :<|>
  "api" :> "subscribe" :> Capture "email" Text :> Get '[JSON] ()

subscribeHandler :: (Member Database r) => Text -> Eff r ()
subscribeHandler email = registerUser email

Now if we wanted to write a function that would email everyone in our system, it’s not hard at all! We extend our effect types for both Email and Database. The Database function will retrieve all the subscribers in our system. Meanwhile the Email effect will send the specified email to the whole list.

data Database a where
  RegisterUser :: Text -> Database ()
  RetrieveSubscribers :: Database [Text]

data Email a where
  SendSubscribeEmail :: Text -> Email (Either String ())
  -- First parameter is (Subject line, Text content, HTML Content)
  SendEmailToList
    :: (Text, ByteString, Maybe ByteString)
    -> [Text]
    -> Email (Either String ())
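
The matching send-based smart constructors follow the same pattern we used for the Email effect above. Here's a sketch using the names we call below:

registerUser :: (Member Database r) => Text -> Eff r ()
registerUser email = send (RegisterUser email)

retrieveSubscribers :: (Member Database r) => Eff r [Text]
retrieveSubscribers = send RetrieveSubscribers

sendEmailToList :: (Member Email r)
  => (Text, ByteString, Maybe ByteString) -> [Text] -> Eff r (Either String ())
sendEmailToList content subscribers = send (SendEmailToList content subscribers)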

And combining these just requires using both effects. We give this wrapper a different name than the sendEmailToList smart constructor so the two don’t clash:

emailAllSubscribers :: (Member Email r, Member Database r)
  => (Text, ByteString, Maybe ByteString) -> Eff r ()
emailAllSubscribers content = do
  list <- retrieveSubscribers
  void $ sendEmailToList content list

Notice the absence of any lift calls! This is one of the cool strengths of Eff.

Conclusion

As we’ve seen in this article, sending emails with Haskell isn’t too scary. The Hailgun API is quite intuitive when you break things down piece by piece and look at the types involved. This article brought together ideas from both compile driven development and the Eff framework. In particular, we can see in this series how convenient it is to separate our effects with Eff so that we aren’t doing a lot of messy lifts.

There’s a lot of advanced material in this article, so if you think you need to backtrack, don’t worry, we’ve got you covered! Our Haskell Web Skills Series will teach you how to use libraries like Persistent for database management and Servant for making an API. For some more libraries you can use to write enhanced Haskell, download our Production Checklist!

If you’ve never programmed in Haskell at all, you should try it out! Download our Haskell Beginner’s Checklist or read our Liftoff Series!

James Bowen

Sending Texts with Twilio and Haskell!

text_convos.jpg

Writing our own Haskell code using only simple libraries is fun. But we can’t do everything from scratch. There are all kinds of cool services out there to use so we don’t have to. We can interface with a lot of these by using APIs. Often, the most well supported APIs use languages like Python and Javascript. But adventurous Haskell developers have also developed bindings for these systems! So in the next few weeks, we’ll be exploring a couple of these. We’ll also see what we can do when there isn’t an out-of-the-box library for us to use.

This week, we’ll focus on the Twilio API. We’ll see how we can send SMS messages from our Haskell code using the twilio library. We’ll also write a simple server to use Twilio’s callback system to receive text messages and process them programmatically. You can follow along with the code here on the Github repository for this series.

Of course, none of this is useful if you’ve never written any Haskell before! If you want to get started with the language basics, download our Beginners Checklist. To learn more about advanced techniques and libraries, grab our Production Checklist!

Setting Up Our Account

Naturally, you’ll need a Twilio account to use the Twilio API. Once you have this set up, you need to add your first Twilio number. This will be the number you’ll send text messages to. You'll also see it as the sender for other messages in your system. You should also go through the process of verifying your own phone number. This will allow you to send and receive messages on that phone without “publishing” your app.

You also need a couple other pieces of information from your account. There’s the account SID, and the authentication token. You can find these on the dashboard for your project on the Twilio page. You’ll need these values in your code. But since you don’t want to put them into version control, you should save them as environment variables on your machine. Then when you need to, you can fetch them like so:

fetchSid :: IO String
fetchSid = getEnv "TWILIO_ACCOUNT_SID"

fetchToken :: IO String
fetchToken = getEnv "TWILIO_AUTH_TOKEN"

Sending a Message

The first thing we’ll want to do is use the API to actually send a text message. We perform Twilio actions within the Twilio monad. It’s rather straightforward to access this monad from IO. All we need is the runTwilio' function:

runTwilio' :: IO String -> IO String -> Twilio a -> IO a

The first two parameters to this function are IO actions to fetch the account SID and auth token. We've already written those. Then the final parameter of course is our Twilio action.

sendMessage :: IO ()
sendMessage = runTwilio' fetchSid fetchToken $ do
  ...

To compose a message, we’ll use the PostMessage constructor. This takes three parameters. First, the “to” number of our message. Fill this in with the number to your physical phone. Then the second parameter is the “from” number, which has to be our Twilio account’s phone number. Then the third parameter is the message itself. To send the message, all we have to do is use the post function! That’s all there is to it!

sendMessage :: IO ()
sendMessage = runTwilio' fetchSid fetchToken $ do
  let msg = PostMessage "+15551231234" "+15559879876" "Hello Twilio!"
  _ <- post msg
  return ()

And just like that, you’ve sent your first Twilio message! Note that it does cost a small amount of money to send messages over Twilio. But a trial account should give you enough free credit to experiment a little bit.

Receiving Messages

Now, it’s a little more complicated to deal with incoming messages. The first thing we need to do is create a webhook on our Twilio account. To do this, go to “Manage Numbers” from your project dashboard page. Then select your Twilio number. You’ll now want to scroll to the section called “Messaging” and then within that, find “A Message Comes In”. You want to select “Webhook” in the dropdown. Then you’ll need to specify a URL where your server is located, and select “HTTP Post”. For setting up a quick server, I use Heroku combined with this nifty build pack that works with Stack. I’ll go into that in more depth in a later article. But the main thing to see is that our endpoint is /api/sms.

twilio_dashboard.png

With this webhook set up, Twilio will send a post request to the endpoint every time a user texts our number. The request will contain the message and the number of the sender. So let’s set up a server using Servant to pick up that request.

We’ll start by specifying a simple type to encode the message we’ll receive from Twilio:

data IncomingMessage = IncomingMessage
  { fromNumber :: Text
  , body :: Text
  }

Twilio encodes its post request body as FormURLEncoded. In order for Servant to deserialize this, we’ll need to define an instance of the FromForm class for our type. This function takes in a hash map from keys to lists of values. It will return either an error string or our desired value.

instance FromForm IncomingMessage where
  fromForm :: Form -> Either Text IncomingMessage
  fromForm (Form form) = ...

So form is a hash map, and we want to look up the “From” number of the message as well as its body. Then as long as we find at least one result for each of these, we’ll return the message. Otherwise, we return an error.

instance FromForm IncomingMessage where
  fromForm :: Form -> Either Text IncomingMessage
  fromForm (Form form) = case lookupResults of
    Just ((fromNumber : _), (body : _)) -> 
      Right $ IncomingMessage fromNumber body
    Just _ -> Left "Found the keys but no values"
    Nothing -> Left "Didn't find keys"
    where
      lookupResults = do
        fromNumber <- HashMap.lookup "From" form
        body <- HashMap.lookup "Body" form
        return (fromNumber, body)

Now that we have this instance, we can finally define our API endpoint! All it needs are the simple path components and the request body. For now, we won’t actually post any response.

type TwilioServerAPI = "api" :> "sms" :> 
  ReqBody '[FormUrlEncoded] IncomingMessage :> Post '[JSON] ()

Writing Our Handler

Now we want to write a handler for our endpoint. First though, we’ll write a natural transformation so we can write our handler in the Twilio monad.

transformToHandler :: Twilio :~> Handler
transformToHandler = NT $ \action -> 
  liftIO $ runTwilio' fetchSid fetchToken action

Now we’ll write a simple handler that will echo the user’s message back to them.

twilioNum :: Text
twilioNum = "+15559879876"

smsHandler :: IncomingMessage -> Twilio ()
smsHandler msg = do
  let newMessage = PostMessage (fromNumber msg) twilioNum (body msg)
  _ <- post newMessage
  return ()

And now we wrap up with some of the Servant mechanics to run our server.

twilioAPI :: Proxy TwilioServerAPI
twilioAPI = Proxy :: Proxy TwilioServerAPI

twilioServer :: Server TwilioServerAPI
twilioServer = enter transformToHandler smsHandler

runServer :: IO ()
runServer = do
  port <- read <$> getEnv "PORT"
  run port (serve twilioAPI twilioServer)

And now if we send a text message to our Twilio number, we’ll see that same message back as a reply!

Conclusion

In this article, we saw how we could use just a few simple lines of Haskell to send and receive text messages. There was a fair amount of effort required in using the Twilio tools themselves, but most of that is easy once you know where to look! Come back next week and we’ll explore how we can send emails with the Mailgun API. We’ll see how we can combine text and email for some pretty cool functionality.

An important thing making these apps easy is knowing the right tools to use! One of the tools we used in this part was the Servant web API library. To learn more about this, be sure to check out our Haskell Web Skills Series. For more ideas of web libraries to use, download our Production Checklist.

And if you’ve never written Haskell before, hopefully I’ve convinced you that it IS possible to do some cool things with the language! Download our Beginners Checklist to get started!

Read More
James Bowen James Bowen

More Series + What's Coming Up!

In the past few weeks on Monday Morning Haskell, we’ve been very busy. We’ve gone over several different parsing libraries. We started with Applicative Parsing and then learned all about Attoparsec and Megaparsec. If you missed it, that series is now available as a permanent fixture on our advanced topics page! So make sure you check it out!

Monads Series

The parsing series made an important distinction between applicative code and monadic code. If these terms are still a little foreign to you, don’t worry! You’re in luck! We’ve also added a new series in our beginners section dedicated to monads and other abstract functional structures! You’ll start by learning about the basics of functors and applicative functors. Then you'll work your way up to all different kinds of monads!

Coming Up: APIs!

In the next few weeks, we’ve got more new material coming up on the blog! Starting next week, we’ll be learning to use APIs to connect to many different services using Haskell. We’ll start by sending SMS messages with the Twilio API. I recently worked with this API (in Haskell) at a Hackathon, so you’ll be able to learn from my afternoon of pains and frustrations!

After that, we’ll spend a couple weeks working with emails. We’ll use the Mailgun API to master the basics of triggering an email send from our Haskell code. Then we’ll see how we can combine this with the Mailchimp service to subscribe people to an email list!

All these APIs have complex side effects we need to manage. We’ll also want to be able to test the systems without these effects occurring. So once we’re done learning the basics, we’ll examine how we can write these kinds of tests.

So keep coming back every Monday morning for some new content! And speaking of emails and email lists, if you haven’t yet, you should subscribe to Monday Morning Haskell! You’ll get our monthly newsletter and you’ll also be the first to hear about any exciting offers!

Read More
James Bowen James Bowen

Megaparsec: Same Syntax, More Features!

megaparsec.png

Last week, we took a step into the monadic world of parsing by learning about the Attoparsec library. It provided us with a clearer syntax to work with compared to applicative parsing. This week, we’ll explore one final library: Megaparsec.

This library has a lot in common with Attoparsec. In fact, the two have a lot of compatibility by design. Ultimately, we’ll find that we don’t need to change our syntax a whole lot. But Megaparsec does have a few extra features that can make our lives simpler.

To follow the code examples here, head to the megaparsec branch on Github! To learn about more awesome libraries you can use in production, make sure to download our Production Checklist! But never fear if you’re new to Haskell! Just take a look at our Beginners checklist and you’ll know where to get started!

A Different Parser Type

To start out, the basic parsing type for Megaparsec is a little more complicated. It has two type parameters, e and s, and also comes with a built-in monad transformer ParsecT.

data ParsecT e s m a

type Parsec e s = ParsecT e s Identity

The e type allows us to provide some custom error data to our parser. The s type refers to the input type of our parser, typically some variant of String. This parameter also exists under the hood in Attoparsec. But we sidestepped that issue by using the Text module. For now, we’ll set up our own type alias that will sweep these parameters under the rug:

type MParser = Parsec Void Text

Trying our Hardest

Let’s start filling in our parsers. There’s one structural difference between Attoparsec and Megaparsec. When a parser fails in Attoparsec, its default behavior is to backtrack, meaning it acts as though it consumed no input. This is not the case in Megaparsec! A naive attempt to repeat our nullParser code can fail in surprising ways:

nullParser :: MParser Value
nullParser = nullWordParser >> return ValueNull
  where
    nullWordParser = string "Null" <|> string "NULL" <|> string "null"

Suppose we get the input "NULL" for this parser. Our program will attempt to select the first parser, which will parse the N token. Then it will fail on U. It will move on to the second parser, but it will have already consumed the N! Thus the second and third parser will both fail as well!

We get around this issue by using the try combinator. Using try gives us the Attoparsec behavior of backtracking if our parser fails. The following will work without issue:

nullParser :: MParser Value
nullParser = nullWordParser >> return ValueNull
  where
    nullWordParser = 
      try (string "Null") <|> 
      try (string "NULL") <|> 
      try (string "null")

Even better, Megaparsec also has a convenience function string’ for case insensitive parsing. So our null and boolean parsers become even simpler:

nullParser :: MParser Value
nullParser = M.string' "null" >> return ValueNull

boolParser :: MParser Value
boolParser = 
  (trueParser >> return (ValueBool True)) <|> 
  (falseParser >> return (ValueBool False))
    where
      trueParser = M.string' "true"
      falseParser = M.string' "false"

Unlike Attoparsec, we don’t have a convenient parser for scientific numbers. We’ll have to go back to our logic from applicative parsing, only this time with monadic syntax.

numberParser :: MParser Value
numberParser = (ValueNumber . read) <$>
  (negativeParser <|> decimalParser <|> integerParser)
  where
    integerParser :: MParser String
    integerParser = M.try (some M.digitChar)

    decimalParser :: MParser String
    decimalParser = M.try $ do
      front <- many M.digitChar
      M.char '.'
      back <- some M.digitChar
      return $ front ++ ('.' : back)

    negativeParser :: MParser String
    negativeParser = M.try $ do
      M.char '-'
      num <- decimalParser <|> integerParser
      return $ '-' : num

Notice that each of our first two parsers use try to allow proper backtracking. For parsing strings, we’ll use the satisfy combinator to read everything up until a bar or newline:

stringParser :: MParser Value
stringParser = (ValueString . trim) <$>
  many (M.satisfy (not . barOrNewline))
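
Here trim is the same whitespace-trimming helper from the applicative parsing article, and barOrNewline is a small predicate that isn’t spelled out in the article. It’s presumably just something along these lines:

-- Assumed helper: the characters that end a cell
barOrNewline :: Char -> Bool
barOrNewline c = c == '|' || c == '\n'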

And then filling in our value parser is as easy as it was before:

valueParser :: MParser Value
valueParser =
  nullParser <|>
  boolParser <|>
  numberParser <|>
  stringParser

Filling in the Details

Aside from some trivial alterations, nothing changes about how we parse example tables. The Statement parser requires adding in another try call when we’re grabbing our pairs:

parseStatementLine :: Text -> MParser Statement
parseStatementLine signal = do
  M.string signal
  M.char ' '
  pairs <- many $ M.try ((,) <$> nonBrackets <*> insideBrackets)
  finalString <- nonBrackets
  let (fullString, keys) = buildStatement pairs finalString
  return $ Statement fullString keys
  where
    buildStatement  = ...

Without that try, we’d fail on any statement that doesn’t use any keywords! But it’s otherwise the same. Of course, we also need to change how we call our parser in the first place. We'll use the runParser function instead of Attoparsec’s parseOnly. This takes an extra argument naming the source file, which gives us better error messages.

parseFeatureFromFile :: FilePath -> IO Feature
parseFeatureFromFile inputFile = do
  …
  case runParser featureParser inputFile finalString of
    Left s -> error (show s)
    Right feature -> return feature

But nothing else changes in the structure of our parsers. It's very easy to take Attoparsec code and Megaparsec code and re-use it with the other library!

Adding some State

One bonus we do get from Megaparsec is that its monad transformer makes it easier for us to use other monadic functionality. Our parser for statement lines has always been a little bit clunky. Let’s clean it up a little bit by allowing ourselves to store a list of strings as a state object. Here’s how we’ll change our parser type:

type MParser = ParsecT Void Text (State [String])

Now whenever we parse a key using our brackets parser, we can append that key to our existing list using modify. We’ll also return the brackets along with the string instead of merely the keyword:

insideBrackets :: MParser String
insideBrackets = do
  M.char '<'
  key <- many M.letterChar
  M.char '>'
  modify (++ [key]) -- Store the key in the state!
  return $ ('<' : key) ++ ['>']

Now instead of forming tuples, we can concatenate the strings we parse!

parseStatementLine :: Text -> MParser Statement
parseStatementLine signal = do
  M.string signal
  M.char ' '
  pairs <- many $ M.try ((++) <$> nonBrackets <*> insideBrackets)
  finalString <- nonBrackets
  let fullString = concat pairs ++ finalString
  …

And now how do we get our final list of keys? Simple! We get our state value, reset it, and return everything. No need for our messy buildStatement function!

parseStatementLine :: Text -> MParser Statement
parseStatementLine signal = do
  M.string signal
  M.char ' '
  pairs <- many $ M.try ((++) <$> nonBrackets <*> insideBrackets)
  finalString <- nonBrackets
  let fullString = concat pairs ++ finalString
  keys <- get
  put []
  return $ Statement fullString keys

When we run this parser at the start, we now have to use runParserT instead of runParser. This returns us an action in the State monad, meaning we have to use evalState to get our final result:

parseFeatureFromFile :: FilePath -> IO Feature
parseFeatureFromFile inputFile = do
  …
  case evalState (stateAction finalString) [] of
    Left s -> error (show s)
    Right feature -> return feature
  where
    stateAction s = runParserT featureParser inputFile s

Bonuses of Megaparsec

As a last bonus, let's look at error messages in Megaparsec. When we have errors in Attoparsec, the parseOnly function gives us an error string. But it’s not that helpful. All it tells us is what individual parser on the inside of our system failed:

>> parseOnly nullParser "true"
Left "string"
>> parseOnly "numberParser" "hello"
Left "Failed reading: takeWhile1"

These messages don’t tell us where within the input it failed, or what we expected instead. Let’s compare this to Megaparsec and runParser:

>> runParser nullParser "true" ""
Left (TrivialError 
  (SourcePos {sourceName = "true", sourceLine = Pos 1, sourceColumn = Pos 1} :| []) 
  (Just EndOfInput) 
  (fromList [Tokens ('n' :| "ull")]))
>> runParser numberParser "hello" ""
Left (TrivialError 
  (SourcePos {sourceName = "hello", sourceLine = Pos 1, sourceColumn = Pos 1} :| []) 
    (Just EndOfInput) 
    (fromList [Tokens ('-' :| ""),Tokens ('.' :| ""),Label ('d' :| "igit")]))

This gives us a lot more information! We can see the string we’re trying to parse. We can also see the exact position it fails at. It’ll even give us a picture of what parsers it was trying to use. In a larger system, this makes a big difference. We can track down where we’ve gone wrong either in developing our syntax, or conforming our input to meet the syntax. If we customize the e parameter type, we can even add our own details into the error message to help even more!
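
For example, here’s a minimal sketch of what customizing that e parameter might look like. The error type and its contents here are hypothetical, and customFailure requires a reasonably recent Megaparsec version:

import Data.Text (Text)
import Text.Megaparsec (Parsec, ShowErrorComponent(..), customFailure)

-- A made-up custom error component for our Gherkin parser
data CustomError = UnclosedKeyBracket String
  deriving (Eq, Ord, Show)

instance ShowErrorComponent CustomError where
  showErrorComponent (UnclosedKeyBracket key) =
    "Bracketed key is never closed: " ++ key

-- Use it in place of Void in our parser alias
type GherkinParser = Parsec CustomError Text

-- Inside a parser we could then report it with:
-- customFailure (UnclosedKeyBracket someKey)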

Conclusion

This wraps up our exploration of parsing libraries in Haskell! In the past few weeks, we’ve learned about Applicative parsing, Attoparsec, and Megaparsec. The first provides useful and intuitive combinators for when our language is regular. It allows us to avoid using a monad for parsing and the baggage that might bring. With Attoparsec, we saw an introduction to monadic style parsing. This provided us with a syntax that was easier to understand and where we could see what was happening. Finally, this week, we explored Megaparsec. This library has a lot in common syntactically with Attoparsec. But it provides a few more bells and whistles that can make many tasks easier.

Ready to explore some more areas of Haskell development? Want to get some ideas for new libraries to learn? Download our Production Checklist! It’ll give you a quick summary of some tools in areas ranging from data structures to web APIs!

Never programmed in Haskell before? Want to get started? Check out our Beginners Checklist! It has all the tools you need to start your Haskell journey!

Read More
James Bowen James Bowen

Attoparsec: The Clarity of Do-Syntax

attoparsec.jpg

In last week’s article we completed our look at the Applicative Parsing library. We took all our smaller combinators and put them together to parse our Gherkin syntax. This week, we’ll look at a new library: Attoparsec. Instead of trying to do everything using a purely applicative structure, this library uses a monadic approach. This approach is much more common. It results in syntax that is simpler to read and understand. It will also make it easier for us to add certain features.

To follow along with the code for this article, take a look at the attoparsec branch on Github! For some more excellent ideas about useful libraries, download our Production Checklist! It includes material on libraries for everything from data structures to machine learning!

If you’re new to Haskell, make sure you download our Beginner’s Checklist! It’ll tell you about all the steps you need to take to get started on your Haskell journey!

The Parser Type

In applicative parsing, all our parsers had the type RE Char. This type belonged to the Applicative typeclass but was not a Monad. For Attoparsec, we’ll instead be using the Parser type, a full monad. So in general we’ll be writing parsers with the following types:

featureParser :: Parser Feature
scenarioParser :: Parser Scenario
statementParser :: Parser Statement
exampleTableParser :: Parser ExampleTable
valueParser :: Parser Value

Parsing Values

The first thing we should realize though is that our parser is still an Applicative! So not everything needs to change! We can still make use of operators like *> and <|>. In fact, we can leave our value parsing code almost exactly the same! For instance, the valueParser, nullParser, and boolParser expressions can remain the same:

valueParser :: Parser Value
valueParser =
  nullParser <|>
  boolParser <|>
  numberParser <|>
  stringParser

nullParser :: Parser Value
nullParser =
  (string "null" <|>
  string "NULL" <|>
  string "Null") *> pure ValueNull

boolParser :: Parser Value
boolParser = (trueParser *> pure (ValueBool True)) <|> (falseParser *> pure (ValueBool False))
  where
    trueParser = string "True" <|> string "true" <|> string "TRUE"
    falseParser = string "False" <|> string "false" <|> string "FALSE"

If we wanted, we could make these more "monadic" without changing their structure. For instance, we can use return instead of pure (since they are identical). We can also use >> instead of *> to perform monadic actions while discarding a result. Our value parser for numbers changes a bit, but it gets simpler! The authors of Attoparsec provide a convenient parser for reading scientific numbers:

numberParser :: Parser Value
numberParser = ValueNumber <$> scientific

Then for string values, we’ll use the takeTill combinator to read all the characters until a vertical bar or newline. Then we’ll apply a few text functions to remove the whitespace and get it back to a String. (The Parser monad we’re using parses things as Text rather than String).

stringParser :: Parser Value
stringParser = (ValueString . unpack . strip) <$> 
  takeTill (\c -> c == '|' || c == '\n')

Parsing Examples

As we parse the example table, we’ll switch to a more monadic approach by using do-syntax. First, we establish a cellParser that will read a value within a cell.

cellParser = do
  skipWhile nonNewlineSpace
  val <- valueParser
  skipWhile (not . barOrNewline)
  char '|'
  return val
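
The two predicates used here, nonNewlineSpace and barOrNewline, aren’t written out in the article. They’re presumably simple helpers along these lines (isSpace comes from Data.Char):

-- Assumed helper predicates for cell parsing
nonNewlineSpace :: Char -> Bool
nonNewlineSpace c = isSpace c && c /= '\n'

barOrNewline :: Char -> Bool
barOrNewline c = c == '|' || c == '\n'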

Each line in cellParser's do-block is a step of the parsing process. So first we skip all the leading whitespace. Then we parse our value. Then we skip the remaining space, and parse the final vertical bar to end the cell. Then we’ll return the value we parsed.

It’s a lot easier to keep track of what’s going on here compared to applicative syntax. It’s not hard to see which parts of the input we discard and which we use. If we don’t assign the value with <- within do-syntax, we discard the value. If we retrieve it, we’ll use it. To complete the exampleLineParser, we parse the initial bar, get many values, close out the line, and then return them:

exampleLineParser :: Parser [Value]
exampleLineParser = do
  char '|'
  cells <- many cellParser
  char '\n'
  return cells
  where
    cellParser = ...

Reading the keys for the table is almost identical. All that changes is that our cellParser uses many letter instead of valueParser. So now we can put these pieces together for our exampleTableParser:

exampleTableParser :: Parser ExampleTable
exampleTableParser = do
  string "Examples:"
  consumeLine
  keys <- exampleColumnTitleLineParser
  valueLists <- many exampleLineParser
  return $ ExampleTable keys (map (zip keys) valueLists)

We read the signal string "Examples:", followed by consuming the rest of the line. Then we get our keys and values, and build the table with them. Again, this is much simpler than mapping a function like buildExampleTable over our parsers, as we had to in the applicative version.
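
Two pieces in this section never get written out: consumeLine (read the rest of a line and then the newline itself) and the exampleColumnTitleLineParser for the header row. Here are rough sketches of both, under the same assumptions as the rest of this section:

-- Read everything up to the newline, consume the newline, and return the text
consumeLine :: Parser String
consumeLine = do
  str <- takeTill (== '\n')
  char '\n'
  return (unpack str)

-- Like exampleLineParser, but each cell is a plain word rather than a Value
exampleColumnTitleLineParser :: Parser [String]
exampleColumnTitleLineParser = do
  char '|'
  keys <- many cellParser
  char '\n'
  return keys
  where
    cellParser = do
      skipWhile nonNewlineSpace
      key <- many letter
      skipWhile (not . barOrNewline)
      char '|'
      return key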

Statements

The Statement parser is another area where we can improve the clarity of our code. Once again, we’ll define two helper parsers. These will fetch the portions outside brackets and then inside brackets, respectively:

nonBrackets :: Parser String
nonBrackets = many (satisfy (\c -> c /= '\n' && c /= '<'))

insideBrackets :: Parser String
insideBrackets = do
  char '<'
  key <- many letter
  char '>'
  return key

Now when we put these together, we can more clearly see the steps of the process outlined in do-syntax. First we parse the “signal” word, then a space. Then we get the “pairs” of non-bracketed and bracketed portions. Finally, we’ll get one last non-bracketed part:

parseStatementLine :: Text -> Parser Statement
parseStatementLine signal = do
  string signal
  char ' '
  pairs <- many ((,) <$> nonBrackets <*> insideBrackets)
  finalString <- nonBrackets
  ...

Now we can define our helper function buildStatement and call it on its own line in do-syntax. Then we’ll return the resulting Statement. This is much easier to read than tracking which functions we map over which sections of the parser:

parseStatementLine :: Text -> Parser Statement
parseStatementLine signal = do
  string signal
  char ' '
  pairs <- many ((,) <$> nonBrackets <*> insideBrackets)
  finalString <- nonBrackets
  let (fullString, keys) = buildStatement pairs finalString
  return $ Statement fullString keys
  where
    buildStatement 
      :: [(String, String)] -> String -> (String, [String])
    buildStatement [] last = (last, [])
    buildStatement ((str, key) : rest) rem =
      let (str', keys) = buildStatement rest rem
      in (str <> "<" <> key <> ">" <> str', key : keys)

Scenarios and Features

As with applicative parsing, it’s now straightforward for us to finish everything off. To parse a scenario, we read the keyword, consume the line to read the title, and read the statements and examples:

scenarioParser :: Parser Scenario
scenarioParser = do
  string "Scenario: "
  title <- consumeLine
  statements <- many (parseStatement <* char '\n')
  examples <- (exampleTableParser <|> return (ExampleTable [] []))
  return $ Scenario title statements examples

Again, we provide an empty ExampleTable as an alternative if there are no examples. The parser for Background looks very similar. The only difference is we ignore the result of the line and instead use Background as the title string.

backgroundParser :: Parser Scenario
backgroundParser = do
  string "Background:"
  consumeLine
  statements <- many (parseStatement <* char '\n')
  examples <- (exampleTableParser <|> return (ExampleTable [] []))
  return $ Scenario "Background" statements examples

Finally, we’ll put all this together as a feature. We read the title, get the background if it exists, and read our scenarios:

featureParser :: Parser Feature
featureParser = do
  string "Feature: "
  title <- consumeLine
  maybeBackground <- optional backgroundParser
  scenarios <- many scenarioParser
  return $ Feature title [] maybeBackground scenarios -- no description yet

Feature Description

One extra feature we’ll add now is that we can more easily parse the “description” of a feature. We omitted them in applicative parsing, as it’s a real pain to implement. It becomes much simpler when using a monadic approach. The first step we have to take though is to make one parser for all the main elements of our feature. This approach looks like this:

featureParser :: Parser Feature
featureParser = do
  string "Feature: "
  title <- consumeLine
  (description, maybeBackground, scenarios) <- parseRestOfFeature
  return $ Feature title description maybeBackground scenarios

parseRestOfFeature :: Parser ([String], Maybe Scenario, [Scenario])
parseRestOfFeature = ...

Now we’ll use a recursive function that reads one line of the description at a time and adds to a growing list. The trick is that we’ll use the choice combinator offered by Attoparsec.

We’ll create two parsers. The first assumes there are no further lines of description. It attempts to parse the background and scenario list. The second reads a line of description, adds it to our growing list, and recurses:

parseRestOfFeature :: Parser ([String], Maybe Scenario, [Scenario])
parseRestOfFeature = parseRestOfFeatureTail []
  where
    parseRestOfFeatureTail prevDesc = do
      (fullDesc, maybeBG, scenarios) <- choice [noDescriptionLine prevDesc, descriptionLine prevDesc]
      return (fullDesc, maybeBG, scenarios)

So we’ll first try to run noDescriptionLine. It will try to read the background and then the scenarios as we’ve always done. If it succeeds, we know we’re done. The argument we passed in is the full description:

where
  noDescriptionLine prevDesc = do
    maybeBackground <- optional backgroundParser
    scenarios <- some scenarioParser
    return (prevDesc, maybeBackground, scenarios)

Now if this parser fails, we know that it means the next line is actually part of the description. So we’ll write a parser to consume a full line, and then recurse:

descriptionLine prevDesc = do
  nextLine <- consumeLine
  parseRestOfFeatureTail (prevDesc ++ [nextLine])

And now we’re done! We can parse descriptions!

Conclusion

That wraps up our exploration of Attoparsec. Come back next week where we’ll finish this series off by learning about Megaparsec. We’ll find that it’s syntactically very similar to Attoparsec with a few small exceptions. We’ll see how we can use some of the added power of monadic parsing to enrich our syntax.

To learn more about cool Haskell libraries, be sure to check out our Production Checklist! It’ll tell you a little bit about libraries in all kinds of areas like databases and web APIs.

If you’ve never written Haskell at all, download our Beginner’s Checklist! It’ll give you all the resources you need to get started on your Haskell journey!

Read More
James Bowen James Bowen

Applicative Parsing II: Putting the Pieces Together

applicative_parsing_2.png

In last week’s article, we introduced the Applicative parsing library. We learned about the RE type and the basic combinators like sym and string. We saw how we could combine those together with applicative functions like many and <*> to parse strings into data structures. This week, we’ll put these pieces together in an actual parser for our Gherkin syntax. To follow along with the code examples, check out Parser.hs on the Github repository.

Starting next week, we’ll explore some other parsing libraries, starting with Attoparsec. For a little more information about those and many other libraries, download our Production Checklist! It summarizes many libraries on topics from databases to Web APIs.

If you’ve never written Haskell at all, get started! Download our free Beginners Checklist!

Value Parser

In keeping with our approach from the last article, we’re going to start with smaller elements of our syntax. Then we can use these to build larger ones with ease. To that end, let’s build a parser for our Value type, the most basic data structure in our syntax. Let’s recall what that looks like:

data Value =
  ValueNull |
  ValueBool Bool |
  ValueString String |
  ValueNumber Scientific

Since we have different constructors, we’ll make a parser for each one. Then we can combine them with alternative syntax:

valueParser :: RE Char Value
valueParser =
  nullParser <|>
  boolParser <|>
  numberParser <|>
  stringParser

Now our parsers for the null values and boolean values are easy. For each of them, we’ll give a few different options about what strings we can use to represent those elements. Then, as with the larger parser, we’ll combine them with <|>.

nullParser :: RE Char Value
nullParser =
  (string "null" <|>
  string "NULL" <|>
  string "Null") *> pure ValueNull

boolParser :: RE Char Value
boolParser =
  trueParser *> pure (ValueBool True) <|> 
  falseParser *> pure (ValueBool False)
  where
    trueParser = string "True" <|> string "true" <|> string "TRUE"
    falseParser = string "False" <|> string "false" <|> string "FALSE"

Notice in both these cases we discard the actual string with *> and then return our constructor. We have to wrap the desired result with pure.

Number and String Values

Numbers and strings are a little more complicated since we can’t rely on hard-coded formats. In the case of numbers, we’ll account for integers, decimals, and negative numbers. We'll ignore scientific notation for now. An integer is simple to parse, since we’ll have many characters that are all numbers. We use some instead of many to enforce that there is at least one:

numberParser :: RE Char Value
numberParser = …
  where
    integerParser = some (psym isNumber)

A decimal parser will read some numbers, then a decimal point, and then more numbers. We'll insist there is at least one number after the decimal point.

numberParser :: RE Char Value
numberParser = …
  where
    integerParser = some (psym isNumber)
    decimalParser = 
      many (psym isNumber) <*> sym '.' <*> some (psym isNumber)

Finally, for negative numbers, we’ll read a negative symbol and then one of the other parsers:

numberParser :: RE Char Value
numberParser = …
  where
    integerParser = some (psym isNumber)
    decimalParser = 
      many (psym isNumber) <*> sym '.' <*> some (psym isNumber)
    negativeParser = sym '-' <*> (decimalParser <|> integerParser)

However, we can’t combine these parsers as is! Right now, they all return different results! The integer parser returns a single string. The decimal parser returns two strings and the decimal character, and so on. In general, we’ll want to combine each parser's results into a single string and then pass them to the read function. This requires mapping a couple functions over our last two parsers:

numberParser :: RE Char Value
numberParser = …
  where
    integerParser = some (psym isNumber)
    decimalParser = combineDecimal <$> 
      many (psym isNumber) <*> sym '.' <*> some (psym isNumber)
    negativeParser = (:) <$> 
      sym '-' <*> (decimalParser <|> integerParser)

    combineDecimal :: String -> Char -> String -> String
    combineDecimal base point decimal = base ++ (point : decimal)

Now all our number parsers return strings, so we can safely combine them. We'll map the ValueNumber constructor over the value we read from the string.

numberParser :: RE Char Value
numberParser = (ValueNumber . read) <$>
  (negativeParser <|> decimalParser <|> integerParser)
  where
    ...

Note that order matters! If we put the integer parser first, we’ll be in trouble! If we encounter a decimal, the integer parser will greedily succeed and parse everything before the decimal point. We'll either lose all the information after the decimal, or worse, have a parse failure.

The last thing we need to do is read a string. We need to read everything in the example cell until we hit a vertical bar, but then ignore any whitespace. Luckily, we have the right combinator for this, and we’ve even written a trim function already!

stringParser :: RE Char Value
stringParser = (ValueString . trim) <$> readUntilBar

And now our valueParser will work as expected!

Building an Example Table

Now that we can parse individual values, let’s figure out how to parse the full example table. We can use our individual value parser to parse a whole line of values! The first step is to read the vertical bar at the start of the line.

exampleLineParser :: RE Char [Value]
exampleLineParser = sym '|' *> ...

Next, we’ll build a parser for each cell. It will read the whitespace, then the value, and then read up through the next bar.

exampleLineParser :: RE Char [Value]
exampleLineParser = sym '|' *> ...
  where
    cellParser = 
      many isNonNewlineSpace *> valueParser <* readThroughBar

isNonNewlineSpace :: RE Char Char
isNonNewlineSpace = psym (\c -> isSpace c && c /= '\n')

Now we read many of these and finish by reading the newline:

exampleLineParser :: RE Char [Value]
exampleLineParser = 
  sym '|' *> many cellParser <* readThroughEndOfLine
  where
    cellParser = 
      many isNonNewlineSpace *> valueParser <* readThroughBar

Now, we need a similar parser that reads the title column of our examples. This will have the same structure as the value cells, only it will read normal alphabetic strings instead of values.

exampleColumnTitleLineParser :: RE Char [String]
exampleColumnTitleLineParser = sym '|' *> many cellParser <* readThroughEndOfLine
  where
    cellParser = 
      many isNonNewlineSpace *> many (psym isAlpha) <* readThroughBar

Now we can start building the full example parser. We’ll want to read the string, the column titles, and then the value lines.

exampleTableParser :: RE Char ExampleTable
exampleTableParser =
  (string "Examples:" *> readThroughEndOfLine) *>
  exampleColumnTitleLineParser <*>
  many exampleLineParser

We’re not quite done yet. We’ll need to apply a function over these results that will produce the final ExampleTable. The trick is that we want to match up the example keys with their values. We can accomplish this with a simple function that zips the keys over each value list using map:

exampleTableParser :: RE Char ExampleTable
exampleTableParser = buildExampleTable <$>
  (string "Examples:" *> readThroughEndOfLine *> exampleColumnTitleLineParser) <*>
  many exampleLineParser
  where
    buildExampleTable :: [String] -> [[Value]] -> ExampleTable
    buildExampleTable keys valueLists = ExampleTable keys (map (zip keys) valueLists)

Statements

Now that we can parse the examples for a given scenario, we need to parse the Gherkin statements. To start with, let’s make a generic parser that takes the keyword as an argument. Then our full parser will try each of the different statement keywords:

parseStatementLine :: String -> RE Char Statement
parseStatementLine signal = …

parseStatement :: RE Char Statement
parseStatement =
  parseStatementLine "Given" <|>
  parseStatementLine "When" <|>
  parseStatementLine "Then" <|>
  parseStatementLine "And"

Now we’ll get the signal word out of the way and parse the statement line itself.

parseStatementLine :: String -> RE Char Statement
parseStatementLine signal = string signal *> sym ' ' *> ...

Parsing the statement is tricky. We want to parse the keys inside brackets and separate them as keys. But we also want them as part of the statement’s string. To that end, we’ll make two helper parsers. First, nonBrackets will parse everything in a string up through a bracket (or a newline).

nonBrackets :: RE Char String
nonBrackets = many (psym (\c -> c /= '\n' && c /= '<'))

We’ll also want a parser that parses the brackets and returns the keyword inside:

insideBrackets :: RE Char String
insideBrackets = sym '<' *> many (psym (/= '>')) <* sym '>'

Now to read a statement, we start with non-brackets, and alternate with keys in brackets. Observe that we start and end with non-brackets, since they can be empty. Thus we can represent a line as a list of non-bracket/bracket pairs, followed by a last non-bracket part. To make a pair, we combine the parser results in a tuple using the (,) constructor enabled by TupleSections:

parseStatementLine :: String -> RE Char Statement
parseStatementLine signal = string signal *> sym ' ' *>
  many ((,) <$> nonBrackets <*> insideBrackets) <*> nonBrackets

From here, we need a recursive function that will build up our final statement string and the list of keys. We do this with buildStatement.

parseStatementLine :: String -> RE Char Statement
parseStatementLine signal = string signal *> sym ' ' *>
  (buildStatement <$> 
    many ((,) <$> nonBrackets <*> insideBrackets) <*> nonBrackets)
  where
    buildStatement :: 
      [(String, String)] -> String -> (String, [String])
    buildStatement [] last = (last, [])
    buildStatement ((str, key) : rest) rem =
      let (str', keys) = buildStatement rest rem
      in (str <> "<" <> key <> ">" <> str', key : keys)

The last thing we need is a final helper that will take the result of buildStatement and turn it into a Statement. We’ll call this finalizeStatement, and then we’re done!

parseStatementLine :: String -> RE Char Statement
parseStatementLine signal = string signal *> sym ' ' *>
  (finalizeStatement . buildStatement <$> 
    many ((,) <$> nonBrackets <*> insideBrackets) <*> nonBrackets)
  where
    buildStatement :: 
      [(String, String)] -> String -> (String, [String])
    buildStatement [] last = (last, [])
    buildStatement ((str, key) : rest) rem =
      let (str', keys) = buildStatement rest rem
      in (str <> "<" <> key <> ">" <> str', key : keys)

    finalizeStatement :: (String, [String]) -> Statement
    finalizeStatement (regex, variables) = Statement regex variables

Scenarios

Now that we have all our pieces in place, it’s quite easy to write the parser for scenario! First we get the title by reading the keyword and then the rest of the line:

scenarioParser :: RE Char Scenario
scenarioParser = string "Scenario: " *> readThroughEndOfLine ...

After that, we read many statements, and then the example table. Since the example table might not exist, we’ll provide an alternative that is a pure, empty table. We can wrap everything together by mapping the Scenario constructor over it.

scenarioParser :: RE Char Scenario
scenarioParser = Scenario <$>
  (string "Scenario: " *> readThroughEndOfLine) <*>
  many (statementParser <* sym '\n') <*>
  (exampleTableParser <|> pure (ExampleTable [] []))

We can also make a “Background” parser that is very similar. All that changes is that we read the string “Background” instead of a title. Since we’ll hard-code the title as “Background”, we can include it with the constructor and map it over the parser.

backgroundParser :: RE Char Scenario
backgroundParser = Scenario "Background" <$>
  (string "Background:" *> readThroughEndOfLine *>
    many (statementParser <* sym '\n')) <*>
  (exampleTableParser <|> pure (ExampleTable [] []))

Finally the Feature

We’re almost done! All we have left is to write the featureParser itself! As with scenarios, we’ll start with the keyword and a title line:

featureParser :: RE Char Feature
featureParser = Feature <$>
  (string "Feature: " *> readThroughEndOfLine) <*>
  ...

Now we’ll use the optional combinator to parse the Background if it exists, but return Nothing if it doesn’t. Then we’ll wrap up with parsing many scenarios!

featureParser :: RE Char Feature
featureParser = Feature <$>
  (string "Feature: " *> readThroughEndOfLine) <*>
  pure [] <*> -- skip the description for now (see the note below)
  (optional backgroundParser) <*>
  (many scenarioParser)

Note that here we’re ignoring the “description” of a feature we proposed as part of our original syntax. Since there are no keywords for that, it turns out to be painful to deal with it using applicative parsing. When we look at monadic approaches starting next week, we’ll see it isn’t as hard there.

Conclusion

This wraps up our exploration of applicative parsing. We can see how well suited Haskell is for parsing. The functional nature of the language means it's easy to start with small building blocks like our first parsers. Then we can gradually combine them to make something larger. It can be a little tricky to wrap our heads around all the different operators and combinators. But once you understand the ways in which these let us combine our parsers, they make a lot of sense and are easy to use.

To further your knowledge of useful Haskell libraries, download our free Production Checklist! It will tell you about libraries for many tasks, from databases to machine learning!

If you’ve never written a line of Haskell before, never fear! Download our Beginners Checklist to learn more!

Read More
James Bowen James Bowen

Applicative Parsing I: Building the Foundation

applicative_parsing_1.png

Last week we prepared ourselves for parsing by going over the basics of the Gherkin Syntax. In this article and the next, we’ll be using the applicative parsing library to parse that syntax. This week, we’ll focus on the fundamentals of this library, and build up a vocabulary of combinators to use. We'll make heavy use of the Applicative typeclass. If you need a refresher on that, check out this article. As we start coding, you can also follow along with the examples on Github here! Most of the code here is in Parser.hs.

In the coming weeks, we’ll be seeing a couple other parsing libraries as well. If you want to get some ideas about these and more, download our Production Checklist. It summarizes many other useful libraries for writing higher level Haskell.

If you’ve never started writing Haskell, now’s your chance! Get our free Beginner’s Checklist and learn the basics of getting started!

Getting Started

So to start parsing, let’s make some notes about our input format. First, we’ll treat our input feature document as a single string. We’ll remove all empty lines, and then trim leading and trailing whitespace from each line.

parseFeatureFromFile :: FilePath -> IO Feature
parseFeatureFromFile inputFile = do
  fileContents <- lines <$> readFile inputFile
  let nonEmptyLines = filter (not . isEmpty) fileContents
  let trimmedLines = map trim nonEmptyLines
  let finalString = unlines trimmedLines
  case parseFeature finalString of
    ...

…
isEmpty :: String -> Bool
isEmpty = all isSpace

trim :: String -> String
trim input = reverse flippedTrimmed
  where
    trimStart = dropWhile isSpace input
    flipped = reverse trimStart
    flippedTrimmed = dropWhile isSpace flipped

This means a few things for our syntax. First, we don’t care about indentation. Second, we ignore extra lines. This means our parsers might allow certain formats we don’t want. But that’s OK because we’re trying to keep things simple.
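
One more note on this snippet: the parseFeature call is a thin wrapper we’ll be able to write once featureParser exists next week. Assuming it reports failure as a simple error string, a sketch might look like:

-- Hypothetical wrapper around next week's featureParser, using the
-- match function introduced below
parseFeature :: String -> Either String Feature
parseFeature input = case match featureParser input of
  Nothing      -> Left "Could not parse feature file"
  Just feature -> Right feature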

The RE Type

With applicative based parsing, the main data type we’ll be working with is called RE, for regular expression. This represents a parser, and it’s parameterized by two types:

data RE s a = ...

The s type refers to the fundamental unit we’ll be parsing. Since we're parsing our input as a single String, this will be Char. Then the a type is the result of the parsing element. This varies from parser to parser. The most basic combinator we can use is sym. This parses a single symbol of your choosing:

sym :: s -> RE s s

parseLowercaseA :: RE Char Char
parseLowercaseA = sym 'a'

To use an RE parser, we call the match function or its infix equivalent =~. These will return a Just value if we can match the entire input string, and Nothing otherwise:

>> match parseLowercaseA "a"
Just 'a'
>> "b" =~ parseLowercaseA
Nothing
>> "ab" =~ parseLowercaseA
Nothing -- (Needs to parse entire input)

Predicates and Strings

Naturally, we’ll want some more complicated functionality. Instead of parsing a single input character, we can parse any character that fits a particular predicate by using psym. So if we want to read any character that was not a newline, we could do:

parseNonNewline :: RE Char Char
parseNonNewline = psym (/= '\n')

The string combinator allows us to match a particular full string and then return it:

readFeatureWord :: RE Char String
readFeatureWord = string "Feature"

We’ll use this for parsing keywords, though we’ll often end up discarding the “result”.

Applicative Combinators

Now the RE type is applicative. This means we can apply all kinds of applicative combinators over it. One of these is many, which allows us to apply a single parser several times. Here is one combinator that we’ll use a lot. It allows us to read everything up until a newline and return the resulting string:

readUntilEndOfLine :: RE Char String
readUntilEndOfLine = many (psym (/= '\n'))

Beyond this, we’ll want to make use of the applicative <*> operator to combine different parsers. We can also apply a pure function (or constructor) on top of those by using <$>. Suppose we have a data type that stores two characters. Here’s how we can build a parser for it:

data TwoChars = TwoChars Char Char

parseTwoChars :: RE Char TwoChars
parseTwoChars = TwoChars <$> parseNonNewline <*> parseNonNewline

...

>> match parseTwoChars "ab"
Just (TwoChars 'a' 'b')

We can also use <* and *>, which are cousins of the main applicative operator. The first one will parse but then ignore the right hand parse result. The second discards the left side result.

parseFirst :: RE Char Char
parseFirst = parseNonNewline <* parseNonNewline

parseSecond :: RE Char Char
parseSecond = parseNonNewline *> parseNonNewline

…

>> match parseFirst "ab"
Just 'a'
>> match parseSecond "ab"
Just 'b'
>> match parseFirst "a"
Nothing

Notice the last one fails because the parser needs to have both inputs! We’ll come back to this idea of failure in a second. But now that we know this technique, we can write a couple other useful parsers:

readThroughEndOfLine :: RE Char String
readThroughEndOfLine = readUntilEndOfLine <* sym '\n'

readThroughBar :: RE Char String
readThroughBar = readUntilBar <* sym '|'

readUntilBar :: RE Char String
readUntilBar = many (psym (\c -> c /= '|' && c /= '\n'))

The first will parse the rest of the line and then consume the newline character itself. The other parsers accomplish this same task, except with the vertical bar character. We’ll need these when we parse the Examples section next week.

Alternatives: Dealing with Parse Failure

We introduced the notion of a parser “failing” up above. Of course, we need to be able to offer alternatives when a parser fails! Otherwise our language will be very limited in its structure. Luckily, the RE type also implements Alternative. This means we can use the <|> operator to determine an alternative parser when one fails. Let’s see this in action:

parseFeatureTitle :: RE Char String
parseFeatureTitle = string "Feature: " *> readThroughEndOfLine

parseScenarioTitle :: RE Char String
parseScenarioTitle = string "Scenario: " *> readThroughEndOfLine

parseEither :: RE Char String
parseEither = parseFeatureTitle <|> parseScenarioTitle

…

>> match parseFeatureTitle "Feature: Login\n"
Just "Login"
>> match parseFeatureTitle "Scenario: Login\n"
Nothing
>> match parseEither "Scenario: Login\n"
Just "Login"

Of course, if ALL the options fail, then we’ll still have a failing parser!

>> match parseEither "Random: Login\n"
Nothing

We’ll need this to introduce some level of choice into our parsing system. For instance, it’s up to the user if they want to include a Background as part of their feature. So we need to be able to read the background if it’s there or else move onto parsing a scenario.
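
As a tiny preview of that pattern using only the combinators from this article, we could try to read a background line and fall back to nothing (the name here is made up; we’ll see the optional combinator handle this more neatly next week):

backgroundOrNothing :: RE Char (Maybe String)
backgroundOrNothing =
  (Just <$> (string "Background:" *> readThroughEndOfLine)) <|> pure Nothing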

Conclusion

That wraps up our introduction to the basic combinators of applicative parsing. Next week, we’ll take all the pieces we’ve developed here and put them to work on Gherkin syntax itself. Everything seems pretty small so far. But we’ll see that we can actually build up our results very rapidly once we have the basic pieces in place!

If you want to see some more libraries that are useful for important Haskell tasks, take a look at our Production Checklist. It will introduce you to some libraries for parsing, databases, APIs, and much more!

If you’re new to Haskell, there’s no better time to start! Download our free Beginners Checklist! It will help you download the right tools and start learning the language.

Read More
James Bowen James Bowen

Parsing Primer: Gherkin Syntax!

One topic I have yet to discuss on this blog is how to parse a domain specific language. This is odd, because Haskell has some awesome approaches for parsing. Haskell expressions tend to compose in awesome and simple ways. This provides an ideal environment in which to break down parsing into simpler tasks. Thus there are many excellent libraries out there.

In these next few weeks, we’ll be taking a tour of these different parsing libraries. But before we look at specific code, it will be useful to establish a common example for what we’re going to be parsing. In this article, I’ll introduce Gherkin Syntax, the language behind the Cucumber framework. We’ll go through the language specifics, then show the basics of how we set ourselves up for success in Haskell.

Gherkin Background

Cucumber is a framework for Behavior Driven Development. Under BDD, we first describe all the general behaviors we want our code to perform in plain language. This paradigm is an alternative to Test Driven Development. There, we use test cases to determine our next programming objectives. But BDD can do both of these if we can take behavior descriptions and automatically create tests from them! This would allow less technical members of a project team to effectively write tests!

The main challenge of this is formalizing a language for describing these behaviors. If we have a formal language, then we can parse it. If we can parse it into a reasonable structure, then we can turn that structure into runnable test code. This series will focus on the second part of this problem: turning Gherkin Syntax into a data structure (a Haskell data structure, in our case).

Gherkin Syntax

Gherkin syntax has many complexities, but for these articles we’ll be focusing on the core elements of it. The behaviors you want to test are broken down into a series of features. We describe each feature in its own .feature file. So our overarching task is to read input from a single file and turn it into a Feature object.

We begin our description of a feature with the Feature keyword (obviously). We'll give it a title, and then give it an indented description (our example will be a simple banking app):

Feature: Registering a User
  As a potential user
  I want to be able to create an account with a username,
    email and password
  So that I can start depositing money into my account

Each feature then has a series of scenarios. These describe specific cases of what can happen as part of this feature. Each scenario begins with the Scenario keyword and a title.

Scenario: Successful registration
  ...

Scenario: Email is already taken
  ...

Scenario: Username is already taken
  ...

Each scenario then has a series of Gherkin statements. These statements begin with one of the keywords Given, When, Then, or And. You should use Given statements to describe pre-conditions of the scenario. Then you’ll use When to describe the particular action a user is taking to initiate the scenario. And finally, you’ll use Then to describe the after effects.

Scenario: Email is already taken
  Given there is already an account with the email “test@test.com”
  When I register an account with username “test”,
    email “test@test.com” and password “1234abcd!?”
  Then it should fail with an error:
    “An account with that email already exists”

You can supplement any of these cases with a statement beginning with And.

Scenario: Email is already taken
  Given there is already an account with the email “test@test.com”
  And there is already an account with the username “test”
  When I register an account with username “test”,
    email “test@test.com” and password “1234abcd!?”
  Then it should fail with an error: 
    “An account with that email already exists”
  And there should still only be one account with 
    the email “test@test.com”

Gherkin syntax does not enforce that you use the different keywords in a semantically sound way. We could start every statement with Given and it would still work. But obviously you should do whatever you can to make your tests sound correct.

We can also fill in statements with variables in angle brackets. We'll then follow the scenario with a table of examples for those variables:

Scenario: Successful Registration
  Given There is no account with username <username>
    or email <email>
  When I register the account with username <username>,
    email <email> and password <password>
  Then it should successfully create the account
    with <username>, <email>, and <password>
  Examples:
    | username | email              | password      |
    | john doe | john@doe.com       | ABCD1234!?    |
    | jane doe | jane.doe@gmail.com | abcdefgh1.aba |
    | jackson  | jackson@yahoo.com  | cadsw4ll0p/   |

We can also create a Background for the whole feature. This is a scenario-like description of preconditions that exist for every scenario in that feature. This can also have an example table:

Feature: User Log In
  ...

Background:
  Given there is an existing user with username <username>,
    email <email> and password <password>
  Examples:
    | username | email              | password      |
    | john doe | john@doe.com       | ABCD1234!?    |
    | jane doe | jane.doe@gmail.com | abcdefgh1.aba |

And that’s the whole language we’re going to be working with!

Haskell Data Structures

Let’s appreciate now how easy it is to create data structures in Haskell to represent this syntax. We’ll start with a description of a Feature. It has a title, description (which we’ll treat as a list of multiple lines), the background, and then a list of scenarios. We’ll also treat the background like a “nameless” scenario that may or may not exist:

data Feature = Feature
  { featureTitle :: String
  , featureDescription :: [String]
  , featureBackground :: Maybe Scenario
  , featureScenarios :: [Scenario]
  }

Now let's describe what a Scenario is. Its main components are its title and a list of statements. We'll also observe that we should have some kind of structure for the list of examples we'll provide:

data Scenario = Scenario
  { scenarioTitle :: String
  , scenarioStatements :: [Statement]
  , scenarioExamples :: ExampleTable
  }

This ExampleTable will store a list of possible keys as well as a list of rows, where each row is a list of key-value tuples. At the scale we're likely to be working at, it's not worth using a full Map:

data ExampleTable = ExampleTable
  { exampleTableKeys :: [String]
  , exampleTableExamples :: [[(String, Value)]]
  }

Now we'll have to define what we mean by a Value. We'll keep it simple and only use literal bools, strings, numbers (using the Scientific type from the scientific package), and a null value:

data Value =
  ValueNumber Scientific |
  ValueString String |
  ValueBool Bool |
  ValueNull
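
To see how ExampleTable and Value fit together, here's a purely illustrative sketch of how the "Successful Registration" table from earlier might be represented (the name registrationExamples is just for demonstration):

registrationExamples :: ExampleTable
registrationExamples = ExampleTable
  { exampleTableKeys = ["username", "email", "password"]
  , exampleTableExamples =
      [ [ ("username", ValueString "john doe")
        , ("email", ValueString "john@doe.com")
        , ("password", ValueString "ABCD1234!?") ]
      , [ ("username", ValueString "jane doe")
        , ("email", ValueString "jane.doe@gmail.com")
        , ("password", ValueString "abcdefgh1.aba") ]
      ]
  }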

And finally we’ll describe a statement. This will have the string itself, as well as a list of variable keywords to interpolate:

data Statement = Statement
  { statementText :: String
  , statementExampleVariables :: [String]
  }
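
Putting the pieces together, here's a hedged sketch of how the "Email is already taken" scenario from above could look as a value of these types (the names and statement strings are purely illustrative):

emailTakenScenario :: Scenario
emailTakenScenario = Scenario
  { scenarioTitle = "Email is already taken"
  , scenarioStatements =
      [ Statement "there is already an account with the email \"test@test.com\"" []
      , Statement "I register an account with username \"test\", email \"test@test.com\" and password \"1234abcd!?\"" []
      , Statement "it should fail with an error: \"An account with that email already exists\"" []
      ]
  , scenarioExamples = ExampleTable [] []  -- no Examples table for this scenario
  }

registrationFeature :: Feature
registrationFeature = Feature
  { featureTitle = "Registering a User"
  , featureDescription = ["As a potential user", "..."]
  , featureBackground = Nothing
  , featureScenarios = [emailTakenScenario]
  }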

And that's all there is to it! We can put all these types in a single file and feel pretty good about that. In Java or C++, we would want a separate file (or two!) for each type, and there would be a lot more boilerplate involved.

General Parsing Approach

Another reason we’ll see that Haskell is good for parsing is the ease of breaking problems down into smaller pieces. We’ll have one function for parsing an example table, a different function for parsing a statement, and so on. Then gluing these together will actually be slick and simple!
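
For example, one of those small pieces might be a helper that turns a single raw cell from an Examples table into a Value. Here's a hedged sketch of what that could look like; the name parseValue and the readMaybe-based number check are just one possible approach, not necessarily what we'll end up using:

import Data.Scientific (fromFloatDigits)
import Text.Read (readMaybe)

-- Hypothetical helper: classify one raw cell, assuming the Value type above.
parseValue :: String -> Value
parseValue "null"  = ValueNull
parseValue "true"  = ValueBool True
parseValue "false" = ValueBool False
parseValue str = case (readMaybe str :: Maybe Double) of
  Just num -> ValueNumber (fromFloatDigits num)
  Nothing  -> ValueString str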

Conclusion

Next week, come back and we’ll actually look at how we start parsing this. The first library we’ll use is the regex-applicative parsing library. We’ll see how we can get a lot of what we want without even using a monadic context!

For some more ideas on parsing libraries you can use, check out our free Production Checklist. It will tell you about different libraries for parsing as well as a great many other tasks, from data structures to web APIs!

If you’ve never written in Haskell before but are intrigued by the possibilities, download our Beginner’s Checklist and read our Liftoff Series!


Monday Morning Haskell: Upgraded!

Welcome to the new Monday Morning Haskell! We just went live with the latest changes to the website this week. So it's time to announce what's coming next. Our main project right now is converting older blog content into permanent, organized series. We currently have two sections for these series. One is focused on beginners, the other on more advanced Haskellers.

Beginners Section

The Beginners Section will obviously focus on content for people who are new to Haskell. Right now, there are two series of articles. The first is our Liftoff series. If you have never programmed in Haskell before, this is the series for you! You'll learn how to install Haskell on your system, as well as the core language mechanics.

The second set of articles is our Haskell Brain series. This series focuses on the mental side of learning Haskell. It goes through the psychological hurdles many people face when starting Haskell. It also goes over some interesting general techniques for learning.

Advanced Section

The Advanced Section features content for those trying to make the step up from hobbyists to professional Haskellers. It incorporates two of our more recent series from the blog. First up is the Web Skills series. This series goes through some interesting libraries for tasks you might need when building a Web backend. For instance, you'll learn about the Persistent database library, the Servant API library, and some general testing techniques.

The advanced section also features a series on Haskell and machine learning. It starts off by making the case for why Haskell is a good fit for machine learning in general. Then it goes through some specific examples. One highlight of this series is a tutorial on the Haskell Tensor Flow bindings. It also gives some examples of using dependent types within Tensor Flow.

Resources Update

We've also made a big update to our subscriber-only resources. All the resources are available on the resources page. If you're a subscriber to our email list, you should have gotten an email with the password to this page! If you're not subscribed yet, you can still sign up for free! You'll get access to all of these:

  1. Beginner's Checklist – The newly revised version of our Getting Started Checklist will help you review some of the core concepts of Haskell. It will also point you towards some additional resources to help you learn even more!
  2. Production Checklist – This NEW resource lists a large number of libraries you can use for production tasks. It goes well beyond the set covered in the web skills series and gives a short summary of each.
  3. Recursion Workbook – This workbook contains a couple chapters of content that will teach you all about recursion. Then it offers 10 practice problems so you can put your skills to the test!
  4. Stack Mini-Course – This mini-course will walk you through the basics of the Haskell Stack tool so you can actually make your own projects!
  5. Servant Tutorial – At BayHac 2016 I gave a talk on the Servant library. If you're a subscriber, you can get the slides and the sample code for that talk.
  6. Tensor Flow Guide – This guide accompanies our Haskell AI series. It goes through all the details you need to know about getting the Haskell Tensor Flow library up and running.

If you subscribe, you'll also get our monthly newsletter! This will detail what's new on the blog, and what content you can expect in the future!

The Blog

Going forward, I'll be continuing to take some older blog content and form it into coherent series. Most of my weekly blog posts for the time being will focus on announcing when these are available. I do have quite a bit more fresh content planned for the future though, so stay tuned! In the meantime, if there's an old blog article you're trying to find, you can use our search functionality! I've added tags to each blog post to help you out!

So don't forget, if you want access to our awesome resources, sign up for free!


Functors Done Quick!

Suppose we're writing some code to deal with bank accounts. Most of our code will refer to these using a proper data type. But less refined parts of our code might use a tuple with the same information instead. We would want a conversion function to go between them. Here's a simple example:

data BankAccount = BankAccount
  { bankName :: String
  , ownerName :: String
  , accountBalance :: Double
  }

convertAccount :: (String, String, Double) -> BankAccount
convertAccount (bank, owner, balance) = BankAccount bank owner balance

Naturally, we'll want a convenience function for performing this operation on a list of items. We can use map for lists.

convertAccounts :: [(String, String, Double)] -> [BankAccount]
convertAccounts = map convertAccount

But Haskell has a plethora of different data structures. We can store our data in a Set, or a Vector, for a couple examples. What if different parts of our code store the data differently? They would need their own conversion functions, since the list version of map doesn't work on a Set or Vector. Can we make this code more generic?

Functors

If you read the blog post a couple weeks ago, you'll remember the idea of typeclasses. This is how we can make our code generic! We want to generalize the behavior of running a transformation over a data structure. We can make a typeclass to encapsulate this behavior. Luckily, Haskell already has such a typeclass, called Functor. It has a single function, fmap. Here is how it is defined:

class Functor f where
  fmap :: (a -> b) -> f a -> f b

If that type signature looks familiar, that's because it's almost identical to the map function over lists. And in fact, the list type uses map as its implementation for fmap:

map :: (a -> b) -> [a] -> [b]

instance Functor [] where
  fmap = map

Other Functor Instances

Now, Set and Vector have their own map functions. Vector's library already provides a Functor instance that does exactly what you'd expect:

instance Functor Vector where
  fmap = Data.Vector.map

Set is trickier: Data.Set.map requires an Ord constraint on the result type, which fmap's signature doesn't allow, so Set has no lawful Functor instance. For fully generic code, we'll stick with containers that are functors, like lists, Vector, and Maybe.

With all this in mind, we can now rewrite convertAccounts generically.

convertAccounts :: (Functor f) => f (String, String, Double) -> f BankAccount
convertAccounts = fmap convertAccount

Now anything can use convertAccounts no matter how it structures the data, as long as it uses a functor! Let's look at some of the other functors out there!

While it might not seem to fit in the same category as lists and vectors, Maybe is also a functor! Here's its implementation:

instance Functor Maybe where
  fmap _ Nothing = Nothing
  fmap f (Just a) = Just (f a)

Another example of a functor is Either. This one is a little confusing since Either has two type parameters. But really, we have to fix the first parameter. Then the conversion function is only applied to the second. This means that, like with the Nothing case above, when we have Left, we return the original value:

instance Functor (Either a) where
  fmap _ (Left a) = Left a
  fmap f (Right x) = Right (f x)
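
Here's a quick GHCI sketch of these instances in action (using simple numeric values rather than our bank accounts, since BankAccount as written doesn't derive Show):

>> fmap (*2) (Just 10)
Just 20
>> fmap (*2) Nothing
Nothing
>> fmap (+1) (Right 5) :: Either String Int
Right 6
>> fmap (+1) (Left "parse error") :: Either String Int
Left "parse error"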

Conceptualizing Functors

So concretely, Functor is a typeclass in Haskell. But how can we think of it conceptually? This is actually pretty simple. A functor is nothing more than a generic container or box. We don't know how many elements it contains. We don't know what the structure of those elements is. But if we have a way to transform those elements, we can apply that function over all of them. The result will be a new container with the same structure, but new elements. As far as abstractions go, this is probably the cleanest one we'll get, so enjoy it!

Conclusion

Functor is an example of a typeclass that we can use to get general behavior. In this case, the behavior is transforming a group of objects in a container while maintaining the container's structure. We saw how this typeclass allowed us to re-use a function over many different types. Functors are the simplest in a series of important typeclasses. Applicative functors would come next, and then monads. Monads are vital to Haskell. So understanding functors is an important first step towards learning more complex Haskell.

But you can't learn about data structures until you know the basics! If you've never written any Haskell before, download our Getting Started Checklist! If you're comfortable with the basics and want more of a challenge, take a look at our Recursion Workbook!


Need to be Faster? Be Lazy!

In more procedural and object oriented languages, we write code as a series of commands. These commands get executed in the order we write them, no matter what. Consider this example:

int myFunction(int a, int b, int c) {
  int result1 = longCalculation1(a,b);
  int result2 = longCalculation2(b,c);
  int result3 = longCalculation3(a,c);
  if (result1 < 10) {
    return result1;
  } else if (result2 < 100) {
    return result2;
  } else {
    return result3;
  }
}

There’s a clear inefficiency here. No matter what, we’ll perform all three long running operations. But we might not actually need all the results! We could rewrite the code to get around this.

int myFunction(int a, int b, int c) {
  int result1 = longCalculation1(a,b);
  if (result1 < 10) {
    return result1;
  } else {
    int result2 = longCalculation2(b,c);
    if (result2 < 100) {
     return result2;
    } else {
      int result3 = longCalculation3(a,c);
      return result3;
    }
  }
}

But now it’s a little less clear what’s going on. The code isn’t as readable. And there are some situations where this kind of refactoring is impossible. This is an inevitable consequence of the paradigm of eager evaluation in almost all mainstream languages. In Haskell we write expressions, rather than commands. Thus evaluation order is a little less clear. In fact, Haskell expressions are evaluated lazily. We don’t perform any calculations until we’re sure they’re needed! Let’s see how this works.

How Laziness Works

Here’s how we can write the function above in Haskell:

myFunction :: Int -> Int -> Int -> Int
myFunction a b c =
  let result1 = longCalculation1 a b
      result2 = longCalculation2 b c
      result3 = longCalculation3 a c
  in if result1 < 10
       then result1
       else if result2 < 100
         then result2
         else result3

While this seems semantically identical to the first C++ version, it actually runs as efficiently as the second version! In Haskell, result1, result2, and result3 get stored as “thunks”. GHC sets aside a piece of memory for the result, and knows what calculation it has to perform to get the result. But it doesn’t perform the calculation until we need the result.
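
You can actually watch this happen in GHCI using the :sprint command, which prints a value without forcing it; anything still unevaluated shows up as an underscore:

>> let x = 1 + 2 :: Int
>> :sprint x
x = _
>> x
3
>> :sprint x
x = 3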

Here’s another example. Suppose we want all Pythagorean triples whose sum is less than 1000. Sounds like a tall order. But enter the following into GHCI, and you’ll see that it happens very quickly!

>> let triples = [(a,b,c) | a <- [1..1000], b <- [1..1000], c <- [1..1000], a + b + c < 1000, (a ** 2) + (b**2) == c ** 2]

Did it perform all that calculation so quickly? Of course not! If you now print triples, it will take a while to print it all out. But suppose we only wanted 5 examples! It doesn’t take too long!

>> take 5 triples
[(3.0,4.0,5.0),(4.0,3.0,5.0),(5.0,12.0,13.0),(6.0,8.0,10.0),(7.0,24.0,25.0)]

As we see, an expression typically only gets evaluated when an IO action, such as print, demands its value. If you're using GHCI and print the result of a calculation, that forces the whole calculation. In a compiled program, the calculation happens once it (or an expression that depends on it) is needed by main.

Infinite Lists as a Consequence of Laziness

Besides potentially saving time, laziness has some other interesting consequences. One of these is that we can have data structures that can’t exist in other languages. For instance, we can define an “infinite” list:

>> let infList = [1..]

This list starts at 1, and each element counts up by 1, going up to infinity! But how is this possible? We don't have an infinite amount of memory! The key is that we don't actually evaluate any of the elements until we need them. For example, we can take the first 10 elements of an infinite list.

>> take 10 [1..]
[1,2,3,4,5,6,7,8,9,10]

Of course, if we try to print the entire list, we’ll run into problems!

>> [1..]
(Endless printing of numbers)

But there are some cool things we can do with infinite lists. For instance, it’s easy to match up a list of elements with the numeric index of the element in the list. We can do this by using zip in conjunction with an infinite list:

addIndex :: [a] -> [(Int, a)]
addIndex = zip [1..]

Or we could match every element with its index modulo 4:

addIndexMod4 :: [a] -> [(Int, a)]
addIndexMod4 = zip (cycle [0,1,2,3])
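
A quick GHCI check of these two helpers (with the definitions above loaded):

>> addIndex "abc"
[(1,'a'),(2,'b'),(3,'c')]
>> addIndexMod4 [10,20,30,40,50]
[(0,10),(1,20),(2,30),(3,40),(0,50)]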

Disadvantages of Laziness

Haskell’s laziness isn’t infallible. It can get us into trouble sometimes. While it often saves us time, it can cost us in terms of space. This is apparent even in a simple example using foldl.

>> foldl (+) 0 [1..100000000]
Stack overflow!

When we add the numbers up through 100 million, we should be able to do it with constant memory. All we would need would be a running tally of the current sum. On the one hand, laziness means that the entire list of a hundred million numbers is never all in memory at the same time. But on the other hand, all the calculations involved in that running tally happen lazily! So at some point, our memory footprint actually looks like this:

(((((1 + 2) + 3) + 4) + …) + 100000000)

That is, all the individual numbers are in memory at the same time because the + operations aren't evaluated until they need to be! In situations like this, we want to introduce strictness into our code. We can do this with the seq function. This function is a little special. It takes two arguments and returns the second of them. However, it is strict in its first argument. That is, when the result is demanded, the first item we pass to it gets evaluated to weak head normal form (which, for a number like our running sum, means fully evaluated).

We can see this in use in the definition of foldl', the strict counterpart to foldl:

foldl' f accum [] = accum
foldl' f accum (x : xs) = let newAccum = f accum x
                          in seq newAccum $ foldl' f newAccum xs

The use of seq here causes Haskell to evaluate newAccum strictly, so we don't keep storing calculations in memory. Using this technique, we can now actually add up that list of integers!

>> foldl' (+) 0 [1..100000000]
5000000050000000
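
In practice, you don't even need to write this function yourself: the strict left fold already exists as foldl' in Data.List. A minimal usage sketch (sumToN is just an illustrative name):

import Data.List (foldl')

sumToN :: Integer -> Integer
sumToN n = foldl' (+) 0 [1..n]

-- sumToN 100000000 == 5000000050000000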

Conclusion

Laziness is another feature that makes Haskell pretty unique among programming languages. Like any language feature, it has its drawbacks. It gives us yet another way we have to reason differently about Haskell compared to other languages. But it also has some distinct advantages. We can have some significantly faster code in many cases. It also allows us to use structures like infinite lists that don’t exist in other languages.

Hopefully this has convinced you to give Haskell a try. Take a look at our Getting Started Checklist and get going!


Immutability: The Less Things Change, The More You Know

Most programmers take it for granted that they can change the values of their expressions. Consider this simple Python example:

> a = [1,2,3]
> a.reverse()
> a
[3,2,1]

We can see that the reverse function actually changed the underlying list. Here’s another example, this time in C++. We pass a pointer to an integer as a parameter, and we can update the integer within the function.

int myFunction(int* a) {
  int result = 0;
  if (*a % 2 == 0) {
    result = 10;
  } else {
    result = 20;
  }
  ++(*a);
  return result;
}

When we call this function, the original expression changes values.

int main() {
  int x = 4;
  int* xp = &x;
  int result = myFunction(xp);
  cout << result << endl;
  cout << *xp << endl;
}

Even though xp initially points to the value 4, when we print it at the end, the value is now 5! But as we’ll learn, Haskell does not, in general, allow us to do this! Let’s see how it works.

Immutability

In Haskell, all expressions are immutable! This means you cannot change the underlying value of something like you can in Python or C++. There are still some functions that appear to mutate things. But in general, they don’t change the original value. They create entirely new values! Let’s look at an example with reverse:

>> let a = [1,2,3]
>> reverse a
[3,2,1]
>> a
[1,2,3] -- unchanged!

The reverse function takes one argument, a list, and returns a list. But the final value is a totally new list! Observe how the original expression a remains the same! Compare this to the earlier Python example. The reverse function actually had a “void” return value. Instead, it changed the original list.

Record syntax is another example where we appear to mutate a value in Haskell. Consider this type and an accompanying mutator function:

data Person = Person
  { personName :: String
  , personAge :: Int
  } deriving (Show)

makeAdult :: Person -> Person
makeAdult person = person { personAge = 18}

But when we actually use the function, we’ll find again that it creates a totally new value! The old one stays the same!

>> let p = Person "John" 17
>> makeAdult p
Person {personName = "John", personAge = 18}
>> p
Person {personName = "John", personAge = 17}

Advantages of Immutability

Immutability might seem constraining at first. But it’s actually very liberating! Until you try programming with immutability by default, you don’t realize quite how many bugs mutable data causes. It is tremendously useful to know that your values cannot change. Suppose we take a list as a parameter to a function. In Haskell, we know that no matter how many functions we call with that list as a parameter, it will still be the same each time.

example :: [Int] -> Int
example myList = …
  where
    -- Each call uses the EXACT same list!
    result1 = function1 myList
    result2 = function2 result1 myList
    result3 = function3 result2 myList

Immutability also means you don’t have to worry about different ways to “copy” a data structure. Every copy is a shallow copy, since you can’t change the values within the structure anyway!

Ways Around Immutability

Naturally, there are situations where you want to have mutable data. But we can always simulate this effect in Haskell by using more advanced types! For instance, we can easily represent the C++ function above using the State monad.

import Control.Monad.State

myFunction :: State Int Int
myFunction = do
  a <- get
  let result = if a `mod` 2 == 0
        then 10
        else 20
  modify (+1) -- Change the underlying state value
  return result

{-
>> let x = 4
>> runState myFunction x
(10, 5)
>> x
4
-}

Again, this doesn’t actually “mutate” any data. When we pass x into our State function, x itself doesn’t change! Instead, the function returns a completely new value. Now, different calls to get can return us different values depending on the state. But this fact is encoded in the type system. We explicitly declare that there is an Int value that can change.

Of course, there are times where we actually do want to change the specific values in memory. One example of this is if we want to perform an in-place sort. We'll have to move the elements of the array to different spots in memory. Otherwise we would have to allocate at least O(n) more space for the sorted result. In cases like this, we can use IO references. To sort an array, we'd want Data.Array.IO. For many other cases, we'll just want the IORef type. Whenever you need to truly mutate data, you need to be in the IO monad.
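
Here's a minimal sketch of true in-place mutation using an IORef (the counter and the name counterDemo are purely illustrative):

import Data.IORef

counterDemo :: IO Int
counterDemo = do
  ref <- newIORef (0 :: Int)  -- allocate a mutable cell
  modifyIORef' ref (+1)       -- mutate the cell in place
  modifyIORef' ref (+1)
  readIORef ref               -- returns 2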

Looking at all these examples, what we see is that Haskell doesn’t actually limit us at all! We can get all the same mutability effects we have in other languages. The difference is that in Haskell the default behavior is immutability. We have to use the type system to specify when we want mutable data.

Contrast this with C++. We can get immutable data by using the const keyword if we want. But the default is mutable data and we have to use the type system to make it immutable.

Conclusion

Immutability sounds crazy. But it does a huge amount to limit the kinds of bugs we can get. It seems like a big limitation on your code, but there are plenty of workarounds when you need them. The key fact is that mutable data is encoded within the type system. This forces you to be very conscious about when your data is mutable, and that will help you avoid bugs.

Want to see for yourself what the hype is about? Give Haskell a shot! Download our Getting Started Checklist and start learning Haskell!


General Functions with Typeclasses

Last week, we looked at the basics of Haskell's data types. We saw that Haskell is not an object oriented language, and we don't have inheritance between data types. Inheritance would get very confusing with all the different constructors a data type can have. Instead, Haskell gives us a lot of the same functionality through typeclasses. This week we'll take a quick look at this concept.

What is a Typeclass?

A typeclass encapsulates functionality that is common to different types. In practice, a typeclass describes a series of functions that you expect to exist for a given type. When these functions exist, you can create what is called an “instance” of a typeclass.

Typeclasses are a lot like interfaces in Java. You specify a group of functions, but only with the type signatures. Then for each relevant type, you'll need to specify an implementation for each function. As an example, suppose we had two different types referring to different kinds of people.

data Student = Student String Int

data Teacher = Teacher
  { teacherName :: String
  , teacherAge :: Int
  , teacherDepartment :: String
  , teacherSalary :: Int
  }

We could then make a typeclass called IsPerson. We'll give it a couple functions that refer to the name and age of the person. Then we parameterize the class by the single type a. We'll use that type parameter in the type signatures of the functions:

class IsPerson a where
  personName :: a -> String
  personAge :: a -> Int

Creating Instances of Typeclasses

Now let's create an instance of the typeclass. All we have to do is implement each function under the instance keyword:

instance IsPerson Student where
  personName (Student name _) = name
  personAge (Student _ age) = age

instance IsPerson Teacher where
  personName = teacherName
  personAge = teacherAge
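
With both instances in place, the class functions work on either type. A quick GHCI sketch with made-up values:

>> personName (Student "Sally" 12)
"Sally"
>> personAge (Teacher "Mr. Smith" 40 "Math" 50000)
40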

There are a lot of simple typeclasses in the base libraries that you’ll need to know for some basic tasks. For instance, to compare two items for equality, you’ll need the Eq typeclass:

class Eq a where
  (==) :: a -> a -> Bool
  (/=) :: a -> a -> Bool

We can define instances for these for all our types. But for simple, base library classes like these, GHC can define them for us! All we need is the deriving keyword:

data Student = Student String Int
  deriving (Eq)

data Teacher = Teacher
  { teacherName :: String
  , teacherAge :: Int
  , teacherDepartment :: String
  , teacherSalary :: Int
  }
  deriving (Eq)

Using Typeclass Constraints

But why are typeclasses important? Well, often we want to write code that is as general as possible. We want to write functions that assume as little about their inputs as they can. For instance, suppose we have this function that will print a teacher's name:

printName :: Teacher -> IO ()
printName teacher = putStrLn $ personName teacher

We can use this function for more types than Teacher though! Any type that implements IsPerson will do. So we can make the function polymorphic, and add the IsPerson constraint on our a type:

printName :: (IsPerson a) => a -> IO ()
printName person = putStrLn $ personName person
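
Now the same function works for any type with an IsPerson instance, and a type without one is rejected at compile time. A quick sketch:

>> printName (Student "Sally" 12)
Sally
>> printName (Teacher "Mr. Smith" 40 "Math" 50000)
Mr. Smith
>> printName "just a string"  -- rejected: String has no IsPerson instance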

You can also use a typeclass to constrain the type parameter of a new data type, though this "datatype context" feature is deprecated (modern GHC requires the DatatypeContexts extension), and it's usually better to put the constraint on the functions that use the type:

data (IsPerson a) => EmployeeRecord a = EmployeeRecord
  { employee :: a
  , employeeTenure :: Int
  }

Typeclasses can even provide a form of inheritance. You can constrain a typeclass by another typeclass! A couple base classes show an example of this. The “Orderable” typeclass Ord depends on the type having an instance of Eq:

class (Eq a) => Ord a where
  compare :: a -> a -> Ordering
  (<=) :: a -> a -> Bool
  ...

Conclusion

Haskell programmers like code that is as general as possible. Object oriented languages try to accomplish this with inheritance. But Haskell gets most of the same functionality with typeclasses instead. They describe common features between types, and provide a lot of flexibility.

To continue learning more about the Haskell basics, take a look at our Getting Started Checklist and get going!

Do you already understand the basics and want more of a challenge? Check out our Recursion Workbook!


Haskell Data Types in 5 Steps

People often speak of a dichotomy between “object oriented” programming and “functional” programming. Haskell falls into the latter category, meaning we do more of our work with functions. We don't use hierarchies of objects to abstract work away. But Haskell is also heavily driven by its type system. So of course we still define our own data types in Haskell! Even better, Haskell has unique mechanisms you won't find in OO languages!

The Data Keyword and Constructors

In general, we define a new data type by using the data keyword, followed by the name of the type we’re defining. The type has to begin with a capital letter to distinguish it from normal expression names.

data Employee = ...

To start defining our type, we must provide a constructor. This is another capitalized word that allows you to create expressions of your new type. The constructor name is then followed by a list of 0 or more other types. These are like the “fields” that a data type carries in a language like Java or C++.

data Employee = Executive String Int Int

employee1 :: Employee
employee1 = Executive "Jane Doe" 38 300000

Sum Types

In a language like Java, you can have multiple constructors for a type. But the type will still encapsulate the same data no matter what constructor you use. In Haskell, you can have many constructors for your data type, separated by a vertical bar |. Each of your constructors then has its own list of data types! So different constructors of the same type can have different underlying data! We refer to a type with multiple constructors as a “sum” type.

data Employee =
 Executive String Int Int |
 VicePresident String String Int |
 Manager String String |
 Engineer String Int
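
Since each constructor can carry different data, we typically use pattern matching to handle each case separately. Here's a hedged sketch (getEmployeeName is a hypothetical helper, and it assumes the name is the first field of every constructor):

getEmployeeName :: Employee -> String
getEmployeeName (Executive name _ _)     = name
getEmployeeName (VicePresident name _ _) = name
getEmployeeName (Manager name _)         = name
getEmployeeName (Engineer name _)        = name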

If your type has only one constructor, it is not uncommon to re-use the name of the type as the constructor name:

data Employee = Employee String Int

Record Syntax

You can also define a type using “record syntax”. This allows you to provide field names to each type in the constructor. With these, you access the individual fields with simple functions. Otherwise, you'll need to resort to pattern matching. This is more commonly seen with types that use a single constructor. It is a good practice to prefix your field names with the type name to avoid name conflicts.

data Employee = Employee
 { employeeName :: String
 , employeeAge :: Int
 }

printName :: Employee -> IO ()
printName employee = putStrLn $ employeeName employee
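
As a quick illustration (janeDoe and johnDoe are made-up values), we can build records either positionally or with the field names spelled out, and the field names double as accessor functions:

janeDoe :: Employee
janeDoe = Employee "Jane Doe" 38

johnDoe :: Employee
johnDoe = Employee { employeeName = "John Doe", employeeAge = 30 }

-- employeeAge johnDoe == 30
-- employeeName janeDoe == "Jane Doe"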

Type Synonyms

As in C++, you can create type synonyms, providing a second name for a type. Sometimes, expressions can mean different things, even though they have the same representation. Type synonyms can help keep these straight. To make a synonym, use the type keyword, the new name you would like to use to refer to your type, and then the original type.

type InterestRate = Float
type BankBalance = Float

applyInterest :: BankBalance -> InterestRate -> BankBalance
applyInterest balance interestRate = balance + (balance * interestRate)

Note though that type synonyms have no impact on how your code compiles! This means it is still quite possible to misuse them! The following type signatures will still compile for this function:

applyInterest :: Float -> Float -> Float
applyInterest :: InterestRate -> BankBalance -> Float

Newtypes

To avoid the confusion that can occur above, you can use the newtype keyword. A newtype is like a cross between data and type. Like type, you’re essentially renaming a type. But you do this by writing a declaration that has exactly one constructor with exactly one type. As with a data declaration, you can use record syntax within newtypes.

newtype BankBalance = BankBalance Float
newtype InterestRate = InterestRate { unInterestRate :: Float }

Once you’ve done this, you will have to use the constructors (or record functions) to wrap and unwrap your code:

applyInterest :: BankBalance -> InterestRate -> BankBalance
applyInterest (BankBalance bal) rate = BankBalance $
 bal + (unInterestRate rate * bal)

Newtype declarations do affect how your code compiles. So the following invalid type signature will NOT compile!

applyInterest :: InterestRate -> BankBalance -> Float
applyInterest (BankBalance bal) (InterestRate rate) = ...

Conclusion

As we learned a couple weeks ago, types are important in Haskell. So it’s not surprising that Haskell has some nifty constructs for building our own types. Constructors and sum types give us the flexibility to choose what kind of data we want to store. We can even change the data stored for different elements of the same type! Type synonyms and newtypes give us two different ways to rename our types. The first is easy and helps avoid confusion. The second requires more code re-writing, but provides more type safety.

If you’ve never written a line of Haskell before, never fear! Take a look at our Getting Started Checklist to get going!
