Machine Learning in Haskell
AI and machine learning are among the biggest topics in technology today. In this series, we'll explore how Haskell's features as a language can help us craft better AI programs. In particular, we'll look at some advanced concepts in type safety and apply them to the machine learning framework Tensor Flow. We'll also look at another library that uses similar ideas with neural networks.
Part 1: Haskell and Tensor Flow
We can talk all day about theoretical reasons why Haskell would be good for machine learning. But ultimately, we have to examine what it's like in practice. Part 1 of this series introduces the Haskell Tensor Flow bindings, which let us use the awesome machine learning framework Tensor Flow from our Haskell code.
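As a taste of what the bindings look like, here's a minimal sketch of adding two constant tensors inside a session. It assumes the tensorflow and tensorflow-ops packages and the usual module layout (TensorFlow.Core, TensorFlow.Ops); details may vary by version.

```haskell
import Data.Vector (Vector)
import qualified TensorFlow.Core as TF
import qualified TensorFlow.Ops as TF

-- Build two constant tensors, add them, and run the graph in a session.
main :: IO ()
main = do
  result <- TF.runSession $ do
    let a = TF.constant (TF.Shape [3]) [1, 2, 3 :: Float]
        b = TF.constant (TF.Shape [3]) [4, 5, 6 :: Float]
    TF.run (a `TF.add` b)
  print (result :: Vector Float)
```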
Part 2: Haskell, AI, and Dependent Types I
The Haskell Tensor Flow bindings are nice, but by themselves they don't provide the extra safety we hoped to achieve. In part 2, we'll dig into some ideas that are more distinctive to Haskell. We'll see how we can use dependent types to avoid some of the runtime failures that can occur with Tensor Flow.
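To preview the idea, here's a hypothetical sketch (not the bindings' actual API) of a wrapper that tracks a tensor's length with a type-level natural, so that adding tensors of mismatched sizes becomes a compile-time error. The names SafeTensor, safeConstant, and safeAdd are illustrative only.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE ScopedTypeVariables #-}

import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownNat, Nat, natVal)
import qualified TensorFlow.Core as TF
import qualified TensorFlow.Ops as TF

-- Hypothetical wrapper: the tensor's length lives in its type.
newtype SafeTensor (n :: Nat) = SafeTensor (TF.Tensor TF.Build Float)

-- Construction checks the list length against the type-level size.
safeConstant :: forall n. KnownNat n => [Float] -> Maybe (SafeTensor n)
safeConstant xs
  | length xs == size =
      Just (SafeTensor (TF.constant (TF.Shape [fromIntegral size]) xs))
  | otherwise = Nothing
  where
    size = fromIntegral (natVal (Proxy :: Proxy n)) :: Int

-- Addition only type checks when both tensors have the same size.
safeAdd :: SafeTensor n -> SafeTensor n -> SafeTensor n
safeAdd (SafeTensor a) (SafeTensor b) = SafeTensor (a `TF.add` b)
```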
Part 3: Haskell, AI, and Dependent Types II
In part 3, we'll continue what we started in part 2. We've seen how to restrict the sizes of our tensors at compile time. Now we'll examine how we can encode information about placeholders within our tensor types as well!
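For reference, here's roughly what placeholders look like in the plain bindings, where nothing ties a feed to the placeholder's declared shape at compile time. This sketch assumes the standard placeholder, feed, and runWithFeeds functions from TensorFlow.Core and TensorFlow.Ops.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

import qualified Data.Vector as V
import qualified TensorFlow.Core as TF
import qualified TensorFlow.Ops as TF

main :: IO ()
main = do
  result <- TF.runSession $ do
    -- Untyped placeholders: the shapes here are only checked at runtime.
    (a :: TF.Tensor TF.Value Float) <- TF.placeholder (TF.Shape [3])
    (b :: TF.Tensor TF.Value Float) <- TF.placeholder (TF.Shape [3])
    TF.runWithFeeds
      [ TF.feed a (TF.encodeTensorData (TF.Shape [3]) (V.fromList [1, 2, 3]))
      , TF.feed b (TF.encodeTensorData (TF.Shape [3]) (V.fromList [4, 5, 6]))
      ]
      (a `TF.add` b)
  print (result :: V.Vector Float)
```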
Part 4: Grenade and Deep Learning
In parts 2 and 3, we rolled our own dependent type machinery. To wrap up the series, we'll examine a more fully developed library that applies dependent type concepts to machine learning. Part 4 is all about the Grenade library and its cool features.
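To give a flavor of what that looks like, here's a sketch of a Grenade network type in the style of the library's feed-forward examples. The layer names and shape syntax follow Grenade's published examples, but exact details may vary by version.

```haskell
{-# LANGUAGE DataKinds #-}

import Grenade

-- The layer sizes live in the type. Chaining layers whose dimensions
-- don't line up is rejected by the compiler, not at runtime.
type FFNet =
  Network
    '[ FullyConnected 2 40, Tanh, FullyConnected 40 10, Relu, FullyConnected 10 1, Logit ]
    '[ 'D1 2, 'D1 40, 'D1 40, 'D1 10, 'D1 10, 'D1 1, 'D1 1 ]

-- Initialize the network with random weights.
buildNet :: IO FFNet
buildNet = randomNetwork
```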
Review: Haskell Tensor Flow Guide
If you want to try Haskell Tensor Flow for yourself, you should download our Haskell Tensor Flow Guide. The library requires a few dependencies, and installing them can be a little complicated. The guide walks you through each dependency and shows how to add the library to your Stack project.