Machine Learning in Haskell
AI and machine learning are huge topics in technology. In this series, we'll explore how Haskell's unique features as a language can help us craft better AI programs. In particular, we'll apply some advanced concepts in type safety to the machine learning framework TensorFlow. We'll also look at Grenade, another library that applies similar ideas to neural networks.
In part 1 of this series, we'll consider some of the implications of AI and its coming ubiquity. We'll also consider the unique value Haskell could bring to AI programming.
We can talk all day about theoretical reasons why Haskell would be good for machine learning. But at the end of the day, we have to examine what it's like in practice. Part 2 of this series introduces the Haskell TensorFlow bindings, which let us use this powerful machine learning framework from our Haskell code.
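As a small taste of what the bindings look like, here's a minimal sketch that builds two constant tensors and adds them in a session. It assumes the `tensorflow` Haskell package is installed; exact module layout and function names may vary between versions of the bindings.

```haskell
-- A minimal sketch using the Haskell TensorFlow bindings
-- (the `tensorflow` package); details may vary by version.
import qualified Data.Vector as V
import qualified TensorFlow.Core as TF
import qualified TensorFlow.Ops as TF

main :: IO ()
main = do
  -- Build two constant 1-D tensors with two Float elements each
  let a = TF.constant (TF.Shape [2]) [1, 2 :: Float]
      b = TF.constant (TF.Shape [2]) [3, 4 :: Float]
  -- Run the graph in a session and fetch the result as a Vector
  result <- TF.runSession $ TF.run (a `TF.add` b)
  print (result :: V.Vector Float)
```

Notice that nothing here stops us from adding tensors of mismatched shapes; that mistake would only surface at runtime, which is exactly the problem the later parts of the series tackle.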
The Haskell TensorFlow bindings are nice. But by themselves, they don't provide the extra safety we hoped to achieve. In part 3, we'll really dig into some ideas more unique to Haskell. We'll see how we can use dependent types to catch, at compile time, some of the runtime failures that can occur with TensorFlow.
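To preview the core idea, here's a small self-contained sketch (plain GHC, no TensorFlow required, and not the series' actual implementation) of a vector whose length lives in its type, so that combining vectors of different sizes is a compile-time error:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Peano naturals, promoted to the type level via DataKinds
data Nat = Z | S Nat

-- A vector whose length is part of its type
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Elementwise addition only type-checks for equal-length vectors
vAdd :: Num a => Vec n a -> Vec n a -> Vec n a
vAdd VNil         VNil         = VNil
vAdd (VCons x xs) (VCons y ys) = VCons (x + y) (vAdd xs ys)
```

An expression like `vAdd (VCons 1 VNil) (VCons 1 (VCons 2 VNil))` is rejected by the type checker, because `Vec ('S 'Z) a` and `Vec ('S ('S 'Z)) a` are different types. The same trick, applied to tensor shapes, rules out a whole class of TensorFlow runtime errors.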
In part 4, we'll continue what we started in part 3. We've seen how to restrict the sizes of our tensors at compile time. Now we'll examine how to encode information about placeholders within our tensor types as well!
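The rough idea can be sketched with type-level strings. These are toy types invented for illustration, not the real bindings: a tensor carries the list of placeholder names it depends on, and feeding a placeholder removes it from the list.

```haskell
{-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}

import GHC.TypeLits (Symbol)

-- Toy model: a tensor tagged with the names of the placeholders
-- it still needs before it can be evaluated.
newtype Tensor (placeholders :: [Symbol]) a = Tensor a

-- A computation that depends on placeholders "x" and "y"
model :: Tensor '["x", "y"] Float
model = Tensor 0.0

-- Feeding a value for "x" discharges it from the type-level list
feedX :: Float -> Tensor ("x" ': rest) a -> Tensor rest a
feedX _ (Tensor a) = Tensor a
```

With this scheme, running a tensor whose placeholder list isn't empty, or feeding a placeholder the graph doesn't expect, becomes a type error instead of a runtime crash.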
In parts 3 and 4, we rolled our own dependent type machinery. To wrap up this series, we'll examine a more fully developed library that applies dependent type concepts to machine learning. Part 5 is all about the Grenade library and its cool features.
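To give a flavor of Grenade, here's a sketch in the style of its documented examples (layer names and shape syntax may differ between Grenade versions): a network's entire architecture, including the shapes flowing between layers, lives in its type.

```haskell
{-# LANGUAGE DataKinds #-}

import Grenade

-- Both the layers and the shapes between them are part of the type,
-- so stacking mismatched layers simply won't compile.
type FFNet = Network
  '[ FullyConnected 2 40, Tanh, FullyConnected 40 10, Relu
   , FullyConnected 10 1, Logit ]
  '[ 'D1 2, 'D1 40, 'D1 40, 'D1 10, 'D1 10, 'D1 1, 'D1 1 ]

-- Initialize the network with random weights
randomFFNet :: IO FFNet
randomFFNet = randomNetwork
```

Change `FullyConnected 40 10` to `FullyConnected 50 10` and the program stops compiling, because the output shape of the previous layer no longer lines up.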
If you want to try Haskell TensorFlow for yourself, you should download our Haskell Tensor Flow Guide. The library requires a few system dependencies, and installing them can be a little complicated. The guide walks you through all the dependencies and shows how to add the library to your Stack project.