Diesel: Making a Formal Schema
In part 1 of our Rust Web series, we took our first steps into some real-world tasks with Rust. We explored the simple Rust Postgres library to connect to a database and run some queries. In this second part, we're going to use Diesel, a library with some cool ORM capabilities. It's a bit like the Haskell library Persistent, which you can explore in our Real World Haskell Series.
For a more in-depth look at the code for this article, take a look at our GitHub repository! You'll want to look at the files referenced below and also at the executable here. If you're ready to move on to web server design, you can skip to part 3!
Diesel CLI
Our first step is to add Diesel as a dependency in our program. We briefly discussed Cargo "features" in part 1. Diesel has a separate feature for each backend you might use, so we'll specify "postgres". Once again, we'll also use a special feature for the chrono library so we can use timestamps in our database.
[dependencies]
diesel = { version = "1.4.4", features = ["postgres", "chrono"] }
But there's more! Diesel comes with a CLI that helps us manage our database migrations. It will also generate some of our schema code. Just as we can install binaries with Stack using stack install, we can do the same with Cargo. We specify only the features we want; otherwise, installation will fail if we don't have the other databases' client libraries installed.
>> cargo install diesel_cli --no-default-features --features postgres
Now we can start using the program to setup our project to generate our migrations. We begin with this command.
>> diesel setup
This creates a couple of different items in our project directory. First, we have a "migrations" folder, where we'll put some SQL code. We also get a schema.rs file in our src directory. Diesel will automatically generate code for us in this file. Let's see how!
Migrations and Schemas
When using Persistent in Haskell, we defined our basic types in a single schema file using a special template language. We could run migrations on our whole schema programmatically, without writing our own SQL. But it can be difficult to track more complex migrations as your schema evolves.
Diesel is a bit different. Unfortunately, we have to write our own SQL. But we'll do so in a way that makes it easy to take more granular actions on our tables. Diesel will then generate a schema file for us. We'll still need to do some extra work to get the Rust types we need, though. To start, let's use Diesel to generate our first migration. This migration will create our "users" table.
>> diesel migration generate create_users
This creates a new folder within our "migrations" directory for this "create_users" migration. It has two files, up.sql and down.sql. We start by populating the up.sql file with the SQL we need to run the migration.
CREATE TABLE users (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
email TEXT NOT NULL,
age INTEGER NOT NULL
);
Then we also want the down.sql file to contain SQL that reverses the migration.
DROP TABLE users CASCADE;
Once we've written these, we can run our migration!
>> diesel migration run
We can then undo the migration, running the code in down.sql, with this command:
>> diesel migration revert
There is also diesel migration redo, which reverts the latest migration and then immediately re-runs it.
The result of running our migration is that Diesel populates the schema.rs file. This file uses the table! macro, which generates helpful types and trait implementations for us. We'll use these a bit when incorporating the table into our code.
table! {
    users (id) {
        id -> Int4,
        name -> Text,
        email -> Text,
        age -> Int4,
    }
}
While we're at it, let's make one more migration to add an articles table.
-- migrations/create_articles/up.sql
CREATE TABLE articles (
id SERIAL PRIMARY KEY,
title TEXT NOT NULL,
body TEXT NOT NULL,
published_at TIMESTAMP WITH TIME ZONE NOT NULL,
author_id INTEGER REFERENCES users(id) NOT NULL
);
-- migrations/create_articles/down.sql
DROP TABLE articles;
Then we can once again use diesel migration run.
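As with the users table, running this migration updates schema.rs. Assuming Diesel's default settings, the generated code should look roughly like this; because of the foreign key reference, Diesel also emits a joinable! declaration, which is what enables the join query we'll write later:

```rust
table! {
    articles (id) {
        id -> Int4,
        title -> Text,
        body -> Text,
        published_at -> Timestamptz,
        author_id -> Int4,
    }
}

// Generated because of the REFERENCES users(id) constraint.
joinable!(articles -> users (author_id));

allow_tables_to_appear_in_same_query!(
    articles,
    users,
);
```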
Model Types
Now, while Diesel will generate a lot of useful code for us, we still need to do some work on our own. We have to create our own structs for the data types to take advantage of the trait implementations we get. With Persistent, we got these for free. Persistent also used a wrapper Entity type, which attached a Key to our actual data.
Diesel doesn't have the notion of an entity. We have to manually make two different types, one with the database key and one without. For the "Entity" type, which has the key, we'll derive the Queryable trait. Then we can use Diesel's functions to select items from the table.
#[derive(Queryable)]
pub struct UserEntity {
    pub id: i32,
    pub name: String,
    pub email: String,
    pub age: i32,
}
We then have to declare a separate type that implements Insertable. This doesn't have the database key, since we don't know the key before inserting the item. It should be a copy of our entity type, but without the key field. We use the table_name attribute to tie it to the users table.
#[derive(Insertable)]
#[table_name="users"]
pub struct User {
    pub name: String,
    pub email: String,
    pub age: i32,
}
Note that for our foreign key column, we'll use a normal integer. In Persistent we would have a special Key type. We lose some of the semantic meaning of this field by doing this, but it helps keep more of our code decoupled from this specific library.
Making Queries
Now that we have our models in place, we can start using them to write queries. First, we need to make a database connection using the establish function. Rather than using the ? syntax, we'll use .expect to unwrap our results in this article. This is less safe, but a little easier to work with.
fn create_connection() -> PgConnection {
    let database_url = "postgres://postgres:postgres@localhost/rust_db";
    PgConnection::establish(&database_url)
        .expect("Error connecting to database")
}

fn main() {
    let connection: PgConnection = create_connection();
    ...
}
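Hard-coding the URL is fine for this article, but a common alternative (the pattern used in Diesel's own getting-started guide) is to read the URL from the DATABASE_URL environment variable. A sketch of that variant, falling back to our hard-coded default when the variable is unset:

```rust
use std::env;

fn create_connection() -> PgConnection {
    // Prefer DATABASE_URL from the environment; fall back to a local default.
    let database_url = env::var("DATABASE_URL")
        .unwrap_or_else(|_| "postgres://postgres:postgres@localhost/rust_db".to_string());
    PgConnection::establish(&database_url)
        .expect("Error connecting to database")
}
```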
Let's start with insertion. Of course, we begin by creating one of our Insertable User items. We can then write an insertion query with the Diesel function insert_into.
Diesel's query functions are composable. We add different elements to the query until it is complete. With an insertion, we use values combined with the item we want to insert. Then we call get_result with our connection. The result of an insertion is our "Entity" type.
fn create_user(conn: &PgConnection) -> UserEntity {
    let u = User {
        name: "James".to_string(),
        email: "james@test.com".to_string(),
        age: 26,
    };
    diesel::insert_into(users::table)
        .values(&u)
        .get_result(conn)
        .expect("Error creating user!")
}
Selecting Items
Selecting items is a bit more complicated. Diesel generates a dsl module for each of our tables. This allows us to use each field name as a value within filters and orderings. Suppose we want to fetch all the articles written by a particular user. We start our query on the articles table and call filter to begin building the query. We can then add a constraint on the author_id field.
fn fetch_articles(conn: &PgConnection, uid: i32) -> Vec<ArticleEntity> {
    use rust_web::schema::articles::dsl::*;
    articles.filter(author_id.eq(uid))
    ...
We can also add an ordering to our query. Notice again how these functions compose. We also have to specify the return type we want when using the load function to complete our select query. The main case is to return the full entity. This is like SELECT * FROM in SQL lingo. Applying load will give us a vector of these items.
fn fetch_articles(conn: &PgConnection, uid: i32) -> Vec<ArticleEntity> {
    use rust_web::schema::articles::dsl::*;
    articles.filter(author_id.eq(uid))
        .order(title)
        .load::<ArticleEntity>(conn)
        .expect("Error loading articles!")
}
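Though this article focuses on inserts and selects, updates and deletes compose the same way, using Diesel's update and delete functions. A quick sketch against our users table (the function names here are our own; execute returns the number of rows affected):

```rust
fn update_user_age(conn: &PgConnection, uid: i32, new_age: i32) -> usize {
    use rust_web::schema::users::dsl::*;
    // Target the row with a filter, then set the new column value.
    diesel::update(users.filter(id.eq(uid)))
        .set(age.eq(new_age))
        .execute(conn)
        .expect("Error updating user!")
}

fn delete_user(conn: &PgConnection, uid: i32) -> usize {
    use rust_web::schema::users::dsl::*;
    diesel::delete(users.filter(id.eq(uid)))
        .execute(conn)
        .expect("Error deleting user!")
}
```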
But we can also specify particular fields that we want to return. We'll see this in the final example, where our result type is a vector of tuples. This last query will be a join between our two tables. We start with users and apply the inner_join function.
fn fetch_all_names_and_titles(conn: &PgConnection) -> Vec<(String, String)> {
    use rust_web::schema::users::dsl::*;
    use rust_web::schema::articles::dsl::*;
    users.inner_join(...
}
Then we join it to the articles table on the particular ID field. Because both of our tables have id fields, we have to qualify the name to specify the user's ID field.
fn fetch_all_names_and_titles(conn: &PgConnection) -> Vec<(String, String)> {
    use rust_web::schema::users::dsl::*;
    use rust_web::schema::articles::dsl::*;
    users.inner_join(
        articles.on(author_id.eq(rust_web::schema::users::dsl::id)))...
}
Finally, we load our query to get the results. But notice that we use select and ask only for the name of the user and the title of the article. This gives us our final values, so that each element is a tuple of two strings.
fn fetch_all_names_and_titles(conn: &PgConnection) -> Vec<(String, String)> {
    use rust_web::schema::users::dsl::*;
    use rust_web::schema::articles::dsl::*;
    users.inner_join(
        articles.on(author_id.eq(rust_web::schema::users::dsl::id)))
        .select((name, title))
        .load(conn)
        .expect("Error on join query!")
}
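To tie things together, a main function exercising these queries might look something like this (a sketch assuming the functions above are in scope):

```rust
fn main() {
    let conn = create_connection();

    // Insert a user and get back the entity with its generated key.
    let user = create_user(&conn);
    println!("Created user with id {}", user.id);

    // Fetch that user's articles, ordered by title.
    let articles = fetch_articles(&conn, user.id);
    println!("User has {} articles", articles.len());

    // Run the join query and print each (name, title) pair.
    for (author, title) in fetch_all_names_and_titles(&conn) {
        println!("{} wrote {}", author, title);
    }
}
```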
Conclusion
For my part, I prefer the functionality provided by Persistent in Haskell. But Diesel's method of providing a separate CLI to handle migrations is very cool as well. And it's good to see more sophisticated functionality in this relatively new language. Either way, you're now ready to move on to part 3 of the series, where we'll start making a web server!
If you're still new to Rust, we have some more beginner-related material. Read our Rust Beginners Series or better yet, watch our Rust Video Tutorial!