
Cloudflare D1 setup

One of the greatest joys of building this new site has been developing with Cloudflare products. They are truly cloud-native and provide immense value for developers.

Why D1?

Cloudflare D1 has been on my list for a while. While NoSQL is very popular, I enjoy working with relational databases when they are the right tool for the job. The D1 announcement provides a comprehensive overview of the product. Here are the highlights that stood out to me:

D1 sample project

My blog is straightforward, using markdown files as the datastore. However, I need something more robust where relational data would be beneficial. I've started designing my own shirts, and would enjoy an excuse to talk about and organize them. Creating a shirt design catalog using D1 as the datastore sounds like a perfect sample project.

a lineup of t-shirts

Here is my initial requirements list:

Setting up the database

The database setup steps are pretty simple thanks to the Wrangler toolkit.

1️⃣ To create the database, run wrangler d1 create <db_name>. The D1 resource is now visible in the Cloudflare dashboard under "Workers & Pages > D1". Bonus points for the URL path matching the menu structure - https://dash.cloudflare.com/[Account_ID]/workers/d1.

2️⃣ Once the database exists, add a reference in the "wrangler.toml" file:

[[d1_databases]]
binding = "DB" # customizable
database_name = "[DB_NAME]"
database_id = "[DB_ID]"

3️⃣ Update types using npm run cf-typegen. Now TypeScript can reference the database via the binding name.
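
The generated declaration file (typically worker-configuration.d.ts when the cf-typegen script runs wrangler types) exposes the binding on the Env interface. As a rough sketch of the expected shape, not the exact generated output:

// Approximate shape of the generated types - the binding name matches wrangler.toml
interface Env {
  DB: D1Database;
}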

4️⃣ Populate the database with an SQL script using Wrangler.

wrangler d1 execute <db_name> --local --file=./schema.sql

DROP TABLE IF EXISTS Shirts;

CREATE TABLE IF NOT EXISTS Shirts (
  id INTEGER PRIMARY KEY,
  name TEXT,
  createdDate TEXT
);

INSERT INTO Shirts (id, name, createdDate)
VALUES (1, 'Rainbow Deciduous Tee', '2024-03-02 21:16:35.000');

This is a key concept not to miss. Using the flag --remote will update the database in the cloud. Using the --local flag will update files in your project under a ".wrangler" folder.

There is no way to call the remote database from your local application. Local and remote are always separate, and never shall the two mix. This is a product design principle ensuring that developers don't accidentally alter or destroy production data.

#️⃣ For a detailed step-by-step walkthrough, check out the D1 Get Started tutorial

Using the database with Remix

1️⃣ With the database in place and the TypeScript binding set up, data can be retrieved using a Remix loader function. As an upgrade from writing raw queries, I'm using the workers-qb library.

import { json, type LoaderFunctionArgs } from "@remix-run/cloudflare";
import { D1QB, OrderTypes } from "workers-qb";

// Shape of a row in the Shirts table (matches the schema above)
type Shirt = {
  id: number;
  name: string;
  createdDate: string;
};

export const loader = async ({ context }: LoaderFunctionArgs) => {
  const { env } = context.cloudflare;
  const qb = new D1QB(env.DB);

  const query = await qb
    .fetchAll<Shirt>({
      tableName: "shirts",
      orderBy: { createdDate: OrderTypes.DESC },
    })
    .execute();

  const shirts = query.results;

  return json(shirts);
};
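
For comparison, here's a rough sketch of the same query written against the raw D1 binding instead of workers-qb. This isn't what my loader uses, but it shows what the query builder is saving me from writing by hand:

// Rough equivalent using D1's prepared-statement API directly (sketch, not the actual loader code)
const { results } = await env.DB
  .prepare("SELECT * FROM Shirts ORDER BY createdDate DESC")
  .all<Shirt>();

return json(results);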

A key concept not to miss here is typing the arguments with LoaderFunctionArgs. It's required for type inference to flow from the loader through to the components that consume the loaded data.

2️⃣ Consuming the data in the route component follows the normal loader pattern:

import { useLoaderData } from "@remix-run/react";

export default function ShirtsIndex() {
  const shirts = useLoaderData<typeof loader>();

  return (
    <>
      <h1>Shirts</h1>
      {shirts.map((shirt) => (
        <div key={shirt.id}>
          {shirt.name}
        </div>
      ))}
    </>
  );
}

Summary

That's it! Once I worked through those two key concepts mentioned above, the rest was pretty straightforward. The other pages came together quickly as my existing SQL skills kicked in to structure and query the data.

Next, I'll be digging into more details on how the data is being retrieved and stored. I have a background in SQL Server and stored procedures, so at this point it still feels a little wrong, but more likely that's just how this technology works. On the plus side, my site is not a critical application, which makes it a great place to experiment and learn.

a shirt detail page

Epilogue - Blog needs based on this post

Oof 😮‍💨, this post introduced a lot of new, complex requirements for authoring and layout components that I do not support yet. There are many future posts to write as I work through the following issue requirements: