05 — Storage (R2)

Upload, download, and manage files in your R2 bucket. S3-compatible API, public or private access.

Bucket Details

Your bucket is isolated to your account:

Bucket Name: drklynx-m-bucket-abc123def456
Region: Auto (Cloudflare's global edge)
Access: Private by default

You interact with R2 via:

  • AWS SDK (Node.js, Python, etc.) — S3-compatible
  • Wrangler CLI
  • HTTP API
  • From your Pages functions

Connection Details

You'll need:

Access Key ID: (in your credentials email)
Secret Access Key: (in your credentials email)
Endpoint: https://abc123def456.r2.cloudflarestorage.com

Node.js Example

Install AWS SDK

npm install @aws-sdk/client-s3

Upload a File

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import fs from "fs";

const s3 = new S3Client({
  region: "auto",
  endpoint: "https://abc123def456.r2.cloudflarestorage.com",
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY,
    secretAccessKey: process.env.R2_SECRET_KEY,
  },
});

const fileContent = fs.readFileSync("./myfile.pdf");

await s3.send(
  new PutObjectCommand({
    Bucket: "drklynx-m-bucket-abc123def456",
    Key: "uploads/myfile.pdf",
    Body: fileContent,
    ContentType: "application/pdf",
  })
);

console.log("File uploaded!");

Download a File

import { GetObjectCommand } from "@aws-sdk/client-s3";

const response = await s3.send(
  new GetObjectCommand({
    Bucket: "drklynx-m-bucket-abc123def456",
    Key: "uploads/myfile.pdf",
  })
);

const buffer = await response.Body.transformToByteArray();
fs.writeFileSync("./downloaded.pdf", buffer);

List Files

import { ListObjectsV2Command } from "@aws-sdk/client-s3";

const response = await s3.send(
  new ListObjectsV2Command({
    Bucket: "drklynx-m-bucket-abc123def456",
    Prefix: "uploads/",
  })
);

// Contents is undefined when nothing matches the prefix
(response.Contents ?? []).forEach(obj => console.log(obj.Key));

Delete a File

import { DeleteObjectCommand } from "@aws-sdk/client-s3";

await s3.send(
  new DeleteObjectCommand({
    Bucket: "drklynx-m-bucket-abc123def456",
    Key: "uploads/myfile.pdf",
  })
);

Python Example

Install Boto3

pip install boto3

Upload, Download, List & Delete

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://abc123def456.r2.cloudflarestorage.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
    region_name="auto",
)

# Upload
s3.upload_file("myfile.pdf", "drklynx-m-bucket-abc123def456", "uploads/myfile.pdf")

# Download
s3.download_file("drklynx-m-bucket-abc123def456", "uploads/myfile.pdf", "downloaded.pdf")

# List
response = s3.list_objects_v2(Bucket="drklynx-m-bucket-abc123def456", Prefix="uploads/")
for obj in response.get("Contents", []):
    print(obj["Key"])

# Delete
s3.delete_object(Bucket="drklynx-m-bucket-abc123def456", Key="uploads/myfile.pdf")

Public vs. Private Objects

By default, objects are private — only you can download them.

Make an Object Public (Read-Only)

When uploading, set ACL: "public-read":

await s3.send(
  new PutObjectCommand({
    Bucket: "drklynx-m-bucket-abc123def456",
    Key: "public/image.jpg",
    Body: fileContent,
    ACL: "public-read",
  })
);

The file is now accessible via:

https://abc123def456.r2.cloudflarestorage.com/drklynx-m-bucket-abc123def456/public/image.jpg

Make It Even Easier with a Custom Domain

(Coming soon. For now, use the R2 endpoint directly.)

Lifecycle Rules (Auto-Delete Old Files)

Delete objects automatically after N days:

wrangler r2 object create-bucket-lifecycle drklynx-m-bucket-abc123def456 \
  --expiration-days 30

Objects in this bucket will be deleted 30 days after upload.

Quotas

Tier         Soft Limit   Hard Limit
Starter      50 GB        100 GB
Pro          250 GB       500 GB
Business     1 TB         2 TB
Enterprise   10 TB        20 TB

Check your usage:

wrangler r2 bucket info drklynx-m-bucket-abc123def456

From Your Pages Functions

You can upload/download directly from your Pages functions:

// functions/api/upload.js
export async function onRequest({ request, env }) {
  const formData = await request.formData();
  const file = formData.get("file");
  if (!(file instanceof File)) {
    return new Response("Missing file field", { status: 400 });
  }
  const buffer = await file.arrayBuffer();

  await env.BUCKET.put(`uploads/${file.name}`, buffer);

  return new Response(JSON.stringify({ success: true }), {
    headers: { "Content-Type": "application/json" },
  });
}

In your wrangler.toml:

[[r2_buckets]]
binding = "BUCKET"
bucket_name = "drklynx-m-bucket-abc123def456"
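The same BUCKET binding can serve files back out. A hypothetical companion function (the functions/api/download.js path and the key query parameter are illustrative choices, not platform requirements):

```javascript
// functions/api/download.js — stream an object from the bucket back to the client
export async function onRequest({ request, env }) {
  const key = new URL(request.url).searchParams.get("key");
  const object = await env.BUCKET.get(key);

  if (object === null) {
    return new Response("Not found", { status: 404 });
  }

  return new Response(object.body, {
    headers: {
      "Content-Type": object.httpMetadata?.contentType ?? "application/octet-stream",
    },
  });
}
```

Streaming object.body directly avoids buffering the whole file in memory inside the function.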

Troubleshooting

"Access Denied" or "Invalid Credentials"

  • Check your Access Key ID and Secret Key in your credentials email
  • Make sure they're passed as environment variables (e.g. process.env.R2_ACCESS_KEY and process.env.R2_SECRET_KEY)

"Bucket Not Found"

Make sure the bucket name in your code matches exactly: drklynx-m-bucket-abc123def456

"Timeout Uploading Large Files"

R2 accepts files up to 5 GB in a single upload. For larger files, use multipart uploads (built into the AWS SDK). If uploads still time out, reduce the part size.

Pricing Note

R2 charges per-operation (not per-GB stored). Cost is typically $0.015 per million requests. Deletions are free.
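As a back-of-the-envelope check under the stated rate (the 10-million-request workload is hypothetical):

```javascript
// Estimate monthly request cost at the quoted $0.015 per million requests.
const RATE_PER_MILLION = 0.015;
const monthlyRequests = 10_000_000; // hypothetical workload
const cost = (monthlyRequests / 1_000_000) * RATE_PER_MILLION;
console.log(`~$${cost.toFixed(2)}/month`); // → "~$0.15/month"
```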


What to Read Next


Getting help: Email support@drklynx.com or check status.drklynx.com.