# Cloud Jobs

Run jobs in Google Cloud Run Jobs containers.

## Overview
Cloud deployment is designed for monorepos. The runner uses `isolate-package` to automatically isolate your service with its internal workspace dependencies into a standalone deployable package — no manual bundling required.

The runner:

- Isolates the service and its workspace dependencies automatically
- Generates Dockerfiles
- Builds container images with content-based caching
- Creates and updates Cloud Run Jobs via `gcloud`
- Passes arguments and manages execution

No Terraform, Pulumi, or manual GCP configuration needed.
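Conceptually, the steps above can be sketched as a small orchestration. This is an illustrative sketch, not the tool's actual internals; every function name here (`isolatePackage`, `buildAndPush`, and so on) is hypothetical:

```ts
// Hypothetical pipeline interface, injected so the flow is testable in isolation.
type Pipeline = {
  isolatePackage: () => Promise<string>;          // returns isolated package dir
  hashDir: (dir: string) => Promise<string>;      // content hash used as image tag
  imageExists: (tag: string) => Promise<boolean>; // Artifact Registry lookup
  buildAndPush: (dir: string, tag: string) => Promise<void>;
  upsertJob: (tag: string) => Promise<void>;      // gcloud run jobs create/update
  execute: (args: string[]) => Promise<void>;
};

export async function runCloudJob(
  p: Pipeline,
  args: string[],
): Promise<"cached" | "deployed"> {
  const dir = await p.isolatePackage();
  const tag = await p.hashDir(dir);
  let outcome: "cached" | "deployed";
  if (await p.imageExists(tag)) {
    outcome = "cached"; // unchanged code: skip build and deploy entirely
  } else {
    await p.buildAndPush(dir, tag);
    await p.upsertJob(tag);
    outcome = "deployed";
  }
  await p.execute(args);
  return outcome;
}
```

The key design point is that execution is decoupled from deployment: deploy only happens when the content hash produces a tag that is not yet in the registry.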
## Setup

### 1. Configure Cloud Settings

Add the `cloud` section to your `job-runner.config.ts`:
```ts
import { defineRunnerConfig, defineRunnerEnv } from "gcp-job-runner";

export default defineRunnerConfig({
  environments: {
    stag: defineRunnerEnv({
      project: "my-project",
      secrets: ["API_KEY"],
    }),
    prod: defineRunnerEnv({
      project: "my-project-prod",
      secrets: ["API_KEY"],
    }),
  },
  cloud: {
    name: "my-service-jobs",
  },
});
```

### 2. Add Build Entry for Jobs
Include job files in your `tsdown` config:

```ts
import { defineConfig } from "tsdown";

export default defineConfig({
  entry: ["src/index.ts", "src/jobs/**/*.ts"],
  format: ["esm"],
  target: "node22",
});
```

## Usage
### `cloud run` — Run a job (auto-deploy if changed)

```sh
job cloud run stag process-data --batch-size 100
```

`cloud run` is smart about deployment. It:
- Builds TypeScript (unless `--no-build`)
- Isolates the workspace package
- Hashes the content to generate an image tag
- Checks if the image already exists in Artifact Registry
  - If the image exists — skips deploy, logs "No changes detected"
  - If the image is new — builds and pushes the image, creates/updates the Cloud Run Job
- Executes the job
This means repeated runs with unchanged code skip the entire deploy step, making execution much faster.
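The caching step hinges on a deterministic content hash. A minimal sketch of how such a tag could be derived, assuming a path-to-contents map of the isolated package (this is an assumption about the approach, not the tool's exact algorithm):

```ts
import { createHash } from "node:crypto";

// Derive a content-based image tag: hash every file path and its contents in
// sorted order so the result is independent of traversal order, then truncate
// the digest to a short, registry-friendly tag.
export function contentTag(files: Map<string, string>): string {
  const hash = createHash("sha256");
  for (const path of [...files.keys()].sort()) {
    hash.update(path);
    hash.update("\0"); // separator so ("ab","c") and ("a","bc") differ
    hash.update(files.get(path)!);
    hash.update("\0");
  }
  return hash.digest("hex").slice(0, 16);
}
```

Because identical inputs always produce the same tag, a rerun with unchanged code finds its image already in the registry and skips straight to execution.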
```sh
# Auto-deploy if changed, then execute
job cloud run stag process-data --batch-size 100

# Fire and forget (don't wait for completion)
job cloud run stag process-data --batch-size 100 --async

# Interactive mode
job cloud run stag -i
```

### `cloud deploy` — Deploy only

```sh
job cloud deploy stag
```

This always builds the image and creates/updates the Cloud Run Job, regardless of whether the image changed. Useful for updating job configuration (env vars, secrets, resource limits) without executing.
## Log Streaming

When you run a cloud job without `--async`, application logs from the Cloud Run Job execution are streamed to your terminal in real time via Cloud Logging. This gives you the same visibility as local execution — `log.info(...)` output appears directly in your terminal.
The CLI:
- Starts the execution asynchronously
- Opens a live tail on Cloud Logging filtered to the specific execution
- Polls execution status every 5 seconds
- On completion, waits a few seconds for remaining logs to arrive, then exits
If you press Ctrl+C during streaming, the execution continues in the cloud. The CLI prints a message with the Cloud Console log URL so you can follow along there.
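The status-polling half of this behavior can be sketched as a small loop. `getStatus` and `sleepMs` are hypothetical stand-ins; the real CLI would back them with `gcloud` or the Cloud Run Admin API and tail Cloud Logging in parallel:

```ts
type Status = "RUNNING" | "SUCCEEDED" | "FAILED";

// Poll an execution until it leaves the RUNNING state. Dependencies are
// injected so the loop can be exercised without touching GCP.
export async function waitForExecution(
  getStatus: () => Promise<Status>,
  sleepMs: (ms: number) => Promise<void>,
  intervalMs = 5000, // matches the 5-second poll interval described above
): Promise<Status> {
  while (true) {
    const status = await getStatus();
    if (status !== "RUNNING") return status;
    await sleepMs(intervalMs);
  }
}
```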
Log entries are formatted with timestamps and color-coded severity levels:
- ERROR / CRITICAL — red
- WARNING — yellow
- INFO — cyan
The `--async` flag skips streaming entirely and exits immediately after starting the execution.
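The color-coding could be implemented with a simple severity-to-ANSI map; the exact escape codes and line format here are illustrative, not necessarily what the CLI emits:

```ts
// Severity -> ANSI color, mirroring the mapping described above.
const COLORS: Record<string, string> = {
  ERROR: "\x1b[31m",    // red
  CRITICAL: "\x1b[31m", // red
  WARNING: "\x1b[33m",  // yellow
  INFO: "\x1b[36m",     // cyan
};
const RESET = "\x1b[0m";

// Format one log entry with a timestamp and a color-coded severity label.
// Unknown severities pass through uncolored.
export function formatEntry(
  timestamp: string,
  severity: string,
  message: string,
): string {
  const color = COLORS[severity] ?? "";
  return `${timestamp} ${color}${severity}${color ? RESET : ""} ${message}`;
}
```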
## Parallel Tasks
Cloud Run Jobs can run multiple container instances (tasks) in parallel within a single execution. Each task receives a unique index, allowing you to partition work across containers.
### CLI Flags
```sh
# Run with 10 parallel tasks
job cloud run stag process-data --tasks 10

# Run 10 tasks with at most 5 running concurrently
job cloud run stag process-data --tasks 10 --parallelism 5
```

- `--tasks N` — Number of tasks for this execution. Each task runs the same container image with its own `CLOUD_RUN_TASK_INDEX`.
- `--parallelism N` — Maximum number of tasks running concurrently. This sets the job resource default via `gcloud run jobs create/update`. Use `0` for no limit.
You can also set the default parallelism in config:

```ts
cloud: {
  name: "my-service-jobs",
  resources: {
    parallelism: 5,
  },
},
```

### Using Task Context in Handlers
Import `getTaskContext()` to read the current task's index and the total count:

```ts
import { defineJob, getTaskContext } from "gcp-job-runner";

export default defineJob({
  description: "Process data in parallel shards",
  handler: async () => {
    const { taskIndex, taskCount } = getTaskContext();
    console.log(`Task ${taskIndex + 1} of ${taskCount}`);

    // Partition work based on task index
    const allItems = await fetchItems();
    const chunkSize = Math.ceil(allItems.length / taskCount);
    const chunk = allItems.slice(
      taskIndex * chunkSize,
      (taskIndex + 1) * chunkSize,
    );

    for (const item of chunk) {
      await processItem(item);
    }
  },
});
```

`getTaskContext()` returns `{ taskIndex: 0, taskCount: 1 }` when running locally or as a single-task cloud execution, so handlers work without changes in both environments.
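Cloud Run Jobs exposes the task index and count to each container through the `CLOUD_RUN_TASK_INDEX` and `CLOUD_RUN_TASK_COUNT` environment variables, so a `getTaskContext`-style helper might read them roughly like this (a sketch; the library's actual internals may differ):

```ts
// Read Cloud Run Jobs task coordinates from an environment map
// (pass process.env in a real handler). Missing variables fall back to a
// single-task default, which is what a local run looks like.
export function readTaskContext(env: Record<string, string | undefined>) {
  return {
    taskIndex: Number(env.CLOUD_RUN_TASK_INDEX ?? 0),
    taskCount: Number(env.CLOUD_RUN_TASK_COUNT ?? 1),
  };
}
```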
## Cloud Config Options

```ts
cloud: {
  name: "my-service-jobs",       // Required: Cloud Run Job name
  region: "us-central1",         // Optional, default: "us-central1"
  artifactRegistry: "cloud-run", // Optional, default: "cloud-run"
  serviceAccount: "sa@proj.iam.gserviceaccount.com", // Optional
  resources: {
    memory: "1Gi",    // Optional, default: "512Mi"
    cpu: "2",         // Optional, default: "1"
    timeout: 7200,    // Optional, default: 86400 seconds (24 hours)
    parallelism: 5,   // Optional, max concurrent tasks
  },
  network: {
    name: "default",               // VPC network name
    subnet: "default",             // Optional, VPC subnet name
    egress: "private-ranges-only", // Optional, default: "private-ranges-only"
  },
}
```

## VPC Network Access
If your jobs need to access resources on a private network (e.g., a Redis instance or internal database), configure Direct VPC egress with the `network` option:

```ts
cloud: {
  name: "my-service-jobs",
  network: {
    name: "default",
    subnet: "default",
  },
},
```

This passes `--network`, `--subnet`, and `--vpc-egress` flags to `gcloud run jobs create/update`, enabling your Cloud Run Job containers to reach private IPs within the VPC.
| Option | Description | Default |
|---|---|---|
| `name` | VPC network name | (required) |
| `subnet` | VPC subnet name | (not set) |
| `egress` | VPC egress mode | `"private-ranges-only"` |

The `egress` option controls which traffic is routed through the VPC:

- `"private-ranges-only"` — only traffic to private IP ranges (RFC 1918) goes through the VPC
- `"all-traffic"` — all outbound traffic is routed through the VPC
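Mapping this config onto the `gcloud` flags mentioned above is mechanical. A hypothetical builder, for illustration only:

```ts
type NetworkConfig = {
  name: string;
  subnet?: string;
  egress?: "private-ranges-only" | "all-traffic";
};

// Translate the network config block into gcloud run jobs create/update flags.
// Optional fields are omitted; egress falls back to its documented default.
export function networkFlags(net: NetworkConfig): string[] {
  const flags = [`--network=${net.name}`];
  if (net.subnet) flags.push(`--subnet=${net.subnet}`);
  flags.push(`--vpc-egress=${net.egress ?? "private-ranges-only"}`);
  return flags;
}
```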
## Example Job

```ts
import { z } from "zod";
import { defineJob } from "gcp-job-runner";

const ArgsSchema = z.object({
  batchSize: z.number().default(50).describe("Number of items per batch"),
});

export default defineJob({
  description: "Process data in batches",
  schema: ArgsSchema,
  handler: async (args) => {
    console.log(`Processing with batch size: ${args.batchSize}`);
    // Your job logic here
  },
});
```

## Secrets
Secrets are loaded from GCP Secret Manager — the same secrets are used for local and cloud execution:

```ts
environments: {
  stag: defineRunnerEnv({
    project: "my-project",
    secrets: ["API_KEY", "DATABASE_URL"],
  }),
}
```

## Content-Based Caching
Images are tagged with a hash of the isolated package directory. When running `cloud run`, the CLI checks whether the image already exists in Artifact Registry:

- Image exists — no rebuild, no deploy, straight to execution
- Image is new — build, push, create/update Cloud Run Job, then execute

Use `cloud deploy` to force a deploy regardless of whether the image changed (useful for updating env vars or resource limits).
## One Image, Many Jobs
A single Docker image contains all jobs for a service. The job name and arguments are passed at execution time, not at build time. This means:
- Running different jobs does not trigger a rebuild
- Passing different arguments does not trigger a rebuild
- Only source code changes produce a new content hash and trigger a build + deploy
In practice, after the first deploy you can `cloud run` as many different jobs with as many different arguments as you want — each run starts almost instantly because there's nothing to build.
## Prerequisites

- `gcloud` CLI authenticated with appropriate permissions
- Artifact Registry repository (default: `cloud-run`)
- GCP project with Cloud Run and Cloud Build APIs enabled