[![CI Status](https://github.com/Diggsey/sqlxmq/workflows/CI/badge.svg)](https://github.com/Diggsey/sqlxmq/actions?query=workflow%3ACI)
[![Documentation](https://docs.rs/sqlxmq/badge.svg)](https://docs.rs/sqlxmq)
[![crates.io](https://img.shields.io/crates/v/sqlxmq.svg)](https://crates.io/crates/sqlxmq)
<!-- cargo-sync-readme start -->
# sqlxmq

A job queue built on `sqlx` and `PostgreSQL`.

This library allows a CRUD application to run background jobs without complicating its
deployment. The only runtime dependency is `PostgreSQL`, so this is ideal for applications
already using a `PostgreSQL` database.

Although using a SQL database as a job queue means compromising on the latency of
delivered jobs, it avoids several show-stopping issues present in ordinary job queues.
With most other job queues, in-flight jobs are state that is not covered by normal
database backups. Even if jobs _are_ backed up, there is no way to restore both
a database and a job queue to a consistent point in time without manually
resolving conflicts.

By storing jobs in the database, existing backup procedures will store a perfectly
consistent state of both in-flight jobs and persistent data. Additionally, jobs can
be spawned and completed as part of other transactions, making it easy to write correct
application code.

Leveraging the power of `PostgreSQL`, this job queue offers several features not
present in other job queues.
# Features

- **Send/receive multiple jobs at once.**

  This reduces the number of queries to the database.

- **Send jobs to be executed at a future date and time.**

  Avoids the need for a separate scheduling system.

- **Reliable delivery of jobs.**

- **Automatic retries with exponential backoff.**

  Number of retries and initial backoff parameters are configurable.

- **Transactional sending of jobs.**

  Avoids sending spurious jobs if a transaction is rolled back (a sketch
  follows this list).

- **Transactional completion of jobs.**

  If all side-effects of a job are updates to the database, this provides
  true exactly-once execution of jobs.

- **Transactional check-pointing of jobs.**

  Long-running jobs can check-point their state to avoid having to restart
  from the beginning if there is a failure: the next retry can continue
  from the last check-point.

- **Opt-in strictly ordered job delivery.**

  Jobs within the same channel will be processed strictly in-order
  if this option is enabled for the job.

- **Fair job delivery.**

  A channel with a lot of jobs ready to run will not starve a channel with
  fewer jobs.

- **Opt-in two-phase commit.**

  This is particularly useful on an ordered channel where a position can be
  "reserved" in the job order, but not committed until later.

- **JSON and/or binary payloads.**

  Jobs can use whichever is most convenient.

- **Automatic keep-alive of jobs.**

  Long-running jobs will automatically be "kept alive" to prevent them being
  retried whilst they're still ongoing.

- **Concurrency limits.**

  Specify the minimum and maximum number of concurrent jobs each runner
  should handle.

- **Built-in job registry via an attribute macro.**

  Jobs can be easily registered with a runner, and default configuration
  specified on a per-job basis.

- **Implicit channels.**

  Channels are implicitly created and destroyed when jobs are sent and
  processed, so no setup is required.

- **Channel groups.**

  Easily subscribe to multiple channels at once, thanks to the separation of
  channel name and channel arguments.

- **NOTIFY-based polling.**

  This saves resources when few jobs are being processed.
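
As a sketch of transactional sending, using the spawning API shown later in
this README (the `sign_up` flow and `send_welcome_email` job are hypothetical,
and it assumes `spawn` accepts any sqlx executor, so a transaction can stand
in for a pool):

```rust
use sqlx::{Pool, Postgres};

// Hypothetical signup flow; `send_welcome_email` stands for a job defined
// with the `#[job]` macro, as shown under "Defining jobs" below.
async fn sign_up(pool: &Pool<Postgres>) -> Result<(), Box<dyn std::error::Error>> {
    let mut tx = pool.begin().await?;

    // ... insert the new user as part of `tx` ...

    // Spawn the job on the same transaction: if `tx` rolls back, the job
    // is never sent; if `tx` commits, the job becomes visible to runners.
    send_welcome_email
        .builder()
        .set_json("new.user@example.com")?
        .spawn(&mut tx)
        .await?;

    tx.commit().await?;
    Ok(())
}
```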
# Getting started
## Database schema
This crate expects certain database tables and stored procedures to exist.
You can copy the migration files from this crate into your own migrations
folder.
All database items created by this crate are prefixed with `mq`, so as not
to conflict with your own schema.
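
For example, a sketch assuming a checkout of this repository next to your
project (the source path is hypothetical) and that you apply migrations with
`sqlx-cli`:

```bash
# Copy the crate's migrations in with your own, then apply them.
cp ../sqlxmq/migrations/* migrations/
sqlx migrate run
```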
## Defining jobs
The first step is to define a function to be run on the job queue.
```rust
use std::error::Error;
use sqlxmq::{job, CurrentJob};

// Arguments to the `#[job]` attribute allow setting default job options.
#[job(channel_name = "foo")]
async fn example_job(
    // The first argument should always be the current job.
    mut current_job: CurrentJob,
    // Additional arguments are optional, but can be used to access context
    // provided via [`JobRegistry::set_context`].
    message: &'static str,
) -> Result<(), Box<dyn Error + Send + Sync + 'static>> {
    // Decode a JSON payload
    let who: Option<String> = current_job.json()?;

    // Do some work
    println!("{}, {}!", message, who.as_deref().unwrap_or("world"));

    // Mark the job as complete
    current_job.complete().await?;

    Ok(())
}
```
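
Other defaults can be set the same way. The following sketch combines several
of the attribute arguments documented for the `#[job]` macro (`retries`,
`backoff_secs` and `ordered`); verify the names against the crate version you
are using:

```rust
use sqlxmq::{job, CurrentJob};

// Retry up to 5 times with a 1-second initial backoff, and process jobs
// on this channel strictly in order.
#[job(channel_name = "email", retries = 5, backoff_secs = 1, ordered)]
async fn ordered_email_job(
    mut current_job: CurrentJob,
) -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
    // ... send the email ...
    current_job.complete().await?;
    Ok(())
}
```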
## Listening for jobs
Next we need to create a job runner: this is what listens for new jobs
and executes them.
```rust,no_run
use std::error::Error;
use sqlxmq::JobRegistry;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // You'll need to provide a Postgres connection pool.
    let pool = connect_to_db().await?;

    // Construct a job registry from our single job.
    let mut registry = JobRegistry::new(&[example_job]);
    // Here is where you can configure the registry
    // registry.set_error_handler(...)

    // And add context
    registry.set_context("Hello");

    let runner = registry
        // Create a job runner using the connection pool.
        .runner(&pool)
        // Here is where you can configure the job runner
        // Aim to keep 10-20 jobs running at a time.
        .set_concurrency(10, 20)
        // Start the job runner in the background.
        .run()
        .await?;

    // The job runner will continue listening and running
    // jobs until `runner` is dropped.
    Ok(())
}
```
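
Since dropping `runner` stops the job runner, a long-lived service should
hold on to the handle until shutdown. One possible sketch, placed at the end
of `main` above (it assumes tokio's `signal` feature is enabled):

```rust
// Wait for Ctrl+C, then stop the runner by dropping its handle.
tokio::signal::ctrl_c().await?;
drop(runner);
```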
## Spawning a job
The final step is to actually run a job.
```rust
example_job.builder()
    // This is where we can override job configuration
    .set_channel_name("bar")
    .set_json("John")?
    .spawn(&pool)
    .await?;
```
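
If a job's side-effects are all database updates, it can complete itself as
part of the same transaction to get the exactly-once behaviour described
under "Features". A sketch (it assumes `CurrentJob::pool` and
`CurrentJob::complete_with_transaction` as described in the crate
documentation; check the exact signatures for your version):

```rust
use sqlxmq::{job, CurrentJob};

#[job(channel_name = "transfers")]
async fn transfer_funds(
    mut current_job: CurrentJob,
) -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
    let mut tx = current_job.pool().begin().await?;

    // ... perform the job's database updates on `tx` ...

    // Committing the side-effects and completing the job atomically means
    // a retry after a crash either re-runs everything or nothing.
    current_job.complete_with_transaction(tx).await?;
    Ok(())
}
```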
<!-- cargo-sync-readme end -->
## Note on README

Most of the readme is automatically copied from the crate documentation by
[cargo-sync-readme](https://crates.io/crates/cargo-sync-readme).
This way the readme is always in sync with the docs and examples are tested.

So if you find a part of the readme you'd like to change between the
`<!-- cargo-sync-readme start -->` and `<!-- cargo-sync-readme end -->`
markers, don't edit `README.md` directly, but rather change the documentation
on top of `src/lib.rs` and then synchronize the readme with:
```bash
cargo sync-readme
```
(make sure the subcommand is installed first):

```bash
cargo install cargo-sync-readme
```