# Background Jobs
This crate provides the tooling required to run processes asynchronously from a usually
synchronous application. The standard example is a web service, where certain things need to
be processed, but processing them while a user is waiting for their browser to respond might
not be the best experience.
- [Read the documentation on docs.rs](https://docs.rs/background-jobs)
- [Find the crate on crates.io](https://crates.io/crates/background-jobs)
- [Join the discussion on Matrix](https://matrix.to/#/!vZKoAKLpHaFIWjRxpT:asonix.dog?via=asonix.dog)
### Usage
#### Add Background Jobs to your project
```toml
[dependencies]
actix = "0.8"
background-jobs = "0.6.0"
failure = "0.1"
futures = "0.1"
log = "0.4"
serde = "1.0"
serde_derive = "1.0"
sled-extensions = "0.1"
```
#### To get started with Background Jobs, first you should define a job.
Jobs are a combination of the data required to perform an operation, and the logic of that
operation. They implement the `Job`, `serde::Serialize`, and `serde::DeserializeOwned` traits.
```rust
use background_jobs::Job;
use failure::Error;
use futures::{future::IntoFuture, Future};
use log::info;
use serde_derive::{Deserialize, Serialize};

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct MyJob {
    some_usize: usize,
    other_usize: usize,
}

impl MyJob {
    pub fn new(some_usize: usize, other_usize: usize) -> Self {
        MyJob {
            some_usize,
            other_usize,
        }
    }
}

impl Job for MyJob {
    type Processor = MyProcessor; // We will define this later
    type State = ();

    fn run(self, _: ()) -> Box<dyn Future<Item = (), Error = Error> + Send> {
        info!("args: {:?}", self);

        Box::new(Ok(()).into_future())
    }
}
```
The run method for a job takes an additional argument, which is the state the job expects to
use. The state for all jobs defined in an application must be the same. By default, the state
is an empty tuple, but it's likely you'll want to pass in some Actix address, or something
else.
Let's re-define the job to care about some application state.
```rust
#[derive(Clone, Debug)]
pub struct MyState {
    pub app_name: String,
}

impl MyState {
    pub fn new(app_name: &str) -> Self {
        MyState {
            app_name: app_name.to_owned(),
        }
    }
}

impl Job for MyJob {
    type Processor = MyProcessor; // We will define this later
    type State = MyState;

    fn run(self, state: MyState) -> Box<dyn Future<Item = (), Error = Error> + Send> {
        info!("{}: args, {:?}", state.app_name, self);

        Box::new(Ok(()).into_future())
    }
}
```
#### Next, define a Processor.
Processors are types that define default attributes for jobs, and contain some logic used
internally to perform the job. Processors must implement `Processor` and `Clone`.
```rust
use background_jobs::{Backoff, MaxRetries, Processor};

const DEFAULT_QUEUE: &'static str = "default";

#[derive(Clone, Debug)]
pub struct MyProcessor;

impl Processor for MyProcessor {
    // The kind of job this processor should execute
    type Job = MyJob;

    // The name of the processor. It is super important that each processor has a unique name,
    // because otherwise one processor will overwrite another processor when they're being
    // registered.
    const NAME: &'static str = "MyProcessor";

    // The queue that this processor belongs to
    //
    // Workers have the option to subscribe to specific queues, so this is important to
    // determine which worker will call the processor
    //
    // Jobs can optionally override the queue they're spawned on
    const QUEUE: &'static str = DEFAULT_QUEUE;

    // The number of times background-jobs should try to retry a job before giving up
    //
    // Jobs can optionally override this value
    const MAX_RETRIES: MaxRetries = MaxRetries::Count(1);

    // The logic to determine how often to retry this job if it fails
    //
    // Jobs can optionally override this value
    const BACKOFF_STRATEGY: Backoff = Backoff::Exponential(2);
}
```
#### Running jobs
By default, this crate ships with the `background-jobs-actix` feature enabled. This uses the
`background-jobs-actix` crate to spin up a Server and Workers, and provides a mechanism for
spawning new jobs.

`background-jobs-actix` on its own doesn't have a mechanism for storing worker state. You can
implement the `Storage` trait from `background-jobs-core` yourself, use the in-memory store
provided in the `background-jobs-core` crate, or use the `background-jobs-sled-storage` crate
for a [Sled](https://github.com/spacejam/sled)-backed jobs store.

With that out of the way, back to the examples:
##### Main
```rust
use actix::System;
use background_jobs::{ServerConfig, WorkerConfig};
use failure::Error;

fn main() -> Result<(), Error> {
    // First set up the Actix System to ensure we have a runtime to spawn jobs on.
    let sys = System::new("my-actix-system");

    // Set up our Storage
    // For this example, we use the default in-memory storage mechanism
    use background_jobs::memory_storage::Storage;
    let storage = Storage::new();

    /*
    // Optionally, a storage backend using the Sled database is provided
    use sled::Db;
    use background_jobs::sled_storage::Storage;

    let db = Db::open("my-sled-db")?;
    let storage = Storage::new(db)?;
    */

    // Start the application server. This guards access to the jobs store
    let queue_handle = ServerConfig::new(storage).thread_count(8).start();

    // Configure and start our workers
    WorkerConfig::new(move || MyState::new("My App"))
        .register(MyProcessor)
        .set_processor_count(DEFAULT_QUEUE, 16)
        .start(queue_handle.clone());

    // Queue our jobs
    queue_handle.queue(MyJob::new(1, 2))?;
    queue_handle.queue(MyJob::new(3, 4))?;
    queue_handle.queue(MyJob::new(5, 6))?;

    // Block on Actix
    sys.run()?;

    Ok(())
}
```
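##### Spawning jobs from a web handler
Since the introduction frames this crate around web services, here is a minimal sketch of how
the queue handle could be shared with an actix-web application so that HTTP handlers spawn
jobs instead of doing the work inline. This is an illustration, not one of the crate's
documented examples: it assumes an additional `actix-web = "1.0"` dependency, and that the
handle returned by `ServerConfig::start()` is the crate's `QueueHandle` type (the type name is
not shown in the snippets above). `MyJob`, `MyState`, `MyProcessor`, and `DEFAULT_QUEUE` are
the definitions from the previous sections.
```rust
use actix::System;
use actix_web::{web, App, HttpResponse, HttpServer};
use background_jobs::{memory_storage::Storage, QueueHandle, ServerConfig, WorkerConfig};
use failure::Error;

// Queue the job and respond immediately; a worker picks it up in the background.
fn spawn_job(handle: web::Data<QueueHandle>) -> HttpResponse {
    match handle.queue(MyJob::new(1, 2)) {
        Ok(_) => HttpResponse::Accepted().finish(),
        Err(_) => HttpResponse::InternalServerError().finish(),
    }
}

fn main() -> Result<(), Error> {
    let sys = System::new("my-actix-system");

    // Storage, server, and worker setup, exactly as in the Main example above
    let storage = Storage::new();
    let queue_handle = ServerConfig::new(storage).thread_count(8).start();

    WorkerConfig::new(move || MyState::new("My App"))
        .register(MyProcessor)
        .set_processor_count(DEFAULT_QUEUE, 16)
        .start(queue_handle.clone());

    // Hand each server worker a clone of the queue handle via application data
    HttpServer::new(move || {
        App::new()
            .data(queue_handle.clone())
            .route("/spawn", web::post().to(spawn_job))
    })
    .bind("127.0.0.1:8080")?
    .start();

    sys.run()?;

    Ok(())
}
```
Because the handle is cloneable (as `queue_handle.clone()` in the Main example suggests), each
request handler can hold its own copy, keeping job spawning decoupled from job execution.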
##### Complete Example
For the complete example project, see [the examples folder](https://git.asonix.dog/Aardwolf/background-jobs/src/branch/master/examples/actix-example)
#### Bringing your own server/worker implementation
If you want to create your own jobs processor based on this idea, you can depend on the
`background-jobs-core` crate, which provides the `Processor` and `Job` traits, as well as some
other useful types for implementing a jobs processor and job store.
### Contributing
Feel free to open issues for any problems you find. Please note that any contributed code will be licensed under the GPLv3.
### License
This work is licensed under the Cooperative Software License. This is not a Free Software
License, but may be considered a "source-available License." For most hobbyists, self-employed
developers, worker-owned companies, and cooperatives, this software can be used in most
projects so long as this software is distributed under the terms of the CSL. For more
information, see the provided LICENSE file. If none exists, the license can be found online
[here](https://lynnesbian.space/csl/). If you are a free software project and wish to use this
software under the terms of the GNU Affero General Public License, please contact me at
[asonix@asonix.dog](mailto:asonix@asonix.dog) and we can sort that out. If you wish to use this
project under any other license, especially in proprietary software, the answer is likely no.