Fang

Background job processing library for Rust.

Currently, it uses Postgres to store its state, but more backends will be supported in the future.

Installation

  1. Add this to your Cargo.toml:

    [dependencies]
    fang = "0.2"
    typetag = "0.1"
    serde = { version = "1.0", features = ["derive"] }

  2. Create the fang_tasks table in the Postgres database. The migration can be found in the migrations directory.

Usage

Defining a job

Every job should implement the fang::Runnable trait, which fang uses to execute it.

    use fang::Error;
    use fang::Runnable;
    use serde::{Deserialize, Serialize};

    // A job is a plain struct; its fields are serialized and stored in the
    // fang_tasks table when the job is enqueued.
    #[derive(Serialize, Deserialize)]
    struct Job {
        pub number: u16,
    }

    #[typetag::serde]
    impl Runnable for Job {
        fn run(&self) -> Result<(), Error> {
            // run is executed by a worker when it picks up the task.
            println!("the number is {}", self.number);

            Ok(())
        }
    }

As you can see in the example above, the trait implementation is annotated with the #[typetag::serde] attribute, which is used to deserialize the job.
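
Because the job is serialized and deserialized through the Runnable trait object, you can define as many job types as you need; the type tag added by #[typetag::serde] lets fang restore each stored record into the right implementation. Below is a minimal sketch with a second, hypothetical job type (SendEmail is not part of fang, only an illustration):

    use fang::Error;
    use fang::Runnable;
    use serde::{Deserialize, Serialize};

    // A second, hypothetical job type; it can be stored and executed alongside Job
    // because #[typetag::serde] tags each serialized record with its concrete type.
    #[derive(Serialize, Deserialize)]
    struct SendEmail {
        pub to: String,
        pub body: String,
    }

    #[typetag::serde]
    impl Runnable for SendEmail {
        fn run(&self) -> Result<(), Error> {
            println!("sending an email to {}", self.to);

            Ok(())
        }
    }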

Enqueuing a job

To enqueue a job, use Postgres::enqueue_task:

    use fang::Postgres;

    ...

    Postgres::enqueue_task(&Job { number: 10 }).unwrap();
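
Since enqueue_task takes the job through the Runnable trait, other job types defined as in the previous section are enqueued with the same call. Continuing with the hypothetical SendEmail job sketched above:

    use fang::Postgres;

    // SendEmail is the hypothetical job type from the sketch above;
    // any Runnable job is enqueued through the same call.
    Postgres::enqueue_task(&SendEmail {
        to: "user@example.com".to_string(),
        body: "hello from fang".to_string(),
    })
    .unwrap();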

Starting workers

Every worker runs in a separate thread; if a worker panics, it is restarted.

Use WorkerPool to start workers. It accepts two parameters: the number of workers and a prefix for the worker thread names.

    use fang::WorkerPool;

    WorkerPool::new(10, "sync".to_string()).start();
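
Putting the pieces together, a minimal program could look like the sketch below. It assumes a reachable Postgres database with the fang_tasks table and a connection configured for your environment; since the workers run in background threads, the sketch keeps the main thread alive with std::thread::park() (use whatever fits your application).

    use fang::Error;
    use fang::Postgres;
    use fang::Runnable;
    use fang::WorkerPool;
    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize)]
    struct Job {
        pub number: u16,
    }

    #[typetag::serde]
    impl Runnable for Job {
        fn run(&self) -> Result<(), Error> {
            println!("the number is {}", self.number);

            Ok(())
        }
    }

    fn main() {
        // Enqueue a job; it is serialized and stored in the fang_tasks table.
        Postgres::enqueue_task(&Job { number: 10 }).unwrap();

        // Start 10 workers whose threads are prefixed with "sync".
        WorkerPool::new(10, "sync".to_string()).start();

        // The workers run in background threads; keep the main thread alive
        // so they can keep picking up tasks.
        std::thread::park();
    }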

Potential/future features

  • Extendable/new backends
  • Workers for specific types of tasks. Currently, each worker executes all types of tasks
  • Configurable DB record retention. Currently, fang doesn't remove tasks from the DB.
  • Retries
  • Scheduled tasks

Contributing

  1. Fork it!
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

Author

Ayrat Badykov (@ayrat555)