Mirror of https://git.deuxfleurs.fr/Deuxfleurs/garage.git, synced 2024-11-27 18:41:00 +00:00

Merge pull request 'ReplicationMode -> ConsistencyMode+ReplicationFactor' (#750) from yuka/garage:split-consistency-mode into next-0.10

Reviewed-on: https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/750
Commit: 20c0b4ffb2

16 changed files with 245 additions and 167 deletions
@@ -116,7 +116,7 @@ metadata_dir = "/var/lib/garage/meta"
 data_dir = "/var/lib/garage/data"
 db_engine = "lmdb"
 
-replication_mode = "3"
+replication_factor = 3
 
 compression_level = 2
 
@@ -12,7 +12,7 @@ An introduction to building cluster layouts can be found in the [production depl
 In Garage, all of the data that can be stored in a given cluster is divided
 into slices which we call *partitions*. Each partition is stored by
 one or several nodes in the cluster
-(see [`replication_mode`](@/documentation/reference-manual/configuration.md#replication_mode)).
+(see [`replication_factor`](@/documentation/reference-manual/configuration.md#replication_factor)).
 The layout determines the correspondence between these partitions,
 which exist on a logical level, and actual storage nodes.
 
@@ -59,7 +59,7 @@ metadata_dir = "/tmp/meta"
 data_dir = "/tmp/data"
 db_engine = "lmdb"
 
-replication_mode = "none"
+replication_factor = 1
 
 rpc_bind_addr = "[::]:3901"
 rpc_public_addr = "127.0.0.1:3901"
@@ -8,7 +8,8 @@ weight = 20
 Here is an example `garage.toml` configuration file that illustrates all of the possible options:
 
 ```toml
-replication_mode = "3"
+replication_factor = 3
+consistency_mode = "consistent"
 
 metadata_dir = "/var/lib/garage/meta"
 data_dir = "/var/lib/garage/data"
@@ -90,7 +91,8 @@ Top-level configuration options:
 [`lmdb_map_size`](#lmdb_map_size),
 [`metadata_dir`](#metadata_dir),
 [`metadata_fsync`](#metadata_fsync),
-[`replication_mode`](#replication_mode),
+[`replication_factor`](#replication_factor),
+[`consistency_mode`](#consistency_mode),
 [`rpc_bind_addr`](#rpc_bind_addr),
 [`rpc_bind_outgoing`](#rpc_bind_outgoing),
 [`rpc_public_addr`](#rpc_public_addr),
@@ -133,11 +135,12 @@ The `[admin]` section:
 
 ### Top-level configuration options
 
-#### `replication_mode` {#replication_mode}
+#### `replication_factor` {#replication_factor}
 
-Garage supports the following replication modes:
+The replication factor can be any positive integer smaller than or equal to the node count in your cluster.
+The chosen replication factor has a big impact on the cluster's failure tolerance and performance characteristics.
 
-- `none` or `1`: data stored on Garage is stored on a single node. There is no
+- `1`: data stored on Garage is stored on a single node. There is no
   redundancy, and data will be unavailable as soon as one node fails or its
   network is disconnected. Do not use this for anything else than test
   deployments.
@@ -148,17 +151,6 @@ Garage supports the following replication modes:
   before losing data. Data remains available in read-only mode when one node is
   down, but write operations will fail.
 
-- `2-dangerous`: a variant of mode `2`, where written objects are written to
-  the second replica asynchronously. This means that Garage will return `200
-  OK` to a PutObject request before the second copy is fully written (or even
-  before it even starts being written). This means that data can more easily
-  be lost if the node crashes before a second copy can be completed. This
-  also means that written objects might not be visible immediately in read
-  operations. In other words, this mode severely breaks the consistency and
-  durability guarantees of standard Garage cluster operation. Benefits of
-  this mode: you can still write to your cluster when one node is
-  unavailable.
-
 - `3`: data stored on Garage will be stored on three different nodes, if
   possible each in a different zone. Garage tolerates two node failures, or
   several node failures but in no more than two zones (in a deployment with at
@@ -166,55 +158,84 @@ Garage supports the following replication modes:
   or node failures are only in a single zone, reading and writing data to
   Garage can continue normally.
 
-- `3-degraded`: a variant of replication mode `3`, that lowers the read
-  quorum to `1`, to allow you to read data from your cluster when several
-  nodes (or nodes in several zones) are unavailable. In this mode, Garage
-  does not provide read-after-write consistency anymore. The write quorum is
-  still 2, ensuring that data successfully written to Garage is stored on at
-  least two nodes.
-
-- `3-dangerous`: a variant of replication mode `3` that lowers both the read
-  and write quorums to `1`, to allow you to both read and write to your
-  cluster when several nodes (or nodes in several zones) are unavailable. It
-  is the least consistent mode of operation proposed by Garage, and also one
-  that should probably never be used.
+- `5`, `7`, ...: When setting the replication factor above 3, it is most useful to
+  choose an uneven value, since for every two copies added, one more node can fail
+  before losing the ability to write and read to the cluster.
 
 Note that in modes `2` and `3`,
 if at least the same number of zones are available, an arbitrary number of failures in
 any given zone is tolerated as copies of data will be spread over several zones.
 
-**Make sure `replication_mode` is the same in the configuration files of all nodes.
+**Make sure `replication_factor` is the same in the configuration files of all nodes.
 Never run a Garage cluster where that is not the case.**
 
+It is technically possible to change the replication factor although it's a
+dangerous operation that is not officially supported. This requires you to
+delete the existing cluster layout and create a new layout from scratch,
+meaning that a full rebalancing of your cluster's data will be needed. To do
+it, shut down your cluster entirely, delete the `cluster_layout` files in the
+meta directories of all your nodes, update all your configuration files with
+the new `replication_factor` parameter, restart your cluster, and then create a
+new layout with all the nodes you want to keep. Rebalancing data will take
+some time, and data might temporarily appear unavailable to your users.
+It is recommended to shut down public access to the cluster while rebalancing
+is in progress. In theory, no data should be lost as rebalancing is a
+routine operation for Garage, although we cannot guarantee you that everything
+will go right in such an extreme scenario.
+
+#### `consistency_mode` {#consistency_mode}
+
+The consistency mode setting determines the read and write behaviour of your cluster.
+
+- `consistent`: The default setting. This is what the paragraph above describes.
+  The read and write quorums will be determined so that read-after-write consistency
+  is guaranteed.
+- `degraded`: Lowers the read quorum to `1`, to allow you to read data from
+  your cluster when several nodes (or nodes in several zones) are unavailable.
+  In this mode, Garage does not provide read-after-write consistency anymore.
+  The write quorum stays the same as in the `consistent` mode, ensuring that
+  data successfully written to Garage is stored on multiple nodes (depending
+  on the replication factor).
+- `dangerous`: This mode lowers both the read and write quorums to `1`, to
+  allow you to both read and write to your cluster when several nodes (or
+  nodes in several zones) are unavailable. It is the least consistent mode of
+  operation proposed by Garage, and also one that should probably never be used.
+
+Changing the `consistency_mode` between modes while leaving the `replication_factor` untouched
+(e.g. setting your node's `consistency_mode` to `degraded` when it was previously unset, or from
+`dangerous` to `consistent`), can be done easily by just changing the `consistency_mode`
+parameter in your config files and restarting all your Garage nodes.
+
+The consistency mode can be used together with various replication factors, to achieve
+a wide range of read and write characteristics. Some examples:
+
+- Replication factor `2`, consistency mode `degraded`: While this mode
+  technically exists, its properties are the same as with consistency mode `consistent`,
+  since the read quorum with replication factor `2`, consistency mode `consistent` is already 1.
+
+- Replication factor `2`, consistency mode `dangerous`: written objects are written to
+  the second replica asynchronously. This means that Garage will return `200
+  OK` to a PutObject request before the second copy is fully written (or even
+  before it even starts being written). This means that data can more easily
+  be lost if the node crashes before a second copy can be completed. This
+  also means that written objects might not be visible immediately in read
+  operations. In other words, this configuration severely breaks the consistency and
+  durability guarantees of standard Garage cluster operation. Benefits of
+  this configuration: you can still write to your cluster when one node is
+  unavailable.
+
 The quorums associated with each replication mode are described below:
 
-| `replication_mode` | Number of replicas | Write quorum | Read quorum | Read-after-write consistency? |
-| ------------------ | ------------------ | ------------ | ----------- | ----------------------------- |
-| `none` or `1`      | 1                  | 1            | 1           | yes                           |
-| `2`                | 2                  | 2            | 1           | yes                           |
-| `2-dangerous`      | 2                  | 1            | 1           | NO                            |
-| `3`                | 3                  | 2            | 2           | yes                           |
-| `3-degraded`       | 3                  | 2            | 1           | NO                            |
-| `3-dangerous`      | 3                  | 1            | 1           | NO                            |
-
-Changing the `replication_mode` between modes with the same number of replicas
-(e.g. from `3` to `3-degraded`, or from `2-dangerous` to `2`), can be done easily by
-just changing the `replication_mode` parameter in your config files and restarting all your
-Garage nodes.
-
-It is also technically possible to change the replication mode to a mode with a
-different numbers of replicas, although it's a dangerous operation that is not
-officially supported. This requires you to delete the existing cluster layout
-and create a new layout from scratch, meaning that a full rebalancing of your
-cluster's data will be needed. To do it, shut down your cluster entirely,
-delete the `custer_layout` files in the meta directories of all your nodes,
-update all your configuration files with the new `replication_mode` parameter,
-restart your cluster, and then create a new layout with all the nodes you want
-to keep. Rebalancing data will take some time, and data might temporarily
-appear unavailable to your users. It is recommended to shut down public access
-to the cluster while rebalancing is in progress. In theory, no data should be
-lost as rebalancing is a routine operation for Garage, although we cannot
-guarantee you that everything will go right in such an extreme scenario.
+| `consistency_mode` | `replication_factor` | Write quorum | Read quorum | Read-after-write consistency? |
+| ------------------ | -------------------- | ------------ | ----------- | ----------------------------- |
+| `consistent`       | 1                    | 1            | 1           | yes                           |
+| `consistent`       | 2                    | 2            | 1           | yes                           |
+| `dangerous`        | 2                    | 1            | 1           | NO                            |
+| `consistent`       | 3                    | 2            | 2           | yes                           |
+| `degraded`         | 3                    | 2            | 1           | NO                            |
+| `dangerous`        | 3                    | 1            | 1           | NO                            |
 
 #### `metadata_dir` {#metadata_dir}
 
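The quorum arithmetic behind this table is compact enough to state directly. The following is a minimal sketch (illustrative names, not the upstream API) that reproduces the rows above:

```rust
/// Sketch of the quorum rules from the table above. `Consistency` and the
/// two functions are illustrative stand-ins, not Garage's actual types.
#[derive(Clone, Copy)]
enum Consistency {
    Consistent,
    Degraded,
    Dangerous,
}

/// Read quorum: 1 when stale reads are acceptable, otherwise ceil(n/2)
/// so that every read set intersects every write set.
fn read_quorum(n: usize, c: Consistency) -> usize {
    match c {
        Consistency::Consistent => n.div_ceil(2),
        Consistency::Degraded | Consistency::Dangerous => 1,
    }
}

/// Write quorum: 1 in `dangerous` mode, otherwise n + 1 - ceil(n/2),
/// which preserves read_quorum + write_quorum > n in `consistent` mode.
fn write_quorum(n: usize, c: Consistency) -> usize {
    match c {
        Consistency::Dangerous => 1,
        _ => n + 1 - n.div_ceil(2),
    }
}

fn main() {
    use Consistency::*;
    // The `replication_factor = 3` rows of the table:
    assert_eq!((write_quorum(3, Consistent), read_quorum(3, Consistent)), (2, 2));
    assert_eq!((write_quorum(3, Degraded), read_quorum(3, Degraded)), (2, 1));
    assert_eq!((write_quorum(3, Dangerous), read_quorum(3, Dangerous)), (1, 1));
}
```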
@@ -39,10 +39,10 @@ Read about cluster layout management [here](@/documentation/operations/layout.md
 
 ### Several replication modes
 
-Garage supports a variety of replication modes, with 1 copy, 2 copies or 3 copies of your data,
+Garage supports a variety of replication modes, with a configurable replica count,
 and with various levels of consistency, in order to adapt to a variety of usage scenarios.
-Read our reference page on [supported replication modes](@/documentation/reference-manual/configuration.md#replication_mode)
-to select the replication mode best suited to your use case (hint: in most cases, `replication_mode = "3"` is what you want).
+Read our reference page on [supported replication modes](@/documentation/reference-manual/configuration.md#replication_factor)
+to select the replication mode best suited to your use case (hint: in most cases, `replication_factor = 3` is what you want).
 
 ### Compression and deduplication
 
@@ -163,7 +163,7 @@ mod tests {
     r#"
     metadata_dir = "/tmp/garage/meta"
     data_dir = "/tmp/garage/data"
-    replication_mode = "3"
+    replication_factor = 3
     rpc_bind_addr = "[::]:3901"
    rpc_secret_file = "{}"
 
@@ -185,7 +185,7 @@ mod tests {
     r#"
     metadata_dir = "/tmp/garage/meta"
     data_dir = "/tmp/garage/data"
-    replication_mode = "3"
+    replication_factor = 3
     rpc_bind_addr = "[::]:3901"
     rpc_secret_file = "{}"
     allow_world_readable_secrets = true
@@ -296,7 +296,7 @@ mod tests {
     r#"
     metadata_dir = "/tmp/garage/meta"
     data_dir = "/tmp/garage/data"
-    replication_mode = "3"
+    replication_factor = 3
     rpc_bind_addr = "[::]:3901"
     rpc_secret = "dummy"
     rpc_secret_file = "dummy"
@@ -54,7 +54,7 @@ metadata_dir = "{path}/meta"
 data_dir = "{path}/data"
 db_engine = "lmdb"
 
-replication_mode = "1"
+replication_factor = 1
 
 rpc_bind_addr = "127.0.0.1:{rpc_port}"
 rpc_public_addr = "127.0.0.1:{rpc_port}"
@@ -9,7 +9,7 @@ use garage_util::config::*;
 use garage_util::error::*;
 use garage_util::persister::PersisterShared;
 
-use garage_rpc::replication_mode::ReplicationMode;
+use garage_rpc::replication_mode::*;
 use garage_rpc::system::System;
 
 use garage_block::manager::*;
@@ -39,8 +39,8 @@ pub struct Garage {
     /// The set of background variables that can be viewed/modified at runtime
     pub bg_vars: vars::BgVars,
 
-    /// The replication mode of this cluster
-    pub replication_mode: ReplicationMode,
+    /// The replication factor of this cluster
+    pub replication_factor: ReplicationFactor,
 
     /// The local database
     pub db: db::Db,
@@ -222,27 +222,26 @@ impl Garage {
         .and_then(|x| NetworkKey::from_slice(&x))
         .ok_or_message("Invalid RPC secret key")?;
 
-        let replication_mode = ReplicationMode::parse(&config.replication_mode)
-            .ok_or_message("Invalid replication_mode in config file.")?;
+        let (replication_factor, consistency_mode) = parse_replication_mode(&config)?;
 
         info!("Initialize background variable system...");
         let mut bg_vars = vars::BgVars::new();
 
         info!("Initialize membership management system...");
-        let system = System::new(network_key, replication_mode, &config)?;
+        let system = System::new(network_key, replication_factor, consistency_mode, &config)?;
 
         let data_rep_param = TableShardedReplication {
             system: system.clone(),
-            replication_factor: replication_mode.replication_factor(),
-            write_quorum: replication_mode.write_quorum(),
+            replication_factor: replication_factor.into(),
+            write_quorum: replication_factor.write_quorum(consistency_mode),
             read_quorum: 1,
         };
 
         let meta_rep_param = TableShardedReplication {
             system: system.clone(),
-            replication_factor: replication_mode.replication_factor(),
-            write_quorum: replication_mode.write_quorum(),
-            read_quorum: replication_mode.read_quorum(),
+            replication_factor: replication_factor.into(),
+            write_quorum: replication_factor.write_quorum(consistency_mode),
+            read_quorum: replication_factor.read_quorum(consistency_mode),
         };
 
         let control_rep_param = TableFullReplication {
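A point worth calling out in this hunk: the data and metadata tables deliberately get different read quorums. The sketch below (an illustrative struct, assuming `replication_factor = 3` and `consistency_mode = "consistent"`) spells out the resulting values. Data blocks are content-addressed and immutable, so a single replica can safely serve a read; mutable table metadata needs a full read quorum.

```rust
/// Illustrative stand-in for the sharded-replication parameters above;
/// values assume replication_factor = 3, consistency_mode = "consistent".
struct ShardedReplication {
    replication_factor: usize,
    write_quorum: usize,
    read_quorum: usize,
}

fn main() {
    // Immutable, content-addressed data blocks: one replica suffices for reads.
    let data = ShardedReplication { replication_factor: 3, write_quorum: 2, read_quorum: 1 };
    // Mutable CRDT metadata: reads must also reach a quorum.
    let meta = ShardedReplication { replication_factor: 3, write_quorum: 2, read_quorum: 2 };

    assert_eq!(data.replication_factor, meta.replication_factor);
    assert_eq!(data.write_quorum, meta.write_quorum);
    assert!(data.read_quorum < meta.read_quorum);
}
```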
@@ -338,7 +337,7 @@ impl Garage {
         Ok(Arc::new(Self {
             config,
             bg_vars,
-            replication_mode,
+            replication_factor,
             db,
             system,
             block_manager,
@@ -7,7 +7,7 @@ use serde::{Deserialize, Serialize};
 use garage_util::data::*;
 
 use super::*;
-use crate::replication_mode::ReplicationMode;
+use crate::replication_mode::*;
 
 #[derive(Debug, Clone, Serialize, Deserialize, Default, PartialEq, Eq)]
 pub struct RpcLayoutDigest {
@@ -29,7 +29,8 @@ pub struct SyncLayoutDigest {
 }
 
 pub struct LayoutHelper {
-    replication_mode: ReplicationMode,
+    replication_factor: ReplicationFactor,
+    consistency_mode: ConsistencyMode,
     layout: Option<LayoutHistory>,
 
     // cached values
@@ -57,7 +58,8 @@ impl Deref for LayoutHelper {
 
 impl LayoutHelper {
     pub fn new(
-        replication_mode: ReplicationMode,
+        replication_factor: ReplicationFactor,
+        consistency_mode: ConsistencyMode,
         mut layout: LayoutHistory,
         mut ack_lock: HashMap<u64, AtomicUsize>,
     ) -> Self {
@@ -66,7 +68,7 @@ impl LayoutHelper {
         // correct and we have rapid access to important values such as
         // the layout versions to use when reading to ensure consistency.
 
-        if !replication_mode.is_read_after_write_consistent() {
+        if consistency_mode != ConsistencyMode::Consistent {
             // Fast path for when no consistency is required.
             // In this case we only need to keep the last version of the layout,
             // we don't care about coordinating stuff in the cluster.
@@ -103,7 +105,7 @@ impl LayoutHelper {
         // This value is calculated using quorums to allow progress even
         // if not all nodes have successfully completed a sync.
         let sync_map_min =
-            layout.calculate_sync_map_min_with_quorum(replication_mode, &all_nongateway_nodes);
+            layout.calculate_sync_map_min_with_quorum(replication_factor, &all_nongateway_nodes);
 
         let trackers_hash = layout.calculate_trackers_hash();
         let staging_hash = layout.calculate_staging_hash();
@@ -114,7 +116,8 @@ impl LayoutHelper {
             .or_insert(AtomicUsize::new(0));
 
         LayoutHelper {
-            replication_mode,
+            replication_factor,
+            consistency_mode,
             layout: Some(layout),
             ack_map_min,
             sync_map_min,
@@ -139,7 +142,8 @@ impl LayoutHelper {
         let changed = f(self.layout.as_mut().unwrap());
         if changed {
             *self = Self::new(
-                self.replication_mode,
+                self.replication_factor,
+                self.consistency_mode,
                 self.layout.take().unwrap(),
                 std::mem::take(&mut self.ack_lock),
             );
@@ -6,11 +6,11 @@ use garage_util::encode::nonversioned_encode;
 use garage_util::error::*;
 
 use super::*;
-use crate::replication_mode::ReplicationMode;
+use crate::replication_mode::*;
 
 impl LayoutHistory {
-    pub fn new(replication_factor: usize) -> Self {
-        let version = LayoutVersion::new(replication_factor);
+    pub fn new(replication_factor: ReplicationFactor) -> Self {
+        let version = LayoutVersion::new(replication_factor.into());
 
         let staging = LayoutStaging {
             parameters: Lww::<LayoutParameters>::new(version.parameters),
@@ -119,7 +119,7 @@ impl LayoutHistory {
 
     pub(crate) fn calculate_sync_map_min_with_quorum(
         &self,
-        replication_mode: ReplicationMode,
+        replication_factor: ReplicationFactor,
         all_nongateway_nodes: &[Uuid],
     ) -> u64 {
         // This function calculates the minimum layout version from which
@@ -133,7 +133,7 @@ impl LayoutHistory {
             return self.current().version;
         }
 
-        let quorum = replication_mode.write_quorum();
+        let quorum = replication_factor.write_quorum(ConsistencyMode::Consistent);
 
         let min_version = self.min_stored();
         let global_min = self
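Note that the sync quorum here is always computed with `ConsistencyMode::Consistent`, regardless of the cluster's configured runtime mode: relaxing read/write consistency must not weaken layout-version progress tracking. A small property sketch against this commit's API (assuming `garage_rpc` is available as a dependency):

```rust
use garage_rpc::replication_mode::{ConsistencyMode, ReplicationFactor};

fn main() {
    for n in 1..=7 {
        let rf = ReplicationFactor::new(n).unwrap();
        // The `consistent` write quorum used for sync is never smaller than
        // the write quorum of a relaxed runtime mode...
        assert!(
            rf.write_quorum(ConsistencyMode::Consistent)
                >= rf.write_quorum(ConsistencyMode::Dangerous)
        );
        // ...and `degraded` only relaxes reads, so its write quorum matches.
        assert_eq!(
            rf.write_quorum(ConsistencyMode::Consistent),
            rf.write_quorum(ConsistencyMode::Degraded)
        );
    }
}
```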
@@ -14,13 +14,13 @@ use garage_util::error::*;
 use garage_util::persister::Persister;
 
 use super::*;
-use crate::replication_mode::ReplicationMode;
+use crate::replication_mode::*;
 use crate::rpc_helper::*;
 use crate::system::*;
 
 pub struct LayoutManager {
     node_id: Uuid,
-    replication_mode: ReplicationMode,
+    replication_factor: ReplicationFactor,
     persist_cluster_layout: Persister<LayoutHistory>,
 
     layout: Arc<RwLock<LayoutHelper>>,
@@ -38,20 +38,19 @@ impl LayoutManager {
         node_id: NodeID,
         system_endpoint: Arc<Endpoint<SystemRpc, System>>,
         peering: Arc<PeeringManager>,
-        replication_mode: ReplicationMode,
+        replication_factor: ReplicationFactor,
+        consistency_mode: ConsistencyMode,
     ) -> Result<Arc<Self>, Error> {
-        let replication_factor = replication_mode.replication_factor();
-
         let persist_cluster_layout: Persister<LayoutHistory> =
             Persister::new(&config.metadata_dir, "cluster_layout");
 
         let cluster_layout = match persist_cluster_layout.load() {
             Ok(x) => {
-                if x.current().replication_factor != replication_mode.replication_factor() {
+                if x.current().replication_factor != replication_factor.replication_factor() {
                     return Err(Error::Message(format!(
                         "Previous cluster layout has replication factor {}, which is different than the one specified in the config file ({}). The previous cluster layout can be purged, if you know what you are doing, simply by deleting the `cluster_layout` file in your metadata directory.",
                         x.current().replication_factor,
-                        replication_factor
+                        replication_factor.replication_factor()
                     )));
                 }
                 x
@@ -65,8 +64,12 @@ impl LayoutManager {
             }
         };
 
-        let mut cluster_layout =
-            LayoutHelper::new(replication_mode, cluster_layout, Default::default());
+        let mut cluster_layout = LayoutHelper::new(
+            replication_factor,
+            consistency_mode,
+            cluster_layout,
+            Default::default(),
+        );
         cluster_layout.update_trackers(node_id.into());
 
         let layout = Arc::new(RwLock::new(cluster_layout));
@@ -81,7 +84,7 @@ impl LayoutManager {
 
         Ok(Arc::new(Self {
             node_id: node_id.into(),
-            replication_mode,
+            replication_factor,
             persist_cluster_layout,
             layout,
             change_notify,
@@ -295,11 +298,11 @@ impl LayoutManager {
             adv.update_trackers
         );
 
-        if adv.current().replication_factor != self.replication_mode.replication_factor() {
+        if adv.current().replication_factor != self.replication_factor.replication_factor() {
             let msg = format!(
                 "Received a cluster layout from another node with replication factor {}, which is different from what we have in our configuration ({}). Discarding the cluster layout we received.",
                 adv.current().replication_factor,
-                self.replication_mode.replication_factor()
+                self.replication_factor.replication_factor()
             );
             error!("{}", msg);
             return Err(Error::Message(msg));
@@ -5,6 +5,7 @@ use garage_util::crdt::Crdt;
 use garage_util::error::*;
 
 use crate::layout::*;
+use crate::replication_mode::ReplicationFactor;
 
 // This function checks that the partition size S computed is at least better than the
 // one given by a very naive algorithm. To do so, we try to run the naive algorithm
@@ -120,7 +121,7 @@ fn test_assignment() {
     let mut node_capacity_vec = vec![4000, 1000, 2000];
     let mut node_zone_vec = vec!["A", "B", "C"];
 
-    let mut cl = LayoutHistory::new(3);
+    let mut cl = LayoutHistory::new(ReplicationFactor::new(3).unwrap());
     update_layout(&mut cl, &node_capacity_vec, &node_zone_vec, 3);
     let v = cl.current().version;
     let (mut cl, msg) = cl.apply_staged_changes(Some(v + 1)).unwrap();
@@ -1,57 +1,94 @@
-#[derive(Clone, Copy)]
-pub enum ReplicationMode {
-    None,
-    TwoWay,
-    TwoWayDangerous,
-    ThreeWay,
-    ThreeWayDegraded,
-    ThreeWayDangerous,
-}
-
-impl ReplicationMode {
-    pub fn parse(v: &str) -> Option<Self> {
-        match v {
-            "none" | "1" => Some(Self::None),
-            "2" => Some(Self::TwoWay),
-            "2-dangerous" => Some(Self::TwoWayDangerous),
-            "3" => Some(Self::ThreeWay),
-            "3-degraded" => Some(Self::ThreeWayDegraded),
-            "3-dangerous" => Some(Self::ThreeWayDangerous),
-            _ => None,
-        }
-    }
-
-    pub fn replication_factor(&self) -> usize {
-        match self {
-            Self::None => 1,
-            Self::TwoWay | Self::TwoWayDangerous => 2,
-            Self::ThreeWay | Self::ThreeWayDegraded | Self::ThreeWayDangerous => 3,
-        }
-    }
-
-    pub fn read_quorum(&self) -> usize {
-        match self {
-            Self::None => 1,
-            Self::TwoWay | Self::TwoWayDangerous => 1,
-            Self::ThreeWay => 2,
-            Self::ThreeWayDegraded | Self::ThreeWayDangerous => 1,
-        }
-    }
-
-    pub fn write_quorum(&self) -> usize {
-        match self {
-            Self::None => 1,
-            Self::TwoWay => 2,
-            Self::TwoWayDangerous => 1,
-            Self::ThreeWay | Self::ThreeWayDegraded => 2,
-            Self::ThreeWayDangerous => 1,
-        }
-    }
-
-    pub fn is_read_after_write_consistent(&self) -> bool {
-        match self {
-            Self::None | Self::TwoWay | Self::ThreeWay => true,
-            _ => false,
-        }
-    }
-}
+use garage_util::config::Config;
+use garage_util::crdt::AutoCrdt;
+use garage_util::error::*;
+use serde::{Deserialize, Serialize};
+
+#[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize)]
+#[serde(transparent)]
+pub struct ReplicationFactor(usize);
+
+#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Default, Serialize, Deserialize)]
+#[serde(rename_all = "lowercase")]
+pub enum ConsistencyMode {
+    /// Read- and Write-quorum are 1
+    Dangerous,
+    /// Read-quorum is 1
+    Degraded,
+    /// Read- and Write-quorum are determined for read-after-write consistency
+    #[default]
+    Consistent,
+}
+
+impl ConsistencyMode {
+    pub fn parse(s: &str) -> Option<Self> {
+        serde_json::from_value(serde_json::Value::String(s.to_string())).ok()
+    }
+}
+
+impl AutoCrdt for ConsistencyMode {
+    const WARN_IF_DIFFERENT: bool = true;
+}
+
+impl ReplicationFactor {
+    pub fn new(replication_factor: usize) -> Option<Self> {
+        if replication_factor < 1 {
+            None
+        } else {
+            Some(Self(replication_factor))
+        }
+    }
+
+    pub fn replication_factor(&self) -> usize {
+        self.0
+    }
+
+    pub fn read_quorum(&self, consistency_mode: ConsistencyMode) -> usize {
+        match consistency_mode {
+            ConsistencyMode::Dangerous | ConsistencyMode::Degraded => 1,
+            ConsistencyMode::Consistent => self.replication_factor().div_ceil(2),
+        }
+    }
+
+    pub fn write_quorum(&self, consistency_mode: ConsistencyMode) -> usize {
+        match consistency_mode {
+            ConsistencyMode::Dangerous => 1,
+            ConsistencyMode::Degraded | ConsistencyMode::Consistent => {
+                (self.replication_factor() + 1) - self.read_quorum(ConsistencyMode::Consistent)
+            }
+        }
+    }
+}
+
+impl std::convert::From<ReplicationFactor> for usize {
+    fn from(replication_factor: ReplicationFactor) -> usize {
+        replication_factor.0
+    }
+}
+
+pub fn parse_replication_mode(
+    config: &Config,
+) -> Result<(ReplicationFactor, ConsistencyMode), Error> {
+    match (&config.replication_mode, config.replication_factor, config.consistency_mode.as_str()) {
+        (Some(replication_mode), None, "consistent") => {
+            tracing::warn!("Legacy config option replication_mode in use. Please migrate to replication_factor and consistency_mode");
+            let parsed_replication_mode = match replication_mode.as_str() {
+                "1" | "none" => Some((ReplicationFactor(1), ConsistencyMode::Consistent)),
+                "2" => Some((ReplicationFactor(2), ConsistencyMode::Consistent)),
+                "2-dangerous" => Some((ReplicationFactor(2), ConsistencyMode::Dangerous)),
+                "3" => Some((ReplicationFactor(3), ConsistencyMode::Consistent)),
+                "3-degraded" => Some((ReplicationFactor(3), ConsistencyMode::Degraded)),
+                "3-dangerous" => Some((ReplicationFactor(3), ConsistencyMode::Dangerous)),
+                _ => None,
+            };
+            Some(parsed_replication_mode.ok_or_message("Invalid replication_mode in config file.")?)
+        },
+        (None, Some(replication_factor), consistency_mode) => {
+            let replication_factor = ReplicationFactor::new(replication_factor)
+                .ok_or_message("Invalid replication_factor in config file.")?;
+            let consistency_mode = ConsistencyMode::parse(consistency_mode)
+                .ok_or_message("Invalid consistency_mode in config file.")?;
+            Some((replication_factor, consistency_mode))
+        }
+        _ => None,
+    }.ok_or_message("Either the legacy replication_mode or replication_factor and consistency_mode can be set, not both.")
+}
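A short usage sketch of the API introduced above (assumed to compile against this commit; the quorum values follow from the `div_ceil` formula):

```rust
use garage_rpc::replication_mode::{ConsistencyMode, ReplicationFactor};

fn main() {
    // `new` rejects 0, so a factor always means at least one copy.
    assert!(ReplicationFactor::new(0).is_none());
    let rf = ReplicationFactor::new(3).unwrap();

    // The legacy string modes map onto (factor, consistency) pairs:
    // "3" -> (3, Consistent), "3-degraded" -> (3, Degraded), etc.
    assert_eq!(rf.read_quorum(ConsistencyMode::Consistent), 2);
    assert_eq!(rf.read_quorum(ConsistencyMode::Degraded), 1);
    assert_eq!(rf.write_quorum(ConsistencyMode::Dangerous), 1);

    // `ConsistencyMode` parses from its lowercase serde name.
    assert_eq!(ConsistencyMode::parse("degraded"), Some(ConsistencyMode::Degraded));
    assert_eq!(ConsistencyMode::parse("bogus"), None);
}
```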
@@ -112,8 +112,7 @@ pub struct System {
 
     metrics: ArcSwapOption<SystemMetrics>,
 
-    replication_mode: ReplicationMode,
-    pub(crate) replication_factor: usize,
+    pub(crate) replication_factor: ReplicationFactor,
 
     /// Path to metadata directory
     pub metadata_dir: PathBuf,
@@ -243,7 +242,8 @@ impl System {
     /// Create this node's membership manager
     pub fn new(
         network_key: NetworkKey,
-        replication_mode: ReplicationMode,
+        replication_factor: ReplicationFactor,
+        consistency_mode: ConsistencyMode,
         config: &Config,
     ) -> Result<Arc<Self>, Error> {
         // ---- setup netapp RPC protocol ----
@@ -274,14 +274,13 @@ impl System {
         let persist_peer_list = Persister::new(&config.metadata_dir, "peer_list");
 
         // ---- setup cluster layout and layout manager ----
-        let replication_factor = replication_mode.replication_factor();
-
         let layout_manager = LayoutManager::new(
             config,
             netapp.id,
             system_endpoint.clone(),
             peering.clone(),
-            replication_mode,
+            replication_factor,
+            consistency_mode,
         )?;
 
         let mut local_status = NodeStatus::initial(replication_factor, &layout_manager);
@@ -315,7 +314,6 @@ impl System {
             netapp: netapp.clone(),
             peering: peering.clone(),
             system_endpoint,
-            replication_mode,
             replication_factor,
             rpc_listen_addr: config.rpc_bind_addr,
             rpc_public_addr,
@@ -427,7 +425,9 @@ impl System {
     }
 
     pub fn health(&self) -> ClusterHealth {
-        let quorum = self.replication_mode.write_quorum();
+        let quorum = self
+            .replication_factor
+            .write_quorum(ConsistencyMode::Consistent);
 
         // Gather information about running nodes.
         // Technically, `nodes` contains currently running nodes, as well
@@ -631,7 +631,7 @@ impl System {
             .count();
 
         let not_configured = self.cluster_layout().check().is_err();
-        let no_peers = n_connected < self.replication_factor;
+        let no_peers = n_connected < self.replication_factor.into();
         let expected_n_nodes = self.cluster_layout().all_nodes().len();
         let bad_peers = n_connected != expected_n_nodes;
 
@@ -774,14 +774,14 @@ impl EndpointHandler<SystemRpc> for System {
 }
 
 impl NodeStatus {
-    fn initial(replication_factor: usize, layout_manager: &LayoutManager) -> Self {
+    fn initial(replication_factor: ReplicationFactor, layout_manager: &LayoutManager) -> Self {
         NodeStatus {
             hostname: Some(
                 gethostname::gethostname()
                     .into_string()
                     .unwrap_or_else(|_| "<invalid utf-8>".to_string()),
             ),
-            replication_factor,
+            replication_factor: replication_factor.into(),
             layout_digest: layout_manager.layout().digest(),
             meta_disk_avail: None,
             data_disk_avail: None,
@@ -68,7 +68,7 @@ impl SystemMetrics {
         let replication_factor = system.replication_factor;
         meter
             .u64_value_observer("garage_replication_factor", move |observer| {
-                observer.observe(replication_factor as u64, &[])
+                observer.observe(replication_factor.replication_factor() as u64, &[])
             })
             .with_description("Garage replication factor setting")
             .init()
@@ -30,12 +30,20 @@ pub struct Config {
     )]
     pub block_size: usize,
 
-    /// Replication mode. Supported values:
-    /// - none, 1 -> no replication
-    /// - 2 -> 2-way replication
-    /// - 3 -> 3-way replication
-    // (we can add more aliases for this later)
-    pub replication_mode: String,
+    /// Number of replicas. Can be any positive integer, but uneven numbers are more favorable.
+    /// - 1 for single-node clusters, or to disable replication
+    /// - 3 is the recommended and supported setting.
+    #[serde(default)]
+    pub replication_factor: Option<usize>,
+
+    /// Consistency mode for all requests through this node
+    /// - Degraded -> Disable read quorum
+    /// - Dangerous -> Disable read and write quorum
+    #[serde(default = "default_consistency_mode")]
+    pub consistency_mode: String,
+
+    /// Legacy option
+    pub replication_mode: Option<String>,
 
     /// Zstd compression level used on data blocks
     #[serde(
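A hedged sketch of how the reworked options deserialize; `MiniConfig` is a stand-in keeping only the fields touched by this change (requires the `serde` and `toml` crates):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct MiniConfig {
    // New style: an optional factor plus a consistency mode with a default.
    #[serde(default)]
    replication_factor: Option<usize>,
    #[serde(default = "default_consistency_mode")]
    consistency_mode: String,
    // Legacy option; `Option` fields default to `None` when absent.
    replication_mode: Option<String>,
}

fn default_consistency_mode() -> String {
    "consistent".into()
}

fn main() {
    // New-style config: factor set, consistency mode defaulted.
    let c: MiniConfig = toml::from_str("replication_factor = 3").unwrap();
    assert_eq!(c.replication_factor, Some(3));
    assert_eq!(c.consistency_mode, "consistent");
    assert!(c.replication_mode.is_none());

    // Legacy config still deserializes; `parse_replication_mode` then
    // rejects configs that mix the legacy and the new options.
    let c: MiniConfig = toml::from_str(r#"replication_mode = "3""#).unwrap();
    assert_eq!(c.replication_mode.as_deref(), Some("3"));
}
```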
@@ -244,10 +252,15 @@ fn default_sled_cache_capacity() -> usize {
 fn default_sled_flush_every_ms() -> u64 {
     2000
 }
 
 fn default_block_size() -> usize {
     1048576
 }
 
+fn default_consistency_mode() -> String {
+    "consistent".into()
+}
+
 fn default_compression() -> Option<i32> {
     Some(1)
 }
@@ -359,7 +372,7 @@ mod tests {
     r#"
     metadata_dir = "/tmp/garage/meta"
     data_dir = "/tmp/garage/data"
-    replication_mode = "3"
+    replication_factor = 3
     rpc_bind_addr = "[::]:3901"
     rpc_secret = "foo"
 