As of 0.1.13, the s3-tar library uses an environment variable
(`S3_ENDPOINT_URL`) to determine the AWS endpoint. See:
https://github.com/xtream1101/s3-tar/blob/0.1.13/s3_tar/utils.py#L25-L29.
To save BookWyrm admins from having to set it (e.g., through `.env`)
when they are already setting `AWS_S3_ENDPOINT_URL`, we create a Session
class that unconditionally uses that URL, and feed it to S3Tar.
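A minimal sketch of the idea; the class name and target key below are illustrative, not BookWyrm's exact code:

```python
import boto3
from django.conf import settings
from s3_tar import S3Tar


class EndpointSession(boto3.session.Session):
    """A Session whose clients always use the configured endpoint URL."""

    def client(self, *args, **kwargs):
        # s3-tar passes endpoint_url from S3_ENDPOINT_URL (possibly None);
        # override it with the value admins already set for BookWyrm
        kwargs["endpoint_url"] = settings.AWS_S3_ENDPOINT_URL
        return super().client(*args, **kwargs)


# S3Tar accepts a session object, so every S3 client it creates
# now points at the same endpoint as django-storages
job = S3Tar(
    settings.AWS_STORAGE_BUCKET_NAME,
    "exports/user-export.tar.gz",  # illustrative target key
    session=EndpointSession(),
)
```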
- use a signed URL for S3 downloads (see the presigned-URL sketch below)
- re-arrange the tar.gz file to match the original
- delete all working files after tarring
- import from an S3 export
TODO:
- check local export and import
- fix the error when the avatar is missing
- deal with multiple S3 storage options (e.g., Azure)
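For the signed-URL bullet, the standard boto3 pattern looks like this (the object key is a placeholder):

```python
import boto3
from django.conf import settings

s3_client = boto3.client("s3", endpoint_url=settings.AWS_S3_ENDPOINT_URL)

# a time-limited URL lets the user download the export directly
# from the bucket without the object being public
download_url = s3_client.generate_presigned_url(
    "get_object",
    Params={
        "Bucket": settings.AWS_STORAGE_BUCKET_NAME,
        "Key": "exports/user-export.tar.gz",  # placeholder key
    },
    ExpiresIn=3600,  # seconds
)
```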
We were passing the *requesting* user's moved_to value to the Move notification template, instead of the id of the user that the notification is about.
Additionally, the id_to_username template tag had no fallback for when user_id is None.
This resolves both problems and removes an unnecessary space in a template for the case when the logged-in user made the move.
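A sketch of the shape of the template-tag fix; the parsing and the fallback string are assumptions, not the exact BookWyrm code:

```python
from urllib.parse import urlparse
from django import template

register = template.Library()


@register.filter(name="id_to_username")
def id_to_username(user_id):
    """Turn an ActivityPub id into user@domain, with a fallback for None."""
    if not user_id:
        # previously this blew up when no id was available
        return "a new user account"
    url = urlparse(user_id)
    return f"{url.path.split('/')[-1]}@{url.netloc}"
```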
Fixes #3196
- new setting to enable user exports, defaulting to False
- add a setting to enable and disable user exports (see the settings sketch after this list)
- do not allow user exports when using S3 storage
- do not serve non-image files from /images/ (requires an update to the nginx settings)
- increase the default file upload limit to 100MB so that user exports can be imported (can be changed in .env)
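On the settings side, this might look roughly like the following, assuming environs-style env parsing; the setting names are assumptions:

```python
# bookwyrm/settings.py (sketch)
from environs import Env

env = Env()
env.read_env()

# user exports are off unless the admin opts in via .env
ENABLE_USER_EXPORTS = env.bool("ENABLE_USER_EXPORTS", False)

# Django's upload cap, raised to 100MB so export archives can be re-imported
DATA_UPLOAD_MAX_MEMORY_SIZE = env.int("DATA_UPLOAD_MAX_MEMORY_SIZE", 104857600)
```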
- custom storages (sketched after this list)
- tar.gz within the bucket using s3_tar
- slightly changes the export directory structure
- major problems still outstanding regarding delivering S3 files to end users
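The custom storage for exports could be sketched with django-storages like this; the class name and prefix are assumptions:

```python
from storages.backends.s3boto3 import S3Boto3Storage


class ExportsS3Storage(S3Boto3Storage):
    """Keep user export archives under their own prefix in the bucket."""

    location = "exports"    # objects land under exports/ in the bucket
    default_acl = None      # no public ACL; downloads go through signed URLs
    file_overwrite = False  # keep distinct archives rather than clobbering
```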
In 1937177e1 ("dev-tools: use apt source for Node instead of setup script"),
I introduced the use of `Signed-By` with a public key block, which is only
supported in bookworm (bullseye only supports fingerprints, to the best of my knowledge).
Python's Docker images already use bookworm by default, but we explicitly
require it now to avoid build errors if someone has a very old image lying
around (see, e.g., #3190).
(This can be dropped after Debian 13 ‘trixie’ is released.)
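In Dockerfile terms the explicit requirement is just a pinned base image; the exact tag here is an assumption:

```dockerfile
# bookworm supports Signed-By with an inline public key block; bullseye does not
FROM python:3.11-bookworm
```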