Adds an S3_SIGNED_URL_EXPIRY value to .env and settings (defaults to 15 minutes)
Note that a fresh signed URL (and therefore a fresh expiry window) is generated
every time the user loads the exports page; this is independent of the
_creation_ of export files.
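
A minimal sketch of how this might be wired up, assuming the environs-style `Env` helper used in settings.py (the exact variable handling is illustrative):

```python
# bookwyrm/settings.py (sketch)
from environs import Env

env = Env()
env.read_env()

# Lifetime of a signed S3 download URL, in seconds (15 minutes by default).
S3_SIGNED_URL_EXPIRY = env.int("S3_SIGNED_URL_EXPIRY", 900)
```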
- use signed URLs for S3 downloads (see the sketch after this list)
- re-arrange the tar.gz file to match the original layout
- delete all working files after tarring
- import from an S3 export
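
A rough sketch of the signed-URL piece, assuming django-storages' `S3Boto3Storage` backend (whose `url()` method accepts an `expire` argument in seconds); the view and the `export_data` field are hypothetical:

```python
from django.conf import settings
from django.shortcuts import redirect
from storages.backends.s3boto3 import S3Boto3Storage


def download_export(request, export):
    """Hypothetical view: redirect to a time-limited signed URL for the
    export archive instead of proxying the file through Django."""
    storage = S3Boto3Storage()
    signed_url = storage.url(
        export.export_data.name,  # export_data: assumed FileField on the model
        expire=settings.S3_SIGNED_URL_EXPIRY,
    )
    return redirect(signed_url)
```

Redirecting lets S3 serve the bytes directly, so large archives never pass through the application server.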
TODO
- check local export and import
- fix error when avatar missing
- deal with storage backends other than AWS S3 (e.g. Azure)
- make user_import_time_limit a site setting rather than a value in settings.py (note this applies to exports as well as imports)
- let admins change user_import_time_limit from the UI
- let admins cancel stuck user imports
- make disabling new imports also disable user imports
Splitting this into five separate queries avoids the large join, which both
prevents us from using indexes and requires materializing to disk.
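
As an illustration of the shape of this change (the related names below are hypothetical stand-ins, not the actual five queries):

```python
from bookwyrm.models import Edition  # assumed import path


def books_for_user(user):
    """Hypothetical stand-in: gather book IDs with several small,
    index-friendly queries instead of one large OR'd join."""
    book_ids = set()
    for related in (
        user.shelfbook_set,    # books on the user's shelves
        user.readthrough_set,  # books with reading activity
        user.review_set,       # books the user reviewed
        user.comment_set,      # books the user commented on
        user.quotation_set,    # books the user quoted
    ):
        book_ids.update(related.values_list("book_id", flat=True))
    return Edition.objects.filter(id__in=book_ids)
```

Each small query can use an index on its own table, and combining plain ID lists in Python sidesteps the join that Postgres was materializing to disk.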
Fixes: #2157 (hopefully)
The idea behind a streaming CSV export was to reduce memory use by avoiding
building the entire CSV file in memory before sending it to the client.
However, it didn't work out this way in practice: the query objects created to
represent each line caused Postgres to generate a very large temp file (~200MB
on bookwyrm.social), and the memory used by each Query object was likely
similar to, if not larger than, that used by the finalized CSV row.
While we should in the long term run our CSV exports as a Celery task,
this change should allow CSV exports to work on large servers without
causing disk-space problems.
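
A sketch of the resulting non-streaming approach (the helper and column names are illustrative):

```python
import csv
import io

from django.http import HttpResponse


def csv_export_response(rows):
    """Build the entire CSV in memory, then send it in a single response
    rather than streaming rows backed by live query objects."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["title", "author", "rating"])  # illustrative columns
    writer.writerows(rows)
    response = HttpResponse(buffer.getvalue(), content_type="text/csv")
    response["Content-Disposition"] = 'attachment; filename="export.csv"'
    return response
```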
Fixes: #2157