Compare commits


527 commits
main...v0.6.3

Author SHA1 Message Date
Mouse Reeve
6400a8e234 Merge branch 'main' into production 2023-05-30 11:38:03 -07:00
Mouse Reeve
a65e6ce423 Merge branch 'main' into production 2023-04-25 17:43:00 -07:00
Mouse Reeve
e00e7c1890
Merge pull request #2776 from WesleyAC/production-add-log-retention
Add default retention policy to containers
2023-04-07 05:57:45 -07:00
Mouse Reeve
419e9f24cb Merge branch 'main' into production 2023-04-03 21:32:01 -07:00
Mouse Reeve
ffb7f66375 Merge branch 'main' into production 2023-04-03 21:31:08 -07:00
Wesley Aptekar-Cassels
fe020b7c95 Add default retention policy to containers
Docker makes it extremely difficult to do time-based retention,
unfortunately, so space-based is the best we'll be able to do. This is
probably fairly aggressive for bookwyrm.social, and not nearly
aggressive enough for smaller instances, but it's better than the
current status quo.

I've only tested that this builds and runs, not that it actually has the
intended effect.
2023-04-03 21:30:28 -04:00
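For context: Docker's json-file log driver only rotates by size, which is what "space-based" retention means here. A minimal sketch of the equivalent flags on a single container (the image and the values are illustrative, not the settings this commit ships):

    # Cap each container at ~1 GB of logs: rotate at 100 MB, keep 10 files.
    docker run -d \
      --log-driver json-file \
      --log-opt max-size=100m \
      --log-opt max-file=10 \
      nginx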
Mouse Reeve
0c427eaf06
Merge pull request #2756 from WesleyAC/production-weed-cron-timing
Adjust sample postgres-docker cron file
2023-03-25 17:29:28 -07:00
Wesley Aptekar-Cassels
c58689ae20 Adjust sample postgres-docker cron file
Only delaying one minute between backup and weeding means that the
backup script could potentially still be running when the weed script
is, which isn't good. Instead, wait a little longer.
2023-03-25 20:21:00 -04:00
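A sketch of the staggered schedule the message describes, as a system crontab. The weed.sh name comes from the commits here; the backup script name and the times are illustrative:

    # /etc/cron.d/bookwyrm-backups (illustrative)
    # Run the backup at midnight, then prune an hour later so the two
    # jobs can never overlap.
    0 0 * * * root /usr/local/bin/backup.sh
    0 1 * * * root /usr/local/bin/weed.sh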
Mouse Reeve
154f23719b
Merge pull request #2754 from WesleyAC/production-fix-weed-script
Fix weed.sh script
2023-03-21 17:57:07 -07:00
Wesley Aptekar-Cassels
a0b8adf3a9 Fix weed.sh script
This allows it to work with the naming convention used by the backup
script.

Fixes: #2753
2023-03-21 20:34:08 -04:00
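The actual retention policy isn't shown here, but the shape of the fix is making the prune glob match the files the backup script writes. A purely illustrative sketch, assuming date-stamped names that sort chronologically:

    # Keep the 30 newest backups, delete the rest (pattern illustrative).
    ls /backups/backup_*.sql | head -n -30 | xargs -r rm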
Mouse Reeve
6d404558c3 Merge branch 'main' into production 2023-03-13 08:17:12 -07:00
Mouse Reeve
dd505a8814 Merge branch 'main' into production 2023-02-22 09:00:13 -08:00
Mouse Reeve
006272306a Merge branch 'main' into production 2023-02-20 05:39:02 -08:00
Mouse Reeve
cad83a339e Merge branch 'main' into production 2023-01-26 07:39:46 -08:00
Mouse Reeve
5e2a416d65 Merge branch 'main' into production 2023-01-12 15:42:07 -08:00
Mouse Reeve
36a43a1996 Merge branch 'main' into production 2023-01-11 19:30:51 -08:00
Mouse Reeve
013f9ed925 Merge branch 'main' into production 2023-01-11 17:18:40 -08:00
Mouse Reeve
08f247421f
Merge pull request #2548 from chdorner/fix/postgres-backup-script
Support custom pg user/db names in DB backup script
2023-01-09 20:07:23 -08:00
Christof Dorner
6fdc9c6118 Move executable scripts to /usr/local/bin
They are currently in the same folder as the backup files which is on a volume. This makes it impossible to make changes to these scripts and have them applied the next time somebody upgrades and builds these docker images again.
2023-01-01 14:14:33 +01:00
Christof Dorner
cf76173bd1 Support custom pg user/db names in DB backup script 2022-12-26 15:20:21 +01:00
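A minimal sketch of a backup script parameterized this way; POSTGRES_USER and POSTGRES_DB are the standard postgres image variables, while the output path and filename pattern are illustrative:

    #!/bin/bash
    # Dump whatever database .env configures instead of a hardcoded name.
    BACKUP_FILE="/backups/backup_${POSTGRES_DB}_$(date +%F).sql"
    pg_dump -U "${POSTGRES_USER}" "${POSTGRES_DB}" > "${BACKUP_FILE}"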
Mouse Reeve
54db892b90 Merge branch 'main' into production 2022-12-19 14:40:25 -08:00
Mouse Reeve
2c5811fb6f Merge branch 'main' into production 2022-12-16 15:27:13 -08:00
Mouse Reeve
58335539fc Merge branch 'main' into production 2022-12-11 13:59:27 -08:00
Mouse Reeve
ad75fd1928 Merge branch 'main' into production 2022-12-04 18:17:29 -08:00
Mouse Reeve
ae1be06bb4 Merge branch 'main' into production 2022-11-25 14:38:03 -08:00
Mouse Reeve
ef975c3ebc Merge branch 'main' into production 2022-11-25 14:37:05 -08:00
Mouse Reeve
ed28e2d6e8 Merge branch 'main' into production 2022-11-23 22:11:59 -08:00
Mouse Reeve
0c220a299d Merge branch 'main' into production 2022-11-17 19:52:13 -08:00
Mouse Reeve
8da6741cdf Merge branch 'main' into production 2022-11-17 15:54:27 -08:00
Mouse Reeve
ad25c6c31b Merge branch 'main' into production 2022-11-16 18:44:02 -08:00
Mouse Reeve
8cdcfe89fb Merge branch 'main' into production 2022-11-16 18:42:20 -08:00
Mouse Reeve
4501b504ac Merge branch 'main' into production 2022-11-15 20:26:56 -08:00
Mouse Reeve
e20e009df6 Merge branch 'main' into production 2022-11-15 15:09:18 -08:00
Mouse Reeve
5651f83389 Merge branch 'main' into production 2022-11-14 18:50:21 -08:00
Mouse Reeve
3578dc839c Merge branch 'main' into production 2022-11-14 12:20:25 -08:00
Mouse Reeve
0d86df35ff Merge branch 'main' into production 2022-11-10 18:28:56 -08:00
Mouse Reeve
d3f1261050 Merge branch 'main' into production 2022-11-10 14:20:41 -08:00
Mouse Reeve
7cfccf2710 Merge branch 'main' into production 2022-11-10 13:45:55 -08:00
Mouse Reeve
92b59ff47c Merge branch 'main' into production 2022-11-07 11:42:31 -08:00
Mouse Reeve
8982637094 Merge branch 'main' into production 2022-11-07 10:59:16 -08:00
Mouse Reeve
2cbb80305d Merge branch 'main' into production 2022-11-05 13:48:27 -07:00
Mouse Reeve
b00981c615 Merge branch 'main' into production 2022-11-03 15:21:49 -07:00
Mouse Reeve
101fd87a55 Merge branch 'main' into production 2022-11-03 11:06:47 -07:00
Mouse Reeve
29314c6243 Merge branch 'main' into production 2022-11-03 10:32:13 -07:00
Mouse Reeve
d375518489 Merge branch 'main' into production 2022-10-11 10:56:37 -07:00
Mouse Reeve
3fd4ab6162 Merge branch 'main' into production 2022-09-15 11:21:51 -07:00
Mouse Reeve
b34bbac228 Merge branch 'main' into production 2022-08-29 15:21:43 -07:00
Mouse Reeve
580745bee3 Merge branch 'main' into production 2022-08-05 16:57:53 -07:00
Mouse Reeve
d5980d2b56
Merge pull request #2252 from bookwyrm-social/branch-convergence
Copies main branch files to prod
2022-08-04 10:13:21 -07:00
Mouse Reeve
618c7bbeec Copies main branch files to prod
Resolves some divergences caused by merges, and copies over some files
that aren't needed in prod but aren't harmful
2022-08-04 10:08:19 -07:00
Mouse Reeve
8ba1ccadcc Merge branch 'main' into production 2022-08-04 09:31:46 -07:00
Mouse Reeve
14b6aa7bbd Merge branch 'main' into production 2022-07-18 09:40:32 -07:00
Mouse Reeve
372bfa3b28 Merge branch 'main' into production 2022-07-15 12:34:40 -07:00
Mouse Reeve
99b64ae9e8 Merge branch 'main' into production 2022-07-15 12:05:23 -07:00
Mouse Reeve
cf595916f9 Merge branch 'main' into production 2022-07-10 09:54:47 -07:00
Mouse Reeve
7382f233cc Merge branch 'main' into production 2022-07-09 12:45:42 -07:00
Mouse Reeve
f09fdc865c Merge branch 'main' into production 2022-07-07 12:32:19 -07:00
Mouse Reeve
2d2d0194a6 Merge branch 'main' into production 2022-07-07 09:34:57 -07:00
Mouse Reeve
83cb0a9df8 Removed references to clean command that doesn't exist on prod 2022-07-06 14:04:33 -07:00
Mouse Reeve
5d8a1ec24b Merge branch 'main' into production 2022-07-06 12:33:49 -07:00
Mouse Reeve
0c6f38828b Merge branch 'main' into production 2022-07-05 18:10:49 -07:00
Mouse Reeve
4097a8989c Merge branch 'main' into production 2022-07-05 17:51:50 -07:00
Mouse Reeve
3c3eae7b9e Merge branch 'main' into production 2022-07-05 12:10:09 -07:00
Mouse Reeve
e452aa95b6 Merge branch 'main' into production 2022-07-04 14:08:24 -07:00
Mouse Reeve
14e73d18dd Merge branch 'main' into production 2022-05-31 10:41:58 -07:00
Mouse Reeve
02a315be00 Merge branch 'main' into production 2022-05-30 08:23:26 -07:00
Mouse Reeve
3a8aff938b Merge branch 'main' into production 2022-05-19 14:24:55 -07:00
Mouse Reeve
e915335932 Merge branch 'main' into production 2022-05-16 14:42:47 -07:00
Mouse Reeve
b1f3253aa7 Merge branch 'main' into production 2022-05-16 10:00:19 -07:00
Mouse Reeve
12a5eaba3e Merge branch 'main' into production 2022-05-14 08:48:07 -07:00
Mouse Reeve
dc0dede105 Merge branch 'main' into production 2022-05-09 12:41:27 -07:00
Mouse Reeve
05cf3ce344 Merge branch 'main' into production 2022-04-29 15:42:39 -07:00
Mouse Reeve
3ad09308b5 Merge branch 'main' into production 2022-04-26 08:30:35 -07:00
Mouse Reeve
c10c14b2c9 Merge branch 'main' into production 2022-04-26 07:50:39 -07:00
Mouse Reeve
f64a909751 Merge branch 'main' into production 2022-04-08 14:05:32 -07:00
Mouse Reeve
f27fe05008 Merge branch 'main' into production 2022-03-26 12:56:11 -07:00
Mouse Reeve
4308137390 Merge branch 'main' into production 2022-03-26 08:58:11 -07:00
Mouse Reeve
588d2da2c6 Merge branch 'main' into production 2022-03-24 09:46:32 -07:00
Mouse Reeve
c1c0eed92c Merge branch 'main' into production 2022-03-19 07:25:18 -07:00
Mouse Reeve
bb89766de0 Merge branch 'main' into production 2022-03-18 07:41:45 -07:00
Mouse Reeve
e2197b3e7d Merge branch 'main' into production 2022-03-17 10:36:43 -07:00
Mouse Reeve
4a0e1bdd46 Merge branch 'main' into production 2022-03-17 10:07:27 -07:00
Mouse Reeve
67bb154008 Merge branch 'main' into production 2022-03-17 08:56:15 -07:00
Mouse Reeve
6f9488f7a8 Merge branch 'main' into production 2022-03-15 12:43:11 -07:00
Mouse Reeve
35e27a4443 Merge branch 'main' into production 2022-03-13 13:33:00 -07:00
Mouse Reeve
3ebdd432ec Merge branch 'main' into production 2022-03-13 12:09:47 -07:00
Mouse Reeve
a54a8b7573 Merge branch 'main' into production 2022-03-11 16:06:02 -08:00
Mouse Reeve
91e8a9ac19
Merge pull request #1995 from bookwyrm-social/revert-broken-themes
Revert broken themes commit
2022-03-04 18:36:01 -08:00
Mouse Reeve
394f6fea87 Revert "Adds custom compile management command"
This reverts commit b0c0af9617.
2022-03-04 18:14:46 -08:00
Mouse Reeve
4baba480c5 Merge branch 'main' into production 2022-03-04 17:59:09 -08:00
Mouse Reeve
64b5a18d7b Merge branch 'main' into production 2022-03-01 14:53:11 -08:00
Mouse Reeve
dfba63f977 Merge branch 'main' into production 2022-03-01 12:20:27 -08:00
Mouse Reeve
8de5a44181 Merge branch 'main' into production 2022-02-24 17:53:29 -08:00
Mouse Reeve
3e78164039 Include packaging dep 2022-02-18 18:19:28 -08:00
Mouse Reeve
abfd094337 Merge branch 'main' into production 2022-02-18 18:17:43 -08:00
Mouse Reeve
0e8fffe001 Merge branch 'main' into production 2022-02-12 10:29:49 -08:00
Mouse Reeve
0f1757b278 Merge branch 'main' into production 2022-02-12 10:14:47 -08:00
Mouse Reeve
f1381341b4 Merge branch 'main' into production 2022-02-09 12:41:23 -05:00
Mouse Reeve
6bf9fe7295 Merge branch 'main' into production 2022-02-04 20:23:35 -08:00
Mouse Reeve
657d7b04c1 Merge branch 'main' into production 2022-02-04 19:52:16 -08:00
Mouse Reeve
0a6c14dc8a Merge branch 'main' into production 2022-02-04 18:13:00 -08:00
Mouse Reeve
771435168b Merge branch 'main' into production 2022-02-04 15:45:20 -08:00
Mouse Reeve
36dbf64b12 Merge branch 'main' into production 2022-02-03 14:09:09 -08:00
Mouse Reeve
75726460d7 Merge branch 'main' into production 2022-02-03 11:14:19 -08:00
Mouse Reeve
d04345ec08 Merge branch 'main' into production 2022-02-02 13:25:42 -08:00
Mouse Reeve
bf573a0871 Merge branch 'main' into production 2022-02-02 12:41:05 -08:00
Mouse Reeve
43ca5f466c Merge branch 'main' into production 2022-02-02 11:06:10 -08:00
Mouse Reeve
47be375de3 Merge branch 'main' into production 2022-01-30 07:19:44 -08:00
Mouse Reeve
cc1496cf8f Merge branch 'main' into production 2022-01-30 07:13:11 -08:00
Mouse Reeve
dd16ccd093 Merge branch 'main' into production 2022-01-30 06:33:08 -08:00
Mouse Reeve
4a81786236 Merge branch 'main' into production 2022-01-20 17:20:29 -08:00
Mouse Reeve
b46ec147da Merge branch 'main' into production 2022-01-20 16:46:45 -08:00
Mouse Reeve
589a743cfb Merge branch 'main' into production 2022-01-20 15:25:41 -08:00
Mouse Reeve
2dd39517c3 Merge branch 'main' into production 2022-01-18 13:46:53 -08:00
Mouse Reeve
4fad83e910 Merge branch 'main' into production 2022-01-18 06:58:13 -08:00
Mouse Reeve
56c6ee8879 Merge branch 'main' into production 2022-01-17 11:09:01 -08:00
Mouse Reeve
3d036386f9 Merge branch 'main' into production 2022-01-17 08:23:50 -08:00
Mouse Reeve
53754546c2 Merge branch 'main' into production 2022-01-13 11:10:41 -08:00
Mouse Reeve
6a7c38003d Removes duplicate update command 2022-01-13 09:25:43 -08:00
Mouse Reeve
584da682ee Merge branch 'main' into production 2022-01-13 09:24:05 -08:00
Mouse Reeve
c102106dc2
Merge pull request #1803 from cincodenada/run-rm-update
Use run --rm in `update` command
2022-01-13 08:49:45 -08:00
Mouse Reeve
9fb7280366
Merge pull request #1751 from cincodenada/open-telemetry
Adds OpenTelemetry exporter for use with various monitoring tools
2022-01-12 17:28:47 -08:00
Mouse Reeve
26482232c9 Merge branch 'main' into production 2022-01-11 13:32:59 -08:00
Joel Bradshaw
1dc5467969 Drop --no-cache as well
Since several of our services share the same image, this should speed up
building a good bit, and --no-cache shouldn't be necessary. If we're
still having issues with things not updating, we should figure out how
to arrange the Dockerfile, but I think it should be fine as-is.
2022-01-10 23:22:12 -08:00
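The reasoning above: services built from one shared image can reuse cached layers, so plain builds stay fast, and --no-cache only matters when the cache itself is suspect. Illustratively:

    docker-compose build             # reuses cached layers across services
    docker-compose build --no-cache  # rebuilds every layer; rarely needed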
Joel Bradshaw
83964f4e2b Merge branch 'production' into open-telemetry 2022-01-10 23:20:05 -08:00
Joel Bradshaw
320ac617cb Add default values for OTEL settings 2022-01-10 23:17:22 -08:00
Joel Bradshaw
7ab8209046 Use runweb to update, and up -d instead of restart
Partial fix for #1785. Previously we would run the migrate and collect
commands with exec, which would run them in the running (and thus old)
containers, but with the new code. This caused issues when, for example,
new dependencies were introduced, which weren't built into the old
containers.

Instead, use run --rm to spin up temporary instances of the new
containers to do the commands.
2022-01-10 00:04:51 -08:00
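A sketch of the difference, assuming the compose service is named web (runweb is the project's wrapper around commands like these):

    # exec runs inside the already-running container, i.e. the OLD image:
    #   docker-compose exec web python manage.py migrate
    # run --rm starts a throwaway container from the NEWLY built image,
    # so freshly added dependencies are present:
    docker-compose run --rm web python manage.py migrate
    docker-compose run --rm web python manage.py collectstatic --no-input
    docker-compose up -d    # recreate services rather than restarting old ones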
Mouse Reeve
cbfa99a95b Merge branch 'main' into production 2022-01-09 12:36:44 -08:00
Mouse Reeve
2fccb8ef83 Merge branch 'main' into production 2022-01-09 11:12:43 -08:00
Mouse Reeve
49bef9a7c5 Merge branch 'main' into production 2022-01-08 18:55:37 -08:00
Mouse Reeve
2b07e9c485 Merge branch 'main' into production 2022-01-08 16:43:07 -08:00
Mouse Reeve
c32e5b78ac Fixes merge error in requirements 2022-01-07 10:28:01 -08:00
Mouse Reeve
637600763b Merge branch 'main' into production 2022-01-07 10:25:39 -08:00
Mouse Reeve
17d4b60275 Merge branch 'main' into production 2022-01-07 08:18:48 -08:00
Mouse Reeve
55aa26d2ba Merge branch 'main' into production 2022-01-06 18:56:49 -08:00
Mouse Reeve
a806264497 Merge branch 'main' into production 2022-01-06 13:13:05 -08:00
Mouse Reeve
d650585858 Merge branch 'main' into production 2022-01-06 12:23:07 -08:00
Mouse Reeve
4416ce5069 Merge branch 'main' into production 2022-01-05 16:11:19 -08:00
Joel Bradshaw
f5f861ce25 Make it black 🎸 2022-01-01 14:51:01 -08:00
Joel Bradshaw
5b3ff0cf82 Instrument celery, move init into apps.py 2022-01-01 14:48:58 -08:00
Joel Bradshaw
40bec83833 Add versions to requirements.txt 2022-01-01 14:48:58 -08:00
Joel Bradshaw
3d7f73d73c Document OTLP in env, only load if env vars exist
Also move telemetry into its own file, all those imports seemed like
unnecessary clutter
2022-01-01 14:48:58 -08:00
Joel Bradshaw
7cb7063da5 Add more route names, and format
I think these show up in telemetry, and nicer names are nice
2022-01-01 14:48:46 -08:00
Mouse Reeve
93e1beda6e Merge branch 'main' into production 2022-01-01 07:02:40 -08:00
Joel Bradshaw
8b4f93dd4e Integrate open telemetry
This allows us to export to anyone that takes OTLP, which is most of the
major players, I think! Nifty!

Kinda like the S3 config but for tracing, you can slot in any provider
that supports it via environment variables

This uses the Django instrumentation, which gets us a bunch of nifty
stuff right out of the box.
2021-12-31 03:47:21 -08:00
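Slotting in a provider via environment variables uses the standard OpenTelemetry exporter settings; the endpoint and header values below are placeholders, not BookWyrm defaults:

    export OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp.example.com:4317"
    export OTEL_EXPORTER_OTLP_HEADERS="x-example-api-key=YOUR_KEY"
    export OTEL_SERVICE_NAME="bookwyrm"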
Mouse Reeve
82513197fd Merge branch 'main' into production 2021-12-30 09:30:57 -08:00
Mouse Reeve
2c5265a117 Merge branch 'main' into production 2021-12-29 16:21:28 -08:00
Mouse Reeve
8b42d58caf Remove prettier in prod branch 2021-12-29 09:37:08 -08:00
Mouse Reeve
e8c1ca68d1 Merge branch 'production' of github.com:bookwyrm-social/bookwyrm into production 2021-12-29 09:36:53 -08:00
Mouse Reeve
638352ba26 Merge branch 'main' into production 2021-12-29 09:35:08 -08:00
Mouse Reeve
5cc4f9d381
Merge pull request #1717 from cincodenada/no-more-fedireads
Remove last lingering traces of fedireads name
2021-12-29 09:33:13 -08:00
Joel Bradshaw
2e9574d53c Add database to filename, don't install recommends
Cron comes with a metric ton of recommended dependencies, including
mariadb-common, which are a bunch of unnecessary weight. Just install
what's necessary for cron.
2021-12-28 14:18:57 -08:00
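The fix uses apt's standard flag for skipping recommended packages; a minimal sketch of the Dockerfile RUN line it implies:

    # Install cron without its recommended extras (mariadb-common etc.).
    apt-get update && apt-get install -y --no-install-recommends cron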
Joel Bradshaw
879a410808 Attempt to use env variable for backup script
This should be available via docker, and we shouldn't have the database
name hardcoded anywhere
2021-12-28 13:45:55 -08:00
Joel Bradshaw
d44a900e0c Fix typo while we're here 2021-12-28 13:35:27 -08:00
Joel Bradshaw
cd9acef30a Remove last traces of fedireads 2021-12-28 13:35:27 -08:00
Mouse Reeve
1c48605418 Merge branch 'main' into production 2021-12-28 07:28:34 -08:00
Mouse Reeve
4ca0834b43 Merge branch 'main' into production 2021-12-27 14:56:45 -08:00
Mouse Reeve
2a5012470b Merge branch 'main' into production 2021-12-27 14:33:30 -08:00
Mouse Reeve
07c190d717 Merge branch 'main' into production 2021-12-27 13:41:45 -08:00
Mouse Reeve
bdf617d005 Merge branch 'main' into production 2021-12-14 19:01:49 -08:00
Mouse Reeve
b53d45a19a Merge branch 'main' into production 2021-12-07 15:37:22 -08:00
Mouse Reeve
4c73e53b9a Merge branch 'main' into production 2021-12-06 13:42:55 -08:00
Mouse Reeve
c52689c4b8 Merge branch 'main' into production 2021-12-05 10:29:23 -08:00
Mouse Reeve
620a22bde0 Merge branch 'main' into production 2021-12-04 16:58:07 -08:00
Mouse Reeve
58b88ef71e Merge branch 'main' into production 2021-11-28 09:41:13 -08:00
Mouse Reeve
25a7ab7b84 Merge branch 'main' into production 2021-11-23 15:01:30 -08:00
Mouse Reeve
eff6591727 Merge branch 'main' into production 2021-11-19 09:44:37 -08:00
Mouse Reeve
db5ec248ef Merge branch 'main' into production 2021-11-17 10:44:19 -08:00
Mouse Reeve
d62a4e7aa0 Merge branch 'main' into production 2021-11-16 09:43:53 -08:00
Mouse Reeve
ab06180e41 Merge branch 'main' into production 2021-11-15 13:47:32 -08:00
Mouse Reeve
02ea4020ea Merge branch 'main' into production 2021-11-15 10:43:19 -08:00
Mouse Reeve
17294abc13 Merge branch 'main' into production 2021-10-25 11:32:55 -07:00
Mouse Reeve
327a616779 Merge branch 'main' into production 2021-10-25 10:59:46 -07:00
Mouse Reeve
9a1ca982c8 Merge branch 'main' into production 2021-10-15 14:56:41 -07:00
Mouse Reeve
382d98a2e0 Merge branch 'main' into production 2021-10-15 14:31:13 -07:00
Mouse Reeve
1bc09485ee Merge branch 'main' into production 2021-10-14 16:28:35 -07:00
Mouse Reeve
b15744cc37 Merge branch 'main' into production 2021-10-14 14:32:14 -07:00
Mouse Reeve
62b62b5057 Merge branch 'main' into production 2021-10-11 10:35:16 -07:00
Mouse Reeve
7cb377da6f Merge branch 'main' into production 2021-10-11 10:22:22 -07:00
Mouse Reeve
9b1a9ec1d4 Merge branch 'main' into production 2021-10-06 18:24:24 -07:00
Mouse Reeve
8aa920e357 Merge branch 'main' into production 2021-10-04 12:00:23 -07:00
Mouse Reeve
34e45f3113 Merge branch 'main' into production 2021-10-03 13:38:13 -07:00
Mouse Reeve
e9f60f93b2 Merge branch 'main' into production 2021-10-03 12:12:34 -07:00
Mouse Reeve
1b987542a4 Merge branch 'main' into production 2021-10-02 18:26:38 -07:00
Mouse Reeve
57396499ff Merge branch 'main' into production 2021-09-29 11:25:09 -07:00
Mouse Reeve
755a6569c6 Merge branch 'main' into production 2021-09-22 17:11:20 -07:00
Mouse Reeve
7744b9a117 Merge branch 'main' into production 2021-09-22 12:34:22 -07:00
Mouse Reeve
697924ebc6 Merge branch 'main' into production 2021-09-21 07:09:19 -07:00
Mouse Reeve
cfc6528d0c Merge branch 'main' into production 2021-09-19 09:40:48 -07:00
Mouse Reeve
5d19d33a01 Merge branch 'main' into production 2021-09-18 16:41:03 -07:00
Mouse Reeve
620d3ab804 Merge branch 'main' into production 2021-09-18 06:41:30 -07:00
Mouse Reeve
2d7047b833 Merge branch 'main' into production 2021-09-12 11:58:59 -07:00
Mouse Reeve
c03db63bb3 Merge branch 'main' into production 2021-09-12 10:53:16 -07:00
Mouse Reeve
b5c0c0b52c Merge branch 'main' into production 2021-09-11 17:56:56 -07:00
Mouse Reeve
75ca011f92 Merge branch 'main' into production 2021-09-11 12:18:10 -07:00
Mouse Reeve
074a9113e6 Merge branch 'main' into production 2021-09-11 09:44:36 -07:00
Mouse Reeve
b2bd655a00 Merge branch 'main' into production 2021-09-11 07:43:24 -07:00
Mouse Reeve
3fb2584c03 Merge branch 'main' into production 2021-09-10 16:03:56 -07:00
Mouse Reeve
78edfa142e Merge branch 'main' into production 2021-09-10 15:12:25 -07:00
Mouse Reeve
4515cec7d2 Merge branch 'main' into production 2021-09-10 12:27:09 -07:00
Mouse Reeve
4c5f5d90ba Merge branch 'main' into production 2021-09-09 17:38:24 -07:00
Mouse Reeve
5e88e893c7 Merge branch 'main' into production 2021-09-08 17:55:35 -07:00
Mouse Reeve
2022b3a035 Merge branch 'main' into production 2021-09-08 12:18:35 -07:00
Mouse Reeve
8f50f8758c Merge branch 'main' into production 2021-09-08 09:09:09 -07:00
Mouse Reeve
276bded337 Merge branch 'main' into production 2021-09-07 09:55:38 -07:00
Mouse Reeve
9aa1201cbe Merge branch 'main' into production 2021-09-07 06:50:43 -07:00
Mouse Reeve
7284a4efad Merge branch 'main' into production 2021-09-06 13:17:46 -07:00
Mouse Reeve
ed6d35ef72 Merge branch 'main' into production 2021-09-05 16:47:02 -07:00
Mouse Reeve
8dc8ea2f59 Merge branch 'main' into production 2021-09-03 10:50:38 -07:00
Mouse Reeve
fad1b8db94 Merge branch 'main' into production 2021-08-30 13:38:01 -07:00
Mouse Reeve
642ebec5ff Merge branch 'main' into production 2021-08-24 14:52:14 -07:00
Mouse Reeve
b6b4e36e94 Merge branch 'main' into production 2021-08-23 16:27:06 -07:00
Mouse Reeve
48c78a9d0e Merge branch 'main' into production 2021-08-23 15:29:00 -07:00
Mouse Reeve
0f419d2b06 Merge branch 'main' into production 2021-08-21 12:10:06 -07:00
Mouse Reeve
1d6f30c2df Merge branch 'main' into production 2021-08-20 14:15:46 -07:00
Mouse Reeve
b0751d0555 Merge branch 'main' into production 2021-08-19 17:49:25 -07:00
Mouse Reeve
376adc0a9a Merge branch 'main' into production 2021-08-17 15:09:38 -07:00
Mouse Reeve
6bad4abd63 Merge branch 'main' into production 2021-08-16 14:10:11 -07:00
Mouse Reeve
a8dbecde57 Merge branch 'main' into production 2021-08-16 11:00:27 -07:00
Mouse Reeve
470fe576a1 Merge branch 'main' into production 2021-08-13 06:37:35 -07:00
Mouse Reeve
638bc17724 Merge branch 'main' into production 2021-08-12 07:19:13 -07:00
Mouse Reeve
8c3b170812 Merge branch 'main' into production 2021-08-10 19:56:33 -07:00
Mouse Reeve
47b5f8c4f8 Merge branch 'main' into production 2021-08-10 14:43:38 -07:00
Mouse Reeve
fee03191e8 Merge branch 'main' into production 2021-08-08 16:24:53 -07:00
Mouse Reeve
847c4b49b4 Merge branch 'main' into production 2021-08-08 15:10:21 -07:00
Mouse Reeve
5ce86d7b52 Merge branch 'main' into production 2021-08-07 15:19:35 -07:00
Mouse Reeve
e363d1af11 Merge branch 'main' into production 2021-08-07 15:18:15 -07:00
Mouse Reeve
c105490178 Merge branch 'main' into production 2021-08-07 08:34:05 -07:00
Mouse Reeve
3d1f4e3452 Merge branch 'main' into production 2021-08-06 11:27:21 -07:00
Mouse Reeve
fe0def3de8 Merge branch 'main' into production 2021-08-05 18:30:48 -07:00
Mouse Reeve
cc9c6ce76c Merge branch 'main' into production 2021-08-05 16:15:19 -07:00
Mouse Reeve
8f73cd9d89 Merge branch 'main' into production 2021-08-04 13:59:46 -07:00
Mouse Reeve
c7443c9749 Merge branch 'main' into production 2021-08-02 20:30:54 -07:00
Mouse Reeve
9bf79bf9b9 Merge branch 'main' into production 2021-08-02 07:32:22 -07:00
Mouse Reeve
870cf3b60c
Merge pull request #1229 from bookwyrm-social/postgres-version
Pin Postgres version number
2021-07-28 09:51:24 -06:00
Mouse Reeve
375385ea6c Pin Postgres version number
Fixes #1218 (maybe?)
2021-07-27 16:33:11 -07:00
Mouse Reeve
c0fc0431e7 Fixes merge block 2021-07-17 18:23:18 -07:00
Mouse Reeve
036463c8d9 Merge branch 'main' into production 2021-07-17 18:21:03 -07:00
Mouse Reeve
29044517a1
Merge pull request #1205 from bookwyrm-social/build-cache
Fix installing image dependencies
2021-06-29 10:26:46 -07:00
Mouse Reeve
b0a8d6c9fb Fix installing image dependencies 2021-06-29 06:45:10 -07:00
Mouse Reeve
8534e49f96 Merge branch 'main' into production 2021-06-27 08:05:08 -07:00
Mouse Reeve
c4e66d44c9 Merge branch 'main' into production 2021-06-18 14:30:23 -07:00
Mouse Reeve
fd0e4c6e13 Merge branch 'main' into production 2021-06-17 19:17:43 -07:00
Mouse Reeve
7e3627f787 Merge branch 'main' into production 2021-06-14 12:55:15 -07:00
Mouse Reeve
c3f938d500 Merge branch 'main' into production 2021-06-05 12:53:03 -07:00
Mouse Reeve
90021ab0e4
Merge pull request #1161 from bookwyrm-social/fix-certbot
Reverts to functional certbot configuration
2021-06-05 10:53:40 -07:00
Mouse Reeve
65333b258b Removes certbot init .env config variable 2021-06-05 10:51:02 -07:00
Mouse Reeve
8086b9bca5 Reverts to functional certbot configuration 2021-06-05 10:46:41 -07:00
Mouse Reeve
730b6fabc4 Merge branch 'main' into production 2021-05-24 07:21:43 -07:00
Mouse Reeve
802d28b4a7 Merge branch 'main' into production 2021-05-23 08:36:26 -07:00
Mouse Reeve
5f9e80ac1d Merge branch 'main' into production 2021-05-22 07:04:50 -07:00
Mouse Reeve
832a9494b1 Merge branch 'main' into production 2021-05-21 07:37:17 -07:00
Mouse Reeve
5ba6d8321f Merge branch 'main' into production 2021-05-20 18:48:57 -07:00
Mouse Reeve
8bb815d5b3 Merge branch 'main' into production 2021-05-20 18:21:58 -07:00
Mouse Reeve
1e71cf980c Merge branch 'main' into production 2021-05-18 14:40:54 -07:00
Mouse Reeve
30a67a0221
Merge pull request #1089 from bcj/bcj/non-standard-ports
Allow BookWyrm Services on Non-Standard Ports
2021-05-18 11:28:16 -07:00
bcj
9302aa6ce4 Add POSTGRES_PORT to .env.dev 2021-05-18 13:21:19 -05:00
bcj
238862a4cf Have redis_* grab port information from .env 2021-05-18 00:54:15 -05:00
bcj
bad39aef55 Remove the redis_activity ports listing
It is misleading; it isn't binding to that port
2021-05-17 00:07:59 -05:00
bcj
83078cd424 Combine duplicate redis_* volumes in compose file.
I _think_ YAML handles duplicates by overwriting, so the conf file was not being added
2021-05-17 00:05:11 -05:00
bcj
892d338adc Read flower port from .env 2021-05-17 00:03:51 -05:00
bcj
e8124806b1 Don't hardcode postgres port in bookwyrm.settings
Have bookwyrm.settings check for an alternative postgres port.
2021-05-17 00:03:11 -05:00
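Taken together, this series moves port numbers into .env. A sketch of the resulting configuration surface (variable names follow the commits above; REDIS_ACTIVITY_PORT is an assumption, and all values are examples):

    # .env
    POSTGRES_PORT=5433
    REDIS_BROKER_PORT=6379
    REDIS_ACTIVITY_PORT=6380
    FLOWER_PORT=8888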
Mouse Reeve
f508b4eb33 Merge branch 'main' into production 2021-05-10 16:25:25 -07:00
Mouse Reeve
0ff7c84a14 Merge branch 'main' into production 2021-05-05 08:39:21 -07:00
Mouse Reeve
4fc230ec8b Merge branch 'main' into production 2021-05-04 09:16:20 -07:00
Mouse Reeve
5ed4dfdb63 Removes aria-hidden from covers 2021-05-02 07:12:50 -07:00
Mouse Reeve
b3e369cdba Merge branch 'main' into production 2021-05-01 07:32:30 -07:00
Mouse Reeve
4cc2eccaa4 Merge branch 'main' into production 2021-04-30 15:59:24 -07:00
Mouse Reeve
7aff486a59 Merge branch 'main' into production 2021-04-30 14:05:20 -07:00
Mouse Reeve
d80623d88d Merge branch 'main' into production 2021-04-30 13:09:44 -07:00
Mouse Reeve
8761357905 Merge branch 'main' into production 2021-04-30 06:54:05 -07:00
Mouse Reeve
3d2a56090c Merge branch 'main' into production 2021-04-26 14:07:57 -07:00
Mouse Reeve
d7a662a39a Merge branch 'main' into production 2021-04-25 11:37:17 -07:00
Mouse Reeve
b30fab0597 Merge branch 'main' into production 2021-04-25 11:33:04 -07:00
Mouse Reeve
72f3aff024 Merge branch 'main' into production 2021-04-22 10:43:48 -07:00
Mouse Reeve
92d58411b9 Merge branch 'main' into production 2021-04-22 09:04:13 -07:00
Mouse Reeve
834bc08f34 Merge branch 'main' into production 2021-04-22 07:54:50 -07:00
Mouse Reeve
09ae418881 Merge branch 'main' into production 2021-04-21 14:30:26 -07:00
Mouse Reeve
b54979d39c Merge branch 'dropdown-style' of https://github.com/joachimesque/bookwyrm into production 2021-04-21 12:49:35 -07:00
Mouse Reeve
85f1c38ba6 Merge branch 'main' into production 2021-04-18 07:11:50 -07:00
Mouse Reeve
238e88c9dc Merge branch 'main' into production 2021-04-17 18:06:17 -07:00
Mouse Reeve
8a1bfc5ffc Merge branch 'main' into production 2021-04-17 15:29:52 -07:00
Mouse Reeve
1a939ed913 Merge branch 'main' into production 2021-04-17 11:57:19 -07:00
Mouse Reeve
9d26c0e824 Merge branch 'main' into production 2021-04-17 10:57:55 -07:00
Mouse Reeve
107d56c494 Merge branch 'main' into production 2021-04-16 14:31:36 -07:00
Mouse Reeve
9ad369203f Merge branch 'main' into production 2021-04-15 16:41:50 -07:00
Mouse Reeve
37c5c4979f Merge branch 'main' into production 2021-04-15 11:35:30 -07:00
Mouse Reeve
5f783e4fd1 Merge branch 'main' into production 2021-04-13 18:39:45 -07:00
Mouse Reeve
d54286b571 Merge branch 'main' into production 2021-04-13 13:12:44 -07:00
Mouse Reeve
64e721fb0b Merge branch 'main' into production 2021-04-08 19:37:09 -07:00
Mouse Reeve
4ee738ae52 Merge branch 'main' into production 2021-04-08 09:22:24 -07:00
Mouse Reeve
3795d682aa Merge branch 'main' into production 2021-04-07 11:19:12 -07:00
Mouse Reeve
c33eacaf3d Merge branch 'main' into production 2021-04-07 11:17:16 -07:00
Mouse Reeve
e35befb6a2 Merge branch 'main' into production 2021-04-05 11:11:06 -07:00
Mouse Reeve
411dd1b14d Adds password to production stream erase command 2021-04-05 08:50:50 -07:00
Mouse Reeve
8a0db114d8 Fixes stream length 2021-04-04 21:01:20 -07:00
Mouse Reeve
a3d5d352a7 Merge branch 'main' into production 2021-04-04 21:00:26 -07:00
Mouse Reeve
a8052c2dd0 Merge branch 'main' into production 2021-04-04 16:44:46 -07:00
Mouse Reeve
c50edc9d3f Merge branch 'main' into production 2021-04-02 10:09:33 -07:00
Mouse Reeve
d55dafd9be Merge branch 'main' into production 2021-04-02 07:42:37 -07:00
Mouse Reeve
a5aab26986 Merge branch 'main' into production 2021-04-02 07:22:33 -07:00
Mouse Reeve
406c94354f Merge branch 'main' into production 2021-04-01 18:07:54 -07:00
Mouse Reeve
afa831d140 Merge branch 'main' into production 2021-04-01 14:36:32 -07:00
Mouse Reeve
01f598a951 Merge branch 'main' into production 2021-04-01 14:32:21 -07:00
Mouse Reeve
32cd7ec32d Merge branch 'main' into production 2021-04-01 13:24:32 -07:00
Mouse Reeve
5e6b9e44c9 Merge branch 'main' into production 2021-04-01 12:31:09 -07:00
Mouse Reeve
fd91a79558 Merge branch 'main' into production 2021-03-31 15:19:40 -07:00
Mouse Reeve
877c90b087 Merge branch 'main' into production 2021-03-31 11:28:52 -07:00
Mouse Reeve
af92e3e9a4 Merge branch 'main' into production 2021-03-30 11:01:32 -07:00
Mouse Reeve
d77eb5381d Merge branch 'main' into production 2021-03-29 18:57:48 -07:00
Mouse Reeve
9189ae2f2d Merge branch 'main' into production 2021-03-29 15:24:05 -07:00
Mouse Reeve
79fc286ef8 Merge branch 'main' into production 2021-03-29 13:41:16 -07:00
Mouse Reeve
211bf318c4 Merge branch 'main' into production 2021-03-29 13:21:56 -07:00
Mouse Reeve
7439adb8e6 Merge branch 'main' into production 2021-03-29 12:00:50 -07:00
Mouse Reeve
17b289e6e2 Merge branch 'main' into production 2021-03-28 19:04:23 -07:00
Mouse Reeve
145ea053cb Merge branch 'main' into production 2021-03-28 18:08:01 -07:00
Mouse Reeve
7730c9f9a7 Merge branch 'main' into production 2021-03-28 14:49:58 -07:00
Mouse Reeve
bc1a782541 Merge branch 'main' into production 2021-03-28 11:02:38 -07:00
Mouse Reeve
0abd4e7fc8 Merge branch 'main' into production 2021-03-27 13:17:19 -07:00
Mouse Reeve
fae72977b6 Merge branch 'main' into production 2021-03-26 10:51:50 -07:00
Mouse Reeve
18557af41a Merge branch 'main' into production 2021-03-25 20:09:19 -07:00
Mouse Reeve
e01581c28d Merge branch 'main' into production 2021-03-25 16:56:50 -07:00
Mouse Reeve
78467190a6
Merge pull request #797 from mouse-reeve/auth-prod-redis-management
Authentication for redis in prod management command
2021-03-25 11:36:48 -07:00
Mouse Reeve
b8695ae3b7 Authentication for redis in prod management command 2021-03-25 11:34:49 -07:00
Mouse Reeve
6f37a43d33 Merge branch 'main' into production 2021-03-25 11:14:37 -07:00
Mouse Reeve
34fb1d2526 Revert "Revert "Prod redis activitystream""
This reverts commit 127881f56a.
2021-03-25 11:00:37 -07:00
Mouse Reeve
3cac69cd2c Updates tests 2021-03-25 06:44:50 -07:00
Mouse Reeve
f546dfb005 Adds privacy fields to boost activities 2021-03-25 06:43:52 -07:00
Mouse Reeve
c3a750f5da Adds published date field to boost activity 2021-03-25 06:43:52 -07:00
Mouse Reeve
4b44ce691e Return activities for existing objs in resolve_remote_id 2021-03-25 06:43:52 -07:00
Mouse Reeve
9dbd1c674c Hotfix for serializing review names 2021-03-24 11:23:26 -07:00
Mouse Reeve
80893dd5fd Python formatting 2021-03-24 11:12:53 -07:00
Mouse Reeve
4ca76ec58d Updates tests to catch decimal/float/int errors in status serialization 2021-03-24 10:50:54 -07:00
Mouse Reeve
e3350e58d0 Fixes formatting rating value in template 2021-03-24 10:49:39 -07:00
Mouse Reeve
8b9528ae95 Fixes decimal formatting 2021-03-24 10:49:36 -07:00
Mouse Reeve
a26bf2859b Python formatting 2021-03-24 10:48:32 -07:00
Mouse Reeve
94ef9cdd3e
Merge pull request #789 from mouse-reeve/revert-787-prod-redis-activitystream
Revert "Prod redis activitystream"
2021-03-24 08:13:13 -07:00
Mouse Reeve
127881f56a
Revert "Prod redis activitystream" 2021-03-24 08:12:03 -07:00
Mouse Reeve
8e4db60f46
Merge pull request #787 from mouse-reeve/prod-redis-activitystream
Prod redis activitystream
2021-03-24 07:40:27 -07:00
Mouse Reeve
8277c9e42e python formatting 2021-03-23 20:15:03 -07:00
Mouse Reeve
af72488cd9 Production config for new redis image 2021-03-23 20:05:45 -07:00
Mouse Reeve
f97efc2f86 Merge branch 'main' into production 2021-03-23 18:29:39 -07:00
Mouse Reeve
d72dc842bd Merge branch 'main' into production 2021-03-22 13:05:23 -07:00
Mouse Reeve
e37bb809e0
Merge pull request #778 from mouse-reeve/update-instructions
Instructions for updating an instance
2021-03-21 14:31:13 -07:00
Mouse Reeve
eca1a87294 Remove development requirements for prod installs 2021-03-21 14:23:36 -07:00
Mouse Reeve
97697ce5d5 Adds build step to update command 2021-03-21 14:23:26 -07:00
Mouse Reeve
c9f7ab6389 Adds documentation on how to update an instance 2021-03-21 14:23:05 -07:00
Mouse Reeve
bb8cac021b Merge branch 'main' into production 2021-03-21 12:54:00 -07:00
Mouse Reeve
94d18f6c24 Merge branch 'main' into production 2021-03-20 20:47:53 -07:00
Mouse Reeve
5f1694a7b0 Merge branch 'main' into production 2021-03-19 20:35:29 -07:00
Mouse Reeve
08cf9b5b40 Merge branch 'main' into production 2021-03-19 16:07:57 -07:00
Mouse Reeve
8586f0fe75 Merge branch 'main' into production 2021-03-19 11:01:32 -07:00
Mouse Reeve
a868e6c6fd Merge branch 'main' into production 2021-03-18 09:24:55 -07:00
Mouse Reeve
fd4b2adb8b Merge branch 'main' into production 2021-03-16 14:21:02 -07:00
Mouse Reeve
52905a3668 Merge branch 'main' into production 2021-03-15 17:04:21 -07:00
Mouse Reeve
d202bd1d1d Merge branch 'main' into production 2021-03-15 16:02:53 -07:00
Mouse Reeve
42ba3753d6
Merge pull request #719 from mouse-reeve/certbot-install
Certbot install
2021-03-15 14:25:35 -07:00
Mouse Reeve
52cf9c67b9 Merge branch 'main' into production 2021-03-15 10:29:29 -07:00
Mouse Reeve
e8b89eee73 Merge branch 'main' into production 2021-03-13 18:25:15 -08:00
Mouse Reeve
a04511ccf7 Merge branch 'main' into production 2021-03-13 16:15:42 -08:00
Mouse Reeve
fd3c6d1d21 Merge branch 'main' into production 2021-03-13 15:48:13 -08:00
Mouse Reeve
6a14529893 Merge branch 'main' into production 2021-03-13 14:06:08 -08:00
Mouse Reeve
36088554e8
Merge branch 'production' into certbot-install 2021-03-13 11:02:27 -08:00
Mouse Reeve
59cc5c112f
Merge pull request #720 from bcj/bcj/reverse-proxy
Add instructions for handling port conflicts
2021-03-13 09:57:13 -08:00
bcj
b0f78e6d0b Add instructions for handling port conflicts
Add information on how to handle port conflicts when building Bookwyrm, and specifically instructions for handling conflicts caused by another webserver
2021-03-12 22:21:24 -06:00
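Before following such instructions, the usual first step is confirming which process already owns the conflicting ports; a small illustrative check:

    # See what is already listening on 80/443 before starting BookWyrm.
    sudo ss -tlnp | grep -E ':(80|443) '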
Mouse Reeve
28160137d0 Comments out https part of nginx config so certbot can run 2021-03-12 14:34:00 -08:00
Mouse Reeve
4bf61e0ef0 Updates production install instructions 2021-03-12 14:15:53 -08:00
Mouse Reeve
f6366e1c4a Merge branch 'main' into production 2021-03-11 10:29:34 -08:00
Mouse Reeve
593d5d309a
Merge pull request #714 from bcj/bcj/automatic-backups
Turn on Automatic backups for the DB
2021-03-09 10:28:51 -08:00
bcj
84b525f83e Add a script for pruning old backup files
Adds a pruning script which is installed but not set to run by default.
Also adds a test setup for that script that can be run in a container that replicates the db container's conditions
2021-03-08 23:16:34 -06:00
bcj
015d45ef99 Start cron on db service start
init.d isn't run when docker containers start, so we need to modify the entrypoint.
This commit makes the ugly choice of injecting the command automatically in order to avoid the need to manually maintain the image's entrypoint.
2021-03-08 23:09:54 -06:00
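A sketch of the entrypoint-wrapping approach described, assuming the official postgres image (whose entrypoint is docker-entrypoint.sh); the commit injects this automatically rather than hand-writing it:

    #!/bin/bash
    # Start cron ourselves, since init.d scripts never run in a container,
    # then hand off to the stock postgres entrypoint.
    service cron start
    exec docker-entrypoint.sh "$@"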
Mouse Reeve
6c7fcb0dd1 Merge branch 'main' into production 2021-03-08 10:02:46 -08:00
Mouse Reeve
6f6ca40ce7 Merge branch 'main' into production 2021-03-07 13:21:31 -08:00
Mouse Reeve
c770b369d2 Merge branch 'main' into production 2021-03-07 09:00:20 -08:00
Mouse Reeve
88879207b9 Don't install gettext in production 2021-03-03 15:18:06 -08:00
Mouse Reeve
8ac2315cc6 Merge branch 'main' into production 2021-03-03 15:17:47 -08:00
Mouse Reeve
e5283f9576 Merge branch 'main' into production 2021-03-02 20:31:34 -08:00
Mouse Reeve
6ecda991d9 Merge branch 'main' into production 2021-03-02 13:52:19 -08:00
Mouse Reeve
cabb486cb8 Merge branch 'main' into production 2021-03-02 13:43:50 -08:00
Mouse Reeve
a1428a6030 Merge branch 'main' into production 2021-03-01 11:44:13 -08:00
Mouse Reeve
489ac29761 Merge branch 'main' into production 2021-02-27 12:12:24 -08:00
Mouse Reeve
df4def6cef Merge branch 'main' into production 2021-02-24 13:32:44 -08:00
Mouse Reeve
e36ddb3f9b Merge branch 'main' into production 2021-02-24 07:16:46 -08:00
Mouse Reeve
396cb30c3a Merge branch 'main' into production 2021-02-23 20:21:47 -08:00
Mouse Reeve
98f35929b4 Merge branch 'main' into production 2021-02-22 09:50:40 -08:00
Mouse Reeve
888987f19d Merge branch 'main' into production 2021-02-22 08:40:24 -08:00
Mouse Reeve
ac80df7ee0 Merge branch 'main' into production 2021-02-20 11:25:03 -08:00
Mouse Reeve
70bdac3706 Merge branch 'main' into production 2021-02-12 16:23:27 -08:00
Mouse Reeve
355b2fad35 Merge branch 'main' into production 2021-02-12 10:11:16 -08:00
Mouse Reeve
f107e3d499 db password that more clearly indicates to change it 2021-02-10 17:25:23 -08:00
Mouse Reeve
5d7bd6a92b Merge branch 'main' into production 2021-02-10 17:24:31 -08:00
Mouse Reeve
a2c7bffec9 Merge branch 'main' into production 2021-02-04 12:25:35 -08:00
Mouse Reeve
d41c1b7213 Merge branch 'main' into production 2021-02-03 18:09:35 -08:00
Mouse Reeve
f9da72d957 Merge branch 'main' into production 2021-02-03 17:11:32 -08:00
Mouse Reeve
6105a6921b Merge branch 'main' into production 2021-02-03 16:50:00 -08:00
Mouse Reeve
a0b0edbc3e Merge branch 'main' into production 2021-02-03 16:45:44 -08:00
Mouse Reeve
a3966aa807 Merge branch 'main' into production 2021-02-03 11:01:20 -08:00
Mouse Reeve
a3768b52a6 Merge branch 'main' into production 2021-02-03 11:00:44 -08:00
Mouse Reeve
7e7f80d31e Merge branch 'main' into production 2021-01-30 17:53:23 -08:00
Mouse Reeve
9c5444ad7a Merge branch 'main' into production 2021-01-30 12:31:34 -08:00
Mouse Reeve
657dff7e95 Merge branch 'main' into production 2021-01-30 11:53:00 -08:00
Mouse Reeve
bfdfb846da Merge branch 'main' into production 2021-01-30 09:20:19 -08:00
Mouse Reeve
00c8fab365 Merge branch 'main' into production 2021-01-27 07:38:58 -08:00
Mouse Reeve
e4001aba0b Merge branch 'main' into production 2021-01-21 16:57:26 -08:00
Mouse Reeve
6545141bcf Merge branch 'main' into production 2021-01-19 09:53:54 -08:00
Mouse Reeve
7ef29bb99e Merge branch 'main' into production 2021-01-19 07:50:18 -08:00
Mouse Reeve
d9d5bc4e31 Merge branch 'main' into production 2021-01-19 07:19:37 -08:00
Mouse Reeve
7eae4cdebb Merge branch 'main' into production 2021-01-18 18:56:37 -08:00
Mouse Reeve
1eb979f05d Merge branch 'main' into production 2021-01-18 16:55:28 -08:00
Mouse Reeve
28a7156f36 Merge branch 'main' into production 2021-01-18 13:52:42 -08:00
Mouse Reeve
802d12421b Merge branch 'main' into production 2021-01-18 12:55:03 -08:00
Mouse Reeve
41d9fe9d9d Merge branch 'main' into production 2021-01-18 12:01:33 -08:00
Mouse Reeve
78d358a916 Merge branch 'main' into production 2021-01-18 11:52:09 -08:00
Mouse Reeve
6d4335e05b Merge branch 'main' into production 2021-01-18 11:46:51 -08:00
Mouse Reeve
5e0cb746d4 Merge branch 'main' into production 2021-01-18 10:26:58 -08:00
Mouse Reeve
c7d6273b3b Merge branch 'main' into production 2021-01-12 07:30:49 -08:00
Mouse Reeve
5abba57bfe Merge branch 'main' into production 2021-01-11 18:15:42 -08:00
Mouse Reeve
12dbd47207 Merge branch 'main' into production 2021-01-11 15:29:50 -08:00
Mouse Reeve
8156f1905d Merge branch 'main' into production 2021-01-07 09:37:08 -08:00
Mouse Reeve
14e0102694 Merge branch 'main' into production 2021-01-07 09:33:43 -08:00
Mouse Reeve
aafc9654c1 Merge branch 'main' into production 2021-01-06 16:06:06 -08:00
Mouse Reeve
97f050c68e Merge branch 'main' into production 2021-01-06 14:14:43 -08:00
Mouse Reeve
5c24bb6243
Merge pull request #487 from mouse-reeve/gunicorn
Use gunicorn as production runner
2021-01-06 10:37:27 -08:00
Mouse Reeve
08cf668233 Use gunicorn as production runner 2021-01-05 19:28:13 -08:00
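A minimal sketch of what running a Django project under gunicorn looks like; the module path follows Django convention for a project named bookwyrm, and the worker count and bind address are illustrative:

    gunicorn bookwyrm.wsgi:application --workers 4 --bind 0.0.0.0:8000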
Mouse Reeve
0a75c33de4
Merge pull request #486 from mouse-reeve/flower-auth
uses basic auth for flower
2021-01-05 12:54:42 -08:00
Mouse Reeve
a29a5dbde9 uses basic auth for flower 2021-01-05 12:52:10 -08:00
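Flower ships a --basic_auth option for exactly this; a sketch in which the celery app name and the credentials are assumptions:

    celery -A celerywyrm flower --basic_auth="admin:change-me"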
Mouse Reeve
636de3ae54 Merge branch 'main' into production 2021-01-05 11:48:15 -08:00
Mouse Reeve
0f49436475 Merge branch 'main' into production 2021-01-04 21:48:40 -08:00
Mouse Reeve
79dca312fb Merge branch 'main' into production 2021-01-03 20:08:22 -08:00
Mouse Reeve
c69fdf7bb6 Merge branch 'main' into production 2021-01-03 15:47:17 -08:00
Mouse Reeve
2719332e82 Merge branch 'main' into production 2021-01-03 14:51:16 -08:00
Mouse Reeve
5b397c42f6 Merge branch 'main' into production 2021-01-03 10:14:01 -08:00
Mouse Reeve
2635d109ed Merge branch 'main' into production 2021-01-03 08:18:57 -08:00
Mouse Reeve
34ccf60868
Merge pull request #464 from mouse-reeve/certbot-renew
Fixes acme challenge path for certbot renewal
2021-01-02 16:29:26 -08:00
Mouse Reeve
d3192fb1bb Fixes acme challenge path for certbot renewal 2021-01-02 12:14:51 -08:00
Mouse Reeve
042cfe2dfc Merge branch 'main' into production 2021-01-01 07:33:03 -08:00
Mouse Reeve
42d50d15f8 Merge branch 'main' into production 2020-12-31 14:37:24 -08:00
Mouse Reeve
2d21a31c13 Merge branch 'main' into production 2020-12-27 14:45:53 -08:00
Mouse Reeve
77db2e183d Merge branch 'main' into production 2020-12-21 13:21:59 -08:00
Mouse Reeve
255261dc3d Merge branch 'main' into production 2020-12-20 13:15:27 -08:00
Mouse Reeve
44f764a34a Merge branch 'main' into production 2020-12-20 11:42:28 -08:00
Mouse Reeve
b910be99c3 Merge branch 'main' into production 2020-12-19 20:33:36 -08:00
Mouse Reeve
f7769db99b Merge branch 'main' into production 2020-12-17 22:16:32 -08:00
Mouse Reeve
b36cf5a7b8 Merge branch 'main' into production 2020-12-17 13:56:45 -08:00
Mouse Reeve
c5b48f521a Merge branch 'main' into production 2020-12-17 13:26:42 -08:00
Mouse Reeve
0cd9f81431 Merge branch 'main' into production 2020-12-17 11:33:52 -08:00
Mouse Reeve
ad6b2315ab Merge branch 'main' into production 2020-12-16 20:28:47 -08:00
Mouse Reeve
4c8583bfdb Merge branch 'main' into production 2020-12-16 15:59:59 -08:00
Mouse Reeve
175eab01f7 Merge branch 'main' into production 2020-12-16 15:02:23 -08:00
Mouse Reeve
d5cc1d2f02 Merge branch 'main' into production 2020-12-11 17:43:40 -08:00
Mouse Reeve
4171829626 Merge branch 'main' into production 2020-12-02 15:28:01 -08:00
Mouse Reeve
f3576d59d7 Merge branch 'main' into production 2020-11-24 13:46:22 -08:00
Mouse Reeve
e5b850299c Merge branch 'main' into production 2020-11-22 09:38:06 -08:00
Mouse Reeve
d417f1e09d Merge branch 'main' into production 2020-11-18 12:33:16 -08:00
Mouse Reeve
cb124e9ba4 Merge branch 'main' into production 2020-11-17 14:28:48 -08:00
Mouse Reeve
567ea40f52 Merge branch 'main' into production 2020-11-13 12:18:30 -08:00
Mouse Reeve
8f595b6a35 Merge branch 'main' into production 2020-11-13 11:48:55 -08:00
Mouse Reeve
e9ed457012 Merge branch 'main' into production 2020-11-12 14:40:49 -08:00
Mouse Reeve
5cfdc75c5f Remove more info box when it's not used 2020-11-11 11:24:46 -08:00
Mouse Reeve
0eacee02ac Merge branch 'main' into production 2020-11-11 11:10:15 -08:00
Mouse Reeve
dac3c9353d Merge branch 'main' into production 2020-11-09 16:58:59 -08:00
Mouse Reeve
da24241b78 Merge branch 'main' into production 2020-11-09 13:14:35 -08:00
Mouse Reeve
2d96c8a35a Merge branch 'main' into production 2020-11-08 20:17:52 -08:00
Mouse Reeve
9ac94543b9 Merge branch 'main' of github.com:mouse-reeve/bookwyrm into main 2020-11-08 20:15:57 -08:00
Mouse Reeve
00752232ff Merge branch 'jim/cli-tooling' of https://github.com/jimfingal/bookwyrm into main 2020-11-08 20:03:27 -08:00
Mouse Reeve
9db851f2d4 Merge branch 'main' into production 2020-11-08 16:47:52 -08:00
Mouse Reeve
4424acc5c2 Merge branch 'main' into production 2020-11-08 16:27:54 -08:00
Mouse Reeve
eb1e4fd24c Don't show re-shelve buttons on other people's shelves
yikes
2020-11-07 20:47:17 -08:00
Mouse Reeve
abd96717da Merge branch 'main' into production 2020-11-07 20:14:07 -08:00
Mouse Reeve
29e5c67f83 Merge branch 'main' into production 2020-11-07 19:15:27 -08:00
Mouse Reeve
0574e36602 Merge branch 'main' into production 2020-11-07 19:02:05 -08:00
Mouse Reeve
b20de604d3 Merge branch 'main' into production 2020-11-07 19:01:25 -08:00
Mouse Reeve
de95b60b0b
Merge pull request #300 from mouse-reeve/main
fixing federation of images bugs, finally
2020-11-07 11:01:07 -08:00
Mouse Reeve
bc62ed231f Merge branch 'main' into production 2020-11-06 20:51:41 -08:00
Mouse Reeve
c80d63d184 Merge branch 'main' into production 2020-11-06 20:40:48 -08:00
Mouse Reeve
46d9a444fd Merge branch 'main' into production 2020-11-06 15:51:31 -08:00
Mouse Reeve
6dd83d490f Merge branch 'main' into production 2020-11-06 14:53:44 -08:00
Mouse Reeve
5782961e58 Merge branch 'main' into production 2020-11-06 09:09:17 -08:00
Mouse Reeve
573bdfd56b Merge branch 'main' into production 2020-11-05 13:51:37 -08:00
Mouse Reeve
8dc350e8a1 prod-only configuration 2020-11-05 11:47:40 -08:00
Mouse Reeve
44b299b28f Merge branch 'main' into production 2020-11-05 11:45:28 -08:00
Mouse Reeve
4dc53c56d5 Merge branch 'main' into production 2020-11-05 11:44:59 -08:00
Mouse Reeve
5a051d669f Merge branch 'main' into production 2020-11-04 16:32:20 -08:00
Mouse Reeve
ecca49f70a Merge branch 'main' into production 2020-11-04 14:19:27 -08:00
Mouse Reeve
004a3e5e56 Merge branch 'main' into production 2020-11-04 14:13:59 -08:00
Mouse Reeve
1a61c6eb07 Merge branch 'main' into production 2020-11-04 14:01:57 -08:00
Mouse Reeve
ea672c74e2 Merge branch 'main' into production 2020-11-04 13:33:23 -08:00
Mouse Reeve
2823eca70d Merge branch 'main' into production 2020-11-04 13:13:22 -08:00
Mouse Reeve
88de4ff046 Merge branch 'main' into production 2020-11-04 12:12:58 -08:00
Mouse Reeve
c577c91912 Merge branch 'main' into production 2020-11-04 11:31:01 -08:00
Mouse Reeve
dd8f91a044 Merge branch 'main' into production 2020-11-02 15:40:15 -08:00
Mouse Reeve
de66d77f4b Merge branch 'main' into production 2020-11-02 14:13:48 -08:00
Mouse Reeve
f6026f5ed8 Merge branch 'main' into production 2020-11-02 11:55:06 -08:00
Mouse Reeve
17ea59e655 Merge branch 'main' into production 2020-11-02 09:24:26 -08:00
Mouse Reeve
6b667aa575 Merge branch 'main' into production 2020-11-02 09:07:21 -08:00
Mouse Reeve
5e2667c297 Remove option to resetdb from production helpers 2020-11-01 12:16:03 -08:00
Mouse Reeve
7db77c690e Merge branch 'main' into production 2020-11-01 12:09:59 -08:00
Mouse Reeve
c7b2d7a4b9 Merge branch 'main' into production 2020-11-01 11:58:14 -08:00
Mouse Reeve
bfef6f2d8a Merge branch 'main' into production 2020-11-01 11:14:01 -08:00
Mouse Reeve
875b473711 Merge branch 'main' into production 2020-11-01 11:12:02 -08:00
Mouse Reeve
2b75fd1f14 Merge branch 'main' into production 2020-10-31 13:08:29 -07:00
Mouse Reeve
d6a166d9db Merge branch 'main' into production 2020-10-31 11:21:26 -07:00
Mouse Reeve
8d21cfa707 Merge branch 'main' into production 2020-10-31 11:05:53 -07:00
Mouse Reeve
7da705f87e Merge branch 'main' into production 2020-10-30 15:23:26 -07:00
Mouse Reeve
7af3afe24b Merge branch 'main' into production 2020-10-30 13:11:37 -07:00
Mouse Reeve
3cf2c4d1b0 Merge branch 'main' into production 2020-10-30 12:59:53 -07:00
Mouse Reeve
6d6a16daf3 Merge branch 'main' into production 2020-10-30 12:07:44 -07:00
Mouse Reeve
0a34cf8821 Merge branch 'main' into production 2020-10-30 11:56:05 -07:00
Mouse Reeve
d19405f379 fixes backup script name 2020-10-29 23:10:40 -07:00
Mouse Reeve
59d0164911 Fixes celery access to redis 2020-10-28 17:42:01 -07:00
Mouse Reeve
f23e10f8f7 Merge branch 'main' into production 2020-10-28 17:17:16 -07:00
Mouse Reeve
19f9b75c6b Merge branch 'main' into production 2020-10-28 16:54:09 -07:00
Mouse Reeve
9c68b6d430
Merge pull request #244 from mouse-reeve/cron
Adds database backup crontab
2020-10-28 16:33:49 -07:00
Mouse Reeve
d3987756eb Removes hanging cron command 2020-10-28 16:10:50 -07:00
Mouse Reeve
687ff5a6d7 Gets database cron into the right image 2020-10-28 15:56:25 -07:00
Mouse Reeve
8b061d63a8 Changes perms on db backup script 2020-10-28 13:56:23 -07:00
Mouse Reeve
6826386ea3 Adds database backup crontab 2020-10-28 13:55:08 -07:00
Mouse Reeve
b02a70b484 Merge branch 'main' into production 2020-10-28 13:17:44 -07:00
Mouse Reeve
c5bcde0ee4 replicate password line 2020-10-28 12:07:56 -07:00
Mouse Reeve
be1a26d32b Fixes redis conf location in image 2020-10-28 11:51:25 -07:00
Mouse Reeve
54b924a4e2 Secure redis.conf 2020-10-28 11:37:28 -07:00
Mouse Reeve
439feac110 Use redis password in production 2020-10-27 12:30:34 -07:00
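A sketch of password-protecting redis and checking that a client can still reach it (the password is a placeholder; BookWyrm reads the real one from .env):

    redis-server --requirepass "long-random-password"
    redis-cli -a "long-random-password" ping    # expect: PONG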
Mouse Reeve
7dd1deb438 Merge branch 'main' into production 2020-10-27 12:18:46 -07:00
Mouse Reeve
a9a40e4d69 Merge branch 'main' into production 2020-10-16 17:11:46 -07:00
Mouse Reeve
5446a5b238 Merge branch 'main' into production 2020-10-16 17:09:43 -07:00
Mouse Reeve
4b32948fd3 oops keep the .env stuff 2020-10-16 13:22:09 -07:00
Mouse Reeve
4b993fb5d6 env and config for production 2020-10-16 12:55:32 -07:00
570 changed files with 15152 additions and 71472 deletions

@@ -16,11 +16,6 @@ DEFAULT_LANGUAGE="English"
 ## Leave unset to allow all hosts
 # ALLOWED_HOSTS="localhost,127.0.0.1,[::1]"
-# Specify when the site is served from a port that is not the default
-# for the protocol (80 for HTTP or 443 for HTTPS).
-# Probably only necessary in development.
-# PORT=1333
 MEDIA_ROOT=images/
 # Database configuration
@@ -76,20 +71,14 @@ ENABLE_THUMBNAIL_GENERATION=true
 USE_S3=false
 AWS_ACCESS_KEY_ID=
 AWS_SECRET_ACCESS_KEY=
-# seconds for signed S3 urls to expire
-# this is currently only used for user export files
-S3_SIGNED_URL_EXPIRY=900
 # Commented are example values if you use a non-AWS, S3-compatible service
 # AWS S3 should work with only AWS_STORAGE_BUCKET_NAME and AWS_S3_REGION_NAME
-# non-AWS S3-compatible services will need AWS_STORAGE_BUCKET_NAME,
-# along with both AWS_S3_CUSTOM_DOMAIN and AWS_S3_ENDPOINT_URL.
-# AWS_S3_URL_PROTOCOL must end in ":" and defaults to the same protocol as
-# the BookWyrm instance ("http:" or "https:", based on USE_SSL).
+# along with both AWS_S3_CUSTOM_DOMAIN and AWS_S3_ENDPOINT_URL
 # AWS_STORAGE_BUCKET_NAME= # "example-bucket-name"
 # AWS_S3_CUSTOM_DOMAIN=None # "example-bucket-name.s3.fr-par.scw.cloud"
-# AWS_S3_URL_PROTOCOL=None # "http:"
 # AWS_S3_REGION_NAME=None # "fr-par"
 # AWS_S3_ENDPOINT_URL=None # "https://s3.fr-par.scw.cloud"
@@ -144,14 +133,7 @@ HTTP_X_FORWARDED_PROTO=false
 TWO_FACTOR_LOGIN_VALIDITY_WINDOW=2
 TWO_FACTOR_LOGIN_MAX_SECONDS=60
-# Additional hosts to allow in the Content-Security-Policy, "self" (should be
-# DOMAIN with optionally ":" + PORT) and AWS_S3_CUSTOM_DOMAIN (if used) are
-# added by default. Value should be a comma-separated list of host names.
+# Additional hosts to allow in the Content-Security-Policy, "self" (should be DOMAIN)
+# and AWS_S3_CUSTOM_DOMAIN (if used) are added by default.
+# Value should be a comma-separated list of host names.
 CSP_ADDITIONAL_HOSTS=
-# Time before being logged out (in seconds)
-# SESSION_COOKIE_AGE=2592000 # current default: 30 days
-# Maximum allowed memory for file uploads (increase if users are having trouble
-# uploading BookWyrm export files).
-# DATA_UPLOAD_MAX_MEMORY_MiB=100

@@ -1,68 +0,0 @@
<!--
Thanks for contributing! This template has some checkboxes that help keep track of what changes go into a release.
To check (tick) a list item, replace the space between square brackets with an x, like this:
- [x] I have checked the box
You can find more information and tips for BookWyrm contributors at https://docs.joinbookwyrm.com/contributing.html
-->
## Description
<!--
Describe what your pull request does here
-->
<!--
For pull requests that relate or close an issue, please include them
below. We like to follow [Github's guidance on linking issues to pull requests](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue).
For example having the text: "closes #1234" would connect the current pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## What type of Pull Request is this?
<!-- Check all that apply -->
- [ ] Bug Fix
- [ ] Enhancement
- [ ] Plumbing / Internals / Dependencies
- [ ] Refactor
## Does this PR change settings or dependencies, or break something?
<!-- Check all that apply -->
- [ ] This PR changes or adds default settings, configuration, or .env values
- [ ] This PR changes or adds dependencies
- [ ] This PR introduces other breaking changes
### Details of breaking or configuration changes (if any of above checked)
## Documentation
<!--
Documentation for users, admins, and developers is an important way to keep the BookWyrm community welcoming and make Bookwyrm easy to use.
Our documentation is maintained in a separate repository at https://github.com/bookwyrm-social/documentation
-->
<!-- Check all that apply -->
- [ ] New or amended documentation will be required if this PR is merged
- [ ] I have created a matching pull request in the Documentation repository
- [ ] I intend to create a matching pull request in the Documentation repository after this PR is merged
<!-- Amazing! Thanks for filling that out. Your PR will need to have passing tests and happy linters before we can merge
You will need to check your code with `black`, `pylint`, and `mypy`, or `./bw-dev formatters`
-->
### Tests
<!-- Check one -->
- [ ] My changes do not need new tests
- [ ] All tests I have added are passing
- [ ] I have written tests but need help to make them pass
- [ ] I have not written tests and need help to write them

.github/release.yml (26 changes)

@@ -1,26 +0,0 @@
changelog:
  exclude:
    labels:
      - ignore-for-release
  categories:
    - title: ‼️ Breaking Changes & New Settings ⚙️
      labels:
        - breaking-change
        - config-change
    - title: Updated Dependencies 🧸
      labels:
        - dependencies
    - title: New Features 🎉
      labels:
        - enhancement
    - title: Bug Fixes 🐛
      labels:
        - fix
        - bug
    - title: Internals/Plumbing 👩‍🔧
      - plumbing
      - tests
      - deployment
    - title: Other Changes
      labels:
        - "*"

.github/workflows/black.yml (new file, 17 lines)

@@ -0,0 +1,17 @@
name: Python Formatting (run ./bw-dev black to fix)

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
      - uses: psf/black@22.12.0
        with:
          version: 22.12.0

@@ -36,11 +36,11 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v3

       # Initializes the CodeQL tools for scanning.
       - name: Initialize CodeQL
-        uses: github/codeql-action/init@v3
+        uses: github/codeql-action/init@v2
         with:
           languages: ${{ matrix.language }}
           # If you wish to specify custom queries, you can do so here or in a config file.
@@ -51,7 +51,7 @@ jobs:
     # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
     # If this step fails, then you should remove it and run the build manually (see below)
     - name: Autobuild
-      uses: github/codeql-action/autobuild@v3
+      uses: github/codeql-action/autobuild@v2

     # Command-line programs to run using the OS shell.
     # 📚 https://git.io/JvXDl
@@ -65,4 +65,4 @@ jobs:
     #   make release

     - name: Perform CodeQL Analysis
-      uses: github/codeql-action/analyze@v3
+      uses: github/codeql-action/analyze@v2

@@ -10,7 +10,7 @@ jobs:
   lint:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
       - name: Install curlylint
         run: pip install curlylint

.github/workflows/django-tests.yml vendored Normal file

@ -0,0 +1,61 @@
name: Run Python Tests
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-20.04
services:
postgres:
image: postgres:13
env:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: hunter2
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 5432:5432
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: 3.9
- name: Install Dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Run Tests
env:
SECRET_KEY: beepbeep
DEBUG: false
USE_HTTPS: true
DOMAIN: your.domain.here
BOOKWYRM_DATABASE_BACKEND: postgres
MEDIA_ROOT: images/
POSTGRES_PASSWORD: hunter2
POSTGRES_USER: postgres
POSTGRES_DB: github_actions
POSTGRES_HOST: 127.0.0.1
CELERY_BROKER: ""
REDIS_BROKER_PORT: 6379
REDIS_BROKER_PASSWORD: beep
USE_DUMMY_CACHE: true
FLOWER_PORT: 8888
EMAIL_HOST: "smtp.mailgun.org"
EMAIL_PORT: 587
EMAIL_HOST_USER: ""
EMAIL_HOST_PASSWORD: ""
EMAIL_USE_TLS: true
ENABLE_PREVIEW_IMAGES: false
ENABLE_THUMBNAIL_GENERATION: true
HTTP_X_FORWARDED_PROTO: false
run: |
pytest -n 3


@ -19,11 +19,10 @@ jobs:
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it.
- uses: actions/checkout@v4
- uses: actions/checkout@v3
- name: Install modules
# run: npm install stylelint stylelint-config-recommended stylelint-config-standard stylelint-order eslint
run: npm install eslint@^8.9.0
run: npm install stylelint stylelint-config-recommended stylelint-config-standard stylelint-order eslint
# See .stylelintignore for files that are not linted.
# - name: Run stylelint


@ -14,10 +14,10 @@ jobs:
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it.
- uses: actions/checkout@v4
- uses: actions/checkout@v3
- name: Install modules
run: npm install prettier@2.5.1
run: npm install prettier
- name: Run Prettier
run: npx prettier --check bookwyrm/static/js/*.js

.github/workflows/pylint.yml vendored Normal file

@ -0,0 +1,27 @@
name: Pylint
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python 3.9
uses: actions/setup-python@v4
with:
python-version: 3.9
- name: Install Dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Analysing the code with pylint
run: |
pylint bookwyrm/


@ -1,99 +0,0 @@
name: Python
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
# overrides for .env.example
env:
POSTGRES_HOST: 127.0.0.1
PGPORT: 5432
POSTGRES_USER: postgres
POSTGRES_PASSWORD: hunter2
POSTGRES_DB: github_actions
SECRET_KEY: beepbeep
EMAIL_HOST_USER: ""
EMAIL_HOST_PASSWORD: ""
jobs:
pytest:
name: Tests (pytest)
runs-on: ubuntu-latest
services:
postgres:
image: postgres:13
env: # does not inherit from jobs.build.env
POSTGRES_USER: postgres
POSTGRES_PASSWORD: hunter2
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 5432:5432
steps:
- uses: actions/checkout@v4
- name: Set up Python 3.11
uses: actions/setup-python@v5
with:
python-version: 3.11
cache: pip
- name: Install Dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install pytest-github-actions-annotate-failures
- name: Set up .env
run: cp .env.example .env
- name: Check migrations up-to-date
run: python ./manage.py makemigrations --check -v 3
- name: Run Tests
run: pytest -n 3
pylint:
name: Linting (pylint)
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python 3.11
uses: actions/setup-python@v5
with:
python-version: 3.11
cache: pip
- name: Install Dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Analyse code with pylint
run: pylint bookwyrm/
mypy:
name: Typing (mypy)
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python 3.11
uses: actions/setup-python@v5
with:
python-version: 3.11
cache: pip
- name: Install Dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Set up .env
run: cp .env.example .env
- name: Analyse code with mypy
run: mypy bookwyrm celerywyrm
black:
name: Formatting (black; run ./bw-dev black to fix)
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- uses: psf/black@stable
with:
version: "22.*"

.gitignore vendored

@ -16,8 +16,6 @@
# BookWyrm
.env
/images/
/exports/
/static/
bookwyrm/static/css/bookwyrm.css
bookwyrm/static/css/themes/
!bookwyrm/static/css/themes/bookwyrm-*.scss
@ -38,6 +36,3 @@ nginx/default.conf
#macOS
**/.DS_Store
# Docker
docker-compose.override.yml


@ -1 +0,0 @@
'trailingComma': 'es5'


@ -3,19 +3,7 @@ ignore=migrations
load-plugins=pylint.extensions.no_self_use
[MESSAGES CONTROL]
disable =
cyclic-import,
duplicate-code,
fixme,
no-member,
raise-missing-from,
too-few-public-methods,
too-many-ancestors,
too-many-instance-attributes,
unnecessary-lambda-assignment,
unsubscriptable-object,
enable =
useless-suppression
disable=E1101,E1135,E1136,R0903,R0901,R0902,W0707,W0511,W0406,R0401,R0801,C3001,import-error
[FORMAT]
max-line-length=88


@ -1,4 +1,4 @@
FROM python:3.11
FROM python:3.9
ENV PYTHONUNBUFFERED 1


@ -1,334 +0,0 @@
# Federation
BookWyrm uses the [ActivityPub](http://activitypub.rocks/) protocol to send and receive user activity between other BookWyrm instances and other services that implement ActivityPub. To handle book data, BookWyrm has a handful of extended Activity types which are not part of the standard, but are legible to other BookWyrm instances.
## Activities and Objects
### Users and relationships
User relationship interactions follow the standard ActivityPub spec; see the example after this list.
- `Follow`: requests to receive statuses from a user, and to view their statuses that have followers-only privacy
- `Accept`: approves a `Follow` and finalizes the relationship
- `Reject`: denies a `Follow`
- `Block`: prevents users from seeing one another's statuses, and prevents the blocked user from viewing the actor's profile
- `Update`: updates a user's profile and settings
- `Delete`: deactivates a user
- `Undo`: reverses a `Block` or `Follow`
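A hypothetical `Follow` request might look like this (ids are illustrative, not captured from a real instance):
```json
{
  "id": "https://example.net/user/rat/follows/123",
  "type": "Follow",
  "actor": "https://example.net/user/rat",
  "object": "https://example.com/user/mouse",
  "@context": "https://www.w3.org/ns/activitystreams"
}
```
The recipient answers with an `Accept` or `Reject` whose `object` is the `Follow` activity itself.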
### Activities
- `Create/Status`: saves a new status in the database
- `Delete/Status`: removes a status
- `Like/Status`: creates a favorite on the status
- `Announce/Status`: boosts the status into the actor's timeline (see the example below)
- `Undo/*`: reverses an `Announce`, `Like`, or `Move`
- `Move/User`: moves a user from one ActivityPub id to another
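For example, a boost might arrive as an `Announce` whose `object` is the id of the boosted status (a hypothetical sketch; ids are illustrative):
```json
{
  "id": "https://example.net/user/rat/boost/99",
  "type": "Announce",
  "actor": "https://example.net/user/rat",
  "object": "https://example.net/user/library_lurker/review/2",
  "to": ["https://www.w3.org/ns/activitystreams#Public"],
  "cc": ["https://example.net/user/rat/followers"],
  "@context": "https://www.w3.org/ns/activitystreams"
}
```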
### Collections
A user's books and lists are represented by [`OrderedCollection`](https://www.w3.org/TR/activitystreams-vocabulary/#dfn-orderedcollection).
### Statuses
BookWyrm is focused on book reading activities - it is not a general-purpose messaging application. For this reason, BookWyrm only accepts status `Create` activities if they are:
- Direct messages (i.e., `Note`s with the privacy level `direct`, which mention a local user),
- Related to a book (of a custom status type that includes the field `inReplyToBook`), or
- Replies to existing statuses saved in the database.
All other statuses will be received by the instance inbox, but by design **will not be delivered to user inboxes or displayed to users**.
### Custom Object types
With the exception of `Note`, the following object types are used in BookWyrm but are not currently provided with a custom JSON-LD `@context` extension IRI. This is likely to change in the future, to make them true deserialisable JSON-LD objects.
##### Note
Within BookWyrm a `Note` is constructed according to [the ActivityStreams vocabulary](https://www.w3.org/TR/activitystreams-vocabulary/#dfn-note); however, `Note`s can only be created as direct messages or as replies to other statuses. As mentioned above, this also applies to incoming `Note`s.
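For example, a direct message is an ordinary `Note` addressed only to the mentioned user, rather than to the public collection or to followers (a hypothetical sketch; values are illustrative):
```json
{
  "id": "https://example.net/user/rat/status/42",
  "type": "Note",
  "published": "2023-06-30T21:43:46.013132+00:00",
  "attributedTo": "https://example.net/user/rat",
  "content": "<p>Have you read this one yet?</p>",
  "to": ["https://example.com/user/mouse"],
  "cc": [],
  "tag": [
    {
      "type": "Mention",
      "href": "https://example.com/user/mouse",
      "name": "@mouse@example.com"
    }
  ],
  "@context": "https://www.w3.org/ns/activitystreams"
}
```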
##### Review
A `Review` is a status in response to a book (indicated by the `inReplyToBook` field); it has a title, a body, and a numerical rating between 0 (not rated) and 5.
Example:
```json
{
"id": "https://example.net/user/library_lurker/review/2",
"type": "Review",
"published": "2023-06-30T21:43:46.013132+00:00",
"attributedTo": "https://example.net/user/library_lurker",
"content": "<p>This is an enjoyable book with great characters.</p>",
"to": ["https://example.net/user/library_lurker/followers"],
"cc": [],
"replies": {
"id": "https://example.net/user/library_lurker/review/2/replies",
"type": "OrderedCollection",
"totalItems": 0,
"first": "https://example.net/user/library_lurker/review/2/replies?page=1",
"last": "https://example.net/user/library_lurker/review/2/replies?page=1",
"@context": "https://www.w3.org/ns/activitystreams"
},
"summary": "Spoilers ahead!",
"tag": [],
"attachment": [],
"sensitive": true,
"inReplyToBook": "https://example.net/book/1",
"name": "What a cracking read",
"rating": 4.5,
"@context": "https://www.w3.org/ns/activitystreams"
}
```
##### Comment
A `Comment` mentions a book and has a message body, a reading status, and a progress indicator.
Example:
```json
{
"id": "https://example.net/user/library_lurker/comment/9",
"type": "Comment",
"published": "2023-06-30T21:43:46.013132+00:00",
"attributedTo": "https://example.net/user/library_lurker",
"content": "<p>This is a very enjoyable book so far.</p>",
"to": ["https://example.net/user/library_lurker/followers"],
"cc": [],
"replies": {
"id": "https://example.net/user/library_lurker/comment/9/replies",
"type": "OrderedCollection",
"totalItems": 0,
"first": "https://example.net/user/library_lurker/comment/9/replies?page=1",
"last": "https://example.net/user/library_lurker/comment/9/replies?page=1",
"@context": "https://www.w3.org/ns/activitystreams"
},
"summary": "Spoilers ahead!",
"tag": [],
"attachment": [],
"sensitive": true,
"inReplyToBook": "https://example.net/book/1",
"readingStatus": "reading",
"progress": 25,
"progressMode": "PG",
"@context": "https://www.w3.org/ns/activitystreams"
}
```
##### Quotation
A `Quotation` (aka "quote") mentions a book and has a message body and an excerpt from the book, with the excerpt's position given as a page number or a percentage indicator.
Example:
```json
{
"id": "https://example.net/user/mouse/quotation/13",
"url": "https://example.net/user/mouse/quotation/13",
"inReplyTo": null,
"published": "2020-05-10T02:38:31.150343+00:00",
"attributedTo": "https://example.net/user/mouse",
"to": [
"https://www.w3.org/ns/activitystreams#Public"
],
"cc": [
"https://example.net/user/mouse/followers"
],
"sensitive": false,
"content": "I really like this quote",
"type": "Quotation",
"replies": {
"id": "https://example.net/user/mouse/quotation/13/replies",
"type": "Collection",
"first": {
"type": "CollectionPage",
"next": "https://example.net/user/mouse/quotation/13/replies?only_other_accounts=true&page=true",
"partOf": "https://example.net/user/mouse/quotation/13/replies",
"items": []
}
},
"inReplyToBook": "https://example.net/book/1",
"quote": "To be or not to be, that is the question.",
"position": 50,
"positionMode": "PCT",
"@context": "https://www.w3.org/ns/activitystreams"
}
```
### Custom Objects
##### Work
A particular book, a "work" in the [FRBR](https://en.wikipedia.org/wiki/Functional_Requirements_for_Bibliographic_Records) sense.
Example:
```json
{
"id": "https://bookwyrm.social/book/5988",
"type": "Work",
"authors": [
"https://bookwyrm.social/author/417"
],
"first_published_date": null,
"published_date": null,
"title": "Piranesi",
"sort_title": null,
"subtitle": null,
"description": "**From the *New York Times* bestselling author of *Jonathan Strange & Mr. Norrell*, an intoxicating, hypnotic new novel set in a dreamlike alternative reality.",
"languages": [],
"series": null,
"series_number": null,
"subjects": [
"English literature"
],
"subject_places": [],
"openlibrary_key": "OL20893680W",
"librarything_key": null,
"goodreads_key": null,
"attachment": [
{
"url": "https://bookwyrm.social/images/covers/10226290-M.jpg",
"type": "Image"
}
],
"lccn": null,
"editions": [
"https://bookwyrm.social/book/5989"
],
"@context": "https://www.w3.org/ns/activitystreams"
}
```
##### Edition
A particular _manifestation_ of a Work, in the [FRBR](https://en.wikipedia.org/wiki/Functional_Requirements_for_Bibliographic_Records) sense.
Example:
```json
{
"id": "https://bookwyrm.social/book/5989",
"lastEditedBy": "https://example.net/users/rat",
"type": "Edition",
"authors": [
"https://bookwyrm.social/author/417"
],
"first_published_date": null,
"published_date": "2020-09-15T00:00:00+00:00",
"title": "Piranesi",
"sort_title": null,
"subtitle": null,
"description": "Piranesi's house is no ordinary building; its rooms are infinite, its corridors endless, its walls are lined with thousands upon thousands of statues, each one different from all the others.",
"languages": [
"English"
],
"series": null,
"series_number": null,
"subjects": [],
"subject_places": [],
"openlibrary_key": "OL29486417M",
"librarything_key": null,
"goodreads_key": null,
"isfdb": null,
"attachment": [
{
"url": "https://bookwyrm.social/images/covers/50202953._SX318_.jpg",
"type": "Image"
}
],
"isbn_10": "1526622424",
"isbn_13": "9781526622426",
"oclc_number": null,
"asin": null,
"pages": 272,
"physical_format": null,
"publishers": [
"Bloomsbury Publishing Plc"
],
"work": "https://bookwyrm.social/book/5988",
"@context": "https://www.w3.org/ns/activitystreams"
}
```
##### Shelf
A user's book collection. By default, every user has `to-read`, `reading`, `read`, and `stopped-reading` shelves, which are used to track reading progress. Users may create an unlimited number of additional shelves with their own ids.
Example:
```json
{
"id": "https://example.net/user/avid_reader/books/extraspecialbooks-5",
"type": "Shelf",
"totalItems": 0,
"first": "https://example.net/user/avid_reader/books/extraspecialbooks-5?page=1",
"last": "https://example.net/user/avid_reader/books/extraspecialbooks-5?page=1",
"name": "Extra special books",
"owner": "https://example.net/user/avid_reader",
"to": [
"https://www.w3.org/ns/activitystreams#Public"
],
"cc": [
"https://example.net/user/avid_reader/followers"
],
"@context": "https://www.w3.org/ns/activitystreams"
}
```
##### List
A collection of books that may have items contributed by users other than the one who created the list.
Example:
```json
{
"id": "https://example.net/list/1",
"type": "BookList",
"totalItems": 0,
"first": "https://example.net/list/1?page=1",
"last": "https://example.net/list/1?page=1",
"name": "My cool list",
"owner": "https://example.net/user/avid_reader",
"to": [
"https://www.w3.org/ns/activitystreams#Public"
],
"cc": [
"https://example.net/user/avid_reader/followers"
],
"summary": "A list of books I like.",
"curation": "curated",
"@context": "https://www.w3.org/ns/activitystreams"
}
```
#### Activities
- `Create`: adds a shelf or list to the database
- `Delete`: removes a shelf or list
- `Add`: adds a book to a shelf or list (sketched below)
- `Remove`: removes a book from a shelf or list
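As a sketch, adding a book to a shelf might be federated as something like the following `Add` (BookWyrm wraps the book in a shelf-item object whose exact shape is not reproduced here, so the `object` value below is simplified and illustrative):
```json
{
  "id": "https://example.net/user/avid_reader/shelfbook/7",
  "type": "Add",
  "actor": "https://example.net/user/avid_reader",
  "object": "https://example.net/book/5989",
  "target": "https://example.net/user/avid_reader/books/extraspecialbooks-5",
  "@context": "https://www.w3.org/ns/activitystreams"
}
```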
## Alternative Serialization
Because BookWyrm uses custom object types that aren't listed in [the standard ActivityStreams Vocabulary](https://www.w3.org/TR/activitystreams-vocabulary), some statuses are transformed into standard types when sent to or viewed by non-BookWyrm services. `Review`s are converted into `Article`s, and `Comment`s and `Quotation`s are converted into `Note`s, with a link to the book and the cover image attached.
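For instance, the `Review` example above might reach a non-BookWyrm server as an `Article` along these lines (an illustrative sketch of the transformation, not a captured payload):
```json
{
  "id": "https://example.net/user/library_lurker/review/2",
  "type": "Article",
  "published": "2023-06-30T21:43:46.013132+00:00",
  "attributedTo": "https://example.net/user/library_lurker",
  "name": "What a cracking read",
  "content": "<p>This is an enjoyable book with great characters.</p><p>(Review of <a href=\"https://example.net/book/1\">a book</a>)</p>",
  "to": ["https://example.net/user/library_lurker/followers"],
  "cc": [],
  "summary": "Spoilers ahead!",
  "sensitive": true,
  "@context": "https://www.w3.org/ns/activitystreams"
}
```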
In the future this may be done with [JSON-LD type arrays](https://www.w3.org/TR/json-ld/#specifying-the-type) instead.
## Other extensions
### Webfinger
BookWyrm uses the [Webfinger](https://datatracker.ietf.org/doc/html/rfc7033) standard to identify and disambiguate fediverse actors. The [Webfinger documentation on the Mastodon project](https://docs.joinmastodon.org/spec/webfinger/) provides a good overview of how Webfinger is used.
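For example, looking up the hypothetical actor `mouse@example.net` means requesting `https://example.net/.well-known/webfinger?resource=acct:mouse@example.net`, which returns a JSON Resource Descriptor pointing at the actor document (illustrative values):
```json
{
  "subject": "acct:mouse@example.net",
  "links": [
    {
      "rel": "self",
      "type": "application/activity+json",
      "href": "https://example.net/user/mouse"
    }
  ]
}
```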
### HTTP Signatures
BookWyrm uses and requires HTTP signatures for all `POST` requests. `GET` requests are not signed by default, but if BookWyrm receives a `403` response to a `GET` it will re-send the request, signed by the default server user. This user will usually have an id of `https://example.net/user/bookwyrm.instance.actor`.
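The retry behaviour can be pictured with a minimal sketch (`make_signature` is a hypothetical callable standing in for BookWyrm's signature helper; the real implementation also sets a `Date` header, which the signature covers):
```python
import requests

ACCEPT_HEADERS = {"Accept": "application/activity+json"}


def fetch_activity(url, make_signature):
    """GET an ActivityPub object, signing the request only on demand"""
    response = requests.get(url, headers=ACCEPT_HEADERS, timeout=15)
    if response.status_code == 403:
        # the server requires signed fetches (e.g. Mastodon's secure
        # mode), so retry, signed by the instance actor
        signed = {**ACCEPT_HEADERS, "Signature": make_signature("get", url)}
        response = requests.get(url, headers=signed, timeout=15)
    response.raise_for_status()
    return response.json()
```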
#### publicKey id
In older versions of BookWyrm the `publicKey.id` was incorrectly listed in request headers as `https://example.net/user/username#main-key`. As of v0.6.3 the id is now listed correctly, as `https://example.net/user/username/#main-key`. In most ActivityPub implementations this will make no difference, as the URL will usually resolve to the same place.
### NodeInfo
BookWyrm uses the [NodeInfo](http://nodeinfo.diaspora.software/) standard to provide statistics and version information for each instance.
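A BookWyrm instance's NodeInfo 2.0 document looks roughly like this (illustrative values):
```json
{
  "version": "2.0",
  "software": {"name": "bookwyrm", "version": "0.6.3"},
  "protocols": ["activitypub"],
  "services": {"inbound": [], "outbound": []},
  "openRegistrations": true,
  "usage": {
    "users": {"total": 100, "activeMonth": 40, "activeHalfyear": 60},
    "localPosts": 1500
  },
  "metadata": {}
}
```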
## Further Documentation
See [docs.joinbookwyrm.com/](https://docs.joinbookwyrm.com/) for more documentation.


@ -10,6 +10,7 @@ BookWyrm is a social network for tracking your reading, talking about books, wri
## Links
[![Mastodon Follow](https://img.shields.io/mastodon/follow/000146121?domain=https%3A%2F%2Ftech.lgbt&style=social)](https://tech.lgbt/@bookwyrm)
[![Twitter Follow](https://img.shields.io/twitter/follow/BookWyrmSocial?style=social)](https://twitter.com/BookWyrmSocial)
- [Project homepage](https://joinbookwyrm.com/)
- [Support](https://patreon.com/bookwyrm)


@ -1 +0,0 @@
0.7.3


@ -4,11 +4,7 @@ import sys
from .base_activity import ActivityEncoder, Signature, naive_parse
from .base_activity import Link, Mention, Hashtag
from .base_activity import (
ActivitySerializerError,
resolve_remote_id,
get_representative,
)
from .base_activity import ActivitySerializerError, resolve_remote_id
from .image import Document, Image
from .note import Note, GeneratedNote, Article, Comment, Quotation
from .note import Review, Rating
@ -23,7 +19,6 @@ from .verbs import Create, Delete, Undo, Update
from .verbs import Follow, Accept, Reject, Block
from .verbs import Add, Remove
from .verbs import Announce, Like
from .verbs import Move
# this creates a list of all the Activity types that we can serialize,
# so when an Activity comes in from outside, we can check if it's known


@ -1,10 +1,7 @@
""" basics for an activitypub serializer """
from __future__ import annotations
from dataclasses import dataclass, fields, MISSING
from json import JSONEncoder
import logging
from typing import Optional, Union, TypeVar, overload, Any
import requests
from django.apps import apps
@ -13,16 +10,12 @@ from django.utils.http import http_date
from bookwyrm import models
from bookwyrm.connectors import ConnectorException, get_data
from bookwyrm.models import base_model
from bookwyrm.signatures import make_signature
from bookwyrm.settings import DOMAIN, INSTANCE_ACTOR_USERNAME
from bookwyrm.tasks import app, MISC
from bookwyrm.tasks import app, MEDIUM
logger = logging.getLogger(__name__)
# pylint: disable=invalid-name
TBookWyrmModel = TypeVar("TBookWyrmModel", bound=base_model.BookWyrmModel)
class ActivitySerializerError(ValueError):
"""routine problems serializing activitypub json"""
@ -72,13 +65,7 @@ class ActivityObject:
id: str
type: str
def __init__(
self,
activity_objects: Optional[
dict[str, Union[str, list[str], ActivityObject, base_model.BookWyrmModel]]
] = None,
**kwargs: Any,
):
def __init__(self, activity_objects=None, **kwargs):
"""this lets you pass in an object with fields that aren't in the
dataclass, which it ignores. Any field in the dataclass is required or
has a default value"""
@ -114,13 +101,13 @@ class ActivityObject:
# pylint: disable=too-many-locals,too-many-branches,too-many-arguments
def to_model(
self,
model: Optional[type[TBookWyrmModel]] = None,
instance: Optional[TBookWyrmModel] = None,
allow_create: bool = True,
save: bool = True,
overwrite: bool = True,
allow_external_connections: bool = True,
) -> Optional[TBookWyrmModel]:
model=None,
instance=None,
allow_create=True,
save=True,
overwrite=True,
allow_external_connections=True,
):
"""convert from an activity to a model instance. Args:
model: the django model that this object is being converted to
(will guess if not known)
@ -237,7 +224,7 @@ class ActivityObject:
omit = kwargs.get("omit", ())
data = self.__dict__.copy()
# recursively serialize
for k, v in data.items():
for (k, v) in data.items():
try:
if issubclass(type(v), ActivityObject):
data[k] = v.serialize()
@ -250,14 +237,11 @@ class ActivityObject:
pass
data = {k: v for (k, v) in data.items() if v is not None and k not in omit}
if "@context" not in omit:
data["@context"] = [
"https://www.w3.org/ns/activitystreams",
{"Hashtag": "as:Hashtag"},
]
data["@context"] = "https://www.w3.org/ns/activitystreams"
return data
@app.task(queue=MISC)
@app.task(queue=MEDIUM)
@transaction.atomic
def set_related_field(
model_name, origin_model_name, related_field_name, related_remote_id, data
@ -312,40 +296,14 @@ def get_model_from_type(activity_type):
# pylint: disable=too-many-arguments
@overload
def resolve_remote_id(
remote_id: str,
model: type[TBookWyrmModel],
refresh: bool = False,
save: bool = True,
get_activity: bool = False,
allow_external_connections: bool = True,
) -> TBookWyrmModel:
...
# pylint: disable=too-many-arguments
@overload
def resolve_remote_id(
remote_id: str,
model: Optional[str] = None,
refresh: bool = False,
save: bool = True,
get_activity: bool = False,
allow_external_connections: bool = True,
) -> base_model.BookWyrmModel:
...
# pylint: disable=too-many-arguments
def resolve_remote_id(
remote_id: str,
model: Optional[Union[str, type[base_model.BookWyrmModel]]] = None,
refresh: bool = False,
save: bool = True,
get_activity: bool = False,
allow_external_connections: bool = True,
) -> base_model.BookWyrmModel:
remote_id,
model=None,
refresh=False,
save=True,
get_activity=False,
allow_external_connections=True,
):
"""take a remote_id and return an instance, creating if necessary. Args:
remote_id: the unique url for looking up the object in the db or by http
model: a string or object representing the model that corresponds to the object
@ -369,13 +327,17 @@ def resolve_remote_id(
# load the data and create the object
try:
data = get_activitypub_data(remote_id)
data = get_data(remote_id)
except ConnectionError:
logger.info("Could not connect to host for remote_id: %s", remote_id)
return None
except requests.HTTPError as e:
logger.exception("HTTP error - remote_id: %s - error: %s", remote_id, e)
return None
if (e.response is not None) and e.response.status_code == 401:
# This most likely means it's a mastodon with secure fetch enabled.
data = get_activitypub_data(remote_id)
else:
logger.info("Could not connect to host for remote_id: %s", remote_id)
return None
# determine the model implicitly, if not provided
# or if it's a model with subclasses like Status, check again
if not model or hasattr(model.objects, "select_subclasses"):
@ -396,15 +358,19 @@ def resolve_remote_id(
def get_representative():
"""Get or create an actor representing the instance
to sign outgoing HTTP GET requests"""
return models.User.objects.get_or_create(
username=f"{INSTANCE_ACTOR_USERNAME}@{DOMAIN}",
defaults={
"email": "bookwyrm@localhost",
"local": True,
"localname": INSTANCE_ACTOR_USERNAME,
},
)[0]
to sign requests to 'secure mastodon' servers"""
username = f"{INSTANCE_ACTOR_USERNAME}@{DOMAIN}"
email = "bookwyrm@localhost"
try:
user = models.User.objects.get(username=username)
except models.User.DoesNotExist:
user = models.User.objects.create_user(
username=username,
email=email,
local=True,
localname=INSTANCE_ACTOR_USERNAME,
)
return user
def get_activitypub_data(url):
@ -423,7 +389,6 @@ def get_activitypub_data(url):
"Date": now,
"Signature": make_signature("get", sender, url, now),
},
timeout=15,
)
except requests.RequestException:
raise ConnectorException()


@ -1,6 +1,6 @@
""" book and author data """
from dataclasses import dataclass, field
from typing import Optional
from typing import List
from .base_activity import ActivityObject
from .image import Document
@ -11,17 +11,19 @@ from .image import Document
class BookData(ActivityObject):
"""shared fields for all book data and authors"""
openlibraryKey: Optional[str] = None
inventaireId: Optional[str] = None
librarythingKey: Optional[str] = None
goodreadsKey: Optional[str] = None
bnfId: Optional[str] = None
viaf: Optional[str] = None
wikidata: Optional[str] = None
asin: Optional[str] = None
aasin: Optional[str] = None
isfdb: Optional[str] = None
lastEditedBy: Optional[str] = None
openlibraryKey: str = None
inventaireId: str = None
librarythingKey: str = None
goodreadsKey: str = None
bnfId: str = None
viaf: str = None
wikidata: str = None
asin: str = None
aasin: str = None
isfdb: str = None
lastEditedBy: str = None
links: List[str] = field(default_factory=lambda: [])
fileLinks: List[str] = field(default_factory=lambda: [])
# pylint: disable=invalid-name
@ -33,19 +35,17 @@ class Book(BookData):
sortTitle: str = None
subtitle: str = None
description: str = ""
languages: list[str] = field(default_factory=list)
languages: List[str] = field(default_factory=lambda: [])
series: str = ""
seriesNumber: str = ""
subjects: list[str] = field(default_factory=list)
subjectPlaces: list[str] = field(default_factory=list)
subjects: List[str] = field(default_factory=lambda: [])
subjectPlaces: List[str] = field(default_factory=lambda: [])
authors: list[str] = field(default_factory=list)
authors: List[str] = field(default_factory=lambda: [])
firstPublishedDate: str = ""
publishedDate: str = ""
fileLinks: list[str] = field(default_factory=list)
cover: Optional[Document] = None
cover: Document = None
type: str = "Book"
@ -58,21 +58,22 @@ class Edition(Book):
isbn10: str = ""
isbn13: str = ""
oclcNumber: str = ""
pages: Optional[int] = None
pages: int = None
physicalFormat: str = ""
physicalFormatDetail: str = ""
publishers: list[str] = field(default_factory=list)
publishers: List[str] = field(default_factory=lambda: [])
editionRank: int = 0
type: str = "Edition"
# pylint: disable=invalid-name
@dataclass(init=False)
class Work(Book):
"""work instance of a book object"""
lccn: str = ""
editions: list[str] = field(default_factory=list)
editions: List[str] = field(default_factory=lambda: [])
type: str = "Work"
@ -82,12 +83,12 @@ class Author(BookData):
"""author of a book"""
name: str
isni: Optional[str] = None
viafId: Optional[str] = None
gutenbergId: Optional[str] = None
born: Optional[str] = None
died: Optional[str] = None
aliases: list[str] = field(default_factory=list)
isni: str = None
viafId: str = None
gutenbergId: str = None
born: str = None
died: str = None
aliases: List[str] = field(default_factory=lambda: [])
bio: str = ""
wikipediaLink: str = ""
type: str = "Author"


@ -18,6 +18,7 @@ class OrderedCollection(ActivityObject):
type: str = "OrderedCollection"
# pylint: disable=invalid-name
@dataclass(init=False)
class OrderedCollectionPrivate(OrderedCollection):
"""an ordered collection with privacy settings"""


@ -1,5 +1,5 @@
""" actor serializer """
from dataclasses import dataclass
from dataclasses import dataclass, field
from typing import Dict
from .base_activity import ActivityObject
@ -35,11 +35,9 @@ class Person(ActivityObject):
endpoints: Dict = None
name: str = None
summary: str = None
icon: Image = None
icon: Image = field(default_factory=lambda: {})
bookwyrmUser: bool = False
manuallyApprovesFollowers: str = False
discoverable: str = False
hideFollows: str = False
movedTo: str = None
alsoKnownAs: dict[str] = None
type: str = "Person"


@ -22,6 +22,7 @@ class Verb(ActivityObject):
self.object.to_model(allow_external_connections=allow_external_connections)
# pylint: disable=invalid-name
@dataclass(init=False)
class Create(Verb):
"""Create activity"""
@ -32,6 +33,7 @@ class Create(Verb):
type: str = "Create"
# pylint: disable=invalid-name
@dataclass(init=False)
class Delete(Verb):
"""Create activity"""
@ -61,6 +63,7 @@ class Delete(Verb):
# if we can't find it, we don't need to delete it because we don't have it
# pylint: disable=invalid-name
@dataclass(init=False)
class Update(Verb):
"""Update activity"""
@ -168,19 +171,9 @@ class Reject(Verb):
type: str = "Reject"
def action(self, allow_external_connections=True):
"""reject a follow or follow request"""
for model_name in ["UserFollowRequest", "UserFollows", None]:
model = apps.get_model(f"bookwyrm.{model_name}") if model_name else None
if obj := self.object.to_model(
model=model,
save=False,
allow_create=False,
allow_external_connections=allow_external_connections,
):
# Reject the first model that can be built.
obj.reject()
break
"""reject a follow request"""
obj = self.object.to_model(save=False, allow_create=False)
obj.reject()
@dataclass(init=False)
@ -224,6 +217,7 @@ class Like(Verb):
self.to_model(allow_external_connections=allow_external_connections)
# pylint: disable=invalid-name
@dataclass(init=False)
class Announce(Verb):
"""boosting a status"""
@ -237,30 +231,3 @@ class Announce(Verb):
def action(self, allow_external_connections=True):
"""boost"""
self.to_model(allow_external_connections=allow_external_connections)
@dataclass(init=False)
class Move(Verb):
"""a user moving an object"""
object: str
type: str = "Move"
origin: str = None
target: str = None
def action(self, allow_external_connections=True):
"""move"""
object_is_user = resolve_remote_id(remote_id=self.object, model="User")
if object_is_user:
model = apps.get_model("bookwyrm.MoveUser")
self.to_model(
model=model,
save=True,
allow_external_connections=allow_external_connections,
)
else:
# we might do something with this to move other objects at some point
pass


@ -8,7 +8,7 @@ from opentelemetry import trace
from bookwyrm import models
from bookwyrm.redis_store import RedisStore, r
from bookwyrm.tasks import app, STREAMS, IMPORT_TRIGGERED
from bookwyrm.tasks import app, LOW, MEDIUM, HIGH
from bookwyrm.telemetry import open_telemetry
@ -32,7 +32,7 @@ class ActivityStream(RedisStore):
stream_id = self.stream_id(user_id)
return f"{stream_id}-unread-by-type"
def get_rank(self, obj):
def get_rank(self, obj): # pylint: disable=no-self-use
"""statuses are sorted by date published"""
return obj.published_date.timestamp()
@ -112,7 +112,7 @@ class ActivityStream(RedisStore):
trace.get_current_span().set_attribute("status_privacy", status.privacy)
trace.get_current_span().set_attribute(
"status_reply_parent_privacy",
status.reply_parent.privacy if status.reply_parent else status.privacy,
status.reply_parent.privacy if status.reply_parent else None,
)
# direct messages don't appear in feeds, direct comments/reviews/etc do
if status.privacy == "direct" and status.status_type == "Note":
@ -139,14 +139,14 @@ class ActivityStream(RedisStore):
| (
Q(following=status.user) & Q(following=status.reply_parent.user)
) # if the user is following both authors
)
).distinct()
# only visible to the poster's followers and tagged users
elif status.privacy == "followers":
audience = audience.filter(
Q(following=status.user) # if the user is following the author
)
return audience.distinct("id")
return audience.distinct()
@tracer.start_as_current_span("ActivityStream.get_audience")
def get_audience(self, status):
@ -156,7 +156,7 @@ class ActivityStream(RedisStore):
status_author = models.User.objects.filter(
is_active=True, local=True, id=status.user.id
).values_list("id", flat=True)
return list(set(audience) | set(status_author))
return list(set(list(audience) + list(status_author)))
def get_stores_for_users(self, user_ids):
"""convert a list of user ids into redis store ids"""
@ -183,13 +183,15 @@ class HomeStream(ActivityStream):
def get_audience(self, status):
trace.get_current_span().set_attribute("stream_id", self.key)
audience = super()._get_audience(status)
if not audience:
return []
# if the user is following the author
audience = audience.filter(following=status.user).values_list("id", flat=True)
# if the user is the post's author
status_author = models.User.objects.filter(
is_active=True, local=True, id=status.user.id
).values_list("id", flat=True)
return list(set(audience) | set(status_author))
return list(set(list(audience) + list(status_author)))
def get_statuses_for_user(self, user):
return models.Status.privacy_filter(
@ -237,7 +239,9 @@ class BooksStream(ActivityStream):
)
audience = super()._get_audience(status)
return audience.filter(shelfbook__book__parent_work=work)
if not audience:
return models.User.objects.none()
return audience.filter(shelfbook__book__parent_work=work).distinct()
def get_audience(self, status):
# only show public statuses on the books feed,
@ -325,9 +329,10 @@ def add_status_on_create(sender, instance, created, *args, **kwargs):
remove_status_task.delay(instance.id)
return
# We don't want to create multiple add_status_tasks for each status, and because
# the transactions are atomic, on_commit won't run until the status is ready to add.
if not created:
# To avoid creating a zillion unnecessary tasks caused by re-saving the model,
# check if it's actually ready to send before we go. We're trusting this was
# set correctly by the inbox or view
if not instance.ready:
return
# when creating new things, gotta wait on the transaction
@ -338,11 +343,7 @@ def add_status_on_create(sender, instance, created, *args, **kwargs):
def add_status_on_create_command(sender, instance, created):
"""runs this code only after the database commit completes"""
# boosts trigger "saves" twice, so don't bother duplicating the task
if sender == models.Boost and not created:
return
priority = STREAMS
priority = HIGH
# check if this is an old status, de-prioritize if so
# (this will happen if federation is very slow, or, more expectedly, on csv import)
if instance.published_date < timezone.now() - timedelta(
@ -352,7 +353,7 @@ def add_status_on_create_command(sender, instance, created):
if instance.user.local:
return
# an out of date remote status is a low priority but should be added
priority = IMPORT_TRIGGERED
priority = LOW
add_status_task.apply_async(
args=(instance.id,),
@ -496,7 +497,7 @@ def remove_statuses_on_unshelve(sender, instance, *args, **kwargs):
# ---- TASKS
@app.task(queue=STREAMS)
@app.task(queue=LOW)
def add_book_statuses_task(user_id, book_id):
"""add statuses related to a book on shelve"""
user = models.User.objects.get(id=user_id)
@ -504,7 +505,7 @@ def add_book_statuses_task(user_id, book_id):
BooksStream().add_book_statuses(user, book)
@app.task(queue=STREAMS)
@app.task(queue=LOW)
def remove_book_statuses_task(user_id, book_id):
"""remove statuses about a book from a user's books feed"""
user = models.User.objects.get(id=user_id)
@ -512,7 +513,7 @@ def remove_book_statuses_task(user_id, book_id):
BooksStream().remove_book_statuses(user, book)
@app.task(queue=STREAMS)
@app.task(queue=MEDIUM)
def populate_stream_task(stream, user_id):
"""background task for populating an empty activitystream"""
user = models.User.objects.get(id=user_id)
@ -520,7 +521,7 @@ def populate_stream_task(stream, user_id):
stream.populate_streams(user)
@app.task(queue=STREAMS)
@app.task(queue=MEDIUM)
def remove_status_task(status_ids):
"""remove a status from any stream it might be in"""
# this can take an id or a list of ids
@ -535,7 +536,7 @@ def remove_status_task(status_ids):
)
@app.task(queue=STREAMS)
@app.task(queue=HIGH)
def add_status_task(status_id, increment_unread=False):
"""add a status to any stream it should be in"""
status = models.Status.objects.select_subclasses().get(id=status_id)
@ -547,7 +548,7 @@ def add_status_task(status_id, increment_unread=False):
stream.add_status(status, increment_unread=increment_unread)
@app.task(queue=STREAMS)
@app.task(queue=MEDIUM)
def remove_user_statuses_task(viewer_id, user_id, stream_list=None):
"""remove all statuses by a user from a viewer's stream"""
stream_list = [streams[s] for s in stream_list] if stream_list else streams.values()
@ -557,7 +558,7 @@ def remove_user_statuses_task(viewer_id, user_id, stream_list=None):
stream.remove_user_statuses(viewer, user)
@app.task(queue=STREAMS)
@app.task(queue=MEDIUM)
def add_user_statuses_task(viewer_id, user_id, stream_list=None):
"""add all statuses by a user to a viewer's stream"""
stream_list = [streams[s] for s in stream_list] if stream_list else streams.values()
@ -567,7 +568,7 @@ def add_user_statuses_task(viewer_id, user_id, stream_list=None):
stream.add_user_statuses(viewer, user)
@app.task(queue=STREAMS)
@app.task(queue=MEDIUM)
def handle_boost_task(boost_id):
"""remove the original post and other, earlier boosts"""
instance = models.Status.objects.get(id=boost_id)


@ -1,5 +1,4 @@
"""Do further startup configuration and initialization"""
import os
import urllib
import logging
@ -15,16 +14,16 @@ def download_file(url, destination):
"""Downloads a file to the given path"""
try:
# Ensure our destination directory exists
os.makedirs(os.path.dirname(destination), exist_ok=True)
os.makedirs(os.path.dirname(destination))
with urllib.request.urlopen(url) as stream:
with open(destination, "b+w") as outfile:
outfile.write(stream.read())
except (urllib.error.HTTPError, urllib.error.URLError) as err:
logger.error("Failed to download file %s: %s", url, err)
except OSError as err:
logger.error("Couldn't open font file %s for writing: %s", destination, err)
except Exception as err: # pylint:disable=broad-except
logger.error("Unknown error in file download: %s", err)
except (urllib.error.HTTPError, urllib.error.URLError):
logger.info("Failed to download file %s", url)
except OSError:
logger.info("Couldn't open font file %s for writing", destination)
except: # pylint: disable=bare-except
logger.info("Unknown error in file download")
class BookwyrmConfig(AppConfig):
@ -33,6 +32,7 @@ class BookwyrmConfig(AppConfig):
name = "bookwyrm"
verbose_name = "BookWyrm"
# pylint: disable=no-self-use
def ready(self):
"""set up OTLP and preview image files, if desired"""
if settings.OTEL_EXPORTER_OTLP_ENDPOINT or settings.OTEL_EXPORTER_CONSOLE:


@ -1,68 +1,35 @@
""" using a bookwyrm instance as a source of book data """
from __future__ import annotations
from dataclasses import asdict, dataclass
from functools import reduce
import operator
from typing import Optional, Union, Any, Literal, overload
from django.contrib.postgres.search import SearchRank, SearchQuery
from django.db.models import F, Q
from django.db.models.query import QuerySet
from bookwyrm import models
from bookwyrm import connectors
from bookwyrm.settings import MEDIA_FULL_URL
@overload
def search(
query: str,
*,
min_confidence: float = 0,
filters: Optional[list[Any]] = None,
return_first: Literal[False],
) -> QuerySet[models.Edition]:
...
@overload
def search(
query: str,
*,
min_confidence: float = 0,
filters: Optional[list[Any]] = None,
return_first: Literal[True],
) -> Optional[models.Edition]:
...
def search(
query: str,
*,
min_confidence: float = 0,
filters: Optional[list[Any]] = None,
return_first: bool = False,
books: Optional[QuerySet[models.Edition]] = None,
) -> Union[Optional[models.Edition], QuerySet[models.Edition]]:
# pylint: disable=arguments-differ
def search(query, min_confidence=0, filters=None, return_first=False):
"""search your local database"""
filters = filters or []
if not query:
return None if return_first else []
return []
query = query.strip()
results = None
# first, try searching unique identifiers
# unique identifiers never have spaces, title/author usually do
if not " " in query:
results = search_identifiers(
query, *filters, return_first=return_first, books=books
)
results = search_identifiers(query, *filters, return_first=return_first)
# if there were no identifier results...
if not results:
# then try searching title/author
results = search_title_author(
query, min_confidence, *filters, return_first=return_first, books=books
query, min_confidence, *filters, return_first=return_first
)
return results
@ -99,18 +66,8 @@ def format_search_result(search_result):
).json()
def search_identifiers(
query,
*filters,
return_first=False,
books=None,
) -> Union[Optional[models.Edition], QuerySet[models.Edition]]:
"""search Editions by deduplication fields
Best for cases when we can assume someone is searching for an exact match on
commonly unique data identifiers like isbn or specific library ids.
"""
books = books or models.Edition.objects
def search_identifiers(query, *filters, return_first=False):
"""tries remote_id, isbn; defined as dedupe fields on the model"""
if connectors.maybe_isbn(query):
# Oh did you think the 'S' in ISBN stood for 'standard'?
normalized_isbn = query.strip().upper().rjust(10, "0")
@ -121,7 +78,7 @@ def search_identifiers(
for f in models.Edition._meta.get_fields()
if hasattr(f, "deduplication_field") and f.deduplication_field
]
results = books.filter(
results = models.Edition.objects.filter(
*filters, reduce(operator.or_, (Q(**f) for f in or_filters))
).distinct()
@ -130,18 +87,11 @@ def search_identifiers(
return results
def search_title_author(
query,
min_confidence,
*filters,
return_first=False,
books=None,
) -> QuerySet[models.Edition]:
def search_title_author(query, min_confidence, *filters, return_first=False):
"""searches for title and author"""
books = books or models.Edition.objects
query = SearchQuery(query, config="simple") | SearchQuery(query, config="english")
results = (
books.filter(*filters, search_vector=query)
models.Edition.objects.filter(*filters, search_vector=query)
.annotate(rank=SearchRank(F("search_vector"), query))
.filter(rank__gt=min_confidence)
.order_by("-rank")
@ -152,7 +102,7 @@ def search_title_author(
# filter out multiple editions of the same work
list_results = []
for work_id in editions_of_work[:30]:
for work_id in set(editions_of_work[:30]):
result = (
results.filter(parent_work=work_id)
.order_by("-rank", "-edition_rank")
@ -172,11 +122,11 @@ class SearchResult:
title: str
key: str
connector: object
view_link: Optional[str] = None
author: Optional[str] = None
year: Optional[str] = None
cover: Optional[str] = None
confidence: float = 1.0
view_link: str = None
author: str = None
year: str = None
cover: str = None
confidence: int = 1
def __repr__(self):
# pylint: disable=consider-using-f-string


@ -1,11 +1,7 @@
""" functionality outline for a book data connector """
from __future__ import annotations
from abc import ABC, abstractmethod
from typing import Optional, TypedDict, Any, Callable, Union, Iterator
from urllib.parse import quote_plus
# pylint: disable-next=deprecated-module
import imghdr # Deprecated in 3.11 for removal in 3.13; no good alternative yet
import imghdr
import logging
import re
import asyncio
@ -20,38 +16,33 @@ from bookwyrm import activitypub, models, settings
from bookwyrm.settings import USER_AGENT
from .connector_manager import load_more_data, ConnectorException, raise_not_valid_url
from .format_mappings import format_mappings
from ..book_search import SearchResult
logger = logging.getLogger(__name__)
JsonDict = dict[str, Any]
class ConnectorResults(TypedDict):
"""TypedDict for results returned by connector"""
connector: AbstractMinimalConnector
results: list[SearchResult]
class AbstractMinimalConnector(ABC):
"""just the bare bones, for other bookwyrm instances"""
def __init__(self, identifier: str):
def __init__(self, identifier):
# load connector settings
info = models.Connector.objects.get(identifier=identifier)
self.connector = info
# the things in the connector model to copy over
self.base_url = info.base_url
self.books_url = info.books_url
self.covers_url = info.covers_url
self.search_url = info.search_url
self.isbn_search_url = info.isbn_search_url
self.name = info.name
self.identifier = info.identifier
self_fields = [
"base_url",
"books_url",
"covers_url",
"search_url",
"isbn_search_url",
"name",
"identifier",
]
for field in self_fields:
setattr(self, field, getattr(info, field))
def get_search_url(self, query: str) -> str:
def get_search_url(self, query):
"""format the query url"""
# Check if the query resembles an ISBN
if maybe_isbn(query) and self.isbn_search_url and self.isbn_search_url != "":
@ -63,21 +54,13 @@ class AbstractMinimalConnector(ABC):
# searched as free text. This, instead, only searches isbn if it's isbn-y
return f"{self.search_url}{quote_plus(query)}"
def process_search_response(
self, query: str, data: Any, min_confidence: float
) -> list[SearchResult]:
def process_search_response(self, query, data, min_confidence):
"""Format the search results based on the format of the query"""
if maybe_isbn(query):
return list(self.parse_isbn_search_data(data))[:10]
return list(self.parse_search_data(data, min_confidence))[:10]
async def get_results(
self,
session: aiohttp.ClientSession,
url: str,
min_confidence: float,
query: str,
) -> Optional[ConnectorResults]:
async def get_results(self, session, url, min_confidence, query):
"""try this specific connector"""
# pylint: disable=line-too-long
headers = {
@ -86,68 +69,60 @@ class AbstractMinimalConnector(ABC):
),
"User-Agent": USER_AGENT,
}
params = {"min_confidence": str(min_confidence)}
params = {"min_confidence": min_confidence}
try:
async with session.get(url, headers=headers, params=params) as response:
if not response.ok:
logger.info("Unable to connect to %s: %s", url, response.reason)
return None
return
try:
raw_data = await response.json()
except aiohttp.client_exceptions.ContentTypeError as err:
logger.exception(err)
return None
return
return ConnectorResults(
connector=self,
results=self.process_search_response(
return {
"connector": self,
"results": self.process_search_response(
query, raw_data, min_confidence
),
)
}
except asyncio.TimeoutError:
logger.info("Connection timed out for url: %s", url)
except aiohttp.ClientError as err:
logger.info(err)
return None
@abstractmethod
def get_or_create_book(self, remote_id: str) -> Optional[models.Book]:
def get_or_create_book(self, remote_id):
"""pull up a book record by whatever means possible"""
@abstractmethod
def parse_search_data(
self, data: Any, min_confidence: float
) -> Iterator[SearchResult]:
def parse_search_data(self, data, min_confidence):
"""turn the result json from a search into a list"""
@abstractmethod
def parse_isbn_search_data(self, data: Any) -> Iterator[SearchResult]:
def parse_isbn_search_data(self, data):
"""turn the result json from a search into a list"""
class AbstractConnector(AbstractMinimalConnector):
"""generic book data connector"""
generated_remote_link_field = ""
def __init__(self, identifier: str):
def __init__(self, identifier):
super().__init__(identifier)
# fields we want to look for in book data to copy over
# title we handle separately.
self.book_mappings: list[Mapping] = []
self.author_mappings: list[Mapping] = []
self.book_mappings = []
def get_or_create_book(self, remote_id: str) -> Optional[models.Book]:
def get_or_create_book(self, remote_id):
"""translate arbitrary json into an Activitypub dataclass"""
# first, check if we have the origin_id saved
existing = models.Edition.find_existing_by_remote_id(
remote_id
) or models.Work.find_existing_by_remote_id(remote_id)
if existing:
if hasattr(existing, "default_edition") and isinstance(
existing.default_edition, models.Edition
):
if hasattr(existing, "default_edition"):
return existing.default_edition
return existing
@ -179,9 +154,6 @@ class AbstractConnector(AbstractMinimalConnector):
)
# this will dedupe automatically
work = work_activity.to_model(model=models.Work, overwrite=False)
if not work:
return None
for author in self.get_authors_from_data(work_data):
work.authors.add(author)
@ -189,21 +161,12 @@ class AbstractConnector(AbstractMinimalConnector):
load_more_data.delay(self.connector.id, work.id)
return edition
def get_book_data(self, remote_id: str) -> JsonDict: # pylint: disable=no-self-use
def get_book_data(self, remote_id): # pylint: disable=no-self-use
"""this allows connectors to override the default behavior"""
return get_data(remote_id)
def create_edition_from_data(
self,
work: models.Work,
edition_data: Union[str, JsonDict],
instance: Optional[models.Edition] = None,
) -> Optional[models.Edition]:
def create_edition_from_data(self, work, edition_data, instance=None):
"""if we already have the work, we're ready"""
if isinstance(edition_data, str):
# We don't expect a string here
return None
mapped_data = dict_from_mappings(edition_data, self.book_mappings)
mapped_data["work"] = work.remote_id
edition_activity = activitypub.Edition(**mapped_data)
@ -211,9 +174,6 @@ class AbstractConnector(AbstractMinimalConnector):
model=models.Edition, overwrite=False, instance=instance
)
if not edition:
return None
# if we're updating an existing instance, we don't need to load authors
if instance:
return edition
@ -230,9 +190,7 @@ class AbstractConnector(AbstractMinimalConnector):
return edition
def get_or_create_author(
self, remote_id: str, instance: Optional[models.Author] = None
) -> Optional[models.Author]:
def get_or_create_author(self, remote_id, instance=None):
"""load that author"""
if not instance:
existing = models.Author.find_existing_by_remote_id(remote_id)
@ -252,51 +210,46 @@ class AbstractConnector(AbstractMinimalConnector):
model=models.Author, overwrite=False, instance=instance
)
def get_remote_id_from_model(self, obj: models.BookDataModel) -> Optional[str]:
def get_remote_id_from_model(self, obj):
"""given the data stored, how can we look this up"""
remote_id: Optional[str] = getattr(obj, self.generated_remote_link_field)
return remote_id
return getattr(obj, getattr(self, "generated_remote_link_field"))
def update_author_from_remote(self, obj: models.Author) -> Optional[models.Author]:
def update_author_from_remote(self, obj):
"""load the remote data from this connector and add it to an existing author"""
remote_id = self.get_remote_id_from_model(obj)
if not remote_id:
return None
return self.get_or_create_author(remote_id, instance=obj)
def update_book_from_remote(self, obj: models.Edition) -> Optional[models.Edition]:
def update_book_from_remote(self, obj):
"""load the remote data from this connector and add it to an existing book"""
remote_id = self.get_remote_id_from_model(obj)
if not remote_id:
return None
data = self.get_book_data(remote_id)
return self.create_edition_from_data(obj.parent_work, data, instance=obj)
@abstractmethod
def is_work_data(self, data: JsonDict) -> bool:
def is_work_data(self, data):
"""differentiate works and editions"""
@abstractmethod
def get_edition_from_work_data(self, data: JsonDict) -> JsonDict:
def get_edition_from_work_data(self, data):
"""every work needs at least one edition"""
@abstractmethod
def get_work_from_edition_data(self, data: JsonDict) -> JsonDict:
def get_work_from_edition_data(self, data):
"""every edition needs a work"""
@abstractmethod
def get_authors_from_data(self, data: JsonDict) -> Iterator[models.Author]:
def get_authors_from_data(self, data):
"""load author data"""
@abstractmethod
def expand_book_data(self, book: models.Book) -> None:
def expand_book_data(self, book):
"""get more info on a book"""
def dict_from_mappings(data: JsonDict, mappings: list[Mapping]) -> JsonDict:
def dict_from_mappings(data, mappings):
"""create a dict in Activitypub format, using mappings supplies by
the subclass"""
result: JsonDict = {}
result = {}
for mapping in mappings:
# sometimes there are multiple mappings for one field, don't
# overwrite earlier writes in that case
@ -306,11 +259,7 @@ def dict_from_mappings(data: JsonDict, mappings: list[Mapping]) -> JsonDict:
return result
def get_data(
url: str,
params: Optional[dict[str, str]] = None,
timeout: int = settings.QUERY_TIMEOUT,
) -> JsonDict:
def get_data(url, params=None, timeout=settings.QUERY_TIMEOUT):
"""wrapper for request.get"""
# check if the url is blocked
raise_not_valid_url(url)
@ -343,15 +292,10 @@ def get_data(
logger.info(err)
raise ConnectorException(err)
if not isinstance(data, dict):
raise ConnectorException("Unexpected data format")
return data
def get_image(
url: str, timeout: int = 10
) -> Union[tuple[ContentFile[bytes], str], tuple[None, None]]:
def get_image(url, timeout=10):
"""wrapper for requesting an image"""
raise_not_valid_url(url)
try:
@ -381,19 +325,14 @@ def get_image(
class Mapping:
"""associate a local database field with a field in an external dataset"""
def __init__(
self,
local_field: str,
remote_field: Optional[str] = None,
formatter: Optional[Callable[[Any], Any]] = None,
):
def __init__(self, local_field, remote_field=None, formatter=None):
noop = lambda x: x
self.local_field = local_field
self.remote_field = remote_field or local_field
self.formatter = formatter or noop
def get_value(self, data: JsonDict) -> Optional[Any]:
def get_value(self, data):
"""pull a field from incoming json and return the formatted version"""
value = data.get(self.remote_field)
if not value:
@ -404,7 +343,7 @@ class Mapping:
return None
def infer_physical_format(format_text: str) -> Optional[str]:
def infer_physical_format(format_text):
"""try to figure out what the standardized format is from the free value"""
format_text = format_text.lower()
if format_text in format_mappings:
@ -417,7 +356,7 @@ def infer_physical_format(format_text: str) -> Optional[str]:
return matches[0]
def unique_physical_format(format_text: str) -> Optional[str]:
def unique_physical_format(format_text):
"""only store the format if it isn't directly in the format mappings"""
format_text = format_text.lower()
if format_text in format_mappings:
@ -426,7 +365,7 @@ def unique_physical_format(format_text: str) -> Optional[str]:
return format_text
def maybe_isbn(query: str) -> bool:
def maybe_isbn(query):
"""check if a query looks like an isbn"""
isbn = re.sub(r"[\W_]", "", query) # removes filler characters
# ISBNs must be numeric except an ISBN10 checkdigit can be 'X'


@ -1,7 +1,4 @@
""" using another bookwyrm instance as a source of book data """
from __future__ import annotations
from typing import Any, Iterator
from bookwyrm import activitypub, models
from bookwyrm.book_search import SearchResult
from .abstract_connector import AbstractMinimalConnector
@ -10,19 +7,15 @@ from .abstract_connector import AbstractMinimalConnector
class Connector(AbstractMinimalConnector):
"""this is basically just for search"""
def get_or_create_book(self, remote_id: str) -> models.Edition:
def get_or_create_book(self, remote_id):
return activitypub.resolve_remote_id(remote_id, model=models.Edition)
def parse_search_data(
self, data: list[dict[str, Any]], min_confidence: float
) -> Iterator[SearchResult]:
def parse_search_data(self, data, min_confidence):
for search_result in data:
search_result["connector"] = self
yield SearchResult(**search_result)
def parse_isbn_search_data(
self, data: list[dict[str, Any]]
) -> Iterator[SearchResult]:
def parse_isbn_search_data(self, data):
for search_result in data:
search_result["connector"] = self
yield SearchResult(**search_result)


@ -1,11 +1,8 @@
""" interface with whatever connectors the app has """
from __future__ import annotations
import asyncio
import importlib
import ipaddress
import logging
from asyncio import Future
from typing import Iterator, Any, Optional, Union, overload, Literal
from urllib.parse import urlparse
import aiohttp
@ -15,10 +12,8 @@ from django.db.models import signals
from requests import HTTPError
from bookwyrm import book_search, models
from bookwyrm.book_search import SearchResult
from bookwyrm.connectors import abstract_connector
from bookwyrm.settings import SEARCH_TIMEOUT
from bookwyrm.tasks import app, CONNECTORS
from bookwyrm.tasks import app, LOW
logger = logging.getLogger(__name__)
@ -27,15 +22,11 @@ class ConnectorException(HTTPError):
"""when the connector can't do what was asked"""
async def async_connector_search(
query: str,
items: list[tuple[str, abstract_connector.AbstractConnector]],
min_confidence: float,
) -> list[Optional[abstract_connector.ConnectorResults]]:
async def async_connector_search(query, items, min_confidence):
"""Try a number of requests simultaneously"""
timeout = aiohttp.ClientTimeout(total=SEARCH_TIMEOUT)
async with aiohttp.ClientSession(timeout=timeout) as session:
tasks: list[Future[Optional[abstract_connector.ConnectorResults]]] = []
tasks = []
for url, connector in items:
tasks.append(
asyncio.ensure_future(
@ -44,29 +35,14 @@ async def async_connector_search(
)
results = await asyncio.gather(*tasks)
return list(results)
return results
@overload
def search(
query: str, *, min_confidence: float = 0.1, return_first: Literal[False]
) -> list[abstract_connector.ConnectorResults]:
...
@overload
def search(
query: str, *, min_confidence: float = 0.1, return_first: Literal[True]
) -> Optional[SearchResult]:
...
def search(
query: str, *, min_confidence: float = 0.1, return_first: bool = False
) -> Union[list[abstract_connector.ConnectorResults], Optional[SearchResult]]:
def search(query, min_confidence=0.1, return_first=False):
"""find books based on arbitrary keywords"""
if not query:
return None if return_first else []
return []
results = []
items = []
for connector in get_connectors():
@@ -81,12 +57,8 @@ def search(
items.append((url, connector))
# load as many results as we can
# failed requests will return None, so filter those out
results = [
r
for r in asyncio.run(async_connector_search(query, items, min_confidence))
if r
]
results = asyncio.run(async_connector_search(query, items, min_confidence))
results = [r for r in results if r]
if return_first:
# find the best result from all the responses and return that
@@ -94,12 +66,11 @@ def search(
all_results = sorted(all_results, key=lambda r: r.confidence, reverse=True)
return all_results[0] if all_results else None
# failed requests will return None, so filter those out
return results
def first_search_result(
query: str, min_confidence: float = 0.1
) -> Union[models.Edition, SearchResult, None]:
def first_search_result(query, min_confidence=0.1):
"""search until you find a result that fits"""
# try local search first
result = book_search.search(query, min_confidence=min_confidence, return_first=True)
@@ -109,20 +80,18 @@ def first_search_result(
return search(query, min_confidence=min_confidence, return_first=True) or None
def get_connectors() -> Iterator[abstract_connector.AbstractConnector]:
def get_connectors():
"""load all connectors"""
for info in models.Connector.objects.filter(active=True).order_by("priority").all():
yield load_connector(info)
def get_or_create_connector(remote_id: str) -> abstract_connector.AbstractConnector:
def get_or_create_connector(remote_id):
"""get the connector related to the object's server"""
url = urlparse(remote_id)
identifier = url.hostname
identifier = url.netloc
if not identifier:
raise ValueError(f"Invalid remote id: {remote_id}")
base_url = f"{url.scheme}://{url.netloc}"
raise ValueError("Invalid remote id")
try:
connector_info = models.Connector.objects.get(identifier=identifier)
@@ -130,75 +99,58 @@ def get_or_create_connector(remote_id: str) -> abstract_connector.AbstractConnec
connector_info = models.Connector.objects.create(
identifier=identifier,
connector_file="bookwyrm_connector",
base_url=base_url,
books_url=f"{base_url}/book",
covers_url=f"{base_url}/images/covers",
search_url=f"{base_url}/search?q=",
base_url=f"https://{identifier}",
books_url=f"https://{identifier}/book",
covers_url=f"https://{identifier}/images/covers",
search_url=f"https://{identifier}/search?q=",
priority=2,
)
return load_connector(connector_info)
@app.task(queue=CONNECTORS)
def load_more_data(connector_id: str, book_id: str) -> None:
@app.task(queue=LOW)
def load_more_data(connector_id, book_id):
"""background the work of getting all 10,000 editions of LoTR"""
connector_info = models.Connector.objects.get(id=connector_id)
connector = load_connector(connector_info)
book = models.Book.objects.select_subclasses().get( # type: ignore[no-untyped-call]
id=book_id
)
book = models.Book.objects.select_subclasses().get(id=book_id)
connector.expand_book_data(book)
@app.task(queue=CONNECTORS)
def create_edition_task(
connector_id: int, work_id: int, data: Union[str, abstract_connector.JsonDict]
) -> None:
@app.task(queue=LOW)
def create_edition_task(connector_id, work_id, data):
"""separate task for each of the 10,000 editions of LoTR"""
connector_info = models.Connector.objects.get(id=connector_id)
connector = load_connector(connector_info)
work = models.Work.objects.select_subclasses().get( # type: ignore[no-untyped-call]
id=work_id
)
work = models.Work.objects.select_subclasses().get(id=work_id)
connector.create_edition_from_data(work, data)
def load_connector(
connector_info: models.Connector,
) -> abstract_connector.AbstractConnector:
def load_connector(connector_info):
"""instantiate the connector class"""
connector = importlib.import_module(
f"bookwyrm.connectors.{connector_info.connector_file}"
)
return connector.Connector(connector_info.identifier) # type: ignore[no-any-return]
return connector.Connector(connector_info.identifier)
@receiver(signals.post_save, sender="bookwyrm.FederatedServer")
# pylint: disable=unused-argument
def create_connector(
sender: Any,
instance: models.FederatedServer,
created: Any,
*args: Any,
**kwargs: Any,
) -> None:
def create_connector(sender, instance, created, *args, **kwargs):
"""create a connector to an external bookwyrm server"""
if instance.application_type == "bookwyrm":
get_or_create_connector(f"https://{instance.server_name}")
def raise_not_valid_url(url: str) -> None:
def raise_not_valid_url(url):
"""do some basic reality checks on the url"""
parsed = urlparse(url)
if not parsed.scheme in ["http", "https"]:
raise ConnectorException("Invalid scheme: ", url)
if not parsed.hostname:
raise ConnectorException("Hostname missing: ", url)
try:
ipaddress.ip_address(parsed.hostname)
ipaddress.ip_address(parsed.netloc)
raise ConnectorException("Provided url is an IP address: ", url)
except ValueError:
# it's not an IP address, which is good
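
The typed branch of search() above pairs @overload with Literal so a type checker can narrow the return type on return_first, while a single runtime implementation does the work. A self-contained sketch of the pattern, with toy string results standing in for connector results:

from typing import Literal, Optional, Union, overload

@overload
def search(query: str, *, return_first: Literal[True]) -> Optional[str]: ...

@overload
def search(query: str, *, return_first: Literal[False] = False) -> list[str]: ...

def search(query: str, *, return_first: bool = False) -> Union[list[str], Optional[str]]:
    """toy implementation; the overloads above exist only for the type checker"""
    hits = [candidate for candidate in ("alpha", "alphabet") if query in candidate]
    if return_first:
        return hits[0] if hits else None
    return hits

print(search("alpha"))                    # ['alpha', 'alphabet']
print(search("beta", return_first=True))  # None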

View file

@@ -1,10 +1,9 @@
""" inventaire data connector """
import re
from typing import Any, Union, Optional, Iterator, Iterable
from bookwyrm import models
from bookwyrm.book_search import SearchResult
from .abstract_connector import AbstractConnector, Mapping, JsonDict
from .abstract_connector import AbstractConnector, Mapping
from .abstract_connector import get_data
from .connector_manager import ConnectorException, create_edition_task
@@ -14,7 +13,7 @@ class Connector(AbstractConnector):
generated_remote_link_field = "inventaire_id"
def __init__(self, identifier: str):
def __init__(self, identifier):
super().__init__(identifier)
get_first = lambda a: a[0]
@@ -61,13 +60,13 @@ class Connector(AbstractConnector):
Mapping("died", remote_field="wdt:P570", formatter=get_first),
] + shared_mappings
def get_remote_id(self, value: str) -> str:
def get_remote_id(self, value):
"""convert an id/uri into a url"""
return f"{self.books_url}?action=by-uris&uris={value}"
def get_book_data(self, remote_id: str) -> JsonDict:
def get_book_data(self, remote_id):
data = get_data(remote_id)
extracted = list(data.get("entities", {}).values())
extracted = list(data.get("entities").values())
try:
data = extracted[0]
except (KeyError, IndexError):
@@ -75,16 +74,10 @@ class Connector(AbstractConnector):
# flatten the data so that images, uri, and claims are on the same level
return {
**data.get("claims", {}),
**{
k: data.get(k)
for k in ["uri", "image", "labels", "sitelinks", "type"]
if k in data
},
**{k: data.get(k) for k in ["uri", "image", "labels", "sitelinks", "type"]},
}
def parse_search_data(
self, data: JsonDict, min_confidence: float
) -> Iterator[SearchResult]:
def parse_search_data(self, data, min_confidence):
for search_result in data.get("results", []):
images = search_result.get("image")
cover = f"{self.covers_url}/img/entities/{images[0]}" if images else None
@@ -103,7 +96,7 @@ class Connector(AbstractConnector):
connector=self,
)
def parse_isbn_search_data(self, data: JsonDict) -> Iterator[SearchResult]:
def parse_isbn_search_data(self, data):
"""got some data"""
results = data.get("entities")
if not results:
@@ -121,44 +114,35 @@ class Connector(AbstractConnector):
connector=self,
)
def is_work_data(self, data: JsonDict) -> bool:
def is_work_data(self, data):
return data.get("type") == "work"
def load_edition_data(self, work_uri: str) -> JsonDict:
def load_edition_data(self, work_uri):
"""get a list of editions for a work"""
# pylint: disable=line-too-long
url = f"{self.books_url}?action=reverse-claims&property=wdt:P629&value={work_uri}&sort=true"
return get_data(url)
def get_edition_from_work_data(self, data: JsonDict) -> JsonDict:
work_uri = data.get("uri")
if not work_uri:
raise ConnectorException("Invalid URI")
data = self.load_edition_data(work_uri)
def get_edition_from_work_data(self, data):
data = self.load_edition_data(data.get("uri"))
try:
uri = data.get("uris", [])[0]
except IndexError:
raise ConnectorException("Invalid book data")
return self.get_book_data(self.get_remote_id(uri))
def get_work_from_edition_data(self, data: JsonDict) -> JsonDict:
try:
uri = data.get("wdt:P629", [])[0]
except IndexError:
raise ConnectorException("Invalid book data")
def get_work_from_edition_data(self, data):
uri = data.get("wdt:P629", [None])[0]
if not uri:
raise ConnectorException("Invalid book data")
return self.get_book_data(self.get_remote_id(uri))
def get_authors_from_data(self, data: JsonDict) -> Iterator[models.Author]:
def get_authors_from_data(self, data):
authors = data.get("wdt:P50", [])
for author in authors:
model = self.get_or_create_author(self.get_remote_id(author))
if model:
yield model
yield self.get_or_create_author(self.get_remote_id(author))
def expand_book_data(self, book: models.Book) -> None:
def expand_book_data(self, book):
work = book
# go from the edition to the work, if necessary
if isinstance(book, models.Edition):
@@ -170,16 +154,11 @@ class Connector(AbstractConnector):
# who knows, man
return
for edition_uri in edition_options.get("uris", []):
for edition_uri in edition_options.get("uris"):
remote_id = self.get_remote_id(edition_uri)
create_edition_task.delay(self.connector.id, work.id, remote_id)
def create_edition_from_data(
self,
work: models.Work,
edition_data: Union[str, JsonDict],
instance: Optional[models.Edition] = None,
) -> Optional[models.Edition]:
def create_edition_from_data(self, work, edition_data, instance=None):
"""pass in the url as data and then call the version in abstract connector"""
if isinstance(edition_data, str):
try:
@@ -189,26 +168,22 @@ class Connector(AbstractConnector):
return None
return super().create_edition_from_data(work, edition_data, instance=instance)
def get_cover_url(
self, cover_blob: Union[list[JsonDict], JsonDict], *_: Any
) -> Optional[str]:
def get_cover_url(self, cover_blob, *_):
"""format the relative cover url into an absolute one:
{"url": "/img/entities/e794783f01b9d4f897a1ea9820b96e00d346994f"}
"""
# covers may or may not be a list
if isinstance(cover_blob, list):
if len(cover_blob) == 0:
return None
if isinstance(cover_blob, list) and len(cover_blob) > 0:
cover_blob = cover_blob[0]
cover_id = cover_blob.get("url")
if not isinstance(cover_id, str):
if not cover_id:
return None
# cover may or may not be an absolute url already
if re.match(r"^http", cover_id):
return cover_id
return f"{self.covers_url}{cover_id}"
def resolve_keys(self, keys: Iterable[str]) -> list[str]:
def resolve_keys(self, keys):
"""cool, it's "wd:Q3156592" now what the heck does that mean"""
results = []
for uri in keys:
@@ -216,10 +191,10 @@ class Connector(AbstractConnector):
data = self.get_book_data(self.get_remote_id(uri))
except ConnectorException:
continue
results.append(get_language_code(data.get("labels", {})))
results.append(get_language_code(data.get("labels")))
return results
def get_description(self, links: JsonDict) -> str:
def get_description(self, links):
"""grab an extracted excerpt from wikipedia"""
link = links.get("enwiki")
if not link:
@@ -229,15 +204,15 @@ class Connector(AbstractConnector):
data = get_data(url)
except ConnectorException:
return ""
return str(data.get("extract", ""))
return data.get("extract")
def get_remote_id_from_model(self, obj: models.BookDataModel) -> str:
def get_remote_id_from_model(self, obj):
"""use get_remote_id to figure out the link from a model obj"""
remote_id_value = obj.inventaire_id
return self.get_remote_id(remote_id_value)
def get_language_code(options: JsonDict, code: str = "en") -> Any:
def get_language_code(options, code="en"):
"""when there are a bunch of translation but we need a single field"""
result = options.get(code)
if result:
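
A standalone restatement of the defensive checks in get_cover_url above: cover data may arrive as a list of blobs or a single blob, and the url inside may already be absolute. The covers_url base used here is a placeholder, not Inventaire's real endpoint.

import re
from typing import Any, Optional, Union

def get_cover_url(covers_url: str, cover_blob: Union[list[dict], dict], *_: Any) -> Optional[str]:
    # covers may or may not be a list
    if isinstance(cover_blob, list):
        if len(cover_blob) == 0:
            return None
        cover_blob = cover_blob[0]
    cover_id = cover_blob.get("url")
    if not isinstance(cover_id, str):
        return None
    # the url may already be absolute
    if re.match(r"^http", cover_id):
        return cover_id
    return f"{covers_url}{cover_id}"

print(get_cover_url("https://covers.example", {"url": "/img/entities/e794"}))
# https://covers.example/img/entities/e794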

View file

@@ -1,13 +1,9 @@
""" openlibrary data connector """
import re
from typing import Any, Optional, Union, Iterator, Iterable
from markdown import markdown
from bookwyrm import models
from bookwyrm.book_search import SearchResult
from bookwyrm.utils.sanitizer import clean
from .abstract_connector import AbstractConnector, Mapping, JsonDict
from .abstract_connector import AbstractConnector, Mapping
from .abstract_connector import get_data, infer_physical_format, unique_physical_format
from .connector_manager import ConnectorException, create_edition_task
from .openlibrary_languages import languages
@@ -18,7 +14,7 @@ class Connector(AbstractConnector):
generated_remote_link_field = "openlibrary_link"
def __init__(self, identifier: str):
def __init__(self, identifier):
super().__init__(identifier)
get_first = lambda a, *args: a[0]
@@ -98,14 +94,14 @@ class Connector(AbstractConnector):
Mapping("inventaire_id", remote_field="links", formatter=get_inventaire_id),
]
def get_book_data(self, remote_id: str) -> JsonDict:
def get_book_data(self, remote_id):
data = get_data(remote_id)
if data.get("type", {}).get("key") == "/type/redirect":
remote_id = self.base_url + data.get("location", "")
remote_id = self.base_url + data.get("location")
return get_data(remote_id)
return data
def get_remote_id_from_data(self, data: JsonDict) -> str:
def get_remote_id_from_data(self, data):
"""format a url from an openlibrary id field"""
try:
key = data["key"]
@@ -113,10 +109,10 @@ class Connector(AbstractConnector):
raise ConnectorException("Invalid book data")
return f"{self.books_url}{key}"
def is_work_data(self, data: JsonDict) -> bool:
def is_work_data(self, data):
return bool(re.match(r"^[\/\w]+OL\d+W$", data["key"]))
def get_edition_from_work_data(self, data: JsonDict) -> JsonDict:
def get_edition_from_work_data(self, data):
try:
key = data["key"]
except KeyError:
@@ -128,7 +124,7 @@ class Connector(AbstractConnector):
raise ConnectorException("No editions for work")
return edition
def get_work_from_edition_data(self, data: JsonDict) -> JsonDict:
def get_work_from_edition_data(self, data):
try:
key = data["works"][0]["key"]
except (IndexError, KeyError):
@@ -136,7 +132,7 @@ class Connector(AbstractConnector):
url = f"{self.books_url}{key}"
return self.get_book_data(url)
def get_authors_from_data(self, data: JsonDict) -> Iterator[models.Author]:
def get_authors_from_data(self, data):
"""parse author json and load or create authors"""
for author_blob in data.get("authors", []):
author_blob = author_blob.get("author", author_blob)
@@ -148,7 +144,7 @@ class Connector(AbstractConnector):
continue
yield author
def get_cover_url(self, cover_blob: list[str], size: str = "L") -> Optional[str]:
def get_cover_url(self, cover_blob, size="L"):
"""ask openlibrary for the cover"""
if not cover_blob:
return None
@@ -156,10 +152,8 @@ class Connector(AbstractConnector):
image_name = f"{cover_id}-{size}.jpg"
return f"{self.covers_url}/b/id/{image_name}"
def parse_search_data(
self, data: JsonDict, min_confidence: float
) -> Iterator[SearchResult]:
for idx, search_result in enumerate(data.get("docs", [])):
def parse_search_data(self, data, min_confidence):
for idx, search_result in enumerate(data.get("docs")):
# build the remote id from the openlibrary key
key = self.books_url + search_result["key"]
author = search_result.get("author_name") or ["Unknown"]
@@ -180,7 +174,7 @@ class Connector(AbstractConnector):
confidence=confidence,
)
def parse_isbn_search_data(self, data: JsonDict) -> Iterator[SearchResult]:
def parse_isbn_search_data(self, data):
for search_result in list(data.values()):
# build the remote id from the openlibrary key
key = self.books_url + search_result["key"]
@@ -194,12 +188,12 @@ class Connector(AbstractConnector):
year=search_result.get("publish_date"),
)
def load_edition_data(self, olkey: str) -> JsonDict:
def load_edition_data(self, olkey):
"""query openlibrary for editions of a work"""
url = f"{self.books_url}/works/{olkey}/editions"
return self.get_book_data(url)
def expand_book_data(self, book: models.Book) -> None:
def expand_book_data(self, book):
work = book
# go from the edition to the work, if necessary
if isinstance(book, models.Edition):
@@ -212,14 +206,14 @@ class Connector(AbstractConnector):
# who knows, man
return
for edition_data in edition_options.get("entries", []):
for edition_data in edition_options.get("entries"):
# does this edition have ANY interesting data?
if ignore_edition(edition_data):
continue
create_edition_task.delay(self.connector.id, work.id, edition_data)
def ignore_edition(edition_data: JsonDict) -> bool:
def ignore_edition(edition_data):
"""don't load a million editions that have no metadata"""
# an isbn, we love to see it
if edition_data.get("isbn_13") or edition_data.get("isbn_10"):
@@ -238,30 +232,19 @@ def ignore_edition(edition_data: JsonDict) -> bool:
return True
def get_description(description_blob: Union[JsonDict, str]) -> str:
def get_description(description_blob):
"""descriptions can be a string or a dict"""
if isinstance(description_blob, dict):
description = markdown(description_blob.get("value", ""))
else:
description = markdown(description_blob)
if (
description.startswith("<p>")
and description.endswith("</p>")
and description.count("<p>") == 1
):
# If there is just one <p> tag and it is around the text remove it
return description[len("<p>") : -len("</p>")].strip()
return clean(description)
return description_blob.get("value")
return description_blob
def get_openlibrary_key(key: str) -> str:
def get_openlibrary_key(key):
"""convert /books/OL27320736M into OL27320736M"""
return key.split("/")[-1]
def get_languages(language_blob: Iterable[JsonDict]) -> list[Optional[str]]:
def get_languages(language_blob):
"""/language/eng -> English"""
langs = []
for lang in language_blob:
@@ -269,14 +252,14 @@ def get_languages(language_blob: Iterable[JsonDict]) -> list[Optional[str]]:
return langs
def get_dict_field(blob: Optional[JsonDict], field_name: str) -> Optional[Any]:
def get_dict_field(blob, field_name):
"""extract the isni from the remote id data for the author"""
if not blob or not isinstance(blob, dict):
return None
return blob.get(field_name)
def get_wikipedia_link(links: list[Any]) -> Optional[str]:
def get_wikipedia_link(links):
"""extract wikipedia links"""
if not isinstance(links, list):
return None
@@ -289,7 +272,7 @@ def get_wikipedia_link(links: list[Any]) -> Optional[str]:
return None
def get_inventaire_id(links: list[Any]) -> Optional[str]:
def get_inventaire_id(links):
"""extract and format inventaire ids"""
if not isinstance(links, list):
return None
@@ -299,13 +282,11 @@ def get_inventaire_id(links: list[Any]) -> Optional[str]:
continue
if link.get("title") == "inventaire.io":
iv_link = link.get("url")
if not isinstance(iv_link, str):
return None
return iv_link.split("/")[-1]
return None
def pick_default_edition(options: list[JsonDict]) -> Optional[JsonDict]:
def pick_default_edition(options):
"""favor physical copies with covers in english"""
if not options:
return None
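
A dependency-free sketch of the get_description logic above: accept either a plain string or a dict with a "value" key, and strip a lone <p> wrapper. The real version also runs the text through markdown() and the project's HTML sanitizer, which this sketch skips.

from typing import Union

def get_description(description_blob: Union[dict, str]) -> str:
    """descriptions can be a string or a dict"""
    if isinstance(description_blob, dict):
        description = description_blob.get("value", "")
    else:
        description = description_blob
    # if exactly one <p> tag wraps the whole text, remove it
    if (
        description.startswith("<p>")
        and description.endswith("</p>")
        and description.count("<p>") == 1
    ):
        return description[len("<p>") : -len("</p>")].strip()
    return description

print(get_description({"value": "<p>A quiet book.</p>"}))  # A quiet book.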

View file

@@ -2,7 +2,7 @@
from bookwyrm import models, settings
def site_settings(request):
def site_settings(request): # pylint: disable=unused-argument
"""include the custom info about the site"""
request_protocol = "https://"
if not request.is_secure():

View file

@@ -3,8 +3,8 @@ from django.core.mail import EmailMultiAlternatives
from django.template.loader import get_template
from bookwyrm import models, settings
from bookwyrm.tasks import app, EMAIL
from bookwyrm.settings import DOMAIN, BASE_URL
from bookwyrm.tasks import app, HIGH
from bookwyrm.settings import DOMAIN
def email_data():
@@ -14,7 +14,6 @@ def email_data():
"site_name": site.name,
"logo": site.logo_small_url,
"domain": DOMAIN,
"base_url": BASE_URL,
"user": None,
}
@@ -76,7 +75,7 @@ def format_email(email_name, data):
return (subject, html_content, text_content)
@app.task(queue=EMAIL)
@app.task(queue=HIGH)
def send_email(recipient, subject, html_content, text_content):
"""use a task to send the email"""
email = EmailMultiAlternatives(

View file

@@ -15,7 +15,6 @@ class AuthorForm(CustomForm):
"aliases",
"bio",
"wikipedia_link",
"wikidata",
"website",
"born",
"died",
@@ -33,7 +32,6 @@ class AuthorForm(CustomForm):
"wikipedia_link": forms.TextInput(
attrs={"aria-describedby": "desc_wikipedia_link"}
),
"wikidata": forms.TextInput(attrs={"aria-describedby": "desc_wikidata"}),
"website": forms.TextInput(attrs={"aria-describedby": "desc_website"}),
"born": forms.SelectDateWidget(attrs={"aria-describedby": "desc_born"}),
"died": forms.SelectDateWidget(attrs={"aria-describedby": "desc_died"}),

View file

@@ -1,9 +1,8 @@
""" using django model forms """
from django import forms
from file_resubmit.widgets import ResubmitImageWidget
from bookwyrm import models
from bookwyrm.models.fields import ClearableFileInputWithWarning
from .custom_form import CustomForm
from .widgets import ArrayWidget, SelectDateWidget, Select
@@ -21,7 +20,6 @@ class EditionForm(CustomForm):
model = models.Edition
fields = [
"title",
"sort_title",
"subtitle",
"description",
"series",
@@ -47,9 +45,6 @@ class EditionForm(CustomForm):
]
widgets = {
"title": forms.TextInput(attrs={"aria-describedby": "desc_title"}),
"sort_title": forms.TextInput(
attrs={"aria-describedby": "desc_sort_title"}
),
"subtitle": forms.TextInput(attrs={"aria-describedby": "desc_subtitle"}),
"description": forms.Textarea(
attrs={"aria-describedby": "desc_description"}
@@ -71,7 +66,9 @@ class EditionForm(CustomForm):
"published_date": SelectDateWidget(
attrs={"aria-describedby": "desc_published_date"}
),
"cover": ResubmitImageWidget(attrs={"aria-describedby": "desc_cover"}),
"cover": ClearableFileInputWithWarning(
attrs={"aria-describedby": "desc_cover"}
),
"physical_format": Select(
attrs={"aria-describedby": "desc_physical_format"}
),
@@ -110,7 +107,6 @@ class EditionFromWorkForm(CustomForm):
model = models.Work
fields = [
"title",
"sort_title",
"subtitle",
"authors",
"description",

View file

@@ -15,9 +15,9 @@ class StyledForm(ModelForm):
css_classes["number"] = "input"
css_classes["checkbox"] = "checkbox"
css_classes["textarea"] = "textarea"
# pylint: disable=super-with-arguments
super().__init__(*args, **kwargs)
for visible in self.visible_fields():
input_type = ""
if hasattr(visible.field.widget, "input_type"):
input_type = visible.field.widget.input_type
if isinstance(visible.field.widget, Textarea):

View file

@@ -18,7 +18,6 @@ class EditUserForm(CustomForm):
"email",
"summary",
"show_goal",
"show_ratings",
"show_suggested_users",
"manually_approves_followers",
"default_post_privacy",
@@ -71,22 +70,6 @@ class DeleteUserForm(CustomForm):
fields = ["password"]
class MoveUserForm(CustomForm):
target = forms.CharField(widget=forms.TextInput)
class Meta:
model = models.User
fields = ["password"]
class AliasUserForm(CustomForm):
username = forms.CharField(widget=forms.TextInput)
class Meta:
model = models.User
fields = ["password"]
class ChangePasswordForm(CustomForm):
current_password = forms.CharField(widget=forms.PasswordInput)
confirm_password = forms.CharField(widget=forms.PasswordInput)

View file

@@ -25,10 +25,6 @@ class ImportForm(forms.Form):
csv_file = forms.FileField()
class ImportUserForm(forms.Form):
archive_file = forms.FileField()
class ShelfForm(CustomForm):
class Meta:
model = models.Shelf

View file

@@ -34,6 +34,7 @@ class LoginForm(CustomForm):
def add_invalid_password_error(self):
"""We don't want to be too specific about this"""
# pylint: disable=attribute-defined-outside-init
self.non_field_errors = _("Username or password are incorrect")

View file

@@ -1,5 +1,4 @@
""" using django model forms """
from urllib.parse import urlparse
from django.utils.translation import gettext_lazy as _
@@ -26,7 +25,7 @@ class FileLinkForm(CustomForm):
url = cleaned_data.get("url")
filetype = cleaned_data.get("filetype")
book = cleaned_data.get("book")
domain = urlparse(url).hostname
domain = urlparse(url).netloc
if models.LinkDomain.objects.filter(domain=domain).exists():
status = models.LinkDomain.objects.get(domain=domain).status
if status == "blocked":
@@ -38,9 +37,10 @@ class FileLinkForm(CustomForm):
),
)
if (
models.FileLink.objects.filter(url=url, book=book, filetype=filetype)
.exclude(pk=self.instance)
.exists()
not self.instance
and models.FileLink.objects.filter(
url=url, book=book, filetype=filetype
).exists()
):
# pylint: disable=line-too-long
self.add_error(

View file

@@ -24,7 +24,7 @@ class SortListForm(forms.Form):
sort_by = ChoiceField(
choices=(
("order", _("List Order")),
("sort_title", _("Book Title")),
("title", _("Book Title")),
("rating", _("Rating")),
),
label=_("Sort By"),

View file

@@ -5,6 +5,8 @@ from django import forms
class ArrayWidget(forms.widgets.TextInput):
"""Inputs for postgres array fields"""
# pylint: disable=unused-argument
# pylint: disable=no-self-use
def value_from_datadict(self, data, files, name):
"""get all values for this name"""
return [i for i in data.getlist(name) if i]

View file

@@ -1,7 +1,6 @@
""" import classes """
from .importer import Importer
from .bookwyrm_import import BookwyrmImporter, BookwyrmBooksImporter
from .calibre_import import CalibreImporter
from .goodreads_import import GoodreadsImporter
from .librarything_import import LibrarythingImporter

View file

@@ -1,39 +0,0 @@
"""Import data from Bookwyrm export files"""
from django.http import QueryDict
from bookwyrm.models import User
from bookwyrm.models.bookwyrm_import_job import BookwyrmImportJob
from . import Importer
class BookwyrmImporter:
"""Import a Bookwyrm User export file.
This is kind of a combination of an importer and a connector.
"""
# pylint: disable=no-self-use
def process_import(
self, user: User, archive_file: bytes, settings: QueryDict
) -> BookwyrmImportJob:
"""import user data from a Bookwyrm export file"""
required = [k for k in settings if settings.get(k) == "on"]
job = BookwyrmImportJob.objects.create(
user=user, archive_file=archive_file, required=required
)
return job
class BookwyrmBooksImporter(Importer):
"""
Handle reading a csv from BookWyrm.
Goodreads is the default importer; we use basically the same structure,
but BookWyrm includes additional attributes in the csv
"""
service = "BookWyrm"
row_mappings_guesses = Importer.row_mappings_guesses + [
("shelf_name", ["shelf_name"]),
("review_published", ["review_published"]),
]

View file

@@ -1,6 +1,4 @@
""" handle reading a csv from calibre """
from typing import Any, Optional
from bookwyrm.models import Shelf
from . import Importer
@@ -11,15 +9,20 @@ class CalibreImporter(Importer):
service = "Calibre"
def __init__(self, *args: Any, **kwargs: Any):
def __init__(self, *args, **kwargs):
# Add timestamp to row_mappings_guesses for date_added to avoid
# integrity error
self.row_mappings_guesses = [
(field, mapping + (["timestamp"] if field == "date_added" else []))
for field, mapping in self.row_mappings_guesses
]
row_mappings_guesses = []
for field, mapping in self.row_mappings_guesses:
if field in ("date_added",):
row_mappings_guesses.append((field, mapping + ["timestamp"]))
else:
row_mappings_guesses.append((field, mapping))
self.row_mappings_guesses = row_mappings_guesses
super().__init__(*args, **kwargs)
def get_shelf(self, normalized_row: dict[str, Optional[str]]) -> Optional[str]:
def get_shelf(self, normalized_row):
# Calibre export does not indicate which shelf to use. Use a default one for now
return Shelf.TO_READ
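
The comprehension in the typed branch above rewrites the guesses table in a single pass; a toy, runnable equivalent (the table here is truncated, the real Importer defines many more fields):

# toy guesses table, truncated from the real Importer.row_mappings_guesses
row_mappings_guesses = [
    ("title", ["title"]),
    ("date_added", ["date added", "entry date"]),
]

# append "timestamp" to the guesses for date_added only
row_mappings_guesses = [
    (field, mapping + (["timestamp"] if field == "date_added" else []))
    for field, mapping in row_mappings_guesses
]

print(row_mappings_guesses)
# [('title', ['title']), ('date_added', ['date added', 'entry date', 'timestamp'])]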

View file

@@ -1,10 +1,8 @@
""" handle reading a csv from an external service, defaults are from Goodreads """
import csv
from datetime import timedelta
from typing import Iterable, Optional
from django.utils import timezone
from bookwyrm.models import ImportJob, ImportItem, SiteSettings, User
from bookwyrm.models import ImportJob, ImportItem, SiteSettings
class Importer:
@@ -18,26 +16,17 @@ class Importer:
row_mappings_guesses = [
("id", ["id", "book id"]),
("title", ["title"]),
("authors", ["author_text", "author", "authors", "primary author"]),
("isbn_10", ["isbn_10", "isbn10", "isbn", "isbn/uid"]),
("isbn_13", ["isbn_13", "isbn13", "isbn", "isbns", "isbn/uid"]),
("authors", ["author", "authors", "primary author"]),
("isbn_10", ["isbn10", "isbn", "isbn/uid"]),
("isbn_13", ["isbn13", "isbn", "isbns", "isbn/uid"]),
("shelf", ["shelf", "exclusive shelf", "read status", "bookshelf"]),
("review_name", ["review_name", "review name"]),
("review_body", ["review_content", "my review", "review"]),
("review_name", ["review name"]),
("review_body", ["my review", "review"]),
("rating", ["my rating", "rating", "star rating"]),
(
"date_added",
["shelf_date", "date_added", "date added", "entry date", "added"],
),
("date_started", ["start_date", "date started", "started"]),
(
"date_finished",
["finish_date", "date finished", "last date read", "date read", "finished"],
),
("date_added", ["date added", "entry date", "added"]),
("date_started", ["date started", "started"]),
("date_finished", ["date finished", "last date read", "date read", "finished"]),
]
# TODO: stopped
date_fields = ["date_added", "date_started", "date_finished"]
shelf_mapping_guesses = {
"to-read": ["to-read", "want to read"],
@@ -45,33 +34,20 @@ class Importer:
"reading": ["currently-reading", "reading", "currently reading"],
}
# pylint: disable=too-many-arguments
def create_job(
self,
user: User,
csv_file: Iterable[str],
include_reviews: bool,
privacy: str,
create_shelves: bool = True,
) -> ImportJob:
# pylint: disable=too-many-locals
def create_job(self, user, csv_file, include_reviews, privacy):
"""check over a csv and creates a database entry for the job"""
csv_reader = csv.DictReader(csv_file, delimiter=self.delimiter)
rows = list(csv_reader)
if len(rows) < 1:
raise ValueError("CSV file is empty")
mappings = (
self.create_row_mappings(list(fieldnames))
if (fieldnames := csv_reader.fieldnames)
else {}
)
rows = enumerate(rows)
job = ImportJob.objects.create(
user=user,
include_reviews=include_reviews,
create_shelves=create_shelves,
privacy=privacy,
mappings=mappings,
mappings=self.create_row_mappings(csv_reader.fieldnames),
source=self.service,
)
@@ -79,20 +55,16 @@ class Importer:
if enforce_limit and allowed_imports <= 0:
job.complete_job()
return job
for index, entry in enumerate(rows):
for index, entry in rows:
if enforce_limit and index >= allowed_imports:
break
self.create_item(job, index, entry)
return job
def update_legacy_job(self, job: ImportJob) -> None:
def update_legacy_job(self, job):
"""patch up a job that was in the old format"""
items = job.items
first_item = items.first()
if first_item is None:
return
headers = list(first_item.data.keys())
headers = list(items.first().data.keys())
job.mappings = self.create_row_mappings(headers)
job.updated_date = timezone.now()
job.save()
@@ -103,24 +75,24 @@ class Importer:
item.normalized_data = normalized
item.save()
def create_row_mappings(self, headers: list[str]) -> dict[str, Optional[str]]:
def create_row_mappings(self, headers):
"""guess what the headers mean"""
mappings = {}
for (key, guesses) in self.row_mappings_guesses:
values = [h for h in headers if h.lower() in guesses]
value = values[0] if len(values) else None
value = [h for h in headers if h.lower() in guesses]
value = value[0] if len(value) else None
if value:
headers.remove(value)
mappings[key] = value
return mappings
def create_item(self, job: ImportJob, index: int, data: dict[str, str]) -> None:
def create_item(self, job, index, data):
"""creates and saves an import item"""
normalized = self.normalize_row(data, job.mappings)
normalized["shelf"] = self.get_shelf(normalized)
ImportItem(job=job, index=index, data=data, normalized_data=normalized).save()
def get_shelf(self, normalized_row: dict[str, Optional[str]]) -> Optional[str]:
def get_shelf(self, normalized_row):
"""determine which shelf to use"""
shelf_name = normalized_row.get("shelf")
if not shelf_name:
@@ -129,17 +101,13 @@ class Importer:
shelf = [
s for (s, gs) in self.shelf_mapping_guesses.items() if shelf_name in gs
]
return shelf[0] if shelf else normalized_row.get("shelf") or None
return shelf[0] if shelf else None
# pylint: disable=no-self-use
def normalize_row(
self, entry: dict[str, str], mappings: dict[str, Optional[str]]
) -> dict[str, Optional[str]]:
def normalize_row(self, entry, mappings): # pylint: disable=no-self-use
"""use the dataclass to create the formatted row of data"""
return {k: entry.get(v) if v else None for k, v in mappings.items()}
return {k: entry.get(v) for k, v in mappings.items()}
# pylint: disable=no-self-use
def get_import_limit(self, user: User) -> tuple[int, int]:
def get_import_limit(self, user): # pylint: disable=no-self-use
"""check if import limit is set and return how many imports are left"""
site_settings = SiteSettings.objects.get()
import_size_limit = site_settings.import_size_limit
@@ -157,14 +125,11 @@ class Importer:
allowed_imports = import_size_limit - imported_books
return enforce_limit, allowed_imports
def create_retry_job(
self, user: User, original_job: ImportJob, items: list[ImportItem]
) -> ImportJob:
def create_retry_job(self, user, original_job, items):
"""retry items that didn't import"""
job = ImportJob.objects.create(
user=user,
include_reviews=original_job.include_reviews,
create_shelves=original_job.create_shelves,
privacy=original_job.privacy,
source=original_job.source,
# TODO: allow users to adjust mappings
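
To make the header-guessing above concrete: a self-contained sketch of create_row_mappings with a truncated guesses table. Each csv header can satisfy at most one field, which is why matched headers are removed from the pool.

from typing import Optional

row_mappings_guesses = [
    ("title", ["title"]),
    ("authors", ["author", "authors", "primary author"]),
    ("rating", ["my rating", "rating", "star rating"]),
]

def create_row_mappings(headers: list[str]) -> dict[str, Optional[str]]:
    """guess which csv header maps to which field; each header is used once"""
    mappings = {}
    for key, guesses in row_mappings_guesses:
        matches = [h for h in headers if h.lower() in guesses]
        value = matches[0] if matches else None
        if value:
            headers.remove(value)  # a header can only satisfy one field
        mappings[key] = value
    return mappings

print(create_row_mappings(["Title", "Author", "My Rating"]))
# {'title': 'Title', 'authors': 'Author', 'rating': 'My Rating'}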

View file

@@ -1,16 +1,11 @@
""" handle reading a tsv from librarything """
import re
from typing import Optional
from bookwyrm.models import Shelf
from . import Importer
def _remove_brackets(value: Optional[str]) -> Optional[str]:
return re.sub(r"\[|\]", "", value) if value else None
class LibrarythingImporter(Importer):
"""csv downloads from librarything"""
@@ -18,19 +13,16 @@ class LibrarythingImporter(Importer):
delimiter = "\t"
encoding = "ISO-8859-1"
def normalize_row(
self, entry: dict[str, str], mappings: dict[str, Optional[str]]
) -> dict[str, Optional[str]]:
def normalize_row(self, entry, mappings): # pylint: disable=no-self-use
"""use the dataclass to create the formatted row of data"""
normalized = {
k: _remove_brackets(entry.get(v) if v else None)
for k, v in mappings.items()
}
isbn_13 = value.split(", ") if (value := normalized.get("isbn_13")) else []
remove_brackets = lambda v: re.sub(r"\[|\]", "", v) if v else None
normalized = {k: remove_brackets(entry.get(v)) for k, v in mappings.items()}
isbn_13 = normalized.get("isbn_13")
isbn_13 = isbn_13.split(", ") if isbn_13 else []
normalized["isbn_13"] = isbn_13[1] if len(isbn_13) > 1 else None
return normalized
def get_shelf(self, normalized_row: dict[str, Optional[str]]) -> Optional[str]:
def get_shelf(self, normalized_row):
if normalized_row["date_finished"]:
return Shelf.READ_FINISHED
if normalized_row["date_started"]:

View file

@@ -1,6 +1,4 @@
""" handle reading a csv from openlibrary"""
from typing import Any
from . import Importer
@@ -9,7 +7,7 @@ class OpenLibraryImporter(Importer):
service = "OpenLibrary"
def __init__(self, *args: Any, **kwargs: Any):
def __init__(self, *args, **kwargs):
self.row_mappings_guesses.append(("openlibrary_key", ["edition id"]))
self.row_mappings_guesses.append(("openlibrary_work_key", ["work id"]))
super().__init__(*args, **kwargs)

File diff suppressed because it is too large

View file

@@ -1,128 +0,0 @@
""" Use the range message from isbn-international to hyphenate ISBNs """
import os
from typing import Optional
from xml.etree import ElementTree
from xml.etree.ElementTree import Element
import requests
from bookwyrm import settings
def _get_rules(element: Element) -> list[Element]:
if (rules_el := element.find("Rules")) is not None:
return rules_el.findall("Rule")
return []
class IsbnHyphenator:
"""Class to manage the range message xml file and use it to hyphenate ISBNs"""
__range_message_url = "https://www.isbn-international.org/export_rangemessage.xml"
__range_file_path = os.path.join(
settings.BASE_DIR, "bookwyrm", "isbn", "RangeMessage.xml"
)
__element_tree = None
def update_range_message(self) -> None:
"""Download the range message xml file and save it locally"""
response = requests.get(self.__range_message_url, timeout=15)
with open(self.__range_file_path, "w", encoding="utf-8") as file:
file.write(response.text)
self.__element_tree = None
def hyphenate(self, isbn_13: Optional[str]) -> Optional[str]:
"""hyphenate the given ISBN-13 number using the range message"""
if isbn_13 is None:
return None
if self.__element_tree is None:
self.__element_tree = ElementTree.parse(self.__range_file_path)
gs1_prefix = isbn_13[:3]
try:
reg_group = self.__find_reg_group(isbn_13, gs1_prefix)
except ValueError:
# if the reg groups are invalid, just return the original isbn
return isbn_13
if reg_group is None:
return isbn_13 # failed to hyphenate
registrant = self.__find_registrant(isbn_13, gs1_prefix, reg_group)
if registrant is None:
return isbn_13 # failed to hyphenate
publication = isbn_13[len(gs1_prefix) + len(reg_group) + len(registrant) : -1]
check_digit = isbn_13[-1:]
return "-".join((gs1_prefix, reg_group, registrant, publication, check_digit))
def __find_reg_group(self, isbn_13: str, gs1_prefix: str) -> Optional[str]:
if self.__element_tree is None:
self.__element_tree = ElementTree.parse(self.__range_file_path)
ucc_prefixes_el = self.__element_tree.find("EAN.UCCPrefixes")
if ucc_prefixes_el is None:
return None
for ean_ucc_el in ucc_prefixes_el.findall("EAN.UCC"):
if (
prefix_el := ean_ucc_el.find("Prefix")
) is not None and prefix_el.text == gs1_prefix:
for rule_el in _get_rules(ean_ucc_el):
length_el = rule_el.find("Length")
if length_el is None:
continue
length = int(text) if (text := length_el.text) else 0
if length == 0:
continue
range_el = rule_el.find("Range")
if range_el is None or range_el.text is None:
continue
reg_grp_range = [int(x[:length]) for x in range_el.text.split("-")]
reg_group = isbn_13[len(gs1_prefix) : len(gs1_prefix) + length]
if reg_grp_range[0] <= int(reg_group) <= reg_grp_range[1]:
return reg_group
return None
return None
def __find_registrant(
self, isbn_13: str, gs1_prefix: str, reg_group: str
) -> Optional[str]:
from_ind = len(gs1_prefix) + len(reg_group)
if self.__element_tree is None:
self.__element_tree = ElementTree.parse(self.__range_file_path)
reg_groups_el = self.__element_tree.find("RegistrationGroups")
if reg_groups_el is None:
return None
for group_el in reg_groups_el.findall("Group"):
if (
prefix_el := group_el.find("Prefix")
) is not None and prefix_el.text == "-".join((gs1_prefix, reg_group)):
for rule_el in _get_rules(group_el):
length_el = rule_el.find("Length")
if length_el is None:
continue
length = int(text) if (text := length_el.text) else 0
if length == 0:
continue
range_el = rule_el.find("Range")
if range_el is None or range_el.text is None:
continue
registrant_range = [
int(x[:length]) for x in range_el.text.split("-")
]
registrant = isbn_13[from_ind : from_ind + length]
if registrant_range[0] <= int(registrant) <= registrant_range[1]:
return registrant
return None
return None
hyphenator_singleton = IsbnHyphenator()
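
Once the range rules have fixed the part lengths, the hyphenation itself is pure slicing; a worked example with the part boundaries filled in by hand (the real class derives reg_group and registrant from the downloaded RangeMessage.xml):

isbn_13 = "9780618346257"
gs1_prefix = isbn_13[:3]   # "978"
reg_group = "0"            # would be found via __find_reg_group
registrant = "618"         # would be found via __find_registrant
publication = isbn_13[len(gs1_prefix) + len(reg_group) + len(registrant) : -1]
check_digit = isbn_13[-1:]
print("-".join((gs1_prefix, reg_group, registrant, publication, check_digit)))
# 978-0-618-34625-7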

View file

@@ -5,7 +5,7 @@ from django.db.models import signals, Count, Q
from bookwyrm import models
from bookwyrm.redis_store import RedisStore
from bookwyrm.tasks import app, LISTS
from bookwyrm.tasks import app, MEDIUM, HIGH
class ListsStream(RedisStore):
@@ -18,7 +18,7 @@ class ListsStream(RedisStore):
return f"{user}-lists"
return f"{user.id}-lists"
def get_rank(self, obj):
def get_rank(self, obj): # pylint: disable=no-self-use
"""lists are sorted by updated date"""
return obj.updated_date.timestamp()
@@ -217,14 +217,14 @@ def add_list_on_account_create_command(user_id):
# ---- TASKS
@app.task(queue=LISTS)
@app.task(queue=MEDIUM)
def populate_lists_task(user_id):
"""background task for populating an empty list stream"""
user = models.User.objects.get(id=user_id)
ListsStream().populate_lists(user)
@app.task(queue=LISTS)
@app.task(queue=MEDIUM)
def remove_list_task(list_id, re_add=False):
"""remove a list from any stream it might be in"""
stores = models.User.objects.filter(local=True, is_active=True).values_list(
@@ -239,14 +239,14 @@ def remove_list_task(list_id, re_add=False):
add_list_task.delay(list_id)
@app.task(queue=LISTS)
@app.task(queue=HIGH)
def add_list_task(list_id):
"""add a list to any stream it should be in"""
book_list = models.List.objects.get(id=list_id)
ListsStream().add_list(book_list)
@app.task(queue=LISTS)
@app.task(queue=MEDIUM)
def remove_user_lists_task(viewer_id, user_id, exclude_privacy=None):
"""remove all lists by a user from a viewer's stream"""
viewer = models.User.objects.get(id=viewer_id)
@@ -254,7 +254,7 @@ def remove_user_lists_task(viewer_id, user_id, exclude_privacy=None):
ListsStream().remove_user_lists(viewer, user, exclude_privacy=exclude_privacy)
@app.task(queue=LISTS)
@app.task(queue=MEDIUM)
def add_user_lists_task(viewer_id, user_id):
"""add all lists by a user to a viewer's stream"""
viewer = models.User.objects.get(id=viewer_id)

View file

@@ -1,14 +1,13 @@
""" PROCEED WITH CAUTION: uses deduplication fields to permanently
merge book data objects """
from django.core.management.base import BaseCommand
from django.db.models import Count
from bookwyrm import models
from bookwyrm.management.merge import merge_objects
def dedupe_model(model, dry_run=False):
def dedupe_model(model):
"""combine duplicate editions and update related models"""
print(f"deduplicating {model.__name__}:")
fields = model._meta.get_fields()
dedupe_fields = [
f for f in fields if hasattr(f, "deduplication_field") and f.deduplication_field
@@ -17,42 +16,30 @@ def dedupe_model(model, dry_run=False):
dupes = (
model.objects.values(field.name)
.annotate(Count(field.name))
.filter(**{f"{field.name}__count__gt": 1})
.exclude(**{field.name: ""})
.exclude(**{f"{field.name}__isnull": True})
.filter(**{"%s__count__gt" % field.name: 1})
)
for dupe in dupes:
value = dupe[field.name]
if not value or value == "":
continue
print("----------")
print(dupe)
objs = model.objects.filter(**{field.name: value}).order_by("id")
canonical = objs.first()
action = "would merge" if dry_run else "merging"
print(
f"{action} into {model.__name__} {canonical.remote_id} based on {field.name} {value}:"
)
print("keeping", canonical.remote_id)
for obj in objs[1:]:
print(f"- {obj.remote_id}")
absorbed_fields = obj.merge_into(canonical, dry_run=dry_run)
print(f" absorbed fields: {absorbed_fields}")
print(obj.remote_id)
merge_objects(canonical, obj)
class Command(BaseCommand):
"""deduplicate allllll the book data models"""
help = "merges duplicate book data"
def add_arguments(self, parser):
"""add the arguments for this command"""
parser.add_argument(
"--dry_run",
action="store_true",
help="don't actually merge, only print what would happen",
)
# pylint: disable=no-self-use,unused-argument
def handle(self, *args, **options):
"""run deduplications"""
dedupe_model(models.Edition, dry_run=options["dry_run"])
dedupe_model(models.Work, dry_run=options["dry_run"])
dedupe_model(models.Author, dry_run=options["dry_run"])
dedupe_model(models.Edition)
dedupe_model(models.Work)
dedupe_model(models.Author)
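
The duplicate lookup in the dry-run branch above generalizes to any deduplication field; a sketch, assuming a configured Django project and a model with the named field:

from typing import Type

from django.db.models import Count, Model, QuerySet

def find_dupes(model: Type[Model], field_name: str) -> QuerySet:
    """values of field_name shared by more than one row, ignoring blanks"""
    return (
        model.objects.values(field_name)
        .annotate(Count(field_name))
        .filter(**{f"{field_name}__count__gt": 1})
        .exclude(**{field_name: ""})
        .exclude(**{f"{field_name}__isnull": True})
    )

# usage (inside a bookwyrm shell): find_dupes(models.Edition, "isbn_13")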

View file

@@ -1,43 +0,0 @@
""" Erase any data stored about deleted users """
import sys
from django.core.management.base import BaseCommand, CommandError
from bookwyrm import models
from bookwyrm.models.user import erase_user_data
# pylint: disable=missing-function-docstring
class Command(BaseCommand):
"""command-line options"""
help = "Remove Two Factor Authorisation from user"
def add_arguments(self, parser): # pylint: disable=no-self-use
parser.add_argument(
"--dryrun",
action="store_true",
help="Preview users to be cleared without altering the database",
)
def handle(self, *args, **options): # pylint: disable=unused-argument
# Check for anything fishy
bad_state = models.User.objects.filter(is_deleted=True, is_active=True)
if bad_state.exists():
raise CommandError(
f"{bad_state.count()} user(s) marked as both active and deleted"
)
deleted_users = models.User.objects.filter(is_deleted=True)
self.stdout.write(f"Found {deleted_users.count()} deleted users")
if options["dryrun"]:
self.stdout.write("\n".join(u.username for u in deleted_users[:5]))
if deleted_users.count() > 5:
self.stdout.write("... and more")
sys.exit()
self.stdout.write("Erasing user data:")
for user_id in deleted_users.values_list("id", flat=True):
erase_user_data.delay(user_id)
self.stdout.write(".", ending="")
self.stdout.write("")
self.stdout.write("Tasks created successfully")

View file

@@ -0,0 +1,54 @@
""" Get your admin code to allow install """
from django.core.management.base import BaseCommand
from bookwyrm import models
from bookwyrm.settings import VERSION
# pylint: disable=no-self-use
class Command(BaseCommand):
"""command-line options"""
help = "What version is this?"
def add_arguments(self, parser):
"""specify which function to run"""
parser.add_argument(
"--current",
action="store_true",
help="Version stored in database",
)
parser.add_argument(
"--target",
action="store_true",
help="Version stored in settings",
)
parser.add_argument(
"--update",
action="store_true",
help="Update database version",
)
# pylint: disable=unused-argument
def handle(self, *args, **options):
"""execute init"""
site = models.SiteSettings.objects.get()
current = site.version or "0.0.1"
target = VERSION
if options.get("current"):
print(current)
return
if options.get("target"):
print(target)
return
if options.get("update"):
site.version = target
site.save()
return
if current != target:
print(f"{current}/{target}")
else:
print(current)

View file

@@ -1,21 +0,0 @@
""" Repair editions with missing works """
from django.core.management.base import BaseCommand
from bookwyrm import models
class Command(BaseCommand):
"""command-line options"""
help = "Repairs an edition that is in a broken state"
# pylint: disable=unused-argument
def handle(self, *args, **options):
"""Find and repair broken editions"""
# Find broken editions
editions = models.Edition.objects.filter(parent_work__isnull=True)
self.stdout.write(f"Repairing {editions.count()} edition(s):")
# Do repair
for edition in editions:
edition.repair()
self.stdout.write(".", ending="")

View file

@@ -0,0 +1,50 @@
from django.db.models import ManyToManyField
def update_related(canonical, obj):
"""update all the models with fk to the object being removed"""
# move related models to canonical
related_models = [
(r.remote_field.name, r.related_model) for r in canonical._meta.related_objects
]
for (related_field, related_model) in related_models:
# Skip the ManyToMany fields that aren't auto-created. These
# should have a corresponding OneToMany field in the model for
# the linking table anyway. If we update it through that model
# instead then we won't lose the extra fields in the linking
# table.
related_field_obj = related_model._meta.get_field(related_field)
if isinstance(related_field_obj, ManyToManyField):
through = related_field_obj.remote_field.through
if not through._meta.auto_created:
continue
related_objs = related_model.objects.filter(**{related_field: obj})
for related_obj in related_objs:
print("replacing in", related_model.__name__, related_field, related_obj.id)
try:
setattr(related_obj, related_field, canonical)
related_obj.save()
except TypeError:
getattr(related_obj, related_field).add(canonical)
getattr(related_obj, related_field).remove(obj)
def copy_data(canonical, obj):
"""try to get the most data possible"""
for data_field in obj._meta.get_fields():
if not hasattr(data_field, "activitypub_field"):
continue
data_value = getattr(obj, data_field.name)
if not data_value:
continue
if not getattr(canonical, data_field.name):
print("setting data field", data_field.name, data_value)
setattr(canonical, data_field.name, data_value)
canonical.save()
def merge_objects(canonical, obj):
copy_data(canonical, obj)
update_related(canonical, obj)
# remove the outdated entry
obj.delete()

View file

@@ -1,3 +1,4 @@
from bookwyrm.management.merge import merge_objects
from django.core.management.base import BaseCommand
@@ -8,11 +9,6 @@ class MergeCommand(BaseCommand):
"""add the arguments for this command"""
parser.add_argument("--canonical", type=int, required=True)
parser.add_argument("--other", type=int, required=True)
parser.add_argument(
"--dry_run",
action="store_true",
help="don't actually merge, only print what would happen",
)
# pylint: disable=no-self-use,unused-argument
def handle(self, *args, **options):
@@ -30,8 +26,4 @@ class MergeCommand(BaseCommand):
print("other book doesnt exist!")
return
absorbed_fields = other.merge_into(canonical, dry_run=options["dry_run"])
action = "would be" if options["dry_run"] else "has been"
print(f"{other.remote_id} {action} merged into {canonical.remote_id}")
print(f"absorbed fields: {absorbed_fields}")
merge_objects(canonical, other)

View file

@@ -1,4 +1,3 @@
""" look at all this nice middleware! """
from .timezone_middleware import TimezoneMiddleware
from .ip_middleware import IPBlocklistMiddleware
from .file_too_big import FileTooBig

View file

@@ -1,30 +0,0 @@
"""Middleware to display a custom 413 error page"""
from django.http import HttpResponse
from django.shortcuts import render
from django.core.exceptions import RequestDataTooBig
class FileTooBig:
"""Middleware to display a custom page when a
RequestDataTooBig exception is thrown"""
def __init__(self, get_response):
"""boilerplate __init__ from Django docs"""
self.get_response = get_response
def __call__(self, request):
"""If RequestDataTooBig is thrown, render the 413 error page"""
try:
body = request.body # pylint: disable=unused-variable
except RequestDataTooBig:
rendered = render(request, "413.html")
response = HttpResponse(rendered)
return response
response = self.get_response(request)
return response

View file

@@ -1,5 +1,5 @@
""" Makes the app aware of the users timezone """
import zoneinfo
import pytz
from django.utils import timezone
@@ -12,7 +12,9 @@ class TimezoneMiddleware:
def __call__(self, request):
if request.user.is_authenticated:
timezone.activate(zoneinfo.ZoneInfo(request.user.preferred_timezone))
timezone.activate(pytz.timezone(request.user.preferred_timezone))
else:
timezone.deactivate()
return self.get_response(request)
timezone.activate(pytz.utc)
response = self.get_response(request)
timezone.deactivate()
return response
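
The substantive change above is zoneinfo (stdlib, Python 3.9+) replacing pytz; both produce a tzinfo that django.utils.timezone.activate() accepts. A small comparison, with an arbitrary zone name:

import zoneinfo
from datetime import datetime

import pytz

tz_new = zoneinfo.ZoneInfo("Europe/Berlin")  # stdlib
tz_old = pytz.timezone("Europe/Berlin")      # third-party dependency

# zoneinfo objects attach directly; pytz needs localize() for correct offsets
print(datetime(2023, 7, 1, 12, 0, tzinfo=tz_new).isoformat())
# 2023-07-01T12:00:00+02:00
print(tz_old.localize(datetime(2023, 7, 1, 12, 0)).isoformat())
# 2023-07-01T12:00:00+02:00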

View file

@@ -10,7 +10,6 @@ class Migration(migrations.Migration):
]
operations = [
# The new timezones are "Factory" and "localtime"
migrations.AlterField(
model_name="user",
name="preferred_timezone",

View file

@@ -1,51 +0,0 @@
import re
from itertools import chain
from django.db import migrations, transaction
from django.db.models import Q
from bookwyrm.settings import LANGUAGE_ARTICLES
def set_sort_title(edition):
articles = chain(
*(LANGUAGE_ARTICLES.get(language, ()) for language in tuple(edition.languages))
)
edition.sort_title = re.sub(
f'^{" |^".join(articles)} ', "", str(edition.title).lower()
)
return edition
@transaction.atomic
def populate_sort_title(apps, schema_editor):
Edition = apps.get_model("bookwyrm", "Edition")
db_alias = schema_editor.connection.alias
editions_wo_sort_title = Edition.objects.using(db_alias).filter(
Q(sort_title__isnull=True) | Q(sort_title__exact="")
)
batch_size = 1000
start = 0
end = batch_size
while True:
batch = editions_wo_sort_title[start:end]
if not batch.exists():
break
Edition.objects.bulk_update(
(set_sort_title(edition) for edition in batch), ["sort_title"]
)
start = end
end += batch_size
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0178_auto_20230328_2132"),
]
operations = [
migrations.RunPython(
populate_sort_title, reverse_code=migrations.RunPython.noop
),
]
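
The interesting bit of this removed migration is set_sort_title; a standalone sketch with a toy articles table (the project's real LANGUAGE_ARTICLES lives in settings and may be keyed differently):

import re
from itertools import chain

LANGUAGE_ARTICLES = {"English": ("the", "a", "an")}  # toy table

def sort_title(title: str, languages: tuple) -> str:
    """strip a leading article so 'The Hobbit' sorts under h"""
    articles = chain(
        *(LANGUAGE_ARTICLES.get(language, ()) for language in languages)
    )
    return re.sub(f'^{" |^".join(articles)} ', "", title.lower())

print(sort_title("The Hobbit", ("English",)))  # hobbit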

View file

@@ -1,36 +0,0 @@
# Generated by Django 3.2.18 on 2023-05-16 16:02
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0178_auto_20230328_2132"),
]
operations = [
migrations.AddField(
model_name="reportcomment",
name="action_type",
field=models.CharField(
choices=[
("comment", "Comment"),
("resolve", "Resolved report"),
("reopen", "Re-opened report"),
("message_reporter", "Messaged reporter"),
("message_offender", "Messaged reported user"),
("user_suspension", "Suspended user"),
("user_unsuspension", "Un-suspended user"),
("user_perms", "Changed user permission level"),
("user_deletion", "Deleted user account"),
("block_domain", "Blocked domain"),
("approve_domain", "Approved domain"),
("delete_item", "Deleted item"),
],
default="comment",
max_length=20,
),
),
migrations.RenameModel("ReportComment", "ReportAction"),
]

View file

@@ -1,17 +0,0 @@
# Generated by Django 3.2.18 on 2023-06-21 22:01
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0179_reportcomment_comment_type"),
]
operations = [
migrations.AlterModelOptions(
name="reportaction",
options={"ordering": ("created_date",)},
),
]

View file

@@ -1,44 +0,0 @@
# Generated by Django 3.2.19 on 2023-07-23 19:33
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0179_populate_sort_title"),
]
operations = [
migrations.AlterField(
model_name="user",
name="preferred_language",
field=models.CharField(
blank=True,
choices=[
("en-us", "English"),
("ca-es", "Català (Catalan)"),
("de-de", "Deutsch (German)"),
("eo-uy", "Esperanto (Esperanto)"),
("es-es", "Español (Spanish)"),
("eu-es", "Euskara (Basque)"),
("gl-es", "Galego (Galician)"),
("it-it", "Italiano (Italian)"),
("fi-fi", "Suomi (Finnish)"),
("fr-fr", "Français (French)"),
("lt-lt", "Lietuvių (Lithuanian)"),
("nl-nl", "Nederlands (Dutch)"),
("no-no", "Norsk (Norwegian)"),
("pl-pl", "Polski (Polish)"),
("pt-br", "Português do Brasil (Brazilian Portuguese)"),
("pt-pt", "Português Europeu (European Portuguese)"),
("ro-ro", "Română (Romanian)"),
("sv-se", "Svenska (Swedish)"),
("zh-hans", "简体中文 (Simplified Chinese)"),
("zh-hant", "繁體中文 (Traditional Chinese)"),
],
max_length=255,
null=True,
),
),
]

View file

@@ -1,13 +0,0 @@
# Generated by Django 3.2.20 on 2023-08-06 23:02
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0180_alter_reportaction_options"),
("bookwyrm", "0180_alter_user_preferred_language"),
]
operations = []

View file

@@ -1,130 +0,0 @@
# Generated by Django 3.2.20 on 2023-10-27 11:22
import bookwyrm.models.activitypub_mixin
import bookwyrm.models.fields
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0181_merge_20230806_2302"),
]
operations = [
migrations.AddField(
model_name="user",
name="also_known_as",
field=bookwyrm.models.fields.ManyToManyField(to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name="user",
name="moved_to",
field=bookwyrm.models.fields.RemoteIdField(
max_length=255,
null=True,
validators=[bookwyrm.models.fields.validate_remote_id],
),
),
migrations.AlterField(
model_name="notification",
name="notification_type",
field=models.CharField(
choices=[
("FAVORITE", "Favorite"),
("REPLY", "Reply"),
("MENTION", "Mention"),
("TAG", "Tag"),
("FOLLOW", "Follow"),
("FOLLOW_REQUEST", "Follow Request"),
("BOOST", "Boost"),
("IMPORT", "Import"),
("ADD", "Add"),
("REPORT", "Report"),
("LINK_DOMAIN", "Link Domain"),
("INVITE", "Invite"),
("ACCEPT", "Accept"),
("JOIN", "Join"),
("LEAVE", "Leave"),
("REMOVE", "Remove"),
("GROUP_PRIVACY", "Group Privacy"),
("GROUP_NAME", "Group Name"),
("GROUP_DESCRIPTION", "Group Description"),
("MOVE", "Move"),
],
max_length=255,
),
),
migrations.CreateModel(
name="Move",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("created_date", models.DateTimeField(auto_now_add=True)),
("updated_date", models.DateTimeField(auto_now=True)),
(
"remote_id",
bookwyrm.models.fields.RemoteIdField(
max_length=255,
null=True,
validators=[bookwyrm.models.fields.validate_remote_id],
),
),
("object", bookwyrm.models.fields.CharField(max_length=255)),
(
"origin",
bookwyrm.models.fields.CharField(
blank=True, default="", max_length=255, null=True
),
),
(
"user",
bookwyrm.models.fields.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
to=settings.AUTH_USER_MODEL,
),
),
],
options={
"abstract": False,
},
bases=(bookwyrm.models.activitypub_mixin.ActivityMixin, models.Model),
),
migrations.CreateModel(
name="MoveUser",
fields=[
(
"move_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="bookwyrm.move",
),
),
(
"target",
bookwyrm.models.fields.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name="move_target",
to=settings.AUTH_USER_MODEL,
),
),
],
options={
"abstract": False,
},
bases=("bookwyrm.move",),
),
]

View file

@@ -1,18 +0,0 @@
# Generated by Django 3.2.20 on 2023-11-05 16:07
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0182_auto_20231027_1122"),
]
operations = [
migrations.AddField(
model_name="user",
name="is_deleted",
field=models.BooleanField(default=False),
),
]

View file

@@ -1,35 +0,0 @@
# Generated by Django 3.2.20 on 2023-11-06 04:21
from django.db import migrations
from bookwyrm.models import User
def update_deleted_users(apps, schema_editor):
"""Find all the users who are deleted, not just inactive, and set deleted"""
users = apps.get_model("bookwyrm", "User")
db_alias = schema_editor.connection.alias
users.objects.using(db_alias).filter(
is_active=False,
deactivation_reason__in=[
"self_deletion",
"moderator_deletion",
],
).update(is_deleted=True)
    # different rules for remote users
    users.objects.using(db_alias).filter(is_active=False, local=False).exclude(
deactivation_reason="moderator_deactivation",
).update(is_deleted=True)
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0183_auto_20231105_1607"),
]
operations = [
migrations.RunPython(
update_deleted_users, reverse_code=migrations.RunPython.noop
),
]
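
This data migration follows the standard RunPython recipe: fetch the historical model with apps.get_model rather than importing it directly, write through the schema editor's connection alias, and pass a no-op reverse_code so the migration can still be unapplied. The same recipe in miniature, with hypothetical names (example_app, ExampleModel, set_flag):

from django.db import migrations


def set_flag(apps, schema_editor):
    """Backfill a flag using the historical model state."""
    # apps.get_model returns the model as it existed at this point in the
    # migration history, so later schema changes cannot break this code.
    model = apps.get_model("example_app", "ExampleModel")
    model.objects.using(schema_editor.connection.alias).update(flag=True)


class Migration(migrations.Migration):
    dependencies = [("example_app", "0001_initial")]
    operations = [
        migrations.RunPython(set_flag, reverse_code=migrations.RunPython.noop),
    ]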

@@ -1,42 +0,0 @@
# Generated by Django 3.2.20 on 2023-11-13 22:39
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0184_auto_20231106_0421"),
]
operations = [
migrations.AlterField(
model_name="notification",
name="notification_type",
field=models.CharField(
choices=[
("FAVORITE", "Favorite"),
("BOOST", "Boost"),
("REPLY", "Reply"),
("MENTION", "Mention"),
("TAG", "Tag"),
("FOLLOW", "Follow"),
("FOLLOW_REQUEST", "Follow Request"),
("IMPORT", "Import"),
("ADD", "Add"),
("REPORT", "Report"),
("LINK_DOMAIN", "Link Domain"),
("INVITE", "Invite"),
("ACCEPT", "Accept"),
("JOIN", "Join"),
("LEAVE", "Leave"),
("REMOVE", "Remove"),
("GROUP_PRIVACY", "Group Privacy"),
("GROUP_NAME", "Group Name"),
("GROUP_DESCRIPTION", "Group Description"),
("MOVE", "Move"),
],
max_length=255,
),
),
]

@@ -1,212 +0,0 @@
# Generated by Django 3.2.20 on 2023-11-16 00:48
from django.conf import settings
import django.contrib.postgres.fields
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0185_alter_notification_notification_type"),
]
operations = [
migrations.CreateModel(
name="ParentJob",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("task_id", models.UUIDField(blank=True, null=True, unique=True)),
(
"created_date",
models.DateTimeField(default=django.utils.timezone.now),
),
(
"updated_date",
models.DateTimeField(default=django.utils.timezone.now),
),
("complete", models.BooleanField(default=False)),
(
"status",
models.CharField(
choices=[
("pending", "Pending"),
("active", "Active"),
("complete", "Complete"),
("stopped", "Stopped"),
("failed", "Failed"),
],
default="pending",
max_length=50,
null=True,
),
),
(
"user",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
to=settings.AUTH_USER_MODEL,
),
),
],
options={
"abstract": False,
},
),
migrations.AddField(
model_name="sitesettings",
name="user_import_time_limit",
field=models.IntegerField(default=48),
),
migrations.AlterField(
model_name="notification",
name="notification_type",
field=models.CharField(
choices=[
("FAVORITE", "Favorite"),
("BOOST", "Boost"),
("REPLY", "Reply"),
("MENTION", "Mention"),
("TAG", "Tag"),
("FOLLOW", "Follow"),
("FOLLOW_REQUEST", "Follow Request"),
("IMPORT", "Import"),
("USER_IMPORT", "User Import"),
("USER_EXPORT", "User Export"),
("ADD", "Add"),
("REPORT", "Report"),
("LINK_DOMAIN", "Link Domain"),
("INVITE", "Invite"),
("ACCEPT", "Accept"),
("JOIN", "Join"),
("LEAVE", "Leave"),
("REMOVE", "Remove"),
("GROUP_PRIVACY", "Group Privacy"),
("GROUP_NAME", "Group Name"),
("GROUP_DESCRIPTION", "Group Description"),
("MOVE", "Move"),
],
max_length=255,
),
),
migrations.CreateModel(
name="BookwyrmExportJob",
fields=[
(
"parentjob_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="bookwyrm.parentjob",
),
),
("export_data", models.FileField(null=True, upload_to="")),
],
options={
"abstract": False,
},
bases=("bookwyrm.parentjob",),
),
migrations.CreateModel(
name="BookwyrmImportJob",
fields=[
(
"parentjob_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="bookwyrm.parentjob",
),
),
("archive_file", models.FileField(blank=True, null=True, upload_to="")),
("import_data", models.JSONField(null=True)),
(
"required",
django.contrib.postgres.fields.ArrayField(
base_field=models.CharField(blank=True, max_length=50),
blank=True,
size=None,
),
),
],
options={
"abstract": False,
},
bases=("bookwyrm.parentjob",),
),
migrations.CreateModel(
name="ChildJob",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("task_id", models.UUIDField(blank=True, null=True, unique=True)),
(
"created_date",
models.DateTimeField(default=django.utils.timezone.now),
),
(
"updated_date",
models.DateTimeField(default=django.utils.timezone.now),
),
("complete", models.BooleanField(default=False)),
(
"status",
models.CharField(
choices=[
("pending", "Pending"),
("active", "Active"),
("complete", "Complete"),
("stopped", "Stopped"),
("failed", "Failed"),
],
default="pending",
max_length=50,
null=True,
),
),
(
"parent_job",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="child_jobs",
to="bookwyrm.parentjob",
),
),
],
options={
"abstract": False,
},
),
migrations.AddField(
model_name="notification",
name="related_user_export",
field=models.ForeignKey(
null=True,
on_delete=django.db.models.deletion.CASCADE,
to="bookwyrm.bookwyrmexportjob",
),
),
]
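
ChildJob's parent_job foreign key declares related_name="child_jobs", so a parent job can enumerate its children directly. A hypothetical call site, assuming ParentJob is importable from bookwyrm.models:

from bookwyrm.models import ParentJob


def unfinished_children(job_id):
    """Return the child jobs that are not yet complete for one parent job."""
    job = ParentJob.objects.get(id=job_id)
    return job.child_jobs.exclude(status="complete")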

@@ -1,48 +0,0 @@
# Generated by Django 3.2.20 on 2023-11-14 10:02
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0185_alter_notification_notification_type"),
]
operations = [
migrations.AddField(
model_name="notification",
name="related_invite_requests",
field=models.ManyToManyField(to="bookwyrm.InviteRequest"),
),
migrations.AlterField(
model_name="notification",
name="notification_type",
field=models.CharField(
choices=[
("FAVORITE", "Favorite"),
("BOOST", "Boost"),
("REPLY", "Reply"),
("MENTION", "Mention"),
("TAG", "Tag"),
("FOLLOW", "Follow"),
("FOLLOW_REQUEST", "Follow Request"),
("IMPORT", "Import"),
("ADD", "Add"),
("REPORT", "Report"),
("LINK_DOMAIN", "Link Domain"),
("INVITE_REQUEST", "Invite Request"),
("INVITE", "Invite"),
("ACCEPT", "Accept"),
("JOIN", "Join"),
("LEAVE", "Leave"),
("REMOVE", "Remove"),
("GROUP_PRIVACY", "Group Privacy"),
("GROUP_NAME", "Group Name"),
("GROUP_DESCRIPTION", "Group Description"),
("MOVE", "Move"),
],
max_length=255,
),
),
]

@@ -1,54 +0,0 @@
# Generated by Django 3.2.20 on 2023-11-09 16:57
import bookwyrm.models.fields
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0186_invite_request_notification"),
]
operations = [
migrations.AddField(
model_name="book",
name="first_published_date_precision",
field=models.CharField(
blank=True,
choices=[
("DAY", "Day prec."),
("MONTH", "Month prec."),
("YEAR", "Year prec."),
],
editable=False,
max_length=10,
null=True,
),
),
migrations.AddField(
model_name="book",
name="published_date_precision",
field=models.CharField(
blank=True,
choices=[
("DAY", "Day prec."),
("MONTH", "Month prec."),
("YEAR", "Year prec."),
],
editable=False,
max_length=10,
null=True,
),
),
migrations.AlterField(
model_name="book",
name="first_published_date",
field=bookwyrm.models.fields.PartialDateField(blank=True, null=True),
),
migrations.AlterField(
model_name="book",
name="published_date",
field=bookwyrm.models.fields.PartialDateField(blank=True, null=True),
),
]
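
The paired *_precision columns record how much of a stored date is meaningful (DAY, MONTH, or YEAR), so a year-only publication date need not be displayed as January 1st. A self-contained illustration of the idea; bookwyrm's actual PartialDateField API may differ:

from datetime import date


def display_published(value, precision):
    """Render a stored date at its recorded precision."""
    if precision == "YEAR":
        return str(value.year)
    if precision == "MONTH":
        return value.strftime("%B %Y")
    return value.isoformat()


print(display_published(date(1965, 1, 1), "YEAR"))  # "1965", not "1965-01-01"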

@@ -1,18 +0,0 @@
# Generated by Django 3.2.23 on 2023-11-20 18:02
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0187_partial_publication_dates"),
]
operations = [
migrations.AddField(
model_name="theme",
name="loads",
field=models.BooleanField(blank=True, null=True),
),
]

@@ -1,45 +0,0 @@
# Generated by Django 3.2.23 on 2023-12-12 23:42
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0188_theme_loads"),
]
operations = [
migrations.AlterField(
model_name="user",
name="preferred_language",
field=models.CharField(
blank=True,
choices=[
("en-us", "English"),
("ca-es", "Català (Catalan)"),
("de-de", "Deutsch (German)"),
("eo-uy", "Esperanto (Esperanto)"),
("es-es", "Español (Spanish)"),
("eu-es", "Euskara (Basque)"),
("gl-es", "Galego (Galician)"),
("it-it", "Italiano (Italian)"),
("fi-fi", "Suomi (Finnish)"),
("fr-fr", "Français (French)"),
("lt-lt", "Lietuvių (Lithuanian)"),
("nl-nl", "Nederlands (Dutch)"),
("no-no", "Norsk (Norwegian)"),
("pl-pl", "Polski (Polish)"),
("pt-br", "Português do Brasil (Brazilian Portuguese)"),
("pt-pt", "Português Europeu (European Portuguese)"),
("ro-ro", "Română (Romanian)"),
("sv-se", "Svenska (Swedish)"),
("uk-ua", "Українська (Ukrainian)"),
("zh-hans", "简体中文 (Simplified Chinese)"),
("zh-hant", "繁體中文 (Traditional Chinese)"),
],
max_length=255,
null=True,
),
),
]

@@ -1,18 +0,0 @@
# Generated by Django 3.2.23 on 2023-11-25 05:49
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0188_theme_loads"),
]
operations = [
migrations.AddField(
model_name="importjob",
name="create_shelves",
field=models.BooleanField(default=True),
),
]

@@ -1,13 +0,0 @@
# Generated by Django 3.2.23 on 2023-11-22 10:16
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0186_auto_20231116_0048"),
("bookwyrm", "0188_theme_loads"),
]
operations = []

@@ -1,45 +0,0 @@
# Generated by Django 3.2.23 on 2023-11-23 19:49
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0189_merge_0186_auto_20231116_0048_0188_theme_loads"),
]
operations = [
migrations.AlterField(
model_name="notification",
name="notification_type",
field=models.CharField(
choices=[
("FAVORITE", "Favorite"),
("BOOST", "Boost"),
("REPLY", "Reply"),
("MENTION", "Mention"),
("TAG", "Tag"),
("FOLLOW", "Follow"),
("FOLLOW_REQUEST", "Follow Request"),
("IMPORT", "Import"),
("USER_IMPORT", "User Import"),
("USER_EXPORT", "User Export"),
("ADD", "Add"),
("REPORT", "Report"),
("LINK_DOMAIN", "Link Domain"),
("INVITE_REQUEST", "Invite Request"),
("INVITE", "Invite"),
("ACCEPT", "Accept"),
("JOIN", "Join"),
("LEAVE", "Leave"),
("REMOVE", "Remove"),
("GROUP_PRIVACY", "Group Privacy"),
("GROUP_NAME", "Group Name"),
("GROUP_DESCRIPTION", "Group Description"),
("MOVE", "Move"),
],
max_length=255,
),
),
]

@@ -1,16 +0,0 @@
# Generated by Django 3.2.20 on 2023-11-24 17:11
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0188_theme_loads"),
]
operations = [
migrations.RemoveIndex(
model_name="author",
name="bookwyrm_au_search__b050a8_gin",
),
]

@@ -1,13 +0,0 @@
# Generated by Django 3.2.23 on 2024-01-02 03:26
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0189_alter_user_preferred_language"),
("bookwyrm", "0190_alter_notification_notification_type"),
]
operations = []

@@ -1,76 +0,0 @@
# Generated by Django 3.2.20 on 2023-11-25 00:47
from importlib import import_module
import re
from django.db import migrations
import pgtrigger.compiler
import pgtrigger.migrations
trigger_migration = import_module("bookwyrm.migrations.0077_auto_20210623_2155")
# it's _very_ convenient for development that this migration be reversible
search_vector_trigger = trigger_migration.Migration.operations[4]
author_search_vector_trigger = trigger_migration.Migration.operations[5]
assert re.search(r"\bCREATE TRIGGER search_vector_trigger\b", search_vector_trigger.sql)
assert re.search(
r"\bCREATE TRIGGER author_search_vector_trigger\b",
author_search_vector_trigger.sql,
)
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0190_book_search_updates"),
]
operations = [
pgtrigger.migrations.AddTrigger(
model_name="book",
trigger=pgtrigger.compiler.Trigger(
name="update_search_vector_on_book_edit",
sql=pgtrigger.compiler.UpsertTriggerSql(
func="new.search_vector := setweight(coalesce(nullif(to_tsvector('english', new.title), ''), to_tsvector('simple', new.title)), 'A') || setweight(to_tsvector('english', coalesce(new.subtitle, '')), 'B') || (SELECT setweight(to_tsvector('simple', coalesce(array_to_string(array_agg(bookwyrm_author.name), ' '), '')), 'C') FROM bookwyrm_author LEFT JOIN bookwyrm_book_authors ON bookwyrm_author.id = bookwyrm_book_authors.author_id WHERE bookwyrm_book_authors.book_id = new.id ) || setweight(to_tsvector('english', coalesce(new.series, '')), 'D');RETURN NEW;",
hash="77d6399497c0a89b0bf09d296e33c396da63705c",
operation='INSERT OR UPDATE OF "title", "subtitle", "series", "search_vector"',
pgid="pgtrigger_update_search_vector_on_book_edit_bec58",
table="bookwyrm_book",
when="BEFORE",
),
),
),
pgtrigger.migrations.AddTrigger(
model_name="author",
trigger=pgtrigger.compiler.Trigger(
name="reset_search_vector_on_author_edit",
sql=pgtrigger.compiler.UpsertTriggerSql(
func="WITH updated_books AS (SELECT book_id FROM bookwyrm_book_authors WHERE author_id = new.id ) UPDATE bookwyrm_book SET search_vector = '' FROM updated_books WHERE id = updated_books.book_id;RETURN NEW;",
hash="e7bbf08711ff3724c58f4d92fb7a082ffb3d7826",
operation='UPDATE OF "name"',
pgid="pgtrigger_reset_search_vector_on_author_edit_a447c",
table="bookwyrm_author",
when="AFTER",
),
),
),
migrations.RunSQL(
sql="""DROP TRIGGER IF EXISTS search_vector_trigger ON bookwyrm_book;
DROP FUNCTION IF EXISTS book_trigger;
""",
reverse_sql=search_vector_trigger.sql,
),
migrations.RunSQL(
sql="""DROP TRIGGER IF EXISTS author_search_vector_trigger ON bookwyrm_author;
DROP FUNCTION IF EXISTS author_trigger;
""",
reverse_sql=author_search_vector_trigger.sql,
),
migrations.RunSQL(
# Recalculate book search vector for any missed author name changes
# due to bug in JOIN in the old trigger.
sql="UPDATE bookwyrm_book SET search_vector = NULL;",
reverse_sql=migrations.RunSQL.noop,
),
]
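
Two details above are worth noting. First, the CREATE TRIGGER statements from migration 0077 are imported and reused as reverse_sql, which is what keeps the drops reversible. Second, the final UPDATE ... SET search_vector = NULL fires the new BEFORE trigger (its operation list includes "search_vector"), so every book's vector is recomputed in a single pass. The reversibility pattern in miniature, with hypothetical names:

from importlib import import_module

from django.db import migrations

old_migration = import_module("example_app.migrations.0001_create_trigger")
create_trigger = old_migration.Migration.operations[0]  # a RunSQL operation


class Migration(migrations.Migration):
    dependencies = [("example_app", "0001_create_trigger")]
    operations = [
        migrations.RunSQL(
            sql="DROP TRIGGER IF EXISTS example_trigger ON example_table;",
            # Replaying the original CREATE TRIGGER statement on rollback
            # is what makes the drop reversible.
            reverse_sql=create_trigger.sql,
        ),
    ]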

@@ -1,23 +0,0 @@
# Generated by Django 3.2.23 on 2024-01-04 23:56
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0191_merge_20240102_0326"),
]
operations = [
migrations.AlterField(
model_name="quotation",
name="endposition",
field=models.TextField(blank=True, null=True),
),
migrations.AlterField(
model_name="quotation",
name="position",
field=models.TextField(blank=True, null=True),
),
]

@@ -1,18 +0,0 @@
# Generated by Django 3.2.23 on 2024-01-02 19:36
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0191_merge_20240102_0326"),
]
operations = [
migrations.RenameField(
model_name="sitesettings",
old_name="version",
new_name="available_version",
),
]

@@ -1,18 +0,0 @@
# Generated by Django 3.2.23 on 2024-01-16 10:28
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0191_merge_20240102_0326"),
]
operations = [
migrations.AddField(
model_name="sitesettings",
name="user_exports_enabled",
field=models.BooleanField(default=False),
),
]

@@ -1,92 +0,0 @@
# Generated by Django 3.2.23 on 2024-01-28 02:49
import django.core.serializers.json
from django.db import migrations, models
import django.db.models.deletion
from django.core.files.storage import storages
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0192_sitesettings_user_exports_enabled"),
]
operations = [
migrations.AddField(
model_name="bookwyrmexportjob",
name="export_json",
field=models.JSONField(
encoder=django.core.serializers.json.DjangoJSONEncoder, null=True
),
),
migrations.AddField(
model_name="bookwyrmexportjob",
name="json_completed",
field=models.BooleanField(default=False),
),
migrations.AlterField(
model_name="bookwyrmexportjob",
name="export_data",
field=models.FileField(
null=True,
storage=storages["exports"],
upload_to="",
),
),
migrations.CreateModel(
name="AddFileToTar",
fields=[
(
"childjob_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="bookwyrm.childjob",
),
),
(
"parent_export_job",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="child_edition_export_jobs",
to="bookwyrm.bookwyrmexportjob",
),
),
],
options={
"abstract": False,
},
bases=("bookwyrm.childjob",),
),
migrations.CreateModel(
name="AddBookToUserExportJob",
fields=[
(
"childjob_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="bookwyrm.childjob",
),
),
(
"edition",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
to="bookwyrm.edition",
),
),
],
options={
"abstract": False,
},
bases=("bookwyrm.childjob",),
),
]

@@ -1,13 +0,0 @@
# Generated by Django 3.2.23 on 2024-02-03 15:39
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0192_make_page_positions_text"),
("bookwyrm", "0192_sitesettings_user_exports_enabled"),
]
operations = []

@@ -1,13 +0,0 @@
# Generated by Django 3.2.23 on 2024-02-03 16:19
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0192_rename_version_sitesettings_available_version"),
("bookwyrm", "0193_merge_20240203_1539"),
]
operations = []

@@ -1,46 +0,0 @@
# Generated by Django 3.2.23 on 2024-02-21 00:45
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0194_merge_20240203_1619"),
]
operations = [
migrations.AlterField(
model_name="user",
name="preferred_language",
field=models.CharField(
blank=True,
choices=[
("en-us", "English"),
("ca-es", "Català (Catalan)"),
("de-de", "Deutsch (German)"),
("eo-uy", "Esperanto (Esperanto)"),
("es-es", "Español (Spanish)"),
("eu-es", "Euskara (Basque)"),
("gl-es", "Galego (Galician)"),
("it-it", "Italiano (Italian)"),
("ko-kr", "한국어 (Korean)"),
("fi-fi", "Suomi (Finnish)"),
("fr-fr", "Français (French)"),
("lt-lt", "Lietuvių (Lithuanian)"),
("nl-nl", "Nederlands (Dutch)"),
("no-no", "Norsk (Norwegian)"),
("pl-pl", "Polski (Polish)"),
("pt-br", "Português do Brasil (Brazilian Portuguese)"),
("pt-pt", "Português Europeu (European Portuguese)"),
("ro-ro", "Română (Romanian)"),
("sv-se", "Svenska (Swedish)"),
("uk-ua", "Українська (Ukrainian)"),
("zh-hans", "简体中文 (Simplified Chinese)"),
("zh-hant", "繁體中文 (Traditional Chinese)"),
],
max_length=255,
null=True,
),
),
]

@@ -1,13 +0,0 @@
# Generated by Django 3.2.23 on 2024-03-18 17:37
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0193_auto_20240128_0249"),
("bookwyrm", "0195_alter_user_preferred_language"),
]
operations = []

@@ -1,13 +0,0 @@
# Generated by Django 3.2.23 on 2024-03-18 00:48
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0191_migrate_search_vec_triggers_to_pgtriggers"),
("bookwyrm", "0195_alter_user_preferred_language"),
]
operations = []

@@ -1,41 +0,0 @@
# Generated by Django 3.2.25 on 2024-03-20 15:15
import django.contrib.postgres.indexes
from django.db import migrations
import pgtrigger.compiler
import pgtrigger.migrations
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0196_merge_pr3134_into_main"),
]
operations = [
migrations.AddIndex(
model_name="author",
index=django.contrib.postgres.indexes.GinIndex(
fields=["search_vector"], name="bookwyrm_au_search__b050a8_gin"
),
),
pgtrigger.migrations.AddTrigger(
model_name="author",
trigger=pgtrigger.compiler.Trigger(
name="update_search_vector_on_author_edit",
sql=pgtrigger.compiler.UpsertTriggerSql(
func="new.search_vector := setweight(to_tsvector('simple', new.name), 'A') || setweight(to_tsvector('simple', coalesce(array_to_string(new.aliases, ' '), '')), 'B');RETURN NEW;",
hash="b97919016236d74d0ade51a0769a173ea269da64",
operation='INSERT OR UPDATE OF "name", "aliases", "search_vector"',
pgid="pgtrigger_update_search_vector_on_author_edit_c61cb",
table="bookwyrm_author",
when="BEFORE",
),
),
),
migrations.RunSQL(
# Calculate search vector for all Authors.
sql="UPDATE bookwyrm_author SET search_vector = NULL;",
reverse_sql="UPDATE bookwyrm_author SET search_vector = NULL;",
),
]

@@ -1,13 +0,0 @@
# Generated by Django 3.2.25 on 2024-03-24 02:35
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0196_merge_20240318_1737"),
("bookwyrm", "0196_merge_pr3134_into_main"),
]
operations = []

@@ -1,48 +0,0 @@
# Generated by Django 3.2.24 on 2024-02-28 21:30
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0196_merge_pr3134_into_main"),
]
operations = [
migrations.CreateModel(
name="MergedBook",
fields=[
("deleted_id", models.IntegerField(primary_key=True, serialize=False)),
(
"merged_into",
models.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name="absorbed",
to="bookwyrm.book",
),
),
],
options={
"abstract": False,
},
),
migrations.CreateModel(
name="MergedAuthor",
fields=[
("deleted_id", models.IntegerField(primary_key=True, serialize=False)),
(
"merged_into",
models.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name="absorbed",
to="bookwyrm.author",
),
),
],
options={
"abstract": False,
},
),
]
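
MergedBook and MergedAuthor act as tombstones: each maps the id of a deleted record to the record that absorbed it, and on_delete=PROTECT keeps the surviving record from being deleted while tombstones still point at it. A hypothetical redirect lookup, assuming MergedBook is importable from bookwyrm.models:

from bookwyrm.models import MergedBook


def resolve_book_id(book_id):
    """Follow a merged (deleted) book id to the surviving record's id."""
    merged = MergedBook.objects.filter(deleted_id=book_id).first()
    return merged.merged_into_id if merged else book_id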

@@ -1,23 +0,0 @@
# Generated by Django 3.2.25 on 2024-03-26 11:37
import bookwyrm.models.bookwyrm_export_job
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0197_merge_20240324_0235"),
]
operations = [
migrations.AlterField(
model_name="bookwyrmexportjob",
name="export_data",
field=models.FileField(
null=True,
storage=bookwyrm.models.bookwyrm_export_job.select_exports_storage,
upload_to="",
),
),
]
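
Swapping the storages["exports"] instance from migration 0193 for a callable defers backend resolution from import time to run time, so loading the migration no longer requires the exports storage to be configured. A sketch of what such a callable might look like; the real select_exports_storage lives in bookwyrm.models.bookwyrm_export_job:

from django.core.files.storage import storages


def select_exports_storage():
    """Resolve the "exports" storage backend at call time, not import time."""
    return storages["exports"]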

@@ -1,57 +0,0 @@
# Generated by Django 3.2.25 on 2024-03-20 15:52
from django.db import migrations
import pgtrigger.compiler
import pgtrigger.migrations
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0197_author_search_vector"),
]
operations = [
pgtrigger.migrations.RemoveTrigger(
model_name="author",
name="reset_search_vector_on_author_edit",
),
pgtrigger.migrations.RemoveTrigger(
model_name="book",
name="update_search_vector_on_book_edit",
),
pgtrigger.migrations.AddTrigger(
model_name="author",
trigger=pgtrigger.compiler.Trigger(
name="reset_book_search_vector_on_author_edit",
sql=pgtrigger.compiler.UpsertTriggerSql(
func="WITH updated_books AS (SELECT book_id FROM bookwyrm_book_authors WHERE author_id = new.id ) UPDATE bookwyrm_book SET search_vector = '' FROM updated_books WHERE id = updated_books.book_id;RETURN NEW;",
hash="68422c0f29879c5802b82159dde45297eff53e73",
operation='UPDATE OF "name", "aliases"',
pgid="pgtrigger_reset_book_search_vector_on_author_edit_a50c7",
table="bookwyrm_author",
when="AFTER",
),
),
),
pgtrigger.migrations.AddTrigger(
model_name="book",
trigger=pgtrigger.compiler.Trigger(
name="update_search_vector_on_book_edit",
sql=pgtrigger.compiler.UpsertTriggerSql(
func="WITH author_names AS (SELECT array_to_string(bookwyrm_author.name || bookwyrm_author.aliases, ' ') AS name_and_aliases FROM bookwyrm_author LEFT JOIN bookwyrm_book_authors ON bookwyrm_author.id = bookwyrm_book_authors.author_id WHERE bookwyrm_book_authors.book_id = new.id ) SELECT setweight(coalesce(nullif(to_tsvector('english', new.title), ''), to_tsvector('simple', new.title)), 'A') || setweight(to_tsvector('english', coalesce(new.subtitle, '')), 'B') || (SELECT setweight(to_tsvector('simple', coalesce(array_to_string(array_agg(name_and_aliases), ' '), '')), 'C') FROM author_names) || setweight(to_tsvector('english', coalesce(new.series, '')), 'D') INTO new.search_vector;RETURN NEW;",
hash="9324f5ca76a6f5e63931881d62d11da11f595b2c",
operation='INSERT OR UPDATE OF "title", "subtitle", "series", "search_vector"',
pgid="pgtrigger_update_search_vector_on_book_edit_bec58",
table="bookwyrm_book",
when="BEFORE",
),
),
),
migrations.RunSQL(
# Recalculate search vector for all Books because it now includes
# Author aliases.
sql="UPDATE bookwyrm_book SET search_vector = NULL;",
reverse_sql="UPDATE bookwyrm_book SET search_vector = NULL;",
),
]

@@ -1,70 +0,0 @@
# Generated by Django 4.2.11 on 2024-03-29 19:25
import bookwyrm.models.fields
from django.conf import settings
from django.db import migrations
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0198_book_search_vector_author_aliases"),
]
operations = [
migrations.AlterField(
model_name="userblocks",
name="user_object",
field=bookwyrm.models.fields.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name="%(class)s_user_object",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AlterField(
model_name="userblocks",
name="user_subject",
field=bookwyrm.models.fields.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name="%(class)s_user_subject",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AlterField(
model_name="userfollowrequest",
name="user_object",
field=bookwyrm.models.fields.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name="%(class)s_user_object",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AlterField(
model_name="userfollowrequest",
name="user_subject",
field=bookwyrm.models.fields.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name="%(class)s_user_subject",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AlterField(
model_name="userfollows",
name="user_object",
field=bookwyrm.models.fields.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name="%(class)s_user_object",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AlterField(
model_name="userfollows",
name="user_subject",
field=bookwyrm.models.fields.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name="%(class)s_user_subject",
to=settings.AUTH_USER_MODEL,
),
),
]

@@ -1,13 +0,0 @@
# Generated by Django 3.2.25 on 2024-03-26 12:17
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0198_alter_bookwyrmexportjob_export_data"),
("bookwyrm", "0198_book_search_vector_author_aliases"),
]
operations = []

@@ -1,19 +0,0 @@
# Generated by Django 3.2.25 on 2024-04-02 19:53
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bookwyrm", "0198_book_search_vector_author_aliases"),
]
operations = [
migrations.AddIndex(
model_name="status",
index=models.Index(
fields=["remote_id"], name="bookwyrm_st_remote__06aeba_idx"
),
),
]
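
An index on Status.remote_id serves the exact-match lookups used to deduplicate incoming federated objects. The query shape it accelerates, as a hypothetical call site:

from bookwyrm.models import Status


def find_local_copy(remote_id):
    """Return the already-ingested status for a remote id, if any."""
    return Status.objects.filter(remote_id=remote_id).first()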

Some files were not shown because too many files have changed in this diff.