In most cases, we want to return to where we came from after performing
an action. It's not safe to redirect to an arbitrary referer, so this
streamlines the pattern: the util validator verifies the redirect
target, and we fall back on the regular redirect params if the referer
is outside our domain.
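A minimal sketch of the pattern, assuming Django's built-in
`url_has_allowed_host_and_scheme` validator (the helper name here is
hypothetical):

```python
from django.shortcuts import redirect
from django.utils.http import url_has_allowed_host_and_scheme


def redirect_to_referer(request, *args):
    """redirect to the referer, falling back to the given view if unsafe"""
    referer = request.META.get("HTTP_REFERER", "")
    if referer and url_has_allowed_host_and_scheme(
        referer,
        allowed_hosts={request.get_host()},
        require_https=request.is_secure(),
    ):
        return redirect(referer)
    # the referer is missing or outside our domain; use regular redirect params
    return redirect(*args)
```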
Splitting this into five separate queries avoids the large join, which
prevented us from using indexes and required materializing to disk.
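A rough sketch of the idea, with hypothetical models standing in for the
real ones: each step becomes a simple indexed lookup whose ids feed the
next, instead of one join across several tables.

```python
# one big join (before): the planner can't use indexes well and
# materializes intermediate results to disk
# editions = Edition.objects.filter(shelfbook__shelf__user=user)

# split queries (after, sketch): list() forces each query to run on its own
shelf_ids = list(Shelf.objects.filter(user=user).values_list("id", flat=True))
book_ids = list(
    ShelfBook.objects.filter(shelf_id__in=shelf_ids).values_list("book_id", flat=True)
)
editions = Edition.objects.filter(id__in=book_ids)
```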
Fixes: #2157 (hopefully)
By default, Django doesn't run any context processors for server errors,
to make the error path as simple as possible. However, this has the
downside that our template does not render correctly. To fix this, I added
a custom 500 error handler, which runs the context processors.
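A minimal sketch of such a handler (the template name and handler path
are illustrative):

```python
from django.shortcuts import render


def server_error(request):
    """render the 500 page with a full RequestContext"""
    # unlike Django's default 500 handler, render() runs the context
    # processors, so the template gets the variables it expects
    return render(request, "500.html", status=500)


# in urls.py:
# handler500 = "bookwyrm.views.server_error"
```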
Fixes: #2736
The idea behind a streaming CSV export was to reduce memory use by
avoiding building the entire CSV file in memory before sending it to the
client. It didn't work out that way in practice, though: the query
objects created to represent each line caused Postgres to generate a
very large temp file (~200MB on bookwyrm.social), and the memory used by
each Query object was likely similar to, if not larger than, that used
by the finalized CSV row.
While we should in the long term run our CSV exports as a Celery task,
this change should allow CSV exports to work on large servers without
causing disk-space problems.
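The non-streaming version, sketched with a hypothetical `export_rows()`
iterator and illustrative columns:

```python
import csv
import io

from django.http import HttpResponse


def export_csv(request):
    """build the whole CSV in memory, then send it in one response"""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["title", "author", "isbn"])  # columns are illustrative
    for row in export_rows(request.user):  # hypothetical row iterator
        writer.writerow(row)
    return HttpResponse(buffer.getvalue(), content_type="text/csv")
```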
Fixes: #2157
get_data can raise exceptions other than ConnectorException, and when it
does, we want to simply not show the update section rather than
crash.
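The defensive pattern, sketched (the wrapper name is hypothetical, and
`get_data` is assumed to be in scope from the connector code):

```python
def get_update_data(remote_id):
    """fetch remote data, returning None on any failure"""
    try:
        return get_data(remote_id)
    except Exception:  # get_data can raise more than ConnectorException
        # the template hides the update section when this is None
        return None
```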
Related: #2717
Since we don't use the results of our Celery tasks (all of them return
None implicitly), it's prudent to set the ignore_result flag, for a
potential performance improvement. See the Celery docs for details [1].
We could do this with the global CELERY_IGNORE_RESULT setting, but
setting it per task offers more flexibility if we ever want to use task
results in the future.
[1]: https://docs.celeryq.dev/en/stable/userguide/tasks.html#ignore-results-you-don-t-want
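Per task, that looks like this (the task and queue names are
illustrative):

```python
from celery import shared_task


@shared_task(queue="low_priority", ignore_result=True)
def example_task(user_id):
    """we never read the return value, so don't store it in the result backend"""
    ...
```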
Hashtags should be stored with their original case, but an existing
hashtag with different casing should be found and re-used. For example,
an existing #BookWyrm hashtag will be found and used even if the
status content uses #bookwyrm.
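A sketch of the lookup, assuming a `Hashtag` model with a `name` field:

```python
def find_or_create_hashtag(name):
    """case-insensitive lookup, case-preserving creation"""
    existing = Hashtag.objects.filter(name__iexact=name).first()
    if existing:
        # re-use #BookWyrm even when the status says #bookwyrm
        return existing
    # no match: store the tag with its original casing
    return Hashtag.objects.create(name=name)
```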
I think this will go a long way toward solving the federation delay
problems we're seeing on bookwyrm.social. I'm not sure at what point
adding more queues will create more problems than it solves, but I do
think in this case the queues are out of balance, and moving broadcasts
(currently the most common type of `medium_priority` task) to their own
queue will be an improvement.
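Roughly, the change amounts to routing broadcasts to a dedicated queue
and pointing a worker at it (names are illustrative):

```python
from celery import shared_task

BROADCAST = "broadcast"  # new queue name (illustrative)


@shared_task(queue=BROADCAST)
def broadcast_task(sender_id, activity_json, recipients):
    """deliver an activity to each recipient inbox from its own queue"""
    ...


# the worker must also consume the new queue, e.g.:
#   celery -A <project> worker -Q broadcast,high_priority,medium_priority,low_priority
```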
When an inbox activity comes in from another fediverse instance, the
behavior prior to this commit was always to immediately give a 200
response to the external server and then queue a celery task
(usually in the MEDIUM_PRIORITY queue) to complete it.
Instead, this receives a request and tries to complete it without
making any outgoing http requests (which would make the request take too
long to process). If an external request is required to complete the
activity, a task is created and added to the queue.
Ideally, this will cause some tasks to happen very promptly, and reduce
the load on celery, which would help queued tasks happen more quickly as
well.
One downside is that this will make responding to http requests from
external servers slower (since it's doing a bunch of thinking before
responding).
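A sketch of the dispatch logic (the helper names are hypothetical):

```python
def handle_activity(activity_json):
    """complete an inbox activity inline when no outgoing http is needed"""
    activity = parse_activity(activity_json)  # hypothetical parser
    if activity.needs_remote_fetch():  # hypothetical check
        # an external request is required; don't block the 200 response
        process_activity_task.delay(activity_json)
    else:
        # local-only work is cheap enough to do before responding
        activity.action()
```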
This should go a long way towards fixing the problems with follows not
going through to remote servers. All it does is move relationship-related
activities from the medium priority queue, which gets
backlogged easily, to the high priority queue, which is less backlogged.
The risk here is that the high priority queue could end up getting
backlogged, so this isn't the last word on fixing this, but I think the
volume of activities that this will add to it will be manageable.
Because this field doesn't map to an `Edition` model field, it lost its value when the form was re-rendered.
It only worked when the form was valid and rendered as part of the confirmation screen, because the
context data value was set in `add_authors`, which was only called after form validation.
I've opted to pull that logic out into a separate function that is called before form validation.
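Sketched, the fix moves the context-building ahead of validation (all
names besides `add_authors` are illustrative):

```python
from django.template.response import TemplateResponse


def post(self, request, book_id):
    """sketch of the view method, not the actual implementation"""
    form = EditionForm(request.POST, request.FILES)  # form class assumed
    # build the context value before validation, so the non-model field
    # keeps its value when the form re-renders with errors
    data = build_extra_context(request, book_id, form)  # hypothetical new function
    if not form.is_valid():
        return TemplateResponse(request, "book/edit_book.html", data)
    data = add_authors(request, data)  # still runs on the valid path
    ...
```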
* New ID: Audible ASIN
Audible belongs to Amazon BUT they do not share the same IDs. The Audible ASIN of an audiobook is never the same as the Amazon ASIN.
Yeah, I know, Amazon is great. The fact that the ASIN is a good distinction for different works and editions bothers me more than I will ever be willing to admit.
* New ID "ISFDB"
Internet Speculative Ficiton Database ID for books and authors.
Links to the entry if set.
* Added aasin to test
* The expected test answer now includes more empty fields...
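A sketch of the new identifier fields in plain Django terms (the field
options are assumptions; the real model has many more fields):

```python
from django.db import models


class Edition(models.Model):
    """sketch only, not the full BookWyrm model"""

    # Audible ASIN: not the same as the Amazon ASIN for the same audiobook
    aasin = models.CharField(max_length=255, blank=True, null=True)
    # Internet Speculative Fiction Database ID; the UI links to the entry if set
    isfdb = models.CharField(max_length=255, blank=True, null=True)
```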