The queries as they previously existed required joining together 12
different tables, which is extremely expensive. Splitting the work into
four queries means that each individual query can effectively use the
indexes we have, and should be fast no matter how many statuses are in
the database.
Removing the .distinct() call is fine, since we're adding the results to
a set in Redis anyway, which takes care of the duplicates.
It's a bit ugly that we now make four separate calls to Redis (this
might result in things being slightly slower in cases where there is an
extremely small number of statuses), but doing things differently would
result in significantly more surgery to the existing code, so I've opted
to avoid that for the moment.
Fixes: #2725
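For illustration, the Redis side of the change above might look roughly
like this (a minimal sketch assuming redis-py and a list of
already-built, index-friendly querysets; the key and helper names are
mine, not BookWyrm's):

```python
import redis

r = redis.Redis()

def add_status_ids(stream_key, querysets):
    """Add status IDs from several small, index-friendly querysets to Redis."""
    for queryset in querysets:
        ids = list(queryset.values_list("id", flat=True))
        if ids:
            # SADD ignores members that are already in the set, which is
            # why dropping .distinct() from the querysets is safe
            r.sadd(stream_key, *ids)
```

Each queryset corresponds to one of the four narrow queries, and each
call to `sadd` is one of the four Redis round trips mentioned above.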
This splits HomeStream.get_audience into two separate database queries,
in order to more effectively take advantage of the indexes we have.
Combining the user ID query and the user following query means that
Postgres isn't able to use the index we have on the userfollows table.
The query planner claims that the userfollows query should be about 20
times faster than it was previously, and the id query should take a
negligible amount of time, since it's selecting a single item by primary
key.
We don't need to worry about duplicates, since there is a constraint
preventing a user from following themselves.
Fixes: #2720
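As a rough sketch of the two-query shape (the model and field names here
are illustrative rather than BookWyrm's exact ones):

```python
def get_audience_ids(status_user_id):
    # single-row primary key lookup; effectively free
    owner_ids = list(
        User.objects.filter(id=status_user_id).values_list("id", flat=True)
    )
    # index-friendly query against the follows table
    follower_ids = list(
        UserFollows.objects.filter(
            user_object_id=status_user_id
        ).values_list("user_subject_id", flat=True)
    )
    # no deduplication needed: a constraint prevents self-follows
    return owner_ids + follower_ids
```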
Anywhere we have a user object, we can easily get the user ID in the
caller, and this will allow us more flexibility in the future to
implement optimizations that involve knowing a user ID without querying
the database for the user object.
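A hypothetical before/after of one such signature, just to illustrate
the shape of the change (the function name is made up):

```python
# before: the caller needs a full user object, even if it only has the ID
def unread_key(user):
    return f"{user.id}-unread"

# after: the caller passes the ID directly; anyone holding a user object
# just writes unread_key(user.id)
def unread_key(user_id):
    return f"{user_id}-unread"
```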
The existing polling code had a few problems:
* It started the timer for a new request when the first request was
sent, rather than when a response was received.
* It increased the delay regardless of whether the response was a
success or a failure.
This commit changes it to a more standard exponential backoff system,
where it starts with a 5 minute ± 30 second delay, and uses that same
delay until it hits an error, at which point the delay is increased by
10%. Once it receives a successful response again, the delay is reset to
the default.
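The actual polling code is client-side, but the delay arithmetic it
follows could be sketched like this (a rough illustration, not the real
implementation; the next request is only scheduled once a response
arrives):

```python
import random

def initial_delay():
    # five minutes, plus or minus thirty seconds of jitter
    return 5 * 60 + random.uniform(-30, 30)

def next_delay(current_delay, succeeded):
    if succeeded:
        # any successful response resets the delay to the default
        return initial_delay()
    # each error bumps the delay by 10%
    return current_delay * 1.1
```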
I suspect this should be nicer on the server, since it avoids the
initial sending of many requests. After about half an hour of leaving
the page open, the request rate for this new code will be higher than
that of the old code, so it's possible that this may cause problems, but
I think that a five-minute request frequency should be pretty reasonable.
Expand TOTP validity window
This changes the default window to allow 2 codes (60 seconds) on either side. Admins can change this by setting a different `TWO_FACTOR_LOGIN_VALIDITY_WINDOW` value in `.env`.
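A minimal sketch of how the setting can feed into the check, assuming
the verification goes through pyotp (the settings plumbing here is
illustrative):

```python
import os

import pyotp

# 2 means the current code plus two codes (60 seconds) on either side
valid_window = int(os.environ.get("TWO_FACTOR_LOGIN_VALIDITY_WINDOW", 2))

def code_is_valid(secret, code):
    return pyotp.TOTP(secret).verify(code, valid_window=valid_window)
```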
I think this will go a long way toward solving the federation delay
problems we're seeing on b.s. I'm not sure at what point adding more queues will
create more problems than it solves, but I do think in this case the
queues are out of balance and moving broadcasts (which are the most
common type of `medium_priority` task at the moment) to their own queue
will be an improvement.
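A minimal sketch of what splitting the queue could look like in celery
terms (the queue and task names here are illustrative):

```python
from celery import Celery

app = Celery("bookwyrm")

BROADCAST = "broadcast"

# route the broadcast task to its own queue instead of medium_priority
app.conf.task_routes = {
    "bookwyrm.tasks.broadcast_task": {"queue": BROADCAST},
}
```

Workers then also need to be told to consume the new queue (via the `-Q`
option to `celery worker`), otherwise broadcasts would sit unprocessed.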
When an inbox activity comes in from another fediverse instance, the
behavior prior to this commit was always to immediately give a 200
response to the external server and then create a celery task
(usually in the MEDIUM_PRIORITY queue) to complete it.
Instead, this commit tries to complete the activity immediately, as long
as doing so doesn't require making any http requests (which would make
the incoming request take too long to process). If an external request
is required to complete the activity, a task is created and added to the
queue.
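A minimal sketch of the dispatch described above (the helper names are
hypothetical, not the actual view code):

```python
from django.http import HttpResponse

def handle_inbox_activity(activity, needs_remote_fetch, run_inline, enqueue):
    if needs_remote_fetch(activity):
        # completing this one would require outgoing http requests,
        # so hand it to celery as before
        enqueue(activity)
    else:
        # cheap enough to finish before responding
        run_inline(activity)
    return HttpResponse(status=200)
```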
Ideally, this will cause some tasks to happen very promptly, and reduce
the load on celery, which would help queued tasks happen more quickly as
well.
One downside is that this will make responding to http requests from
external servers slower (since the server now does a bunch of thinking
before responding).
The migration uses `RESTRICT` instead of `PROTECT`, which is more
correct; the value also needs to be identical to the one on the model,
otherwise Django thinks that there's a migration missing and will refuse
to apply any new migrations.
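For illustration, the model field and the migration both have to declare
the same `on_delete` behaviour; roughly (the model and field names are
made up):

```python
from django.db import models

class Example(models.Model):
    # must say RESTRICT here *and* in the migration's field definition,
    # or Django will report a missing migration
    parent = models.ForeignKey("self", null=True, on_delete=models.RESTRICT)
```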