| Commit message | Author | Age | Files | Lines |
|
|
|
|
| |
This adds an API for third-party plugin modules to implement account validity, so they can provide this feature instead of Synapse. The module implementing the current behaviour for this feature can be found at https://github.com/matrix-org/synapse-email-account-validity.
To allow for a smooth transition between the current feature and the new module, hooks have been added to the existing account validity endpoints to allow their behaviours to be overridden by a module.
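To give a feel for what such a module might look like, here is a minimal sketch. The `register_account_validity_callbacks` helper and the `is_user_expired` / `on_user_registration` callback names follow the shape of the hooks described above, but the exact signatures are assumptions, and the in-memory store stands in for real persistence.

```python
# Hypothetical account validity module; registration helper and
# callback signatures are assumed, not the shipped API.
import time
from typing import Dict, Optional


class ExampleAccountValidity:
    def __init__(self, config: dict, api):
        self._api = api
        self._period_ms = config.get("period_ms", 90 * 24 * 3600 * 1000)
        # In-memory store for the sketch; a real module would use the
        # module database API instead.
        self._expirations: Dict[str, int] = {}
        api.register_account_validity_callbacks(
            is_user_expired=self.is_user_expired,
            on_user_registration=self.on_user_registration,
        )

    async def is_user_expired(self, user_id: str) -> Optional[bool]:
        expiration_ts = self._expirations.get(user_id)
        if expiration_ts is None:
            # None means "no opinion": fall through to Synapse's
            # default handling.
            return None
        return expiration_ts <= int(time.time() * 1000)

    async def on_user_registration(self, user_id: str) -> None:
        # Give newly registered users a fresh validity period.
        self._expirations[user_id] = int(time.time() * 1000) + self._period_ms
```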
|
| |
|
|
|
|
|
| |
hitting an 'Invalid Token' page #74" from synapse-dinsic (#9832)
This attempts to be a direct port of https://github.com/matrix-org/synapse-dinsic/pull/74 to mainline. Some fiddling was required to adapt to the changes made to mainline since (mainly the split of `RegistrationWorkerStore` from `RegistrationStore`, and the changes made to `self.make_request` in test code).
|
|
|
|
|
|
|
| |
Part of #9744
Removes all redundant `# -*- coding: utf-8 -*-` lines from files, as Python 3 reads source code as UTF-8 by default.
`Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>`
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Split ShardedWorkerHandlingConfig
This is so that we have a type-level understanding of when it is safe to
call `get_instance(..)` (as opposed to `should_handle(..)`).
* Remove special cases in ShardedWorkerHandlingConfig.
`ShardedWorkerHandlingConfig` tried to handle the various different ways
it was possible to configure federation senders and pushers. This led to
special cases that weren't hit during testing.
To fix this, the handling of the different cases is moved out of there
and `generic_worker` into the worker config class. This keeps the logic
in one place and allows the rest of the code to ignore the
different cases.
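A minimal sketch of the type-level split. The subclass name mirrors what later Synapse code calls `RoutableShardedWorkerHandlingConfig`, but the hashing and construction details here are illustrative assumptions.

```python
# Sketch only: the real config classes differ in detail, and a real
# implementation needs a stable hash rather than Python's salted hash().
from typing import List


class ShardedWorkerHandlingConfig:
    """Knows which instances shard a stream; the list may be empty."""

    def __init__(self, instances: List[str]):
        self.instances = instances

    def should_handle(self, instance_name: str, key: str) -> bool:
        # With no configured instances, nobody handles the key.
        if not self.instances:
            return False
        return self._get_instance(key) == instance_name

    def _get_instance(self, key: str) -> str:
        return self.instances[hash(key) % len(self.instances)]


class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig):
    """Only constructed when at least one instance exists, giving
    callers a type-level guarantee that get_instance(..) is safe."""

    def __init__(self, instances: List[str]):
        assert instances, "must have at least one instance"
        super().__init__(instances)

    def get_instance(self, key: str) -> str:
        return self._get_instance(key)
```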
|
| |
|
|
|
|
|
|
|
| |
- Update black version to the latest
- Run black auto formatting over the codebase
- Run autoformatting according to [`docs/code_style.md`](https://github.com/matrix-org/synapse/blob/80d6dc9783aa80886a133756028984dbf8920168/docs/code_style.md)
- Update `code_style.md` docs around installing black to use the correct version
|
|
|
| |
The last stream token is always known and we do not need to handle `None`.
|
|
|
| |
This improves type hinting and should use less memory.
|
|
|
|
| |
Removes faulty assertions and fixes the logic to ensure the max
stream token is always set.
|
| |
|
|
|
|
|
|
|
| |
#8567 started a span for every background process. This is good as it means all Synapse code that gets run should be in a span (unless in the sentinel logging context), but it means we generate about 15x the number of spans as we did previously.
This PR attempts to reduce that number by a) not starting a span for sending commands to Redis, and b) deferring the start of background processes until after we're sure they're necessary.
I don't really know how much this will help.
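A sketch of idea (b): only enter the traced background process once we know there is work, so an empty wake-up produces no span at all. The helpers are passed in here to keep the sketch self-contained; names are illustrative.

```python
# Illustrative only: skip both the span and the background process
# when a wake-up has nothing queued.
async def maybe_start_processing(queue, start_active_span, handle):
    if not queue:
        # Nothing to do: no span, no background process for a no-op.
        return
    with start_active_span("process_queue"):  # tracing helper passed in
        while queue:
            await handle(queue.pop())
```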
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
(#8536)
* Fix outbound federation with multiple event persisters.
We incorrectly notified federation senders that the minimum persisted
stream position had advanced when we got an `RDATA` from an event
persister.
Notifying federation senders already happens correctly in the
notifier, so we just delete the offending line.
* Change some interfaces to use RoomStreamToken.
By enforcing use of `RoomStreamToken`s we make it less likely that
people pass in random ints that they got from somewhere random.
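A simplified model of the interface change; the real `RoomStreamToken` also carries a topological part (and, later, per-writer positions), and the consumer function here is a stand-in.

```python
# Simplified model: pass a RoomStreamToken instead of a bare int.
import attr


@attr.s(frozen=True, slots=True, auto_attribs=True)
class RoomStreamToken:
    stream: int  # the live stream-ordering position


def advance_federation_sender_to(stream_id: int) -> None:
    print("advancing to", stream_id)  # stand-in for the real consumer


def on_new_room_event(max_token: RoomStreamToken) -> None:
    # Callers must construct a token deliberately; they can no longer
    # pass an arbitrary int from some other stream.
    advance_federation_sender_to(max_token.stream)
```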
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The idea here is that we pass the `max_stream_id` to everything, and only use the stream ID of the particular event to figure out *when* the max stream position has caught up to the event and we can notify people about it.
This is to maintain the distinction between the position of an item in the stream (i.e. event A has stream ID 513) and a token that can be used to partition the stream (i.e. give me all events after stream ID 352). This distinction becomes important when the tokens are more complicated than a single number, which they will be once we start tracking the position of multiple writers in the tokens.
The valid operations here are:
1. Checking whether a position is before or after a token.
2. Fetching all events between two tokens.
3. Merging multiple tokens to get the "max", i.e. `C = max(A, B)` means that for every position P that is before A *or* before B, P is before C.
A future PR will change the token type to a dedicated type.
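A toy model of the three operations, using a single-int token; real multi-writer tokens become vectors of per-writer positions, so treat this as purely illustrative.

```python
# Toy single-int model of the token operations listed above.
from typing import List, Tuple

Token = int  # becomes a structured type once there are multiple writers


def position_is_before(position: int, token: Token) -> bool:
    # 1. Is a position before or after a token?
    return position <= token


def events_between(events: List[Tuple[int, str]], lo: Token, hi: Token) -> List[str]:
    # 2. Fetch all events between two tokens.
    return [e for pos, e in events if lo < pos <= hi]


def merge(a: Token, b: Token) -> Token:
    # 3. max(A, B): every position before A or before B is before C.
    return max(a, b)
```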
|
|
|
|
|
| |
`pusher_pool.on_new_notifications` expected a min and max stream ID, however that was not what we were passing in. Instead, let's just pass it the current max stream ID and have it track the last stream ID it got passed.
I believe that it mostly worked as we called the function for every event. However, it would break for events that got persisted out of order, i.e. that were persisted but where the max stream ID wasn't incremented because not all preceding events had finished persisting; push for such an event would be delayed until another event got pushed to the affected users.
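A sketch of the tracking described above; the class and method names follow the commit, but the body is an assumption, not Synapse's actual code.

```python
# Sketch: the pool remembers the newest max stream ID it has seen and
# ignores calls that do not advance it.
class PusherPool:
    def __init__(self):
        self._last_max_stream_id = 0
        self.pushers = []

    def on_new_notifications(self, max_stream_id: int) -> None:
        if max_stream_id <= self._last_max_stream_id:
            # Out-of-order persistence can report an older max; we have
            # already woken the pushers for everything up to here.
            return
        self._last_max_stream_id = max_stream_id
        for pusher in self.pushers:
            pusher.on_new_notifications(max_stream_id)
```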
|
|
|
|
| |
This reverts commit e7fd336a53a4ca489cdafc389b494d5477019dc0.
|
| |
|
| |
|
|
|
| |
This reuses the same scheme as federation sender sharding
|
|
|
| |
The aim here is to make it easier to reason about when streams are limited and when they're not, by moving the logic into the database functions themselves. This should mean we can kill off the `db_query_to_update_function` function.
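The pattern being pushed into the database layer might look like this sketch, where the storage function itself reports whether it hit its row limit; the query runner is passed in as a stand-in and all names are hypothetical.

```python
# Sketch: the DB function decides "limited" itself instead of a
# generic wrapper guessing from the row count.
async def get_updates(fetch_rows, from_id: int, to_id: int, limit: int):
    # fetch_rows is a stand-in for the actual SQL query runner.
    rows = await fetch_rows(from_id, to_id, limit)
    limited = len(rows) >= limit
    # When limited, only claim progress up to the last row actually
    # returned; the caller will poll again from there.
    upto_token = rows[-1][0] if limited else to_id
    return rows, upto_token, limited
```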
|
|
|
| |
add a lock to try to make this metric actually work
|
| |
|
|
|
|
| |
Ensure good comprehension hygiene using flake8-comprehensions.
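For example, the kinds of rewrites flake8-comprehensions enforces (check codes are from that plugin):

```python
rows = [("alice", 31), ("bob", 27)]

# C400: unnecessary generator passed to list()
names = list(name for name, _ in rows)   # flagged
names = [name for name, _ in rows]       # preferred

# C402: unnecessary generator passed to dict()
ages = dict((name, age) for name, age in rows)   # flagged
ages = {name: age for name, age in rows}         # preferred
```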
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Remove redundant Python 2 support code
`str.decode()` doesn't exist on Python 3, so presumably this code was doing
nothing.
* Filter out pushers with corrupt data
When we get a row with unparsable JSON, drop the row, rather than returning a
row with a null `data`, which will then cause an explosion later on.
* Improve logging when we can't start a pusher
Log the ID to help us understand the problem.
* Make email pusher setup more robust
We know we'll have a `data` member, since that comes from the database. What we
*don't* know is if that is a dict, whether it has a `brand` member, and whether
that member is a string.
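A sketch of the corrupt-row filtering described above; the function name and row shape are assumptions.

```python
# Sketch: drop rows whose `data` is not valid JSON instead of
# yielding them with data=None.
import json
import logging

logger = logging.getLogger(__name__)


def decode_pusher_rows(rows):
    for row in rows:
        try:
            row["data"] = json.loads(row["data"])
        except Exception as e:
            logger.warning(
                "Invalid JSON in data for pusher %s: %s", row["id"], e
            )
            continue  # skip the corrupt row entirely
        yield row
```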
|
|
|
| |
* update version of black and also fix the mypy config being overridden
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
We start all pushers on start up and immediately start a background
process to fetch push to send. This makes start up incredibly painful
when dealing with many pushers.
Instead, let's do a quick DB check to see if there *may* be push to
send and only start the background processes for those pushers. We also
stagger starting up and doing those checks so that we don't try and
handle all pushers at once.
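A sketch of the staggered start-up; the store helper, the pacing interval, and the use of asyncio (Synapse itself runs on Twisted) are all illustrative.

```python
# Sketch: cheap existence check first, then stagger the expensive starts.
import asyncio

STAGGER_SECS = 0.05  # illustrative pacing between pushers


async def start_pushers(pushers, store):
    for pusher in pushers:
        # Cheap DB check: is there possibly anything to send?
        if not await store.have_unsent_push(pusher.user_id):  # hypothetical
            continue
        pusher.start_background_loop()
        # Pace ourselves so we don't hit the DB for every pusher at once.
        await asyncio.sleep(STAGGER_SECS)
```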
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
| |
We don't do anything with the result, so this is needed to give this code a
logcontext.
|
|
|
|
|
|
| |
This brings it into line with on_new_notifications and on_new_receipts. It
requires a little bit of hoop-jumping in EmailPusher to load the throttle
params before the first loop.
|
|
|
|
|
|
|
| |
`on_new_notifications` and `on_new_receipts` in `HttpPusher` and `EmailPusher`
now always return synchronously, so we can remove the `defer.gatherResults` on
their results, and the `run_as_background_process` wrappers can be removed too
because the PusherPool methods will now complete quickly enough.
|
|
|
|
| |
simplifies the interface to _start_pushers
|
|
|
|
|
| |
... and use it from start_pusher_by_id. This mostly simplifies
start_pusher_by_id.
|
|
|
|
|
| |
This is public (or at least, called from outside the class), so ought to have a
better name.
|
|
|
|
|
|
|
| |
First of all, avoid resetting the logcontext before running the pushers, to fix
the "Starting db txn 'get_all_updated_receipts' from sentinel context" warning.
Instead, give them their own "background process" logcontexts.
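The shape of the fix, sketched below. `run_as_background_process` is Synapse's real wrapper for this, though the surrounding class here is a simplified stand-in.

```python
# Sketch: run the pusher work in its own background process so it gets
# a proper logcontext rather than running from the sentinel context.
from synapse.metrics.background_process_metrics import run_as_background_process


class HttpPusher:  # simplified stand-in for the real class
    def on_new_receipts(self, min_stream_id, max_stream_id):
        # The wrapper gives the work its own "background process"
        # logcontext and tracks it in metrics.
        run_as_background_process(
            "on_new_receipts", self._handle_new_receipts, min_stream_id, max_stream_id
        )

    async def _handle_new_receipts(self, min_stream_id, max_stream_id):
        ...  # the actual receipt handling
```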
|
| |
|
|\ |
|
| |
| |
| |
| |
| | |
In general we want defer.gatherResults to consumeErrors, rather than having
exceptions hanging around and getting logged as CRITICAL unhandled errors.
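Concretely, using the standard Twisted API:

```python
from twisted.internet import defer

deferreds = [defer.succeed(1), defer.succeed(2)]  # stand-ins for real work
# consumeErrors=True marks failures as handled, so they surface via the
# gathered Deferred instead of being logged as CRITICAL unhandled errors.
d = defer.gatherResults(deferreds, consumeErrors=True)
```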
|
|/
|
|
|
|
| |
While I was going through uses of preserve_fn for other PRs, I converted places
which only use the wrapped function once to use run_in_background, to avoid
creating the function object.
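The conversion looks like this; both helpers are real Synapse logcontext utilities (the import path shown is the modern one; at the time it lived in `synapse.util.logcontext`), and the wrapped function is a stand-in.

```python
from synapse.logging.context import run_in_background  # modern import path


async def do_the_thing(arg):
    ...  # stand-in for the real work


# Before: preserve_fn(do_the_thing)(arg) built a wrapper object that
# was only ever called once. Calling the helper directly avoids that:
run_in_background(do_the_thing, "arg")
```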
|
|
|
|
|
| |
Whenever an access token is invalidated, we should remove the associated
pushers.
|
|
|
|
|
| |
Both of these functions are known to leak logcontexts. Replace the remaining
calls to them and kill them off.
|
|
|
|
| |
what could possibly go wrong
|
|
|
|
|
| |
Instead of every time a new email pusher is created, as loading jinja2
templates is slow.
|
| |
|
|
|
|
| |
slaves.
|
| |
|
| |
|
|
|
|
| |
If they registered with an email address and email notifs are enabled on the HS
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
Also fix bugs with retrying.
|
|
|
|
|
|
|
| |
event stream & running the rules again. Sytest passes, but remaining to do:
* Make badges work again
* Remove old, unused code
|
|
|
|
| |
deletion to match access token deletion and make exception arg optional.
|
|
|
|
| |
password) actually takes effect without HS restart. Reinstate the code to avoid logging out the session that changed the password, removed in 415c2f05491ce65a4fc34326519754cd1edd9c54
|
| |
|
|
|
|
|
|
| |
It wasn't possible to hit the code from the API because of a typo
in parsing the request path. Since no-one was using the feature
we might as well remove the dead code.
|
| |
|
|\ |
|
| |
| |
| |
| | |
of the code
|
|/
|
|
| |
notifications.
|
| |
|
|
|
|
|
|
|
|
|
| |
* Merge LoginHandler -> AuthHandler
* Add a bunch of documentation
* Improve some naming
* Remove unused branches
I will start merging the actual logic of the two handlers shortly
|
| |
|
| |
|
| |
|
| |
|
|\ |
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| | |
flag in the API.
|
| |
| |
| |
| | |
2) Change places where we mean "unauthenticated" to return 401, not 403, in C/S v2; hack it so it stays 403 in v1 because the web client relies on it.
|
|/ |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
the push token changes.
|
|
|
|
|
| |
Because this seems like it might be useful to do sooner rather
than later.
|
|
|
|
|
|
|
| |
Add a timestamp to push tokens so we know the last time we got them
from the device. Send it to the push gateways so they can determine
whether a failure is more recent than the token.
Stop and remove pushers that have been rejected.
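The staleness check this timestamp enables on the gateway side, sketched; the Push Gateway API calls the timestamp field `pushkey_ts`, but the function here is illustrative.

```python
def failure_is_stale(failure_ts_ms: int, pushkey_ts_ms: int) -> bool:
    # A delivery failure recorded before we last received this token is
    # stale: the device has re-registered the same pushkey since then,
    # so the pusher should not be removed on its account.
    return failure_ts_ms < pushkey_ts_ms
```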
|
| |
|
| |
|
|
|
|
| |
pokes work or not yet but the retry semantics are pretty good.
|
|
stdout currently!)
|