path: synapse/replication/tcp/client.py
Commit log (newest first). Each entry shows the commit message, author, date, and diffstat (files changed, lines removed/added).
* Add debug logging for issue #9533 (#9959)  [Richard van der Hoff, 2021-05-11, 1 file, -1/+0]
    Hopefully this will help us track down where to-device messages are getting lost/delayed.
* Add presence federation stream (#9819)  [Erik Johnston, 2021-04-20, 1 file, -3/+4]
* Move some replication processing out of generic_worker (#9796)  [Erik Johnston, 2021-04-14, 1 file, -7/+224]
    Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
* Remove redundant "coding: utf-8" lines (#9786)  [Jonathan de Jong, 2021-04-14, 1 file, -1/+0]
    Part of #9744. Removes all redundant `# -*- coding: utf-8 -*-` lines from files, as Python 3 automatically reads source code as UTF-8 now.
    Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>
* Fix additional type hints from Twisted upgrade. (#9518)  [Patrick Cloke, 2021-03-03, 1 file, -3/+1]
* Don't pull event from DB when handling replication traffic. (#8669)  [Erik Johnston, 2020-10-28, 1 file, -8/+12]
    I was trying to make it so that we didn't have to start a background task when handling RDATA, but that is a bigger job (due to all the code in `generic_worker`). However, I still think not pulling the event from the DB may help reduce some DB usage due to replication, even if most workers will simply go and pull that event from the DB later anyway.
    Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
* Make event persisters periodically announce position over replication. (#8499)  [Erik Johnston, 2020-10-12, 1 file, -0/+4]
    Currently, background processes streaming the events stream use the "minimum persisted position" (i.e. `get_current_token()`) rather than the vector-clock-style tokens. This is broadly fine, as it doesn't matter if the background processes lag by a small amount. However, in extreme cases (e.g. SyTest) where we only write to one event persister, the background processes will never make progress. This PR changes it so that the `MultiWriterIDGenerator` keeps the current position of a given instance as up to date as possible (i.e. using the latest token it sees if it's not in the process of persisting anything), and then periodically announces that over replication. This then allows the "minimum persisted position" to advance, albeit with a small lag.
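A minimal sketch of the periodic-announcement idea described in the entry above; the class name, callback shapes, and interval are assumptions for illustration, not Synapse's actual `MultiWriterIDGenerator` code:

```python
import asyncio
from typing import Callable, Optional


class PositionAnnouncer:
    """Illustrative sketch: periodically announce the latest position this
    writer knows about, so readers' "minimum persisted position" keeps
    advancing even while this writer is idle."""

    def __init__(
        self,
        instance_name: str,
        get_current_position: Callable[[], int],
        send_position: Callable[[str, int], None],
        interval_seconds: float = 30.0,
    ) -> None:
        self._instance_name = instance_name
        self._get_current_position = get_current_position
        self._send_position = send_position
        self._interval_seconds = interval_seconds
        self._last_announced: Optional[int] = None

    async def run(self) -> None:
        while True:
            pos = self._get_current_position()
            # Only announce when the position has moved, so an idle writer
            # doesn't spam the replication connection with no-op updates.
            if pos != self._last_announced:
                self._send_position(self._instance_name, pos)
                self._last_announced = pos
            await asyncio.sleep(self._interval_seconds)
```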
* Various clean ups to room stream tokens. (#8423)  [Erik Johnston, 2020-09-29, 1 file, -4/+2]
* Add EventStreamPosition type (#8388)  [Erik Johnston, 2020-09-24, 1 file, -3/+9]
    The idea is to remove some of the places we pass around `int`, where it can represent one of two things:
      1. the position of an event in the stream; or
      2. a token that partitions the stream, used as part of the stream tokens.
    The valid operations are then:
      1. did a position happen before or after a token;
      2. get all events that happened before or after a token; and
      3. get all events between two tokens.
    (Note that we don't want to allow other operations, as we want to change the tokens to be vector clocks rather than simple ints.)
* Clean up `Notifier.on_new_room_event` code path (#8288)  [Erik Johnston, 2020-09-10, 1 file, -6/+3]
    The idea here is that we pass the `max_stream_id` to everything, and only use the stream ID of the particular event to figure out *when* the max stream position has caught up to the event and we can notify people about it. This is to maintain the distinction between the position of an item in the stream (i.e. event A has stream ID 513) and a token that can be used to partition the stream (i.e. give me all events after stream ID 352). This distinction becomes important when the tokens are more complicated than a single number, which they will be once we start tracking the position of multiple writers in the tokens. The valid operations here are:
      1. is a position before or after a token;
      2. fetching all events between two tokens; and
      3. merging multiple tokens to get the "max", i.e. `C = max(A, B)` means that for all positions P where P is before A *or* before B, then P is before C.
    A future PR will change the token type to a dedicated type.
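A rough illustration of that position/token distinction and the three operations listed above; the types and field names are assumptions made for this sketch, not Synapse's actual token classes:

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass(frozen=True)
class StreamPosition:
    """A single item's position in the stream, written by one instance."""
    instance_name: str
    stream_id: int


@dataclass
class StreamToken:
    """A point that partitions the stream: the highest stream ID seen from
    each writer so far."""
    positions: Dict[str, int] = field(default_factory=dict)

    def is_after(self, pos: StreamPosition) -> bool:
        # A position is before the token if the token has already seen a
        # stream ID at least that large from the same writer.
        return self.positions.get(pos.instance_name, 0) >= pos.stream_id

    def merge(self, other: "StreamToken") -> "StreamToken":
        # C = max(A, B): any position before A *or* before B is before C.
        merged = dict(self.positions)
        for name, stream_id in other.positions.items():
            merged[name] = max(merged.get(name, 0), stream_id)
        return StreamToken(merged)
```

In this sketch the `C = max(A, B)` merge from the description corresponds to `a.merge(b)`; fetching events between two tokens would then be a per-writer range query.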
* Fixup pusher pool notifications (#8287)  [Erik Johnston, 2020-09-09, 1 file, -1/+2]
    `pusher_pool.on_new_notifications` expected a min and max stream ID, but that was not what we were passing in. Instead, let's just pass it the current max stream ID and have it track the last stream ID it got passed. I believe that it mostly worked, as we called the function for every event. However, it would break for events that got persisted out of order, i.e. events that were persisted but where the max stream ID wasn't incremented because not all preceding events had finished persisting, and push for such an event would be delayed until another event got pushed to the affected users.
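A small sketch of the "track the last stream ID we were passed" approach described above; the class and callback names are hypothetical, not the real `PusherPool`:

```python
from typing import Awaitable, Callable


class NotificationTracker:
    """Illustrative sketch: callers pass only the current max stream ID, and
    we remember the last one handled so each call pushes just the new range."""

    def __init__(self, process_range: Callable[[int, int], Awaitable[None]]) -> None:
        self._process_range = process_range
        self._last_stream_id = 0

    async def on_new_notifications(self, max_stream_id: int) -> None:
        if max_stream_id <= self._last_stream_id:
            # Nothing new: e.g. an event persisted out of order whose stream
            # ID is below the max we have already handled.
            return
        prev, self._last_stream_id = self._last_stream_id, max_stream_id
        # Push everything in the half-open range (prev, max_stream_id].
        await self._process_range(prev, max_stream_id)
```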
* Revert "Fixup pusher pool notifications"  [Erik Johnston, 2020-09-09, 1 file, -2/+1]
    This reverts commit e7fd336a53a4ca489cdafc389b494d5477019dc0.
* Fixup pusher pool notifications  [Erik Johnston, 2020-09-09, 1 file, -1/+2]
* Fix `wait_for_stream_position` for multiple waiters. (#8196)  [Erik Johnston, 2020-08-28, 1 file, -4/+2]
    This fixes a bug where having multiple callers waiting on the same stream and position would cause it to try to compare two deferreds, which fails (because the sorted list has entries of `Tuple[int, Deferred]`).
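One way to avoid comparing deferreds in a sorted waiter list is to add a monotonic counter as a tiebreaker in the sort key. A sketch of that idea, using asyncio futures for illustration rather than Twisted deferreds:

```python
import asyncio
import itertools
from bisect import insort
from typing import List, Tuple


class StreamPositionWaiters:
    """Illustrative sketch: waiters are kept sorted by (position, counter),
    so two waiters on the same position never fall through to comparing the
    futures themselves (futures are not orderable)."""

    def __init__(self) -> None:
        self._counter = itertools.count()
        self._waiters: List[Tuple[int, int, asyncio.Future]] = []

    def wait_for_position(self, position: int) -> "asyncio.Future[None]":
        # Must be called from within a running event loop.
        future: "asyncio.Future[None]" = asyncio.get_running_loop().create_future()
        insort(self._waiters, (position, next(self._counter), future))
        return future

    def on_position_advanced(self, position: int) -> None:
        # Wake every waiter whose target position has now been reached.
        while self._waiters and self._waiters[0][0] <= position:
            _, _, future = self._waiters.pop(0)
            if not future.done():
                future.set_result(None)
```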
* Fix typing replication not being handled on master (#7959)  [Erik Johnston, 2020-07-27, 1 file, -0/+8]
    Handling of incoming typing stream updates from replication was not hooked up on master, affecting setups where typing was handled on a different worker. This is really only a problem if the master process is also handling sync requests, which is unlikely for those that are at the stage of moving typing off. The other observable effect is that if a worker restarts or a replication connection drops, then the typing worker will issue a `POSITION typing`, triggering the master process to try to stream *all* typing updates from position 0. Fixes #7907.
* isort 5 compatibility (#7786)  [Will Hunt, 2020-07-05, 1 file, -1/+1]
    The CI appears to use the latest version of isort, which is a problem when isort gets a major version bump. Rather than try to pin the version, I've done what's necessary to make isort 5 happy with Synapse.
* Typo fixes.  [Patrick Cloke, 2020-06-05, 1 file, -1/+1]
* Add ability to wait for replication streams (#7542)  [Erik Johnston, 2020-05-22, 1 file, -2/+88]
    The idea here is that if an instance persists an event via the replication HTTP API, it can return before we receive that event over replication, which can lead to races where code assumes that persisting an event immediately updates various caches (e.g. current state of the room). Most of Synapse doesn't hit such races, so we don't do the waiting automagically; instead we do so where necessary to avoid unnecessary delays. We may decide to change our minds here if it turns out there are a lot of subtle races going on. People probably want to review this commit by commit.
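A sketch of the pattern this enables, with illustrative names (the client objects and method signatures here are assumptions, not Synapse's exact API):

```python
async def persist_and_wait(event, persister_client, replication_handler) -> None:
    """Illustrative sketch: persist an event on the event-persister worker
    via the replication HTTP API, then wait until this worker has seen the
    resulting position over replication, so that local caches (e.g. current
    room state) are up to date before we carry on."""
    # The writer returns the stream position it persisted the event at.
    instance_name, stream_position = await persister_client.persist_event(event)

    # Block until our replication stream has caught up to that position.
    await replication_handler.wait_for_stream_position(
        instance_name, "events", stream_position
    )
```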
* Move EventStream handling into default ReplicationDataHandler (#7493)  [Erik Johnston, 2020-05-14, 1 file, -4/+33]
    This is so that the logic can happen on both master and workers when we move event persistence out.
* Support any process writing to cache invalidation stream. (#7436)  [Erik Johnston, 2020-05-07, 1 file, -3/+3]
* Thread through instance name to replication client. (#7369)  [Erik Johnston, 2020-05-01, 1 file, -5/+7]
    For in-memory streams, when fetching updates on workers we need to query the source of the stream, which is currently hard-coded to be master. This PR threads the source instance we received via `POSITION` through to the update function in each stream, which can then be passed to the replication client for in-memory streams.
* Use `stream.current_token()` and remove `stream_positions()` (#7172)  [Erik Johnston, 2020-05-01, 1 file, -18/+1]
    We move the processing of typing and federation replication traffic into their handlers so that `Stream.current_token()` points to a valid token. This allows us to remove `get_streams_to_replicate()` and `stream_positions()`.
* Add ability to run replication protocol over redis. (#7040)  [Erik Johnston, 2020-04-22, 1 file, -1/+1]
    This is configured via the `redis` config options.
* Move client command handling out of TCP protocol (#7185)  [Erik Johnston, 2020-04-06, 1 file, -151/+28]
    The aim here is to move the command handling out of the TCP protocol classes and to also merge the client and server command handling (so that we can reuse them for the Redis protocol). This PR simply moves the client paths to the new `ReplicationCommandHandler`; a future PR will move the server paths too.
* Remove usage of "conn_id" for presence. (#7128)  [Erik Johnston, 2020-03-30, 1 file, -2/+4]
    * Remove `conn_id` usage for UserSyncCommand. Each TCP replication connection is assigned a "conn_id", which is used to give an ID to a remotely connected worker. In a Redis world, there will no longer be a one-to-one mapping between connection and instance, so instead we need to replace such usages with an ID generated by the remote instances and included in the replication commands. This really only affects UserSyncCommand.
    * Add a CLEAR_USER_SYNCS command that is sent on shutdown. This should help with the case where a synchrotron gets restarted gracefully, rather than relying on the 5-minute timeout.
* Move catchup of replication streams to worker. (#7024)  [Erik Johnston, 2020-03-25, 1 file, -1/+2]
    This changes the replication protocol so that the server does not send down `RDATA` for rows that happened before the client connected. Instead, the server will send a `POSITION` and clients then query the database (or master out of band) to get up to date.
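A sketch of the client-side catch-up this implies (illustrative types and names, not Synapse's actual handler): on receiving a `POSITION`, the client fetches any rows between its current token and the advertised token itself, rather than the server replaying old `RDATA`:

```python
from typing import Any, Awaitable, Callable, List, Tuple

# Fetches missed rows: given (from_token, to_token) it returns
# (updates, new_upto_token, limited), where `limited` means there may be
# more rows still to fetch.
FetchUpdates = Callable[[int, int], Awaitable[Tuple[List[Tuple[int, Any]], int, bool]]]


async def catch_up_on_position(
    current_token: int,
    new_token: int,
    fetch_updates: FetchUpdates,
    process_row: Callable[[int, Any], Awaitable[None]],
) -> int:
    """Illustrative sketch of catching up after a POSITION command."""
    while current_token < new_token:
        updates, upto_token, limited = await fetch_updates(current_token, new_token)
        for token, row in updates:
            await process_row(token, row)
        current_token = upto_token
        if not limited:
            # We got everything up to the advertised position in one batch.
            break
    return current_token
```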
* Fix sending server up commands from workers (#6811)  [Erik Johnston, 2020-01-30, 1 file, -0/+4]
    Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
* Wake up transaction queue when remote server comes back online (#6706)  [Erik Johnston, 2020-01-17, 1 file, -0/+3]
    This will be used to retry outbound transactions to a remote server if we think it might have come back up.
* Port synapse.replication.tcp to async/await (#6666)  [Erik Johnston, 2020-01-16, 1 file, -7/+4]
    * Port synapse.replication.tcp to async/await
    * Newsfile
    * Correctly document type of on_<FOO> functions as async
    * Don't be overenthusiastic with the asyncing
* Fixup synapse.replication to pass mypy checks (#6667)  [Erik Johnston, 2020-01-14, 1 file, -5/+7]
* Reduce the reconnect time when replication fails. (#6617)  [Richard van der Hoff, 2020-01-03, 1 file, -1/+2]
* document the REPLICATE command a bit better (#6305)  [Richard van der Hoff, 2019-11-04, 1 file, -6/+14]
    Since I found myself wondering how it works.
* Remove usage of deprecated logger.warn method from codebase (#6271)  [Andrew Morgan, 2019-10-31, 1 file, -1/+1]
    Replace every instance of `logger.warn` with `logger.warning` as the former is deprecated.
* Run Black. (#5482)  [Amber Brown, 2019-06-20, 1 file, -3/+3]
* Add parse_row method to replication stream class  [Richard van der Hoff, 2019-03-27, 1 file, -2/+3]
    This will allow individual stream classes to override how a row is parsed.
* Fix/improve some docstrings in the replication code. (#4949)  [Richard van der Hoff, 2019-03-27, 1 file, -3/+11]
* Move connecting logic into ClientReplicationStreamProtocol  [Erik Johnston, 2019-02-27, 1 file, -18/+0]
* Increase the max delay between retry attempts  [Erik Johnston, 2019-02-26, 1 file, -1/+1]
    Otherwise, if you have many workers they can easily take out the master with their connection attempts.
* Fix tightloop over connecting to replication server  [Erik Johnston, 2019-02-26, 1 file, -3/+35]
    If the client failed to process incoming commands during the initial setup of the replication connection, it would immediately disconnect and reconnect, resulting in a tight loop. This can happen, for example, when subscribing to a stream that has a row that is too long in the backlog. The fix here is to not consider the connection successfully set up until the client has successfully subscribed and caught up with the streams. This ensures that the retry logic timers aren't reset until then, meaning that if an error does happen during startup the client will continue backing off before retrying again.
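A simplified sketch of that retry behaviour (illustrative only, not Synapse's connection code): the backoff delay is reset only once a connection has made it through subscribe and catch-up, so repeated failures during setup keep increasing the delay instead of tight-looping:

```python
import asyncio
import random
from typing import Awaitable, Callable


async def run_replication_client(
    connect_and_run: Callable[[], Awaitable[bool]],
    initial_delay: float = 1.0,
    max_delay: float = 60.0,
) -> None:
    """Illustrative sketch. `connect_and_run` connects, subscribes and then
    streams until the connection drops; it returns True only if setup
    (subscribe + catch-up) completed, and returns False or raises otherwise."""
    delay = initial_delay
    while True:
        try:
            caught_up = await connect_and_run()
        except Exception:
            caught_up = False
        if caught_up:
            # The connection got through setup, so the next attempt may start
            # from the short initial delay again.
            delay = initial_delay
        # Wait (with a little jitter) before reconnecting; a failed setup
        # leaves `delay` untouched above and doubles it below, so it backs
        # off rather than tight-looping.
        await asyncio.sleep(delay * random.uniform(0.9, 1.1))
        delay = min(delay * 2, max_delay)
```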
* Make the replication logger quieter (#4108)  [Amber Brown, 2018-10-29, 1 file, -1/+1]
* Logcontexts for replication command handlers  [Richard van der Hoff, 2018-08-17, 1 file, -2/+2]
    Run the handlers for replication commands as background processes. This should improve the visibility in our metrics, and reduce the number of "running db transaction from sentinel context" warnings. Ideally it means converting the things that fire off deferreds into the night into things that actually return a Deferred when they are done. I've made a bit of a stab at this, but it will probably be leaky.
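A minimal asyncio stand-in for the idea of running each command handler as a named, supervised background task (Synapse itself uses its Twisted-based background-process helper and log contexts; the names below are illustrative):

```python
import asyncio
import logging
from typing import Awaitable, Callable

logger = logging.getLogger(__name__)


def run_handler_in_background(
    desc: str, handler: Callable[..., Awaitable[None]], *args
) -> "asyncio.Task[None]":
    """Illustrative sketch: run a replication command handler as a named
    background task, so failures are logged and the work can be attributed
    to a named process in metrics rather than fired off unsupervised."""

    async def wrapper() -> None:
        try:
            await handler(*args)
        except Exception:
            logger.exception("Error in background process %r", desc)

    return asyncio.create_task(wrapper(), name=desc)
```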
* Fix unit tests  [Richard van der Hoff, 2018-07-25, 1 file, -1/+1]
    `on_notifier_poke` no longer runs synchronously, so we have to do a different hack to make sure that the replication data has been sent. Let's actually listen for its arrival.
* run isort  [Amber Brown, 2018-07-09, 1 file, -3/+6]
* Remove all global reactor imports & pass it around explicitly (#3424)  [Amber Brown, 2018-06-25, 1 file, -3/+3]
* Make workers report to master for user ip updates  [Erik Johnston, 2017-06-27, 1 file, -0/+7]
* Fix incorrect type when using InvalidateCacheCommand  [Erik Johnston, 2017-04-06, 1 file, -1/+1]
* Add basic replication client handler and factory  [Erik Johnston, 2017-04-03, 1 file, -0/+196]