| Commit message | Author | Age | Files | Lines |
|
|
|
| |
background worker. (#12251)
|
|
|
|
| |
configuration flag. (#12200)
|
|
|
|
|
|
| |
Since the object it returns is a ReplicationCommandHandler.
This is clean-up from adding support for Redis, where the command handler
was added as an additional layer of abstraction from the TCP protocol.
|
|
|
|
|
|
|
| |
This method was confusing, and was mostly kept for backwards
compatibility. Let's get rid of it.
Part of #11733
|
|
|
|
| |
* Require latest matrix-common
* Use the common function
|
|
|
| |
Co-authored-by: reivilibre <oliverw@matrix.org>
|
|
|
|
| |
Signed-off-by: Tulir Asokan <tulir@beeper.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Add support for `/_matrix/media/v3` APIs
Signed-off-by: Aaron Raimist <aaron@raim.ist>
* Update `workers.md` to use v3 client and media APIs
Signed-off-by: Aaron Raimist <aaron@raim.ist>
* Add changelog
Signed-off-by: Aaron Raimist <aaron@raim.ist>
|
| |
|
|
|
|
| |
Fixes https://github.com/matrix-org/synapse/issues/8308
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
Instead of proxying through the magic getter of the RootConfig
object. This should be more performant (and is more explicit).
|
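The commit above swaps lookups that went through `RootConfig`'s magic getter for explicit attribute access on the owning config class. A minimal toy sketch of the difference, using made-up class and attribute names rather than Synapse's real config classes:

```python
class ServerConfig:
    def __init__(self) -> None:
        self.server_name = "example.org"


class RootConfig:
    """Toy stand-in: a root object that proxies unknown attributes to sub-configs."""

    def __init__(self) -> None:
        self.server = ServerConfig()

    def __getattr__(self, name: str):
        # The "magic getter": scan every sub-config for the requested attribute.
        for sub_config in (self.server,):
            if hasattr(sub_config, name):
                return getattr(sub_config, name)
        raise AttributeError(name)


config = RootConfig()
print(config.server_name)         # implicit: proxied through __getattr__
print(config.server.server_name)  # explicit: direct access, cheaper and clearer
```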
|
|
| |
Also refactors some of the registration of endpoints on workers.
|
| |
|
| |
|
|
|
|
|
| |
Signed-off-by: Callum Brown <callum@calcuode.com>
This is part of my GSoC project implementing [MSC3231](https://github.com/matrix-org/matrix-doc/pull/3231).
|
|
|
| |
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
|
| |
|
|
|
| |
Signed-off-by: Kai A. Hiller <V02460@gmail.com>
|
|
|
|
|
|
|
|
|
| |
This PR is tantamount to running
```
pyupgrade --py36-plus --keep-percent-format `find synapse/ -type f -name "*.py"`
```
Part of #9744
|
| |
|
|
|
| |
This adds a simple best-effort locking mechanism that works across workers.
|
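As a rough illustration of the general technique (a lock row in a shared database that can be stolen once its holder stops renewing it), here is a self-contained sketch. The table name, timeout, and API below are invented for illustration and do not reflect Synapse's actual implementation.

```python
import sqlite3
import time
import uuid
from typing import Optional

LOCK_TIMEOUT = 30.0  # seconds without renewal before a lock is considered stale


def setup(conn: sqlite3.Connection) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS worker_locks ("
        " lock_name TEXT PRIMARY KEY, token TEXT NOT NULL, last_renewed REAL NOT NULL)"
    )
    conn.commit()


def try_acquire(conn: sqlite3.Connection, lock_name: str) -> Optional[str]:
    """Return a token if we took the lock, else None. Best effort only."""
    token, now = uuid.uuid4().hex, time.time()
    # Steal the lock if the previous holder has stopped renewing it.
    stolen = conn.execute(
        "UPDATE worker_locks SET token = ?, last_renewed = ?"
        " WHERE lock_name = ? AND last_renewed < ?",
        (token, now, lock_name, now - LOCK_TIMEOUT),
    )
    if stolen.rowcount == 0:
        try:
            conn.execute(
                "INSERT INTO worker_locks (lock_name, token, last_renewed)"
                " VALUES (?, ?, ?)",
                (lock_name, token, now),
            )
        except sqlite3.IntegrityError:
            conn.rollback()
            return None  # another worker holds a live lock
    conn.commit()
    return token


def renew(conn: sqlite3.Connection, lock_name: str, token: str) -> None:
    """Holders must renew periodically, or the lock becomes stealable."""
    conn.execute(
        "UPDATE worker_locks SET last_renewed = ? WHERE lock_name = ? AND token = ?",
        (time.time(), lock_name, token),
    )
    conn.commit()


def release(conn: sqlite3.Connection, lock_name: str, token: str) -> None:
    conn.execute(
        "DELETE FROM worker_locks WHERE lock_name = ? AND token = ?",
        (lock_name, token),
    )
    conn.commit()
```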
|
|
|
|
|
|
|
| |
(#10191)
* Defer stdio redirection until we are about to start the reactor
* Catch and handle exceptions during startup
|
|
|
| |
This PR adds a common configuration section for all modules (see docs). These modules are then loaded at startup by the homeserver. Modules register their hooks and web resources using the new `register_[...]_callbacks` and `register_web_resource` methods of the module API.
|
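The commit above describes the common module interface: at startup the homeserver instantiates each configured module with its config and a module API object, and the module registers its hooks and web resources through that API. Below is a hedged sketch of that shape; the `ModuleApi` stub, the callback name, and the resource path are illustrative stand-ins rather than Synapse's exact API.

```python
from typing import Any, Callable, Dict


class ModuleApi:
    """Illustrative stand-in for the real module API object passed to modules."""

    def register_web_resource(self, path: str, resource: Any) -> None:
        print(f"module resource mounted at {path}")

    def register_spam_checker_callbacks(self, **callbacks: Callable) -> None:
        print(f"module callbacks registered: {sorted(callbacks)}")


class ExampleModule:
    def __init__(self, config: Dict[str, Any], api: ModuleApi) -> None:
        # Hooks are registered explicitly at startup via the API object,
        # rather than being discovered by method name.
        api.register_spam_checker_callbacks(
            check_event_for_spam=self.check_event_for_spam,
        )
        api.register_web_resource("/_synapse/client/example", resource=None)

    async def check_event_for_spam(self, event: Any) -> bool:
        return False  # this sketch never treats an event as spam


ExampleModule(config={"enabled": True}, api=ModuleApi())
```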
| |
|
| |
|
|
|
|
|
| |
This will double count slightly in the presence of interned strings. It's off by default as it can consume a lot of resources.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Synapse can be quite memory intensive, and unless care is taken to tune
the GC thresholds it can end up thrashing, causing noticeable performance
problems for large servers. We fix this by limiting how often we GC a
given generation, regardless of current counts/thresholds.
This does not help with the reverse problem where the thresholds are set
too high, but that should only happen in situations where they've been
manually configured.
Adds a `gc_min_seconds_between` config option to override the defaults.
Fixes #9890.
|
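A minimal sketch of the rate-limiting idea behind `gc_min_seconds_between`: only collect a given generation if enough time has passed since it was last collected. The interval values below are illustrative, not Synapse's defaults.

```python
import gc
import time

# Illustrative per-generation minimum intervals (seconds); not Synapse's defaults.
gc_min_seconds_between = (1.0, 10.0, 30.0)
_last_collected = [0.0, 0.0, 0.0]


def maybe_collect(generation: int) -> None:
    """Collect the given GC generation, unless we collected it too recently."""
    now = time.monotonic()
    if now - _last_collected[generation] < gc_min_seconds_between[generation]:
        return  # skip: collecting again so soon would just cause thrashing
    gc.collect(generation)
    _last_collected[generation] = now
```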
|
|
|
|
|
| |
* Simplify `start_listening` callpath
* Correctly check the size of uploaded files
|
| |
|
| |
|
|
|
|
| |
Every single time I want to access the config object, I have to remember
whether or not we use `get_config`. Let's just get rid of it.
|
|
|
| |
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
|
|
|
|
|
|
|
| |
Part of #9744
Removes all redundant `# -*- coding: utf-8 -*-` lines from files, as Python 3 reads source code as UTF-8 by default now.
`Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>`
|
|
|
|
|
|
|
|
|
|
|
|
| |
At the moment, if you'd like to share presence between local or remote users, those users must be sharing a room together. This isn't always the most convenient or useful situation though.
This PR adds a module to Synapse that will allow deployments to set up extra logic on where presence updates should be routed. The module must implement two methods, `get_users_for_states` and `get_interested_users`. These methods are given presence updates or user IDs and must return information that Synapse will use to determine where to pass presence updates.
A method is additionally added to `ModuleApi` which allows triggering a set of users to receive the current, online presence information for all users they are considered interested in. This is the equivalent of that user receiving presence information during an initial sync.
The goal of this module is to be fairly generic and useful for a variety of applications, with hard requirements being:
* Sending state for a specific set or all known users to a defined set of local and remote users.
* The ability to trigger an initial sync for specific users, so they receive all current state.
|
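A hedged sketch of what a module implementing the two required methods might look like. The method names come from the description above; the exact signatures, return types, and the presence-state type are illustrative rather than the real module API.

```python
from dataclasses import dataclass
from typing import Dict, Iterable, Set


@dataclass(frozen=True)
class PresenceUpdate:
    """Illustrative stand-in for a user's presence state."""
    user_id: str
    state: str  # e.g. "online", "offline"


# A fixed set of users who should see presence for everyone (purely illustrative).
ALWAYS_INTERESTED = {"@monitor:example.org"}


class ExamplePresenceRouter:
    async def get_users_for_states(
        self, state_updates: Iterable[PresenceUpdate]
    ) -> Dict[str, Set[PresenceUpdate]]:
        """Map each recipient user ID to the presence updates they should receive."""
        destinations: Dict[str, Set[PresenceUpdate]] = {}
        for update in state_updates:
            for recipient in ALWAYS_INTERESTED:
                destinations.setdefault(recipient, set()).add(update)
        return destinations

    async def get_interested_users(self, user_id: str) -> Set[str]:
        """Return the users whose presence the given user is interested in."""
        if user_id in ALWAYS_INTERESTED:
            return {"@alice:example.org", "@bob:example.org"}  # illustrative
        return set()
```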
|
|
|
| |
Includes an abstract base class which both the FederationSender
and the FederationRemoteSendQueue must implement.
|
|
|
| |
This warning is somewhat confusing to users, so let's suppress it
|
| |
|
| |
|
| |
|
|
|
| |
Should fix some remaining warnings
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Split ShardedWorkerHandlingConfig
This is so that we have a type-level understanding of when it is safe to
call `get_instance(..)` (as opposed to `should_handle(..)`).
* Remove special cases in ShardedWorkerHandlingConfig.
`ShardedWorkerHandlingConfig` tried to handle the various different ways
it was possible to configure federation senders and pushers. This led to
special cases that weren't hit during testing.
To fix this, the handling of the different cases is moved from there and
`generic_worker` into the worker config class. This allows us to have
the logic in one place and allows the rest of the code to ignore the
different cases.
|
| |
|
|
|
|
|
|
|
| |
- Update black version to the latest
- Run black auto formatting over the codebase
- Run autoformatting according to [`docs/code_style.md`](https://github.com/matrix-org/synapse/blob/80d6dc9783aa80886a133756028984dbf8920168/docs/code_style.md)
- Update `code_style.md` docs around installing black to use the correct version
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixes #8966.
* Factor out build_synapse_client_resource_tree
Start a function which will mount resources common to all workers.
* Move sso init into build_synapse_client_resource_tree
... so that we don't have to do it for each worker
* Fix SSO-login-via-a-worker
Expose the SSO login endpoints on workers, like the documentation says.
* Update workers config for new endpoints
Add documentation for endpoints recently added (#8942, #9017, #9262)
* remove submit_token from workers endpoints list
this *doesn't* work on workers (yet).
* changelog
* Add a comment about the odd path for SAML2Resource
|
| |
|
| |
|
| |
|
|
|
|
| |
Factor out the exception handling in the startup code to a utility function,
and fix some logging and exit-code issues.
|
| |
|
|
|
| |
Adds the redacts endpoint to workers that have the client listener.
|
|
|
|
|
|
|
|
|
|
|
|
| |
Replaces the `federation_ip_range_blacklist` configuration setting with an
`ip_range_blacklist` setting with wider scope. It now applies to:
* Federation
* Identity servers
* Push notifications
* Checking key validity for third-party invite events
The old `federation_ip_range_blacklist` setting is still honored if present, but
with reduced scope (it only applies to federation and identity servers).
|
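To illustrate roughly what an IP range blocklist does in the contexts listed above: before an outbound connection is made, the target address is checked against each configured range and refused if it matches. A small sketch using the standard `ipaddress` module; the ranges shown are conventional private/loopback examples, not Synapse's defaults.

```python
from ipaddress import ip_address, ip_network

# Illustrative ranges only; not Synapse's default blocklist.
ip_range_blacklist = [
    ip_network(r) for r in ("10.0.0.0/8", "127.0.0.0/8", "192.168.0.0/16", "fe80::/10")
]


def is_blocked(address: str) -> bool:
    """Return True if the address falls inside any blocklisted range."""
    addr = ip_address(address)
    return any(
        addr in net for net in ip_range_blacklist if addr.version == net.version
    )


assert is_blocked("10.1.2.3")            # private range: refuse the request
assert not is_blocked("93.184.216.34")   # public address: allowed
```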
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
(#8536)
* Fix outbound federation with multiple event persisters.
We incorrectly notified federation senders that the minimum persisted
stream position had advanced when we got an `RDATA` from an event
persister.
Notifying federation senders already happens correctly in the
notifier, so we just delete the offending line.
* Change some interfaces to use RoomStreamToken.
By enforcing use of `RoomStreamTokens` we make it less likely that
people pass in random ints that they got from somewhere random.
|
| |
|
| |
|
|
|
|
|
|
|
| |
This converts calls like super(Foo, self) -> super().
Generated with:
sed -i "" -Ee 's/super\([^\(]+\)/super()/g' **/*.py
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Duplicating function signatures between server.py and server.pyi is
silly. This commit changes that by changing all `build_*` methods to
`get_*` methods and changing the `_make_dependency_method` to work
as a descriptor that caches the produced value.
There are some changes in other files that were made to fix the typing
in server.py.
|
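The caching-descriptor idea described above, as a self-contained sketch: a decorator turns each `get_*` builder into a method whose result is built once per instance and then reused. The decorator name and details here are illustrative, not the actual code in server.py.

```python
from typing import Any, Callable


class cache_in_self:
    """Descriptor: wrap a builder so it only runs once per HomeServer instance."""

    def __init__(self, builder: Callable[[Any], Any]) -> None:
        self._builder = builder
        self._cache_attr = "_cached_" + builder.__name__

    def __get__(self, instance: Any, owner: type) -> Any:
        if instance is None:
            return self  # accessed on the class itself

        def getter() -> Any:
            # Build the dependency on first use, then hand back the cached value.
            if not hasattr(instance, self._cache_attr):
                setattr(instance, self._cache_attr, self._builder(instance))
            return getattr(instance, self._cache_attr)

        return getter


class HomeServer:
    @cache_in_self
    def get_clock(self) -> object:
        print("building the clock dependency")
        return object()


hs = HomeServer()
assert hs.get_clock() is hs.get_clock()  # the builder only runs once
```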
| |
|
| |
|
|\ |
|
| | |
|
|/ |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Handling of incoming typing stream updates from replication was not
hooked up on master, affecting setups where typing was handled on a
different worker.
This is really only a problem if the master process is also handling
sync requests, which is unlikely for those that are at the stage of
moving typing off.
The other observable effect is that if a worker restarts or a
replication connection drops, then the typing worker will issue a
`POSITION typing`, triggering the master process to try to stream *all*
typing updates from position 0.
Fixes #7907
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
This ended up being a bit more invasive than I'd hoped for (not helped by
generic_worker duplicating some of the code from homeserver), but hopefully
it's an improvement.
The idea is that, rather than storing unstructured `dict`s in the config for
the listener configurations, we instead parse it into a structured
`ListenerConfig` object.
|
| |
|
| |
|
|
|
|
|
|
| |
Instead of storing and sending an ACK synchronously for every single row
we send, we now do it asynchronously while batching up updates.
|
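A rough sketch of the batching pattern described above: rather than sending an ACK per row, remember only the newest processed position and flush a single ACK from a background loop. The class and method names are invented for illustration, and the sketch uses plain asyncio rather than Twisted.

```python
import asyncio
from typing import Awaitable, Callable, Optional


class BatchedAcker:
    """Remember the latest processed position; ACK it periodically in one message."""

    def __init__(
        self, send_ack: Callable[[int], Awaitable[None]], interval: float
    ) -> None:
        self._send_ack = send_ack
        self._interval = interval
        self._pending: Optional[int] = None

    def row_processed(self, position: int) -> None:
        # Called for every row; just records the newest position seen.
        if self._pending is None or position > self._pending:
            self._pending = position

    async def run(self) -> None:
        while True:
            await asyncio.sleep(self._interval)
            if self._pending is not None:
                position, self._pending = self._pending, None
                await self._send_ack(position)


async def main() -> None:
    async def send_ack(position: int) -> None:
        print(f"ACK {position}")

    acker = BatchedAcker(send_ack, interval=0.1)
    flusher = asyncio.create_task(acker.run())
    for position in range(1, 101):
        acker.row_processed(position)
    await asyncio.sleep(0.25)  # a single ACK covers all 100 rows
    flusher.cancel()


asyncio.run(main())
```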
|
|
| |
Introduced in #7556
|
|
|
|
|
|
|
|
| |
A couple of changes of significance:
* remove the `_last_ack < federation_position` condition, so that
updates will still be correctly processed after restart
* Correctly wire up send_federation_ack to the right class.
|
| |
|
| |
|
|
|
|
| |
These are business-as-usual errors, rather than something we want to log
at the error level.
|
|
|
|
|
| |
We don't really make any promises about returning accurate presence data when
presence is disabled, so we may as well just return a static response, rather
than making the master handle a request.
|
|
|
| |
This allows workers to talk to each other over HTTP replication.
|
|
|
|
|
| |
This is required as both event persistence and the background update need access to this function. It should be perfectly safe for two workers to write to that table at the same time.
|
|
|
| |
This is so that the logic can happen on both master and workers when we move event persistence out.
|
|
|
| |
This is safe as we can now write to the cache invalidation stream on workers, and is required for when we move event persistence off master.
|
|
|
| |
For in-memory streams, when fetching updates on workers we need to query the source of the stream, which is currently hard-coded to be master. This PR threads the source instance we received via `POSITION` through to the update function in each stream, which can then be passed to the replication client for in-memory streams.
|
|
|
|
| |
We move the processing of typing and federation replication traffic into their handlers so that `Stream.current_token()` points to a valid token. This allows us to remove `get_streams_to_replicate()` and `stream_positions()`.
|
|
|
|
|
| |
By persisting the user interactive authentication sessions to the database, this fixes
situations where a user hits different workers throughout their auth session and also
allows sessions to persist through restarts of Synapse.
|
|
|
| |
Currently we never write to streams from workers, but that will change soon
|
|
|
|
|
|
|
| |
Long story short: if we're handling presence on the current worker, we shouldn't be sending USER_SYNC commands over replication.
In an attempt to figure out what is going on here, I ended up refactoring some bits of the presencehandler code, so the first 4 commits here are non-functional refactors to move this code slightly closer to sanity. (There's still plenty to do here :/). Suggest reviewing individual commits.
Fixes (I hope) #7257.
|
|\ |
|
| | |
|
| | |
|
| |
| |
| | |
The aim here is to move the command handling out of the TCP protocol classes and to also merge the client and server command handling (so that we can reuse them for redis protocol). This PR simply moves the client paths to the new `ReplicationCommandHandler`, a future PR will move the server paths too.
|
| |
| |
| |
| |
| |
| | |
By running this stuff with `run_in_background`, it won't be correctly reported
against the relevant CPU usage stats.
Fixes #7202
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Remove `conn_id` usage for UserSyncCommand.
Each tcp replication connection is assigned a "conn_id", which is used
to give an ID to a remotely connected worker. In a redis world, there
will no longer be a one to one mapping between connection and instance,
so instead we need to replace such usages with an ID generated by the
remote instances and included in the replication commands.
This really only affects UserSyncCommand.
* Add CLEAR_USER_SYNCS command that is sent on shutdown.
This should help with the case where a synchrotron gets restarted
gracefully, rather than relying on the 5 minute timeout.
|
| |
| |
| | |
This changes the replication protocol so that the server does not send down `RDATA` for rows that happened before the client connected. Instead, the server will send a `POSITION` and clients then query the database (or master out of band) to get up to date.
|
|\ \
| | |
| | | |
Fix starting workers when federation sending not split out.
|
| |/ |
|
| |
| |
| |
| |
| | |
This just helps keep the rows closer to their streams, so that it's easier to
see what the format of each stream is.
|
| |
| |
| |
| |
| |
| | |
`groups` != `receipts`
Introduced in #6964
|
| | |
|
| | |
|
|/
|
|
|
| |
Instead of sending down batches of user ID/host tuples, send down a row
per entity (user ID or host).
|
|
|
|
|
|
|
|
| |
Instead, let's just warn if the worker has a media listener configured
but has the media repository disabled.
Previously, non-media-repository workers would just ignore the media
listener.
|
|
|