| Commit message | Author | Age | Files | Lines |
|
|
|
| |
Every single time I want to access the config object, I have to remember
whether or not we use `get_config`. Let's just get rid of it.
|
|
|
| |
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
|
|
|
|
|
|
|
| |
Part of #9744
Removes all redundant `# -*- coding: utf-8 -*-` lines from files, as Python 3 reads source code as UTF-8 by default.
`Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>`
|
|
|
|
|
|
|
|
|
|
|
|
| |
At the moment, if you'd like to share presence between local or remote users, those users must be sharing a room together. This isn't always the most convenient or useful situation though.
This PR adds a module to Synapse that will allow deployments to set up extra logic on where presence updates should be routed. The module must implement two methods, `get_users_for_states` and `get_interested_users`. These methods are given presence updates or user IDs and must return information that Synapse will use when routing presence updates (a sketch follows below).
A method is additionally added to `ModuleApi` which allows triggering a set of users to receive the current, online presence information for all users they are considered interested in. This is the equivalent of that user receiving presence information during an initial sync.
The goal of this module is to be fairly generic and useful for a variety of applications, with hard requirements being:
* Sending state for a specific set or all known users to a defined set of local and remote users.
* The ability to trigger an initial sync for specific users, so they receive all current state.
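A minimal sketch of what such a module might look like (the signatures and the "interested in all users" sentinel below are illustrative assumptions, not the exact interface):
```
from typing import Dict, Iterable, Set, Union


class ExamplePresenceRouter:
    """Sketch only: signatures and return conventions are assumed."""

    def __init__(self, config, module_api):
        self._module_api = module_api

    async def get_users_for_states(
        self, state_updates: Iterable["UserPresenceState"]
    ) -> Dict[str, Set["UserPresenceState"]]:
        # Route every presence update to a hypothetical monitoring user.
        return {"@monitor:example.com": set(state_updates)}

    async def get_interested_users(self, user_id: str) -> Union[Set[str], str]:
        # "ALL" stands in for whatever "interested in all users" sentinel
        # Synapse defines; it is a placeholder here.
        if user_id == "@monitor:example.com":
            return "ALL"
        return set()
```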
|
|
|
|
| |
Includes an abstract base class which both the FederationSender
and the FederationRemoteSendQueue must implement.
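For illustration, the shape of such an interface might be as below (the method names are assumptions, not the actual abstract methods):
```
import abc


class AbstractFederationSender(abc.ABC):
    """Sketch of a shared sender interface; method names are illustrative."""

    @abc.abstractmethod
    def notify_new_events(self, max_token) -> None:
        """Called when there may be new things to send over federation."""
        ...

    @abc.abstractmethod
    async def send_read_receipt(self, receipt) -> None:
        """Queue a read receipt for sending to the relevant remote servers."""
        ...
```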
|
|
|
| |
This warning is somewhat confusing to users, so let's suppress it
|
| |
|
| |
|
| |
|
|
|
|
| |
* Adds B00 to ignored checks.
* Fixes remaining issues.
|
|
|
| |
Should fix some remaining warnings
|
|
|
|
|
|
|
| |
In #75, bytecode was disabled (from a bit of FUD back in `python<2.4` days, according to dev chat); I think it's safe enough to enable it again.
Added `__pycache__/` and `.pyc`/`.pyd` to `.gitignore`, to further ensure compiled files don't get committed.
`Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>`
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Split ShardedWorkerHandlingConfig
This is so that we have a type level understanding of when it is safe to
call `get_instance(..)` (as opposed to `should_handle(..)`).
* Remove special cases in ShardedWorkerHandlingConfig.
`ShardedWorkerHandlingConfig` tried to handle the various different ways
it was possible to configure federation senders and pushers. This led to
special cases that weren't hit during testing.
To fix this, the handling of the different cases is moved from there and
`generic_worker` into the worker config class. This allows us to have
the logic in one place and allows the rest of the code to ignore the
different cases.
|
| |
|
|
|
|
|
|
|
| |
- Update black version to the latest
- Run black auto formatting over the codebase
- Run autoformatting according to [`docs/code_style.md`](https://github.com/matrix-org/synapse/blob/80d6dc9783aa80886a133756028984dbf8920168/docs/code_style.md)
- Update `code_style.md` docs around installing black to use the correct version
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Fixes #8966.
* Factor out build_synapse_client_resource_tree
Start a function which will mount resources common to all workers.
* Move sso init into build_synapse_client_resource_tree
... so that we don't have to do it for each worker
* Fix SSO-login-via-a-worker
Expose the SSO login endpoints on workers, like the documentation says.
* Update workers config for new endpoints
Add documentation for endpoints recently added (#8942, #9017, #9262)
* remove submit_token from workers endpoints list
this *doesn't* work on workers (yet).
* changelog
* Add a comment about the odd path for SAML2Resource
|
| |
| |
| | |
There are going to be a couple of paths to get to the final step of SSO registration, and I want the URL in the browser to be consistent. So, let's move the final step onto a separate path, which we redirect to.
|
| |
| |
| | |
Signed-off-by: Jan Christian Grünhage <jan.christian@gruenhage.xyz>
|
|/
|
|
|
|
|
|
|
|
|
|
| |
* synapse.app.base: only call gc.freeze() on CPython
gc.freeze() is an implementation detail of CPython garbage collector,
and notably does not exist on PyPy.
Rather than playing whack-a-mole and skipping the call when under PyPy,
simply restrict it to CPython because the whole gc module is
implementation-defined.
Signed-off-by: Ivan Shapovalov <intelfx@intelfx.name>
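A minimal sketch of the guard being described, assuming a `platform` check is how CPython is detected:
```
import gc
import platform

# gc.freeze() only exists on CPython (3.7+), so guard on the interpreter
# implementation rather than sprinkling hasattr() checks around.
if platform.python_implementation() == "CPython":
    gc.freeze()
```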
|
| |
|
|
|
|
|
|
|
| |
The idea here is that we will have an instance of OidcProvider for each
configured IdP, with OidcHandler just doing the marshalling of them.
For now it's still hardcoded with a single provider.
|
| |
|
| |
|
|
|
|
| |
Factor out the exception handling in the startup code to a utility function,
and fix some of the logging and exit-code handling.
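A sketch of the kind of utility function meant here (the name and exact behaviour are assumptions):
```
import logging
import sys

logger = logging.getLogger(__name__)


def handle_startup_exception(e: Exception) -> None:
    # Log to the configured logger and to stderr, then exit non-zero so
    # supervisors notice the failure.
    logger.error("Exception during startup", exc_info=e)
    sys.stderr.write("Error during startup:\n  %s\n" % (e,))
    sys.exit(1)
```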
|
| |
|
| |
|
|
|
|
|
| |
During login, if there are multiple IdPs enabled, offer the user a choice of
IdPs.
|
|
|
| |
Adds the redacts endpoint to workers that have the client listener.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The final part (for now) of my work to implement a username picker in synapse itself. The idea is that we allow
`UsernameMappingProvider`s to return `localpart=None`, in which case, rather than redirecting the browser
back to the client, we redirect to a username-picker resource, which allows the user to enter a username.
We *then* complete the SSO flow (including doing the client permission checks).
The static resources for the username picker itself (in
https://github.com/matrix-org/synapse/tree/rav/username_picker/synapse/res/username_picker)
are essentially lifted wholesale from
https://github.com/matrix-org/matrix-synapse-saml-mozilla/tree/master/matrix_synapse_saml_mozilla/res.
As the comment says, we might want to think about making them customisable, but that can be a follow-up.
Fixes #8876.
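For illustration, a provider opting into the picker might look like this (the method name and return shape are assumptions rather than the exact provider interface):
```
class ExampleMappingProvider:
    async def map_user_attributes(self, userinfo, token, failures):
        return {
            # None means "we don't know the localpart yet": rather than
            # completing the login, the flow redirects to the username picker.
            "localpart": None,
            "display_name": userinfo.get("name"),
        }
```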
|
|
|
| |
Fixes #8892
|
|
|
|
|
|
|
|
|
|
| |
The idea is that the parse_config method of extension modules can raise either a ConfigError or a JsonValidationError,
and it will be magically turned into a legible error message. There's a few components to it:
* Separating the "path" and the "message" parts of a ConfigError, so that we can fiddle with the path bit to turn it
into an absolute path.
* Generally improving the way ConfigErrors get printed.
* Passing in the config path to load_module so that it can wrap any exceptions that get caught appropriately.
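For illustration, a module's `parse_config` might then look something like this (assuming `ConfigError` accepts a message plus a path to the offending option):
```
from synapse.config._base import ConfigError


class MyModule:
    @staticmethod
    def parse_config(config: dict) -> dict:
        if "api_url" not in config:
            # The path lets the error be reported as something like
            # "Error in configuration at modules.<n>.config.api_url".
            raise ConfigError("'api_url' is required", ("api_url",))
        return config
```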
|
|
|
|
|
|
|
|
|
|
|
|
| |
Replaces the `federation_ip_range_blacklist` configuration setting with an
`ip_range_blacklist` setting with wider scope. It now applies to:
* Federation
* Identity servers
* Push notifications
* Checking key validity for third-party invite events
The old `federation_ip_range_blacklist` setting is still honored if present, but
with reduced scope (it only applies to federation and identity servers).
|
|
|
|
|
|
|
|
|
| |
We can get a SIGHUP at any point, including times where we are not in a
sane state. By deferring calling the handlers until the next reactor
tick we ensure that we don't get unexpected conflicts, e.g. trying to
flush logs from the signal handler while the code was in the process of
writing a log entry.
Fixes #8769.
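Roughly the pattern being described, with illustrative names: the signal handler itself only schedules work for the next reactor tick.
```
import signal

from twisted.internet import reactor

_sighup_callbacks = []  # (func, args) pairs registered elsewhere


def _handle_sighup(*args):
    # Don't run the callbacks from inside the signal handler; just schedule
    # them for the next reactor tick, when we know we're in a sane state.
    def run_callbacks():
        for func, cb_args in _sighup_callbacks:
            func(*cb_args)

    reactor.callLater(0, run_callbacks)


signal.signal(signal.SIGHUP, _handle_sighup)
```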
|
|
|
|
|
|
|
| |
Fixes:
```
builtins.TypeError: _reload_logging_config() takes 1 positional argument but 2 were given
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
(#8536)
* Fix outbound federation with multiple event persisters.
We incorrectly notified federation senders that the minimum persisted
stream position had advanced when we got an `RDATA` from an event
persister.
Notifying federation senders already happens correctly in the notifier,
notifier, so we just delete the offending line.
* Change some interfaces to use RoomStreamToken.
By enforcing use of `RoomStreamTokens` we make it less likely that
people pass in random ints that they got from somewhere random.
|
| |
|
|
|
| |
All handlers now available via get_*_handler() methods on the HomeServer.
|
| |
|
|
|
| |
By reporting the log level of the synapse logger as a string.
|
|
|
|
|
| |
Lots of different module APIs are not easy to maintain.
Rather than adding yet another ModuleApi(hs, hs.get_auth_handler()) incantation, first add an hs.get_module_api() method and use it where possible.
|
|
|
|
|
| |
This is so we can tell what is going on when things are taking a while to start up.
The main change here is to ensure that transactions that are created during startup get correctly logged like normal transactions.
|
| |
|
|
|
|
|
|
|
| |
This converts calls like super(Foo, self) -> super().
Generated with:
sed -i "" -Ee 's/super\([^\(]+\)/super()/g' **/*.py
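For illustration, the transformation amounts to:
```
class Base:
    def __init__(self, name):
        self.name = name


# Before: the Python 2 compatible spelling
class StoreOld(Base):
    def __init__(self, name):
        super(StoreOld, self).__init__(name)


# After: what the sed rewrite produces
class StoreNew(Base):
    def __init__(self, name):
        super().__init__(name)
```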
|
| |
|
|
|
|
|
| |
This PR adds a confirmation step to resetting your user password between clicking the link in your email and your password actually being reset.
This is to better align our password reset flow with the industry standard of requiring a confirmation from the user after email validation.
|
|
|
|
|
| |
By importing from canonicaljson the simplejson module was still being used
in some situations. After this change the stdlib json is consistently used
throughout Synapse.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Duplicating function signatures between server.py and server.pyi is
silly. This commit changes that by changing all `build_*` methods to
`get_*` methods and changing the `_make_dependency_method` to work
as a descriptor that caches the produced value.
There are some changes in other files that were made to fix the typing
in server.py.
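The commit describes a caching descriptor; the same caching idea can be sketched as a simple decorator (names here are illustrative, not the real implementation):
```
import functools


def cache_in_self(builder):
    attr = "_" + builder.__name__.replace("get_", "")

    @functools.wraps(builder)
    def _get(self):
        dep = getattr(self, attr, None)
        if dep is None:
            dep = builder(self)
            setattr(self, attr, dep)
        return dep

    return _get


class HomeServer:
    @cache_in_self
    def get_clock(self):
        return object()  # stand-in for constructing the real dependency
```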
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This has long been something I've wanted to do. Basically the `Daemonize` code
is both too flexible and not flexible enough, in that it offers a bunch of
features that we don't use (changing UID, closing FDs in the child, logging to
syslog) and doesn't offer a bunch that we could do with (redirecting stdout/err
to a file instead of /dev/null; having the parent not exit until the child is
running).
As a first step, I've lifted the Daemonize code and removed the bits we don't
use. This should be a non-functional change. Fixing everything else will come
later.
|
| |
|
|\ |
|
| | |
|
|/ |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Handling of incoming typing stream updates from replication was not
hooked up on master, affecting setups where typing was handled on a
different worker.
This is really only a problem if the master process is also handling
sync requests, which is unlikely for those that are at the stage of
moving typing off.
The other observable effect is that if a worker restarts or a
replication connection drops then the typing worker will issue a
`POSITION typing`, triggering the master process to try and stream *all*
typing updates from position 0.
Fixes #7907
|
| |
|
| |
|
| |
|
| |
|
|\ |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This ended up being a bit more invasive than I'd hoped for (not helped by
generic_worker duplicating some of the code from homeserver), but hopefully
it's an improvement.
The idea is that, rather than storing unstructured `dict`s in the config for
the listener configurations, we instead parse it into a structured
`ListenerConfig` object.
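A sketch of the structured-config idea, using attrs; the field names are illustrative, not the exact `ListenerConfig` definition:
```
from typing import Any, Dict, List, Optional

import attr


@attr.s(frozen=True)
class ListenerConfig:
    port = attr.ib(type=int)
    bind_addresses = attr.ib(type=List[str])
    type = attr.ib(type=str)
    tls = attr.ib(type=bool, default=False)
    resources = attr.ib(type=Optional[List[Dict[str, Any]]], default=None)


def parse_listener_def(listener: Dict[str, Any]) -> ListenerConfig:
    # Validate and normalise once, so the rest of the code never has to
    # poke around in the raw dict again.
    return ListenerConfig(
        port=int(listener["port"]),
        bind_addresses=list(listener.get("bind_addresses", ["127.0.0.1"])),
        type=listener["type"],
        tls=bool(listener.get("tls", False)),
        resources=listener.get("resources"),
    )
```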
|
| | |
|
|/ |
|
| |
|
|
|
|
|
| |
Based on #7619.
Makes `get_user_id_by_threepid` and its call stack async.
|
| |
|
|
|
|
|
|
| |
Instead of storing and sending an ACK for every single row we send
synchronously, we instead do it asynchronously while batching up
updates.
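A minimal sketch of the "remember the latest position, ack it periodically" idea using a Twisted LoopingCall; the class and method names are illustrative:
```
from twisted.internet.task import LoopingCall


class BatchedAcker:
    def __init__(self, send_ack, interval_s: float = 5.0):
        self._send_ack = send_ack
        self._pending_token = None
        LoopingCall(self._flush).start(interval_s, now=False)

    def on_row_handled(self, token: int) -> None:
        # Called for every row we process; just record the position.
        self._pending_token = token

    def _flush(self) -> None:
        if self._pending_token is not None:
            self._send_ack(self._pending_token)
            self._pending_token = None
```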
|
|
|
| |
Introduced in #7556
|
|
|
|
|
|
|
|
| |
A couple of changes of significance:
* remove the `_last_ack < federation_position` condition, so that
updates will still be correctly processed after restart
* Correctly wire up send_federation_ack to the right class.
|
| |
|
| |
|
|
|
|
| |
These are business as usual errors, rather than stuff we want to log at
error.
|
|
|
|
|
| |
We don't really make any promises about returning accurate presence data when
presence is disabled, so we may as well just return a static response, rather
than making the master handle a request.
|
|
|
| |
This allows workers to talk to each other over HTTP replication.
|
|
|
|
|
| |
This is required as both event persistence and the background update need access to this function. It should be perfectly safe for two workers to write to that table at the same time.
|
|
|
| |
This is so that the logic can happen on both master and workers when we move event persistence out.
|
|
|
| |
This is safe as we can now write to cache invalidation stream on workers, and is required for when we move event persistence off master.
|
| |
|
|
|
|
| |
variables (#6391)
|
| |
|
|
|
| |
For in-memory streams, when fetching updates on workers we need to query the source of the stream, which is currently hard-coded to be master. This PR threads the source instance we received via `POSITION` through to the update function in each stream, which can then be passed to the replication client for in-memory streams.
|
|
|
|
| |
We move the processing of typing and federation replication traffic into their handlers so that `Stream.current_token()` points to a valid token. This allows us to remove `get_streams_to_replicate()` and `stream_positions()`.
|
|
|
|
|
| |
By persisting the user interactive authentication sessions to the database, this fixes
situations where a user hits different workers throughout their auth session and also
allows sessions to persist through restarts of Synapse.
|
|
|
|
|
| |
This is primarily for allowing us to send those commands from workers, but for now simply allows us to ignore echoed RDATA/POSITION commands that we sent (we get echoes of sent commands when using redis). Currently we log a WARNING on the master process every time we receive an echoed RDATA.
|
|
|
| |
Currently we never write to streams from workers, but that will change soon
|
|
|
|
|
|
|
| |
Long story short: if we're handling presence on the current worker, we shouldn't be sending USER_SYNC commands over replication.
In an attempt to figure out what is going on here, I ended up refactoring some bits of the presencehandler code, so the first 4 commits here are non-functional refactors to move this code slightly closer to sanity. (There's still plenty to do here :/). Suggest reviewing individual commits.
Fixes (I hope) #7257.
|
|\ |
|
| | |
|
| | |
|
| |
| |
| | |
This is configured via the `redis` config options.
|
| |
| |
| | |
The aim here is to move the command handling out of the TCP protocol classes and to also merge the client and server command handling (so that we can reuse them for redis protocol). This PR simply moves the client paths to the new `ReplicationCommandHandler`, a future PR will move the server paths too.
|
| |
| |
| |
| |
| | |
Log warning when filesystem path is used.
Signed-off-by: Martin Milata <martin@martinmilata.cz>
|
| |
| |
| |
| |
| |
| | |
By running this stuff with `run_in_background`, it won't be correctly reported
against the relevant CPU usage stats.
Fixes #7202
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Remove `conn_id` usage for UserSyncCommand.
Each tcp replication connection is assigned a "conn_id", which is used
to give an ID to a remotely connected worker. In a redis world, there
will no longer be a one to one mapping between connection and instance,
so instead we need to replace such usages with an ID generated by the
remote instances and included in the replication commands.
This really only affects UserSyncCommand.
* Add CLEAR_USER_SYNCS command that is sent on shutdown.
This should help with the case where a synchrotron gets restarted
gracefully, rather than rely on 5 minute timeout.
|
| |
| |
| | |
This changes the replication protocol so that the server does not send down `RDATA` for rows that happened before the client connected. Instead, the server will send a `POSITION` and clients then query the database (or master out of band) to get up to date.
|
|\ \
| | |
| | | |
Fix starting workers when federation sending not split out.
|
| |/ |
|
| |
| |
| |
| |
| | |
This just helps keep the rows closer to their streams, so that it's easier to
see what the format of each stream is.
|
| |
| |
| |
| |
| |
| | |
`groups` != `receipts`
Introduced in #6964
|
|\ \
| |/
|/|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Add 'device_lists_outbound_pokes' as extra table.
This makes sure we check all the relevant tables to get the current max
stream ID.
Currently not doing so isn't problematic as the max stream ID in
`device_lists_outbound_pokes` is the same as in `device_lists_stream`,
however that will change.
* Change device lists stream to have one row per id.
This will make it possible to process the streams more incrementally,
avoiding having to process large chunks at once.
* Change device list replication to match new semantics.
Instead of sending down batches of user ID/host tuples, send down a row
per entity (user ID or host).
* Newsfile
* Remove handling of multiple rows per ID
* Fix worker handling
* Comments from review
|
| | |
|
| | |
|
| |
| |
| |
| |
| | |
Instead of sending down batches of user ID/host tuples, send down a row
per entity (user ID or host).
|
| |
| |
| |
| |
| | |
This should be safe to do on all workers/masters because it is guarded by
a config option which will ensure it is only actually done on the worker
assigned as a pusher.
|
|/
|
|
|
| |
* Break down monthly active users by appservice_id and emit via prometheus.
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
|
|
|
|
|
|
|
|
| |
Instead let's just warn if the worker has a media listener configured but
has the media repository disabled.
Previously non media repository workers would just ignore the media
listener.
|
| |
|
|
|
|
| |
Ensure good comprehension hygiene using flake8-comprehensions.
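For illustration, these are the kinds of rewrites flake8-comprehensions asks for:
```
rows = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]

# C400: unnecessary generator passed to list() - use a list comprehension
names = [r["name"] for r in rows]      # not: list(r["name"] for r in rows)

# C402: unnecessary generator passed to dict() - use a dict comprehension
by_id = {r["id"]: r for r in rows}     # not: dict((r["id"], r) for r in rows)

# C403: unnecessary list comprehension passed to set() - use a set comprehension
ids = {r["id"] for r in rows}          # not: set([r["id"] for r in rows])
```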
|
|
|
|
|
| |
This may make gc go a bit faster as the gc will know things like
caches/data stores etc. are frozen without having to check.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
We were sending device updates down both the federation stream and
device streams. This meant there was a race if the federation sender
worker processed the federation stream first, as when the sender checked
if there were new device updates the slaved ID generator hadn't been
updated with the new stream IDs and so returned nothing.
This situation is correctly handled by events/receipts/etc by not
sending updates down the federation stream and instead having the
federation sender worker listen on the other streams and poke the
transaction queues as appropriate.
|
| |
|
|
|
|
|
| |
This will be used to retry outbound transactions to a remote server if
we think it might have come back up.
|
|
|
|
|
|
|
|
|
|
| |
* Port synapse.replication.tcp to async/await
* Newsfile
* Correctly document type of on_<FOO> functions as async
* Don't be overenthusiastic with the asyncing....
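An illustrative before/after of the kind of change involved (the method shown is a made-up example, not a real handler):
```
from twisted.internet import defer


class BeforePort:
    @defer.inlineCallbacks
    def on_rdata(self, stream_name, token, rows):
        yield self.store.process_replication_rows(stream_name, token, rows)


class AfterPort:
    async def on_rdata(self, stream_name, token, rows):
        await self.store.process_replication_rows(stream_name, token, rows)
```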
|
|
|
|
|
|
| |
AdditionalResource really doesn't add any value, and it gets in the way for
resources which want to support child resources or the like. So, if the
resource object already implements the IResource interface, don't bother
wrapping it.
|
| |
|
| |
|
| |
|
|
|
|
| |
This has caused some confusion for people who didn't notice it going away.
|
|
|
|
|
|
| |
This looks like it got half-killed back in #888.
Fixes #6567.
|
| |
|
|
|
|
| |
`Failed to upgrade database` is not helpful, and it's unlikely that UPGRADE.rst
has anything useful.
|
|
|
|
|
|
|
| |
If acme was enabled, the sdnotify startup hook would never be run because we
would try to add it to a hook which had already fired.
There's no need to delay it: we can sdnotify as soon as we've started the
listeners.
|
|\
| |
| | |
Move database config from apps into HomeServer object
|
| | |
|
|\ \
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Synapse 1.7.0rc2 (2019-12-11)
=============================
Bugfixes
--------
- Fix incorrect error message for invalid requests when setting user's avatar URL. ([\#6497](https://github.com/matrix-org/synapse/issues/6497))
- Fix support for SQLite 3.7. ([\#6499](https://github.com/matrix-org/synapse/issues/6499))
- Fix regression where sending email push would not work when using a pusher worker. ([\#6507](https://github.com/matrix-org/synapse/issues/6507), [\#6509](https://github.com/matrix-org/synapse/issues/6509))
|
| |/
| |
| |
| | |
So that it has access to the get_retention_policy_for_room function which is required by filter_events_for_client.
|
|/ |
|
| |
|
| |
|
| |
|
|\
| |
| |
| | |
erikj/make_database_class
|
| | |
|
| | |
|
|/ |
|
| |
|
| |
|
|\ |
|
| |
| |
| |
| | |
Fix phone home stats
|
|/ |
|
| |
|
|
|
| |
* remove psutil and replace with resource
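For illustration, reading memory usage from the stdlib instead of psutil might look like this (note `ru_maxrss` is in kilobytes on Linux but bytes on macOS):
```
import resource

usage = resource.getrusage(resource.RUSAGE_SELF)
max_rss_kb = usage.ru_maxrss
```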
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The `http_proxy` and `HTTPS_PROXY` env vars can be set to a `host[:port]` value which should point to a proxy.
The address of the proxy should be excluded from IP blacklists such as the `url_preview_ip_range_blacklist`.
The proxy will then be used for
* push
* url previews
* phone-home stats
* recaptcha validation
* CAS auth validation
It will *not* be used for:
* Application Services
* Identity servers
* Outbound federation
* In worker configurations, connections from workers to masters
Fixes #4198.
|
|
|
| |
Replace every instance of `logger.warn` with `logger.warning` as the former is deprecated.
|
|
|
|
|
| |
This is in preparation for having multiple data stores that offer
different functionality, e.g. splitting out state or event storage.
|
| |
|
|
|
|
|
|
| |
* type checking fixes
* changelog
|
|
|
| |
This PR adds the optional `report_stats_endpoint` to configure where stats are reported to, if enabled.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
server to handle 3pid validation (#5987)
This is a combination of a few different PRs, finally all being merged into `develop`:
* #5875
* #5876
* #5868 (This one added the `/versions` flag but the flag itself was actually [backed out](https://github.com/matrix-org/synapse/commit/891afb57cbdf9867f2848341b29c75d6f35eef5a#diff-e591d42d30690ffb79f63bb726200891) in #5969. What's left is just giving /versions access to the config file, which could be useful in the future)
* #5835
* #5969
* #5940
Clients should not actually use the new registration functionality until https://github.com/matrix-org/synapse/pull/5972 is merged.
UPGRADE.rst, changelog entries and config file changes should all be reviewed closely before this PR is merged.
|
|
|
|
|
| |
Python will return a tuple whether there are parentheses around the returned values or not.
I'm just sick of my editor complaining about this all over the place :)
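For example:
```
# Both functions return the same 2-tuple; the parentheses add nothing.
def with_parens():
    return (1, "one")


def without_parens():
    return 1, "one"


assert with_parens() == without_parens()
```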
|
| |
|
|
|
|
|
|
| |
... to save OSes which don't use it from having to maintain a port.
Fixes #5865.
|
|
|
|
| |
Signed-off-by: Chris Moos <chris@chrismoos.com>
|
| |
|
| |
|
|
|
|
|
| |
This helps ensure that we only consider ourselves "up" once all the
startup functions have completed.
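A sketch of notifying systemd over the standard sd_notify socket protocol (this mirrors the protocol rather than Synapse's exact helper):
```
import os
import socket


def sdnotify(state: bytes = b"READY=1") -> None:
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return  # not running under a Type=notify systemd service
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract namespace socket
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.connect(addr)
        sock.sendall(state)
```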
|
|
|
|
| |
Fixes #5676.
|
| |
|
| |
|
|
|
| |
Co-Authored-By: Aaron Raimist <aaron@raim.ist>
|
| |
|
| |
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Configure and initialise tracer
Includes config options for the tracer and sets up JaegerClient.
* Scope manager using LogContexts
We piggy-back our tracer scopes by using log context.
The current log context gives us the current scope. If new scope is
created we create a stack of scopes in the context.
* jaeger is a dependency now
* Carrier inject and extraction for Twisted Headers
* Trace federation requests on the way in and out.
The span is created in _started_processing and closed in
_finished_processing because we need a meaningful log context.
* Create logcontext for new scope.
Instead of having a stack of scopes in a logcontext we create a new
context for a new scope if the current logcontext already has a scope.
* Remove scope from logcontext if logcontext is top level
* Disable tracer if not configured
* typo
* Remove dependence on jaeger internals
* bools
* Set service name
* Explicitly state that the tracer is disabled
* Black is the new black
* Newsfile
* Code style
* Use the new config setup.
* Generate config.
* Copyright
* Rename config to opentracing
* Remove user whitelisting
* Empty whitelist by default
* Use ConfigError instead of RuntimeError
* Use isinstance
* Use tag constants for opentracing.
* Remove debug comment and no need to explicitly record error
* Two errors a "s(c)entry"
* Docstrings!
* Remove debugging brainslip
* Homeserver whitelisting
* Better opentracing config comment
* linting
* Include worker name in service_name
* Make opentracing an optional dependency
* Neater config retrieval
* Clean up dummy tags
* Instantiate tracing as object instead of global class
* Include opentracing as a homeserver member.
* Thread opentracing to the request level
* Reference opentracing through hs
* Instantiate dummy opentracing for tests.
* About to revert, just keeping the unfinished changes just in case
* Revert back to global state, commit number:
9ce4a3d9067bf9889b86c360c05ac88618b85c4f
* Use class level methods in tracerutils
* Start and stop requests spans in a place where we
have access to the authenticated entity
* Seen it, isort it
* Make sure to close the active span.
* I'm getting black and blue from this.
* Logger formatting
Co-Authored-By: Erik Johnston <erik@matrix.org>
* Outdated comment
* Import opentracing at the top
* Return a contextmanager
* Start tracing client requests from the servlet
* Return noop context manager if not tracing
* Explicitly say that these are federation requests
* Include servlet name in client requests
* Use context manager
* Move opentracing to logging/
* Seen it, isort it again!
* Ignore twisted return exceptions on context exit
* Escape the scope
* Scopes should be entered to make them useful.
* Nicer decorator names
* Just one init, init?
* Don't need to close something that isn't open
* Docs make you smarter
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| | |
* Fix 'utime went backwards' errors on daemonization.
Fixes #5608
* remove spurious debug
|
| | |
|
| | |
|
| | |
|
|/ |
|
|
|
| |
This has no useful purpose on python3, and is generally a source of confusion.
|
| |
|
| |
|
| |
|
|\ |
|
| | |
|
|/
|
|
| |
* add monthly active users to phonehome stats
|
|
|
|
|
|
|
|
|
|
|
|
| |
identity server (#5377)
Sends password reset emails from the homeserver instead of proxying to the identity server. This is now the default behaviour for security reasons. If you wish to continue proxying password reset requests to the identity server you must now enable the email.trust_identity_server_for_password_resets option.
This PR is a culmination of 3 smaller PRs which have each been separately reviewed:
* #5308
* #5345
* #5368
|
| |
|
|
|
| |
Fixes #5271.
|
|
|
| |
* expose SlavedProfileStore to ClientReaderSlavedStore
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This reverts commit ce5bcefc609db40740c692bd53a1ef84ab675e8c.
This caused:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/synapse/src/synapse/app/client_reader.py", line 32, in <module>
from synapse.replication.slave.storage import SlavedProfileStore
ImportError: cannot import name 'SlavedProfileStore' from 'synapse.replication.slave.storage' (/home/synapse/src/synapse/replication/slave/storage/__init__.py)
error starting synapse.app.client_reader('/home/synapse/config/workers/client_reader.yaml') (exit code: 1); see above for logs
```
|
|
|
| |
* expose SlavedProfileStore to ClientReaderSlavedStore
|
|\
| |
| | |
Limit in flight DNS requests
|
| |
| |
| |
| |
| |
| |
| | |
This is to work around a bug in Twisted where a large number of
concurrent DNS requests causes it to tight-loop forever.
c.f. https://twistedmatrix.com/trac/ticket/9620#ticket
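A sketch of the "cap the number of in-flight lookups" idea with a DeferredSemaphore; the real fix wraps Twisted's hostname resolver, whose interface is more involved than this illustrative wrapper:
```
from twisted.internet.defer import DeferredSemaphore


class LimitedResolverWrapper:
    def __init__(self, resolve_fn, max_in_flight: int = 100):
        self._resolve_fn = resolve_fn
        self._semaphore = DeferredSemaphore(max_in_flight)

    def resolve(self, hostname: str):
        # run() acquires the semaphore, calls the function, and releases the
        # semaphore when the returned Deferred fires.
        return self._semaphore.run(self._resolve_fn, hostname)
```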
|
| |
| |
| |
| | |
It doesn't really belong under rest/client/v1 any more.
|
| | |
|
| | |
|
|\ \
| |/
|/| |
Move some rest endpoints to client reader
|
| | |
|
|/
|
|
| |
add context to phonehome stats
|
| |
|
|
|
|
| |
... as a precursor to combining it with the CurrentStateDelta stream.
|
| |
|
|\
| |
| | |
Move client receipt processing to federation sender worker.
|
| |
| |
| |
| |
| | |
This is mostly a prerequisite for #4730, but also fits with the general theme
of "move everything off the master that we possibly can".
|
|\ \
| | |
| | | |
Allow passing --daemonize to workers
|
| |/ |
|
|/ |
|
| |
|
|\
| |
| | |
Move /account/3pid to client_reader
|
| | |
|
|/ |
|
| |
|
| |
|
|\
| |
| | |
Split /login into client_reader
|
| | |
|
|\ \
| | |
| | | |
Add basic optional sentry.io integration
|
| | | |
|
| | | |
|
| | | |
|
|\ \ \
| | |/
| |/| |
Split out registration to worker
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This allows registration to be handled by a worker, though the actual
write to the database still happens on master.
Note: due to the in-memory session map all registration requests must be
handled by the same worker.
|
|/ /
| |
| |
| |
| |
| |
| |
| | |
When guest_access changes from allowed to forbidden all local guest
users should be kicked from the room. This did not happen when
revocation was received from federation on a worker.
Presumably broken in #4141
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Better logging for errors on startup
* Fix "TypeError: '>' not supported" when starting without an existing
certificate
* Fix a bug where an existing certificate would be reprovisioned every day
|
| |
| |
| | |
Co-Authored-By: richvdh <1389908+richvdh@users.noreply.github.com>
|
| |
| |
| |
| |
| | |
Fixes the "can't listen on 0.0.0.0" error. Also makes it more consistent with
what we do elsewhere.
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
I wanted to bring listen_tcp into line with listen_ssl in terms of returning a
list of ports, and wanted to check that was a safe thing to do - hence the
logging in `refresh_certificate`.
Also, pull the 'Synapse now listening' message up to homeserver.py, because it
was being duplicated everywhere else.
|
|/
|
|
|
| |
turns out it doesn't really support ipv6, so let's hack around that by only
listening on ipv4 by default.
|
|
|
|
|
|
| |
If TLS is disabled, it should not be an error if no cert is given.
Fixes #4554.
|
|
|
|
|
| |
Rather than have to specify `no_tls` explicitly, infer whether we need to load
the TLS keys etc from whether we have any TLS-enabled listeners.
|
|
|
|
| |
we aren't going to use them anyway.
|
|
|
|
|
| |
Log which file we're reading keys and certs from, and refactor the code a bit
in preparation for other work
|
|
|
|
|
| |
It's nothing to do with refreshing the certificates. No idea why it was here.
|
| |
|
|\
| |
| | |
New listener resource for the federation API "openid/userinfo" endpoint
|
| |
| |
| |
| | |
Signed-off-by: Jason Robinson <jasonr@matrix.org>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This allows the OpenID userinfo endpoint to be active even if the
federation resource is not active. The OpenID userinfo endpoint
is called by integration managers to verify user actions using the
client API OpenID access token. Without this verification, the
integration manager cannot know that the access token is valid.
The OpenID userinfo endpoint will be loaded in the case that either
"federation" or "openid" resource is defined. The new "openid"
resource is defaulted to active in default configuration.
Signed-off-by: Jason Robinson <jasonr@matrix.org>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
For all the homeserver classes, only the FrontendProxyServer passes
its reactor when doing the http listen. Looking at previous PRs, it looks
like this was introduced to make it possible to write a test; otherwise
when you try to run a test with the test homeserver it tries to
do a real bind to a port. Passing the reactor that the homeserver
is instantiated with should probably be the right thing to do anyway?
Signed-off-by: Jason Robinson <jasonr@matrix.org>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
For all the homeserver classes, only the FrontendProxyServer passes
its reactor when doing the http listen. Looking at previous PRs, it looks
like this was introduced to make it possible to write a test; otherwise
when you try to run a test with the test homeserver it tries to
do a real bind to a port. Passing the reactor that the homeserver
is instantiated with should probably be the right thing to do anyway?
Signed-off-by: Jason Robinson <jasonr@matrix.org>
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Handle listening for ACME requests on IPv6 addresses
the weird url-but-not-actually-a-url-string doesn't handle IPv6 addresses
without extra quoting. Building a string which you are about to parse again
seems like a weird choice. Let's just use listenTCP, which is consistent with
what we do elsewhere.
* Clean up the default ACME config
make it look a bit more consistent with everything else, and tweak the defaults
to listen on port 80.
* newsfile
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| | |
* load cert
* changelog
* fix
|
|/ |
|
|
|
|
|
|
|
|
| |
* Raise a ConfigError if an invalid resource is specified
* Require Jinja 2.9 for the consent resource
* changelog
|
|
|
|
| |
optional dependencies to setuptools (#4298)
|
|
|
| |
* ensure can report mau stats when hs.config.mau_stats_only is set
|
|\ |
|
| |\
| | |
| | | |
Stop installing Matrix Console by default
|
| | |
| | |
| | |
| | | |
This is based on the work done by @krombel in #2601.
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This is largely a precursor for the removal of the bundled webclient. The idea
is to present a page at / which reassures people that something is working, and
to give them some links for next steps.
The welcome page lives at `/_matrix/static/`, so is enabled alongside the other
`static` resources (which, in practice, means the client API is enabled). We'll
redirect to it from `/` if we have nothing better to display there.
It would be nice to have a way to disable it (in the same way that you might
disable the nginx welcome page), but I can't really think of a good way to do
that without a load of ickiness.
It's based on the work done by @krombel for #2601.
|