| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
| |
Also add support for giving a callback to generate the JSON object to
verify. This should reduce memory usage, as we no longer have the event
in memory in dict form (which has a large memory footprint) for extended
periods of time.
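(A rough illustration of the idea, not the actual Synapse API; the names below are hypothetical. The verifier takes a zero-argument callable and only materialises the dict at the moment it checks the signature:)

    from typing import Any, Callable, Dict

    def verify_json_lazily(get_json: Callable[[], Dict[str, Any]]) -> bool:
        """Build the JSON object only at verification time (hypothetical sketch)."""
        json_object = get_json()            # the event is rendered to a dict here, not earlier
        return "signatures" in json_object  # stand-in for the real signature check

    # The caller hands over a callback instead of a pre-built dict:
    ok = verify_json_lazily(lambda: {"type": "m.room.message", "signatures": {}})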
|
|
|
|
| |
Every single time I want to access the config object, I have to remember
whether or not we use `get_config`. Let's just get rid of it.
|
|
|
|
|
|
|
| |
Part of #9744
Removes all redundant `# -*- coding: utf-8 -*-` lines from files, as Python 3 now reads source code as UTF-8 by default.
`Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>`
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
Replaces the `federation_ip_range_blacklist` configuration setting with an
`ip_range_blacklist` setting with wider scope. It now applies to:
* Federation
* Identity servers
* Push notifications
* Checking key validity for third-party invite events
The old `federation_ip_range_blacklist` setting is still honored if present, but
with reduced scope (it only applies to federation and identity servers).
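(A minimal sketch of honouring the legacy setting while preferring the new, wider-scoped one. This is not Synapse's actual config code; the helper and its behaviour are assumptions for illustration:)

    def read_ip_blacklist(config: dict) -> list:
        # Prefer the new, wider-scoped setting if present...
        if "ip_range_blacklist" in config:
            return config["ip_range_blacklist"]
        # ...otherwise honour the legacy setting (which, in the real code, is
        # only applied to federation and identity servers).
        return config.get("federation_ip_range_blacklist", [])

    print(read_ip_blacklist({"federation_ip_range_blacklist": ["10.0.0.0/8"]}))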
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Fix test_verify_json_objects_for_server_awaits_previous_requests
It turns out that this wasn't really testing what it thought it was testing
(in particular, `check_context` was turning failures into successes, which was
making the tests pass even though it wasn't clear they should have been).
It was also somewhat overcomplex - we can test what it was trying to test
without mocking out perspectives servers.
* Fix warnings about finished logcontexts in the keyring
We need to make sure that we finish the key fetching magic before we run the
verifying code, to ensure that we don't mess up our logcontexts.
|
|
|
|
|
|
|
| |
This converts calls like super(Foo, self) -> super().
Generated with:
sed -i "" -Ee 's/super\([^\(]+\)/super()/g' **/*.py
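(For example, purely illustrative:)

    class Base:
        def __init__(self) -> None:
            self.ready = True

    class Foo(Base):
        def __init__(self) -> None:
            # before: super(Foo, self).__init__()
            super().__init__()  # after the sed rewrite

    assert Foo().ready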
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Pull Sentinel out of LoggingContext
... and drop a few unnecessary references to it
* Factor out LoggingContext.current_context
move `current_context` and `set_context` out to top-level functions.
Mostly this means that I can more easily trace what's actually referring to
LoggingContext, but I think it's generally neater.
* move copy-to-parent into `stop`
this really just makes `start` and `stop` more symmetric. It also means that it
behaves correctly if you manually `set_log_context` rather than using the
context manager.
* Replace `LoggingContext.alive` with `finished`
Turn `alive` into `finished` and make it a bit better defined.
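(A rough sketch of the shape of that refactor, heavily simplified; the real LoggingContext carries much more state, and the function names here are only an approximation of what the module-level API looks like:)

    import threading

    _thread_local = threading.local()

    def current_context():
        """Return the logging context active on this thread (or None)."""
        return getattr(_thread_local, "context", None)

    def set_current_context(context):
        """Install `context` as the active logging context, returning the previous one."""
        previous = current_context()
        _thread_local.context = context
        return previous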
|
|
|
|
| |
Ensure good comprehension hygiene using flake8-comprehensions.
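(Typical rewrites that flake8-comprehensions nudges you towards; generic examples, not taken from this commit:)

    pairs = [("a", 1), ("b", 2)]

    # an unnecessary call to dict() around a generator ...
    slow = dict((k, v) for k, v in pairs)
    # ... is better written as a dict comprehension
    fast = {k: v for k, v in pairs}

    # a list comprehension inside sum() builds a throwaway list ...
    total = sum([v for _, v in pairs])
    # ... where a generator expression will do
    total = sum(v for _, v in pairs)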
|
|
|
|
|
|
| |
Lift the restriction that *all* the keys used for signing v2 key responses be
present in verify_keys.
Fixes #6596.
|
|\
| |
| | |
Add config option to sign remote key query responses with a separate key.
|
| | |
|
| | |
|
| |
| |
| |
| |
| | |
This allows servers to separate keys that are used to sign remote keys
when acting as a notary server.
|
|/
|
|
|
|
|
|
|
| |
There's no point doing a raise_from here, because the exception is always
logged at warn with no stacktrace in the caller. Instead, let's try to give
better messages to reduce confusion.
In particular, this means that we won't log 'Failed to connect to remote
server' when we don't even attempt to connect to the remote server due to
blacklisting.
|
| |
|
|
|
|
| |
A tactical call_later here should fix #5723
|
|
|
|
|
| |
There's an awful lot of deferreds and dictionaries flying around here. The
whole thing can be made much simpler and achieve the same effect.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It's not really a problem to trust notary responses signed by the old key so
long as we are also doing TLS validation.
This commit adds a check to the config parsing code at startup: if the insecure
matrix.org key is trusted without TLS validation, we refuse to start.
This allows us to remove the rather alarming-looking warning which happens at
runtime.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
There are a few changes going on here:
* We make checking the signature on a key server response optional: if no
verify_keys are specified, we trust to TLS to validate the connection.
* We change the default config so that it does not require responses to be
signed by the old key.
* We replace the old 'perspectives' config with 'trusted_key_servers', which
is also formatted slightly differently.
* We emit a warning to the logs every time we trust a key server response
signed by the old key.
|
|
|
|
|
|
|
| |
Also:
* rename VerifyKeyRequest->VerifyJsonRequest
* calculate key_ids on VerifyJsonRequest construction
* refactor things to pass around VerifyJsonRequests instead of 4-tuples
|
|
|
| |
Remove some spurious stuff, clarify some other stuff
|
|
|
| |
it's a bit confusing
|
|
|
|
|
|
|
|
| |
It takes at least 20 minutes to work through the long_retries schedule (11
attempts, each with a 60 second timeout, and 60 seconds between each request),
so if the notary server isn't returning within the timeout, we'll just end up
blocking whatever request is happening for 20 minutes.
Ain't nobody got time for that.
|
|
|
|
| |
... else we're guaranteed to time out.
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
When handling incoming federation requests, make sure that we have an
up-to-date copy of the signing key.
We do not yet enforce the validity period for event signatures.
|
|\
| |
| |
| |
| | |
matrix-org/rav/server_keys/07-fix-notary-cache-poison
Stop overwriting server keys with other keys
|
| |
| |
| |
| |
| | |
Fix a bug where we would discard a key result which the origin server is no
longer returning. Fixes #5305.
|
|\|
| |
| |
| | |
rav/server_keys/05-rewrite-gsvk-again
|
| |
| |
| |
| |
| |
| |
| |
| | |
The verify_request deferred already returns a suitable SynapseError, so I don't
really know what we expect to achieve by doing more wrapping, other than log
spam.
Fixes #4278.
|
| |
| |
| |
| | |
because namedtuple is awful
|
|/
|
|
|
| |
Attempt to simplify the logic in get_server_verify_keys by splitting it into
two methods.
|
|\
| |
| | |
Ensure that server_keys fetched via a notary server are correctly signed.
|
| |
| |
| |
| | |
In particular, don't give up on the first failure.
|
| | |
|
|/
|
|
|
|
| |
The list of server names was redundant, since it was equivalent to the keys on
the server_to_deferred map. This reduces the number of large lists being passed
around, and has the benefit of deduplicating the entries in `wait_on`.
|
|
|
|
|
|
|
| |
Rather than have three methods which have to have the same interface,
factor out a separate interface which is provided by three implementations.
I find it easier to grok the code this way.
|
|
|
|
|
|
|
|
| |
This is a first step to checking that the key is valid at the required moment.
The idea here is that, rather than passing VerifyKey objects in and out of the
storage layer, we instead pass FetchKeyResult objects, which simply wrap the
VerifyKey and add a valid_until_ts field.
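(A stripped-down sketch of such a wrapper; the real class lives in Synapse's key storage code, this just shows the shape described above:)

    from dataclasses import dataclass
    from typing import Any

    @dataclass(frozen=True)
    class FetchKeyResult:
        """A verify key plus the time until which its origin says it is valid."""
        verify_key: Any      # a signedjson VerifyKey in the real code
        valid_until_ts: int  # validity cutoff, in milliseconds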
|
|
|
|
|
|
|
|
|
| |
* Pass time_added_ms into process_v2_response
* Simplify process_v2_response
We can merge old_verify_keys into verify_keys, and reduce the number of dicts
flying around.
|
|
|
|
|
| |
These were never used, and poking arbitrary data into objects from other
packages seems confusing at best.
|
|
|
|
|
| |
Storing server keys hammered the database a bit. This replaces the
implementation, which stored a single key at a time, with one which can do many
updates at once.
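(A generic sketch of the batching idea using the DB-API's executemany. Synapse's own storage layer is different; the table and column names here are made up:)

    import sqlite3

    def store_server_keys(conn: sqlite3.Connection, rows) -> None:
        # one round trip for the whole batch, instead of one statement per key
        conn.executemany(
            "INSERT OR REPLACE INTO server_keys (server_name, key_id, key_json)"
            " VALUES (?, ?, ?)",
            rows,
        )
        conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE server_keys"
        " (server_name TEXT, key_id TEXT, key_json TEXT,"
        "  PRIMARY KEY (server_name, key_id))"
    )
    store_server_keys(
        conn,
        [("example.org", "ed25519:a1", "{}"), ("example.org", "ed25519:a2", "{}")],
    )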
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
Rewrite this so that it doesn't hammer the database.
|
|
|
|
|
|
| |
There's no point in collecting a merged dict of keys: it is sufficient to
consider just the new keys which have been fetched by the most recent
key_fetch_fns.
|
|
|
|
|
| |
make sure we store the name of the server the keys came from, rather than the
origin server, after doing a fetch-from-perspectives.
|
|
|
|
|
| |
It's easier to check it in the caller than to complicate the interface with an
extra param.
|
|
|
|
|
|
| |
Make this just return the key dict, rather than a single-entry dict mapping the
server name to the key dict. It's easy for the caller to get the server name
from the response object anyway.
|
| |
|
| |
|
| |
|
|
|
|
| |
This mainly reduces the number of exceptions we log.
|
|
|
|
|
| |
All this magic is redundant.
|
| |
|
|\
| |
| | |
add some logging for the keyring queue
|
| |
| |
| |
| | |
why is it so damn slow?
|
|/ |
|
| |
|
|\
| |
| |
| |
| |
| |
| | |
send_sni_for_federation_requests
# Conflicts:
# synapse/crypto/context_factory.py
|
| | |
|
|/ |
|
|
|
|
|
|
|
|
| |
Firstly, don't swallow the reason for the failure
Secondly, don't assume all exceptions are verification failures
Thirdly, log a bit of info about the key being used if debug is enabled
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
There were a bunch of places where we fire off a process to happen in the
background, but don't have any exception handling on it - instead relying on
the unhandled error being logged when the relevant deferred gets
garbage-collected.
This is unsatisfactory for a number of reasons:
- logging on garbage collection is best-effort and may happen some time after
the error, if at all
- it can be hard to figure out where the error actually happened.
- it is logged as a scary CRITICAL error which (a) I always forget to grep for
and (b) it's not really CRITICAL if a background process we don't care about
fails.
So this is an attempt to add exception handling to everything we fire off into
the background.
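(The general pattern is to attach an errback at the point where the work is kicked off, so failures are logged immediately and at a sensible level. A minimal Twisted sketch, assuming Twisted is available; this is not Synapse's actual helper:)

    import logging
    from twisted.internet import defer

    logger = logging.getLogger(__name__)

    def fire_and_log_errors(f, *args, **kwargs):
        """Kick `f` off in the background, logging any failure straight away."""
        d = defer.maybeDeferred(f, *args, **kwargs)
        d.addErrback(
            lambda failure: logger.error(
                "Background task %s failed: %s", f.__name__, failure.getTraceback()
            )
        )
        return d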
|
|/
|
|
|
|
| |
While I was going through uses of preserve_fn for other PRs, I converted places
which only use the wrapped function once to use run_in_background, to avoid
creating the function object.
|
|
|
|
|
|
|
| |
Doing this I learned that e.message was pretty short-lived: it was added in 2.6,
then they realized it was a bad idea and deprecated it in 2.7.
Signed-off-by: Adrian Tschira <nota@notafile.com>
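(For reference, the portable replacement is to format the exception itself rather than reach for .message:)

    try:
        raise ValueError("key not found")
    except ValueError as e:
        # e.message was a Python 2-only attribute; str(e) works everywhere
        print("failed: %s" % e)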
|
|
|
|
| |
what could possibly go wrong
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
preserve_context_over_fn is essentially broken, because (a) it pointlessly
drops the current logcontext before calling its wrapped function, which means
we don't get any useful logcontexts for _handle_key_deferred; (b) it wraps the
resulting deferred in a _PreservingContextDeferred, which is very dangerous
because you then can't yield on it without leaking context back into the
reactor.
Instead, let's specify that the resultant deferreds call their callbacks with
no logcontext.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
... which means that logcontexts can be correctly preserved for the stuff it
does.
get_server_verify_keys is now called with the logcontext, so needs to
preserve_fn when it fires off its nested inlineCallbacks function.
Also renames get_server_verify_keys to reflect the fact it's meant to be
private.
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
If the verify_request.deferred has already completed, then `remove_deferreds`
will be called immediately. It therefore might resolve the server_to_deferred
deferred while there are still other requests for that server in flight.
To avoid that, we should build the complete list of requests, and *then* add the
callbacks.
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Define that it is run with no log context, and make sure that happens.
If we aren't careful to reset the logcontext, we can't bung the deferreds into
defer.gatherResults etc. We don't actually do that directly, but we *do*
resolve other deferreds from affected callbacks (notably the server_to_deferred
map in _start_key_lookups), and those *do* get passed into
defer.gatherResults. It turns out that this way ends up being least confusing.
|
| |
| |
| |
| | |
... to make it easier to see what's going on.
|
| |
| |
| |
| | |
This is a precursor to factoring some of this code out.
|
| |
| |
| |
| |
| |
| | |
There's no need for this to be a nested definition; pulling it out not only
makes it more efficient, but makes it easier to check that it's not accessing
any local variables it shouldn't be.
|
| | |
|
| |
| |
| |
| | |
Fix a bug where we could end up firing off multiple requests for server_keys
for the same server at the same time.
|
| |
| |
| |
| |
| |
| | |
I'm still unclear on what the intended behaviour for
`verify_json_objects_for_server` is, but at least I now understand the
behaviour of most of the things it calls...
|
|/
|
|
| |
Signed-off-by: Kenny Keslar <r3dey3@r3dey3.com>
|
|\
| |
| | |
push federation retry limiter down to matrixfederationclient
|
| |
| |
| |
| |
| | |
rather than having to instrument everywhere we make a federation call,
make the MatrixFederationHttpClient manage the retry limiter.
|
|/ |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|\
| |
| |
| |
| | |
Conflicts:
synapse/crypto/keyring.py
|
| |\
| | |
| | | |
Add a couple more checks to the keyring
|
| | | |
|
| |/ |
|
|/ |
|
| |
|
| |
|
|
|
|
|
|
| |
set.union() is a side-effect-free function that returns the union of two
sets. This clearly wanted .update(), which is the side-effecting mutator
version.
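(The difference in a nutshell:)

    seen = {"a"}

    seen.union({"b"})   # returns a new set {"a", "b"}; `seen` is unchanged
    seen.update({"b"})  # mutates `seen` in place; it now contains "a" and "b"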
|
| |
|
| |
|
|
|
|
| |
server.
|
|\
| |
| | |
Allow configuration to ignore invalid SSL certs
|
| |
| |
| |
| |
| | |
This will be useful for sytest, and sytest only, hence the aggressive
config key name.
|
|/ |
|
| |
|
|
|
|
| |
to fetch more
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
as it might work better than iterating over the top level dict
|
|
|
|
| |
by 'unhandled errors'
|
| |
|
| |
|
| |
|
|\
| |
| |
| |
| | |
Conflicts:
synapse/crypto/keyring.py
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
|/ |
|
| |
|
| |
|
|
|
|
|
| |
Factor out the per-destination retry logic from TransactionQueue so it
can be reused in both get_pdu and crypto.keyring
|
| |
|
| |
|
|
|
|
| |
because they don't interact well with the logging contexts
|
| |
|
| |
|
| |
|
| |
|
|
|