| Commit message | Author | Age | Files | Lines |
| |
|
|
|
|
|
|
|
|
| |
The get_entities_changed function was changed to return all changed
entities since the given stream position, rather than only those changed
from a given list of entities. This resulted in the function incorrectly
returning large numbers of entities, which, among other things, caused
large increases in database usage.
|
|\
| |
| | |
Don't return unknown entities in get_entities_changed
|
| |
| |
| |
| |
| |
| |
| |
| | |
The stream cache keeps track of all entities that have changed since
a particular stream position, so get_entities_changed does not need to
return unknown entities when given a larger stream position.
This makes it consistent with the behaviour of has_entity_changed.
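A sketch of the intended behaviour (hedged: `_cache` and
`_earliest_known_stream_pos` are assumed names for the stream cache's
internals):

    def get_entities_changed(self, entities, stream_pos):
        """Return which of `entities` may have changed since stream_pos."""
        if stream_pos <= self._earliest_known_stream_pos:
            # the cache doesn't cover this position: assume all changed
            return set(entities)

        changed = {
            entity
            for pos, entity in self._cache.items()
            if pos > stream_pos
        }
        # only report entities the caller asked about -- never unknown
        # ones -- which matches the behaviour of has_entity_changed
        return changed.intersection(entities)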
|
|/
|
|
|
|
|
|
| |
popitem removes the *most recent* item by default [1]. We want the oldest.
Fixes #3524
[1]: https://docs.python.org/2/library/collections.html#collections.OrderedDict.popitem
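For reference, the two modes of OrderedDict.popitem:

    from collections import OrderedDict

    d = OrderedDict([("a", 1), ("b", 2), ("c", 3)])
    d.popitem()            # ('c', 3) -- LIFO: removes the most recent
    d.popitem(last=False)  # ('a', 1) -- FIFO: removes the oldest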
|
|
|
|
|
|
|
|
|
|
|
| |
This line shows up as about 5% of cpu time on a synchrotron:
not_known_entities = set(entities) - set(self._entity_to_key)
Presumably the problem here is that _entity_to_key can be largeish, and
building a set for its keys every time this function is called is slow.
Here we rewrite the logic to avoid building so many sets.
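The shape of the rewrite (a sketch; `entity_to_key` is a dict keyed by
entity, as in the profiled line):

    def split_entities(entities, entity_to_key):
        """Partition `entities` by membership in entity_to_key without
        materialising set(entity_to_key) on every call."""
        known, not_known = [], []
        for entity in entities:
            # O(1) dict membership test; allocates nothing proportional
            # to the size of the cache
            if entity in entity_to_key:
                known.append(entity)
            else:
                not_known.append(entity)
        return known, not_known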
|
|
|
|
|
| |
Let's try to include time spent in the DB threads in the per-request/block cpu
usage metrics.
|
|
|
|
|
| |
Factor out the resource usage tracking out to a separate object, which can be
passed around and copied independently of the logcontext itself.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
When _get_state_for_groups is given a wildcard filter, just do a complete
lookup. Hopefully this will give us the best of both worlds: not filling up
RAM when we only need one or two keys, while still keeping the cache
effective for the federation reader use case.
|
|\
| |
| | |
Log number of events fetched from DB
|
| |
| |
| |
| | |
so that we can stub it for the sentinel and not have a billion failing UTs
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
When we finish processing a request, log the number of events we fetched from
the database to handle it.
[I'm trying to figure out which requests are responsible for large amounts of
event cache churn. It may turn out to be more helpful to add counts to the
prometheus per-request/block metrics, but that is an extension to this code
anyway.]
|
|/ |
|
| |
|
| |
|
| |
|
|
|
|
| |
they're not meant to be lazy (#3307)
|
|\
| |
| | |
remaining isinstance fixes
|
| | |
|
| | |
|
| |
| |
| |
| | |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| | |
|
| | |
|
| | |
|
|\| |
|
| |\
| | |
| | | |
Misc Python3 fixes
|
| | |
| | |
| | |
| | | |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| |\ \
| | | |
| | | | |
Add batch_iter to utils
|
| | |/
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
There's a frequent idiom I noticed where an iterable is split up into a
number of chunks/batches. Unfortunately the usual slicing approach does
not work with iterators like dict.keys() in Python 3. This
implementation works with iterators.
Signed-off-by: Adrian Tschira <nota@notafile.com>
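A minimal implementation along those lines (a sketch of the idiom, not
necessarily the exact code merged):

    from itertools import islice

    def batch_iter(iterable, size):
        """Yield tuples of at most `size` items from any iterable,
        including plain iterators such as dict.keys() on Python 3."""
        sourceiter = iter(iterable)
        # iter(callable, sentinel) keeps calling until () comes back
        return iter(lambda: tuple(islice(sourceiter, size)), ())

    # usage: for chunk in batch_iter(range(10), 3): ...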
|
| | | |
|
| | | |
|
| | | |
|
|\| | |
|
| | | |
|
| |/ |
|
|/ |
|
|\ |
|
| | |
|
| |\ |
|
| | |
| | |
| | |
| | | |
This was introduced in 4f2f5171
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
So, it turns out that if you have a first `Deferred` `D1`, you can add a
callback which returns another `Deferred` `D2`, and `D2` must then complete
before any further callbacks on `D1` will execute (and later callbacks on `D1`
get the *result* of `D2` rather than `D2` itself).
So, `D1` might have `called=True` (as in, it has started running its
callbacks), but any new callbacks added to `D1` won't get run until `D2`
completes - so if you `yield D1` in an `inlineCallbacks` function, your `yield`
will 'block'.
In conclusion: some of our assumptions in `logcontext` were invalid. We need to
make sure that we don't optimise out the logcontext juggling when this
situation happens. Fortunately, it is easy to detect by checking `D1.paused`.
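A small demonstration of that behaviour in plain Twisted:

    from twisted.internet import defer

    d1 = defer.Deferred()
    d2 = defer.Deferred()

    d1.addCallback(lambda _: d2)   # callback returns another Deferred
    d1.addCallback(print)          # runs only once d2 has completed

    d1.callback("fired")
    assert d1.called               # d1 has started running callbacks...
    assert d1.paused               # ...but is paused waiting for d2

    d2.callback("result of d2")    # now the second callback prints
                                   # "result of d2", not d2 itself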
|
| |\
| | |
| | |
| | |
| | | |
matrix-org/rav/run_in_background_exception_handling
Trap exceptions thrown within run_in_background
|
| | |
| | |
| | |
| | |
| | | |
Turn any exceptions that get thrown synchronously within run_in_background into
Failures instead.
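A simplified sketch of the change (Synapse's real run_in_background also
does logcontext juggling, omitted here):

    from twisted.internet import defer
    from twisted.python.failure import Failure

    def run_in_background(f, *args, **kwargs):
        try:
            res = f(*args, **kwargs)
        except Exception:
            # turn a synchronous exception into a failed Deferred, so
            # callers can rely on getting a Deferred back
            return defer.fail(Failure())
        if isinstance(res, defer.Deferred):
            return res
        return defer.succeed(res)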
|
| |\ \ |
|
| | |\ \
| | | | |
| | | | | |
Replace StringIO imports with six
|
| | | | | |
|
| | |\ \ \
| | | | | |
| | | | | | |
more byte strings
|
| | | |/ /
| | | | |
| | | | |
| | | | | |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| | |\ \ \
| | | |/ /
| | |/| | |
Use run_in_background in preference to preserve_fn
|
| | | |\ \ |
|
| | | | |/
| | | |/|
| | | | |
| | | | |
| | | | |
| | | | | |
While I was going through uses of preserve_fn for other PRs, I converted places
which only use the wrapped function once to use run_in_background, to avoid
creating the function object.
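The shape of the conversion (names as in Synapse's logcontext module):

    # Before: preserve_fn builds a wrapper function object that is
    # called exactly once and then thrown away:
    preserve_fn(f)(arg1, arg2)

    # After: run_in_background does the same logcontext handling
    # without the intermediate wrapper:
    run_in_background(f, arg1, arg2)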
|
| |/ / /
| | | |
| | | |
| | | |
| | | |
| | | | |
plus a bonus next()
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| |\ \ \
| | | |/
| | |/| |
|
| | |/
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
There were a bunch of places where we fire off a process to happen in the
background, but don't have any exception handling on it - instead relying on
the unhandled error being logged when the relevant deferred gets
garbage-collected.
This is unsatisfactory for a number of reasons:
- logging on garbage collection is best-effort and may happen some time after
the error, if at all
- it can be hard to figure out where the error actually happened.
- it is logged as a scary CRITICAL error which (a) I always forget to grep for
and (b) it's not really CRITICAL if a background process we don't care about
fails.
So this is an attempt to add exception handling to everything we fire off into
the background.
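A sketch of the pattern being added everywhere (the helper name and
logging here are illustrative):

    import logging

    from twisted.internet import defer

    logger = logging.getLogger(__name__)

    def fire_and_forget(desc, f, *args, **kwargs):
        """Start f(...) without waiting for it, but log any failure
        promptly instead of at garbage-collection time."""
        d = defer.maybeDeferred(f, *args, **kwargs)
        d.addErrback(
            lambda failure: logger.error(
                "Background process %r failed: %s", desc, failure,
            )
        )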
|
| | |
| | |
| | |
| | | |
Twisted 16.0 doesn't have addTimeout, so let's backport it.
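A minimal backport sketch using primitives that do exist in Twisted 16.0:

    def add_timeout_to_deferred(deferred, timeout, reactor):
        """Cancel `deferred` if it hasn't fired within `timeout` seconds."""
        delayed_call = reactor.callLater(timeout, deferred.cancel)

        def cancel_timeout(result):
            # the deferred fired first: stop the pending cancellation
            if delayed_call.active():
                delayed_call.cancel()
            return result

        deferred.addBoth(cancel_timeout)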
|
| |/
| |
| |
| | |
This doesn't feel like a wheel we need to reinvent.
|
| |\
| | |
| | | |
add __bool__ alias to __nonzero__ methods
|
| | |
| | |
| | |
| | | |
Signed-off-by: Adrian Tschira <nota@notafile.com>
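The fix is a one-line alias per class, since Python 2 consults
__nonzero__ and Python 3 consults __bool__ (illustrative class):

    class Result(object):
        def __init__(self, items):
            self.items = items

        def __nonzero__(self):      # truthiness hook on Python 2
            return bool(self.items)

        __bool__ = __nonzero__      # Python 3 looks for __bool__ instead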
|
| |\ \
| | | |
| | | | |
Replace Queue with six.moves.queue
|
| | |/
| | |
| | |
| | |
| | |
| | | |
and a six.range change which I missed the last time
Signed-off-by: Adrian Tschira <nota@notafile.com>
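Both are mechanical changes via six.moves:

    # Python 2's Queue module became queue on Python 3:
    from six.moves import queue

    # and range covers Python 2's xrange:
    from six.moves import range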
|
| |\ \
| | |/
| |/| |
Refactor ResponseCache usage
|
| | |
| | |
| | |
| | |
| | | |
Turns out that ObservableDeferred.observe doesn't return a deferred if the
result is already completed. Fix handling and improve documentation.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Adds a `.wrap` method to ResponseCache which wraps up the boilerplate of a
(get, set) pair, and then use it throughout the codebase.
This will be largely non-functional, but does include the following functional
changes:
* federation_server.on_context_state_request: drops use of _server_linearizer
which looked redundant and could cause incorrect cache misses by yielding
between the get and the set.
* RoomListHandler.get_remote_public_room_list(): fixes logcontext leaks
* the wrap function includes some logging. I'm hoping this won't be too noisy
on production.
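Typical before/after for the boilerplate being wrapped (a sketch; the
handler name is illustrative):

    # Before: every caller spells out the (get, set) pair:
    result = self.response_cache.get(key)
    if result is None:
        result = self.response_cache.set(
            key, self._handle_request(request)
        )
    return make_deferred_yieldable(result)

    # After: one call does the lookup, the fill, and the logcontexts:
    return self.response_cache.wrap(key, self._handle_request, request)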
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This reverts commit 9fbe70a7dc3afabfdac176ba1f4be32dd44602aa.
It turns out that sortedcontainers.SortedDict is not an exact match for
blist.sorteddict; in particular, `popitem()` removes things from the opposite
end of the dict.
This is trivial to fix, but I want to add some unit tests, and potentially
give it some more thought, before we do so.
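The difference in behaviour, sketched with sortedcontainers:

    from sortedcontainers import SortedDict

    sd = SortedDict({1: "a", 2: "b", 3: "c"})
    sd.popitem()     # (3, 'c') -- pops from the largest-key end
    sd.popitem(0)    # (1, 'a') -- index 0 pops the smallest key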
|
| |\
| | |
| | | |
Add metrics for ResponseCache
|
| | | |
|
| |\ \
| | | |
| | | | |
Document the behaviour of ResponseCache
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
it looks like everything that uses ResponseCache expects to have to
`make_deferred_yieldable` its results. It's debatable whether that is the best
approach, but let's document it for now to avoid further confusion.
|
| | |/
| |/|
| | |
| | |
| | |
| | |
| | |
| | | |
This commit drop-in replaces blist with SortedContainers. They are
written in pure Python, so they work with pypy, but perform as well as
native implementations, at least in a couple of benchmarks:
http://www.grantjenks.com/docs/sortedcontainers/performance.html
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
We aren't ready to release this yet, so I'm reverting it for now.
This reverts commit d1679a4ed7947b0814e0f2af9b888a16c588f1a1, reversing
changes made to e089100c6231541c446e37e157dec8feed02d283.
|
| |\ \
| | | |
| | | | |
Improve database cache performance
|
| | | |
| | | |
| | | |
| | | |
| | | | |
Fixes an issue where a cache invalidation would invalidate *all* pending
entries, rather than just the entry that we intended to invalidate.
|
| |/ / |
|
| |/
| |
| |
| |
| | |
using json.dumps with custom options requires us to create a new JSONEncoder on
each call. It's more efficient to create one upfront and reuse it.
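The pattern, in stdlib terms:

    import json

    # json.dumps(obj, sort_keys=True, ...) constructs a fresh
    # JSONEncoder on every call; building one up front amortises it:
    _ENCODER = json.JSONEncoder(sort_keys=True, separators=(",", ":"))

    def encode_json(obj):
        return _ENCODER.encode(obj)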
|
| |
| |
| |
| | |
fixes https://github.com/matrix-org/synapse/issues/2043 and https://github.com/matrix-org/synapse/issues/2029
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The state cache bases its size on the sum of the size of entries. The
size of the entry is calculated once on insertion, so it is important
that the size of entries does not change.
The DictionaryCache modified the entries size, which caused the state
cache to incorrectly think it was smaller than it actually was.
|
|/
|
|
|
| |
I think we've now fixed enough of these that the rest can be logged at
warning.
|
|
|
|
|
| |
It annoys me that we create temporary function objects when there's really no
need for it. Let's factor the gubbins out of preserve_fn and start using it.
|
|
|
|
|
| |
... because (a) it's actually simpler (b) it might be marginally more
performant?
|
| |
|
|
|
|
|
|
| |
Add federation_domain_whitelist
Gives a way to restrict which domains your HS is allowed to federate with.
Useful mainly for gracefully preventing a private but internet-connected HS
from trying to federate to the wider public Matrix network.
|
|\
| |
| | |
add registrations_require_3pid and allow_local_3pids
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* [ ] split config options into allowed_local_3pids and registrations_require_3pid
* [ ] simplify and comment logic for picking registration flows
* [ ] fix docstring and move check_3pid_allowed into a new util module
* [ ] use check_3pid_allowed everywhere
@erikjohnston PTAL
|
|\ \
| | |
| | | |
Add decent impl of a FileConsumer
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Twisted core doesn't have a general purpose one, so we need to write one
ourselves.
Features:
- All writing happens in background thread
- Supports both push and pull producers
- Push producers get paused if the consumer falls behind
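A skeleton of the consumer side of that design (hedged: the real
implementation hands writes to a background thread and tracks a queue;
this sketch only shows the producer handshake):

    from zope.interface import implementer
    from twisted.internet import interfaces

    @implementer(interfaces.IConsumer)
    class FileConsumer(object):
        def __init__(self, file_obj):
            self._file = file_obj
            self._producer = None
            self._streaming = False

        def registerProducer(self, producer, streaming):
            self._producer = producer
            self._streaming = streaming  # True for push producers
            if not streaming:
                # pull producers must be asked for each chunk
                producer.resumeProducing()

        def unregisterProducer(self):
            self._producer = None

        def write(self, data):
            # the real version writes on a background thread and calls
            # self._producer.pauseProducing() if a push producer gets
            # too far ahead of the disk
            self._file.write(data)
            if not self._streaming and self._producer is not None:
                self._producer.resumeProducing()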
|
| |/
|/|
| |
| | |
... which I introduced in #2785
|
| |
| |
| |
| |
| |
| | |
For each request, track the amount of time spent waiting for a db
connection. This entails adding it to the LoggingContext and we may as well add
metrics for it while we are passing.
|
| |
| |
| |
| | |
... to reduce the amount of floating-point foo we do.
|
|/
|
|
|
|
|
|
| |
It turns out that the only thing we use the __dict__ of LoggingContext for is
`request`, and given we create lots of LoggingContexts and then copy them every
time we do a db transaction or log line, using the __dict__ seems a bit
redundant. Let's try to optimise things by making the request attribute
explicit.
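A sketch of the optimisation (Synapse's real class has more fields;
__slots__ is the point):

    class LoggingContext(object):
        # __slots__ removes the per-instance __dict__, so copying a
        # context only has to carry the attributes we actually use
        __slots__ = ["request", "previous_context"]

        def __init__(self, request=None):
            self.request = request
            self.previous_context = None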
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In order to circumvent the number of duplicate foo:count metrics increasing
without bounds, it's time for a rearrangement.
The following are all deprecated, and replaced with synapse_util_metrics_block_count:
synapse_util_metrics_block_timer:count
synapse_util_metrics_block_ru_utime:count
synapse_util_metrics_block_ru_stime:count
synapse_util_metrics_block_db_txn_count:count
synapse_util_metrics_block_db_txn_duration:count
The following are all deprecated, and replaced with synapse_http_server_response_count:
synapse_http_server_requests
synapse_http_server_response_time:count
synapse_http_server_response_ru_utime:count
synapse_http_server_response_ru_stime:count
synapse_http_server_response_db_txn_count:count
synapse_http_server_response_db_txn_duration:count
The following are renamed (the old metrics are kept for now, but deprecated):
synapse_util_metrics_block_timer:total -> synapse_util_metrics_block_time_seconds
synapse_util_metrics_block_ru_utime:total -> synapse_util_metrics_block_ru_utime_seconds
synapse_util_metrics_block_ru_stime:total -> synapse_util_metrics_block_ru_stime_seconds
synapse_util_metrics_block_db_txn_count:total -> synapse_util_metrics_block_db_txn_count
synapse_util_metrics_block_db_txn_duration:total -> synapse_util_metrics_block_db_txn_duration_seconds
synapse_http_server_response_time:total -> synapse_http_server_response_time_seconds
synapse_http_server_response_ru_utime:total -> synapse_http_server_response_ru_utime_seconds
synapse_http_server_response_ru_stime:total -> synapse_http_server_response_ru_stime_seconds
synapse_http_server_response_db_txn_count:total -> synapse_http_server_response_db_txn_count
synapse_http_server_response_db_txn_duration:total -> synapse_http_server_response_db_txn_duration_seconds
|
| |
|
|
|
|
|
| |
Both of these functions are known to leak logcontexts. Replace the remaining
calls to them and kill them off.
|
|
|
|
|
|
|
|
|
| |
Add some logging to the Limiter in a similar spirit to the Linearizer, to help
debug issues.
Also fix a logcontext leak.
Also refactor slightly to avoid throwing exceptions.
|
|
|
|
| |
E741 says "do not use variables named ‘l’, ‘O’, or ‘I’".
|
|
|
|
| |
what could possibly go wrong
|
|
|
|
|
|
|
|
| |
* don't use preserve_context_over_deferred, which is known broken.
* remove a redundant preserve_fn.
* add/improve some comments
|
|\
| |
| | |
Fix stackoverflow and logcontexts from linearizer
|
| |
| |
| |
| |
| |
| |
| | |
1. make it not blow out the stack when there are more than 50 things waiting
for a lock. Fixes https://github.com/matrix-org/synapse/issues/2505.
2. Make it not mess up the log contexts.
|
| |
| |
| |
| | |
make sure we have the relevant fields before we try to log them.
|
|/
|
|
|
|
|
|
|
| |
This is a bit of an experimental change at this point; the idea is to see if it
helps us track down where our stack overflows are coming from by logging the
stack when the exception was caught and turned into a Failure. (We'll also need
https://github.com/richvdh/twisted/commit/edf27044200e74680ea67c525768e36dc9d9af2b).
If we deploy this, we'll be able to enable it via the log config yaml.
|
|
|
|
| |
Avoid preserve_context_over_deferred, which is broken.
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
| |
This is because pruning them was a significant performance drain on
matrix.org
|
| |
|
|
|
|
|
|
|
|
| |
Occasionally has_any_entity_changed would throw the error: "Set changed
size during iteration" when taking the max of the `sorteddict`. While
it's uncertain how that happens, it's quite inefficient to iterate over
the entire dict anyway, so we change to using the more traditional
`bisect_*` functions.
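The shape of the fix (a sketch, assuming `_cache` is a sorteddict keyed
by stream position):

    def has_any_entity_changed(self, stream_pos):
        # instead of max(self._cache) -- which iterates every key and
        # can race with concurrent mutation -- bisect to the boundary:
        # any key after the insertion point changed after stream_pos
        return self._cache.bisect_right(stream_pos) < len(self._cache)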
|
| |
|
| |
|
|
|
|
|
|
| |
We updated the normal cache descriptors to handle caches with a single
argument specially, so that the key isn't a 1-tuple. We need to update
the cache list descriptor to be aware of this.
|
|
|
|
|
|
|
|
|
| |
Most of the time was spent copying a dict to filter out sentinel values
that indicated that keys did not exist in the dict. The sentinel values
were added to ensure that we cached the non-existence of keys.
By updating DictionaryCache to keep track of which keys were known to
not exist itself we can remove a dictionary copy.
|
|
|
|
|
| |
Otherwise the hit ratio of plain get_events gets completely skewed by
calls to get_joined_users* functions.
|
| |
|
|
|
|
|
|
| |
Call `super` correctly, so that we correctly initialise the `errcode` field.
Fixes https://github.com/matrix-org/synapse/issues/2179.
|
|
|
|
|
|
|
|
| |
The _get_joined_users_from_context cache stores a mapping from user_id
to avatar_url and display_name. Instead of storing those in a dict,
store them in a namedtuple as that uses much less memory.
We also try converting the string to ascii to further reduce the size.
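The gist of the change (ProfileInfo here is illustrative):

    from collections import namedtuple

    # a namedtuple has no per-instance __dict__, so one entry costs
    # far less memory than {"avatar_url": ..., "display_name": ...}
    ProfileInfo = namedtuple("ProfileInfo", ("avatar_url", "display_name"))

    profile = ProfileInfo(avatar_url=None, display_name="Alice")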
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Currently the cache descriptors store deferreds rather than raw values;
this is a simple way of triggering only one database hit and sharing the
result if two callers attempt to get the same value.
However, there are a few caches that simply store a mapping from string
to string (or int). These caches can have a large number of entries,
under the assumption that each entry is small. However, the size of a
deferred (specifically the size of ObservableDeferred) is significantly
larger than that of the raw value, 2kb vs 32b.
This PR therefore changes the cache descriptors to store the raw values
rather than the deferreds.
As a side effect, cached storage functions now return either a deferred
or the actual value, as the cached list descriptor already does. This is
fine as we always end up just yield'ing on the returned value
eventually, which handles that case correctly.
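The caller-side consequence, sketched (inlineCallbacks happily yields
plain values as well as deferreds; store.some_cached_function is a
stand-in name):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def lookup(store, key):
        # may now be a raw value (hit on a raw cached entry) or a
        # Deferred (miss / lookup in flight); yield handles both
        value = yield store.some_cached_function(key)
        defer.returnValue(value)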
|
| |
|
|
|
|
|
| |
`preserve_fn` is no longer used as a decorator anywhere, so we can safely fix a
fixme therein.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
| |
This is because getcallargs recomputes the getargspec, amongst other
things, which we don't need to do as it's already been done
|
| |
|
|
|
|
|
|
|
| |
The cache wrappers had a habit of leaking the logcontext into the reactor while
the lookup function was running, and then not restoring it correctly when the
lookup function had completed. It's all the fault of
`preserve_context_over_{fn,deferred}` which are basically a bit broken.
|
|\
| |
| | |
push federation retry limiter down to matrixfederationclient
|
| |
| |
| |
| |
| | |
Add a param to the federation client which lets us ignore historical backoff
data for federation queries, and set it for a handful of operations.
|
| |
| |
| |
| |
| | |
rather than having to instrument everywhere we make a federation call,
make the MatrixFederationHttpClient manage the retry limiter.
|
|\ \
| | |
| | | |
Fix time_bound_deferred to throw the right exception
|
| |/
| |
| |
| |
| |
| | |
Due to a failure to instantiate DeferredTimedOutError, time_bound_deferred
would throw a CancelledError when the deferred timed out, which was rather
confusing.
|
| |
| |
| |
| |
| | |
Use preserve_fn to correctly manage the logcontexts around things we don't want
to yield on.
|
|/
|
|
|
|
|
|
|
| |
The `@cached` decorator on `KeyStore._get_server_verify_key` was missing
its `num_args` parameter, which meant that it was returning the wrong key for
any server which had more than one recorded key.
By way of a fix, change the default for `num_args` to be *all* arguments. To
implement that, factor out a common base class for `CacheDescriptor` and `CacheListDescriptor`.
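Illustrative shape of the bug and the fix, using Synapse's @cached
decorator (treat the snippet as a sketch):

    from synapse.util.caches.descriptors import cached

    class KeyStore(object):
        # Before, the decorator did not key the cache on all of the
        # arguments, so every key_id for a server mapped to one entry.
        # Keying on both arguments (now the default) fixes that:
        @cached(num_args=2)
        def _get_server_verify_key(self, server_name, key_id):
            ...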
|
|\
| |
| | |
Logcontext docs
|
| | |
|
|/
|
|
|
|
|
|
| |
Fix a bug in ``logcontext.preserve_fn`` which made it leak context into the
reactor, and add a test for it.
Also, get rid of ``logcontext.reset_context_after_deferred``, which tried to do
the same thing but had its own, different, set of bugs.
|
|\
| |
| | |
Queue up federation PDUs while a room join is in progress
|
| |
| |
| |
| |
| | |
to correctly reset the context when we fire off a deferred we aren't going to
wait for.
|
| |
| |
| |
| |
| |
| |
| |
| | |
... and update some docstrings to correctly reflect the types being used.
get_new_device_msgs_for_remote can return a long under some circumstances,
which was being stored in last_device_list_stream_id_by_dest, and was then
upsetting things on the next loop.
|
|/
|
|
| |
Changes from https://github.com/matrix-org/synapse/pull/1971
|
| |
|
| |
|
| |
|
|
|
|
| |
This file post-dates OM
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
Instead of calculating the size of the cache repeatedly, which can take
a long time now that it can use a callback, instead cache the size and
update that on insertion and deletion.
This requires changing the cache descriptors to have two caches, one for
pending deferreds and the other for the actual values. There's no reason
to evict from the pending deferreds as they won't take up any more
memory.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The debug 'full_twisted_stacktraces' flag caused synapse to rewrite
twisted deferreds to always fire the callback on the next reactor tick.
This was to force the deferred to always store the stacktraces on
exceptions, and thus be more likely to have a full stacktrace when it
reaches the final error handlers and gets printed to the logs.
Dynamically rewriting things is generally bad, and in particular this
change violates assumptions of various bits of Twisted. This wouldn't
necessarily be so bad, but it turns out this option has been turned on
on some production servers.
Turning the option on can cause e.g. #1778.
For now, let's just entirely nuke this option.
|
| |
|
| |
|
| |
|
| |
|
|\
| |
| | |
Limit the number of events that can be created on a given room concurrently
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
|/ |
|
|
|
|
|
|
| |
This only makes a difference for versions of ldap3 before 1.0, but (a)
it's best to be explicit and (b) there are distributions that package
ancient versions of ldap3 (e.g. Debian).
|
|
|
|
|
|
| |
Allows delegating the password auth to an external module. This also
moves the LDAP auth to using this system, allowing it to be removed from
the synapse tree entirely in the future.
|
| |
|
|
|
|
| |
effective
|
| |
|
| |
|
|\
| |
| | |
Add more Measure blocks
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
|/ |
|
| |
|
| |
|
| |
|
|\
| |
| | |
Cache federation state responses
|
| | |
|
| | |
|
| |
| |
| |
| | |
Avoids insane pushes like "Bob invited you to invite from Bob".
|
|/ |
|
| |
|
|
|
|
| |
Fixes https://github.com/vector-im/vector-web/issues/1654
|
|
|
|
|
|
|
|
|
|
| |
The only place that was observed was to set the profile. I've made it
so that the profile is set within store.register in the same transaction
that creates the user.
This required some slight changes to the registration code for upgrading
guest users, since it previously relied on the distributor swallowing errors
if the profile already existed.
|
|\ |
|
| | |
|
| | |
|
| | |
|
|/
|
|
|
|
| |
We change it so that each cache has an individual CacheMetric, instead
of having one global CacheMetric. This means that when a cache tries to
increment a counter it does not need to go through so many indirections.
|
| |
|
| |
|
| |
|
|\ |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| | |
Include name of the person we're sending to and add summary text at the top giving an overview of what's happened.
|
| | |
|
| | |
|
| |
| |
| |
| | |
So names of people in a room are given in order
|
| | |
|
|/
|
|
| |
Mostly WIP: porting the room name calculation logic from the web client so
that the room names in the email mirror those in the clients.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
* Remove some unused functions
* get_room_events_stream is only used in tests
* is_exclusive_room might actually be something we want
|
|
|
|
|
|
|
| |
Move the functions inside the distributor and import them
where needed. This reduces duplication and makes it possible
for flake8 to detect when the functions aren't used in a
given file.
|
| |
|
| |
|
| |
|
| |
|
| |
|