| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Remove `on_timeout_cancel` from `timeout_deferred`
The `on_timeout_cancel` param to `timeout_deferred` wasn't always called on a
timeout (in particular if the canceller raised an exception), so it was
unreliable. It was also only used in one place, and to be honest it's easier to
do what it does a different way.
* Fix handling of connection timeouts in outgoing http requests
Turns out that if we get a timeout during connection, then a different
exception is raised, which wasn't always handled correctly.
To fix it, catch the exception in SimpleHttpClient and turn it into a
RequestTimedOutError (which is already a documented exception).
Also add a description to RequestTimedOutError so that we can see which stage
it failed at.
* Fix incorrect handling of timeouts reading federation responses
This was trapping the wrong sort of TimeoutError, so was never being hit.
The effect was relatively minor, but we should fix this so that it does the
expected thing.
* Fix inconsistent handling of `timeout` param between methods
`get_json`, `put_json` and `delete_json` were applying a different timeout to
the response body than `post_json`; bring them into line and test.
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Erik Johnston <erik@matrix.org>
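A minimal sketch of the conversion described in the second point above. The
exception types caught here (defer.TimeoutError, ConnectingCancelledError) and
the wrapper itself are assumptions for illustration, not the actual
SimpleHttpClient code; only RequestTimedOutError comes from the description.

    from twisted.internet import defer, error

    class RequestTimedOutError(Exception):
        """Documented timeout exception; the message records which stage failed."""

    async def _send_with_timeout(send_request):
        try:
            return await send_request()
        except (defer.TimeoutError, error.ConnectingCancelledError) as e:
            # Connection timeouts surface as a different exception type, so
            # normalise them into the documented RequestTimedOutError.
            raise RequestTimedOutError("Timed out connecting to remote server") from e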
|
|
|
|
|
|
|
| |
This converts calls like super(Foo, self) -> super().
Generated with:
sed -i "" -Ee 's/super\([^\(]+\)/super()/g' **/*.py
|
| |
|
|
|
|
|
| |
slots use less memory (and attribute access is faster) while slightly
limiting the flexibility of the class attributes. This focuses on objects
which are instantiated "often" and for short periods of time.
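For illustration, a hedged sketch of what opting a class into slots looks like
with attrs (the class and attribute names are made up):

    import attr

    @attr.s(slots=True)
    class ExampleEntry:
        # slots=True stores attributes in fixed slots rather than a per-instance
        # __dict__: less memory and faster attribute access, at the cost of not
        # being able to add arbitrary attributes after construction.
        key = attr.ib()
        value = attr.ib(default=None)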
|
|\
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Synapse 1.20.0rc3 (2020-09-11)
==============================
Bugfixes
--------
- Fix a bug introduced in v1.20.0rc1 where the wrong exception was raised when invalid JSON data is encountered. ([\#8291](https://github.com/matrix-org/synapse/issues/8291))
|
| | |
|
| |
| |
| |
| |
| | |
Removes the `user_joined_room` signal and stops calling it, since there are no observers.
Also cleans-up some other unused signals and related code.
|
| | |
|
|/
|
|
|
| |
Because it was imported from canonicaljson, the simplejson module was still
being used in some situations. After this change the stdlib json module is
consistently used throughout Synapse.
|
| |
|
|
|
| |
This requires adding a mypy plugin to fiddle with the type signatures a bit.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
Closes: https://github.com/matrix-org/synapse/issues/6766
Equivalent Sydent PR: https://github.com/matrix-org/sydent/pull/309
I believe it's now time to remove the extra allowed `:` from `client_secret` parameters.
|
| |
|
| |
|
| |
|
| |
|
|
|
| |
This solves the problem that the first few lines are logged twice on matrix.org. Hopefully the comments explain it.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This has long been something I've wanted to do. Basically the `Daemonize` code
is both too flexible and not flexible enough, in that it offers a bunch of
features that we don't use (changing UID, closing FDs in the child, logging to
syslog) and doesn't offer a bunch that we could do with (redirecting stdout/err
to a file instead of /dev/null; having the parent not exit until the child is
running).
As a first step, I've lifted the Daemonize code and removed the bits we don't
use. This should be a non-functional change. Fixing everything else will come
later.
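The lifted code boils down to the classic double-fork daemonisation dance; a
condensed sketch (not the literal code that was imported) looks roughly like:

    import os
    import sys

    def daemonize():
        # First fork: the parent returns to the shell immediately.
        if os.fork() > 0:
            sys.exit(0)
        os.setsid()  # become a session leader, detached from the terminal
        # Second fork: make sure we can never reacquire a controlling terminal.
        if os.fork() > 0:
            sys.exit(0)
        # Redirect the standard streams to /dev/null (the part we would like to
        # eventually point at a file instead).
        devnull = os.open(os.devnull, os.O_RDWR)
        for fd in (0, 1, 2):
            os.dup2(devnull, fd)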
|
| |
|
| |
|
| |
|
|
|
| |
fixes #7016
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
While working on https://github.com/matrix-org/synapse/issues/5665 I found myself digging into the `Ratelimiter` class and seeing that it was both:
* Rather undocumented, and
* causing a *lot* of config checks
This PR attempts to refactor and comment the `Ratelimiter` class, as well as encourage config file accesses to only be done at instantiation.
Best to be reviewed commit-by-commit.
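A hedged sketch of the shape this encourages: the limits are read from config
once, when the Ratelimiter is constructed, rather than on every call (the
attribute and parameter names are illustrative, not the real implementation):

    class Ratelimiter:
        def __init__(self, clock, rate_hz, burst_count):
            # The caller looks these up from the config at instantiation time;
            # can_do_action() never touches the config object itself.
            self._clock = clock
            self._rate_hz = rate_hz
            self._burst_count = burst_count
            self._actions = {}  # key -> (pending_count, last_update_ts)

        def can_do_action(self, key):
            now = self._clock.time()
            count, last_ts = self._actions.get(key, (0.0, now))
            # Leak actions at rate_hz; refuse once burst_count is reached.
            count = max(0.0, count - (now - last_ts) * self._rate_hz)
            if count >= self._burst_count:
                return False
            self._actions[key] = (count + 1.0, now)
            return True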
|
|
|
|
|
|
| |
Instead of storing and sending an ACK synchronously for every single row we
send, do it asynchronously while batching up updates.
|
|
|
|
| |
This is already correctly done when we instantiate the cache, but wasn't
when it got reloaded (which always happens at least once on startup).
|
|
|
| |
`Failure()` is more cunning than `Failure(e)`.
|
| |
|
|
|
|
| |
this is a no-op on python 3.
|
|
|
|
| |
this is a no-op on python 3.
|
| |
|
|
|
|
| |
variables (#6391)
|
|
|
|
|
| |
Currently we copy `users_who_share_room` needlessly about three times,
which is expensive when the set is large (which it can easily be).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
First some background: StreamChangeCache is used to keep track of what "entities" have
changed since a given stream ID. So for example, we might use it to keep track of when the last
to-device message for a given user was received [1], and hence whether we need to pull any to-device messages from the database on a sync [2].
Now, it turns out that StreamChangeCache didn't support more than one thing being changed at
a given stream_id (this was part of the problem with #7206). However, it's entirely valid to send
to-device messages to more than one user at a time.
As it turns out, this did in fact work, because *some* methods of StreamChangeCache coped
ok with having multiple things changing on the same stream ID, and it seems we never actually
use the methods which don't work on the stream change caches where we allow multiple
changes at the same stream ID. But that feels horribly fragile, hence: let's update
StreamChangeCache to properly support this, and add some typing and some more tests while
we're at it.
[1]: https://github.com/matrix-org/synapse/blob/release-v1.12.3/synapse/storage/data_stores/main/deviceinbox.py#L301
[2]: https://github.com/matrix-org/synapse/blob/release-v1.12.3/synapse/storage/data_stores/main/deviceinbox.py#L47-L51
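Conceptually, the change means each stream ID maps to a *set* of entities
rather than a single entity. A simplified sketch (the real class also keeps a
sorted structure and an earliest-known stream position, elided here):

    class StreamChangeCache:
        """Tracks which entities have changed since a given stream position."""

        def __init__(self):
            self._cache = {}          # stream_pos -> set of entities
            self._entity_to_key = {}  # entity -> its latest stream_pos

        def entity_has_changed(self, entity, stream_pos):
            # Several entities may change at the same stream_pos (e.g. a
            # to-device message sent to multiple users), so accumulate them in
            # a set instead of overwriting the previous entry.
            self._cache.setdefault(stream_pos, set()).add(entity)
            self._entity_to_key[entity] = stream_pos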
|
|
|
|
|
|
| |
Other parts of the code (such as the StreamChangeCache) assume that there will
not be multiple changes with the same stream id.
This code was introduced in #7024, and I hope this fixes #7206.
|
|
|
|
| |
make sure we clear out all but one update for the user
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Pull Sentinel out of LoggingContext
... and drop a few unnecessary references to it
* Factor out LoggingContext.current_context
move `current_context` and `set_context` out to top-level functions.
Mostly this means that I can more easily trace what's actually referring to
LoggingContext, but I think it's generally neater.
* move copy-to-parent into `stop`
this really just makes `start` and `stop` more symmetric. It also means that it
behaves correctly if you manually `set_log_context` rather than using the
context manager.
* Replace `LoggingContext.alive` with `finished`
Turn `alive` into `finished` and make it a bit better defined.
|
|
|
|
| |
Ensure good comprehension hygiene using flake8-comprehensions.
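For example, the kind of rewrites flake8-comprehensions pushes for
(illustrative snippets, not lines from the diff):

    users = ["@alice:example.com", "@bob:example.com"]
    pairs = [("one", 1), ("two", 2)]

    # flagged: building a list only to turn it into a set / dict
    set([u.upper() for u in users])
    dict([(k, v) for k, v in pairs])

    # preferred: use the comprehension directly
    {u.upper() for u in users}
    {k: v for k, v in pairs}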
|
|
|
|
|
|
|
|
| |
A lot of the things we log at INFO are now a bit superfluous, so let's
make them DEBUG logs to reduce the amount we log by default.
Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
Co-authored-by: Brendan Abolivier <github@brendanabolivier.com>
|
| |
|
| |
|
|
|
|
|
|
| |
... since the whole response is huge.
We even need to break up the assertions, since kibana otherwise truncates them.
|
| |
|
|
|
|
|
| |
Some modules don't need any config, so having to define a `config` property
just to keep the loader happy is a bit annoying.
|
|
|
| |
The main point here is to make sure that the state returned by _get_state_in_room has been authed before we try to use it as state in the room.
|
| |
|
| |
|
|
|
|
|
| |
`Measure` incorrectly assumed that it was the only thing being done by the parent `LoggingContext`. For instance, during a "renew group attestations" operation, hundreds of `outbound_request` calls could take place in parallel, all using the same `LoggingContext`. This would mean that any resources used during *any* of those calls would be reported against *all* of them, producing wildly inaccurate results.
Instead, we now give each `Measure` block its own `LoggingContext` (using the parent `LoggingContext` mechanism to ensure that the log lines look correct and that the metrics are ultimately propagated to the top level for reporting against requests/background tasks).
|
| |
|
| |
|
| |
|
|
|
| |
Replace every instance of `logger.warn` with `logger.warning` as the former is deprecated.
|
| |
|
|
|
|
|
|
|
| |
This makes it easier to use in an async/await world.
Also fixes a bug where cache descriptors would occasionally return a raw
value rather than a deferred.
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
| |
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
| |
* type checking fixes
* changelog
|
|
|
|
|
|
|
|
|
| |
We have set the max retry interval to a value larger than a postgres or
sqlite int can hold, which caused exceptions when updating the
destinations table.
To fix postgres we need to change the column to a bigint, and for sqlite
we lower the max interval to 2**62 (which is still incredibly long).
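Illustratively, the new cap keeps the backed-off interval well inside what a
signed 64-bit (bigint) column can store (the helper below is a sketch, not the
actual code):

    # 2**62 ms is comfortably below the bigint limit of 2**63 - 1, and is still
    # an absurdly long retry interval (roughly 146 million years).
    MAX_RETRY_INTERVAL_MS = 2 ** 62

    def next_retry_interval(current_interval_ms, multiplier=2):
        return min(current_interval_ms * multiplier, MAX_RETRY_INTERVAL_MS)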
|
|\ |
|
| |
| |
| |
| | |
Track the time that a server started failing at, for general analysis purposes.
|
| |
| |
| |
| |
| |
| | |
Essentially the intention here is to end up blacklisting servers which never
respond to federation requests.
Fixes https://github.com/matrix-org/synapse/issues/5113.
|
| |
| |
| |
| | |
This was intended to introduce an element of jitter; instead it gave you a
30/60 chance of resetting to zero.
|
| |
| |
| |
| |
| |
| |
| | |
This is a redo of https://github.com/matrix-org/synapse/pull/5897 but with `id_access_token` accepted.
Implements [MSC2134](https://github.com/matrix-org/matrix-doc/pull/2134) plus Identity Service v2 authentication as per [MSC2140](https://github.com/matrix-org/matrix-doc/pull/2140).
Identity lookup-related functions were also moved from `RoomMemberHandler` to `IdentityHandler`.
|
| |
| |
| |
| | |
* remove some unused code
* make things which were constants into constants for efficiency and clarity
|
| |
| |
| |
| |
| | |
This reverts commit 71fc04069a5770a204c3514e0237d7374df257a8.
This broke 3PID invites as #5892 was required for it to work correctly.
|
| |
| |
| |
| |
| |
| |
| | |
Fixes https://github.com/matrix-org/synapse/issues/5861
Adds support for the v2 lookup API as defined in [MSC2134](https://github.com/matrix-org/matrix-doc/pull/2134). Currently this is only used for 3PID invites.
Sytest PR: https://github.com/matrix-org/sytest/pull/679
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This gives a bit of a grace period where we can attempt to refetch a
remote `well-known`, while still using the cached result if that fails.
Hopefully this will make the well-known resolution a bit more tolerant
of failures, rather than it immediately treating failures as "no result"
and caching that for an hour.
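Roughly, the behaviour described is the following (a sketch with invented
names and a made-up grace period; the real resolver has more states):

    import time

    WELL_KNOWN_GRACE_PERIOD = 10 * 60  # how long a stale result may be reused

    def get_well_known(cache, server_name, fetch):
        entry = cache.get(server_name)  # (result, expiry_ts) or None
        now = time.time()
        if entry and now < entry[1]:
            return entry[0]  # cached result is still fresh
        try:
            result, ttl = fetch(server_name)
        except Exception:
            # The refetch failed: rather than treating this as "no result" and
            # caching that for an hour, fall back to the stale entry if we are
            # still within the grace period.
            if entry and now < entry[1] + WELL_KNOWN_GRACE_PERIOD:
                return entry[0]
            raise
        cache[server_name] = (result, now + ttl)
        return result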
|
|/
|
|
|
|
|
|
|
|
|
|
|
| |
Fixes a bug where the default attribute maps were prioritised over
user-specified ones, resulting in incorrect mappings.
The problem is that if you call SPConfig.load() multiple times, it adds new
attribute mappers to a list. So by calling it with the default config first,
and then the user-specified config, we would always get the default mappers
before the user-specified mappers.
To solve this, let's merge the config dicts first, and then pass them to
SPConfig.
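In other words, something along these lines (the two config dicts are
placeholders; SPConfig comes from pysaml2):

    from saml2.config import SPConfig

    default_saml_config = {"attribute_map_dir": "/usr/share/saml_maps"}  # placeholder
    user_saml_config = {"attribute_map_dir": "/etc/synapse/saml_maps"}   # placeholder

    # Merge first (user-specified values win), then call load() exactly once,
    # so pysaml2 only ever sees a single set of attribute mappers.
    merged = {**default_saml_config, **user_saml_config}
    sp_config = SPConfig()
    sp_config.load(merged)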
|
| |
|
| |
|
|
|
|
|
|
|
| |
There was some inconsistent behaviour in the caching layer around how
exceptions were handled - particularly synchronously-thrown ones.
This seems to be most easily handled by pushing the creation of
ObservableDeferreds down from CacheDescriptor to the Cache.
|
|
|
|
|
|
| |
* Add a prometheus metric for active cache lookups.
* changelog
|
| |
|
|
|
|
|
|
|
|
|
| |
The version of a module isn't going to change over the lifetime of the
process (assuming no funky hot reloading is going on, which it isn't),
so let's just cache the result to avoid spawning lots of git
subprocesses.
Fixes #5672.
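A hedged sketch of the caching (the function name and git invocation are
illustrative):

    import functools
    import subprocess

    @functools.lru_cache(maxsize=None)
    def get_version_string(module_path):
        # The version cannot change while the process is running, so spawn the
        # git subprocess at most once per module and cache the answer.
        return (
            subprocess.check_output(
                ["git", "describe", "--tags", "--always"], cwd=module_path
            )
            .decode("ascii")
            .strip()
        )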
|
|
|
|
|
|
|
| |
- Put the default window_size back to 1000ms (broken by #5181)
- Make the `rc_federation` config actually do something
- fix an off-by-one error in the 'concurrent' limit
- Avoid creating an unused `_PerHostRatelimiter` object for every single
incoming request
|
|
|
|
|
|
|
|
| |
(#5617)
* Improve the backwards compatibility re-exports of synapse.logging.context.
* reexport logformatter too
|
| |
|
|
|
|
|
|
|
|
| |
* Fix 'utime went backwards' errors on daemonization.
Fixes #5608
* remove spurious debug
|
|
|
|
| |
Fixes #5602, #5603
|
| |
|
|
|
|
|
|
|
| |
Closes #4583
Does slightly less than #5045, which prevented a room from being upgraded multiple times, one after another. This PR still allows that, but just prevents two from happening at the same time.
Mostly just to mitigate the fact that servers are slow and it can take a moment for the room upgrade to actually complete. We don't want people sending another request to upgrade the room when really they just thought the first didn't go through.
|
|
|
|
|
| |
Sentry will catch the errors if they happen, so that should be good enough, and
won't make things explode if we hit the error condition.
|
|\ |
|
| | |
|
|/
|
|
| |
Check that our clocks go forward.
|
|
|
| |
Fixes a regression introduced in #5335.
|
| |
|
| |
|
| |
|
|\
| |
| | |
Allow client event serialization to be async
|
| |
| |
| | |
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
|
| | |
|
|/ |
|
|\ |
|
| | |
|
| | |
|
|/
|
| |
Avoid sending syntax errors from the manhole to sentry.
|
| |
|
|\
| |
| | |
Implement workaround for login error.
|
| |
| |
| |
| | |
Signed-off-by: Robert Jacob <xperimental@solidproject.de>
|
|/ |
|
| |
|
|
|
|
| |
This is a bit of a half-assed effort at fixing https://github.com/matrix-org/synapse/issues/4252. Fundamentally the right answer is to drop support for Python 2.
|
|\
| |
| |
| | |
erikj/alias_disallow_list
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Wrap calls to deferToThread() in a thing which uses a child logcontext to
attribute CPU usage to the right request.
While we're in the area, remove the logcontext_tracer stuff, which is never
used, and afaik doesn't work.
Fixes #4064
|
| |
| |
| |
| | |
on py3) (#4068)
|
| | |
|
| | |
|
| | |
|
|/ |
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
If a looping call function errors, then it kills the loop entirely.
Currently it throws away the exception logs, so we should make it
actually log them.
Fixes #3929
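The fix amounts to wrapping the looping-call function so exceptions are logged
instead of silently killing the loop; a sketch (the wrapper name is invented):

    import logging

    logger = logging.getLogger(__name__)

    def log_failures_in_looping_call(f):
        def wrapped(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except Exception:
                # Log the traceback and swallow the exception so the
                # LoopingCall keeps running rather than dying silently.
                logger.exception("Error in looping call")
        return wrapped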
|
| |
|
| |
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
transactions (#3959)
when processing incoming transactions, it can be hard to see what's going on,
because we process a bunch of stuff in parallel, and because we may end up
recursively working our way through a chain of three or four events.
This commit creates a way to use logcontexts to add the relevant event ids to
the log lines.
|
|\| |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
It used to try and produce an estimate, which was sometimes negative.
This caused metrics to be sad, so let's always just calculate it from
scratch.
(This appears to have been a longstanding bug, but one which has been made more
of a problem by #3932 and #3933).
(This was originally done by Erik as part of #3933. I'm cherry-picking it
because really it's a fix in its own right)
|
| |
| |
| |
| |
| |
| | |
It used to try and produce an estimate, which was sometimes negative.
This caused metrics to be sad, so let's always just calculate it from
scratch.
|
| |
| |
| |
| | |
Hopefully helps with #3931
|
|/ |
|
|
|
|
|
|
|
|
| |
ExpiringCache required that `start()` be called before it would actually
start expiring entries. A number of places didn't do that.
This PR removes `start` from ExpiringCache, and automatically starts
the background reaping process on creation instead.
|
|
|
|
|
|
|
|
|
|
| |
Let's try to rationalise the logging that happens when we are processing an
incoming transaction, to make it easier to figure out what is going wrong when
they take ages. In particular:
- make everything start with a [room_id event_id] prefix
- make sure we log a warning when catching exceptions rather than just turning
them into other, more cryptic, exceptions.
|
| |
|
| |
|
|
|
|
|
|
|
| |
The existing deferred timeout helper function (and the one in Twisted)
suffers from a bug when a deferred's canceller throws an exception (#3842).
The new helper function doesn't suffer from this problem.
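Schematically, the new helper resolves a fresh deferred first and only then
attempts cancellation, so a broken canceller cannot leave the caller hanging.
A condensed sketch (not the real implementation):

    from twisted.internet import defer
    from twisted.python import failure

    def timeout_deferred(deferred, timeout, reactor):
        new_d = defer.Deferred()
        timed_out = [False]

        def time_it_out():
            timed_out[0] = True
            # Resolve new_d first: the original deferred's canceller may throw,
            # and that must not prevent the caller from getting a result.
            new_d.errback(defer.TimeoutError("Timed out after %ss" % (timeout,)))
            try:
                deferred.cancel()
            except Exception:
                pass

        delayed_call = reactor.callLater(timeout, time_it_out)

        def convert(result):
            if timed_out[0]:
                return  # the timeout path already resolved new_d
            delayed_call.cancel()
            if isinstance(result, failure.Failure):
                new_d.errback(result)
            else:
                new_d.callback(result)

        deferred.addBoth(convert)
        return new_d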
|
|
|
|
|
| |
Turns out deferred.cancel sometimes throws, so we do that last to ensure
that we always do resolve the new deferred.
|
|
|
|
| |
This is an attempt to mitigate #3842 by adding yet-another-timeout
|
| |
|
|
|
|
|
| |
Newer versions of openssh client refuse to connect to the old key due to
its length.
|
|
|
|
|
| |
This fixes bugs introduced in #3700, by making sure that we behave sanely
when an incoming connection is closed before the headers are read.
|
|
|
|
|
| |
Make the logcontext filter not explode if it somehow ends up with a logcontext
of None, since that infinite-loops the whole logging system.
|
| |
|
|\ |
|
| |
| |
| |
| |
| |
| | |
Turns out that cancellation of inlineDeferreds didn't really work properly
until Twisted 18.7. This commit refactors Linearizer.queue to avoid
inlineCallbacks.
|
|/ |
|
| |
|
| |
|
|
|
|
|
| |
Because it was complicated and annoyed me. I suspect this will be more
efficient too.
|
|
|
|
|
|
|
|
|
| |
It turns out that looping_call does check the deferred returned by its
callback, and (at least in the case of client_ips), we were relying on this,
and I broke it in #3604.
Update run_as_background_process to return the deferred, and make sure we
return it to clock.looping_call.
|
| |
|
|
|
|
|
| |
Linearizer was effectively a Limiter with max_count=1, so rather than
maintaining two sets of code, let's combine them.
|
|
|
|
|
| |
* give them names, to improve logging
* use a deque rather than a list for efficiency
|
|
|
|
| |
Fixes #3570
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
This is more involved than it might otherwise be, because the current
implementation just drops its logcontexts and runs everything in the sentinel
context.
It turns out that we aren't actually using a bunch of the functionality here
(notably suppress_failures and the fact that Distributor.fire returns a
deferred), so the easiest way to fix this is actually by simplifying a bunch of
code.
|
|
|
|
|
|
|
|
| |
This fixes #3518, and ensures that we get useful logs and metrics for lots of
things that happen in the background.
(There are certainly more things that happen in the background; these are just
the common ones I've found running a single-process synapse locally).
|
| |
|
|
|
|
|
|
|
|
| |
The get_entities_changed function was changed to return all changed
entities since the given stream position, rather than only those changed
from a given list of entities. This resulted in the function incorrectly
returning large numbers of entities that, for example, caused large
increases in database usage.
|
|\
| |
| | |
Don't return unknown entities in get_entities_changed
|
| |
| |
| |
| |
| |
| |
| |
| | |
The stream cache keeps track of all entities that have changed since
a particular stream position, so get_entities_changed does not need to
return unknown entities when given a larger stream position.
This makes it consistent with the behaviour of has_entity_changed.
|
|/
|
|
|
|
|
|
| |
popitem removes the *most recent* item by default [1]. We want the oldest.
Fixes #3524
[1]: https://docs.python.org/2/library/collections.html#collections.OrderedDict.popitem
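The fix is to pass last=False (per the linked docs), e.g.:

    from collections import OrderedDict

    d = OrderedDict([("oldest", 1), ("newest", 2)])
    d.popitem(last=False)  # -> ("oldest", 1): evicts the oldest entry
    # d.popitem()          # default last=True would remove ("newest", 2)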
|
|
|
|
|
|
|
|
|
|
|
| |
This line shows up as about 5% of cpu time on a synchrotron:
not_known_entities = set(entities) - set(self._entity_to_key)
Presumably the problem here is that _entity_to_key can be largeish, and
building a set for its keys every time this function is called is slow.
Here we rewrite the logic to avoid building so many sets.
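The reworked check can be pictured like this (a simplified sketch: the real
cache also tracks the earliest known stream position and uses a sorted dict):

    def get_entities_changed(entity_to_key, entities, stream_pos):
        # entity_to_key maps entity -> stream position of its latest change.
        # Test each requested entity against that map directly, rather than
        # building set(entity_to_key) (and several other sets) on every call.
        # Entities we know nothing about are conservatively treated as changed.
        return {
            entity
            for entity in entities
            if entity_to_key.get(entity, stream_pos + 1) > stream_pos
        }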
|
|
|
|
|
| |
Let's try to include time spent in the DB threads in the per-request/block cpu
usage metrics.
|
|
|
|
|
| |
Factor out the resource usage tracking out to a separate object, which can be
passed around and copied independently of the logcontext itself.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
When _get_state_for_groups is given a wildcard filter, just do a complete
lookup. Hopefully this will give us the best of both worlds by not filling up
the ram if we only need one or two keys, but also making the cache still work
for the federation reader usecase.
|
|\
| |
| | |
Log number of events fetched from DB
|
| |
| |
| |
| | |
so that we can stub it for the sentinel and not have a billion failing UTs
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
When we finish processing a request, log the number of events we fetched from
the database to handle it.
[I'm trying to figure out which requests are responsible for large amounts of
event cache churn. It may turn out to be more helpful to add counts to the
prometheus per-request/block metrics, but that is an extension to this code
anyway.]
|
|/ |
|
| |
|
| |
|
| |
|
|
|
|
| |
they're not meant to be lazy (#3307)
|
|\
| |
| | |
remaining isinstance fixes
|
| | |
|
| | |
|
| |
| |
| |
| | |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| | |
|
| | |
|
| | |
|
|\| |
|
| |\
| | |
| | | |
Misc Python3 fixes
|
| | |
| | |
| | |
| | | |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| |\ \
| | | |
| | | | |
Add batch_iter to utils
|
| | |/
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
There's a frequent idiom I noticed where an iterable is split up into a
number of chunks/batches. Unfortunately that method does not work with
iterators like dict.keys() in python3. This implementation works with
iterators.
Signed-off-by: Adrian Tschira <nota@notafile.com>
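A helper along these lines matches the description (a sketch; it works on
arbitrary iterators because it only ever islices them):

    from itertools import islice

    def batch_iter(iterable, size):
        """Yield tuples of at most `size` items, working for any iterator
        (e.g. dict.keys() on Python 3), not just sliceable sequences."""
        sourceiter = iter(iterable)
        # iter(callable, sentinel): keep calling until the empty tuple appears.
        return iter(lambda: tuple(islice(sourceiter, size)), ())

    # e.g. list(batch_iter(range(7), 3)) == [(0, 1, 2), (3, 4, 5), (6,)]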
|
| | | |
|
| | | |
|
| | | |
|
|\| | |
|
| | | |
|
| |/ |
|
|/ |
|
|\ |
|
| | |
|
| |\ |
|
| | |
| | |
| | |
| | | |
This was introduced in 4f2f5171
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
So, it turns out that if you have a first `Deferred` `D1`, you can add a
callback which returns another `Deferred` `D2`, and `D2` must then complete
before any further callbacks on `D1` will execute (and later callbacks on `D1`
get the *result* of `D2` rather than `D2` itself).
So, `D1` might have `called=True` (as in, it has started running its
callbacks), but any new callbacks added to `D1` won't get run until `D2`
completes - so if you `yield D1` in an `inlineCallbacks` function, your `yield`
will 'block'.
In conclusion: some of our assumptions in `logcontext` were invalid. We need to
make sure that we don't optimise out the logcontext juggling when this
situation happens. Fortunately, it is easy to detect by checking `D1.paused`.
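The behaviour is easy to reproduce directly (an illustrative snippet, not code
from the change):

    from twisted.internet import defer

    d1 = defer.Deferred()
    d2 = defer.Deferred()

    d1.addCallback(lambda _: d2)  # a callback that returns another Deferred
    d1.callback("first")
    print(d1.called, d1.paused)   # True 1: d1 has fired but is waiting on d2

    d1.addCallback(print)         # does not run yet...
    d2.callback("second")         # ...runs now, printing d2's result: "second"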
|
| |\
| | |
| | |
| | |
| | | |
matrix-org/rav/run_in_background_exception_handling
Trap exceptions thrown within run_in_background
|
| | |
| | |
| | |
| | |
| | | |
Turn any exceptions that get thrown synchronously within run_in_background into
Failures instead.
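i.e. roughly this shape (a sketch, not the actual logcontext code):

    from twisted.internet import defer
    from twisted.python.failure import Failure

    def run_in_background(f, *args, **kwargs):
        try:
            res = f(*args, **kwargs)
        except Exception:
            # A synchronously-thrown exception becomes a failed Deferred, so
            # the caller sees a Failure rather than an unexpected raise.
            # (A bare Failure() captures the active exception and traceback.)
            return defer.fail(Failure())
        if isinstance(res, defer.Deferred):
            return res
        return defer.succeed(res)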
|
| |\ \ |
|
| | |\ \
| | | | |
| | | | | |
Replace stringIO imports with six
|
| | | | | |
|
| | |\ \ \
| | | | | |
| | | | | | |
more bytes strings
|
| | | |/ /
| | | | |
| | | | |
| | | | | |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| | |\ \ \
| | | |/ /
| | |/| | |
Use run_in_background in preference to preserve_fn
|
| | | |\ \ |
|
| | | | |/
| | | |/|
| | | | |
| | | | |
| | | | |
| | | | | |
While I was going through uses of preserve_fn for other PRs, I converted places
which only use the wrapped function once to use run_in_background, to avoid
creating the function object.
|
| |/ / /
| | | |
| | | |
| | | |
| | | |
| | | | |
plus a bonus next()
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| |\ \ \
| | | |/
| | |/| |
|
| | |/
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
There were a bunch of places where we fire off a process to happen in the
background, but don't have any exception handling on it - instead relying on
the unhandled error being logged when the relevant deferred gets
garbage-collected.
This is unsatisfactory for a number of reasons:
- logging on garbage collection is best-effort and may happen some time after
the error, if at all
- it can be hard to figure out where the error actually happened.
- it is logged as a scary CRITICAL error which (a) I always forget to grep for
and (b) it's not really CRITICAL if a background process we don't care about
fails.
So this is an attempt to add exception handling to everything we fire off into
the background.
|
| | |
| | |
| | |
| | | |
Twisted 16.0 doesn't have addTimeout, so let's backport it.
|
| |/
| |
| |
| | |
This doesn't feel like a wheel we need to reinvent.
|
| |\
| | |
| | | |
add __bool__ alias to __nonzero__ methods
|
| | |
| | |
| | |
| | | |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| |\ \
| | | |
| | | | |
Replace Queue with six.moves.queue
|
| | |/
| | |
| | |
| | |
| | |
| | | |
and a six.range change which I missed the last time
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| |\ \
| | |/
| |/| |
Refactor ResponseCache usage
|
| | |
| | |
| | |
| | |
| | | |
Turns out that ObservableDeferred.observe doesn't return a deferred if the
result is already completed. Fix handling and improve documentation.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Adds a `.wrap` method to ResponseCache which wraps up the boilerplate of a
(get, set) pair, and then use it throughout the codebase.
This will be largely non-functional, but does include the following functional
changes:
* federation_server.on_context_state_request: drops use of _server_linearizer
which looked redundant and could cause incorrect cache misses by yielding
between the get and the set.
* RoomListHandler.get_remote_public_room_list(): fixes logcontext leaks
* the wrap function includes some logging. I'm hoping this won't be too noisy
on production.
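The get-or-compute pattern being wrapped looks roughly like this (a
self-contained mini version; the real class deals in deferreds and
logcontexts, which are elided here):

    class ResponseCache:
        def __init__(self):
            self._pending = {}  # key -> in-flight or cached result

        def get(self, key):
            return self._pending.get(key)

        def set(self, key, value):
            self._pending[key] = value
            return value

        def wrap(self, key, callback, *args, **kwargs):
            # The boilerplate every call site used to repeat: look for an
            # existing result, and only start the computation on a miss.
            result = self.get(key)
            if result is None:
                result = self.set(key, callback(*args, **kwargs))
            return result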
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This reverts commit 9fbe70a7dc3afabfdac176ba1f4be32dd44602aa.
It turns out that sortedcontainers.SortedDict is not an exact match for
blist.sorteddict; in particular, `popitem()` removes things from the opposite
end of the dict.
This is trivial to fix, but I want to add some unit tests, and potentially some
more thought about it, before we do so.
|
| |\
| | |
| | | |
Add metrics for ResponseCache
|
| | | |
|
| |\ \
| | | |
| | | | |
Document the behaviour of ResponseCache
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
it looks like everything that uses ResponseCache expects to have to
`make_deferred_yieldable` its results. It's debatable whether that is the best
approach, but let's document it for now to avoid further confusion.
|
| | |/
| |/|
| | |
| | |
| | |
| | |
| | |
| | | |
This commit drop-in replaces blist with SortedContainers. They are
written in pure Python so they work with PyPy, but perform as well as
native implementations, at least in a couple benchmarks:
http://www.grantjenks.com/docs/sortedcontainers/performance.html
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
We aren't ready to release this yet, so I'm reverting it for now.
This reverts commit d1679a4ed7947b0814e0f2af9b888a16c588f1a1, reversing
changes made to e089100c6231541c446e37e157dec8feed02d283.
|
| |\ \
| | | |
| | | | |
Improve database cache performance
|
| | | |
| | | |
| | | |
| | | |
| | | | |
Fixes an issue where a cache invalidation would invalidate *all* pending
entries, rather than just the entry that we intended to invalidate.
|
| |/ / |
|
| |/
| |
| |
| |
| | |
using json.dumps with custom options requires us to create a new JSONEncoder on
each call. It's more efficient to create one upfront and reuse it.
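For example (a sketch; the particular option set shown is just a plausible
one):

    import json

    # Built once at module load...
    _ENCODER = json.JSONEncoder(sort_keys=True, separators=(",", ":"))

    def encode(obj):
        # ...and reused on every call, instead of passing the same options to
        # json.dumps(), which constructs a fresh JSONEncoder each time.
        return _ENCODER.encode(obj)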
|
| |
| |
| |
| | |
fixes https://github.com/matrix-org/synapse/issues/2043 and https://github.com/matrix-org/synapse/issues/2029
|