This just helps keep the rows closer to their streams, so that it's easier to
see what the format of each stream is.

Ensure good comprehension hygiene using flake8-comprehensions.

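For illustration, this is the kind of rewrite flake8-comprehensions asks for; the variable names below are hypothetical, not from the actual change:

```python
# Hypothetical before/after for the style flake8-comprehensions enforces:
# its checks flag, among other things, generator expressions passed to
# list()/set() constructors.
users = ["@Alice:example.com", "@Bob:example.com"]

# Flagged: building a list/set by passing a generator expression.
lowered = list(u.lower() for u in users)
unique = set(u.lower() for u in users)

# Preferred: use the comprehension syntax directly.
lowered = [u.lower() for u in users]
unique = {u.lower() for u in users}
```
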
We were sending device updates down both the federation stream and the
device streams. This meant there was a race if the federation sender
worker processed the federation stream first: when the sender checked
whether there were new device updates, the slaved ID generator had not yet
been updated with the new stream IDs and so returned nothing.

This situation is handled correctly by events/receipts/etc. by not
sending updates down the federation stream at all, and instead having the
federation sender worker listen on the other streams and poke the
transaction queues as appropriate.

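A minimal sketch of that "listen and poke" pattern, using hypothetical names rather than Synapse's real replication API:

```python
# Hypothetical sketch of the fix described above: the federation sender
# listens on the device stream itself instead of also receiving duplicate
# device updates down the federation stream.
class FederationSenderWorker:
    def __init__(self, replication_client, transaction_queue):
        self.transaction_queue = transaction_queue
        # Subscribe to the device stream rather than expecting device
        # updates to arrive via the federation stream as well.
        replication_client.subscribe("device_lists", self.on_device_row)

    def on_device_row(self, stream_id, row):
        # Advance our local view of the stream *before* poking the queue,
        # so the queue can never ask for updates at IDs the slaved ID
        # generator hasn't caught up to yet -- the race described above.
        self.transaction_queue.advance_device_stream(stream_id)
        self.transaction_queue.poke_for_user(row.user_id)
```
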
* Port synapse.replication.tcp to async/await
* Newsfile
* Correctly document type of on_<FOO> functions as async (see the sketch below)
* Don't be overenthusiastic with the asyncing....

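A hypothetical before/after for this kind of port, not the actual synapse.replication.tcp code; in Twisted, an `async def` coroutine can await Deferreds when driven via `defer.ensureDeferred()`:

```python
# Before: an inlineCallbacks handler, whose async nature is easy to miss
# when documenting the on_<FOO> callback interface:
#
#     @defer.inlineCallbacks
#     def on_RDATA(self, stream_name, token, rows):
#         yield self.store.process_replication_rows(stream_name, token, rows)
#
# After: the same handler as a coroutine, so its async type is explicit.
class ReplicationHandler:
    def __init__(self, store):
        self.store = store

    async def on_RDATA(self, stream_name, token, rows):
        await self.store.process_replication_rows(stream_name, token, rows)
```
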
Primarily this fixes a bug in the handling of remote users joining a
room, where the server sent out the presence for all local users in the
room to all servers in the room.

We also change to using the state delta stream, rather than the
distributor, as it will make it easier to split processing out of the
master process (as well as being more flexible).

Finally, when sending presence states to newly joined servers we filter
out old presence states to reduce the number sent. Initially we filter
out states that are offline and have a last-active time more than a week
ago, though this can be changed down the line.

Fixes #3962

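A sketch of that filtering rule, assuming a presence-state shape with a state string and a last-active timestamp in milliseconds (a stand-in for Synapse's real class):

```python
# Sketch of the "drop stale offline states" filter described above.
import time
from collections import namedtuple

UserPresenceState = namedtuple(
    "UserPresenceState", ["user_id", "state", "last_active_ts"]
)

WEEK_IN_MS = 7 * 24 * 60 * 60 * 1000

def filter_states_for_new_server(states, now_ms=None):
    """Drop offline states whose last activity was over a week ago."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return [
        s
        for s in states
        if not (s.state == "offline" and now_ms - s.last_active_ts > WEEK_IN_MS)
    ]
```
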
This is mostly a prerequisite for #4730, but also fits with the general theme
of "move everything off the master that we possibly can".

In worker mode, on the federation sender, when we receive an EDU for sending
over the replication socket, it is parsed into an Edu object. There is no point
extracting its contents only to immediately build another identical Edu.

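A sketch of the shape of that refactor; this Edu is a stand-in for Synapse's real class, and the surrounding names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Edu:
    destination: str
    edu_type: str
    content: dict

class FederationSenderQueue:
    def __init__(self):
        self.pending_edus = []

    # Before: the replication handler unpacked the freshly parsed Edu and
    # this method immediately rebuilt an identical one:
    #
    #     def send_edu(self, destination, edu_type, content):
    #         self.pending_edus.append(Edu(destination, edu_type, content))
    #
    # After: accept the already-parsed object and queue it as-is.
    def send_edu(self, edu):
        self.pending_edus.append(edu)
```
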
itervalues(d) calls d.itervalues() [PY2] and d.values() [PY3],
but SortedDict only implements d.values().

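The failure mode, sketched with a six-style compat helper (Synapse's actual helper may differ slightly):

```python
import sys

def itervalues(d):
    if sys.version_info[0] == 2:
        # sortedcontainers.SortedDict has no itervalues(), so this raises
        # AttributeError on Python 2.
        return d.itervalues()
    return iter(d.values())

# One straightforward fix: call the method SortedDict does implement,
# which exists on both Python versions.
def itervalues_fixed(d):
    return iter(d.values())
```
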
The field is never read, and none of the code paths that could populate it ever do so. It should be very safe to remove.

Fixes a startup crash introduced by commit df9f72d9e5fe264b86005208e0f096156eb03e4b
("replacing portions").

They're not meant to be lazy (#3307)

There's more where that came from

Signed-off-by: Adrian Tschira <nota@notafile.com>

This reverts commit 9fbe70a7dc3afabfdac176ba1f4be32dd44602aa.

It turns out that sortedcontainers.SortedDict is not an exact match for
blist.sorteddict; in particular, `popitem()` removes things from the opposite
end of the dict.

This is trivial to fix, but I want to add some unit tests, and give it some
more thought, before we do so.

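The mismatch is easy to demonstrate, assuming a sortedcontainers 2.x release, where `popitem()` takes an index (earlier 1.x releases used a `last` flag instead):

```python
from sortedcontainers import SortedDict

d = SortedDict({1: "a", 2: "b", 3: "c"})

# The default pops the item with the *largest* key...
assert d.popitem() == (3, "c")

# ...so code expecting the other end must ask for index 0 explicitly.
assert d.popitem(0) == (1, "a")
```
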
This commit drop-in replaces blist with sortedcontainers. They are
written in pure Python, so they work with PyPy, but they perform as well
as native implementations, at least in a couple of benchmarks:
http://www.grantjenks.com/docs/sortedcontainers/performance.html

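The swap at a hypothetical call site (the real change touches Synapse's internal caches):

```python
# Before:
#     from blist import sorteddict
#     cache = sorteddict()
#
# After:
from sortedcontainers import SortedDict

cache = SortedDict()
cache[3] = "c"
cache[1] = "a"
# Keys iterate in sorted order, as with blist.sorteddict:
assert list(cache) == [1, 3]
```
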
Reduce federation replication traffic

This is mainly done by moving the calculation of where to send presence
updates from the presence handler to the transaction queue, so we only
need to send the presence event (and not the destinations) across the
replication connection. Before, we were duplicating work by sending the
full state across once per destination.

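A hypothetical sketch of the shape of that change, where the presence state crosses replication once and the transaction queue computes destinations on its own side (none of these are Synapse's real method names):

```python
class TransactionQueue:
    def __init__(self, store, transport):
        self.store = store
        self.transport = transport

    # Before: the presence handler computed destinations on the master, so
    # the full state crossed replication once per destination:
    #
    #     def send_presence(self, state, destinations): ...
    #
    # After: only the states cross the replication connection.
    async def send_presence(self, states):
        for state in states:
            # Work out interested remote hosts locally (hypothetical store
            # method), e.g. the servers in rooms shared with the user.
            hosts = await self.store.get_interested_remote_hosts(state.user_id)
            for host in hosts:
                self.transport.queue_presence(host, state)
```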