Commit messages:
- Destination was being used incorrectly: a single destination was being passed instead of a list of destinations. This also updates some of the types in this area to stop using `Collection[str]`, which is a footgun because a bare `str` is itself a valid `Collection[str]`.
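
  A minimal sketch of that footgun, using a hypothetical helper (not Synapse's actual API): a plain string satisfies `Collection[str]`, so passing a single destination type-checks but silently iterates over its characters.

```python
from typing import Collection

def send_to_destinations(destinations: Collection[str]) -> None:
    # Hypothetical function used only to illustrate the pitfall.
    for destination in destinations:
        print(f"sending to {destination}")

send_to_destinations("matrix.org")    # type-checks, but "sends" to "m", "a", "t", ...
send_to_destinations(["matrix.org"])  # what was actually intended
```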
- (#14870)
  * Allow `AbstractSet` in `StrCollection`. Or else frozensets are excluded. This will be useful in an upcoming commit where I plan to change a function that accepts `List[str]` to accept `StrCollection` instead.
  * `rooms_to_exclude` -> `rooms_to_exclude_globally`. I am about to make use of this exclusion mechanism to exclude rooms for a specific user and a specific sync. This rename helps to clarify the distinction between the global config and the rooms to exclude for a specific sync.
  * Better function names for internal sync methods.
  * Track a list of excluded rooms on `SyncResultBuilder`. I plan to feed in a list of partially stated rooms for this sync to ignore.
  * Exclude partial state rooms during eager sync, using the mechanism established in the previous commit.
  * Track the un-partial-state stream in sync tokens, so that we can work out which rooms have become fully stated during a given sync period.
  * Fix mutation of a `@cached` return value (see the sketch after this entry). This was fouling up a Complement test added alongside this PR: excluding a room would extend the cached set of forgotten rooms, so that room could later be erroneously considered forgotten. Introduced in #12310, Synapse 1.57.0; I don't think this had any user-visible side effects (until now).
  * `SyncResultBuilder`: track rooms to force as newly joined. Similar plan as before: we've omitted rooms from certain sync responses; now we establish the mechanism to reintroduce them into future syncs.
  * Read the new field, to present rooms as newly joined.
  * Force un-partial-stated rooms to be newly joined, for eager incremental syncs only, provided they're still fully stated.
  * Notify user stream listeners to wake up long-polling syncs.
  * Changelog.
  * Typo fix. Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
  * Unnecessary list cast. Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
  * Rephrase comment. Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
  * Another comment. Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
  * Fix up merge(?)
  * Poke the notifier when receiving an un-partial-stated message over replication.
  * Fix up merge whoops. Thanks MV :)

  Co-authored-by: Mathieu Velten <mathieuv@matrix.org>
  Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
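
A minimal sketch of the `@cached` mutation pitfall fixed above, using `functools.lru_cache` as a stand-in for Synapse's `@cached` decorator (names and values are illustrative): mutating the returned set would corrupt the cached value for every later caller, so the fix is to copy before combining.

```python
import functools
from typing import Set

@functools.lru_cache(maxsize=None)  # stand-in for Synapse's @cached decorator
def get_forgotten_rooms(user_id: str) -> Set[str]:
    # Pretend this came from the database.
    return {"!abc:example.org"}

def rooms_to_exclude(user_id: str, extra_exclusions: Set[str]) -> Set[str]:
    forgotten = get_forgotten_rooms(user_id)
    # Buggy version: forgotten.update(extra_exclusions) mutates the cached set,
    # so the extra rooms would look "forgotten" on every future call.
    return set(forgotten) | extra_exclusions  # copy instead of mutating
```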
- #12383 (#14149): fixes https://github.com/matrix-org/synapse/issues/12383
- Remove unused type-ignores (oversights in #14427 and #14429), plus a changelog entry.
- The callers either set a default limit or manually handle a `None` limit later on (by setting a default value). Update the callers to always instantiate `PaginationConfig` with a default limit and then assume the limit is non-`None`.
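
  A hedged sketch of the shape of this change; the fields and defaults below are illustrative stand-ins rather than the real `PaginationConfig` definition. Giving `limit` a concrete default removes the `None` handling from every caller.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PaginationConfig:
    """Simplified stand-in for the real class."""
    from_token: Optional[str] = None
    direction: str = "b"
    # Previously Optional[int] = None, forcing every caller to re-check for None;
    # now a concrete default, so callers can assume an int.
    limit: int = 10

def paginate(config: PaginationConfig) -> None:
    # No `if config.limit is None:` branch needed any more.
    print(f"fetching up to {config.limit} events, direction={config.direction}")

paginate(PaginationConfig())
```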
- From MSC3715, this was unused by clients (and there was no way for clients to know it was supported). Matrix 1.4 defines the stable field.
- In Jaeger:
  - Before: a huge list of uncategorized database calls.
  - After: nice and collapsible into units of work.
- provided (#12370)
- The presence of this method was confusing, and it was mostly there for backwards compatibility. Let's get rid of it. Part of #11733.
- Upgrade mypy to 0.931, mypy-zope to 0.3.5, and fix new complaints.
- And set the required attribute in a few places, which will error if a parameter is not provided.
- Part of #9744. Removes all redundant `# -*- coding: utf-8 -*-` lines from files, as Python 3 automatically reads source code as UTF-8 now. Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>
- The idea is that, in future, tokens will encode a mapping of instance to position. However, we don't want to include the full instance name in the string representation, so we'll have a mapping between instance name and an immutable integer ID in the DB that we can use instead. We'll then do the lookup when we serialize/deserialize the token (we could alternatively pass around an `Instance` type that includes both the name and ID, but that turns out to be a lot more invasive).
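
  A rough sketch of the idea; the storage and token format below are illustrative, not Synapse's actual schema. The serialized token carries small integer IDs, and the name-to-ID mapping lives in the database, so the full instance name never appears in the token string.

```python
from typing import Dict

# Illustrative in-memory stand-in for the DB table mapping instance names
# to immutable integer IDs.
_NAME_TO_ID: Dict[str, int] = {"master": 1, "event_persister-1": 2}
_ID_TO_NAME = {v: k for k, v in _NAME_TO_ID.items()}

def serialize_token(positions: Dict[str, int]) -> str:
    # e.g. {"master": 1234, "event_persister-1": 1230} -> "2.1230~1.1234"
    return "~".join(f"{_NAME_TO_ID[name]}.{pos}" for name, pos in sorted(positions.items()))

def deserialize_token(token: str) -> Dict[str, int]:
    positions: Dict[str, int] = {}
    for part in token.split("~"):
        instance_id, pos = part.split(".")
        positions[_ID_TO_NAME[int(instance_id)]] = int(pos)
    return positions

print(deserialize_token(serialize_token({"master": 1234, "event_persister-1": 1230})))
```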
- This removes `SourcePaginationConfig` and `get_pagination_rows`. The reasoning behind this is that these generic classes/functions erased the types of the IDs they used: instead of passing around `StreamToken` they'd pass in e.g. `token.room_key`, and those keys don't have uniform types.
- It's just a thin wrapper around two ID gens to make `get_current_token` and `get_next` return tuples. This can easily be replaced by calling the appropriate methods on the underlying ID gens directly.
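
  An illustrative sketch of the shape of that change; the class and variable names below are made up, not the ones removed. The wrapper only zipped two generators' results into tuples, so callers can query each generator themselves.

```python
from typing import Tuple

class IdGenerator:
    """Toy stand-in for a single stream ID generator."""
    def __init__(self) -> None:
        self._current = 0
    def get_next(self) -> int:
        self._current += 1
        return self._current
    def get_current_token(self) -> int:
        return self._current

# Before: a wrapper whose only job was to return tuples.
class PairedIdGenerator:
    def __init__(self, a: IdGenerator, b: IdGenerator) -> None:
        self._a, self._b = a, b
    def get_current_token(self) -> Tuple[int, int]:
        return self._a.get_current_token(), self._b.get_current_token()

# After: call the underlying generators directly.
gen_a, gen_b = IdGenerator(), IdGenerator()
token = (gen_a.get_current_token(), gen_b.get_current_token())
```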
- Python will return a tuple whether there are parentheses around the returned values or not. I'm just sick of my editor complaining about this all over the place :)
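
  For the record, the two forms are identical; the parentheses are purely cosmetic:

```python
def with_parens() -> tuple:
    return (1, 2)

def without_parens() -> tuple:
    return 1, 2

assert with_parens() == without_parens() == (1, 2)
```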
- If no `from` param is specified we calculate and use the "current token", which includes typing, presence, etc. These are unused during pagination and are not available on workers, so we simply don't calculate them.
- `parse_integer` and `parse_string` can take a request and raise errors when we have wrong or missing params. This PR tries to use them more, to deduplicate some code and make it more readable.
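
  A self-contained sketch of the pattern, with stand-in helpers written in the spirit of Synapse's `parse_integer`/`parse_string` rather than the real implementations: the validation and error raising live in one place instead of being repeated in every handler.

```python
from typing import Mapping, Optional

class SynapseError(Exception):
    """Stand-in for the error a real handler would turn into an HTTP 400."""
    def __init__(self, code: int, msg: str) -> None:
        super().__init__(msg)
        self.code = code

def parse_integer(args: Mapping[str, str], name: str,
                  default: Optional[int] = None, required: bool = False) -> Optional[int]:
    if name not in args:
        if required:
            raise SynapseError(400, f"Missing integer query parameter {name!r}")
        return default
    try:
        return int(args[name])
    except ValueError:
        raise SynapseError(400, f"Query parameter {name!r} must be an integer")

# Handlers then reduce to one line per parameter instead of repeating the checks:
query_args = {"limit": "20"}
limit = parse_integer(query_args, "limit", default=10)
```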
- what could possibly go wrong
- If a client didn't specify a from token when paginating backwards, Synapse would attempt to query the (global) maximum topological token. This a) doesn't make much sense, since topological tokens are room-specific, and b) there are no indices that let Postgres do this efficiently.
- So that they can be used with "%r" log formats.
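
  The subject of this message is truncated, but the mechanism is plain Python: "%r" formatting calls `__repr__`, so defining one makes the objects readable in log lines. The class below is a hypothetical example, not the one from the commit.

```python
import logging

class StreamPosition:
    """Hypothetical example class for illustration."""
    def __init__(self, stream: str, pos: int) -> None:
        self.stream, self.pos = stream, pos

    def __repr__(self) -> str:
        return f"StreamPosition({self.stream!r}, {self.pos})"

logging.basicConfig(level=logging.DEBUG)
logging.debug("waking up stream at %r", StreamPosition("events", 1234))
```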
- source whether we want a token for going 'forwards' or 'backwards'
- at all; missing still implies default 10
- `get_pagination_rows`; meaning each source doesn't have to care about its own name any more
- hasn't been incorporated in time for launch.
- that source, not overall eventstream tokens
- easier to find the code
- supply directions.
- turned off currently.
- reflects the change in the underlying storage model.