This is only used by filter_events_for_client, so we can simplify the whole
thing by just doing one user at a time, and removing a dead storage function to
boot.
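A rough sketch of the one-user-at-a-time shape this describes; the signature and the visibility helper below are illustrative assumptions, not the actual Synapse code:

```python
# Illustrative sketch only: filter events for a single user at a time rather
# than building a {user_id: [events]} map for many users at once.

def is_event_visible_to(store, user_id, event, is_peeking=False):
    # Hypothetical stand-in for the real visibility rules, which would
    # consult history visibility and membership via the store.
    return True


def filter_events_for_client(store, user_id, events, is_peeking=False):
    """Return the subset of `events` that `user_id` is allowed to see."""
    return [
        event
        for event in events
        if is_event_visible_to(store, user_id, event, is_peeking)
    ]
```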
* Split state group persist into separate storage func
* Add per database engine code for state group id gen (see the sketch after this list)
* Move store_state_group to StateReadStore
This allows other workers to use it, and so resolve state.
* Hook up store_state_group
* Fix tests
* Rename _store_mult_state_groups_txn
* Rename StateGroupReadStore
* Remove redundant _have_persisted_state_group_txn
* Update comments
* Comment compute_event_context
* Set start val for state_group_id_seq
... otherwise we try to recreate old state groups
* Update comments
* Don't store state for outliers
* Update comment
* Update docstring as state groups are ints
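The per-engine ID generation mentioned above might look roughly like the following sketch. The class names are assumptions for illustration: on PostgreSQL a database sequence (state_group_id_seq) can hand out ids, while SQLite has no sequences, so a counter seeded from the current maximum id is used instead. This is also why the sequence needs a start value above any existing state group id, otherwise old state groups would be recreated.

```python
# Hedged sketch of per-database-engine state group id generation.
# Class and method names are illustrative, not Synapse's actual API.

class PostgresStateGroupIdGenerator:
    def get_next_id_txn(self, txn):
        # Postgres: let a sequence allocate the next id. The sequence's
        # start value must be set above any existing state group id.
        txn.execute("SELECT nextval('state_group_id_seq')")
        return txn.fetchone()[0]


class SqliteStateGroupIdGenerator:
    def __init__(self, conn):
        # SQLite has no sequences, so seed a counter from the current max.
        cur = conn.execute("SELECT COALESCE(MAX(id), 0) FROM state_groups")
        self._current = cur.fetchone()[0]

    def get_next_id_txn(self, txn):
        self._current += 1
        return self._current
```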
Extracted from https://github.com/matrix-org/synapse/pull/2820
This is because the TCP replication uses a slightly different API and streams than the HTTP replication does.
This breaks HTTP replication.
This is because it now relies on the caches stream, which only works on postgres. We are trying to test with sqlite.
Some streams will occasionally advance their positions without actually having any new rows to send over federation. Currently this means that the token will not advance on the workers, leading to them repeatedly sending a slightly out of date token. This in turn requires the master to hit the DB to check if there are any new rows, rather than hitting the no-op logic where we check if the given token matches the current token.
This commit changes the API to always return an entry if the position for a stream has changed, allowing workers to advance their tokens correctly.
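A rough sketch of the change in behaviour, with hypothetical names rather than the real replication code: even when a stream has no new rows, an entry carrying the new position is returned so the worker can advance its token.

```python
# Hypothetical sketch: always return an entry when the position has moved,
# even if there are no new rows, so workers can advance their tokens.

def get_stream_update(name, from_token, current_token, fetch_rows):
    if from_token == current_token:
        # Worker is already up to date: no entry needed.
        return None
    rows = fetch_rows(from_token, current_token)
    # Previously an empty `rows` list meant no entry was sent, so the
    # worker's token never advanced; now the new position is always sent.
    return {"name": name, "position": current_token, "rows": rows}
```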
Wrap the `Requester` constructor with a function which provides sensible
defaults, and use it throughout
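A minimal sketch of what such a wrapper could look like; the field names and defaults are assumptions for illustration, not necessarily the exact Requester definition:

```python
# Illustrative sketch of a Requester wrapper with sensible defaults.
from collections import namedtuple

Requester = namedtuple(
    "Requester", ["user", "access_token_id", "is_guest", "device_id"]
)


def create_requester(user, access_token_id=None, is_guest=False, device_id=None):
    """Build a Requester, supplying defaults for the rarely-needed fields."""
    return Requester(user, access_token_id, is_guest, device_id)
```

Call sites that previously had to spell out every field can then simply say create_requester(user).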
This is because get_room_name_and_alias is now gone.
Add a slaved receipts store
Rather than adding them globally; this limits the changes to affect only the tests.
Add a test to check that get_room_names_and_aliases does the same thing on both the master and the slave data store.
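One way such a parity check could be structured (the fixture and helper names here are assumptions, not the actual test code):

```python
# Hedged sketch of a master/slave parity check. `replicate` stands in for
# whatever helper pushes pending replication rows to the slave store.

async def check_room_names_and_aliases_parity(master_store, slave_store, replicate):
    # Any arguments the real method takes are omitted in this sketch; the
    # point is only that master and slave must agree after replication.
    expected = await master_store.get_room_names_and_aliases()
    await replicate()
    actual = await slave_store.get_room_names_and_aliases()
    assert actual == expected, (actual, expected)
```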
This will enable more detailed decisions.
This is necessary for making the data in synapse visible to a separate service, because presence and typing notifications aren't stored in a database and so won't be visible to another process.
The API can be used either to get the raw data, by requesting the tables themselves, or just to receive notifications of updates, by following the streams meta-stream.
Updates for each requested table are returned as a JSON array of arrays, with a row for each row in the table.
Each table is prefixed by a header row giving the name of the table, the current stream_id position for the table, the number of rows, the number of columns, and the names of the columns.
This is followed by the rows that have been added to the server since the requester last asked.
The API has a timeout and is hooked up to the notifier so that a slave can long poll for updates.
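As a purely illustrative example of the layout described above (the table name, stream position, column names and values are all made up), the updates for one requested table might look like this:

```python
# Made-up example of the "array of arrays" layout described above:
# a header row for the table, followed by the rows added since the
# requester last asked.
example_table_updates = [
    # header: table name, current stream_id position, row count,
    # column count, column names
    ["events", 154, 2, 3, ["position", "event_id", "type"]],
    # rows added since the requester last asked
    [153, "$1456123:example.com", "m.room.message"],
    [154, "$1456124:example.com", "m.room.member"],
]
```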