path: root/synapse/storage/util
Commit message | Author | Date | Files | Lines
* Reduce serialization errors in MultiWriterIdGen (#8456) | Erik Johnston | 2020-10-07 | 1 | -1/+11
      We call `_update_stream_positions_table_txn` a lot. It is an UPSERT that can conflict at the `REPEATABLE READ` isolation level, and since the transaction consists of a single query we may as well run it outside of a transaction.
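      A minimal sketch of the idea, assuming psycopg2 and a `stream_positions` table with a unique key on (stream_name, instance_name); all names are illustrative, not the exact Synapse code:
```python
import psycopg2

def update_stream_position(conn, stream_name, instance_name, stream_id):
    # A single autocommit statement cannot hit a REPEATABLE READ
    # serialization conflict, unlike the same UPSERT inside a transaction.
    conn.autocommit = True  # assumes no transaction is currently open
    with conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO stream_positions (stream_name, instance_name, stream_id)
            VALUES (%s, %s, %s)
            ON CONFLICT (stream_name, instance_name)
            DO UPDATE SET stream_id = GREATEST(
                stream_positions.stream_id, EXCLUDED.stream_id
            )
            """,
            (stream_name, instance_name, stream_id),
        )
```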
* Add logging on startup/shutdown (#8448) | Erik Johnston | 2020-10-02 | 2 | -9/+14
      This is so we can tell what is going on when things take a while to start up. The main change is to ensure that transactions created during startup get correctly logged, like normal transactions.
* Merge tag 'v1.21.0rc2' into develop | Richard van der Hoff | 2020-10-02 | 1 | -1/+1
|\
|      Synapse 1.21.0rc2 (2020-10-02)
|      ==============================
|
|      Features
|      --------
|      - Convert additional templates from inline HTML to Jinja2 templates. ([\#8444](https://github.com/matrix-org/synapse/issues/8444))
|
|      Bugfixes
|      --------
|      - Fix a regression in v1.21.0rc1 which broke thumbnails of remote media. ([\#8438](https://github.com/matrix-org/synapse/issues/8438))
|      - Do not expose the experimental `uk.half-shot.msc2778.login.application_service` flow in the login API, which caused a compatibility problem with Element iOS. ([\#8440](https://github.com/matrix-org/synapse/issues/8440))
|      - Fix malformed log line in new federation "catch up" logic. ([\#8442](https://github.com/matrix-org/synapse/issues/8442))
|      - Fix DB query on startup for negative streams which caused long start up times. Introduced in [\#8374](https://github.com/matrix-org/synapse/issues/8374). ([\#8447](https://github.com/matrix-org/synapse/issues/8447))
| * Fix DB query on startup for negative streams. (#8447) | Erik Johnston | 2020-10-02 | 1 | -1/+1
|      For negative streams we have to negate the internal stream ID before querying the DB. The effect of this bug was to query far too many rows, slowing startup, but we would correctly filter the results afterwards, so there was no further ill effect.
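|      A hedged sketch of the negation (helper name hypothetical): positions for backwards streams are tracked internally as positive integers, so they must be flipped before being compared against the negative IDs actually stored in the database.
```python
def to_db_stream_id(internal_id: int, positive: bool) -> int:
    # Internally the generator always counts upwards; for a backfill
    # ("negative") stream the database rows hold negated values, so the
    # internal position must be negated before it appears in a WHERE
    # clause (and the comparison direction flips accordingly).
    return internal_id if positive else -internal_id
```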
* | Enable mypy checking for unreachable code and fix instances. (#8432) | Patrick Cloke | 2020-10-01 | 1 | -1/+1
|/
* Don't table scan events on worker startup (#8419) | Erik Johnston | 2020-09-29 | 1 | -1/+25
      * Fix table scan of events on worker startup. This happened because we assumed "new" writers had an initial stream position of 0, so the replication code tried to fetch all events written by the instance between 0 and the current position. Instead, set the initial position of new writers to the current persisted-up-to position, on the assumption that new writers won't have written anything before that point (see the sketch below).
      * Consider old writers coming back as "new". Otherwise we'd try to fetch entries between the old stale token and the current position, even though the writer won't have written any rows since then.
      Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
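      A sketch of the fix under assumed names: fall back to the persisted-up-to position instead of 0 for any instance with no recorded position.
```python
def initial_position(instance_name, current_positions, persisted_upto):
    # A writer with no recorded position (brand new, or an old writer whose
    # stale entry was discarded) can't have written anything yet, so start
    # it at the persisted-up-to position. Starting at 0 would make
    # replication fetch every event the instance "wrote" since the
    # beginning of the stream -- effectively a table scan.
    return current_positions.get(instance_name, persisted_upto)
```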
* Add checks for postgres sequence consistency (#8402) | Erik Johnston | 2020-09-28 | 2 | -2/+93
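      The commit message gives no detail; a plausible sketch of such a check (all names assumed, identifiers interpolated only from trusted code) compares the sequence against the table it feeds:
```python
def check_sequence_consistency(txn, sequence_name, table, id_column):
    # If the sequence lags behind the table, future nextval() calls would
    # hand out stream IDs that already exist in the table.
    txn.execute("SELECT last_value FROM %s" % (sequence_name,))
    (last_value,) = txn.fetchone()
    txn.execute("SELECT COALESCE(MAX(%s), 0) FROM %s" % (id_column, table))
    (max_in_table,) = txn.fetchone()
    if max_in_table > last_value:
        raise RuntimeError(
            "Sequence %s (at %d) is behind table %s (max %d)"
            % (sequence_name, last_value, table, max_in_table)
        )
```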
* Fix schema delta for servers that have not backfilled (#8396) | Erik Johnston | 2020-09-25 | 1 | -1/+5
      Fixes #8395.
* Fix MultiWriterIdGenerator's handling of restarts. (#8374) | Erik Johnston | 2020-09-24 | 1 | -21/+127
      On startup `MultiWriterIdGenerator` fetches the maximum stream ID for each instance from the table and uses that as its initial "current position" for each writer. This is problematic as (a) it involves either a scan of the events table or an index (neither of which is ideal), and (b) if rows are being persisted out of order elsewhere while the process restarts, then using the maximum stream ID is not correct. This could theoretically lead to race conditions where e.g. events that are persisted out of order are not sent down sync streams.
      We fix this by creating a new table that tracks the current position of each writer to the stream, and updating it each time we finish persisting a new entry. This is a relatively small overhead when persisting events. For the cache invalidation stream, however, it is a much bigger relative overhead, so we note that for invalidation we don't actually care about reliability over restarts (as there are no caches to invalidate) and simply don't bother reading from and writing to the new table in that particular case.
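      A sketch of the startup read, assuming the same `stream_positions` schema as the UPSERT shown earlier and a psycopg2-style `%s` paramstyle:
```python
def load_current_positions(txn, stream_name):
    # Read each writer's last recorded position from the small tracking
    # table, instead of deriving it with MAX() over the stream table or
    # one of its indexes.
    txn.execute(
        "SELECT instance_name, stream_id FROM stream_positions"
        " WHERE stream_name = %s",
        (stream_name,),
    )
    return dict(txn.fetchall())  # {instance_name: stream_id}
```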
* Use `async with` for ID gens (#8383) | Erik Johnston | 2020-09-23 | 1 | -54/+76
      This will allow us to hit the DB after we've finished using the generated stream ID.
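      Usage after this change looks roughly like the following; the store attribute and helper names are assumed for illustration:
```python
async def persist_thing(self, thing) -> int:
    # `get_next` is now an async context manager.
    async with self._stream_id_gen.get_next() as stream_id:
        await self.db_pool.runInteraction(
            "persist_thing", self._persist_thing_txn, thing, stream_id
        )
    # The point of the change: after the block exits (and the ID is marked
    # finished) we can safely do further DB work.
    await self.db_pool.runInteraction("notify", self._notify_txn, stream_id)
    return stream_id
```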
* Add experimental support for sharding event persister. Again. (#8294) | Erik Johnston | 2020-09-14 | 1 | -4/+6
      This is *not* ready for production yet. Caveats:
      1. We should write some tests...
      2. The stream token that we use for events can get stalled at the minimum position of all writers. This means that new events may not be processed and e.g. sent down sync streams if a writer isn't writing or is slow.
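      A toy illustration of caveat 2 (numbers invented):
```python
# Each event-persister worker has its own position in the events stream.
positions = {"persister-1": 105, "persister-2": 30}  # persister-2 is slow/idle

# The stream token handed to consumers must be safe for *all* writers,
# so it is bounded by the minimum of the positions...
stream_token = min(positions.values())
assert stream_token == 30
# ...meaning persister-1's events 31..105 are not yet visible to anything
# (e.g. sync streams) that paginates by this token.
```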
* Fix `MultiWriterIdGenerator.current_position`. (#8257) | Erik Johnston | 2020-09-08 | 1 | -6/+37
      It did not correctly handle IDs finishing being persisted out of order, resulting in `current_position` lagging until new IDs are persisted.
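      A hedged sketch of the idea behind the fix (class and attribute names invented): remember completed positions and advance only over a contiguous prefix of finished IDs.
```python
import heapq

class PersistedPositionTracker:
    """Sketch: track out-of-order completions, advancing only once every
    smaller ID has also finished persisting."""

    def __init__(self, current: int = 0):
        self.persisted_upto = current
        self._pending = []  # min-heap of finished-but-not-yet-contiguous IDs

    def add_persisted_position(self, new_id: int) -> None:
        heapq.heappush(self._pending, new_id)
        # Pop IDs off the heap while they extend the contiguous prefix;
        # stop at the first gap, since a smaller ID is still in flight.
        while self._pending and self._pending[0] <= self.persisted_upto + 1:
            self.persisted_upto = max(
                self.persisted_upto, heapq.heappop(self._pending)
            )
```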
* Add more logging to debug slow startup (#8264) | Richard van der Hoff | 2020-09-07 | 1 | -0/+5
      I'm hoping this will provide some pointers for debugging https://github.com/matrix-org/synapse/issues/7968.
* Stop sub-classing object (#8249) | Patrick Cloke | 2020-09-04 | 1 | -2/+2
* Revert "Add experimental support for sharding event persister. (#8170)" (#8242)Brendan Abolivier2020-09-041-6/+4
| | | | | | | * Revert "Add experimental support for sharding event persister. (#8170)" This reverts commit 82c1ee1c22a87b9e6e3179947014b0f11c0a1ac3. * Changelog
* Add experimental support for sharding event persister. (#8170) | Erik Johnston | 2020-09-02 | 1 | -4/+6
      This is *not* ready for production yet. Caveats:
      1. We should write some tests...
      2. The stream token that we use for events can get stalled at the minimum position of all writers. This means that new events may not be processed and e.g. sent down sync streams if a writer isn't writing or is slow.
* Make MultiWriterIdGenerator work for streams that use negative stream IDs (#8203) | Erik Johnston | 2020-09-01 | 1 | -11/+28
      This is so that we can use it for the backfill events stream.
* Fix missing _add_persisted_position (#8179) | Erik Johnston | 2020-08-27 | 1 | -0/+2
      This was forgotten in #8164.
* Add functions to `MultiWriterIdGen` used by events stream (#8164) | Erik Johnston | 2020-08-25 | 2 | -3/+108
* Make StreamIdGen `get_next` and `get_next_mult` async (#8161) | Erik Johnston | 2020-08-25 | 1 | -5/+5
      This is mainly so that `StreamIdGenerator` and `MultiWriterIdGenerator` will have the same interface, allowing them to be used interchangeably.
* Remove `ChainedIdGenerator`. (#8123) | Erik Johnston | 2020-08-19 | 1 | -67/+1
      It's just a thin wrapper around two ID gens to make `get_current_token` and `get_next` return tuples. It can easily be replaced by calling the appropriate methods on the underlying ID gens directly.
* Separate `get_current_token` into two. (#8113) | Erik Johnston | 2020-08-19 | 1 | -9/+27
      The function is used for two purposes: (1) for subscribers of streams to get a token they can use to fetch further updates, and (2) for replication to track the position of the writers of the stream.
      For streams with a single writer the two scenarios produce the same result; the situation becomes complicated for streams with multiple writers. The current `MultiWriterIdGenerator` does not correctly handle the first case (which is not an issue, as it is only used for the `caches` stream, which nothing subscribes to outside of replication).
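      A sketch of the resulting split; the wrapper class and defaults are illustrative, not the actual Synapse code:
```python
class MultiWriterTokens:
    """Sketch of the two token types for a multi-writer stream."""

    def __init__(self, current_positions: dict):
        self._current_positions = current_positions  # {instance: position}

    def get_current_token(self) -> int:
        # Purpose 1 -- subscribers: a token from which it is safe to ask
        # for further updates. With several writers this is the minimum
        # position, since rows above it may still be uncommitted somewhere.
        return min(self._current_positions.values(), default=0)

    def get_current_token_for_writer(self, instance_name: str) -> int:
        # Purpose 2 -- replication: how far one *specific* writer has written.
        return self._current_positions.get(instance_name, 0)
```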
* Rename database classes to make some sense (#8033) | Erik Johnston | 2020-08-05 | 1 | -2/+2
* Use `PostgresSequenceGenerator` from `MultiWriterIdGenerator` | Richard van der Hoff | 2020-07-16 | 1 | -4/+4
      Partly just to show it works, but also to remove a bit of code duplication.
* Add some helper classes for generating ID sequences | Richard van der Hoff | 2020-07-16 | 1 | -0/+98
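      A sketch of what such helpers can look like; `PostgresSequenceGenerator` is named in the commit above, while the base class, method bodies, and paramstyle are assumptions:
```python
import abc

class SequenceGenerator(metaclass=abc.ABCMeta):
    """Sketch: a common interface for drawing unique, increasing IDs."""

    @abc.abstractmethod
    def get_next_id_txn(self, txn) -> int:
        ...

class PostgresSequenceGenerator(SequenceGenerator):
    def __init__(self, sequence_name: str):
        self._sequence_name = sequence_name

    def get_next_id_txn(self, txn) -> int:
        # nextval() is non-transactional, so concurrent writers in separate
        # processes can draw IDs without coordinating with each other.
        txn.execute("SELECT nextval(%s)", (self._sequence_name,))
        return txn.fetchone()[0]
```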
* Move event stream handling out of slave store. (#7491) | Erik Johnston | 2020-05-15 | 1 | -0/+11
      This allows us to have the logic on both master and workers, which is necessary to move event persistence off master. We also combine the instantiation of ID generators from DataStore and slave stores into the base worker stores. This allows us to select which process writes events independently of the master/worker split.
* Add MultiWriterIdGenerator. (#7281) | Erik Johnston | 2020-05-04 | 1 | -2/+167
      This will be used to coordinate stream IDs across multiple writers. It functions as the equivalent of both `StreamIdGenerator` and `SlavedIdTracker`.
* Update black to 19.10b0 (#6304) | Amber Brown | 2019-11-01 | 1 | -1/+1
      * Update the version of black and also fix the mypy config being overridden.
* Remove unnecessary parentheses around return statements (#5931) | Andrew Morgan | 2019-08-30 | 1 | -2/+2
      Python will return a tuple whether there are parentheses around the returned values or not. I'm just sick of my editor complaining about this all over the place :)
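      In other words:
```python
def bounds(xs):
    # These are the same statement: the comma creates the tuple, not the
    # parentheses.
    return min(xs), max(xs)  # equivalent to: return (min(xs), max(xs))
```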
* Run black on the rest of the storage module (#4996) | Amber Brown | 2019-04-03 | 1 | -5/+5
* Run isort | Amber Brown | 2018-07-09 | 1 | -1/+1
* Fix assertion to stop transaction queue getting wedged | Richard van der Hoff | 2017-03-15 | 1 | -0/+14
      ... and update some docstrings to correctly reflect the types being used. `get_new_device_msgs_for_remote` can return a `long` under some circumstances, which was being stored in `last_device_list_stream_id_by_dest` and was then upsetting things on the next loop.
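      A Python 2-era sketch of the failure mode and fix (function and variable names invented):
```python
import sys

# On Python 2, database APIs could return `long` where the caller expected
# a plain `int`.
if sys.version_info[0] == 2:
    integer_types = (int, long)  # noqa: F821  (`long` exists only on py2)
else:
    integer_types = (int,)

def check_stream_id(stream_id):
    # An assertion on `int` alone would trip on a `long` and wedge the
    # transaction queue; accepting both integer types avoids that.
    assert isinstance(stream_id, integer_types), stream_id
```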
* Add tests for redactions | Mark Haines | 2016-04-07 | 1 | -1/+1
* Assert that the step != 0 | Mark Haines | 2016-04-01 | 1 | -0/+1
* Use Google-style docstrings | Mark Haines | 2016-04-01 | 1 | -11/+12
* Rename direction to step, apply checks consistently | Mark Haines | 2016-04-01 | 1 | -15/+15
* Use a stream id generator for backfilled ids | Mark Haines | 2016-04-01 | 1 | -20/+41
* Add replication stream for pushers | Mark Haines | 2016-03-15 | 1 | -1/+6
* Ensure integer is an integer | Erik Johnston | 2016-03-09 | 1 | -1/+1
* Add a stream for push rule updates | Mark Haines | 2016-03-01 | 1 | -26/+58
* Load the current id in the IdGenerator constructor | Mark Haines | 2016-03-01 | 1 | -47/+22
      Load it eagerly rather than lazily. This allows us to remove all the yield statements and spurious arguments from the get_next methods, and to replace all instances of get_next_txn with get_next, since get_next no longer needs to access the db.
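      A sketch of the resulting shape (locking omitted for brevity; table/column names come from trusted code, not user input):
```python
class IdGenerator:
    """Sketch: read the current maximum once, up front, so get_next() is a
    pure in-memory increment."""

    def __init__(self, db_conn, table, column):
        cur = db_conn.cursor()
        cur.execute("SELECT COALESCE(MAX(%s), 0) FROM %s" % (column, table))
        self._current = cur.fetchone()[0]
        cur.close()

    def get_next(self):
        # No DB access, so this needn't be a coroutine and callers needn't
        # yield on it.
        self._current += 1
        return self._current
```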
* Remove unused param from get_max_token | Erik Johnston | 2016-02-18 | 1 | -3/+1
* Initial cut | Erik Johnston | 2016-02-17 | 1 | -1/+3
* Add a Homeserver.setup method. | Erik Johnston | 2016-01-26 | 1 | -27/+9
      This is for setting up dependencies that require work on startup. It is useful for the DataStore, which wants to read a bunch of data from the database before initialising.
* Copyrights | Matthew Hodgson | 2016-01-07 | 2 | -2/+2
* Merge pull request #199 from matrix-org/erikj/receipts | Erik Johnston | 2015-07-16 | 1 | -2/+5
|\
|      Implement read receipts.
| * Add basic storage functions for handling of receipts | Erik Johnston | 2015-07-01 | 1 | -2/+5
* | Add bulk insert events API | Erik Johnston | 2015-06-25 | 1 | -0/+31
|/
* SYN-377: Make sure that StreamIdGenerator.get_next.__exit__ is called from the main thread after the transaction completes, not from the database thread before the transaction completes | Mark Haines | 2015-05-12 | 1 | -4/+8
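      The shape of the constraint, sketched with modern syntax (the 2015 code used Twisted generators; `run_in_transaction` and the other names here are stand-ins):
```python
async def persist(self, row):
    # The `with` block is entered and exited on the main thread. __exit__,
    # which marks the stream ID as finished and lets the current token
    # advance, must not run on the database thread before the transaction
    # has committed.
    with self._stream_id_gen.get_next() as stream_id:
        await self._db.run_in_transaction(self._insert_txn, stream_id, row)
    # Only here, after commit, may get_current_token() move past stream_id.
```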
* Typo | Erik Johnston | 2015-04-29 | 1 | -1/+1
* Also remove yield from within lock in the other generator | Erik Johnston | 2015-04-29 | 1 | -8/+6
* Fix deadlock in id_generators. No idea why this was an actual deadlock. | Erik Johnston | 2015-04-29 | 1 | -14/+16
* Make get_max_token into inlineCallbacks so that the lock works. | Erik Johnston | 2015-04-27 | 1 | -3/+4
* Use try..finally in contextlib.contextmanager | Erik Johnston | 2015-04-15 | 1 | -3/+5
* Correctly increment the _next_id initially | Erik Johnston | 2015-04-14 | 1 | -2/+4
* Stream ordering and out of order insertions. | Erik Johnston | 2015-04-09 | 2 | -0/+140
      Handle the fact that events can be persisted out of order, so computing the "current max" stream token becomes non-trivial: we need to make sure that *all* stream tokens less than the current max have also been successfully persisted.
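      A minimal sketch of that idea (all names illustrative): hand out IDs in order, track the ones still in flight, and report the largest ID below every unfinished one.
```python
import threading
from contextlib import contextmanager

class StreamIdGenerator:
    """Sketch: the current token only advances once *all* smaller stream
    IDs have finished persisting."""

    def __init__(self, current_id: int = 0):
        self._lock = threading.Lock()
        self._next_id = current_id
        self._unfinished = []  # IDs handed out but not yet persisted (sorted)

    @contextmanager
    def get_next(self):
        with self._lock:
            self._next_id += 1
            stream_id = self._next_id
            self._unfinished.append(stream_id)
        try:
            yield stream_id  # caller persists the row inside this block
        finally:
            with self._lock:
                self._unfinished.remove(stream_id)

    def get_current_token(self) -> int:
        with self._lock:
            if self._unfinished:
                # Everything below the smallest in-flight ID is complete.
                return self._unfinished[0] - 1
            return self._next_id
```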