path: root/synapse/storage/util
Format: * Commit message (PR)  [Author, Date, Files changed, Lines -/+]
* Move event stream handling out of slave store. (#7491)  [Erik Johnston, 2020-05-15, 1 file, -0/+11]
    This allows us to have the logic on both master and workers, which is necessary to move event persistence off master. We also move the instantiation of ID generators from the DataStore and slave stores into the base worker stores, which allows us to select which process writes events independently of the master/worker split.
* Add MultiWriterIdGenerator. (#7281)  [Erik Johnston, 2020-05-04, 1 file, -2/+167]
    This will be used to coordinate stream IDs across multiple writers. Functions as the equivalent of both `StreamIdGenerator` and `SlavedIdTracker`.
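    The coordination idea can be illustrated with a small sketch. This is not Synapse's implementation: the class name, the `allocate` callback (standing in for a shared database sequence) and the `advance` method are illustrative assumptions. Each writer claims IDs from the shared allocator and advertises how far it has persisted; the position visible to readers is the minimum over all writers, so readers never skip past a row still in flight elsewhere.

        import itertools
        import threading
        from typing import Callable, Dict

        class MultiWriterIdGeneratorSketch:
            def __init__(self, allocate: Callable[[], int], instance_name: str) -> None:
                self._allocate = allocate  # shared, monotonic ID source
                self._lock = threading.Lock()
                self._positions: Dict[str, int] = {instance_name: 0}

            def get_next(self) -> int:
                # Writer path (the StreamIdGenerator role): claim a fresh ID.
                return self._allocate()

            def advance(self, instance: str, new_id: int) -> None:
                # Follower path (the SlavedIdTracker role): record that some
                # instance has persisted everything up to new_id.
                with self._lock:
                    self._positions[instance] = max(self._positions.get(instance, 0), new_id)

            def get_current_token(self) -> int:
                # Readers may only advance to the slowest writer's position.
                with self._lock:
                    return min(self._positions.values())

        # Shared counter standing in for a Postgres sequence:
        _counter = itertools.count(1)
        gen = MultiWriterIdGeneratorSketch(lambda: next(_counter), "worker1")
        stream_id = gen.get_next()         # claim ID 1
        gen.advance("worker1", stream_id)  # advertise it once persisted
        assert gen.get_current_token() == 1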
* Update black to 19.10b0 (#6304)  [Amber Brown, 2019-11-01, 1 file, -1/+1]
    Update the version of black and also fix the mypy config being overridden.
* Remove unnecessary parentheses around return statements (#5931)  [Andrew Morgan, 2019-08-30, 1 file, -2/+2]
    Python will return a tuple whether there are parentheses around the returned values or not. I'm just sick of my editor complaining about this all over the place :)
* Run black on the rest of the storage module (#4996)  [Amber Brown, 2019-04-03, 1 file, -5/+5]
* run isort  [Amber Brown, 2018-07-09, 1 file, -1/+1]
* Fix assertion to stop transaction queue getting wedged  [Richard van der Hoff, 2017-03-15, 1 file, -0/+14]
    ... and update some docstrings to correctly reflect the types being used. get_new_device_msgs_for_remote can return a long under some circumstances, which was being stored in last_device_list_stream_id_by_dest, and was then upsetting things on the next loop.
* Add tests for redactions  [Mark Haines, 2016-04-07, 1 file, -1/+1]
* Assert that the step != 0  [Mark Haines, 2016-04-01, 1 file, -0/+1]
* use google style doc strings  [Mark Haines, 2016-04-01, 1 file, -11/+12]
* Rename direction to step, apply checks consistently  [Mark Haines, 2016-04-01, 1 file, -15/+15]
* Use a stream id generator for backfilled ids  [Mark Haines, 2016-04-01, 1 file, -20/+41]
* Add replication stream for pushers  [Mark Haines, 2016-03-15, 1 file, -1/+6]
* Ensure integer is an integer  [Erik Johnston, 2016-03-09, 1 file, -1/+1]
* Add a stream for push rule updates  [Mark Haines, 2016-03-01, 1 file, -26/+58]
* Load the current id in the IdGenerator constructor  [Mark Haines, 2016-03-01, 1 file, -47/+22]
    Load it in the constructor rather than lazily. This allows us to remove all the yield statements and spurious arguments for the get_next methods. It also allows us to replace all instances of get_next_txn with get_next, since get_next no longer needs to access the db.
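    A minimal sketch of the eager-loading pattern this commit describes; the table and column names and the sqlite backing are illustrative assumptions, not Synapse's actual schema or API. The point is that the one database read happens in the constructor, so get_next() is pure in-memory arithmetic:

        import sqlite3

        class IdGeneratorSketch:
            def __init__(self, db_conn: sqlite3.Connection, table: str, column: str) -> None:
                # Read the current maximum once, up front, instead of lazily
                # on first use; after this, no database access is needed.
                cur = db_conn.execute("SELECT COALESCE(MAX(%s), 0) FROM %s" % (column, table))
                self._current = cur.fetchone()[0]

            def get_next(self) -> int:
                # No transaction and no yield needed in callers.
                self._current += 1
                return self._current

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE events (stream_id INTEGER)")
        conn.execute("INSERT INTO events VALUES (7)")
        gen = IdGeneratorSketch(conn, "events", "stream_id")
        assert gen.get_next() == 8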
* Remove unused param from get_max_token  [Erik Johnston, 2016-02-18, 1 file, -3/+1]
* Initial cut  [Erik Johnston, 2016-02-17, 1 file, -1/+3]
* Add a Homeserver.setup method.  [Erik Johnston, 2016-01-26, 1 file, -27/+9]
    This is for setting up dependencies that require work on startup. This is useful for the DataStore, which wants to read a bunch from the database before initializing.
* copyrights  [Matthew Hodgson, 2016-01-07, 2 files, -2/+2]
*   Merge pull request #199 from matrix-org/erikj/receipts  [Erik Johnston, 2015-07-16, 1 file, -2/+5]
|\      Implement read receipts.
| * Add basic storage functions for handling of receipts  [Erik Johnston, 2015-07-01, 1 file, -2/+5]
* | Add bulk insert events API  [Erik Johnston, 2015-06-25, 1 file, -0/+31]
|/
* SYN-377: Make sure that StreamIdGenerator.get_next.__exit__ is called from the main thread after the transaction completes, not from the database thread before the transaction completes.  [Mark Haines, 2015-05-12, 1 file, -4/+8]
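    The ordering this fix enforces can be sketched as follows. This is a simplified illustration, not Synapse's code: the ID is only removed from the in-flight list in the finally clause, which runs in the caller's thread once the with block (and hence the wrapped transaction) has completed, so the ID can never be marked finished before its row is committed.

        import contextlib
        import threading

        class StreamIdGeneratorSketch:
            def __init__(self) -> None:
                self._lock = threading.Lock()
                self._current = 0
                self._unfinished = []

            @contextlib.contextmanager
            def get_next(self):
                with self._lock:
                    self._current += 1
                    next_id = self._current
                    self._unfinished.append(next_id)
                try:
                    yield next_id  # the caller runs its transaction here
                finally:
                    # __exit__ runs in the calling thread, only after the
                    # with block has finished.
                    with self._lock:
                        self._unfinished.remove(next_id)

        gen = StreamIdGeneratorSketch()
        with gen.get_next() as stream_id:
            pass  # persist the row carrying stream_id inside this block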
* Typo  [Erik Johnston, 2015-04-29, 1 file, -1/+1]
* Also remove yield from within lock in the other generator  [Erik Johnston, 2015-04-29, 1 file, -8/+6]
* Fix deadlock in id_generators. No idea why this was an actual deadlock.  [Erik Johnston, 2015-04-29, 1 file, -14/+16]
* Make get_max_token into inlineCallbacks so that the lock works.  [Erik Johnston, 2015-04-27, 1 file, -3/+4]
* Use try..finally in contextlib.contextmanager  [Erik Johnston, 2015-04-15, 1 file, -3/+5]
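    The reason for that change generalizes beyond ID generators: in a @contextlib.contextmanager generator, cleanup placed after the yield is skipped whenever the caller's with body raises, unless the yield is wrapped in try/finally. A generic illustration (the lock example is hypothetical, not the code this commit touched):

        import contextlib
        import threading

        @contextlib.contextmanager
        def holding(lock: threading.Lock):
            lock.acquire()
            try:
                # An exception in the caller's with body is re-raised here,
                # at the yield; without try/finally the release below would
                # be skipped and the lock would stay held forever.
                yield
            finally:
                lock.release()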
* Correctly increment the _next_id initially  [Erik Johnston, 2015-04-14, 1 file, -2/+4]
* Stream ordering and out of order insertions.  [Erik Johnston, 2015-04-09, 2 files, -0/+140]
    Handle the fact that events can be persisted out of order, so that getting the "current max" stream token becomes non-trivial: we need to make sure that *all* stream tokens less than the current max have also been successfully persisted.
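    The invariant described here fits in a few lines. A minimal sketch with assumed names, not the original code: the publicly visible token is the largest N such that every token up to N is persisted, so a slow write holds readers back even if later tokens have already landed.

        def current_max_token(persisted: set, allocated_up_to: int) -> int:
            # Largest N such that every stream token <= N has been persisted;
            # readers must not advance past a gap left by an in-flight write.
            n = 0
            while n + 1 <= allocated_up_to and (n + 1) in persisted:
                n += 1
            return n

        # Token 3 is still in flight, so the visible maximum stays at 2 even
        # though token 4 has already been written.
        assert current_max_token({1, 2, 4}, 4) == 2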