- Update black version to the latest
- Run black auto formatting over the codebase
- Run autoformatting according to [`docs/code_style.md`](https://github.com/matrix-org/synapse/blob/80d6dc9783aa80886a133756028984dbf8920168/docs/code_style.md)
- Update `code_style.md` docs around installing black to use the correct version
The last stream token is always known, so we do not need to handle `None`.
This improves type hinting and should use less memory.
Removes faulty assertions and fixes the logic to ensure the max
stream token is always set.
* Fix outbound federation with multiple event persisters. (#8536)
We incorrectly notified federation senders that the minimum persisted
stream position had advanced when we got an `RDATA` from an event
persister.
Notifying federation senders already happens correctly in the notifier,
so we just delete the offending line.
* Change some interfaces to use RoomStreamToken.
By enforcing use of `RoomStreamToken` we make it less likely that
people pass in arbitrary ints they got from somewhere else.
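A minimal sketch of the idea, assuming a simple integer-backed token; Synapse's real `RoomStreamToken` carries more state than this, and the function below is invented for illustration:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RoomStreamToken:
    """Opaque position in the room event stream (simplified sketch)."""
    stream: int


def get_events_since(from_token: RoomStreamToken) -> None:
    # Callers must construct a token explicitly; a random int obtained
    # from somewhere else no longer type-checks as a stream position.
    print(f"fetching events after stream position {from_token.stream}")


get_events_since(RoomStreamToken(stream=42))
```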
`pusher_pool.on_new_notifications` expected a min and max stream ID, but that was not what we were passing in. Instead, let's just pass it the current max stream ID and have it track the last stream ID it was passed.
I believe it mostly worked because we called the function for every event. However, it would break for events that got persisted out of order, i.e. events that were persisted while the max stream ID had not yet been incremented because not all preceding events had finished persisting; push for such an event would be delayed until another event got pushed to the affected users.
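A hedged sketch of the "pass only the max, track the last seen" pattern described here; the class and method bodies are invented for illustration, not Synapse's actual `PusherPool`:

```python
class PusherPool:
    """Simplified: tracks the last stream ID it was woken up with."""

    def __init__(self) -> None:
        self._last_stream_id = 0

    def on_new_notifications(self, max_stream_id: int) -> None:
        if max_stream_id <= self._last_stream_id:
            # An out-of-order or duplicate wake-up; nothing new to do.
            return
        prev_id, self._last_stream_id = self._last_stream_id, max_stream_id
        self._notify_pushers(prev_id + 1, max_stream_id)

    def _notify_pushers(self, min_id: int, max_id: int) -> None:
        print(f"pushing for events {min_id}..{max_id}")


pool = PusherPool()
pool.on_new_notifications(10)
pool.on_new_notifications(10)  # a duplicate wake-up is now a no-op
```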
This reverts commit e7fd336a53a4ca489cdafc389b494d5477019dc0.
Mainly because the email push code sometimes raises exceptions whose
stack traces have been lost; this hopefully fixes that.
Ensure good comprehension hygiene using flake8-comprehensions.
* Update the version of black and fix the mypy config being overridden
Instead of throwing a StoreError, let's break out of the processing loop
and mark the pusher as stopped.
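Roughly the control flow being described, as a sketch; the class and exception here are stand-ins, not Synapse's actual pusher code:

```python
class StoreError(Exception):
    """Stand-in for the storage-layer error mentioned above."""


class Pusher:
    def __init__(self) -> None:
        self.stopped = False

    def run(self) -> None:
        while not self.stopped:
            try:
                self._process_one_batch()
            except StoreError:
                # Don't let the exception escape the loop: mark this
                # pusher as stopped and exit cleanly instead.
                self.stopped = True

    def _process_one_batch(self) -> None:
        raise StoreError("pusher row has gone away")


p = Pusher()
p.run()
assert p.stopped
```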
We start all pushers on start up and immediately start a background
process to fetch push to send. This makes start up incredibly painful
when dealing with many pushers.
Instead, let's do a quick DB check to see if there *may* be push to
send, and only start the background processes for those pushers. We also
stagger the start-up checks so that we don't try to handle all pushers
at once.
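An illustration of the staggered-startup idea using asyncio (Synapse itself is built on Twisted, and `has_pending_push`/`run` are assumed interfaces, not its real API):

```python
import asyncio

STAGGER_SECS = 0.05  # illustrative gap between per-pusher checks


class StubPusher:
    def __init__(self, pending: bool) -> None:
        self.pending = pending

    async def has_pending_push(self) -> bool:
        return self.pending  # stands in for a quick DB check

    async def run(self) -> None:
        print("background loop started")


async def start_pushers(pushers) -> None:
    for pusher in pushers:
        # Cheap check first: only spawn the expensive background loop
        # for pushers that may actually have push to send.
        if await pusher.has_pending_push():
            asyncio.create_task(pusher.run())
        # Stagger the checks so startup doesn't hit the DB for every
        # pusher at once.
        await asyncio.sleep(STAGGER_SECS)


asyncio.run(start_pushers([StubPusher(True), StubPusher(False)]))
```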
... and rename it, for even more sanity
This brings it into line with `on_new_notifications` and `on_new_receipts`. It
requires a little bit of hoop-jumping in `EmailPusher` to load the throttle
params before the first loop.
`on_new_notifications` and `on_new_receipts` in `HttpPusher` and `EmailPusher`
now always return synchronously, so we can remove the `defer.gatherResults` on
their results, and the `run_as_background_process` wrappers can be removed too
because the PusherPool methods will now complete quickly enough.
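Since the callbacks now return synchronously, a plain loop can replace the old `defer.gatherResults` over a list of Deferreds; a before/after sketch with invented names:

```python
class HttpPusher:
    """Simplified: the callback now just records state and returns."""

    max_stream_id = 0

    def on_new_notifications(self, max_stream_id: int) -> None:
        # Synchronous: note the new position (the pusher's own loop
        # does the actual work) and return immediately.
        self.max_stream_id = max_stream_id


pushers = [HttpPusher(), HttpPusher()]

# Previously: yield defer.gatherResults([...], consumeErrors=True)
for p in pushers:
    p.on_new_notifications(105)
```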
Each pusher has its own loop which runs for as long as it has work to do. This
should run in its own background thread with its own logcontext, as other
similar loops elsewhere in the system do - which means that CPU usage is
consistently attributed to that loop, rather than to whatever request happened
to start the loop.
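An asyncio analogue of the attribution point, assuming named tasks stand in for Synapse's Twisted logcontexts:

```python
import asyncio


async def pusher_loop(name: str, queue: "asyncio.Queue[object]") -> None:
    # The loop runs as its own named task, so work done here is
    # attributed to the pusher, not to the request that started it.
    while True:
        item = await queue.get()
        if item is None:  # shutdown sentinel
            return
        print(f"[{name}] sending push for {item!r}")


async def main() -> None:
    queue: "asyncio.Queue[object]" = asyncio.Queue()
    task = asyncio.create_task(
        pusher_loop("@alice/http", queue), name="pusher-@alice/http"
    )
    await queue.put("event-123")
    await queue.put(None)
    await task


asyncio.run(main())
```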
There were a bunch of places where we fire off a process to happen in the
background, but don't have any exception handling on it - instead relying on
the unhandled error being logged when the relevant deferred gets
garbage-collected.
This is unsatisfactory for a number of reasons:
- logging on garbage collection is best-effort and may happen some time after
the error, if at all
- it can be hard to figure out where the error actually happened
- it is logged as a scary CRITICAL error which (a) I always forget to grep for
and (b) it's not really CRITICAL if a background process we don't care about
fails
So this is an attempt to add exception handling to everything we fire off into
the background.
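A sketch of the pattern in asyncio terms (Synapse's actual fix wraps Twisted Deferreds; the wrapper name here is invented):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)


def fire_and_log(coro, desc: str) -> "asyncio.Task":
    """Run a coroutine in the background, logging any failure promptly
    instead of waiting for garbage collection to surface it."""
    task = asyncio.ensure_future(coro)

    def _on_done(t: "asyncio.Task") -> None:
        if not t.cancelled() and t.exception() is not None:
            # ERROR rather than CRITICAL: a failed background job should
            # not look like a process-fatal event.
            logger.error(
                "Background process %r failed", desc, exc_info=t.exception()
            )

    task.add_done_callback(_on_done)
    return task
```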
what could possibly go wrong
Load the Jinja2 templates once at startup instead of every time a new
email pusher is created, as loading templates is slow.
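A sketch of loading the templates once at module import time; the directory and template name below are placeholders, not necessarily Synapse's real paths:

```python
from jinja2 import Environment, FileSystemLoader

# Parsed once when the module is imported...
_env = Environment(loader=FileSystemLoader("res/templates"))
_notif_template = _env.get_template("notif_mail.html")


def make_email_pusher():
    # ...so each new email pusher reuses the already-parsed template
    # instead of paying the load cost again.
    return _notif_template
```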
Update the last stream ordering if
`get_unread_push_actions_for_user_in_range_for_email` returns no new
push actions. This reduces the range that it needs to check on the next
iteration.
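Roughly the loop shape implied here, as a hedged sketch; only the store method name comes from the commit message, the rest is invented:

```python
class EmailPusherLoop:
    def __init__(self, store, user_id: str) -> None:
        self.store = store
        self.user_id = user_id
        self.last_stream_ordering = 0
        self.max_stream_ordering = 0

    def process_once(self) -> None:
        actions = self.store.get_unread_push_actions_for_user_in_range_for_email(
            self.user_id, self.last_stream_ordering, self.max_stream_ordering
        )
        if not actions:
            # Nothing to send, but advancing the bookmark anyway shrinks
            # the range the next iteration has to scan.
            self.last_stream_ordering = self.max_stream_ordering
            return
        ...  # build and send the notification email
```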
for the email and HTTP pushers, rather than trying to make a single
method that works with their conflicting requirements.
The HTTP pusher needs to get the messages in ascending stream order, and
doesn't want to miss a message.
The email pusher needs to get the messages in descending timestamp order,
and doesn't mind if it misses messages.
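Illustrative SQL for the two conflicting orderings; the table and column names are assumptions for the sketch, not necessarily Synapse's actual schema:

```python
# HTTP pusher: ascending stream order, must not skip anything.
HTTP_PUSH_SQL = """
    SELECT event_id, stream_ordering
    FROM event_push_actions
    WHERE user_id = ? AND stream_ordering > ? AND stream_ordering <= ?
    ORDER BY stream_ordering ASC
"""

# Email pusher: newest first by timestamp; missing a few is acceptable.
EMAIL_PUSH_SQL = """
    SELECT event_id, received_ts
    FROM event_push_actions
    WHERE user_id = ? AND stream_ordering > ? AND stream_ordering <= ?
    ORDER BY received_ts DESC
    LIMIT ?
"""
```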
Were it not for the fact that you can't use the base handler in the pusher, because it pulls in the world. Committing while I fix that on a different branch.
* After the initial 10-minute window, only alert every 24h for room notifs
* Reset room state after 6h of idleness
* Synchronise throttles for messages sent in the same notif, so the 24-hourly notifs 'line up'
* Fix the email subjects to say what triggered the notification
* Order the rooms in reverse activity order in the email, so the 'reason' room always comes first
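A hedged sketch of the throttling rules as listed; the constants come from the bullets above, while the decision function itself is invented for illustration:

```python
TEN_MIN_MS = 10 * 60 * 1000
SIX_HOURS_MS = 6 * 60 * 60 * 1000
ONE_DAY_MS = 24 * 60 * 60 * 1000


def throttle_delay_ms(now_ms: int, room_first_notif_ms: int,
                      room_last_activity_ms: int) -> int:
    """How long to wait before the next email notif for a room."""
    if now_ms - room_last_activity_ms > SIX_HOURS_MS:
        # 6h of idleness: reset the room's throttle state entirely.
        return 0
    if now_ms - room_first_notif_ms <= TEN_MIN_MS:
        # Inside the initial 10-minute window: alert freely.
        return 0
    # After the window, at most one alert per 24h for room notifs.
    return ONE_DAY_MS
```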
Say who the messages are from if there's no room name; otherwise it's a bit nonsensical.
Mostly WIP: porting the room name calculation logic from the web client so that the room names in the email mirror the clients'.
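The fallback chain being ported looks roughly like this; a sketch mirroring typical Matrix client behaviour, not Synapse's exact code:

```python
from typing import Optional, Sequence


def calculate_room_name(name: Optional[str], alias: Optional[str],
                        other_members: Sequence[str]) -> str:
    if name:
        return name
    if alias:
        return alias
    if other_members:
        # No room name: describe the room by who the messages are from.
        return ", ".join(other_members[:3])
    return "Empty room"


print(calculate_room_name(None, None, ["Alice", "Bob"]))  # "Alice, Bob"
```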
Also pep8 fixes
Copy the logic over from the HTTP pusher that prevents multiple instances of the processing loop running at once and sets up logging and measure blocks.
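The guard being copied is roughly this shape; a sketch with invented attribute and method names:

```python
class EmailPusher:
    def __init__(self) -> None:
        self._is_processing = False

    def start_processing(self) -> None:
        if self._is_processing:
            # A run is already in flight; it will pick up the new work,
            # so don't start a second concurrent instance.
            return
        self._is_processing = True
        try:
            ...  # the actual work, wrapped in logging/measure blocks
        finally:
            self._is_processing = False
```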
Mostly logic of when to send an email