| Commit message | Author | Age | Files | Lines |
|
Avoid renaming configuration settings for now and rename internal code
to use blocklist and allowlist instead.
|
#15514 introduced a regression where Synapse would encounter
`PartialDownloadError`s when fetching OpenID metadata for certain
providers on startup. Due to #8088, this prevents Synapse from starting
entirely.
Revert the change while we decide what to do about the regression.
|
This is to discourage timing-based profiling on the push gateways.
|
Pushers tend to make many connections to the same HTTP host
(e.g. a new event comes in, causes events to be pushed, and then
the homeserver connects to the same host many times). Because of this,
the per-host HTTP connection pool size was increased, but that increase
does not make sense for other SimpleHttpClients.
Add a parameter for the connection pool and override it for pushers
(making a separate SimpleHttpClient for pushers with the increased
configuration).
This returns the HTTP connection pool settings to the default Twisted
ones for non-pusher HTTP clients.
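
A minimal sketch of the idea, using plain Twisted rather than Synapse's actual `SimpleHttpClient` (the class name and parameter name here are illustrative assumptions):

```python
from typing import Optional

from twisted.internet import reactor
from twisted.web.client import Agent, HTTPConnectionPool


class PooledHttpClientSketch:
    """Expose the per-host pool size as a parameter; only pushers override it."""

    def __init__(self, connection_pool_size: Optional[int] = None):
        pool = HTTPConnectionPool(reactor)
        if connection_pool_size is not None:
            # Twisted's default maxPersistentPerHost is 2; pushers want more.
            pool.maxPersistentPerHost = connection_pool_size
        self._agent = Agent(reactor, pool=pool)


# Hypothetical usage: a dedicated client for pushers with a larger pool,
# while other clients keep the Twisted defaults.
pusher_client = PooledHttpClientSketch(connection_pool_size=20)
default_client = PooledHttpClientSketch()
```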
|
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
|
badge_count_last_call was always zero when the response for push
notifications included a "rejected" key which mapped to an empty list.
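
A short sketch of the corrected handling (the function shape and names are assumptions for illustration, not the actual Synapse code):

```python
from typing import Any, Dict, List, Optional


def updated_badge_count(
    response_body: Dict[str, Any], badge_count_sent: int
) -> Optional[int]:
    """Return the new badge_count_last_call, or None if pushkeys were rejected."""
    rejected: List[str] = response_body.get("rejected", [])
    if rejected:
        # Non-empty list: the gateway rejected some pushkeys.
        return None
    # An empty "rejected" list (or no key at all) still counts as success,
    # so record the badge count we just sent.
    return badge_count_sent
```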
|
The presence of this method was confusing, and mostly present for backwards
compatibility. Let's get rid of it.
Part of #11733
|
Upgrade mypy to 0.931, mypy-zope to 0.3.5 and fix new complaints.
|
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
|
Part of #9744
Removes all redundant `# -*- coding: utf-8 -*-` lines from files, as Python 3 now reads source code as UTF-8 by default.
`Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>`
|
- Merge 'isinstance' calls.
- Remove unnecessary dict call outside of comprehension.
- Use 'sys.exit()' calls.
|
- Update black version to the latest
- Run black auto formatting over the codebase
- Run autoformatting according to [`docs/code_style.md`](https://github.com/matrix-org/synapse/blob/80d6dc9783aa80886a133756028984dbf8920168/docs/code_style.md)
- Update `code_style.md` docs around installing black to use the correct version
|
The last stream token is always known and we do not need to handle `None`.
|
This improves type hinting and should use less memory.
|
Removes faulty assertions and fixes the logic to ensure the max
stream token is always set.
|
Pusher URLs now must end in `/_matrix/push/v1/notify` per the
specification.
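
A hedged sketch of the corresponding validation (the helper name and error type are assumptions):

```python
from urllib.parse import urlparse

SPEC_NOTIFY_PATH = "/_matrix/push/v1/notify"


def check_pusher_url(url: str) -> None:
    """Reject pusher URLs that do not use the path required by the spec."""
    if urlparse(url).path != SPEC_NOTIFY_PATH:
        raise ValueError(f"Pusher URL must end in {SPEC_NOTIFY_PATH}")
```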
|
Replaces the `federation_ip_range_blacklist` configuration setting with an
`ip_range_blacklist` setting with wider scope. It now applies to:
* Federation
* Identity servers
* Push notifications
* Checking key validity for third-party invite events
The old `federation_ip_range_blacklist` setting is still honored if present, but
with reduced scope (it only applies to federation and identity servers).
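
An illustrative check in the spirit of `ip_range_blacklist` (the ranges, names and error type are placeholders, not Synapse's actual implementation):

```python
from netaddr import IPAddress, IPNetwork, IPSet

BLOCKED_RANGES = IPSet(
    [IPNetwork(cidr) for cidr in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
)


def assert_ip_allowed(resolved_ip: str) -> None:
    """Refuse outbound connections whose resolved address is in a blocked range."""
    if IPAddress(resolved_ip) in BLOCKED_RANGES:
        raise ConnectionRefusedError(f"{resolved_ip} is in a blocked IP range")
```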
|
per-message or per-room (#8820)
This PR adds a new config option to the `push` section of the homeserver config, `group_unread_count_by_room`. By default Synapse will group push notifications by room (so if you have 1000 unread messages spread across 55 rooms, you'll see an unread count of 55 on your phone).
However, it is also useful to be able to send out the true count of unread messages if desired. If `group_unread_count_by_room` is set to `false`, then with the above example, one would see an unread count of 1000 (email anyone?).
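
A hedged sketch of the two counting modes (the helper and its inputs are assumed for illustration):

```python
from typing import Dict


def badge_count(unread_by_room: Dict[str, int], group_by_room: bool) -> int:
    if group_by_room:
        # Default behaviour: one unit per room with unread messages (55 above).
        return sum(1 for n in unread_by_room.values() if n > 0)
    # group_unread_count_by_room: false -> the true total (1000 above).
    return sum(unread_by_room.values())
```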
|
(#8536)
* Fix outbound federation with multiple event persisters.
We incorrectly notified federation senders that the minimum persisted
stream position had advanced when we got an `RDATA` from an event
persister.
Notifying federation senders already happens correctly in the notifier,
so we just delete the offending line.
* Change some interfaces to use RoomStreamToken.
By enforcing use of `RoomStreamTokens` we make it less likely that
people pass in random ints that they got from somewhere random.
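
An illustrative shape for such a token type (not Synapse's exact class); wrapping the position means call sites can no longer pass an arbitrary int where a room stream position is expected, and mypy can enforce that:

```python
from typing import Optional

import attr


@attr.s(slots=True, frozen=True, auto_attribs=True)
class RoomStreamTokenSketch:
    topological: Optional[int]  # set for historical (topological) positions
    stream: int                 # the live stream ordering
```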
|
`pusher_pool.on_new_notifications` expected a min and max stream ID; however, that was not what we were passing in. Instead, let's just pass it the current max stream ID and have it track the last stream ID it was passed.
I believe it mostly worked because we called the function for every event. However, it would break for events that got persisted out of order, i.e. events that were persisted while the max stream ID wasn't incremented because not all preceding events had finished persisting; push for such an event would be delayed until another event got pushed to the affected users.
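
A sketch of the "remember the last stream ID we were given" pattern described above (class and method names are assumptions for illustration):

```python
class PusherPoolSketch:
    def __init__(self) -> None:
        self._last_stream_id = 0

    async def on_new_notifications(self, max_stream_id: int) -> None:
        if max_stream_id <= self._last_stream_id:
            return  # nothing new since the last call
        prev_stream_id = self._last_stream_id
        self._last_stream_id = max_stream_id
        await self._process_push(prev_stream_id, max_stream_id)

    async def _process_push(self, min_stream_id: int, max_stream_id: int) -> None:
        ...  # fetch and send unprocessed push actions in (min, max]
```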
|
This reverts commit e7fd336a53a4ca489cdafc389b494d5477019dc0.
|
* Remove obsolete comment about ancient temporary code
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
* Implement hack to set push priority based on whether the tweaks indicate the event might cause effects.
* Changelog for 7765
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
* Antilint
* Add tests for push priority
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
* Update synapse/push/httppusher.py
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
* Antilint
* Remove needless invites from tests.
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
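
An illustrative version of the priority heuristic (a sketch, not the exact Synapse logic):

```python
from typing import Any, Dict


def push_priority(tweaks: Dict[str, Any]) -> str:
    """Ask for a high-priority push only when the event is likely user-visible."""
    if tweaks.get("sound") or tweaks.get("highlight"):
        return "high"
    return "low"
```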
|
A lot of the things we log at INFO are now a bit superfluous, so let's
make them DEBUG logs to reduce the amount we log by default.
Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
Co-authored-by: Brendan Abolivier <github@brendanabolivier.com>
|
The `http_proxy` and `HTTPS_PROXY` env vars can be set to a `host[:port]` value which should point to a proxy.
The address of the proxy should be excluded from IP blacklists such as the `url_preview_ip_range_blacklist`.
The proxy will then be used for
* push
* url previews
* phone-home stats
* recaptcha validation
* CAS auth validation
It will *not* be used for:
* Application Services
* Identity servers
* Outbound federation
* In worker configurations, connections from workers to masters
Fixes #4198.
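
A hedged sketch of picking up the proxy setting (the parsing shape and fallback port are assumptions, not Synapse's actual behaviour):

```python
import os
from typing import Optional, Tuple


def proxy_endpoint(var: str = "http_proxy") -> Optional[Tuple[str, int]]:
    value = os.environ.get(var) or os.environ.get(var.upper())
    if not value:
        return None
    host, _, port = value.partition(":")
    return host, int(port) if port else 8080  # fallback port is an assumption
```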
|
* update version of black and also fix the mypy config being overridden
|
Add StateGroupStorage interface
|
Replace every instance of `logger.warn` with `logger.warning` as the former is deprecated.
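
For reference, the deprecated alias and its replacement in the standard library:

```python
import logging

logger = logging.getLogger(__name__)

logger.warn("deprecated alias of warning()")  # what the codebase used to call
logger.warning("the preferred spelling")      # what it calls now
```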
|
Instead of throwing a StoreError, let's break out of the processing loop and
mark the pusher as stopped.
|
| |
We start all pushers on start up and immediately start a background
process to fetch push to send. This makes start up incredibly painful
when dealing with many pushers.
Instead, let's do a quick fast DB check to see if there *may* be push to
send and only start the background processes for those pushers. We also
stagger starting up and doing those checks so that we don't try and
handle all pushers at once.
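
A hedged sketch of the staggered start-up (the pusher and store interfaces are assumptions, not Synapse's actual ones):

```python
import asyncio
from typing import Sequence


async def start_pushers(pushers: Sequence, store, batch_size: int = 10) -> None:
    for i in range(0, len(pushers), batch_size):
        for pusher in pushers[i : i + batch_size]:
            # Cheap DB check (assumed) before spinning up a background process.
            if await store.pusher_may_have_push_to_send(pusher):
                pusher.start_background_loop()
        await asyncio.sleep(0.5)  # stagger the next batch
```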
|
| |
We're counting the number of push notifications, but not the number of badges;
I'd like to see if they are significant.
|
| |
This is in preparation to refactor FrozenEvent to support different
event formats for different room versions
|
| |
... and rename it, for even more sanity
|
| |
This brings it into line with on_new_notifications and on_new_receipts. It
requires a little bit of hoop-jumping in EmailPusher to load the throttle
params before the first loop.
|
| |
`on_new_notifications` and `on_new_receipts` in `HttpPusher` and `EmailPusher`
now always return synchronously, so we can remove the `defer.gatherResults` on
their results, and the `run_as_background_process` wrappers can be removed too
because the PusherPool methods will now complete quickly enough.
|
| |
Each pusher has its own loop which runs for as long as it has work to do. This
should run in its own background thread with its own logcontext, as other
similar loops elsewhere in the system do - which means that CPU usage is
consistently attributed to that loop, rather than to whatever request happened
to start the loop.
|
| |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| |
There were a bunch of places where we fire off a process to happen in the
background, but don't have any exception handling on it - instead relying on
the unhandled error being logged when the relevant deferred gets
garbage-collected.
This is unsatisfactory for a number of reasons:
- logging on garbage collection is best-effort and may happen some time after
the error, if at all
- it can be hard to figure out where the error actually happened.
- it is logged as a scary CRITICAL error which (a) I always forget to grep for
and (b) it's not really CRITICAL if a background process we don't care about
fails.
So this is an attempt to add exception handling to everything we fire off into
the background.
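
An illustrative wrapper in the same spirit (asyncio-based for brevity; Synapse's actual helper is Twisted-based): run work in the background and log any failure immediately, rather than waiting for garbage collection to surface it.

```python
import asyncio
import logging
from typing import Awaitable

logger = logging.getLogger(__name__)


def run_in_background(desc: str, coro: Awaitable) -> "asyncio.Task":
    async def wrapped() -> None:
        try:
            await coro
        except Exception:
            logger.exception("Background process %r failed", desc)

    return asyncio.create_task(wrapped(), name=desc)
```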
|
| |
The redact_content option never worked because it read the wrong config
section. The PR introducing it
(https://github.com/matrix-org/synapse/pull/2301) had feedback suggesting the
name be changed to not re-use the term 'redact' but this wasn't
incorporated.
This renames the option to give it a less confusing name; it also
means that people who've set the redact_content option won't suddenly
see a behaviour change when upgrading Synapse, but can instead set
include_content if they want to.
This PR also updates the wording of the config comment to clarify
that this has no effect on event_id_only push.
Includes https://github.com/matrix-org/synapse/pull/2422
|
| |
what could possibly go wrong
|
| |
as really it's part of the event ID
|
| |
Param in the data dict of a pusher that tells an HTTP pusher to
send just the event_id of the event it's notifying about and the
notification counts. For clients that want to go & fetch the body
of the event themselves anyway.
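
A hedged sketch of the two payload shapes this implies (field selection is illustrative; see the push gateway API for the exact notification format):

```python
from typing import Any, Dict, Optional


def build_notification(
    event: Dict[str, Any], counts: Dict[str, int], fmt: Optional[str]
) -> Dict[str, Any]:
    if fmt == "event_id_only":
        # Just enough for the client to fetch the event itself.
        return {
            "event_id": event["event_id"],
            "room_id": event["room_id"],
            "counts": counts,
        }
    # Default: include the event content in the notification.
    notification = dict(event)
    notification["counts"] = counts
    return notification
```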
|
| |
for google/apple devices
|
| |
for the email and http pushers rather than trying to make a single
method that will work with their conflicting requirements.
The http pusher needs to get the messages in ascending stream order, and
doesn't want to miss a message.
The email pusher needs to get the messages in descending timestamp order,
and doesn't mind if it misses messages.
|
| |
Fixes https://github.com/vector-im/vector-web/issues/1654
|
| |
and wrap unsafe process in a try block
|
| |
saying that the min stream id won't be completely accurate all the time
|
| |
Also fix bugs with retrying.
|
| |
event stream & running the rules again. Sytest passes, but remaining to do:
* Make badges work again
* Remove old, unused code
|
It wasn't possible to hit the code from the API because of a typo
in parsing the request path. Since no-one was using the feature
we might as well remove the dead code.
|
of the code
|
notifications.
|
in question
|
the user responds to the notification.
|
sender display names where they exist.
|
the push token changes.
|
Add a timestamp to push tokens so we know the last time we got them
from the device. Send it to the push gateways so they can determine
whether a failure is more recent than the token.
Stop and remove pushers that have been rejected.
|
pokes work or not yet but the retry semantics are pretty good.
|
stdout currently!)
|