| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
| |
per-message or per-room (#8820)
This PR adds a new config option to the `push` section of the homeserver config, `group_unread_count_by_room`. By default Synapse will group push notifications by room (so if you have 1000 unread messages spread across 55 rooms, you'll see an unread count on your phone of 55).
However, it is also useful to be able to send out the true count of unread messages if desired. If `group_unread_count_by_room` is set to `false`, then with the above example, one would see an unread count of 1000 (email anyone?).
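For illustration, a minimal sketch of how this might be set in the homeserver config; the option name and section come from this PR, while the surrounding file layout is assumed:
```yaml
# homeserver.yaml (illustrative excerpt): report the total number of unread
# messages in push badge counts, rather than the number of rooms with
# unread messages.
push:
  group_unread_count_by_room: false
```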
|
|
|
| |
This could be customised to trigger a different kind of notification in the future, but for now it's a normal non-highlight one.
|
|
|
| |
We don't always need the full power of a DeferredCache.
|
|
|
|
|
|
|
| |
#8567 started a span for every background process. This is good as it means all Synapse code that gets run should be in a span (unless in the sentinel logging context), but it means we generate about 15x as many spans as we did previously.
This PR attempts to reduce that number by a) not starting one for sending commands to Redis, and b) deferring starting background processes until after we're sure they're necessary.
I don't really know how much this will help.
|
| |
|
|
|
|
| |
This can happen if e.g. the room being invited into is no longer on the
server (or if all users have left the room).
|
|
|
|
|
|
|
|
|
|
|
| |
* Add `DeferredCache.get_immediate` method
A bunch of things that are currently calling `DeferredCache.get` are only
really interested in the result if it's completed. We can optimise and simplify
this case.
* Remove unused 'default' parameter to DeferredCache.get()
* another get_immediate instance
|
|
|
|
| |
content (#8545)
|
|
|
| |
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
|
| |
|
|
|
|
|
| |
rather than have everything that instantiates an LruCache manage metrics
separately, have LruCache do it itself.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
(#8536)
* Fix outbound federation with multiple event persisters.
We incorrectly notified federation senders that the minimum persisted
stream position had advanced when we got an `RDATA` from an event
persister.
Notifying federation senders already happens correctly in the
notifier, so we just delete the offending line.
* Change some interfaces to use RoomStreamToken.
By enforcing use of `RoomStreamTokens` we make it less likely that
people pass in random ints that they got from somewhere random.
|
| |
|
| |
|
|
|
|
|
|
|
| |
This converts calls like super(Foo, self) -> super().
Generated with:
sed -i "" -Ee 's/super\([^\(]+\)/super()/g' **/*.py
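For example, the rewrite turns the Python 2 compatible call into the zero-argument Python 3 form (class names here are illustrative):
```python
class Base:
    def __init__(self):
        self.ready = True


# Before: explicit class/instance arguments, as required on Python 2
class FooOld(Base):
    def __init__(self):
        super(FooOld, self).__init__()


# After: the sed rewrite leaves the zero-argument Python 3 form
class FooNew(Base):
    def __init__(self):
        super().__init__()
```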
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The idea here is that we pass the `max_stream_id` to everything, and only use the stream ID of the particular event to figure out *when* the max stream position has caught up to the event and we can notify people about it.
This is to maintain the distinction between the position of an item in the stream (i.e. event A has stream ID 513) and a token that can be used to partition the stream (i.e. give me all events after stream ID 352). This distinction becomes important when the tokens are more complicated than a single number, which they will be once we start tracking the position of multiple writers in the tokens.
The valid operations here are:
1. Is a position before or after a token
2. Fetching all events between two tokens
3. Merging multiple tokens to get the "max", i.e. `C = max(A, B)` means that for all positions P where P is before A *or* before B, then P is before C.
Future PR will change the token type to a dedicated type.
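A rough sketch of the three operations above, assuming a token that wraps a single integer stream position; this is illustrative only and not the real `RoomStreamToken`:
```python
from dataclasses import dataclass
from typing import Iterable, List


@dataclass(frozen=True)
class StreamToken:
    """Illustrative token wrapping a single stream position."""

    stream: int

    def contains(self, position: int) -> bool:
        # 1. A position is "before" this token if it is <= the token's stream id.
        return position <= self.stream

    @staticmethod
    def merge(tokens: Iterable["StreamToken"]) -> "StreamToken":
        # 3. The "max" of several tokens: any position before one of the
        #    inputs is also before the merged token.
        return StreamToken(stream=max(t.stream for t in tokens))


def events_between(events: List[dict], lower: StreamToken, upper: StreamToken) -> List[dict]:
    # 2. Fetch all events strictly after `lower` and up to (and including) `upper`.
    return [e for e in events if lower.stream < e["stream_id"] <= upper.stream]
```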
|
|
|
|
|
| |
This PR adds a confirmation step to resetting your user password between clicking the link in your email and your password actually being reset.
This is to better align our password reset flow with the industry standard of requiring a confirmation from the user after email validation.
|
|
|
|
|
| |
`pusher_pool.on_new_notifications` expected a min and max stream ID; however, that was not what we were passing in. Instead, let's just pass it the current max stream ID and have it track the last stream ID it got passed.
I believe that it mostly worked as we called the function for every event. However, it would break for events that got persisted out of order, i.e. that were persisted but the max stream ID wasn't incremented because not all preceding events had finished persisting, and push for that event would be delayed until another event got pushed to the affected users.
|
|
|
|
| |
This reverts commit e7fd336a53a4ca489cdafc389b494d5477019dc0.
|
| |
|
|
|
|
| |
marked unread (#8274)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Fixup `ALTER TABLE` database queries
Make the new columns nullable, because doing otherwise can wedge a
server with a big database, as setting a default value rewrites the
table.
* Switch back to using the notifications count in the push badge
Clients are likely to be confused if we send a push but the badge count
is the unread messages one, and not the notifications one.
* Changelog
|
| |
|
| |
|
|
|
| |
Fixes https://github.com/matrix-org/synapse/issues/6583
|
|\
| |
| | |
With an undocumented configuration setting to enable them for specific users.
|
| | |
|
| | |
|
| |\
| | |
| | |
| | | |
babolivier/new_push_rules
|
| | | |
|
| | | |
|
| |/
|/| |
|
| | |
|
| | |
|
|/ |
|
|
|
| |
This reuses the same scheme as federation sender sharding
|
| |
|
|
|
| |
We didn't do this for e.g. registration emails.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Fix spec compliance; tweaks without values are valid
(default to True, which is only concretely specified for
`highlight`, but it seems only reasonable to generalise)
* Changelog for 7766.
* Add documentation to `tweaks_for_actions`
May as well tidy up when I'm here.
* Add a test for `tweaks_for_actions`
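For reference, the spec behaviour this fixes means both of the following push-rule action lists resolve to the same tweaks; the helper name `tweaks_for_actions` is from the commit, the actions themselves are illustrative:
```python
# Both action lists should yield the tweaks {"highlight": True, "sound": "default"}.
actions_with_value = [
    "notify",
    {"set_tweak": "highlight", "value": True},
    {"set_tweak": "sound", "value": "default"},
]
actions_without_value = [
    "notify",
    {"set_tweak": "highlight"},  # no "value" given: treated as True
    {"set_tweak": "sound", "value": "default"},
]
```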
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Remove obsolete comment about ancient temporary code
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
* Implement hack to set push priority
based on whether the tweaks indicate the event might cause
effects.
* Changelog for 7765
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
* Antilint
* Add tests for push priority
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
* Update synapse/push/httppusher.py
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
* Antilint
* Remove needless invites from tests.
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
|
| |
|
| |
|
|
|
|
|
| |
* Always return an unread_count in get_unread_event_push_actions_by_room_for_user
* Don't always expect unread_count to be there so we don't take out sync entirely if something goes wrong
|
|\
| |
| | |
Implementation of https://github.com/matrix-org/matrix-doc/pull/2625
|
| |\ |
|
| | | |
|
| | | |
|
| | | |
|
| | |
| | |
| | | |
The aim here is to make it easier to reason about when streams are limited and when they're not, by moving the logic into the database functions themselves. This should mean we can kill off the `db_query_to_update_function` function.
|
| | | |
|
| |/
|/| |
|
|/ |
|
|
|
|
| |
Mainly because sometimes the email push code raises exceptions where the
stack traces have gotten lost, which is hopefully fixed by this.
|
| |
|
| |
|
|
|
|
| |
variables (#6391)
|
| |
|
|
|
| |
add a lock to try to make this metric actually work
|
| |
|
|
|
|
|
| |
This would break notifications about un-named rooms when processing
notifications in a batch.
|
|
|
|
| |
notifications. (#6966)
|
|
|
|
| |
Ensure good comprehension hygiene using flake8-comprehensions.
|
|
|
|
|
|
|
|
| |
A lot of the things we log at INFO are now a bit superfluous, so let's
make them DEBUG logs to reduce the amount we log by default.
Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
Co-authored-by: Brendan Abolivier <github@brendanabolivier.com>
|
|
|
|
|
|
|
| |
Currently we rely on `current_state_events` to figure out what rooms a
user was in and their last membership event in there. However, if the
server leaves the room then the table may be cleaned up and that
information is lost. So let's add a table that separately holds that
information.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Remove redundant python2 support code
`str.decode()` doesn't exist on python3, so presumably this code was doing
nothing
* Filter out pushers with corrupt data
When we get a row with unparsable json, drop the row, rather than returning a
row with null `data`, which will then cause an explosion later on.
* Improve logging when we can't start a pusher
Log the ID to help us understand the problem
* Make email pusher setup more robust
We know we'll have a `data` member, since that comes from the database. What we
*don't* know is if that is a dict, and if that has a `brand` member, and if
that member is a string.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The `http_proxy` and `HTTPS_PROXY` env vars can be set to a `host[:port]` value which should point to a proxy.
The address of the proxy should be excluded from IP blacklists such as the `url_preview_ip_range_blacklist`.
The proxy will then be used for
* push
* url previews
* phone-home stats
* recaptcha validation
* CAS auth validation
It will *not* be used for:
* Application Services
* Identity servers
* Outbound federation
* In worker configurations, connections from workers to masters
Fixes #4198.
|
|
|
| |
* update version of black and also fix the mypy config being overridden
|
|\
| |
| | |
Add StateGroupStorage interface
|
| | |
|
|/
|
| |
Replace every instance of `logger.warn` with `logger.warning` as the former is deprecated.
|
| |
|
|
|
|
|
|
|
| |
In ancient times Synapse would only send emails when it was notifying a user about a message they received...
Now it can do all sorts of neat things!
Change the logging so it's not just about notifications.
|
| |
|
|
|
| |
The validation links sent via email had their query parameters inserted without any URL-encoding. Surprisingly this didn't seem to cause any issues, but if a user were to put a `/` in their client_secret it could lead to problems.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
server to handle 3pid validation (#5987)
This is a combination of a few different PRs, finally all being merged into `develop`:
* #5875
* #5876
* #5868 (This one added the `/versions` flag but the flag itself was actually [backed out](https://github.com/matrix-org/synapse/commit/891afb57cbdf9867f2848341b29c75d6f35eef5a#diff-e591d42d30690ffb79f63bb726200891) in #5969. What's left is just giving /versions access to the config file, which could be useful in the future)
* #5835
* #5969
* #5940
Clients should not actually use the new registration functionality until https://github.com/matrix-org/synapse/pull/5972 is merged.
UPGRADE.rst, changelog entries and config file changes should all be reviewed closely before this PR is merged.
|
|
|
|
|
| |
Python will return a tuple whether there are parentheses around the returned values or not.
I'm just sick of my editor complaining about this all over the place :)
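A quick illustration of the Python behaviour being relied on:
```python
def with_parens():
    return (1, 2)


def without_parens():
    return 1, 2  # identical: the comma builds the tuple, not the parentheses


assert with_parens() == without_parens() == (1, 2)
```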
|
| |
|
|
|
|
|
| |
Instead of throwing a StoreError, let's break out of the processing loop and
mark the pusher as stopped.
|
| |
|
| |
|
|
|
|
|
|
|
| |
This adds a default push rule following the proposal in
[MSC2153](https://github.com/matrix-org/matrix-doc/pull/2153).
See also https://github.com/vector-im/riot-web/issues/10208
See also https://github.com/matrix-org/matrix-js-sdk/pull/976
|
| |
|
| |
|
| |
|
|\
| |
| | |
Fix email notifications for unnamed rooms with multiple people
|
| | |
|
| |
| |
| |
| |
| |
| | |
When trying to calculate a description for a room with no name but
multiple other users, we threw an exception (due to trying to subscript
the result of `dict.values()`).
|
|/ |
|
|
|
|
|
|
|
|
|
|
|
|
| |
identity server (#5377)
Sends password reset emails from the homeserver instead of proxying to the identity server. This is now the default behaviour for security reasons. If you wish to continue proxying password reset requests to the identity server you must now enable the email.trust_identity_server_for_password_resets option.
This PR is a culmination of 3 smaller PRs which have each been separately reviewed:
* #5308
* #5345
* #5368
|
|
|
|
|
|
|
|
|
|
| |
* Add a default .m.rule.tombstone push rule
In support of MSC1930: https://github.com/matrix-org/matrix-doc/pull/1930
* changelog
* Appease the changelog linter
|
|\
| |
| | |
Send out emails with links to extend an account's validity period
|
| | |
|
|/
|
|
|
|
|
|
|
|
|
| |
We start all pushers on start up and immediately start a background
process to fetch push to send. This makes start up incredibly painful
when dealing with many pushers.
Instead, let's do a quick DB check to see if there *may* be push to
send and only start the background processes for those pushers. We also
stagger starting up and doing those checks so that we don't try and
handle all pushers at once.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
| |
We're counting the number of push notifications, but not the number of badges;
I'd like to see if they are significant.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
| |
This is in preparation to refactor FrozenEvent to support different
event formats for different room versions
|
| |
|
| |
|
|
|
|
| |
... and rename it, for even more sanity
|
|
|
|
|
| |
We don't do anything with the result, so this is needed to give this code a
logcontext.
|
|
|
|
|
|
| |
This brings it into line with on_new_notifications and on_new_receipts. It
requires a little bit of hoop-jumping in EmailPusher to load the throttle
params before the first loop.
|
|
|
|
|
|
|
| |
`on_new_notifications` and `on_new_receipts` in `HttpPusher` and `EmailPusher`
now always return synchronously, so we can remove the `defer.gatherResults` on
their results, and the `run_as_background_process` wrappers can be removed too
because the PusherPool methods will now complete quickly enough.
|
|
|
|
|
|
|
|
| |
Each pusher has its own loop which runs for as long as it has work to do. This
should run in its own background thread with its own logcontext, as other
similar loops elsewhere in the system do - which means that CPU usage is
consistently attributed to that loop, rather than to whatever request happened
to start the loop.
|
|
|
|
| |
simplifies the interface to _start_pushers
|
|
|
|
|
| |
... and use it from start_pusher_by_id. This mostly simplifies
start_pusher_by_id.
|
|
|
|
|
| |
This is public (or at least, called from outside the class), so ought to have a
better name.
|
| |
|
|
|
|
|
|
|
|
|
|
| |
move the example email templates into the synapse package so that they can be
used as package data, which should mean that all of the packaging mechanisms
(pip, docker, debian, arch, etc) should now come with the example templates.
In order to grandfather in people who relied on the templates being in the old
place, check for that situation and fall back to using the defaults if the
templates directory does not exist.
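A sketch of the grandfathering check described above, with a hypothetical helper name and paths; the real code's lookup of the packaged defaults may differ:
```python
import os


def resolve_template_dir(configured_dir: str, packaged_default_dir: str) -> str:
    """Use the configured templates directory if it still exists on disk,
    otherwise fall back to the templates shipped inside the synapse package."""
    if configured_dir and os.path.isdir(configured_dir):
        return configured_dir
    return packaged_default_dir
```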
|
| |
|
| |
|
|
|
|
|
|
|
| |
First of all, avoid resetting the logcontext before running the pushers, to fix
the "Starting db txn 'get_all_updated_receipts' from sentinel context" warning.
Instead, give them their own "background process" logcontexts.
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
they're not meant to be lazy (#3307)
|
|\ |
|
| |\
| | |
| | | |
replace some iteritems with six
|
| | |
| | |
| | |
| | | |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| |/
| |
| |
| |
| |
| | |
plus a bonus b"" string I missed last time
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| | |
|
| | |
|
| | |
|
| | |
|
|/ |
|
|\
| |
| | |
make imports local
|
| |
| |
| |
| | |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
|\| |
|
| |\
| | |
| | | |
Improve exception handling for background processes
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
There were a bunch of places where we fire off a process to happen in the
background, but don't have any exception handling on it - instead relying on
the unhandled error being logged when the relevant deferred gets
garbage-collected.
This is unsatisfactory for a number of reasons:
- logging on garbage collection is best-effort and may happen some time after
the error, if at all
- it can be hard to figure out where the error actually happened.
- it is logged as a scary CRITICAL error which (a) I always forget to grep for
and (b) it's not really CRITICAL if a background process we don't care about
fails.
So this is an attempt to add exception handling to everything we fire off into
the background.
|
| |/
| |
| |
| |
| | |
In general we want defer.gatherResults to consumeErrors, rather than having
exceptions hanging around and getting logged as CRITICAL unhandled errors.
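A minimal Twisted example of the pattern being standardised on; the work being gathered here is illustrative:
```python
from twisted.internet import defer


@defer.inlineCallbacks
def gather_work():
    d1 = defer.succeed("ok")
    d2 = defer.fail(RuntimeError("boom"))

    try:
        # consumeErrors=True means the failure is handled here rather than
        # being logged later as a CRITICAL unhandled error when the Deferred
        # is garbage-collected.
        results = yield defer.gatherResults([d1, d2], consumeErrors=True)
    except defer.FirstError as e:
        # gatherResults wraps the first failure in a FirstError.
        print("handled:", e.subFailure.value)
    else:
        print("all succeeded:", results)
```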
|
|/
|
|
|
|
| |
While I was going through uses of preserve_fn for other PRs, I converted places
which only use the wrapped function once to use run_in_background, to avoid
creating the function object.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
| |
Whenever an access token is invalidated, we should remove the associated
pushers.
|
|\
| |
| | |
Remove preserve_context_over_{fn, deferred}
|
| |
| |
| |
| |
| | |
Both of these functions are known to leak logcontexts. Replace the remaining
calls to them and kill them off.
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The redact_content option never worked because it read the wrong config
section. The PR introducing it
(https://github.com/matrix-org/synapse/pull/2301) had feedback suggesting the
name be changed to not re-use the term 'redact' but this wasn't
incorporated.
This renames the option to give it a less confusing name, and also
means that people who've set the redact_content option won't suddenly
see a behaviour change when upgrading synapse, but instead can set
include_content if they want to.
This PR also updates the wording of the config comment to clarify
that this has no effect on event_id_only push.
Includes https://github.com/matrix-org/synapse/pull/2422
|
|
|
|
| |
what could possibly go wrong
|
|
|
|
| |
They're just redundant
|
| |
|
| |
|
|
|
|
|
|
| |
Rather than making the condition directly require a specific power
level. This way, the level required to notify a room can be configured
per room.
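For reference, the shape this eventually takes in the Matrix spec is a push-rule condition checked against the room's power-level `notifications` settings; the snippet below is an illustrative rendering, not the exact rule added here:
```python
# Illustrative push rule condition: only fire if the sender's power level
# meets the room's configured notification threshold.
room_notif_condition = {
    "kind": "sender_notification_permission",
    "key": "room",
}

# Illustrative m.room.power_levels content: the per-room knob that the
# condition above is checked against.
power_levels_content = {
    "users_default": 0,
    "notifications": {"room": 50},  # level required to send an @room notification
}
```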
|
|
|
|
| |
also update copyright
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
| |
Add condition type to check the sender's power level and add a base
rule using it for @channel notifications.
|
|
|
|
| |
From https://github.com/matrix-org/matrix-js-sdk/commit/ebc95667b8a5777d13e5d3c679972bedae022fd5
|
| |
|
|
|
|
|
|
|
| |
Only prepend / append word boundary characters if the search
expression starts or ends with a word character; otherwise they
don't work, because there's no word boundary between whitespace and
a non-word char.
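A sketch of that logic with a hypothetical helper (not the actual Synapse function): only attach `\b` on a side whose outermost character is a word character, because `\b` cannot match between whitespace and a non-word character.
```python
import re


def word_bounded_pattern(search_term: str) -> "re.Pattern":
    """Hypothetical helper: wrap a literal search term in word boundaries
    only where a boundary can actually match."""
    pattern = re.escape(search_term)
    if re.match(r"\w", search_term):    # starts with a word character
        pattern = r"\b" + pattern
    if re.search(r"\w$", search_term):  # ends with a word character
        pattern += r"\b"
    return re.compile(pattern, re.IGNORECASE)


# "@room" starts with a non-word character, so a leading \b would stop it
# matching after a space; only the trailing boundary is added.
assert word_bounded_pattern("@room").search("hello @room!")
```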
|
|
|
|
| |
as really it's part of the event ID
|
| |
|
|
|
|
|
|
|
| |
Param in the data dict of a pusher that tells an HTTP pusher to
send just the event_id of the event it's notifying about and the
notification counts. For clients that want to go & fetch the body
of the event themselves anyway.
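For illustration, a pusher `data` dict using this might look like the following; the `format` value is the documented `event_id_only` option and the URL is a placeholder:
```python
pusher_data = {
    "url": "https://push-gateway.example.com/_matrix/push/v1/notify",
    # Send only the event_id and unread counts; the client fetches the
    # event body itself.
    "format": "event_id_only",
}
```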
|
|
|
|
|
| |
We don't update the cache in all code paths, which causes subsequent
calls to miss the cache
|
| |
|
| |
|
|
|
|
|
| |
We know the users are joined and we can explicitly check whether they are
ignoring the user, so let's do that.
|
|\
| |
| | |
Fix caching error in the push evaluator
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Initialising `result` to `{}` in the parameters meant that every call to
_flatten_dict used the *same* target dictionary.
I'm hopeful this will fix https://github.com/matrix-org/synapse/issues/2270,
but I suspect it won't. (This code seems to have been here since forever,
unlike the bug, and I don't really think it explains the observed
behaviour). Still, it makes it hard to investigate the problem.
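The underlying Python pitfall, for reference: a mutable default argument is evaluated once, at function definition time, so every call shares the same object. A minimal reproduction and the usual fix:
```python
def flatten_buggy(d, result={}):  # one dict, shared by every call
    result.update(d)
    return result


def flatten_fixed(d, result=None):  # create a fresh dict per call instead
    if result is None:
        result = {}
    result.update(d)
    return result


assert flatten_buggy({"a": 1}) is flatten_buggy({"b": 2})      # state leaks between calls
assert flatten_fixed({"a": 1}) is not flatten_fixed({"b": 2})  # independent results
```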
|
| | |
|
|/
|
|
| |
for google/apple devices
|
| |
|
| |
|
|
|
|
|
| |
Instead of every time a new email pusher is created, as loading jinja2
templates is slow.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
| |
We add a push rule specific cache that ensures that we can reuse
calculated push rules appropriately when a user joins/leaves.
|
|
|
|
| |
This reverts commit 421fdf74609439edaaffce117436e6a6df147841.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
The _get_joined_users_from_context cache stores a mapping from user_id
to avatar_url and display_name. Instead of storing those in a dict,
store them in a namedtuple as that uses much less memory.
We also try converting the string to ascii to further reduce the size.
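A small sketch of the memory-saving change; the type name is illustrative:
```python
from collections import namedtuple

# A namedtuple has no per-instance __dict__, so caching many of these is much
# cheaper than caching one small dict per user.
ProfileInfo = namedtuple("ProfileInfo", ("avatar_url", "display_name"))

profile = ProfileInfo(avatar_url="mxc://example.org/abc123", display_name="Alice")
assert profile.display_name == "Alice"
```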
|
|
|
|
|
|
| |
Closes (SYN-714) #1385
Signed-off-by: Daniel Dent <matrixcontrib@contactdaniel.net>
|
|\
| |
| | |
Speed up cached function access
|
| | |
|
|/ |
|
| |
|
| |
|
| |
|
|
|
|
|
|
| |
This was broken when device list updates were implemented, as Mailer
could no longer instantiate an AuthHandler due to a dependency on
federation sending.
|
|\
| |
| | |
Allow configuring the Riot URL used in notification emails
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The URLs used for notification emails were hardcoded to use either matrix.to
or vector.im; but for self-hosted setups where Riot is also self-hosted it
may be desirable to allow configuring an alternative Riot URL.
Fixes #1809.
Signed-off-by: Adrian Perez de Castro <aperez@igalia.com>
|
|/ |
|
|
|
|
|
|
| |
This returns the currently joined members in the room with their display
names and avatar urls. This is more efficient than /members for large
rooms where you don't need the full events.
|
| |
|
|
|
|
|
|
|
| |
Update the last stream ordering if the
`get_unread_push_actions_for_user_in_range_for_email` returns no new
push actions. This reduces the range that it needs to check next
iteration.
|
|
|
|
|
|
|
|
| |
A lot of email push notifications were failing to be sent due to an
exception being thrown along one of the (many) paths. This was due to a
change where we moved from pulling out the full state for each room to
pulling out the event IDs for the state and separately loading the
full events when needed.
|
| |
|
| |
|
| |
|
| |
|
|\
| |
| | |
Assign state groups in state handler.
|
| | |
|
|/ |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|\
| |
| |
| | |
dbkr/contains_display_name_override
|
| |
| |
| |
| | |
slaves.
|
| | |
|
|/
|
|
| |
As per https://github.com/matrix-org/matrix-doc/pull/373 and comment
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
for the email and http pushers rather than trying to make a single
method that will work with their conflicting requirements.
The http pusher needs to get the messages in ascending stream order, and
doesn't want to miss a message.
The email pusher needs to get the messages in descending timestamp order,
and doesn't mind if it misses messages.
|
| |
|
| |
|
|
|
|
| |
Fixes https://github.com/vector-im/vector-web/issues/1654
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixes a key error where the mailer tried to get the ``msgtype`` of an
event that was missing a ``msgtype``.
```
File "synapse/push/mailer.py", line 264, in get_notif_vars
File "synapse/push/mailer.py", line 285, in get_message_vars
File ".../frozendict/__init__.py", line 10, in __getitem__
return self.__dict[key]
KeyError: 'msgtype'
```
|
|
|
|
|
|
|
|
|
|
|
| |
Loading push rules now happens in the datastore, so we can remove
the methods that loaded them outside the datastore.
The ``waiting_for_join_list`` in the federation handler isn't populated by
anything, so it can be removed.
The ``_get_members_events_txn`` method isn't called from anywhere
so can be removed.
|
| |
|