| Commit message (Collapse) | Author | Age | Files | Lines |
| |
|
|
|
|
|
| |
This adds an API for third-party plugin modules to implement account validity, so they can provide this feature instead of Synapse. The module implementing the current behaviour for this feature can be found at https://github.com/matrix-org/synapse-email-account-validity.
To allow for a smooth transition between the current feature and the new module, hooks have been added to the existing account validity endpoints to allow their behaviours to be overridden by a module.
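For context, a minimal sketch of what a module plugging into such an API might look like. The callback names and the `register_account_validity_callbacks` hook are assumptions for illustration, not a statement of the exact API surface:

```python
import time
from typing import Dict, Optional


class ExampleAccountValidityModule:
    """Sketch of a third-party module taking over account validity checks."""

    def __init__(self, config: dict, api) -> None:
        self._api = api
        self._lifetime_ms: int = config.get("account_lifetime_ms", 30 * 24 * 3600 * 1000)
        self._expirations: Dict[str, int] = {}

        # Hypothetical registration hook: ask the homeserver to consult this
        # module whenever it needs to know whether an account has expired.
        api.register_account_validity_callbacks(
            is_user_expired=self.is_user_expired,
            on_user_registration=self.on_user_registration,
        )

    async def is_user_expired(self, user_id: str) -> Optional[bool]:
        """Return True/False if this module has an opinion, or None to defer."""
        expiry = self._expirations.get(user_id)
        if expiry is None:
            return None
        return int(time.time() * 1000) > expiry

    async def on_user_registration(self, user_id: str) -> None:
        """Start the validity clock when a new user registers."""
        self._expirations[user_id] = int(time.time() * 1000) + self._lifetime_ms
```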
|
| |
|
|
|
| |
Instead of mixing them with user authentication methods.
|
|
|
| |
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* tests for push rule pattern matching
* tests for acl pattern matching
* factor out common `re.escape`
* Factor out common re.compile
* Factor out common anchoring code
* add word_boundary support to `glob_to_regex`
* Use `glob_to_regex` in push rule evaluator
NB that this drops support for character classes. I don't think anyone ever
used them.
* Improve efficiency of globs with multiple wildcards
The idea here is that we compress multiple `*` globs into a single `.*`. We
also need to consider `?`, since `*?*` is as hard to implement efficiently as
`**`.
* add assertion on regex pattern
* Fix mypy
* Simplify glob_to_regex
* Inline the glob_to_regex helper function
Signed-off-by: Dan Callahan <danc@element.io>
* Moar comments
Signed-off-by: Dan Callahan <danc@element.io>
Co-authored-by: Dan Callahan <danc@element.io>
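A rough sketch of the wildcard-compression idea described in this commit message (collapsing runs of `*` and `?` into a single regex token, and anchoring or word-bounding the result); this is illustrative rather than the exact Synapse implementation:

```python
import re

# split the glob into runs of wildcards and runs of literal text
_WILDCARD_RUN = re.compile(r"([\?\*]+)")


def glob_to_regex(glob: str, word_boundary: bool = False) -> "re.Pattern":
    """Convert a glob (with `*` and `?` wildcards) into a compiled regex."""
    chunks = []
    for chunk in _WILDCARD_RUN.split(glob):
        if not _WILDCARD_RUN.match(chunk):
            # literal text: escape any regex metacharacters
            chunks.append(re.escape(chunk))
            continue

        # a run of wildcards: `?` means "exactly one char", `*` means "any
        # number of chars", so e.g. `*?*` collapses to "one or more chars"
        qmarks = chunk.count("?")
        if "*" in chunk:
            chunks.append(".{%d,}" % qmarks)
        else:
            chunks.append(".{%d}" % qmarks)

    pattern = "".join(chunks)
    if word_boundary:
        pattern = r"\b%s\b" % pattern
    else:
        pattern = "^" + pattern + "$"
    return re.compile(pattern, re.IGNORECASE)
```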
|
| |
|
| |
|
|
|
|
|
| |
hitting an 'Invalid Token' page #74" from synapse-dinsic (#9832)
This attempts to be a direct port of https://github.com/matrix-org/synapse-dinsic/pull/74 to mainline. There was some fiddling required to deal with the changes that have been made to mainline since (mainly dealing with the split of `RegistrationWorkerStore` from `RegistrationStore`, and the changes made to `self.make_request` in test code).
|
|
|
|
|
|
|
| |
Part of #9744
Removes all redundant `# -*- coding: utf-8 -*-` lines from files, as Python 3 automatically reads source code as UTF-8 now.
`Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>`
|
| |
|
|
|
|
|
| |
- Merge 'isinstance' calls.
- Remove unnecessary dict call outside of comprehension.
- Use 'sys.exit()' calls.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Split ShardedWorkerHandlingConfig
This is so that we have a type level understanding of when it is safe to
call `get_instance(..)` (as opposed to `should_handle(..)`).
* Remove special cases in ShardedWorkerHandlingConfig.
`ShardedWorkerHandlingConfig` tried to handle the various different ways
it was possible to configure federation senders and pushers. This led to
special cases that weren't hit during testing.
To fix this, the handling of the different cases is moved from there and
`generic_worker` into the worker config class. This allows us to have
the logic in one place and allows the rest of the code to ignore the
different cases.
|
| |
|
|
|
|
|
|
|
| |
- Update black version to the latest
- Run black auto formatting over the codebase
- Run autoformatting according to [`docs/code_style.md`](https://github.com/matrix-org/synapse/blob/80d6dc9783aa80886a133756028984dbf8920168/docs/code_style.md)
- Update `code_style.md` docs around installing black to use the correct version
|
| |
|
|
|
|
|
|
| |
Fixes some exceptions if the room state isn't quite as expected.
If the expected state events aren't found, try to find them in the
historical room state. If they still aren't found, fall back to a reasonable,
although ugly, value.
|
|
|
|
|
|
| |
* Fixes a case where no summary text was returned.
* The use of messages_from_person vs. messages_from_person_and_others
was tweaked to depend on whether there was 1 sender or multiple senders,
not on whether there was 1 room or multiple rooms.
|
|
|
|
|
| |
* Enables autoescape by default for HTML files.
* Adds a new read_template method for reading a single template.
* Some logic clean-up.
|
|
|
|
| |
Treat the content as untrusted and do not assume it is of
the proper form.
|
|
|
|
|
|
| |
This allows for efficiently finding which users ignore a particular
user.
Co-authored-by: Erik Johnston <erik@matrix.org>
|
|
|
| |
The last stream token is always known and we do not need to handle `None`.
|
|
|
|
| |
This fixes a KeyError exception; after this PR the content
is just considered unknown.
|
|
|
| |
This improves type hinting and should use less memory.
|
|
|
|
| |
Removes faulty assertions and fixes the logic to ensure the max
stream token is always set.
|
| |
|
| |
|
| |
|
|
|
|
| |
Pusher URLs now must end in `/_matrix/push/v1/notify` per the
specification.
|
|
|
|
|
|
|
|
|
|
|
|
| |
Replaces the `federation_ip_range_blacklist` configuration setting with an
`ip_range_blacklist` setting with wider scope. It now applies to:
* Federation
* Identity servers
* Push notifications
* Checking key validity for third-party invite events
The old `federation_ip_range_blacklist` setting is still honored if present, but
with reduced scope (it only applies to federation and identity servers).
|
|
|
|
|
|
|
|
| |
per-message or per-room (#8820)
This PR adds a new config option to the `push` section of the homeserver config, `group_unread_count_by_room`. By default Synapse will group push notifications by room (so if you have 1000 unread messages spread across 55 rooms, you'll see an unread count on your phone of 55).
However, it is also useful to be able to send out the true count of unread messages if desired. If `group_unread_count_by_room` is set to `false`, then with the above example, one would see an unread count of 1000 (email anyone?).
|
|
|
| |
This could be customised to trigger a different kind of notification in the future, but for now it's a normal non-highlight one.
|
|
|
| |
We don't always need the full power of a DeferredCache.
|
|
|
|
|
|
|
| |
#8567 started a span for every background process. This is good as it means all Synapse code that gets run should be in a span (unless in the sentinel logging context), but it means we generate about 15x the number of spans as we did previously.
This PR attempts to reduce that number by a) not starting one when sending commands to Redis, and b) deferring starting background processes until after we're sure they're necessary.
I don't really know how much this will help.
|
| |
|
|
|
|
| |
This can happen if e.g. the room invited into is no longer on the
server (or if all users left the room).
|
|
|
|
|
|
|
|
|
|
|
| |
* Add `DeferredCache.get_immediate` method
A bunch of things that are currently calling `DeferredCache.get` are only
really interested in the result if it's completed. We can optimise and simplify
this case.
* Remove unused 'default' parameter to DeferredCache.get()
* another get_immediate instance
|
|
|
|
| |
content (#8545)
|
|
|
| |
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
|
| |
|
|
|
|
|
| |
rather than have everything that instantiates an LruCache manage metrics
separately, have LruCache do it itself.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
(#8536)
* Fix outbound federation with multiple event persisters.
We incorrectly notified federation senders that the minimum persisted
stream position had advanced when we got an `RDATA` from an event
persister.
Notifying federation senders already happens correctly in the
notifier, so we just delete the offending line.
* Change some interfaces to use RoomStreamToken.
By enforcing use of `RoomStreamTokens` we make it less likely that
people pass in random ints that they got from somewhere random.
|
| |
|
| |
|
|
|
|
|
|
|
| |
This converts calls like super(Foo, self) -> super().
Generated with:
sed -i "" -Ee 's/super\([^\(]+\)/super()/g' **/*.py
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The idea here is that we pass the `max_stream_id` to everything, and only use the stream ID of the particular event to figure out *when* the max stream position has caught up to the event and we can notify people about it.
This is to maintain the distinction between the position of an item in the stream (i.e. event A has stream ID 513) and a token that can be used to partition the stream (i.e. give me all events after stream ID 352). This distinction becomes important when the tokens are more complicated than a single number, which they will be once we start tracking the position of multiple writers in the tokens.
The valid operations here are:
1. Checking whether a position is before or after a token
2. Fetching all events between two tokens
3. Merging multiple tokens to get the "max", i.e. `C = max(A, B)` means that for all positions P where P is before A *or* before B, P is also before C.
A future PR will change the token type to a dedicated type.
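For illustration, a toy model of the position/token distinction, assuming one integer position per writer; the class and method names here are made up, not the real `RoomStreamToken`:

```python
from dataclasses import dataclass
from typing import Dict


@dataclass(frozen=True)
class SketchToken:
    """A stream token holding one persisted position per event persister."""

    positions: Dict[str, int]

    def covers(self, writer: str, stream_id: int) -> bool:
        """Is the single position (writer, stream_id) before (covered by) this token?"""
        return stream_id <= self.positions.get(writer, 0)

    def merge(self, other: "SketchToken") -> "SketchToken":
        """C = max(A, B): any position before A *or* before B is also before C."""
        writers = set(self.positions) | set(other.positions)
        return SketchToken(
            {w: max(self.positions.get(w, 0), other.positions.get(w, 0)) for w in writers}
        )


a = SketchToken({"persister1": 513})
b = SketchToken({"persister1": 400, "persister2": 352})
c = a.merge(b)
assert c.covers("persister1", 513) and c.covers("persister2", 100)
```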
|
|
|
|
|
| |
This PR adds a confirmation step to resetting your user password between clicking the link in your email and your password actually being reset.
This is to better align our password reset flow with the industry standard of requiring a confirmation from the user after email validation.
|
|
|
|
|
| |
`pusher_pool.on_new_notifications` expected a min and max stream ID; however, that was not what we were passing in. Instead, let's just pass it the current max stream ID and have it track the last stream ID it got passed.
I believe that it mostly worked as we called the function for every event. However, it would break for events that got persisted out of order, i.e., events that were persisted but where the max stream ID wasn't incremented because not all preceding events had finished persisting; push for such an event would be delayed until another event got pushed to the affected users.
|
|
|
|
| |
This reverts commit e7fd336a53a4ca489cdafc389b494d5477019dc0.
|
| |
|
|
|
|
| |
marked unread (#8274)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Fixup `ALTER TABLE` database queries
Make the new columns nullable, because doing otherwise can wedge a
server with a big database, as setting a default value rewrites the
table.
* Switch back to using the notifications count in the push badge
Clients are likely to be confused if we send a push but the badge count
is the unread messages one, and not the notifications one.
* Changelog
|
| |
|
| |
|
|
|
| |
Fixes https://github.com/matrix-org/synapse/issues/6583
|
|\
| |
| | |
With an undocumented configuration setting to enable them for specific users.
|
| | |
|
| | |
|
| |\
| | |
| | |
| | | |
babolivier/new_push_rules
|
| | | |
|
| | | |
|
| |/
|/| |
|
| | |
|
| | |
|
|/ |
|
|
|
| |
This reuses the same scheme as federation sender sharding
|
| |
|
|
|
| |
We didn't do this for e.g. registration emails.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Fix spec compliance; tweaks without values are valid
(default to True, which is only concretely specified for
`highlight`, but it seems only reasonable to generalise)
* Changelog for 7766.
* Add documentation to `tweaks_for_actions`
May as well tidy up when I'm here.
* Add a test for `tweaks_for_actions`
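A rough illustration of the generalised behaviour (a tweak without an explicit value defaults to True); the exact shapes below follow the usual push-rule action format and should be treated as an approximation:

```python
from typing import Any, Dict, List, Union


def tweaks_for_actions(actions: List[Union[str, Dict[str, Any]]]) -> Dict[str, Any]:
    """Collect `set_tweak` actions into a dict of tweak name -> value."""
    tweaks = {}
    for action in actions:
        if not isinstance(action, dict):
            continue  # plain actions like "notify" carry no tweak
        if "set_tweak" in action:
            # a tweak with no explicit value is treated as True
            tweaks[action["set_tweak"]] = action.get("value", True)
    return tweaks


# e.g. [{"set_tweak": "highlight"}, {"set_tweak": "sound", "value": "default"}]
# -> {"highlight": True, "sound": "default"}
```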
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Remove obsolete comment about ancient temporary code
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
* Implement hack to set push priority
based on whether the tweaks indicate the event might cause
effects.
* Changelog for 7765
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
* Antilint
* Add tests for push priority
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
* Update synapse/push/httppusher.py
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
* Antilint
* Remove needless invites from tests.
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
|
| |
|
| |
|
|
|
|
|
| |
* Always return an unread_count in get_unread_event_push_actions_by_room_for_user
* Don't always expect unread_count to be there so we don't take out sync entirely if something goes wrong
|
|\
| |
| | |
Implementation of https://github.com/matrix-org/matrix-doc/pull/2625
|
| |\ |
|
| | | |
|
| | | |
|
| | | |
|
| | |
| | |
| | | |
The aim here is to make it easier to reason about when streams are limited and when they're not, by moving the logic into the database functions themselves. This should mean we can kill off the `db_query_to_update_function` function.
|
| | | |
|
| |/
|/| |
|
|/ |
|
|
|
|
| |
Mainly because sometimes the email push code raises exceptions where the
stack traces have gotten lost, which is hopefully fixed by this.
|
| |
|
| |
|
|
|
|
| |
variables (#6391)
|
| |
|
|
|
| |
add a lock to try to make this metric actually work
|
| |
|
|
|
|
|
| |
This would break notifications about unnamed rooms when processing
notifications in a batch.
|
|
|
|
| |
notifications. (#6966)
|
|
|
|
| |
Ensure good comprehension hygiene using flake8-comprehensions.
|
|
|
|
|
|
|
|
| |
A lot of the things we log at INFO are now a bit superfluous, so let's
make them DEBUG logs to reduce the amount we log by default.
Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
Co-authored-by: Brendan Abolivier <github@brendanabolivier.com>
|
|
|
|
|
|
|
| |
Currently we rely on `current_state_events` to figure out what rooms a
user was in and their last membership event in there. However, if the
server leaves the room then the table may be cleaned up and that
information is lost. So let's add a table that separately holds that
information.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Remove redundant python2 support code
`str.decode()` doesn't exist on python3, so presumably this code was doing
nothing
* Filter out pushers with corrupt data
When we get a row with unparsable json, drop the row, rather than returning a
row with null `data`, which will then cause an explosion later on.
* Improve logging when we can't start a pusher
Log the ID to help us understand the problem
* Make email pusher setup more robust
We know we'll have a `data` member, since that comes from the database. What we
*don't* know is if that is a dict, and if that has a `brand` member, and if
that member is a string.
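A sketch of the row-filtering approach described above (the helper name is hypothetical, not the actual store code): rows whose `data` column fails to parse are dropped and logged instead of being returned with a null `data`.

```python
import json
import logging
from typing import Any, Dict, Iterable, List

logger = logging.getLogger(__name__)


def decode_pusher_rows(rows: Iterable[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Parse the JSON `data` column, dropping rows that are corrupt."""
    pushers = []
    for row in rows:
        try:
            row["data"] = json.loads(row["data"])
        except (TypeError, ValueError):
            # drop the row rather than propagating data=None, which would
            # blow up later when the pusher is started
            logger.warning("Invalid data in pusher %s, skipping", row.get("id"))
            continue
        pushers.append(row)
    return pushers
```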
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The `http_proxy` and `HTTPS_PROXY` env vars can be set to a `host[:port]` value which should point to a proxy.
The address of the proxy should be excluded from IP blacklists such as the `url_preview_ip_range_blacklist`.
The proxy will then be used for
* push
* url previews
* phone-home stats
* recaptcha validation
* CAS auth validation
It will *not* be used for:
* Application Services
* Identity servers
* Outbound federation
* In worker configurations, connections from workers to masters
Fixes #4198.
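A minimal sketch of how a `host[:port]` proxy value might be read from the environment (illustrative only; the real plumbing lives in the HTTP agent, and the function name here is made up):

```python
import os
from typing import Optional, Tuple


def proxy_from_env(var: str = "http_proxy") -> Optional[Tuple[str, Optional[int]]]:
    """Return (host, port) from a `host[:port]` style env var, or None if unset."""
    value = os.environ.get(var)
    if not value:
        return None
    if ":" in value:
        host, _, port = value.rpartition(":")
        return host, int(port)
    return value, None  # no port given; the caller picks its default


# e.g. http_proxy=proxy.example.com:8888 -> ("proxy.example.com", 8888)
```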
|
|
|
| |
* update version of black and also fix the mypy config being overridden
|
|\
| |
| | |
Add StateGroupStorage interface
|
| | |
|
|/
|
| |
Replace every instance of `logger.warn` with `logger.warning` as the former is deprecated.
|
| |
|
|
|
|
|
|
|
| |
In ancient times Synapse would only send emails when it was notifying a user about a message they received...
Now it can do all sorts of neat things!
Change the logging so it's not just about notifications.
|
| |
|
|
|
| |
The validation links sent via email had their query parameters inserted without any URL-encoding. Surprisingly this didn't seem to cause any issues, but if a user were to put a `/` in their client_secret it could lead to problems.
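For illustration, the kind of fix this implies: building the query string with `urllib.parse.urlencode` rather than string interpolation, so values such as a `client_secret` containing `/` or `&` are escaped (the URL and values below are examples, not the real endpoint):

```python
from urllib.parse import urlencode

client_secret = "abc/123&x=y"
token = "some_token"
sid = "42"

# unsafe: raw interpolation leaves reserved characters unescaped
unsafe = f"https://example.com/validate?token={token}&client_secret={client_secret}&sid={sid}"

# safe: urlencode percent-escapes each value
safe = "https://example.com/validate?" + urlencode(
    {"token": token, "client_secret": client_secret, "sid": sid}
)
# -> ...client_secret=abc%2F123%26x%3Dy...
```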
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
server to handle 3pid validation (#5987)
This is a combination of a few different PRs, finally all being merged into `develop`:
* #5875
* #5876
* #5868 (This one added the `/versions` flag but the flag itself was actually [backed out](https://github.com/matrix-org/synapse/commit/891afb57cbdf9867f2848341b29c75d6f35eef5a#diff-e591d42d30690ffb79f63bb726200891) in #5969. What's left is just giving /versions access to the config file, which could be useful in the future)
* #5835
* #5969
* #5940
Clients should not actually use the new registration functionality until https://github.com/matrix-org/synapse/pull/5972 is merged.
UPGRADE.rst, changelog entries and config file changes should all be reviewed closely before this PR is merged.
|
|
|
|
|
| |
Python will return a tuple whether there are parentheses around the returned values or not.
I'm just sick of my editor complaining about this all over the place :)
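A two-line reminder of why this is purely cosmetic:

```python
def with_parens():
    return (1, 2)


def without_parens():
    return 1, 2


assert with_parens() == without_parens() == (1, 2)  # both build the same tuple
```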
|
| |
|
|
|
|
|
| |
Instead of throwing a StoreError, let's break out of the processing loop and
mark the pusher as stopped.
|
| |
|
| |
|
|
|
|
|
|
|
| |
This adds a default push rule following the proposal in
[MSC2153](https://github.com/matrix-org/matrix-doc/pull/2153).
See also https://github.com/vector-im/riot-web/issues/10208
See also https://github.com/matrix-org/matrix-js-sdk/pull/976
|
| |
|
| |
|
| |
|
|\
| |
| | |
Fix email notifications for unnamed rooms with multiple people
|
| | |
|
| |
| |
| |
| |
| |
| | |
When we tried to calculate a description for a room with no name but
multiple other users, we threw an exception (due to trying to subscript
the result of `dict.values()`).
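The failure mode in miniature (illustrative, not the actual Mailer code): `dict.values()` returns a view in Python 3, which cannot be indexed directly.

```python
members = {"@a:example.com": "Alice", "@b:example.com": "Bob"}

# raises TypeError: 'dict_values' object is not subscriptable
# first = members.values()[0]

# works: materialise the view first
first = list(members.values())[0]
```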
|
|/ |
|
|
|
|
|
|
|
|
|
|
|
|
| |
identity server (#5377)
Sends password reset emails from the homeserver instead of proxying to the identity server. This is now the default behaviour for security reasons. If you wish to continue proxying password reset requests to the identity server, you must now enable the `email.trust_identity_server_for_password_resets` option.
This PR is a culmination of 3 smaller PRs which have each been separately reviewed:
* #5308
* #5345
* #5368
|
|
|
|
|
|
|
|
|
|
| |
* Add a default .m.rule.tombstone push rule
In support of MSC1930: https://github.com/matrix-org/matrix-doc/pull/1930
* changelog
* Appease the changelog linter
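For reference, the shape of such a default rule as described in MSC1930; the exact rule content below is an approximation, not copied from the implementation:

```python
TOMBSTONE_RULE = {
    "rule_id": ".m.rule.tombstone",
    "default": True,
    "enabled": True,
    "conditions": [
        # only fire for m.room.tombstone state events
        {"kind": "event_match", "key": "type", "pattern": "m.room.tombstone"},
        {"kind": "event_match", "key": "state_key", "pattern": ""},
    ],
    "actions": ["notify", {"set_tweak": "highlight", "value": True}],
}
```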
|
|\
| |
| | |
Send out emails with links to extend an account's validity period
|
| | |
|
|/
|
|
|
|
|
|
|
|
|
| |
We start all pushers on start up and immediately start a background
process to fetch push to send. This makes start up incredibly painful
when dealing with many pushers.
Instead, let's do a quick DB check to see if there *may* be push to
send and only start the background processes for those pushers. We also
stagger starting up and doing those checks so that we don't try and
handle all pushers at once.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
| |
We're counting the number of push notifications, but not the number of badges;
I'd like to see if they are significant.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
| |
This is in preparation to refactor FrozenEvent to support different
event formats for different room versions
|
| |
|
| |
|
|
|
|
| |
... and rename it, for even more sanity
|
|
|
|
|
| |
We don't do anything with the result, so this is needed to give this code a
logcontext.
|
|
|
|
|
|
| |
This brings it into line with on_new_notifications and on_new_receipts. It
requires a little bit of hoop-jumping in EmailPusher to load the throttle
params before the first loop.
|
|
|
|
|
|
|
| |
`on_new_notifications` and `on_new_receipts` in `HttpPusher` and `EmailPusher`
now always return synchronously, so we can remove the `defer.gatherResults` on
their results, and the `run_as_background_process` wrappers can be removed too
because the PusherPool methods will now complete quickly enough.
|
|
|
|
|
|
|
|
| |
Each pusher has its own loop which runs for as long as it has work to do. This
should run in its own background thread with its own logcontext, as other
similar loops elsewhere in the system do - which means that CPU usage is
consistently attributed to that loop, rather than to whatever request happened
to start the loop.
|
|
|
|
| |
simplifies the interface to _start_pushers
|
|
|
|
|
| |
... and use it from start_pusher_by_id. This mostly simplifies
start_pusher_by_id.
|
|
|
|
|
| |
This is public (or at least, called from outside the class), so ought to have a
better name.
|
| |
|
|
|
|
|
|
|
|
|
|
| |
move the example email templates into the synapse package so that they can be
used as package data, which should mean that all of the packaging mechanisms
(pip, docker, debian, arch, etc) should now come with the example templates.
In order to grandfather in people who relied on the templates being in the old
place, check for that situation and fall back to using the defaults if the
templates directory does not exist.
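A sketch of the fallback described above (the helper name and relative path are illustrative): if the configured template directory is missing, fall back to the copies shipped inside the package.

```python
import os


def resolve_template_dir(configured_dir: str) -> str:
    """Use the configured directory if it exists, else the packaged defaults."""
    if configured_dir and os.path.isdir(configured_dir):
        return configured_dir

    # fall back to the templates bundled as package data next to this module
    return os.path.join(os.path.dirname(__file__), "res", "templates")
```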
|
| |
|
| |
|
|
|
|
|
|
|
| |
First of all, avoid resetting the logcontext before running the pushers, to fix
the "Starting db txn 'get_all_updated_receipts' from sentinel context" warning.
Instead, give them their own "background process" logcontexts.
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
they're not meant to be lazy (#3307)
|
|\ |
|
| |\
| | |
| | | |
replace some iteritems with six
|
| | |
| | |
| | |
| | | |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| |/
| |
| |
| |
| |
| | |
plus a bonus b"" string I missed last time
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| | |
|
| | |
|
| | |
|
| | |
|
|/ |
|
|\
| |
| | |
make imports local
|
| |
| |
| |
| | |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
|\| |
|
| |\
| | |
| | | |
Improve exception handling for background processes
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
There were a bunch of places where we fire off a process to happen in the
background, but don't have any exception handling on it - instead relying on
the unhandled error being logged when the relevant deferred gets
garbage-collected.
This is unsatisfactory for a number of reasons:
- logging on garbage collection is best-effort and may happen some time after
the error, if at all
- it can be hard to figure out where the error actually happened.
- it is logged as a scary CRITICAL error which (a) I always forget to grep for
and (b) it's not really CRITICAL if a background process we don't care about
fails.
So this is an attempt to add exception handling to everything we fire off into
the background.
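A sketch of the pattern (the wrapper name is made up): attach an errback to every fired-off deferred so failures are logged immediately, rather than at garbage collection.

```python
import logging

from twisted.internet import defer

logger = logging.getLogger(__name__)


def run_in_background_logged(desc, f, *args, **kwargs):
    """Fire off `f` in the background, logging any failure straight away."""
    d = defer.maybeDeferred(f, *args, **kwargs)

    def log_failure(failure):
        logger.error("Background process '%s' failed: %s", desc, failure)

    d.addErrback(log_failure)
    return d
```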
|
| |/
| |
| |
| |
| | |
In general we want defer.gatherResults to consumeErrors, rather than having
exceptions hanging around and getting logged as CRITICAL unhandled errors.
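The relevant knob, for reference:

```python
from twisted.internet import defer

d1 = defer.succeed(1)
d2 = defer.fail(RuntimeError("boom"))

# consumeErrors=True: the failure is delivered (wrapped in FirstError) to the
# gathered deferred, instead of lingering unhandled on d2 and being logged as
# a CRITICAL error when d2 is garbage-collected.
d = defer.gatherResults([d1, d2], consumeErrors=True)
d.addErrback(lambda f: None)  # handle it here
```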
|
|/
|
|
|
|
| |
While I was going through uses of preserve_fn for other PRs, I converted places
which only use the wrapped function once to use run_in_background, to avoid
creating the function object.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
| |
Whenever an access token is invalidated, we should remove the associated
pushers.
|
|\
| |
| | |
Remove preserve_context_over_{fn, deferred}
|
| |
| |
| |
| |
| | |
Both of these functions are known to leak logcontexts. Replace the remaining
calls to them and kill them off.
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The redact_content option never worked because it read the wrong config
section. The PR introducing it
(https://github.com/matrix-org/synapse/pull/2301) had feedback suggesting the
name be changed to not re-use the term 'redact' but this wasn't
incorporated.
This renames the option to give it a less confusing name, and also
means that people who've set the redact_content option won't suddenly
see a behaviour change when upgrading synapse, but instead can set
include_content if they want to.
This PR also updates the wording of the config comment to clarify
that this has no effect on event_id_only push.
Includes https://github.com/matrix-org/synapse/pull/2422
|
|
|
|
| |
what could possibly go wrong
|
|
|
|
| |
They're just redundant
|
| |
|
| |
|
|
|
|
|
|
| |
Rather than making the condition directly require a specific power
level. This way the level required to notify a room can be configured
per room.
|
|
|
|
| |
also update copyright
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
| |
Add a condition type to check the sender's power level and add a base
rule using it for @channel notifications.
|
|
|
|
| |
From https://github.com/matrix-org/matrix-js-sdk/commit/ebc95667b8a5777d13e5d3c679972bedae022fd5
|
| |
|
|
|
|
|
|
|
| |
Only prepend / append word boundary characters if the search
expression starts or ends with a word character; otherwise they
don't work, because there's no word boundary between whitespace and
a non-word char.
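A quick demonstration of why the unconditional `\b` broke on non-word characters:

```python
import re

# `\b` only exists between a word char and a non-word char, so wrapping a
# pattern that starts/ends with a non-word char in \b...\b can never match.
assert re.search(r"\bcat\b", "a cat sat") is not None
assert re.search(r"\b!\b", "hey ! there") is None

# hence: only add the boundary on the sides that start/end with a word char
```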
|
|
|
|
| |
as really it's part of the event ID
|
| |
|
|
|
|
|
|
|
| |
Param in the data dict of a pusher that tells an HTTP pusher to
send just the event_id of the event it's notifying about and the
notification counts. For clients that want to go & fetch the body
of the event themselves anyway.
|
|
|
|
|
| |
We don't update the cache in all code paths, which causes subsequent
calls to miss the cache
|
| |
|
| |
|
|
|
|
|
| |
We know the users are joined and we can explicitly check whether they are
ignoring the user, so let's do that.
|
|\
| |
| | |
Fix caching error in the push evaluator
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Initialising `result` to `{}` in the parameters meant that every call to
_flatten_dict used the *same* target dictionary.
I'm hopeful this will fix https://github.com/matrix-org/synapse/issues/2270,
but I suspect it won't. (This code seems to have been here since forever,
unlike the bug, and I don't really think it explains the observed
behaviour). Still, it makes it hard to investigate the problem.
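The underlying Python pitfall, in miniature:

```python
def flatten_broken(d, result={}):   # the default dict is created once, at def time
    result.update(d)
    return result


def flatten_fixed(d, result=None):  # create a fresh dict on each call instead
    if result is None:
        result = {}
    result.update(d)
    return result


assert flatten_broken({"a": 1}) is flatten_broken({"b": 2})  # same shared dict!
assert flatten_fixed({"a": 1}) is not flatten_fixed({"b": 2})
```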
|
| | |
|
|/
|
|
| |
for google/apple devices
|
| |
|
| |
|
|
|
|
|
| |
Instead of every time a new email pusher is created, as loading jinja2
templates is slow.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
| |
We add a push rule specific cache that ensures that we can reuse
calculated push rules appropriately when a user joins/leaves.
|
|
|
|
| |
This reverts commit 421fdf74609439edaaffce117436e6a6df147841.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
The _get_joined_users_from_context cache stores a mapping from user_id
to avatar_url and display_name. Instead of storing those in a dict,
store them in a namedtuple as that uses much less memory.
We also try converting the string to ascii to further reduce the size.
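The gist of the change, sketched (field names assumed for illustration):

```python
import sys
from collections import namedtuple

ProfileInfo = namedtuple("ProfileInfo", ("avatar_url", "display_name"))

as_dict = {"avatar_url": "mxc://example/abc", "display_name": "Alice"}
as_tuple = ProfileInfo(avatar_url="mxc://example/abc", display_name="Alice")

# a namedtuple instance has no per-instance __dict__, so it is much smaller
print(sys.getsizeof(as_dict), sys.getsizeof(as_tuple))
```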
|
|
|
|
|
|
| |
Closes (SYN-714) #1385
Signed-off-by: Daniel Dent <matrixcontrib@contactdaniel.net>
|
|\
| |
| | |
Speed up cached function access
|
| | |
|
|/ |
|
| |
|
| |
|
| |
|
|
|
|
|
|
| |
This was broken when device list updates were implemented, as Mailer
could no longer instantiate an AuthHandler due to a dependency on
federation sending.
|
|\
| |
| | |
Allow configuring the Riot URL used in notification emails
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The URLs used for notification emails were hardcoded to use either matrix.to
or vector.im; but for self-hosted setups where Riot is also self-hosted it
may be desirable to allow configuring an alternative Riot URL.
Fixes #1809.
Signed-off-by: Adrian Perez de Castro <aperez@igalia.com>
|