| Commit message | Author | Age | Files | Lines |
|\
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* commit '31acc5c30':
Escape the error description on the sso_error template. (#8405)
Fix occasional "Re-starting finished log context" from keyring (#8398)
Allow existing users to login via OpenID Connect. (#8345)
Fix schema delta for servers that have not backfilled (#8396)
Fix MultiWriterIdGenerator's handling of restarts. (#8374)
s/URLs/variables in changelog
s/accidentally/incorrectly in changelog
Update changelog wording
Add type annotations to SimpleHttpClient (#8372)
Add new sequences to port DB script (#8387)
Add EventStreamPosition type (#8388)
Mark the shadow_banned column as boolean in synapse_port_db. (#8386)
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The idea is to remove some of the places we pass around `int`, where it can represent one of two things:
1. the position of an event in the stream; or
2. a token that partitions the stream, used as part of the stream tokens.
The valid operations are then:
1. did a position happen before or after a token;
2. get all events that happened before or after a token; and
3. get all events between two tokens.
(Note that we don't want to allow other operations as we want to change the tokens to be vector clocks rather than simple ints)
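As a rough sketch of the kind of dedicated type this points toward (the class names below are illustrative, not the actual classes added by the PR), the two concepts and the allowed operations might look like:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class StreamToken:
    """A point that partitions the event stream (illustrative only)."""
    stream: int


@dataclass(frozen=True)
class StreamPosition:
    """The position of a single event in the stream (illustrative only)."""
    stream: int

    def persisted_after(self, token: StreamToken) -> bool:
        # Operation 1: did this position happen before or after a token?
        return self.stream > token.stream


def events_between(events: dict, frm: StreamToken, to: StreamToken) -> list:
    # Operations 2/3: get all events that happened between two tokens.
    return [ev for pos, ev in sorted(events.items()) if frm.stream < pos <= to.stream]
```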
|
|\|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* commit '2983049a7':
Factor out `_send_dummy_event_for_room` (#8370)
Improve logging of state resolution (#8371)
Fix bug which caused failure on join with malformed membership events (#8385)
Use `async with` for ID gens (#8383)
Don't push if an user account has expired (#8353)
Do not check lint/test dependencies at runtime. (#8377)
Add note to reverse_proxy.md about disabling Apache's mod_security2 (#8375)
Changelog
|
| |
| |
| | |
this makes it possible to use from the manhole, and seems cleaner anyway.
|
|\|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* commit '837293c31':
Remove obsolete __future__ imports (#8337)
Use admin_patterns for all admin APIs. (#8331)
Fix a potential bug of UnboundLocalError (#8329)
Switch metaclass initialization to python 3-compatible syntax (#8326)
Catch-up after Federation Outage (split, 4): catch-up loop (#8272)
Use slots in attrs classes where possible (#8296)
Fix typos in comments.
Add the topic and avatar to the room details admin API (#8305)
Improve SAML error messages (#8248)
Add experimental support for sharding event persister. Again. (#8294)
Make `StreamToken.room_key` be a `RoomStreamToken` instance. (#8281)
Use TLSv1.2 for fake servers in tests (#8208)
Add /_synapse/client to the reverse proxy docs (#8227)
Clean up `Notifier.on_new_room_event` code path (#8288)
|
| |
| |
| |
| |
| |
| | |
This is *not* ready for production yet. Caveats:
1. We should write some tests...
2. The stream token that we use for events can get stalled at the minimum position of all writers. This means that new events may not be processed and e.g. sent down sync streams if a writer isn't writing or is slow.
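A toy illustration of the second caveat (hypothetical writer names, not the real `MultiWriterIdGenerator`): the token that can safely be handed to readers is the minimum position across all writers, so a single slow or idle writer holds everything back.

```python
# Positions each event-persister instance has written up to (hypothetical data).
writer_positions = {"persister-1": 1057, "persister-2": 1059, "persister-3": 842}

# Readers may only be given a token covering what *every* writer has persisted,
# otherwise they could miss events still being written by a lagging instance.
current_token = min(writer_positions.values())

print(current_token)  # 842 -- stalled at the slowest writer's position
```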
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The idea here is that we pass the `max_stream_id` to everything, and only use the stream ID of the particular event to figure out *when* the max stream position has caught up to the event and we can notify people about it.
This is to maintain the distinction between the position of an item in the stream (i.e. event A has stream ID 513) and a token that can be used to partition the stream (i.e. give me all events after stream ID 352). This distinction becomes important when the tokens are more complicated than a single number, which they will be once we start tracking the position of multiple writers in the tokens.
The valid operations here are:
1. Is a position before or after a token
2. Fetching all events between two tokens
3. Merging multiple tokens to get the "max", i.e. `C = max(A, B)` means that for all positions P where P is before A *or* before B, then P is before C.
A future PR will change the token type to a dedicated type.
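As a rough sketch of operation 3 once tokens track multiple writers (a vector-clock-like shape; the field names are made up for illustration), "max" means taking the furthest-ahead position per writer:

```python
def merge_tokens(a: dict, b: dict) -> dict:
    """Return a token C such that any position before A *or* before B is before C."""
    return {writer: max(a.get(writer, 0), b.get(writer, 0)) for writer in set(a) | set(b)}


A = {"persister-1": 513, "persister-2": 400}
B = {"persister-1": 500, "persister-2": 420}
print(merge_tokens(A, B))  # {'persister-1': 513, 'persister-2': 420}
```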
|
|\|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* commit 'a3a90ee03':
Show a confirmation page during user password reset (#8004)
Do not error when thumbnailing invalid files (#8236)
Remove some unused distributor signals (#8216)
Fixup pusher pool notifications (#8287)
Revert "Fixup pusher pool notifications"
Fixup pusher pool notifications
|
| |
| |
| |
| |
| | |
`pusher_pool.on_new_notifications` expected a min and max stream ID; however, that was not what we were passing in. Instead, let's just pass it the current max stream ID and have it track the last stream ID it got passed.
I believe that it mostly worked, as we called the function for every event. However, it would break for events that got persisted out of order, i.e. that were persisted but the max stream ID wasn't incremented because not all preceding events had finished persisting, and push for that event would be delayed until another event got pushed to the affected users.
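A minimal sketch of the "track the last stream ID it got passed" idea (a hypothetical class, not Synapse's actual `PusherPool`):

```python
class NotificationTracker:
    """Remembers the highest stream ID already handled and only processes the gap."""

    def __init__(self) -> None:
        self.last_stream_id = 0

    def on_new_notifications(self, max_stream_id: int) -> range:
        # Process everything between what we last saw and the new maximum,
        # regardless of which individual event triggered the call.
        new_ids = range(self.last_stream_id + 1, max_stream_id + 1)
        self.last_stream_id = max(self.last_stream_id, max_stream_id)
        return new_ids


tracker = NotificationTracker()
print(list(tracker.on_new_notifications(3)))  # [1, 2, 3]
print(list(tracker.on_new_notifications(5)))  # [4, 5]
```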
|
| |
| |
| |
| | |
This reverts commit e7fd336a53a4ca489cdafc389b494d5477019dc0.
|
| | |
|
|\|
| |
| |
| |
| |
| |
| |
| | |
* commit '17fa4c7ca':
Catch up after Federation Outage (split, 2): Track last successful stream ordering after transmission (#8247)
Catch-up after Federation Outage (split, 1) (#8230)
Fix type signature in simple_select_one_onecol and friends (#8241)
Stop sub-classing object (#8249)
|
| | |
|
|\|
| |
| |
| |
| | |
* commit '9f8abdcc3':
Revert "Add experimental support for sharding event persister. (#8170)" (#8242)
|
| |
| |
| |
| |
| |
| |
| | |
* Revert "Add experimental support for sharding event persister. (#8170)"
This reverts commit 82c1ee1c22a87b9e6e3179947014b0f11c0a1ac3.
* Changelog
|
|\|
| |
| |
| |
| |
| |
| |
| |
| | |
* commit '0d4f614fd':
Refactor `_get_e2e_device_keys_for_federation_query_txn` (#8225)
Add experimental support for sharding event persister. (#8170)
Add /user/{user_id}/shared_rooms/ api (#7785)
Do not try to store invalid data in the stats table (#8226)
Convert the main methods run by the reactor to async. (#8213)
|
| |
| |
| |
| |
| |
| | |
This is *not* ready for production yet. Caveats:
1. We should write some tests...
2. The stream token that we use for events can get stalled at the minimum position of all writers. This means that new events may not be processed and e.g. sent down sync streams if a writer isn't writing or is slow.
|
|\|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* commit '5bf8e5f55':
Convert the well known resolver to async (#8214)
Convert additional databases to async/await part 2 (#8200)
Make MultiWriterIDGenerator work for streams that use negative stream IDs (#8203)
Do not install setuptools 50.0. (#8212)
Move and rename `get_devices_with_keys_by_user` (#8204)
Rename `get_e2e_device_keys` to better reflect its purpose (#8205)
Add a comment about _LimitedHostnameResolver
|
| | |
|
|\|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* commit 'a466b6797':
Reduce run-times of tests by advancing the reactor less (#7757)
Update debian systemd service to use Type=notify (#8169)
Remove remaining is_guest argument uses from get_room_data calls (#8181)
Do not propagate typing notifications from shadow-banned users. (#8176)
Remove unused parameter from, and add safeguard in, get_room_data (#8174)
Add required Debian dependencies to allow docker builds on the arm platform (#8144)
Allow running mypy directly. (#8175)
Update the test federation client to handle streaming responses (#8130)
Do not propagate profile changes of shadow-banned users into rooms. (#8157)
Make SlavedIdTracker.advance have same interface as MultiWriterIDGenerator (#8171)
Convert simple_select_one and simple_select_one_onecol to async (#8162)
|
| |
| |
| |
| |
| |
| | |
Small cleanup PR.
* Removed the unused `is_guest` argument
* Added a safeguard to a (currently) impossible code path, fixing static checking at the same time.
|
|\|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* commit '56efa9ec7': (22 commits)
Fix rate limiting unit tests. (#8167)
Add functions to `MultiWriterIdGen` used by events stream (#8164)
Do not allow send_nonmember_event to be called with shadow-banned users. (#8158)
Changelog fixes
Make StreamIdGen `get_next` and `get_next_mult` async (#8161)
Wording fixes to 'name' user admin api filter (#8163)
Fix missing double-backtick in RST document
Search in columns 'name' and 'displayname' in the admin users endpoint (#7377)
Add type hints for state. (#8140)
Stop shadow-banned users from sending non-member events. (#8142)
Allow capping a room's retention policy (#8104)
Add healthcheck for default localhost 8008 port on /health endpoint. (#8147)
Fix flaky shadow-ban tests. (#8152)
Don't fail /submit_token requests on incorrect session ID if request_token_inhibit_3pid_errors is turned on (#7991)
Do not apply ratelimiting on joins to appservices (#8139)
Micro-optimisations to get_auth_chain_ids (#8132)
Allow denying or shadow banning registrations via the spam checker (#8034)
Stop shadow-banned users from sending invites. (#8095)
Be more tolerant of membership events in unknown rooms (#8110)
Improve the error code when trying to register using a name reserved for guests. (#8135)
...
|
| | |
|
| | |
|
|\|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* commit 'c9c544cda':
Remove `ChainedIdGenerator`. (#8123)
Switch the JSON byte producer from a pull to a push producer. (#8116)
Updated docs: Added note about missing 308 redirect support. (#8120)
Be stricter about JSON that is accepted by Synapse (#8106)
Convert runWithConnection to async. (#8121)
Remove the unused inlineCallbacks code-paths in the caching code (#8119)
Separate `get_current_token` into two. (#8113)
Convert events worker database to async/await. (#8071)
Add a link to the matrix-synapse-rest-password-provider. (#8111)
|
| | |
|
| | |
|
|\|
| |
| |
| |
| |
| |
| |
| |
| | |
* commit '3c01724b3':
Fix the return type of send_nonmember_events. (#8112)
Remove : from allowed client_secret chars (#8101)
Rename changelog from bugfix to misc.
Iteratively encode JSON responses to avoid blocking the reactor. (#8013)
Return the previous stream token if a non-member event is a duplicate. (#8093)
|
| | |
|
| | |
|
|\|
| |
| |
| |
| | |
* commit '53834bb9c':
Run `remove_push_actions_from_staging` in foreground (#8081)
|
| |
| |
| |
| |
| |
| |
| | |
If we got an error persisting an event, we would try to remove the push actions
asynchronously, which would lead to a 'Re-starting finished log context'
warning.
I don't think there's any need for this to be asynchronous.
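Illustratively (hypothetical function names, not the actual Synapse code paths), the fix amounts to awaiting the cleanup inside the same log context rather than scheduling it to run after the request has finished:

```python
import asyncio


async def remove_push_actions_from_staging(event_id: str) -> None:
    # Stand-in for the real database cleanup.
    await asyncio.sleep(0)
    print(f"cleaned up staged push actions for {event_id}")


async def persist_event(event_id: str) -> None:
    try:
        raise RuntimeError("failed to persist")
    except RuntimeError:
        # Run the cleanup in the foreground: by the time we re-raise, it has
        # completed, so nothing outlives the request's log context.
        await remove_push_actions_from_staging(event_id)
        raise


try:
    asyncio.run(persist_event("$event:example.com"))
except RuntimeError as e:
    print(f"persist failed: {e}")
```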
|
|\|
| |
| |
| |
| | |
* commit '5dd73d029':
Add type hints to handlers.message and events.builder (#8067)
|
| | |
|
|\|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* commit 'db131b6b2':
Change the default log config to reduce disk I/O and storage (#8040)
Implement login blocking based on SAML attributes (#8052)
Add an assertion on prev_events in create_new_client_event (#8041)
Typo
Lint
why mypy why
Lint
Incorporate review
Incorporate review
Fix PUT /pushrules to use the right rule IDs
Back out the database hack and replace it with a temporary config setting
Fix cache name
Fix cache invalidation calls
Lint
Changelog
Implement new experimental push rules with a database hack to enable them
|
| |
| |
| |
| |
| |
| | |
I think this would have caught all the cases in
https://github.com/matrix-org/synapse/issues/7642 - and I think a 500 makes
more sense here than a 403.
|
|\|
| |
| |
| |
| | |
* commit 'd4a7829b1':
Convert synapse.api to async/await (#8031)
|
| | |
|
|\|
| |
| |
| |
| | |
* commit 'a7bdf98d0':
Rename database classes to make some sense (#8033)
|
| | |
|
|/
|
|
|
|
|
|
| |
This PR allows Synapse modules making use of the `ModuleApi` to create and send non-membership events into a room. This can be useful for having modules send messages, change power levels in a room, etc. Note that they must send events through a user that's already in the room.
The non-membership event limitation is currently arbitrary, as it's another chunk of work and not necessary at the moment.
This commit has been cherry-picked from mainline.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
Fixes #2181.
The basic premise is that, when we
fail to reject an invite via the remote server, we can generate our own
out-of-band leave event and persist it as an outlier, so that we have something
to send to the client.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
While working on https://github.com/matrix-org/synapse/issues/5665 I found myself digging into the `Ratelimiter` class and seeing that it was both:
* Rather undocumented, and
* causing a *lot* of config checks
This PR attempts to refactor and comment the `Ratelimiter` class, as well as encourage config file accesses to only be done at instantiation.
Best to be reviewed commit-by-commit.
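A sketch of the "read the config once at instantiation" shape (simplified, not the real `Ratelimiter` API):

```python
import time
from typing import Dict, List


class Ratelimiter:
    """Simplified limiter: at most `burst_count` actions per `period` seconds per key."""

    def __init__(self, burst_count: int, period: float) -> None:
        # Configuration is read exactly once, here, rather than on every request.
        self.burst_count = burst_count
        self.period = period
        self._history: Dict[str, List[float]] = {}

    def can_do_action(self, key: str) -> bool:
        now = time.monotonic()
        recent = [t for t in self._history.get(key, []) if now - t < self.period]
        allowed = len(recent) < self.burst_count
        if allowed:
            recent.append(now)
        self._history[key] = recent
        return allowed


limiter = Ratelimiter(burst_count=3, period=60.0)
print([limiter.can_do_action("@user:example.com") for _ in range(4)])  # [True, True, True, False]
```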
|
|
|
| |
These are surprisingly expensive, and we only really need to do them at startup.
|
| |
|
|
|
|
|
|
|
| |
The idea here is that if an instance persists an event via the replication HTTP API it can return before we receive that event over replication, which can lead to races where code assumes that persisting an event immediately updates various caches (e.g. current state of the room).
Most of Synapse doesn't hit such races, so we don't do the waiting automagically; instead, we do so where necessary to avoid unnecessary delays. We may decide to change our minds here if it turns out there are a lot of subtle races going on.
People probably want to look at this commit by commit.
|
|
|
|
|
|
|
|
|
| |
(#7497)
Per https://github.com/matrix-org/matrix-doc/issues/1436#issuecomment-410089470 they should be omitted instead of returning null or "". They aren't marked as required in the spec.
Fixes https://github.com/matrix-org/synapse/issues/7333
Signed-off-by: Aaron Raimist <aaron@raim.ist>
|
|
|
| |
This is safe as we can now write to cache invalidation stream on workers, and is required for when we move event persistence off master.
|
|
|
| |
Add `dummy_events_threshold`, which allows configuring the number of forward extremities a room needs before Synapse sends dummy events to it.
|
| |
|
|
|
|
| |
used. (#7109)
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
... and set it everywhere it's called.
While we're here, rename it for consistency with `check_user_in_room` (and to
help check that I haven't missed any instances).
|
| |
|
|
|
|
|
| |
... to make way for a forthcoming get_room_version which returns a RoomVersion
object.
|
|
|
|
|
|
|
| |
These are easier to work with than the strings and we normally have one around.
This fixes `FederationHandler._persist_auth_tree`, which was passing a
RoomVersion object into event_auth.check instead of a string.
|
| |
|
| |
|
|
|
|
| |
create_new_client_event
|
|
|
|
| |
... to make way for a new method which just returns the event ids
|
| |
|
| |
|
|
|
|
| |
Pulling things out of config is currently surprisingly expensive.
|
|
|
|
|
|
|
|
| |
Implement part of [MSC2228](https://github.com/matrix-org/matrix-doc/pull/2228). The parts that differ are:
* the feature is hidden behind a configuration flag (`enable_ephemeral_messages`)
* self-destruction doesn't happen for state events
* only implement support for the `m.self_destruct_after` field (not the `m.self_destruct` one)
* doesn't send synthetic redactions to clients, because for this specific case we consider the clients able to destroy an event themselves; instead we just censor it (by pruning its JSON) in the database
|
|
|
|
|
|
|
|
| |
Purge jobs don't delete the latest event in a room in order to keep the forward extremity and not break the room. On the other hand, get_state_events, when given an at_token argument, calls filter_events_for_client to know if the user can see the event that matches that (sync) token. That function uses the retention policies of the events it's given to filter out those that are too old from a client's view.
Some clients, such as Riot, when loading a room, request the list of members for the latest sync token they know about, and get confused to the point of refusing to send any message if the server tells them that it can't get that information. This can happen very easily with the message retention feature turned on and a room with low activity, so that the last event sent becomes too old according to the room's retention policy.
An easy and clean fix for that issue is to discard the room's retention policies when retrieving state.
|
| |
|
|
|
| |
* update version of black and also fix the mypy config being overridden
|
|\
| |
| | |
Add StateGroupStorage interface
|
| | |
|
|/
|
| |
Replace every instance of `logger.warn` with `logger.warning` as the former is deprecated.
|
| |
|
|
|
| |
Fixes #5905
|
|
|
| |
Co-Authored-By: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
|
| |
|
|
|
|
|
| |
This is useful to allow room admins to quickly deal with a large number
of abusive messages.
|
|
|
| |
Co-Authored-By: Erik Johnston <erik@matrix.org>
|
| |
|
| |
|
|
|
|
|
| |
We already correctly filter out such redactions, but we should also deny
them over the CS API.
|
|
|
|
|
|
|
| |
`None` is not a valid event id, so queuing up a database fetch for it seems
like a silly thing to do.
I considered making `get_event` return `None` if `event_id is None`, but then
its interaction with `allow_none` seemed unintuitive, and strong typing ftw.
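The gist, as a sketch (a hypothetical `get_event` stub, not the real storage method): callers check for `None` before queuing the fetch, and the fetch itself refuses a `None` event ID so the typing stays honest.

```python
from typing import Optional

EVENTS = {"$abc:example.com": {"type": "m.room.message"}}


def get_event(event_id: str, allow_none: bool = False) -> Optional[dict]:
    # Strong typing: a missing event id is a programming error, not a lookup miss.
    assert event_id is not None, "get_event must not be called with event_id=None"
    event = EVENTS.get(event_id)
    if event is None and not allow_none:
        raise KeyError(event_id)
    return event


def get_prev_event(prev_event_id: Optional[str]) -> Optional[dict]:
    # Caller-side guard: don't queue a database fetch for a None event id.
    if prev_event_id is None:
        return None
    return get_event(prev_event_id, allow_none=True)


print(get_prev_event(None))                # None, no fetch attempted
print(get_prev_event("$abc:example.com"))  # {'type': 'm.room.message'}
```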
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
Adds new config option `cleanup_extremities_with_dummy_events` which
periodically sends dummy events to rooms with more than 10 extremities.
THIS IS REALLY EXPERIMENTAL.
|
| |
|
|\
| |
| | |
Don't bundle aggregations with events in /sync or /events or state queries
|
| | |
|
| | |
|
|/ |
|
| |
|
|
|
|
|
| |
Follow-up to #5124
Also added a bunch of checks to make sure everything (both the stuff added on #5124 and this PR) works as intended.
|
| |
|
|
|
|
| |
Collect all the things that make room-versions different to one another into
one place, so that it's easier to define new room versions.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
| |
There are a number of instances where a server or admin may puppet a
user to join/leave rooms, which we don't want to fail if the user has
not consented to the privacy policy. We fix this by adding a check to
test if the requester has an associated access_token, which is used as a
proxy to answer the question of whether the action is being done on
behalf of a real request from the user.
|
| |
|
|
|
| |
We were logging this when it was not true.
|
|\
| |
| |
| | |
erikj/redactions_eiah
|
| | |
|
| |
| |
| |
| |
| | |
This is so that everything is done in one place, making it easier to
change the event format based on room version
|
|/ |
|
|\
| |
| | |
Split up event validation between event and builder
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The validator was being run on the EventBuilder objects, and so the
validator only checked a subset of fields. With the upcoming
EventBuilder refactor even fewer fields will be there to validate.
To get around this we split the validation into those that can be run
against an EventBuilder and those run against a fully fledged event.
|
| | |
|
|/ |
|
| |
|
| |
|
| |
|
|
|
|
| |
I found these helpful in debugging my room upgrade tests.
|
|
|
|
|
|
|
|
| |
Currently when fetching state groups from the data store we make two
hits to the database: once for members and once for non-members (unless the
request is filtered to one or the other). This adds needless load to the
database, so this PR refactors the lookup to make only a single database
hit.
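Schematically (toy in-memory data, not the real state-group store), the refactor is "one query, split the rows afterwards" instead of one query per bucket:

```python
STATE_ROWS = [
    ("m.room.member", "@alice:example.com", "$m1"),
    ("m.room.member", "@bob:example.com", "$m2"),
    ("m.room.name", "", "$n1"),
    ("m.room.topic", "", "$t1"),
]


def get_state(want_members: bool, want_other: bool) -> dict:
    # Single pass over the rows (standing in for a single database hit),
    # partitioned in code rather than via two separate queries.
    members, other = {}, {}
    for etype, state_key, event_id in STATE_ROWS:
        bucket = members if etype == "m.room.member" else other
        bucket[(etype, state_key)] = event_id
    result = {}
    if want_members:
        result.update(members)
    if want_other:
        result.update(other)
    return result


print(get_state(want_members=True, want_other=True))
```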
|
|
|
|
|
|
|
| |
`on_new_notifications` and `on_new_receipts` in `HttpPusher` and `EmailPusher`
now always return synchronously, so we can remove the `defer.gatherResults` on
their results, and the `run_as_background_process` wrappers can be removed too
because the PusherPool methods will now complete quickly enough.
|
| |
|
|\
| |
| | |
Fix logcontexts for running pushers
|
| |
| |
| |
| |
| |
| |
| | |
First of all, avoid resetting the logcontext before running the pushers, to fix
the "Starting db txn 'get_all_updated_receipts' from sentinel context" warning.
Instead, give them their own "background process" logcontexts.
|
|/ |
|
| |
|
| |
|
| |
|
|\ |
|
| |
| |
| |
| |
| | |
This was missed during the transition from attribute to getter for
getting state from context.
|
| | |
|
|\|
| |
| |
| | |
erikj/client_apis_move
|
| |
| |
| |
| |
| | |
Linearizer was effectively a Limiter with max_count=1, so rather than
maintaining two sets of code, let's combine them.
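Conceptually (a toy asyncio version, not Synapse's Twisted implementation), a `Linearizer` is just a limiter whose concurrency is fixed at one:

```python
import asyncio


class Limiter:
    """Allow at most `max_count` concurrent holders per key."""

    def __init__(self, max_count: int) -> None:
        self.max_count = max_count
        self._semaphores: dict = {}

    def queue(self, key: str) -> asyncio.Semaphore:
        if key not in self._semaphores:
            self._semaphores[key] = asyncio.Semaphore(self.max_count)
        return self._semaphores[key]


class Linearizer(Limiter):
    """A Limiter with max_count=1: only one holder per key at a time."""

    def __init__(self) -> None:
        super().__init__(max_count=1)


async def worker(linearizer: Linearizer, name: str) -> None:
    async with linearizer.queue("!room:example.com"):
        print(f"{name} has the lock")
        await asyncio.sleep(0)


async def main() -> None:
    lin = Linearizer()
    await asyncio.gather(worker(lin, "a"), worker(lin, "b"))


asyncio.run(main())
```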
|
| |
| |
| |
| |
| | |
* give them names, to improve logging
* use a deque rather than a list for efficiency
|
| | |
|
| | |
|
|/
|
|
|
| |
This will let us call the read only parts from workers, and so be able
to move some APIs off of master, e.g. the `/state` API.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
|\ |
|
| |
| |
| |
| |
| | |
Make it possible to put the URI in the error message and the server notice that
are sent by the server.
|
|/
|
|
| |
... because it's shorter.
|
|
|
|
|
|
| |
Returns an M_CONSENT_NOT_GIVEN error (cf
https://github.com/matrix-org/matrix-doc/issues/1252) if consent is not yet
given.
|
|
|
|
| |
As we're soon going to change how topological_ordering works
|
|\ |
|
| |
| |
| |
| |
| |
| |
| | |
* When creating a new event, cap its depth to 2^63 - 1
* When receiving events, reject any without a sensible depth
As per https://docs.google.com/document/d/1I3fi2S-XnpO45qrpCsowZv8P8dHcNZ4fsBsbOW7KABI
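A rough sketch of the two rules (the constant comes from the commit message; the function names are illustrative):

```python
MAX_DEPTH = 2 ** 63 - 1


def depth_for_new_event(prev_event_depths) -> int:
    # When creating an event, cap the depth so it can never overflow.
    return min(max(prev_event_depths, default=0) + 1, MAX_DEPTH)


def check_received_depth(depth) -> None:
    # When receiving an event, reject anything without a sensible depth.
    if not isinstance(depth, int) or isinstance(depth, bool) or depth < 0 or depth > MAX_DEPTH:
        raise ValueError("Event has an invalid depth: %r" % (depth,))


print(depth_for_new_event([5, 7]))       # 8
print(depth_for_new_event([MAX_DEPTH]))  # capped at 2**63 - 1
check_received_depth(8)                  # ok
```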
|
| | |
|
|\ \ |
|
| |\ \
| | | |
| | | | |
reraise exceptions more carefully
|
| | |/
| | |
| | |
| | |
| | |
| | |
| | | |
We need to be careful (under python 2, at least) that when we reraise an
exception after doing some error handling, we actually reraise the original
exception rather than anything that might have been raised (and handled) during
the error handling.
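In Python 3 syntax, the careful pattern looks roughly like this (under Python 2 the final re-raise would use the three-argument `raise` or `six.reraise`; the function names here are made up):

```python
import sys


def handle_error() -> None:
    # Error handling that itself raises (and handles) another exception.
    try:
        raise KeyError("something incidental")
    except KeyError:
        pass


def do_work() -> None:
    try:
        raise ValueError("the original problem")
    except ValueError:
        # Capture the original exception *before* doing any error handling...
        exc_type, exc_value, exc_tb = sys.exc_info()
        handle_error()
        # ...so that we re-raise the original, not whatever was raised (and
        # handled) while cleaning up.
        raise exc_value.with_traceback(exc_tb)


try:
    do_work()
except ValueError as e:
    print(f"re-raised the original: {e}")
```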
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
There were a bunch of places where we fire off a process to happen in the
background, but don't have any exception handling on it - instead relying on
the unhandled error being logged when the relevant deferred gets
garbage-collected.
This is unsatisfactory for a number of reasons:
- logging on garbage collection is best-effort and may happen some time after
the error, if at all
- it can be hard to figure out where the error actually happened.
- it is logged as a scary CRITICAL error which (a) I always forget to grep for
and (b) it's not really CRITICAL if a background process we don't care about
fails.
So this is an attempt to add exception handling to everything we fire off into
the background.
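The shape of the fix, as an asyncio sketch (Synapse itself uses Twisted deferreds and its own helpers, so the names here are illustrative):

```python
import asyncio
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("background")


def fire_and_log(coro, description: str) -> asyncio.Task:
    """Start a background task, logging any failure immediately instead of
    relying on garbage collection to surface it later."""

    async def wrapper():
        try:
            await coro
        except Exception:
            # Logged promptly, with a description saying where the failure came from.
            logger.exception("Background process %r failed", description)

    return asyncio.create_task(wrapper())


async def flaky_job() -> None:
    raise RuntimeError("oops")


async def main() -> None:
    fire_and_log(flaky_job(), "flaky_job")
    await asyncio.sleep(0.01)  # give the background task a chance to run


asyncio.run(main())
```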
|
|/
|
|
|
|
| |
While I was going through uses of preserve_fn for other PRs, I converted places
which only use the wrapped function once to use run_in_background, to avoid
creating the function object.
|
|
|
|
|
|
| |
In most cases, we limit the number of prev_events for a given event to 10
events. This fixes a particular code path which created events with huge
numbers of prev_events.
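In essence (a sketch; the real selection logic has its own heuristics for which extremities to keep):

```python
MAX_PREV_EVENTS = 10


def choose_prev_events(forward_extremities):
    """Cap the prev_events of a new event at MAX_PREV_EVENTS extremities."""
    # Sorting gives a deterministic choice in this toy version.
    return sorted(forward_extremities)[:MAX_PREV_EVENTS]


extremities = [f"$extremity{i}:example.com" for i in range(50)]
print(len(choose_prev_events(extremities)))  # 10
```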
|
| |
|
| |
|
|
|
|
|
| |
Using json.dumps with custom options requires us to create a new JSONEncoder on
each call. It's more efficient to create one upfront and reuse it.
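For example (standard-library `json`; the exact encoder options Synapse uses are beside the point here):

```python
import json

# Created once at module load...
_canonical_encoder = json.JSONEncoder(sort_keys=True, separators=(",", ":"))


def encode_canonical(obj) -> str:
    # ...and reused on every call, instead of json.dumps(obj, sort_keys=True, ...)
    # rebuilding an encoder each time.
    return _canonical_encoder.encode(obj)


print(encode_canonical({"b": 2, "a": 1}))  # {"a":1,"b":2}
```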
|
|\ |
|
| | |
|
| | |
|
| |
| |
| |
| | |
Make the purge request return quickly, and allow scripts to poll for updates.
|
| |
| |
| |
| | |
Queuing up purges doesn't sound like a good thing.
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
|\ \
| | |
| | | |
Create a worker for event creation
|
| | |
| | |
| | |
| | | |
As we want to have it run on the main synapse instance
|
| | | |
|
|\ \ \
| |/ /
|/| | |
delete_local_events for purge_room_history
|
| | |
| | |
| | |
| | | |
Add a flag which makes the purger delete local events
|
| |/
| |
| |
| | |
(because it deletes more than state)
|
| |
| |
| |
| |
| |
| | |
The intention was for the check to be called as early as possible in the
request, but it was actually called just before the main ratelimit check,
so it was fairly pointless.
|
| | |
|
| | |
|
|/ |
|
|
|
|
| |
what could possibly go wrong
|
| |
|
| |
|
|\
| |
| | |
Initial Group Implementation
|
| |\ |
|
| | | |
|
|\ \ \
| |_|/
|/| | |
Unfreeze event before serializing with ujson
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
In newer versions of https://github.com/esnme/ultrajson, ujson does not
serialize frozendicts (introduced in esnme/ultrajson@53f85b1). Although
the PyPI version is still 1.35, Fedora ships with a build from commit
esnme/ultrajson@2f1d487. This causes the serialization to fail if the
distribution-provided package is used.
This runs the event through the unfreeze utility before serializing it.
Thanks to @ignatenkobrain for tracking down the root cause.
fixes #2351
Signed-off-by: Jeremy Cline <jeremy@jcline.org>
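The gist of the fix (using a simple recursive unfreeze and the standard-library `json` here, since the exact frozendict/ujson builds are what the bug depends on):

```python
import json
from types import MappingProxyType  # stand-in for frozendict


def unfreeze(o):
    """Recursively convert frozen mapping-like objects into plain dicts."""
    if isinstance(o, (dict, MappingProxyType)):
        return {k: unfreeze(v) for k, v in o.items()}
    if isinstance(o, (list, tuple)):
        return [unfreeze(v) for v in o]
    return o


frozen_event = MappingProxyType(
    {"type": "m.room.message", "content": MappingProxyType({"body": "hi"})}
)
# Serializing the frozen object directly would fail; unfreezing first works.
print(json.dumps(unfreeze(frozen_event)))
```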
|
| | |
|
| | |
|
| | |
|
|/
|
| |
Demonstration of how you might add some hooks to filter out spammy events.
|
|
|
|
|
|
|
|
| |
Since we didn't instantiate the PusherPool at start time, it could fail
at run time, which it did for some users.
This may or may not fix things for those users, but any such failure should now
happen at start time and stop the server from starting.
|
| |
|
| |
|
| |
|
|
|
|
|
| |
We add a push-rule-specific cache that ensures that we can reuse
calculated push rules appropriately when a user joins/leaves.
|
| |
|
| |
|
| |
|
|
|
|
|
| |
In `MessageHandler`, remove `yield` on call to `Notifier.on_new_room_event`:
it doesn't return anything anyway.
|
| |
|
| |
|
| |
|
|\
| |
| | |
Limit the number of events that can be created on a given room concurrently
|
| | |
|
| | |
|
|/ |
|
| |
|
| |
|
|
|
|
|
|
|
| |
If a client didn't specify a from token when paginating backwards,
synapse would attempt to query the (global) maximum topological token.
This a) doesn't make much sense since topological tokens are room specific and b) there
are no indices that let postgres do this efficiently.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|\
| |
| | |
Move the presence handler out of the Handlers object
|
| | |
|
|/
|
|
| |
s/domian/domain/g
|
| |
|
|
|
|
|
|
|
|
| |
Wait until we sign a message to get the signing key from the homeserver
config. This means that the message handler can be created without
having a signing key in the config, which means that separate processes,
like the pusher, that don't send messages and don't need to sign them can
still access the handlers.
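Schematically (a hypothetical handler shape, not the actual Synapse classes):

```python
class MessageHandler:
    def __init__(self, config: dict) -> None:
        # Only keep a reference to the config; don't touch the signing key yet,
        # so processes without one (e.g. the pusher) can still build this handler.
        self._config = config

    def _sign(self, event: dict) -> dict:
        # The signing key is looked up only when we actually need to sign.
        signing_key = self._config["signing_key"]
        event["signatures"] = {"example.com": {"ed25519:1": f"signed-with-{signing_key}"}}
        return event


pusher_side = MessageHandler(config={})  # fine: never signs anything
sender_side = MessageHandler(config={"signing_key": "k1"})
print(sender_side._sign({"type": "m.room.message"}))
```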
|
| |
|
|
|
|
| |
so we don't accidentally mail out events people shouldn't see
|
|
|
|
|
|
|
|
| |
* Remove some unused functions
* get_room_events_stream is only used in tests
* is_exclusive_room might actually be something we want
|
|
|
|
| |
collect_presencelike_data
|
| |
|
| |
|
|\
| |
| | |
Add a stream for push rule updates
|
| |\ |
|
| | | |
|
| | | |
|
| |/
|/| |
|
|/
|
|
| |
This will enable more detailed decisions
|
|\
| |
| | |
Rewrite presence for performance.
|
| | |
|
| | |
|
| | |
|