|
This reverts commit 5ee8a1c50a1b571a8a8704a59635232193b454f2.
This has now been superseded on develop by PR #9436.
|
|
This reverts commit 47d2b49e2b938a1c0c2e13830505a6d019ee65fe.
This has now been superseded on develop by PR 9472.
|
|
... otherwise, we don't get the cookie back.
|
|
rewrite XForwardedForRequest to set `isSecure()` based on
`X-Forwarded-Proto`. Also implement `getClientAddress()` while we're here.
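A rough sketch of the idea on top of Twisted's `Request` (illustrative only, not the actual Synapse implementation; real proxy-header handling needs more care, e.g. IPv6 addresses, ports, and header validation):
```python
from twisted.internet import address
from twisted.web.server import Request


class XForwardedForRequest(Request):
    """Trust X-Forwarded-* headers set by a reverse proxy (sketch)."""

    def isSecure(self) -> bool:
        # Report HTTPS if the proxy says the client connection was HTTPS,
        # even though the proxy -> Synapse hop may be plain HTTP.
        protos = self.requestHeaders.getRawHeaders(b"x-forwarded-proto", [])
        return bool(protos) and protos[-1].strip().lower() == b"https"

    def getClientAddress(self):
        # Take the right-most X-Forwarded-For entry as the client IP; fall
        # back to the transport's address when the header is absent.
        headers = self.requestHeaders.getRawHeaders(b"x-forwarded-for", [])
        if not headers:
            return super().getClientAddress()
        ip = headers[-1].split(b",")[-1].strip().decode("ascii")
        return address.IPv4Address("TCP", ip, 0)
```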
|
|
This fixes #8518 by adding, as a sanity check, a conditional function on `SyncResult` that checks whether `prev_stream_token == current_stream_token`. In `CachedResponse.set.<remove>()`, the result is immediately popped from the cache if that conditional function returns false.
This prevents a timed-out `SyncResult` (one whose `next_key` is the same stream key that produced it) from being cached. Without the check, the cache could keep returning a `SyncResult` that makes the client request the same stream key over and over again, leaving it stuck in a loop of requesting and immediately receiving the same response for as long as the cache holds those values.
Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>
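A toy sketch of the mechanism described (this `ResponseCache` and its `should_cache` callback are illustrative stand-ins, not Synapse's actual classes):
```python
from typing import Awaitable, Callable, Dict, Generic, TypeVar

T = TypeVar("T")


class ResponseCache(Generic[T]):
    """Toy response cache whose set() accepts a validity callback."""

    def __init__(self) -> None:
        self._cache: Dict[str, T] = {}

    def get(self, key: str):
        return self._cache.get(key)

    async def set(
        self,
        key: str,
        result: Awaitable[T],
        should_cache: Callable[[T], bool],
    ) -> T:
        value = await result
        self._cache[key] = value
        # Equivalent of the nested remove() in CachedResponse.set(): pop the
        # entry straight away if it should not be served to later callers.
        if not should_cache(value):
            self._cache.pop(key, None)
        return value
```
For the sync case, the callback would reject a result whose tokens show the request timed out with nothing new, so later `/sync` requests are not served the stale entry.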
|
|
* Split ShardedWorkerHandlingConfig
This is so that we have a type-level understanding of when it is safe to
call `get_instance(..)` (as opposed to `should_handle(..)`).
* Remove special cases in ShardedWorkerHandlingConfig.
`ShardedWorkerHandlingConfig` tried to handle the various different ways
it was possible to configure federation senders and pushers. This led to
special cases that weren't hit during testing.
To fix this, the handling of the different cases is moved from there and from
`generic_worker` into the worker config class. This keeps the logic in one
place and lets the rest of the code ignore the different cases (see the sketch
below).
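A minimal sketch of the kind of type-level split described above; the class names mirror the description, but the hashing and the `"master"` fallback are illustrative assumptions rather than Synapse's actual logic:
```python
from dataclasses import dataclass
from hashlib import blake2b
from typing import List


@dataclass
class ShardedWorkerHandlingConfig:
    """Knows which instances share the work; the list may legitimately be empty."""

    instances: List[str]

    def should_handle(self, instance_name: str, key: str) -> bool:
        # With no configured instances, fall back to the main process.
        if not self.instances:
            return instance_name == "master"
        return self._get_instance(key) == instance_name

    def _get_instance(self, key: str) -> str:
        dest_hash = blake2b(key.encode("utf8")).digest()
        return self.instances[int.from_bytes(dest_hash, "big") % len(self.instances)]


@dataclass
class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig):
    """Same data, but guaranteed non-empty, so routing a key to an instance is safe."""

    def __post_init__(self) -> None:
        assert self.instances, "at least one instance is required to route work"

    def get_instance(self, key: str) -> str:
        return self._get_instance(key)
```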
|
|
The idea here is to stop people forgetting to call `check_consistency`. Folks can still just pass in `None` to the new args in `build_sequence_generator`, but hopefully they won't.
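A rough sketch of what such a signature could look like; the helper class and parameter names here are illustrative assumptions, not the actual `build_sequence_generator` API:
```python
from typing import Optional


class SequenceGenerator:
    """Stand-in for a database-backed sequence (illustrative only)."""

    def __init__(self, sequence_name: str):
        self.sequence_name = sequence_name

    def check_consistency(self, db_conn, table: str, id_column: str) -> None:
        # Would compare the sequence's current value against MAX(id_column)
        # in `table` and raise if the sequence has fallen behind.
        ...


def build_sequence_generator(
    db_conn,
    sequence_name: str,
    # Making the table/column part of the required signature forces callers to
    # think about consistency; passing None explicitly opts out of the check.
    table: Optional[str],
    id_column: Optional[str],
) -> SequenceGenerator:
    seq = SequenceGenerator(sequence_name)
    if table is not None and id_column is not None:
        seq.check_consistency(db_conn, table, id_column)
    return seq
```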
|
|
This confused me for a while.
|
|
And ensure the consistency of `event_auth_chain_id`.
|
|
`uploads_path` was never actually used; most of it was removed in #6628,
but a few vestiges remained.
|
|
It should be possible to reload `synapse.target` to have the reload propagate
to all the synapse units.
|
|
This PR removes the cache for the `get_shared_rooms_for_users` storage method (the db method driving the experimental "what rooms do I share with this user?" feature: [MSC2666](https://github.com/matrix-org/matrix-doc/pull/2666)). Currently, subsequent requests to the endpoint will return the same result, even if your shared rooms with that user have changed.
The cache was added in https://github.com/matrix-org/synapse/pull/7785, but we forgot to ensure it was invalidated appropriately.
Upon attempting to invalidate it, I found that the cache had to be entirely invalidated whenever a user (remote or local) joined or left a room. This didn't make for a very useful cache, especially for a function that may or may not be called very often. Thus, I've opted to remove it instead of invalidating it.
|
|
The user directory sample config section was a little messy, and didn't adhere to our [recommended config format guidelines](https://github.com/matrix-org/synapse/blob/develop/docs/code_style.md#configuration-file-format).
This PR cleans that up a bit.
|
|
(#9402)
This PR attempts to eliminate unnecessary presence-sending work when your local server joins a room, or when a remote server joins a room your server is participating in, by processing state deltas in chunks rather than individually.
---
When your server joins a room for the first time, it requests the historical state as well. This chunk of new state is passed to the presence handler which, after filtering that state down to only membership joins, will send presence updates to homeservers for each join processed.
It turns out that we were being a bit naive and processing each event individually, and sending out presence updates for every one of those joins. Even if many different joins were users on the same server (hello IRC bridges), we'd send presence to that same homeserver for every remote user join we saw.
This PR attempts to deduplicate all of that by processing the entire batch of state deltas at once, instead of handling each join individually. We process the joins and note down which servers need which presence:
* If it was a local user join, send that user's latest presence to all servers in the room
* If it was a remote user join, send the presence for all local users in the room to that homeserver
We deduplicate by inserting all of those pending updates into a dictionary of the form:
```
{
    server_name1: {presence_update1, ...},
    server_name2: {presence_update1, presence_update2, ...}
}
```
Only after building this dict do we then start sending out presence updates.
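A rough sketch of that deduplication step (function and variable names are illustrative; the real handler operates on Synapse's state deltas and presence state objects):
```python
from typing import Dict, Iterable, Set, Tuple


def compute_presence_destinations(
    joins: Iterable[Tuple[str, bool]],  # (user_id, is_local) for each join in the batch
    local_users_in_room: Set[str],
    servers_in_room: Set[str],
) -> Dict[str, Set[str]]:
    """Build {destination server -> user_ids whose presence should be sent}."""
    updates: Dict[str, Set[str]] = {}
    for user_id, is_local in joins:
        if is_local:
            # A local user joined: every remote server in the room should get
            # that user's latest presence.
            for server in servers_in_room:
                updates.setdefault(server, set()).add(user_id)
        else:
            # A remote user joined: their server should get the presence of
            # all local users already in the room.
            joined_server = user_id.split(":", 1)[1]
            updates.setdefault(joined_server, set()).update(local_users_in_room)
    return updates
```
Because the inner values are sets, a server that shows up many times in the batch (e.g. an IRC bridge with many remote users) still ends up with each presence update recorded once, and sending only happens after the whole dict has been built.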
|