| Commit message | Author | Age | Files | Lines |
| |
|
|
|
|
|
|
|
| |
Add a (long) timeout after which a "busy" device is no longer considered online.
This does *not* match MSC3026, but is a reasonable thing for an
implementation to do.
Also expands the tests for the (unstable) busy presence state with multiple devices.
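For illustration only, a minimal sketch of the behaviour described above; the constant, its value, and the helper name are invented and do not reflect the actual Synapse code:

```python
# Invented constant and helper name, for illustration only.
BUSY_ONLINE_TIMEOUT_MS = 60 * 60 * 1000  # a "long" timeout; value chosen arbitrarily

def busy_device_still_online(last_sync_ts_ms: int, now_ms: int) -> bool:
    """A device reporting "busy" only counts as online for a bounded
    period after it last synced."""
    return now_ms - last_sync_ts_ms < BUSY_ONLINE_TIMEOUT_MS
```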
|
|
|
|
|
|
|
|
|
|
|
| |
Tracks presence on an individual per-device basis and combines
the per-device states into a single per-user state. This should help in
situations where a user has multiple devices with conflicting statuses
(e.g. one is syncing with unavailable and one is syncing with online).
The tie-breaking is done by priority:
BUSY > ONLINE > UNAVAILABLE > OFFLINE
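A minimal sketch of that tie-breaking, with invented names (the real per-device combination in Synapse is more involved):

```python
# Invented helper, for illustration; not the actual Synapse implementation.
PRIORITY = {"busy": 3, "online": 2, "unavailable": 1, "offline": 0}

def combine_device_states(device_states: dict) -> str:
    """Collapse per-device presence states into a single per-user state."""
    if not device_states:
        return "offline"
    # The highest-priority state reported by any of the user's devices wins.
    return max(device_states.values(), key=lambda state: PRIORITY.get(state, 0))

# One device syncing as unavailable and one as online -> the user shows as online.
assert combine_device_states({"DEV1": "unavailable", "DEV2": "online"}) == "online"
```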
|
|
|
|
| |
(#16223)
|
|
|
|
|
|
| |
Refactoring to pass the device ID (in addition to the user ID) through
the presence handler (specifically the `user_syncing`, `set_state`,
and `bump_presence_active_time` methods and their replication
versions).
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Simplify some of the presence code by reducing duplicated code between
worker & non-worker modes.
The main change is to push some of the logic from `user_syncing` into
`set_state`. This is done by passing a new `is_sync` flag to `set_state`,
indicating whether the user is setting their presence via a `/sync`. If this
flag is `true`, some additional logic is performed (a rough sketch follows the list below):
* Don't override `busy` presence.
* Update the `last_user_sync_ts`.
* Never update the status message.
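A hedged sketch of the `is_sync` handling listed above, using a simplified dict-based state and invented field names rather than the real handler's signature:

```python
import time

def set_state(current: dict, new_state: str, status_msg=None, is_sync: bool = False) -> dict:
    """Simplified, dict-based stand-in for the real handler's set_state."""
    updated = dict(current)
    if is_sync:
        # A /sync never overrides an explicitly-set "busy" presence...
        if current.get("state") != "busy":
            updated["state"] = new_state
        # ...always refreshes the last-sync timestamp...
        updated["last_user_sync_ts"] = int(time.time() * 1000)
        # ...and never updates the status message.
    else:
        updated["state"] = new_state
        updated["status_msg"] = status_msg
    return updated
```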
|
| |
|
|
|
| |
Reduce duplicated code & remove unused variables.
|
|
|
|
|
| |
Allow configuring the set of workers to proxy outbound federation traffic through (`outbound_federation_restricted_to`).
This is useful when you have a worker setup with `federation_sender` instances responsible for sending outbound federation requests and want to make sure *all* outbound federation traffic goes through those instances. Before this change, the generic workers would still contact other homeservers directly for things like profile lookups, backfill, etc. This PR allows you to set stricter access controls/firewall rules for all workers and only allow the `federation_sender` instances to contact the outside world.
|
|
|
|
|
|
| |
Revert "Federation outbound proxy (#15773)"
This reverts commit b07b14b494ae1dd564b4c44f844c9a9545b3d08a.
|
|
|
|
|
|
|
| |
Allow configuring the set of workers to proxy outbound federation traffic through (`outbound_federation_restricted_to`).
This is useful when you have a worker setup with `federation_sender` instances responsible for sending outbound federation requests and want to make sure *all* outbound federation traffic goes through those instances. Before this change, the generic workers would still contact other homeservers directly for things like profile lookups, backfill, etc. This PR allows you to set stricter access controls/firewall rules for all workers and only allow the `federation_sender` instances to contact the outside world.
The original code is from @erikjohnston's branches, which I've gotten into shape to merge.
|
|
|
| |
And do not allow untyped defs in tests.handlers.
|
|
|
|
| |
Use the newer `foo_instances` configuration options instead of the
deprecated flags that enable specific features (e.g. `start_pushers`).
|
| |
|
|
|
|
|
|
|
|
|
|
| |
When trying to use the MSC3026 busy presence status, the user's status
would be set back to 'online' the next time they synced. This change makes
it so that syncing does not affect a user's presence status if it
is currently set to 'busy': the busy status must instead be cleared through
the presence API.
The MSC defers to implementations on the behaviour of busy presence,
so this ought to remain compatible with the MSC.
|
| |
|
| |
|
|
|
|
|
|
|
| |
This method was confusing, and mostly existed for backwards
compatibility. Let's get rid of it.
Part of #11733
|
|
|
| |
The idea here is to take anything to do with incoming events and move it out to a separate handler, as a way of making FederationHandler smaller.
|
| |
|
|
|
| |
Signed-off-by: Dirk Klimpel dirk@klimpel.org
|
|
|
| |
Instead of mixing them with user authentication methods.
|
|
|
| |
Work on https://github.com/matrix-org/matrix-doc/pull/2716
|
|
|
|
|
|
|
| |
https://github.com/matrix-org/synapse/issues/9962 uncovered that we accidentally removed all but one of the presence updates that we store in the database when persisting multiple updates. This could cause users' presence state to be stale.
The bug was fixed in #10014, and this PR just adds a test that failed on the old code, and was used to initially verify the bug.
The test attempts to insert some presence into the database in a batch using `PresenceStore.update_presence`, and then simply pulls it out again.
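As a rough, self-contained stand-in for what such a round-trip test checks (the real test drives `PresenceStore.update_presence` on a test homeserver, not raw SQLite; table and function names here are invented):

```python
import sqlite3

def test_batch_presence_updates_all_persist():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE presence_stream (user_id TEXT PRIMARY KEY, state TEXT)")
    updates = [("@a:test", "online"), ("@b:test", "unavailable"), ("@c:test", "busy")]
    # Persist several users' presence in one batch...
    db.executemany(
        "INSERT INTO presence_stream (user_id, state) VALUES (?, ?) "
        "ON CONFLICT (user_id) DO UPDATE SET state = excluded.state",
        updates,
    )
    # ...then pull it out again and check that *every* update survived, not just one.
    rows = db.execute("SELECT user_id, state FROM presence_stream ORDER BY user_id").fetchall()
    assert rows == sorted(updates)

test_batch_presence_updates_all_persist()
```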
|
| |
|
|
|
|
|
| |
Only affects workers. Introduced in #9819.
Fixes #9899.
|
| |
|
|
|
|
|
|
|
| |
Part of #9744
Removes all redundant `# -*- coding: utf-8 -*-` lines from files, as Python 3 reads source code as UTF-8 by default.
`Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>`
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
(#9402)
This PR attempts to eliminate unnecessary presence-sending work when your local server joins a room, or when a remote server joins a room your server is participating in, by processing state deltas in chunks rather than individually.
---
When your server joins a room for the first time, it requests the historical state as well. This chunk of new state is passed to the presence handler which, after filtering that state down to only membership joins, will send presence updates to homeservers for each join processed.
It turns out that we were being a bit naive and processing each event individually, and sending out presence updates for every one of those joins. Even if many different joins were users on the same server (hello IRC bridges), we'd send presence to that same homeserver for every remote user join we saw.
This PR attempts to deduplicate all of that by processing the entire batch of state deltas at once, instead of only doing each join individually. We process the joins and note down which servers need which presence:
* If it was a local user join, send that user's latest presence to all servers in the room
* If it was a remote user join, send the presence for all local users in the room to that homeserver
We deduplicate by inserting all of those pending updates into a dictionary of the form:
```
{
    server_name1: {presence_update1, ...},
    server_name2: {presence_update1, presence_update2, ...}
}
```
Only after building this dict do we then start sending out presence updates.
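A toy sketch of building such a destination map from a batch of joins; all names and the flat input structures are invented for illustration, whereas the real code operates on state deltas inside the presence handler:

```python
from collections import defaultdict

def presence_destinations_for_joins(
    joins,                # iterable of (user_id, room_id) membership joins
    servers_in_room,      # room_id -> set of server names in the room
    local_users_in_room,  # room_id -> set of local user IDs in the room
    latest_presence,      # user_id -> that user's latest presence update
    local_server,         # our own server name
):
    """Return a mapping of destination server -> set of presence updates to send."""
    destinations = defaultdict(set)
    for user_id, room_id in joins:
        user_server = user_id.split(":", 1)[1]
        if user_server == local_server:
            # Local user joined: send their presence to every remote server in the room.
            for server in servers_in_room[room_id]:
                if server != local_server:
                    destinations[server].add(latest_presence[user_id])
        else:
            # Remote user joined: send the presence of all local users in the room to
            # that server; the set dedupes repeat joins from the same homeserver.
            for local_user in local_users_in_room[room_id]:
                destinations[user_server].add(latest_presence[local_user])
    return destinations
```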
|
|
|
|
|
|
|
| |
- Update black version to the latest
- Run black auto formatting over the codebase
- Run autoformatting according to [`docs/code_style.md`](https://github.com/matrix-org/synapse/blob/80d6dc9783aa80886a133756028984dbf8920168/docs/code_style.md)
- Update `code_style.md` docs around installing black to use the correct version
|
|
|
|
|
|
|
|
|
|
|
|
| |
Replaces the `federation_ip_range_blacklist` configuration setting with an
`ip_range_blacklist` setting with wider scope. It now applies to:
* Federation
* Identity servers
* Push notifications
* Checking key validity for third-party invite events
The old `federation_ip_range_blacklist` setting is still honored if present, but
with reduced scope (it only applies to federation and identity servers).
|
|
|
|
|
| |
Update `EventCreationHandler.create_event` to accept an auth_events param, and
use it in `_locally_reject_invite` instead of reinventing the wheel.
|
|
|
| |
All handlers now available via get_*_handler() methods on the HomeServer.
|
| |
|
| |
|
|
|
|
| |
Ensure good comprehension hygiene using flake8-comprehensions.
|
|\
| |
| | |
Pass room_version into add_hashes_and_signatures
|
| | |
|
|/
|
|
|
| |
... to make way for a forthcoming get_room_version which returns a RoomVersion
object.
|
|
|
|
|
|
|
|
|
| |
* Fix presence timeouts when a synchrotron restarts.
Handling timeouts would fail if there was an external process that had
timed out, e.g. a synchrotron restarting. This was due to a couple of
variable name typos.
Fixes #3715.
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
last_user_sync_ts
|
|
|
|
|
| |
Specifically, if currently_active remains true then we should not notify
if only the last active time changes.
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Add infrastructure to the presence handler to track sync requests in external processes
* Expire stale entries for dead external processes
* Add an HTTP endpoint for marking users as syncing
Add some docstrings and comments.
* Fixes
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
Squash-merge of PR #345 from daniel/anonymousevents
|
| |
|
| |
|
| |
|
|\ |
|
| |
| |
| |
| | |
elements) when they go online again
|
| |
| |
| |
| | |
offline, so that when we delete them from the cachemap, we can still synthesize OFFLINE events for them (SYN-261)
|
|/ |
|
| |
|
| |
|
|
|
|
| |
setup_test_homeserver function in utils.
|
| |
|
|
|
|
| |
is still presence-specific
|
|
|
|
| |
shareable
|
| |
|
|
|
|
| |
copypasta
|
|
|
|
| |
retries
|
| |
|
| |
|
| |
|
|
|
|
| |
responsible for any number of bug reports
|
| |
|
|
|
|
| |
from the transaction itself
|
| |
|
| |
|
| |
|
|\ |
|
| | |
|
|\| |
|
| |
| |
| |
| | |
TestCase; set up logging in ONE PLACE ONLY
|
| | |
|
| |
| |
| |
| | |
constructor, as DataStore's constructor will want it ready
|
|/
|
|
| |
just the 'State' test case for now
|
|
|
|
| |
hasn't been incorporated in time for launch.
|
|
|
|
| |
... key from all the Presence messages
|
| |
|
|
|
|
| |
EDU-sending test because of this
|
| |
|
|
|
|
| |
get_new_events_for_user() returning a Deferred
|
|
|
|
| |
redundant code paths
|
|
|
|
| |
different semantics
|
|
|
|
| |
legacy 'state' copy for now
|
|
|
|
|
|
| |
than /matrix to make it easier for existing websites to mount a HS in their namespace without collisions.
perl -pi -e 's#/matrix#/_matrix#g' ./cmdclient/console.py ./docs/client-server/howto.rst ./docs/client-server/specification.rst ./docs/client-server/swagger_matrix/directory ./docs/client-server/swagger_matrix/events ./docs/client-server/swagger_matrix/login ./docs/client-server/swagger_matrix/presence ./docs/client-server/swagger_matrix/profile ./docs/client-server/swagger_matrix/registration ./docs/client-server/swagger_matrix/rooms ./docs/server-server/specification.rst ./graph/graph.py ./jsfiddles/create_room_send_msg/demo.js ./jsfiddles/event_stream/demo.js ./jsfiddles/example_app/demo.js ./jsfiddles/register_login/demo.js ./jsfiddles/room_memberships/demo.js ./synapse/api/urls.py ./tests/federation/test_federation.py ./tests/handlers/test_presence.py ./tests/handlers/test_typing.py ./tests/rest/test_events.py ./tests/rest/test_presence.py ./tests/rest/test_profile.py ./tests/rest/test_rooms.py ./webclient/components/fileUpload/file-upload-service.js ./webclient/components/matrix/matrix-service.js
|
|
|
|
|
|
|
|
|
| |
* No need to inform clients of the status of remote users, as that will
arrive in due course anyway. We don't -have- the state currently, so
we'd only send an unknown message
* Remember to bump the presence serial for the event source, so the
notifiers will wake up and report it
|
|
|
|
| |
correct user. Fix presence tests.
|
| |
|
| |
|
|
|
|
| |
they still PASS
|
| |
|
|
|
|
| |
Federation as well
|
| |
|
| |
|
|
|
|
| |
resource_for_federation or resource_for_client depending on what is being tested.
|
|
|
|
| |
presenting age durations to clients/federation events
|
|
|
|
| |
rename AWAY to UNAVAILABLE
|
|
|
|
| |
copyrighter.pl whilst we're at it
|
|
|