| Commit message | Author | Files | Lines |
|
|
|
|
|
|
|
|
|
Removes the ability to configure legacy direct TCP replication. Workers now require Redis to run.
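For illustration, a minimal sketch of the Redis section that worker deployments now need in the shared configuration (the host and port below are placeholders, not part of this commit):
```yaml
# Workers now always talk to the main process via Redis; direct TCP replication is gone.
redis:
  enabled: true
  host: localhost   # assumed hostname; point this at your Redis instance
  port: 6379
```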
|
|
* Reduce number of CI jobs run on PRs
* Newsfile
* Also limit sytest jobs
* Fix typo
* Fix up
* Fixup
|
|
usable as a guide for the whole process. (#13483)
|
|
|
|
(#13671)
|
|
|
|
Summarized from @richvdh's reply at https://github.com/matrix-org/synapse/pull/13589#discussion_r961116999
|
|
It was really easy to miss the `enable_metrics: True` step with the previous language.
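As an illustrative sketch only (port and bind address are example values, not from this commit), enabling metrics involves both the flag and a dedicated listener in homeserver.yaml:
```yaml
enable_metrics: true
listeners:
  # Dedicated metrics listener for Prometheus to scrape.
  - port: 9000
    type: metrics
    bind_addresses: ['127.0.0.1']
```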
|
|
|
|
|
|
|
|
Otherwise they'll be leaked due to the filtering code only respecting
the stable identifiers for private read receipts.
|
|
This avoids doing work that will never be used (since the
resulting unread counts will never be sent in a /sync
response).
The negative of doing this is that unread counts will be
incorrect when the feature is initially enabled.
|
|
directory. (#13697)
* Add missing graph to contrib
* Update with minor but plausible changes, including positioning changes
* Newsfile
Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
|
|
Fixes #13613.
|
|
* Add monthly active users documentation
* changelog
* Tidy up notes
* more tidyup
* Rewrite #1
* link back to mau docs
* fix links
* s/appservice|AS/application service
* further review
* a newline
* Remove bit about shadow banned users.
I think talking about them is confusing, and the current text doesn't imply they get any special treatment.
* Update docs/usage/administration/monthly_active_users.md
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
* Update docs/usage/administration/monthly_active_users.md
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
|
|
|
|
|
|
|
|
|
|
Signed-off-by: Šimon Brandner <simon.bra.ag@gmail.com>
|
|
an `id_access_token` (#13241)
Fixes #13206
Signed-off-by: Jacek Kusnierz jacek.kusnierz@tum.de
|
|
|
|
|
|
|
|
Borrows some text from https://github.com/matrix-org/synapse/pull/13647
for the changelog.
|
|
The method doesn't actually do any data fetching and the method that
does, `_get_joined_profile_from_event_id`, has its own cache.
Signed off by Nick @ Beeper (@Fizzadar).
|
|
other than just servlet methods. (#13662)
|
|
|
|
|
|
|
|
|
|
|
|
The --force flag of dpkg-statoverride has been deprecated (apparently starting
with the dpkg version in Debian buster). It offers --force-all as a quick fix,
but the usage in the Debian postinst script is probably covered by
--force-statoverride-add.
Fixes: #8391
Signed-off-by: Jörg Behrmann <behrmann@physik.fu-berlin.de>
|
|
|
|
MSC3030 (#13658)
Discovered while working on https://github.com/matrix-org/synapse/pull/13589 and I had all the messages at the same timestamp in the tests.
Part of https://github.com/matrix-org/matrix-spec-proposals/pull/3030
Complement tests: https://github.com/matrix-org/complement/pull/457
|
|
|
|
This has been the same as a generic_worker since #6964, so let's get rid of it.
Fixes #3717
|
|
|
|
It can be authenticated with the worker_replication_secret setting,
but is always unencrypted.
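A minimal sketch of that option in the shared configuration (the secret value is a placeholder):
```yaml
# Shared secret used to authenticate HTTP replication traffic between the main
# process and workers. The traffic itself is still unencrypted, so keep it on a
# trusted network.
worker_replication_secret: "some-long-random-string"
```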
|
|
|
|
We incorrectly didn't use the returned `Responder` if the client had
disconnected, which meant that the resource used by the Responder
wasn't correctly released.
In particular, this exhausted the thread pools so that *all* requests
timed out.
|
|
Media downloaded as part of a URL preview is normally deleted after two days.
However, while a background database migration is running, the process is
stopped. A long-running database migration can therefore cause the media
store to fill up with old preview files.
This logic was added in #2697 to make sure that we didn't try to run the expiry
without an index on `local_media_repository.created_ts`; the original logic that
needs that index was added in #2478 (in `get_url_cache_media_before`, as
amended by 93247a424a5068b088567fa98b6990e47608b7cb), and is still present.
Given that the background update was added before Synapse v1.0.0, just drop
this check and assume the index exists.
|
|
By using `execute_values` instead of `execute_batch`.
|
|
* Fix rate limit metrics registering twice and misreporting
Fix https://github.com/matrix-org/synapse/issues/13641
* Fix lints
* Add changelog
* Document `metrics_name=None`.
|
|
|
|
Optimize how we calculate `likely_domains` during backfill because I've seen this take 17s in production just to `get_current_state` which is used to `get_domains_from_state` (see case [*2. Loading tons of events* in the `/messages` investigation issue](https://github.com/matrix-org/synapse/issues/13356)).
There are 3 ways we currently calculate hosts that are in the room:
1. `get_current_state` -> `get_domains_from_state`
- Used in `backfill` to calculate `likely_domains` and `/timestamp_to_event` because it was cargo-culted from `backfill`
- This one is being eliminated in favor of `get_current_hosts_in_room` in this PR 🕳
1. `get_current_hosts_in_room`
- Used for other federation things like sending read receipts and typing indicators
1. `get_hosts_in_room_at_events`
- Used when pushing out events over federation to other servers in the `_process_event_queue_loop`
Fix https://github.com/matrix-org/synapse/issues/13626
Part of https://github.com/matrix-org/synapse/issues/13356
Mentioned in [internal doc](https://docs.google.com/document/d/1lvUoVfYUiy6UaHB6Rb4HicjaJAU40-APue9Q4vzuW3c/edit#bookmark=id.2tvwz3yhcafh)
### Query performance
#### Before
The query from `get_current_state` sucks just because we have to get all 80k events. And we see almost the exact same performance locally trying to get all of these events (16s vs 17s):
```
synapse=# SELECT type, state_key, event_id FROM current_state_events WHERE room_id = '!OGEhHVWSdvArJzumhm:matrix.org';
Time: 16035.612 ms (00:16.036)
synapse=# SELECT type, state_key, event_id FROM current_state_events WHERE room_id = '!OGEhHVWSdvArJzumhm:matrix.org';
Time: 4243.237 ms (00:04.243)
```
But what about `get_current_hosts_in_room`? When there are 8M rows in the `current_state_events` table, the previous query in `get_current_hosts_in_room` took 13s from complete freshness (when the events were first added), but takes 930ms after a Postgres restart or 390ms when run back to back.
```sh
$ psql synapse
synapse=# \timing on
synapse=# SELECT COUNT(DISTINCT substring(state_key FROM '@[^:]*:(.*)$'))
FROM current_state_events
WHERE
type = 'm.room.member'
AND membership = 'join'
AND room_id = '!OGEhHVWSdvArJzumhm:matrix.org';
count
-------
4130
(1 row)
Time: 13181.598 ms (00:13.182)
synapse=# SELECT COUNT(*) from current_state_events where room_id = '!OGEhHVWSdvArJzumhm:matrix.org';
count
-------
80814
synapse=# SELECT COUNT(*) from current_state_events;
count
---------
8162847
synapse=# SELECT pg_size_pretty( pg_total_relation_size('current_state_events') );
pg_size_pretty
----------------
4702 MB
```
#### After
I'm not sure how long it takes from complete freshness, as I only really get that opportunity once (maybe by restarting the computer, but that's cumbersome) and it's not really relevant to normal operating times. You probably get closer to the fresh times the more access variability there is, so that the Postgres caches aren't as exact. Update: the longest I've seen this run is 6.4s, and 4.5s after a computer restart.
After a Postgres restart, it takes 330ms and running back to back takes 260ms.
```sh
$ psql synapse
synapse=# \timing on
Timing is on.
synapse=# SELECT
substring(c.state_key FROM '@[^:]*:(.*)$') as host
FROM current_state_events c
/* Get the depth of the event from the events table */
INNER JOIN events AS e USING (event_id)
WHERE
c.type = 'm.room.member'
AND c.membership = 'join'
AND c.room_id = '!OGEhHVWSdvArJzumhm:matrix.org'
GROUP BY host
ORDER BY min(e.depth) ASC;
Time: 333.800 ms
```
#### Going further
To improve things further we could add a `limit` parameter to `get_current_hosts_in_room`. Realistically, we don't need 4k domains to choose from because there is no way we're going to query that many before we either a) get an answer or b) give up.
Another thing we could do is optimize the query to use an index skip scan:
- https://wiki.postgresql.org/wiki/Loose_indexscan
- Index Skip Scan, https://commitfest.postgresql.org/37/1741/
- https://www.timescale.com/blog/how-we-made-distinct-queries-up-to-8000x-faster-on-postgresql/
|
|
Since GitHub always scrolls to the bottom of any test output, let's put the
failed tests last and hide any successful packages.
|
|
Update a bunch of the documentation for user registration, add some cross
links, etc.
|
|
If things like the signing key file are missing, let's just try to generate
them on startup.
Again, this is useful for k8s-like deployments where we just want to generate
keys on the first run.
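For illustration only (the server name and path are placeholders, not from this commit), a homeserver.yaml sketch where the signing key would now be created on first start if the file is missing:
```yaml
server_name: example.com
# If this file is missing at startup, Synapse will now generate a new signing key here.
signing_key_path: /data/example.com.signing.key
```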
|
|
* Update debian packaging to debhelper version 12
Don't call dh_installinit anymore, because it has been deprecated, and use
dh_installsystemd instead of dh_systemd_enable for the same reason.
Signed-off-by: Jörg Behrmann <behrmann@physik.fu-berlin.de>
* Drop preinst script
It was only needed because of interactions between dh_systemd_start and dh_installinit,
which have both been deprecated.
Signed-off-by: Jörg Behrmann <behrmann@physik.fu-berlin.de>
* Drop /etc/default file
It was no longer being installed.
* Remove debian/compat file
This is managed by the control file nowadays
|
|
Fixes #9927
Signed-off-by: Brad Murray brad@beeper.com
|
|
Otherwise the files of the synapse user are readable by the nobody user, which
is unsafe.
Signed-off-by: Jörg Behrmann <behrmann@physik.fu-berlin.de>
|
|
A new `registration_shared_secret_path` option. This is kinda handy for k8s deployments and things.
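A minimal sketch of the new option (the path is a placeholder):
```yaml
# Read the registration shared secret from a file instead of embedding it in the config.
registration_shared_secret_path: /data/registration_shared_secret
```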
|
|
Fixes https://github.com/matrix-org/synapse/issues/3672:
`https://localhost:8448` is virtually never right.
|
|
GitHub appears to be deprecating addProjectNextItem by not allowing it to be used alongside projectV2 to get the project ID, so this switches to using addProjectV2ItemById instead.
|
|
events (#13586)
Split off from https://github.com/matrix-org/synapse/pull/13561
Part of https://github.com/matrix-org/synapse/issues/13356
Mentioned in [internal doc](https://docs.google.com/document/d/1lvUoVfYUiy6UaHB6Rb4HicjaJAU40-APue9Q4vzuW3c/edit#bookmark=id.2tvwz3yhcafh)
|
|
|
|
`get_current_hosts_in_room` (#13605)
See https://github.com/matrix-org/synapse/pull/13575#discussion_r953023755
|
|
first (`get_users_in_room` mis-use) (#13608)
See https://github.com/matrix-org/synapse/pull/13575#discussion_r953023755
|
|
and avoid errors caused by sorting alphabetically by instance name, which can be `null` (#13585)
When loading current ids, sort by stream ID so that we don't overwrite the `current_position` of an instance with a lower stream ID than we're actually at ([discussion](https://github.com/matrix-org/synapse/pull/13585#discussion_r951795379)). Previously, it sorted alphabetically by instance name, which can be `null` and throw errors, and which more importantly accomplishes nothing.
Fixes the following startup error which is why I started looking into this area:
```
$ poetry run synapse_homeserver --config-path homeserver.yaml
****************************************************************
Error during initialisation:
'<' not supported between instances of 'NoneType' and 'str'
There may be more information in the logs.
****************************************************************
```
Somehow my database ended up looking like the following; notice that `instance_name` is `null` in the db, and we can't sort `NoneType` things. Another question is why we sometimes see `instance_name` as `null` instead of `master` in monolith mode.
```
$ psql synapse
synapse=# SELECT * FROM stream_positions;
stream_name | instance_name | stream_id
-----------------+---------------+-----------
account_data | master | 1242
events | master | 1787
to_device | master |