diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst
index 94dc650485..620dc88ce2 100644
--- a/CONTRIBUTING.rst
+++ b/CONTRIBUTING.rst
@@ -56,7 +56,7 @@ Code style
All Matrix projects have a well-defined code-style - and sometimes we've even
got as far as documenting it... For instance, synapse's code style doc lives
-at https://github.com/matrix-org/synapse/tree/master/docs/code_style.rst.
+at https://github.com/matrix-org/synapse/tree/master/docs/code_style.md.
Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it
diff --git a/INSTALL.md b/INSTALL.md
index 6bce370ea8..3eb979c362 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -373,7 +373,7 @@ is suitable for local testing, but for any practical use, you will either need
to enable a reverse proxy, or configure Synapse to expose an HTTPS port.
For information on using a reverse proxy, see
-[docs/reverse_proxy.rst](docs/reverse_proxy.rst).
+[docs/reverse_proxy.md](docs/reverse_proxy.md).
To configure Synapse to expose an HTTPS port, you will need to edit
`homeserver.yaml`, as follows:
@@ -446,7 +446,7 @@ on your server even if `enable_registration` is `false`.
## Setting up a TURN server
For reliable VoIP calls to be routed via this homeserver, you MUST configure
-a TURN server. See [docs/turn-howto.rst](docs/turn-howto.rst) for details.
+a TURN server. See [docs/turn-howto.md](docs/turn-howto.md) for details.
## URL previews
diff --git a/MANIFEST.in b/MANIFEST.in
index 919cd8a1cd..9c2902b8d2 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -38,14 +38,16 @@ exclude sytest-blacklist
include pyproject.toml
recursive-include changelog.d *
-prune .github
-prune demo/etc
-prune docker
+prune .buildkite
prune .circleci
+prune .codecov.yml
prune .coveragerc
+prune .github
prune debian
-prune .codecov.yml
-prune .buildkite
+prune demo/etc
+prune docker
+prune mypy.ini
+prune stubs
exclude jenkins*
recursive-exclude jenkins *.sh
diff --git a/README.rst b/README.rst
index bbff8de5ab..2948fd0765 100644
--- a/README.rst
+++ b/README.rst
@@ -115,7 +115,7 @@ Registering a new user from a client
By default, registration of new users via Matrix clients is disabled. To enable
it, specify ``enable_registration: true`` in ``homeserver.yaml``. (It is then
-recommended to also set up CAPTCHA - see `<docs/CAPTCHA_SETUP.rst>`_.)
+recommended to also set up CAPTCHA - see `<docs/CAPTCHA_SETUP.md>`_.)
Once ``enable_registration`` is set to ``true``, it is possible to register a
user via `riot.im <https://riot.im/app/#/register>`_ or other Matrix clients.
@@ -186,7 +186,7 @@ Almost all installations should opt to use PostreSQL. Advantages include:
synapse itself.
For information on how to install and use PostgreSQL, please see
-`docs/postgres.rst <docs/postgres.rst>`_.
+`docs/postgres.md <docs/postgres.md>`_.
.. _reverse-proxy:
@@ -201,7 +201,7 @@ It is recommended to put a reverse proxy such as
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.
-For information on configuring one, see `<docs/reverse_proxy.rst>`_.
+For information on configuring one, see `<docs/reverse_proxy.md>`_.
Identity Servers
================
diff --git a/UPGRADE.rst b/UPGRADE.rst
index dddcd75fda..5aaf804902 100644
--- a/UPGRADE.rst
+++ b/UPGRADE.rst
@@ -103,7 +103,7 @@ Upgrading to v1.2.0
===================
Some counter metrics have been renamed, with the old names deprecated. See
-`the metrics documentation <docs/metrics-howto.rst#renaming-of-metrics--deprecation-of-old-names-in-12>`_
+`the metrics documentation <docs/metrics-howto.md#renaming-of-metrics--deprecation-of-old-names-in-12>`_
for details.
Upgrading to v1.1.0
diff --git a/changelog.d/5849.doc b/changelog.d/5849.doc
new file mode 100644
index 0000000000..fbe62e8633
--- /dev/null
+++ b/changelog.d/5849.doc
@@ -0,0 +1 @@
+Convert documentation to markdown (from rst).
diff --git a/changelog.d/5897.feature b/changelog.d/5897.feature
new file mode 100644
index 0000000000..1557e559e8
--- /dev/null
+++ b/changelog.d/5897.feature
@@ -0,0 +1 @@
+Switch to using the v2 Identity Service `/lookup` API where available, with fallback to v1. (Implements [MSC2134](https://github.com/matrix-org/matrix-doc/pull/2134) plus id_access_token authentication for v2 Identity Service APIs from [MSC2140](https://github.com/matrix-org/matrix-doc/pull/2140)).
diff --git a/changelog.d/5934.feature b/changelog.d/5934.feature
new file mode 100644
index 0000000000..eae969a52a
--- /dev/null
+++ b/changelog.d/5934.feature
@@ -0,0 +1 @@
+Redact events in the database that have been redacted for a month.
diff --git a/changelog.d/5979.feature b/changelog.d/5979.feature
new file mode 100644
index 0000000000..94888aa2d3
--- /dev/null
+++ b/changelog.d/5979.feature
@@ -0,0 +1 @@
+Use the v2 Identity Service API for 3PID invites.
\ No newline at end of file
diff --git a/changelog.d/5981.feature b/changelog.d/5981.feature
new file mode 100644
index 0000000000..e39514273d
--- /dev/null
+++ b/changelog.d/5981.feature
@@ -0,0 +1 @@
+Setting metrics_flags.known_servers to True in the configuration will publish the synapse_federation_known_servers metric over Prometheus. This represents the total number of servers your server knows about (i.e. is in rooms with), including itself.
diff --git a/changelog.d/5985.feature b/changelog.d/5985.feature
new file mode 100644
index 0000000000..e5e29504af
--- /dev/null
+++ b/changelog.d/5985.feature
@@ -0,0 +1 @@
+Check at setup that opentracing is installed if it's enabled in the config.
diff --git a/changelog.d/5989.misc b/changelog.d/5989.misc
new file mode 100644
index 0000000000..9f2525fd3e
--- /dev/null
+++ b/changelog.d/5989.misc
@@ -0,0 +1 @@
+Clean up dependency checking at setup.
diff --git a/changelog.d/5995.bugfix b/changelog.d/5995.bugfix
new file mode 100644
index 0000000000..e03ab98bc6
--- /dev/null
+++ b/changelog.d/5995.bugfix
@@ -0,0 +1 @@
+Return a M_MISSING_PARAM if `sid` is not provided to `/account/3pid`.
\ No newline at end of file
diff --git a/changelog.d/5996.bugfix b/changelog.d/5996.bugfix
new file mode 100644
index 0000000000..05e31faaa2
--- /dev/null
+++ b/changelog.d/5996.bugfix
@@ -0,0 +1 @@
+federation_certificate_verification_whitelist now will not cause TypeErrors to be raised (a regression in 1.3). Additionally, it now supports internationalised domain names in their non-canonical representation.
diff --git a/changelog.d/5998.bugfix b/changelog.d/5998.bugfix
new file mode 100644
index 0000000000..9ea095103b
--- /dev/null
+++ b/changelog.d/5998.bugfix
@@ -0,0 +1 @@
+Fix room and user stats tracking.
diff --git a/changelog.d/6003.misc b/changelog.d/6003.misc
new file mode 100644
index 0000000000..4152d05f87
--- /dev/null
+++ b/changelog.d/6003.misc
@@ -0,0 +1 @@
+Add opentracing span over HTTP push processing.
diff --git a/changelog.d/6004.bugfix b/changelog.d/6004.bugfix
new file mode 100644
index 0000000000..45c179c8fd
--- /dev/null
+++ b/changelog.d/6004.bugfix
@@ -0,0 +1 @@
+Only count real users when checking for auto-creation of auto-join room.
diff --git a/changelog.d/6005.feature b/changelog.d/6005.feature
new file mode 100644
index 0000000000..ed6491d3e4
--- /dev/null
+++ b/changelog.d/6005.feature
@@ -0,0 +1 @@
+The new Prometheus metric `synapse_build_info` exposes the Python version, OS version, and Synapse version of the running server.
diff --git a/changelog.d/6009.misc b/changelog.d/6009.misc
new file mode 100644
index 0000000000..fea479e1dd
--- /dev/null
+++ b/changelog.d/6009.misc
@@ -0,0 +1 @@
+Small refactor of function arguments and docstrings in RoomMemberHandler.
\ No newline at end of file
diff --git a/changelog.d/6010.misc b/changelog.d/6010.misc
new file mode 100644
index 0000000000..0659f12ebd
--- /dev/null
+++ b/changelog.d/6010.misc
@@ -0,0 +1 @@
+Remove unused `origin` argument on FederationHandler.add_display_name_to_third_party_invite.
\ No newline at end of file
diff --git a/changelog.d/6012.feature b/changelog.d/6012.feature
new file mode 100644
index 0000000000..25425510c6
--- /dev/null
+++ b/changelog.d/6012.feature
@@ -0,0 +1 @@
+Add report_stats_endpoint option to configure where stats are reported to, if enabled. Contributed by @Sorunome.
diff --git a/changelog.d/6013.misc b/changelog.d/6013.misc
new file mode 100644
index 0000000000..939fe8c655
--- /dev/null
+++ b/changelog.d/6013.misc
@@ -0,0 +1 @@
+Compatibility with v2 Identity Service APIs other than /lookup.
\ No newline at end of file
diff --git a/changelog.d/6015.feature b/changelog.d/6015.feature
new file mode 100644
index 0000000000..42aaffced9
--- /dev/null
+++ b/changelog.d/6015.feature
@@ -0,0 +1 @@
+Add config option to increase ratelimits for room admins redacting messages.
diff --git a/changelog.d/6016.misc b/changelog.d/6016.misc
new file mode 100644
index 0000000000..91cf164714
--- /dev/null
+++ b/changelog.d/6016.misc
@@ -0,0 +1 @@
+Add a 'failure_ts' column to the 'destinations' database table.
diff --git a/changelog.d/6017.misc b/changelog.d/6017.misc
new file mode 100644
index 0000000000..5ccab9c6ca
--- /dev/null
+++ b/changelog.d/6017.misc
@@ -0,0 +1 @@
+Clean up some code in the retry logic.
diff --git a/changelog.d/6020.bugfix b/changelog.d/6020.bugfix
new file mode 100644
index 0000000000..58a7deba9d
--- /dev/null
+++ b/changelog.d/6020.bugfix
@@ -0,0 +1 @@
+Ensure support users can be registered even if MAU limit is reached.
diff --git a/changelog.d/6023.misc b/changelog.d/6023.misc
new file mode 100644
index 0000000000..d80410c22c
--- /dev/null
+++ b/changelog.d/6023.misc
@@ -0,0 +1 @@
+Fix the structured logging tests stomping on the global log configuration for subsequent tests.
diff --git a/changelog.d/6024.bugfix b/changelog.d/6024.bugfix
new file mode 100644
index 0000000000..ddad34595b
--- /dev/null
+++ b/changelog.d/6024.bugfix
@@ -0,0 +1 @@
+Fix bug where login error was shown incorrectly on SSO fallback login.
diff --git a/changelog.d/6025.bugfix b/changelog.d/6025.bugfix
new file mode 100644
index 0000000000..50d7f9aab5
--- /dev/null
+++ b/changelog.d/6025.bugfix
@@ -0,0 +1 @@
+Fix bug in calculating the federation retry backoff period.
\ No newline at end of file
diff --git a/changelog.d/6026.feature b/changelog.d/6026.feature
new file mode 100644
index 0000000000..2489ff09b5
--- /dev/null
+++ b/changelog.d/6026.feature
@@ -0,0 +1 @@
+Stop sending federation transactions to servers which have been down for a long time.
diff --git a/changelog.d/6029.bugfix b/changelog.d/6029.bugfix
new file mode 100644
index 0000000000..9ea095103b
--- /dev/null
+++ b/changelog.d/6029.bugfix
@@ -0,0 +1 @@
+Fix room and user stats tracking.
diff --git a/changelog.d/6032.misc b/changelog.d/6032.misc
new file mode 100644
index 0000000000..ec5b5eb881
--- /dev/null
+++ b/changelog.d/6032.misc
@@ -0,0 +1 @@
+Add developer documentation for using SAML2.
diff --git a/docs/CAPTCHA_SETUP.rst b/docs/CAPTCHA_SETUP.md
index 0c22ee4ff6..5f9057530b 100644
--- a/docs/CAPTCHA_SETUP.rst
+++ b/docs/CAPTCHA_SETUP.md
@@ -1,30 +1,31 @@
+# Overview
Captcha can be enabled for this home server. This file explains how to do that.
The captcha mechanism used is Google's ReCaptcha. This requires API keys from Google.
-Getting keys
-------------
+## Getting keys
+
Requires a public/private key pair from:
-https://developers.google.com/recaptcha/
+<https://developers.google.com/recaptcha/>
Must be a reCAPTCHA v2 key using the "I'm not a robot" Checkbox option
-Setting ReCaptcha Keys
-----------------------
+## Setting ReCaptcha Keys
+
The keys are a config option on the home server config. If they are not
-visible, you can generate them via --generate-config. Set the following value::
+visible, you can generate them via `--generate-config`. Set the following values:
+
+ recaptcha_public_key: YOUR_PUBLIC_KEY
+ recaptcha_private_key: YOUR_PRIVATE_KEY
- recaptcha_public_key: YOUR_PUBLIC_KEY
- recaptcha_private_key: YOUR_PRIVATE_KEY
+In addition, you MUST enable captchas via:
-In addition, you MUST enable captchas via::
+ enable_registration_captcha: true
- enable_registration_captcha: true
+## Configuring IP used for auth
-Configuring IP used for auth
-----------------------------
The ReCaptcha API requires that the IP address of the user who solved the
captcha is sent. If the client is connecting through a proxy or load balancer,
-it may be required to use the X-Forwarded-For (XFF) header instead of the origin
-IP address. This can be configured using the x_forwarded directive in the
+it may be required to use the `X-Forwarded-For` (XFF) header instead of the origin
+IP address. This can be configured using the `x_forwarded` directive in the
listeners section of the homeserver.yaml configuration file.
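+
+For example, a client listener with `x_forwarded` enabled might look
+like this (a minimal sketch only; the port and resource names here are
+illustrative rather than a recommended configuration):
+
+    listeners:
+      - port: 8008
+        type: http
+        x_forwarded: true
+        resources:
+          - names: [client]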
diff --git a/docs/MSC1711_certificates_FAQ.md b/docs/MSC1711_certificates_FAQ.md
index 83497380df..80bd1294c7 100644
--- a/docs/MSC1711_certificates_FAQ.md
+++ b/docs/MSC1711_certificates_FAQ.md
@@ -147,7 +147,7 @@ your domain, you can simply route all traffic through the reverse proxy by
updating the SRV record appropriately (or removing it, if the proxy listens on
8448).
-See [reverse_proxy.rst](reverse_proxy.rst) for information on setting up a
+See [reverse_proxy.md](reverse_proxy.md) for information on setting up a
reverse proxy.
#### Option 3: add a .well-known file to delegate your matrix traffic
@@ -319,7 +319,7 @@ We no longer actively recommend against using a reverse proxy. Many admins will
find it easier to direct federation traffic to a reverse proxy and manage their
own TLS certificates, and this is a supported configuration.
-See [reverse_proxy.rst](reverse_proxy.rst) for information on setting up a
+See [reverse_proxy.md](reverse_proxy.md) for information on setting up a
reverse proxy.
### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 0000000000..3c6ea48c66
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,7 @@
+# Synapse Documentation
+
+This directory contains documentation specific to the `synapse` homeserver.
+
+All matrix-generic documentation now lives in its own project, located at [matrix-org/matrix-doc](https://github.com/matrix-org/matrix-doc)
+
+(Note: some items here may be moved to [matrix-org/matrix-doc](https://github.com/matrix-org/matrix-doc) at some point in the future.)
diff --git a/docs/README.rst b/docs/README.rst
deleted file mode 100644
index 3012da8b19..0000000000
--- a/docs/README.rst
+++ /dev/null
@@ -1,6 +0,0 @@
-All matrix-generic documentation now lives in its own project at
-
-github.com/matrix-org/matrix-doc.git
-
-Only Synapse implementation-specific documentation lives here now
-(together with some older stuff will be shortly migrated over to matrix-doc)
diff --git a/docs/ancient_architecture_notes.md b/docs/ancient_architecture_notes.md
new file mode 100644
index 0000000000..3ea8976cc7
--- /dev/null
+++ b/docs/ancient_architecture_notes.md
@@ -0,0 +1,81 @@
+> **Warning**
+> These architecture notes are spectacularly old, and date back
+> to when Synapse was just federation code in isolation. This should be
+> merged into the main spec.
+
+# Server to Server
+
+## Server to Server Stack
+
+To use the server to server stack, home servers should only need to
+interact with the Messaging layer.
+
+The server to server side of things is designed into 4 distinct layers:
+
+1. Messaging Layer
+2. Pdu Layer
+3. Transaction Layer
+4. Transport Layer
+
+Where the bottom (the transport layer) is what talks to the internet via
+HTTP, and the top (the messaging layer) talks to the rest of the Home
+Server with a domain specific API.
+
+1. **Messaging Layer**
+
+ This is what the rest of the Home Server hits to send messages, join rooms,
+    etc. It also allows you to register callbacks for when it gets notified by
+ lower levels that e.g. a new message has been received.
+
+ It is responsible for serializing requests to send to the data
+ layer, and to parse requests received from the data layer.
+
+2. **PDU Layer**
+
+ This layer handles:
+
+ - duplicate `pdu_id`'s - i.e., it makes sure we ignore them.
+ - responding to requests for a given `pdu_id`
+ - responding to requests for all metadata for a given context (i.e. room)
+ - handling incoming backfill requests
+
+ So it has to parse incoming messages to discover which are metadata and
+ which aren't, and has to correctly clobber existing metadata where
+ appropriate.
+
+    For incoming PDUs, it has to check the PDUs it references to see
+    if we have missed any. If we have, we go and ask someone (another
+    home server) for it.
+
+3. **Transaction Layer**
+
+    This layer makes incoming requests idempotent; i.e., it stores
+    which transaction ids we have seen and what our responses were.
+    If we have already seen a message with the given transaction id,
+    we do not notify higher levels but simply respond with the
+    previous response. (A small illustrative sketch of this follows
+    the list below.)
+
+ `transaction_id` is from "`GET /send/<tx_id>/`"
+
+    It's also responsible for batching PDUs into a single transaction for
+ sending to remote destinations, so that we only ever have one
+ transaction in flight to a given destination at any one time.
+
+ This is also responsible for answering requests for things after a
+ given set of transactions, i.e., ask for everything after 'ver' X.
+
+4. **Transport Layer**
+
+ This is responsible for starting a HTTP server and hitting the
+ correct callbacks on the Transaction layer, as well as sending
+ both data and requests for data.
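+
+For illustration only, the transaction deduplication described in point 3
+amounts to something like the following sketch (the function and variable
+names here are hypothetical, not actual Synapse code):
+
+```python
+# map of transaction_id -> the response we previously sent for it
+seen_transactions = {}
+
+
+def process_pdus(pdus):
+    # stand-in for handing the PDUs up to the PDU layer
+    return {"pdus_processed": len(pdus)}
+
+
+def on_incoming_transaction(transaction_id, pdus):
+    # if we've already handled this transaction, replay the stored response
+    # rather than processing the PDUs a second time
+    if transaction_id in seen_transactions:
+        return seen_transactions[transaction_id]
+
+    response = process_pdus(pdus)
+    seen_transactions[transaction_id] = response
+    return response
+```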
+
+## Persistence
+
+We persist things in a single sqlite3 database. All database queries get
+run on a separate, dedicated thread. This means that we only ever have one
+query running at a time, making it a lot easier to do things in a safe
+manner.
+
+The queries are located in the `synapse.persistence.transactions` module,
+and the table information in the `synapse.persistence.tables` module.
diff --git a/docs/ancient_architecture_notes.rst b/docs/ancient_architecture_notes.rst
deleted file mode 100644
index 2a5a2613c4..0000000000
--- a/docs/ancient_architecture_notes.rst
+++ /dev/null
@@ -1,59 +0,0 @@
-.. WARNING::
- These architecture notes are spectacularly old, and date back to when Synapse
- was just federation code in isolation. This should be merged into the main
- spec.
-
-
-= Server to Server =
-
-== Server to Server Stack ==
-
-To use the server to server stack, home servers should only need to interact with the Messaging layer.
-
-The server to server side of things is designed into 4 distinct layers:
-
- 1. Messaging Layer
- 2. Pdu Layer
- 3. Transaction Layer
- 4. Transport Layer
-
-Where the bottom (the transport layer) is what talks to the internet via HTTP, and the top (the messaging layer) talks to the rest of the Home Server with a domain specific API.
-
-1. Messaging Layer
- This is what the rest of the Home Server hits to send messages, join rooms, etc. It also allows you to register callbacks for when it get's notified by lower levels that e.g. a new message has been received.
-
- It is responsible for serializing requests to send to the data layer, and to parse requests received from the data layer.
-
-
-2. PDU Layer
- This layer handles:
- * duplicate pdu_id's - i.e., it makes sure we ignore them.
- * responding to requests for a given pdu_id
- * responding to requests for all metadata for a given context (i.e. room)
- * handling incoming backfill requests
-
- So it has to parse incoming messages to discover which are metadata and which aren't, and has to correctly clobber existing metadata where appropriate.
-
- For incoming PDUs, it has to check the PDUs it references to see if we have missed any. If we have go and ask someone (another home server) for it.
-
-
-3. Transaction Layer
- This layer makes incoming requests idempotent. I.e., it stores which transaction id's we have seen and what our response were. If we have already seen a message with the given transaction id, we do not notify higher levels but simply respond with the previous response.
-
-transaction_id is from "GET /send/<tx_id>/"
-
- It's also responsible for batching PDUs into single transaction for sending to remote destinations, so that we only ever have one transaction in flight to a given destination at any one time.
-
- This is also responsible for answering requests for things after a given set of transactions, i.e., ask for everything after 'ver' X.
-
-
-4. Transport Layer
- This is responsible for starting a HTTP server and hitting the correct callbacks on the Transaction layer, as well as sending both data and requests for data.
-
-
-== Persistence ==
-
-We persist things in a single sqlite3 database. All database queries get run on a separate, dedicated thread. This that we only ever have one query running at a time, making it a lot easier to do things in a safe manner.
-
-The queries are located in the synapse.persistence.transactions module, and the table information in the synapse.persistence.tables module.
-
diff --git a/docs/application_services.md b/docs/application_services.md
new file mode 100644
index 0000000000..06cb79f1f9
--- /dev/null
+++ b/docs/application_services.md
@@ -0,0 +1,31 @@
+# Registering an Application Service
+
+The registration of new application services depends on the homeserver used.
+In synapse, you need to create a new configuration file for your AS and add it
+to the list specified under the `app_service_config_files` config
+option in your synapse config.
+
+For example:
+
+```yaml
+app_service_config_files:
+- /home/matrix/.synapse/<your-AS>.yaml
+```
+
+The format of the AS configuration file is as follows:
+
+```yaml
+url: <base url of AS>
+as_token: <token AS will add to requests to HS>
+hs_token: <token HS will add to requests to AS>
+sender_localpart: <localpart of AS user>
+namespaces:
+ users: # List of users we're interested in
+ - exclusive: <bool>
+ regex: <regex>
+ - ...
+ aliases: [] # List of aliases we're interested in
+ rooms: [] # List of room ids we're interested in
+```
+
+See the [spec](https://matrix.org/docs/spec/application_service/unstable.html) for further details on how application services work.
diff --git a/docs/application_services.rst b/docs/application_services.rst
deleted file mode 100644
index fbc0c7e960..0000000000
--- a/docs/application_services.rst
+++ /dev/null
@@ -1,35 +0,0 @@
-Registering an Application Service
-==================================
-
-The registration of new application services depends on the homeserver used.
-In synapse, you need to create a new configuration file for your AS and add it
-to the list specified under the ``app_service_config_files`` config
-option in your synapse config.
-
-For example:
-
-.. code-block:: yaml
-
- app_service_config_files:
- - /home/matrix/.synapse/<your-AS>.yaml
-
-
-The format of the AS configuration file is as follows:
-
-.. code-block:: yaml
-
- url: <base url of AS>
- as_token: <token AS will add to requests to HS>
- hs_token: <token HS will add to requests to AS>
- sender_localpart: <localpart of AS user>
- namespaces:
- users: # List of users we're interested in
- - exclusive: <bool>
- regex: <regex>
- - ...
- aliases: [] # List of aliases we're interested in
- rooms: [] # List of room ids we're interested in
-
-See the spec_ for further details on how application services work.
-
-.. _spec: https://matrix.org/docs/spec/application_service/unstable.html
diff --git a/docs/architecture.md b/docs/architecture.md
new file mode 100644
index 0000000000..0c7f315f3f
--- /dev/null
+++ b/docs/architecture.md
@@ -0,0 +1,65 @@
+# Synapse Architecture
+
+As of the end of Oct 2014, Synapse's overall architecture looks like:
+
+ synapse
+ .-----------------------------------------------------.
+ | Notifier |
+ | ^ | |
+ | | | |
+ | .------------|------. |
+ | | handlers/ | | |
+ | | v | |
+ | | Event*Handler <--------> rest/* <=> Client
+ | | Rooms*Handler | |
+ HS <=> federation/* <==> FederationHandler | |
+ | | | PresenceHandler | |
+ | | | TypingHandler | |
+ | | '-------------------' |
+ | | | | |
+ | | state/* | |
+ | | | | |
+ | | v v |
+ | `--------------> storage/* |
+ | | |
+ '--------------------------|--------------------------'
+ v
+ .----.
+ | DB |
+ '----'
+
+- Handlers: business logic of synapse itself. Follows a set contract of BaseHandler:
+ - BaseHandler gives us onNewRoomEvent which: (TODO: flesh this out and make it less cryptic):
+ - handle_state(event)
+ - auth(event)
+ - persist_event(event)
+ - notify notifier or federation(event)
+ - PresenceHandler: use distributor to get EDUs out of Federation.
+ Very lightweight logic built on the distributor
+ - TypingHandler: use distributor to get EDUs out of Federation.
+ Very lightweight logic built on the distributor
+ - EventsHandler: handles the events stream...
+ - FederationHandler: - gets PDU from Federation Layer; turns into
+ an event; follows basehandler functionality.
+ - RoomsHandler: does all the room logic, including members - lots
+ of classes in RoomsHandler.
+ - ProfileHandler: talks to the storage to store/retrieve profile
+ info.
+- EventFactory: generates events of particular event types.
+- Notifier: Backs the events handler
+- REST: Interfaces handlers and events to the outside world via
+ HTTP/JSON. Converts events back and forth from JSON.
+- Federation: holds the HTTP client & server to talk to other servers.
+ Does replication to make sure there's nothing missing in the graph.
+ Handles reliability. Handles txns.
+- Distributor: generic event bus. used for presence & typing only
+ currently. Notifier could be implemented using Distributor - so far
+ we are only using for things which actually /require/ dynamic
+  we are only using it for things which actually /require/ dynamic
+- Auth: helper singleton to say whether a given event is allowed to do
+ a given thing (TODO: put this on the diagram)
+- State: helper singleton: does state conflict resolution. You give it
+ an event and it tells you if it actually updates the state or not,
+ and annotates the event up properly and handles merge conflict
+ resolution.
+- Storage: abstracts the storage engine.
diff --git a/docs/architecture.rst b/docs/architecture.rst
deleted file mode 100644
index 98050428b9..0000000000
--- a/docs/architecture.rst
+++ /dev/null
@@ -1,68 +0,0 @@
-Synapse Architecture
-====================
-
-As of the end of Oct 2014, Synapse's overall architecture looks like::
-
- synapse
- .-----------------------------------------------------.
- | Notifier |
- | ^ | |
- | | | |
- | .------------|------. |
- | | handlers/ | | |
- | | v | |
- | | Event*Handler <--------> rest/* <=> Client
- | | Rooms*Handler | |
- HSes <=> federation/* <==> FederationHandler | |
- | | | PresenceHandler | |
- | | | TypingHandler | |
- | | '-------------------' |
- | | | | |
- | | state/* | |
- | | | | |
- | | v v |
- | `--------------> storage/* |
- | | |
- '--------------------------|--------------------------'
- v
- .----.
- | DB |
- '----'
-
-* Handlers: business logic of synapse itself. Follows a set contract of BaseHandler:
-
- - BaseHandler gives us onNewRoomEvent which: (TODO: flesh this out and make it less cryptic):
-
- + handle_state(event)
- + auth(event)
- + persist_event(event)
- + notify notifier or federation(event)
-
- - PresenceHandler: use distributor to get EDUs out of Federation. Very
- lightweight logic built on the distributor
- - TypingHandler: use distributor to get EDUs out of Federation. Very
- lightweight logic built on the distributor
- - EventsHandler: handles the events stream...
- - FederationHandler: - gets PDU from Federation Layer; turns into an event;
- follows basehandler functionality.
- - RoomsHandler: does all the room logic, including members - lots of classes in
- RoomsHandler.
- - ProfileHandler: talks to the storage to store/retrieve profile info.
-
-* EventFactory: generates events of particular event types.
-* Notifier: Backs the events handler
-* REST: Interfaces handlers and events to the outside world via HTTP/JSON.
- Converts events back and forth from JSON.
-* Federation: holds the HTTP client & server to talk to other servers. Does
- replication to make sure there's nothing missing in the graph. Handles
- reliability. Handles txns.
-* Distributor: generic event bus. used for presence & typing only currently.
- Notifier could be implemented using Distributor - so far we are only using for
- things which actually /require/ dynamic pluggability however as it can
- obfuscate the actual flow of control.
-* Auth: helper singleton to say whether a given event is allowed to do a given
- thing (TODO: put this on the diagram)
-* State: helper singleton: does state conflict resolution. You give it an event
- and it tells you if it actually updates the state or not, and annotates the
- event up properly and handles merge conflict resolution.
-* Storage: abstracts the storage engine.
diff --git a/docs/code_style.md b/docs/code_style.md
new file mode 100644
index 0000000000..f983f72d6c
--- /dev/null
+++ b/docs/code_style.md
@@ -0,0 +1,169 @@
+# Code Style
+
+## Formatting tools
+
+The Synapse codebase uses a number of code formatting tools in order to
+quickly and automatically check for formatting (and sometimes logical)
+errors in code.
+
+The necessary tools are detailed below.
+
+- **black**
+
+ The Synapse codebase uses [black](https://pypi.org/project/black/)
+    as an opinionated code formatter, ensuring all committed code is
+ properly formatted.
+
+ First install `black` with:
+
+ pip install --upgrade black
+
+ Have `black` auto-format your code (it shouldn't change any
+ functionality) with:
+
+ black . --exclude="\.tox|build|env"
+
+- **flake8**
+
+ `flake8` is a code checking tool. We require code to pass `flake8`
+ before being merged into the codebase.
+
+ Install `flake8` with:
+
+ pip install --upgrade flake8
+
+ Check all application and test code with:
+
+ flake8 synapse tests
+
+- **isort**
+
+ `isort` ensures imports are nicely formatted, and can suggest and
+ auto-fix issues such as double-importing.
+
+ Install `isort` with:
+
+ pip install --upgrade isort
+
+ Auto-fix imports with:
+
+ isort -rc synapse tests
+
+ `-rc` means to recursively search the given directories.
+
+It's worth noting that modern IDEs and text editors can run these tools
+automatically on save. It may be worth looking into whether this
+functionality is supported in your editor for a more convenient
+development workflow. It is not, however, recommended to run `flake8` on
+save as it takes a while and is very resource intensive.
+
+## General rules
+
+- **Naming**:
+ - Use camel case for class and type names
+ - Use underscores for functions and variables.
+- **Docstrings**: should follow the [google code
+ style](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings).
+ This is so that we can generate documentation with
+ [sphinx](http://sphinxcontrib-napoleon.readthedocs.org/en/latest/).
+ See the
+ [examples](http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html)
+ in the sphinx documentation.
+- **Imports**:
+ - Imports should be sorted by `isort` as described above.
+ - Prefer to import classes and functions rather than packages or
+ modules.
+
+ Example:
+
+ from synapse.types import UserID
+ ...
+ user_id = UserID(local, server)
+
+ is preferred over:
+
+ from synapse import types
+ ...
+ user_id = types.UserID(local, server)
+
+ (or any other variant).
+
+ This goes against the advice in the Google style guide, but it
+ means that errors in the name are caught early (at import time).
+
+ - Avoid wildcard imports (`from synapse.types import *`) and
+ relative imports (`from .types import UserID`).
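+
+As a brief illustration of the docstring convention above, a Google-style
+docstring looks something like this (a minimal sketch; the function and
+its behaviour are made up purely for illustration):
+
+    def get_display_name(user_id):
+        """Look up the display name for a user.
+
+        Args:
+            user_id (str): fully-qualified user ID, e.g. "@alice:example.com".
+
+        Returns:
+            str: the user's display name, or the localpart if none is set.
+
+        Raises:
+            ValueError: if `user_id` is not a valid user ID.
+        """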
+
+## Configuration file format
+
+The [sample configuration file](./sample_config.yaml) acts as a
+reference to Synapse's configuration options for server administrators.
+Remember that many readers will be unfamiliar with YAML and server
+administration in general, so it is important that the file be as
+easy to understand as possible, which includes following a consistent
+format.
+
+Some guidelines follow:
+
+- Sections should be separated with a heading consisting of a single
+ line prefixed and suffixed with `##`. There should be **two** blank
+ lines before the section header, and **one** after.
+- Each option should be listed in the file with the following format:
+ - A comment describing the setting. Each line of this comment
+ should be prefixed with a hash (`#`) and a space.
+
+ The comment should describe the default behaviour (ie, what
+ happens if the setting is omitted), as well as what the effect
+ will be if the setting is changed.
+
+      Often, the comment ends with something like "uncomment the
+      following to <do action>".
+
+ - A line consisting of only `#`.
+ - A commented-out example setting, prefixed with only `#`.
+
+ For boolean (on/off) options, convention is that this example
+ should be the *opposite* to the default (so the comment will end
+ with "Uncomment the following to enable [or disable]
+      <feature>.") For other options, the example should give some
+ non-default value which is likely to be useful to the reader.
+
+- There should be a blank line between each option.
+- Where several settings are grouped into a single dict, *avoid* the
+ convention where the whole block is commented out, resulting in
+ comment lines starting `# #`, as this is hard to read and confusing
+ to edit. Instead, leave the top-level config option uncommented, and
+ follow the conventions above for sub-options. Ensure that your code
+ correctly handles the top-level option being set to `None` (as it
+ will be if no sub-options are enabled).
+- Lines should be wrapped at 80 characters.
+
+Example:
+
+ ## Frobnication ##
+
+ # The frobnicator will ensure that all requests are fully frobnicated.
+ # To enable it, uncomment the following.
+ #
+ #frobnicator_enabled: true
+
+ # By default, the frobnicator will frobnicate with the default frobber.
+ # The following will make it use an alternative frobber.
+ #
+    #frobnicator_frobber: special_frobber
+
+ # Settings for the frobber
+ #
+ frobber:
+ # frobbing speed. Defaults to 1.
+ #
+ #speed: 10
+
+ # frobbing distance. Defaults to 1000.
+ #
+ #distance: 100
+
+Note that the sample configuration is generated from the synapse code
+and is maintained by a script, `scripts-dev/generate_sample_config`.
+Making sure that the output from this script matches the desired format
+is left as an exercise for the reader!
diff --git a/docs/code_style.rst b/docs/code_style.rst
deleted file mode 100644
index 39ac4ebedc..0000000000
--- a/docs/code_style.rst
+++ /dev/null
@@ -1,180 +0,0 @@
-Code Style
-==========
-
-Formatting tools
-----------------
-
-The Synapse codebase uses a number of code formatting tools in order to
-quickly and automatically check for formatting (and sometimes logical) errors
-in code.
-
-The necessary tools are detailed below.
-
-- **black**
-
- The Synapse codebase uses `black <https://pypi.org/project/black/>`_ as an
- opinionated code formatter, ensuring all comitted code is properly
- formatted.
-
- First install ``black`` with::
-
- pip install --upgrade black
-
- Have ``black`` auto-format your code (it shouldn't change any functionality)
- with::
-
- black . --exclude="\.tox|build|env"
-
-- **flake8**
-
- ``flake8`` is a code checking tool. We require code to pass ``flake8`` before being merged into the codebase.
-
- Install ``flake8`` with::
-
- pip install --upgrade flake8
-
- Check all application and test code with::
-
- flake8 synapse tests
-
-- **isort**
-
- ``isort`` ensures imports are nicely formatted, and can suggest and
- auto-fix issues such as double-importing.
-
- Install ``isort`` with::
-
- pip install --upgrade isort
-
- Auto-fix imports with::
-
- isort -rc synapse tests
-
- ``-rc`` means to recursively search the given directories.
-
-It's worth noting that modern IDEs and text editors can run these tools
-automatically on save. It may be worth looking into whether this
-functionality is supported in your editor for a more convenient development
-workflow. It is not, however, recommended to run ``flake8`` on save as it
-takes a while and is very resource intensive.
-
-General rules
--------------
-
-- **Naming**:
-
- - Use camel case for class and type names
- - Use underscores for functions and variables.
-
-- **Docstrings**: should follow the `google code style
- <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
- This is so that we can generate documentation with `sphinx
- <http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
- `examples
- <http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html>`_
- in the sphinx documentation.
-
-- **Imports**:
-
- - Imports should be sorted by ``isort`` as described above.
-
- - Prefer to import classes and functions rather than packages or modules.
-
- Example::
-
- from synapse.types import UserID
- ...
- user_id = UserID(local, server)
-
- is preferred over::
-
- from synapse import types
- ...
- user_id = types.UserID(local, server)
-
- (or any other variant).
-
- This goes against the advice in the Google style guide, but it means that
- errors in the name are caught early (at import time).
-
- - Avoid wildcard imports (``from synapse.types import *``) and relative
- imports (``from .types import UserID``).
-
-Configuration file format
--------------------------
-
-The `sample configuration file <./sample_config.yaml>`_ acts as a reference to
-Synapse's configuration options for server administrators. Remember that many
-readers will be unfamiliar with YAML and server administration in general, so
-that it is important that the file be as easy to understand as possible, which
-includes following a consistent format.
-
-Some guidelines follow:
-
-* Sections should be separated with a heading consisting of a single line
- prefixed and suffixed with ``##``. There should be **two** blank lines
- before the section header, and **one** after.
-
-* Each option should be listed in the file with the following format:
-
- * A comment describing the setting. Each line of this comment should be
- prefixed with a hash (``#``) and a space.
-
- The comment should describe the default behaviour (ie, what happens if
- the setting is omitted), as well as what the effect will be if the
- setting is changed.
-
- Often, the comment end with something like "uncomment the
- following to \<do action>".
-
- * A line consisting of only ``#``.
-
- * A commented-out example setting, prefixed with only ``#``.
-
- For boolean (on/off) options, convention is that this example should be
- the *opposite* to the default (so the comment will end with "Uncomment
- the following to enable [or disable] \<feature\>." For other options,
- the example should give some non-default value which is likely to be
- useful to the reader.
-
-* There should be a blank line between each option.
-
-* Where several settings are grouped into a single dict, *avoid* the
- convention where the whole block is commented out, resulting in comment
- lines starting ``# #``, as this is hard to read and confusing to
- edit. Instead, leave the top-level config option uncommented, and follow
- the conventions above for sub-options. Ensure that your code correctly
- handles the top-level option being set to ``None`` (as it will be if no
- sub-options are enabled).
-
-* Lines should be wrapped at 80 characters.
-
-Example::
-
- ## Frobnication ##
-
- # The frobnicator will ensure that all requests are fully frobnicated.
- # To enable it, uncomment the following.
- #
- #frobnicator_enabled: true
-
- # By default, the frobnicator will frobnicate with the default frobber.
- # The following will make it use an alternative frobber.
- #
- #frobincator_frobber: special_frobber
-
- # Settings for the frobber
- #
- frobber:
- # frobbing speed. Defaults to 1.
- #
- #speed: 10
-
- # frobbing distance. Defaults to 1000.
- #
- #distance: 100
-
-Note that the sample configuration is generated from the synapse code and is
-maintained by a script, ``scripts-dev/generate_sample_config``. Making sure
-that the output from this script matches the desired format is left as an
-exercise for the reader!
diff --git a/docs/dev/saml.md b/docs/dev/saml.md
new file mode 100644
index 0000000000..f41aadce47
--- /dev/null
+++ b/docs/dev/saml.md
@@ -0,0 +1,37 @@
+# How to test SAML as a developer without a server
+
+https://capriza.github.io/samling/samling.html (https://github.com/capriza/samling) is a great
+resource for being able to tinker with the SAML options within Synapse without needing to
+deploy and configure a complicated software stack.
+
+To make Synapse (and therefore Riot) use it:
+
+1. Use the samling.html URL above or deploy your own and visit the IdP Metadata tab.
+2. Copy the XML to your clipboard.
+3. On your Synapse server, create a new file `samling.xml` next to your `homeserver.yaml` with
+ the XML from step 2 as the contents.
+4. Edit your `homeserver.yaml` to include:
+ ```yaml
+ saml2_config:
+ sp_config:
+ allow_unknown_attributes: true # Works around a bug with AVA Hashes: https://github.com/IdentityPython/pysaml2/issues/388
+ metadata:
+ local: ["samling.xml"]
+ ```
+5. Run `apt-get install xmlsec1` and `pip install --upgrade --force 'pysaml2>=4.5.0'` to ensure
+ the dependencies are installed and ready to go.
+6. Restart Synapse.
+
+Then in Riot:
+
+1. Visit the login page with a Riot pointing at your homeserver.
+2. Click the Single Sign-On button.
+3. On the samling page, enter a Name Identifier and add a SAML Attribute for `uid=your_localpart`.
+ The response must also be signed.
+4. Click "Next".
+5. Click "Post Response" (change nothing).
+6. You should be logged in.
+
+If you try and repeat this process, you may be automatically logged in using the information you
+gave previously. To fix this, open your developer console (`F12` or `Ctrl+Shift+I`) while on the
+samling page and clear the site data. In Chrome, this will be a button on the Application tab.
diff --git a/docs/federate.md b/docs/federate.md
index 6d6bb85e15..193e2d2dfe 100644
--- a/docs/federate.md
+++ b/docs/federate.md
@@ -148,7 +148,7 @@ We no longer actively recommend against using a reverse proxy. Many admins will
find it easier to direct federation traffic to a reverse proxy and manage their
own TLS certificates, and this is a supported configuration.
-See [reverse_proxy.rst](reverse_proxy.rst) for information on setting up a
+See [reverse_proxy.md](reverse_proxy.md) for information on setting up a
reverse proxy.
#### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
@@ -184,7 +184,7 @@ a complicated dance which requires connections in both directions).
Another common problem is that people on other servers can't join rooms that
you invite them to. This can be caused by an incorrectly-configured reverse
-proxy: see [reverse_proxy.rst](<reverse_proxy.rst>) for instructions on how to correctly
+proxy: see [reverse_proxy.md](<reverse_proxy.md>) for instructions on how to correctly
configure a reverse proxy.
## Running a Demo Federation of Synapses
diff --git a/docs/log_contexts.md b/docs/log_contexts.md
new file mode 100644
index 0000000000..5331e8c88b
--- /dev/null
+++ b/docs/log_contexts.md
@@ -0,0 +1,494 @@
+# Log Contexts
+
+To help track the processing of individual requests, synapse uses a
+'`log context`' to track which request it is handling at any given
+moment. This is done via a thread-local variable; a `logging.Filter` is
+then used to fish the information back out of the thread-local variable
+and add it to each log record.
+
+Logcontexts are also used for CPU and database accounting, so that we
+can track which requests were responsible for high CPU use or database
+activity.
+
+The `synapse.logging.context` module provides facilities for managing
+the current log context (as well as providing the `LoggingContextFilter`
+class).
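+
+For reference, the filter is hooked up via the standard Python logging
+configuration. A trimmed-down extract, in the spirit of the sample log
+config shipped with Synapse (check the sample config for the exact,
+current form), looks like this:
+
+```yaml
+filters:
+    context:
+        (): synapse.logging.context.LoggingContextFilter
+        request: ""
+
+handlers:
+    console:
+        class: logging.StreamHandler
+        filters: [context]
+```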
+
+Deferreds make the whole thing complicated, so this document describes
+how it all works, and how to write code which follows the rules.
+
+## Logcontexts without Deferreds
+
+In the absence of any Deferred voodoo, things are simple enough. As with
+any code of this nature, the rule is that our function should leave
+things as it found them:
+
+```python
+from synapse.logging import context # omitted from future snippets
+
+def handle_request(request_id):
+ request_context = context.LoggingContext()
+
+ calling_context = context.LoggingContext.current_context()
+ context.LoggingContext.set_current_context(request_context)
+ try:
+ request_context.request = request_id
+ do_request_handling()
+ logger.debug("finished")
+ finally:
+ context.LoggingContext.set_current_context(calling_context)
+
+def do_request_handling():
+ logger.debug("phew") # this will be logged against request_id
+```
+
+LoggingContext implements the context management methods, so the above
+can be written much more succinctly as:
+
+```python
+def handle_request(request_id):
+ with context.LoggingContext() as request_context:
+ request_context.request = request_id
+ do_request_handling()
+ logger.debug("finished")
+
+def do_request_handling():
+ logger.debug("phew")
+```
+
+## Using logcontexts with Deferreds
+
+Deferreds --- and in particular, `defer.inlineCallbacks` --- break the
+linear flow of code so that there is no longer a single entry point
+where we should set the logcontext and a single exit point where we
+should remove it.
+
+Consider the example above, where `do_request_handling` needs to do some
+blocking operation, and returns a deferred:
+
+```python
+@defer.inlineCallbacks
+def handle_request(request_id):
+ with context.LoggingContext() as request_context:
+ request_context.request = request_id
+ yield do_request_handling()
+ logger.debug("finished")
+```
+
+In the above flow:
+
+- The logcontext is set
+- `do_request_handling` is called, and returns a deferred
+- `handle_request` yields the deferred
+- The `inlineCallbacks` wrapper of `handle_request` returns a deferred
+
+So we have stopped processing the request (and will probably go on to
+start processing the next), without clearing the logcontext.
+
+To circumvent this problem, synapse code assumes that, wherever you have
+a deferred, you will want to yield on it. To that end, wherever
+functions return a deferred, we adopt the following conventions:
+
+**Rules for functions returning deferreds:**
+
+> - If the deferred is already complete, the function returns with the
+> same logcontext it started with.
+> - If the deferred is incomplete, the function clears the logcontext
+> before returning; when the deferred completes, it restores the
+> logcontext before running any callbacks.
+
+That sounds complicated, but actually it means a lot of code (including
+the example above) "just works". There are two cases:
+
+- If `do_request_handling` returns a completed deferred, then the
+ logcontext will still be in place. In this case, execution will
+ continue immediately after the `yield`; the "finished" line will
+ be logged against the right context, and the `with` block restores
+ the original context before we return to the caller.
+- If the returned deferred is incomplete, `do_request_handling` clears
+ the logcontext before returning. The logcontext is therefore clear
+ when `handle_request` yields the deferred. At that point, the
+ `inlineCallbacks` wrapper adds a callback to the deferred, and
+ returns another (incomplete) deferred to the caller, and it is safe
+ to begin processing the next request.
+
+ Once `do_request_handling`'s deferred completes, it will reinstate
+ the logcontext, before running the callback added by the
+ `inlineCallbacks` wrapper. That callback runs the second half of
+ `handle_request`, so again the "finished" line will be logged
+ against the right context, and the `with` block restores the
+ original context.
+
+As an aside, it's worth noting that `handle_request` follows our
+rules - though that only matters if the caller has its own logcontext
+which it cares about.
+
+The following sections describe pitfalls and helpful patterns when
+implementing these rules.
+
+## Always yield your deferreds
+
+Whenever you get a deferred back from a function, you should `yield` on
+it as soon as possible. (Returning it directly to your caller is ok too,
+if you're not doing `inlineCallbacks`.) Do not pass go; do not do any
+logging; do not call any other functions.
+
+```python
+@defer.inlineCallbacks
+def fun():
+ logger.debug("starting")
+ yield do_some_stuff() # just like this
+
+ d = more_stuff()
+ result = yield d # also fine, of course
+
+ return result
+
+def nonInlineCallbacksFun():
+ logger.debug("just a wrapper really")
+ return do_some_stuff() # this is ok too - the caller will yield on
+ # it anyway.
+```
+
+Provided this pattern is followed all the way back up the call chain
+to where the logcontext was set, this will make things work out ok:
+provided `do_some_stuff` and `more_stuff` follow the rules above, then
+so will `fun` (as wrapped by `inlineCallbacks`) and
+`nonInlineCallbacksFun`.
+
+It's all too easy to forget to `yield`: for instance if we forgot that
+`do_some_stuff` returned a deferred, we might plough on regardless. This
+leads to a mess; it will probably work itself out eventually, but not
+before a load of stuff has been logged against the wrong context.
+(Normally, other things will break, more obviously, if you forget to
+`yield`, so this tends not to be a major problem in practice.)
+
+Of course sometimes you need to do something a bit fancier with your
+Deferreds - not all code follows the linear A-then-B-then-C pattern.
+Notes on implementing more complex patterns are in later sections.
+
+## Where you create a new Deferred, make it follow the rules
+
+Most of the time, a Deferred comes from another synapse function.
+Sometimes, though, we need to make up a new Deferred, or we get a
+Deferred back from external code. We need to make it follow our rules.
+
+The easy way to do it is with a combination of `defer.inlineCallbacks`,
+and `context.PreserveLoggingContext`. Suppose we want to implement
+`sleep`, which returns a deferred which will run its callbacks after a
+given number of seconds. That might look like:
+
+```python
+# not a logcontext-rules-compliant function
+def get_sleep_deferred(seconds):
+ d = defer.Deferred()
+ reactor.callLater(seconds, d.callback, None)
+ return d
+```
+
+That doesn't follow the rules, but we can fix it by wrapping it with
+`PreserveLoggingContext` and `yield`-ing on it:
+
+```python
+@defer.inlineCallbacks
+def sleep(seconds):
+ with PreserveLoggingContext():
+ yield get_sleep_deferred(seconds)
+```
+
+This technique works equally for external functions which return
+deferreds, or deferreds we have made ourselves.
+
+You can also use `context.make_deferred_yieldable`, which just does the
+boilerplate for you, so the above could be written:
+
+```python
+def sleep(seconds):
+ return context.make_deferred_yieldable(get_sleep_deferred(seconds))
+```
+
+## Fire-and-forget
+
+Sometimes you want to fire off a chain of execution, but not wait for
+its result. That might look a bit like this:
+
+```python
+@defer.inlineCallbacks
+def do_request_handling():
+ yield foreground_operation()
+
+ # *don't* do this
+ background_operation()
+
+ logger.debug("Request handling complete")
+
+@defer.inlineCallbacks
+def background_operation():
+ yield first_background_step()
+ logger.debug("Completed first step")
+ yield second_background_step()
+ logger.debug("Completed second step")
+```
+
+The above code does a couple of steps in the background after
+`do_request_handling` has finished. The log lines are still logged
+against the `request_context` logcontext, which may or may not be
+desirable. There are two big problems with the above, however. The first
+problem is that, if `background_operation` returns an incomplete
+Deferred, it will expect its caller to `yield` immediately, so will have
+cleared the logcontext. In this example, that means that 'Request
+handling complete' will be logged without any context.
+
+The second problem, which is potentially even worse, is that when the
+Deferred returned by `background_operation` completes, it will restore
+the original logcontext. There is nothing waiting on that Deferred, so
+the logcontext will leak into the reactor and possibly get attached to
+some arbitrary future operation.
+
+There are two potential solutions to this.
+
+One option is to surround the call to `background_operation` with a
+`PreserveLoggingContext` call. That will reset the logcontext before
+starting `background_operation` (so the context restored when the
+deferred completes will be the empty logcontext), and will restore the
+current logcontext before continuing the foreground process:
+
+```python
+@defer.inlineCallbacks
+def do_request_handling():
+ yield foreground_operation()
+
+ # start background_operation off in the empty logcontext, to
+ # avoid leaking the current context into the reactor.
+ with PreserveLoggingContext():
+ background_operation()
+
+ # this will now be logged against the request context
+ logger.debug("Request handling complete")
+```
+
+Obviously that option means that the operations done in
+`background_operation` would not be logged against a logcontext
+(though that might be fixed by setting a different logcontext via a
+`with LoggingContext(...)` in `background_operation`).
+
+The second option is to use `context.run_in_background`, which wraps a
+function so that it doesn't reset the logcontext even when it returns
+an incomplete deferred, and adds a callback to the returned deferred to
+reset the logcontext. In other words, it turns a function that follows
+the Synapse rules about logcontexts and Deferreds into one which behaves
+more like an external function --- the opposite operation to that
+described in the previous section. It can be used like this:
+
+```python
+@defer.inlineCallbacks
+def do_request_handling():
+ yield foreground_operation()
+
+ context.run_in_background(background_operation)
+
+ # this will now be logged against the request context
+ logger.debug("Request handling complete")
+```
+
+## Passing synapse deferreds into third-party functions
+
+A typical example of this is where we want to collect together two or
+more deferreds via `defer.gatherResults`:
+
+```python
+d1 = operation1()
+d2 = operation2()
+d3 = defer.gatherResults([d1, d2])
+```
+
+This is really a variation of the fire-and-forget problem above, in that
+we are firing off `d1` and `d2` without yielding on them. The difference
+is that we now have third-party code attached to their callbacks. In any
+case, either technique given in the [Fire-and-forget](#fire-and-forget)
+section will work.
+
+Of course, the new Deferred returned by `gatherResults` needs to be
+wrapped in order to make it follow the logcontext rules before we can
+yield it, as described in [Where you create a new Deferred, make it
+follow the
+rules](#where-you-create-a-new-deferred-make-it-follow-the-rules).
+
+So, option one: reset the logcontext before starting the operations to
+be gathered:
+
+```python
+@defer.inlineCallbacks
+def do_request_handling():
+ with PreserveLoggingContext():
+ d1 = operation1()
+ d2 = operation2()
+ result = yield defer.gatherResults([d1, d2])
+```
+
+In this case particularly, though, option two, of using
+`context.preserve_fn`, almost certainly makes more sense, so that
+`operation1` and `operation2` are both logged against the original
+logcontext. This looks like:
+
+```python
+@defer.inlineCallbacks
+def do_request_handling():
+ d1 = context.preserve_fn(operation1)()
+ d2 = context.preserve_fn(operation2)()
+
+ with PreserveLoggingContext():
+ result = yield defer.gatherResults([d1, d2])
+```
+
+## Was all this really necessary?
+
+The conventions used work fine for a linear flow where everything
+happens in series via `defer.inlineCallbacks` and `yield`, but are
+certainly tricky to follow for any more exotic flows. It's hard not to
+wonder if we could have done something else.
+
+We're not going to rewrite Synapse now, so the following is entirely of
+academic interest, but I'd like to record some thoughts on an
+alternative approach.
+
+I briefly prototyped some code following an alternative set of rules. I
+think it would work, but I certainly didn't get as far as thinking how
+it would interact with concepts as complicated as the cache descriptors.
+
+My alternative rules were:
+
+- functions always preserve the logcontext of their caller, whether or
+ not they are returning a Deferred.
+- Deferreds returned by synapse functions run their callbacks in the
+  same context as the function was originally called in.
+
+The main point of this scheme is that everywhere that sets the
+logcontext is responsible for clearing it before returning control to
+the reactor.
+
+So, for example, if you were the function which started a
+`with LoggingContext` block, you wouldn't `yield` within it --- instead
+you'd start off the background process, and then leave the `with` block
+to wait for it:
+
+```python
+def handle_request(request_id):
+ with context.LoggingContext() as request_context:
+ request_context.request = request_id
+ d = do_request_handling()
+
+ def cb(r):
+ logger.debug("finished")
+
+ d.addCallback(cb)
+ return d
+```
+
+(in general, mixing `with LoggingContext` blocks and
+`defer.inlineCallbacks` in the same function leads to slightly
+counter-intuitive code, under this scheme).
+
+Because we leave the original `with` block as soon as the Deferred is
+returned (as opposed to waiting for it to be resolved, as we do today),
+the logcontext is cleared before control passes back to the reactor; so
+if there is some code within `do_request_handling` which needs to wait
+for a Deferred to complete, there is no need for it to worry about
+clearing the logcontext before doing so:
+
+```python
+def handle_request():
+ r = do_some_stuff()
+ r.addCallback(do_some_more_stuff)
+ return r
+```
+
+--- and provided `do_some_stuff` follows the rules of returning a
+Deferred which runs its callbacks in the original logcontext, all is
+happy.
+
+The business of a Deferred which runs its callbacks in the original
+logcontext isn't hard to achieve --- we have it today, in the shape of
+`context._PreservingContextDeferred`:
+
+```python
+def do_some_stuff():
+ deferred = do_some_io()
+ pcd = _PreservingContextDeferred(LoggingContext.current_context())
+ deferred.chainDeferred(pcd)
+ return pcd
+```
+
+It turns out that, thanks to the way that Deferreds chain together, we
+automatically get the property of a context-preserving deferred with
+`defer.inlineCallbacks`, provided the final Deferred the function
+`yields` on has that property. So we can just write:
+
+```python
+@defer.inlineCallbacks
+def handle_request():
+ yield do_some_stuff()
+ yield do_some_more_stuff()
+```
+
+To conclude: I think this scheme would have worked equally well, with
+less danger of messing it up, and probably made some more esoteric code
+easier to write. But again --- changing the conventions of the entire
+Synapse codebase is not a sensible option for the marginal improvement
+offered.
+
+## A note on garbage-collection of Deferred chains
+
+It turns out that our logcontext rules do not play nicely with Deferred
+chains which get orphaned and garbage-collected.
+
+Imagine we have some code that looks like this:
+
+```python
+listener_queue = []
+
+def on_something_interesting():
+ for d in listener_queue:
+ d.callback("foo")
+
+@defer.inlineCallbacks
+def await_something_interesting():
+ new_deferred = defer.Deferred()
+ listener_queue.append(new_deferred)
+
+ with PreserveLoggingContext():
+ yield new_deferred
+```
+
+Obviously, the idea here is that we have a bunch of things which are
+waiting for an event. (It's just an example of the problem here, but a
+relatively common one.)
+
+Now let's imagine two further things happen. First of all, whatever was
+waiting for the interesting thing goes away. (Perhaps the request times
+out, or something *even more* interesting happens.)
+
+Secondly, let's suppose that we decide that the interesting thing is
+never going to happen, and we reset the listener queue:
+
+```python
+def reset_listener_queue():
+ listener_queue.clear()
+```
+
+So, both ends of the deferred chain have now dropped their references,
+and the deferred chain is now orphaned, and will be garbage-collected at
+some point. Note that `await_something_interesting` is a generator
+function, and when Python garbage-collects generator functions, it gives
+them a chance to clean up by making the `yield` raise a `GeneratorExit`
+exception. In our case, that means that the `__exit__` handler of
+`PreserveLoggingContext` will carefully restore the request context, but
+there is now nothing waiting for its return, so the request context is
+never cleared.
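+
+The `GeneratorExit` behaviour can be seen in a small standalone snippet
+(plain Python, not Synapse code). On CPython, dropping the last reference
+to a suspended generator closes it immediately, so the cleanup handlers
+run straight away:
+
+```python
+def waiter():
+    try:
+        yield   # suspend here, waiting for a value that never arrives
+    finally:
+        # in the Synapse case, this is where PreserveLoggingContext.__exit__
+        # restores the request context, with nothing left to clear it again
+        print("cleanup ran during garbage collection")
+
+g = waiter()
+next(g)   # advance to the yield
+del g     # GeneratorExit is raised at the yield; the finally block runs
+```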
+
+To reiterate, this problem only arises when *both* ends of a deferred
+chain are dropped. Dropping the reference to a deferred you're
+supposed to be calling is probably bad practice, so this doesn't
+actually happen too much. Unfortunately, when it does happen, it will
+lead to leaked logcontexts which are incredibly hard to track down.
diff --git a/docs/log_contexts.rst b/docs/log_contexts.rst
deleted file mode 100644
index 4502cd9454..0000000000
--- a/docs/log_contexts.rst
+++ /dev/null
@@ -1,498 +0,0 @@
-Log Contexts
-============
-
-.. contents::
-
-To help track the processing of individual requests, synapse uses a
-'log context' to track which request it is handling at any given moment. This
-is done via a thread-local variable; a ``logging.Filter`` is then used to fish
-the information back out of the thread-local variable and add it to each log
-record.
-
-Logcontexts are also used for CPU and database accounting, so that we can track
-which requests were responsible for high CPU use or database activity.
-
-The ``synapse.logging.context`` module provides a facilities for managing the
-current log context (as well as providing the ``LoggingContextFilter`` class).
-
-Deferreds make the whole thing complicated, so this document describes how it
-all works, and how to write code which follows the rules.
-
-Logcontexts without Deferreds
------------------------------
-
-In the absence of any Deferred voodoo, things are simple enough. As with any
-code of this nature, the rule is that our function should leave things as it
-found them:
-
-.. code:: python
-
- from synapse.logging import context # omitted from future snippets
-
- def handle_request(request_id):
- request_context = context.LoggingContext()
-
- calling_context = context.LoggingContext.current_context()
- context.LoggingContext.set_current_context(request_context)
- try:
- request_context.request = request_id
- do_request_handling()
- logger.debug("finished")
- finally:
- context.LoggingContext.set_current_context(calling_context)
-
- def do_request_handling():
- logger.debug("phew") # this will be logged against request_id
-
-
-LoggingContext implements the context management methods, so the above can be
-written much more succinctly as:
-
-.. code:: python
-
- def handle_request(request_id):
- with context.LoggingContext() as request_context:
- request_context.request = request_id
- do_request_handling()
- logger.debug("finished")
-
- def do_request_handling():
- logger.debug("phew")
-
-
-Using logcontexts with Deferreds
---------------------------------
-
-Deferreds — and in particular, ``defer.inlineCallbacks`` — break
-the linear flow of code so that there is no longer a single entry point where
-we should set the logcontext and a single exit point where we should remove it.
-
-Consider the example above, where ``do_request_handling`` needs to do some
-blocking operation, and returns a deferred:
-
-.. code:: python
-
- @defer.inlineCallbacks
- def handle_request(request_id):
- with context.LoggingContext() as request_context:
- request_context.request = request_id
- yield do_request_handling()
- logger.debug("finished")
-
-
-In the above flow:
-
-* The logcontext is set
-* ``do_request_handling`` is called, and returns a deferred
-* ``handle_request`` yields the deferred
-* The ``inlineCallbacks`` wrapper of ``handle_request`` returns a deferred
-
-So we have stopped processing the request (and will probably go on to start
-processing the next), without clearing the logcontext.
-
-To circumvent this problem, synapse code assumes that, wherever you have a
-deferred, you will want to yield on it. To that end, whereever functions return
-a deferred, we adopt the following conventions:
-
-**Rules for functions returning deferreds:**
-
- * If the deferred is already complete, the function returns with the same
- logcontext it started with.
- * If the deferred is incomplete, the function clears the logcontext before
- returning; when the deferred completes, it restores the logcontext before
- running any callbacks.
-
-That sounds complicated, but actually it means a lot of code (including the
-example above) "just works". There are two cases:
-
-* If ``do_request_handling`` returns a completed deferred, then the logcontext
- will still be in place. In this case, execution will continue immediately
- after the ``yield``; the "finished" line will be logged against the right
- context, and the ``with`` block restores the original context before we
- return to the caller.
-
-* If the returned deferred is incomplete, ``do_request_handling`` clears the
- logcontext before returning. The logcontext is therefore clear when
- ``handle_request`` yields the deferred. At that point, the ``inlineCallbacks``
- wrapper adds a callback to the deferred, and returns another (incomplete)
- deferred to the caller, and it is safe to begin processing the next request.
-
- Once ``do_request_handling``'s deferred completes, it will reinstate the
- logcontext, before running the callback added by the ``inlineCallbacks``
- wrapper. That callback runs the second half of ``handle_request``, so again
- the "finished" line will be logged against the right
- context, and the ``with`` block restores the original context.
-
-As an aside, it's worth noting that ``handle_request`` follows our rules -
-though that only matters if the caller has its own logcontext which it cares
-about.
-
-The following sections describe pitfalls and helpful patterns when implementing
-these rules.
-
-Always yield your deferreds
----------------------------
-
-Whenever you get a deferred back from a function, you should ``yield`` on it
-as soon as possible. (Returning it directly to your caller is ok too, if you're
-not doing ``inlineCallbacks``.) Do not pass go; do not do any logging; do not
-call any other functions.
-
-.. code:: python
-
- @defer.inlineCallbacks
- def fun():
- logger.debug("starting")
- yield do_some_stuff() # just like this
-
- d = more_stuff()
- result = yield d # also fine, of course
-
- return result
-
- def nonInlineCallbacksFun():
- logger.debug("just a wrapper really")
- return do_some_stuff() # this is ok too - the caller will yield on
- # it anyway.
-
-Provided this pattern is followed all the way back up to the callchain to where
-the logcontext was set, this will make things work out ok: provided
-``do_some_stuff`` and ``more_stuff`` follow the rules above, then so will
-``fun`` (as wrapped by ``inlineCallbacks``) and ``nonInlineCallbacksFun``.
-
-It's all too easy to forget to ``yield``: for instance if we forgot that
-``do_some_stuff`` returned a deferred, we might plough on regardless. This
-leads to a mess; it will probably work itself out eventually, but not before
-a load of stuff has been logged against the wrong context. (Normally, other
-things will break, more obviously, if you forget to ``yield``, so this tends
-not to be a major problem in practice.)
-
-Of course sometimes you need to do something a bit fancier with your Deferreds
-- not all code follows the linear A-then-B-then-C pattern. Notes on
-implementing more complex patterns are in later sections.
-
-Where you create a new Deferred, make it follow the rules
----------------------------------------------------------
-
-Most of the time, a Deferred comes from another synapse function. Sometimes,
-though, we need to make up a new Deferred, or we get a Deferred back from
-external code. We need to make it follow our rules.
-
-The easy way to do it is with a combination of ``defer.inlineCallbacks``, and
-``context.PreserveLoggingContext``. Suppose we want to implement ``sleep``,
-which returns a deferred which will run its callbacks after a given number of
-seconds. That might look like:
-
-.. code:: python
-
- # not a logcontext-rules-compliant function
- def get_sleep_deferred(seconds):
- d = defer.Deferred()
- reactor.callLater(seconds, d.callback, None)
- return d
-
-That doesn't follow the rules, but we can fix it by wrapping it with
-``PreserveLoggingContext`` and ``yield`` ing on it:
-
-.. code:: python
-
- @defer.inlineCallbacks
- def sleep(seconds):
- with PreserveLoggingContext():
- yield get_sleep_deferred(seconds)
-
-This technique works equally for external functions which return deferreds,
-or deferreds we have made ourselves.
-
-You can also use ``context.make_deferred_yieldable``, which just does the
-boilerplate for you, so the above could be written:
-
-.. code:: python
-
- def sleep(seconds):
- return context.make_deferred_yieldable(get_sleep_deferred(seconds))
-
-
-Fire-and-forget
----------------
-
-Sometimes you want to fire off a chain of execution, but not wait for its
-result. That might look a bit like this:
-
-.. code:: python
-
- @defer.inlineCallbacks
- def do_request_handling():
- yield foreground_operation()
-
- # *don't* do this
- background_operation()
-
- logger.debug("Request handling complete")
-
- @defer.inlineCallbacks
- def background_operation():
- yield first_background_step()
- logger.debug("Completed first step")
- yield second_background_step()
- logger.debug("Completed second step")
-
-The above code does a couple of steps in the background after
-``do_request_handling`` has finished. The log lines are still logged against
-the ``request_context`` logcontext, which may or may not be desirable. There
-are two big problems with the above, however. The first problem is that, if
-``background_operation`` returns an incomplete Deferred, it will expect its
-caller to ``yield`` immediately, so will have cleared the logcontext. In this
-example, that means that 'Request handling complete' will be logged without any
-context.
-
-The second problem, which is potentially even worse, is that when the Deferred
-returned by ``background_operation`` completes, it will restore the original
-logcontext. There is nothing waiting on that Deferred, so the logcontext will
-leak into the reactor and possibly get attached to some arbitrary future
-operation.
-
-There are two potential solutions to this.
-
-One option is to surround the call to ``background_operation`` with a
-``PreserveLoggingContext`` call. That will reset the logcontext before
-starting ``background_operation`` (so the context restored when the deferred
-completes will be the empty logcontext), and will restore the current
-logcontext before continuing the foreground process:
-
-.. code:: python
-
- @defer.inlineCallbacks
- def do_request_handling():
- yield foreground_operation()
-
- # start background_operation off in the empty logcontext, to
- # avoid leaking the current context into the reactor.
- with PreserveLoggingContext():
- background_operation()
-
- # this will now be logged against the request context
- logger.debug("Request handling complete")
-
-Obviously that option means that the operations done in
-``background_operation`` would be not be logged against a logcontext (though
-that might be fixed by setting a different logcontext via a ``with
-LoggingContext(...)`` in ``background_operation``).
-
-The second option is to use ``context.run_in_background``, which wraps a
-function so that it doesn't reset the logcontext even when it returns an
-incomplete deferred, and adds a callback to the returned deferred to reset the
-logcontext. In other words, it turns a function that follows the Synapse rules
-about logcontexts and Deferreds into one which behaves more like an external
-function — the opposite operation to that described in the previous section.
-It can be used like this:
-
-.. code:: python
-
- @defer.inlineCallbacks
- def do_request_handling():
- yield foreground_operation()
-
- context.run_in_background(background_operation)
-
- # this will now be logged against the request context
- logger.debug("Request handling complete")
-
-Passing synapse deferreds into third-party functions
-----------------------------------------------------
-
-A typical example of this is where we want to collect together two or more
-deferred via ``defer.gatherResults``:
-
-.. code:: python
-
- d1 = operation1()
- d2 = operation2()
- d3 = defer.gatherResults([d1, d2])
-
-This is really a variation of the fire-and-forget problem above, in that we are
-firing off ``d1`` and ``d2`` without yielding on them. The difference
-is that we now have third-party code attached to their callbacks. Anyway either
-technique given in the `Fire-and-forget`_ section will work.
-
-Of course, the new Deferred returned by ``gatherResults`` needs to be wrapped
-in order to make it follow the logcontext rules before we can yield it, as
-described in `Where you create a new Deferred, make it follow the rules`_.
-
-So, option one: reset the logcontext before starting the operations to be
-gathered:
-
-.. code:: python
-
- @defer.inlineCallbacks
- def do_request_handling():
- with PreserveLoggingContext():
- d1 = operation1()
- d2 = operation2()
- result = yield defer.gatherResults([d1, d2])
-
-In this case particularly, though, option two, of using
-``context.preserve_fn`` almost certainly makes more sense, so that
-``operation1`` and ``operation2`` are both logged against the original
-logcontext. This looks like:
-
-.. code:: python
-
- @defer.inlineCallbacks
- def do_request_handling():
- d1 = context.preserve_fn(operation1)()
- d2 = context.preserve_fn(operation2)()
-
- with PreserveLoggingContext():
- result = yield defer.gatherResults([d1, d2])
-
-
-Was all this really necessary?
-------------------------------
-
-The conventions used work fine for a linear flow where everything happens in
-series via ``defer.inlineCallbacks`` and ``yield``, but are certainly tricky to
-follow for any more exotic flows. It's hard not to wonder if we could have done
-something else.
-
-We're not going to rewrite Synapse now, so the following is entirely of
-academic interest, but I'd like to record some thoughts on an alternative
-approach.
-
-I briefly prototyped some code following an alternative set of rules. I think
-it would work, but I certainly didn't get as far as thinking how it would
-interact with concepts as complicated as the cache descriptors.
-
-My alternative rules were:
-
-* functions always preserve the logcontext of their caller, whether or not they
- are returning a Deferred.
-
-* Deferreds returned by synapse functions run their callbacks in the same
- context as the function was orignally called in.
-
-The main point of this scheme is that everywhere that sets the logcontext is
-responsible for clearing it before returning control to the reactor.
-
-So, for example, if you were the function which started a ``with
-LoggingContext`` block, you wouldn't ``yield`` within it — instead you'd start
-off the background process, and then leave the ``with`` block to wait for it:
-
-.. code:: python
-
- def handle_request(request_id):
- with context.LoggingContext() as request_context:
- request_context.request = request_id
- d = do_request_handling()
-
- def cb(r):
- logger.debug("finished")
-
- d.addCallback(cb)
- return d
-
-(in general, mixing ``with LoggingContext`` blocks and
-``defer.inlineCallbacks`` in the same function leads to slighly
-counter-intuitive code, under this scheme).
-
-Because we leave the original ``with`` block as soon as the Deferred is
-returned (as opposed to waiting for it to be resolved, as we do today), the
-logcontext is cleared before control passes back to the reactor; so if there is
-some code within ``do_request_handling`` which needs to wait for a Deferred to
-complete, there is no need for it to worry about clearing the logcontext before
-doing so:
-
-.. code:: python
-
- def handle_request():
- r = do_some_stuff()
- r.addCallback(do_some_more_stuff)
- return r
-
-— and provided ``do_some_stuff`` follows the rules of returning a Deferred which
-runs its callbacks in the original logcontext, all is happy.
-
-The business of a Deferred which runs its callbacks in the original logcontext
-isn't hard to achieve — we have it today, in the shape of
-``context._PreservingContextDeferred``:
-
-.. code:: python
-
- def do_some_stuff():
- deferred = do_some_io()
- pcd = _PreservingContextDeferred(LoggingContext.current_context())
- deferred.chainDeferred(pcd)
- return pcd
-
-It turns out that, thanks to the way that Deferreds chain together, we
-automatically get the property of a context-preserving deferred with
-``defer.inlineCallbacks``, provided the final Defered the function ``yields``
-on has that property. So we can just write:
-
-.. code:: python
-
- @defer.inlineCallbacks
- def handle_request():
- yield do_some_stuff()
- yield do_some_more_stuff()
-
-To conclude: I think this scheme would have worked equally well, with less
-danger of messing it up, and probably made some more esoteric code easier to
-write. But again — changing the conventions of the entire Synapse codebase is
-not a sensible option for the marginal improvement offered.
-
-
-A note on garbage-collection of Deferred chains
------------------------------------------------
-
-It turns out that our logcontext rules do not play nicely with Deferred
-chains which get orphaned and garbage-collected.
-
-Imagine we have some code that looks like this:
-
-.. code:: python
-
- listener_queue = []
-
- def on_something_interesting():
- for d in listener_queue:
- d.callback("foo")
-
- @defer.inlineCallbacks
- def await_something_interesting():
- new_deferred = defer.Deferred()
- listener_queue.append(new_deferred)
-
- with PreserveLoggingContext():
- yield new_deferred
-
-Obviously, the idea here is that we have a bunch of things which are waiting
-for an event. (It's just an example of the problem here, but a relatively
-common one.)
-
-Now let's imagine two further things happen. First of all, whatever was
-waiting for the interesting thing goes away. (Perhaps the request times out,
-or something *even more* interesting happens.)
-
-Secondly, let's suppose that we decide that the interesting thing is never
-going to happen, and we reset the listener queue:
-
-.. code:: python
-
- def reset_listener_queue():
- listener_queue.clear()
-
-So, both ends of the deferred chain have now dropped their references, and the
-deferred chain is now orphaned, and will be garbage-collected at some point.
-Note that ``await_something_interesting`` is a generator function, and when
-Python garbage-collects generator functions, it gives them a chance to clean
-up by making the ``yield`` raise a ``GeneratorExit`` exception. In our case,
-that means that the ``__exit__`` handler of ``PreserveLoggingContext`` will
-carefully restore the request context, but there is now nothing waiting for
-its return, so the request context is never cleared.
-
-To reiterate, this problem only arises when *both* ends of a deferred chain
-are dropped. Dropping the the reference to a deferred you're supposed to be
-calling is probably bad practice, so this doesn't actually happen too much.
-Unfortunately, when it does happen, it will lead to leaked logcontexts which
-are incredibly hard to track down.
diff --git a/docs/media_repository.md b/docs/media_repository.md
new file mode 100644
index 0000000000..1bf8f16f55
--- /dev/null
+++ b/docs/media_repository.md
@@ -0,0 +1,30 @@
+# Media Repository
+
+*Synapse implementation-specific details for the media repository*
+
+The media repository is where attachments and avatar photos are stored.
+It stores attachment content and thumbnails for media uploaded by local users.
+It caches attachment content and thumbnails for media uploaded by remote users.
+
+## Storage
+
+Each item of media is assigned a `media_id` when it is uploaded.
+The `media_id` is a randomly chosen, URL safe 24 character string.
+
+Metadata such as the MIME type, upload time and length are stored in the
+sqlite3 database indexed by `media_id`.
+
+Content is stored on the filesystem under a `"local_content"` directory.
+
+Thumbnails are stored under a `"local_thumbnails"` directory.
+
+The item with `media_id` `"aabbccccccccdddddddddddd"` is stored under
+`"local_content/aa/bb/ccccccccdddddddddddd"`. Its thumbnail with width
+`128` and height `96` and type `"image/jpeg"` is stored under
+`"local_thumbnails/aa/bb/ccccccccdddddddddddd/128-96-image-jpeg"`
+
+Remote content is cached under the `"remote_content"` directory. Each item of
+remote content is assigned a local `"filesystem_id"` (used in place of the
+remote `media_id`) to ensure that the directory structure
+`"remote_content/server_name/aa/bb/ccccccccdddddddddddd"`
+is appropriate. Thumbnails for remote content are stored under
+`"remote_thumbnails/server_name/..."`
diff --git a/docs/media_repository.rst b/docs/media_repository.rst
deleted file mode 100644
index 1037b5be63..0000000000
--- a/docs/media_repository.rst
+++ /dev/null
@@ -1,27 +0,0 @@
-Media Repository
-================
-
-*Synapse implementation-specific details for the media repository*
-
-The media repository is where attachments and avatar photos are stored.
-It stores attachment content and thumbnails for media uploaded by local users.
-It caches attachment content and thumbnails for media uploaded by remote users.
-
-Storage
--------
-
-Each item of media is assigned a ``media_id`` when it is uploaded.
-The ``media_id`` is a randomly chosen, URL safe 24 character string.
-Metadata such as the MIME type, upload time and length are stored in the
-sqlite3 database indexed by ``media_id``.
-Content is stored on the filesystem under a ``"local_content"`` directory.
-Thumbnails are stored under a ``"local_thumbnails"`` directory.
-The item with ``media_id`` ``"aabbccccccccdddddddddddd"`` is stored under
-``"local_content/aa/bb/ccccccccdddddddddddd"``. Its thumbnail with width
-``128`` and height ``96`` and type ``"image/jpeg"`` is stored under
-``"local_thumbnails/aa/bb/ccccccccdddddddddddd/128-96-image-jpeg"``
-Remote content is cached under ``"remote_content"`` directory. Each item of
-remote content is assigned a local "``filesystem_id``" to ensure that the
-directory structure ``"remote_content/server_name/aa/bb/ccccccccdddddddddddd"``
-is appropriate. Thumbnails for remote content are stored under
-``"remote_thumbnails/server_name/..."``
diff --git a/docs/metrics-howto.md b/docs/metrics-howto.md
new file mode 100644
index 0000000000..32abb9f44e
--- /dev/null
+++ b/docs/metrics-howto.md
@@ -0,0 +1,217 @@
+# How to monitor Synapse metrics using Prometheus
+
+1. Install Prometheus:
+
+ Follow instructions at
+ <http://prometheus.io/docs/introduction/install/>
+
+1. Enable Synapse metrics:
+
+ There are two methods of enabling metrics in Synapse.
+
+ The first serves the metrics as a part of the usual web server and
+    can be enabled by adding the "metrics" resource to the existing
+ listener as such:
+
+ resources:
+ - names:
+ - client
+ - metrics
+
+ This provides a simple way of adding metrics to your Synapse
+ installation, and serves under `/_synapse/metrics`. If you do not
+    wish your metrics to be publicly exposed, you will need to either
+    filter them out at your load balancer, or use the second method.
+
+    The second method runs the metrics server on a different port, in a
+    different thread to Synapse. This can make it more resilient under
+    heavy load which might otherwise prevent metrics from being
+    retrieved, and makes it easier to expose the metrics to internal
+    networks only. The served metrics are available over HTTP only, and
+    will be available at `/`.
+
+ Add a new listener to homeserver.yaml:
+
+ listeners:
+ - type: metrics
+ port: 9000
+ bind_addresses:
+ - '0.0.0.0'
+
+ For both options, you will need to ensure that `enable_metrics` is
+ set to `True`.
+
+1. Restart Synapse.
+
+1. Add a Prometheus target for Synapse.
+
+    You will need to set `metrics_path` to a non-default value (under
+ `scrape_configs`):
+
+ - job_name: "synapse"
+ metrics_path: "/_synapse/metrics"
+ static_configs:
+ - targets: ["my.server.here:port"]
+
+ where `my.server.here` is the IP address of Synapse, and `port` is
+ the listener port configured with the `metrics` resource.
+
+ If your prometheus is older than 1.5.2, you will need to replace
+ `static_configs` in the above with `target_groups`.
+
+1. Restart Prometheus.
+
+## Renaming of metrics & deprecation of old names in 1.2
+
+Synapse 1.2 updates the Prometheus metrics to match the naming
+convention of the upstream `prometheus_client`. The old names are
+considered deprecated and will be removed in a future version of
+Synapse.
+
+| New Name | Old Name |
+| ---------------------------------------------------------------------------- | ---------------------------------------------------------------------- |
+| python_gc_objects_collected_total | python_gc_objects_collected |
+| python_gc_objects_uncollectable_total | python_gc_objects_uncollectable |
+| python_gc_collections_total | python_gc_collections |
+| process_cpu_seconds_total | process_cpu_seconds |
+| synapse_federation_client_sent_transactions_total | synapse_federation_client_sent_transactions |
+| synapse_federation_client_events_processed_total | synapse_federation_client_events_processed |
+| synapse_event_processing_loop_count_total | synapse_event_processing_loop_count |
+| synapse_event_processing_loop_room_count_total | synapse_event_processing_loop_room_count |
+| synapse_util_metrics_block_count_total | synapse_util_metrics_block_count |
+| synapse_util_metrics_block_time_seconds_total | synapse_util_metrics_block_time_seconds |
+| synapse_util_metrics_block_ru_utime_seconds_total | synapse_util_metrics_block_ru_utime_seconds |
+| synapse_util_metrics_block_ru_stime_seconds_total | synapse_util_metrics_block_ru_stime_seconds |
+| synapse_util_metrics_block_db_txn_count_total | synapse_util_metrics_block_db_txn_count |
+| synapse_util_metrics_block_db_txn_duration_seconds_total | synapse_util_metrics_block_db_txn_duration_seconds |
+| synapse_util_metrics_block_db_sched_duration_seconds_total | synapse_util_metrics_block_db_sched_duration_seconds |
+| synapse_background_process_start_count_total | synapse_background_process_start_count |
+| synapse_background_process_ru_utime_seconds_total | synapse_background_process_ru_utime_seconds |
+| synapse_background_process_ru_stime_seconds_total | synapse_background_process_ru_stime_seconds |
+| synapse_background_process_db_txn_count_total | synapse_background_process_db_txn_count |
+| synapse_background_process_db_txn_duration_seconds_total | synapse_background_process_db_txn_duration_seconds |
+| synapse_background_process_db_sched_duration_seconds_total | synapse_background_process_db_sched_duration_seconds |
+| synapse_storage_events_persisted_events_total | synapse_storage_events_persisted_events |
+| synapse_storage_events_persisted_events_sep_total | synapse_storage_events_persisted_events_sep |
+| synapse_storage_events_state_delta_total | synapse_storage_events_state_delta |
+| synapse_storage_events_state_delta_single_event_total | synapse_storage_events_state_delta_single_event |
+| synapse_storage_events_state_delta_reuse_delta_total | synapse_storage_events_state_delta_reuse_delta |
+| synapse_federation_server_received_pdus_total | synapse_federation_server_received_pdus |
+| synapse_federation_server_received_edus_total | synapse_federation_server_received_edus |
+| synapse_handler_presence_notified_presence_total | synapse_handler_presence_notified_presence |
+| synapse_handler_presence_federation_presence_out_total | synapse_handler_presence_federation_presence_out |
+| synapse_handler_presence_presence_updates_total | synapse_handler_presence_presence_updates |
+| synapse_handler_presence_timers_fired_total | synapse_handler_presence_timers_fired |
+| synapse_handler_presence_federation_presence_total | synapse_handler_presence_federation_presence |
+| synapse_handler_presence_bump_active_time_total | synapse_handler_presence_bump_active_time |
+| synapse_federation_client_sent_edus_total | synapse_federation_client_sent_edus |
+| synapse_federation_client_sent_pdu_destinations_count_total | synapse_federation_client_sent_pdu_destinations:count |
+| synapse_federation_client_sent_pdu_destinations_total | synapse_federation_client_sent_pdu_destinations:total |
+| synapse_handlers_appservice_events_processed_total | synapse_handlers_appservice_events_processed |
+| synapse_notifier_notified_events_total | synapse_notifier_notified_events |
+| synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter |
+| synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter |
+| synapse_http_httppusher_http_pushes_processed_total | synapse_http_httppusher_http_pushes_processed |
+| synapse_http_httppusher_http_pushes_failed_total | synapse_http_httppusher_http_pushes_failed |
+| synapse_http_httppusher_badge_updates_processed_total | synapse_http_httppusher_badge_updates_processed |
+| synapse_http_httppusher_badge_updates_failed_total | synapse_http_httppusher_badge_updates_failed |
+
+## Removal of deprecated metrics & time based counters becoming histograms in 0.31.0
+
+The duplicated metrics deprecated in Synapse 0.27.0 have been removed.
+
+All time duration-based metrics have been changed to be seconds. This
+affects:
+
+| msec -> sec metrics |
+| -------------------------------------- |
+| python_gc_time |
+| python_twisted_reactor_tick_time |
+| synapse_storage_query_time |
+| synapse_storage_schedule_time |
+| synapse_storage_transaction_time |
+
+Several metrics have been changed to be histograms, which sort entries
+into buckets and allow better analysis. The following metrics are now
+histograms:
+
+| Altered metrics |
+| ------------------------------------------------ |
+| python_gc_time |
+| python_twisted_reactor_pending_calls |
+| python_twisted_reactor_tick_time |
+| synapse_http_server_response_time_seconds |
+| synapse_storage_query_time |
+| synapse_storage_schedule_time |
+| synapse_storage_transaction_time |
+
+## Block and response metrics renamed for 0.27.0
+
+Synapse 0.27.0 begins the process of rationalising the duplicate
+`*:count` metrics reported for the resource tracking for code blocks and
+HTTP requests.
+
+At the same time, the corresponding `*:total` metrics are being renamed,
+as the `:total` suffix no longer makes sense in the absence of a
+corresponding `:count` metric.
+
+To enable a graceful migration path, this release just adds new names
+for the metrics being renamed. A future release will remove the old
+ones.
+
+The following table shows the new metrics, and the old metrics which
+they are replacing.
+
+| New name | Old name |
+| ------------------------------------------------------------- | ---------------------------------------------------------- |
+| synapse_util_metrics_block_count | synapse_util_metrics_block_timer:count |
+| synapse_util_metrics_block_count | synapse_util_metrics_block_ru_utime:count |
+| synapse_util_metrics_block_count | synapse_util_metrics_block_ru_stime:count |
+| synapse_util_metrics_block_count | synapse_util_metrics_block_db_txn_count:count |
+| synapse_util_metrics_block_count | synapse_util_metrics_block_db_txn_duration:count |
+| synapse_util_metrics_block_time_seconds | synapse_util_metrics_block_timer:total |
+| synapse_util_metrics_block_ru_utime_seconds | synapse_util_metrics_block_ru_utime:total |
+| synapse_util_metrics_block_ru_stime_seconds | synapse_util_metrics_block_ru_stime:total |
+| synapse_util_metrics_block_db_txn_count | synapse_util_metrics_block_db_txn_count:total |
+| synapse_util_metrics_block_db_txn_duration_seconds | synapse_util_metrics_block_db_txn_duration:total |
+| synapse_http_server_response_count | synapse_http_server_requests |
+| synapse_http_server_response_count | synapse_http_server_response_time:count |
+| synapse_http_server_response_count | synapse_http_server_response_ru_utime:count |
+| synapse_http_server_response_count | synapse_http_server_response_ru_stime:count |
+| synapse_http_server_response_count | synapse_http_server_response_db_txn_count:count |
+| synapse_http_server_response_count | synapse_http_server_response_db_txn_duration:count |
+| synapse_http_server_response_time_seconds | synapse_http_server_response_time:total |
+| synapse_http_server_response_ru_utime_seconds | synapse_http_server_response_ru_utime:total |
+| synapse_http_server_response_ru_stime_seconds | synapse_http_server_response_ru_stime:total |
+| synapse_http_server_response_db_txn_count | synapse_http_server_response_db_txn_count:total |
+| synapse_http_server_response_db_txn_duration_seconds | synapse_http_server_response_db_txn_duration:total |
+
+## Standard Metric Names
+
+As of synapse version 0.18.2, the format of the process-wide metrics has
+been changed to fit Prometheus standard naming conventions. Additionally,
+the units have been changed to seconds, from milliseconds.
+
+| New name | Old name |
+| ---------------------------------------- | --------------------------------- |
+| process_cpu_user_seconds_total | process_resource_utime / 1000 |
+| process_cpu_system_seconds_total | process_resource_stime / 1000 |
+| process_open_fds (no 'type' label)       | process_fds                       |
+
+The python-specific counts of garbage collector performance have been
+renamed.
+
+| New name | Old name |
+| -------------------------------- | -------------------------- |
+| python_gc_time | reactor_gc_time |
+| python_gc_unreachable_total | reactor_gc_unreachable |
+| python_gc_counts | reactor_gc_counts |
+
+The twisted-specific reactor metrics have been renamed.
+
+| New name | Old name |
+| -------------------------------------- | ----------------------- |
+| python_twisted_reactor_pending_calls | reactor_pending_calls |
+| python_twisted_reactor_tick_time | reactor_tick_time |
diff --git a/docs/metrics-howto.rst b/docs/metrics-howto.rst
deleted file mode 100644
index 973641f3dc..0000000000
--- a/docs/metrics-howto.rst
+++ /dev/null
@@ -1,285 +0,0 @@
-How to monitor Synapse metrics using Prometheus
-===============================================
-
-1. Install Prometheus:
-
- Follow instructions at http://prometheus.io/docs/introduction/install/
-
-2. Enable Synapse metrics:
-
- There are two methods of enabling metrics in Synapse.
-
- The first serves the metrics as a part of the usual web server and can be
- enabled by adding the "metrics" resource to the existing listener as such::
-
- resources:
- - names:
- - client
- - metrics
-
- This provides a simple way of adding metrics to your Synapse installation,
- and serves under ``/_synapse/metrics``. If you do not wish your metrics be
- publicly exposed, you will need to either filter it out at your load
- balancer, or use the second method.
-
- The second method runs the metrics server on a different port, in a
- different thread to Synapse. This can make it more resilient to heavy load
- meaning metrics cannot be retrieved, and can be exposed to just internal
- networks easier. The served metrics are available over HTTP only, and will
- be available at ``/``.
-
- Add a new listener to homeserver.yaml::
-
- listeners:
- - type: metrics
- port: 9000
- bind_addresses:
- - '0.0.0.0'
-
- For both options, you will need to ensure that ``enable_metrics`` is set to
- ``True``.
-
- Restart Synapse.
-
-3. Add a Prometheus target for Synapse.
-
- It needs to set the ``metrics_path`` to a non-default value (under ``scrape_configs``)::
-
- - job_name: "synapse"
- metrics_path: "/_synapse/metrics"
- static_configs:
- - targets: ["my.server.here:port"]
-
- where ``my.server.here`` is the IP address of Synapse, and ``port`` is the listener port
- configured with the ``metrics`` resource.
-
- If your prometheus is older than 1.5.2, you will need to replace
- ``static_configs`` in the above with ``target_groups``.
-
- Restart Prometheus.
-
-
-Renaming of metrics & deprecation of old names in 1.2
------------------------------------------------------
-
-Synapse 1.2 updates the Prometheus metrics to match the naming convention of the
-upstream ``prometheus_client``. The old names are considered deprecated and will
-be removed in a future version of Synapse.
-
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| New Name | Old Name |
-+=============================================================================+=======================================================================+
-| python_gc_objects_collected_total | python_gc_objects_collected |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| python_gc_objects_uncollectable_total | python_gc_objects_uncollectable |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| python_gc_collections_total | python_gc_collections |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| process_cpu_seconds_total | process_cpu_seconds |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_federation_client_sent_transactions_total | synapse_federation_client_sent_transactions |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_federation_client_events_processed_total | synapse_federation_client_events_processed |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_event_processing_loop_count_total | synapse_event_processing_loop_count |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_event_processing_loop_room_count_total | synapse_event_processing_loop_room_count |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_util_metrics_block_count_total | synapse_util_metrics_block_count |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_util_metrics_block_time_seconds_total | synapse_util_metrics_block_time_seconds |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_util_metrics_block_ru_utime_seconds_total | synapse_util_metrics_block_ru_utime_seconds |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_util_metrics_block_ru_stime_seconds_total | synapse_util_metrics_block_ru_stime_seconds |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_util_metrics_block_db_txn_count_total | synapse_util_metrics_block_db_txn_count |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_util_metrics_block_db_txn_duration_seconds_total | synapse_util_metrics_block_db_txn_duration_seconds |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_util_metrics_block_db_sched_duration_seconds_total | synapse_util_metrics_block_db_sched_duration_seconds |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_background_process_start_count_total | synapse_background_process_start_count |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_background_process_ru_utime_seconds_total | synapse_background_process_ru_utime_seconds |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_background_process_ru_stime_seconds_total | synapse_background_process_ru_stime_seconds |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_background_process_db_txn_count_total | synapse_background_process_db_txn_count |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_background_process_db_txn_duration_seconds_total | synapse_background_process_db_txn_duration_seconds |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_background_process_db_sched_duration_seconds_total | synapse_background_process_db_sched_duration_seconds |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_storage_events_persisted_events_total | synapse_storage_events_persisted_events |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_storage_events_persisted_events_sep_total | synapse_storage_events_persisted_events_sep |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_storage_events_state_delta_total | synapse_storage_events_state_delta |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_storage_events_state_delta_single_event_total | synapse_storage_events_state_delta_single_event |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_storage_events_state_delta_reuse_delta_total | synapse_storage_events_state_delta_reuse_delta |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_federation_server_received_pdus_total | synapse_federation_server_received_pdus |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_federation_server_received_edus_total | synapse_federation_server_received_edus |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_handler_presence_notified_presence_total | synapse_handler_presence_notified_presence |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_handler_presence_federation_presence_out_total | synapse_handler_presence_federation_presence_out |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_handler_presence_presence_updates_total | synapse_handler_presence_presence_updates |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_handler_presence_timers_fired_total | synapse_handler_presence_timers_fired |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_handler_presence_federation_presence_total | synapse_handler_presence_federation_presence |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_handler_presence_bump_active_time_total | synapse_handler_presence_bump_active_time |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_federation_client_sent_edus_total | synapse_federation_client_sent_edus |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_federation_client_sent_pdu_destinations_count_total | synapse_federation_client_sent_pdu_destinations:count |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_federation_client_sent_pdu_destinations_total | synapse_federation_client_sent_pdu_destinations:total |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_handlers_appservice_events_processed_total | synapse_handlers_appservice_events_processed |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_notifier_notified_events_total | synapse_notifier_notified_events |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_http_httppusher_http_pushes_processed_total | synapse_http_httppusher_http_pushes_processed |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_http_httppusher_http_pushes_failed_total | synapse_http_httppusher_http_pushes_failed |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_http_httppusher_badge_updates_processed_total | synapse_http_httppusher_badge_updates_processed |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-| synapse_http_httppusher_badge_updates_failed_total | synapse_http_httppusher_badge_updates_failed |
-+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
-
-
-Removal of deprecated metrics & time based counters becoming histograms in 0.31.0
----------------------------------------------------------------------------------
-
-The duplicated metrics deprecated in Synapse 0.27.0 have been removed.
-
-All time duration-based metrics have been changed to be seconds. This affects:
-
-+----------------------------------+
-| msec -> sec metrics |
-+==================================+
-| python_gc_time |
-+----------------------------------+
-| python_twisted_reactor_tick_time |
-+----------------------------------+
-| synapse_storage_query_time |
-+----------------------------------+
-| synapse_storage_schedule_time |
-+----------------------------------+
-| synapse_storage_transaction_time |
-+----------------------------------+
-
-Several metrics have been changed to be histograms, which sort entries into
-buckets and allow better analysis. The following metrics are now histograms:
-
-+-------------------------------------------+
-| Altered metrics |
-+===========================================+
-| python_gc_time |
-+-------------------------------------------+
-| python_twisted_reactor_pending_calls |
-+-------------------------------------------+
-| python_twisted_reactor_tick_time |
-+-------------------------------------------+
-| synapse_http_server_response_time_seconds |
-+-------------------------------------------+
-| synapse_storage_query_time |
-+-------------------------------------------+
-| synapse_storage_schedule_time |
-+-------------------------------------------+
-| synapse_storage_transaction_time |
-+-------------------------------------------+
-
-
-Block and response metrics renamed for 0.27.0
----------------------------------------------
-
-Synapse 0.27.0 begins the process of rationalising the duplicate ``*:count``
-metrics reported for the resource tracking for code blocks and HTTP requests.
-
-At the same time, the corresponding ``*:total`` metrics are being renamed, as
-the ``:total`` suffix no longer makes sense in the absence of a corresponding
-``:count`` metric.
-
-To enable a graceful migration path, this release just adds new names for the
-metrics being renamed. A future release will remove the old ones.
-
-The following table shows the new metrics, and the old metrics which they are
-replacing.
-
-==================================================== ===================================================
-New name Old name
-==================================================== ===================================================
-synapse_util_metrics_block_count synapse_util_metrics_block_timer:count
-synapse_util_metrics_block_count synapse_util_metrics_block_ru_utime:count
-synapse_util_metrics_block_count synapse_util_metrics_block_ru_stime:count
-synapse_util_metrics_block_count synapse_util_metrics_block_db_txn_count:count
-synapse_util_metrics_block_count synapse_util_metrics_block_db_txn_duration:count
-
-synapse_util_metrics_block_time_seconds synapse_util_metrics_block_timer:total
-synapse_util_metrics_block_ru_utime_seconds synapse_util_metrics_block_ru_utime:total
-synapse_util_metrics_block_ru_stime_seconds synapse_util_metrics_block_ru_stime:total
-synapse_util_metrics_block_db_txn_count synapse_util_metrics_block_db_txn_count:total
-synapse_util_metrics_block_db_txn_duration_seconds synapse_util_metrics_block_db_txn_duration:total
-
-synapse_http_server_response_count synapse_http_server_requests
-synapse_http_server_response_count synapse_http_server_response_time:count
-synapse_http_server_response_count synapse_http_server_response_ru_utime:count
-synapse_http_server_response_count synapse_http_server_response_ru_stime:count
-synapse_http_server_response_count synapse_http_server_response_db_txn_count:count
-synapse_http_server_response_count synapse_http_server_response_db_txn_duration:count
-
-synapse_http_server_response_time_seconds synapse_http_server_response_time:total
-synapse_http_server_response_ru_utime_seconds synapse_http_server_response_ru_utime:total
-synapse_http_server_response_ru_stime_seconds synapse_http_server_response_ru_stime:total
-synapse_http_server_response_db_txn_count synapse_http_server_response_db_txn_count:total
-synapse_http_server_response_db_txn_duration_seconds synapse_http_server_response_db_txn_duration:total
-==================================================== ===================================================
-
-
-Standard Metric Names
----------------------
-
-As of synapse version 0.18.2, the format of the process-wide metrics has been
-changed to fit prometheus standard naming conventions. Additionally the units
-have been changed to seconds, from miliseconds.
-
-================================== =============================
-New name Old name
-================================== =============================
-process_cpu_user_seconds_total process_resource_utime / 1000
-process_cpu_system_seconds_total process_resource_stime / 1000
-process_open_fds (no 'type' label) process_fds
-================================== =============================
-
-The python-specific counts of garbage collector performance have been renamed.
-
-=========================== ======================
-New name Old name
-=========================== ======================
-python_gc_time reactor_gc_time
-python_gc_unreachable_total reactor_gc_unreachable
-python_gc_counts reactor_gc_counts
-=========================== ======================
-
-The twisted-specific reactor metrics have been renamed.
-
-==================================== =====================
-New name Old name
-==================================== =====================
-python_twisted_reactor_pending_calls reactor_pending_calls
-python_twisted_reactor_tick_time reactor_tick_time
-==================================== =====================
diff --git a/docs/opentracing.md b/docs/opentracing.md
new file mode 100644
index 0000000000..4c7a56a5d7
--- /dev/null
+++ b/docs/opentracing.md
@@ -0,0 +1,93 @@
+# OpenTracing
+
+## Background
+
+OpenTracing is a semi-standard being adopted by a number of distributed
+tracing platforms. It is a common API for facilitating vendor-agnostic
+tracing instrumentation. That is, we can use the OpenTracing API and
+select one of a number of tracer implementations to do the heavy lifting
+in the background. Our currently selected implementation is Jaeger.
+
+OpenTracing is a tool which gives an insight into the causal
+relationship of work done in and between servers. The servers each track
+events and report them to a centralised server - in Synapse's case:
+Jaeger. The basic unit used to represent events is the span. The span
+roughly represents a single piece of work that was done and the time at
+which it occurred. A span can have child spans, meaning that the work of
+the child had to be completed for the parent span to complete, or it can
+have follow-on spans which represent work that is undertaken as a result
+of the parent but is not depended on by the parent in order to
+finish.
+
+Since this is undertaken in a distributed environment, a request to
+another server, such as an RPC or a simple GET, can be considered a span
+(a unit of work) for the local server. This causal link is what
+OpenTracing aims to capture and visualise. In order to do this, metadata
+about the local server's span, i.e. the 'span context', needs to be
+included with the request to the remote.
+
+It is up to the remote server to decide what it does with the spans it
+creates. This is called the sampling policy and it can be configured
+through Jaeger's settings.
+
+For OpenTracing concepts see
+<https://opentracing.io/docs/overview/what-is-tracing/>.
+
+For more information about Jaeger's implementation see
+<https://www.jaegertracing.io/docs/>
+
+## Setting up OpenTracing
+
+To receive OpenTracing spans, start up a Jaeger server. This can be done
+using docker like so:
+
+```sh
+docker run -d --name jaeger \
+ -p 6831:6831/udp \
+ -p 6832:6832/udp \
+ -p 5778:5778 \
+ -p 16686:16686 \
+ -p 14268:14268 \
+ jaegertracing/all-in-one:1.13
+```
+
+Latest documentation is probably at
+<https://www.jaegertracing.io/docs/1.13/getting-started/>
+
+## Enable OpenTracing in Synapse
+
+OpenTracing is not enabled by default. It must be enabled in the
+homeserver config by uncommenting the config options under `opentracing`
+as shown in the [sample config](./sample_config.yaml). For example:
+
+```yaml
+opentracing:
+ tracer_enabled: true
+ homeserver_whitelist:
+ - "mytrustedhomeserver.org"
+ - "*.myotherhomeservers.com"
+```
+
+## Homeserver whitelisting
+
+The homeserver whitelist is configured using regular expressions. A list
+of regular expressions can be given and their union will be compared
+when propagating any span contexts to another homeserver.
+
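+As a rough illustration (not Synapse's actual implementation), checking a
+destination homeserver name against the union of the configured expressions
+might look like this:
+
+```python
+import re
+
+# Hypothetical example patterns; in practice these come from the
+# homeserver_whitelist option in homeserver.yaml.
+whitelist = [r"mytrustedhomeserver\.org", r".*\.myotherhomeservers\.com"]
+whitelist_re = re.compile("|".join("(%s)" % pattern for pattern in whitelist))
+
+def should_propagate(destination: str) -> bool:
+    # Only propagate span contexts to whitelisted homeservers.
+    return whitelist_re.fullmatch(destination) is not None
+```
+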
+Though it's mostly safe to send and receive span contexts to and from
+untrusted users, since span contexts are usually opaque IDs, doing so can
+lead to two problems, namely:
+
+- If the span context is marked as sampled by the sending homeserver
+ the receiver will sample it. Therefore two homeservers with wildly
+ different sampling policies could incur higher sampling counts than
+ intended.
+- Sending servers can attach arbitrary data to spans, known as
+ 'baggage'. For safety this has been disabled in Synapse but that
+ doesn't prevent another server sending you baggage which will be
+ logged to OpenTracing's logs.
+
+## Configuring Jaeger
+
+Sampling strategies can be set as in this document:
+<https://www.jaegertracing.io/docs/1.13/sampling/>
diff --git a/docs/opentracing.rst b/docs/opentracing.rst
deleted file mode 100644
index 6e98ab56ba..0000000000
--- a/docs/opentracing.rst
+++ /dev/null
@@ -1,123 +0,0 @@
-===========
-OpenTracing
-===========
-
-Background
-----------
-
-OpenTracing is a semi-standard being adopted by a number of distributed tracing
-platforms. It is a common api for facilitating vendor-agnostic tracing
-instrumentation. That is, we can use the OpenTracing api and select one of a
-number of tracer implementations to do the heavy lifting in the background.
-Our current selected implementation is Jaeger.
-
-OpenTracing is a tool which gives an insight into the causal relationship of
-work done in and between servers. The servers each track events and report them
-to a centralised server - in Synapse's case: Jaeger. The basic unit used to
-represent events is the span. The span roughly represents a single piece of work
-that was done and the time at which it occurred. A span can have child spans,
-meaning that the work of the child had to be completed for the parent span to
-complete, or it can have follow-on spans which represent work that is undertaken
-as a result of the parent but is not depended on by the parent to in order to
-finish.
-
-Since this is undertaken in a distributed environment a request to another
-server, such as an RPC or a simple GET, can be considered a span (a unit or
-work) for the local server. This causal link is what OpenTracing aims to
-capture and visualise. In order to do this metadata about the local server's
-span, i.e the 'span context', needs to be included with the request to the
-remote.
-
-It is up to the remote server to decide what it does with the spans
-it creates. This is called the sampling policy and it can be configured
-through Jaeger's settings.
-
-For OpenTracing concepts see
-https://opentracing.io/docs/overview/what-is-tracing/.
-
-For more information about Jaeger's implementation see
-https://www.jaegertracing.io/docs/
-
-=====================
-Seting up OpenTracing
-=====================
-
-To receive OpenTracing spans, start up a Jaeger server. This can be done
-using docker like so:
-
-.. code-block:: bash
-
- docker run -d --name jaeger
- -p 6831:6831/udp \
- -p 6832:6832/udp \
- -p 5778:5778 \
- -p 16686:16686 \
- -p 14268:14268 \
- jaegertracing/all-in-one:1.13
-
-Latest documentation is probably at
-https://www.jaegertracing.io/docs/1.13/getting-started/
-
-
-Enable OpenTracing in Synapse
------------------------------
-
-OpenTracing is not enabled by default. It must be enabled in the homeserver
-config by uncommenting the config options under ``opentracing`` as shown in
-the `sample config <./sample_config.yaml>`_. For example:
-
-.. code-block:: yaml
-
- opentracing:
- tracer_enabled: true
- homeserver_whitelist:
- - "mytrustedhomeserver.org"
- - "*.myotherhomeservers.com"
-
-Homeserver whitelisting
------------------------
-
-The homeserver whitelist is configured using regular expressions. A list of regular
-expressions can be given and their union will be compared when propagating any
-spans contexts to another homeserver.
-
-Though it's mostly safe to send and receive span contexts to and from
-untrusted users since span contexts are usually opaque ids it can lead to
-two problems, namely:
-
-- If the span context is marked as sampled by the sending homeserver the receiver will
- sample it. Therefore two homeservers with wildly different sampling policies
- could incur higher sampling counts than intended.
-- Sending servers can attach arbitrary data to spans, known as 'baggage'. For safety this has been disabled in Synapse
- but that doesn't prevent another server sending you baggage which will be logged
- to OpenTracing's logs.
-
-==========
-EDU FORMAT
-==========
-
-EDUs can contain tracing data in their content. This is not specced but
-it could be of interest for other homeservers.
-
-EDU format (if you're using jaeger):
-
-.. code-block:: json
-
- {
- "edu_type": "type",
- "content": {
- "org.matrix.opentracing_context": {
- "uber-trace-id": "fe57cf3e65083289"
- }
- }
- }
-
-Though you don't have to use jaeger you must inject the span context into
-`org.matrix.opentracing_context` using the opentracing `Format.TEXT_MAP` inject method.
-
-==================
-Configuring Jaeger
-==================
-
-Sampling strategies can be set as in this document:
-https://www.jaegertracing.io/docs/1.13/sampling/
diff --git a/docs/password_auth_providers.md b/docs/password_auth_providers.md
new file mode 100644
index 0000000000..0db1a3804a
--- /dev/null
+++ b/docs/password_auth_providers.md
@@ -0,0 +1,116 @@
+# Password auth provider modules
+
+Password auth providers offer a way for server administrators to
+integrate their Synapse installation with an existing authentication
+system.
+
+A password auth provider is a Python class which is dynamically loaded
+into Synapse, and provides a number of methods by which it can integrate
+with the authentication system.
+
+This document serves as a reference for those looking to implement their
+own password auth providers.
+
+## Required methods
+
+Password auth provider classes must provide the following methods:
+
+*class* `SomeProvider.parse_config`(*config*)
+
+> This method is passed the `config` object for this module from the
+> homeserver configuration file.
+>
+> It should perform any appropriate sanity checks on the provided
+> configuration, and return an object which is then passed into
+> `__init__`.
+
+*class* `SomeProvider`(*config*, *account_handler*)
+
+> The constructor is passed the config object returned by
+> `parse_config`, and a `synapse.module_api.ModuleApi` object which
+> allows the password provider to check if accounts exist and/or create
+> new ones.
+
+## Optional methods
+
+Password auth provider classes may optionally provide the following
+methods.
+
+*class* `SomeProvider.get_db_schema_files`()
+
+> This method, if implemented, should return an Iterable of
+> `(name, stream)` pairs of database schema files. Each file is applied
+> in turn at initialisation, and a record is then made in the database
+> so that it is not re-applied on the next start.
+
+`someprovider.get_supported_login_types`()
+
+> This method, if implemented, should return a `dict` mapping from a
+> login type identifier (such as `m.login.password`) to an iterable
+> giving the fields which must be provided by the user in the submission
+> to the `/login` api. These fields are passed in the `login_dict`
+> dictionary to `check_auth`.
+>
+> For example, if a password auth provider wants to implement a custom
+> login type of `com.example.custom_login`, where the client is expected
+> to pass the fields `secret1` and `secret2`, the provider should
+> implement this method and return the following dict:
+>
+> {"com.example.custom_login": ("secret1", "secret2")}
+
+`someprovider.check_auth`(*username*, *login_type*, *login_dict*)
+
+> This method is the one that does the real work. If implemented, it
+> will be called for each login attempt where the login type matches one
+> of the keys returned by `get_supported_login_types`.
+>
+> It is passed the (possibly UNqualified) `user` provided by the client,
+> the login type, and a dictionary of login secrets passed by the
+> client.
+>
+> The method should return a Twisted `Deferred` object, which resolves
+> to the canonical `@localpart:domain` user id if authentication is
+> successful, and `None` if not.
+>
+> Alternatively, the `Deferred` can resolve to a `(str, func)` tuple, in
+> which case the second field is a callback which will be called with
+> the result from the `/login` call (including `access_token`,
+> `device_id`, etc.)
+
+`someprovider.check_3pid_auth`(*medium*, *address*, *password*)
+
+> This method, if implemented, is called when a user attempts to
+> register or log in with a third party identifier, such as email. It is
+> passed the medium (ex. "email"), an address (ex.
+> "<jdoe@example.com>") and the user's password.
+>
+> The method should return a Twisted `Deferred` object, which resolves
+> to a `str` containing the user's (canonical) User ID if
+> authentication was successful, and `None` if not.
+>
+> As with `check_auth`, the `Deferred` may alternatively resolve to a
+> `(user_id, callback)` tuple.
+
+`someprovider.check_password`(*user_id*, *password*)
+
+> This method provides a simpler interface than
+> `get_supported_login_types` and `check_auth` for password auth
+> providers that just want to provide a mechanism for validating
+> `m.login.password` logins.
+>
+> If implemented, it will be called to check logins with an
+> `m.login.password` login type. It is passed a qualified
+> `@localpart:domain` user id, and the password provided by the user.
+>
+> The method should return a Twisted `Deferred` object, which resolves
+> to `True` if authentication is successful, and `False` if not.
+
+`someprovider.on_logged_out`(*user_id*, *device_id*, *access_token*)
+
+> This method, if implemented, is called when a user logs out. It is
+> passed the qualified user ID, the ID of the deactivated device (if
+> any: access tokens are occasionally created without an associated
+> device ID), and the (now deactivated) access token.
+>
+> It may return a Twisted `Deferred` object; the logout request will
+> wait for the deferred to complete but the result is ignored.
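+
+## Example
+
+As an illustration of the interface described above, a minimal (and
+deliberately simplistic) provider might look like the sketch below. The
+class name and the `shared_secret` option are invented for the example and
+are not part of Synapse.
+
+    from twisted.internet import defer
+
+    class ExampleAuthProvider:
+        @staticmethod
+        def parse_config(config):
+            # Sanity-check the options from homeserver.yaml and return the
+            # object that will be passed to __init__.
+            if "shared_secret" not in config:
+                raise Exception("shared_secret is required")
+            return config
+
+        def __init__(self, config, account_handler):
+            self.account_handler = account_handler
+            self.shared_secret = config["shared_secret"]
+
+        def check_password(self, user_id, password):
+            # user_id is a fully-qualified @localpart:domain ID. Resolve the
+            # Deferred to True or False as described above.
+            return defer.succeed(password == self.shared_secret)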
diff --git a/docs/password_auth_providers.rst b/docs/password_auth_providers.rst
deleted file mode 100644
index 6149ba7458..0000000000
--- a/docs/password_auth_providers.rst
+++ /dev/null
@@ -1,113 +0,0 @@
-Password auth provider modules
-==============================
-
-Password auth providers offer a way for server administrators to integrate
-their Synapse installation with an existing authentication system.
-
-A password auth provider is a Python class which is dynamically loaded into
-Synapse, and provides a number of methods by which it can integrate with the
-authentication system.
-
-This document serves as a reference for those looking to implement their own
-password auth providers.
-
-Required methods
-----------------
-
-Password auth provider classes must provide the following methods:
-
-*class* ``SomeProvider.parse_config``\(*config*)
-
- This method is passed the ``config`` object for this module from the
- homeserver configuration file.
-
- It should perform any appropriate sanity checks on the provided
- configuration, and return an object which is then passed into ``__init__``.
-
-*class* ``SomeProvider``\(*config*, *account_handler*)
-
- The constructor is passed the config object returned by ``parse_config``,
- and a ``synapse.module_api.ModuleApi`` object which allows the
- password provider to check if accounts exist and/or create new ones.
-
-Optional methods
-----------------
-
-Password auth provider classes may optionally provide the following methods.
-
-*class* ``SomeProvider.get_db_schema_files``\()
-
- This method, if implemented, should return an Iterable of ``(name,
- stream)`` pairs of database schema files. Each file is applied in turn at
- initialisation, and a record is then made in the database so that it is
- not re-applied on the next start.
-
-``someprovider.get_supported_login_types``\()
-
- This method, if implemented, should return a ``dict`` mapping from a login
- type identifier (such as ``m.login.password``) to an iterable giving the
- fields which must be provided by the user in the submission to the
- ``/login`` api. These fields are passed in the ``login_dict`` dictionary
- to ``check_auth``.
-
- For example, if a password auth provider wants to implement a custom login
- type of ``com.example.custom_login``, where the client is expected to pass
- the fields ``secret1`` and ``secret2``, the provider should implement this
- method and return the following dict::
-
- {"com.example.custom_login": ("secret1", "secret2")}
-
-``someprovider.check_auth``\(*username*, *login_type*, *login_dict*)
-
- This method is the one that does the real work. If implemented, it will be
- called for each login attempt where the login type matches one of the keys
- returned by ``get_supported_login_types``.
-
- It is passed the (possibly UNqualified) ``user`` provided by the client,
- the login type, and a dictionary of login secrets passed by the client.
-
- The method should return a Twisted ``Deferred`` object, which resolves to
- the canonical ``@localpart:domain`` user id if authentication is successful,
- and ``None`` if not.
-
- Alternatively, the ``Deferred`` can resolve to a ``(str, func)`` tuple, in
- which case the second field is a callback which will be called with the
- result from the ``/login`` call (including ``access_token``, ``device_id``,
- etc.)
-
-``someprovider.check_3pid_auth``\(*medium*, *address*, *password*)
-
- This method, if implemented, is called when a user attempts to register or
- log in with a third party identifier, such as email. It is passed the
- medium (ex. "email"), an address (ex. "jdoe@example.com") and the user's
- password.
-
- The method should return a Twisted ``Deferred`` object, which resolves to
- a ``str`` containing the user's (canonical) User ID if authentication was
- successful, and ``None`` if not.
-
- As with ``check_auth``, the ``Deferred`` may alternatively resolve to a
- ``(user_id, callback)`` tuple.
-
-``someprovider.check_password``\(*user_id*, *password*)
-
- This method provides a simpler interface than ``get_supported_login_types``
- and ``check_auth`` for password auth providers that just want to provide a
- mechanism for validating ``m.login.password`` logins.
-
- Iif implemented, it will be called to check logins with an
- ``m.login.password`` login type. It is passed a qualified
- ``@localpart:domain`` user id, and the password provided by the user.
-
- The method should return a Twisted ``Deferred`` object, which resolves to
- ``True`` if authentication is successful, and ``False`` if not.
-
-``someprovider.on_logged_out``\(*user_id*, *device_id*, *access_token*)
-
- This method, if implemented, is called when a user logs out. It is passed
- the qualified user ID, the ID of the deactivated device (if any: access
- tokens are occasionally created without an associated device ID), and the
- (now deactivated) access token.
-
- It may return a Twisted ``Deferred`` object; the logout request will wait
- for the deferred to complete but the result is ignored.
diff --git a/docs/postgres.md b/docs/postgres.md
new file mode 100644
index 0000000000..29cf762858
--- /dev/null
+++ b/docs/postgres.md
@@ -0,0 +1,164 @@
+# Using Postgres
+
+Postgres version 9.5 or later is known to work.
+
+## Install postgres client libraries
+
+Synapse will require the python postgres client library in order to
+connect to a postgres database.
+
+- If you are using the [matrix.org debian/ubuntu
+ packages](../INSTALL.md#matrixorg-packages), the necessary python
+ library will already be installed, but you will need to ensure the
+ low-level postgres library is installed, which you can do with
+ `apt install libpq5`.
+- For other pre-built packages, please consult the documentation from
+ the relevant package.
+- If you installed synapse [in a
+ virtualenv](../INSTALL.md#installing-from-source), you can install
+ the library with:
+
+ ~/synapse/env/bin/pip install matrix-synapse[postgres]
+
+ (substituting the path to your virtualenv for `~/synapse/env`, if
+ you used a different path). You will require the postgres
+ development files. These are in the `libpq-dev` package on
+ Debian-derived distributions.
+
+## Set up database
+
+Assuming your PostgreSQL database user is called `postgres`, create a
+user `synapse_user` with:
+
+ su - postgres
+ createuser --pwprompt synapse_user
+
+Before you can authenticate with the `synapse_user`, you must create a
+database that it can access. To create a database, first connect to the
+database with your database user:
+
+ su - postgres
+ psql
+
+and then run:
+
+ CREATE DATABASE synapse
+ ENCODING 'UTF8'
+ LC_COLLATE='C'
+ LC_CTYPE='C'
+ template=template0
+ OWNER synapse_user;
+
+This would create an appropriate database named `synapse` owned by the
+`synapse_user` user (which must already have been created as above).
+
+Note that the PostgreSQL database *must* have the correct encoding set
+(as shown above), otherwise it will not be able to store UTF8 strings.
+
+You may need to enable password authentication so `synapse_user` can
+connect to the database. See
+<https://www.postgresql.org/docs/11/auth-pg-hba-conf.html>.
+
+## Tuning Postgres
+
+The default settings should be fine for most deployments. For larger
+scale deployments tuning some of the settings is recommended, details of
+which can be found at
+<https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server>.
+
+In particular, we've found tuning the following values helpful for
+performance:
+
+- `shared_buffers`
+- `effective_cache_size`
+- `work_mem`
+- `maintenance_work_mem`
+- `autovacuum_work_mem`
+
+Note that the appropriate values for those fields depend on the amount
+of free memory the database host has available.
+
+## Synapse config
+
+When you are ready to start using PostgreSQL, edit the `database`
+section in your config file to match the following lines:
+
+ database:
+ name: psycopg2
+ args:
+ user: <user>
+ password: <pass>
+ database: <db>
+ host: <host>
+ cp_min: 5
+ cp_max: 10
+
+All keys and values in `args` are passed to the `psycopg2.connect(..)`
+function, except keys beginning with `cp_`, which are consumed by the
+twisted adbapi connection pool.
+
+## Porting from SQLite
+
+### Overview
+
+The script `synapse_port_db` allows porting an existing synapse server
+backed by SQLite to using PostgreSQL. This is done as a two-phase
+process:
+
+1. Copy the existing SQLite database to a separate location (while the
+   server is down) and run the port script against that offline
+ database.
+2. Shut down the server. Rerun the port script to port any data that
+   has come in since taking the first snapshot. Restart the server against
+ the PostgreSQL database.
+
+The port script is designed to be run repeatedly against newer snapshots
+of the SQLite database file. This makes it safe to repeat step 1 if
+there was a delay between taking the previous snapshot and being ready
+to do step 2.
+
+It is safe to kill the port script at any time and restart it.
+
+### Using the port script
+
+Firstly, shut down the currently running synapse server and copy its
+database file (typically `homeserver.db`) to another location. Once the
+copy is complete, restart synapse. For instance:
+
+ ./synctl stop
+ cp homeserver.db homeserver.db.snapshot
+ ./synctl start
+
+Copy the old config file into a new config file:
+
+ cp homeserver.yaml homeserver-postgres.yaml
+
+Edit the database section as described in the section *Synapse config*
+above and with the SQLite snapshot located at `homeserver.db.snapshot`
+simply run:
+
+ synapse_port_db --sqlite-database homeserver.db.snapshot \
+ --postgres-config homeserver-postgres.yaml
+
+The flag `--curses` displays a coloured curses progress UI.
+
+If the script took a long time to complete, or time has otherwise passed
+since the original snapshot was taken, repeat the previous steps with a
+newer snapshot.
+
+To complete the conversion shut down the synapse server and run the port
+script one last time, e.g. if the SQLite database is at `homeserver.db`
+run:
+
+ synapse_port_db --sqlite-database homeserver.db \
+ --postgres-config homeserver-postgres.yaml
+
+Once that has completed, change the synapse config to point at the
+PostgreSQL database configuration file `homeserver-postgres.yaml`:
+
+ ./synctl stop
+ mv homeserver.yaml homeserver-old-sqlite.yaml
+ mv homeserver-postgres.yaml homeserver.yaml
+ ./synctl start
+
+Synapse should now be running against PostgreSQL.
diff --git a/docs/postgres.rst b/docs/postgres.rst
deleted file mode 100644
index e08a5116b9..0000000000
--- a/docs/postgres.rst
+++ /dev/null
@@ -1,166 +0,0 @@
-Using Postgres
---------------
-
-Postgres version 9.5 or later is known to work.
-
-Install postgres client libraries
-=================================
-
-Synapse will require the python postgres client library in order to connect to
-a postgres database.
-
-* If you are using the `matrix.org debian/ubuntu
- packages <../INSTALL.md#matrixorg-packages>`_,
- the necessary python library will already be installed, but you will need to
- ensure the low-level postgres library is installed, which you can do with
- ``apt install libpq5``.
-
-* For other pre-built packages, please consult the documentation from the
- relevant package.
-
-* If you installed synapse `in a virtualenv
- <../INSTALL.md#installing-from-source>`_, you can install the library with::
-
- ~/synapse/env/bin/pip install matrix-synapse[postgres]
-
- (substituting the path to your virtualenv for ``~/synapse/env``, if you used a
- different path). You will require the postgres development files. These are in
- the ``libpq-dev`` package on Debian-derived distributions.
-
-Set up database
-===============
-
-Assuming your PostgreSQL database user is called ``postgres``, create a user
-``synapse_user`` with::
-
- su - postgres
- createuser --pwprompt synapse_user
-
-Before you can authenticate with the ``synapse_user``, you must create a
-database that it can access. To create a database, first connect to the database
-with your database user::
-
- su - postgres
- psql
-
-and then run::
-
- CREATE DATABASE synapse
- ENCODING 'UTF8'
- LC_COLLATE='C'
- LC_CTYPE='C'
- template=template0
- OWNER synapse_user;
-
-This would create an appropriate database named ``synapse`` owned by the
-``synapse_user`` user (which must already have been created as above).
-
-Note that the PostgreSQL database *must* have the correct encoding set (as
-shown above), otherwise it will not be able to store UTF8 strings.
-
-You may need to enable password authentication so ``synapse_user`` can connect
-to the database. See https://www.postgresql.org/docs/11/auth-pg-hba-conf.html.
-
-Tuning Postgres
-===============
-
-The default settings should be fine for most deployments. For larger scale
-deployments tuning some of the settings is recommended, details of which can be
-found at https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server.
-
-In particular, we've found tuning the following values helpful for performance:
-
-- ``shared_buffers``
-- ``effective_cache_size``
-- ``work_mem``
-- ``maintenance_work_mem``
-- ``autovacuum_work_mem``
-
-Note that the appropriate values for those fields depend on the amount of free
-memory the database host has available.
-
-Synapse config
-==============
-
-When you are ready to start using PostgreSQL, edit the ``database`` section in
-your config file to match the following lines::
-
- database:
- name: psycopg2
- args:
- user: <user>
- password: <pass>
- database: <db>
- host: <host>
- cp_min: 5
- cp_max: 10
-
-All key, values in ``args`` are passed to the ``psycopg2.connect(..)``
-function, except keys beginning with ``cp_``, which are consumed by the twisted
-adbapi connection pool.
-
-
-Porting from SQLite
-===================
-
-Overview
-~~~~~~~~
-
-The script ``synapse_port_db`` allows porting an existing synapse server
-backed by SQLite to using PostgreSQL. This is done in as a two phase process:
-
-1. Copy the existing SQLite database to a separate location (while the server
- is down) and running the port script against that offline database.
-2. Shut down the server. Rerun the port script to port any data that has come
- in since taking the first snapshot. Restart server against the PostgreSQL
- database.
-
-The port script is designed to be run repeatedly against newer snapshots of the
-SQLite database file. This makes it safe to repeat step 1 if there was a delay
-between taking the previous snapshot and being ready to do step 2.
-
-It is safe to at any time kill the port script and restart it.
-
-Using the port script
-~~~~~~~~~~~~~~~~~~~~~
-
-Firstly, shut down the currently running synapse server and copy its database
-file (typically ``homeserver.db``) to another location. Once the copy is
-complete, restart synapse. For instance::
-
- ./synctl stop
- cp homeserver.db homeserver.db.snapshot
- ./synctl start
-
-Copy the old config file into a new config file::
-
- cp homeserver.yaml homeserver-postgres.yaml
-
-Edit the database section as described in the section *Synapse config* above
-and with the SQLite snapshot located at ``homeserver.db.snapshot`` simply run::
-
- synapse_port_db --sqlite-database homeserver.db.snapshot \
- --postgres-config homeserver-postgres.yaml
-
-The flag ``--curses`` displays a coloured curses progress UI.
-
-If the script took a long time to complete, or time has otherwise passed since
-the original snapshot was taken, repeat the previous steps with a newer
-snapshot.
-
-To complete the conversion shut down the synapse server and run the port
-script one last time, e.g. if the SQLite database is at ``homeserver.db``
-run::
-
- synapse_port_db --sqlite-database homeserver.db \
- --postgres-config homeserver-postgres.yaml
-
-Once that has completed, change the synapse config to point at the PostgreSQL
-database configuration file ``homeserver-postgres.yaml``::
-
- ./synctl stop
- mv homeserver.yaml homeserver-old-sqlite.yaml
- mv homeserver-postgres.yaml homeserver.yaml
- ./synctl start
-
-Synapse should now be running against PostgreSQL.
diff --git a/docs/replication.md b/docs/replication.md
new file mode 100644
index 0000000000..ed88233157
--- /dev/null
+++ b/docs/replication.md
@@ -0,0 +1,37 @@
+# Replication Architecture
+
+## Motivation
+
+We'd like to be able to split some of the work that synapse does into
+multiple python processes. In theory multiple synapse processes could
+share a single postgresql database and we'd scale up by running more
+synapse processes. However much of synapse assumes that only one process
+is interacting with the database: for assigning unique identifiers when
+inserting into tables, for notifying components about new updates, and
+for invalidating its caches.
+
+So running multiple copies of the current code isn't an option. One way
+to run multiple processes would be to have a single writer process and
+multiple reader processes connected to the same database. In order to do
+this we'd need a way for the reader process to invalidate its in-memory
+caches when an update happens on the writer. One way to do this is for
+the writer to present an append-only log of updates which the readers
+can consume to invalidate their caches and to push updates to listening
+clients or pushers.
+
+Synapse already stores much of its data as an append-only log so that it
+can correctly respond to `/sync` requests so the amount of code changes
+needed to expose the append-only log to the readers should be fairly
+minimal.
+
+## Architecture
+
+### The Replication Protocol
+
+See [tcp_replication.md](tcp_replication.md)
+
+### The Slaved DataStore
+
+There are read-only versions of the synapse storage layer in
+`synapse/replication/slave/storage` that use the response of the
+replication API to invalidate their caches.
diff --git a/docs/replication.rst b/docs/replication.rst
deleted file mode 100644
index 310abb3488..0000000000
--- a/docs/replication.rst
+++ /dev/null
@@ -1,40 +0,0 @@
-Replication Architecture
-========================
-
-Motivation
-----------
-
-We'd like to be able to split some of the work that synapse does into multiple
-python processes. In theory multiple synapse processes could share a single
-postgresql database and we'd scale up by running more synapse processes.
-However much of synapse assumes that only one process is interacting with the
-database, both for assigning unique identifiers when inserting into tables,
-notifying components about new updates, and for invalidating its caches.
-
-So running multiple copies of the current code isn't an option. One way to
-run multiple processes would be to have a single writer process and multiple
-reader processes connected to the same database. In order to do this we'd need
-a way for the reader process to invalidate its in-memory caches when an update
-happens on the writer. One way to do this is for the writer to present an
-append-only log of updates which the readers can consume to invalidate their
-caches and to push updates to listening clients or pushers.
-
-Synapse already stores much of its data as an append-only log so that it can
-correctly respond to /sync requests so the amount of code changes needed to
-expose the append-only log to the readers should be fairly minimal.
-
-Architecture
-------------
-
-The Replication Protocol
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-See ``tcp_replication.rst``
-
-
-The Slaved DataStore
-~~~~~~~~~~~~~~~~~~~~
-
-There are read-only version of the synapse storage layer in
-``synapse/replication/slave/storage`` that use the response of the replication
-API to invalidate their caches.
diff --git a/docs/reverse_proxy.md b/docs/reverse_proxy.md
new file mode 100644
index 0000000000..dcfc5c64aa
--- /dev/null
+++ b/docs/reverse_proxy.md
@@ -0,0 +1,123 @@
+# Using a reverse proxy with Synapse
+
+It is recommended to put a reverse proxy such as
+[nginx](https://nginx.org/en/docs/http/ngx_http_proxy_module.html),
+[Apache](https://httpd.apache.org/docs/current/mod/mod_proxy_http.html),
+[Caddy](https://caddyserver.com/docs/proxy) or
+[HAProxy](https://www.haproxy.org/) in front of Synapse. One advantage
+of doing so is that it means that you can expose the default https port
+(443) to Matrix clients without needing to run Synapse with root
+privileges.
+
+> **NOTE**: Your reverse proxy must not `canonicalise` or `normalise`
+the requested URI in any way (for example, by decoding `%xx` escapes).
+Beware that Apache *will* canonicalise URIs unless you specify
+`nocanon`.
+
+When setting up a reverse proxy, remember that Matrix clients and other
+Matrix servers do not necessarily need to connect to your server via the
+same server name or port. Indeed, clients will use port 443 by default,
+whereas servers default to port 8448. Where these are different, we
+refer to the 'client port' and the 'federation port'. See [Setting
+up federation](federate.md) for more details of the algorithm used for
+federation connections.
+
+Let's assume that we expect clients to connect to our server at
+`https://matrix.example.com`, and other servers to connect at
+`https://example.com:8448`. The following sections detail the configuration of
+the reverse proxy and the homeserver.
+
+## Webserver configuration examples
+
+> **NOTE**: You only need one of these.
+
+### nginx
+
+ server {
+ listen 443 ssl;
+ listen [::]:443 ssl;
+ server_name matrix.example.com;
+
+ location /_matrix {
+ proxy_pass http://localhost:8008;
+ proxy_set_header X-Forwarded-For $remote_addr;
+ }
+ }
+
+ server {
+ listen 8448 ssl default_server;
+ listen [::]:8448 ssl default_server;
+ server_name example.com;
+
+ location / {
+ proxy_pass http://localhost:8008;
+ proxy_set_header X-Forwarded-For $remote_addr;
+ }
+ }
+
+> **NOTE**: Do not add a `/` after the port in `proxy_pass`, otherwise nginx will
+canonicalise/normalise the URI.
+
+### Caddy
+
+ matrix.example.com {
+ proxy /_matrix http://localhost:8008 {
+ transparent
+ }
+ }
+
+ example.com:8448 {
+ proxy / http://localhost:8008 {
+ transparent
+ }
+ }
+
+### Apache
+
+ <VirtualHost *:443>
+ SSLEngine on
+ ServerName matrix.example.com;
+
+ AllowEncodedSlashes NoDecode
+ ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
+ ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
+ </VirtualHost>
+
+ <VirtualHost *:8448>
+ SSLEngine on
+ ServerName example.com;
+
+ AllowEncodedSlashes NoDecode
+ ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
+ ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
+ </VirtualHost>
+
+> **NOTE**: ensure the `nocanon` options are included.
+
+### HAProxy
+
+ frontend https
+ bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
+
+ # Matrix client traffic
+ acl matrix-host hdr(host) -i matrix.example.com
+ acl matrix-path path_beg /_matrix
+
+ use_backend matrix if matrix-host matrix-path
+
+ frontend matrix-federation
+ bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
+ default_backend matrix
+
+ backend matrix
+ server matrix 127.0.0.1:8008
+
+## Homeserver Configuration
+
+You will also want to set `bind_addresses: ['127.0.0.1']` and
+`x_forwarded: true` for port 8008 in `homeserver.yaml` to ensure that
+client IP addresses are recorded correctly.
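+
+For example, the relevant listener section of `homeserver.yaml` might look
+something like the following sketch (the exact `resources` and other options
+will depend on your existing configuration):
+
+    listeners:
+      - port: 8008
+        tls: false
+        type: http
+        x_forwarded: true
+        bind_addresses: ['127.0.0.1']
+        resources:
+          - names: [client, federation]
+            compress: false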
+
+Having done so, you can then use `https://matrix.example.com` (instead
+of `https://matrix.example.com:8448`) as the "Custom server" when
+connecting to Synapse from a client.
diff --git a/docs/reverse_proxy.rst b/docs/reverse_proxy.rst
deleted file mode 100644
index 4b640ffc4f..0000000000
--- a/docs/reverse_proxy.rst
+++ /dev/null
@@ -1,112 +0,0 @@
-Using a reverse proxy with Synapse
-==================================
-
-It is recommended to put a reverse proxy such as
-`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
-`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_,
-`Caddy <https://caddyserver.com/docs/proxy>`_ or
-`HAProxy <https://www.haproxy.org/>`_ in front of Synapse. One advantage of
-doing so is that it means that you can expose the default https port (443) to
-Matrix clients without needing to run Synapse with root privileges.
-
-**NOTE**: Your reverse proxy must not 'canonicalise' or 'normalise' the
-requested URI in any way (for example, by decoding ``%xx`` escapes). Beware
-that Apache *will* canonicalise URIs unless you specifify ``nocanon``.
-
-When setting up a reverse proxy, remember that Matrix clients and other Matrix
-servers do not necessarily need to connect to your server via the same server
-name or port. Indeed, clients will use port 443 by default, whereas servers
-default to port 8448. Where these are different, we refer to the 'client port'
-and the 'federation port'. See `Setting up federation
-<federate.md>`_ for more details of the algorithm used for
-federation connections.
-
-Let's assume that we expect clients to connect to our server at
-``https://matrix.example.com``, and other servers to connect at
-``https://example.com:8448``. Here are some example configurations:
-
-* nginx::
-
- server {
- listen 443 ssl;
- listen [::]:443 ssl;
- server_name matrix.example.com;
-
- location /_matrix {
- proxy_pass http://localhost:8008;
- proxy_set_header X-Forwarded-For $remote_addr;
- }
- }
-
- server {
- listen 8448 ssl default_server;
- listen [::]:8448 ssl default_server;
- server_name example.com;
-
- location / {
- proxy_pass http://localhost:8008;
- proxy_set_header X-Forwarded-For $remote_addr;
- }
- }
-
- Do not add a `/` after the port in `proxy_pass`, otherwise nginx will canonicalise/normalise the URI.
-
-* Caddy::
-
- matrix.example.com {
- proxy /_matrix http://localhost:8008 {
- transparent
- }
- }
-
- example.com:8448 {
- proxy / http://localhost:8008 {
- transparent
- }
- }
-
-* Apache (note the ``nocanon`` options here!)::
-
- <VirtualHost *:443>
- SSLEngine on
- ServerName matrix.example.com;
-
- AllowEncodedSlashes NoDecode
- ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
- ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
- </VirtualHost>
-
- <VirtualHost *:8448>
- SSLEngine on
- ServerName example.com;
-
- AllowEncodedSlashes NoDecode
- ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
- ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
- </VirtualHost>
-
-* HAProxy::
-
- frontend https
- bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
-
- # Matrix client traffic
- acl matrix-host hdr(host) -i matrix.example.com
- acl matrix-path path_beg /_matrix
-
- use_backend matrix if matrix-host matrix-path
-
- frontend matrix-federation
- bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
- default_backend matrix
-
- backend matrix
- server matrix 127.0.0.1:8008
-
-You will also want to set ``bind_addresses: ['127.0.0.1']`` and ``x_forwarded: true``
-for port 8008 in ``homeserver.yaml`` to ensure that client IP addresses are
-recorded correctly.
-
-Having done so, you can then use ``https://matrix.example.com`` (instead of
-``https://matrix.example.com:8448``) as the "Custom server" when connecting to
-Synapse from a client.
diff --git a/docs/sample_config.yaml b/docs/sample_config.yaml
index 186cdbedd2..1ee0ba8c30 100644
--- a/docs/sample_config.yaml
+++ b/docs/sample_config.yaml
@@ -136,8 +136,8 @@ federation_ip_range_blacklist:
#
# type: the type of listener. Normally 'http', but other valid options are:
# 'manhole' (see docs/manhole.md),
-# 'metrics' (see docs/metrics-howto.rst),
-# 'replication' (see docs/workers.rst).
+# 'metrics' (see docs/metrics-howto.md),
+# 'replication' (see docs/workers.md).
#
# tls: set to true to enable TLS for this listener. Will use the TLS
# key/cert specified in tls_private_key_path / tls_certificate_path.
@@ -172,12 +172,12 @@ federation_ip_range_blacklist:
#
# media: the media API (/_matrix/media).
#
-# metrics: the metrics interface. See docs/metrics-howto.rst.
+# metrics: the metrics interface. See docs/metrics-howto.md.
#
# openid: OpenID authentication.
#
# replication: the HTTP replication API (/_synapse/replication). See
-# docs/workers.rst.
+# docs/workers.md.
#
# static: static resources under synapse/static (/_matrix/static). (Mostly
# useful for 'fallback authentication'.)
@@ -201,7 +201,7 @@ listeners:
# that unwraps TLS.
#
# If you plan to use a reverse proxy, please see
- # https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.rst.
+ # https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md.
#
- port: 8008
tls: false
@@ -306,6 +306,13 @@ listeners:
#
#allow_per_room_profiles: false
+# How long to keep redacted events in unredacted form in the database. After
+# this period redacted events get replaced with their redacted form in the DB.
+#
+# Defaults to `7d`. Set to `null` to disable.
+#
+redaction_retention_period: 7d
+
## TLS ##
@@ -511,6 +518,9 @@ log_config: "CONFDIR/SERVERNAME.log.config"
# - one for login that ratelimits login requests based on the account the
# client is attempting to log into, based on the amount of failed login
# attempts for this account.
+# - one for ratelimiting redactions by room admins. If this is not explicitly
+# set then it uses the same ratelimiting as per rc_message. This is useful
+# to allow room admins to deal with abuse quickly.
#
# The defaults are as shown below.
#
@@ -532,6 +542,10 @@ log_config: "CONFDIR/SERVERNAME.log.config"
# failed_attempts:
# per_second: 0.17
# burst_count: 3
+#
+#rc_admin_redaction:
+# per_second: 1
+# burst_count: 50
# Ratelimiting settings for incoming federation
@@ -958,9 +972,24 @@ account_threepid_delegates:
#sentry:
# dsn: "..."
+# Flags to enable Prometheus metrics which are not suitable to be
+# enabled by default, either for performance reasons or limited use.
+#
+metrics_flags:
+  # Publish synapse_federation_known_servers, a gauge of the number of
+ # servers this homeserver knows about, including itself. May cause
+ # performance problems on large homeservers.
+ #
+ #known_servers: true
+
# Whether or not to report anonymized homeserver usage statistics.
# report_stats: true|false
+# The endpoint to report the anonymized homeserver usage statistics to.
+# Defaults to https://matrix.org/report-usage-stats/push
+#
+#report_stats_endpoint: https://example.com/report-usage-stats/push
+
## API Configuration ##
diff --git a/docs/tcp_replication.md b/docs/tcp_replication.md
new file mode 100644
index 0000000000..e099d8a87b
--- /dev/null
+++ b/docs/tcp_replication.md
@@ -0,0 +1,249 @@
+# TCP Replication
+
+## Motivation
+
+Previously the workers used an HTTP long poll mechanism to get updates
+from the master, which had the problem of causing a lot of duplicate
+work on the server. This TCP protocol replaces those APIs with the aim
+of increased efficiency.
+
+## Overview
+
+The protocol is based on fire-and-forget, line-based commands. An
+example flow would be (where '>' indicates master to worker and
+'<' worker to master flows):
+
+ > SERVER example.com
+ < REPLICATE events 53
+ > RDATA events 54 ["$foo1:bar.com", ...]
+ > RDATA events 55 ["$foo4:bar.com", ...]
+
+The example shows the server accepting a new connection and sending its
+identity with the `SERVER` command, followed by the client asking to
+subscribe to the `events` stream from the token `53`. The server then
+periodically sends `RDATA` commands which have the format
+`RDATA <stream_name> <token> <row>`, where the format of `<row>` is
+defined by the individual streams.
+
+Error reporting happens by either the client or server sending an ERROR
+command, and usually the connection will be closed.
+
+Since the protocol is simple and line-based, it's possible to manually
+connect to the server using a tool like netcat. A few things should be
+noted when manually using the protocol:
+
+- When subscribing to a stream using `REPLICATE`, the special token
+ `NOW` can be used to get all future updates. The special stream name
+ `ALL` can be used with `NOW` to subscribe to all available streams.
+- The federation stream is only available if federation sending has
+ been disabled on the main process.
+- The server will only time connections out that have sent a `PING`
+ command. If a ping is sent then the connection will be closed if no
+  further commands are received within 15s. Both the client and
+ server protocol implementations will send an initial PING on
+ connection and ensure at least one command every 5s is sent (not
+ necessarily `PING`).
+- `RDATA` commands *usually* include a numeric token, however if the
+ stream has multiple rows to replicate per token the server will send
+ multiple `RDATA` commands, with all but the last having a token of
+ `batch`. See the documentation on `commands.RdataCommand` for
+ further details.
+
+## Architecture
+
+The basic structure of the protocol is line-based, where the initial
+word of each line specifies the command. The rest of the line is parsed
+based on the command. For example, the RDATA command is defined as:
+
+ RDATA <stream_name> <token> <row_json>
+
+(Note that `<row_json>` may contain spaces, but cannot contain
+newlines.)
+
+Blank lines are ignored.
+
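+As a rough sketch (not Synapse's actual implementation), parsing a received
+line might look like this:
+
+    import json
+
+    def parse_line(line):
+        # The first word is the command name; the rest of the line is
+        # command-specific and may itself contain spaces.
+        command, _, rest = line.partition(" ")
+        if command == "RDATA":
+            stream_name, token, row_json = rest.split(" ", 2)
+            # token is a stream position, or the literal string "batch".
+            return command, stream_name, token, json.loads(row_json)
+        return command, rest
+
+    parse_line('RDATA events 54 ["$foo1:bar.com", null]')
+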
+### Keep alives
+
+Both sides are expected to send at least one command every 5s or so, and
+should send a `PING` command if necessary. If either side does not receive
+a command within e.g. 15s then the connection should be closed.
+
+Because the server may be connected to manually using e.g. netcat, the
+timeouts aren't enabled until an initial `PING` command is seen. Both
+the client and server implementations below send a `PING` command
+immediately on connection to ensure the timeouts are enabled.
+
+This ensures that both sides can quickly realize if the TCP connection
+has gone and handle the situation appropriately.
+
+### Start up
+
+When a new connection is made, the server:
+
+- Sends a `SERVER` command, which includes the identity of the server,
+  allowing the client to detect if it is connected to the expected
+  server.
+- Sends a `PING` command as above, to enable the client to time out
+ connections promptly.
+
+The client:
+
+- Sends a `NAME` command, allowing the server to associate a human
+ friendly name with the connection. This is optional.
+- Sends a `PING` as above
+- For each stream the client wishes to subscribe to, it sends a
+ `REPLICATE` with the `stream_name` and token it wants to subscribe
+ from.
+- On receipt of a `SERVER` command, checks that the server name
+ matches the expected server name.
+
+### Error handling
+
+If either side detects an error it can send an `ERROR` command and close
+the connection.
+
+If the client side loses the connection to the server it should
+reconnect, following the steps above.
+
+### Congestion
+
+If the server sends messages faster than the client can consume them the
+server will first buffer a (fairly large) number of commands and then
+disconnect the client. This ensures that we don't queue up an unbounded
+number of commands in memory and gives us a potential opportunity to
+squawk loudly. When/if the client recovers it can reconnect to the
+server and ask for missed messages.
+
+### Reliability
+
+In general the replication stream should be considered an unreliable
+transport since e.g. commands are not resent if the connection
+disappears.
+
+The exception to that are the replication streams, i.e. RDATA commands,
+since these include tokens which can be used to restart the stream on
+connection errors.
+
+The client should keep track of the token in the last RDATA command
+received for each stream so that on reconnection it can start streaming
+from the correct place. Note: not all RDATA have valid tokens due to
+batching. See `RdataCommand` for more details.
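+
+A sketch of that bookkeeping (illustrative only, not Synapse's actual
+implementation) might be:
+
+    # Remember the last token seen for each stream so that, after a
+    # reconnect, REPLICATE can resume from the right position.
+    last_tokens = {}
+
+    def on_rdata(stream_name, token, row):
+        if token != "batch":
+            last_tokens[stream_name] = int(token)
+
+    def resubscribe(send_command):
+        for stream_name, token in last_tokens.items():
+            send_command("REPLICATE %s %s" % (stream_name, token))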
+
+### Example
+
+An example interaction is shown below. Each line is prefixed with '>'
+or '<' to indicate which side is sending, these are *not* included on
+the wire:
+
+ * connection established *
+ > SERVER localhost:8823
+ > PING 1490197665618
+ < NAME synapse.app.appservice
+ < PING 1490197665618
+ < REPLICATE events 1
+ < REPLICATE backfill 1
+ < REPLICATE caches 1
+ > POSITION events 1
+ > POSITION backfill 1
+ > POSITION caches 1
+ > RDATA caches 2 ["get_user_by_id",["@01register-user:localhost:8823"],1490197670513]
+ > RDATA events 14 ["$149019767112vOHxz:localhost:8823",
+ "!AFDCvgApUmpdfVjIXm:localhost:8823","m.room.guest_access","",null]
+ < PING 1490197675618
+ > ERROR server stopping
+ * connection closed by server *
+
+The `POSITION` command sent by the server is used to set the client's
+position without needing to send data with the `RDATA` command.
+
+An example of a batched set of `RDATA` is:
+
+ > RDATA caches batch ["get_user_by_id",["@test:localhost:8823"],1490197670513]
+ > RDATA caches batch ["get_user_by_id",["@test2:localhost:8823"],1490197670513]
+ > RDATA caches batch ["get_user_by_id",["@test3:localhost:8823"],1490197670513]
+ > RDATA caches 54 ["get_user_by_id",["@test4:localhost:8823"],1490197670513]
+
+In this case the client shouldn't advance its caches token until it
+sees the last `RDATA`.
+
+### List of commands
+
+The list of valid commands, with which side can send it: server (S) or
+client (C):
+
+#### SERVER (S)
+
+ Sent at the start to identify which server the client is talking to
+
+#### RDATA (S)
+
+ A single update in a stream
+
+#### POSITION (S)
+
+ The position of the stream has been updated. Sent to the client
+ after all missing updates for a stream have been sent to the client
+ and they're now up to date.
+
+#### ERROR (S, C)
+
+ There was an error
+
+#### PING (S, C)
+
+ Sent periodically to ensure the connection is still alive
+
+#### NAME (C)
+
+ Sent at the start by client to inform the server who they are
+
+#### REPLICATE (C)
+
+ Asks the server to replicate a given stream
+
+#### USER_SYNC (C)
+
+ A user has started or stopped syncing
+
+#### FEDERATION_ACK (C)
+
+ Acknowledge receipt of some federation data
+
+#### REMOVE_PUSHER (C)
+
+ Inform the server a pusher should be removed
+
+#### INVALIDATE_CACHE (C)
+
+ Inform the server a cache should be invalidated
+
+#### SYNC (S, C)
+
+ Used exclusively in tests
+
+See `synapse/replication/tcp/commands.py` for a detailed description and
+the format of each command.
+
+### Cache Invalidation Stream
+
+The cache invalidation stream is used to inform workers when they need
+to invalidate any of their caches in the data store. This is done by
+streaming all cache invalidations done on master down to the workers,
+assuming that any caches on the workers also exist on the master.
+
+Each individual cache invalidation results in a row being sent down
+replication, which includes the cache name (the name of the function)
+and the key to invalidate. For example:
+
+ > RDATA caches 550953771 ["get_user_by_id", ["@bob:example.com"], 1550574873251]
+
+However, there are times when a number of caches need to be invalidated
+at the same time with the same key. To reduce traffic we batch those
+invalidations into a single poke by defining a special cache name that
+workers understand to mean to expand to invalidate the correct caches.
+
+Currently the special cache names are declared in
+`synapse/storage/_base.py` and are:
+
+1. `cs_cache_fake` ─ invalidates caches that depend on the current
+ state
diff --git a/docs/tcp_replication.rst b/docs/tcp_replication.rst
deleted file mode 100644
index 75e723484c..0000000000
--- a/docs/tcp_replication.rst
+++ /dev/null
@@ -1,249 +0,0 @@
-TCP Replication
-===============
-
-Motivation
-----------
-
-Previously the workers used an HTTP long poll mechanism to get updates from the
-master, which had the problem of causing a lot of duplicate work on the server.
-This TCP protocol replaces those APIs with the aim of increased efficiency.
-
-
-
-Overview
---------
-
-The protocol is based on fire and forget, line based commands. An example flow
-would be (where '>' indicates master to worker and '<' worker to master flows)::
-
- > SERVER example.com
- < REPLICATE events 53
- > RDATA events 54 ["$foo1:bar.com", ...]
- > RDATA events 55 ["$foo4:bar.com", ...]
-
-The example shows the server accepting a new connection and sending its identity
-with the ``SERVER`` command, followed by the client asking to subscribe to the
-``events`` stream from the token ``53``. The server then periodically sends ``RDATA``
-commands which have the format ``RDATA <stream_name> <token> <row>``, where the
-format of ``<row>`` is defined by the individual streams.
-
-Error reporting happens by either the client or server sending an `ERROR`
-command, and usually the connection will be closed.
-
-
-Since the protocol is a simple line based, its possible to manually connect to
-the server using a tool like netcat. A few things should be noted when manually
-using the protocol:
-
-* When subscribing to a stream using ``REPLICATE``, the special token ``NOW`` can
- be used to get all future updates. The special stream name ``ALL`` can be used
- with ``NOW`` to subscribe to all available streams.
-* The federation stream is only available if federation sending has been
- disabled on the main process.
-* The server will only time connections out that have sent a ``PING`` command.
- If a ping is sent then the connection will be closed if no further commands
- are receieved within 15s. Both the client and server protocol implementations
- will send an initial PING on connection and ensure at least one command every
- 5s is sent (not necessarily ``PING``).
-* ``RDATA`` commands *usually* include a numeric token, however if the stream
- has multiple rows to replicate per token the server will send multiple
- ``RDATA`` commands, with all but the last having a token of ``batch``. See
- the documentation on ``commands.RdataCommand`` for further details.
-
-
-Architecture
-------------
-
-The basic structure of the protocol is line based, where the initial word of
-each line specifies the command. The rest of the line is parsed based on the
-command. For example, the `RDATA` command is defined as::
-
- RDATA <stream_name> <token> <row_json>
-
-(Note that `<row_json>` may contains spaces, but cannot contain newlines.)
-
-Blank lines are ignored.
-
-
-Keep alives
-~~~~~~~~~~~
-
-Both sides are expected to send at least one command every 5s or so, and
-should send a ``PING`` command if necessary. If either side do not receive a
-command within e.g. 15s then the connection should be closed.
-
-Because the server may be connected to manually using e.g. netcat, the timeouts
-aren't enabled until an initial ``PING`` command is seen. Both the client and
-server implementations below send a ``PING`` command immediately on connection to
-ensure the timeouts are enabled.
-
-This ensures that both sides can quickly realize if the tcp connection has gone
-and handle the situation appropriately.
-
-
-Start up
-~~~~~~~~
-
-When a new connection is made, the server:
-
-* Sends a ``SERVER`` command, which includes the identity of the server, allowing
- the client to detect if its connected to the expected server
-* Sends a ``PING`` command as above, to enable the client to time out connections
- promptly.
-
-The client:
-
-* Sends a ``NAME`` command, allowing the server to associate a human friendly
- name with the connection. This is optional.
-* Sends a ``PING`` as above
-* For each stream the client wishes to subscribe to it sends a ``REPLICATE``
- with the stream_name and token it wants to subscribe from.
-* On receipt of a ``SERVER`` command, checks that the server name matches the
- expected server name.
-
-
-Error handling
-~~~~~~~~~~~~~~
-
-If either side detects an error it can send an ``ERROR`` command and close the
-connection.
-
-If the client side loses the connection to the server it should reconnect,
-following the steps above.
-
-
-Congestion
-~~~~~~~~~~
-
-If the server sends messages faster than the client can consume them the server
-will first buffer a (fairly large) number of commands and then disconnect the
-client. This ensures that we don't queue up an unbounded number of commands in
-memory and gives us a potential oppurtunity to squawk loudly. When/if the client
-recovers it can reconnect to the server and ask for missed messages.
-
-
-Reliability
-~~~~~~~~~~~
-
-In general the replication stream should be considered an unreliable transport
-since e.g. commands are not resent if the connection disappears.
-
-The exception to that are the replication streams, i.e. RDATA commands, since
-these include tokens which can be used to restart the stream on connection
-errors.
-
-The client should keep track of the token in the last RDATA command received
-for each stream so that on reconneciton it can start streaming from the correct
-place. Note: not all RDATA have valid tokens due to batching. See
-``RdataCommand`` for more details.
-
-Example
-~~~~~~~
-
-An example iteraction is shown below. Each line is prefixed with '>' or '<' to
-indicate which side is sending, these are *not* included on the wire::
-
- * connection established *
- > SERVER localhost:8823
- > PING 1490197665618
- < NAME synapse.app.appservice
- < PING 1490197665618
- < REPLICATE events 1
- < REPLICATE backfill 1
- < REPLICATE caches 1
- > POSITION events 1
- > POSITION backfill 1
- > POSITION caches 1
- > RDATA caches 2 ["get_user_by_id",["@01register-user:localhost:8823"],1490197670513]
- > RDATA events 14 ["$149019767112vOHxz:localhost:8823",
- "!AFDCvgApUmpdfVjIXm:localhost:8823","m.room.guest_access","",null]
- < PING 1490197675618
- > ERROR server stopping
- * connection closed by server *
-
-The ``POSITION`` command sent by the server is used to set the clients position
-without needing to send data with the ``RDATA`` command.
-
-
-An example of a batched set of ``RDATA`` is::
-
- > RDATA caches batch ["get_user_by_id",["@test:localhost:8823"],1490197670513]
- > RDATA caches batch ["get_user_by_id",["@test2:localhost:8823"],1490197670513]
- > RDATA caches batch ["get_user_by_id",["@test3:localhost:8823"],1490197670513]
- > RDATA caches 54 ["get_user_by_id",["@test4:localhost:8823"],1490197670513]
-
-In this case the client shouldn't advance their caches token until it sees the
-the last ``RDATA``.
-
-
-List of commands
-~~~~~~~~~~~~~~~~
-
-The list of valid commands, with which side can send it: server (S) or client (C):
-
-SERVER (S)
- Sent at the start to identify which server the client is talking to
-
-RDATA (S)
- A single update in a stream
-
-POSITION (S)
- The position of the stream has been updated. Sent to the client after all
- missing updates for a stream have been sent to the client and they're now
- up to date.
-
-ERROR (S, C)
- There was an error
-
-PING (S, C)
- Sent periodically to ensure the connection is still alive
-
-NAME (C)
- Sent at the start by client to inform the server who they are
-
-REPLICATE (C)
- Asks the server to replicate a given stream
-
-USER_SYNC (C)
- A user has started or stopped syncing
-
-FEDERATION_ACK (C)
- Acknowledge receipt of some federation data
-
-REMOVE_PUSHER (C)
- Inform the server a pusher should be removed
-
-INVALIDATE_CACHE (C)
- Inform the server a cache should be invalidated
-
-SYNC (S, C)
- Used exclusively in tests
-
-
-See ``synapse/replication/tcp/commands.py`` for a detailed description and the
-format of each command.
-
-
-Cache Invalidation Stream
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The cache invalidation stream is used to inform workers when they need to
-invalidate any of their caches in the data store. This is done by streaming all
-cache invalidations done on master down to the workers, assuming that any caches
-on the workers also exist on the master.
-
-Each individual cache invalidation results in a row being sent down replication,
-which includes the cache name (the name of the function) and they key to
-invalidate. For example::
-
- > RDATA caches 550953771 ["get_user_by_id", ["@bob:example.com"], 1550574873251]
-
-However, there are times when a number of caches need to be invalidated at the
-same time with the same key. To reduce traffic we batch those invalidations into
-a single poke by defining a special cache name that workers understand to mean
-to expand to invalidate the correct caches.
-
-Currently the special cache names are declared in ``synapse/storage/_base.py``
-and are:
-
-1. ``cs_cache_fake`` ─ invalidates caches that depend on the current state
diff --git a/docs/turn-howto.md b/docs/turn-howto.md
new file mode 100644
index 0000000000..4a983621e5
--- /dev/null
+++ b/docs/turn-howto.md
@@ -0,0 +1,123 @@
+# Overview
+
+This document explains how to enable VoIP relaying on your Home Server with
+TURN.
+
+The synapse Matrix Home Server supports integration with a TURN server via the
+[TURN server REST API](<http://tools.ietf.org/html/draft-uberti-behave-turn-rest-00>). This
+allows the Home Server to generate credentials that are valid for use on the
+TURN server through the use of a secret shared between the Home Server and the
+TURN server.
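+
+The scheme defined by that draft is, roughly: the username encodes an expiry
+timestamp, and the password is an HMAC of that username keyed with the shared
+secret. As a minimal sketch (illustrative only, not synapse's actual
+implementation; the function and parameter names are invented):
+
+    import base64
+    import hashlib
+    import hmac
+    import time
+
+    def make_turn_credentials(shared_secret, user_id, lifetime_s=86400):
+        # username is "<expiry unix timestamp>:<matrix user id>"
+        username = "%d:%s" % (int(time.time()) + lifetime_s, user_id)
+        # password is base64(HMAC-SHA1(shared_secret, username))
+        mac = hmac.new(
+            shared_secret.encode("ascii"), username.encode("ascii"), hashlib.sha1
+        )
+        return username, base64.b64encode(mac.digest()).decode("ascii")
+
+Because the TURN server can recompute the same HMAC from its own copy of the
+secret, no credential database has to be shared between the two servers.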
+
+The following sections describe how to install [coturn](<https://github.com/coturn/coturn>) (which implements the TURN REST API) and integrate it with synapse.
+
+## `coturn` Setup
+
+### Initial installation
+
+The TURN daemon `coturn` is available from a variety of sources, such as native package managers, or it can be installed from source.
+
+#### Debian installation
+
+ # apt install coturn
+
+#### Source installation
+
+1. Download the [latest release](https://github.com/coturn/coturn/releases/latest) from github. Unpack it and `cd` into the directory.
+
+1. Configure it:
+
+ ./configure
+
+ > You may need to install `libevent2`: if so, you should do so in
+ > the way recommended by your operating system. You can ignore
+ > warnings about lack of database support: a database is unnecessary
+ > for this purpose.
+
+1. Build and install it:
+
+ make
+ make install
+
+1. Create or edit the config file in `/etc/turnserver.conf`. The relevant
+ lines, with example values, are:
+
+ use-auth-secret
+ static-auth-secret=[your secret key here]
+ realm=turn.myserver.org
+
+ See `turnserver.conf` for explanations of the options. One way to generate
+ the `static-auth-secret` is with `pwgen`:
+
+ pwgen -s 64 1
+
+1. Consider your security settings. TURN lets users request a relay which will
+ connect to arbitrary IP addresses and ports. The following configuration is
+ suggested as a minimum starting point:
+
+ # VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
+ no-tcp-relay
+
+ # don't let the relay ever try to connect to private IP address ranges within your network (if any)
+ # given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
+ denied-peer-ip=10.0.0.0-10.255.255.255
+ denied-peer-ip=192.168.0.0-192.168.255.255
+ denied-peer-ip=172.16.0.0-172.31.255.255
+
+ # special case the turn server itself so that client->TURN->TURN->client flows work
+ allowed-peer-ip=10.0.0.1
+
+ # consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
+ user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
+ total-quota=1200
+
+ Ideally coturn should refuse to relay traffic which isn't SRTP; see
+ <https://github.com/matrix-org/synapse/issues/2009>
+
+1. Ensure your firewall allows traffic into the TURN server on the ports
+ you've configured it to listen on (remember to allow both TCP and UDP TURN
+   traffic).
+
+1. If you've configured coturn to support TLS/DTLS, generate or import your
+ private key and certificate.
+
+1. Start the turn server:
+
+ bin/turnserver -o
+
+## synapse Setup
+
+Your home server configuration file needs the following extra keys:
+
+1. "`turn_uris`": This needs to be a yaml list of public-facing URIs
+ for your TURN server to be given out to your clients. Add separate
+ entries for each transport your TURN server supports.
+2. "`turn_shared_secret`": This is the secret shared between your
+ Home server and your TURN server, so you should set it to the same
+   string you used in `turnserver.conf`.
+3. "`turn_user_lifetime`": This is the amount of time credentials
+ generated by your Home Server are valid for (in milliseconds).
+ Shorter times offer less potential for abuse at the expense of
+ increased traffic between web clients and your home server to
+ refresh credentials. The TURN REST API specification recommends
+ one day (86400000).
+4. "`turn_allow_guests`": Whether to allow guest users to use the
+ TURN server. This is enabled by default, as otherwise VoIP will
+ not work reliably for guests. However, it does introduce a
+ security risk as it lets guests connect to arbitrary endpoints
+ without having gone through a CAPTCHA or similar to register a
+ real account.
+
+As an example, here is the relevant section of the config file for matrix.org:
+
+ turn_uris: [ "turn:turn.matrix.org:3478?transport=udp", "turn:turn.matrix.org:3478?transport=tcp" ]
+ turn_shared_secret: n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons
+ turn_user_lifetime: 86400000
+ turn_allow_guests: True
+
+After updating the homeserver configuration, you must restart synapse:
+
+ cd /where/you/run/synapse
+ ./synctl restart
+
+...and your Home Server now supports VoIP relaying!
diff --git a/docs/turn-howto.rst b/docs/turn-howto.rst
deleted file mode 100644
index a2fc5c8820..0000000000
--- a/docs/turn-howto.rst
+++ /dev/null
@@ -1,127 +0,0 @@
-How to enable VoIP relaying on your Home Server with TURN
-
-Overview
---------
-The synapse Matrix Home Server supports integration with TURN server via the
-TURN server REST API
-(http://tools.ietf.org/html/draft-uberti-behave-turn-rest-00). This allows
-the Home Server to generate credentials that are valid for use on the TURN
-server through the use of a secret shared between the Home Server and the
-TURN server.
-
-This document describes how to install coturn
-(https://github.com/coturn/coturn) which also supports the TURN REST API,
-and integrate it with synapse.
-
-coturn Setup
-============
-
-You may be able to setup coturn via your package manager, or set it up manually using the usual ``configure, make, make install`` process.
-
- 1. Check out coturn::
-
- git clone https://github.com/coturn/coturn.git coturn
- cd coturn
-
- 2. Configure it::
-
- ./configure
-
- You may need to install ``libevent2``: if so, you should do so
- in the way recommended by your operating system.
- You can ignore warnings about lack of database support: a
- database is unnecessary for this purpose.
-
- 3. Build and install it::
-
- make
- make install
-
- 4. Create or edit the config file in ``/etc/turnserver.conf``. The relevant
- lines, with example values, are::
-
- use-auth-secret
- static-auth-secret=[your secret key here]
- realm=turn.myserver.org
-
- See turnserver.conf for explanations of the options.
- One way to generate the static-auth-secret is with pwgen::
-
- pwgen -s 64 1
-
- 5. Consider your security settings. TURN lets users request a relay
- which will connect to arbitrary IP addresses and ports. At the least
- we recommend::
-
- # VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
- no-tcp-relay
-
- # don't let the relay ever try to connect to private IP address ranges within your network (if any)
- # given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
- denied-peer-ip=10.0.0.0-10.255.255.255
- denied-peer-ip=192.168.0.0-192.168.255.255
- denied-peer-ip=172.16.0.0-172.31.255.255
-
- # special case the turn server itself so that client->TURN->TURN->client flows work
- allowed-peer-ip=10.0.0.1
-
- # consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
- user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
- total-quota=1200
-
- Ideally coturn should refuse to relay traffic which isn't SRTP;
- see https://github.com/matrix-org/synapse/issues/2009
-
- 6. Ensure your firewall allows traffic into the TURN server on
- the ports you've configured it to listen on (remember to allow
- both TCP and UDP TURN traffic)
-
- 7. If you've configured coturn to support TLS/DTLS, generate or
- import your private key and certificate.
-
- 8. Start the turn server::
-
- bin/turnserver -o
-
-
-synapse Setup
-=============
-
-Your home server configuration file needs the following extra keys:
-
- 1. "turn_uris": This needs to be a yaml list
- of public-facing URIs for your TURN server to be given out
- to your clients. Add separate entries for each transport your
- TURN server supports.
-
- 2. "turn_shared_secret": This is the secret shared between your Home
- server and your TURN server, so you should set it to the same
- string you used in turnserver.conf.
-
- 3. "turn_user_lifetime": This is the amount of time credentials
- generated by your Home Server are valid for (in milliseconds).
- Shorter times offer less potential for abuse at the expense
- of increased traffic between web clients and your home server
- to refresh credentials. The TURN REST API specification recommends
- one day (86400000).
-
- 4. "turn_allow_guests": Whether to allow guest users to use the TURN
- server. This is enabled by default, as otherwise VoIP will not
- work reliably for guests. However, it does introduce a security risk
- as it lets guests connect to arbitrary endpoints without having gone
- through a CAPTCHA or similar to register a real account.
-
-As an example, here is the relevant section of the config file for
-matrix.org::
-
- turn_uris: [ "turn:turn.matrix.org:3478?transport=udp", "turn:turn.matrix.org:3478?transport=tcp" ]
- turn_shared_secret: n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons
- turn_user_lifetime: 86400000
- turn_allow_guests: True
-
-Now, restart synapse::
-
- cd /where/you/run/synapse
- ./synctl restart
-
-...and your Home Server now supports VoIP relaying!
diff --git a/docs/workers.rst b/docs/workers.md
index e11e117418..4bd60ba0a0 100644
--- a/docs/workers.rst
+++ b/docs/workers.md
@@ -1,5 +1,4 @@
-Scaling synapse via workers
-===========================
+# Scaling synapse via workers
Synapse has experimental support for splitting out functionality into
multiple separate python processes, helping greatly with scalability. These
@@ -20,17 +19,16 @@ TCP protocol called 'replication' - analogous to MySQL or Postgres style
database replication; feeding a stream of relevant data to the workers so they
can be kept in sync with the main synapse process and database state.
-Configuration
--------------
+## Configuration
To make effective use of the workers, you will need to configure an HTTP
reverse-proxy such as nginx or haproxy, which will direct incoming requests to
the correct worker, or to the main synapse instance. Note that this includes
-requests made to the federation port. See `<reverse_proxy.rst>`_ for
-information on setting up a reverse proxy.
+requests made to the federation port. See [reverse_proxy.md](reverse_proxy.md)
+for information on setting up a reverse proxy.
To enable workers, you need to add two replication listeners to the master
-synapse, e.g.::
+synapse, e.g.:
listeners:
# The TCP replication port
@@ -56,7 +54,7 @@ You then create a set of configs for the various worker processes. These
should be worker configuration files, and should be stored in a dedicated
subdirectory, to allow synctl to manipulate them. An additional configuration
for the master synapse process will need to be created because the process will
-not be started automatically. That configuration should look like this::
+not be started automatically. That configuration should look like this:
worker_app: synapse.app.homeserver
daemonize: true
@@ -66,17 +64,17 @@ configuration file. You can then override configuration specific to that worker
e.g. the HTTP listener that it provides (if any); logging configuration; etc.
You should minimise the number of overrides though to maintain a usable config.
-You must specify the type of worker application (``worker_app``). The currently
+You must specify the type of worker application (`worker_app`). The currently
available worker applications are listed below. You must also specify the
replication endpoints that it's talking to on the main synapse process.
-``worker_replication_host`` should specify the host of the main synapse,
-``worker_replication_port`` should point to the TCP replication listener port and
-``worker_replication_http_port`` should point to the HTTP replication port.
+`worker_replication_host` should specify the host of the main synapse,
+`worker_replication_port` should point to the TCP replication listener port and
+`worker_replication_http_port` should point to the HTTP replication port.
-Currently, the ``event_creator`` and ``federation_reader`` workers require specifying
-``worker_replication_http_port``.
+Currently, the `event_creator` and `federation_reader` workers require specifying
+`worker_replication_http_port`.
-For instance::
+For instance:
worker_app: synapse.app.synchrotron
@@ -97,15 +95,15 @@ For instance::
worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
...is a full configuration for a synchrotron worker instance, which will expose a
-plain HTTP ``/sync`` endpoint on port 8083 separately from the ``/sync`` endpoint provided
+plain HTTP `/sync` endpoint on port 8083 separately from the `/sync` endpoint provided
by the main synapse.
Obviously you should configure your reverse-proxy to route the relevant
-endpoints to the worker (``localhost:8083`` in the above example).
+endpoints to the worker (`localhost:8083` in the above example).
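+
+As a rough illustration (this is not from the synapse documentation; adapt the
+hostname, port and regex to your own deployment), an nginx `location` block
+routing `/sync` to the synchrotron above might look like:
+
+    location ~ ^/_matrix/client/(v2_alpha|r0)/sync$ {
+        proxy_pass http://localhost:8083;
+        proxy_set_header X-Forwarded-For $remote_addr;
+    }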
Finally, to actually run your worker-based synapse, you must pass synctl the -a
commandline option to tell it to operate on all the worker configurations found
-in the given directory, e.g.::
+in the given directory, e.g.:
synctl -a $CONFIG/workers start
@@ -114,28 +112,24 @@ synapse, unless you explicitly know it's safe not to. For instance, restarting
synapse without restarting all the synchrotrons may result in broken typing
notifications.
-To manipulate a specific worker, you pass the -w option to synctl::
+To manipulate a specific worker, you pass the -w option to synctl:
synctl -w $CONFIG/workers/synchrotron.yaml restart
+## Available worker applications
-Available worker applications
------------------------------
-
-``synapse.app.pusher``
-~~~~~~~~~~~~~~~~~~~~~~
+### `synapse.app.pusher`
Handles sending push notifications to sygnal and email. Doesn't handle any
-REST endpoints itself, but you should set ``start_pushers: False`` in the
+REST endpoints itself, but you should set `start_pushers: False` in the
shared configuration file to stop the main synapse sending these notifications.
Note this worker cannot be load-balanced: only one instance should be active.
-``synapse.app.synchrotron``
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
+### `synapse.app.synchrotron`
-The synchrotron handles ``sync`` requests from clients. In particular, it can
-handle REST endpoints matching the following regular expressions::
+The synchrotron handles `sync` requests from clients. In particular, it can
+handle REST endpoints matching the following regular expressions:
^/_matrix/client/(v2_alpha|r0)/sync$
^/_matrix/client/(api/v1|v2_alpha|r0)/events$
@@ -151,20 +145,18 @@ load-balance across the instances, though it will be more efficient if all
requests from a particular user are routed to a single instance. Extracting
a userid from the access token is currently left as an exercise for the reader.
-``synapse.app.appservice``
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+### `synapse.app.appservice`
Handles sending output traffic to Application Services. Doesn't handle any
-REST endpoints itself, but you should set ``notify_appservices: False`` in the
+REST endpoints itself, but you should set `notify_appservices: False` in the
shared configuration file to stop the main synapse sending these notifications.
Note this worker cannot be load-balanced: only one instance should be active.
-``synapse.app.federation_reader``
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+### `synapse.app.federation_reader`
Handles a subset of federation endpoints. In particular, it can handle REST
-endpoints matching the following regular expressions::
+endpoints matching the following regular expressions:
^/_matrix/federation/v1/event/
^/_matrix/federation/v1/state/
@@ -190,40 +182,36 @@ reverse-proxy configuration.
The `^/_matrix/federation/v1/send/` endpoint must only be handled by a single
instance.
-``synapse.app.federation_sender``
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+### `synapse.app.federation_sender`
Handles sending federation traffic to other servers. Doesn't handle any
-REST endpoints itself, but you should set ``send_federation: False`` in the
+REST endpoints itself, but you should set `send_federation: False` in the
shared configuration file to stop the main synapse sending this traffic.
Note this worker cannot be load-balanced: only one instance should be active.
-``synapse.app.media_repository``
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+### `synapse.app.media_repository`
-Handles the media repository. It can handle all endpoints starting with::
+Handles the media repository. It can handle all endpoints starting with:
/_matrix/media/
-And the following regular expressions matching media-specific administration
-APIs::
+And the following regular expressions matching media-specific administration APIs:
^/_synapse/admin/v1/purge_media_cache$
^/_synapse/admin/v1/room/.*/media$
^/_synapse/admin/v1/quarantine_media/.*$
-You should also set ``enable_media_repo: False`` in the shared configuration
+You should also set `enable_media_repo: False` in the shared configuration
file to stop the main synapse running background jobs related to managing the
media repository.
Note this worker cannot be load-balanced: only one instance should be active.
-``synapse.app.client_reader``
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+### `synapse.app.client_reader`
Handles client API endpoints. It can handle REST endpoints matching the
-following regular expressions::
+following regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
@@ -237,60 +225,55 @@ following regular expressions::
^/_matrix/client/versions$
^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
-Additionally, the following REST endpoints can be handled for GET requests::
+Additionally, the following REST endpoints can be handled for GET requests:
^/_matrix/client/(api/v1|r0|unstable)/pushrules/.*$
Additionally, the following REST endpoints can be handled, but all requests must
-be routed to the same instance::
+be routed to the same instance:
^/_matrix/client/(r0|unstable)/register$
Pagination requests can also be handled, but all requests with the same path
room must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests
-for the room are in flight::
+for the room are in flight:
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
-
-``synapse.app.user_dir``
-~~~~~~~~~~~~~~~~~~~~~~~~
+### `synapse.app.user_dir`
Handles searches in the user directory. It can handle REST endpoints matching
-the following regular expressions::
+the following regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
-``synapse.app.frontend_proxy``
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+### `synapse.app.frontend_proxy`
Proxies some frequently-requested client endpoints to add caching and remove
load from the main synapse. It can handle REST endpoints matching the following
-regular expressions::
+regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/keys/upload
-If ``use_presence`` is False in the homeserver config, it can also handle REST
-endpoints matching the following regular expressions::
+If `use_presence` is False in the homeserver config, it can also handle REST
+endpoints matching the following regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status
-This "stub" presence handler will pass through ``GET`` request but make the
-``PUT`` effectively a no-op.
+This "stub" presence handler will pass through `GET` requests but make the
+`PUT` effectively a no-op.
It will proxy any requests it cannot handle to the main synapse instance. It
must therefore be configured with the location of the main instance, via
-the ``worker_main_http_uri`` setting in the frontend_proxy worker configuration
-file. For example::
+the `worker_main_http_uri` setting in the `frontend_proxy` worker configuration
+file. For example:
worker_main_http_uri: http://127.0.0.1:8008
+### `synapse.app.event_creator`
-``synapse.app.event_creator``
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Handles some event creation. It can handle REST endpoints matching::
+Handles some event creation. It can handle REST endpoints matching:
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
diff --git a/mypy.ini b/mypy.ini
new file mode 100644
index 0000000000..8788574ee3
--- /dev/null
+++ b/mypy.ini
@@ -0,0 +1,54 @@
+[mypy]
+namespace_packages=True
+plugins=mypy_zope:plugin
+follow_imports=skip
+mypy_path=stubs
+
+[mypy-synapse.config.homeserver]
+# this is a mess because of the metaclass shenanigans
+ignore_errors = True
+
+[mypy-zope]
+ignore_missing_imports = True
+
+[mypy-constantly]
+ignore_missing_imports = True
+
+[mypy-twisted.*]
+ignore_missing_imports = True
+
+[mypy-treq.*]
+ignore_missing_imports = True
+
+[mypy-hyperlink]
+ignore_missing_imports = True
+
+[mypy-h11]
+ignore_missing_imports = True
+
+[mypy-opentracing]
+ignore_missing_imports = True
+
+[mypy-OpenSSL]
+ignore_missing_imports = True
+
+[mypy-netaddr]
+ignore_missing_imports = True
+
+[mypy-saml2.*]
+ignore_missing_imports = True
+
+[mypy-unpaddedbase64]
+ignore_missing_imports = True
+
+[mypy-canonicaljson]
+ignore_missing_imports = True
+
+[mypy-jaeger_client]
+ignore_missing_imports = True
+
+[mypy-jsonschema]
+ignore_missing_imports = True
+
+[mypy-signedjson.*]
+ignore_missing_imports = True
diff --git a/synapse/api/auth.py b/synapse/api/auth.py
index ddc195bc32..9e445cd808 100644
--- a/synapse/api/auth.py
+++ b/synapse/api/auth.py
@@ -25,7 +25,7 @@ from twisted.internet import defer
import synapse.logging.opentracing as opentracing
import synapse.types
from synapse import event_auth
-from synapse.api.constants import EventTypes, JoinRules, Membership
+from synapse.api.constants import EventTypes, JoinRules, Membership, UserTypes
from synapse.api.errors import (
AuthError,
Codes,
@@ -709,7 +709,7 @@ class Auth(object):
)
@defer.inlineCallbacks
- def check_auth_blocking(self, user_id=None, threepid=None):
+ def check_auth_blocking(self, user_id=None, threepid=None, user_type=None):
"""Checks if the user should be rejected for some external reason,
such as monthly active user limiting or global disable flag
@@ -722,6 +722,9 @@ class Auth(object):
with a MAU blocked server, normally they would be rejected but their
threepid is on the reserved list. user_id and
threepid should never be set at the same time.
+
+ user_type(str|None): If present, is used to decide whether to check against
+ certain blocking reasons like MAU.
"""
# Never fail an auth check for the server notices users or support user
@@ -759,6 +762,10 @@ class Auth(object):
self.hs.config.mau_limits_reserved_threepids, threepid
):
return
+ elif user_type == UserTypes.SUPPORT:
+ # If the user does not exist yet and is of type "support",
+ # allow registration. Support users are excluded from MAU checks.
+ return
# Else if there is no room in the MAU bucket, bail
current_mau = yield self.store.get_monthly_active_count()
if current_mau >= self.hs.config.max_mau_value:
diff --git a/synapse/app/homeserver.py b/synapse/app/homeserver.py
index 04f1ed14f3..774326dff9 100644
--- a/synapse/app/homeserver.py
+++ b/synapse/app/homeserver.py
@@ -561,10 +561,12 @@ def run(hs):
stats["database_engine"] = hs.get_datastore().database_engine_name
stats["database_server_version"] = hs.get_datastore().get_server_version()
- logger.info("Reporting stats to matrix.org: %s" % (stats,))
+ logger.info(
+ "Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats)
+ )
try:
yield hs.get_simple_http_client().put_json(
- "https://matrix.org/report-usage-stats/push", stats
+ hs.config.report_stats_endpoint, stats
)
except Exception as e:
logger.warn("Error reporting stats: %s", e)
diff --git a/synapse/config/logger.py b/synapse/config/logger.py
index 2704c18720..767ecfdf09 100644
--- a/synapse/config/logger.py
+++ b/synapse/config/logger.py
@@ -21,7 +21,12 @@ from string import Template
import yaml
-from twisted.logger import STDLibLogObserver, globalLogBeginner
+from twisted.logger import (
+ ILogObserver,
+ LogBeginner,
+ STDLibLogObserver,
+ globalLogBeginner,
+)
import synapse
from synapse.app import _base as appbase
@@ -124,7 +129,7 @@ class LoggingConfig(Config):
log_config_file.write(DEFAULT_LOG_CONFIG.substitute(log_file=log_file))
-def _setup_stdlib_logging(config, log_config):
+def _setup_stdlib_logging(config, log_config, logBeginner: LogBeginner):
"""
Set up Python stdlib logging.
"""
@@ -165,12 +170,12 @@ def _setup_stdlib_logging(config, log_config):
return observer(event)
- globalLogBeginner.beginLoggingTo(
- [_log], redirectStandardIO=not config.no_redirect_stdio
- )
+ logBeginner.beginLoggingTo([_log], redirectStandardIO=not config.no_redirect_stdio)
if not config.no_redirect_stdio:
print("Redirected stdout/stderr to logs")
+ return observer
+
def _reload_stdlib_logging(*args, log_config=None):
logger = logging.getLogger("")
@@ -181,7 +186,9 @@ def _reload_stdlib_logging(*args, log_config=None):
logging.config.dictConfig(log_config)
-def setup_logging(hs, config, use_worker_options=False):
+def setup_logging(
+ hs, config, use_worker_options=False, logBeginner: LogBeginner = globalLogBeginner
+) -> ILogObserver:
"""
Set up the logging subsystem.
@@ -191,6 +198,12 @@ def setup_logging(hs, config, use_worker_options=False):
use_worker_options (bool): True to use the 'worker_log_config' option
instead of 'log_config'.
+
+ logBeginner: The Twisted logBeginner to use.
+
+ Returns:
+ The "root" Twisted Logger observer, suitable for sending logs to from a
+ Logger instance.
"""
log_config = config.worker_log_config if use_worker_options else config.log_config
@@ -210,10 +223,12 @@ def setup_logging(hs, config, use_worker_options=False):
log_config_body = read_config()
if log_config_body and log_config_body.get("structured") is True:
- setup_structured_logging(hs, config, log_config_body)
+ logger = setup_structured_logging(
+ hs, config, log_config_body, logBeginner=logBeginner
+ )
appbase.register_sighup(read_config, callback=reload_structured_logging)
else:
- _setup_stdlib_logging(config, log_config_body)
+ logger = _setup_stdlib_logging(config, log_config_body, logBeginner=logBeginner)
appbase.register_sighup(read_config, callback=_reload_stdlib_logging)
# make sure that the first thing we log is a thing we can grep backwards
@@ -221,3 +236,5 @@ def setup_logging(hs, config, use_worker_options=False):
logging.warn("***** STARTING SERVER *****")
logging.warn("Server %s version %s", sys.argv[0], get_version_string(synapse))
logging.info("Server hostname: %s", config.server_name)
+
+ return logger
diff --git a/synapse/config/metrics.py b/synapse/config/metrics.py
index 3698441963..ec35a6b868 100644
--- a/synapse/config/metrics.py
+++ b/synapse/config/metrics.py
@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
+# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,26 +14,47 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import attr
+
+from synapse.python_dependencies import DependencyException, check_requirements
+
from ._base import Config, ConfigError
-MISSING_SENTRY = """Missing sentry-sdk library. This is required to enable sentry
- integration.
- """
+
+@attr.s
+class MetricsFlags(object):
+ known_servers = attr.ib(default=False, validator=attr.validators.instance_of(bool))
+
+ @classmethod
+ def all_off(cls):
+ """
+ Instantiate the flags with all options set to off.
+ """
+ return cls(**{x.name: False for x in attr.fields(cls)})
class MetricsConfig(Config):
def read_config(self, config, **kwargs):
self.enable_metrics = config.get("enable_metrics", False)
self.report_stats = config.get("report_stats", None)
+ self.report_stats_endpoint = config.get(
+ "report_stats_endpoint", "https://matrix.org/report-usage-stats/push"
+ )
self.metrics_port = config.get("metrics_port")
self.metrics_bind_host = config.get("metrics_bind_host", "127.0.0.1")
+ if self.enable_metrics:
+ _metrics_config = config.get("metrics_flags") or {}
+ self.metrics_flags = MetricsFlags(**_metrics_config)
+ else:
+ self.metrics_flags = MetricsFlags.all_off()
+
self.sentry_enabled = "sentry" in config
if self.sentry_enabled:
try:
- import sentry_sdk # noqa F401
- except ImportError:
- raise ConfigError(MISSING_SENTRY)
+ check_requirements("sentry")
+ except DependencyException as e:
+ raise ConfigError(e.message)
self.sentry_dsn = config["sentry"].get("dsn")
if not self.sentry_dsn:
@@ -58,6 +80,16 @@ class MetricsConfig(Config):
#sentry:
# dsn: "..."
+ # Flags to enable Prometheus metrics which are not suitable to be
+ # enabled by default, either for performance reasons or limited use.
+ #
+ metrics_flags:
+  # Publish synapse_federation_known_servers, a gauge of the number of
+ # servers this homeserver knows about, including itself. May cause
+ # performance problems on large homeservers.
+ #
+ #known_servers: true
+
# Whether or not to report anonymized homeserver usage statistics.
"""
@@ -66,4 +98,10 @@ class MetricsConfig(Config):
else:
res += "report_stats: %s\n" % ("true" if report_stats else "false")
+ res += """
+ # The endpoint to report the anonymized homeserver usage statistics to.
+ # Defaults to https://matrix.org/report-usage-stats/push
+ #
+ #report_stats_endpoint: https://example.com/report-usage-stats/push
+ """
return res
diff --git a/synapse/config/ratelimiting.py b/synapse/config/ratelimiting.py
index 33f31cf213..587e2862b7 100644
--- a/synapse/config/ratelimiting.py
+++ b/synapse/config/ratelimiting.py
@@ -80,6 +80,12 @@ class RatelimitConfig(Config):
"federation_rr_transactions_per_room_per_second", 50
)
+ rc_admin_redaction = config.get("rc_admin_redaction")
+ if rc_admin_redaction:
+ self.rc_admin_redaction = RateLimitConfig(rc_admin_redaction)
+ else:
+ self.rc_admin_redaction = None
+
def generate_config_section(self, **kwargs):
return """\
## Ratelimiting ##
@@ -102,6 +108,9 @@ class RatelimitConfig(Config):
# - one for login that ratelimits login requests based on the account the
# client is attempting to log into, based on the amount of failed login
# attempts for this account.
+ # - one for ratelimiting redactions by room admins. If this is not explicitly
+ # set then it uses the same ratelimiting as per rc_message. This is useful
+ # to allow room admins to deal with abuse quickly.
#
# The defaults are as shown below.
#
@@ -123,6 +132,10 @@ class RatelimitConfig(Config):
# failed_attempts:
# per_second: 0.17
# burst_count: 3
+ #
+ #rc_admin_redaction:
+ # per_second: 1
+ # burst_count: 50
# Ratelimiting settings for incoming federation
diff --git a/synapse/config/repository.py b/synapse/config/repository.py
index fdb1f246d0..34f1a9a92d 100644
--- a/synapse/config/repository.py
+++ b/synapse/config/repository.py
@@ -16,6 +16,7 @@
import os
from collections import namedtuple
+from synapse.python_dependencies import DependencyException, check_requirements
from synapse.util.module_loader import load_module
from ._base import Config, ConfigError
@@ -34,17 +35,6 @@ THUMBNAIL_SIZE_YAML = """\
# method: %(method)s
"""
-MISSING_NETADDR = "Missing netaddr library. This is required for URL preview API."
-
-MISSING_LXML = """Missing lxml library. This is required for URL preview API.
-
- Install by running:
- pip install lxml
-
- Requires libxslt1-dev system package.
- """
-
-
ThumbnailRequirement = namedtuple(
"ThumbnailRequirement", ["width", "height", "method", "media_type"]
)
@@ -171,16 +161,10 @@ class ContentRepositoryConfig(Config):
self.url_preview_enabled = config.get("url_preview_enabled", False)
if self.url_preview_enabled:
try:
- import lxml
-
- lxml # To stop unused lint.
- except ImportError:
- raise ConfigError(MISSING_LXML)
+ check_requirements("url_preview")
- try:
- from netaddr import IPSet
- except ImportError:
- raise ConfigError(MISSING_NETADDR)
+ except DependencyException as e:
+ raise ConfigError(e.message)
if "url_preview_ip_range_blacklist" not in config:
raise ConfigError(
@@ -189,6 +173,9 @@ class ContentRepositoryConfig(Config):
"to work"
)
+ # netaddr is a dependency for url_preview
+ from netaddr import IPSet
+
self.url_preview_ip_range_blacklist = IPSet(
config["url_preview_ip_range_blacklist"]
)
diff --git a/synapse/config/server.py b/synapse/config/server.py
index 2abdef0971..7f8d315954 100644
--- a/synapse/config/server.py
+++ b/synapse/config/server.py
@@ -162,6 +162,16 @@ class ServerConfig(Config):
self.mau_trial_days = config.get("mau_trial_days", 0)
+ # How long to keep redacted events in the database in unredacted form
+ # before redacting them.
+ redaction_retention_period = config.get("redaction_retention_period", "7d")
+ if redaction_retention_period is not None:
+ self.redaction_retention_period = self.parse_duration(
+ redaction_retention_period
+ )
+ else:
+ self.redaction_retention_period = None
+
# Options to disable HS
self.hs_disabled = config.get("hs_disabled", False)
self.hs_disabled_message = config.get("hs_disabled_message", "")
@@ -328,7 +338,7 @@ class ServerConfig(Config):
(
"The metrics_port configuration option is deprecated in Synapse 0.31 "
"in favour of a listener. Please see "
- "http://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.rst"
+ "http://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.md"
" on how to configure the new listener."
)
)
@@ -561,8 +571,8 @@ class ServerConfig(Config):
#
# type: the type of listener. Normally 'http', but other valid options are:
# 'manhole' (see docs/manhole.md),
- # 'metrics' (see docs/metrics-howto.rst),
- # 'replication' (see docs/workers.rst).
+ # 'metrics' (see docs/metrics-howto.md),
+ # 'replication' (see docs/workers.md).
#
# tls: set to true to enable TLS for this listener. Will use the TLS
# key/cert specified in tls_private_key_path / tls_certificate_path.
@@ -597,12 +607,12 @@ class ServerConfig(Config):
#
# media: the media API (/_matrix/media).
#
- # metrics: the metrics interface. See docs/metrics-howto.rst.
+ # metrics: the metrics interface. See docs/metrics-howto.md.
#
# openid: OpenID authentication.
#
# replication: the HTTP replication API (/_synapse/replication). See
- # docs/workers.rst.
+ # docs/workers.md.
#
# static: static resources under synapse/static (/_matrix/static). (Mostly
# useful for 'fallback authentication'.)
@@ -622,7 +632,7 @@ class ServerConfig(Config):
# that unwraps TLS.
#
# If you plan to use a reverse proxy, please see
- # https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.rst.
+ # https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md.
#
%(unsecure_http_bindings)s
@@ -718,6 +728,13 @@ class ServerConfig(Config):
# Defaults to 'true'.
#
#allow_per_room_profiles: false
+
+ # How long to keep redacted events in unredacted form in the database. After
+ # this period redacted events get replaced with their redacted form in the DB.
+ #
+ # Defaults to `7d`. Set to `null` to disable.
+ #
+ redaction_retention_period: 7d
"""
% locals()
)
diff --git a/synapse/config/tls.py b/synapse/config/tls.py
index c0148aa95c..fc47ba3e9a 100644
--- a/synapse/config/tls.py
+++ b/synapse/config/tls.py
@@ -110,8 +110,15 @@ class TlsConfig(Config):
# Support globs (*) in whitelist values
self.federation_certificate_verification_whitelist = []
for entry in fed_whitelist_entries:
+ try:
+ entry_regex = glob_to_regex(entry.encode("ascii").decode("ascii"))
+ except UnicodeEncodeError:
+ raise ConfigError(
+ "IDNA domain names are not allowed in the "
+ "federation_certificate_verification_whitelist: %s" % (entry,)
+ )
+
# Convert globs to regex
- entry_regex = glob_to_regex(entry)
self.federation_certificate_verification_whitelist.append(entry_regex)
# List of custom certificate authorities for federation traffic validation
diff --git a/synapse/config/tracer.py b/synapse/config/tracer.py
index 95e7ccb3a3..85d99a3166 100644
--- a/synapse/config/tracer.py
+++ b/synapse/config/tracer.py
@@ -13,6 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+from synapse.python_dependencies import DependencyException, check_requirements
+
from ._base import Config, ConfigError
@@ -32,6 +34,11 @@ class TracerConfig(Config):
if not self.opentracer_enabled:
return
+ try:
+ check_requirements("opentracing")
+ except DependencyException as e:
+ raise ConfigError(e.message)
+
# The tracer is enabled so sanitize the config
self.opentracer_whitelist = opentracing_config.get("homeserver_whitelist", [])
diff --git a/synapse/crypto/context_factory.py b/synapse/crypto/context_factory.py
index 06e63a96b5..e93f0b3705 100644
--- a/synapse/crypto/context_factory.py
+++ b/synapse/crypto/context_factory.py
@@ -15,7 +15,6 @@
import logging
-import idna
from service_identity import VerificationError
from service_identity.pyopenssl import verify_hostname, verify_ip_address
from zope.interface import implementer
@@ -114,14 +113,20 @@ class ClientTLSOptionsFactory(object):
self._no_verify_ssl_context = self._no_verify_ssl.getContext()
self._no_verify_ssl_context.set_info_callback(self._context_info_cb)
- def get_options(self, host):
+ def get_options(self, host: bytes):
+
+ # IPolicyForHTTPS.get_options takes bytes, but we want to compare
+ # against the str whitelist. The hostnames in the whitelist are already
+ # IDNA-encoded like the hosts will be here.
+ ascii_host = host.decode("ascii")
+
# Check if certificate verification has been enabled
should_verify = self._config.federation_verify_certificates
# Check if we've disabled certificate verification for this host
if should_verify:
for regex in self._config.federation_certificate_verification_whitelist:
- if regex.match(host):
+ if regex.match(ascii_host):
should_verify = False
break
@@ -162,7 +167,7 @@ class SSLClientConnectionCreator(object):
Replaces twisted.internet.ssl.ClientTLSOptions
"""
- def __init__(self, hostname, ctx, verify_certs):
+ def __init__(self, hostname: bytes, ctx, verify_certs: bool):
self._ctx = ctx
self._verifier = ConnectionVerifier(hostname, verify_certs)
@@ -190,21 +195,16 @@ class ConnectionVerifier(object):
# This code is based on twisted.internet.ssl.ClientTLSOptions.
- def __init__(self, hostname, verify_certs):
+ def __init__(self, hostname: bytes, verify_certs):
self._verify_certs = verify_certs
- if isIPAddress(hostname) or isIPv6Address(hostname):
- self._hostnameBytes = hostname.encode("ascii")
+ _decoded = hostname.decode("ascii")
+ if isIPAddress(_decoded) or isIPv6Address(_decoded):
self._is_ip_address = True
else:
- # twisted's ClientTLSOptions falls back to the stdlib impl here if
- # idna is not installed, but points out that lacks support for
- # IDNA2008 (http://bugs.python.org/issue17305).
- #
- # We can rely on having idna.
- self._hostnameBytes = idna.encode(hostname)
self._is_ip_address = False
+ self._hostnameBytes = hostname
self._hostnameASCII = self._hostnameBytes.decode("ascii")
def verify_context_info_cb(self, ssl_connection, where):
diff --git a/synapse/federation/federation_server.py b/synapse/federation/federation_server.py
index e5f0b90aec..da06ab379d 100644
--- a/synapse/federation/federation_server.py
+++ b/synapse/federation/federation_server.py
@@ -669,9 +669,9 @@ class FederationServer(FederationBase):
return ret
@defer.inlineCallbacks
- def on_exchange_third_party_invite_request(self, origin, room_id, event_dict):
+ def on_exchange_third_party_invite_request(self, room_id, event_dict):
ret = yield self.handler.on_exchange_third_party_invite_request(
- origin, room_id, event_dict
+ room_id, event_dict
)
return ret
diff --git a/synapse/federation/transport/server.py b/synapse/federation/transport/server.py
index 132a8fb5e6..7dc696c7ae 100644
--- a/synapse/federation/transport/server.py
+++ b/synapse/federation/transport/server.py
@@ -575,7 +575,7 @@ class FederationThirdPartyInviteExchangeServlet(BaseFederationServlet):
async def on_PUT(self, origin, content, query, room_id):
content = await self.handler.on_exchange_third_party_invite_request(
- origin, room_id, content
+ room_id, content
)
return 200, content
diff --git a/synapse/handlers/_base.py b/synapse/handlers/_base.py
index c29c78bd65..d15c6282fb 100644
--- a/synapse/handlers/_base.py
+++ b/synapse/handlers/_base.py
@@ -45,6 +45,7 @@ class BaseHandler(object):
self.state_handler = hs.get_state_handler()
self.distributor = hs.get_distributor()
self.ratelimiter = hs.get_ratelimiter()
+ self.admin_redaction_ratelimiter = hs.get_admin_redaction_ratelimiter()
self.clock = hs.get_clock()
self.hs = hs
@@ -53,7 +54,7 @@ class BaseHandler(object):
self.event_builder_factory = hs.get_event_builder_factory()
@defer.inlineCallbacks
- def ratelimit(self, requester, update=True):
+ def ratelimit(self, requester, update=True, is_admin_redaction=False):
"""Ratelimits requests.
Args:
@@ -62,6 +63,9 @@ class BaseHandler(object):
Set to False when doing multiple checks for one request (e.g.
to check up front if we would reject the request), and set to
True for the last call for a given request.
+ is_admin_redaction (bool): Whether this is a room admin/moderator
+ redacting an event. If so then we may apply different
+ ratelimits depending on config.
Raises:
LimitExceededError if the request should be ratelimited
@@ -90,16 +94,33 @@ class BaseHandler(object):
messages_per_second = override.messages_per_second
burst_count = override.burst_count
else:
- messages_per_second = self.hs.config.rc_message.per_second
- burst_count = self.hs.config.rc_message.burst_count
-
- allowed, time_allowed = self.ratelimiter.can_do_action(
- user_id,
- time_now,
- rate_hz=messages_per_second,
- burst_count=burst_count,
- update=update,
- )
+ # We default to different values if this is an admin redaction and
+ # the config is set
+ if is_admin_redaction and self.hs.config.rc_admin_redaction:
+ messages_per_second = self.hs.config.rc_admin_redaction.per_second
+ burst_count = self.hs.config.rc_admin_redaction.burst_count
+ else:
+ messages_per_second = self.hs.config.rc_message.per_second
+ burst_count = self.hs.config.rc_message.burst_count
+
+ if is_admin_redaction and self.hs.config.rc_admin_redaction:
+ # If we have separate config for admin redactions we use a separate
+ # ratelimiter
+ allowed, time_allowed = self.admin_redaction_ratelimiter.can_do_action(
+ user_id,
+ time_now,
+ rate_hz=messages_per_second,
+ burst_count=burst_count,
+ update=update,
+ )
+ else:
+ allowed, time_allowed = self.ratelimiter.can_do_action(
+ user_id,
+ time_now,
+ rate_hz=messages_per_second,
+ burst_count=burst_count,
+ update=update,
+ )
if not allowed:
raise LimitExceededError(
retry_after_ms=int(1000 * (time_allowed - time_now))
diff --git a/synapse/handlers/federation.py b/synapse/handlers/federation.py
index 538b16efd6..f72b81d419 100644
--- a/synapse/handlers/federation.py
+++ b/synapse/handlers/federation.py
@@ -2530,12 +2530,17 @@ class FederationHandler(BaseHandler):
@defer.inlineCallbacks
@log_function
- def on_exchange_third_party_invite_request(self, origin, room_id, event_dict):
+ def on_exchange_third_party_invite_request(self, room_id, event_dict):
"""Handle an exchange_third_party_invite request from a remote server
The remote server will call this when it wants to turn a 3pid invite
into a normal m.room.member invite.
+ Args:
+ room_id (str): The ID of the room.
+
+ event_dict (dict[str, Any]): Dictionary containing the event body.
+
Returns:
Deferred: resolves (to None)
"""
diff --git a/synapse/handlers/identity.py b/synapse/handlers/identity.py
index f6d1d1717e..512f38e5a6 100644
--- a/synapse/handlers/identity.py
+++ b/synapse/handlers/identity.py
@@ -144,20 +144,29 @@ class IdentityHandler(BaseHandler):
creds
)
+ sid = creds.get("sid")
+ if not sid:
+ raise SynapseError(
+ 400, "No sid in three_pid_creds", errcode=Codes.MISSING_PARAM
+ )
+
# If an id_access_token is not supplied, force usage of v1
if id_access_token is None:
use_v2 = False
# Decide which API endpoint URLs to use
- bind_data = {"sid": creds["sid"], "client_secret": client_secret, "mxid": mxid}
+ headers = {}
+ bind_data = {"sid": sid, "client_secret": client_secret, "mxid": mxid}
if use_v2:
bind_url = "https://%s/_matrix/identity/v2/3pid/bind" % (id_server,)
- bind_data["id_access_token"] = id_access_token
+ headers["Authorization"] = create_id_access_token_header(id_access_token)
else:
bind_url = "https://%s/_matrix/identity/api/v1/3pid/bind" % (id_server,)
try:
- data = yield self.http_client.post_json_get_json(bind_url, bind_data)
+ data = yield self.http_client.post_json_get_json(
+ bind_url, bind_data, headers=headers
+ )
logger.debug("bound threepid %r to %s", creds, mxid)
# Remember where we bound the threepid
@@ -448,3 +457,36 @@ class IdentityHandler(BaseHandler):
except HttpResponseException as e:
logger.info("Proxied requestToken failed: %r", e)
raise e.to_synapse_error()
+
+
+def create_id_access_token_header(id_access_token):
+ """Create an Authorization header for passing to SimpleHttpClient as the header value
+ of an HTTP request.
+
+ Args:
+ id_access_token (str): An identity server access token.
+
+ Returns:
+ list[str]: The ascii-encoded bearer token encased in a list.
+ """
+ # Prefix with Bearer
+ bearer_token = "Bearer %s" % id_access_token
+
+ # Encode headers to standard ascii
+ bearer_token.encode("ascii")
+
+ # Return as a list as that's how SimpleHttpClient takes header values
+ return [bearer_token]
+
+
+class LookupAlgorithm:
+ """
+ Supported hashing algorithms when performing a 3PID lookup.
+
+ SHA256 - Hashing an (address, medium, pepper) combo with sha256, then url-safe base64
+ encoding
+ NONE - Not performing any hashing. Simply sending an (address, medium) combo in plaintext
+ """
+
+ SHA256 = "sha256"
+ NONE = "none"
diff --git a/synapse/handlers/message.py b/synapse/handlers/message.py
index efb1cea579..2650880044 100644
--- a/synapse/handlers/message.py
+++ b/synapse/handlers/message.py
@@ -729,7 +729,27 @@ class EventCreationHandler(object):
assert not self.config.worker_app
if ratelimit:
- yield self.base_handler.ratelimit(requester)
+ # We check if this is a room admin redacting an event so that we
+ # can apply different ratelimiting. We do this by simply checking
+ # it's not a self-redaction (to avoid having to look up whether the
+ # user is actually admin or not).
+ is_admin_redaction = False
+ if event.type == EventTypes.Redaction:
+ original_event = yield self.store.get_event(
+ event.redacts,
+ check_redacted=False,
+ get_prev_content=False,
+ allow_rejected=False,
+ allow_none=True,
+ )
+
+ is_admin_redaction = (
+ original_event and event.sender != original_event.sender
+ )
+
+ yield self.base_handler.ratelimit(
+ requester, is_admin_redaction=is_admin_redaction
+ )
yield self.base_handler.maybe_kick_guest_users(event, context)
diff --git a/synapse/handlers/register.py b/synapse/handlers/register.py
index 975da57ffd..06bd03b77c 100644
--- a/synapse/handlers/register.py
+++ b/synapse/handlers/register.py
@@ -275,16 +275,12 @@ class RegistrationHandler(BaseHandler):
fake_requester = create_requester(user_id)
# try to create the room if we're the first real user on the server. Note
- # that an auto-generated support user is not a real user and will never be
+ # that an auto-generated support or bot user is not a real user and will never be
# the user to create the room
should_auto_create_rooms = False
- is_support = yield self.store.is_support_user(user_id)
- # There is an edge case where the first user is the support user, then
- # the room is never created, though this seems unlikely and
- # recoverable from given the support user being involved in the first
- # place.
- if self.hs.config.autocreate_auto_join_rooms and not is_support:
- count = yield self.store.count_all_users()
+ is_real_user = yield self.store.is_real_user(user_id)
+ if self.hs.config.autocreate_auto_join_rooms and is_real_user:
+ count = yield self.store.count_real_users()
should_auto_create_rooms = count == 1
for r in self.hs.config.auto_join_rooms:
logger.info("Auto-joining %s to %s", user_id, r)
diff --git a/synapse/handlers/room.py b/synapse/handlers/room.py
index a509e11d69..970be3c846 100644
--- a/synapse/handlers/room.py
+++ b/synapse/handlers/room.py
@@ -579,8 +579,8 @@ class RoomCreationHandler(BaseHandler):
room_id = yield self._generate_room_id(creator_id=user_id, is_public=is_public)
+ directory_handler = self.hs.get_handlers().directory_handler
if room_alias:
- directory_handler = self.hs.get_handlers().directory_handler
yield directory_handler.create_association(
requester=requester,
room_id=room_id,
@@ -665,6 +665,7 @@ class RoomCreationHandler(BaseHandler):
for invite_3pid in invite_3pid_list:
id_server = invite_3pid["id_server"]
+ id_access_token = invite_3pid.get("id_access_token") # optional
address = invite_3pid["address"]
medium = invite_3pid["medium"]
yield self.hs.get_room_member_handler().do_3pid_invite(
@@ -675,6 +676,7 @@ class RoomCreationHandler(BaseHandler):
id_server,
requester,
txn_id=None,
+ id_access_token=id_access_token,
)
result = {"room_id": room_id}
diff --git a/synapse/handlers/room_member.py b/synapse/handlers/room_member.py
index 2d7d72c7a4..e914d75e89 100644
--- a/synapse/handlers/room_member.py
+++ b/synapse/handlers/room_member.py
@@ -29,9 +29,11 @@ from twisted.internet import defer
from synapse import types
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, Codes, HttpResponseException, SynapseError
+from synapse.handlers.identity import LookupAlgorithm, create_id_access_token_header
from synapse.types import RoomID, UserID
from synapse.util.async_helpers import Linearizer
from synapse.util.distributor import user_joined_room, user_left_room
+from synapse.util.hash import sha256_and_url_safe_base64
from ._base import BaseHandler
@@ -101,7 +103,7 @@ class RoomMemberHandler(object):
raise NotImplementedError()
@abc.abstractmethod
- def _remote_reject_invite(self, remote_room_hosts, room_id, target):
+ def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
"""Attempt to reject an invite for a room this server is not in. If we
fail to do so we locally mark the invite as rejected.
@@ -530,9 +532,7 @@ class RoomMemberHandler(object):
return res
@defer.inlineCallbacks
- def send_membership_event(
- self, requester, event, context, remote_room_hosts=None, ratelimit=True
- ):
+ def send_membership_event(self, requester, event, context, ratelimit=True):
"""
Change the membership status of a user in a room.
@@ -542,16 +542,10 @@ class RoomMemberHandler(object):
act as the sender, will be skipped.
event (SynapseEvent): The membership event.
context: The context of the event.
- is_guest (bool): Whether the sender is a guest.
- room_hosts ([str]): Homeservers which are likely to already be in
- the room, and could be danced with in order to join this
- homeserver for the first time.
ratelimit (bool): Whether to rate limit this request.
Raises:
SynapseError if there was a problem changing the membership.
"""
- remote_room_hosts = remote_room_hosts or []
-
target_user = UserID.from_string(event.state_key)
room_id = event.room_id
@@ -654,7 +648,7 @@ class RoomMemberHandler(object):
servers.remove(room_alias.domain)
servers.insert(0, room_alias.domain)
- return (RoomID.from_string(room_id), servers)
+ return RoomID.from_string(room_id), servers
@defer.inlineCallbacks
def _get_inviter(self, user_id, room_id):
@@ -666,7 +660,15 @@ class RoomMemberHandler(object):
@defer.inlineCallbacks
def do_3pid_invite(
- self, room_id, inviter, medium, address, id_server, requester, txn_id
+ self,
+ room_id,
+ inviter,
+ medium,
+ address,
+ id_server,
+ requester,
+ txn_id,
+ id_access_token=None,
):
if self.config.block_non_admin_invites:
is_requester_admin = yield self.auth.is_server_admin(requester.user)
@@ -689,7 +691,12 @@ class RoomMemberHandler(object):
Codes.FORBIDDEN,
)
- invitee = yield self._lookup_3pid(id_server, medium, address)
+ if not self._enable_lookup:
+ raise SynapseError(
+ 403, "Looking up third-party identifiers is denied from this server"
+ )
+
+ invitee = yield self._lookup_3pid(id_server, medium, address, id_access_token)
if invitee:
yield self.update_membership(
@@ -697,11 +704,18 @@ class RoomMemberHandler(object):
)
else:
yield self._make_and_store_3pid_invite(
- requester, id_server, medium, address, room_id, inviter, txn_id=txn_id
+ requester,
+ id_server,
+ medium,
+ address,
+ room_id,
+ inviter,
+ txn_id=txn_id,
+ id_access_token=id_access_token,
)
@defer.inlineCallbacks
- def _lookup_3pid(self, id_server, medium, address):
+ def _lookup_3pid(self, id_server, medium, address, id_access_token=None):
"""Looks up a 3pid in the passed identity server.
Args:
@@ -709,14 +723,48 @@ class RoomMemberHandler(object):
of the identity server to use.
medium (str): The type of the third party identifier (e.g. "email").
address (str): The third party identifier (e.g. "foo@example.com").
+ id_access_token (str|None): The access token to authenticate to the identity
+ server with
+
+ Returns:
+ str|None: the matrix ID of the 3pid, or None if it is not recognized.
+ """
+ if id_access_token is not None:
+ try:
+ results = yield self._lookup_3pid_v2(
+ id_server, id_access_token, medium, address
+ )
+ return results
+
+ except Exception as e:
+ # Catch HttpResponseException for a non-200 response code
+ # Check if this identity server does not know about v2 lookups
+ if isinstance(e, HttpResponseException) and e.code == 404:
+ # This is an old identity server that does not yet support v2 lookups
+ logger.warning(
+ "Attempted v2 lookup on v1 identity server %s. Falling "
+ "back to v1",
+ id_server,
+ )
+ else:
+ logger.warning("Error when looking up hashing details: %s", e)
+ return None
+
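+ # Either no access token was provided, or the v2 lookup hit a 404; fall back to v1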
+ return (yield self._lookup_3pid_v1(id_server, medium, address))
+
+ @defer.inlineCallbacks
+ def _lookup_3pid_v1(self, id_server, medium, address):
+ """Looks up a 3pid in the passed identity server using v1 lookup.
+
+ Args:
+ id_server (str): The server name (including port, if required)
+ of the identity server to use.
+ medium (str): The type of the third party identifier (e.g. "email").
+ address (str): The third party identifier (e.g. "foo@example.com").
Returns:
str: the matrix ID of the 3pid, or None if it is not recognized.
"""
- if not self._enable_lookup:
- raise SynapseError(
- 403, "Looking up third-party identifiers is denied from this server"
- )
try:
data = yield self.simple_http_client.get_json(
"%s%s/_matrix/identity/api/v1/lookup" % (id_server_scheme, id_server),
@@ -730,9 +778,116 @@ class RoomMemberHandler(object):
return data["mxid"]
except IOError as e:
- logger.warn("Error from identity server lookup: %s" % (e,))
+ logger.warning("Error from v1 identity server lookup: %s" % (e,))
+
+ return None
+
+ @defer.inlineCallbacks
+ def _lookup_3pid_v2(self, id_server, id_access_token, medium, address):
+ """Looks up a 3pid in the passed identity server using v2 lookup.
+
+ Args:
+ id_server (str): The server name (including port, if required)
+ of the identity server to use.
+ id_access_token (str): The access token to authenticate to the identity server with
+ medium (str): The type of the third party identifier (e.g. "email").
+ address (str): The third party identifier (e.g. "foo@example.com").
+
+ Returns:
+ Deferred[str|None]: the matrix ID of the 3pid, or None if it is not recognised.
+ """
+ # Check what hashing details are supported by this identity server
+ hash_details = yield self.simple_http_client.get_json(
+ "%s%s/_matrix/identity/v2/hash_details" % (id_server_scheme, id_server),
+ {"access_token": id_access_token},
+ )
+
+ if not isinstance(hash_details, dict):
+ logger.warning(
+ "Got non-dict object when checking hash details of %s%s: %s",
+ id_server_scheme,
+ id_server,
+ hash_details,
+ )
+ raise SynapseError(
+ 400,
+ "Non-dict object from %s%s during v2 hash_details request: %s"
+ % (id_server_scheme, id_server, hash_details),
+ )
+
+ # Extract information from hash_details
+ supported_lookup_algorithms = hash_details.get("algorithms")
+ lookup_pepper = hash_details.get("lookup_pepper")
+ if (
+ not supported_lookup_algorithms
+ or not isinstance(supported_lookup_algorithms, list)
+ or not lookup_pepper
+ or not isinstance(lookup_pepper, str)
+ ):
+ raise SynapseError(
+ 400,
+ "Invalid hash details received from identity server %s%s: %s"
+ % (id_server_scheme, id_server, hash_details),
+ )
+
+ # Check if any of the supported lookup algorithms are present
+ if LookupAlgorithm.SHA256 in supported_lookup_algorithms:
+ # Perform a hashed lookup
+ lookup_algorithm = LookupAlgorithm.SHA256
+
+ # Hash address, medium and the pepper with sha256
+ to_hash = "%s %s %s" % (address, medium, lookup_pepper)
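+ # e.g. to_hash might look like "alice@example.com email <pepper>" before hashing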
+ lookup_value = sha256_and_url_safe_base64(to_hash)
+
+ elif LookupAlgorithm.NONE in supported_lookup_algorithms:
+ # Perform a non-hashed lookup
+ lookup_algorithm = LookupAlgorithm.NONE
+
+ # Combine together plaintext address and medium
+ lookup_value = "%s %s" % (address, medium)
+
+ else:
+ logger.warning(
+ "None of the provided lookup algorithms of %s are supported: %s",
+ id_server,
+ supported_lookup_algorithms,
+ )
+ raise SynapseError(
+ 400,
+ "Provided identity server does not support any v2 lookup "
+ "algorithms that this homeserver supports.",
+ )
+
+ # Authenticate with identity server given the access token from the client
+ headers = {"Authorization": create_id_access_token_header(id_access_token)}
+
+ try:
+ lookup_results = yield self.simple_http_client.post_json_get_json(
+ "%s%s/_matrix/identity/v2/lookup" % (id_server_scheme, id_server),
+ {
+ "addresses": [lookup_value],
+ "algorithm": lookup_algorithm,
+ "pepper": lookup_pepper,
+ },
+ headers=headers,
+ )
+ except Exception as e:
+ logger.warning("Error when performing a v2 3pid lookup: %s", e)
+ raise SynapseError(
+ 500, "Unknown error occurred during identity server lookup"
+ )
+
+ # Check for a mapping from what we looked up to an MXID
+ if "mappings" not in lookup_results or not isinstance(
+ lookup_results["mappings"], dict
+ ):
+ logger.warning("No results from 3pid lookup")
return None
+ # Return the MXID if it's available, or None otherwise
+ mxid = lookup_results["mappings"].get(lookup_value)
+ return mxid
+
@defer.inlineCallbacks
def _verify_any_signature(self, data, server_hostname):
if server_hostname not in data["signatures"]:
@@ -757,7 +912,15 @@ class RoomMemberHandler(object):
@defer.inlineCallbacks
def _make_and_store_3pid_invite(
- self, requester, id_server, medium, address, room_id, user, txn_id
+ self,
+ requester,
+ id_server,
+ medium,
+ address,
+ room_id,
+ user,
+ txn_id,
+ id_access_token=None,
):
room_state = yield self.state_handler.get_current_state(room_id)
@@ -806,6 +969,7 @@ class RoomMemberHandler(object):
room_name=room_name,
inviter_display_name=inviter_display_name,
inviter_avatar_url=inviter_avatar_url,
+ id_access_token=id_access_token,
)
)
@@ -843,6 +1007,7 @@ class RoomMemberHandler(object):
room_name,
inviter_display_name,
inviter_avatar_url,
+ id_access_token=None,
):
"""
Asks an identity server for a third party invite.
@@ -862,6 +1027,8 @@ class RoomMemberHandler(object):
inviter_display_name (str): The current display name of the
inviter.
inviter_avatar_url (str): The URL of the inviter's avatar.
+ id_access_token (str|None): The access token to authenticate to the identity
+ server with
Returns:
A deferred tuple containing:
@@ -872,12 +1039,6 @@ class RoomMemberHandler(object):
display_name (str): A user-friendly name to represent the invited
user.
"""
-
- is_url = "%s%s/_matrix/identity/api/v1/store-invite" % (
- id_server_scheme,
- id_server,
- )
-
invite_config = {
"medium": medium,
"address": address,
@@ -891,22 +1052,66 @@ class RoomMemberHandler(object):
"sender_avatar_url": inviter_avatar_url,
}
- try:
- data = yield self.simple_http_client.post_json_get_json(
- is_url, invite_config
- )
- except HttpResponseException as e:
- # Some identity servers may only support application/x-www-form-urlencoded
- # types. This is especially true with old instances of Sydent, see
- # https://github.com/matrix-org/sydent/pull/170
- logger.info(
- "Failed to POST %s with JSON, falling back to urlencoded form: %s",
- is_url,
- e,
+ # Use the v2 Identity Service endpoints, authenticating with the client's
+ # identity server access token, if id_access_token is present
+ data = None
+ base_url = "%s%s/_matrix/identity" % (id_server_scheme, id_server)
+
+ if id_access_token:
+ key_validity_url = "%s%s/_matrix/identity/v2/pubkey/isvalid" % (
+ id_server_scheme,
+ id_server,
)
- data = yield self.simple_http_client.post_urlencoded_get_json(
- is_url, invite_config
+
+ # Attempt a v2 store-invite request
+ url = base_url + "/v2/store-invite"
+ try:
+ data = yield self.simple_http_client.post_json_get_json(
+ url,
+ invite_config,
+ {"Authorization": create_id_access_token_header(id_access_token)},
+ )
+ except HttpResponseException as e:
+ if e.code != 404:
+ logger.info("Failed to POST %s with JSON: %s", url, e)
+ raise e
+
+ if data is None:
+ key_validity_url = "%s%s/_matrix/identity/api/v1/pubkey/isvalid" % (
+ id_server_scheme,
+ id_server,
)
+ url = base_url + "/api/v1/store-invite"
+
+ try:
+ data = yield self.simple_http_client.post_json_get_json(
+ url, invite_config
+ )
+ except HttpResponseException as e:
+ logger.warning(
+ "Error trying to call /store-invite on %s%s: %s",
+ id_server_scheme,
+ id_server,
+ e,
+ )
+
+ if data is None:
+ # Some identity servers may only support application/x-www-form-urlencoded
+ # types. This is especially true with old instances of Sydent, see
+ # https://github.com/matrix-org/sydent/pull/170
+ try:
+ data = yield self.simple_http_client.post_urlencoded_get_json(
+ url, invite_config
+ )
+ except HttpResponseException as e:
+ logger.warning(
+ "Error calling /store-invite on %s%s with fallback "
+ "encoding: %s",
+ id_server_scheme,
+ id_server,
+ e,
+ )
+ raise e
# TODO: Check for success
token = data["token"]
@@ -914,8 +1119,7 @@ class RoomMemberHandler(object):
if "public_key" in data:
fallback_public_key = {
"public_key": data["public_key"],
- "key_validity_url": "%s%s/_matrix/identity/api/v1/pubkey/isvalid"
- % (id_server_scheme, id_server),
+ "key_validity_url": key_validity_url,
}
else:
fallback_public_key = public_keys[0]
@@ -1077,7 +1281,7 @@ class RoomMemberMasterHandler(RoomMemberHandler):
# The 'except' clause is very broad, but we need to
# capture everything from DNS failures upwards
#
- logger.warn("Failed to reject invite: %s", e)
+ logger.warning("Failed to reject invite: %s", e)
yield self.store.locally_reject_invite(target.to_string(), room_id)
return {}
diff --git a/synapse/handlers/stats.py b/synapse/handlers/stats.py
index 921735edb3..cbac7c347a 100644
--- a/synapse/handlers/stats.py
+++ b/synapse/handlers/stats.py
@@ -84,6 +84,13 @@ class StatsHandler(StateDeltasHandler):
# Loop round handling deltas until we're up to date
while True:
+ # Be sure to read the max stream_ordering *before* checking if there are any outstanding
+ # deltas, since there is otherwise a chance that we could miss updates which arrive
+ # after we check the deltas.
+ room_max_stream_ordering = yield self.store.get_room_max_stream_ordering()
+ if self.pos == room_max_stream_ordering:
+ break
+
deltas = yield self.store.get_current_state_deltas(self.pos)
if deltas:
@@ -94,7 +101,7 @@ class StatsHandler(StateDeltasHandler):
else:
room_deltas = {}
user_deltas = {}
- max_pos = yield self.store.get_room_max_stream_ordering()
+ max_pos = room_max_stream_ordering
# Then count deltas for total_events and total_event_bytes.
room_count, user_count = yield self.store.get_changes_room_total_events_and_bytes(
@@ -117,10 +124,9 @@ class StatsHandler(StateDeltasHandler):
stream_id=max_pos,
)
- event_processing_positions.labels("stats").set(max_pos)
+ logger.debug("Handled room stats to %s -> %s", self.pos, max_pos)
- if self.pos == max_pos:
- break
+ event_processing_positions.labels("stats").set(max_pos)
self.pos = max_pos
@@ -260,7 +266,9 @@ class StatsHandler(StateDeltasHandler):
room_stats_delta["local_users_in_room"] += delta
elif typ == EventTypes.Create:
- room_state["is_federatable"] = event_content.get("m.federate", True)
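+                # Only a literal boolean True should count as federatable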
+ room_state["is_federatable"] = (
+ event_content.get("m.federate", True) is True
+ )
if sender and self.is_mine_id(sender):
user_to_stats_deltas.setdefault(sender, Counter())[
"rooms_created"
diff --git a/synapse/http/federation/matrix_federation_agent.py b/synapse/http/federation/matrix_federation_agent.py
index feae7de5be..647d26dc56 100644
--- a/synapse/http/federation/matrix_federation_agent.py
+++ b/synapse/http/federation/matrix_federation_agent.py
@@ -217,7 +217,7 @@ class MatrixHostnameEndpoint(object):
self._tls_options = None
else:
self._tls_options = tls_client_options_factory.get_options(
- self._parsed_uri.host.decode("ascii")
+ self._parsed_uri.host
)
self._srv_resolver = srv_resolver
diff --git a/synapse/logging/_structured.py b/synapse/logging/_structured.py
index 0367d6dfc4..3220e985a9 100644
--- a/synapse/logging/_structured.py
+++ b/synapse/logging/_structured.py
@@ -18,6 +18,7 @@ import os.path
import sys
import typing
import warnings
+from typing import List
import attr
from constantly import NamedConstant, Names, ValueConstant, Values
@@ -33,7 +34,6 @@ from twisted.logger import (
LogLevelFilterPredicate,
LogPublisher,
eventAsText,
- globalLogBeginner,
jsonFileLogObserver,
)
@@ -134,7 +134,7 @@ class PythonStdlibToTwistedLogger(logging.Handler):
)
-def SynapseFileLogObserver(outFile: typing.io.TextIO) -> FileLogObserver:
+def SynapseFileLogObserver(outFile: typing.IO[str]) -> FileLogObserver:
"""
A log observer that formats events like the traditional log formatter and
sends them to `outFile`.
@@ -265,7 +265,7 @@ def setup_structured_logging(
hs,
config,
log_config: dict,
- logBeginner: LogBeginner = globalLogBeginner,
+ logBeginner: LogBeginner,
redirect_stdlib_logging: bool = True,
) -> LogPublisher:
"""
@@ -286,7 +286,7 @@ def setup_structured_logging(
if "drains" not in log_config:
raise ConfigError("The logging configuration requires a list of drains.")
- observers = []
+ observers = [] # type: List[ILogObserver]
for observer in parse_drain_configs(log_config["drains"]):
# Pipe drains
diff --git a/synapse/logging/_terse_json.py b/synapse/logging/_terse_json.py
index 7f1e8f23fe..0ebbde06f2 100644
--- a/synapse/logging/_terse_json.py
+++ b/synapse/logging/_terse_json.py
@@ -21,10 +21,11 @@ import sys
from collections import deque
from ipaddress import IPv4Address, IPv6Address, ip_address
from math import floor
-from typing.io import TextIO
+from typing import IO
import attr
from simplejson import dumps
+from zope.interface import implementer
from twisted.application.internet import ClientService
from twisted.internet.endpoints import (
@@ -33,7 +34,7 @@ from twisted.internet.endpoints import (
TCP6ClientEndpoint,
)
from twisted.internet.protocol import Factory, Protocol
-from twisted.logger import FileLogObserver, Logger
+from twisted.logger import FileLogObserver, ILogObserver, Logger
from twisted.python.failure import Failure
@@ -129,7 +130,7 @@ def flatten_event(event: dict, metadata: dict, include_time: bool = False):
return new_event
-def TerseJSONToConsoleLogObserver(outFile: TextIO, metadata: dict) -> FileLogObserver:
+def TerseJSONToConsoleLogObserver(outFile: IO[str], metadata: dict) -> FileLogObserver:
"""
A log observer that formats events to a flattened JSON representation.
@@ -146,6 +147,7 @@ def TerseJSONToConsoleLogObserver(outFile: TextIO, metadata: dict) -> FileLogObs
@attr.s
+@implementer(ILogObserver)
class TerseJSONToTCPLogObserver(object):
"""
An IObserver that writes JSON logs to a TCP target.
diff --git a/synapse/logging/opentracing.py b/synapse/logging/opentracing.py
index 7246253018..308a27213b 100644
--- a/synapse/logging/opentracing.py
+++ b/synapse/logging/opentracing.py
@@ -223,8 +223,8 @@ try:
from jaeger_client import Config as JaegerConfig
from synapse.logging.scopecontextmanager import LogContextScopeManager
except ImportError:
- JaegerConfig = None
- LogContextScopeManager = None
+ JaegerConfig = None # type: ignore
+ LogContextScopeManager = None # type: ignore
logger = logging.getLogger(__name__)
diff --git a/synapse/metrics/__init__.py b/synapse/metrics/__init__.py
index 488280b4a6..bec3b13397 100644
--- a/synapse/metrics/__init__.py
+++ b/synapse/metrics/__init__.py
@@ -20,6 +20,7 @@ import os
import platform
import threading
import time
+from typing import Dict, Union
import six
@@ -29,20 +30,20 @@ from prometheus_client.core import REGISTRY, GaugeMetricFamily, HistogramMetricF
from twisted.internet import reactor
+import synapse
from synapse.metrics._exposition import (
MetricsResource,
generate_latest,
start_http_server,
)
+from synapse.util.versionstring import get_version_string
logger = logging.getLogger(__name__)
METRICS_PREFIX = "/_synapse/metrics"
running_on_pypy = platform.python_implementation() == "PyPy"
-all_metrics = []
-all_collectors = []
-all_gauges = {}
+all_gauges = {} # type: Dict[str, Union[LaterGauge, InFlightGauge, BucketCollector]]
HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat")
@@ -385,6 +386,16 @@ event_processing_last_ts = Gauge("synapse_event_processing_last_ts", "", ["name"
# finished being processed.
event_processing_lag = Gauge("synapse_event_processing_lag", "", ["name"])
+# Build info of the running server.
+build_info = Gauge(
+ "synapse_build_info", "Build information", ["pythonversion", "version", "osversion"]
+)
+build_info.labels(
+ " ".join([platform.python_implementation(), platform.python_version()]),
+ get_version_string(synapse),
+ " ".join([platform.system(), platform.release()]),
+).set(1)
+
last_ticked = time.time()
diff --git a/synapse/metrics/_exposition.py b/synapse/metrics/_exposition.py
index 1933ecd3e3..74d9c3ecd3 100644
--- a/synapse/metrics/_exposition.py
+++ b/synapse/metrics/_exposition.py
@@ -36,7 +36,9 @@ from twisted.web.resource import Resource
try:
from prometheus_client.samples import Sample
except ImportError:
- Sample = namedtuple("Sample", ["name", "labels", "value", "timestamp", "exemplar"])
+ Sample = namedtuple(
+ "Sample", ["name", "labels", "value", "timestamp", "exemplar"]
+ ) # type: ignore
CONTENT_TYPE_LATEST = str("text/plain; version=0.0.4; charset=utf-8")
diff --git a/synapse/push/httppusher.py b/synapse/push/httppusher.py
index bf65cfa21a..5a3e3812e0 100644
--- a/synapse/push/httppusher.py
+++ b/synapse/push/httppusher.py
@@ -22,6 +22,7 @@ from prometheus_client import Counter
from twisted.internet import defer
from twisted.internet.error import AlreadyCalled, AlreadyCancelled
+from synapse.logging import opentracing
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.push import PusherConfigException
@@ -198,7 +199,17 @@ class HttpPusher(object):
)
for push_action in unprocessed:
- processed = yield self._process_one(push_action)
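+            # Trace each push in its own opentracing span, tagged with the user, event and app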
+ with opentracing.start_active_span(
+ "http-push",
+ tags={
+ "authenticated_entity": self.user_id,
+ "event_id": push_action["event_id"],
+ "app_id": self.app_id,
+ "app_display_name": self.app_display_name,
+ },
+ ):
+ processed = yield self._process_one(push_action)
+
if processed:
http_push_processed_counter.inc()
self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC
diff --git a/synapse/python_dependencies.py b/synapse/python_dependencies.py
index ec0ac547c1..0bd563edc7 100644
--- a/synapse/python_dependencies.py
+++ b/synapse/python_dependencies.py
@@ -15,6 +15,7 @@
# limitations under the License.
import logging
+from typing import Set
from pkg_resources import (
DistributionNotFound,
@@ -97,7 +98,7 @@ CONDITIONAL_REQUIREMENTS = {
"jwt": ["pyjwt>=1.6.4"],
}
-ALL_OPTIONAL_REQUIREMENTS = set()
+ALL_OPTIONAL_REQUIREMENTS = set() # type: Set[str]
for name, optional_deps in CONDITIONAL_REQUIREMENTS.items():
# Exclude systemd as it's a system-based requirement.
@@ -147,7 +148,13 @@ def check_requirements(for_feature=None):
)
except DistributionNotFound:
deps_needed.append(dependency)
- errors.append("Needed %s but it was not installed" % (dependency,))
+ if for_feature:
+ errors.append(
+ "Needed %s for the '%s' feature but it was not installed"
+ % (dependency, for_feature)
+ )
+ else:
+ errors.append("Needed %s but it was not installed" % (dependency,))
if not for_feature:
# Check the optional dependencies are up to date. We allow them to not be
@@ -168,8 +175,8 @@ def check_requirements(for_feature=None):
pass
if deps_needed:
- for e in errors:
- logging.error(e)
+ for err in errors:
+ logging.error(err)
raise DependencyException(deps_needed)
diff --git a/synapse/rest/client/v1/room.py b/synapse/rest/client/v1/room.py
index 3582259026..a6a7b3b57e 100644
--- a/synapse/rest/client/v1/room.py
+++ b/synapse/rest/client/v1/room.py
@@ -701,6 +701,7 @@ class RoomMembershipRestServlet(TransactionRestServlet):
content["id_server"],
requester,
txn_id,
+ content.get("id_access_token"),
)
return 200, {}
diff --git a/synapse/server.py b/synapse/server.py
index 9e28dba2b1..1fcc7375d3 100644
--- a/synapse/server.py
+++ b/synapse/server.py
@@ -221,6 +221,7 @@ class HomeServer(object):
self.clock = Clock(reactor)
self.distributor = Distributor()
self.ratelimiter = Ratelimiter()
+ self.admin_redaction_ratelimiter = Ratelimiter()
self.registration_ratelimiter = Ratelimiter()
self.datastore = None
@@ -279,6 +280,9 @@ class HomeServer(object):
def get_registration_ratelimiter(self):
return self.registration_ratelimiter
+ def get_admin_redaction_ratelimiter(self):
+ return self.admin_redaction_ratelimiter
+
def build_federation_client(self):
return FederationClient(self)
diff --git a/synapse/static/client/login/js/login.js b/synapse/static/client/login/js/login.js
index e02663f50e..276c271bbe 100644
--- a/synapse/static/client/login/js/login.js
+++ b/synapse/static/client/login/js/login.js
@@ -62,7 +62,7 @@ var show_login = function() {
$("#sso_flow").show();
}
- if (!matrixLogin.serverAcceptsPassword && !matrixLogin.serverAcceptsCas) {
+ if (!matrixLogin.serverAcceptsPassword && !matrixLogin.serverAcceptsCas && !matrixLogin.serverAcceptsSso) {
$("#no_login_types").show();
}
};
diff --git a/synapse/storage/events.py b/synapse/storage/events.py
index 1958afe1d7..ddf7ab6479 100644
--- a/synapse/storage/events.py
+++ b/synapse/storage/events.py
@@ -23,7 +23,7 @@ from functools import wraps
from six import iteritems, text_type
from six.moves import range
-from canonicaljson import json
+from canonicaljson import encode_canonical_json, json
from prometheus_client import Counter, Histogram
from twisted.internet import defer
@@ -33,6 +33,7 @@ from synapse.api.constants import EventTypes
from synapse.api.errors import SynapseError
from synapse.events import EventBase # noqa: F401
from synapse.events.snapshot import EventContext # noqa: F401
+from synapse.events.utils import prune_event_dict
from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable
from synapse.logging.utils import log_function
from synapse.metrics import BucketCollector
@@ -262,6 +263,14 @@ class EventsStore(
hs.get_clock().looping_call(read_forward_extremities, 60 * 60 * 1000)
+ def _censor_redactions():
+ return run_as_background_process(
+ "_censor_redactions", self._censor_redactions
+ )
+
+ if self.hs.config.redaction_retention_period is not None:
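+            # Censor old redactions in the background every 5 minutes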
+ hs.get_clock().looping_call(_censor_redactions, 5 * 60 * 1000)
+
@defer.inlineCallbacks
def _read_forward_extremities(self):
def fetch(txn):
@@ -1549,6 +1558,98 @@ class EventsStore(
)
@defer.inlineCallbacks
+ def _censor_redactions(self):
+ """Censors all redactions older than the configured period that haven't
+ been censored yet.
+
+ By "censor" we mean updating the event_json table with the redacted event.
+
+ Returns:
+ Deferred
+ """
+
+ if self.hs.config.redaction_retention_period is None:
+ return
+
+ max_pos = yield self.find_first_stream_ordering_after_ts(
+ self._clock.time_msec() - self.hs.config.redaction_retention_period
+ )
+
+ # We fetch all redactions that:
+ # 1. point to an event we have,
+ # 2. have a stream ordering from before the cut-off, and
+ # 3. we haven't yet censored.
+ #
+ # This is limited to 100 events to ensure that we don't try and do too
+ # much at once. We'll get called again so this should eventually catch
+ # up.
+ #
+ # We use the range [-max_pos, max_pos] to handle backfilled events,
+ # which are given negative stream ordering.
+ sql = """
+ SELECT redact_event.event_id, redacts FROM redactions
+ INNER JOIN events AS redact_event USING (event_id)
+ INNER JOIN events AS original_event ON (
+ redact_event.room_id = original_event.room_id
+ AND redacts = original_event.event_id
+ )
+ WHERE NOT have_censored
+ AND ? <= redact_event.stream_ordering AND redact_event.stream_ordering <= ?
+ ORDER BY redact_event.stream_ordering ASC
+ LIMIT ?
+ """
+
+ rows = yield self._execute(
+ "_censor_redactions_fetch", None, sql, -max_pos, max_pos, 100
+ )
+
+ updates = []
+
+ for redaction_id, event_id in rows:
+ redaction_event = yield self.get_event(redaction_id, allow_none=True)
+ original_event = yield self.get_event(
+ event_id, allow_rejected=True, allow_none=True
+ )
+
+ # The SQL above ensures that we have both the redaction and
+ # original event, so if the `get_event` calls return None it
+ # means that the redaction wasn't allowed. Either way we know that
+ # the result won't change so we mark the fact that we've checked.
+ if (
+ redaction_event
+ and original_event
+ and original_event.internal_metadata.is_redacted()
+ ):
+ # Redaction was allowed
+ pruned_json = encode_canonical_json(
+ prune_event_dict(original_event.get_dict())
+ )
+ else:
+ # Redaction wasn't allowed
+ pruned_json = None
+
+ updates.append((redaction_id, event_id, pruned_json))
+
+ def _update_censor_txn(txn):
+ for redaction_id, event_id, pruned_json in updates:
+ if pruned_json:
+ self._simple_update_one_txn(
+ txn,
+ table="event_json",
+ keyvalues={"event_id": event_id},
+ updatevalues={"json": pruned_json},
+ )
+
+ self._simple_update_one_txn(
+ txn,
+ table="redactions",
+ keyvalues={"event_id": redaction_id},
+ updatevalues={"have_censored": True},
+ )
+
+ yield self.runInteraction("_update_censor_txn", _update_censor_txn)
+
+ @defer.inlineCallbacks
def count_daily_messages(self):
"""
Returns an estimate of the number of messages sent in the last day.
diff --git a/synapse/storage/registration.py b/synapse/storage/registration.py
index 5138792a5f..109052fa41 100644
--- a/synapse/storage/registration.py
+++ b/synapse/storage/registration.py
@@ -323,6 +323,19 @@ class RegistrationWorkerStore(SQLBaseStore):
return None
@cachedInlineCallbacks()
+ def is_real_user(self, user_id):
+ """Determines if the user is a real user, ie does not have a 'user_type'.
+
+ Args:
+ user_id (str): user id to test
+
+ Returns:
+ Deferred[bool]: True if the user has no 'user_type' set
+ """
+ res = yield self.runInteraction("is_real_user", self.is_real_user_txn, user_id)
+ return res
+
+ @cachedInlineCallbacks()
def is_support_user(self, user_id):
"""Determines if the user is of type UserTypes.SUPPORT
@@ -337,6 +350,16 @@ class RegistrationWorkerStore(SQLBaseStore):
)
return res
+ def is_real_user_txn(self, txn, user_id):
+ res = self._simple_select_one_onecol_txn(
+ txn=txn,
+ table="users",
+ keyvalues={"name": user_id},
+ retcol="user_type",
+ allow_none=True,
+ )
+ return res is None
+
def is_support_user_txn(self, txn, user_id):
res = self._simple_select_one_onecol_txn(
txn=txn,
@@ -422,6 +445,20 @@ class RegistrationWorkerStore(SQLBaseStore):
return ret
@defer.inlineCallbacks
+ def count_real_users(self):
+ """Counts all users without a special user_type registered on the homeserver."""
+
+ def _count_users(txn):
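+            # A 'real' user is one whose user_type column is NULL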
+ txn.execute("SELECT COUNT(*) AS users FROM users where user_type is null")
+ rows = self.cursor_to_dict(txn)
+ if rows:
+ return rows[0]["users"]
+ return 0
+
+ ret = yield self.runInteraction("count_real_users", _count_users)
+ return ret
+
+ @defer.inlineCallbacks
def find_next_generated_user_id_localpart(self):
"""
Gets the localpart of the next generated user ID.
diff --git a/synapse/storage/roommember.py b/synapse/storage/roommember.py
index f8b682ebd9..4df8ebdacd 100644
--- a/synapse/storage/roommember.py
+++ b/synapse/storage/roommember.py
@@ -24,8 +24,10 @@ from canonicaljson import json
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
+from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage._base import LoggingTransaction
+from synapse.storage.engines import Sqlite3Engine
from synapse.storage.events_worker import EventsWorkerStore
from synapse.types import get_domain_from_id
from synapse.util.async_helpers import Linearizer
@@ -74,6 +76,63 @@ class RoomMemberWorkerStore(EventsWorkerStore):
self._check_safe_current_state_events_membership_updated_txn(txn)
txn.close()
+ if self.hs.config.metrics_flags.known_servers:
+ self._known_servers_count = 1
+ self.hs.get_clock().looping_call(
+ run_as_background_process,
+ 60 * 1000,
+ "_count_known_servers",
+ self._count_known_servers,
+ )
+ self.hs.get_clock().call_later(
+ 1000,
+ run_as_background_process,
+ "_count_known_servers",
+ self._count_known_servers,
+ )
+ LaterGauge(
+ "synapse_federation_known_servers",
+ "",
+ [],
+ lambda: self._known_servers_count,
+ )
+
+ @defer.inlineCallbacks
+ def _count_known_servers(self):
+ """
+ Count the servers that this server knows about.
+
+ The statistic is stored on the class for the
+ `synapse_federation_known_servers` LaterGauge to collect.
+ """
+
+ def _transact(txn):
+ if isinstance(self.database_engine, Sqlite3Engine):
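+                # SQLite has no split_part(), so use instr()/substr() to pull out the server name after the ':' in each user_id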
+ query = """
+ SELECT COUNT(DISTINCT substr(out.user_id, pos+1))
+ FROM (
+ SELECT rm.user_id as user_id, instr(rm.user_id, ':')
+ AS pos FROM room_memberships as rm
+ INNER JOIN current_state_events as c ON rm.event_id = c.event_id
+ WHERE c.type = 'm.room.member'
+ ) as out
+ """
+ else:
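+                # Postgres can extract the server name directly with split_part()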
+ query = """
+ SELECT COUNT(DISTINCT split_part(state_key, ':', 2))
+ FROM current_state_events
+ WHERE type = 'm.room.member' AND membership = 'join';
+ """
+ txn.execute(query)
+ return list(txn)[0][0]
+
+ count = yield self.runInteraction("get_known_servers", _transact)
+
+ # We always know about ourselves, even if we have nothing in
+ # room_memberships (for example, the server is new).
+ self._known_servers_count = max([count, 1])
+ return self._known_servers_count
+
def _check_safe_current_state_events_membership_updated_txn(self, txn):
"""Checks if it is safe to assume the new current_state_events
membership column is up to date
diff --git a/synapse/storage/schema/delta/56/destinations_failure_ts.sql b/synapse/storage/schema/delta/56/destinations_failure_ts.sql
new file mode 100644
index 0000000000..f00889290b
--- /dev/null
+++ b/synapse/storage/schema/delta/56/destinations_failure_ts.sql
@@ -0,0 +1,25 @@
+/* Copyright 2019 The Matrix.org Foundation C.I.C
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/*
+ * Record the timestamp when a given server started failing
+ */
+ALTER TABLE destinations ADD failure_ts BIGINT;
+
+/* as a rough approximation, we assume that the server started failing at
+ * retry_interval before the last retry
+ */
+UPDATE destinations SET failure_ts = retry_last_ts - retry_interval
+ WHERE retry_last_ts > 0;
diff --git a/synapse/storage/schema/delta/56/redaction_censor.sql b/synapse/storage/schema/delta/56/redaction_censor.sql
new file mode 100644
index 0000000000..fe51b02309
--- /dev/null
+++ b/synapse/storage/schema/delta/56/redaction_censor.sql
@@ -0,0 +1,17 @@
+/* Copyright 2019 The Matrix.org Foundation C.I.C.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+ALTER TABLE redactions ADD COLUMN have_censored BOOL NOT NULL DEFAULT false;
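+/* Partial index covering only the redactions that still need censoring */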
+CREATE INDEX redactions_have_censored ON redactions(event_id) WHERE not have_censored;
diff --git a/synapse/storage/stats.py b/synapse/storage/stats.py
index 6560173c08..09190d684e 100644
--- a/synapse/storage/stats.py
+++ b/synapse/storage/stats.py
@@ -823,7 +823,9 @@ class StatsStore(StateDeltasStore):
elif event.type == EventTypes.CanonicalAlias:
room_state["canonical_alias"] = event.content.get("alias")
elif event.type == EventTypes.Create:
- room_state["is_federatable"] = event.content.get("m.federate", True)
+ room_state["is_federatable"] = (
+ event.content.get("m.federate", True) is True
+ )
yield self.update_room_state(room_id, room_state)
diff --git a/synapse/storage/transactions.py b/synapse/storage/transactions.py
index b3c3bf55bc..289c117396 100644
--- a/synapse/storage/transactions.py
+++ b/synapse/storage/transactions.py
@@ -165,7 +165,7 @@ class TransactionStore(SQLBaseStore):
txn,
table="destinations",
keyvalues={"destination": destination},
- retcols=("destination", "retry_last_ts", "retry_interval"),
+ retcols=("destination", "failure_ts", "retry_last_ts", "retry_interval"),
allow_none=True,
)
@@ -174,12 +174,15 @@ class TransactionStore(SQLBaseStore):
else:
return None
- def set_destination_retry_timings(self, destination, retry_last_ts, retry_interval):
+ def set_destination_retry_timings(
+ self, destination, failure_ts, retry_last_ts, retry_interval
+ ):
"""Sets the current retry timings for a given destination.
Both timings should be zero if retrying is no longer occurring.
Args:
destination (str)
+ failure_ts (int|None) - when the server started failing (ms since epoch)
retry_last_ts (int) - time of last retry attempt in unix epoch ms
retry_interval (int) - how long until next retry in ms
"""
@@ -189,12 +192,13 @@ class TransactionStore(SQLBaseStore):
"set_destination_retry_timings",
self._set_destination_retry_timings,
destination,
+ failure_ts,
retry_last_ts,
retry_interval,
)
def _set_destination_retry_timings(
- self, txn, destination, retry_last_ts, retry_interval
+ self, txn, destination, failure_ts, retry_last_ts, retry_interval
):
if self.database_engine.can_native_upsert:
@@ -202,9 +206,12 @@ class TransactionStore(SQLBaseStore):
# resetting it) or greater than the existing retry interval.
sql = """
- INSERT INTO destinations (destination, retry_last_ts, retry_interval)
- VALUES (?, ?, ?)
+ INSERT INTO destinations (
+ destination, failure_ts, retry_last_ts, retry_interval
+ )
+ VALUES (?, ?, ?, ?)
ON CONFLICT (destination) DO UPDATE SET
+ failure_ts = EXCLUDED.failure_ts,
retry_last_ts = EXCLUDED.retry_last_ts,
retry_interval = EXCLUDED.retry_interval
WHERE
@@ -212,7 +219,7 @@ class TransactionStore(SQLBaseStore):
OR destinations.retry_interval < EXCLUDED.retry_interval
"""
- txn.execute(sql, (destination, retry_last_ts, retry_interval))
+ txn.execute(sql, (destination, failure_ts, retry_last_ts, retry_interval))
return
@@ -225,7 +232,7 @@ class TransactionStore(SQLBaseStore):
txn,
table="destinations",
keyvalues={"destination": destination},
- retcols=("retry_last_ts", "retry_interval"),
+ retcols=("failure_ts", "retry_last_ts", "retry_interval"),
allow_none=True,
)
@@ -235,6 +242,7 @@ class TransactionStore(SQLBaseStore):
table="destinations",
values={
"destination": destination,
+ "failure_ts": failure_ts,
"retry_last_ts": retry_last_ts,
"retry_interval": retry_interval,
},
@@ -245,31 +253,12 @@ class TransactionStore(SQLBaseStore):
"destinations",
keyvalues={"destination": destination},
updatevalues={
+ "failure_ts": failure_ts,
"retry_last_ts": retry_last_ts,
"retry_interval": retry_interval,
},
)
- def get_destinations_needing_retry(self):
- """Get all destinations which are due a retry for sending a transaction.
-
- Returns:
- list: A list of dicts
- """
-
- return self.runInteraction(
- "get_destinations_needing_retry", self._get_destinations_needing_retry
- )
-
- def _get_destinations_needing_retry(self, txn):
- query = (
- "SELECT * FROM destinations"
- " WHERE retry_last_ts > 0 and retry_next_ts < ?"
- )
-
- txn.execute(query, (self._clock.time_msec(),))
- return self.cursor_to_dict(txn)
-
def _start_cleanup_transactions(self):
return run_as_background_process(
"cleanup_transactions", self._cleanup_transactions
diff --git a/synapse/util/hash.py b/synapse/util/hash.py
new file mode 100644
index 0000000000..359168704e
--- /dev/null
+++ b/synapse/util/hash.py
@@ -0,0 +1,33 @@
+# -*- coding: utf-8 -*-
+
+# Copyright 2019 The Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import hashlib
+
+import unpaddedbase64
+
+
+def sha256_and_url_safe_base64(input_text):
+ """SHA256 hash an input string, encode the digest as url-safe base64, and
+ return the result.
+
+ :param input_text: string to hash
+ :type input_text: str
+
+ :returns a sha256 hashed and url-safe base64 encoded digest
+ :rtype: str
+ """
+ digest = hashlib.sha256(input_text.encode()).digest()
+ return unpaddedbase64.encode_base64(digest, urlsafe=True)
diff --git a/synapse/util/retryutils.py b/synapse/util/retryutils.py
index 0862b5ca5a..a5f2fbef5c 100644
--- a/synapse/util/retryutils.py
+++ b/synapse/util/retryutils.py
@@ -22,6 +22,15 @@ from synapse.api.errors import CodeMessageException
logger = logging.getLogger(__name__)
+# the initial backoff, after the first transaction fails
+MIN_RETRY_INTERVAL = 10 * 60 * 1000
+
+# how much we multiply the backoff by after each subsequent fail
+RETRY_MULTIPLIER = 5
+
+# a cap on the backoff. (Essentially none)
+MAX_RETRY_INTERVAL = 2 ** 63
+
class NotRetryingDestination(Exception):
def __init__(self, retry_last_ts, retry_interval, destination):
@@ -71,11 +80,13 @@ def get_retry_limiter(destination, clock, store, ignore_backoff=False, **kwargs)
# We aren't ready to retry that destination.
raise
"""
+ failure_ts = None
retry_last_ts, retry_interval = (0, 0)
retry_timings = yield store.get_destination_retry_timings(destination)
if retry_timings:
+ failure_ts = retry_timings["failure_ts"]
retry_last_ts, retry_interval = (
retry_timings["retry_last_ts"],
retry_timings["retry_interval"],
@@ -99,6 +110,7 @@ def get_retry_limiter(destination, clock, store, ignore_backoff=False, **kwargs)
destination,
clock,
store,
+ failure_ts,
retry_interval,
backoff_on_failure=backoff_on_failure,
**kwargs
@@ -111,10 +123,8 @@ class RetryDestinationLimiter(object):
destination,
clock,
store,
+ failure_ts,
retry_interval,
- min_retry_interval=10 * 60 * 1000,
- max_retry_interval=24 * 60 * 60 * 1000,
- multiplier_retry_interval=5,
backoff_on_404=False,
backoff_on_failure=True,
):
@@ -127,15 +137,11 @@ class RetryDestinationLimiter(object):
destination (str)
clock (Clock)
store (DataStore)
+ failure_ts (int|None): when this destination started failing (in ms since
+ the epoch), or None if the last request was successful
retry_interval (int): The next retry interval taken from the
database in milliseconds, or zero if the last request was
successful.
- min_retry_interval (int): The minimum retry interval to use after
- a failed request, in milliseconds.
- max_retry_interval (int): The maximum retry interval to use after
- a failed request, in milliseconds.
- multiplier_retry_interval (int): The multiplier to use to increase
- the retry interval after a failed request.
backoff_on_404 (bool): Back off if we get a 404
backoff_on_failure (bool): set to False if we should not increase the
@@ -145,10 +151,8 @@ class RetryDestinationLimiter(object):
self.store = store
self.destination = destination
+ self.failure_ts = failure_ts
self.retry_interval = retry_interval
- self.min_retry_interval = min_retry_interval
- self.max_retry_interval = max_retry_interval
- self.multiplier_retry_interval = multiplier_retry_interval
self.backoff_on_404 = backoff_on_404
self.backoff_on_failure = backoff_on_failure
@@ -189,6 +193,7 @@ class RetryDestinationLimiter(object):
logger.debug(
"Connection to %s was successful; clearing backoff", self.destination
)
+ self.failure_ts = None
retry_last_ts = 0
self.retry_interval = 0
elif not self.backoff_on_failure:
@@ -196,13 +201,14 @@ class RetryDestinationLimiter(object):
else:
# We couldn't connect.
if self.retry_interval:
- self.retry_interval *= self.multiplier_retry_interval
- self.retry_interval *= int(random.uniform(0.8, 1.4))
+ self.retry_interval = int(
+ self.retry_interval * RETRY_MULTIPLIER * random.uniform(0.8, 1.4)
+ )
- if self.retry_interval >= self.max_retry_interval:
- self.retry_interval = self.max_retry_interval
+ if self.retry_interval >= MAX_RETRY_INTERVAL:
+ self.retry_interval = MAX_RETRY_INTERVAL
else:
- self.retry_interval = self.min_retry_interval
+ self.retry_interval = MIN_RETRY_INTERVAL
logger.info(
"Connection to %s was unsuccessful (%s(%s)); backoff now %i",
@@ -213,11 +219,17 @@ class RetryDestinationLimiter(object):
)
retry_last_ts = int(self.clock.time_msec())
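+            # Record when this destination first started failing, if we haven't already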
+ if self.failure_ts is None:
+ self.failure_ts = retry_last_ts
+
@defer.inlineCallbacks
def store_retry_timings():
try:
yield self.store.set_destination_retry_timings(
- self.destination, retry_last_ts, self.retry_interval
+ self.destination,
+ self.failure_ts,
+ retry_last_ts,
+ self.retry_interval,
)
except Exception:
logger.exception("Failed to store destination_retry_timings")
diff --git a/tests/api/test_auth.py b/tests/api/test_auth.py
index c0cb8ef296..6121efcfa9 100644
--- a/tests/api/test_auth.py
+++ b/tests/api/test_auth.py
@@ -21,6 +21,7 @@ from twisted.internet import defer
import synapse.handlers.auth
from synapse.api.auth import Auth
+from synapse.api.constants import UserTypes
from synapse.api.errors import (
AuthError,
Codes,
@@ -336,6 +337,23 @@ class AuthTestCase(unittest.TestCase):
yield self.auth.check_auth_blocking()
@defer.inlineCallbacks
+ def test_blocking_mau__depending_on_user_type(self):
+ self.hs.config.max_mau_value = 50
+ self.hs.config.limit_usage_by_mau = True
+
+ self.store.get_monthly_active_count = Mock(return_value=defer.succeed(100))
+ # Support users allowed
+ yield self.auth.check_auth_blocking(user_type=UserTypes.SUPPORT)
+ self.store.get_monthly_active_count = Mock(return_value=defer.succeed(100))
+ # Bots not allowed
+ with self.assertRaises(ResourceLimitError):
+ yield self.auth.check_auth_blocking(user_type=UserTypes.BOT)
+ self.store.get_monthly_active_count = Mock(return_value=defer.succeed(100))
+ # Real users not allowed
+ with self.assertRaises(ResourceLimitError):
+ yield self.auth.check_auth_blocking()
+
+ @defer.inlineCallbacks
def test_reserved_threepid(self):
self.hs.config.limit_usage_by_mau = True
self.hs.config.max_mau_value = 1
diff --git a/tests/config/test_generate.py b/tests/config/test_generate.py
index 5017cbce85..2684e662de 100644
--- a/tests/config/test_generate.py
+++ b/tests/config/test_generate.py
@@ -17,6 +17,8 @@ import os.path
import re
import shutil
import tempfile
+from contextlib import redirect_stdout
+from io import StringIO
from synapse.config.homeserver import HomeServerConfig
@@ -32,17 +34,18 @@ class ConfigGenerationTestCase(unittest.TestCase):
shutil.rmtree(self.dir)
def test_generate_config_generates_files(self):
- HomeServerConfig.load_or_generate_config(
- "",
- [
- "--generate-config",
- "-c",
- self.file,
- "--report-stats=yes",
- "-H",
- "lemurs.win",
- ],
- )
+ with redirect_stdout(StringIO()):
+ HomeServerConfig.load_or_generate_config(
+ "",
+ [
+ "--generate-config",
+ "-c",
+ self.file,
+ "--report-stats=yes",
+ "-H",
+ "lemurs.win",
+ ],
+ )
self.assertSetEqual(
set(["homeserver.yaml", "lemurs.win.log.config", "lemurs.win.signing.key"]),
diff --git a/tests/config/test_load.py b/tests/config/test_load.py
index 6bfc1970ad..b3e557bd6a 100644
--- a/tests/config/test_load.py
+++ b/tests/config/test_load.py
@@ -15,6 +15,8 @@
import os.path
import shutil
import tempfile
+from contextlib import redirect_stdout
+from io import StringIO
import yaml
@@ -26,7 +28,6 @@ from tests import unittest
class ConfigLoadingTestCase(unittest.TestCase):
def setUp(self):
self.dir = tempfile.mkdtemp()
- print(self.dir)
self.file = os.path.join(self.dir, "homeserver.yaml")
def tearDown(self):
@@ -94,18 +95,27 @@ class ConfigLoadingTestCase(unittest.TestCase):
)
self.assertTrue(config.enable_registration)
+ def test_stats_enabled(self):
+ self.generate_config_and_remove_lines_containing("enable_metrics")
+ self.add_lines_to_config(["enable_metrics: true"])
+
+ # The default Metrics Flags are off by default.
+ config = HomeServerConfig.load_config("", ["-c", self.file])
+ self.assertFalse(config.metrics_flags.known_servers)
+
def generate_config(self):
- HomeServerConfig.load_or_generate_config(
- "",
- [
- "--generate-config",
- "-c",
- self.file,
- "--report-stats=yes",
- "-H",
- "lemurs.win",
- ],
- )
+ with redirect_stdout(StringIO()):
+ HomeServerConfig.load_or_generate_config(
+ "",
+ [
+ "--generate-config",
+ "-c",
+ self.file,
+ "--report-stats=yes",
+ "-H",
+ "lemurs.win",
+ ],
+ )
def generate_config_and_remove_lines_containing(self, needle):
self.generate_config()
diff --git a/tests/config/test_tls.py b/tests/config/test_tls.py
index 8e0c4b9533..b02780772a 100644
--- a/tests/config/test_tls.py
+++ b/tests/config/test_tls.py
@@ -16,6 +16,7 @@
import os
+import idna
import yaml
from OpenSSL import SSL
@@ -235,3 +236,42 @@ s4niecZKPBizL6aucT59CsunNmmb5Glq8rlAcU+1ZTZZzGYqVYhF6axB9Qg=
)
self.assertTrue(conf.acme_enabled)
+
+ def test_whitelist_idna_failure(self):
+ """
+ The federation certificate whitelist will not allow IDNA domain names.
+ """
+ config = {
+ "federation_certificate_verification_whitelist": [
+ "example.com",
+ "*.ドメイン.テスト",
+ ]
+ }
+ t = TestConfig()
+ e = self.assertRaises(
+ ConfigError, t.read_config, config, config_dir_path="", data_dir_path=""
+ )
+ self.assertIn("IDNA domain names", str(e))
+
+ def test_whitelist_idna_result(self):
+ """
+ The federation certificate whitelist will match on IDNA encoded names.
+ """
+ config = {
+ "federation_certificate_verification_whitelist": [
+ "example.com",
+ "*.xn--eckwd4c7c.xn--zckzah",
+ ]
+ }
+ t = TestConfig()
+ t.read_config(config, config_dir_path="", data_dir_path="")
+
+ cf = ClientTLSOptionsFactory(t)
+
+ # Not in the whitelist
+ opts = cf.get_options(b"notexample.com")
+ self.assertTrue(opts._verifier._verify_certs)
+
+ # Caught by the wildcard
+ opts = cf.get_options(idna.encode("テスト.ドメイン.テスト"))
+ self.assertFalse(opts._verifier._verify_certs)
diff --git a/tests/handlers/test_register.py b/tests/handlers/test_register.py
index e10296a5e4..1e9ba3a201 100644
--- a/tests/handlers/test_register.py
+++ b/tests/handlers/test_register.py
@@ -171,11 +171,11 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
rooms = self.get_success(self.store.get_rooms_for_user(user_id))
self.assertEqual(len(rooms), 0)
- def test_auto_create_auto_join_rooms_when_support_user_exists(self):
+ def test_auto_create_auto_join_rooms_when_user_is_not_a_real_user(self):
room_alias_str = "#room:test"
self.hs.config.auto_join_rooms = [room_alias_str]
- self.store.is_support_user = Mock(return_value=True)
+ self.store.is_real_user = Mock(return_value=False)
user_id = self.get_success(self.handler.register_user(localpart="support"))
rooms = self.get_success(self.store.get_rooms_for_user(user_id))
self.assertEqual(len(rooms), 0)
@@ -183,6 +183,31 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
room_alias = RoomAlias.from_string(room_alias_str)
self.get_failure(directory_handler.get_association(room_alias), SynapseError)
+ def test_auto_create_auto_join_rooms_when_user_is_the_first_real_user(self):
+ room_alias_str = "#room:test"
+ self.hs.config.auto_join_rooms = [room_alias_str]
+
+ self.store.count_real_users = Mock(return_value=1)
+ self.store.is_real_user = Mock(return_value=True)
+ user_id = self.get_success(self.handler.register_user(localpart="real"))
+ rooms = self.get_success(self.store.get_rooms_for_user(user_id))
+ directory_handler = self.hs.get_handlers().directory_handler
+ room_alias = RoomAlias.from_string(room_alias_str)
+ room_id = self.get_success(directory_handler.get_association(room_alias))
+
+ self.assertTrue(room_id["room_id"] in rooms)
+ self.assertEqual(len(rooms), 1)
+
+ def test_auto_create_auto_join_rooms_when_user_is_not_the_first_real_user(self):
+ room_alias_str = "#room:test"
+ self.hs.config.auto_join_rooms = [room_alias_str]
+
+ self.store.count_real_users = Mock(return_value=2)
+ self.store.is_real_user = Mock(return_value=True)
+ user_id = self.get_success(self.handler.register_user(localpart="real"))
+ rooms = self.get_success(self.store.get_rooms_for_user(user_id))
+ self.assertEqual(len(rooms), 0)
+
def test_auto_create_auto_join_where_no_consent(self):
"""Test to ensure that the first user is not auto-joined to a room if
they have not given general consent.
diff --git a/tests/handlers/test_typing.py b/tests/handlers/test_typing.py
index 5d5e324df2..1f2ef5d01f 100644
--- a/tests/handlers/test_typing.py
+++ b/tests/handlers/test_typing.py
@@ -99,7 +99,12 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
self.event_source = hs.get_event_sources().sources["typing"]
self.datastore = hs.get_datastore()
- retry_timings_res = {"destination": "", "retry_last_ts": 0, "retry_interval": 0}
+ retry_timings_res = {
+ "destination": "",
+ "retry_last_ts": 0,
+ "retry_interval": 0,
+ "failure_ts": None,
+ }
self.datastore.get_destination_retry_timings.return_value = defer.succeed(
retry_timings_res
)
diff --git a/tests/logging/test_structured.py b/tests/logging/test_structured.py
index a786de0233..451d05c0f0 100644
--- a/tests/logging/test_structured.py
+++ b/tests/logging/test_structured.py
@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import logging
import os
import os.path
import shutil
@@ -33,7 +34,20 @@ class FakeBeginner(object):
self.observers = observers
-class StructuredLoggingTestCase(HomeserverTestCase):
+class StructuredLoggingTestBase(object):
+ """
+ Test base that registers a cleanup handler to reset the stdlib log handler
+ to 'unset'.
+ """
+
+ def prepare(self, reactor, clock, hs):
+ def _cleanup():
+ logging.getLogger("synapse").setLevel(logging.NOTSET)
+
+ self.addCleanup(_cleanup)
+
+
+class StructuredLoggingTestCase(StructuredLoggingTestBase, HomeserverTestCase):
"""
Tests for Synapse's structured logging support.
"""
@@ -139,7 +153,9 @@ class StructuredLoggingTestCase(HomeserverTestCase):
self.assertEqual(logs[0]["request"], "somereq")
-class StructuredLoggingConfigurationFileTestCase(HomeserverTestCase):
+class StructuredLoggingConfigurationFileTestCase(
+ StructuredLoggingTestBase, HomeserverTestCase
+):
def make_homeserver(self, reactor, clock):
tempdir = self.mktemp()
@@ -179,10 +195,11 @@ class StructuredLoggingConfigurationFileTestCase(HomeserverTestCase):
"""
When a structured logging config is given, Synapse will use it.
"""
- setup_logging(self.hs, self.hs.config)
+ beginner = FakeBeginner()
+ publisher = setup_logging(self.hs, self.hs.config, logBeginner=beginner)
# Make a logger and send an event
- logger = Logger(namespace="tests.logging.test_structured")
+ logger = Logger(namespace="tests.logging.test_structured", observer=publisher)
with LoggingContext("testcontext", request="somereq"):
logger.info("Hello there, {name}!", name="steve")
diff --git a/tests/logging/test_terse_json.py b/tests/logging/test_terse_json.py
index 514282591d..4cf81f7128 100644
--- a/tests/logging/test_terse_json.py
+++ b/tests/logging/test_terse_json.py
@@ -23,10 +23,10 @@ from synapse.logging._structured import setup_structured_logging
from tests.server import connect_client
from tests.unittest import HomeserverTestCase
-from .test_structured import FakeBeginner
+from .test_structured import FakeBeginner, StructuredLoggingTestBase
-class TerseJSONTCPTestCase(HomeserverTestCase):
+class TerseJSONTCPTestCase(StructuredLoggingTestBase, HomeserverTestCase):
def test_log_output(self):
"""
The Terse JSON outputter delivers simplified structured logs over TCP.
diff --git a/tests/rest/client/test_redactions.py b/tests/rest/client/test_redactions.py
index fe66e397c4..d2bcf256fa 100644
--- a/tests/rest/client/test_redactions.py
+++ b/tests/rest/client/test_redactions.py
@@ -30,6 +30,14 @@ class RedactionsTestCase(HomeserverTestCase):
sync.register_servlets,
]
+ def make_homeserver(self, reactor, clock):
+ config = self.default_config()
+
+ config["rc_message"] = {"per_second": 0.2, "burst_count": 10}
+ config["rc_admin_redaction"] = {"per_second": 1, "burst_count": 100}
+
+ return self.setup_test_homeserver(config=config)
+
def prepare(self, reactor, clock, hs):
# register a couple of users
self.mod_user_id = self.register_user("user1", "pass")
@@ -177,3 +185,20 @@ class RedactionsTestCase(HomeserverTestCase):
self._redact_event(
self.other_access_token, self.room_id, create_event_id, expect_code=403
)
+
+ def test_redact_event_as_moderator_ratelimit(self):
+ """Tests that the correct ratelimiting is applied to redactions
+ """
+
+ message_ids = []
+ # as a regular user, send messages to redact
+ for _ in range(20):
+ b = self.helper.send(room_id=self.room_id, tok=self.other_access_token)
+ message_ids.append(b["event_id"])
+ self.reactor.advance(10) # To get around ratelimits
+
+ # as the moderator, send a bunch of redactions
+ for msg_id in message_ids:
+ # These should all succeed, even though this would be denied by
+ # the standard message ratelimiter
+ self._redact_event(self.mod_access_token, self.room_id, msg_id)
diff --git a/tests/storage/test_redaction.py b/tests/storage/test_redaction.py
index d961b81d48..deecfad9fb 100644
--- a/tests/storage/test_redaction.py
+++ b/tests/storage/test_redaction.py
@@ -17,6 +17,8 @@
from mock import Mock
+from canonicaljson import json
+
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
@@ -29,8 +31,10 @@ from tests.utils import create_room
class RedactionTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
+ config = self.default_config()
+ config["redaction_retention_period"] = "30d"
return self.setup_test_homeserver(
- resource_for_federation=Mock(), http_client=None
+ resource_for_federation=Mock(), http_client=None, config=config
)
def prepare(self, reactor, clock, hs):
@@ -286,3 +290,74 @@ class RedactionTestCase(unittest.HomeserverTestCase):
self.assertEqual(
fetched.unsigned["redacted_because"].event_id, redaction_event_id2
)
+
+ def test_redact_censor(self):
+ """Test that a redacted event gets censored in the DB after a month
+ """
+
+ self.get_success(
+ self.inject_room_member(self.room1, self.u_alice, Membership.JOIN)
+ )
+
+ msg_event = self.get_success(self.inject_message(self.room1, self.u_alice, "t"))
+
+ # Check event has not been redacted:
+ event = self.get_success(self.store.get_event(msg_event.event_id))
+
+ self.assertObjectHasAttributes(
+ {
+ "type": EventTypes.Message,
+ "user_id": self.u_alice.to_string(),
+ "content": {"body": "t", "msgtype": "message"},
+ },
+ event,
+ )
+
+ self.assertFalse("redacted_because" in event.unsigned)
+
+ # Redact event
+ reason = "Because I said so"
+ self.get_success(
+ self.inject_redaction(self.room1, msg_event.event_id, self.u_alice, reason)
+ )
+
+ event = self.get_success(self.store.get_event(msg_event.event_id))
+
+ self.assertTrue("redacted_because" in event.unsigned)
+
+ self.assertObjectHasAttributes(
+ {
+ "type": EventTypes.Message,
+ "user_id": self.u_alice.to_string(),
+ "content": {},
+ },
+ event,
+ )
+
+ event_json = self.get_success(
+ self.store._simple_select_one_onecol(
+ table="event_json",
+ keyvalues={"event_id": msg_event.event_id},
+ retcol="json",
+ )
+ )
+
+ self.assert_dict(
+ {"content": {"body": "t", "msgtype": "message"}}, json.loads(event_json)
+ )
+
+ # Advance by 30 days, then advance again to ensure that the looping call
+ # for updating the stream position gets called and then the looping call
+ # for the censoring gets called.
+ self.reactor.advance(60 * 60 * 24 * 31)
+ self.reactor.advance(60 * 60 * 2)
+
+ event_json = self.get_success(
+ self.store._simple_select_one_onecol(
+ table="event_json",
+ keyvalues={"event_id": msg_event.event_id},
+ retcol="json",
+ )
+ )
+
+ self.assert_dict({"content": {}}, json.loads(event_json))
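The test above checks that once `redaction_retention_period` has elapsed, the row in `event_json` has its `content` stripped while the rest of the event survives. A rough sketch of that pruning step over an in-memory table (the helper below is hypothetical; Synapse performs this as a SQL background job, not with this code):

```python
import json


def censor_redacted_events(event_json_rows, redacted_ids, now_ms, retention_ms):
    """Strip the content of redacted events older than the retention period.

    event_json_rows: dict of event_id -> (received_ts_ms, json_str).
    Purely illustrative; Synapse does this with a SQL background update.
    """
    cutoff = now_ms - retention_ms
    for event_id, (received_ts, raw) in event_json_rows.items():
        if event_id in redacted_ids and received_ts < cutoff:
            pruned = json.loads(raw)
            pruned["content"] = {}
            event_json_rows[event_id] = (received_ts, json.dumps(pruned))


rows = {"$msg1": (0, json.dumps({"type": "m.room.message", "content": {"body": "t"}}))}
censor_redacted_events(
    rows, {"$msg1"}, now_ms=31 * 24 * 3600 * 1000, retention_ms=30 * 24 * 3600 * 1000
)
print(rows["$msg1"][1])  # content is now {}
```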
diff --git a/tests/storage/test_roommember.py b/tests/storage/test_roommember.py
index 64cb294c37..447a3c6ffb 100644
--- a/tests/storage/test_roommember.py
+++ b/tests/storage/test_roommember.py
@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
+# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,78 +14,129 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-
-from mock import Mock
-
-from twisted.internet import defer
+from unittest.mock import Mock
from synapse.api.constants import EventTypes, Membership
from synapse.api.room_versions import RoomVersions
-from synapse.types import Requester, RoomID, UserID
+from synapse.rest.admin import register_servlets_for_client_rest_resource
+from synapse.rest.client.v1 import login, room
+from synapse.types import Requester, UserID
from tests import unittest
-from tests.utils import create_room, setup_test_homeserver
-class RoomMemberStoreTestCase(unittest.TestCase):
- @defer.inlineCallbacks
- def setUp(self):
- hs = yield setup_test_homeserver(
- self.addCleanup, resource_for_federation=Mock(), http_client=None
+class RoomMemberStoreTestCase(unittest.HomeserverTestCase):
+
+ servlets = [
+ login.register_servlets,
+ register_servlets_for_client_rest_resource,
+ room.register_servlets,
+ ]
+
+ def make_homeserver(self, reactor, clock):
+ hs = self.setup_test_homeserver(
+ resource_for_federation=Mock(), http_client=None
)
+ return hs
+
+ def prepare(self, reactor, clock, hs):
+
# We can't test the RoomMemberStore on its own without the other event
# storage logic
self.store = hs.get_datastore()
self.event_builder_factory = hs.get_event_builder_factory()
self.event_creation_handler = hs.get_event_creation_handler()
- self.u_alice = UserID.from_string("@alice:test")
- self.u_bob = UserID.from_string("@bob:test")
+ self.u_alice = self.register_user("alice", "pass")
+ self.t_alice = self.login("alice", "pass")
+ self.u_bob = self.register_user("bob", "pass")
# User elsewhere on another host
self.u_charlie = UserID.from_string("@charlie:elsewhere")
- self.room = RoomID.from_string("!abc123:test")
-
- yield create_room(hs, self.room.to_string(), self.u_alice.to_string())
-
- @defer.inlineCallbacks
def inject_room_member(self, room, user, membership, replaces_state=None):
builder = self.event_builder_factory.for_room_version(
RoomVersions.V1,
{
"type": EventTypes.Member,
- "sender": user.to_string(),
- "state_key": user.to_string(),
- "room_id": room.to_string(),
+ "sender": user,
+ "state_key": user,
+ "room_id": room,
"content": {"membership": membership},
},
)
- event, context = yield self.event_creation_handler.create_new_client_event(
- builder
+ event, context = self.get_success(
+ self.event_creation_handler.create_new_client_event(builder)
)
- yield self.store.persist_event(event, context)
+ self.get_success(self.store.persist_event(event, context))
return event
- @defer.inlineCallbacks
def test_one_member(self):
- yield self.inject_room_member(self.room, self.u_alice, Membership.JOIN)
-
- self.assertEquals(
- [self.room.to_string()],
- [
- m.room_id
- for m in (
- yield self.store.get_rooms_for_user_where_membership_is(
- self.u_alice.to_string(), [Membership.JOIN]
- )
- )
- ],
+
+ # Alice creates the room, and is automatically joined
+ self.room = self.helper.create_room_as(self.u_alice, tok=self.t_alice)
+
+ rooms_for_user = self.get_success(
+ self.store.get_rooms_for_user_where_membership_is(
+ self.u_alice, [Membership.JOIN]
+ )
)
+ self.assertEquals([self.room], [m.room_id for m in rooms_for_user])
+
+ def test_count_known_servers(self):
+ """
+ _count_known_servers will calculate how many servers are in a room.
+ """
+ self.room = self.helper.create_room_as(self.u_alice, tok=self.t_alice)
+ self.inject_room_member(self.room, self.u_bob, Membership.JOIN)
+ self.inject_room_member(self.room, self.u_charlie.to_string(), Membership.JOIN)
+
+ servers = self.get_success(self.store._count_known_servers())
+ self.assertEqual(servers, 2)
+
+ def test_count_known_servers_stat_counter_disabled(self):
+ """
+ If disabled, the metrics for how many servers are known will not be counted.
+ """
+ self.assertTrue("_known_servers_count" not in self.store.__dict__.keys())
+
+ self.room = self.helper.create_room_as(self.u_alice, tok=self.t_alice)
+ self.inject_room_member(self.room, self.u_bob, Membership.JOIN)
+ self.inject_room_member(self.room, self.u_charlie.to_string(), Membership.JOIN)
+
+ self.pump(20)
+
+ self.assertTrue("_known_servers_count" not in self.store.__dict__.keys())
+
+ @unittest.override_config(
+ {"enable_metrics": True, "metrics_flags": {"known_servers": True}}
+ )
+ def test_count_known_servers_stat_counter_enabled(self):
+ """
+ If enabled, the metrics for how many servers are known will be counted.
+ """
+ # Initialises to 1 -- itself
+ self.assertEqual(self.store._known_servers_count, 1)
+
+ self.pump(20)
+
+ # No rooms have been joined, so technically the SQL returns 0, but it
+ # will still say it knows about itself.
+ self.assertEqual(self.store._known_servers_count, 1)
+
+ self.room = self.helper.create_room_as(self.u_alice, tok=self.t_alice)
+ self.inject_room_member(self.room, self.u_bob, Membership.JOIN)
+ self.inject_room_member(self.room, self.u_charlie.to_string(), Membership.JOIN)
+
+ self.pump(20)
+
+ # It now knows about Charlie's server.
+ self.assertEqual(self.store._known_servers_count, 2)
+
class CurrentStateMembershipUpdateTestCase(unittest.HomeserverTestCase):
def prepare(self, reactor, clock, homeserver):
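`_count_known_servers` is implemented as a SQL query in Synapse, but what it computes is simply the number of distinct server names among joined members, which is why joining `@bob:test` and `@charlie:elsewhere` yields 2. A hedged Python equivalent of that calculation (the helper name and the max-with-1 floor are illustrative, mirroring the behaviour the tests assert):

```python
def count_known_servers(joined_user_ids):
    """Number of distinct homeservers among joined members.

    The server name is everything after the first colon of a Matrix user ID.
    """
    servers = {user_id.split(":", 1)[1] for user_id in joined_user_ids}
    # The store always reports at least 1: the local server itself.
    return max(len(servers), 1)


print(count_known_servers(["@alice:test", "@bob:test", "@charlie:elsewhere"]))  # 2
print(count_known_servers([]))  # 1
```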
diff --git a/tests/storage/test_transactions.py b/tests/storage/test_transactions.py
index 14169afa96..a771d5af29 100644
--- a/tests/storage/test_transactions.py
+++ b/tests/storage/test_transactions.py
@@ -29,17 +29,19 @@ class TransactionStoreTestCase(HomeserverTestCase):
r = self.get_success(d)
self.assertIsNone(r)
- d = self.store.set_destination_retry_timings("example.com", 50, 100)
+ d = self.store.set_destination_retry_timings("example.com", 1000, 50, 100)
self.get_success(d)
d = self.store.get_destination_retry_timings("example.com")
r = self.get_success(d)
- self.assert_dict({"retry_last_ts": 50, "retry_interval": 100}, r)
+ self.assert_dict(
+ {"retry_last_ts": 50, "retry_interval": 100, "failure_ts": 1000}, r
+ )
def test_initial_set_transactions(self):
"""Tests that we can successfully set the destination retries (there
was a bug around invalidating the cache that broke this)
"""
- d = self.store.set_destination_retry_timings("example.com", 50, 100)
+ d = self.store.set_destination_retry_timings("example.com", 1000, 50, 100)
self.get_success(d)
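The extra positional argument reflects that retry timings now carry a `failure_ts` (when the destination first started failing) alongside `retry_last_ts` and `retry_interval`. A minimal in-memory stand-in showing the mapping the assertions above rely on (a sketch of the argument order only, not the real storage method):

```python
retry_timings = {}


def set_destination_retry_timings(destination, failure_ts, retry_last_ts, retry_interval):
    # Same argument order as in the test: failure_ts, retry_last_ts, retry_interval.
    retry_timings[destination] = {
        "failure_ts": failure_ts,
        "retry_last_ts": retry_last_ts,
        "retry_interval": retry_interval,
    }


set_destination_retry_timings("example.com", 1000, 50, 100)
print(retry_timings["example.com"])
# {'failure_ts': 1000, 'retry_last_ts': 50, 'retry_interval': 100}
```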
diff --git a/tests/test_metrics.py b/tests/test_metrics.py
index 2edbae5c6d..270f853d60 100644
--- a/tests/test_metrics.py
+++ b/tests/test_metrics.py
@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
+# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,8 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-
-from synapse.metrics import InFlightGauge
+from synapse.metrics import REGISTRY, InFlightGauge, generate_latest
from tests import unittest
@@ -111,3 +111,21 @@ class TestMauLimit(unittest.TestCase):
}
return results
+
+
+class BuildInfoTests(unittest.TestCase):
+ def test_get_build(self):
+ """
+ The synapse_build_info metric reports the OS version, Python version,
+ and Synapse version.
+ """
+ items = list(
+ filter(
+ lambda x: b"synapse_build_info{" in x,
+ generate_latest(REGISTRY).split(b"\n"),
+ )
+ )
+ self.assertEqual(len(items), 1)
+ self.assertTrue(b"osversion=" in items[0])
+ self.assertTrue(b"pythonversion=" in items[0])
+ self.assertTrue(b"version=" in items[0])
diff --git a/tests/util/test_retryutils.py b/tests/util/test_retryutils.py
new file mode 100644
index 0000000000..9e348694ad
--- /dev/null
+++ b/tests/util/test_retryutils.py
@@ -0,0 +1,127 @@
+# -*- coding: utf-8 -*-
+# Copyright 2019 The Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from synapse.util.retryutils import (
+ MIN_RETRY_INTERVAL,
+ RETRY_MULTIPLIER,
+ NotRetryingDestination,
+ get_retry_limiter,
+)
+
+from tests.unittest import HomeserverTestCase
+
+
+class RetryLimiterTestCase(HomeserverTestCase):
+ def test_new_destination(self):
+ """A happy-path case with a new destination and a successful operation"""
+ store = self.hs.get_datastore()
+ d = get_retry_limiter("test_dest", self.clock, store)
+ self.pump()
+ limiter = self.successResultOf(d)
+
+ # advance the clock a bit before making the request
+ self.pump(1)
+
+ with limiter:
+ pass
+
+ d = store.get_destination_retry_timings("test_dest")
+ self.pump()
+ new_timings = self.successResultOf(d)
+ self.assertIsNone(new_timings)
+
+ def test_limiter(self):
+ """General test case which walks through the process of a failing request"""
+ store = self.hs.get_datastore()
+
+ d = get_retry_limiter("test_dest", self.clock, store)
+ self.pump()
+ limiter = self.successResultOf(d)
+
+ self.pump(1)
+ try:
+ with limiter:
+ self.pump(1)
+ failure_ts = self.clock.time_msec()
+ raise AssertionError("argh")
+ except AssertionError:
+ pass
+
+ # wait for the update to land
+ self.pump()
+
+ d = store.get_destination_retry_timings("test_dest")
+ self.pump()
+ new_timings = self.successResultOf(d)
+ self.assertEqual(new_timings["failure_ts"], failure_ts)
+ self.assertEqual(new_timings["retry_last_ts"], failure_ts)
+ self.assertEqual(new_timings["retry_interval"], MIN_RETRY_INTERVAL)
+
+ # now if we try again we should get a failure
+ d = get_retry_limiter("test_dest", self.clock, store)
+ self.pump()
+ self.failureResultOf(d, NotRetryingDestination)
+
+ #
+ # advance the clock and try again
+ #
+
+ self.pump(MIN_RETRY_INTERVAL)
+ d = get_retry_limiter("test_dest", self.clock, store)
+ self.pump()
+ limiter = self.successResultOf(d)
+
+ self.pump(1)
+ try:
+ with limiter:
+ self.pump(1)
+ retry_ts = self.clock.time_msec()
+ raise AssertionError("argh")
+ except AssertionError:
+ pass
+
+ # wait for the update to land
+ self.pump()
+
+ d = store.get_destination_retry_timings("test_dest")
+ self.pump()
+ new_timings = self.successResultOf(d)
+ self.assertEqual(new_timings["failure_ts"], failure_ts)
+ self.assertEqual(new_timings["retry_last_ts"], retry_ts)
+ self.assertGreaterEqual(
+ new_timings["retry_interval"], MIN_RETRY_INTERVAL * RETRY_MULTIPLIER * 0.5
+ )
+ self.assertLessEqual(
+ new_timings["retry_interval"], MIN_RETRY_INTERVAL * RETRY_MULTIPLIER * 2.0
+ )
+
+ #
+ # one more go, with success
+ #
+ self.pump(MIN_RETRY_INTERVAL * RETRY_MULTIPLIER * 2.0)
+ d = get_retry_limiter("test_dest", self.clock, store)
+ self.pump()
+ limiter = self.successResultOf(d)
+
+ self.pump(1)
+ with limiter:
+ self.pump(1)
+
+ # wait for the update to land
+ self.pump()
+
+ d = store.get_destination_retry_timings("test_dest")
+ self.pump()
+ new_timings = self.successResultOf(d)
+ self.assertIsNone(new_timings)
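The range assertions in this test exist because the retry interval grows by a randomised multiplier rather than a fixed factor. A sketch of that backoff computation, using the same 0.5x to 2.0x bounds the test tolerates (the constant values and jitter range below are assumptions for illustration; the real logic lives in synapse.util.retryutils):

```python
import random

MIN_RETRY_INTERVAL = 10 * 60 * 1000       # assumed value, ms
MAX_RETRY_INTERVAL = 24 * 60 * 60 * 1000  # assumed value, ms
RETRY_MULTIPLIER = 5                      # assumed value


def next_retry_interval(current_interval: int) -> int:
    """Grow the interval by the multiplier with jitter, capped at the maximum."""
    if current_interval == 0:
        return MIN_RETRY_INTERVAL
    jitter = random.uniform(0.5, 2.0)
    return min(int(current_interval * RETRY_MULTIPLIER * jitter), MAX_RETRY_INTERVAL)


interval = next_retry_interval(MIN_RETRY_INTERVAL)
assert MIN_RETRY_INTERVAL * RETRY_MULTIPLIER * 0.5 <= interval
assert interval <= MIN_RETRY_INTERVAL * RETRY_MULTIPLIER * 2.0
```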
diff --git a/tox.ini b/tox.ini
index 7cb40847b5..1bce10a4ce 100644
--- a/tox.ini
+++ b/tox.ini
@@ -2,6 +2,7 @@
envlist = packaging, py35, py36, py37, check_codestyle, check_isort
[base]
+basepython = python3.7
deps =
mock
python-subunit
@@ -137,18 +138,35 @@ commands = {toxinidir}/scripts-dev/generate_sample_config --check
skip_install = True
deps =
coverage
-whitelist_externals =
- bash
commands=
coverage combine
coverage report
+[testenv:cov-erase]
+skip_install = True
+deps =
+ coverage
+commands=
+ coverage erase
+
+[testenv:cov-html]
+skip_install = True
+deps =
+ coverage
+commands=
+ coverage html
+
[testenv:mypy]
-basepython = python3.5
+basepython = python3.7
+skip_install = True
deps =
{[base]deps}
mypy
+ mypy-zope
+ typeshed
+setenv =
+ MYPYPATH = stubs/
extras = all
-commands = mypy --ignore-missing-imports \
- synapse/logging/_structured.py \
- synapse/logging/_terse_json.py
+commands = mypy --show-traceback \
+ synapse/logging/ \
+ synapse/config/
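For reference, the mypy environment above can also be reproduced programmatically via mypy's api module (a real entry point of the mypy package); the MYPYPATH assignment mirrors the environment variable set in the tox config, and of course the run only succeeds from a Synapse checkout with mypy installed:

```python
import os

from mypy import api

# Equivalent of the tox "mypy" environment's command line.
os.environ["MYPYPATH"] = "stubs/"
stdout, stderr, exit_status = api.run(
    ["--show-traceback", "synapse/logging/", "synapse/config/"]
)
print(stdout)
print("exit status:", exit_status)
```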