Diffstat (limited to 'docs')
 docs/CAPTCHA_SETUP.rst (renamed from docs/CAPTCHA_SETUP) | 11
 docs/admin_api/README.rst                                | 12
 docs/admin_api/purge_history_api.rst                     | 15
 docs/admin_api/purge_remote_media.rst                    | 19
 docs/application_services.rst                            |  3
 docs/code_style.rst                                      |  7
 docs/log_contexts.rst                                    | 10
 docs/metrics-howto.rst                                   | 75
 docs/replication.rst                                     | 58
 docs/turn-howto.rst                                      | 19
 docs/url_previews.rst                                    | 74
 docs/workers.rst                                         | 98
 12 files changed, 351 insertions(+), 50 deletions(-)
diff --git a/docs/CAPTCHA_SETUP b/docs/CAPTCHA_SETUP.rst
index 75ff80981b..db621aedfc 100644
--- a/docs/CAPTCHA_SETUP
+++ b/docs/CAPTCHA_SETUP.rst
@@ -10,13 +10,13 @@ https://developers.google.com/recaptcha/
 
 Setting ReCaptcha Keys
 ----------------------
-The keys are a config option on the home server config. If they are not 
-visible, you can generate them via --generate-config. Set the following value:
+The keys are a config option on the home server config. If they are not
+visible, you can generate them via --generate-config. Set the following value::
 
   recaptcha_public_key: YOUR_PUBLIC_KEY
   recaptcha_private_key: YOUR_PRIVATE_KEY
-  
-In addition, you MUST enable captchas via:
+
+In addition, you MUST enable captchas via::
 
   enable_registration_captcha: true
 
@@ -25,7 +25,6 @@ Configuring IP used for auth
 The ReCaptcha API requires that the IP address of the user who solved the
 captcha is sent. If the client is connecting through a proxy or load balancer,
 it may be required to use the X-Forwarded-For (XFF) header instead of the origin
-IP address. This can be configured as an option on the home server like so:
+IP address. This can be configured as an option on the home server like so::
 
   captcha_ip_origin_is_x_forwarded: true
-
diff --git a/docs/admin_api/README.rst b/docs/admin_api/README.rst
new file mode 100644
index 0000000000..d4f564cfae
--- /dev/null
+++ b/docs/admin_api/README.rst
@@ -0,0 +1,12 @@
+Admin APIs
+==========
+
+This directory contains documentation for the various synapse-specific admin
+APIs available.
+
+Only users that are server admins can use these APIs. A user can be marked as a
+server admin by updating the database directly, e.g.:
+
+``UPDATE users SET admin = 1 WHERE name = '@foo:bar.com'``
+
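+For a postgres-based synapse this might look like the following (the database
+name ``synapse`` is an assumption; use whatever database your config actually
+points at)::
+
+    # assumes the synapse database is called "synapse"; adjust to match your setup
+    psql -d synapse -c "UPDATE users SET admin = 1 WHERE name = '@foo:bar.com'"
+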
+Restarting synapse may be required for the changes to take effect.
diff --git a/docs/admin_api/purge_history_api.rst b/docs/admin_api/purge_history_api.rst
new file mode 100644
index 0000000000..986efe40f9
--- /dev/null
+++ b/docs/admin_api/purge_history_api.rst
@@ -0,0 +1,15 @@
+Purge History API
+=================
+
+The purge history API allows server admins to purge historic events from their
+database, reclaiming disk space.
+
+Depending on the amount of history being purged, a call to the API may take
+several minutes or longer. During this period users will not be able to
+paginate further back in the room than the point being purged from.
+
+The API is simply:
+
+``POST /_matrix/client/r0/admin/purge_history/<room_id>/<event_id>``
+
+including an ``access_token`` of a server admin.
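+
+For example, a hypothetical curl invocation (the hostname is a placeholder,
+and the access token must belong to a server admin)::
+
+    # purge events in <room_id> before the point given by <event_id>
+    curl -X POST \
+        'https://my.server.here/_matrix/client/r0/admin/purge_history/<room_id>/<event_id>?access_token=<admin_access_token>'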
diff --git a/docs/admin_api/purge_remote_media.rst b/docs/admin_api/purge_remote_media.rst
new file mode 100644
index 0000000000..b26c6a9e7b
--- /dev/null
+++ b/docs/admin_api/purge_remote_media.rst
@@ -0,0 +1,19 @@
+Purge Remote Media API
+======================
+
+The purge remote media API allows server admins to purge old cached remote
+media. 
+
+The API is::
+
+    POST /_matrix/client/r0/admin/purge_media_cache
+
+    {
+        "before_ts": <unix_timestamp_in_ms>
+    }
+
+This will remove all cached media that was last accessed before
+``<unix_timestamp_in_ms>``.
+
+If the user re-requests purged remote media, synapse will re-request the media
+from the originating server.
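+
+As with the other admin APIs, an ``access_token`` of a server admin is
+required. A hypothetical curl invocation (the hostname and the example
+timestamp are placeholders)::
+
+    # remove cached remote media not accessed since the given timestamp (ms)
+    curl -X POST -H 'Content-Type: application/json' \
+        -d '{"before_ts": 1470009600000}' \
+        'https://my.server.here/_matrix/client/r0/admin/purge_media_cache?access_token=<admin_access_token>'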
diff --git a/docs/application_services.rst b/docs/application_services.rst
index 7e87ac9ad6..fbc0c7e960 100644
--- a/docs/application_services.rst
+++ b/docs/application_services.rst
@@ -32,5 +32,4 @@ The format of the AS configuration file is as follows:
 
 See the spec_ for further details on how application services work.
 
-.. _spec: https://github.com/matrix-org/matrix-doc/blob/master/specification/25_application_service_api.rst#application-service-api
-
+.. _spec: https://matrix.org/docs/spec/application_service/unstable.html
diff --git a/docs/code_style.rst b/docs/code_style.rst
index dc40a7ab7b..8d73d17beb 100644
--- a/docs/code_style.rst
+++ b/docs/code_style.rst
@@ -43,7 +43,10 @@ Basically, PEP8
   together, or want to deliberately extend or preserve vertical/horizontal
   space)
 
-Comments should follow the google code style. This is so that we can generate
-documentation with sphinx (http://sphinxcontrib-napoleon.readthedocs.org/en/latest/) 
+Comments should follow the `google code style <http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
+This is so that we can generate documentation with 
+`sphinx <http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
+`examples <http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html>`_
+in the sphinx documentation.
 
 Code should pass pep8 --max-line-length=100 without any warnings.
diff --git a/docs/log_contexts.rst b/docs/log_contexts.rst
new file mode 100644
index 0000000000..0046e171be
--- /dev/null
+++ b/docs/log_contexts.rst
@@ -0,0 +1,10 @@
+What do I do about "Unexpected logging context" debug log-lines everywhere?
+
+<Mjark> The logging context lives in thread local storage
+<Mjark> Sometimes it gets out of sync with what it should actually be, usually because something scheduled something to run on the reactor without preserving the logging context. 
+<Matthew> what is the impact of it getting out of sync? and how and when should we preserve log context?
+<Mjark> The impact is that some of the CPU and database metrics will be under-reported, and some log lines will be mis-attributed.
+<Mjark> It should happen auto-magically in all the APIs that do IO or otherwise defer to the reactor.
+<Erik> Mjark: the other place is if we branch, e.g. using defer.gatherResults
+
+Unanswered: how and when should we preserve log context?
\ No newline at end of file
diff --git a/docs/metrics-howto.rst b/docs/metrics-howto.rst
index c1f5ae2174..ca10799b00 100644
--- a/docs/metrics-howto.rst
+++ b/docs/metrics-howto.rst
@@ -15,36 +15,45 @@ How to monitor Synapse metrics using Prometheus
 
   Restart synapse
 
-3: Check out synapse-prometheus-config
-  https://github.com/matrix-org/synapse-prometheus-config
-
-4: Add ``synapse.html`` and ``synapse.rules``
-  The ``.html`` file needs to appear in prometheus's ``consoles`` directory,
-  and the ``.rules`` file needs to be invoked somewhere in the main config
-  file. A symlink to each from the git checkout into the prometheus directory
-  might be easiest to ensure ``git pull`` keeps it updated.
-
-5: Add a prometheus target for synapse
-  This is easiest if prometheus runs on the same machine as synapse, as it can
-  then just use localhost::
-
-    global: {
-      rule_file: "synapse.rules"
-    }
-
-    job: {
-      name: "synapse"
-
-      target_group: {
-        target: "http://localhost:9092/"
-      }
-    }
-
-6: Start prometheus::
-
-   ./prometheus -config.file=prometheus.conf
-
-7: Wait a few seconds for it to start and perform the first scrape,
-   then visit the console:
-
-    http://server-where-prometheus-runs:9090/consoles/synapse.html
+3: Add a prometheus target for synapse to your prometheus config (under
+   ``scrape_configs``). It needs to set the ``metrics_path`` to a non-default
+   value::
+
+    - job_name: "synapse"
+      metrics_path: "/_synapse/metrics"
+      static_configs:
+        - targets: ["my.server.here:9092"]
+
+Standard Metric Names
+---------------------
+
+As of synapse version 0.18.2, the format of the process-wide metrics has been
+changed to fit prometheus standard naming conventions. Additionally the units
+have been changed from milliseconds to seconds.
+
+================================== =============================
+New name                           Old name
+---------------------------------- -----------------------------
+process_cpu_user_seconds_total     process_resource_utime / 1000
+process_cpu_system_seconds_total   process_resource_stime / 1000
+process_open_fds (no 'type' label) process_fds
+================================== =============================
+
+The python-specific counts of garbage collector performance have been renamed.
+
+=========================== ======================
+New name                    Old name
+--------------------------- ----------------------
+python_gc_time              reactor_gc_time      
+python_gc_unreachable_total reactor_gc_unreachable
+python_gc_counts            reactor_gc_counts
+=========================== ======================
+
+The twisted-specific reactor metrics have been renamed.
+
+==================================== =====================
+New name                             Old name
+------------------------------------ ---------------------
+python_twisted_reactor_pending_calls reactor_pending_calls
+python_twisted_reactor_tick_time     reactor_tick_time
+==================================== =====================
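+
+A quick way to check which of these names your synapse is now exporting
+(assuming the metrics listener and hostname used in the prometheus target
+above) is to fetch the metrics endpoint directly::
+
+    # list the process-wide metrics exposed by synapse
+    curl -s http://my.server.here:9092/_synapse/metrics | grep '^process_'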
diff --git a/docs/replication.rst b/docs/replication.rst
new file mode 100644
index 0000000000..7e37e71987
--- /dev/null
+++ b/docs/replication.rst
@@ -0,0 +1,58 @@
+Replication Architecture
+========================
+
+Motivation
+----------
+
+We'd like to be able to split some of the work that synapse does into multiple
+python processes. In theory multiple synapse processes could share a single
+postgresql database and we'd scale up by running more synapse processes.
+However much of synapse assumes that only one process is interacting with the
+database, whether for assigning unique identifiers when inserting into tables,
+notifying components about new updates, or invalidating its caches.
+
+So running multiple copies of the current code isn't an option. One way to
+run multiple processes would be to have a single writer process and multiple
+reader processes connected to the same database. In order to do this we'd need
+a way for the reader process to invalidate its in-memory caches when an update
+happens on the writer. One way to do this is for the writer to present an
+append-only log of updates which the readers can consume to invalidate their
+caches and to push updates to listening clients or pushers.
+
+Synapse already stores much of its data as an append-only log in order to
+respond correctly to /sync requests, so the amount of code changes needed to
+expose the append-only log to the readers should be fairly minimal.
+
+Architecture
+------------
+
+The Replication API
+~~~~~~~~~~~~~~~~~~~
+
+Synapse will optionally expose a long poll HTTP API for extracting updates. The
+API will have a similar shape to /sync in that clients provide tokens
+indicating where in the log they have reached, and a timeout. The synapse
+server then either responds immediately if it already has updates, or waits
+for new updates until the timeout. If the timeout expires and nothing has
+happened, the server returns an empty response.
+
+However, unlike the /sync API, this replication API returns synapse-specific
+data rather than trying to implement a matrix specification. The replication
+results are returned as arrays of rows where the rows are mostly lifted
+directly from the database. This avoids unnecessary JSON parsing on the server
+and hopefully avoids an impedance mismatch between the data returned and the
+required updates to the datastore.
+
+This does not replicate all the database tables as many of the database tables
+are indexes that can be recovered from the contents of other tables.
+
+The format and parameters for the API are documented in
+``synapse/replication/resource.py``.
+
+
+The Slaved DataStore
+~~~~~~~~~~~~~~~~~~~~
+
+There are read-only versions of the synapse storage layer in
+``synapse/replication/slave/storage`` that use the response of the replication
+API to invalidate their caches.
diff --git a/docs/turn-howto.rst b/docs/turn-howto.rst
index e2c73458e2..04c0100715 100644
--- a/docs/turn-howto.rst
+++ b/docs/turn-howto.rst
@@ -9,31 +9,35 @@ the Home Server to generate credentials that are valid for use on the TURN
 server through the use of a secret shared between the Home Server and the
 TURN server.
 
-This document described how to install coturn
-(https://code.google.com/p/coturn/) which also supports the TURN REST API,
+This document describes how to install coturn
+(https://github.com/coturn/coturn) which also supports the TURN REST API,
 and integrate it with synapse.
 
 coturn Setup
 ============
 
+You may be able to set up coturn via your package manager, or set it up
+manually using the usual ``configure, make, make install`` process.
+
  1. Check out coturn::
-      svn checkout http://coturn.googlecode.com/svn/trunk/ coturn
+ 
+      git clone https://github.com/coturn/coturn.git coturn
       cd coturn
 
  2. Configure it::
+ 
       ./configure
 
-    You may need to install libevent2: if so, you should do so
+    You may need to install ``libevent2``: if so, you should do so
     in the way recommended by your operating system.
     You can ignore warnings about lack of database support: a
     database is unnecessary for this purpose.
 
  3. Build and install it::
+ 
       make
       make install
 
- 4. Make a config file in /etc/turnserver.conf. You can customise
-    a config file from turnserver.conf.default. The relevant
+ 4. Create or edit the config file in ``/etc/turnserver.conf``. The relevant
     lines, with example values, are::
 
       lt-cred-mech
@@ -41,7 +45,7 @@ coturn Setup
       static-auth-secret=[your secret key here]
       realm=turn.myserver.org
 
-    See turnserver.conf.default for explanations of the options.
+    See turnserver.conf for explanations of the options.
     One way to generate the static-auth-secret is with pwgen::
 
        pwgen -s 64 1
@@ -54,6 +58,7 @@ coturn Setup
     import your private key and certificate.
 
  7. Start the turn server::
+ 
        bin/turnserver -o
 
 
diff --git a/docs/url_previews.rst b/docs/url_previews.rst
new file mode 100644
index 0000000000..634d9d907f
--- /dev/null
+++ b/docs/url_previews.rst
@@ -0,0 +1,74 @@
+URL Previews
+============
+
+Design notes on a URL previewing service for Matrix:
+
+Options are:
+
+ 1. Have an AS which listens for URLs, downloads them, and inserts an event that describes their metadata.
+   * Pros:
+     * Decouples the implementation entirely from Synapse.
+     * Uses existing Matrix events & content repo to store the metadata.
+   * Cons:
+     * Which AS should provide this service for a room, and why should you trust it?
+     * Doesn't work well with E2E; you'd have to cut the AS into every room
+     * the AS would end up subscribing to every room anyway.
+
+ 2. Have a generic preview API (nothing to do with Matrix) that provides a previewing service:
+   * Pros:
+     * Simple and flexible; can be used by any clients at any point
+   * Cons:
+     * If each HS provides one of these independently, all the HSes in a room may needlessly DoS the target URI
+     * We need somewhere to store the URL metadata rather than just using Matrix itself
+     * We can't piggyback on matrix to distribute the metadata between HSes.
+
+ 3. Make the synapse of the sending user responsible for spidering the URL and inserting an event asynchronously which describes the metadata.
+   * Pros:
+     * Works transparently for all clients
+     * Piggy-backs nicely on using Matrix for distributing the metadata.
+     * No confusion as to which AS should provide the service
+   * Cons:
+     * Doesn't work with E2E
+     * We might want to decouple the implementation of the spider from the HS, given spider behaviour can be quite complicated and evolve much more rapidly than the HS.  It's more like a bot than a core part of the server.
+
+ 4. Make the sending client use the preview API and insert the event itself when successful.
+   * Pros:
+      * Works well with E2E
+      * No custom server functionality
+      * Lets the client customise the preview that they send (like on FB)
+   * Cons:
+      * Entirely specific to the sending client, whereas it'd be nice if /any/ URL was correctly previewed if clients support it.
+
+ 5. Have the option of specifying a shared (centralised) previewing service used by a room, to avoid all the different HSes in the room DoSing the target.
+
+The best solution is probably a combination of options 2 and 4.
+ * Sending clients do their best to create and send a preview at the point of sending the message, perhaps delaying the message until the preview is computed?  (This also lets the user validate the preview before sending)
+ * Receiving clients have the option of going and creating their own preview if one doesn't arrive soon enough (or if the original sender didn't create one)
+
+This is a bit magical though in that the preview could come from two entirely different sources - the sending HS or your local one.  However, this can always be exposed to users: "Generate your own URL previews if none are available?"
+
+This is tantamount also to senders calculating their own thumbnails for sending in advance of the main content - we are trusting the sender not to lie about the content in the thumbnail, whereas currently thumbnails are calculated by the receiving homeserver to avoid this attack.
+
+However, this kind of phishing attack does exist whether we let senders pick their thumbnails or not, in that a malicious sender can send normal text messages around the attachment claiming it to be legitimate.  We could rely on (future) reputation/abuse management to punish users who phish (be it with bogus metadata or bogus descriptions).   Bogus metadata is particularly bad though, especially if it's avoidable.
+
+As a first cut, let's do #2 and have the receiver hit the API to calculate its own previews (as it does currently for image thumbnails).  We can then extend/optimise this to option 4 as a special extra if needed.
+
+API
+---
+
+An example request and response::
+
+    GET /_matrix/media/r0/preview_url?url=http://wherever.com
+    200 OK
+    {
+        "og:type"        : "article",
+        "og:url"         : "https://twitter.com/matrixdotorg/status/684074366691356672",
+        "og:title"       : "Matrix on Twitter",
+        "og:image"       : "https://pbs.twimg.com/profile_images/500400952029888512/yI0qtFi7_400x400.png",
+        "og:description" : "“Synapse 0.12 is out! Lots of polishing, performance &amp;amp; bugfixes: /sync API, /r0 prefix, fulltext search, 3PID invites https://t.co/5alhXLLEGP”",
+        "og:site_name"   : "Twitter"
+    }
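+
+A hypothetical invocation with curl (the hostname is a placeholder; whether
+authentication is required is not covered by these notes)::
+
+    # ask the local media repo to generate preview metadata for a URL
+    curl 'https://my.server.here/_matrix/media/r0/preview_url?url=http://wherever.com'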
+
+* Downloads the URL
+  * If HTML, just stores it in RAM and parses it for OG meta tags
+    * Download any media OG meta tags to the media repo, and refer to them in the OG via mxc:// URIs.
+  * If a media filetype we know we can thumbnail: store it on disk, and hand it to the thumbnailer. Generate OG meta tags from the thumbnailer contents.
+  * Otherwise, don't bother downloading further.
diff --git a/docs/workers.rst b/docs/workers.rst
new file mode 100644
index 0000000000..65b6e690f7
--- /dev/null
+++ b/docs/workers.rst
@@ -0,0 +1,98 @@
+Scaling synapse via workers
+---------------------------
+
+Synapse has experimental support for splitting out functionality into
+multiple separate python processes, helping greatly with scalability.  These
+processes are called 'workers', and are (eventually) intended to scale
+horizontally independently.
+
+All processes continue to share the same database instance, and as such, workers
+only work with postgres based synapse deployments (sharing a single sqlite
+across multiple processes is a recipe for disaster, plus you should be using
+postgres anyway if you care about scalability).
+
+The workers communicate with the master synapse process via a synapse-specific
+HTTP protocol called 'replication' (analogous to MySQL or Postgres style
+database replication), which feeds a stream of relevant data to the workers so
+they can be kept in sync with the main synapse process and database state.
+
+To enable workers, you need to add a replication listener to the master synapse, e.g.::
+
+    listeners:
+      - port: 9092
+        bind_address: '127.0.0.1'
+        type: http
+        tls: false
+        x_forwarded: false
+        resources:
+          - names: [replication]
+            compress: false
+
+Under **no circumstances** should this replication API listener be exposed to the
+public internet; it currently implements no authentication whatsoever and is
+unencrypted HTTP.
+
+You then create a set of configs for the various worker processes.  These
+worker configuration files should be stored in a dedicated subdirectory, to
+allow synctl to manipulate them.
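+
+For example, a hypothetical layout (the individual file names here are purely
+illustrative) might be::
+
+    $CONFIG/workers/
+        pusher.yaml
+        synchrotron.yaml
+        federation_reader.yaml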
+
+The current available worker applications are:
+ * synapse.app.pusher - handles sending push notifications to sygnal and email
+ * synapse.app.synchrotron - handles /sync endpoints; can scale horizontally through multiple instances.
+ * synapse.app.appservice - handles output traffic to Application Services
+ * synapse.app.federation_reader - handles receiving federation traffic (including public_rooms API)
+ * synapse.app.media_repository - handles the media repository.
+ * synapse.app.client_reader - handles client API endpoints like /publicRooms
+
+Each worker configuration file inherits the configuration of the main homeserver
+configuration file.  You can then override configuration specific to that worker,
+e.g. the HTTP listener that it provides (if any); logging configuration; etc.
+You should minimise the number of overrides though to maintain a usable config.
+
+You must specify the type of worker application (worker_app) and the replication
+endpoint that it's talking to on the main synapse process (worker_replication_url).
+
+For instance::
+
+    worker_app: synapse.app.synchrotron
+
+    # The replication listener on the synapse to talk to.
+    worker_replication_url: http://127.0.0.1:9092/_synapse/replication
+
+    worker_listeners:
+     - type: http
+       port: 8083
+       resources:
+         - names:
+           - client
+
+    worker_daemonize: True
+    worker_pid_file: /home/matrix/synapse/synchrotron.pid
+    worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
+
+...is a full configuration for a synchrotron worker instance, which will expose a
+plain HTTP /sync endpoint on port 8083 separately from the /sync endpoint provided
+by the main synapse.
+
+Obviously you should configure your load balancer to route the /sync endpoint
+to the synchrotron instance(s).
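+
+To sanity-check that the synchrotron is serving /sync before wiring up the
+load balancer, you can hit it directly (assuming the port from the example
+config above, run from the same machine; the access token is a placeholder)::
+
+    # should return a normal /sync response for the owner of the access token
+    curl 'http://localhost:8083/_matrix/client/r0/sync?access_token=<access_token>'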
+
+Finally, to actually run your worker-based synapse, you must pass synctl the -a
+commandline option to tell it to operate on all the worker configurations found
+in the given directory, e.g.::
+
+    synctl -a $CONFIG/workers start
+
+Currently you should always restart all workers when restarting or upgrading
+synapse, unless you explicitly know it's safe not to.  For instance, restarting
+synapse without restarting all the synchrotrons may result in broken typing
+notifications.
+
+To manipulate a specific worker, you pass the -w option to synctl::
+
+    synctl -w $CONFIG/workers/synchrotron.yaml restart
+
+All of the above is highly experimental and subject to change as Synapse evolves,
+but is documented here to help folks needing highly scalable Synapses similar
+to the one running matrix.org!
+