author     Neil Johnson <neil@fragile.org.uk>  2018-03-26 14:51:11 +0100
committer  Neil Johnson <neil@fragile.org.uk>  2018-03-26 14:51:11 +0100
commit     aa3587fdd111d01b3bd0a610dab6e102d79f230c (patch)
tree       1fad5548eab845afcb1fe5f0c5e26694b322d6d3 /docs
parent     Merge branch 'hotfixes-v0.26.1' of github.com:matrix-org/synapse (diff)
parent     version bump (diff)
download   synapse-aa3587fdd111d01b3bd0a610dab6e102d79f230c.tar.xz
Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse v0.27.0
Diffstat (limited to 'docs')
-rw-r--r--  docs/admin_api/media_admin_api.md     23
-rw-r--r--  docs/admin_api/purge_history_api.rst  54
-rw-r--r--  docs/log_contexts.rst                  8
-rw-r--r--  docs/metrics-howto.rst                61
-rw-r--r--  docs/workers.rst                      39
5 files changed, 165 insertions, 20 deletions
diff --git a/docs/admin_api/media_admin_api.md b/docs/admin_api/media_admin_api.md
new file mode 100644
index 0000000000..abdbc1ea86
--- /dev/null
+++ b/docs/admin_api/media_admin_api.md
@@ -0,0 +1,23 @@
+# List all media in a room
+
+This API gets a list of known media in a room.
+
+The API is:
+```
+GET /_matrix/client/r0/admin/room/<room_id>/media
+```
+including an `access_token` of a server admin.
+
+It returns a JSON body like the following:
+```
+{
+    "local": [
+        "mxc://localhost/xwvutsrqponmlkjihgfedcba",
+        "mxc://localhost/abcdefghijklmnopqrstuvwx"
+    ],
+    "remote": [
+        "mxc://matrix.org/xwvutsrqponmlkjihgfedcba",
+        "mxc://matrix.org/abcdefghijklmnopqrstuvwx"
+    ]
+}
+```
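As a quick illustration of the new endpoint, here is a minimal sketch, assuming Python with the `requests` library; the homeserver URL, room ID and admin token below are placeholder values, not anything taken from the commit.

```python
from urllib.parse import quote

import requests

HOMESERVER = "https://localhost:8448"  # placeholder homeserver URL
ROOM_ID = "!someroom:localhost"        # placeholder room ID
ADMIN_TOKEN = "MDAxyz..."              # access_token of a server admin

# Room IDs contain '!' and ':', so encode them for use in the URL path.
resp = requests.get(
    f"{HOMESERVER}/_matrix/client/r0/admin/room/{quote(ROOM_ID)}/media",
    params={"access_token": ADMIN_TOKEN},
)
resp.raise_for_status()
media = resp.json()

# "local" lists media uploaded to this homeserver; "remote" lists media
# cached from other servers, as in the example response above.
print("local:", media["local"])
print("remote:", media["remote"])
```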
diff --git a/docs/admin_api/purge_history_api.rst b/docs/admin_api/purge_history_api.rst
index 08b3306366..2da833c827 100644
--- a/docs/admin_api/purge_history_api.rst
+++ b/docs/admin_api/purge_history_api.rst
@@ -4,14 +4,60 @@ Purge History API
 The purge history API allows server admins to purge historic events from their
 database, reclaiming disk space.
 
-**NB!** This will not delete local events (locally sent messages content etc) from the database, but will remove lots of the metadata about them and does dramatically reduce the on disk space usage
-
 Depending on the amount of history being purged, a call to the API may take
 several minutes or longer. During this period users will not be able to
 paginate further back in the room past the point being purged.
 
-The API is simply:
+The API is:
 
-``POST /_matrix/client/r0/admin/purge_history/<room_id>/<event_id>``
+``POST /_matrix/client/r0/admin/purge_history/<room_id>[/<event_id>]``
 
 including an ``access_token`` of a server admin.
+
+By default, events sent by local users are not deleted, as they may represent
+the only copies of this content in existence. (Events sent by remote users are
+deleted.)
+
+Room state data (such as joins, leaves, topic) is always preserved.
+
+To delete local message events as well, set ``delete_local_events`` in the body:
+
+.. code:: json
+
+   {
+       "delete_local_events": true
+   }
+
+The caller must specify the point in the room to purge up to. This can be
+done by including an ``event_id`` in the URI, or by setting a
+``purge_up_to_event_id`` or ``purge_up_to_ts`` field in the request body. If an
+event ID is given, that event (and others at the same graph depth) will be
+retained. If ``purge_up_to_ts`` is given, it should be a timestamp in
+milliseconds since the unix epoch.
+
+The API starts the purge running, and returns immediately with a JSON body
+containing a purge ID:
+
+.. code:: json
+
+    {
+        "purge_id": "<opaque id>"
+    }
+
+Purge status query
+------------------
+
+It is possible to poll for updates on recent purges with a second API:
+
+``GET /_matrix/client/r0/admin/purge_history_status/<purge_id>``
+
+(again, with a suitable ``access_token``). This API returns a JSON body like
+the following:
+
+.. code:: json
+
+    {
+        "status": "active"
+    }
+
+The status will be one of ``active``, ``complete``, or ``failed``.
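To tie the two endpoints together, here is a sketch, assuming Python with the ``requests`` library; the homeserver URL, room ID and admin token are placeholders. It starts a purge up to a timestamp 30 days in the past and polls until the purge leaves the ``active`` state.

.. code:: python

    import time
    from urllib.parse import quote

    import requests

    HOMESERVER = "https://localhost:8448"  # placeholder homeserver URL
    ROOM_ID = "!someroom:localhost"        # placeholder room ID
    ADMIN_TOKEN = "MDAxyz..."              # access_token of a server admin
    params = {"access_token": ADMIN_TOKEN}

    # purge_up_to_ts is in milliseconds since the unix epoch; here, 30 days ago.
    cutoff_ms = int((time.time() - 30 * 24 * 3600) * 1000)

    resp = requests.post(
        f"{HOMESERVER}/_matrix/client/r0/admin/purge_history/{quote(ROOM_ID)}",
        params=params,
        json={"purge_up_to_ts": cutoff_ms, "delete_local_events": False},
    )
    resp.raise_for_status()
    purge_id = resp.json()["purge_id"]

    # Poll the status endpoint until the purge is no longer active.
    while True:
        status = requests.get(
            f"{HOMESERVER}/_matrix/client/r0/admin/purge_history_status/{purge_id}",
            params=params,
        ).json()["status"]
        if status != "active":
            break
        time.sleep(5)

    print("purge finished with status:", status)  # "complete" or "failed"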
diff --git a/docs/log_contexts.rst b/docs/log_contexts.rst
index b19b7fa1ea..82ac4f91e5 100644
--- a/docs/log_contexts.rst
+++ b/docs/log_contexts.rst
@@ -279,9 +279,9 @@ Obviously that option means that the operations done in
 that might be fixed by setting a different logcontext via a ``with
 LoggingContext(...)`` in ``background_operation``).
 
-The second option is to use ``logcontext.preserve_fn``, which wraps a function
-so that it doesn't reset the logcontext even when it returns an incomplete
-deferred, and adds a callback to the returned deferred to reset the
+The second option is to use ``logcontext.run_in_background``, which wraps a
+function so that it doesn't reset the logcontext even when it returns an
+incomplete deferred, and adds a callback to the returned deferred to reset the
 logcontext. In other words, it turns a function that follows the Synapse rules
 about logcontexts and Deferreds into one which behaves more like an external
 function — the opposite operation to that described in the previous section.
@@ -293,7 +293,7 @@ It can be used like this:
     def do_request_handling():
         yield foreground_operation()
 
-        logcontext.preserve_fn(background_operation)()
+        logcontext.run_in_background(background_operation)
 
         # this will now be logged against the request context
         logger.debug("Request handling complete")
diff --git a/docs/metrics-howto.rst b/docs/metrics-howto.rst
index 143cd0f42f..8acc479bc3 100644
--- a/docs/metrics-howto.rst
+++ b/docs/metrics-howto.rst
@@ -16,7 +16,7 @@ How to monitor Synapse metrics using Prometheus
      metrics_port: 9092
 
    Also ensure that ``enable_metrics`` is set to ``True``.
-  
+
    Restart synapse.
 
 3. Add a prometheus target for synapse.
@@ -28,11 +28,58 @@ How to monitor Synapse metrics using Prometheus
       static_configs:
         - targets: ["my.server.here:9092"]
 
-   If your prometheus is older than 1.5.2, you will need to replace 
+   If your prometheus is older than 1.5.2, you will need to replace
    ``static_configs`` in the above with ``target_groups``.
-   
+
    Restart prometheus.
 
+
+Block and response metrics renamed for 0.27.0
+---------------------------------------------
+
+Synapse 0.27.0 begins the process of rationalising the duplicate ``*:count``
+metrics reported for resource tracking of code blocks and HTTP requests.
+
+At the same time, the corresponding ``*:total`` metrics are being renamed, as
+the ``:total`` suffix no longer makes sense in the absence of a corresponding
+``:count`` metric.
+
+To enable a graceful migration path, this release just adds new names for the
+metrics being renamed. A future release will remove the old ones.
+
+The following table shows the new metrics and the old metrics they replace.
+
+==================================================== ===================================================
+New name                                             Old name
+==================================================== ===================================================
+synapse_util_metrics_block_count                     synapse_util_metrics_block_timer:count
+synapse_util_metrics_block_count                     synapse_util_metrics_block_ru_utime:count
+synapse_util_metrics_block_count                     synapse_util_metrics_block_ru_stime:count
+synapse_util_metrics_block_count                     synapse_util_metrics_block_db_txn_count:count
+synapse_util_metrics_block_count                     synapse_util_metrics_block_db_txn_duration:count
+
+synapse_util_metrics_block_time_seconds              synapse_util_metrics_block_timer:total
+synapse_util_metrics_block_ru_utime_seconds          synapse_util_metrics_block_ru_utime:total
+synapse_util_metrics_block_ru_stime_seconds          synapse_util_metrics_block_ru_stime:total
+synapse_util_metrics_block_db_txn_count              synapse_util_metrics_block_db_txn_count:total
+synapse_util_metrics_block_db_txn_duration_seconds   synapse_util_metrics_block_db_txn_duration:total
+
+synapse_http_server_response_count                   synapse_http_server_requests
+synapse_http_server_response_count                   synapse_http_server_response_time:count
+synapse_http_server_response_count                   synapse_http_server_response_ru_utime:count
+synapse_http_server_response_count                   synapse_http_server_response_ru_stime:count
+synapse_http_server_response_count                   synapse_http_server_response_db_txn_count:count
+synapse_http_server_response_count                   synapse_http_server_response_db_txn_duration:count
+
+synapse_http_server_response_time_seconds            synapse_http_server_response_time:total
+synapse_http_server_response_ru_utime_seconds        synapse_http_server_response_ru_utime:total
+synapse_http_server_response_ru_stime_seconds        synapse_http_server_response_ru_stime:total
+synapse_http_server_response_db_txn_count            synapse_http_server_response_db_txn_count:total
+synapse_http_server_response_db_txn_duration_seconds synapse_http_server_response_db_txn_duration:total
+==================================================== ===================================================
+
+
 Standard Metric Names
 ---------------------
 
@@ -42,7 +89,7 @@ have been changed to seconds, from milliseconds.
 
 ================================== =============================
 New name                           Old name
----------------------------------- -----------------------------
+================================== =============================
 process_cpu_user_seconds_total     process_resource_utime / 1000
 process_cpu_system_seconds_total   process_resource_stime / 1000
 process_open_fds (no 'type' label) process_fds
@@ -52,8 +99,8 @@ The python-specific counts of garbage collector performance have been renamed.
 
 =========================== ======================
 New name                    Old name
---------------------------- ----------------------
-python_gc_time              reactor_gc_time      
+=========================== ======================
+python_gc_time              reactor_gc_time
 python_gc_unreachable_total reactor_gc_unreachable
 python_gc_counts            reactor_gc_counts
 =========================== ======================
@@ -62,7 +109,7 @@ The twisted-specific reactor metrics have been renamed.
 
 ==================================== =====================
 New name                             Old name
------------------------------------- ---------------------
+==================================== =====================
 python_twisted_reactor_pending_calls reactor_pending_calls
 python_twisted_reactor_tick_time     reactor_tick_time
 ==================================== =====================
diff --git a/docs/workers.rst b/docs/workers.rst
index b39f79058e..80f8d2181a 100644
--- a/docs/workers.rst
+++ b/docs/workers.rst
@@ -30,17 +30,29 @@ requests made to the federation port. The caveats regarding running a
 reverse-proxy on the federation port still apply (see
 https://github.com/matrix-org/synapse/blob/master/README.rst#reverse-proxying-the-federation-port).
 
-To enable workers, you need to add a replication listener to the master synapse, e.g.::
+To enable workers, you need to add two replication listeners to the master
+synapse, e.g.::
 
     listeners:
+      # The TCP replication port
       - port: 9092
         bind_address: '127.0.0.1'
         type: replication
+      # The HTTP replication port
+      - port: 9093
+        bind_address: '127.0.0.1'
+        type: http
+        resources:
+         - names: [replication]
 
-Under **no circumstances** should this replication API listener be exposed to the
-public internet; it currently implements no authentication whatsoever and is
+Under **no circumstances** should these replication API listeners be exposed to
+the public internet; they currently implement no authentication whatsoever and are
 unencrypted.
 
+(Roughly, the TCP port is used for streaming data from the master to the
+workers, and the HTTP port for the workers to send data to the main
+synapse process.)
+
 You then create a set of configs for the various worker processes.  These
 should be worker configuration files, and should be stored in a dedicated
 subdirectory, to allow synctl to manipulate them.
@@ -52,8 +64,13 @@ You should minimise the number of overrides though to maintain a usable config.
 
 You must specify the type of worker application (``worker_app``). The currently
 available worker applications are listed below. You must also specify the
-replication endpoint that it's talking to on the main synapse process
-(``worker_replication_host`` and ``worker_replication_port``).
+replication endpoints that the worker should talk to on the main synapse process.
+``worker_replication_host`` should specify the host of the main synapse,
+``worker_replication_port`` should point to the TCP replication listener port, and
+``worker_replication_http_port`` should point to the HTTP replication port.
+
+Currently, only the ``event_creator`` worker requires specifying
+``worker_replication_http_port``.
 
 For instance::
 
@@ -62,6 +79,7 @@ For instance::
     # The replication listener on the synapse to talk to.
     worker_replication_host: 127.0.0.1
     worker_replication_port: 9092
+    worker_replication_http_port: 9093
 
     worker_listeners:
      - type: http
@@ -207,3 +225,14 @@ the ``worker_main_http_uri`` setting in the frontend_proxy worker configuration
 file. For example::
 
     worker_main_http_uri: http://127.0.0.1:8008
+
+
+``synapse.app.event_creator``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Handles non-state event creation. It can handle REST endpoints matching::
+
+    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
+
+It will create events locally and then send them on to the main synapse
+instance to be persisted and handled.
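
For context, the request below is the kind that matches the regex above; in a worker deployment the reverse proxy would route it to the ``event_creator`` listener rather than the main process. This is an illustrative sketch, assuming Python with the ``requests`` library and placeholder values for the homeserver URL, room ID and access token. For example::

    import time
    from urllib.parse import quote

    import requests

    HOMESERVER = "https://localhost:8448"  # placeholder: the reverse proxy in front of synapse
    ROOM_ID = "!someroom:localhost"        # placeholder room ID
    ACCESS_TOKEN = "MDAxyz..."             # an ordinary user's access_token

    # PUT /_matrix/client/r0/rooms/{roomId}/send/{eventType}/{txnId} matches
    # ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send, so a reverse proxy
    # configured for workers can hand it to the event_creator process.
    txn_id = str(int(time.time() * 1000))
    resp = requests.put(
        f"{HOMESERVER}/_matrix/client/r0/rooms/{quote(ROOM_ID)}"
        f"/send/m.room.message/{txn_id}",
        params={"access_token": ACCESS_TOKEN},
        json={"msgtype": "m.text", "body": "hello from an event_creator worker"},
    )
    resp.raise_for_status()
    print("event created:", resp.json()["event_id"])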