Diffstat (limited to 'contrib')
-rw-r--r-- contrib/docker_compose_workers/README.md                                | 125
-rw-r--r-- contrib/docker_compose_workers/docker-compose.yaml                      |  77
-rw-r--r-- contrib/docker_compose_workers/workers/synapse-federation-sender-1.yaml |  14
-rw-r--r-- contrib/docker_compose_workers/workers/synapse-generic-worker-1.yaml    |  19
-rw-r--r-- contrib/graph/graph.py                                                  |  35
-rw-r--r-- contrib/graph/graph2.py                                                 |  32
-rw-r--r-- contrib/graph/graph3.py                                                 |  45
7 files changed, 304 insertions, 43 deletions
diff --git a/contrib/docker_compose_workers/README.md b/contrib/docker_compose_workers/README.md
new file mode 100644
index 0000000000..4dbfee2853
--- /dev/null
+++ b/contrib/docker_compose_workers/README.md
@@ -0,0 +1,125 @@
+# Setting up Synapse with Workers using Docker Compose
+
+This directory describes how to deploy and manage Synapse and workers via [Docker Compose](https://docs.docker.com/compose/).
+
+Example worker configuration files can be found [here](workers).
+
+All examples and snippets assume that your Synapse service is called `synapse` in your Docker Compose file.
+
+An example Docker Compose file can be found [here](docker-compose.yaml).
+
+## Worker Service Examples in Docker Compose
+
+In order to start the Synapse container as a worker, you must specify an `entrypoint` that loads both the `homeserver.yaml` and the configuration for the worker (`synapse-generic-worker-1.yaml` in the example below). You must also include the worker type in the environment variable `SYNAPSE_WORKER`, or alternatively pass `-m synapse.app.generic_worker` as part of the `entrypoint` after `"/start.py", "run"`.
+
+### Generic Worker Example
+
+```yaml
+synapse-generic-worker-1:
+  image: matrixdotorg/synapse:latest
+  container_name: synapse-generic-worker-1
+  restart: unless-stopped
+  entrypoint: ["/start.py", "run", "--config-path=/data/homeserver.yaml", "--config-path=/data/workers/synapse-generic-worker-1.yaml"]
+  healthcheck:
+    test: ["CMD-SHELL", "curl -fSs http://localhost:8081/health || exit 1"]
+    start_period: "5s"
+    interval: "15s"
+    timeout: "5s"
+  volumes:
+    - ${VOLUME_PATH}/data:/data:rw # Replace VOLUME_PATH with the path to your Synapse volume
+  environment:
+    SYNAPSE_WORKER: synapse.app.generic_worker
+  # Expose port if required so your reverse proxy can send requests to this worker
+  # Port configuration will depend on how the http listener is defined in the worker configuration file
+  ports:
+    - 8081:8081
+  depends_on:
+    - synapse
+```
+
+### Federation Sender Example
+
+Please note: the federation sender does not receive REST API calls, so no exposed ports are required.
+
+```yaml
+synapse-federation-sender-1:
+  image: matrixdotorg/synapse:latest
+  container_name: synapse-federation-sender-1
+  restart: unless-stopped
+  entrypoint: ["/start.py", "run", "--config-path=/data/homeserver.yaml", "--config-path=/data/workers/synapse-federation-sender-1.yaml"]
+  healthcheck:
+    disable: true
+  volumes:
+    - ${VOLUME_PATH}/data:/data:rw # Replace VOLUME_PATH with the path to your Synapse volume
+  environment:
+    SYNAPSE_WORKER: synapse.app.federation_sender
+  depends_on:
+    - synapse
+```
+
+## `homeserver.yaml` Configuration
+
+### Enable Redis
+
+Locate the `redis` section of your `homeserver.yaml` and enable and configure it:
+
+```yaml
+redis:
+  enabled: true
+  host: redis
+  port: 6379
+  # password: <secret_password>
+```
+
+This assumes that your Redis service is called `redis` in your Docker Compose file.
+
+### Add a Replication Listener
+
+Locate the `listeners` section of your `homeserver.yaml` and add the following replication listener:
+
+```yaml
+listeners:
+  # Other listeners
+
+  - port: 9093
+    type: http
+    resources:
+      - names: [replication]
+```
+
+This listener is used by the workers for replication and is referred to in worker config files using the following settings:
+
+```yaml
+worker_replication_host: synapse
+worker_replication_http_port: 9093
+```
+
+### Add Workers to `instance_map`
+
+Locate the `instance_map` section of your `homeserver.yaml` and populate it with your workers:
+
+```yaml
+instance_map:
+  synapse-generic-worker-1:        # The worker_name setting in your worker configuration file
+    host: synapse-generic-worker-1 # The name of the worker service in your Docker Compose file
+    port: 8034                     # The port assigned to the replication listener in your worker config file
+  synapse-federation-sender-1:
+    host: synapse-federation-sender-1
+    port: 8034
+```
+
+### Configure Federation Senders
+
+This section is applicable if you are using federation senders (`synapse.app.federation_sender`). Locate the `send_federation` and `federation_sender_instances` settings in your `homeserver.yaml` and configure them:
+
+```yaml
+# This will disable federation sending on the main Synapse instance
+send_federation: false
+
+federation_sender_instances:
+  - synapse-federation-sender-1 # The worker_name setting in your federation sender worker configuration file
+```
+
+## Other Worker Types
+
+Using the concepts shown here, it is possible to create other worker types in Docker Compose. See the [Workers](https://matrix-org.github.io/synapse/latest/workers.html#available-worker-applications) documentation for a list of available workers.
\ No newline at end of file
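The README's pattern extends directly to other worker applications. As a sketch only (the service name, config path, and worker app here are illustrative and not part of this change), a media repository worker would follow the same shape, swapping only the worker config file and `SYNAPSE_WORKER` value:

```yaml
synapse-media-repository-1:
  image: matrixdotorg/synapse:latest
  container_name: synapse-media-repository-1
  restart: unless-stopped
  entrypoint: ["/start.py", "run", "--config-path=/data/homeserver.yaml", "--config-path=/data/workers/synapse-media-repository-1.yaml"]
  volumes:
    - ${VOLUME_PATH}/data:/data:rw # Replace VOLUME_PATH with the path to your Synapse volume
  environment:
    SYNAPSE_WORKER: synapse.app.media_repository
  depends_on:
    - synapse
```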
diff --git a/contrib/docker_compose_workers/docker-compose.yaml b/contrib/docker_compose_workers/docker-compose.yaml
new file mode 100644
index 0000000000..eaf02c2af9
--- /dev/null
+++ b/contrib/docker_compose_workers/docker-compose.yaml
@@ -0,0 +1,77 @@
+networks:
+  backend:
+
+services:
+  postgres:
+    image: postgres:latest
+    restart: unless-stopped
+    volumes:
+      - ${VOLUME_PATH}/var/lib/postgresql/data:/var/lib/postgresql/data:rw
+    networks:
+      - backend
+    environment:
+      POSTGRES_DB: synapse
+      POSTGRES_USER: synapse_user
+      POSTGRES_PASSWORD: postgres
+      POSTGRES_INITDB_ARGS: --encoding=UTF8 --locale=C
+
+  redis:
+    image: redis:latest
+    restart: unless-stopped
+    networks:
+      - backend
+
+  synapse:
+    image: matrixdotorg/synapse:latest
+    container_name: synapse
+    restart: unless-stopped
+    volumes:
+      - ${VOLUME_PATH}/data:/data:rw
+    ports:
+      - 8008:8008
+    networks:
+      - backend
+    environment:
+      SYNAPSE_CONFIG_DIR: /data
+      SYNAPSE_CONFIG_PATH: /data/homeserver.yaml
+    depends_on:
+      - postgres
+
+  synapse-generic-worker-1:
+    image: matrixdotorg/synapse:latest
+    container_name: synapse-generic-worker-1
+    restart: unless-stopped
+    entrypoint: ["/start.py", "run", "--config-path=/data/homeserver.yaml", "--config-path=/data/workers/synapse-generic-worker-1.yaml"]
+    healthcheck:
+      test: ["CMD-SHELL", "curl -fSs http://localhost:8081/health || exit 1"]
+      start_period: "5s"
+      interval: "15s"
+      timeout: "5s"
+    networks:
+      - backend
+    volumes:
+      - ${VOLUME_PATH}/data:/data:rw # Replace VOLUME_PATH with the path to your Synapse volume
+    environment:
+      SYNAPSE_WORKER: synapse.app.generic_worker
+    # Expose port if required so your reverse proxy can send requests to this worker
+    # Port configuration will depend on how the http listener is defined in the worker configuration file
+    ports:
+      - 8081:8081
+    depends_on:
+      - synapse
+
+  synapse-federation-sender-1:
+    image: matrixdotorg/synapse:latest
+    container_name: synapse-federation-sender-1
+    restart: unless-stopped
+    entrypoint: ["/start.py", "run", "--config-path=/data/homeserver.yaml", "--config-path=/data/workers/synapse-federation-sender-1.yaml"]
+    healthcheck:
+      disable: true
+    networks:
+      - backend
+    volumes:
+      - ${VOLUME_PATH}/data:/data:rw # Replace VOLUME_PATH with the path to your Synapse volume
+    environment:
+      SYNAPSE_WORKER: synapse.app.federation_sender
+    depends_on:
+      - synapse
diff --git a/contrib/docker_compose_workers/workers/synapse-federation-sender-1.yaml b/contrib/docker_compose_workers/workers/synapse-federation-sender-1.yaml
new file mode 100644
index 0000000000..5ba42a92d2
--- /dev/null
+++ b/contrib/docker_compose_workers/workers/synapse-federation-sender-1.yaml
@@ -0,0 +1,14 @@
+worker_app: synapse.app.federation_sender
+worker_name: synapse-federation-sender-1
+
+# The replication listener on the main synapse process.
+worker_replication_host: synapse
+worker_replication_http_port: 9093
+
+worker_listeners:
+  - type: http
+    port: 8034
+    resources:
+      - names: [replication]
+
+worker_log_config: /data/federation_sender.log.config
diff --git a/contrib/docker_compose_workers/workers/synapse-generic-worker-1.yaml b/contrib/docker_compose_workers/workers/synapse-generic-worker-1.yaml
new file mode 100644
index 0000000000..694584105a
--- /dev/null
+++ b/contrib/docker_compose_workers/workers/synapse-generic-worker-1.yaml
@@ -0,0 +1,19 @@
+worker_app: synapse.app.generic_worker
+worker_name: synapse-generic-worker-1
+
+# The replication listener on the main synapse process.
+worker_replication_host: synapse
+worker_replication_http_port: 9093
+
+worker_listeners:
+  - type: http
+    port: 8034
+    resources:
+      - names: [replication]
+  - type: http
+    port: 8081
+    x_forwarded: true
+    resources:
+      - names: [client, federation]
+
+worker_log_config: /data/worker.log.config
diff --git a/contrib/graph/graph.py b/contrib/graph/graph.py
index fdbac087bd..3c4f47dbd2 100644
--- a/contrib/graph/graph.py
+++ b/contrib/graph/graph.py
@@ -1,11 +1,3 @@
-import argparse
-import cgi
-import datetime
-import json
-
-import pydot
-import urllib2
-
 # Copyright 2014-2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
@@ -20,12 +12,25 @@ import urllib2
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+import argparse
+import cgi
+import datetime
+import json
+import urllib.request
+from typing import List
+
+import pydot
+
 
-def make_name(pdu_id, origin):
-    return "%s@%s" % (pdu_id, origin)
+def make_name(pdu_id: str, origin: str) -> str:
+    return f"{pdu_id}@{origin}"
 
 
-def make_graph(pdus, room, filename_prefix):
+def make_graph(pdus: List[dict], filename_prefix: str) -> None:
+    """
+    Generate a dot and SVG file for a graph of events in the room based on the
+    topological ordering by querying a homeserver.
+    """
     pdu_map = {}
     node_map = {}
 
@@ -111,10 +116,10 @@ def make_graph(pdus, room, filename_prefix):
     graph.write_svg("%s.svg" % filename_prefix, prog="dot")
 
 
-def get_pdus(host, room):
+def get_pdus(host: str, room: str) -> List[dict]:
     transaction = json.loads(
-        urllib2.urlopen(
-            "http://%s/_matrix/federation/v1/context/%s/" % (host, room)
+        urllib.request.urlopen(
+            f"http://{host}/_matrix/federation/v1/context/{room}/"
         ).read()
     )
 
@@ -141,4 +146,4 @@ if __name__ == "__main__":
 
     pdus = get_pdus(host, room)
 
-    make_graph(pdus, room, prefix)
+    make_graph(pdus, prefix)
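The graph.py changes above are a straight Python 2 to 3 port: `urllib2.urlopen` becomes `urllib.request.urlopen` and `%`-formatting becomes f-strings. A minimal sketch of the equivalences (identifiers mirror the script; no network request is actually made here):

```python
import urllib.request  # replaces the Python 2 urllib2 module

def make_name(pdu_id: str, origin: str) -> str:
    # f-string form, equivalent to the old "%s@%s" % (pdu_id, origin)
    return f"{pdu_id}@{origin}"

# Building the federation context URL the same way get_pdus() now does
host, room = "localhost:8008", "!abc:example.com"
url = f"http://{host}/_matrix/federation/v1/context/{room}/"

print(make_name("$event1", "example.com"))
print(url)
```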
diff --git a/contrib/graph/graph2.py b/contrib/graph/graph2.py
index 0980231e4a..b46094ce0a 100644
--- a/contrib/graph/graph2.py
+++ b/contrib/graph/graph2.py
@@ -14,22 +14,31 @@
 
 
 import argparse
-import cgi
 import datetime
+import html
 import json
 import sqlite3
 
 import pydot
 
-from synapse.events import FrozenEvent
+from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
+from synapse.events import make_event_from_dict
 from synapse.util.frozenutils import unfreeze
 
 
-def make_graph(db_name, room_id, file_prefix, limit):
+def make_graph(db_name: str, room_id: str, file_prefix: str, limit: int) -> None:
+    """
+    Generate a dot and SVG file for a graph of events in the room based on the
+    topological ordering by reading from a Synapse SQLite database.
+    """
     conn = sqlite3.connect(db_name)
 
+    sql = "SELECT room_version FROM rooms WHERE room_id = ?"
+    c = conn.execute(sql, (room_id,))
+    room_version = KNOWN_ROOM_VERSIONS[c.fetchone()[0]]
+
     sql = (
-        "SELECT json FROM event_json as j "
+        "SELECT json, internal_metadata FROM event_json as j "
         "INNER JOIN events as e ON e.event_id = j.event_id "
         "WHERE j.room_id = ?"
     )
@@ -43,7 +52,10 @@ def make_graph(db_name, room_id, file_prefix, limit):
 
     c = conn.execute(sql, args)
 
-    events = [FrozenEvent(json.loads(e[0])) for e in c.fetchall()]
+    events = [
+        make_event_from_dict(json.loads(e[0]), room_version, json.loads(e[1]))
+        for e in c.fetchall()
+    ]
 
     events.sort(key=lambda e: e.depth)
 
@@ -84,7 +96,7 @@ def make_graph(db_name, room_id, file_prefix, limit):
             "name": event.event_id,
             "type": event.type,
             "state_key": event.get("state_key", None),
-            "content": cgi.escape(content, quote=True),
+            "content": html.escape(content, quote=True),
             "time": t,
             "depth": event.depth,
             "state_group": state_group,
@@ -96,11 +108,11 @@ def make_graph(db_name, room_id, file_prefix, limit):
         graph.add_node(node)
 
     for event in events:
-        for prev_id, _ in event.prev_events:
+        for prev_id in event.prev_event_ids():
             try:
                 end_node = node_map[prev_id]
             except Exception:
-                end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,))
+                end_node = pydot.Node(name=prev_id, label=f"<<b>{prev_id}</b>>")
 
                 node_map[prev_id] = end_node
                 graph.add_node(end_node)
@@ -112,7 +124,7 @@ def make_graph(db_name, room_id, file_prefix, limit):
         if len(event_ids) <= 1:
             continue
 
-        cluster = pydot.Cluster(str(group), label="<State Group: %s>" % (str(group),))
+        cluster = pydot.Cluster(str(group), label=f"<State Group: {str(group)}>")
 
         for event_id in event_ids:
             cluster.add_node(node_map[event_id])
@@ -126,7 +138,7 @@ def make_graph(db_name, room_id, file_prefix, limit):
 if __name__ == "__main__":
     parser = argparse.ArgumentParser(
         description="Generate a PDU graph for a given room by talking "
-        "to the given homeserver to get the list of PDUs. \n"
+        "to the given Synapse SQLite database to get the list of PDUs. \n"
         "Requires pydot."
     )
     parser.add_argument(
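Part of the graph2.py change swaps the removed `cgi.escape` for `html.escape`, its Python 3 replacement. A quick standalone sketch of the behaviour the script relies on (the sample string is made up):

```python
import html

content = '<b>"hello" & goodbye</b>'

# With quote=True, html.escape also escapes double quotes,
# matching what cgi.escape(..., quote=True) used to do.
escaped = html.escape(content, quote=True)
print(escaped)
```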
diff --git a/contrib/graph/graph3.py b/contrib/graph/graph3.py
index dd0c19368b..a28a1594c7 100644
--- a/contrib/graph/graph3.py
+++ b/contrib/graph/graph3.py
@@ -1,13 +1,3 @@
-import argparse
-import cgi
-import datetime
-
-import pydot
-import simplejson as json
-
-from synapse.events import FrozenEvent
-from synapse.util.frozenutils import unfreeze
-
 # Copyright 2016 OpenMarket Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
@@ -22,15 +12,35 @@ from synapse.util.frozenutils import unfreeze
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+import argparse
+import datetime
+import html
+import json
+
+import pydot
 
-def make_graph(file_name, room_id, file_prefix, limit):
+from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
+from synapse.events import make_event_from_dict
+from synapse.util.frozenutils import unfreeze
+
+
+def make_graph(file_name: str, file_prefix: str, limit: int) -> None:
+    """
+    Generate a dot and SVG file for a graph of events in the room based on the
+    topological ordering by reading line-delimited JSON from a file.
+    """
     print("Reading lines")
     with open(file_name) as f:
         lines = f.readlines()
 
     print("Read lines")
 
-    events = [FrozenEvent(json.loads(line)) for line in lines]
+    # Figure out the room version, assume the first line is the create event.
+    room_version = KNOWN_ROOM_VERSIONS[
+        json.loads(lines[0]).get("content", {}).get("room_version")
+    ]
+
+    events = [make_event_from_dict(json.loads(line), room_version) for line in lines]
 
     print("Loaded events.")
 
@@ -66,8 +76,8 @@ def make_graph(file_name, room_id, file_prefix, limit):
             content.append(
                 "<b>%s</b>: %s,"
                 % (
-                    cgi.escape(key, quote=True).encode("ascii", "xmlcharrefreplace"),
-                    cgi.escape(value, quote=True).encode("ascii", "xmlcharrefreplace"),
+                    html.escape(key, quote=True).encode("ascii", "xmlcharrefreplace"),
+                    html.escape(value, quote=True).encode("ascii", "xmlcharrefreplace"),
                 )
             )
 
@@ -101,11 +111,11 @@ def make_graph(file_name, room_id, file_prefix, limit):
     print("Created Nodes")
 
     for event in events:
-        for prev_id, _ in event.prev_events:
+        for prev_id in event.prev_event_ids():
             try:
                 end_node = node_map[prev_id]
             except Exception:
-                end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,))
+                end_node = pydot.Node(name=prev_id, label=f"<<b>{prev_id}</b>>")
 
                 node_map[prev_id] = end_node
                 graph.add_node(end_node)
@@ -139,8 +149,7 @@ if __name__ == "__main__":
     )
     parser.add_argument("-l", "--limit", help="Only retrieve the last N events.")
     parser.add_argument("event_file")
-    parser.add_argument("room")
 
     args = parser.parse_args()
 
-    make_graph(args.event_file, args.room, args.prefix, args.limit)
+    make_graph(args.event_file, args.prefix, args.limit)
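graph3.py now has to pick a room version before building events, and the patch assumes the first line of the event file is the `m.room.create` event. A standalone sketch of that lookup (the event data is invented for illustration, and the `KNOWN_ROOM_VERSIONS` dict lookup from Synapse is elided):

```python
import json

# Hypothetical line-delimited event dump; the create event comes first.
lines = [
    '{"type": "m.room.create", "content": {"room_version": "9"}}',
    '{"type": "m.room.member", "content": {"membership": "join"}}',
]

# Same lookup the patch performs before indexing KNOWN_ROOM_VERSIONS[...]
room_version_id = json.loads(lines[0]).get("content", {}).get("room_version")
print(room_version_id)
```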