# Scaling synapse via workers

For small instances it is recommended to run Synapse in monolith mode (the
default). For larger instances where performance is a concern it can be helpful
to split out functionality into multiple separate Python processes. These
processes are called 'workers', and are (eventually) intended to scale
horizontally and independently of one another.

Synapse's worker support is under active development and subject to change as
we attempt to rapidly scale ever larger Synapse instances. However we are
documenting it here to help admins needing a highly scalable Synapse instance
similar to the one running `matrix.org`.

All processes continue to share the same database instance, and as such,
workers only work with PostgreSQL-based Synapse deployments. SQLite should only
be used for demo purposes and any admin considering workers should already be
running PostgreSQL.

## Main process/worker communication

The processes communicate with each other via a Synapse-specific protocol called
'replication' (analogous to MySQL- or Postgres-style database replication) which
feeds streams of newly written data between processes so they can be kept in
sync with the database state.

Additionally, processes may make HTTP requests to each other. Typically this is
used for operations which need to wait for a reply - such as sending an event.

As of Synapse v1.13.0, it is possible to configure Synapse to send replication
via a [Redis pub/sub channel](https://redis.io/topics/pubsub); this is now the
recommended way of configuring replication. It is an alternative to the old
direct TCP connections to the main process: rather than all the workers
connecting to the main process, all the workers and the main process connect to
Redis, which relays replication commands between processes. This can give a
significant CPU saving on the main process and will be a prerequisite for
upcoming performance improvements.

(See the [Architectural diagram](#architectural-diagram) section at the end for
a visualisation of what this looks like.)


## Setting up workers

A Redis server is required to manage the communication between the processes.
(The older direct TCP connections are now deprecated.) The Redis server
should be installed following the normal procedure for your distribution (e.g.
`apt install redis-server` on Debian). It is safe to use an existing Redis
deployment if you have one.

Once installed, check that Redis is running and accessible from the host running
Synapse, for example by executing `echo PING | nc -q1 localhost 6379` and seeing
a response of `+PONG`.

The appropriate dependencies must also be installed for Synapse. If using a
virtualenv, these can be installed with:

```sh
pip install matrix-synapse[redis]
```

Note that these dependencies are included when synapse is installed with `pip
install matrix-synapse[all]`. They are also included in the Debian packages from
`matrix.org` and in the Docker images at
https://hub.docker.com/r/matrixdotorg/synapse/.

To make effective use of the workers, you will need to configure an HTTP
reverse-proxy such as nginx or haproxy, which will direct incoming requests to
the correct worker, or to the main synapse instance. See [reverse_proxy.md](reverse_proxy.md)
for information on setting up a reverse proxy.

To enable workers you should create a configuration file for each worker
process. Each worker configuration file inherits the configuration of the shared
homeserver configuration file.  You can then override configuration specific to
that worker, e.g. the HTTP listener that it provides (if any); logging
configuration; etc.  You should minimise the number of overrides though to
maintain a usable config.

Next you need to add both an HTTP replication listener and a Redis config to the
shared Synapse configuration file (`homeserver.yaml`). For example:

```yaml
# extend the existing `listeners` section. This defines the ports that the
# main process will listen on.
listeners:
  # The HTTP replication port
  - port: 9093
    bind_address: '127.0.0.1'
    type: http
    resources:
     - names: [replication]

redis:
    enabled: true
```

See the sample config for the full documentation of each option.

Under **no circumstances** should the replication listener be exposed to the
public internet; it has no authentication and is unencrypted.

In the config file for each worker, you must specify the type of worker
application (`worker_app`), and you should specify a unique name for the worker
(`worker_name`). The currently available worker applications are listed below.
You must also specify the HTTP replication endpoint that it should talk to on
the main synapse process: `worker_replication_host` should specify the host of
the main synapse and `worker_replication_http_port` should point to the HTTP
replication port. If the worker will handle HTTP requests then the
`worker_listeners` option should be set with an `http` listener, in the same way
as the `listeners` option in the shared config.

For example:

```yaml
worker_app: synapse.app.generic_worker
worker_name: worker1

# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_listeners:
 - type: http
   port: 8083
   resources:
     - names:
       - client
       - federation

worker_log_config: /home/matrix/synapse/config/worker1_log_config.yaml
```

...is a full configuration for a generic worker instance, which will expose a
plain HTTP endpoint on port 8083, serving the various endpoints listed below
(e.g. `/sync`) separately from the main process.

Obviously you should configure your reverse-proxy to route the relevant
endpoints to the worker (`localhost:8083` in the above example).

Finally, you need to start your worker processes. This can be done with either
`synctl` or your distribution's preferred service manager such as `systemd`. We
recommend the use of `systemd` where available: for information on setting up
`systemd` to start synapse workers, see
[systemd-with-workers](systemd-with-workers). To use `synctl`, see
[synctl_workers.md](synctl_workers.md).
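
For illustration, here is a minimal `synctl` sketch; the paths are assumptions
for this example, and [synctl_workers.md](synctl_workers.md) remains the
authoritative reference:

```sh
# Start or stop a single worker using its configuration file
# (the path is illustrative):
synctl -w /home/matrix/synapse/config/worker1.yaml start

# Or manage the main process together with every worker whose config
# lives in a given directory (again, an assumed path):
synctl -a /home/matrix/synapse/config/workers start
```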


## Available worker applications

### `synapse.app.generic_worker`

This worker can handle API requests matching the following regular
expressions:

    # Sync requests
    ^/_matrix/client/(v2_alpha|r0)/sync$
    ^/_matrix/client/(api/v1|v2_alpha|r0)/events$
    ^/_matrix/client/(api/v1|r0)/initialSync$
    ^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$

    # Federation requests
    ^/_matrix/federation/v1/event/
    ^/_matrix/federation/v1/state/
    ^/_matrix/federation/v1/state_ids/
    ^/_matrix/federation/v1/backfill/
    ^/_matrix/federation/v1/get_missing_events/
    ^/_matrix/federation/v1/publicRooms
    ^/_matrix/federation/v1/query/
    ^/_matrix/federation/v1/make_join/
    ^/_matrix/federation/v1/make_leave/
    ^/_matrix/federation/v1/send_join/
    ^/_matrix/federation/v2/send_join/
    ^/_matrix/federation/v1/send_leave/
    ^/_matrix/federation/v2/send_leave/
    ^/_matrix/federation/v1/invite/
    ^/_matrix/federation/v2/invite/
    ^/_matrix/federation/v1/query_auth/
    ^/_matrix/federation/v1/event_auth/
    ^/_matrix/federation/v1/exchange_third_party_invite/
    ^/_matrix/federation/v1/user/devices/
    ^/_matrix/federation/v1/get_groups_publicised$
    ^/_matrix/key/v2/query

    # Inbound federation transaction request
    ^/_matrix/federation/v1/send/

    # Client API requests
    ^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
    ^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
    ^/_matrix/client/(api/v1|r0|unstable)/keys/query$
    ^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
    ^/_matrix/client/versions$
    ^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
    ^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
    ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
    ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/

    # Registration/login requests
    ^/_matrix/client/(api/v1|r0|unstable)/login$
    ^/_matrix/client/(r0|unstable)/register$
    ^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$

    # Event sending requests
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
    ^/_matrix/client/(api/v1|r0|unstable)/join/
    ^/_matrix/client/(api/v1|r0|unstable)/profile/


Additionally, the following REST endpoints can be handled for GET requests:

    ^/_matrix/federation/v1/groups/

Pagination requests can also be handled, but all requests for a given
room must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests
for the room are in flight:

    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$

Note that an HTTP listener with `client` and `federation` resources must be
configured in the `worker_listeners` option in the worker config.


#### Load balancing

It is possible to run multiple instances of this worker app, with incoming
requests being load-balanced between them by the reverse-proxy. However,
different endpoints have different characteristics, so admins may wish to run
multiple groups of workers handling different endpoints so that load balancing
can be done in different ways.

For `/sync` and `/initialSync` requests it will be more efficient if all
requests from a particular user are routed to a single instance. Extracting a
user ID from the access token or `Authorization` header is currently left as an
exercise for the reader. Admins may additionally wish to separate out `/sync`
requests that have a `since` query parameter from those that don't (and
`/initialSync`): requests without a `since` parameter are "initial syncs",
which happen when a user logs in on a new device and can be *very* resource
intensive. Isolating these requests will stop them from interfering with other
users' ongoing syncs.

Federation and client requests can be balanced via simple round robin.

The inbound federation transaction request `^/_matrix/federation/v1/send/`
should be balanced by source IP so that transactions from the same remote server
go to the same process.

Registration/login requests can be handled separately purely to help ensure that
unexpected load doesn't affect new logins and sign ups.

Finally, event sending requests can be balanced by the room ID in the URI (or
the full URI, or even just round robin); the room ID is the path component
after `/rooms/`. If there is a large bridge connected that is sending or may
send lots of events, then a dedicated set of workers can be provisioned to
limit the effects of bursts of events from that bridge on events sent by
normal users.

#### Stream writers

Additionally, there is *experimental* support for moving writing of specific
streams (such as events) off of the main process to a particular worker. (This
is only supported with Redis-based replication.)

The currently supported streams are `events` and `typing`.

To enable this, the worker must have an HTTP replication listener configured,
have a `worker_name` and be listed in the `instance_map` config. For example,
to move event persistence off to a dedicated worker, the shared configuration
would include:

```yaml
instance_map:
    event_persister1:
        host: localhost
        port: 8034

stream_writers:
    events: event_persister1
```
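
For completeness, the matching worker configuration might look something like
the following sketch (the log config path follows the earlier example and is an
assumption):

```yaml
worker_app: synapse.app.generic_worker
worker_name: event_persister1

# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

# A replication listener of its own, so that other processes can reach this
# writer; the port must match the `instance_map` entry above.
worker_listeners:
 - type: http
   port: 8034
   bind_address: '127.0.0.1'
   resources:
     - names: [replication]

worker_log_config: /home/matrix/synapse/config/event_persister1_log_config.yaml
```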


### `synapse.app.pusher`

Handles sending push notifications to sygnal and email. Doesn't handle any
REST endpoints itself, but you should set `start_pushers: False` in the
shared configuration file to stop the main synapse sending push notifications.

Note this worker cannot be load-balanced: only one instance should be active.
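
As an illustration, a minimal configuration sketch for this worker (the worker
name and log path are assumptions; `synapse.app.appservice` below follows the
same shape):

```yaml
worker_app: synapse.app.pusher
worker_name: pusher1

# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

# No `worker_listeners` are needed, since this worker serves no REST endpoints.
worker_log_config: /home/matrix/synapse/config/pusher1_log_config.yaml
```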

### `synapse.app.appservice`

Handles sending output traffic to Application Services. Doesn't handle any
REST endpoints itself, but you should set `notify_appservices: False` in the
shared configuration file to stop the main synapse sending appservice notifications.

Note this worker cannot be load-balanced: only one instance should be active.


### `synapse.app.federation_sender`

Handles sending federation traffic to other servers. Doesn't handle any
REST endpoints itself, but you should set `send_federation: False` in the
shared configuration file to stop the main synapse sending this traffic.

If running multiple federation senders, you must list each
instance in the `federation_sender_instances` option by its `worker_name`.
All instances must be stopped and started when adding or removing instances.
For example:

```yaml
federation_sender_instances:
    - federation_sender1
    - federation_sender2
```
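
Each sender's own configuration file then sets a `worker_name` matching one of
those entries; for example, as a sketch with an assumed log path:

```yaml
worker_app: synapse.app.federation_sender
worker_name: federation_sender1

# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_log_config: /home/matrix/synapse/config/federation_sender1_log_config.yaml
```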

### `synapse.app.media_repository`

Handles the media repository. It can handle all endpoints starting with:

    /_matrix/media/

... and the following regular expressions matching media-specific administration APIs:

    ^/_synapse/admin/v1/purge_media_cache$
    ^/_synapse/admin/v1/room/.*/media.*$
    ^/_synapse/admin/v1/user/.*/media.*$
    ^/_synapse/admin/v1/media/.*$
    ^/_synapse/admin/v1/quarantine_media/.*$

You should also set `enable_media_repo: False` in the shared configuration
file to stop the main synapse running background jobs related to managing the
media repository.

In the `media_repository` worker configuration file, configure the http listener to
expose the `media` resource. For example:

```yaml
worker_listeners:
 - type: http
   port: 8085
   resources:
     - names:
       - media
```

Note that if running multiple media repositories they must be on the same server
and you must configure a single instance to run the background tasks, e.g.:

```yaml
media_instance_running_background_jobs: "media-repository-1"
```

### `synapse.app.user_dir`

Handles searches in the user directory. It can handle REST endpoints matching
the following regular expressions:

    ^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$

When using this worker you must also set `update_user_directory: False` in the
shared configuration file to stop the main synapse running background
jobs related to updating the user directory.
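
For example, a worker configuration sketch exposing the `client` resource so
that the reverse proxy can route the endpoint above to it (the port, worker
name and log path are assumptions):

```yaml
worker_app: synapse.app.user_dir
worker_name: user_dir1

# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_listeners:
 - type: http
   port: 8086
   resources:
     - names: [client]

worker_log_config: /home/matrix/synapse/config/user_dir1_log_config.yaml
```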

### `synapse.app.frontend_proxy`

Proxies some frequently-requested client endpoints to add caching and remove
load from the main synapse. It can handle REST endpoints matching the following
regular expressions:

    ^/_matrix/client/(api/v1|r0|unstable)/keys/upload

If `use_presence` is False in the homeserver config, it can also handle REST
endpoints matching the following regular expressions:

    ^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status

This "stub" presence handler will pass through `GET` requests but make the
`PUT` effectively a no-op.

It will proxy any requests it cannot handle to the main synapse instance. It
must therefore be configured with the location of the main instance, via
the `worker_main_http_uri` setting in the `frontend_proxy` worker configuration
file. For example:

```yaml
worker_main_http_uri: http://127.0.0.1:8008
```
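
Putting it together, a full configuration sketch for this worker might look
like the following (the port, worker name and log path are assumptions):

```yaml
worker_app: synapse.app.frontend_proxy
worker_name: frontend_proxy1

# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

# The main process, to which unhandled requests are proxied.
worker_main_http_uri: http://127.0.0.1:8008

worker_listeners:
 - type: http
   port: 8087
   resources:
     - names: [client]

worker_log_config: /home/matrix/synapse/config/frontend_proxy1_log_config.yaml
```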

### Historical apps

*Note:* Historically there were more apps, but most have since been
amalgamated into the single `synapse.app.generic_worker` app. The remaining
apps are ones that do specific processing unrelated to requests, e.g. the
`pusher` that handles sending out push notifications for new events. The
intention is for all of these to be folded into the `generic_worker` app as
well, with config used to define which processes handle the various kinds of
processing, such as push notifications.


## Architectural diagram

The following shows an example setup using Redis and a reverse proxy:

```
                     Clients & Federation
                              |
                              v
                        +-----------+
                        |           |
                        |  Reverse  |
                        |  Proxy    |
                        |           |
                        +-----------+
                            | | |
                            | | | HTTP requests
        +-------------------+ | +-----------+
        |                 +---+             |
        |                 |                 |
        v                 v                 v
+--------------+  +--------------+  +--------------+  +--------------+
|   Main       |  |   Generic    |  |   Generic    |  |  Event       |
|   Process    |  |   Worker 1   |  |   Worker 2   |  |  Persister   |
+--------------+  +--------------+  +--------------+  +--------------+
      ^    ^          |   ^   |         |   ^   |          ^    ^
      |    |          |   |   |         |   |   |          |    |
      |    |          |   |   |  HTTP   |   |   |          |    |
      |    +----------+<--|---|---------+   |   |          |    |
      |                   |   +-------------|-->+----------+    |
      |                   |                 |                   |
      |                   |                 |                   |
      v                   v                 v                   v
====================================================================
                                                         Redis pub/sub channel
```