-rw-r--r--   MANIFEST.in                                                             1
-rw-r--r--   README.rst                                                             13
-rw-r--r--   changelog.d/3585.bugfix                                                 1
-rw-r--r--   changelog.d/3644.misc                                                   1
-rw-r--r--   changelog.d/3658.bugfix                                                 1
-rw-r--r--   contrib/docker/README.md                                              110
-rw-r--r--   contrib/grafana/synapse.json                                           24
-rw-r--r--   docker/Dockerfile (renamed from Dockerfile)                             2
-rw-r--r--   docker/README.md                                                      124
-rw-r--r--   docker/conf/homeserver.yaml (renamed from contrib/docker/conf/homeserver.yaml)   0
-rw-r--r--   docker/conf/log.config (renamed from contrib/docker/conf/log.config)    0
-rwxr-xr-x   docker/start.py (renamed from contrib/docker/start.py)                  0
-rw-r--r--   synapse/handlers/profile.py                                            89
-rw-r--r--   synapse/storage/events.py                                              11
14 files changed, 223 insertions, 154 deletions
diff --git a/MANIFEST.in b/MANIFEST.in
index 7076b608d4..1ff98d95df 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -35,3 +35,4 @@ recursive-include changelog.d *
 
 prune .github
 prune demo/etc
+prune docker
diff --git a/README.rst b/README.rst
index 5fdfad345f..4c5971d043 100644
--- a/README.rst
+++ b/README.rst
@@ -157,12 +157,19 @@ if you prefer.
 
 In case of problems, please see the _`Troubleshooting` section below.
 
-There is an offical synapse image available at https://hub.docker.com/r/matrixdotorg/synapse/tags/ which can be used with the docker-compose file available at `contrib/docker`. Further information on this including configuration options is available in `contrib/docker/README.md`.
+There is an official synapse image available at
+https://hub.docker.com/r/matrixdotorg/synapse/tags/ which can be used with
+the docker-compose file available at `contrib/docker <contrib/docker>`_. Further information on
+this, including configuration options, is available in the README on
+hub.docker.com.
 
-Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a Dockerfile to automate a synapse server in a single Docker image, at https://hub.docker.com/r/avhost/docker-matrix/tags/
+Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
+Dockerfile to automate a synapse server in a single Docker image, at
+https://hub.docker.com/r/avhost/docker-matrix/tags/
 
 Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
-tested with VirtualBox/AWS/DigitalOcean - see https://github.com/EMnify/matrix-synapse-auto-deploy
+tested with VirtualBox/AWS/DigitalOcean - see 
+https://github.com/EMnify/matrix-synapse-auto-deploy
 for details.
 
 Configuring synapse
diff --git a/changelog.d/3585.bugfix b/changelog.d/3585.bugfix
new file mode 100644
index 0000000000..e8ae1d8cb4
--- /dev/null
+++ b/changelog.d/3585.bugfix
@@ -0,0 +1 @@
+Respond with M_NOT_FOUND when profiles are not found locally or over federation. Fixes #3585
diff --git a/changelog.d/3644.misc b/changelog.d/3644.misc
new file mode 100644
index 0000000000..2347fc8500
--- /dev/null
+++ b/changelog.d/3644.misc
@@ -0,0 +1 @@
+Refactor location of docker build script.
diff --git a/changelog.d/3658.bugfix b/changelog.d/3658.bugfix
new file mode 100644
index 0000000000..556011a150
--- /dev/null
+++ b/changelog.d/3658.bugfix
@@ -0,0 +1 @@
+Fix occasional glitches in the synapse_event_persisted_position metric
diff --git a/contrib/docker/README.md b/contrib/docker/README.md
index 562cdaac2b..05254e5192 100644
--- a/contrib/docker/README.md
+++ b/contrib/docker/README.md
@@ -1,23 +1,5 @@
 # Synapse Docker
 
-The `matrixdotorg/synapse` Docker image will run Synapse as a single process. It does not provide a
-database server or a TURN server, you should run these separately.
-
-If you run a Postgres server, you should simply include it in the same Compose
-project or set the proper environment variables and the image will automatically
-use that server.
-
-## Build
-
-Build the docker image with the `docker-compose build` command.
-
-You may have a local Python wheel cache available, in which case copy the relevant packages in the ``cache/`` directory at the root of the project.
-
-## Run
-
-This image is designed to run either with an automatically generated configuration
-file or with a custom configuration that requires manual edition.
-
 ### Automated configuration
 
 It is recommended that you use Docker Compose to run your containers, including
@@ -54,94 +36,6 @@ Then, customize your configuration and run the server:
 docker-compose up -d
 ```
 
-### Without Compose
-
-If you do not wish to use Compose, you may still run this image using plain
-Docker commands. Note that the following is just a guideline and you may need
-to add parameters to the docker run command to account for the network situation
-with your postgres database.
-
-```
-docker run \
-    -d \
-    --name synapse \
-    -v ${DATA_PATH}:/data \
-    -e SYNAPSE_SERVER_NAME=my.matrix.host \
-    -e SYNAPSE_REPORT_STATS=yes \
-    docker.io/matrixdotorg/synapse:latest
-```
-
-## Volumes
-
-The image expects a single volume, located at ``/data``, that will hold:
-
-* temporary files during uploads;
-* uploaded media and thumbnails;
-* the SQLite database if you do not configure postgres;
-* the appservices configuration.
-
-You are free to use separate volumes depending on storage endpoints at your
-disposal. For instance, ``/data/media`` coud be stored on a large but low
-performance hdd storage while other files could be stored on high performance
-endpoints.
-
-In order to setup an application service, simply create an ``appservices``
-directory in the data volume and write the application service Yaml
-configuration file there. Multiple application services are supported.
-
-## Environment
-
-Unless you specify a custom path for the configuration file, a very generic
-file will be generated, based on the following environment settings.
-These are a good starting point for setting up your own deployment.
-
-Global settings:
-
-* ``UID``, the user id Synapse will run as [default 991]
-* ``GID``, the group id Synapse will run as [default 991]
-* ``SYNAPSE_CONFIG_PATH``, path to a custom config file
-
-If ``SYNAPSE_CONFIG_PATH`` is set, you should generate a configuration file
-then customize it manually. No other environment variable is required.
-
-Otherwise, a dynamic configuration file will be used. The following environment
-variables are available for configuration:
-
-* ``SYNAPSE_SERVER_NAME`` (mandatory), the current server public hostname.
-* ``SYNAPSE_REPORT_STATS``, (mandatory, ``yes`` or ``no``), enable anonymous
-  statistics reporting back to the Matrix project which helps us to get funding.
-* ``SYNAPSE_NO_TLS``, set this variable to disable TLS in Synapse (use this if
-  you run your own TLS-capable reverse proxy).
-* ``SYNAPSE_ENABLE_REGISTRATION``, set this variable to enable registration on
-  the Synapse instance.
-* ``SYNAPSE_ALLOW_GUEST``, set this variable to allow guest joining this server.
-* ``SYNAPSE_EVENT_CACHE_SIZE``, the event cache size [default `10K`].
-* ``SYNAPSE_CACHE_FACTOR``, the cache factor [default `0.5`].
-* ``SYNAPSE_RECAPTCHA_PUBLIC_KEY``, set this variable to the recaptcha public
-  key in order to enable recaptcha upon registration.
-* ``SYNAPSE_RECAPTCHA_PRIVATE_KEY``, set this variable to the recaptcha private
-  key in order to enable recaptcha upon registration.
-* ``SYNAPSE_TURN_URIS``, set this variable to the coma-separated list of TURN
-  uris to enable TURN for this homeserver.
-* ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
-
-Shared secrets, that will be initialized to random values if not set:
-
-* ``SYNAPSE_REGISTRATION_SHARED_SECRET``, secret for registrering users if
-  registration is disable.
-* ``SYNAPSE_MACAROON_SECRET_KEY`` secret for signing access tokens
-  to the server.
-
-Database specific values (will use SQLite if not set):
-
-* `POSTGRES_DB` - The database name for the synapse postgres database. [default: `synapse`]
-* `POSTGRES_HOST` - The host of the postgres database if you wish to use postgresql instead of sqlite3. [default: `db` which is useful when using a container on the same docker network in a compose file where the postgres service is called `db`]
-* `POSTGRES_PASSWORD` - The password for the synapse postgres database. **If this is set then postgres will be used instead of sqlite3.** [default: none] **NOTE**: You are highly encouraged to use postgresql! Please use the compose file to make it easier to deploy.
-* `POSTGRES_USER` - The user for the synapse postgres database. [default: `matrix`]
-
-Mail server specific values (will not send emails if not set):
+### More information
 
-* ``SYNAPSE_SMTP_HOST``, hostname to the mail server.
-* ``SYNAPSE_SMTP_PORT``, TCP port for accessing the mail server [default ``25``].
-* ``SYNAPSE_SMTP_USER``, username for authenticating against the mail server if any.
-* ``SYNAPSE_SMTP_PASSWORD``, password for authenticating against the mail server if any.
+For more information on required environment variables and mounts, see the main docker documentation at [/docker/README.md](../../docker/README.md).
diff --git a/contrib/grafana/synapse.json b/contrib/grafana/synapse.json
index 94a1de58f4..c58612594a 100644
--- a/contrib/grafana/synapse.json
+++ b/contrib/grafana/synapse.json
@@ -54,7 +54,7 @@
   "gnetId": null,
   "graphTooltip": 0,
   "id": null,
-  "iteration": 1533026624326,
+  "iteration": 1533598785368,
   "links": [
     {
       "asDropdown": true,
@@ -4629,7 +4629,7 @@
             "h": 9,
             "w": 12,
             "x": 0,
-            "y": 11
+            "y": 29
           },
           "id": 67,
           "legend": {
@@ -4655,11 +4655,11 @@
           "steppedLine": false,
           "targets": [
             {
-              "expr": " synapse_event_persisted_position{instance=\"$instance\"} - ignoring(index, job, name) group_right(instance) synapse_event_processing_positions{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
+              "expr": " synapse_event_persisted_position{instance=\"$instance\",job=\"synapse\"}  - ignoring(index, job, name) group_right() synapse_event_processing_positions{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
               "format": "time_series",
               "interval": "",
               "intervalFactor": 1,
-              "legendFormat": "{{job}}-{{index}}",
+              "legendFormat": "{{job}}-{{index}} ",
               "refId": "A"
             }
           ],
@@ -4697,7 +4697,11 @@
               "min": null,
               "show": true
             }
-          ]
+          ],
+          "yaxis": {
+            "align": false,
+            "alignLevel": null
+          }
         },
         {
           "aliasColors": {},
@@ -4710,7 +4714,7 @@
             "h": 9,
             "w": 12,
             "x": 12,
-            "y": 11
+            "y": 29
           },
           "id": 71,
           "legend": {
@@ -4778,7 +4782,11 @@
               "min": null,
               "show": true
             }
-          ]
+          ],
+          "yaxis": {
+            "align": false,
+            "alignLevel": null
+          }
         }
       ],
       "title": "Event processing loop positions",
@@ -4957,5 +4965,5 @@
   "timezone": "",
   "title": "Synapse",
   "uid": "000000012",
-  "version": 125
+  "version": 127
 }
\ No newline at end of file
diff --git a/Dockerfile b/docker/Dockerfile
index 0242be5f68..26fb3a6bff 100644
--- a/Dockerfile
+++ b/docker/Dockerfile
@@ -22,7 +22,7 @@ RUN cd /synapse \
         setuptools \
  && mkdir -p /synapse/cache \
  && pip install -f /synapse/cache --upgrade --process-dependency-links . \
- && mv /synapse/contrib/docker/start.py /synapse/contrib/docker/conf / \
+ && mv /synapse/docker/start.py /synapse/docker/conf / \
  && rm -rf \
         setup.cfg \
         setup.py \
diff --git a/docker/README.md b/docker/README.md
new file mode 100644
index 0000000000..038c78f7c0
--- /dev/null
+++ b/docker/README.md
@@ -0,0 +1,124 @@
+# Synapse Docker
+
+This Docker image will run Synapse as a single process. It does not provide a
+database server or a TURN server; you should run these separately.
+
+## Run
+
+We do not currently offer a `latest` image, as this has somewhat undefined semantics.
+We instead release only tagged versions so upgrading between releases is entirely
+within your control.
+
+### Using docker-compose (easier)
+
+This image is designed to run either with an automatically generated configuration
+file or with a custom configuration that requires manual editing.
+
+An easy way to make use of this image is via docker-compose. See the
+[contrib/docker](../contrib/docker)
+section of the synapse project for examples.
+
+### Without Compose (harder)
+
+If you do not wish to use Compose, you may still run this image using plain
+Docker commands. Note that the following is just a guideline and you may need
+to add parameters to the docker run command to account for the network situation
+with your postgres database.
+
+```
+docker run \
+    -d \
+    --name synapse \
+    -v ${DATA_PATH}:/data \
+    -e SYNAPSE_SERVER_NAME=my.matrix.host \
+    -e SYNAPSE_REPORT_STATS=yes \
+    docker.io/matrixdotorg/synapse:<version>
+```
+
+## Volumes
+
+The image expects a single volume, located at ``/data``, that will hold:
+
+* temporary files during uploads;
+* uploaded media and thumbnails;
+* the SQLite database if you do not configure postgres;
+* the appservices configuration.
+
+You are free to use separate volumes depending on storage endpoints at your
+disposal. For instance, ``/data/media`` could be stored on large but
+low-performance HDD storage while other files could be stored on
+high-performance endpoints.
+
+In order to set up an application service, simply create an ``appservices``
+directory in the data volume and write the application service YAML
+configuration file there. Multiple application services are supported.
+
+## Environment
+
+Unless you specify a custom path for the configuration file, a very generic
+file will be generated, based on the following environment settings.
+These are a good starting point for setting up your own deployment.
+
+Global settings:
+
+* ``UID``, the user id Synapse will run as [default 991]
+* ``GID``, the group id Synapse will run as [default 991]
+* ``SYNAPSE_CONFIG_PATH``, path to a custom config file
+
+If ``SYNAPSE_CONFIG_PATH`` is set, you should generate a configuration file
+then customize it manually. No other environment variable is required.
+
+Otherwise, a dynamic configuration file will be used. The following environment
+variables are available for configuration:
+
+* ``SYNAPSE_SERVER_NAME`` (mandatory), the public hostname of the server.
+* ``SYNAPSE_REPORT_STATS`` (mandatory, ``yes`` or ``no``), enable anonymous
+  statistics reporting back to the Matrix project, which helps us to get funding.
+* ``SYNAPSE_NO_TLS``, set this variable to disable TLS in Synapse (use this if
+  you run your own TLS-capable reverse proxy).
+* ``SYNAPSE_ENABLE_REGISTRATION``, set this variable to enable registration on
+  the Synapse instance.
+* ``SYNAPSE_ALLOW_GUEST``, set this variable to allow guests to join this server.
+* ``SYNAPSE_EVENT_CACHE_SIZE``, the event cache size [default `10K`].
+* ``SYNAPSE_CACHE_FACTOR``, the cache factor [default `0.5`].
+* ``SYNAPSE_RECAPTCHA_PUBLIC_KEY``, set this variable to the recaptcha public
+  key in order to enable recaptcha upon registration.
+* ``SYNAPSE_RECAPTCHA_PRIVATE_KEY``, set this variable to the recaptcha private
+  key in order to enable recaptcha upon registration.
+* ``SYNAPSE_TURN_URIS``, set this variable to the comma-separated list of TURN
+  URIs to enable TURN for this homeserver.
+* ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
+
+Shared secrets, which will be initialized to random values if not set:
+
+* ``SYNAPSE_REGISTRATION_SHARED_SECRET``, secret for registering users if
+  registration is disabled.
+* ``SYNAPSE_MACAROON_SECRET_KEY``, secret for signing access tokens
+  to the server.
+
+Database specific values (will use SQLite if not set):
+
+* `POSTGRES_DB` - The database name for the synapse postgres database. [default: `synapse`]
+* `POSTGRES_HOST` - The host of the postgres database if you wish to use postgresql instead of sqlite3. [default: `db` which is useful when using a container on the same docker network in a compose file where the postgres service is called `db`]
+* `POSTGRES_PASSWORD` - The password for the synapse postgres database. **If this is set then postgres will be used instead of sqlite3.** [default: none] **NOTE**: You are highly encouraged to use postgresql! Please use the compose file to make it easier to deploy.
+* `POSTGRES_USER` - The user for the synapse postgres database. [default: `matrix`]
+
+Mail server specific values (will not send emails if not set):
+
+* ``SYNAPSE_SMTP_HOST``, hostname of the mail server.
+* ``SYNAPSE_SMTP_PORT``, TCP port for accessing the mail server [default ``25``].
+* ``SYNAPSE_SMTP_USER``, username for authenticating against the mail server if any.
+* ``SYNAPSE_SMTP_PASSWORD``, password for authenticating against the mail server if any.
+
+## Build
+
+Build the docker image with the `docker build` command from the root of the synapse repository.
+
+```
+docker build -t docker.io/matrixdotorg/synapse . -f docker/Dockerfile
+```
+
+The `-t` option sets the image tag. Official images are tagged `matrixdotorg/synapse:<version>` where `<version>` is the same as the release tag in the synapse git repository.
+
+You may have a local Python wheel cache available, in which case copy the relevant
+packages into the ``cache/`` directory at the root of the project.
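
The run command and the environment settings documented in the new README can be combined when pointing the image at an external Postgres database. The following is a hedged sketch rather than a supported recipe: the `synapse-net` network name, the host data path, and the Postgres credentials are illustrative assumptions and must match however your Postgres container was actually created.

```
# Illustrative sketch only: assumes a Postgres container is reachable as "db"
# on the user-defined "synapse-net" Docker network and was created with
# matching credentials. Setting POSTGRES_PASSWORD switches Synapse from
# SQLite to Postgres, as described in the environment section above.
docker run \
    -d \
    --name synapse \
    --network synapse-net \
    -v /srv/synapse-data:/data \
    -e SYNAPSE_SERVER_NAME=my.matrix.host \
    -e SYNAPSE_REPORT_STATS=yes \
    -e POSTGRES_HOST=db \
    -e POSTGRES_DB=synapse \
    -e POSTGRES_USER=matrix \
    -e POSTGRES_PASSWORD=changeme \
    docker.io/matrixdotorg/synapse:<version>
```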
diff --git a/contrib/docker/conf/homeserver.yaml b/docker/conf/homeserver.yaml
index 6bc25bb45f..6bc25bb45f 100644
--- a/contrib/docker/conf/homeserver.yaml
+++ b/docker/conf/homeserver.yaml
diff --git a/contrib/docker/conf/log.config b/docker/conf/log.config
index 1851995802..1851995802 100644
--- a/contrib/docker/conf/log.config
+++ b/docker/conf/log.config
diff --git a/contrib/docker/start.py b/docker/start.py
index 90e8b9c51a..90e8b9c51a 100755
--- a/contrib/docker/start.py
+++ b/docker/start.py
diff --git a/synapse/handlers/profile.py b/synapse/handlers/profile.py
index cb5c6d587e..9af2e8f869 100644
--- a/synapse/handlers/profile.py
+++ b/synapse/handlers/profile.py
@@ -17,7 +17,13 @@ import logging
 
 from twisted.internet import defer
 
-from synapse.api.errors import AuthError, CodeMessageException, SynapseError
+from synapse.api.errors import (
+    AuthError,
+    CodeMessageException,
+    Codes,
+    StoreError,
+    SynapseError,
+)
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.types import UserID, get_domain_from_id
 
@@ -49,12 +55,17 @@ class ProfileHandler(BaseHandler):
     def get_profile(self, user_id):
         target_user = UserID.from_string(user_id)
         if self.hs.is_mine(target_user):
-            displayname = yield self.store.get_profile_displayname(
-                target_user.localpart
-            )
-            avatar_url = yield self.store.get_profile_avatar_url(
-                target_user.localpart
-            )
+            try:
+                displayname = yield self.store.get_profile_displayname(
+                    target_user.localpart
+                )
+                avatar_url = yield self.store.get_profile_avatar_url(
+                    target_user.localpart
+                )
+            except StoreError as e:
+                if e.code == 404:
+                    raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
+                raise
 
             defer.returnValue({
                 "displayname": displayname,
@@ -74,7 +85,6 @@ class ProfileHandler(BaseHandler):
             except CodeMessageException as e:
                 if e.code != 404:
                     logger.exception("Failed to get displayname")
-
                 raise
 
     @defer.inlineCallbacks
@@ -85,12 +95,17 @@ class ProfileHandler(BaseHandler):
         """
         target_user = UserID.from_string(user_id)
         if self.hs.is_mine(target_user):
-            displayname = yield self.store.get_profile_displayname(
-                target_user.localpart
-            )
-            avatar_url = yield self.store.get_profile_avatar_url(
-                target_user.localpart
-            )
+            try:
+                displayname = yield self.store.get_profile_displayname(
+                    target_user.localpart
+                )
+                avatar_url = yield self.store.get_profile_avatar_url(
+                    target_user.localpart
+                )
+            except StoreError as e:
+                if e.code == 404:
+                    raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
+                raise
 
             defer.returnValue({
                 "displayname": displayname,
@@ -103,9 +118,14 @@ class ProfileHandler(BaseHandler):
     @defer.inlineCallbacks
     def get_displayname(self, target_user):
         if self.hs.is_mine(target_user):
-            displayname = yield self.store.get_profile_displayname(
-                target_user.localpart
-            )
+            try:
+                displayname = yield self.store.get_profile_displayname(
+                    target_user.localpart
+                )
+            except StoreError as e:
+                if e.code == 404:
+                    raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
+                raise
 
             defer.returnValue(displayname)
         else:
@@ -122,7 +142,6 @@ class ProfileHandler(BaseHandler):
             except CodeMessageException as e:
                 if e.code != 404:
                     logger.exception("Failed to get displayname")
-
                 raise
             except Exception:
                 logger.exception("Failed to get displayname")
@@ -157,10 +176,14 @@ class ProfileHandler(BaseHandler):
     @defer.inlineCallbacks
     def get_avatar_url(self, target_user):
         if self.hs.is_mine(target_user):
-            avatar_url = yield self.store.get_profile_avatar_url(
-                target_user.localpart
-            )
-
+            try:
+                avatar_url = yield self.store.get_profile_avatar_url(
+                    target_user.localpart
+                )
+            except StoreError as e:
+                if e.code == 404:
+                    raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
+                raise
             defer.returnValue(avatar_url)
         else:
             try:
@@ -213,16 +236,20 @@ class ProfileHandler(BaseHandler):
         just_field = args.get("field", None)
 
         response = {}
+        try:
+            if just_field is None or just_field == "displayname":
+                response["displayname"] = yield self.store.get_profile_displayname(
+                    user.localpart
+                )
 
-        if just_field is None or just_field == "displayname":
-            response["displayname"] = yield self.store.get_profile_displayname(
-                user.localpart
-            )
-
-        if just_field is None or just_field == "avatar_url":
-            response["avatar_url"] = yield self.store.get_profile_avatar_url(
-                user.localpart
-            )
+            if just_field is None or just_field == "avatar_url":
+                response["avatar_url"] = yield self.store.get_profile_avatar_url(
+                    user.localpart
+                )
+        except StoreError as e:
+            if e.code == 404:
+                raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
+            raise
 
         defer.returnValue(response)
 
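The profile.py changes above all apply the same pattern: a 404 ``StoreError`` from the storage layer is translated into a ``SynapseError`` carrying ``M_NOT_FOUND`` (the behaviour described in changelog 3585), while any other error is re-raised unchanged. The following is a minimal, self-contained Python sketch of that pattern; the classes and the store function are simplified stand-ins, not the real ``synapse.api.errors`` or storage implementations.

```
class StoreError(Exception):
    """Simplified stand-in for synapse.api.errors.StoreError."""
    def __init__(self, code, msg):
        super().__init__(msg)
        self.code = code


class SynapseError(Exception):
    """Simplified stand-in for synapse.api.errors.SynapseError."""
    def __init__(self, code, msg, errcode):
        super().__init__(msg)
        self.code = code
        self.errcode = errcode


class Codes:
    NOT_FOUND = "M_NOT_FOUND"


def get_profile_displayname(localpart, profiles):
    # Stand-in for the store method: raise a 404 StoreError for a missing row.
    if localpart not in profiles:
        raise StoreError(404, "No row found")
    return profiles[localpart]


def get_displayname(localpart, profiles):
    # The pattern from the diff: translate a missing-row StoreError into an
    # API-level M_NOT_FOUND error, and re-raise anything else unchanged.
    try:
        return get_profile_displayname(localpart, profiles)
    except StoreError as e:
        if e.code == 404:
            raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
        raise


profiles = {"alice": "Alice"}
print(get_displayname("alice", profiles))     # Alice
try:
    get_displayname("bob", profiles)
except SynapseError as e:
    print(e.code, e.errcode)                  # 404 M_NOT_FOUND
```

Running the sketch prints the display name for a known user and a 404/M_NOT_FOUND error for an unknown one, which is what a client now sees instead of an unhandled storage error.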
diff --git a/synapse/storage/events.py b/synapse/storage/events.py
index e8e5a0fe44..ce32e8fefd 100644
--- a/synapse/storage/events.py
+++ b/synapse/storage/events.py
@@ -485,9 +485,14 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
                     new_forward_extremeties=new_forward_extremeties,
                 )
                 persist_event_counter.inc(len(chunk))
-                synapse.metrics.event_persisted_position.set(
-                    chunk[-1][0].internal_metadata.stream_ordering,
-                )
+
+                if not backfilled:
+                    # backfilled events have negative stream orderings, so we don't
+                    # want to set the event_persisted_position to that.
+                    synapse.metrics.event_persisted_position.set(
+                        chunk[-1][0].internal_metadata.stream_ordering,
+                    )
+
                 for event, context in chunk:
                     if context.app_service:
                         origin_type = "local"
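The ``if not backfilled`` guard added above matters because backfilled events are assigned negative stream orderings; without it, the ``synapse_event_persisted_position`` gauge could jump backwards whenever a backfill chunk was persisted, which is the occasional glitch noted in changelog 3658. A hedged, self-contained Python sketch of the effect (the ``Gauge`` class here is a stand-in, not the real Prometheus client or ``synapse.metrics``):

```
class Gauge:
    """Stand-in for a Prometheus gauge: just remembers the last value set."""
    def __init__(self):
        self.value = None

    def set(self, value):
        self.value = value


event_persisted_position = Gauge()


def persist_chunk(stream_orderings, backfilled):
    # Only advance the gauge for forward-persisted events; backfilled chunks
    # carry negative stream orderings and would drag the metric backwards.
    if not backfilled:
        event_persisted_position.set(stream_orderings[-1])


persist_chunk([101, 102, 103], backfilled=False)
print(event_persisted_position.value)   # 103
persist_chunk([-5, -4], backfilled=True)
print(event_persisted_position.value)   # still 103, not -4
```

With the guard in place the gauge only tracks the forward persisted position, so the Grafana panel adjusted earlier in this diff no longer shows spurious negative spikes.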