-rw-r--r--  CHANGES.md | 15
-rw-r--r--  INSTALL.md | 4
-rw-r--r--  changelog.d/3484.misc | 1
-rw-r--r--  changelog.d/5039.bugfix | 1
-rw-r--r--  changelog.d/5146.bugfix | 1
-rw-r--r--  changelog.d/5174.bugfix | 1
-rw-r--r--  changelog.d/5177.bugfix | 1
-rw-r--r--  changelog.d/5181.feature | 1
-rw-r--r--  changelog.d/5187.bugfix | 1
-rw-r--r--  changelog.d/5190.feature | 1
-rw-r--r--  changelog.d/5196.feature | 1
-rw-r--r--  changelog.d/5197.misc | 1
-rw-r--r--  changelog.d/5198.bugfix | 1
-rw-r--r--  changelog.d/5198.feature | 1
-rw-r--r--  changelog.d/5204.feature | 1
-rw-r--r--  changelog.d/5209.feature | 1
-rw-r--r--  changelog.d/5210.feature | 1
-rw-r--r--  changelog.d/5211.feature | 1
-rw-r--r--  changelog.d/5212.feature | 1
-rw-r--r--  changelog.d/5217.feature | 1
-rw-r--r--  changelog.d/5218.bugfix | 1
-rw-r--r--  changelog.d/5219.bugfix | 1
-rw-r--r--  debian/changelog | 7
-rw-r--r--  debian/test/.gitignore | 2
-rw-r--r--  debian/test/provision.sh | 23
-rw-r--r--  debian/test/stretch/Vagrantfile | 13
-rw-r--r--  debian/test/xenial/Vagrantfile | 10
-rw-r--r--  docs/postgres.rst | 45
-rw-r--r--  docs/sample_config.yaml | 67
-rw-r--r--  synapse/__init__.py | 2
-rw-r--r--  synapse/api/constants.py | 7
-rw-r--r--  synapse/api/errors.py | 16
-rw-r--r--  synapse/api/room_versions.py | 13
-rw-r--r--  synapse/api/urls.py | 3
-rw-r--r--  synapse/config/ratelimiting.py | 115
-rw-r--r--  synapse/config/registration.py | 8
-rw-r--r--  synapse/config/server.py | 11
-rw-r--r--  synapse/events/__init__.py | 31
-rw-r--r--  synapse/events/builder.py | 6
-rw-r--r--  synapse/events/utils.py | 6
-rw-r--r--  synapse/federation/federation_server.py | 14
-rw-r--r--  synapse/federation/transport/server.py | 6
-rw-r--r--  synapse/handlers/_base.py | 4
-rw-r--r--  synapse/handlers/federation.py | 7
-rw-r--r--  synapse/handlers/message.py | 16
-rw-r--r--  synapse/handlers/register.py | 11
-rw-r--r--  synapse/handlers/room_member.py | 11
-rw-r--r--  synapse/handlers/sync.py | 44
-rw-r--r--  synapse/python_dependencies.py | 42
-rw-r--r--  synapse/rest/client/v1/base.py | 8
-rw-r--r--  synapse/rest/client/v2_alpha/_base.py | 9
-rw-r--r--  synapse/rest/client/v2_alpha/auth.py | 18
-rw-r--r--  synapse/rest/client/v2_alpha/devices.py | 6
-rw-r--r--  synapse/rest/client/v2_alpha/register.py | 85
-rw-r--r--  synapse/rest/client/v2_alpha/room_upgrade_rest_servlet.py | 1
-rw-r--r--  synapse/rest/client/v2_alpha/sendtodevice.py | 1
-rw-r--r--  synapse/rest/media/v1/media_repository.py | 9
-rw-r--r--  synapse/rest/media/v1/thumbnailer.py | 35
-rw-r--r--  synapse/storage/_base.py | 58
-rw-r--r--  synapse/storage/events.py | 12
-rw-r--r--  synapse/storage/registration.py | 16
-rw-r--r--  synapse/storage/relations.py | 71
-rw-r--r--  synapse/storage/stream.py | 21
-rw-r--r--  synapse/util/ratelimitutils.py | 47
-rw-r--r--  tests/handlers/test_register.py | 7
-rw-r--r--  tests/rest/client/v1/test_rooms.py | 70
-rw-r--r--  tests/rest/client/v2_alpha/test_auth.py | 9
-rw-r--r--  tests/rest/client/v2_alpha/test_register.py | 55
-rw-r--r--  tests/rest/client/v2_alpha/test_relations.py | 86
-rw-r--r--  tests/test_terms_auth.py | 2
-rw-r--r--  tests/utils.py | 20
-rw-r--r--  tox.ini | 2
72 files changed, 929 insertions, 298 deletions
diff --git a/CHANGES.md b/CHANGES.md
index 3cacca5a6b..1e9c3cf953 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -1,3 +1,9 @@
+Synapse 0.99.4 (2019-05-15)
+===========================
+
+No significant changes.
+
+
 Synapse 0.99.4rc1 (2019-05-13)
 ==============================
 
@@ -17,8 +23,8 @@ Features
   instead of the executable name, `python`.
   Contributed by Christoph Müller. ([\#5023](https://github.com/matrix-org/synapse/issues/5023))
 - Add time-based account expiration. ([\#5027](https://github.com/matrix-org/synapse/issues/5027), [\#5047](https://github.com/matrix-org/synapse/issues/5047), [\#5073](https://github.com/matrix-org/synapse/issues/5073), [\#5116](https://github.com/matrix-org/synapse/issues/5116))
-- Add support for handling /verions, /voip and /push_rules client endpoints to client_reader worker. ([\#5063](https://github.com/matrix-org/synapse/issues/5063), [\#5065](https://github.com/matrix-org/synapse/issues/5065), [\#5070](https://github.com/matrix-org/synapse/issues/5070))
-- Add an configuration option to require authentication on /publicRooms and /profile endpoints. ([\#5083](https://github.com/matrix-org/synapse/issues/5083))
+- Add support for handling `/versions`, `/voip` and `/push_rules` client endpoints to client_reader worker. ([\#5063](https://github.com/matrix-org/synapse/issues/5063), [\#5065](https://github.com/matrix-org/synapse/issues/5065), [\#5070](https://github.com/matrix-org/synapse/issues/5070))
+- Add a configuration option to require authentication on /publicRooms and /profile endpoints. ([\#5083](https://github.com/matrix-org/synapse/issues/5083))
 - Move admin APIs to `/_synapse/admin/v1`. (The old paths are retained for backwards-compatibility, for now). ([\#5119](https://github.com/matrix-org/synapse/issues/5119))
 - Implement an admin API for sending server notices. Many thanks to @krombel who provided a foundation for this work. ([\#5121](https://github.com/matrix-org/synapse/issues/5121), [\#5142](https://github.com/matrix-org/synapse/issues/5142))
 
@@ -39,11 +45,9 @@ Bugfixes
 - Workaround bug in twisted where attempting too many concurrent DNS requests could cause it to hang due to running out of file descriptors. ([\#5037](https://github.com/matrix-org/synapse/issues/5037))
 - Make sure we're not registering the same 3pid twice on registration. ([\#5071](https://github.com/matrix-org/synapse/issues/5071))
 - Don't crash on lack of expiry templates. ([\#5077](https://github.com/matrix-org/synapse/issues/5077))
-- Fix the ratelimting on third party invites. ([\#5104](https://github.com/matrix-org/synapse/issues/5104))
+- Fix the ratelimiting on third party invites. ([\#5104](https://github.com/matrix-org/synapse/issues/5104))
 - Add some missing limitations to room alias creation. ([\#5124](https://github.com/matrix-org/synapse/issues/5124), [\#5128](https://github.com/matrix-org/synapse/issues/5128))
 - Limit the number of EDUs in transactions to 100 as expected by synapse. Thanks to @superboum for this work! ([\#5138](https://github.com/matrix-org/synapse/issues/5138))
-- Fix bogus imports in unit tests. ([\#5154](https://github.com/matrix-org/synapse/issues/5154))
-
 
 Internal Changes
 ----------------
@@ -78,6 +82,7 @@ Internal Changes
 - Prevent an exception from being raised in a IResolutionReceiver and use a more generic error message for blacklisted URL previews. ([\#5155](https://github.com/matrix-org/synapse/issues/5155))
 - Run `black` on the tests directory. ([\#5170](https://github.com/matrix-org/synapse/issues/5170))
 - Fix CI after new release of isort. ([\#5179](https://github.com/matrix-org/synapse/issues/5179))
+- Fix bogus imports in unit tests. ([\#5154](https://github.com/matrix-org/synapse/issues/5154))
 
 
 Synapse 0.99.3.2 (2019-05-03)
diff --git a/INSTALL.md b/INSTALL.md
index b88d826f6c..1934593148 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -35,7 +35,7 @@ virtualenv -p python3 ~/synapse/env
 source ~/synapse/env/bin/activate
 pip install --upgrade pip
 pip install --upgrade setuptools
-pip install matrix-synapse[all]
+pip install matrix-synapse
 ```
 
 This will download Synapse from [PyPI](https://pypi.org/project/matrix-synapse)
@@ -48,7 +48,7 @@ update flag:
 
 ```
 source ~/synapse/env/bin/activate
-pip install -U matrix-synapse[all]
+pip install -U matrix-synapse
 ```
 
 Before you can start Synapse, you will need to generate a configuration
diff --git a/changelog.d/3484.misc b/changelog.d/3484.misc
new file mode 100644
index 0000000000..3645849844
--- /dev/null
+++ b/changelog.d/3484.misc
@@ -0,0 +1 @@
+Make /sync attempt to return device updates for both joined and invited users. Note that this doesn't currently work correctly due to other bugs.
diff --git a/changelog.d/5039.bugfix b/changelog.d/5039.bugfix
new file mode 100644
index 0000000000..212cff7ae8
--- /dev/null
+++ b/changelog.d/5039.bugfix
@@ -0,0 +1 @@
+Fix image orientation when generating thumbnails (needs pillow>=4.3.0). Contributed by Pau Rodriguez-Estivill.
diff --git a/changelog.d/5146.bugfix b/changelog.d/5146.bugfix
new file mode 100644
index 0000000000..a54abed92b
--- /dev/null
+++ b/changelog.d/5146.bugfix
@@ -0,0 +1 @@
+Exclude soft-failed events from forward-extremity candidates: fixes "No forward extremities left!" error.
diff --git a/changelog.d/5174.bugfix b/changelog.d/5174.bugfix
new file mode 100644
index 0000000000..0f26d46b2c
--- /dev/null
+++ b/changelog.d/5174.bugfix
@@ -0,0 +1 @@
+Re-order stages in registration flows such that msisdn and email verification are done last.
diff --git a/changelog.d/5177.bugfix b/changelog.d/5177.bugfix
new file mode 100644
index 0000000000..c2f1644ae5
--- /dev/null
+++ b/changelog.d/5177.bugfix
@@ -0,0 +1 @@
+Fix 3pid guest invites.
diff --git a/changelog.d/5181.feature b/changelog.d/5181.feature
new file mode 100644
index 0000000000..5ce13aa2ea
--- /dev/null
+++ b/changelog.d/5181.feature
@@ -0,0 +1 @@
+Ratelimiting configuration for clients sending messages and the federation server has been altered to match login ratelimiting. The old configuration names will continue working. Check the sample config for details of the new names.
diff --git a/changelog.d/5187.bugfix b/changelog.d/5187.bugfix
new file mode 100644
index 0000000000..df176cf5b2
--- /dev/null
+++ b/changelog.d/5187.bugfix
@@ -0,0 +1 @@
+Fix a bug where the register endpoint would fail with M_THREEPID_IN_USE instead of returning an account previously registered in the same session.
diff --git a/changelog.d/5190.feature b/changelog.d/5190.feature
new file mode 100644
index 0000000000..34904aa7a8
--- /dev/null
+++ b/changelog.d/5190.feature
@@ -0,0 +1 @@
+Drop support for the undocumented /_matrix/client/v2_alpha API prefix.
diff --git a/changelog.d/5196.feature b/changelog.d/5196.feature
new file mode 100644
index 0000000000..1ffb928f62
--- /dev/null
+++ b/changelog.d/5196.feature
@@ -0,0 +1 @@
+Add an option to disable per-room profiles.
diff --git a/changelog.d/5197.misc b/changelog.d/5197.misc
new file mode 100644
index 0000000000..fca1d86b2e
--- /dev/null
+++ b/changelog.d/5197.misc
@@ -0,0 +1 @@
+Stop telling people to install the optional dependencies by default.
diff --git a/changelog.d/5198.bugfix b/changelog.d/5198.bugfix
new file mode 100644
index 0000000000..c6b156f17d
--- /dev/null
+++ b/changelog.d/5198.bugfix
@@ -0,0 +1 @@
+Prevent registration for user ids that are too long to fit into a state key. Contributed by Reid Anderson.
\ No newline at end of file
diff --git a/changelog.d/5198.feature b/changelog.d/5198.feature
deleted file mode 100644
index c8f624d172..0000000000
--- a/changelog.d/5198.feature
+++ /dev/null
@@ -1 +0,0 @@
-Add experimental support for reactions.
diff --git a/changelog.d/5204.feature b/changelog.d/5204.feature
new file mode 100644
index 0000000000..2a7212ca18
--- /dev/null
+++ b/changelog.d/5204.feature
@@ -0,0 +1 @@
+Attach an expiration date to any registered user missing one at startup, if account validity is enabled.
diff --git a/changelog.d/5209.feature b/changelog.d/5209.feature
new file mode 100644
index 0000000000..747098c166
--- /dev/null
+++ b/changelog.d/5209.feature
@@ -0,0 +1 @@
+Add experimental support for relations (aka reactions and edits).
diff --git a/changelog.d/5210.feature b/changelog.d/5210.feature
new file mode 100644
index 0000000000..c78325a6ac
--- /dev/null
+++ b/changelog.d/5210.feature
@@ -0,0 +1 @@
+Add a room version 4 which uses a new event ID format, as per [MSC2002](https://github.com/matrix-org/matrix-doc/pull/2002).
diff --git a/changelog.d/5211.feature b/changelog.d/5211.feature
new file mode 100644
index 0000000000..747098c166
--- /dev/null
+++ b/changelog.d/5211.feature
@@ -0,0 +1 @@
+Add experimental support for relations (aka reactions and edits).
diff --git a/changelog.d/5212.feature b/changelog.d/5212.feature
new file mode 100644
index 0000000000..747098c166
--- /dev/null
+++ b/changelog.d/5212.feature
@@ -0,0 +1 @@
+Add experimental support for relations (aka reactions and edits).
diff --git a/changelog.d/5217.feature b/changelog.d/5217.feature
new file mode 100644
index 0000000000..c78325a6ac
--- /dev/null
+++ b/changelog.d/5217.feature
@@ -0,0 +1 @@
+Add a room version 4 which uses a new event ID format, as per [MSC2002](https://github.com/matrix-org/matrix-doc/pull/2002).
diff --git a/changelog.d/5218.bugfix b/changelog.d/5218.bugfix
new file mode 100644
index 0000000000..cd624ecfd0
--- /dev/null
+++ b/changelog.d/5218.bugfix
@@ -0,0 +1 @@
+Fix incompatibility between ACME support and Python 3.5.2.
diff --git a/changelog.d/5219.bugfix b/changelog.d/5219.bugfix
new file mode 100644
index 0000000000..c1e17adc5d
--- /dev/null
+++ b/changelog.d/5219.bugfix
@@ -0,0 +1 @@
+Fix error handling for rooms whose versions are unknown.
diff --git a/debian/changelog b/debian/changelog
index 454fa8eb16..35cf8ffb20 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,9 +1,12 @@
-matrix-synapse-py3 (0.99.3.2+nmu1) UNRELEASED; urgency=medium
+matrix-synapse-py3 (0.99.4) stable; urgency=medium
 
   [ Christoph Müller ]
   * Configure the systemd units to have a log identifier of `matrix-synapse`
 
- -- Christoph Müller <iblzm@hotmail.de>  Wed, 17 Apr 2019 16:17:32 +0200
+  [ Synapse Packaging team ]
+  * New synapse release 0.99.4.
+
+ -- Synapse Packaging team <packages@matrix.org>  Wed, 15 May 2019 13:58:08 +0100
 
 matrix-synapse-py3 (0.99.3.2) stable; urgency=medium
 
diff --git a/debian/test/.gitignore b/debian/test/.gitignore
new file mode 100644
index 0000000000..95eda73fcc
--- /dev/null
+++ b/debian/test/.gitignore
@@ -0,0 +1,2 @@
+.vagrant
+*.log
diff --git a/debian/test/provision.sh b/debian/test/provision.sh
new file mode 100644
index 0000000000..a5c7f59712
--- /dev/null
+++ b/debian/test/provision.sh
@@ -0,0 +1,23 @@
+#!/bin/bash
+#
+# provisioning script for vagrant boxes for testing the matrix-synapse debs.
+#
+# Will install the most recent matrix-synapse-py3 deb for this platform from
+# the /debs directory.
+
+set -e
+
+apt-get update
+apt-get install -y lsb-release
+
+deb=`ls /debs/matrix-synapse-py3_*+$(lsb_release -cs)*.deb | sort | tail -n1`
+
+debconf-set-selections <<EOF
+matrix-synapse matrix-synapse/report-stats boolean false
+matrix-synapse matrix-synapse/server-name string localhost:18448
+EOF
+
+dpkg -i "$deb"
+
+sed -i -e '/port: 8...$/{s/8448/18448/; s/8008/18008/}' -e '$aregistration_shared_secret: secret' /etc/matrix-synapse/homeserver.yaml
+systemctl restart matrix-synapse
diff --git a/debian/test/stretch/Vagrantfile b/debian/test/stretch/Vagrantfile
new file mode 100644
index 0000000000..d8eff6fe11
--- /dev/null
+++ b/debian/test/stretch/Vagrantfile
@@ -0,0 +1,13 @@
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+ver = `cd ../../..; dpkg-parsechangelog -S Version`.strip()
+
+Vagrant.configure("2") do |config|
+  config.vm.box = "debian/stretch64"
+
+  config.vm.synced_folder ".", "/vagrant", disabled: true
+  config.vm.synced_folder "../../../../debs", "/debs", type: "nfs"
+
+  config.vm.provision "shell", path: "../provision.sh"
+end
diff --git a/debian/test/xenial/Vagrantfile b/debian/test/xenial/Vagrantfile
new file mode 100644
index 0000000000..189236da17
--- /dev/null
+++ b/debian/test/xenial/Vagrantfile
@@ -0,0 +1,10 @@
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+Vagrant.configure("2") do |config|
+  config.vm.box = "ubuntu/xenial64"
+
+  config.vm.synced_folder ".", "/vagrant", disabled: true
+  config.vm.synced_folder "../../../../debs", "/debs"
+  config.vm.provision "shell", path: "../provision.sh"
+end
diff --git a/docs/postgres.rst b/docs/postgres.rst
index f7ebbed0c3..e81e10403f 100644
--- a/docs/postgres.rst
+++ b/docs/postgres.rst
@@ -3,6 +3,28 @@ Using Postgres
 
 Postgres version 9.4 or later is known to work.
 
+Install postgres client libraries
+=================================
+
+Synapse will require the python postgres client library in order to connect to
+a postgres database.
+
+* If you are using the `matrix.org debian/ubuntu
+  packages <../INSTALL.md#matrixorg-packages>`_,
+  the necessary libraries will already be installed.
+
+* For other pre-built packages, please consult the documentation from the
+  relevant package.
+
+* If you installed synapse `in a virtualenv 
+  <../INSTALL.md#installing-from-source>`_, you can install the library with::
+
+      ~/synapse/env/bin/pip install matrix-synapse[postgres]
+
+  (substituting the path to your virtualenv for ``~/synapse/env``, if you used a
+  different path). You will require the postgres development files. These are in
+  the ``libpq-dev`` package on Debian-derived distributions.
+
 Set up database
 ===============
 
@@ -26,29 +48,6 @@ encoding use, e.g.::
 This would create an appropriate database named ``synapse`` owned by the
 ``synapse_user`` user (which must already exist).
 
-Set up client in Debian/Ubuntu
-===========================
-
-Postgres support depends on the postgres python connector ``psycopg2``. In the
-virtual env::
-
-    sudo apt-get install libpq-dev
-    pip install psycopg2
-
-Set up client in RHEL/CentOs 7
-==============================
-
-Make sure you have the appropriate version of postgres-devel installed. For a
-postgres 9.4, use the postgres 9.4 packages from
-[here](https://wiki.postgresql.org/wiki/YUM_Installation).
-
-As with Debian/Ubuntu, postgres support depends on the postgres python connector
-``psycopg2``. In the virtual env::
-
-    sudo yum install postgresql-devel libpqxx-devel.x86_64
-    export PATH=/usr/pgsql-9.4/bin/:$PATH
-    pip install psycopg2
-
 Tuning Postgres
 ===============
 
diff --git a/docs/sample_config.yaml b/docs/sample_config.yaml
index c4e5c4cf39..f658ec8ecd 100644
--- a/docs/sample_config.yaml
+++ b/docs/sample_config.yaml
@@ -276,6 +276,12 @@ listeners:
 #
 #require_membership_for_aliases: false
 
+# Whether to allow per-room membership profiles through the sending of membership
+# events with profile information that differs from the target's global profile.
+# Defaults to 'true'.
+#
+#allow_per_room_profiles: false
+
 
 ## TLS ##
 
@@ -446,21 +452,15 @@ log_config: "CONFDIR/SERVERNAME.log.config"
 
 ## Ratelimiting ##
 
-# Number of messages a client can send per second
-#
-#rc_messages_per_second: 0.2
-
-# Number of message a client can send before being throttled
-#
-#rc_message_burst_count: 10.0
-
-# Ratelimiting settings for registration and login.
+# Ratelimiting settings for client actions (registration, login, messaging).
 #
 # Each ratelimiting configuration is made of two parameters:
 #   - per_second: number of requests a client can send per second.
 #   - burst_count: number of requests a client can send before being throttled.
 #
 # Synapse currently uses the following configurations:
+#   - one for messages that ratelimits sending based on the account the client
+#     is using
 #   - one for registration that ratelimits registration requests based on the
 #     client's IP address.
 #   - one for login that ratelimits login requests based on the client's IP
@@ -473,6 +473,10 @@ log_config: "CONFDIR/SERVERNAME.log.config"
 #
 # The defaults are as shown below.
 #
+#rc_message:
+#  per_second: 0.2
+#  burst_count: 10
+#
 #rc_registration:
 #  per_second: 0.17
 #  burst_count: 3
@@ -488,29 +492,28 @@ log_config: "CONFDIR/SERVERNAME.log.config"
 #    per_second: 0.17
 #    burst_count: 3
 
-# The federation window size in milliseconds
-#
-#federation_rc_window_size: 1000
-
-# The number of federation requests from a single server in a window
-# before the server will delay processing the request.
-#
-#federation_rc_sleep_limit: 10
 
-# The duration in milliseconds to delay processing events from
-# remote servers by if they go over the sleep limit.
+# Ratelimiting settings for incoming federation
 #
-#federation_rc_sleep_delay: 500
-
-# The maximum number of concurrent federation requests allowed
-# from a single server
+# The rc_federation configuration is made up of the following settings:
+#   - window_size: window size in milliseconds
+#   - sleep_limit: number of federation requests from a single server in
+#     a window before the server will delay processing the request.
+#   - sleep_delay: duration in milliseconds to delay processing events
+#     from remote servers by if they go over the sleep limit.
+#   - reject_limit: maximum number of concurrent federation requests
+#     allowed from a single server
+#   - concurrent: number of federation requests to concurrently process
+#     from a single server
 #
-#federation_rc_reject_limit: 50
-
-# The number of federation requests to concurrently process from a
-# single server
+# The defaults are as shown below.
 #
-#federation_rc_concurrent: 3
+#rc_federation:
+#  window_size: 1000
+#  sleep_limit: 10
+#  sleep_delay: 500
+#  reject_limit: 50
+#  concurrent: 3
 
 # Target outgoing federation transaction frequency for sending read-receipts,
 # per-room.
@@ -744,6 +747,14 @@ uploads_path: "DATADIR/uploads"
 # link. ``%(app)s`` can be used as a placeholder for the ``app_name`` parameter
 # from the ``email`` section.
 #
+# Once this feature is enabled, Synapse will look for registered users without an
+# expiration date at startup and will add one to every account it finds, using the
+# validity period configured at that time.
+# This means that, if a validity period is set and Synapse is restarted, each such
+# account gets an expiration date based on the validity period in force at that
+# time; if the validity period later changes (even across restarts), existing
+# expiration dates won't be updated unless the accounts are manually renewed.
+#
 #account_validity:
 #  enabled: True
 #  period: 6w
diff --git a/synapse/__init__.py b/synapse/__init__.py
index cd9cfb2409..bf9e810da6 100644
--- a/synapse/__init__.py
+++ b/synapse/__init__.py
@@ -27,4 +27,4 @@ try:
 except ImportError:
     pass
 
-__version__ = "0.99.4rc1"
+__version__ = "0.99.4"
diff --git a/synapse/api/constants.py b/synapse/api/constants.py
index 30bebd749f..6b347b1749 100644
--- a/synapse/api/constants.py
+++ b/synapse/api/constants.py
@@ -23,6 +23,9 @@ MAX_DEPTH = 2**63 - 1
 # the maximum length for a room alias is 255 characters
 MAX_ALIAS_LENGTH = 255
 
+# the maximum length for a user id is 255 characters
+MAX_USERID_LENGTH = 255
+
 
 class Membership(object):
 
@@ -122,5 +125,5 @@ class RelationTypes(object):
     """The types of relations known to this server.
     """
     ANNOTATION = "m.annotation"
-    REPLACES = "m.replaces"
-    REFERENCES = "m.references"
+    REPLACE = "m.replace"
+    REFERENCE = "m.reference"
diff --git a/synapse/api/errors.py b/synapse/api/errors.py
index ff89259dec..e91697049c 100644
--- a/synapse/api/errors.py
+++ b/synapse/api/errors.py
@@ -328,9 +328,23 @@ class RoomKeysVersionError(SynapseError):
         self.current_version = current_version
 
 
+class UnsupportedRoomVersionError(SynapseError):
+    """The client's request to create a room used a room version that the server does
+    not support."""
+    def __init__(self):
+        super(UnsupportedRoomVersionError, self).__init__(
+            code=400,
+            msg="Homeserver does not support this room version",
+            errcode=Codes.UNSUPPORTED_ROOM_VERSION,
+        )
+
+
 class IncompatibleRoomVersionError(SynapseError):
-    """A server is trying to join a room whose version it does not support."""
+    """A server is trying to join a room whose version it does not support.
 
+    Unlike UnsupportedRoomVersionError, it is specific to the case of the make_join
+    failing.
+    """
     def __init__(self, room_version):
         super(IncompatibleRoomVersionError, self).__init__(
             code=400,
diff --git a/synapse/api/room_versions.py b/synapse/api/room_versions.py
index e77abe1040..b2895355a8 100644
--- a/synapse/api/room_versions.py
+++ b/synapse/api/room_versions.py
@@ -19,13 +19,15 @@ class EventFormatVersions(object):
     """This is an internal enum for tracking the version of the event format,
     independently from the room version.
     """
-    V1 = 1   # $id:server format
-    V2 = 2   # MSC1659-style $hash format: introduced for room v3
+    V1 = 1   # $id:server event id format
+    V2 = 2   # MSC1659-style $hash event id format: introduced for room v3
+    V3 = 3   # MSC1884-style $hash format: introduced for room v4
 
 
 KNOWN_EVENT_FORMAT_VERSIONS = {
     EventFormatVersions.V1,
     EventFormatVersions.V2,
+    EventFormatVersions.V3,
 }
 
 
@@ -75,6 +77,12 @@ class RoomVersions(object):
         EventFormatVersions.V2,
         StateResolutionVersions.V2,
     )
+    V4 = RoomVersion(
+        "4",
+        RoomDisposition.STABLE,
+        EventFormatVersions.V3,
+        StateResolutionVersions.V2,
+    )
 
 
 # the version we will give rooms which are created on this server
@@ -87,5 +95,6 @@ KNOWN_ROOM_VERSIONS = {
         RoomVersions.V2,
         RoomVersions.V3,
         RoomVersions.STATE_V2_TEST,
+        RoomVersions.V4,
     )
 }   # type: dict[str, RoomVersion]
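
For reference, a minimal sketch (not part of the patch) of how the new `"4"` entry resolves to its event format; the attribute name follows the `v.event_format` usage in `synapse/events/__init__.py` below.

```python
# Sketch only: resolve a room version string to its event format version.
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, EventFormatVersions

v4 = KNOWN_ROOM_VERSIONS["4"]
assert v4.event_format == EventFormatVersions.V3  # MSC1884-style event ids
```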
diff --git a/synapse/api/urls.py b/synapse/api/urls.py
index cb71d80875..3c6bddff7a 100644
--- a/synapse/api/urls.py
+++ b/synapse/api/urls.py
@@ -22,8 +22,7 @@ from six.moves.urllib.parse import urlencode
 
 from synapse.config import ConfigError
 
-CLIENT_PREFIX = "/_matrix/client/api/v1"
-CLIENT_V2_ALPHA_PREFIX = "/_matrix/client/v2_alpha"
+CLIENT_API_PREFIX = "/_matrix/client"
 FEDERATION_PREFIX = "/_matrix/federation"
 FEDERATION_V1_PREFIX = FEDERATION_PREFIX + "/v1"
 FEDERATION_V2_PREFIX = FEDERATION_PREFIX + "/v2"
diff --git a/synapse/config/ratelimiting.py b/synapse/config/ratelimiting.py
index 5a68399e63..5a9adac480 100644
--- a/synapse/config/ratelimiting.py
+++ b/synapse/config/ratelimiting.py
@@ -16,16 +16,56 @@ from ._base import Config
 
 
 class RateLimitConfig(object):
-    def __init__(self, config):
-        self.per_second = config.get("per_second", 0.17)
-        self.burst_count = config.get("burst_count", 3.0)
+    def __init__(self, config, defaults={"per_second": 0.17, "burst_count": 3.0}):
+        self.per_second = config.get("per_second", defaults["per_second"])
+        self.burst_count = config.get("burst_count", defaults["burst_count"])
 
 
-class RatelimitConfig(Config):
+class FederationRateLimitConfig(object):
+    _items_and_default = {
+        "window_size": 10000,
+        "sleep_limit": 10,
+        "sleep_delay": 500,
+        "reject_limit": 50,
+        "concurrent": 3,
+    }
+
+    def __init__(self, **kwargs):
+        for i in self._items_and_default.keys():
+            setattr(self, i, kwargs.get(i) or self._items_and_default[i])
+
 
+class RatelimitConfig(Config):
     def read_config(self, config):
-        self.rc_messages_per_second = config.get("rc_messages_per_second", 0.2)
-        self.rc_message_burst_count = config.get("rc_message_burst_count", 10.0)
+
+        # Load the new-style messages config if it exists. Otherwise fall back
+        # to the old method.
+        if "rc_message" in config:
+            self.rc_message = RateLimitConfig(
+                config["rc_message"], defaults={"per_second": 0.2, "burst_count": 10.0}
+            )
+        else:
+            self.rc_message = RateLimitConfig(
+                {
+                    "per_second": config.get("rc_messages_per_second", 0.2),
+                    "burst_count": config.get("rc_message_burst_count", 10.0),
+                }
+            )
+
+        # Load the new-style federation config, if it exists. Otherwise, fall
+        # back to the old method.
+        if "federation_rc" in config:
+            self.rc_federation = FederationRateLimitConfig(**config["rc_federation"])
+        else:
+            self.rc_federation = FederationRateLimitConfig(
+                **{
+                    "window_size": config.get("federation_rc_window_size"),
+                    "sleep_limit": config.get("federation_rc_sleep_limit"),
+                    "sleep_delay": config.get("federation_rc_sleep_delay"),
+                    "reject_limit": config.get("federation_rc_reject_limit"),
+                    "concurrent": config.get("federation_rc_concurrent"),
+                }
+            )
 
         self.rc_registration = RateLimitConfig(config.get("rc_registration", {}))
 
@@ -33,38 +73,26 @@ class RatelimitConfig(Config):
         self.rc_login_address = RateLimitConfig(rc_login_config.get("address", {}))
         self.rc_login_account = RateLimitConfig(rc_login_config.get("account", {}))
         self.rc_login_failed_attempts = RateLimitConfig(
-            rc_login_config.get("failed_attempts", {}),
+            rc_login_config.get("failed_attempts", {})
         )
 
-        self.federation_rc_window_size = config.get("federation_rc_window_size", 1000)
-        self.federation_rc_sleep_limit = config.get("federation_rc_sleep_limit", 10)
-        self.federation_rc_sleep_delay = config.get("federation_rc_sleep_delay", 500)
-        self.federation_rc_reject_limit = config.get("federation_rc_reject_limit", 50)
-        self.federation_rc_concurrent = config.get("federation_rc_concurrent", 3)
-
         self.federation_rr_transactions_per_room_per_second = config.get(
-            "federation_rr_transactions_per_room_per_second", 50,
+            "federation_rr_transactions_per_room_per_second", 50
         )
 
     def default_config(self, **kwargs):
         return """\
         ## Ratelimiting ##
 
-        # Number of messages a client can send per second
-        #
-        #rc_messages_per_second: 0.2
-
-        # Number of message a client can send before being throttled
-        #
-        #rc_message_burst_count: 10.0
-
-        # Ratelimiting settings for registration and login.
+        # Ratelimiting settings for client actions (registration, login, messaging).
         #
         # Each ratelimiting configuration is made of two parameters:
         #   - per_second: number of requests a client can send per second.
         #   - burst_count: number of requests a client can send before being throttled.
         #
         # Synapse currently uses the following configurations:
+        #   - one for messages that ratelimits sending based on the account the client
+        #     is using
         #   - one for registration that ratelimits registration requests based on the
         #     client's IP address.
         #   - one for login that ratelimits login requests based on the client's IP
@@ -77,6 +105,10 @@ class RatelimitConfig(Config):
         #
         # The defaults are as shown below.
         #
+        #rc_message:
+        #  per_second: 0.2
+        #  burst_count: 10
+        #
         #rc_registration:
         #  per_second: 0.17
         #  burst_count: 3
@@ -92,29 +124,28 @@ class RatelimitConfig(Config):
         #    per_second: 0.17
         #    burst_count: 3
 
-        # The federation window size in milliseconds
-        #
-        #federation_rc_window_size: 1000
-
-        # The number of federation requests from a single server in a window
-        # before the server will delay processing the request.
-        #
-        #federation_rc_sleep_limit: 10
 
-        # The duration in milliseconds to delay processing events from
-        # remote servers by if they go over the sleep limit.
+        # Ratelimiting settings for incoming federation
         #
-        #federation_rc_sleep_delay: 500
-
-        # The maximum number of concurrent federation requests allowed
-        # from a single server
+        # The rc_federation configuration is made up of the following settings:
+        #   - window_size: window size in milliseconds
+        #   - sleep_limit: number of federation requests from a single server in
+        #     a window before the server will delay processing the request.
+        #   - sleep_delay: duration in milliseconds to delay processing events
+        #     from remote servers by if they go over the sleep limit.
+        #   - reject_limit: maximum number of concurrent federation requests
+        #     allowed from a single server
+        #   - concurrent: number of federation requests to concurrently process
+        #     from a single server
         #
-        #federation_rc_reject_limit: 50
-
-        # The number of federation requests to concurrently process from a
-        # single server
+        # The defaults are as shown below.
         #
-        #federation_rc_concurrent: 3
+        #rc_federation:
+        #  window_size: 1000
+        #  sleep_limit: 10
+        #  sleep_delay: 500
+        #  reject_limit: 50
+        #  concurrent: 3
 
         # Target outgoing federation transaction frequency for sending read-receipts,
         # per-room.
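
As an illustration of the backwards compatibility described in the comments above, a minimal sketch (not Synapse code) of how the old flat `federation_rc_*` keys map onto the new nested `rc_federation` block; old keys that were never set simply fall through to the defaults.

```python
# Sketch only: fold old-style federation_rc_* options into an rc_federation dict.
OLD_TO_NEW = {
    "federation_rc_window_size": "window_size",
    "federation_rc_sleep_limit": "sleep_limit",
    "federation_rc_sleep_delay": "sleep_delay",
    "federation_rc_reject_limit": "reject_limit",
    "federation_rc_concurrent": "concurrent",
}

def upgrade_federation_config(old_config):
    """Return the new-style dict, keeping only keys that were actually set."""
    return {new: old_config[old] for old, new in OLD_TO_NEW.items() if old in old_config}

# e.g. upgrade_federation_config({"federation_rc_sleep_limit": 20})
# -> {"sleep_limit": 20}
```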
diff --git a/synapse/config/registration.py b/synapse/config/registration.py
index 1309bce3ee..693288f938 100644
--- a/synapse/config/registration.py
+++ b/synapse/config/registration.py
@@ -123,6 +123,14 @@ class RegistrationConfig(Config):
         # link. ``%%(app)s`` can be used as a placeholder for the ``app_name`` parameter
         # from the ``email`` section.
         #
+        # Once this feature is enabled, Synapse will look for registered users without an
+        # expiration date at startup and will add one to every account it finds, using the
+        # validity period configured at that time.
+        # This means that, if a validity period is set and Synapse is restarted, each such
+        # account gets an expiration date based on the validity period in force at that
+        # time; if the validity period later changes (even across restarts), existing
+        # expiration dates won't be updated unless the accounts are manually renewed.
+        #
         #account_validity:
         #  enabled: True
         #  period: 6w
diff --git a/synapse/config/server.py b/synapse/config/server.py
index f403477b54..f34aa42afa 100644
--- a/synapse/config/server.py
+++ b/synapse/config/server.py
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
 # Copyright 2017-2018 New Vector Ltd
+# Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -178,6 +179,10 @@ class ServerConfig(Config):
             "require_membership_for_aliases", True,
         )
 
+        # Whether to allow per-room membership profiles through the sending of membership
+        # events with profile information that differs from the target's global profile.
+        self.allow_per_room_profiles = config.get("allow_per_room_profiles", True)
+
         self.listeners = []
         for listener in config.get("listeners", []):
             if not isinstance(listener.get("port", None), int):
@@ -571,6 +576,12 @@ class ServerConfig(Config):
         # Defaults to 'true'.
         #
         #require_membership_for_aliases: false
+
+        # Whether to allow per-room membership profiles through the sending of membership
+        # events with profile information that differs from the target's global profile.
+        # Defaults to 'true'.
+        #
+        #allow_per_room_profiles: false
         """ % locals()
 
     def read_arguments(self, args):
diff --git a/synapse/events/__init__.py b/synapse/events/__init__.py
index 12056d5be2..1edd19cc13 100644
--- a/synapse/events/__init__.py
+++ b/synapse/events/__init__.py
@@ -21,6 +21,7 @@ import six
 
 from unpaddedbase64 import encode_base64
 
+from synapse.api.errors import UnsupportedRoomVersionError
 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, EventFormatVersions
 from synapse.util.caches import intern_dict
 from synapse.util.frozenutils import freeze
@@ -335,13 +336,32 @@ class FrozenEventV2(EventBase):
         return self.__repr__()
 
     def __repr__(self):
-        return "<FrozenEventV2 event_id='%s', type='%s', state_key='%s'>" % (
+        return "<%s event_id='%s', type='%s', state_key='%s'>" % (
+            self.__class__.__name__,
             self.event_id,
             self.get("type", None),
             self.get("state_key", None),
         )
 
 
+class FrozenEventV3(FrozenEventV2):
+    """FrozenEventV3, which differs from FrozenEventV2 only in the event_id format"""
+    format_version = EventFormatVersions.V3  # All events of this type are V3
+
+    @property
+    def event_id(self):
+        # We have to import this here as otherwise we get an import loop which
+        # is hard to break.
+        from synapse.crypto.event_signing import compute_event_reference_hash
+
+        if self._event_id:
+            return self._event_id
+        self._event_id = "$" + encode_base64(
+            compute_event_reference_hash(self)[1], urlsafe=True
+        )
+        return self._event_id
+
+
 def room_version_to_event_format(room_version):
     """Converts a room version string to the event format
 
@@ -350,12 +370,15 @@ def room_version_to_event_format(room_version):
 
     Returns:
         int
+
+    Raises:
+        UnsupportedRoomVersionError if the room version is unknown
     """
     v = KNOWN_ROOM_VERSIONS.get(room_version)
 
     if not v:
-        # We should have already checked version, so this should not happen
-        raise RuntimeError("Unrecognized room version %s" % (room_version,))
+        # this can happen if support is withdrawn for a room version
+        raise UnsupportedRoomVersionError()
 
     return v.event_format
 
@@ -376,6 +399,8 @@ def event_type_from_format_version(format_version):
         return FrozenEvent
     elif format_version == EventFormatVersions.V2:
         return FrozenEventV2
+    elif format_version == EventFormatVersions.V3:
+        return FrozenEventV3
     else:
         raise Exception(
             "No event format %r" % (format_version,)
diff --git a/synapse/events/builder.py b/synapse/events/builder.py
index fba27177c7..1fe995f212 100644
--- a/synapse/events/builder.py
+++ b/synapse/events/builder.py
@@ -18,6 +18,7 @@ import attr
 from twisted.internet import defer
 
 from synapse.api.constants import MAX_DEPTH
+from synapse.api.errors import UnsupportedRoomVersionError
 from synapse.api.room_versions import (
     KNOWN_EVENT_FORMAT_VERSIONS,
     KNOWN_ROOM_VERSIONS,
@@ -178,9 +179,8 @@ class EventBuilderFactory(object):
         """
         v = KNOWN_ROOM_VERSIONS.get(room_version)
         if not v:
-            raise Exception(
-                "No event format defined for version %r" % (room_version,)
-            )
+            # this can happen if support is withdrawn for a room version
+            raise UnsupportedRoomVersionError()
         return self.for_room_version(v, key_values)
 
     def for_room_version(self, room_version, key_values):
diff --git a/synapse/events/utils.py b/synapse/events/utils.py
index bf3c8f8dc1..27a2a9ef98 100644
--- a/synapse/events/utils.py
+++ b/synapse/events/utils.py
@@ -355,7 +355,7 @@ class EventClientSerializer(object):
                 event_id,
             )
             references = yield self.store.get_relations_for_event(
-                event_id, RelationTypes.REFERENCES, direction="f",
+                event_id, RelationTypes.REFERENCE, direction="f",
             )
 
             if annotations.chunk:
@@ -364,7 +364,7 @@ class EventClientSerializer(object):
 
             if references.chunk:
                 r = serialized_event["unsigned"].setdefault("m.relations", {})
-                r[RelationTypes.REFERENCES] = references.to_dict()
+                r[RelationTypes.REFERENCE] = references.to_dict()
 
             edit = None
             if event.type == EventTypes.Message:
@@ -382,7 +382,7 @@ class EventClientSerializer(object):
                     serialized_event["content"].pop("m.relates_to", None)
 
                 r = serialized_event["unsigned"].setdefault("m.relations", {})
-                r[RelationTypes.REPLACES] = {
+                r[RelationTypes.REPLACE] = {
                     "event_id": edit.event_id,
                 }
 
diff --git a/synapse/federation/federation_server.py b/synapse/federation/federation_server.py
index df60828dba..4c28c1dc3c 100644
--- a/synapse/federation/federation_server.py
+++ b/synapse/federation/federation_server.py
@@ -33,6 +33,7 @@ from synapse.api.errors import (
     IncompatibleRoomVersionError,
     NotFoundError,
     SynapseError,
+    UnsupportedRoomVersionError,
 )
 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
 from synapse.crypto.event_signing import compute_event_signature
@@ -198,11 +199,22 @@ class FederationServer(FederationBase):
 
             try:
                 room_version = yield self.store.get_room_version(room_id)
-                format_ver = room_version_to_event_format(room_version)
             except NotFoundError:
                 logger.info("Ignoring PDU for unknown room_id: %s", room_id)
                 continue
 
+            try:
+                format_ver = room_version_to_event_format(room_version)
+            except UnsupportedRoomVersionError:
+                # this can happen if support for a given room version is withdrawn,
+                # in which case we may still receive events for that room.
+                logger.info(
+                    "Ignoring PDU for room %s with unknown version %s",
+                    room_id,
+                    room_version,
+                )
+                continue
+
             event = event_from_pdu_json(p, format_ver)
             pdus_by_room.setdefault(room_id, []).append(event)
 
diff --git a/synapse/federation/transport/server.py b/synapse/federation/transport/server.py
index 9030eb18c5..385eda2dca 100644
--- a/synapse/federation/transport/server.py
+++ b/synapse/federation/transport/server.py
@@ -63,11 +63,7 @@ class TransportLayerServer(JsonResource):
         self.authenticator = Authenticator(hs)
         self.ratelimiter = FederationRateLimiter(
             self.clock,
-            window_size=hs.config.federation_rc_window_size,
-            sleep_limit=hs.config.federation_rc_sleep_limit,
-            sleep_msec=hs.config.federation_rc_sleep_delay,
-            reject_limit=hs.config.federation_rc_reject_limit,
-            concurrent_requests=hs.config.federation_rc_concurrent,
+            config=hs.config.rc_federation,
         )
 
         self.register_servlets()
diff --git a/synapse/handlers/_base.py b/synapse/handlers/_base.py
index ac09d03ba9..dca337ec61 100644
--- a/synapse/handlers/_base.py
+++ b/synapse/handlers/_base.py
@@ -90,8 +90,8 @@ class BaseHandler(object):
             messages_per_second = override.messages_per_second
             burst_count = override.burst_count
         else:
-            messages_per_second = self.hs.config.rc_messages_per_second
-            burst_count = self.hs.config.rc_message_burst_count
+            messages_per_second = self.hs.config.rc_message.per_second
+            burst_count = self.hs.config.rc_message.burst_count
 
         allowed, time_allowed = self.ratelimiter.can_do_action(
             user_id, time_now,
diff --git a/synapse/handlers/federation.py b/synapse/handlers/federation.py
index 0684778882..2202ed699a 100644
--- a/synapse/handlers/federation.py
+++ b/synapse/handlers/federation.py
@@ -1916,6 +1916,11 @@ class FederationHandler(BaseHandler):
                     event.room_id, latest_event_ids=extrem_ids,
                 )
 
+            logger.debug(
+                "Doing soft-fail check for %s: state %s",
+                event.event_id, current_state_ids,
+            )
+
             # Now check if event pass auth against said current state
             auth_types = auth_types_for_event(event)
             current_state_ids = [
@@ -1932,7 +1937,7 @@ class FederationHandler(BaseHandler):
                 self.auth.check(room_version, event, auth_events=current_auth_events)
             except AuthError as e:
                 logger.warn(
-                    "Failed current state auth resolution for %r because %s",
+                    "Soft-failing %r because %s",
                     event, e,
                 )
                 event.internal_metadata.soft_failed = True
diff --git a/synapse/handlers/message.py b/synapse/handlers/message.py
index 7b2c33a922..792edc7579 100644
--- a/synapse/handlers/message.py
+++ b/synapse/handlers/message.py
@@ -22,7 +22,7 @@ from canonicaljson import encode_canonical_json, json
 from twisted.internet import defer
 from twisted.internet.defer import succeed
 
-from synapse.api.constants import EventTypes, Membership
+from synapse.api.constants import EventTypes, Membership, RelationTypes
 from synapse.api.errors import (
     AuthError,
     Codes,
@@ -601,6 +601,20 @@ class EventCreationHandler(object):
 
         self.validator.validate_new(event)
 
+        # If this event is an annotation then we check that the sender
+        # can't annotate the same way twice (e.g. stops users from liking an
+        # event multiple times).
+        relation = event.content.get("m.relates_to", {})
+        if relation.get("rel_type") == RelationTypes.ANNOTATION:
+            relates_to = relation["event_id"]
+            aggregation_key = relation["key"]
+
+            already_exists = yield self.store.has_user_annotated_event(
+                relates_to, event.type, aggregation_key, event.sender,
+            )
+            if already_exists:
+                raise SynapseError(400, "Can't send same reaction twice")
+
         logger.debug(
             "Created event %s",
             event.event_id,
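
For context, a sketch of the `m.relates_to` content shape that the new annotation check inspects (the event ID and reaction key are made up): a second event from the same sender with the same target and `key` is rejected with a 400.

```python
# Sketch only: an m.annotation relation (a reaction) as sent by a client.
content = {
    "m.relates_to": {
        "rel_type": "m.annotation",     # RelationTypes.ANNOTATION
        "event_id": "$some_event_id",   # hypothetical target event
        "key": "👍",                    # the aggregation key being deduplicated
    }
}
```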
diff --git a/synapse/handlers/register.py b/synapse/handlers/register.py
index a51d11a257..e83ee24f10 100644
--- a/synapse/handlers/register.py
+++ b/synapse/handlers/register.py
@@ -19,7 +19,7 @@ import logging
 from twisted.internet import defer
 
 from synapse import types
-from synapse.api.constants import LoginType
+from synapse.api.constants import MAX_USERID_LENGTH, LoginType
 from synapse.api.errors import (
     AuthError,
     Codes,
@@ -123,6 +123,15 @@ class RegistrationHandler(BaseHandler):
 
         self.check_user_id_not_appservice_exclusive(user_id)
 
+        if len(user_id) > MAX_USERID_LENGTH:
+            raise SynapseError(
+                400,
+                "User ID may not be longer than %s characters" % (
+                    MAX_USERID_LENGTH,
+                ),
+                Codes.INVALID_USERNAME
+            )
+
         users = yield self.store.get_users_by_id_case_insensitive(user_id)
         if users:
             if not guest_access_token:
diff --git a/synapse/handlers/room_member.py b/synapse/handlers/room_member.py
index 3e86b9c690..93ac986c86 100644
--- a/synapse/handlers/room_member.py
+++ b/synapse/handlers/room_member.py
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
+# Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -73,6 +74,7 @@ class RoomMemberHandler(object):
         self.spam_checker = hs.get_spam_checker()
         self._server_notices_mxid = self.config.server_notices_mxid
         self._enable_lookup = hs.config.enable_3pid_lookup
+        self.allow_per_room_profiles = self.config.allow_per_room_profiles
 
         # This is only used to get at ratelimit function, and
         # maybe_kick_guest_users. It's fine there are multiple of these as
@@ -357,6 +359,13 @@ class RoomMemberHandler(object):
             # later on.
             content = dict(content)
 
+        if not self.allow_per_room_profiles:
+            # Strip profile data, knowing that new profile data will be added to the
+            # event's content in event_creation_handler.create_event() using the target's
+            # global profile.
+            content.pop("displayname", None)
+            content.pop("avatar_url", None)
+
         effective_membership_state = action
         if action in ["kick", "unban"]:
             effective_membership_state = "leave"
@@ -935,7 +944,7 @@ class RoomMemberHandler(object):
         }
 
         if self.config.invite_3pid_guest:
-            guest_access_token, guest_user_id = yield self.get_or_register_3pid_guest(
+            guest_user_id, guest_access_token = yield self.get_or_register_3pid_guest(
                 requester=requester,
                 medium=medium,
                 address=address,
diff --git a/synapse/handlers/sync.py b/synapse/handlers/sync.py
index 153312e39f..1ee9a6e313 100644
--- a/synapse/handlers/sync.py
+++ b/synapse/handlers/sync.py
@@ -934,7 +934,7 @@ class SyncHandler(object):
         res = yield self._generate_sync_entry_for_rooms(
             sync_result_builder, account_data_by_room
         )
-        newly_joined_rooms, newly_joined_users, _, _ = res
+        newly_joined_rooms, newly_joined_or_invited_users, _, _ = res
         _, _, newly_left_rooms, newly_left_users = res
 
         block_all_presence_data = (
@@ -943,7 +943,7 @@ class SyncHandler(object):
         )
         if self.hs_config.use_presence and not block_all_presence_data:
             yield self._generate_sync_entry_for_presence(
-                sync_result_builder, newly_joined_rooms, newly_joined_users
+                sync_result_builder, newly_joined_rooms, newly_joined_or_invited_users
             )
 
         yield self._generate_sync_entry_for_to_device(sync_result_builder)
@@ -951,7 +951,7 @@ class SyncHandler(object):
         device_lists = yield self._generate_sync_entry_for_device_list(
             sync_result_builder,
             newly_joined_rooms=newly_joined_rooms,
-            newly_joined_users=newly_joined_users,
+            newly_joined_or_invited_users=newly_joined_or_invited_users,
             newly_left_rooms=newly_left_rooms,
             newly_left_users=newly_left_users,
         )
@@ -1036,7 +1036,8 @@ class SyncHandler(object):
     @measure_func("_generate_sync_entry_for_device_list")
     @defer.inlineCallbacks
     def _generate_sync_entry_for_device_list(self, sync_result_builder,
-                                             newly_joined_rooms, newly_joined_users,
+                                             newly_joined_rooms,
+                                             newly_joined_or_invited_users,
                                              newly_left_rooms, newly_left_users):
         user_id = sync_result_builder.sync_config.user.to_string()
         since_token = sync_result_builder.since_token
@@ -1050,7 +1051,7 @@ class SyncHandler(object):
             # share a room with?
             for room_id in newly_joined_rooms:
                 joined_users = yield self.state.get_current_users_in_room(room_id)
-                newly_joined_users.update(joined_users)
+                newly_joined_or_invited_users.update(joined_users)
 
             for room_id in newly_left_rooms:
                 left_users = yield self.state.get_current_users_in_room(room_id)
@@ -1058,7 +1059,7 @@ class SyncHandler(object):
 
             # TODO: Check that these users are actually new, i.e. either they
             # weren't in the previous sync *or* they left and rejoined.
-            changed.update(newly_joined_users)
+            changed.update(newly_joined_or_invited_users)
 
             if not changed and not newly_left_users:
                 defer.returnValue(DeviceLists(
@@ -1176,7 +1177,7 @@ class SyncHandler(object):
 
     @defer.inlineCallbacks
     def _generate_sync_entry_for_presence(self, sync_result_builder, newly_joined_rooms,
-                                          newly_joined_users):
+                                          newly_joined_or_invited_users):
         """Generates the presence portion of the sync response. Populates the
         `sync_result_builder` with the result.
 
@@ -1184,8 +1185,9 @@ class SyncHandler(object):
             sync_result_builder(SyncResultBuilder)
             newly_joined_rooms(list): List of rooms that the user has joined
                 since the last sync (or empty if an initial sync)
-            newly_joined_users(list): List of users that have joined rooms
-                since the last sync (or empty if an initial sync)
+            newly_joined_or_invited_users(list): List of users that have joined
+                or been invited to rooms since the last sync (or empty if an initial
+                sync)
         """
         now_token = sync_result_builder.now_token
         sync_config = sync_result_builder.sync_config
@@ -1211,7 +1213,7 @@ class SyncHandler(object):
             "presence_key", presence_key
         )
 
-        extra_users_ids = set(newly_joined_users)
+        extra_users_ids = set(newly_joined_or_invited_users)
         for room_id in newly_joined_rooms:
             users = yield self.state.get_current_users_in_room(room_id)
             extra_users_ids.update(users)
@@ -1243,7 +1245,8 @@ class SyncHandler(object):
 
         Returns:
             Deferred(tuple): Returns a 4-tuple of
-            `(newly_joined_rooms, newly_joined_users, newly_left_rooms, newly_left_users)`
+            `(newly_joined_rooms, newly_joined_or_invited_users,
+            newly_left_rooms, newly_left_users)`
         """
         user_id = sync_result_builder.sync_config.user.to_string()
         block_all_room_ephemeral = (
@@ -1314,8 +1317,8 @@ class SyncHandler(object):
 
         sync_result_builder.invited.extend(invited)
 
-        # Now we want to get any newly joined users
-        newly_joined_users = set()
+        # Now we want to get any newly joined or invited users
+        newly_joined_or_invited_users = set()
         newly_left_users = set()
         if since_token:
             for joined_sync in sync_result_builder.joined:
@@ -1324,19 +1327,22 @@ class SyncHandler(object):
                 )
                 for event in it:
                     if event.type == EventTypes.Member:
-                        if event.membership == Membership.JOIN:
-                            newly_joined_users.add(event.state_key)
+                        if (
+                            event.membership == Membership.JOIN or
+                            event.membership == Membership.INVITE
+                        ):
+                            newly_joined_or_invited_users.add(event.state_key)
                         else:
                             prev_content = event.unsigned.get("prev_content", {})
                             prev_membership = prev_content.get("membership", None)
                             if prev_membership == Membership.JOIN:
                                 newly_left_users.add(event.state_key)
 
-        newly_left_users -= newly_joined_users
+        newly_left_users -= newly_joined_or_invited_users
 
         defer.returnValue((
             newly_joined_rooms,
-            newly_joined_users,
+            newly_joined_or_invited_users,
             newly_left_rooms,
             newly_left_users,
         ))
@@ -1381,7 +1387,7 @@ class SyncHandler(object):
             where:
                 room_entries is a list [RoomSyncResultBuilder]
                 invited_rooms is a list [InvitedSyncResult]
-                newly_joined rooms is a list[str] of room ids
+                newly_joined_rooms is a list[str] of room ids
                 newly_left_rooms is a list[str] of room ids
         """
         user_id = sync_result_builder.sync_config.user.to_string()
@@ -1422,7 +1428,7 @@ class SyncHandler(object):
             if room_id in sync_result_builder.joined_room_ids and non_joins:
                 # Always include if the user (re)joined the room, especially
                 # important so that device list changes are calculated correctly.
-                # If there are non join member events, but we are still in the room,
+                # If there are non-join member events, but we are still in the room,
                 # then the user must have left and joined
                 newly_joined_rooms.append(room_id)
 
diff --git a/synapse/python_dependencies.py b/synapse/python_dependencies.py
index 2708f5e820..e3f828c4bb 100644
--- a/synapse/python_dependencies.py
+++ b/synapse/python_dependencies.py
@@ -16,7 +16,12 @@
 
 import logging
 
-from pkg_resources import DistributionNotFound, VersionConflict, get_distribution
+from pkg_resources import (
+    DistributionNotFound,
+    Requirement,
+    VersionConflict,
+    get_provider,
+)
 
 logger = logging.getLogger(__name__)
 
@@ -53,7 +58,7 @@ REQUIREMENTS = [
     "pyasn1-modules>=0.0.7",
     "daemonize>=2.3.1",
     "bcrypt>=3.1.0",
-    "pillow>=3.1.2",
+    "pillow>=4.3.0",
     "sortedcontainers>=1.4.4",
     "psutil>=2.0.0",
     "pymacaroons>=0.13.0",
@@ -91,7 +96,13 @@ CONDITIONAL_REQUIREMENTS = {
 
     # ACME support is required to provision TLS certificates from authorities
     # that use the protocol, such as Let's Encrypt.
-    "acme": ["txacme>=0.9.2"],
+    "acme": [
+        "txacme>=0.9.2",
+
+        # txacme depends on eliot. Eliot 1.8.0 is incompatible with
+        # python 3.5.2, as per https://github.com/itamarst/eliot/issues/418
+        'eliot<1.8.0;python_version<"3.5.3"',
+    ],
 
     "saml2": ["pysaml2>=4.5.0"],
     "systemd": ["systemd-python>=231"],
@@ -125,10 +136,10 @@ class DependencyException(Exception):
     @property
     def dependencies(self):
         for i in self.args[0]:
-            yield '"' + i + '"'
+            yield "'" + i + "'"
 
 
-def check_requirements(for_feature=None, _get_distribution=get_distribution):
+def check_requirements(for_feature=None):
     deps_needed = []
     errors = []
 
@@ -139,7 +150,7 @@ def check_requirements(for_feature=None, _get_distribution=get_distribution):
 
     for dependency in reqs:
         try:
-            _get_distribution(dependency)
+            _check_requirement(dependency)
         except VersionConflict as e:
             deps_needed.append(dependency)
             errors.append(
@@ -157,7 +168,7 @@ def check_requirements(for_feature=None, _get_distribution=get_distribution):
 
         for dependency in OPTS:
             try:
-                _get_distribution(dependency)
+                _check_requirement(dependency)
             except VersionConflict as e:
                 deps_needed.append(dependency)
                 errors.append(
@@ -175,6 +186,23 @@ def check_requirements(for_feature=None, _get_distribution=get_distribution):
         raise DependencyException(deps_needed)
 
 
+def _check_requirement(dependency_string):
+    """Parses a dependency string, and checks if the specified requirement is installed
+
+    Raises:
+        VersionConflict if the requirement is installed, but with the wrong version
+        DistributionNotFound if nothing is found to provide the requirement
+    """
+    req = Requirement.parse(dependency_string)
+
+    # first check if the markers specify that this requirement needs installing
+    if req.marker is not None and not req.marker.evaluate():
+        # not required for this environment
+        return
+
+    get_provider(req)
+
+
 if __name__ == "__main__":
     import sys
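
As a minimal sketch (not part of the patch) of how the new helper treats a marker-qualified requirement such as the eliot pin above:

    from pkg_resources import DistributionNotFound, Requirement, VersionConflict, get_provider

    req = Requirement.parse('eliot<1.8.0;python_version<"3.5.3"')
    if req.marker is not None and not req.marker.evaluate():
        # the marker is false on this interpreter, so the dependency is skipped
        pass
    else:
        try:
            get_provider(req)  # raises if the requirement is missing or at the wrong version
        except (DistributionNotFound, VersionConflict) as e:
            print("dependency problem: %s" % (e,))
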
 
diff --git a/synapse/rest/client/v1/base.py b/synapse/rest/client/v1/base.py
index c77d7aba68..dc63b661c0 100644
--- a/synapse/rest/client/v1/base.py
+++ b/synapse/rest/client/v1/base.py
@@ -19,7 +19,7 @@
 import logging
 import re
 
-from synapse.api.urls import CLIENT_PREFIX
+from synapse.api.urls import CLIENT_API_PREFIX
 from synapse.http.servlet import RestServlet
 from synapse.rest.client.transactions import HttpTransactionCache
 
@@ -36,12 +36,12 @@ def client_path_patterns(path_regex, releases=(0,), include_in_unstable=True):
     Returns:
         SRE_Pattern
     """
-    patterns = [re.compile("^" + CLIENT_PREFIX + path_regex)]
+    patterns = [re.compile("^" + CLIENT_API_PREFIX + "/api/v1" + path_regex)]
     if include_in_unstable:
-        unstable_prefix = CLIENT_PREFIX.replace("/api/v1", "/unstable")
+        unstable_prefix = CLIENT_API_PREFIX + "/unstable"
         patterns.append(re.compile("^" + unstable_prefix + path_regex))
     for release in releases:
-        new_prefix = CLIENT_PREFIX.replace("/api/v1", "/r%d" % release)
+        new_prefix = CLIENT_API_PREFIX + "/r%d" % (release,)
         patterns.append(re.compile("^" + new_prefix + path_regex))
     return patterns
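
A rough illustration of the patterns the rewritten helper now produces (assuming CLIENT_API_PREFIX is "/_matrix/client", which is not shown in this diff):

    import re

    CLIENT_API_PREFIX = "/_matrix/client"  # assumed value of synapse.api.urls.CLIENT_API_PREFIX
    path_regex = "/login$"

    patterns = [re.compile("^" + CLIENT_API_PREFIX + "/api/v1" + path_regex)]
    patterns.append(re.compile("^" + CLIENT_API_PREFIX + "/unstable" + path_regex))
    patterns.append(re.compile("^" + CLIENT_API_PREFIX + "/r%d" % (0,) + path_regex))

    assert patterns[0].match("/_matrix/client/api/v1/login")
    assert patterns[2].match("/_matrix/client/r0/login")
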
 
diff --git a/synapse/rest/client/v2_alpha/_base.py b/synapse/rest/client/v2_alpha/_base.py
index 77434937ff..24ac26bf03 100644
--- a/synapse/rest/client/v2_alpha/_base.py
+++ b/synapse/rest/client/v2_alpha/_base.py
@@ -21,13 +21,12 @@ import re
 from twisted.internet import defer
 
 from synapse.api.errors import InteractiveAuthIncompleteError
-from synapse.api.urls import CLIENT_V2_ALPHA_PREFIX
+from synapse.api.urls import CLIENT_API_PREFIX
 
 logger = logging.getLogger(__name__)
 
 
 def client_v2_patterns(path_regex, releases=(0,),
-                       v2_alpha=True,
                        unstable=True):
     """Creates a regex compiled client path with the correct client path
     prefix.
@@ -39,13 +38,11 @@ def client_v2_patterns(path_regex, releases=(0,),
         SRE_Pattern
     """
     patterns = []
-    if v2_alpha:
-        patterns.append(re.compile("^" + CLIENT_V2_ALPHA_PREFIX + path_regex))
     if unstable:
-        unstable_prefix = CLIENT_V2_ALPHA_PREFIX.replace("/v2_alpha", "/unstable")
+        unstable_prefix = CLIENT_API_PREFIX + "/unstable"
         patterns.append(re.compile("^" + unstable_prefix + path_regex))
     for release in releases:
-        new_prefix = CLIENT_V2_ALPHA_PREFIX.replace("/v2_alpha", "/r%d" % release)
+        new_prefix = CLIENT_API_PREFIX + "/r%d" % (release,)
         patterns.append(re.compile("^" + new_prefix + path_regex))
     return patterns
 
diff --git a/synapse/rest/client/v2_alpha/auth.py b/synapse/rest/client/v2_alpha/auth.py
index ac035c7735..4c380ab84d 100644
--- a/synapse/rest/client/v2_alpha/auth.py
+++ b/synapse/rest/client/v2_alpha/auth.py
@@ -19,7 +19,7 @@ from twisted.internet import defer
 
 from synapse.api.constants import LoginType
 from synapse.api.errors import SynapseError
-from synapse.api.urls import CLIENT_V2_ALPHA_PREFIX
+from synapse.api.urls import CLIENT_API_PREFIX
 from synapse.http.server import finish_request
 from synapse.http.servlet import RestServlet, parse_string
 
@@ -139,8 +139,8 @@ class AuthRestServlet(RestServlet):
         if stagetype == LoginType.RECAPTCHA:
             html = RECAPTCHA_TEMPLATE % {
                 'session': session,
-                'myurl': "%s/auth/%s/fallback/web" % (
-                    CLIENT_V2_ALPHA_PREFIX, LoginType.RECAPTCHA
+                'myurl': "%s/r0/auth/%s/fallback/web" % (
+                    CLIENT_API_PREFIX, LoginType.RECAPTCHA
                 ),
                 'sitekey': self.hs.config.recaptcha_public_key,
             }
@@ -159,8 +159,8 @@ class AuthRestServlet(RestServlet):
                     self.hs.config.public_baseurl,
                     self.hs.config.user_consent_version,
                 ),
-                'myurl': "%s/auth/%s/fallback/web" % (
-                    CLIENT_V2_ALPHA_PREFIX, LoginType.TERMS
+                'myurl': "%s/r0/auth/%s/fallback/web" % (
+                    CLIENT_API_PREFIX, LoginType.TERMS
                 ),
             }
             html_bytes = html.encode("utf8")
@@ -203,8 +203,8 @@ class AuthRestServlet(RestServlet):
             else:
                 html = RECAPTCHA_TEMPLATE % {
                     'session': session,
-                    'myurl': "%s/auth/%s/fallback/web" % (
-                        CLIENT_V2_ALPHA_PREFIX, LoginType.RECAPTCHA
+                    'myurl': "%s/r0/auth/%s/fallback/web" % (
+                        CLIENT_API_PREFIX, LoginType.RECAPTCHA
                     ),
                     'sitekey': self.hs.config.recaptcha_public_key,
                 }
@@ -240,8 +240,8 @@ class AuthRestServlet(RestServlet):
                         self.hs.config.public_baseurl,
                         self.hs.config.user_consent_version,
                     ),
-                    'myurl': "%s/auth/%s/fallback/web" % (
-                        CLIENT_V2_ALPHA_PREFIX, LoginType.TERMS
+                    'myurl': "%s/r0/auth/%s/fallback/web" % (
+                        CLIENT_API_PREFIX, LoginType.TERMS
                     ),
                 }
             html_bytes = html.encode("utf8")
diff --git a/synapse/rest/client/v2_alpha/devices.py b/synapse/rest/client/v2_alpha/devices.py
index 9b75bb1377..5a5be7c390 100644
--- a/synapse/rest/client/v2_alpha/devices.py
+++ b/synapse/rest/client/v2_alpha/devices.py
@@ -30,7 +30,7 @@ logger = logging.getLogger(__name__)
 
 
 class DevicesRestServlet(RestServlet):
-    PATTERNS = client_v2_patterns("/devices$", v2_alpha=False)
+    PATTERNS = client_v2_patterns("/devices$")
 
     def __init__(self, hs):
         """
@@ -56,7 +56,7 @@ class DeleteDevicesRestServlet(RestServlet):
     API for bulk deletion of devices. Accepts a JSON object with a devices
     key which lists the device_ids to delete. Requires user interactive auth.
     """
-    PATTERNS = client_v2_patterns("/delete_devices", v2_alpha=False)
+    PATTERNS = client_v2_patterns("/delete_devices")
 
     def __init__(self, hs):
         super(DeleteDevicesRestServlet, self).__init__()
@@ -95,7 +95,7 @@ class DeleteDevicesRestServlet(RestServlet):
 
 
 class DeviceRestServlet(RestServlet):
-    PATTERNS = client_v2_patterns("/devices/(?P<device_id>[^/]*)$", v2_alpha=False)
+    PATTERNS = client_v2_patterns("/devices/(?P<device_id>[^/]*)$")
 
     def __init__(self, hs):
         """
diff --git a/synapse/rest/client/v2_alpha/register.py b/synapse/rest/client/v2_alpha/register.py
index dc3e265bcd..042f636135 100644
--- a/synapse/rest/client/v2_alpha/register.py
+++ b/synapse/rest/client/v2_alpha/register.py
@@ -31,6 +31,7 @@ from synapse.api.errors import (
     SynapseError,
     UnrecognizedRequestError,
 )
+from synapse.config.ratelimiting import FederationRateLimitConfig
 from synapse.config.server import is_threepid_reserved
 from synapse.http.servlet import (
     RestServlet,
@@ -153,16 +154,18 @@ class UsernameAvailabilityRestServlet(RestServlet):
         self.registration_handler = hs.get_registration_handler()
         self.ratelimiter = FederationRateLimiter(
             hs.get_clock(),
-            # Time window of 2s
-            window_size=2000,
-            # Artificially delay requests if rate > sleep_limit/window_size
-            sleep_limit=1,
-            # Amount of artificial delay to apply
-            sleep_msec=1000,
-            # Error with 429 if more than reject_limit requests are queued
-            reject_limit=1,
-            # Allow 1 request at a time
-            concurrent_requests=1,
+            FederationRateLimitConfig(
+                # Time window of 2s
+                window_size=2000,
+                # Artificially delay requests if rate > sleep_limit/window_size
+                sleep_limit=1,
+                # Amount of artificial delay to apply
+                sleep_msec=1000,
+                # Error with 429 if more than reject_limit requests are queued
+                reject_limit=1,
+                # Allow 1 request at a time
+                concurrent_requests=1,
+            )
         )
 
     @defer.inlineCallbacks
@@ -345,18 +348,22 @@ class RegisterRestServlet(RestServlet):
         if self.hs.config.enable_registration_captcha:
             # only support 3PIDless registration if no 3PIDs are required
             if not require_email and not require_msisdn:
-                flows.extend([[LoginType.RECAPTCHA]])
+                # Also add a dummy flow here, otherwise if a client completes
+                # recaptcha first we'll assume they were going for this flow
+                # and complete the request, when they could have been trying to
+                # complete one of the flows with email/msisdn auth.
+                flows.extend([[LoginType.RECAPTCHA, LoginType.DUMMY]])
             # only support the email-only flow if we don't require MSISDN 3PIDs
             if not require_msisdn:
-                flows.extend([[LoginType.EMAIL_IDENTITY, LoginType.RECAPTCHA]])
+                flows.extend([[LoginType.RECAPTCHA, LoginType.EMAIL_IDENTITY]])
 
             if show_msisdn:
                 # only support the MSISDN-only flow if we don't require email 3PIDs
                 if not require_email:
-                    flows.extend([[LoginType.MSISDN, LoginType.RECAPTCHA]])
+                    flows.extend([[LoginType.RECAPTCHA, LoginType.MSISDN]])
                 # always let users provide both MSISDN & email
                 flows.extend([
-                    [LoginType.MSISDN, LoginType.EMAIL_IDENTITY, LoginType.RECAPTCHA],
+                    [LoginType.RECAPTCHA, LoginType.MSISDN, LoginType.EMAIL_IDENTITY],
                 ])
         else:
             # only support 3PIDless registration if no 3PIDs are required
@@ -379,7 +386,15 @@ class RegisterRestServlet(RestServlet):
         if self.hs.config.user_consent_at_registration:
             new_flows = []
             for flow in flows:
-                flow.append(LoginType.TERMS)
+                inserted = False
+                # m.login.terms should go near the end but before msisdn or email auth
+                for i, stage in enumerate(flow):
+                    if stage == LoginType.EMAIL_IDENTITY or stage == LoginType.MSISDN:
+                        flow.insert(i, LoginType.TERMS)
+                        inserted = True
+                        break
+                if not inserted:
+                    flow.append(LoginType.TERMS)
             flows.extend(new_flows)
 
         auth_result, params, session_id = yield self.auth_handler.check_auth(
@@ -391,13 +406,6 @@ class RegisterRestServlet(RestServlet):
         # the user-facing checks will probably already have happened in
         # /register/email/requestToken when we requested a 3pid, but that's not
         # guaranteed.
-        #
-        # Also check that we're not trying to register a 3pid that's already
-        # been registered.
-        #
-        # This has probably happened in /register/email/requestToken as well,
-        # but if a user hits this endpoint twice then clicks on each link from
-        # the two activation emails, they would register the same 3pid twice.
 
         if auth_result:
             for login_type in [LoginType.EMAIL_IDENTITY, LoginType.MSISDN]:
@@ -413,17 +421,6 @@ class RegisterRestServlet(RestServlet):
                             Codes.THREEPID_DENIED,
                         )
 
-                    existingUid = yield self.store.get_user_id_by_threepid(
-                        medium, address,
-                    )
-
-                    if existingUid is not None:
-                        raise SynapseError(
-                            400,
-                            "%s is already in use" % medium,
-                            Codes.THREEPID_IN_USE,
-                        )
-
         if registered_user_id is not None:
             logger.info(
                 "Already registered user ID %r for this session",
@@ -446,6 +443,28 @@ class RegisterRestServlet(RestServlet):
             if auth_result:
                 threepid = auth_result.get(LoginType.EMAIL_IDENTITY)
 
+                # Also check that we're not trying to register a 3pid that's already
+                # been registered.
+                #
+                # This has probably happened in /register/email/requestToken as well,
+                # but if a user hits this endpoint twice then clicks on each link from
+                # the two activation emails, they would register the same 3pid twice.
+                for login_type in [LoginType.EMAIL_IDENTITY, LoginType.MSISDN]:
+                    if login_type in auth_result:
+                        medium = auth_result[login_type]['medium']
+                        address = auth_result[login_type]['address']
+
+                        existingUid = yield self.store.get_user_id_by_threepid(
+                            medium, address,
+                        )
+
+                        if existingUid is not None:
+                            raise SynapseError(
+                                400,
+                                "%s is already in use" % medium,
+                                Codes.THREEPID_IN_USE,
+                            )
+
             (registered_user_id, _) = yield self.registration_handler.register(
                 localpart=desired_username,
                 password=new_password,
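
To make the reordered flows concrete, a hedged sketch of the interactive-auth flows produced when captcha is enabled, no 3pid is required and consent-at-registration is on (the stage strings are the standard Matrix identifiers behind LoginType, assumed here):

    flows = [
        ["m.login.recaptcha", "m.login.dummy"],
        ["m.login.recaptcha", "m.login.email.identity"],
    ]

    # user_consent_at_registration: insert m.login.terms before any email/msisdn
    # stage, otherwise append it to the end of the flow.
    for flow in flows:
        for i, stage in enumerate(flow):
            if stage in ("m.login.email.identity", "m.login.msisdn"):
                flow.insert(i, "m.login.terms")
                break
        else:
            flow.append("m.login.terms")

    assert flows == [
        ["m.login.recaptcha", "m.login.dummy", "m.login.terms"],
        ["m.login.recaptcha", "m.login.terms", "m.login.email.identity"],
    ]
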
diff --git a/synapse/rest/client/v2_alpha/room_upgrade_rest_servlet.py b/synapse/rest/client/v2_alpha/room_upgrade_rest_servlet.py
index 3db7ff8d1b..62b8de71fa 100644
--- a/synapse/rest/client/v2_alpha/room_upgrade_rest_servlet.py
+++ b/synapse/rest/client/v2_alpha/room_upgrade_rest_servlet.py
@@ -50,7 +50,6 @@ class RoomUpgradeRestServlet(RestServlet):
     PATTERNS = client_v2_patterns(
         # /rooms/$roomid/upgrade
         "/rooms/(?P<room_id>[^/]*)/upgrade$",
-        v2_alpha=False,
     )
 
     def __init__(self, hs):
diff --git a/synapse/rest/client/v2_alpha/sendtodevice.py b/synapse/rest/client/v2_alpha/sendtodevice.py
index a9e9a47a0b..21e9cef2d0 100644
--- a/synapse/rest/client/v2_alpha/sendtodevice.py
+++ b/synapse/rest/client/v2_alpha/sendtodevice.py
@@ -29,7 +29,6 @@ logger = logging.getLogger(__name__)
 class SendToDeviceRestServlet(servlet.RestServlet):
     PATTERNS = client_v2_patterns(
         "/sendToDevice/(?P<message_type>[^/]*)/(?P<txn_id>[^/]*)$",
-        v2_alpha=False
     )
 
     def __init__(self, hs):
diff --git a/synapse/rest/media/v1/media_repository.py b/synapse/rest/media/v1/media_repository.py
index bdffa97805..8569677355 100644
--- a/synapse/rest/media/v1/media_repository.py
+++ b/synapse/rest/media/v1/media_repository.py
@@ -444,6 +444,9 @@ class MediaRepository(object):
             )
             return
 
+        if thumbnailer.transpose_method is not None:
+            m_width, m_height = thumbnailer.transpose()
+
         if t_method == "crop":
             t_byte_source = thumbnailer.crop(t_width, t_height, t_type)
         elif t_method == "scale":
@@ -578,6 +581,12 @@ class MediaRepository(object):
             )
             return
 
+        if thumbnailer.transpose_method is not None:
+            m_width, m_height = yield logcontext.defer_to_thread(
+                self.hs.get_reactor(),
+                thumbnailer.transpose
+            )
+
         # We deduplicate the thumbnail sizes by ignoring the cropped versions if
         # they have the same dimensions of a scaled one.
         thumbnails = {}
diff --git a/synapse/rest/media/v1/thumbnailer.py b/synapse/rest/media/v1/thumbnailer.py
index a4b26c2587..3efd0d80fc 100644
--- a/synapse/rest/media/v1/thumbnailer.py
+++ b/synapse/rest/media/v1/thumbnailer.py
@@ -20,6 +20,17 @@ import PIL.Image as Image
 
 logger = logging.getLogger(__name__)
 
+EXIF_ORIENTATION_TAG = 0x0112
+EXIF_TRANSPOSE_MAPPINGS = {
+    2: Image.FLIP_LEFT_RIGHT,
+    3: Image.ROTATE_180,
+    4: Image.FLIP_TOP_BOTTOM,
+    5: Image.TRANSPOSE,
+    6: Image.ROTATE_270,
+    7: Image.TRANSVERSE,
+    8: Image.ROTATE_90
+}
+
 
 class Thumbnailer(object):
 
@@ -31,6 +42,30 @@ class Thumbnailer(object):
     def __init__(self, input_path):
         self.image = Image.open(input_path)
         self.width, self.height = self.image.size
+        self.transpose_method = None
+        try:
+            # We don't use ImageOps.exif_transpose since it crashes with big EXIF
+            image_exif = self.image._getexif()
+            if image_exif is not None:
+                image_orientation = image_exif.get(EXIF_ORIENTATION_TAG)
+                self.transpose_method = EXIF_TRANSPOSE_MAPPINGS.get(image_orientation)
+        except Exception as e:
+            # A lot of parsing errors can happen when parsing EXIF
+            logger.info("Error parsing image EXIF information: %s", e)
+
+    def transpose(self):
+        """Transpose the image using its EXIF Orientation tag
+
+        Returns:
+            Tuple[int, int]: (width, height) containing the new image size in pixels.
+        """
+        if self.transpose_method is not None:
+            self.image = self.image.transpose(self.transpose_method)
+            self.width, self.height = self.image.size
+            self.transpose_method = None
+            # We don't need EXIF any more
+            self.image.info["exif"] = None
+        return self.image.size
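
Outside of Synapse, the same EXIF handling can be sketched with Pillow directly (the file name and the reduced orientation map are illustrative only):

    import PIL.Image as Image

    EXIF_ORIENTATION_TAG = 0x0112  # same tag as above
    image = Image.open("photo.jpg")  # hypothetical input file
    try:
        exif = image._getexif() or {}
    except Exception:
        exif = {}  # EXIF parsing is best-effort, mirroring the patch
    method = {3: Image.ROTATE_180, 6: Image.ROTATE_270, 8: Image.ROTATE_90}.get(
        exif.get(EXIF_ORIENTATION_TAG)
    )
    if method is not None:
        image = image.transpose(method)
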
 
     def aspect(self, max_width, max_height):
         """Calculate the largest size that preserves aspect ratio which
diff --git a/synapse/storage/_base.py b/synapse/storage/_base.py
index 983ce026e1..fa6839ceca 100644
--- a/synapse/storage/_base.py
+++ b/synapse/storage/_base.py
@@ -1,5 +1,7 @@
 # -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
+# Copyright 2017-2018 New Vector Ltd
+# Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -227,6 +229,8 @@ class SQLBaseStore(object):
         # A set of tables that are not safe to use native upserts in.
         self._unsafe_to_upsert_tables = set(UNIQUE_INDEX_BACKGROUND_UPDATES.keys())
 
+        self._account_validity = self.hs.config.account_validity
+
         # We add the user_directory_search table to the blacklist on SQLite
         # because the existing search table does not have an index, making it
         # unsafe to use native upserts.
@@ -243,6 +247,14 @@ class SQLBaseStore(object):
                 self._check_safe_to_upsert,
             )
 
+        if self._account_validity.enabled:
+            self._clock.call_later(
+                0.0,
+                run_as_background_process,
+                "account_validity_set_expiration_dates",
+                self._set_expiration_date_when_missing,
+            )
+
     @defer.inlineCallbacks
     def _check_safe_to_upsert(self):
         """
@@ -275,6 +287,52 @@ class SQLBaseStore(object):
                 self._check_safe_to_upsert,
             )
 
+    @defer.inlineCallbacks
+    def _set_expiration_date_when_missing(self):
+        """
+        Retrieves the list of registered users that don't have an expiration date, and
+        adds an expiration date for each of them.
+        """
+
+        def select_users_with_no_expiration_date_txn(txn):
+            """Retrieves the list of registered users with no expiration date from the
+            database.
+            """
+            sql = (
+                "SELECT users.name FROM users"
+                " LEFT JOIN account_validity ON (users.name = account_validity.user_id)"
+                " WHERE account_validity.user_id is NULL;"
+            )
+            txn.execute(sql, [])
+
+            res = self.cursor_to_dict(txn)
+            if res:
+                for user in res:
+                    self.set_expiration_date_for_user_txn(txn, user["name"])
+
+        yield self.runInteraction(
+            "get_users_with_no_expiration_date",
+            select_users_with_no_expiration_date_txn,
+        )
+
+    def set_expiration_date_for_user_txn(self, txn, user_id):
+        """Sets an expiration date to the account with the given user ID.
+
+        Args:
+             user_id (str): User ID to set an expiration date for.
+        """
+        now_ms = self._clock.time_msec()
+        expiration_ts = now_ms + self._account_validity.period
+        self._simple_insert_txn(
+            txn,
+            "account_validity",
+            values={
+                "user_id": user_id,
+                "expiration_ts_ms": expiration_ts,
+                "email_sent": False,
+            },
+        )
+
     def start_profiling(self):
         self._previous_loop_ts = self._clock.time_msec()
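
The LEFT JOIN above can be exercised in isolation with an in-memory SQLite database (the schema below is a simplified stand-in for the real tables):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE users (name TEXT);
        CREATE TABLE account_validity (user_id TEXT, expiration_ts_ms BIGINT, email_sent BOOLEAN);
        INSERT INTO users VALUES ('@a:test'), ('@b:test');
        INSERT INTO account_validity VALUES ('@a:test', 12345, 0);
    """)
    rows = db.execute(
        "SELECT users.name FROM users"
        " LEFT JOIN account_validity ON (users.name = account_validity.user_id)"
        " WHERE account_validity.user_id is NULL;"
    ).fetchall()
    assert rows == [('@b:test',)]  # only the user without an expiration date
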
 
diff --git a/synapse/storage/events.py b/synapse/storage/events.py
index b025ebc926..2ffc27ff41 100644
--- a/synapse/storage/events.py
+++ b/synapse/storage/events.py
@@ -575,10 +575,11 @@ class EventsStore(
 
         def _get_events(txn, batch):
             sql = """
-            SELECT prev_event_id
+            SELECT prev_event_id, internal_metadata
             FROM event_edges
                 INNER JOIN events USING (event_id)
                 LEFT JOIN rejections USING (event_id)
+                LEFT JOIN event_json USING (event_id)
             WHERE
                 prev_event_id IN (%s)
                 AND NOT events.outlier
@@ -588,7 +589,11 @@ class EventsStore(
             )
 
             txn.execute(sql, batch)
-            results.extend(r[0] for r in txn)
+            results.extend(
+                r[0]
+                for r in txn
+                if not json.loads(r[1]).get("soft_failed")
+            )
 
         for chunk in batch_iter(event_ids, 100):
             yield self.runInteraction("_get_events_which_are_prevs", _get_events, chunk)
@@ -1325,6 +1330,9 @@ class EventsStore(
                     txn, event.room_id, event.redacts
                 )
 
+                # Remove from relations table.
+                self._handle_redaction(txn, event.redacts)
+
         # Update the event_forward_extremities, event_backward_extremities and
         # event_edges tables.
         self._handle_mult_prev_events(
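
A small illustration (row contents assumed) of the soft-failure filter added in the first hunk of this file:

    import json

    # (prev_event_id, internal_metadata) pairs as they would come back from the query
    rows = [
        ("$event_a", '{"soft_failed": true}'),
        ("$event_b", "{}"),
    ]
    results = [r[0] for r in rows if not json.loads(r[1]).get("soft_failed")]
    assert results == ["$event_b"]
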
diff --git a/synapse/storage/registration.py b/synapse/storage/registration.py
index 03a06a83d6..4cf159ba81 100644
--- a/synapse/storage/registration.py
+++ b/synapse/storage/registration.py
@@ -1,5 +1,7 @@
 # -*- coding: utf-8 -*-
-# Copyright 2014 - 2016 OpenMarket Ltd
+# Copyright 2014-2016 OpenMarket Ltd
+# Copyright 2017-2018 New Vector Ltd
+# Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -725,17 +727,7 @@ class RegistrationStore(
             raise StoreError(400, "User ID already taken.", errcode=Codes.USER_IN_USE)
 
         if self._account_validity.enabled:
-            now_ms = self.clock.time_msec()
-            expiration_ts = now_ms + self._account_validity.period
-            self._simple_insert_txn(
-                txn,
-                "account_validity",
-                values={
-                    "user_id": user_id,
-                    "expiration_ts_ms": expiration_ts,
-                    "email_sent": False,
-                }
-            )
+            self.set_expiration_date_for_user_txn(txn, user_id)
 
         if token:
             # it's possible for this to get a conflict, but only for a single user
diff --git a/synapse/storage/relations.py b/synapse/storage/relations.py
index 42f587a7d8..4c83800cca 100644
--- a/synapse/storage/relations.py
+++ b/synapse/storage/relations.py
@@ -350,9 +350,7 @@ class RelationsWorkerStore(SQLBaseStore):
         """
 
         def _get_applicable_edit_txn(txn):
-            txn.execute(
-                sql, (event_id, RelationTypes.REPLACES,)
-            )
+            txn.execute(sql, (event_id, RelationTypes.REPLACE))
             row = txn.fetchone()
             if row:
                 return row[0]
@@ -367,6 +365,50 @@ class RelationsWorkerStore(SQLBaseStore):
         edit_event = yield self.get_event(edit_id, allow_none=True)
         defer.returnValue(edit_event)
 
+    def has_user_annotated_event(self, parent_id, event_type, aggregation_key, sender):
+        """Check if a user has already annotated an event with the same key
+        (e.g. already liked an event).
+
+        Args:
+            parent_id (str): The event being annotated
+            event_type (str): The event type of the annotation
+            aggregation_key (str): The aggregation key of the annotation
+            sender (str): The sender of the annotation
+
+        Returns:
+            Deferred[bool]
+        """
+
+        sql = """
+            SELECT 1 FROM event_relations
+            INNER JOIN events USING (event_id)
+            WHERE
+                relates_to_id = ?
+                AND relation_type = ?
+                AND type = ?
+                AND sender = ?
+                AND aggregation_key = ?
+            LIMIT 1;
+        """
+
+        def _get_if_user_has_annotated_event(txn):
+            txn.execute(
+                sql,
+                (
+                    parent_id,
+                    RelationTypes.ANNOTATION,
+                    event_type,
+                    sender,
+                    aggregation_key,
+                ),
+            )
+
+            return bool(txn.fetchone())
+
+        return self.runInteraction(
+            "get_if_user_has_annotated_event", _get_if_user_has_annotated_event
+        )
+
 
 class RelationsStore(RelationsWorkerStore):
     def _handle_event_relations(self, txn, event):
@@ -384,8 +426,8 @@ class RelationsStore(RelationsWorkerStore):
         rel_type = relation.get("rel_type")
         if rel_type not in (
             RelationTypes.ANNOTATION,
-            RelationTypes.REFERENCES,
-            RelationTypes.REPLACES,
+            RelationTypes.REFERENCE,
+            RelationTypes.REPLACE,
         ):
             # Unknown relation type
             return
@@ -413,5 +455,22 @@ class RelationsStore(RelationsWorkerStore):
             self.get_aggregation_groups_for_event.invalidate_many, (parent_id,)
         )
 
-        if rel_type == RelationTypes.REPLACES:
+        if rel_type == RelationTypes.REPLACE:
             txn.call_after(self.get_applicable_edit.invalidate, (parent_id,))
+
+    def _handle_redaction(self, txn, redacted_event_id):
+        """Handles receiving a redaction and checking whether we need to remove
+        any redacted relations from the database.
+
+        Args:
+            txn
+            redacted_event_id (str): The event that was redacted.
+        """
+
+        self._simple_delete_txn(
+            txn,
+            table="event_relations",
+            keyvalues={
+                "event_id": redacted_event_id,
+            }
+        )
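
As a hedged usage sketch (the calling handler is not part of this diff), has_user_annotated_event lends itself to rejecting duplicate reactions before the annotation event is created, roughly like this:

    from twisted.internet import defer

    from synapse.api.errors import SynapseError

    @defer.inlineCallbacks
    def _reject_double_annotation(store, parent_id, event_type, aggregation_key, sender):
        # hypothetical helper, not taken from the patch
        already_annotated = yield store.has_user_annotated_event(
            parent_id, event_type, aggregation_key, sender
        )
        if already_annotated:
            raise SynapseError(400, "Can't send same reaction twice")
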
diff --git a/synapse/storage/stream.py b/synapse/storage/stream.py
index 163363c0c2..529ad4ea79 100644
--- a/synapse/storage/stream.py
+++ b/synapse/storage/stream.py
@@ -77,14 +77,27 @@ def generate_pagination_where_clause(
 
     would be generated for dir=b, from_token=(6, 7) and to_token=(5, 3).
 
+    Note that tokens are considered to be after the row they are in, e.g. if
+    a row A has a token T, then we consider A to be before T. This convention
+    is important when figuring out inequalities for the generated SQL, and
+    produces the following result:
+        - If paginating forwards then we exclude any rows matching the from
+          token, but include those that match the to token.
+        - If paginating backwards then we include any rows matching the from
+          token, but exclude those that match the to token.
+
     Args:
         direction (str): Whether we're paginating backwards("b") or
             forwards ("f").
         column_names (tuple[str, str]): The column names to bound. Must *not*
             be user defined as these get inserted directly into the SQL
             statement without escapes.
-        from_token (tuple[int, int]|None)
-        to_token (tuple[int, int]|None)
+        from_token (tuple[int, int]|None): The start point for the pagination.
+            This is an exclusive minimum bound if direction is "f", and an
+            inclusive maximum bound if direction is "b".
+        to_token (tuple[int, int]|None): The end point for the pagination.
+            This is an inclusive maximum bound if direction is "f", and an
+            exclusive minimum bound if direction is "b".
         engine: The database engine to generate the clauses for
 
     Returns:
@@ -131,7 +144,9 @@ def _make_generic_sql_bound(bound, column_names, values, engine):
         names (tuple[str, str]): The column names. Must *not* be user defined
             as these get inserted directly into the SQL statement without
             escapes.
-        values (tuple[int, int]): The values to bound the columns by.
+        values (tuple[int|None, int]): The values to bound the columns by. If
+            the first value is None then we only create a bound on the second
+            column.
         engine: The database engine to generate the SQL for
 
     Returns:
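
A toy model of the token convention documented above (pure tuple comparisons; the real SQL bounds generated below are more involved):

    def forward_includes(row, from_token, to_token):
        # forwards: rows matching the from token are excluded, rows matching the to token included
        return row > from_token and row <= to_token

    def backward_includes(row, from_token, to_token):
        # backwards: rows matching the from token are included, rows matching the to token excluded
        return row <= from_token and row > to_token

    assert not forward_includes((5, 3), (5, 3), (6, 7))   # row at the from token is excluded
    assert forward_includes((6, 7), (5, 3), (6, 7))       # row at the to token is included
    assert backward_includes((6, 7), (6, 7), (5, 3))      # row at the from token is included
    assert not backward_includes((5, 3), (6, 7), (5, 3))  # row at the to token is excluded
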
diff --git a/synapse/util/ratelimitutils.py b/synapse/util/ratelimitutils.py
index 7deb38f2a7..b146d137f4 100644
--- a/synapse/util/ratelimitutils.py
+++ b/synapse/util/ratelimitutils.py
@@ -30,31 +30,14 @@ logger = logging.getLogger(__name__)
 
 
 class FederationRateLimiter(object):
-    def __init__(self, clock, window_size, sleep_limit, sleep_msec,
-                 reject_limit, concurrent_requests):
+    def __init__(self, clock, config):
         """
         Args:
             clock (Clock)
-            window_size (int): The window size in milliseconds.
-            sleep_limit (int): The number of requests received in the last
-                `window_size` milliseconds before we artificially start
-                delaying processing of requests.
-            sleep_msec (int): The number of milliseconds to delay processing
-                of incoming requests by.
-            reject_limit (int): The maximum number of requests that are can be
-                queued for processing before we start rejecting requests with
-                a 429 Too Many Requests response.
-            concurrent_requests (int): The number of concurrent requests to
-                process.
+            config (FederationRateLimitConfig)
         """
         self.clock = clock
-
-        self.window_size = window_size
-        self.sleep_limit = sleep_limit
-        self.sleep_msec = sleep_msec
-        self.reject_limit = reject_limit
-        self.concurrent_requests = concurrent_requests
-
+        self._config = config
         self.ratelimiters = {}
 
     def ratelimit(self, host):
@@ -76,25 +59,25 @@ class FederationRateLimiter(object):
             host,
             _PerHostRatelimiter(
                 clock=self.clock,
-                window_size=self.window_size,
-                sleep_limit=self.sleep_limit,
-                sleep_msec=self.sleep_msec,
-                reject_limit=self.reject_limit,
-                concurrent_requests=self.concurrent_requests,
+                config=self._config,
             )
         ).ratelimit()
 
 
 class _PerHostRatelimiter(object):
-    def __init__(self, clock, window_size, sleep_limit, sleep_msec,
-                 reject_limit, concurrent_requests):
+    def __init__(self, clock, config):
+        """
+        Args:
+            clock (Clock)
+            config (FederationRateLimitConfig)
+        """
         self.clock = clock
 
-        self.window_size = window_size
-        self.sleep_limit = sleep_limit
-        self.sleep_sec = sleep_msec / 1000.0
-        self.reject_limit = reject_limit
-        self.concurrent_requests = concurrent_requests
+        self.window_size = config.window_size
+        self.sleep_limit = config.sleep_limit
+        self.sleep_sec = config.sleep_delay / 1000.0
+        self.reject_limit = config.reject_limit
+        self.concurrent_requests = config.concurrent
 
         # request_id objects for requests which have been slept
         self.sleeping_requests = set()
diff --git a/tests/handlers/test_register.py b/tests/handlers/test_register.py
index 1c253d0579..5ffba2ca7a 100644
--- a/tests/handlers/test_register.py
+++ b/tests/handlers/test_register.py
@@ -228,3 +228,10 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
     def test_register_not_support_user(self):
         res = self.get_success(self.handler.register(localpart='user'))
         self.assertFalse(self.store.is_support_user(res[0]))
+
+    def test_invalid_user_id_length(self):
+        invalid_user_id = "x" * 256
+        self.get_failure(
+            self.handler.register(localpart=invalid_user_id),
+            SynapseError
+        )
diff --git a/tests/rest/client/v1/test_rooms.py b/tests/rest/client/v1/test_rooms.py
index 6220172cde..5f75ad7579 100644
--- a/tests/rest/client/v1/test_rooms.py
+++ b/tests/rest/client/v1/test_rooms.py
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
+# Copyright 2019 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -24,7 +25,7 @@ from twisted.internet import defer
 
 import synapse.rest.admin
 from synapse.api.constants import Membership
-from synapse.rest.client.v1 import login, room
+from synapse.rest.client.v1 import login, profile, room
 
 from tests import unittest
 
@@ -936,3 +937,70 @@ class PublicRoomsRestrictedTestCase(unittest.HomeserverTestCase):
         request, channel = self.make_request("GET", self.url, access_token=tok)
         self.render(request)
         self.assertEqual(channel.code, 200, channel.result)
+
+
+class PerRoomProfilesForbiddenTestCase(unittest.HomeserverTestCase):
+
+    servlets = [
+        synapse.rest.admin.register_servlets_for_client_rest_resource,
+        room.register_servlets,
+        login.register_servlets,
+        profile.register_servlets,
+    ]
+
+    def make_homeserver(self, reactor, clock):
+        config = self.default_config()
+        config["allow_per_room_profiles"] = False
+        self.hs = self.setup_test_homeserver(config=config)
+
+        return self.hs
+
+    def prepare(self, reactor, clock, homeserver):
+        self.user_id = self.register_user("test", "test")
+        self.tok = self.login("test", "test")
+
+        # Set a profile for the test user
+        self.displayname = "test user"
+        data = {
+            "displayname": self.displayname,
+        }
+        request_data = json.dumps(data)
+        request, channel = self.make_request(
+            "PUT",
+            "/_matrix/client/r0/profile/%s/displayname" % (self.user_id,),
+            request_data,
+            access_token=self.tok,
+        )
+        self.render(request)
+        self.assertEqual(channel.code, 200, channel.result)
+
+        self.room_id = self.helper.create_room_as(self.user_id, tok=self.tok)
+
+    def test_per_room_profile_forbidden(self):
+        data = {
+            "membership": "join",
+            "displayname": "other test user"
+        }
+        request_data = json.dumps(data)
+        request, channel = self.make_request(
+            "PUT",
+            "/_matrix/client/r0/rooms/%s/state/m.room.member/%s" % (
+                self.room_id, self.user_id,
+            ),
+            request_data,
+            access_token=self.tok,
+        )
+        self.render(request)
+        self.assertEqual(channel.code, 200, channel.result)
+        event_id = channel.json_body["event_id"]
+
+        request, channel = self.make_request(
+            "GET",
+            "/_matrix/client/r0/rooms/%s/event/%s" % (self.room_id, event_id),
+            access_token=self.tok,
+        )
+        self.render(request)
+        self.assertEqual(channel.code, 200, channel.result)
+
+        res_displayname = channel.json_body["content"]["displayname"]
+        self.assertEqual(res_displayname, self.displayname, channel.result)
diff --git a/tests/rest/client/v2_alpha/test_auth.py b/tests/rest/client/v2_alpha/test_auth.py
index ad7d476401..b9ef46e8fb 100644
--- a/tests/rest/client/v2_alpha/test_auth.py
+++ b/tests/rest/client/v2_alpha/test_auth.py
@@ -92,7 +92,14 @@ class FallbackAuthTests(unittest.HomeserverTestCase):
         self.assertEqual(len(self.recaptcha_attempts), 1)
         self.assertEqual(self.recaptcha_attempts[0][0]["response"], "a")
 
-        # Now we have fufilled the recaptcha fallback step, we can then send a
+        # also complete the dummy auth
+        request, channel = self.make_request(
+            "POST", "register", {"auth": {"session": session, "type": "m.login.dummy"}}
+        )
+        self.render(request)
+
+        # Now we should have fulfilled a complete auth flow, including
+        # the recaptcha fallback step, so we can then send a
         # request to the register API with the session in the authdict.
         request, channel = self.make_request(
             "POST", "register", {"auth": {"session": session}}
diff --git a/tests/rest/client/v2_alpha/test_register.py b/tests/rest/client/v2_alpha/test_register.py
index 65685883db..d4a1d4d50c 100644
--- a/tests/rest/client/v2_alpha/test_register.py
+++ b/tests/rest/client/v2_alpha/test_register.py
@@ -1,3 +1,20 @@
+# -*- coding: utf-8 -*-
+# Copyright 2014-2016 OpenMarket Ltd
+# Copyright 2017-2018 New Vector Ltd
+# Copyright 2019 The Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import datetime
 import json
 import os
@@ -409,3 +426,41 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase):
         self.assertEquals(channel.result["code"], b"200", channel.result)
 
         self.assertEqual(len(self.email_attempts), 1)
+
+
+class AccountValidityBackgroundJobTestCase(unittest.HomeserverTestCase):
+
+    servlets = [
+        synapse.rest.admin.register_servlets_for_client_rest_resource,
+    ]
+
+    def make_homeserver(self, reactor, clock):
+        self.validity_period = 10
+
+        config = self.default_config()
+
+        config["enable_registration"] = True
+        config["account_validity"] = {
+            "enabled": False,
+        }
+
+        self.hs = self.setup_test_homeserver(config=config)
+        self.hs.config.account_validity.period = self.validity_period
+
+        self.store = self.hs.get_datastore()
+
+        return self.hs
+
+    def test_background_job(self):
+        """
+        Tests whether the account validity startup background job does the right thing,
+        which is attaching an expiration date to every account that doesn't already have
+        one.
+        """
+        user_id = self.register_user("kermit", "user")
+
+        now_ms = self.hs.clock.time_msec()
+        self.get_success(self.store._set_expiration_date_when_missing())
+
+        res = self.get_success(self.store.get_expiration_ts_for_user(user_id))
+        self.assertEqual(res, now_ms + self.validity_period)
diff --git a/tests/rest/client/v2_alpha/test_relations.py b/tests/rest/client/v2_alpha/test_relations.py
index cd965167f8..43b3049daa 100644
--- a/tests/rest/client/v2_alpha/test_relations.py
+++ b/tests/rest/client/v2_alpha/test_relations.py
@@ -90,6 +90,15 @@ class RelationsTestCase(unittest.HomeserverTestCase):
         channel = self._send_relation(RelationTypes.ANNOTATION, EventTypes.Member)
         self.assertEquals(400, channel.code, channel.json_body)
 
+    def test_deny_double_react(self):
+        """Test that we deny a second annotation with the same key from the same user
+        """
+        channel = self._send_relation(RelationTypes.ANNOTATION, "m.reaction", "a")
+        self.assertEquals(200, channel.code, channel.json_body)
+
+        channel = self._send_relation(RelationTypes.ANNOTATION, "m.reaction", "a")
+        self.assertEquals(400, channel.code, channel.json_body)
+
     def test_basic_paginate_relations(self):
         """Tests that calling the pagination API correctly returns the latest relations.
         """
@@ -234,14 +243,30 @@ class RelationsTestCase(unittest.HomeserverTestCase):
         """Test that we can paginate within an annotation group.
         """
 
+        # We need to create ten separate users to send each reaction.
+        access_tokens = [self.user_token, self.user2_token]
+        idx = 0
+        while len(access_tokens) < 10:
+            user_id, token = self._create_user("test" + str(idx))
+            idx += 1
+
+            self.helper.join(self.room, user=user_id, tok=token)
+            access_tokens.append(token)
+
+        idx = 0
         expected_event_ids = []
         for _ in range(10):
             channel = self._send_relation(
-                RelationTypes.ANNOTATION, "m.reaction", key=u"👍"
+                RelationTypes.ANNOTATION,
+                "m.reaction",
+                key=u"👍",
+                access_token=access_tokens[idx],
             )
             self.assertEquals(200, channel.code, channel.json_body)
             expected_event_ids.append(channel.json_body["event_id"])
 
+            idx += 1
+
         # Also send a different type of reaction so that we test we don't see it
         channel = self._send_relation(RelationTypes.ANNOTATION, "m.reaction", key="a")
         self.assertEquals(200, channel.code, channel.json_body)
@@ -320,14 +345,51 @@ class RelationsTestCase(unittest.HomeserverTestCase):
             },
         )
 
+    def test_aggregation_redactions(self):
+        """Test that annotations get correctly aggregated after a redaction.
+        """
+
+        channel = self._send_relation(RelationTypes.ANNOTATION, "m.reaction", "a")
+        self.assertEquals(200, channel.code, channel.json_body)
+        to_redact_event_id = channel.json_body["event_id"]
+
+        channel = self._send_relation(
+            RelationTypes.ANNOTATION, "m.reaction", "a", access_token=self.user2_token
+        )
+        self.assertEquals(200, channel.code, channel.json_body)
+
+        # Now let's redact one of the 'a' reactions
+        request, channel = self.make_request(
+            "POST",
+            "/_matrix/client/r0/rooms/%s/redact/%s" % (self.room, to_redact_event_id),
+            access_token=self.user_token,
+            content={},
+        )
+        self.render(request)
+        self.assertEquals(200, channel.code, channel.json_body)
+
+        request, channel = self.make_request(
+            "GET",
+            "/_matrix/client/unstable/rooms/%s/aggregations/%s"
+            % (self.room, self.parent_id),
+            access_token=self.user_token,
+        )
+        self.render(request)
+        self.assertEquals(200, channel.code, channel.json_body)
+
+        self.assertEquals(
+            channel.json_body,
+            {"chunk": [{"type": "m.reaction", "key": "a", "count": 1}]},
+        )
+
     def test_aggregation_must_be_annotation(self):
         """Test that aggregations must be annotations.
         """
 
         request, channel = self.make_request(
             "GET",
-            "/_matrix/client/unstable/rooms/%s/aggregations/%s/m.replace?limit=1"
-            % (self.room, self.parent_id),
+            "/_matrix/client/unstable/rooms/%s/aggregations/%s/%s?limit=1"
+            % (self.room, self.parent_id, RelationTypes.REPLACE),
             access_token=self.user_token,
         )
         self.render(request)
@@ -349,11 +411,11 @@ class RelationsTestCase(unittest.HomeserverTestCase):
         channel = self._send_relation(RelationTypes.ANNOTATION, "m.reaction", "b")
         self.assertEquals(200, channel.code, channel.json_body)
 
-        channel = self._send_relation(RelationTypes.REFERENCES, "m.room.test")
+        channel = self._send_relation(RelationTypes.REFERENCE, "m.room.test")
         self.assertEquals(200, channel.code, channel.json_body)
         reply_1 = channel.json_body["event_id"]
 
-        channel = self._send_relation(RelationTypes.REFERENCES, "m.room.test")
+        channel = self._send_relation(RelationTypes.REFERENCE, "m.room.test")
         self.assertEquals(200, channel.code, channel.json_body)
         reply_2 = channel.json_body["event_id"]
 
@@ -374,7 +436,7 @@ class RelationsTestCase(unittest.HomeserverTestCase):
                         {"type": "m.reaction", "key": "b", "count": 1},
                     ]
                 },
-                RelationTypes.REFERENCES: {
+                RelationTypes.REFERENCE: {
                     "chunk": [{"event_id": reply_1}, {"event_id": reply_2}]
                 },
             },
@@ -386,7 +448,7 @@ class RelationsTestCase(unittest.HomeserverTestCase):
 
         new_body = {"msgtype": "m.text", "body": "I've been edited!"}
         channel = self._send_relation(
-            RelationTypes.REPLACES,
+            RelationTypes.REPLACE,
             "m.room.message",
             content={"msgtype": "m.text", "body": "foo", "m.new_content": new_body},
         )
@@ -406,7 +468,7 @@ class RelationsTestCase(unittest.HomeserverTestCase):
 
         self.assertEquals(
             channel.json_body["unsigned"].get("m.relations"),
-            {RelationTypes.REPLACES: {"event_id": edit_event_id}},
+            {RelationTypes.REPLACE: {"event_id": edit_event_id}},
         )
 
     def test_multi_edit(self):
@@ -415,7 +477,7 @@ class RelationsTestCase(unittest.HomeserverTestCase):
         """
 
         channel = self._send_relation(
-            RelationTypes.REPLACES,
+            RelationTypes.REPLACE,
             "m.room.message",
             content={
                 "msgtype": "m.text",
@@ -427,7 +489,7 @@ class RelationsTestCase(unittest.HomeserverTestCase):
 
         new_body = {"msgtype": "m.text", "body": "I've been edited!"}
         channel = self._send_relation(
-            RelationTypes.REPLACES,
+            RelationTypes.REPLACE,
             "m.room.message",
             content={"msgtype": "m.text", "body": "foo", "m.new_content": new_body},
         )
@@ -436,7 +498,7 @@ class RelationsTestCase(unittest.HomeserverTestCase):
         edit_event_id = channel.json_body["event_id"]
 
         channel = self._send_relation(
-            RelationTypes.REPLACES,
+            RelationTypes.REPLACE,
             "m.room.message.WRONG_TYPE",
             content={
                 "msgtype": "m.text",
@@ -458,7 +520,7 @@ class RelationsTestCase(unittest.HomeserverTestCase):
 
         self.assertEquals(
             channel.json_body["unsigned"].get("m.relations"),
-            {RelationTypes.REPLACES: {"event_id": edit_event_id}},
+            {RelationTypes.REPLACE: {"event_id": edit_event_id}},
         )
 
     def _send_relation(
diff --git a/tests/test_terms_auth.py b/tests/test_terms_auth.py
index f412985d2c..52739fbabc 100644
--- a/tests/test_terms_auth.py
+++ b/tests/test_terms_auth.py
@@ -59,7 +59,7 @@ class TermsTestCase(unittest.HomeserverTestCase):
         for flow in channel.json_body["flows"]:
             self.assertIsInstance(flow["stages"], list)
             self.assertTrue(len(flow["stages"]) > 0)
-            self.assertEquals(flow["stages"][-1], "m.login.terms")
+            self.assertTrue("m.login.terms" in flow["stages"])
 
         expected_params = {
             "m.login.terms": {
diff --git a/tests/utils.py b/tests/utils.py
index f38533a0c7..200c1ceabe 100644
--- a/tests/utils.py
+++ b/tests/utils.py
@@ -134,10 +134,6 @@ def default_config(name, parse=False):
         "email_enable_notifs": False,
         "block_non_admin_invites": False,
         "federation_domain_whitelist": None,
-        "federation_rc_reject_limit": 10,
-        "federation_rc_sleep_limit": 10,
-        "federation_rc_sleep_delay": 100,
-        "federation_rc_concurrent": 10,
         "filter_timeline_limit": 5000,
         "user_directory_search_all_users": False,
         "user_consent_server_notice_content": None,
@@ -156,8 +152,13 @@ def default_config(name, parse=False):
         "mau_stats_only": False,
         "mau_limits_reserved_threepids": [],
         "admin_contact": None,
-        "rc_messages_per_second": 10000,
-        "rc_message_burst_count": 10000,
+        "rc_federation": {
+            "reject_limit": 10,
+            "sleep_limit": 10,
+            "sleep_delay": 10,
+            "concurrent": 10,
+        },
+        "rc_message": {"per_second": 10000, "burst_count": 10000},
         "rc_registration": {"per_second": 10000, "burst_count": 10000},
         "rc_login": {
             "address": {"per_second": 10000, "burst_count": 10000},
@@ -375,12 +376,7 @@ def register_federation_servlets(hs, resource):
         resource=resource,
         authenticator=federation_server.Authenticator(hs),
         ratelimiter=FederationRateLimiter(
-            hs.get_clock(),
-            window_size=hs.config.federation_rc_window_size,
-            sleep_limit=hs.config.federation_rc_sleep_limit,
-            sleep_msec=hs.config.federation_rc_sleep_delay,
-            reject_limit=hs.config.federation_rc_reject_limit,
-            concurrent_requests=hs.config.federation_rc_concurrent,
+            hs.get_clock(), config=hs.config.rc_federation
         ),
     )
 
diff --git a/tox.ini b/tox.ini
index d0e519ce46..543b232ae7 100644
--- a/tox.ini
+++ b/tox.ini
@@ -94,7 +94,7 @@ commands =
     # Make all greater-thans equals so we test the oldest version of our direct
     # dependencies, but make the pyopenssl 17.0, which can work against an
     # OpenSSL 1.1 compiled cryptography (as older ones don't compile on Travis).
-    /bin/sh -c 'python -m synapse.python_dependencies | sed -e "s/>=/==/g" -e "s/psycopg2==2.6//" -e "s/pyopenssl==16.0.0/pyopenssl==17.0.0/" | xargs pip install'
+    /bin/sh -c 'python -m synapse.python_dependencies | sed -e "s/>=/==/g" -e "s/psycopg2==2.6//" -e "s/pyopenssl==16.0.0/pyopenssl==17.0.0/" | xargs -d"\n" pip install'
 
     # Add this so that coverage will run on subprocesses
     /bin/sh -c 'echo "import coverage; coverage.process_startup()" > {envsitepackagesdir}/../sitecustomize.py'