Diffstat (limited to 'docs')
-rw-r--r--  docs/application_services.rst   3
-rw-r--r--  docs/log_contexts.rst          10
-rw-r--r--  docs/replication.rst           58
-rw-r--r--  docs/url_previews.rst          74
4 files changed, 143 insertions, 2 deletions
diff --git a/docs/application_services.rst b/docs/application_services.rst
index 7e87ac9ad6..fbc0c7e960 100644
--- a/docs/application_services.rst
+++ b/docs/application_services.rst
@@ -32,5 +32,4 @@ The format of the AS configuration file is as follows:
 
 See the spec_ for further details on how application services work.
 
-.. _spec: https://github.com/matrix-org/matrix-doc/blob/master/specification/25_application_service_api.rst#application-service-api
-
+.. _spec: https://matrix.org/docs/spec/application_service/unstable.html
diff --git a/docs/log_contexts.rst b/docs/log_contexts.rst
new file mode 100644
index 0000000000..0046e171be
--- /dev/null
+++ b/docs/log_contexts.rst
@@ -0,0 +1,10 @@
+What do I do about "Unexpected logging context" debug log-lines everywhere?
+
+<Mjark> The logging context lives in thread local storage
+<Mjark> Sometimes it gets out of sync with what it should actually be, usually because something scheduled something to run on the reactor without preserving the logging context.
+<Matthew> what is the impact of it getting out of sync? and how and when should we preserve log context?
+<Mjark> The impact is that some of the CPU and database metrics will be under-reported, and some log lines will be mis-attributed.
+<Mjark> It should happen auto-magically in all the APIs that do IO or otherwise defer to the reactor.
+<Erik> Mjark: the other place is if we branch, e.g. using defer.gatherResults
+
+Unanswered: how and when should we preserve log context?
\ No newline at end of file
diff --git a/docs/replication.rst b/docs/replication.rst
new file mode 100644
index 0000000000..7e37e71987
--- /dev/null
+++ b/docs/replication.rst
@@ -0,0 +1,58 @@
+Replication Architecture
+========================
+
+Motivation
+----------
+
+We'd like to be able to split some of the work that synapse does into multiple
+python processes. In theory multiple synapse processes could share a single
+postgresql database and we'd scale up by running more synapse processes.
+However, much of synapse assumes that only one process is interacting with the
+database: for assigning unique identifiers when inserting into tables, for
+notifying components about new updates, and for invalidating its caches.
+
+So running multiple copies of the current code isn't an option. One way to
+run multiple processes would be to have a single writer process and multiple
+reader processes connected to the same database. In order to do this we'd need
+a way for the reader process to invalidate its in-memory caches when an update
+happens on the writer. One way to do this is for the writer to present an
+append-only log of updates which the readers can consume to invalidate their
+caches and to push updates to listening clients or pushers.
+
+Synapse already stores much of its data as an append-only log so that it can
+correctly respond to /sync requests, so the amount of code changes needed to
+expose the append-only log to the readers should be fairly minimal.
+
+Architecture
+------------
+
+The Replication API
+~~~~~~~~~~~~~~~~~~~
+
+Synapse will optionally expose a long-poll HTTP API for extracting updates. The
+API will have a similar shape to /sync in that clients provide tokens
+indicating where in the log they have reached and a timeout. The synapse server
+then either responds with updates immediately if it already has some, or it
+waits until the timeout for more updates. If the timeout expires and nothing has
+happened then the server returns an empty response.
+
+However, unlike the /sync API, this replication API returns synapse-specific
+data rather than trying to implement a matrix specification. The replication
+results are returned as arrays of rows where the rows are mostly lifted
+directly from the database. This avoids unnecessary JSON parsing on the server
+and hopefully avoids an impedance mismatch between the data returned and the
+required updates to the datastore.
+
+This does not replicate all the database tables, as many of the database tables
+are indexes that can be recovered from the contents of other tables.
+
+The format and parameters for the API are documented in
+``synapse/replication/resource.py``.
+
+
+The Slaved DataStore
+~~~~~~~~~~~~~~~~~~~~
+
+There are read-only versions of the synapse storage layer in
+``synapse/replication/slave/storage`` that use the response of the replication
+API to invalidate their caches.
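To make the token-plus-timeout flow above concrete, here is a minimal long-poll reader loop. It is a sketch only: the endpoint path, query parameter names and response field names are assumptions made for illustration, and the authoritative format and parameters are the ones documented in ``synapse/replication/resource.py``::

    import requests

    # Assumed base URL for the replication resource; the real path and
    # parameters are documented in synapse/replication/resource.py.
    REPLICATION_URL = "http://localhost:8008/_synapse/replication"

    def handle_update(stream, rows):
        # Placeholder: a real reader would invalidate its in-memory caches
        # here and push updates to listening clients or pushers.
        print("update on stream", stream, rows)

    def poll_once(tokens, timeout_ms=30000):
        """Long-poll for updates past `tokens`, waiting at most timeout_ms."""
        params = dict(tokens)            # where in the log we have reached
        params["timeout"] = timeout_ms   # how long the server may hold the request
        response = requests.get(REPLICATION_URL, params=params)
        response.raise_for_status()
        return response.json()           # empty if the timeout expired quietly

    def reader_loop(tokens):
        while True:
            updates = poll_once(tokens)
            for stream, rows in updates.items():
                handle_update(stream, rows)
                # Advance our token for this part of the log; the field name
                # "position" is an assumption for this sketch.
                tokens[stream] = rows["position"]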
diff --git a/docs/url_previews.rst b/docs/url_previews.rst
new file mode 100644
index 0000000000..634d9d907f
--- /dev/null
+++ b/docs/url_previews.rst
@@ -0,0 +1,74 @@
+URL Previews
+============
+
+Design notes on a URL previewing service for Matrix:
+
+Options are:
+
+ 1. Have an AS which listens for URLs, downloads them, and inserts an event that describes their metadata.
+   * Pros:
+     * Decouples the implementation entirely from Synapse.
+     * Uses existing Matrix events & content repo to store the metadata.
+   * Cons:
+     * Which AS should provide this service for a room, and why should you trust it?
+     * Doesn't work well with E2E; you'd have to cut the AS into every room
+     * The AS would end up subscribing to every room anyway.
+
+ 2. Have a generic preview API (nothing to do with Matrix) that provides a previewing service:
+   * Pros:
+     * Simple and flexible; can be used by any client at any point
+   * Cons:
+     * If each HS provides one of these independently, all the HSes in a room may needlessly DoS the target URI
+     * We need somewhere to store the URL metadata rather than just using Matrix itself
+     * We can't piggyback on Matrix to distribute the metadata between HSes.
+
+ 3. Make the synapse of the sending user responsible for spidering the URL and inserting an event asynchronously which describes the metadata.
+   * Pros:
+     * Works transparently for all clients
+     * Piggy-backs nicely on using Matrix for distributing the metadata.
+     * No confusion as to which AS is responsible
+   * Cons:
+     * Doesn't work with E2E
+     * We might want to decouple the implementation of the spider from the HS, given spider behaviour can be quite complicated and evolve much more rapidly than the HS. It's more like a bot than a core part of the server.
+
+ 4. Make the sending client use the preview API and insert the event itself when successful.
+   * Pros:
+     * Works well with E2E
+     * No custom server functionality
+     * Lets the client customise the preview that they send (like on FB)
+   * Cons:
+     * Entirely specific to the sending client, whereas it'd be nice if /any/ URL was correctly previewed if clients support it.
+
+ 5. Have the option of specifying a shared (centralised) previewing service used by a room, to avoid all the different HSes in the room DoSing the target.
+
+The best solution is probably a combination of 2 and 4.
+ * Sending clients do their best to create and send a preview at the point of sending the message, perhaps delaying the message until the preview is computed? (This also lets the user validate the preview before sending.)
+ * Receiving clients have the option of going and creating their own preview if one doesn't arrive soon enough (or if the original sender didn't create one).
+
+This is a bit magical though, in that the preview could come from two entirely different sources - the sending HS or your local one. However, this can always be exposed to users: "Generate your own URL previews if none are available?"
+
+This is also tantamount to senders calculating their own thumbnails for sending in advance of the main content - we are trusting the sender not to lie about the content in the thumbnail, whereas currently thumbnails are calculated by the receiving homeserver to avoid this attack.
+
+However, this kind of phishing attack does exist whether we let senders pick their thumbnails or not, in that a malicious sender can send normal text messages around the attachment claiming it to be legitimate. We could rely on (future) reputation/abuse management to punish users who phish (be it with bogus metadata or bogus descriptions). Bogus metadata is particularly bad though, especially if it's avoidable.
+
+As a first cut, let's do #2 and have the receiver hit the API to calculate its own previews (as it does currently for image thumbnails). We can then extend/optimise this to option 4 as a special extra if needed.
+
+API
+---
+
+GET /_matrix/media/r0/preview_url?url=http://wherever.com
+200 OK
+{
+    "og:type"        : "article",
+    "og:url"         : "https://twitter.com/matrixdotorg/status/684074366691356672",
+    "og:title"       : "Matrix on Twitter",
+    "og:image"       : "https://pbs.twimg.com/profile_images/500400952029888512/yI0qtFi7_400x400.png",
+    "og:description" : "“Synapse 0.12 is out! Lots of polishing, performance &amp; bugfixes: /sync API, /r0 prefix, fulltext search, 3PID invites https://t.co/5alhXLLEGP”",
+    "og:site_name"   : "Twitter"
+}
+
+* Downloads the URL
+  * If HTML, just stores it in RAM and parses it for OG meta tags
+    * Downloads any media referenced by OG meta tags to the media repo, and refers to them in the OG via mxc:// URIs.
+  * If it's a media filetype we know we can thumbnail: stores it on disk, and hands it to the thumbnailer. Generates OG meta tags from the thumbnailer contents.
+  * Otherwise, doesn't bother downloading further.
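As a usage sketch of the preview endpoint above, a client (or a receiving client implementing the option-2 behaviour) could fetch the OpenGraph metadata like this; the homeserver base URL is an assumption and any authentication handling is omitted::

    import requests

    HOMESERVER = "http://localhost:8008"  # assumed local homeserver / media repo

    def get_url_preview(url):
        """Ask the media repo for OpenGraph-style metadata about a URL."""
        response = requests.get(
            HOMESERVER + "/_matrix/media/r0/preview_url",
            params={"url": url},
        )
        response.raise_for_status()
        # og:image values are expected to be mxc:// URIs pointing at the media repo.
        return response.json()

    if __name__ == "__main__":
        preview = get_url_preview(
            "https://twitter.com/matrixdotorg/status/684074366691356672")
        print(preview.get("og:title"), "-", preview.get("og:site_name"))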