author	Erik Johnston <erik@matrix.org>	2021-09-28 10:37:58 +0100
committer	GitHub <noreply@github.com>	2021-09-28 09:37:58 +0000
commit	707d5e4e48e839dabd34e4b67426fe8382a2c978 (patch)
tree	eb4a2a3964c9b9b5c72dad55b0248598cf5367da /synapse/util/frozenutils.py
parent	Sign the git tag in release script (#10925) (diff)
download	synapse-707d5e4e48e839dabd34e4b67426fe8382a2c978.tar.xz
Encode JSON responses on a thread in C, mk2 (#10905)
Currently we use `JsonEncoder.iterencode` to write JSON responses, which ensures that we don't block the main reactor thread when encoding huge objects. The downside is that `iterencode` falls back to a pure Python encoder that is *much* less efficient and can easily burn a lot of CPU for huge responses. To fix this, while still ensuring we don't block the reactor loop, we encode the JSON on a threadpool using the standard `JsonEncoder.encode` function, which is backed by a C library.
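
A minimal sketch of that approach (not the code actually merged here): run the blocking, C-accelerated `JSONEncoder.encode` call on the reactor's threadpool so the reactor loop never stalls on a large response. The helper name `encode_json_on_thread` is illustrative.

```python
import json

from twisted.internet import threads

_json_encoder = json.JSONEncoder()


async def encode_json_on_thread(reactor, obj) -> bytes:
    """Encode `obj` as JSON bytes without blocking the reactor thread."""
    # deferToThreadPool runs the CPU-heavy encode on a worker thread and
    # fires the result back on the reactor when it finishes.
    json_str = await threads.deferToThreadPool(
        reactor, reactor.getThreadPool(), _json_encoder.encode, obj
    )
    return json_str.encode("utf-8")
```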

Doing so, however, requires `respond_with_json` to have access to the reactor, which it previously didn't. There are two ways of doing this:

1. threading the reactor object through, which is a bit fiddly as e.g. `DirectServeJsonResource` doesn't currently take a reactor but is exposed to modules, and so is a PITA to change; or
2. exposing the reactor in `SynapseRequest`, which requires updating a bunch of servlet types.

I went with the latter as that is just a mechanical change, and I think it makes sense as a request already has a reactor associated with it (via its http channel).
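
An illustrative sketch of option 2 (not the actual Synapse implementation): a Twisted request can reach the reactor through its HTTP channel's site, so exposing it as a property on the request type means code that only has the request can still get at the reactor. `ExampleSite` and `ExampleRequest` are made-up names, and the site is assumed to be constructed with the reactor it runs on.

```python
from twisted.web import server


class ExampleRequest(server.Request):
    @property
    def reactor(self):
        # Site.buildProtocol sets `channel.site`, so a request can walk
        # request -> channel -> site -> reactor.
        return self.channel.site.reactor


class ExampleSite(server.Site):
    # Have the site build our request subclass for incoming requests.
    requestFactory = ExampleRequest

    def __init__(self, resource, reactor):
        super().__init__(resource)
        # Remember which reactor this listener runs on.
        self.reactor = reactor
```

With something like that in place, a response path that only holds the request could do e.g. `await encode_json_on_thread(request.reactor, payload)` using the helper sketched earlier.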
Diffstat (limited to 'synapse/util/frozenutils.py')
0 files changed, 0 insertions, 0 deletions