| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
- Improve logging: log things in the right order, include destination and txids
in all log lines, don't log successful responses twice
- Fix the docstring on TransportLayerClient.send_transaction
- Don't use treq.request, which is overcomplicated for our purposes: just use a
twisted.web.client.Agent.
- Simplify the logic for setting up the bodyProducer
- Fix bytes/str confusions
|
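The commit above swaps treq for a plain Agent. A minimal sketch of that pattern (not the actual MatrixFederationHttpClient code; the helper name and URI here are illustrative):

```python
from io import BytesIO

from twisted.internet import defer, reactor
from twisted.web.client import Agent, FileBodyProducer, readBody
from twisted.web.http_headers import Headers


@defer.inlineCallbacks
def put_json(uri, body_bytes):
    # A bare Agent is enough here: FileBodyProducer streams the
    # already-encoded body, and readBody collects the response.
    agent = Agent(reactor)
    response = yield agent.request(
        b"PUT",
        uri,  # bytes, e.g. b"https://destination/_matrix/federation/v1/..."
        Headers({b"Content-Type": [b"application/json"]}),
        FileBodyProducer(BytesIO(body_bytes)),
    )
    body = yield readBody(response)
    defer.returnValue((response.code, body))
```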
|
|
|
|
|
|
|
|
|
| |
If a connection is lost before a request is read from Request, Twisted
sets `method` (and `uri`) attributes to dummy values. These dummy values
have incorrect types (i.e. they're not bytes), and so things like
`__repr__` would raise an exception.
To fix this we add a helper method to return the method with a consistent type.
|
|\
| |
| | |
Fix spurious exceptions when client closes connection
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
If an HTTP handler throws an exception while processing a request we
automatically write a JSON error response. If the handler had already
started writing a response, Twisted throws an exception.
We should check for this case and simply abort the connection if there
was an error after the response had started being written.
|
|/ |
|
| |
|
|
|
|
|
|
|
| |
The existing deferred timeout helper function (and the one built into Twisted)
suffers from a bug when a deferred's canceller throws an exception (#3842).
The new helper function doesn't suffer from this problem.
|
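One way such a helper can avoid the canceller problem is to hand back a fresh Deferred and never cancel the original at all. This is a hedged sketch of that idea, not Synapse's actual implementation; the function name is illustrative:

```python
from twisted.internet import defer
from twisted.python import failure


def timeout_deferred_sketch(deferred, timeout, reactor):
    # Return a new Deferred which errbacks with TimeoutError after `timeout`
    # seconds, regardless of what the original deferred's canceller does.
    new_d = defer.Deferred()
    timed_out = [False]

    def time_it_out():
        timed_out[0] = True
        if not new_d.called:
            new_d.errback(defer.TimeoutError("Timed out after %ss" % (timeout,)))

    delayed_call = reactor.callLater(timeout, time_it_out)

    def convert(result):
        if delayed_call.active():
            delayed_call.cancel()
        if timed_out[0]:
            return  # too late: drop the result
        if isinstance(result, failure.Failure):
            new_d.errback(result)
        else:
            new_d.callback(result)

    deferred.addBoth(convert)
    return new_d
```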
|\
| |
| | |
Fix matrixfederationclient.py logging: Destination is a string
|
| | |
|
|\ \
| |/
|/| |
Set SNI to the server_name, not whatever was in the SRV record
|
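A minimal sketch of what "set SNI to the server_name" means in Twisted terms, assuming a hypothetical helper name; the real logic lives in synapse/crypto/context_factory.py:

```python
from twisted.internet.ssl import optionsForClientTLS


def federation_tls_options(server_name):
    # Key the TLS options (and hence SNI and certificate checks) on the
    # matrix server_name we asked for, not on whatever host the SRV lookup
    # happened to point at.
    if isinstance(server_name, bytes):
        server_name = server_name.decode("ascii")
    return optionsForClientTLS(hostname=server_name)
```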
| |
| |
| |
| | |
Fixes #3843
|
|/
|
|
|
|
|
|
| |
We want to wait until we have read the response body before we log the request
as complete, otherwise a confusing thing happens where the request appears to
have completed, but we later fail it.
To do this, we factor the salient details of a request out to a separate
object, which can then keep track of the txn_id, so that it can be logged.
|
| |
|
|
|
|
|
| |
Python 3 compatibility: make sure that we decode some byte sequences before we
use them to create log lines and metrics labels.
|
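A sketch of the kind of decode helper that commit implies (the helper name is hypothetical):

```python
def to_native_string(value):
    # Log lines and metrics labels want text, so decode any bytes
    # (request methods, paths, header values) before using them.
    if isinstance(value, bytes):
        return value.decode("ascii", errors="replace")
    return value
```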
|
|
|
| |
This is an attempt to mitigate #3842 by adding yet-another-timeout
|
|\
| |
| | |
timeouts 2: electric boogaloo
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| |\ |
|
| | | |
|
| |/
|/| |
|
|/ |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
| |
Outbound federation was incorrectly allowed when the config option was
set to an empty list
|
|\
| |
| |
| |
| |
| |
| |
| | |
Bugfixes
--------
- Fix bug in v0.33.3rc1 which caused infinite loops and OOMs
([\#3723](https://github.com/matrix-org/synapse/issues/3723))
|
| |
| |
| |
| |
| | |
This fixes bugs introduced in #3700, by making sure that we behave sanely
when an incoming connection is closed before the headers are read.
|
| | |
|
|/ |
|
| |
|
|\
| |
| | |
Use a producer to stream back responses
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The problem with dumping all of the json response into the Request object at
once is that doing so starts the timeout for the next request to be received:
so if it takes longer than 60s to stream back the response to the client, the
client never gets it.
The correct solution is to use a Producer; then the timeout is only started
once all of the content is sent over the TCP connection.
|
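A minimal sketch of the producer approach described above, assuming a plain twisted.web Request; this is illustrative and not Synapse's actual producer:

```python
from twisted.internet.interfaces import IPullProducer
from zope.interface import implementer


@implementer(IPullProducer)
class JsonChunkProducer(object):
    # Writes the response body a chunk at a time, so the connection's idle
    # timeout only restarts once all of the content has hit the transport.
    CHUNK_SIZE = 64 * 1024

    def __init__(self, request, json_bytes):
        self._request = request
        self._body = json_bytes
        self._offset = 0

    def resumeProducing(self):
        # Called by the transport whenever it is ready for more data.
        chunk = self._body[self._offset:self._offset + self.CHUNK_SIZE]
        self._offset += self.CHUNK_SIZE
        if chunk:
            self._request.write(chunk)
        else:
            self._request.unregisterProducer()
            self._request.finish()

    def stopProducing(self):
        # The client went away before we finished; nothing more to do.
        pass


def respond_with_json_bytes(request, json_bytes):
    request.setHeader(b"Content-Type", b"application/json")
    request.registerProducer(JsonChunkProducer(request, json_bytes), False)
```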
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This commit moves a bunch of the logic for deciding when to log the receipt and
completion of HTTP requests into SynapseRequest, rather than in the request
handling wrappers.
Advantages of this are:
* we get logs for *all* requests (including OPTIONS and HEADs), rather than
just those that end up hitting handlers we've remembered to decorate
correctly.
* when a request handler wires up a Producer (as the media stuff does
currently, and as other things will do soon), we log at the point that all
of the traffic has been sent to the client.
|
| |
|
|\
| |
| | |
send SNI for federation requests
|
| |\ |
|
| |\ \
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
send_sni_for_federation_requests
# Conflicts:
# synapse/crypto/context_factory.py
|
| |\ \ \
| | | | |
| | | | |
| | | | |
| | | | | |
# Conflicts:
# synapse/http/endpoint.py
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
This code brings the SimpleHttpClient into line with the
MatrixFederationHttpClient by having it raise HttpResponseExceptions when a
request fails (rather than trying to parse for matrix errors and maybe raising
MatrixCodeMessageException).
Then, whenever we were checking for MatrixCodeMessageException and turning them
into SynapseErrors, we now need to check for HttpResponseExceptions and call
to_synapse_error.
|
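The calling pattern described above looks roughly like this (the helper and its arguments are illustrative):

```python
from twisted.internet import defer

from synapse.api.errors import HttpResponseException


@defer.inlineCallbacks
def proxied_get(client, uri):
    # If the upstream request fails, convert the HttpResponseException into a
    # SynapseError so the Matrix errcode propagates to the calling client.
    try:
        result = yield client.get_json(uri)
    except HttpResponseException as e:
        raise e.to_synapse_error()
    defer.returnValue(result)
```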
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
We really shouldn't be sending all CodeMessageExceptions back over the C-S API;
it will include things like 401s which we shouldn't proxy.
That means that we need to explicitly turn a few HttpResponseExceptions into
SynapseErrors in the federation layer.
The effect of the latter is that the matrix errcode will get passed through
correctly to calling clients, which might help with some of the random
M_UNKNOWN errors when trying to join rooms.
|
| |_|_|/
|/| | | |
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | | |
|
| |_|/
|/| | |
|
| | |
| | |
| | |
| | |
| | |
| | | |
the method "assert_params_in_request" does handle dicts and not
requests. A request body has to be parsed to json before this method
can be used
|
| | |
| | |
| | |
| | |
| | | |
Factor the resource usage tracking out to a separate object, which can be
passed around and copied independently of the logcontext itself.
|
| |/
|/| |
|
| |
| |
| |
| |
| |
| |
| |
| | |
We need to do a bit more validation when we get a server name, but don't want
to be re-doing it all over the shop, so factor out a separate
parse_and_validate_server_name, and do the extra validation.
Also, use it to verify the server name in the config file.
|
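A simplified sketch of what parse_and_validate_server_name has to do; the real function also needs to cope with bracketed IPv6 literals, which this toy version ignores:

```python
def parse_and_validate_server_name(server_name):
    # Split an optional :port off the server name and reject obviously
    # malformed values. (IPv6 literals are ignored here for brevity.)
    host, sep, port_str = server_name.rpartition(":")
    if not sep:
        return server_name, None
    if not host or not port_str.isdigit() or not 0 < int(port_str) < 65536:
        raise ValueError("Invalid server name %r" % (server_name,))
    return host, int(port_str)
```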
| |
| |
| |
| |
| | |
Make sure that server_names used in auth headers are sane, and reject them with
a sensible error code, before they disappear off into the depths of the system.
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
otherwise we explode with:
```
Traceback (most recent call last):
  File "/usr/lib/python2.7/logging/handlers.py", line 78, in emit
    logging.FileHandler.emit(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 950, in emit
    StreamHandler.emit(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 887, in emit
    self.handleError(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 810, in handleError
    None, sys.stderr)
  File "/usr/lib/python2.7/traceback.py", line 124, in print_exception
    _print(file, 'Traceback (most recent call last):')
  File "/usr/lib/python2.7/traceback.py", line 13, in _print
    file.write(str+terminator)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_io.py", line 170, in write
    self.log.emit(self.level, format=u"{log_io}", log_io=line)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py", line 144, in emit
    self.observer(event)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py", line 136, in __call__
    errorLogger = self._errorLoggerForObserver(brokenObserver)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py", line 156, in _errorLoggerForObserver
    if obs is not observer
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py", line 81, in __init__
    self.log = Logger(observer=self)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py", line 64, in __init__
    namespace = self._namespaceFromCallingContext()
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py", line 42, in _namespaceFromCallingContext
    return currentframe(2).f_globals["__name__"]
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/python/compat.py", line 93, in currentframe
    for x in range(n + 1):
RuntimeError: maximum recursion depth exceeded while calling a Python object
Logged from file site.py, line 129
  File "/usr/lib/python2.7/logging/__init__.py", line 859, in emit
    msg = self.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 732, in format
    return fmt.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 471, in format
    record.message = record.getMessage()
  File "/usr/lib/python2.7/logging/__init__.py", line 335, in getMessage
    msg = msg % self.args
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 4: ordinal not in range(128)
Logged from file site.py, line 129
```
...where the logger apparently recurses whilst trying to log the error, hitting the
maximum recursion depth and killing everything badly.
|
| | |
|
| | |
|
|/ |
|
|\
| |
| | |
Log number of events fetched from DB
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
When we finish processing a request, log the number of events we fetched from
the database to handle it.
[I'm trying to figure out which requests are responsible for large amounts of
event cache churn. It may turn out to be more helpful to add counts to the
prometheus per-request/block metrics, but that is an extension to this code
anyway.]
|
|/ |
|
| |
|
|\
| |
| | |
Remove email addresses / phone numbers from ID servers when they're removed from synapse
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| |\ |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
|\ \ \ |
|
| |\ \ \
| | |_|/
| |/| | |
use repr, not str
|
| | | |
| | | |
| | | |
| | | | |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| |\ \ \
| | |_|/
| |/| | |
Replace some more comparisons with six
|
| | |/
| | |
| | |
| | |
| | |
| | | |
plus a bonus b"" string I missed last time
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
|\| | |
|
| | | |
|
| | | |
|
| |/
| |
| |
| |
| | |
This tracks CPU and DB usage while requests are in flight, rather than
when we write the response.
|
| |\
| | |
| | | |
ConsentResource to gather policy consent from users
|
| | |
| | |
| | |
| | |
| | | |
Hopefully there are enough comments and docs in this that it makes sense on its
own.
|
| | | |
|
|/ / |
|
|/
|
|
|
|
|
|
|
|
|
|
| |
(instead of everywhere that writes a response. Or rather, the subset of places
which write responses where we haven't forgotten it).
This also means that we don't have to have the mysterious version_string
attribute in anything with a request handler.
Unfortunately it does mean that we have to pass the version string wherever we
instantiate a SynapseSite, which has been copied and pasted 150 times, but that
is code that ought to be cleaned up anyway really.
|
|
|
|
|
|
| |
This is needless complexity; we might as well use the wrapper directly.
Also rename wrap_request_handler->wrap_json_request_handler.
|
|
|
|
| |
... so that it can be used on non-JSON endpoints
|
|
|
|
|
| |
The metrics are now available via the request, so this is redundant and can go
away at last.
|
|
|
|
| |
it's much neater there.
|
|
|
|
| |
less magic
|
|
|
|
|
| |
It fits quite nicely here, and opens the path to getting rid of the
"include_metrics" mess.
|
|
|
|
| |
... which is going to make it easier to move around.
|
| |
|
|
|
|
|
|
| |
This is useful in its own right, because server.py is full of stuff; but more
importantly, I want to do some refactoring that will cause a circular reference
as it is.
|
|\
| |
| | |
Fix 'Unhandled Error' logs with Twisted 18.4
|
| | |
|
| |
| |
| |
| | |
This gets two arguments, not one.
|
|\ \
| | |
| | | |
Replace stringIO imports with six
|
| | | |
|
|\ \ \
| | | |
| | | | |
more bytes strings
|
| |/ /
| | |
| | |
| | | |
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
|\ \ \
| |/ /
|/| | |
Use six.moves.urlparse
|
| | |
| | |
| | |
| | |
| | |
| | | |
The imports were shuffled around a bunch in py3
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
| | |
| | |
| | |
| | | |
Twisted 16.0 doesn't have addTimeout, so let's backport it.
|
|/ /
| |
| |
| | |
This doesn't feel like a wheel we need to reinvent.
|
|\ \
| | |
| | | |
Add b prefixes to some strings that are bytes in py3
|
| | |
| | |
| | |
| | |
| | |
| | | |
This has no effect on python2
Signed-off-by: Adrian Tschira <nota@notafile.com>
|
|\ \ \
| | | |
| | | | |
Improve handling of SRV records for federation connections
|
| |/ /
| | |
| | |
| | | |
Signed-off-by: Silke Hofstra <silke@slxh.eu>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
We aren't ready to release this yet, so I'm reverting it for now.
This reverts commit d1679a4ed7947b0814e0f2af9b888a16c588f1a1, reversing
changes made to e089100c6231541c446e37e157dec8feed02d283.
|
| | | |
|
| | | |
|
|/ / |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
It is especially important that sync requests don't get cached, since if a
sync returns the same token it was given, the client will call sync with
the same parameters again. If the previous response was cached it will
get reused, resulting in the client tight-looping, making the same
request and never making any progress.
In general, clients expect to get up-to-date data when requesting APIs,
so it's safer to apply a blanket no-cache policy than to whitelist only
the APIs that we know will break things if they get cached.
|
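The mechanics of a blanket no-cache policy are just a response header on every C-S API response; a minimal sketch:

```python
def disable_caching(request):
    # Mark the response as uncacheable so that a cached /sync response can
    # never hand a client the same next_batch token twice.
    request.setHeader(b"Cache-Control", b"no-cache, no-store, must-revalidate")
```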
|\ \ |
|
| | | |
|
| | | |
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | | |
It's useful to know when there are peaks in incoming requests - which isn't
quite the same as there being peaks in outgoing responses, due to the time
taken to handle requests.
|
| | |
| | |
| | |
| | |
| | | |
rephrase the OPTIONS and unrecognised request handling so that they look
similar to the common flow.
|
|\ \ \
| | | |
| | | | |
delete_local_events for purge_room_history
|
| | | |
| | | |
| | | |
| | | | |
Add a flag which makes the purger delete local events
|
|\ \ \ \
| | | | |
| | | | | |
Remove spurious log argument
|
| | | | |
| | | | |
| | | | |
| | | | | |
... which would cause scary-looking and unhelpful errors in the log on DNS failure
|
|\ \ \ \ \
| |/ / / /
|/| | | | |
Use a connection pool for the SimpleHttpClient
|
| |/ / / |
|
| | | |
| | | |
| | | |
| | | |
| | | | |
In particular I hope this will help the pusher, which makes many requests to
sygnal, and is currently negotiating SSL for each one.
|
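The change boils down to giving the Agent a shared HTTPConnectionPool; a sketch (the pool size here is an arbitrary illustration):

```python
from twisted.internet import reactor
from twisted.web.client import Agent, HTTPConnectionPool

# Persistent connections mean repeated HTTPS requests to the same host (such
# as a push gateway) reuse a socket instead of redoing the TLS handshake.
pool = HTTPConnectionPool(reactor)
pool.maxPersistentPerHost = 5
agent = Agent(reactor, pool=pool)
```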
|/ / /
| | |
| | |
| | |
| | |
| | | |
Add federation_domain_whitelist
This gives a way to restrict which domains your HS is allowed to federate with.
Useful mainly for gracefully preventing a private but internet-connected HS from trying to federate to the wider public Matrix network.
|
|\ \ \ |
|
| |\ \ \
| | | | |
| | | | | |
Track db txn time in millisecs
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Avoid throwing a (harmless) exception when we try to write an error response to
an HTTP request where the client has disconnected.
This comes up as a CRITICAL error in the logs, which tends to mislead people
into thinking there's an actual problem.
|
| |/ / /
|/| | |
| | | |
| | | |
| | | |
| | | | |
For each request, track the amount of time spent waiting for a DB
connection. This entails adding it to the LoggingContext, and we may as well
add metrics for it while we're at it.
|
|/ / /
| | |
| | |
| | | |
... to reduce the amount of floating-point foo we do.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
In order to stop the number of duplicate foo:count metrics from increasing
without bound, it's time for a rearrangement.
The following are all deprecated, and replaced with synapse_util_metrics_block_count:
synapse_util_metrics_block_timer:count
synapse_util_metrics_block_ru_utime:count
synapse_util_metrics_block_ru_stime:count
synapse_util_metrics_block_db_txn_count:count
synapse_util_metrics_block_db_txn_duration:count
The following are all deprecated, and replaced with synapse_http_server_response_count:
synapse_http_server_requests
synapse_http_server_response_time:count
synapse_http_server_response_ru_utime:count
synapse_http_server_response_ru_stime:count
synapse_http_server_response_db_txn_count:count
synapse_http_server_response_db_txn_duration:count
The following are renamed (the old metrics are kept for now, but deprecated):
synapse_util_metrics_block_timer:total ->
synapse_util_metrics_block_time_seconds
synapse_util_metrics_block_ru_utime:total ->
synapse_util_metrics_block_ru_utime_seconds
synapse_util_metrics_block_ru_stime:total ->
synapse_util_metrics_block_ru_stime_seconds
synapse_util_metrics_block_db_txn_count:total ->
synapse_util_metrics_block_db_txn_count
synapse_util_metrics_block_db_txn_duration:total ->
synapse_util_metrics_block_db_txn_duration_seconds
synapse_http_server_response_time:total ->
synapse_http_server_response_time_seconds
synapse_http_server_response_ru_utime:total ->
synapse_http_server_response_ru_utime_seconds
synapse_http_server_response_ru_stime:total ->
synapse_http_server_response_ru_stime_seconds
synapse_http_server_response_db_txn_count:total ->
synapse_http_server_response_db_txn_count
synapse_http_server_response_db_txn_duration:total ->
synapse_http_server_response_db_txn_duration_seconds
|
|/ /
| |
| |
| |
| | |
Make sure that we set the servlet name in the metrics object *before* calling
the servlet, in case the servlet throws an exception.
|
|\ \
| | |
| | | |
Fix error handling on dns lookup
|
| | |
| | |
| | |
| | |
| | |
| | | |
pass the right arguments to the errback handler
Fixes "TypeError('eb() takes exactly 2 arguments (1 given)',)"
|
|/ /
| |
| |
| |
| | |
Use failure.Failure to recover our failure, which will give us a useful
stacktrace, unlike the rethrown exception.
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
If somebody sends us a request where the body is invalid UTF-8, we should
return a 400 rather than a 500. (json.loads throws a UnicodeError in this
situation.)
We might as well catch all Exceptions here: it seems very unlikely that we
would get an exception here that *isn't* caused by invalid JSON.
|
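A sketch of that parsing pattern; RequestParseError stands in for whatever error type the server maps to a 400:

```python
import json


class RequestParseError(Exception):
    # Stand-in for the error a server framework would turn into an HTTP 400.
    def __init__(self, code, msg):
        super(RequestParseError, self).__init__(msg)
        self.code = code


def parse_json_bytes(content):
    # Anything that fails to parse here, including a body that is not valid
    # UTF-8, is the client's fault: surface it as a 400 rather than a 500.
    try:
        return json.loads(content)
    except Exception:
        raise RequestParseError(400, "Content not JSON.")
```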
| |
| |
| |
| |
| | |
Let the user specify custom modules which can be used for implementing extra
endpoints.
|
|\ \
| | |
| | | |
Front-end proxy: pass through auth header
|
| | | |
|
| | |
| | |
| | |
| | | |
Sometimes we need to pass headers into these methods
|
|/ /
| |
| |
| | |
`preserve_context_over_fn` is borked
|
| |
| |
| |
| | |
what could possibly go wrong
|
|\ \ |
|
| | |
| | |
| | |
| | |
| | |
| | | |
* don't log exception types twice
* not all exceptions have a meaningful 'message'. Use the repr rather than
attempting to build a string ourselves.
|
| | |
| | |
| | |
| | |
| | | |
... to cope with people with broken dnssec setups, mostly
|
| | |
| | |
| | |
| | |
| | | |
Support SRV records which point at AAAA records, as well as A records.
Fixes https://github.com/matrix-org/synapse/issues/2405
|
| | | |
|
| | | |
|
|/ / |
|
| |
| |
| |
| | |
Signed-off-by: Matthias Kesler <krombel@krombel.de>
|
| | |
|
| | |
|
| |
| |
| |
| | |
Fixes #2191
|
| | |
|
| | |
|
| | |
|
|\ \
| | |
| | |
| | | |
dbkr/http_request_propagate_error
|
| | |
| | |
| | |
| | |
| | | |
The documentation on get_json has been wrong ever since the very first commit
to synapse...
|
| | | |
|
| | |
| | |
| | |
| | |
| | | |
Parse json errors from get_json client methods and throw special
errors.
|
| | | |
|
| | | |
|
| | | |
|
|/ /
| |
| |
| |
| |
| | |
When we're proxying Matrix endpoints, parse out Matrix error
responses and turn them into SynapseErrors so they can be
propagated sensibly upstream.
|
| |
| |
| |
| |
| |
| |
| | |
preserve_context_over_fn uses a ContextPreservingDeferred, which only restores
context for the duration of its callbacks, which isn't really correct, and
means that subsequent operations in the same request can end up without their
logcontexts.
|
| |
| |
| |
| |
| | |
Add a param to the federation client which lets us ignore historical backoff
data for federation queries, and set it for a handful of operations.
|
| |
| |
| |
| |
| | |
rather than having to instrument everywhere we make a federation call,
make the MatrixFederationHttpClient manage the retry limiter.
|
| |
| |
| |
| |
| | |
rename _create_request to _request, and push ascii-encoding of `destination`
and `path` down into it
|
|\ \
| | |
| | | |
Phone number registration / login support v2
|
| | |
| | |
| | |
| | | |
Changes from https://github.com/matrix-org/synapse/pull/1971
|
|/ /
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
When we proxy a media request to a remote server, add a query-param, which will
tell the remote server to 404 if it doesn't recognise the server_name.
This should fix a routing loop where the server keeps forwarding back to
itself.
Also improves the error handling on remote media fetches, so that we don't
always return a rather obscure 502.
|
| | |
|
|/
|
|
| |
and replace requestEmailToken where we meant requestMsisdnToken
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The abort() method calls loseConnection(), which tries to shut down the
TLS connection cleanly. We now call abortConnection() directly, which
should promptly close both the TLS connection and the underlying TCP
connection.
I also added some TODO markers to consider cancelling the previous
timeout rather than checking time.time(). But given how urgently we want
to get this code released I'd rather leave the existing code with the
duplicate timeouts and the time.time() check.
|
| | |
|
| | |
|
| | |
|
| | |
|
|\ \
| |/
|/| |
IPv6 support
|
| |
| |
| |
| |
| |
| | |
Apparently I just removed the spaces instead...
Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
|
| |
| |
| |
| | |
Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
|
| |
| |
| | |
Similar to https://github.com/matrix-org/synapse/pull/1689, but for endpoint.py
|
| |
| |
| | |
This is an (untested) general sketch of how to use wrapClientTLS to implement TLS over IPv6, as well as faster connections over IPv4.
|
|/ |
|
|
|
|
|
| |
Content-Type is allowed to contain options (`; charset=utf-8`, for
instance). We should allow that.
|
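A sketch of a tolerant Content-Type check along those lines:

```python
def is_json_content_type(header_value):
    # Compare only the media type itself, so values such as
    # "application/json; charset=utf-8" are accepted as well.
    media_type = header_value.split(";", 1)[0].strip().lower()
    return media_type == "application/json"
```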
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Add a timeout parameter for controlling how long Synapse will wait
for responses from remote servers. For servers that fail, include how
they failed to make it easier to debug.
Fetch keys from different servers in parallel rather than in series.
Set the default timeout to 10s.
|
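A sketch of the parallel-fetch part, assuming `fetchers` is a list of zero-argument callables that each return a Deferred:

```python
from twisted.internet import defer


@defer.inlineCallbacks
def fetch_keys_in_parallel(fetchers):
    # Kick off every key-server request at once and keep whichever responses
    # succeed, instead of trying each server in series.
    results = yield defer.DeferredList(
        [fetcher() for fetcher in fetchers], consumeErrors=True
    )
    defer.returnValue([value for (success, value) in results if success])
```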
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
| |
Wrap up twisted's FileBodyProducer to work around
https://twistedmatrix.com/trac/ticket/8473. Hopefully this fixes
https://matrix.org/jira/browse/SYN-700.
|
|
|
|
|
|
|
| |
Always set the config key to an empty list, even if a list isn't specified.
This means that the codepaths are the same for both the empty list and
for a missing key. Since the behaviour is the same for both cases, this
makes the code somewhat easier to reason about.
|
|
|
|
| |
matrix.org IP space
|
| |
|
| |
|
|
|
|
| |
JsonResource
|
| |
|
| |
|
| |
|
| |
|
|\
| |
| | |
URL previewing support
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Defaults to off.
Add url_preview_ip_range_blacklist to let admins specify internal IP ranges that must not be spidered.
Add url_preview_url_blacklist to let admins specify URL patterns that must not be spidered.
Implement a custom SpiderEndpoint and associated support classes to implement url_preview_ip_range_blacklist.
Add commentary and generally address PR feedback.
|
| |\ |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| |\ \ |
|
| |\ \ \ |
|
| | | | |
| | | | |
| | | | |
| | | | | |
experimental, etc. just putting it here for safekeeping for now
|
|\ \ \ \ \
| | |_|_|/
| |/| | | |
|
| | |_|/
| |/| |
| | | |
| | | |
| | | |
| | | |
| | | | |
PyCharm supports them, so there is no need to use the other format.
Might as well convert the existing strings to reduce the risk of
people accidentally cargo-culting the wrong docstring format.
|
| | | | |
|
|/ / / |
|
| |/
|/| |
|
| |
| |
| |
| | |
before the compatibility hack that handled clients sending invalid JSON
|
| | |
|