| Commit message | Author | Age | Files | Lines |
| |
|
|\
| |
| | |
Correctly handle outliers during persist events
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
We incorrectly asserted that all contexts must have a non-None state
group, without considering outliers. This would usually be fine, as the
assertion would never be hit: there is a shortcut during persistence
if the forward extremities don't change.
However, if an outlier is being persisted alongside non-outlier events, the
function would be called and the assertion would be hit.
Fixes #3601
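A minimal sketch of the corrected check (illustrative only; the events_and_contexts loop and variable names are assumptions, not the actual Synapse code): outliers legitimately have no state group, so they must be skipped before asserting.
```
# Sketch: skip outliers before asserting that a context has a state group.
for event, context in events_and_contexts:
    if event.internal_metadata.is_outlier():
        continue  # outliers are persisted without a state group
    assert context.state_group is not None, (
        "non-outlier event %s has no state group" % (event.event_id,)
    )
```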
|
|\ \
| | |
| | | |
Fix another logcontext leak in _persist_events
|
| |/
| |
| |
| |
| |
| |
| |
| | |
We need to run the errback in the sentinel context to avoid losing our own
context.
Also: add logging to runInteraction to help identify where "Starting db
connection from sentinel context" warnings are coming from
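A rough sketch of the errback fix (assuming the 2018-era synapse.util.logcontext module; the function and deferred names are illustrative):
```
from synapse.util.logcontext import PreserveLoggingContext

def _failure_callback(failure, deferred):
    # fire the errback from the sentinel context, so that our own logcontext
    # isn't carried off (and then lost) by whatever is waiting on the deferred
    with PreserveLoggingContext():
        deferred.errback(failure)
```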
|
|\ \
| | |
| | | |
Fix occasional 'tuple index out of range' error
|
| |/
| |
| |
| |
| |
| |
| | |
This fixes a bug in _delete_existing_rows_txn which was introduced in #3435
(though it's been on matrix-org-hotfixes for *years*). This code is only called
when there is some sort of conflict the first time we try to persist an event,
so it only happens rarely. Still, the exceptions are annoying.
|
|\ \
| | |
| | | |
Fix updating of cached remote profiles
|
| | | |
|
| |/
| |
| |
| |
| |
| | |
_update_remote_profile_cache was missing its `defer.inlineCallbacks`, so when
it was called, would just return a generator object, without actually running
any of the method body.
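A tiny self-contained illustration of the failure mode (not the actual Synapse method): without the decorator, calling a generator function just hands back a generator object.
```
from twisted.internet import defer

def broken():
    # no @defer.inlineCallbacks: calling broken() only builds a generator
    # object; none of the body ever runs
    yield defer.succeed(None)

@defer.inlineCallbacks
def fixed():
    # the decorator drives the generator and returns a Deferred, so the
    # body actually executes
    yield defer.succeed(None)

print(type(broken()))  # a bare generator
print(fixed())         # an already-fired Deferred
```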
|
|\ \
| |/
|/| |
Wrap a number of things that run in the background
|
| |
| |
| |
| |
| |
| | |
on_notifier_poke no longer runs synchronously, so we have to do a different hack
to make sure that the replication data has been sent. Let's actually listen for
its arrival.
|
| | |
|
|/
|
|
|
| |
This will reduce the number of "Starting db connection from sentinel context"
warnings, and will help with our metrics.
|
|\
| |
| | |
Fix client_reader worker being able to handle /context requests
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| | |
This allows us to handle /context/ requests on the client_reader worker
without having to pull in all the various stream handlers (e.g.
presence, typing, pushers, etc.). The only thing the token gets used for
is pagination, and that ignores everything but the room portion of the
token.
|
|/ |
|
|\
| |
| | |
Use deltas to calculate current state deltas
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| | |
If we have a delta from the existing to new current state, then we can
reuse that rather than manually working it out by fetching both lots of
state.
|
| | |
|
|\ \
| | |
| | | |
Improve logging for exceptions when handling PDUs
|
| | | |
|
| | |
| | |
| | |
| | | |
when we get an exception handling a federation PDU, log the whole stacktrace.
|
|\ \ \
| | | |
| | | | |
Fixes and optimisations for resolve_state_groups
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
it's easier to create the new state group as a delta from the existing one.
(There's an outside chance this will help with
https://github.com/matrix-org/synapse/issues/3364)
|
| | | | |
|
| |/ /
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
First of all, fix the logic which looks for identical input state groups so
that we actually use them. This turned out to be most easily done by factoring
the relevant code out to a separate function so that we could do an early
return.
Secondly, avoid building the whole `conflicted_state` dict (which was only ever
used as a boolean flag).
Thirdly, replace the construction of the `state` dict (which mapped from keys
to events that set them), with an optimistic construction of the resolution
result assuming there will be no conflicts. This should be no slower than
building the old `state` dict, and:
- in the conflicted case, we'll short-cut it, saving part of the work
- in the unconflicted case, it saves rebuilding the resolution from the
`state` dict.
Finally, do a couple of s/values/itervalues/.
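An illustrative sketch of the first change (not the real Synapse function; the layout group_id -> {(type, state_key): event_id} is an assumption): factor the "identical input groups" check out so it can return early.
```
def _resolve_if_identical(state_groups):
    # Sketch: if every input state group carries exactly the same state,
    # there is nothing to resolve and we can return early.
    state_sets = list(state_groups.values())
    first = state_sets[0]
    if all(s == first for s in state_sets[1:]):
        return dict(first)
    return None  # caller falls back to full state resolution
```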
|
|\ \ \
| |_|/
|/| | |
Remove redundant checks on room forgottenness
|
| |\ \ |
|
| |\ \ \ |
|
| | | | |
| | | | |
| | | | |
| | | | | |
Fixes #3550
|
|\ \ \ \ \
| |_|_|/ /
|/| | | | |
Speed up _calculate_state_delta
|
| |\ \ \ \
| |/ / / /
|/| | | |
| | | | | |
erikj/speed_up_calculate_state_delta
|
|\ \ \ \ \
| | | | | |
| | | | | | |
Logcontext fixes
|
| |\ \ \ \ \
| |/ / / / /
|/| | | | | |
|
|\ \ \ \ \ \
| | | | | | |
| | | | | | | |
Make client_reader support some more read only APIs
|
| |\ \ \ \ \ \
| | | |_|_|_|/
| | |/| | | | |
|
| |\ \ \ \ \ \
| | | | | | | |
| | | | | | | |
| | | | | | | | |
erikj/client_apis_move
|
| | | | | | | | |
|
| | | | | | | | |
|
| | | | | | | | |
|
| | | | | | | | |
|
| | | | | | | | |
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
This is in preparation for moving GET /context/ to a worker
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
This will let us call the read only parts from workers, and so be able
to move some APIs off of master, e.g. the `/state` API.
|
|\ \ \ \ \ \ \ \
| |_|_|/ / / / /
|/| | | | | | | |
Add some measure blocks to persist_events
|
| | | | | | | | |
|
| | | | | | | | |
|
|/ / / / / / /
| | | | | | |
| | | | | | |
| | | | | | | |
... to help us figure out where 40% of CPU is going
|
| | | | | | | |
|
| | | | | | | |
|
| |_|/ / / /
|/| | | | |
| | | | | |
| | | | | | |
Fix some random logcontext leaks.
|
| | | | | | |
|
| | | | | | |
|
| |_|/ / /
|/| | | | |
|
|\ \ \ \ \
| |_|_|_|/
|/| | | | |
Only get cached state from context in persist_event
|
| | | | | |
|
| | | | | |
|
|/ / / /
| | | |
| | | |
| | | |
| | | |
| | | | |
We don't want to bother pulling out the current state from the DB
until we know we have to. Checking the context for state is just an
optimisation.
|
|\ \ \ \
| | | | |
| | | | | |
Fix missing attributes on workers.
|
| | | | | |
|
|/ / / /
| | | |
| | | |
| | | |
| | | | |
This was missed during the transition from attribute to getter for
getting state from context.
|
|\ \ \ \
| | | | |
| | | | | |
Fix EventContext when using workers
|
| | | | | |
|
|/ / / /
| | | |
| | | |
| | | |
| | | |
| | | | |
We were:
1. Not correctly setting all attributes
2. Using defer.inlineCallbacks in a non-generator
|
|\ \ \ \
| | | | |
| | | | | |
Add concept of StatelessContext, take 4.
|
| | | | | |
|
| |\ \ \ \
| |/ / / /
|/| | | | |
|
|\ \ \ \ \
| |_|_|/ /
|/| | | | |
Refactor EventContext to accept state during init
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| |/ / / |
|
| | | | |
|
| | | | |
|
|/ / / |
|
|\ \ \
| | | |
| | | | |
Announce deleted devices explicitly over federation.
|
| |\ \ \
| |/ / /
|/| | | |
|
| | | | |
|
|\ \ \ \
| |_|_|/
|/| | | |
Test and fix support for cancellation in Linearizer
|
| | | | |
|
|/ / / |
|
|\ \ \
| | | |
| | | | |
A set of improvements to the Limiter
|
| | | | |
|
| | | |
| | | |
| | | |
| | | |
| | | | |
Linearizer was effectively a Limiter with max_count=1, so rather than
maintaining two sets of code, let's combine them.
|
| | | |
| | | |
| | | |
| | | |
| | | | |
* give them names, to improve logging
* use a deque rather than a list for efficiency
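For the deque point, a quick illustration of why it matters for a queue of waiters:
```
from collections import deque

waiters_list = [1, 2, 3]
waiters_list.pop(0)        # O(n): every remaining element shifts down

waiters_deque = deque([1, 2, 3])
waiters_deque.popleft()    # O(1): no shifting
```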
|
| | | |
| | | |
| | | |
| | | | |
Fixes #3570
|
|/ / / |
|
|\ \ \
| | | |
| | | | |
Switch changes to markdown & flip content on for miscs
|
| |\ \ \
| |/ / /
|/| | | |
|
|\ \ \ \ |
|
| | | | |
| | | | |
| | | | |
| | | | | |
(mostly just clarifications)
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
| |/ / /
|/| | | |
|
|\| | | |
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | |
| | | |
| | | |
| | | | |
This reverts commit 21d3b879433e040babd43c89b62827f92e3ac861.
|
| | | | |
|
|\ \ \ \
| | | | |
| | | | | |
Changed http links to https
|
| | | | |
| | | | |
| | | | | |
Especially useful for the Debian repo, as it makes it easier to get the key in a secure way.
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
... we've fixed the things that caused the warnings, so we should reinstate the
warning.
|
|\ \ \ \ \
| | |/ / /
| |/| | | |
|
| |\ \ \ \
| | | | | |
| | | | | | |
Disable logcontext warning
|
| | | | | | |
|
| |/ / / /
| | | | |
| | | | |
| | | | | |
Temporary workaround to #3518 while we release 0.33.0.
|
| | | | | |
|
| | | | | |
|
|\ \ \ \ \
| | | | | |
| | | | | | |
Run things as background processes
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
This is more involved than it might otherwise be, because the current
implementation just drops its logcontexts and runs everything in the sentinel
context.
It turns out that we aren't actually using a bunch of the functionality here
(notably suppress_failures and the fact that Distributor.fire returns a
deferred), so the easiest way to fix this is actually by simplifying a bunch of
code.
|
| | | | | | |
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
This fixes #3518, and ensures that we get useful logs and metrics for lots of
things that happen in the background.
(There are certainly more things that happen in the background; these are just
the common ones I've found running a single-process synapse locally).
|
| | | | | | |
|
|\ \ \ \ \ \
| | | | | | |
| | | | | | | |
Add response code to response timer metrics
|
| | | | | | | |
|
| | | | | | | |
|
| | |_|_|_|/
| |/| | | | |
|
|\ \ \ \ \ \
| |_|/ / / /
|/| | | | | |
add config for pep8
|
| | | | | | |
|
|/ / / / /
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Since, for better or worse, we seem to have configured isort to generate
89-character lines, PyCharm is now complaining at me that our lines are too
long.
So, let's configure pep8 to behave consistently with flake8.
|
|\ \ \ \ \
| |/ / / /
|/| | | | |
Resource tracking for background processes
|
| | | | | |
|
| |/ / /
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This introduces a mechanism for tracking resource usage by background
processes, along with an example of how it will be used.
This will help address #3518, but more importantly will give us better insights
into things which are happening but not showing up in the request metrics.
We *could* do this with Measure blocks, but:
- I think having them pulled out as a completely separate metric class will
  make it easier to distinguish top-level processes from those which are
  nested.
- I want to be able to report on in-flight background processes, and I don't
  think we want to do this for *all* Measure blocks.
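A hedged sketch of how such a helper gets used (run_as_background_process matches the module this introduces, but treat the exact signature and the call site as assumptions):
```
from synapse.metrics.background_process_metrics import run_as_background_process

def start_pruning(self):
    # runs _prune_old_entries in its own logcontext, with its resource usage
    # reported under the "prune_old_entries" label, instead of running it
    # in the sentinel context where it is invisible to metrics
    run_as_background_process("prune_old_entries", self._prune_old_entries)
```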
|
|\ \ \ \
| | | | |
| | | | | |
Remove event re-signing hacks
|
| | | | | |
|
| |\ \ \ \ |
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
These "temporary fixes" have been here three and a half years, and I can't find
any events in the matrix.org database where the calculated signature differs
from what's in the db. It's time for them to go away.
|
|\ \ \ \ \ \
| |_|_|/ / /
|/| | | | | |
Comment dummy TURN parameters in default config
|
| | | | | | |
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
This default config is parsed and used as a base before the actual
config is overlaid, so with these values not commented out, the
code to detect when no TURN params were set and refuse to generate
credentials was never firing, because the dummy default was always set.
|
|\ \ \ \ \ \
| | | | | | |
| | | | | | | |
Fix visibility of events from erased users over federation
|
| | | | | | | |
|
| | | | | | | |
|
|/ / / / / / |
|
|\ \ \ \ \ \
| | | | | | |
| | | | | | | |
Refactor and optimise filter_events_for_server
|
| | | | | | | |
|
| | | | | | | |
|
| | | | | | | |
|
| | | | | | | |
|
| | | | | | | |
|
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
for easier unit testing.
|
| | | | | | | |
|
|\ \ \ \ \ \ \
| | | | | | | |
| | | | | | | | |
Fix perf regression in PR #3530
|
| | | | | | | | |
|
| | | | | | | | |
|
| | | | | | | | |
|
|/ / / / / / /
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
The get_entities_changed function was changed to return all changed
entities since the given stream position, rather than only those changed
from a given list of entities. This resulted in the function incorrectly
returning large numbers of entities that, for example, caused large
increases in database usage.
|
|\ \ \ \ \ \ \
| | | | | | | |
| | | | | | | | |
Don't return unknown entities in get_entities_changed
|
| | | | | | | | |
|
| | | | | | | | |
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
The stream cache keeps track of all entities that have changed since
a particular stream position, so get_entities_changed does not need to
return unknown entities when given a larger stream position.
This makes it consistent with the behaviour of has_entity_changed.
|
|\ \ \ \ \ \ \ \
| | | | | | | | |
| | | | | | | | | |
Check isort in Travis
|
| | | | | | | | | |
|
| | | | | | | | | |
|
|/ / / / / / / / |
|
| | | | | | | | |
|
|\ \ \ \ \ \ \ \
| | | | | | | | |
| | | | | | | | | |
Use parse and asserts from http.servlet
|
| | | | | | | | | |
|
| | | | | | | | | |
|
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | | |
The method "assert_params_in_request" handles dicts, not requests: a
request body has to be parsed to JSON before this method can be used.
|
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | | |
parse_integer and parse_string can take a request and raise errors
in case of wrong or missing params.
This PR tries to use them more, to deduplicate some code and make it
more readable.
|
|/ / / / / / / / |
|
|\ \ \ \ \ \ \ \
| |/ / / / / / /
|/| | | | | | | |
Make FederationRateLimiter queue requests properly
|
| | | | | | | | |
|
| | | | | | | | |
|
| | | | | | | | |
|
| |/ / / / / /
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
popitem removes the *most recent* item by default [1]. We want the oldest.
Fixes #3524
[1]: https://docs.python.org/2/library/collections.html#collections.OrderedDict.popitem
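A quick demonstration, since the default is easy to get wrong:
```
from collections import OrderedDict

d = OrderedDict([("a", 1), ("b", 2), ("c", 3)])
d.popitem()            # ('c', 3): LIFO by default, i.e. the most recent item
d.popitem(last=False)  # ('a', 1): FIFO, the oldest item, which is what we want
```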
|
|/ / / / / / |
|
|\ \ \ \ \ \
| | | | | | |
| | | | | | | |
Reduce set building in get_entities_changed
|
| | | | | | | |
|
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
This line shows up as about 5% of CPU time on a synchrotron:
    not_known_entities = set(entities) - set(self._entity_to_key)
Presumably the problem here is that _entity_to_key can be largeish, and
building a set of its keys every time this function is called is slow.
Here we rewrite the logic to avoid building so many sets.
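An illustrative rewrite in the spirit described (not the actual StreamChangeCache code; the attribute names are assumptions): probe the dict per entity instead of materialising a set of all known keys on every call.
```
def get_entities_changed(self, entities, stream_pos):
    if stream_pos <= self._earliest_known_stream_pos:
        # cache doesn't reach back far enough: anything might have changed
        return set(entities)
    # an entity counts as changed only if we know of a change after stream_pos
    return {
        e for e in entities
        if self._entity_to_key.get(e, stream_pos) > stream_pos
    }
```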
|
|\ \ \ \ \ \ \
| |/ / / / / /
|/| | | | | | |
Enforce the specified API for report_event
|
| | | | | | | |
|
| |\ \ \ \ \ \
| |/ / / / / /
|/| | | | | | |
|
|\ \ \ \ \ \ \
| |_|/ / / / /
|/| | | | | | |
Use stream cache in get_linearized_receipts_for_room
|
| | | | | | | |
|
| | | | | | | |
|
| | | | | | | |
|
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
This avoids unnecessarily hitting the database when there has been
no change for the room.
|
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
as per
https://matrix.org/docs/spec/client_server/unstable.html#post-matrix-client-r0-rooms-roomid-report-eventid
|
| | | | | | | |
|
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
remote device
|
| | | | | | | |
|
| | | | | | | |
|
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
and generally make it work.
|
| |_|_|_|_|/
|/| | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Previously we queued up the poke correctly when the device was deleted,
but then the actual EDU wouldn't get sent, as the device was no longer known.
Instead, we now send EDUs for deleted devices too if there's a poke for them.
|
|\ \ \ \ \ \
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
* Use more portable syntax from the attrs package.
The newer syntax
    attr.ib(factory=dict)
is just syntactic sugar for
    attr.ib(default=attr.Factory(dict))
It was introduced in the newest version of the attrs package (18.1.0)
and doesn't work with older versions.
We should either require a minimum attrs version of 18.1.0,
or use the older (slightly more verbose) syntax.
Requiring the newest version is not a good solution, because
Linux distributions may ship an older version of attrs (17.4.0 in Fedora 28),
and requiring a newer version to be built (and packaged)
just to use newer syntactic sugar in only one test
is just too much.
It's much better to fix that test to use the older syntax.
Signed-off-by: Oleg Girko <ol@infoserver.lv>
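Side by side, the two spellings described above (class names are just for illustration):
```
import attr

@attr.s
class FooOld(object):
    # portable: works with attrs < 18.1.0
    mapping = attr.ib(default=attr.Factory(dict))

@attr.s
class FooNew(object):
    # sugar added in attrs 18.1.0; equivalent to the line above
    mapping = attr.ib(factory=dict)
```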
|
| | | | | | | |
|
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
The newer syntax
    attr.ib(factory=dict)
is just syntactic sugar for
    attr.ib(default=attr.Factory(dict))
It was introduced in the newest version of the attrs package (18.1.0)
and doesn't work with older versions.
We should either require a minimum attrs version of 18.1.0,
or use the older (slightly more verbose) syntax.
Requiring the newest version is not a good solution, because
Linux distributions may ship an older version of attrs (17.4.0 in Fedora 28),
and requiring a newer version to be built (and packaged)
just to use newer syntactic sugar in only one test
is just too much.
It's much better to fix that test to use the older syntax.
Signed-off-by: Oleg Girko <ol@infoserver.lv>
|
| |/ / / / /
|/| | | | | |
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Let's try to include time spent in the DB threads in the per-request/block CPU
usage metrics.
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Factor out the resource usage tracking out to a separate object, which can be
passed around and copied independently of the logcontext itself.
|
| | | | | | |
|
| | | | | | |
|
|\ \ \ \ \ \
| |/ / / / /
|/| | | | | |
Add CPU metrics for _fetch_event_list
|
| | | | | | |
|
|/ / / / /
| | | | |
| | | | |
| | | | |
| | | | | |
add a Measure block on _fetch_event_list, in the hope that we can better
measure CPU usage here.
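A hedged sketch of the shape of the change (Measure's module path and signature are as used in Synapse around this time; the method body is elided):
```
from synapse.util.metrics import Measure

def _fetch_event_list(self, conn, event_list):
    with Measure(self._clock, "_fetch_event_list"):
        pass  # the actual fetching happens inside the Measure block
```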
|
|\ \ \ \ \
| | | | | |
| | | | | | |
Run isort on Synapse
|
| | | | | | |
|
|/ / / / / |
|
| |_|_|/
|/| | | |
|
| | | | |
|
| | | | |
|
| | | | |
|
|\ \ \ \
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Synapse 0.32.1 (2018-07-06)
===========================
Bugfixes
--------
- Add explicit dependency on netaddr ([#3488](https://github.com/matrix-org/synapse/issues/3488))
|
| | | | | |
|
|/| | | |
| | | | |
| | | | | |
Add explicit dependency on netaddr
|
|/ / / /
| | | |
| | | |
| | | |
| | | | |
the dependencies file, causing failures on upgrade (and presumably for new
installs).
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | | |
|
|\ \ \ \ |
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
... as described at
https://docs.google.com/document/d/1EttUVzjc2DWe2ciw4XPtNpUpIl9lWXGEsy2ewDS7rtw.
|
|\| | | |
| | | | |
| | | | | |
More server_name validation
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
We need to do a bit more validation when we get a server name, but don't want
to be re-doing it all over the shop, so factor out a separate
parse_and_validate_server_name, and do the extra validation.
Also, use it to verify the server name in the config file.
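An illustrative sketch of what such a helper might check (not the actual Synapse implementation; IPv6 literals and other corner cases are glossed over):
```
def parse_and_validate_server_name(server_name):
    # Sketch only: split host[:port] and reject obviously malformed names
    # before they propagate into the rest of the system.
    host, sep, port = server_name.partition(":")
    if not host:
        raise ValueError("server name must not be empty")
    if sep and not port.isdigit():
        raise ValueError("invalid port in server name: %r" % (server_name,))
    return host, int(port) if port else None
```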
|
|\ \ \ \ \
| |/ / / /
|/| | | | |
Reinstate lost run_on_reactor in unit test
|
| | |_|/
| |/| |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
a61738b removed a call to run_on_reactor from a unit test, but that call was
doing something useful, in making the function in question asynchronous.
Reinstate the call and add a check that we are testing what we wanted to be
testing.
|
|\ \ \ \
| | | | |
| | | | | |
Invalidate cache on correct thread
|
| | | | | |
|
| | | | | |
|
| | | | | |
|
|\ \ \ \ \
| |_|/ / /
|/| | | | |
Fix up auth check
|
| | | | | |
|
| | | | |
| | | | |
| | | | |
| | | | | |
Python 3 doesn't support comparing None to ints
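Concretely (the variable names are just for illustration):
```
# Python 2 happily orders None before any int; Python 3 raises
#   TypeError: '<' not supported between instances of 'NoneType' and 'int'
# so the check needs an explicit None guard:
user_level = None
required_level = 50
allowed = user_level is not None and user_level >= required_level
```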
|
| |/ / / |
|
| | | |
| | | |
| | | |
| | | |
| | | | |
Make sure that server_names used in auth headers are sane, and reject them with
a sensible error code, before they disappear off into the depths of the system.
|
|\ \ \ \
| |/ / /
|/| | | |
don't mix unicode strings with utf8-in-byte-strings
|
| | | | |
|
| | | | |
|
| | | | |
|
|/ / /
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
otherwise we explode with:
```
Traceback (most recent call last):
File /usr/lib/python2.7/logging/handlers.py, line 78, in emit
logging.FileHandler.emit(self, record)
File /usr/lib/python2.7/logging/__init__.py, line 950, in emit
StreamHandler.emit(self, record)
File /usr/lib/python2.7/logging/__init__.py, line 887, in emit
self.handleError(record)
File /usr/lib/python2.7/logging/__init__.py, line 810, in handleError
None, sys.stderr)
File /usr/lib/python2.7/traceback.py, line 124, in print_exception
_print(file, 'Traceback (most recent call last):')
File /usr/lib/python2.7/traceback.py, line 13, in _print
file.write(str+terminator)
File /home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_io.py, line 170, in write
self.log.emit(self.level, format=u{log_io}, log_io=line)
File /home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py, line 144, in emit
self.observer(event)
File /home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py, line 136, in __call__
errorLogger = self._errorLoggerForObserver(brokenObserver)
File /home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py, line 156, in _errorLoggerForObserver
if obs is not observer
File /home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py, line 81, in __init__
self.log = Logger(observer=self)
File /home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py, line 64, in __init__
namespace = self._namespaceFromCallingContext()
File /home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py, line 42, in _namespaceFromCallingContext
return currentframe(2).f_globals[__name__]
File /home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/python/compat.py, line 93, in currentframe
for x in range(n + 1):
RuntimeError: maximum recursion depth exceeded while calling a Python object
Logged from file site.py, line 129
File /usr/lib/python2.7/logging/__init__.py, line 859, in emit
msg = self.format(record)
File /usr/lib/python2.7/logging/__init__.py, line 732, in format
return fmt.format(record)
File /usr/lib/python2.7/logging/__init__.py, line 471, in format
record.message = record.getMessage()
File /usr/lib/python2.7/logging/__init__.py, line 335, in getMessage
msg = msg % self.args
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 4: ordinal not in range(128)
Logged from file site.py, line 129
```
...where the logger apparently recurses whilst trying to log the error, hitting the
maximum recursion depth and killing everything badly.
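The underlying Python 2 failure mode, reduced to two lines (illustrative strings only): %-formatting a utf8 byte string with a unicode argument forces an implicit ASCII decode of the bytes.
```
fmt = "tick \xe2\x9c\x93 from %s"  # utf8-encoded byte string (Python 2)
fmt % (u"alice",)                  # UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 ...
```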
|
|\ \ \
| | | |
| | | | |
Clarify "real name" in contributor requirements
|
| | | | |
|
| | | | |
|