From 36c6cf58a55c53664290ee2623ec625d0154bef3 Mon Sep 17 00:00:00 2001
From: babolivier
Date: Tue, 10 Aug 2021 13:24:16 +0000
Subject: deploy: 9f7c038272318bab09535e85e6bb4345ed2f1368

---
 latest/development/cas.html                       | 317 ++++++++++++++++++
 latest/development/contributing_guide.html        |  31 +-
 latest/development/database_schema.html           |   2 +-
 latest/development/git.html                       | 376 +++++++++++++++++++++
 latest/development/img/git/branches.jpg           | Bin 0 -> 72228 bytes
 latest/development/img/git/clean.png              | Bin 0 -> 110840 bytes
 latest/development/img/git/squash.png             | Bin 0 -> 29667 bytes
 .../development/internal_documentation/index.html |   6 +-
 latest/development/room-dag-concepts.html         | 306 ++++++++++++++++++
 latest/development/saml.html                      | 294 ++++++++++++++++++
 10 files changed, 1324 insertions(+), 8 deletions(-)
 create mode 100644 latest/development/cas.html
 create mode 100644 latest/development/git.html
 create mode 100644 latest/development/img/git/branches.jpg
 create mode 100644 latest/development/img/git/clean.png
 create mode 100644 latest/development/img/git/squash.png
 create mode 100644 latest/development/room-dag-concepts.html
 create mode 100644 latest/development/saml.html
(limited to 'latest/development')

diff --git a/latest/development/cas.html b/latest/development/cas.html
new file mode 100644
index 0000000000..b94c57848c
--- /dev/null
+++ b/latest/development/cas.html
@@ -0,0 +1,317 @@
CAS - Synapse

How to test CAS as a developer without a server


The django-mama-cas project is an easy-to-run CAS implementation built on top of Django.

Prerequisites

  1. Create a new virtualenv: python3 -m venv <your virtualenv>
  2. Activate your virtualenv: source /path/to/your/virtualenv/bin/activate
  3. Install Django and django-mama-cas:
     python -m pip install "django<3" "django-mama-cas==2.4.0"
  4. Create a Django project in the current directory:
     django-admin startproject cas_test .
  5. Follow the install directions for django-mama-cas
  6. Set up the SQLite database: python manage.py migrate
  7. Create a user:
     python manage.py createsuperuser
     1. Use whatever you want as the username and password.
     2. Leave the other fields blank.
  8. Use the built-in Django test server to serve the CAS endpoints on port 8000:
     python manage.py runserver

You should now have a Django project configured to serve CAS authentication with
a single user created.

Configure Synapse (and Element) to use CAS


  1. Modify your homeserver.yaml to enable CAS and point it to your locally
     running Django test server:

     cas_config:
       enabled: true
       server_url: "http://localhost:8000"
       service_url: "http://localhost:8081"
       #displayname_attribute: name
       #required_attributes:
       #    name: value

  2. Restart Synapse.

Note that the above configuration assumes the homeserver is running on port 8081
and that the CAS server is on port 8000, both on localhost.
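
Before moving on, you can optionally sanity-check that the CAS server is reachable where
Synapse expects it. This is just an illustrative check, and it assumes django-mama-cas is
mounted at the root of the Django project (as its install directions suggest), so that the
standard CAS login endpoint sits directly under the server_url configured above:

  # Should return the CAS login form with an HTTP 200
  curl -i "http://localhost:8000/login?service=http://localhost:8081"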

Testing the configuration


Then in Element:

  1. Visit the login page with an Element client pointing at your homeserver.
  2. Click the Single Sign-On button.
  3. Log in using the credentials created with createsuperuser.
  4. You should be logged in.

If you want to repeat this process, you'll need to manually log out first:

  1. Visit http://localhost:8000/admin/
  2. Click "logout" in the top right.
\ No newline at end of file
diff --git a/latest/development/contributing_guide.html b/latest/development/contributing_guide.html
index 7852dbec7d..6f8c687882 100644
--- a/latest/development/contributing_guide.html
+++ b/latest/development/contributing_guide.html
@@ -307,7 +307,7 @@ Make sure that you have saved all your files.

source ./env/bin/activate
 ./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
 
-Run the unit tests.
+Run the unit tests (Twisted trial).

The unit tests run parts of Synapse, including your changes, to see if anything was broken. They are slower than the linters but will typically catch more errors.

source ./env/bin/activate
@@ -324,7 +324,7 @@ trial tests.rest.admin.test_room tests.handlers.test_admin.ExfiltrateData.test_i
 

To increase the log level for the tests, set SYNAPSE_TEST_LOG_LEVEL:

SYNAPSE_TEST_LOG_LEVEL=DEBUG trial tests
 
-Run the integration tests.
+Run the integration tests (Sytest).

The integration tests are a more comprehensive suite of tests. They run a full
version of Synapse, including your changes, to check if anything was broken. They
are slower than the unit tests but will
@@ -334,6 +334,29 @@ configuration:

$ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:py37
 

This configuration should generally cover your needs. For more details about other configurations, see documentation in the SyTest repo.


Run the integration tests (Complement).

Complement is a suite of black box tests that can be run on any homeserver implementation.
It can also be thought of as end-to-end (e2e) tests.

It's often nice to develop on Synapse and write Complement tests at the same time.
Here is how to run your local Synapse checkout against your local Complement checkout.

(checkout complement alongside your synapse checkout)

COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh

To run a specific test file, you can pass the test name at the end of the command. The name passed comes from the naming structure in your Complement tests. If you're unsure of the name, you can do a full run and copy it from the test output:


COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh TestBackfillingHistory

To run a specific test, you can specify the whole name structure:


COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh TestBackfillingHistory/parallel/Backfilled_historical_events_resolve_with_proper_state_in_correct_order

Access database for homeserver after Complement test runs.


If you're curious what the database looks like after you run some tests, here are some
steps to get you going in Synapse (a consolidated sketch follows the list):

  1. In your Complement test, comment out defer deployment.Destroy(t) and replace it with
     defer time.Sleep(2 * time.Hour) to keep the homeserver running after the tests complete.
  2. Start the Complement tests.
  3. Find the name of the container: docker ps -f name=complement_ (this will filter for just
     the Complement-related Docker containers).
  4. Access the container, replacing the name with what you found in the previous step:
     docker exec -it complement_1_hs_with_application_service.hs1_2 /bin/bash
  5. Install sqlite (database driver): apt-get update && apt-get install -y sqlite3
  6. Then run sqlite3 and open the database with .open /conf/homeserver.db (this db path
     comes from the Synapse homeserver.yaml).
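
For convenience, here is the same flow as a single shell sketch. It uses the container name
shown above; yours may differ, so substitute whatever docker ps reports:

  # find the homeserver container started by Complement
  docker ps -f name=complement_

  # open a shell inside it (substitute the name you found)
  docker exec -it complement_1_hs_with_application_service.hs1_2 /bin/bash

  # inside the container: install the sqlite3 client and inspect the database
  apt-get update && apt-get install -y sqlite3
  sqlite3 /conf/homeserver.db ".tables"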

9. Submit your patch.

Once you're happy with your patch, it's time to prepare a Pull Request.

To prepare a Pull Request, please:

@@ -496,7 +519,7 @@ flag to git commit, which uses the name and email set in your

By now, you know the drill!

Notes for maintainers on merging PRs etc

There are some notes for those with commit access to the project on how we -manage git here.

+manage git here.

Conclusion

That's it! Matrix is a very open and collaborative project as you might expect given our obsession with open communication. If we're going to successfully diff --git a/latest/development/database_schema.html b/latest/development/database_schema.html index 7a5266ff7f..794de1df66 100644 --- a/latest/development/database_schema.html +++ b/latest/development/database_schema.html @@ -99,7 +99,7 @@

diff --git a/latest/development/git.html b/latest/development/git.html
new file mode 100644
index 0000000000..c134453ea4
--- /dev/null
+++ b/latest/development/git.html
@@ -0,0 +1,376 @@
Git Usage - Synapse

Some notes on how we use git


On keeping the commit history clean

In an ideal world, our git commit history would be a linear progression of
commits each of which contains a single change building on what came
before. Here, by way of an arbitrary example, is the top of git log --graph b2dba0607:

[clean git graph]

Note how the commit comment explains clearly what is changing and why. Also
note the absence of merge commits, as well as the absence of commits called
things like (to pick a few culprits): “pep8”, “fix broken test”, “oops”,
“typo”, or “Who's the president?”.

There are a number of reasons why keeping a clean commit history is a good
thing:


  • From time to time, after a change lands, it turns out to be necessary to
    revert it, or to backport it to a release branch. Those operations are
    much easier when the change is contained in a single commit.

  • Similarly, it's much easier to answer questions like “is the fix for
    /publicRooms on the release branch?” if that change consists of a single
    commit.

  • Likewise: “what has changed on this branch in the last week?” is much
    clearer without merges and “pep8” commits everywhere.

  • Sometimes we need to figure out where a bug got introduced, or some
    behaviour changed. One way of doing that is with git bisect: pick an
    arbitrary commit between the known good point and the known bad point, and
    see how the code behaves. However, that strategy fails if the commit you
    chose is the middle of someone's epic branch in which they broke the world
    before putting it back together again. (A short git bisect sketch follows
    this list.)
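
For reference, the usual git bisect dance looks something like the following. The good and
bad revisions here are placeholders; substitute whatever is known-good and known-bad in your
case:

  git bisect start
  git bisect bad develop      # a revision known to exhibit the bug
  git bisect good v1.40.0     # a revision (or tag) known to be fine
  # build and test the commit git checks out for you, then report the result:
  git bisect good             # or: git bisect bad
  # repeat until git names the first bad commit, then clean up:
  git bisect reset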

One counterargument is that it is sometimes useful to see how a PR evolved as
it went through review cycles. This is true, but that information is always
available via the GitHub UI (or via the little-known refs/pull namespace).

Of course, in reality, things are more complicated than that. We have release
branches as well as develop and master, and we deliberately merge changes
between them. Bugs often slip through and have to be fixed later. That's all
fine: this is not a cast-iron rule which must be obeyed, but an ideal to aim
towards.

Merges, squashes, rebases: wtf?


Ok, so that's what we'd like to achieve. How do we achieve it?

The TL;DR is: when you come to merge a pull request, you probably want to
“squash and merge”:

[squash and merge]

(This applies whether you are merging your own PR, or that of another
contributor.)

“Squash and merge” [1] takes all of the changes in the
PR, and bundles them into a single commit. GitHub gives you the opportunity to
edit the commit message before you confirm, and normally you should do so,
because the default will be useless (again: * woops typo is not a useful
thing to keep in the historical record).

The main problem with this approach comes when you have a series of pull
requests which build on top of one another: as soon as you squash-merge the
first PR, you'll end up with a stack of conflicts to resolve in all of the
others. In general, it's best to avoid this situation in the first place by
trying not to have multiple related PRs in flight at the same time. Still,
sometimes that's not possible and doing a regular merge is the lesser evil.

Another occasion in which a regular merge makes more sense is a PR where you've
deliberately created a series of commits each of which makes sense in its own
right. For example: a PR which gradually propagates a refactoring operation
through the codebase, or a PR which is the culmination of several other
PRs. In this case the ability to figure out when a particular change/bug was
introduced could be very useful.

Ultimately: this is not a hard-and-fast rule. If in doubt, ask yourself “do
each of the commits I am about to merge make sense in their own right”, but
remember that we're just doing our best to balance “keeping the commit history
clean” with other factors.

Git branching model


A lot of words have been written in the past about git branching models (no
really, a lot). I tend to think the whole thing is overblown. Fundamentally,
it's not that complicated. Here's how we do it.

Let's start with a picture:

[branching model]

It looks complicated, but it's really not. There's one basic rule: anyone is
free to merge from any more-stable branch to any less-stable branch at
any time [2]. (The principle behind this is that if a
change is good enough for the more-stable branch, then it's also good enough to
put in a less-stable branch.)

Meanwhile, merging (or squashing, as per the above) from a less-stable to a
more-stable branch is a deliberate action in which you want to publish a change
or a set of changes to (some subset of) the world: for example, this happens
when a PR is landed, or as part of our release process.

So, what counts as a more- or less-stable branch? A little reflection will show
that our active branches are ordered thus, from more-stable to less-stable:

  • master (tracks our last release).
  • release-vX.Y (the branch where we prepare the next release) [3].
  • PR branches which are targeting the release.
  • develop (our "mainline" branch containing our bleeding-edge).
  • regular PR branches.

The corollary is: if you have a bugfix that needs to land in both
release-vX.Y and develop, then you should base your PR on
release-vX.Y, get it merged there, and then merge from release-vX.Y to
develop. (If a fix lands in develop and we later need it in a
release-branch, we can of course cherry-pick it, but landing it in the release
branch first helps reduce the chance of annoying conflicts.)
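
As an illustration of that flow (not an official recipe: the branch names fix-thing and
release-v1.40 are made up, and in practice the merge back into develop may itself go via
a PR):

  # base the fix on the release branch rather than develop
  git fetch origin
  git checkout -b fix-thing origin/release-v1.40
  # ...commit the fix and open a PR targeting release-v1.40...

  # once it has landed there, merge the release branch into develop
  git checkout develop
  git pull origin develop
  git merge origin/release-v1.40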


[1]: “Squash and merge” is GitHub's term for this
operation. Given that there is no merge involved, I'm not convinced it's the
most intuitive name.

[2]: Well, anyone with commit access.

[3]: Very, very occasionally (I think this has happened once in
the history of Synapse), we've had two releases in flight at once. Obviously,
release-v1.2 is more-stable than release-v1.3.

\ No newline at end of file
diff --git a/latest/development/internal_documentation/index.html b/latest/development/internal_documentation/index.html
index b2a891430f..7e8a25e37a 100644
--- a/latest/development/internal_documentation/index.html
+++ b/latest/development/internal_documentation/index.html
@@ -203,7 +203,7 @@ under the Usage section of the documentation.
@@ -221,7 +221,7 @@ under the Usage section of the documentation.
diff --git a/latest/development/room-dag-concepts.html b/latest/development/room-dag-concepts.html
new file mode 100644
index 0000000000..87e7aadf52
--- /dev/null
+++ b/latest/development/room-dag-concepts.html
@@ -0,0 +1,306 @@
Room DAG concepts - Synapse

Room DAG concepts


Edges

The word "edge" comes from graph theory lingo. An edge is just a connection
between two events. In Synapse, we connect events by specifying their
prev_events. A subsequent event points back at a previous event.

  A (oldest) <---- B <---- C (most recent)

Depth and stream ordering


Events are normally sorted by (topological_ordering, stream_ordering) where
topological_ordering is just depth. In other words, we first sort by depth
and then tie-break based on stream_ordering. depth is incremented as new
messages are added to the DAG. Normally, stream_ordering is an auto
incrementing integer, but backfilled events start with stream_ordering=-1 and decrement.

  • /sync returns things in the order they arrive at the server (stream_ordering).
  • /messages (and /backfill in the federation API) return them in the order determined
    by the event graph (topological_ordering, stream_ordering).

The general idea is that, if you're following a room in real-time (i.e.
/sync), you probably want to see the messages as they arrive at your server,
rather than skipping any that arrived late; whereas if you're looking at a
historical section of timeline (i.e. /messages), you want to see the best
representation of the state of the room as others were seeing it at the time.
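
If you have a homeserver database handy (for example the SQLite one from the Complement
walkthrough in the contributing guide), you can see both orderings directly. This is only an
illustrative sketch: it assumes the events table exposes topological_ordering and
stream_ordering columns, and you may want to add a room_id filter for a single room:

  # "historical" order, as used by /messages
  sqlite3 homeserver.db 'SELECT event_id, topological_ordering, stream_ordering
                         FROM events ORDER BY topological_ordering, stream_ordering LIMIT 10;'

  # arrival order, as used by /sync
  sqlite3 homeserver.db 'SELECT event_id, stream_ordering
                         FROM events ORDER BY stream_ordering LIMIT 10;'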


Forward extremity

Most-recent-in-time events in the DAG which are not referenced by any other events' prev_events yet.

The forward extremities of a room are used as the prev_events when the next event is sent.


Backwards extremity

The current marker of where we have backfilled up to and will generally be the
oldest-in-time events we know of in the DAG.

This is an event for which we haven't fetched all of the prev_events.

Once we have fetched all of its prev_events, it's unmarked as a backwards
extremity (although we may have formed new backwards extremities from the prev
events during the backfilling process).
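
Synapse tracks both kinds of extremity in dedicated tables, so, as an informal sketch (the
table names below are what a typical homeserver database uses, not a stable interface), you
can list them directly:

  # events that nothing else references yet; used as prev_events for the next event sent
  sqlite3 homeserver.db 'SELECT room_id, event_id FROM event_forward_extremities LIMIT 10;'

  # events whose prev_events we have not yet fetched; where backfill will resume
  sqlite3 homeserver.db 'SELECT room_id, event_id FROM event_backward_extremities LIMIT 10;'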


Outliers

We mark an event as an outlier when we haven't figured out the state for the
room at that point in the DAG yet.

We won't necessarily have the prev_events of an outlier in the database,
but it's entirely possible that we might. The status of whether we have all of
the prev_events is marked as a backwards extremity.

For example, when we fetch the event auth chain or state for a given event, we
mark all of those claimed auth events as outliers because we haven't done the
state calculation ourselves.

State groups

For every non-outlier event we need to know the state at that event. Instead of
storing the full state for each event in the DB (i.e. an event_id -> state
mapping), which is very space inefficient when state doesn't change, we
instead assign each different set of state a "state group" and then have
mappings of event_id -> state_group and state_group -> state.
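
As an informal illustration of those two mappings (again, table and column names as found in
a typical Synapse database, shown for orientation rather than as a schema reference):

  # which state group each event belongs to (event_id -> state_group)
  sqlite3 homeserver.db 'SELECT event_id, state_group FROM event_to_state_groups LIMIT 5;'

  # the state entries making up one state group (state_group -> state); 42 is a placeholder
  sqlite3 homeserver.db 'SELECT type, state_key, event_id FROM state_groups_state WHERE state_group = 42;'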


State group edges


TODO: state_group_edges is a further optimization...
notes from @Azrenbeth, https://pastebin.com/seUGVGeT

\ No newline at end of file
diff --git a/latest/development/saml.html b/latest/development/saml.html
new file mode 100644
index 0000000000..50a4e8d116
--- /dev/null
+++ b/latest/development/saml.html
@@ -0,0 +1,294 @@
SAML - Synapse

How to test SAML as a developer without a server


https://capriza.github.io/samling/samling.html (https://github.com/capriza/samling) is a great
resource for being able to tinker with the SAML options within Synapse without needing to
deploy and configure a complicated software stack.

To make Synapse (and therefore Riot) use it:

  1. Use the samling.html URL above or deploy your own and visit the IdP Metadata tab.
  2. Copy the XML to your clipboard.
  3. On your Synapse server, create a new file samling.xml next to your homeserver.yaml with
     the XML from step 2 as the contents.
  4. Edit your homeserver.yaml to include:

     saml2_config:
       sp_config:
         allow_unknown_attributes: true  # Works around a bug with AVA Hashes: https://github.com/IdentityPython/pysaml2/issues/388
         metadata:
           local: ["samling.xml"]

  5. Ensure that your homeserver.yaml has a setting for public_baseurl:

     public_baseurl: http://localhost:8080/

  6. Run apt-get install xmlsec1 and pip install --upgrade --force 'pysaml2>=4.5.0' to ensure
     the dependencies are installed and ready to go.
  7. Restart Synapse.

Then in Riot:

  1. Visit the login page with a Riot client pointing at your homeserver.
  2. Click the Single Sign-On button.
  3. On the samling page, enter a Name Identifier and add a SAML Attribute for uid=your_localpart.
     The response must also be signed.
  4. Click "Next".
  5. Click "Post Response" (change nothing).
  6. You should be logged in.

If you try to repeat this process, you may be automatically logged in using the information you
gave previously. To fix this, open your developer console (F12 or Ctrl+Shift+I) while on the
samling page and clear the site data. In Chrome, this will be a button on the Application tab.

\ No newline at end of file
--
cgit 1.5.1