Merge pull request #697 from nucypher/docs

Post-publication documentation formatting adjustments
pull/698/head
Tux 2019-01-29 03:04:15 +00:00 committed by GitHub
commit 4b7c930020
9 changed files with 386 additions and 316 deletions

View File

@ -1,28 +1,35 @@
Nucypher's Approaches to Upgradeable Contracts
==============================================
Smart contracts in Ethereum are immutable...
Even though a contract can be deleted with `selfdestruct`, it still exists in the blockchain's history; only its storage is cleared.
In order to fix bugs and provide upgrade logic, it is possible to change the contract (address) while preserving the original contract's storage values.
Approach A
----------
One simple way to achieve this is to create a new contract, copy the original storage values to the new contract, then self-destruct (mark as deleted) the old contract.
When this happens, the client changes the address used for a requested contract.
.. note::
There will be two deployed versions of the contract during storage migration.
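To make the migration concrete, here is a minimal sketch of Approach A (hypothetical `RegistryV1`/`RegistryV2` contracts for illustration, not nucypher's code): the new version copies the old version's values in its constructor, after which the old contract can self-destruct.
.. code:: solidity
pragma solidity ^0.5.0;
contract RegistryV1 {
    address public owner;
    uint256 public value;
    constructor(uint256 _value) public {
        owner = msg.sender;
        value = _value;
    }
    // mark the old version as deleted once migration is complete
    function retire() external {
        require(msg.sender == owner);
        selfdestruct(msg.sender);
    }
}
contract RegistryV2 {
    address public owner;
    uint256 public value;
    // copy the original contract's storage values at deployment
    constructor(RegistryV1 _old) public {
        owner = msg.sender;
        value = _old.value();
    }
}
Between `RegistryV2`'s deployment and `retire()`, both versions exist on-chain - the migration window the note above describes.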
Approach B
----------
A more convenient way is to use a proxy contract with an interface where each method redirects to the *target* contract.
This option is advantageous because the client uses one address most of the time, though the proxy also has methods of its own.
.. important::
If the proxy contract's methods are updated, then the client will also need to change the proxy address.
Approach C
----------
Another way is to use a fallback function in the proxy contract: this function executes on any request, redirecting the request to the target and returning the resulting value (using opcodes); a sketch follows the restrictions below.
This is similar to the previous option, but this proxy doesn't have interface methods, only a fallback function, so there is no need to change the proxy address if contract methods are changed.
@ -33,9 +40,12 @@ This approach is not ideal, and has some restrictions:
* The proxy contract (Dispatcher) holds the storage (not the target contract itself). While upgrading, storage values must be the same or equivalent (see below).
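A minimal sketch of such a proxy, following the common Solidity 0.5-era pattern (hypothetical and far simpler than nucypher's actual `Dispatcher`):
.. code:: solidity
pragma solidity ^0.5.0;
contract Proxy {
    address public target;  // slot 0: current implementation
    address public owner;   // slot 1
    constructor(address _target) public {
        target = _target;
        owner = msg.sender;
    }
    function upgrade(address _target) external {
        require(msg.sender == owner);
        target = _target;
    }
    // executes on any request, forwarding calldata to the target via
    // delegatecall, so the target's code runs against this proxy's storage
    // and msg.sender remains the original caller
    function () external payable {
        address impl = target;
        assembly {
            calldatacopy(0, 0, calldatasize)
            let result := delegatecall(gas, impl, 0, calldatasize, 0, 0)
            returndatacopy(0, 0, returndatasize)
            switch result
            case 0 { revert(0, returndatasize) }
            default { return(0, returndatasize) }
        }
    }
}
Because requests are dispatched by the fallback rather than by per-method stubs, changing the target's methods requires no change to the proxy address.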
Interaction scheme
------------------
.. image:: ../.static/img/Dispatcher.png
:target: ../.static/img/Dispatcher.png
Dispatcher - proxy contract that redirects requests to the target address.
@ -45,8 +55,9 @@ The contract's owner can change the target address by using the `Dispatcher`'s A
The `Dispatcher` contract uses `delegatecall` for redirecting requests, so `msg.sender` remains as the client address
and uses the dispatcher's storage when executing methods in the target contract.
.. warning::
If the target address is not set, or the target contract does not exist, results may be unpredictable because `delegatecall` will return `true`.
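One hedged way to guard against that silent success is to verify, before delegating, that code is actually deployed at the target (an illustrative helper, not nucypher's actual check):
.. code:: solidity
pragma solidity ^0.5.0;
library Address {
    // true only if code is currently deployed at _addr
    function isContract(address _addr) internal view returns (bool) {
        uint256 size;
        assembly { size := extcodesize(_addr) }
        return size > 0;
    }
}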
Contract - the upgradeable contract; each version must have the same ordering of storage values.
New versions of the contract can append values, but must contain all the old values (with the dispatcher's values **first**).
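As an illustration of this ordering rule, with hypothetical versions: the new version keeps every old slot in place, with the dispatcher's own values first, and only appends.
.. code:: solidity
pragma solidity ^0.5.0;
contract ContractV1 {
    address public target;      // the dispatcher's slots come first
    address public owner;
    uint256 public totalStaked; // slot 2
}
contract ContractV2 {
    address public target;      // same slots, same order as V1
    address public owner;
    uint256 public totalStaked;
    uint256 public rewardRate;  // new values are appended, never inserted
}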
@ -55,8 +66,8 @@ If a client sends a request to the contract directly to its deployed address wit
then the request may execute (without exception) using the wrong target address.
Development
-----------
* Use `Upgradeable` as base contract for all contracts that will be used with `Dispatcher`
* Implement the `verifyState(address)` method, which checks that a new version has correct storage values (a sketch follows this list)
@ -64,10 +75,11 @@ then the request may execute (without exception) using the wrong target address.
* Each upgrade should include tests which check storage equivalence
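A hypothetical sketch of these conventions (the real `Upgradeable` base contract lives in the nucypher repository; this only illustrates the shape of a `verifyState` check):
.. code:: solidity
pragma solidity ^0.5.0;
// simplified stand-in for the real base contract
contract Upgradeable {
    address public target;
    address public owner;
    function verifyState(address _testTarget) public;
}
contract StakerStoreV2 is Upgradeable {
    uint256 public totalStaked;
    // delegatecall runs the candidate's getter against this contract's
    // storage; the values must come out equal
    function verifyState(address _testTarget) public {
        (bool ok, bytes memory data) =
            _testTarget.delegatecall(abi.encodeWithSignature("totalStaked()"));
        require(ok && abi.decode(data, (uint256)) == totalStaked);
    }
}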
Sources
-------
More examples:
* https://github.com/maraoz/solidity-proxy - An implementation using libraries (not contracts); rather complex, and some of its ideas are obsolete after the Byzantium hard fork
* https://github.com/willjgriff/solidity-playground - Most of the upgradeable proxy contract code is taken from this repository
* https://github.com/0v1se/contracts-upgradeable - Source code for verifying upgrades

View File

@ -36,13 +36,13 @@ For a full installation guide see the [NuCypher Installation Guide](/guides/inst
Assuming you already have `nucypher` installed with the `demos` extra, running the Heartbeat demo only involves running the `alicia.py` and `doctor.py` scripts; run `alicia.py` first:
```bash
(nucypher)$ python alicia.py
```
This will create a temporary directory called `alicia-files` that contains the data for making Alicia persistent (i.e., her private keys). Apart from that, it will also generate data and keys for the demo. What's left is running the `doctor.py` script:
```bash
(nucypher)$ python doctor.py
```


View File

@ -0,0 +1,90 @@
Local Development Fleet Testing
===============================
.. image:: https://lh3.googleusercontent.com/u7OEMBBCZjPEZunlVJFC5kR7_2k2FEJWnkzQEB_P0JW-28wtmhFJbE_7M5Ludcuh9yJKXpM8ENKV3QXT4xq3ZGLbzGQMxSm6emo_rR0vLJBnXy0-LiwXPExIDE9F0bSbPV-27bKSS5Rohyl5magLvmFvYRZr9w7MUnoGifhLma0EpQBsRpiTJRVat8ceoxj-7xN3SA9_7BmvuzCbs6xj4KjMAzjkEEaW4t52KSmMeP3X_dc6GbCkIdo1t13Vg09bC5k1kyAYStrbgXx2wWiA5p3N_9TISWgTez4A2Wn1f36DB8V-sOCp5w51u9sUWjGtXZCWsFuUWtB7e3Far2SAnaOYfFNmf4cn0q81R9u5YannkZberqPT9MEhhJA7PRbB1NRRI4a5N_406NoyQlSZHXweC-KQ74Vn147BmJ3UeZETKILCUGk8OpD_qUZ89Rz3R1HUoSpvO9fDIHeZbcB-KXE-wCIRXynMgOunQWP5vy_nZj8mMeOIzlMxorC2uUotToNfjZFPRbMPflz_z-5jE6aYIWf7d8OOgUbOKp_Rw9dJDpZYJAIfwVglYPYMQUyRkkpNzApS6QJCpGtOh_c-b5Kc1mFUpyD-BO3KLHKorNdH1Pnq15D1rLZ8JQ-WjsGDkMEUsndLQt8giYU5hY5NQGg8wMN8LduFZlfi0uRHEc9LiiBmCJCtZ6Fcvltk1WAhhf0k5gpAUwKIogko9w=w1308-h982-no
:target: https://pypi.org/project/nucypher/
Overview
--------
.. note::
Currently only "Federated Only" mode is supported for local fleets
All Demo Ursulas:
* Run on `localhost`
* In `--federated-only` mode
* On the `TEMPORARY_DOMAIN` (Implied by `--dev`)
* Using temporary resources (files, database, etc.)
Running A Local Fleet
---------------------
1. Install Nucypher
Acquire the nucypher application code and install the dependencies.
For a full installation guide see the `NuCypher Installation Guide </guides/installation_guide>`_.
2. Run a Lonely Ursula
The first step is to launch the first Ursula on the network by running:
.. code::
$ python run_lonely_demo_ursula.py
This will start an Ursula node:
* With seednode discovery disabled
* On port `11500`
3. Run a Local Fleet of Ursulas
Next, launch subsequent Ursulas, informing them of the first Ursula:
.. code::
$ python run_demo_ursula_fleet.py
This will run 5 temporary Ursulas that:
* All specify the lonely Ursula as a seednode
* Run on ports `11501` through `11506`
4. Run an Entry-Point Ursula (Optional)
While the local fleet is running, you may want an entry-point to introspect the code in a debugger.
For this we provide the optional script `run_single_demo_ursula.py` for your convenience.
.. code::
$ python run_single_demo_ursula.py
This will run a single temporary Ursula:
* That specifies a random fleet node as a teacher
* On a random available port
Connecting to the Local Fleet
------------------------------
Alternatively, you can connect any node run from the CLI by specifying one of the nodes
in the local fleet as a teacher, the same network domain, and the same operating mode.
By default, nodes started with the `--dev` flag run on a dedicated domain (`TEMPORARY_DOMAIN`) and
on a different port than the production default port (`9151`).
Local fleet Ursulas range from ports `11500` to `11506` by default.
Here is an example of connecting to a node in the local development fleet:
.. code::
nucypher ursula run --dev --teacher-uri localhost:11501
.. note::
The local development fleet is an *example* meant to demonstrate how to design and use your own local fleet.

View File

@ -0,0 +1,86 @@
Contributing
============
.. image:: https://cdn-images-1.medium.com/max/800/1*J31AEMsTP6o_E5QOohn0Hw.png
:target: https://cdn-images-1.medium.com/max/800/1*J31AEMsTP6o_E5QOohn0Hw.png
Running the Tests
-----------------
.. note::
A development installation including the solidity compiler is required to run the tests
.. _Pytest Documentation: https://docs.pytest.org/en/latest/
There are several test implementations in `nucypher`; however, the vast majority
of tests are written for execution with `pytest`.
For more details see the `Pytest Documentation`_.
To run the tests:
.. code:: bash
(nucypher)$ pytest -s
Optionally, to run the full, slow, verbose test suite run:
.. code:: bash
(nucypher)$ pytest --runslow -s
Building Documentation
----------------------
.. note::
`sphinx`, `recommonmark`, and `sphinx_rtd_theme` are non-standard dependencies that can be installed by running `pip install -e .[docs]` from the project directory.
.. _Read The Docs: https://nucypher.readthedocs.io/en/latest/
Documentation for `nucypher` is hosted on `Read The Docs`_, and is automatically built without intervention by following the release procedure.
However, you may want to build the documentation html locally for development.
To build the documentation locally:
.. code:: bash
(nucypher)$ cd nucypher/docs/
(nucypher)$ make html
If the build is successful, the resulting html output can be found in `nucypher/docs/build/html`;
Opening `nucypher/docs/build/html/index.html` in a web browser is a reasonable next step.
Building Docker
---------------
Docker builds are automated as part of the publication workflow on circleCI and pushed to Docker Cloud.
However, you may want to build a local Docker image for development.
We provide both a `docker-compose.yml` and a `Dockerfile` which can be used as follows:
*Docker Compose:*
.. code:: bash
(nucypher)$ docker-compose -f deploy/docker/docker-compose.yml build .
Issuing a New Release
---------------------
.. note::
`bumpversion` is a non-standard dependency that can be installed by running `pip install -e .[deployment]` or `pip install bumpversion`.
1. Ensure your local tree has no uncommitted changes
2. Run `$ bumpversion devnum`
3. Ensure you have the intended history and tag: `git log`
4. Push the resulting tagged commit to the originating remote, and directly upstream `$ git push origin <TAG> && git push upstream <TAG>`
5. Monitor the triggered deployment build on circleCI for manual approval

View File

@ -1,4 +1,6 @@
=============================================
NuCypher Federated Testnet (NuFT) Setup Guide
=============================================
This guide is for individuals who intend to spin up and maintain an Ursula node in the early stages of the NuFT
while working with the NuCypher team to improve user experience, comfort, and code-quality of the NuCypher network.
@ -12,182 +14,191 @@ Before getting started, please note:
* NuFT transmits application errors and crash reports to NuCypher's Sentry server. This functionality is enabled by default for NuFT only and will be deactivated by default for mainnet.
.. warning::
The “NuCypher Federated Testnet” (NuFT) is an experimental pre-release of nucypher. Expect bugs, downtime, and unannounced domain-wide restarts. NuFT nodes do not connect to any blockchain. **DO NOT** perform transactions using NuFT node addresses.
.. important::
Exiting the setup process prior to completion may lead to issues/bugs. If you encounter issues, report feedback by opening an Issue on our GitHub (https://github.com/nucypher/nucypher/issues)
Contents
--------
* `Stage A | Install The Nucypher Environment`_
* `Stage B | Configure Ursula`_
* `Stage C | Run the Node (Interactive Method)`_
* `Stage C | Run the Node (System Service Method)`_
Configure a NuFT Node
---------------------
Stage A | Install The Nucypher Environment
------------------------------------------
1. Install Python and Git
If you don't already have them, install Python and git.
As of January 2019, we are working with Python 3.6, 3.7, and 3.8.
* Official Python Website: https://www.python.org/downloads/
* Git Install Guide: https://git-scm.com/book/en/v2/Getting-Started-Installing-Git
2. Create Virtual Environment
Create a system directory for the nucypher application code
.. code::
$ mkdir nucypher
Create a virtual environment for your node to run in using virtualenv
.. code::
$ virtualenv nucypher -p python3
...
Activate your virtual environment
.. code::
$ source nucypher/bin/activate
...
(nucypher)$
3. Install Nucypher
Install nucypher with git and pip3 into your virtual environment
.. code::
(nucypher)$ pip3 install git+https://github.com/nucypher/nucypher.git@master
Re-activate your environment after installing
.. code::
$ source nucypher/bin/activate
...
(nucypher)$
Stage B | Configure Ursula
--------------------------
1. Verify that the installation was successful
Activate your virtual environment and run the `nucypher --help` command
.. code::
$ source nucypher/bin/activate
...
(nucypher)$ nucypher --help
You will see a list of possible usage options (`--version`, `-v`, `--dev`, etc.) and commands (`accounts`, `configure`, `deploy`, etc.). For example, you can use `nucypher configure destroy` to delete all files associated with the node.
2. Configure a new Ursula node
.. code::
(nucypher)$ nucypher ursula init --federated-only
...
3. Enter your public-facing IPv4 address when prompted
.. code::
Enter Node's Public IPv4 Address: <YOUR NODE IP HERE>
4. Enter a password when prompted
.. code::
Enter a passphrase to encrypt your keyring: <YOUR PASSWORD HERE>
.. important::
Save your password as you will need it to relaunch the node, and please note:
- Minimum password length is 16 characters
- There is no password recovery process for NuFT nodes
- Do not use a password that you use anywhere else
- Security audits are ongoing on this codebase. For now, treat it as un-audited.
Running a NuFT Node
-------------------
Stage C | Run the Node (Interactive Method)
-------------------------------------------
1. Connect to Testnet
NuCypher is maintaining a purpose-built endpoint to initially connect to the test network. To connect to the swarm, run:
.. code:: bash
(nucypher)$ nucypher ursula run --teacher-uri <SEEDNODE_URI>
...
2. Verify Connection
This will drop your terminal session into the “Ursula Interactive Console” indicated by the `>>>`. Verify that the node setup was successful by running the `status` command.
.. code::
Ursula >>> status
...
To view a list of known nodes, execute the `known_nodes` command
.. code::
Ursula >>> known_nodes
...
You can also view your node's network status webpage by navigating your web browser to `https://<your-node-ip-address>:9151/status`.
.. note::
Since nodes self-sign TLS certificates, you may receive a warning from your web browser.
To stop your node from the interactive console and return to the terminal session
.. code::
Ursula >>> stop
...
Subsequent node restarts do not need the teacher endpoint specified.
.. code:: bash
(nucypher)$ nucypher ursula run
...
Alternatively, you can run your node as a system service.
See the *“System Service Method”* section below.
Stage C | Run the Node (System Service Method)
----------------------------------------------
*NOTE - This is an alternative to the “Interactive Method”.*
1. Create Ursula System Service
Use this template to create a file named ursula.service and place it in */etc/systemd/system/*.
`/etc/systemd/system/ursula.service`
.. code::
[Unit]
Description="Run 'Ursula', a NuCypher Staking Node."
@ -199,88 +210,96 @@ See the *“System Service Method”* section below.
[Install]
WantedBy=multi-user.target
2. Enable Ursula System Service
.. code::
$ sudo systemctl enable ursula
...
3. Run Ursula System Service
To start Ursula services using systemd
.. code::
$ sudo systemctl start ursula
...
Check Ursula service status
.. code::
$ sudo systemctl status ursula
...
To restart your node service
.. code::
$ sudo systemctl restart ursula
Updating a NuFT Node
---------------------
Nucypher is under active development, so you can expect frequent code changes as bugs are
discovered and fixes are submitted. As a result, Ursula nodes will need to be updated frequently
to use the most up-to-date version of the application code.
.. important::
The steps to update an Ursula running on NuFT are as follows and depend on the type of installation that was employed.
1. Stop the node
Interactive method
.. code::
Ursula >>> stop
OR
Systemd method
.. code::
$ sudo systemctl stop ursula
2. Update to the latest code version
Update your virtual environment
.. code::
(nucypher)$ pip3 install git+https://github.com/nucypher/nucypher.git@federated
3. Restart Ursula Node
Re-activate your environment after updating
Interactive method:
.. code::
$ source nucypher/bin/activate
...
(nucypher)$ nucypher ursula run
OR
Systemd Method:
.. code::
$ sudo systemctl start ursula

View File

@ -6,91 +6,91 @@ Interactive Federated Ursula Configuration
1. Verify your `nucypher` installation and entry points are functional
Activate your virtual environment and run the `nucypher --help` command
.. code:: bash
$ source nucypher/bin/activate
...
(nucypher)$ nucypher --help
You will see a list of possible usage options (`--version`, `-v`, `--dev`, etc.) and commands (`status`, `ursula`).
For example, you can use `nucypher ursula destroy` to delete all files associated with the node.
If your installation is non-functional, be sure you have the latest version installed, and see the `Installation Guide`_
.. _Installation Guide: installation_guide.html
2. Configure a new Ursula node
.. code:: bash
(nucypher)$ nucypher ursula init --federated-only
3. Enter your public-facing IPv4 address when prompted
.. code:: bash
Enter Node's Public IPv4 Address: <YOUR NODE IP HERE>
4. Enter a password when prompted
.. code:: bash
Enter a PASSWORD to encrypt your keyring: <YOUR PASSWORD HERE>
.. important::
Save your password as you will need it to relaunch the node, and please note:
- Minimum password length is 16 characters
- There is no password recovery process for NuFT nodes
- Do not use a password that you use anywhere else
- Your password may be displayed in logs or other recorded output.
- Security audits are ongoing on this codebase. For now, treat it as un-audited.
5. Connect to a Federation
.. code:: bash
(nucypher)$ nucypher ursula run --teacher-uri <SEEDNODE_URI>
6. Verify Node Connection
This will drop your terminal session into the “Ursula Interactive Console” indicated by the `>>>`.
Verify that the node setup was successful by running the `status` command.
.. code:: bash
Ursula >>> status
7. To view a list of known nodes, execute the `known_nodes` command
.. code:: bash
Ursula >>> known_nodes
You can also view your node's network status webpage by navigating your web browser to `https://<your-node-ip-address>:9151/status`.
.. note::
Since nodes self-sign TLS certificates, you may receive a warning from your web browser.
8. To stop your node from the interactive console and return to the terminal session:
.. code:: bash
Ursula >>> stop
9. Subsequent node restarts do not need the teacher endpoint specified:
.. code:: bash
(nucypher)$ nucypher ursula run

View File

@ -5,10 +5,9 @@ NuCypher
.. image:: .static/img/nucypher_logo.svg
:width: 60%
----
.. image:: https://circleci.com/gh/nucypher/nucypher/tree/master.svg?style=svg
:target: https://circleci.com/gh/nucypher/nucypher/tree/master
.. image:: https://img.shields.io/pypi/wheel/nucypher.svg
:target: https://pypi.org/project/nucypher/