Add UI information for bond operator; update pre automation docs and remove old cloudworkers information.

pull/2888/head
derekpierre 2022-03-31 15:29:27 -04:00 committed by Kieran Prasch
parent 5904156886
commit b76b08fefe
5 changed files with 135 additions and 254 deletions


@ -125,7 +125,6 @@ Whitepapers
   pre_application/best_practices
   pre_application/node_providers
   pre_application/testnet
-  pre_application/cloud_provider_tutorial

.. toctree::


@ -1,148 +1,157 @@

(unchanged page header)

.. _managing-cloud-nodes:

=========================
PRE Node Cloud Automation
=========================

(removed: the previous ``nucypher cloudworkers`` documentation)

.. important::

    In order to run a PRE node on Threshold, ``nucypher`` v6.0.0 or above will be required.
    See `releases <https://pypi.org/project/nucypher/#history>`_ for the latest version.

NuCypher maintains a CLI to assist with the initialization and management of PRE nodes
deployed on cloud infrastructure, that leverages automation tools
such as `Ansible <https://www.ansible.com/>`_ and `Docker <https://www.docker.com/>`_.

.. important::

    Only supports Digital Ocean and AWS cloud infrastructure.

This tool will handle the minutiae of node configuration and operation on your behalf by
providing high-level CLI commands.

.. code:: bash

    (nucypher)$ nucypher cloudworkers ACTION [OPTIONS]

**Command Actions**

+----------------------+-----------------------------------------------------------------------+
| Action               | Description                                                           |
+======================+=======================================================================+
| ``up``               | Creates and deploys hosts for stakers.                                |
+----------------------+-----------------------------------------------------------------------+
| ``create``           | Creates and deploys the given number of hosts independent of stakes  |
+----------------------+-----------------------------------------------------------------------+
| ``add``              | Add an existing host to be managed by cloudworkers CLI tools         |
+----------------------+-----------------------------------------------------------------------+
| ``add_for_stake``    | Add an existing host to be managed for a specified staker            |
+----------------------+-----------------------------------------------------------------------+
| ``deploy``           | Install and run a node on existing managed hosts.                    |
+----------------------+-----------------------------------------------------------------------+
| ``update``           | Update or manage existing installed nodes.                           |
+----------------------+-----------------------------------------------------------------------+
| ``destroy``          | Shut down and cleanup resources deployed on AWS or Digital Ocean     |
+----------------------+-----------------------------------------------------------------------+
| ``stop``             | Stop the selected nodes.                                              |
+----------------------+-----------------------------------------------------------------------+
| ``status``           | Prints a formatted status of selected managed hosts.                 |
+----------------------+-----------------------------------------------------------------------+
| ``logs``             | Download and display the accumulated stdout logs of selected hosts   |
+----------------------+-----------------------------------------------------------------------+
| ``backup``           | Download local copies of critical data from selected installed nodes |
+----------------------+-----------------------------------------------------------------------+
| ``restore``          | Reconstitute and deploy an operating node from backed up data        |
+----------------------+-----------------------------------------------------------------------+
| ``list_hosts``       | Print local nicknames of all managed hosts under a given namespace   |
+----------------------+-----------------------------------------------------------------------+
| ``list_namespaces``  | Print namespaces under a given network                                |
+----------------------+-----------------------------------------------------------------------+

Some examples:

.. code:: bash

    # Initialize a node
    #
    # on Digital Ocean
    ##################
    $ export DIGITALOCEAN_ACCESS_TOKEN=<your access token>
    $ export DIGITALOCEAN_REGION=<a digitalocean availability region>
    $ nucypher cloudworkers up --cloudprovider digitalocean --remote-provider http://mainnet.infura..3epifj3rfioj

    # OR
    # on AWS
    ########
    # configure your local aws cli with named profiles https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
    $ nucypher cloudworkers up --cloudprovider aws --aws-profile my-aws-profile --remote-provider https://mainnet.infura..3epifj3rfioj

    ####################################################################################################################################
    #
    # Management Commands
    #

    # add your ubuntu machine to an existing stake
    $ nucypher cloudworkers add_for_stake --staker-address 0x9a92354D3811938A1f35644825188cAe3103bA8e --host-address somebox.myoffice.net --login-name ubuntu --key-path ~/.ssh/id_rsa

    # update all your existing hosts to the latest code
    $ nucypher cloudworkers update --nucypher-image nucypher/nucypher:latest

    # stop the running node(s) on your host(s)
    $ nucypher cloudworkers stop

    # change two of your existing hosts to use alchemy instead of infura as a delegated blockchain
    # note: hosts created for local stakers will have the staker's checksum address as their nickname by default
    $ nucypher cloudworkers update --remote-provider https://eth-mainnet.ws.alchemyapi.io/v2/aodfh298fh2398fh2398hf3924f... --include-host 0x9a92354D3811938A1f35644825188cAe3103bA8e --include-host 0x1Da644825188cAe3103bA8e92354D3811938A1f35

    # add some random host and then deploy a node on it
    $ nucypher cloudworkers add --host-address somebox.myoffice.net --login-name ubuntu --key-path ~/.ssh/id_rsa --nickname my_new_host
    $ nucypher cloudworkers deploy --include-host my_new_host --remote-provider https://mainnet.infura..3epifj3rfioj

    # deploy nucypher on all your managed hosts
    $ nucypher cloudworkers deploy --remote-provider https://mainnet.infura..3epifj3rfioj

    # print the current status of all nodes across all namespaces (in bash)
    $ for ns in $(nucypher cloudworkers list-namespaces); do nucypher cloudworkers status --namespace $ns; done
    > local nickname: Project11-mainnet-2
    >   nickname: Aquamarine Nine DarkViolet Foxtrot
    >   staker address: 0xFBC052299b8B3Df05CB8351151E71f21562096F4
    >   worker address: 0xe88bF385a6ed8C86aA153f08F999d8698B5326e0
    >   rest url: https://xxx.xxx.xxx.xxx:9151
    >   missing commitments: 0
    >   last committed period: 2657
    >   ETH: 0.xxx
    >   provider: https://mainnet.infura.io/v3/xxxx
    >   ursula docker image: "nucypher/nucypher:latest"
    >   ursula command: ""nucypher ursula run --network mainnet""
    >   last log line: Working ~ Keep Ursula Online!
    .....

    # see if all your managed hosts successfully committed to the next period
    $ for ns in $(nucypher cloudworkers list-namespaces); do nucypher cloudworkers status --namespace $ns; done | grep "last committed period: \|last log line: \|local nickname:"

    # backup all your node's critical data
    # note: this is also done after any update or deploy operations
    $ for ns in $(nucypher cloudworkers list-namespaces); do nucypher cloudworkers backup --namespace $ns; done

    # show some info about your hosts
    $ nucypher cloudworkers list-hosts -v

    # set a max-gas-price for existing hosts
    $ nucypher cloudworkers update --cli max-gas-price=50

    # NB: environment variables and cli args function identically for both update and deploy

    # set some environment variables to configure nodes on all your hosts
    $ nucypher cloudworkers deploy -e DONT_PERFORM_WORK_ON_SUNDAY=true

    # set a max gas price and gas strategy for existing hosts
    $ nucypher cloudworkers update --cli max-gas-price=50 --cli gas-strategy=slow

(added: the updated ``nucypher-ops``-based content)

.. note::

    Previously this functionality was provided by the ``nucypher cloudworkers`` CLI command.
    However, that command has been deprecated and analogous functionality is now provided
    via `nucypher-ops <https://github.com/nucypher/nucypher-ops>`_.

.. warning::

    `nucypher-ops <https://github.com/nucypher/nucypher-ops>`_ is under active development, and
    should currently **only be used on testnet**.

PRE Node Testnet Cloud Automation
=================================

The remainder of the added content is the testnet walkthrough from the now-deleted
``cloud_provider_tutorial`` page, shown in full in the deleted file below: the prerequisites,
Digital Ocean, Infura, Setup Remote Node, and Stake and Bond sections carry over essentially
verbatim, with the Rinkeby-wallet reminder promoted from a ``.. note::`` to a ``.. warning::``.


@ -1,140 +0,0 @@
.. _cloud-provider-tutorial:
=================================
PRE Testnet Node Cloud Automation
=================================
In this tutorial we're going to set up a Threshold PRE Node on the Rinkeby testnet using a remote cloud provider (Digital Ocean, AWS, and more in the future).
Whilst this example demonstrates how to deploy to Digital Ocean, the steps for any other infrastructure provider are virtually identical.
There are a few prerequisites before we can get started.
First, we need to create accounts at `Digital Ocean <https://cloud.digitalocean.com/>`_ and `Infura <https://infura.io>`_.
We also need at least 40,000 T in our wallet.
There are currently no Threshold testnet faucets, but if you ask in the `Discord <https://discord.gg/Threshold>`_ channel, someone will be happy to help you out.
.. note::

    Ensure that you are using a wallet on the Rinkeby testnet; **don't** use a mainnet address.
Digital Ocean
-------------
All of the Digital Ocean configuration will be done automatically, but there are two local environment variables we need to set in order to make this work:
- ``DIGITALOCEAN_ACCESS_TOKEN`` - Your Digital Ocean `access token <https://docs.digitalocean.com/reference/api/create-personal-access-token/>`_.
- ``DIGITAL_OCEAN_KEY_FINGERPRINT`` - Your Digital Ocean `key fingerprint <https://docs.digitalocean.com/products/droplets/how-to/add-ssh-keys/to-account/>`_.
Follow those two guides and either ``export`` the environment variables or add them to your ``~/.bashrc`` file.
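If you want to confirm the token works before going any further, the Digital Ocean account endpoint is an easy sanity check (optional; this assumes ``curl`` is installed):

.. code-block:: bash

    # Should return a JSON description of your account; a 401 response means the token is wrong.
    $ curl -s -H "Authorization: Bearer $DIGITALOCEAN_ACCESS_TOKEN" \
        "https://api.digitalocean.com/v2/account"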
Infura
------
We need a way to interact with both the Ethereum and Polygon testnet networks; Infura makes this easy for us.
Create a new project at Infura with product type ``ETHEREUM``.
Also, add the Polygon add-on to this project.
We're going to create two more environment variables:
- ``INFURA_RINKEBY_URL``
- ``INFURA_MUMBAI_URL``
In the **Project Settings**, change the ``ENDPOINTS`` to ``RINKEBY`` / ``POLYGON_MUMBAI``.
Set the above environment variables to the corresponding ``https`` endpoint.
Overall the environment variable process should look something like:
.. code-block:: bash
$ export INFURA_RINKEBY_URL=https://rinkeby.infura.io/v3/bd76baxxxxxxxxxxxxxxxxxxxxxf0ff0
$ export INFURA_MUMBAI_URL=https://polygon-mumbai.infura.io/v3/bd76baxxxxxxxxxxxxxxxxxxxxxf0ff0
$ export DIGITALOCEAN_ACCESS_TOKEN=4ade7a8701xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxbafd23
$ export DIGITAL_OCEAN_KEY_FINGERPRINT=28:38:e7xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx:ca:5c
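To double-check that both endpoints are reachable and pointed at the right networks, a quick JSON-RPC call to each is enough; ``eth_chainId`` should return ``0x4`` for Rinkeby and ``0x13881`` for Polygon Mumbai (assumes ``curl`` is available):

.. code-block:: bash

    # Each call should return the chain id of the expected network.
    $ curl -s -X POST -H "Content-Type: application/json" \
        --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' "$INFURA_RINKEBY_URL"
    $ curl -s -X POST -H "Content-Type: application/json" \
        --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' "$INFURA_MUMBAI_URL"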
Setup Remote Node
-----------------
Locally, we will install `NuCypher Ops <https://github.com/nucypher/nucypher-ops>`_ to handle the heavy lifting of setting up a node.
.. code-block:: bash
$ pip install nucypher-ops
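If you prefer to keep the tool isolated from other Python packages, installing it into a virtual environment works just as well (optional; any recent Python 3 should do):

.. code-block:: bash

    # Optional: install nucypher-ops into its own virtual environment.
    $ python3 -m venv nucypher-ops-venv
    $ source nucypher-ops-venv/bin/activate
    $ pip install nucypher-ops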
Now that NuCypher Ops is installed, we can create a droplet on Digital Ocean:
.. code-block:: bash
nucypher-ops nodes create --network ibex --count 1 --cloudprovider digitalocean
At this point you should see the droplet in your Digital Ocean dashboard.
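You can also confirm this from the terminal by listing the droplets on the account through the Digital Ocean API (an optional check; assumes ``curl`` and ``jq`` are installed):

.. code-block:: bash

    # List droplet names and public IPv4 addresses; the newly created node should appear here.
    $ curl -s -H "Authorization: Bearer $DIGITALOCEAN_ACCESS_TOKEN" \
        "https://api.digitalocean.com/v2/droplets" | \
        jq -r '.droplets[] | "\(.name)  \(.networks.v4[]? | select(.type=="public") | .ip_address)"'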
Now we can deploy the PRE Node:
.. code-block:: bash
nucypher-ops ursula deploy --eth-provider $INFURA_RINKEBY_URL --nucypher-image nucypher/nucypher:experimental --payment-provider $INFURA_MUMBAI_URL --network ibex
This should produce a lot of log messages as the Ansible playbooks install all the requirements and set up the node.
The final output should be similar to:
.. code-block:: bash
some relevant info:
config file: "/SOME_PATH/nucypher-ops/configs/ibex/nucypher/ibex-nucypher.json"
inventory file: /SOME_PATH/nucypher-ops/configs/ibex-nucypher-2022-03-25.ansible_inventory.yml
If you like, you can run the same playbook directly in ansible with the following:
ansible-playbook -i "/SOME_PATH/nucypher-ops/configs/ibex-nucypher-2022-03-25.ansible_inventory.yml" "src/playbooks/setup_remote_workers.yml"
You may wish to ssh into your running hosts:
ssh root@123.456.789.xxx
*** Local backups containing sensitive data may have been created. ***
Backup data can be found here: /SOME_PATH//nucypher-ops/configs/ibex/nucypher/remote_worker_backups/
This tells us the location of several config files and helpfully prints the IP address of our newly created node (you can also see this on the Digital Ocean dashboard).
Let's ``ssh`` into it and look at the logs:
.. code-block:: bash
$ ssh root@123.456.789.xxx
root@nucypher-ibex-1:~#
root@nucypher-ibex-1:~# sudo docker logs --follow ursula
...
! Operator 0x06E11400xxxxxxxxxxxxxxxxxxxxxxxxxxxx1Fc0 is not funded with ETH
! Operator 0x06E11400xxxxxxxxxxxxxxxxxxxxxxxxxxxx1Fc0 is not bonded to a staking provider
...
These lines will print repeatedly until the Operator is funded with some Rinkeby ETH and bonded to a staking provider.
Send Rinkeby ETH to the Operator address that is printed in the logs.
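Once the transfer confirms, you can check the Operator's balance directly against the Rinkeby endpoint; a small sketch using plain JSON-RPC (replace the placeholder with your Operator address; the balance is returned in wei, hex-encoded):

.. code-block:: bash

    # Check the Operator's Rinkeby ETH balance via eth_getBalance.
    $ curl -s -X POST -H "Content-Type: application/json" \
        --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0xYOUR_OPERATOR_ADDRESS","latest"],"id":1}' \
        "$INFURA_RINKEBY_URL"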
Stake and Bond
--------------
Now that our operator is funded with ETH, we're ready to stake and bond.
At this point you need some testnet ETH and 40,000 T in a MetaMask wallet.
Again, ask in the Discord channel if you need help with this.
Navigate to the `Testnet Staking Dashboard <https://dn3gsazzaajb.cloudfront.net/manage/stake>`_ and connect your MetaMask wallet.
Go to the **stake** tab and click "Stake liquid T on rinkeby".
.. image:: ../.static/img/testnet_stake_dashboard.png
:target: ../.static/img/testnet_stake_dashboard.png
Allow the 40,000 T spend, and then stake it.
Both transactions will require authorization via MetaMask.
You can ignore the **Configure Addresses** option - they should all default to the currently connected account.
Once those transactions are confirmed, switch to the **bond** tab.
Here you will paste the Operator address that is being printed by the Docker logs:
.. image:: ../.static/img/testnet_bond_dashboard.png
:target: ../.static/img/testnet_bond_dashboard.png
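If you'd rather not transcribe the Operator address by hand, it can be pulled straight out of the container logs shown earlier (assuming the node container is named ``ursula``, as in this tutorial):

.. code-block:: bash

    # Print the Operator address from the node's log output.
    $ sudo docker logs ursula 2>&1 | grep -m 1 -o "Operator 0x[0-9a-fA-F]*"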
Once that transaction is confirmed, switch back to view the logs of the node.
You should see:
.. code-block:: bash
Broadcasting CONFIRMOPERATORADDRESS Transaction (0.00416485444 ETH @ 88.58 gwei)
TXHASH 0x3329exxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx5ec9a6
✓ Work Tracking
✓ Start Operator Bonded Tracker
✓ Rest Server https://123.456.789.000:9151
Working ~ Keep Ursula Online!
You can view the status of your node by visiting ``https://YOUR_NODE_IP:9151/status``.
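The same page can be fetched from the command line; note the ``-k`` flag, since the node serves a self-signed TLS certificate:

.. code-block:: bash

    # Fetch the node's status page (-k skips verification of the self-signed certificate).
    $ curl -sk "https://YOUR_NODE_IP:9151/status"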


@ -88,6 +88,10 @@ In order to provide the PRE service and receive rewards, there are three options
of the PRE client.

* **Self-Managed, Automated**: Run your own PRE node on either Digital Ocean or AWS, leveraging :ref:`automation tools <managing-cloud-nodes>` that speed up and simplify the installation process. In this case too, stakers are entirely responsible for setup, operation, and monitoring of the PRE client.

.. note::

    The :ref:`automation tools <managing-cloud-nodes>` are under active development, and should currently **only be used on testnet**.

Note that setting up a PRE node from scratch is non-trivial, but is typically inexpensive and unburdensome to maintain.

PRE end-users expect and require an on-demand service, wherein their *grant*, *revoke* and *re-encryption* requests are answered reliably, correctly, and without interruption.
Hence the most critical responsibility for stakers is ensuring that their PRE node remains online **at all times**. If this is not certain using a local machine, it is highly recommended to use cloud infrastructure instead.
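A lightweight way to watch uptime is to poll the node's ``/status`` endpoint on a schedule and raise an alert when it stops answering; a minimal sketch (the node URL and alert command are placeholders to adapt to your own setup) that could be run from cron:

.. code-block:: bash

    #!/usr/bin/env bash
    # check_pre_node.sh: alert if the PRE node's status endpoint stops responding.
    # Example crontab entry (every 5 minutes): */5 * * * * /usr/local/bin/check_pre_node.sh
    NODE_URL="https://YOUR_NODE_IP:9151/status"

    # -k: the node uses a self-signed certificate; -f: treat HTTP errors as failures;
    # --max-time: don't hang on an unresponsive host.
    if ! curl -skf --max-time 15 -o /dev/null "$NODE_URL"; then
        echo "PRE node at $NODE_URL is not responding" | mail -s "PRE node down" you@example.com
    fi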


@ -15,7 +15,8 @@ Running a PRE Node
NuCypher maintains a separate self-contained CLI that automates the initialization
and management of PRE nodes deployed on cloud infrastructure. This CLI leverages
automation tools such as Ansible and Docker to simplify the setup and management
of nodes running in the cloud (*under active development and currently limited to
testnet automation*). See :ref:`managing-cloud-nodes`.
After :ref:`staking on Threshold <stake-initialization>`, and finding a server that meets the :ref:`requirements <node-requirements>`, running a PRE node entails the following:
@ -55,6 +56,14 @@ should be performed by the Staking Provider via the ``nucypher bond`` command (d
Once the Operator address is bonded, it cannot be changed for 24 hours.
via UI
------
* Navigate to https://stake.nucypher.network/manage/bond
* Connect with the Staking Provider account to execute the bond operation
* Enter the Operator address to bond
* Click *"Bond Operator"*
via Docker
----------