Centrifuge

Set up your Centrifuge Mainnet or Testnet node.

Prerequisites

Options

  1. Run with Docker
  2. Run with binaries
💡

The CLI options in this doc are a reference that works for the Centrifuge team, but feel free to adjust the settings for your own setup using the official docs: https://docs.substrate.io/reference/command-line-tools/node-template/. The information here is derived from the official documentation, where you can find more details as well as all the options for logging and debugging: https://docs.substrate.io/deploy/deployment-options/

1. Run with Docker

You can use the container published by Centrifuge on their Docker Hub repo, or be fully trustless by cloning the centrifuge-chain repository and building from its Dockerfile (2-4 h build time on an average machine). In the latter case, make sure to check out the specific commit for the latest release before building.

To find the latest release, go to the Centrifuge repository and look for the listed Docker image.

More images in the official Docker Hub repository.

💡

Note: the tag latest always points to the latest release. Make sure your system actually pulls the image each time, though, or you’ll end up running your locally cached version. For docker-compose this is achieved using the pull --policy always option.

Create docker compose file

Create a docker-compose.yml file with the following contents. Change the ports based on your network setup. Replace /mnt/my_volume/data with the volume and/or data folder you want to use.

💡

From version 0.10.35 onwards, the Docker container uses an unprivileged user to run centrifuge-chain, so it may not be able to access the /data folder. To be safe, either:

  • Make sure your data folder is owned by the centrifuge user (UID/GID 1000):
chown -R 1000:1000 /mnt/my_volume/data
  • Run the container as root by adding user: root to your service definition in the docker-compose file below (not recommended)
💡

From version 0.10.35 the relay chain DB folder has been renamed from relay-chain to polkadot. The container handles this change automatically, but it is advisable to check your /mnt/my_volume/data for leftover folders after you upgrade from a previous version. For a first-time setup, ignore this message.
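The upgrade check described in the note above can be sketched as a quick shell test; the temporary directory here is a stand-in for your real /mnt/my_volume/data folder:

```shell
DATA=$(mktemp -d)                 # stand-in for /mnt/my_volume/data
mkdir -p "$DATA/relay-chain"      # simulate a pre-0.10.35 layout
# After the automatic migration you would expect a `polkadot` folder instead;
# a leftover `relay-chain` directory after upgrading is worth a closer look
if [ -d "$DATA/relay-chain" ] && [ ! -d "$DATA/polkadot" ]; then
  echo "old relay-chain folder present"
fi
rm -rf "$DATA"
```

Run the same two `-d` tests against your real data folder after upgrading; on a migrated node only `polkadot` should remain.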

Mainnet:

version: '3'
services:
  centrifuge:
    container_name: centrifuge-chain
    image: "centrifugeio/centrifuge-chain:[INSERT_LATEST_RELEASE_HERE]"
    platform: "linux/amd64"
    restart: on-failure
    ports:
      - "30333:30333"
      - "9933:9933"
      - "9944:9944"
    volumes:
      # Mount your biggest drive
      - /mnt/my_volume/data:/data
    command:
    - "--port=30333"
    - "--rpc-port=9933"
    - "--rpc-external"
    - "--rpc-cors=all"
    - "--pruning=archive"
    - "--chain=centrifuge"
    - "--parachain-id=2031"
    - "--base-path=/data"
    - "--log=main,info"
    - "--execution=wasm"
    - "--wasm-execution=compiled"
    - "--ws-max-connections=5000"
    - "--bootnodes=/ip4/34.159.117.205/tcp/30333/ws/p2p/12D3KooWMspZo4aMEXWBH4UXm3gfiVkeu1AE68Y2JDdVzU723QPc"
    - "--bootnodes=/dns4/node-7010781199623471104-0.p2p.onfinality.io/tcp/23564/ws/p2p/12D3KooWSN6VXWPvo1hoT5rb5hei5B7YdTWeUyDcc42oTPwLGF2p"
    - "--name=YOUR_NODE_NAME"
    - "--"
    - "--execution=wasm"
    - "--wasm-execution=compiled"
    - "--chain=polkadot"
    - "--sync=fast"

Demo testnet:

version: '3'
services:
  centrifuge:
    container_name: centrifuge-chain
    image: "centrifugeio/centrifuge-chain:[INSERT_LATEST_RELEASE_HERE]"
    platform: "linux/amd64"
    ports:
      - "30355:30333"
      - "9936:9933"
      - "9946:9944"
    volumes:
      - /mnt/my_volume/data:/data
    command:
      - "--port=30333"
      - "--rpc-port=9933"
      - "--unsafe-rpc-external"
      - "--rpc-cors=all"
      - "--pruning=archive"
      - "--chain=/resources/demo-spec-raw.json"
      - "--parachain-id=2031"
      - "--base-path=/data"
      - "--log=main,info"
      - "--execution=wasm"
      - "--wasm-execution=compiled"
      - "--ws-max-connections=5000"
      - "--bootnodes=/ip4/34.89.150.73/tcp/30333/p2p/12D3KooWHsA69Fb1HkdWrwrxY3cSga3wWyHnxuk5GeYtCh59XsWB"
      - "--bootnodes=/ip4/35.198.144.90/tcp/30333/p2p/12D3KooWMJPzvEp5Jhea8eKsUDufBbAzGrn265GcaCmcnp3koPk4"
      - "--bootnodes=/ip4/35.198.98.253/tcp/30333/p2p/12D3KooWHzRutQHbQKWh3Z5BWoXGk5c3gxscxfG57N8fNV6XjVLz"
      - "--name=YOUR_NODE_NAME"
      - "--"
      - "--no-telemetry"
      - "--execution=wasm"
      - "--wasm-execution=compiled"
      - "--chain=/resources/westend-alphanet-raw-specs.json"
      - "--sync=fast"

Run the container

docker-compose pull --policy always && docker-compose up -d

2. Get or build binaries

Prepare user and folder

adduser --system --no-create-home centrifuge_service
mkdir /var/lib/centrifuge-data  # Or use a folder location of your choosing, but replace all occurrences of `/var/lib/centrifuge-data` below accordingly
chown -R centrifuge_service /var/lib/centrifuge-data
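As a quick sanity check before wiring the folder into systemd, you can verify the data directory is writable. This is a sketch: the temporary directory stands in for /var/lib/centrifuge-data, and the commented-out line shows the real check against the service user created above:

```shell
# Stand-in for /var/lib/centrifuge-data; on your server run the real check as
# the service user instead, e.g.:
#   sudo -u centrifuge_service test -w /var/lib/centrifuge-data && echo "data dir writable"
DATA_DIR=$(mktemp -d)
if [ -w "$DATA_DIR" ]; then
  echo "data dir writable"
fi
rm -rf "$DATA_DIR"
```

If the check fails, re-run the chown from the previous step before starting the service.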

Getting the binary

# This dependencies install is only for Debian Distros:
sudo apt-get install cmake pkg-config libssl-dev git clang libclang-dev protobuf-compiler
git clone https://github.com/centrifuge/centrifuge-chain.git
cd centrifuge-chain
git checkout [INSERT_LATEST_RELEASE_HERE]
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
./scripts/install_toolchain.sh
cargo build --release
cp ./target/release/centrifuge-chain /var/lib/centrifuge-data

Alternatively, extract the binary from the official Docker image: use latest for testnet, or a specific release tag for mainnet binaries. Keep in mind that the retrieved binary is built for Linux.

docker run --rm --name centrifuge-cp -d centrifugeio/centrifuge-chain:[INSERT_LATEST_RELEASE_HERE] --chain centrifuge
docker cp centrifuge-cp:/usr/local/bin/centrifuge-chain /var/lib/centrifuge-data

Configure systemd

1. Create systemd service file

We are now ready to start the node, but to ensure it is running in the background and auto-restarts in case of a server failure, we will set up a service file using systemd. Change the ports based on your network setup.

📝

Note: it is important to keep the --bootnodes $ADDR arguments on a single line, as otherwise they are not parsed correctly, making it impossible for the chain to find peers because no bootnodes will be present.

sudo tee <<EOF >/dev/null /etc/systemd/system/centrifuge.service
[Unit]
Description="Centrifuge systemd service"
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=10
User=centrifuge_service
SyslogIdentifier=centrifuge
SyslogFacility=local7
KillSignal=SIGHUP
ExecStart=/var/lib/centrifuge-data/centrifuge-chain --bootnodes=/ip4/35.198.171.148/tcp/30333/ws/p2p/12D3KooWDXDwSdqi8wB1Vjjs5SVpAfk6neadvNTPAik5mQXqV7jF --bootnodes=/ip4/34.159.117.205/tcp/30333/ws/p2p/12D3KooWMspZo4aMEXWBH4UXm3gfiVkeu1AE68Y2JDdVzU723QPc --bootnodes=/dns4/node-7010781199623471104-0.p2p.onfinality.io/tcp/23564/ws/p2p/12D3KooWSN6VXWPvo1hoT5rb5hei5B7YdTWeUyDcc42oTPwLGF2p \
    --port=30333 \
    --rpc-port=9933 \
    --rpc-external \
    --rpc-cors=all \
    --pruning=archive \
    --chain=centrifuge \
    --parachain-id=2031 \
    --base-path=/var/lib/centrifuge-data \
    --log=main,info \
    --execution=wasm \
    --wasm-execution=compiled \
    --ws-max-connections=5000 \
    --name=YOUR_NODE_NAME \
    -- \
    --chain=polkadot \
    --execution=wasm \
    --wasm-execution=compiled \
    --sync=fast

[Install]
WantedBy=multi-user.target
EOF

You’ll have to download the chain specs first:

curl -OL https://raw.githubusercontent.com/centrifuge/centrifuge-chain/main/res/westend-alphanet-raw-specs.json
mv westend-alphanet-raw-specs.json /var/lib/centrifuge-data/moonbean-dev-relay.json
curl -OL https://raw.githubusercontent.com/centrifuge/centrifuge-chain/main/res/demo-spec-raw.json
mv demo-spec-raw.json /var/lib/centrifuge-data/centrifuge-demo-raw.json

Create systemd file now:

sudo tee <<EOF >/dev/null /etc/systemd/system/centrifuge.service
[Unit]
Description="Centrifuge Demo testnet systemd service"
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=10
User=centrifuge_service
SyslogIdentifier=centrifuge_testnet
SyslogFacility=local7
KillSignal=SIGHUP
ExecStart=/var/lib/centrifuge-data/centrifuge-chain --bootnodes=/ip4/34.89.150.73/tcp/30333/p2p/12D3KooWHsA69Fb1HkdWrwrxY3cSga3wWyHnxuk5GeYtCh59XsWB --bootnodes=/ip4/35.198.144.90/tcp/30333/p2p/12D3KooWMJPzvEp5Jhea8eKsUDufBbAzGrn265GcaCmcnp3koPk4 --bootnodes=/ip4/35.198.98.253/tcp/30333/p2p/12D3KooWHzRutQHbQKWh3Z5BWoXGk5c3gxscxfG57N8fNV6XjVLz \
    --port=30333 \
    --rpc-port=9933 \
    --unsafe-rpc-external \
    --rpc-cors=all \
    --pruning=archive \
    --chain=/var/lib/centrifuge-data/centrifuge-demo-raw.json \
    --parachain-id=2031 \
    --base-path=/var/lib/centrifuge-data \
    --log=main,info \
    --execution=wasm \
    --wasm-execution=compiled \
    --name=YOUR_NODE_NAME \
    -- \
    --no-telemetry \
    --execution=wasm \
    --wasm-execution=compiled \
    --chain=/var/lib/centrifuge-data/moonbean-dev-relay.json \
    --sync=fast

[Install]
WantedBy=multi-user.target
EOF

2. Start the systemd service

Enable the previously generated service and start it.

sudo systemctl daemon-reload
sudo systemctl enable centrifuge.service
sudo systemctl start centrifuge.service

If everything was set up correctly, your node should now start synchronizing. This will take several hours, depending on your hardware. To check the status of the running service or to follow the logs, use:

sudo systemctl status centrifuge.service
sudo journalctl -u centrifuge.service -f

Test your RPC connection

Once your node is fully synced, you can run a cURL request to check its status. Use the port you configured in your /etc/systemd/system/centrifuge.service file above:

curl -H "Content-Type: application/json" localhost:9933/health

Expected output if the node is synced: {"peers":35,"isSyncing":false,"shouldHavePeers":true}
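A small sketch of how you might gate on that response in a script. The sample JSON is hard-coded here so the snippet is self-contained; in practice you would capture the curl output as shown in the comment:

```shell
# In practice: resp=$(curl -s -H "Content-Type: application/json" localhost:9933/health)
resp='{"peers":35,"isSyncing":false,"shouldHavePeers":true}'
# The node is ready to serve downstream consumers once isSyncing is false
if echo "$resp" | grep -q '"isSyncing":false'; then
  echo "node is synced"
else
  echo "still syncing"
fi
```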

Optional: Using a snapshot instead of syncing from scratch

💡

Centrifuge currently does not maintain automated snapshots. If you need a newer snapshot, please reach out to the dev team.

  • By downloading a snapshot from the Centrifuge dev team:
    • You get a faster sync; your full node will be ready within hours (depending on how old the snapshot is)
    • You are trusting the Centrifuge team’s snapshots, so this is not as “trustless” or “decentralized” as syncing from scratch

Prerequisites:

Step-by-step instructions:

# List the available bundles (choose between mainnet or testnet-demo)
gsutil ls gs://centrifuge-snapshots/{mainnet,testnet-demo}

# Download the bundle (~400GB)
# If you try to pipe the download directly into lz4 and tar and the network fails you'll have to
# start from scratch. We recommend downloading first to a file.
gsutil cp gs://centrifuge-snapshots/mainnet/YYYY-MM-DD-snap.tar.lz4 $TMP_PATH/
lz4 -d -c -q $TMP_PATH/YYYY-MM-DD-snap.tar.lz4 | tar -xvf - -C $DATA_FOLDER_PATH

Inspect $DATA_FOLDER_PATH: it should contain a chain and a relay-chain directory, the parachain and relay chain DB data folders respectively. Point your node at it directly with --base-path=$DATA_FOLDER_PATH. To sync only the parachain from scratch while keeping the relay DB (usually much bigger), remove the chain directory; this removes a little bit of the trust placed in the Centrifuge dev team by at least syncing the parachain from scratch, which is the data Axelar validators care about most.
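The parachain-from-scratch variant boils down to deleting the chain folder while keeping relay-chain. As a sketch, with a temporary directory standing in for your real $DATA_FOLDER_PATH:

```shell
DATA_FOLDER_PATH=$(mktemp -d)    # stand-in for your real data folder
mkdir -p "$DATA_FOLDER_PATH/chain" "$DATA_FOLDER_PATH/relay-chain"
# Drop the parachain DB so it resyncs from scratch; keep the larger relay DB
rm -rf "$DATA_FOLDER_PATH/chain"
ls "$DATA_FOLDER_PATH"           # only relay-chain should remain
rm -rf "$DATA_FOLDER_PATH"
```

Stop the node before deleting the folder, and restart it afterwards so the parachain sync starts cleanly.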

Configure vald

In order for vald to connect to your local node, your rpc_addr should be exposed in vald’s config.toml.

[[axelar_bridge_evm]]
name = "centrifuge"
rpc_addr = "http://IP:PORT"
start-with-bridge = true

Troubleshooting

Upgrading from an older version

If you are asked to update your node version, run through this checklist:

  • Make sure your CLI arguments match the ones in this doc; there may have been changes for a new version
  • It is good practice to clone your data dir and set up a new node with the new version to make sure it runs, then either point to the new one or replace the old one with the exact same parameters
  • Use the latest release, not the latest code from the main branch on the centrifuge-chain repo.

Error logs during syncing

During fast syncing it is expected to see the following error messages on the [Relaychain] side.

ERROR tokio-runtime-worker sc_service::client::client: [Relaychain] Unable to pin block for finality notification. hash: 0x866f…387c, Error: UnknownBlock: State already discarded [...]
WARN tokio-runtime-worker parachain::runtime-api: [Relaychain] cannot query the runtime API version: Api called for an unknown Block:  State already discarded [...]

as long as the following logs are seen

INFO tokio-runtime-worker substrate: [Relaychain] ⚙️  Syncing, target=#18279012 (9 peers), best: #27674 (0x28a4…6fe6), finalized #27648 (0x406d…b89e), ⬇ 1.1MiB/s ⬆ 34.6kiB/s
INFO tokio-runtime-worker substrate: [Parachain] ⚙️  Syncing 469.4 bps, target=#4306117 (15 peers), best: #33634 (0x79d2…0a45), finalized #0 (0xb3db…9d82), ⬇ 1.3MiB/s ⬆ 2.0kiB/s

everything is working correctly. Once the chain is fully synced, the error logs will go away.

Stalled Syncing

If the chain stops syncing, mostly due to unavailable blocks, please restart your node. In most cases the reason is that your node’s p2p view is bad at that moment, resulting in your node dropping peers and being unable to sync further. A restart helps in these cases.

Example logs will look like the following:

WARN tokio-runtime-worker sync: [Parachain] 💔 Error importing block 0x88591cb0cb4f66474b189a34abab560e335dc508cb8e7926343d6cf8db6840b7: consensus error: Import failed: Database

Changed bootnode or peer identities

It is common for bootnodes to change their p2p identity, leading to the following logs:

WARN tokio-runtime-worker sc_network::service: [Relaychain] 💔 The bootnode you want to connect to at `/dns/polkadot-bootnode.polkadotters.com/tcp/30333/p2p/12D3KooWCgNAXvn3spYBeieVWeZ5V5jcMha5Qq1hLMtGTcFPk93Y` provided a different peer ID `12D3KooWPAVUgBaBk6n8SztLrMk8ESByncbAfRKUdxY1nygb9zG3` than the one you expect `12D3KooWCgNAXvn3spYBeieVWeZ5V5jcMha5Qq1hLMtGTcFPk93Y`.

These logs can be safely ignored.
