How to run an Allora Worker: Hugging Face Worker

Category: Node

Author: admin

Published: 29 June, 2024

Updated: 18 July, 2024

Reading time: 4 min

System Requirements

To participate in the Allora Network, ensure your system meets the following requirements:

Operating system (used in this guide): Linux

CPU: 2 cores

RAM: 4 GB

Storage: SSD or NVMe with at least 20 GB of free space

Install the dependencies: Golang and Docker.

Read more: https://rejump.dev/how-to-worker-node-on-allora-network/

0. Deploying a Hugging Face Model

In this example, we will use the Chronos model amazon/chronos-t5-tiny. Chronos is a family of pretrained time-series forecasting models based on language-model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using a cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time-series data, as well as on synthetic data generated using Gaussian processes. For simplicity, we will use zero-shot forecasting: generating forecasts for datasets the model has never seen.
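Before wiring the model into a worker, the zero-shot pipeline can be exercised locally as a sanity check. This is a minimal sketch, assuming the `chronos-forecasting` package (which provides `ChronosPipeline`), `torch`, and `numpy` are installed; the toy context series is made up.

```python
# Minimal zero-shot forecast with amazon/chronos-t5-tiny.
# Requires: pip install chronos-forecasting torch numpy
import numpy as np
import torch
from chronos import ChronosPipeline

pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-tiny",
    device_map="cpu",            # use "cuda" if a GPU is available
    torch_dtype=torch.float32,
)

# Toy historical context (e.g. recent prices); any 1-D series works.
context = torch.tensor([100.0, 101.5, 99.8, 102.3, 103.1, 102.7, 104.0])

# Sample 20 future trajectories, 1 step ahead; shape: [1, 20, 1].
forecast = pipeline.predict(context, prediction_length=1, num_samples=20)

# Summarize the sampled trajectories into a point forecast.
low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
print(f"median forecast: {median[0]:.2f} (80% interval: {low[0]:.2f}-{high[0]:.2f})")
```

Downloading the tiny model takes a few seconds; this is exactly the sampling step the inference server below will perform on real price data.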

1. Install allocmd
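allocmd is Allora's Python-based scaffolding CLI and is distributed on PyPI, so a typical install looks like the following (assuming Python 3 and pip are already present):

```shell
# Install (or upgrade) the Allora worker scaffolding CLI from PyPI.
pip install allocmd --upgrade

# Confirm the CLI is on PATH and see the available subcommands.
allocmd --help
```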

2. Initializing the worker

# Topics 2, 4, and 6 (ETH, BTC, SOL) provide inferences for 10-minute predictions

# Topics 1, 3, and 5 (ETH, BTC, SOL) provide inferences for 24-hour predictions

In this example the worker name is faceworker.
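Scaffolding the dev worker then looks roughly like this; the flag names follow the allocmd documentation at the time of writing and may differ across versions, so confirm them with `allocmd --help`. Topic 1 (ETH, 24-hour prediction) is used as the example topic.

```shell
# Scaffold a dev worker named "faceworker" on topic 1 (ETH, 24h).
# Flag names may vary by allocmd version; check `allocmd --help`.
allocmd init --name faceworker --topic 1 --env dev
cd faceworker
```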

3. Creating the inference server


Full code for 9 topics

Register an API key on CoinGecko: https://www.coingecko.com/en/api/pricing
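The inference server can be a small Flask app that pulls price history from CoinGecko and returns a Chronos forecast. The sketch below is an assumption-heavy outline, not the guide's exact code: the file name (app.py), route shape (`/inference/<token>`), plain-text response format, and the `COINGECKO_API_KEY` environment variable are illustrative choices that must match whatever your main.py expects.

```python
# app.py (illustrative): Flask inference server backed by Chronos.
import os

import requests
import torch
from chronos import ChronosPipeline
from flask import Flask, Response

app = Flask(__name__)

# Load the tiny Chronos model once at startup.
pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-tiny", device_map="cpu", torch_dtype=torch.float32
)

# Map topic token symbols to CoinGecko coin ids.
TOKEN_IDS = {"ETH": "ethereum", "BTC": "bitcoin", "SOL": "solana"}

@app.route("/inference/<token>")
def inference(token):
    coin = TOKEN_IDS.get(token.upper())
    if coin is None:
        return Response("unsupported token", status=400)

    # Fetch 30 days of daily prices from CoinGecko.
    url = (f"https://api.coingecko.com/api/v3/coins/{coin}/market_chart"
           f"?vs_currency=usd&days=30&interval=daily")
    headers = {"x-cg-demo-api-key": os.environ["COINGECKO_API_KEY"]}
    prices = [p[1] for p in requests.get(url, headers=headers).json()["prices"]]

    # Zero-shot forecast one step ahead; return the sample mean as text.
    forecast = pipeline.predict(torch.tensor(prices), prediction_length=1)
    return Response(str(forecast[0].mean().item()), status=200)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

Export `COINGECKO_API_KEY` before starting the server; for the paid API the header name differs from the demo-key header used here.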


4. Modifying requirements.txt
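A plausible requirements.txt for this setup is shown below; the exact package list is an assumption based on the server sketched above, and you should pin versions to whatever you actually test against.

```text
flask[async]
gunicorn[gevent]
transformers[torch]
pandas
python-dotenv
requests
chronos-forecasting
```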

5. Modifying main.py to call the inference server

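The change to main.py boils down to a small HTTP call against the local inference server. This sketch shows the request helper only; the function name and the `/inference/<token>` path are illustrative and must match your server's route.

```python
# Sketch of the request logic main.py uses to query the inference server.
from urllib.request import urlopen

def get_inference(base_url: str, token: str) -> str:
    """Return the inference server's prediction for `token` as a string."""
    with urlopen(f"{base_url}/inference/{token}") as resp:
        return resp.read().decode("utf-8").strip()
```

main.py then hands this string back to the Allora node process as the worker's inference for the topic.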

6. Updating the Docker

Modifying Dockerfile

Create new Dockerfile_inference
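A minimal Dockerfile_inference could look like the following. The base image, exposed port, and gunicorn entrypoint are assumptions; in particular, the `app:app` target presumes the Flask application object is named `app` inside app.py.

```dockerfile
# Dockerfile_inference (illustrative): serves the inference server on 8000.
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```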

7. Update config


Update your hex_coded_pk:

Update boot_nodes (head list last updated 07/10/2024):

Check whether new head peers have been published: https://github.com/allora-network/networks/blob/main/edgenet/heads.txt
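For orientation, the two fields being edited live in the allocmd-generated config. The exact file layout varies between allocmd versions, so treat this fragment as illustrative only and keep the structure your scaffold actually produced:

```yaml
# Illustrative fragment only -- match the allocmd-generated structure.
hex_coded_pk: "<hex-encoded private key of your Allora wallet>"
boot_nodes: "<multiaddress of a current head node, copied from heads.txt>"
```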

8. Initializing the worker for production
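Production initialization re-runs allocmd against the prod environment, which generates the production deployment files. As before, the exact flags may differ by version; confirm with `allocmd --help`.

```shell
# Generate production files (prod-docker-compose.yaml, keys, etc.)
# from inside the scaffolded worker directory.
allocmd init --env prod
```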

9. Fixing some bugs

Edit the file prod-docker-compose.yaml.

Add the inference service to prod-docker-compose.yaml, before the worker and head services:

Change --allora-chain-topic-id to your topic number.
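An illustrative fragment of the edited compose file is shown below; the inference service is prepended to the worker and head services that allocmd generated. Service names and the port mapping are placeholders.

```yaml
# prod-docker-compose.yaml (fragment, illustrative)
services:
  inference:
    container_name: inference
    build:
      context: .
      dockerfile: Dockerfile_inference
    ports:
      - "8000:8000"
  # ... the allocmd-generated worker and head services follow here ...
```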


Final: Run a Worker Node
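With the compose file in place, the stack can be built and started. The service name `worker` in the logs command is an assumption based on the allocmd-generated file; adjust it to match yours.

```shell
# Build images and start the inference, worker, and head services.
docker compose -f prod-docker-compose.yaml up -d --build

# Follow the worker's logs to confirm it connects and serves inferences.
docker compose -f prod-docker-compose.yaml logs -f worker
```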

You can then verify that your worker is registered on-chain.
