In this example, we will use the Chronos model amazon/chronos-t5-tiny. Chronos is a family of pretrained time series forecasting models based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as synthetic data generated using Gaussian processes. For simplicity, we will use zero-shot forecasting, which refers to a model's ability to generate forecasts for datasets it was not trained on.
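For orientation, a minimal zero-shot forecast with this model looks roughly like the following (the toy series, horizon, and sample count are illustrative; ChronosPipeline comes from the chronos-forecasting package):

import torch
from chronos import ChronosPipeline  # pip install chronos-forecasting

# Load the pretrained tokenizer and model; zero-shot use needs no fine-tuning.
pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-tiny",
    device_map="cpu",  # or "cuda" if a GPU is available
    torch_dtype=torch.bfloat16,
)

context = torch.tensor([1.0, 1.1, 1.3, 1.2, 1.4, 1.5, 1.7, 1.6])  # toy history
# Sample 20 future trajectories, 3 steps ahead: shape [1, 20, 3].
forecast = pipeline.predict(context, prediction_length=3, num_samples=20)
low, median, high = torch.quantile(
    forecast[0].float(), torch.tensor([0.1, 0.5, 0.9]), dim=0
)
print(median)  # the per-step point forecast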
1. Install allocmd
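allocmd is a Python CLI; assuming a standard Python 3 environment, it is installed with pip:

pip install allocmd --upgrade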
2. Initializing the worker
# Topics 2, 4, 6 (ETH, BTC, SOL) provide inferences for 10-minute predictions
# Topics 1, 3, 5 (ETH, BTC, SOL) provide inferences for 24-hour predictions
The example worker name used throughout this guide is faceworker; the scaffolding command is sketched below.
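The exact flags vary across allocmd versions, but at the time of writing a dev worker for a single topic was scaffolded roughly like this:

allocmd generate worker --name faceworker --topic 1 --env dev

Adapt the topic ID for each topic you want to serve.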
3. Creating the inference server
Full code for 9 topics
Register an API key on CoinGecko: https://www.coingecko.com/en/api/pricing
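The full code for all 9 topics follows the same pattern; below is a minimal sketch of the inference server (app.py) for the three tokens above, assuming Flask, CoinGecko's market_chart endpoint, and a COINGECKO_API_KEY environment variable (the token map, 30-day window, and demo-key header are illustrative assumptions, not the guide's exact code). It returns a bare number string, which main.py below forwards as infererValue.

import os

import requests
import torch
from flask import Flask, jsonify
from chronos import ChronosPipeline  # pip install chronos-forecasting

app = Flask(__name__)

# Load the pretrained model once at startup; zero-shot, no fine-tuning.
pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-tiny",
    device_map="cpu",
    torch_dtype=torch.bfloat16,
)

TOKEN_MAP = {"ETH": "ethereum", "BTC": "bitcoin", "SOL": "solana"}

def fetch_prices(coin_id, days=30):
    """Fetch daily USD prices from CoinGecko's market_chart endpoint."""
    url = f"https://api.coingecko.com/api/v3/coins/{coin_id}/market_chart"
    headers = {"x-cg-demo-api-key": os.environ.get("COINGECKO_API_KEY", "")}
    resp = requests.get(
        url, params={"vs_currency": "usd", "days": days}, headers=headers
    )
    resp.raise_for_status()
    return [point[1] for point in resp.json()["prices"]]

@app.route("/inference/<token>")
def inference(token):
    coin_id = TOKEN_MAP.get(token.upper())
    if coin_id is None:
        return jsonify({"error": f"unsupported token: {token}"}), 400
    context = torch.tensor(fetch_prices(coin_id))
    # Sample 20 trajectories one step ahead; return the median as the point forecast.
    forecast = pipeline.predict(context, prediction_length=1, num_samples=20)
    return str(forecast[0, :, 0].float().median().item())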
4. Modifying requirements.txt
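The exact contents depend on your inference server; assuming the Flask + Chronos sketch from step 3, a plausible (unpinned) requirements.txt is:

flask
gunicorn
requests
torch
chronos-forecasting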
5. Modifying main.py to call the inference server (full listing below)
6. Updating the Docker files
Modify the existing Dockerfile (full listing below)
Create a new Dockerfile_inference (full listing below)
7. Update config
Update your hex_coded_pk: export it with the allorad command shown at the end of this guide.
Update boot_nodes (heads last updated 07/10/2024); a hypothetical config excerpt follows below.
Check whether new heads have been published: https://github.com/allora-network/networks/blob/main/edgenet/heads.txt
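For orientation, the relevant fields of the generated config look roughly like this (the key names come from this guide; the surrounding structure and multiaddr format are assumptions, so copy real values from your generated file and from heads.txt):

hex_coded_pk: <output of the allorad keys export command at the end of this guide>
boot_nodes: /dns4/<head-host>/tcp/<port>/p2p/<head-peer-id>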
8. Initializing the worker for production
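This step regenerates the scaffold for production, including prod-docker-compose.yaml; the flags below mirror the dev command above and are likewise version-dependent:

allocmd generate worker --env prod --name faceworker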
9. Fixing remaining issues
Edit the file prod-docker-compose.yaml.
Add the inference service to prod-docker-compose.yaml before the worker and head services, as sketched below:
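A hypothetical sketch of that service (names, port mapping, and the API-key variable are assumptions; indent it under the top-level services: key):

  inference:
    container_name: inference
    build:
      context: .
      dockerfile: Dockerfile_inference
    ports:
      - "8000:8000"
    environment:
      - COINGECKO_API_KEY=${COINGECKO_API_KEY}

The worker reaches this service at http://inference:8000, matching the URL in main.py. The full listings referenced in the steps above follow, starting with main.py (step 5):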
import requests
import sys
import json

def process(argument):
    """Query the inference service for the given token and return the raw body."""
    headers = {'Content-Type': 'application/json'}
    url = f"http://inference:8000/inference/{argument}"
    response = requests.get(url, headers=headers)
    return response.text

if __name__ == "__main__":
    # The worker passes four arguments: topic_id, blockHeight, blockHeightEval, default_arg
    try:
        if len(sys.argv) < 5:
            value = json.dumps({"error": f"Not enough arguments provided: {len(sys.argv) - 1}, expected 4 arguments: topic_id, blockHeight, blockHeightEval, default_arg"})
        else:
            topic_id = sys.argv[1]
            blockHeight = sys.argv[2]
            blockHeightEval = sys.argv[3]
            default_arg = sys.argv[4]

            response_inference = process(argument=default_arg)
            response_dict = {"infererValue": response_inference}
            value = json.dumps(response_dict)
    except Exception as e:
        # Note: str(e), not {str(e)} -- a set is not JSON-serializable
        value = json.dumps({"error": str(e)})
    print(value)
Dockerfile (step 6), for the worker image:

FROM alloranetwork/allora-inference-base:latest

# main.py calls the inference server over HTTP
RUN pip install requests

COPY main.py /app/
Dockerfile_inference (step 6), for the inference server image:

FROM amd64/python:3.9-buster

WORKDIR /app

COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --upgrade pip \
    && pip install -r requirements.txt

EXPOSE 8000

ENV NAME sample

# Run gunicorn when the container launches, binding the Flask app from app.py on port 8000
CMD ["gunicorn", "-b", ":8000", "app:app"]
Export your hex-encoded private key (the hex_coded_pk value for step 7):

allorad keys export faceworker --keyring-backend test --unarmored-hex --unsafe