This commit is contained in:
Tristan (t4t5)
2023-12-28 18:53:12 +00:00
4 changed files with 362 additions and 2 deletions

INSTALL.ubuntu.md Normal file

@@ -0,0 +1,207 @@
# Detailed Installation Guide for OPI on Ubuntu 22.04
## Installing & Running bitcoind
```bash
sudo apt install snapd
sudo snap install bitcoin-core
## if you want to use mounted media as the chain folder:
sudo snap connect bitcoin-core:removable-media
## create a folder for the bitcoin chain
mkdir /mnt/HC_Volume/bitcoin_chain
## run bitcoind using the new folder
bitcoin-core.daemon -txindex=1 -datadir="/mnt/HC_Volume/bitcoin_chain" -rest
```
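To keep bitcoind running after logout and across reboots, one option is a systemd unit. This is a minimal sketch: the unit name, the `/snap/bin` binary path and running as root are assumptions, adjust them to your setup (the flags mirror the command above):

```ini
# /etc/systemd/system/bitcoind.service (hypothetical unit name)
[Unit]
Description=Bitcoin daemon
After=network-online.target

[Service]
ExecStart=/snap/bin/bitcoin-core.daemon -txindex=1 -datadir=/mnt/HC_Volume/bitcoin_chain -rest
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl daemon-reload && sudo systemctl enable --now bitcoind`.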
## Installing PostgreSQL
1) First, install and start the PostgreSQL service.
```bash
sudo apt update
sudo apt install postgresql postgresql-contrib
sudo systemctl start postgresql.service
```
2) *(Optional)* Mark the postgres packages as held, since apt may auto-update postgres, which restarts its process and closes all active connections.
```bash
apt-mark hold postgresql postgresql-14 postgresql-client-14 postgresql-client-common postgresql-common postgresql-contrib
```
3) Set a password for the postgres user.
```bash
sudo -u postgres psql
```
```SQL
ALTER USER postgres WITH PASSWORD '********';
\q
```
4) *(Optional)* If you want to connect to the DB instance remotely (i.e. postgres is not installed on your local PC), you need to configure the pg_hba.conf file.
```bash
nano /etc/postgresql/14/main/pg_hba.conf
```
```
## add the following line to the end of the file, replacing <ip_address_of_your_pc> with your machine's real IP
hostssl all all <ip_address_of_your_pc>/32 scram-sha-256
```
To reload the new configuration:
```bash
sudo -u postgres psql
```
```SQL
SELECT pg_reload_conf();
\q
```
5) *(Optional)* Some configuration changes, e.g. listening on all interfaces and raising the connection limit:
```bash
nano /etc/postgresql/14/main/postgresql.conf
```
```
listen_addresses = '*'
max_connections = 2000
```
```bash
sudo systemctl restart postgresql
```
6) **Now that the DB is ready to use, we can start initialising the tables.**
```bash
sudo -u postgres psql
```
If you want to use different databases for modules, you can create databases using `CREATE DATABASE <database-name>;`. After that, you can change databases using `\c <database-name>`. Using the same database for all modules will also work correctly.
Now, initialise all tables by running the db_init.sql files inside the psql console. You can simply copy & paste the contents of the files. Be aware of the currently connected database and switch if necessary using `\c <database-name>`.
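For example, creating one database per module inside psql (the database names here are illustrative, pick your own):

```SQL
CREATE DATABASE main_index;
CREATE DATABASE brc20;
CREATE DATABASE bitmap;
\c brc20
-- then paste the contents of this module's db_init.sql here
```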
## Installing NodeJS
These steps follow the guide [here](https://github.com/nodesource/distributions).
```bash
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
NODE_MAJOR=20
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | sudo tee /etc/apt/sources.list.d/nodesource.list
sudo apt-get update
sudo apt-get install nodejs -y
```
## Installing Cargo & Rust
These steps follow the guide [here](https://doc.rust-lang.org/cargo/getting-started/installation.html).
```bash
curl https://sh.rustup.rs -sSf | sh
```
To update cargo & rust:
```bash
rustup update stable
```
## Installing node modules
```bash
cd modules/main_index; npm install;
cd ../brc20_api; npm install;
cd ../bitmap_api; npm install;
```
*(Optional):*
Remove the following from `modules/main_index/node_modules/bitcoinjs-lib/src/payments/p2tr.js`
```js
if (pubkey && pubkey.length) {
if (!(0, ecc_lib_1.getEccLib)().isXOnlyPoint(pubkey))
throw new TypeError('Invalid pubkey for p2tr');
}
```
Otherwise, it cannot decode some addresses such as `512057cd4cfa03f27f7b18c2fe45fe2c2e0f7b5ccb034af4dec098977c28562be7a2`
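For context, the hex above is a raw taproot scriptPubKey (OP_1 followed by a 32-byte push of the x-only output key), not a bech32m address. A minimal sketch of pulling the x-only key out of such a script (the helper name is made up for illustration):

```python
def taproot_xonly_key(script_hex):
    """Return the hex x-only output key from a v1 (taproot) scriptPubKey, else None."""
    script = bytes.fromhex(script_hex)
    # 0x51 = OP_1 (witness version 1), 0x20 = push of 32 bytes
    if len(script) == 34 and script[0] == 0x51 and script[1] == 0x20:
        return script[2:].hex()
    return None

key = taproot_xonly_key("512057cd4cfa03f27f7b18c2fe45fe2c2e0f7b5ccb034af4dec098977c28562be7a2")
```

The removed check rejects x-only keys that are not valid curve points; stripping it lets the indexer decode outputs carrying such keys.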
## Installing python libraries
If you don't have pip installed, start by installing pip ([guide](https://pip.pypa.io/en/stable/installation/)).
```bash
wget https://bootstrap.pypa.io/get-pip.py
python3 get-pip.py
rm get-pip.py
```
```bash
python3 -m pip install python-dotenv;
python3 -m pip install psycopg2-binary;
```
## Build ord:
```bash
cd ord; cargo build --release;
```
**Do not run the ord binary directly. The main indexer will run ord periodically.**
## Setup .env files
Copy `.env_sample` in main_index, brc20_index, brc20_api, bitmap_index and bitmap_api to `.env` and fill in the necessary information.
- Do not change `FIRST_INSCRIPTION_HEIGHT` if you want to report hashes: the cumulative hash calculation starts from this height and will be faulty if you change it.
- All scripts can use the same database. In sample env files, we used different `DB_DATABASE` but using postgres on all of them will also work correctly.
- `BITCOIN_CHAIN_FOLDER` is the datadir folder that is set when starting bitcoind.
- `ORD_BINARY`, `ORD_FOLDER` and `ORD_DATADIR` can stay the same if you do not change the folder structure after `git clone`.
# Run
Postgres will auto-run on system start. \
Bitcoind needs to be run with the -txindex flag before running the main indexer. \
**Do not run the ord binary directly. The main indexer will run ord periodically.**
**Main Meta-Protocol Indexer**
```bash
cd modules/main_index; node index.js;
```
**BRC-20 Indexer**
```bash
cd modules/brc20_index; python3 brc20_index.py;
```
**BRC-20 API**
This is an optional API and doesn't need to be run.
```bash
cd modules/brc20_api; node api.js;
```
**Bitmap Indexer**
```bash
cd modules/bitmap_index; python3 bitmap_index.py;
```
**Bitmap API**
This is an optional API and doesn't need to be run.
```bash
cd modules/bitmap_api; node api.js;
```
# Update
- Stop all indexers and APIs (preferably starting with the main indexer, though the order shouldn't matter)
- Update the repo (`git pull`)
- Re-run all indexers and APIs


@@ -66,6 +66,9 @@ cumulative_hash = sha256_hex(last_cumulative_hash + block_hash)
# Setup
For detailed installation guides:
- Ubuntu: [installation guide](INSTALL.ubuntu.md)
OPI uses PostgreSQL as its DB. Before running the indexer, set up a PostgreSQL DB (modules can each write into their own database or share a single one). Run init_db.sql for each module on its respective database.
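The hash-reporting scheme in the hunk header above (`cumulative_hash = sha256_hex(last_cumulative_hash + block_hash)`) can be sketched in plain Python; the empty starting value and the block hashes below are illustrative, not the indexer's actual seed:

```python
import hashlib

def sha256_hex(s):
    return hashlib.sha256(s.encode('utf-8')).hexdigest()

def cumulative_hashes(block_hashes, first_hash=""):
    """Fold each block's event hash into a running cumulative hash."""
    cumulative = first_hash
    out = []
    for block_hash in block_hashes:
        cumulative = sha256_hex(cumulative + block_hash)
        out.append(cumulative)
    return out
```

Because each step hashes the previous cumulative value, a single divergent block changes every cumulative hash after it, which is what makes the binary-search verification in the new script possible.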
**Build ord:**


@@ -0,0 +1,150 @@
# pip install python-dotenv
# pip install psycopg2-binary
import os, sys, requests
from dotenv import load_dotenv
import traceback, time, codecs, json
import psycopg2
import hashlib
## psycopg2 doesn't get the decimal size from postgres and defaults to 28 digits, which is
## not enough for brc-20, so we cast to Python's int, which has arbitrary precision
DEC2LONG = psycopg2.extensions.new_type(
    psycopg2.extensions.DECIMAL.values,
    'DEC2LONG',
    lambda value, curs: int(value) if value is not None else None)
psycopg2.extensions.register_type(DEC2LONG)
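The caster's conversion can be exercised in isolation, without a database; this standalone function mirrors the lambda above (the function name is made up for illustration):

```python
def dec2long_cast(value, curs=None):
    # same conversion the DEC2LONG caster applies to NUMERIC text coming from postgres
    return int(value) if value is not None else None

# Python ints are arbitrary precision, so large brc-20 amounts survive intact
big = dec2long_cast("1" + "0" * 24)
```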
## load env variables
load_dotenv()
db_user = os.getenv("DB_USER") or "postgres"
db_host = os.getenv("DB_HOST") or "localhost"
db_port = int(os.getenv("DB_PORT") or "5432")
db_database = os.getenv("DB_DATABASE") or "postgres"
db_password = os.getenv("DB_PASSWD")
db_metaprotocol_user = os.getenv("DB_METAPROTOCOL_USER") or "postgres"
db_metaprotocol_host = os.getenv("DB_METAPROTOCOL_HOST") or "localhost"
db_metaprotocol_port = int(os.getenv("DB_METAPROTOCOL_PORT") or "5432")
db_metaprotocol_database = os.getenv("DB_METAPROTOCOL_DATABASE") or "postgres"
db_metaprotocol_password = os.getenv("DB_METAPROTOCOL_PASSWD")
first_inscription_height = int(os.getenv("FIRST_INSCRIPTION_HEIGHT") or "767430")
report_to_indexer = (os.getenv("REPORT_TO_INDEXER") or "true") == "true"
report_url = os.getenv("REPORT_URL") or "https://api.opi.network/report_block"
report_retries = int(os.getenv("REPORT_RETRIES") or "10")
report_name = os.getenv("REPORT_NAME") or "opi_brc20_indexer"
if first_inscription_height != 767430:
    print("first_inscription_height must be 767430, please check if you are using the correct .env file")
    sys.exit(1)
## connect to db
cur = None
cur_metaprotocol = None
try:
    conn = psycopg2.connect(
        host=db_host,
        port=db_port,
        database=db_database,
        user=db_user,
        password=db_password)
    conn.autocommit = True
    cur = conn.cursor()
except Exception:
    print("Error connecting to brc20 database, please check .env file")
    traceback.print_exc()
    sys.exit(1)
try:
    conn_metaprotocol = psycopg2.connect(
        host=db_metaprotocol_host,
        port=db_metaprotocol_port,
        database=db_metaprotocol_database,
        user=db_metaprotocol_user,
        password=db_metaprotocol_password)
    conn_metaprotocol.autocommit = True
    cur_metaprotocol = conn_metaprotocol.cursor()
except Exception:
    print("Error connecting to metaprotocol database, please check .env file")
    traceback.print_exc()
    sys.exit(1)
## get block height info
brc20_min_height = 0
brc20_max_height = 0
try:
    cur.execute("SELECT min(block_height), max(block_height) FROM brc20_block_hashes;")
    row = cur.fetchone()
    if row:
        brc20_min_height = row[0]
        brc20_max_height = row[1]
except Exception:
    print("Error getting brc20 block height info")
    traceback.print_exc()
    sys.exit(1)
## get block height info of main db
main_min_height = 0
main_max_height = 0
try:
    cur_metaprotocol.execute("SELECT min(block_height), max(block_height) FROM block_hashes;")
    row = cur_metaprotocol.fetchone()
    if row:
        main_min_height = row[0]
        main_max_height = row[1]
except Exception:
    print("Error getting main block height info")
    traceback.print_exc()
    sys.exit(1)
if main_min_height > 767430:
    print("main_min_height is greater than 767430, please check if you are using the correct .env file and rerun the main & brc20 indexer from start (run reset.py first)")
    sys.exit(1)
if brc20_min_height != 767430:
    print("brc20_min_height is not equal to 767430, please check if you are using the correct .env file and rerun the brc20 indexer from start (run reset.py first)")
    sys.exit(1)
def check_block_hashes(height):
    print("Checking block " + str(height))
    cur.execute('''select bceh.block_event_hash, bceh.cumulative_event_hash, bbh.block_hash
                   from brc20_cumulative_event_hashes bceh
                   left join brc20_block_hashes bbh on bbh.block_height = bceh.block_height
                   where bceh.block_height = %s;''', (height,))
    if cur.rowcount == 0:
        print("Block not found on DB!!")
        return False
    row = cur.fetchone()
    block_event_hash, cumulative_event_hash, block_hash = row
    url = 'https://opi.network/api/get_best_hashes_for_block/' + str(height)
    r = requests.get(url)
    js = json.loads(r.text)
    opi_best_block_hash = js['data']['best_block_hash']
    opi_best_cumulative_hash = js['data']['best_cumulative_hash']
    if opi_best_block_hash == block_hash and opi_best_cumulative_hash == cumulative_event_hash:
        print("same")
        return True
    if opi_best_block_hash is not None or opi_best_cumulative_hash is not None:
        print("different")
        return False
    print("not found on OPI API")
    return True
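The comparison at the end of `check_block_hashes` is effectively a tri-state decision; isolated as a pure function (the function name and return values are illustrative):

```python
def compare_hashes(local_block_hash, local_cumulative, remote_block_hash, remote_cumulative):
    """Return 'same', 'different', or 'unknown' following the logic above."""
    if remote_block_hash == local_block_hash and remote_cumulative == local_cumulative:
        return 'same'
    if remote_block_hash is not None or remote_cumulative is not None:
        return 'different'
    # OPI API has no data for this block; the caller treats this as OK
    return 'unknown'
```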
current_min_height = 767430
current_max_height = brc20_max_height
if check_block_hashes(current_max_height):
    print("brc20 block hashes are correct")
    sys.exit(0)
if not check_block_hashes(current_min_height):
    print("brc20 block hashes are incorrect from the start, please check if you are using the correct .env file and rerun the brc20 indexer from start (run reset.py first)")
    sys.exit(1)
while True:
    if current_max_height - current_min_height <= 1:
        print("brc20 block hashes are incorrect starting at block (" + str(current_max_height) + "), please check if you are using the correct .env file and rerun the brc20 indexer from start (run reset.py first)")
        sys.exit(1)
    mid_block = (current_min_height + current_max_height) // 2
    if check_block_hashes(mid_block):
        current_min_height = mid_block
    else:
        current_max_height = mid_block
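The loop above is a binary search for the first block whose hashes diverge, under the invariant that the lower bound checks out and the upper bound does not. A self-contained sketch with a mock checker (the function name and heights are illustrative):

```python
def first_bad_block(check, good, bad):
    """Binary search: check(good) is True and check(bad) is False; returns the first failing height."""
    while bad - good > 1:
        mid = (good + bad) // 2
        if check(mid):
            good = mid   # still matching, divergence is above mid
        else:
            bad = mid    # already diverged, divergence is at mid or below
    return bad

# mock: pretend blocks from 767430 match up to (but not including) height 800000
first_divergent = first_bad_block(lambda h: h < 800000, 767430, 900000)
```

This takes O(log n) API calls instead of checking every block, which matters because each probe is a network round trip.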


@@ -125,8 +125,8 @@ async function main_index() {
if (block_height > current_height) continue
console.warn("Block repeating, possible reorg!!")
let blockhash = parts[3].trim()
-    let blockhash_db_q = await db_pool.query("select blockhash from block_hashes where block_height = $1;", [block_height])
-    if (blockhash_db_q.rows[0].blockhash != blockhash) {
+    let blockhash_db_q = await db_pool.query("select block_hash from block_hashes where block_height = $1;", [block_height])
+    if (blockhash_db_q.rows[0].block_hash != blockhash) {
let reorg_st_tm = +(new Date())
console.error("Reorg detected at block_height " + block_height)
await handle_reorg(block_height)