merge master into nakamoto (#1746)

* docs: Update how-to-install-stacks-cli.md (#1727)

Cleaned up the voice and made this page more conversational.

Of note, similar to my last PR, I'm removing some language around "Stacks 2.0" here alongside this cleanup.

* docs: Update rosetta-support.md (#1728)

there is a missing period

* docs: Update overview.md (#1729)

Grammar fixes

* docs: Update how-to-run-api-node.md (#1730)

A number of grammar fixes and tweaks

* docs: Update how-to-run-mainnet-node.md (#1731)

Grammar fixes!

* docs: Update how-to-run-testnet-node.md (#1732)

Grammar fixes

* fix: move nft custody view into a table (#1741)

* fix: move to table

* fix: old nft events query

* fix: nft-custody-table migration

* fix: no longer rename old views, just remove them

---------

Co-authored-by: Matt <zone117x@gmail.com>

---------

Co-authored-by: max-crawford <102705427+max-crawford@users.noreply.github.com>
Co-authored-by: Matt <zone117x@gmail.com>
This commit is contained in:
Rafael Cárdenas
2023-11-07 13:38:05 -06:00
committed by GitHub
parent 74606c7e47
commit 872bcbdea1
11 changed files with 392 additions and 94 deletions


@@ -56,4 +56,4 @@ rosetta-cli \
`rosetta-cli` will then sync with the blockchain until it reaches the tip, and then exit, displaying the test results.
Currently, account reconciliation is disabled; proper testing of that feature requires token transfer transactions while `rosetta-cli` is running.
Documentation for the Rosetta APIs can be found [here](https://hirosystems.github.io/stacks-blockchain-api/).
You may also review Data and Construction Rosetta endpoints [here](https://docs.hiro.so/api#tag/Rosetta)
You may also review Data and Construction Rosetta endpoints [here](https://docs.hiro.so/api#tag/Rosetta).


@@ -2,7 +2,7 @@
title: How to install Stacks CLI
---
The Stacks CLI enables interactions with the Stacks 2.0 blockchain through a set of commands.
The Stacks CLI enables interactions with the Stacks blockchain through a set of commands.
## Installation
@@ -18,7 +18,7 @@ The `-g` flag makes the CLI commands available globally
## Network selection
By default, the CLI will attempt to interact with the mainnet of the Stacks 2.0 blockchain. However, it is possible to override the network and set it to the testnet:
By default, the CLI will attempt to interact with Stacks mainnet. However, it is possible to override the network and set it to testnet:
```sh
stx <command> -t
@@ -26,7 +26,7 @@ stx <command> -t
:::info
For account usage, that means addresses generated will _only_ be available for the specific network. An account generated for the testnet cannot be used on the mainnet.
For account usage, that means addresses generated will _only_ be available for the specific network. An account generated for testnet cannot be used on mainnet.
:::
@@ -42,7 +42,7 @@ This section describes how to use the CLI to manage an account.
:::caution
It is not recommended to use the CLI to handle accounts with real STX tokens on the mainnet. Using an appropriate wallet build to support secure token holding is recommended.
We don't recommend you use the CLI to handle accounts with real STX tokens on mainnet. Instead, use an appropriate wallet that supports secure token holding.
:::
@@ -68,14 +68,14 @@ Your response should look like this:
}
```
The mnemonic is your 24 word seed phrase which you should back up securely if you want access to this account again in the future. Once lost, it cannot be recovered.
The mnemonic is your 24-word seed phrase, which you should back up securely if you want access to this account again in the future. Once lost, it cannot be recovered.
The Stacks address associated with the newly generated account is:
`ST1BG7MHW2R524WMF7X8PGG3V45ZN040EB9EW0GQJ`
:::note
The preceding address is a testnet address that can only be used on the testnet.
The preceding address is a testnet address that can only be used on testnet.
:::
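If you want to double-check the generated account on testnet, the CLI's `balance` command is one option. A minimal sketch, assuming `balance` is available in your CLI version:

```sh
# Query the testnet balance of the address generated above
stx balance ST1BG7MHW2R524WMF7X8PGG3V45ZN040EB9EW0GQJ -t
```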
@@ -115,10 +115,10 @@ This section describes how to use the CLI to generate and broadcast transactions
In order to send tokens, the CLI command requires 5 parameters:
- **Recipient Address**: The Stacks address of the recipient
- **Amount**: The number of Stacks to send denoted in microstacks (1 STX = 1000000 microstacks)
- **Fee Rate**: The transaction fee rate for this transaction. You can safely set a fee rate of 200 for Testnet
- **Amount**: The number of Stacks to send, denoted in microstacks (1 STX = 1,000,000 microstacks)
- **Fee Rate**: The fee rate for this transaction. You can safely set a fee rate of 200 for testnet
- **Nonce**: The nonce is a number that must be incremented monotonically for each transaction from the account. This ensures transactions are not duplicated (see the nonce lookup sketch after this list)
- **Private Key**: This is the private key corresponding to your account that was generated when
- **Private Key**: This is the private key corresponding to your account
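Before broadcasting, you can read the account's current nonce from a Stacks API node, as mentioned in the list above. A minimal sketch, assuming the Hiro-hosted testnet API at `api.testnet.hiro.so`:

```sh
# Fetch the current nonce for an account (proof=0 omits the merkle proof)
curl -s "https://api.testnet.hiro.so/v2/accounts/ST1BG7MHW2R524WMF7X8PGG3V45ZN040EB9EW0GQJ?proof=0" | jq .nonce
```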
The CLI command to use with these parameters is `send_tokens`:
@@ -135,7 +135,7 @@ stx send_tokens ST2KMMVJAB00W5Z6XWTFPH6B13JE9RJ2DCSHYX0S7 1000 200 0 381314da39a
With this command we're sending 1000 microstacks to the Stacks address `ST2KMMVJAB00W5Z6XWTFPH6B13JE9RJ2DCSHYX0S7`.
We set the fee rate to `200` microstacks. If you're not sure how much your transaction will cost.
We set the fee rate to `200` microstacks.
:::tip
@@ -147,9 +147,9 @@ The nonce is set to `0` for this transaction, since it will be the first transac
Finally, the last parameter is the private key for the account: `381314da39a45f43f45ffd33b5d8767d1a38db0da71fea50ed9508e048765cf301`
Once again, were using the `-t` option to indicate that this is a Testnet transaction, so it should be broadcasted to Testnet.
Once again, we're using the `-t` option to indicate that this is a testnet transaction, so it should be broadcast to testnet.
If valid, the transaction will be broadcasted to the network and the command will respond with a transaction ID.
If valid, the transaction will be broadcast to the network, and the command will respond with a transaction ID.
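You can then poll that transaction ID to watch the transaction confirm. A minimal sketch, again assuming the Hiro-hosted testnet API; the transaction ID below is a placeholder, not real output:

```sh
# Check the status of a broadcast transaction (placeholder transaction ID)
curl -s "https://api.testnet.hiro.so/extended/v1/tx/0x1234...your-tx-id" | jq .tx_status
```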
:::tip


@@ -1,23 +1,23 @@
---
title: How to run API node
title: How to Run an API Node
---
This procedure demonstrates how to run a local API node using Docker images. There are several components that must be
This guide shows you how to run a local API node using Docker images. There are several components that must be
configured and run in a specific order for the local API node to work.
For this procedure, the order in which the services are brought up is very important. In order to start the API node
Note: the order in which the services are brought up is very important. In order to start the API node
successfully, you need to bring up the services in the following order:
1. `postgres`
2. `stacks-blockchain-api`
3. `stacks-blockchain`
When bringing down the API node, you should bring the services down in the exact reverse order in which they were
brought up, to avoid losing data.
When bringing down the API node, you should bring the services down in the reverse order in which they were
brought up, to avoid losing data.
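As a quick illustration of that ordering, assuming the three services run as Docker containers named `postgres`, `stacks-blockchain-api`, and `stacks-blockchain` (the names are assumptions; substitute whatever names you use in the steps below):

```sh
# Bring services up in dependency order (container names are assumptions)
docker start postgres
docker start stacks-blockchain-api
docker start stacks-blockchain

# Bring them down in the reverse order to avoid losing data
docker stop stacks-blockchain
docker stop stacks-blockchain-api
docker stop postgres
```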
:::note
This procedure focuses on Unix-like operating systems (Linux and MacOS). This procedure has not been tested on
This guide focuses on Unix-like operating systems (Linux and macOS). This has not been tested on
Windows.
:::
@@ -25,7 +25,7 @@ Windows.
## Prerequisites
Running a node has no specialized hardware requirements. Users have been successful in running nodes on Raspberry Pi
boards and other system-on-chip architectures. In order to complete this procedure, you must have the following software
boards and other system-on-chip architectures. However, in order to complete this guide, you do need the following software
installed on the node host machine:
- [Docker](https://docs.docker.com/get-docker/)
@@ -234,13 +234,13 @@ To verify the database is ready:
2. List current databases with the command `\l`
3. Disconnect from the database with the command `\q`
To verify the `stacks-blockchain` tip height is progressing use the following command:
To verify that the `stacks-blockchain` tip height is progressing, use the following command:
```sh
curl -sL localhost:20443/v2/info | jq
```
If the instance is running you should receive terminal output similar to the following:
If the instance is running, you should receive terminal output similar to the following:
```json
{


@@ -1,12 +1,12 @@
---
title: How to run mainnet node
title: How to Run a Mainnet Node
---
This procedure demonstrates how to run a local mainnet node using Docker images.
This guide shows you how to run a local mainnet node using Docker images.
:::note
This procedure focuses on Unix-like operating systems (Linux and MacOS). This procedure has not been tested on
This guide focuses on Unix-like operating systems (Linux and macOS). This has not been tested on
Windows.
:::
@@ -14,7 +14,7 @@ Windows.
## Prerequisites
Running a node has no specialized hardware requirements. Users have been successful in running nodes on Raspberry Pi
boards and other system-on-chip architectures. In order to complete this procedure, you must have the following software
boards and other system-on-chip architectures. However, in order to complete this guide, you do need the following software
installed on the node host machine:
- [Docker](https://docs.docker.com/get-docker/)
@@ -129,13 +129,13 @@ INFO [1626290748.103291] [src/burnchains/bitcoin/spv.rs:926] [main] Syncing Bitc
INFO [1626290776.956535] [src/burnchains/bitcoin/spv.rs:926] [main] Syncing Bitcoin headers: 1.7% (12000 out of 691034)
```
To verify the `stacks-blockchain` tip height is progressing use the following command:
To verify that the `stacks-blockchain` tip height is progressing, use the following command:
```sh
curl -sL localhost:20443/v2/info | jq
```
If the instance is running you should receive terminal output similar to the following:
If the instance is running, you should receive terminal output similar to the following:
```json
{


@@ -1,12 +1,12 @@
---
title: How to run testnet node
title: How to Run a Testnet Node
---
This procedure demonstrates how to run a local testnet node using Docker images.
This guide shows you how to run a local testnet node using Docker images.
:::note
This procedure focuses on Unix-like operating systems (Linux and MacOS). This procedure has not been tested on
This guide focuses on Unix-like operating systems (Linux and macOS). This has not been tested on
Windows.
:::
@@ -14,7 +14,7 @@ Windows.
## Prerequisites
Running a node has no specialized hardware requirements. Users have been successful in running nodes on Raspberry Pi
boards and other system-on-chip architectures. In order to complete this procedure, you must have the following software
boards and other system-on-chip architectures. However, in order to complete this guide, you do need the following software
installed on the node host machine:
- [Docker](https://docs.docker.com/get-docker/)
@@ -45,7 +45,7 @@ These egress ports are for syncing `stacks-blockchain` and Bitcoin headers. If t
## Step 1: Initial setup
In order to run the testnet node, you must download the Docker images and create a directory structure to hold the
In order to run a testnet node, you must download the Docker images and create a directory structure to hold the
persistent data from the services. Download and configure the Docker images with the following commands:
```sh
@@ -100,13 +100,13 @@ INFO [1626290748.103291] [src/burnchains/bitcoin/spv.rs:926] [main] Syncing Bitc
INFO [1626290776.956535] [src/burnchains/bitcoin/spv.rs:926] [main] Syncing Bitcoin headers: 1.7% (12000 out of 2034380)
```
To verify the `stacks-blockchain` tip height is progressing use the following command:
To verify that the `stacks-blockchain` tip height is progressing, use the following command:
```sh
curl -sL localhost:20443/v2/info | jq
```
If the instance is running you should receive terminal output similar to the following:
If the instance is running, you should receive terminal output similar to the following:
```json
{


@@ -6,7 +6,7 @@ Title: Overview
The Stacks blockchain API allows you to query the Stacks blockchain and interact with smart contracts. It was built to maintain paginated, materialized views of the Stacks Blockchain.
The Stacks Blockchain API is hosted by Hiro. Using it requires you to trust the hosted server, but this API also provides a faster development experience. You may wish to consider running your own API instance to create a fully trustless architecture for your app.
The Stacks Blockchain API is hosted by Hiro. Using it requires you to trust us as the hosted server, but in return we provide a faster development experience. If you want a fully trustless architecture for your app, you may wish to consider running your own API instance.
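As a quick example of those paginated views, you can list recent transactions with a single request. A minimal sketch against the Hiro-hosted instance (hostname assumed to be `api.mainnet.hiro.so`):

```sh
# Fetch the five most recent transactions from the hosted API
curl -s "https://api.mainnet.hiro.so/extended/v1/tx?limit=5&offset=0" | jq '.results[].tx_id'
```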
> **_NOTE:_**
>
@@ -18,14 +18,14 @@ The Stacks Blockchain API is hosted by Hiro. Using it requires you to trust the
![API architecture!](images/api-architecture.png)
* The `stacks-node` has it's own minimal set of http endpoints referred to as `RPC endpoints`
* The `stacks-node` has its own minimal set of HTTP endpoints referred to as `RPC endpoints`
* The `stacks-blockchain-api` allows clients to access these endpoints by proxying them through to a load-balanced pool of `stacks-nodes`.
* See: https://github.com/blockstack/stacks-blockchain/blob/master/docs/rpc-endpoints.md -- some common ones:
* `POST /v2/transactions` - broadcast a tx.
* `POST /v2/transactions` - broadcast a transaction.
* `GET /v2/pox` - get current PoX-relevant information.
* `POST /v2/contracts/call-read/<contract>/<function>` - evaluates and returns the result of calling a Clarity function.
* `POST /v2/fees/transaction` - evaluates a given transaction and returns tx fee estimation data.
* `GET /v2/accounts/<address>` - used to get the current `nonce` required for creating transactions.
* `POST /v2/contracts/call-read/<contract>/<function>` - evaluate and return the result of calling a Clarity function.
* `POST /v2/fees/transaction` - evaluate a given transaction and return transaction fee estimation data.
* `GET /v2/accounts/<address>` - get the current `nonce` required for creating transactions.
* The endpoints implemented by `stacks-blockchain-api` provide data that the `stacks-node` can't due to various constraints.
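For example, the PoX endpoint listed above can be queried through the API proxy exactly as it would be on a `stacks-node`. A minimal sketch (hosted hostname assumed; a self-hosted `stacks-blockchain-api` works the same way):

```sh
# Read current PoX information through the API proxy
curl -s "https://api.mainnet.hiro.so/v2/pox" | jq '{contract_id, current_cycle}'
```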
@@ -52,7 +52,7 @@ The Stacks Blockchain API is hosted by Hiro. Using it requires you to trust the
* ALSO the OpenAPI + JSONSchemas are used to generate a standalone `@stacks/blockchain-api-client`.
* The easiest/quickest way to develop in this repo is using the vscode debugger. It uses docker-compose to setup a `stacks-node` and postgres instance.
* The easiest/quickest way to develop in this repo is using the VS Code debugger. It uses docker-compose to set up a `stacks-node` and Postgres instance.
* Alternatively, you can run `npm run dev:integrated` which does the same thing but without a debugger.


@@ -0,0 +1,215 @@
/* eslint-disable camelcase */
exports.shorthands = undefined;
exports.up = pgm => {
pgm.dropMaterializedView('nft_custody');
pgm.createTable('nft_custody', {
asset_identifier: {
type: 'string',
notNull: true,
},
value: {
type: 'bytea',
notNull: true,
},
recipient: {
type: 'text',
},
block_height: {
type: 'integer',
notNull: true,
},
index_block_hash: {
type: 'bytea',
notNull: true,
},
parent_index_block_hash: {
type: 'bytea',
notNull: true,
},
microblock_hash: {
type: 'bytea',
notNull: true,
},
microblock_sequence: {
type: 'integer',
notNull: true,
},
tx_id: {
type: 'bytea',
notNull: true,
},
tx_index: {
type: 'smallint',
notNull: true,
},
event_index: {
type: 'integer',
notNull: true,
},
});
pgm.createConstraint('nft_custody', 'nft_custody_unique', 'UNIQUE(asset_identifier, value)');
pgm.createIndex('nft_custody', ['recipient', 'asset_identifier']);
pgm.createIndex('nft_custody', 'value');
pgm.createIndex('nft_custody', [
{ name: 'block_height', sort: 'DESC' },
{ name: 'microblock_sequence', sort: 'DESC' },
{ name: 'tx_index', sort: 'DESC' },
{ name: 'event_index', sort: 'DESC' }
]);
pgm.sql(`
INSERT INTO nft_custody (asset_identifier, value, recipient, tx_id, block_height, index_block_hash, parent_index_block_hash, microblock_hash, microblock_sequence, tx_index, event_index) (
SELECT
DISTINCT ON(asset_identifier, value) asset_identifier, value, recipient, tx_id, nft.block_height,
nft.index_block_hash, nft.parent_index_block_hash, nft.microblock_hash, nft.microblock_sequence, nft.tx_index, nft.event_index
FROM
nft_events AS nft
INNER JOIN
txs USING (tx_id)
WHERE
txs.canonical = true
AND txs.microblock_canonical = true
AND nft.canonical = true
AND nft.microblock_canonical = true
ORDER BY
asset_identifier,
value,
txs.block_height DESC,
txs.microblock_sequence DESC,
txs.tx_index DESC,
nft.event_index DESC
)
`);
pgm.dropMaterializedView('nft_custody_unanchored');
pgm.createTable('nft_custody_unanchored', {
asset_identifier: {
type: 'string',
notNull: true,
},
value: {
type: 'bytea',
notNull: true,
},
recipient: {
type: 'text',
},
block_height: {
type: 'integer',
notNull: true,
},
index_block_hash: {
type: 'bytea',
notNull: true,
},
parent_index_block_hash: {
type: 'bytea',
notNull: true,
},
microblock_hash: {
type: 'bytea',
notNull: true,
},
microblock_sequence: {
type: 'integer',
notNull: true,
},
tx_id: {
type: 'bytea',
notNull: true,
},
tx_index: {
type: 'smallint',
notNull: true,
},
event_index: {
type: 'integer',
notNull: true,
},
});
pgm.createConstraint('nft_custody_unanchored', 'nft_custody_unanchored_unique', 'UNIQUE(asset_identifier, value)');
pgm.createIndex('nft_custody_unanchored', ['recipient', 'asset_identifier']);
pgm.createIndex('nft_custody_unanchored', 'value');
pgm.createIndex('nft_custody_unanchored', [
{ name: 'block_height', sort: 'DESC' },
{ name: 'microblock_sequence', sort: 'DESC' },
{ name: 'tx_index', sort: 'DESC' },
{ name: 'event_index', sort: 'DESC' }
]);
pgm.sql(`
INSERT INTO nft_custody_unanchored (asset_identifier, value, recipient, tx_id, block_height, index_block_hash, parent_index_block_hash, microblock_hash, microblock_sequence, tx_index, event_index) (
SELECT
DISTINCT ON(asset_identifier, value) asset_identifier, value, recipient, tx_id, nft.block_height,
nft.index_block_hash, nft.parent_index_block_hash, nft.microblock_hash, nft.microblock_sequence, nft.tx_index, nft.event_index
FROM
nft_events AS nft
INNER JOIN
txs USING (tx_id)
WHERE
txs.canonical = true
AND txs.microblock_canonical = true
AND nft.canonical = true
AND nft.microblock_canonical = true
ORDER BY
asset_identifier,
value,
txs.block_height DESC,
txs.microblock_sequence DESC,
txs.tx_index DESC,
nft.event_index DESC
)
`);
};
exports.down = pgm => {
pgm.dropTable('nft_custody');
pgm.createMaterializedView('nft_custody', { data: true }, `
SELECT
DISTINCT ON(asset_identifier, value) asset_identifier, value, recipient, tx_id, nft.block_height
FROM
nft_events AS nft
INNER JOIN
txs USING (tx_id)
WHERE
txs.canonical = true
AND txs.microblock_canonical = true
AND nft.canonical = true
AND nft.microblock_canonical = true
ORDER BY
asset_identifier,
value,
txs.block_height DESC,
txs.microblock_sequence DESC,
txs.tx_index DESC,
nft.event_index DESC
`);
pgm.createIndex('nft_custody', ['recipient', 'asset_identifier']);
pgm.createIndex('nft_custody', ['asset_identifier', 'value'], { unique: true });
pgm.createIndex('nft_custody', 'value');
pgm.dropTable('nft_custody_unanchored');
pgm.createMaterializedView('nft_custody_unanchored', { data: true }, `
SELECT
DISTINCT ON(asset_identifier, value) asset_identifier, value, recipient, tx_id, nft.block_height
FROM
nft_events AS nft
INNER JOIN
txs USING (tx_id)
WHERE
txs.canonical = true
AND txs.microblock_canonical = true
AND nft.canonical = true
AND nft.microblock_canonical = true
ORDER BY
asset_identifier,
value,
txs.block_height DESC,
txs.microblock_sequence DESC,
txs.tx_index DESC,
nft.event_index DESC
`);
pgm.createIndex('nft_custody_unanchored', ['recipient', 'asset_identifier']);
pgm.createIndex('nft_custody_unanchored', ['asset_identifier', 'value'], { unique: true });
pgm.createIndex('nft_custody_unanchored', 'value');
};


@@ -1363,6 +1363,20 @@ export interface NftEventInsertValues {
value: PgBytea;
}
export interface NftCustodyInsertValues {
event_index: number;
tx_id: PgBytea;
tx_index: number;
block_height: number;
index_block_hash: PgBytea;
parent_index_block_hash: PgBytea;
microblock_hash: PgBytea;
microblock_sequence: number;
recipient: string | null;
asset_identifier: string;
value: PgBytea;
}
export interface FtEventInsertValues {
event_index: number;
tx_id: PgBytea;


@@ -3264,6 +3264,7 @@ export class PgStore extends BasePgStore {
FROM ${nftCustody} AS nft
WHERE nft.recipient = ${args.principal}
${assetIdFilter}
ORDER BY block_height DESC, microblock_sequence DESC, tx_index DESC, event_index DESC
LIMIT ${args.limit}
OFFSET ${args.offset}
)
@@ -3465,11 +3466,11 @@ export class PgStore extends BasePgStore {
AND block_height <= ${args.blockHeight}
ORDER BY asset_identifier, value, block_height DESC, microblock_sequence DESC, tx_index DESC, event_index DESC
)
SELECT sender, recipient, asset_identifier, value, event_index, asset_event_type_id, address_transfers.block_height, address_transfers.tx_id, (COUNT(*) OVER())::INTEGER AS count
FROM address_transfers
SELECT sender, recipient, asset_identifier, value, at.event_index, asset_event_type_id, at.block_height, at.tx_id, (COUNT(*) OVER())::INTEGER AS count
FROM address_transfers AS at
INNER JOIN ${args.includeUnanchored ? this.sql`last_nft_transfers` : this.sql`nft_custody`}
USING (asset_identifier, value, recipient)
ORDER BY block_height DESC, microblock_sequence DESC, tx_index DESC, event_index DESC
ORDER BY at.block_height DESC, at.microblock_sequence DESC, at.tx_index DESC, event_index DESC
LIMIT ${args.limit} OFFSET ${args.offset}
`;


@@ -64,6 +64,7 @@ import {
DbPox3Event,
RawEventRequestInsertValues,
IndexesState,
NftCustodyInsertValues,
} from './common';
import { ClarityAbi } from '@stacks/transactions';
import {
@@ -399,7 +400,7 @@ export class PgWriteStore extends PgStore {
await this.updateFtEvent(sql, entry.tx, ftEvent);
}
for (const nftEvent of entry.nftEvents) {
await this.updateNftEvent(sql, entry.tx, nftEvent);
await this.updateNftEvent(sql, entry.tx, nftEvent, false);
}
deployedSmartContracts.push(...entry.smartContracts);
for (const smartContract of entry.smartContracts) {
@@ -458,7 +459,6 @@ export class PgWriteStore extends PgStore {
const ibdHeight = getIbdBlockHeight();
this.isIbdBlockHeightReached = ibdHeight ? data.block.block_height > ibdHeight : true;
await this.refreshNftCustody(batchedTxData);
await this.refreshMaterializedView('chain_tip');
await this.refreshMaterializedView('mempool_digest');
@@ -764,7 +764,6 @@ export class PgWriteStore extends PgStore {
}
});
await this.refreshNftCustody(txData, true);
await this.refreshMaterializedView('chain_tip');
await this.refreshMaterializedView('mempool_digest');
@@ -1331,27 +1330,65 @@ export class PgWriteStore extends PgStore {
`;
}
async updateNftEvent(sql: PgSqlClient, tx: DbTx, event: DbNftEvent) {
const values: NftEventInsertValues = {
async updateNftEvent(sql: PgSqlClient, tx: DbTx, event: DbNftEvent, microblock: boolean) {
const custody: NftCustodyInsertValues = {
asset_identifier: event.asset_identifier,
value: event.value,
tx_id: event.tx_id,
index_block_hash: tx.index_block_hash,
parent_index_block_hash: tx.parent_index_block_hash,
microblock_hash: tx.microblock_hash,
microblock_sequence: tx.microblock_sequence,
microblock_canonical: tx.microblock_canonical,
sender: event.sender ?? null,
recipient: event.recipient ?? null,
event_index: event.event_index,
tx_index: event.tx_index,
block_height: event.block_height,
};
const values: NftEventInsertValues = {
...custody,
microblock_canonical: tx.microblock_canonical,
canonical: event.canonical,
sender: event.sender ?? null,
asset_event_type_id: event.asset_event_type_id,
asset_identifier: event.asset_identifier,
value: event.value,
};
await sql`
INSERT INTO nft_events ${sql(values)}
`;
if (tx.canonical && tx.microblock_canonical && event.canonical) {
const table = microblock ? sql`nft_custody_unanchored` : sql`nft_custody`;
await sql`
INSERT INTO ${table} ${sql(custody)}
ON CONFLICT ON CONSTRAINT ${table}_unique DO UPDATE SET
tx_id = EXCLUDED.tx_id,
index_block_hash = EXCLUDED.index_block_hash,
parent_index_block_hash = EXCLUDED.parent_index_block_hash,
microblock_hash = EXCLUDED.microblock_hash,
microblock_sequence = EXCLUDED.microblock_sequence,
recipient = EXCLUDED.recipient,
event_index = EXCLUDED.event_index,
tx_index = EXCLUDED.tx_index,
block_height = EXCLUDED.block_height
WHERE
(
EXCLUDED.block_height > ${table}.block_height
)
OR (
EXCLUDED.block_height = ${table}.block_height
AND EXCLUDED.microblock_sequence > ${table}.microblock_sequence
)
OR (
EXCLUDED.block_height = ${table}.block_height
AND EXCLUDED.microblock_sequence = ${table}.microblock_sequence
AND EXCLUDED.tx_index > ${table}.tx_index
)
OR (
EXCLUDED.block_height = ${table}.block_height
AND EXCLUDED.microblock_sequence = ${table}.microblock_sequence
AND EXCLUDED.tx_index = ${table}.tx_index
AND EXCLUDED.event_index > ${table}.event_index
)
`;
}
}
async updateBatchSmartContractEvent(sql: PgSqlClient, tx: DbTx, events: DbSmartContractEvent[]) {
@@ -2263,7 +2300,7 @@ export class PgWriteStore extends PgStore {
await this.updateFtEvent(sql, entry.tx, ftEvent);
}
for (const nftEvent of entry.nftEvents) {
await this.updateNftEvent(sql, entry.tx, nftEvent);
await this.updateNftEvent(sql, entry.tx, nftEvent, true);
}
for (const smartContract of entry.smartContracts) {
await this.updateSmartContract(sql, entry.tx, smartContract);
@@ -2345,11 +2382,74 @@ export class PgWriteStore extends PgStore {
AND (index_block_hash = ${args.indexBlockHash} OR index_block_hash = '\\x'::bytea)
AND tx_id IN ${sql(txIds)}
`;
await this.updateNftCustodyFromReOrg(sql, {
index_block_hash: args.indexBlockHash,
microblocks: args.microblocks,
});
}
return { updatedTxs: updatedMbTxs };
}
/**
* Refreshes NFT custody data for events within a block or series of microblocks.
* @param sql - SQL client
* @param args - Block and microblock hashes
*/
async updateNftCustodyFromReOrg(
sql: PgSqlClient,
args: {
index_block_hash: string;
microblocks: string[];
}
): Promise<void> {
for (const table of [sql`nft_custody`, sql`nft_custody_unanchored`]) {
await sql`
INSERT INTO ${table}
(asset_identifier, value, tx_id, index_block_hash, parent_index_block_hash, microblock_hash,
microblock_sequence, recipient, event_index, tx_index, block_height)
(
SELECT
DISTINCT ON(asset_identifier, value) asset_identifier, value, tx_id, txs.index_block_hash,
txs.parent_index_block_hash, txs.microblock_hash, txs.microblock_sequence, recipient,
nft.event_index, txs.tx_index, txs.block_height
FROM
nft_events AS nft
INNER JOIN
txs USING (tx_id)
WHERE
txs.canonical = true
AND txs.microblock_canonical = true
AND nft.canonical = true
AND nft.microblock_canonical = true
AND nft.index_block_hash = ${args.index_block_hash}
${
args.microblocks.length > 0
? sql`AND nft.microblock_hash IN ${sql(args.microblocks)}`
: sql``
}
ORDER BY
asset_identifier,
value,
txs.block_height DESC,
txs.microblock_sequence DESC,
txs.tx_index DESC,
nft.event_index DESC
)
ON CONFLICT ON CONSTRAINT ${table}_unique DO UPDATE SET
tx_id = EXCLUDED.tx_id,
index_block_hash = EXCLUDED.index_block_hash,
parent_index_block_hash = EXCLUDED.parent_index_block_hash,
microblock_hash = EXCLUDED.microblock_hash,
microblock_sequence = EXCLUDED.microblock_sequence,
recipient = EXCLUDED.recipient,
event_index = EXCLUDED.event_index,
tx_index = EXCLUDED.tx_index,
block_height = EXCLUDED.block_height
`;
}
}
/**
* Fetches from the `microblocks` table with a given `parent_index_block_hash` and a known
* latest unanchored microblock tip. Microblocks that are chained to the given tip are
@@ -2611,6 +2711,10 @@ export class PgWriteStore extends PgStore {
} else {
updatedEntities.markedNonCanonical.nftEvents += nftResult.count;
}
await this.updateNftCustodyFromReOrg(sql, {
index_block_hash: indexBlockHash,
microblocks: [],
});
// todo: do we still need pox2 marking here?
const pox2Result = await sql`
@@ -2980,49 +3084,13 @@ export class PgWriteStore extends PgStore {
}
/**
* Refreshes the `nft_custody` and `nft_custody_unanchored` materialized views if necessary.
* @param sql - DB client
* @param txs - Transaction event data
* @param unanchored - If this refresh is requested from a block or microblock
*/
async refreshNftCustody(txs: DataStoreTxEventData[], unanchored: boolean = false) {
await this.sqlWriteTransaction(async sql => {
const newNftEventCount = txs
.map(tx => tx.nftEvents.length)
.reduce((prev, cur) => prev + cur, 0);
if (newNftEventCount > 0) {
// Always refresh unanchored view since even if we're in a new anchored block we should update the
// unanchored state to the current one.
await this.refreshMaterializedView('nft_custody_unanchored', sql);
if (!unanchored) {
await this.refreshMaterializedView('nft_custody', sql);
}
} else if (!unanchored) {
// Even if we didn't receive new NFT events in a new anchor block, we should check if we need to
// update the anchored view to reflect any changes made by previous microblocks.
const result = await sql<{ outdated: boolean }[]>`
WITH anchored_height AS (SELECT MAX(block_height) AS anchored FROM nft_custody),
unanchored_height AS (SELECT MAX(block_height) AS unanchored FROM nft_custody_unanchored)
SELECT unanchored > anchored AS outdated
FROM anchored_height CROSS JOIN unanchored_height
`;
if (result.length > 0 && result[0].outdated) {
await this.refreshMaterializedView('nft_custody', sql);
}
}
});
}
/**
* (event-replay) Finishes DB setup after an event-replay.
* Called when a full event import is complete.
*/
async finishEventReplay() {
if (!this.isEventReplay) {
return;
}
await this.sqlWriteTransaction(async sql => {
await this.refreshMaterializedView('nft_custody', sql, false);
await this.refreshMaterializedView('nft_custody_unanchored', sql, false);
await this.refreshMaterializedView('chain_tip', sql, false);
await this.refreshMaterializedView('mempool_digest', sql, false);
});


@@ -825,7 +825,7 @@ describe('search tests', () => {
recipient: addr7,
sender: 'none',
};
await db.updateNftEvent(client, stxTx1, nftEvent1);
await db.updateNftEvent(client, stxTx1, nftEvent1, false);
// test address as a nft event recipient
const searchResult7 = await supertest(api.server).get(`/extended/v1/search/${addr7}`);
@@ -853,7 +853,7 @@ describe('search tests', () => {
recipient: 'none',
sender: addr8,
};
await db.updateNftEvent(client, stxTx1, nftEvent2);
await db.updateNftEvent(client, stxTx1, nftEvent2, false);
// test address as a nft event sender
const searchResult8 = await supertest(api.server).get(`/extended/v1/search/${addr8}`);