Compare commits

..

100 Commits

Author SHA1 Message Date
Gaze
11d64b09d5 Merge branch 'main' into develop 2024-11-22 14:22:19 +07:00
Gaze
0cb66232ef feat: add bip322 pkg 2024-11-22 14:22:07 +07:00
gazenw
4074548b3e Merge pull request #74 from gaze-network/develop
Release 0.7.0
2024-10-31 14:18:59 +07:00
gazenw
c5c9a7bdeb feat: add get Runes info batch api (#73)
* fix: make existing handlers use new total holders usecase

* fix: error msg

* feat: add get token info batch

* feat: add includeHoldersCount in get tokens api

* refactor: extract response mapping

* fix: rename new field and add holdersCount to extend

* fix: query params array

* fix: error msg

* fix: struct tags

* fix: remove error

* feat: add default value to additional fields
2024-10-31 14:14:58 +07:00
gazenw
58334dd3e4 Merge pull request #72 from gaze-network/develop
feat: add etching tx hash for runes info api
2024-10-26 20:56:01 +07:00
Gaze
cffe378beb feat: add etching tx hash for runes info api 2024-10-25 16:22:24 +07:00
gazenw
9a7ee49228 Merge pull request #71 from gaze-network/develop
Release v0.6.0
2024-10-17 14:35:19 +07:00
gazenw
9739f61067 feat: implement batch insert using multirow inserts (#70)
* feat: add new batch inserts

* fix: migration

* fix: add casting to unnest with patch

* fix: add UTC() to timestamp mappers

* chore: unused imports

* chore: remove unnecessary comments
2024-10-17 14:34:04 +07:00
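The multirow-insert technique from #70 ("implement batch insert using multirow inserts", with array casts for `unnest`) can be sketched roughly as below. The table and column names are hypothetical, and the explicit `::bigint[]`/`::text[]` casts reflect the commit note about casting `unnest` parameters, which is needed when a driver sends array arguments untyped:

```go
package main

import "fmt"

// buildUnnestInsert builds one multirow INSERT that passes each column's
// values as an array and expands them server-side with unnest. This is a
// sketch: "blocks", "height", and "hash" are illustrative names, not the
// project's actual schema.
func buildUnnestInsert(heights []int64, hashes []string) (string, []any) {
	query := `INSERT INTO blocks (height, hash)
SELECT * FROM unnest($1::bigint[], $2::text[])`
	// All rows travel in two parameters, regardless of row count.
	return query, []any{heights, hashes}
}

func main() {
	q, args := buildUnnestInsert([]int64{100, 101}, []string{"aa", "bb"})
	fmt.Println(len(args)) // 2 parameters for any number of rows
	fmt.Println(q)
}
```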
gazenw
f1267b387e Merge pull request #69 from gaze-network/develop
Release v0.5.6
2024-10-15 17:59:34 +07:00
gazenw
8883c24c77 fix: only call VerifyStates if not api only (#68) 2024-10-15 17:58:51 +07:00
gazenw
e9ce8df01a Merge pull request #67 from gaze-network/develop
Release v0.5.5
2024-10-11 14:25:05 +07:00
Gaze
3ff73a99f8 Merge branch 'main' into develop 2024-10-11 14:24:36 +07:00
gazenw
96afdfd255 fix: order get ongoing tokens by mint transactions (#66) 2024-10-11 14:23:31 +07:00
gazenw
c49e39be97 feat: optimize flush speed for Runes (#65)
* feat: switch to batchexec for faster inserts

* feat: add time taken log

* feat: add process time log

* feat: add event logs
2024-10-11 14:21:26 +07:00
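The flush speedup in #65 ("switch to batchexec for faster inserts") comes from sending many rows per database round trip instead of one. A minimal, database-free sketch of that batching shape — `exec` stands in for the actual driver call, and the batch size is an assumption, not a value from the PR:

```go
package main

import "fmt"

// flushInBatches sends rows to exec in groups of batchSize and returns the
// number of round trips made. With per-row inserts this would be len(rows)
// round trips; batching collapses it to ceil(len(rows)/batchSize).
func flushInBatches(rows []int, batchSize int, exec func(batch []int)) int {
	roundTrips := 0
	for start := 0; start < len(rows); start += batchSize {
		end := start + batchSize
		if end > len(rows) {
			end = len(rows)
		}
		exec(rows[start:end])
		roundTrips++
	}
	return roundTrips
}

func main() {
	n := flushInBatches(make([]int, 2500), 1000, func([]int) {})
	fmt.Println(n) // 3 round trips instead of 2500
}
```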
Gaze
12985ae432 feat: remove conns timeout 2024-10-08 01:14:24 +07:00
Gaze
2d51e52b83 feat: support to config 2024-10-08 01:12:49 +07:00
Gaze
618220d0cb feat: support high httpclient conns 2024-10-08 00:44:24 +07:00
gazenw
6004744721 Merge pull request #64 from gaze-network/develop
Release v0.5.1
2024-10-07 21:18:11 +07:00
gazenw
90ed7bc350 fix: update sql (#63) 2024-10-07 21:17:16 +07:00
gazenw
7a0fe84e40 Merge pull request #62 from gaze-network/develop
Release v0.5.0
2024-10-06 23:52:10 +07:00
gazenw
f1d4651042 feat(runes): add Get Transaction by hash api (#39)
* feat: implement pagination on get balance, get holders

* feat: paginate get transactions

* fix: remove debug

* feat: implement pagination in get utxos

* feat: sort response in get holders

* feat: cap batch query

* feat: add default limits to all endpoints

* chore: rename endpoint funcs

* fix: parse rune name spacers

* feat(runes): get tx by hash api

* fix: error

* refactor: use map to collect rune ids

---------

Co-authored-by: Gaze <gazenw@users.noreply.github.com>
2024-10-06 23:50:13 +07:00
gazenw
5f4f50a9e5 Merge pull request #61 from gaze-network/develop
Release v0.4.7
2024-10-06 21:00:57 +07:00
gazenw
32c3c5c1d4 fix: correctly insert etchedAt in rune entry (#60) 2024-10-06 20:55:38 +07:00
Gaze
2a572e6d1e fix: migration 2024-10-06 20:29:25 +07:00
Gaze
aa25a6882b Merge remote-tracking branch 'origin/main' into develop 2024-10-06 20:06:06 +07:00
gazenw
6182c63150 Merge pull request #59 from gaze-network/develop
Release v0.4.5
2024-10-06 19:59:35 +07:00
gazenw
e1f8eaa3e1 fix: unescape query id (#58) 2024-10-06 19:59:06 +07:00
gazenw
107836ae39 feat(runes): add Get Tokens API (#38)
* feat: implement pagination on get balance, get holders

* feat: paginate get transactions

* fix: remove debug

* feat: implement pagination in get utxos

* feat: sort response in get holders

* feat: cap batch query

* feat: add default limits to all endpoints

* chore: rename endpoint funcs

* fix: parse rune name spacers

* feat(runes): add get token list api

* fix(runes): use distinct to get token list

* feat: remove unused code

* fix: count holders distinct pkscript

* feat: implement additional scopes

* chore: comments

* feat: implement search

* refactor: switch to use paginationRequest

* refactor: rename get token list to get tokens

* fix: count total holders by rune ids

* fix: rename file

* fix: rename minting to ongoing

* fix: get ongoing check rune is mintable

* chore: disable gosec g115

* fix: pr

---------

Co-authored-by: Gaze <gazenw@users.noreply.github.com>
2024-10-06 19:30:57 +07:00
Gaze
1bd84b0154 fix: bump sqlc verify action version to 1.27.0 2024-10-06 15:41:10 +07:00
Gaze
de26a4c21d feat(errs): add retryable error 2024-10-06 11:12:51 +07:00
Gaze
1dc57d74e0 Merge remote-tracking branch 'origin/main' into develop 2024-10-05 01:38:34 +07:00
gazenw
7c0e28d8ea Merge pull request #57 from gaze-network/develop
Release 0.4.4
2024-10-05 01:37:50 +07:00
gazenw
754fd1e997 fix: only check for chain reorg if current block has hash (#56)
* fix: only check for chain reorg if current block has hash

* fix: remove starting block hash

* fix: don't use starting block hash
2024-10-05 01:35:04 +07:00
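The fix in #56 guards the reorg check behind a "has the indexer seen a block yet?" test: a freshly started indexer has no current block hash (the zero value), so comparing it against the remote header's previous-block hash would always look like a reorg. A self-contained sketch of that guard, using a stand-in hash type in place of `chainhash.Hash`:

```go
package main

import "fmt"

type hash [32]byte

// shouldCheckReorg returns true only when a reorg is actually suspected.
// On first run currentHash is the zero value, so the comparison is skipped
// entirely rather than misreported as a chain reorganization.
func shouldCheckReorg(currentHash, remotePrev hash) bool {
	if currentHash == (hash{}) {
		return false // no starting block hash: nothing to compare against
	}
	return currentHash != remotePrev
}

func main() {
	var fresh hash
	known := hash{0x01}
	fmt.Println(shouldCheckReorg(fresh, known)) // false: skip on first run
	fmt.Println(shouldCheckReorg(known, known)) // false: chain is continuous
	fmt.Println(shouldCheckReorg(known, hash{0x02})) // true: reorg suspected
}
```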
Gaze
66f03f7107 feat: allow custom sigHashType when signing 2024-10-04 23:05:48 +07:00
gazenw
7a863987ec Merge pull request #55 from gaze-network/develop
Release v0.4.3
2024-10-04 13:23:30 +07:00
gazenw
f9c6ef8dfd fix: add different genesis runes config for each network (#54)
* fix: add different genesis runes config for each network

* fix: use slogx.Stringer

* refactor: remove unused value
2024-10-04 13:22:53 +07:00
gazenw
22a32468ef Merge pull request #53 from gaze-network/develop
Release v0.4.2
2024-10-03 18:26:32 +07:00
gazenw
b1d9f4f574 feat: add fractal support for runes (#52)
* feat: add fractal support for runes

* chore: remove common.HalvingInterval

* fix: update starting block height

* refactor: move network-genesis-rune definition to constants

* fix: use logger.Panic() instead of panic()

* fix: golangci-lint

* fix: missing return
2024-10-03 18:25:13 +07:00
Gaze
6a5ba528a8 Merge branch 'main' into develop 2024-10-02 20:28:42 +07:00
Nut Pinyo
6484887710 feat: add dust limit util (#51)
* feat: add dust limit util

* fix: use int64 instead
2024-10-02 15:08:29 +07:00
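The dust-limit utility added in #51 plausibly follows Bitcoin Core's convention, under which an output is dust when its value is below the dust relay fee rate (3 sat/vB) times the combined size of the output and the input that would later spend it. The function below is an assumption-based sketch, not the PR's actual code; with P2PKH sizes (34-byte output, 148-byte spending input) it yields the familiar 546-sat threshold:

```go
package main

import "fmt"

// dustLimit computes the minimum non-dust value for an output, following
// Bitcoin Core's rule of thumb: 3 sat/vB times (output size + spend size).
// The sizes passed in are assumptions the caller must estimate per script type.
func dustLimit(outputSize, spendSize int64) int64 {
	const dustRelayFee = 3 // sat/vB, Bitcoin Core's default dust relay rate
	return (outputSize + spendSize) * dustRelayFee
}

func main() {
	fmt.Println(dustLimit(34, 148)) // P2PKH: 546 sats
	fmt.Println(dustLimit(31, 67))  // P2WPKH: 294 sats
}
```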
Gaze
9a1382fb9f feat: add fee estimation util 2024-09-06 22:56:57 +07:00
Gaze
3d5f3b414c feat: add sign tx util functions 2024-09-06 21:58:02 +07:00
gazenw
6e8a846c27 Merge pull request #50 from gaze-network/develop
Release v0.4.1
2024-08-30 16:18:40 +07:00
gazenw
8b690c4f7f fix: add decimals field in runes get holders (#49) 2024-08-30 16:17:57 +07:00
Ň𝑒𝕣ⒻẸ𝔻
cc37807ff9 Turkish translation (#46)
* Move Turkish translation of README.md to docs/ directory

* Add Turkish translation of README.md with last updated date and community translation notice

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

---------

Co-authored-by: gazenw <163862510+gazenw@users.noreply.github.com>
2024-08-30 16:15:14 +07:00
gazenw
9ab16d21e1 Merge pull request #48 from gaze-network/develop
Release v0.4.1
2024-08-28 23:49:15 +07:00
gazenw
32fec89914 Merge pull request #47 from gaze-network/feat/fractal-network-support
feat: add fractal network constant
2024-08-28 23:48:17 +07:00
Gaze
0131de6717 feat: add network support for fractal 2024-08-28 23:34:54 +07:00
gazenw
206eb65ee7 Merge pull request #44 from gaze-network/develop
Release v0.4.0
2024-08-13 17:31:16 +07:00
gazenw
fa810b0aed feat: add get utxo by tx hash and output idx for Runes (#42)
* feat: add handler

* feat: add get transaction

* feat: add get utxos output

* refactor: function parameter

* feat: add check utxo not found

* feat: add sats to get utxo output api

* feat: add utxo sats entity

* feat: add get utxos output batch

* feat: handle error

* fix: context

* fix: sqlc queries

* fix: remove unused code

* fix: comment

* fix: check utxo not found error

* refactor: add some space

* fix: comment

* fix: use public field
2024-08-13 17:20:46 +07:00
waiemwor
dca63a49fe Modify Nodesale API to allow query all nodes from a deployment. (#43)
* feat: allow query all nodes from a deployment.

* fix : function GetNodesByDeployment name.
2024-08-13 16:35:56 +07:00
Gaze
05ade4b9d5 Merge branch 'main' into develop 2024-08-06 13:25:48 +07:00
Gaze
074458584b fix: adjust content type check 2024-08-06 13:25:36 +07:00
waiemwor
db5dc75c41 Feature/nodesale (#40)
* feat: recover nodesale module.

* fix: refactored.

* fix: fix table type.

* fix: add entity

* fix: bug UTC time.

* ci: try to tidy before testing

* ci: touch result file

* ci: use echo to create new file

* fix: try to skip test in ci

* fix: remove os.Exit

* fix: handle error

* feat: add todo note

* fix: Cannot run nodesale test because qtx is not initiated.

* fix: 50% chance public key compare incorrectly.

* fix: more consistent SQL

* fix: sanity refactor.

* fix: remove unused code.

* fix: move last_block_default to config file.

* fix: minor mistakes.

* fix:

* fix: refactor

* fix: refactor

* fix: delegate tx hash not record into db.

* refactor: prepare for moving integration tests.

* refactor: convert to unit tests.

* fix: change to using input values since output values deducted fee.

* feat: add extra unit test.

* fix: wrong timestamp format.

* fix: handle block timeout = 0

---------

Co-authored-by: Gaze <gazenw@users.noreply.github.com>
2024-08-05 11:33:20 +07:00
Gaze
0474627336 Merge branch 'main' into develop 2024-08-05 11:31:42 +07:00
Gaze
359436e6eb fix(httpclient): preserve trailing slash if exists 2024-08-01 14:43:36 +07:00
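Preserving a trailing slash matters because many HTTP servers route `/runes` and `/runes/` differently. An illustrative `joinPath` sketch of the behavior this commit fixes — not the `httpclient` package's actual implementation:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// joinPath joins a base URL with a request path while preserving a trailing
// slash on the request path if one exists. Hypothetical helper for
// illustration only.
func joinPath(base, p string) string {
	u, err := url.Parse(base)
	if err != nil {
		return base
	}
	trailing := strings.HasSuffix(p, "/")
	u.Path = strings.TrimSuffix(u.Path, "/") + "/" + strings.TrimPrefix(p, "/")
	if trailing && !strings.HasSuffix(u.Path, "/") {
		u.Path += "/"
	}
	return u.String()
}

func main() {
	fmt.Println(joinPath("https://api.example.com/v1", "runes/")) // slash kept
	fmt.Println(joinPath("https://api.example.com/v1", "runes"))  // no slash added
}
```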
gazenw
1967895d6d Merge pull request #41 from gaze-network/develop
Release v0.3.0
2024-07-25 15:07:51 +07:00
gazenw
7dcbd082ee feat: add Runes API pagination (#36)
* feat: implement pagination on get balance, get holders

* feat: paginate get transactions

* fix: remove debug

* feat: implement pagination in get utxos

* feat: sort response in get holders

* feat: cap batch query

* feat: add default limits to all endpoints

* chore: rename endpoint funcs

* fix: parse rune name spacers

* chore: use compare.Cmp

* feat: handle not found errors on all usecase
2024-07-23 15:46:45 +07:00
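The pagination work in #36 mentions default limits on all endpoints and a capped batch query. A hedged sketch of that request normalization — the 10/100 values are illustrative placeholders, not taken from the PR:

```go
package main

import "fmt"

// normalizePage applies a default limit when none is given and caps
// oversized limits, so no endpoint can be asked for an unbounded page.
// defaultLimit and maxLimit are assumed values for illustration.
func normalizePage(limit, offset int) (int, int) {
	const (
		defaultLimit = 10
		maxLimit     = 100
	)
	if limit <= 0 {
		limit = defaultLimit
	}
	if limit > maxLimit {
		limit = maxLimit
	}
	if offset < 0 {
		offset = 0
	}
	return limit, offset
}

func main() {
	l, o := normalizePage(0, -5)
	fmt.Println(l, o) // 10 0: defaults applied
	l, o = normalizePage(500, 20)
	fmt.Println(l, o) // 100 20: limit capped
}
```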
gazenw
880f4b2e6a fix: handle case where input rune id is not found (#37) 2024-07-15 18:32:28 +07:00
Gaze
3f727dc11b Merge remote-tracking branch 'origin/main' into develop 2024-07-15 16:33:56 +07:00
Planxnx
60717ecc65 feat(requestlogger): add response headers 2024-07-12 00:18:15 +07:00
Planxnx
6998adedb0 fix(requestlogger): logging all request headers 2024-07-11 23:53:27 +07:00
Thanee Charattrakool
add0a541b5 feat: Request Logger fields (#35)
* feat: add with request headers config

* feat: add with fields config

* feat: format request queries
2024-07-11 23:41:18 +07:00
gazenw
dad02bf61a Merge pull request #34 from gaze-network/develop
feat: release v0.2.7
2024-07-09 16:15:35 +07:00
Gaze
694baef0aa chore: golangci-lint 2024-07-09 15:48:09 +07:00
gazenw
47119c3220 feat: remove unnecessary verbose query (#33) 2024-07-09 15:44:14 +07:00
gazenw
6203b104db Merge pull request #32 from gaze-network/develop
feat: release v0.2.5
2024-07-08 14:50:40 +07:00
gazenw
b24f27ec9a fix: incorrect condition for finding output destinations (#31) 2024-07-08 14:32:58 +07:00
Planxnx
90f1fd0a6c Merge branch 'fix/invalid-httpclient-path' 2024-07-04 15:39:17 +07:00
Planxnx
aace33b382 fix(httpclient): support base url query params 2024-07-04 15:39:04 +07:00
Gaze
a663f909fa Merge remote-tracking branch 'origin/main' into develop 2024-07-04 12:46:51 +07:00
Thanee Charattrakool
0263ec5622 Merge pull request #30 from gaze-network/fix/invalid-httpclient-path 2024-07-04 04:12:19 +07:00
Planxnx
8760baf42b chore: remive unused comment 2024-07-04 00:03:36 +07:00
Planxnx
5aca9f7f19 perf(httpclient): reduce base url parsing operation 2024-07-03 23:58:20 +07:00
Planxnx
07aa84019f fix(httpclient): can't support baseURL path 2024-07-03 23:57:40 +07:00
Thanee Charattrakool
a5fc803371 Merge pull request #29 from gaze-network/develop
feat: release v0.2.4
2024-07-02 15:57:44 +07:00
Planxnx
72ca151fd3 feat(httpclient): support content-encoding 2024-07-02 15:53:18 +07:00
Gaze
53a4d1a4c3 Merge branch 'main' into develop 2024-06-30 21:04:08 +07:00
Gaze
3322f4a034 ci: update action file name 2024-06-30 21:03:57 +07:00
Planxnx
dcb220bddb Merge branch 'main' into develop 2024-06-30 20:17:13 +07:00
gazenw
b6ff7e41bd docs: update README.md 2024-06-30 20:12:44 +07:00
gazenw
7cb717af11 feat(runes): get txs by block range (#28)
* feat(runes): get txs by block range

* feat(runes): validate block range

* perf(runes): limit 10k txs

---------

Co-authored-by: Gaze <gazenw@users.noreply.github.com>
2024-06-30 18:45:23 +07:00
Gaze
0d1ae0ef5e Merge branch 'main' into develop 2024-06-27 00:12:13 +07:00
Thanee Charattrakool
81ba7792ea fix: create error handler middleware (#27) 2024-06-27 00:11:22 +07:00
Gaze
b5851a39ab Merge branch 'main' into develop 2024-06-22 21:15:06 +07:00
Gaze
b44fb870a3 feat: add query params to req logger 2024-06-22 21:00:02 +07:00
Gaze
373ea50319 feat(logger): support env config 2024-06-20 18:52:56 +07:00
Gaze
a1d7524615 feat(btcutils): make btcutils.Address comparable support 2024-06-14 19:38:01 +07:00
Gaze
415a476478 Merge branch 'main' into develop 2024-06-14 16:55:39 +07:00
Gaze
f63505e173 feat(btcutils): use chain params instead common.network 2024-06-14 16:55:28 +07:00
Gaze
65a69ddb68 Merge remote-tracking branch 'origin/main' into develop 2024-06-14 16:48:48 +07:00
Thanee Charattrakool
4f5d1f077b feat(btcutils): add bitcoin utility functions (#26)
* feat(btcutils): add bitcoin utility functions

* feat(btcutils): add bitcoin signature verification
2024-06-14 16:48:22 +07:00
Gaze
c133006c82 Merge branch 'main' into develop 2024-06-12 23:39:24 +07:00
Thanee Charattrakool
51fd1f6636 feat: move requestip config to http config (#25) 2024-06-12 22:08:03 +07:00
Thanee Charattrakool
a7bc6257c4 feat(api): add request context and logger middleware (#24)
* feat(api): add request context and logger middleware

* feat(api): add cors and favicon middleware

* fix: solve wrapcheck linter warning

* feat: configurable hidden request headers
2024-06-12 21:47:29 +07:00
gazenw
3bb7500c87 feat: update docker version 2024-06-07 13:55:55 +07:00
Gaze
8c92893d4a feat: release v0.2.1 2024-05-31 01:16:34 +07:00
Nut Pinyo
d84e30ed11 fix: implement Shutdown() for processors (#22) 2024-05-31 01:13:12 +07:00
Thanee Charattrakool
d9fa217977 feat: use current indexed block for first prev block (#23)
* feat: use current indexed block for first prev block

* fix: forgot to set next prev header
2024-05-31 01:11:37 +07:00
Nut Pinyo
d4b694aa57 fix: implement Shutdown() for processors (#22) 2024-05-30 23:57:41 +07:00
189 changed files with 12492 additions and 8143 deletions

View File

@@ -58,6 +58,9 @@ jobs:
cache: true # caching and restoring go modules and build outputs.
- run: echo "GOVERSION=$(go version)" >> $GITHUB_ENV
- name: Touch test result file
run: echo "" > test_output.json
- name: Build
run: go build -v ./...

View File

@@ -22,7 +22,7 @@ jobs:
- name: Setup Sqlc
uses: sqlc-dev/setup-sqlc@v4
with:
sqlc-version: "1.26.0"
sqlc-version: "1.27.0"
- name: Check Diff
run: sqlc diff

View File

@@ -101,3 +101,6 @@ linters-settings:
attr-only: true
key-naming-case: snake
args-on-sep-lines: true
gosec:
excludes:
- G115

View File

@@ -1,8 +1,10 @@
<!-- omit from toc -->
- [Türkçe](https://github.com/Rumeyst/gaze-indexer/blob/turkish-translation/docs/README_tr.md)
# Gaze Indexer
Gaze Indexer is an open-source and modular indexing client for Bitcoin meta-protocols. It has support for Runes out of the box, with **Unified Consistent APIs** across fungible token protocols.
Gaze Indexer is an open-source and modular indexing client for Bitcoin meta-protocols with **Unified Consistent APIs** across fungible token protocols.
Gaze Indexer is built with **modularity** in mind, allowing users to run all modules in one monolithic instance with a single command, or as a distributed cluster of micro-services.
@@ -51,8 +53,6 @@ Here is our minimum database disk space requirement for each module.
| ------ | -------------------------- | ---------------------------- |
| Runes | 10 GB | 150 GB |
Here is our minimum database disk space requirement for each module.
#### 4. Prepare `config.yaml` file.
```yaml
@@ -108,7 +108,7 @@ We will be using `docker-compose` for our installation guide. Make sure the `doc
# docker-compose.yaml
services:
gaze-indexer:
image: ghcr.io/gaze-network/gaze-indexer:v1.0.0
image: ghcr.io/gaze-network/gaze-indexer:v0.2.1
container_name: gaze-indexer
restart: unless-stopped
ports:

View File

@@ -17,16 +17,21 @@ import (
"github.com/gaze-network/indexer-network/common/errs"
"github.com/gaze-network/indexer-network/core/indexer"
"github.com/gaze-network/indexer-network/internal/config"
"github.com/gaze-network/indexer-network/modules/brc20"
"github.com/gaze-network/indexer-network/modules/nodesale"
"github.com/gaze-network/indexer-network/modules/runes"
"github.com/gaze-network/indexer-network/pkg/automaxprocs"
"github.com/gaze-network/indexer-network/pkg/errorhandler"
"github.com/gaze-network/indexer-network/pkg/logger"
"github.com/gaze-network/indexer-network/pkg/logger/slogx"
"github.com/gaze-network/indexer-network/pkg/middleware/errorhandler"
"github.com/gaze-network/indexer-network/pkg/middleware/requestcontext"
"github.com/gaze-network/indexer-network/pkg/middleware/requestlogger"
"github.com/gaze-network/indexer-network/pkg/reportingclient"
"github.com/gofiber/fiber/v2"
"github.com/gofiber/fiber/v2/middleware/compress"
"github.com/gofiber/fiber/v2/middleware/cors"
"github.com/gofiber/fiber/v2/middleware/favicon"
fiberrecover "github.com/gofiber/fiber/v2/middleware/recover"
"github.com/gofiber/fiber/v2/middleware/requestid"
"github.com/samber/do/v2"
"github.com/samber/lo"
"github.com/spf13/cobra"
@@ -35,7 +40,7 @@ import (
// Register Modules
var Modules = do.Package(
do.LazyNamed("runes", runes.New),
do.LazyNamed("brc20", brc20.New),
do.LazyNamed("nodesale", nodesale.New),
)
func NewRunCommand() *cobra.Command {
@@ -133,10 +138,26 @@ func runHandler(cmd *cobra.Command, _ []string) error {
// Initialize HTTP server
do.Provide(injector, func(i do.Injector) (*fiber.App, error) {
app := fiber.New(fiber.Config{
AppName: "Gaze Indexer",
ErrorHandler: errorhandler.NewHTTPErrorHandler(),
AppName: "Gaze Indexer",
ErrorHandler: func(c *fiber.Ctx, err error) error {
logger.ErrorContext(c.UserContext(), "Something went wrong, unhandled api error",
slogx.String("event", "api_unhandled_error"),
slogx.Error(err),
)
return errors.WithStack(c.Status(http.StatusInternalServerError).JSON(fiber.Map{
"error": "Internal Server Error",
}))
},
})
app.
Use(favicon.New()).
Use(cors.New()).
Use(requestid.New()).
Use(requestcontext.New(
requestcontext.WithRequestId(),
requestcontext.WithClientIP(conf.HTTPServer.RequestIP),
)).
Use(requestlogger.New(conf.HTTPServer.Logger)).
Use(fiberrecover.New(fiberrecover.Config{
EnableStackTrace: true,
StackTraceHandler: func(c *fiber.Ctx, e interface{}) {
@@ -145,6 +166,7 @@ func runHandler(cmd *cobra.Command, _ []string) error {
logger.ErrorContext(c.UserContext(), "Something went wrong, panic in http handler", slogx.Any("panic", e), slog.String("stacktrace", string(buf)))
},
})).
Use(errorhandler.New()).
Use(compress.New(compress.Config{
Level: compress.LevelDefault,
}))

View File

@@ -6,13 +6,15 @@ import (
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/common/errs"
"github.com/gaze-network/indexer-network/core/constants"
"github.com/gaze-network/indexer-network/modules/runes"
"github.com/gaze-network/indexer-network/modules/nodesale"
runesconstants "github.com/gaze-network/indexer-network/modules/runes/constants"
"github.com/spf13/cobra"
)
var versions = map[string]string{
"": constants.Version,
"runes": runes.Version,
"": constants.Version,
"runes": runesconstants.Version,
"nodesale": nodesale.Version,
}
type versionCmdOptions struct {

View File

@@ -17,7 +17,7 @@ import (
type migrateDownCmdOptions struct {
DatabaseURL string
Modules string
Runes bool
All bool
}
@@ -59,7 +59,7 @@ func NewMigrateDownCommand() *cobra.Command {
}
flags := cmd.Flags()
flags.StringVar(&opts.Modules, "modules", "", "Modules to apply up migrations")
flags.BoolVar(&opts.Runes, "runes", false, "Apply Runes down migrations")
flags.StringVar(&opts.DatabaseURL, "database", "", "Database url to run migration on")
flags.BoolVar(&opts.All, "all", false, "Confirm apply ALL down migrations without prompt")
@@ -87,8 +87,6 @@ func migrateDownHandler(opts *migrateDownCmdOptions, _ *cobra.Command, args migr
}
}
modules := strings.Split(opts.Modules, ",")
applyDownMigrations := func(module string, sourcePath string, migrationTable string) error {
newDatabaseURL := cloneURLWithQuery(databaseURL, url.Values{"x-migrations-table": {migrationTable}})
sourceURL := "file://" + sourcePath
@@ -118,15 +116,10 @@ func migrateDownHandler(opts *migrateDownCmdOptions, _ *cobra.Command, args migr
return nil
}
if lo.Contains(modules, "runes") {
if opts.Runes {
if err := applyDownMigrations("Runes", runesMigrationSource, "runes_schema_migrations"); err != nil {
return errors.WithStack(err)
}
}
if lo.Contains(modules, "brc20") {
if err := applyDownMigrations("BRC20", brc20MigrationSource, "brc20_schema_migrations"); err != nil {
return errors.WithStack(err)
}
}
return nil
}

View File

@@ -11,13 +11,12 @@ import (
"github.com/golang-migrate/migrate/v4"
_ "github.com/golang-migrate/migrate/v4/database/postgres"
_ "github.com/golang-migrate/migrate/v4/source/file"
"github.com/samber/lo"
"github.com/spf13/cobra"
)
type migrateUpCmdOptions struct {
DatabaseURL string
Modules string
Runes bool
}
type migrateUpCmdArgs struct {
@@ -55,7 +54,7 @@ func NewMigrateUpCommand() *cobra.Command {
}
flags := cmd.Flags()
flags.StringVar(&opts.Modules, "modules", "", "Modules to apply up migrations")
flags.BoolVar(&opts.Runes, "runes", false, "Apply Runes up migrations")
flags.StringVar(&opts.DatabaseURL, "database", "", "Database url to run migration on")
return cmd
@@ -73,8 +72,6 @@ func migrateUpHandler(opts *migrateUpCmdOptions, _ *cobra.Command, args migrateU
return errors.Errorf("unsupported database driver: %s", databaseURL.Scheme)
}
modules := strings.Split(opts.Modules, ",")
applyUpMigrations := func(module string, sourcePath string, migrationTable string) error {
newDatabaseURL := cloneURLWithQuery(databaseURL, url.Values{"x-migrations-table": {migrationTable}})
sourceURL := "file://" + sourcePath
@@ -104,15 +101,10 @@ func migrateUpHandler(opts *migrateUpCmdOptions, _ *cobra.Command, args migrateU
return nil
}
if lo.Contains(modules, "runes") {
if opts.Runes {
if err := applyUpMigrations("Runes", runesMigrationSource, "runes_schema_migrations"); err != nil {
return errors.WithStack(err)
}
}
if lo.Contains(modules, "brc20") {
if err := applyUpMigrations("BRC20", brc20MigrationSource, "brc20_schema_migrations"); err != nil {
return errors.WithStack(err)
}
}
return nil
}

View File

@@ -4,7 +4,6 @@ import "net/url"
const (
runesMigrationSource = "modules/runes/database/postgresql/migrations"
brc20MigrationSource = "modules/brc20/database/postgresql/migrations"
)
func cloneURLWithQuery(u *url.URL, newQuery url.Values) *url.URL {

View File

@@ -1,4 +0,0 @@
package common
// HalvingInterval is the number of blocks between each halving event.
const HalvingInterval = 210_000

View File

@@ -24,6 +24,9 @@ var (
// Skippable is returned when got an error but it can be skipped or ignored and continue
Skippable = errors.NewWithDepth(depth, "skippable")
// Retryable is returned when got an error but it can be retried
Retryable = errors.NewWithDepth(depth, "retryable")
// Unsupported is returned when a feature or result is not supported
Unsupported = errors.NewWithDepth(depth, "unsupported")

View File

@@ -1,22 +1,31 @@
package common
import "github.com/btcsuite/btcd/chaincfg"
import (
"github.com/btcsuite/btcd/chaincfg"
"github.com/gaze-network/indexer-network/pkg/logger"
)
type Network string
const (
NetworkMainnet Network = "mainnet"
NetworkTestnet Network = "testnet"
NetworkMainnet Network = "mainnet"
NetworkTestnet Network = "testnet"
NetworkFractalMainnet Network = "fractal-mainnet"
NetworkFractalTestnet Network = "fractal-testnet"
)
var supportedNetworks = map[Network]struct{}{
NetworkMainnet: {},
NetworkTestnet: {},
NetworkMainnet: {},
NetworkTestnet: {},
NetworkFractalMainnet: {},
NetworkFractalTestnet: {},
}
var chainParams = map[Network]*chaincfg.Params{
NetworkMainnet: &chaincfg.MainNetParams,
NetworkTestnet: &chaincfg.TestNet3Params,
NetworkMainnet: &chaincfg.MainNetParams,
NetworkTestnet: &chaincfg.TestNet3Params,
NetworkFractalMainnet: &chaincfg.MainNetParams,
NetworkFractalTestnet: &chaincfg.MainNetParams,
}
func (n Network) IsSupported() bool {
@@ -31,3 +40,15 @@ func (n Network) ChainParams() *chaincfg.Params {
func (n Network) String() string {
return string(n)
}
func (n Network) HalvingInterval() uint64 {
switch n {
case NetworkMainnet, NetworkTestnet:
return 210_000
case NetworkFractalMainnet, NetworkFractalTestnet:
return 2_100_000
default:
logger.Panic("invalid network")
return 0
}
}

View File

@@ -23,6 +23,14 @@ reporting:
# HTTP server configuration options.
http_server:
port: 8080 # Port to run the HTTP server on for modules with HTTP API handlers.
logger:
disable: false # disable logger if logger level is `INFO`
request_header: false
request_query: false
requestip: # Client IP extraction configuration options. This is unnecessary if you don't care about the real client IP or if you're not using a reverse proxy.
trusted_proxies_ip: # Cloudflare, GCP Public LB. See: server/internal/middleware/requestcontext/PROXY-IP.md
trusted_proxies_header: # X-Real-IP, CF-Connecting-IP
enable_reject_malformed_request: false # return 403 if request is malformed (invalid IP)
# Meta-protocol modules configuration options.
modules:
@@ -39,3 +47,11 @@ modules:
password: "password"
db_name: "postgres"
# url: "postgres://postgres:password@localhost:5432/postgres?sslmode=prefer" # [Optional] This will override other database credentials above.
nodesale:
postgres:
host: "localhost"
port: 5432
user: "postgres"
password: "P@ssw0rd"
db_name: "postgres"
last_block_default: 400

View File

@@ -1,5 +1,5 @@
package constants
const (
Version = "v0.0.1"
Version = "v0.2.1"
)

View File

@@ -243,39 +243,32 @@ func (d *BitcoinNodeDatasource) prepareRange(fromHeight, toHeight int64) (start,
}
// GetTransaction fetch transaction from Bitcoin node
func (d *BitcoinNodeDatasource) GetTransactionByHash(ctx context.Context, txHash chainhash.Hash) (*types.Transaction, error) {
func (d *BitcoinNodeDatasource) GetRawTransactionAndHeightByTxHash(ctx context.Context, txHash chainhash.Hash) (*wire.MsgTx, int64, error) {
rawTxVerbose, err := d.btcclient.GetRawTransactionVerbose(&txHash)
if err != nil {
return nil, errors.Wrap(err, "failed to get raw transaction")
return nil, 0, errors.Wrap(err, "failed to get raw transaction")
}
blockHash, err := chainhash.NewHashFromStr(rawTxVerbose.BlockHash)
if err != nil {
return nil, errors.Wrap(err, "failed to parse block hash")
return nil, 0, errors.Wrap(err, "failed to parse block hash")
}
block, err := d.btcclient.GetBlockVerboseTx(blockHash)
block, err := d.btcclient.GetBlockVerbose(blockHash)
if err != nil {
return nil, errors.Wrap(err, "failed to get block header")
return nil, 0, errors.Wrap(err, "failed to get block header")
}
// parse tx
txBytes, err := hex.DecodeString(rawTxVerbose.Hex)
if err != nil {
return nil, errors.Wrap(err, "failed to decode transaction hex")
return nil, 0, errors.Wrap(err, "failed to decode transaction hex")
}
var msgTx wire.MsgTx
if err := msgTx.Deserialize(bytes.NewReader(txBytes)); err != nil {
return nil, errors.Wrap(err, "failed to deserialize transaction")
}
var txIndex uint32
for i, tx := range block.Tx {
if tx.Hex == rawTxVerbose.Hex {
txIndex = uint32(i)
break
}
return nil, 0, errors.Wrap(err, "failed to deserialize transaction")
}
return types.ParseMsgTx(&msgTx, block.Height, *blockHash, txIndex), nil
return &msgTx, block.Height, nil
}
// GetBlockHeader fetch block header from Bitcoin node
@@ -293,18 +286,11 @@ func (d *BitcoinNodeDatasource) GetBlockHeader(ctx context.Context, height int64
return types.ParseMsgBlockHeader(*block, height), nil
}
// GetTransaction fetch transaction from Bitcoin node
func (d *BitcoinNodeDatasource) GetTransactionOutputs(ctx context.Context, txHash chainhash.Hash) ([]*types.TxOut, error) {
rawTx, err := d.btcclient.GetRawTransaction(&txHash)
func (d *BitcoinNodeDatasource) GetRawTransactionByTxHash(ctx context.Context, txHash chainhash.Hash) (*wire.MsgTx, error) {
transaction, err := d.btcclient.GetRawTransaction(&txHash)
if err != nil {
return nil, errors.Wrap(err, "failed to get raw transaction")
}
msgTx := rawTx.MsgTx()
txOuts := make([]*types.TxOut, 0, len(msgTx.TxOut))
for _, txOut := range msgTx.TxOut {
txOuts = append(txOuts, types.ParseTxOut(txOut))
}
return txOuts, nil
return transaction.MsgTx(), nil
}

View File

@@ -6,6 +6,7 @@ import (
"sync"
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/common/errs"
"github.com/gaze-network/indexer-network/core/datasources"
@@ -91,6 +92,10 @@ func (i *Indexer[T]) Run(ctx context.Context) (err error) {
select {
case <-i.quit:
logger.InfoContext(ctx, "Got quit signal, stopping indexer")
if err := i.Processor.Shutdown(ctx); err != nil {
logger.ErrorContext(ctx, "Failed to shutdown processor", slogx.Error(err))
return errors.Wrap(err, "processor shutdown failed")
}
return nil
case <-ctx.Done():
return nil
@@ -138,7 +143,7 @@ func (i *Indexer[T]) process(ctx context.Context) (err error) {
// validate reorg from first input
{
remoteBlockHeader := firstInputHeader
if !remoteBlockHeader.PrevBlock.IsEqual(&i.currentBlock.Hash) {
if i.currentBlock.Hash != (chainhash.Hash{}) && !remoteBlockHeader.PrevBlock.IsEqual(&i.currentBlock.Hash) {
logger.WarnContext(ctx, "Detected chain reorganization. Searching for fork point...",
slogx.String("event", "reorg_detected"),
slogx.Stringer("current_hash", i.currentBlock.Hash),
@@ -204,19 +209,20 @@ func (i *Indexer[T]) process(ctx context.Context) (err error) {
}
// validate is input is continuous and no reorg
for i := 1; i < len(inputs); i++ {
header := inputs[i].BlockHeader()
prevHeader := inputs[i-1].BlockHeader()
prevHeader := i.currentBlock
for i, input := range inputs {
header := input.BlockHeader()
if header.Height != prevHeader.Height+1 {
return errors.Wrapf(errs.InternalError, "input is not continuous, input[%d] height: %d, input[%d] height: %d", i-1, prevHeader.Height, i, header.Height)
}
if !header.PrevBlock.IsEqual(&prevHeader.Hash) {
if prevHeader.Hash != (chainhash.Hash{}) && !header.PrevBlock.IsEqual(&prevHeader.Hash) {
logger.WarnContext(ctx, "Chain Reorganization occurred in the middle of batch fetching inputs, need to try to fetch again")
// end current round
return nil
}
prevHeader = header
}
ctx = logger.WithContext(ctx, slog.Int("total_inputs", len(inputs)))
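The rewritten continuity loop above can be exercised as a self-contained sketch: each fetched input must extend the previous header by exactly one height, and its previous-block hash must match — unless the current hash is still the zero value (fresh start). Stand-in `header` struct here; the real code uses `chainhash.Hash` and wrapped errors:

```go
package main

import "fmt"

type header struct {
	Height    int64
	Hash      [32]byte
	PrevBlock [32]byte
}

// validateContinuity returns an error message, or "" when the batch of
// inputs forms a continuous chain on top of the current block.
func validateContinuity(current header, inputs []header) string {
	prev := current
	for i, h := range inputs {
		if h.Height != prev.Height+1 {
			return fmt.Sprintf("input[%d] not continuous: height %d after %d", i, h.Height, prev.Height)
		}
		// Skip the hash comparison when prev.Hash is unset (fresh indexer).
		if prev.Hash != ([32]byte{}) && h.PrevBlock != prev.Hash {
			return fmt.Sprintf("input[%d]: reorg suspected", i)
		}
		prev = h
	}
	return ""
}

func main() {
	cur := header{Height: 10, Hash: [32]byte{9}}
	next := header{Height: 11, PrevBlock: [32]byte{9}, Hash: [32]byte{10}}
	fmt.Println(validateContinuity(cur, []header{next}) == "") // true
}
```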

View File

@@ -29,6 +29,9 @@ type Processor[T Input] interface {
// VerifyStates verifies the states of the indexed data and the indexer
// to ensure the last shutdown was graceful and no missing data.
VerifyStates(ctx context.Context) error
// Shutdown gracefully stops the processor. Database connections, network calls, leftover states, etc. should be closed and cleaned up here.
Shutdown(ctx context.Context) error
}
type IndexerWorker interface {

docs/README_tr.md (new file, 165 lines)
View File

@@ -0,0 +1,165 @@
## Translations
- [English](../README.md)
**Last Updated:** August 21, 2024
> **Note:** This document is a community translation. Updates to the main README.md file may not be reflected here automatically. See the [English version](../README.md) for the most up-to-date information.
# Gaze Indexer
Gaze Indexer is an open-source and modular indexing client for Bitcoin meta-protocols with **Unified Consistent APIs** across fungible token protocols.
Gaze Indexer is built with **modularity** in mind, allowing users to run all modules in a single monolithic instance with one command, or as a distributed cluster of microservices.
Gaze Indexer serves as a foundation for building ANY meta-protocol indexer, with efficient data fetching, reorg detection, and a database migration tool.
This lets developers focus on what **truly** matters: meta-protocol indexing logic. New meta-protocols can be added easily by implementing new modules.
- [Modules](#modules)
- [1. Runes](#1-runes)
- [Installation](#installation)
- [Prerequisites](#prerequisites)
- [1. Hardware Requirements](#1-hardware-requirements)
- [2. Prepare Bitcoin Core RPC server.](#2-prepare-bitcoin-core-rpc-server)
- [3. Prepare database.](#3-prepare-database)
- [4. Prepare `config.yaml` file.](#4-prepare-configyaml-file)
- [Install with Docker (recommended)](#install-with-docker-recommended)
- [Install from source](#install-from-source)
## Modules
### 1. Runes
The Runes Indexer is our first meta-protocol indexer. It indexes Runes states, transactions, runestones, and balances from Bitcoin transactions.
It comes with a set of APIs for querying historical Runes data. See our [API Reference](https://api-docs.gaze.network) for full details.
## Installation
### Prerequisites
#### 1. Hardware Requirements
Each module has different hardware requirements.
| Module | CPU | RAM |
| ------ | --------- | ---- |
| Runes | 0.5 cores | 1 GB |
#### 2. Prepare Bitcoin Core RPC server.
Gaze Indexer needs to fetch transaction data from a Bitcoin Core RPC server, either self-hosted or through managed providers such as QuickNode.
To host a Bitcoin Core node yourself, see https://bitcoin.org/en/full-node.
#### 3. Prepare database.
Gaze Indexer has first-class support for PostgreSQL. If you want to use other databases, you can implement your own database repository that satisfies each module's Data Gateway interface.
Here are the minimum database disk space requirements for each module.
| Module | Database Storage (current) | Database Storage (in 1 year) |
| ------ | -------------------------- | ---------------------------- |
| Runes | 10 GB | 150 GB |
#### 4. Prepare `config.yaml` file.
```yaml
# config.yaml
logger:
output: TEXT # Output format for logs. current supported formats: "TEXT" | "JSON" | "GCP"
debug: false
# Network to run the indexer on. Current supported networks: "mainnet" | "testnet"
network: mainnet
# Bitcoin Core RPC configuration options.
bitcoin_node:
host: "" # [Required] Host of Bitcoin Core RPC (without https://)
user: "" # Username to authenticate with Bitcoin Core RPC
pass: "" # Password to authenticate with Bitcoin Core RPC
disable_tls: false # Set to true to disable tls
# Block reporting configuration options. See Block Reporting section for more details.
reporting:
disabled: false # Set to true to disable block reporting to Gaze Network. Default is false.
base_url: "https://indexer.api.gaze.network" # Defaults to "https://indexer.api.gaze.network" if left empty
name: "" # [Required if not disabled] Name of this indexer to show on the Gaze Network dashboard
website_url: "" # Public website URL to show on the dashboard. Can be left empty.
indexer_api_url: "" # Public url to access this indexer's API. Can be left empty if you want to keep your indexer private.
# HTTP server configuration options.
http_server:
port: 8080 # Port to run the HTTP server on for modules with HTTP API handlers.
# Meta-protocol modules configuration options.
modules:
# Configuration options for Runes module. Can be removed if not used.
runes:
database: "postgres" # Database to store Runes data. current supported databases: "postgres"
datasource: "bitcoin-node" # Data source to be used for Bitcoin data. current supported data sources: "bitcoin-node".
api_handlers: # API handlers to enable. current supported handlers: "http"
- http
postgres:
host: "localhost"
port: 5432
user: "postgres"
password: "password"
db_name: "postgres"
# url: "postgres://postgres:password@localhost:5432/postgres?sslmode=prefer" # [Optional] This will override other database credentials above.
```
### Install with Docker (recommended)
We will use `docker-compose` for this installation guide. Make sure the `docker-compose.yaml` file is in the same directory as the `config.yaml` file.
```yaml
# docker-compose.yaml
services:
gaze-indexer:
image: ghcr.io/gaze-network/gaze-indexer:v0.2.1
container_name: gaze-indexer
restart: unless-stopped
ports:
- 8080:8080 # Expose HTTP server port to host
volumes:
- "./config.yaml:/app/config.yaml" # mount config.yaml file to the container as "/app/config.yaml"
command: ["/app/main", "run", "--modules", "runes"] # Put module flags after "run" commands to select which modules to run.
```
### Install from source
1. Install `Go` version 1.22 or higher. See the Go installation guide [here](https://go.dev/doc/install).
2. Clone this repository.
```bash
git clone https://github.com/gaze-network/gaze-indexer.git
cd gaze-indexer
```
3. Build the main binary.
```bash
# Fetch dependencies
go mod download
# Build the main binary
go build -o gaze main.go
```
4. Run database migrations with the `migrate` command and module flags.
```bash
./gaze migrate up --runes --database postgres://postgres:password@localhost:5432/postgres
```
5. Start the indexer with the `run` command and module flags.
```bash
./gaze run --modules runes
```
If the `config.yaml` file is not located at `./app/config.yaml`, use the `--config` flag to specify the path to your `config.yaml` file.
```bash
./gaze run --modules runes --config /path/to/config.yaml
```
## Translations
- [English](../README.md)

go.mod

@@ -6,12 +6,12 @@ require (
github.com/Cleverse/go-utilities/utils v0.0.0-20240119201306-d71eb577ef11
github.com/btcsuite/btcd v0.24.0
github.com/btcsuite/btcd/btcutil v1.1.5
github.com/btcsuite/btcd/btcutil/psbt v1.1.9
github.com/btcsuite/btcd/chaincfg/chainhash v1.1.0
github.com/cockroachdb/errors v1.11.1
github.com/gaze-network/uint128 v1.3.0
github.com/gofiber/fiber/v2 v2.52.4
github.com/golang-migrate/migrate/v4 v4.17.1
github.com/hashicorp/golang-lru/v2 v2.0.7
github.com/jackc/pgx/v5 v5.5.5
github.com/mcosta74/pgx-slog v0.3.0
github.com/planxnx/concurrent-stream v0.1.5
@@ -21,23 +21,27 @@ require (
github.com/spf13/cobra v1.8.0
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.18.2
github.com/stretchr/testify v1.8.4
github.com/stretchr/testify v1.9.0
github.com/valyala/fasthttp v1.51.0
go.uber.org/automaxprocs v1.5.3
golang.org/x/sync v0.5.0
golang.org/x/sync v0.7.0
google.golang.org/protobuf v1.33.0
)
require github.com/stretchr/objx v0.5.2 // indirect
require (
github.com/andybalholm/brotli v1.0.5 // indirect
github.com/btcsuite/btcd/btcec/v2 v2.1.3 // indirect
github.com/bitonicnl/verify-signed-message v0.7.1
github.com/btcsuite/btcd/btcec/v2 v2.3.3
github.com/btcsuite/btclog v0.0.0-20170628155309-84c8d2346e9f // indirect
github.com/btcsuite/go-socks v0.0.0-20170105172521-4720035b7bfd // indirect
github.com/btcsuite/websocket v0.0.0-20150119174127-31079b680792 // indirect
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b // indirect
github.com/cockroachdb/redact v1.1.5 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/decred/dcrd/crypto/blake256 v1.0.0 // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1 // indirect
github.com/decred/dcrd/crypto/blake256 v1.0.1 // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.3.0
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/getsentry/sentry-go v0.18.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
@@ -75,10 +79,10 @@ require (
github.com/valyala/tcplisten v1.0.0 // indirect
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.9.0 // indirect
golang.org/x/crypto v0.20.0 // indirect
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 // indirect
golang.org/x/sys v0.17.0 // indirect
golang.org/x/text v0.14.0 // indirect
golang.org/x/crypto v0.23.0 // indirect
golang.org/x/exp v0.0.0-20240525044651-4c93da0ed11d // indirect
golang.org/x/sys v0.20.0 // indirect
golang.org/x/text v0.15.0 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

go.sum

@@ -7,18 +7,23 @@ github.com/Microsoft/go-winio v0.6.1/go.mod h1:LRdKpFKfdobln8UmuiYcKPot9D2v6svN5
github.com/aead/siphash v1.0.1/go.mod h1:Nywa3cDsYNNK3gaciGTWPwHt0wlpNV15vwmswBAUSII=
github.com/andybalholm/brotli v1.0.5 h1:8uQZIdzKmjc/iuPu7O2ioW48L81FgatrcpfFmiq/cCs=
github.com/andybalholm/brotli v1.0.5/go.mod h1:fO7iG3H7G2nSZ7m0zPUDn85XEX2GTukHGRSepvi9Eig=
github.com/bitonicnl/verify-signed-message v0.7.1 h1:1Qku9k9WgzobjqBY7tT3CLjWxtTJZxkYNhOV6QeCTjY=
github.com/bitonicnl/verify-signed-message v0.7.1/go.mod h1:PR60twfJIaHEo9Wb6eJBh8nBHEZIQQx8CvRwh0YmEPk=
github.com/btcsuite/btcd v0.20.1-beta/go.mod h1:wVuoA8VJLEcwgqHBwHmzLRazpKxTv13Px/pDuV7OomQ=
github.com/btcsuite/btcd v0.22.0-beta.0.20220111032746-97732e52810c/go.mod h1:tjmYdS6MLJ5/s0Fj4DbLgSbDHbEqLJrtnHecBFkdz5M=
github.com/btcsuite/btcd v0.23.5-0.20231215221805-96c9fd8078fd/go.mod h1:nm3Bko6zh6bWP60UxwoT5LzdGJsQJaPo6HjduXq9p6A=
github.com/btcsuite/btcd v0.24.0 h1:gL3uHE/IaFj6fcZSu03SvqPMSx7s/dPzfpG/atRwWdo=
github.com/btcsuite/btcd v0.24.0/go.mod h1:K4IDc1593s8jKXIF7yS7yCTSxrknB9z0STzc2j6XgE4=
github.com/btcsuite/btcd/btcec/v2 v2.1.0/go.mod h1:2VzYrv4Gm4apmbVVsSq5bqf1Ec8v56E48Vt0Y/umPgA=
github.com/btcsuite/btcd/btcec/v2 v2.1.3 h1:xM/n3yIhHAhHy04z4i43C8p4ehixJZMsnrVJkgl+MTE=
github.com/btcsuite/btcd/btcec/v2 v2.1.3/go.mod h1:ctjw4H1kknNJmRN4iP1R7bTQ+v3GJkZBd6mui8ZsAZE=
github.com/btcsuite/btcd/btcec/v2 v2.3.3 h1:6+iXlDKE8RMtKsvK0gshlXIuPbyWM/h84Ensb7o3sC0=
github.com/btcsuite/btcd/btcec/v2 v2.3.3/go.mod h1:zYzJ8etWJQIv1Ogk7OzpWjowwOdXY1W/17j2MW85J04=
github.com/btcsuite/btcd/btcutil v1.0.0/go.mod h1:Uoxwv0pqYWhD//tfTiipkxNfdhG9UrLwaeswfjfdF0A=
github.com/btcsuite/btcd/btcutil v1.1.0/go.mod h1:5OapHB7A2hBBWLm48mmw4MOHNJCcUBTwmWH/0Jn8VHE=
github.com/btcsuite/btcd/btcutil v1.1.5 h1:+wER79R5670vs/ZusMTF1yTcRYE5GUsFbdjdisflzM8=
github.com/btcsuite/btcd/btcutil v1.1.5/go.mod h1:PSZZ4UitpLBWzxGd5VGOrLnmOjtPP/a6HaFo12zMs00=
github.com/btcsuite/btcd/btcutil/psbt v1.1.9 h1:UmfOIiWMZcVMOLaN+lxbbLSuoINGS1WmK1TZNI0b4yk=
github.com/btcsuite/btcd/btcutil/psbt v1.1.9/go.mod h1:ehBEvU91lxSlXtA+zZz3iFYx7Yq9eqnKx4/kSrnsvMY=
github.com/btcsuite/btcd/chaincfg/chainhash v1.0.0/go.mod h1:7SFka0XMvUgj3hfZtydOrQY2mwhPclbT2snogU7SQQc=
github.com/btcsuite/btcd/chaincfg/chainhash v1.0.1/go.mod h1:7SFka0XMvUgj3hfZtydOrQY2mwhPclbT2snogU7SQQc=
github.com/btcsuite/btcd/chaincfg/chainhash v1.1.0 h1:59Kx4K6lzOW5w6nFlA0v5+lk/6sjybR934QNHSJZPTQ=
@@ -50,10 +55,12 @@ github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/decred/dcrd/crypto/blake256 v1.0.0 h1:/8DMNYp9SGi5f0w7uCm6d6M4OU2rGFK09Y2A4Xv7EE0=
github.com/decred/dcrd/crypto/blake256 v1.0.0/go.mod h1:sQl2p6Y26YV+ZOcSTP6thNdn47hh8kt6rqSlvmrXFAc=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1 h1:YLtO71vCjJRCBcrPMtQ9nqBsqpA1m5sE92cU+pd5Mcc=
github.com/decred/dcrd/crypto/blake256 v1.0.1 h1:7PltbUIQB7u/FfZ39+DGa/ShuMyJ5ilcvdfma9wOH6Y=
github.com/decred/dcrd/crypto/blake256 v1.0.1/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1/go.mod h1:hyedUtir6IdtD/7lIxGeCxkaw7y45JueMRL4DIyJDKs=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.3.0 h1:rpfIENRNNilwHwZeG5+P150SMrnNEcHYvcCuK6dPZSg=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.3.0/go.mod h1:v57UDF4pDQJcEfFUCRop3lJL149eHGSe9Jvczhzjo/0=
github.com/decred/dcrd/lru v1.0.0/go.mod h1:mxKOwFd7lFjN2GZYsiz/ecgqR6kkYAl+0pz0tEMk218=
github.com/dhui/dktest v0.4.1 h1:/w+IWuDXVymg3IrRJCHHOkMK10m9aNVMOyD0X12YVTg=
github.com/dhui/dktest v0.4.1/go.mod h1:DdOqcUpL7vgyP4GlF3X3w7HbSlz8cEQzwewPveYEQbA=
@@ -92,13 +99,12 @@ github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrU
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/uuid v1.5.0 h1:1p67kYwdtXjb0gL0BPiP1Av9wiZPo5A8z2cWkTZ+eyU=
github.com/google/uuid v1.5.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
@@ -108,8 +114,6 @@ github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
@@ -217,15 +221,17 @@ github.com/spf13/viper v1.18.2/go.mod h1:EKmWIqdnk5lOcmR72yw6hS+8OPYcwD0jteitLMV
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 h1:epCh84lMvA70Z7CTTCmYQn2CKbY8j86K7/FAIr141uY=
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7/go.mod h1:q4W45IWZaF22tdD+VEXcAWRA037jwmWEB5VWYORlTpc=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
@@ -247,14 +253,14 @@ golang.org/x/crypto v0.0.0-20170930174604-9419663f5a44/go.mod h1:6SG95UA2DQfeDnf
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.20.0 h1:jmAMJJZXr5KiCw05dfYK9QnqaqKLYXijU23lsEdcQqg=
golang.org/x/crypto v0.20.0/go.mod h1:Xwo95rrVNIoSMx9wa1JroENMToLWn3RNVrTBpLHgZPQ=
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 h1:GoHiUyI/Tp2nVkLI2mCxVkOjsbSXD66ic0XW0js0R9g=
golang.org/x/exp v0.0.0-20230905200255-921286631fa9/go.mod h1:S2oDrQGGwySpoQPVqRShND87VCbxmc6bL1Yd2oYrm6k=
golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/exp v0.0.0-20240525044651-4c93da0ed11d h1:N0hmiNbwsSNwHBAvR3QB5w25pUwH4tK0Y/RltD1j1h4=
golang.org/x/exp v0.0.0-20240525044651-4c93da0ed11d/go.mod h1:XtvwrStGgqGPLc4cjQfWqZHG1YFdYs6swckp8vpsjnc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.12.0 h1:rmsUpXtvNzj340zd98LZ4KntptpfRHwpFOHG188oHXc=
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.17.0 h1:zY54UmvipHiNd+pm+m0x9KhZ9hl1/7QNMyxXbc6ICqA=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/net v0.0.0-20180719180050-a680a1efc54d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
@@ -269,8 +275,8 @@ golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.5.0 h1:60k92dhOjHxJkrqnwsfl8KuaHbn/5dl0lUPUklKo3qE=
golang.org/x/sync v0.5.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -283,19 +289,19 @@ golang.org/x/sys v0.0.0-20200814200057-3d37ad5750ed/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0 h1:25cE3gD+tdBA7lp7QfhuV+rJiE9YXTcS3VG1SqssI/Y=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.13.0 h1:Iey4qkscZuv0VvIt8E0neZjtPVQFSc870HQ448QgEmQ=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/tools v0.21.0 h1:qc0xYgIbsSDt9EyWz05J5wfa7LOVW0YTLOXrqdLAWIw=
golang.org/x/tools v0.21.0/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -306,6 +312,8 @@ google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQ
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=


@@ -8,10 +8,12 @@ import (
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/common"
brc20config "github.com/gaze-network/indexer-network/modules/brc20/config"
nodesaleconfig "github.com/gaze-network/indexer-network/modules/nodesale/config"
runesconfig "github.com/gaze-network/indexer-network/modules/runes/config"
"github.com/gaze-network/indexer-network/pkg/logger"
"github.com/gaze-network/indexer-network/pkg/logger/slogx"
"github.com/gaze-network/indexer-network/pkg/middleware/requestcontext"
"github.com/gaze-network/indexer-network/pkg/middleware/requestlogger"
"github.com/gaze-network/indexer-network/pkg/reportingclient"
"github.com/spf13/pflag"
"github.com/spf13/viper"
@@ -60,12 +62,14 @@ type BitcoinNodeClient struct {
}
type Modules struct {
Runes runesconfig.Config `mapstructure:"runes"`
BRC20 brc20config.Config `mapstructure:"brc20"`
Runes runesconfig.Config `mapstructure:"runes"`
NodeSale nodesaleconfig.Config `mapstructure:"nodesale"`
}
type HTTPServerConfig struct {
Port int `mapstructure:"port"`
Port int `mapstructure:"port"`
Logger requestlogger.Config `mapstructure:"logger"`
RequestIP requestcontext.WithClientIPConfig `mapstructure:"requestip"`
}
// Parse parses the configuration from environment variables


@@ -1,71 +0,0 @@
package brc20
import (
"context"
"strings"
"github.com/btcsuite/btcd/rpcclient"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/common/errs"
"github.com/gaze-network/indexer-network/core/datasources"
"github.com/gaze-network/indexer-network/core/indexer"
"github.com/gaze-network/indexer-network/core/types"
"github.com/gaze-network/indexer-network/internal/config"
"github.com/gaze-network/indexer-network/internal/postgres"
"github.com/gaze-network/indexer-network/modules/brc20/internal/datagateway"
brc20postgres "github.com/gaze-network/indexer-network/modules/brc20/internal/repository/postgres"
"github.com/gaze-network/indexer-network/pkg/btcclient"
"github.com/samber/do/v2"
)
func New(injector do.Injector) (indexer.IndexerWorker, error) {
ctx := do.MustInvoke[context.Context](injector)
conf := do.MustInvoke[config.Config](injector)
// reportingClient := do.MustInvoke[*reportingclient.ReportingClient](injector)
cleanupFuncs := make([]func(context.Context) error, 0)
var brc20Dg datagateway.BRC20DataGateway
var indexerInfoDg datagateway.IndexerInfoDataGateway
switch strings.ToLower(conf.Modules.BRC20.Database) {
case "postgresql", "postgres", "pg":
pg, err := postgres.NewPool(ctx, conf.Modules.BRC20.Postgres)
if err != nil {
if errors.Is(err, errs.InvalidArgument) {
return nil, errors.Wrap(err, "Invalid Postgres configuration for indexer")
}
return nil, errors.Wrap(err, "can't create Postgres connection pool")
}
cleanupFuncs = append(cleanupFuncs, func(ctx context.Context) error {
pg.Close()
return nil
})
brc20Repo := brc20postgres.NewRepository(pg)
brc20Dg = brc20Repo
indexerInfoDg = brc20Repo
default:
return nil, errors.Wrapf(errs.Unsupported, "%q database for indexer is not supported", conf.Modules.BRC20.Database)
}
var bitcoinDatasource datasources.Datasource[*types.Block]
var bitcoinClient btcclient.Contract
switch strings.ToLower(conf.Modules.BRC20.Datasource) {
case "bitcoin-node":
btcClient := do.MustInvoke[*rpcclient.Client](injector)
bitcoinNodeDatasource := datasources.NewBitcoinNode(btcClient)
bitcoinDatasource = bitcoinNodeDatasource
bitcoinClient = bitcoinNodeDatasource
default:
return nil, errors.Wrapf(errs.Unsupported, "%q datasource is not supported", conf.Modules.BRC20.Datasource)
}
processor, err := NewProcessor(brc20Dg, indexerInfoDg, bitcoinClient, conf.Network, cleanupFuncs)
if err != nil {
return nil, errors.WithStack(err)
}
if err := processor.VerifyStates(ctx); err != nil {
return nil, errors.WithStack(err)
}
indexer := indexer.New(processor, bitcoinDatasource)
return indexer, nil
}


@@ -1,10 +0,0 @@
package config
import "github.com/gaze-network/indexer-network/internal/postgres"
type Config struct {
Datasource string `mapstructure:"datasource"` // Datasource to fetch bitcoin data for Meta-Protocol e.g. `bitcoin-node`
Database string `mapstructure:"database"` // Database to store data.
APIHandlers []string `mapstructure:"api_handlers"` // List of API handlers to enable. (e.g. `http`)
Postgres postgres.Config `mapstructure:"postgres"`
}


@@ -1,25 +0,0 @@
package brc20
import (
"github.com/Cleverse/go-utilities/utils"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/gaze-network/indexer-network/common"
"github.com/gaze-network/indexer-network/core/types"
)
const (
ClientVersion = "v0.0.1"
DBVersion = 1
EventHashVersion = 1
)
var startingBlockHeader = map[common.Network]types.BlockHeader{
common.NetworkMainnet: {
Height: 767429,
Hash: *utils.Must(chainhash.NewHashFromStr("00000000000000000002b35aef66eb15cd2b232a800f75a2f25cedca4cfe52c4")),
},
common.NetworkTestnet: {
Height: 2413342,
Hash: *utils.Must(chainhash.NewHashFromStr("00000000000022e97030b143af785de812f836dd0651b6ac2b7dd9e90dc9abf9")),
},
}


@@ -1,17 +0,0 @@
BEGIN;
DROP TABLE IF EXISTS "brc20_indexer_states";
DROP TABLE IF EXISTS "brc20_indexed_blocks";
DROP TABLE IF EXISTS "brc20_processor_stats";
DROP TABLE IF EXISTS "brc20_tick_entries";
DROP TABLE IF EXISTS "brc20_tick_entry_states";
DROP TABLE IF EXISTS "brc20_event_deploys";
DROP TABLE IF EXISTS "brc20_event_mints";
DROP TABLE IF EXISTS "brc20_event_inscribe_transfers";
DROP TABLE IF EXISTS "brc20_event_transfer_transfers";
DROP TABLE IF EXISTS "brc20_balances";
DROP TABLE IF EXISTS "brc20_inscription_entries";
DROP TABLE IF EXISTS "brc20_inscription_entry_states";
DROP TABLE IF EXISTS "brc20_inscription_transfers";
COMMIT;


@@ -1,189 +0,0 @@
BEGIN;
-- Indexer Client Information
CREATE TABLE IF NOT EXISTS "brc20_indexer_states" (
"id" BIGSERIAL PRIMARY KEY,
"client_version" TEXT NOT NULL,
"network" TEXT NOT NULL,
"db_version" INT NOT NULL,
"event_hash_version" INT NOT NULL,
"created_at" TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS brc20_indexer_state_created_at_idx ON "brc20_indexer_states" USING BTREE ("created_at" DESC);
-- BRC20 data
CREATE TABLE IF NOT EXISTS "brc20_indexed_blocks" (
"height" INT NOT NULL PRIMARY KEY,
"hash" TEXT NOT NULL,
"event_hash" TEXT NOT NULL,
"cumulative_event_hash" TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS "brc20_processor_stats" (
"block_height" INT NOT NULL PRIMARY KEY,
"cursed_inscription_count" INT NOT NULL,
"blessed_inscription_count" INT NOT NULL,
"lost_sats" BIGINT NOT NULL
);
CREATE TABLE IF NOT EXISTS "brc20_tick_entries" (
"tick" TEXT NOT NULL PRIMARY KEY, -- lowercase of original_tick
"original_tick" TEXT NOT NULL,
"total_supply" DECIMAL NOT NULL,
"decimals" SMALLINT NOT NULL,
"limit_per_mint" DECIMAL NOT NULL,
"is_self_mint" BOOLEAN NOT NULL,
"deploy_inscription_id" TEXT NOT NULL,
"deployed_at" TIMESTAMP NOT NULL,
"deployed_at_height" INT NOT NULL
);
CREATE TABLE IF NOT EXISTS "brc20_tick_entry_states" (
"tick" TEXT NOT NULL,
"block_height" INT NOT NULL,
"minted_amount" DECIMAL NOT NULL,
"burned_amount" DECIMAL NOT NULL,
"completed_at" TIMESTAMP,
"completed_at_height" INT,
PRIMARY KEY ("tick", "block_height")
);
CREATE TABLE IF NOT EXISTS "brc20_event_deploys" (
"id" BIGINT PRIMARY KEY NOT NULL,
"inscription_id" TEXT NOT NULL,
"inscription_number" BIGINT NOT NULL,
"tick" TEXT NOT NULL, -- lowercase of original_tick
"original_tick" TEXT NOT NULL,
"tx_hash" TEXT NOT NULL,
"block_height" INT NOT NULL,
"tx_index" INT NOT NULL,
"timestamp" TIMESTAMP NOT NULL,
"pkscript" TEXT NOT NULL,
"satpoint" TEXT NOT NULL,
"total_supply" DECIMAL NOT NULL,
"decimals" SMALLINT NOT NULL,
"limit_per_mint" DECIMAL NOT NULL,
"is_self_mint" BOOLEAN NOT NULL
);
CREATE INDEX IF NOT EXISTS brc20_event_deploys_block_height_idx ON "brc20_event_deploys" USING BTREE ("block_height");
CREATE TABLE IF NOT EXISTS "brc20_event_mints" (
"id" BIGINT PRIMARY KEY NOT NULL,
"inscription_id" TEXT NOT NULL,
"inscription_number" BIGINT NOT NULL,
"tick" TEXT NOT NULL, -- lowercase of original_tick
"original_tick" TEXT NOT NULL,
"tx_hash" TEXT NOT NULL,
"block_height" INT NOT NULL,
"tx_index" INT NOT NULL,
"timestamp" TIMESTAMP NOT NULL,
"pkscript" TEXT NOT NULL,
"satpoint" TEXT NOT NULL,
"amount" DECIMAL NOT NULL,
"parent_id" TEXT -- requires parent deploy inscription id if minting a self-mint ticker
);
CREATE INDEX IF NOT EXISTS brc20_event_mints_block_height_idx ON "brc20_event_mints" USING BTREE ("block_height");
CREATE TABLE IF NOT EXISTS "brc20_event_inscribe_transfers" (
"id" BIGINT PRIMARY KEY NOT NULL,
"inscription_id" TEXT NOT NULL,
"inscription_number" BIGINT NOT NULL,
"tick" TEXT NOT NULL, -- lowercase of original_tick
"original_tick" TEXT NOT NULL,
"tx_hash" TEXT NOT NULL,
"block_height" INT NOT NULL,
"tx_index" INT NOT NULL,
"timestamp" TIMESTAMP NOT NULL,
"pkscript" TEXT NOT NULL,
"satpoint" TEXT NOT NULL,
"output_index" INT NOT NULL,
"sats_amount" BIGINT NOT NULL,
"amount" DECIMAL NOT NULL
);
CREATE INDEX IF NOT EXISTS brc20_event_inscribe_transfers_block_height_idx ON "brc20_event_inscribe_transfers" USING BTREE ("block_height");
CREATE INDEX IF NOT EXISTS brc20_event_inscribe_transfers_inscription_id_idx ON "brc20_event_inscribe_transfers" USING BTREE ("inscription_id"); -- used for validating transfer transfer events
CREATE TABLE IF NOT EXISTS "brc20_event_transfer_transfers" (
"id" BIGINT PRIMARY KEY NOT NULL,
"inscription_id" TEXT NOT NULL,
"inscription_number" BIGINT NOT NULL,
"tick" TEXT NOT NULL, -- lowercase of original_tick
"original_tick" TEXT NOT NULL,
"tx_hash" TEXT NOT NULL,
"block_height" INT NOT NULL,
"tx_index" INT NOT NULL,
"timestamp" TIMESTAMP NOT NULL,
"from_pkscript" TEXT NOT NULL,
"from_satpoint" TEXT NOT NULL,
"from_input_index" INT NOT NULL,
"to_pkscript" TEXT NOT NULL,
"to_satpoint" TEXT NOT NULL,
"to_output_index" INT NOT NULL,
"spent_as_fee" BOOLEAN NOT NULL,
"amount" DECIMAL NOT NULL
);
CREATE INDEX IF NOT EXISTS brc20_event_transfer_transfers_block_height_idx ON "brc20_event_transfer_transfers" USING BTREE ("block_height");
CREATE TABLE IF NOT EXISTS "brc20_balances" (
"pkscript" TEXT NOT NULL,
"block_height" INT NOT NULL,
"tick" TEXT NOT NULL,
"overall_balance" DECIMAL NOT NULL,
"available_balance" DECIMAL NOT NULL,
PRIMARY KEY ("pkscript", "tick", "block_height")
);
CREATE TABLE IF NOT EXISTS "brc20_inscription_entries" (
"id" TEXT NOT NULL PRIMARY KEY,
"number" BIGINT NOT NULL,
"sequence_number" BIGINT NOT NULL,
"delegate" TEXT, -- delegate inscription id
"metadata" BYTEA,
"metaprotocol" TEXT,
"parents" TEXT[], -- parent inscription id, 0.14 only supports 1 parent per inscription
"pointer" BIGINT,
"content" JSONB, -- can use jsonb because we only track brc20 inscriptions
"content_encoding" TEXT,
"content_type" TEXT,
"cursed" BOOLEAN NOT NULL, -- inscriptions after jubilee are no longer cursed in 0.14, which affects inscription number
"cursed_for_brc20" BOOLEAN NOT NULL, -- however, inscriptions that would normally be cursed are still considered cursed for brc20
"created_at" TIMESTAMP NOT NULL,
"created_at_height" INT NOT NULL
);
CREATE INDEX IF NOT EXISTS brc20_inscription_entries_id_number_idx ON "brc20_inscription_entries" USING BTREE ("id", "number");
CREATE TABLE IF NOT EXISTS "brc20_inscription_entry_states" (
"id" TEXT NOT NULL,
"block_height" INT NOT NULL,
"transfer_count" INT NOT NULL,
PRIMARY KEY ("id", "block_height")
);
CREATE TABLE IF NOT EXISTS "brc20_inscription_transfers" (
"inscription_id" TEXT NOT NULL,
"block_height" INT NOT NULL,
"tx_index" INT NOT NULL,
"tx_hash" TEXT NOT NULL,
"from_input_index" INT NOT NULL,
"old_satpoint_tx_hash" TEXT,
"old_satpoint_out_idx" INT,
"old_satpoint_offset" BIGINT,
"new_satpoint_tx_hash" TEXT,
"new_satpoint_out_idx" INT,
"new_satpoint_offset" BIGINT,
"new_pkscript" TEXT NOT NULL,
"new_output_value" BIGINT NOT NULL,
"sent_as_fee" BOOLEAN NOT NULL,
"transfer_count" INT NOT NULL,
PRIMARY KEY ("inscription_id", "block_height", "tx_index")
);
CREATE INDEX IF NOT EXISTS brc20_inscription_transfers_block_height_tx_index_idx ON "brc20_inscription_transfers" USING BTREE ("block_height", "tx_index");
CREATE INDEX IF NOT EXISTS brc20_inscription_transfers_new_satpoint_idx ON "brc20_inscription_transfers" USING BTREE ("new_satpoint_tx_hash", "new_satpoint_out_idx", "new_satpoint_offset");
COMMIT;


@@ -1,142 +0,0 @@
-- name: GetLatestIndexedBlock :one
SELECT * FROM "brc20_indexed_blocks" ORDER BY "height" DESC LIMIT 1;
-- name: GetIndexedBlockByHeight :one
SELECT * FROM "brc20_indexed_blocks" WHERE "height" = $1;
-- name: GetLatestProcessorStats :one
SELECT * FROM "brc20_processor_stats" ORDER BY "block_height" DESC LIMIT 1;
-- name: GetInscriptionTransfersInOutPoints :many
SELECT "it".*, "ie"."content" FROM (
SELECT
unnest(@tx_hash_arr::text[]) AS "tx_hash",
unnest(@tx_out_idx_arr::int[]) AS "tx_out_idx"
) "inputs"
INNER JOIN "brc20_inscription_transfers" it ON "inputs"."tx_hash" = "it"."new_satpoint_tx_hash" AND "inputs"."tx_out_idx" = "it"."new_satpoint_out_idx"
LEFT JOIN "brc20_inscription_entries" ie ON "it"."inscription_id" = "ie"."id";
-- name: GetInscriptionEntriesByIds :many
WITH "states" AS (
-- select latest state
SELECT DISTINCT ON ("id") * FROM "brc20_inscription_entry_states" WHERE "id" = ANY(@inscription_ids::text[]) ORDER BY "id", "block_height" DESC
)
SELECT * FROM "brc20_inscription_entries"
LEFT JOIN "states" ON "brc20_inscription_entries"."id" = "states"."id"
WHERE "brc20_inscription_entries"."id" = ANY(@inscription_ids::text[]);
-- name: GetTickEntriesByTicks :many
WITH "states" AS (
-- select latest state
SELECT DISTINCT ON ("tick") * FROM "brc20_tick_entry_states" WHERE "tick" = ANY(@ticks::text[]) ORDER BY "tick", "block_height" DESC
)
SELECT * FROM "brc20_tick_entries"
LEFT JOIN "states" ON "brc20_tick_entries"."tick" = "states"."tick"
WHERE "brc20_tick_entries"."tick" = ANY(@ticks::text[]);
-- name: GetInscriptionNumbersByIds :many
SELECT id, number FROM "brc20_inscription_entries" WHERE "id" = ANY(@inscription_ids::text[]);
-- name: GetInscriptionParentsByIds :many
SELECT id, parents FROM "brc20_inscription_entries" WHERE "id" = ANY(@inscription_ids::text[]);
-- name: GetLatestEventIds :one
WITH "latest_deploy_id" AS (
SELECT "id" FROM "brc20_event_deploys" ORDER BY "id" DESC LIMIT 1
),
"latest_mint_id" AS (
SELECT "id" FROM "brc20_event_mints" ORDER BY "id" DESC LIMIT 1
),
"latest_inscribe_transfer_id" AS (
SELECT "id" FROM "brc20_event_inscribe_transfers" ORDER BY "id" DESC LIMIT 1
),
"latest_transfer_transfer_id" AS (
SELECT "id" FROM "brc20_event_transfer_transfers" ORDER BY "id" DESC LIMIT 1
)
SELECT
COALESCE((SELECT "id" FROM "latest_deploy_id"), -1) AS "event_deploy_id",
COALESCE((SELECT "id" FROM "latest_mint_id"), -1) AS "event_mint_id",
COALESCE((SELECT "id" FROM "latest_inscribe_transfer_id"), -1) AS "event_inscribe_transfer_id",
COALESCE((SELECT "id" FROM "latest_transfer_transfer_id"), -1) AS "event_transfer_transfer_id";
-- name: GetBalancesBatchAtHeight :many
SELECT DISTINCT ON ("brc20_balances"."pkscript", "brc20_balances"."tick") "brc20_balances".* FROM "brc20_balances"
INNER JOIN (
SELECT
unnest(@pkscript_arr::text[]) AS "pkscript",
unnest(@tick_arr::text[]) AS "tick"
) "queries" ON "brc20_balances"."pkscript" = "queries"."pkscript" AND "brc20_balances"."tick" = "queries"."tick" AND "brc20_balances"."block_height" <= @block_height
ORDER BY "brc20_balances"."pkscript", "brc20_balances"."tick", "block_height" DESC;
-- name: GetEventInscribeTransfersByInscriptionIds :many
SELECT * FROM "brc20_event_inscribe_transfers" WHERE "inscription_id" = ANY(@inscription_ids::text[]);
-- name: CreateIndexedBlock :exec
INSERT INTO "brc20_indexed_blocks" ("height", "hash", "event_hash", "cumulative_event_hash") VALUES ($1, $2, $3, $4);
-- name: CreateProcessorStats :exec
INSERT INTO "brc20_processor_stats" ("block_height", "cursed_inscription_count", "blessed_inscription_count", "lost_sats") VALUES ($1, $2, $3, $4);
-- name: CreateTickEntries :batchexec
INSERT INTO "brc20_tick_entries" ("tick", "original_tick", "total_supply", "decimals", "limit_per_mint", "is_self_mint", "deploy_inscription_id", "deployed_at", "deployed_at_height") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9);
-- name: CreateTickEntryStates :batchexec
INSERT INTO "brc20_tick_entry_states" ("tick", "block_height", "minted_amount", "burned_amount", "completed_at", "completed_at_height") VALUES ($1, $2, $3, $4, $5, $6);
-- name: CreateInscriptionEntries :batchexec
INSERT INTO "brc20_inscription_entries" ("id", "number", "sequence_number", "delegate", "metadata", "metaprotocol", "parents", "pointer", "content", "content_encoding", "content_type", "cursed", "cursed_for_brc20", "created_at", "created_at_height") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15);
-- name: CreateInscriptionEntryStates :batchexec
INSERT INTO "brc20_inscription_entry_states" ("id", "block_height", "transfer_count") VALUES ($1, $2, $3);
-- name: CreateInscriptionTransfers :batchexec
INSERT INTO "brc20_inscription_transfers" ("inscription_id", "block_height", "tx_index", "tx_hash", "from_input_index", "old_satpoint_tx_hash", "old_satpoint_out_idx", "old_satpoint_offset", "new_satpoint_tx_hash", "new_satpoint_out_idx", "new_satpoint_offset", "new_pkscript", "new_output_value", "sent_as_fee", "transfer_count") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15);
-- name: CreateEventDeploys :batchexec
INSERT INTO "brc20_event_deploys" ("inscription_id", "inscription_number", "tick", "original_tick", "tx_hash", "block_height", "tx_index", "timestamp", "pkscript", "satpoint", "total_supply", "decimals", "limit_per_mint", "is_self_mint") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14);
-- name: CreateEventMints :batchexec
INSERT INTO "brc20_event_mints" ("inscription_id", "inscription_number", "tick", "original_tick", "tx_hash", "block_height", "tx_index", "timestamp", "pkscript", "satpoint", "amount", "parent_id") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12);
-- name: CreateEventInscribeTransfers :batchexec
INSERT INTO "brc20_event_inscribe_transfers" ("inscription_id", "inscription_number", "tick", "original_tick", "tx_hash", "block_height", "tx_index", "timestamp", "pkscript", "satpoint", "output_index", "sats_amount", "amount") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13);
-- name: CreateEventTransferTransfers :batchexec
INSERT INTO "brc20_event_transfer_transfers" ("inscription_id", "inscription_number", "tick", "original_tick", "tx_hash", "block_height", "tx_index", "timestamp", "from_pkscript", "from_satpoint", "from_input_index", "to_pkscript", "to_satpoint", "to_output_index", "spent_as_fee", "amount") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, $16);
-- name: DeleteIndexedBlocksSinceHeight :exec
DELETE FROM "brc20_indexed_blocks" WHERE "height" >= $1;
-- name: DeleteProcessorStatsSinceHeight :exec
DELETE FROM "brc20_processor_stats" WHERE "block_height" >= $1;
-- name: DeleteTickEntriesSinceHeight :exec
DELETE FROM "brc20_tick_entries" WHERE "deployed_at_height" >= $1;
-- name: DeleteTickEntryStatesSinceHeight :exec
DELETE FROM "brc20_tick_entry_states" WHERE "block_height" >= $1;
-- name: DeleteEventDeploysSinceHeight :exec
DELETE FROM "brc20_event_deploys" WHERE "block_height" >= $1;
-- name: DeleteEventMintsSinceHeight :exec
DELETE FROM "brc20_event_mints" WHERE "block_height" >= $1;
-- name: DeleteEventInscribeTransfersSinceHeight :exec
DELETE FROM "brc20_event_inscribe_transfers" WHERE "block_height" >= $1;
-- name: DeleteEventTransferTransfersSinceHeight :exec
DELETE FROM "brc20_event_transfer_transfers" WHERE "block_height" >= $1;
-- name: DeleteBalancesSinceHeight :exec
DELETE FROM "brc20_balances" WHERE "block_height" >= $1;
-- name: DeleteInscriptionEntriesSinceHeight :exec
DELETE FROM "brc20_inscription_entries" WHERE "created_at_height" >= $1;
-- name: DeleteInscriptionEntryStatesSinceHeight :exec
DELETE FROM "brc20_inscription_entry_states" WHERE "block_height" >= $1;
-- name: DeleteInscriptionTransfersSinceHeight :exec
DELETE FROM "brc20_inscription_transfers" WHERE "block_height" >= $1;
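The `:batchexec` commands above issue one parameterized INSERT per row. The "implement batch insert using multirow inserts" change in this release instead collapses many rows into a single statement by numbering placeholders across rows. A minimal stdlib-only sketch of that placeholder expansion (`buildMultirowInsert` is a hypothetical helper, not the project's generated sqlc code):

```go
package main

import (
	"fmt"
	"strings"
)

// buildMultirowInsert expands an INSERT into a single statement with one
// VALUES tuple per row and placeholders numbered $1..$n across all rows.
// Illustrative sketch only; real code binds the arguments alongside it.
func buildMultirowInsert(table string, columns []string, rowCount int) string {
	rows := make([]string, 0, rowCount)
	p := 1
	for i := 0; i < rowCount; i++ {
		ph := make([]string, len(columns))
		for j := range columns {
			ph[j] = fmt.Sprintf("$%d", p)
			p++
		}
		rows = append(rows, "("+strings.Join(ph, ", ")+")")
	}
	return fmt.Sprintf(
		"INSERT INTO %q (%s) VALUES %s",
		table, strings.Join(columns, ", "), strings.Join(rows, ", "),
	)
}

func main() {
	fmt.Println(buildMultirowInsert("brc20_inscription_entry_states",
		[]string{`"id"`, `"block_height"`, `"transfer_count"`}, 2))
}
```

PostgreSQL's wire protocol caps a single statement at 65535 bind parameters, so a real batch writer chunks its rows to stay under that limit.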


@@ -1,5 +0,0 @@
-- name: GetLatestIndexerState :one
SELECT * FROM brc20_indexer_states ORDER BY created_at DESC LIMIT 1;
-- name: CreateIndexerState :exec
INSERT INTO brc20_indexer_states (client_version, network, db_version, event_hash_version) VALUES ($1, $2, $3, $4);


@@ -1,16 +0,0 @@
package brc20
import "github.com/gaze-network/indexer-network/common"
var selfMintActivationHeights = map[common.Network]uint64{
common.NetworkMainnet: 837090,
common.NetworkTestnet: 837090,
}
func isSelfMintActivated(height uint64, network common.Network) bool {
activationHeight, ok := selfMintActivationHeights[network]
if !ok {
return false
}
return height >= activationHeight
}


@@ -1,21 +0,0 @@
package brc20
type Operation string
const (
OperationDeploy Operation = "deploy"
OperationMint Operation = "mint"
OperationTransfer Operation = "transfer"
)
func (o Operation) IsValid() bool {
switch o {
case OperationDeploy, OperationMint, OperationTransfer:
return true
}
return false
}
func (o Operation) String() string {
return string(o)
}


@@ -1,170 +0,0 @@
package brc20
import (
"encoding/json"
"math"
"math/big"
"strconv"
"strings"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/modules/brc20/internal/entity"
"github.com/shopspring/decimal"
)
type rawPayload struct {
P string `json:"p"` // required
Op string `json:"op"` // required
Tick string `json:"tick"` // required
// for deploy operations
Max string `json:"max"` // required
Lim *string `json:"lim"`
Dec *string `json:"dec"`
SelfMint *string `json:"self_mint"`
// for mint/transfer operations
Amt string `json:"amt"` // required
}
type Payload struct {
Transfer *entity.InscriptionTransfer
P string
Op Operation
Tick string // lower-cased tick
OriginalTick string // original tick before lower-cased
// for deploy operations
Max decimal.Decimal
Lim decimal.Decimal
Dec uint16
SelfMint bool
// for mint/transfer operations
Amt decimal.Decimal
}
var (
ErrInvalidProtocol = errors.New("invalid protocol: must be 'brc20'")
ErrInvalidOperation = errors.New("invalid operation for brc20: must be one of 'deploy', 'mint', or 'transfer'")
ErrInvalidTickLength = errors.New("invalid tick length: must be 4 or 5 bytes")
ErrEmptyTick = errors.New("empty tick")
ErrEmptyMax = errors.New("empty max")
ErrInvalidMax = errors.New("invalid max")
ErrInvalidDec = errors.New("invalid dec")
ErrInvalidSelfMint = errors.New("invalid self_mint")
ErrInvalidAmt = errors.New("invalid amt")
ErrNumberOverflow = errors.New("number overflow: max value is 2^64-1")
)
func ParsePayload(transfer *entity.InscriptionTransfer) (*Payload, error) {
var p rawPayload
err := json.Unmarshal(transfer.Content, &p)
if err != nil {
return nil, errors.Wrap(err, "failed to unmarshal payload as json")
}
if p.P != "brc20" {
return nil, errors.WithStack(ErrInvalidProtocol)
}
if !Operation(p.Op).IsValid() {
return nil, errors.WithStack(ErrInvalidOperation)
}
if p.Tick == "" {
return nil, errors.WithStack(ErrEmptyTick)
}
if len(p.Tick) != 4 && len(p.Tick) != 5 {
return nil, errors.WithStack(ErrInvalidTickLength)
}
parsed := Payload{
Transfer: transfer,
P: p.P,
Op: Operation(p.Op),
Tick: strings.ToLower(p.Tick),
OriginalTick: p.Tick,
}
switch parsed.Op {
case OperationDeploy:
if p.Max == "" {
return nil, errors.WithStack(ErrEmptyMax)
}
rawDec := "18" // BRC-20 spec: dec defaults to 18 when omitted
if p.Dec != nil {
rawDec = *p.Dec
}
dec, err := strconv.ParseUint(rawDec, 10, 16)
if err != nil {
return nil, errors.Wrap(err, "failed to parse dec")
}
if dec > 18 {
return nil, errors.WithStack(ErrInvalidDec)
}
parsed.Dec = uint16(dec)
max, err := parseNumericString(p.Max, dec)
if err != nil {
return nil, errors.Wrap(err, "failed to parse max")
}
parsed.Max = max
limit := max
if p.Lim != nil {
limit, err = parseNumericString(*p.Lim, dec)
if err != nil {
return nil, errors.Wrap(err, "failed to parse lim")
}
}
parsed.Lim = limit
// 5-bytes ticks are self-mint only
if len(parsed.OriginalTick) == 5 {
if p.SelfMint == nil || *p.SelfMint != "true" {
return nil, errors.WithStack(ErrInvalidSelfMint)
}
// a self-mint tick with max set to 0 allows unlimited mints (max and lim become uint64 max)
if parsed.Max.IsZero() {
parsed.Max = maxNumber
if parsed.Lim.IsZero() {
parsed.Lim = maxNumber
}
}
}
if parsed.Max.IsZero() {
return nil, errors.WithStack(ErrInvalidMax)
}
case OperationMint, OperationTransfer:
if p.Amt == "" {
return nil, errors.WithStack(ErrInvalidAmt)
}
// NOTE: check tick decimals after parsing payload
amt, err := parseNumericString(p.Amt, 18)
if err != nil {
return nil, errors.Wrap(err, "failed to parse amt")
}
parsed.Amt = amt
default:
return nil, errors.WithStack(ErrInvalidOperation)
}
return &parsed, nil
}
// max number for all numeric fields (except dec) is (2^64-1)
var (
maxNumber = decimal.NewFromBigInt(new(big.Int).SetUint64(math.MaxUint64), 0)
)
func parseNumericString(s string, maxDec uint64) (decimal.Decimal, error) {
d, err := decimal.NewFromString(s)
if err != nil {
return decimal.Decimal{}, errors.Wrap(err, "failed to parse decimal number")
}
if -d.Exponent() > int32(maxDec) {
return decimal.Decimal{}, errors.Errorf("cannot parse decimal number: too many decimal places: expected at most %d, got %d", maxDec, -d.Exponent())
}
if d.GreaterThan(maxNumber) {
return decimal.Decimal{}, errors.WithStack(ErrNumberOverflow)
}
return d, nil
}
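The checks in `parseNumericString` can be restated with only the standard library: parse the string, cap the number of fractional digits, and reject values above 2^64-1. A hypothetical stdlib-only sketch (`validateAmount` is invented for illustration; the module itself uses `github.com/shopspring/decimal`):

```go
package main

import (
	"errors"
	"fmt"
	"math"
	"math/big"
	"strings"
)

// validateAmount mirrors the constraints enforced by parseNumericString:
// the input must parse as a decimal number, carry at most maxDec fractional
// digits, and not exceed 2^64-1. Plain decimal notation only (no exponents).
func validateAmount(s string, maxDec int) (*big.Rat, error) {
	r, ok := new(big.Rat).SetString(s)
	if !ok {
		return nil, errors.New("failed to parse decimal number")
	}
	frac := ""
	if i := strings.IndexByte(s, '.'); i >= 0 {
		frac = s[i+1:]
	}
	if len(frac) > maxDec {
		return nil, fmt.Errorf("too many decimal places: expected at most %d, got %d", maxDec, len(frac))
	}
	limit := new(big.Rat).SetUint64(math.MaxUint64)
	if r.Cmp(limit) > 0 {
		return nil, errors.New("number overflow: max value is 2^64-1")
	}
	return r, nil
}

func main() {
	for _, s := range []string{"21000000", "1.234", "18446744073709551616"} {
		_, err := validateAmount(s, 2)
		fmt.Println(s, err)
	}
}
```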


@@ -1,71 +0,0 @@
package datagateway
import (
"context"
"github.com/btcsuite/btcd/wire"
"github.com/gaze-network/indexer-network/core/types"
"github.com/gaze-network/indexer-network/modules/brc20/internal/entity"
"github.com/gaze-network/indexer-network/modules/brc20/internal/ordinals"
)
type BRC20DataGateway interface {
BRC20ReaderDataGateway
BRC20WriterDataGateway
// BeginBRC20Tx returns a new BRC20DataGateway with transaction enabled. All write operations performed in this datagateway must be committed to persist changes.
BeginBRC20Tx(ctx context.Context) (BRC20DataGatewayWithTx, error)
}
type BRC20DataGatewayWithTx interface {
BRC20DataGateway
Tx
}
type BRC20ReaderDataGateway interface {
GetLatestBlock(ctx context.Context) (types.BlockHeader, error)
GetIndexedBlockByHeight(ctx context.Context, height int64) (*entity.IndexedBlock, error)
GetProcessorStats(ctx context.Context) (*entity.ProcessorStats, error)
GetInscriptionTransfersInOutPoints(ctx context.Context, outPoints []wire.OutPoint) (map[ordinals.SatPoint][]*entity.InscriptionTransfer, error)
GetInscriptionEntriesByIds(ctx context.Context, ids []ordinals.InscriptionId) (map[ordinals.InscriptionId]*ordinals.InscriptionEntry, error)
GetInscriptionNumbersByIds(ctx context.Context, ids []ordinals.InscriptionId) (map[ordinals.InscriptionId]int64, error)
GetInscriptionParentsByIds(ctx context.Context, ids []ordinals.InscriptionId) (map[ordinals.InscriptionId]ordinals.InscriptionId, error)
GetBalancesBatchAtHeight(ctx context.Context, blockHeight uint64, queries []GetBalancesBatchAtHeightQuery) (map[string]map[string]*entity.Balance, error)
GetTickEntriesByTicks(ctx context.Context, ticks []string) (map[string]*entity.TickEntry, error)
GetEventInscribeTransfersByInscriptionIds(ctx context.Context, ids []ordinals.InscriptionId) (map[ordinals.InscriptionId]*entity.EventInscribeTransfer, error)
GetLatestEventId(ctx context.Context) (int64, error)
}
type BRC20WriterDataGateway interface {
CreateIndexedBlock(ctx context.Context, block *entity.IndexedBlock) error
CreateProcessorStats(ctx context.Context, stats *entity.ProcessorStats) error
CreateTickEntries(ctx context.Context, blockHeight uint64, entries []*entity.TickEntry) error
CreateTickEntryStates(ctx context.Context, blockHeight uint64, entryStates []*entity.TickEntry) error
CreateInscriptionEntries(ctx context.Context, blockHeight uint64, entries []*ordinals.InscriptionEntry) error
CreateInscriptionEntryStates(ctx context.Context, blockHeight uint64, entryStates []*ordinals.InscriptionEntry) error
CreateInscriptionTransfers(ctx context.Context, transfers []*entity.InscriptionTransfer) error
CreateEventDeploys(ctx context.Context, events []*entity.EventDeploy) error
CreateEventMints(ctx context.Context, events []*entity.EventMint) error
CreateEventInscribeTransfers(ctx context.Context, events []*entity.EventInscribeTransfer) error
CreateEventTransferTransfers(ctx context.Context, events []*entity.EventTransferTransfer) error
// used for revert data
DeleteIndexedBlocksSinceHeight(ctx context.Context, height uint64) error
DeleteProcessorStatsSinceHeight(ctx context.Context, height uint64) error
DeleteTickEntriesSinceHeight(ctx context.Context, height uint64) error
DeleteTickEntryStatesSinceHeight(ctx context.Context, height uint64) error
DeleteEventDeploysSinceHeight(ctx context.Context, height uint64) error
DeleteEventMintsSinceHeight(ctx context.Context, height uint64) error
DeleteEventInscribeTransfersSinceHeight(ctx context.Context, height uint64) error
DeleteEventTransferTransfersSinceHeight(ctx context.Context, height uint64) error
DeleteBalancesSinceHeight(ctx context.Context, height uint64) error
DeleteInscriptionEntriesSinceHeight(ctx context.Context, height uint64) error
DeleteInscriptionEntryStatesSinceHeight(ctx context.Context, height uint64) error
DeleteInscriptionTransfersSinceHeight(ctx context.Context, height uint64) error
}
type GetBalancesBatchAtHeightQuery struct {
PkScriptHex string
Tick string
BlockHeight uint64
}


@@ -1,12 +0,0 @@
package datagateway
import (
"context"
"github.com/gaze-network/indexer-network/modules/brc20/internal/entity"
)
type IndexerInfoDataGateway interface {
GetLatestIndexerState(ctx context.Context) (entity.IndexerState, error)
CreateIndexerState(ctx context.Context, state entity.IndexerState) error
}


@@ -1,11 +0,0 @@
package entity
import "github.com/shopspring/decimal"
type Balance struct {
PkScript []byte
Tick string
BlockHeight uint64
OverallBalance decimal.Decimal
AvailableBalance decimal.Decimal
}


@@ -1,28 +0,0 @@
package entity
import (
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/gaze-network/indexer-network/modules/brc20/internal/ordinals"
"github.com/shopspring/decimal"
)
type EventDeploy struct {
Id int64
InscriptionId ordinals.InscriptionId
InscriptionNumber int64
Tick string
OriginalTick string
TxHash chainhash.Hash
BlockHeight uint64
TxIndex uint32
Timestamp time.Time
PkScript []byte
SatPoint ordinals.SatPoint
TotalSupply decimal.Decimal
Decimals uint16
LimitPerMint decimal.Decimal
IsSelfMint bool
}


@@ -1,27 +0,0 @@
package entity
import (
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/gaze-network/indexer-network/modules/brc20/internal/ordinals"
"github.com/shopspring/decimal"
)
type EventInscribeTransfer struct {
Id int64
InscriptionId ordinals.InscriptionId
InscriptionNumber int64
Tick string
OriginalTick string
TxHash chainhash.Hash
BlockHeight uint64
TxIndex uint32
Timestamp time.Time
PkScript []byte
SatPoint ordinals.SatPoint
OutputIndex uint32
SatsAmount uint64
Amount decimal.Decimal
}


@@ -1,26 +0,0 @@
package entity
import (
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/gaze-network/indexer-network/modules/brc20/internal/ordinals"
"github.com/shopspring/decimal"
)
type EventMint struct {
Id int64
InscriptionId ordinals.InscriptionId
InscriptionNumber int64
Tick string
OriginalTick string
TxHash chainhash.Hash
BlockHeight uint64
TxIndex uint32
Timestamp time.Time
PkScript []byte
SatPoint ordinals.SatPoint
Amount decimal.Decimal
ParentId *ordinals.InscriptionId
}


@@ -1,30 +0,0 @@
package entity
import (
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/gaze-network/indexer-network/modules/brc20/internal/ordinals"
"github.com/shopspring/decimal"
)
type EventTransferTransfer struct {
Id int64
InscriptionId ordinals.InscriptionId
InscriptionNumber int64
Tick string
OriginalTick string
TxHash chainhash.Hash
BlockHeight uint64
TxIndex uint32
Timestamp time.Time
FromPkScript []byte
FromSatPoint ordinals.SatPoint
FromInputIndex uint32
ToPkScript []byte
ToSatPoint ordinals.SatPoint
ToOutputIndex uint32
SpentAsFee bool
Amount decimal.Decimal
}


@@ -1,31 +0,0 @@
package entity
import (
"github.com/gaze-network/indexer-network/core/types"
"github.com/gaze-network/indexer-network/modules/brc20/internal/ordinals"
)
type OriginOld struct {
Content []byte
OldSatPoint ordinals.SatPoint
InputIndex uint32
}
type OriginNew struct {
Inscription ordinals.Inscription
Parent *ordinals.InscriptionId
Pointer *uint64
Fee uint64
Cursed bool
CursedForBRC20 bool
Hidden bool
Reinscription bool
Unbound bool
}
type Flotsam struct {
Tx *types.Transaction
OriginOld *OriginOld // OriginOld and OriginNew are mutually exclusive
OriginNew *OriginNew // OriginOld and OriginNew are mutually exclusive
Offset uint64
InscriptionId ordinals.InscriptionId
}


@@ -1,10 +0,0 @@
package entity
import "github.com/btcsuite/btcd/chaincfg/chainhash"
type IndexedBlock struct {
Height uint64
Hash chainhash.Hash
EventHash chainhash.Hash
CumulativeEventHash chainhash.Hash
}


@@ -1,15 +0,0 @@
package entity
import (
"time"
"github.com/gaze-network/indexer-network/common"
)
type IndexerState struct {
CreatedAt time.Time
ClientVersion string
DBVersion int32
EventHashVersion int32
Network common.Network
}


@@ -1,21 +0,0 @@
package entity
import (
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/gaze-network/indexer-network/modules/brc20/internal/ordinals"
)
type InscriptionTransfer struct {
InscriptionId ordinals.InscriptionId
BlockHeight uint64
TxIndex uint32
TxHash chainhash.Hash
Content []byte
FromInputIndex uint32
OldSatPoint ordinals.SatPoint
NewSatPoint ordinals.SatPoint
NewPkScript []byte
NewOutputValue uint64
SentAsFee bool
TransferCount uint32
}


@@ -1,8 +0,0 @@
package entity
type ProcessorStats struct {
BlockHeight uint64
CursedInscriptionCount uint64
BlessedInscriptionCount uint64
LostSats uint64
}


@@ -1,25 +0,0 @@
package entity
import (
"time"
"github.com/gaze-network/indexer-network/modules/brc20/internal/ordinals"
"github.com/shopspring/decimal"
)
type TickEntry struct {
Tick string
OriginalTick string
TotalSupply decimal.Decimal
Decimals uint16
LimitPerMint decimal.Decimal
IsSelfMint bool
DeployInscriptionId ordinals.InscriptionId
DeployedAt time.Time
DeployedAtHeight uint64
MintedAmount decimal.Decimal
BurnedAmount decimal.Decimal
CompletedAt time.Time
CompletedAtHeight uint64
}


@@ -1,285 +0,0 @@
package ordinals
import (
"bytes"
"encoding/binary"
"github.com/btcsuite/btcd/txscript"
"github.com/gaze-network/indexer-network/core/types"
"github.com/samber/lo"
)
type Envelope struct {
Inscription Inscription
InputIndex uint32 // Index of input that contains the envelope
Offset int // Index of the envelope within the input
PushNum bool // True if envelope contains pushnum opcodes
Stutter bool // True if envelope matches stuttering curse structure
IncompleteField bool // True if payload is incomplete
DuplicateField bool // True if payload contains duplicated field
UnrecognizedEvenField bool // True if payload contains unrecognized even field
}
func ParseEnvelopesFromTx(tx *types.Transaction) []*Envelope {
envelopes := make([]*Envelope, 0)
for i, txIn := range tx.TxIn {
tapScript, ok := extractTapScript(txIn.Witness)
if !ok {
continue
}
newEnvelopes := envelopesFromTapScript(tapScript, i)
envelopes = append(envelopes, newEnvelopes...)
}
return envelopes
}
var protocolId = []byte("ord")
func envelopesFromTapScript(tokenizer txscript.ScriptTokenizer, inputIndex int) []*Envelope {
envelopes := make([]*Envelope, 0)
var stuttered bool
for tokenizer.Next() {
if tokenizer.Err() != nil {
break
}
if tokenizer.Opcode() == txscript.OP_FALSE {
envelope, stutter := envelopeFromTokenizer(tokenizer, inputIndex, len(envelopes), stuttered)
if envelope != nil {
envelopes = append(envelopes, envelope)
} else {
stuttered = stutter
}
}
}
return envelopes
}
func envelopeFromTokenizer(tokenizer txscript.ScriptTokenizer, inputIndex int, offset int, stuttered bool) (*Envelope, bool) {
tokenizer.Next()
if tokenizer.Opcode() != txscript.OP_IF {
return nil, tokenizer.Opcode() == txscript.OP_FALSE
}
tokenizer.Next()
if !bytes.Equal(tokenizer.Data(), protocolId) {
return nil, tokenizer.Opcode() == txscript.OP_FALSE
}
var pushNum bool
payload := make([][]byte, 0)
for tokenizer.Next() {
if tokenizer.Err() != nil {
return nil, false
}
opCode := tokenizer.Opcode()
if opCode == txscript.OP_ENDIF {
break
}
switch opCode {
case txscript.OP_1NEGATE:
pushNum = true
payload = append(payload, []byte{0x81})
case txscript.OP_1:
pushNum = true
payload = append(payload, []byte{0x01})
case txscript.OP_2:
pushNum = true
payload = append(payload, []byte{0x02})
case txscript.OP_3:
pushNum = true
payload = append(payload, []byte{0x03})
case txscript.OP_4:
pushNum = true
payload = append(payload, []byte{0x04})
case txscript.OP_5:
pushNum = true
payload = append(payload, []byte{0x05})
case txscript.OP_6:
pushNum = true
payload = append(payload, []byte{0x06})
case txscript.OP_7:
pushNum = true
payload = append(payload, []byte{0x07})
case txscript.OP_8:
pushNum = true
payload = append(payload, []byte{0x08})
case txscript.OP_9:
pushNum = true
payload = append(payload, []byte{0x09})
case txscript.OP_10:
pushNum = true
payload = append(payload, []byte{0x0a})
case txscript.OP_11:
pushNum = true
payload = append(payload, []byte{0x0b})
case txscript.OP_12:
pushNum = true
payload = append(payload, []byte{0x0c})
case txscript.OP_13:
pushNum = true
payload = append(payload, []byte{0x0d})
case txscript.OP_14:
pushNum = true
payload = append(payload, []byte{0x0e})
case txscript.OP_15:
pushNum = true
payload = append(payload, []byte{0x0f})
case txscript.OP_16:
pushNum = true
payload = append(payload, []byte{0x10})
case txscript.OP_0:
// OP_0 is a special case, it is accepted in ord's implementation
payload = append(payload, []byte{})
default:
data := tokenizer.Data()
if data == nil {
return nil, false
}
payload = append(payload, data)
}
}
// incomplete envelope
if tokenizer.Done() && tokenizer.Opcode() != txscript.OP_ENDIF {
return nil, false
}
// find body (empty data push in even index payload)
bodyIndex := -1
for i, value := range payload {
if i%2 == 0 && len(value) == 0 {
bodyIndex = i
break
}
}
var fieldPayloads [][]byte
var body []byte
if bodyIndex != -1 {
fieldPayloads = payload[:bodyIndex]
body = lo.Flatten(payload[bodyIndex+1:])
} else {
fieldPayloads = payload[:]
}
var incompleteField bool
fields := make(Fields)
for _, chunk := range lo.Chunk(fieldPayloads, 2) {
if len(chunk) != 2 {
incompleteField = true
break
}
key := chunk[0]
value := chunk[1]
// key cannot be empty, as checked by bodyIndex above
tag := Tag(key[0])
fields[tag] = append(fields[tag], value)
}
var duplicateField bool
for _, values := range fields {
if len(values) > 1 {
duplicateField = true
break
}
}
rawContentEncoding := fields.Take(TagContentEncoding)
rawContentType := fields.Take(TagContentType)
rawDelegate := fields.Take(TagDelegate)
rawMetadata := fields.Take(TagMetadata)
rawMetaprotocol := fields.Take(TagMetaprotocol)
rawParent := fields.Take(TagParent)
rawPointer := fields.Take(TagPointer)
unrecognizedEvenField := lo.SomeBy(lo.Keys(fields), func(key Tag) bool {
return key%2 == 0
})
var delegate, parent *InscriptionId
inscriptionId, err := NewInscriptionIdFromString(string(rawDelegate))
if err == nil {
delegate = &inscriptionId
}
inscriptionId, err = NewInscriptionIdFromString(string(rawParent))
if err == nil {
parent = &inscriptionId
}
var pointer *uint64
// if rawPointer is not nil and fits in uint64 (any bytes past the 8th must be zero)
if rawPointer != nil && (len(rawPointer) <= 8 || lo.EveryBy(rawPointer[8:], func(value byte) bool {
return value == 0
})) {
// pad zero bytes to 8 bytes
if len(rawPointer) < 8 {
rawPointer = append(rawPointer, make([]byte, 8-len(rawPointer))...)
}
pointer = lo.ToPtr(binary.LittleEndian.Uint64(rawPointer))
}
inscription := Inscription{
Content: body,
ContentEncoding: string(rawContentEncoding),
ContentType: string(rawContentType),
Delegate: delegate,
Metadata: rawMetadata,
Metaprotocol: string(rawMetaprotocol),
Parent: parent,
Pointer: pointer,
}
return &Envelope{
Inscription: inscription,
InputIndex: uint32(inputIndex),
Offset: offset,
PushNum: pushNum,
Stutter: stuttered,
IncompleteField: incompleteField,
DuplicateField: duplicateField,
UnrecognizedEvenField: unrecognizedEvenField,
}, false
}
type Fields map[Tag][][]byte
func (fields Fields) Take(tag Tag) []byte {
values, ok := fields[tag]
if !ok {
return nil
}
if tag.IsChunked() {
delete(fields, tag)
return lo.Flatten(values)
} else {
first := values[0]
values = values[1:]
if len(values) == 0 {
delete(fields, tag)
} else {
fields[tag] = values
}
return first
}
}
func extractTapScript(witness [][]byte) (txscript.ScriptTokenizer, bool) {
witness = removeAnnexFromWitness(witness)
if len(witness) < 2 {
return txscript.ScriptTokenizer{}, false
}
script := witness[len(witness)-2]
return txscript.MakeScriptTokenizer(0, script), true
}
func removeAnnexFromWitness(witness [][]byte) [][]byte {
if len(witness) >= 2 && len(witness[len(witness)-1]) > 0 && witness[len(witness)-1][0] == txscript.TaprootAnnexTag {
return witness[:len(witness)-1]
}
return witness
}


@@ -1,742 +0,0 @@
package ordinals
import (
"testing"
"github.com/Cleverse/go-utilities/utils"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/gaze-network/indexer-network/core/types"
"github.com/samber/lo"
"github.com/stretchr/testify/assert"
)
func TestParseEnvelopesFromTx(t *testing.T) {
testTx := func(t *testing.T, tx *types.Transaction, expected []*Envelope) {
t.Helper()
envelopes := ParseEnvelopesFromTx(tx)
assert.Equal(t, expected, envelopes)
}
testParseWitness := func(t *testing.T, tapScript []byte, expected []*Envelope) {
t.Helper()
tx := &types.Transaction{
Version: 2,
LockTime: 0,
TxIn: []*types.TxIn{
{
Witness: wire.TxWitness{
tapScript,
{},
},
},
},
}
testTx(t, tx, expected)
}
testEnvelope := func(t *testing.T, payload [][]byte, expected []*Envelope) {
t.Helper()
builder := NewPushScriptBuilder().
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF)
for _, data := range payload {
builder.AddData(data)
}
builder.AddOp(txscript.OP_ENDIF)
script, err := builder.Script()
assert.NoError(t, err)
testParseWitness(
t,
script,
expected,
)
}
t.Run("empty_witness", func(t *testing.T) {
testTx(t, &types.Transaction{
Version: 2,
LockTime: 0,
TxIn: []*types.TxIn{{
Witness: wire.TxWitness{},
}},
}, []*Envelope{})
})
t.Run("ignore_key_path_spends", func(t *testing.T) {
testTx(t, &types.Transaction{
Version: 2,
LockTime: 0,
TxIn: []*types.TxIn{{
Witness: wire.TxWitness{
utils.Must(NewPushScriptBuilder().
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddData(protocolId).
AddOp(txscript.OP_ENDIF).
Script()),
},
}},
}, []*Envelope{})
})
t.Run("ignore_key_path_spends_with_annex", func(t *testing.T) {
testTx(t, &types.Transaction{
Version: 2,
LockTime: 0,
TxIn: []*types.TxIn{{
Witness: wire.TxWitness{
utils.Must(NewPushScriptBuilder().
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddData(protocolId).
AddOp(txscript.OP_ENDIF).
Script()),
[]byte{txscript.TaprootAnnexTag},
},
}},
}, []*Envelope{})
})
t.Run("parse_from_tapscript", func(t *testing.T) {
testParseWitness(
t,
utils.Must(NewPushScriptBuilder().
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddData(protocolId).
AddOp(txscript.OP_ENDIF).
Script()),
[]*Envelope{{}},
)
})
t.Run("ignore_unparsable_scripts", func(t *testing.T) {
script := utils.Must(NewPushScriptBuilder().
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddData(protocolId).
AddOp(txscript.OP_ENDIF).
Script())
script = append(script, 0x01)
testParseWitness(
t,
script,
[]*Envelope{
{},
},
)
})
t.Run("no_inscription", func(t *testing.T) {
testParseWitness(
t,
utils.Must(NewPushScriptBuilder().
Script()),
[]*Envelope{},
)
})
t.Run("duplicate_field", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagNop.Bytes(),
{},
TagNop.Bytes(),
{},
},
[]*Envelope{
{
DuplicateField: true,
},
},
)
})
t.Run("with_content_type", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagContentType.Bytes(),
[]byte("text/plain;charset=utf-8"),
TagBody.Bytes(),
[]byte("ord"),
},
[]*Envelope{
{
Inscription: Inscription{
Content: []byte("ord"),
ContentType: "text/plain;charset=utf-8",
},
},
},
)
})
t.Run("with_content_encoding", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagContentType.Bytes(),
[]byte("text/plain;charset=utf-8"),
TagContentEncoding.Bytes(),
[]byte("br"),
TagBody.Bytes(),
[]byte("ord"),
},
[]*Envelope{
{
Inscription: Inscription{
Content: []byte("ord"),
ContentType: "text/plain;charset=utf-8",
ContentEncoding: "br",
},
},
},
)
})
t.Run("with_unknown_tag", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagContentType.Bytes(),
[]byte("text/plain;charset=utf-8"),
TagNop.Bytes(),
[]byte("bar"),
TagBody.Bytes(),
[]byte("ord"),
},
[]*Envelope{
{
Inscription: Inscription{
Content: []byte("ord"),
ContentType: "text/plain;charset=utf-8",
},
},
},
)
})
t.Run("no_body", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagContentType.Bytes(),
[]byte("text/plain;charset=utf-8"),
},
[]*Envelope{
{
Inscription: Inscription{
ContentType: "text/plain;charset=utf-8",
},
},
},
)
})
t.Run("no_content_type", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagBody.Bytes(),
[]byte("foo"),
},
[]*Envelope{
{
Inscription: Inscription{
Content: []byte("foo"),
},
},
},
)
})
t.Run("valid_body_in_multiple_pushes", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagContentType.Bytes(),
[]byte("text/plain;charset=utf-8"),
TagBody.Bytes(),
[]byte("foo"),
[]byte("bar"),
},
[]*Envelope{
{
Inscription: Inscription{
Content: []byte("foobar"),
ContentType: "text/plain;charset=utf-8",
},
},
},
)
})
t.Run("valid_body_in_zero_pushes", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagContentType.Bytes(),
[]byte("text/plain;charset=utf-8"),
TagBody.Bytes(),
},
[]*Envelope{
{
Inscription: Inscription{
Content: []byte(""),
ContentType: "text/plain;charset=utf-8",
},
},
},
)
})
t.Run("valid_body_in_multiple_empty_pushes", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagContentType.Bytes(),
[]byte("text/plain;charset=utf-8"),
TagBody.Bytes(),
{},
{},
{},
{},
{},
{},
{},
},
[]*Envelope{
{
Inscription: Inscription{
Content: []byte(""),
ContentType: "text/plain;charset=utf-8",
},
},
},
)
})
t.Run("valid_ignore_trailing", func(t *testing.T) {
testParseWitness(
t,
utils.Must(NewPushScriptBuilder().
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddData(protocolId).
AddData(TagContentType.Bytes()).
AddData([]byte("text/plain;charset=utf-8")).
AddData(TagBody.Bytes()).
AddData([]byte("ord")).
AddOp(txscript.OP_ENDIF).
AddOp(txscript.OP_CHECKSIG).
Script()),
[]*Envelope{
{
Inscription: Inscription{
Content: []byte("ord"),
ContentType: "text/plain;charset=utf-8",
},
},
},
)
})
t.Run("valid_ignore_preceding", func(t *testing.T) {
testParseWitness(
t,
utils.Must(NewPushScriptBuilder().
AddOp(txscript.OP_CHECKSIG).
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddData(protocolId).
AddData(TagContentType.Bytes()).
AddData([]byte("text/plain;charset=utf-8")).
AddData(TagBody.Bytes()).
AddData([]byte("ord")).
AddOp(txscript.OP_ENDIF).
Script()),
[]*Envelope{
{
Inscription: Inscription{
Content: []byte("ord"),
ContentType: "text/plain;charset=utf-8",
},
},
},
)
})
t.Run("multiple_inscriptions_in_a_single_witness", func(t *testing.T) {
testParseWitness(
t,
utils.Must(NewPushScriptBuilder().
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddData(protocolId).
AddData(TagContentType.Bytes()).
AddData([]byte("text/plain;charset=utf-8")).
AddData(TagBody.Bytes()).
AddData([]byte("foo")).
AddOp(txscript.OP_ENDIF).
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddData(protocolId).
AddData(TagContentType.Bytes()).
AddData([]byte("text/plain;charset=utf-8")).
AddData(TagBody.Bytes()).
AddData([]byte("bar")).
AddOp(txscript.OP_ENDIF).
Script()),
[]*Envelope{
{
Inscription: Inscription{
Content: []byte("foo"),
ContentType: "text/plain;charset=utf-8",
},
},
{
Inscription: Inscription{
Content: []byte("bar"),
ContentType: "text/plain;charset=utf-8",
},
Offset: 1,
},
},
)
})
t.Run("invalid_utf8_does_not_render_inscription_invalid", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagContentType.Bytes(),
[]byte("text/plain;charset=utf-8"),
TagBody.Bytes(),
{0b10000000},
},
[]*Envelope{
{
Inscription: Inscription{
Content: []byte{0b10000000},
ContentType: "text/plain;charset=utf-8",
},
},
},
)
})
t.Run("no_endif", func(t *testing.T) {
testParseWitness(
t,
utils.Must(NewPushScriptBuilder().
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddData(protocolId).
Script()),
[]*Envelope{},
)
})
t.Run("no_op_false", func(t *testing.T) {
testParseWitness(
t,
utils.Must(NewPushScriptBuilder().
AddOp(txscript.OP_IF).
AddData(protocolId).
AddOp(txscript.OP_ENDIF).
Script()),
[]*Envelope{},
)
})
t.Run("empty_envelope", func(t *testing.T) {
testEnvelope(
t,
[][]byte{},
[]*Envelope{},
)
})
t.Run("wrong_protocol_identifier", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
[]byte("foo"),
},
[]*Envelope{},
)
})
t.Run("extract_from_second_input", func(t *testing.T) {
testTx(
t,
&types.Transaction{
Version: 2,
LockTime: 0,
TxIn: []*types.TxIn{{}, {
Witness: wire.TxWitness{
utils.Must(NewPushScriptBuilder().
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddData(protocolId).
AddData(TagContentType.Bytes()).
AddData([]byte("text/plain;charset=utf-8")).
AddData(TagBody.Bytes()).
AddData([]byte("ord")).
AddOp(txscript.OP_ENDIF).
Script(),
),
{},
},
}},
},
[]*Envelope{
{
Inscription: Inscription{
Content: []byte("ord"),
ContentType: "text/plain;charset=utf-8",
},
InputIndex: 1,
},
},
)
})
t.Run("inscribe_png", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagContentType.Bytes(),
[]byte("image/png"),
TagBody.Bytes(),
{0x01, 0x02, 0x03},
},
[]*Envelope{
{
Inscription: Inscription{
Content: []byte{0x01, 0x02, 0x03},
ContentType: "image/png",
},
},
},
)
})
t.Run("unknown_odd_fields", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagNop.Bytes(),
{0x00},
},
[]*Envelope{
{
Inscription: Inscription{},
},
},
)
})
t.Run("unknown_even_fields", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagUnbound.Bytes(),
{0x00},
},
[]*Envelope{
{
Inscription: Inscription{},
UnrecognizedEvenField: true,
},
},
)
})
t.Run("pointer_field_is_recognized", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagPointer.Bytes(),
{0x01},
},
[]*Envelope{
{
Inscription: Inscription{
Pointer: lo.ToPtr(uint64(1)),
},
},
},
)
})
t.Run("duplicate_pointer_field_makes_inscription_unbound", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagPointer.Bytes(),
{0x01},
TagPointer.Bytes(),
{0x00},
},
[]*Envelope{
{
Inscription: Inscription{
Pointer: lo.ToPtr(uint64(1)),
},
DuplicateField: true,
UnrecognizedEvenField: true,
},
},
)
})
t.Run("incomplete_field", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagNop.Bytes(),
},
[]*Envelope{
{
Inscription: Inscription{},
IncompleteField: true,
},
},
)
})
t.Run("metadata_is_parsed_correctly", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagMetadata.Bytes(),
{},
},
[]*Envelope{
{
Inscription: Inscription{
Metadata: []byte{},
},
},
},
)
})
t.Run("metadata_is_parsed_correctly_from_chunks", func(t *testing.T) {
testEnvelope(
t,
[][]byte{
protocolId,
TagMetadata.Bytes(),
{0x00},
TagMetadata.Bytes(),
{0x01},
},
[]*Envelope{
{
Inscription: Inscription{
Metadata: []byte{0x00, 0x01},
},
DuplicateField: true,
},
},
)
})
t.Run("pushnum_opcodes_are_parsed_correctly", func(t *testing.T) {
pushNumOpCodes := map[byte][]byte{
txscript.OP_1NEGATE: {0x81},
txscript.OP_1: {0x01},
txscript.OP_2: {0x02},
txscript.OP_3: {0x03},
txscript.OP_4: {0x04},
txscript.OP_5: {0x05},
txscript.OP_6: {0x06},
txscript.OP_7: {0x07},
txscript.OP_8: {0x08},
txscript.OP_9: {0x09},
txscript.OP_10: {0x0a},
txscript.OP_11: {0x0b},
txscript.OP_12: {0x0c},
txscript.OP_13: {0x0d},
txscript.OP_14: {0x0e},
txscript.OP_15: {0x0f},
txscript.OP_16: {0x10},
}
for opCode, value := range pushNumOpCodes {
script := utils.Must(NewPushScriptBuilder().
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddData(protocolId).
AddData(TagBody.Bytes()).
AddOp(opCode).
AddOp(txscript.OP_ENDIF).
Script())
testParseWitness(
t,
script,
[]*Envelope{
{
Inscription: Inscription{
Content: value,
},
PushNum: true,
},
},
)
}
})
t.Run("stuttering", func(t *testing.T) {
script := utils.Must(NewPushScriptBuilder().
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddData(protocolId).
AddOp(txscript.OP_ENDIF).
Script())
testParseWitness(
t,
script,
[]*Envelope{
{
Inscription: Inscription{},
Stutter: true,
},
},
)
script = utils.Must(NewPushScriptBuilder().
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddData(protocolId).
AddOp(txscript.OP_ENDIF).
Script())
testParseWitness(
t,
script,
[]*Envelope{
{
Inscription: Inscription{},
Stutter: true,
},
},
)
script = utils.Must(NewPushScriptBuilder().
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_AND).
AddOp(txscript.OP_FALSE).
AddOp(txscript.OP_IF).
AddData(protocolId).
AddOp(txscript.OP_ENDIF).
Script())
testParseWitness(
t,
script,
[]*Envelope{
{
Inscription: Inscription{},
Stutter: false,
},
},
)
})
}


@@ -1,13 +0,0 @@
package ordinals
import "github.com/gaze-network/indexer-network/common"
func GetJubileeHeight(network common.Network) uint64 {
switch network {
case common.NetworkMainnet:
return 824544
case common.NetworkTestnet:
return 2544192
}
panic("unsupported network")
}


@@ -1,27 +0,0 @@
package ordinals
import "time"
type Inscription struct {
Content []byte
ContentEncoding string
ContentType string
Delegate *InscriptionId
Metadata []byte
Metaprotocol string
Parent *InscriptionId // in ord 0.14, an inscription has only one parent
Pointer *uint64
}
// TODO: refactor ordinals.InscriptionEntry to entity.InscriptionEntry
type InscriptionEntry struct {
Id InscriptionId
Number int64
SequenceNumber uint64
Cursed bool
CursedForBRC20 bool
CreatedAt time.Time
CreatedAtHeight uint64
Inscription Inscription
TransferCount uint32
}


@@ -1,67 +0,0 @@
package ordinals
import (
"fmt"
"strconv"
"strings"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/cockroachdb/errors"
)
type InscriptionId struct {
TxHash chainhash.Hash
Index uint32
}
func (i InscriptionId) String() string {
return fmt.Sprintf("%si%d", i.TxHash.String(), i.Index)
}
func NewInscriptionId(txHash chainhash.Hash, index uint32) InscriptionId {
return InscriptionId{
TxHash: txHash,
Index: index,
}
}
var ErrInscriptionIdInvalidSeparator = fmt.Errorf("invalid inscription id: must contain exactly one separator")
func NewInscriptionIdFromString(s string) (InscriptionId, error) {
parts := strings.SplitN(s, "i", 2)
if len(parts) != 2 {
return InscriptionId{}, errors.WithStack(ErrInscriptionIdInvalidSeparator)
}
txHash, err := chainhash.NewHashFromStr(parts[0])
if err != nil {
return InscriptionId{}, errors.Wrap(err, "invalid inscription id: cannot parse txHash")
}
index, err := strconv.ParseUint(parts[1], 10, 32)
if err != nil {
return InscriptionId{}, errors.Wrap(err, "invalid inscription id: cannot parse index")
}
return InscriptionId{
TxHash: *txHash,
Index: uint32(index),
}, nil
}
// MarshalJSON implements json.Marshaler
func (r InscriptionId) MarshalJSON() ([]byte, error) {
return []byte(`"` + r.String() + `"`), nil
}
// UnmarshalJSON implements json.Unmarshaler
func (r *InscriptionId) UnmarshalJSON(data []byte) error {
// data must be quoted
if len(data) < 2 || data[0] != '"' || data[len(data)-1] != '"' {
return errors.New("must be string")
}
data = data[1 : len(data)-1]
parsed, err := NewInscriptionIdFromString(string(data))
if err != nil {
return errors.WithStack(err)
}
*r = parsed
return nil
}


@@ -1,109 +0,0 @@
package ordinals
import (
"testing"
"github.com/Cleverse/go-utilities/utils"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/stretchr/testify/assert"
)
func TestNewInscriptionIdFromString(t *testing.T) {
tests := []struct {
name string
input string
expected InscriptionId
shouldError bool
}{
{
name: "valid inscription id 1",
input: "1111111111111111111111111111111111111111111111111111111111111111i0",
expected: InscriptionId{
TxHash: *utils.Must(chainhash.NewHashFromStr("1111111111111111111111111111111111111111111111111111111111111111")),
Index: 0,
},
},
{
name: "valid inscription id 2",
input: "1111111111111111111111111111111111111111111111111111111111111111i1",
expected: InscriptionId{
TxHash: *utils.Must(chainhash.NewHashFromStr("1111111111111111111111111111111111111111111111111111111111111111")),
Index: 1,
},
},
{
name: "valid inscription id 3",
input: "1111111111111111111111111111111111111111111111111111111111111111i4294967295",
expected: InscriptionId{
TxHash: *utils.Must(chainhash.NewHashFromStr("1111111111111111111111111111111111111111111111111111111111111111")),
Index: 4294967295,
},
},
{
name: "error no separator",
input: "abc",
shouldError: true,
},
{
name: "error invalid index",
input: "xyzixyz",
shouldError: true,
},
{
name: "error invalid index",
input: "abcixyz",
shouldError: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
actual, err := NewInscriptionIdFromString(tt.input)
if tt.shouldError {
assert.Error(t, err)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expected, actual)
}
})
}
}
func TestInscriptionIdString(t *testing.T) {
tests := []struct {
name string
expected string
input InscriptionId
}{
{
name: "valid inscription id 1",
expected: "1111111111111111111111111111111111111111111111111111111111111111i0",
input: InscriptionId{
TxHash: *utils.Must(chainhash.NewHashFromStr("1111111111111111111111111111111111111111111111111111111111111111")),
Index: 0,
},
},
{
name: "valid inscription id 2",
expected: "1111111111111111111111111111111111111111111111111111111111111111i1",
input: InscriptionId{
TxHash: *utils.Must(chainhash.NewHashFromStr("1111111111111111111111111111111111111111111111111111111111111111")),
Index: 1,
},
},
{
name: "valid inscription id 3",
expected: "1111111111111111111111111111111111111111111111111111111111111111i4294967295",
input: InscriptionId{
TxHash: *utils.Must(chainhash.NewHashFromStr("1111111111111111111111111111111111111111111111111111111111111111")),
Index: 4294967295,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
assert.Equal(t, tt.expected, tt.input.String())
})
}
}


@@ -1,68 +0,0 @@
package ordinals
import (
"fmt"
"strconv"
"strings"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/wire"
"github.com/cockroachdb/errors"
)
type SatPoint struct {
OutPoint wire.OutPoint
Offset uint64
}
func (s SatPoint) String() string {
return fmt.Sprintf("%s:%d", s.OutPoint.String(), s.Offset)
}
var ErrSatPointInvalidSeparator = fmt.Errorf("invalid sat point: must contain exactly two separators")
func NewSatPointFromString(s string) (SatPoint, error) {
parts := strings.SplitN(s, ":", 3)
if len(parts) != 3 {
return SatPoint{}, errors.WithStack(ErrSatPointInvalidSeparator)
}
txHash, err := chainhash.NewHashFromStr(parts[0])
if err != nil {
return SatPoint{}, errors.Wrap(err, "invalid sat point: cannot parse txHash")
}
index, err := strconv.ParseUint(parts[1], 10, 32)
if err != nil {
return SatPoint{}, errors.Wrap(err, "invalid sat point: cannot parse output index")
}
offset, err := strconv.ParseUint(parts[2], 10, 64)
if err != nil {
return SatPoint{}, errors.Wrap(err, "invalid sat point: cannot parse offset")
}
return SatPoint{
OutPoint: wire.OutPoint{
Hash: *txHash,
Index: uint32(index),
},
Offset: offset,
}, nil
}
// MarshalJSON implements json.Marshaler
func (r SatPoint) MarshalJSON() ([]byte, error) {
return []byte(`"` + r.String() + `"`), nil
}
// UnmarshalJSON implements json.Unmarshaler
func (r *SatPoint) UnmarshalJSON(data []byte) error {
// data must be quoted
if len(data) < 2 || data[0] != '"' || data[len(data)-1] != '"' {
return errors.New("must be string")
}
data = data[1 : len(data)-1]
parsed, err := NewSatPointFromString(string(data))
if err != nil {
return errors.WithStack(err)
}
*r = parsed
return nil
}


@@ -1,89 +0,0 @@
package ordinals
import (
"testing"
"github.com/Cleverse/go-utilities/utils"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/wire"
"github.com/stretchr/testify/assert"
)
func TestNewSatPointFromString(t *testing.T) {
tests := []struct {
name string
input string
expected SatPoint
shouldError bool
}{
{
name: "valid sat point",
input: "1111111111111111111111111111111111111111111111111111111111111111:1:2",
expected: SatPoint{
OutPoint: wire.OutPoint{
Hash: *utils.Must(chainhash.NewHashFromStr("1111111111111111111111111111111111111111111111111111111111111111")),
Index: 1,
},
Offset: 2,
},
},
{
name: "error no separator",
input: "abc",
shouldError: true,
},
{
name: "error invalid output index",
input: "abc:xyz",
shouldError: true,
},
{
name: "error no offset",
input: "1111111111111111111111111111111111111111111111111111111111111111:1",
shouldError: true,
},
{
name: "error invalid offset",
input: "1111111111111111111111111111111111111111111111111111111111111111:1:foo",
shouldError: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
actual, err := NewSatPointFromString(tt.input)
if tt.shouldError {
assert.Error(t, err)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expected, actual)
}
})
}
}
func TestSatPointString(t *testing.T) {
tests := []struct {
name string
input SatPoint
expected string
}{
{
name: "valid sat point",
input: SatPoint{
OutPoint: wire.OutPoint{
Hash: *utils.Must(chainhash.NewHashFromStr("1111111111111111111111111111111111111111111111111111111111111111")),
Index: 1,
},
Offset: 2,
},
expected: "1111111111111111111111111111111111111111111111111111111111111111:1:2",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
assert.Equal(t, tt.expected, tt.input.String())
})
}
}


@@ -1,170 +0,0 @@
package ordinals
import (
"encoding/binary"
"fmt"
"github.com/btcsuite/btcd/txscript"
)
// PushScriptBuilder is a helper for building scripts whose data pushes use only OP_PUSHDATA* or OP_DATA_* opcodes.
// Empty data pushes are still encoded as OP_0.
type PushScriptBuilder struct {
script []byte
err error
}
func NewPushScriptBuilder() *PushScriptBuilder {
return &PushScriptBuilder{}
}
// canonicalDataSize returns the number of bytes the canonical encoding of the
// data will take.
func canonicalDataSize(data []byte) int {
dataLen := len(data)
// An empty push is encoded as a single OP_0 opcode. Unlike the upstream
// txscript builder, this builder never collapses single-byte values into
// "small integer" opcodes.
if dataLen == 0 {
return 1
}
if dataLen < txscript.OP_PUSHDATA1 {
return 1 + dataLen
} else if dataLen <= 0xff {
return 2 + dataLen
} else if dataLen <= 0xffff {
return 3 + dataLen
}
return 5 + dataLen
}
func pushDataToBytes(data []byte) []byte {
if len(data) == 0 {
return []byte{txscript.OP_0}
}
script := make([]byte, 0)
dataLen := len(data)
if dataLen < txscript.OP_PUSHDATA1 {
script = append(script, byte(txscript.OP_DATA_1-1+dataLen))
} else if dataLen <= 0xff {
script = append(script, txscript.OP_PUSHDATA1, byte(dataLen))
} else if dataLen <= 0xffff {
buf := make([]byte, 2)
binary.LittleEndian.PutUint16(buf, uint16(dataLen))
script = append(script, txscript.OP_PUSHDATA2)
script = append(script, buf...)
} else {
buf := make([]byte, 4)
binary.LittleEndian.PutUint32(buf, uint32(dataLen))
script = append(script, txscript.OP_PUSHDATA4)
script = append(script, buf...)
}
// Append the actual data.
script = append(script, data...)
return script
}
// AddData pushes the passed data to the end of the script. It automatically
// chooses canonical opcodes depending on the length of the data. A zero length
// buffer will lead to a push of empty data onto the stack (OP_0) and any push
// of data greater than MaxScriptElementSize will not modify the script since
// that is not allowed by the script engine. Also, the script will not be
// modified if pushing the data would cause the script to exceed the maximum
// allowed script engine size.
func (b *PushScriptBuilder) AddData(data []byte) *PushScriptBuilder {
if b.err != nil {
return b
}
// Pushes that would cause the script to exceed the largest allowed
// script size would result in a non-canonical script.
dataSize := canonicalDataSize(data)
if len(b.script)+dataSize > txscript.MaxScriptSize {
str := fmt.Sprintf("adding %d bytes of data would exceed the "+
"maximum allowed canonical script length of %d",
dataSize, txscript.MaxScriptSize)
b.err = txscript.ErrScriptNotCanonical(str)
return b
}
// Pushes larger than the max script element size would result in a
// script that is not canonical.
dataLen := len(data)
if dataLen > txscript.MaxScriptElementSize {
str := fmt.Sprintf("adding a data element of %d bytes would "+
"exceed the maximum allowed script element size of %d",
dataLen, txscript.MaxScriptElementSize)
b.err = txscript.ErrScriptNotCanonical(str)
return b
}
b.script = append(b.script, pushDataToBytes(data)...)
return b
}
// AddFullData should not typically be used by ordinary users as it does not
// include the checks which prevent data pushes larger than the maximum allowed
// sizes which leads to scripts that can't be executed. This is provided for
// testing purposes such as regression tests where sizes are intentionally made
// larger than allowed.
//
// Use AddData instead.
func (b *PushScriptBuilder) AddFullData(data []byte) *PushScriptBuilder {
if b.err != nil {
return b
}
b.script = append(b.script, pushDataToBytes(data)...)
return b
}
// AddOp pushes the passed opcode to the end of the script. The script will not
// be modified if pushing the opcode would cause the script to exceed the
// maximum allowed script engine size.
func (b *PushScriptBuilder) AddOp(opcode byte) *PushScriptBuilder {
if b.err != nil {
return b
}
// Pushes that would cause the script to exceed the largest allowed
// script size would result in a non-canonical script.
if len(b.script)+1 > txscript.MaxScriptSize {
str := fmt.Sprintf("adding an opcode would exceed the maximum "+
"allowed canonical script length of %d", txscript.MaxScriptSize)
b.err = txscript.ErrScriptNotCanonical(str)
return b
}
b.script = append(b.script, opcode)
return b
}
// AddOps pushes the passed opcodes to the end of the script. The script will
// not be modified if pushing the opcodes would cause the script to exceed the
// maximum allowed script engine size.
func (b *PushScriptBuilder) AddOps(opcodes []byte) *PushScriptBuilder {
if b.err != nil {
return b
}
// Pushes that would cause the script to exceed the largest allowed
// script size would result in a non-canonical script.
if len(b.script)+len(opcodes) > txscript.MaxScriptSize {
str := fmt.Sprintf("adding opcodes would exceed the maximum "+
"allowed canonical script length of %d", txscript.MaxScriptSize)
b.err = txscript.ErrScriptNotCanonical(str)
return b
}
b.script = append(b.script, opcodes...)
return b
}
// Script returns the currently built script. If any error occurred while
// building the script, the script built up to the point of the first error is
// returned along with the error.
func (b *PushScriptBuilder) Script() ([]byte, error) {
return b.script, b.err
}


@@ -1,81 +0,0 @@
package ordinals
// Tag represents a data field in an inscription envelope. Unrecognized odd tags are ignored. Unrecognized even tags make the inscription unbound.
type Tag uint8
var (
TagBody = Tag(0)
TagPointer = Tag(2)
// TagUnbound is unrecognized
TagUnbound = Tag(66)
TagContentType = Tag(1)
TagParent = Tag(3)
TagMetadata = Tag(5)
TagMetaprotocol = Tag(7)
TagContentEncoding = Tag(9)
TagDelegate = Tag(11)
// TagNop is unrecognized
TagNop = Tag(255)
)
var allTags = map[Tag]struct{}{
TagPointer: {},
TagContentType: {},
TagParent: {},
TagMetadata: {},
TagMetaprotocol: {},
TagContentEncoding: {},
TagDelegate: {},
}
func (t Tag) IsValid() bool {
_, ok := allTags[t]
return ok
}
var chunkedTags = map[Tag]struct{}{
TagMetadata: {},
}
func (t Tag) IsChunked() bool {
_, ok := chunkedTags[t]
return ok
}
func (t Tag) Bytes() []byte {
if t == TagBody {
return []byte{} // body tag is empty data push
}
return []byte{byte(t)}
}
func ParseTag(input interface{}) (Tag, error) {
switch input := input.(type) {
case Tag:
return input, nil
case int:
return Tag(input), nil
case int8:
return Tag(input), nil
case int16:
return Tag(input), nil
case int32:
return Tag(input), nil
case int64:
return Tag(input), nil
case uint:
return Tag(input), nil
case uint8:
return Tag(input), nil
case uint16:
return Tag(input), nil
case uint32:
return Tag(input), nil
case uint64:
return Tag(input), nil
default:
panic("invalid tag input type")
}
}


@@ -1,525 +0,0 @@
package postgres
import (
"context"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/wire"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/common/errs"
"github.com/gaze-network/indexer-network/core/types"
"github.com/gaze-network/indexer-network/modules/brc20/internal/datagateway"
"github.com/gaze-network/indexer-network/modules/brc20/internal/entity"
"github.com/gaze-network/indexer-network/modules/brc20/internal/ordinals"
"github.com/gaze-network/indexer-network/modules/brc20/internal/repository/postgres/gen"
"github.com/jackc/pgx/v5"
"github.com/samber/lo"
)
var _ datagateway.BRC20DataGateway = (*Repository)(nil)
// Warning: GetLatestBlock currently returns a types.BlockHeader with only the Height and Hash fields populated,
// because all current callers need only these fields. In the future, we may want to populate all fields for type safety.
func (r *Repository) GetLatestBlock(ctx context.Context) (types.BlockHeader, error) {
block, err := r.queries.GetLatestIndexedBlock(ctx)
if err != nil {
if errors.Is(err, pgx.ErrNoRows) {
return types.BlockHeader{}, errors.WithStack(errs.NotFound)
}
return types.BlockHeader{}, errors.Wrap(err, "error during query")
}
hash, err := chainhash.NewHashFromStr(block.Hash)
if err != nil {
return types.BlockHeader{}, errors.Wrap(err, "failed to parse block hash")
}
return types.BlockHeader{
Height: int64(block.Height),
Hash: *hash,
}, nil
}
// GetIndexedBlockByHeight implements datagateway.BRC20DataGateway.
func (r *Repository) GetIndexedBlockByHeight(ctx context.Context, height int64) (*entity.IndexedBlock, error) {
model, err := r.queries.GetIndexedBlockByHeight(ctx, int32(height))
if err != nil {
if errors.Is(err, pgx.ErrNoRows) {
return nil, errors.WithStack(errs.NotFound)
}
return nil, errors.Wrap(err, "error during query")
}
indexedBlock, err := mapIndexedBlockModelToType(model)
if err != nil {
return nil, errors.Wrap(err, "failed to parse indexed block model")
}
return &indexedBlock, nil
}
func (r *Repository) GetProcessorStats(ctx context.Context) (*entity.ProcessorStats, error) {
model, err := r.queries.GetLatestProcessorStats(ctx)
if err != nil {
if errors.Is(err, pgx.ErrNoRows) {
return nil, errors.WithStack(errs.NotFound)
}
return nil, errors.WithStack(err)
}
stats := mapProcessorStatsModelToType(model)
return &stats, nil
}
func (r *Repository) GetInscriptionTransfersInOutPoints(ctx context.Context, outPoints []wire.OutPoint) (map[ordinals.SatPoint][]*entity.InscriptionTransfer, error) {
txHashArr := lo.Map(outPoints, func(outPoint wire.OutPoint, _ int) string {
return outPoint.Hash.String()
})
txOutIdxArr := lo.Map(outPoints, func(outPoint wire.OutPoint, _ int) int32 {
return int32(outPoint.Index)
})
models, err := r.queries.GetInscriptionTransfersInOutPoints(ctx, gen.GetInscriptionTransfersInOutPointsParams{
TxHashArr: txHashArr,
TxOutIdxArr: txOutIdxArr,
})
if err != nil {
return nil, errors.WithStack(err)
}
results := make(map[ordinals.SatPoint][]*entity.InscriptionTransfer)
for _, model := range models {
inscriptionTransfer, err := mapInscriptionTransferModelToType(model)
if err != nil {
return nil, errors.WithStack(err)
}
results[inscriptionTransfer.NewSatPoint] = append(results[inscriptionTransfer.NewSatPoint], &inscriptionTransfer)
}
return results, nil
}
func (r *Repository) GetInscriptionEntriesByIds(ctx context.Context, ids []ordinals.InscriptionId) (map[ordinals.InscriptionId]*ordinals.InscriptionEntry, error) {
idStrs := lo.Map(ids, func(id ordinals.InscriptionId, _ int) string { return id.String() })
models, err := r.queries.GetInscriptionEntriesByIds(ctx, idStrs)
if err != nil {
return nil, errors.WithStack(err)
}
result := make(map[ordinals.InscriptionId]*ordinals.InscriptionEntry)
for _, model := range models {
inscriptionEntry, err := mapInscriptionEntryModelToType(model)
if err != nil {
return nil, errors.Wrap(err, "failed to parse inscription entry model")
}
result[inscriptionEntry.Id] = &inscriptionEntry
}
return result, nil
}
func (r *Repository) GetInscriptionNumbersByIds(ctx context.Context, ids []ordinals.InscriptionId) (map[ordinals.InscriptionId]int64, error) {
idStrs := lo.Map(ids, func(id ordinals.InscriptionId, _ int) string { return id.String() })
models, err := r.queries.GetInscriptionNumbersByIds(ctx, idStrs)
if err != nil {
return nil, errors.WithStack(err)
}
result := make(map[ordinals.InscriptionId]int64)
for _, model := range models {
inscriptionId, err := ordinals.NewInscriptionIdFromString(model.Id)
if err != nil {
return nil, errors.Wrap(err, "failed to parse inscription id")
}
result[inscriptionId] = model.Number
}
return result, nil
}
func (r *Repository) GetInscriptionParentsByIds(ctx context.Context, ids []ordinals.InscriptionId) (map[ordinals.InscriptionId]ordinals.InscriptionId, error) {
idStrs := lo.Map(ids, func(id ordinals.InscriptionId, _ int) string { return id.String() })
models, err := r.queries.GetInscriptionParentsByIds(ctx, idStrs)
if err != nil {
return nil, errors.WithStack(err)
}
result := make(map[ordinals.InscriptionId]ordinals.InscriptionId)
for _, model := range models {
if len(model.Parents) == 0 {
// no parent
continue
}
if len(model.Parents) > 1 {
// sanity check: should not happen, since ord 0.14 supports only one parent
continue
}
inscriptionId, err := ordinals.NewInscriptionIdFromString(model.Id)
if err != nil {
return nil, errors.Wrap(err, "failed to parse inscription id")
}
parentId, err := ordinals.NewInscriptionIdFromString(model.Parents[0])
if err != nil {
return nil, errors.Wrap(err, "failed to parse parent id")
}
result[inscriptionId] = parentId
}
return result, nil
}
func (r *Repository) GetLatestEventId(ctx context.Context) (int64, error) {
row, err := r.queries.GetLatestEventIds(ctx)
if err != nil {
return 0, errors.WithStack(err)
}
return max(row.EventDeployID.(int64), row.EventMintID.(int64), row.EventInscribeTransferID.(int64), row.EventTransferTransferID.(int64)), nil
}
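
`GetLatestEventId` asserts each column to `int64` without an ok-check, which panics if any column scans as NULL or a different integer type. A defensive variant, sketched under the assumption that a missing value should count as zero — `asInt64` and `latestEventId` are hypothetical helpers, not part of the repository:

```go
package main

import "fmt"

// asInt64 extracts an int64 from an interface{} column value, treating
// NULL (nil) or an unexpected type as zero instead of panicking.
func asInt64(v interface{}) int64 {
	n, ok := v.(int64)
	if !ok {
		return 0
	}
	return n
}

// latestEventId takes the greatest of the four event-table ids.
func latestEventId(deploy, mint, inscribe, transfer interface{}) int64 {
	return max(asInt64(deploy), asInt64(mint), asInt64(inscribe), asInt64(transfer))
}

func main() {
	fmt.Println(latestEventId(int64(5), nil, int64(9), int64(3))) // prints 9
}
```

The built-in `max` requires Go 1.21+; on older toolchains a small loop does the same job.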
func (r *Repository) GetBalancesBatchAtHeight(ctx context.Context, blockHeight uint64, queries []datagateway.GetBalancesBatchAtHeightQuery) (map[string]map[string]*entity.Balance, error) {
pkScripts := make([]string, 0)
ticks := make([]string, 0)
for _, query := range queries {
pkScripts = append(pkScripts, query.PkScriptHex)
ticks = append(ticks, query.Tick)
}
models, err := r.queries.GetBalancesBatchAtHeight(ctx, gen.GetBalancesBatchAtHeightParams{
PkscriptArr: pkScripts,
TickArr: ticks,
BlockHeight: int32(blockHeight),
})
if err != nil {
return nil, errors.WithStack(err)
}
result := make(map[string]map[string]*entity.Balance)
for _, model := range models {
balance, err := mapBalanceModelToType(model)
if err != nil {
return nil, errors.Wrap(err, "failed to parse balance model")
}
if _, ok := result[model.Pkscript]; !ok {
result[model.Pkscript] = make(map[string]*entity.Balance)
}
result[model.Pkscript][model.Tick] = &balance
}
return result, nil
}
func (r *Repository) GetEventInscribeTransfersByInscriptionIds(ctx context.Context, ids []ordinals.InscriptionId) (map[ordinals.InscriptionId]*entity.EventInscribeTransfer, error) {
idStrs := lo.Map(ids, func(id ordinals.InscriptionId, _ int) string { return id.String() })
models, err := r.queries.GetEventInscribeTransfersByInscriptionIds(ctx, idStrs)
if err != nil {
return nil, errors.WithStack(err)
}
result := make(map[ordinals.InscriptionId]*entity.EventInscribeTransfer)
for _, model := range models {
event, err := mapEventInscribeTransferModelToType(model)
if err != nil {
return nil, errors.Wrap(err, "failed to parse event inscribe transfer model")
}
result[event.InscriptionId] = &event
}
return result, nil
}
func (r *Repository) GetTickEntriesByTicks(ctx context.Context, ticks []string) (map[string]*entity.TickEntry, error) {
models, err := r.queries.GetTickEntriesByTicks(ctx, ticks)
if err != nil {
return nil, errors.WithStack(err)
}
result := make(map[string]*entity.TickEntry)
for _, model := range models {
tickEntry, err := mapTickEntryModelToType(model)
if err != nil {
return nil, errors.Wrap(err, "failed to parse tick entry model")
}
result[tickEntry.Tick] = &tickEntry
}
return result, nil
}
func (r *Repository) CreateIndexedBlock(ctx context.Context, block *entity.IndexedBlock) error {
params := mapIndexedBlockTypeToParams(*block)
if err := r.queries.CreateIndexedBlock(ctx, params); err != nil {
return errors.WithStack(err)
}
return nil
}
func (r *Repository) CreateProcessorStats(ctx context.Context, stats *entity.ProcessorStats) error {
params := mapProcessorStatsTypeToParams(*stats)
if err := r.queries.CreateProcessorStats(ctx, params); err != nil {
return errors.WithStack(err)
}
return nil
}
func (r *Repository) CreateTickEntries(ctx context.Context, blockHeight uint64, entries []*entity.TickEntry) error {
entryParams := make([]gen.CreateTickEntriesParams, 0)
for _, entry := range entries {
params, _, err := mapTickEntryTypeToParams(*entry, blockHeight)
if err != nil {
return errors.Wrap(err, "cannot map tick entry to create params")
}
entryParams = append(entryParams, params)
}
results := r.queries.CreateTickEntries(ctx, entryParams)
var execErrors []error
results.Exec(func(i int, err error) {
if err != nil {
execErrors = append(execErrors, err)
}
})
if len(execErrors) > 0 {
return errors.Wrap(errors.Join(execErrors...), "error during exec")
}
return nil
}
func (r *Repository) CreateTickEntryStates(ctx context.Context, blockHeight uint64, entryStates []*entity.TickEntry) error {
entryParams := make([]gen.CreateTickEntryStatesParams, 0)
for _, entry := range entryStates {
_, params, err := mapTickEntryTypeToParams(*entry, blockHeight)
if err != nil {
return errors.Wrap(err, "cannot map tick entry to create params")
}
entryParams = append(entryParams, params)
}
results := r.queries.CreateTickEntryStates(ctx, entryParams)
var execErrors []error
results.Exec(func(i int, err error) {
if err != nil {
execErrors = append(execErrors, err)
}
})
if len(execErrors) > 0 {
return errors.Wrap(errors.Join(execErrors...), "error during exec")
}
return nil
}
func (r *Repository) CreateInscriptionEntries(ctx context.Context, blockHeight uint64, entries []*ordinals.InscriptionEntry) error {
inscriptionEntryParams := make([]gen.CreateInscriptionEntriesParams, 0)
for _, entry := range entries {
params, _, err := mapInscriptionEntryTypeToParams(*entry, blockHeight)
if err != nil {
return errors.Wrap(err, "cannot map inscription entry to create params")
}
inscriptionEntryParams = append(inscriptionEntryParams, params)
}
results := r.queries.CreateInscriptionEntries(ctx, inscriptionEntryParams)
var execErrors []error
results.Exec(func(i int, err error) {
if err != nil {
execErrors = append(execErrors, err)
}
})
if len(execErrors) > 0 {
return errors.Wrap(errors.Join(execErrors...), "error during exec")
}
return nil
}
func (r *Repository) CreateInscriptionEntryStates(ctx context.Context, blockHeight uint64, entryStates []*ordinals.InscriptionEntry) error {
inscriptionEntryStatesParams := make([]gen.CreateInscriptionEntryStatesParams, 0)
for _, entry := range entryStates {
_, params, err := mapInscriptionEntryTypeToParams(*entry, blockHeight)
if err != nil {
return errors.Wrap(err, "cannot map inscription entry to create params")
}
inscriptionEntryStatesParams = append(inscriptionEntryStatesParams, params)
}
results := r.queries.CreateInscriptionEntryStates(ctx, inscriptionEntryStatesParams)
var execErrors []error
results.Exec(func(i int, err error) {
if err != nil {
execErrors = append(execErrors, err)
}
})
if len(execErrors) > 0 {
return errors.Wrap(errors.Join(execErrors...), "error during exec")
}
return nil
}
func (r *Repository) CreateInscriptionTransfers(ctx context.Context, transfers []*entity.InscriptionTransfer) error {
params := lo.Map(transfers, func(transfer *entity.InscriptionTransfer, _ int) gen.CreateInscriptionTransfersParams {
return mapInscriptionTransferTypeToParams(*transfer)
})
results := r.queries.CreateInscriptionTransfers(ctx, params)
var execErrors []error
results.Exec(func(i int, err error) {
if err != nil {
execErrors = append(execErrors, err)
}
})
if len(execErrors) > 0 {
return errors.Wrap(errors.Join(execErrors...), "error during exec")
}
return nil
}
func (r *Repository) CreateEventDeploys(ctx context.Context, events []*entity.EventDeploy) error {
params := make([]gen.CreateEventDeploysParams, 0)
for _, event := range events {
param, err := mapEventDeployTypeToParams(*event)
if err != nil {
return errors.Wrap(err, "cannot map event deploy to create params")
}
params = append(params, param)
}
results := r.queries.CreateEventDeploys(ctx, params)
var execErrors []error
results.Exec(func(i int, err error) {
if err != nil {
execErrors = append(execErrors, err)
}
})
if len(execErrors) > 0 {
return errors.Wrap(errors.Join(execErrors...), "error during exec")
}
return nil
}
func (r *Repository) CreateEventMints(ctx context.Context, events []*entity.EventMint) error {
params := make([]gen.CreateEventMintsParams, 0)
for _, event := range events {
param, err := mapEventMintTypeToParams(*event)
if err != nil {
return errors.Wrap(err, "cannot map event mint to create params")
}
params = append(params, param)
}
results := r.queries.CreateEventMints(ctx, params)
var execErrors []error
results.Exec(func(i int, err error) {
if err != nil {
execErrors = append(execErrors, err)
}
})
if len(execErrors) > 0 {
return errors.Wrap(errors.Join(execErrors...), "error during exec")
}
return nil
}
func (r *Repository) CreateEventInscribeTransfers(ctx context.Context, events []*entity.EventInscribeTransfer) error {
params := make([]gen.CreateEventInscribeTransfersParams, 0)
for _, event := range events {
param, err := mapEventInscribeTransferTypeToParams(*event)
if err != nil {
return errors.Wrap(err, "cannot map event inscribe transfer to create params")
}
params = append(params, param)
}
results := r.queries.CreateEventInscribeTransfers(ctx, params)
var execErrors []error
results.Exec(func(i int, err error) {
if err != nil {
execErrors = append(execErrors, err)
}
})
if len(execErrors) > 0 {
return errors.Wrap(errors.Join(execErrors...), "error during exec")
}
return nil
}
func (r *Repository) CreateEventTransferTransfers(ctx context.Context, events []*entity.EventTransferTransfer) error {
params := make([]gen.CreateEventTransferTransfersParams, 0)
for _, event := range events {
param, err := mapEventTransferTransferTypeToParams(*event)
if err != nil {
return errors.Wrap(err, "cannot map event transfer transfer to create params")
}
params = append(params, param)
}
results := r.queries.CreateEventTransferTransfers(ctx, params)
var execErrors []error
results.Exec(func(i int, err error) {
if err != nil {
execErrors = append(execErrors, err)
}
})
if len(execErrors) > 0 {
return errors.Wrap(errors.Join(execErrors...), "error during exec")
}
return nil
}
func (r *Repository) DeleteIndexedBlocksSinceHeight(ctx context.Context, height uint64) error {
if err := r.queries.DeleteIndexedBlocksSinceHeight(ctx, int32(height)); err != nil {
return errors.Wrap(err, "error during exec")
}
return nil
}
func (r *Repository) DeleteProcessorStatsSinceHeight(ctx context.Context, height uint64) error {
if err := r.queries.DeleteProcessorStatsSinceHeight(ctx, int32(height)); err != nil {
return errors.Wrap(err, "error during exec")
}
return nil
}
func (r *Repository) DeleteTickEntriesSinceHeight(ctx context.Context, height uint64) error {
if err := r.queries.DeleteTickEntriesSinceHeight(ctx, int32(height)); err != nil {
return errors.Wrap(err, "error during exec")
}
return nil
}
func (r *Repository) DeleteTickEntryStatesSinceHeight(ctx context.Context, height uint64) error {
if err := r.queries.DeleteTickEntryStatesSinceHeight(ctx, int32(height)); err != nil {
return errors.Wrap(err, "error during exec")
}
return nil
}
func (r *Repository) DeleteEventDeploysSinceHeight(ctx context.Context, height uint64) error {
if err := r.queries.DeleteEventDeploysSinceHeight(ctx, int32(height)); err != nil {
return errors.Wrap(err, "error during exec")
}
return nil
}
func (r *Repository) DeleteEventMintsSinceHeight(ctx context.Context, height uint64) error {
if err := r.queries.DeleteEventMintsSinceHeight(ctx, int32(height)); err != nil {
return errors.Wrap(err, "error during exec")
}
return nil
}
func (r *Repository) DeleteEventInscribeTransfersSinceHeight(ctx context.Context, height uint64) error {
if err := r.queries.DeleteEventInscribeTransfersSinceHeight(ctx, int32(height)); err != nil {
return errors.Wrap(err, "error during exec")
}
return nil
}
func (r *Repository) DeleteEventTransferTransfersSinceHeight(ctx context.Context, height uint64) error {
if err := r.queries.DeleteEventTransferTransfersSinceHeight(ctx, int32(height)); err != nil {
return errors.Wrap(err, "error during exec")
}
return nil
}
func (r *Repository) DeleteBalancesSinceHeight(ctx context.Context, height uint64) error {
if err := r.queries.DeleteBalancesSinceHeight(ctx, int32(height)); err != nil {
return errors.Wrap(err, "error during exec")
}
return nil
}
func (r *Repository) DeleteInscriptionEntriesSinceHeight(ctx context.Context, height uint64) error {
if err := r.queries.DeleteInscriptionEntriesSinceHeight(ctx, int32(height)); err != nil {
return errors.Wrap(err, "error during exec")
}
return nil
}
func (r *Repository) DeleteInscriptionEntryStatesSinceHeight(ctx context.Context, height uint64) error {
if err := r.queries.DeleteInscriptionEntryStatesSinceHeight(ctx, int32(height)); err != nil {
return errors.Wrap(err, "error during exec")
}
return nil
}
func (r *Repository) DeleteInscriptionTransfersSinceHeight(ctx context.Context, height uint64) error {
if err := r.queries.DeleteInscriptionTransfersSinceHeight(ctx, int32(height)); err != nil {
return errors.Wrap(err, "error during exec")
}
return nil
}


@@ -1,629 +0,0 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.26.0
// source: batch.go
package gen
import (
"context"
"errors"
"github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgtype"
)
var (
ErrBatchAlreadyClosed = errors.New("batch already closed")
)
const createEventDeploys = `-- name: CreateEventDeploys :batchexec
INSERT INTO "brc20_event_deploys" ("inscription_id", "inscription_number", "tick", "original_tick", "tx_hash", "block_height", "tx_index", "timestamp", "pkscript", "satpoint", "total_supply", "decimals", "limit_per_mint", "is_self_mint") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14)
`
type CreateEventDeploysBatchResults struct {
br pgx.BatchResults
tot int
closed bool
}
type CreateEventDeploysParams struct {
InscriptionID string
InscriptionNumber int64
Tick string
OriginalTick string
TxHash string
BlockHeight int32
TxIndex int32
Timestamp pgtype.Timestamp
Pkscript string
Satpoint string
TotalSupply pgtype.Numeric
Decimals int16
LimitPerMint pgtype.Numeric
IsSelfMint bool
}
func (q *Queries) CreateEventDeploys(ctx context.Context, arg []CreateEventDeploysParams) *CreateEventDeploysBatchResults {
batch := &pgx.Batch{}
for _, a := range arg {
vals := []interface{}{
a.InscriptionID,
a.InscriptionNumber,
a.Tick,
a.OriginalTick,
a.TxHash,
a.BlockHeight,
a.TxIndex,
a.Timestamp,
a.Pkscript,
a.Satpoint,
a.TotalSupply,
a.Decimals,
a.LimitPerMint,
a.IsSelfMint,
}
batch.Queue(createEventDeploys, vals...)
}
br := q.db.SendBatch(ctx, batch)
return &CreateEventDeploysBatchResults{br, len(arg), false}
}
func (b *CreateEventDeploysBatchResults) Exec(f func(int, error)) {
defer b.br.Close()
for t := 0; t < b.tot; t++ {
if b.closed {
if f != nil {
f(t, ErrBatchAlreadyClosed)
}
continue
}
_, err := b.br.Exec()
if f != nil {
f(t, err)
}
}
}
func (b *CreateEventDeploysBatchResults) Close() error {
b.closed = true
return b.br.Close()
}
const createEventInscribeTransfers = `-- name: CreateEventInscribeTransfers :batchexec
INSERT INTO "brc20_event_inscribe_transfers" ("inscription_id", "inscription_number", "tick", "original_tick", "tx_hash", "block_height", "tx_index", "timestamp", "pkscript", "satpoint", "output_index", "sats_amount", "amount") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13)
`
type CreateEventInscribeTransfersBatchResults struct {
br pgx.BatchResults
tot int
closed bool
}
type CreateEventInscribeTransfersParams struct {
InscriptionID string
InscriptionNumber int64
Tick string
OriginalTick string
TxHash string
BlockHeight int32
TxIndex int32
Timestamp pgtype.Timestamp
Pkscript string
Satpoint string
OutputIndex int32
SatsAmount int64
Amount pgtype.Numeric
}
func (q *Queries) CreateEventInscribeTransfers(ctx context.Context, arg []CreateEventInscribeTransfersParams) *CreateEventInscribeTransfersBatchResults {
batch := &pgx.Batch{}
for _, a := range arg {
vals := []interface{}{
a.InscriptionID,
a.InscriptionNumber,
a.Tick,
a.OriginalTick,
a.TxHash,
a.BlockHeight,
a.TxIndex,
a.Timestamp,
a.Pkscript,
a.Satpoint,
a.OutputIndex,
a.SatsAmount,
a.Amount,
}
batch.Queue(createEventInscribeTransfers, vals...)
}
br := q.db.SendBatch(ctx, batch)
return &CreateEventInscribeTransfersBatchResults{br, len(arg), false}
}
func (b *CreateEventInscribeTransfersBatchResults) Exec(f func(int, error)) {
defer b.br.Close()
for t := 0; t < b.tot; t++ {
if b.closed {
if f != nil {
f(t, ErrBatchAlreadyClosed)
}
continue
}
_, err := b.br.Exec()
if f != nil {
f(t, err)
}
}
}
func (b *CreateEventInscribeTransfersBatchResults) Close() error {
b.closed = true
return b.br.Close()
}
const createEventMints = `-- name: CreateEventMints :batchexec
INSERT INTO "brc20_event_mints" ("inscription_id", "inscription_number", "tick", "original_tick", "tx_hash", "block_height", "tx_index", "timestamp", "pkscript", "satpoint", "amount", "parent_id") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)
`
type CreateEventMintsBatchResults struct {
br pgx.BatchResults
tot int
closed bool
}
type CreateEventMintsParams struct {
InscriptionID string
InscriptionNumber int64
Tick string
OriginalTick string
TxHash string
BlockHeight int32
TxIndex int32
Timestamp pgtype.Timestamp
Pkscript string
Satpoint string
Amount pgtype.Numeric
ParentID pgtype.Text
}
func (q *Queries) CreateEventMints(ctx context.Context, arg []CreateEventMintsParams) *CreateEventMintsBatchResults {
batch := &pgx.Batch{}
for _, a := range arg {
vals := []interface{}{
a.InscriptionID,
a.InscriptionNumber,
a.Tick,
a.OriginalTick,
a.TxHash,
a.BlockHeight,
a.TxIndex,
a.Timestamp,
a.Pkscript,
a.Satpoint,
a.Amount,
a.ParentID,
}
batch.Queue(createEventMints, vals...)
}
br := q.db.SendBatch(ctx, batch)
return &CreateEventMintsBatchResults{br, len(arg), false}
}
func (b *CreateEventMintsBatchResults) Exec(f func(int, error)) {
defer b.br.Close()
for t := 0; t < b.tot; t++ {
if b.closed {
if f != nil {
f(t, ErrBatchAlreadyClosed)
}
continue
}
_, err := b.br.Exec()
if f != nil {
f(t, err)
}
}
}
func (b *CreateEventMintsBatchResults) Close() error {
b.closed = true
return b.br.Close()
}
const createEventTransferTransfers = `-- name: CreateEventTransferTransfers :batchexec
INSERT INTO "brc20_event_transfer_transfers" ("inscription_id", "inscription_number", "tick", "original_tick", "tx_hash", "block_height", "tx_index", "timestamp", "from_pkscript", "from_satpoint", "from_input_index", "to_pkscript", "to_satpoint", "to_output_index", "spent_as_fee", "amount") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, $16)
`
type CreateEventTransferTransfersBatchResults struct {
br pgx.BatchResults
tot int
closed bool
}
type CreateEventTransferTransfersParams struct {
InscriptionID string
InscriptionNumber int64
Tick string
OriginalTick string
TxHash string
BlockHeight int32
TxIndex int32
Timestamp pgtype.Timestamp
FromPkscript string
FromSatpoint string
FromInputIndex int32
ToPkscript string
ToSatpoint string
ToOutputIndex int32
SpentAsFee bool
Amount pgtype.Numeric
}
func (q *Queries) CreateEventTransferTransfers(ctx context.Context, arg []CreateEventTransferTransfersParams) *CreateEventTransferTransfersBatchResults {
batch := &pgx.Batch{}
for _, a := range arg {
vals := []interface{}{
a.InscriptionID,
a.InscriptionNumber,
a.Tick,
a.OriginalTick,
a.TxHash,
a.BlockHeight,
a.TxIndex,
a.Timestamp,
a.FromPkscript,
a.FromSatpoint,
a.FromInputIndex,
a.ToPkscript,
a.ToSatpoint,
a.ToOutputIndex,
a.SpentAsFee,
a.Amount,
}
batch.Queue(createEventTransferTransfers, vals...)
}
br := q.db.SendBatch(ctx, batch)
return &CreateEventTransferTransfersBatchResults{br, len(arg), false}
}
func (b *CreateEventTransferTransfersBatchResults) Exec(f func(int, error)) {
defer b.br.Close()
for t := 0; t < b.tot; t++ {
if b.closed {
if f != nil {
f(t, ErrBatchAlreadyClosed)
}
continue
}
_, err := b.br.Exec()
if f != nil {
f(t, err)
}
}
}
func (b *CreateEventTransferTransfersBatchResults) Close() error {
b.closed = true
return b.br.Close()
}
const createInscriptionEntries = `-- name: CreateInscriptionEntries :batchexec
INSERT INTO "brc20_inscription_entries" ("id", "number", "sequence_number", "delegate", "metadata", "metaprotocol", "parents", "pointer", "content", "content_encoding", "content_type", "cursed", "cursed_for_brc20", "created_at", "created_at_height") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15)
`
type CreateInscriptionEntriesBatchResults struct {
br pgx.BatchResults
tot int
closed bool
}
type CreateInscriptionEntriesParams struct {
Id string
Number int64
SequenceNumber int64
Delegate pgtype.Text
Metadata []byte
Metaprotocol pgtype.Text
Parents []string
Pointer pgtype.Int8
Content []byte
ContentEncoding pgtype.Text
ContentType pgtype.Text
Cursed bool
CursedForBrc20 bool
CreatedAt pgtype.Timestamp
CreatedAtHeight int32
}
func (q *Queries) CreateInscriptionEntries(ctx context.Context, arg []CreateInscriptionEntriesParams) *CreateInscriptionEntriesBatchResults {
batch := &pgx.Batch{}
for _, a := range arg {
vals := []interface{}{
a.Id,
a.Number,
a.SequenceNumber,
a.Delegate,
a.Metadata,
a.Metaprotocol,
a.Parents,
a.Pointer,
a.Content,
a.ContentEncoding,
a.ContentType,
a.Cursed,
a.CursedForBrc20,
a.CreatedAt,
a.CreatedAtHeight,
}
batch.Queue(createInscriptionEntries, vals...)
}
br := q.db.SendBatch(ctx, batch)
return &CreateInscriptionEntriesBatchResults{br, len(arg), false}
}
func (b *CreateInscriptionEntriesBatchResults) Exec(f func(int, error)) {
defer b.br.Close()
for t := 0; t < b.tot; t++ {
if b.closed {
if f != nil {
f(t, ErrBatchAlreadyClosed)
}
continue
}
_, err := b.br.Exec()
if f != nil {
f(t, err)
}
}
}
func (b *CreateInscriptionEntriesBatchResults) Close() error {
b.closed = true
return b.br.Close()
}
const createInscriptionEntryStates = `-- name: CreateInscriptionEntryStates :batchexec
INSERT INTO "brc20_inscription_entry_states" ("id", "block_height", "transfer_count") VALUES ($1, $2, $3)
`
type CreateInscriptionEntryStatesBatchResults struct {
br pgx.BatchResults
tot int
closed bool
}
type CreateInscriptionEntryStatesParams struct {
Id string
BlockHeight int32
TransferCount int32
}
func (q *Queries) CreateInscriptionEntryStates(ctx context.Context, arg []CreateInscriptionEntryStatesParams) *CreateInscriptionEntryStatesBatchResults {
batch := &pgx.Batch{}
for _, a := range arg {
vals := []interface{}{
a.Id,
a.BlockHeight,
a.TransferCount,
}
batch.Queue(createInscriptionEntryStates, vals...)
}
br := q.db.SendBatch(ctx, batch)
return &CreateInscriptionEntryStatesBatchResults{br, len(arg), false}
}
func (b *CreateInscriptionEntryStatesBatchResults) Exec(f func(int, error)) {
defer b.br.Close()
for t := 0; t < b.tot; t++ {
if b.closed {
if f != nil {
f(t, ErrBatchAlreadyClosed)
}
continue
}
_, err := b.br.Exec()
if f != nil {
f(t, err)
}
}
}
func (b *CreateInscriptionEntryStatesBatchResults) Close() error {
b.closed = true
return b.br.Close()
}
const createInscriptionTransfers = `-- name: CreateInscriptionTransfers :batchexec
INSERT INTO "brc20_inscription_transfers" ("inscription_id", "block_height", "tx_index", "tx_hash", "from_input_index", "old_satpoint_tx_hash", "old_satpoint_out_idx", "old_satpoint_offset", "new_satpoint_tx_hash", "new_satpoint_out_idx", "new_satpoint_offset", "new_pkscript", "new_output_value", "sent_as_fee", "transfer_count") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15)
`
type CreateInscriptionTransfersBatchResults struct {
br pgx.BatchResults
tot int
closed bool
}
type CreateInscriptionTransfersParams struct {
InscriptionID string
BlockHeight int32
TxIndex int32
TxHash string
FromInputIndex int32
OldSatpointTxHash pgtype.Text
OldSatpointOutIdx pgtype.Int4
OldSatpointOffset pgtype.Int8
NewSatpointTxHash pgtype.Text
NewSatpointOutIdx pgtype.Int4
NewSatpointOffset pgtype.Int8
NewPkscript string
NewOutputValue int64
SentAsFee bool
TransferCount int32
}
func (q *Queries) CreateInscriptionTransfers(ctx context.Context, arg []CreateInscriptionTransfersParams) *CreateInscriptionTransfersBatchResults {
batch := &pgx.Batch{}
for _, a := range arg {
vals := []interface{}{
a.InscriptionID,
a.BlockHeight,
a.TxIndex,
a.TxHash,
a.FromInputIndex,
a.OldSatpointTxHash,
a.OldSatpointOutIdx,
a.OldSatpointOffset,
a.NewSatpointTxHash,
a.NewSatpointOutIdx,
a.NewSatpointOffset,
a.NewPkscript,
a.NewOutputValue,
a.SentAsFee,
a.TransferCount,
}
batch.Queue(createInscriptionTransfers, vals...)
}
br := q.db.SendBatch(ctx, batch)
return &CreateInscriptionTransfersBatchResults{br, len(arg), false}
}
func (b *CreateInscriptionTransfersBatchResults) Exec(f func(int, error)) {
defer b.br.Close()
for t := 0; t < b.tot; t++ {
if b.closed {
if f != nil {
f(t, ErrBatchAlreadyClosed)
}
continue
}
_, err := b.br.Exec()
if f != nil {
f(t, err)
}
}
}
func (b *CreateInscriptionTransfersBatchResults) Close() error {
b.closed = true
return b.br.Close()
}
const createTickEntries = `-- name: CreateTickEntries :batchexec
INSERT INTO "brc20_tick_entries" ("tick", "original_tick", "total_supply", "decimals", "limit_per_mint", "is_self_mint", "deploy_inscription_id", "deployed_at", "deployed_at_height") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)
`
type CreateTickEntriesBatchResults struct {
br pgx.BatchResults
tot int
closed bool
}
type CreateTickEntriesParams struct {
Tick string
OriginalTick string
TotalSupply pgtype.Numeric
Decimals int16
LimitPerMint pgtype.Numeric
IsSelfMint bool
DeployInscriptionID string
DeployedAt pgtype.Timestamp
DeployedAtHeight int32
}
func (q *Queries) CreateTickEntries(ctx context.Context, arg []CreateTickEntriesParams) *CreateTickEntriesBatchResults {
batch := &pgx.Batch{}
for _, a := range arg {
vals := []interface{}{
a.Tick,
a.OriginalTick,
a.TotalSupply,
a.Decimals,
a.LimitPerMint,
a.IsSelfMint,
a.DeployInscriptionID,
a.DeployedAt,
a.DeployedAtHeight,
}
batch.Queue(createTickEntries, vals...)
}
br := q.db.SendBatch(ctx, batch)
return &CreateTickEntriesBatchResults{br, len(arg), false}
}
func (b *CreateTickEntriesBatchResults) Exec(f func(int, error)) {
defer b.br.Close()
for t := 0; t < b.tot; t++ {
if b.closed {
if f != nil {
f(t, ErrBatchAlreadyClosed)
}
continue
}
_, err := b.br.Exec()
if f != nil {
f(t, err)
}
}
}
func (b *CreateTickEntriesBatchResults) Close() error {
b.closed = true
return b.br.Close()
}
const createTickEntryStates = `-- name: CreateTickEntryStates :batchexec
INSERT INTO "brc20_tick_entry_states" ("tick", "block_height", "minted_amount", "burned_amount", "completed_at", "completed_at_height") VALUES ($1, $2, $3, $4, $5, $6)
`
type CreateTickEntryStatesBatchResults struct {
br pgx.BatchResults
tot int
closed bool
}
type CreateTickEntryStatesParams struct {
Tick string
BlockHeight int32
MintedAmount pgtype.Numeric
BurnedAmount pgtype.Numeric
CompletedAt pgtype.Timestamp
CompletedAtHeight pgtype.Int4
}
func (q *Queries) CreateTickEntryStates(ctx context.Context, arg []CreateTickEntryStatesParams) *CreateTickEntryStatesBatchResults {
batch := &pgx.Batch{}
for _, a := range arg {
vals := []interface{}{
a.Tick,
a.BlockHeight,
a.MintedAmount,
a.BurnedAmount,
a.CompletedAt,
a.CompletedAtHeight,
}
batch.Queue(createTickEntryStates, vals...)
}
br := q.db.SendBatch(ctx, batch)
return &CreateTickEntryStatesBatchResults{br, len(arg), false}
}
func (b *CreateTickEntryStatesBatchResults) Exec(f func(int, error)) {
defer b.br.Close()
for t := 0; t < b.tot; t++ {
if b.closed {
if f != nil {
f(t, ErrBatchAlreadyClosed)
}
continue
}
_, err := b.br.Exec()
if f != nil {
f(t, err)
}
}
}
func (b *CreateTickEntryStatesBatchResults) Close() error {
b.closed = true
return b.br.Close()
}


@@ -1,593 +0,0 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.26.0
// source: data.sql
package gen
import (
"context"
"github.com/jackc/pgx/v5/pgtype"
)
const createIndexedBlock = `-- name: CreateIndexedBlock :exec
INSERT INTO "brc20_indexed_blocks" ("height", "hash", "event_hash", "cumulative_event_hash") VALUES ($1, $2, $3, $4)
`
type CreateIndexedBlockParams struct {
Height int32
Hash string
EventHash string
CumulativeEventHash string
}
func (q *Queries) CreateIndexedBlock(ctx context.Context, arg CreateIndexedBlockParams) error {
_, err := q.db.Exec(ctx, createIndexedBlock,
arg.Height,
arg.Hash,
arg.EventHash,
arg.CumulativeEventHash,
)
return err
}
const createProcessorStats = `-- name: CreateProcessorStats :exec
INSERT INTO "brc20_processor_stats" ("block_height", "cursed_inscription_count", "blessed_inscription_count", "lost_sats") VALUES ($1, $2, $3, $4)
`
type CreateProcessorStatsParams struct {
BlockHeight int32
CursedInscriptionCount int32
BlessedInscriptionCount int32
LostSats int64
}
func (q *Queries) CreateProcessorStats(ctx context.Context, arg CreateProcessorStatsParams) error {
_, err := q.db.Exec(ctx, createProcessorStats,
arg.BlockHeight,
arg.CursedInscriptionCount,
arg.BlessedInscriptionCount,
arg.LostSats,
)
return err
}
const deleteBalancesSinceHeight = `-- name: DeleteBalancesSinceHeight :exec
DELETE FROM "brc20_balances" WHERE "block_height" >= $1
`
func (q *Queries) DeleteBalancesSinceHeight(ctx context.Context, blockHeight int32) error {
_, err := q.db.Exec(ctx, deleteBalancesSinceHeight, blockHeight)
return err
}
const deleteEventDeploysSinceHeight = `-- name: DeleteEventDeploysSinceHeight :exec
DELETE FROM "brc20_event_deploys" WHERE "block_height" >= $1
`
func (q *Queries) DeleteEventDeploysSinceHeight(ctx context.Context, blockHeight int32) error {
_, err := q.db.Exec(ctx, deleteEventDeploysSinceHeight, blockHeight)
return err
}
const deleteEventInscribeTransfersSinceHeight = `-- name: DeleteEventInscribeTransfersSinceHeight :exec
DELETE FROM "brc20_event_inscribe_transfers" WHERE "block_height" >= $1
`
func (q *Queries) DeleteEventInscribeTransfersSinceHeight(ctx context.Context, blockHeight int32) error {
_, err := q.db.Exec(ctx, deleteEventInscribeTransfersSinceHeight, blockHeight)
return err
}
const deleteEventMintsSinceHeight = `-- name: DeleteEventMintsSinceHeight :exec
DELETE FROM "brc20_event_mints" WHERE "block_height" >= $1
`
func (q *Queries) DeleteEventMintsSinceHeight(ctx context.Context, blockHeight int32) error {
_, err := q.db.Exec(ctx, deleteEventMintsSinceHeight, blockHeight)
return err
}
const deleteEventTransferTransfersSinceHeight = `-- name: DeleteEventTransferTransfersSinceHeight :exec
DELETE FROM "brc20_event_transfer_transfers" WHERE "block_height" >= $1
`
func (q *Queries) DeleteEventTransferTransfersSinceHeight(ctx context.Context, blockHeight int32) error {
_, err := q.db.Exec(ctx, deleteEventTransferTransfersSinceHeight, blockHeight)
return err
}
const deleteIndexedBlocksSinceHeight = `-- name: DeleteIndexedBlocksSinceHeight :exec
DELETE FROM "brc20_indexed_blocks" WHERE "height" >= $1
`
func (q *Queries) DeleteIndexedBlocksSinceHeight(ctx context.Context, height int32) error {
_, err := q.db.Exec(ctx, deleteIndexedBlocksSinceHeight, height)
return err
}
const deleteInscriptionEntriesSinceHeight = `-- name: DeleteInscriptionEntriesSinceHeight :exec
DELETE FROM "brc20_inscription_entries" WHERE "created_at_height" >= $1
`
func (q *Queries) DeleteInscriptionEntriesSinceHeight(ctx context.Context, createdAtHeight int32) error {
_, err := q.db.Exec(ctx, deleteInscriptionEntriesSinceHeight, createdAtHeight)
return err
}
const deleteInscriptionEntryStatesSinceHeight = `-- name: DeleteInscriptionEntryStatesSinceHeight :exec
DELETE FROM "brc20_inscription_entry_states" WHERE "block_height" >= $1
`
func (q *Queries) DeleteInscriptionEntryStatesSinceHeight(ctx context.Context, blockHeight int32) error {
_, err := q.db.Exec(ctx, deleteInscriptionEntryStatesSinceHeight, blockHeight)
return err
}
const deleteInscriptionTransfersSinceHeight = `-- name: DeleteInscriptionTransfersSinceHeight :exec
DELETE FROM "brc20_inscription_transfers" WHERE "block_height" >= $1
`
func (q *Queries) DeleteInscriptionTransfersSinceHeight(ctx context.Context, blockHeight int32) error {
_, err := q.db.Exec(ctx, deleteInscriptionTransfersSinceHeight, blockHeight)
return err
}
const deleteProcessorStatsSinceHeight = `-- name: DeleteProcessorStatsSinceHeight :exec
DELETE FROM "brc20_processor_stats" WHERE "block_height" >= $1
`
func (q *Queries) DeleteProcessorStatsSinceHeight(ctx context.Context, blockHeight int32) error {
_, err := q.db.Exec(ctx, deleteProcessorStatsSinceHeight, blockHeight)
return err
}
const deleteTickEntriesSinceHeight = `-- name: DeleteTickEntriesSinceHeight :exec
DELETE FROM "brc20_tick_entries" WHERE "deployed_at_height" >= $1
`
func (q *Queries) DeleteTickEntriesSinceHeight(ctx context.Context, deployedAtHeight int32) error {
_, err := q.db.Exec(ctx, deleteTickEntriesSinceHeight, deployedAtHeight)
return err
}
const deleteTickEntryStatesSinceHeight = `-- name: DeleteTickEntryStatesSinceHeight :exec
DELETE FROM "brc20_tick_entry_states" WHERE "block_height" >= $1
`
func (q *Queries) DeleteTickEntryStatesSinceHeight(ctx context.Context, blockHeight int32) error {
_, err := q.db.Exec(ctx, deleteTickEntryStatesSinceHeight, blockHeight)
return err
}
const getBalancesBatchAtHeight = `-- name: GetBalancesBatchAtHeight :many
SELECT DISTINCT ON ("brc20_balances"."pkscript", "brc20_balances"."tick") brc20_balances.pkscript, brc20_balances.block_height, brc20_balances.tick, brc20_balances.overall_balance, brc20_balances.available_balance FROM "brc20_balances"
INNER JOIN (
SELECT
unnest($1::text[]) AS "pkscript",
unnest($2::text[]) AS "tick"
) "queries" ON "brc20_balances"."pkscript" = "queries"."pkscript" AND "brc20_balances"."tick" = "queries"."tick" AND "brc20_balances"."block_height" <= $3
ORDER BY "brc20_balances"."pkscript", "brc20_balances"."tick", "block_height" DESC
`
type GetBalancesBatchAtHeightParams struct {
PkscriptArr []string
TickArr []string
BlockHeight int32
}
func (q *Queries) GetBalancesBatchAtHeight(ctx context.Context, arg GetBalancesBatchAtHeightParams) ([]Brc20Balance, error) {
rows, err := q.db.Query(ctx, getBalancesBatchAtHeight, arg.PkscriptArr, arg.TickArr, arg.BlockHeight)
if err != nil {
return nil, err
}
defer rows.Close()
var items []Brc20Balance
for rows.Next() {
var i Brc20Balance
if err := rows.Scan(
&i.Pkscript,
&i.BlockHeight,
&i.Tick,
&i.OverallBalance,
&i.AvailableBalance,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const getEventInscribeTransfersByInscriptionIds = `-- name: GetEventInscribeTransfersByInscriptionIds :many
SELECT id, inscription_id, inscription_number, tick, original_tick, tx_hash, block_height, tx_index, timestamp, pkscript, satpoint, output_index, sats_amount, amount FROM "brc20_event_inscribe_transfers" WHERE "inscription_id" = ANY($1::text[])
`
func (q *Queries) GetEventInscribeTransfersByInscriptionIds(ctx context.Context, inscriptionIds []string) ([]Brc20EventInscribeTransfer, error) {
rows, err := q.db.Query(ctx, getEventInscribeTransfersByInscriptionIds, inscriptionIds)
if err != nil {
return nil, err
}
defer rows.Close()
var items []Brc20EventInscribeTransfer
for rows.Next() {
var i Brc20EventInscribeTransfer
if err := rows.Scan(
&i.Id,
&i.InscriptionID,
&i.InscriptionNumber,
&i.Tick,
&i.OriginalTick,
&i.TxHash,
&i.BlockHeight,
&i.TxIndex,
&i.Timestamp,
&i.Pkscript,
&i.Satpoint,
&i.OutputIndex,
&i.SatsAmount,
&i.Amount,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const getIndexedBlockByHeight = `-- name: GetIndexedBlockByHeight :one
SELECT height, hash, event_hash, cumulative_event_hash FROM "brc20_indexed_blocks" WHERE "height" = $1
`
func (q *Queries) GetIndexedBlockByHeight(ctx context.Context, height int32) (Brc20IndexedBlock, error) {
row := q.db.QueryRow(ctx, getIndexedBlockByHeight, height)
var i Brc20IndexedBlock
err := row.Scan(
&i.Height,
&i.Hash,
&i.EventHash,
&i.CumulativeEventHash,
)
return i, err
}
const getInscriptionEntriesByIds = `-- name: GetInscriptionEntriesByIds :many
WITH "states" AS (
-- select latest state
SELECT DISTINCT ON ("id") id, block_height, transfer_count FROM "brc20_inscription_entry_states" WHERE "id" = ANY($1::text[]) ORDER BY "id", "block_height" DESC
)
SELECT brc20_inscription_entries.id, number, sequence_number, delegate, metadata, metaprotocol, parents, pointer, content, content_encoding, content_type, cursed, cursed_for_brc20, created_at, created_at_height, states.id, block_height, transfer_count FROM "brc20_inscription_entries"
LEFT JOIN "states" ON "brc20_inscription_entries"."id" = "states"."id"
WHERE "brc20_inscription_entries"."id" = ANY($1::text[])
`
type GetInscriptionEntriesByIdsRow struct {
Id string
Number int64
SequenceNumber int64
Delegate pgtype.Text
Metadata []byte
Metaprotocol pgtype.Text
Parents []string
Pointer pgtype.Int8
Content []byte
ContentEncoding pgtype.Text
ContentType pgtype.Text
Cursed bool
CursedForBrc20 bool
CreatedAt pgtype.Timestamp
CreatedAtHeight int32
Id_2 pgtype.Text
BlockHeight pgtype.Int4
TransferCount pgtype.Int4
}
func (q *Queries) GetInscriptionEntriesByIds(ctx context.Context, inscriptionIds []string) ([]GetInscriptionEntriesByIdsRow, error) {
rows, err := q.db.Query(ctx, getInscriptionEntriesByIds, inscriptionIds)
if err != nil {
return nil, err
}
defer rows.Close()
var items []GetInscriptionEntriesByIdsRow
for rows.Next() {
var i GetInscriptionEntriesByIdsRow
if err := rows.Scan(
&i.Id,
&i.Number,
&i.SequenceNumber,
&i.Delegate,
&i.Metadata,
&i.Metaprotocol,
&i.Parents,
&i.Pointer,
&i.Content,
&i.ContentEncoding,
&i.ContentType,
&i.Cursed,
&i.CursedForBrc20,
&i.CreatedAt,
&i.CreatedAtHeight,
&i.Id_2,
&i.BlockHeight,
&i.TransferCount,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const getInscriptionNumbersByIds = `-- name: GetInscriptionNumbersByIds :many
SELECT id, number FROM "brc20_inscription_entries" WHERE "id" = ANY($1::text[])
`
type GetInscriptionNumbersByIdsRow struct {
Id string
Number int64
}
func (q *Queries) GetInscriptionNumbersByIds(ctx context.Context, inscriptionIds []string) ([]GetInscriptionNumbersByIdsRow, error) {
rows, err := q.db.Query(ctx, getInscriptionNumbersByIds, inscriptionIds)
if err != nil {
return nil, err
}
defer rows.Close()
var items []GetInscriptionNumbersByIdsRow
for rows.Next() {
var i GetInscriptionNumbersByIdsRow
if err := rows.Scan(&i.Id, &i.Number); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const getInscriptionParentsByIds = `-- name: GetInscriptionParentsByIds :many
SELECT id, parents FROM "brc20_inscription_entries" WHERE "id" = ANY($1::text[])
`
type GetInscriptionParentsByIdsRow struct {
Id string
Parents []string
}
func (q *Queries) GetInscriptionParentsByIds(ctx context.Context, inscriptionIds []string) ([]GetInscriptionParentsByIdsRow, error) {
rows, err := q.db.Query(ctx, getInscriptionParentsByIds, inscriptionIds)
if err != nil {
return nil, err
}
defer rows.Close()
var items []GetInscriptionParentsByIdsRow
for rows.Next() {
var i GetInscriptionParentsByIdsRow
if err := rows.Scan(&i.Id, &i.Parents); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const getInscriptionTransfersInOutPoints = `-- name: GetInscriptionTransfersInOutPoints :many
SELECT it.inscription_id, it.block_height, it.tx_index, it.tx_hash, it.from_input_index, it.old_satpoint_tx_hash, it.old_satpoint_out_idx, it.old_satpoint_offset, it.new_satpoint_tx_hash, it.new_satpoint_out_idx, it.new_satpoint_offset, it.new_pkscript, it.new_output_value, it.sent_as_fee, it.transfer_count, "ie"."content" FROM (
SELECT
unnest($1::text[]) AS "tx_hash",
unnest($2::int[]) AS "tx_out_idx"
) "inputs"
INNER JOIN "brc20_inscription_transfers" it ON "inputs"."tx_hash" = "it"."new_satpoint_tx_hash" AND "inputs"."tx_out_idx" = "it"."new_satpoint_out_idx"
LEFT JOIN "brc20_inscription_entries" ie ON "it"."inscription_id" = "ie"."id"
`
type GetInscriptionTransfersInOutPointsParams struct {
TxHashArr []string
TxOutIdxArr []int32
}
type GetInscriptionTransfersInOutPointsRow struct {
InscriptionID string
BlockHeight int32
TxIndex int32
TxHash string
FromInputIndex int32
OldSatpointTxHash pgtype.Text
OldSatpointOutIdx pgtype.Int4
OldSatpointOffset pgtype.Int8
NewSatpointTxHash pgtype.Text
NewSatpointOutIdx pgtype.Int4
NewSatpointOffset pgtype.Int8
NewPkscript string
NewOutputValue int64
SentAsFee bool
TransferCount int32
Content []byte
}
func (q *Queries) GetInscriptionTransfersInOutPoints(ctx context.Context, arg GetInscriptionTransfersInOutPointsParams) ([]GetInscriptionTransfersInOutPointsRow, error) {
rows, err := q.db.Query(ctx, getInscriptionTransfersInOutPoints, arg.TxHashArr, arg.TxOutIdxArr)
if err != nil {
return nil, err
}
defer rows.Close()
var items []GetInscriptionTransfersInOutPointsRow
for rows.Next() {
var i GetInscriptionTransfersInOutPointsRow
if err := rows.Scan(
&i.InscriptionID,
&i.BlockHeight,
&i.TxIndex,
&i.TxHash,
&i.FromInputIndex,
&i.OldSatpointTxHash,
&i.OldSatpointOutIdx,
&i.OldSatpointOffset,
&i.NewSatpointTxHash,
&i.NewSatpointOutIdx,
&i.NewSatpointOffset,
&i.NewPkscript,
&i.NewOutputValue,
&i.SentAsFee,
&i.TransferCount,
&i.Content,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const getLatestEventIds = `-- name: GetLatestEventIds :one
WITH "latest_deploy_id" AS (
SELECT "id" FROM "brc20_event_deploys" ORDER BY "id" DESC LIMIT 1
),
"latest_mint_id" AS (
SELECT "id" FROM "brc20_event_mints" ORDER BY "id" DESC LIMIT 1
),
"latest_inscribe_transfer_id" AS (
SELECT "id" FROM "brc20_event_inscribe_transfers" ORDER BY "id" DESC LIMIT 1
),
"latest_transfer_transfer_id" AS (
SELECT "id" FROM "brc20_event_transfer_transfers" ORDER BY "id" DESC LIMIT 1
)
SELECT
COALESCE((SELECT "id" FROM "latest_deploy_id"), -1) AS "event_deploy_id",
COALESCE((SELECT "id" FROM "latest_mint_id"), -1) AS "event_mint_id",
COALESCE((SELECT "id" FROM "latest_inscribe_transfer_id"), -1) AS "event_inscribe_transfer_id",
COALESCE((SELECT "id" FROM "latest_transfer_transfer_id"), -1) AS "event_transfer_transfer_id"
`
type GetLatestEventIdsRow struct {
EventDeployID interface{}
EventMintID interface{}
EventInscribeTransferID interface{}
EventTransferTransferID interface{}
}
func (q *Queries) GetLatestEventIds(ctx context.Context) (GetLatestEventIdsRow, error) {
row := q.db.QueryRow(ctx, getLatestEventIds)
var i GetLatestEventIdsRow
err := row.Scan(
&i.EventDeployID,
&i.EventMintID,
&i.EventInscribeTransferID,
&i.EventTransferTransferID,
)
return i, err
}
const getLatestIndexedBlock = `-- name: GetLatestIndexedBlock :one
SELECT height, hash, event_hash, cumulative_event_hash FROM "brc20_indexed_blocks" ORDER BY "height" DESC LIMIT 1
`
func (q *Queries) GetLatestIndexedBlock(ctx context.Context) (Brc20IndexedBlock, error) {
row := q.db.QueryRow(ctx, getLatestIndexedBlock)
var i Brc20IndexedBlock
err := row.Scan(
&i.Height,
&i.Hash,
&i.EventHash,
&i.CumulativeEventHash,
)
return i, err
}
const getLatestProcessorStats = `-- name: GetLatestProcessorStats :one
SELECT block_height, cursed_inscription_count, blessed_inscription_count, lost_sats FROM "brc20_processor_stats" ORDER BY "block_height" DESC LIMIT 1
`
func (q *Queries) GetLatestProcessorStats(ctx context.Context) (Brc20ProcessorStat, error) {
row := q.db.QueryRow(ctx, getLatestProcessorStats)
var i Brc20ProcessorStat
err := row.Scan(
&i.BlockHeight,
&i.CursedInscriptionCount,
&i.BlessedInscriptionCount,
&i.LostSats,
)
return i, err
}
const getTickEntriesByTicks = `-- name: GetTickEntriesByTicks :many
WITH "states" AS (
-- select latest state
SELECT DISTINCT ON ("tick") tick, block_height, minted_amount, burned_amount, completed_at, completed_at_height FROM "brc20_tick_entry_states" WHERE "tick" = ANY($1::text[]) ORDER BY "tick", "block_height" DESC
)
SELECT brc20_tick_entries.tick, original_tick, total_supply, decimals, limit_per_mint, is_self_mint, deploy_inscription_id, deployed_at, deployed_at_height, states.tick, block_height, minted_amount, burned_amount, completed_at, completed_at_height FROM "brc20_tick_entries"
LEFT JOIN "states" ON "brc20_tick_entries"."tick" = "states"."tick"
WHERE "brc20_tick_entries"."tick" = ANY($1::text[])
`
type GetTickEntriesByTicksRow struct {
Tick string
OriginalTick string
TotalSupply pgtype.Numeric
Decimals int16
LimitPerMint pgtype.Numeric
IsSelfMint bool
DeployInscriptionID string
DeployedAt pgtype.Timestamp
DeployedAtHeight int32
Tick_2 pgtype.Text
BlockHeight pgtype.Int4
MintedAmount pgtype.Numeric
BurnedAmount pgtype.Numeric
CompletedAt pgtype.Timestamp
CompletedAtHeight pgtype.Int4
}
func (q *Queries) GetTickEntriesByTicks(ctx context.Context, ticks []string) ([]GetTickEntriesByTicksRow, error) {
rows, err := q.db.Query(ctx, getTickEntriesByTicks, ticks)
if err != nil {
return nil, err
}
defer rows.Close()
var items []GetTickEntriesByTicksRow
for rows.Next() {
var i GetTickEntriesByTicksRow
if err := rows.Scan(
&i.Tick,
&i.OriginalTick,
&i.TotalSupply,
&i.Decimals,
&i.LimitPerMint,
&i.IsSelfMint,
&i.DeployInscriptionID,
&i.DeployedAt,
&i.DeployedAtHeight,
&i.Tick_2,
&i.BlockHeight,
&i.MintedAmount,
&i.BurnedAmount,
&i.CompletedAt,
&i.CompletedAtHeight,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}


@@ -1,49 +0,0 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.26.0
// source: info.sql
package gen
import (
"context"
)
const createIndexerState = `-- name: CreateIndexerState :exec
INSERT INTO brc20_indexer_states (client_version, network, db_version, event_hash_version) VALUES ($1, $2, $3, $4)
`
type CreateIndexerStateParams struct {
ClientVersion string
Network string
DbVersion int32
EventHashVersion int32
}
func (q *Queries) CreateIndexerState(ctx context.Context, arg CreateIndexerStateParams) error {
_, err := q.db.Exec(ctx, createIndexerState,
arg.ClientVersion,
arg.Network,
arg.DbVersion,
arg.EventHashVersion,
)
return err
}
const getLatestIndexerState = `-- name: GetLatestIndexerState :one
SELECT id, client_version, network, db_version, event_hash_version, created_at FROM brc20_indexer_states ORDER BY created_at DESC LIMIT 1
`
func (q *Queries) GetLatestIndexerState(ctx context.Context) (Brc20IndexerState, error) {
row := q.db.QueryRow(ctx, getLatestIndexerState)
var i Brc20IndexerState
err := row.Scan(
&i.Id,
&i.ClientVersion,
&i.Network,
&i.DbVersion,
&i.EventHashVersion,
&i.CreatedAt,
)
return i, err
}


@@ -1,174 +0,0 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.26.0
package gen
import (
"github.com/jackc/pgx/v5/pgtype"
)
type Brc20Balance struct {
Pkscript string
BlockHeight int32
Tick string
OverallBalance pgtype.Numeric
AvailableBalance pgtype.Numeric
}
type Brc20EventDeploy struct {
Id int64
InscriptionID string
InscriptionNumber int64
Tick string
OriginalTick string
TxHash string
BlockHeight int32
TxIndex int32
Timestamp pgtype.Timestamp
Pkscript string
Satpoint string
TotalSupply pgtype.Numeric
Decimals int16
LimitPerMint pgtype.Numeric
IsSelfMint bool
}
type Brc20EventInscribeTransfer struct {
Id int64
InscriptionID string
InscriptionNumber int64
Tick string
OriginalTick string
TxHash string
BlockHeight int32
TxIndex int32
Timestamp pgtype.Timestamp
Pkscript string
Satpoint string
OutputIndex int32
SatsAmount int64
Amount pgtype.Numeric
}
type Brc20EventMint struct {
Id int64
InscriptionID string
InscriptionNumber int64
Tick string
OriginalTick string
TxHash string
BlockHeight int32
TxIndex int32
Timestamp pgtype.Timestamp
Pkscript string
Satpoint string
Amount pgtype.Numeric
ParentID pgtype.Text
}
type Brc20EventTransferTransfer struct {
Id int64
InscriptionID string
InscriptionNumber int64
Tick string
OriginalTick string
TxHash string
BlockHeight int32
TxIndex int32
Timestamp pgtype.Timestamp
FromPkscript string
FromSatpoint string
FromInputIndex int32
ToPkscript string
ToSatpoint string
ToOutputIndex int32
SpentAsFee bool
Amount pgtype.Numeric
}
type Brc20IndexedBlock struct {
Height int32
Hash string
EventHash string
CumulativeEventHash string
}
type Brc20IndexerState struct {
Id int64
ClientVersion string
Network string
DbVersion int32
EventHashVersion int32
CreatedAt pgtype.Timestamptz
}
type Brc20InscriptionEntry struct {
Id string
Number int64
SequenceNumber int64
Delegate pgtype.Text
Metadata []byte
Metaprotocol pgtype.Text
Parents []string
Pointer pgtype.Int8
Content []byte
ContentEncoding pgtype.Text
ContentType pgtype.Text
Cursed bool
CursedForBrc20 bool
CreatedAt pgtype.Timestamp
CreatedAtHeight int32
}
type Brc20InscriptionEntryState struct {
Id string
BlockHeight int32
TransferCount int32
}
type Brc20InscriptionTransfer struct {
InscriptionID string
BlockHeight int32
TxIndex int32
TxHash string
FromInputIndex int32
OldSatpointTxHash pgtype.Text
OldSatpointOutIdx pgtype.Int4
OldSatpointOffset pgtype.Int8
NewSatpointTxHash pgtype.Text
NewSatpointOutIdx pgtype.Int4
NewSatpointOffset pgtype.Int8
NewPkscript string
NewOutputValue int64
SentAsFee bool
TransferCount int32
}
type Brc20ProcessorStat struct {
BlockHeight int32
CursedInscriptionCount int32
BlessedInscriptionCount int32
LostSats int64
}
type Brc20TickEntry struct {
Tick string
OriginalTick string
TotalSupply pgtype.Numeric
Decimals int16
LimitPerMint pgtype.Numeric
IsSelfMint bool
DeployInscriptionID string
DeployedAt pgtype.Timestamp
DeployedAtHeight int32
}
type Brc20TickEntryState struct {
Tick string
BlockHeight int32
MintedAmount pgtype.Numeric
BurnedAmount pgtype.Numeric
CompletedAt pgtype.Timestamp
CompletedAtHeight pgtype.Int4
}


@@ -1,33 +0,0 @@
package postgres
import (
"context"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/common/errs"
"github.com/gaze-network/indexer-network/modules/brc20/internal/datagateway"
"github.com/gaze-network/indexer-network/modules/brc20/internal/entity"
"github.com/jackc/pgx/v5"
)
var _ datagateway.IndexerInfoDataGateway = (*Repository)(nil)
func (r *Repository) GetLatestIndexerState(ctx context.Context) (entity.IndexerState, error) {
model, err := r.queries.GetLatestIndexerState(ctx)
if err != nil {
if errors.Is(err, pgx.ErrNoRows) {
return entity.IndexerState{}, errors.WithStack(errs.NotFound)
}
return entity.IndexerState{}, errors.Wrap(err, "error during query")
}
state := mapIndexerStatesModelToType(model)
return state, nil
}
func (r *Repository) CreateIndexerState(ctx context.Context, state entity.IndexerState) error {
params := mapIndexerStatesTypeToParams(state)
if err := r.queries.CreateIndexerState(ctx, params); err != nil {
return errors.Wrap(err, "error during exec")
}
return nil
}


@@ -1,604 +0,0 @@
package postgres
import (
"encoding/hex"
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/wire"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/common"
"github.com/gaze-network/indexer-network/modules/brc20/internal/entity"
"github.com/gaze-network/indexer-network/modules/brc20/internal/ordinals"
"github.com/gaze-network/indexer-network/modules/brc20/internal/repository/postgres/gen"
"github.com/jackc/pgx/v5/pgtype"
"github.com/samber/lo"
"github.com/shopspring/decimal"
)
func decimalFromNumeric(src pgtype.Numeric) decimal.NullDecimal {
if !src.Valid || src.NaN || src.InfinityModifier != pgtype.Finite {
return decimal.NullDecimal{}
}
result := decimal.NewFromBigInt(src.Int, src.Exp)
return decimal.NewNullDecimal(result)
}
func numericFromDecimal(src decimal.Decimal) pgtype.Numeric {
result := pgtype.Numeric{
Int: src.Coefficient(),
Exp: src.Exponent(),
NaN: false,
InfinityModifier: pgtype.Finite,
Valid: true,
}
return result
}
func numericFromNullDecimal(src decimal.NullDecimal) pgtype.Numeric {
if !src.Valid {
return pgtype.Numeric{}
}
return numericFromDecimal(src.Decimal)
}
func mapIndexerStatesModelToType(src gen.Brc20IndexerState) entity.IndexerState {
var createdAt time.Time
if src.CreatedAt.Valid {
createdAt = src.CreatedAt.Time
}
return entity.IndexerState{
ClientVersion: src.ClientVersion,
Network: common.Network(src.Network),
DBVersion: int32(src.DbVersion),
EventHashVersion: int32(src.EventHashVersion),
CreatedAt: createdAt,
}
}
func mapIndexerStatesTypeToParams(src entity.IndexerState) gen.CreateIndexerStateParams {
return gen.CreateIndexerStateParams{
ClientVersion: src.ClientVersion,
Network: string(src.Network),
DbVersion: int32(src.DBVersion),
EventHashVersion: int32(src.EventHashVersion),
}
}
func mapIndexedBlockModelToType(src gen.Brc20IndexedBlock) (entity.IndexedBlock, error) {
hash, err := chainhash.NewHashFromStr(src.Hash)
if err != nil {
return entity.IndexedBlock{}, errors.Wrap(err, "invalid block hash")
}
eventHash, err := chainhash.NewHashFromStr(src.EventHash)
if err != nil {
return entity.IndexedBlock{}, errors.Wrap(err, "invalid event hash")
}
cumulativeEventHash, err := chainhash.NewHashFromStr(src.CumulativeEventHash)
if err != nil {
return entity.IndexedBlock{}, errors.Wrap(err, "invalid cumulative event hash")
}
return entity.IndexedBlock{
Height: uint64(src.Height),
Hash: *hash,
EventHash: *eventHash,
CumulativeEventHash: *cumulativeEventHash,
}, nil
}
func mapIndexedBlockTypeToParams(src entity.IndexedBlock) gen.CreateIndexedBlockParams {
return gen.CreateIndexedBlockParams{
Height: int32(src.Height),
Hash: src.Hash.String(),
EventHash: src.EventHash.String(),
CumulativeEventHash: src.CumulativeEventHash.String(),
}
}
func mapProcessorStatsModelToType(src gen.Brc20ProcessorStat) entity.ProcessorStats {
return entity.ProcessorStats{
BlockHeight: uint64(src.BlockHeight),
CursedInscriptionCount: uint64(src.CursedInscriptionCount),
BlessedInscriptionCount: uint64(src.BlessedInscriptionCount),
LostSats: uint64(src.LostSats),
}
}
func mapProcessorStatsTypeToParams(src entity.ProcessorStats) gen.CreateProcessorStatsParams {
return gen.CreateProcessorStatsParams{
BlockHeight: int32(src.BlockHeight),
CursedInscriptionCount: int32(src.CursedInscriptionCount),
BlessedInscriptionCount: int32(src.BlessedInscriptionCount),
LostSats: int64(src.LostSats),
}
}
func mapTickEntryModelToType(src gen.GetTickEntriesByTicksRow) (entity.TickEntry, error) {
deployInscriptionId, err := ordinals.NewInscriptionIdFromString(src.DeployInscriptionID)
if err != nil {
return entity.TickEntry{}, errors.Wrap(err, "invalid deployInscriptionId")
}
var completedAt time.Time
if src.CompletedAt.Valid {
completedAt = src.CompletedAt.Time
}
return entity.TickEntry{
Tick: src.Tick,
OriginalTick: src.OriginalTick,
TotalSupply: decimalFromNumeric(src.TotalSupply).Decimal,
Decimals: uint16(src.Decimals),
LimitPerMint: decimalFromNumeric(src.LimitPerMint).Decimal,
IsSelfMint: src.IsSelfMint,
DeployInscriptionId: deployInscriptionId,
DeployedAt: src.DeployedAt.Time,
DeployedAtHeight: uint64(src.DeployedAtHeight),
MintedAmount: decimalFromNumeric(src.MintedAmount).Decimal,
BurnedAmount: decimalFromNumeric(src.BurnedAmount).Decimal,
CompletedAt: completedAt,
CompletedAtHeight: lo.Ternary(src.CompletedAtHeight.Valid, uint64(src.CompletedAtHeight.Int32), 0),
}, nil
}
func mapTickEntryTypeToParams(src entity.TickEntry, blockHeight uint64) (gen.CreateTickEntriesParams, gen.CreateTickEntryStatesParams, error) {
return gen.CreateTickEntriesParams{
Tick: src.Tick,
OriginalTick: src.OriginalTick,
TotalSupply: numericFromDecimal(src.TotalSupply),
Decimals: int16(src.Decimals),
LimitPerMint: numericFromDecimal(src.LimitPerMint),
IsSelfMint: src.IsSelfMint,
DeployInscriptionID: src.DeployInscriptionId.String(),
DeployedAt: pgtype.Timestamp{Time: src.DeployedAt, Valid: true},
DeployedAtHeight: int32(src.DeployedAtHeight),
}, gen.CreateTickEntryStatesParams{
Tick: src.Tick,
BlockHeight: int32(blockHeight),
CompletedAt: pgtype.Timestamp{Time: src.CompletedAt, Valid: !src.CompletedAt.IsZero()},
CompletedAtHeight: pgtype.Int4{Int32: int32(src.CompletedAtHeight), Valid: src.CompletedAtHeight != 0},
MintedAmount: numericFromDecimal(src.MintedAmount),
BurnedAmount: numericFromDecimal(src.BurnedAmount),
}, nil
}
func mapInscriptionEntryModelToType(src gen.GetInscriptionEntriesByIdsRow) (ordinals.InscriptionEntry, error) {
inscriptionId, err := ordinals.NewInscriptionIdFromString(src.Id)
if err != nil {
return ordinals.InscriptionEntry{}, errors.Wrap(err, "invalid inscription id")
}
var delegate, parent *ordinals.InscriptionId
if src.Delegate.Valid {
delegateValue, err := ordinals.NewInscriptionIdFromString(src.Delegate.String)
if err != nil {
return ordinals.InscriptionEntry{}, errors.Wrap(err, "invalid delegate id")
}
delegate = &delegateValue
}
// ord 0.14.0 supports only one parent
if len(src.Parents) > 0 {
parentValue, err := ordinals.NewInscriptionIdFromString(src.Parents[0])
if err != nil {
return ordinals.InscriptionEntry{}, errors.Wrap(err, "invalid parent id")
}
parent = &parentValue
}
inscription := ordinals.Inscription{
Content: src.Content,
ContentEncoding: lo.Ternary(src.ContentEncoding.Valid, src.ContentEncoding.String, ""),
ContentType: lo.Ternary(src.ContentType.Valid, src.ContentType.String, ""),
Delegate: delegate,
Metadata: src.Metadata,
Metaprotocol: lo.Ternary(src.Metaprotocol.Valid, src.Metaprotocol.String, ""),
Parent: parent,
Pointer: lo.Ternary(src.Pointer.Valid, lo.ToPtr(uint64(src.Pointer.Int64)), nil),
}
return ordinals.InscriptionEntry{
Id: inscriptionId,
Number: src.Number,
SequenceNumber: uint64(src.SequenceNumber),
Cursed: src.Cursed,
CursedForBRC20: src.CursedForBrc20,
CreatedAt: lo.Ternary(src.CreatedAt.Valid, src.CreatedAt.Time, time.Time{}),
CreatedAtHeight: uint64(src.CreatedAtHeight),
Inscription: inscription,
TransferCount: lo.Ternary(src.TransferCount.Valid, uint32(src.TransferCount.Int32), 0),
}, nil
}
func mapInscriptionEntryTypeToParams(src ordinals.InscriptionEntry, blockHeight uint64) (gen.CreateInscriptionEntriesParams, gen.CreateInscriptionEntryStatesParams, error) {
var delegate, metaprotocol, contentEncoding, contentType pgtype.Text
if src.Inscription.Delegate != nil {
delegate = pgtype.Text{String: src.Inscription.Delegate.String(), Valid: true}
}
if src.Inscription.Metaprotocol != "" {
metaprotocol = pgtype.Text{String: src.Inscription.Metaprotocol, Valid: true}
}
if src.Inscription.ContentEncoding != "" {
contentEncoding = pgtype.Text{String: src.Inscription.ContentEncoding, Valid: true}
}
if src.Inscription.ContentType != "" {
contentType = pgtype.Text{String: src.Inscription.ContentType, Valid: true}
}
var parents []string
if src.Inscription.Parent != nil {
parents = append(parents, src.Inscription.Parent.String())
}
var pointer pgtype.Int8
if src.Inscription.Pointer != nil {
pointer = pgtype.Int8{Int64: int64(*src.Inscription.Pointer), Valid: true}
}
return gen.CreateInscriptionEntriesParams{
Id: src.Id.String(),
Number: src.Number,
SequenceNumber: int64(src.SequenceNumber),
Delegate: delegate,
Metadata: src.Inscription.Metadata,
Metaprotocol: metaprotocol,
Parents: parents,
Pointer: pointer,
Content: src.Inscription.Content,
ContentEncoding: contentEncoding,
ContentType: contentType,
Cursed: src.Cursed,
CursedForBrc20: src.CursedForBRC20,
CreatedAt: lo.Ternary(!src.CreatedAt.IsZero(), pgtype.Timestamp{Time: src.CreatedAt, Valid: true}, pgtype.Timestamp{}),
CreatedAtHeight: int32(src.CreatedAtHeight),
}, gen.CreateInscriptionEntryStatesParams{
Id: src.Id.String(),
BlockHeight: int32(blockHeight),
TransferCount: int32(src.TransferCount),
}, nil
}
func mapInscriptionTransferModelToType(src gen.GetInscriptionTransfersInOutPointsRow) (entity.InscriptionTransfer, error) {
inscriptionId, err := ordinals.NewInscriptionIdFromString(src.InscriptionID)
if err != nil {
return entity.InscriptionTransfer{}, errors.Wrap(err, "invalid inscription id")
}
txHash, err := chainhash.NewHashFromStr(src.TxHash)
if err != nil {
return entity.InscriptionTransfer{}, errors.Wrap(err, "invalid tx hash")
}
var oldSatPoint, newSatPoint ordinals.SatPoint
if src.OldSatpointTxHash.Valid {
if !src.OldSatpointOutIdx.Valid || !src.OldSatpointOffset.Valid {
return entity.InscriptionTransfer{}, errors.New("old satpoint out idx and offset must exist if hash exists")
}
txHash, err := chainhash.NewHashFromStr(src.OldSatpointTxHash.String)
if err != nil {
return entity.InscriptionTransfer{}, errors.Wrap(err, "invalid old satpoint tx hash")
}
oldSatPoint = ordinals.SatPoint{
OutPoint: wire.OutPoint{
Hash: *txHash,
Index: uint32(src.OldSatpointOutIdx.Int32),
},
Offset: uint64(src.OldSatpointOffset.Int64),
}
}
if src.NewSatpointTxHash.Valid {
if !src.NewSatpointOutIdx.Valid || !src.NewSatpointOffset.Valid {
return entity.InscriptionTransfer{}, errors.New("new satpoint out idx and offset must exist if hash exists")
}
txHash, err := chainhash.NewHashFromStr(src.NewSatpointTxHash.String)
if err != nil {
return entity.InscriptionTransfer{}, errors.Wrap(err, "invalid new satpoint tx hash")
}
newSatPoint = ordinals.SatPoint{
OutPoint: wire.OutPoint{
Hash: *txHash,
Index: uint32(src.NewSatpointOutIdx.Int32),
},
Offset: uint64(src.NewSatpointOffset.Int64),
}
}
newPkScript, err := hex.DecodeString(src.NewPkscript)
if err != nil {
return entity.InscriptionTransfer{}, errors.Wrap(err, "failed to parse pkscript")
}
return entity.InscriptionTransfer{
InscriptionId: inscriptionId,
BlockHeight: uint64(src.BlockHeight),
TxIndex: uint32(src.TxIndex),
TxHash: *txHash,
FromInputIndex: uint32(src.FromInputIndex),
Content: src.Content,
OldSatPoint: oldSatPoint,
NewSatPoint: newSatPoint,
NewPkScript: newPkScript,
NewOutputValue: uint64(src.NewOutputValue),
SentAsFee: src.SentAsFee,
TransferCount: uint32(src.TransferCount),
}, nil
}
func mapInscriptionTransferTypeToParams(src entity.InscriptionTransfer) gen.CreateInscriptionTransfersParams {
return gen.CreateInscriptionTransfersParams{
InscriptionID: src.InscriptionId.String(),
BlockHeight: int32(src.BlockHeight),
TxIndex: int32(src.TxIndex),
TxHash: src.TxHash.String(),
FromInputIndex: int32(src.FromInputIndex),
OldSatpointTxHash: lo.Ternary(src.OldSatPoint != ordinals.SatPoint{}, pgtype.Text{String: src.OldSatPoint.OutPoint.Hash.String(), Valid: true}, pgtype.Text{}),
OldSatpointOutIdx: lo.Ternary(src.OldSatPoint != ordinals.SatPoint{}, pgtype.Int4{Int32: int32(src.OldSatPoint.OutPoint.Index), Valid: true}, pgtype.Int4{}),
OldSatpointOffset: lo.Ternary(src.OldSatPoint != ordinals.SatPoint{}, pgtype.Int8{Int64: int64(src.OldSatPoint.Offset), Valid: true}, pgtype.Int8{}),
NewSatpointTxHash: lo.Ternary(src.NewSatPoint != ordinals.SatPoint{}, pgtype.Text{String: src.NewSatPoint.OutPoint.Hash.String(), Valid: true}, pgtype.Text{}),
NewSatpointOutIdx: lo.Ternary(src.NewSatPoint != ordinals.SatPoint{}, pgtype.Int4{Int32: int32(src.NewSatPoint.OutPoint.Index), Valid: true}, pgtype.Int4{}),
NewSatpointOffset: lo.Ternary(src.NewSatPoint != ordinals.SatPoint{}, pgtype.Int8{Int64: int64(src.NewSatPoint.Offset), Valid: true}, pgtype.Int8{}),
NewPkscript: hex.EncodeToString(src.NewPkScript),
NewOutputValue: int64(src.NewOutputValue),
SentAsFee: src.SentAsFee,
TransferCount: int32(src.TransferCount),
}
}
func mapEventDeployModelToType(src gen.Brc20EventDeploy) (entity.EventDeploy, error) {
inscriptionId, err := ordinals.NewInscriptionIdFromString(src.InscriptionID)
if err != nil {
return entity.EventDeploy{}, errors.Wrap(err, "invalid inscription id")
}
txHash, err := chainhash.NewHashFromStr(src.TxHash)
if err != nil {
return entity.EventDeploy{}, errors.Wrap(err, "invalid tx hash")
}
pkScript, err := hex.DecodeString(src.Pkscript)
if err != nil {
return entity.EventDeploy{}, errors.Wrap(err, "failed to parse pkscript")
}
satPoint, err := ordinals.NewSatPointFromString(src.Satpoint)
if err != nil {
return entity.EventDeploy{}, errors.Wrap(err, "cannot parse satpoint")
}
return entity.EventDeploy{
Id: src.Id,
InscriptionId: inscriptionId,
InscriptionNumber: src.InscriptionNumber,
Tick: src.Tick,
OriginalTick: src.OriginalTick,
TxHash: *txHash,
BlockHeight: uint64(src.BlockHeight),
TxIndex: uint32(src.TxIndex),
Timestamp: src.Timestamp.Time,
PkScript: pkScript,
SatPoint: satPoint,
TotalSupply: decimalFromNumeric(src.TotalSupply).Decimal,
Decimals: uint16(src.Decimals),
LimitPerMint: decimalFromNumeric(src.LimitPerMint).Decimal,
IsSelfMint: src.IsSelfMint,
}, nil
}
func mapEventDeployTypeToParams(src entity.EventDeploy) (gen.CreateEventDeploysParams, error) {
var timestamp pgtype.Timestamp
if !src.Timestamp.IsZero() {
timestamp = pgtype.Timestamp{Time: src.Timestamp, Valid: true}
}
return gen.CreateEventDeploysParams{
InscriptionID: src.InscriptionId.String(),
InscriptionNumber: src.InscriptionNumber,
Tick: src.Tick,
OriginalTick: src.OriginalTick,
TxHash: src.TxHash.String(),
BlockHeight: int32(src.BlockHeight),
TxIndex: int32(src.TxIndex),
Timestamp: timestamp,
Pkscript: hex.EncodeToString(src.PkScript),
Satpoint: src.SatPoint.String(),
TotalSupply: numericFromDecimal(src.TotalSupply),
Decimals: int16(src.Decimals),
LimitPerMint: numericFromDecimal(src.LimitPerMint),
IsSelfMint: src.IsSelfMint,
}, nil
}
func mapEventMintModelToType(src gen.Brc20EventMint) (entity.EventMint, error) {
inscriptionId, err := ordinals.NewInscriptionIdFromString(src.InscriptionID)
if err != nil {
return entity.EventMint{}, errors.Wrap(err, "invalid inscription id")
}
txHash, err := chainhash.NewHashFromStr(src.TxHash)
if err != nil {
return entity.EventMint{}, errors.Wrap(err, "invalid tx hash")
}
pkScript, err := hex.DecodeString(src.Pkscript)
if err != nil {
return entity.EventMint{}, errors.Wrap(err, "failed to parse pkscript")
}
satPoint, err := ordinals.NewSatPointFromString(src.Satpoint)
if err != nil {
return entity.EventMint{}, errors.Wrap(err, "cannot parse satpoint")
}
var parentId *ordinals.InscriptionId
if src.ParentID.Valid {
parentIdValue, err := ordinals.NewInscriptionIdFromString(src.ParentID.String)
if err != nil {
return entity.EventMint{}, errors.Wrap(err, "invalid parent id")
}
parentId = &parentIdValue
}
return entity.EventMint{
Id: src.Id,
InscriptionId: inscriptionId,
InscriptionNumber: src.InscriptionNumber,
Tick: src.Tick,
OriginalTick: src.OriginalTick,
TxHash: *txHash,
BlockHeight: uint64(src.BlockHeight),
TxIndex: uint32(src.TxIndex),
Timestamp: src.Timestamp.Time,
PkScript: pkScript,
SatPoint: satPoint,
Amount: decimalFromNumeric(src.Amount).Decimal,
ParentId: parentId,
}, nil
}
func mapEventMintTypeToParams(src entity.EventMint) (gen.CreateEventMintsParams, error) {
var timestamp pgtype.Timestamp
if !src.Timestamp.IsZero() {
timestamp = pgtype.Timestamp{Time: src.Timestamp, Valid: true}
}
var parentId pgtype.Text
if src.ParentId != nil {
parentId = pgtype.Text{String: src.ParentId.String(), Valid: true}
}
return gen.CreateEventMintsParams{
InscriptionID: src.InscriptionId.String(),
InscriptionNumber: src.InscriptionNumber,
Tick: src.Tick,
OriginalTick: src.OriginalTick,
TxHash: src.TxHash.String(),
BlockHeight: int32(src.BlockHeight),
TxIndex: int32(src.TxIndex),
Timestamp: timestamp,
Pkscript: hex.EncodeToString(src.PkScript),
Satpoint: src.SatPoint.String(),
Amount: numericFromDecimal(src.Amount),
ParentID: parentId,
}, nil
}
func mapEventInscribeTransferModelToType(src gen.Brc20EventInscribeTransfer) (entity.EventInscribeTransfer, error) {
inscriptionId, err := ordinals.NewInscriptionIdFromString(src.InscriptionID)
if err != nil {
return entity.EventInscribeTransfer{}, errors.Wrap(err, "cannot parse inscription id")
}
txHash, err := chainhash.NewHashFromStr(src.TxHash)
if err != nil {
return entity.EventInscribeTransfer{}, errors.Wrap(err, "cannot parse hash")
}
pkScript, err := hex.DecodeString(src.Pkscript)
if err != nil {
return entity.EventInscribeTransfer{}, errors.Wrap(err, "cannot parse pkScript")
}
satPoint, err := ordinals.NewSatPointFromString(src.Satpoint)
if err != nil {
return entity.EventInscribeTransfer{}, errors.Wrap(err, "cannot parse satPoint")
}
return entity.EventInscribeTransfer{
Id: src.Id,
InscriptionId: inscriptionId,
InscriptionNumber: src.InscriptionNumber,
Tick: src.Tick,
OriginalTick: src.OriginalTick,
TxHash: *txHash,
BlockHeight: uint64(src.BlockHeight),
TxIndex: uint32(src.TxIndex),
Timestamp: src.Timestamp.Time,
PkScript: pkScript,
SatPoint: satPoint,
OutputIndex: uint32(src.OutputIndex),
SatsAmount: uint64(src.SatsAmount),
Amount: decimalFromNumeric(src.Amount).Decimal,
}, nil
}
func mapEventInscribeTransferTypeToParams(src entity.EventInscribeTransfer) (gen.CreateEventInscribeTransfersParams, error) {
var timestamp pgtype.Timestamp
if !src.Timestamp.IsZero() {
timestamp = pgtype.Timestamp{Time: src.Timestamp, Valid: true}
}
return gen.CreateEventInscribeTransfersParams{
InscriptionID: src.InscriptionId.String(),
InscriptionNumber: src.InscriptionNumber,
Tick: src.Tick,
OriginalTick: src.OriginalTick,
TxHash: src.TxHash.String(),
BlockHeight: int32(src.BlockHeight),
TxIndex: int32(src.TxIndex),
Timestamp: timestamp,
Pkscript: hex.EncodeToString(src.PkScript),
Satpoint: src.SatPoint.String(),
OutputIndex: int32(src.OutputIndex),
SatsAmount: int64(src.SatsAmount),
Amount: numericFromDecimal(src.Amount),
}, nil
}
func mapEventTransferTransferModelToType(src gen.Brc20EventTransferTransfer) (entity.EventTransferTransfer, error) {
inscriptionId, err := ordinals.NewInscriptionIdFromString(src.InscriptionID)
if err != nil {
return entity.EventTransferTransfer{}, errors.Wrap(err, "cannot parse inscription id")
}
txHash, err := chainhash.NewHashFromStr(src.TxHash)
if err != nil {
return entity.EventTransferTransfer{}, errors.Wrap(err, "cannot parse hash")
}
fromPkScript, err := hex.DecodeString(src.FromPkscript)
if err != nil {
return entity.EventTransferTransfer{}, errors.Wrap(err, "cannot parse fromPkScript")
}
fromSatPoint, err := ordinals.NewSatPointFromString(src.FromSatpoint)
if err != nil {
return entity.EventTransferTransfer{}, errors.Wrap(err, "cannot parse fromSatPoint")
}
toPkScript, err := hex.DecodeString(src.ToPkscript)
if err != nil {
return entity.EventTransferTransfer{}, errors.Wrap(err, "cannot parse toPkScript")
}
toSatPoint, err := ordinals.NewSatPointFromString(src.ToSatpoint)
if err != nil {
return entity.EventTransferTransfer{}, errors.Wrap(err, "cannot parse toSatPoint")
}
return entity.EventTransferTransfer{
Id: src.Id,
InscriptionId: inscriptionId,
InscriptionNumber: src.InscriptionNumber,
Tick: src.Tick,
OriginalTick: src.OriginalTick,
TxHash: *txHash,
BlockHeight: uint64(src.BlockHeight),
TxIndex: uint32(src.TxIndex),
Timestamp: src.Timestamp.Time,
FromPkScript: fromPkScript,
FromSatPoint: fromSatPoint,
FromInputIndex: uint32(src.FromInputIndex),
ToPkScript: toPkScript,
ToSatPoint: toSatPoint,
ToOutputIndex: uint32(src.ToOutputIndex),
SpentAsFee: src.SpentAsFee,
Amount: decimalFromNumeric(src.Amount).Decimal,
}, nil
}
func mapEventTransferTransferTypeToParams(src entity.EventTransferTransfer) (gen.CreateEventTransferTransfersParams, error) {
var timestamp pgtype.Timestamp
if !src.Timestamp.IsZero() {
timestamp = pgtype.Timestamp{Time: src.Timestamp, Valid: true}
}
return gen.CreateEventTransferTransfersParams{
InscriptionID: src.InscriptionId.String(),
InscriptionNumber: src.InscriptionNumber,
Tick: src.Tick,
OriginalTick: src.OriginalTick,
TxHash: src.TxHash.String(),
BlockHeight: int32(src.BlockHeight),
TxIndex: int32(src.TxIndex),
Timestamp: timestamp,
FromPkscript: hex.EncodeToString(src.FromPkScript),
FromSatpoint: src.FromSatPoint.String(),
FromInputIndex: int32(src.FromInputIndex),
ToPkscript: hex.EncodeToString(src.ToPkScript),
ToSatpoint: src.ToSatPoint.String(),
ToOutputIndex: int32(src.ToOutputIndex),
SpentAsFee: src.SpentAsFee,
Amount: numericFromDecimal(src.Amount),
}, nil
}
func mapBalanceModelToType(src gen.Brc20Balance) (entity.Balance, error) {
pkScript, err := hex.DecodeString(src.Pkscript)
if err != nil {
return entity.Balance{}, errors.Wrap(err, "failed to parse pkscript")
}
return entity.Balance{
PkScript: pkScript,
Tick: src.Tick,
BlockHeight: uint64(src.BlockHeight),
OverallBalance: decimalFromNumeric(src.OverallBalance).Decimal,
AvailableBalance: decimalFromNumeric(src.AvailableBalance).Decimal,
}, nil
}


@@ -1,20 +0,0 @@
package postgres
import (
"github.com/gaze-network/indexer-network/internal/postgres"
"github.com/gaze-network/indexer-network/modules/brc20/internal/repository/postgres/gen"
"github.com/jackc/pgx/v5"
)
type Repository struct {
db postgres.DB
queries *gen.Queries
tx pgx.Tx
}
func NewRepository(db postgres.DB) *Repository {
return &Repository{
db: db,
queries: gen.New(db),
}
}


@@ -1,236 +0,0 @@
package brc20
import (
"context"
"github.com/btcsuite/btcd/wire"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/common"
"github.com/gaze-network/indexer-network/common/errs"
"github.com/gaze-network/indexer-network/core/indexer"
"github.com/gaze-network/indexer-network/core/types"
"github.com/gaze-network/indexer-network/modules/brc20/internal/datagateway"
"github.com/gaze-network/indexer-network/modules/brc20/internal/entity"
"github.com/gaze-network/indexer-network/modules/brc20/internal/ordinals"
"github.com/gaze-network/indexer-network/pkg/btcclient"
"github.com/gaze-network/indexer-network/pkg/logger"
"github.com/gaze-network/indexer-network/pkg/logger/slogx"
"github.com/gaze-network/indexer-network/pkg/lru"
)
// Make sure to implement the Bitcoin Processor interface
var _ indexer.Processor[*types.Block] = (*Processor)(nil)
type Processor struct {
brc20Dg datagateway.BRC20DataGateway
indexerInfoDg datagateway.IndexerInfoDataGateway
btcClient btcclient.Contract
network common.Network
cleanupFuncs []func(context.Context) error
// block states
flotsamsSentAsFee []*entity.Flotsam
blockReward uint64
// processor stats
cursedInscriptionCount uint64
blessedInscriptionCount uint64
lostSats uint64
// cache
outPointValueCache *lru.Cache[wire.OutPoint, uint64]
// flush buffers - inscription states
newInscriptionTransfers []*entity.InscriptionTransfer
newInscriptionEntries map[ordinals.InscriptionId]*ordinals.InscriptionEntry
newInscriptionEntryStates map[ordinals.InscriptionId]*ordinals.InscriptionEntry
// flush buffers - brc20 states
newTickEntries map[string]*entity.TickEntry
newTickEntryStates map[string]*entity.TickEntry
newEventDeploys []*entity.EventDeploy
newEventMints []*entity.EventMint
newEventInscribeTransfers []*entity.EventInscribeTransfer
newEventTransferTransfers []*entity.EventTransferTransfer
newBalances map[string]map[string]*entity.Balance
}
// TODO: move this to config
const outPointValueCacheSize = 100000
func NewProcessor(brc20Dg datagateway.BRC20DataGateway, indexerInfoDg datagateway.IndexerInfoDataGateway, btcClient btcclient.Contract, network common.Network, cleanupFuncs []func(context.Context) error) (*Processor, error) {
outPointValueCache, err := lru.New[wire.OutPoint, uint64](outPointValueCacheSize)
if err != nil {
return nil, errors.Wrap(err, "failed to create outPointValueCache")
}
return &Processor{
brc20Dg: brc20Dg,
indexerInfoDg: indexerInfoDg,
btcClient: btcClient,
network: network,
cleanupFuncs: cleanupFuncs,
flotsamsSentAsFee: make([]*entity.Flotsam, 0),
blockReward: 0,
cursedInscriptionCount: 0, // to be initialized by p.VerifyStates()
blessedInscriptionCount: 0, // to be initialized by p.VerifyStates()
lostSats: 0, // to be initialized by p.VerifyStates()
outPointValueCache: outPointValueCache,
newInscriptionTransfers: make([]*entity.InscriptionTransfer, 0),
newInscriptionEntries: make(map[ordinals.InscriptionId]*ordinals.InscriptionEntry),
newInscriptionEntryStates: make(map[ordinals.InscriptionId]*ordinals.InscriptionEntry),
newTickEntries: make(map[string]*entity.TickEntry),
newTickEntryStates: make(map[string]*entity.TickEntry),
newEventDeploys: make([]*entity.EventDeploy, 0),
newEventMints: make([]*entity.EventMint, 0),
newEventInscribeTransfers: make([]*entity.EventInscribeTransfer, 0),
newEventTransferTransfers: make([]*entity.EventTransferTransfer, 0),
newBalances: make(map[string]map[string]*entity.Balance),
}, nil
}
// VerifyStates implements indexer.Processor.
func (p *Processor) VerifyStates(ctx context.Context) error {
indexerState, err := p.indexerInfoDg.GetLatestIndexerState(ctx)
if err != nil && !errors.Is(err, errs.NotFound) {
return errors.Wrap(err, "failed to get latest indexer state")
}
// if not found, create indexer state
if errors.Is(err, errs.NotFound) {
if err := p.indexerInfoDg.CreateIndexerState(ctx, entity.IndexerState{
ClientVersion: ClientVersion,
DBVersion: DBVersion,
EventHashVersion: EventHashVersion,
Network: p.network,
}); err != nil {
return errors.Wrap(err, "failed to set indexer state")
}
} else {
if indexerState.DBVersion != DBVersion {
return errors.Wrapf(errs.ConflictSetting, "db version mismatch: current version is %d. Please upgrade to version %d", indexerState.DBVersion, DBVersion)
}
if indexerState.EventHashVersion != EventHashVersion {
return errors.Wrapf(errs.ConflictSetting, "event hash version mismatch: latest indexed version is %d, configured version is %d. Please reset the database first.", indexerState.EventHashVersion, EventHashVersion)
}
if indexerState.Network != p.network {
return errors.Wrapf(errs.ConflictSetting, "network mismatch: latest indexed network is %v, configured network is %v. If you want to change the network, please reset the database", indexerState.Network, p.network)
}
}
stats, err := p.brc20Dg.GetProcessorStats(ctx)
if err != nil {
if !errors.Is(err, errs.NotFound) {
return errors.Wrap(err, "failed to get processor stats")
}
stats = &entity.ProcessorStats{
BlockHeight: uint64(startingBlockHeader[p.network].Height),
CursedInscriptionCount: 0,
BlessedInscriptionCount: 0,
LostSats: 0,
}
}
p.cursedInscriptionCount = stats.CursedInscriptionCount
p.blessedInscriptionCount = stats.BlessedInscriptionCount
p.lostSats = stats.LostSats
return nil
}
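The shape of `VerifyStates` can be sketched in isolation: create the state row on first run, otherwise reject version mismatches before indexing starts. This is a stdlib-only sketch under assumed version constants; the real `entity.IndexerState` also carries the client version and network:

```go
package main

import (
	"errors"
	"fmt"
)

// IndexerState is a simplified stand-in for entity.IndexerState.
type IndexerState struct {
	DBVersion        int
	EventHashVersion int
}

var errNotFound = errors.New("not found")

// Hypothetical version constants; the real values live in the brc20 package.
const (
	dbVersion        = 2
	eventHashVersion = 1
)

// verifyState mirrors the VerifyStates flow above: create the state row
// when none exists, otherwise fail fast on any version mismatch.
func verifyState(load func() (IndexerState, error), create func(IndexerState) error) error {
	state, err := load()
	if errors.Is(err, errNotFound) {
		return create(IndexerState{DBVersion: dbVersion, EventHashVersion: eventHashVersion})
	}
	if err != nil {
		return fmt.Errorf("load indexer state: %w", err)
	}
	if state.DBVersion != dbVersion {
		return fmt.Errorf("db version mismatch: have %d, want %d", state.DBVersion, dbVersion)
	}
	if state.EventHashVersion != eventHashVersion {
		return fmt.Errorf("event hash version mismatch: have %d, want %d", state.EventHashVersion, eventHashVersion)
	}
	return nil
}

func main() {
	err := verifyState(
		func() (IndexerState, error) { return IndexerState{DBVersion: 1, EventHashVersion: 1}, nil },
		func(IndexerState) error { return nil },
	)
	fmt.Println(err)
}
```

Failing fast here protects the database: indexing with a mismatched schema or event-hash version would silently produce inconsistent state.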
// CurrentBlock implements indexer.Processor.
func (p *Processor) CurrentBlock(ctx context.Context) (types.BlockHeader, error) {
blockHeader, err := p.brc20Dg.GetLatestBlock(ctx)
if err != nil {
if errors.Is(err, errs.NotFound) {
return startingBlockHeader[p.network], nil
}
return types.BlockHeader{}, errors.Wrap(err, "failed to get latest block")
}
return blockHeader, nil
}
// GetIndexedBlock implements indexer.Processor.
func (p *Processor) GetIndexedBlock(ctx context.Context, height int64) (types.BlockHeader, error) {
block, err := p.brc20Dg.GetIndexedBlockByHeight(ctx, height)
if err != nil {
return types.BlockHeader{}, errors.Wrap(err, "failed to get indexed block")
}
return types.BlockHeader{
Height: int64(block.Height),
Hash: block.Hash,
}, nil
}
// Name implements indexer.Processor.
func (p *Processor) Name() string {
return "brc20"
}
// RevertData implements indexer.Processor.
func (p *Processor) RevertData(ctx context.Context, from int64) error {
brc20DgTx, err := p.brc20Dg.BeginBRC20Tx(ctx)
if err != nil {
return errors.Wrap(err, "failed to begin transaction")
}
defer func() {
if err := brc20DgTx.Rollback(ctx); err != nil {
logger.WarnContext(ctx, "failed to rollback transaction",
slogx.Error(err),
slogx.String("event", "rollback_brc20_insertion"),
)
}
}()
if err := brc20DgTx.DeleteIndexedBlocksSinceHeight(ctx, uint64(from)); err != nil {
return errors.Wrap(err, "failed to delete indexed blocks")
}
if err := brc20DgTx.DeleteProcessorStatsSinceHeight(ctx, uint64(from)); err != nil {
return errors.Wrap(err, "failed to delete processor stats")
}
if err := brc20DgTx.DeleteTickEntriesSinceHeight(ctx, uint64(from)); err != nil {
return errors.Wrap(err, "failed to delete ticks")
}
if err := brc20DgTx.DeleteTickEntryStatesSinceHeight(ctx, uint64(from)); err != nil {
return errors.Wrap(err, "failed to delete tick states")
}
if err := brc20DgTx.DeleteEventDeploysSinceHeight(ctx, uint64(from)); err != nil {
return errors.Wrap(err, "failed to delete deploy events")
}
if err := brc20DgTx.DeleteEventMintsSinceHeight(ctx, uint64(from)); err != nil {
return errors.Wrap(err, "failed to delete mint events")
}
if err := brc20DgTx.DeleteEventInscribeTransfersSinceHeight(ctx, uint64(from)); err != nil {
return errors.Wrap(err, "failed to delete inscribe transfer events")
}
if err := brc20DgTx.DeleteEventTransferTransfersSinceHeight(ctx, uint64(from)); err != nil {
return errors.Wrap(err, "failed to delete transfer transfer events")
}
if err := brc20DgTx.DeleteBalancesSinceHeight(ctx, uint64(from)); err != nil {
return errors.Wrap(err, "failed to delete balances")
}
if err := brc20DgTx.DeleteInscriptionEntriesSinceHeight(ctx, uint64(from)); err != nil {
return errors.Wrap(err, "failed to delete inscription entries")
}
if err := brc20DgTx.DeleteInscriptionEntryStatesSinceHeight(ctx, uint64(from)); err != nil {
return errors.Wrap(err, "failed to delete inscription entry states")
}
if err := brc20DgTx.DeleteInscriptionTransfersSinceHeight(ctx, uint64(from)); err != nil {
return errors.Wrap(err, "failed to delete inscription transfers")
}
if err := brc20DgTx.Commit(ctx); err != nil {
return errors.Wrap(err, "failed to commit transaction")
}
return nil
}
func (p *Processor) Shutdown(ctx context.Context) error {
// named errList to avoid shadowing the imported errs package
var errList []error
for _, cleanup := range p.cleanupFuncs {
if err := cleanup(ctx); err != nil {
errList = append(errList, err)
}
}
return errors.WithStack(errors.Join(errList...))
}


@@ -1,413 +0,0 @@
package brc20
import (
"bytes"
"context"
"encoding/hex"
"time"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/core/types"
"github.com/gaze-network/indexer-network/modules/brc20/internal/brc20"
"github.com/gaze-network/indexer-network/modules/brc20/internal/datagateway"
"github.com/gaze-network/indexer-network/modules/brc20/internal/entity"
"github.com/gaze-network/indexer-network/modules/brc20/internal/ordinals"
"github.com/samber/lo"
"github.com/shopspring/decimal"
)
func (p *Processor) processBRC20States(ctx context.Context, transfers []*entity.InscriptionTransfer, blockHeader types.BlockHeader) error {
payloads := make([]*brc20.Payload, 0)
ticks := make(map[string]struct{})
for _, transfer := range transfers {
if transfer.Content == nil {
// skip empty content
continue
}
payload, err := brc20.ParsePayload(transfer)
if err != nil {
return errors.Wrap(err, "failed to parse payload")
}
payloads = append(payloads, payload)
ticks[payload.Tick] = struct{}{}
}
if len(payloads) == 0 {
// skip if no valid payloads
return nil
}
// TODO: concurrently fetch from db to optimize speed
tickEntries, err := p.brc20Dg.GetTickEntriesByTicks(ctx, lo.Keys(ticks))
if err != nil {
return errors.Wrap(err, "failed to get tick entries by ticks")
}
// preload required data to reduce individual data fetching during process
inscriptionIds := make([]ordinals.InscriptionId, 0)
inscriptionIdsToFetchParent := make([]ordinals.InscriptionId, 0)
inscriptionIdsToFetchEventInscribeTransfer := make([]ordinals.InscriptionId, 0)
balancesToFetch := make([]datagateway.GetBalancesBatchAtHeightQuery, 0) // (pkScript, tick) balance queries to preload
for _, payload := range payloads {
inscriptionIds = append(inscriptionIds, payload.Transfer.InscriptionId)
if payload.Op == brc20.OperationMint {
// preload parent id to validate mint events with self mint
if entry := tickEntries[payload.Tick]; entry.IsSelfMint {
inscriptionIdsToFetchParent = append(inscriptionIdsToFetchParent, payload.Transfer.InscriptionId)
}
}
if payload.Op == brc20.OperationTransfer {
if payload.Transfer.OldSatPoint == (ordinals.SatPoint{}) {
// preload balance to validate inscribe transfer event
balancesToFetch = append(balancesToFetch, datagateway.GetBalancesBatchAtHeightQuery{
PkScriptHex: hex.EncodeToString(payload.Transfer.NewPkScript),
Tick: payload.Tick,
})
} else {
// preload inscribe-transfer events to validate transfer-transfer event
inscriptionIdsToFetchEventInscribeTransfer = append(inscriptionIdsToFetchEventInscribeTransfer, payload.Transfer.InscriptionId)
}
}
}
inscriptionIdsToNumber, err := p.getInscriptionNumbersByIds(ctx, lo.Uniq(inscriptionIds))
if err != nil {
return errors.Wrap(err, "failed to get inscription numbers by ids")
}
inscriptionIdsToParent, err := p.getInscriptionParentsByIds(ctx, lo.Uniq(inscriptionIdsToFetchParent))
if err != nil {
return errors.Wrap(err, "failed to get inscription parents by ids")
}
latestEventId, err := p.brc20Dg.GetLatestEventId(ctx)
if err != nil {
return errors.Wrap(err, "failed to get latest event id")
}
// pkscript -> tick -> balance
balances, err := p.brc20Dg.GetBalancesBatchAtHeight(ctx, uint64(blockHeader.Height-1), balancesToFetch)
if err != nil {
return errors.Wrap(err, "failed to get balances batch at height")
}
eventInscribeTransfers, err := p.brc20Dg.GetEventInscribeTransfersByInscriptionIds(ctx, lo.Uniq(inscriptionIdsToFetchEventInscribeTransfer))
if err != nil {
return errors.Wrap(err, "failed to get event inscribe transfers by inscription ids")
}
newTickEntries := make(map[string]*entity.TickEntry)
newTickEntryStates := make(map[string]*entity.TickEntry)
newEventDeploys := make([]*entity.EventDeploy, 0)
newEventMints := make([]*entity.EventMint, 0)
newEventInscribeTransfers := make([]*entity.EventInscribeTransfer, 0)
newEventTransferTransfers := make([]*entity.EventTransferTransfer, 0)
newBalances := make(map[string]map[string]*entity.Balance)
for _, payload := range payloads {
tickEntry := tickEntries[payload.Tick]
if payload.Transfer.SentAsFee && payload.Transfer.OldSatPoint == (ordinals.SatPoint{}) {
// skip inscriptions inscribed as fee
continue
}
switch payload.Op {
case brc20.OperationDeploy:
if payload.Transfer.TransferCount > 1 {
// skip used deploy inscriptions
continue
}
if tickEntry != nil {
// skip deploy inscriptions for duplicate ticks
continue
}
tickEntry := &entity.TickEntry{
Tick: payload.Tick,
OriginalTick: payload.OriginalTick,
TotalSupply: payload.Max,
Decimals: payload.Dec,
LimitPerMint: payload.Lim,
IsSelfMint: payload.SelfMint,
DeployInscriptionId: payload.Transfer.InscriptionId,
DeployedAt: blockHeader.Timestamp,
DeployedAtHeight: payload.Transfer.BlockHeight,
MintedAmount: decimal.Zero,
BurnedAmount: decimal.Zero,
CompletedAt: time.Time{},
CompletedAtHeight: 0,
}
newTickEntries[payload.Tick] = tickEntry
newTickEntryStates[payload.Tick] = tickEntry
// update entries for other operations in same block
tickEntries[payload.Tick] = tickEntry
newEventDeploys = append(newEventDeploys, &entity.EventDeploy{
Id: latestEventId + 1,
InscriptionId: payload.Transfer.InscriptionId,
InscriptionNumber: inscriptionIdsToNumber[payload.Transfer.InscriptionId],
Tick: payload.Tick,
OriginalTick: payload.OriginalTick,
TxHash: payload.Transfer.TxHash,
BlockHeight: payload.Transfer.BlockHeight,
TxIndex: payload.Transfer.TxIndex,
Timestamp: blockHeader.Timestamp,
PkScript: payload.Transfer.NewPkScript,
SatPoint: payload.Transfer.NewSatPoint,
TotalSupply: payload.Max,
Decimals: payload.Dec,
LimitPerMint: payload.Lim,
IsSelfMint: payload.SelfMint,
})
latestEventId++
case brc20.OperationMint:
if payload.Transfer.TransferCount > 1 {
// skip mint inscriptions that are already used
continue
}
if tickEntry == nil {
// skip mint inscriptions for non-existent ticks
continue
}
if -payload.Amt.Exponent() > int32(tickEntry.Decimals) {
// skip mint inscriptions with decimals greater than allowed
continue
}
if tickEntry.MintedAmount.GreaterThanOrEqual(tickEntry.TotalSupply) {
// skip mint inscriptions for ticks with completed mints
continue
}
if payload.Amt.GreaterThan(tickEntry.LimitPerMint) {
// skip mint inscriptions with amount greater than limit per mint
continue
}
mintableAmount := tickEntry.TotalSupply.Sub(tickEntry.MintedAmount)
if payload.Amt.GreaterThan(mintableAmount) {
payload.Amt = mintableAmount
}
var parentId *ordinals.InscriptionId
if tickEntry.IsSelfMint {
parentIdValue, ok := inscriptionIdsToParent[payload.Transfer.InscriptionId]
if !ok {
// skip mint inscriptions for self mint ticks without parent inscription
continue
}
if parentIdValue != tickEntry.DeployInscriptionId {
// skip mint inscriptions for self mint ticks with invalid parent inscription
continue
}
parentId = &parentIdValue
}
tickEntry.MintedAmount = tickEntry.MintedAmount.Add(payload.Amt)
if tickEntry.MintedAmount.GreaterThanOrEqual(tickEntry.TotalSupply) {
tickEntry.CompletedAt = blockHeader.Timestamp
tickEntry.CompletedAtHeight = payload.Transfer.BlockHeight
}
newTickEntryStates[payload.Tick] = tickEntry
newEventMints = append(newEventMints, &entity.EventMint{
Id: latestEventId + 1,
InscriptionId: payload.Transfer.InscriptionId,
InscriptionNumber: inscriptionIdsToNumber[payload.Transfer.InscriptionId],
Tick: payload.Tick,
OriginalTick: payload.OriginalTick,
TxHash: payload.Transfer.TxHash,
BlockHeight: payload.Transfer.BlockHeight,
TxIndex: payload.Transfer.TxIndex,
Timestamp: blockHeader.Timestamp,
PkScript: payload.Transfer.NewPkScript,
SatPoint: payload.Transfer.NewSatPoint,
Amount: payload.Amt,
ParentId: parentId,
})
latestEventId++
case brc20.OperationTransfer:
if payload.Transfer.TransferCount > 2 {
// skip used transfer inscriptions
continue
}
if tickEntry == nil {
// skip transfer inscriptions for non-existent ticks
continue
}
if -payload.Amt.Exponent() > int32(tickEntry.Decimals) {
// skip transfer inscriptions with decimals greater than allowed
continue
}
if payload.Transfer.OldSatPoint == (ordinals.SatPoint{}) {
// inscribe transfer event
pkScriptHex := hex.EncodeToString(payload.Transfer.NewPkScript)
balance, ok := balances[pkScriptHex][payload.Tick]
if !ok {
balance = &entity.Balance{
PkScript: payload.Transfer.NewPkScript,
Tick: payload.Tick,
BlockHeight: uint64(blockHeader.Height - 1),
OverallBalance: decimal.Zero, // defaults balance to zero if not found
AvailableBalance: decimal.Zero,
}
}
if payload.Amt.GreaterThan(balance.AvailableBalance) {
// skip inscribe transfer event if amount exceeds available balance
continue
}
// update balance state
balance.BlockHeight = uint64(blockHeader.Height)
balance.AvailableBalance = balance.AvailableBalance.Sub(payload.Amt)
if _, ok := balances[pkScriptHex]; !ok {
balances[pkScriptHex] = make(map[string]*entity.Balance)
}
balances[pkScriptHex][payload.Tick] = balance
if _, ok := newBalances[pkScriptHex]; !ok {
newBalances[pkScriptHex] = make(map[string]*entity.Balance)
}
newBalances[pkScriptHex][payload.Tick] = balance
event := &entity.EventInscribeTransfer{
Id: latestEventId + 1,
InscriptionId: payload.Transfer.InscriptionId,
InscriptionNumber: inscriptionIdsToNumber[payload.Transfer.InscriptionId],
Tick: payload.Tick,
OriginalTick: payload.OriginalTick,
TxHash: payload.Transfer.TxHash,
BlockHeight: payload.Transfer.BlockHeight,
TxIndex: payload.Transfer.TxIndex,
Timestamp: blockHeader.Timestamp,
PkScript: payload.Transfer.NewPkScript,
SatPoint: payload.Transfer.NewSatPoint,
OutputIndex: payload.Transfer.NewSatPoint.OutPoint.Index,
SatsAmount: payload.Transfer.NewOutputValue,
Amount: payload.Amt,
}
latestEventId++
eventInscribeTransfers[payload.Transfer.InscriptionId] = event
newEventInscribeTransfers = append(newEventInscribeTransfers, event)
} else {
// transfer transfer event
inscribeTransfer, ok := eventInscribeTransfers[payload.Transfer.InscriptionId]
if !ok {
// skip transfer transfer event if prior inscribe transfer event does not exist
continue
}
if payload.Transfer.SentAsFee {
// return balance to sender
fromPkScriptHex := hex.EncodeToString(inscribeTransfer.PkScript)
fromBalance, ok := balances[fromPkScriptHex][payload.Tick]
if !ok {
fromBalance = &entity.Balance{
PkScript: inscribeTransfer.PkScript,
Tick: payload.Tick,
BlockHeight: uint64(blockHeader.Height),
OverallBalance: decimal.Zero, // defaults balance to zero if not found
AvailableBalance: decimal.Zero,
}
}
fromBalance.BlockHeight = uint64(blockHeader.Height)
fromBalance.AvailableBalance = fromBalance.AvailableBalance.Add(payload.Amt)
if _, ok := balances[fromPkScriptHex]; !ok {
balances[fromPkScriptHex] = make(map[string]*entity.Balance)
}
balances[fromPkScriptHex][payload.Tick] = fromBalance
if _, ok := newBalances[fromPkScriptHex]; !ok {
newBalances[fromPkScriptHex] = make(map[string]*entity.Balance)
}
newBalances[fromPkScriptHex][payload.Tick] = fromBalance
newEventTransferTransfers = append(newEventTransferTransfers, &entity.EventTransferTransfer{
Id: latestEventId + 1,
InscriptionId: payload.Transfer.InscriptionId,
InscriptionNumber: inscriptionIdsToNumber[payload.Transfer.InscriptionId],
Tick: payload.Tick,
OriginalTick: payload.OriginalTick,
TxHash: payload.Transfer.TxHash,
BlockHeight: payload.Transfer.BlockHeight,
TxIndex: payload.Transfer.TxIndex,
Timestamp: blockHeader.Timestamp,
FromPkScript: inscribeTransfer.PkScript,
FromSatPoint: inscribeTransfer.SatPoint,
FromInputIndex: payload.Transfer.FromInputIndex,
ToPkScript: payload.Transfer.NewPkScript,
ToSatPoint: payload.Transfer.NewSatPoint,
ToOutputIndex: payload.Transfer.NewSatPoint.OutPoint.Index,
SpentAsFee: true,
Amount: payload.Amt,
})
} else {
// subtract balance from sender
fromPkScriptHex := hex.EncodeToString(inscribeTransfer.PkScript)
fromBalance, ok := balances[fromPkScriptHex][payload.Tick]
if !ok {
// skip transfer transfer event if from balance does not exist
continue
}
fromBalance.BlockHeight = uint64(blockHeader.Height)
fromBalance.OverallBalance = fromBalance.OverallBalance.Sub(payload.Amt)
if _, ok := balances[fromPkScriptHex]; !ok {
balances[fromPkScriptHex] = make(map[string]*entity.Balance)
}
balances[fromPkScriptHex][payload.Tick] = fromBalance
if _, ok := newBalances[fromPkScriptHex]; !ok {
newBalances[fromPkScriptHex] = make(map[string]*entity.Balance)
}
newBalances[fromPkScriptHex][payload.Tick] = fromBalance
// add balance to receiver
if bytes.Equal(payload.Transfer.NewPkScript, []byte{0x6a}) {
// burn if sent to OP_RETURN
tickEntry.BurnedAmount = tickEntry.BurnedAmount.Add(payload.Amt)
tickEntries[payload.Tick] = tickEntry
newTickEntryStates[payload.Tick] = tickEntry
} else {
toPkScriptHex := hex.EncodeToString(payload.Transfer.NewPkScript)
toBalance, ok := balances[toPkScriptHex][payload.Tick]
if !ok {
toBalance = &entity.Balance{
PkScript: payload.Transfer.NewPkScript,
Tick: payload.Tick,
BlockHeight: uint64(blockHeader.Height),
OverallBalance: decimal.Zero, // defaults balance to zero if not found
AvailableBalance: decimal.Zero,
}
}
toBalance.BlockHeight = uint64(blockHeader.Height)
toBalance.OverallBalance = toBalance.OverallBalance.Add(payload.Amt)
toBalance.AvailableBalance = toBalance.AvailableBalance.Add(payload.Amt)
if _, ok := balances[toPkScriptHex]; !ok {
balances[toPkScriptHex] = make(map[string]*entity.Balance)
}
balances[toPkScriptHex][payload.Tick] = toBalance
if _, ok := newBalances[toPkScriptHex]; !ok {
newBalances[toPkScriptHex] = make(map[string]*entity.Balance)
}
newBalances[toPkScriptHex][payload.Tick] = toBalance
}
newEventTransferTransfers = append(newEventTransferTransfers, &entity.EventTransferTransfer{
Id: latestEventId + 1,
InscriptionId: payload.Transfer.InscriptionId,
InscriptionNumber: inscriptionIdsToNumber[payload.Transfer.InscriptionId],
Tick: payload.Tick,
OriginalTick: payload.OriginalTick,
TxHash: payload.Transfer.TxHash,
BlockHeight: payload.Transfer.BlockHeight,
TxIndex: payload.Transfer.TxIndex,
Timestamp: blockHeader.Timestamp,
FromPkScript: inscribeTransfer.PkScript,
FromSatPoint: inscribeTransfer.SatPoint,
FromInputIndex: payload.Transfer.FromInputIndex,
ToPkScript: payload.Transfer.NewPkScript,
ToSatPoint: payload.Transfer.NewSatPoint,
ToOutputIndex: payload.Transfer.NewSatPoint.OutPoint.Index,
SpentAsFee: false,
Amount: payload.Amt,
})
}
}
}
}
p.newTickEntries = newTickEntries
p.newTickEntryStates = newTickEntryStates
p.newEventDeploys = newEventDeploys
p.newEventMints = newEventMints
p.newEventInscribeTransfers = newEventInscribeTransfers
p.newEventTransferTransfers = newEventTransferTransfers
p.newBalances = newBalances
return nil
}
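The default-to-zero balance update above (create the nested map on first sight, then credit) can be illustrated with a minimal, self-contained sketch. Plain `int64` amounts stand in for the `decimal` type the indexer uses, and the names here are illustrative, not from the codebase:

```go
package main

import "fmt"

// credit adds amt to balances[pkScript][tick], creating the nested map
// on first sight; a missing tick key reads as a zero balance.
func credit(balances map[string]map[string]int64, pkScript, tick string, amt int64) {
	if _, ok := balances[pkScript]; !ok {
		balances[pkScript] = make(map[string]int64)
	}
	balances[pkScript][tick] += amt
}

func main() {
	balances := make(map[string]map[string]int64)
	credit(balances, "76a914...", "ordi", 100)
	credit(balances, "76a914...", "ordi", 50)
	fmt.Println(balances["76a914..."]["ordi"]) // 150
}
```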


@@ -1,570 +0,0 @@
package brc20
import (
"context"
"encoding/json"
"slices"
"sync"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/wire"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/common/errs"
"github.com/gaze-network/indexer-network/core/types"
"github.com/gaze-network/indexer-network/modules/brc20/internal/entity"
"github.com/gaze-network/indexer-network/modules/brc20/internal/ordinals"
"github.com/gaze-network/indexer-network/pkg/logger"
"github.com/gaze-network/indexer-network/pkg/logger/slogx"
"github.com/samber/lo"
"golang.org/x/sync/errgroup"
)
func (p *Processor) processInscriptionTx(ctx context.Context, tx *types.Transaction, blockHeader types.BlockHeader) error {
ctx = logger.WithContext(ctx, slogx.String("tx_hash", tx.TxHash.String()))
envelopes := ordinals.ParseEnvelopesFromTx(tx)
inputOutPoints := lo.Map(tx.TxIn, func(txIn *types.TxIn, _ int) wire.OutPoint {
return wire.OutPoint{
Hash: txIn.PreviousOutTxHash,
Index: txIn.PreviousOutIndex,
}
})
transfersInOutPoints, err := p.getInscriptionTransfersInOutPoints(ctx, inputOutPoints)
if err != nil {
return errors.Wrap(err, "failed to get inscriptions in outpoints")
}
// cache outpoint values for future blocks
for outIndex, txOut := range tx.TxOut {
p.outPointValueCache.Add(wire.OutPoint{
Hash: tx.TxHash,
Index: uint32(outIndex),
}, uint64(txOut.Value))
}
if len(envelopes) == 0 && len(transfersInOutPoints) == 0 {
// no inscription activity, skip
return nil
}
floatingInscriptions := make([]*entity.Flotsam, 0)
totalInputValue := uint64(0)
totalOutputValue := lo.SumBy(tx.TxOut, func(txOut *types.TxOut) uint64 { return uint64(txOut.Value) })
inscribeOffsets := make(map[uint64]*struct {
inscriptionId ordinals.InscriptionId
count int
})
idCounter := uint32(0)
inputValues, err := p.getOutPointValues(ctx, inputOutPoints)
if err != nil {
return errors.Wrap(err, "failed to get outpoint values")
}
for i, input := range tx.TxIn {
inputOutPoint := wire.OutPoint{
Hash: input.PreviousOutTxHash,
Index: input.PreviousOutIndex,
}
inputValue := inputValues[inputOutPoint]
// skip coinbase inputs since there can't be an inscription in coinbase
if input.PreviousOutTxHash.IsEqual(&chainhash.Hash{}) {
totalInputValue += p.getBlockSubsidy(uint64(tx.BlockHeight))
continue
}
transfersInOutPoint := transfersInOutPoints[inputOutPoint]
for satPoint, transfers := range transfersInOutPoint {
offset := totalInputValue + satPoint.Offset
for _, transfer := range transfers {
floatingInscriptions = append(floatingInscriptions, &entity.Flotsam{
Offset: offset,
InscriptionId: transfer.InscriptionId,
Tx: tx,
OriginOld: &entity.OriginOld{
OldSatPoint: satPoint,
Content: transfer.Content,
InputIndex: uint32(i),
},
})
if _, ok := inscribeOffsets[offset]; !ok {
inscribeOffsets[offset] = &struct {
inscriptionId ordinals.InscriptionId
count int
}{transfer.InscriptionId, 0}
}
inscribeOffsets[offset].count++
}
}
// offset on output to inscribe new inscriptions from this input
offset := totalInputValue
totalInputValue += inputValue
envelopesInInput := lo.Filter(envelopes, func(envelope *ordinals.Envelope, _ int) bool {
return envelope.InputIndex == uint32(i)
})
for _, envelope := range envelopesInInput {
inscriptionId := ordinals.InscriptionId{
TxHash: tx.TxHash,
Index: idCounter,
}
var cursed, cursedForBRC20 bool
if envelope.UnrecognizedEvenField || // unrecognized even field
envelope.DuplicateField || // duplicate field
envelope.IncompleteField || // incomplete field
envelope.InputIndex != 0 || // not first input
envelope.Offset != 0 || // not first envelope in input
envelope.Inscription.Pointer != nil || // contains pointer
envelope.PushNum || // contains pushnum opcodes
envelope.Stutter { // contains stuttering curse structure
cursed = true
cursedForBRC20 = true
}
if initial, ok := inscribeOffsets[offset]; !cursed && ok {
if initial.count > 1 {
cursed = true // reinscription
cursedForBRC20 = true
} else {
initialInscriptionEntry, err := p.getInscriptionEntryById(ctx, initial.inscriptionId)
if err != nil {
return errors.Wrapf(err, "failed to get inscription entry id %s", initial.inscriptionId)
}
if !initialInscriptionEntry.Cursed {
cursed = true // reinscription curse if initial inscription is not cursed
}
if !initialInscriptionEntry.CursedForBRC20 {
cursedForBRC20 = true
}
}
}
// inscriptions are no longer cursed after jubilee, but BRC20 still considers them as cursed
if cursed && uint64(tx.BlockHeight) > ordinals.GetJubileeHeight(p.network) {
cursed = false
}
unbound := inputValue == 0 || envelope.UnrecognizedEvenField
if envelope.Inscription.Pointer != nil && *envelope.Inscription.Pointer < totalOutputValue {
offset = *envelope.Inscription.Pointer
}
floatingInscriptions = append(floatingInscriptions, &entity.Flotsam{
Offset: offset,
InscriptionId: inscriptionId,
Tx: tx,
OriginNew: &entity.OriginNew{
Reinscription: inscribeOffsets[offset] != nil,
Cursed: cursed,
CursedForBRC20: cursedForBRC20,
Fee: 0,
Hidden: false, // we don't care about this field for brc20
Parent: envelope.Inscription.Parent,
Pointer: envelope.Inscription.Pointer,
Unbound: unbound,
Inscription: envelope.Inscription,
},
})
if _, ok := inscribeOffsets[offset]; !ok {
inscribeOffsets[offset] = &struct {
inscriptionId ordinals.InscriptionId
count int
}{inscriptionId, 0}
}
inscribeOffsets[offset].count++
idCounter++
}
}
// parents must exist in floatingInscriptions to be valid
potentialParents := make(map[ordinals.InscriptionId]struct{})
for _, flotsam := range floatingInscriptions {
potentialParents[flotsam.InscriptionId] = struct{}{}
}
for _, flotsam := range floatingInscriptions {
if flotsam.OriginNew != nil && flotsam.OriginNew.Parent != nil {
if _, ok := potentialParents[*flotsam.OriginNew.Parent]; !ok {
// parent not found, ignore parent
flotsam.OriginNew.Parent = nil
}
}
}
// calculate fee for each new inscription
for _, flotsam := range floatingInscriptions {
if flotsam.OriginNew != nil {
flotsam.OriginNew.Fee = (totalInputValue - totalOutputValue) / uint64(idCounter)
}
}
// if tx is coinbase, add inscriptions sent as fee to outputs of this tx
ownInscriptionCount := len(floatingInscriptions)
isCoinbase := tx.TxIn[0].PreviousOutTxHash.IsEqual(&chainhash.Hash{})
if isCoinbase {
floatingInscriptions = append(floatingInscriptions, p.flotsamsSentAsFee...)
}
// sort floatingInscriptions by offset
slices.SortFunc(floatingInscriptions, func(i, j *entity.Flotsam) int {
return int(i.Offset) - int(j.Offset)
})
outputValue := uint64(0)
curIncrIdx := 0
// newLocations := make(map[ordinals.SatPoint][]*Flotsam)
type location struct {
satPoint ordinals.SatPoint
flotsam *entity.Flotsam
sentAsFee bool
}
newLocations := make([]*location, 0)
outputToSumValue := make([]uint64, 0, len(tx.TxOut))
for outIndex, txOut := range tx.TxOut {
end := outputValue + uint64(txOut.Value)
// process all inscriptions that are supposed to be inscribed in this output
for curIncrIdx < len(floatingInscriptions) && floatingInscriptions[curIncrIdx].Offset < end {
newSatPoint := ordinals.SatPoint{
OutPoint: wire.OutPoint{
Hash: tx.TxHash,
Index: uint32(outIndex),
},
Offset: floatingInscriptions[curIncrIdx].Offset - outputValue,
}
// newLocations[newSatPoint] = append(newLocations[newSatPoint], floatingInscriptions[curIncrIdx])
newLocations = append(newLocations, &location{
satPoint: newSatPoint,
flotsam: floatingInscriptions[curIncrIdx],
sentAsFee: isCoinbase && curIncrIdx >= ownInscriptionCount, // if curIncrIdx >= ownInscriptionCount, then current inscription came from p.flotsamsSentAsFee
})
curIncrIdx++
}
outputValue = end
outputToSumValue = append(outputToSumValue, outputValue)
}
for _, loc := range newLocations {
satPoint := loc.satPoint
flotsam := loc.flotsam
sentAsFee := loc.sentAsFee
// TODO: not sure if we still need to handle pointer here, it's already handled above.
if flotsam.OriginNew != nil && flotsam.OriginNew.Pointer != nil {
pointer := *flotsam.OriginNew.Pointer
for outIndex, outputValue := range outputToSumValue {
start := uint64(0)
if outIndex > 0 {
start = outputToSumValue[outIndex-1]
}
end := outputValue
if start <= pointer && pointer < end {
satPoint.Offset = pointer - start
break
}
}
}
if err := p.updateInscriptionLocation(ctx, satPoint, flotsam, sentAsFee, tx, blockHeader); err != nil {
return errors.Wrap(err, "failed to update inscription location")
}
}
// handle leftover flotsams (flotsams with offset over total output value)
if isCoinbase {
// if there are leftover inscriptions in coinbase, they are lost permanently
for _, flotsam := range floatingInscriptions[curIncrIdx:] {
newSatPoint := ordinals.SatPoint{
OutPoint: wire.OutPoint{},
Offset: p.lostSats + flotsam.Offset - totalOutputValue,
}
if err := p.updateInscriptionLocation(ctx, newSatPoint, flotsam, false, tx, blockHeader); err != nil {
return errors.Wrap(err, "failed to update inscription location")
}
}
p.lostSats += p.blockReward - totalOutputValue
} else {
// if there are leftover inscriptions in non-coinbase tx, they are stored in p.flotsamsSentAsFee for processing in this block's coinbase tx
for _, flotsam := range floatingInscriptions[curIncrIdx:] {
flotsam.Offset = p.blockReward + flotsam.Offset - totalOutputValue
p.flotsamsSentAsFee = append(p.flotsamsSentAsFee, flotsam)
}
// add fees to block reward
p.blockReward += totalInputValue - totalOutputValue
}
return nil
}
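The `slices.SortFunc` call above compares offsets with `int(i.Offset) - int(j.Offset)`; since `int` is platform-sized, that subtraction can overflow on 32-bit builds for large sat offsets. A sketch of the safer comparator using `cmp.Compare` (Go 1.21+), with a pared-down stand-in for `entity.Flotsam`:

```go
package main

import (
	"cmp"
	"fmt"
	"slices"
)

type flotsam struct{ Offset uint64 }

// sortFlotsams orders flotsams by sat offset; cmp.Compare avoids the
// overflow that int(a.Offset) - int(b.Offset) risks on 32-bit ints.
func sortFlotsams(fs []*flotsam) {
	slices.SortFunc(fs, func(a, b *flotsam) int {
		return cmp.Compare(a.Offset, b.Offset)
	})
}

func main() {
	fs := []*flotsam{{Offset: 1 << 40}, {Offset: 5}, {Offset: 42}}
	sortFlotsams(fs)
	for _, f := range fs {
		fmt.Println(f.Offset) // 5, then 42, then 1099511627776
	}
}
```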
func (p *Processor) updateInscriptionLocation(ctx context.Context, newSatPoint ordinals.SatPoint, flotsam *entity.Flotsam, sentAsFee bool, tx *types.Transaction, blockHeader types.BlockHeader) error {
txOut := tx.TxOut[newSatPoint.OutPoint.Index]
if flotsam.OriginOld != nil {
entry, err := p.getInscriptionEntryById(ctx, flotsam.InscriptionId)
if err != nil {
return errors.Wrapf(err, "failed to get inscription entry id %s", flotsam.InscriptionId)
}
entry.TransferCount++
transfer := &entity.InscriptionTransfer{
InscriptionId: flotsam.InscriptionId,
BlockHeight: uint64(flotsam.Tx.BlockHeight), // use flotsam's tx to track tx that initiated the transfer
TxIndex: flotsam.Tx.Index, // use flotsam's tx to track tx that initiated the transfer
TxHash: flotsam.Tx.TxHash,
Content: flotsam.OriginOld.Content,
FromInputIndex: flotsam.OriginOld.InputIndex,
OldSatPoint: flotsam.OriginOld.OldSatPoint,
NewSatPoint: newSatPoint,
NewPkScript: txOut.PkScript,
NewOutputValue: uint64(txOut.Value),
SentAsFee: sentAsFee,
TransferCount: entry.TransferCount,
}
// track transfers even if transfer count exceeds 2 (because we need to check for reinscriptions)
p.newInscriptionTransfers = append(p.newInscriptionTransfers, transfer)
p.newInscriptionEntryStates[entry.Id] = entry
return nil
}
if flotsam.OriginNew != nil {
origin := flotsam.OriginNew
var inscriptionNumber int64
sequenceNumber := p.cursedInscriptionCount + p.blessedInscriptionCount
if origin.Cursed {
inscriptionNumber = -int64(p.cursedInscriptionCount + 1)
p.cursedInscriptionCount++
} else {
inscriptionNumber = int64(p.blessedInscriptionCount)
p.blessedInscriptionCount++
}
// if not valid brc20 inscription, delete content to save space
if !isBRC20Inscription(origin.Inscription) {
origin.Inscription.Content = nil
origin.Inscription.ContentType = ""
origin.Inscription.ContentEncoding = ""
}
transfer := &entity.InscriptionTransfer{
InscriptionId: flotsam.InscriptionId,
BlockHeight: uint64(flotsam.Tx.BlockHeight), // use flotsam's tx to track tx that initiated the transfer
TxIndex: flotsam.Tx.Index, // use flotsam's tx to track tx that initiated the transfer
TxHash: flotsam.Tx.TxHash,
Content: origin.Inscription.Content,
FromInputIndex: 0, // unused
OldSatPoint: ordinals.SatPoint{},
NewSatPoint: newSatPoint,
NewPkScript: txOut.PkScript,
NewOutputValue: uint64(txOut.Value),
SentAsFee: sentAsFee,
TransferCount: 1, // count inscription as first transfer
}
entry := &ordinals.InscriptionEntry{
Id: flotsam.InscriptionId,
Number: inscriptionNumber,
SequenceNumber: sequenceNumber,
Cursed: origin.Cursed,
CursedForBRC20: origin.CursedForBRC20,
CreatedAt: blockHeader.Timestamp,
CreatedAtHeight: uint64(blockHeader.Height),
Inscription: origin.Inscription,
TransferCount: 1, // count inscription as first transfer
}
p.newInscriptionTransfers = append(p.newInscriptionTransfers, transfer)
p.newInscriptionEntries[entry.Id] = entry
p.newInscriptionEntryStates[entry.Id] = entry
return nil
}
panic("unreachable")
}
type brc20Inscription struct {
P string `json:"p"`
}
func isBRC20Inscription(inscription ordinals.Inscription) bool {
if inscription.ContentType != "application/json" && inscription.ContentType != "text/plain" {
return false
}
// attempt to parse content as json
if inscription.Content == nil {
return false
}
var parsed brc20Inscription
if err := json.Unmarshal(inscription.Content, &parsed); err != nil {
return false
}
if parsed.P != "brc-20" {
return false
}
return true
}
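The JSON check above can be exercised in isolation. This sketch mirrors the `p == "brc-20"` pre-filter of `isBRC20Inscription` on a plain byte slice (the content-type check is omitted; `looksLikeBRC20` is an illustrative name, not from the codebase):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// looksLikeBRC20 reports whether raw content is JSON carrying the
// BRC-20 protocol tag "p": "brc-20".
func looksLikeBRC20(content []byte) bool {
	if content == nil {
		return false
	}
	var payload struct {
		P string `json:"p"`
	}
	if err := json.Unmarshal(content, &payload); err != nil {
		return false
	}
	return payload.P == "brc-20"
}

func main() {
	fmt.Println(looksLikeBRC20([]byte(`{"p":"brc-20","op":"mint","tick":"ordi","amt":"1000"}`))) // true
	fmt.Println(looksLikeBRC20([]byte(`{"p":"sns"}`)))                                           // false
	fmt.Println(looksLikeBRC20(nil))                                                             // false
}
```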
func (p *Processor) getOutPointValues(ctx context.Context, outPoints []wire.OutPoint) (map[wire.OutPoint]uint64, error) {
// try to get from cache if exists
cacheValues := p.outPointValueCache.MGet(outPoints)
result := make(map[wire.OutPoint]uint64)
outPointsToFetch := make([]wire.OutPoint, 0)
for i, outPoint := range outPoints {
if cacheValues[i] != 0 {
result[outPoint] = cacheValues[i]
} else {
outPointsToFetch = append(outPointsToFetch, outPoint)
}
}
eg, ectx := errgroup.WithContext(ctx)
txHashes := make(map[chainhash.Hash]struct{})
for _, outPoint := range outPointsToFetch {
txHashes[outPoint.Hash] = struct{}{}
}
txOutsByHash := make(map[chainhash.Hash][]*types.TxOut)
var mutex sync.Mutex
for txHash := range txHashes {
txHash := txHash
eg.Go(func() error {
txOuts, err := p.btcClient.GetTransactionOutputs(ectx, txHash)
if err != nil {
return errors.Wrap(err, "failed to get transaction outputs")
}
// update cache
mutex.Lock()
defer mutex.Unlock()
txOutsByHash[txHash] = txOuts
for i, txOut := range txOuts {
p.outPointValueCache.Add(wire.OutPoint{Hash: txHash, Index: uint32(i)}, uint64(txOut.Value))
}
return nil
})
}
if err := eg.Wait(); err != nil {
return nil, errors.WithStack(err)
}
for i := range outPoints {
if result[outPoints[i]] == 0 {
result[outPoints[i]] = uint64(txOutsByHash[outPoints[i].Hash][outPoints[i].Index].Value)
}
}
return result, nil
}
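The fan-out above (one goroutine per tx hash, a mutex guarding the shared map, first error aborting the batch) is a standard concurrent-fetch shape. A stdlib-only sketch of the same pattern with a stubbed fetcher, where `fetchAll` and `fetch` are illustrative stand-ins for the errgroup loop and `btcClient.GetTransactionOutputs`:

```go
package main

import (
	"fmt"
	"sync"
)

// fetchAll fans out one goroutine per key and collects results under a
// mutex, returning the first error seen, mirroring the indexer's shape.
func fetchAll(keys []string, fetch func(string) (int, error)) (map[string]int, error) {
	var (
		mu       sync.Mutex
		wg       sync.WaitGroup
		firstErr error
		out      = make(map[string]int, len(keys))
	)
	for _, k := range keys {
		k := k // capture loop variable (pre-Go 1.22 semantics)
		wg.Add(1)
		go func() {
			defer wg.Done()
			v, err := fetch(k)
			mu.Lock()
			defer mu.Unlock()
			if err != nil {
				if firstErr == nil {
					firstErr = err
				}
				return
			}
			out[k] = v
		}()
	}
	wg.Wait()
	if firstErr != nil {
		return nil, firstErr
	}
	return out, nil
}

func main() {
	got, err := fetchAll([]string{"a", "bb", "ccc"}, func(k string) (int, error) {
		return len(k), nil // stub fetcher
	})
	fmt.Println(got["ccc"], err) // 3 <nil>
}
```

The production code uses `golang.org/x/sync/errgroup` instead, which also cancels the shared context on first error.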
func (p *Processor) getInscriptionTransfersInOutPoints(ctx context.Context, outPoints []wire.OutPoint) (map[wire.OutPoint]map[ordinals.SatPoint][]*entity.InscriptionTransfer, error) {
// try to get from flush buffer if exists
result := make(map[wire.OutPoint]map[ordinals.SatPoint][]*entity.InscriptionTransfer)
outPointsToFetch := make([]wire.OutPoint, 0)
for _, outPoint := range outPoints {
var found bool
for _, transfer := range p.newInscriptionTransfers {
if transfer.NewSatPoint.OutPoint == outPoint {
found = true
if _, ok := result[outPoint]; !ok {
result[outPoint] = make(map[ordinals.SatPoint][]*entity.InscriptionTransfer)
}
result[outPoint][transfer.NewSatPoint] = append(result[outPoint][transfer.NewSatPoint], transfer)
break
}
}
if !found {
outPointsToFetch = append(outPointsToFetch, outPoint)
}
}
transfers, err := p.brc20Dg.GetInscriptionTransfersInOutPoints(ctx, outPointsToFetch)
if err != nil {
return nil, errors.Wrap(err, "failed to get inscription transfers in outpoints")
}
for satPoint, transferList := range transfers {
if _, ok := result[satPoint.OutPoint]; !ok {
result[satPoint.OutPoint] = make(map[ordinals.SatPoint][]*entity.InscriptionTransfer)
}
result[satPoint.OutPoint][satPoint] = append(result[satPoint.OutPoint][satPoint], transferList...)
}
return result, nil
}
func (p *Processor) getInscriptionEntryById(ctx context.Context, id ordinals.InscriptionId) (*ordinals.InscriptionEntry, error) {
inscriptions, err := p.getInscriptionEntriesByIds(ctx, []ordinals.InscriptionId{id})
if err != nil {
return nil, errors.Wrap(err, "failed to get inscription entry by id")
}
inscription, ok := inscriptions[id]
if !ok {
return nil, errors.Wrap(errs.NotFound, "inscription not found")
}
return inscription, nil
}
func (p *Processor) getInscriptionEntriesByIds(ctx context.Context, ids []ordinals.InscriptionId) (map[ordinals.InscriptionId]*ordinals.InscriptionEntry, error) {
// try to get from cache if exists
result := make(map[ordinals.InscriptionId]*ordinals.InscriptionEntry)
idsToFetch := make([]ordinals.InscriptionId, 0)
for _, id := range ids {
if inscriptionEntry, ok := p.newInscriptionEntryStates[id]; ok {
result[id] = inscriptionEntry
} else {
idsToFetch = append(idsToFetch, id)
}
}
if len(idsToFetch) > 0 {
inscriptions, err := p.brc20Dg.GetInscriptionEntriesByIds(ctx, idsToFetch)
if err != nil {
return nil, errors.Wrap(err, "failed to get inscription entries by ids")
}
for id, inscription := range inscriptions {
result[id] = inscription
}
}
return result, nil
}
func (p *Processor) getInscriptionNumbersByIds(ctx context.Context, ids []ordinals.InscriptionId) (map[ordinals.InscriptionId]int64, error) {
// try to get from cache if exists
result := make(map[ordinals.InscriptionId]int64)
idsToFetch := make([]ordinals.InscriptionId, 0)
for _, id := range ids {
if entry, ok := p.newInscriptionEntryStates[id]; ok {
result[id] = int64(entry.Number)
} else {
idsToFetch = append(idsToFetch, id)
}
}
if len(idsToFetch) > 0 {
inscriptions, err := p.brc20Dg.GetInscriptionNumbersByIds(ctx, idsToFetch)
if err != nil {
return nil, errors.Wrap(err, "failed to get inscription numbers by ids")
}
for id, number := range inscriptions {
result[id] = number
}
}
return result, nil
}
func (p *Processor) getInscriptionParentsByIds(ctx context.Context, ids []ordinals.InscriptionId) (map[ordinals.InscriptionId]ordinals.InscriptionId, error) {
// try to get from cache if exists
result := make(map[ordinals.InscriptionId]ordinals.InscriptionId)
idsToFetch := make([]ordinals.InscriptionId, 0)
for _, id := range ids {
if entry, ok := p.newInscriptionEntryStates[id]; ok {
if entry.Inscription.Parent != nil {
result[id] = *entry.Inscription.Parent
}
} else {
idsToFetch = append(idsToFetch, id)
}
}
if len(idsToFetch) > 0 {
inscriptions, err := p.brc20Dg.GetInscriptionParentsByIds(ctx, idsToFetch)
if err != nil {
return nil, errors.Wrap(err, "failed to get inscription parents by ids")
}
for id, parent := range inscriptions {
result[id] = parent
}
}
return result, nil
}
func (p *Processor) getBlockSubsidy(blockHeight uint64) uint64 {
return uint64(blockchain.CalcBlockSubsidy(int32(blockHeight), p.network.ChainParams()))
}
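`getBlockSubsidy` delegates to btcd's `blockchain.CalcBlockSubsidy`, whose underlying rule is a halving of the 50 BTC coinbase subsidy every 210,000 blocks on mainnet. Written directly (mainnet parameters only; testnets use the same interval but regtest differs):

```go
package main

import "fmt"

const (
	initialSubsidy  = 50 * 100_000_000 // 50 BTC in satoshis
	halvingInterval = 210_000          // mainnet SubsidyReductionInterval
)

// blockSubsidy returns the mainnet coinbase subsidy at a given height,
// halving every 210,000 blocks.
func blockSubsidy(height uint64) uint64 {
	halvings := height / halvingInterval
	if halvings >= 64 { // would shift to zero
		return 0
	}
	return uint64(initialSubsidy) >> halvings
}

func main() {
	fmt.Println(blockSubsidy(0))      // 5000000000
	fmt.Println(blockSubsidy(210000)) // 2500000000
	fmt.Println(blockSubsidy(840000)) // 312500000
}
```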


@@ -1,135 +0,0 @@
package brc20
import (
"context"
"slices"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/core/types"
"github.com/gaze-network/indexer-network/modules/brc20/internal/entity"
"github.com/gaze-network/indexer-network/modules/brc20/internal/ordinals"
"github.com/gaze-network/indexer-network/pkg/logger"
"github.com/gaze-network/indexer-network/pkg/logger/slogx"
"github.com/samber/lo"
)
// Process implements indexer.Processor.
func (p *Processor) Process(ctx context.Context, blocks []*types.Block) error {
for _, block := range blocks {
ctx = logger.WithContext(ctx, slogx.Uint64("height", uint64(block.Header.Height)))
logger.DebugContext(ctx, "Processing new block")
p.blockReward = p.getBlockSubsidy(uint64(block.Header.Height))
p.flotsamsSentAsFee = make([]*entity.Flotsam, 0)
// put coinbase tx (first tx) at the end of block
transactions := append(block.Transactions[1:], block.Transactions[0])
for _, tx := range transactions {
if err := p.processInscriptionTx(ctx, tx, block.Header); err != nil {
return errors.Wrap(err, "failed to process tx")
}
}
// sort transfers by tx index, output index, output sat offset
// NOTE: ord indexes inscription transfers spent as fee at the end of the block, but brc20 indexes them as soon as they are sent
slices.SortFunc(p.newInscriptionTransfers, func(t1, t2 *entity.InscriptionTransfer) int {
if t1.TxIndex != t2.TxIndex {
return int(t1.TxIndex) - int(t2.TxIndex)
}
if t1.SentAsFee != t2.SentAsFee {
// transfers sent as fee should be ordered after non-fees
if t1.SentAsFee {
return 1
}
return -1
}
if t1.NewSatPoint.OutPoint.Index != t2.NewSatPoint.OutPoint.Index {
return int(t1.NewSatPoint.OutPoint.Index) - int(t2.NewSatPoint.OutPoint.Index)
}
return int(t1.NewSatPoint.Offset) - int(t2.NewSatPoint.Offset)
})
if err := p.processBRC20States(ctx, p.newInscriptionTransfers, block.Header); err != nil {
return errors.Wrap(err, "failed to process brc20 states")
}
if err := p.flushBlock(ctx, block.Header); err != nil {
return errors.Wrap(err, "failed to flush block")
}
logger.DebugContext(ctx, "Inserted new block")
}
return nil
}
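The multi-key sort above (tx index first, fee-spends after non-fees within a tx, then output index, then sat offset) is easier to audit when each key is an explicit `cmp.Compare` step. A sketch with a pared-down stand-in for `entity.InscriptionTransfer`:

```go
package main

import (
	"cmp"
	"fmt"
	"slices"
)

type transfer struct {
	TxIndex   uint32
	SentAsFee bool
	OutIndex  uint32
	Offset    uint64
}

// compareTransfers orders by tx index, puts fee-spends after non-fees
// within a tx, then falls through to output index and sat offset.
func compareTransfers(a, b transfer) int {
	if c := cmp.Compare(a.TxIndex, b.TxIndex); c != 0 {
		return c
	}
	if a.SentAsFee != b.SentAsFee {
		if a.SentAsFee {
			return 1
		}
		return -1
	}
	if c := cmp.Compare(a.OutIndex, b.OutIndex); c != 0 {
		return c
	}
	return cmp.Compare(a.Offset, b.Offset)
}

func main() {
	ts := []transfer{
		{TxIndex: 2, SentAsFee: true},
		{TxIndex: 2, OutIndex: 1},
		{TxIndex: 1, Offset: 7},
		{TxIndex: 1, Offset: 3},
	}
	slices.SortFunc(ts, compareTransfers)
	for _, t := range ts {
		fmt.Println(t.TxIndex, t.SentAsFee, t.OutIndex, t.Offset)
	}
}
```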
func (p *Processor) flushBlock(ctx context.Context, blockHeader types.BlockHeader) error {
brc20DgTx, err := p.brc20Dg.BeginBRC20Tx(ctx)
if err != nil {
return errors.Wrap(err, "failed to begin transaction")
}
defer func() {
if err := brc20DgTx.Rollback(ctx); err != nil {
logger.WarnContext(ctx, "failed to rollback transaction",
slogx.Error(err),
slogx.String("event", "rollback_brc20_insertion"),
)
}
}()
blockHeight := uint64(blockHeader.Height)
// CreateIndexedBlock must be performed before other flush methods to correctly calculate event hash
// TODO: calculate event hash
if err := brc20DgTx.CreateIndexedBlock(ctx, &entity.IndexedBlock{
Height: blockHeight,
Hash: blockHeader.Hash,
EventHash: chainhash.Hash{},
CumulativeEventHash: chainhash.Hash{},
}); err != nil {
return errors.Wrap(err, "failed to create indexed block")
}
// flush new inscription entries
{
newInscriptionEntries := lo.Values(p.newInscriptionEntries)
if err := brc20DgTx.CreateInscriptionEntries(ctx, blockHeight, newInscriptionEntries); err != nil {
return errors.Wrap(err, "failed to create inscription entries")
}
p.newInscriptionEntries = make(map[ordinals.InscriptionId]*ordinals.InscriptionEntry)
}
// flush new inscription entry states
{
newInscriptionEntryStates := lo.Values(p.newInscriptionEntryStates)
if err := brc20DgTx.CreateInscriptionEntryStates(ctx, blockHeight, newInscriptionEntryStates); err != nil {
return errors.Wrap(err, "failed to create inscription entry states")
}
p.newInscriptionEntryStates = make(map[ordinals.InscriptionId]*ordinals.InscriptionEntry)
}
// flush new inscription transfers
{
if err := brc20DgTx.CreateInscriptionTransfers(ctx, p.newInscriptionTransfers); err != nil {
return errors.Wrap(err, "failed to create inscription transfers")
}
p.newInscriptionTransfers = make([]*entity.InscriptionTransfer, 0)
}
// flush processor stats
{
stats := &entity.ProcessorStats{
BlockHeight: blockHeight,
CursedInscriptionCount: p.cursedInscriptionCount,
BlessedInscriptionCount: p.blessedInscriptionCount,
LostSats: p.lostSats,
}
if err := brc20DgTx.CreateProcessorStats(ctx, stats); err != nil {
return errors.Wrap(err, "failed to create processor stats")
}
}
if err := brc20DgTx.Commit(ctx); err != nil {
return errors.Wrap(err, "failed to commit transaction")
}
return nil
}


@@ -0,0 +1,99 @@
package httphandler
import (
"fmt"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/common/errs"
"github.com/gaze-network/indexer-network/modules/nodesale/datagateway"
"github.com/gaze-network/indexer-network/modules/nodesale/protobuf"
"github.com/gofiber/fiber/v2"
"google.golang.org/protobuf/encoding/protojson"
)
type deployRequest struct {
DeployID string `params:"deployId"`
}
type tierResponse struct {
PriceSat uint32 `json:"priceSat"`
Limit uint32 `json:"limit"`
MaxPerAddress uint32 `json:"maxPerAddress"`
Sold int64 `json:"sold"`
}
type deployResponse struct {
Id string `json:"id"`
Name string `json:"name"`
StartsAt int64 `json:"startsAt"`
EndsAt int64 `json:"endsAt"`
Tiers []tierResponse `json:"tiers"`
SellerPublicKey string `json:"sellerPublicKey"`
MaxPerAddress uint32 `json:"maxPerAddress"`
DeployTxHash string `json:"deployTxHash"`
}
func (h *handler) deployHandler(ctx *fiber.Ctx) error {
var request deployRequest
err := ctx.ParamsParser(&request)
if err != nil {
return errors.Wrap(err, "cannot parse param")
}
var blockHeight uint64
var txIndex uint32
count, err := fmt.Sscanf(request.DeployID, "%d-%d", &blockHeight, &txIndex)
if count != 2 || err != nil {
return errs.NewPublicError("Invalid deploy ID")
}
deploys, err := h.nodeSaleDg.GetNodeSale(ctx.UserContext(), datagateway.GetNodeSaleParams{
BlockHeight: blockHeight,
TxIndex: txIndex,
})
if err != nil {
return errors.Wrap(err, "Cannot get NodeSale from db")
}
if len(deploys) < 1 {
return errs.NewPublicError("NodeSale not found")
}
deploy := deploys[0]
nodeCount, err := h.nodeSaleDg.GetNodeCountByTierIndex(ctx.UserContext(), datagateway.GetNodeCountByTierIndexParams{
SaleBlock: deploy.BlockHeight,
SaleTxIndex: deploy.TxIndex,
FromTier: 0,
ToTier: uint32(len(deploy.Tiers) - 1),
})
if err != nil {
return errors.Wrap(err, "Cannot get node count from db")
}
tiers := make([]protobuf.Tier, len(deploy.Tiers))
tierResponses := make([]tierResponse, len(deploy.Tiers))
for i, tierJson := range deploy.Tiers {
tier := &tiers[i]
err := protojson.Unmarshal(tierJson, tier)
if err != nil {
return errors.Wrap(err, "Failed to decode tiers json")
}
tierResponses[i].Limit = tiers[i].Limit
tierResponses[i].MaxPerAddress = tiers[i].MaxPerAddress
tierResponses[i].PriceSat = tiers[i].PriceSat
tierResponses[i].Sold = nodeCount[i].Count
}
err = ctx.JSON(&deployResponse{
Id: request.DeployID,
Name: deploy.Name,
StartsAt: deploy.StartsAt.UTC().Unix(),
EndsAt: deploy.EndsAt.UTC().Unix(),
Tiers: tierResponses,
SellerPublicKey: deploy.SellerPublicKey,
MaxPerAddress: deploy.MaxPerAddress,
DeployTxHash: deploy.DeployTxHash,
})
if err != nil {
return errors.Wrap(err, "failed to encode JSON response")
}
return nil
}
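A note on the deploy-ID parse above: `fmt.Sscanf` with `"%d-%d"` does not require the whole input to be consumed, so an ID like `"12-3junk"` still yields count 2 with no error. A stricter parser splits and converts explicitly; `parseDeployID` is an illustrative helper, not part of the handlers above:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseDeployID parses "blockHeight-txIndex" strictly: both parts must
// be bare decimal integers with nothing left over.
func parseDeployID(id string) (uint64, uint32, error) {
	heightStr, indexStr, ok := strings.Cut(id, "-")
	if !ok {
		return 0, 0, fmt.Errorf("invalid deploy ID %q", id)
	}
	height, err := strconv.ParseUint(heightStr, 10, 64)
	if err != nil {
		return 0, 0, fmt.Errorf("invalid block height in %q", id)
	}
	index, err := strconv.ParseUint(indexStr, 10, 32)
	if err != nil {
		return 0, 0, fmt.Errorf("invalid tx index in %q", id)
	}
	return height, uint32(index), nil
}

func main() {
	h, i, err := parseDeployID("840000-5")
	fmt.Println(h, i, err) // 840000 5 <nil>
	_, _, err = parseDeployID("840000-5x")
	fmt.Println(err != nil) // true
}
```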


@@ -0,0 +1,56 @@
package httphandler
import (
"encoding/json"
"time"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/modules/nodesale/protobuf"
"github.com/gofiber/fiber/v2"
)
type eventRequest struct {
WalletAddress string `query:"walletAddress"`
}
type eventResponse struct {
TxHash string `json:"txHash"`
BlockHeight int64 `json:"blockHeight"`
TxIndex int32 `json:"txIndex"`
WalletAddress string `json:"walletAddress"`
Action string `json:"action"`
ParsedMessage json.RawMessage `json:"parsedMessage"`
BlockTimestamp time.Time `json:"blockTimestamp"`
BlockHash string `json:"blockHash"`
}
func (h *handler) eventsHandler(ctx *fiber.Ctx) error {
var request eventRequest
err := ctx.QueryParser(&request)
if err != nil {
return errors.Wrap(err, "cannot parse query")
}
events, err := h.nodeSaleDg.GetEventsByWallet(ctx.UserContext(), request.WalletAddress)
if err != nil {
return errors.Wrap(err, "cannot get events from db")
}
responses := make([]eventResponse, len(events))
for i, event := range events {
responses[i].TxHash = event.TxHash
responses[i].BlockHeight = event.BlockHeight
responses[i].TxIndex = event.TxIndex
responses[i].WalletAddress = event.WalletAddress
responses[i].Action = protobuf.Action_name[event.Action]
responses[i].ParsedMessage = event.ParsedMessage
responses[i].BlockTimestamp = event.BlockTimestamp
responses[i].BlockHash = event.BlockHash
}
err = ctx.JSON(responses)
if err != nil {
return errors.Wrap(err, "failed to encode JSON response")
}
return nil
}


@@ -0,0 +1,15 @@
package httphandler
import (
"github.com/gaze-network/indexer-network/modules/nodesale/datagateway"
)
type handler struct {
nodeSaleDg datagateway.NodeSaleDataGateway
}
func New(datagateway datagateway.NodeSaleDataGateway) *handler {
h := handler{}
h.nodeSaleDg = datagateway
return &h
}


@@ -0,0 +1,26 @@
package httphandler
import (
"github.com/cockroachdb/errors"
"github.com/gofiber/fiber/v2"
)
type infoResponse struct {
IndexedBlockHeight int64 `json:"indexedBlockHeight"`
IndexedBlockHash string `json:"indexedBlockHash"`
}
func (h *handler) infoHandler(ctx *fiber.Ctx) error {
block, err := h.nodeSaleDg.GetLastProcessedBlock(ctx.UserContext())
if err != nil {
return errors.Wrap(err, "Cannot get last processed block")
}
err = ctx.JSON(infoResponse{
IndexedBlockHeight: block.BlockHeight,
IndexedBlockHash: block.BlockHash,
})
if err != nil {
return errors.Wrap(err, "failed to encode JSON response")
}
return nil
}


@@ -0,0 +1,82 @@
package httphandler
import (
"fmt"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/common/errs"
"github.com/gaze-network/indexer-network/modules/nodesale/datagateway"
"github.com/gaze-network/indexer-network/modules/nodesale/internal/entity"
"github.com/gofiber/fiber/v2"
)
type nodeRequest struct {
DeployId string `query:"deployId"`
OwnerPublicKey string `query:"ownerPublicKey"`
DelegateePublicKey string `query:"delegateePublicKey"`
}
type nodeResponse struct {
DeployId string `json:"deployId"`
NodeId uint32 `json:"nodeId"`
TierIndex int32 `json:"tierIndex"`
DelegatedTo string `json:"delegatedTo"`
OwnerPublicKey string `json:"ownerPublicKey"`
PurchaseTxHash string `json:"purchaseTxHash"`
DelegateTxHash string `json:"delegateTxHash"`
PurchaseBlockHeight int32 `json:"purchaseBlockHeight"`
}
func (h *handler) nodesHandler(ctx *fiber.Ctx) error {
var request nodeRequest
err := ctx.QueryParser(&request)
if err != nil {
return errors.Wrap(err, "cannot parse query")
}
ownerPublicKey := request.OwnerPublicKey
delegateePublicKey := request.DelegateePublicKey
var blockHeight int64
var txIndex int32
count, err := fmt.Sscanf(request.DeployId, "%d-%d", &blockHeight, &txIndex)
if count != 2 || err != nil {
return errs.NewPublicError("Invalid deploy ID")
}
var nodes []entity.Node
if ownerPublicKey == "" {
nodes, err = h.nodeSaleDg.GetNodesByDeployment(ctx.UserContext(), blockHeight, txIndex)
if err != nil {
return errors.Wrap(err, "Can't get nodes from db")
}
} else {
nodes, err = h.nodeSaleDg.GetNodesByPubkey(ctx.UserContext(), datagateway.GetNodesByPubkeyParams{
SaleBlock: blockHeight,
SaleTxIndex: txIndex,
OwnerPublicKey: ownerPublicKey,
DelegatedTo: delegateePublicKey,
})
if err != nil {
return errors.Wrap(err, "Can't get nodes from db")
}
}
responses := make([]nodeResponse, len(nodes))
for i, node := range nodes {
responses[i].DeployId = request.DeployId
responses[i].NodeId = node.NodeID
responses[i].TierIndex = node.TierIndex
responses[i].DelegatedTo = node.DelegatedTo
responses[i].OwnerPublicKey = node.OwnerPublicKey
responses[i].PurchaseTxHash = node.PurchaseTxHash
responses[i].DelegateTxHash = node.DelegateTxHash
responses[i].PurchaseBlockHeight = int32(blockHeight)
}
err = ctx.JSON(responses)
if err != nil {
return errors.Wrap(err, "failed to encode JSON response")
}
return nil
}


@@ -0,0 +1,16 @@
package httphandler
import (
"github.com/gofiber/fiber/v2"
)
func (h *handler) Mount(router fiber.Router) error {
r := router.Group("/nodesale/v1")
r.Get("/info", h.infoHandler)
r.Get("/deploy/:deployId", h.deployHandler)
r.Get("/nodes", h.nodesHandler)
r.Get("/events", h.eventsHandler)
return nil
}


@@ -0,0 +1,8 @@
package config
import "github.com/gaze-network/indexer-network/internal/postgres"
type Config struct {
Postgres postgres.Config `mapstructure:"postgres"`
LastBlockDefault int64 `mapstructure:"last_block_default"`
}


@@ -0,0 +1,9 @@
BEGIN;
DROP TABLE IF EXISTS nodes;
DROP TABLE IF EXISTS node_sales;
DROP TABLE IF EXISTS events;
DROP TABLE IF EXISTS blocks;
COMMIT;


@@ -0,0 +1,64 @@
BEGIN;
CREATE TABLE IF NOT EXISTS blocks (
"block_height" BIGINT NOT NULL,
"block_hash" TEXT NOT NULL,
"module" TEXT NOT NULL,
PRIMARY KEY("block_height", "block_hash")
);
CREATE TABLE IF NOT EXISTS events (
"tx_hash" TEXT NOT NULL PRIMARY KEY,
"block_height" BIGINT NOT NULL,
"tx_index" INTEGER NOT NULL,
"wallet_address" TEXT NOT NULL,
"valid" BOOLEAN NOT NULL,
"action" INTEGER NOT NULL,
"raw_message" BYTEA NOT NULL,
"parsed_message" JSONB NOT NULL DEFAULT '{}',
"block_timestamp" TIMESTAMP NOT NULL,
"block_hash" TEXT NOT NULL,
"metadata" JSONB NOT NULL DEFAULT '{}',
"reason" TEXT NOT NULL DEFAULT ''
);
INSERT INTO events("tx_hash", "block_height", "tx_index",
"wallet_address", "valid", "action",
"raw_message", "parsed_message", "block_timestamp",
"block_hash", "metadata")
VALUES ('', -1, -1,
'', false, -1,
'', '{}', NOW(),
'', '{}');
CREATE TABLE IF NOT EXISTS node_sales (
"block_height" BIGINT NOT NULL,
"tx_index" INTEGER NOT NULL,
"name" TEXT NOT NULL,
"starts_at" TIMESTAMP NOT NULL,
"ends_at" TIMESTAMP NOT NULL,
"tiers" JSONB[] NOT NULL,
"seller_public_key" TEXT NOT NULL,
"max_per_address" INTEGER NOT NULL,
"deploy_tx_hash" TEXT NOT NULL REFERENCES events(tx_hash) ON DELETE CASCADE,
"max_discount_percentage" INTEGER NOT NULL,
"seller_wallet" TEXT NOT NULL,
PRIMARY KEY ("block_height", "tx_index")
);
CREATE TABLE IF NOT EXISTS nodes (
"sale_block" BIGINT NOT NULL,
"sale_tx_index" INTEGER NOT NULL,
"node_id" INTEGER NOT NULL,
"tier_index" INTEGER NOT NULL,
"delegated_to" TEXT NOT NULL DEFAULT '',
"owner_public_key" TEXT NOT NULL,
"purchase_tx_hash" TEXT NOT NULL REFERENCES events(tx_hash) ON DELETE CASCADE,
"delegate_tx_hash" TEXT NOT NULL DEFAULT '' REFERENCES events(tx_hash) ON DELETE SET DEFAULT,
PRIMARY KEY("sale_block", "sale_tx_index", "node_id"),
FOREIGN KEY("sale_block", "sale_tx_index") REFERENCES node_sales("block_height", "tx_index")
);
COMMIT;


@@ -0,0 +1,15 @@
-- name: GetLastProcessedBlock :one
SELECT * FROM blocks ORDER BY block_height DESC LIMIT 1;
-- name: GetBlock :one
SELECT * FROM blocks
WHERE "block_height" = $1;
-- name: RemoveBlockFrom :execrows
DELETE FROM blocks
WHERE "block_height" >= @from_block;
-- name: CreateBlock :exec
INSERT INTO blocks ("block_height", "block_hash", "module")
VALUES ($1, $2, $3);


@@ -0,0 +1,14 @@
-- name: RemoveEventsFromBlock :execrows
DELETE FROM events
WHERE "block_height" >= @from_block;
-- name: CreateEvent :exec
INSERT INTO events ("tx_hash", "block_height", "tx_index", "wallet_address", "valid", "action",
"raw_message", "parsed_message", "block_timestamp", "block_hash", "metadata",
"reason")
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12);
-- name: GetEventsByWallet :many
SELECT *
FROM events
WHERE wallet_address = $1;


@@ -0,0 +1,57 @@
-- name: ClearDelegate :execrows
UPDATE nodes
SET "delegated_to" = ''
WHERE "delegate_tx_hash" = '';
-- name: SetDelegates :execrows
UPDATE nodes
SET delegated_to = @delegatee, delegate_tx_hash = $3
WHERE sale_block = $1 AND
sale_tx_index = $2 AND
node_id = ANY (@node_ids::int[]);
-- name: GetNodesByIds :many
SELECT *
FROM nodes
WHERE sale_block = $1 AND
sale_tx_index = $2 AND
node_id = ANY (@node_ids::int[]);
-- name: GetNodesByOwner :many
SELECT *
FROM nodes
WHERE sale_block = $1 AND
sale_tx_index = $2 AND
owner_public_key = $3
ORDER BY tier_index;
-- name: GetNodesByPubkey :many
SELECT nodes.*
FROM nodes JOIN events ON nodes.purchase_tx_hash = events.tx_hash
WHERE sale_block = $1 AND
sale_tx_index = $2 AND
owner_public_key = $3 AND
delegated_to = $4;
-- name: CreateNode :exec
INSERT INTO nodes (sale_block, sale_tx_index, node_id, tier_index, delegated_to, owner_public_key, purchase_tx_hash, delegate_tx_hash)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8);
-- name: GetNodeCountByTierIndex :many
SELECT (tiers.tier_index)::int AS tier_index, count(nodes.tier_index)
FROM generate_series(@from_tier::int,@to_tier::int) AS tiers(tier_index)
LEFT JOIN
(SELECT *
FROM nodes
WHERE sale_block = $1 AND
sale_tx_index= $2)
AS nodes ON tiers.tier_index = nodes.tier_index
GROUP BY tiers.tier_index
ORDER BY tiers.tier_index;
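The `generate_series` LEFT JOIN in `GetNodeCountByTierIndex` exists so tiers with zero purchased nodes still appear with a count of 0. A minimal Go sketch of the same zero-filling (the `countByTier` helper is hypothetical, for illustration only):

```go
package main

import "fmt"

// countByTier zero-fills a count for every tier index in [fromTier, toTier],
// mirroring the generate_series LEFT JOIN in GetNodeCountByTierIndex.
func countByTier(nodeTiers []int, fromTier, toTier int) map[int]int {
	counts := make(map[int]int)
	for t := fromTier; t <= toTier; t++ {
		counts[t] = 0 // every tier appears, even with no purchased nodes
	}
	for _, t := range nodeTiers {
		if t >= fromTier && t <= toTier {
			counts[t]++
		}
	}
	return counts
}

func main() {
	// Tier 0 has no nodes but still shows up with count 0.
	fmt.Println(countByTier([]int{1, 1, 2}, 0, 2))
}
```

A plain `GROUP BY nodes.tier_index` would silently drop empty tiers; the series join makes the gaps explicit.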
-- name: GetNodesByDeployment :many
SELECT *
FROM nodes
WHERE sale_block = $1 AND
sale_tx_index = $2;


@@ -0,0 +1,9 @@
-- name: CreateNodeSale :exec
INSERT INTO node_sales ("block_height", "tx_index", "name", "starts_at", "ends_at", "tiers", "seller_public_key", "max_per_address", "deploy_tx_hash", "max_discount_percentage", "seller_wallet")
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11);
-- name: GetNodeSale :many
SELECT *
FROM node_sales
WHERE block_height = $1 AND
tx_index = $2;


@@ -0,0 +1,3 @@
-- name: ClearEvents :exec
DELETE FROM events
WHERE tx_hash <> '';

File diff suppressed because it is too large.


@@ -0,0 +1,77 @@
package datagateway
import (
"context"
"github.com/gaze-network/indexer-network/modules/nodesale/internal/entity"
)
type NodeSaleDataGateway interface {
BeginNodeSaleTx(ctx context.Context) (NodeSaleDataGatewayWithTx, error)
CreateBlock(ctx context.Context, arg entity.Block) error
GetBlock(ctx context.Context, blockHeight int64) (*entity.Block, error)
GetLastProcessedBlock(ctx context.Context) (*entity.Block, error)
RemoveBlockFrom(ctx context.Context, fromBlock int64) (int64, error)
RemoveEventsFromBlock(ctx context.Context, fromBlock int64) (int64, error)
ClearDelegate(ctx context.Context) (int64, error)
GetNodesByIds(ctx context.Context, arg GetNodesByIdsParams) ([]entity.Node, error)
CreateEvent(ctx context.Context, arg entity.NodeSaleEvent) error
SetDelegates(ctx context.Context, arg SetDelegatesParams) (int64, error)
CreateNodeSale(ctx context.Context, arg entity.NodeSale) error
GetNodeSale(ctx context.Context, arg GetNodeSaleParams) ([]entity.NodeSale, error)
GetNodesByOwner(ctx context.Context, arg GetNodesByOwnerParams) ([]entity.Node, error)
CreateNode(ctx context.Context, arg entity.Node) error
GetNodeCountByTierIndex(ctx context.Context, arg GetNodeCountByTierIndexParams) ([]GetNodeCountByTierIndexRow, error)
GetNodesByPubkey(ctx context.Context, arg GetNodesByPubkeyParams) ([]entity.Node, error)
GetNodesByDeployment(ctx context.Context, saleBlock int64, saleTxIndex int32) ([]entity.Node, error)
GetEventsByWallet(ctx context.Context, walletAddress string) ([]entity.NodeSaleEvent, error)
}
type NodeSaleDataGatewayWithTx interface {
NodeSaleDataGateway
Tx
}
type GetNodesByIdsParams struct {
SaleBlock uint64
SaleTxIndex uint32
NodeIds []uint32
}
type SetDelegatesParams struct {
SaleBlock uint64
SaleTxIndex int32
Delegatee string
DelegateTxHash string
NodeIds []uint32
}
type GetNodeSaleParams struct {
BlockHeight uint64
TxIndex uint32
}
type GetNodesByOwnerParams struct {
SaleBlock uint64
SaleTxIndex uint32
OwnerPublicKey string
}
type GetNodeCountByTierIndexParams struct {
SaleBlock uint64
SaleTxIndex uint32
FromTier uint32
ToTier uint32
}
type GetNodeCountByTierIndexRow struct {
TierIndex int32
Count int64
}
type GetNodesByPubkeyParams struct {
SaleBlock int64
SaleTxIndex int32
OwnerPublicKey string
DelegatedTo string
}


@@ -0,0 +1,61 @@
package nodesale
import (
"context"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/core/types"
"github.com/gaze-network/indexer-network/modules/nodesale/datagateway"
"github.com/gaze-network/indexer-network/modules/nodesale/internal/entity"
delegatevalidator "github.com/gaze-network/indexer-network/modules/nodesale/internal/validator/delegate"
)
func (p *Processor) ProcessDelegate(ctx context.Context, qtx datagateway.NodeSaleDataGatewayWithTx, block *types.Block, event NodeSaleEvent) error {
validator := delegatevalidator.New()
delegate := event.EventMessage.Delegate
_, nodes, err := validator.NodesExist(ctx, qtx, delegate.DeployID, delegate.NodeIDs)
if err != nil {
return errors.Wrap(err, "Cannot query")
}
for _, node := range nodes {
if !validator.EqualXonlyPublicKey(node.OwnerPublicKey, event.TxPubkey) {
break
}
}
err = qtx.CreateEvent(ctx, entity.NodeSaleEvent{
TxHash: event.Transaction.TxHash.String(),
TxIndex: int32(event.Transaction.Index),
Action: int32(event.EventMessage.Action),
RawMessage: event.RawData,
ParsedMessage: event.EventJson,
BlockTimestamp: block.Header.Timestamp,
BlockHash: event.Transaction.BlockHash.String(),
BlockHeight: event.Transaction.BlockHeight,
Valid: validator.Valid,
WalletAddress: p.PubkeyToPkHashAddress(event.TxPubkey).EncodeAddress(),
Metadata: nil,
Reason: validator.Reason,
})
if err != nil {
return errors.Wrap(err, "Failed to insert event")
}
if validator.Valid {
_, err = qtx.SetDelegates(ctx, datagateway.SetDelegatesParams{
SaleBlock: delegate.DeployID.Block,
SaleTxIndex: int32(delegate.DeployID.TxIndex),
Delegatee: delegate.DelegateePublicKey,
DelegateTxHash: event.Transaction.TxHash.String(),
NodeIds: delegate.NodeIDs,
})
if err != nil {
return errors.Wrap(err, "Failed to set delegate")
}
}
return nil
}


@@ -0,0 +1,84 @@
package nodesale
import (
"context"
"encoding/hex"
"testing"
"github.com/btcsuite/btcd/btcec/v2"
"github.com/gaze-network/indexer-network/common"
"github.com/gaze-network/indexer-network/modules/nodesale/datagateway"
"github.com/gaze-network/indexer-network/modules/nodesale/datagateway/mocks"
"github.com/gaze-network/indexer-network/modules/nodesale/internal/entity"
"github.com/gaze-network/indexer-network/modules/nodesale/protobuf"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
func TestDelegate(t *testing.T) {
ctx := context.Background()
mockDgTx := mocks.NewNodeSaleDataGatewayWithTx(t)
p := NewProcessor(mockDgTx, nil, common.NetworkMainnet, nil, 0)
buyerPrivateKey, _ := btcec.NewPrivateKey()
buyerPubkeyHex := hex.EncodeToString(buyerPrivateKey.PubKey().SerializeCompressed())
delegateePrivateKey, _ := btcec.NewPrivateKey()
delegateePubkeyHex := hex.EncodeToString(delegateePrivateKey.PubKey().SerializeCompressed())
delegateMessage := &protobuf.NodeSaleEvent{
Action: protobuf.Action_ACTION_DELEGATE,
Delegate: &protobuf.ActionDelegate{
DelegateePublicKey: delegateePubkeyHex,
NodeIDs: []uint32{9, 10},
DeployID: &protobuf.ActionID{
Block: uint64(testBlockHeight) - 2,
TxIndex: uint32(testTxIndex) - 2,
},
},
}
event, block := assembleTestEvent(buyerPrivateKey, "131313131313", "131313131313", 0, 0, delegateMessage)
mockDgTx.EXPECT().CreateEvent(mock.Anything, mock.MatchedBy(func(event entity.NodeSaleEvent) bool {
return event.Valid == true
})).Return(nil)
mockDgTx.EXPECT().GetNodesByIds(mock.Anything, datagateway.GetNodesByIdsParams{
SaleBlock: delegateMessage.Delegate.DeployID.Block,
SaleTxIndex: delegateMessage.Delegate.DeployID.TxIndex,
NodeIds: []uint32{9, 10},
}).Return([]entity.Node{
{
SaleBlock: delegateMessage.Delegate.DeployID.Block,
SaleTxIndex: delegateMessage.Delegate.DeployID.TxIndex,
NodeID: 9,
TierIndex: 1,
DelegatedTo: "",
OwnerPublicKey: buyerPubkeyHex,
PurchaseTxHash: mock.Anything,
DelegateTxHash: "",
},
{
SaleBlock: delegateMessage.Delegate.DeployID.Block,
SaleTxIndex: delegateMessage.Delegate.DeployID.TxIndex,
NodeID: 10,
TierIndex: 2,
DelegatedTo: "",
OwnerPublicKey: buyerPubkeyHex,
PurchaseTxHash: mock.Anything,
DelegateTxHash: "",
},
}, nil)
mockDgTx.EXPECT().SetDelegates(mock.Anything, datagateway.SetDelegatesParams{
SaleBlock: delegateMessage.Delegate.DeployID.Block,
SaleTxIndex: int32(delegateMessage.Delegate.DeployID.TxIndex),
Delegatee: delegateMessage.Delegate.DelegateePublicKey,
DelegateTxHash: event.Transaction.TxHash.String(),
NodeIds: delegateMessage.Delegate.NodeIDs,
}).Return(2, nil)
err := p.ProcessDelegate(ctx, mockDgTx, block, event)
require.NoError(t, err)
}


@@ -0,0 +1,67 @@
package nodesale
import (
"context"
"time"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/core/types"
"github.com/gaze-network/indexer-network/modules/nodesale/datagateway"
"github.com/gaze-network/indexer-network/modules/nodesale/internal/entity"
"github.com/gaze-network/indexer-network/modules/nodesale/internal/validator"
"google.golang.org/protobuf/encoding/protojson"
)
func (p *Processor) ProcessDeploy(ctx context.Context, qtx datagateway.NodeSaleDataGatewayWithTx, block *types.Block, event NodeSaleEvent) error {
deploy := event.EventMessage.Deploy
validator := validator.New()
validator.EqualXonlyPublicKey(deploy.SellerPublicKey, event.TxPubkey)
err := qtx.CreateEvent(ctx, entity.NodeSaleEvent{
TxHash: event.Transaction.TxHash.String(),
TxIndex: int32(event.Transaction.Index),
Action: int32(event.EventMessage.Action),
RawMessage: event.RawData,
ParsedMessage: event.EventJson,
BlockTimestamp: block.Header.Timestamp,
BlockHash: event.Transaction.BlockHash.String(),
BlockHeight: event.Transaction.BlockHeight,
Valid: validator.Valid,
WalletAddress: p.PubkeyToPkHashAddress(event.TxPubkey).EncodeAddress(),
Metadata: nil,
Reason: validator.Reason,
})
if err != nil {
return errors.Wrap(err, "Failed to insert event")
}
if validator.Valid {
tiers := make([][]byte, len(deploy.Tiers))
for i, tier := range deploy.Tiers {
tierJson, err := protojson.Marshal(tier)
if err != nil {
return errors.Wrap(err, "Failed to parse tiers to json")
}
tiers[i] = tierJson
}
err = qtx.CreateNodeSale(ctx, entity.NodeSale{
BlockHeight: uint64(event.Transaction.BlockHeight),
TxIndex: event.Transaction.Index,
Name: deploy.Name,
StartsAt: time.Unix(int64(deploy.StartsAt), 0),
EndsAt: time.Unix(int64(deploy.EndsAt), 0),
Tiers: tiers,
SellerPublicKey: deploy.SellerPublicKey,
MaxPerAddress: deploy.MaxPerAddress,
DeployTxHash: event.Transaction.TxHash.String(),
MaxDiscountPercentage: int32(deploy.MaxDiscountPercentage),
SellerWallet: deploy.SellerWallet,
})
if err != nil {
return errors.Wrap(err, "Failed to insert NodeSale")
}
}
return nil
}


@@ -0,0 +1,139 @@
package nodesale
import (
"context"
"encoding/hex"
"testing"
"time"
"github.com/btcsuite/btcd/btcec/v2"
"github.com/gaze-network/indexer-network/common"
"github.com/gaze-network/indexer-network/modules/nodesale/datagateway/mocks"
"github.com/gaze-network/indexer-network/modules/nodesale/internal/entity"
"github.com/gaze-network/indexer-network/modules/nodesale/protobuf"
"github.com/samber/lo"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"google.golang.org/protobuf/encoding/protojson"
)
func TestDeployInvalid(t *testing.T) {
ctx := context.Background()
mockDgTx := mocks.NewNodeSaleDataGatewayWithTx(t)
p := NewProcessor(mockDgTx, nil, common.NetworkMainnet, nil, 0)
prvKey, err := btcec.NewPrivateKey()
require.NoError(t, err)
strangerKey, err := btcec.NewPrivateKey()
require.NoError(t, err)
strangerPubkeyHex := hex.EncodeToString(strangerKey.PubKey().SerializeCompressed())
sellerWallet := p.PubkeyToPkHashAddress(prvKey.PubKey())
message := &protobuf.NodeSaleEvent{
Action: protobuf.Action_ACTION_DEPLOY,
Deploy: &protobuf.ActionDeploy{
Name: t.Name(),
StartsAt: 100,
EndsAt: 200,
Tiers: []*protobuf.Tier{
{
PriceSat: 100,
Limit: 5,
MaxPerAddress: 100,
},
{
PriceSat: 200,
Limit: 5,
MaxPerAddress: 100,
},
},
SellerPublicKey: strangerPubkeyHex,
MaxPerAddress: 100,
MaxDiscountPercentage: 50,
SellerWallet: sellerWallet.EncodeAddress(),
},
}
event, block := assembleTestEvent(prvKey, "0101010101", "0101010101", 0, 0, message)
mockDgTx.EXPECT().CreateEvent(mock.Anything, mock.MatchedBy(func(event entity.NodeSaleEvent) bool {
return event.Valid == false
})).Return(nil)
err = p.ProcessDeploy(ctx, mockDgTx, block, event)
require.NoError(t, err)
mockDgTx.AssertNotCalled(t, "CreateNodeSale")
}
func TestDeployValid(t *testing.T) {
ctx := context.Background()
mockDgTx := mocks.NewNodeSaleDataGatewayWithTx(t)
p := NewProcessor(mockDgTx, nil, common.NetworkMainnet, nil, 0)
privateKey, err := btcec.NewPrivateKey()
require.NoError(t, err)
pubkeyHex := hex.EncodeToString(privateKey.PubKey().SerializeCompressed())
sellerWallet := p.PubkeyToPkHashAddress(privateKey.PubKey())
startAt := time.Now().Add(time.Hour * -1)
endAt := time.Now().Add(time.Hour * 1)
message := &protobuf.NodeSaleEvent{
Action: protobuf.Action_ACTION_DEPLOY,
Deploy: &protobuf.ActionDeploy{
Name: t.Name(),
StartsAt: uint32(startAt.UTC().Unix()),
EndsAt: uint32(endAt.UTC().Unix()),
Tiers: []*protobuf.Tier{
{
PriceSat: 100,
Limit: 5,
MaxPerAddress: 100,
},
{
PriceSat: 200,
Limit: 5,
MaxPerAddress: 100,
},
},
SellerPublicKey: pubkeyHex,
MaxPerAddress: 100,
MaxDiscountPercentage: 50,
SellerWallet: sellerWallet.EncodeAddress(),
},
}
event, block := assembleTestEvent(privateKey, "0202020202", "0202020202", 0, 0, message)
mockDgTx.EXPECT().CreateEvent(mock.Anything, mock.MatchedBy(func(event entity.NodeSaleEvent) bool {
return event.Valid == true
})).Return(nil)
tiers := lo.Map(message.Deploy.Tiers, func(tier *protobuf.Tier, _ int) []byte {
tierJson, err := protojson.Marshal(tier)
require.NoError(t, err)
return tierJson
})
mockDgTx.EXPECT().CreateNodeSale(mock.Anything, entity.NodeSale{
BlockHeight: uint64(event.Transaction.BlockHeight),
TxIndex: uint32(event.Transaction.Index),
Name: message.Deploy.Name,
StartsAt: time.Unix(int64(message.Deploy.StartsAt), 0),
EndsAt: time.Unix(int64(message.Deploy.EndsAt), 0),
Tiers: tiers,
SellerPublicKey: message.Deploy.SellerPublicKey,
MaxPerAddress: message.Deploy.MaxPerAddress,
DeployTxHash: event.Transaction.TxHash.String(),
MaxDiscountPercentage: int32(message.Deploy.MaxDiscountPercentage),
SellerWallet: message.Deploy.SellerWallet,
}).Return(nil)
err = p.ProcessDeploy(ctx, mockDgTx, block, event)
require.NoError(t, err)
}


@@ -0,0 +1,55 @@
package entity
import "time"
type Block struct {
BlockHeight int64
BlockHash string
Module string
}
type Node struct {
SaleBlock uint64
SaleTxIndex uint32
NodeID uint32
TierIndex int32
DelegatedTo string
OwnerPublicKey string
PurchaseTxHash string
DelegateTxHash string
}
type NodeSale struct {
BlockHeight uint64
TxIndex uint32
Name string
StartsAt time.Time
EndsAt time.Time
Tiers [][]byte
SellerPublicKey string
MaxPerAddress uint32
DeployTxHash string
MaxDiscountPercentage int32
SellerWallet string
}
type NodeSaleEvent struct {
TxHash string
BlockHeight int64
TxIndex int32
WalletAddress string
Valid bool
Action int32
RawMessage []byte
ParsedMessage []byte
BlockTimestamp time.Time
BlockHash string
Metadata *MetadataEventPurchase
Reason string
}
type MetadataEventPurchase struct {
ExpectedTotalAmountDiscounted uint64
ReportedTotalAmount uint64
PaidTotalAmount uint64
}


@@ -0,0 +1,51 @@
package delegate
import (
"context"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/modules/nodesale/datagateway"
"github.com/gaze-network/indexer-network/modules/nodesale/internal/entity"
"github.com/gaze-network/indexer-network/modules/nodesale/internal/validator"
"github.com/gaze-network/indexer-network/modules/nodesale/protobuf"
)
type DelegateValidator struct {
validator.Validator
}
func New() *DelegateValidator {
v := validator.New()
return &DelegateValidator{
Validator: *v,
}
}
func (v *DelegateValidator) NodesExist(
ctx context.Context,
qtx datagateway.NodeSaleDataGatewayWithTx,
deployId *protobuf.ActionID,
nodeIds []uint32,
) (bool, []entity.Node, error) {
if !v.Valid {
return false, nil, nil
}
nodes, err := qtx.GetNodesByIds(ctx, datagateway.GetNodesByIdsParams{
SaleBlock: deployId.Block,
SaleTxIndex: deployId.TxIndex,
NodeIds: nodeIds,
})
if err != nil {
v.Valid = false
return v.Valid, nil, errors.Wrap(err, "Failed to get nodes")
}
if len(nodeIds) != len(nodes) {
v.Valid = false
return v.Valid, nil, nil
}
v.Valid = true
return v.Valid, nodes, nil
}


@@ -0,0 +1,6 @@
package validator
const (
INVALID_PUBKEY_FORMAT = "Cannot parse public key"
INVALID_PUBKEY = "Invalid public key"
)


@@ -0,0 +1,17 @@
package purchase
const (
DEPLOYID_NOT_FOUND = "Deploy ID not found."
PURCHASE_TIMEOUT = "Purchase timeout."
BLOCK_HEIGHT_TIMEOUT = "Block height over timeout block."
INVALID_SIGNATURE_FORMAT = "Cannot parse signature."
INVALID_SIGNATURE = "Invalid signature."
INVALID_TIER_JSON = "Invalid tier format."
INVALID_NODE_ID = "Invalid node ID."
NODE_ALREADY_PURCHASED = "Some nodes have already been purchased."
INVALID_SELLER_ADDR_FORMAT = "Invalid seller address."
INVALID_PAYMENT = "Total amount paid is less than the reported price."
INSUFFICIENT_FUND = "Insufficient funds."
OVER_LIMIT_PER_ADDR = "Purchase over limit per address."
OVER_LIMIT_PER_TIER = "Purchase over limit per tier."
)


@@ -0,0 +1,283 @@
package purchase
import (
"context"
"encoding/hex"
"slices"
"time"
"github.com/btcsuite/btcd/btcec/v2"
"github.com/btcsuite/btcd/btcec/v2/ecdsa"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/modules/nodesale/datagateway"
"github.com/gaze-network/indexer-network/modules/nodesale/internal/entity"
"github.com/gaze-network/indexer-network/modules/nodesale/internal/validator"
"github.com/gaze-network/indexer-network/modules/nodesale/protobuf"
"google.golang.org/protobuf/encoding/protojson"
"google.golang.org/protobuf/proto"
)
type PurchaseValidator struct {
validator.Validator
}
func New() *PurchaseValidator {
v := validator.New()
return &PurchaseValidator{
Validator: *v,
}
}
func (v *PurchaseValidator) NodeSaleExists(ctx context.Context, qtx datagateway.NodeSaleDataGatewayWithTx, payload *protobuf.PurchasePayload) (bool, *entity.NodeSale, error) {
if !v.Valid {
return false, nil, nil
}
// check node existed
deploys, err := qtx.GetNodeSale(ctx, datagateway.GetNodeSaleParams{
BlockHeight: payload.DeployID.Block,
TxIndex: payload.DeployID.TxIndex,
})
if err != nil {
v.Valid = false
return v.Valid, nil, errors.Wrap(err, "Failed to Get NodeSale")
}
if len(deploys) < 1 {
v.Valid = false
v.Reason = DEPLOYID_NOT_FOUND
return v.Valid, nil, nil
}
v.Valid = true
return v.Valid, &deploys[0], nil
}
func (v *PurchaseValidator) ValidTimestamp(deploy *entity.NodeSale, timestamp time.Time) bool {
if !v.Valid {
return false
}
if timestamp.Before(deploy.StartsAt) ||
timestamp.After(deploy.EndsAt) {
v.Valid = false
v.Reason = PURCHASE_TIMEOUT
return v.Valid
}
v.Valid = true
return v.Valid
}
func (v *PurchaseValidator) WithinTimeoutBlock(timeOutBlock uint64, blockHeight uint64) bool {
if !v.Valid {
return false
}
if timeOutBlock == 0 {
// No timeout
v.Valid = true
return v.Valid
}
if timeOutBlock < blockHeight {
v.Valid = false
v.Reason = BLOCK_HEIGHT_TIMEOUT
return v.Valid
}
v.Valid = true
return v.Valid
}
func (v *PurchaseValidator) VerifySignature(purchase *protobuf.ActionPurchase, deploy *entity.NodeSale) bool {
if !v.Valid {
return false
}
payload := purchase.Payload
payloadBytes, _ := proto.Marshal(payload)
signatureBytes, _ := hex.DecodeString(purchase.SellerSignature)
signature, err := ecdsa.ParseSignature(signatureBytes)
if err != nil {
v.Valid = false
v.Reason = INVALID_SIGNATURE_FORMAT
return v.Valid
}
hash := chainhash.DoubleHashB(payloadBytes)
pubkeyBytes, _ := hex.DecodeString(deploy.SellerPublicKey)
pubKey, err := btcec.ParsePubKey(pubkeyBytes)
if err != nil {
v.Valid = false
v.Reason = validator.INVALID_PUBKEY_FORMAT
return v.Valid
}
if !signature.Verify(hash, pubKey) {
v.Valid = false
v.Reason = INVALID_SIGNATURE
return v.Valid
}
v.Valid = true
return v.Valid
}
type TierMap struct {
Tiers []protobuf.Tier
BuyingTiersCount []uint32
NodeIdToTier map[uint32]int32
}
func (v *PurchaseValidator) ValidTiers(
payload *protobuf.PurchasePayload,
deploy *entity.NodeSale,
) (bool, TierMap) {
if !v.Valid {
return false, TierMap{}
}
tiers := make([]protobuf.Tier, len(deploy.Tiers))
buyingTiersCount := make([]uint32, len(tiers))
nodeIdToTier := make(map[uint32]int32)
for i, tierJson := range deploy.Tiers {
tier := &tiers[i]
err := protojson.Unmarshal(tierJson, tier)
if err != nil {
v.Valid = false
v.Reason = INVALID_TIER_JSON
return v.Valid, TierMap{}
}
}
slices.Sort(payload.NodeIDs)
var currentTier int32 = -1
var tierSum uint32 = 0
for _, nodeId := range payload.NodeIDs {
for nodeId >= tierSum && currentTier < int32(len(tiers)-1) {
currentTier++
tierSum += tiers[currentTier].Limit
}
if nodeId < tierSum {
buyingTiersCount[currentTier]++
nodeIdToTier[nodeId] = currentTier
} else {
v.Valid = false
v.Reason = INVALID_NODE_ID
return false, TierMap{}
}
}
v.Valid = true
return v.Valid, TierMap{
Tiers: tiers,
BuyingTiersCount: buyingTiersCount,
NodeIdToTier: nodeIdToTier,
}
}
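The cumulative-limit walk in `ValidTiers` assigns globally numbered node IDs to tiers: tier *i* covers the ID range starting at the sum of all earlier tier limits. A self-contained sketch of that mapping (`mapNodesToTiers` is a hypothetical helper, not part of this module):

```go
package main

import (
	"fmt"
	"sort"
)

// mapNodesToTiers assigns each node ID to a tier using cumulative tier
// limits: with limits [5, 5], IDs 0-4 fall in tier 0 and IDs 5-9 in tier 1.
// An ID past the last tier's range is rejected (INVALID_NODE_ID in the source).
func mapNodesToTiers(nodeIDs []uint32, limits []uint32) (map[uint32]int, bool) {
	sorted := append([]uint32(nil), nodeIDs...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	result := make(map[uint32]int)
	tier, sum := -1, uint32(0)
	for _, id := range sorted {
		// Advance to the tier whose cumulative range contains this ID.
		for id >= sum && tier < len(limits)-1 {
			tier++
			sum += limits[tier]
		}
		if id >= sum {
			return nil, false // ID beyond the last tier
		}
		result[id] = tier
	}
	return result, true
}

func main() {
	m, ok := mapNodesToTiers([]uint32{0, 4, 5, 9}, []uint32{5, 5})
	fmt.Println(ok, m)
}
```

Sorting the IDs first is what lets the walk advance `tierSum` monotonically instead of recomputing prefix sums per node.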
func (v *PurchaseValidator) ValidUnpurchasedNodes(
ctx context.Context,
qtx datagateway.NodeSaleDataGatewayWithTx,
payload *protobuf.PurchasePayload,
) (bool, error) {
if !v.Valid {
return false, nil
}
// valid unpurchased node ID
nodes, err := qtx.GetNodesByIds(ctx, datagateway.GetNodesByIdsParams{
SaleBlock: payload.DeployID.Block,
SaleTxIndex: payload.DeployID.TxIndex,
NodeIds: payload.NodeIDs,
})
if err != nil {
v.Valid = false
return v.Valid, errors.Wrap(err, "Failed to Get nodes")
}
if len(nodes) > 0 {
v.Valid = false
v.Reason = NODE_ALREADY_PURCHASED
return false, nil
}
v.Valid = true
return true, nil
}
func (v *PurchaseValidator) ValidPaidAmount(
payload *protobuf.PurchasePayload,
deploy *entity.NodeSale,
txPaid uint64,
tiers []protobuf.Tier,
buyingTiersCount []uint32,
network *chaincfg.Params,
) (bool, *entity.MetadataEventPurchase) {
if !v.Valid {
return false, nil
}
meta := entity.MetadataEventPurchase{}
meta.PaidTotalAmount = txPaid
meta.ReportedTotalAmount = uint64(payload.TotalAmountSat)
// paid amount must be at least the reported amount
if txPaid < uint64(payload.TotalAmountSat) {
v.Valid = false
v.Reason = INVALID_PAYMENT
return v.Valid, nil
}
// calculate total price
var totalPrice uint64 = 0
for i := 0; i < len(tiers); i++ {
totalPrice += uint64(buyingTiersCount[i]) * uint64(tiers[i].PriceSat)
}
// reported amount must be at least the maximum-discounted total price
maxDiscounted := totalPrice * (100 - uint64(deploy.MaxDiscountPercentage))
decimal := maxDiscounted % 100
maxDiscounted /= 100
if decimal >= 50 { // round half-up
maxDiscounted++
}
meta.ExpectedTotalAmountDiscounted = maxDiscounted
if uint64(payload.TotalAmountSat) < maxDiscounted {
v.Valid = false
v.Reason = INSUFFICIENT_FUND
return v.Valid, nil
}
v.Valid = true
return v.Valid, &meta
}
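`ValidPaidAmount` applies the maximum discount in pure satoshi integer arithmetic and rounds half-up. A minimal sketch of that rounding (the `maxDiscountedPrice` name is an assumption for illustration, not from this module):

```go
package main

import "fmt"

// maxDiscountedPrice computes the lowest acceptable reported amount:
// the total price with the maximum discount applied, rounded half-up,
// entirely in uint64 satoshi arithmetic (no floating point).
func maxDiscountedPrice(totalPriceSat, maxDiscountPercent uint64) uint64 {
	d := totalPriceSat * (100 - maxDiscountPercent)
	out := d / 100
	if d%100 >= 50 { // round half-up on the truncated cents
		out++
	}
	return out
}

func main() {
	fmt.Println(maxDiscountedPrice(999, 15)) // 849.15 sat rounds down to 849
	fmt.Println(maxDiscountedPrice(990, 15)) // 841.50 sat rounds up to 842
}
```

Keeping the whole check in integers avoids float rounding surprises when comparing against on-chain amounts.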
func (v *PurchaseValidator) WithinLimit(
ctx context.Context,
qtx datagateway.NodeSaleDataGatewayWithTx,
payload *protobuf.PurchasePayload,
deploy *entity.NodeSale,
tiers []protobuf.Tier,
buyingTiersCount []uint32,
) (bool, error) {
if !v.Valid {
return false, nil
}
// check node limit
// get all nodes sold in this deployment that are owned by the buyer
buyerOwnedNodes, err := qtx.GetNodesByOwner(ctx, datagateway.GetNodesByOwnerParams{
SaleBlock: deploy.BlockHeight,
SaleTxIndex: deploy.TxIndex,
OwnerPublicKey: payload.BuyerPublicKey,
})
if err != nil {
v.Valid = false
return v.Valid, errors.Wrap(err, "Failed to GetNodesByOwner")
}
if len(buyerOwnedNodes)+len(payload.NodeIDs) > int(deploy.MaxPerAddress) {
v.Valid = false
v.Reason = OVER_LIMIT_PER_ADDR
return v.Valid, nil
}
// count the buyer's owned nodes per tier and enforce each tier's per-address limit
ownedTiersCount := make([]uint32, len(tiers))
for _, node := range buyerOwnedNodes {
ownedTiersCount[node.TierIndex]++
}
for i := 0; i < len(tiers); i++ {
if ownedTiersCount[i]+buyingTiersCount[i] > tiers[i].MaxPerAddress {
v.Valid = false
v.Reason = OVER_LIMIT_PER_TIER
return v.Valid, nil
}
}
v.Valid = true
return v.Valid, nil
}


@@ -0,0 +1,44 @@
package validator
import (
"bytes"
"encoding/hex"
"github.com/btcsuite/btcd/btcec/v2"
)
type Validator struct {
Valid bool
Reason string
}
func New() *Validator {
return &Validator{
Valid: true,
}
}
func (v *Validator) EqualXonlyPublicKey(target string, expected *btcec.PublicKey) bool {
if !v.Valid {
return false
}
targetBytes, err := hex.DecodeString(target)
if err != nil {
v.Valid = false
v.Reason = INVALID_PUBKEY_FORMAT
return v.Valid
}
targetPubKey, err := btcec.ParsePubKey(targetBytes)
if err != nil {
v.Valid = false
v.Reason = INVALID_PUBKEY_FORMAT
return v.Valid
}
xOnlyTargetPubKey := btcec.ToSerialized(targetPubKey).SchnorrSerialized()
xOnlyExpectedPubKey := btcec.ToSerialized(expected).SchnorrSerialized()
v.Valid = bytes.Equal(xOnlyTargetPubKey[:], xOnlyExpectedPubKey[:])
if !v.Valid {
v.Reason = INVALID_PUBKEY
}
return v.Valid
}


@@ -0,0 +1,61 @@
package nodesale
import (
"context"
"fmt"
"github.com/btcsuite/btcd/rpcclient"
"github.com/gaze-network/indexer-network/core/datasources"
"github.com/gaze-network/indexer-network/core/indexer"
"github.com/gaze-network/indexer-network/internal/config"
"github.com/gaze-network/indexer-network/internal/postgres"
"github.com/gaze-network/indexer-network/modules/nodesale/api/httphandler"
repository "github.com/gaze-network/indexer-network/modules/nodesale/repository/postgres"
"github.com/gaze-network/indexer-network/pkg/logger"
"github.com/gofiber/fiber/v2"
"github.com/samber/do/v2"
)
var NODESALE_MAGIC = []byte{0x6e, 0x73, 0x6f, 0x70}
const (
Version = "v0.0.1-alpha"
)
func New(injector do.Injector) (indexer.IndexerWorker, error) {
ctx := do.MustInvoke[context.Context](injector)
conf := do.MustInvoke[config.Config](injector)
btcClient := do.MustInvoke[*rpcclient.Client](injector)
datasource := datasources.NewBitcoinNode(btcClient)
pg, err := postgres.NewPool(ctx, conf.Modules.NodeSale.Postgres)
if err != nil {
return nil, fmt.Errorf("Can't create postgres connection: %w", err)
}
var cleanupFuncs []func(context.Context) error
cleanupFuncs = append(cleanupFuncs, func(ctx context.Context) error {
pg.Close()
return nil
})
repository := repository.NewRepository(pg)
processor := &Processor{
NodeSaleDg: repository,
BtcClient: datasource,
Network: conf.Network,
cleanupFuncs: cleanupFuncs,
lastBlockDefault: conf.Modules.NodeSale.LastBlockDefault,
}
httpServer := do.MustInvoke[*fiber.App](injector)
nodeSaleHandler := httphandler.New(repository)
if err := nodeSaleHandler.Mount(httpServer); err != nil {
return nil, fmt.Errorf("Can't mount nodesale API: %w", err)
}
logger.InfoContext(ctx, "Mounted nodesale HTTP handler")
indexer := indexer.New(processor, datasource)
logger.InfoContext(ctx, "NodeSale module started.")
return indexer, nil
}


@@ -0,0 +1,61 @@
package nodesale
import (
"time"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/txscript"
"github.com/decred/dcrd/dcrec/secp256k1/v4"
"github.com/gaze-network/indexer-network/core/types"
"github.com/gaze-network/indexer-network/modules/nodesale/protobuf"
"google.golang.org/protobuf/encoding/protojson"
"google.golang.org/protobuf/proto"
)
var (
testBlockHeight uint64 = 101
testTxIndex uint32 = 1
)
func assembleTestEvent(privateKey *secp256k1.PrivateKey, blockHashHex, txHashHex string, blockHeight uint64, txIndex uint32, message *protobuf.NodeSaleEvent) (NodeSaleEvent, *types.Block) {
blockHash, _ := chainhash.NewHashFromStr(blockHashHex)
txHash, _ := chainhash.NewHashFromStr(txHashHex)
rawData, _ := proto.Marshal(message)
builder := txscript.NewScriptBuilder()
builder.AddOp(txscript.OP_FALSE)
builder.AddOp(txscript.OP_IF)
builder.AddData(rawData)
builder.AddOp(txscript.OP_ENDIF)
messageJson, _ := protojson.Marshal(message)
if blockHeight == 0 {
blockHeight = testBlockHeight
testBlockHeight++
}
if txIndex == 0 {
txIndex = testTxIndex
testTxIndex++
}
event := NodeSaleEvent{
Transaction: &types.Transaction{
BlockHeight: int64(blockHeight),
BlockHash: *blockHash,
Index: uint32(txIndex),
TxHash: *txHash,
},
RawData: rawData,
EventMessage: message,
EventJson: messageJson,
TxPubkey: privateKey.PubKey(),
}
block := &types.Block{
Header: types.BlockHeader{
Timestamp: time.Now().UTC(),
},
}
return event, block
}
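`assembleTestEvent` wraps the protobuf payload in an `OP_FALSE OP_IF ... OP_ENDIF` envelope, the never-executed-branch trick inscriptions use to carry arbitrary data in a tapscript. A byte-level sketch of that envelope for short payloads (`wrapEnvelope` is hypothetical; `txscript`'s `AddData` additionally emits OP_PUSHDATA1/2/4 for pushes over 75 bytes):

```go
package main

import "fmt"

// Script opcodes used by the envelope. OP_FALSE makes the OP_IF branch
// unexecuted, so the pushed data never runs as script.
const (
	opFalse = 0x00
	opIf    = 0x63
	opEndIf = 0x68
)

// wrapEnvelope builds OP_FALSE OP_IF <push data> OP_ENDIF using a direct
// push (single length byte), which only covers payloads up to 75 bytes.
func wrapEnvelope(data []byte) []byte {
	script := []byte{opFalse, opIf, byte(len(data))}
	script = append(script, data...)
	return append(script, opEndIf)
}

func main() {
	env := wrapEnvelope([]byte("nsop")) // the NODESALE_MAGIC bytes
	fmt.Printf("%x\n", env)
}
```

The indexer's extraction path (`extractNodeSaleData`) is the mirror image: it tokenizes the tapscript and looks for exactly this OP_0/OP_IF pattern.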


@@ -0,0 +1,303 @@
package nodesale
import (
"bytes"
"context"
"github.com/btcsuite/btcd/btcec/v2"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/txscript"
"github.com/gaze-network/indexer-network/common"
"github.com/gaze-network/indexer-network/core/indexer"
"github.com/gaze-network/indexer-network/core/types"
"github.com/gaze-network/indexer-network/pkg/logger"
"github.com/gaze-network/indexer-network/pkg/logger/slogx"
"google.golang.org/protobuf/encoding/protojson"
"google.golang.org/protobuf/proto"
"github.com/cockroachdb/errors"
"github.com/gaze-network/indexer-network/core/datasources"
"github.com/gaze-network/indexer-network/modules/nodesale/datagateway"
"github.com/gaze-network/indexer-network/modules/nodesale/internal/entity"
"github.com/gaze-network/indexer-network/modules/nodesale/protobuf"
)
type NodeSaleEvent struct {
Transaction *types.Transaction
EventMessage *protobuf.NodeSaleEvent
EventJson []byte
TxPubkey *btcec.PublicKey
RawData []byte
InputValue uint64
}
func NewProcessor(repository datagateway.NodeSaleDataGateway,
datasource *datasources.BitcoinNodeDatasource,
network common.Network,
cleanupFuncs []func(context.Context) error,
lastBlockDefault int64,
) *Processor {
return &Processor{
NodeSaleDg: repository,
BtcClient: datasource,
Network: network,
cleanupFuncs: cleanupFuncs,
lastBlockDefault: lastBlockDefault,
}
}
func (p *Processor) Shutdown(ctx context.Context) error {
for _, cleanupFunc := range p.cleanupFuncs {
err := cleanupFunc(ctx)
if err != nil {
return errors.Wrap(err, "cleanup function error")
}
}
return nil
}
type Processor struct {
NodeSaleDg datagateway.NodeSaleDataGateway
BtcClient *datasources.BitcoinNodeDatasource
Network common.Network
cleanupFuncs []func(context.Context) error
lastBlockDefault int64
}
// CurrentBlock implements indexer.Processor.
func (p *Processor) CurrentBlock(ctx context.Context) (types.BlockHeader, error) {
block, err := p.NodeSaleDg.GetLastProcessedBlock(ctx)
if err != nil {
logger.InfoContext(ctx, "Couldn't get last processed block; starting from NODESALE_LAST_BLOCK_DEFAULT.",
slogx.Int64("currentBlock", p.lastBlockDefault))
header, err := p.BtcClient.GetBlockHeader(ctx, p.lastBlockDefault)
if err != nil {
return types.BlockHeader{}, errors.Wrap(err, "Cannot get default block from bitcoin node")
}
return types.BlockHeader{
Hash: header.Hash,
Height: p.lastBlockDefault,
}, nil
}
hash, err := chainhash.NewHashFromStr(block.BlockHash)
if err != nil {
logger.PanicContext(ctx, "Invalid hash format found in database.")
}
return types.BlockHeader{
Hash: *hash,
Height: block.BlockHeight,
}, nil
}
// GetIndexedBlock implements indexer.Processor.
func (p *Processor) GetIndexedBlock(ctx context.Context, height int64) (types.BlockHeader, error) {
block, err := p.NodeSaleDg.GetBlock(ctx, height)
if err != nil {
return types.BlockHeader{}, errors.Wrapf(err, "Block %d not found", height)
}
hash, err := chainhash.NewHashFromStr(block.BlockHash)
if err != nil {
logger.PanicContext(ctx, "Invalid hash format found in database.")
}
return types.BlockHeader{
Hash: *hash,
Height: block.BlockHeight,
}, nil
}
// Name implements indexer.Processor.
func (p *Processor) Name() string {
return "nodesale"
}
func extractNodeSaleData(witness [][]byte) (data []byte, internalPubkey *btcec.PublicKey, isNodeSale bool) {
tokenizer, controlBlock, isTapScript := extractTapScript(witness)
if !isTapScript {
return []byte{}, nil, false
}
state := 0
for tokenizer.Next() {
switch state {
case 0:
if tokenizer.Opcode() == txscript.OP_0 {
state++
} else {
state = 0
}
case 1:
if tokenizer.Opcode() == txscript.OP_IF {
state++
} else {
state = 0
}
case 2:
if tokenizer.Opcode() == txscript.OP_DATA_4 &&
bytes.Equal(tokenizer.Data(), NODESALE_MAGIC) {
state++
} else {
state = 0
}
case 3:
// Opcodes greater than txscript.OP_16 never push data; every push opcode,
// including txscript.OP_PUSHDATA1/2/4, is <= txscript.OP_16.
if tokenizer.Opcode() <= txscript.OP_16 {
data := tokenizer.Data()
return data, controlBlock.InternalKey, true
}
state = 0
}
}
return []byte{}, nil, false
}
func (p *Processor) parseTransactions(ctx context.Context, transactions []*types.Transaction) ([]NodeSaleEvent, error) {
var events []NodeSaleEvent
for _, t := range transactions {
for _, txIn := range t.TxIn {
data, txPubkey, isNodeSale := extractNodeSaleData(txIn.Witness)
if !isNodeSale {
continue
}
event := &protobuf.NodeSaleEvent{}
err := proto.Unmarshal(data, event)
if err != nil {
logger.WarnContext(ctx, "Invalid Protobuf",
slogx.String("block_hash", t.BlockHash.String()),
slogx.Int("txIndex", int(t.Index)))
continue
}
eventJson, err := protojson.Marshal(event)
if err != nil {
return []NodeSaleEvent{}, errors.Wrap(err, "Failed to parse protobuf to json")
}
prevTx, _, err := p.BtcClient.GetRawTransactionAndHeightByTxHash(ctx, txIn.PreviousOutTxHash)
if err != nil {
return nil, errors.Wrap(err, "Failed to get Previous transaction data")
}
if txIn.PreviousOutIndex >= uint32(len(prevTx.TxOut)) {
// err is nil on this path; errors.Wrap(nil, ...) would return nil and silently drop the error
return nil, errors.Newf("invalid previous transaction output index %d from bitcoin", txIn.PreviousOutIndex)
}
events = append(events, NodeSaleEvent{
Transaction: t,
EventMessage: event,
EventJson: eventJson,
RawData: data,
TxPubkey: txPubkey,
InputValue: uint64(prevTx.TxOut[txIn.PreviousOutIndex].Value),
})
}
}
return events, nil
}
// Process implements indexer.Processor.
func (p *Processor) Process(ctx context.Context, inputs []*types.Block) error {
for _, block := range inputs {
logger.InfoContext(ctx, "NodeSale processing a block",
slogx.Int64("block", block.Header.Height),
slogx.Stringer("hash", block.Header.Hash))
// parse all events from each transaction, including extracting the tx pubkey from the witness
events, err := p.parseTransactions(ctx, block.Transactions)
if err != nil {
return errors.Wrap(err, "Invalid data from bitcoin client")
}
// open transaction
qtx, err := p.NodeSaleDg.BeginNodeSaleTx(ctx)
if err != nil {
return errors.Wrap(err, "Failed to create transaction")
}
defer func() {
// Rollback after a successful Commit reports the transaction as already
// closed; treat rollback failure as a warning instead of a panic.
if err := qtx.Rollback(ctx); err != nil {
logger.WarnContext(ctx, "Failed to rollback db transaction")
}
}()
// write block
err = qtx.CreateBlock(ctx, entity.Block{
BlockHeight: block.Header.Height,
BlockHash: block.Header.Hash.String(),
Module: p.Name(),
})
if err != nil {
return errors.Wrapf(err, "Failed to add block %d", block.Header.Height)
}
// for each events
for _, event := range events {
logger.InfoContext(ctx, "NodeSale processing event",
slogx.Uint32("txIndex", event.Transaction.Index),
slogx.Int64("blockHeight", block.Header.Height),
slogx.Stringer("blockhash", block.Header.Hash),
)
eventMessage := event.EventMessage
switch eventMessage.Action {
case protobuf.Action_ACTION_DEPLOY:
err = p.ProcessDeploy(ctx, qtx, block, event)
if err != nil {
return errors.Wrapf(err, "Failed to deploy at block %d", block.Header.Height)
}
case protobuf.Action_ACTION_DELEGATE:
err = p.ProcessDelegate(ctx, qtx, block, event)
if err != nil {
return errors.Wrapf(err, "Failed to delegate at block %d", block.Header.Height)
}
case protobuf.Action_ACTION_PURCHASE:
err = p.ProcessPurchase(ctx, qtx, block, event)
if err != nil {
return errors.Wrapf(err, "Failed to purchase at block %d", block.Header.Height)
}
default:
logger.DebugContext(ctx, "Invalid event ACTION", slogx.Stringer("txHash", event.Transaction.TxHash))
}
}
// close transaction
err = qtx.Commit(ctx)
if err != nil {
return errors.Wrap(err, "Failed to commit transaction")
}
logger.InfoContext(ctx, "NodeSale finished processing block",
slogx.Int64("block", block.Header.Height),
slogx.Stringer("hash", block.Header.Hash))
}
return nil
}
// RevertData implements indexer.Processor.
func (p *Processor) RevertData(ctx context.Context, from int64) error {
qtx, err := p.NodeSaleDg.BeginNodeSaleTx(ctx)
if err != nil {
return errors.Wrap(err, "Failed to create transaction")
}
defer func() { _ = qtx.Rollback(ctx) }() // best-effort; rollback is rejected if already committed
_, err = qtx.RemoveBlockFrom(ctx, from)
if err != nil {
return errors.Wrap(err, "Failed to remove blocks.")
}
affected, err := qtx.RemoveEventsFromBlock(ctx, from)
if err != nil {
return errors.Wrap(err, "Failed to remove events.")
}
_, err = qtx.ClearDelegate(ctx)
if err != nil {
return errors.Wrap(err, "Failed to clear delegate from nodes")
}
err = qtx.Commit(ctx)
if err != nil {
return errors.Wrap(err, "Failed to commit transaction")
}
logger.InfoContext(ctx, "Events removed",
slogx.Int64("totalRemoved", affected))
return nil
}
// VerifyStates implements indexer.Processor.
func (p *Processor) VerifyStates(ctx context.Context) error {
panic("unimplemented")
}
var _ indexer.Processor[*types.Block] = (*Processor)(nil)
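The final `var _ indexer.Processor[*types.Block] = (*Processor)(nil)` line is a compile-time guard: the build breaks right here, rather than at some distant call site, if Processor drifts out of sync with the interface. The idiom reduced to a self-contained sketch (the generic interface and types below are illustrative stand-ins for the indexer package):

```go
package main

import "fmt"

// Processor is a stand-in for the generic indexer.Processor interface.
type Processor[T any] interface {
	Name() string
	Process(inputs []T) error
}

type block struct{ height int64 }

type nodeSaleProcessor struct{}

func (p *nodeSaleProcessor) Name() string { return "nodesale" }

func (p *nodeSaleProcessor) Process(inputs []*block) error {
	for _, b := range inputs {
		fmt.Println("processing block", b.height)
	}
	return nil
}

// Compile-time assertion: if nodeSaleProcessor stops satisfying
// Processor[*block], compilation fails on this line.
var _ Processor[*block] = (*nodeSaleProcessor)(nil)

func main() {
	var p Processor[*block] = &nodeSaleProcessor{}
	fmt.Println(p.Name()) // nodesale
}
```

The `(*nodeSaleProcessor)(nil)` conversion costs nothing at runtime; the blank identifier discards the value, so the declaration exists purely for the type check.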


@@ -0,0 +1,806 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.34.1
// protoc v5.26.1
// source: modules/nodesale/protobuf/nodesale.proto
// protoc modules/nodesale/protobuf/nodesale.proto --go_out=. --go_opt=module=github.com/gaze-network/indexer-network
package protobuf
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type Action int32
const (
Action_ACTION_DEPLOY Action = 0
Action_ACTION_PURCHASE Action = 1
Action_ACTION_DELEGATE Action = 2
)
// Enum value maps for Action.
var (
Action_name = map[int32]string{
0: "ACTION_DEPLOY",
1: "ACTION_PURCHASE",
2: "ACTION_DELEGATE",
}
Action_value = map[string]int32{
"ACTION_DEPLOY": 0,
"ACTION_PURCHASE": 1,
"ACTION_DELEGATE": 2,
}
)
func (x Action) Enum() *Action {
p := new(Action)
*p = x
return p
}
func (x Action) String() string {
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}
func (Action) Descriptor() protoreflect.EnumDescriptor {
return file_modules_nodesale_protobuf_nodesale_proto_enumTypes[0].Descriptor()
}
func (Action) Type() protoreflect.EnumType {
return &file_modules_nodesale_protobuf_nodesale_proto_enumTypes[0]
}
func (x Action) Number() protoreflect.EnumNumber {
return protoreflect.EnumNumber(x)
}
// Deprecated: Use Action.Descriptor instead.
func (Action) EnumDescriptor() ([]byte, []int) {
return file_modules_nodesale_protobuf_nodesale_proto_rawDescGZIP(), []int{0}
}
type NodeSaleEvent struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Action Action `protobuf:"varint,1,opt,name=action,proto3,enum=nodesale.Action" json:"action,omitempty"`
Deploy *ActionDeploy `protobuf:"bytes,2,opt,name=deploy,proto3,oneof" json:"deploy,omitempty"`
Purchase *ActionPurchase `protobuf:"bytes,3,opt,name=purchase,proto3,oneof" json:"purchase,omitempty"`
Delegate *ActionDelegate `protobuf:"bytes,4,opt,name=delegate,proto3,oneof" json:"delegate,omitempty"`
}
func (x *NodeSaleEvent) Reset() {
*x = NodeSaleEvent{}
if protoimpl.UnsafeEnabled {
mi := &file_modules_nodesale_protobuf_nodesale_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *NodeSaleEvent) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*NodeSaleEvent) ProtoMessage() {}
func (x *NodeSaleEvent) ProtoReflect() protoreflect.Message {
mi := &file_modules_nodesale_protobuf_nodesale_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use NodeSaleEvent.ProtoReflect.Descriptor instead.
func (*NodeSaleEvent) Descriptor() ([]byte, []int) {
return file_modules_nodesale_protobuf_nodesale_proto_rawDescGZIP(), []int{0}
}
func (x *NodeSaleEvent) GetAction() Action {
if x != nil {
return x.Action
}
return Action_ACTION_DEPLOY
}
func (x *NodeSaleEvent) GetDeploy() *ActionDeploy {
if x != nil {
return x.Deploy
}
return nil
}
func (x *NodeSaleEvent) GetPurchase() *ActionPurchase {
if x != nil {
return x.Purchase
}
return nil
}
func (x *NodeSaleEvent) GetDelegate() *ActionDelegate {
if x != nil {
return x.Delegate
}
return nil
}
type ActionDeploy struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
StartsAt uint32 `protobuf:"varint,2,opt,name=startsAt,proto3" json:"startsAt,omitempty"`
EndsAt uint32 `protobuf:"varint,3,opt,name=endsAt,proto3" json:"endsAt,omitempty"`
Tiers []*Tier `protobuf:"bytes,4,rep,name=tiers,proto3" json:"tiers,omitempty"`
SellerPublicKey string `protobuf:"bytes,5,opt,name=sellerPublicKey,proto3" json:"sellerPublicKey,omitempty"`
MaxPerAddress uint32 `protobuf:"varint,6,opt,name=maxPerAddress,proto3" json:"maxPerAddress,omitempty"`
MaxDiscountPercentage uint32 `protobuf:"varint,7,opt,name=maxDiscountPercentage,proto3" json:"maxDiscountPercentage,omitempty"`
SellerWallet string `protobuf:"bytes,8,opt,name=sellerWallet,proto3" json:"sellerWallet,omitempty"`
}
func (x *ActionDeploy) Reset() {
*x = ActionDeploy{}
if protoimpl.UnsafeEnabled {
mi := &file_modules_nodesale_protobuf_nodesale_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ActionDeploy) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ActionDeploy) ProtoMessage() {}
func (x *ActionDeploy) ProtoReflect() protoreflect.Message {
mi := &file_modules_nodesale_protobuf_nodesale_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ActionDeploy.ProtoReflect.Descriptor instead.
func (*ActionDeploy) Descriptor() ([]byte, []int) {
return file_modules_nodesale_protobuf_nodesale_proto_rawDescGZIP(), []int{1}
}
func (x *ActionDeploy) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *ActionDeploy) GetStartsAt() uint32 {
if x != nil {
return x.StartsAt
}
return 0
}
func (x *ActionDeploy) GetEndsAt() uint32 {
if x != nil {
return x.EndsAt
}
return 0
}
func (x *ActionDeploy) GetTiers() []*Tier {
if x != nil {
return x.Tiers
}
return nil
}
func (x *ActionDeploy) GetSellerPublicKey() string {
if x != nil {
return x.SellerPublicKey
}
return ""
}
func (x *ActionDeploy) GetMaxPerAddress() uint32 {
if x != nil {
return x.MaxPerAddress
}
return 0
}
func (x *ActionDeploy) GetMaxDiscountPercentage() uint32 {
if x != nil {
return x.MaxDiscountPercentage
}
return 0
}
func (x *ActionDeploy) GetSellerWallet() string {
if x != nil {
return x.SellerWallet
}
return ""
}
type Tier struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
PriceSat uint32 `protobuf:"varint,1,opt,name=priceSat,proto3" json:"priceSat,omitempty"`
Limit uint32 `protobuf:"varint,2,opt,name=limit,proto3" json:"limit,omitempty"`
MaxPerAddress uint32 `protobuf:"varint,3,opt,name=maxPerAddress,proto3" json:"maxPerAddress,omitempty"`
}
func (x *Tier) Reset() {
*x = Tier{}
if protoimpl.UnsafeEnabled {
mi := &file_modules_nodesale_protobuf_nodesale_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Tier) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Tier) ProtoMessage() {}
func (x *Tier) ProtoReflect() protoreflect.Message {
mi := &file_modules_nodesale_protobuf_nodesale_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Tier.ProtoReflect.Descriptor instead.
func (*Tier) Descriptor() ([]byte, []int) {
return file_modules_nodesale_protobuf_nodesale_proto_rawDescGZIP(), []int{2}
}
func (x *Tier) GetPriceSat() uint32 {
if x != nil {
return x.PriceSat
}
return 0
}
func (x *Tier) GetLimit() uint32 {
if x != nil {
return x.Limit
}
return 0
}
func (x *Tier) GetMaxPerAddress() uint32 {
if x != nil {
return x.MaxPerAddress
}
return 0
}
type ActionPurchase struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Payload *PurchasePayload `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"`
SellerSignature string `protobuf:"bytes,2,opt,name=sellerSignature,proto3" json:"sellerSignature,omitempty"`
}
func (x *ActionPurchase) Reset() {
*x = ActionPurchase{}
if protoimpl.UnsafeEnabled {
mi := &file_modules_nodesale_protobuf_nodesale_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ActionPurchase) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ActionPurchase) ProtoMessage() {}
func (x *ActionPurchase) ProtoReflect() protoreflect.Message {
mi := &file_modules_nodesale_protobuf_nodesale_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ActionPurchase.ProtoReflect.Descriptor instead.
func (*ActionPurchase) Descriptor() ([]byte, []int) {
return file_modules_nodesale_protobuf_nodesale_proto_rawDescGZIP(), []int{3}
}
func (x *ActionPurchase) GetPayload() *PurchasePayload {
if x != nil {
return x.Payload
}
return nil
}
func (x *ActionPurchase) GetSellerSignature() string {
if x != nil {
return x.SellerSignature
}
return ""
}
type PurchasePayload struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
DeployID *ActionID `protobuf:"bytes,1,opt,name=deployID,proto3" json:"deployID,omitempty"`
BuyerPublicKey string `protobuf:"bytes,2,opt,name=buyerPublicKey,proto3" json:"buyerPublicKey,omitempty"`
NodeIDs []uint32 `protobuf:"varint,3,rep,packed,name=nodeIDs,proto3" json:"nodeIDs,omitempty"`
TotalAmountSat int64 `protobuf:"varint,4,opt,name=totalAmountSat,proto3" json:"totalAmountSat,omitempty"`
TimeOutBlock uint64 `protobuf:"varint,5,opt,name=timeOutBlock,proto3" json:"timeOutBlock,omitempty"`
}
func (x *PurchasePayload) Reset() {
*x = PurchasePayload{}
if protoimpl.UnsafeEnabled {
mi := &file_modules_nodesale_protobuf_nodesale_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *PurchasePayload) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*PurchasePayload) ProtoMessage() {}
func (x *PurchasePayload) ProtoReflect() protoreflect.Message {
mi := &file_modules_nodesale_protobuf_nodesale_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use PurchasePayload.ProtoReflect.Descriptor instead.
func (*PurchasePayload) Descriptor() ([]byte, []int) {
return file_modules_nodesale_protobuf_nodesale_proto_rawDescGZIP(), []int{4}
}
func (x *PurchasePayload) GetDeployID() *ActionID {
if x != nil {
return x.DeployID
}
return nil
}
func (x *PurchasePayload) GetBuyerPublicKey() string {
if x != nil {
return x.BuyerPublicKey
}
return ""
}
func (x *PurchasePayload) GetNodeIDs() []uint32 {
if x != nil {
return x.NodeIDs
}
return nil
}
func (x *PurchasePayload) GetTotalAmountSat() int64 {
if x != nil {
return x.TotalAmountSat
}
return 0
}
func (x *PurchasePayload) GetTimeOutBlock() uint64 {
if x != nil {
return x.TimeOutBlock
}
return 0
}
type ActionID struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Block uint64 `protobuf:"varint,1,opt,name=block,proto3" json:"block,omitempty"`
TxIndex uint32 `protobuf:"varint,2,opt,name=txIndex,proto3" json:"txIndex,omitempty"`
}
func (x *ActionID) Reset() {
*x = ActionID{}
if protoimpl.UnsafeEnabled {
mi := &file_modules_nodesale_protobuf_nodesale_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ActionID) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ActionID) ProtoMessage() {}
func (x *ActionID) ProtoReflect() protoreflect.Message {
mi := &file_modules_nodesale_protobuf_nodesale_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ActionID.ProtoReflect.Descriptor instead.
func (*ActionID) Descriptor() ([]byte, []int) {
return file_modules_nodesale_protobuf_nodesale_proto_rawDescGZIP(), []int{5}
}
func (x *ActionID) GetBlock() uint64 {
if x != nil {
return x.Block
}
return 0
}
func (x *ActionID) GetTxIndex() uint32 {
if x != nil {
return x.TxIndex
}
return 0
}
type ActionDelegate struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
DelegateePublicKey string `protobuf:"bytes,1,opt,name=delegateePublicKey,proto3" json:"delegateePublicKey,omitempty"`
NodeIDs []uint32 `protobuf:"varint,2,rep,packed,name=nodeIDs,proto3" json:"nodeIDs,omitempty"`
DeployID *ActionID `protobuf:"bytes,3,opt,name=deployID,proto3" json:"deployID,omitempty"`
}
func (x *ActionDelegate) Reset() {
*x = ActionDelegate{}
if protoimpl.UnsafeEnabled {
mi := &file_modules_nodesale_protobuf_nodesale_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ActionDelegate) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ActionDelegate) ProtoMessage() {}
func (x *ActionDelegate) ProtoReflect() protoreflect.Message {
mi := &file_modules_nodesale_protobuf_nodesale_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ActionDelegate.ProtoReflect.Descriptor instead.
func (*ActionDelegate) Descriptor() ([]byte, []int) {
return file_modules_nodesale_protobuf_nodesale_proto_rawDescGZIP(), []int{6}
}
func (x *ActionDelegate) GetDelegateePublicKey() string {
if x != nil {
return x.DelegateePublicKey
}
return ""
}
func (x *ActionDelegate) GetNodeIDs() []uint32 {
if x != nil {
return x.NodeIDs
}
return nil
}
func (x *ActionDelegate) GetDeployID() *ActionID {
if x != nil {
return x.DeployID
}
return nil
}
var File_modules_nodesale_protobuf_nodesale_proto protoreflect.FileDescriptor
var file_modules_nodesale_protobuf_nodesale_proto_rawDesc = []byte{
0x0a, 0x28, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x73, 0x2f, 0x6e, 0x6f, 0x64, 0x65, 0x73, 0x61,
0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x6e, 0x6f, 0x64, 0x65,
0x73, 0x61, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x08, 0x6e, 0x6f, 0x64, 0x65,
0x73, 0x61, 0x6c, 0x65, 0x22, 0x89, 0x02, 0x0a, 0x0d, 0x4e, 0x6f, 0x64, 0x65, 0x53, 0x61, 0x6c,
0x65, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x12, 0x28, 0x0a, 0x06, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e,
0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x10, 0x2e, 0x6e, 0x6f, 0x64, 0x65, 0x73, 0x61, 0x6c,
0x65, 0x2e, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x06, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e,
0x12, 0x33, 0x0a, 0x06, 0x64, 0x65, 0x70, 0x6c, 0x6f, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b,
0x32, 0x16, 0x2e, 0x6e, 0x6f, 0x64, 0x65, 0x73, 0x61, 0x6c, 0x65, 0x2e, 0x41, 0x63, 0x74, 0x69,
0x6f, 0x6e, 0x44, 0x65, 0x70, 0x6c, 0x6f, 0x79, 0x48, 0x00, 0x52, 0x06, 0x64, 0x65, 0x70, 0x6c,
0x6f, 0x79, 0x88, 0x01, 0x01, 0x12, 0x39, 0x0a, 0x08, 0x70, 0x75, 0x72, 0x63, 0x68, 0x61, 0x73,
0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x18, 0x2e, 0x6e, 0x6f, 0x64, 0x65, 0x73, 0x61,
0x6c, 0x65, 0x2e, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x75, 0x72, 0x63, 0x68, 0x61, 0x73,
0x65, 0x48, 0x01, 0x52, 0x08, 0x70, 0x75, 0x72, 0x63, 0x68, 0x61, 0x73, 0x65, 0x88, 0x01, 0x01,
0x12, 0x39, 0x0a, 0x08, 0x64, 0x65, 0x6c, 0x65, 0x67, 0x61, 0x74, 0x65, 0x18, 0x04, 0x20, 0x01,
0x28, 0x0b, 0x32, 0x18, 0x2e, 0x6e, 0x6f, 0x64, 0x65, 0x73, 0x61, 0x6c, 0x65, 0x2e, 0x41, 0x63,
0x74, 0x69, 0x6f, 0x6e, 0x44, 0x65, 0x6c, 0x65, 0x67, 0x61, 0x74, 0x65, 0x48, 0x02, 0x52, 0x08,
0x64, 0x65, 0x6c, 0x65, 0x67, 0x61, 0x74, 0x65, 0x88, 0x01, 0x01, 0x42, 0x09, 0x0a, 0x07, 0x5f,
0x64, 0x65, 0x70, 0x6c, 0x6f, 0x79, 0x42, 0x0b, 0x0a, 0x09, 0x5f, 0x70, 0x75, 0x72, 0x63, 0x68,
0x61, 0x73, 0x65, 0x42, 0x0b, 0x0a, 0x09, 0x5f, 0x64, 0x65, 0x6c, 0x65, 0x67, 0x61, 0x74, 0x65,
0x22, 0xa6, 0x02, 0x0a, 0x0c, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x44, 0x65, 0x70, 0x6c, 0x6f,
0x79, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52,
0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x1a, 0x0a, 0x08, 0x73, 0x74, 0x61, 0x72, 0x74, 0x73, 0x41,
0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x08, 0x73, 0x74, 0x61, 0x72, 0x74, 0x73, 0x41,
0x74, 0x12, 0x16, 0x0a, 0x06, 0x65, 0x6e, 0x64, 0x73, 0x41, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28,
0x0d, 0x52, 0x06, 0x65, 0x6e, 0x64, 0x73, 0x41, 0x74, 0x12, 0x24, 0x0a, 0x05, 0x74, 0x69, 0x65,
0x72, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0e, 0x2e, 0x6e, 0x6f, 0x64, 0x65, 0x73,
0x61, 0x6c, 0x65, 0x2e, 0x54, 0x69, 0x65, 0x72, 0x52, 0x05, 0x74, 0x69, 0x65, 0x72, 0x73, 0x12,
0x28, 0x0a, 0x0f, 0x73, 0x65, 0x6c, 0x6c, 0x65, 0x72, 0x50, 0x75, 0x62, 0x6c, 0x69, 0x63, 0x4b,
0x65, 0x79, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0f, 0x73, 0x65, 0x6c, 0x6c, 0x65, 0x72,
0x50, 0x75, 0x62, 0x6c, 0x69, 0x63, 0x4b, 0x65, 0x79, 0x12, 0x24, 0x0a, 0x0d, 0x6d, 0x61, 0x78,
0x50, 0x65, 0x72, 0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0d,
0x52, 0x0d, 0x6d, 0x61, 0x78, 0x50, 0x65, 0x72, 0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12,
0x34, 0x0a, 0x15, 0x6d, 0x61, 0x78, 0x44, 0x69, 0x73, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x50, 0x65,
0x72, 0x63, 0x65, 0x6e, 0x74, 0x61, 0x67, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x15,
0x6d, 0x61, 0x78, 0x44, 0x69, 0x73, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x50, 0x65, 0x72, 0x63, 0x65,
0x6e, 0x74, 0x61, 0x67, 0x65, 0x12, 0x22, 0x0a, 0x0c, 0x73, 0x65, 0x6c, 0x6c, 0x65, 0x72, 0x57,
0x61, 0x6c, 0x6c, 0x65, 0x74, 0x18, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0c, 0x73, 0x65, 0x6c,
0x6c, 0x65, 0x72, 0x57, 0x61, 0x6c, 0x6c, 0x65, 0x74, 0x22, 0x5e, 0x0a, 0x04, 0x54, 0x69, 0x65,
0x72, 0x12, 0x1a, 0x0a, 0x08, 0x70, 0x72, 0x69, 0x63, 0x65, 0x53, 0x61, 0x74, 0x18, 0x01, 0x20,
0x01, 0x28, 0x0d, 0x52, 0x08, 0x70, 0x72, 0x69, 0x63, 0x65, 0x53, 0x61, 0x74, 0x12, 0x14, 0x0a,
0x05, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x05, 0x6c, 0x69,
0x6d, 0x69, 0x74, 0x12, 0x24, 0x0a, 0x0d, 0x6d, 0x61, 0x78, 0x50, 0x65, 0x72, 0x41, 0x64, 0x64,
0x72, 0x65, 0x73, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0d, 0x6d, 0x61, 0x78, 0x50,
0x65, 0x72, 0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x22, 0x6f, 0x0a, 0x0e, 0x41, 0x63, 0x74,
0x69, 0x6f, 0x6e, 0x50, 0x75, 0x72, 0x63, 0x68, 0x61, 0x73, 0x65, 0x12, 0x33, 0x0a, 0x07, 0x70,
0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x6e,
0x6f, 0x64, 0x65, 0x73, 0x61, 0x6c, 0x65, 0x2e, 0x50, 0x75, 0x72, 0x63, 0x68, 0x61, 0x73, 0x65,
0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x52, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64,
0x12, 0x28, 0x0a, 0x0f, 0x73, 0x65, 0x6c, 0x6c, 0x65, 0x72, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x74,
0x75, 0x72, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0f, 0x73, 0x65, 0x6c, 0x6c, 0x65,
0x72, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x22, 0xcf, 0x01, 0x0a, 0x0f, 0x50,
0x75, 0x72, 0x63, 0x68, 0x61, 0x73, 0x65, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x2e,
0x0a, 0x08, 0x64, 0x65, 0x70, 0x6c, 0x6f, 0x79, 0x49, 0x44, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b,
0x32, 0x12, 0x2e, 0x6e, 0x6f, 0x64, 0x65, 0x73, 0x61, 0x6c, 0x65, 0x2e, 0x41, 0x63, 0x74, 0x69,
0x6f, 0x6e, 0x49, 0x44, 0x52, 0x08, 0x64, 0x65, 0x70, 0x6c, 0x6f, 0x79, 0x49, 0x44, 0x12, 0x26,
0x0a, 0x0e, 0x62, 0x75, 0x79, 0x65, 0x72, 0x50, 0x75, 0x62, 0x6c, 0x69, 0x63, 0x4b, 0x65, 0x79,
0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0e, 0x62, 0x75, 0x79, 0x65, 0x72, 0x50, 0x75, 0x62,
0x6c, 0x69, 0x63, 0x4b, 0x65, 0x79, 0x12, 0x18, 0x0a, 0x07, 0x6e, 0x6f, 0x64, 0x65, 0x49, 0x44,
0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0d, 0x52, 0x07, 0x6e, 0x6f, 0x64, 0x65, 0x49, 0x44, 0x73,
0x12, 0x26, 0x0a, 0x0e, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x41, 0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x53,
0x61, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x03, 0x52, 0x0e, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x41,
0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x53, 0x61, 0x74, 0x12, 0x22, 0x0a, 0x0c, 0x74, 0x69, 0x6d, 0x65,
0x4f, 0x75, 0x74, 0x42, 0x6c, 0x6f, 0x63, 0x6b, 0x18, 0x05, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0c,
0x74, 0x69, 0x6d, 0x65, 0x4f, 0x75, 0x74, 0x42, 0x6c, 0x6f, 0x63, 0x6b, 0x22, 0x3a, 0x0a, 0x08,
0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x12, 0x14, 0x0a, 0x05, 0x62, 0x6c, 0x6f, 0x63,
0x6b, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x62, 0x6c, 0x6f, 0x63, 0x6b, 0x12, 0x18,
0x0a, 0x07, 0x74, 0x78, 0x49, 0x6e, 0x64, 0x65, 0x78, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52,
0x07, 0x74, 0x78, 0x49, 0x6e, 0x64, 0x65, 0x78, 0x22, 0x8a, 0x01, 0x0a, 0x0e, 0x41, 0x63, 0x74,
0x69, 0x6f, 0x6e, 0x44, 0x65, 0x6c, 0x65, 0x67, 0x61, 0x74, 0x65, 0x12, 0x2e, 0x0a, 0x12, 0x64,
0x65, 0x6c, 0x65, 0x67, 0x61, 0x74, 0x65, 0x65, 0x50, 0x75, 0x62, 0x6c, 0x69, 0x63, 0x4b, 0x65,
0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x12, 0x64, 0x65, 0x6c, 0x65, 0x67, 0x61, 0x74,
0x65, 0x65, 0x50, 0x75, 0x62, 0x6c, 0x69, 0x63, 0x4b, 0x65, 0x79, 0x12, 0x18, 0x0a, 0x07, 0x6e,
0x6f, 0x64, 0x65, 0x49, 0x44, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0d, 0x52, 0x07, 0x6e, 0x6f,
0x64, 0x65, 0x49, 0x44, 0x73, 0x12, 0x2e, 0x0a, 0x08, 0x64, 0x65, 0x70, 0x6c, 0x6f, 0x79, 0x49,
0x44, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x12, 0x2e, 0x6e, 0x6f, 0x64, 0x65, 0x73, 0x61,
0x6c, 0x65, 0x2e, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x52, 0x08, 0x64, 0x65, 0x70,
0x6c, 0x6f, 0x79, 0x49, 0x44, 0x2a, 0x45, 0x0a, 0x06, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12,
0x11, 0x0a, 0x0d, 0x41, 0x43, 0x54, 0x49, 0x4f, 0x4e, 0x5f, 0x44, 0x45, 0x50, 0x4c, 0x4f, 0x59,
0x10, 0x00, 0x12, 0x13, 0x0a, 0x0f, 0x41, 0x43, 0x54, 0x49, 0x4f, 0x4e, 0x5f, 0x50, 0x55, 0x52,
0x43, 0x48, 0x41, 0x53, 0x45, 0x10, 0x01, 0x12, 0x13, 0x0a, 0x0f, 0x41, 0x43, 0x54, 0x49, 0x4f,
0x4e, 0x5f, 0x44, 0x45, 0x4c, 0x45, 0x47, 0x41, 0x54, 0x45, 0x10, 0x02, 0x42, 0x43, 0x5a, 0x41,
0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x61, 0x7a, 0x65, 0x2d,
0x6e, 0x65, 0x74, 0x77, 0x6f, 0x72, 0x6b, 0x2f, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x65, 0x72, 0x2d,
0x6e, 0x65, 0x74, 0x77, 0x6f, 0x72, 0x6b, 0x2f, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x73, 0x2f,
0x6e, 0x6f, 0x64, 0x65, 0x73, 0x61, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
0x66, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_modules_nodesale_protobuf_nodesale_proto_rawDescOnce sync.Once
file_modules_nodesale_protobuf_nodesale_proto_rawDescData = file_modules_nodesale_protobuf_nodesale_proto_rawDesc
)
func file_modules_nodesale_protobuf_nodesale_proto_rawDescGZIP() []byte {
file_modules_nodesale_protobuf_nodesale_proto_rawDescOnce.Do(func() {
file_modules_nodesale_protobuf_nodesale_proto_rawDescData = protoimpl.X.CompressGZIP(file_modules_nodesale_protobuf_nodesale_proto_rawDescData)
})
return file_modules_nodesale_protobuf_nodesale_proto_rawDescData
}
var file_modules_nodesale_protobuf_nodesale_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
var file_modules_nodesale_protobuf_nodesale_proto_msgTypes = make([]protoimpl.MessageInfo, 7)
var file_modules_nodesale_protobuf_nodesale_proto_goTypes = []interface{}{
	(Action)(0),             // 0: nodesale.Action
	(*NodeSaleEvent)(nil),   // 1: nodesale.NodeSaleEvent
	(*ActionDeploy)(nil),    // 2: nodesale.ActionDeploy
	(*Tier)(nil),            // 3: nodesale.Tier
	(*ActionPurchase)(nil),  // 4: nodesale.ActionPurchase
	(*PurchasePayload)(nil), // 5: nodesale.PurchasePayload
	(*ActionID)(nil),        // 6: nodesale.ActionID
	(*ActionDelegate)(nil),  // 7: nodesale.ActionDelegate
}
var file_modules_nodesale_protobuf_nodesale_proto_depIdxs = []int32{
	0, // 0: nodesale.NodeSaleEvent.action:type_name -> nodesale.Action
	2, // 1: nodesale.NodeSaleEvent.deploy:type_name -> nodesale.ActionDeploy
	4, // 2: nodesale.NodeSaleEvent.purchase:type_name -> nodesale.ActionPurchase
	7, // 3: nodesale.NodeSaleEvent.delegate:type_name -> nodesale.ActionDelegate
	3, // 4: nodesale.ActionDeploy.tiers:type_name -> nodesale.Tier
	5, // 5: nodesale.ActionPurchase.payload:type_name -> nodesale.PurchasePayload
	6, // 6: nodesale.PurchasePayload.deployID:type_name -> nodesale.ActionID
	6, // 7: nodesale.ActionDelegate.deployID:type_name -> nodesale.ActionID
	8, // [8:8] is the sub-list for method output_type
	8, // [8:8] is the sub-list for method input_type
	8, // [8:8] is the sub-list for extension type_name
	8, // [8:8] is the sub-list for extension extendee
	0, // [0:8] is the sub-list for field type_name
}

func init() { file_modules_nodesale_protobuf_nodesale_proto_init() }
func file_modules_nodesale_protobuf_nodesale_proto_init() {
	if File_modules_nodesale_protobuf_nodesale_proto != nil {
		return
	}
	if !protoimpl.UnsafeEnabled {
		file_modules_nodesale_protobuf_nodesale_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*NodeSaleEvent); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
		file_modules_nodesale_protobuf_nodesale_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*ActionDeploy); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
		file_modules_nodesale_protobuf_nodesale_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*Tier); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
		file_modules_nodesale_protobuf_nodesale_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*ActionPurchase); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
		file_modules_nodesale_protobuf_nodesale_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*PurchasePayload); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
		file_modules_nodesale_protobuf_nodesale_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*ActionID); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
		file_modules_nodesale_protobuf_nodesale_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*ActionDelegate); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
	}
	file_modules_nodesale_protobuf_nodesale_proto_msgTypes[0].OneofWrappers = []interface{}{}
	type x struct{}
	out := protoimpl.TypeBuilder{
		File: protoimpl.DescBuilder{
			GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
			RawDescriptor: file_modules_nodesale_protobuf_nodesale_proto_rawDesc,
			NumEnums:      1,
			NumMessages:   7,
			NumExtensions: 0,
			NumServices:   0,
		},
		GoTypes:           file_modules_nodesale_protobuf_nodesale_proto_goTypes,
		DependencyIndexes: file_modules_nodesale_protobuf_nodesale_proto_depIdxs,
		EnumInfos:         file_modules_nodesale_protobuf_nodesale_proto_enumTypes,
		MessageInfos:      file_modules_nodesale_protobuf_nodesale_proto_msgTypes,
	}.Build()
	File_modules_nodesale_protobuf_nodesale_proto = out.File
	file_modules_nodesale_protobuf_nodesale_proto_rawDesc = nil
	file_modules_nodesale_protobuf_nodesale_proto_goTypes = nil
	file_modules_nodesale_protobuf_nodesale_proto_depIdxs = nil
}


@@ -0,0 +1,60 @@
syntax = "proto3";

// protoc modules/nodesale/protobuf/nodesale.proto --go_out=. --go_opt=module=github.com/gaze-network/indexer-network
package nodesale;

option go_package = "github.com/gaze-network/indexer-network/modules/nodesale/protobuf";

enum Action {
  ACTION_DEPLOY = 0;
  ACTION_PURCHASE = 1;
  ACTION_DELEGATE = 2;
}

message NodeSaleEvent {
  Action action = 1;
  optional ActionDeploy deploy = 2;
  optional ActionPurchase purchase = 3;
  optional ActionDelegate delegate = 4;
}

message ActionDeploy {
  string name = 1;
  uint32 startsAt = 2;
  uint32 endsAt = 3;
  repeated Tier tiers = 4;
  string sellerPublicKey = 5;
  uint32 maxPerAddress = 6;
  uint32 maxDiscountPercentage = 7;
  string sellerWallet = 8;
}

message Tier {
  uint32 priceSat = 1;
  uint32 limit = 2;
  uint32 maxPerAddress = 3;
}

message ActionPurchase {
  PurchasePayload payload = 1;
  string sellerSignature = 2;
}

message PurchasePayload {
  ActionID deployID = 1;
  string buyerPublicKey = 2;
  repeated uint32 nodeIDs = 3;
  int64 totalAmountSat = 4;
  uint64 timeOutBlock = 5;
}

message ActionID {
  uint64 block = 1;
  uint32 txIndex = 2;
}

message ActionDelegate {
  string delegateePublicKey = 1;
  repeated uint32 nodeIDs = 2;
  ActionID deployID = 3;
}


@@ -0,0 +1,12 @@
package nodesale

import (
	"github.com/btcsuite/btcd/btcec/v2"
	"github.com/btcsuite/btcd/btcutil"
)

// PubkeyToPkHashAddress derives the P2PKH address of the given public key
// (compressed serialization) on the processor's configured network.
func (p *Processor) PubkeyToPkHashAddress(pubKey *btcec.PublicKey) btcutil.Address {
	addrPubKey, err := btcutil.NewAddressPubKey(pubKey.SerializeCompressed(), p.Network.ChainParams())
	if err != nil {
		// A 33-byte compressed key should always be accepted, but guard
		// anyway rather than dereference a nil address on failure.
		return nil
	}
	return addrPubKey.AddressPubKeyHash()
}
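Under the hood, a P2PKH address is Base58Check: a version byte, the 20-byte hash160 of the public key, and a 4-byte double-SHA256 checksum, all base58-encoded. The stdlib-only sketch below shows that final encoding step (the part `AddressPubKeyHash` delegates to btcutil's base58 package); `base58Check` is a hypothetical helper name, and the alphabet is the standard Bitcoin one.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/big"
)

const b58Alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

// base58Check encodes version || payload || checksum, the format used
// for P2PKH addresses (version 0x00 on mainnet).
func base58Check(version byte, payload []byte) string {
	data := append([]byte{version}, payload...)
	h1 := sha256.Sum256(data)
	h2 := sha256.Sum256(h1[:])
	data = append(data, h2[:4]...) // 4-byte double-SHA256 checksum

	// Repeated div-mod by 58 converts the byte string to base58 digits.
	n := new(big.Int).SetBytes(data)
	radix := big.NewInt(58)
	mod := new(big.Int)
	var out []byte
	for n.Sign() > 0 {
		n.DivMod(n, radix, mod)
		out = append([]byte{b58Alphabet[mod.Int64()]}, out...)
	}
	// Each leading zero byte is preserved as a literal '1'.
	for _, b := range data {
		if b != 0 {
			break
		}
		out = append([]byte{'1'}, out...)
	}
	return string(out)
}

func main() {
	// 20 zero bytes: the well-known all-zeros mainnet P2PKH address.
	fmt.Println(base58Check(0x00, make([]byte, 20)))
}
```

The leading-'1' loop matters: big.Int drops leading zero bytes, so without it the version byte (and any zero-prefixed hash) would be silently lost from the address.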
