fix: add destination to transfer events - release v1.0.2

* chore: stop auto-adding issues to DevTools Project (#170)

* fix: enable streaming for in-memory observers (#171)

* Squashed commit of the following:

commit 9862b71c34
Author: semantic-release-bot <semantic-release-bot@martynus.net>
Date:   Thu Sep 7 00:06:39 2023 +0000

    chore(release): 1.0.0 [skip ci]

    ## 1.0.0 (2023-09-07)

    ### Features

    * ability to control inclusion of inputs/outputs/proofs/witness ([daf5547](daf55476c9))
    * ability to download hord.sqlite ([3dafa53](3dafa53ac0))
    * ability to generate config ([9fda9d0](9fda9d0d34))
    * ability to replay inscriptions ([f1adca9](f1adca9b0f))
    * ability to resume ([6c7eaa3](6c7eaa3bee))
    * ability to target blocks ([f6be49e](f6be49e24d))
    * ability to tolerate corrupted data ([adb1b98](adb1b988a6))
    * ability to track updates when scanning bitcoin (+refactor) ([9e54bff](9e54bfff35))
    * ability to update stacks db from cli + fix caching logic ([3ea9f59](3ea9f597af))
    * add command to check stacks db integrity ([322f473](322f47343c))
    * add get block command to cli ([97de0b0](97de0b071b))
    * add log, fix ordinal transfers scan ([c4202da](c4202dad2c))
    * add logs ([473ddd0](473ddd0595))
    * add metrics to `/ping` response of event observer server ([#297](https://github.com/hirosystems/ordhook/issues/297)) ([0e1ee7c](0e1ee7c1ee)), closes [#285](https://github.com/hirosystems/ordhook/issues/285)
    * add option to skip chainhook node ping ([a7c0b12](a7c0b12ad9))
    * add options for logs ([917090b](917090b408))
    * add post_transfer_output_value ([4ce0e9e](4ce0e9e5db))
    * add retry ([117e41e](117e41eae8))
    * add shared cache ([07523ae](07523aed1a))
    * add support for bitcoin op DelegatedStacking ([6516155](6516155055))
    * add transfers table ([db14f60](db14f60347))
    * always try to initialize tables when starting service ([1a9eddb](1a9eddb6aa))
    * attempt to scale up multithreading ([be91202](be91202d6b))
    * attempt to support cursed inscriptions ([9b45f90](9b45f908b8))
    * attempt transition to lazy model ([dda0b03](dda0b03ea3))
    * batch ingestion, improve cleaning ([168162e](168162e0dd))
    * better handling of blessed inscription turning cursed ([f11509a](f11509ab97))
    * cascade changes in CLI interface ([24f27fe](24f27fea63))
    * cascade hord activation ([42c090b](42c090ba7e))
    * chainhook-sdk config niceties ([7d9e179](7d9e179464))
    * class interface ([9dfec45](9dfec454f5))
    * client draft ([6a6451c](6a6451c864))
    * complete migration to lazy blocks ([fa50584](fa5058471a))
    * disable certs ([389f77d](389f77d473))
    * draft naive inscription detection ([9b3e38a](9b3e38a441))
    * draft ordhook-sdk-js ([b264e72](b264e7281b))
    * draft sha256 verification (wip) ([e6f0619](e6f0619a7c))
    * drafting lazy deserialization ([eaa2f71](eaa2f71fce))
    * dry config ([135297e](135297e978))
    * expose `is_streaming_blocks` prop ([1ba27d7](1ba27d7459))
    * expose more functions for working with the indexer ([654fead](654feadbdf))
    * expose scanning status in GET endpoint ([156c463](156c463cc0))
    * expose transfers_pre_inscription ([65afd77](65afd77492))
    * fetch full bitcoin block, including witness data ([ee9a345](ee9a3452ac))
    * fix download block ([38b50df](38b50df7a1))
    * handle stacks unconfirmed state scans ([f6d050f](f6d050fbce))
    * handle transfer ([fd5da52](fd5da52df4))
    * HTTP responses adjustments ([51572ef](51572efd93))
    * implement and document new development flow ([66019a0](66019a06e7))
    * implement zmq runloop ([c6c1c0e](c6c1c0ecce))
    * import inscription parser ([45e0147](45e0147ecf))
    * improve cli ergonomics ([991e33f](991e33ff42))
    * improve cli experience ([e865628](e8656285b2))
    * improve debug log ([5df77d7](5df77d7f84))
    * improve hord db commands ([21c09c2](21c09c296f))
    * improve onboarding ([deaa739](deaa739bdd))
    * improve ordinal scan efficiency ([e510d4b](e510d4bd09))
    * improve README ([f30e6f4](f30e6f4ed5))
    * improve repair command convenience ([46be0ab](46be0ab5a7))
    * improving curse approach ([dcb8054](dcb805485f))
    * in-house thread pool ([bc5ffdd](bc5ffddb5b))
    * inscription replay speedup ([33a4f8b](33a4f8b6af))
    * introduce check command ([f17dc4c](f17dc4c343))
    * introduce evaluation reports ([54ad874](54ad874ee5))
    * introduce migration script ([8c2b16c](8c2b16cc48))
    * introduce new predicate + refactor schemas ([611c79c](611c79cee3))
    * introduce rocksdb storage for Stacks ([4564e88](4564e8818a))
    * introduce sync command ([ab022e6](ab022e6098))
    * introduce terminate function ([91616f6](91616f6531))
    * is_streaming_blocks ([aacf487](aacf487de6))
    * keep 1st tx in cache ([0978a5d](0978a5d4c1))
    * logic to start ingestion during indexing ([3c1c99d](3c1c99df5d))
    * merge "inscription_revealed" and "inscription_transferred" into "inscription_feed" ([741290d](741290de13))
    * migrate stacks scans to rocksdb ([4408b1e](4408b1e7ec))
    * migration to rocksdb, moving json parsing from networking thread ([5ad0147](5ad0147fa0))
    * move thread pool size to config ([bc313fa](bc313fad5c))
    * multithread traversals ([fba5c89](fba5c89a48))
    * number of retries from 4 to 3 ([b294dff](b294dff69a))
    * optimize memory ([5db1531](5db1531a3d))
    * optimize replay ([be26dac](be26daccd0))
    * ordinal inscription_transfer code complete ([f55a5ee](f55a5ee167))
    * plug inscription processing in ibd ([df36617](df36617214))
    * plumbing for ordhook-sdk-js ([7487589](74875896a3))
    * polish `hord find sat_point` command ([d071484](d0714842a2))
    * polish first impression ([3c2b00c](3c2b00ce38))
    * predicate schemas ([198cdaa](198cdaa6c8))
    * prototype warmup ([fa6c86f](fa6c86fb1f))
    * re-approach stacks block commit schema ([218d599](218d5998d6))
    * re-implement satoshi overflows handling ([8ea5bdf](8ea5bdf819))
    * re-introduce ingestion ([71c90d7](71c90d755d))
    * restore ability to replay transfers ([98e7e9b](98e7e9b21d))
    * return enable in api ([f39259c](f39259ceeb))
    * return local result when known ([5441851](5441851db7))
    * revisit caching strategy ([2705b95](2705b9501b))
    * revisit threading model ([05b6d5c](05b6d5c4d7))
    * scan inscription revealed ([84c5a0c](84c5a0c521))
    * scan inscription revealed ([644d515](644d5155d2))
    * share traversals_cache over 10 blocks spans ([b0378c3](b0378c3099))
    * simplify + improve coordination ([1922fd9](1922fd9bc4))
    * start investigating zmq signaling ([0ec2653](0ec265380c))
    * streamline processors ([13421db](13421db297))
    * support cursed inscriptions in chainhook client ([d7cc5a4](d7cc5a4410))
    * support for latest archives, add logs ([494cf3c](494cf3c9a5))
    * tweak mmap / page_size values ([5316a57](5316a575b0))
    * update chainhook-sdk ([f052e08](f052e08469))
    * update inscription transfer logic ([9d0d106](9d0d106e9c))
    * update inscription transfer schemas ([f80e983](f80e983481))
    * upgrade `service start` implementation + documentation ([02db65e](02db65e417))
    * use caching on streamed blocks ([784e9a0](784e9a0830))
    * use thread pools for scans ([45b9abd](45b9abd3e0))
    * zmq sockets ([d2e328a](d2e328aa57))

    ### Bug Fixes

    * ability to run without redis ([96825c3](96825c35a8))
    * add busy handler ([d712e0d](d712e0ddae))
    * add exp backoff ([f014c14](f014c14277))
    * add retry logic in rocksdb ([247df20](247df2088a))
    * add retry logic to work around unexpected responses from bitcoind ([2ab6b32](2ab6b32ff0))
    * additional adjustments ([fe26063](fe26063513))
    * additional fixes (network, address, offsets) ([8006000](8006000034))
    * address build warnings ([dc623a0](dc623a01e5))
    * address non-inscribed block case ([a7d08a3](a7d08a3722))
    * address redis disconnects ([a6b4a5f](a6b4a5fb38))
    * address remaining issues ([74b2fa9](74b2fa9411))
    * adjust error message ([3e7b0d0](3e7b0d03f9))
    * allow empty block ([fe8ce45](fe8ce455a1))
    * always fetch blocks ([97060a1](97060a13ca))
    * async/await regression ([676aac1](676aac196d))
    * attempt ([9e14fce](9e14fce0e4))
    * attempt to fix offset ([e6c5d0e](e6c5d0eed8))
    * attempt to retrieve blocks from iterator ([f718071](f718071b33))
    * attempt to tweak rocksdb ([11b9b6b](11b9b6be62))
    * auto enable stacks predicate ([30557f8](30557f8667))
    * backpressure on traversals ([3177e22](3177e22921))
    * batch inscription ([cd1085c](cd1085ceb0))
    * batch migration ([ed8b7ad](ed8b7ad2f3))
    * better redis error handling ([debb06c](debb06cd5c))
    * better support of reinscriptions ([a1410e2](a1410e29dd))
    * better termination ([8a5482c](8a5482c131))
    * binary name ([4950a50](4950a50381))
    * block streaming ([dcdfd16](dcdfd1655c))
    * boot sequence ([577f1c2](577f1c237e))
    * boot sequence, logs, format ([d03f851](d03f85178d))
    * borrow issue ([66e2a7c](66e2a7c785))
    * broken build ([f0d471e](f0d471ea8b))
    * broken test ([239b26a](239b26a614))
    * broken tests ([2ab6e7d](2ab6e7d679))
    * build ([4067f08](4067f0814f))
    * build ([607ac95](607ac953b1))
    * build error ([d6ed108](d6ed10894c))
    * build error ([bbede8b](bbede8b546))
    * build error ([fa802fa](fa802fae7a))
    * build error ([44ca74b](44ca74b2c5))
    * build error ([053b781](053b7815a8))
    * build error ([5c3bcf4](5c3bcf42fc))
    * build error ([b78c0cc](b78c0ccea6))
    * build error ([879ed67](879ed6775a))
    * build errors ([60cd4d0](60cd4d0c61))
    * build errors ([8dd91bf](8dd91bfce3))
    * build errors / merge snafu ([47da0c1](47da0c132a))
    * build errors + warnings ([938c6df](938c6dff27))
    * build failing ([83f1496](83f14964a6))
    * build warning ([561e51e](561e51eb27))
    * build warning ([75847df](75847df0d1))
    * build warning ([0194483](0194483b75))
    * build warnings ([d3e998c](d3e998c469))
    * build warnings ([e7ad175](e7ad175805))
    * build warnings ([670bde6](670bde6379))
    * bump incoming payload limit to 20mb ([7e15086](7e150861a4))
    * cache invalidation ([05bd903](05bd9035eb))
    * cache L2 capacity ([e2fbc73](e2fbc73eaf))
    * cache size ([ce61205](ce61205b96))
    * cache's ambitions ([e438db7](e438db7514))
    * Cargo.toml ([759c3a3](759c3a393f))
    * chain mixup, add logs ([0427a10](0427a10a63))
    * change forking behavior ([4c10014](4c100147c2))
    * clean expectations ([f9e089f](f9e089f90d))
    * clear cache more regularly ([c3b884f](c3b884fd30))
    * command for db patch ([27f6838](27f683818d))
    * commands doc ([3485e6f](3485e6f3d9))
    * compatibility with clarinet ([a282655](a28265509f))
    * condition ([0233dc5](0233dc5bf0))
    * create dummy inscription for sats overflow ([84aa6ce](84aa6ce7fd))
    * db init command ([55e293b](55e293b3ca))
    * decrease compression - from 4 bytes to 8 bytes ([b2eb314](b2eb31424b))
    * deployer predicate wildcard ([05ca395](05ca395da1))
    * disable sleep ([41ecace](41ecacee0e))
    * disable stream scan when scanning past blocks ([e2949d2](e2949d213a))
    * disambiguate inscription_output_value and inscription_fee ([9816cbb](9816cbb70a))
    * do not panic ([a0fa1a9](a0fa1a9301))
    * doc drift ([b595339](b595339024))
    * docker build ([df39302](df39302616))
    * docker file ([6ad5206](6ad52061eb))
    * dockerfile ([73ad612](73ad612ea4))
    * dockerfile ([da21ec4](da21ec4cb9))
    * documentation drift ([c5335a7](c5335a765c))
    * documentation drift ([38153ca](38153ca22f))
    * don't early exit when satoshi computation fails ([a8d76b0](a8d76b03ac))
    * don't enable predicate if error ([1274cbf](1274cbf9c4))
    * early return ([8f97b56](8f97b5643b))
    * edge case when requests processed in order ([8c4325f](8c4325f721))
    * edge case when requests processed out of order ([a35cea2](a35cea2b54))
    * edge case when requests processed out of order ([a6651b8](a6651b851f))
    * enable profiling ([f99b073](f99b073528))
    * enable specs on reboot ([f23be24](f23be246c2))
    * enforce db reconnection in http endpoints ([bcd2a45](bcd2a45a86))
    * enum serialization ([67cb340](67cb340674))
    * error management ([f0274f5](f0274f5726))
    * export all types on ts client ([be8bfbc](be8bfbcf60))
    * failing build ([1502d5d](1502d5d682))
    * fee ([0337f92](0337f92ce0))
    * filter out sat overflows from payloads ([ce439ae](ce439ae900))
    * gap in stacks scanning ([8c8c5c8](8c8c5c8611))
    * generator typo ([8a7eddb](8a7eddb092))
    * handle hint and case of re-inscriptions ([f86b184](f86b184832))
    * handle non-spending transaction ([cb01eb5](cb01eb55fd))
    * handle re-inscription for unbound inscriptions ([a1ffc1a](a1ffc1a59a))
    * hard coded dev-dependency ([5c105de](5c105de8b5))
    * ignore invalid inscription ([f18bc00](f18bc00f5a))
    * ignore transaction aborting that we could not classify ([37c80f7](37c80f7e83))
    * implement error handler ([d071b81](d071b81954))
    * improve progress bar ([b28da56](b28da5697d))
    * improve rewrite block command ([d524771](d52477142a))
    * in-block re-inscription case ([90db9c3](90db9c3d15))
    * include blocks discovered during scan, if any ([1eabce2](1eabce25c3))
    * include ordinals operations in standardized blocks ([a13351d](a13351d46a))
    * include proof on scan commands ([6574008](6574008ae8))
    * increase number of retries ([343ddd6](343ddd65a8))
    * indexing ([45661ab](45661ab62c))
    * inject l1 cache hit in results (+ clearing) ([62fd929](62fd92948e))
    * inscription fee ([2ac3022](2ac302235c))
    * inscription_number ([a7d8153](a7d8153a8c))
    * insert new locations ([6475aeb](6475aeb8d4))
    * iterate on values ([0c73e62](0c73e62902))
    * keep trying opening rocksdb conn if failing ([dbc794a](dbc794a0d4))
    * lazy block approach ([b567322](b567322859))
    * leader_registered doc ([f9d7370](f9d7370c43))
    * loading predicates from redis ([3bd308f](3bd308fb15))
    * log level, zeromq dependency ([4a2a6ef](4a2a6ef297))
    * logic determining start height ([5dd300f](5dd300fb05))
    * logs ([81be24e](81be24ef08))
    * mark inscriber_address as nullable ([77fd88b](77fd88b9c1))
    * more pessimism on retries ([9b987c5](9b987c51a9))
    * move parsing back to network thread ([bad1ee6](bad1ee6d4e))
    * moving stacks tip ([87c409e](87c409e01c))
    * multithreading cap ([c80ae60](c80ae60991))
    * myriad of improvements ([0633182](063318233d))
    * nefarious logs ([3b01a48](3b01a48f1e))
    * network, cascade changes ([1f45ec2](1f45ec26da))
    * off by one ([2a0e75f](2a0e75f6a3))
    * off by one ([c31611f](c31611fb28))
    * off by one ([94e1141](94e11411f8))
    * off by one ([abf70e7](abf70e7204))
    * off by one error ([3832cf9](3832cf9770))
    * off by one inscriptions number ([cdfbf48](cdfbf487fa))
    * off by one issue ([fead2ed](fead2ed693))
    * off by one issue ([a8988ba](a8988ba573))
    * off by one issue ([155e3a6](155e3a6d29))
    * off by one issue on sats overflow ([8a12004](8a120040e7))
    * off-by-one error in backward traversal ([d4128aa](d4128aa8a1))
    * off-by-one in sats number resolution ([42acbeb](42acbebcd5))
    * offset ([278a655](278a65524b))
    * only avoid override for blessed inscriptions ([b50bbc1](b50bbc1bf7))
    * optimize reg and dereg ([c2ec1b5](c2ec1b5283))
    * ordinals scans ([62b62bd](62b62bd98a))
    * outdated dockerfile ([771b036](771b0362b2))
    * outdated documentation ([f472a49](f472a49c42))
    * overridden inscriptions ([25c6441](25c6441404))
    * parsing ([1f047a9](1f047a9162))
    * patch absence of witness data ([f8fcfca](f8fcfcad6d))
    * patch boot latency ([0e3faf9](0e3faf9a61))
    * patch crash ([20d9df6](20d9df6c65))
    * patch db call ([d385df2](d385df2037))
    * pipeline logic ([a864c85](a864c85c33))
    * pipeline resuming ([06883c6](06883c655a))
    * ports ([3ee98a8](3ee98a8be9))
    * potential resolve coinbase spent ([5d26738](5d267380f7))
    * PoxInfo default for scan commands ([a00ccf5](a00ccf589a))
    * predicate documentation ([572cf20](572cf202ba))
    * predicate generator network ([8f0ae21](8f0ae216c8))
    * provide optional values ([2cbf87e](2cbf87ebcc))
    * re-apply initial fix ([f5cb516](f5cb516ee0))
    * re-arrange logs ([2857d0a](2857d0a1a4))
    * re-enable sleep ([0f61a26](0f61a26fda))
    * re-initiate inscriptions connection every 250 blocks ([39671f4](39671f4378))
    * re-qualify error to warn ([9431684](9431684afe))
    * re-wire cmd ([a1447ad](a1447ad277))
    * README ([db1d584](db1d584827))
    * recreate db conn on a regular basis ([81d8575](81d85759a4))
    * redis update ([d4889f1](d4889f16b7))
    * related issue ([4b3a0da](4b3a0daa43))
    * remove rocksdb reconnect ([f2b067e](f2b067e85e))
    * remove sleep ([c371e74](c371e74de7))
    * remove start logic ([a04711a](a04711ad7c))
    * remove support for p2wsh inscription reveals ([4fe71f2](4fe71f2622))
    * remove symbols ([108117b](108117b82e))
    * remove thread_max * 2 ([359c6f9](359c6f9422))
    * reopen connect on failures ([3e15da5](3e15da5565))
    * reply with 500 on payload processing error ([eaa6d7b](eaa6d7b640))
    * report generation ([0dce12a](0dce12a4e2))
    * restore stable values ([fb5c591](fb5c591943))
    * return blocks to rollback in reverse order ([9fab5a3](9fab5a34a2))
    * reuse existing computation for fix ([222f7c3](222f7c3a14))
    * revert fix, avoid collision in traversals map ([dfcadec](dfcadec680))
    * revisit log level ([4168661](416866123a))
    * revisit transfer loop ([1f2151c](1f2151c098))
    * rocket_okapi version ([2af31a8](2af31a8e64))
    * safer db open, dockerfile ([43d37d7](43d37d73f2))
    * safer error handling ([11509e4](11509e4435))
    * sat offset computation ([b278b66](b278b66f84))
    * sats overflow handling ([a3f745c](a3f745cfa7))
    * schema for curse_type ([72d43c6](72d43c6b41))
    * serialize handlers in one thread ([cdfc264](cdfc264cff))
    * slow down initial configuration ([3096ad3](3096ad3b26))
    * sql query ([1a3bc42](1a3bc428ea))
    * sql query bis ([a479884](a4798848b1))
    * sql request ([6345df2](6345df2652))
    * sql table setup ([c8884a7](c8884a7dbe))
    * stack overflow ([aed7d5d](aed7d5d005))
    * stacks predicate format ([fcf9fb0](fcf9fb0e3f))
    * start_block off by one ([b99f7b0](b99f7b0011))
    * streamline txid handling ([ad48351](ad48351044))
    * test suite ([c7672f9](c7672f91a1))
    * test warns and errors ([0887d6b](0887d6b8ca))
    * threading model ([c9c43ae](c9c43ae3e3))
    * threading model ([c2354fc](c2354fcacd))
    * track interrupted scans ([2b51dca](2b51dca8f3))
    * transaction type schema ([c35a737](c35a737ed2))
    * transfer recomputing commit ([3643636](364363680f))
    * transfer tracking ([0ea85e3](0ea85e3d20))
    * transfer tracking ([30f299e](30f299ef7c))
    * transfer tracking ([0cd29f5](0cd29f5925))
    * transfer tracking + empty blocks ([dc94875](dc948755b2))
    * traversals algo ([e8ee3ab](e8ee3ab036))
    * tweak rocksdb options ([a0a6950](a0a69502d8))
    * typo ([b0498bb](b0498bb048))
    * typo ([baa773f](baa773ff4d))
    * unexpected expectation ([7dd362b](7dd362b4f5))
    * unify rosetta operation schemas ([bf3216b](bf3216b100))
    * unused imports ([3aab402](3aab4022ab))
    * update chainhook schema ([4e82714](4e8271491b))
    * update inscription_number ([89b94e7](89b94e7d5d))
    * update license ([6ebeb77](6ebeb77d6a))
    * update rust version in docker build ([fab6f69](fab6f69df5))
    * update spec status ([e268925](e2689255b7))
    * update/pin dependencies ([#311](https://github.com/hirosystems/ordhook/issues/311)) ([f54b374](f54b374b24)), closes [#310](https://github.com/hirosystems/ordhook/issues/310)
    * use first input to stick with ord spec interpretation / implementation ([206678f](206678f0d1))
    * use rpc instead of rest ([1b18818](1b188182f1))
    * zeromq, subsidy issue ([dbca70c](dbca70c197))

    ### Reverts

    * Revert "chore: tmp patch" ([3e022ca](3e022ca322))

commit 4ef18d5b1e
Merge: d111c44 4cde5e8
Author: Scott McClellan <scott.mcclellan@gmail.com>
Date:   Wed Sep 6 18:44:26 2023 -0500

    Merge pull request #168 from hirosystems/develop

    Merge up `develop` to `main`

* fix: CI rust version mismatch, create empty db  (#173)

* fix: create db if it does not exist

* chore: update rust version

* chore: bump version to 1.0.1

* fix: service boot sequence (#175)

* fix: ci

* fix: initial flow (#178)

* chore: update chainhook-sdk + cascade changes

* fix: update archive url

* feat: only create rocksdb if sqlite present

* fix: use crossbeam channel instead of std

* fix: improve error message

* doc: update README

* fix: build warnings

* fix: block_archiving expiration

* fix: archive url

* fix: read content len from http header

* chore: untar sqlite file

* chore: bump versions

* fix: build error / warning

* change title (#182)

* Ordhook doc updates (#183)

* update copy

* add openapi reference file to ordhook docs for better context

* typo in reference link for bitcoind

* remove references to chainhook

* break out guides for scanning and streaming ordinal activities

* fix references to Ordhook.toml

* update content for each guide

* replace mentions of Chainhook

---------

Co-authored-by: Max Efremov <51917427+mefrem@users.noreply.github.com>

* provide a pointer to rust installation and next steps linking to guides (#184)

* update Ordhook.toml (#185)

* fix: grammar tweaks

grammar tweaks

* fix: grammar tweaks

Grammar tweaks

* fix: grammar updates

grammar updates

Co-authored-by: Ludo Galabru <ludo@hiro.so>

* doc: update getting-started

Co-authored-by: Ludo Galabru <ludo@hiro.so>

* doc: update overview.md

Updated grammar

Co-authored-by: Ludo Galabru <ludo@hiro.so>

* feat: ordhook-sdk-js refactoring (#186)

---------

Co-authored-by: Scott McClellan <scott.mcclellan@gmail.com>
Co-authored-by: Ryan <ryan.waits@gmail.com>
Co-authored-by: Max Efremov <51917427+mefrem@users.noreply.github.com>
Co-authored-by: max-crawford <102705427+max-crawford@users.noreply.github.com>
Commit 47f365eb47 (parent ac3915f035), authored by Ludo Galabru on 2023-11-01 20:47:00 -04:00 and committed by GitHub.
59 changed files with 6718 additions and 9321 deletions


@@ -4,36 +4,58 @@ on:
push:
branches:
- develop
- main
tags-ignore:
- "**"
- feat/ordhook-sdk-js
paths-ignore:
- "**/CHANGELOG.md"
- '**/CHANGELOG.md'
pull_request:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }} @ ${{ github.event.pull_request.head.label || github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
build-publish:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
with:
token: ${{ secrets.GH_TOKEN || secrets.GITHUB_TOKEN }}
fetch-depth: 0
persist-credentials: false
- name: Cache cargo
uses: actions/cache@v3
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Cargo test
run: |
rustup update
cargo test --all
RUST_BACKTRACE=1 cargo test --all -- --test-threads=1
build-publish:
runs-on: ubuntu-latest
needs: test
outputs:
docker_image_digest: ${{ steps.docker_push.outputs.digest }}
new_release_published: ${{ steps.semantic.outputs.new_release_published }}
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- name: Semantic Release
uses: cycjimmy/semantic-release-action@v3
uses: cycjimmy/semantic-release-action@v4
id: semantic
# Only run on non-PR events or only PRs that aren't from forks
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name == github.repository
env:
GITHUB_TOKEN: ${{ secrets.GH_TOKEN || secrets.GITHUB_TOKEN }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SEMANTIC_RELEASE_PACKAGE: ${{ github.event.repository.name }}
with:
semantic_version: 19
@@ -42,15 +64,21 @@ jobs:
@semantic-release/git@10.0.1
conventional-changelog-conventionalcommits@6.1.0
- name: Checkout tag
if: steps.semantic.outputs.new_release_version != ''
uses: actions/checkout@v4
with:
persist-credentials: false
ref: v${{ steps.semantic.outputs.new_release_version }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
uses: docker/setup-buildx-action@v3
- name: Docker Meta
id: meta
uses: docker/metadata-action@v4
uses: docker/metadata-action@v5
with:
images: |
blockstack/${{ github.event.repository.name }}
hirosystems/${{ github.event.repository.name }}
tags: |
type=ref,event=branch
@@ -59,18 +87,134 @@ jobs:
type=semver,pattern={{major}}.{{minor}},value=${{ steps.semantic.outputs.new_release_version }},enable=${{ steps.semantic.outputs.new_release_version != '' }}
type=raw,value=latest,enable={{is_default_branch}}
- name: Login to DockerHub
uses: docker/login-action@v2
- name: Log in to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build/Tag/Push Image
uses: docker/build-push-action@v2
- name: Build/Push Image
uses: docker/build-push-action@v5
id: docker_push
with:
context: .
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
file: ./dockerfiles/components/ordhook.dockerfile
cache-from: type=gha
cache-to: type=gha,mode=max
# Only push if (there's a new release on main branch, or if building a non-main branch) and (Only run on non-PR events or only PRs that aren't from forks)
push: ${{ (github.ref != 'refs/heads/master' || steps.semantic.outputs.new_release_version != '') && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name == github.repository) }}
push: ${{ (github.ref != 'refs/heads/main' || steps.semantic.outputs.new_release_version != '') && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name == github.repository) }}
deploy-dev:
runs-on: ubuntu-latest
strategy:
matrix:
k8s-env: [mainnet]
needs: build-publish
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name == github.repository
env:
DEPLOY_ENV: dev
environment:
name: Development-${{ matrix.k8s-env }}
url: https://platform.dev.hiro.so/
steps:
- name: Checkout actions repo
uses: actions/checkout@v4
with:
ref: main
token: ${{ secrets.GH_TOKEN }}
repository: ${{ secrets.DEVOPS_ACTIONS_REPO }}
- name: Deploy Ordhook build to Dev ${{ matrix.k8s-env }}
uses: ./actions/deploy
with:
docker_tag: ${{ needs.build-publish.outputs.docker_image_digest }}
file_pattern: manifests/bitcoin/${{ matrix.k8s-env }}/ordhook/${{ env.DEPLOY_ENV }}/base/kustomization.yaml
gh_token: ${{ secrets.GH_TOKEN }}
auto-approve-dev:
runs-on: ubuntu-latest
if: needs.build-publish.outputs.new_release_published == 'true' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name == github.repository)
needs: build-publish
steps:
- name: Approve pending deployments
run: |
sleep 5
ENV_IDS=$(curl -s -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/hirosystems/ordhook/actions/runs/${{ github.run_id }}/pending_deployments" | jq -r '[.[].environment.id // empty]')
if [[ "${ENV_IDS}" != "[]" ]]; then
curl -s -X POST -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/hirosystems/ordhook/actions/runs/${{ github.run_id }}/pending_deployments" -d "{\"environment_ids\":${ENV_IDS},\"state\":\"approved\",\"comment\":\"auto approve\"}"
fi
deploy-staging:
runs-on: ubuntu-latest
strategy:
matrix:
k8s-env: [mainnet]
needs:
- build-publish
- deploy-dev
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name == github.repository
env:
DEPLOY_ENV: stg
environment:
name: Staging-${{ matrix.k8s-env }}
url: https://platform.stg.hiro.so/
steps:
- name: Checkout actions repo
uses: actions/checkout@v4
with:
ref: main
token: ${{ secrets.GH_TOKEN }}
repository: ${{ secrets.DEVOPS_ACTIONS_REPO }}
- name: Deploy Chainhook build to Stg ${{ matrix.k8s-env }}
uses: ./actions/deploy
with:
docker_tag: ${{ needs.build-publish.outputs.docker_image_digest }}
file_pattern: manifests/bitcoin/${{ matrix.k8s-env }}/ordhook/${{ env.DEPLOY_ENV }}/base/kustomization.yaml
gh_token: ${{ secrets.GH_TOKEN }}
auto-approve-stg:
runs-on: ubuntu-latest
if: needs.build-publish.outputs.new_release_published == 'true' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name == github.repository)
needs:
- build-publish
- deploy-dev
steps:
- name: Approve pending deployments
run: |
sleep 5
ENV_IDS=$(curl -s -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/hirosystems/ordhook/actions/runs/${{ github.run_id }}/pending_deployments" | jq -r '[.[].environment.id // empty]')
if [[ "${ENV_IDS}" != "[]" ]]; then
curl -s -X POST -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/hirosystems/ordhook/actions/runs/${{ github.run_id }}/pending_deployments" -d "{\"environment_ids\":${ENV_IDS},\"state\":\"approved\",\"comment\":\"auto approve\"}"
fi
deploy-prod:
runs-on: ubuntu-latest
strategy:
matrix:
k8s-env: [mainnet,testnet]
needs:
- build-publish
- deploy-staging
if: needs.build-publish.outputs.new_release_published == 'true' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name == github.repository)
env:
DEPLOY_ENV: prd
environment:
name: Production-${{ matrix.k8s-env }}
url: https://platform.hiro.so/
steps:
- name: Checkout actions repo
uses: actions/checkout@v4
with:
ref: main
token: ${{ secrets.GH_TOKEN }}
repository: ${{ secrets.DEVOPS_ACTIONS_REPO }}
- name: Deploy Ordhook build to Prd ${{ matrix.k8s-env }}
uses: ./actions/deploy
with:
docker_tag: ${{ needs.build-publish.outputs.docker_image_digest }}
file_pattern: manifests/bitcoin/${{ matrix.k8s-env }}/ordhook/${{ env.DEPLOY_ENV }}/base/kustomization.yaml
gh_token: ${{ secrets.GH_TOKEN }}

.github/workflows/ordhook-sdk-js.yml (new file, +518 lines)

@@ -0,0 +1,518 @@
name: ordhook-sdk-js
env:
DEBUG: napi:*
APP_NAME: ordhook-sdk-js
COMPONENT_PATH: components/ordhook-sdk-js
MACOSX_DEPLOYMENT_TARGET: '13.0'
permissions:
contents: write
id-token: write
'on':
push:
branches:
- feat/ordhook-sdk-js
tags-ignore:
- '**'
paths-ignore:
- '**/*.md'
- LICENSE
- '**/*.gitignore'
- .editorconfig
- docs/**
pull_request: null
jobs:
build:
strategy:
fail-fast: false
matrix:
settings:
- host: macos-latest
target: x86_64-apple-darwin
build: |
yarn build
strip -x *.node
# - host: windows-latest
# build: yarn build
# target: x86_64-pc-windows-msvc
# - host: windows-latest
# build: |
# rustup target add i686-pc-windows-msvc
# yarn build --target i686-pc-windows-msvc
# target: i686-pc-windows-msvc
- host: ubuntu-latest
target: x86_64-unknown-linux-gnu
docker: ghcr.io/napi-rs/napi-rs/nodejs-rust:lts-debian
build: |-
sudo apt-get install libssl-dev &&
set -e &&
yarn --cwd components/ordhook-sdk-js build --target x86_64-unknown-linux-gnu &&
strip -x components/ordhook-sdk-js/*.node
# - host: ubuntu-latest
# target: x86_64-unknown-linux-musl
# docker: ghcr.io/napi-rs/napi-rs/nodejs-rust:lts-alpine
# build: set -e && yarn --cwd components/ordhook-sdk-js build && strip components/ordhook-sdk-js/*.node
- host: macos-latest
target: aarch64-apple-darwin
build: |
rustup target add aarch64-apple-darwin
yarn build --target aarch64-apple-darwin
strip -x *.node
# - host: ubuntu-latest
# target: aarch64-unknown-linux-gnu
# docker: ghcr.io/napi-rs/napi-rs/nodejs-rust:lts-debian-aarch64
# build: |-
# sudo apt-get install libssl-dev &&
# set -e &&
# rustup target add aarch64-unknown-linux-gnu &&
# yarn --cwd components/ordhook-sdk-js build --target aarch64-unknown-linux-gnu &&
# aarch64-unknown-linux-gnu-strip components/ordhook-sdk-js/*.node
# - host: ubuntu-latest
# target: armv7-unknown-linux-gnueabihf
# setup: |
# sudo apt-get update
# sudo apt-get install gcc-arm-linux-gnueabihf -y
# build: |
# rustup target add armv7-unknown-linux-gnueabihf
# yarn --cwd components/ordhook-sdk-js build --target armv7-unknown-linux-gnueabihf
# arm-linux-gnueabihf-strip components/ordhook-sdk-js/*.node
# - host: ubuntu-latest
# target: aarch64-unknown-linux-musl
# docker: ghcr.io/napi-rs/napi-rs/nodejs-rust:lts-alpine
# build: |-
# set -e &&
# rustup target add aarch64-unknown-linux-musl &&
# yarn --cwd components/ordhook-sdk-js build --target aarch64-unknown-linux-musl &&
# /aarch64-linux-musl-cross/bin/aarch64-linux-musl-strip components/ordhook-sdk-js/*.node
# - host: windows-latest
# target: aarch64-pc-windows-msvc
# build: |-
# rustup target add aarch64-pc-windows-msvc
# yarn build --target aarch64-pc-windows-msvc
name: stable - ${{ matrix.settings.target }} - node@18
runs-on: ${{ matrix.settings.host }}
defaults:
run:
working-directory: ./components/ordhook-sdk-js
steps:
- uses: actions/checkout@v3
- name: Setup node
uses: actions/setup-node@v3
if: ${{ !matrix.settings.docker }}
with:
node-version: 18
check-latest: true
cache: yarn
cache-dependency-path: ./components/ordhook-sdk-js/yarn.lock
- name: Install
uses: dtolnay/rust-toolchain@stable
if: ${{ !matrix.settings.docker }}
with:
toolchain: stable
targets: ${{ matrix.settings.target }}
- name: Cache cargo
uses: actions/cache@v3
with:
path: |
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
.cargo-cache
target/
key: ${{ matrix.settings.target }}-cargo-${{ matrix.settings.host }}
# - uses: goto-bus-stop/setup-zig@v2
# if: ${{ matrix.settings.target == 'armv7-unknown-linux-gnueabihf' }}
# with:
# version: 0.10.1
- name: Setup toolchain
run: ${{ matrix.settings.setup }}
if: ${{ matrix.settings.setup }}
shell: bash
# - name: Setup node x86
# if: matrix.settings.target == 'i686-pc-windows-msvc'
# run: yarn config set supportedArchitectures.cpu "ia32"
# shell: bash
- name: Install dependencies
run: yarn install
# - name: Setup node x86
# uses: actions/setup-node@v3
# if: matrix.settings.target == 'i686-pc-windows-msvc'
# with:
# node-version: 18
# check-latest: true
# cache: yarn
# cache-dependency-path: ./components/ordhook-sdk-js/yarn.lock
# architecture: x86
- name: Build in docker
uses: addnab/docker-run-action@v3
if: ${{ matrix.settings.docker }}
with:
image: ${{ matrix.settings.docker }}
options: '--user 0:0 -v ${{ github.workspace }}/.cargo-cache/git/db:/usr/local/cargo/git/db -v ${{ github.workspace }}/.cargo/registry/cache:/usr/local/cargo/registry/cache -v ${{ github.workspace }}/.cargo/registry/index:/usr/local/cargo/registry/index -v ${{ github.workspace }}:/build -w /build'
run: ${{ matrix.settings.build }}
- name: Build
run: ${{ matrix.settings.build }}
if: ${{ !matrix.settings.docker }}
shell: bash
- name: Upload artifact
uses: actions/upload-artifact@v3
with:
name: bindings-${{ matrix.settings.target }}
path: ${{ env.COMPONENT_PATH }}/${{ env.APP_NAME }}.*.node
if-no-files-found: error
# build-freebsd:
# runs-on: macos-12
# name: Build FreeBSD
# defaults:
# run:
# working-directory: ./components/ordhook-sdk-js
# steps:
# - uses: actions/checkout@v3
# - name: Build
# id: build
# uses: vmactions/freebsd-vm@v0
# env:
# DEBUG: napi:*
# RUSTUP_HOME: /usr/local/rustup
# CARGO_HOME: /usr/local/cargo
# RUSTUP_IO_THREADS: 1
# with:
# envs: DEBUG RUSTUP_HOME CARGO_HOME RUSTUP_IO_THREADS
# usesh: true
# mem: 3000
# prepare: |
# pkg install -y -f curl node libnghttp2 npm yarn
# curl https://sh.rustup.rs -sSf --output rustup.sh
# sh rustup.sh -y --profile minimal --default-toolchain beta
# export PATH="/usr/local/cargo/bin:$PATH"
# echo "~~~~ rustc --version ~~~~"
# rustc --version
# echo "~~~~ node -v ~~~~"
# node -v
# echo "~~~~ yarn --version ~~~~"
# yarn --version
# run: |
# export PATH="/usr/local/cargo/bin:$PATH"
# pwd
# ls -lah
# whoami
# env
# freebsd-version
# cd ./components/ordhook-sdk-js
# yarn install
# yarn build
# strip -x *.node
# yarn test
# rm -rf node_modules
# rm -rf target
# rm -rf .yarn/cache
# - name: Upload artifact
# uses: actions/upload-artifact@v3
# with:
# name: bindings-freebsd
# path: ${{ env.COMPONENT_PATH }}/${{ env.APP_NAME }}.*.node
# if-no-files-found: error
test-macOS-binding:
name: Test bindings on ${{ matrix.settings.target }} - node@${{ matrix.node }}
needs:
- build
strategy:
fail-fast: false
matrix:
settings:
- host: macos-latest
target: x86_64-apple-darwin
# - host: windows-latest
# target: x86_64-pc-windows-msvc
node:
- '14'
- '16'
- '18'
runs-on: ${{ matrix.settings.host }}
steps:
- uses: actions/checkout@v3
- name: Setup node
uses: actions/setup-node@v3
with:
node-version: ${{ matrix.node }}
check-latest: true
cache: yarn
cache-dependency-path: ./components/ordhook-sdk-js/yarn.lock
- name: Install dependencies
run: yarn install
- name: Download artifacts
uses: actions/download-artifact@v3
with:
name: bindings-${{ matrix.settings.target }}
path: .
- name: List packages
run: ls -R .
shell: bash
test-linux-x64-gnu-binding:
name: Test bindings on Linux-x64-gnu - node@${{ matrix.node }}
needs:
- build
strategy:
fail-fast: false
matrix:
node:
- '14'
- '16'
- '18'
runs-on: ubuntu-latest
defaults:
run:
working-directory: ./components/ordhook-sdk-js
steps:
- uses: actions/checkout@v3
- name: Setup node
uses: actions/setup-node@v3
with:
node-version: ${{ matrix.node }}
check-latest: true
cache: yarn
cache-dependency-path: ./components/ordhook-sdk-js/yarn.lock
- name: Install dependencies
run: yarn install
- name: Download artifacts
uses: actions/download-artifact@v3
with:
name: bindings-x86_64-unknown-linux-gnu
path: .
- name: List packages
run: ls -R .
shell: bash
# - name: Test bindings
# run: docker run --rm -v $(pwd):/build -w /build node:${{ matrix.node }}-slim yarn test
# test-linux-x64-musl-binding:
# name: Test bindings on x86_64-unknown-linux-musl - node@${{ matrix.node }}
# needs:
# - build
# strategy:
# fail-fast: false
# matrix:
# node:
# - '14'
# - '16'
# - '18'
# runs-on: ubuntu-latest
# steps:
# - uses: actions/checkout@v3
# - name: Setup node
# uses: actions/setup-node@v3
# with:
# node-version: ${{ matrix.node }}
# check-latest: true
# cache: yarn
# cache-dependency-path: ./components/ordhook-sdk-js/yarn.lock
# - name: Install dependencies
# run: |
# yarn config set supportedArchitectures.libc "musl"
# yarn install
# - name: Download artifacts
# uses: actions/download-artifact@v3
# with:
# name: bindings-x86_64-unknown-linux-musl
# path: .
# - name: List packages
# run: ls -R .
# shell: bash
# - name: Test bindings
# run: docker run --rm -v $(pwd):/build -w /build node:${{ matrix.node }}-alpine yarn test
# test-linux-aarch64-gnu-binding:
# name: Test bindings on aarch64-unknown-linux-gnu - node@${{ matrix.node }}
# needs:
# - build
# strategy:
# fail-fast: false
# matrix:
# node:
# - '14'
# - '16'
# - '18'
# runs-on: ubuntu-latest
# steps:
# - uses: actions/checkout@v3
# - name: Download artifacts
# uses: actions/download-artifact@v3
# with:
# name: bindings-aarch64-unknown-linux-gnu
# path: .
# - name: List packages
# run: ls -R .
# shell: bash
# - name: Install dependencies
# run: |
# yarn config set supportedArchitectures.cpu "arm64"
# yarn config set supportedArchitectures.libc "glibc"
# yarn install
# - name: Set up QEMU
# uses: docker/setup-qemu-action@v2
# with:
# platforms: arm64
# - run: docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# - name: Setup and run tests
# uses: addnab/docker-run-action@v3
# with:
# image: node:${{ matrix.node }}-slim
# options: '--platform linux/arm64 -v ${{ github.workspace }}:/build -w /build'
# run: |
# set -e
# yarn test
# ls -la
# test-linux-aarch64-musl-binding:
# name: Test bindings on aarch64-unknown-linux-musl - node@${{ matrix.node }}
# needs:
# - build
# runs-on: ubuntu-latest
# steps:
# - uses: actions/checkout@v3
# - name: Download artifacts
# uses: actions/download-artifact@v3
# with:
# name: bindings-aarch64-unknown-linux-musl
# path: .
# - name: List packages
# run: ls -R .
# shell: bash
# - name: Install dependencies
# run: |
# yarn config set supportedArchitectures.cpu "arm64"
# yarn config set supportedArchitectures.libc "musl"
# yarn install
# - name: Set up QEMU
# uses: docker/setup-qemu-action@v2
# with:
# platforms: arm64
# - run: docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# - name: Setup and run tests
# uses: addnab/docker-run-action@v3
# with:
# image: node:lts-alpine
# options: '--platform linux/arm64 -v ${{ github.workspace }}:/build -w /build'
# run: |
# set -e
# yarn test
# test-linux-arm-gnueabihf-binding:
# name: Test bindings on armv7-unknown-linux-gnueabihf - node@${{ matrix.node }}
# needs:
# - build
# strategy:
# fail-fast: false
# matrix:
# node:
# - '14'
# - '16'
# - '18'
# runs-on: ubuntu-latest
# steps:
# - uses: actions/checkout@v3
# - name: Download artifacts
# uses: actions/download-artifact@v3
# with:
# name: bindings-armv7-unknown-linux-gnueabihf
# path: .
# - name: List packages
# run: ls -R .
# shell: bash
# - name: Install dependencies
# run: |
# yarn config set supportedArchitectures.cpu "arm"
# yarn install
# - name: Set up QEMU
# uses: docker/setup-qemu-action@v2
# with:
# platforms: arm
# - run: docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# - name: Setup and run tests
# uses: addnab/docker-run-action@v3
# with:
# image: node:${{ matrix.node }}-bullseye-slim
# options: '--platform linux/arm/v7 -v ${{ github.workspace }}:/build -w /build'
# run: |
# set -e
# yarn test
# ls -la
universal-macOS:
name: Build universal macOS binary
needs:
- build
runs-on: macos-latest
steps:
- uses: actions/checkout@v3
- name: Setup node
uses: actions/setup-node@v3
with:
node-version: 18
check-latest: true
cache: yarn
cache-dependency-path: ./components/ordhook-sdk-js/yarn.lock
- name: Install dependencies
run: yarn --cwd components/ordhook-sdk-js install
- name: Download macOS x64 artifact
uses: actions/download-artifact@v3
with:
name: bindings-x86_64-apple-darwin
path: components/ordhook-sdk-js/artifacts
- name: Download macOS arm64 artifact
uses: actions/download-artifact@v3
with:
name: bindings-aarch64-apple-darwin
path: components/ordhook-sdk-js/artifacts
- name: Combine binaries
run: yarn --cwd components/ordhook-sdk-js universal
- name: Upload artifact
uses: actions/upload-artifact@v3
with:
name: bindings-universal-apple-darwin
path: ${{ env.COMPONENT_PATH }}/${{ env.APP_NAME }}.*.node
if-no-files-found: error
publish:
name: Publish
runs-on: ubuntu-latest
needs:
# - build-freebsd
- test-macOS-binding
- test-linux-x64-gnu-binding
# - test-linux-x64-musl-binding
# - test-linux-aarch64-gnu-binding
# - test-linux-aarch64-musl-binding
# - test-linux-arm-gnueabihf-binding
- universal-macOS
steps:
- uses: actions/checkout@v3
- name: Setup node
uses: actions/setup-node@v3
with:
node-version: 18
check-latest: true
cache: yarn
cache-dependency-path: ./components/ordhook-sdk-js/yarn.lock
- name: Install dependencies
run: yarn --cwd components/ordhook-sdk-js install
- name: Download all artifacts
uses: actions/download-artifact@v3
with:
path: artifacts
- name: Move artifacts
run: yarn --cwd components/ordhook-sdk-js artifacts
- name: List packages
run: ls -R components/ordhook-sdk-js/./npm
shell: bash
- name: Publish
run: |
cd components/ordhook-sdk-js
npm config set provenance true
if git log -1 --pretty=%B | grep "^[0-9]\+\.[0-9]\+\.[0-9]\+$";
then
echo "//registry.npmjs.org/:_authToken=$NPM_TOKEN" >> ~/.npmrc
npm publish --access public
elif git log -1 --pretty=%B | grep "^[0-9]\+\.[0-9]\+\.[0-9]\+";
then
echo "//registry.npmjs.org/:_authToken=$NPM_TOKEN" >> ~/.npmrc
npm publish --tag next --access public
else
echo "Not a release, skipping publish"
fi
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

.gitignore (+198 lines)

@@ -22,3 +22,201 @@ components/chainhook-types-js/dist
cache/
./tests
tmp/
# Created by https://www.toptal.com/developers/gitignore/api/node
# Edit at https://www.toptal.com/developers/gitignore?templates=node
### Node ###
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
# Diagnostic reports (https://nodejs.org/api/report.html)
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
*.lcov
# nyc test coverage
.nyc_output
# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# Bower dependency directory (https://bower.io/)
bower_components
# node-waf configuration
.lock-wscript
# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release
# Dependency directories
node_modules/
jspm_packages/
# TypeScript v1 declaration files
typings/
# TypeScript cache
*.tsbuildinfo
# Optional npm cache directory
.npm
# Optional eslint cache
.eslintcache
# Microbundle cache
.rpt2_cache/
.rts2_cache_cjs/
.rts2_cache_es/
.rts2_cache_umd/
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# dotenv environment variables file
.env
.env.test
# parcel-bundler cache (https://parceljs.org/)
.cache
# Next.js build output
.next
# Nuxt.js build / generate output
.nuxt
dist
# Gatsby files
.cache/
# Comment in the public line in if your project uses Gatsby and not Next.js
# https://nextjs.org/blog/next-9-1#public-directory-support
# public
# vuepress build output
.vuepress/dist
# Serverless directories
.serverless/
# FuseBox cache
.fusebox/
# DynamoDB Local files
.dynamodb/
# TernJS port file
.tern-port
# Stores VSCode versions used for testing VSCode extensions
.vscode-test
# End of https://www.toptal.com/developers/gitignore/api/node
# Created by https://www.toptal.com/developers/gitignore/api/macos
# Edit at https://www.toptal.com/developers/gitignore?templates=macos
### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride
# Icon must end with two
Icon
# Thumbnails
._*
# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent
# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
### macOS Patch ###
# iCloud generated files
*.icloud
# End of https://www.toptal.com/developers/gitignore/api/macos
# Created by https://www.toptal.com/developers/gitignore/api/windows
# Edit at https://www.toptal.com/developers/gitignore?templates=windows
### Windows ###
# Windows thumbnail cache files
Thumbs.db
Thumbs.db:encryptable
ehthumbs.db
ehthumbs_vista.db
# Dump file
*.stackdump
# Folder config file
[Dd]esktop.ini
# Recycle Bin used on file shares
$RECYCLE.BIN/
# Windows Installer files
*.cab
*.msi
*.msix
*.msm
*.msp
# Windows shortcuts
*.lnk
# End of https://www.toptal.com/developers/gitignore/api/windows
#Added by cargo
/target
Cargo.lock
.pnp.*
.yarn/*
!.yarn/patches
!.yarn/plugins
!.yarn/releases
!.yarn/sdks
!.yarn/versions
*.node

Cargo.lock (generated, 683 lines changed; diff suppressed because it is too large)

@@ -14,7 +14,7 @@ num_cpus = "1.16.0"
serde = "1"
serde_json = "1"
serde_derive = "1"
reqwest = { version = "0.11", features = ["stream", "json"] }
reqwest = { version = "0.11", default-features = false, features = ["stream", "json", "rustls-tls"] }
hiro-system-kit = "0.3.1"
clap = { version = "3.2.23", features = ["derive"], optional = true }
clap_generate = { version = "3.0.3", optional = true }


@@ -108,6 +108,9 @@ struct ScanBlocksCommand {
/// HTTP Post activity to a URL
#[clap(long = "post-to")]
pub post_to: Option<String>,
/// HTTP Auth token
#[clap(long = "auth-token")]
pub auth_token: Option<String>,
}
#[derive(Parser, PartialEq, Clone, Debug)]
@@ -284,6 +287,9 @@ struct StartCommand {
/// Block height where ordhook will start posting Ordinals activities
#[clap(long = "start-at-block")]
pub start_at_block: Option<u64>,
/// HTTP Auth token
#[clap(long = "auth-token")]
pub auth_token: Option<String>,
}
#[derive(Subcommand, PartialEq, Clone, Debug)]
@@ -499,9 +505,15 @@ async fn handle_command(opts: Opts, ctx: &Context) -> Result<(), String> {
&post_to,
cmd.start_block,
Some(cmd.end_block),
cmd.auth_token,
)?
.into_selected_network_specification(&config.network.bitcoin_network)?;
scan_bitcoin_chainstate_via_rpc_using_predicate(&predicate_spec, &config, &ctx)
scan_bitcoin_chainstate_via_rpc_using_predicate(
&predicate_spec,
&config,
None,
&ctx,
)
.await?;
} else {
let _ = download_ordinals_dataset_if_required(&config, ctx).await;
@@ -635,7 +647,13 @@ async fn handle_command(opts: Opts, ctx: &Context) -> Result<(), String> {
let mut predicates = vec![];
for post_to in cmd.post_to.iter() {
let predicate = build_predicate_from_cli(&config, post_to, start_block, None)?;
let predicate = build_predicate_from_cli(
&config,
post_to,
start_block,
None,
cmd.auth_token.clone(),
)?;
predicates.push(ChainhookFullSpecification::Bitcoin(predicate));
}
@@ -830,6 +848,7 @@ pub fn build_predicate_from_cli(
post_to: &str,
start_block: u64,
end_block: Option<u64>,
auth_token: Option<String>,
) -> Result<BitcoinChainhookFullSpecification, String> {
let mut networks = BTreeMap::new();
// Retrieve last block height known, and display it
@@ -847,7 +866,7 @@ pub fn build_predicate_from_cli(
predicate: BitcoinPredicateType::OrdinalsProtocol(OrdinalOperations::InscriptionFeed),
action: HookAction::HttpPost(HttpHook {
url: post_to.to_string(),
authorization_header: "".to_string(),
authorization_header: format!("Bearer {}", auth_token.unwrap_or("".to_string())),
}),
},
);
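A hedged restatement of the header construction above: when `--auth-token` is omitted, the predicate still carries a `Bearer ` header with an empty token rather than no header at all.

```rust
// Mirrors the format! call in the hunk above; illustrative, not the exact CLI code.
fn authorization_header(auth_token: Option<String>) -> String {
    format!("Bearer {}", auth_token.unwrap_or("".to_string()))
}

fn main() {
    assert_eq!(authorization_header(Some("s3cr3t".into())), "Bearer s3cr3t");
    assert_eq!(authorization_header(None), "Bearer "); // degenerate but harmless
}
```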


@@ -12,10 +12,10 @@ redis = "0.21.5"
serde-redis = "0.12.0"
hex = "0.4.3"
rand = "0.8.5"
chainhook-sdk = { version = "=0.9.5", default-features = false, features = ["zeromq", "log"] }
# chainhook-sdk = { version = "=0.9.0", path = "../../../chainhook/components/chainhook-sdk", default-features = false, features = ["zeromq", "log"] }
chainhook-sdk = { version = "=0.10.5", features = ["zeromq"] }
# chainhook-sdk = { version = "=0.10.1", path = "../../../chainhook/components/chainhook-sdk", default-features = false, features = ["zeromq", "log"] }
hiro-system-kit = "0.3.1"
reqwest = { version = "0.11", features = ["stream", "json"] }
reqwest = { version = "0.11", default-features = false, features = ["stream", "json", "rustls-tls"] }
tokio = { version = "=1.24", features = ["full"] }
futures-util = "0.3.24"
flate2 = "1.0.24"
@@ -33,7 +33,7 @@ fxhash = "0.2.1"
rusqlite = { version = "0.27.0", features = ["bundled"] }
anyhow = { version = "1.0.56", features = ["backtrace"] }
schemars = { version = "0.8.10", git = "https://github.com/hirosystems/schemars.git", branch = "feat-chainhook-fixes" }
pprof = { version = "0.12", features = ["flamegraph"] }
pprof = { version = "0.13.0", features = ["flamegraph"], optional = true }
progressing = '3'
futures = "0.3.28"
@@ -46,5 +46,5 @@ features = ["lz4", "snappy"]
# debug = true
[features]
debug = ["hiro-system-kit/debug"]
debug = ["hiro-system-kit/debug", "pprof"]
release = ["hiro-system-kit/release"]


@@ -94,16 +94,14 @@ pub fn compute_next_satpoint_data(
SatPosition::Output((output_index, (offset_cross_inputs - offset_intra_outputs)))
}
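The `SatPosition::Output` arm above is the tail of the satpoint arithmetic. A self-contained sketch of that algorithm, with assumed names (`compute_position`, `output_values` are illustrative, not the crate's API):

```rust
// Hedged sketch: walk the outputs until the sat's cumulative input offset
// falls inside one; any remainder past the last output was paid as fees.
enum SatPosition {
    Output((usize, u64)), // (output index, offset within that output)
    Fee(u64),             // residual offset spent to fees
}

fn compute_position(offset_cross_inputs: u64, output_values: &[u64]) -> SatPosition {
    let mut offset_intra_outputs = 0u64;
    for (output_index, value) in output_values.iter().enumerate() {
        if offset_cross_inputs < offset_intra_outputs + value {
            return SatPosition::Output((output_index, offset_cross_inputs - offset_intra_outputs));
        }
        offset_intra_outputs += value;
    }
    SatPosition::Fee(offset_cross_inputs - offset_intra_outputs)
}

fn main() {
    // A sat 7_000 sats deep, with outputs of 5_000 and 5_000,
    // lands in output 1 at offset 2_000.
    match compute_position(7_000, &[5_000, 5_000]) {
        SatPosition::Output((i, offset)) => println!("output {i} @ {offset}"),
        SatPosition::Fee(offset) => println!("spent to fees ({offset})"),
    }
}
```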
pub fn should_sync_rocks_db(
config: &Config,
ctx: &Context,
) -> Result<Option<(u64, u64)>, String> {
pub fn should_sync_rocks_db(config: &Config, ctx: &Context) -> Result<Option<(u64, u64)>, String> {
let blocks_db = open_readwrite_ordhook_db_conn_rocks_db(&config.expected_cache_path(), &ctx)?;
let inscriptions_db_conn = open_readonly_ordhook_db_conn(&config.expected_cache_path(), &ctx)?;
let last_compressed_block = find_last_block_inserted(&blocks_db) as u64;
let last_indexed_block = match find_latest_inscription_block_height(&inscriptions_db_conn, ctx)? {
let last_indexed_block = match find_latest_inscription_block_height(&inscriptions_db_conn, ctx)?
{
Some(last_indexed_block) => last_indexed_block,
None => 0
None => 0,
};
let res = if last_compressed_block < last_indexed_block {
@@ -164,7 +162,6 @@ pub fn should_sync_ordhook_db(
}
};
// TODO: Gracefully handle Regtest, Testnet and Signet
let (mut end_block, speed) = if start_block < 200_000 {
(end_block.min(200_000), 10_000)
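The comparison in `should_sync_rocks_db` above drives the resync decision; restated as a hedged standalone check (names illustrative):

```rust
// If the rocksdb block store (last_compressed) lags the sqlite inscription
// index (last_indexed), the gap still needs to be ingested; otherwise no-op.
fn blocks_to_resync(last_compressed: u64, last_indexed: u64) -> Option<(u64, u64)> {
    if last_compressed < last_indexed {
        Some((last_compressed, last_indexed))
    } else {
        None
    }
}

fn main() {
    assert_eq!(blocks_to_resync(790_000, 800_000), Some((790_000, 800_000)));
    assert_eq!(blocks_to_resync(800_000, 800_000), None);
}
```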


@@ -1,18 +1,15 @@
use chainhook_sdk::{types::BitcoinBlockData, utils::Context};
use crossbeam_channel::{Sender, TryRecvError};
use rocksdb::DB;
use std::{
thread::{sleep, JoinHandle},
time::Duration,
};
use crossbeam_channel::{Sender, TryRecvError};
use chainhook_sdk::{types::BitcoinBlockData, utils::Context};
use rocksdb::DB;
use crate::{
config::Config,
core::pipeline::{PostProcessorCommand, PostProcessorController, PostProcessorEvent},
db::{
insert_entry_in_blocks,
open_readwrite_ordhook_db_conn_rocks_db, LazyBlock,
},
db::{insert_entry_in_blocks, open_readwrite_ordhook_db_conn_rocks_db, LazyBlock},
};
pub fn start_block_archiving_processor(
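This processor consumes commands over a crossbeam channel (see the `fix: use crossbeam channel instead of std` commit above); a minimal sketch of that non-blocking command loop, with illustrative command variants:

```rust
// Sketch, not the actual processor: try_recv() drains commands without
// blocking, sleeping briefly when the queue is empty.
use crossbeam_channel::{unbounded, TryRecvError};
use std::{thread::sleep, time::Duration};

enum PostProcessorCommand {
    ProcessBlocks(Vec<u64>), // stand-in payload; the real command carries block data
    Terminate,
}

fn main() {
    let (tx, rx) = unbounded();
    tx.send(PostProcessorCommand::ProcessBlocks(vec![1, 2, 3])).unwrap();
    tx.send(PostProcessorCommand::Terminate).unwrap();
    loop {
        match rx.try_recv() {
            Ok(PostProcessorCommand::ProcessBlocks(blocks)) => {
                println!("archiving {} blocks", blocks.len())
            }
            Ok(PostProcessorCommand::Terminate) | Err(TryRecvError::Disconnected) => break,
            Err(TryRecvError::Empty) => sleep(Duration::from_millis(100)),
        }
    }
}
```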


@@ -32,7 +32,10 @@ use crate::{
},
OrdhookConfig,
},
db::{get_any_entry_in_ordinal_activities, open_readonly_ordhook_db_conn},
db::{
get_any_entry_in_ordinal_activities, open_ordhook_db_conn_rocks_db_loop,
open_readonly_ordhook_db_conn,
},
};
use crate::db::{LazyBlockTransaction, TraversalResult};
@@ -43,7 +46,7 @@ use crate::{
new_traversals_lazy_cache,
pipeline::{PostProcessorCommand, PostProcessorController, PostProcessorEvent},
},
db::{open_readwrite_ordhook_db_conn, open_readwrite_ordhook_db_conn_rocks_db},
db::open_readwrite_ordhook_db_conn,
};
pub fn start_inscription_indexing_processor(
@@ -66,8 +69,7 @@ pub fn start_inscription_indexing_processor(
open_readwrite_ordhook_db_conn(&config.expected_cache_path(), &ctx).unwrap();
let ordhook_config = config.get_ordhook_config();
let blocks_db_rw =
open_readwrite_ordhook_db_conn_rocks_db(&config.expected_cache_path(), &ctx)
.unwrap();
open_ordhook_db_conn_rocks_db_loop(true, &config.expected_cache_path(), &ctx);
let mut empty_cycles = 0;
let inscriptions_db_conn =


@@ -24,8 +24,7 @@ use crate::{
find_blessed_inscription_with_ordinal_number,
find_latest_cursed_inscription_number_at_block_height,
find_latest_inscription_number_at_block_height, format_satpoint_to_watch,
update_inscriptions_with_block, LazyBlockTransaction,
TraversalResult,
update_inscriptions_with_block, LazyBlockTransaction, TraversalResult,
},
ord::height::Height,
};
@@ -473,11 +472,7 @@ pub fn augment_block_with_ordinals_inscriptions_data_and_write_to_db_tx(
);
// Store inscriptions
update_inscriptions_with_block(
block,
inscriptions_db_tx,
ctx,
);
update_inscriptions_with_block(block, inscriptions_db_tx, ctx);
any_events
}
@@ -523,7 +518,9 @@ pub fn augment_block_with_ordinals_inscriptions_data(
// Handle sats overflow
while let Some((tx_index, op_index)) = sats_overflows.pop_front() {
let OrdinalOperation::InscriptionRevealed(ref mut inscription_data) = block.transactions[tx_index].metadata.ordinal_operations[op_index] else {
let OrdinalOperation::InscriptionRevealed(ref mut inscription_data) =
block.transactions[tx_index].metadata.ordinal_operations[op_index]
else {
continue;
};
let is_curse = inscription_data.curse_type.is_some();
@@ -711,7 +708,10 @@ fn consolidate_transaction_with_pre_computed_inscription_data(
OrdinalOperation::InscriptionTransferred(_) => continue,
};
let Some(traversal) = inscriptions_data.remove(&(tx.transaction_identifier.clone(), inscription.inscription_input_index)) else {
let Some(traversal) = inscriptions_data.remove(&(
tx.transaction_identifier.clone(),
inscription.inscription_input_index,
)) else {
continue;
};


@@ -2,7 +2,7 @@ use chainhook_sdk::{
bitcoincore_rpc_json::bitcoin::{hashes::hex::FromHex, Address, Network, Script},
types::{
BitcoinBlockData, BitcoinNetwork, BitcoinTransactionData, BlockIdentifier,
OrdinalInscriptionTransferData, OrdinalOperation, TransactionIdentifier,
OrdinalInscriptionTransferData, OrdinalOperation, TransactionIdentifier, OrdinalInscriptionTransferDestination,
},
utils::Context,
};
@@ -114,7 +114,7 @@ pub fn augment_transaction_with_ordinals_transfers_data(
let (
outpoint_post_transfer,
offset_post_transfer,
updated_address,
destination,
post_transfer_output_value,
) = match post_transfer_data {
SatPosition::Output((output_index, offset)) => {
@@ -124,7 +124,7 @@ pub fn augment_transaction_with_ordinals_transfers_data(
tx.metadata.outputs[output_index].get_script_pubkey_hex();
let updated_address = match Script::from_hex(&script_pub_key_hex) {
Ok(script) => match Address::from_script(&script, network.clone()) {
Ok(address) => Some(address.to_string()),
Ok(address) => OrdinalInscriptionTransferDestination::Transferred(address.to_string()),
Err(e) => {
ctx.try_log(|logger| {
warn!(
@@ -133,7 +133,7 @@ pub fn augment_transaction_with_ordinals_transfers_data(
e.to_string()
)
});
None
OrdinalInscriptionTransferDestination::Burnt(script.to_string())
}
},
Err(e) => {
@@ -144,7 +144,7 @@ pub fn augment_transaction_with_ordinals_transfers_data(
e.to_string()
)
});
None
OrdinalInscriptionTransferDestination::Burnt(script_pub_key_hex.to_string())
}
};
@@ -181,7 +181,7 @@ pub fn augment_transaction_with_ordinals_transfers_data(
offset
)
});
(outpoint, total_offset, None, None)
(outpoint, total_offset, OrdinalInscriptionTransferDestination::SpentInFees, None)
}
};
@@ -190,7 +190,7 @@ pub fn augment_transaction_with_ordinals_transfers_data(
let transfer_data = OrdinalInscriptionTransferData {
inscription_id: watched_satpoint.inscription_id.clone(),
updated_address,
destination,
tx_index,
satpoint_pre_transfer,
satpoint_post_transfer,
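This hunk carries the change the release is named after: the transfer event's `updated_address: Option<String>` becomes `destination: OrdinalInscriptionTransferDestination`, which distinguishes a decodable receiving address, an un-addressable output script, and sats spent as fees. A self-contained sketch mirroring the three variants as they read in this diff (the local enum and `classify` helper are illustrative, not the chainhook-sdk definitions):

// Hedged mirror of OrdinalInscriptionTransferDestination; variant
// payloads are inferred from the hunks above and are assumptions.
#[derive(Debug)]
enum TransferDestination {
    Transferred(String), // sat landed on an output with a decodable address
    Burnt(String),       // output script could not be rendered as an address
    SpentInFees,         // sat was consumed as fees, no output carries it
}

fn classify(address: Option<String>, script: String, spent_in_fees: bool) -> TransferDestination {
    if spent_in_fees {
        return TransferDestination::SpentInFees;
    }
    match address {
        Some(addr) => TransferDestination::Transferred(addr),
        None => TransferDestination::Burnt(script),
    }
}

fn main() {
    // Hypothetical values for illustration.
    let dest = classify(None, "6a24aa21a9ed".to_string(), false);
    println!("{dest:?}"); // Burnt("6a24aa21a9ed")
}

Consumers that previously branched on `Some`/`None` now receive the burn script and the fee case as explicit variants instead of a lossy `None`.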


@@ -20,7 +20,10 @@ use chainhook_sdk::{
};
use crate::{
core::protocol::inscription_parsing::{get_inscriptions_revealed_in_block, get_inscriptions_transferred_in_block}, ord::sat::Sat,
core::protocol::inscription_parsing::{
get_inscriptions_revealed_in_block, get_inscriptions_transferred_in_block,
},
ord::sat::Sat,
};
pub fn get_default_ordhook_db_file_path(base_dir: &PathBuf) -> PathBuf {
@@ -228,7 +231,7 @@ pub fn open_readonly_ordhook_db_conn_rocks_db(
opts.set_disable_auto_compactions(true);
opts.set_max_background_jobs(0);
let db = DB::open_for_read_only(&opts, path, false)
.map_err(|e| format!("unable to open hord.rocksdb: {}", e.to_string()))?;
.map_err(|e| format!("unable to read hord.rocksdb: {}", e.to_string()))?;
Ok(db)
}
@@ -276,7 +279,7 @@ pub fn open_readwrite_ordhook_db_conn_rocks_db(
let path = get_default_ordhook_db_file_path_rocks_db(&base_dir);
let opts = rocks_db_default_options();
let db = DB::open(&opts, path)
.map_err(|e| format!("unable to open hord.rocksdb: {}", e.to_string()))?;
.map_err(|e| format!("unable to read-write hord.rocksdb: {}", e.to_string()))?;
Ok(db)
}
@@ -494,12 +497,18 @@ pub fn insert_transfer_in_locations(
pub fn get_any_entry_in_ordinal_activities(
block_height: &u64,
inscriptions_db_tx: &Connection,
_ctx: &Context,
ctx: &Context,
) -> bool {
let args: &[&dyn ToSql] = &[&block_height.to_sql().unwrap()];
let mut stmt = inscriptions_db_tx
let mut stmt = match inscriptions_db_tx
.prepare("SELECT DISTINCT block_height FROM inscriptions WHERE block_height = ?")
.unwrap();
{
Ok(stmt) => stmt,
Err(e) => {
ctx.try_log(|logger| error!(logger, "{}", e.to_string()));
panic!();
}
};
let mut rows = stmt.query(args).unwrap();
while let Ok(Some(_)) = rows.next() {
return true;
@@ -824,12 +833,12 @@ pub fn find_all_inscriptions_in_block(
{ parse_inscription_id(&inscription_id) };
let Some(transfer_data) = transfers_data
.get(&inscription_id)
.and_then(|entries| entries.first()) else {
.and_then(|entries| entries.first())
else {
ctx.try_log(|logger| {
error!(
logger,
"unable to retrieve inscription genesis transfer data: {}",
inscription_id,
"unable to retrieve inscription genesis transfer data: {}", inscription_id,
)
});
continue;


@@ -6,9 +6,11 @@ use flate2::read::GzDecoder;
use futures_util::StreamExt;
use progressing::mapping::Bar as MappingBar;
use progressing::Baring;
use tar::Archive;
use std::fs::{self, File};
use std::io::{self, Cursor};
use std::io::{Read, Write};
use std::path::PathBuf;
use tar::Archive;
pub fn default_sqlite_file_path(_network: &BitcoinNetwork) -> String {
format!("hord.sqlite").to_lowercase()
@@ -18,10 +20,12 @@ pub fn default_sqlite_sha_file_path(_network: &BitcoinNetwork) -> String {
format!("hord.sqlite.sha256").to_lowercase()
}
pub async fn download_sqlite_file(config: &Config, _ctx: &Context) -> Result<(), String> {
pub async fn download_sqlite_file(config: &Config, ctx: &Context) -> Result<(), String> {
let destination_path = config.expected_cache_path();
std::fs::create_dir_all(&destination_path).unwrap_or_else(|e| {
if ctx.logger.is_some() {
println!("{}", e.to_string());
}
});
// let remote_sha_url = config.expected_remote_ordinals_sqlite_sha256();
@@ -39,36 +43,66 @@ pub async fn download_sqlite_file(config: &Config, _ctx: &Context) -> Result<(),
// write_file_content_at_path(&local_sha_file_path, &res.to_vec())?;
let file_url = config.expected_remote_ordinals_sqlite_url();
if ctx.logger.is_some() {
println!("=> {file_url}");
}
let res = reqwest::get(&file_url)
.await
.or(Err(format!("Failed to GET from '{}'", &file_url)))?;
// Download chunks
let (tx, rx) = flume::bounded(0);
let decoder_thread = std::thread::spawn(move || {
let input = ChannelRead::new(rx);
let mut decoder = GzDecoder::new(input);
let mut content = Vec::new();
let _ = decoder.read_to_end(&mut content);
let mut archive = Archive::new(&content[..]);
if let Err(e) = archive.unpack(&destination_path) {
println!("unable to write file: {}", e.to_string());
std::process::exit(1);
}
});
if res.status() == reqwest::StatusCode::OK {
let limit = res.content_length().unwrap_or(10_000_000_000) as i64;
let archive_tmp_file = PathBuf::from("db.tar");
let decoder_thread = std::thread::spawn(move || {
{
let input = ChannelRead::new(rx);
let mut decoder = GzDecoder::new(input);
let mut tmp = File::create(&archive_tmp_file).unwrap();
let mut buffer = [0; 512_000];
loop {
match decoder.read(&mut buffer) {
Ok(0) => break,
Ok(n) => {
if let Err(e) = tmp.write_all(&buffer[..n]) {
let err = format!(
"unable to update compressed archive: {}",
e.to_string()
);
return Err(err);
}
}
Err(e) => {
let err =
format!("unable to write compressed archive: {}", e.to_string());
return Err(err);
}
}
}
let _ = tmp.flush();
}
let archive_file = File::open(&archive_tmp_file).unwrap();
let mut archive = Archive::new(archive_file);
if let Err(e) = archive.unpack(&destination_path) {
let err = format!("unable to decompress file: {}", e.to_string());
return Err(err);
}
let _ = fs::remove_file(archive_tmp_file);
Ok(())
});
let mut progress_bar = MappingBar::with_range(0i64, limit);
progress_bar.set_len(60);
let mut stdout = std::io::stdout();
if ctx.logger.is_some() {
print!("{}", progress_bar);
let _ = stdout.flush();
}
let mut stream = res.bytes_stream();
let mut progress = 0;
let mut steps = 0;
let mut tx_err = None;
while let Some(item) = stream.next().await {
let chunk = item.or(Err(format!("Error while downloading file")))?;
progress += chunk.len() as i64;
@@ -78,24 +112,28 @@ pub async fn download_sqlite_file(config: &Config, _ctx: &Context) -> Result<(),
}
progress_bar.set(progress);
if steps == 0 {
if ctx.logger.is_some() {
print!("\r{}", progress_bar);
let _ = stdout.flush();
}
tx.send_async(chunk.to_vec())
.await
.map_err(|e| format!("unable to download stacks event: {}", e.to_string()))?;
}
if let Err(e) = tx.send_async(chunk.to_vec()).await {
let err = format!("unable to download archive: {}", e.to_string());
tx_err = Some(err);
break;
}
}
progress_bar.set(limit);
if ctx.logger.is_some() {
print!("\r{}", progress_bar);
let _ = stdout.flush();
println!();
drop(tx);
}
drop(tx);
tokio::task::spawn_blocking(|| decoder_thread.join())
.await
.unwrap()
.unwrap();
decoder_thread.join().unwrap()?;
if let Some(_e) = tx_err.take() {}
}
Ok(())
}
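The reworked `download_sqlite_file` streams the gzipped archive through a `ChannelRead` into a temporary `db.tar` on disk and unpacks from there, instead of decompressing the whole payload into memory first. `ChannelRead` itself is defined outside this diff; a plausible sketch, assuming it adapts a flume byte-chunk channel to `std::io::Read` (a closed channel reads as EOF):

use std::io::{self, Read};

// Hedged sketch of the ChannelRead adapter referenced above; the real
// definition is not part of this diff.
struct ChannelRead {
    rx: flume::Receiver<Vec<u8>>,
    current: io::Cursor<Vec<u8>>,
}

impl ChannelRead {
    fn new(rx: flume::Receiver<Vec<u8>>) -> Self {
        ChannelRead { rx, current: io::Cursor::new(vec![]) }
    }
}

impl Read for ChannelRead {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if self.current.position() as usize == self.current.get_ref().len() {
            // Current chunk exhausted: pull the next one off the channel.
            match self.rx.recv() {
                Ok(chunk) => self.current = io::Cursor::new(chunk),
                Err(_) => return Ok(0), // sender dropped: EOF
            }
        }
        self.current.read(buf)
    }
}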


@@ -26,10 +26,10 @@ use chainhook_sdk::types::{
use chainhook_sdk::utils::{file_append, send_request, BlockHeights, Context};
use std::collections::HashMap;
// TODO(lgalabru): Re-introduce support for blocks[] !!! gracefully handle hints for non consecutive blocks
pub async fn scan_bitcoin_chainstate_via_rpc_using_predicate(
predicate_spec: &BitcoinChainhookSpecification,
config: &Config,
event_observer_config_override: Option<&EventObserverConfig>,
ctx: &Context,
) -> Result<(), String> {
let _ = download_ordinals_dataset_if_required(config, ctx).await;
@@ -85,7 +85,10 @@ pub async fn scan_bitcoin_chainstate_via_rpc_using_predicate(
let mut actions_triggered = 0;
let mut err_count = 0;
let event_observer_config = config.get_event_observer_config();
let event_observer_config = match event_observer_config_override {
Some(config_override) => config_override.clone(),
None => config.get_event_observer_config(),
};
let bitcoin_config = event_observer_config.get_bitcoin_config();
let number_of_blocks_to_scan = block_heights_to_scan.len() as u64;
let mut number_of_blocks_scanned = 0;
@@ -95,15 +98,6 @@ pub async fn scan_bitcoin_chainstate_via_rpc_using_predicate(
while let Some(current_block_height) = block_heights_to_scan.pop_front() {
number_of_blocks_scanned += 1;
// Re-initiate connection every 250 blocks (pessimistic) to avoid stale connections
let conn_updated = if number_of_blocks_scanned % 250 == 0 {
inscriptions_db_conn =
open_readonly_ordhook_db_conn(&config.expected_cache_path(), ctx)?;
true
} else {
false
};
if !get_any_entry_in_ordinal_activities(&current_block_height, &inscriptions_db_conn, &ctx)
{
continue;
@@ -151,7 +145,7 @@ pub async fn scan_bitcoin_chainstate_via_rpc_using_predicate(
info!(
ctx.expect_logger(),
"Processing block #{current_block_height} through {} predicate ({} inscriptions revealed: [{}], db_conn updated: {conn_updated})",
"Processing block #{current_block_height} through {} predicate ({} inscriptions revealed: [{}])",
predicate_spec.uuid,
inscriptions_revealed.len(),
inscriptions_revealed.join(", ")


@@ -14,10 +14,9 @@ use crate::core::protocol::inscription_parsing::{
use crate::core::protocol::inscription_sequencing::SequenceCursor;
use crate::core::{new_traversals_lazy_cache, should_sync_ordhook_db, should_sync_rocks_db};
use crate::db::{
delete_data_in_ordhook_db, insert_entry_in_blocks,
update_inscriptions_with_block, update_locations_with_block,
open_readwrite_ordhook_db_conn, open_readwrite_ordhook_db_conn_rocks_db,
open_readwrite_ordhook_dbs, LazyBlock, LazyBlockTransaction,
delete_data_in_ordhook_db, insert_entry_in_blocks, open_readwrite_ordhook_db_conn,
open_readwrite_ordhook_db_conn_rocks_db, open_readwrite_ordhook_dbs,
update_inscriptions_with_block, update_locations_with_block, LazyBlock, LazyBlockTransaction,
};
use crate::scan::bitcoin::process_block_with_predicates;
use crate::service::http_api::start_predicate_api_server;
@@ -49,8 +48,8 @@ use std::sync::mpsc::channel;
use std::sync::Arc;
pub struct Service {
config: Config,
ctx: Context,
pub config: Config,
pub ctx: Context,
}
impl Service {
@@ -217,7 +216,7 @@ impl Service {
>,
) -> Result<(), String> {
let PredicatesApi::On(ref api_config) = self.config.http_api else {
return Ok(())
return Ok(());
};
let (bitcoin_scan_op_tx, bitcoin_scan_op_rx) = crossbeam_channel::unbounded();
@@ -388,7 +387,7 @@ impl Service {
);
event_observer_config.chainhook_config = Some(chainhook_config);
let data_rx = if enable_internal_trigger {
let (tx, rx) = crossbeam_channel::unbounded();
let (tx, rx) = crossbeam_channel::bounded(256);
event_observer_config.data_handler_tx = Some(tx);
Some(rx)
} else {
@@ -598,12 +597,14 @@ fn chainhook_sidecar_mutate_ordhook_db(command: HandleBlock, config: &Config, ct
let compressed_block: LazyBlock = match LazyBlock::from_standardized_block(&block) {
Ok(block) => block,
Err(e) => {
ctx.try_log(|logger| {
error!(
ctx.expect_logger(),
logger,
"Unable to compress block #{}: #{}",
block.block_identifier.index,
e.to_string()
);
)
});
return;
}
};
@@ -616,17 +617,9 @@ fn chainhook_sidecar_mutate_ordhook_db(command: HandleBlock, config: &Config, ct
);
let _ = blocks_db_rw.flush();
update_inscriptions_with_block(
&block,
&inscriptions_db_conn_rw,
&ctx,
);
update_inscriptions_with_block(&block, &inscriptions_db_conn_rw, &ctx);
update_locations_with_block(
&block,
&inscriptions_db_conn_rw,
&ctx,
);
update_locations_with_block(&block, &inscriptions_db_conn_rw, &ctx);
}
}
}
@@ -709,12 +702,14 @@ pub fn chainhook_sidecar_mutate_blocks(
let compressed_block: LazyBlock = match LazyBlock::from_standardized_block(&cache.block) {
Ok(block) => block,
Err(e) => {
ctx.try_log(|logger| {
error!(
ctx.expect_logger(),
logger,
"Unable to compress block #{}: #{}",
cache.block.block_identifier.index,
e.to_string()
);
)
});
continue;
}
};
@@ -729,16 +724,8 @@ pub fn chainhook_sidecar_mutate_blocks(
let _ = blocks_db_rw.flush();
if cache.processed_by_sidecar {
update_inscriptions_with_block(
&cache.block,
&inscriptions_db_tx,
&ctx,
);
update_locations_with_block(
&cache.block,
&inscriptions_db_tx,
&ctx,
);
update_inscriptions_with_block(&cache.block, &inscriptions_db_tx, &ctx);
update_locations_with_block(&cache.block, &inscriptions_db_tx, &ctx);
} else {
updated_blocks_ids.push(format!("{}", cache.block.block_identifier.index));
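Earlier in this file's diff, the internal trigger channel moves from `crossbeam_channel::unbounded()` to `bounded(256)`: a slow consumer now back-pressures the indexer instead of letting queued block payloads grow without bound. A standalone illustration of that behavior (the payload type and counts are placeholders):

use std::thread;

// With a bounded channel, send() blocks once the buffer holds 256 items,
// so the producer is throttled to the consumer's pace.
fn main() {
    let (tx, rx) = crossbeam_channel::bounded::<u64>(256);
    let producer = thread::spawn(move || {
        for height in 0..10_000u64 {
            tx.send(height).expect("receiver dropped"); // blocks when full
        }
    });
    let mut processed = 0u64;
    for _payload in rx.iter() {
        processed += 1; // stand-in for handling a block payload
    }
    producer.join().unwrap();
    println!("processed {processed} payloads");
}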


@@ -32,6 +32,7 @@ pub fn start_bitcoin_scan_runloop(
let op = scan_bitcoin_chainstate_via_rpc_using_predicate(
&predicate_spec,
&moved_config,
None,
&moved_ctx,
);


@@ -0,0 +1,3 @@
[target.aarch64-unknown-linux-musl]
linker = "aarch64-linux-musl-gcc"
rustflags = ["-C", "target-feature=-crt-static"]


@@ -0,0 +1,16 @@
target
Cargo.toml
Cargo.lock
.cargo
.github
npm
src
.eslintrc
.prettierignore
rustfmt.toml
yarn.lock
**/*.rs
*.node
.yarn
__test__
renovate.json

Binary file not shown.

File diff suppressed because one or more lines are too long


@@ -0,0 +1,3 @@
nodeLinker: node-modules
yarnPath: .yarn/releases/yarn-3.6.4.cjs

File diff suppressed because it is too large


@@ -1,24 +1,30 @@
[package]
name = "ordhook-sdk-js"
version = "0.5.0"
edition = "2021"
exclude = ["index.node"]
name = "ordhook-sdk-js"
version = "0.6.0"
[lib]
crate-type = ["cdylib"]
[dependencies]
serde = "1"
error-chain = "0.12"
# Default enable napi4 feature, see https://nodejs.org/api/n-api.html#node-api-version-matrix
napi = { version = "2.12.2", default-features = false, features = ["napi4", "async", "tokio_rt", "serde-json"] }
napi-derive = "2.12.2"
crossbeam-channel = "0.5.6"
ordhook = { path = "../ordhook-core" }
hiro-system-kit = "0.3.1"
crossbeam-channel = "0.5.6"
serde_json = "1"
serde = "1"
[dependencies.neon]
version = "0.9.1"
default-features = false
features = ["napi-4", "channel-api", "event-queue-api", "try-catch-api"]
[build-dependencies]
napi-build = "2.0.1"
[dependencies.num]
version = "0.2"
default-features = false
[build]
target = "armv7-unknown-linux-gnueabihf"
rustflags = ["-C", "link-args=-L/lib/arm-linux-gnueabihf"]
[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-g++"
[profile.release]
lto = true
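The manifest swaps the neon bindings for napi/napi-derive (NAPI-RS), whose `#[napi]` attribute macros also generate the `index.d.ts` typings shown later in this diff. For orientation, the smallest possible NAPI-RS export looks like the following; the `ping` function is purely illustrative and not part of this repository:

use napi_derive::napi;

// Illustrative only: exporting a function to JS with NAPI-RS.
#[napi]
pub fn ping() -> String {
    "pong".to_string()
}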


@@ -1,4 +1,6 @@
import { OrdinalsIndexer } from "./index";
import test from 'ava'
import { OrdinalsIndexer } from "../index.js";
const indexer = new OrdinalsIndexer({
bitcoinRpcUrl: 'http://0.0.0.0:8332',
@@ -8,23 +10,12 @@ const indexer = new OrdinalsIndexer({
logs: false
});
indexer.applyBlock(block => {
indexer.onBlock(block => {
console.log(`Hello from JS ${JSON.stringify(block)}`);
});
indexer.undoBlock(block => {
indexer.onBlockRollBack(block => {
console.log(`Hello from JS ${JSON.stringify(block)}`);
});
// indexer.streamBlocks();
indexer.dropBlocks([32103, 32104]);
indexer.rewriteBlocks([32103, 32104]);
indexer.syncBlocks();
indexer.replayBlocks([32103, 32104]);
indexer.terminate();
indexer.replayBlocks([767430, 767431]);


@@ -0,0 +1,5 @@
extern crate napi_build;
fn main() {
napi_build::setup();
}

components/ordhook-sdk-js/index.d.ts (vendored, new file, 21 lines)

@@ -0,0 +1,21 @@
/* tslint:disable */
/* eslint-disable */
/* auto-generated by NAPI-RS */
export interface OrdinalsIndexerConfig {
bitcoinRpcUrl?: string
bitcoinRpcUsername?: string
bitcoinRpcPassword?: string
workingDir?: string
logsEnabled?: boolean
}
export class OrdinalsIndexer {
constructor(configOverrides?: OrdinalsIndexerConfig | undefined | null)
onBlock(callback: (block: any) => boolean): void
onBlockRollBack(callback: (block: any) => boolean): void
streamBlocks(): void
replayBlocks(blocks: Array<number>): void
replayBlockRange(startBlock: number, endBlock: number): void
terminate(): void
}


@@ -0,0 +1,257 @@
/* tslint:disable */
/* eslint-disable */
/* prettier-ignore */
/* auto-generated by NAPI-RS */
const { existsSync, readFileSync } = require('fs')
const { join } = require('path')
const { platform, arch } = process
let nativeBinding = null
let localFileExisted = false
let loadError = null
function isMusl() {
// For Node 10
if (!process.report || typeof process.report.getReport !== 'function') {
try {
const lddPath = require('child_process').execSync('which ldd').toString().trim()
return readFileSync(lddPath, 'utf8').includes('musl')
} catch (e) {
return true
}
} else {
const { glibcVersionRuntime } = process.report.getReport().header
return !glibcVersionRuntime
}
}
switch (platform) {
case 'android':
switch (arch) {
case 'arm64':
localFileExisted = existsSync(join(__dirname, 'ordhook-sdk-js.android-arm64.node'))
try {
if (localFileExisted) {
nativeBinding = require('./ordhook-sdk-js.android-arm64.node')
} else {
nativeBinding = require('@hirosystems/ordhook-sdk-js-android-arm64')
}
} catch (e) {
loadError = e
}
break
case 'arm':
localFileExisted = existsSync(join(__dirname, 'ordhook-sdk-js.android-arm-eabi.node'))
try {
if (localFileExisted) {
nativeBinding = require('./ordhook-sdk-js.android-arm-eabi.node')
} else {
nativeBinding = require('@hirosystems/ordhook-sdk-js-android-arm-eabi')
}
} catch (e) {
loadError = e
}
break
default:
throw new Error(`Unsupported architecture on Android ${arch}`)
}
break
case 'win32':
switch (arch) {
case 'x64':
localFileExisted = existsSync(
join(__dirname, 'ordhook-sdk-js.win32-x64-msvc.node')
)
try {
if (localFileExisted) {
nativeBinding = require('./ordhook-sdk-js.win32-x64-msvc.node')
} else {
nativeBinding = require('@hirosystems/ordhook-sdk-js-win32-x64-msvc')
}
} catch (e) {
loadError = e
}
break
case 'ia32':
localFileExisted = existsSync(
join(__dirname, 'ordhook-sdk-js.win32-ia32-msvc.node')
)
try {
if (localFileExisted) {
nativeBinding = require('./ordhook-sdk-js.win32-ia32-msvc.node')
} else {
nativeBinding = require('@hirosystems/ordhook-sdk-js-win32-ia32-msvc')
}
} catch (e) {
loadError = e
}
break
case 'arm64':
localFileExisted = existsSync(
join(__dirname, 'ordhook-sdk-js.win32-arm64-msvc.node')
)
try {
if (localFileExisted) {
nativeBinding = require('./ordhook-sdk-js.win32-arm64-msvc.node')
} else {
nativeBinding = require('@hirosystems/ordhook-sdk-js-win32-arm64-msvc')
}
} catch (e) {
loadError = e
}
break
default:
throw new Error(`Unsupported architecture on Windows: ${arch}`)
}
break
case 'darwin':
localFileExisted = existsSync(join(__dirname, 'ordhook-sdk-js.darwin-universal.node'))
try {
if (localFileExisted) {
nativeBinding = require('./ordhook-sdk-js.darwin-universal.node')
} else {
nativeBinding = require('@hirosystems/ordhook-sdk-js-darwin-universal')
}
break
} catch {}
switch (arch) {
case 'x64':
localFileExisted = existsSync(join(__dirname, 'ordhook-sdk-js.darwin-x64.node'))
try {
if (localFileExisted) {
nativeBinding = require('./ordhook-sdk-js.darwin-x64.node')
} else {
nativeBinding = require('@hirosystems/ordhook-sdk-js-darwin-x64')
}
} catch (e) {
loadError = e
}
break
case 'arm64':
localFileExisted = existsSync(
join(__dirname, 'ordhook-sdk-js.darwin-arm64.node')
)
try {
if (localFileExisted) {
nativeBinding = require('./ordhook-sdk-js.darwin-arm64.node')
} else {
nativeBinding = require('@hirosystems/ordhook-sdk-js-darwin-arm64')
}
} catch (e) {
loadError = e
}
break
default:
throw new Error(`Unsupported architecture on macOS: ${arch}`)
}
break
case 'freebsd':
if (arch !== 'x64') {
throw new Error(`Unsupported architecture on FreeBSD: ${arch}`)
}
localFileExisted = existsSync(join(__dirname, 'ordhook-sdk-js.freebsd-x64.node'))
try {
if (localFileExisted) {
nativeBinding = require('./ordhook-sdk-js.freebsd-x64.node')
} else {
nativeBinding = require('@hirosystems/ordhook-sdk-js-freebsd-x64')
}
} catch (e) {
loadError = e
}
break
case 'linux':
switch (arch) {
case 'x64':
if (isMusl()) {
localFileExisted = existsSync(
join(__dirname, 'ordhook-sdk-js.linux-x64-musl.node')
)
try {
if (localFileExisted) {
nativeBinding = require('./ordhook-sdk-js.linux-x64-musl.node')
} else {
nativeBinding = require('@hirosystems/ordhook-sdk-js-linux-x64-musl')
}
} catch (e) {
loadError = e
}
} else {
localFileExisted = existsSync(
join(__dirname, 'ordhook-sdk-js.linux-x64-gnu.node')
)
try {
if (localFileExisted) {
nativeBinding = require('./ordhook-sdk-js.linux-x64-gnu.node')
} else {
nativeBinding = require('@hirosystems/ordhook-sdk-js-linux-x64-gnu')
}
} catch (e) {
loadError = e
}
}
break
case 'arm64':
if (isMusl()) {
localFileExisted = existsSync(
join(__dirname, 'ordhook-sdk-js.linux-arm64-musl.node')
)
try {
if (localFileExisted) {
nativeBinding = require('./ordhook-sdk-js.linux-arm64-musl.node')
} else {
nativeBinding = require('@hirosystems/ordhook-sdk-js-linux-arm64-musl')
}
} catch (e) {
loadError = e
}
} else {
localFileExisted = existsSync(
join(__dirname, 'ordhook-sdk-js.linux-arm64-gnu.node')
)
try {
if (localFileExisted) {
nativeBinding = require('./ordhook-sdk-js.linux-arm64-gnu.node')
} else {
nativeBinding = require('@hirosystems/ordhook-sdk-js-linux-arm64-gnu')
}
} catch (e) {
loadError = e
}
}
break
case 'arm':
localFileExisted = existsSync(
join(__dirname, 'ordhook-sdk-js.linux-arm-gnueabihf.node')
)
try {
if (localFileExisted) {
nativeBinding = require('./ordhook-sdk-js.linux-arm-gnueabihf.node')
} else {
nativeBinding = require('@hirosystems/ordhook-sdk-js-linux-arm-gnueabihf')
}
} catch (e) {
loadError = e
}
break
default:
throw new Error(`Unsupported architecture on Linux: ${arch}`)
}
break
default:
throw new Error(`Unsupported OS: ${platform}, architecture: ${arch}`)
}
if (!nativeBinding) {
if (loadError) {
throw loadError
}
throw new Error(`Failed to load native binding`)
}
const { OrdinalsIndexer } = nativeBinding
module.exports.OrdinalsIndexer = OrdinalsIndexer


@@ -1,108 +0,0 @@
"use strict";
const {
ordinalsIndexerNew,
ordinalsIndexerStreamBlocks,
ordinalsIndexerReplayBlocks,
ordinalsIndexerDropBlocks,
ordinalsIndexerSyncBlocks,
ordinalsIndexerRewriteBlocks,
ordinalsIndexerOnBlockApply,
ordinalsIndexerOnBlockUndo,
ordinalsIndexerTerminate,
} = require("../native/index.node");
// import {
// BitcoinChainUpdate,
// Block,
// StacksBlockMetadata,
// StacksBlockUpdate,
// StacksChainUpdate,
// StacksTransaction,
// StacksTransactionMetadata,
// Transaction,
// } from "@hirosystems/chainhook-types";
// export * from "@hirosystems/chainhook-types";
export class OrdinalsIndexer {
handle: any;
/**
* @summary Construct a new OrdinalsIndexer
* @param
* @memberof OrdinalsIndexer
*/
constructor(settings: {
bitcoinRpcUrl: string,
bitcoinRpcUsername: string,
bitcoinRpcPassword: string,
workingDirectory: string,
logs: boolean,
}) {
this.handle = ordinalsIndexerNew(settings);
}
/**
* @summary Start streaming blocks
* @memberof OrdinalsIndexer
*/
streamBlocks() {
return ordinalsIndexerStreamBlocks.call(this.handle);
}
/**
* @summary Drop a set of blocks
* @memberof OrdinalsIndexer
*/
dropBlocks(blocks: number[]) {
return ordinalsIndexerDropBlocks.call(this.handle, blocks);
}
/**
* @summary Drop, download and re-index a set of blocks
* @memberof OrdinalsIndexer
*/
rewriteBlocks(blocks: number[]) {
return ordinalsIndexerRewriteBlocks.call(this.handle, blocks);
}
/**
* @summary Replay a set of blocks
* @memberof OrdinalsIndexer
*/
replayBlocks(blocks: number[]) {
return ordinalsIndexerReplayBlocks.call(this.handle, blocks);
}
/**
* @summary Download and index blocks
* @memberof OrdinalsIndexer
*/
syncBlocks() {
return ordinalsIndexerSyncBlocks.call(this.handle);
}
/**
* @summary Apply Block
* @memberof OrdinalsIndexer
*/
applyBlock(callback: (block: any) => void) {
return ordinalsIndexerOnBlockApply.call(this.handle, callback);
}
/**
* @summary Undo Block
* @memberof OrdinalsIndexer
*/
undoBlock(callback: (block: any) => void) {
return ordinalsIndexerOnBlockUndo.call(this.handle, callback);
}
/**
* @summary Terminates indexer
* @memberof DevnetNetworkOrchestrator
*/
terminate() {
return ordinalsIndexerTerminate.call(this.handle);
}
}


@@ -0,0 +1,3 @@
# `@hirosystems/ordhook-sdk-js-darwin-arm64`
This is the **aarch64-apple-darwin** binary for `@hirosystems/ordhook-sdk-js`


@@ -0,0 +1,18 @@
{
"name": "@hirosystems/ordhook-sdk-js-darwin-arm64",
"version": "0.6.2",
"os": [
"darwin"
],
"cpu": [
"arm64"
],
"main": "ordhook-sdk-js.darwin-arm64.node",
"files": [
"ordhook-sdk-js.darwin-arm64.node"
],
"license": "MIT",
"engines": {
"node": ">= 10"
}
}


@@ -0,0 +1,3 @@
# `@hirosystems/ordhook-sdk-js-darwin-universal`
This is the **universal-apple-darwin** binary for `@hirosystems/ordhook-sdk-js`


@@ -0,0 +1,15 @@
{
"name": "@hirosystems/ordhook-sdk-js-darwin-universal",
"version": "0.6.2",
"os": [
"darwin"
],
"main": "ordhook-sdk-js.darwin-universal.node",
"files": [
"ordhook-sdk-js.darwin-universal.node"
],
"license": "MIT",
"engines": {
"node": ">= 10"
}
}


@@ -0,0 +1,3 @@
# `@hirosystems/ordhook-sdk-js-darwin-x64`
This is the **x86_64-apple-darwin** binary for `@hirosystems/ordhook-sdk-js`


@@ -0,0 +1,18 @@
{
"name": "@hirosystems/ordhook-sdk-js-darwin-x64",
"version": "0.6.2",
"os": [
"darwin"
],
"cpu": [
"x64"
],
"main": "ordhook-sdk-js.darwin-x64.node",
"files": [
"ordhook-sdk-js.darwin-x64.node"
],
"license": "MIT",
"engines": {
"node": ">= 10"
}
}


@@ -0,0 +1,3 @@
# `@hirosystems/ordhook-sdk-js-linux-x64-gnu`
This is the **x86_64-unknown-linux-gnu** binary for `@hirosystems/ordhook-sdk-js`


@@ -0,0 +1,21 @@
{
"name": "@hirosystems/ordhook-sdk-js-linux-x64-gnu",
"version": "0.6.2",
"os": [
"linux"
],
"cpu": [
"x64"
],
"main": "ordhook-sdk-js.linux-x64-gnu.node",
"files": [
"ordhook-sdk-js.linux-x64-gnu.node"
],
"license": "MIT",
"engines": {
"node": ">= 10"
},
"libc": [
"glibc"
]
}

File diff suppressed because it is too large


@@ -1,48 +1,42 @@
{
"name": "@hirosystems/ordhook-sdk-js",
"version": "1.7.1",
"description": "ordhook-sdk-js is a library for writing protocols .",
"author": "Ludo Galabru",
"repository": "https://github.com/hirosystems/ordhook/tree/main/components/ordhook-sdk-js",
"license": "GPL-3.0",
"main": "dist/index.js",
"files": [
"dist"
],
"scripts": {
"build": "tsc --build && cargo-cp-artifact -nc native/index.node -- cargo build --message-format=json-render-diagnostics",
"build-debug": "npm run build --",
"build-release": "npm run build -- --release",
"build-linux-x64-glibc": "npm run build-release -- --target x86_64-unknown-linux-gnu",
"build-linux-x64-musl": "npm run build-release -- --target x86_64-unknown-linux-musl",
"build-windows-x64": "npm run build-release -- --target x86_64-pc-windows-msvc",
"build-darwin-x64": "npm run build-release -- --target x86_64-apple-darwin",
"build-darwin-arm64": "npm run build-release -- --target aarch64-apple-darwin",
"install": "node-pre-gyp install --fallback-to-build=false || npm run build-release",
"lint": "eslint .",
"package": "node-pre-gyp package",
"spec": "jest",
"test": "npm run build && npm run spec",
"upload-binary": "npm run build-release && node-pre-gyp package && node-pre-gyp-github publish",
"version": "npm run build-release"
"version": "0.6.3",
"main": "index.js",
"repository": {
"type": "git",
"url": "https://github.com/hirosystems/ordhook",
"directory": "components/ordhook-sdk-js"
},
"dependencies": {
"@hirosystems/chainhook-types": "^1.1.2",
"@mapbox/node-pre-gyp": "^1.0.8",
"neon-cli": "^0.9.1",
"node-pre-gyp-github": "^1.4.3",
"typescript": "^4.5.5"
},
"devDependencies": {
"@types/node": "^16.11.11",
"cargo-cp-artifact": "^0.1"
},
"binary": {
"module_name": "index",
"host": "https://github.com/hirosystems/clarinet/releases/download/",
"remote_path": "v{version}",
"package_name": "stacks-devnet-js-{platform}-{arch}-{libc}.tar.gz",
"module_path": "./native",
"pkg_path": "."
"types": "index.d.ts",
"napi": {
"name": "ordhook-sdk-js",
"triples": {
"additional": [
"aarch64-apple-darwin",
"aarch64-unknown-linux-gnu",
"universal-apple-darwin"
]
}
},
"license": "MIT",
"devDependencies": {
"@napi-rs/cli": "^2.16.3",
"ava": "^5.1.1"
},
"ava": {
"timeout": "45m"
},
"engines": {
"node": ">= 10"
},
"scripts": {
"artifacts": "napi artifacts",
"build": "napi build --platform --release",
"build:debug": "napi build --platform",
"prepublishOnly": "napi prepublish -t npm",
"test": "ava",
"universal": "napi universal",
"version": "napi version"
},
"packageManager": "yarn@3.6.4"
}


@@ -0,0 +1,2 @@
tab_spaces = 2
edition = "2021"


@@ -1,444 +1,6 @@
#![deny(clippy::all)]
#[macro_use]
extern crate error_chain;
extern crate napi_derive;
mod serde;
use core::panic;
use crossbeam_channel::{select, Sender};
use neon::prelude::*;
use ordhook::chainhook_sdk::observer::DataHandlerEvent;
use ordhook::chainhook_sdk::utils::Context as OrdhookContext;
use ordhook::config::Config;
use ordhook::service::Service;
use std::thread;
struct OrdinalsIndexerConfig {
pub bitcoin_rpc_url: String,
pub bitcoin_rpc_username: String,
pub bitcoin_rpc_password: String,
pub working_directory: String,
pub logs_enabled: bool,
}
impl OrdinalsIndexerConfig {
pub fn default() -> OrdinalsIndexerConfig {
OrdinalsIndexerConfig {
bitcoin_rpc_url: "http://0.0.0.0:8332".to_string(),
bitcoin_rpc_username: "devnet".to_string(),
bitcoin_rpc_password: "devnet".to_string(),
working_directory: "/tmp/ordinals".to_string(),
logs_enabled: true,
}
}
}
struct OrdinalsIndexer {
command_tx: Sender<IndexerCommand>,
custom_indexer_command_tx: Sender<CustomIndexerCommand>,
}
#[allow(dead_code)]
enum IndexerCommand {
StreamBlocks,
SyncBlocks,
DropBlocks(Vec<u64>),
RewriteBlocks(Vec<u64>),
ReplayBlocks(Vec<u64>),
Terminate,
}
#[allow(dead_code)]
enum CustomIndexerCommand {
UpdateApplyCallback(Root<JsFunction>),
UpdateUndoCallback(Root<JsFunction>),
Terminate,
}
impl Finalize for OrdinalsIndexer {}
impl OrdinalsIndexer {
fn new<'a, C>(cx: &mut C, ordhook_config: Config) -> Self
where
C: Context<'a>,
{
let (command_tx, command_rx) = crossbeam_channel::unbounded();
let (custom_indexer_command_tx, custom_indexer_command_rx) = crossbeam_channel::unbounded();
let logger = hiro_system_kit::log::setup_logger();
let _guard = hiro_system_kit::log::setup_global_logger(logger.clone());
let ctx = OrdhookContext {
logger: Some(logger),
tracer: false,
};
// Initialize service
// {
// let _ = initialize_ordhook_db(&ordhook_config.expected_cache_path(), &ctx);
// let _ = open_readwrite_ordhook_db_conn_rocks_db(&ordhook_config.expected_cache_path(), &ctx);
// }
let mut service: Service = Service::new(ordhook_config, ctx);
// Set up the observer sidecar, used for augmenting the bitcoin blocks with
// ordinals information
let observer_sidecar = service
.set_up_observer_sidecar_runloop()
.expect("unable to setup indexer");
// Prepare internal predicate
let (observer_config, payload_rx) = service
.set_up_observer_config(vec![], true)
.expect("unable to setup indexer");
// Indexing thread
let channel = cx.channel();
thread::spawn(move || {
let payload_rx = payload_rx.unwrap();
channel.send(move |mut cx| {
let mut apply_callback: Option<Root<JsFunction>> = None;
let mut undo_callback: Option<Root<JsFunction>> = None;
loop {
select! {
recv(payload_rx) -> msg => {
match msg {
Ok(DataHandlerEvent::Process(payload)) => {
if let Some(ref callback) = undo_callback {
for to_rollback in payload.rollback.into_iter() {
let callback = callback.clone(&mut cx).into_inner(&mut cx);
let this = cx.undefined();
let payload = serde::to_value(&mut cx, &to_rollback).expect("Unable to serialize block");
let args: Vec<Handle<JsValue>> = vec![payload];
callback.call(&mut cx, this, args)?;
}
}
if let Some(ref callback) = apply_callback {
for to_apply in payload.apply.into_iter() {
let callback = callback.clone(&mut cx).into_inner(&mut cx);
let this = cx.undefined();
let payload = serde::to_value(&mut cx, &to_apply).expect("Unable to serialize block");
let args: Vec<Handle<JsValue>> = vec![payload];
callback.call(&mut cx, this, args)?;
}
}
}
Ok(DataHandlerEvent::Terminate) => {
return Ok(());
}
_ => {
}
}
}
recv(custom_indexer_command_rx) -> msg => {
match msg {
Ok(CustomIndexerCommand::UpdateApplyCallback(callback)) => {
apply_callback = Some(callback);
}
Ok(CustomIndexerCommand::UpdateUndoCallback(callback)) => {
undo_callback = Some(callback);
}
Ok(CustomIndexerCommand::Terminate) => {
return Ok(())
}
_ => {}
}
}
}
}
});
});
// Processing thread
thread::spawn(move || {
loop {
let cmd = match command_rx.recv() {
Ok(cmd) => cmd,
Err(e) => {
panic!("Runloop error: {}", e.to_string());
}
};
match cmd {
IndexerCommand::StreamBlocks => {
// We start the service as soon as the start() method is called.
let future = service.catch_up_with_chain_tip(false, &observer_config);
let _ = hiro_system_kit::nestable_block_on(future)
.expect("unable to start indexer");
let future = service.start_event_observer(observer_sidecar);
let (command_tx, event_rx) = hiro_system_kit::nestable_block_on(future)
.expect("unable to start indexer");
// Blocking call
let _ = service.start_main_runloop(&command_tx, event_rx, None);
break;
}
IndexerCommand::ReplayBlocks(blocks) => {
println!("Will replay blocks {:?}", blocks);
}
IndexerCommand::DropBlocks(blocks) => {
println!("Will drop blocks {:?}", blocks);
}
IndexerCommand::RewriteBlocks(blocks) => {
println!("Will rewrite blocks {:?}", blocks);
}
IndexerCommand::SyncBlocks => {
println!("Will sync blocks");
}
IndexerCommand::Terminate => {
std::process::exit(0);
}
}
}
});
Self {
command_tx,
custom_indexer_command_tx,
// termination_rx,
}
}
fn stream_blocks(&self) -> Result<bool, String> {
let _ = self.command_tx.send(IndexerCommand::StreamBlocks);
Ok(true)
}
fn terminate(&self) -> Result<bool, String> {
let _ = self.command_tx.send(IndexerCommand::Terminate);
Ok(true)
}
fn replay_blocks(&self, blocks: Vec<u64>) -> Result<bool, String> {
let _ = self.command_tx.send(IndexerCommand::ReplayBlocks(blocks));
Ok(true)
}
fn drop_blocks(&self, blocks: Vec<u64>) -> Result<bool, String> {
let _ = self.command_tx.send(IndexerCommand::DropBlocks(blocks));
Ok(true)
}
fn rewrite_blocks(&self, blocks: Vec<u64>) -> Result<bool, String> {
let _ = self.command_tx.send(IndexerCommand::RewriteBlocks(blocks));
Ok(true)
}
fn sync_blocks(&self) -> Result<bool, String> {
let _ = self.command_tx.send(IndexerCommand::SyncBlocks);
Ok(true)
}
fn update_apply_callback(&self, callback: Root<JsFunction>) -> Result<bool, String> {
let _ = self
.custom_indexer_command_tx
.send(CustomIndexerCommand::UpdateApplyCallback(callback));
Ok(true)
}
fn update_undo_callback(&self, callback: Root<JsFunction>) -> Result<bool, String> {
let _ = self
.custom_indexer_command_tx
.send(CustomIndexerCommand::UpdateUndoCallback(callback));
Ok(true)
}
}
impl OrdinalsIndexer {
fn js_new(mut cx: FunctionContext) -> JsResult<JsBox<OrdinalsIndexer>> {
let settings = cx.argument::<JsObject>(0)?;
let mut config = OrdinalsIndexerConfig::default();
if let Ok(res) = settings
.get(&mut cx, "bitcoinRpcUrl")?
.downcast::<JsString, _>(&mut cx)
{
config.bitcoin_rpc_url = res.value(&mut cx);
}
if let Ok(res) = settings
.get(&mut cx, "bitcoinRpcUsername")?
.downcast::<JsString, _>(&mut cx)
{
config.bitcoin_rpc_username = res.value(&mut cx);
}
if let Ok(res) = settings
.get(&mut cx, "bitcoinRpcPassword")?
.downcast::<JsString, _>(&mut cx)
{
config.bitcoin_rpc_password = res.value(&mut cx);
}
if let Ok(res) = settings
.get(&mut cx, "workingDirectory")?
.downcast::<JsString, _>(&mut cx)
{
config.working_directory = res.value(&mut cx);
}
if let Ok(res) = settings
.get(&mut cx, "logs")?
.downcast::<JsBoolean, _>(&mut cx)
{
config.logs_enabled = res.value(&mut cx);
}
let mut ordhook_config = Config::mainnet_default();
ordhook_config.network.bitcoind_rpc_username = config.bitcoin_rpc_username.clone();
ordhook_config.network.bitcoind_rpc_password = config.bitcoin_rpc_password.clone();
ordhook_config.network.bitcoind_rpc_url = config.bitcoin_rpc_url.clone();
ordhook_config.storage.working_dir = config.working_directory.clone();
ordhook_config.logs.chainhook_internals = config.logs_enabled;
ordhook_config.logs.ordinals_internals = config.logs_enabled;
let devnet: OrdinalsIndexer = OrdinalsIndexer::new(&mut cx, ordhook_config);
Ok(cx.boxed(devnet))
}
fn js_stream_blocks(mut cx: FunctionContext) -> JsResult<JsUndefined> {
cx.this()
.downcast_or_throw::<JsBox<OrdinalsIndexer>, _>(&mut cx)?
.stream_blocks()
.or_else(|err| cx.throw_error(err.to_string()))?;
Ok(cx.undefined())
}
fn js_replay_blocks(mut cx: FunctionContext) -> JsResult<JsUndefined> {
let blocks = {
let seq = cx
.argument::<JsArray>(0)?
.root(&mut cx)
.into_inner(&mut cx)
.to_vec(&mut cx)?;
let mut blocks = vec![];
for item in seq.iter() {
let block = item.downcast::<JsNumber, _>(&mut cx).unwrap();
blocks.push(block.value(&mut cx) as u64);
}
blocks
};
cx.this()
.downcast_or_throw::<JsBox<OrdinalsIndexer>, _>(&mut cx)?
.replay_blocks(blocks)
.or_else(|err| cx.throw_error(err.to_string()))?;
Ok(cx.undefined())
}
fn js_drop_blocks(mut cx: FunctionContext) -> JsResult<JsUndefined> {
let blocks = {
let seq = cx
.argument::<JsArray>(0)?
.root(&mut cx)
.into_inner(&mut cx)
.to_vec(&mut cx)?;
let mut blocks = vec![];
for item in seq.iter() {
let block = item.downcast::<JsNumber, _>(&mut cx).unwrap();
blocks.push(block.value(&mut cx) as u64);
}
blocks
};
cx.this()
.downcast_or_throw::<JsBox<OrdinalsIndexer>, _>(&mut cx)?
.drop_blocks(blocks)
.or_else(|err| cx.throw_error(err.to_string()))?;
Ok(cx.undefined())
}
fn js_sync_blocks(mut cx: FunctionContext) -> JsResult<JsUndefined> {
cx.this()
.downcast_or_throw::<JsBox<OrdinalsIndexer>, _>(&mut cx)?
.sync_blocks()
.or_else(|err| cx.throw_error(err.to_string()))?;
Ok(cx.undefined())
}
fn js_rewrite_blocks(mut cx: FunctionContext) -> JsResult<JsUndefined> {
let blocks = {
let seq = cx
.argument::<JsArray>(0)?
.root(&mut cx)
.into_inner(&mut cx)
.to_vec(&mut cx)?;
let mut blocks = vec![];
for item in seq.iter() {
let block = item.downcast::<JsNumber, _>(&mut cx).unwrap();
blocks.push(block.value(&mut cx) as u64);
}
blocks
};
cx.this()
.downcast_or_throw::<JsBox<OrdinalsIndexer>, _>(&mut cx)?
.rewrite_blocks(blocks)
.or_else(|err| cx.throw_error(err.to_string()))?;
Ok(cx.undefined())
}
fn js_terminate(mut cx: FunctionContext) -> JsResult<JsUndefined> {
cx.this()
.downcast_or_throw::<JsBox<OrdinalsIndexer>, _>(&mut cx)?
.terminate()
.or_else(|err| cx.throw_error(err.to_string()))?;
Ok(cx.undefined())
}
fn js_on_block_apply(mut cx: FunctionContext) -> JsResult<JsUndefined> {
let callback = cx.argument::<JsFunction>(0)?.root(&mut cx);
cx.this()
.downcast_or_throw::<JsBox<OrdinalsIndexer>, _>(&mut cx)?
.update_apply_callback(callback)
.or_else(|err| cx.throw_error(err.to_string()))?;
Ok(cx.undefined())
}
fn js_on_block_undo(mut cx: FunctionContext) -> JsResult<JsUndefined> {
let callback = cx.argument::<JsFunction>(0)?.root(&mut cx);
cx.this()
.downcast_or_throw::<JsBox<OrdinalsIndexer>, _>(&mut cx)?
.update_undo_callback(callback)
.or_else(|err| cx.throw_error(err.to_string()))?;
Ok(cx.undefined())
}
}
#[neon::main]
fn main(mut cx: ModuleContext) -> NeonResult<()> {
cx.export_function("ordinalsIndexerNew", OrdinalsIndexer::js_new)?;
cx.export_function(
"ordinalsIndexerStreamBlocks",
OrdinalsIndexer::js_stream_blocks,
)?;
cx.export_function(
"ordinalsIndexerReplayBlocks",
OrdinalsIndexer::js_replay_blocks,
)?;
cx.export_function("ordinalsIndexerDropBlocks", OrdinalsIndexer::js_drop_blocks)?;
cx.export_function("ordinalsIndexerSyncBlocks", OrdinalsIndexer::js_sync_blocks)?;
cx.export_function(
"ordinalsIndexerRewriteBlocks",
OrdinalsIndexer::js_rewrite_blocks,
)?;
cx.export_function("ordinalsIndexerTerminate", OrdinalsIndexer::js_terminate)?;
cx.export_function(
"ordinalsIndexerOnBlockApply",
OrdinalsIndexer::js_on_block_apply,
)?;
cx.export_function(
"ordinalsIndexerOnBlockUndo",
OrdinalsIndexer::js_on_block_undo,
)?;
Ok(())
}
mod ordinals_indexer;


@@ -0,0 +1,360 @@
use core::panic;
use crossbeam_channel::Sender;
use napi::bindgen_prelude::*;
use napi::threadsafe_function::{
ErrorStrategy, ThreadSafeCallContext, ThreadsafeFunction, ThreadsafeFunctionCallMode,
};
use ordhook::chainhook_sdk::chainhooks::bitcoin::BitcoinTransactionPayload;
use ordhook::chainhook_sdk::chainhooks::types::{
BitcoinChainhookFullSpecification, BitcoinChainhookNetworkSpecification, BitcoinPredicateType,
HookAction, OrdinalOperations,
};
use ordhook::chainhook_sdk::observer::DataHandlerEvent;
use ordhook::chainhook_sdk::utils::{BlockHeights, Context as OrdhookContext};
use ordhook::config::Config;
use ordhook::scan::bitcoin::scan_bitcoin_chainstate_via_rpc_using_predicate;
use ordhook::service::Service;
use std::collections::BTreeMap;
use std::thread;
enum IndexerCommand {
StreamBlocks,
ReplayBlocks(Vec<u64>),
SyncBlocks,
DropBlocks(Vec<u64>),
RewriteBlocks(Vec<u64>),
Terminate,
}
type BlockJsHandler = ThreadsafeFunction<BitcoinTransactionPayload, ErrorStrategy::Fatal>;
#[allow(dead_code)]
enum CustomIndexerCommand {
UpdateApplyCallback(BlockJsHandler),
UpdateUndoCallback(BlockJsHandler),
Terminate,
}
struct OrdinalsIndexingRunloop {
pub command_tx: Sender<IndexerCommand>,
pub custom_indexer_command_tx: Sender<CustomIndexerCommand>,
}
impl OrdinalsIndexingRunloop {
pub fn new(ordhook_config: Config) -> Self {
let (command_tx, command_rx) = crossbeam_channel::unbounded();
let (custom_indexer_command_tx, custom_indexer_command_rx) = crossbeam_channel::unbounded();
let logger = hiro_system_kit::log::setup_logger();
let _guard = hiro_system_kit::log::setup_global_logger(logger.clone());
let ctx = OrdhookContext {
logger: Some(logger),
tracer: false,
};
// Initialize service
// {
// let _ = initialize_ordhook_db(&ordhook_config.expected_cache_path(), &ctx);
// let _ = open_readwrite_ordhook_db_conn_rocks_db(&ordhook_config.expected_cache_path(), &ctx);
// }
let mut service: Service = Service::new(ordhook_config, ctx);
// Set up the observer sidecar, used for augmenting the bitcoin blocks with
// ordinals information
let observer_sidecar = service
.set_up_observer_sidecar_runloop()
.expect("unable to setup indexer");
// Prepare internal predicate
let (observer_config, payload_rx) = service
.set_up_observer_config(vec![], true)
.expect("unable to setup indexer");
// Indexing thread
thread::spawn(move || {
let payload_rx = payload_rx.unwrap();
let mut apply_callback: Option<BlockJsHandler> = None;
let mut undo_callback: Option<BlockJsHandler> = None;
loop {
let mut sel = crossbeam_channel::Select::new();
let payload_rx_sel = sel.recv(&payload_rx);
let custom_indexer_command_rx_sel = sel.recv(&custom_indexer_command_rx);
// Block until one of the receivers becomes ready, then dispatch on its index.
let oper = sel.select();
match oper.index() {
i if i == payload_rx_sel => match oper.recv(&payload_rx) {
Ok(DataHandlerEvent::Process(payload)) => {
if let Some(callback) = undo_callback.clone() {
for to_rollback in payload.rollback.into_iter() {
loop {
let (tx, rx) = crossbeam_channel::bounded(1);
callback.call_with_return_value::<bool, _>(to_rollback.clone(), ThreadsafeFunctionCallMode::Blocking, move |p| {
let _ = tx.send(p);
Ok(())
});
match rx.recv() {
Ok(true) => break,
Ok(false) => continue,
_ => panic!(),
}
}
}
}
if let Some(ref callback) = apply_callback.clone() {
for to_apply in payload.apply.into_iter() {
loop {
let (tx, rx) = crossbeam_channel::bounded(1);
callback.call_with_return_value::<bool, _>(to_apply.clone(), ThreadsafeFunctionCallMode::Blocking, move |p| {
let _ = tx.send(p);
Ok(())
});
match rx.recv() {
Ok(true) => break,
Ok(false) => continue,
_ => panic!(),
}
}
}
}
}
Ok(DataHandlerEvent::Terminate) => {
break;
}
Err(e) => {
println!("Error {}", e.to_string());
}
},
i if i == custom_indexer_command_rx_sel => match oper.recv(&custom_indexer_command_rx) {
Ok(CustomIndexerCommand::UpdateApplyCallback(callback)) => {
apply_callback = Some(callback);
}
Ok(CustomIndexerCommand::UpdateUndoCallback(callback)) => {
undo_callback = Some(callback);
}
Ok(CustomIndexerCommand::Terminate) => break,
_ => {}
},
_ => unreachable!(),
};
}
});
// Processing thread
thread::spawn(move || {
loop {
let cmd = match command_rx.recv() {
Ok(cmd) => cmd,
Err(e) => {
panic!("Runloop error: {}", e.to_string());
}
};
match cmd {
IndexerCommand::StreamBlocks => {
// We start the service as soon as the start() method is called.
let future = service.catch_up_with_chain_tip(false, &observer_config);
let _ = hiro_system_kit::nestable_block_on(future).expect("unable to start indexer");
let future = service.start_event_observer(observer_sidecar);
let (command_tx, event_rx) =
hiro_system_kit::nestable_block_on(future).expect("unable to start indexer");
// Blocking call
let _ = service.start_main_runloop(&command_tx, event_rx, None);
break;
}
IndexerCommand::ReplayBlocks(blocks) => {
let network = &service.config.network.bitcoin_network;
let mut networks = BTreeMap::new();
// Retrieve last block height known, and display it
networks.insert(
network.clone(),
BitcoinChainhookNetworkSpecification {
start_block: None,
end_block: None,
blocks: Some(blocks),
expire_after_occurrence: None,
include_proof: None,
include_inputs: None,
include_outputs: None,
include_witness: None,
predicate: BitcoinPredicateType::OrdinalsProtocol(
OrdinalOperations::InscriptionFeed,
),
action: HookAction::Noop,
},
);
let predicate_spec = BitcoinChainhookFullSpecification {
uuid: "replay".to_string(),
owner_uuid: None,
name: "replay".to_string(),
version: 1,
networks,
}
.into_selected_network_specification(&network)
.unwrap();
let future = scan_bitcoin_chainstate_via_rpc_using_predicate(
&predicate_spec,
&service.config,
Some(&observer_config),
&service.ctx,
);
let _ = hiro_system_kit::nestable_block_on(future).expect("unable to start indexer");
if let Some(tx) = observer_config.data_handler_tx {
let _ = tx.send(DataHandlerEvent::Terminate);
}
break;
}
IndexerCommand::DropBlocks(blocks) => {
println!("Will drop blocks {:?}", blocks);
}
IndexerCommand::RewriteBlocks(blocks) => {
println!("Will rewrite blocks {:?}", blocks);
}
IndexerCommand::SyncBlocks => {
println!("Will sync blocks");
}
IndexerCommand::Terminate => {
if let Some(tx) = observer_config.data_handler_tx {
let _ = tx.send(DataHandlerEvent::Terminate);
}
std::process::exit(0);
}
}
}
});
Self {
command_tx,
custom_indexer_command_tx,
}
}
}
#[napi(object)]
pub struct OrdinalsIndexerConfig {
pub bitcoin_rpc_url: Option<String>,
pub bitcoin_rpc_username: Option<String>,
pub bitcoin_rpc_password: Option<String>,
pub working_dir: Option<String>,
pub logs_enabled: Option<bool>,
}
impl OrdinalsIndexerConfig {
pub fn default() -> OrdinalsIndexerConfig {
OrdinalsIndexerConfig {
bitcoin_rpc_url: Some("http://0.0.0.0:8332".to_string()),
bitcoin_rpc_username: Some("devnet".to_string()),
bitcoin_rpc_password: Some("devnet".to_string()),
working_dir: Some("/tmp/ordinals".to_string()),
logs_enabled: Some(true),
}
}
}
#[napi(js_name = "OrdinalsIndexer")]
pub struct OrdinalsIndexer {
runloop: OrdinalsIndexingRunloop,
}
#[napi]
impl OrdinalsIndexer {
#[napi(constructor)]
pub fn new(config_overrides: Option<OrdinalsIndexerConfig>) -> Self {
let mut config = Config::mainnet_default();
if let Some(config_overrides) = config_overrides {
if let Some(bitcoin_rpc_url) = config_overrides.bitcoin_rpc_url {
config.network.bitcoind_rpc_url = bitcoin_rpc_url.clone();
}
if let Some(bitcoin_rpc_username) = config_overrides.bitcoin_rpc_username {
config.network.bitcoind_rpc_username = bitcoin_rpc_username.clone();
}
if let Some(bitcoin_rpc_password) = config_overrides.bitcoin_rpc_password {
config.network.bitcoind_rpc_password = bitcoin_rpc_password.clone();
}
if let Some(working_dir) = config_overrides.working_dir {
config.storage.working_dir = working_dir.clone();
}
if let Some(logs_enabled) = config_overrides.logs_enabled {
config.logs.chainhook_internals = logs_enabled.clone();
}
if let Some(logs_enabled) = config_overrides.logs_enabled {
config.logs.ordinals_internals = logs_enabled;
}
}
let runloop = OrdinalsIndexingRunloop::new(config);
OrdinalsIndexer { runloop }
}
#[napi(
js_name = "onBlock",
ts_args_type = "callback: (block: any) => boolean"
)]
pub fn update_apply_block_callback(&self, apply_block_cb: JsFunction) {
let tsfn: ThreadsafeFunction<BitcoinTransactionPayload, ErrorStrategy::Fatal> = apply_block_cb
.create_threadsafe_function(0, |ctx| ctx.env.to_js_value(&ctx.value).map(|v| vec![v]))
.unwrap();
let _ = self
.runloop
.custom_indexer_command_tx
.send(CustomIndexerCommand::UpdateApplyCallback(tsfn));
}
#[napi(
js_name = "onBlockRollBack",
ts_args_type = "callback: (block: any) => boolean"
)]
pub fn update_undo_block_callback(&self, undo_block_cb: JsFunction) {
let tsfn: ThreadsafeFunction<BitcoinTransactionPayload, ErrorStrategy::Fatal> = undo_block_cb
.create_threadsafe_function(
0,
|ctx: ThreadSafeCallContext<BitcoinTransactionPayload>| {
ctx.env.to_js_value(&ctx.value).map(|v| vec![v])
},
)
.unwrap();
let _ = self
.runloop
.custom_indexer_command_tx
.send(CustomIndexerCommand::UpdateUndoCallback(tsfn));
}
#[napi]
pub fn stream_blocks(&self) {
let _ = self.runloop.command_tx.send(IndexerCommand::StreamBlocks);
}
#[napi]
pub fn replay_blocks(&self, blocks: Vec<i64>) {
let blocks = blocks
.into_iter()
.map(|block| block as u64)
.collect::<Vec<u64>>();
let _ = self
.runloop
.command_tx
.send(IndexerCommand::ReplayBlocks(blocks));
}
#[napi]
pub fn replay_block_range(&self, start_block: i64, end_block: i64) {
let range = BlockHeights::BlockRange(start_block as u64, end_block as u64);
let blocks = range.get_sorted_entries().into_iter().collect();
let _ = self
.runloop
.command_tx
.send(IndexerCommand::ReplayBlocks(blocks));
}
#[napi]
pub fn terminate(&self) {
let _ = self.runloop.command_tx.send(IndexerCommand::Terminate);
}
}
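The apply/undo dispatch above blocks on a one-slot channel until the JS callback acknowledges with `true`, retrying the same payload on `false`. The same handshake, distilled, with a plain closure standing in for the NAPI `ThreadsafeFunction` (an illustrative simplification):

use crossbeam_channel::{bounded, Sender};

// Deliver a payload and wait for a boolean ack on a rendezvous channel;
// redeliver until the consumer reports success.
fn deliver_until_acked(payload: &str, callback: impl Fn(&str, Sender<bool>)) {
    loop {
        let (tx, rx) = bounded(1);
        callback(payload, tx);
        match rx.recv() {
            Ok(true) => break,      // consumer done with this payload
            Ok(false) => continue,  // consumer asked for redelivery
            Err(_) => panic!("callback dropped the ack channel"),
        }
    }
}

fn main() {
    deliver_until_acked("block #767430", |payload, ack| {
        println!("processing {payload}");
        let _ = ack.send(true);
    });
}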


@@ -1,84 +0,0 @@
//! Defines error handling types used by the crate;
//! uses the `error-chain` crate for generation
use neon;
use serde::ser;
use std::convert::From;
use std::fmt::Display;
error_chain! {
errors {
/// nodejs has a hard coded limit on string length
/// trying to serialize a string that is too long will result in an error
StringTooLong(len: usize) {
description("String too long for nodejs")
display("String too long for nodejs len: {}", len)
}
/// when deserializing to a boolean `false` `undefined` `null` `number`
/// are valid inputs
/// any other types will result in error
UnableToCoerce(to_type: &'static str) {
description("Unable to coerce")
display("Unable to coerce value to type: {}", to_type)
}
/// occurs when deserializing a char from an empty string
EmptyString {
description("EmptyString")
display("EmptyString")
}
/// occurs when deserializing a char from a string with
/// more than one character
StringTooLongForChar(len: usize) {
description("String too long to be a char")
display("String too long to be a char expected len: 1 got len: {}", len)
}
/// occurs when a deserializer expects a `null` or `undefined`
/// property and found another type
ExpectingNull {
description("ExpectingNull")
display("ExpectingNull")
}
/// occurs when deserializing to an enum and the source object has
/// a number of properties other than one
InvalidKeyType(key: String) {
description("InvalidKeyType")
display("key: '{}'", key)
}
/// an internal deserialization error from an invalid array
ArrayIndexOutOfBounds(index: u32, length: u32) {
description("ArrayIndexOutOfBounds")
display(
"ArrayIndexOutOfBounds: attempt to access ({}) size: ({})",
index,
length
)
}
#[doc(hidden)]
/// This type of object is not supported
NotImplemented(name: &'static str) {
description("Not Implemented")
display("Not Implemented: '{}'", name)
}
/// A JS exception was thrown
Js(throw: neon::result::Throw) {
description("JS exception")
display("JS exception")
}
/// failed to convert something to f64
CastError {
description("CastError")
display("CastError")
}
}
}
impl ser::Error for Error {
fn custom<T: Display>(msg: T) -> Self {
ErrorKind::Msg(msg.to_string()).into()
}
}
impl From<neon::result::Throw> for Error {
fn from(throw: neon::result::Throw) -> Self {
ErrorKind::Js(throw).into()
}
}


@@ -1,4 +0,0 @@
mod errors;
mod ser;
pub use ser::to_value;


@@ -1,575 +0,0 @@
//!
//! Serialize a Rust data structure into a `JsValue`
//!
use super::errors::Error;
use super::errors::ErrorKind;
use super::errors::Result as LibResult;
use neon::prelude::*;
use num;
use serde::ser::{self, Serialize};
use std::marker::PhantomData;
fn as_num<T: num::cast::NumCast, OutT: num::cast::NumCast>(n: T) -> LibResult<OutT> {
match num::cast::<T, OutT>(n) {
Some(n2) => Ok(n2),
None => bail!(ErrorKind::CastError),
}
}
/// Converts a value of type `V` to a `JsValue`
///
/// # Errors
///
/// * `NumberCastError` trying to serialize a `u64` can fail if it overflows in a cast to `f64`
/// * `StringTooLong` if the string exceeds v8's max string size
///
#[inline]
pub fn to_value<'j, C, V>(cx: &mut C, value: &V) -> LibResult<Handle<'j, JsValue>>
where
C: Context<'j>,
V: Serialize + ?Sized,
{
let serializer = Serializer {
cx,
ph: PhantomData,
};
let serialized_value = value.serialize(serializer)?;
Ok(serialized_value)
}
#[doc(hidden)]
pub struct Serializer<'a, 'j, C: 'a>
where
C: Context<'j>,
{
cx: &'a mut C,
ph: PhantomData<&'j ()>,
}
#[doc(hidden)]
pub struct ArraySerializer<'a, 'j, C: 'a>
where
C: Context<'j>,
{
cx: &'a mut C,
array: Handle<'j, JsArray>,
}
#[doc(hidden)]
pub struct TupleVariantSerializer<'a, 'j, C: 'a>
where
C: Context<'j>,
{
outer_object: Handle<'j, JsObject>,
inner: ArraySerializer<'a, 'j, C>,
}
#[doc(hidden)]
pub struct MapSerializer<'a, 'j, C: 'a>
where
C: Context<'j>,
{
cx: &'a mut C,
object: Handle<'j, JsObject>,
key_holder: Handle<'j, JsObject>,
}
#[doc(hidden)]
pub struct StructSerializer<'a, 'j, C: 'a>
where
C: Context<'j>,
{
cx: &'a mut C,
object: Handle<'j, JsObject>,
}
#[doc(hidden)]
pub struct StructVariantSerializer<'a, 'j, C: 'a>
where
C: Context<'j>,
{
outer_object: Handle<'j, JsObject>,
inner: StructSerializer<'a, 'j, C>,
}
#[doc(hidden)]
impl<'a, 'j, C> ser::Serializer for Serializer<'a, 'j, C>
where
C: Context<'j>,
{
type Ok = Handle<'j, JsValue>;
type Error = Error;
type SerializeSeq = ArraySerializer<'a, 'j, C>;
type SerializeTuple = ArraySerializer<'a, 'j, C>;
type SerializeTupleStruct = ArraySerializer<'a, 'j, C>;
type SerializeTupleVariant = TupleVariantSerializer<'a, 'j, C>;
type SerializeMap = MapSerializer<'a, 'j, C>;
type SerializeStruct = StructSerializer<'a, 'j, C>;
type SerializeStructVariant = StructVariantSerializer<'a, 'j, C>;
#[inline]
fn serialize_bool(self, v: bool) -> Result<Self::Ok, Self::Error> {
Ok(JsBoolean::new(self.cx, v).upcast())
}
#[inline]
fn serialize_i8(self, v: i8) -> Result<Self::Ok, Self::Error> {
Ok(JsNumber::new(self.cx, as_num::<_, f64>(v)?).upcast())
}
#[inline]
fn serialize_i16(self, v: i16) -> Result<Self::Ok, Self::Error> {
Ok(JsNumber::new(self.cx, as_num::<_, f64>(v)?).upcast())
}
#[inline]
fn serialize_i32(self, v: i32) -> Result<Self::Ok, Self::Error> {
Ok(JsNumber::new(self.cx, as_num::<_, f64>(v)?).upcast())
}
#[inline]
fn serialize_i64(self, v: i64) -> Result<Self::Ok, Self::Error> {
Ok(JsNumber::new(self.cx, as_num::<_, f64>(v)?).upcast())
}
#[inline]
fn serialize_i128(self, v: i128) -> Result<Self::Ok, Self::Error> {
Ok(JsNumber::new(self.cx, as_num::<_, f64>(v)?).upcast())
}
#[inline]
fn serialize_u8(self, v: u8) -> Result<Self::Ok, Self::Error> {
Ok(JsNumber::new(self.cx, as_num::<_, f64>(v)?).upcast())
}
#[inline]
fn serialize_u16(self, v: u16) -> Result<Self::Ok, Self::Error> {
Ok(JsNumber::new(self.cx, as_num::<_, f64>(v)?).upcast())
}
#[inline]
fn serialize_u32(self, v: u32) -> Result<Self::Ok, Self::Error> {
Ok(JsNumber::new(self.cx, as_num::<_, f64>(v)?).upcast())
}
#[inline]
fn serialize_u64(self, v: u64) -> Result<Self::Ok, Self::Error> {
Ok(JsNumber::new(self.cx, as_num::<_, f64>(v)?).upcast())
}
#[inline]
fn serialize_u128(self, v: u128) -> Result<Self::Ok, Self::Error> {
Ok(JsNumber::new(self.cx, as_num::<_, f64>(v)?).upcast())
}
#[inline]
fn serialize_f32(self, v: f32) -> Result<Self::Ok, Self::Error> {
Ok(JsNumber::new(self.cx, as_num::<_, f64>(v)?).upcast())
}
#[inline]
fn serialize_f64(self, v: f64) -> Result<Self::Ok, Self::Error> {
Ok(JsNumber::new(self.cx, v).upcast())
}
fn serialize_char(self, v: char) -> Result<Self::Ok, Self::Error> {
let mut b = [0; 4];
let result = v.encode_utf8(&mut b);
let js_str =
JsString::try_new(self.cx, result).map_err(|_| ErrorKind::StringTooLongForChar(4))?;
Ok(js_str.upcast())
}
#[inline]
fn serialize_str(self, v: &str) -> Result<Self::Ok, Self::Error> {
let len = v.len();
let js_str = JsString::try_new(self.cx, v).map_err(|_| ErrorKind::StringTooLong(len))?;
Ok(js_str.upcast())
}
#[inline]
fn serialize_bytes(self, v: &[u8]) -> Result<Self::Ok, Self::Error> {
let mut buff = JsBuffer::new(self.cx, as_num::<_, u32>(v.len())?)?;
self.cx
.borrow_mut(&mut buff, |buff| buff.as_mut_slice().clone_from_slice(v));
Ok(buff.upcast())
}
#[inline]
fn serialize_none(self) -> Result<Self::Ok, Self::Error> {
Ok(JsNull::new(self.cx).upcast())
}
#[inline]
fn serialize_some<T: ?Sized>(self, value: &T) -> Result<Self::Ok, Self::Error>
where
T: Serialize,
{
value.serialize(self)
}
#[inline]
fn serialize_unit(self) -> Result<Self::Ok, Self::Error> {
Ok(JsNull::new(self.cx).upcast())
}
#[inline]
fn serialize_unit_struct(self, _name: &'static str) -> Result<Self::Ok, Self::Error> {
Ok(JsNull::new(self.cx).upcast())
}
#[inline]
fn serialize_unit_variant(
self,
_name: &'static str,
_variant_index: u32,
variant: &'static str,
) -> Result<Self::Ok, Self::Error> {
self.serialize_str(variant)
}
#[inline]
fn serialize_newtype_struct<T: ?Sized>(
self,
_name: &'static str,
value: &T,
) -> Result<Self::Ok, Self::Error>
where
T: Serialize,
{
value.serialize(self)
}
#[inline]
fn serialize_newtype_variant<T: ?Sized>(
self,
_name: &'static str,
_variant_index: u32,
variant: &'static str,
value: &T,
) -> Result<Self::Ok, Self::Error>
where
T: Serialize,
{
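// A newtype variant `E::V(x)` is represented as the object `{ V: x }`.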
let obj = JsObject::new(&mut *self.cx);
let value_js = to_value(self.cx, value)?;
obj.set(self.cx, variant, value_js)?;
Ok(obj.upcast())
}
#[inline]
fn serialize_seq(self, _len: Option<usize>) -> Result<Self::SerializeSeq, Self::Error> {
Ok(ArraySerializer::new(self.cx))
}
#[inline]
fn serialize_tuple(self, _len: usize) -> Result<Self::SerializeTuple, Self::Error> {
Ok(ArraySerializer::new(self.cx))
}
#[inline]
fn serialize_tuple_struct(
self,
_name: &'static str,
_len: usize,
) -> Result<Self::SerializeTupleStruct, Self::Error> {
Ok(ArraySerializer::new(self.cx))
}
#[inline]
fn serialize_tuple_variant(
self,
_name: &'static str,
_variant_index: u32,
variant: &'static str,
_len: usize,
) -> Result<Self::SerializeTupleVariant, Self::Error> {
TupleVariantSerializer::new(self.cx, variant)
}
#[inline]
fn serialize_map(self, _len: Option<usize>) -> Result<Self::SerializeMap, Self::Error> {
Ok(MapSerializer::new(self.cx))
}
#[inline]
fn serialize_struct(
self,
_name: &'static str,
_len: usize,
) -> Result<Self::SerializeStruct, Self::Error> {
Ok(StructSerializer::new(self.cx))
}
#[inline]
fn serialize_struct_variant(
self,
_name: &'static str,
_variant_index: u32,
variant: &'static str,
_len: usize,
) -> Result<Self::SerializeStructVariant, Self::Error> {
StructVariantSerializer::new(self.cx, variant)
}
}
#[doc(hidden)]
impl<'a, 'j, C> ArraySerializer<'a, 'j, C>
where
C: Context<'j>,
{
#[inline]
fn new(cx: &'a mut C) -> Self {
let array = JsArray::new(cx, 0);
ArraySerializer { cx, array }
}
}
#[doc(hidden)]
impl<'a, 'j, C> ser::SerializeSeq for ArraySerializer<'a, 'j, C>
where
C: Context<'j>,
{
type Ok = Handle<'j, JsValue>;
type Error = Error;
fn serialize_element<T: ?Sized>(&mut self, value: &T) -> Result<(), Self::Error>
where
T: Serialize,
{
let value = to_value(self.cx, value)?;
let arr: Handle<'j, JsArray> = self.array;
let len = arr.len(self.cx);
arr.set(self.cx, len, value)?;
Ok(())
}
#[inline]
fn end(self) -> Result<Self::Ok, Self::Error> {
Ok(self.array.upcast())
}
}
impl<'a, 'j, C> ser::SerializeTuple for ArraySerializer<'a, 'j, C>
where
C: Context<'j>,
{
type Ok = Handle<'j, JsValue>;
type Error = Error;
#[inline]
fn serialize_element<T: ?Sized>(&mut self, value: &T) -> Result<(), Self::Error>
where
T: Serialize,
{
ser::SerializeSeq::serialize_element(self, value)
}
#[inline]
fn end(self) -> Result<Self::Ok, Self::Error> {
ser::SerializeSeq::end(self)
}
}
#[doc(hidden)]
impl<'a, 'j, C> ser::SerializeTupleStruct for ArraySerializer<'a, 'j, C>
where
C: Context<'j>,
{
type Ok = Handle<'j, JsValue>;
type Error = Error;
#[inline]
fn serialize_field<T: ?Sized>(&mut self, value: &T) -> Result<(), Self::Error>
where
T: Serialize,
{
ser::SerializeSeq::serialize_element(self, value)
}
#[inline]
fn end(self) -> Result<Self::Ok, Self::Error> {
ser::SerializeSeq::end(self)
}
}
#[doc(hidden)]
impl<'a, 'j, C> TupleVariantSerializer<'a, 'j, C>
where
C: Context<'j>,
{
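// A tuple variant `E::V(a, b)` becomes `{ V: [a, b] }`: the variant name keys
// an inner array that the ArraySerializer fills in.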
fn new(cx: &'a mut C, key: &'static str) -> LibResult<Self> {
let inner_array = JsArray::new(cx, 0);
let outer_object = JsObject::new(cx);
outer_object.set(cx, key, inner_array)?;
Ok(TupleVariantSerializer {
outer_object,
inner: ArraySerializer {
cx,
array: inner_array,
},
})
}
}
#[doc(hidden)]
impl<'a, 'j, C> ser::SerializeTupleVariant for TupleVariantSerializer<'a, 'j, C>
where
C: Context<'j>,
{
type Ok = Handle<'j, JsValue>;
type Error = Error;
#[inline]
fn serialize_field<T: ?Sized>(&mut self, value: &T) -> Result<(), Self::Error>
where
T: Serialize,
{
use serde::ser::SerializeSeq;
self.inner.serialize_element(value)
}
#[inline]
fn end(self) -> Result<Self::Ok, Self::Error> {
Ok(self.outer_object.upcast())
}
}
#[doc(hidden)]
impl<'a, 'j, C> MapSerializer<'a, 'j, C>
where
C: Context<'j>,
{
fn new(cx: &'a mut C) -> Self {
let object = JsObject::new(cx);
let key_holder = JsObject::new(cx);
MapSerializer {
cx,
object,
key_holder,
}
}
}
#[doc(hidden)]
impl<'a, 'j, C> ser::SerializeMap for MapSerializer<'a, 'j, C>
where
C: Context<'j>,
{
type Ok = Handle<'j, JsValue>;
type Error = Error;
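// serde streams map entries as alternating key/value calls, so the key is
// serialized first and parked on `key_holder` until the matching value arrives.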
fn serialize_key<T: ?Sized>(&mut self, key: &T) -> Result<(), Self::Error>
where
T: Serialize,
{
let key = to_value(self.cx, key)?;
self.key_holder.set(self.cx, "key", key)?;
Ok(())
}
fn serialize_value<T: ?Sized>(&mut self, value: &T) -> Result<(), Self::Error>
where
T: Serialize,
{
let key: Handle<'j, JsValue> = self.key_holder.get(&mut *self.cx, "key")?;
let value_obj = to_value(self.cx, value)?;
self.object.set(self.cx, key, value_obj)?;
Ok(())
}
#[inline]
fn end(self) -> Result<Self::Ok, Self::Error> {
Ok(self.object.upcast())
}
}
#[doc(hidden)]
impl<'a, 'j, C> StructSerializer<'a, 'j, C>
where
C: Context<'j>,
{
#[inline]
fn new(cx: &'a mut C) -> Self {
let object = JsObject::new(cx);
StructSerializer { cx, object }
}
}
#[doc(hidden)]
impl<'a, 'j, C> ser::SerializeStruct for StructSerializer<'a, 'j, C>
where
C: Context<'j>,
{
type Ok = Handle<'j, JsValue>;
type Error = Error;
#[inline]
fn serialize_field<T: ?Sized>(
&mut self,
key: &'static str,
value: &T,
) -> Result<(), Self::Error>
where
T: Serialize,
{
let value = to_value(self.cx, value)?;
self.object.set(self.cx, key, value)?;
Ok(())
}
#[inline]
fn end(self) -> Result<Self::Ok, Self::Error> {
Ok(self.object.upcast())
}
}
#[doc(hidden)]
impl<'a, 'j, C> StructVariantSerializer<'a, 'j, C>
where
C: Context<'j>,
{
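// A struct variant `E::V { .. }` becomes `{ V: { .. } }`: an inner object is
// wrapped in an outer object keyed by the variant name.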
fn new(cx: &'a mut C, key: &'static str) -> LibResult<Self> {
let inner_object = JsObject::new(cx);
let outer_object = JsObject::new(cx);
outer_object.set(cx, key, inner_object)?;
Ok(StructVariantSerializer {
outer_object,
inner: StructSerializer {
cx,
object: inner_object,
},
})
}
}
#[doc(hidden)]
impl<'a, 'j, C> ser::SerializeStructVariant for StructVariantSerializer<'a, 'j, C>
where
C: Context<'j>,
{
type Ok = Handle<'j, JsValue>;
type Error = Error;
#[inline]
fn serialize_field<T: ?Sized>(
&mut self,
key: &'static str,
value: &T,
) -> Result<(), Self::Error>
where
T: Serialize,
{
use serde::ser::SerializeStruct;
self.inner.serialize_field(key, value)
}
#[inline]
fn end(self) -> Result<Self::Ok, Self::Error> {
Ok(self.outer_object.upcast())
}
}
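End to end, the serializer above maps Rust values onto plain JavaScript shapes. The TypeScript sketch below is illustrative only (the `Event` enum and all field names are hypothetical); each line mirrors a serde case from the code above.

```typescript
// Hypothetical Rust input: enum Event { Ping, Move(i32, i32), Msg { text: String } }
// JS values the serializer above would produce for each case:

const unitVariant = "Ping";                    // serialize_unit_variant: variant name as a string
const tupleVariant = { Move: [3, 4] };         // serialize_tuple_variant: { variant: [...] }
const structVariant = { Msg: { text: "hi" } }; // serialize_struct_variant: { variant: {...} }

const nothing = null;                          // None, (), and unit structs all serialize to null
const bytes = Buffer.from([1, 2, 3]);          // serialize_bytes: a Node.js Buffer
const record = { a: 1, b: 2 };                 // maps and structs both become plain objects
```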


@@ -1,100 +0,0 @@
{
"compilerOptions": {
/* Visit https://aka.ms/tsconfig.json to read more about this file */
/* Projects */
// "incremental": true, /* Enable incremental compilation */
// "composite": true, /* Enable constraints that allow a TypeScript project to be used with project references. */
// "tsBuildInfoFile": "./", /* Specify the folder for .tsbuildinfo incremental compilation files. */
// "disableSourceOfProjectReferenceRedirect": true, /* Disable preferring source files instead of declaration files when referencing composite projects */
// "disableSolutionSearching": true, /* Opt a project out of multi-project reference checking when editing. */
// "disableReferencedProjectLoad": true, /* Reduce the number of projects loaded automatically by TypeScript. */
/* Language and Environment */
"target": "es5", /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */
// "lib": [], /* Specify a set of bundled library declaration files that describe the target runtime environment. */
// "jsx": "preserve", /* Specify what JSX code is generated. */
// "experimentalDecorators": true, /* Enable experimental support for TC39 stage 2 draft decorators. */
// "emitDecoratorMetadata": true, /* Emit design-type metadata for decorated declarations in source files. */
// "jsxFactory": "", /* Specify the JSX factory function used when targeting React JSX emit, e.g. 'React.createElement' or 'h' */
// "jsxFragmentFactory": "", /* Specify the JSX Fragment reference used for fragments when targeting React JSX emit e.g. 'React.Fragment' or 'Fragment'. */
// "jsxImportSource": "", /* Specify module specifier used to import the JSX factory functions when using `jsx: react-jsx*`.` */
// "reactNamespace": "", /* Specify the object invoked for `createElement`. This only applies when targeting `react` JSX emit. */
// "noLib": true, /* Disable including any library files, including the default lib.d.ts. */
// "useDefineForClassFields": true, /* Emit ECMAScript-standard-compliant class fields. */
/* Modules */
"module": "commonjs", /* Specify what module code is generated. */
// "rootDir": "./", /* Specify the root folder within your source files. */
// "moduleResolution": "node", /* Specify how TypeScript looks up a file from a given module specifier. */
// "baseUrl": "./", /* Specify the base directory to resolve non-relative module names. */
// "paths": {}, /* Specify a set of entries that re-map imports to additional lookup locations. */
// "rootDirs": [], /* Allow multiple folders to be treated as one when resolving modules. */
// "typeRoots": [], /* Specify multiple folders that act like `./node_modules/@types`. */
// "types": [], /* Specify type package names to be included without being referenced in a source file. */
// "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */
// "resolveJsonModule": true, /* Enable importing .json files */
// "noResolve": true, /* Disallow `import`s, `require`s or `<reference>`s from expanding the number of files TypeScript should add to a project. */
/* JavaScript Support */
// "allowJs": true, /* Allow JavaScript files to be a part of your program. Use the `checkJS` option to get errors from these files. */
// "checkJs": true, /* Enable error reporting in type-checked JavaScript files. */
// "maxNodeModuleJsDepth": 1, /* Specify the maximum folder depth used for checking JavaScript files from `node_modules`. Only applicable with `allowJs`. */
/* Emit */
"declaration": true, /* Generate .d.ts files from TypeScript and JavaScript files in your project. */
"declarationMap": true, /* Create sourcemaps for d.ts files. */
// "emitDeclarationOnly": true, /* Only output d.ts files and not JavaScript files. */
"sourceMap": true, /* Create source map files for emitted JavaScript files. */
// "outFile": "./", /* Specify a file that bundles all outputs into one JavaScript file. If `declaration` is true, also designates a file that bundles all .d.ts output. */
"outDir": "./dist", /* Specify an output folder for all emitted files. */
// "removeComments": true, /* Disable emitting comments. */
// "noEmit": true, /* Disable emitting files from a compilation. */
// "importHelpers": true, /* Allow importing helper functions from tslib once per project, instead of including them per-file. */
// "importsNotUsedAsValues": "remove", /* Specify emit/checking behavior for imports that are only used for types */
// "downlevelIteration": true, /* Emit more compliant, but verbose and less performant JavaScript for iteration. */
// "sourceRoot": "", /* Specify the root path for debuggers to find the reference source code. */
// "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */
// "inlineSourceMap": true, /* Include sourcemap files inside the emitted JavaScript. */
// "inlineSources": true, /* Include source code in the sourcemaps inside the emitted JavaScript. */
// "emitBOM": true, /* Emit a UTF-8 Byte Order Mark (BOM) in the beginning of output files. */
// "newLine": "crlf", /* Set the newline character for emitting files. */
// "stripInternal": true, /* Disable emitting declarations that have `@internal` in their JSDoc comments. */
// "noEmitHelpers": true, /* Disable generating custom helper functions like `__extends` in compiled output. */
// "noEmitOnError": true, /* Disable emitting files if any type checking errors are reported. */
// "preserveConstEnums": true, /* Disable erasing `const enum` declarations in generated code. */
// "declarationDir": "./", /* Specify the output directory for generated declaration files. */
/* Interop Constraints */
// "isolatedModules": true, /* Ensure that each file can be safely transpiled without relying on other imports. */
// "allowSyntheticDefaultImports": true, /* Allow 'import x from y' when a module doesn't have a default export. */
"esModuleInterop": true, /* Emit additional JavaScript to ease support for importing CommonJS modules. This enables `allowSyntheticDefaultImports` for type compatibility. */
// "preserveSymlinks": true, /* Disable resolving symlinks to their realpath. This correlates to the same flag in node. */
"forceConsistentCasingInFileNames": true, /* Ensure that casing is correct in imports. */
/* Type Checking */
"strict": true, /* Enable all strict type-checking options. */
// "noImplicitAny": true, /* Enable error reporting for expressions and declarations with an implied `any` type.. */
// "strictNullChecks": true, /* When type checking, take into account `null` and `undefined`. */
// "strictFunctionTypes": true, /* When assigning functions, check to ensure parameters and the return values are subtype-compatible. */
// "strictBindCallApply": true, /* Check that the arguments for `bind`, `call`, and `apply` methods match the original function. */
// "strictPropertyInitialization": true, /* Check for class properties that are declared but not set in the constructor. */
// "noImplicitThis": true, /* Enable error reporting when `this` is given the type `any`. */
// "useUnknownInCatchVariables": true, /* Type catch clause variables as 'unknown' instead of 'any'. */
// "alwaysStrict": true, /* Ensure 'use strict' is always emitted. */
// "noUnusedLocals": true, /* Enable error reporting when a local variables aren't read. */
// "noUnusedParameters": true, /* Raise an error when a function parameter isn't read */
// "exactOptionalPropertyTypes": true, /* Interpret optional property types as written, rather than adding 'undefined'. */
// "noImplicitReturns": true, /* Enable error reporting for codepaths that do not explicitly return in a function. */
// "noFallthroughCasesInSwitch": true, /* Enable error reporting for fallthrough cases in switch statements. */
// "noUncheckedIndexedAccess": true, /* Include 'undefined' in index signature results */
// "noImplicitOverride": true, /* Ensure overriding members in derived classes are marked with an override modifier. */
// "noPropertyAccessFromIndexSignature": true, /* Enforces using indexed accessors for keys declared using an indexed type */
// "allowUnusedLabels": true, /* Disable error reporting for unused labels. */
// "allowUnreachableCode": true, /* Disable error reporting for unreachable code. */
/* Completeness */
// "skipDefaultLibCheck": true, /* Skip type checking .d.ts files that are included with TypeScript. */
"skipLibCheck": true, /* Skip type checking all .d.ts files. */
}
}

File diff suppressed because it is too large


@@ -2,17 +2,41 @@ FROM rust:bullseye as build
WORKDIR /src
RUN apt update && apt install -y ca-certificates pkg-config libssl-dev libclang-11-dev
RUN apt-get update && apt-get install -y ca-certificates pkg-config libssl-dev libclang-11-dev curl gnupg
RUN rustup update 1.72.0 && rustup default 1.72.0
COPY ./components/ordhook-cli /src/components/ordhook-cli
RUN mkdir /out
ENV NODE_MAJOR=18
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
RUN echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list
RUN apt-get update
RUN apt-get install nodejs -y
RUN npm install -g @napi-rs/cli yarn
COPY ./components/ordhook-core /src/components/ordhook-core
WORKDIR /src/components/ordhook-cli
COPY ./components/ordhook-sdk-js /src/components/ordhook-sdk-js
RUN mkdir /out
COPY ./components/ordhook-cli /src/components/ordhook-cli
WORKDIR /src/components/ordhook-sdk-js
RUN yarn install
RUN yarn build
RUN cp *.node /out
WORKDIR /src/components/ordhook-cli
RUN cargo build --features release --release
@@ -20,8 +44,14 @@ RUN cp target/release/ordhook /out
FROM debian:bullseye-slim
WORKDIR /ordhook-sdk-js
RUN apt update && apt install -y ca-certificates libssl-dev
COPY --from=build /out/*.node /ordhook-sdk-js/
COPY --from=build /out/ordhook /bin/ordhook
WORKDIR /workspace


@@ -2,7 +2,7 @@
title: Getting Started
---
Ordhook is a utility designed for interacting with the [Ordinal Theory](https://trustmachines.co/glossary/ordinal-theory) protocol, enabling you to assign distinct identifiers to newly minted satoshis for diverse applications. Follow the steps below to install Ordhook.
Ordhook is a developer tool designed for interacting with [Bitcoin ordinals](https://www.hiro.so/blog/what-are-bitcoin-ordinals), enabling you to retrieve ordinal activity from the Bitcoin chain. Follow the steps below to install Ordhook.
## Installing Ordhook
@@ -31,6 +31,10 @@ You are now inside the Ordhook directory.
### Install Ordhook
> **_NOTE:_**
>
> Ordhook requires Rust to be installed on your system. If you haven't installed Rust yet, you can do so by following the instructions on the [official Rust website](https://www.rust-lang.org/tools/install).
Use the Rust package manager, Cargo, to install Ordhook. Run the following command:
```bash
@@ -38,3 +42,10 @@ cargo ordhook-install
```
This command compiles the Ordhook code and installs it on your system.
### Next Steps
With Ordhook installed, you can:
- Scan blocks in your terminal. See the [Scanning Ordinal Activities](./how-to-guides/how-to-scan-ordinal-activities.md) guide.
- Stream ordinal activity to an external API. Refer to the [Streaming Ordinal Activities](./how-to-guides/how-to-stream-ordinal-activities.md) guide.


@@ -1,109 +0,0 @@
---
title: Explore Ordinal activities in your terminal
---
Ordhook can be used to extract ordinal activities from the Bitcoin chain and stream them to an indexer. This guide walks you through exploring ordinal activities and posting them to the indexer.
### Explore ordinal activity
You can use the following command to scan a range of blocks on mainnet or testnet.
`ordhook scan blocks 767430 767753 --mainnet`
The above command generates a `hord.sqlite.gz` file in your directory and displays the inscription and transfer activities occurring in the specified block range.
The output of the above command looks like this:
```
Inscription 6fb976ab49dcec017f1e201e84395983204ae1a7c2abf7ced0a85d692e442799i0 revealed at block #767430 (ordinal_number 1252201400444387, inscription_number 0)
Inscription 26482871f33f1051f450f2da9af275794c0b5f1c61ebf35e4467fb42c2813403i0 revealed at block #767753 (ordinal_number 727624168684699, inscription_number 1)
```
You can now generate an activity for a given inscription by using the following command:
`ordhook scan inscription 6fb976ab49dcec017f1e201e84395983204ae1a7c2abf7ced0a85d692e442799i0 --mainnet`
The above command generates the ordinal activity for that inscription, including the number of transfers across transactions.
```
Inscription 6fb976ab49dcec017f1e201e84395983204ae1a7c2abf7ced0a85d692e442799i0 revealed at block #767430 (inscription_number 0, ordinal_number 1252201400444387)
→ Transferred in transaction 0x2c8a11858825ae2056be90c3e49938d271671ac4245b452cd88b1475cbea8971 (block #785391)
→ Transferred in transaction 0xbc4c30829a9564c0d58e6287195622b53ced54a25711d1b86be7cd3a70ef61ed (block #785396)
Number of transfers: 2
```
### Configure ordhook
This section walks you through streaming ordinal activities to an indexer. To post the ordinal activity, you'll need to configure bitcoind. Refer to [Setting up a bitcoin node](https://docs.hiro.so/chainhook/how-to-guides/how-to-run-chainhook-as-a-service-using-bitcoind#setting-up-a-bitcoin-node) for the steps to configure bitcoind.
> **_NOTE_**
> Ordhook is applicable to the Bitcoin chain only.
Once the Bitcoin node is configured, you can use the following command in your terminal to create a configuration for Ordhook.
`ordhook config new --mainnet`
You will see a success message "Created file Ordhook.toml" in your terminal.
The generated `Ordhook.toml` file looks like this:
```toml
[storage]
working_dir = "ordhook"
# The Http Api allows you to register / deregister
# dynamically predicates.
# Disable by default.
#
# [http_api]
# http_port = 20456
# database_uri = "redis://localhost:6379/"
[network]
mode = "mainnet"
bitcoind_rpc_url = "http://0.0.0.0:8332"
bitcoind_rpc_username = "devnet"
bitcoind_rpc_password = "devnet"
# Bitcoin block events can be received by Chainhook
# either through a Bitcoin node's ZeroMQ interface,
# or through the Stacks node. Zmq is being
# used by default:
bitcoind_zmq_url = "tcp://0.0.0.0:18543"
# but stacks can also be used:
# stacks_node_rpc_url = "http://0.0.0.0:20443"
[limits]
max_number_of_bitcoin_predicates = 100
max_number_of_concurrent_bitcoin_scans = 100
max_number_of_processing_threads = 16
bitcoin_concurrent_http_requests_max = 16
max_caching_memory_size_mb = 32000
# Disable the following section if the state
# must be built locally
[bootstrap]
download_url = "https://archive.hiro.so/mainnet/ordhook/mainnet-ordhook-sqlite-latest"
[logs]
ordinals_internals = true
chainhook_internals = true
```
Observe that the configured bitcoind fields appear in the `Ordhook.toml` file. Ensure these fields are set to the correct values and URLs, as shown below:
```toml
bitcoind_rpc_url = "http://0.0.0.0:8332"
bitcoind_rpc_username = "devnet"
bitcoind_rpc_password = "devnet"
bitcoind_zmq_url = "tcp://0.0.0.0:18543"
```
### Post ordinal activity to the indexer
After adjusting the `Ordhook.toml` settings to match the bitcoind configuration, run the following command to scan blocks and post events to a local server:
`ordhook scan blocks 767430 767753 --post-to=http://localhost:3000/api/events --config-path=./Ordhook.toml`
The above command uses Chainhook to create an ordinal theory predicate that triggers when inscriptions are created, and posts the events to `http://localhost:3000/api/events`.
You can update the block heights in the above command to scan a different range and post those events to the local server.


@@ -0,0 +1,157 @@
---
title: Run Ordhook as a Service Using Bitcoind
---
## Prerequisites
### Setting Up a Bitcoin Node
The Bitcoin Core daemon (bitcoind) is a program that implements the Bitcoin protocol for remote procedure call (RPC) use. Ordhook can be set up to interact with the Bitcoin chainstate through bitcoind's ZeroMQ interface, its embedded networking library, passing raw blockchain data to be evaluated for relevant events.
This guide is written to work with the latest Bitcoin Core software containing bitcoind, [Bitcoin Core 25.0](https://bitcoincore.org/bin/bitcoin-core-25.0/).
> **_NOTE:_**
>
> While bitcoind can and will start syncing a Bitcoin node, customizing this node for your use cases beyond supporting Ordhook is out of scope for this guide. See the Bitcoin wiki's ["Running Bitcoin"](https://en.bitcoin.it/wiki/Running_Bitcoin) page or the bitcoin.org [Running A Full Node guide](https://bitcoin.org/en/full-node).
- Make note of the path of your `bitcoind` executable (located within the `bin` directory of the `bitcoin-25.0` folder you downloaded above, as appropriate for your operating system)
- Navigate to your project folder where your Ordhook node will reside, create a new file, and rename it to `bitcoin.conf`. Copy the configuration below to this `bitcoin.conf` file.
- Find and copy your Bitcoin data directory and paste it into the `datadir` field in the `bitcoin.conf` file below. Either copy the default path (see [list of default directories by operating system](https://en.bitcoin.it/wiki/Data_directory)) or copy the custom path you set for your Bitcoin data.
- Set a username of your choice for bitcoind and use it in the `rpcuser` configuration below (`devnet` is a default).
- Set a password of your choice for bitcoind and use it in the `rpcpassword` configuration below (`devnet` is a default).
```conf
# Bitcoin Core Configuration
datadir=/path/to/bitcoin/directory/ # Path to Bitcoin directory
server=1
rpcuser=devnet
rpcpassword=devnet
rpcport=8332
rpcallowip=0.0.0.0/0
rpcallowip=::/0
txindex=1
listen=1
discover=0
dns=0
dnsseed=0
listenonion=0
rpcserialversion=1
disablewallet=0
fallbackfee=0.00001
rpcthreads=8
blocksonly=1
dbcache=4096
# Start zeromq
zmqpubhashblock=tcp://0.0.0.0:18543
```
> **_NOTE:_**
>
> The command below starts a process that, if this is your first time syncing a node, might take from a few hours to a few days to run. Alternatively, if the directory pointed to by the `datadir` field above already contains Bitcoin blockchain data, syncing will resume from where it left off.
Now that you have the `bitcoin.conf` file ready with the bitcoind configurations, you can run the bitcoind node. The command takes the form `path/to/bitcoind -conf=path/to/bitcoin.conf`, for example:
```console
/Volumes/SSD/bitcoin-25.0/bin/bitcoind -conf=/Volumes/SSD/project/Ordhook/bitcoin.conf
```
Once the above command runs, you will see `zmq_url` entries in the console's stdout, displaying ZeroMQ logs of your Bitcoin node.
### Configure Ordhook
In this section, you will configure Ordhook to match the network configurations with the bitcoin config file. First, [install the latest version of Ordhook](../getting-started.md#installing-ordhook).
Next, you will generate an `Ordhook.toml` file to connect Ordhook with your bitcoind node. Navigate to the directory where you want to generate the `Ordhook.toml` file and use the following command in your terminal:
```console
ordhook config generate --mainnet
```
Several network parameters in the generated `Ordhook.toml` configuration file need to match those in the `bitcoin.conf` file created earlier in the [Setting up a Bitcoin Node](#setting-up-a-bitcoin-node) section. Please update the following parameters accordingly:
1. Update `bitcoind_rpc_username` with the username set for `rpcuser` in `bitcoin.conf`.
2. Update `bitcoind_rpc_password` with the password set for `rpcpassword` in `bitcoin.conf`.
3. Update `bitcoind_rpc_url` with the same host and port used for `rpcport` in `bitcoin.conf`.
Additionally, if you want to receive events from the configured Bitcoin node, substitute `stacks_node_rpc_url` with `bitcoind_zmq_url`, as follows:
```toml
[storage]
working_dir = "ordhook"
# The Http Api allows you to register / deregister
# dynamically predicates.
# Disable by default.
#
# [http_api]
# http_port = 20456
# database_uri = "redis://localhost:6379/"
[network]
mode = "mainnet"
bitcoind_rpc_url = "http://0.0.0.0:8332"
bitcoind_rpc_username = "devnet"
bitcoind_rpc_password = "devnet"
# Bitcoin block events can be received by Chainhook
# either through a Bitcoin node's ZeroMQ interface,
# or through the Stacks node. Zmq is being
# used by default:
bitcoind_zmq_url = "tcp://0.0.0.0:18543"
# but stacks can also be used:
# stacks_node_rpc_url = "http://0.0.0.0:20443"
[limits]
max_number_of_bitcoin_predicates = 100
max_number_of_concurrent_bitcoin_scans = 100
max_number_of_processing_threads = 16
bitcoin_concurrent_http_requests_max = 16
max_caching_memory_size_mb = 32000
# Disable the following section if the state
# must be built locally
[bootstrap]
download_url = "https://archive.hiro.so/mainnet/ordhook/mainnet-ordhook-sqlite-latest"
[logs]
ordinals_internals = true
chainhook_internals = true
```
Here is a table of the relevant parameters this guide changes in our configuration files.
| bitcoin.conf | Ordhook.toml |
| --------------- | --------------------- |
| rpcuser | bitcoind_rpc_username |
| rpcpassword | bitcoind_rpc_password |
| rpcport | bitcoind_rpc_url |
| zmqpubhashblock | bitcoind_zmq_url |
## Initiate Ordhook Service
In this section, you'll learn how to run Ordhook as a service using [Ordhook SDK](https://github.com/hirosystems/ordhook/tree/develop/components/ordhook-sdk-js) to post events to a server.
Use the following command to start the Ordhook service for streaming and processing new blocks appended to the Bitcoin blockchain:
`ordhook service start --post-to=http://localhost:3000/api/events --config-path=./Ordhook.toml`
When the Ordhook service starts, it runs in the background, augmenting the blocks received from Bitcoin. Bitcoind sends ZeroMQ notifications to Ordhook, which retrieves the inscriptions.
### Add `http-post` endpoints dynamically
To dynamically register new event endpoints with the server, spin up a Redis server and enable the following lines in the `Ordhook.toml` file:
```toml
[http_api]
http_port = 20456
database_uri = "redis://localhost:6379/"
```
## Run ordhook service
Based on the `Ordhook.toml` file configuration, the ordhook service spins up an HTTP API to manage event destinations. Use the following command to start the ordhook service:
`ordhook service start --config-path=./Ordhook.toml`
A comprehensive OpenAPI specification explaining how to interact with this HTTP REST API can be found [here](https://github.com/hirosystems/ordhook/blob/develop/docs/ordhook-openapi.json).
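As a rough sketch of how a client might call that API (the port matches the `http_api` section above, but the route and request body below are placeholders, not verified endpoints; consult the OpenAPI specification for the real contract):

```typescript
// Placeholder route and payload: check docs/ordhook-openapi.json for the actual API.
async function registerEventDestination(): Promise<void> {
  const res = await fetch("http://localhost:20456/v1/observers", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // Hypothetical body: forward ordinal events to a local receiver.
    body: JSON.stringify({ "post-to": "http://localhost:3000/api/events" }),
  });
  console.log("registration status:", res.status);
}

registerEventDestination().catch(console.error);
```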


@@ -0,0 +1,36 @@
---
title: Scan Ordinal Activities in Your Terminal
---
Ordhook is a tool that helps you find ordinal activity on the Bitcoin chain. Think of it like a detective that can find and track this activity for you. Once it finds any activity, that data can be used to help build your own database. This guide will show you how to use Ordhook to scan this activity on the Bitcoin chain in your terminal.
### Explore Ordinal Activity
You can use the following command to scan a range of blocks on mainnet or testnet:
`ordhook scan blocks 767430 767753 --mainnet`
The above command generates a `hord.sqlite.gz` file in your directory and displays the inscription and transfer activities occurring in the specified block range.
> **_NOTE_**
> By default, Ordhook creates a folder named `ordhook` in your current directory and stores the `hord.sqlite` file there. This file is used to pull in the latest ordinal data and scan against it based on the block numbers you provide.
The output of the above command looks like this:
```
Inscription 6fb976ab49dcec017f1e201e84395983204ae1a7c2abf7ced0a85d692e442799i0 revealed at block #767430 (inscription_number 0, ordinal_number 1252201400444387)
Inscription 26482871f33f1051f450f2da9af275794c0b5f1c61ebf35e4467fb42c2813403i0 revealed at block #767753 (inscription_number 1, ordinal_number 727624168684699)
```
You can also get activity for a given inscription by using the following command:
`ordhook scan inscription 6fb976ab49dcec017f1e201e84395983204ae1a7c2abf7ced0a85d692e442799i0 --mainnet`
The above command returns the ordinal activity for that inscription, including the number of transfers across transactions.
```
Inscription 6fb976ab49dcec017f1e201e84395983204ae1a7c2abf7ced0a85d692e442799i0 revealed at block #767430 (inscription_number 0, ordinal_number 1252201400444387)
→ Transferred in transaction 0x2c8a11858825ae2056be90c3e49938d271671ac4245b452cd88b1475cbea8971 (block #785391)
→ Transferred in transaction 0xbc4c30829a9564c0d58e6287195622b53ced54a25711d1b86be7cd3a70ef61ed (block #785396)
Number of transfers: 2
```


@@ -0,0 +1,79 @@
---
title: Stream Ordinal Activities to an API
---
Ordhook is a tool that helps you find ordinal activity from the Bitcoin chain and build your own customized database with that data. This guide will show you how to use Ordhook to stream ordinal activity.
### Configure Ordhook
This section walks you through streaming ordinal activities. To post the ordinal activity, you'll need to configure bitcoind. Refer to [Setting up a bitcoin node](./how-to-run-ordhook-as-a-service-using-bitcoind.md#setting-up-a-bitcoin-node) for the steps to configure bitcoind.
> **_NOTE_**
> Ordhook is applicable to the Bitcoin chain only.
Once the Bitcoin node is configured, you can use the following command in your terminal to create a configuration for Ordhook:
`ordhook config new --mainnet`
You will see a success message "Created file Ordhook.toml" in your terminal.
The generated `Ordhook.toml` file looks like this:
```toml
[storage]
working_dir = "ordhook"
# The Http Api allows you to register / deregister
# dynamically predicates.
# Disable by default.
#
# [http_api]
# http_port = 20456
# database_uri = "redis://localhost:6379/"
[network]
mode = "mainnet"
bitcoind_rpc_url = "http://0.0.0.0:8332"
bitcoind_rpc_username = "devnet"
bitcoind_rpc_password = "devnet"
# Bitcoin block events can be received by Chainhook
# either through a Bitcoin node's ZeroMQ interface,
# or through the Stacks node. Zmq is being
# used by default:
bitcoind_zmq_url = "tcp://0.0.0.0:18543"
# but stacks can also be used:
# stacks_node_rpc_url = "http://0.0.0.0:20443"
[limits]
max_number_of_bitcoin_predicates = 100
max_number_of_concurrent_bitcoin_scans = 100
max_number_of_processing_threads = 16
bitcoin_concurrent_http_requests_max = 16
max_caching_memory_size_mb = 32000
# Disable the following section if the state
# must be built locally
[bootstrap]
download_url = "https://archive.hiro.so/mainnet/ordhook/mainnet-ordhook-sqlite-latest"
[logs]
ordinals_internals = true
chainhook_internals = true
```
Observe that the configured bitcoind fields appear in the `Ordhook.toml` file. Ensure these fields are set to the correct values and URLs, as shown below:
```toml
bitcoind_rpc_url = "http://0.0.0.0:8332"
bitcoind_rpc_username = "devnet"
bitcoind_rpc_password = "devnet"
bitcoind_zmq_url = "tcp://0.0.0.0:18543"
```
### Post Ordinal Activity to an External Endpoint
After adjusting the `Ordhook.toml` settings to match the bitcoind configuration, run the following command to scan blocks and post events to a local server:
`ordhook scan blocks 767430 767753 --post-to=http://localhost:3000/api/events --config-path=./Ordhook.toml`
The above command uses Ordhook to stream and then post ordinal activities to `http://localhost:3000/api/events`, where you can build your own database or custom views.
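If nothing is listening at that address yet, a minimal receiver is enough to inspect the payloads. The following sketch (TypeScript with Express; the payload handling is an assumption, not a documented schema) simply acknowledges and logs each batch of events:

```typescript
import express from "express";

const app = express();
// Ordhook POSTs JSON payloads; raise the body limit, since block payloads can be large.
app.use(express.json({ limit: "50mb" }));

app.post("/api/events", (req, res) => {
  // The payload shape is not assumed here: log a preview and inspect it yourself.
  console.log("ordinal activity received:", JSON.stringify(req.body).slice(0, 200));
  res.sendStatus(200);
});

app.listen(3000, () => console.log("Listening on http://localhost:3000/api/events"));
```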


@@ -1,38 +0,0 @@
---
title: Run Ordhook as a Service using Bitcoind
---
## Run ordhook as a service using Bitcoind
In this section, you'll learn how to run Ordhook as a service using the [Chainhook SDK](https://github.com/hirosystems/chainhook/tree/develop/components/chainhook-sdk) to create an ordinal theory predicate and post events to a server.
## Prerequisites
- [Bitcoind configuration](https://docs.hiro.so/chainhook/how-to-guides/how-to-run-chainhook-as-a-service-using-bitcoind#setting-up-a-bitcoin-node)
- Configure ordhook by following [this](./how-to-explore-ordinal-activities.md#configure-ordhook) section
## Run ordhook service for streaming blocks
Use the following command to start the ordhook service for streaming and processing new blocks appended to the Bitcoin blockchain.
`ordhook service start --post-to=http://localhost:3000/api/events --config-path=./Ordhook.toml`
When the ordhook service starts, the chainhook service is initiated in the background to augment the blocks that bitcoind sends. Bitcoind sends ZeroMQ notifications to Ordhook, which retrieves the inscriptions.
### Add `http-post` endpoints dynamically
To dynamically register new event endpoints with the server, spin up a Redis server and enable the following lines in the `Ordhook.toml` file:
```toml
[http_api]
http_port = 20456
database_uri = "redis://localhost:6379/"
```
## Run ordhook service
Based on the `Ordhook.toml` file configuration, the ordhook service spins up an HTTP API to manage event destinations. Use the following command to start the ordhook service.
`ordhook service start --config-path=./Ordhook.toml`
A comprehensive OpenAPI specification explaining how to interact with this HTTP REST API can be found [here](https://github.com/hirosystems/chainhook/blob/develop/docs/chainhook-openapi.json).

docs/ordhook-openapi.json (new file, 1159 lines)

File diff suppressed because it is too large

@@ -1,17 +1,21 @@
---
title: Ordhook Overview
title: Overview
---
## Ordinal Theory
The Ordinal Theory is a protocol with a specific goal: assigning unique identifiers to newly created satoshis (sats). The protocol achieves this by introducing a numbering scheme. This numbering allows for the inclusion of arbitrary content on these satoshis. This content is referred to as _inscriptions_. These inscriptions essentially turn satoshis into Bitcoin-native digital artifacts, often referred to as Non-Fungible Tokens (NFTs). This means that you can associate additional information or data with individual satoshis.
Ordinal Theory is a protocol with a specific goal: assigning unique identifiers to every single satoshi (sat). The protocol achieves this by introducing a numbering scheme. This numbering allows for the inclusion of arbitrary content on these satoshis.
This content is referred to as _inscriptions_. These inscriptions essentially turn satoshis into Bitcoin-native digital artifacts, often referred to as Non-Fungible Tokens (NFTs). This means that you can associate additional information or data with individual satoshis.
Ordinal Theory accomplishes this without requiring the use of sidechains or creating separate tokens. This makes it an attractive option for those looking to integrate, expand, or utilize this functionality within the Bitcoin ecosystem.
Satoshis with inscriptions can be transferred through standard Bitcoin transactions, sent to regular Bitcoin addresses, and held in Bitcoin Unspent Transaction Outputs (UTXOs). The uniqueness of these transactions is identified while sending individual satoshis with inscriptions, transactions must be crafted to control the order and value of inputs and outputs following the rules defined by Ordinal Theory.
Satoshis with inscriptions can be transferred through standard Bitcoin transactions, sent to regular Bitcoin addresses, and held in Bitcoin Unspent Transaction Outputs (UTXOs). The uniqueness of these transactions is identified while sending individual satoshis with inscriptions, and transactions must be crafted to control the order and value of inputs and outputs following the rules defined by Ordinal Theory.
## Ordhook
Ordhook is an indexer designed to assist developers in creating new applications that are resistant to re-organizations (re-orgs) and are built on top of the [Ordinal Theory](https://trustmachines.co/glossary/ordinal-theory) protocol. Its primary purpose is to simplify the process for both protocol developers and users to track and discover the ownership of Ordinal Theory's inscriptions. It also provides a wealth of information about each inscription.
Ordhook is an indexer designed to assist developers in creating new applications that are resistant to blockchain re-organizations (re-orgs) and are built on top of the [Ordinal Theory](https://trustmachines.co/glossary/ordinal-theory) protocol. Its primary purpose is to simplify the process for both protocol developers and users to track and discover the ownership of Ordinal Theory's inscriptions. It also provides a wealth of information about each inscription.
Ordhook utilizes the Chainhook Software Development Kit (SDK) from the Chainhook project. This SDK is a transaction indexing engine that is aware of re-orgs and is designed for use with both Stacks and Bitcoin. The Chainhook SDK operates on first-class event-driven principles. This means it efficiently extracts transaction data from blocks and maintains a consistent view of the state of the blockchain. This helps ensure that the data retrieved remains accurate and up-to-date.
Ordhook offers a valuable tool for Bitcoin developers. They can rely on it to implement feature-rich protocols and business models that make use of near-real-time information related to Ordinal inscriptions and their transfers.
Ordhook offers a valuable tool for Bitcoin developers. They can rely on it to implement feature-rich protocols and business models that make use of near-real-time information related to ordinal inscriptions and their transfers.

ordhook.code-workspace (new file, 11 lines)

@@ -0,0 +1,11 @@
{
"folders": [
{
"path": "."
},
{
"path": "../ordinals-api"
}
],
"settings": {}
}

package-lock.json (generated, new file, 6 lines)

@@ -0,0 +1,6 @@
{
"name": "ordhook",
"lockfileVersion": 3,
"requires": true,
"packages": {}
}