Commit Graph

19 Commits

Author SHA1 Message Date
Jude Nelson
31144dd34f no need for get_db_state or bloom filters 2016-09-14 16:12:20 -04:00
Jude Nelson
c19cda29eb don't log unparsable zonefiles 2016-09-13 22:26:16 -04:00
Jude Nelson
63fb35a8d8 use atlasdb_open to set the proper row factory 2016-09-13 22:20:40 -04:00
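The commit above mentions setting "the proper row factory" when opening the Atlas DB. A minimal sketch of what that means in Python's sqlite3, assuming a simplified schema; `atlasdb_open` here is an illustrative stand-in, not the actual Blockstack function:

```python
import sqlite3

def atlasdb_open(path):
    # Hypothetical sketch: open the Atlas DB and set the row factory so
    # queries return name-addressable rows instead of bare tuples.
    con = sqlite3.connect(path, isolation_level=None)
    con.row_factory = sqlite3.Row
    return con

con = atlasdb_open(":memory:")
con.execute("CREATE TABLE zonefiles (zonefile_hash TEXT, present INTEGER)")
con.execute("INSERT INTO zonefiles VALUES (?, ?)", ("abcd1234", 1))
row = con.execute("SELECT * FROM zonefiles").fetchone()
print(row["zonefile_hash"])  # → abcd1234 (column accessible by name)
```

Without the row factory, callers would have to index result rows positionally, which breaks silently when the schema changes.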
Jude Nelson
ae45aa947b use iterative construction of zonefile inventory instead of list comprehension
2016-09-13 22:13:42 -04:00
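The zonefile inventory referenced in the commit above is, per the initial Atlas commit in this log, a bitwise big-endian vector where bit i marks possession of the i-th zonefile. A hedged sketch of building it iteratively (byte by byte) rather than with one list comprehension over all rows; the function name and input shape are illustrative:

```python
def build_zonefile_inventory(present_flags):
    # Hypothetical sketch: present_flags[i] is True iff the i-th zonefile
    # has been obtained.  Build the big-endian bit vector one bit at a
    # time, so a huge inventory never needs an intermediate list.
    inv = bytearray((len(present_flags) + 7) // 8)
    for i, present in enumerate(present_flags):
        if present:
            inv[i // 8] |= 1 << (7 - (i % 8))  # big-endian within each byte
    return bytes(inv)

print(build_zonefile_inventory([True, False, True]).hex())  # → a0
```

Bits 0 and 2 of the first byte are set, giving 0b10100000 = 0xa0.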
Jude Nelson
245d939ec8 use right comparison in server version 2016-09-13 22:07:10 -04:00
Jude Nelson
eb179d23d5 update zonefile acquisition logic: try storage drivers first, and then
try atlas peers.  Only try storage drivers once per zonefile hash either
way (add a 'tried_storage' column to the atlas db to track this)
2016-09-13 16:30:41 -04:00
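The acquisition order described in the commit above can be sketched as follows. All names here (`fetch_zonefile`, the table layout, the callback signatures) are hypothetical stand-ins for the real Blockstack code; the point is the control flow: storage drivers first, at most once per zonefile hash (recorded in `tried_storage`), then Atlas peers:

```python
import sqlite3

def fetch_zonefile(con, zonefile_hash, storage_get, peers_get):
    # Hypothetical sketch of the acquisition logic: storage_get and
    # peers_get are callbacks that return the zonefile bytes or None.
    row = con.execute(
        "SELECT tried_storage FROM zonefiles WHERE zonefile_hash = ?",
        (zonefile_hash,)).fetchone()
    tried_storage = bool(row[0]) if row is not None else False

    data = None
    if not tried_storage:
        data = storage_get(zonefile_hash)   # storage drivers first...
        con.execute(
            "UPDATE zonefiles SET tried_storage = 1 WHERE zonefile_hash = ?",
            (zonefile_hash,))               # ...but only once per hash

    if data is None:
        data = peers_get(zonefile_hash)     # then fall back to Atlas peers
    return data
```

Marking `tried_storage` even on failure is what keeps repeated fetch attempts from hammering the storage drivers for a hash they will never serve.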
Jude Nelson
e692706058 don't remove a peer if it's whitelisted or blacklisted 2016-08-30 20:37:06 -04:00
Jude Nelson
069037077b bugfixes found in testing; better socket error logging 2016-08-30 20:23:20 -04:00
Jude Nelson
03da6c4a66 runtime sanity checks on lock state 2016-08-30 12:31:46 -04:00
Jude Nelson
7edcafa753 more bugfixes from mesh testing 2016-08-30 00:37:02 -04:00
Jude Nelson
eb2015307b more code clean-up:
* autoincrement peer index
* enable test network methods only if we're a subordinate atlas peer,
and not the main test thread
* locking bugfixes
* remove redundant code
* more debug output
2016-08-29 17:34:36 -04:00
Jude Nelson
28f01ac54d more bugfixes found during testing; remove dead code; fix deadlocks 2016-08-26 18:45:52 -04:00
Jude Nelson
d23bdf3c16 bugfixes found in testing; add top-level startup/shutdown logic; add db
sync logic
2016-08-24 17:41:53 -04:00
Jude Nelson
848cfff660 fix a few bugs in MHRWDA, found during testing 2016-08-23 23:13:05 -04:00
Jude Nelson
8039310238 Implement Metropolis-Hastings Random Walk with Delayed Acceptance,
instead of Metropolis-Hastings Random Walk with Backtracking.  This is
in light of reading the SIGMETRICS 2012 paper by Lee, Xu, and Eun.  Also,
update documentation to briefly explain the peer selection and
exploration rationale.
2016-08-23 18:01:50 -04:00
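As background for the commit above: the plain Metropolis-Hastings random walk it builds on proposes a uniformly chosen neighbor and accepts the move with probability min(1, deg(current)/deg(proposed)), which makes the walk's stationary distribution uniform over nodes even on irregular graphs. A minimal sketch of that base step; the delayed-acceptance refinement from Lee, Xu, and Eun (a second acceptance test that discourages immediately backtracking to the previous node) is deliberately omitted:

```python
import random

def mh_walk_step(graph, current, rng=random):
    # graph: dict mapping node -> list of neighbor nodes.
    # Propose a uniform random neighbor; accept with probability
    # min(1, deg(current)/deg(proposed)), else stay put.
    proposed = rng.choice(graph[current])
    accept_prob = min(1.0, len(graph[current]) / float(len(graph[proposed])))
    return proposed if rng.random() < accept_prob else current
```

On a star graph {0: [1, 2], 1: [0], 2: [0]}, the walk spends roughly equal time at all three nodes, unlike a simple random walk, which would visit the hub half the time.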
Jude Nelson
f67c2258b1 rework peer discovery and peer retention logic:
* use metropolis-hastings random walk with random backtracking (variant of MRWB)
to sample the peer graph in as unbiased a way as we can.  The only major
difference between this and MRWB is that we don't maintain a peer stack;
we just select a random peer instead of backtracking.
* keep the peer table and the peer database in sync--the table is the
cache-coherent copy.
* keep only 65536 peers.  Hash the peer address and a nonce to select a
slot, and ping an old peer before evicting it on collision (and keep the
old peer on collision if it responds)
* a peer can ask for all other peers.
* remove up to 10 peers that are unhealthy per walk; add up to 10 new
peers that we discover on remote peer queries.
* use socket-determined peer address, not RPC-given peer address
* various code cleanups.
2016-08-22 17:55:58 -04:00
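The 65536-slot peer table in the commit above can be sketched as follows. The hash construction, nonce handling, and `ping` callback are illustrative assumptions, not the actual Blockstack implementation; what the sketch preserves is the eviction rule: an old peer is pinged before being evicted on a slot collision, and kept if it responds:

```python
import hashlib

NUM_SLOTS = 65536

def peer_slot(peer_addr, nonce):
    # Hypothetical slot rule: hash the table nonce and the peer's
    # (socket-determined) address, and take the top 16 bits.
    h = hashlib.sha256(("%s:%s" % (nonce, peer_addr)).encode()).digest()
    return int.from_bytes(h[:2], "big")   # 0..65535

def insert_peer(table, peer_addr, nonce, ping):
    slot = peer_slot(peer_addr, nonce)
    old = table.get(slot)
    if old is not None and old != peer_addr and ping(old):
        return False       # old peer is alive; keep it, drop the newcomer
    table[slot] = peer_addr
    return True
```

Fixing the table at 65536 slots bounds memory, and hashing with a nonce keeps remote peers from grinding addresses to target a victim's slot.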
Jude Nelson
566b6f5735 numerous bugfixes found in testing 2016-08-18 18:33:54 -04:00
Jude Nelson
5fc3851aed more work on atlas support:
* push zonefiles if we receive them, and know of another peer that needs
them (i.e. should happen frequently near the chain tip)
* when getting the peer list, queue the requester for pinging
* move unit tests to integration test framework
2016-08-17 00:35:59 -04:00
Jude Nelson
d01edf1f10 WIP: Atlas peer code and (rough) unit tests.
* Atlas nodes are a lot like BitTorrent nodes: the blockchain encodes
the sequence of zonefiles (chunks) a peer fetches.
* Atlas nodes discover each other from a set of seed nodes, and try
to construct a K-regular network graph using a random walk through
the peers' neighbor relations.
* Atlas nodes propagate peer information to each other in a "K-rarest
known peers that are alive" fashion, to encourage even peer mixing.
* Atlas nodes maintain a bitwise big-endian "zonefile inventory vector"
where bit i is set if the ith NAME_UPDATE's associated zonefile has
been obtained.
* Atlas nodes exchange zonefile inventory vectors amongst their
neighbors in an effort to find missing zonefiles.  They obtain zonefiles
in a rarest-first fashion--i.e. the first missing zonefile to fetch
should be the one known by the fewest neighbors.
* Atlas nodes occasionally refresh inventory vectors with neighbors to
ensure that knowledge of a zonefile propagates through the network.

This is mostly untested code, save for what the unit tests at the end of
the file cover.  TODO: integrate with CircleCI
2016-08-15 18:20:43 -04:00
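The rarest-first rule from the design notes above can be sketched directly. This is a hedged illustration, not the actual Atlas code: inventories are modeled as boolean lists (bit i = zonefile i obtained), and the next fetch target is a missing zonefile held by the fewest neighbors, but by at least one, so it is actually obtainable:

```python
def rarest_first_missing(our_bits, neighbor_bits):
    # our_bits[i]: True iff we already have zonefile i.
    # neighbor_bits: one inventory list per neighbor.
    # Returns the index of the rarest obtainable missing zonefile, or None.
    best, best_count = None, None
    for i, have in enumerate(our_bits):
        if have:
            continue
        count = sum(1 for nb in neighbor_bits if i < len(nb) and nb[i])
        if count > 0 and (best_count is None or count < best_count):
            best, best_count = i, count
    return best
```

As in BitTorrent, picking the rarest piece first keeps the scarcest zonefiles replicating fastest, so no zonefile dies out when the few peers holding it leave.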