The seed node cluster behind mainnet.polykey.io and testnet.polykey.io requires some auto-configuration so that the seed nodes can gain knowledge of each other and share their DHT workload, which includes signalling and relaying.
Currently seed nodes are launched without knowledge of any other seed nodes. This makes sense, since they are the first seed nodes. However, as we scale the number of seed nodes, it would make sense for seed nodes to automatically discover each other and establish connections. This would make it easier to launch clusters of seed nodes.
There are several challenges here and questions we must work out:
Is it possible to run multiple seed nodes with the same NodeID?
If multiple seed nodes have different NodeIDs, are their root keys connected to each other in a trust relationship (either hierarchically via PKI, or as a loose mesh via the gestalt graph + root chain)?
How does this impact how this trust information is propagated eventually-consistently across the network?
How does this deal with attacks: impersonation, DHT poisoning, Sybil attacks...?
How does this deal with revocation?
What does this mean for our default seed node list that is configured in the PK software distribution?
If seed nodes are scaled up and down, how do they acquire their recovery keys securely and without conflict?
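As a rough illustration of the auto-discovery idea above, here is a minimal sketch of a seed node merging peer lists learned from other seed nodes. All names (`SeedBootstrap`, `SeedInfo`, `learn`) are hypothetical and do not reflect Polykey's actual API:

```typescript
type NodeId = string;

interface SeedInfo {
  nodeId: NodeId;
  host: string;
  port: number;
}

// Hypothetical bootstrap state: each seed node starts with an initial seed
// list and merges in peers it learns about from other seed nodes.
class SeedBootstrap {
  private known = new Map<NodeId, SeedInfo>();

  constructor(private selfId: NodeId, initialSeeds: SeedInfo[]) {
    for (const seed of initialSeeds) {
      if (seed.nodeId !== selfId) this.known.set(seed.nodeId, seed);
    }
  }

  // Merge seeds learned from a peer; returns only the newly discovered ones,
  // skipping ourselves and nodes we already know.
  learn(peers: SeedInfo[]): SeedInfo[] {
    const fresh: SeedInfo[] = [];
    for (const p of peers) {
      if (p.nodeId === this.selfId || this.known.has(p.nodeId)) continue;
      this.known.set(p.nodeId, p);
      fresh.push(p);
    }
    return fresh;
  }

  peers(): SeedInfo[] {
    return [...this.known.values()];
  }
}
```

Note this sketch sidesteps all the trust questions above: a real implementation would have to verify each learned `SeedInfo` against the trust relationship (PKI or gestalt graph) before admitting it.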
Using a network load balancer means we need to preserve stickiness for "flows"; we must ensure that this doesn't break our network connections mid-flight and mid-conversation.
AWS sets this to a 120s timeout for a UDP flow; this is not configurable.
AWS load balances according to origin IP address and maintains stickiness for the lifetime of a flow.
The stickiness must be preserved from the NLB to multiple listeners, from listener to multiple target groups, and from target group to multiple targets.
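To make the stickiness idea concrete: source-IP-based load balancing amounts to hashing the client's origin address to deterministically pick a target, so the same client always lands on the same backend for the lifetime of a flow. This toy hash and target list are purely illustrative, not how the NLB works internally:

```typescript
// Illustrative source-IP stickiness: hash the client IP to a target index.
// The same client IP always maps to the same target (while the target list
// is stable), which is the property a flow needs to survive mid-conversation.
function pickTarget(clientIp: string, targets: string[]): string {
  let h = 0;
  for (const ch of clientIp) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return targets[h % targets.length];
}
```

Note the caveat this exposes: if the target list changes (scale up/down), the modulo mapping reshuffles and flows break, which is why real load balancers track flow state rather than relying on a stateless hash alone.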
Load balancing introduces network proxies. These network proxies must preserve the client IP address, otherwise NAT-busting signalling will not work.
We've enabled this option for NLB
There's a special protocol (the PROXY protocol) for preserving client IPs over TCP/UDP, in case it's not possible at the IP layer and must instead be done at the transport layer.
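That transport-layer mechanism is HAProxy's PROXY protocol, whose version 2 AWS NLB can prepend to a stream. A minimal parsing sketch for the IPv4 case, with error handling pared down for illustration:

```typescript
// PROXY protocol v2 fixed 12-byte signature (from the HAProxy spec).
const PP2_SIGNATURE = Buffer.from([
  0x0d, 0x0a, 0x0d, 0x0a, 0x00, 0x0d, 0x0a, 0x51, 0x55, 0x49, 0x54, 0x0a,
]);

interface ProxyHeader {
  srcAddr: string;
  srcPort: number;
}

// Parse the original client address out of a PROXY v2 header prepended to
// the stream; returns null if the buffer doesn't start with a valid header.
function parseProxyV2(buf: Buffer): ProxyHeader | null {
  if (buf.length < 16) return null;
  if (!buf.subarray(0, 12).equals(PP2_SIGNATURE)) return null;
  const verCmd = buf[12]; // high nibble: version (2), low nibble: command
  if (verCmd >> 4 !== 2) return null;
  const family = buf[13]; // 0x11 = TCP over IPv4, 0x12 = UDP over IPv4
  const len = buf.readUInt16BE(14); // length of the address block
  if ((family === 0x11 || family === 0x12) && len >= 12) {
    // IPv4 address block: src_addr(4), dst_addr(4), src_port(2), dst_port(2)
    const srcAddr = [...buf.subarray(16, 20)].join('.');
    const srcPort = buf.readUInt16BE(24);
    return { srcAddr, srcPort };
  }
  return null;
}
```

A backend behind the NLB would run this once per accepted connection, then strip the header bytes before handing the rest of the stream to the application.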
Cloudflare is increasingly used as the gateway to all Polykey services. It's interesting to see that they are becoming that API gateway, with add-on services on top... and they skipped VMs and containers and went straight to serverless with WASM. WASM with WASI is the new unikernel system.
Tasks
Research DNS load balancing as an alternative
Work out how distributed PK with multiple nodes sharing the same IP address will work
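For the DNS load balancing task, a rough client-side sketch: resolve every A record for the seed hostname and rotate through them, instead of routing through an NLB. `dns.promises.resolve4` is Node's real API; the `RoundRobin` policy and the use of `testnet.polykey.io` here are illustrative:

```typescript
import { promises as dns } from 'dns';

// Simple rotation over a resolved address list; each call returns the next
// address, wrapping around. A real client would also handle failures and
// re-resolution when records change.
class RoundRobin {
  private i = 0;

  constructor(private addrs: string[]) {}

  next(): string {
    const addr = this.addrs[this.i % this.addrs.length];
    this.i += 1;
    return addr;
  }
}

// Resolve all A records for a seed hostname; with DNS load balancing, each
// seed node would publish its own A record under the shared name.
async function seedTargets(hostname: string): Promise<RoundRobin> {
  const addrs = await dns.resolve4(hostname);
  return new RoundRobin(addrs);
}

// Example usage (requires network):
// const targets = await seedTargets('testnet.polykey.io');
// const addr = targets.next();
```

Unlike an NLB, this pushes the balancing decision to the client and has no flow stickiness at all, which is exactly the trade-off the research task needs to evaluate.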