
feat: draft for #33 that incorporates feedback #35

Closed

Gozala wants to merge 7 commits into multiformats:master from Gozala:context-binding

Conversation

Gozala commented Sep 9, 2020

This is a draft implementation that illustrates #33, incorporates some of the feedback from the call with @mikeal, and makes a few additional changes, because I recognized some issues with the approach we thought made the most sense. Here is a high-level overview of all the changes and the reasoning behind them (in very short form).

Motivation

Only after doing several implementations am I finally able to put into words what this is trying to solve. The current version employs a dynamic registry that allows adding / removing base encodings, codecs, hashers, etc. Every library then threads the registry through to share a configuration (which I'll refer to as context). This is great because it reduces a lot of boilerplate (by binding the context), but there are some drawbacks I would like to address:

  1. It becomes really hard to see what individual components require to be fully (or partially) functional. E.g. some library somewhere may depend on a specific base encoding (or hasher), and if the user doesn't install it, an error will eventually arise (maybe not until production). I have had bad experiences with large code bases that suffered from this kind of error, and it is a real pain to debug.

  2. While higher-level libraries can gain a huge benefit from context binding, lower-level libraries may get little to none, yet pay the cost of introducing a dependency-injection pattern. Ideally we would enable both (context binding and context passing) so authors can make their own decisions.

  3. Dependency injection introduced the side effect of parallel classes. This adds another axis of complexity, which I am not sure how to articulate other than that it feels counterintuitive. It also introduces an opaque dependency on the bound context. Ideally class instances would carry all of the contextual information (as this bindings) so that their methods can do the job without having to refer to an enclosed context.

  4. A dynamic registry introduces side effects into the system. E.g. the same code can behave differently (depending on code path) before and after something is added to (or removed from) the registry. This is not an issue if the registry is never mutated (e.g. it is set up once at startup); however, in my experience dynamic registries tend to get mutated, sometimes from unexpected places, which can lead to a whole class of hard-to-debug problems. Given that they may only show up on certain code paths, they can be very hard to reproduce as well.

Approach

To address the above drawbacks, the approach this pull request demonstrates does the following:

  1. Instead of relying on a dynamic registry, each component defines its own requirements in terms of configuration. E.g. CID requires:
    • a base encoder + decoder, to serialize itself to a string and to parse CIDs from strings.
    • a base58btc encoder + decoder, to be able to map to / from CIDv0.

A user can still choose to omit a base58btc implementation by providing a stub that throws.
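Such a stub might look like this (a hypothetical sketch; the names and shape are illustrative, not the actual multiformats API):

```javascript
// A hypothetical stub: it satisfies the base58btc requirement shape-wise,
// but fails loudly if CIDv0 support is ever actually exercised.
const base58btcStub = {
  name: 'base58btc',
  prefix: 'z',
  encode: () => { throw new Error('base58btc support was not provided') },
  decode: () => { throw new Error('base58btc support was not provided') }
}
```

The point is that the missing capability fails at the exact call site rather than somewhere deep in the stack.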

  2. All components provide low-level APIs that take context as an argument, plus a configure function that returns a high-level API with bound context. This way some libraries can choose to pass context and others can choose to bind it.

At the moment, if you do not pass context to the low-level API it will throw somewhere down the stack when the context is accessed, but the intention is to throw at the call site instead, so that forgetting to pass it can't go unnoticed.

  3. The changes here decouple classes from what used to be static methods (which were not really static). This makes binding context straightforward, by wrapping low-level APIs, while classes are passed all the contextual info they need for their methods to function without binding anything.

  4. Because the dynamic registry is replaced with static configuration (which, sure, could still be mutated, because anything can be in JS, but at least by design should not be), this avoids the whole class of issues that can be introduced by side effects.
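As a rough sketch of the dual API shape from point 2 (names are illustrative, not the actual exports): the low-level function takes the context explicitly, and configure() returns the same function with the context bound.

```javascript
// Low-level API: context (here just { base }) is an explicit argument.
const format = (cid, { base }) => base.encode(cid.bytes)

// configure() binds the context once and returns the high-level API.
const configure = (config) => ({
  format: (cid) => format(cid, config)
})

// Toy base codec standing in for a real multibase implementation.
const hexBase = {
  encode: (bytes) => 'f' + [...bytes].map((b) => b.toString(16).padStart(2, '0')).join('')
}

const cid = { bytes: new Uint8Array([1, 113]) }
const viaPassing = format(cid, { base: hexBase }) // context passing
const bound = configure({ base: hexBase })
const viaBinding = bound.format(cid)              // context binding
```

Both call styles resolve to the same low-level code path, so libraries can pick whichever suits them.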

Implementation Details

  • Codecs got split into Encoder & Decoder pieces, enabling better separation of capabilities. Often a sender needs only an encoder and a receiver only a decoder. Codec becomes just a convenient { encoder, decoder } tuple.
  • The indirection of passing base names, codec names and hashing algorithm names got replaced by passing around references to base encoders/decoders, codecs and hashers. This way they don't need to be registered and later looked up, and tools can spot typos.
  • base.js introduces Encoder, Decoder, Codec classes for creating (multi)base codecs (as in, they handle the multibase prefix, but also have baseEncode / baseDecode for unprefixed operations). Classes are used mostly because JS engines can optimize them much better (via constructor inlining and class shapes), but think of it as an implementation detail. All existing base implementations generate multibase codecs as instances of those classes.
  • codec.js introduces similar Encoder, Decoder, Codec classes, but for blocks. With that, multiformats.multicodec.encode(data, 'json') turns into multiformats.codecs.json.encode(data).
  • digest.js introduces a data type (class) { code, size, digest, bytes } (where digest is a slice of bytes containing just the hash) for representing parsed multihashes, because previously they were parsed and validated quite a few times.
  • hasher.js introduces a data type for the hasher abstraction that just has digest(input:Uint8Array):Promise<Digest>, so that hashers can be passed around instead of hashing algorithm names and results don't have to be re-parsed or validated.
  • cid.js (as was alluded to) exports CID separately from its API (which used to be represented via static methods). It exposes a configure() function to bind the API to the configuration (context), and low-level API functions that can be passed context as an argument. The CID constructor saves its configuration { base, base58btc } on the instance so that its methods work as expected without external opaque dependencies.
  • block.js illustrates a similar take on the js-block implementation, which further extends the configuration settings to require a hasher implementation. Additionally, block encoders / decoders take a codec encoder / decoder at instantiation, which they use for the blocks they encode / decode. The idea is that instead of passing codec names as arguments, the user instantiates the block encoders / decoders they need and calls .encode / .decode on them.

    This is not a great fit for arbitrary dag decoding, where codecs need to be loaded on demand. That's ok, because it is still good for generic dag decoding, where a switch on bound decoders will do.
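The "switch on bound decoders" idea above could be sketched like this (a minimal sketch; the codec shapes are illustrative, not the actual API): generic dag decoding over a statically configured set of codecs keyed by multicodec code, with no dynamic registry.

```javascript
// Build a decode function over a fixed set of codecs; anything outside
// the configured set fails immediately instead of silently.
const createDecode = (codecs) => {
  const byCode = new Map(codecs.map((codec) => [codec.code, codec]))
  return (code, bytes) => {
    const codec = byCode.get(code)
    if (!codec) throw new RangeError(`No decoder configured for codec 0x${code.toString(16)}`)
    return codec.decode(bytes)
  }
}

// Toy json codec standing in for a real IPLD codec.
const json = {
  code: 0x0200,
  encode: (data) => new TextEncoder().encode(JSON.stringify(data)),
  decode: (bytes) => JSON.parse(new TextDecoder().decode(bytes))
}

const decode = createDecode([json])
```

The set of codecs is fixed at construction, so the failure mode for an unconfigured codec is explicit and local.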


P.S. This pull request appears to add a lot more than it removes. While true, this is highly misleading, because a lot of those additions are type declarations and jsdoc comments. In terms of functional code I believe it removes more than it adds.

Gozala marked this pull request as draft on September 9, 2020
Gozala force-pushed the context-binding branch 2 times, most recently from bcae296 to 16008d8 on September 9, 2020
rvagg commented Sep 10, 2020

One piece I don't see discussed in either of these issues is the case of needing flexible decoding. The current implementation is built around an assumption that a graph may be comprised of multiple codecs, raw, dag-pb, dag-cbor, etc., and navigating through them would be nicer if you didn't have to inspect the CID yourself to figure that out, just hand that concern off to Block. I think in your proposals this burden is pushed back to the user, making mixed-codec graphs a bit more annoying. Right now we have the problem of having to pre-load the codecs you think are going to be there, but at least I don't have to go implementing a big switch() to handle the variations, or my own mapping.

And where would you fit the Reader piece into this puzzle? Current old style codecs push the resolve() concern down into codecs. New Block pulls it up into there. Where would it be in this version and how would it work?

A more general concern that bothers me about how we're conceiving of these pieces is the direct mapping of Data Model <> JavaScript Objects, which I think gets us into a bit of trouble in a number of places. This idea that we can take an entire block and decode it into a fully instantiated JavaScript object and do round-trips with that is kind of nice, but also a bit broken at the edges. What we're ultimately wanting to get to is a higher level abstraction that lets you work above the blocks themselves, and navigate with path resolution, optionally applying schemas and plugging in logic in the form of Advanced Data Layouts that let you transform complex traversal logic into simple paths (e.g. foo/bar/bang where 'bar' is the key in a large multi-block HAMT). I'm not sure whether this proposal, or the current Block, helps or hinders that vision, but I'm itching to get us to start building up there, to work toward parity with go-ipld-prime in this respect, and I worry that our tinkering with these pieces is just deferring that and maybe making it harder, because we're making concrete the centrality of the "block" and its direct mapping to entire JavaScript objects.

rvagg commented Sep 10, 2020

I wanted to spell out a bit more about how I think about that last paragraph above but don't want to make it the topic of this thread so I wrote it in a gist: https://gist.github.com/rvagg/dbf445a494d5eed98093aebef4a40f1b

I don't know if that really helps with this discussion, but as long as we're picking apart Block, I wouldn't mind this kind of thinking to be part of it. Because right now I can only see future experiments having to throw away Block and replace it with something new because of the strong transactional binding from the Block <> JavaScript Object mapping.

Gozala commented Sep 10, 2020

One piece I don't see discussed in either of these issues is the case of needing flexible decoding. The current implementation is built around an assumption that a graph may be comprised of multiple codecs, raw, dag-pb, dag-cbor, etc., and navigating through them would be nicer if you didn't have to inspect the CID yourself to figure that out, just hand that concern off to Block. I think in your proposals this burden is pushed back to the user, making mixed-codec graphs a bit more annoying. Right now we have the problem of having to pre-load the codecs you think are going to be there, but at least I don't have to go implementing a big switch() to handle the variations, or my own mapping.

It is true that the proposed approach leaves this out. This is deliberate, and my justification is:

  1. The Block API (as presented in https://github.com/ipld/js-block) does not really solve the graph navigation problem either. Sure, it can encode / decode with arbitrary codecs, but you still have to pass codec names:

    import multiformats from 'multiformats/basics'
    import dagcbor from '@ipld/dag-cbor'
    import { create } from '@ipld/block' // Yet to be released Block interface
    multiformats.multicodec.add(dagcbor)
    const Block = create(multiformats)
    
    const bytes = Block.encoder({ hello: 'world' }, 'dag-cbor').encode()
    const data =  Block.decoder(bytes, 'dag-cbor').decode()

    All I'm suggesting is that the above is actually a more indirect and error-prone way of doing the same thing by reference:

     import { block } from 'multiformats/basics' 
     import dagcbor from '@ipld/dag-cbor'
    
    const bytes = block.encoder({ hello: 'world' }, dagcbor).encode()
    const data = block.decoder(bytes, dagcbor).decode()

    I do, however, propose a slightly different API in the pull request (based on @mikeal's feedback, or my interpretation of it):

    import { block } from 'multiformats/basics' 
    import dagcbor from '@ipld/dag-cbor'
    
    const bytes = block.encoder(dagcbor).encode({ hello: 'world' })
    const data = block.decoder(dagcbor).decode(bytes)
  2. I am all for creating a functional parser-combinator-like library for composing decoders / encoders. That does not require a dynamic registry either, and it is the next layer of the stack. I would even put one together for illustration purposes, but the primary complication there is block retrieval, not the dispatch.

And where would you fit the Reader piece into this puzzle? Current old style codecs push the resolve() concern down into codecs. New Block pulls it up into there. Where would it be in this version and how would it work?

I am not a fan of the Reader API, as it implies decode-first. It is nice that codecs do not need to concern themselves with it, especially if they need to decode the whole block to provide the functionality anyway. Ideally codecs could optionally provide an API that would allow a Reader without a full decode; however, I would leave it as is for now.

In that case I would just add links, tree and get methods to the Block class, e.g.:

diff --git a/src/block.js b/src/block.js
index d00f871..f668d4b 100644
--- a/src/block.js
+++ b/src/block.js
@@ -1,6 +1,6 @@
 // @ts-check
 
-import { createV1 } from './cid.js'
+import { createV1, asCID } from './cid.js'
 
 /**
  * @class
@@ -92,6 +92,102 @@ export class Block {
       return cid
     }
   }
+
+  links () {
+    return links(this.data, [], this)
+  }
+
+  tree () {
+    return tree(this.data, [], this)
+  }
+
+  /**
+   * @param {string} path
+   */
+  get (path) {
+    return get(this.data, path.split('/').filter(Boolean), this)
+  }
+}
+
+/**
+ * @template T
+ * @param {T} source
+ * @param {Array<string|number>} base
+ * @param {BlockConfig} config
+ * @returns {Iterable<[string, CID]>}
+ */
+const links = function * (source, base, config) {
+  for (const [key, value] of Object.entries(source)) {
+    const path = [...base, key]
+    if (value != null && typeof value === 'object') {
+      if (Array.isArray(value)) {
+        for (const [index, element] of value.entries()) {
+          const elementPath = [...path, index]
+          const cid = asCID(element, config)
+          if (cid) {
+            yield [elementPath.join('/'), cid]
+          } else if (typeof element === 'object') {
+            yield * links(element, elementPath, config)
+          }
+        }
+      } else {
+        const cid = asCID(value, config)
+        if (cid) {
+          yield [path.join('/'), cid]
+        } else {
+          yield * links(value, path, config)
+        }
+      }
+    }
+  }
+}
+
+/**
+ * @template T
+ * @param {T} source
+ * @param {Array<string|number>} base
+ * @param {BlockConfig} config
+ * @returns {Iterable<string>}
+ */
+const tree = function * (source, base, config) {
+  for (const [key, value] of Object.entries(source)) {
+    const path = [...base, key]
+    yield path.join('/')
+    if (value != null && typeof value === 'object' && !asCID(value, config)) {
+      if (Array.isArray(value)) {
+        for (const [index, element] of value.entries()) {
+          const elementPath = [...path, index]
+          yield elementPath.join('/')
+          if (typeof element === 'object' && !asCID(element, config)) {
+            yield * tree(element, elementPath, config)
+          }
+        }
+      } else {
+        yield * tree(value, path, config)
+      }
+    }
+  }
+}
+
+/**
+ * @template T
+ * @param {T} source
+ * @param {string[]} path
+ * @param {BlockConfig} config
+ */
+const get = (source, path, config) => {
+  let node = source
+  for (const [index, key] of path.entries()) {
+    node = node[key]
+    if (node == null) {
+      throw new Error(`Object has no property at ${path.slice(0, index + 1).map(part => `[${JSON.stringify(part)}]`).join('')}`)
+    }
+    const cid = asCID(node, config)
+    if (cid) {
+      return { value: cid, remaining: path.slice(index + 1).join('/') }
+    }
+  }
+  return { value: node }
 }
 
 /**

If we do want to allow codecs to optionally provide traversal (which I think would be a good idea), it would require a bit more thinking about what API a codec should provide.

A more general concern that bothers me about how we're conceiving of these pieces is of the direct mapping of Data Model <> JavaScript Objects, which I think gets us into a bit of trouble in a number of places. This idea that we can take an entire block and decode it into a fully instantiated JavaScript object and do round-trips with that, is kind of nice, but also a bit broken at the edges.

I agree. In fact I voiced those exact concerns when codec API redesign first came up.

What we're ultimately wanting to get to is a higher level abstraction that lets you work above the blocks themselves, and navigate with path resolution, optionally applying schemas and plugging in logic in the form of Advanced Data Layouts that let you transform complex traversal logic into simple paths (e.g. foo/bar/bang where 'bar' is the key in a large multi-block HAMT). I'm not sure whether this proposal, or the current Block, helps or hinders that vision, but I'm itching to get us to start building up there, to work toward parity with go-ipld-prime in this respect, and I worry that our tinkering with these pieces is just deferring that and maybe making it harder, because we're making concrete the centrality of the "block" and its direct mapping to entire JavaScript objects.

I agree (although I'm not really familiar with go-ipld-prime). I just think that a solid foundation with a smaller margin of error is going to help us get there.

Gozala commented Sep 10, 2020

I wanted to spell out a bit more about how I think about that last paragraph above but don't want to make it the topic of this thread so I wrote it in a gist: https://gist.github.com/rvagg/dbf445a494d5eed98093aebef4a40f1b

I don't know if that really helps with this discussion, but as long as we're picking apart Block, I wouldn't mind this kind of thinking to be part of it. Because right now I can only see future experiments having to throw away Block and replace it with something new because of the strong transactional binding from the Block <> JavaScript Object mapping.

@rvagg you're bringing up some really good points and I would love to continue that conversation. Maybe it could move to an issue thread? I can also comment on the gist, but I do tend to lose track of those threads.

I am also recognizing now that the block.js module that I stuck in here is misleading; it was mostly for illustration purposes, showing what the effect of the proposed changes would be on the current API. I could not agree more with this (quote from your gist):

But maybe a traversal can be smarter and more efficient than that? What if we have a use-case where we have very large blocks and usually only reach down to a small part of the block? If we have vertical integration with a codec, we might be able to partial-decode and only instantiate at the asObject() end of the chain.

A while back I was trying to make an argument that the encoding used by flatbuffers allows decoding nested sub-structure fields without allocating parent structures or parsing the underlying bytes.

The argument I got then was that a decoder could return some JS class that provides a lazy decode or a full-blown .get(path) style API. Which is technically true, but I do not think it would be possible to create high-level abstractions without having a common generalized API across all codecs.


That said, this proposal aims to strike a different balance between convenience & simplicity for low-level building blocks (like codecs and CIDs), because in my view the current implementation trades simplicity (by introducing registries and dependency injection) for convenience (of not having to pass context around). I think that is the wrong balance, because in my experience, with low-level building blocks it's better to pass an extra (context) argument around than to deal with incidental complexity (described in the motivation section).

The proposal itself attempts to strike that balance by:

  1. Providing low-level APIs that just pass context around.
  2. Providing convenience by simply binding the low-level API to a context, so it does not need to be passed around.

Whether we end up with a Block <> JavaScript Object mapping or a Block <> generalized traversal API, I think it's best to make trade-offs in favor of simplicity over convenience. Adding convenience to a simple system tends to be a straightforward task, but turning a convenient yet complex system into a simple one tends to be impossible.

Gozala commented Sep 14, 2020

One thing that I'm realizing only now is that if we go with the proposed approach, it would remove the need for a huge migration from the older to the newer stack, and we could make the transition gradually. Without dependency injection, the difference between the older stuff and the newer is significantly smaller.

Gozala commented Sep 15, 2020

Hey @mikeal, any feedback on any of this?

Comment on lines +27 to +29
base: base32,
base58btc
})
Contributor

what does this mean exactly?

i think i’d prefer this to be { base32, base58btc }

we actually need the default behavior of toString() without a requested base encoding to be stable, so we shouldn’t have a way to set a different default base encoding here. so simply passing in the implementations should be sufficient.

Contributor Author

what does this mean exactly?

Comments in the ./cid/interface.ts attempt to clarify that. Inlining here for convenience:

export interface Config {
  /**
   * Multibase codec used by CID to encode / decode to and out of
   * string representation.
   */
  base: MultibaseCodec<any>
  /**
   * CIDv0 requires base58btc encoding / decoding, so CID must be
   * provided the means to perform that task.
   */
  base58btc: MultibaseCodec<'z'>
} 

i think i’d prefer this to be { base32, base58btc }

Well, base is supposed to represent the base codec for whatever encoding you choose to use for the CID being created. If you make it base32, that would mean CIDs could only be in base32 encoding. If anything, base58btc is the outlier here; it is only there to support toV0, and I kind of wish passing it were unnecessary.

we actually need the default behavior of toString() without a requested base encoding to be stable, so we shouldn’t have a way to set a different default base encoding here. so simply passing in the implementations should be sufficient.

Are you saying a user should not be able to create a CID with a different base encoding? Or simply that the decision about the encoding should be deferred until toString() is called, and that it should default to base32 if no encoding is passed?

If the latter, I understand your argument. However, this config is also used for parsing CIDs in string representation, so consider the following:

const c1 = cid.parse('mAXASIDHD1XCA2EY6PGOykj31odQK16c+rloUr1hCE+X1BKwz', {
  base: base64,
  base58btc: base58btc
})
c1.toString() // mAXASIDHD1XCA2EY6PGOykj31odQK16c+rloUr1hCE+X1BKwz

Would you expect c1.toString() to print bafybeibrypkxbagyiy5dyy5ssi67lioubll2opvolikk6wcccps7kbfmgm instead?

If you expect it to print in base32, then { base32, base58btc } as the CID config makes sense. If you expect it to print in base64, then I'd say the current config makes more sense.
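To make that round-trip behaviour concrete, here is a toy sketch (the base codec below is a stand-in, not the real multibase implementation): because parse keeps the base it was configured with, toString() echoes the original encoding.

```javascript
// Stand-in base codec: 'f'-prefixed lowercase hex.
const hex = {
  prefix: 'f',
  encode: (bytes) => 'f' + [...bytes].map((b) => b.toString(16).padStart(2, '0')).join(''),
  decode: (text) => new Uint8Array(text.slice(1).match(/../g).map((pair) => parseInt(pair, 16)))
}

// parse() keeps the base from its config, so toString() round-trips
// in the encoding the string arrived in, rather than a fixed default.
const parse = (text, { base }) => {
  const bytes = base.decode(text)
  return { bytes, toString: () => base.encode(bytes) }
}
```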

mikeal commented Sep 16, 2020

Sorry that I hadn’t seen this, I’m very behind on emails.

I’d like to see all of the tests ported so that we can see how large the API difference is.

I’m +1 on having this block interface, but I’m -1 on calling it “block” :) I think we could call it “multicodec” instead, and it would be accurate and less confusing.

We need a higher level Block interface that does all the caching and other sugar, but we also need something lower level which this does incredibly well. Calling them both “block” would be confusing, so I think we should just call this one “multicodec.”

In the example code you could still call the return value “block” but the exposed interface should just be multicodec.

In general, this change is going to move a lot of the “configuration” to the Block interface, which is probably fine but i’m a little worried about what it’ll all look like when we’re done. We should start landing this work against a branch so that I can pull it into a branch in @ipld/block where we can explore that.

I left an inline comment about the API for passing base encodings to CID.

One remaining issue I see: The consumer of the Block interface is the person who knows all the base encodings they’ll need. I don’t see a way for them to pass those into the codecs (who need them for the CID instantiations) since there’s no dep injection in the codec. We can’t rely on global configuration at the module level for this because of version discrepancies in npm dep trees. This is a blocker, but not a very difficult one to resolve, we just need a way to configure the bases passed to the codec that is standardized across the codec implementations. The obvious place for that in the old model was to pass the CID class (via the multiformats interface) when instantiating a codec interface, but I’m not sure where the best place to do that now is.

PS.

Regarding the comments about pushing up the Block reader methods: the reason that is broken out as its own object is to leave the door open for codecs to implement a custom version of that interface in the future. For instance, if a codec supports parsing out the links without a full de-serialization, that’s how we would expose it.

Gozala commented Sep 16, 2020

I’d like to see all of the tests ported so that we can see how large the API difference is.

I'll work on that.

I’m +1 on having this block interface, but I’m -1 on calling “block” :) I think we could call it “multicodec” instead and it would be accurate and less confusing.

We did discuss this off-channel a bit, but I would like to post it here for future reference.

The block API in this pull request is meant to just illustrate what the API for js-block could be like if built on top of these changes. It diverges a bit from the existing js-block API, so I'll try to elaborate a bit on the motivation, but honestly I should probably refactor block.js into a pull request against js-block.

  • The most striking difference is that this implementation, unlike js-block, takes the codec argument when you create an encoder or a decoder. It was motivated by your feedback (or maybe my misunderstanding of it). It could just as well not take a codec during construction and instead require it as an argument when performing encode / decode, although I do think that passing it ahead of time makes a bit more sense.

  • I think you got the wrong impression that this implementation, unlike js-block, does not do any caching. In fact it does; it just ends up a bit simpler, because it does not attempt to abstract BlockEncoder and BlockDecoder into the same general API. When you create a BlockEncoder it just stores the data and only has an encode():Block method, which returns a materialized block that holds both data and bytes as references.

    BlockEncoder does not attempt to memoize the returned Block. The thinking there is that an encoder on its own is not all that useful, and users will likely hold a reference to the Block instance instead; that one has both data and bytes set and computes and caches the cid on demand. BlockDecoder does the same thing the other way around.

    I personally like this approach because it makes it clear what computation occurs when. If you want a thunk to a block, you can hold a reference to the encoder / decoder instead. If you do materialize it, either via encode or decode, then you know it will perform the encode / decode computation and you'll have a block.

  • The current implementation had no reader API, which was true before I pushed the changes I had locally. That said, it still does not have a reader API; instead it just adds get, links and tree methods to the Block instance. It could expose those methods under a .reader() instead, but I'm not sure what the benefit of doing that would be.
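The encoder-as-thunk behaviour described above can be sketched roughly like this (the shapes are illustrative, not the actual js-block classes):

```javascript
// Creating the encoder stores the data but performs no encoding work.
class BlockEncoder {
  constructor (codec, data) {
    this.codec = codec
    this.data = data
  }

  // Materializing performs the encode and yields a block holding both
  // data and bytes as plain references.
  encode () {
    return { data: this.data, bytes: this.codec.encode(this.data) }
  }
}

// Toy codec standing in for a real IPLD codec.
const json = {
  encode: (data) => new TextEncoder().encode(JSON.stringify(data))
}

const encoder = new BlockEncoder(json, { hello: 'world' })
const block = encoder.encode()
```

Holding the encoder defers the work; calling encode() is the one place where the computation happens.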

Gozala commented Sep 16, 2020

One remaining issue I see: The consumer of the Block interface is the person who knows all the base encodings they’ll need. I don’t see a way for them to pass those into the codecs (who need them for the CID instantiations) since there’s no dep injection in the codec.

You are right. I made a mistake in the examples section; it should be more like this:

// Import basics package with dep-free codecs, hashes, and base encodings
import { block, config } from 'multiformats/basics'
import Dagcbor from '@ipld/dag-cbor'

const dagcbor = Dagcbor(config)
const encoder = block.encoder(dagcbor)
const hello = encoder.encode({ hello: 'world' })
const cid = await hello.cid()

That said, I think it would be a lot better if codec.encode / codec.decode could be provided the configuration as an argument. In fact, that is also why the block encoder, decoder and coder all take the following configuration:

export interface Config {
  /**
   * Multihasher to be used for the CID of the block. Will use a default
   * if not provided.
   */
  hasher: Hasher
  /**
   * Base encoder that will be passed to the CID of the block.
   */
  base: MultibaseCodec<any>
  /**
   * Base codec that will be used with CIDv0.
   */
  base58btc: MultibaseCodec<'z'>
}

This is intentionally just like the one passed to CID, with the addition of hasher.

We can’t rely on global configuration at the module level for this because of version discrepancies in npm dep trees. This is a blocker, but not a very difficult one to resolve, we just need a way to configure the bases passed to the codec that is standardized across the codec implementations. The obvious place for that in the old model was to pass the CID class (via the multiformats interface) when instantiating a codec interface, but I’m not sure where the best place to do that now is.

My intention was that codecs would have a low-level API like:

interface Codec<T> {
  encode(data: T, config: Config): Uint8Array
  decode(bytes: Uint8Array, config: Config): T
}

And a similar configure function that returns an API in which config is optional for both functions.
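A rough sketch of that shape (hypothetical names, not a committed API): the low-level codec requires config on every call, and a small wrapper makes it optional by supplying defaults.

```javascript
// Low-level codec: refuses to work without an explicit config.
const toyCodec = {
  encode: (data, config) => {
    if (!config) throw new Error('config is required')
    return new TextEncoder().encode(JSON.stringify(data))
  },
  decode: (bytes, config) => {
    if (!config) throw new Error('config is required')
    return JSON.parse(new TextDecoder().decode(bytes))
  }
}

// configure() binds a default config, making the argument optional.
const configureCodec = (codec, defaults) => ({
  encode: (data, config = defaults) => codec.encode(data, config),
  decode: (bytes, config = defaults) => codec.decode(bytes, config)
})

const bound = configureCodec(toyCodec, { /* base, base58btc, hasher */ })
```

Callers who want full control keep passing config explicitly; callers who just want convenience use the bound version.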

Gozala commented Sep 16, 2020

We came to the conclusion that it would be best to use a buffer-free base-x (forked here: https://github.com/multiformats/base-x) and to include base32 and base58btc with the CID implementation, as that simplifies things quite a bit.

Now that my work has migrated to multiformats/js-multiformats#context-binding, I'm inclined to close this pull request.

As for the ongoing block API discussions & changes, I would migrate them to the js-block repo instead.
