
[TOPI] Implementation Guideline #215

@tqchen

There have been a few discussions on this, and I am creating this issue to consolidate what we have so far. The specific question is how we should create APIs for dataflow declaration and the schedule interface in TOPI. Here are two key guidelines.

Tensor in, tensor out in dataflow declaration

The quote (tensor in/tensor out) comes from the Google Brain team. It is a general principle for compositional API design. Imagine we want to create a dataflow declaration for conv-relu: instead of creating a single declaration function conv_relu, we create two functions (conv and relu) and compose them.

def conv(input, weight):
    # declare the convolution via tvm.compute (details omitted)
    out = tvm.compute(...)
    return out

def relu(input):
    return tvm.compute(input.shape, lambda *i: tvm.max(0, input(*i)))

input = tvm.placeholder((n, c, h, w))
weight = tvm.placeholder((c1, c2, hh, ww))
net = conv(input, weight)
net = relu(net)
# net now contains the dataflow declaration of conv-relu
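To make the composition principle concrete without depending on TVM, here is a minimal plain-Python sketch. `conv1d`, `relu`, and `sigmoid` are hypothetical stand-ins for the tvm.compute-based declarations above; the point is only that value-in/value-out stages compose freely, so no fused conv_relu or conv_sigmoid function is ever needed.

```python
import math

def conv1d(data, weight):
    # hypothetical 1-D convolution stand-in for a tvm.compute-based conv
    n, k = len(data), len(weight)
    return [sum(data[i + j] * weight[j] for j in range(k))
            for i in range(n - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def sigmoid(xs):
    return [1.0 / (1.0 + math.exp(-x)) for x in xs]

# Because every stage is tensor in, tensor out, swapping relu for
# sigmoid reuses the exact same conv stage; nothing is re-declared.
data, weight = [1.0, -2.0, 3.0, -4.0], [1.0, 1.0]
net_relu = relu(conv1d(data, weight))        # conv -> relu
net_sigmoid = sigmoid(conv1d(data, weight))  # conv -> sigmoid
```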

Separate dataflow declaration from schedule

While it is usually convenient to put the schedule logic together with the dataflow declaration, doing so is also somewhat harmful. Imagine we have a schedule for conv-relu: what if we then want to schedule conv-sigmoid? They contain essentially the same pattern, but in the old style we would need to create a schedule for each of them. Ideally, we want a generic schedule function for a whole class of dataflows, without directly touching the dataflow part.

To get the tensors needed for the schedule, we can recover them by traversing the dataflow DAG. Here is a possible skeleton for scheduling conv-elementwise:

# generic schedule for convolution + elementwise ops
def schedule_conv_map(op):
    s = create_schedule(op)

    def schedule_conv(data, filter, conv):
        # schedule the conv stage here
        pass

    # traverse the dataflow DAG; deduplicate so shared ops are visited once
    visited = set()
    def visit(op):
        if op in visited:
            return
        visited.add(op)
        if is_ewise(op):
            # inline elementwise stages that are not the final output
            if not is_output(op):
                s[op].compute_inline()
            for t in op.input_tensors:
                visit(t.op)
        if is_conv(op):
            # find the conv part and dispatch to the conv-specific schedule
            conv = op.output(0)
            data = op.input_tensors[0]
            filter = op.input_tensors[1]
            schedule_conv(data, filter, conv)

    visit(op)
    return s
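The traversal and deduplication logic can also be illustrated TVM-free. In this sketch, `Op` is a simplified stand-in for TVM's operation objects, and the string kinds replace the `is_ewise`/`is_conv` predicates; the sketch records which elementwise stages get inlined and which conv stages get dispatched, instead of mutating a real schedule.

```python
class Op:
    """Simplified stand-in for a TVM op node in the dataflow DAG."""
    def __init__(self, kind, inputs=()):
        self.kind = kind            # e.g. "conv", "relu", "data"
        self.inputs = list(inputs)  # parent ops in the DAG

def schedule_conv_map(root):
    inlined, conv_calls = [], []
    visited = set()                 # deduplicate: visit each op once

    def visit(op):
        if id(op) in visited:
            return
        visited.add(id(op))
        if op.kind in ("relu", "sigmoid"):   # elementwise stage
            if op is not root:               # output stage stays as-is
                inlined.append(op.kind)      # would be compute_inline()
            for parent in op.inputs:
                visit(parent)
        elif op.kind == "conv":
            data, filt = op.inputs           # recover conv's input tensors
            conv_calls.append((op.kind, data.kind, filt.kind))

    visit(root)
    return inlined, conv_calls

# conv -> relu -> sigmoid: the same generic walk handles any
# elementwise tail without a per-combination schedule function.
data, weight = Op("data"), Op("weight")
net = Op("sigmoid", [Op("relu", [Op("conv", [data, weight])])])
```

The visited set answers the "maybe need deduplicate" note in the skeleton above: when two branches of the DAG share an op, the walk would otherwise schedule it twice.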
