
Commit e16e895

[Doc] fix some document warnings (dmlc#645)
* fix doc
* fix some format and warnings
* fix
1 parent 463807c commit e16e895

File tree

6 files changed: +25 additions, -18 deletions


docs/source/api/python/graph_store.rst

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@ Graph Store -- Graph for multi-processing and distributed training
 .. autoclass:: SharedMemoryDGLGraph
 
 Querying the distributed setting
-------------------------
+--------------------------------
 
 .. autosummary::
     :toctree: ../../generated/
@@ -26,7 +26,7 @@ Using Node/edge features
     SharedMemoryDGLGraph.init_edata
 
 Computing with Graph store
-----------------------
+--------------------------
 
 .. autosummary::
     :toctree: ../../generated/
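The two hunks above silence Sphinx's "Title underline too short" warning: in reStructuredText, a section underline must be at least as long as the title text above it. A minimal sketch of the corrected convention (any punctuation character works, as long as the line is long enough):

```rst
Querying the distributed setting
--------------------------------
```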

docs/source/api/python/nodeflow.rst

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 .. _apinodeflow:
 
 NodeFlow -- Graph sampled from a large graph
-=========================================
+============================================
 
 .. currentmodule:: dgl
 .. autoclass:: NodeFlow

python/dgl/graph.py

Lines changed: 9 additions & 7 deletions
@@ -3323,25 +3323,27 @@ def __repr__(self):
 
     # pylint: disable=invalid-name
     def to(self, ctx):
-        """
-        Move both ndata and edata to the targeted mode (cpu/gpu)
+        """Move both ndata and edata to the targeted mode (cpu/gpu)
         Framework agnostic
 
         Parameters
         ----------
-        ctx : framework specific context object
+        ctx : framework-specific context object
+            The context to move data to.
 
-        Examples (Pytorch & MXNet)
+        Examples
         --------
-        >>> import backend as F
+        The following example uses PyTorch backend.
+
+        >>> import torch
         >>> G = dgl.DGLGraph()
         >>> G.add_nodes(5, {'h': torch.ones((5, 2))})
         >>> G.add_edges([0, 1], [1, 2], {'m' : torch.ones((2, 2))})
         >>> G.add_edges([0, 1], [1, 2], {'m' : torch.ones((2, 2))})
-        >>> G.to(F.cuda())
-
+        >>> G.to(torch.device('cuda:0'))
         """
         for k in self.ndata.keys():
             self.ndata[k] = F.copy_to(self.ndata[k], ctx)
         for k in self.edata.keys():
             self.edata[k] = F.copy_to(self.edata[k], ctx)
+    # pylint: enable=invalid-name
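The pattern this patch documents — iterate over the node and edge feature dictionaries and copy each tensor to the target context — can be sketched in plain Python. The `copy_to` helper below is a hypothetical stand-in for DGL's backend dispatch (`dgl.backend.copy_to`), and the feature values are plain lists rather than tensors, purely for illustration:

```python
def copy_to(tensor, ctx):
    """Hypothetical stand-in for the backend's copy_to: tag data with a context."""
    return {'data': tensor, 'ctx': ctx}

def graph_to(ndata, edata, ctx):
    """Sketch of DGLGraph.to: move every node/edge feature to ctx."""
    for k in list(ndata):
        ndata[k] = copy_to(ndata[k], ctx)
    for k in list(edata):
        edata[k] = copy_to(edata[k], ctx)

# Toy feature frames standing in for G.ndata / G.edata.
ndata = {'h': [[1.0, 1.0]] * 5}
edata = {'m': [[1.0, 1.0]] * 2}

graph_to(ndata, edata, 'cuda:0')
print(ndata['h']['ctx'])  # -> cuda:0
```

The real method is framework agnostic because `F.copy_to` is resolved per backend (PyTorch, MXNet, ...); only the docstring example is PyTorch-specific.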

python/dgl/nodeflow.py

Lines changed: 11 additions & 6 deletions
@@ -558,12 +558,17 @@ def block_incidence_matrix(self, block_id, typestr, ctx):
             or not.
 
         There are two types of an incidence matrix `I`:
-        * "in":
-            - I[v, e] = 1 if e is the in-edge of v (or v is the dst node of e);
-            - I[v, e] = 0 otherwise.
-        * "out":
-            - I[v, e] = 1 if e is the out-edge of v (or v is the src node of e);
-            - I[v, e] = 0 otherwise.
+
+        * ``in``:
+
+          - I[v, e] = 1 if e is the in-edge of v (or v is the dst node of e);
+          - I[v, e] = 0 otherwise.
+
+        * ``out``:
+
+          - I[v, e] = 1 if e is the out-edge of v (or v is the src node of e);
+          - I[v, e] = 0 otherwise.
+
         "both" isn't defined in the block of a NodeFlow.
 
         Parameters
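The ``in``/``out`` definitions in the docstring above can be checked on a toy graph. This is a plain-Python sketch of the two matrix types, not the NodeFlow method itself; the 3-node graph with edges 0→1 and 1→2 is made up for illustration:

```python
# Toy graph: edge 0 is 0->1, edge 1 is 1->2.
src, dst = [0, 1], [1, 2]
num_nodes, num_edges = 3, len(src)

def incidence(typestr):
    """Build I per the docstring: I[v, e] = 1 iff v is the dst ('in')
    or src ('out') endpoint of edge e; 0 otherwise."""
    I = [[0] * num_edges for _ in range(num_nodes)]
    endpoints = dst if typestr == 'in' else src
    for e, v in enumerate(endpoints):
        I[v][e] = 1
    return I

print(incidence('in'))   # [[0, 0], [1, 0], [0, 1]]
print(incidence('out'))  # [[1, 0], [0, 1], [0, 0]]
```

Each column has exactly one nonzero entry, since every edge has exactly one source and one destination — which is also why "both" (in minus out) is reserved for full graphs rather than NodeFlow blocks.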

tutorials/models/5_giant_graph/1_sampling_mx.py

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 """
-.. _sampling:
+.. _model-sampling:
 
 NodeFlow and Sampling
 =======================================

tutorials/models/5_giant_graph/2_giant.py

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 """
-.. _sampling:
+.. _model-graph-store:
 
 Large-Scale Training of Graph Neural Networks
 =============================================
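Both tutorials previously declared the same ``.. _sampling:`` label, which triggers Sphinx's "duplicate label" warning and makes cross-references ambiguous. Renaming each anchor gives every tutorial a unique target; a minimal sketch of the fixed pattern:

```rst
.. _model-graph-store:

Large-Scale Training of Graph Neural Networks
=============================================
```

Elsewhere in the docs, ``:ref:`model-graph-store``` then resolves to exactly one section.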
