<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Bottom-Up Hierarchical Clustering: From Pixels to Prediction</title>
<style>
body {
font-family: Georgia, 'Times New Roman', serif;
max-width: 760px;
margin: 2em auto;
padding: 0 1.5em;
line-height: 1.7;
color: #222;
background: #fff;
}
h1 {
font-size: 1.8em;
margin-bottom: 0.1em;
}
.author {
font-style: italic;
color: #555;
margin-bottom: 2em;
}
h2 {
font-size: 1.35em;
margin-top: 2em;
border-bottom: 1px solid #ddd;
padding-bottom: 0.3em;
}
h3 {
font-size: 1.1em;
margin-top: 1.5em;
}
code {
font-family: 'Consolas', 'Menlo', monospace;
background: #f4f4f4;
padding: 0.15em 0.4em;
border-radius: 3px;
font-size: 0.92em;
}
ul {
padding-left: 1.5em;
}
li {
margin-bottom: 0.5em;
}
strong {
font-weight: 700;
}
a {
color: #1a5c8a;
}
hr {
border: none;
border-top: 1px solid #ddd;
margin: 2.5em 0 1em;
}
.footer {
font-style: italic;
color: #555;
}
</style>
</head>
<body>
<h1>Bottom-Up Hierarchical Clustering: From Pixels to Prediction</h1>
<p class="author">Boris Kazachenko</p>
<h2>The Core Principle</h2>
<p>All unsupervised learning can be reduced to cross-correlation and clustering: compare inputs, then group what matches. The groups become new inputs, and the process repeats — recursively, indefinitely.</p>
<p>Prediction requires patterns. Patterns are discovered by comparing proximate inputs and measuring how much they match versus how much they differ. Match defines clusters; difference defines their boundaries. Clusters of clusters form higher-order representations, each level adding a layer of abstraction. This is the entire framework. Everything below is derivation.</p>
<h2>A Concrete Example</h2>
<p>Take a row of pixels with grayscale values: <code>[120, 122, 121, 80, 45, 47, 46]</code>.</p>
<p><strong>Step 1 — Cross-comparison.</strong> Compare each adjacent pair. Each comparison produces two signals:</p>
<ul>
<li><strong>Match:</strong> the portion of shared magnitude between the two values. For 120 and 122, most of the signal is shared — match is high.</li>
<li><strong>Difference (miss):</strong> the residual. Here it's 2 — low.</li>
</ul>
<p>Across this row, the first three pixels produce high match and low difference with their neighbors. Then comes a sharp drop (120 → 80 → 45): low match, high difference. The last three pixels again match well with each other.</p>
<p><strong>Step 2 — Clustering.</strong> Connected regions of high match form clusters. Here, that yields two clusters: roughly <code>[120, 122, 121]</code> and <code>[45, 47, 46]</code>. The sharp gradient between them forms a boundary — an edge, in image terms.</p>
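<p>Steps 1 and 2 can be sketched in a few lines of Python. This is a minimal illustration, not the CogAlg implementation: match is taken as the shared magnitude (elementwise <code>min</code>), and clustering cuts the row at links whose difference exceeds an arbitrary threshold.</p>

```python
def cross_compare(row):
    """Step 1: compare adjacent pairs; match = shared magnitude, diff = residual."""
    return [(min(a, b), b - a) for a, b in zip(row, row[1:])]

def cluster_by_match(row, links, max_diff):
    """Step 2: connect pixels across links whose difference is small."""
    clusters, current = [], [row[0]]
    for pixel, (_, diff) in zip(row[1:], links):
        if abs(diff) <= max_diff:
            current.append(pixel)      # link inside a flat region: extend
        else:
            clusters.append(current)   # high-diff link: close the cluster
            current = [pixel]
    clusters.append(current)
    return clusters

row = [120, 122, 121, 80, 45, 47, 46]
links = cross_compare(row)
print(cluster_by_match(row, links, max_diff=10))
# [[120, 122, 121], [80], [45, 47, 46]] -- the singleton 80 lies on the
# steep gradient between the two flat clusters, i.e. inside the boundary
```

<p>Thresholding on difference rather than on raw match is a simplification for this sketch: with raw intensities, absolute match scales with brightness, so a small difference is the cleaner same-cluster signal at this level.</p>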
<p><strong>Step 3 — Recursion.</strong> Each cluster is now a single composite unit, with aggregate parameters (mean value, internal variance, span, boundary properties). These units become the inputs to the next level of comparison. Now you're comparing cluster-to-cluster: do two adjacent clusters share similar statistics? If so, they form a higher-order cluster. The boundary between them carries higher-order derivative information.</p>
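<p>Step 3 can be sketched the same way: summarize each cluster into aggregate parameters, then reuse the same match/diff comparison on those parameters. The parameter set (mean, span, variance) and the dictionary format are illustrative assumptions, not CogAlg's actual cluster representation.</p>

```python
def summarize(cluster):
    """Aggregate a cluster into composite parameters (illustrative set)."""
    mean = sum(cluster) / len(cluster)
    return {"mean": mean, "span": len(cluster),
            "variance": sum((x - mean) ** 2 for x in cluster) / len(cluster)}

def compare_clusters(c1, c2):
    """Reapply Step 1 at the cluster level: match/diff per shared parameter."""
    return {k: (min(c1[k], c2[k]), c2[k] - c1[k]) for k in c1}

a = summarize([120, 122, 121])
b = summarize([45, 47, 46])
print(compare_clusters(a, b))
# low mean-match (46.0) and large mean-diff (-75.0): the two clusters
# remain separate units at the next level
```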
<p>This recursion produces a hierarchy: pixels → segments → regions → objects — with no top-down fitting, no predefined number of layers, and no training signal.</p>
<h2>Concepts Are Clusters</h2>
<p>This isn't just a computational trick — it reflects how all semantic concepts are formed. Every concept is a cluster, but the type of clustering depends on the type of composition.</p>
<p>Some concepts have <strong>structural composition</strong>: their identity depends on how parts are connected. A letter is a specific arrangement of strokes; a face is a specific arrangement of features; a sentence is a syntactically linked sequence of words. These map to connectivity-based clusters, where the pattern is defined by the topology of match-links between components. Rearrange the parts and you get a different concept — "b" vs. "d", or "dog bites man" vs. "man bites dog."</p>
<p>Other concepts have <strong>affine composition</strong>: their identity is defined by proximity to a prototype, with instances varying along continuous parameters like scale, position, orientation, or intensity. "Red" is a cluster of similar wavelength responses. "Walking" is a cluster of motion sequences varying in speed and stride. "Chair" groups objects that vary in size and material but share similar affordance. These map to centroid-based clusters, where membership is graded by similarity to the centroid.</p>
<p>Most real concepts combine both: a "dog" has structural composition (legs connected to torso, head connected to neck — specific topology) and affine variation across instances (size, color, breed). The structural skeleton is captured by connectivity clustering; the variation around it is captured by centroid clustering. Abstract concepts like "justice" or "efficiency" follow the same principle at higher levels: structural relations between roles and outcomes, with affine variation across specific cases. There is no concept that isn't a generalization over similar instances — and generalization is clustering.</p>
<h2>Core vs. Boundary</h2>
<p>There is also a fork within each cluster. Clustering by node match produces a <strong>core</strong>: the region of internal similarity. But the high-difference links at the periphery — those with large difference between their endpoint nodes — are more salient: they carry richer properties (gradient magnitude, direction, shape), giving them more information content to compare. When these links are cross-compared, their mutual match can produce clusters of their own, forming a <strong>boundary</strong>. The result is a complemented cluster: core + boundary as a single unit.</p>
<p>In images, the core is a flat-ish region of similar pixels; the boundary is the contour surrounding it — a cluster of high-gradient links that share similar orientation and magnitude. In text, the core might be a coherent topic, and the boundary the transition to a different topic. In a social network, the core is a tightly connected community, and the boundary the set of bridging ties linking it to neighboring communities.</p>
<p>The clustering criterion is the same everywhere — match — but high-diff links are simply more salient inputs to comparison, and therefore more likely to form structured clusters. The boundary is not just a delimiter: it carries its own derivative structure, which becomes input to the next level of cross-comparison. This is why each level of recursion adds both a level of composition and a layer of derivation.</p>
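<p>A minimal sketch of this fork, taking per-link differences as input: low-difference links stay in the core, while consecutive salient links with matching sign group into boundary clusters. The salience threshold and the same-sign grouping rule are illustrative assumptions.</p>

```python
def fork_core_boundary(diffs, salience=10):
    """Split links into core (low |diff|) and boundary clusters of
    consecutive same-sign salient links; thresholds are illustrative."""
    core, boundaries, group = [], [], []
    for d in diffs:
        if abs(d) <= salience:
            core.append(d)             # low-difference link: core
            if group:
                boundaries.append(group)
                group = []
        elif group and (d > 0) == (group[-1] > 0):
            group.append(d)            # same-sign gradient: extend boundary
        else:
            if group:
                boundaries.append(group)
            group = [d]                # start a new boundary cluster
    if group:
        boundaries.append(group)
    return core, boundaries

diffs = [2, -1, -41, -35, 2, -1]       # per-link diffs from the pixel-row example
print(fork_core_boundary(diffs))
# ([2, -1, 2, -1], [[-41, -35]]) -- one coherent descending edge
```

<p>The two salient links cluster because their differences match each other (same sign, similar magnitude): the boundary is itself a match-defined cluster, just over more informative inputs.</p>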
<p>When complemented clusters are cross-compared at the next level, the relative salience of cores and boundaries can shift. If boundary-to-boundary match is higher than core-to-core match, then the boundaries become the more informative signal — they cluster more strongly and carry more of the pattern. This is common: image contours often share more distinctive structure (orientation, curvature) than the flat regions they separate, which may differ only in brightness. At higher levels, what started as a secondary derivative signal can become the primary basis for clustering.</p>
<p>This extends naturally into the temporal dimension. Temporal links between spatial clusters in successive frames represent change over time — they are temporal differences, analogous to spatial gradients. If these temporal links match across longer time spans, they form patterns of interaction between clusters or elements. Just as a spatial boundary can be more informative than the core it surrounds, a cluster of temporal interactions can be higher-order than the interacting nodes themselves. This is not an edge case — it is how the most important structures in nature are organized. In biology, it is metabolic and reproductive cycles that define living systems, not the molecules participating in them. In economics, it is exchange patterns — trade, credit, supply chains — that define the economy, not the individual agents.</p>
<h2>The Three-Stage Process</h2>
<p>Each level of the hierarchy executes three stages:</p>
<p><strong>1. Cross-comparison.</strong> Inputs at the current level are compared laterally — initially by proximity, later by derived similarity along other dimensions. Each comparison produces explicit match and miss (difference) signals. Comparison range and derivation order can increase incrementally: first adjacent pairs, then more distant ones; first raw values, then derivatives.</p>
<p><strong>2. Clustering.</strong> Nodes connected by pairwise match are grouped into connectivity-based clusters. Pair links carry both the match and the difference, so the clusters have built-in boundary representations. After connectivity clustering, strong groups form groupwise representations: centroids. Their element sets are then iteratively refined by degree of membership: the similarity between each element and the centroid. This is centroid-based clustering, which comes to dominate at higher levels of scope and composition, as relationships become more abstract and less local.</p>
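<p>The centroid stage can be sketched as a k-means-style refinement loop. This is a generic 1-D illustration, assuming negative distance as the similarity measure; it is not CogAlg's membership criterion.</p>

```python
import numpy as np

def refine_centroids(points, centroids, iters=10):
    """Iteratively refine membership by element-to-centroid similarity,
    then recompute each centroid from its current members."""
    for _ in range(iters):
        # degree of membership: each element joins its most similar centroid
        labels = np.argmin(np.abs(points[:, None] - centroids[None, :]), axis=1)
        # groupwise representation: centroid = mean of current members
        centroids = np.array([points[labels == k].mean() if np.any(labels == k)
                              else centroids[k] for k in range(len(centroids))])
    return centroids, labels

points = np.array([120.0, 122.0, 121.0, 80.0, 45.0, 47.0, 46.0])
cents, labels = refine_centroids(points, np.array([121.0, 46.0]))
print(cents, labels)   # converges to centroids 121.0 and 54.5
```

<p>In the full process, connectivity clustering would seed these centroids from its strong groups; here the seeds are hand-picked for brevity.</p>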
<p><strong>3. Feedback.</strong> Cluster-level parameters are projected downward — not to adjust learned weights (there are none), but to rescale comparison filters, update input coordinates, and potentially modify the process itself. The objective function is projected similarity: how well do known patterns predict new inputs? This is open-ended, not a fixed loss target.</p>
<h2>How This Differs from Neural Networks</h2>
<h3>Comparison order</h3>
<p>Neural networks sum inputs first (weighted aggregation in each neuron), then compare the output to a target. Summation before comparison is lossy: the individual input structure is destroyed before any pattern is extracted from it.</p>
<p>This framework compares first, then sums — only within match-defined clusters. Individual comparison results are preserved and propagated, so higher levels receive structured representations rather than opaque activations.</p>
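<p>A toy numeric illustration of this point (the specific values are arbitrary): two input rows with identical sums, which a single weighted summation cannot distinguish, while pairwise comparison preserves the structure that separates them.</p>

```python
# A neuron's weighted sum (equal weights here) collapses these two rows
# into the same number; compare-first keeps the per-pair match/diff
# structure that tells a flat region from an oscillating edge.
flat = [100, 100, 100, 100]
edge = [130, 70, 130, 70]

print(sum(flat), sum(edge))   # 400 400: indistinguishable after summation

compare = lambda xs: [(min(a, b), b - a) for a, b in zip(xs, xs[1:])]
print(compare(flat))          # [(100, 0), (100, 0), (100, 0)]
print(compare(edge))          # [(70, -60), (70, 60), (70, -60)]
```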
<h3>Direction of learning</h3>
<p>Backpropagation decomposes output error and propagates it backward through all layers, adjusting weights to reduce that error. Each layer's representation is shaped by what the final output needs, not by what the input contains. The resolution of this process degrades exponentially with depth — hence the need for massive iteration.</p>
<p>Bottom-up clustering builds representations directly from input structure. Each level's patterns are determined by the actual match and difference signals at that level. There is no backward error signal because there is no target output to err against.</p>
<h3>What "similarity" means</h3>
<p>In self-attention (transformers), similarity is a dot product between projected vectors: it scales with component-wise co-occurrence but discards the sign and magnitude of individual differences. In CNNs, kernels converge on gradient detection — effectively selecting for <em>difference</em> rather than similarity.</p>
<p>Here, comparison produces both match and miss as signed, structured quantities. Match supports clustering; difference supports boundary detection and higher-order derivation. Both are retained.</p>
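<p>A small numeric check of this distinction, using elementwise <code>min</code> as the match measure (an assumption of this sketch): two vector pairs with the same dot product but different match/difference structure.</p>

```python
import numpy as np

a, b = np.array([3.0, 1.0]), np.array([1.0, 3.0])
c, d = np.array([2.0, 1.0]), np.array([2.0, 2.0])

print(a @ b, c @ d)             # 6.0 6.0: the dot product cannot tell them apart

# match (shared magnitude) and miss (signed difference) are both retained:
print(np.minimum(a, b), a - b)  # [1. 1.] [ 2. -2.]
print(np.minimum(c, d), c - d)  # [2. 1.] [ 0. -1.]
```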
<h3>Hierarchy</h3>
<p>In standard deep learning, depth is fixed at design time and all nodes at a given layer share the same receptive scope. Representations are distributed: every node at every layer participates in every input.</p>
<p>In this framework, hierarchy is formed dynamically by recursion. Depth is determined by the data. Processing is localized: each cluster is a semantically coherent unit, compared and clustered only with its neighbors. This enables sparse, selective computation — only informative regions propagate upward.</p>
<h3>Temporal clustering and LLM reasoning</h3>
<p>The connectivity-vs-centroid distinction extends into time. A one-off chain of linked transitions — each step predicting or recruiting the next — is a temporal connectivity cluster: an episode or thread. When many such threads share a common transition pattern, they compress into a temporal centroid cluster: a process type or template. So connectivity clustering groups individual events into threads; centroid clustering abstracts across threads into process types.</p>
<p>This framework builds episodes first and abstracts process types after — micro-connectivity to macro-centroid. LLMs do the reverse. Base model training compresses co-occurrence patterns across tokens and contexts into shared feature directions — that is spatial centroid clustering, scaled up but not fundamentally different from a single-layer perceptron.</p>
<p>Training the base model to reason — fine-tuning on chains of thought, where the model learns recurrent step-to-step transition structure — is where temporal centroid clustering actually enters. At inference, attention then assembles a transient chain of token-to-token links within that pre-shaped space. That chain is connectivity-like, but it operates in a geometry that was already centroidized by training.</p>
<p>So the inversion is: this framework starts with actual linked structure and derives abstractions from it. LLMs start with abstractions (learned parameters) and generate linked structure (reasoning traces) inside them. What persists in LLMs is the centroidized parameter geometry; what persists here is the connectivity-derived structure, with centroid summarization as optional higher-level compression.</p>
<h2>Why This Should Scale Far Better Than Flat Nets</h2>
<p>The recursive structure means the system can form patterns of arbitrary complexity without architectural redesign. Each new level of clustering adds:</p>
<ul>
<li>A level of <strong>composition</strong> (clusters of clusters).</li>
<li>A layer of <strong>derivation</strong> (differences of differences).</li>
<li>A corresponding increment in comparison <strong>range</strong>.</li>
</ul>
<p>This is a dual hierarchy — composition and derivation grow in parallel — which gives the system a combinatorial vocabulary for representing structure. Standard networks have composition (depth) but no explicit derivation hierarchy; they must learn derivative-like features implicitly through training.</p>
<p>The open-ended objective function — projected correspondence between model and environment — means the system doesn't converge to a fixed optimum. It continues to discover structure as long as the input contains predictable regularities. This makes it a candidate for general-purpose intelligence in a way that loss-minimizing systems are not, because those systems are bounded by their training distribution.</p>
<h2>Relation to Existing Methods</h2>
<p>Several existing approaches share partial overlap:</p>
<ul>
<li><strong>Graph Neural Networks / Transformers:</strong> Both embed lateral relationships between inputs (edges in GNNs, attention scores in transformers). But both train these relationships via backpropagation rather than deriving them directly from input comparison.</li>
<li><strong>Capsule Networks (Hinton):</strong> Use explicit positional/pose information, similar to the explicit coordinates here. But still rely on backprop for training.</li>
<li><strong>Spectral Clustering:</strong> Re-orders inputs along eigenvectors of a similarity matrix — conceptually similar to the feedback stage here, which re-orders along predictive dimensions. But spectral clustering is a one-shot method, not recursive.</li>
<li><strong>Hebbian Learning:</strong> Adjusts weights by input-output coincidence — a local, binary version of similarity matching. Each neuron acts as a fuzzy centroid cluster. But Hebbian learning lacks explicit comparison and has no mechanism for hierarchical composition.</li>
<li><strong>Contour Grouping (Pb detector, tensor voting):</strong> In computer vision, these methods explicitly select high-gradient elements (high internal difference), then group them by similarity of their properties (orientation, magnitude). The pipeline — identify salient high-diff elements, compare them, cluster by match — is procedurally close to the boundary fork described above. But these are domain-specific to 2D images, hand-designed for a single level, and not framed as a general recursive principle.</li>
<li><strong>Scale-Space Theory (Lindeberg, Witkin):</strong> Derivative structures at one scale become the objects of analysis at the next — conceptually adjacent to the idea that each recursion level adds a layer of derivation. But scale-space achieves this through continuous smoothing, not through discrete clustering of high-diff links, and has no compositional hierarchy.</li>
</ul>
<p>The distinguishing feature of this framework is that all operations — comparison, clustering, feedback, recursion — derive from a single principle (prediction via pattern discovery), with no external training signal and no fixed architecture. Whether this theoretical consistency translates to practical performance at scale is the open research question.</p>
<hr>
<p class="footer">Implementation: <a href="https://github.com/boris-kz/CogAlg">CogAlg on GitHub</a>, main process in <code><a href="https://github.com/boris-kz/CogAlg/blob/master/frame_2D_alg/agg_recursion.py">agg_recursion.py</a></code>.</p>
</body>
</html>