**docs/admin/cluster-large.md** (3 additions, 3 deletions)
@@ -23,11 +23,11 @@ certainly want the docs that go with that version.</h1>
 # Kubernetes Large Cluster

 ## Support
-At v1.0, Kubernetes supports clusters up to 100 nodes with 30-50 pods per node and 1-2 containers per pod (as defined in the [1.0 roadmap](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/roadmap.md#reliability-and-performance)).
+At v1.0, Kubernetes supports clusters up to 100 nodes with 30-50 pods per node and 1-2 containers per pod (as defined in the [1.0 roadmap](../../docs/roadmap.md#reliability-and-performance)).

 ## Setup
-Normally the number of nodes in a cluster is controlled by the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/gce/config-default.sh)).
+Normally the number of nodes in a cluster is controlled by the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](../../cluster/gce/config-default.sh)).

 Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run into quota issues and fail to bring the cluster up.
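The config files use the common shell default-with-override idiom, so the node count can be set from the environment rather than by editing the file. A minimal sketch of that idiom (the default of 4 here is illustrative, not necessarily the file's actual value):

```shell
# Sketch of the default-with-override idiom used by config-default.sh.
# NUM_MINIONS is the variable named in the doc; the default of 4 is illustrative.
NUM_MINIONS=${NUM_MINIONS:-4}
echo "will provision ${NUM_MINIONS} nodes"
```

With this idiom, running `NUM_MINIONS=100 cluster/kube-up.sh` would request a 100-node cluster without editing the file, assuming the setup script sources `config-default.sh` as it did at v1.0.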
@@ -49,7 +49,7 @@ To avoid running into cloud provider quota issues, when creating a cluster with
 * Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers limit the number of VMs you can create during a given period.
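A hypothetical sketch of such gating (batch size, total, and the `echo` placeholder are all invented; a real script would invoke the provider's CLI and wait tens of seconds between batches):

```shell
# Bring up node VMs in batches with a pause between batches, so we stay
# under provider API rate limits. Values here are illustrative.
TOTAL=${TOTAL:-20}
BATCH=5
DELAY=${DELAY:-0}   # a real script would pause, e.g. 30-60 seconds
i=0
while [ "$i" -lt "$TOTAL" ]; do
  end=$((i + BATCH))
  [ "$end" -gt "$TOTAL" ] && end=$TOTAL
  echo "creating nodes $i..$((end - 1))"   # placeholder for the real VM-creation call
  i=$end
  [ "$i" -lt "$TOTAL" ] && sleep "$DELAY"
done
```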
 ### Addon Resources
-To prevent memory leaks or other resource issues in [cluster addons](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and memory resources they can consume (see PRs [#10653](https://github.com/GoogleCloudPlatform/kubernetes/pull/10653/files) and [#10778](https://github.com/GoogleCloudPlatform/kubernetes/pull/10778/files)).
+To prevent memory leaks or other resource issues in [cluster addons](../../cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and memory resources they can consume (see PRs [#10653](https://github.com/GoogleCloudPlatform/kubernetes/pull/10653/files) and [#10778](https://github.com/GoogleCloudPlatform/kubernetes/pull/10778/files)).
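The limits themselves live in each addon's manifest; the stanza below is only an invented illustration of the shape of such a limit (the cpu/memory values are not the real ones from those PRs):

```shell
# Print an invented example of the container resources stanza those PRs add
# to addon manifests; the values are illustrative, not the real limits.
cat <<'EOF'
resources:
  limits:
    cpu: 100m
    memory: 50Mi
EOF
```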
**docs/admin/high-availability.md** (1 addition, 1 deletion)
@@ -35,7 +35,7 @@ certainly want the docs that go with that version.</h1>
 ## Introduction

 This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic.
 Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up, such as
-the simple [Docker based single node cluster instructions](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/docker.md),
+the simple [Docker based single node cluster instructions](../../docs/getting-started-guides/docker.md),
 or try [Google Container Engine](https://cloud.google.com/container-engine/) for hosted Kubernetes.

 Also, at this time high-availability support for Kubernetes is not continuously tested in our end-to-end (e2e) testing. We will
**docs/design/event_compression.md** (3 additions, 3 deletions)
@@ -35,7 +35,7 @@ Each binary that generates events (for example, ```kubelet```) should keep track
 Event compression should be best effort (not guaranteed). Meaning, in the worst case, ```n``` identical (minus timestamp) events may still result in ```n``` event entries.

 ## Design
-Instead of a single Timestamp, each event object [contains](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/api/types.go#L1111) the following fields:
+Instead of a single Timestamp, each event object [contains](../../pkg/api/types.go#L1111) the following fields:
 * ```FirstTimestamp util.Time```
   * The date/time of the first occurrence of the event.
 * ```LastTimestamp util.Time```
@@ -47,7 +47,7 @@ Instead of a single Timestamp, each event object [contains](https://github.com/G
 Each binary that generates events:
 * Maintains a historical record of previously generated events:
-  * Implemented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [```pkg/client/record/events_cache.go```](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client/record/events_cache.go).
+  * Implemented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [```pkg/client/record/events_cache.go```](../../pkg/client/record/events_cache.go).
 * The key in the cache is generated from the event object minus timestamps/count/transient fields; specifically, the following event fields are used to construct a unique key for an event:
   * ```event.Source.Component```
   * ```event.Source.Host```
@@ -59,7 +59,7 @@ Each binary that generates events:
   * ```event.Reason```
   * ```event.Message```
 * The LRU cache is capped at 4096 events. That means if a component (e.g. kubelet) runs for a long period of time and generates tons of unique events, the previously generated events cache will not grow unchecked in memory. Instead, after 4096 unique events are generated, the oldest events are evicted from the cache.
-* When an event is generated, the previously generated events cache is checked (see [```pkg/client/record/event.go```](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/client/record/event.go)).
+* When an event is generated, the previously generated events cache is checked (see [```pkg/client/record/event.go```](../../pkg/client/record/event.go)).
 * If the key for the new event matches the key for a previously generated event (meaning all of the above fields match between the new event and some previously generated event), then the event is considered to be a duplicate and the existing event entry is updated in etcd:
   * The new PUT (update) event API is called to update the existing event entry in etcd with the new last seen timestamp and count.
   * The event is also updated in the previously generated events cache with an incremented count, updated last seen timestamp, name, and new resource version (all required to issue a future event update).
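The keying idea in the design above can be sketched outside Go (this is not the real implementation, just an illustration): strip timestamps and count from each event, and identical remainders collapse into one entry with an occurrence count. Here, each made-up line stands for the key fields component|host|reason|message:

```shell
# Made-up event stream; each line is component|host|reason|message, i.e. the
# dedup key fields with timestamps/count already stripped. Not real output.
events='kubelet|node-1|pulled|Successfully pulled image
kubelet|node-1|pulled|Successfully pulled image
kubelet|node-1|pulled|Successfully pulled image
kubelet|node-2|failed|Failed to pull image'
# Collapsing by key yields one entry per unique event plus a count, which is
# what the compressed event record stores (plus first/last timestamps).
printf '%s\n' "$events" | sort | uniq -c
```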
1. Create services before other objects, or at least before objects that depend upon them. Namespace-relative DNS mitigates this some, but most users are still using service environment variables. [#1768](https://github.com/GoogleCloudPlatform/kubernetes/issues/1768)
1. Finish rolling update [#1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353)
-NOTE: This script calls [cluster/kube-up.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/kube-up.sh)
-which in turn calls [cluster/aws/util.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/util.sh)
-using [cluster/aws/config-default.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/config-default.sh).
+NOTE: This script calls [cluster/kube-up.sh](../../cluster/kube-up.sh)
+which in turn calls [cluster/aws/util.sh](../../cluster/aws/util.sh)
+using [cluster/aws/config-default.sh](../../cluster/aws/config-default.sh).

 This process takes about 5 to 10 minutes. Once the cluster is up, the IP addresses of your master and node(s) will be printed,
 as well as information about the default services running in the cluster (monitoring, logging, dns). User credentials and security
 tokens are written in `~/.kube/kubeconfig`; they will be necessary to use the CLI or the HTTP Basic Auth.

 By default, the script will provision a new VPC and a 4-node k8s cluster in us-west-2a (Oregon) with `t2.micro` instances running on Ubuntu.
-You can override the variables defined in [config-default.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/config-default.sh) to change this behavior as follows:
+You can override the variables defined in [config-default.sh](../../cluster/aws/config-default.sh) to change this behavior as follows:
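For instance, exporting overrides before running the setup script (the variable names below are assumptions modeled on that era's `cluster/aws/config-default.sh`, and the values are purely illustrative):

```shell
# Hypothetical overrides; export these before running cluster/kube-up.sh.
# Variable names are assumptions based on cluster/aws/config-default.sh.
export KUBE_AWS_ZONE=us-west-2a   # availability zone for the cluster
export NUM_MINIONS=2              # number of worker nodes
export MINION_SIZE=t2.small       # EC2 instance type for nodes
echo "zone=$KUBE_AWS_ZONE nodes=$NUM_MINIONS size=$MINION_SIZE"
```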
-An up-to-date documentation page for this tool is available here: [kubectl manual](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md)
+An up-to-date documentation page for this tool is available here: [kubectl manual](../../docs/user-guide/kubectl/kubectl.md)

 By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
-For more information, please read [kubeconfig files](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubeconfig-file.md)
+For more information, please read [kubeconfig files](../../docs/user-guide/kubeconfig-file.md)

 ### Examples

 See [a simple nginx example](../../docs/user-guide/simple-nginx.md) to try out your new cluster.
@@ -114,7 +114,7 @@ cluster/kube-down.sh
 ```

 ## Further reading
-Please see the [Kubernetes docs](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs) for more details on administering
+Please see the [Kubernetes docs](../../docs/) for more details on administering
**docs/getting-started-guides/locally.md** (1 addition, 1 deletion)
@@ -150,7 +150,7 @@ hack/local-up-cluster.sh
 One or more of the Kubernetes daemons might've crashed. Tail the logs of each in /tmp.

 #### The pods fail to connect to the services by host names
-The local-up-cluster.sh script doesn't start a DNS service. A similar situation can be found [here](https://github.com/GoogleCloudPlatform/kubernetes/issues/6667). You can start one manually. Related documents can be found [here](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns#how-do-i-configure-it)
+The local-up-cluster.sh script doesn't start a DNS service. A similar situation can be found [here](https://github.com/GoogleCloudPlatform/kubernetes/issues/6667). You can start one manually. Related documents can be found [here](../../cluster/addons/dns/#how-do-i-configure-it)