Launched pod-nginx.json from examples/ and it looks like the new events stuff is confused (selflink?):
|
Ugly pod condition - it eventually resolves itself but really that process should not take so long. To reproduce:
kubelet logs:
[[EDIT]] this was further aggravated by a bug fixed in @bda0d35 |
Observation: with service portals, kube-proxy allocates ephemeral ports on the slaves to back the virtual IP addresses that iptables is pre-routing. These allocations could consume ports in the range of the "ports" resources that slaves are offering to other schedulers. We should find a way to configure kube-proxy to either (a) avoid certain ranges, or (b) work within a specified range.
It's worth noting that on Linux, at least, the default Mesos "ports" resource range is outside the OS's default ephemeral port range, meaning that, by default, random ports allocated by kube-proxy should not squash Mesos jobs consuming "ports" resources.
~# cat /proc/sys/net/ipv4/ip_local_port_range
32768 61000
Should probably document the potential for disaster here as a known issue. |
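For context, a quick way to gauge the overlap risk on a slave is to compare the kernel's ephemeral range with the "ports" range the slave is offering; a minimal sketch, assuming the stock Mesos default of ports:[31000-32000] (adjust for the actual offer):
# Range the kernel hands out for ephemeral ports (where kube-proxy's random ports come from):
cat /proc/sys/net/ipv4/ip_local_port_range
# Stock Mesos offers ports:[31000-32000]; if the slave's offered range overlaps the
# output above, the ephemeral range can be moved out of the way, e.g.:
sysctl -w net.ipv4.ip_local_port_range="32768 61000"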
After spending some time wading through various issues in the upstream tracker, it's unclear to me how best to approach a generic "least-surprise" solution for Mesos users wanting to expose a service on a public address. So now I'm starting to think about hacks. One hack that occurs to me is to "filter" the /services REST API and modify only empty values of PublicIP in the returned service.Spec entries, such that they point to the public IP address of the machine the master is running on. This would also require that kube-proxy runs on the master. I'm not terribly fond of this approach, but it does offer convenience to Mesos users who want to run in environments where external load balancers may not be readily available. This hack could be enabled by default, and disabled with a flag ( |
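To make the proposed filter concrete, its intended effect would look roughly like the following; this is an illustration only (not the proposed implementation), and the apiserver address, port, and jq rewrite are all assumptions:
# List services and fill in empty publicIPs with the master's public address.
MASTER_IP=146.148.86.181    # assumed: public address of the machine running the master
curl -s http://localhost:8080/api/v1beta1/services |
  jq --arg ip "$MASTER_IP" \
     '.items |= map(if (.publicIPs // []) == [] then .publicIPs = [$ip] else . end)'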
Looks like the proxier has some changes in 0.6.x re: public IPs working better on non-GCE clouds. It may be worth considering changing this PR to a 0.6 upgrade to pick those up. |
Note: during the v0.6.2 rebase a dependency on |
Running
Configure frontend-service with:
{
"id": "frontend",
"kind": "Service",
"apiVersion": "v1beta1",
"port": 9998,
"selector": {
"name": "frontend"
},
"publicIPs": [
"146.148.86.181"
]
}
Then (WIP) open up iptables on the master:
(apparently I need something more since this isn't working, but it does stop logging rejected incoming packets). |
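For reference, a spec like the one above would normally be handed to the apiserver with kubecfg; a sketch, assuming the JSON is saved as /tmp/frontend-service.json and that $KUBERNETES_MASTER points at the framework's apiserver:
# assumed: KUBERNETES_MASTER is something like http://$servicehost:8888
kubecfg -h "$KUBERNETES_MASTER" -c /tmp/frontend-service.json create services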
Got the UI working for external clients on GCE.
jclouds@development-2863-77a:~$ cat /tmp/frontend-service.json
{
"id": "frontend",
"kind": "Service",
"apiVersion": "v1beta1",
"port": 9998,
"selector": {
"name": "frontend"
},
"publicIPs": [
"10.57.172.200"
]
}
jclouds@development-2863-77a:~$ echo $servicehost
10.57.172.200
Next, determine the port that the proxy is listening on for the frontend service:
jclouds@development-2863-77a:~$ sudo iptables-save|grep -e frontend
-A KUBE-PROXY -d 10.10.10.79/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.57.172.200:56640
-A KUBE-PROXY -d 10.57.172.200/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.57.172.200:56640
Then, add an iptables INPUT rule to accept traffic on that port:
jclouds@development-2863-77a:~$ sudo iptables -A INPUT -i eth0 -p tcp \
-m state --state NEW,ESTABLISHED -m tcp --dport 56640 -j ACCEPT |
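As a sanity check from outside the cluster, hitting the service port on the instance's external address should now be proxied through; a hedged example, where <external-ip> stands for the GCE external address that NATs to 10.57.172.200 and a GCE firewall rule allowing tcp:9998 is assumed to already exist:
curl -sv http://<external-ip>:9998/ -o /dev/null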
Wow. k8s v0.7 was just RC'd |
I was trying to access the k8s UI from a web browser, but the iptables prerouting rules aren't set up to DNAT requests that way. Not critical |
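A quick way to confirm what the proxier actually installed (and that nothing DNATs the browser's request path) is to dump the nat-table rules it manages; a generic diagnostic, assuming the chain is named KUBE-PROXY as in the rules shown earlier:
sudo iptables -t nat -S PREROUTING
sudo iptables -t nat -S KUBE-PROXY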
…ed version - avoids using incomplete pod spec on refresh errors
…ocker commands" This reverts commit a352787.
Testing on GCE, params used for the k8sm framework (use these for updating the tutorial):
|
@jdef let me know if there's something specific you want me to review before merging. |
Thanks. This PR is the next one to be merged, though I've been holding off. |
This is a WIP -- use this branch at your own risk.
TODOs