tofu/terraform output command does not produce kubeconfig #1639
-
Hi, all! I may be missing something obvious, but I followed the steps in the README, and all resources are created correctly on Hetzner. However, when I run `terraform output kubeconfig`, I get:

│ Warning: No outputs found
│
│ The state file either has no outputs defined, or all the defined outputs are empty. Please define an output in your configuration with the `output` keyword and run `tofu refresh` for it to become
│ available. If you are using interpolation, please verify the interpolated value is not empty. You can use the `tofu console` command to assist.

EDIT: This is caused by the local-exec provisioner error below:

│ Error: local-exec provisioner error
│
│ with module.kube-hetzner.module.control_planes["1-0-control-plane-nbg1"].hcloud_server.server,
│ on .terraform/modules/kube-hetzner/modules/host/main.tf line 64, in resource "hcloud_server" "server":
│ 64: provisioner "local-exec" {
│
│ Error running command 'timeout 600 bash <<EOF
│ until ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o PubkeyAuthentication=yes -i /tmp/rtor37yb8bculxcxba2v -o ConnectTimeout=2 -p 2220 root@188.245.151.169 true 2> /dev/null
│ do
│ echo "Waiting for MicroOS to become available..."
│ sleep 3
│ done
│ EOF
│ ': signal: killed.

Config file (kube.tf):
locals {
  hcloud_token = var.hcloud_token
}

module "kube-hetzner" {
  cluster_name = "my-cluster-name"
  source       = "kube-hetzner/kube-hetzner/hcloud"
  providers = {
    hcloud = hcloud
  }
  hcloud_token    = var.hcloud_token
  ssh_public_key  = file("~/.ssh/id_ed25519.pub")
  ssh_private_key = file("~/.ssh/id_ed25519")
  network_region  = "eu-central"

  control_plane_nodepools = [
    {
      name        = "control-plane-fsn1"
      server_type = "cax11"
      location    = "fsn1"
      labels      = []
      taints      = []
      count       = 1
    },
    {
      name        = "control-plane-nbg1"
      server_type = "cax11"
      location    = "nbg1"
      labels      = []
      taints      = []
      count       = 1
    },
    {
      name        = "control-plane-hel1"
      server_type = "cax11"
      location    = "hel1"
      labels      = []
      taints      = []
      count       = 1
    }
  ]

  agent_nodepools = [
    {
      name        = "agent-arm-small"
      server_type = "cax11"
      location    = "fsn1"
      labels      = []
      taints      = []
      count       = 1
    }
  ]

  autoscaler_nodepools = [
    {
      name        = "autoscaled-arm"
      server_type = "cax11"
      location    = "fsn1"
      min_nodes   = 0
      max_nodes   = 10
      labels = {
        "autoscaler.kubernetes.io/enabled" = "true"
      }
      taints = []
    }
  ]

  create_kubeconfig                          = false
  ingress_controller                         = "nginx"
  load_balancer_type                         = "lb11"
  load_balancer_location                     = "fsn1"
  automatically_upgrade_k3s                  = true
  automatically_upgrade_os                   = true
  system_upgrade_use_drain                   = true
  enable_metrics_server                      = true
  initial_k3s_channel                        = "stable"
  cluster_autoscaler_version                 = "v1.30.0"
  cluster_autoscaler_log_level               = 4
  cluster_autoscaler_server_creation_timeout = 20
  enable_wireguard                           = true
}

provider "hcloud" {
  token = var.hcloud_token
}

terraform {
  required_version = ">= 1.5.0"
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = ">= 1.43.0"
    }
  }
}

output "kubeconfig" {
  value     = module.kube-hetzner.kubeconfig
  sensitive = true
}

variable "hcloud_token" {
  sensitive = true
  default   = ""
}
When I set `create_kubeconfig` to true, it doesn't create one either.
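For reference, once an apply completes without the provisioner error, the sensitive output can normally be read with `terraform output -raw` (a sketch; the output name `kubeconfig` matches the output block above, and the target filename is arbitrary — `tofu` users substitute `tofu` for `terraform`):

```shell
# Refresh state so outputs are populated
terraform refresh

# Print the sensitive output in plain form and save it somewhere of your choosing
terraform output -raw kubeconfig > my-cluster-kubeconfig.yaml

# Point kubectl at the saved file
export KUBECONFIG="$PWD/my-cluster-kubeconfig.yaml"
kubectl get nodes
```

Note that `terraform output` only reads the state file, so it cannot succeed while the apply itself is failing in the provisioner.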
Replies: 5 comments
-
@rafaelbeckel If you are on Mac, make sure you have this installed
-
@rafaelbeckel are you sure this is your full kube.tf?
So I guess you used
I've had the same issue and filed this bug and a PR to fix it:
-
Previously, I ran into the issue that the default file creates more servers than new accounts on Hetzner are allowed (they allow only 5). So I removed the redundant control planes, and now my resources are under the limit. However, I'm still stuck in the infinite loop. I tried with SSH port 2220 and the default 22. I also installed
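When the "Waiting for MicroOS..." loop never ends, it can help to run the provisioner's ssh probe by hand without the silenced stderr, so the real failure reason is visible (a sketch; the key path, port, and IP are placeholders taken from the error log above — adjust them to your setup):

```shell
# Same options the provisioner uses, but without `2> /dev/null`,
# so the actual failure (key rejection, timeout, firewall) is printed.
ssh -o UserKnownHostsFile=/dev/null \
    -o StrictHostKeyChecking=no \
    -o IdentitiesOnly=yes \
    -o PubkeyAuthentication=yes \
    -i ~/.ssh/id_ed25519 \
    -o ConnectTimeout=5 \
    -p 2220 root@188.245.151.169 true
echo "exit code: $?"
```

An exit code of 0 means the provisioner's check would have passed; anything else, and the printed error usually points at the culprit.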
-
@rafaelbeckel Run terraform destroy. Then try deleting these lines:

create_kubeconfig = false

See if it works; if so, destroy again and proceed by elimination. When you find what's wrong, please let us know.
-
In my case I was also getting the "Waiting for MicroOS to become available..." error on the ssh command when doing terraform apply, so I tested the ssh command on its own in the terminal, like

ssh root@188.245.151.169

That gave an error mentioning "signing failed for ED25519". Turns out I needed to tweak my ssh_public_key and ssh_private_key values in kube.tf (my ssh key is passphrase protected, so I tweaked the settings to work with ssh-agent), then did terraform destroy and again terraform apply, and that fixed the issue. It seems a few things can cause this problem; I saw others mention it was due to not setting firewall_ssh_source correctly with their local IP.
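For anyone hitting the same thing with a passphrase-protected key, the tweak is roughly this (a sketch based on the kube-hetzner README's guidance; to my understanding, setting ssh_private_key to null tells the module to fall back to the running ssh-agent instead of reading the key file directly):

```hcl
# In kube.tf: keep the public key, let ssh-agent supply the private one
ssh_public_key  = file("~/.ssh/id_ed25519.pub")
ssh_private_key = null # null = use the local ssh-agent for authentication
```

Then load the key into the agent before applying, e.g. `eval "$(ssh-agent -s)" && ssh-add ~/.ssh/id_ed25519`, so the passphrase is entered once up front rather than blocking the non-interactive provisioner.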