Many thanks to erethon for his help & support on this article.
When working on your home lab, quite often you need to spawn containers or virtual machines to test or develop something. I used to do this kind of testing with public cloud providers, with minimal VMs and for short periods of time to reduce any costs. In this article I will try to explain how to use libvirt -that means kvm- with terraform and provide a simple way to run this on your linux machine.
Be aware this will be a (long) technical article and some experience with kvm/libvirt & terraform is needed, but I will try to keep it simple so you can follow the instructions.
Terraform
Install Terraform v0.13 either from your distro or directly from HashiCorp's site.
$ terraform version
Terraform v0.13.2
Libvirt
Same thing for libvirt:
$ libvirtd --version
libvirtd (libvirt) 6.5.0
$ sudo systemctl is-active libvirtd
active
Verify that you have access to libvirt:
$ virsh -c qemu:///system version
Compiled against library: libvirt 6.5.0
Using library: libvirt 6.5.0
Using API: QEMU 6.5.0
Running hypervisor: QEMU 5.1.0
Terraform Libvirt Provider
To access the libvirt daemon via terraform, we need the terraform-provider-libvirt provider.
Terraform provider to provision infrastructure with Linux’s KVM using libvirt
The official repo is on GitHub - dmacvicar/terraform-provider-libvirt and you can download a precompiled version for your distro from the repo, or you can download a generic version from my gitlab repo
ebal / terraform-provider-libvirt · GitLab
These are my instructions
mkdir -pv ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64/
curl -sLo ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64/terraform-provider-libvirt https://gitlab.com/terraform-provider/terraform-provider-libvirt/-/jobs/artifacts/master/raw/terraform-provider-libvirt/terraform-provider-libvirt?job=run-build
chmod +x ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64/terraform-provider-libvirt
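A quick sanity check never hurts: before running terraform, you can confirm that the plugin binary landed in the right place and is executable (the path is the same one used in the commands above).
$ ls -l ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64/terraform-provider-libvirt
$ file ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64/terraform-provider-libvirt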
Terraform Init
Let’s create a new directory and test that everything is fine.
mkdir -pv tf_libvirt
cd !$
cat > Provider.tf <<EOF
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.6.2"
}
}
}
EOF
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding dmacvicar/libvirt versions matching "0.6.2"...
- Installing dmacvicar/libvirt v0.6.2...
- Installed dmacvicar/libvirt v0.6.2 (unauthenticated)
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Everything seems okay!
We can verify with tree or find:
$ tree -a
.
├── Provider.tf
└── .terraform
└── plugins
├── registry.terraform.io
│ └── dmacvicar
│ └── libvirt
│ └── 0.6.2
│ └── linux_amd64 -> /home/ebal/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64
└── selections.json
7 directories, 2 files
Provider
But did we actually connect to libvirtd via terraform?
Short answer: No.
We just told terraform to use this specific provider.
How to connect?
We need to add the libvirt connection URI to the provider section:
provider "libvirt" {
uri = "qemu:///system"
}
So our Provider.tf looks like this:
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.6.2"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
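Since we only changed the provider configuration (no new providers were added), there is no need to re-run terraform init; a quick terraform validate should be enough to confirm the syntax is still fine.
$ terraform validate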
Libvirt URI
libvirt is a virtualization API/toolkit that supports multiple drivers, so you can use libvirt against the virtualization platforms below:
- LXC - Linux Containers
- OpenVZ
- QEMU
- VirtualBox
- VMware ESX
- VMware Workstation/Player
- Xen
- Microsoft Hyper-V
- Virtuozzo
- Bhyve - The BSD Hypervisor
Libvirt also supports multiple authentication mechanisms, like ssh:
virsh -c qemu+ssh://username@host1.example.org/system
So it is really important to properly define the libvirt URI in terraform!
In this article, I will limit it to a local libvirt daemon, but keep in mind you can use a remote libvirt daemon as well.
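For example, if you wanted to manage a remote libvirt host over ssh, you could first confirm that the connection works with virsh and then reuse the exact same URI in the provider block (user and host below are placeholders):
$ virsh -c qemu+ssh://username@host1.example.org/system list --all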
Volume
Next thing, we need a disk volume!
Volume.tf
resource "libvirt_volume" "ubuntu-2004-vol" {
name = "ubuntu-2004-vol"
pool = "default"
#source = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-disk-kvm.img"
source = "ubuntu-20.04.img"
format = "qcow2"
}
I have already downloaded this image and verified its checksum; I will use this local image as the base image for my VM's volume.
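For reference, this is roughly how such an image can be fetched and verified; the URLs are the upstream Ubuntu focal cloud-image location (the same one commented out in Volume.tf) and the final rename only matches the filename used above:
$ curl -sLO https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-disk-kvm.img
$ curl -sLO https://cloud-images.ubuntu.com/focal/current/SHA256SUMS
$ sha256sum --check --ignore-missing SHA256SUMS
$ mv focal-server-cloudimg-amd64-disk-kvm.img ubuntu-20.04.img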
By running terraform plan
we will see this output:
# libvirt_volume.ubuntu-2004-vol will be created
+ resource "libvirt_volume" "ubuntu-2004-vol" {
+ format = "qcow2"
+ id = (known after apply)
+ name = "ubuntu-2004-vol"
+ pool = "default"
+ size = (known after apply)
+ source = "ubuntu-20.04.img"
}
What we expect is for terraform to use this source image, create a new disk volume (a copy) and place it in the default libvirt storage pool.
Let’s try to figure out what is happening here:
terraform plan -out terraform.out
terraform apply terraform.out
terraform show
# libvirt_volume.ubuntu-2004-vol:
resource "libvirt_volume" "ubuntu-2004-vol" {
format = "qcow2"
id = "/var/lib/libvirt/images/ubuntu-2004-vol"
name = "ubuntu-2004-vol"
pool = "default"
size = 2361393152
source = "ubuntu-20.04.img"
}
and
$ virsh -c qemu:///system vol-list default
Name Path
------------------------------------------------------------
ubuntu-2004-vol /var/lib/libvirt/images/ubuntu-2004-vol
Volume Size
Be aware: with this declaration, the produced disk volume image will have the same size as the original source. In this case, ~2G of disk.
We will show later in this article how to expand to something larger.
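You can also confirm the capacity of the produced volume straight from libvirt:
$ virsh -c qemu:///system vol-info --pool default ubuntu-2004-vol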
destroy volume
$ terraform destroy
libvirt_volume.ubuntu-2004-vol: Refreshing state... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
# libvirt_volume.ubuntu-2004-vol will be destroyed
- resource "libvirt_volume" "ubuntu-2004-vol" {
- format = "qcow2" -> null
- id = "/var/lib/libvirt/images/ubuntu-2004-vol" -> null
- name = "ubuntu-2004-vol" -> null
- pool = "default" -> null
- size = 2361393152 -> null
- source = "ubuntu-20.04.img" -> null
}
Plan: 0 to add, 0 to change, 1 to destroy.
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
libvirt_volume.ubuntu-2004-vol: Destroying... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_volume.ubuntu-2004-vol: Destruction complete after 0s
Destroy complete! Resources: 1 destroyed.
verify
$ virsh -c qemu:///system vol-list default
Name Path
----------------------------------------------------------
reminder: always destroy!
Domain
Believe it or not, we are halfway to our first VM!
We need to create a libvirt domain resource.
Domain.tf
cat > Domain.tf <<EOF
resource "libvirt_domain" "ubuntu-2004-vm" {
name = "ubuntu-2004-vm"
memory = "2048"
vcpu = 1
disk {
volume_id = libvirt_volume.ubuntu-2004-vol.id
}
}
EOF
Apply the new tf plan
terraform plan -out terraform.out
terraform apply terraform.out
$ terraform show
# libvirt_domain.ubuntu-2004-vm:
resource "libvirt_domain" "ubuntu-2004-vm" {
arch = "x86_64"
autostart = false
disk = [
{
block_device = ""
file = ""
scsi = false
url = ""
volume_id = "/var/lib/libvirt/images/ubuntu-2004-vol"
wwn = ""
},
]
emulator = "/usr/bin/qemu-system-x86_64"
fw_cfg_name = "opt/com.coreos/config"
id = "3a4a2b44-5ecd-433c-8645-9bccc831984f"
machine = "pc"
memory = 2048
name = "ubuntu-2004-vm"
qemu_agent = false
running = true
vcpu = 1
}
# libvirt_volume.ubuntu-2004-vol:
resource "libvirt_volume" "ubuntu-2004-vol" {
format = "qcow2"
id = "/var/lib/libvirt/images/ubuntu-2004-vol"
name = "ubuntu-2004-vol"
pool = "default"
size = 2361393152
source = "ubuntu-20.04.img"
}
Verify via virsh:
$ virsh -c qemu:///system list
Id Name State
--------------------------------
3 ubuntu-2004-vm running
Destroy them!
$ terraform destroy
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
libvirt_domain.ubuntu-2004-vm: Destroying... [id=3a4a2b44-5ecd-433c-8645-9bccc831984f]
libvirt_domain.ubuntu-2004-vm: Destruction complete after 0s
libvirt_volume.ubuntu-2004-vol: Destroying... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_volume.ubuntu-2004-vol: Destruction complete after 0s
Destroy complete! Resources: 2 destroyed.
That’s it !
We have successfully created a new VM from a source image that runs on our libvirt environment.
But we cannot connect to, use or do anything with this instance yet. We need to add a few more things, like a network interface, a console output and a default cloud-init file to auto-configure the virtual machine.
Variables
Before continuing with the user-data (cloud-init), it is a good time to set up some terraform variables.
cat > Variables.tf <<EOF
variable "domain" {
description = "The domain/host name of the zone"
default = "ubuntu2004"
}
EOF
We are going to use this variable within the user-data yaml file.
Cloud-init
The best way to configure a newly created virtual machine is via cloud-init and the ability to inject a user-data.yml file into the virtual machine via terraform-libvirt.
user-data
#cloud-config
#disable_root: true
disable_root: false
chpasswd:
list: |
root:ping
expire: False
# Set TimeZone
timezone: Europe/Athens
hostname: "${hostname}"
# PostInstall
runcmd:
# Remove cloud-init
- apt-get -qqy autoremove --purge cloud-init lxc lxd snapd
- apt-get -y --purge autoremove
- apt -y autoclean
- apt -y clean all
cloud init disk
Terraform will read the above template file, render a proper user-data file from it and build a new iso. To use this cloud-init iso, we need to declare it as a libvirt cloud-init resource.
Cloudinit.tf
data "template_file" "user_data" {
template = file("user-data.yml")
vars = {
hostname = var.domain
}
}
resource "libvirt_cloudinit_disk" "cloud-init" {
name = "cloud-init.iso"
user_data = data.template_file.user_data.rendered
}
and we need to modify our Domain.tf accordingly
cloudinit = libvirt_cloudinit_disk.cloud-init.id
Terraform will create and upload this iso disk image into the default libvirt storage pool. And attach it to the virtual machine in the boot process.
At this moment the tf_libvirt directory should look like this:
$ ls -1
Cloudinit.tf
Domain.tf
Provider.tf
ubuntu-20.04.img
user-data.yml
Variables.tf
Volume.tf
To give you an idea, the abstract design is this:
apply
terraform plan -out terraform.out
terraform apply terraform.out
$ terraform show
# data.template_file.user_data:
data "template_file" "user_data" {
id = "cc82a7db4c6498aee21a883729fc4be7b84059d3dec69b92a210e046c67a9a00"
rendered = <<~EOT
#cloud-config
#disable_root: true
disable_root: false
chpasswd:
list: |
root:ping
expire: False
# Set TimeZone
timezone: Europe/Athens
hostname: "ubuntu2004"
# PostInstall
runcmd:
# Remove cloud-init
- apt-get -qqy autoremove --purge cloud-init lxc lxd snapd
- apt-get -y --purge autoremove
- apt -y autoclean
- apt -y clean all
EOT
template = <<~EOT
#cloud-config
#disable_root: true
disable_root: false
chpasswd:
list: |
root:ping
expire: False
# Set TimeZone
timezone: Europe/Athens
hostname: "${hostname}"
# PostInstall
runcmd:
# Remove cloud-init
- apt-get -qqy autoremove --purge cloud-init lxc lxd snapd
- apt-get -y --purge autoremove
- apt -y autoclean
- apt -y clean all
EOT
vars = {
"hostname" = "ubuntu2004"
}
}
# libvirt_cloudinit_disk.cloud-init:
resource "libvirt_cloudinit_disk" "cloud-init" {
id = "/var/lib/libvirt/images/cloud-init.iso;5f5cdc31-1d38-39cb-cc72-971e474ca539"
name = "cloud-init.iso"
pool = "default"
user_data = <<~EOT
#cloud-config
#disable_root: true
disable_root: false
chpasswd:
list: |
root:ping
expire: False
# Set TimeZone
timezone: Europe/Athens
hostname: "ubuntu2004"
# PostInstall
runcmd:
# Remove cloud-init
- apt-get -qqy autoremove --purge cloud-init lxc lxd snapd
- apt-get -y --purge autoremove
- apt -y autoclean
- apt -y clean all
EOT
}
# libvirt_domain.ubuntu-2004-vm:
resource "libvirt_domain" "ubuntu-2004-vm" {
arch = "x86_64"
autostart = false
cloudinit = "/var/lib/libvirt/images/cloud-init.iso;5f5ce077-9508-3b8c-273d-02d44443b79c"
disk = [
{
block_device = ""
file = ""
scsi = false
url = ""
volume_id = "/var/lib/libvirt/images/ubuntu-2004-vol"
wwn = ""
},
]
emulator = "/usr/bin/qemu-system-x86_64"
fw_cfg_name = "opt/com.coreos/config"
id = "3ade5c95-30d4-496b-9bcf-a12d63993cfa"
machine = "pc"
memory = 2048
name = "ubuntu-2004-vm"
qemu_agent = false
running = true
vcpu = 1
}
# libvirt_volume.ubuntu-2004-vol:
resource "libvirt_volume" "ubuntu-2004-vol" {
format = "qcow2"
id = "/var/lib/libvirt/images/ubuntu-2004-vol"
name = "ubuntu-2004-vol"
pool = "default"
size = 2361393152
source = "ubuntu-20.04.img"
}
Lots of output, but let me explain it really quickly:
terraform generates a user-data file from the template (populated with our variables), creates a cloud-init iso, creates a volume disk from the source image, creates a virtual machine with this new volume disk and boots it with the cloud-init iso attached.
Pretty much, that’s it!!!
$ virsh -c qemu:///system vol-list --details default
Name Path Type Capacity Allocation
---------------------------------------------------------------------------------------------
cloud-init.iso /var/lib/libvirt/images/cloud-init.iso file 364.00 KiB 364.00 KiB
ubuntu-2004-vol /var/lib/libvirt/images/ubuntu-2004-vol file 2.20 GiB 537.94 MiB
$ virsh -c qemu:///system list
Id Name State
--------------------------------
1 ubuntu-2004-vm running
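If you are curious about what actually ended up inside the generated iso, you can inspect it on the host; this assumes the isoinfo tool (from the genisoimage/cdrkit package) is available:
$ sudo isoinfo -R -i /var/lib/libvirt/images/cloud-init.iso -f
$ sudo isoinfo -R -i /var/lib/libvirt/images/cloud-init.iso -x /user-data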
destroy
$ terraform destroy -auto-approve
data.template_file.user_data: Refreshing state... [id=cc82a7db4c6498aee21a883729fc4be7b84059d3dec69b92a210e046c67a9a00]
libvirt_volume.ubuntu-2004-vol: Refreshing state... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_cloudinit_disk.cloud-init: Refreshing state... [id=/var/lib/libvirt/images/cloud-init.iso;5f5cdc31-1d38-39cb-cc72-971e474ca539]
libvirt_domain.ubuntu-2004-vm: Refreshing state... [id=3ade5c95-30d4-496b-9bcf-a12d63993cfa]
libvirt_cloudinit_disk.cloud-init: Destroying... [id=/var/lib/libvirt/images/cloud-init.iso;5f5cdc31-1d38-39cb-cc72-971e474ca539]
libvirt_domain.ubuntu-2004-vm: Destroying... [id=3ade5c95-30d4-496b-9bcf-a12d63993cfa]
libvirt_cloudinit_disk.cloud-init: Destruction complete after 0s
libvirt_domain.ubuntu-2004-vm: Destruction complete after 0s
libvirt_volume.ubuntu-2004-vol: Destroying... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_volume.ubuntu-2004-vol: Destruction complete after 0s
Destroy complete! Resources: 3 destroyed.
The most important detail is:
Resources: 3 destroyed.
- cloud-init.iso
- ubuntu-2004-vol
- ubuntu-2004-vm
Console
But there are a few things still missing.
For starters, a console, so we can connect to this virtual machine!
To do that, we need to re-edit Domain.tf and add a console output:
console {
target_type = "serial"
type = "pty"
target_port = "0"
}
console {
target_type = "virtio"
type = "pty"
target_port = "1"
}
The full file should look like this:
resource "libvirt_domain" "ubuntu-2004-vm" {
name = "ubuntu-2004-vm"
memory = "2048"
vcpu = 1
cloudinit = libvirt_cloudinit_disk.cloud-init.id
disk {
volume_id = libvirt_volume.ubuntu-2004-vol.id
}
console {
target_type = "serial"
type = "pty"
target_port = "0"
}
console {
target_type = "virtio"
type = "pty"
target_port = "1"
}
}
Create the VM again with:
terraform plan -out terraform.out
terraform apply terraform.out
And test the console:
$ virsh -c qemu:///system console ubuntu-2004-vm
Connected to domain ubuntu-2004-vm
Escape character is ^] (Ctrl + ])
Wow!
We have actually logged in to this VM using the libvirt console!
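To leave the console, press Ctrl+] as hinted above; the domain keeps running, and you can always check its state from the host:
$ virsh -c qemu:///system dominfo ubuntu-2004-vm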
Virtual Machine
Some interesting details:
root@ubuntu2004:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 2.0G 916M 1.1G 46% /
devtmpfs 998M 0 998M 0% /dev
tmpfs 999M 0 999M 0% /dev/shm
tmpfs 200M 392K 200M 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 999M 0 999M 0% /sys/fs/cgroup
/dev/vda15 105M 3.9M 101M 4% /boot/efi
tmpfs 200M 0 200M 0% /run/user/0
root@ubuntu2004:~# free -hm
total used free shared buff/cache available
Mem: 2.0Gi 73Mi 1.7Gi 0.0Ki 140Mi 1.8Gi
Swap: 0B 0B 0B
root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
Do not forget to destroy
$ terraform destroy -auto-approve
data.template_file.user_data: Refreshing state... [id=cc82a7db4c6498aee21a883729fc4be7b84059d3dec69b92a210e046c67a9a00]
libvirt_volume.ubuntu-2004-vol: Refreshing state... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_cloudinit_disk.cloud-init: Refreshing state... [id=/var/lib/libvirt/images/cloud-init.iso;5f5ce077-9508-3b8c-273d-02d44443b79c]
libvirt_domain.ubuntu-2004-vm: Refreshing state... [id=69f75b08-1e06-409d-9fd6-f45d82260b51]
libvirt_domain.ubuntu-2004-vm: Destroying... [id=69f75b08-1e06-409d-9fd6-f45d82260b51]
libvirt_domain.ubuntu-2004-vm: Destruction complete after 0s
libvirt_cloudinit_disk.cloud-init: Destroying... [id=/var/lib/libvirt/images/cloud-init.iso;5f5ce077-9508-3b8c-273d-02d44443b79c]
libvirt_volume.ubuntu-2004-vol: Destroying... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_cloudinit_disk.cloud-init: Destruction complete after 0s
libvirt_volume.ubuntu-2004-vol: Destruction complete after 0s
Destroy complete! Resources: 3 destroyed.
extend the volume disk
As mentioned above, the volume's disk size is exactly the same as the original source image.
In our case it's ~2G.
What we need to do is use the source image as the base for a new volume disk. In this new volume disk, we can declare the size we need.
I would like to make this a user variable: Variables.tf
variable "vol_size" {
description = "The mac & iP address for this VM"
# 10G
default = 10 * 1024 * 1024 * 1024
}
Arithmetic in terraform!!
So the Volume.tf should be:
resource "libvirt_volume" "ubuntu-2004-base" {
name = "ubuntu-2004-base"
pool = "default"
#source = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-disk-kvm.img"
source = "ubuntu-20.04.img"
format = "qcow2"
}
resource "libvirt_volume" "ubuntu-2004-vol" {
name = "ubuntu-2004-vol"
pool = "default"
base_volume_id = libvirt_volume.ubuntu-2004-base.id
size = var.vol_size
}
base image –> volume image
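After an apply, the relationship between the base image and the new volume is visible in the qcow2 backing chain (the path assumes the default storage pool location):
$ sudo qemu-img info --backing-chain /var/lib/libvirt/images/ubuntu-2004-vol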
test it
terraform plan -out terraform.out
terraform apply terraform.out
$ virsh -c qemu:///system console ubuntu-2004-vm
Connected to domain ubuntu-2004-vm
Escape character is ^] (Ctrl + ])
ubuntu2004 login: root
Password:
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-1021-kvm x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Sat Sep 12 18:27:46 EEST 2020
System load: 0.29 Memory usage: 6% Processes: 66
Usage of /: 9.3% of 9.52GB Swap usage: 0% Users logged in: 0
0 updates can be installed immediately.
0 of these updates are security updates.
Failed to connect to https://changelogs.ubuntu.com/meta-release-lts. Check your Internet connection or proxy settings
Last login: Sat Sep 12 18:26:37 EEST 2020 on ttyS0
root@ubuntu2004:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/root 9.6G 912M 8.7G 10% /
root@ubuntu2004:~#
10G !
destroy
terraform destroy -auto-approve
Swap
I would like to have a swap partition, and I will use cloud-init to create it.
modify user-data.yml
# Create swap partition
swap:
filename: /swap.img
size: "auto"
maxsize: 2G
test it
terraform plan -out terraform.out && terraform apply terraform.out
$ virsh -c qemu:///system console ubuntu-2004-vm
Connected to domain ubuntu-2004-vm
Escape character is ^] (Ctrl + ])
root@ubuntu2004:~# free -hm
total used free shared buff/cache available
Mem: 2.0Gi 86Mi 1.7Gi 0.0Ki 188Mi 1.8Gi
Swap: 2.0Gi 0B 2.0Gi
root@ubuntu2004:~#
success !!
terraform destroy -auto-approve
Network
How about internet? Network?
Yes, what about it?
I guess you need to connect to the internets; okay then.
The easiest way is to add this to your Domain.tf:
network_interface {
network_name = "default"
}
This will use the default libvirt network resource:
$ virsh -c qemu:///system net-list
Name State Autostart Persistent
----------------------------------------------------
default active yes yes
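If you want to see what this default network actually provides (bridge name, NAT, dhcp range), you can dump its XML definition:
$ virsh -c qemu:///system net-dumpxml default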
If you prefer to directly expose your VM to your local network (be very careful!), then replace the above with a macvtap interface. If your ISP router provides dhcp, your VM will get a random IP from your router.
network_interface {
macvtap = "eth0"
}
test it
terraform plan -out terraform.out && terraform apply terraform.out
$ virsh -c qemu:///system console ubuntu-2004-vm
Connected to domain ubuntu-2004-vm
Escape character is ^] (Ctrl + ])
root@ubuntu2004:~#
root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:36:66:96 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.228/24 brd 192.168.122.255 scope global dynamic ens3
valid_lft 3544sec preferred_lft 3544sec
inet6 fe80::5054:ff:fe36:6696/64 scope link
valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
root@ubuntu2004:~# ping -c 5 google.com
PING google.com (172.217.23.142) 56(84) bytes of data.
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=1 ttl=115 time=43.4 ms
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=2 ttl=115 time=43.9 ms
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=3 ttl=115 time=43.0 ms
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=4 ttl=115 time=43.1 ms
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=5 ttl=115 time=43.4 ms
--- google.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 42.977/43.346/43.857/0.311 ms
root@ubuntu2004:~#
destroy
$ terraform destroy -auto-approve
Destroy complete! Resources: 4 destroyed.
SSH
Okay, now that we have network, it is possible to set up ssh access to our virtual machine and also auto-create a user. I would like cloud-init to get my public key from github and set up my user.
disable_root: true
ssh_pwauth: no
users:
- name: ebal
ssh_import_id:
- gh:ebal
shell: /bin/bash
sudo: ALL=(ALL) NOPASSWD:ALL
write_files:
- path: /etc/ssh/sshd_config
content: |
AcceptEnv LANG LC_*
AllowUsers ebal
ChallengeResponseAuthentication no
Compression NO
MaxSessions 3
PasswordAuthentication no
PermitRootLogin no
Port "${sshdport}"
PrintMotd no
Subsystem sftp /usr/lib/openssh/sftp-server
UseDNS no
UsePAM yes
X11Forwarding no
Notice that I have added a new variable called sshdport.
Variables.tf
variable "ssh_port" {
description = "The sshd port of the VM"
default = 12345
}
And do not forget to update your cloud-init tf:
Cloudinit.tf
data "template_file" "user_data" {
template = file("user-data.yml")
vars = {
hostname = var.domain
sshdport = var.ssh_port
}
}
resource "libvirt_cloudinit_disk" "cloud-init" {
name = "cloud-init.iso"
user_data = data.template_file.user_data.rendered
}
Update VM
I would also like to update the system & install specific packages on this virtual machine:
# Install packages
packages:
- figlet
- mlocate
- python3-apt
- bash-completion
- ncdu
# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true
# PostInstall
runcmd:
- figlet ${hostname} > /etc/motd
- updatedb
# Firewall
- ufw allow "${sshdport}"/tcp && ufw enable
# Remove cloud-init
- apt-get -y autoremove --purge cloud-init lxc lxd snapd
- apt-get -y --purge autoremove
- apt -y autoclean
- apt -y clean all
Yes, I prefer to uninstall cloud-init at the end.
user-data.yml
The entire user-data.yml looks like this:
#cloud-config
disable_root: true
ssh_pwauth: no
users:
- name: ebal
ssh_import_id:
- gh:ebal
shell: /bin/bash
sudo: ALL=(ALL) NOPASSWD:ALL
write_files:
- path: /etc/ssh/sshd_config
content: |
AcceptEnv LANG LC_*
AllowUsers ebal
ChallengeResponseAuthentication no
Compression NO
MaxSessions 3
PasswordAuthentication no
PermitRootLogin no
Port "${sshdport}"
PrintMotd no
Subsystem sftp /usr/lib/openssh/sftp-server
UseDNS no
UsePAM yes
X11Forwarding no
# Set TimeZone
timezone: Europe/Athens
hostname: "${hostname}"
# Create swap partition
swap:
filename: /swap.img
size: "auto"
maxsize: 2G
# Install packages
packages:
- figlet
- mlocate
- python3-apt
- bash-completion
- ncdu
# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true
# PostInstall
runcmd:
- figlet ${hostname} > /etc/motd
- updatedb
# Firewall
- ufw allow "${sshdport}"/tcp && ufw enable
# Remove cloud-init
- apt-get -y autoremove --purge cloud-init lxc lxd snapd
- apt-get -y --purge autoremove
- apt -y autoclean
- apt -y clean all
Output
We need to know the IP to log in, so let's create a new terraform file to get the IP.
Output.tf
output "IP" {
value = libvirt_domain.ubuntu-2004-vm.network_interface.0.addresses
}
But that means we need to wait for the dhcp lease. Modify Domain.tf to tell terraform to wait:
network_interface {
network_name = "default"
wait_for_lease = true
}
Plan & Apply
$ terraform plan -out terraform.out && terraform apply terraform.out
Outputs:
IP = [
"192.168.122.79",
]
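The output is kept in the terraform state, so you can always ask for it again later without re-applying:
$ terraform output IP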
Verify
$ ssh 192.168.122.79 -p 12345 uptime
19:33:46 up 2 min, 0 users, load average: 0.95, 0.37, 0.14
$ ssh 192.168.122.79 -p 12345
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-1023-kvm x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Sat Sep 12 19:34:45 EEST 2020
System load: 0.31 Processes: 72
Usage of /: 33.1% of 9.52GB Users logged in: 0
Memory usage: 7% IPv4 address for ens3: 192.168.122.79
Swap usage: 0%
0 updates can be installed immediately.
0 of these updates are security updates.
        _                 _         ____   ___   ___  _  _
  _   _| |__  _   _ _ __ | |_ _   _|___ \ / _ \ / _ \| || |
 | | | | '_ \| | | | '_ \| __| | | | __) | | | | | | | || |_
 | |_| | |_) | |_| | | | | |_| |_| |/ __/| |_| | |_| |__   _|
  \__,_|_.__/ \__,_|_| |_|\__|\__,_|_____|\___/ \___/   |_|
Last login: Sat Sep 12 19:34:37 2020 from 192.168.122.1
ebal@ubuntu2004:~$
ebal@ubuntu2004:~$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/root 9.6G 3.2G 6.4G 34% /
ebal@ubuntu2004:~$ free -hm
total used free shared buff/cache available
Mem: 2.0Gi 91Mi 1.7Gi 0.0Ki 197Mi 1.8Gi
Swap: 2.0Gi 0B 2.0Gi
ebal@ubuntu2004:~$ ping -c 5 libreops.cc
PING libreops.cc (185.199.108.153) 56(84) bytes of data.
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=1 ttl=55 time=48.4 ms
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=2 ttl=55 time=48.7 ms
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=3 ttl=55 time=48.5 ms
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=4 ttl=55 time=48.3 ms
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=5 ttl=55 time=52.8 ms
--- libreops.cc ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 48.266/49.319/52.794/1.743 ms
what !!!!
awesome
destroy
terraform destroy -auto-approve
Custom Network
One last thing I would like to discuss is how to create a new network and provide a specific IP to your VM. This will separate your VMs/lab, and it is cheap & easy to set up a new libvirt network.
Network.tf
resource "libvirt_network" "tf_net" {
name = "tf_net"
domain = "libvirt.local"
addresses = ["192.168.123.0/24"]
dhcp {
enabled = true
}
dns {
enabled = true
}
}
and replace the network_interface in Domain.tf:
network_interface {
network_id = libvirt_network.tf_net.id
network_name = "tf_net"
addresses = ["192.168.123.${var.IP_addr}"]
mac = "52:54:00:b2:2f:${var.IP_addr}"
wait_for_lease = true
}
Look closely: there is a new terraform variable.
Variables.tf
variable "IP_addr" {
description = "The mac & iP address for this VM"
default = 33
}
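Since it is a plain terraform variable, you can also override it per run without touching the file; for example, to bring the VM up on .44 instead (keep it a two-digit value so it also fits into the mac address octet):
$ terraform plan -out terraform.out -var IP_addr=44 && terraform apply terraform.out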
$ terraform plan -out terraform.out && terraform apply terraform.out
Outputs:
IP = [
"192.168.123.33",
]
$ ssh 192.168.123.33 -p 12345
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-1021-kvm x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information disabled due to load higher than 1.0
12 updates can be installed immediately.
8 of these updates are security updates.
To see these additional updates run: apt list --upgradable
Last login: Sat Sep 12 19:56:33 2020 from 192.168.123.1
ebal@ubuntu2004:~$ ip addr show ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:b2:2f:33 brd ff:ff:ff:ff:ff:ff
inet 192.168.123.33/24 brd 192.168.123.255 scope global dynamic ens3
valid_lft 3491sec preferred_lft 3491sec
inet6 fe80::5054:ff:feb2:2f33/64 scope link
valid_lft forever preferred_lft forever
ebal@ubuntu2004:~$
Terraform files
You can find every terraform example in my github repo:
tf/0.13/libvirt/0.6.2/ubuntu/20.04 at master · ebal/tf · GitHub
That’s it!
If you like this article, consider following me on twitter ebalaskas.
It is a known fact that my favorite hosting provider is edis. I've seen them improving their services all these years, without forgetting their customers. Their support is great and I am really happy working with them.
That said, they don't offer (yet) a public infrastructure API like hetzner, linode or digitalocean, but they do offer an Auto Installer option to configure your VPS via a post-install shell script, add your ssh key and select your base OS image.
I have been experimenting with this option for the last few weeks, and I wanted to use my current cloud-init configuration file without making many changes. The goal is to produce a VPS that, when finished, will be ready to accept my ansible roles without any additional change or even logging in to this VPS.
So here is my current solution on how to use the post-install option to provide my current cloud-init configuration!
cloud-init
Josh Powers @ DebConf17
I will not get into cloud-init details in this blog post, but tl;dr: it has stages, it has modules, you provide your own user-data file (yaml) and it supports datasources. All these things are for telling cloud-init what to do, what to configure and when to configure it (in which stage).
NoCloud Seed
I am going to use the NoCloud datasource for this experiment.
So I need to configure these two (2) files:
/var/lib/cloud/seed/nocloud-net/meta-data
/var/lib/cloud/seed/nocloud-net/user-data
Install cloud-init
My first entry in the post-install shell script should be
apt-get update && apt-get install cloud-init
Thus I can be sure of two (2) things: first, the VPS already has network access and I don't need to configure it; second, the cloud-init software gets installed, just to be sure it is there.
Variables
I try to keep my user-data file very simple, but I would like to configure the hostname and the sshd port.
HOSTNAME="complimentary"
SSHDPORT=22422
Users
Add a single user, provide a public ssh key for authentication and enable sudo access to this user.
users:
- name: ebal
ssh_import_id:
- gh:ebal
shell: /bin/bash
sudo: ALL=(ALL) NOPASSWD:ALL
Hardening SSH
- Change sshd port
- Disable Root
- Disconnect Idle Sessions
- Disable Password Auth
- Disable X Forwarding
- Allow only User (or group)
write_files:
- path: /etc/ssh/sshd_config
content: |
Port $SSHDPORT
PermitRootLogin no
ChallengeResponseAuthentication no
ClientAliveInterval 600
ClientAliveCountMax 3
UsePAM yes
UseDNS no
X11Forwarding no
PrintMotd no
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
PasswordAuthentication no
AllowUsers ebal
enable firewall
ufw allow $SSHDPORT/tcp && ufw enable
remove cloud-init
And last but not least, I need to remove cloud-init at the end:
apt-get -y autoremove --purge cloud-init lxc lxd snapd
Post Install Shell script
Let's put everything together:
#!/bin/bash
apt-get update && apt-get install cloud-init
HOSTNAME="complimentary"
SSHDPORT=22422
mkdir -p /var/lib/cloud/seed/nocloud-net
# Meta Data
cat > /var/lib/cloud/seed/nocloud-net/meta-data <<EOF
dsmode: local
EOF
# User Data
cat > /var/lib/cloud/seed/nocloud-net/user-data <<EOF
#cloud-config
disable_root: true
ssh_pwauth: no
users:
- name: ebal
ssh_import_id:
- gh:ebal
shell: /bin/bash
sudo: ALL=(ALL) NOPASSWD:ALL
write_files:
- path: /etc/ssh/sshd_config
content: |
Port $SSHDPORT
PermitRootLogin no
ChallengeResponseAuthentication no
UsePAM yes
UseDNS no
X11Forwarding no
PrintMotd no
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
PasswordAuthentication no
AllowUsers ebal
# Set TimeZone
timezone: Europe/Athens
hostname: $HOSTNAME
# Install packages
packages:
- figlet
- mlocate
- python3-apt
- bash-completion
# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true
# PostInstall
runcmd:
- figlet $HOSTNAME > /etc/motd
- updatedb
# Firewall
- ufw allow $SSHDPORT/tcp && ufw enable
# Remove cloud-init
- apt-get -y autoremove --purge cloud-init lxc lxd snapd
- apt-get -y --purge autoremove
- apt -y autoclean
- apt -y clean all
EOF
cloud-init clean --logs
cloud-init init --local
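If something looks off after the first boot, cloud-init itself can tell you where it is and how long each stage took; run these on the VPS before cloud-init gets purged:
$ cloud-init status --long
$ cloud-init analyze show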
That’s it !
After a while (it needs a few reboots) our VPS is up & running and we can use ansible to configure it.
Ansible is wonderful software for automatically configuring your systems. The default mode of using ansible is the Push Model.
That means that from your box, using only ssh + python, you can configure your fleet of machines.
Ansible is imperative. You define tasks in your playbooks and roles, and they run in a serial manner on the remote machines. Each task first checks whether it needs to run, otherwise it skips the action. And although we can use conditionals to skip actions, tasks still perform all their checks. For that reason ansible seems slow compared to other configuration tools. Ansible runs tasks in serial mode but in pseudo-parallel mode against the remote servers, to increase speed. But sometimes you need to gather_facts and that costs execution time. There are solutions for caching the ansible facts in redis (an in-memory key:value db), but even then you need to find work-arounds to speed up your deployments.
But there is another way, the Pull Mode!
Useful Reading Materials
To learn more on the subject, you can start by reading these two articles on ansible-pull.
Pull Mode
So here is how it looks:
You will first notice that your ansible repository has moved from your local machine to an online git repository. For me, this is GitLab. As my git repo is private, I have created a read-only, time-limited Deploy Token.
In that scenario, our (ephemeral or not) VMs will pull their ansible configuration from the git repo and run the tasks locally. I usually build my infrastructure with Terraform by HashiCorp and take advantage of cloud-init to perform their initial configuration.
Cloud-init
The tail of my user-data.yml looks pretty much like this:
...
# Install packages
packages:
- ansible
# Run ansible-pull
runcmd:
- ansible-pull -U https://gitlab+deploy-token-XXXXX:YYYYYYYY@gitlab.com/username/myrepo.git
Playbook
You can either create a playbook named after the hostname of the remote server, e.g. node1.yml, or use local.yml as the default playbook name.
Here is an example that will also put ansible-pull into a cron entry. This is very useful because it will check for any changes in the git repo every 15 minutes and run ansible again.
- hosts: localhost
tasks:
- name: Ensure ansible-pull is running every 15 minutes
cron:
name: "ansible-pull"
minute: "15"
job: "ansible-pull -U https://gitlab+deploy-token-XXXXX:YYYYYYYY@gitlab.com/username/myrepo.git &> /dev/null"
- name: Create a custom local vimrc file
lineinfile:
path: /etc/vim/vimrc.local
line: 'set modeline'
create: yes
- name: Remove "cloud-init" package
apt:
name: "cloud-init"
purge: yes
state: absent
- name: Remove useless packages from the cache
apt:
autoclean: yes
- name: Remove dependencies that are no longer required
apt:
autoremove: yes
# vim: sts=2 sw=2 ts=2 et
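For debugging, ansible-pull can also be run by hand on the node; the --only-if-changed flag skips the run when the git repo has not changed, which is handy if you trigger it often (the deploy token and repo URL below are of course placeholders):
$ ansible-pull --only-if-changed -U https://gitlab+deploy-token-XXXXX:YYYYYYYY@gitlab.com/username/myrepo.git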
Using Terraform by HashiCorp and cloud-init on the Hetzner cloud provider.
Nowadays, with the help of modern tools, we treat our infrastructure as code. This approach is very useful because we can have an immutable design for our infra by declaring the state we would like our infra to be in. It also provides us with flexibility and a more generic way to handle our infra as lego bricks, especially when scaling.
UPDATE: 2019.01.22
Hetzner
We need to create an Access API Token within a new project under the hetzner cloud console.
Copy this token; with that in place, we can continue with terraform.
For the purposes of this article, I am going to use as the API token: 01234567890
Install Terraform
The latest terraform version at the time of writing this blog post is v0.11.11.
$ curl -sL https://releases.hashicorp.com/terraform/0.11.11/terraform_0.11.11_linux_amd64.zip |
bsdtar -xf- && chmod +x terraform
$ sudo mv terraform /usr/local/bin/
and verify it
$ terraform version
Terraform v0.11.11
Terraform Provider for Hetzner Cloud
To use the hetzner cloud via terraform, we need the terraform-provider-hcloud plugin.
hcloud is part of the terraform providers repository, so the first time we initialize our project, terraform will download this plugin locally.
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "hcloud" (1.7.0)...
...
* provider.hcloud: version = "~> 1.7"
Compile hcloud
If you like, you can always build hcloud from the source code.
There are notes on how to build the plugin here Terraform Hetzner Cloud provider.
GitLab CI
or you can even download the artifact from my gitlab-ci repo.
Plugin directory
You will find the terraform hcloud plugin under your current directory:
./.terraform/plugins/linux_amd64/terraform-provider-hcloud_v1.7.0_x4
I prefer to keep the tf plugins centralized under my home directory:
$ mkdir -pv ~/.terraform.d/plugins/linux_amd64/
$ mv ./.terraform/plugins/linux_amd64/terraform-provider-hcloud_v1.7.0_x4 ~/.terraform.d/plugins/linux_amd64/terraform-provider-hcloud
or if you choose the artifact from gitlab:
$ curl -sL -o ~/.terraform.d/plugins/linux_amd64/terraform-provider-hcloud https://gitlab.com/ebal/terraform-provider-hcloud-ci/-/jobs/artifacts/master/raw/bin/terraform-provider-hcloud?job=run-build
That said, when working with multiple terraform projects you may be in a position where you need different versions of the same tf-plugin. In that case it is better to have them under your current working directory/project instead of your home directory. Perhaps one project needs v1.2.3 and another v4.5.6 of the same tf-plugin.
Hetzner Cloud API
Here are a few examples of how to use the Hetzner Cloud API:
$ export -p API_TOKEN="01234567890"
$ curl -sH "Authorization: Bearer $API_TOKEN" https://api.hetzner.cloud/v1/datacenters | jq -r .datacenters[].name
fsn1-dc8
nbg1-dc3
hel1-dc2
fsn1-dc14
$ curl -sH "Authorization: Bearer $API_TOKEN" https://api.hetzner.cloud/v1/locations | jq -r .locations[].name
fsn1
nbg1
hel1
$ curl -sH "Authorization: Bearer $API_TOKEN" https://api.hetzner.cloud/v1/images | jq -r .images[].name
ubuntu-16.04
debian-9
centos-7
fedora-27
ubuntu-18.04
fedora-28
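The same API also exposes the available server types, which is how you can pick the cx11 flavor used below:
$ curl -sH "Authorization: Bearer $API_TOKEN" https://api.hetzner.cloud/v1/server_types | jq -r .server_types[].name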
hetzner.tf
At this point, we are ready to write our terraform file.
It can be as simple as this (CentOS 7):
# Set the variable value in *.tfvars file
# or using -var="hcloud_token=..." CLI option
variable "hcloud_token" {}
# Configure the Hetzner Cloud Provider
provider "hcloud" {
token = "${var.hcloud_token}"
}
# Create a new server running centos
resource "hcloud_server" "node1" {
name = "node1"
image = "centos-7"
server_type = "cx11"
}
Project_Ebal
or a more complex config: Ubuntu 18.04 LTS
# Project_Ebal
variable "hcloud_token" {}
# Configure the Hetzner Cloud Provider
provider "hcloud" {
token = "${var.hcloud_token}"
}
# Create a new server running ubuntu
resource "hcloud_server" "ebal_project" {
name = "ebal_project"
image = "ubuntu-18.04"
server_type = "cx11"
location = "nbg1"
}
Repository Structure
Although in this blog post we have a small and simple example of using hetzner cloud with terraform, on larger projects it is usually best to have separate terraform files for variables, code and output. For more info, you can take a look here: VCS Repository Structure - Workspaces
├── variables.tf
├── main.tf
├── outputs.tf
Cloud-init
Using cloud-init with hetzner is very simple.
We just need to add the declaration user_data = "${file("user-data.yml")}"
to our terraform file.
So our previous tf is now this:
# Project_Ebal
variable "hcloud_token" {}
# Configure the Hetzner Cloud Provider
provider "hcloud" {
token = "${var.hcloud_token}"
}
# Create a new server running ubuntu
resource "hcloud_server" "ebal_project" {
name = "ebal_project"
image = "ubuntu-18.04"
server_type = "cx11"
location = "nbg1"
user_data = "${file("user-data.yml")}"
}
To get the IP address of the virtual machine, I would also like to have an output declaration:
output "ipv4_address" {
value = "${hcloud_server.ebal_project.ipv4_address}"
}
Cloud-init
You will find more notes on cloud-init in a previous blog post: Cloud-init with CentOS 7.
Below is an example of user-data.yml:
#cloud-config
disable_root: true
ssh_pwauth: no
users:
- name: ubuntu
ssh_import_id:
- gh:ebal
shell: /bin/bash
sudo: ALL=(ALL) NOPASSWD:ALL
# Set TimeZone
timezone: Europe/Athens
# Install packages
packages:
- mlocate
- vim
- figlet
# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true
# Remove cloud-init
runcmd:
- figlet Project_Ebal > /etc/motd
- updatedb
Terraform
The first thing to do with terraform is to initialize our environment.
Init
$ terraform init
Initializing provider plugins...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Plan
Of course it is not necessary to run plan and then plan again with -out.
You can skip this step; it exists here only for documentation purposes.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ hcloud_server.ebal_project
id: <computed>
backup_window: <computed>
backups: "false"
datacenter: <computed>
image: "ubuntu-18.04"
ipv4_address: <computed>
ipv6_address: <computed>
ipv6_network: <computed>
keep_disk: "false"
location: "nbg1"
name: "ebal_project"
server_type: "cx11"
status: <computed>
user_data: "sk6134s+ys+wVdGITc+zWhbONYw="
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
Out
$ terraform plan -out terraform.tfplan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ hcloud_server.ebal_project
id: <computed>
backup_window: <computed>
backups: "false"
datacenter: <computed>
image: "ubuntu-18.04"
ipv4_address: <computed>
ipv6_address: <computed>
ipv6_network: <computed>
keep_disk: "false"
location: "nbg1"
name: "ebal_project"
server_type: "cx11"
status: <computed>
user_data: "sk6134s+ys+wVdGITc+zWhbONYw="
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
This plan was saved to: terraform.tfplan
To perform exactly these actions, run the following command to apply:
terraform apply "terraform.tfplan"
Apply
$ terraform apply "terraform.tfplan"
hcloud_server.ebal_project: Creating...
backup_window: "" => "<computed>"
backups: "" => "false"
datacenter: "" => "<computed>"
image: "" => "ubuntu-18.04"
ipv4_address: "" => "<computed>"
ipv6_address: "" => "<computed>"
ipv6_network: "" => "<computed>"
keep_disk: "" => "false"
location: "" => "nbg1"
name: "" => "ebal_project"
server_type: "" => "cx11"
status: "" => "<computed>"
user_data: "" => "sk6134s+ys+wVdGITc+zWhbONYw="
hcloud_server.ebal_project: Still creating... (10s elapsed)
hcloud_server.ebal_project: Still creating... (20s elapsed)
hcloud_server.ebal_project: Creation complete after 23s (ID: 1676988)
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
ipv4_address = 1.2.3.4
SSH and verify cloud-init
$ ssh 1.2.3.4 -l ubuntu
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-43-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Fri Jan 18 12:17:14 EET 2019
System load: 0.41 Processes: 89
Usage of /: 9.7% of 18.72GB Users logged in: 0
Memory usage: 8% IP address for eth0: 1.2.3.4
Swap usage: 0%
0 packages can be updated.
0 updates are security updates.
Destroy
Be careful: without providing a specific terraform plan, terraform will destroy every resource within your working directory/project. So it is always good practice to explicitly destroy a specific resource/tfplan.
$ terraform destroy
should better be:
$ terraform destroy -out terraform.tfplan
hcloud_server.ebal_project: Refreshing state... (ID: 1676988)
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
- hcloud_server.ebal_project
Plan: 0 to add, 0 to change, 1 to destroy.
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
hcloud_server.ebal_project: Destroying... (ID: 1676988)
hcloud_server.ebal_project: Destruction complete after 1s
Destroy complete! Resources: 1 destroyed.
That’s it !
Cloud-init is the de facto multi-distribution package that handles early initialization of a cloud instance.
This article is a mini-HowTo on using cloud-init with CentOS 7 in your own libvirt qemu/kvm lab, instead of using a public cloud provider.
How Cloud-init works
Josh Powers @ DebConf17
How does it really work?
Cloud-init has Boot Stages
- Generator
- Local
- Network
- Config
- Final
and it supports modules to extend configuration and functionality.
Here is a brief list of modules (sorted by name):
- bootcmd
- final-message
- growpart
- keys-to-console
- locale
- migrator
- mounts
- package-update-upgrade-install
- phone-home
- power-state-change
- puppet
- resizefs
- rsyslog
- runcmd
- scripts-per-boot
- scripts-per-instance
- scripts-per-once
- scripts-user
- set_hostname
- set-passwords
- ssh
- ssh-authkey-fingerprints
- timezone
- update_etc_hosts
- update_hostname
- users-groups
- write-files
- yum-add-repo
Gist
Cloud-init example using a Generic Cloud CentOS-7 on a libvirtd qmu/kvm lab · GitHub
Generic Cloud CentOS 7
You can find a plethora of centos7 cloud images here:
Download the latest version
$ curl -LO http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2.xz
Uncompress file
$ xz -v --keep -d CentOS-7-x86_64-GenericCloud.qcow2.xz
Check cloud image
$ qemu-img info CentOS-7-x86_64-GenericCloud.qcow2
image: CentOS-7-x86_64-GenericCloud.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 863M
cluster_size: 65536
Format specific information:
compat: 0.10
refcount bits: 16
The default image is 8G.
If you need to resize it, check below in this article.
Create metadata file
meta-data is data that comes from the cloud provider itself. In this example, I will use a static network configuration.
cat > meta-data <<EOF
instance-id: testingcentos7
local-hostname: testingcentos7
network-interfaces: |
iface eth0 inet static
address 192.168.122.228
network 192.168.122.0
netmask 255.255.255.0
broadcast 192.168.122.255
gateway 192.168.122.1
# vim:syntax=yaml
EOF
Create cloud-init (user-data) file
user-data is data that comes from you, aka the user.
cat > user-data <<EOF
#cloud-config
# Set default user and their public ssh key
# eg. https://github.com/ebal.keys
users:
- name: ebal
ssh-authorized-keys:
- `curl -s -L https://github.com/ebal.keys`
sudo: ALL=(ALL) NOPASSWD:ALL
# Enable cloud-init modules
cloud_config_modules:
- resolv_conf
- runcmd
- timezone
- package-update-upgrade-install
# Set TimeZone
timezone: Europe/Athens
# Set DNS
manage_resolv_conf: true
resolv_conf:
nameservers: ['9.9.9.9']
# Install packages
packages:
- mlocate
- vim
- epel-release
# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true
# Remove cloud-init
runcmd:
- yum -y remove cloud-init
- updatedb
# Configure where output will go
output:
all: ">> /var/log/cloud-init.log"
# vim:syntax=yaml
EOF
Create the cloud-init ISO
When using libvirt with qemu/kvm, the most common way to pass the meta-data/user-data to cloud-init is through an iso (cdrom).
$ genisoimage -output cloud-init.iso -volid cidata -joliet -rock user-data meta-data
or
$ mkisofs -o cloud-init.iso -V cidata -J -r user-data meta-data
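The NoCloud datasource finds this cdrom by its volume label, so it is worth confirming that the iso was really built with the cidata volid:
$ isoinfo -d -i cloud-init.iso | grep -i 'volume id'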
Provision new virtual machine
Finally run this as root:
# virt-install \
  --name centos7_test \
  --memory 2048 \
  --vcpus 1 \
  --metadata description="My centos7 cloud-init test" \
  --import \
  --disk CentOS-7-x86_64-GenericCloud.qcow2,format=qcow2,bus=virtio \
  --disk cloud-init.iso,device=cdrom \
  --network bridge=virbr0,model=virtio \
  --os-type=linux \
  --os-variant=centos7.0 \
  --noautoconsole
The List of Os Variants
There is an interesting command to find out all the os variants that are being supported by libvirt in your lab:
eg. CentOS
$ osinfo-query os | grep CentOS
centos6.0 | CentOS 6.0 | 6.0 | http://centos.org/centos/6.0
centos6.1 | CentOS 6.1 | 6.1 | http://centos.org/centos/6.1
centos6.2 | CentOS 6.2 | 6.2 | http://centos.org/centos/6.2
centos6.3 | CentOS 6.3 | 6.3 | http://centos.org/centos/6.3
centos6.4 | CentOS 6.4 | 6.4 | http://centos.org/centos/6.4
centos6.5 | CentOS 6.5 | 6.5 | http://centos.org/centos/6.5
centos6.6 | CentOS 6.6 | 6.6 | http://centos.org/centos/6.6
centos6.7 | CentOS 6.7 | 6.7 | http://centos.org/centos/6.7
centos6.8 | CentOS 6.8 | 6.8 | http://centos.org/centos/6.8
centos6.9 | CentOS 6.9 | 6.9 | http://centos.org/centos/6.9
centos7.0 | CentOS 7.0 | 7.0 | http://centos.org/centos/7.0
DHCP
If you are not using a static network configuration scheme, then to identify the IP of your cloud instance, type:
$ virsh net-dhcp-leases default
Expiry Time MAC address Protocol IP address Hostname Client ID or DUID
---------------------------------------------------------------------------------------------------------
2018-11-17 15:40:31 52:54:00:57:79:3e ipv4 192.168.122.144/24 - -
Resize
The easiest way to grow/resize your virtual machine is via qemu-img command:
$ qemu-img resize CentOS-7-x86_64-GenericCloud.qcow2 20G
Image resized.
$ qemu-img info CentOS-7-x86_64-GenericCloud.qcow2
image: CentOS-7-x86_64-GenericCloud.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 870M
cluster_size: 65536
Format specific information:
compat: 0.10
refcount bits: 16
You can add the lines below to your user-data file:
growpart:
mode: auto
devices: ['/']
ignore_growroot_disabled: false
The result:
[root@testingcentos7 ebal]# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 20G 870M 20G 5% /
Default cloud-init.cfg
For reference, this is the default centos7 cloud-init configuration file.
# /etc/cloud/cloud.cfg
users:
- default
disable_root: 1
ssh_pwauth: 0
mount_default_fields: [~, ~, 'auto', 'defaults,nofail', '0', '2']
resize_rootfs_tmp: /dev
ssh_deletekeys: 0
ssh_genkeytypes: ~
syslog_fix_perms: ~
cloud_init_modules:
- migrator
- bootcmd
- write-files
- growpart
- resizefs
- set_hostname
- update_hostname
- update_etc_hosts
- rsyslog
- users-groups
- ssh
cloud_config_modules:
- mounts
- locale
- set-passwords
- rh_subscription
- yum-add-repo
- package-update-upgrade-install
- timezone
- puppet
- chef
- salt-minion
- mcollective
- disable-ec2-metadata
- runcmd
cloud_final_modules:
- rightscale_userdata
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- scripts-user
- ssh-authkey-fingerprints
- keys-to-console
- phone-home
- final-message
- power-state-change
system_info:
default_user:
name: centos
lock_passwd: true
gecos: Cloud User
groups: [wheel, adm, systemd-journal]
sudo: ["ALL=(ALL) NOPASSWD:ALL"]
shell: /bin/bash
distro: rhel
paths:
cloud_dir: /var/lib/cloud
templates_dir: /etc/cloud/templates
ssh_svcname: sshd
# vim:syntax=yaml