In this blog post I will describe the easiest installation of a DoH/DoT VM for personal use, using dnsdist.
Next I will present a full installation example (from start) with dnsdist and PowerDNS.
Server Notes: Ubuntu 18.04
Client Notes: Archlinux
Every {{ }} is a variable you need to change.
Do NOT copy/paste without making the changes.
Log in to the VM and become root:
$ ssh {{ VM }}
$ sudo -i
From now on, we are running commands as root.
TL;DR
dnsdist DoH/DoT
If you just need your own DoH and DoT instance, then dnsdi...
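To give a sense of where this ends up, here is a minimal, hedged sketch of a dnsdist configuration for DoH/DoT; the certificate paths and the backend resolver address are assumptions, and DoH support needs dnsdist 1.4.0 or newer.
cat > /etc/dnsdist/dnsdist.conf <<'EOF'
-- backend resolver (assumption: a local recursor listening on 127.0.0.1:5353)
newServer({address="127.0.0.1:5353"})
-- DNS over TLS on port 853
addTLSLocal("0.0.0.0:853", "/etc/dnsdist/fullchain.pem", "/etc/dnsdist/privkey.pem")
-- DNS over HTTPS on port 443
addDOHLocal("0.0.0.0:443", "/etc/dnsdist/fullchain.pem", "/etc/dnsdist/privkey.pem", "/dns-query")
EOF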
Thank you.
In this blog post you will find my personal notes on how to set up Kubernetes as a Service (KaaS). I will be using Terraform to create the infrastructure on Hetzner’s VMs, Rancher for KaaS and Helm to install the first application on Kubernetes.
Many thanks to my dear friend adamo for his help.
Terraform
Let’s build our infrastructure!
We are going to use terraform to build 5 VMs
- One (1) master
- One (1) etcd
- Two (2) workers
- One (1) for the Web dashboard
I will not go into too much detail about terraform, but to give a basic idea:
Provider.tf
provider "hcloud"
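The snippet is cut off here; for reference, a minimal hcloud provider block usually looks something like the following (the token variable name is an assumption):
cat > Provider.tf <<'EOF'
variable "hcloud_token" {}

provider "hcloud" {
  token = "${var.hcloud_token}"
}
EOF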
Thank you.
LibreDNS has a new endpoint
https://doh.libredns.gr/ads
This new endpoint is unique because it blocks Ads & Trackers by default!
AdBlock
We are currently using Steven Black’s hosts file.
noticeable & mentionable
LibreDNS DOES NOT keep any logs and we are using OpenNIC as TLD Tier1 root NS
Here are my settings
LibreOps & LibreDNS
LibreOps announced a new public service: LibreDNS, a new DoH/DoT (DNS over HTTPS / DNS over TLS) free public service for people who want to bypass DNS restrictions and/or want to use TLS in their DNS queries. Firefox has already collaborated with Cloudflare for this, but I believe we can do better than using a centralized public service run by a for-profit company.
Personal Notes
So here are my personal notes for using LibreDNS in Firefox.
Firefox
Open Preferences/Options
Enable DoH
TRR mode 2
Now the tricky part.
TRR mode is 2 when you enable DoH. What does t...
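For reference, the two about:config preferences behind these steps are network.trr.mode and network.trr.uri; a hedged example pointing at LibreDNS (use the /ads endpoint instead if you also want the ad/tracker blocking):
network.trr.mode = 2
network.trr.uri  = https://doh.libredns.gr/dns-query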
Thank you.
A few days ago CentOS-8 (1905) was released; you can find the details in the ReleaseNotes.
Below is a visual guide on how to net-install CentOS 8 (1905).
The notes are from a qemu-kvm virtual machine.
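The guide below walks through the installer screens; for reference, one hedged way to create such a VM pointed straight at the same mirror with virt-install (name, memory and disk size are assumptions):
virt-install \
  --name centos8 \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20 \
  --location ftp://ftp.otenet.gr/linux/centos/8/BaseOS/x86_64/os/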
Boot
Select Language
Menu
I have marked the next screens. For a netinstall you need to set up the network first.
Time
Network
Disable kdump
Add Repo
ftp.otenet.gr/linux/centos/8/BaseOS/x86_64/os/
Server Installation

Disk
Review
Begin Installation
Root
User
Make this user administrator
Installation
Reboot
Grub
Boot
CentOS-8 (1905)
When using terraform, most of the time you need to reuse your Infrastructure as Code, so your code should be written in such a way. In my (very simple) use case, I need to reuse user-data for cloud-init to set up different VMs, but I do not want to rewrite basic/common things every time. Luckily, we can use the template_file.
user-data.yml
In the yaml file below, you will see that we are using a terraform string template to produce the hostname with this variable:
"${hostname}"
here is the file:
#cloud-config
disable_root:
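A hedged sketch of the terraform side that renders this template, in terraform 0.12 syntax (the file name, resource name and hostname value are assumptions); the rendered result is what gets passed to the VM as user_data:
cat > user-data.tf <<'EOF'
data "template_file" "user_data" {
  template = file("user-data.yml")

  vars = {
    hostname = "myvm"
  }
}
EOF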
Thank you.
This article also has an alternative title:
How I Learned to Stop Worrying and Loved my Team
This is a story of troubleshooting cloud disk volumes (long post).
Cloud Disk Volume
Working with data disk volumes in the cloud has a few benefits. One of them is that when the volume runs out of space, you can just increase it! No need to replace the disk, no need to buy a new one, no need to transfer 1TB of data from one disk to another. It is a very simple matter.
Partitions Vs Disks
My personal opinion is not to use partitions. Cloud data disks on EVS (Elastic Volume Service), or cloud volumes for short, do not need a partition table. You can use the entire disk for data.
Use: /dev/vdb
instead of /dev/vdb1
Filesystem
You have to choose your filesystem carefully. You can use XFS that supports Onlin...
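A hedged example of the whole-disk approach with XFS (the mount point is an assumption); after enlarging the cloud volume, the filesystem can then be grown online:
mkfs.xfs /dev/vdb        # first use only, this destroys existing data
mount /dev/vdb /mnt/data
xfs_growfs /mnt/data     # grow the filesystem online after the volume has been enlarged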
Thank you.
WackoWiki is the wiki of my choice and one of the first open source projects I ever contributed to, and I still use WackoWiki for personal use.
A few days ago, WackoWiki released version 5.5.12. In this blog post I will try to share my experience of installing WackoWiki on a fresh Ubuntu 18.04 LTS.
Ansible Role
I’ve created an example ansible role for WackoWiki, covering the Requirements section below: WackoWiki Ansible Role
Requirements
Ubuntu 18.04.3 LTS
apt -y install \
  php \
  php-common \
  php-bcmath \
  php-ctype \
  php-gd \
  php-iconv \
  php-json \
  php-mbstring \
  php-mysql \
  apache2 \
  libapache2-mod-php \
  mariadb-server \
  unzip
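Besides the packages, WackoWiki will also need a database; a hedged example of preparing one (the database name, user and password are placeholders):
mysql -u root -e "CREATE DATABASE wackowiki CHARACTER SET utf8mb4;"
mysql -u root -e "GRANT ALL PRIVILEGES ON wackowiki.* TO 'wacko'@'localhost' IDENTIFIED BY '{{ password }}';"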
Thank you.
GitLab is my favorite online git hosting provider, and I really love the CI feature (which most of the online project providers are now also starting to support).
Archlinux uses git and you can find everything here: Arch Linux git repositories
There are almost 2500 packages there! There are 6500 in core/extra/community (primary repos) and almost 55k Packages in AUR, the Archlinux User Repository.
We are going to use git to retrieve our PKGBUILD from the Archlinux AUR as an example.
The same can be done with one of the core packages by using the above git repo.
So here is a very simple .gitlab-ci.yml file that we can use to build an archlinux package in gitlab
image: archlinux/base:latest
before_script:
- ...
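To give an idea of where this goes, a hedged sketch of such a pipeline; the package name is a placeholder, dependency handling is left out, and the extra build user is needed because makepkg refuses to run as root:
cat > .gitlab-ci.yml <<'EOF'
image: archlinux/base:latest

before_script:
  - pacman -Syu --noconfirm base-devel git
  - useradd -m builder

build:
  script:
    - git clone https://aur.archlinux.org/{{ package }}.git
    - chown -R builder {{ package }}
    - su builder -c "cd {{ package }}; makepkg --noconfirm"
  artifacts:
    paths:
      - "{{ package }}/*.pkg.tar.xz"
EOF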
Thank you.
MinIO is a high performance object storage server compatible with Amazon S3 APIs
In a previous article, I mentioned minio as an S3 gateway between my system and backblaze b2. I was impressed by minio. So in this blog post, I would like to investigate the primary use of minio as an S3 storage provider!
Install Minio
Minio is also written in Golang. That means we can simply use the static binary executable on our machine.
Download
The latest release of minio is here:
curl -sLO https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
Version
./minio version
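A hedged example of a first standalone run (the credentials and the data path are placeholders; the environment variable names are the ones minio used at the time of writing):
MINIO_ACCESS_KEY={{ access_key }} MINIO_SECRET_KEY={{ secret_key }} ./minio server /data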
Thank you.
- Backblaze - Cloud Storage Backup
- rclone - rsync for cloud storage
- MinIO - Object Storage cloud storage software
- s3cmd - Command Line S3 Client
In this blog post, I will try to write a comprehensive guide on how to use cloud object storage for backup purposes.
Goal
What is Object Storage
In a nutshell object storage software uses commodity hard disks in a distributed way across a cluster of systems.
Why use Object Storage
The main characteristics of object storage are:
- Scalability
- Reliability
- Efficiency
- Performance
- Accessibility
Scalability
We can immediately increase our storage by simpl...
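To make the goal concrete, here is a hedged example of the kind of backup command this guide builds toward, using rclone from the list above (the remote must already be configured with rclone config; paths and names are placeholders):
rclone sync /home/backup remote:bucket/backup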
Thank you.
DSVPN is designed to address the most common use case for using a VPN
Works with TCP, blocks IPv6 leaks, redirect-gateway out-of-the-box!
last updated: 20190810
- iptables rules example added
- change vpn.key to dsvpn.key
- add base64 example for easy copy/transfer across machines
dsvpn binary
I keep a personal gitlab CI for dsvpn here: DSVPN
Compile
Notes on the latest ubuntu:18.04 docker image:
# git clone https://github.com/jedisct1/dsvpn.git
Cloning into
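The output is cut off here; a hedged sketch of the usual next steps, building with make and creating a random 32-byte key, plus the base64 trick mentioned in the changelog above:
cd dsvpn
make
# create a random 32-byte key
dd if=/dev/urandom of=dsvpn.key bs=32 count=1
# encode it for easy copy/paste to the other machine ...
base64 < dsvpn.key
# ... and decode it again on the remote side (paste the output, then Ctrl-D)
base64 -d > dsvpn.key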
Thank you.
Notes from archlinux
xdg-open - opens a file or URL in the user’s preferred application
When you are trying to authenticate to a new workspace (with 2fa) using slack-desktop, it will open your default browser, and after the authentication your browser will redirect you back to slack-desktop using something like this:
slack://6f69f7c8b/magic-login/t3bnakl6qabc-16869c6603bdb64f3a6f69f7c8b2d920fa26149f990e0556b4e5c6f26984db0a
This is a mime query!
$ xdg-mime query default x-scheme-handler/slack
slack.desktop
$ locate slack.desktop
/usr/share/applications/slack.desktop
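If that query had returned nothing (no handler registered), the hedged fix would be to register slack.desktop for the slack:// scheme yourself:
$ xdg-mime default slack.desktop x-scheme-handler/slack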
Thank you.
Notes based on Ubuntu 18.04 LTS
My notes for this k8s blog post are based upon an Ubuntu 18.04 LTS KVM virtual machine. The idea is to use nested KVM to run minikube inside a VM, so that minikube can then create a KVM node.
minikube builds a local kubernetes cluster on a single node with a set of small resources to run a small kubernetes deployment.
Archlinux -> VM Ubuntu 18.04 LTS running minikube/kubectl -> KVM minikube node
Pre-requirements
Nested kvm
Host
(archlinux)
$ grep ^NAME /etc/os-release
NAME="Arch Linux"
Check that nested-kvm is already supported:
$ cat /sys/module/kvm_intel/parameters/nested
N
If the output is N (No) then remove & enable kernel module again:
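The standard commands for that are the following; making it persistent via modprobe.d is an extra, hedged step (the file name is an assumption):
$ sudo modprobe -r kvm_intel
$ sudo modprobe kvm_intel nested=1
$ echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf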
Thank you.
Quick notes
Identify slow disk
# hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 2502 MB in 2.00 seconds = 1251.34 MB/sec
Timing buffered disk reads: 538 MB in 3.01 seconds = 178.94 MB/sec
# hdparm -Tt /dev/sdb
/dev/sdb:
Timing cached reads: 2490 MB in 2.00 seconds = 1244.86 MB/sec
Timing buffered disk reads: 536 MB in 3.01 seconds = 178.31 MB/sec
# hdparm -Tt /dev/sdc
/dev/sdc:
Timing cached reads: 2524 MB in 2.00 seconds = 1262.21 MB/sec
Timing buffered disk reads: 538 MB in 3.00 seconds = 179.15 MB/sec
# hdparm -Tt /dev/sdd
/dev/sdd:
Timing cached reads: 223...
Thank you.
Hardware Details
HP ProLiant MicroServer
AMD Turion(tm) II Neo N54L Dual-Core Processor
Memory Size: 2 GB - DIMM Speed: 1333 MT/s
Maximum Capacity: 8 GB
Running 24×7 since 23/08/2010, so nine years!
Prologue
The above server started its life on CentOS 5 and ext3. It was later re-formatted to run CentOS 6.x with ext4 on 4 x 1TB OEM hard disks in mdadm RAID-5. That provided 3 TB of storage with fault tolerance for a single drive failure. And believe me, I used that setup, zeroing broken disks and replacing faulty ones.
As we are reaching the end of life of CentOS 6.x, there is no official dist-upgrade path for CentOS, and we are still waiting for CentOS 8.x, I made the decision to switch to Ubuntu 18.04 LTS. At that point this would be the 3rd official ...
Thank you.
MariaDB Galera Cluster on Ubuntu 18.04.2 LTS
Last Edit: 2019 06 11
Thanks to Manolis Kartsonakis for the extra info.
Official Notes here:
MariaDB Galera Cluster
A Galera Cluster is a synchronous multi-master cluster setup; each node can act as a master. The XtraDB/InnoDB storage engine syncs its data across nodes (rsync is one of the state transfer methods). Each data transaction gets a globally unique ID and then, using Write Set REPLication (wsrep), the nodes sync data with each other. When a new node joins the cluster, a State Snapshot Transfer (SST) synchronizes the full data set, while an Incremental State Transfer (IST) syncs only the missing data.
With this setup we can have:
- Data Redundancy
- Scalability
- Availability
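A minimal, hedged sketch of the wsrep settings each node typically needs for such a setup (the addresses, the cluster name and the galera library path are assumptions based on Ubuntu's galera-3 package):
cat > /etc/mysql/mariadb.conf.d/60-galera.cnf <<'EOF'
[galera]
wsrep_on                 = ON
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name       = "galera_cluster"
wsrep_cluster_address    = "gcomm://10.0.0.1,10.0.0.2,10.0.0.3"
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
bind-address             = 0.0.0.0
EOF
The first node is then usually bootstrapped with galera_new_cluster and the remaining nodes are simply started.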
Thank you.
start by reading:
man 5 sshd_config
CentOS 6.x
Ciphers aes128-ctr,aes192-ctr,aes256-ctr
KexAlgorithms diffie-hellman-group-exchange-sha256
MACs hmac-sha2-256,hmac-sha2-512
and change the below setting in /etc/sysconfig/sshd:
AUTOCREATE_SERVER_KEYS=RSAONLY
CentOS 7.x
Ciphers chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-...
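Before restarting sshd with such changes, a hedged sanity check is to test the configuration and to list what the installed OpenSSH actually supports (ssh -Q needs a reasonably recent OpenSSH, so it applies to the CentOS 7.x case):
# sshd -t
# ssh -Q cipher
# ssh -Q kex
# ssh -Q mac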
Thank you.
I was suspicious of a cron entry on a new Ubuntu server cloud VM, so I ended up looking at the logs.
Authentication token is no longer valid; new one required
After a quick internet search,
# chage -l root
Last password change : password must be changed
Password expires : password must be changed
Password inactive : password must be changed
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
Because the password must be changed on the root account, the cron entry does not run as it should.
This ephemeral image does not need ...
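For the record, a hedged way to clear this state on such an image (whether you really want password aging disabled is a policy decision):
# chage -d "$(date +%F)" root
# chage -m 0 -M 99999 -I -1 -E -1 root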
Thank you.
Ansible is a wonderful software to automatically configure your systems. The default mode of using ansible is Push Model.
That means that from your box, using only ssh + python, you can configure your fleet of machines.
Ansible is imperative. You define tasks in your playbooks and roles, and they run in a serial manner on the remote machines. Each task first checks whether it needs to run, otherwise it skips the action. And although we can use conditionals to skip actions, tasks will still perform all their checks. For that reason ansible seems slow compared to other configuration tools. Ansible runs the tasks serially, but in pseudo-parallel mode against the remote servers, to increase speed. But sometimes you need to gather_facts
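A hedged sketch of the two knobs usually involved here, parallelism and fact gathering (the values are assumptions):
cat > ansible.cfg <<'EOF'
[defaults]
# how many remote servers are handled in pseudo-parallel
forks = 20
# only gather facts when a play explicitly asks for them
gathering = explicit
EOF
Inside a playbook the same thing can be toggled per play with gather_facts: false.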
Thank you.