🎉 If you want a self‑contained, production‑ready reverse proxy that automatically provisions TLS certificates from Let’s Encrypt and uses LuaDNS as the DNS provider, you’re in the right place.
Below you’ll find a step‑by‑step guide that walks through:
- Installing the required containers
- Configuring Traefik with LuaDNS DNS‑Challenge
- Running the stack and verifying everything works
TL;DR – Copy the files, set your environment variables, run docker compose up -d, and point a browser to https://<your-hostname>.
📁 Project Layout
traefik/
├── certs/ # ACME certificates will be stored here
├── docker-compose.yml # Docker‑Compose definition
├── .env # Environment variables for the stack
└── etc_traefik/
    ├── traefik.yml        # Traefik configuration
    └── dynamic/           # Dynamic Traefik configuration lives here
        └── whoami.yml     # whoami router/service configuration
Why this structure?
- certs/ – keeps the ACME JSON file outside the container so it survives restarts.
- etc_traefik/ – keeps the Traefik config in a dedicated folder for clarity.
- .env – central place to store secrets and other runtime values.
🔧 Step 1 – Prepare Your Environment
1. Install Docker & Docker‑Compose
If you don’t already have them:
# Debian/Ubuntu
sudo apt update && sudo apt install docker.io docker-compose-plugin
# Verify
docker --version
docker compose version
2. Clone or Create the Project Folder
mkdir -p traefik/certs traefik/etc_traefik/dynamic
cd traefik
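Optionally, pre-create the ACME storage file. Traefik insists on restrictive permissions for acme.json and will complain at startup if they are too open; a small sketch, assuming the certs/ layout above (Traefik can also create the file itself on first run):
# Pre-create the ACME storage file with owner-only permissions
touch certs/acme.json
chmod 600 certs/acme.json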
⚙️ Step 2 – Create the Configuration Files
1. docker-compose.yml
services:
  traefik:
    image: traefik:v3.5
    container_name: traefik
    hostname: traefik
    env_file:
      - ./.env
    environment:
      - TRAEFIK_CERTIFICATESRESOLVERS_LETSENCRYPT_ACME_EMAIL=${LUADNS_API_USERNAME}
    restart: unless-stopped
    # Expose HTTP, HTTPS and the dashboard
    ports:
      - "8080:8080"   # Dashboard (insecure)
      - "80:80"
      - "443:443"
    volumes:
      - ./certs:/certs
      - ./etc_traefik:/etc/traefik
      - /var/run/docker.sock:/var/run/docker.sock:ro
    healthcheck:
      test: ["CMD", "traefik", "healthcheck"]
      interval: 30s
      retries: 3
      timeout: 10s
      start_period: 10s

  whoami:
    image: traefik/whoami
    container_name: whoami
    hostname: whoami
    depends_on:
      traefik:
        condition: service_healthy
    labels:
      - "traefik.enable=true"
Why whoami? It is a simple container that prints the request metadata back to you – perfect for testing TLS, routing, and the DNS‑Challenge.
2. .env
UMASK="002"
TZ="Europe/Athens"
# LuaDNS credentials (replace with your own)
LUADNS_API_TOKEN="<Your LuaDNS API key>"
LUADNS_API_USERNAME="<Your Email Address>"
# Hostname you want to expose
MYHOSTNAME=whoami.example.org
# (Optional) LibreDNS server used for challenge verification
DNS="88.198.92.222"
Important – Do not commit your .env to version control.
Use a .gitignore entry or environment-variable injection on your host.
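For example, a minimal sketch that keeps both the secrets and the generated certificates out of git (assuming the project lives in its own repository):
# Keep secrets and certificates out of version control
printf '.env\ncerts/\n' >> .gitignore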
3. etc_traefik/traefik.yml
# Ping endpoint for health checks
ping: {}

# Dashboard & API
api:
  dashboard: true
  insecure: true   # `true` only for dev; enable auth in prod

# Logging
log:
  filePath: /etc/traefik/traefik.log
  level: DEBUG

# Entry points (HTTP & HTTPS)
entryPoints:
  web:
    address: ":80"
    reusePort: true
  websecure:
    address: ":443"
    reusePort: true

# Docker provider – disable auto-exposure
providers:
  docker:
    exposedByDefault: false
  # Enable the file provider
  file:
    directory: /etc/traefik/dynamic/
    watch: true

# ACME resolver using LuaDNS
certificatesResolvers:
  letsencrypt:
    acme:
      # Will be read from TRAEFIK_CERTIFICATESRESOLVERS_LETSENCRYPT_ACME_EMAIL,
      # or add your email address here directly
      email: ""
      storage: "/certs/acme.json"
      # Uncomment the following line for production
      ## caServer: https://acme-v02.api.letsencrypt.org/directory
      # Staging environment (for testing only)
      caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      dnsChallenge:
        provider: luadns
        delayBeforeCheck: 0
        resolvers:
          - "8.8.8.8:53"
          - "1.1.1.1:53"
Key points
- storage points to the shared certs/ folder.
- We’re using the staging Let’s Encrypt server – change it to production when you’re ready.
- dnsChallenge.provider is set to luadns; Traefik’s built-in ACME client reads the LUADNS_API_USERNAME and LUADNS_API_TOKEN variables from .env to authenticate against the LuaDNS API.
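Before starting the stack you can sanity-check the LuaDNS credentials directly, which is a quick hedge against typos in .env; this sketch uses LuaDNS’s documented REST endpoint /v1/zones with HTTP Basic auth:
# A 200 response with a JSON list of zones means the credentials work
source .env
curl -s -u "${LUADNS_API_USERNAME}:${LUADNS_API_TOKEN}" \
     -H 'Accept: application/json' \
     https://api.luadns.com/v1/zones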
4. etc_traefik/dynamic/whoami.yml
http:
  routers:
    whoami:
      rule: 'Host(`{{ env "MYHOSTNAME" }}`)'
      entryPoints: ["websecure"]
      service: "whoami"
      tls:
        certResolver: letsencrypt
  services:
    whoami:
      loadBalancer:
        servers:
          - url: "http://whoami:80"
🔐 Step 3 – Run the Stack
docker compose up -d
Docker will:
- Pull traefik:v3.5 and traefik/whoami.
- Create the containers, mount the volumes, and start Traefik.
- Trigger a DNS‑Challenge for whoami.example.org (via LuaDNS).
- Request an ACME certificate from Let’s Encrypt.
Tip – Use docker compose logs -f traefik to watch the ACME process in real time.
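Once the challenge succeeds, the certificate is written to certs/acme.json. A quick command-line check for issued domains (a sketch that assumes Traefik’s usual acme.json layout, keyed by resolver name, and that jq is installed):
# List the domains for which certificates have been issued
jq -r '.letsencrypt.Certificates[].domain.main' certs/acme.json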
🚀 Step 4 – Verify Everything Works
- Open a browser and go to https://whoami.example.org (replace with whatever you set in MYHOSTNAME).
- You should see a plain-text response similar to:
Hostname: whoami
IP: 127.0.0.1
IP: ::1
IP: 172.19.0.3
RemoteAddr: 172.19.0.2:54856
GET / HTTP/1.1
Host: whoami.example.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8
Accept-Encoding: gzip, deflate, br, zstd
Accept-Language: en-GB,en;q=0.6
Cache-Control: max-age=0
Priority: u=0, i
Sec-Ch-Ua: "Brave";v="141", "Not?A_Brand";v="8", "Chromium";v="141"
Sec-Ch-Ua-Mobile: ?0
Sec-Ch-Ua-Platform: "macOS"
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Sec-Gpc: 1
Upgrade-Insecure-Requests: 1
X-Forwarded-For: 203.0.113.18
X-Forwarded-Host: whoami.example.org
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Server: traefik
X-Real-Ip: 203.0.113.18
- In the browser’s developer tools → Security tab, confirm the certificate details. While you are on the staging caServer, the certificate is issued by "(STAGING) Let’s Encrypt" and browsers will flag it as untrusted; it becomes fully valid once you switch to the production CA.
- Inspect the Traefik dashboard at http://localhost:8080 (you’ll see the whoami router and its TLS configuration).
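You can also verify the certificate from the shell instead of the browser; a minimal openssl check (replace the hostname with your MYHOSTNAME):
# Show the certificate issuer and validity window
openssl s_client -connect whoami.example.org:443 \
    -servername whoami.example.org </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -dates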
🎯 What’s Next?
| Feature | How to enable |
|---|---|
| HTTPS‑only | Add "traefik.http.middlewares.https-redirect.redirectscheme.scheme=https" as a label and reference https-redirect in the router’s middlewares, or redirect at the entrypoint level (see the sketch after the caServer example below). |
| Auth on dashboard | Use Traefik’s built‑in auth middlewares or an external provider. |
| Automatic renewal | Traefik handles it automatically; just keep the stack running. |
| Production CA | Switch caServer to the production URL in traefik.yml. |
For the production CA, make the change in etc_traefik/traefik.yml:
# Uncomment the following line for production
caServer: https://acme-v02.api.letsencrypt.org/directory
## caServer: https://acme-staging-v02.api.letsencrypt.org/directory
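For the HTTPS-only row above, an alternative to per-router labels is redirecting at the entrypoint level. A minimal sketch for etc_traefik/traefik.yml, extending the web entrypoint we already defined (standard Traefik v3 static configuration):
entryPoints:
  web:
    address: ":80"
    reusePort: true
    # Redirect all plain-HTTP traffic to the HTTPS entrypoint
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https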
Final Thoughts
Using Traefik with LuaDNS gives you:
- Zero‑configuration TLS that renews automatically.
- Fast DNS challenges thanks to LuaDNS’s low‑latency API.
- Docker integration – just add labels to any container and it’s instantly exposed.
Happy routing! 🚀
That’s it!
PS. These are my personal notes from my home lab; AI was used to structure and format the final version of this blog post.
Original Post is here:
https://balaskas.gr/blog/2025/10/10/setting-up-traefik-and-lets-encrypt-acme-with-luadns-in-docker/
🚀 Curious about trying out a Large Language Model (LLM) like Mistral directly on your own MacBook?
Here’s a simple step-by-step guide I used on my MacBook M1 Pro. No advanced technical skills required, just some basic command-line familiarity. Follow the commands and you’ll be chatting with an AI model in no time.
🧰 What We’ll Need
- LLM → a CLI utility and Python library that makes it easy to install and interact with Large Language Models from the command line.
- Mistral → a modern open-source language model you can run locally.
- Python virtual environment → a safe “sandbox” where we install the tools without messing with the rest of the system.
- MacBook → All Apple Silicon MacBooks (M1, M2, M3, M4 chips) feature an integrated GPU on the same chip as the CPU.
🧑🔬 About Mistral 7B
Mistral 7B is a 7-billion parameter large language model, trained to be fast, efficient, and good at following instructions.
Technical requirements (approximate):
- Full precision model (FP16) → ~13–14 GB of RAM (fits best on a server or high-end GPU).
- Quantized model (4-bit, like the one we use here) → ~4 GB of RAM, which makes it practical for a MacBook or laptop.
- Disk storage → the 4-bit model download is around 4–5 GB.
- CPU/GPU → runs on Apple Silicon (M1/M2/M3/M4) thanks to the MLX library, which uses the chip’s unified CPU/GPU memory. Note that MLX requires Apple Silicon; on an Intel Mac you would need a different runtime (for example llama.cpp), and it would be slower.
👉 In short:
With the 4-bit quantized version, you can run Mistral smoothly on a modern MacBook with 8 GB RAM or more. The more memory and cores you have, the faster it runs.
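Not sure what your Mac has? You can check the chip and installed memory from the terminal before downloading anything, using macOS’s built-in tools:
# Show chip model and installed RAM
system_profiler SPHardwareDataType | grep -E 'Chip|Memory'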
⚙️ Step 1: Create a Virtual Environment
We’ll create a clean workspace just for this project.
python3 -m venv ~/.venvs/llm
source ~/.venvs/llm/bin/activate
👉 What happens here:
- python3 -m venv creates a new isolated environment named llm.
- source .../activate switches you into that environment, so all installs stay inside it.
📦 Step 2: Install the LLM Tool
Now, let’s install LLM.
pip install -U llm
👉 This gives us the llm command we’ll use to talk to models.
🛠️ Step 3: Install Extra Dependencies
Mistral needs a few extra packages:
pip install mlx
pip install sentencepiece
👉 mlx is Apple’s library that helps models run efficiently on Mac.
👉 sentencepiece helps the model break down text into tokens (words/pieces).
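To confirm MLX installed correctly and can see the GPU, here is a one-line sanity check (it calls mlx.core.default_device(), which should report the GPU on Apple Silicon):
# Should print something like: Device(gpu, 0)
python -c 'import mlx.core as mx; print(mx.default_device())'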
🔌 Step 4: Install the Mistral Plugin
We now connect LLM with Mistral:
llm install llm-mlx
👉 This installs the llm-mlx plugin, which allows LLM to use Mistral models via Apple’s MLX framework.
Verify the plugin is installed:
llm plugins
The result should look like this:
[
  {
    "name": "llm-mlx",
    "hooks": [
      "register_commands",
      "register_models"
    ],
    "version": "0.4"
  }
]
⬇️ Step 5: Download the Model
Now for the fun part — downloading Mistral 7B.
llm mlx download-model mlx-community/Mistral-7B-Instruct-v0.3-4bit
👉 This pulls down the model from the community in a compressed, 4-bit version (smaller and faster to run on laptops).
Verify the model is on your system:
llm models | grep -i mistral
The output should look similar to this:
MlxModel: mlx-community/Mistral-7B-Instruct-v0.3-4bit (aliases: m7)
🏷️ Step 6: Set a Shortcut (Alias)
Typing the full model name is long and annoying. Let’s create a shortcut:
llm aliases set m7 mlx-community/Mistral-7B-Instruct-v0.3-4bit
👉 From now on, we can just use -m m7 instead of the full model name.
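To double-check that the shortcut was registered, list the configured aliases (the llm aliases list subcommand):
llm aliases list | grep -i m7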
💡 Step 7: One last thing
macOS ships with LibreSSL, which newer versions of urllib3 do not support. If you are using Homebrew, you most probably already have OpenSSL on your system. If you do not know what we are talking about, you are likely on LibreSSL and need to pin urllib3 to an older release:
pip install "urllib3<2"
Alternatively, if you are using brew, install OpenSSL:
brew install openssl@3
💬 Step 8: Ask Your First Question
Time to chat with Mistral!
llm -m m7 'Capital of Greece ?'
👉 Expected result:
The model should respond with:
Athens
🎉 Congratulations — you’ve just run a powerful AI model locally on your Mac!
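Single questions are handy, but for a longer back-and-forth you can open an interactive session with the same alias; llm chat keeps the model loaded between prompts, so follow-up answers come back much faster:
llm chat -m m7
# Type your messages at the prompt; 'exit' or 'quit' ends the session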
👨💻 A More Technical Example
Mistral isn’t only for trivia — it can help with real command-line tasks too.
For example, let’s ask it something more advanced:
llm -m m7 'On Arch Linux, give only the bash command using find
that lists files in the current directory larger than 1 GB,
do not cross filesystem boundaries. Output file sizes in
human-readable format with GB units along with the file paths.
Return only the command.'
👉 Mistral responds with:
find . -type f -size +1G -exec du -sh {} +
💡 What this does:
- find . -type f -size +1G → finds regular files bigger than 1 GB under the current directory.
- -exec du -sh {} + → runs du on the matched files to print each one’s size in human-readable format next to its path.
This is the kind of real-world productivity boost you get by running models locally.
The model’s full text output:
This command will find all files (-type f) larger than 1 GB (-size +1G) in the current directory (.) and execute the du -sh command on each file to display the file size in a human-readable format with GB units (-h). The + after -exec tells find to execute the command once for each set of found files, instead of once for each file.
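One nuance worth noting: the prompt asked the model not to cross filesystem boundaries, but the generated command omits that constraint. Adding find’s -xdev flag would satisfy it; a corrected sketch:
# -xdev stops find from descending into other mounted filesystems
find . -xdev -type f -size +1G -exec du -sh {} +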
🌟 Why This Is Cool
- 🔒 No internet needed once the model is downloaded.
- 🕵️ Privacy: your text never leaves your laptop.
- 🧪 Flexible: you can try different open-source models, not just Mistral.
The only trade-off: it won’t be as fast as running in the cloud.
That’s it!
PS. These are my personal notes from my home lab; AI was used to structure and format the final version of this blog post.