Dynamic DNS via Digital Ocean API

This is a quick tutorial on setting up a simple shell script that runs periodically to update a public DNS record with the IP address of your network. If you’ve ever wanted myhome.mydomain.com to always point at your home IP address, this is one way to do it. There are paid and free services that offer this functionality, but I chose this method because it gives me full control over my domain.
This tutorial assumes you have your own domain and that you’ve configured Digital Ocean as its authoritative DNS. Here, I’ll use “example.com” as the domain.

Generate a Personal Access Token

Log in to Digital Ocean and click API in the top menu. Click the Generate New Token button and give it a name. The name isn’t parsed; it’s just for your own reference, so I like to give it something descriptive of the application. For this example, I’ll use MyDNS. Be sure to check the box next to Write so we can update records with this API key.

Once you have the API Key, save it somewhere for reference. It’s one of the 3 variables we’ll need to update in the update_do_dns.sh script.

The API Key will look something like this:
fe3aeda96b7wer8wer1e6bb5erae528sdf3a6120dfrf7e492bwer6343fsdf
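
Updating a record through the API comes down to three steps: discover your current public IP, look up the ID of the DNS record, and PUT the new IP to that record via the Digital Ocean v2 API. The full post has the real update_do_dns.sh; the sketch below is just the same idea in miniature, assuming the A record already exists, that jq is installed, and using api.ipify.org to discover the public IP (the token, domain, and record name are placeholders):

#!/bin/sh
# Minimal dynamic-DNS sketch against the Digital Ocean v2 API (not the exact
# script from this post). Fill in the three variables below.
TOKEN="your-api-token-here"
DOMAIN="example.com"
RECORD="myhome"                     # keeps myhome.example.com updated

# Current public IP of this network
IP=$(curl -s https://api.ipify.org)

# Find the ID of the existing A record
RECORD_ID=$(curl -s -H "Authorization: Bearer $TOKEN" \
  "https://api.digitalocean.com/v2/domains/$DOMAIN/records?per_page=200" \
  | jq -r ".domain_records[] | select(.type==\"A\" and .name==\"$RECORD\") | .id")

# Point the record at the current IP
curl -s -X PUT \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d "{\"data\":\"$IP\"}" \
  "https://api.digitalocean.com/v2/domains/$DOMAIN/records/$RECORD_ID"

A cron entry along the lines of */5 * * * * /usr/local/bin/update_do_dns.sh keeps the record fresh every five minutes.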

Read More

Comments

Quieting a Dell R710

I have a Dell R710 rev. II that I use in my home office lab (homelab) running ESXi 6.5. The R710 sits in my office, where my girl and I spend some days when we work from home. Normally the hum of the R710 fans isn’t terribly bothersome - the 5 fans it houses run at around 3,800 RPM each - but the noise is definitely noticeable, so I did a bit of digging into ways I could quiet it down. After looking into replacing the fans with quieter ones, I found that I could instead override the system’s control of the fans and silence them that way. While I have to monitor the onboard temperatures more closely when automatic control is disabled, I’ve found little downside to doing so while I’m in the room. Here’s how to do it:

The commands below assume the default iDRAC username/password of root/calvin. Hopefully you’ve changed the default password, so substitute yours where applicable.
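
The full post walks through the exact commands; in short, the trick is sending raw IPMI commands to the iDRAC with ipmitool. The payloads below are the ones commonly documented for 11th-generation Dell PowerEdge servers, so treat this as a sketch and keep an eye on your temperatures (the iDRAC IP is a placeholder):

# Disable automatic (system-controlled) fan management
ipmitool -I lanplus -H 192.168.1.120 -U root -P calvin raw 0x30 0x30 0x01 0x00

# Manually set all fans to roughly 20% duty cycle (0x14 = 20)
ipmitool -I lanplus -H 192.168.1.120 -U root -P calvin raw 0x30 0x30 0x02 0xff 0x14

# Hand fan control back to the system when you're done
ipmitool -I lanplus -H 192.168.1.120 -U root -P calvin raw 0x30 0x30 0x01 0x01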

Read More

Comments

A simple Ansible playbook for updating multiple Pihole DNS

I wrote a simple little playbook for updating the local DNS records on my Piholes. For me it’s easier than manually SSHing onto each node, editing a file, and restarting the service. Here’s the playbook:

update_dns.yml

#!/usr/bin/env ansible-playbook
---
- hosts: ns-01,ns-02
  gather_facts: yes
  become: yes
  tasks:
    - name: TASK | Copy dnsmasq config for cbnet
      template: src=templates/02-localnet.conf.j2 dest=/etc/dnsmasq.d/02-localnet.conf force=yes

    - name: TASK | Copy updated dns file
      template: src=templates/localnet.list.j2 dest=/etc/pihole/localnet.list force=yes

    - name: TASK | Restart dnsmasq
      service:
        name: dnsmasq
        state: restarted

This playbook copies a dnsmasq config for my local network along with the templated include file it references (the list of local hosts), then restarts dnsmasq. Here is the template (sample):
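The actual templates are behind the Read More link; the pair typically looks something like this, with the addresses and hostnames below made up purely for illustration:

# templates/02-localnet.conf.j2 - points dnsmasq at the extra hosts file
addn-hosts=/etc/pihole/localnet.list

# templates/localnet.list.j2 - plain hosts-file format, one record per line
192.168.1.10    nas.localnet
192.168.1.20    esxi.localnet
192.168.1.30    pfsense.localnet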

Read More

Comments

A Dashboard for Pihole Stats

Pihole + Grafana + InfluxDB Dashboard

Grafana Dashboard
I wanted to add the metrics from my ad-blocker, the great Pi-hole, to my executive dashboard. To build it I used Grafana to display the graphs and InfluxDB as the time-series backend database. A simple Python script gets the metrics from Pi-hole and records them in InfluxDB, and Grafana makes it easy to render them into a user-friendly dashboard.

Installing Grafana and InfluxDB is beyond the scope of this blog post, but here is the script that I use to get the data from Pi-hole and insert it into InfluxDB.
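
The full post embeds the actual script; stripped down, the job is just two HTTP calls: one to the Pi-hole API for today's stats and one to InfluxDB's write endpoint. Here's a rough curl sketch of that idea (not the script from this post; the hostnames and database name are placeholders):

#!/bin/sh
# Illustrative only - not the script from this post.
PIHOLE=pi.hole                  # Pi-hole host
INFLUX=influxdb.local:8086      # InfluxDB 1.x HTTP endpoint

# Pull today's stats from the Pi-hole admin API
STATS=$(curl -s "http://$PIHOLE/admin/api.php?summaryRaw")
QUERIES=$(echo "$STATS" | jq -r '.dns_queries_today')
BLOCKED=$(echo "$STATS" | jq -r '.ads_blocked_today')

# Write them to InfluxDB using line protocol
curl -s -XPOST "http://$INFLUX/write?db=pihole" \
  --data-binary "pihole dns_queries_today=$QUERIES,ads_blocked_today=$BLOCKED"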

Once you have data flowing into InfluxDB, you’ll need to create a Grafana dashboard.

Read More

Comments

Here's how they voted

Credit: truefalseequivalence @ reddit

Internet Freedom

Senate Vote for Net Neutrality

              For   Against
Republicans     0        46
Democrats      52         0

House Vote for Net Neutrality

              For   Against
Republicans     2       234
Democrats     177         6

Read More

Comments

pfSense graphs in Grafana

Using Grafana with pfSense

I put this guide together using information from various other blogs. It’s current as of 2018, using pfSense 2.4.2. For this tutorial, you’ll need the IP or hostname of your InfluxDB server, along with its username and password.

The data flow is as follows:
pfSense -> Telegraf (gather metrics) -> InfluxDB (store metrics) -> Grafana (render graphs)

Step 1 - Install Telegraf on pfSense

SSH in to pfSense:

ssh pfsense-01.chrisbergeron.com

Select option 8 to get a shell.
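
The remaining steps in the full post cover installing and configuring Telegraf; for context, the piece that ships metrics to InfluxDB is the output section of telegraf.conf, which looks roughly like this (the URL, database name, and credentials are placeholders):

[[outputs.influxdb]]
  urls = ["http://influxdb.example.com:8086"]   # your InfluxDB server
  database = "pfsense"                          # database the metrics land in
  username = "telegraf"
  password = "changeme"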

Read More

Comments

Using Ansible to build a high availability Nzbget usenet downloader

I’m limited to about 80MB/s on downloads on my VPS at Digital Ocean, where I run Nzbget for downloading large files from usenet. Downloads don’t take long at all, but out of curiosity I wanted to see if I could parallelize this and download multiple files at the same time. I use Sonarr to search usenet for freely distributable training videos, which it then sends to NZBGet for downloading. Since Sonarr can queue up multiple files in nzbget, I figured I could work through the queue faster by downloading several of them at once.

Using Ansible and Terraform (devops automation tools), I can spin up droplets on demand, provision and configure them as nzbget download nodes, and then destroy the instances when the work is complete.

The instances all run the same nzbget config and sit behind haproxy for round-robin distribution (a sketch of the haproxy config follows below). I will probably change this to Consul eventually, but I wanted something quick, so I used a basic haproxy config.
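
For reference, a round-robin haproxy setup for the four nzbget nodes would look something like this; the IP addresses are placeholders and 6789 is NZBGet's default web/API port:

frontend nzbget_vip
    bind *:6789
    mode http
    default_backend nzbget_nodes

backend nzbget_nodes
    mode http
    balance roundrobin
    server nzbget1 10.10.0.11:6789 check
    server nzbget2 10.10.0.12:6789 check
    server nzbget3 10.10.0.13:6789 check
    server nzbget4 10.10.0.14:6789 check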

Terraform builds 4 nzbget instances, 1 haproxy instance, and 1 ELK instance, and configures a VIP that I point Sonarr at. Here’s the Terraform config that builds an nzbget server:

resource "digitalocean_droplet" "nzbget1" {s
image = "centos-7-x64"
name = "nzbget1"
tags = ["nzbget"]
region = "nyc1"
size = "2gb"
private_networking = true
ssh_keys = [
"${var.ssh_fingerprint}"
]
connection {
user = "root"
type = "ssh"
private_key = "${file(var.pvt_key)}"
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"export PATH=$PATH:/usr/bin",
"sudo yum -y install epel-release"
]
}
}

Read More

Comments

Record and playback terminal sessions with Showterm

Showterm

I just found a neat tool called Showterm that lets you record a bash session for playback and linking from your site. Embedding the playback is as simple as adding an iframe to your page:

<iframe src="http://showterm.io/7b5f8d42ba021511e627e" width="640" height="480"></iframe>

or pasting the url:

http://showterm.io/7b5f8d42ba021511e627e
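
Recording a session in the first place is just as easy. The Showterm client is distributed as a Ruby gem, so assuming you have Ruby installed, something like this should do it:

# Install the showterm client
gem install showterm

# Record a session; exiting the shell stops the recording,
# uploads it, and prints the showterm.io URL for playback
showterm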

Here’s a sample:

Comments

Building an executive dashboard with Grafana

Grafana + InfluxDB + scripts = Awesome

I have many interests, and some of them have metrics that are useful or fun to watch. For example, I have an investment in Bitcoin, so it’s nice to be able to keep an eye on the price periodically.
I decided to create a graphical “at a glance” dashboard for myself. I chose Grafana as the user interface / front end and InfluxDB as the time-series backend database to store the metrics. I use various scripts and applets to populate the data into InfluxDB, and Grafana makes it easy to render it all into a user-friendly dashboard.

Some of the metrics I monitor are Pi-hole stats, the price of Bitcoin, how many IPs get banned from my webservers, and my network throughput.
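
As an example of how small these collectors can be, here's a sketch of a Bitcoin price collector in the same spirit (not one of my actual scripts; the InfluxDB host and database name are placeholders, and it uses the Coindesk price API that was available at the time):

#!/bin/sh
# Grab the current Bitcoin price and write it to InfluxDB (illustrative sketch)
PRICE=$(curl -s https://api.coindesk.com/v1/bpi/currentprice/USD.json \
  | jq -r '.bpi.USD.rate_float')

curl -s -XPOST "http://influxdb.local:8086/write?db=dashboard" \
  --data-binary "bitcoin price=$PRICE"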

Here’s my dashboard:

Grafana Dashboard

Technology used:

Grafana
InfluxDB
Pi-hole
Comments

Using Ansible to build a high availability Sabnzbd usenet downloader

I’m limited to about 40MB/s on downloads on my VPS at Digital Ocean, where I run Sabnzbd for downloading large files from usenet. Downloads don’t take long at all, but out of curiosity I wanted to see if I could parallelize this and download multiple files at the same time. I use Sonarr to search usenet for freely distributable training videos, which it then sends to SABnzbd for downloading. Since Sonarr can queue up multiple files in sabnzbd, I figured I could work through the queue faster by downloading several of them at once.

Using Ansible and Terraform (devops automation tools), I can spin up droplets on demand, provision and configure them as sabnzbd download nodes, and then destroy the instances when the work is complete.

The instances all run the same sabnzbd config and sit behind haproxy for round-robin distribution. I will probably change this to Consul eventually, but I wanted something quick, so I used a basic haproxy config.

Read More

Comments