Quieting a Dell R710

I have a Dell R710 rev. II running ESXi 6.5 that I use in my home office lab (homelab). The R710 sits in the office where my girlfriend and I spend some days when we work from home. Normally the hum of its five fans, each spinning at around 3,800 RPM, isn’t terribly bothersome, but the noise is definitely noticeable, so I did a bit of digging into ways to quiet it down. After looking into replacing the fans with quieter ones, I found that I could override the system’s automatic control of the fans and silence them that way. While I have to monitor the onboard temperatures more closely with automatic control disabled, I’ve found little downside to doing so while I’m in the room. Here’s how to do it:

The commands below assume the default iDRAC username / password of root / calvin. Hopefully you’ve changed the default password, so substitute yours where applicable.
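
As a hedged sketch of what follows behind the Read More link (the raw payloads below are the ones commonly reported for 11th-generation PowerEdge boxes, and the iDRAC address is a placeholder, so verify against your own hardware before relying on them), the override is done over IPMI with ipmitool:

# Hedged example - substitute your own iDRAC address and credentials
IDRAC=192.168.1.120
IPMI="ipmitool -I lanplus -H $IDRAC -U root -P calvin"

# Take manual control of the fans (disable automatic fan control)
$IPMI raw 0x30 0x30 0x01 0x00

# Pin the fans at roughly 20% (0x14 hex = 20 decimal)
$IPMI raw 0x30 0x30 0x02 0xff 0x14

# Hand control back to the system when you're done
$IPMI raw 0x30 0x30 0x01 0x01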

Read More


A simple Ansible playbook for updating multiple Pihole DNS servers

I wrote a very simple little playbook for updating my local DNS records for my piholes. For me it’s easier than manually ssh’ing onto each node and editing a file and restarting the service. Here’s the playbook:

update_dns.yml

#!/usr/bin/env ansible-playbook
---
- hosts: ns-01,ns-02
  gather_facts: yes
  sudo: yes
  tasks:
    - name: TASK | Copy dnsmasq config for cbnet
      template: src=templates/02-localnet.conf.j2 dest=/etc/dnsmasq.d/02-localnet.conf force=yes
    - name: TASK | Copy updated dns file
      template: src=templates/localnet.list.j2 dest=/etc/pihole/localnet.list force=yes
    - name: TASK | Restart dnsmasq
      service:
        name: dnsmasq
        state: restarted

This playbook copies a dnsmasq config file for my local network, copies an updated local DNS list (a dnsmasq include file for my local network), and restarts dnsmasq. Here is the template (sample):
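
The actual template is behind the Read More link; purely as a hedged illustration of its shape (the hostnames and addresses below are made up), localnet.list.j2 is just hosts-file style entries that the Piholes serve for local names:

# templates/localnet.list.j2 - hypothetical sample entries
192.168.1.10    nas.localdomain
192.168.1.20    esxi-01.localdomain
192.168.1.30    pihole-01.localdomain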

Read More


A Dashboard for Pihole Stats

Pihole + Grafana + InfluxDB Dashboard

Grafana Dashboard
I wanted to add the metrics from my ad-blocker, the great Pihole, to my executive dashboard. To create the dashboard I used Grafana to display the graphs and InfluxDB as the time-series backend database. A simple Python script pulls the metrics from Pihole and records them in InfluxDB, and Grafana makes it easy to render them into a user friendly dashboard.

Installing Grafana and InfluxDB is beyond the scope of this blog post, but here is the script that I use to get the data from Pihole and insert it into InfluxDB.
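
The full script is behind the Read More link; here is a hedged sketch of the same idea (the hostnames, database name and field list are assumptions, and it targets Pihole’s classic api.php endpoint with the InfluxDB 1.x Python client):

#!/usr/bin/env python
# Hedged sketch: pull Pihole summary stats and write them to InfluxDB 1.x.
# Hostnames, database name and measurement name are made-up examples.
import requests
from influxdb import InfluxDBClient

PIHOLE_API = "http://pihole-01.localdomain/admin/api.php?summaryRaw"
stats = requests.get(PIHOLE_API, timeout=5).json()

points = [{
    "measurement": "pihole",
    "fields": {
        "domains_being_blocked": int(stats["domains_being_blocked"]),
        "dns_queries_today": int(stats["dns_queries_today"]),
        "ads_blocked_today": int(stats["ads_blocked_today"]),
        "ads_percentage_today": float(stats["ads_percentage_today"]),
    },
}]

client = InfluxDBClient(host="influxdb.localdomain", port=8086, database="pihole")
client.write_points(points)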

Once data is flowing into InfluxDB, you’ll need to create a Grafana dashboard for it.

Read More


Here's how they voted

Credit: truefalseequivalence @ reddit

Internet Freedom

Senate Vote for Net Neutrality

             For   Against
Republicans    0        46
Democrats     52         0

House Vote for Net Neutrality

             For   Against
Republicans    2       234
Democrats    177         6

Read More


pfSense graphs in Grafana

Using Grafana with pfSense

I put this guide together using information from various other blogs. It’s current as of 2018 and pfSense 2.4.2. For this tutorial, you’ll need the IP or hostname of your InfluxDB server along with its username and password.

The data flow is as follows:
pfSense -> Telegraf (gather metrics) -> InfluxDB (store metrics) -> Grafana (render graphs)

Step 1 - Install Telegraf on pfSense

SSH in to the pfSense box:

ssh pfsense-01.chrisbergeron.com

Select option 8 to get a shell.

Download and install Telegraf:

pkg add http://pkg.freebsd.org/freebsd:11:x86:64/latest/All/telegraf-1.4.4.txz

Enable telegraf:

echo 'telegraf_enable=YES' >> /etc/rc.conf

Edit the telegraf config file:

cd /usr/local/etc
vi telegraf.conf

Step 2 - Configure Telegraf

Make the following changes:

[[outputs.influxdb]]
  urls = ["http://10.5.5.40:8089"]
  ...
  database = "pfsense"
  ...
  username = "your_username"
  password = "your_password"

[[inputs.net]]
  interfaces = ["igb0", "igb1", "igb2", "ovpns1"]

You’ll have to modify the inputs for your own setup. In this example, I’ll be monitoring an OpenVPN tunnel and 3 interfaces: WAN, LAN and backup WAN.

Step 3 - Start Telegraf

Finally, start telegraf:

cd /usr/local/etc/rc.d
./telegraf start

You won’t need to restart anything else on the pfSense box. If you have any issues, look at the log file (/var/log/telegraf.log). In a future blog post I’ll show how to create an InfluxDB data source in Grafana and build a basic graph.

I’m mainly graphing bandwidth (see the query sketch after this list), but you can graph any of the following:

  • cpu
  • disk
  • diskio
  • net
  • processes
  • swap
  • system
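
For bandwidth specifically, a Grafana panel against the InfluxDB data source boils down to an InfluxQL query along these lines; this is a hedged example assuming Telegraf’s net input and Grafana’s built-in template variables, with igb0 standing in for the WAN interface from the config above:

SELECT derivative(mean("bytes_recv"), 1s) * 8
FROM "net"
WHERE ("interface" = 'igb0') AND $timeFilter
GROUP BY time($__interval) fill(null)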

Here’s a sample graph:

Grafana Dashboard

Technology used:



  • Grafana
  • InfluxDB
  • pfSense
  • Telegraf

Using Ansible to build a high availability NZBget usenet downloader

I’m limited to about 80MB/s per download on my VPS at Digital Ocean, where I run NZBget for downloading large files from usenet. Downloads don’t take long at all, but out of curiosity I wanted to see if I could parallelize them and pull down multiple files at the same time. I use Sonarr to search usenet for freely distributable training videos, which it then sends to NZBget for downloading. Since Sonarr can send NZBget multiple files that get queued up, I figured I could work through the queue faster by downloading them simultaneously.

Using Ansible and Terraform (devops automation tools), I can spin up VPS instances on demand, provision them, configure them as NZBget download nodes, and then destroy the instances when complete.

The instances all run the same NZBget config and sit behind haproxy for round-robin distribution (a config sketch follows below). I will probably change this to Consul, but I just wanted something quick, so I used a basic haproxy config.
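
As a hedged sketch of that round-robin piece (the addresses below are placeholders rather than my actual config, and 6789 is NZBget’s default web port), the haproxy side only needs a frontend bound to the VIP and a round-robin backend:

# Hypothetical haproxy.cfg fragment - round-robin across the nzbget droplets
frontend nzbget_in
    bind *:6789
    mode http
    default_backend nzbget_nodes

backend nzbget_nodes
    mode http
    balance roundrobin
    server nzbget1 10.132.0.11:6789 check
    server nzbget2 10.132.0.12:6789 check
    server nzbget3 10.132.0.13:6789 check
    server nzbget4 10.132.0.14:6789 check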

Terraform builds 4 NZBget instances, 1 haproxy instance, and 1 ELK instance, and configures a VIP which I point Sonarr to. Here’s the Terraform config that builds an NZBget server:

resource "digitalocean_droplet" "nzbget1" {s
image = "centos-7-x64"
name = "nzbget1"
tags = ["nzbget"]
region = "nyc1"
size = "2gb"
private_networking = true
ssh_keys = [
"${var.ssh_fingerprint}"
]
connection {
user = "root"
type = "ssh"
private_key = "${file(var.pvt_key)}"
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"export PATH=$PATH:/usr/bin",
"sudo yum -y install epel-release"
]
}
}

Read More


Record and playback terminal sessions with Showterm

Showterm

I just found a neat tool that will let you record a bash session for playback / site linking. It’s called Showterm. Adding the playback video is as simple as adding an iframe to your page:

<iframe src="http://showterm.io/7b5f8d42ba021511e627e" width="640" height="480"></iframe>

or pasting the url:

http://showterm.io/7b5f8d42ba021511e627e
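
Recording a session in the first place is just as easy; as a hedged sketch from memory of the Showterm project’s README (worth double-checking there), the client ships as a Ruby gem:

# Install the showterm client and record a session; when the recorded
# shell exits it uploads the capture and prints the playback URL.
gem install showterm
showterm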

Here’s a sample:


Building an executive dashboard with Grafana

Grafana + InfluxDB + scripts = Awesome

I have many interests, and some of them have metrics that are useful or fun to watch. For example, I have an investment in Bitcoin, so it’s nice to be able to keep an eye on it periodically. I decided to create a graphical “at a glance” dashboard for myself. I chose Grafana as the user interface / front end and InfluxDB as the time-series backend database to store the metrics. I use various scripts and applets to populate the data into InfluxDB, and Grafana makes it easy to render them into a user friendly dashboard.

Some of the metrics I monitor are Pihole stats, the price of bitcoin, how many IPs get banned from my webservers and my network throughput.
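
As a hedged example of one of those collector scripts (the price API, InfluxDB host and database name are placeholders for illustration, not my actual setup), a cron-driven shell script can push the Bitcoin price straight into InfluxDB’s 1.x write endpoint:

#!/bin/sh
# Hedged sketch: fetch the current BTC price and write it via InfluxDB line protocol.
# The API endpoint, InfluxDB host and database name are made-up examples.
PRICE=$(curl -s 'https://api.coingecko.com/api/v3/simple/price?ids=bitcoin&vs_currencies=usd' \
  | python -c 'import sys, json; print(json.load(sys.stdin)["bitcoin"]["usd"])')
curl -s -XPOST 'http://influxdb.localdomain:8086/write?db=dashboard' \
  --data-binary "bitcoin price=${PRICE}"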

Here’s my dashboard:

Grafana Dashboard

Technology used:



  • Grafana
  • InfluxDB
  • PiHole

Using Ansible to build a high availability SABnzbd usenet downloader

I’m limited to about 40MB/s per download on my VPS at Digital Ocean, where I run SABnzbd for downloading large files from usenet. Downloads don’t take long at all, but out of curiosity I wanted to see if I could parallelize them and pull down multiple files at the same time. I use Sonarr to search usenet for freely distributable training videos, which it then sends to SABnzbd for downloading. Since Sonarr can send SABnzbd multiple files that get queued up, I figured I could work through the queue faster by downloading them simultaneously.

Using Ansible and Terraform (devops automation tools), I can spin up VPS instances on demand, provision them, configure them as SABnzbd download nodes, and then destroy the instances when complete.

The instances all run the same SABnzbd config and sit behind haproxy for round-robin distribution. I will probably change this to Consul, but I just wanted something quick, so I used a basic haproxy config.
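
The playbook itself is behind the Read More link; purely as a hedged sketch of its shape (the group name, paths and service name below are assumptions, not my actual playbook), the Ansible side looks something like this:

#!/usr/bin/env ansible-playbook
---
# Hypothetical sketch - configure freshly built droplets as sabnzbd download nodes
- hosts: sabnzbd_nodes
  gather_facts: yes
  become: yes
  tasks:
    - name: TASK | Push the shared sabnzbd config
      template:
        src: templates/sabnzbd.ini.j2
        dest: /etc/sabnzbd/sabnzbd.ini
    - name: TASK | Restart sabnzbd
      service:
        name: sabnzbd
        state: restarted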

Read More


System info bash script

I put together a quick bash shell script to view system info at a glance. I know there are existing tools for this like inxi, but I wanted to put together something I can just copypasta. This is specific to RHEL, CentOS, and Scientific Linux, but it can be easily adapted for other distros.

#!/bin/bash
# run as root
#echo "================================ Services: ================================"
#for i in `journalctl -F _SYSTEMD_UNIT | grep -v session | sed -e "s/.service//g"`; do echo "========= $i ========="; ps aux | grep -v grep | grep $i; done
echo "================================ Service restarts today: ====================================="
output=$(journalctl --since today | grep -v slice | egrep "Starting|Stopping" | grep -v systemd)
echo "$output"
echo "================================ User logins today: =========================================="
output=$(journalctl --since today | grep -v slice | grep pam_unix)
echo "$output"
echo "================================ Non core processes: ========================================="
pstree -n | grep -v lvmeta | grep -v systemd | grep -v rhn | grep -v dbus-daemon | grep -v crond | grep -v agetty | grep -v oddjobd | grep -v vmtoolsd | grep -v tuned | grep -v rsyslog | grep -v irqbalance | grep -v ntpd | grep -v sssd | grep -v qmgr | grep -v pickup | grep -v puppet | grep -v mcollectived | grep -v dhclient | grep -v sshd | grep -v nrpe | grep -v collectd | grep -v xinetd | grep -v consul | grep -v rpcbind | grep -v grep
echo "================================ Open ports: ================================================="
sudo netstat -ntpl | grep -v ssh | grep -v nrpe | grep -v master | grep -v "rpc" | grep -v Active | grep -v Proto | grep -v xinetd | grep -v systemd | sed -e "s/:::/0.0.0.0:/g" | tr -s " " | cut -f4,7- -d " "
echo "================================ Open Sockets: ==============================================="
sudo systemctl list-sockets
echo "================================ Uptime: ====================================================="
uptime
echo "================================ Users: ======================================================"
who
echo "================================ Volume Info: ================================================“
lvdisplay | grep "LV Name"
vgdisplay | grep "VG Name"