Using Ansible to build a high availability NZBGet usenet downloader

I’m limited to about 80MB/s per download on my VPS at Digital Ocean, where I run NZBGet to download large files from Usenet. Downloads don’t take long at all, but out of curiosity I wanted to see whether I could parallelize them and download multiple files at the same time. I use Sonarr to search Usenet for freely distributable training videos, which it then sends to NZBGet for downloading. Since Sonarr can send multiple files to NZBGet, where they get queued up, I figured I could drain the queue faster by downloading them in parallel.

Using Ansible and Terraform (DevOps automation tools), I can spin up VPS instances on demand, provision them, configure them as NZBGet download nodes, and then destroy the instances when the work is complete.

The instances all run the same NZBGet config, and HAProxy distributes requests across them round-robin. I will probably change this to Consul later, but I wanted something quick, so I used a basic HAProxy config.
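For reference, a round-robin setup along these lines can be sketched with a minimal HAProxy backend (the IPs are placeholders and 6789 is NZBGet's default web/API port; the real config is generated from an Ansible template):

```
frontend nzbget_vip
    bind *:6789
    default_backend nzbget_pool

backend nzbget_pool
    balance roundrobin
    server nzbget1 10.132.0.11:6789 check
    server nzbget2 10.132.0.12:6789 check
    server nzbget3 10.132.0.13:6789 check
    server nzbget4 10.132.0.14:6789 check
```

With `check` enabled, HAProxy drops a node from the rotation if its NZBGet instance stops responding, which matters once nodes are being created and destroyed on demand.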

Terraform builds 4 NZBGet instances, 1 HAProxy instance, and 1 ELK instance, and configures a VIP which I point Sonarr at. Here’s the Terraform config that builds an NZBGet server:

resource "digitalocean_droplet" "nzbget1" {
  image              = "centos-7-x64"
  name               = "nzbget1"
  tags               = ["nzbget"]
  region             = "nyc1"
  size               = "2gb"
  private_networking = true

  ssh_keys = [
    "${var.ssh_fingerprint}",
  ]

  connection {
    user        = "root"
    type        = "ssh"
    private_key = "${file(var.pvt_key)}"
    timeout     = "2m"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/bin",
      "sudo yum -y install epel-release",
    ]
  }
}

A single Terraform run provisions all 6 new hosts (1 Elasticsearch, 1 HAProxy and 4 NZBGet nodes).


I run a script that takes the IPs and node names from the Terraform output and updates my local /etc/hosts file, my Ansible hosts file, the haproxy.cfg.j2 Ansible template, and my ssh_config file (optional, but convenient).
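The core of that script is just mapping Terraform outputs to hosts-file entries. A minimal sketch in Python (the input shape here is a plain name-to-IP dict; how you extract it from `terraform output -json` depends on your own output variables, which aren't shown in the post):

```python
import json

def hosts_lines(outputs):
    """Turn {"name": "ip", ...} into sorted /etc/hosts entries."""
    return ["%s\t%s" % (ip, name) for name, ip in sorted(outputs.items())]

# Example with made-up private IPs; real input would come from
# `terraform output -json` piped through json.load().
example = {
    "nzbget1": "10.132.0.11",
    "nzbget2": "10.132.0.12",
    "haproxy1": "10.132.0.20",
}
print("\n".join(hosts_lines(example)))
```

The same dict can drive the Ansible inventory and the haproxy.cfg.j2 server list, so all four files stay in sync from one source of truth.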

A few minutes later the droplets are available and ready.

Secrets, API keys and passwords
I keep my API keys and passwords in my Ansible vault. This keeps them AES-encrypted at rest while still allowing Ansible to interpolate them into the NZBGet config templates. The API keys are for the Usenet providers, and the passwords protect the web UI.
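The vault side of that might look like the following (the variable names are illustrative, not the author's actual keys; the file is encrypted in place with `ansible-vault encrypt`):

```yaml
# group_vars/all/vault.yml -- AES-encrypted at rest via ansible-vault
vault_usenet_api_key: "0123456789abcdef"
vault_webui_password: "changeme"
```

Inside a template such as nzbget.conf.j2, a vaulted value is referenced like any other variable, e.g. `ControlPassword={{ vault_webui_password }}`, so the rendered config on each node contains the plaintext secret while the repo never does.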

Here is the Ansible playbook that configures the NZBGet instances:

#!/usr/bin/env ansible-playbook
---
- hosts: all
  gather_facts: yes
  become: yes
  tasks:
    - name: TASK | Create usenet group
      group:
        name: usenet
        state: present

    - name: TASK | Create usenet user
      user:
        name: usenet
        shell: /bin/bash
        groups: usenet
        append: yes
        password: "*** REMOVED ***"
        update_password: on_create

    - name: TASK | Create /apps directory
      file:
        path: /apps
        state: directory
        owner: usenet
        group: usenet
        mode: 0755

    - name: TASK | Create /apps/data/nzbget directory
      file:
        path: /apps/data/nzbget
        state: directory
        owner: usenet
        group: usenet
        mode: 0755

    - name: TASK | Install EPEL Repo
      yum:
        name: epel-release
        state: present
        update_cache: yes

    - name: TASK | Install packages
      yum:
        name: "{{ item }}"
        state: present
        update_cache: yes
      with_items:
        - vim
        - git
        - mlocate
        - par2cmdline
        - p7zip
        - unzip
        - tar
        - gcc
        - python-feedparser
        - python-configobj
        - python-cheetah
        - python-dbus
        - python-devel
        - screen
        - htop
        - iftop
        - bind-utils
        - tree
        - jq
        - telnet
        - lsof
        - tcpdump
        - nload

    - name: TASK | Install unrar
      yum:
        name: ftp://rpmfind.net/linux/dag/redhat/el7/en/x86_64/dag/RPMS/unrar-5.0.3-1.el7.rf.x86_64.rpm
        state: present

    - name: TASK | Install yenc
      yum:
        name: http://archives.fedoraproject.org/pub/archive/fedora/linux/releases/22/Everything/x86_64/os/Packages/p/python-yenc-0.4.0-4.fc22.x86_64.rpm
        state: present

    - name: TASK | Change owner of Nzbget to usenet
      file:
        path: /apps
        state: directory
        owner: usenet
        group: usenet
        recurse: yes  # apply ownership to everything under /apps
        mode: 0755

    - name: TASK | Install systemd unit file for Nzbget
      template:
        src: templates/nzbget.service.j2
        dest: /etc/systemd/system/nzbget.service
        mode: 0644

    - name: TASK | Copy Nzbget config
      template:
        src: templates/nzbget.conf.j2
        dest: /apps/nzbget/nzbget.conf
        mode: 0644

    - name: TASK | Enable and start Nzbget
      systemd:
        name: nzbget
        enabled: yes
        state: started

    - name: TASK | Install filebeat
      yum:
        name: https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.2-x86_64.rpm
        state: present
        update_cache: yes

    - name: TASK | Copy filebeat config file
      template:
        src: templates/filebeat.yml.j2
        dest: /etc/filebeat/filebeat.yml
        mode: 0644

    - name: TASK | Enable and start filebeat
      systemd:
        name: filebeat
        enabled: yes
        state: restarted

Next steps
The next part of this post will cover automatically provisioning a Digital Ocean volume and setting it up as an NFS share, so I can mount the volume on all of the NZBGet nodes for centralized storage. Since I pay by the minute for the VPS instances, I want to be able to provision them quickly, download as much as I can as fast as I can, and then destroy the infrastructure. I’ll keep the volume around, but I can destroy the NZBGet downloaders once they’ve finished their queues.
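As a preview, the volume piece might look something like this in Terraform (the resource names, 100GB size, and the idea of attaching it to a dedicated NFS head node are my assumptions, not from the post):

```hcl
# Sketch: block storage volume plus a droplet that will export it over NFS.
resource "digitalocean_volume" "usenet" {
  region                  = "nyc1"
  name                    = "usenet-storage"
  size                    = 100
  initial_filesystem_type = "ext4"
}

resource "digitalocean_droplet" "nfs1" {
  image      = "centos-7-x64"
  name       = "nfs1"
  region     = "nyc1"
  size       = "2gb"
  volume_ids = ["${digitalocean_volume.usenet.id}"]
}
```

Because the volume is a separate resource from the droplets, `terraform destroy` can be targeted at the download nodes while the storage (and the downloaded data) persists between runs.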

Technology used:

Terraform, Ansible, NZBGet, Elasticsearch, Sonarr