This assumes you already have an authoritative and a resolving nameserver. The resolving nameserver (my-resolver in the following) runs Unbound. If your resolver has jeanbruenn.info in its cache, resolving is pretty fast:
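The original output is not reproduced here; as a rough sketch, dig's reported query time makes the cache effect visible (my-resolver stands for your resolver's address, as above):

```
$ dig @my-resolver jeanbruenn.info +dnssec
# first query: the resolver walks the delegation tree, so the
# reported ";; Query time:" is comparatively high
# repeated query: answered from cache, query time close to 0 ms
# the 'ad' flag in the header indicates the answer was DNSSEC-validated
```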
In fact I am an ISC fanboy and have been using BIND for as long as I can remember; I never took a look at other nameservers until a few weeks ago, when I set up Unbound as a resolver to see how it performs and how easy it is to configure. This post, however, is just about how to set that up and make sure it does DNSSEC.
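As a minimal sketch of what such a validating Unbound resolver could look like (paths and the client network are assumptions; your distribution may place the trust anchor elsewhere):

```
# /etc/unbound/unbound.conf (fragment)
server:
    interface: 0.0.0.0
    # allow queries from the local network (placeholder prefix)
    access-control: 192.0.2.0/24 allow
    # DNSSEC validation: RFC 5011 managed root trust anchor,
    # usually bootstrapped with unbound-anchor
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    module-config: "validator iterator"
```

With this in place, answers that fail DNSSEC validation are returned as SERVFAIL instead of being handed to clients.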
I’ve just noticed that my domain registrar published, alongside a new interface, a form for uploading DNSSEC data to the parent zone. This means I am finally able to set up DNSSEC as well; I had been waiting for that for two years.
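What such registrar forms usually want is a DS record for the KSK. With BIND's tools that can be derived from the public key file; the key file name below is hypothetical, yours will differ:

```
$ dnssec-dsfromkey Kjeanbruenn.info.+008+12345.key
jeanbruenn.info. IN DS 12345 8 2 <digest>
```

The tool prints the DS record(s) whose fields (key tag, algorithm, digest type, digest) map directly onto the usual registrar form.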
Part of my job consists of system administration tasks at Accelerated IT Services, the company I work for. In case of an emergency I need a secure connection from home to the office. Our usual network equipment is from Juniper (awesome CLI, I really love that stuff!), but for testing/evaluation and for our offices our network department bought a Ubiquiti EdgeRouter Pro (I haven’t had time to take a closer look yet) and configured IPsec/L2TP for me. This post is about setting up a client connection for that.
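The client software is not named here; on Linux a common combination for IPsec/L2TP is strongSwan plus xl2tpd. As a hedged sketch, the IPsec side (transport mode protecting the L2TP traffic) could look like this, with the gateway address a placeholder:

```
# /etc/ipsec.conf (strongSwan, fragment) -- transport-mode IPsec for L2TP
conn l2tp-office
    keyexchange=ikev1
    authby=secret
    type=transport
    left=%defaultroute
    leftprotoport=17/1701
    right=vpn.example.com        # office gateway, placeholder
    rightprotoport=17/1701
    auto=add
```

The L2TP tunnel and PPP authentication are then handled separately by xl2tpd on top of this transport-mode association.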
Getting the above error when trying to play around with zdb in ZFS on Linux? Just take a look at the FAQ and set the cachefile. My pool is called storage, so it is as simple as issuing zpool set cachefile=/etc/zfs/zpool.cache storage, and everything works like a charm.
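The reason is that zdb reads the pool configuration from the cache file rather than probing devices, so without one it cannot find the pool:

```
# write the cache file for the pool (pool name from this post)
$ zpool set cachefile=/etc/zfs/zpool.cache storage
# zdb now finds the pool configuration in the cache file
$ zdb storage
```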
Just some playing around with zdb to see whether there are differences between a filesystem or volume and a snapshot.
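One way to compare them is to dump object metadata for both and diff the output; the dataset and snapshot names below are hypothetical:

```
# dump object metadata for the live filesystem and for a snapshot
$ zdb -dd storage/data
$ zdb -dd storage/data@backup
```

Objects that changed since the snapshot show up with differing block pointers and sizes between the two dumps.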
root@christine:~# zpool status
  pool: storage
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
	repaired.
  scan: scrub repaired 0 in 2h1m with 0 errors on Mon Mar 13 22:41:57 2017
config:

	NAME                 STATE     READ WRITE CKSUM
	storage              DEGRADED     0     0     0
	  raidz1-0           ONLINE       0     0     0
	    WD-WCC4N2AJ9T7E  ONLINE       0     0     0
	    SG-W6A12G2H      ONLINE       0     0     0
	    WD-WCC4N6VCK2TD  ONLINE       0     0     0
	    SG-Z5020FXJ      ONLINE       0     0     0
	  raidz1-1           ONLINE       0     0     0
	    WD-WCC4N6SXZ3PF  ONLINE       0     0     0
	    SG-W6A12F14      ONLINE       0     0     0
	    WD-WCC4N4NNTF1P  ONLINE       0     0     0
	    SG-W6A12FMB      ONLINE       0     0     0
	  raidz1-2           DEGRADED     0     0     0
	    SG-W6A12G0B      ONLINE       0     0     0
	    WD-WCC4N6KV534N  ONLINE       0     0     0
	    SG-W6A12FXS      ONLINE       0     0     0
	    SG-Z5020G18      FAULTED      0     6     0  too many errors
	  raidz1-3           ONLINE       0     0     0
	    WD-WCAWZ2194067  ONLINE       0     0     0
	    SG-Z501ZYA5      ONLINE       0     0     0
	    WD-WCAWZ2194120  ONLINE       0     0     0
	    SG-Z5020G17      ONLINE       0     0     0
	logs
	  mirror-4           ONLINE       0     0     0
	    zil1             ONLINE       0     0     0
	    zil2             ONLINE       0     0     0
	cache
	  cache1             ONLINE       0     0     0
	  cache2             ONLINE       0     0     0

errors: No known data errors
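Following the action line above, getting the pool healthy again would look roughly like this; the replacement device path is a placeholder:

```
# replace the faulted disk with a new device
$ zpool replace storage SG-Z5020G18 /dev/disk/by-id/new-disk
# or, if the device itself is fine and the errors were transient:
$ zpool clear storage SG-Z5020G18
```

After the resilver completes, zpool status should report the pool as ONLINE again.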
root@christine:~# zpool status
  pool: storage
 state: ONLINE
  scan: scrub repaired 0 in 4h36m with 0 errors on Sun Apr 9 05:00:44 2017
config:
Assume that you have two nodes, one with two IPv4 networks and one with a single IPv4 network, each additionally with one IPv6 network, and that we would like a separate connection for the IPv6 traffic.
While trying to debug a really weird issue, I noticed that you really, really should use bridge_hw or a pre-up/up/post-up rule to assign a permanent MAC address to your bridge.
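Otherwise the bridge inherits the lowest MAC among its member ports and can change address when ports come and go. A sketch in Debian-style /etc/network/interfaces terms, with the IP and MAC as placeholder values:

```
# /etc/network/interfaces (fragment) -- pin the bridge MAC
auto br0
iface br0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    bridge_ports eth0
    bridge_hw 02:00:00:aa:bb:cc
    # alternative via a post-up rule instead of bridge_hw:
    # post-up ip link set dev br0 address 02:00:00:aa:bb:cc
```

Note that on older bridge-utils versions bridge_hw takes an interface name (whose MAC is then used) rather than a literal MAC address.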
I had a hard time finding out why my OpenVZ containers would not respond to ping packets coming in through IPsec. Some guides suggested using disable_policy; I tried that without success. A few days later I realized by accident that you need to set it inside the VM.
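That is, the sysctl has to be applied in the container, not on the hardware node; the interface name is an assumption and may be venet0 or eth0 depending on the container setup:

```
# run inside the container, not on the OpenVZ host
sysctl -w net.ipv4.conf.venet0.disable_policy=1
```

This disables the kernel's IPsec policy checks on that interface, so packets that arrived through the tunnel are no longer dropped by policy lookup.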