I recently had to get GPU transcoding in Plex to work. The setup involved running Plex inside a Docker container, inside an LXC container, running on top of Proxmox. I found some general guidelines online, but none that covered all aspects (especially the dual layer of containerization/virtualization). I ran into a few challenges getting this working properly, so I’ll attempt to give a complete guide here.
Usually I don’t bother installing appropriate (i.e. public/proper) HTTPS/SSL-certificates for management software and other “internal” tools. However, making parts of Cisco Prime Infrastructure available for “outsiders” can be quite useful, hence I saw the need to install a proper certificate.
I recently had to do this while installing Cisco Prime Infrastructure 3.0, so I thought I’d document it, since it’s not as straightforward as one would think. The last time I did the procedure was after installing Prime Infrastructure 2.0, almost two years ago.
There are basically three steps;
1) Fetch CA + properly convert the certificate
2) Install the CA certificates
3) Install the actual certificate
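The conversion in step 1 can be sketched with openssl. The demo below uses a throwaway self-signed certificate as a stand-in for the CA-issued one, and the filenames are hypothetical; the point is the DER-to-PEM conversion, since many CAs deliver a binary .cer while Prime expects PEM:

```shell
# Create a throwaway self-signed cert (stand-in for the CA-issued one):
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key \
  -out demo.pem -days 1 -subj "/CN=prime.example.com"
# Many CAs hand out DER-encoded .cer files; simulate that:
openssl x509 -in demo.pem -outform der -out demo.cer
# The actual conversion step: DER (.cer) back to PEM for import:
openssl x509 -inform der -in demo.cer -out demo-converted.pem
```

The same `openssl x509 -inform der` invocation works on the root and intermediate CA certificates in step 2 as well.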
Recently I came across an issue with Windows DHCP & DNS, specifically related to Cisco APs and DDNS. By default, Cisco APs have periods in their hostnames (APxxxx.yyyy.zzzz), and this apparently causes issues for Windows DHCP/DNS regarding DDNS. If you have a scope with option 15 (Domain Name) set to foo.bar, and you have clients that only return option 12 (hostname) and no FQDN (option 81), you’d expect Windows to append option 15 to the hostname. Cisco APs seem to only return option 12, so you’d expect Windows DHCP to use APxxxx.yyyy.zzzz.foo.bar as the FQDN for the DDNS update, but this is not the case. Instead, it tries to update the DNS with APxxxx.yyyy.zzzz as the FQDN (where yyyy.zzzz is considered a domain due to the periods), hence it will obviously fail, as you don’t have any zone yyyy.zzzz configured in your DNS.
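The observed behavior can be sketched like this (illustrative shell, not Windows’ actual logic; the AP name below is a made-up example):

```shell
# If the client-supplied hostname (option 12) contains a period,
# Windows appears to treat it as already fully qualified and skips
# appending the scope's option 15 domain.
derive_fqdn() {
  hostname="$1"; domain="$2"
  case "$hostname" in
    *.*) printf '%s\n' "$hostname" ;;              # period: used as-is, update fails
    *)   printf '%s.%s\n' "$hostname" "$domain" ;; # no period: option 15 appended
  esac
}
derive_fqdn "AP0011.22dd.33ee" "foo.bar"   # AP0011.22dd.33ee (no matching zone)
derive_fqdn "client1" "foo.bar"            # client1.foo.bar
```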
When trying to convert an 1142 AP from LAP to Autonomous AP, I made a mistake. I copied the new IOS (.tar) with the ‘copy’-command. However, I should’ve used the ‘archive tar /xtract’-command. When I reloaded, the AP presented me with this;
Ever needed to convert a Cisco LAP to Autonomous AP? I did, and this is how I did it;
If you want to find out the size that MySQL databases use, you can issue the following query to list all the databases, with their…
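A query along these lines does the job, using information_schema (the rounding to MB is my choice; adjust as needed):

```sql
SELECT table_schema AS db,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema;
```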
Since I recently configured and installed a MySQL-cluster, I thought I’d share the procedure. A lot of the examples around explain how to set it all up on the same machine for “testing purposes” — which, in theory, is the same as setting it up on different machines. I’ll be explaining the latter, that is, installing it onto different machines.
To achieve true redundancy in a MySQL-cluster, you need at least 3 separate, physical machines; two data-nodes, and one management-node. The latter you can use a virtual machine for, as long as it doesn’t run on the two data-nodes (which means you still need at least 3 physical machines). You can also use the management-node as a mysql-proxy for transparent failover/load-balancing for the clients. My setup was done using two physical machines (db0 and db1) running Ubuntu 8.04 (Hardy Heron), and one virtual machine (mysql-mgmt) running Debian 6 (Squeeze). The VM is not running on the two physical machines. db0 and db1 are the actual data-nodes/servers, and mysql-mgmt is going to be used as the management-node for the cluster. In addition, mysql-mgmt is also going to be configured with mysql-proxy, so that we have transparent failover/load-balancing for the clients.
Update 2011-10-26: I’ve changed the setup a bit, compared to my original walkthrough. I hit some memory-limits when using the NDB-engine. This caused MySQL to fail inserting new rows (stating that the table was full). There are some variables you can set (DataMemory and IndexMemory) to increase the memory available to the ndb-process (which was what caused the issues). Since I had a limited amount of memory available on the mysql-mgmt virtual machine (and lots on db0/1), I decided to run ndb_mgmd on db0 + db1. Apparently, you can do this, and it’s still redundant. The post has been changed to reflect this.
My setup was done using two physical machines (db0 and db1) running Ubuntu 8.04 (Hardy Heron), and one virtual machine (mysql-proxy) running Debian 6 (Squeeze). Previously, the virtual machine ran ndb_mgmd, but due to the above mentioned issues, both db0 and db1 run their own ndb_mgmd-processes. The virtual machine is now only used to run mysql-proxy (and hence its hostname has changed to reflect this).
Update 2012-01-30: morphium pointed out that /etc/my.cnf needed its own [mysql_cluster]-section, so that ndbd and ndb_mgmd connect to something other than localhost (which is the default if no explicit host is defined). The post has been updated to reflect this.
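For reference, the relevant part of /etc/my.cnf on db0 and db1 would then look roughly like this (a sketch using the hostnames from this setup, not a complete config):

```
[mysqld]
ndbcluster
ndb-connectstring=db0,db1

# Read by ndbd and ndb_mgmd; without it they default to localhost
[mysql_cluster]
ndb-connectstring=db0,db1
```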
Update 2013-01-03: Dave Weddell pointed out that newer versions of mysql-proxy had a different syntax for the proxy-backend-addresses parameter. Instead of having multiple proxy-backend-addresses-parameters (one for each backend), it wants all the backends in one parameter (comma separated). The post has been updated to reflect this.
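The syntax change Dave mentions, sketched with the hostnames from this setup (exact behavior may differ between mysql-proxy versions):

```
# Older versions: one parameter per backend
mysql-proxy --proxy-backend-addresses=db0:3306 --proxy-backend-addresses=db1:3306

# Newer versions: all backends in a single, comma-separated parameter
mysql-proxy --proxy-backend-addresses=db0:3306,db1:3306
```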
Update 2013-05-15: Richard pointed out that he had to alter two more tables to use the ‘ndbcluster’-engine in order for it to work. It was not needed when I originally set this up, but recent versions might have introduced more tables. I’ve updated the post to reflect this.
Need to make a complete dump of all databases in your MySQL-server? Then this command is quite handy; mysqldump -h -u -p --all-databases | gzip…
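Filled in with a placeholder host, user, and output file (all hypothetical; substitute your own), the full command would look something like this:

```
mysqldump -h db0 -u backupuser -p --all-databases | gzip > all-databases.sql.gz
```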
Ever had the need to use PVLANs in conjunction with one or more trunks, but your Cisco-switch doesn’t support it? I did. And I found a solution. It works well, but if you need to trunk many PVLANs, then this is not the solution you’re looking for; get a 4500/6500 to play with instead.
I’ll be using my scenario as an example in this article, but you could use it for whatever other reasons you might have. At school we have a Cisco-lab, with 5 racks containing various Cisco-equipment. For a while now, there have been situations where you’d really like a DHCP-server, TFTP-server, or similar, at hand. So, since we already had a VMware ESXi-server running in the lab, it was fairly easy to set up a dedicated lab-server. However, since this ESXi also had to be publicly available, and the lab-network shouldn’t be, we decided to use a trunk between the ESXi and our 3560G (sitting as a gateway between the lab, the servers, and the internet). Each VM is then assigned to its respective VLAN. All well so far.
Tired of having a Cisco-device that always ends up with a wrong clock? I was.
The first thing you’ll need is a proper NTP-server. You can either set one up locally (which syncs from a hardware-device, like a GPS, or from an external server), or you can choose one of the publicly available NTP-servers. I’ve chosen to use “220.127.116.11” in this example.
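On the Cisco-device itself, the basic configuration is then just a couple of lines (a minimal sketch; the timezone values are examples for CET, so adjust to your location):

```
ntp server 220.127.116.11
clock timezone CET 1
clock summer-time CEST recurring
```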