Had a situation at $WORK where network connections were just hanging there, open, with no activity. So I needed to send something, whatever, on the open connection, just to see how it behaves.
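One way to do that, assuming Linux and root, is to attach to the owning process with gdb and write to the socket's file descriptor directly. The PID 12345 and descriptor 7 below are placeholders; /proc/<pid>/fd shows which descriptor is the socket:

# ls -l /proc/12345/fd
# gdb -p 12345
(gdb) call (ssize_t)write(7, "ping\n", 5)
(gdb) detach

The cast is needed because, without debug info, gdb doesn't know write's return type.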
In order to take advantage of the shared clipboard, seamless integration and drag & drop features of VirtualBox, two things are needed. First, the guest additions need to be installed, which is pretty straightforward in most distributions. Second, VBoxClient needs to be started. This depends on the desktop environment and distribution: some start it from the get-go, others don't. If it's started, it should show up in the list of processes. If it isn't, there's a script that starts and enables all features, called VBoxClient-all. Just add that one to the startup list of the DE.
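To check whether it's already running, and to start everything by hand if it isn't (both binaries ship with the guest additions):

% pgrep -l VBoxClient
% VBoxClient-all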
My idea was to set up a script running on Google's servers that would automatically fetch files from my site and back them up to Google Drive. I would have gotten a free off-site backup. Unsurprisingly, it's not working due to limits imposed by Google, even though I did manage to cheat on the 10MB limit for UrlFetchApp and file creation. Instead it now fails either because its execution takes too long, or because it tries to write to Drive too many times in a short period.
Some basics first.
What is Google Drive? (Really?) It’s a site where people can store files. 15GB for free. Like Dropbox.
There are limits though, and they are set pretty low. Understandable, as it would be easy to abuse otherwise. For example, the maximum size that can be fetched from a URL is 10MB. The maximum size of a file that is created via scripting is also 10MB. Not really useful for backups.
Right. On to the actual script and why it (still) doesn’t work.
I've been using this shell script to back up my WordPress site and database for a while. There is nothing WordPress-specific in it though; the script can be used for anything that uses a database. It creates a MySQL dump of the database and an 'ls' of the site directory, compares them to the last ones, and creates a new archive of each if something has changed. What it doesn't do is move those files off-site for an actual backup.
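A stripped-down sketch of the approach; paths, the database name and file names here are made up, adjust to taste:

#!/bin/sh
# dump the database and list the site dir, re-archive only on change
SITE=/var/www/site
WORK=/var/backups/site
STAMP=`date +%Y%m%d`

mysqldump dbname > "$WORK/db.sql.new"
ls -lR "$SITE" > "$WORK/ls.new"

# new archive of the dump if the database changed
if ! cmp -s "$WORK/db.sql.new" "$WORK/db.sql"; then
    gzip -c "$WORK/db.sql.new" > "$WORK/db-$STAMP.sql.gz"
fi
mv "$WORK/db.sql.new" "$WORK/db.sql"

# new archive of the site directory if the listing changed
if ! cmp -s "$WORK/ls.new" "$WORK/ls.txt"; then
    tar czf "$WORK/site-$STAMP.tar.gz" -C "$SITE" .
fi
mv "$WORK/ls.new" "$WORK/ls.txt"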
Works for small sites, like this one. Dumping the whole database just to see if anything changed might become an increasingly worse idea as the site grows. Use common sense; run it on a slave for bigger setups or something.
Task: write a function called GCD that prints the greatest common divisor of two positive integers (a and b). Hint: you can use the fact that GCD(a, b) = GCD(b, a mod b) and GCD(a, 0) = a
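A minimal sketch in shell, straight from the hint's recursion:

GCD() {
    if [ "$2" -eq 0 ]; then
        echo "$1"
    else
        GCD "$2" $(($1 % $2))
    fi
}

GCD 1071 462    # prints 21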
When trying to upgrade an official module, puppet complains that it can't find it on the Forge. It might actually be that it doesn't recognize the SSL certificate as being valid. Trying to install a module from forge.puppetlabs.com returns the proper error, complaining that the certificate is invalid:
# puppet module install puppetlabs/ntp
Notice: Preparing to install into /usr/local/etc/puppet/modules ...
Notice: Downloading from https://forge.puppetlabs.com ...
Error: Could not connect via HTTPS to https://forge.puppetlabs.com
Unable to verify the SSL certificate
The certificate may not be signed by a valid CA
The CA bundle included with OpenSSL may not be valid or up to date
Apparently https is a jerk, so, other than the obvious (fixing the CA bundle), a solution is to use http instead of https for the repository:
# puppet module install puppetlabs/ntp --module_repository=http://forge.puppetlabs.com
On FreeBSD it might be that /etc/ssl/cert.pem isn't symlinked to /usr/local/share/certs/ca-root-nss.crt, which is where the ca_root_nss package installs the CA bundle.
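If that's the case, creating the symlink should be enough:

# ln -s /usr/local/share/certs/ca-root-nss.crt /etc/ssl/cert.pem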
… but everything is set up properly. Check if there is anything under /proc/fs/nfsd/. If there isn’t, run
# mount -t nfsd nfsd /proc/fs/nfsd
Apparently it happens when NFS support has been compiled into the kernel, but the userland tools have been built on a kernel where NFS is loaded as a module.
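To make it stick across reboots, an fstab entry along these lines should do, assuming the distribution doesn't already mount it for you:

nfsd    /proc/fs/nfsd    nfsd    defaults    0 0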
Another possible issue is the server not being able to resolve the client's hostname. This might depend on the NFS version. The client sends its hostname with the request; if the server can't resolve it, because it doesn't have access to a DNS server for example, it might not allow the client to mount. The first obvious solution is to make sure DNS resolution works properly. The second obvious solution is to add the host to the server's /etc/hosts file. The third solution is less obvious: make sure there is no DNS server to interrogate. If there is nothing in resolv.conf to time out, the server will allow the connection, even if it can't resolve the hostname.
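For the second solution, on the server (address and hostname made up):

# echo '192.0.2.15 client.example.org client' >> /etc/hosts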
A shell script that keeps running rsync in a loop until a file is created to stop it. Obviously not the right way to do such things, but good enough for a quickie.
#!/bin/sh
# keep rsync running until /tmp/stoprsync shows up
while true; do
    if [ -f '/tmp/stoprsync' ]; then
        # note when the stop file was picked up, then quit
        echo `date` >> /tmp/stoprsync
        printf 'found /tmp/stoprsync, exiting now\n'
        exit 0
    fi
    rsync -avP --stats remote_host::rsync_module /local/dir >> rsyncloop.log
    printf "exit code: $?\n\n" >> rsyncloop.log
    TRANSFERED=`tail -n 15 rsyncloop.log | grep "files transferred" | sed 's/^.*: //'`
    printf "TRANSFERED: $TRANSFERED.\n"
    if [ "$TRANSFERED" = 0 ]; then
        # nothing was transferred the last run, sleep for 5 minutes
        # no stress
        sleep 300
    fi
done
Create /tmp/stoprsync to stop it.
rsync --stats is needed in order to check if any files were transferred on the previous run.
% VBoxManage list extpacks
Extension Packs: 1
Pack no. 0: VNC
Description: VNC plugin module
VRDE Module: VBoxVNC
% VBoxManage setproperty vrdeextpack VNC
% VBoxManage modifyvm test --vrdeproperty VNCPassword=somepass
% VBoxManage modifyvm test --vrdeauthlibrary null
% VBoxManage modifyvm test --vrdeport 1501
% VBoxHeadless -s test &
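The VM should then be reachable with any VNC client on the port set above; with TigerVNC's vncviewer, for example, the double colon selects a raw port number:

% vncviewer host::1501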
How to redirect something like "www.example.org" to "www.example.com", or "longer.subdomain.example.org" to "longer.subdomain.example.com", with a rewrite rule in nginx.conf:
rewrite ^(.*) $scheme://$name.example.com$1 permanent;
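For $name to exist, the rewrite has to sit in a server block whose server_name captures the subdomain part; a minimal sketch:

server {
    server_name ~^(?<name>.+)\.example\.org$;
    rewrite ^(.*) $scheme://$name.example.com$1 permanent;
}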