I'm using Google Compute Engine and have an autoscaling instance group that spins up new VMs as needed, all sitting behind a load balancer. I'm also using Google Cloud SQL in the same project. The VMs need to connect to the Cloud SQL instance.
Since the VMs' IPs are dynamic, I can't just plug them into the Cloud SQL authorized networks config, so I followed the Cloud SQL Proxy setup, along with the notes from this very similar question: How to connect from a pool of Google Compute Engine instances to Cloud SQL DB in the same project?
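(For reference, the PROJ_NAME:REGION:SQL_NAME string used below is the instance connection name; if anyone wants to sanity-check theirs, it can be pulled with:

gcloud sql instances describe SQL_NAME --format='value(connectionName)'

with the real instance name in place of SQL_NAME.)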
I can now log into a single test VM and run:
./cloud_sql_proxy -instances=PROJ_NAME:REGION:SQL_NAME=tcp:3306
and everything works great: that VM connects to the Cloud SQL instance.
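(To be concrete about "works": with the proxy running I can reach the database through the local port it opens, e.g.

mysql -u DB_USER -p -h 127.0.0.1 -P 3306

where DB_USER is a placeholder for my actual database user, and the connection goes through to Cloud SQL.)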
The next step is where I'm having issues. How can I set up the VM so that it automatically starts the proxy when it's either built from an instance template or simply restarted? The obvious answer seems to be to put the above command in the VM's startup script, but that doesn't seem to be working. With my single test VM, I can SSH in and run the cloud_sql_proxy command manually and everything works. If I instead include the below in my startup script and restart the VM, it doesn't connect:
#! /bin/bash
./cloud_sql_proxy -instances=PROJ_NAME:REGION:SQL_NAME=tcp:3306
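One thing I'm unsure about is whether ./cloud_sql_proxy even resolves in the startup-script environment, since I don't know what working directory or user the script runs as. For concreteness, this is the shape of what I'm aiming for, with an absolute path and the proxy backgrounded so the script doesn't block (/path/to is a placeholder for wherever the binary actually lives on the instance):

#! /bin/bash
# start the proxy from an absolute path and keep it running in the background
/path/to/cloud_sql_proxy -instances=PROJ_NAME:REGION:SQL_NAME=tcp:3306 &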
Any suggestions? I seriously can't believe it's this hard to connect to Cloud SQL from a VM in the same project...