One way to back up your data today is to run the export MapReduce described here:
https://cloud.google.com/bigtable/docs/exporting-importing#export-bigtable
You are correct that, as of today, a Bigtable cluster's availability is tied to the availability of the zone it runs in. If stronger availability is a concern, you can look at methods for replicating your writes (such as Kafka), but be aware that this adds complexity to the system you are building, such as managing consistency between clusters. (What happens if there is a bug in your software and some writes are never replicated to one of the clusters?)
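To make the consistency concern concrete, here is a minimal sketch of the dual-write pattern, using hypothetical in-memory "clusters" in place of real Bigtable clients. The names (`Cluster`, `replicate_write`, `find_divergence`) are illustrative, not part of any Google Cloud API.

```python
class Cluster:
    """Stand-in for a Bigtable cluster: just a dict of row key -> value."""
    def __init__(self, name):
        self.name = name
        self.rows = {}

    def write(self, key, value):
        self.rows[key] = value

def replicate_write(clusters, key, value):
    """Fan a single write out to every cluster.

    In a real system this fan-out would go through a durable log
    (e.g. a Kafka topic) so a crashed consumer can replay missed writes.
    """
    for cluster in clusters:
        cluster.write(key, value)

def find_divergence(primary, replica):
    """Return row keys whose values differ between two clusters --
    the consistency check you now have to own yourself."""
    keys = set(primary.rows) | set(replica.rows)
    return sorted(k for k in keys
                  if primary.rows.get(k) != replica.rows.get(k))

us = Cluster("us-central1-b")
eu = Cluster("europe-west1-c")

replicate_write([us, eu], "user#1", "alice")
replicate_write([us, eu], "user#2", "bob")

# Simulate the bug described above: a write that only reaches one cluster.
us.write("user#3", "carol")

print(find_divergence(us, eu))  # ['user#3']
```

The point of the sketch is the last two lines: once you replicate writes yourself, detecting and repairing divergence between clusters becomes your responsibility rather than the database's.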
Using a different system such as Cloud Datastore avoids this problem, since it is not a single-zone system, but it comes with its own tradeoffs to consider.