I am doing a test backup of an Azure SQL Hyperscale database of about 2 TB, but the operation is far too slow for my liking. I am copying it to a NAS location. In 5 hours I was able to copy only 18 GB. Is there a way to improve this throughput?
Unfortunately, extracting such a volume of data at the logical level takes a very long time, and SQLBackupAndFTP is therefore not suitable for backing up databases of this size in Azure.
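For context, a quick back-of-the-envelope calculation from the figures you gave (18 GB in 5 hours, ~2 TB total) shows why this approach will not finish in any reasonable time:

```python
# Rough estimate of total backup time at the observed throughput.
# Figures taken from the question: 18 GB copied in 5 hours, database ~2 TB.
copied_gb = 18
elapsed_hours = 5
db_size_gb = 2 * 1024  # ~2 TB expressed in GB

# Sustained throughput in MB/s (1 GB = 1024 MB, 1 hour = 3600 s).
throughput_mb_s = copied_gb * 1024 / (elapsed_hours * 3600)

# Time to copy the whole database at that rate, in days.
total_days = db_size_gb / copied_gb * elapsed_hours / 24

print(f"Throughput: {throughput_mb_s:.2f} MB/s")        # ~1.02 MB/s
print(f"Estimated full-copy time: {total_days:.1f} days")  # ~23.7 days
```

At roughly 1 MB/s, the full 2 TB export would take over three weeks, which makes the logical-export route impractical regardless of tuning.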
Please let us know if you have any other questions.
Thank you, and sorry for the inconvenience.