Missing Backups - two instances of a backup job cannot run simultaneously

Hello,

As planned, we have scheduled a full backup daily at 9 PM and transaction log backups every 15 minutes. The full backup job takes around 70-72 minutes to complete because of the compression and upload steps.

However, we are seeing that none of the transaction log backups run until 10:15 PM. It appears that only one backup job instance runs at a time. This is inconvenient and can lead to significant data loss. Could you please clarify whether this is expected/known system behavior, or whether there is an error at our end that we can rectify?
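To illustrate the gap being described, here is a minimal sketch (dates and the 72-minute duration are assumed for the example): while the full backup holds the single job slot, every 15-minute log-backup slot inside that window is skipped, so the first log backup after a 9 PM start lands at 10:15 PM.

```python
from datetime import datetime, timedelta

# Assumed example values: full backup starts 9:00 PM, runs 72 minutes.
full_start = datetime(2020, 1, 1, 21, 0)
full_end = full_start + timedelta(minutes=72)

# Transaction log backup slots every 15 minutes after the full backup starts.
log_interval = timedelta(minutes=15)
slots = [full_start + log_interval * i for i in range(1, 7)]  # 9:15 PM .. 10:30 PM

# Slots that fall while the full backup still occupies the single job slot.
skipped = [t for t in slots if t < full_end]
first_after = next(t for t in slots if t >= full_end)

print("Skipped log-backup slots:", [t.strftime("%H:%M") for t in skipped])
print("First log backup after full:", first_after.strftime("%H:%M"))
# Worst-case exposure: a change made just after 9:00 PM is not captured
# by any log backup until the 10:15 PM slot.
print("Exposure window (minutes):",
      int((first_after - full_start).total_seconds() // 60))
```

Under these assumptions the 9:15, 9:30, 9:45, and 10:00 PM slots are all skipped, which matches the observed resumption at 10:15 PM and a worst-case exposure of about 75 minutes.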

If it is a known issue, we request that you rectify it at the earliest.

Hello PRESS_FIT_PIPE_AND_P,

Thank you for the details. Please give us some time to check the issue.

Sorry for the inconvenience.

Hello PRESS_FIT_PIPE_AND_P,

We have checked this issue, and this is how SQLBackupAndFTP is designed to work.

Sorry for the delay and for the inconvenience.

Hello,

Thank you for confirming. I request that you resolve this issue, i.e. the inability to run two instances of the same job simultaneously, in the application. As described in my example, this can lead to significant data loss.

On our previous server, a full backup took over 3 hours because we had to lower the compression priority so that users were not impacted by the compression processing. During that window, no backups were taken for over 3 hours. I hope you understand the problems users will face if they do not have high-performance servers.

Regards,
Aakash

Hi PRESS_FIT_PIPE_AND_P,

Thank you for the details. In future releases of SQLBackupAndFTP, we plan to add native SQL Server compression, which should reduce the full backup time. Will that work for you?

Hello,

Reducing the backup time would help limit the potential data loss, but we are currently using 7-Zip Extreme compression. I don't know whether native SQL Server compression is related to that.

Also, it would not solve the problem entirely: as the database grows, compression will still take a significant amount of time. On servers with ordinary processing power this could take very long, which would mean potentially high data loss.

The best solution would be the ability to run multiple job instances simultaneously.

Hello PRESS_FIT_PIPE_AND_P,

Thank you for the detailed explanation. Sorry, but we currently have no plans to implement this.

Sorry for the inconvenience.