I once had to move a 30GB SQL Server backup across a VPN that managed about 1MB/s and dropped every 45-60 minutes. My first thought was, “If I split this into thirty 1GB pieces and copy them in parallel, maybe I can outrun the disconnects.” Later that morning I ran a quick PowerShell script to slice the .bak into 30 parts, which took just a couple of minutes.
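The splitter was nothing fancy. A minimal sketch of that kind of script, assuming the backup sits at D:\backups\large.bak (the path and buffer size here are illustrative): it reads the source in buffered chunks and writes each 1GB slice to its own .partNNN file.
# Sketch only: split large.bak into 1GB slices named large.bak.part001, .part002, ...
$source    = 'D:\backups\large.bak'    # assumed location of the backup
$chunkSize = 1GB
$buffer    = New-Object byte[] (64MB)  # read in 64MB pieces to keep memory modest
$in = [System.IO.File]::OpenRead($source)
$i  = 0
while ($in.Position -lt $in.Length) {
    $i++
    $out = [System.IO.File]::Create(('{0}.part{1:d3}' -f $source, $i))
    $written = 0
    while ($written -lt $chunkSize) {
        $toRead = [Math]::Min($buffer.Length, $chunkSize - $written)
        $read   = $in.Read($buffer, 0, $toRead)
        if ($read -eq 0) { break }     # ran out of source data
        $out.Write($buffer, 0, $read)
        $written += $read
    }
    $out.Close()
}
$in.Close()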
I tested transfer speeds next. At about 1MB/s per stream, each 1GB slice took roughly 15-17 minutes, so in a typical VPN window I could finish two slices and start a third before the link dropped. When the VPN hiccuped I now lost at most one gigabyte instead of dozens, but the overall transfer time stayed about the same: moving 30GB at 1MB/s still takes roughly eight and a half hours, plus some extra minutes for splitting and retries. Smaller failure domains, but no net time savings.
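The arithmetic is quick to sanity-check in the same console:
# Back-of-the-envelope: 30GB at a steady 1MB/s.
$seconds = 30GB / 1MB                 # 30,720 seconds
'{0:N1} hours' -f ($seconds / 3600)   # ≈ 8.5 hours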
By midday I was frustrated and decided to tune Robocopy’s parallelism instead. I wrote a simple harness that copies the first five slices at various /MT thread counts and times each run:
# Time a test copy of the first five slices at each /MT thread count.
$sourceDir = '\\server\share'
$destDir   = 'D:\migrations'
# Robocopy file filters only understand * and ?, so list the test files explicitly.
$testFiles = 1..5 | ForEach-Object { 'large.bak.part{0:d3}' -f $_ }
$mtOptions = 2, 4, 8, 16, 32
$results   = @{}
foreach ($mt in $mtOptions) {
    # Clear the previous run's copies so robocopy doesn't just skip unchanged files.
    Remove-Item (Join-Path $destDir 'large.bak.part*') -ErrorAction SilentlyContinue
    $start = Get-Date
    robocopy $sourceDir $destDir $testFiles /Z /MT:$mt /R:1 /W:1 | Out-Null
    $results[$mt] = (Get-Date) - $start
}
$bestMt = ($results.GetEnumerator() |
    Sort-Object { $_.Value.TotalSeconds } |
    Select-Object -First 1).Key
Write-Host "Best MT value is $bestMt"
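It is also worth dumping the whole table rather than just the winner, since the gap between the higher thread counts can be small:
# Show every timing, fastest first.
$results.GetEnumerator() |
    Sort-Object { $_.Value.TotalSeconds } |
    Format-Table @{ n = 'MT'; e = { $_.Key } },
                 @{ n = 'Seconds'; e = { [int]$_.Value.TotalSeconds } }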
That test took about half an hour, and /MT:8 came out on top. With that settled, I set up the full copy. A single session covering all 30 slices would look like this:
robocopy \\server\share D:\migrations large.bak.part* /Z /MT:8 /R:3 /W:5
In practice, though, rather than feed every slice to one session, I ran three Robocopy sessions in parallel, each handling a batch of ten:
# Split the 30 slices into three batches of ten and launch one robocopy per batch.
$parts  = Get-ChildItem '\\server\share\large.bak.part*' | Sort-Object Name
$groups = @(
    $parts[0..9],
    $parts[10..19],
    $parts[20..29]
)
foreach ($batch in $groups) {
    # Pass each file name as its own argument rather than one space-joined string.
    $argList = @('\\server\share', 'D:\migrations') + $batch.Name +
               @('/Z', '/MT:8', '/R:3', '/W:5')
    Start-Process robocopy -ArgumentList $argList -NoNewWindow
}
I let those jobs run through the afternoon and evening. By day’s end all 30 parts had arrived successfully.
Finally, once all the parts have arrived, you can stitch them back into the original .bak with a simple PowerShell loop in the migration folder:
$parts = Get-ChildItem 'D:\migrations\large.bak.part*' | Sort-Object Name
$out   = 'D:\migrations\restored-large.bak'
if (Test-Path $out) { Remove-Item $out }
# Use raw streams: Get-Content -Encoding Byte is painfully slow for 1GB slices
# and no longer exists in PowerShell 7 (-AsByteStream replaced it).
$outStream = [System.IO.File]::OpenWrite($out)
foreach ($part in $parts) {
    $inStream = [System.IO.File]::OpenRead($part.FullName)
    $inStream.CopyTo($outStream)
    $inStream.Close()
}
$outStream.Close()
This streams each slice, in name order, onto the end of restored-large.bak. After that, you can point SQL Server at restored-large.bak in your RESTORE DATABASE command.
Or, from a classic CMD prompt run:
copy /b large.bak.part* restored-large.bak
Either method stitches the chunks back into the original backup file, ready for use.
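From there, the restore itself is one query. A sketch using the SqlServer module’s Invoke-Sqlcmd, where the instance and database names are placeholders and I’m assuming the data and log file paths on the target match the original (otherwise add MOVE clauses; for a restore this size you may also want to raise -QueryTimeout):
# Hypothetical restore: adjust instance, database name, and paths for your server.
Invoke-Sqlcmd -ServerInstance 'TARGETSQL01' -Query @"
RESTORE DATABASE [MyDatabase]
FROM DISK = N'D:\migrations\restored-large.bak'
WITH RECOVERY, STATS = 10;
"@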
Today I learned that chopping a large file into pieces rarely speeds up the total transfer. Instead, tuning Robocopy’s /MT setting and running multiple instances gives you a straightforward, resilient workflow. When your VPN inevitably drops, you’ll appreciate that you let a tested tool handle resume logic for you.