This morning I stared at a 30 GB SQL Server backup and a VPN that managed about 1 MB/s and dropped every 45 to 60 minutes. I thought, “If I split this into thirty 1 GB pieces and copy them in parallel, maybe I can outrun the disconnects.” Later that morning I ran a quick PowerShell script to slice the .bak into 30 parts, which took just a couple of minutes.
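The slicing script was nothing exotic; a minimal sketch of the approach, assuming the backup sits at D:\backups\large.bak and 1 GB chunks, looks like this:
$source    = 'D:\backups\large.bak'   # assumed path to the backup
$chunkSize = 1GB
$buffer    = New-Object byte[] $chunkSize
$reader    = [System.IO.File]::OpenRead($source)
$index     = 1
try {
    while (($read = $reader.Read($buffer, 0, $buffer.Length)) -gt 0) {
        # Name the parts large.bak.part001, large.bak.part002, ...
        $writer = [System.IO.File]::OpenWrite('{0}.part{1:D3}' -f $source, $index)
        $writer.Write($buffer, 0, $read)
        $writer.Close()
        $index++
    }
}
finally {
    $reader.Close()
}
(FileStream.Read can return short reads, so part sizes may vary slightly; only the order matters for reassembly later.)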
I tested transfer times next. At around 1 MB/s each slice copied in roughly 15 to 17 minutes. In a typical VPN window I could finish two slices and start another before the link dropped. When the VPN hiccuped I lost at most one gigabyte instead of dozens, but the overall transfer time stayed the same: moving 30 GB at 1 MB/s still takes about eight hours, plus those few extra minutes for splitting and retries. I had smaller failure domains, but no net time savings.
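The back-of-the-envelope math makes that plain (treating 30 GB as 30 × 1024 MB):
$seconds = (30 * 1024) / 1   # 30,720 MB at a steady 1 MB/s
$hours   = $seconds / 3600   # roughly 8.5 hours, however the file is sliced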
By midday I was frustrated and decided to tune Robocopy’s parallelism. I wrote a simple harness to copy the first handful of slices with various /MT thread counts and measured each run:
$sourceDir = '\\server\share'
$destDir   = 'D:\migrations'
# Robocopy only understands * and ? wildcards, so name the first five parts explicitly.
$testFiles = 1..5 | ForEach-Object { 'large.bak.part{0:D3}' -f $_ }
$mtOptions = 2, 4, 8, 16, 32
$results   = @{}
foreach ($mt in $mtOptions) {
    # Clear the previous run's copies, or Robocopy will skip unchanged files and skew the timing.
    $testFiles | ForEach-Object { Remove-Item (Join-Path $destDir $_) -ErrorAction SilentlyContinue }
    $start = Get-Date
    robocopy $sourceDir $destDir $testFiles /Z /MT:$mt /R:1 /W:1 | Out-Null
    $results[$mt] = (Get-Date) - $start
}
$bestMt = ($results.GetEnumerator() |
    Sort-Object { $_.Value.TotalSeconds } |
    Select-Object -First 1).Key
Write-Host "Best MT value is $bestMt"
That test took on the order of half an hour, and /MT:8 came out on top. With that insight, I launched the full copy:
robocopy \\server\share D:\migrations large.bak.part* /Z /MT:8 /R:3 /W:5
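(/Z makes each file copy restartable after a drop, while /R:3 /W:5 retries a failed file three times with a five-second wait between attempts.)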
Next, rather than leave a single session to work through one long list of files, I ran three Robocopy sessions in parallel, each handling a batch of ten slices:
$parts  = Get-ChildItem '\\server\share\large.bak.part*' | Sort-Object Name
$groups = @(
    $parts[0..9],
    $parts[10..19],
    $parts[20..29]
)
foreach ($batch in $groups) {
    # Pass each file name as its own argument rather than one space-joined string.
    $argList = @('\\server\share', 'D:\migrations') + $batch.Name + @('/Z', '/MT:8', '/R:3', '/W:5')
    Start-Process robocopy -ArgumentList $argList -NoNewWindow
}
I let those jobs run unattended into the afternoon and evening, and by day’s end all 30 parts had arrived successfully.
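A quick way to confirm that is to compare part counts and total bytes on both sides; a rough sketch (no hashing, just counts and sizes):
$srcParts  = Get-ChildItem '\\server\share\large.bak.part*'
$destParts = Get-ChildItem 'D:\migrations\large.bak.part*'
$srcBytes  = ($srcParts  | Measure-Object Length -Sum).Sum
$destBytes = ($destParts | Measure-Object Length -Sum).Sum
if ($destParts.Count -eq $srcParts.Count -and $destBytes -eq $srcBytes) {
    Write-Host "All $($destParts.Count) parts present; sizes match."
} else {
    Write-Warning 'Part count or total size mismatch; re-run the affected batch.'
}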
Finally, to restore the original .bak file once all parts have transferred, you can unchunk them with a simple PowerShell loop in your migration folder:
$parts = Get-ChildItem 'D:\migrations\large.bak.part*' | Sort-Object Name
$out   = 'D:\migrations\restored-large.bak'
if (Test-Path $out) { Remove-Item $out }
# Stream each slice onto the end of the output file; this avoids holding whole
# gigabytes in memory and works in Windows PowerShell and PowerShell 7 alike.
$dest = [System.IO.File]::OpenWrite($out)
foreach ($part in $parts) {
    $src = [System.IO.File]::OpenRead($part.FullName)
    $src.CopyTo($dest)
    $src.Close()
}
$dest.Close()
This reads each slice in order and appends it to restored-large.bak. After that, you can point SQL Server at restored-large.bak when running your RESTORE DATABASE command.
Or, from a classic CMD prompt run:
copy /b large.bak.part* restored-large.bak
Either method stitches the chunks back into the original backup file, ready for use.
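From there, the restore itself might look something like the following; the instance and database names are placeholders, and a restore onto a different server usually needs WITH MOVE clauses for the data and log files:
sqlcmd -S localhost -Q "RESTORE DATABASE [MigratedDb] FROM DISK = N'D:\migrations\restored-large.bak' WITH RECOVERY"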
Today I learned that chopping a large file into pieces rarely speeds up the total transfer. Instead, tuning Robocopy’s /MT setting and running multiple instances gives you a straightforward, resilient workflow. When your VPN inevitably drops, you’ll be grateful you let a tested tool handle resume logic for you.