I inherit many SQL Server databases. Different teams, different eras, different ideas of what “good enough” looks like. What always surprises me is how inconsistent the autogrowth settings are. Filegrowths in megabytes, percentages, tiny increments… it’s all over the place.
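Before touching anything, I like to see just how scattered the settings are. A query along these lines does the job; it's only a sketch against sys.master_files (growth is stored in 8 KB pages unless is_percent_growth is set, and 0 means autogrowth is disabled):

-- Survey current autogrowth settings across the whole instance
SELECT
     DB_NAME(mf.database_id) AS DatabaseName
    ,mf.name AS FileName
    ,mf.type_desc AS FileType
    ,CASE
        WHEN mf.growth = 0 THEN 'disabled'                                   -- fixed-size file
        WHEN mf.is_percent_growth = 1 THEN CAST(mf.growth AS varchar(10)) + ' %'
        ELSE CAST(mf.growth / 128 AS varchar(10)) + ' MB'                    -- 8 KB pages to MB
     END AS CurrentGrowth
FROM sys.master_files AS mf
ORDER BY DatabaseName, mf.file_id;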
In my earlier post on safely shrinking SQL Server transaction logs, I mentioned why autogrowth matters: uncontrolled growth leads to excessive VLF counts, which hurts performance.
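If you want to check whether VLF bloat is already a problem on your instance, a rough sketch like the one below counts VLFs per database. It relies on sys.dm_db_log_info, which is only available from SQL Server 2016 SP2 onward; on older builds, DBCC LOGINFO gives you the same information one database at a time.

-- Count virtual log files (VLFs) per database
SELECT
     db.name AS DatabaseName
    ,COUNT(*) AS VlfCount
FROM sys.databases AS db
CROSS APPLY sys.dm_db_log_info(db.database_id) AS li   -- one row per VLF
GROUP BY db.name
ORDER BY VlfCount DESC;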
I often take over databases that are 10+ years old, with no history, no documentation, and no log monitoring. There’s nobody around to ask why things are set the way they are. So, unless there’s evidence to the contrary, I take the same approach every time: standardize autogrowth to 1024MB for every user database.
The Script
Here’s the script I use:
-- Generate one ALTER DATABASE command per database file, setting FILEGROWTH to 1024 MB
SELECT
     db.name AS DatabaseName
    ,db.database_id AS DatabaseId
    ,mf.type AS FileType
    ,mf.name AS FileName
    ,'USE [' + db.name + ']; ALTER DATABASE [' + db.name + '] MODIFY FILE (NAME = [' + mf.name + '], FILEGROWTH = 1024MB);' AS Script
FROM sys.databases AS db
JOIN sys.master_files AS mf
    ON db.database_id = mf.database_id
WHERE db.database_id > 4; -- skip the system databases (master, tempdb, model, msdb)
Run it, get your ALTER DATABASE commands, and apply them. This doesn't change current file sizes; it just makes future growth happen in predictable, healthy chunks.
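If you'd rather not copy and paste the Script column by hand, something along these lines applies the commands in one pass. It's a sketch, not part of my standard script: the cursor name is arbitrary, QUOTENAME stands in for the hand-built brackets, and it assumes you've reviewed the output and have ALTER permission on every database it touches.

-- Apply the FILEGROWTH change to every file in every user database
DECLARE @sql nvarchar(max);

DECLARE growth_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT 'ALTER DATABASE ' + QUOTENAME(db.name)
         + ' MODIFY FILE (NAME = ' + QUOTENAME(mf.name) + ', FILEGROWTH = 1024MB);'
    FROM sys.databases AS db
    JOIN sys.master_files AS mf
        ON db.database_id = mf.database_id
    WHERE db.database_id > 4;          -- user databases only

OPEN growth_cur;
FETCH NEXT FROM growth_cur INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sys.sp_executesql @sql;       -- run one MODIFY FILE at a time
    FETCH NEXT FROM growth_cur INTO @sql;
END
CLOSE growth_cur;
DEALLOCATE growth_cur;

ALTER DATABASE ... MODIFY FILE can run from any database context, which is why the dynamic version skips the USE prefix.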