A few people have commented that they found a post I wrote a while back on per-database RAM usage quite helpful. You can see that article here:
I didn’t mention it at the time, but I was actually trying to do something a little more subtle than just seeing which databases were using the most RAM.
Specifically, I also wanted to determine how much of each database is currently “dirty”, that is, modified in memory and waiting to be flushed to disk at the next database checkpoint.
This might be of interest if you are looking at what volume of changes would need to be reapplied during the REDO phase of a database recovery or failover. It would be of particular interest if you are trying to estimate how long a failover cluster would take to fail over from one host to another, as the amount of REDO work has a direct bearing on how quickly this can occur.
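As an aside, for a planned failover you can reduce the pending REDO work ahead of time by flushing dirty pages yourself. A minimal sketch, where "MyDatabase" is a placeholder name and the durations shown are illustrative values rather than recommendations:

```sql
-- Manually flush dirty pages for the current database before a planned failover.
-- The optional argument to CHECKPOINT is the requested duration in seconds.
USE MyDatabase;  -- "MyDatabase" is a placeholder
CHECKPOINT 10;

-- Alternatively (SQL Server 2012 and later), enable indirect checkpoints so the
-- engine continuously keeps the estimated recovery time near a target:
ALTER DATABASE MyDatabase SET TARGET_RECOVERY_TIME = 60 SECONDS;
```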
The script is as follows:
-- Generate various metrics on how much RAM various databases are taking up via the buffer pool,
-- along with how many pages are dirty and will be flushed at the next checkpoint
SELECT DB_NAME(database_id) AS DatabaseName,
       -- The total number of 8K data and index pages used by the database
       COUNT(*) AS TotalNumberOfPages,
       -- Count the total number of 8K pages in memory with no known changes
       SUM(CASE WHEN is_modified = 0 THEN 1 ELSE 0 END) AS CleanPages,
       -- Count the total number of 8K pages in memory that have changes and will be flushed at the next checkpoint
       SUM(CASE WHEN is_modified = 0 THEN 0 ELSE 1 END) AS DirtyPages,
       -- The total amount of the buffer pool used by the database by all index and data pages,
       -- in megabytes (128 x 8K pages = 1MB)
       COUNT(*) / 128 AS TotalNumberOfPagesInMegabytes,
       -- The total amount of memory used by pages with no known changes, in megabytes
       SUM(CASE WHEN is_modified = 0 THEN 1 ELSE 0 END) / 128 AS CleanPagesInMegabytes,
       -- The total amount of memory used by dirty pages that will be flushed at the next checkpoint, in megabytes
       SUM(CASE WHEN is_modified = 0 THEN 0 ELSE 1 END) / 128 AS DirtyPagesInMegabytes
-- NOTE: sys.dm_os_buffer_descriptors returns one row for *every page* currently sitting in the
-- buffer pool. On large systems this can be a very large number of rows.
FROM sys.dm_os_buffer_descriptors WITH (NOLOCK)
GROUP BY database_id
ORDER BY COUNT(*) DESC
This should give results along the lines of:
For most users this information is probably fairly academic, but if you have a very write-heavy system, the amount of data getting checkpointed out over time may be of some interest to you, particularly if you are responsible for HA/DR SLAs.
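If you only care about one database, the same counts can be restricted with a WHERE clause, which also makes the scan of the DMV a little cheaper. A minimal sketch, again using "MyDatabase" as a placeholder name:

```sql
-- Dirty-page counts for a single database of interest
SELECT COUNT(*) AS TotalNumberOfPages,
       SUM(CASE WHEN is_modified = 1 THEN 1 ELSE 0 END) AS DirtyPages,
       SUM(CASE WHEN is_modified = 1 THEN 1 ELSE 0 END) / 128 AS DirtyPagesInMegabytes
FROM sys.dm_os_buffer_descriptors WITH (NOLOCK)
WHERE database_id = DB_ID('MyDatabase');  -- "MyDatabase" is a placeholder
```

Sampling this periodically (for example, from a SQL Agent job that logs the results to a table) would give you a rough picture of how dirty-page volume varies with your workload over time.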