Mark Minasi's Tech Forum
MartinDBaker

Still Checking the Forum Out
Reply with quote  #1 
Guys,

Can someone categorically tell me what the latest Microsoft best practice is for pagefile sizes on mailbox servers and hub/CAS servers?

This MS TechNet article seems to provide the latest advice, but only for Exchange 2013 onwards. So basically, if you are running 32GB of RAM or less, you do <RAM> + 10MB = page file size. Anything over 32GB should be 32GB + 10MB = page file size.

However, I'm implementing 4 Exch2010 mailbox servers with 64GB RAM. My question is, what page-file size should I set for these?

Also, I have 6 hub/CAS servers with 32GB RAM, so I have set these as 32GB + 10MB = 32778MB.

BW
Martin
donoli

Senior Member
Reply with quote  #2 
With 32GB of memory, is a page file really needed at all?
wobble_wobble

Associate Troublemaker Apprentice
Reply with quote  #3 
We covered this a bit previously.
The largest dump you will get is from 32GB of RAM, so adding the 10MB of headroom allows the dump file to be written.

For a server with up to 32GB of RAM, have a page file equal to the RAM size (16GB RAM, 16GB page file).
For a server with 33GB of RAM, have a page file of 32GB + 10MB.
For a server with 128GB of RAM, have a page file of 32GB + 10MB.
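The sizing rules above boil down to a one-liner; here is a quick Python sketch (sizes in MB, using the RAM + 10MB formulation from the TechNet article, with the cap at 32GB):

```python
def recommended_pagefile_mb(ram_gb):
    """Exchange 2013+ rule of thumb: RAM + 10 MB, capped at 32 GB + 10 MB."""
    CAP_GB = 32
    return min(ram_gb, CAP_GB) * 1024 + 10

# Servers from this thread:
print(recommended_pagefile_mb(16))   # 16 GB RAM  -> 16394 MB
print(recommended_pagefile_mb(32))   # 32 GB RAM  -> 32778 MB
print(recommended_pagefile_mb(64))   # 64 GB RAM  -> 32778 MB (capped)
```

So the 64GB mailbox servers and the 32GB hub/CAS servers from the original question both land on the same 32778MB figure.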

Reference - 
1. http://newforum.minasi.com/post/is-there-a-need-for-more-page-on-guest-vms-8093312?highlight=10mb&pid=1292109824
2. https://technet.microsoft.com/en-us/library/dn879075(v=exchg.150).aspx

__________________
Have you tried turning it off and walking away? The next person can fix it!

New to the forum? Read this
anthonymaw

New Friend (or an Old Friend who Built a New Account)
Reply with quote  #4 
Most of the time you can just leave it on the "System Managed" pagefile size setting, which automatically creates a pagefile appropriately sized for running program code but not cached data.

The RAM + 10MB rule really only applies if you are doing low-level debugging and troubleshooting, particularly of a blue screen o' death, where you need to figure out what was being processed by which CPU register when the crash occurred.

Exchange (and SQL Server) use RAM differently than most other server applications.

With 32GB of RAM there's very little paging in and out necessary.

All the physical RAM is used by Exchange for dynamically caching database pages, and that data is of little or no value for debugging purposes.

It is useful to understand a detailed breakdown of how the Windows OS and server applications actually utilize physical (not virtual) memory before one can make a statement about how large the pagefile should be.

__________________
Anthony Maw, B.Sc., MCSE, Vancouver, Canada, Earth, Solar System, Milky Way Galaxy.....
Tel/SMS: +1 604-318-9994
http://www.anthonymaw.com
wobble_wobble

Associate Troublemaker Apprentice
Reply with quote  #5 
Anthony

If you have an Exchange server with 64GB of RAM or a SQL Server with 256GB of RAM, what page file do you expect them to have?

I think we will find page files of 64 to 65GB and 256 to 257GB.
Setting the page file to 32GB + 10MB instead allows Exchange 2013 and 2016 to save a dump should it crash, an action it configures automatically, along with other performance monitoring that it saves daily should any error or troubleshooting come up.
I think that logging consumes another 1.5GB per day by default on the C drive, though it can be moved.
Not sure about SQL performance logging, as I don't do that as often.

anthonymaw

New Friend (or an Old Friend who Built a New Account)
Reply with quote  #6 
Hello wobbler...

Yeah, I have built 64-bit Windows Server 2008 and 2012 with Exchange and SQL Server, as well as SharePoint and Lync, with between 32 and 64GB of physical RAM.

I wondered the same thing myself, but when the pagefile is left as "System Managed" it starts off very small and can grow over time.

The Windows OS on the Intel CPU architecture tries to opportunistically, and then algorithmically, write pageable pool memory out to disk.

But if you think about it, this strategy doesn't make sense for cached disk and database data; otherwise it would generate excessive unnecessary disk I/O to cache data and then page that cache memory out.

Therefore the pagefile does not contain cache data.

What I think is happening is that, for performance reasons, the System Managed setting pre-allocates a structured data file on disk for the pagefile, rather than seeking out free disk space to allocate it dynamically.

Pre-allocating the pagefile also helped to avoid disk fragmentation in the old days.

That worked when you had 64 megabytes of RAM and Microsoft would happily recommend 1.5 x RAM or RAM + xMB.

But on today's server hardware, capable of up to 3TB of RAM (I work on Cisco UCS node servers), pre-allocating a fixed pagefile size would be impractical as well.

Regards!

wobble_wobble

Associate Troublemaker Apprentice
Reply with quote  #7 
Nope... gonna talk this one out.

We need to limit the page file size, because we don't need the OS to have a page file the size of RAM if the Windows box has more than 32GB of RAM available.

A blade/server/guest with 3TB of RAM on a system-managed page file will have a page file of anywhere from 16MB to 4.5TB of disk space. Once it expands out, and by that I mean starts dropping 'cold blocks' into the page file and dynamically expanding, the only way back is a reboot. On a box with 3TB of RAM that's a situation we don't want; imagine the RFO on that!
Another reason is that the 3TB (possibly 4.5TB) will be on shared storage, not local cheap-and-nasty SATA, so it becomes damn expensive fragments of some unnecessary data.

By default the page file sits on the C drive and will consume all usable space if allowed; it starts at 16MB (right...) and looks to take up to 1.5 times the RAM size.
So if you put SQL onto the C drive of a 64GB RAM machine with a 60GB C drive, after approximately 24 hours there will be no space on the C drive, SQL will fail and the server is useless.

Azure places the page file on a separate temp drive (D in most common cases) to isolate it from the OS and also limits the page file to a "fixed disk size".
I suspect that this is cheap-and-nasty 10 x 4TB RAID 6 disk as opposed to some high-performance T1, T0 or even accelerated disk.
Considering that they and other cloud providers offer us drug-dealer pricing on cloud and 'aaS' solutions, it seems they missed screwing us on pricing there.
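The back-of-the-envelope numbers in the scenario above are easy to check; a quick Python sketch, using the 64GB RAM / 60GB C drive figures from the post and the classic 1.5 x RAM system-managed ceiling:

```python
# All sizes in GB. Hypothetical server from the example above.
ram_gb = 64
c_drive_gb = 60

# Classic system-managed maximum: up to 1.5x RAM.
max_pagefile_gb = 1.5 * ram_gb

print(max_pagefile_gb)               # 96.0 GB potential pagefile
print(max_pagefile_gb > c_drive_gb)  # True: it can outgrow the 60 GB C drive

# Same arithmetic for the 3 TB blade mentioned above:
print(1.5 * 3 * 1024)                # 4608.0 GB, i.e. the ~4.5 TB figure
```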


anthonymaw

New Friend (or an Old Friend who Built a New Account)
Reply with quote  #8 
On systems with large amounts of physical RAM, the purpose of a System Managed pagefile is less about crash dump analysis (but Windows never crashes, right?) and more about providing a "soft landing" in case the memory commit charge exceeds physical RAM.

The Intel x86 CPU architecture has had memory paging to disk baked into the silicon since the 80386 for this purpose; it is the job of the OS, whether Windows, Linux, OSX or OS/2, to manage paging, though.

In other words, *in case* the commit charge exceeds physical memory, instead of crashing and rebooting, the OS gracefully pages older pageable memory pages out to disk, so that system performance gradually slows down rather than failing outright.

In general the Windows OS will not automatically create a pagefile equal to physical RAM, although theoretically it could.

If physical RAM is exhausted, the typical competent systems administrator's response would be to add more RAM to maintain performance.

On systems with huge RAM, like Cisco UCS servers capable of up to 3TB, the hardware is almost always running a hypervisor like VMware or Hyper-V, and yes, storage is over FC SAN, but UCS modular servers also have on-board SSD drives for OS boot and the local pagefile, not on the SAN.

wobble_wobble

Associate Troublemaker Apprentice
Reply with quote  #9 
Sorry, we seem to have drifted from best practice on the page file.

Do we let the Microsoft Hyper-V OS manage the page file on a UCS blade with 3TB of RAM, and if so, how big a disk do you put into the blade to support that?

Or do you set it at a fixed number?

anthonymaw

New Friend (or an Old Friend who Built a New Account)
Reply with quote  #10 
The current Microsoft recommendation for 64-bit OS, and the default setting, is a "System Managed" pagefile size, so this discussion is a moot point. They recommend "some" amount of free disk space for the pagefile, and running the Windows OS without a pagefile is an unsupported configuration, but there's no pre-set pagefile size. For optimum server performance, if you are paging to disk you really need to buy more RAM anyway.