Notice: do this only if it actually causes a bottleneck in your app, and make sure you understand the consequences. See Hans's answer for a good explanation of why this mechanism was put into WPF in the first place.
You have some nasty code there trying to fix a nasty hack in the framework… As it’s all static and called from multiple places in WPF, you can’t really do better than use reflection to break it (other solutions would be much worse).
So don’t expect a clean solution there. No such thing exists unless they change the WPF code.
But I think your hack could be simpler and avoid using a timer: just hack the _totalMemory value and you’re done. It’s a long, which means it can go to negative values. And very big negative values at that.
```csharp
private static class MemoryPressurePatcher
{
    // Requires using System.Reflection (BindingFlags) and System.Windows (Duration).
    public static void Patch()
    {
        // MS.Internal.MemoryPressure lives in the same assembly as Duration.
        var memoryPressureType = typeof(Duration).Assembly.GetType("MS.Internal.MemoryPressure");
        var totalMemoryField = memoryPressureType?.GetField("_totalMemory", BindingFlags.Static | BindingFlags.NonPublic);
        if (totalMemoryField?.FieldType != typeof(long))
            return;

        var currentValue = (long)totalMemoryField.GetValue(null);
        if (currentValue >= 0)
            totalMemoryField.SetValue(null, currentValue + long.MinValue);
    }
}
```
There: now your app would have to allocate about 8 exabytes before WPF ever calls `GC.Collect`. Needless to say, if that happens you'll have bigger problems to solve. 🙂
If you’re worried about the possibility of an underflow, just use long.MinValue / 2 as the offset. This still leaves you with 4 exabytes.
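To see the headroom concretely, here's a minimal sketch of the arithmetic (the 50 MB starting value is a hypothetical `_totalMemory` reading, not something from WPF):

```csharp
using System;

class OffsetDemo
{
    static void Main()
    {
        // Hypothetical current value of _totalMemory: 50 MB of decoded images.
        long currentValue = 50L * 1024 * 1024;

        // Offset by long.MinValue / 2 instead of long.MinValue.
        long patched = currentValue + long.MinValue / 2;

        // Bytes that can still be added before the counter reaches 0 again:
        long headroom = -patched;
        Console.WriteLine(headroom); // roughly 4.6e18 bytes, i.e. about 4 exabytes
    }
}
```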
Note that AddToTotal actually performs bounds checking of _totalMemory, but it does this with a Debug.Assert here:
Debug.Assert(newValue >= 0);
Since you'll be running against a release build of the .NET Framework, these asserts are disabled (`Debug.Assert` is marked with `[Conditional("DEBUG")]`), so there's no need to worry about that.
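To illustrate how that works, here's a small sketch of the mechanism; the `AssertLike` method is mine, written to mirror how `Debug.Assert` is declared in the BCL:

```csharp
using System;
using System.Diagnostics;

class ConditionalDemo
{
    static int calls;

    // The attribute tells the C# compiler to drop every call site
    // to this method unless DEBUG is defined at compile time.
    [Conditional("DEBUG")]
    static void AssertLike(bool condition)
    {
        calls++;
        if (!condition) throw new InvalidOperationException("assert failed");
    }

    static void Main()
    {
        AssertLike(1 + 1 == 2);
        // In a release build (DEBUG undefined) this prints 0, because the call
        // above was never emitted; in a debug build it prints 1.
        Console.WriteLine(calls);
    }
}
```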
You’ve asked what problems could arise with this approach. Let’s take a look.
- The most obvious one: MS changes the WPF code you're trying to hack. In that case, it pretty much depends on the nature of the change:
  - They change the type name, field name, or field type: the hack will simply not be performed, and you'll be back to stock behavior. The reflection code is defensive enough that it won't throw an exception; it just won't do anything.
  - They change the `Debug.Assert` call to a runtime check that is enabled in release builds. In that case, your app is doomed: any attempt to load an image from disk will throw. Oops. This risk is mitigated by the fact that their own code is pretty much a hack. They don't intend it to throw; it should go unnoticed, sit quiet, and fail silently. Letting images load is a much more important feature, and it should not be impaired by memory management code whose only purpose is to keep memory usage to a minimum.
  - They change the constant values: in the case of your original patch in the OP, your hack may stop working.
  - They change the algorithm while keeping the class and field intact. Well… anything could happen, depending on the change.
Now, let's suppose the hack works and disables the `GC.Collect` call successfully. The obvious risk in this case is increased memory usage: since collections will be less frequent, more memory will be allocated at any given time. This should not be a big issue, since collections will still occur naturally when gen 0 fills up.
You'd also see more memory fragmentation, a direct consequence of fewer collections. This may or may not be a problem for you, so profile your app.
Fewer collections also means fewer objects are promoted to a higher generation, which is a good thing. Ideally, you want short-lived objects in gen 0 and long-lived objects in gen 2. Frequent collections cause short-lived objects to be promoted to gen 1 and then to gen 2, and you end up with many unreachable objects in gen 2. These are only cleaned up by a gen 2 collection, they cause heap fragmentation, and they actually increase GC time, since the GC has to spend more time compacting the heap. This is the primary reason why calling `GC.Collect` yourself is considered bad practice: you're actively defeating the GC's strategy, and that affects the whole application.
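You can watch this promotion happen with a few lines of code; a small self-contained sketch (the class name is mine):

```csharp
using System;

class PromotionDemo
{
    static void Main()
    {
        var obj = new object();
        // Freshly allocated objects typically start in generation 0.
        Console.WriteLine(GC.GetGeneration(obj));

        // Each forced full collection promotes the surviving object
        // one generation, until it reaches gen 2.
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj));

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj));
    }
}
```

Two gratuitous `GC.Collect` calls are enough to push a short-lived object all the way into gen 2, where only an expensive gen 2 collection can reclaim it.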
In any case, the correct approach is to load the images, scale them down, and display the thumbnails in your UI, with all of this processing done on a background thread. In the case of JPEG images, load the embedded thumbnails; they may be good enough. And use an object pool so you don't need to instantiate new bitmaps each time, which bypasses the MemoryPressure class problem entirely. And yes, that's exactly what the other answers suggest 😉
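As a rough sketch of the decode-at-thumbnail-size part (the `ThumbnailLoader` helper and its names are mine, not from any answer; it assumes the standard WPF `BitmapImage` API):

```csharp
using System;
using System.Windows.Media.Imaging;

static class ThumbnailLoader
{
    // Decode the file straight to thumbnail size instead of loading the full
    // bitmap: DecodePixelWidth makes WPF decode a scaled-down image, so the
    // full-size pixel buffer is never materialized in memory.
    public static BitmapImage LoadThumbnail(string absolutePath, int width)
    {
        var bitmap = new BitmapImage();
        bitmap.BeginInit();
        bitmap.CacheOption = BitmapCacheOption.OnLoad; // read the file eagerly, then release it
        bitmap.DecodePixelWidth = width;               // decode at thumbnail resolution
        bitmap.UriSource = new Uri(absolutePath);
        bitmap.EndInit();
        bitmap.Freeze();                               // frozen bitmaps can cross threads
        return bitmap;
    }
}
```

Since the result is frozen, you can call this from a `Task.Run` and hand the bitmap straight to the UI thread for display.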