So first you optimise on augmented data aimed at the upscaling task, covering geometric style, colour and fine detail. You train on both real-world data (nature excluded at first) and 3D simulated graphics, with the objective of building an efficient dataset for upscaling, then extend the set to nature.
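A minimal sketch of the data side of this idea: build (low-res, high-res) training pairs by downsampling source frames, where real-world footage and 3D renders feed the same pipeline. The pairing function and the box-downsample choice are assumptions for illustration, not a fixed recipe.

```python
import numpy as np

def make_training_pair(image, scale=2):
    """Build a (low-res, high-res) pair by box-downsampling an image.
    `image` is an HxWxC float array; H and W must be divisible by `scale`.
    The high-res original becomes the upscaler's training target."""
    h, w, c = image.shape
    low = image.reshape(h // scale, scale, w // scale, scale, c).mean(axis=(1, 3))
    return low, image

# Real-world photos and simulated renders go through the same pairing,
# so the model sees style, colour and fine detail from both sources.
render = np.random.rand(8, 8, 3)   # stand-in for one 3D-rendered frame
low, high = make_training_pair(render)
```

The same helper can run over mixed batches, which is what makes extending the set (e.g. adding nature footage later) cheap.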
At a certain level of detail, or low-poly-ness, the geometry upscaler won't do much, but with enough polygons and well-developed texture and lighting the upscaler can work its magic.
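That threshold could be expressed as a simple gate: only run the geometry upscaler once the mesh is dense enough to give it something to work with. The function name and the triangle budget here are hypothetical tuning choices.

```python
def should_geometry_upscale(triangle_count, min_triangles=5000):
    """Below a triangle budget the geometry upscaler has too little
    structure to enhance; gate it on mesh density. The 5000-triangle
    default is a placeholder, not a measured threshold."""
    return triangle_count >= min_triangles
```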
So say you have a low-poly model and a high-poly model: could AI not fast-morph the triangulation between the two, given enough data for the upscaler? This could help Unreal Engine, where instead of shifting triangulation down relative to view distance you slide between two models, more effectively targeting and speeding up the render for the upscaler.
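The sliding-between-two-models idea can be sketched as a vertex morph. This assumes the low-poly positions have already been mapped onto the high-poly topology (a real system would need a learned or precomputed correspondence); the function below just interpolates positions.

```python
import numpy as np

def morph_lod(verts_low, verts_high, t):
    """Linearly morph between two LOD meshes that share vertex
    correspondence. t=0 gives the low-poly shape (on the high-poly
    topology), t=1 the full high-poly mesh; intermediate t slides
    between the two instead of snapping LOD levels."""
    return (1.0 - t) * np.asarray(verts_low) + t * np.asarray(verts_high)

# Toy example: four shared vertices, halfway between the two models.
low = np.zeros((4, 3))
high = np.ones((4, 3))
mid = morph_lod(low, high, 0.5)
```

Driving `t` from view distance (or from how much detail the image upscaler can recover) is what would replace a discrete LOD switch.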
To speed the upscaler up, each image should morph toward the previous upscaled frame, relying more on style upscaling rather than totally redoing the image as a full remaster every frame.
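One way to read this is as temporal reuse, similar in spirit to temporal AA: blend the previous upscaled frame with the current (cheaply) upscaled one rather than remastering from scratch. The blend weight `alpha` is a hypothetical parameter; a real renderer would also reproject the history buffer by motion vectors, which is omitted here.

```python
import numpy as np

def temporal_upscale(prev_upscaled, current_upscaled, alpha=0.8):
    """Reuse history instead of a full per-frame remaster: alpha is the
    share of the previous upscaled frame carried forward, the remainder
    comes from the freshly upscaled current frame."""
    return alpha * prev_upscaled + (1.0 - alpha) * current_upscaled

# Toy frames: mostly keep the previous result, nudge toward the new one.
prev = np.ones((2, 2, 3))
cur = np.zeros((2, 2, 3))
blended = temporal_upscale(prev, cur)
```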