At the moment, using AI to upscale low-definition and mixed-definition images requires the model to be good at working a random noise image into an upscaled feature, such as refining a high-detail human head. That takes a lot of capacity for everything that needs to render well. An alternative is to split the job in two: first use deep learning to compress the problem into a noise- and pattern-balanced library of augmented data templates, then train a second deep learner to render from that compiled data. Once trained, you infer your "AI forgery" from the compiled library. This would require a different data fitting on the GAN: you are optimising for a convincing knock-off, pattern-wise, for the end-user experience, rather than for high noise-to-detail accuracy. A rough sketch of the two-stage idea follows.
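Here is a minimal sketch of that split, assuming PyTorch. Stage 1 compresses low-res patches into a small learnable codebook (the "library of augmented data templates", done vector-quantisation style); stage 2 renders high-res output from those codes, judged by a discriminator so the fit target is plausibility rather than per-pixel accuracy. All module names, sizes, and hyperparameters are illustrative assumptions, and training details (straight-through gradients, the actual loss schedule) are omitted.

```python
import torch
import torch.nn as nn

class TemplateEncoder(nn.Module):
    """Stage 1: compress a low-res patch into codes from a learned template library."""
    def __init__(self, n_templates=256, dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        # The "library of augmented data templates": a learnable codebook.
        self.codebook = nn.Embedding(n_templates, dim)

    def forward(self, x):
        z = self.conv(x)                                   # (B, dim, H/4, W/4)
        flat = z.permute(0, 2, 3, 1).reshape(-1, z.shape[1])
        # Snap each feature vector to its nearest template (VQ-style lookup).
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=1)
        q = self.codebook(idx).reshape(z.shape[0], z.shape[2], z.shape[3], -1)
        return q.permute(0, 3, 1, 2), idx

class Renderer(nn.Module):
    """Stage 2: render a higher-res patch from the compiled template codes."""
    def __init__(self, dim=64):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, codes):
        return self.up(codes)

class Discriminator(nn.Module):
    """Judges whether a render looks convincing, not whether it matches
    the ground truth pixel-for-pixel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4),
        )

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))

if __name__ == "__main__":
    enc, gen, disc = TemplateEncoder(), Renderer(), Discriminator()
    low = torch.rand(2, 3, 32, 32)    # dummy low-res input
    codes, _ = enc(low)               # compile to the template library
    fake = gen(codes)                 # render the "forgery" (2x upscale here)
    score = disc(fake)                # adversarial score: plausibility, not accuracy
    print(fake.shape, score.shape)
```

The point of the split is that the renderer never has to work raw noise into detail: it only has to learn a mapping from a finite, pre-compiled vocabulary of templates to convincing output.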
Augmentation as a form of compression and dataset efficiency is in its infancy, and how to rank an augmented dataset for dynamic use of that dataset needs improvement.
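One speculative way to do that ranking, as a sketch: score each freshly augmented sample by its feature-space distance to the nearest sample already in the library, and keep the most novel ones so the template library grows without redundancy. The `featurise` helper below is a placeholder (plain average pooling), and the novelty rule is an assumption, not an established method.

```python
import torch
import torch.nn.functional as F

def featurise(batch):
    # Placeholder feature extractor: downsample each image and flatten.
    # In practice this would be a frozen learned encoder (an assumption here).
    return F.adaptive_avg_pool2d(batch, 4).flatten(1)

def rank_by_novelty(candidates, library, top_k=4):
    """Keep the top_k augmented candidates farthest (in feature space)
    from any template already kept in the library."""
    f_cand, f_lib = featurise(candidates), featurise(library)
    # Novelty score = distance to the nearest already-kept template.
    novelty = torch.cdist(f_cand, f_lib).min(dim=1).values
    order = novelty.argsort(descending=True)
    return candidates[order[:top_k]], novelty[order[:top_k]]

if __name__ == "__main__":
    library = torch.rand(16, 3, 32, 32)     # templates already compiled
    candidates = torch.rand(32, 3, 32, 32)  # freshly augmented samples
    kept, scores = rank_by_novelty(candidates, library)
    print(kept.shape, scores)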