MrFlibble Posted January 9, 2019 I've been increasingly interested lately in the possibilities of scaling up video game graphics using neural networks, and after a relatively brief but productive stint with waifu2x I realised its limitations and tried out ESRGAN. While this network, like many others of its kind, is intended to process photographic images rather than video game graphics, one of the users trained a model that is quite suitable for art. Without going into much detail, I scaled up three official screenshots from Command & Conquer, The Covert Operations and Red Alert, then downsized them to 640x480 and converted them back to their respective original palettes. Note that this was done purely for fun/out of curiosity, but the results are pretty interesting!
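The last step described above, converting the upscaled image back to the game's original indexed palette, amounts to mapping each pixel to its nearest palette entry. A minimal pure-Python sketch of that idea (the toy palette below is made up for illustration; the real C&C palettes have 256 entries, and in practice a tool like GIMP or Pillow would do this conversion):

```python
def nearest_palette_index(pixel, palette):
    """Return the index of the palette colour closest to `pixel`,
    using squared Euclidean distance in RGB space."""
    best_i, best_d = 0, float("inf")
    for i, (r, g, b) in enumerate(palette):
        d = (pixel[0] - r) ** 2 + (pixel[1] - g) ** 2 + (pixel[2] - b) ** 2
        if d < best_d:
            best_i, best_d = i, d
    return best_i

def to_indexed(pixels, palette):
    """Convert a flat list of RGB pixels to a list of palette indices."""
    return [nearest_palette_index(p, palette) for p in pixels]

# Toy 4-colour palette (illustrative only, not an actual game palette)
PALETTE = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (255, 255, 255)]
```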
Alex06 Posted January 12, 2019 Holy smokes, this looks absolutely amazing!
Plok Posted January 12, 2019 It'd still need human corrections because it looks watercolour-ey, but definitely interesting results.
HOPE1134 Posted January 12, 2019 Looks alright, needs touch-up though. The Nod soldier in my signature is an example of waifu2x.
TaxOwlbear Posted January 23, 2019 Looks pretty interesting. I dig the oil painting effect. I'm not sure whether I'd prefer this over crisp pixels, but I'd like to see it in action on a large display.
Nyerguds Posted January 25, 2019 I let it loose on a bunch of renders... the results were pretty cool: Command & Conquer 1: https://imgur.com/a/2HhPlOz Red Alert 1: https://imgur.com/a/rC2AlC7
MrFlibble Posted January 27, 2019 Actually, I just tried a different network from the same developers as ESRGAN, SFTGAN. Unlike ESRGAN, this one does not scale up the image itself; rather, it processes input scaled by other methods to recover texture detail. So I fed it some waifu2x images, and the results seem rather better suited to C&C screenshots. The images got sharper, but they also feel more jagged. I think this could produce even better results with renders as well.
Plok Posted January 27, 2019 The upper image somehow looks worse. The Humvees look borked and apparently the algorithms are confused by the dirt/sand. The lower one looks good.
MrFlibble Posted January 29, 2019 Actually, I like how the sand and rock textures came out grainy; they're too smoothed in the original Manga image (which I also had to treat with surface blur to get rid of the faux JPEG compression artifacts it created, the model having been trained on JPEG images). Also, if you look closely at the Humvees, you can at least see their mounted guns in the second image, whereas they are almost indiscernible blobs in the Manga one.
MrFlibble Posted April 30, 2019 In the meantime, someone has made a whole bunch of new models that are a significant improvement upon the original Manga109Attempt. I tested them a bit, and that person's own Manga109 version can produce vastly superior results (I haven't bothered converting them back to the original palettes this time). These are possibly the best results I've got so far with C&C images. I used only a little pre-processing on each image:
- use Scaler Test to resize the original image to 4x with the xBRZ scaler (don't forget to uncheck the display scaler name option)
- load the scaled image in GIMP, apply Gaussian blur at a 1 pixel radius, then scale back down to the original size using Sinc interpolation (or apply Gaussian blur at 1.5 pixels and use Bicubic)
For the model, I used an interpolation of mymanga109_250000 from the site I linked to above with RRDB_ESRGAN_x4 from the default ESRGAN package. In net_interp.py, find and change the model paths to:
net_PSNR_path = './models/RRDB_ESRGAN_x4.pth'
net_ESRGAN_path = './models/mymanga109_250000.pth'
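For anyone curious what net_interp.py actually does with those two paths: network interpolation is just a per-parameter linear blend of the two checkpoints, interp = (1 - alpha) * PSNR_weights + alpha * ESRGAN_weights. A sketch of the idea using plain dicts of floats in place of PyTorch state dicts (the real script does the same thing with torch tensors):

```python
def interpolate_state_dicts(sd_psnr, sd_esrgan, alpha):
    """Blend two model checkpoints parameter-by-parameter:
    result = (1 - alpha) * PSNR weights + alpha * ESRGAN weights.
    The 'state dicts' here are plain dicts of floats for illustration;
    ESRGAN's net_interp.py applies the same formula to torch tensors."""
    return {k: (1 - alpha) * sd_psnr[k] + alpha * sd_esrgan[k]
            for k in sd_psnr}

# Toy "weights" standing in for the two checkpoint files
sd_psnr = {"conv1.weight": 0.0, "conv1.bias": 2.0}
sd_esrgan = {"conv1.weight": 1.0, "conv1.bias": 4.0}
blended = interpolate_state_dicts(sd_psnr, sd_esrgan, alpha=0.5)
```

Higher alpha pulls the result toward the ESRGAN-style (sharper, more hallucinated detail) checkpoint; lower alpha toward the smoother PSNR-oriented one.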
MrFlibble Posted April 30, 2019 Thanks! Here's a few more made with the same method:
MrFlibble Posted August 28, 2019 I tried some new models from the list I found here, with some interesting results (same images). This is from an interpolation of the Fatality and Rebout models (alpha = 0.5), which is then interpolated (again at 0.5) with DeToon.
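Chaining two 0.5 interpolations like this gives each source model a predictable share of the final weights: since interpolation is linear, Fatality and Rebout each end up contributing 0.25 and DeToon 0.5. A quick check of that arithmetic (a sketch with scalar stand-ins for model weights, not the actual interpolation script):

```python
def interp(a, b, alpha):
    """Linear blend: alpha * a + (1 - alpha) * b."""
    return alpha * a + (1 - alpha) * b

def effective_weight(fatality, rebout, detoon):
    """Trace a model's share through the two-stage blend by feeding in
    indicator values (1 for the model of interest, 0 for the others)."""
    stage1 = interp(fatality, rebout, 0.5)   # Fatality/Rebout blend
    return interp(stage1, detoon, 0.5)       # then blended with DeToon

shares = (effective_weight(1, 0, 0),   # Fatality's share of the result
          effective_weight(0, 1, 0),   # Rebout's share
          effective_weight(0, 0, 1))   # DeToon's share
```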