A 160p unistyle (single-style) black-and-white picture generator.
Vocabulary: 6,000 words, plus 80 program-specific technical word functions.
First, a wider space is sprayed wildly with text-and-detail-trait
comparative sprays.
Then the spray shrinks its variation and detail toward a regularized form.
Then heavy regularization is applied to the picture, extra form detail
is sprayed on, and the wider space is compiled down to a 160p, mixed-resolution,
1-bit black-and-white picture with
detail manipulation and variation control. A recompile usually
takes less time, more like 1 minute for a medium
modification request.
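The stages above can be sketched in miniature. This is purely illustrative: every function name, grid size, and parameter here is an assumption, since the description gives only the outline (wide spray, regularization, a finer detail spray, then compilation to 1-bit).

```python
# Hypothetical sketch of the spray/regularize/compile pipeline.
# All names and numbers are assumptions for illustration only.
import random

def spray(field, strength, rng):
    """Spray random detail-trait perturbations across the field."""
    return [[v + rng.uniform(-strength, strength) for v in row]
            for row in field]

def regularize(field):
    """Pull each cell toward the average of its neighbours
    (the 'heavy regulation' step, shrinking variation)."""
    h, w = len(field), len(field[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nbrs = [field[ny][nx]
                    for ny in (y - 1, y, y + 1)
                    for nx in (x - 1, x, x + 1)
                    if 0 <= ny < h and 0 <= nx < w]
            out[y][x] = sum(nbrs) / len(nbrs)
    return out

def compile_1bit(field, threshold=0.0):
    """Compile the field down to a 1-bit black-and-white picture."""
    return [[1 if v > threshold else 0 for v in row] for row in field]

def generate(h=40, w=64, seed=0):
    rng = random.Random(seed)
    field = [[0.0] * w for _ in range(h)]
    field = spray(field, strength=1.0, rng=rng)  # wide, wild spray
    field = regularize(field)                    # shrink variation
    field = spray(field, strength=0.2, rng=rng)  # extra form detail
    field = regularize(field)                    # heavy regularization
    return compile_1bit(field)

picture = generate()
```

A real version would spray learned data traits rather than uniform noise, but the coarse-to-fine shape of the loop is the point: cheap passes over a small 1-bit canvas, which is why a recompile of one stage can be fast.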
Data trait library: 2.2 GB max needed per stage, 6 GB storage.
Brain: 400 MB max, 1.2 GB on storage.
Process RAM required: 600 MB, plus a 128 MB GPU.
Time to do 3 variations: 6 minutes.
But hey, it would be real AI prompt generation on today's Pi 400, if a top
data science team spent 8 months making the brain and optimizing
the data trait library.
Then you take the low-definition 1-bit picture and process
it into a colouring frame, and you use hundreds of fast-running mini AI bot tools,
more like standard picture-editing tools with an AI touch,
to assist with colouring it in how you want it, in 1080p full
colour. For example, you can touch colours in and the AI blends the
colouring idea rather than applying an exact tool stroke, and you can search for
colouring-in styles. Then there is the add-form-detail-to-a-region tool.
You can also add small regions of black detail, then reform the data
better into the region of the picture in different ways.
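The touch-a-colour-in idea can be sketched as a blend with distance falloff rather than an exact fill; the function name, radius, and falloff curve below are all assumptions, not anything specified above.

```python
# Hypothetical sketch of the "touch a colour in and the AI blends it"
# tool: the touched colour fades into the surrounding region instead
# of replacing pixels exactly. All names and parameters are assumed.
def touch_colour(frame, y0, x0, colour, radius=4):
    """Blend `colour` into `frame` around (y0, x0), fading with distance."""
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]  # leave the original frame untouched
    for y in range(max(0, y0 - radius), min(h, y0 + radius + 1)):
        for x in range(max(0, x0 - radius), min(w, x0 + radius + 1)):
            d2 = (y - y0) ** 2 + (x - x0) ** 2
            if d2 > radius * radius:
                continue
            weight = 1.0 - (d2 ** 0.5) / radius  # 1 at centre, 0 at edge
            out[y][x] = tuple(
                round(weight * c + (1 - weight) * p)
                for c, p in zip(colour, out[y][x]))
    return out

# A tiny all-white frame; touching red at the centre tints nearby
# pixels strongly and the edges of the touch radius only faintly.
frame = [[(255, 255, 255)] * 9 for _ in range(9)]
tinted = touch_colour(frame, 4, 4, (255, 0, 0))
```

A real tool would presumably learn the blend from the surrounding picture content rather than use a fixed radial falloff, but the interaction model is the same: the user supplies the colouring idea, the tool decides the exact pixels.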
Voila: AI-assisted picture generation is possible on a Pi 400.
This means that if the industry had thought
about the issue more efficiently in the first place, a reasonable PC
could have done this task range by 2008, and AI image generation
would be far more dominated by people who really touched up their work well.
Bigger models, with more ability to do it all quickly from the prompt but
less artistic control, would then have been less competition, and animation and video-clip generation,
with the artists touching many things up in a similar way, would have been available by
2012 on the cloud.
With more efficient work in the area, people would have been crafting good videos on the cloud by 2018, with far less effort, anything they wanted.
I always thought these researchers were obsessed with more singular approaches, using more resources and data, trying to run before they could walk.