10 years from now: what could be.
Considering 10 more years of R&D, the fabrication evolution of optical computing and EUV electronics, and increasingly cost-effective energy, progress in the computer industry should be on quite an optimal path forward.
The top server compute card might do up to 0.3 exaflops at 500 watts (4,000 of them would total roughly 1.2 zettaflops), and a top mini PC could easily do 4 petaflops.
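These aggregate numbers are speculative, but the arithmetic behind them can be checked directly (the per-card figure and card count are the text's own assumptions):

```python
# Speculative figures from the text: 0.3 exaflops per server card, 4,000 cards.
card_flops = 0.3e18            # FLOPS per top server compute card
num_cards = 4_000
total_flops = card_flops * num_cards

print(total_flops / 1e21)      # expressed in zettaflops -> 1.2
```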
So...
With many billions of people in the world benefiting according to how well they can develop their economic experience:
200 million people working with the mother data trait library (about 200 exabytes) from their different fields of interest.
2 million people working on the initial-state egg and knowledge-fruit generator: very data intensive, but not that demanding in calculation.
8 million people working on the main training bot range: an efficient model that develops the fruit and the egg into a bot and an efficient library that does what someone or some company wants it to do, either as a bot in the bot browser or as a tool in an app.
To develop the bot, you can add a few tens of TB of input data and program up to 1 million three-paragraph parameters as a team, with additional technical coding abilities and AI support to improve the clarity of your input.
The training bot goes through up to 14 main process stages, one after the other, perfecting the data trait library and the bot from a noisier, wider, mainly inferred state into an often smaller, more efficient, developed and trained model.
What I would do with this: with about 2,000 parameters coded over a few months, I would generate a biggish child mother bot that I could use to create my ideal operating system over 1-2 years on a cheaper cloud service.
Right now, 8 TFLOPS is what a top mini PC can do.
So what could be achieved over the next 4 years with an 8-24 TFLOP (32-bit) system?
Well, you could work on pipelining the process of short-scene creation, picture creation, and a reduced, regularised basic AI technical simulation generator, developing such work in a more optimal stage-by-stage process, with the user able to determine a lot more through human-AI interaction at midway development stages. You could also do something like a diary bot.
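A stage-by-stage pipeline with a human/AI review hook between stages could be sketched roughly as follows; every name here is a hypothetical illustration, not an existing API:

```python
# Minimal sketch of a staged generation pipeline with a review hook between
# stages, so a human (or AI assistant) can intervene mid-pipe.
from typing import Callable, List, Tuple

Stage = Callable[[dict], dict]            # each stage transforms a working state
ReviewHook = Callable[[str, dict], dict]  # lets the user adjust state mid-pipe

def run_pipeline(state: dict, stages: List[Tuple[str, Stage]],
                 review: ReviewHook) -> dict:
    for name, stage in stages:
        state = stage(state)              # automatic processing for this stage
        state = review(name, state)       # midway human/AI interaction point
    return state

# Toy stages: each just records that it ran.
stages = [
    ("scene_draft", lambda s: {**s, "log": s["log"] + ["scene_draft"]}),
    ("picture_pass", lambda s: {**s, "log": s["log"] + ["picture_pass"]}),
    ("simulation_pass", lambda s: {**s, "log": s["log"] + ["simulation_pass"]}),
]

# A review hook that accepts every stage unchanged.
result = run_pipeline({"log": []}, stages, lambda name, s: s)
print(result["log"])   # ['scene_draft', 'picture_pass', 'simulation_pass']
```

In practice the review hook is where the user would inspect the canvas's variability information and steer the next stage.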
Short scene and picture generation: how would this work efficiently?
-The AI first generates only in a limited style range and works from a set library; it can take some time to compile even a few images into the data trait library for use.
-The AI works by using text, a small technical codec, and data from the data trait library to apply holistic aspects of the data in a sort of spray-paint-like way onto the canvas space.
-The canvas space starts big and shrinks to a more determined space, sometimes getting a little bigger for some stages.
-The canvas can hold variability information useful for mid-pipe human-with-AI modification stages.
-First, a big, more unfocused, wilder plot in the space is processed with spray operations; if there is a whole animation scene, then a 3D cuboid space is sprayed on, inferred from the well-compiled data trait library.
-The next stage in the pipe is to despray and respray the space, and to slightly shrink the data of the space, to formalise a little more decisiveness with regard to data trait representation in the canvas space.
-The next stage is to regularise the data into the main focus and regulated variation data, with the spray can in 3D if an animation is being generated.
-The next stage is to formalise the canvas data: a bit of touching up to make the regulated detail a bit more formed, not just regulated.
-Now you take your mid-stage frame and, using a re-compiler, get this stage formed exactly how you would like this template to proceed.
-In the next stage, more detail is formalised by extending the space: wild spraying, despraying and respraying, regularisation, and formalisation, except this time not to generate the whole picture but to perfect the main...
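The spray/despray/regularise sequence above amounts to a coarse-to-fine refinement loop over a shrinking canvas. Here is a minimal numeric sketch of that idea; the stage mechanics, function names, and noise figures are illustrative assumptions, not the actual method:

```python
import random

# Illustrative coarse-to-fine canvas refinement: start with a large, noisy
# canvas, then despray/respray on progressively smaller canvases with less
# noise each pass, so the data becomes more determined stage by stage.

def spray(size: int, noise: float, rng: random.Random) -> list:
    """Wild first plot: a target value plus heavy noise across a big canvas."""
    return [0.5 + rng.uniform(-noise, noise) for _ in range(size)]

def refine(canvas: list, new_size: int, noise: float,
           rng: random.Random) -> list:
    """Despray/respray: shrink the canvas and re-deposit with less noise."""
    step = len(canvas) / new_size
    return [canvas[int(i * step)] * 0.5 + (0.5 + rng.uniform(-noise, noise)) * 0.5
            for i in range(new_size)]

rng = random.Random(0)
canvas = spray(256, noise=0.4, rng=rng)            # wide, unfocused first plot
for new_size, noise in [(192, 0.2), (160, 0.1), (128, 0.05)]:
    canvas = refine(canvas, new_size, noise, rng)  # each pass: smaller, calmer

spread = max(canvas) - min(canvas)
print(len(canvas), round(spread, 3))   # 128 values, much tighter spread
```

Each pass halves the influence of the previous noisy state and deposits fresher, lower-noise data, which is one plausible reading of "despray and respray" tightening the canvas toward a determined result.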
[ Continued ]