10 years from now: what could be.
Assuming 10 more years of R&D and fabrication advances in optical computing and EUV electronics, along with energy becoming more cost-effective, progress in the computer industry should be moving forward quite optimally.
The top server compute card might do up to 0.3 exaflops at 500 watts (4,000 of them would total roughly 1.2 zettaflops), and a top mini PC could easily do 4 petaflops.
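As a quick sanity check of the projected figures above, here is a back-of-envelope calculation. All the numbers are the speculative ones from the text, and the helper name is my own:

```python
# Back-of-envelope check of the speculative figures above
# (projected values, not measurements).
def aggregate_flops(per_card_exaflops: float, card_count: int) -> float:
    """Return total compute in zettaflops (1 ZFLOP = 1,000 EFLOP)."""
    return per_card_exaflops * card_count / 1000.0

total = aggregate_flops(0.3, 4000)   # 4,000 cards at 0.3 EFLOPS each
print(f"{total:.1f} zettaflops")     # 1.2 zettaflops
```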
So...
Many billions of people in the world could benefit, depending on how well they develop their economic experience. For example:
200 million people working with the mother data-trait library (about 200 exabytes) from their different fields of interest.
2 million people working on the initial-state egg and knowledge-fruit generator: very data-intensive, but not that heavy on calculation demand.
8 million people working on the main training-bot range: an efficient model that develops the fruit and the egg into a bot and an efficient library that does what a person or company wants it to do, either as a bot in the bot browser or as a tool in an app.
To develop the bot, you can add a few tens of TB of input data and program up to 1 million three-paragraph parameters as a team, with additional technical coding abilities and AI support to improve the clarity of your input.
The training bot goes through up to 14 main process stages, one after the other, perfecting the data-trait library and the bot from a noisier, wider, mainly inferred state into an often smaller, more efficient, developed and trained model.
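The staged refinement described here can be sketched as a simple sequential pipeline. Everything below is hypothetical illustration: the stage functions, the state fields, and the `run_pipeline` helper are my own names, chosen only to show the idea of passing a model/library state through up to 14 ordered stages, each shrinking and refining it:

```python
from typing import Callable, List

# A stage takes the current state and returns a refined state.
Stage = Callable[[dict], dict]

def run_pipeline(state: dict, stages: List[Stage]) -> dict:
    """Apply each stage in order, as in the 'up to 14 stages' description."""
    for stage in stages:
        state = stage(state)
    return state

# Toy stages: each refinement shrinks the noisy state a little.
def denoise(state: dict) -> dict:
    return {"size": state["size"] * 0.8, "trained": state["trained"]}

def finalise(state: dict) -> dict:
    return {"size": state["size"], "trained": True}

result = run_pipeline({"size": 100.0, "trained": False},
                      [denoise, denoise, finalise])
```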
What I would do with this: with about 2,000 parameters coded over a few months, I would generate a biggish child mother bot that I could then use, over 1-2 years on a cheaper cloud service, to create my ideal operating system.
Right now, 8 TFLOPS is what a top mini PC can do.
So what could be achieved over the next 4 years with an 8-24 TFLOPS (32-bit) system?
Well, you could work on pipelining the processes of short-scene creation, picture creation, and a reduced, regularised basic AI technical simulation generator, developing such work in a more optimal stage-by-stage process, with the user able to determine a lot more through human-AI interaction at midway development stages. You could also do something like a diary bot.
Short scene and picture generation: how would this work efficiently?
-The AI first generates only within a limited style range and works from a set library; it can take some time to compile just a few images into the data-trait library for use.
-The AI works by determining, from text, a small technical codec, and data from the data-trait library, the holistic aspects of the data, applying them in a spray-paint-like way onto the canvas space.
-The canvas space starts big and shrinks to a more determined space, sometimes getting a little bigger again for some stages.
-The canvas can hold variability information useful for mid-pipeline human-with-AI modification stages.
-First, a big, more unfocused, wilder plot in the space is processed with spray operations; if there is a whole animation scene, then a 3D cuboid space is sprayed on, inferred from the well-compiled data-trait library.
-The next stage in the pipe is to de-spray and re-spray the space and to slightly shrink its data, formalising a little more decisiveness in how data traits are represented in the canvas space.
-The next stage is to regularise the data into the main focus plus regulated variation data, with the spray can working in 3D if an animation is being generated.
-The next stage is to formalise the canvas data: a bit of touching up to make the regulated detail a bit more formed, not just regulated.
-Now you take your mid-stage frame and, using a re-compiler, get this stage formed exactly how you would like the template to proceed.
-In the next stage, more detail is formalised by extending the space, wild spraying, de-spraying and re-spraying, regularising and formalising, except this time not to generate the whole picture but to perfect the main aspects of the lower data, still keeping within a limited style range.
-So you have variation and fine-detail data on the canvas. What next? In the next stage you have a wider styling range, which the user can apply to tweak the canvas by extending it and using the different stages for bigger and lower influences and fine detail; this stage crafts your picture styled however you want. However, to be faster, the image is up to 720p mixed resolution and still carries a bit of wider canvas data.
-In the next stage, you use more limited diffusion on a 1D continuous-data-based space to finalise the picture or animation from the canvas data.
-Now you either upscale overnight or upscale on the cloud.
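The canvas stages above can be sketched as a chain of transformations on a shrinking canvas. This is a minimal sketch under my own assumptions: the `Canvas` class, the 0.9 shrink factor, and the layer labels are all illustrative stand-ins for the spray / de-spray-re-spray / regularise / formalise steps described in the list:

```python
from dataclasses import dataclass, field

@dataclass
class Canvas:
    """Illustrative canvas: dimensions plus layers holding trait/variability data."""
    width: int
    height: int
    layers: list = field(default_factory=list)

def spray(c: Canvas) -> Canvas:
    c.layers.append("wild-plot")          # big, unfocused first pass
    return c

def despray_respray(c: Canvas) -> Canvas:
    # Slightly shrink the space to make trait representation more decisive.
    return Canvas(int(c.width * 0.9), int(c.height * 0.9),
                  c.layers + ["resprayed"])

def regularise(c: Canvas) -> Canvas:
    c.layers.append("main-focus+variation")
    return c

def formalise(c: Canvas) -> Canvas:
    c.layers.append("formalised")         # touching up the regulated detail
    return c

canvas = Canvas(1280, 720)
for stage in (spray, despray_respray, regularise, formalise):
    canvas = stage(canvas)
```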
AI simulation scripts, and mathematical and scientific simulation with simple, very reduced drawing traits: how would this work efficiently?
-Only 32 colours, up to 360p, 20 seconds long.
-The style range is a lot more limited.
-The canvas is wider, but the stages produce more regulated data on the canvas on the whole.
-The canvas also holds technical language corresponding to numerical and calculated results; natural language is more limited.
-Generating the simulation, even though it is not interactive, can take hours after writing the simulation script.
-Advantages include how much old data can help, and how the simulation script can be easier to do more with in this way than on a maths platform.
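The constraints in this list could be captured in a small configuration object. A minimal sketch, assuming Python: only the limits (32 colours, 360p, 20 seconds) come from the text; the class and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SimConstraints:
    """Limits for the reduced simulation renderer described above."""
    max_colours: int = 32
    max_height: int = 360    # up to 360p
    max_seconds: int = 20

    def frame_budget(self, fps: int) -> int:
        """Total frames allowed at a given frame rate."""
        return self.max_seconds * fps

limits = SimConstraints()
print(limits.frame_budget(12))   # 240 frames at an assumed 12 fps
```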
The diary bot: how would that work?
You use an up-to-4,000-parameter diary-bot generator to fine-tune your diary bot on the cloud; then your bot is generated. Your bot can add notes to your private diary, and it also works with a 4 MB txt-file mind, part of which you can set how you want; the rest of the txt space is for useful notes which the bot uses and makes. You can edit these yourself, as they are all plain English.
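The "4 MB txt-file mind" could be modelled as a capped plain-text note store. This is a hypothetical sketch: only the 4 MB cap and the plain-English notes come from the text; the `DiaryMind` class and its methods are my own invention:

```python
# Hypothetical sketch of the 4 MB txt-file mind: a size-capped store of
# plain-English notes the bot reads and appends to.
MIND_LIMIT_BYTES = 4 * 1024 * 1024  # the 4 MB cap from the text

class DiaryMind:
    def __init__(self) -> None:
        self.notes: list[str] = []

    def size_bytes(self) -> int:
        # +1 per note for the newline it would occupy in the txt file.
        return sum(len(n.encode("utf-8")) + 1 for n in self.notes)

    def add_note(self, note: str) -> bool:
        """Append a note only if the mind file stays under the cap."""
        if self.size_bytes() + len(note.encode("utf-8")) + 1 > MIND_LIMIT_BYTES:
            return False
        self.notes.append(note)
        return True

mind = DiaryMind()
mind.add_note("User prefers short morning summaries.")
```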