Psychology and Mental Health Forum


https://www.psychforums.com/blog/highdimensionman/index_sid-b23b3bd3455e51c77ae4bad40ef5e15d_start-10.html

Author:  highdimensionman [ Mon Sep 26, 2022 4:50 pm ]
Blog Subject:  The Power of narrow intelligence

Ten years from now, what could be?
Considering ten more years of R&D and fabrication evolution in optical computing and EUV electronics, along with more cost-effective energy becoming available, progress in the computer industry should be on a fairly optimal path.
The top server compute card might do up to 0.3 exaFLOPS at 500 watts (4,000 of them would give roughly 1.2 zettaFLOPS), and a top mini PC could easily do 4 petaFLOPS.
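A quick back-of-envelope check of that cluster figure, using the post's own speculative numbers (these are projections, not real hardware specs):

```python
# Hypothetical figures from the projection above, not real hardware specs.
card_flops = 0.3e18          # 0.3 exaFLOPS per projected server card
cards = 4_000                # cards in the projected cluster
cluster_flops = card_flops * cards

print(f"{cluster_flops:.1e} FLOPS")               # 1.2e+21 FLOPS
print(f"{cluster_flops / 1e21:.1f} zettaFLOPS")   # 1.2 zettaFLOPS
```

So 4,000 such cards land a little over the 1-zettaFLOP mark, at about 2 MW of card power.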
So...
With many billions of people in the world standing to benefit according to how well they can develop their economic experience:
200 million people work with the mother data trait library (about 200 exabytes) from their different fields of interest.
2 million people work on the initial-state egg and knowledge-fruit generator: very data-intensive, but not that demanding on calculation.
8 million people work on the main training-bot range: an efficient model that develops the fruit and the egg into a bot and an efficient library that does what a person or company wants it to do, either as a bot in the bot browser or as a tool in an app.
To develop the bot, you can add a few tens of terabytes of input data and, as a team, program up to 1 million three-paragraph parameters, with additional technical coding abilities and AI support to improve the clarity of your input.
The training bot goes through up to 14 main process stages, one after the other, perfecting the data trait library and the bot from a noisier, wider, mainly inferred state into an often smaller, more efficient, developed and trained model.
What I would do with this: with about 2,000 parameters coded over a few months, I would generate a biggish child mother bot that I could then use over 1-2 years on a cheaper cloud service to create my ideal operating system.

Right now
8 TFLOPS is what a top mini PC can do.
So what could be achieved over the next 4 years with an 8-24 TFLOPS (32-bit) system?
Well, you could pipeline the processes of short-scene creation, picture creation, and a reduced, regularised basic AI technical simulation generator, by developing such work in a more optimal stage-by-stage process in which the user can determine a lot more through human-AI interaction at midway development stages. You could also do something like a diary bot.
Short scene and picture generation: how would this work efficiently?
-The AI first generates only within a limited style range and works from a set library, and it can take some time to compile even a few images into the data trait library for use.
-The AI works by using text, a small technical codec, and data from the data trait library to apply holistic aspects of the data, in a sort of spray-paint-like way, onto the canvas space.
-The canvas space starts big and shrinks to a more determined space, sometimes getting a little bigger for some stages.
-The canvas can hold variability information useful for mid-pipeline human-with-AI modification stages.
-First, a big, more unfocused, wilder plot in the space is processed with spray operations; if there is a whole animation scene, then a 3D cuboid space is sprayed on, inferred from the well-compiled data trait library.
-The next stage in the pipe is to despray and respray the space and to slightly shrink its data, formalising a little more decisiveness in how data traits are represented in the canvas space.
-The next stage is to regularise the data into the main focus and regulated variation data, with the spray can working in 3D if an animation is being generated.
-The next stage is to formalise the canvas data: a bit of touching up, so the regulated detail is a bit more formed, not just regulated.
-Now you take your mid-stage frame and, using a re-compiler, get this stage formed exactly how you would like the template to proceed.
-In the next stage, more detail is formalised by extending the space: wild spraying, despraying and respraying, regularisation, and formalisation, except this time not to generate the whole picture but to perfect the main...
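The staged pipeline above could be wired up along these lines; this is a minimal toy sketch, assuming each stage is just a function from canvas to canvas. The stage names follow the list, but the operations are illustrative stand-ins, not a real generative model:

```python
import random

# Toy sketch of the staged "spray" pipeline: each stage takes the whole
# canvas space and returns a more settled version of it. A human/AI edit
# step can slot in between any two stages.

SIZE = 8

def wild_spray(canvas):
    # Stage 1: broad, noisy plot across the whole space.
    return [[random.random() for _ in row] for row in canvas]

def despray_respray(canvas):
    # Stage 2: average each cell toward its neighbours, shrinking the
    # variability of the space a little.
    out = []
    for y, row in enumerate(canvas):
        new_row = []
        for x, _ in enumerate(row):
            nbrs = [canvas[ny][nx]
                    for ny in (y - 1, y, y + 1)
                    for nx in (x - 1, x, x + 1)
                    if 0 <= ny < len(canvas) and 0 <= nx < len(row)]
            new_row.append(sum(nbrs) / len(nbrs))
        out.append(new_row)
    return out

def regularise(canvas):
    # Stage 3: snap values to a small set of levels (regulated variation).
    return [[round(v * 4) / 4 for v in row] for row in canvas]

def formalise(canvas):
    # Stage 4: clamp into the valid range, a light "touching up".
    return [[min(1.0, max(0.0, v)) for v in row] for row in canvas]

def run_pipeline(canvas, stages):
    for stage in stages:
        canvas = stage(canvas)  # a mid-stage human edit could go between stages
    return canvas

canvas = [[0.0] * SIZE for _ in range(SIZE)]
result = run_pipeline(canvas, [wild_spray, despray_respray, regularise, formalise])
print(len(result), len(result[0]))  # 8 8
```

The point of the shape is that each stage only commits a little more than the last, so a person can still steer the result midway through.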

[ Continued ]

Author:  highdimensionman [ Mon Sep 26, 2022 1:33 pm ]
Blog Subject:  Spray can prompt based Image generation.

At the moment, the AI lines up text and image data in a data space, where it clips together the base of the image; then, within that space, it uses random noise to integrate finer detail.
Rather than working in one space, you work in little spaces that determine holistic image sprays. So in the first stage, a rough and noisy space is sprayed on, relative to the text and to detail-trait-modifying sprays.
In the next stage, the bot resprays the space, formalising more detail traits in it. Then in stage 3 the space is regularised with the spray-can space, and in stage 4 the space is formalised; after that you have finer detail spray stages. The final image can then be upscaled and improved in the cloud, and you can modify the mid-process stages to get the layout to your liking.
This brain and library would be quicker, more memory-efficient, and very concise across lots of picture requirements, but it would have issues: it has to take time compiling new pictures into the library if you want to use material from them, and its output is less exotic, covering less of a dynamic style range.
That is because the AI works with a compiled data trait library and detail regulation.
I suspect you could use a slight clip and re-clip on the output image later to make it convey style and dynamics better, but in the early frames that's the trade: you're sacrificing some picture dynamics for better artistic control.
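The mid-process editing idea could look something like this in outline. Everything here (the stage functions, the `user_edit` hook, the tiny 1D "space") is a hypothetical illustration of pausing a pipeline between the layout stages and the detail stages, not a real generation API:

```python
# Sketch: the pipeline pauses after the layout stage so the user can
# rework the rough space before finer detail stages run.

def layout_stage(space):
    # Rough layout relative to the prompt (stand-in: fill with 0.5).
    return [0.5 for _ in space]

def detail_stage(space):
    # Finer detail spray on top of the settled layout (stand-in: +0.1).
    return [round(v + 0.1, 2) for v in space]

def generate(prompt, user_edit=None):
    space = [0.0] * 4                 # the working "canvas space"
    space = layout_stage(space)
    if user_edit is not None:
        space = user_edit(space)      # human gets the layout to their liking
    return detail_stage(space)

# The user zeroes out the left half of the layout before detail is added.
image = generate("a tree", user_edit=lambda s: [0.0, 0.0] + s[2:])
print(image)  # [0.1, 0.1, 0.6, 0.6]
```

The detail stage then only elaborates what the user approved, which is exactly the "artistic control for less dynamics" trade described above.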

Author:  highdimensionman [ Sat Sep 24, 2022 9:15 pm ]
Blog Subject:  Project 2040 for me to do version 1 with AI. Dynaverse

An extensive dynamic universe at 512 TB that slowly grows and does things like generating video and mini-game content from the dynaverse, with a near-infinity of possible realities to explore my way, as a game OS where you can interact via the dynaverse or the main internet services.
Then, at version 1, I'll open up development.

Author:  highdimensionman [ Sat Sep 24, 2022 5:56 pm ]
Blog Subject:  How well can a 4D, 7D and 14D tensor-sectioned mind do multi-D?

So you have low-, middle- and high-dimensional tensor arrays trained through random bias seeding, in order to use a very large data library compiled by another trained mind using, say, a 10D tensor array.
This AI is trained so that, once trained, it can replace itself with a rebundled dataset through inference, some training calculation, recompiling, and generating new data insights, utilising all bias dimensions from 4 to 14 according to each task's dimensional efficiency. It also recompiles the library for a better fit. What this AI bundle can do is generate bots, mainly through inference, from up to 256,000 three-paragraph parameters within a technical communication dynamic, including English to some degree.
This mother lode, running on cloud compute infrastructure, could be well maintained and optimised by loads of top people, and well regulated. We need real personal assistants and the like that work well with computers; we don't need that AI-ruler thing. All the mother lode does is spawn task-range bots from programming, in a well-kept and regulated space; it doesn't need to be alive, deciding life.

Author:  highdimensionman [ Sat Sep 24, 2022 5:21 pm ]
Blog Subject:  Pi 500 wish list

8GB of ram
80-140 GFLOPS.
AI picture and animation generator crafter and editor.
AI basic simulation generator.
AI character environment and fully interactive simulation generator.
Yourtube: an AI platform for continuously personalised video and AI-generated video.
Full internet content-creation suite up to 600p before upscaling, plus hyper MIDI-file editing from content generated on Yourtube.
Small data AI where applicable to assist hobbyist functionality.
Good Chrome functionality.
Pi cloud doing pretty much the same as you can manage with a Pi cluster, priced with children in mind, since they would be part of the audience using it.
Good deterministic AI, plus some more controlled-variability AI, piped down with advanced determinism to the main Pi processor.
Nice 64-bit portings.
MicroSD Express slot.
A slice of the future: 200,000,000 people, each optimising in their different fields the mother bundle of the data-trait-extraction library and the task-based AI that creates bots and libraries mainly through inference. These 200,000,000 people will be essential to progress, and to what the other billions can achieve in life in terms of designer and personal touch.

All times are UTC

Powered by phpBB © 2002, 2006 phpBB Group
www.phpbb.com