Thoughts from transgressing dimension
Here you can see some of my wild thoughts, and you may find some good worldly ideas here. I just love thinking, and thought I should let my thinking be read.

The Power of narrow intelligence

Permanent Link by highdimensionman on Mon Sep 26, 2022 4:50 pm

10 years from now: what could be.
Considering 10 more years of R&D and fabrication evolution in optical computing and EUV electronics, along with cost-effective energy becoming more of a thing, progress in the computer industry should be moving forward close to optimally.
The top server compute card might do up to 0.3 exaflops at 500 watts (4,000 of them would give roughly 1.2 zettaflops), and a top mini PC could easily do 4 petaflops.
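A quick sketch of that arithmetic (the figures are the speculative ones above, not measurements):

```python
# Speculative figures from the post, not measurements.
CARD_FLOPS = 0.3e18   # 0.3 exaflops per top server compute card
CARD_WATTS = 500      # watts per card
NUM_CARDS = 4_000

cluster_flops = CARD_FLOPS * NUM_CARDS            # ~1.2e21 = 1.2 zettaflops
cluster_megawatts = CARD_WATTS * NUM_CARDS / 1e6  # 2.0 MW

print(f"{cluster_flops / 1e21:.1f} zettaflops at {cluster_megawatts:.1f} MW")
```

So a 4,000-card rack of such cards lands at about 1.2 zettaflops, for a 2 MW power budget.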
So...
With many billions of people in the world benefiting according to how well they can develop their economic experience:
200 million people working with the mother data trait library (about 200 exabytes) from their different fields of interest.
2 million people working on the initial-state egg and knowledge fruit generator: very data-intensive, but not that bad on calculation demand.
8 million people working on the main training bot range: an efficient model that develops the fruit and the egg into a bot and an efficient library that does what someone or some company wants it to do, either as a bot on the bot browser or as a tool in an app.
To develop the bot, you can add a few tens of TB of input data and program up to 1 million three-paragraph parameters as a team, with additional technical coding abilities and AI support to improve the clarity of your input.
The training bot goes through up to 14 main process stages, one after the other, perfecting the data trait library and the bot from a noisier, wider, mainly inferred state into an often smaller, more efficient, developed and trained model.
What I would do with this: with about 2,000 parameters coded over a few months, I would generate a biggish child mother bot that I could then use over 1-2 years on a cheaper cloud service to create my ideal operating system.
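As a sanity check on those headcounts, the library size and curator count above imply about a terabyte of library per person (again, these are the post's speculative figures):

```python
# Speculative headcounts and sizes from the post.
LIBRARY_BYTES = 200e18   # ~200 exabytes: the "mother data trait library"
CURATORS = 200_000_000   # people working with the library

per_person_tb = LIBRARY_BYTES / CURATORS / 1e12
print(f"~{per_person_tb:.0f} TB of library per curator")
```

A terabyte per curator is a large but plausibly maintainable slice for one person's field of interest.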

Right now
8 TFLOPS is what a top mini PC can do.
So what could be achieved over the next 4 years on an 8-24 TFLOP (32-bit) system?
Well, you could pipeline the process of short scene creation, picture creation, and a reduced, regularised basic AI technical simulation generator, by developing such work in a more optimal stage-by-stage process, with the user able to determine a lot more through human and AI interaction at midway development stages. You could also do something like a diary bot.
Short scene and picture generation: how would this work efficiently?
-The AI first generates only within a limited style range and works from a set library; it can take some time to compile even a few images into the data trait library for use.
-The AI works by determining, from text, a small technical codec, and data from the data trait library, how to apply holistic aspects of that data onto the canvas space in a sort of spray-paint-like way.
-The canvas space starts big and shrinks to a more determined space, sometimes getting a little bigger for some stages.
-The canvas can hold variability information, useful for mid-pipeline human-with-AI modification stages.
-First, a big, more unfocused, wilder plot in the space is processed with spray operations; if there is a whole animation scene, then a 3D cuboid space is sprayed onto, inferred from the well-compiled data trait library.
-The next stage in the pipe is to de-spray and re-spray the space, and to slightly shrink the data of the space, to formalise a little more decisiveness in how data traits are represented in the canvas space.
-The next stage is to regularise the data into the main focus and regulated variation data, with the spray can working in 3D if an animation is being generated.
-The next stage is to formalise the canvas data: a bit of touching up, to make the regulated detail a bit more formed, not just regulated.
-Now you take your mid-stage frame and, using a re-compiler, get this stage formed exactly how you would like the template to proceed.
-In the next stage, more detail is formalised by extending the space: wild spraying, de-spraying and re-spraying, regularisation and formalisation, except this time not to generate the whole picture but to perfect the main...
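Read as a coarse-to-fine painting loop, the stages above might be sketched like this. Everything here is a toy stand-in: the "sprays" are just soft Gaussian blobs, and the "data trait library" is a list of colours.

```python
import numpy as np

rng = np.random.default_rng(0)

def spray(canvas, cx, cy, radius, colour, strength):
    """Blend a soft circular 'spray' of colour onto the canvas."""
    h, w, _ = canvas.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * radius ** 2))
    return canvas + strength * mask[..., None] * (colour - canvas)

# Stage 1: wild, unfocused sprays onto a big noisy canvas.
canvas = rng.random((128, 128, 3))
palette = [np.array(c) for c in ([1, 0, 0], [0, 0, 1], [1, 1, 0])]  # toy "trait library"
for colour in palette:
    canvas = spray(canvas, rng.integers(128), rng.integers(128), 40, colour, 0.5)

# Stages 2-3: shrink to a "more determined space" and re-spray with smaller, firmer strokes.
canvas = canvas[::2, ::2]  # 128 -> 64
for colour in palette:
    canvas = spray(canvas, rng.integers(64), rng.integers(64), 10, colour, 0.8)

# Stage 4: formalise - clamp and lightly quantise the regulated detail.
canvas = np.clip(canvas, 0, 1).round(2)
print(canvas.shape)  # (64, 64, 3)
```

The human-with-AI midway step would slot in between the shrink and the re-spray, where the layout is decided but the detail is not yet committed.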

[ Continued ]

0 Comments Viewed 2810 times

Spray-can prompt-based image generation.

Permanent Link by highdimensionman on Mon Sep 26, 2022 1:33 pm

At the moment, the AI lines up text and image data in one data space, where it clips together the base of the image; then, within that space, it uses random noise to integrate finer detail.
Rather than working in one space, you work in little spaces which determine holistic image sprays. So in the first stage, a rough and noisy space is sprayed on, relative to the text and to detail-trait-modifying sprays.
In the next stage, the bot re-sprays the space, formalising more detail traits in the space. Then in stage 3 the space is regularised with the spray-can space, and in stage 4 the space is formalised; after that you have finer-detail spray stages. The final image can then be upscaled and improved in the cloud, and you can modify the mid-process to get the layout to your liking.
This brain and library would be quicker, more memory-efficient, and very concise across lots of picture requirements, but it would have issues: it has to take time compiling new pictures into the library if you want to use stuff from them, and its output is less exotic, covering a less dynamic range of styles. That is because the AI works with a compiled data trait library and detail regulation.
I suspect you could use a slight clip and re-clip on the output image later to make the output convey style and dynamics better, but in the early frames that's the sacrifice: some picture dynamics for better artistic control.
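The tradeoff described here (fast generation from a compiled library, at the cost of having to compile new material in first) can be shown with a toy lookup table; the trait names and vectors are invented for illustration:

```python
# Toy "compiled data trait library": prompt words map to precomputed trait vectors.
trait_library = {
    "sky":   [0.4, 0.6, 1.0],
    "grass": [0.2, 0.8, 0.2],
}

def compile_trait(name, vector):
    """Adding new picture material is the slow step: it must be compiled in."""
    trait_library[name] = vector

def lookup(prompt):
    """Generation-time lookup is fast, but only covers already-compiled traits."""
    return [trait_library[w] for w in prompt.split() if w in trait_library]

print(lookup("sky over grass"))     # both traits found
print(lookup("dragon over grass"))  # "dragon" is missing until compiled
compile_trait("dragon", [0.9, 0.1, 0.1])
print(lookup("dragon over grass"))
```

A diffusion-style model pays at generation time for its flexibility; this scheme pays up front, at compile time, for speed and artistic control.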

0 Comments Viewed 2935 times

Project 2040, for me to do version 1 of with AI: Dynaverse

Permanent Link by highdimensionman on Sat Sep 24, 2022 9:15 pm

An extensive dynamic universe at 512 TB that slowly grows and does things like generating video and mini-game content from the dynaverse, with a near-infinity of possible realities to explore my way, as a game OS where you can interact via the dynaverse or the main internet services.
Then, at version 1, I'll open up the development.

0 Comments Viewed 2803 times

How well can a 4D, 7D and 14D tensor-sectioned mind do multi-D?

Permanent Link by highdimensionman on Sat Sep 24, 2022 5:56 pm

So you have a low-, middle- and high-dimensional tensor array, trained through random bias seeding, in order to use a very large data library compiled by another trained mind using, say, a 10D tensor array.
This AI is trained so that, once trained, it can, through inference, some training calculation, recompiling, and the generation of new data insights, replace itself with a rebundled dataset utilising all bias dimensions from 4 to 14, matching data to the dimensional efficiency of the task. It also recompiles the library for a better fit. What this AI bundle can do is generate bots, mainly through inference, from up to 256,000 three-paragraph-long parameters within a technical communication dynamic, including to some degree English.
This mother lode, running in cloud compute infrastructure, could be well maintained and optimised by lots of top people, and well regulated. We need real personal assistants etc. that work well with computers; we don't need that AI-ruler thing. All the mother lode does is spawn task-range bots, via programming, in a well-kept and regulated space; it doesn't need to be alive, deciding life.
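One concrete way to see why a mind might be sectioned into low-, mid- and high-rank tensors: at a fixed size per axis, element count grows exponentially with rank, so high-rank sections are forced onto tiny axes. A sketch (the axis sizes are illustrative, chosen to keep the budgets comparable):

```python
import numpy as np

# Roughly comparable parameter budgets spread across different tensor ranks.
low  = np.zeros((8, 8, 8, 8))   # 4D:  8^4  = 4096 elements
mid  = np.zeros((3,) * 7)       # 7D:  3^7  = 2187 elements
high = np.zeros((2,) * 14)      # 14D: 2^14 = 16384 elements

for name, t in [("4D", low), ("7D", mid), ("14D", high)]:
    print(name, "rank:", t.ndim, "elements:", t.size)
```

So the 14D section can only afford binary axes before its element count explodes, which is why each task would be routed to whichever rank (4 to 14) fits it most efficiently.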

0 Comments Viewed 2873 times

Pi 500 wish list

Permanent Link by highdimensionman on Sat Sep 24, 2022 5:21 pm

8 GB of RAM.
80-140 GFLOPS.
AI picture and animation generator, crafter and editor.
AI basic simulation generator.
AI character, environment and fully interactive simulation generator.
Yourtube: an AI continuously-personalised video and AI-generated video platform.
Full internet content creation suite, up to 600p before upscale, and hyper MIDI file editing from generated content on Yourtube.
Small-data AI, where applicable, to assist hobbyist functionality.
Good Chrome functionality.
Pi cloud doing pretty much what you can already do OK with a Pi cluster, costed with children in mind, since they would be part of the audience using it.
Good deterministic AI, and some more controlled-variability AI, piping down with advanced determinism to the main Pi processor.
Nice 64-bit portings.
MicroSD Express.
A slice of the future: 200,000,000 people in their different fields optimising the mother bundle, the data trait extraction library and the task-based AI that does the job of bot and library creation, mainly through inference. These 200,000,000 people will be essential to progress and to what the other billions can achieve in life in terms of designer and personal touch.

0 Comments Viewed 3466 times
