Thoughts from transgressing dimension
Here you can see some of my wild thoughts, and you may find some good worldly ideas here. I just love thinking and thought I should let my thinking be read.

The path towards global economic reunification

by highdimensionman on Wed Dec 17, 2014 6:30 am

It was in 2011 that I suggested to China that heading towards a single currency would be detrimental for various reasons, and later suggested that the time was coming when China needed to hold its economic horses a little, to mature and develop the mainland dynamics in a more sensitive way for the Chinese people. This is when parallelisation in the global economy became official. Very recently the situation has become such that the East is officially leapfrogging the West, so the issue of the East rising above the West has reared its head, as opposed to the inverse scenario, which many in the West didn't want because of the degree of domestic and foreign policy corruption in the West and how it has been operating on too obsolete a script.
However, technologically, in Western-run labs the West is still not doing too badly, especially when East and West work together. Although I don't like certain despotic aspects of corrupt top-brass Western thinking and the corruption below it, a good mind is, after all, a good mind, be it from the East or the West, and good engineers and coders are much in demand no matter where such people are born.

This all leaves the world, and China with its currently progressing economic might, with a dilemma. The dilemma is very technically challenging, requires an evolution in management, and most of all can only be optimal as people learn the full dynamic mindedness of one another and learn to respect, love and make use of it.
That question being: what economic dynamic should we be converging on that is a good optimal?

To answer this, let's look at where we could get to by 2030-2040.
We end up with a few key areas of interest, which I'll go into in further detail, explaining which dynamic demographic is best suited to what.

1. The Hexagonal Hive Cloud Compute Internet Infrastructure (HHCCII).
1a. Neural synaptic technology.
1b. Arithmetic processing cells and future fast Turing-complete systems.
1c. Analoptic computing.
1d. Highly corrective quantum single task at a time technology.
1e. The server hive.

2. Orgonic progression.
2a. Universal tuning.
2b. Crystal technology.
2c. Energy processing issues.
2d. Spiritual progression in optimal synchronicity with the earth mind.

3. Light age technology.
3a. Optomagnetic technology.
3b. Opto-thermal technologies.
3c. Opto-nuclear technologies.
3d. Opto-carbon-based technology issues.

4. True ecological progression in scientific industrialised development.
4a. Working with a good enough understanding of nature as to make optimal use of it.
4b. Learning how to work with chemistry in ever more ecological ways.
4c. Learning how to work with physics in ever more ecologically sound ways.
4d. Convergence in engineering towards ecological optimals.

So, without further ado, let's begin. And yes, this thinking, as per usual, is coming out of thin air, as things do when creatively working on problems with a human mind, so don't feel insulted or surprised; all the best stuff of the future does come from the unwritten book, so to speak.

1. The Hexagonal Hive Cloud Compute Infrastructure

1a. Neural synaptic technology.

Section 1: cooking up the cells.
Today's IBM synaptic chips are simply tiny square cell event managers that work not relative to the processor clock cycle but relative to events.
The event management cells are modelled on simplified aspects of the neuro-synaptic workings of the mind.
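As a rough, purely illustrative sketch of event-driven (rather than clock-driven) processing, here is a minimal Python toy of a leaky integrate-and-fire style cell that only does work when a spike event arrives. The class name, parameters and numbers are my own inventions for illustration, not anything from IBM's actual chips.

[code]
class SynapticCell:
    """Toy event-driven cell: integrates incoming spike events and
    emits a spike when its potential crosses a threshold."""
    def __init__(self, threshold=1.0, leak=0.05):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak
        self.last_time = 0.0

    def on_event(self, time, weight):
        # Leak the potential for the time that passed since the last event,
        # then add the weighted input. No work is done between events.
        self.potential = max(0.0, self.potential - self.leak * (time - self.last_time))
        self.potential += weight
        self.last_time = time
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True   # the cell fires
        return False

# Drive the cell with a few (time, weight) spike events.
events = [(0.1, 0.4), (0.2, 0.5), (0.9, 0.3), (1.0, 0.6)]
cell = SynapticCell()
for t, w in sorted(events):
    if cell.on_event(t, w):
        print(f"cell fired at t={t}")
[/code]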

Advantages..
Such a system is generally very energy efficient and can recognise preprogrammed patterns in dynamic data extremely fast.
Such a system, programmed to work with a wider dynamic of pattern sensitivity, can be creative in a sense, say when finding cool new recipes, searching out specific issues in DNA research data, or working with a given technology like Lego and coming up with new, innovative Lego designs.

Disadvantages..
Such a system has issues regarding live complexity dynamics. Although such a system can be programmed to recognise you and learn to talk better, each pattern-sensing problem has to be set by the developer. So when, for example, you work more with natural human flow, machine learning in real time becomes an issue. Also, because each pattern-sensing innovation is based on preset event handling, when such a system needs to deviate and try searching for other ways of solving a problem, due to such limitations it likely won't converge optimally with new problems for quite some compute time. This is fine if you're a specific company looking for new Lego designs, searching out a specific type of DNA data, or even wanting AI that can talk to the consumer about common product issues and solutions. However, when a brand new problem arises, it's time to get in touch with tech support.
The more cells you have, the more your system can work with a wide set of data and the better it can home in on given innovative solutions. However, more cells tend to mean more chips or more layers of silicon, so when you're working with, say, graphene, which you can at first only industrialise in small-die form, this solution is not complete. Also, more silicon production can mean more production cost-efficiency issues; considering this, it's not surprising the fabbing issue was delegated to a more fab-focused group recently when that part of IBM's business was sold off to GlobalFoundries.

A hexagonal solution to some of the issues raised.
Instead of working with two-stage square cell dynamics, let's change to three-stage hexagonal dynamics.

What do I mean?
Well, not all neurons in the brain are the same size or work synaptically in the same way; you can easily see this when comparing the neural synaptic dynamics of the left and right hemispheres, and the brain works dynamically around the central axis. Also, the brain can cook its processing abilities, as it tends to when one sleeps. How can we take advantage of these things in an industrially and optimally simplified way as we move forward towards better capability?
Well, in the IBM technology you have two squarish stages: the little square cells and the square-form chip.

What if instead we worked with a three-stage hexagonal IC? (Don't worry, I'll get to an in-depth comparison to outline the pros and cons.) Your first stage is a hexagonal IC made up of loads of smaller hexagonal stage-2 cells; an optimally decided range of these cells are divided up into stage-3 synaptic event handler cells, and the other stage-2 cells are VIVD (variable input, variable data) event flow handlers, with of course bigger caches. This in theory enables management such that a smaller IC range can handle more neural synaptic events, more dynamically, during one second of sensitive event processing. A rough sketch of this layout follows below.
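Here is a minimal, hedged sketch of how such a three-stage hexagonal layout might be indexed, using the standard axial coordinates for hex grids; the grouping of six stage-3 cells under each stage-2 flow handler is my own guess at the scheme described above, not a worked-out design.

[code]
# Sketch of a three-stage hexagonal layout (my own guess at the scheme above):
# one stage-1 IC holds stage-2 flow-handler cells, each of which manages the
# six stage-3 event cells around it. Axial coordinates (q, r) are a standard
# way to index a hexagonal grid.

HEX_NEIGHBOURS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

class Stage3Cell:
    def __init__(self, coord):
        self.coord = coord
        self.events_handled = 0

class Stage2FlowHandler:
    """VIVD (variable input, variable data) flow handler that routes events
    to the stage-3 cells surrounding it."""
    def __init__(self, coord):
        q, r = coord
        self.coord = coord
        self.cells = {(q + dq, r + dr): Stage3Cell((q + dq, r + dr))
                      for dq, dr in HEX_NEIGHBOURS}

    def route(self, target, payload):
        # Hand the event to the neighbouring stage-3 cell, if it exists.
        cell = self.cells.get(target)
        if cell is not None:
            cell.events_handled += 1
            return True
        return False

handler = Stage2FlowHandler((0, 0))
handler.route((1, 0), payload="spike")
print(sum(c.events_handled for c in handler.cells.values()))  # -> 1
[/code]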

Pros..
Working with the hexagonal layout, you can make better use of cell edge spacing for issues like thermal management and optimal communications.

You require less silicon to achieve similar or more dynamic event handling.
When working with graphene, you can better capitalise on graphene's foreseen advantages, needing far less die space, possibly in the area of neural synaptic technology, which would make graphene a more viable option sooner.

Cons..
When you cook up processing, you can easily tend towards lower energy efficiency; a big issue if you're working with silicon and you have loads of money and loads of engineers, but you don't want to invest in a nuclear power station just to help the scientific community with specific problems all the time. Whether the issue is that bad I don't know, but it certainly is an issue.

The manufacturing capability to produce the existing technology is in a far more capable and mature state, so for now at least, isn't it best to learn how to make the best use of what's available?

We can barely program our current synaptic cores that well yet, let alone work with an extra, more dynamic stage; let's not forget we're working with old standards to try and cross over to a brand new model.

1b. Arithmetic processing cells and future fast Turing-complete systems.

Current ARM64 and AMD64 solutions.

ARM64 and other RISCy approaches.
ARM64 reduces the compile-time dynamics of computation such that you can easily do a lot of standard compute tasks simply and energy-efficiently.
To take advantage of doing a given task in mass parallel, ARM makes use of SIMD and MIMD extensions, including the GPGPU cores, which are essentially cellular arrays of MIMD extensions working with the cache cyclically to parallelise a set task.

Advantages
This RISCy von Neumann architectural approach allows the user to do things like play high-polygon-depth games, or allows a mathematician to parallelise a simplistic, non-step-dependent iterative task like, say, calculating a factorial or the pixels of a fractal, and they've even managed on BOINC to use it to speed up primality tests (a small sketch of this kind of data-parallel task follows below).
The ARM cores in particular, due to their RISCy nature, scale well in big.LITTLE configurations and are kept nice and simple to program, and because they're small compared to more CISCy alternatives, even a little guy can benefit from the smaller, lower-transistor-depth ICs that are commonplace with ARM. Being 64-bit, an ARM64 core can now work with the kind of memory sizes you would normally find in AMD64 computers.
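Computing the pixels of a fractal is a good example of a non-step-dependent task that maps onto SIMD-style parallelism. Below is a small Python/NumPy sketch where the whole pixel grid is updated in lock-step, with the vectorised arrays standing in for SIMD lanes; the sizes and iteration counts are arbitrary.

[code]
import numpy as np

# Each pixel of a Mandelbrot image is independent of every other pixel,
# so the whole grid can be updated in lock-step, SIMD-style.
def mandelbrot(width=200, height=150, max_iter=50):
    xs = np.linspace(-2.0, 1.0, width)
    ys = np.linspace(-1.2, 1.2, height)
    c = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for _ in range(max_iter):
        mask = np.abs(z) <= 2.0           # pixels still iterating
        z[mask] = z[mask] ** 2 + c[mask]  # one parallel step for all of them
        counts[mask] += 1
    return counts

print(mandelbrot().shape)  # (150, 200)
[/code]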

Disadvantages..
During a parallel task you can't utilise intermittent communication well at all, or even manage processing flow with good energy efficiency; in most real-world cases you just set the task, parallelise and wait till it's done. Whereas when your brain is working something out, your neural synaptic system is working in a far more dynamic way: across the corpus callosum, different neurons are working analoptically with the synapses, all working toward that uncertain conclusion, that maybe the doctor's right, maybe I have a mental health condition.
A technology like an ARM processor can't, for example, very easily act as someone to talk to about your problems; the best a quad-core ARM tablet can do is textually spout out limited conditional responses or gibberish, great if your program is trying to mimic an overly dumb god-worshipping nut, or simply dumbly telling you how medication would help, but not really the personalised support you may be looking for. It is useful, however, when it comes to simply relaying data in a simple, standardised range of formats; the internet today is the ultimate example.

AMD64 and other CISCy approaches.
AMD64 has much the same issues as ARM64; however, it's more complex to program and compile with.

Advantages.
Because with AMD and Intel you're usually talking about paying some good wonga, at times you're buying into bigger dies. Although you're using more watts, you can afford it with your electric grid connection, especially if you're living in a more electrically developed nation.
Because you're using more die space, you can do some more advanced maths functions quicker, thanks to all that bigger cache and a wider range of extensions.
Professionally, such systems can work together with a wide range of other cores, like FPGA cores and GPUs, and with your system you have more space to add things like high-performance graphics cards and ASICs.

Disadvantages.
If you're from a not very electrically developed country or region and you're using some old PC, you'll very likely suffer many issues, including poor network coverage and power outages; hearing some Westerner talk about your nation moving to the latest 4G and using more energy-efficient mobile devices may sound great, but it never seems to be that practical a solution in the long run.
AMD64 systems cost more and take up more space in server farms than their RISCy ARM64 counterparts, and in terms of who you can market to, ARM64 has the potential to allow small-time professionals and consumers far more flexibility when it comes to self-expression, selling oneself, etc.

A hexagonal solution to some of the issues raised.
What if you worked with arithmetically able, event-driven logic compute cells of a hexagonal shape? How could this work in a three-stage setup? You could have your second-stage cells, working with larger caches, utilising VIVD logic flow in order to manage third-stage small-cell calculation events. These cells could work in such a dynamic way with the second-stage cells that you're no longer talking about set 8-bit, 16-bit, 32-bit and 64-bit modes; collectively, a few cells could process an overall 64-bit equation using 8-bit and similar elements in the calculation, if the programmer deemed that useful, and the system could process with a far more dynamic flow than your standard von Neumann style architectures of today (see the sketch below). Each of the two cell stages, for simplicity and efficiency, could be a uniform structure. This, surely, when programmed right, could allow a user to do loads of interesting things in parallel and find more optimal routes, and with some extension could even allow for more dynamic graphics processing, say working with both point-particle and polygonal graphics in an ever more efficient way, and even save on them there watts, leading to longer battery life.
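To make "a few cells collectively processing a 64-bit operation out of 8-bit elements" a bit more concrete, here is a minimal sketch of a 64-bit addition built as a ripple of 8-bit adds with carry, each add small enough for a stage-3 cell as I imagine it; the cell granularity is my own assumption.

[code]
# Sketch of a 64-bit addition built from eight 8-bit adds with carry,
# the kind of small operation a group of stage-3 cells could share.
# Purely illustrative; the cell granularity is an assumption.
def add64_from_8bit_cells(a, b):
    result = 0
    carry = 0
    for i in range(8):                     # eight 8-bit "cells"
        byte_a = (a >> (8 * i)) & 0xFF
        byte_b = (b >> (8 * i)) & 0xFF
        s = byte_a + byte_b + carry
        carry = s >> 8                     # pass the carry to the next cell
        result |= (s & 0xFF) << (8 * i)
    return result & 0xFFFFFFFFFFFFFFFF

x, y = 0x0123456789ABCDEF, 0x1111111111111111
assert add64_from_8bit_cells(x, y) == (x + y) & 0xFFFFFFFFFFFFFFFF
print(hex(add64_from_8bit_cells(x, y)))
[/code]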

Pros
Being dynamic and cellular, this approach could yield significant benefits, like enabling one to work more dynamically and efficiently at compile time.
When working in conjunction with a neural synaptic core, such a system could be tweaked to help AI technology really start to get to grips with mathematics, enabling more efficient mathematical analytics and problem solving for CERN scientists and technicians whilst at the same time remaining very energy efficient.

Cons
To develop software that makes optimal use of such a core would take people time to get their heads round.
When running standard code originally optimised for today's ASICy and RISCy compute solutions, such a technology would be slower and less efficient.
Such a core wouldn't be as good as today's CPUs at some rigid, advanced calculation tasks, due to unoptimal coding and low extension depth on the third-stage cells.

Moving on...

1c. Analoptic computing

Today there is only one good example of a standard analoptic computer, and that is the mind.
So lets begin.
A natural mind.
A natural mind works in sync with the universe in a detailed field-management way.
A natural mind is able to adapt well during evolutionary events, optimally for structural efficiency.
The emotional abilities of a natural mind enable nature, when the environment demands it, to solve problems in radical new ways.
A natural mind can easily work in synergy with natural mind states of the universe, including being able to work with high degrees of analogue complexity so as to fulfil niches and be in equilibrium.
The full potential of a natural mind is unknown, but your average human mind has a considerable analoptic range compared to any man-made technology to date.

Advantages.
The ability to seek and find radical solutions to a wide dynamic of problems at any one time.
The ability to adapt to new challenges without the assistance of an industrial mind upgrade; and you shouldn't be fooled into assuming otherwise, as to do so is to misunderstand the true abilities of a natural mind.
Humans using their natural minds have managed to collectively converge on an awesome range of interesting topics and developments, and there's no reason why this won't continue to be the case.

Disadvantages..
Emotion can lead to poor decision making and undesirable outcomes.
Natural minds, in their pursuits, can form undesirable biases; I could give you gazillions of examples.
The mind can fall into so many pits of contradiction and paradoxical thinking, causing very undesirable consequences at the collective scale.
When working with digital maths, the natural mind hasn't proven to be as fast as an average mobile phone can be programmed to be, and such a mobile phone may be using far fewer watts during such a challenge.
Compared to even the current early-stage neural synaptic technology: at focused singular tasks, the neural synaptic technology can spot more conditional events, say off a video stream, than a human can, and it can do it quicker (a small sketch of this kind of event spotting follows below). Soon, within a set boundary, such technology will be able to come up with more useful Lego designs over a year than any human could over that same year.
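As a rough illustration of spotting preset conditional events off a stream faster than a person could watch it, here is a minimal sketch that flags frames whose change from the previous frame crosses a threshold. The frame data and the threshold are invented for the example.

[code]
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for a video stream: 100 small greyscale frames with a
# sudden change injected at frame 60.
frames = rng.normal(0.5, 0.01, size=(100, 32, 32))
frames[60:] += 0.2

def flag_events(frames, threshold=0.05):
    """Flag frame indices whose mean absolute change from the previous
    frame exceeds the threshold (a preset 'conditional event')."""
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]

print(flag_events(frames))   # -> [60]
[/code]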

A hexagonal-based solution to some of the problems raised.
So, working with our cellular hexagonal model, what might be a good analoptic IC to assist compute systems in finding more significant discoveries and creative solutions to issues that are very human-relevant? Now bear with me; this topic has been mostly ignored and poorly investigated, so I have little to go on here, but I'll try to devise a useful, simplistic solution.

To achieve my solution I'm going to use a little synergist magic, synergising state-of-the-art Russian technology with the more Westernised IBM neural synaptic cell thinking.

OK, here goes.

A hexogonal solution to some of the issues raised..

We are again working with the three-stage hexagonal cell approach; there are many manufacturing advantages to using such a high degree of standardisation.
Instead of our base electronic component being the transistor, the root component we're going to use in this concept is the humble memristor.

A memristor can work well with imperfect analogue fields and signal data. It's kind of like a variable resistor working with electrical field memory, at a guess.
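Here is a tiny numerical sketch of that idea, loosely after the well-known HP linear-drift description of a memristor as a resistor whose state follows the charge that has flowed through it; the parameter values are arbitrary and only meant to show the behaviour.

[code]
# Tiny numerical sketch of a memristor as a state-dependent resistor
# (loosely after the HP linear-drift description; values are arbitrary).
R_ON, R_OFF = 100.0, 16000.0   # fully-doped / undoped resistance (ohms)
x = 0.5                        # internal state in [0, 1]
mu = 1e-2                      # mobility-like constant (arbitrary units)

def step(voltage, dt=1e-3):
    """Apply a voltage for dt seconds; return the current and update state."""
    global x
    resistance = R_ON * x + R_OFF * (1.0 - x)
    current = voltage / resistance
    x = min(1.0, max(0.0, x + mu * current * dt))  # state follows charge flow
    return current

for _ in range(5):
    print(f"{step(1.0):.6f} A")   # current creeps up as the resistance drops
[/code]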

So, our second-stage cells: these are the analoptic system flow tuners, which have a kind of memristor cache that can be altered by external events in order to develop and manipulate the third-stage cells' synaptic routing.

Our third-stage cells: these are our synaptic, event-driven routing units. Unlike an IBM synaptic IC, our analoptic IC is more variable as to how many synaptic routes it can work with, although with my adaptation of the IBM tech, that too would be variable in similar ways.

Advantages
Please go easy on me; not a lot is known for sure with regard to such supposed advantages, so I really am guessing.
The dynamic, imperfect data flow of such technology should allow for a more dynamic, emotion-like process control; this should help with optimisation issues as problem complexity is increased.
It might over time seem like such technology allows for a degree of perceptibility over processes.
In the long term, as developers get used to working with this technology, they might find it really does enable the true-friend effect, meaning that the AI you're talking to might seem a lot more natural in its approach, and may aid AI customer support, and human-machine-intelligence convergent support, regarding a wide range of very natural issues.

Disadvantages.
We're not talking about a numerically perfect machine here; one plus one doesn't exactly equal two, regardless of how clearly you define the rules of your software.
Radical creativity is evolutionary; regardless of how capable such a technology is, it still needs stimulus or programming in order to work and develop creatively.
If you're trying to solve a travelling salesman problem as quickly and as nearly as you can just using this technology, then what works fast on the fifth go may, for exactly the same problem on the sixth go, take considerably longer to solve; so, as with a human playing some sporting activity, there is always going to be an element of luck.

Here we go, the big one: what does it mean relative to a natural mind, how do the two analoptic technologies compare, is such a system collectively going to do away with mankind or see us like pets, and might it be a threat to natural intelligence?

OK, opti-jump to go, let's do this.

Memristor ICs vs natural mindedness.

The memristor IC I'm talking about is not as dynamic as a natural mind in some ways, not even as dynamic as the mind of a fly. This means that in many instances it can't be as radically creative as a human can be. For example, you may be working on a field, working with grass, trying to find a good mix of grasses to use for your cattle; this may only be a step towards even more dynamic farming solutions. Although the AI using a memristor IC can indeed manage the grass to the point that it's working with an ideal mix, its ability to tune in with natural flow, such that it knows this is only a first step, then has to be inputted by humans.
Such a machine, just like a human, may be able to find many more dynamic paths forward for the farmer, but due to human nature, and nature itself, the end solution may not be the same ideal solution; because of progressive human issues, such an intelligence will in many ways, and probably most of the time, be catching up in a globalistic intelligence sense.
It will likely still find some human insights priceless.
Its life-sensation dynamic will be far lower than that of even simple natural beings.
A trained artist will still probably find it easier to paint a mountain view well enough that people can see the human-touch difference.

OK, so now onto the last IC.

1d. Highly corrective quantum single task at a time technology.

Lets begin.

A modern Quantum computer.

A D-Wave system working with over 100 qubits.
D-Wave systems work with a technique called annealing in order to seek out optimal process routes.
The quantum technology they rely on is the good old Josephson junction. I'm sure most top-brass psychiatrists know at least what a SQUID is, and have probably at least heard of Josephson junctions, but would never admit to it to a patient. Most neuroscientists would gladly tell you how they work with such technology on a day-to-day basis if asked nicely, but when asked about psychiatric use of such technologies would spin you some crap about the technology needing to be attached to your head, or maybe, if you're getting into a debate, tell you how scanning with such technology takes too long to get a good image. Yes, but can't you use it in conjunction with radio transceiver technology from a greater distance, and in a grid, say in a patient's ward room with a purposely ungrounded bed, to project certain aspects of limited dream data and poor-quality so-called thought control into someone's head whilst they sleep, so as to override the hindbrain in an attempt to manipulate thought in the frontal cortex during the patient's day, if you add, in at least some patients' cases, the use of implants like deep brain stimulators, binaural transceivers, brainwave disruptors and synaesthetic tuning where speech processing and visual perception converge? Call me a paranoid schizophrenic if you like, the psychiatrist does, but you're getting my drift if you're keeping up with this blog; I try to leave no stone unturned.
Anyway, back to D-Wave: they use these magnetic junctions as a kind of qubit processor unit.
What they do with this technology is dynamically test process options, seeking out an optimal low; it's actually quite a simple test, and the dynamic bit is the quantum aspect of doing it. (A classical sketch of this "seeking a low" idea follows below.)
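To show what "seeking out an optimal low" means, here is a hedged, classical sketch of simulated annealing on a tiny random Ising-style problem; this is not how a D-Wave machine is actually programmed, it just illustrates the annealing idea in ordinary code.

[code]
import math
import random

# Classical simulated-annealing sketch of "seeking out an optimal low":
# flip spins of a tiny random Ising-style problem, accepting uphill moves
# less often as the temperature falls.
random.seed(0)
n = 8
J = [[random.choice([-1.0, 1.0]) for _ in range(n)] for _ in range(n)]

def energy(spins):
    return sum(J[i][j] * spins[i] * spins[j]
               for i in range(n) for j in range(i + 1, n))

spins = [random.choice([-1, 1]) for _ in range(n)]
current_e = energy(spins)
best, best_e = spins[:], current_e
temperature = 5.0
while temperature > 0.01:
    i = random.randrange(n)
    spins[i] *= -1                      # propose flipping one spin
    new_e = energy(spins)
    if new_e < current_e or random.random() < math.exp(-(new_e - current_e) / temperature):
        current_e = new_e               # accept the move
        if new_e < best_e:
            best, best_e = spins[:], new_e
    else:
        spins[i] *= -1                  # reject, flip back
    temperature *= 0.995                # cool down
print("lowest energy found:", best_e)
[/code]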

Advantages..
As you raise the qubit count and shrink the component size, the technique becomes quicker relative to a more maturely manufactured technology like an ASIC designed to do the same thing.

Disadvantages.
The issues with thermal noise are enough to make Einstein, were he alive today, laugh and say "ha ha, not so fast, Mr Quantum Man, the universe isn't that easy".
Even when you're working at low temperatures, which D-Wave computers do so as to work with atoms that are closer to a crystal state, and you're having to use liquid helium, which is yet another issue, the thermal noise problem and our poor understanding of how the quantum universe works are almost unbearable and certainly require a lot more R&D yet.
As D-Wave systems evolve, so do standard IC ASICs; who knows which evolves faster? It's not as obvious as the hype surrounding quantum computing might at first make it seem.

So what might be a future solution such that we can all get a good optimal with our non-helium-supercooled computers? At the least, a solution that lets your average researcher make good use of a quantum computer via a data centre would be nice.

Here goes a possible solution, and no, this solution isn't perfect relative to the hype, but it is a possible mass-manufacturable solution, and yes, it could be able to do things like prime factorisation.

Here goes..
A hexagonal solution to some of the issues raised.
Our solution makes full use of the external technologies in the other solutions above in order to function fully.
Again we start with a three-stage hexagonal cell IC design.
This time we have two different stage-2 cells and a stage-3 cell for our solution.
Stage 2 cells.

Our first cell is the quantum processor, or processors to be more precise: on the cell are four 32-qubit general-purpose quantum processors. However, they are mirror processors; each of the four is doing the same task. On top of that, these processors take a lot of system tweaking to start them off, and sometimes even to continue their work. In our first cell, our technology uses dynamically controlled nano-lasers and much thermal complexity, as I'll explain later.
Our second cell is placed around each quantum processor unit; these are the flow-mastering kernels, working with each of the quantum processor cells in order to achieve perfect results in one-task-at-a-time operations. So you can already see that a great deal of error correction is required just to complete one general-purpose task perfectly, but essentially, yes, the system is Turing complete, and yes, it can also work in a more imperfect way for some tasks, like annealing. (A toy sketch of this mirrored-processor error correction follows below.)
Stage-3 cells: these are thermal event management cells; each one works in a dynamic way with the kernel and the wider system technology to converge towards an optimal quantum state of process.
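The idea of four mirrored processors doing the same task, with the flow kernel correcting errors, can be illustrated by a simple majority vote over noisy results. This toy sketch is my own reading of the description above; the error rate and bit layout are invented.

[code]
import random
from collections import Counter

random.seed(1)

def noisy_quantum_unit(task_bits, error_rate=0.1):
    """Stand-in for one of the four mirrored 32-qubit units: returns the
    task result but occasionally flips bits (purely illustrative)."""
    return [b ^ (1 if random.random() < error_rate else 0) for b in task_bits]

def mirrored_run(task_bits, mirrors=4):
    """Run the same task on each mirror and majority-vote each bit,
    a toy version of the flow kernel's error correction.
    (2-2 ties are broken arbitrarily in this toy.)"""
    runs = [noisy_quantum_unit(task_bits) for _ in range(mirrors)]
    return [Counter(bits).most_common(1)[0][0] for bits in zip(*runs)]

task = [1, 0, 1, 1, 0, 0, 1, 0] * 4     # pretend 32-bit result
result = mirrored_run(task)
print("bits wrong after voting:", sum(r != t for r, t in zip(result, task)))
[/code]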

Advantages.
When it comes to a simple task you might today do with a grid of GPUs, this quantum unit should kick arse.
You can do these simplified tasks almost to a dream degree; what might have taken you ten lifetimes with your current state-of-the-art Nvidia card on a simple function task now takes a day or two in a worst-case scenario, usually less than 30 minutes.
In some cases a given complex task can be broken down into stages and solved relatively fast with this IC technology.

Disadvantages.
This isn't going to replace your need for other processing capabilities.
This processor is able to utilise a dynamic system in order to do simple tasks really well; when dealing with issues of higher complexity and more dynamic issues, this supposed dream machine can be outwitted by our ARM64 replacement, given a good coder behind the wheel.
This system requires a lot of integrated development with the other technologies to be of an optimal enough benefit.
Given the relativistic, convergent nature of P vs NP, it may often be more useful to work in a complex way with the rest of the system in order to make use of more dynamic results.

And that's the ICs part of this blog/paper complete.

And without further ado..

1e. The server hive.

So we have our four ICs. As we're dealing with graphene-related technology, we are talking 2030 to 2040, after all, which is about my retirement age, lol. Our graphene-based ICs each go on a standard-sized hexagonal frame, a hexagonal-topped thermo-optic pin. Our server: well, due to the advantages of industry standardisation and the R&D required, collectively working with an ultra-standardised model for computing at the data centre level, you guessed it, our server cases are a standard-size hexagon with a few long-term memory options; many different vendors sell them, and many different manufacturers have to collaborate intensely to make the components that go inside. Of course, although based on the same Wide Configurable Function recompiler three-level development model, there are a good few OS systems that vary in ability using the canvas, so to speak, and many other software companies who code software for the given OS varieties.

One benefit you might not spot so easily is that, due to the degree of standardisation in the system, our engineers, coders, managers and the like are mostly human, even at this high level of technological advance, mostly thanks to the degree of standardisation making the development issue easier for the technically minded to get their heads round and make good use of. Try, if you will, the alternative: getting far less efficient and dynamic AI to do more and more of the best work, all because the economic structure is so perplexing, due to human issues, that your best advantage is to let the computer do ever more of the work for you. I just thought I'd point that benefit out; you might think the thing so intelligent, what's the use of humans, but this degree of standardisation and integration is actually more beneficial to the human workforce and helps advance science and ability more.

Obviously each pin sits on a comb; in between the main ICs sit hexagonal, layered memory pegs, which include different layers and different types of short-term memory: analogue memory, error-corrected quantum memory and binary memory.

The case has 5 levels:
Two processor combs, with two long-term memory combs in between and above them.
The 5th level is a dynamic heat sink, making full use of thermal management technologies like dynamic thermo-diodes and coolant piping.
Each of the compute and memory combs is connected with thermal management pegs, connecting ultimately to the 5th level.
Each of these racks goes into a hexagonal server, and these are interconnected in a large comb to make the compute hive. (A plain data-structure sketch of this layout follows below.)
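Just to make the comb arrangement concrete, here is a plain data-structure sketch of the five-level case described above; the class names and anything beyond the text are my own assumptions.

[code]
# Plain data-structure sketch of the five-level hexagonal case described
# above: processor combs and memory combs, topped by a heat-sink level.
# Names and counts beyond the text are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Pin:
    kind: str            # "neural", "arithmetic", "analoptic", "quantum"

@dataclass
class Comb:
    role: str            # "processor", "memory" or "heat-sink"
    pins: List[Pin] = field(default_factory=list)

@dataclass
class HexCase:
    levels: List[Comb] = field(default_factory=list)

case = HexCase(levels=[
    Comb("processor", [Pin(k) for k in ("neural", "arithmetic", "analoptic", "quantum")]),
    Comb("memory"),
    Comb("processor", [Pin(k) for k in ("neural", "arithmetic", "analoptic", "quantum")]),
    Comb("memory"),
    Comb("heat-sink"),   # level 5: the dynamic heat sink
])
print([lvl.role for lvl in case.levels])
[/code]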

Advantages.
This system is ultimately of a very high degree of machine intelligence.
Due to the degree of standardisation and corporate integration, this is about as good as is possible at such a time in man's development.

Disadvantages..
Many inter-corporate feuds and development issues must be overcome for the optimal realisation of such a dream.
It's not exactly an individualist's idea of a business model, nor a very libertarian corporate model, although much libertarian and individualist use can be sought from such a mega-conglomerate approach, let's say; but humans don't like getting picked on and are often dubious of human progress.

To be continued......

Diagnosis - Paranoid Schizophrenia although challenged.
Medication - Clopixol injection 4 weekly.
Personal diagnosis - I battle with the chemistry and biological effects of Clopixol every day, trying to improve on the health and range of my approach.
I try to improve my thinking. Hopefully some good ideas shine through.