My idea was that the parts of a model you're not focusing on would get less computation until you shift your focus to that area.
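To make that idea concrete, here is a minimal toy sketch of focus-dependent computation (the grid size, update rates, and everything else here are my own illustrative assumptions, not any real simulator's design): regions near the current focus are updated every step at full cost, while the rest of the model is only refreshed occasionally and cheaply.

```python
import numpy as np

GRID = 32                       # hypothetical 32x32 grid of model regions
state = np.zeros((GRID, GRID))  # one scalar of "activity" per region

def step(state, focus, t, focus_radius=4, coarse_every=10):
    ys, xs = np.mgrid[0:GRID, 0:GRID]
    dist = np.hypot(ys - focus[0], xs - focus[1])
    focused = dist <= focus_radius

    # Focused regions: full-cost update every tick.
    state[focused] += np.random.randn(int(focused.sum())) * 0.1

    # Unfocused regions: cheap update, and only every `coarse_every` ticks.
    if t % coarse_every == 0:
        state[~focused] += np.random.randn(int((~focused).sum())) * 0.01
    return state

for t in range(100):
    state = step(state, focus=(16, 16), t=t)
```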
Back in 2009, I thought that 1 exaflops should be enough to reach a brain-modelling milestone: simulating microseconds of full neural brain activity in a few days.
Unfortunately, or fortunately, it turns out you might need better quantum computers or a more powerful supercomputer, say a 50-200 exaflops system, to achieve this in terms of understanding all the natural processes that go on in the brain.
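For scale, here is a rough back-of-envelope sketch of that claim. The synapse count, timestep, and runtime below are assumptions picked purely for illustration, not measurements:

```python
SUSTAINED_FLOPS = 1e18    # 1 exaflops, sustained
RUNTIME_S = 3 * 86_400    # "a few days" taken as ~3 days of wall-clock time
SYNAPSES = 1e15           # often-quoted order of magnitude for the human brain
SIM_TIME_S = 1e-6         # microseconds of simulated brain activity
TIMESTEP_S = 1e-9         # assumed 1 ns timestep for fine-grained dynamics

total_ops = SUSTAINED_FLOPS * RUNTIME_S              # ~2.6e23 operations
steps = SIM_TIME_S / TIMESTEP_S                      # 1,000 timesteps
ops_per_synapse_per_step = total_ops / (SYNAPSES * steps)

print(f"total ops:        {total_ops:.2e}")
print(f"ops/synapse/step: {ops_per_synapse_per_step:.2e}")  # ~2.6e5
```

Roughly 2.6e5 operations per synapse per nanosecond step is plenty for simplified models, but it runs out quickly once you try to capture detailed biochemistry, which is roughly why figures like 50-200 exaflops get argued for.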
Still, to make a good generalized robot, the industry needs roughly 1 petaflop per watt.
Let's look at it in terms of ARM.
2023: 3-2 nm ARM with AI upscaling and better GPUs, at a maximum of around 250 gigaflops per watt.
2026: next-generation gate devices, nanosheets and nanowires, currently in development. These could easily boost clock speeds to 20 GHz, be split further into smaller dies, and stack up to 4 layers; say 6-10 teraflops per watt.
2030s: skipping Moore's law by a few generations. Nano-wrinkle devices with hundreds of layers on tiny dies, vertically stacked on optical pins. The best builds would cost a cool few million, but at 100 GHz this allows 200 teraflops to 1 petaflop per watt, a machine-revolution milestone.
At 1 petaflop per watt, a supercomputer with a power budget of around 10 MW could reach 10 zettaflops.
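A quick sanity check of that figure, assuming a power budget of around 10 MW (my assumption; that is in the range of today's large supercomputers):

```python
FLOPS_PER_WATT = 1e15     # 1 petaflop per watt
POWER_BUDGET_W = 10e6     # assumed 10 MW machine

machine_flops = FLOPS_PER_WATT * POWER_BUDGET_W
print(f"{machine_flops:.0e} flops")  # 1e+22 flops = 10 zettaflops
```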
So I'm guessing that between the end of this decade and the middle of the next, there will be very good AI and far better brain simulations.