

Wednesday, February 18, 2009

FAMILIAR Part 2: Why No Runaway Complexity?

Long-time Fresh Brainz readers probably know that my undergrad training was not in molecular biology, but in neuroscience.

My interest in systems science started about 10 years ago when I learnt a bizarre fact about neurons - that the transmission of neural impulses was a probabilistic process.

The firing of a neuronal action potential is not perfectly reliable because it depends on a complex interplay of input signals such as EPSPs and IPSPs (excitatory and inhibitory postsynaptic potentials) and internal states such as the refractory period.

Due to the inherent unpredictability of any single neuron, vertebrates have to rely on a large number of neurons in each nerve in order to convey a reliable signal to other parts of the body.
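
To put a rough number on what that redundancy buys, here is a minimal sketch (a toy binomial calculation, not a biophysical model), assuming purely for illustration that each neuron fires correctly 70% of the time and that the downstream target only needs a majority of the population to fire:

```python
from math import comb

def majority_reliability(n, p):
    """Probability that more than half of n independent neurons fire correctly,
    when each one fires correctly with probability p (a binomial tail)."""
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

# Illustrative numbers only: 70% per-neuron reliability, odd population sizes.
for n in (1, 11, 101, 1001):
    print(f"{n:5d} neurons -> reliability of the population signal: {majority_reliability(n, 0.7):.6f}")
```

A single 70%-reliable neuron is, of course, right only 70% of the time, but in this toy model a thousand of them voting together are effectively never wrong.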

This struck me as particularly odd.

If you can't even trust one neuron to do its job, how can you trust a thousand of them?!??

Why won't they simply misfire all over the place and garble the signal?

The nervous system has often been compared with human technology such as computers, but you'd be barking mad to try to design a computer using millions of components that are not 100% reliable.

These questions perplexed me during my undergrad years and also in my first job as a research assistant in neuroscience. What was even more puzzling then was the realization that other researchers around me simply took this fact for granted - nobody would explain to me why a bunch of unreliable parts could suddenly make a reliable system.

As I started grad school in 2004, I noticed that a similar situation occurs in cell biology. In an unfinished article entitled "Brief thoughts on the Inception of Systems" I wrote:

A cell is an amazing mixed bag of biochemical processes, some of them quite straightforward, others so convoluted that Occam’s razor would not find its mark there.

Unlike the oft-used analogy of a factory, each cell is made up of components that do not fit together cleanly like a clock. For example, many proteins have multiple roles across several different pathways. From cell to cell, proteins can have different functions depending on where and when they are expressed.

The compounded variability from the probabilistic performance of each intracellular player should become so large as to make an integrated system impossible.

What I am saying is, if one wanted to make a reliable machine to fulfill a very specific function, one would not deliberately use components with variable and probabilistic characteristics. And yet cells do exist, and they are quite stable and reliable. How can this be?

How indeed?
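
Here is a rough numeric version of that worry, with entirely made-up figures: suppose keeping a cell running required a long chain of sequential steps, each of which goes right 99% of the time, and the whole chain fails if any one step fails.

```python
# Made-up figures for illustration: each step succeeds with probability 0.99,
# and the whole chain only works if every step succeeds.
per_step = 0.99

for n_steps in (1, 10, 100, 1000):
    print(f"{n_steps:4d} sequential steps -> chance the whole chain works: {per_step**n_steps:.2e}")
```

Naively compounded like this, a chain of a thousand 99%-reliable steps works about four times in a hundred thousand attempts, which is essentially never.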

Let me illustrate this problem with a graph.

When the number of components in a system is small, the total number of interactions is limited and predictability of component behaviour is high.

This is why we can play games like pool and snooker - we can tell where the target balls will end up.

As the number of components increases, the total number of possible interactions grows combinatorially, to such an extent that it quickly becomes impossible to know exactly what will happen.

Imagine a pool table with thousands of balls.

In addition, the probabilistic behaviour of each component makes this problem far, far worse.

Imagine a pool table with thousands of unbalanced balls!
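
To see how fast the bookkeeping blows up, it is enough to count pairs: the number of possible pairwise interactions grows with the square of the number of components, and that is before allowing larger groupings to interact at once. A small sketch with arbitrary component counts:

```python
from math import comb

# Arbitrary illustrative counts: a pool table's worth of balls, then far more.
for n_components in (16, 100, 1000, 100_000):
    pairs = comb(n_components, 2)
    print(f"{n_components:7d} components -> {pairs:>13,d} possible pairwise interactions")
```

Sixteen balls give 120 possible pairings; a thousand give nearly half a million.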

By this line of reasoning, it should be impossible for any limited sentient being to comprehend much of the Universe, since it consists of trillions upon trillions of probabilistic subatomic particles in constant interaction.

But here lies the trick...

In some cases, as the number of components increases, they start to exhibit emergent properties, forming a complex system. The whole system then becomes reliable enough to act as a "component" (or module) of another, larger system.

Each successive complex system then occupies a higher organization level in a hierarchical structure, so that the total number of "component" interactions at the higher organization level never gets out of hand.
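
That hierarchical trick can be sketched with a toy voting model (in the spirit of von Neumann's classic arguments about building reliable machines from unreliable parts, not something taken from this article): group unreliable units into a module that outputs their majority vote, then treat that module as a single "component" of the next level up. The 70% unit reliability and the group size of nine are arbitrary.

```python
from math import comb

def module_reliability(p, k):
    """Reliability of a module whose output is the majority vote of k units,
    each of which works correctly with probability p."""
    need = k // 2 + 1
    return sum(comb(k, j) * p**j * (1 - p)**(k - j) for j in range(need, k + 1))

p = 0.7  # arbitrary reliability of a single bottom-level unit
for level in range(1, 5):
    p = module_reliability(p, 9)  # each level is built from 9 lower-level "components"
    print(f"organization level {level}: module reliability {p:.10f}")
```

Reliability climbs towards 1 at every level, yet no level ever has to juggle more than nine interacting parts at once.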

This is why we can predict the trajectory of a cannonball with such accuracy, even though we can never predict the exact locations of all the component electrons in a cannonball.
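
The cannonball case is really just averaging: each particle's position jitters, but the average over many particles barely moves, so the aggregate behaves like a single, highly predictable component. A small simulation with made-up jitter sizes and particle counts:

```python
import random
from statistics import pstdev

random.seed(0)

def spread_of_average(n_particles, trials=500, jitter=1.0):
    """Standard deviation of the average displacement of n_particles,
    each jittering independently with standard deviation `jitter`."""
    averages = []
    for _ in range(trials):
        total = sum(random.gauss(0.0, jitter) for _ in range(n_particles))
        averages.append(total / n_particles)
    return pstdev(averages)

for n in (1, 100, 10_000):
    print(f"{n:6d} particles -> spread of the aggregate position: {spread_of_average(n):.4f}")
```

Each particle jitters by about 1.0 unit in every case, but the spread of the aggregate shrinks roughly like one over the square root of the particle count.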

There is no runaway complexity because complexity appears to "fold in" on itself with each successive organization level.

But how exactly does a complex system make itself reliable and predictable?

Stay tuned for my next article about Ludwig von Bertalanffy and his General System Theory.
