
# 31 Talker on Mind As A Computer Part 1 of 3



In computers we use the term multitasking.
For the human mind, the equivalent term is multi-attending.
Perhaps the best way to appreciate multi-attending problems is to compare the computer you are using today to the one you might have been using a couple of years ago. While the best of the current computers are so fast that we may not notice when we ask them to do multiple tasks, the previous generation couldn't do anything else while it was printing.

In the human mind, we encounter these problems when we try to divide our mental processing capacity between two tasks. Most multi-attending is done without thinking about it.
We can walk and chew gum at the same time.
We can walk and talk at the same time.
But the more difficult the mental task, the harder it becomes for the brain to jump back and forth between two tasks that each require some portion of its attention.

For instance, consider a routine conversation on the telephone. This is a classic case of single attention. The conversation requires only one sensory input (hearing) and there is little need to divide attention, as only one other person is involved. However, as soon as you add additional stimuli into the mix, multi-attentional problems start to show up. A child crying in the background, or asking for lunch, will greatly increase the attentional requirements. If the conversation is in person rather than on the phone, additional distractions become a problem. Now sensory input from the eyes, background noises and body language may start to require the brain's attention. What was previously sensed instinctively may require conscious thought to process.

When the brain is required to concentrate on background processing tasks, problems begin to arise. The solution to this information-processing logjam is increased concentration, through a process called over-attending. Over-attending can work, but it comes at an awful price: rapid fatigue. In a computer, the equivalent would be a very sluggish response time.

The problem is that it is very difficult to be a successful executive without having to deal with more than one input at a time. And even if executives are able to insulate their work environment from multiple stimuli, they can only sustain concentrated work effort for a few hours at a time. If, as researchers suggest, they are able to "rest for an hour or two", in essence take a nap, they may be able to continue on, at least on the good days. But invariably there are the bad days, when too much done the day before, too little sleep, or too much other stress means they don't start the day renewed. On such days, they might as well stay at home.

I find it helpful to use the computer as an analogy. The first comparison is between short-term and long-term memory, and RAM versus hard-drive storage on your computer. If you have gotten this far into a web page, you probably understand how your computer uses and records information. What you are using at this moment is held in your computer's RAM, available to you as long as you keep this window open and supply power to your computer. But cut the power or close the window and this information will be lost, unless of course you have stored it in some way on your hard drive. Data you have stored on your hard drive can be retrieved the next time you turn on your computer, subject of course to your ability to find where you stored it.
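
To make the analogy concrete, here is a minimal Python sketch of the same idea: data held only in memory disappears when the program (or the power) goes away, while data deliberately saved to disk can be retrieved in a later session. The file name is just a made-up example.

```python
# "Short-term memory": this list lives only in RAM. Close the program
# (cut the power, close the window) and it is gone.
working_notes = ["phone number", "grocery list", "meeting at 3 pm"]

# "Long-term memory": explicitly save the data to the hard drive.
# The file name here is a hypothetical example.
with open("notes.txt", "w") as f:
    f.write("\n".join(working_notes))

# Later, in a new session, the data can be retrieved again,
# subject of course to remembering where you stored it.
with open("notes.txt") as f:
    recalled = f.read().splitlines()

print(recalled)  # ['phone number', 'grocery list', 'meeting at 3 pm']
```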

The terms short-term and long-term memory, as used in brain and memory research, refer to the equivalent processes. Short-term memories are those you are using now; long-term memories are those you can retrieve from your brain's memory banks.
The science of human memory is largely a study of how information gets transferred from one to the other, through what is called encoding.
(See the links below for more on encoding. When you do a 'Save As' on the computer, you may or may not be aware that you have at least four choices as to which method of 'encoding' is used. Mostly it poses no problem for the average user.)
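
As a small illustration of those encoding choices, the Python sketch below "saves" the same text under four common encodings and shows that the byte counts differ; the particular encodings listed are just typical examples, not a claim about any specific program's 'Save As' menu.

```python
# The same text takes up a different number of bytes depending on
# which encoding is chosen when it is saved.
text = "Mind as a computer"

for encoding in ("ascii", "latin-1", "utf-8", "utf-16"):
    data = text.encode(encoding)
    print(f"{encoding:8} -> {len(data)} bytes")

# ascii    -> 18 bytes
# latin-1  -> 18 bytes
# utf-8    -> 18 bytes
# utf-16   -> 38 bytes  (2 bytes per character plus a 2-byte byte-order mark)
```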

Yet there is a more important factor when considering memory problems, and that is the input factor. Your mind cannot remember what never got input.

What is only partially input will probably not be remembered, or will be remembered improperly.

If you have a direct Internet connection, which is crudely equivalent to exceptional brain processing speed, you would seem to get all of the data at once on your monitor screen at the first try. But it doesn't actually happen that way. It is all processed 'bit by bit', 'packet by packet', albeit very fast.
Even with the monitor showing a full page of information, you may still need to scroll up or down the page to see and read all of it.
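
Here is a rough sketch of that 'packet by packet' delivery: even when a page seems to appear all at once, the underlying data arrives in small chunks that are only assembled at the end. The URL is a placeholder example.

```python
from urllib.request import urlopen

with urlopen("http://example.com/") as response:
    chunks = []
    while True:
        chunk = response.read(1024)   # read roughly a kilobyte at a time
        if not chunk:
            break                     # no more data on the wire
        chunks.append(chunk)

page = b"".join(chunks)               # only now is the 'full screen' available
print(f"received {len(chunks)} chunks, {len(page)} bytes total")
```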

The actions of one's brain are much the same as a computer's when it comes to gathering, inputting, reading, storing and recalling information.

The speed of a computer, and of one's mind, depends on how busy all its other functions are. Some downloading is rapid, while at other times it is a snail's pace. Your brain works the same way. If its processing speed is slowed, or it is busy doing something else when a memory should be encoded, it will get incomplete data from which to encode the memory. If the data is incomplete, the memory will also be incomplete.

As stated, memory problems (brain or computer) can seriously complicate information processing. Memory problems are what most seriously affect the reliability of 'recall' and, in turn, life's activities.

Complex organisms, in particular those with brains, suffer from information overload. In primates, about one million fibers leave each eye and carry on the order of one megabyte per second of raw information. One way to deal with this deluge of data is to select a small fraction of it and to process this reduced input in real-time, while the non-selected portion of the input is processed at a reduced bandwidth. In this view, attention is a mechanism that selects information of current relevance to the organism while leaving the non-selected, and thus non-attended, data to suffer from benign neglect.
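
The selection idea can be caricatured in a few lines of code: keep the small attended portion of the input at full detail and keep everything else only at a much reduced sampling rate. This is purely an illustrative toy, not a model from the research described here.

```python
raw_input = list(range(1_000_000))     # stand-in for a flood of raw sensory data

attended_window = slice(5_000, 5_100)  # the small fraction currently attended
attended = raw_input[attended_window]  # processed fully, in real time
neglected = raw_input[::100]           # the rest, sampled at reduced "bandwidth"

print(len(attended), "samples at full detail,", len(neglected), "at reduced detail")
```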

Here I will show some basic data that will put focus on the 'how and what' of the brain and mind's handling of information packets.
(See the links below for more on packets and on Byte Prefixes and Binary Math.)

Byte Prefixes and Binary Math
When you start talking about lots of bytes, you get into prefixes like kilo, mega and giga, as in kilobyte, megabyte and gigabyte (also shortened to K, M and G, as in Kbytes, Mbytes and Gbytes or KB, MB and GB).

The following table shows the binary multipliers:

Name   Abbr   Size
Kilo   K      2^10 = 1,024
Mega   M      2^20 = 1,048,576
Giga   G      2^30 = 1,073,741,824
Tera   T      2^40 = 1,099,511,627,776

You can see in this chart that kilo is about a thousand, mega is about a million, giga is about a billion, and so on. So when someone says, "This computer has a 2 gig hard drive," what he or she means is that the hard drive stores 2 gigabytes, or approximately 2 billion bytes, or exactly 2,147,483,648 bytes. How could you possibly need 2 gigabytes of space? When you consider that one CD holds 650 megabytes, you can see that just three CDs' worth of data will fill the whole thing!
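
The table and the '2 gig' figure are easy to check; the short Python sketch below just reproduces the arithmetic.

```python
# Reproducing the binary multipliers from the table above.
prefixes = {"Kilo (K)": 10, "Mega (M)": 20, "Giga (G)": 30, "Tera (T)": 40}

for name, power in prefixes.items():
    print(f"{name}: 2^{power} = {2**power:,}")

# A "2 gig" hard drive, counted exactly:
print(f"2 GB = {2 * 2**30:,} bytes")   # 2,147,483,648
```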

Terabyte databases are fairly common these days, and there are probably a few petabyte databases floating around the Pentagon by now.

Breaking this down a little further, we'll see how it all ties together.
8 bits = one Byte (equivalent to one symbol, digit or letter).
The letter 'A' equals one Byte.
The letters 'ABCD' equal four Bytes.
The letters 'A B C D' equal seven Bytes, as each 'space' also counts as a Byte.
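
Those counts are easy to verify, assuming one byte per character as in ASCII (where each letter, digit or space is 8 bits):

```python
print(len("A".encode("ascii")))        # 1 byte
print(len("ABCD".encode("ascii")))     # 4 bytes
print(len("A B C D".encode("ascii")))  # 7 bytes - the three spaces count too
```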

So within one's brain there can be rapid, super-high-speed activity. On average, the 'conscious' mind can only handle seven Bytes at a time.
(Under certain conditions this can go as high as ten Bytes at a time.)
So under normal conditions, the conscious mind does very fast inputting and outputting in small packets at a time.
Under normal conditions, the subconscious mind can run at super high speed, handling around 20,000 Bytes in large packets at a time.
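
As a toy illustration of that contrast, the sketch below chops the same stream into packets using the article's own figures (7 bytes for the 'conscious' channel, 20,000 bytes for the 'subconscious' one); these numbers are the author's analogy, not measured values.

```python
def packetize(data: bytes, packet_size: int) -> list[bytes]:
    """Split a stream into fixed-size packets, as the analogy describes."""
    return [data[i:i + packet_size] for i in range(0, len(data), packet_size)]

stream = b"x" * 140_000                            # stand-in for incoming experience

conscious_packets = packetize(stream, 7)           # 20,000 tiny packets
subconscious_packets = packetize(stream, 20_000)   # 7 large packets

print(len(conscious_packets), len(subconscious_packets))  # 20000 7
```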

(The average time it takes for a complete human blink is about 300 to 400 milliseconds, or 3/10ths to 4/10ths of a second. Of course this is an average only and can differ from person to person.)

Here is a very important point.
When the conscious mind entertains any thought, positive, negative, questionable, beneficial or otherwise, there is an average 500-millisecond delay while one's subconscious mind 'evaluates' that thought before feeding it back to the conscious mind for action. In essence, the blink of an eye, time-wise, is what determines the outcome of a given choice.

While difficult to explain concisely, the 'time' construct as we know it is necessary for the conscious mind to process the vast amounts of information it receives, because of the tiny amount of data it can process in any given moment.
So the 'packets' of information are fed, so to speak, rapidly in 'chopped up' Byte packets, in a sequential linear manner, giving the illusion of time passing.

Now we have focus, attention and consciousness, which almost appear to be a single function. These three functions are highly complex. They can operate seemingly as individual functions, or mix in any combination.

Constantly being input, fed into one's inner consciousness, are literally thousands of environmental bytes of data, recognized or not. Data so received can be precisely or, at times, not so precisely stored as inner memories, to be called upon as circumstances demand, for given actions.

To the extent that one accepts that attention and consciousness have different functions, one has to accept that they cannot be the same process.

Multitasking denotes attention to a variety of external and internal stimuli. All the research one can find concludes that the human mind performs much less efficiently in multitasking environments.

Task switching denotes shifting full attention from one activity to the next. It seems to parallel our current understanding of brain function in a high-stimulus environment.
Multitasking:
You can't pay full attention to both sights and sounds, and lab findings suggest that cell phones and driving don't mix. The reason talking on a cell phone makes drivers less safe may be that the brain can't simultaneously give full attention to both the visual task of driving and the auditory task of listening, a study by a Johns Hopkins University psychologist suggests. The study, published in a recent issue of The Journal of Neuroscience, reinforces earlier behavioral research on the danger of mixing mobile phones and motoring.

“Our research helps explain why talking on a cell phone can impair driving performance, even when the driver is using a hands-free device,” said Steven Yantis, a professor in the Department of Psychological and Brain Sciences in the university’s Zanvyl Krieger School of Arts and Sciences.

“The reason?” he said. “Directing attention to listening effectively ‘turns down the volume’ on input to the visual parts of the brain. The evidence we have right now strongly suggests that attention is strictly limited – a zero-sum game. When attention is deployed to one modality – say, in this case, talking on a cell phone – it necessarily extracts a cost on another modality – in this case, the visual task of driving.”

Yantis’s chief collaborator on this research project was Sarah Shomstein, who was a doctoral candidate at Johns Hopkins. Shomstein is now a post-doctoral fellow at Carnegie-Mellon University.

Though the results of Yantis’ research can be applied to the real world problem of drivers and their cell phones, that was not directly what the professor and his team studied. Instead, healthy young adults ages 19 to 35 were brought into a neuroimaging lab and asked to view a computer display while listening to voices over headphones. They watched a rapidly changing display of multiple letters and digits, while listening to three voices speaking letters and digits at the same time.

The purpose was to simulate the cluttered visual and auditory input people deal with every day.

Using functional magnetic resonance imaging (fMRI), Yantis and his team recorded brain activity during each of these tasks. They found that when the subjects directed their attention to visual tasks, the auditory parts of their brain recorded decreased activity, and vice versa.

Yantis’ team also examined the parts of the brain that control shifts of attention.

They discovered that when a person was instructed to move his attention from vision to hearing, for instance, the brain’s parietal cortex and the prefrontal cortex produced a burst of activity that the researchers interpreted as a signal to initiate the shift of attention.
This surprised them, because it had previously been thought that those parts of the brain were involved only in visual functions.

"Ultimately, we want to understand the connection between voluntary acts of the will (for instance, a choice to shift attention from vision to hearing), changes in brain activity (reflecting both the initiation of cognitive control and the effects of that control), and resultant changes in the performance of a task, such as driving," Yantis said. "By advancing our understanding of the connection between mind, brain and behavior, this research may help in the design of complex devices, such as airliner cockpits, and may help in the diagnosis and treatment of neurological disorders such as ADHD or schizophrenia."

A scientist who has specialized in studying how fireflies and other creatures communicate has won a million-dollar grant to conduct a pioneering five-year study of the roles that attention and memory play when the human brain hears and processes spoken language.

"This is the chance to study the ultimate form of animal communication: language," said Thomas A. Christensen of UA's department of speech, language and hearing sciences (SLHS). "Humans have evolved a very sophisticated symbolic form of communication. Language affects how we think, what we believe, how we interact with each other. I'd even go so far as to say that our future as a species depends on understanding how we communicate. But very little is known about what's going on in the brain when we're having a simple conversation."

Until recently, Christensen was a research scientist with the Arizona Research Laboratories’ Division of Neurobiology, studying olfactory communication (the sense of smell) in insects.
His research is grounded in the areas of learning and memory, systems physiology and animal communication. Encouraged by Elena Plante, head of the SLHS department, he applied for a $1 million career development award from the National Institute on Deafness and Other Communication Disorders. The grant was awarded in April.

The grant will take his career — and biomedical science — in new directions. Christensen will use UA’s state-of-the-art magnetic resonance imaging (MRI) facilities to map the areas and networks within the brain linked to language, attention and memory. The UA’s advanced MRI is a non-invasive imaging tool that is sensitive enough to show exactly what parts of the brain are involved when a person listens to another human voice.

“What you read in the text books is that if you’re right handed, then language is localized to the left hemisphere of your brain,” Christensen said. “I found out right away — that’s just not true. Analyzing a human voice also involves the right hemisphere and even parts of the cerebellum.” The cerebellum is a large part of the brain that serves to coordinate voluntary movements, posture, and balance in humans.

"These MRI images destroy the myth that you're only using about 10 percent of your brain for any particular task," Christensen said. "The crux of this grant is to learn more about the language, attention and memory centers in the brain, and also about the complex interactions between them."

Inside the scanner, volunteer subjects don headphones and perform simple language discrimination tasks in Christensen’s experiments. They’re asked to respond by pressing a button when they hear words that fall into a certain semantic category — the name of an animal, for example. Then, to make the task a bit harder, subjects are asked to respond only when they hear a woman’s voice speak a word in the chosen category. The task taxes attention even more when subjects are asked to respond to a woman’s voice speaking a ‘target’ word in one ear at the same time a man’s voice is speaking words in the other.

The cognitive unconscious is distinct from the psychoanalytic unconscious and is defined in experimental psychology as including mental processes which can influence behavior while remaining outside phenomenal awareness.
On this basis, it was proposed to relate the psychological phenomena of ‘unconscious attention’ and implicit perception to characteristic physiological mechanisms in the human brain, namely the incomplete parallel cognitive processing of sensory inputs which remain on the fringe of the consciously attended objects or events.

The MRI scanner, which records activity throughout the 45-minute sessions, has revealed multiple regions and networks, some deep within the brain, that scientists didn't suspect were involved when the brain listens.

“We’re getting a snapshot of what that activity is across the population. What’s so striking is how clearly we see that certain areas of the brain are strongly engaged in attentional control while other areas are not. As we scan more volunteers, we’re definitely beginning to see a pattern here.”

Christensen's research on the brain-governing system we call "attention" (how the brain selects only some information from its environment and is able to focus awareness on objects and events relevant to immediate goals) is profoundly relevant to such disorders as schizophrenia, ADHD and many other impairments that affect language abilities.

“ADHD (Attention Deficit Hyperactivity Disorder) is probably one of the most over-diagnosed disorders of our time,” Christensen said. “The reason for that, I think, is that we really don’t know very much about the biological basis of this syndrome.
There’s a lot of research on it, but there’s still a lot of disagreement about what the root cause is, and about whether drugs like Ritalin that are being prescribed to children as young as 2 years old are doing any good, and if we have any business exposing our children to drugs at such a very early age,” he added.

As Christensen collects more MRI data that show the connections among areas of the brain that are strongly engaged in language tasks, he plans to collaborate with computer modeling experts. “We could develop a mathematical model that would allow us to generate hypotheses about what we expect if we deliver a certain type of stimulus. We’d see what effect it would produce in our model.”

Simulating brain activity in the mathematical model “would take the whole question of language processing beyond just looking at blobs of activation in the brain. That’s what I hope to do,” Christensen said.

What is coming to light, due to the many research projects in motion, is that very little is actually known of how the brain and mind function.

Meditation, prayer, certain music and thoughts, and relaxation exercises tend to calm and quiet down the 'conscious' mind, allowing the 'subconscious' mind to function better.
Harsh words, thoughts and actions are like malware and viruses in a computer; they can and will contribute to difficulties in one's life experiences.
(Part 1 of 3)

http://betterexplained.com/articles/unicode/
http://forums.cnet.com/5208-7583_102-0.html?threadID=104624
http://www.techspot.com/vb/topic41189.html
http://www.howstuffworks.com/bytes.htm
http://www.madsci.org/posts/archives/nov98/911697403.Me.r.html

http://www.thetalker.org/archives/449/talker-on-mind-as-a-computer-part1/
http://www.thetalker.org/archives/509/32-talker-on-mind-as-a-computer-part-2-of-3/
http://www.thetalker.org/archives/521/mind-as-a-computer-part3/
