The Technicity Thesis: a constructionist proposition
Micheál
Ó Dúill, logios.org@googlemail.com
Logios.Org
Abstract
Constructionism entails learning by
constructing objects open to inspection, which Papert claims is the most
felicitous means of learning. This raises many questions. The cornerstone
question is: How is the human capable of constructing objects? The keystone
question is: How does this make learning most felicitous? An answer to both
questions is offered by the proposition of a small, unique information
processing adaptation in the human brain. The term ‘technicity’ is adopted to
denote this source of the human capacity for technology (and art). Information
entropy at this source is far lower than that of environmental sensory input.
The consequence is that technology is both simpler and more powerful than
biological organisms. Mentally constructed, the concepts derived from the
technicity adaptation are shown to be more congruent with properties of matter
than are perceptually based concepts for which language is the evolved
communication medium.
Keywords
Information, entropy, human evolution,
technology, art, concept formation, primary school, Logo, body syntonicity,
turtle geometry.
Introduction
This paper is the culmination of a decade
of work on the technicity thesis. The trigger was certain behaviour by children
with learning difficulties that raised the question: How do humans draw? This
broadened into the more general question of technology and art, on which
science was silent. Like the dark side of the moon, it appeared perceptually
inaccessible. The first step in developing the notion of technicity, the
linking of feature detector neurones (Hubel 1995) with drawing, was presented at
Eurologo in Portugal. In Warsaw, the square/diamond effect
offered supportive evidence that also called into question language primacy. Paris saw a more developed hypothesis based on the more recent understanding of the role
of prefrontal cortex, a matter of interest in primary and special education (Ó
Dúill 2010). Plausible though this proposal was in terms of the “how” of
constructionism, it did not explain “why” it was felicitous. The cornerstones
were in place but there was no keystone to hold the edifice together. For this
the concept of ‘entropy and information’ (Stonier 1990)
was required. This is used further to develop the technicity thesis.
Working from first principles, genetic,
neurological and informational knowledge is assembled to offer a new
perspective on human evolution. Key is recognition that information available
to the genotype is of a different quality from that available to the phenotype.
The former is of lower entropy, thus more powerful. The mechanism proposed in
the earlier papers, reprised below, makes this information available to cognition.
The thesis explains why technology is both simple and powerful. It precisely
defines the difference between scientific and naïve concepts; and
demonstrates this by the square/diamond effect. The secondary intellectual
quality of language is also revealed, leading to reconsideration of Vygotsky’s
(1962) ideas on thought and language.
Most work used in developing the thesis has
reached the realms of non-specialist science. Key sources include Lewin (1998),
human evolution; Striedter (2005), brain evolution; Fuster (2008), prefrontal
cortex; Dawkins (1989, 1999), genetics. The thesis is contrasted with the
social brain and language theories (Dunbar 2004a, 2004b) and triangulated against
current views on human evolution (Mellars 2007); child development, notably in
drawing (Anning and Ring 2004); but mainly against the everyday experience of
teachers and parents in the primary school years.
The Technicity Thesis
Technology is simple. Biology is complex. This
is the conundrum. Simplicity is of low entropy: it requires little information
to fully describe something simple. The second law insists that it is not
possible to reduce entropy without doing work, without the expenditure of
so-called free energy. Given that biological processes all increase complexity
and thereby entropy, the appearance of low entropy entities, such as purified
red ochre, associated with the earliest human, seems to be physically
impossible. The Neanderthal, with a larger brain than modern humans and with
very similar neurological architecture, signally failed to develop anything
recognisably technological.
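The information-theoretic sense of this claim can be made concrete with the standard Shannon formula (quoted here as background; it is not drawn from Stonier's text). For a source with N possible, equally likely descriptions,

H = -\sum_{i=1}^{N} p_i \log_2 p_i = \log_2 N \quad (p_i = 1/N),

so the fewer the alternatives a full description must distinguish, the lower the entropy: something simple needs few bits to specify.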
A critical distinction needs to be made
between creating technology and making things, including tools. There is a
discontinuity between animal artefacts and human technology. Animals construct
their artefacts according to genetically determined templates. The result is
that their constructions are species specific. The test for genetic templates
is stasis over time. The tool assemblage of the Neanderthal remained unchanged
for hundreds of thousands of years. Pigment production, whilst characteristic
of the human and thereby a species identifier, has not remained static.
Seeing red
Isolating a pure primary pigment like red
is not trivial. The signal processing overhead required to extract red from the
image at the retina would be very high if noise removal were used. This is not
the evolutionary approach. Colour vision is an early adaptation. Fish, as
children know from the classroom aquarium, have a very good colour sense. The
underlying mechanism is the same in goldfish as in the human, though its
location and scale differ. Hubel (1995) and colleagues first described this
mechanism in primates, referring to the neurological structures as “blobs” from
their histological appearance. These computational units are necessary because
the receptors in the retina cannot fully resolve light into spectral colours, a
consequence of the photochemicals used. Information on light colour is lost in
the chemical reaction a photon energises in a receptor. The result of that
reaction is a nerve impulse. A nerve impulse is not an analogue of sensation;
it is a symbol. From a computational perspective, it is a symbol on a Turing
tape with no intrinsic meaning. The meaning of the symbol emerges only when it
is read by the machine and causes a change in its state. A symbol may only
cause a change in machine state if a machine already has information about its
meaning. This implies that information about the redness of red is built into
the nervous system. That information about photons of 470 THz frequency,
primary red, is built into the brain raises the question of its origin.
The source of the information is the
genotype, not the environmental experience of its phenotype. This solves the
entropy problem. Evolutionary processes work in geological time, not lifetimes.
Genes have had aeons in which to incorporate information on properties of matter
in their four-base code. A little reflection shows that they incorporate a very
great amount of such information in order to build the body of the phenotype. The
evolution of distance senses required the genome to have information about the
medium used and to express that information in a suitable structure. Such
structures are a function both of the property of matter to be discriminated, e.g.
pressure in sound waves or photon frequency, and constraints of neurone
function: the excitatory/inhibitory character of synapses in particular. In
colour vision this leads to a system that generates a form of false colour
rendering of spectral colours, which bends the spectral segment that is visible
light back on itself to form a circle with non-spectral purple. The system defines
a colour space by the opponent pairs: red/green, blue/yellow and black/white. In
this way, photon frequency becomes a means of differentiation (red fruit
contrasts with green leaves) and is incorporated into instinctual behaviour. This
implies that all animals, including the human, with the same photon
identification system ‘see’ a 470 THz photon as “red” in the same way. This
does not imply that they all have a concept of red. For red to become a concept
rather than a percept, it is necessary that information on pure unassociated
colour be made directly available to cognition.
Neurone nature
Neurones are metabolically expensive
informavores. Their representation increased in response to environmental
complexity; in hominines at the expense of the gut. This implies that a
capacity to model the world is adaptive. In mammals, neocortex evolved. This
makes social behaviour and planning possible and inhibits instinct. Birds have
a homologous structure. Prefrontal neocortex underwent the greatest relative
expansion: from some 3% in the cat through 17% for chimpanzees to 27% for the
human (and Neanderthal). In hominines, overall brain expansion was from some
450cc in chimpanzees to a Neanderthal maximum of over 1400cc. Expansion was
accompanied by invasive connection of prefrontal neurones to most other parts
of the brain. This is the means by which prefrontal cortex performs its
executive function; modulating and moderating actions of the older brain; and
manipulating memory from neocortex, motor and sensory, to create new
possibilities from historical information: to plan for the future and to modify
that plan in the light of experience. Over-production of neurones and
connections is considerable. Both are pruned, leading to the loss of some 50%
of neurones and connections by adulthood; neurones that receive no input die. Aggressive
invasion by imperialistic prefrontal neurones turned out to be adaptive. There
is no reason why this process should have ceased by the time that the
Neanderthal, human and Denisovan shared a common ancestor, somewhere in the
region of half a million years ago. The stage is now set to consider the
adaptation that led to the human capacity for technology.
Creative connections
The technicity thesis proposes that, in the
human, prefrontal neurones invasively connected to primary sensory cortex and
its homologues, thereby making available to cognition the information the genome
expressed there and the manner of its structural expression. Such structures
are active neurone circuits. They may be activated either by sensory input or
by a probe from elsewhere. No teleology is involved; an ongoing neurogenetic
process of expansion simply ran into a new class of information. When
manipulated by the prefrontal cortex, the result was the creation of novel
cognitive entities and these turned out to be adaptively advantageous. The
proposed change in neural connectivity is shown schematically in figure 1.
[Figure 1: schematic showing three levels (executive, association, sensation); prefrontal cortex divided into the lateral convexity (cognitive) and orbitomedial (affective); sensory (perception), “feature detection” and motor (action) areas; and the older pre-cortical brain (need, attention, motivation, reaction).]
Figure 1. The architecture
of the technicity adaptation. Extension of prefrontal neurone connection to the
genomic information available in primary sensory cortex (arrow passing through
the unshaded area) may be compared with the normal sensory-perceptual route
(connections through the blue shaded areas). The type of information made
available by the technicity adaptation from so-called feature detector neurones
is illustrated symbolically. Note the reciprocal connectivity to the rest of
the brain from prefrontal cortex and extensive connection between its cognitive
and affective divisions.
That humans consider purple to be a colour
suggests that these neurone circuits are the source of colour concepts. The
question now becomes: What other information might be available from this
neural structure? The list includes: line length, angle, and direction of
motion in primary visual cortex, and pitch in primary auditory cortex. The human ability
to identify and blend notes to make perfumes and flavours in cuisine suggests
olfactory bulb connections. There may well be others. For the present
discussion, only line length and angle, the foundations of geometry, are needed.
Conflicting concepts
Linear cut-mark designs on bones are taken
as an early sign of human behaviour. The Platonists identified geometric shape
as an aspect of ideal form. Such activity is based on the composition of line
length and angle information in prefrontal cortex. One ideal shape, the square,
can affirm Papert’s proposition about the felicity of construction. The shape
may be physically constructed by folding; with straightedge and compass; or by the
turtle commands repeat 4 [fd (number) rt 90]; none of which is orientation sensitive.
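As a minimal sketch of this turtle construction in a standard Logo dialect (UCBLogo syntax assumed; the :size input is illustrative, not part of the original formulation):

to square :size
  repeat 4 [fd :size rt 90]
end

Because fd and rt are relative to the turtle's own position and heading, the procedure specifies only equality of side and angle; it draws the same figure whatever the turtle's initial orientation.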
Now consider figure 2.

Figure 2. The square/diamond
effect demonstrating the conflict between V-concepts and T-concepts.
Both shapes are products of technicity and both are square, but the
characteristics of the perceptual system lead to the perception, and naming, of
two distinct objects.
Both shapes consist of pairs of equal
length equidistant parallel lines intersecting at a right angle: a square. But
the one-eighth rotation of the leftmost brings to mind a different, distinct
form and name: diamond. At one level there is a single concept with a single
verbal description; at another level two different linguistic concepts exist. Scientifically,
they are the same cognitive construct, a product of technicity sourced from
line orientation information in primary sensory cortex. Only when processed by
the visual system does orientation become an issue: the result is a perceptual artefact.
Whilst this form is unique in generating the effect, it signals a general
conceptual issue: the well documented divide between science and naïve
perception. As the physical construction of a square and its rotation, and
inspection of the effects of rotation, can overcome the perception of two
objects in figure 2, so technicity-based constructs overcame the strong perception
that the sun revolves around Earth.
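The rotation demonstration can be sketched in the same hedged way (standard Logo assumed; positions and sizes are illustrative):

to square :size
  repeat 4 [fd :size rt 90]
end

clearscreen
square 100                              ; heading 0: named “square”
penup setxy 150 0 setheading 45 pendown
square 100                              ; the same construct at heading 45: named “diamond”

One and the same T-construct produces both drawings; only the initial heading, which the procedure never mentions, differs.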
Why is the technicity adaptation more
powerful than perception and language? The answer lies in relative entropy,
which the square/diamond effect helps clarify. The square is described only by
equality of side and angle, as the turtle geometric formulation demonstrates.
The square/diamond effect includes orientation. Less information is needed to
describe the technicity construct than to describe the perception-derived
concept. It is, therefore, of lower entropy. Recalling that the brain is a
physical system, Carnot principles show the technicity concept to be the more
powerful.
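A crude, illustrative way to put numbers on the comparison (the symbols N_s and N_o are introduced only for this sketch): suppose side length is resolved to one of N_s equally likely values and orientation to one of N_o. The turtle formulation fixes the angle at 90 degrees and never mentions orientation, whereas the perceptual description must also record it:

H_T = \log_2 N_s, \qquad H_V = \log_2 N_s + \log_2 N_o, \qquad \text{hence } H_T < H_V.

Whatever the actual resolutions, the T-concept's attribute list is a proper subset of the V-concept's, so its description length, and hence its entropy, is lower.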
The effect also demonstrates the weakness
of language noted by Papert. Rotating produces the effect and resolves it: two
perceptual, verbally denoted concepts resolved into one. In so doing it produces
a conflict with language. This is to be expected. Language, which serves
perceptual processes, was fully evolved before the human technicity adaptation
arose. Technicity, though the more powerful cognitive capacity, is verbally
inarticulate and consequently must convey thought through constructed physical
forms: Papert’s objects open to inspection. Here lies both the power of the
technicity adaptation and the difficulty of its verbal communication.
In technicity contexts, language lacks the
means of expression and new terms must be coined. In this case the word
“concept” has ceased to be adequately expressive. It is proposed to resolve
this by prefixing. Concepts originating from the technicity adaptation will be
T-concepts. Those that arise by normal perceptual processes become V-concepts.
The T denotes technicity as the source of the concept and honours Alan Turing (Ince
1992) whose thinking helped their identification. V denotes the perceptual/social/verbal
nexus that is the foundation of these concepts and honours Lev Vygotsky (1962)
who first described their formation. The crucial difference between the two is
that a V-concept may be accepted because of its internal linguistic consistency
but a T-concept is consistent with the behaviour of the physical world; to
which technicity uniquely provides the human with cognitive access. Some
differences between these two qualities of concept are shown in table 1.
T-concept | V-concept
Technicity based (genomic) | Perception based (experiential)
Non-linguistic (constructed product) | Verbal (internal and spoken utterance)
Low entropy (simple and powerful) | Environmental entropy (complex)
Species level (universal) | Culture level (specific)
Tested against properties of matter | Tested for cultural consistency
Table 1. Some differences in
quality between T-concepts and V-concepts.
As an aside, it may be noted that, although
only indirectly derived from language, mathematical formulations must also be
proved against the real world.
Art and aesthetics
Prefrontal cortex has two divisions,
affective and cognitive. The relationship between the two is described by
Damasio (2006) and Fuster (2008). The effect is to give art and technology the
same foundation, the difference being largely in affect: the constructions of
the technologist have less affect when perceived than do those of the artist.
However, at the creative construction stage they both entail control over the
properties of the materials employed. In order to play a violin concerto it is
necessary to compose the music, using pitch information from primary sensory
cortex, and build a violin using craft knowledge of wood and fibres. Science
can describe the relationship between string length and pitch but evocative
sounds rely on the craft exercise of technicity.
Triangulating technicity
The technicity thesis is a proposition
designed to be tested for congruence with reality; merely to be plausible is
insufficient.
Child development and activity in kindergarten
and primary school are by far the best sources of evidence, though little
regarded by constructionists. By the age of eight, as figure 3 shows, the basic
information that the technicity adaptation provides has begun to be combined
into complex representations, though the purity of that information still
shines through in simple geometric constructions and primary colouring. The
drawing shows the use of mental processes that are also present in the
structure of language. If these forms do not originate from the technicity
adaptation, how does an immature mind extract them from sensory input, from
environmental information?

Figure 3. A drawing by an
eight year-old girl, illustrating the composition of primary-sensory-sourced
information to create an aesthetically pleasing and expressive communication.
The earliest signs used to identify the
presence of the human include pigment processing and the presence of points and
other geometric microliths used to make component-built tools. There is no
evidence that the larger-brained Neanderthal ever progressed beyond the
standard Mousterian tool assemblage even when coexisting with the human. Neither
is there evidence of artistic ability nor of any ability to organise living
space (Finlayson 2010).
The mechanism proposed for the technicity
adaptation is consistent with current knowledge in the fields of genetics,
brain evolution and the role of prefrontal cortex. That the technicity
adaptation comes on stream during the years of elementary education, from infancy
to puberty, is consistent with the finding that prefrontal maturation takes
place during this phase and is highly influenced by experience: hence the
universal importance given to primary schooling.
There has long been the issue of the gulf
between the “two cultures” of the sciences and arts (or humanities). This was
categorised in terms of cocktail party conversation by Snow (1963) using
Shakespeare and the Second Law as exemplars. In selecting Shakespeare, Snow
placed the focus on the socio-linguistic domain, which has great evolutionary
depth. This contrasts with the fruits of technicity, which are recent and have
only secondary linguistic representation.
Finally, there is the issue of entropy.
Technology is of far lower entropy, defined in both physical and information
terms, than biological phenotypes but is commensurate with that of genes. This
means that technological forms created from this information have greater power
than those that originate from perception: simple T-concepts are more powerful
than complex V-concepts.
Summary
The economy of the technicity thesis is
greater than its alternatives: language and the social brain. Some elements
from which technology and art are constructed, and against which the verbal
(and mathematical) hypotheses of science are tested, are listed in table 2.
Colour | Line | Motion | Pitch | Chemical
Pigment, Art, Spectrum, Photons | Shapes, Architecture, Writing, Geometry | Projectiles, Choreography, Machines, Entropy | Tone, Music, Time, Relativity | Flavour, Cuisine, Molecules, Particles
Table 2. Some sources of
genomic information expressed in neurone circuits and behavioural correlates.
A cognitive consequence of the
neurological architecture of technicity is an additional quality of concept.
Directly sourced from low entropy information, T-concepts provide the
entrée to science through technology and moderate social V-concepts
derived from verbal-perceptual experience.
Technicity and linguistic thinking:
T-concepts vs. V-concepts
Thought, from a technicity perspective, is
not language. T-conceptual thinking, by definition, is non-linguistic thought.
In the case of music, dance, games, visual arts, architecture, mechanical and
electronic design and production, and mathematics, the involvement of language is
minimal: reducing to injunctions such as, “Do it like that.” Scientific enquiry
is different. Academic means of communication are largely verbal. Conceptual
frameworks shared between peers are expressed in language. Testing of
scientific concepts is carried out, however, not against rigorous linguistic
formulations, as is the case with philosophy and mathematics, but in terms of
congruence with physical reality. Thus, the foundation of science is the
technology devised to verify new ideas. Old ideas expressed in language and based
on established perceptual processes are resistant to change because they fit
the current view of reality. Advances in science appear to be outlandish when
first proposed, even to eminent scholars: vide Einstein and quantum theory.
Acceptance of scientific ideas, however unreasonable they might appear to
V-conceptual thought, comes about because they work out in practice. This cognitive
conflict explains the time needed for scientific ideas to take hold and the
difficulty that many people have in accepting them. The V/T concept division
may also lead to misconceptions, particularly where a large intellectual
investment has been made. In the constructionist community there is a nice example
of this process at work.
LOGO and Turtle Graphics
At the time the microcomputer entered the
primary school classroom there was much discussion about its role: tool, tutor
or tutee. For the present, the first is dominant. Papert, with a computerist
background, was concerned as much with the programmability of the medium as with
its potential, as he saw it, to catalyse the early development of Piagetian
formal operational thinking. LOGO, as a formal programming language, might offer
a means to this end. Work with the button box and the floor turtle suggested an
entry point for young children of kindergarten and primary school age. The
simple ‘forward, back and turn’ commands to this small robot spawned turtle
graphics and its academic big brother turtle geometry. Turtle graphics was
simply the name for turtle drawing. Its academic variant offered educational
kudos: a flexible relative geometry to complement the rigid frame of Cartesian
coordinates. Turtle geometry (Abelson & diSessa 1981) was new math. Papert
saw it as a means of inculcating mathematical thinking at primary school level.
At another time in another place, it might have become a carefully researched
PhD project. At the time, however, it was a poorly researched vehicle for
getting computer science ideas into primary education. When primary school
teachers expressed concerns about the subject matter in relation to the shape
and space curriculum they taught, they were condemned as conservative and
obstructive. A culture of questioning the professionalism of primary school teachers
made this appear not unreasonable.
Both Papert (1980) and the authors of
Turtle Geometry reported so-called bugs in the children’s thinking. Three classic
bugs are shown in table 3 along with their associated explanations. These
explanations seem reasonable and have an authentic mathematical and computer
science feel. The suggested general solution was to “play turtle” to get the
idea of heading. Teachers tried this with children moving paper arrowheads in
different orientations. Papert went further and proposed that if children
“Walked Turtle”, a cognitive phenomenon that he called “body syntonicity” would
lead to the internalising of the concept of heading and the ability to describe
shapes in “Turtle Talk”. However, experience in the primary school classroom
with turtle graphics suggested that there was a problem of greater cognitive
depth and the idea of body syntonicity was questionable.
Bug | Triangle | House | Man
Target | [figure] | [figure] | [figure]
Outcome | [figure] | [figure] | [figure]
Explanation | Thinking about the internal angle of the shape rather than the “heading” of the Turtle | Failure to realise that an “interface” procedure is needed to place the Turtle in the right state to draw | Solved by breaking the drawing into procedures for the parts and then combining them, but see the house bug
Table 3. The turtle
geometric bugs reported in Mindstorms and Turtle Geometry.
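The triangle bug can be reconstructed in Logo for illustration (this is a sketch, not the children's actual code): turning through the 60-degree internal angle leaves the figure open, while turning through the 120-degree change of heading closes it.

; the reported bug: the turn is taken to be the internal angle
to badtriangle
  repeat 3 [fd 100 rt 60]
end

; thinking in terms of the turtle's heading: turn through the exterior angle
to triangle
  repeat 3 [fd 100 rt 120]
end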
Body syntonicity
Papert expressed the idea of body syntonicity,
derived from Freud’s ego syntonicity, as follows:
“The
Turtle circle incident illustrates syntonic learning. This term is borrowed
from clinical psychology and can be contrasted to the dissociated learning
already discussed. Sometimes the term is used with qualifiers that refer to
different kinds of syntonicity. For example the Turtle circle is body syntonic
in that the circle is firmly related to the children’s sense and knowledge
about their own bodies. Or it is ego syntonic in that it is coherent with
children’s sense of themselves as people with intentions, goals, desires, likes,
and dislikes. A child who draws a Turtle circle wants to draw the circle; doing
it produces pride and achievement.
Turtle
geometry is learnable because it is syntonic.” (Mindstorms p.63)
This argument is a linguistic one based on
observation and analogy. The notion is V-conceptual. When referred to physical
reality it is seen to conflict with the childhood development of drawing, which
is instrumental. The constructive processes of technicity make shapes, like the
ones played with in infant posting boxes and in kindergarten. The child has a
concept of circle already in mind; the earliest scribbles are circular. It
follows that stepping around a circle and describing the action is but to create
a mnemonic to link to the programming language. Papert’s linking with the
aesthetic is, however, entirely consistent with technicity and is of high
educational importance.
Rotating squares
Unqualified hindsight is of little value,
but when informed by a new perspective can help to guide thinking. On the
preceding page of Mindstorms is the illustration in table 4 column two.
Rotating figures was a pastime that mathematics educators liked because it
emphasises the invariance of the form, illustrates symmetry and is
aesthetically pleasing. The rotation here is 120°, the
angle of turn for the turtle triangle. Papert suggests other angles be tried,
illustrating the shape produced by 36°. A turn of 45° (table 4, column 3) is
not mentioned. The shape is disturbing, dissonant. Conceptual conflict arises
from V-conceptual perceptuo-linguistic effects. It is perceived as two
different figures and not as the same one rotated. It feels anti-mathetic. When
repeated eight times the figure in column 4 is generated. Here diamond and
square vie for dominance.
to square repeat 4 [fd 50 rt 90] | repeat 3 [square rt 120] | repeat 2 [square rt 135] | repeat 8 [square rt 135]
[figure] | [figure] | [figure] | [figure]
Table 4. Rotations reported
in Mindstorms and the square/diamond rotation.
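For convenience, a complete listing that reproduces the Table 4 drawings in a standard Logo dialect (the end line, omitted from the table, is supplied here; cs clears the screen between figures):

to square
  repeat 4 [fd 50 rt 90]
end

cs repeat 3 [square rt 120]   ; column 2: the 120-degree rotation shown in Mindstorms
cs repeat 2 [square rt 135]   ; column 3: effectively a one-eighth (45-degree) offset
cs repeat 8 [square rt 135]   ; column 4: square and diamond vie for dominance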
Do we see the mathematician’s search for
pattern and symmetry subconsciously overlooking the dissonance? Did educational
philosophy and mathematical evangelism misdirect critical thought? Whatever the case,
the need for new ideas to be rigorously tested against reality is shown in
stark relief.
Education
It should be obvious that the technicity
adaptation imposes the requirement for education on the human. T-conceptual products
of technicity do not derive from a genetically specified capability as do
animal artefacts and communication, including human language. They cannot be
activated and refined by immersion in the social milieu. Epistemological
processes are required to transmit and increment knowledge generationally. These
matters are discussed in a companion paper.
Conclusion
A thesis is useful only if it illuminates
cognitively dark corners, suggests further research, and (preferably) has
immediate application. Technicity fulfils all these requirements and more. The
instruction/construction distinction hinted at by Papert is given a sound
biological foundation, and, fittingly for the conference location, the
Platonist’s question concerning the redness of red is given an answer; and geometry,
as written over the Academy door, takes on new meaning. Most importantly,
however, the cognitive complexity, so frequently passed over, of the primary
phase of education is thrown into sharp relief – to the possible embarrassment
of academe.
Nobody expects the second law of
thermodynamics to appear at a conference as mathematically oriented as Eurologo;
but it has now. Entropy underpins technicity, the evolutionary adaptation
unique to the human and the source of the species’ technological capability and
artistic ability. It is the power behind constructionist educational methods. It
offers both prospects and discomfort.
Acknowledgements
Richard Noss, for
redirecting a primary school teacher to Mindstorms.
References
Abelson, H. and diSessa, A. (1981). Turtle Geometry: The Computer as a Medium for Exploring Mathematics. Cambridge, MA: The MIT Press.
Anning, A. and
Ring, K. (2004). Making Sense of Children's Drawings. Maidenhead Berks:
Open University Press.
Bransford,
J. D., Brown, A. L., Cocking, R. R. (eds.) (2000). How People Learn,
Expanded Edition. Washington DC: National Academy Press.
Damasio, A.
(2006). Descartes’ Error: Emotion, reason and the human brain. London: Vintage.
Dawkins, R. (1989). The Selfish Gene. Oxford: Oxford University Press.
Dawkins, R. (1999). The Extended Phenotype.
Oxford: Oxford University Press.
Dunbar, R. (2004a). Grooming,
Gossip and the Evolution of Language. London: Faber & Faber.
Dunbar, R. (2004b). The
Human Story. London: Faber & Faber.
Finlayson, C. (2010). The Humans Who Went Extinct: Why Neanderthals Died Out and We Survived. Oxford: Oxford University Press.
Fuster, J.M. (2008). The
Prefrontal Cortex. Fourth Edition. London: Academic Press.
Harel, I., & Papert, S. (1991). Constructionism. Norwood, NJ: Ablex Publishing
Corporation.
Hubel, D.H. (1995) Eye,
Brain, and Vision. http://hubel.med.harvard.edu/index.html [20.07.2011]
Ince, D. C. (ed.) (1992). Collected Works of Alan Turing: Mechanical Intelligence. Amsterdam: North-Holland.
Lewin, R. (1998). Principles of Human Evolution. Malden, MA: Blackwell Scientific.
Mellars, P., Boyle, K., Bar-Yosef, O. and Stringer, C. (eds) (2007). Rethinking the Human Revolution. Cambridge: McDonald Institute Monographs.
Ó Dúill, M. (2010). Can there be a Science of Construction? In: Clayson, E.J. and Kalaš, I. (eds), Proceedings of Constructionism 2010 – 12th European Logo Conference. Bratislava: Faculty of Mathematics, Physics and Informatics, Comenius University.
Papert, S. (1980). Mindstorms.
Brighton: Harvester Press.
Snow, C. P. (1963). The Two Cultures and a Second Look. London: New English Library.
Stonier, T. (1990). Information and the Internal Structure of the Universe. London: Springer-Verlag.
Striedter, G.F. (2005). Principles of Brain Evolution. Sunderland, MA: Sinauer Associates.
Vygotsky, L. (1962). Thought and Language. Cambridge, MA: The MIT Press.