More on semantics and phonology

by Pieter Seuren

So let us accept the general point that humans are equipped with heavy, largely innate, machinery for dealing with the outside world, processing external stimuli in highly specific and highly functional ways, while science, in dealing with that outside world, faces the task of undoing, as far as possible, these specific processing effects and thus of approximating the ideal of a dehumanised analysis and description of that outside world ‘an sich’, as Kant put it. (The prototypical example of this ‘undoing’ is the contrast between our naïve perception of the sun rising and setting and what we consider to be physical reality: we now know that our perception of a rising and setting sun is an illusion brought about by the parameters of human perception. In reality there is no ‘rising’ and ‘setting’, just gravity-driven near-circular movements of planets around a light-emitting sun.) The scientific effort has, of course, been a central factor in the emergence of our modern, technologically advanced society. Yet it has hardly been recognised to this day that it follows that, in applying that same scientific method to the human mind, now objectified as part of the ‘outside world’, the first thing to do is to find out about the specifics of the processing machinery that makes humans come to terms with their environment. From this general perspective we can now see that this is what gave birth to phonology, as opposed to phonetics. And this is what should make us look for the human system of logic, as opposed to the ‘objective’ mathematical logic hitherto used in formal semantics.

Now consider the effect that the native acquisition of a language’s phonological system has on the perception of speech sounds. As the phonological system of a language is acquired by a monolingual infant, the child loses the ability to perceive phonetic differences that are irrelevant in its newly acquired phonological system. Unsophisticated central and southern Italian speakers, for example, who do not have the phoneme /ü/ in their language or dialect but only the phonemes /i/ and /u/, hear [ü] as either /i/ or /u/, depending on context, but most northern Italians are sensitive to the difference, since it is functional in their dialects. In a way, this might be taken to play into the hands of Whorfians, who wish to see the mind as a tabula rasa and all mental processing as the result of external stimuli, such as speech sounds, in relation to presumed regularly co-occurring behavioural patterns. If they used this argument (which they do not), they would have a point, in so far as it is undoubtedly the case that the acquisition of a specific phonological system channels and restricts the perception of speech sounds according to the native phonological system. Phonetics found its origin, in the late 18th century, in the effort to undo these natively acquired biasing effects. Yet there is no evidence at all that the acquisition of a native phonological system has any influence on acoustic perception in general: the language-specific perception effects are limited to speech sounds and have no bearing on, say, the perception of music or other kinds of acoustic input. And Whorfians have never said they do.
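A toy numerical sketch may make this ‘channelling’ effect concrete. Suppose (all figures below are invented for illustration, not measured data) that each native phoneme is a prototype point in formant space and that a listener assigns any incoming vowel to the nearest prototype in his or her native inventory. A listener without /ü/ then simply has no category for an incoming [ü] and assimilates it to /i/ or /u/:

```python
# Toy model of native-category perception: an incoming vowel is assigned
# to the nearest phoneme prototype in the listener's inventory.
# Formant values (F1, F2 in Hz) are rough illustrative figures only.
import math

INVENTORIES = {
    "central/southern Italian (no /ü/)": {"i": (280, 2250), "u": (310, 870)},
    "northern Italian dialect (with /ü/)": {"i": (280, 2250), "u": (310, 870),
                                            "ü": (270, 1750)},
}

def perceive(f1, f2, inventory):
    """Return the phoneme whose prototype lies closest to the input sound."""
    return min(inventory, key=lambda p: math.dist((f1, f2), inventory[p]))

incoming = (270, 1750)  # an [ü]-like vowel
for listener, inventory in INVENTORIES.items():
    print(listener, "->", perceive(*incoming, inventory))
# The /i/-/u/ listener maps [ü] onto /i/ here; context, which this toy
# model ignores, could tip the choice towards /u/. The listener whose
# dialect has /ü/ perceives it as a category of its own.
```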

But Whorfians do say that the meanings of words, morphological elements and syntactic constructions in any given language influence (some even go so far as to say determine) ‘thinking’ and conceptual content and structure in a general, cognitive sense. I strongly reject that thesis. The long Chapter 2 of my forthcoming book From Whorf to Montague (OUP, September 2013) is devoted to a refutation of Whorfianism from all points of view. My thesis is that human thinking, and conceptual content and structure with it, is subject to strong and robust innate principles and constraints, which allow for and at the same time restrict culture-specific variations, and that these variations are recognisably reflected in the lexical and constructional meanings of the languages spoken in and by each language community. I thus claim the opposite of what Whorfians claim.

Yet I do allow, in very general terms, for some marginal effect of language on peripheral areas of cognition, in particular in the area where thinking is being prepared for linguistic expression—what Dan Slobin has called (in a 1987 article) “thinking-for-speaking” or, in the words of Pim Levelt (in his 1989 book Speaking, pp. 144–157), “microplanning”. There are indications that, for example during the search for the proper lexical items, the semantic range and the grammatical category of the items in question influence to some extent the precise shape of the proposition that is fed into the grammar module yielding the linguistic output.

I’ll give just two examples. First, suppose an English speaker wants to express the proposition that the crime of murder remains punishable by law for an indefinite period of time. Our English speaker will search his or her mental lexicon for the proper lexical item expressing this state of affairs and will hit on the nominal expression statutory limitation. (S)he will then formulate the appropriate sentence, coming up with something like There is no statutory limitation on murder. A German speaker who wishes to say the same thing, however, will have to use a totally different syntactic structure, since German has the verbal predicate verjähren, meaning, among other things, ‘be no longer punishable by law’—something like ‘superannuate’—leading to a sentence such as Mord verjährt nicht (‘murder does not “superannuate”’, though that is, of course, not proper English). Since the two sentences are identical in meaning but very different in linguistic form, there must be a prelinguistic level at which the semantic content gets differentiated according to the language concerned, depending on the availability of the appropriate lexical material. (Needless to say, such cases are a nightmare for developers of machine-translation programs.)
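For what it is worth, the point can be caricatured in a few lines of code. The ‘semantic’ representation and the syntactic frames below are invented for the occasion and are no serious model of microplanning; the sketch is only meant to show that one and the same prelinguistic proposition is shaped differently depending on which lexical item, of which grammatical category, the language happens to offer:

```python
# One shared prelinguistic proposition, two language-specific shapes.
# Representation and frames are invented for illustration.
PROPOSITION = ("NO_TIME_LIMIT_ON_PUNISHABILITY", "MURDER")

LEXICON = {
    # English offers a nominal, forcing an existential frame.
    "English": {"lex": "statutory limitation", "category": "noun",
                "frame": "There is no {lex} on {arg}.", "arg": "murder"},
    # German offers the verb 'verjähren', forcing a negated verbal frame.
    "German": {"lex": "verjährt", "category": "verb",
               "frame": "{arg} {lex} nicht.", "arg": "Mord"},
}

def verbalise(proposition, language):
    """The proposition is the same for both speakers; only the lexicon
    consulted differs, and with it the syntactic frame chosen."""
    entry = LEXICON[language]
    return entry["frame"].format(lex=entry["lex"], arg=entry["arg"])

print(verbalise(PROPOSITION, "English"))  # There is no statutory limitation on murder.
print(verbalise(PROPOSITION, "German"))   # Mord verjährt nicht.
```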

Second, languages differ as to what must be expressed and what is optional. Thus, in a language L with the obligatory category of evidentiality, one has to specify the source of the information expressed in any assertive proposition: personal perception or experience, rumour or hearsay, and so on. Other languages impose no such requirement. A speaker of L is thus forced to think about the information source and has to classify it into one of the grammatical categories available in L before (s)he can produce the corresponding utterance.
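A hedged sketch along the same lines: the language L below and its suffixes are invented (loosely modelled on descriptions of Quechua evidentials). The point is only that an assertion in L cannot be generated at all without classifying the information source, whereas English leaves that slot optional:

```python
# Obligatory vs. optional evidentiality, caricatured. The suffixes of
# the invented language L are illustrative, not data.
EVIDENTIALS_L = {"witnessed": "-mi", "hearsay": "-shi", "inferred": "-cha"}

def assert_in_L(stem, source):
    """Language L: no well-formed assertion without an evidential suffix."""
    if source not in EVIDENTIALS_L:
        raise ValueError("speaker must classify the source as one of: "
                         + ", ".join(EVIDENTIALS_L))
    return stem + EVIDENTIALS_L[source]

def assert_in_english(sentence, source=None):
    """English: the source may be marked ('reportedly') but need not be."""
    if source == "hearsay":
        return "Reportedly, " + sentence[0].lower() + sentence[1:]
    return sentence

print(assert_in_L("para", "hearsay"))       # para-shi
print(assert_in_english("It is raining"))   # fine with no source at all
# assert_in_L("para", None) would raise: L forces the classification.
```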

This kind of mental activity is what Slobin calls “thinking-for-speaking” and what Levelt calls “microplanning”. Yet this language-specific ‘preshaping’ of propositional thought comes very late in the cognitive preparation for speaking, and there is no evidence at all that it has any influence on deeper, more central layers of conceptual structure and patterns of thinking.

If all this is correct, we have a further parallel between phonology and semantics. Both show a marginal, superficial or peripheral influence of any given specific language on cognitive processes. Phonology shapes the perception of speech sounds but not of other sounds, while semantics shapes the last stage of the formation of propositional thought before its actual linguistic expression through the grammar module, but leaves the central processes of concept formation and thinking patterns unaffected. Stuff for further thought.
