
Separating theorisation and testing

Diqiucun_Cunmin
Posts: 2,710
6/10/2016 12:55:12 PM
I have some questions for those who are better versed in the scientific method than I am. My question is, when scientists do pencil-and-paper theorisation without carrying out testing (though others may do so for them), is this method scientifically unsound?

Specifically, I'm thinking about dominant research methods in Chomskyan linguistics where linguists, particularly syntacticians, theorise about the mechanics behind language using their own native-speaker judgements and/or minimal data, along with frameworks and hypotheses earlier formulated by other linguists. They rarely seem to test out their own ideas, instead leaving this burden to the corpus linguists, computational linguists, psycholinguists and neurolinguists. Most of what they do is pencil-and-paper theorisation.

I feel that although the Chomskyan framework has produced many important ideas (e.g. context-free grammar, X-bar theory...), it has also grown in a rather unhealthy manner. 'Principles' are laid down by one linguist looking at one language, and then when other linguists working with other languages (or even the same language!) find counter-examples, they find ways to gerrymander their way around the counterexample or make ad hoc modifications to the original hypothesis, etc. This has led to a huge literature, a huge pile of hypotheses floating around (at least that's my impression as a non-Chomskyan lol) and a theory that is extremely complex and unparsimonious, not to mention strange. For example, Chomskyans working in the government and binding framework believe that 'I was killed' comes from the deep structure 'was killed I' - the 'I' was assigned the theta-role of patient, and then moved into the subject position to satisfy the extended projection principle.
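To give a concrete flavour of the kind of derivation I mean, here's a toy sketch in Python (entirely my own illustration for this thread - not a formalism any Chomskyan actually uses):

# Toy GB-style passive derivation, purely illustrative.
# Deep structure: the patient 'I' starts out as the object of 'killed';
# movement then raises it into the empty subject position to satisfy
# the Extended Projection Principle (every clause needs a subject).

def derive_passive(deep_structure):
    """Move the object into the empty subject slot, leaving a trace."""
    subject, vp = deep_structure['subject'], deep_structure['vp']
    if subject is None:                          # EPP not yet satisfied
        moved = vp['object']                     # theta-role: patient
        vp = {**vp, 'object': 't'}               # trace left behind
        subject = moved
    return {'subject': subject, 'vp': vp}

ds = {'subject': None, 'vp': {'verb': 'was killed', 'object': 'I'}}
print(derive_passive(ds))
# {'subject': 'I', 'vp': {'verb': 'was killed', 'object': 't'}}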

I think part of the reason why formulating theories and testing them are separate in Chomskyan linguistics is Chomsky's competence vs performance distinction. Chomsky believes he simply studies people's linguistic knowledge, rather than how language is actually produced and comprehended in real time - that is a problem for psycholinguists, computational linguists and to some extent neurolinguists to solve. But for whatever reason, his model of grammar is very like a serial processing model (if you know automata theory, you probably know part of the reason why).

So, do you find this approach to science unsound? Thanks in advance.
RuvDraba
Posts: 6,033
6/10/2016 6:13:11 PM
At 6/10/2016 12:55:12 PM, Diqiucun_Cunmin wrote:
I have some questions for those who are better versed in the scientific method than I am. My question is, when scientists do pencil-and-paper theorisation without carrying out testing (though others may do so for them), is this method scientifically unsound?

Great question, Diqi! I feel I want to give you two partial answers. One is contextual, from the history and philosophy of science; the other is professional. The answers will be partial because while I have some familiarity with Chomsky's work due to overlap with my professional interests, I can't comment in detail without more reading. But I hope to at least offer some inroads, and it might be more helpful if I merge the partial answers together rather than separating them.

Science, as you doubtless know, is based on systematic empiricism, and the key principle of empiricism is knowledge through observation. [https://en.wikipedia.org...] So theory isn't knowledge, modelling isn't knowledge, conjecture isn't knowledge. Rather, careful observation is knowledge and modelling helps us organise and make use of what we know, while conjecture and theory help us explore how much we know, and what else we might find out. Scientific empiricism is systematic, and theoretical and applied science are both part of the same system, but they're not always pulling in the same directions. That can create gaps, and across the gaps appear mutual criticisms, one of which is captured by your concern above. :)

I've worked as both a theoretical and an experimental scientist, and while both work with the scientific method, I can attest that life is very different for each.

At the risk of oversimplifying, an experimental scientist uses existing and emerging techniques to solve existing and emerging problems. Sometimes they discover totally new information and have to make sense of it, but much of the time they're verifying or falsifying results they already anticipate. The big creative challenge with experimental science is finding ways to control all the variables and get reliable, unambiguous outcomes: you can think of experimental science as a way of engineering the environment to produce clear and useable information. So developing new experimental techniques and finding new and better ways to apply them are significant achievements in experimental science, since they are the achievements that produce new results.

In a similar oversimplification, the significant achievements in theoretical science are in 'look ahead' and 'clean up'. 'Look ahead' is anticipating questions existing scientific models hadn't thought to ask and working out approaches for exploring them; while 'clean up' is developing more concise and predictive models for what we already know. A famous example of 'look ahead' is Einsteinian relativity in physics, which almost looks prescient although it wasn't. Famous examples of 'clean up' include the switch from Linnaean biological classification to cladistics, or the demotion of Pluto from planet to dwarf planet. So theoreticians mightn't say it this way, but you can think of theoretical science as engineering the language and intuitions by which we investigate and make sense of the world.

Experimental science without theoretical science runs the risk of tunnel vision: experimentalists are so obsessed with eliminating noise and extraneous variables, they can sometimes throw away new and interesting data simply because they're so focused on the problems they want to solve. Examples of data that could have been thrown away include the blurred photographic films that first revealed radioactivity, the melted candy-bar that first revealed the heating properties of radar, and the noise of cosmic background radiation that first verified the Big Bang theory.

But theoretical science without experiment runs the risk of becoming pseudoscience: theoreticians are so obsessed with linguistic and intuitive elegance, they sometimes produce ideas that are either unfalsifiable, or experimentally infeasible to test. An example of a scientific theory that risks becoming pseudoscience is Superstring theory; an example of a theory that did become pseudoscience is phrenology -- the study of personality and intellect through cranial bumps. :)

So, I suspect you're asking when theoretical science 'crosses the line' from useful conjecture to pseudoscience. There's no hard line here, but many scientists feel that this risk is especially high in the qualitative fields -- that is, fields that don't yet have enough numerical measures in place to force precise prediction and test their accuracy. Two such fields that immediately spring to mind are psychology and linguistics.

All science is contentious at times, but fields without strong enough predictive numerical measures can get ideological and tribal, with supporters of one paradigm railing at the theoretical failings of another paradigm, and vice-versa. Part of the challenge with qualitative theories is that it's hard to know how significant a particular failing is -- it's prone to interpretation, and that can lead to subjective biases.

An obvious answer is to put predictive numerical measures in place as soon as you can. However, good measures depend on mapping out mechanisms, and that requires observation of what the mechanisms are. By way of example, biology received a massive boost to its transparency and accountability with the advent of genetic sequencing. Suddenly, the relationship between every species with genetic material could be quantitatively measured, rather than only conjectured on morphological similarities and relative positions in rock strata.

For psychology, big improvements may well come with the development of neuroscience, since neurological mechanisms almost certainly underpin psychology. It may be the same with linguistics: the sooner we understand language as a neurological function, the sooner comprehensive measures can be put in place to make linguistics more transparent and falsifiable.

With that said, there are already some measures in place. An old friend of mine is a senior researcher in linguistics, and I know from dinner-table chats with him that there are a lot of measures one can already use on language itself. However language cognition -- the area Chomsky is most concerned with -- seems not to have nearly so many measures.

(And if it's any consolation, I get a kind of allergic pseudoscience shudder when I read Chomsky too. :D)

Hope that helps, DC!
RuvDraba
Posts: 6,033
6/10/2016 6:19:05 PM
At 6/10/2016 6:13:11 PM, RuvDraba wrote:
Scientific empiricism is systematic, and theoretical and applied science are both part of the same system, but they're not always pulling in the same directions.
Sorry -- that 'applied' should have been 'experimental', as I think subsequent context should make clear. :)
dylancatlow
Posts: 12,254
6/10/2016 6:44:46 PM
At 6/10/2016 12:55:12 PM, Diqiucun_Cunmin wrote:
I don't claim to know more about this topic than you do, but it seems like identifying linguistic patterns, coming up with general rules to explain them, finding exceptions to those rules and modifying the theory accordingly is a perfectly valid approach for trying to understand how human languages are constructed. I mean, if we sent an archive of human conversations to very intelligent aliens light years away, they could probably learn a lot about our communicative process just by studying it without ever actually meeting a human, assuming they had the tools to learn the very basics of the language. You don't necessarily need to conduct experiments in order to understand HOW human languages are put together and operate, because that's essentially a form of scientific "stamp collecting". Understanding why they're put together that way, or what the consequences are for human thought, etc, is probably a different story.
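To make that concrete, here's a crude Python sketch of the rule-and-exception cycle (my own toy example, using English plurals rather than anything a linguist would publish):

# Toy sketch of rule induction: start with a general rule, refine it
# with sub-rules, and patch the remaining counterexamples directly.

EXCEPTIONS = {'child': 'children', 'mouse': 'mice', 'sheep': 'sheep'}

def pluralise(noun):
    if noun in EXCEPTIONS:                      # patched counterexamples
        return EXCEPTIONS[noun]
    if noun.endswith(('s', 'x', 'ch', 'sh')):   # refined sub-rule
        return noun + 'es'
    return noun + 's'                           # original general rule

for n in ['cat', 'box', 'child', 'sheep']:
    print(n, '->', pluralise(n))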
Fkkize
Posts: 2,149
6/10/2016 7:07:08 PM
At 6/10/2016 12:55:12 PM, Diqiucun_Cunmin wrote:
I have some questions for those who are better versed in the scientific method than I am. My question is, when scientists do pencil-and-paper theorisation without carrying out testing (though others may do so for them), is this method scientifically unsound?

I don't know much about linguistics, but I believe I can still give a general answer.

There are, as Ruv described, roughly two kinds of scientists, experimental and theoretical.
Analogously there are also two kinds of theories: phenomenological and deductive. (I found the distinction in a textbook on molecular theoretical chemistry. The best way to study the philosophy of science is to study science)

Phenomenological theories are a collection of (descriptive) statements derived from experience. E.g. classical thermodynamics.
Deductive theories are based on a small set of axioms, completely mathematically formalized and consistent. E.g. nonrelativistic quantum mechanics.

It seems Chomsky is engaging in "theoretical linguistics", formulating (or trying to formulate) an analogue to a deductive theory. The point is, this is established scientific practice, and although a theorist can come up with basically anything, that does not mean everything becomes accepted scientific knowledge. These theories may not arise purely from experiments, but their support depends on them in full.

RuvDraba
Posts: 6,033
6/10/2016 9:08:36 PM
At 6/10/2016 7:07:08 PM, Fkkize wrote:
The best way to study the philosophy of science is to study science
Absolutely, Fkkize. This links to something I was hinting at with respect to science research, and was part of what motivated my question in your recent LFT thread. [http://www.debate.org...]

One can get a basic comprehension of a scientific result by reading science journalism, or get background knowledge of theory and some practical knowledge of methods by reading a suitable text. But understanding the ontology, methodology and epistemology needed to advance research means delving into the history -- and that's a prerequisite for the philosophy too. So undergraduate and postgraduate approaches to studying a field can be quite different, depending on temperament and time available to study.

And you're right that both phenomenology and deduction are legitimate science; however, as I mentioned above, they're more robust and effective together than apart. Phenomenology is less blind with exhaustive deduction; deduction is less biased with meticulous phenomenology. Nevertheless they develop as different skill-sets, and scientists tend to spend more time with one than the other.

So science is intrinsically a social activity, and understanding the sociology of science seems (to me) intrinsic to evaluating the robustness of its product.

Apropos of our present topic, I began my research in Artificial Intelligence in the mid 1980s. Chomsky was already casting a long shadow, and even then there was a lot of tribalism in cognitive science. I'm no longer researching in that area, but it sounds from DC's comments as though the tribalism continues. :)
Axonly
Posts: 1,802
6/12/2016 4:19:12 PM
At 6/10/2016 12:55:12 PM, Diqiucun_Cunmin wrote:
I have some questions for those who are better versed in the scientific method than I am. My question is, when scientists do pencil-and-paper theorisation without carrying out testing (though others may do so for them), is this method scientifically unsound?

Specifically, I'm thinking about dominant research methods in Chomskyan linguistics where linguists, particularly syntacticians, theorise about the mechanics behind language using their own native-speaker judgements and/or minimal data, along with frameworks and hypotheses earlier formulated by other linguists. They rarely seem to test out their own ideas, instead leaving this burden to the corpus linguists, computational linguists, psycholinguists and neurolinguists. Most of what they do is pencil-and-paper theorisation.

I feel that although the Chomskyan framework has produced many important ideas (e.g. context-free grammar, X-bar theory...), it has also grown in a rather unhealthy manner. 'Principles' are laid down by one linguist looking at one language, and then when other linguists working with other languages (or even the same language!) find counter-examples, they find ways to gerrymander their way around the counterexample or make ad hoc modifications to the original hypothesis, etc. This has led to a huge literature, a huge pile of hypotheses floating around (at least that's my impression as a non-Chomskyan lol) and a theory that is extremely complex and unparsimonious, not to mention strange. For example, Chomskyans working in the government and binding framework believe that 'I was killed' comes from the deep structure 'was killed I' - the 'I' was assigned the theta-role of patient, and then moved into the subject position to satisfy the extended projection principle.

I think part of the reason why formulating theories and testing them are separate in Chomskyan linguistics is Chomsky's competence vs performance distinction. Chomsky believes he simply studies people's linguistic knowledge, rather than how language is actually produced and comprehended in real time - that is a problem for psycholinguists, computational linguists and to some extent neurolinguists to solve. But for whatever reason, his model of grammar is very like a serial processing model (if you know automata theory, you probably know part of the reason why).

So, do you find this approach to science unsound? Thanks in advance.

This is a good thread so "bump"
Diqiucun_Cunmin
Posts: 2,710
6/12/2016 5:51:02 PM
At 6/10/2016 6:13:11 PM, RuvDraba wrote:
Great question, Diqi! I feel I want to give you two partial answers. One is contextual, from the history and philosophy of science; the other is professional. The answers will be partial because while I have some familiarity with Chomsky's work due to overlap with my professional interests, I can't comment in detail without more reading. But I hope to at least offer some inroads, and it might be more helpful if I merge the partial answers together rather than separating them.
Thanks, Ruv! An excellent, well-written response as usual. :) I'll try to apply what you wrote to linguistics, and see if you agree with how I've applied it.
Science, as you doubtless know, is based on systematic empiricism, and the key principle of empiricism is knowledge through observation. [https://en.wikipedia.org...] So theory isn't knowledge, modelling isn't knowledge, conjecture isn't knowledge. Rather, careful observation is knowledge and modelling helps us organise and make use of what we know, while conjecture and theory help us explore how much we know, and what else we might find out. Scientific empiricism is systematic, and theoretical and applied science are both part of the same system, but they're not always pulling in the same directions. That can create gaps, and across the gaps appear mutual criticisms, one of which is captured by your concern above. :)
Yep, exactly. :) But in a way, I feel there are two gaps in linguistics. In fact, the experimental-theoretical gap is relatively narrow compared to the other gap, because experimental methods are used relatively less in some areas of linguistics (e.g. morphology and syntax), where experimental work is often difficult and may require extensive input from neuroscience and psychology. The other gap is the one between descriptive and theoretical linguistics.

Within linguistics, we tend to have a third category (aside from theoretical and experimental, and excluding applied): descriptive linguistics. That is, a linguist goes somewhere (usually an exotic place), gathers linguistic data from the locals, and writes descriptions of their grammar. Some linguists will then use these data to test or modify linguistic theory, but some focus on description alone, finding linguistic theory to be a fruitless or even laughable effort. In fact, some of the severest criticisms of Chomsky come from R. M. W. Dixon, a field linguist best known for his grammar of Dyirbal and his work on ergativity. He believes linguists should be collecting data about languages rather than theorising in their offices, just as a biologist should be collecting data about organisms in their habitats instead of engaging in abstract theorisation at home.

While many approaches use descriptive data to inform theory in a healthy way, such as Lexical-Functional Grammar, I feel the Chomskyans aren't doing that. The best example is the Minimalist Programme, which you may not be aware of (since it started in the 1990s). The Minimalist Programme is based on a small set of 'beautiful' but odd assumptions, and researchers in this programme try to make every language fit this mould. This has led to a lot of complicated analyses that frankly make very little sense. It's hard to take a language where words can appear in pretty much any order they want, and represent the language as a solely right-branching tree - this leads to very odd trees indeed.
I've worked as both a theoretical and an experimental scientist, and while both work with the scientific method, I can attest that life is very different for each.

At the risk of oversimplifying, an experimental scientist uses existing and emerging techniques to solve existing and emerging problems. Sometimes they discover totally new information and have to make sense of it, but much of the time they're verifying or falsifying results they already anticipate. The big creative challenge with experimental science is finding ways to control all the variables and get reliable, unambiguous outcomes: you can think of experimental science as a way of engineering the environment to produce clear and useable information. So developing new experimental techniques and finding new and better ways to apply them are significant achievements in experimental science, since they are the achievements that produce new results.
Very true. In fact, we've learnt a lot of methods from psychology and adopt them quite often in linguistic research, with pretty fruitful results. One of our syntax professors is looking for ways to cooperate with our psycholinguistics professor. :P
In a similar oversimplification, the significant achievements in theoretical science are in 'look ahead' and 'clean up'. 'Look ahead' is anticipating questions existing scientific models hadn't thought to ask and working out approaches for exploring them; while 'clean up' is developing more concise and predictive models for what we already know. A famous example of 'look ahead' is Einsteinian relativity in physics, which almost looks prescient although it wasn't. Famous examples of 'clean up' include the switch from Linnaean biological classification to cladistics, or the demotion of Pluto from planet to dwarf planet. So theoreticians mightn't say it this way, but you can think of theoretical science as engineering the language and intuitions by which we investigate and make sense of the world.
Chomsky has absolutely done a great deal of 'look ahead', and it's for this reason that I still have a great deal of respect for him, even though his ideas have become increasingly strange. As for 'clean up', he has done a lot to make things concise, but I don't think his theories are particularly predictive...

By the way, as I'm not familiar with the physical sciences, is there any field where work has been done in Chomsky's manner (start with a few languages and theorise about how they work, then try to extend the theory to other languages, modifying the theory along the way but not changing the basic assumptions)? I'm thinking there might be similar work in biology, since they study organisms rather than languages.
Experimental science without theoretical science runs the risk of tunnel vision: experimentalists are so obsessed with eliminating noise and extraneous variables, they can sometimes throw away new and interesting data simply because they're so focused on the problems they want to solve. Examples of data that could have been thrown away include the blurred photographic films that first revealed radioactivity, the melted candy-bar that first revealed the heating properties of radar, and the noise of cosmic background radiation that first verified the Big Bang theory.
Diqiucun_Cunmin
Posts: 2,710
6/12/2016 5:51:22 PM
But theoretical science without experiment runs the risk of becoming pseudoscience: theoreticians are so obsessed with linguistic and intuitive elegance, they sometimes produce ideas that are either unfalsifiable, or experimentally infeasible to test. An example of a scientific theory that risks becoming pseudoscience is Superstring theory; an example of a theory that did become pseudoscience is phrenology -- the study of personality and intellect through cranial bumps. :)
That's exactly what I was thinking - Chomsky's minimalist programme is, again, a paradigmatic example of this problem. He's too concerned with linguistic 'beauty' and not concerned enough, I think, with whether his theory makes sense... and his competence/performance distinction just makes his theories even harder to test than they already are.

Chomsky's approach isn't even falsifiable by recorded linguistic data, because of his I- and E-language distinction. The thing is, while the majority of permutations of words in any language are either categorically correct or categorically wrong, there are some which are marginally right or marginally wrong - native speakers will likely disagree or equivocate about their grammaticality. Chomskyans simply use their own judgements without looking at corpora; consulting corpora was a good practice from Bloomfield's tradition, but Chomsky explicitly rejected it because of his I-language vs E-language distinction.

Lexical-Functional Grammar assumes the Strong Competence Hypothesis, which posits that competence and performance use the same mechanisms. In fact, a collection of psycholinguistic essays by Joan Bresnan was very important in the establishment of LFG as a plausible linguistic theory. Bresnan also criticised Chomsky's I-language approach and emphasised the importance of corpus linguistics.
So, I suspect you're asking when theoretical science 'crosses the line' from useful conjecture to pseudoscience. There's no hard line here, but many scientists feel that this risk is especially high in the qualitative fields -- that is, fields that don't yet have enough numerical measures in place to force precise prediction and test their accuracy. Two such fields that immediately spring to mind are psychology and linguistics.
That's very true. Phonetics has been quantitative for many years now, and phonology is increasingly informed by phonetics, so it's also getting better. Computational linguists and psycholinguists have worked hard to quantify even the more qualitative fields - morphology and syntax. (Semantics and pragmatics will probably take a while though...)
All science is contentious at times, but fields without strong enough predictive numerical measures can get ideological and tribal, with supporters of one paradigm railing at the theoretical failings of another paradigm, and vice-versa. Part of the challenge with qualitative theories is that it's hard to know how significant a particular failing is -- it's prone to interpretation, and that can lead to subjective biases.
That's true... you can't just take two generative linguistic theories covering the same range of surface data and do a chi-square test to find out whether they correspond well enough to cognition.
An obvious answer is to put predictive numerical measures in place as soon as you can. However, good measures depend on mapping out mechanisms, and that requires observation of what the mechanisms are. By way of example, biology received a massive boost to its transparency and accountability with the advent of genetic sequencing. Suddenly, the relationship between every species with genetic material could be quantitatively measured, rather than only conjectured on morphological similarities and relative positions in rock strata.
I see. So to some extent, the accountability of a high-level field depends on how much we know about the lower-level field that underlies it - could I say that?
For psychology, big improvements may well come with the development of neuroscience, since neurological mechanisms almost certainly underpin psychology. But with linguistics it may be the same. The sooner we understand language as a neurological function, the sooner comprehensive measures can be put in place to make linguistics more transparent and falsifiable.
Agreed, though it seems to me that neuroscience still has a looong way to go before we understand the mechanisms underlying language production and comprehension better :P Most of the work done on lower-level processes of linguistic cognition relies on psychological models - from what I've seen, much of the neurolinguistics work is 'does this part light up?'
With that said, there are already some measures in place. An old friend of mine is a senior researcher in linguistics, and I know from dinner-table chats with him that there are a lot of measures one can already use on language itself. However language cognition -- the area Chomsky is most concerned with -- seems not to have nearly so many measures.
By language itself he probably meant E-language - we can do regression analyses, analyses of variance and whatnot with corpus data, and ditto for sociolinguistic data in the Labovian tradition (though the latter isn't very interesting to me haha). And phonetics has been filled with quantitative measures for a long time - probably because we understand the underlying physical and biological processes very well!
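For a taste of what I mean, here's a minimal Python sketch that fits Zipf's law to corpus word frequencies by least-squares regression on log-log scales (the counts below are made up, purely for illustration):

# Fit Zipf's law (frequency ~ C / rank^a) by ordinary least squares
# on log-log scales. The 'corpus' counts here are invented.
import math

counts = [3500, 1900, 1300, 980, 790, 660, 570, 500]
xs = [math.log(rank) for rank in range(1, len(counts) + 1)]
ys = [math.log(c) for c in counts]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
print(f"Zipf exponent estimate: {-slope:.2f}")  # close to 1 for natural text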
(And if it's any consolation, I get a kind of allergic pseudoscience shudder when I read Chomsky too. :D)

Hope that helps, DC!

It certainly did, thanks!
Diqiucun_Cunmin
Posts: 2,710
6/12/2016 6:34:16 PM
At 6/10/2016 6:44:46 PM, dylancatlow wrote:
At 6/10/2016 12:55:12 PM, Diqiucun_Cunmin wrote:
I don't claim to know more about this topic than you do, but it seems like identifying linguistic patterns, coming up with general rules to explain them, finding exceptions to those rules and modifying the theory accordingly is a perfectly valid approach for trying to understand how human languages are constructed.
That's what I initially thought as well. When I first became familiar with Chomskyan theories, as you may remember from some of my early posts, I was very excited about them. But as I ventured away from the textbook and into the literature, I started feeling that the Chomskyan method did not live up to what it had promised to do - find out the actual underlying mechanisms that underpin all human language.

The problem is that Chomsky and his students make bold 'principles' based on a small set of languages, and intend to impose them on all languages. This results in ridiculousness. Some languages, like Jiwarli, simply have no such thing as configurational word order, but Chomskyans are still bent on giving them X-bar trees. They simply wouldn't entertain the possibility that X-bar trees don't work for some languages.

Another problem is that Chomsky is not typologically minded the way the Greenbergian functional-typological framework is. Greenbergians are also eager to look for universals, but they do this by analysing a lot of languages and comparing them.

Of course, the Greenbergian approach to language is empiricist and informal, and therefore may not be satisfying to those interested in the underlying mechanisms of language. But Lexical-Functional Grammar shows that it can also be done from a rationalist perspective, as long as we drop the spurious competence-performance distinction. LFG has absorbed a lot of ideas from the functional-typological framework - one of its central claims is, in fact, based on the Keenan-Comrie Hierarchy - but it is also highly mathematical. A lot of work in computational linguistics (classical AI style) has been done on it (Kaplan, who invented LFG along with Bresnan, has a mathematical background).
I mean, if we sent an archive of human conversations to very intelligent aliens light years away, they could probably learn a lot about our communicative process just by studying it without ever actually meeting a human, assuming they had the tools to learn the very basics of the language.
Eh, probably not, unless it's in video form and has context. It's hard to decode a language if you have none (imagine the Rosetta Stone without the Greek translation). Of course, that's beside the point, since Chomsky isn't even doing that. Looking at actual utterances is the approach adopted by Bloomfield and Bresnan, not by Chomsky. Chomskyans usually use native-speaker judgements instead.
You don't necessarily need to conduct experiments in order to understand HOW human languages are put together and operate, because that's essentially a form of scientific "stamp collecting". Understanding why they're put together that way, or what the consequences are for human thought, etc, is probably a different story.
I'm not sure I'd agree with that, because how else can we test a theory of cognition but by testing it on people who actually use those cognitive processes? Remember that Chomskyans study the processes underlying I-language cognition. But Chomsky assumes that if a system of formal rules can account for grammaticality judgements, then it's a good one. You can create a formal system of rules that transforms Proto-Indo-European words into English ones, and it will be a valid account of grammaticality because sound sequences which don't satisfy these rules will generally violate English phonotactics. But this rule system is psychologically absurd, because who converts PIE to English in their head? (About the only person who won't find this absurd is the philosopher who claimed French was the language of thought...) That's the problem with Chomskyan linguistics that Bresnan and Kaplan pointed out in the 1980s.

That's the problem - if you only look at surface data and ignore the underlying mechanisms, then two theories that capture the same amount of surface data will be treated on the same level, even if one of them is utterly absurd when it comes to the underlying mechanisms.
Diqiucun_Cunmin
Posts: 2,710
6/12/2016 6:47:23 PM
At 6/10/2016 7:07:08 PM, Fkkize wrote:
At 6/10/2016 12:55:12 PM, Diqiucun_Cunmin wrote:
I have some questions for those who are better versed in the scientific method than I am. My question is, when scientists do pencil-and-paper theorisation without carrying out testing (though others may do so for them), is this method scientifically unsound?

I don't know much about linguistics, but I believe I can still give a general answer.

There are, as Ruv described, roughly two kinds of scientists, experimental and theoretical.
Analogously there are also two kinds of theories: phenomenological and deductive. (I found the distinction in a textbook on molecular theoretical chemistry. The best way to study the philosophy of science is to study science)

Phenomenological theories are a collection of (descriptive) statements derived from experience. E.g. classical thermodynamics.
Deductive theories are based on a small set of axioms, completely mathematically formalized and consistent. E.g. nonrelativistic quantum mechanics.

It seems Chomsky is engaging in "theoretical linguistics", formulating (or trying to formulate) an analogue to a deductive theory. The point is, this is established scientific practice, and although a theorist can come up with basically anything, that does not mean everything becomes accepted scientific knowledge. These theories may not arise purely from experiments, but their support depends on them in full.
Thanks for the contribution - it's greatly appreciated, and I agree that learning about philosophy of science should be done through science (heck, I first learnt about Popper and Kuhn in the first chapter of a linguistics book).

However, I'm not entirely sure if Chomsky's method can really count as theoretical linguistics. Much of it is formulating theory from data, not producing a theory from a basic set of assumptions. I'm not sure if the latter is even possible in linguistics, because unlike in physics, where everything is assumed to follow truly universal laws, linguistics has parameters - languages can 'choose' between settings of parameters like pro-drop, and there's no real law governing which setting they choose (well, there are, but those are complex social-psychological laws - not what Chomsky's interested in). The really strange thing is that even though he's generalising theory from data, he still insists on assumptions ('axioms'?) that are prima facie incompatible with the data, and when faced with these data, he and other Chomskyans will come up with bizarre 'analyses' to protect the axiom. TBH, it reminds me more of Ptolemy making lots of ad hoc modifications to Aristotle's theory of astronomy than anything...
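To illustrate what I mean by a parameter, here's a toy Python sketch of how a pro-drop setting might toggle surface forms (the 'grammars' here are invented caricatures, nothing more):

# Toy binary parameter: pro-drop languages may omit subject pronouns,
# non-pro-drop languages may not. Purely illustrative.

def realise(subject, verb, pro_drop):
    """Return the surface clause under the given parameter setting."""
    if pro_drop:
        return verb                   # subject recoverable from agreement
    return f"{subject} {verb}"

print(realise('io', 'parlo', pro_drop=True))    # Italian-like: 'parlo'
print(realise('I', 'speak', pro_drop=False))    # English-like: 'I speak'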

I can certainly testify that phenomenological science exists in linguistics, though - the functional-typological school and basic linguistic theory certainly do that. Perhaps cognitive linguistics could also fall into this category, especially Talmy's work. Frameworks like LFG and HPSG have strong mathematical assumptions but are also good at formalising descriptive statements (in fact, some researchers have used LFG solely for this purpose). But they also have assumptions, just less strong than the Chomskyans', so they're probably somewhere between the two.
dylancatlow
Posts: 12,254
6/12/2016 8:33:48 PM
At 6/12/2016 6:34:16 PM, Diqiucun_Cunmin wrote:
At 6/10/2016 6:44:46 PM, dylancatlow wrote:
I mean, if we sent an archive of human conversations to very intelligent aliens light years away, they could probably learn a lot about our communicative process just by studying it without ever actually meeting a human, assuming they had the tools to learn the very basics of the language.
Eh, probably not, unless it's in video form and has context. It's hard to decode a language if you have none (imagine the Rosetta Stone without the Greek translation). Of course, that's beside the point, since Chomsky isn't even doing that. Looking at actual utterances is the approach adopted by Bloomfield and Bresnan, not by Chomsky. Chomskyans usually use native-speaker judgements instead.

But doesn't Chomsky's approach still require that a language's structure be deconstructed in order to make sense of what the native-speakers deem to be "grammatical"? I mean, a language user might be able to tell you whether a given sentence is grammatical, but not necessarily the reasons they find a sentence to be grammatical.

You don't necessarily need to conduct experiments in order to understand HOW human languages are put together and operate, because that's essentially a form of scientific "stamp collecting". Understanding why they're put together that way, or what the consequences are for human thought, etc, is probably a different story.
I'm not sure I'd agree with that, because how else can we test a theory of cognition but by testing it on people who actually use those cognitive processes? Remember that Chomskyans study the processes underlying I-language cognition. But Chomsky assumes that if a system of formal rules can account for grammaticality judgements, then it's a good one. You can create a formal system of rules that transforms Proto-Indo-European words into English ones, and it will be a valid account of grammaticality because sound sequences which don't satisfy these rules will generally violate English phonotactics. But this rule system is psychologically absurd, because who converts PIE to English in their head? (About the only person who won't find this absurd is the philosopher who claimed French was the language of thought...) That's the problem with Chomskyan linguistics that Bresnan and Kaplan pointed out in the 1980s.

I'm not talking about a "theory of cognition," but rather an understanding of the rules that a language follows. In other words, pattern finding. Understanding the mechanisms that cause the patterns would probably take experimentation, like you said. For instance, if there's a set of rules that can reliably convert PIE to English, one would need to conduct some sort of experiment in order to understand what's going on, because there are obviously other explanations besides "English speakers are converting PIE into English in their head". The most obvious explanation would be that PIE and English have a common ancestor, or that English branched off from PIE, and that they therefore have overlapping structure and differ in a superficial and systematic way.

That's the problem - if you only look at surface data and ignore the underlying mechanisms, then two theories that capture the same amount of surface data will be treated on the same level, even if one of them is utterly absurd when it comes to the underlying mechanisms.

I know that Chomsky is quite skeptical of applying statistical analysis to linguistics, and thinks that it's only helpful when integrated with fundamental principles and mechanisms, so he would probably agree with this.
RuvDraba
Posts: 6,033
6/12/2016 8:49:25 PM
At 6/12/2016 5:51:02 PM, Diqiucun_Cunmin wrote:
At 6/10/2016 6:13:11 PM, RuvDraba wrote:
I feel there are two gaps in linguistics. In fact, the experimental-theoretical gap is relatively narrow compared to the other gap, because experimental methods are used relatively less in some areas of linguistics (e.g. morphology and syntax), where experimental work is often difficult and may require extensive input from neuroscience and psychology. The other gap is the one between descriptive and theoretical linguistics.

Yes. We often think of experiments as occurring in a lab under synthetic and controlled conditions, but scientists also conduct what are called 'natural experiments'. [https://en.wikipedia.org...] These are observations where the factors affecting mechanisms are outside the observer's control, yet there's enough natural isolation and randomisation of factors to yield useful predictive information. An example of a natural experiment was the introduction of a new species of lizards to the island of Pod Mrcaru off the coast of Croatia. [http://news.nationalgeographic.com...] Left to their own devices for some three decades, the lizards evolved rapidly due to an effect called the Founder Effect [https://en.wikipedia.org...], which reduces genetic variation and can rapidly introduce genetic difference.

In a methodological sense, looking at a single planet through a telescope is also something of a natural experiment. The planet is separated from other celestial bodies by vast amounts of space; the aperture and focus of the telescope isolate information received from the planet from all other information; you systematically record changes in observational direction; you can perform the same observation night after night to see how the information changes, and can vary the place from which you observe (perhaps in collaboration with other astronomers) to see how much observation is a function of place and method, and how much of the object. The point is, you're isolating whatever you can, randomising whatever you can't, observing systematically while taking responsibility for how you observe, and thereby gaining systematic, empirical data. You might not be making an explicit hypothesis, but there are always implicit hypotheses associated with observational methodology, emerging in part from the observer's ontology (for example, we're observing the same planet night after night, and the universe isn't altering during daylight hours when we can't observe), and in part from methodological assumptions (e.g. the glass in my telescope isn't changing character and inducing effects I attribute to the subject).

In that way, when you see surface features of the planet moving slowly, you might conjecture that it's a sphere in rotation, and not a disk; when you see an object near the planet, casting a shadow on the surface, you might conjecture a moon, and note its period of revolution. You can gain great ontological information just by observing systematically in isolation. The act of isolating a phenomenon to observe it, and systematically recording and cataloguing the results is perhaps the most foundational natural experiment in science, and every science must continue doing it until nothing more can be found to catalogue.

I think this matters to linguists too.

Linguistically, we know that different groups of people share the same world, yet they engage it differently, refer to it with different ontologies, and communicate about it using different grammars. So finding a stable language to catalogue may mean finding a group of people where the linguistic commonality is more than the linguistic differences, who've lived together for a long time in the same environment, so their ontology and lexicon are stable, and who don't see a lot of visitors from other linguistic groups so they have a stable grammar rather than some sort of emerging creole. And I imagine that language would need to be sampled from random members, and not simply the elders, or just the men, or just the members specialising in trade -- if so, those are experimental techniques applied to systematic observation.

So while I'm not a linguist, it seems to me that a good linguist would be thoughtful in choice of people to study, and approach to cataloguing, and that such cataloguing would test a linguist's observational methodology and the ontology of a linguist's theory, even as it extended the catalogue.

So I'd think of that as a simple natural experiment too.

Chomsky has absolutely done a great deal of 'look ahead', and it's for this reason that I still have a great deal of respect for him, even though his ideas have become increasingly strange. As for 'clean up', he has done a lot to make things concise, but I don't think his theories are particularly predictive...

All theorists are a kind of intellectual entrepreneur, anticipating the ontology and models a science will need, trying to guess which way the observational data may break, and catching those observations with effective models before anyone else does. :D It's not prophetic -- it's simply guesswork and smart intuitions. But careers are made out of being right, so the theory can become the end rather than (as it ought to be) the means, and absent good observational data, it can become increasingly strange and cultlike.

There's no shame in false scientific prediction, but there is embarrassment in not being diligent. I'm not a linguist so I don't really have a right to judge, however I've felt embarrassed for Chomsky for some decades because he seems to be building reputation from cult of personality, rather than curbing unfalsifiable predictions in favour of revisiting and exploring observation.

That's what can happen when theory becomes the centrepiece, rather than the intellectual plumbing of observation. [And to declare my own bias, I say this as someone who began as a theorist, became more experimental over time, and ended up trying to bridge the science and the engineering. :)]

By the way, as I'm not familiar with the physical sciences, is there any field where work has been done in Chomsky's manner (start with a few languages and theorise about how they work, then try to extend the theory to other languages, modifying the theory along the way but not changing the basic assumptions)? I'm thinking there might be similar work in biology, since they study organisms rather than languages.
That's pretty much how every science works early on, Diqi! :D You observe one planet in detail, build an ontology to describe it, and a model that predicts it, and then see if it works on another planet. :D When scientists are working that way, you know the science is young: that not everything which can be catalogued has been; that their ontology is being developed concurrently with their stamp-collecting. :)

I realise that linguistics isn't young in comparison with other sciences, but it's a psychological as well as a sociological science, and all psychology is young -- it's really still in its cataloguing phase. So it's not my field, but I think of linguistics as having a mature sociological leg, but an immature psychological leg. We really, really need a more mature neuroscience to help it out.
Diqiucun_Cunmin
Posts: 2,710
6/13/2016 3:12:45 PM
At 6/12/2016 8:33:48 PM, dylancatlow wrote:
At 6/12/2016 6:34:16 PM, Diqiucun_Cunmin wrote:
At 6/10/2016 6:44:46 PM, dylancatlow wrote:
I mean, if we sent an archive of human conversations to very intelligent aliens light years away, they could probably learn a lot about our communicative process just by studying it without ever actually meeting a human, assuming they had the tools to learn the very basics of the language.
Eh, probably not, unless it's in video form and has context. It's hard to decode a language if you have none (imagine the Rosetta Stone without the Greek translation). Of course, that's beside the point, since Chomsky isn't even doing that. Looking at actual utterances is the approach adopted by Bloomfield and Bresnan, not by Chomsky. Chomskyans usually use native-speaker judgements instead.

But doesn't Chomsky's approach still require that a language's structure be deconstructed in order to make sense of what the native-speakers deem to be "grammatical"? I mean, a language user might be able to tell you whether a given sentence is grammatical, but not necessarily the reasons they find a sentence to be grammatical.
I think a better way of putting what he does is that he uses grammaticality judgements to deconstruct the language. The problem is that by his standards - that is, all a grammatical theory needs to aim to do is explain grammaticality judgements - the PIE example is just as acceptable as a psychologically supported theory of phonotactics. So we should abandon the strict competence-performance distinction, and allow psychological experimentation to have more bearing on our theories of linguistic competence, which is what people like Bresnan and Jackendoff are doing/have done with their theories.
You don't necessarily need to conduct experiments in order to understand HOW human languages are put together and operate, because that's essentially a form of scientific "stamp collecting". Understanding why they're put together that way, or what the consequences are for human thought, etc, is probably a different story.
I'm not sure I'd agree with that, because how else can we test a theory of cognition but by testing it on people who actually use those cognitive processes? Remember that Chomskyans study the processes underlying I-language cognition. But Chomsky assumes that if a system of formal rules can account for grammaticality judgements, then it's a good one. You can create a formal system of rules that transforms Proto-Indo-European words into English ones, and it will be a valid account of grammaticality because sound sequences which don't satisfy these rules will generally violate English phonotactics. But this rule system is psychologically absurd, because who converts PIE to English in their head? (About the only person who won't find this absurd is the philosopher who claimed French was the language of thought...) That's the problem with Chomskyan linguistics that Bresnan and Kaplan pointed out in the 1980s.

I'm not talking about a "theory of cognition," but rather an understanding of the rules that a language follows. In other words, pattern finding.
But Chomsky is; his theory of linguistic competence is basically the linguistic knowledge that he believes is stored in the language faculty. The rules are cognitive... material stored in the brain, not just any abstract rule system like rules in the structuralist framework.
Understanding the mechanisms that cause the patterns would probably take experimentation, like you said. For instance, if there's a set of rules that can reliably convert PIE to English, one would need to conduct some sort of experiment in order to understand what's going on, because there are obviously other explanations besides "English speakers are converting PIE into English in their head". The most obvious explanation would be that PIE and English have a common ancestor, or that English branched off from PIE, and that they therefore have overlapping structure and differ in a superficial and systematic way.
Yes, that's right - it needs experimentation. But Chomsky's strict demarcation of competence and performance leads him to ignore the importance of experimentation. That's why his theories are psychologically unrealistic. There is no psychological evidence that linguistic structures turn from D-structures into S-structures, or that words are extracted from one place and land in another. But by Chomsky's standards, his unrealistic theory is not any better or worse than more psychologically realistic theories like LFG, as long as both of them can explain grammaticality judgements. And explaining grammaticality judgements is an easy criterion to satisfy, because Chomskyan linguists can think of pretty much any way to gerrymander their theories to fit linguistic structures, no matter how silly the result becomes.
That's the problem - if you only look at surface data and ignore the underlying mechanisms, then two theories that capture the same amount of surface data will be treated on the same level, even if one of them is utterly absurd when it comes to the underlying mechanisms.

I know that Chomsky is quite skeptical of applying statistical analysis to linguistics, and thinks that it's only helpful when integrated with fundamental principles and mechanisms, so he would probably agree with this.
By 'surface data' I was actually talking about grammaticality judgements...
Diqiucun_Cunmin
Posts: 2,710
6/13/2016 3:15:30 PM
I think directly taking Bresnan and Kaplan's PIE example may have caused some confusion, so let me put it another way: I could claim that all English words are stored in Pig Latin in our mental lexicons, and that it's only at production that we turn them into normal words. Again, as long as the theory results in words that are legit in English grammar, it is acceptable by Chomsky's standards. But again, it's psychologically absurd.
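Here's a quick Python sketch of the point, using a simplified Pig Latin in which only the first consonant moves, so that the 'production rule' is exactly invertible (real Pig Latin has ambiguities I'm deliberately ignoring). The rule reliably yields well-formed English, which is all the grammaticality criterion asks for, yet the storage story is plainly absurd:

# Toy 'production rule': decode simplified Pig Latin 'lexicon entries'
# back into English surface forms. The outputs are perfectly well-formed;
# the claim that words are *stored* this way is still absurd.

def to_english(stored):
    stem = stored[:-2]                # strip the 'ay' suffix
    return stem[-1] + stem[:-1]       # move the consonant back to the front

lexicon = ['ordway', 'igpay', 'rammargay']   # hypothetical stored forms
print([to_english(w) for w in lexicon])      # ['word', 'pig', 'grammar']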
Diqiucun_Cunmin
Posts: 2,710
Add as Friend
Challenge to a Debate
Send a Message
6/13/2016 3:53:46 PM
Posted: 5 months ago
At 6/12/2016 8:49:25 PM, RuvDraba wrote:
Yes. We often think of experiments as occurring in a lab under synthetic and controlled conditions, but scientists also conduct what are called 'natural experiments'. [https://en.wikipedia.org...]...
Cool! Yes, in that sense, descriptive linguistics is also experimental. Linguists do apply some kind of isolation - they may elicit certain sentences or grammatical constructions, or ask about their grammaticality. Or they may conduct conversations under controlled conditions. Theoretically-minded and cognitive linguists generally do quite a bit of this. Sociolinguists hail completely natural conversation as the holy grail of sociolinguistic research, so there is less 'environmental engineering', but the result is still more an observational study than anything else.
In a methodological sense, looking at a single planet through a telescope is also something of a natural experiment....
Hmmm, I see. Statistics class generally gives us the impression that experimentation is all about H_0s and H_As, so I don't think of such studies as 'experiments' lol. But what you said here about the parallels between truly randomised experimentation and natural experiments makes sense. Even someone like Daniel Everett (who started studying Piraha as a missionary and only became a linguist from there) may be conducting an 'experiment' in some sense by observing just the Piraha...
In that way, when you see surface features of the planet moving slowly, you might conjecture that it's a sphere in rotation, and not a disk; when you see an object near the planet, casting a shadow on the surface, you might conjecture a moon, and note its period of revolution....
I see. And it's very true that science should continue to catalogue as best it can. One problem for linguistics is that a lot of the stuff that needs cataloguing is dying out at an alarming pace :( Field linguists are trying to record these languages as quickly as they can.
I think this matters to linguists too.

Linguistically, we know that different groups of people share the same world, yet they engage it differently, refer to it with different ontologies, and communicate about it using different grammars. So finding a stable language to catalogue may mean finding a group of people where the linguistic commonality is more than the linguistic differences, who've lived together for a long time in the same environment, so their ontology and lexicon are stable, and who don't see a lot of visitors from other linguistic groups so they have a stable grammar rather than some sort of emerging creole.
This largely depends on the linguist and what they'd like to study... More theoretically and cognitively minded linguists generally prefer to do this. Chomsky is the best example of those who advocate idealisation, one of his most famous lines being 'Linguistic theory is concerned primarily with an ideal speaker-listener, in a completely homogeneous speech-community, who knows its (the speech community's) language perfectly and is unaffected by such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention and interest, and errors (random or characteristic) in applying his knowledge of this language in actual performance.' On the other extreme, many linguists prefer working with language contact and its many concomitant phenomena (language shift, code-switching, code-mixing, pidginisation, creolisation...) and the mechanisms behind them. Some even feel that 'idealising linguists' are simply hostile to variation and intellectually dishonest (i.e. they're conveniently discarding variation because it's hard to explain in their theory). I think both kinds of linguistics have their place, though...
And I imagine that language would need to be sampled from random members, and not simply the elders, or just the men, or just the members specialising in trade -- if so, those are experimental techniques applied to systematic observation.
Unfortunately, convenience sampling is usually the best linguists can do. In studying English dialectal differences in the US, linguists generally survey old white men because their accents tend to be more conservative. Besides, many of the languages linguists study are no longer spoken by children, or are spoken by them only poorly. While the poor form is itself a subject of research in contact linguistics, most linguists want, first and foremost, to preserve and record the 'older' language! And finally, many people simply don't have the time or willingness to participate in research, so linguists tend to end up talking to whoever is willing. As for major languages where dialect isn't the focus, even those tend not to be sampled randomly - researchers usually send emails to undergraduates (us) looking for volunteers, like psychologists do, lol.
All theorists are a kind of intellectual entrepreneur, anticipating the ontology and models a science will need, trying to guess which way the observational data may break, and catching those observations with effective models before anyone else does. :D It's not prophetic -- it's simply guesswork and smart intuitions. But careers are made out of being right, so the theory can become the end rather than (as it ought to be) the means, and absent good observational data, it can become increasingly strange and cultlike.
That's very true, in Chomsky's case. He will not even concede in the face of psychological and other cognitive evidence that is incompatible with his theories, because his competence-performance distinction gives him a free pass - his theories of competence need not be compatible with linguistic performance.
There's no shame in false scientific prediction, but there is embarrassment in not being diligent. I'm not a linguist so I don't really have a right to judge, however I've felt embarrassed for Chomsky for some decades because he seems to be building reputation from cult of personality, rather than curbing unfalsifiable predictions in favour of revisiting and exploring observation.
In a way, unfortunately yes. If the Minimalist Programme had been proposed by Jackendoff instead of Chomsky, everyone would probably have considered Jackendoff a madman. Unfortunately, it was Chomsky who proposed it, and Jackendoff's psychologically and biologically motivated architecture has gained very little traction outside his small circle of friends.
That's pretty much how every science works early on, Diqi! :D You observe one planet in detail, build an ontology to describe it, and a model that predicts it, and then see if it works on another planet. :D When scientists are working that way, you know the science is young: that not everything which can be catalogued has been; that their ontology is being developed concurrently with their stamp-collecting. :)
Have they also made the same mistakes (well, I feel they're mistakes) as Chomsky - that is, holding onto strong assumptions derived from one language, not conceding in the face of evidence that they don't work in another, and thus having to gerrymander around the assumption? For example, when Chomsky first developed X-bar theory, it was a great idea - for English. It remained a great idea for French, Chinese and even Italian, which has freer word order. But it's quite preposterous when applied to many Australian languages. Chomskyan linguists, however, simply wouldn't admit that X-bar trees don't work for all languages, even in the face of evidence that many languages have no configurationality at all. They think of ways to 'protect' the idea that X-bar trees are universal by proposing rather silly 'transformations'. A visiting professor at my school, a seasoned researcher in Australian linguistics working in LFG, also finds Chomsky's current framework silly...
The thing is, I hate relativism. I hate relativism more than I hate everything else, excepting, maybe, fibreglass powerboats... What it overlooks, to put it briefly and crudely, is the fixed structure of human nature. - Jerry Fodor

Don't be a stat cynic:
http://www.debate.org...

Response to conservative views on deforestation:
http://www.debate.org...

Topics I'd like to debate (not debating ATM): http://tinyurl.com...
v3nesl
Posts: 4,505
Add as Friend
Challenge to a Debate
Send a Message
6/13/2016 5:06:17 PM
Posted: 5 months ago
At 6/10/2016 7:07:08 PM, Fkkize wrote:
At 6/10/2016 12:55:12 PM, Diqiucun_Cunmin wrote:
I have some questions for those who are better versed in the scientific method than I am. My question is, when scientists do pencil-and-paper theorisation without carrying out testing (though others may do so for them), is this method scientifically unsound?

I don't know much about linguistics, but I believe I can still give a general answer.

There are, as Ruv described, roughly two kinds of scientists, experimental and theoretical.

There's another distinction, which may sound insulting but isn't intended that way at all: Practical science and fun-and-games science. Fun-and-games science is science that can't be tested because the data is not accessible. Multiverse speculation would be an extreme example. The interesting thing to note is, if science is or becomes practical, it is pretty much by definition testable. If you're using it, you're testing it.

More and more of science is becoming fun-and-games. It's stuff we can't ever really hope to verify. We may rule out certain types of cosmology, for instance, because of inconsistencies that turn up later, but we're never, ever going to run a big bang to see if the idea really works. And I think this shift towards fun-and-games science has softened the discipline. We're far more tolerant than we should be of people who haven't figured out a way to demonstrate their claims.
This space for rent.
RuvDraba
Posts: 6,033
Add as Friend
Challenge to a Debate
Send a Message
6/13/2016 7:06:10 PM
Posted: 5 months ago
At 6/13/2016 3:53:46 PM, Diqiucun_Cunmin wrote:
At 6/12/2016 8:49:25 PM, RuvDraba wrote:
Yes. We often think of experiments as occurring in a lab under synthetic and controlled conditions, but scientists also conduct what are called 'natural experiments'. [https://en.wikipedia.org...]...
Sociolinguists hail completely natural conversation as the holy grail of sociolinguistic research, so there is less 'environmental engineering', but the result is still more an observational study than anything else.
Yes, but that makes sense too. It's like the difference between pinning a butterfly to a corkboard to study its morphology, and watching two butterflies mating. In the first case, you only need to know what is and isn't a butterfly, so you don't pin a dandelion to the corkboard too, by mistake. But in the second, you need to be sure that the flower they rest on isn't part of the mating -- that they'd be just as happy resting on a leaf. You're at least isolating the observation in your mind, and it's theory which does that.

Hmmm, I see. Statistics class generally gives us the impression that experimentation is all about H_0s and H_As, so I don't think of such studies as 'experiments' lol.
Statistics can tell you whether the dandelion should be part of the observation of mating butterflies, even before you know why they landed on it. You'd probably want to time how long they mate before landing, but HA(dandelions are erotic to butterflies) will also indicate whether you'd be losing significant predictive information by ignoring what they land on too. Such testing can change ontology and hence theory instantly -- which makes it an experiment, yes?
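If you want to see the bones of such a test, here's a minimal Python sketch -- the counts are invented purely for illustration, and I'm using a standard chi-square test of independence as the workhorse:

# Invented counts: does the landing surface predict mating?
# H0: surface and mating are independent; HA: they are not.
from scipy.stats import chi2_contingency

observed = [[30, 10],   # landed on dandelion: mated / didn't
            [12, 28]]   # landed on leaf:      mated / didn't

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4g}")
# A small p says that ignoring the landing surface would discard
# significant predictive information -- so it enters the ontology.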

One problem for linguistics is that a lot of the stuff that needs cataloguing is dying out at an alarming pace :( Field linguists are trying to record these languages as quickly as they can.
Yes, as an Australian who loves history and human thought, I'm aware of the vast intellectual extinctions in my own country, and of linguistic conservation efforts with the young. It's sad that we often don't know what we're losing enough to even value it.

Finding a stable language to catalogue may mean finding a group of people where the linguistic commonality is more than the linguistic differences
Chomsky is the best example of those who advocate idealisation. On the other extreme, many linguists prefer working with language contact and its many concomitant phenomena (language shift, code-switching, code-mixing, pidgnisation, creolisation...) and the mechanisms behind them. I think both kinds of linguistics have their place...
If we can switch frames to physics for a moment, the two principal studies of Enlightenment physics were matter and energy. Energy is most readily measured as transformations to matter, while matter is most readily measured by variation in energy. So Enlightenment physicists would randomise matter to study energy (e.g. dropping different masses from the same height), or randomise energy to study matter (e.g. heating materials to different temperatures to see how they reacted.) From observation, mass and energy were both thought to be conserved. Yet by the late 19th century, radioactivity had been observed, and mass-energy equivalence was suspected, so that conservation of mass-energy was the emerging theory.

When I wrote the above, I guessed that early linguists would want to study and catalogue language before doing so for linguistic cognition. Yet though I didn't write it, it also struck me that some observers would want to understand the process (thought) through the product (language), while others would want to study the product through the process. Neurologically, the product (thought) is a trace of the process (stimulation, neural connections and firing thresholds), while the process is also a model of the product (in that trained neurons can recognise it.) Which makes me wonder: is there some linguistic duality here, and might those two approaches reconcile once an equivalence is found?

Unfortunately, convenience sampling is usually the best linguists can do.
Yes, but there's no assurance that kids learn language the way elders use it -- and every indication that they don't. :)

[Theorist] careers are made out of being right, so the theory can become the end rather than (as it ought to be) the means, and absent good observational data, it can become increasingly strange and cultlike.
In Chomsky's case. He will not even concede in the face of psychological and other cognitive evidence that is incompatible with his theories
That step right there -- when a theorist moves the bar of evidence downward with no regard for falsification, or the scope of ontology outward with no regard for verifying mechanisms -- that's the step into pseudoscience. Science must always move the bar upwards, and expand ontology only as fast as empirical methods permit. You can see these errors applied repeatedly in astrology, homeopathy, enneagrams, reiki, biorhythms...The reason it doesn't apply (yet) to Superstring theory is that it's presently all neutral conjecture. If it stays like that, the worst it will become is an inapplicable philosophy. However, the moment it became contraindicated by evidence (if it did), and they added more dimensions to accommodate (if they did), it'd begin to smell rank. :)

That's pretty much how every science works early on, Diqi! :D You observe one planet in detail, build an ontology to describe it, and a model that predicts it, and then see if it works on another planet. :D When scientists are working that way, you know the science is young: that not everything which can be catalogued has been; that their ontology is being developed concurrently with their stamp-collecting. :)
Have they also made the same mistakes (well, I feel they're mistakes) as Chomsky
Only when conditions are propitious, but then -- yes, they make them routinely. :D A complex, changing system with mechanisms hard to isolate and observe is often sufficient, because it's hard to falsify significance, so bias runs rampant.

If you can imagine what it was like in medicine before microbes could be isolated, cultured and observed, disease had 'spirit' theories, 'four humour' theories, 'planetary influence' theories, 'moral consequence' theories and so on. You could find evidence to support all of them, and it was hard to falsify any of them simply because when the mechanism is unobservable, all mechanisms are equally plausible. :D But culture and catalogue microbes from sick patients, and suddenly statistics tells you they're hugely significant, even if you don't know yet what they do. That gives rise to a slew of theoretical predictions experimentalists will want to test, and meanwhile, anything less significant evaporates in a generation or so as rusted-on theorists run out of predictions anyone cares about. :)

In a similar fashion, I think detailed observation and classification of neurological activity and its correlation with behaviour may sweep away incipient pseudoscience from both psychology and linguistics. It may also help resolve the language/learning dichotomy and perhaps reveal it for a sort of dualism (e.g. between modeling and representation.)

So I hope anyway. :)
Fkkize
Posts: 2,149
Add as Friend
Challenge to a Debate
Send a Message
6/14/2016 8:17:15 PM
Posted: 5 months ago
At 6/12/2016 6:47:23 PM, Diqiucun_Cunmin wrote:
Much of it is formulating theory from data, not producing a theory from a basic set of assumptions. I'm not sure the latter is even possible in linguistics, because unlike in physics, where everything is assumed to follow truly universal laws, linguistics has parameters - languages can 'choose' between parameters like pro-drop, and there's no real law governing which one they choose (well, there are some, but they're complex social-psychological laws - not what Chomsky's interested in). The really strange thing is that even though he's generalising theory from data, he still insists on assumptions ('axioms'?) that are prima facie incompatible with data, and when faced with these data, he and other Chomskyans will come up with bizarre 'analyses' to protect the axiom. TBH, it reminds me more of Ptolemy making lots of ad hoc modifications to Aristotle's theory of astronomy than anything...

That's not unusual either. Many theories are not to be taken literally. For the most part, that's because they make assumptions we know to be incorrect (it's of course a little more complicated).
Here is just one quote on ligand field theory (simply because that's the most recent thing I studied), by C. K. Jørgensen in "Modern Aspects of Ligand Field Theory", 1971:

The crystal field model "combined the rather unusual properties of giving an excellent phenomenological classification of the energy levels of partly filled d and f shell of transition group complexes and having an absolutely unreasonable physical basis! [...]
It is remarkable that it does work so well!" (own emphasis)

The last part is, I believe, the most important thing about a theory. I suppose one can fully support Chomsky's work as an empirically adequate / instrumental theory / for pragmatic reasons (whatever you want to call it), as long as one realizes it is not to be taken literally.
: At 7/2/2016 3:05:07 PM, Rational_Thinker9119 wrote:
:
: space contradicts logic
Diqiucun_Cunmin
Posts: 2,710
Add as Friend
Challenge to a Debate
Send a Message
6/15/2016 3:50:31 AM
Posted: 5 months ago
At 6/13/2016 7:06:10 PM, RuvDraba wrote:
Yes, but that makes sense too. It's like the difference between pinning a butterfly to a corkboard to study its morphology, and watching two butterflies mating. In the first case, you only need to know what is and isn't a butterfly, so you don't pin a dandelion to the corkboard too, by mistake. But in the second, you need to be sure that the flower they rest on isn't part of the mating -- that they'd be just as happy resting on a leaf. You're at least isolating the observation in your mind, and it's theory which does that.
Agreed, and it's always nice to see parallels between linguistics and the physical sciences :) I think it's what shows us we're on the right track, in a way, because of the physical sciences' success.
Statistics can tell you whether the dandelion should be part of the observation of mating butterflies, even before you know why they landed on it. You'd probably want to time how long they mate before landing, but HA(dandelions are erotic to butterflies) will also indicate whether you'd be losing significant predictive information by ignoring what they land on too. Such testing can change ontology and hence theory instantly -- which makes it an experiment, yes?
Yep :)
Yes, as an Australian who loves history and human thought, I'm aware of the vast intellectual extinctions in my own country, and of linguistic conservation efforts with the young. It's sad that we often don't know what we're losing enough to even value it.
Yeah. :( Half the Australian languages have died out and most that remain have no more than a small circle of elderly speakers. However, we've already seen that the major ones have made huge contributions to linguistic theory, particularly Warlpiri. Makes one wonder what would happen if more of them remained...
If we can switch frames to physics for a moment, the two principal studies of Enlightenment physics were matter and energy. Energy is most readily measured as transformations to matter, while matter is most readily measured by variation in energy. So Enlightenment physicists would randomise matter to study energy (e.g. dropping different masses from the same height), or randomise energy to study matter (e.g. heating materials to different temperatures to see how they reacted.) From observation, mass and energy were both thought to be conserved. Yet by the late 19th century, radioactivity had been observed, and mass-energy equivalence was suspected, so that conservation of mass-energy was the emerging theory.

When I wrote the above, I guessed that early linguists would want to study and catalogue language before doing so for linguistic cognition. Yet though I didn't write it, it also struck me that some observers would want to understand the process (thought) through the product (language), while others would want to study the product through the process. Neurologically, the product (thought) is a trace of the process (stimulation, neural connections and firing thresholds), while the process is also a model of the product (in that trained neurons can recognise it.) Which makes me wonder: is there some linguistic duality here, and might those two approaches reconcile once an equivalence is found?
Psycholinguists, neurolinguists and developmental linguists certainly study the process through the product (and they also study the processes themselves). Generative and cognitive linguists also attempt to do so. It's sometimes hard to tell the two apart, though - much of what generativists do is study the language, although they claim they're also studying the underlying thought processes by doing so. And cognitive linguists (Lakoff, Talmy, etc.) simply don't seem to be willing or able to separate language and thought. When you study semantics across languages, you see that different languages have different ways of framing events, so you're basically looking at both. In France, there's a project, based on Talmy's theory, to find out how different languages, major and minor, encode information about actions. It's hard to say whether they're doing linguistic typology or studying the framing of events...
That step right there -- when a theorist moves the bar of evidence downward with no regard for falsification, or the scope of ontology outward with no regard for verifying mechanisms -- that's the step into pseudoscience. Science must always move the bar upwards, and expand ontology only as fast as empirical methods permit. You can see these errors applied repeatedly in astrology, homeopathy, enneagrams, reiki, biorhythms...The reason it doesn't apply (yet) to Superstring theory is that it's presently all neutral conjecture. If it stays like that, the worst it will become is an inapplicable philosophy. However, the moment it became contraindicated by evidence (if it did), and they added more dimensions to accommodate (if they did), it'd begin to smell rank. :)
Yep, that's what I felt too. Still, I feel it's a bit unfair to put Chomskyan linguistics in the same category as astrology and biorhythms, because it does have more-than-chance predictability. I'm currently looking at some constructions in Late Archaic Chinese, and it's striking how well X-bar theory - which was first put forward by Chomsky, though it was Jackendoff who gave it full development - applies to them. That's why it's an element of Chomskyan linguistics retained by LFG - LFG just doesn't claim that X-bar structures are universal - and why it remains a central part of the second-year curriculum even at a school as anti-Chomskyan as mine. :) His theory of empty categories has also had some predictive power (though I haven't read enough related studies to be sure it isn't just a case of a broken clock being right twice a day). Chomsky did base his theories on facts, even if they're facts from a small set of languages, and that does separate him from homeopathy or biorhythms, which are based mostly on speculation.
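For anyone reading along who hasn't met X-bar theory: the schema says every phrase XP has the same skeleton - XP rewrites to a specifier plus X', and X' to the head X plus its complement. Here's a minimal sketch of an English-flavoured noun phrase fragment using NLTK; the lexicon and the head-initial order are my own illustrative assumptions, 'Nbar' stands in for N' (NLTK labels can't contain apostrophes), and I've collapsed the levels that lack a specifier:

# Toy X-bar-style NP grammar (illustrative; requires: pip install nltk)
from nltk import CFG, ChartParser

grammar = CFG.fromstring("""
NP   -> Det Nbar
Nbar -> N PP | N
PP   -> Pbar
Pbar -> P NP
Det  -> 'the'
N    -> 'student' | 'book'
P    -> 'with'
""")

parser = ChartParser(grammar)
for tree in parser.parse("the student with the book".split()):
    tree.pretty_print()

The same specifier-head-complement skeleton is meant to recur for VP, IP and the rest, which is why it travels so well across configurational languages - and why non-configurational ones are such a problem for it.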
In a similar fashion, I think detailed observation and classification of neurological activity and its correlation with behaviour may sweep away incipient pseudoscience from both psychology and linguistics. It may also help resolve the language/learning dichotomy and perhaps reveal it for a sort of dualism (e.g. between modeling and representation.)

So I hope anyway. :)
I hope so too :) There's still a long way to go, though. I don't think they can even agree on whether Broca's area hosts computational (even if emergent) processes...
The thing is, I hate relativism. I hate relativism more than I hate everything else, excepting, maybe, fibreglass powerboats... What it overlooks, to put it briefly and crudely, is the fixed structure of human nature. - Jerry Fodor

Don't be a stat cynic:
http://www.debate.org...

Response to conservative views on deforestation:
http://www.debate.org...

Topics I'd like to debate (not debating ATM): http://tinyurl.com...
Diqiucun_Cunmin
Posts: 2,710
Add as Friend
Challenge to a Debate
Send a Message
6/15/2016 3:56:59 AM
Posted: 5 months ago
At 6/14/2016 8:17:15 PM, Fkkize wrote:
At 6/12/2016 6:47:23 PM, Diqiucun_Cunmin wrote:
Much of it is formulating theory from data, not producing a theory from a basic set of assumptions. I'm not sure the latter is even possible in linguistics, because unlike in physics, where everything is assumed to follow truly universal laws, linguistics has parameters - languages can 'choose' between parameters like pro-drop, and there's no real law governing which one they choose (well, there are some, but they're complex social-psychological laws - not what Chomsky's interested in). The really strange thing is that even though he's generalising theory from data, he still insists on assumptions ('axioms'?) that are prima facie incompatible with data, and when faced with these data, he and other Chomskyans will come up with bizarre 'analyses' to protect the axiom. TBH, it reminds me more of Ptolemy making lots of ad hoc modifications to Aristotle's theory of astronomy than anything...

That's not unusual either. Many theories are not to be taken literally. For the most part, that's because they make assumptions we know to be incorrect (it's of course a little more complicated).
Here is just one quote on ligand field theory (simply because that's the most recent thing I studied), by C. K. Jørgensen in "Modern Aspects of Ligand Field Theory", 1971:

The crystal field model "combined the rather unusual properties of giving an excellent phenomenological classification of the energy levels of partly filled d and f shell of transition group complexes and having an absolutely unreasonable physical basis! [...]
It is remarkable that it does work so well!" (own emphasis)

The last part is, I believe, the most important thing about a theory. I suppose one can fully support Chomsky's work as an empirically adequate / instrumental theory / for pragmatic reasons (whatever you want to call it), as long as one realizes it is not to be taken literally.
That's very odd D: I didn't know there were such theories... I mean, I'm fine with idealisation (which they do a lot of in e.g. economics), but an 'absolutely unreasonable' physical basis doesn't sound like mere idealisation...

But by 'working so well', does that mean the theory has strong predictive power? Chomsky's theories do have more-than-chance predictive power, but I'm not sure I can call them 'working so well'...

(I've actually seen chemistry used in support of Chomsky before. The argument goes: chemistry and physics used to conflict, until the advent of quantum physics showed that chemistry was right all along. So a high-level science can get something right even if it's contradicted by our current knowledge of a low-level science - and that same relationship holds between linguistics and neuroscience.)
The thing is, I hate relativism. I hate relativism more than I hate everything else, excepting, maybe, fibreglass powerboats... What it overlooks, to put it briefly and crudely, is the fixed structure of human nature. - Jerry Fodor

Don't be a stat cynic:
http://www.debate.org...

Response to conservative views on deforestation:
http://www.debate.org...

Topics I'd like to debate (not debating ATM): http://tinyurl.com...
Fkkize
Posts: 2,149
Add as Friend
Challenge to a Debate
Send a Message
6/15/2016 1:55:02 PM
Posted: 5 months ago
At 6/15/2016 3:56:59 AM, Diqiucun_Cunmin wrote:
At 6/14/2016 8:17:15 PM, Fkkize wrote:
At 6/12/2016 6:47:23 PM, Diqiucun_Cunmin wrote:
Much of it is formulating theory from data, not producing a theory from a basic set of assumptions. I'm not sure the latter is even possible in linguistics, because unlike in physics, where everything is assumed to follow truly universal laws, linguistics has parameters - languages can 'choose' between parameters like pro-drop, and there's no real law governing which one they choose (well, there are some, but they're complex social-psychological laws - not what Chomsky's interested in). The really strange thing is that even though he's generalising theory from data, he still insists on assumptions ('axioms'?) that are prima facie incompatible with data, and when faced with these data, he and other Chomskyans will come up with bizarre 'analyses' to protect the axiom. TBH, it reminds me more of Ptolemy making lots of ad hoc modifications to Aristotle's theory of astronomy than anything...

That's not unusual either. Many theories are not to be taken literally. For the most part, that's because they make assumptions we know to be incorrect (it's of course a little more complicated).
Here is just one quote on ligand field theory (simply because that's the most recent thing I studied), by C. K. Jørgensen in "Modern Aspects of Ligand Field Theory", 1971:

The crystal field model "combined the rather unusual properties of giving an excellent phenomenological classification of the energy levels of partly filled d and f shell of transition group complexes and having an absolutely unreasonable physical basis! [...]
It is remarkable that it does work so well!" (own emphasis)

The last part is, I believe, the most important thing about a theory. I suppose one can fully support Chomsky's work as an empirically adequate / instrumental theory / for pragmatic reasons (whatever you want to call it), as long as one realizes it is not to be taken literally.
That's very odd D: I didn't know there were such theories...
We had to start somewhere, right?

I mean, I'm fine with idealisation (which they do a lot of in e.g. economics), but an 'absolutely unreasonable' physical basis doesn't sound like mere idealisation...
If it works lol

But by 'working so well', does that mean the theory has strong predictive power?
Exactly. Using just LFT, one can tell the difference in potential energy between the different orbitals of transition metals just by putting them in solution and looking at the color (at least in simple cases). Which, at least to me, sounds like a rather profound insight, considering the simplicity.
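For the curious, the arithmetic behind 'looking at the color' is simple: the absorbed wavelength is roughly the complement of the color you see, and E = N_A*h*c/lambda converts it into a molar splitting energy. A back-of-envelope Python sketch (the 510 nm absorption maximum is an illustrative round number, not a measured value):

# Back-of-envelope: absorption wavelength -> ligand-field splitting.
h   = 6.626e-34    # Planck constant, J*s
c   = 2.998e8      # speed of light, m/s
N_A = 6.022e23     # Avogadro constant, 1/mol

lam = 510e-9       # illustrative absorption maximum, m
delta_o = N_A * h * c / lam                     # J/mol
print(f"Delta_o ~ {delta_o/1000:.0f} kJ/mol")   # ~235 kJ/mol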

Chomsky's theories do have more-than-chance predictive power, but I'm not sure I can call them 'working so well'...

(I've actually seen chemistry used in support of Chomsky before. The argument goes: chemistry and physics used to conflict, until the advent of quantum physics showed that chemistry was right all along. So a high-level science can get something right even if it's contradicted by our current knowledge of a low-level science - and that same relationship holds between linguistics and neuroscience.)

The more I think about this, the more complicated the answer seems to become. I'll respond later.
: At 7/2/2016 3:05:07 PM, Rational_Thinker9119 wrote:
:
: space contradicts logic
v3nesl
Posts: 4,505
Add as Friend
Challenge to a Debate
Send a Message
6/15/2016 3:17:19 PM
Posted: 5 months ago
At 6/15/2016 1:55:02 PM, Fkkize wrote:
...

That's an intriguing quote in your sig: Do not pretend to doubt in philosophy what you do not doubt in your heart. My first instinct is to dispute that maxim, but I suspect the context might clarify. Where does it come from?
This space for rent.
v3nesl
Posts: 4,505
Add as Friend
Challenge to a Debate
Send a Message
6/15/2016 3:59:04 PM
Posted: 5 months ago
At 6/15/2016 3:17:19 PM, v3nesl wrote:
At 6/15/2016 1:55:02 PM, Fkkize wrote:
...

That's an intriguing quote in your sig: Do not pretend to doubt in philosophy what you do not doubt in your heart. My first instinct is to dispute that maxim, but I suspect the context might clarify. Where does it come from?

I googled it. Got my answer.
This space for rent.
dylancatlow
Posts: 12,254
Add as Friend
Challenge to a Debate
Send a Message
6/15/2016 6:57:02 PM
Posted: 5 months ago
At 6/13/2016 3:12:45 PM, Diqiucun_Cunmin wrote:
At 6/12/2016 8:33:48 PM, dylancatlow wrote:
At 6/12/2016 6:34:16 PM, Diqiucun_Cunmin wrote:
At 6/10/2016 6:44:46 PM, dylancatlow wrote:
I mean, if we sent an archive of human conversations to very intelligent aliens light years away, they could probably learn a lot about our communicative process just by studying it without ever actually meeting a human, assuming they had the tools to learn the very basics of the language.
Eh, probably not, unless it's in video form and has context. It's hard to decode a language if you have none (imagine the Rosetta Stone without the Greek translation). Of course, that's beside the point, since Chomsky isn't even doing that. Looking at actual utterances is the approach adopted by Bloomfield and Bresnan, not by Chomsky. Chomskyans usually use native-speaker judgements instead.

But doesn't Chomsky's approach still require that a language's structure be deconstructed in order to make sense of what the native-speakers deem to be "grammatical"? I mean, a language user might be able to tell you whether a given sentence is grammatical, but not necessarily the reasons they find a sentence to be grammatical.
I think a better way of putting what he does is that he uses grammaticality judgements to deconstruct the language. The problem is that by his standards - that is, all a grammatical theory needs to do is explain grammaticality judgements - the PIE example is just as acceptable as a psychologically supported theory of phonotactics. So we should abandon the strict competence-performance distinction and allow psychological experimentation to have more bearing on our theories of linguistic competence, which is what people like Bresnan and Jackendoff are doing/have done with their theories.

A grammatical judgement only tells you what grammatically correct sentences look like, not how they are put together at a fundamental level. In other words, it tells you what to deconstruct, not how to deconstruct it. And I don't think that Chomsky places all theories on equal footing so long as they are able to account for a language's grammar; he seems to strongly favor the simplest hypothesis (or the one that requires the least mental computation) until evidence is provided for accepting a more complex one.

You don't necessarily need to conduct experiments in order to understand HOW human languages are put together and operate, because that's essentially a form of scientific "stamp collecting". Understanding why they're put together that way, or what the consequences are for human thought, etc, is probably a different story.
I'm not sure I'd agree with that, because how else can we test a theory of cognition but by testing it on people who actually use those cognitive processes? Remember that Chomskyans study the processes underlying I-language cognition. But Chomsky assumes that if a system of formal rules can account for grammaticality judgements, then it's a good one. You can create a formal system of rules that transforms Proto-Indo-European words into English ones, and it will be a valid account of grammaticality because sound sequences which don't satisfy these rules will generally violate English phonotactics. But this rule system is psychologically absurd, because who converts PIE to English in their head? (About the only person who won't find this absurd is the philosopher who claimed French was the language of thought...) That's the problem with Chomskyan linguistics that Bresnan and Kaplan pointed out in the 1980s.

I'm not talking about a "theory of cognition," but rather an understanding of the rules that a language follows. In other words, pattern finding.
But Chomsky is; his theory of linguistic competence is basically the linguistic knowledge that he believes is stored in the language faculty. The rules are cognitive... material stored in the brain, not just an abstract rule system like the rules of the structuralist framework.

I don't think he is. I mean, obviously Chomsky believes that a theory of human cognition could in principle be developed to explain how humans acquire and use language, given that he holds that our language ability derives from an innate linguistic faculty in the brain, but the exact cognitive processes which give rise to the unique features of human language don't seem to be something he even knows how to investigate. For instance, when he lists what he sees as the "big unanswered questions in linguistics", he'll almost always bring up the seemingly trivial question "how are we able to use the ability you and I are now using?"

Understanding the mechanisms that cause the patterns would probably take experimentation, like you said. For instance, if there's a set of rules that can reliably convert PIE to English, one would need to conduct some sort of experiment to understand what's going on, because there are obviously other explanations besides "English speakers are converting PIE into English in their heads". The most obvious explanation would be that PIE and English share a common ancestor, or that English branched off from PIE, so that the two have overlapping structure and differ only in a superficial and systematic way.
Yes, that's right - it needs experimentation. But Chomsky's strict demarcation of competence and performance leads him to ignore the importance of experimentation. That's why his theories are psychologically unrealistic. There is no psychological evidence that linguistic structures turn from D-structures into S-structures, or that words are extracted from one place and land in another. But under Chomsky's standards, his unrealistic theory is no better or worse than more psychologically realistic theories like LFG, as long as both of them can explain grammaticality judgements. And explaining grammaticality judgements is an easy criterion to satisfy, because Chomskyan linguists can think of pretty much any way to gerrymander their theories to fit linguistic structures, no matter how silly the result becomes.
That's the problem - if you only look at surface data and ignore the underlying mechanisms, then two theories that capture the same amount of surface data will be treated as equals, even if one of them is utterly absurd when it comes to the underlying mechanisms.

I know that Chomsky is quite skeptical of applying statistical analysis to linguistics, and thinks that it's only helpful when integrated with fundamental principles and mechanisms, so he would probably agree with this.
By 'surface data' I was actually talking about grammaticality judgements...
Fkkize
Posts: 2,149
Add as Friend
Challenge to a Debate
Send a Message
6/15/2016 7:16:46 PM
Posted: 5 months ago
At 6/15/2016 3:56:59 AM, Diqiucun_Cunmin wrote:
(I've actually seen chemistry used in support of Chomsky before. The argument goes: chemistry and physics used to conflict, until the advent of quantum physics showed that chemistry was right all along. So a high-level science can get something right even if it's contradicted by our current knowledge of a low-level science - and that same relationship holds between linguistics and neuroscience.)

Some have taken this to be an argument for the ontological autonomy of chemistry and physics. In that sense, if there are sui generis chemical entities/properties/relations, this would be an argument pro Chomsky, depending on the exact assumptions.
I think, however, that this ontological pluralism stands in the way of a unified scientific enterprise. What these people seem to play down, especially when it comes to chemistry and physics, is that although we did good chemistry before, we have improved on it greatly because of quantum physics. Even many undergraduate reactions would lack an explanation for why this and not that product is formed without it - in particular when there are typical "chemical" reasons against the product that is actually formed.
: At 7/2/2016 3:05:07 PM, Rational_Thinker9119 wrote:
:
: space contradicts logic
RuvDraba
Posts: 6,033
Add as Friend
Challenge to a Debate
Send a Message
6/15/2016 7:33:21 PM
Posted: 5 months ago
At 6/14/2016 8:17:15 PM, Fkkize wrote:
At 6/12/2016 6:47:23 PM, Diqiucun_Cunmin wrote:
Much of it is formulating theory from data, not producing a theory from a basic set of assumptions. I'm not sure if the latter is even possible in linguistics, because unlike in physics, where everything is assumed to follow truly universal laws, linguistics has parameters - languages can 'choose' between parameters like pro-drop, and there's no real law governing which one they choose (well there are, but those are complex social-psychological laws - not what Chomsky's interested in). The really strange thing is that even though he's generalising theory from data, he still insists on assumptions ('axioms'?) that are prima facie incompatible with data, and when faced with these data, he and other Chomskyans will come up with bizarre 'analyses' to protect the axiom. TBH, it reminds me more of Ptolemy making lots of ad hoc modifications to Aristotle's theory of astronomy than anything...
Many theories are not to be taken literally. For the most part, that's because they make assumptions we know to be incorrect (it's of course a little more complicated).
Yes, to be predictive a theory must first be usable, and that may require models and approximations known to contain inaccuracy.

It made me grin that you pulled a chemistry example out, because that's where I first noticed it as a young scientist-in-training too. The observed mechanisms of chemistry are complicated: there's Brownian motion, and complex electromagnetic interactions, and both work more statistically than deterministically, and the math models can be taxing to use in detail. However when predicting the course of chemical and molecular interactions, many chemists seem to work with a deterministic narrative instead, of a 'typical' molecule of 'typical' atoms making 'choices'. It's intuitive and predictive, but if you asked a chemist to point out which molecule in particular was the 'typical' one, and in which moment it made a choice, I think they'd have difficulty. :)

Some of my classmates noticed this at about the same time I did. It offended the purists among us more than a little -- we wanted our theories to be factual, meticulous, devoid of narrative invention. For some, it made us leave chemistry for 'purer' fields. But little did we realise how much narrative sees use across all the sciences as a usable approximation of what is observed.

The last part is, I believe, the most important thing about a theory. I suppose one can fully support Chomsky's work as an empirically adequate / instrumental theory / for pragmatic reasons (whatever you want to call it), as long as one realizes it is not to be taken literally.
A question that might not be for you, Fkkize, but might be for DC or anyone else interested in linguistics: does Chomsky himself realise that? I'm not persuaded that he does. And if he doesn't, then what is he teaching his students?
Diqiucun_Cunmin
Posts: 2,710
Add as Friend
Challenge to a Debate
Send a Message
6/16/2016 12:02:34 AM
Posted: 5 months ago
At 6/15/2016 6:57:02 PM, dylancatlow wrote:
At 6/13/2016 3:12:45 PM, Diqiucun_Cunmin wrote:
I think a better way of putting what he does is that he uses grammaticality judgements to deconstruct the language. The problem is that by his standards - that is, all a grammatical theory needs to do is explain grammaticality judgements - the PIE example is just as acceptable as a psychologically supported theory of phonotactics. So we should abandon the strict competence-performance distinction and allow psychological experimentation to have more bearing on our theories of linguistic competence, which is what people like Bresnan and Jackendoff are doing/have done with their theories.

A grammatical judgement only tells you what grammatically correct sentences look like, not how they are put together at a fundamental level. In other words, it tells you what to deconstruct, not how to deconstruct it.
Agreed.
And I don't think that Chomsky places all theories on equal footing so long as they are able to account for a language's grammar; he seems to strongly favor the simplest hypothesis (or the one that requires the least mental computation) until evidence is provided for accepting a more complex one.
Well OK, I admit that I was stretching the truth a bit on this one - Chomsky does favour the simpler model. The problem is that his conception of 'simple' (or 'economy', as he calls it) is rather strange. He assumes all linguistic trees are binary-branching because this makes language acquisition simpler. But this means Chomskyans have to do a lot of theoretical gerrymandering to make trees with (coordinating) conjunctions binary-branching too! That's one problem with Chomskyan theories - they prefer simplifying the basic mechanisms behind language, at the cost of requiring really complex explanations of even simple phenomena and, let's face it, at the cost of plausibility.
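A quick sketch of what's at stake, with nested Python tuples standing in for trees - these bracketings are the usual textbook alternatives, not any particular author's analysis:

# Two analyses of "tea and coffee and juice", as nested tuples.
flat   = ("ConjP", "tea", "and", "coffee", "and", "juice")     # one flat node
binary = ("ConjP", "tea",
          ("Conj'", "and",
           ("ConjP", "coffee",
            ("Conj'", "and", "juice"))))                       # strictly binary

def max_branching(tree):
    # Widest node anywhere in the tree (the label doesn't count).
    if not isinstance(tree, tuple):
        return 0
    children = tree[1:]
    return max(len(children), *(max_branching(c) for c in children))

print(max_branching(flat))    # 5 -> violates binarity
print(max_branching(binary))  # 2 -> binary, at the cost of extra structure

The binary version satisfies the 'economy' assumption, but only by positing invisible intermediate nodes that no surface evidence demands.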

I'll give you an example. In LFG, we try to eschew purely theoretical machinery with little surface evidence, like vP and nP, whenever possible. We do retain some abstract-ish categories, like IP and DP, but those usually have surface realisations in some form (determiners, auxiliaries, etc.). Stuff like vP and nP, however, is introduced into Chomskyan linguistics so that its practitioners can wrap their heads around syntactic phenomena like double objects, when there are much more plausible explanations in LFG, which uses an entirely separate level, the f-structure, for purposes like these. A Chomskyan might call LFG complex because it assumes a new structure, but there is plenty of cross-linguistic evidence that grammatical functions exist on a different level from, say, case and theta roles. vP and nP, on the other hand, seem to be purely abstract constructs that allow Chomskyans to explain phenomena that aren't supposed to be hard to explain...
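For anyone unfamiliar with LFG: an f-structure is an attribute-value matrix, which maps quite naturally onto a nested dict. A rough sketch for 'She gave him a book' - the attribute names follow common LFG practice, but the details are simplified and from memory:

# Rough f-structure for "She gave him a book" (simplified, from memory).
# Grammatical functions (SUBJ, OBJ, OBJ2) live at this level, separate
# from the c-structure tree -- no vP shells needed for the double object.
f_structure = {
    "PRED":  "give<SUBJ, OBJ, OBJ2>",
    "TENSE": "past",
    "SUBJ":  {"PRED": "pro", "PERS": 3, "NUM": "sg", "GEND": "fem"},
    "OBJ":   {"PRED": "pro", "PERS": 3, "NUM": "sg", "GEND": "masc"},
    "OBJ2":  {"PRED": "book", "NUM": "sg", "DEF": False},
}
print(f_structure["PRED"])   # the predicate selects its grammatical functions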
But Chomsky is; his theory of linguistic competence is basically the linguistic knowledge that he believes is stored in the language faculty. The rules are cognitive... material stored in the brain, not just an abstract rule system like the rules of the structuralist framework.

I don't think he is. I mean, obviously Chomsky believes that a theory of human cognition could in principle be developed to explain how humans acquire and use language, given that he holds that our language ability derives from an innate linguistic faculty in the brain, but the exact cognitive processes which give rise to the unique features of human language don't seem to be something he even knows how to investigate. For instance, when he lists what he sees as the "big unanswered questions in linguistics", he'll almost always bring up the seemingly trivial question "how are we able to use the ability you and I are now using?"
I do remember reading about Chomsky's three questions for linguistics, but I admit it's been a while and I don't remember exactly what they were or how they were formulated. But anyway, I think under Chomskyan thought, this question is one of linguistic performance - the how - rather than linguistic competence - the what. His linguistic theories are a kind of what, not a kind of how, as he repeatedly stresses.
The thing is, I hate relativism. I hate relativism more than I hate everything else, excepting, maybe, fibreglass powerboats... What it overlooks, to put it briefly and crudely, is the fixed structure of human nature. - Jerry Fodor

Don't be a stat cynic:
http://www.debate.org...

Response to conservative views on deforestation:
http://www.debate.org...

Topics I'd like to debate (not debating ATM): http://tinyurl.com...
Diqiucun_Cunmin
Posts: 2,710
Add as Friend
Challenge to a Debate
Send a Message
6/16/2016 12:24:39 AM
Posted: 5 months ago
At 6/15/2016 7:16:46 PM, Fkkize wrote:
At 6/15/2016 3:56:59 AM, Diqiucun_Cunmin wrote:
(I've actually seen chemistry used in support of Chomsky before. The argument goes: chemistry and physics used to conflict, until the advent of quantum physics showed that chemistry was right all along. So a high-level science can get something right even if it's contradicted by our current knowledge of a low-level science - and that same relationship holds between linguistics and neuroscience.)

Some have taken this to be an argument for the ontological autonomy of chemistry and physics. In that sense, if there are sui generis chemical entities/properties/relations, this would be an argument pro Chomsky, depending on the exact assumptions.
I think, however, that this ontological pluralism stands in the way of a unified scientific enterprise. What these people seem to play down, especially when it comes to chemistry and physics, is that although we did good chemistry before, we have improved on it greatly because of quantum physics. Even many undergraduate reactions would lack an explanation for why this and not that product is formed without it - in particular when there are typical "chemical" reasons against the product that is actually formed.
I agree. As Ruv mentioned, neuroscience will be very helpful in making linguistics accountable, because (hopefully) it will mean that linguistic theories, or at least the general principles governing them, will be testable through experiment. And that would naturally corroborate certain theories over others. We may still have different theories around, in the same way the physical sciences do, but I'm sure a lot will be ruled out. (So far, though, while psychology has provided linguistics with plenty of good input - and, of course, its most prominent populariser, Steven Pinker - neuroscience's contribution is still quite limited, I think. The 'this part lights up, that part doesn't' kind of evidence is too open to interpretation...)
The thing is, I hate relativism. I hate relativism more than I hate everything else, excepting, maybe, fibreglass powerboats... What it overlooks, to put it briefly and crudely, is the fixed structure of human nature. - Jerry Fodor

Don't be a stat cynic:
http://www.debate.org...

Response to conservative views on deforestation:
http://www.debate.org...

Topics I'd like to debate (not debating ATM): http://tinyurl.com...
Diqiucun_Cunmin
Posts: 2,710
Add as Friend
Challenge to a Debate
Send a Message
6/16/2016 12:28:01 AM
Posted: 5 months ago
At 6/15/2016 7:33:21 PM, RuvDraba wrote:
At 6/14/2016 8:17:15 PM, Fkkize wrote:

The last part is, I believe, the most important thing about a theory. I suppose one can fully support Chomsky's work as an empirically adequate / instrumental theory / for pragmatic reasons (whatever you want to call it), as long as one realizes it is not to be taken literally.
A question that might not be for you, Fkkize, but might be for DC or anyone else interested in linguistics: does Chomsky himself realise that? I'm not persuaded that he does. And if he doesn't, then what is he teaching his students?
I think he does tell people not to take his theories literally, in a way. As I stated in the OP, his theory looks very much like a serial processing model, but he tells everyone that it is a theory of knowledge, not of how language is processed in real time.

Here's a diagram that I drew of his model in his previous theory (government and binding):
http://www.debate.org...

It's hard to believe it's not a serial processing model, but I think he insists it isn't...
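To show what I mean, here's the pipeline reading as a Python caricature - emphatically my sketch, not Chomsky's own formulation; he would say these are relations among levels of representation, not processing stages:

# Caricature of the GB architecture as a pipeline (NOT Chomsky's claim --
# he insists it's a theory of knowledge, not of real-time processing).

def d_structure(lexical_items):
    # Base-generate a structure; theta-roles are assigned here.
    return {"tree": list(lexical_items), "theta_roles": "assigned"}

def move_alpha(ds):
    # Movement maps D-structure to S-structure
    # (e.g. a passive object raising into subject position).
    return {**ds, "movement": "applied"}

def phonetic_form(ss):
    return "PF(" + " ".join(ss["tree"]) + ")"

def logical_form(ss):
    return "LF(" + " ".join(ss["tree"]) + ")"

ss = move_alpha(d_structure(["NP", "V", "NP"]))
print(phonetic_form(ss), "/", logical_form(ss))

Draw that and it's hard to see anything but a serial flow: D-structure -> (move alpha) -> S-structure -> {PF, LF}.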
The thing is, I hate relativism. I hate relativism more than I hate everything else, excepting, maybe, fibreglass powerboats... What it overlooks, to put it briefly and crudely, is the fixed structure of human nature. - Jerry Fodor

Don't be a stat cynic:
http://www.debate.org...

Response to conservative views on deforestation:
http://www.debate.org...

Topics I'd like to debate (not debating ATM): http://tinyurl.com...