Samuel Sayantan Mandal
Biolinguist, Computationalist, Anarcho-Syndicalist
Author: fodor O.S.: Ghost in the Machine (A GNU/HURD O.S.)
I’m a Ph.D. candidate at Concordia University, Montreal, researching the computational nature and biological basis of phonology as a cognitive ability. Here I’m supervised by Prof. Charles Reiss (principal), Prof. Mark Hale, and Prof. Alan Bale from Linguistics, as well as Prof. Roberto de Almeida from Psychology. My research is carried out under the umbrella of the INDI individualized doctoral program, as an interdisciplinary investigation of the computational nature of our Language Faculty, specifically the sound domain, using both formal/mathematical and empirical tools. I’m supported by the Louise Dandurand Scholarship in Interdisciplinary Studies and the International Graduate Tuition Excellence Award.
From time to time I also get fresh perspectives from Veno Volenec (Concordia), and Profs. Tom Bever (U. Arizona), Massimo Piattelli-Palmarini (U. Arizona), Martin Everaert (Utrecht), Bill Idsardi (UMD), Thomas Graf (Stony Brook), and Norbert Hornstein (UMD). I strongly believe that the diversity of perspectives these experts provide yields a genuinely convergent approach, one that leverages a wide variety of tools.
I studied a hodgepodge of Mathematics, Politics, Literature, and Formal Logic in college. Then, while stumbling through graduate school back in 2008, mostly taking courses in Computer Science, Linguistics, and Philosophy, I fell in love. Head over heels. With three different people at once, too! The enigmatic David Marr. The all-around badass Jerry Fodor. And that recursive generator of epiphanies, Noam Chomsky! Of course, being in love with three outrageously attractive minds is not something society approves of, and I have been dodging pitchforks and ducking lit torches ever since! It is not easy being an unreconstructed rationalist in a field that still exudes an irrational (mostly!) empiricist bias… (sigh)
I am interested in a whole range of related things, but at their centre are the issues that Noam Chomsky raises in Things No Amount of Learning Can Teach. What kind of knowledge is encoded in our genetic make-up, and how? It is a given that we are animals, and like all animals we are rigidly constrained in what our brains can, and cannot, make sense of. What are those limits, especially with regard to our cognitive and Language abilities?
Following in the footsteps of Noam Chomsky, I treat the word “Language” as a placeholder for a complex set of computational-cognitive abilities that are functions of a biological organ (the Faculty of Language, or FL). I study Language as an organ, meaning I am uninterested in describing any specific (or every) language. As Charles Reiss puts it, ‘one is as good as the others’ for my purposes. But since all biological organs have internal structural and functional limitations, it follows that Language must too: the total number of possible grammars must be finite. What is an “impossible language”? What boundaries of human-ness does it violate? What are those elusive properties that all the languages of the world, despite appearances, share? What laws constrain natural languages and create their undeniable “fingerprint”? These are the questions I pursue, bringing together penetrating insights offered by the minimalist program and recent developments in computational and neuroimaging techniques. I am particularly interested in MEG and EEG for imaging studies, but also in the prospects of TMS as a tool for exploring the independence of core linguistic computations from the sensory-motor systems.
But deep in my heart, I am still a formalist. All my research interests, including instrumental ones, ultimately converge on the computational and combinatoric complexity of Language as a natural biological phenomenon, and on its origin story. I do not find simplistic appeals to stochastic methods and hidden Markov models to be explanatorily adequate tools for understanding Language and the human mind. Rather, I think the task at hand is to ask truly penetrating questions of some epistemological importance. What is the nature of linguistic computations, and why are they the way they are? How do natural laws of physics and mathematics constrain the computational rules that govern higher cognition? Are linguistic computations optimal? If so, in what sense? What kind of memory architecture is required to support linguistic principles on a biological substrate? The tools for studying the functioning brain, when properly combined with the formal-analytical apparatus of computationalism and generative grammar, afford a unique view of linguistic computations as the reverse side of a tapestry: an intricate hidden architecture projecting infinite possibilities.
Currently, I’m working on how biologically informed models of computation ought to address certain logical problems having to do with the mental representation of phonological knowledge and the syntax of phonological computations. I approach the formal issues taking cues from the Chomskyan Biolinguistic program (specifically, Hale & Reiss’ Substance-Free Phonology), while the neurobiological investigations are primarily motivated by the Granularity Mismatch (GM) and Ontological Incommensurability Problem (OIP) arguments put forth by Embick, Poeppel, and colleagues (Embick & Poeppel, 2015; Krakauer et al., 2017; Poeppel & Embick, 2017). My primary motivations are (a) elaborating the computational nature of phonology (or, the syntax of phonology) and (b) creating a plausible interface theory/linking hypothesis between phonological computations and neurobiological implementation that is compatible with our understanding of the evolutionary and anatomical properties of the brain.