A professor in grad school used to tell us that the word method comes from the Greek roots meta (after) and hodos (path), which together give a path to follow, a pursuit, or, as he used to put it very consciously in the past tense, the road we’ve traveled. There are two ways of seeing method, he used to say. It is either the path we improvise in pursuit of an elusive, mobile, and shadowy prey, or the road we can look back on when we’ve caught it. In both cases, method is unplanned. It unfolds of necessity. It is the sum of decisions made on the spur of the moment and under sometimes severe constraints.
Method, he said, is not clean. It isn’t proper, or appropriate, and it most certainly is never validated. It doesn’t lie on the pages of a book, like mortal remains on a bier. It is messy, improvised, and alive. Like the forest path that brings us to a deer or a berry patch, it is immune to judgement. It is provident.
To an idealistic young graduate student like me, fresh from a series of draconian and prescriptive methods courses, high on certainty, with many more answers in my head than questions, this was simply unacceptable. It was only one of his many pronouncements that shocked me at the time. Another, which I still find deeply shocking now, is that anthropology is a useless passion. I haven’t reconciled myself with that one, and I don’t think I ever will, but I have certainly come around on this methods thing.
My first clue that he might be right came soon after my MSc, though only now do I realize that it was the first step on the road I have traveled toward a view of method that is more retrospective than prospective. My first job after my master’s had little to do with archaeology. I will never say that something has nothing to do with archaeology, but this had little. I was hired to do data analysis on a study that combined clinical observations with a very small number of structured interviews. The study was supposed to validate a clinical screening tool the team had developed, using a previously validated tool as a foundation.
We were dealing with very small numbers of observations. Problematically small. The project manager was a more senior PhD student. She and I did our homework. We scoured the literature for statistical methods. We consulted with a stats prof in the math department. We put together a plan and made an appointment to see the Principal Investigator.
When we finished our presentation, there was a longish silence. The PI looked at our papers, asked a couple of questions, and then said, “I think we should stick with the more well-known statistical methods.” I was floored, not for the last time in my budding academic career. The project manager and I looked at each other and silently agreed that we would do what needed to be done for the project, and work on the PI later.
That experience taught me that I didn’t believe methodological choices were supposed to be safe or comfortable. They could be perilous, like a hillside trail in the forest at dusk. They could have corners around which we couldn’t see, and still be productive and hopeful. I learned that our choices should be dictated by the terrain and by our objectives, not by social acceptability.
Next, I encountered standards. Whether in fieldwork, in data analysis, or in computer simulation, I was told that there were standards, and that things must be done a certain way. These exhortations, command-like, usually preceded any significant discussion of research objectives, questions or hypotheses.
Methodological standards, it gradually occurred to me, are a diversity reduction mechanism. They often try to specify the phenotype of research, the specific shape it should take, rather than its genotype, or the principles by which its shape should be expressed under various conditions.
At the same time, I was becoming increasingly convinced that diversity production is the key to the research enterprise. External selection will take care of diversity reduction. The more material it has to work on, the better. This means that preemptively cutting off entire regions of the methodological landscape by imposing standards is a bad idea. It doesn’t mean that anything goes. But it means that anything might. It means that tolerance of methodological diversity and creativity is crucial.
Still, we have to evaluate methods, for ourselves when we try to answer questions or test hypotheses, and for others when we try to decide whether their claims are founded and can be built upon. Standing at the edge of the forest, as we do when we start a project or start reading someone else’s paper, we shouldn’t ask whether a method is an appropriate path for a hunt, for answering a given question, for obtaining certain data, or for testing a hypothesis. We should cross into the woods, not with standards, but with minimum expectations and a few basic, adaptable principles for making decisions. We should want others to do the same, and accept that they might do things in ways we never imagined.
We should ask, standing in a clearing, deep in the woods, with our prey caught, whether our method took us here and how. If we’re lost and hungry instead, we should ask why the path we followed didn’t take us to the clearing. We should ask what we’ve learned traveling this particular path and whether we can use that knowledge on our next hunting trip, knowing that the next expedition might call for a radically different approach.
My concern about this, stemming from my own bioarchaeological viewpoint, is that high methodological diversity often makes it difficult to compare results. Assessing human biocultural change is difficult if different researchers measure change differently: are you capturing true variability in what actually happened in the past, or are you capturing differences in how people score and measure their variables?
I tried valiantly to work a hunting metaphor into this somehow, but I am in Spain and it is too late for the amount of mental gymnastics that would require.
This is where transparency is important. A good methods section should make those differences clear, and allow us to infer their impact. In the end, if I have a choice between the problems caused by methodological diversity and those caused by methodological uniformity, I will choose the former.
There is also the fact that methodological uniformity is a bit of an illusion. One of your colleagues in the lab (she may have been a couple of years ahead of you) did a study of Mesolithic stone tools and debitage in northern Germany. She wanted to know whether the assemblages clustered more chronologically or regionally. To her surprise (and mine), she found that despite a high uniformity of reported methods, the assemblages differentiated not along geographical or chronological lines, but according to analyst. I always thought she should have published that paper.
At the very least, avowed methodological diversity has the advantage of not hiding the diversity that exists anyway, and of encouraging fuller descriptions of method that avoid shortcuts.
But now I am ranting again. I’ll try to avoid hunting metaphors in future.