Slavery Studies’ Digital Straw Man
Written for Slavery and the Numbers Game at 50: A Roundtable Discussion at the Forty-Sixth Annual Meeting of the Society for Historians of the Early American Republic.
Thank you to Dr. Joshua Rothman for convening and moderating this panel, and to my fellow panelists for what I’m sure will be an enriching conversation.
Considering that Herbert Gutman passed away five years before I was born, it may come as little surprise that the cliometric moment was over well before I began my career as a historian. In fact, by the time I began graduate school in 2015, the field was roughly 25 years into the epoch known as “the archival turn,” a period in which historians rejected conceptualizations of the archive as a neutral repository of facts, and instead emphasized its silences, biases, and material structures, and the role those things play in the stories we are trying to tell. The archival turn, of course, has been especially influential in slavery studies, in which archival absences and fragmentation have been and remain central methodological and interpretive concerns.
Herbert Gutman, however, made extensive use of cliometrics: quantitative, computational, and statistical methods on which theorists of the archival turn generally cast a distrustful gaze.
And yet, before there was Trouillot’s Silencing the Past, or Hartman’s Scenes of Subjection, or Scott’s “The Evidence of Experience,” there was, in fact, Herbert Gutman. Long before the archival turn was even visible around the bend, that is, Gutman (yes, a cliometrician himself) offered his own full-throated critique of what he viewed as cliometrics-gone-wrong: Fogel and Engerman’s Time on the Cross.
This is not to say that Gutman was making the same general critique of cliometrics that scholars of the 1990s and later would. That, in fact, is my point. In many ways, Gutman’s critique was just as thoughtful as those later critiques, and perhaps even more nuanced, because it was aimed not at quantitative methods writ large but at the very particular cliometric methods of very particular scholars in a particular book.
Distilled to their essence, Gutman’s critiques came down to two things. They were conveyed through many numbers, graphs, and tables, yes, but just two things.
Gutman was arguing for the importance of accuracy and care in the use of quantitative methods in slavery studies. Nothing more. Nothing less.
So what might Gutman’s critique have looked like in practice? We don’t have to guess. The Black Family in Slavery and Freedom, 1750–1925 (1976) made extensive and careful use of quantitative methods, especially in service of challenging prevailing statistical claims about African American families.
Gutman was not the only person to do this. He was not, I mean, the only scholar who chose to fight fire (meaning quantitative methods) with fire (ostensibly improved quantitative methods). But I do find his efforts laudable, for surely it is one thing, and perhaps a very easy thing, to critique quantitative methods, the whole lot of them, as fundamentally and irrevocably cursed–one thing to condemn statistics and abstractions and mathematics writ large as simply too hot to handle, and quite another thing to choose, instead, to wrestle with the data of slavery’s archive in its awful magnitude. To me, that is what Herbert Gutman was trying to do.
Even so, as I said, by the time of Herbert Gutman’s death in 1985, the field was headed in a decidedly different direction, and while I know I described this historiographical moment a few minutes ago, I think it’s worth emphasizing the dramatic nature of this shift.
We went from obsessively, if not always thoughtfully, encoding every scrap of information wrought from slavery’s archive, to obsessively theorizing and lamenting–again, not always thoughtfully–all the things that could not be found in that archive.
And yes, in so many ways, this was a necessary corrective. We must name and mourn the silences. We are and we will. But surely that’s not all we can do.
We seem, in other words, to be looking away. We seem, to me, scared of the very tools (data and quantitative methods) that might fix our gaze upon the magnitude of ten million ancestors, the gravity of ten million lives.
So how do we explain all this? What explains the almost seamless metamorphosis of the archival turn into this fear, this data turn? Why do we have so many conversations about the idea of data, about the specter of data, but very few about data that actually is, or could be? We talk all the time about what we should not do with data, but I’m eager for someone to tell me how I should use data. The big kind. The relational kind. The spreadsheet.
Gathering, as we are, with the wasteland of what was once the academic job market spread out before us, I have no trouble identifying one factor that helps explain all this, and that is that when it comes to writing books based upon years of quantitative inquiry, nobody’s got time for that. Not anymore.
Academia, I mean, is no longer structured in a way that allows the average scholar to spend a decade or so compiling massive datasets of information from slavery’s archive, or any archive, really.
Publish or perish, they say. And the truth is most of us perish even if we do publish. But yes, the odds are slightly better if you publish a great deal and do so quickly. And so what if you rely a bit too heavily on anecdotal evidence? Who’s going to notice anyway?
Besides that, data and quantitative methods can be intimidating, especially for those of us, myself included, whose training delved little or not at all into statistics, or relational database design, or data ethics, or anything of the like.
But there is something deeper going on, too, I think. The real reason few people write deeply quantitative studies of slavery and enslaved people, in my opinion, is that trepidation I mentioned earlier: a serious and pervasive fear of replicating the ethical and/or mathematical misdeeds of the cliometricians of yore.
Speaking from experience, I can tell you, this fear is well-founded. It is rational. This fear, in fact, has very nearly proved fatal to my own quantitative ambitions on more than one occasion. After all, there now exists an entire historiographical genre dedicated to delineating data’s power to hurt, its potential to replicate the abstractions implicit in slavery’s original archive. And it’s important to note, by the way, that the harms these works (by Jessica Marie Johnson, Jennifer Morgan, Katherine McKittrick, Tonia Sutherland, and others) describe are not distant memories. They are, instead, harms that continue to manifest today, as there is an entire body of work, including several massively funded digital humanities projects, that all but refuses to heed the clear calls for caution issued by theorists of data and the archive.
So if you ask me, there are good reasons why we, as a field, are so damn timid about the sort of quantitative work Herbert Gutman modeled and its potential modern iterations. This fear is rational, and I feel it, too. And yet, I have concerns. I am worried that we are allowing fear of what we could do with data (if we were to misuse it, if we were to be clumsy or thoughtless with it) to limit what we actually do with data.
There are reasons, in other words, why I have decided not to throw the baby out with the bath water. There are reasons I haven’t washed my hands of quantitative methods and data entirely, and reasons why those things are, instead, at the center of my work.
First, I believe that our aversion to all things quantitative or in spreadsheet form is decidedly at odds with the wishes of many descendants who desire, above all, access to the records documenting their ancestors’ lives.
I used to begin presentations of my work by apologizing for it. I’d apologize, above all, for the tabular design of the databases I have built. “I recognize that this data in many ways replicates the dehumanization of the original sources,” I would say. And then I’d start to explain what I meant by that. But here’s the thing. Most of the people who attend my talks are descendants of enslaved ancestors. So, standing there, describing the pain of the archive, I’d very quickly turn red and feel quite like an idiot, because who the hell did I think I was talking to? If there’s a population who already knows that slavery’s archive is dehumanizing, it’s descendants of enslaved ancestors. No whitesplaining is necessary from me, thank you.
I must confess I also feel this way about arguments against the digitization of archival materials–advertisements for fugitives from slavery, for example. I understand the epistemological violence of those records. I really do. But I simply cannot imagine showing up to one of Kinfolkology’s workshops, greeting dozens of descendants eager to find records of their ancestors' lives, and saying, “Well, there might be a record of your ancestor, but we decided not to digitize it because we thought doing so would be harmful.” Who would I be to say such a thing? Can you imagine? (And by the way, why aren’t we asking descendants which records should be digitized and how and in what order in the first place?)
Second, I believe that as long as we are skittish about thinking about and working with the archive as data, other entities will move full steam ahead doing so, without any concern at all for the very dangers of quantification about which we have heard so much lately. Their concerns (the concerns of Ancestry.com, for example) lie elsewhere: in the monetization of access to information about enslaved ancestors who were themselves commodified.
Lastly, data and quantitative methods are powerful tools, even if, yes, as Herbert Gutman argued, they require both skill and care. They reveal things obscured by qualitative and anecdotal evidence, and they answer important questions. How many enslaved ancestors? How much was stolen? How can we begin to reckon with slavery’s violence, I wonder, if we don’t bother grappling with its scale?