Science feature

Districts and Data

Justin Solomon and a cohort of Boston-based researchers use modern computing power to tackle gerrymandering mathematically

Justin Solomon is a member of the Metric Geometry and Gerrymandering Group (MGGG), a group of Boston-based researchers developing computational algorithms for quantifying the fairness of voting districts to better understand the problem of gerrymandering.
Lillie Paquette

In the 2010 U.S. elections, riding unpopular sentiment toward the Democratic-controlled federal government, Republicans scored major victories throughout the country. They famously gained a staggering 63 seats in the House of Representatives, winning a commanding majority, and flipped six seats in the Senate, significantly expanding their minority there.

But perhaps more importantly, Republicans’ astonishing victories extended to state legislatures and governorships, allowing them to capture majorities of both. This offered them a unique opportunity to establish a lasting advantage over Democrats, because the election took place just before the decennial redistricting that follows each national census. As a result, Republican-controlled state legislatures throughout the country were able to redraw districts in ways that minimized Democrats’ prospects of winning seats in state legislatures and the House of Representatives. Indeed, Republicans have maintained a comfortable majority in the House ever since, a fact often attributed to their opportunistic use of the post-2010 redistricting process.

The redistricting process is undertaken after every national census to satisfy a constitutional requirement that each district within a state (or local district within a jurisdiction) contain roughly the same number of people. In theory, this process prevents communities from losing or gaining electoral influence as a state’s population shifts. But as the aftermath of the 2010 census made evident, redistricting is often exploited by partisan bodies for the very opposite purpose: to manipulate the political clout of certain voting blocs and so influence the outcomes of elections. And though the practice has been most notably employed by the Republican Party in recent years, Democrats have also used it in the past and may again soon if a “blue wave” materializes in the 2020 elections.

This opportunistic use of the redistricting process, commonly known as gerrymandering, is widely reviled as one of the most undemocratic practices in American politics. It has been leveraged throughout U.S. history to entrench political parties, protect incumbents, and marginalize or empower particular demographics, and it is often implicated as the root of many of our political system’s ills, such as congressional gridlock, uncompetitive elections, and ideological extremism in government.

But despite many hard-fought court cases to challenge it, this undemocratic practice persists with seeming impunity. Now, however, researchers throughout the country are starting to develop promising techniques for tackling this issue, and there are already early signs that their efforts are paying off.

Justin Solomon, Principal Investigator of the Geometric Data Processing Group at MIT, is a prominent member of the Metric Geometry and Gerrymandering Group (MGGG), a cohort of Boston-based computer scientists and mathematicians founded by Professor Moon Duchin of Tufts University. The group is leveraging modern computing power to study the problem of fairness in redistricting with a level of quantitative rigor that was not possible until recently.

“From my perspective, one of the big challenges in redistricting is that we lack clear, quantitative standards for evaluating the fairness of redistricting plans,” said Solomon. “For that reason, there’s no clear path to a standard that’s easily enforceable and understandable.”

“Our effort, broadly, is… to assemble a clear set of standards and a way to talk about the redistricting problem in a fashion that’s quantitative and that’s fair and easy to apply,” he told The Tech. “That includes a lot of different aspects. Everything from understanding the shape of a district and what bearing it has on the outcome of the vote… to understanding the big space of all the different ways of dividing up a state.”

To date, most attempts to contest partisan gerrymanders in court have failed for lack of a clear, convincing standard against which to judge them. But the work of Solomon and his MGGG colleagues holds the potential to fundamentally reshape the debate around gerrymandering by offering a feasible means of formulating and implementing such a standard for the first time in U.S. history.

Previous work by Solomon involved evaluating the utility of various compactness scores, metrics designed to quantify how “weirdly shaped” a district is. One example is the Reock score, the ratio of a district’s area to the area of the smallest circle that completely encloses it; another is the convex hull score, the ratio of a district’s area to the area of the smallest convex polygon that contains it.
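To make those definitions concrete, here is a minimal sketch in Python, assuming the Shapely geometry library (version 2.x) and an invented polygon standing in for a district boundary. It is an illustration written for this article, not code from the MGGG:

```python
# A minimal sketch of two compactness scores, assuming Shapely 2.x; the
# district outline below is a made-up polygon, not real map data.
import math

from shapely import minimum_bounding_radius
from shapely.geometry import Polygon

# A deliberately jagged "district" to illustrate the scores.
district = Polygon(
    [(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (2, 3), (2, 4), (0, 4)]
)

# Reock score: district area over the area of its smallest enclosing circle.
radius = minimum_bounding_radius(district)
reock = district.area / (math.pi * radius ** 2)

# Convex hull score: district area over the area of its convex hull, the
# smallest convex polygon containing the district.
convex_hull_score = district.area / district.convex_hull.area

# Both scores lie in (0, 1]; values near 1 indicate a compact shape.
print(f"Reock: {reock:.3f}, convex hull: {convex_hull_score:.3f}")
```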

But a district’s compactness doesn’t tell the whole story. “The reality is these districts are designed with so many different criteria in mind,” Solomon explained. “One is the shape. Others include compliance with civil rights law… [T]here are plenty of districts that really were designed quite carefully to give a particular minority a voice, in which case maybe you needed a crazy shape to pull that off.”

Reliable and convincing metrics for quantifying the fairness of redistricting plans must be able to accurately account for a host of complex and interrelated factors. Formulating these metrics remains a thorny, open problem for Solomon and the MGGG. But even once they’ve identified metrics that are legally practicable, they will have to work out feasible methods for implementing them, a problem that is far from trivial.

In any given state, there are an enormous number of potential redistricting plans that would comply with federal and state requirements. It would be impractical to require states to iterate through all of them and identify the fairest plan under any given metric. So, with the brute-force approach out of the question, Solomon and his colleagues have had to devise clever procedures for applying these metrics. They have been developing an approach akin to a statistical outlier analysis, a technique that requires detailed, though approximate, knowledge of the shape of the distribution of redistricting plans rather than an enumeration of every possible plan and its associated fairness score.

“[N]ow, I can look at the plan that was proposed by the legislature and I can say, ‘how likely is it that, in that huge set of things that follow the rules, I could have accidentally stumbled upon the one the legislature found?’”

“If you notice that, in the space of plans, your plan is an outlier, then you have a pretty strong argument that there is something nefarious going on.”
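In code, the comparison at the heart of that argument is simple once an ensemble of plans has been sampled. The sketch below uses fabricated numbers in place of real fairness scores, purely to show the shape of the test:

```python
# A toy sketch of the outlier test, with fabricated numbers: compare an
# enacted plan's fairness score against an ensemble of scores drawn from
# rule-following plans.
import random
import statistics

random.seed(0)

# Stand-in for scores tabulated over thousands of sampled plans; averaging
# twelve uniform draws just fabricates a roughly bell-shaped ensemble.
ensemble = [statistics.mean(random.random() for _ in range(12))
            for _ in range(10_000)]

enacted_score = 0.78  # hypothetical score of the legislature's plan

# Fraction of sampled plans scoring at least as high as the enacted plan;
# if almost none do, the enacted plan is an outlier among lawful plans.
tail = sum(score >= enacted_score for score in ensemble) / len(ensemble)
print(f"{tail:.2%} of sampled plans score as high as the enacted plan")
```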

Though still in its infancy, this type of analysis has already been used to litigate gerrymandering disputes with some success. Notably, in Common Cause v. Rucho, a recent case brought before a panel of federal judges, a redistricting plan drawn by the North Carolina state legislature was deemed a partisan gerrymander based on a similar analysis carried out by Jonathan Mattingly, a professor of mathematics at Duke University. The plan was rejected as unconstitutional on these grounds, though the legislature may choose to appeal the decision to the Supreme Court.

The Rucho decision is especially interesting because the panel of judges found their ruling consistent with existing legal precedent, leaving open the possibility that this type of argument may gain widespread acceptance as a standard for arbitrating gerrymandering disputes. The challenge for Solomon and the MGGG, then, is establishing a firm enough understanding of the enormous and complex space of redistricting plans for the approach to be applied reliably in legal contexts.

“When you invoke that kind of argument, you have to be confident that you have a representative sample, that you’ve walked around in this space a sufficient amount and so on,” Solomon explained.

“The ways of dividing up a state or a country or a county or a school district is this huge combinatorial space, and this is really the first time in history that we’ve had the computational power to be able to explore that space with any level of certainty or understanding to make a clear argument.”

Because this space has remained largely unexplored until recently, quite a bit of work remains to establish a firm understanding of it. Even so, Solomon and his colleagues have set themselves the ambitious goal of having workable prototypes of these techniques ready by the 2020 census. That would enable quantitatively rigorous analyses of redistricting plans proposed throughout the country after the census, and it could offer a feasible route to successfully challenging the gerrymanders that arise from the process.

And they’re not just trying to develop analysis techniques that can be used by expert witnesses in court. Rather, they hope to make these analyses accessible to the general public in the form of open-source software packages, giving average citizens a voice in how their electoral districts are constructed.

“We have software that’s under development for Markov chain Monte Carlo analysis in the space of districting plans,” Solomon told The Tech. The technique takes a random walk through the space of districting plans, tabulating a fairness metric for each sampled plan and thereby giving a sense of the values that occur across the space.
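To give a sense of how such a chain works, the toy sketch below (written for this article, not drawn from the group’s software) random-walks over two-district plans on a small grid, rejecting proposals that break contiguity or population balance and recording an invented stand-in metric at each step. A rigorous sampler would need more careful acceptance rules than this sketch shows:

```python
# A toy flip-step Markov chain on a 4x4 grid of equal-population "precincts"
# split into two districts; real tools operate on precinct graphs built from
# GIS data, and the metric here is an invented stand-in.
import random

random.seed(1)
SIZE = 4
cells = [(r, c) for r in range(SIZE) for c in range(SIZE)]
# Initial plan: left half of the grid is district 0, right half is district 1.
plan = {(r, c): (0 if c < SIZE // 2 else 1) for r, c in cells}

def neighbors(cell):
    r, c = cell
    candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [n for n in candidates if n in plan]

def contiguous(district):
    """Check that a district's cells form one connected piece (depth-first search)."""
    members = {cell for cell, d in plan.items() if d == district}
    seen, stack = set(), [next(iter(members))]
    while stack:
        cell = stack.pop()
        if cell in seen or cell not in members:
            continue
        seen.add(cell)
        stack.extend(neighbors(cell))
    return seen == members

def fairness_metric():
    # Invented stand-in: how many top-half cells sit in district 1. A real
    # analysis would tabulate something like expected partisan seat counts.
    return sum(plan[cell] for cell in cells if cell[0] < SIZE // 2)

samples = []
for _ in range(1_000):
    cell = random.choice(cells)
    old = plan[cell]
    plan[cell] = 1 - old  # propose flipping one cell to the other district
    sizes = [list(plan.values()).count(d) for d in (0, 1)]
    ok = (min(sizes) > 0                     # neither district vanishes
          and abs(sizes[0] - sizes[1]) <= 2  # crude population balance
          and contiguous(0) and contiguous(1))
    if not ok:
        plan[cell] = old  # reject the proposal; keep the current plan
    samples.append(fairness_metric())

print(f"metric ranged from {min(samples)} to {max(samples)} over the walk")
```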

“We’re working really, really hard to make it stable, reliable and easy to use, so that if you’re trying to argue about your congressional district, then maybe all you have to do is load in the shape files that come from GIS software and give it a shot.”
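The loading step that quote describes might look something like the following hypothetical snippet, assuming the GeoPandas library and an invented file name:

```python
# A hypothetical sketch of loading district boundaries from a GIS shape file;
# "districts.shp" is an invented file name, not a real dataset.
import geopandas as gpd

districts = gpd.read_file("districts.shp")  # one row per district polygon
print(districts.geometry.head())            # inspect the boundary geometries
```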

“These are heavy-duty mathematical tools that require a little bit of nuanced understanding. But the vision is really to democratize this process.”

It’s a hopeful vision: a democratic solution to one of the most undemocratic problems plaguing American politics. But if the MGGG succeeds in deploying tools and methodologies for identifying partisan gerrymanders that are able to convince judges, it would represent a monumental step towards eliminating this practice’s pernicious effects on our democratic system. And thanks to the efforts of math- and computer-savvy experts like Justin Solomon, as well as modern advances in computing power, a step like this may be within reach for the first time in history.

Minor corrections have been made for accuracy.