More Connecting the Dots



About halfway through the movie Patton, the general, played by George C. Scott, is driving his Seventh Army along the coast of Sicily toward Messina and mainland Italy.  An infantry commander asks for another day to bring up more troops and reduce casualties.  “General Truscott,” retorts Patton, “if your conscience won’t permit you to conduct the operation, I’ll find someone who can.”

Moments (of screen time) later, closer to the front, Patton encounters a colonel not yet fording a river.  “Get that outfit cranked up,” shouts Patton, “or you’ll be out of a job.”  He threatens another slow-to-advance colonel—“Put fire into this battalion or I’ll get somebody who can!”—and instantly makes good on his threat: “Major,” he shouts to another officer, “you’re now commanding officer.  You’ve got four hours to break through that beachhead.  If you don’t make it, I’ll fire you.”

If you can’t do it I’ll find someone who can.  If you don’t break through, you’re fired.  It’s a model of decisiveness and command.  It’s what many want in a general—or in a president.  It’s Donald Trump.  And understanding that is an important part of the appeal of Trump’s presidential candidacy.

It’s tempting to attribute Trump’s appeal to irrationality or ignorance.  How can his supporters not understand, for example, that undocumented immigrants are here because our economy depends on them and offers them jobs; that deporting 11 million people is at least impossibly expensive and likely just impossible; that the level of force and cruelty required even to attempt such a mass deportation would be comparable to the ethnic cleansing campaigns carried out by the Soviet Union, China, and the former Yugoslavia; and that for all his supposed business success, Trump has nowhere near the government or political experience even to daydream about running such a massive operation?

And how can they not understand that the kinds of challenges presidents face, the national and international conflicts that land on their desks, are orders of magnitude more complex and resistant to resolution than anything that can be surmounted by threatening to fire subordinates or by even the most immovable negotiator?

But that’s not what Trump supporters perceive.  They see 11 million people who have broken the law and, far from being prosecuted, are allowed to remain in the country and upon whom some would confer legitimacy and even a path to citizenship.  They see mainstream Republican leaders who would look away to avoid alienating Hispanic voters.  And they see—finally—a leader unafraid to be outraged instead of understanding, who, instead of compromising with powerful adversaries, domestic and foreign, will negotiate them into submission, and who instead of being balked by obstacles will bulldoze them.

They’re wrong, deeply wrong, of course, about the nature of illegal immigration and the way decisions are made and implemented, these Trump true-believers.  But they’re not crazy to hunger and thirst for decisive solutions to long-festering issues.  They’re not irrational.  And to suppose that they are harms—not their opinion of themselves, not the cause they espouse, but the cause they oppose: the progressive cause.

For one thing, it only reinforces their resolve.  As Jack Shafer of Politico told the Washington Post, “Establishment attacks on a demagogue only stiffen the loyalties of his subjects, proving to them that he is telling truth to power.”

For another, to dismiss concerns about immigration and similar hot-button issues as the ravings of maniacs and ignoramuses may lead us to underestimate the importance of the disconnect between immigration policy and reality, and to underestimate as well the urgency of resolving the issue.  It may also lead us to underestimate people’s anger at government dysfunction and their hunger for government action.

So if Trump fails, as we all suppose he will, his supporters may attribute his defeat, not to his outlandish persona and nostrums, but to the resistance or indifference of an out-of-touch establishment.  And the next demagogue may not advertise himself or herself so transparently.



Some years ago, when I was a congressional staffer, one of my duties was to liaise with the House Ethics Committee, getting its judgment as to whether a contemplated trip or district project was a permissible use of official funds. *

Most often the Ethics Committee counsel’s response was that the proposed use was well within the rules.  The question, rather, was whether the congressman would be comfortable with the trip or project being reported in his home-town paper.  It might be legal, in other words, but how would it look?  Might it require awkward explanation that would distract constituents or the media from his public policy views and actions?

Hillary Clinton does not seem to have had such a conversation about her email account or about the employment status of Huma Abedin.  Re the email account: Is there anyone who doesn’t know that, while serving as secretary of state, instead of using a State Department email account like everyone else, Clinton set up a private email domain for both personal and official use?  Huma Abedin is a longtime retainer who worked for Clinton while simultaneously drawing paychecks from the State Department, the Clinton Foundation, a consulting firm, and Clinton’s own personal funds.

Both matters seem fundamentally trivial: Who cares about the details of email or payroll arrangements?  For such things she jeopardized an almost certain presidential nomination?

The email imbroglio appears likely to have been legal, at least in the sense that there appears to have been no law that exactly prohibited it.  And the Abedin arrangement, according to the Washington Post, was “allowed by a special government designation [Abedin] held permitting outside employment.”

But is there, perhaps, something else she’d like to be doing with her public appearances and political capital other than endlessly explaining, rationalizing and defending these unusual arrangements?  Wasn’t it foreseeable—jeez, wasn’t it almost certain?—that all this would come to light and could eclipse what she wanted to say and do with her campaign?

And what was the point?  What was to be gained?  To avoid carrying two phones—her explanation—one each for official and personal communications?  Please.

To thwart congressional committees that would find it harder to snoop into her affairs if her emails were held privately instead of in federal archives?  How has that turned out for her?

(A third possibility is suggested by a remark made, half in jest, by a public TV producer at a meeting with a funder.  Why, the production company executive was asked, did his organization present its budget in the unorthodox and idiosyncratic format it used, instead of like everyone else?  Because then, replied the executive good-naturedly, we’d be like everyone else.)

The pattern isn’t new.  Look at the six-figure speaking fees that both Clintons continued to rake in even after they were rich as Croesus.  Look at the overlap between Clinton Foundation donors and countries and companies with interests before Hillary Clinton’s State Department.  Look, for that matter, at the Whitewater scandal, which started as a real estate scheme (a subject about which the Clintons knew nothing) that they hoped would supplement their meager salaries in public office and legal practice.  It led to monetary losses, which led to subpoenas, which led to Kenneth Starr, which led to Monica Lewinsky, which led to impeachment.  All were, at their inception, tangential to the mainstream of the Clintons’ public ambitions and aspirations.  All culminated in political disaster.

The Clintons are among the smartest people in public life.  Have they not learned that nothing stays secret?  Do they and the super-smart denizens of Hillaryland and Friends of Bill ever look down the road of these schemes and ask: What’s the point?  What could go wrong?  What could an aggressive Republican chair and staff of the House Oversight Committee do with it?

Doesn’t anybody ask them—don’t they ask themselves—what it would look like on the front page?



While governments in other parts of the world struggle to put down military rebellions, Germany, the IMF and the rest of the Eurozone establishment have moved decisively to put down Greece’s economic rebellion against the austerity regimen imposed by the creditor nations and institutions.

Now comes the hard part.  Will Germany treat Greece the way the US treated Germany after World War II, helping Greece rebuild its economy so it need not again finance itself with credit it cannot repay?  Or will it—as it has so far—treat Greece as Germany was treated after World War I: as a defeated enemy which must not only repay its debt but suffer for its sins?

The hard line Germany has taken on Greek debt repayment is “more than a little hypocritical,” says French economist Thomas Piketty.  “Germany is really the single best example of a country that, throughout its history, has never repaid its external debt,” Piketty says in an interview with the German newspaper Die Zeit, translated and reported in Slate.  “However, it has frequently made other nations pay up, such as after the Franco-Prussian War of 1870.”

But there is more wrong with Germany’s hard line than hypocrisy.   It violates the axiom attributed to Einstein that doing the same thing repeatedly and expecting different results is a definition of insanity.  (It also violates the more sensible version of the axiom: that doing the same thing repeatedly and expecting different results is a recipe for frustration and futility.)  We’ve been down this road before—the forked road that proceeds from the vanquishing of a defeated nation by a powerful adversary.  We have taken the right road and the wrong.  We have a good idea where each of them leads.

We know where the road from Versailles, the road from World War I, led the world.  The Versailles treaty imposed ruinous reparations on Germany—reparations that a devastated Germany could never pay and never did.  Its overseas colonies were taken, not to be granted independence but to be given to the victorious colonial powers Britain and France.  The road that started in Versailles led to hyper-inflation, National Socialism, rearmament, the Holocaust and World War II.  Math teacher/song-satirist Tom Lehrer captured the process perfectly:

Once all the Germans were warlike and mean,
But that couldn’t happen again.
We taught them a lesson in nineteen eighteen,
And they’ve hardly bothered us since then.

The world took a different path after World War II—the war for which the policies followed after the last war had helped set the stage.  Once again there were voices calling for a defeated Germany to be crushed so it could never wage war again.  Secretary of the Treasury Henry Morgenthau called for Germany to “not only be stripped of all presently existing industries but so weakened and controlled that it cannot in the foreseeable future become an industrial area.”

And in fact, for two years after VE Day, that was the direction that US policy took.

But in 1947, President Truman, at the urging of military leaders led by Gens. George C. Marshall and Lucius D. Clay, reversed course, issuing a directive that “[a]n orderly, prosperous Europe requires the economic contributions of a stable and productive Germany.”  Germany was even made a beneficiary of the Marshall Plan.*

Unlike the path followed after World War I, post-World War II policies led to a strong, prosperous, and not only friendly but supportive Europe so unified that today, the threat of Greece’s leaving the Eurozone raises alarms.  As well it should, given the long history of intra-European conflict that preceded post-World War II unification.  This aspect of the Greek debt crisis—and of near-miss debt crises in countries like Ireland and Italy—is often lost in analyses: that what is at stake is not economic policy alone, but a framework that has required the countries of Europe to work together.

The Greek economy needs to be made sustainable.  But how that happens makes a difference.  Germany has the resources to treat Greece the way the US treated Germany after World War II, to help Greece rebuild its economy.  It also has the power to squeeze Greece like over-ripe fruit and collect what it is owed as it drips out.  Squeezing Greece until the pips squeak risks fostering the resentment that squeezing Germany incurred after World War I. That is a road best not taken again.



Democrats are treating Donald Trump’s candidacy and his surge to the top of the polls as Christmas—or at least Election Day—in July.  The left-leaning Salon headlines Trump as “Democrats’ greatest gift,” and prophesies that “his buffoonish politics threaten to bring down his entire party.”  The right-wing media pretty much agree: “Donald Trump Controversy Plays Right into Democrats’ Hands” reads a National Review headline.  And that was before Trump tried to rip John McCain but succeeded only in ripping himself.


Both parties assume that Trump will drive voters away from an otherwise appealing slate of candidates.  But they may have it backwards.  Trump’s fulminations may actually be good for Republicans—but bad for Democrats.

Democrats assume that every fair-minded person recoils, as they do, from Trump’s racist xenophobia and his carnival-barker persona.  They react, as they often do to adversaries whose appeal baffles them, with mocking disdain: four of Letterman’s top ten “interesting facts” about Trump, for example, mocked not his policies but his hair.  But as John Dickerson, chief political correspondent for Slate magazine and political director of CBS News, has pointed out, “There is a group of Republican voters who like what Trump is saying, how he says it, and don’t mind that the political clowns are being ruffled by this party-crasher.”  To them, and to people like them, Democrats’ disdain for Trump may come across as alienating disdain for his supporters, and for those who fear or oppose illegal immigration but are embarrassed to admit it.

Trump’s outré campaign and persona deliver more to Republicans than votes.  Precisely because Trump is so far beyond the pale, he gives other Republicans cover to distance themselves from his headline-grabbing freak-show while adopting positions not so far from those he espouses.

So, for Republicans, Trump could be win-win.

For Democrats, not so much.

Maybe the American people will see Trump’s agenda and tone as the superficial, ill-advised and mean-spirited notions they are.  Maybe Trump’s surge in the polls is fueled not by agreement but by celebrity and will melt away as people get to know him better.  Maybe his attack on McCain will prove his undoing.

But maybe the people whose support has propelled him to the top of the polls are representative of the 44 percent of Republicans who tell Gallup that they believe he would “do a good job” on immigration, the kind of number that supports the conclusion that “anti-Hispanic bigotry plays well with the Republican primary electorate.”  A showing of half of that would guarantee Trump a spot in the top tier of Republican candidates, the tier that will not only make it into the debates but will mark the boundaries of hot-button arguments over issues like immigration, not just in the primaries but after the conventions.  Let us not forget that notions and rhetoric not so unlike Trump’s were compelling enough to elect majorities in the House and Senate.  The exposure they will get from the Republican debates may attract support, not repel it.

Enough support to get Trump the presidency or the Republican nomination?  Probably not.  Enough to fuel a third-party Trump candidacy when Trump doesn’t get the Republican nomination?  Could be.

But there may be enough Trump votes to motivate Republican candidates who do have a chance to adjust their positions and rhetoric to make themselves plausible second choices when the Donald drops out.  And isn’t that the whole point of seemingly hopeless candidacies like Trump’s and, for that matter, Bernie Sanders’?  To demonstrate enough support for their policies that the serious candidates have to make them their own, and to increase their influence in the campaign, in the administration that will follow if their party wins, and in the not-so-loyal opposition if it doesn’t.

So put Christmas off until December, and Election Day till primary season.  Trump will hurt his opponents.  It’s just not clear yet which ones.



In an updated version of an old joke, four Washington policy types are having dinner together.  During the pause before coffee and dessert, God appears and offers to answer one question from each of them.  “When,” asks the first wonk, “will we have racial harmony in our cities?”  “Not in your lifetime,” replies God.  “When will we pay off the national debt?” asks the second wonk.  “Not in your lifetime,” God answers.  “When will there be peace in the Middle East?” inquires the third wonk.  Once again, God replies, “Not in your lifetime.”

For his question, the fourth wonk chooses the one on the cover of Indicators of Higher Education Equity In The United States: 45 Year Trend Report, recently released by the Pell Institute for the Study of Opportunity in Higher Education:  “When will the U.S. close the gap in higher education attainment by family income?” God thinks for a moment, then replies, “Not in my lifetime.”

Well, maybe it won’t take that long.  But nothing in the Pell Institute report affords grounds for much optimism that the gap will be closed or substantially narrowed anytime soon.  For 1970, for example, Pell Institute figures show a 46 percentage point gap between the college continuation rate for the top quartile of families, arranged by income, and the bottom.  Forty-two years later, the gap was 37 points.  A nine-point gain in 42 years: At that rate, the gap should be closed in about 173 years.

Or take bachelor’s degree attainment.  In 1970, a 34 percentage-point gap between the top and bottom quartiles; in 2013, the gap had doubled to 68 points.  But take heart.  With the top quartile’s rate capped at 100 percent, a milestone it should reach, at its rate of increase over the past 43 years, by 2040, the bottom quartile of families, at its rate of increase since 1970, will catch up in 1300 years, by 3313—about a century after the conclusion of the time frame projected by Star Trek: Enterprise.
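The projections above are straightforward linear extrapolations, and they are easy to check. A minimal sketch (the continuation-gap figures are quoted from the report above; the bottom-quartile attainment rates of roughly 6 percent in 1970 and 9 percent in 2013 are approximations of the report's figures, used here only for illustration):

```python
def years_to_close(gap_now, points_closed, years_elapsed):
    """Years until a gap reaches zero at its historical rate of narrowing."""
    rate = points_closed / years_elapsed  # percentage points closed per year
    return gap_now / rate

# College continuation: the gap narrowed from 46 points (1970) to 37 points
# (2012) -- nine points in 42 years.
print(round(years_to_close(37, 9, 42)))  # ~173 years

# Bachelor's attainment, bottom quartile: roughly 6 percent in 1970 and
# 9 percent in 2013 -- about 0.07 points per year. Reaching 100 percent takes:
print(round((100 - 9) / ((9 - 6) / 43)))  # ~1304 more years
```

At those rates the figures cited above ("about 173 years," "by 3313") reproduce to within rounding.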

If anything, the report’s shallow-to-flat-to-negative trend lines actually understate the dilemma for low-income students.  A forty-plus-year time span is one useful way of measuring progress.  But in real life, students confront a 4-6-year timeline, the time it takes them to go through college.  Any other time frame is all but meaningless to their educational progress.

Of course, there’s no law that says that federal and state governments can’t increase their investment in educating low-income students.

Actually, there are such laws, the laws that set levels of government support for state universities and Pell grant scholarships.  So, for example, 29 of the 50 states have decreased their total funding of state colleges and universities, according to the New America Foundation; 44 of 50 decreased per-pupil support. And, as Indicators documents, “the percent of average college costs covered by the maximum Pell Grant declined by 40 percentage points – from a high of 67 percent in 1975 to a low of 27 percent in 2012.”

But those laws could be changed.  Which is what Pell Institute director Margaret Cahalan, this report’s co-author, recommends in her concluding essay, “Sixteen Strategies for Widening Equity of Participation in Higher Education in the United States: Reflections from International Comparisons”: “Restor[e] Public Funding at the Federal, State, and Local levels to Earlier Levels Including Restoring Pell Grants to Their Former Buying Power.”

But does anyone think that state legislatures, or a Congress controlled by starve-the-beast conservatives, will restore those billions of dollars of cuts—or make any of Cahalan’s other sensible investments in educating low-income students, most proven in other countries—anytime soon?

That leaves strategies that don’t require such large funding outlays, and Cahalan includes several of those among her “Sixteen Strategies,” things like “Supporting Competency-Mastery Based Learning and Recognition of Prior Learning…,” “Listening to What Students Are Telling Us” and “Taking an Integrated and Holistic Approach to Student Services and Institutional Access Plans.”

They would all help.  But would any or all of them reverse downward trends or accelerate weak upward ones enough to close the gap?  It doesn’t seem likely.  As with other societal needs, the prospects for supplying the public resources needed to narrow the college-attainment gap between rich and poor seem dim, at least for the next several election cycles.

To recognize that reality is not a counsel of despair.  But it does point to the need for an alternate path to closing, or at least narrowing, the college attainment gap: one that starts not when students start college but when they start kindergarten.

In fact Indicators also recognizes, in a grab-bag section entitled “Other Factors Affecting Equity,” another obstacle to low-income college completion, one comparable in importance to paying college costs:

Achieving equity in attainment will require eliminating gaps not only in college enrollment, choice, and completion, but also in other critical outcomes, including completion of a rigorous academic curricular program…In order to enroll and succeed in college, all individuals must graduate from high school academically ready for college-level work.

Of course, all individuals are not now graduating from high school college-ready.  In 2013, ACT reports, 45 percent of students from families with incomes under $36,000 were college-ready in reading, compared to 84 percent of those from families with incomes over $100,000, a gap of 39 percentage points.  In math, the gap is 42 points; in reading, 37 points; and in science, 39 points.  The racial gap is comparable: 28 percent of all white students tested college-ready in all four subject areas; among African Americans, 4 percent.

The consequence: While almost twenty percent of all entering freshmen at four-year colleges must take remedial or developmental courses to make up for content they should have been taught in high school, the figure is about 32 percent for low-income students, 39 percent for African Americans, 20 percent for Hispanic Americans, and just 13.6 percent for white students.  These are courses for which students pay college tuition but receive no college credit.  Worse yet, fewer than four in ten students who take remedial courses graduate from college within six years; one in four don’t even complete the remedial courses.

But although the lack of financial support for college education bears responsibility for the completion gap comparable to that of K-12 deficiencies, there is a difference in the two factors’ amenability to improvement: K-12 reform has been fermenting for more than a quarter-century and is still bubbling.  The nation has been awakened to the need to improve the education that low-income students receive before college, and an active debate is going on.  Teach For America, charter schools, Joel Klein, Michelle Rhee, Race To the Top.  Education reform is channeling FDR’s dictum:  Try something.  If it fails, try something else.  But above all, try something.  No such zeitgeist, no comparable energy, no potential for real improvement obtains in higher education policy.

Congress could make college readiness, rather than just high school graduation, the objective of federal K-12 policy.  Reauthorization of No Child Left Behind, a/k/a the Elementary and Secondary Education Act (ESEA), appears to be on a glide path toward floor debate in the Senate.  So, as it happens, is the Higher Education Act.  In a just world, Senate HELP Committee chair Lamar Alexander (R-Tenn.) and ranking member Patty Murray (D-Wash.) would recognize that K-12 and post-secondary policy are inextricably linked and join the two pieces of legislation together.  That won’t happen.

But even more important than crossing our fingers for Senate action—and more promising than the snowball’s-chance-in-hell of constructive action in the House—is to act on the aspect of the issue that can be affected, by tapping into the existing momentum of K-12 reform.  State by state and city by city, charter schools, standardized testing, the Common Core curriculum and the rest of the education reform agenda will be implemented, refined, and will succeed—or they won’t and will fall by the wayside to be replaced, as FDR advocated, by something else.

Restoring the buying power of Pell Grants, increasing state support for public universities, and the other sensible dollar-sign-bearing recommendations in the Pell Institute report will stay on the back burner. But increasing college completion rates by increasing college readiness is ready to go.


In our final glimpse of Don Draper, in the Mad Men series finale, we see him in a white shirt, clean shaven, with the haircut that InStyle described as “maybe the most defining feature of [Draper actor Jon] Hamm’s character, short on the sides, slightly longer on top.”

Around him in the shot are his fellow participants at an early-‘seventies, Esalen-like New Age retreat, their shirts counter-culture prints, their hair fashionably long.  His erstwhile ad agency colleagues—Roger, Peggy, Pete, Rizzo and the rest—seen in the rest of the finale’s last hour are likewise dressed and coiffed for the ‘seventies.

Not Don.  The world has changed.  His life has changed.  He has stolen the name and identity of a fallen military colleague.  He has been married and divorced twice and serially betrayed both wives.  He has neglected his work and his business partners.

But he looks as he has always looked over the eleven or so years covered by Mad Men’s seven seasons:  his hair no longer or grayer than when we first met him, his gaze no less direct, his face still unlined—a mid-20th-century Dorian Gray, the title character in Oscar Wilde’s novel, whose youth and beauty never fade, while his portrait changes over time to reflect the moral corruption and decay of the life he has lived.

In addition to their unchanging appearance, Don Draper and Dorian Gray share something else: secret identities.  Gray’s identity is split between his unchanging physical self and the portrait that reveals his corruption.*  Draper is haunted by the fear that his family and colleagues will discover that his name and identity are not his own.  Born Dick Whitman, he stole the name Don Draper from an officer who was killed in action in the Korean War.

Perhaps most damning, both Gray and Draper use their unchanging outward perfection to enable their success, to gain, and then betray, the confidence of intimates and associates.  “Once Dorian discovers his godlike powers,” New Yorker critic Alex Ross has written, “he carries out various heinous acts, including murder.”   And a young colleague, dismissed after he tries unsuccessfully to duplicate one of Draper’s more outrageous approaches, tells Don that the only reason the gambit worked for Don was his handsomeness.

The two parallel stories, Dorian Gray’s and Don Draper’s, diverge at the end.  Seeking to reclaim his virtue and conscience, Dorian slashes the picture that reveals him as the degenerate he has become.  Police and servants hear screams of agony.  But when they gain access to the room where the painting was stored, they find, not the ageless, beautiful youth who had entered the room just moments before, but a corpse, “withered and wrinkled and loathsome of visage.”

Happily, Don Draper escapes, or at least postpones, a comparable final reckoning.  After walking away from his ad agency and giving his expensive car away, he seems to flirt with the idea of renouncing, as Dorian did, his malignant and dishonestly acquired identity. Hair askew, wearing a checked shirt, he appears to contemplate suicide.

But he is Don Draper, not Dorian Gray.  He keeps his identity—not just the name but the appearance and the ad man’s persona.  In the final scene of the series, we see him in a crowd of mantra-chanting meditators, in his trademark haircut and white shirt, eyes closed.

Slowly a smile lights his face. Could it be a smile of peace with the world and with himself?  Hardly. He has come up with the idea for another ad campaign.  A segue to the iconic Coca Cola “I’d like to buy the world a Coke” ad, combined with our recollection of a phone call with Peggy back at the ad agency—They’d take you back in a minute.  You could have the Coke account—tells us that Don hasn’t changed.   He’s still an ad man.

Dorian Gray dies a horrible death. Don Draper will never grow old.



☐ Teachers who change student answers
☐ Administrators who set “no-excuses” goals
☐ “Experts” who think that setting goals can make up for inadequate preparation

(Choose all that apply)

What could the Veterans Affairs scandal and the Atlanta public school test cheating scandal have in common?  Quite a bit, actually.

Both involve public employees whose jobs committed them to help vulnerable people, who could not fulfill those commitments, and who tried to cover it up.

In my last Connecting the Dots post, Want to Get to the Root of the VA Scandal? Follow the Money, I suggested that responsibility in the VA debacle belonged not only to the VA functionaries who jiggered appointment records to make it look like veterans were receiving timely attention, but should be shared by those who took the country to war without budgeting adequately “to care for him who shall have borne the battle” (in the words of Lincoln that form the VA’s motto) and those who thought that management systems, goals and incentives could compensate for actual funding for veterans’ health care.

As a recent and riveting New Yorker article, “Wrong Answer,” recounts, the events that led to the Atlanta testing scandal followed a similar pattern, a pattern of cheating that arose from an effort to meet unrealistic goals, driven by a combination of dedication and the desire to keep jobs and earn bonuses.

The scandal has been heavily covered in the Atlanta Journal-Constitution and national media: Public school teachers and administrators previewed sealed tests and changed wrong answers to correct ones.  Some 109 school system employees were implicated, including 38 principals and administrators.  And a former superintendent, mortally ill with cancer, and her associates await trial as the masterminds of the enterprise.

The New Yorker story is told not from the point of view of the investigators who uncovered the cheating but from that of the teachers and administrators who faced what they saw as an impossible task: eliciting high performance on standardized tests from woefully under-prepared middle-schoolers, some of whom, according to teacher Damany Lewis, “were still reading by sounding out the letters.”  A veteran teacher told Lewis that only twenty per cent of his students would grasp what he was teaching.

The story is well worth reading in its entirety, but a few excerpts convey the way it unfolded as testing-time approached:

Lewis felt that he had pushed [his students] to work harder than they ever had in their lives. “I’m not going to let the state slap them in the face and say they’re failures,” he told me. “I’m going to do everything I can to prevent the why-try spirit.”…

At the end of the testing week, Lewis went back to the testing office with Crystal Draper, a language-arts teacher. For about an hour, they erased wrong answers and bubbled in the right ones…. Many students were on the cusp of passing, and he gave them a little nudge, so that they would pass by one or two points…

A month later, when the scores came back,… a teacher announced, "You did it! You finally made it!" For the first time since the passage of No Child Left Behind, Parks [Middle School] had met its annual goals. "We had heard what everyone was saying: Y'all aren't good enough," a student told New Yorker writer Rachel Aviv. "Now we could finally go to school with our heads held high."

[After a year,] the cheating process began to take the form of a routine…[The school’s reading coordinator] paged up to six teachers and told them to report to the room. While their students were at recess, the teachers erased wrong answers and filled in the right ones. Lewis took photographs of the office with his cell phone so that he could make sure he left every object, even the pencils on Kiel’s desk, exactly as he’d found them…. cheating had become a “well-oiled machine”.

The Atlanta scandal raises a far more complicated set of issues than the VA scandal, many of whose problems could be solved with the money needed to hire more doctors and other staff to treat sick and injured vets. Unlike VA employees, who gamed the system to keep their jobs and earn bonuses, the Atlanta teachers in "Wrong Answer" did what they did in what they thought were the best interests of their students as well as themselves. Many children, predominantly from low-income families, are far below grade level and need to catch up. Setting goals is a useful way to set expectations and determine whether they've been met. Even "stretch" or aspirational goals have their place: mediocre or worse sports teams begin each season and game believing that they can win, even in the face of rosters that suggest that victory is unlikely.

But just as the trouble at the VA started when treating patients was replaced as an objective by meeting (or appearing to meet) time-to-appointment goals, so the trouble in Atlanta started when education was replaced as a goal by meeting test-score targets. "'Data' and 'accountability' had become almost magic words," Aviv writes: "If administrators repeated them enough, it seemed they believed that scores should rise, even if there hadn't been significant enhancements in instruction."

It's not an uncommon phenomenon, Aviv was told by John Ewing, the former executive director of the American Mathematical Society. Ewing is perplexed by educators' "infatuation with data," Aviv writes, "their faith that it is more authoritative than using their own judgment. He explains the problem in terms of Campbell's law, a principle that describes the risks of using a single indicator to measure complex social phenomena: the greater the value placed on a quantitative measure, like test scores, the more likely it is that the people using it and the process it measures will be corrupted."

Like the VA malefactors, the Atlanta teachers who leaked test questions and changed answers (a small fraction of all Atlanta teachers) had choices. They made the wrong ones and will suffer the consequences.

But they weren't the ones who set unrealistically high expectations (67 percent of a school's students had to score satisfactorily in language arts, and 58 percent in math, or the school could be closed) for children who were still sounding out words in seventh grade. And they weren't the ones who, by labeling their policy "no excuses," implied that references to the burden of under-education that students brought with them were "excuses" made by lazy or incompetent teachers.

Under-educated students can be brought up to grade level. The KIPP charter schools have had some success doing it: not by establishing unrealistic "no-excuses" goals, but with significantly longer school days, weeks and years and with teachers who give their students their cell-phone numbers for evening and weekend consultation; and not in a year, but over several years.

It is also worth saying, though, that this regimen of benign Stakhanovism takes its toll on those who are charged with carrying it out. Steven Brill, in his generally pro-education-reform book Class Warfare, follows the school year of a driven young KIPP assistant principal, only to report in his closing chapter that she resigned at the end of the year. "I know I can't do this forever. And if I had a child I couldn't do it. As it is, it is screwing up my marriage," she told Brill. "This wasn't a sustainable life."

The point is that bringing under-educated students to grade level and beyond takes years of smart and hard—perhaps unsustainably hard—work. It will not be accomplished by setting goals and blaming teachers if their students fall short. And if—when—administrators nevertheless set such goals and sanction teachers and schools when the goals are not met, those who set the goals should join the teachers in taking the fall.



When—or should it be if?—the inquiry into the Department of Veterans Affairs scheduling scandal moves beyond identifying which VA officials’ heads will roll, investigators will start working their way up the agency’s ladder of responsibility.

They’ll start with the intake staff who handled veterans’ applications for treatment. They’ll move on to admins who entered appointment data they knew were intended to obscure how long veterans were waiting to get an appointment with a doctor. They’ll look at supervisors who reported that veterans who needed attention were being seen in a timely manner. Those who did wrong will be punished, and they’ll deserve it.

And then, if they follow the advice that Deep Throat gave Bob Woodward in the movie of “All the President’s Men,”* investigators will follow the money. Because almost all the media coverage and official reports point to lack of resources—a severe shortage of doctors and other medical staff and of the funds needed to hire and retain them—as the root cause of almost all the evil that’s taken place at the VA.

At the head of the money trail, they will find the policy makers and appropriators who took the country into two wars but failed to recognize that 21st century warfare produces many fewer deaths and many more, and more excruciating, injuries, both physical and mental, that require more, and more expensive, treatment. Or those who recognized it but failed to give the VA the human and financial resources that would be needed, in the words of Lincoln that form the VA's motto, "to care for him who shall have borne the battle." At the end of that road they will also find the experts who believe that lack of resources is merely an excuse made by incompetent or lazy employees who can be brought into line by goals and incentives that force people to work harder.

On the surface, it looks as though VA funding has increased substantially over the last decade and more of war. Total VA funding rose from $58 billion in 2002, before the invasion of Iraq, to $130 billion in 2013, a 142 percent increase. But a closer look shows that while VA entitlement funding (for pensions and disability payments) has gone up 153 percent—as you would expect with Vietnam-era vets reaching retirement age—discretionary funding, which pays for medical treatment, has gone up only 129 percent. In 2002, with the U.S. more or less at peace, VA discretionary funding represented 48 percent of the total. In 2013, after a flood of more than 50,000 wounded Iraq and Afghanistan vets, discretionary funding represented 46 percent of the total VA budget—less than when we were at peace.

Congressional leaders admit that VA health care has been underfunded and blame the VA. “In an environment where everybody is told, ‘Keep the cost down. Don’t tell me anything costs more,’” Senate Budget Committee chair Patty Murray told Yahoo News, administrators “don’t want to be told by their bosses, ‘Don’t tell me you need more money, because we can’t say that.’… [I]f they need more money, they need to be able to tell us, because how else are we going to solve these problems?” (Congress is not always so diffident in the face of agency budget requests. Post Office requests to balance budgets by cutting service are routinely rejected, while Defense Department recommendations that weapons systems be discontinued are disregarded at the behest of defense contractors and the members of Congress who represent them.)

The underfunding diagnosis is confirmed by those who have been inside the VA. “The scheduling scandal is a symptom of a much more serious disease,” Dr. Sam Foote, a whistleblower who spent 23 years at its Phoenix hospital, wrote in the New York Times, “a mismatch between the VA’s mission and its resources.”

Rob Nabors, the deputy White House chief of staff whom President Obama dispatched to the VA to sort things out, agrees. “With regard to increasing access to care,” he reported to the president in late June, scheduling “is secondary to the need for additional resources to actually schedule—doctors, nurses, and other health professionals; physical space; and appropriately trained administrative support personnel.”

But what about the scheduling deceit and the cover-up? What about the culture of corruption? They’re real, all right, at VA and at other agencies, and in the private sector. But as VA whistleblower Foote says, they’re symptoms, symptoms of under-resourcing. I’ve never worked at the VA, but I have worked at an underfunded federal bureaucracy, and I saw how people react to not having enough people, time and money to do what needs to be done. It comes in stages: commitment, frustration, cynicism, resignation or, for some, workarounds and gaming the system, within the rules or without.

All of which doesn’t make the bad actors at VA innocent victims. They could have responded otherwise. Several whistleblowers, like Dr. Foote, for example, have come forward; more will step up. Others will surface who became discouraged and left for other agencies or the private sector. Still others soldiered on, scheduling and caring for as many patients as they could. Only a relative few of the Veterans Health Administration’s 200,000+ employees, I am confident, will be found to have cooked the books to keep their jobs or to get promotions and bonuses. There will be more than enough tumbrils for them.

But a share of guilt, moral if not legal, should be saved for those who created the situations out of which the misdeeds arose and in which they festered: the leaders who sent 2.5 million members of the armed services to war with the VA we had instead of the VA service members needed and deserved; the administrations, from two presidents and four Secretaries of Veterans Affairs on down, that didn't request adequate funding; and the members of Congress who believed that holding the line on the federal budget outweighed caring for wounded veterans.

And save a share of culpability for all those who believe that missions, public and private, that are difficult enough to accomplish with adequate resources can be done just as well without that support. For the sake of those they serve, may their delusions never catch up with their constituents and beneficiaries the way the VA’s have caught up with wounded veterans.



President Obama's critics, more numerous than ever as his term, and his political influence, wind down, see the looming disintegration of the Iraqi government as proof positive that his policy toward that country and others in the region has failed.

To the contrary:  It suggests that he was right. Would the small residual force that critics think Obama should have left in Iraq have stemmed the tide of ISIS soldiers now sweeping toward Baghdad?  Or would it have required either a massive redeployment to stop ISIS or an unseemly evacuation of the residual forces to avoid defeat?  And consider the news footage of US armaments left in Iraq to bolster the Maliki administration, now abandoned by Iraqi defenders and being used against that same administration.

Indeed, Iraq’s proto-democracy appears to be viable only so long as substantial numbers of US and other countries’ troops are there to secure democratic processes and keep warring factions from tearing each other and the country apart.  Once they depart, Iraqis’ sense of their national identity—the sense that they are parts of one country and not three, one Sunni, one Shiite and one Kurdish—is not strong enough to hold together a country whose borders were drawn by British civil servants close to a century ago.

Think of it as America’s King Canute moment.  Canute is remembered as the eleventh century Danish monarch who in a fit of vainglorious futility commanded the waves to stop.  In reality, it appears that Canute knew very well that the waves wouldn’t stop but commanded them to cease in order to demonstrate to his nobles that even a king did not have unlimited power.

Iraq's disintegration demonstrates once again that even the United States, with its bottomless bank account, its willingness to risk and lose its soldiers' lives and health, and its international hard and soft power, cannot command peoples divided by centuries-old ethnic and religious chasms to come and stay together, or cobble together a democracy for and with peoples with no experience of self-rule.

Whatever made us think that we could?

Primarily our experience after World War II.  After forcing Germany and Japan into unconditional surrender, the US and its wartime allies assumed the political and economic governance of the two defeated powers.  Wartime leaders were executed, imprisoned or effectively exiled.  The structure of government was redesigned.  Even the two countries’ formal names were changed.  And from our tutelage, two strong and stable democracies and US allies emerged.

It has been said that success breeds confidence.  Here it bred overconfidence—the belief that we were rich and wise and resolute enough to turn autocracies into democracies and to turn enemies into friends and allies, if only we were willing to deploy enough military might, invest enough lives and money and stay long enough.

We were wrong.  Vietnam, Iraq and Afghanistan are not Germany and Japan—not already-industrialized countries, not nations with histories of self-government.  A decade of exposure to American political values, of US oversight of the forms of democracy, has not made democracy take root.

Now we need to look from one aspect of our post-World War II experience to another.  Instead of trying to replicate our success in Japan and West Germany, rebuilding countries from the ground up, we need to replicate our patience and fortitude toward the countries of Eastern and Central Europe during the 45 years between the end of the war and the collapse of the Soviet empire and the Soviet Union.

In the years after the war, Republicans made a lot of noise about rolling back communism and Soviet domination in Eastern Europe. But everybody knew that the US was not going to confront Soviet military power and precipitate another land war in Europe—especially after President Eisenhower, the former general who knew America’s military capability better than anyone else, declined to intervene in the Hungarian uprising in 1956.

But the fact that the US could not oust and replace communist regimes across Central and Eastern Europe didn’t mean that we couldn’t do anything.  The CIA had a robust presence in the Soviet Union and across the satellites.  American soldiers and American armaments reinforced the countries on the western side of the Iron Curtain.  We spent massively to stay ahead of the Soviets in the nuclear arms race.  We reached out to the other side in a succession of summits.

And we waited, while the fundamental flaws in communist economic policy undermined the economies and public support of Eastern Europe and the Soviet Union, whose citizens could see Western European prosperity just across their western borders.

Historical analogies are never more than approximate.  Our enemies in the Middle East are not states, as they were in Eastern Europe, but non-state forces.  The threats against which we must defend ourselves are not symmetrical, force against force, but asymmetrical.

But we have spent thirteen years testing the hypothesis that the way to fight terrorism is by reconstituting countries so they are less hospitable to terrorists, and it hasn't worked.  It hasn't worked partly because, being non-state actors, terrorists can easily relocate.  And it hasn't worked because, despite our willingness to expend lives and money, ours and our adversaries', we have not been able to remake Afghanistan and Iraq, and what stability has been achieved has not so far survived our departure.

Despite his inability to turn back the waves, King Canute, so Wikipedia says, “maintained his power by uniting Danes and Englishmen under cultural bonds of wealth and custom, rather than by sheer brutality.”  Maybe there’s a lesson there for us.



Glenn Kessler, the Washington Post’s fact checker, joins other analysts and commentators in finding fault with President Obama’s once again calling attention to the fact that “Today, the average full-time working woman earns just 77 cents for every dollar a man earns…in 2014, that’s an embarrassment. It is wrong.”  The 23-cent gap, Kessler says, doesn’t so much reflect outright Lilly Ledbetter-type sexism or discrimination—paying women less than men for exactly the same work—as it does life and career choices women have made: “The average woman has less work experience than the average man…[M]ore of the weeks worked by women are part-time rather than full-time. Women also tend to leave the work force for periods in order to raise children, seek jobs that may have more flexible hours but lower pay and choose careers that tend to have lower pay.”

How real are these “choices”?  How many women would work part-time or leave the workforce to raise children if good and affordable child-care were readily accessible?  How many would choose lower-paying professions if not for the barriers that still exist to the higher-paid careers?

But there are larger and more important problems with the pay gap, and with the arguments advanced by both pay-equity advocates, like the president, and critics.  The gap is not bad only because of the process that leads to it, i.e., because it may be the result of illegal discrimination. It's bad because it means that half the workforce, and 40 percent of the country's heads of households, earns almost 25 percent less and therefore has substantially less to spend on their own and their children's food, housing, health care and education, and less to save for retirement—deficits that will burden their children and are likely to be passed on when those children become adults.

In fact, the "choices" that pay-gap defenders point to as lessening or excusing its inequity—like women staying home to care for children and holding lower-paying jobs—are doubly damaging:  They not only contribute to the gap but serve as disincentives to work that society badly needs done.  Parents being able to choose to care for their children is a good thing.  We should enable such choices—as long as they're real choices—not burden them.  We need to encourage people to choose careers in lower-paying fields historically populated primarily by women, like teaching and social work, not punish them by paying them less and retarding their career progress.

The curse of the pay gap is thus not only the processes that lead to it but the outcomes it produces: families that cannot afford secure middle-class lives and that will pass on that insecurity to the next generations.

The same can be said, by the way, for income inequalities between races and ethnic groups. They are deleterious not only as the results of discriminatory processes, but as the perpetrators and perpetuators of economic and social deficits that harm not only those directly burdened by them but the rest of us as well.

Of course, even a more generous federal government than we now enjoy could not legislate or fund a mandate that lower-paid jobs traditionally held by women receive compensation equal to that of comparable jobs traditionally held by men.  But just because we can't do everything doesn't mean we can't do anything.

We could, for example, expand the child-care tax credit to include people—mostly but not exclusively women—who forego paid employment to care for their own children.  We could provide tax or other economic incentives to people who enter jobs, like teaching and social work, that society needs to be done and done well and that are traditionally held by women.  Or we could give women who interrupt careers to care for children guarantees like those we give to those who interrupt careers to serve in the National Guard or the Reserves: that their jobs, salaries and seniority will be waiting for them when they return to work.

Would such measures involve costs?  Yes.  But so does the gender pay gap.  The only question is who should bear them.
