Friday, April 11, 2014

Our Year of No Sugar: One Family’s Grand Adventure

By Eve O. Schaub, Special to Everyday Health
April 1, 2014
Once upon a time, I was healthy; at least I thought I was.
Sure, I lacked enough energy to get me through the day, but with all the commercials on TV touting energy drinks for America’s tired masses, I always assumed I wasn’t the only one suffering. And sure, everyone in my family dreaded the coming cold and flu season, but again, I thought come January everyone develops some degree of germophobia.
At least, that’s what I thought until I heard some disturbing new information about the effects of sugar. According to several experts, sugar is the thing that is making so many Americans fat and sick. The more I thought about it the more this made sense to me — a lot of sense. One in seven Americans has metabolic syndrome. One in three Americans is obese. The rate of diabetes is skyrocketing and cardiovascular disease is America’s number one killer.
According to this theory, all of these maladies and more can be traced back to one large toxic presence in our diet… sugar.

A Bright Idea

I took all of this newfound knowledge and formulated an idea. I wanted to see how hard it would be to have our family — me, my husband, and our two children (ages 6 and 11) — spend an entire year eating foods that contained no added sugar. We’d cut out anything with an added sweetener, be it table sugar, honey, molasses, maple syrup, agave or fruit juice. We also excluded anything made with fake sugar or sugar alcohols. Unless the sweetness was attached to its original source (e.g., a piece of fruit), we didn’t eat it.
Once we started looking we found sugar in the most amazing places: tortillas, sausages, chicken broth, salad dressing, cold cuts, crackers, mayonnaise, bacon, bread, and even baby food. Why add all of this sugar? To make these items more palatable, add shelf life, and make packaged food production ever cheaper.
Call me crazy, but avoiding added sugar for a year struck me as a grand adventure. I was curious as to what would happen. I wanted to know how hard it would be, what interesting things could happen, how my cooking and shopping would change. After continuing my research, I was convinced removing sugar would make us all healthier. What I didn’t expect was how not eating sugar would make me feel better in a very real and tangible way.

A Sugar-Free Year Later

It was subtle but noticeable: the longer I went on eating without added sugar, the better and more energetic I felt. If I had any doubts about the connection, what happened next erased them: my husband’s birthday.
During our year of no sugar, one of the rules was that, as a family, we could have one actual sugar-containing dessert per month; if it was your birthday, you got to choose the dessert. By the time September rolled around we noticed our palates starting to change, and slowly, we began enjoying our monthly “treat” less and less.
But when we ate the decadent multi-layered banana cream pie my husband had requested for his birthday celebration, I knew something new was happening. Not only did I not enjoy my slice of pie, I couldn’t even finish it. It tasted sickly sweet to my now sensitive palate. It actually made my teeth hurt. My head began to pound and my heart began to race; I felt awful.
It took a good hour lying on the couch holding my head before I began to recover. “Geez,” I thought, “has sugar always made me feel bad, but because it was everywhere, I just never noticed it before?”
After our year of no sugar ended, I went back and counted the absences my kids had in school and compared them to those of previous years. The difference was dramatic. My older daughter, Greta, went from missing 15 days the year before to missing only two.
Now that our year of no sugar is over, we’ll occasionally indulge, but the way we eat it is very different. We appreciate sugar in drastically smaller amounts, avoid it in everyday foods (that it shouldn’t be in in the first place), and save dessert for truly special occasions. My body seems to be thanking me for it. I don’t worry about running out of energy. And when flu season comes around I somehow no longer feel the urge to go and hide with my children under the bed. But if we do come down with something, our bodies are better equipped to fight it. We get sick less and get well faster. Much to my surprise, after our no-sugar life, we all feel healthier and stronger. And that is nothing to sneeze at.
Eve O. Schaub is the author of Year of No Sugar: A Memoir. She holds a BA and a BFA from Cornell University, and an MFA from the Rochester Institute of Technology. Her personal essays have been featured many times on the Albany, New York, NPR station WAMC. You can join Schaub’s family and take your own Day of No Sugar Challenge on April 9, 2014.

Stephen Halbrook’s Masterful History of Nazi Gun Control Measures

By Clayton E. Cramer
http://pjmedia.com/
April 11, 2014


Over the last twenty years, Stephen P. Halbrook’s scholarly work on gun control has become more polished, nuanced, and methodical. His latest book, Gun Control in the Third Reich: Disarming the Jews and “Enemies of the State,” [1] is an astonishing piece of scholarship: complete, careful, and thoughtful.

For a very long time, Americans opposed to gun control have used the example of Nazi Germany’s gun control laws as a warning of what might happen here. Regrettably, not everyone has been careful enough. A quote purportedly from Hitler about gun control, beginning “1935 will go down in history,” used to float around the Internet; it does not appear so often anymore because a number of people, including me, demonstrated its falsity.

Part of what allowed bogus quotes like this to survive was that few historians had bothered to research the real history of the Nazis and gun control. Jews for the Preservation of Firearms Ownership did a nice job of obtaining and translating the 1928 and 1931 Weimar Republic gun control laws and the 1938 Nazi gun control law [2] some years ago. But as useful as those translations are, they simply do not compare to what Halbrook has done with his new book.

Halbrook traces the development of German gun control law from the collapse of the Kaiser’s government in 1918 through the post-war chaos, the Weimar Republic’s efforts to prevent the violence of the Nazis and the Communists in the 1920s and early 1930s, and then the ways in which the Nazis used those laws to disarm anyone they regarded as “enemies of the state” (which of course included all Jews).

In doing so, Halbrook makes use of an astonishing set of sources. His secondary sources are impressive, ranging from scholarly histories of the period, such as The Berlin Police Force in the Weimar Republic, to specialized works that you might not even expect to exist, such as Der Weg des Sports in die nationalsozialistische Diktatur (The Way of Sport in the National Socialist Dictatorship). Halbrook goes far beyond that, however, with an impressive collection of primary sources, including diaries by people who lived through the time, surviving police records, internal government memos, and court decisions.

Part of what makes a book like this possible is part of what made it so easy to convict Nazi war criminals: the German penchant for documenting everything, and the difficulty of making those documents disappear when it became apparent that the war was lost.

There are many parallels between the laws passed in the Weimar Republic and by the Nazis, and current gun control laws and proposals. For example: the nature and duration of the records that gun manufacturers and dealers were required to keep (p. 135); issuance of gun carry licenses “only to persons considered reliable and only if a need is proven” (p. 107); the use of relatively rare incidents to justify widespread disarmament of “enemies of the state” (p. 155); and the prohibition of firearms with features not generally used “for hunting or sporting purposes” (p. 134).

This is not to say that gun control advocates in America today are planning a police state, concentration camps, and mass extermination. As Halbrook points out, when the Weimar Republic pursued its campaign of strict licensing and registration, its officials were genuinely trying to deal with a serious violence problem. They picked a solution that did not work, as some police officials of the time pointed out, and some German states refused to go along with the Weimar Republic’s mandatory registration regulations in 1931 (pp. 34-38).

The problem was that, as some pointed out when mandatory registration was under discussion in 1931, “in chaotic times, the lists of firearms owners would fall into the wrong hands, allowing unauthorized persons to seize arms and use them to commit unlawful acts” (p. 29). The lists did fall into the wrong hands — the Nazi government, after the 1933 elections. And they did use them to seize arms, especially from Jews and other “enemies of the state.”

A common argument of gun control advocates against the insurrectionary theory of the Second Amendment is that there was no practical way for opponents of the Nazi government to overturn it [3]. What value could rifles and pistols have in Nazi Germany?

It is indeed possible that an armed German population in 1934 or 1935 would not have made that much of a difference. Hitler was very popular with most Germans, and even German Jews regarded the continual legal disabilities and extralegal injuries imposed on them as insults, best accepted because the alternative was worse. Still, when the time came that German Jews started to be loaded up on railroad cars and shipped to concentration camps, the writing was on the wall, and more than a few knew that they had little chance of getting out of this alive. But by that point, the Nazi government had used the registration lists dating from the Weimar Republic to disarm most German Jews.

Perhaps rifles and pistols in the hands of Germany’s Jews would not have seriously delayed the Holocaust, but the example of the Warsaw Ghetto, where Polish Jews with ten rifles and a few dozen pistols [4] delayed the German Army for six weeks, suggests otherwise.

How could Germany’s Jews being armed for resistance have made anything worse?


URLs in this post:

[2] 1938 Nazi gun control law: http://jpfo.org/filegen-n-z/NaziLawEnglish.htm
[3] was no practical way for opponents of the Nazi government to overturn it: http://baltimorepostexaminer.com/dump-second-amendment-privacy-amendment/2014/02/03

Did Jesus Have a Wife? No.

April 10, 2014
A fragment of papyrus, known as the “Gospel of Jesus’s Wife,” has been analyzed by professors at Columbia University, Harvard University and the Massachusetts Institute of Technology, who reported that it resembled other ancient papyri. Credit: Karen L. King/Harvard University, via Reuters

Reports today of the authenticity of a fragment of the so-called “Gospel of Jesus’s Wife,” which was originally unveiled in 2012 by the Harvard Divinity School professor Karen L. King, are sure to cause a rash of news stories and opinion pieces on whether Jesus was married.  So here’s my answer: No.
Before I talk about the reasons why almost every New Testament scholar believes that Jesus was unmarried, let me say that my faith does not rise or fall on whether Jesus was married.  The Christian faith is not based on Jesus’s celibacy, but on the Incarnation and the Resurrection.  In short, a married man healing the sick, stilling storms and raising the dead is just as impressive as an unmarried man doing so.  More to the point, if a married man himself rises from the dead after being in a tomb for three days, I would be following him. Married or unmarried, Jesus is still the Son of God.
Also, before I talk about the reasons scholars believe that he was almost certainly single, I should point out that the manuscript in question was written long after the canonical Gospels (Matthew, Mark, Luke and John) were written.  The fragment is most likely from the fourth to the eighth century.  By contrast, the earliest Gospel, Mark, was written in AD 75, only 40 years after Jesus’s time on earth.  The fragment in question, at its earliest, was written three hundred years after the canonical Gospels.  In general, it’s better to rely on sources closer to the actual events, particularly, as in the case of Mark, when there were people still alive who had known Jesus, and could presumably have corrected any inaccuracies in Mark’s text.  Three hundred years later, people take many more liberties with the story.
Overall, it’s more likely that Jesus was unmarried.  How do we know this?  Here’s what I say in my new book, Jesus: A Pilgrimage:
Now, it is almost certain that Jesus was celibate.  How do we know this?  For one thing, the Gospels talk about Jesus’s mother and “brothers and sisters” several times, so if he had a wife it would be odd not to mention her.  In his magisterial book A Marginal Jew, John Meier, a professor of New Testament at Notre Dame and scholar of the “historical Jesus,” suggests that being unmarried was seen as undesirable for most rabbis of the time; even though Jesus was not technically a rabbi, it would have been strange for the Gospel writers to concoct a story that he was celibate if he was in fact married. The Gospels’ silence about a wife and children likely means that Jesus had neither.
What are some possible reasons for Jesus’s remaining unmarried?  He may have intuited that once he started his ministry it would be short or even meet a disastrous end.  As a Jew, he knew the fate of other prophets.  Jesus may have foreseen the difficulty of caring for a family while being an itinerant preacher.  Or perhaps his celibacy was another manifestation of his single-hearted commitment to God.  After sifting through the facts, Meier lands on the last reason: “The position that Jesus remained celibate on religious grounds [is] the more probable hypothesis.”
Other theories, in which Mary Magdalene is proposed as Jesus’s wife, are also rather far-fetched.  Most women in the New Testament, including the women disciples, are referred to, by the convention of their time, as “the wife of” or “the mother of” someone.  In a patriarchal world, they were most often identified through their associations with either a husband or a son (or sons).  So we read of women like “Mary, the wife of Clopas” and “Joanna, the wife of Chusa.”  Consequently, if Mary Magdalene had been married to Jesus, she would more likely be called not “Mary of Magdala” but “Mary, the wife of Jesus.”
Also, in terms of the Passion narratives, which Christians will read during Holy Week, the Gospels place several important women at the scene of Jesus’s death, at the foot of the Cross.  The Gospel of John, written in roughly AD 100, reports the following women as present at the Crucifixion: “his [Jesus’s] mother, and his mother’s sister, Mary the wife of Clopas and Mary Magdalene.”  An even earlier Gospel, Matthew, written around AD 85, says that there were “many women,” and then lists those the Gospel writer considers important: “Mary Magdalene, and Mary the mother of James and Joseph, and the mother of the sons of Zebedee.”  Were Jesus married, not mentioning “the wife of Jesus” in either the stories of the Crucifixion or the Resurrection would be odd indeed.
Nearly every scholar believes that Jesus was unmarried.  So do I.  As I said, my faith does not rest on his being unmarried, but my reason tells me that he was.

Here's What I Would Have Said at Brandeis

We need to make our universities temples not of dogmatic orthodoxy, but of truly critical thinking.

By Ayaan Hirsi Ali
April 11, 2014
On Tuesday, after protests by students, faculty and outside groups, Brandeis University revoked its invitation to Ayaan Hirsi Ali to receive an honorary degree at its commencement ceremonies in May. The protesters accused Ms. Hirsi Ali, an advocate for the rights of women and girls, of being "Islamophobic." Here is an abridged version of the remarks she planned to deliver.
One year ago, the city and suburbs of Boston were still in mourning. Families who only weeks earlier had children and siblings to hug were left with only photographs and memories. Still others were hovering over bedsides, watching as young men, women, and children endured painful surgeries and permanent disfiguration. All because two brothers, radicalized by jihadist websites, decided to place homemade bombs in backpacks near the finish line of one of the most prominent events in American sports, the Boston Marathon.
All of you in the Class of 2014 will never forget that day and the days that followed. You will never forget when you heard the news, where you were, or what you were doing. And when you return here, 10, 15 or 25 years from now, you will be reminded of it. The bombs exploded just 10 miles from this campus.

I read an article recently that said many adults don't remember much from before the age of 8. That means some of your earliest childhood memories may well be of that September morning simply known as "9/11."
You deserve better memories than 9/11 and the Boston Marathon bombing. And you are not the only ones. In Syria, at least 120,000 people have been killed, not simply in battle, but in wholesale massacres, in a civil war that is increasingly waged across a sectarian divide. Violence is escalating in Iraq, in Lebanon, in Libya, in Egypt. And far more than was the case when you were born, organized violence in the world today is disproportionately concentrated in the Muslim world.
Another striking feature of the countries I have just named, and of the Middle East generally, is that violence against women is also increasing. In Saudi Arabia, there has been a noticeable rise in the practice of female genital mutilation. In Egypt, 99% of women report being sexually harassed and up to 80 sexual assaults occur in a single day.
Especially troubling is the way the status of women as second-class citizens is being cemented in legislation. In Iraq, a law is being proposed that lowers to 9 the legal age at which a girl can be forced into marriage. That same law would give a husband the right to deny his wife permission to leave the house.
Sadly, the list could go on. I hope I speak for many when I say that this is not the world that my generation meant to bequeath yours. When you were born, the West was jubilant, having defeated Soviet communism. An international coalition had forced Saddam Hussein out of Kuwait. The next mission for American armed forces would be famine relief in my homeland of Somalia. There was no Department of Homeland Security, and few Americans talked about terrorism.
Two decades ago, not even the bleakest pessimist would have anticipated all that has gone wrong in the part of the world where I grew up. After so many victories for feminism in the West, no one would have predicted that women's basic human rights would actually be reduced in so many countries as the 20th century gave way to the 21st.
Today, however, I am going to predict a better future, because I believe that the pendulum has swung almost as far as it possibly can in the wrong direction.
When I see millions of women in Afghanistan defying threats from the Taliban and lining up to vote; when I see women in Saudi Arabia defying an absurd ban on female driving; and when I see Tunisian women celebrating the conviction of a group of policemen for a heinous gang rape, I feel more optimistic than I did a few years ago. The misnamed Arab Spring has been a revolution full of disappointments. But I believe it has created an opportunity for traditional forms of authority—including patriarchal authority—to be challenged, and even for the religious justifications for the oppression of women to be questioned.
Yet for that opportunity to be fulfilled, we in the West must provide the right kind of encouragement. Just as the city of Boston was once the cradle of a new ideal of liberty, we need to return to our roots by becoming once again a beacon of free thought and civility for the 21st century. When there is injustice, we need to speak out, not simply with condemnation, but with concrete actions.
One of the best places to do that is in our institutions of higher learning. We need to make our universities temples not of dogmatic orthodoxy, but of truly critical thinking, where all ideas are welcome and where civil debate is encouraged. I'm used to being shouted down on campuses, so I am grateful for the opportunity to address you today. I do not expect all of you to agree with me, but I very much appreciate your willingness to listen.
I stand before you as someone who is fighting for women's and girls' basic rights globally. And I stand before you as someone who is not afraid to ask difficult questions about the role of religion in that fight.
The connection between violence, particularly violence against women, and Islam is too clear to be ignored. We do no favors to students, faculty, nonbelievers and people of faith when we shut our eyes to this link, when we excuse rather than reflect.
So I ask: Is the concept of holy war compatible with our ideal of religious toleration? Is it blasphemy—punishable by death—to question the applicability of certain seventh-century doctrines to our own era? Both Christianity and Judaism have had their eras of reform. I would argue that the time has come for a Muslim Reformation.
Is such an argument inadmissible? It surely should not be at a university that was founded in the wake of the Holocaust, at a time when many American universities still imposed quotas on Jews.
The motto of Brandeis University is "Truth even unto its innermost parts." That is my motto too. For it is only through truth, unsparing truth, that your generation can hope to do better than mine in the struggle for peace, freedom and equality of the sexes.
Ms. Hirsi Ali is the author of "Nomad: My Journey from Islam to America" (Free Press, 2010). She is a fellow at the Belfer Center of Harvard's Kennedy School and a visiting fellow at the American Enterprise Institute.

The IRS Scandal Comes Into Focus


House Ways and Means Committee Chairman Dave Camp lays out damning evidence of Lois Lerner's targeting of conservative groups.

By Kimberley Strassel
April 11, 2014
Nearly a year into the IRS scandal, we still don't know exactly what happened—though we are finally getting an inkling. That's thanks to the letter House Ways and Means Chairman Dave Camp sent this week to the Justice Department recommending a criminal probe of Lois Lerner.
The average citizen might be dizzied by the torrent of confusing terms—BOLO lists, Tigta, 501(c)(4)—and the array of accusations that have made up this IRS investigation. Mr. Camp's letter takes a step back to remind us why this matters, even as it provides compelling new information that goes to motive and method—and clarifies some of the curious behavior of Democrats during the investigation.
Motive: Republicans began this investigation looking for a direct link between the White House and IRS targeting. The more probable explanation all along was that Ms. Lerner felt emboldened by Democratic attacks against conservative groups to do what came naturally to her. We know from the record that she disdained money in politics. And we know from her prior tenure at the Federal Election Commission that she had a particular animus against conservative organizations.
As the illuminating timeline accompanying the Camp letter shows, Ms. Lerner's focus on shutting down Crossroads GPS came only after Obama adviser David Axelrod listed Crossroads among "front groups for foreign-controlled companies"; only after Senate Democrats Dick Durbin, Carl Levin, Chuck Schumer and others demanded the IRS investigate Crossroads; only after the Democratic Congressional Campaign Committee launched a website to "expose donors" of Crossroads; and only after Obama's campaign lawyer, Bob Bauer, filed a complaint with the Federal Election Commission about Crossroads.
The information in Mr. Camp's letter shows that Ms. Lerner sprang to action following a January 2013 meeting with Democracy 21, a campaign-finance outfit petitioning for a crackdown on Crossroads and the liberal big-dollar Priorities USA. (She never touched Priorities, run by former Obama aides.) The Camp outline suggests cause and effect, and that's new.
Lois Lerner at a congressional hearing, May 22, 2013. Jonathan Ernst/Reuters
Method: The general prohibition on releasing taxpayer information has meant that—up until Ways and Means voted Wednesday to release this info—it was impossible to know what precise actions Ms. Lerner had taken against whom. We now know that she took it upon herself to track down the status of Crossroads, to give grief to an IRS unit for not having audited it, to apparently direct another unit to deny it tax-exempt status, and to try to influence the appeals process.
We know, too, that Ms. Lerner did some of this in contravention of IRS policy, for instance involving herself in an audit decision that was supposed to be left to a special review committee. We have the story of a powerful bureaucrat targeting an organization and circumventing IRS safeguards against political or personal bias. That ought to mortify all members of Congress. That Democrats seem not to care gets to another point.
Aftermath: Democrats quickly dropped any feigned outrage over IRS targeting and circled the wagons around the agency. Why? The targeting was outrageous, the public was fuming, and nobody likes the IRS. Joining with Republicans would have been both right and popular.
That is, unless Democrats are worried. As the Camp timeline and details show, the IRS responded to liberal calls to go after conservative groups. Democrats weren't just sending letters. Little noticed in the immediate aftermath of the IRS scandal was a letter sent May 23, 2013, by Carl Levin and (Republican) John McCain to the new acting director of the IRS disguised as an expression of outrage over IRS targeting. Artfully hidden within it was Mr. Levin's acknowledgment that his subcommittee on investigations had for a full year been corresponding and meeting with IRS staff (including Ms. Lerner) to ask "why it was not enforcing the 501(c)(4) statute."
What was said in the course of that year? How much specific information was demanded on conservative groups, and how many demands were dispensed on how to handle them? Good questions.
In 2012, both the IRS and Democratic Rep. Elijah Cummings were targeting the group True the Vote. We now have email showing contact between a Cummings staffer and the IRS over that organization. How much more contact was there? It's one thing to write a public letter calling on a regulator to act. It's another to haul the regulator in front of your committee, or have your staff correspond with or pressure said regulator, with regard to ongoing actions. That's a no-no.
The final merit of Mr. Camp's letter is that he's called out Justice and Democrats. Mr. Camp was careful in laying out the ways Ms. Lerner may have broken the law, with powerful details. Democrats can't refute the facts, so instead they are howling about all manner of trivia—the release of names, the "secret" vote to release taxpayer information. But it remains that they are putting themselves on record in support of IRS officials who target groups, circumvent rules, and potentially break the law. That ought to go down well with voters.
Write to kim@wsj.com

Thursday, April 10, 2014

Today's Tune: Lindsey Buckingham ~ Love Runs Deeper (Live 2008)

Book Review: ‘John Wayne: The Life and Legend’ by Scott Eyman

Has any actor ever dominated American cinema more completely than John Wayne? For more than four decades and 162 feature films, he filled the screen and our cultural fantasies. For 25 of 26 years, from 1949 through 1974, he was one of the top 10 box office stars, and 35 years after his death from cancer at 72, he still makes the list of top five all-time favorite actors. Along the way he became a symbol of American masculinity — the emphatic authority figure defending our values with fists and guns — “an innocent man in primary colors,” in the words of Scott Eyman’s entertaining new biography.
But what’s most striking about Eyman’s thorough and sympathetic portrait is the cloud of sadness and regret that hangs over its seemingly unconquerable protagonist. Wayne emerges as a restless, melancholy figure, always struggling for more respect from his critics, more time with his family, more money and better health.
He was born Marion Morrison in Iowa in 1907, but his parents moved him and his younger brother to Southern California seven years later in a vain search for prosperity. It was a bitterly divided household where money was scarce, with an affectionate but feckless father and a chilly, hypercritical mother who withheld affection from her oldest son throughout his life. His most cherished mentor was the great film director John Ford, whom he met after dropping out of the University of Southern California when a shoulder injury cost him his football scholarship. Ford constantly mocked and derided him, even after Wayne’s fame and fortune far outstripped his own.
For all his extraordinary success, it was Wayne’s failures that haunted him, in Eyman’s account. The devoted family man crashed through three marriages and reaped troubled relationships with several of his seven children. The self-styled super-patriot felt shame and guilt for dodging military service during World War II. The box office champion never felt financially secure because of bad business deals and unfaithful friends, and felt compelled to keep on working even as his health faded and his appeal diminished. Although beloved and admired by millions, Eyman writes, “he always seemed surprised and pleased by praise, perhaps because he received so little of it for so long.”
We all think we know John Wayne, in part because he seemed to be playing himself in movie after movie. Yet as Eyman carefully lays out, “John Wayne” was an invention, a persona created layer by layer by an ambitious young actor. Wayne did not write his parts, but he invented the character who played them.
“That guy you see on the screen isn’t really me,” Wayne once said. “I’m Duke Morrison, and I never was and never will be a film personality like John Wayne. I know him well. I’m one of his closest students. I have to be. I make a living out of him.”
For many of us, our image of Wayne was forged in the 1960s and early ’70s, when he too often seemed a lumbering, overweight, toupee-wearing self-parody spouting simplistic, right-wing views and playing the same role over and over in largely second-rate Westerns. But before he became John Wayne Inc., Wayne was an actor of unusual authority who created a character of rough-hewn charisma, vulnerability and physical menace. Intuitively but brilliantly, the self-reliant character he created was the cinematic successor to a range of American frontier heroes, real and imaginary, stretching back to Daniel Boone, Davy Crockett and James Fenimore Cooper’s Hawkeye. Or as Eyman puts it, “He came to embody a sort of race memory of Manifest Destiny, the nineteenth century as it should have been.”
Eyman narrates the familiar story of Wayne’s early days grinding out B Westerns for second-rate “Poverty Row” film studios such as Monogram Pictures and Republic Productions, all the while building his unique film persona. His brilliance was in understanding that movies, with their close-ups, were creating a new intimacy between actor and audience and a demand for authenticity. Modeling himself after men he admired, like Western star Harry Carey, character actor Paul Fix and stuntman Yakima Canutt, Wayne built his own character. The pigeon-toed walk, the hesitant vocal delivery, the slow-burning smile when angry — all were acquired traits.
Ford saw his potential and eventually cast him in his breakthrough role as the Ringo Kid in “Stagecoach” (1939), liberating Wayne after nearly a decade of low-paying B Westerns. A bitter, lifelong alcoholic, Ford continued to verbally abuse Wayne throughout their long partnership. But he also came to recognize Wayne’s greatness. “Ford looked at Duke Morrison and saw John Wayne,” Eyman writes, “a capacity for strength and violence that coexisted with a dangerous beauty.”
Eyman recounts those successes and the building of the Wayne star machine, then moves on to his more ambitious failures. The worst was Wayne’s decade-long effort to make “The Alamo” (1960), an epic film that summed up his belief in the American character. Sadly the end result was too long, too preachy and — unusual for Wayne — too wordy. It ultimately cost him $2 million of his own money. “Everybody made money from it but me,” he would complain.
The other debacle was “The Green Berets” (1968), a piece of pro-Vietnam War propaganda that did well at the box office but permanently alienated a generation of young people who had little use for Wayne’s particular blend of piety and patriotism. (It’s fair to note that he had little use for them, either.)
When the great directors like Ford and Howard Hawks passed from the scene, Wayne increasingly took to micro-managing his films, overwhelming his directors and implicitly demanding “that his parts be modeled on the man he had become.” It wasn’t a pretty sight: He smoked up to six packs of cigarettes a day, consumed heroic amounts of food and drink, and became increasingly impatient and demanding of those around him. When he was home and not working, he would rise at dawn, then start waking up family members because he didn’t like to be alone. After cancer surgery that cost him part of a lung, he wheezed and lumbered his way through countless second-rate Westerns while turning down meatier roles like “Dirty Harry” that didn’t fit his self-image. And he began to mistake himself for the fictional character he had created.
Still, there were times when he knew who he really was — and wasn’t. When first told he had lung cancer, he recalled, “I sat there, trying to be John Wayne.”
There is something terribly sad about that.
Glenn Frankel is the author of “The Searchers: The Making of an American Legend.”

The Shame of Brandeis

And of a culture that has lost its way 

Wednesday, April 09, 2014

Unpacking Progressivity

Most Americans don’t understand how “progressive” taxation works. 


“β = s/g”

That’s the message of the hottest book in policy circles right now, Capital in the Twenty-First Century, a 700-page tome by the French economist Thomas Piketty. As Piketty explains it, the formula says that in the long run the capital/income ratio equals the savings rate divided by the growth rate. “β = s/g” is a problem, according to Piketty, and one so grave that he recommends making the progressive tax code even more progressive by raising our top marginal tax rate to 80 percent. Not content with further “progressivizing” the U.S. income tax, Piketty also advocates a global progressive tax on wealth per se.
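
To see what the formula claims, plug in illustrative numbers of the sort Piketty himself uses: a savings rate s of 12 percent and a growth rate g of 2 percent. On the formula’s own terms, capital accumulates until it is worth six years of national income:

\[
\beta = \frac{s}{g} = \frac{0.12}{0.02} = 6
\]

Hold saving fixed and halve growth to 1 percent, and β doubles to 12, which is why slow-growth economies loom so large in Piketty’s argument.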

“A progressive levy on individual wealth would reassert control over capitalism in the name of the general interest while relying on the forces of private property and competition,” Piketty writes confidently.

Private property will bolt to the moon if such a tax becomes global law. Most Americans sense this, but hesitate to argue back because they are not sure they understand Piketty’s formula. It’s not clear that most Americans even understand progressivity.

And that’s not an accident. You don’t have to be Machiavelli, to name a clearer foreign writer, to see that the incomprehensibility of such concepts is not incidental — it is necessary to the grander process of redistribution.

Consider the history of progressivity, first introduced on a national scale with the income tax in 1913. The idea was, and still is, that tax rates go up like stair steps, and that the last dollar earned by a wealthy man or woman is taxed at a different, higher rate than the first dollar. The members of Congress who wrote that first income-tax law set their top progressive rate at a level then deemed sky-high, 7 percent.

Over the intervening hundred years, studies have shown that people generally do think that the greater the wealth, the more dollars wealthy people should pay in tax, proportionally. But that is not a progressive rate structure; that is a flat tax. A progressive tax raises the rate itself, not merely the dollar amount, as you earn more.

Nor are many people aware that under a progressive structure the last dollar is taxed at a different rate from the first dollar. The top marginal rate is not necessarily the average rate. In the early 1980s, scholar Karlyn Keene found that many Americans, when interviewed, thought flat taxes fair. Before Keene, Walter Blum and Harry Kalven at the University of Chicago studied attitudes toward progressivity and its functions and came away, despite their liberal predilections, concluding that the case for progressivity is “uneasy.”
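
The gap between the top marginal rate and the average rate is easier to see in arithmetic than in prose. Here is a minimal sketch in Python, with hypothetical stair-step brackets of 10, 25, and 40 percent (illustrative only, not any actual schedule): each rate applies solely to the dollars that fall inside its bracket, so the last dollar is taxed at 40 percent while the average rate on all dollars stays well below that.

def progressive_tax(income, brackets):
    # brackets: (threshold, rate) pairs sorted by threshold; each rate
    # applies only to income between its threshold and the next one.
    tax = 0.0
    for i, (threshold, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > threshold:
            tax += (min(income, upper) - threshold) * rate
    return tax

# Hypothetical stair-step schedule: 10% to $50,000, 25% to $200,000, 40% above.
BRACKETS = [(0, 0.10), (50_000, 0.25), (200_000, 0.40)]

income = 500_000
tax = progressive_tax(income, BRACKETS)
print(f"tax owed: ${tax:,.0f}")                # $162,500
print(f"average rate: {tax / income:.1%}")     # 32.5%, despite the 40% top marginal rate
print(f"flat 20% tax: ${0.20 * income:,.0f}")  # $100,000: the same share for everyone

On these toy numbers, the $500,000 earner’s last dollar is taxed at 40 percent, but the average rate works out to 32.5 percent; the flat alternative takes the same proportional share from every earner, which is precisely the structure Keene’s interviewees judged fair.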

Politicians, and the economists behind them, simply played off citizens’ ignorance. The simple early code of 1913 became complex and, yes, more onerous.

The real question should be why Americans allow themselves to be intimidated, especially when it comes to progressivity.

Vanity of two sorts provides answers. Most Americans are unwilling to concede that they may not understand or be comfortable with long formulas and complex economic ideas. So, like the Enron audit committee, they simply nod and go along.

The second vanity involves not intelligence but a kind of Puritan pretension. No American wants to be caught appearing unfair, even if in the most fleeting snapshot. “Progressivity” sounds like “progress.” Nobody wants to be seen opposing progress, even if that progress is regress and unfair to boot.

In any case, that willed American ignorance is the single greatest reason our progressive income-tax rates have moved, at times, into the 90 percent range, up from that original 7 percent.

Worse, the attitude makes progressivity hard to undo. When you cut taxes for all in a progressive rate structure, the rich necessarily get a larger tax break because they pay a greater share of the taxes. But “larger tax breaks for the rich” are impossible to sell. The redistributive corollary involves benefits for the poor: this week Paul Ryan is getting scourged because his budget cuts affect the poor more than the rich. That is because the poor get more of the benefits in the first place.
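
The arithmetic follows directly from the bracket sketch above, reusing its hypothetical progressive_tax and BRACKETS:

# Continuing the hypothetical sketch above: trim every bracket rate by one-tenth.
CUT = [(threshold, rate * 0.9) for threshold, rate in BRACKETS]

for income in (50_000, 500_000):
    saved = progressive_tax(income, BRACKETS) - progressive_tax(income, CUT)
    print(f"income ${income:,}: tax cut worth ${saved:,.0f}")

The identical one-tenth rate cut is worth $500 to the $50,000 earner and $16,250 to the $500,000 earner, not because the cut favors anyone, but because the higher earner was paying 32.5 times as much tax to begin with.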

The first step to overcoming such intimidation is to encourage economists to write and talk in plain language, English or French. The second is to counter with policies in plain English, plain formulas, and plain titles. A book like Piketty’s may be called Capital. A more accurate title might be Expropriation.

— Amity Shlaes chairs the board of the Calvin Coolidge Presidential Foundation.