The Individual vs. The Community

Your personal political philosophy is highly influenced by how you view the way individuals relate to their larger community.

For example:

  • Conservatives view the individual as the prime mover. The individual is the engine of the economy and the engine of the community as a whole.
  • Liberals view the aggregate of individuals – the community itself – as the prime mover. The community that individuals create together is the engine that moves us forward.

This belief shapes the extent to which you believe the community (the government, in some form) should influence the actions of the individual:

  • Conservatives believe in individual liberty above all else. Leave the individual to themselves, and they will work things out to the general betterment of the larger community – their individual actions will work toward advancing the aggregate.
  • Liberals believe that the community should assist individuals to achieve greater things and that individuals themselves tend to act in their own self-interest to the detriment of the community. The community has a responsibility to enforce the fair rules of the game.

So a person’s political philosophy often comes down to where their natural focus rests: on the individual or on the community.

Last year, I read The Righteous Mind: Why Good People Are Divided by Politics and Religion by Jonathan Haidt, which was an examination of exactly this concept – why people can have such profoundly different views of politics. What Haidt stated was something similar to what I said above – it largely comes down to how people view the individual in relationship to the community.

In fact, Haidt recounted a study which revealed how fundamentally different this view can be, based on your personal culture or background. Given a picture of a living room, someone from Culture/Background A might see the room itself in aggregate (“this is a living room”; the community), while someone from Culture/Background B might see the things that comprise the room (“this is a couch, a table, and a lamp in a room”; the individuals). One person immediately extrapolates the individuals to a larger community and that’s the thing he’s looking at. The other sees the individuals and stops there – he is, of course, aware that they comprise a living room together, but that’s not what he’s looking at.

So, do we view ourselves first as independent actors operating in the world, or do we think of the world first as something we are a part of?

These subconscious inclinations are ingrained in us as children and explain why some countries accept things like single-payer health care as natural and completely reasonable, and other countries damn-near go to war over it (ahem, us).

Effort and Potential

Almost everybody has heard of “The 10,000 Hour Rule,” which says it takes 10,000 hours of practice to master anything.  I actually read the original study (PDF) last year.

But I’ve often wondered about the limits of this theory. Are there people who simply will never master something, no matter how much they practice?

I wondered this quite a bit when trying to teach my late mother how to use her computer. We managed the basics, sure, but I found myself wondering if I could make my mother an expert computer user, or even a programmer, if I had to.  Would she have the tenacity for it?  The ability to think abstractly and in analogy? That just wasn’t the way my mother’s mind worked.

And what of people who just, well, aren’t smart?  We hate to talk in terms of pure, genetic intelligence, but it’s silly to pretend this doesn’t exist. Some people are just born smarter than other people. Somehow, it’s acceptable to say someone is “naturally athletic” (and thus imply the opposite – some people are not naturally athletic), but it’s rude, even sinister, to say some people are just not naturally intelligent.

Uncomfortable or not, it’s sadly true. There are people who are just not gifted with good memory, advanced reasoning, abstract thought, pattern recognition, etc.  These are the things that make up what we’d call “intelligence.” They exist in people in varying degrees – some people have a lot of them, some people have very little.

At the very lower end of the intelligence scale are people for whom life is going to be difficult. These are people who can have the biggest hearts in the world, the most willpower, and a genuine desire to better their circumstances, but there’s an upper limit to what they might be able to accomplish.  For them, holding down a job stocking shelves at Walmart might be a huge victory.  (And, sure, there are people who suffer from actual mental disabilities, but hovering right above this group are people who might not have a diagnosable mental defect, but who clearly just aren’t all there.)

Do I think these people can absolutely better their circumstances through hard work and persistence? Yes, absolutely. Are they ever going to become college professors or make a six figure income?  Probably not.  Is applying the 10,000 Hour Rule going to allow them to become an expert at anything they choose?  Nope.

(It feels elitist even writing this.  But don’t shoot the messenger.)

A study this year called The 10,000 Hour Rule into question:  Deliberate Practice and Performance in Music, Games, Sports, Education, and Professions

We found that deliberate practice explained 26% of the variance in performance for games, 21% for music, 18% for sports, 4% for education, and less than 1% for professions. We conclude that deliberate practice is important, but not as important as has been argued.

So, what is the rest of the difference?  Likely some form of natural ability.
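
If “26% of the variance” sounds abstract: for a single predictor, variance explained is essentially the squared correlation between practice and performance. Here’s a minimal sketch in Python with invented numbers (nothing from the actual study) showing how a practice/performance correlation of 0.5 works out to practice “explaining” about a quarter of the differences between people:

```python
import numpy as np

# Toy data, invented purely for illustration (not the study's data).
rng = np.random.default_rng(0)
n = 10_000

practice = rng.normal(0, 1, n)  # standardized hours of deliberate practice
ability = rng.normal(0, 1, n)   # unobserved "everything else"

# Performance is part practice, mostly everything else:
performance = 0.5 * practice + np.sqrt(1 - 0.5**2) * ability

r = np.corrcoef(practice, performance)[0, 1]
print(f"Variance in performance explained by practice: {r**2:.0%}")  # ~25%
```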

Malcolm Gladwell popularized The 10,000 Hour Rule in his book Outliers.  But even he has since said that he was misunderstood.  In a Reddit AMA, he said this:

The point is simply that natural ability requires a huge investment of time in order to be made manifest.

So, practice only amplifies natural ability, which has to exist first.

This article in Slate essentially echoes the same point, and puts a political spin on it: The 10,000 Hour Rule Is Wrong and Perpetuates a Cruel Myth

Societal inequality is thus justified on the grounds that anyone who is willing to put in the requisite time and effort can succeed and should be rewarded with a good life, whereas those who struggle to make ends meet are to blame for their situations and should pull themselves up by their own bootstraps. If we acknowledge that people differ in what they have to contribute, then we have an argument for a society in which all human beings are entitled to a life that includes access to decent housing, health care, and education, simply because they are human. Our abilities might not be identical, and our needs surely differ, but our basic human rights are universal.

Essentially, if we blame everyone for their situation – as if their current circumstances are based solely on their effort and nothing else – then we can justify economic and social inequality.  If we acknowledge that people are successful in some degree because they were gifted with great genes, then it gets tougher.

As with everything, there’s a fine line.  It’d be wonderful if we could easily determine to what percentage someone is living up to their potential, where we consider “potential” to be the highest possible circumstance they could achieve with the natural gifts they’ve been given. Then this would be easy – someone living up to 100% of their potential is to be admired no matter what that potential looks like on an absolute scale.  Someone at 10% of their potential is to be scorned, even if Mommy and Daddy paid for Harvard and a Mercedes.

In the end, are we looking for results, or genuine effort (meaning effort in an absolutely honest attempt to get better, not just effort to make it look like you’re trying hard)?  In my mind, genuine effort is the correct yardstick.  I have great admiration for someone who has overcome any limitation over which they had no control – be it intelligence, athletic ability, childhood upbringing, physical catastrophe of some kind – and achieved some level of success in spite of that.

But this isn’t easy to measure. We don’t know everyone’s story, so I have no way of knowing that the weathered lady riding the bus is actually doing phenomenally well despite the fact that she was sexually abused as a child, has never been naturally intelligent, and had a husband who ran off leaving her with three kids to feed.

Instead, we admire the idiot in the sharp suit and BMW, not knowing that he dropped out of three colleges, got the BMW for Christmas from his parents last year, and is managing to hide a thousand-dollar-a-week cocaine habit because his Daddy gave him a cushy job in the family business to essentially just occupy an office for eight hours a day.

Eight hours a day means he should master being a lazy douchebag in about five years.

The Human Tie and the Lack of Corporate Morality

I own 25% of my company, which is a significant percentage. This chunk of ownership binds me to this company, in the sense that I see the company as a sort of extension of myself in many ways.  I’m not a minor owner – I’m a significant part of the day-to-day operations, and my input goes a long way towards steering the company.

Thus, if Blend started doing things that were morally objectionable, my personal reputation would take a hit. If Blend suddenly did something bad to our employees – cancelled their health care, for example, to increase our profit – word would get around, and people would think, “Wow, Deane is a jerk.”

I stand to benefit from the profits of Blend, so if I do something to increase those profits at a human cost to someone, then I am placing money above human welfare.  Conversely, if I reduce my personal profit in exchange for a human good, then I’m placing humanity above money. Good for me.

This is a check and balance of my moral leadership of Blend. I am compelled to guide Blend to act morally (sometimes in opposition to profit) in part because it is a reflection of me as a 25% owner. Put another way, there is a significant “Human Tie” to Blend. Blend is less a company by itself, and is rather a very direct aggregation of the morals and ethics of the four owners.

With such direct ownership, humanity shows through. I do not think Blend has any morals itself — it’s a legal entity, and thus knows nothing of morals. Any morals it appears to have are just the morals of its owners, reflected by what the company does.

Now, let’s contrast this to, say, IBM. That company has almost 1 billion shares outstanding. Thus, “ownership” of IBM is almost theoretical.  I can own a single share, and say I’m an owner of the company, but I have no real ability to steer it anywhere.

As of this writing, the largest single owner of IBM shares is Berkshire Hathaway, which owns 7% (just over 70 million shares). Berkshire is, of course, a holding company, putatively owned by Warren Buffett, but in which you can also buy shares. Berkshire has 1.6 million shares outstanding, of which Buffett himself is the largest owner, holding about 32%. This effectively means that, filtered through Berkshire, the person of Warren Buffett owns about 2% of IBM.

The largest direct owner of IBM shares is a guy by the name of Steve Mills (he’s also one of their VPs).  He owns about 153,000 shares – roughly 0.2% of Berkshire’s stake, and only about 0.015% of the company as a whole.
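
To make the arithmetic explicit, here’s a quick Python sketch using the approximate share counts cited above (figures that will obviously drift over time):

```python
# Approximate figures as cited above; they change constantly.
ibm_shares_outstanding = 1_000_000_000  # ~1 billion IBM shares
berkshire_ibm_shares = 70_000_000       # Berkshire's IBM stake (~7%)
buffett_berkshire_stake = 0.32          # Buffett's share of Berkshire
mills_ibm_shares = 153_000              # Steve Mills' direct holding

berkshire_pct = berkshire_ibm_shares / ibm_shares_outstanding
buffett_effective_pct = buffett_berkshire_stake * berkshire_pct
mills_pct = mills_ibm_shares / ibm_shares_outstanding

print(f"Berkshire owns {berkshire_pct:.1%} of IBM")             # 7.0%
print(f"Buffett effectively owns {buffett_effective_pct:.2%}")  # 2.24%
print(f"Mills directly owns {mills_pct:.3%}")                   # 0.015%
```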

What this means is that there’s much less of a Human Tie to IBM.  What IBM does as a company really isn’t a reflection of anybody.  Buffett is the most influential single owner, and even he owns only 2% of it, and he would have to wield that 2% to the benefit of Berkshire and its other shareholders.

When a company like this does something bad – screws its employees, burns the rain forest, ignores human rights abuses, whatever – I find it a little funny that people say, “IBM should be ashamed of itself!”, because this means nothing. IBM is a legal entity, incapable of shame.

What they mean to say, I think, is “The decision makers at IBM should be ashamed of themselves!”  In this, they likely mean the managers and directors.  But they’re not the ultimate decision makers. They simply have to reflect the wishes of the owners.

So, let’s say this: “The owners of IBM should be ashamed of themselves!”  Here we have the truth. But without a strong Human Tie, there’s really no way to enforce this shame.  The only way for this shame to reflect on a human is for some human to identify with this company as an extension of their morality.

Steve Mills owns about 0.015% of IBM.  Even as an officer, I doubt he looks at IBM in any way as an extension of himself. It doesn’t reflect his values or morals or ethics.  He’s just a tiny owner in the big picture (even if he is the largest direct shareholder). He doesn’t own enough of this company to possibly feel shame at anything it does beyond his personal decisions in his position.

In this way, the U.S. economy has devolved into ownership by proxy.  At scale, few people really “own” anything in the sense that it reflects them as humans and that they feel a driving personal need to operate the company according to their private moral code. Even if any owner wanted to exercise their personal ethics, they would have limited ability to do so.

Large, publicly held corporations have essentially become automatons, beholden to nothing but their share price.  And why wouldn’t they be?  The vast majority of their ownership only relates to the company through that number: if it’s up, things are good; if it’s down, things are bad. For most of their owners, that is the sole barometer of success.

I believe that as any financial market evolves, it will gradually strip out pesky problems like morals and humanity. Financial markets are designed to evolve in service of profit, and they do this very well.

Yes, yes, companies have charitable giving programs and employee benefits, but how much of this is in service to PR and employee retention?  Consider this article in the Wall Street Journal: “The Case Against Corporate Social Responsibility”

Very simply, in cases where private profits and public interests are aligned, the idea of corporate social responsibility is irrelevant: Companies that simply do everything they can to boost profits will end up increasing social welfare.

I feel like this sums up the thinking of most of corporate America – trickle-down economics works, and if we simply create as much profit as possible, it will all work out.

Here’s another line of thinking: “When Corporate Theft is Good”

Shareholders tolerate a certain amount of what looks like corporate philanthropy because some customers like to see it, and so become more inclined to buy the company’s products. Used in this way, philanthropy is simply part of a firm’s marketing. And it must be justified in the same way as any other marketing effort: Does it increase revenues by more than it does costs?

So, charitable giving is marketing, essentially.  We’ll do it so long as it benefits us, but not just because it’s the right thing to do.

In 2002, the WSJ did a survey of corporate recruiters, asking what traits they look for in MBA candidates. The results were predictable: corporate citizenship came in last.  Worse:

Recruiters might even regard good deeds on a resume as a negative factor. “If an M.B.A. student spent a summer building houses for Habitat for Humanity, that person could be seen as soft and not ready for the rough-and-tumble world of investment banking.”

And this makes sense, because notions of objective right and wrong – an absolute moral standard – exist only in the minds of humans: we are the only true things that contain morals.  The only objective standard a corporation knows is profit, so deciding it’s cheaper to allow 180 people to burn to death than to fix a fuel system problem (PDF; see page 6) is a perfectly valid action when measured against that standard.

In the end, corporations are inherently greedy by design, and they will act in the interests of the only standard they can be held to: their bottom line.  This is not their fault – these are just the rules of the game we created.

What we hope is that their owners will have a strong enough Human Tie that they desire to operate the company as an extension of their humanity.  Sadly, the chances of this become more and more remote as the number of owners increases. More owners means that each individual human – each container of morality, inasmuch as only humans can be true moral actors – gets further away from the company. With dilution of actual ownership comes dilution of the Human Tie, and with that, the dilution of any need to act toward the good of anyone or anything else.

As with a lot of things in life, there is no real solution to this, short of artificial legal rules that may or may not work, but would be a constant source of legislative turmoil either way. This is just how markets evolve, and this is a price we pay for freedom and liberty and the other (considerable) benefits they provide.

I’ll conclude with a quote from Ron Paul’s book, Liberty Defined, which I read a couple years ago:

We need to become tolerant of the imperfections that come with freedom, and we need to give up the illusion that somehow putting government in charge of anything is going to improve its workings, much less bring on utopia.

Why Marketing Bothers Me

I was thinking about marketing this week while I was on a trip, and two things occurred to me that I find distasteful about the discipline. I’m not claiming all marketing is like this, but I’d call it a majority, certainly.

  • Marketing is often about lying. A lot of marketing simply overstates a product, positioning it in a space or claiming it fills a need that it does not in fact fill, or fills only at the very edges of imagination. Call it “marketing by wishful thinking.”  We hope you’re buying this overpriced thing because you have disposable income and are making a prudent fiscal choice, but we’re quite sure that 99% of the time, it’s a huge mistake. And, of course, Super Sugar Smacks can be part of a well-balanced breakfast, but they hardly ever are.
  • Marketing is often about making people feel bad about themselves. We sell things by filling a need, and when that need doesn’t exist, marketing creates it by making people feel bad about their current situation. Hell, pretty much every women’s fashion magazine is predicated on this idea – page after page of people prettier than you are. We know you thought you looked fine, but after closing this magazine, we’re hoping you feel like a troll and go out and buy our makeup. And sure, your car is practical and reliable, but it sucks compared to this new thing, so come out and spend more than you can afford to make yourself feel better.

I absolutely concede that marketing doesn’t have to be this way, and my hat is off to companies that avoid these two traps. But it’s rare, and society is worse for it.

The Relationship of Faith to Education

Compulsory Schooling Laws and Formation of Beliefs: Education, Religion and Superstition: This paper is behind a paywall, but this quote was provided by FiveThirtyEight, where I read about it:

One additional year of schooling reduces individuals’ propensity to pray every day by about 10 percentage points. Likewise, an additional year of full-time education reduces the propensity to attend religious services at least once a week by 10 percentage points. We also find that schooling reduces the propensity to believe in the protective power of lucky charms, and it decreases the tendency to consult horoscopes, and to take into account horoscopes in daily life.

Gratifying Narrative Syndrome

As human beings, we chronically suffer from what I call “Gratifying Narrative Syndrome,” or a desire to confirm narratives that we find emotionally or psychologically gratifying.

You see these all the time – stories and paradigms that click with us for some reason, and that we very much want to be true. To this end, we interpret evidence in support of them and discount evidence that contradicts them.

  • To lose weight, eat many smaller meals. (Nope)
  • Lowering taxes will actually increase tax revenue. (Uh uh)
  • You need to drink eight glasses of water a day. (A myth)
  • Organic food is better for you. (No evidence)
  • The United States gives large amounts of money away in foreign aid. (Maybe in absolute terms, but not relative to the budget)

This is clearly confirmation bias at work.

Confirmation bias, also called myside bias, is the tendency to search for, interpret, or prioritize information in a way that confirms one’s beliefs or hypotheses. It is a type of cognitive bias and a systematic error of inductive reasoning. People display this bias when they gather or remember information selectively, or when they interpret it in a biased way.

Politics and health are two great sources for this, because they both involve hard truths that we just don’t want to admit are true. We’re emotionally seeking some exception to the rules that we don’t like, so we hold out hope there’s a hidden secret. When we see or hear something which confirms it, we seize it and hang on for dear life.

(Yes, we’re actually always looking for that “one weird trick” that the banner ads promise us. We love this, because we love the idea that a thorny problem can be unlocked by a secret key.)

Additionally, sometimes these narratives subconsciously confirm other paradigms that we hold dear. Thinking that the U.S. gives away a lot of money in foreign aid (in truth, it’s less than 1% of the budget) might secretly make us happy because we like to remain convinced of our relevance to world politics, or that other countries couldn’t function without us, or perhaps that our budget woes are because we’re benevolent to other countries through some form of the Christian work ethic. Story A confirms Story B. If A isn’t true, then neither is B, and we don’t want this.

Another factor: we just love contrived stories that neatly explain something in an interesting way.  The world is full of questions, so we take comfort in the idea that there are explanations for them all. Consider this explanation for the phrase “rule of thumb”:

The expression “rule of thumb” did not originate from a law allowing a man to beat his wife with a stick no thicker than his thumb, and there is no evidence that such a law ever existed.

This story does several things: (1) it neatly explains a phrase that never really made sense to us, (2) it stokes our sense of moral outrage, and (3) it’s interesting in a morbidly curious way.  We imagine telling this story at dinner parties and people nodding in agreement as we have just sagely confirmed a Gratifying Narrative for them.

That quote on the “Rule of Thumb” is from an entire Wikipedia page on common misconceptions. Go read that and see how many you’ve heard and thought were true.  And then ask yourself why you thought they were true. If you trace it back, it’s probably because someone else told you, it sounded reasonable and interesting, or perhaps it helped confirm some other mental paradigm you were holding onto, so you labeled it as accurate and didn’t seek any confirmation lest the boat get rocked.

I’m suddenly wondering if Gratifying Narratives are viral. Do they seek out their own survival by being interesting enough to pass on? Do they need to be contagious to survive?  This is the classic definition of a meme, as coined by the evolutionary biologist Richard Dawkins:

A meme is “an idea, behavior, or style that spreads from person to person within a culture.” A meme acts as a unit for carrying cultural ideas, symbols, or practices that can be transmitted from one mind to another through writing, speech, gestures, rituals, or other imitable phenomena with a mimicked theme.

Is this why we seek to confirm Gratifying Narratives?  Do we secretly want to ensure their survival?

Clearly, that gets a little far-fetched. I think the truth lies somewhere nearer what Doris Graber described in Processing the News. In order to make sense of the world, we’re essentially assembling a large puzzle. Every new piece of information is a new piece in that puzzle. When we can fit that piece into a larger framework somewhere, it goes in with a gratifying “click.”

We love that click. We seek it out, and we avoid any inconvenient details that prevent it from happening.

The Rich and Their Effect on the Cost of Living

I’m not an economist, but I’m fairly confident in a couple of economic principles. 

  • Inflation tells us that when there’s more money chasing the same amount of goods, prices will go up.
  • Supply and demand tells us that when demand increases, prices will also go up.

I read an article (How the ‘creative class’ is dividing U.S. cities) that I think illustrates an end result of that: the relatively well-heeled “knowledge workers” are demanding more and more residential living in the downtown core, which is raising rents and pushing the lower class out of downtown.

The housing options of the disadvantaged are invariably defined by what’s left over. If the wealthy want to live on the waterfront, the poor are driven inland. If high-paid professionals want to live close to the subway — picture the popular orange-line corridor in Arlington — then low-paid cashiers are pushed farther from transit. If upper-class college graduates want to live downtown, as is increasingly the case in many big cities, the poor are priced out to the periphery.

I recently saw this same thing in Sioux Falls.  Across from the only downtown grocery store, several low-rent homes were torn down to build new loft apartments.  I assume those homes had people living in them – eyesores that they were, the houses still fit the economic needs of someone.  I further assume those people can’t afford the apartments which replaced their homes, and that they were therefore evicted and pushed somewhere else.  (As of this writing, you can still see the homes on Google Maps.)

I think there’s a secondary concept at work here – the existence of people with far more wealth ends up indirectly draining the finances of the lower-class.

The fact that someone can afford $2,000/month in rent means there’s a market for that type of apartment, and its existence will eventually come at the expense of more affordable housing and bring average rents up.  The willingness of the upper class to pay large sums of money for things has the tendency to drag costs up overall – they inject more money into the system and increase demand.

This isn’t a problem if the people below them on the wealth ladder are moving up too – inflation raises costs overall, for everyone. The real problem comes when the cliché becomes true – “the rich get richer and the poor get poorer.” When this happens, the rich getting richer drag costs up with them. Even if the poor stay where they’re at, their costs go up. If they’re not advancing economically, they’re worse off by just staying in place.

Put another way, Bob the Banker’s willingness to drop $1 million on a house has a negative, trickle-down effect on finances of Mike the Meatpacker. Mike’s rent edges up in response to the demand and the excess money injected into the local economy by Bob. If Mike doesn’t get a raise, he has a problem.
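
To make the mechanism concrete, here’s a toy supply-and-demand sketch in Python (the linear model and all the numbers are invented purely for illustration): when higher-income renters shift demand outward and the housing stock responds slowly, the market-clearing rent rises for everyone, Mike included.

```python
# Toy linear rental market. All numbers invented for illustration.

def equilibrium_rent(demand_intercept, demand_slope,
                     supply_intercept, supply_slope):
    """Solve Qd = Qs for price, where:
       Qd = demand_intercept - demand_slope * price
       Qs = supply_intercept + supply_slope * price
    """
    return (demand_intercept - supply_intercept) / (demand_slope + supply_slope)

# Baseline market: mostly Mikes.
base = equilibrium_rent(1000, 0.5, 100, 0.4)

# Bob and friends arrive: demand shifts outward (higher intercept),
# while the housing stock stays put (supply unchanged).
after = equilibrium_rent(1200, 0.5, 100, 0.4)

print(f"Baseline rent:         ${base:,.0f}")   # $1,000
print(f"Rent after new demand: ${after:,.0f}")  # ~$1,222
```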

(But Mike might get a raise. Bob’s money goes into the local economy, and – if trickle-down economics is true – that means there’s more money floating around to pay Mike. Yay, capitalism!)

Assuming Mike gets screwed, is this Bob’s fault?  Not really. He’s not doing this intentionally, and his role in this drama is just as accidental as Mike’s.

And what’s the solution?  I honestly have no idea.

The Myth of Water Consumption

I enjoyed this article about the myth of water intake: You Don’t Need 8 Glasses Of Water A Day. This idea has been floating around for decades: you must drink more water – specifically eight glasses per day.

This threshold appears to be a long-standing medical myth. It’s not even clear where it started. The best answer I can find[…] is that the source was a 1945 publication by the National Food and Nutrition Board, a government advisory agency, that stated this: “A suitable allowance of water for adults is 2.5 liters daily in most instances. … Most of this quantity is contained in prepared foods.” The theory is that people read this, ignored the last sentence, and the eight glasses a day (about 2.5 liters) recommendation was born.

The article goes on to cite study after study which found no effect of drinking more water. A few studies suggest some value, but the threshold is much lower than eight glasses, and it’s a level most people just get accidentally.

A meta-study at Dartmouth found the same thing: “Drink at least eight glasses of water a day.” Really? Is there scientific evidence for “8×8”? (PDF; the “8×8” in the title refers to the common exhortation to consume eight 8-oz. glasses of water per day)

No scientific studies were found in support of 8×8.

I’ve always thought this was silly. I’ve never even been close to that amount in pure water (though I used to drink eight cans of Diet Coke a day), and I’ve been fine. The idea that we’re all walking around chronically dehydrated is kind of comical, really.

How about increasing fluid intake to get over a cold?  Nope. Consider:  “Drink plenty of fluids”: a systematic review of evidence for this recommendation in acute respiratory infections

We found data to suggest that giving increased fluids to patients with respiratory infections may cause harm. To date there are no randomised controlled trials to provide definitive evidence […]

All is not lost, however. There may be some value in protecting the kidneys from infection: Fluid and nutrient intake and risk of chronic kidney disease

Higher intakes of fluid appear to protect against CKD. CKD may be preventable at a population level with low-cost increased fluid intake.

But what’s important to note is that we get a lot of fluids in solid food. An Australian study (What drove us to drink 2 litres of water per day?; sadly, behind a paywall) indicated that we get a lot of fluids accidentally. A baked potato, for instance, is apparently 75% water. Women get 2.4 litres of water and men get 3.2 litres, just from eating normally.

Downtown Belongs to Us All

I finished Jeff Speck’s book Walkable City last week.  In the very last section, he discusses why he thinks downtown is the place to begin any attempt to make a city more walkable.  I loved his answer:

The answer to this question is simple. The downtown is the only part of the city that belongs to everybody. It doesn’t matter where you may find your home; the downtown is yours too. Investing in the downtown of a city is the only place-based way to benefit all of its citizens at once.

When I’m in your neighborhood, I’m a foreigner.  It’s your neighborhood, not mine.

But when I’m downtown, it belongs to the city itself. Downtown is the soul of a city – with no residential focus of its own and a history bound up with the whole city, it is no one’s neighborhood. Downtown belongs to us all.

The Validity of The Lesser of Two Evils

This article has reinforced a paradigm that I think gets ignored too often by the environmentally conscious: when considering an option you find lacking, always consider the alternative or the default, and weigh the option against that. Because, no matter how much you don’t like what’s being offered, the alternative or status quo might be worse.

A researcher did a meta-study on what might happen if we replaced fossil fuels with nuclear power:

They next estimated the total number of deaths that could be prevented through nuclear power over the next four decades using available estimates of future nuclear use. Replacing all forecasted nuclear power use until 2050 with natural gas would cause an additional 420,000 deaths, whereas swapping it with coal, which produces significantly more pollution than gas, would mean about 7 million additional deaths.

Put the other way around: current fossil fuel usage causes a lot of deaths.  The status quo is deadly. Air pollution kills.

Anti-nuclear activists will point to deaths caused by nuclear power, which total several thousand over 50+ years (though it’s tough to estimate because the two deadliest accidents were Soviet, and they don’t talk much about fatalities). Even looking at the numbers pessimistically, it’s perhaps a few hundred per year (which is skewed horribly by the two Soviet accidents, which easily account for 90% of total fatalities).

In looking at this, you have to consider the alternative: fossil fuels.  Yes, alternative energy is great, and solar is coming along nicely, but if nuclear goes away, all that capacity is not getting absorbed by wind power, I promise you.  Which means that the alternative to nuclear power is not as pure as the driven snow, and is no doubt worse no matter how you look at it.

Another example: corporate farming is undesirable for many reasons. But organic farming is not perfect due to lower crop yields and lack of scalability. As bad as corporate farming is, it generates a lot of food which feeds a lot of people. If we switch the whole world to organic farming, millions of people will likely starve as a result – people who don’t shop at Whole Foods and who likely aren’t on this continent.  I don’t love pesticides, but I love them immeasurably more than starving children.

When considering something you don’t like, it’s always easy to whitewash the alternative. In rejecting Option A, you’ve constructed an idealized version of Option B in your head, one that is perfect and wonderful and righteous. I don’t think this is valid. You need to consider Option B in light of reality, and it likely has warts.

“The lesser of two evils” is a valid perspective.  If it improves on the status quo, however imperfectly, perhaps that’s the best you can hope for.