Social institutions
When people get together in groups, unusual things can happen — both good and bad. Groups create important social institutions that an individual could not achieve alone, but there can be a darker side to such alliances: Belonging to a group makes people more likely to harm others outside the group.
“Although humans exhibit strong preferences for equity and moral prohibitions against harm in many contexts, people’s priorities change when there is an ‘us’ and a ‘them,’” says Rebecca Saxe, an associate professor of cognitive neuroscience at MIT. “A group of people will often engage in actions that are contrary to the private moral standards of each individual in that group, sweeping otherwise decent individuals into ‘mobs’ that commit looting, vandalism, even physical brutality.”
Several factors play into this transformation. When people are in a group, they feel more anonymous, and less likely to be caught doing something wrong. They may also feel a diminished sense of personal responsibility for collective actions.
Saxe and colleagues recently studied a third factor that cognitive scientists believe may be involved in this group dynamic: the hypothesis that when people are in groups, they “lose touch” with their own morals and beliefs, and become more likely to do things that they would normally believe are wrong.
In a study that recently went online in the journal NeuroImage, the researchers measured brain activity in a part of the brain involved in thinking about oneself. They found that in some people, this activity was reduced when the subjects participated in a competition as part of a group, compared with when they competed as individuals. Those people were more likely to harm their competitors than people who did not exhibit this decreased brain activity.
“This process alone does not account for intergroup conflict: Groups also promote anonymity, diminish personal responsibility, and encourage reframing harmful actions as ‘necessary for the greater good.’ Still, these results suggest that at least in some cases, explicitly reflecting on one’s own personal moral standards may help to attenuate the influence of ‘mob mentality,’” says Mina Cikara, a former MIT postdoc and lead author of the NeuroImage paper.
Group dynamics
Cikara, who is now an assistant professor at Carnegie Mellon University, started this research project after experiencing the consequences of a “mob mentality”: During a visit to Yankee Stadium, her husband was ceaselessly heckled by Yankees fans for wearing a Red Sox cap. “What I decided to do was take the hat from him, thinking I would be a lesser target by virtue of the fact that I was a woman,” Cikara says. “I was so wrong. I have never been called names like that in my entire life.”
The harassment, which continued throughout the trip back to Manhattan, provoked a strong reaction in Cikara, who isn’t even a Red Sox fan.
“It was a really amazing experience because what I realized was I had gone from being an individual to being seen as a member of ‘Red Sox Nation.’ And the way that people responded to me, and the way I felt myself responding back, had changed, by virtue of this visual cue — the baseball hat,” she says. “Once you start feeling attacked on behalf of your group, however arbitrary, it changes your psychology.”
Cikara, then a third-year graduate student at Princeton University, started to investigate the neural mechanisms behind the group dynamics that produce bad behavior. In the new study, done at MIT, Cikara, Saxe (who is also an associate member of MIT’s McGovern Institute for Brain Research), former Harvard University graduate student Anna Jenkins, and former MIT lab manager Nicholas Dufour focused on a part of the brain called the medial prefrontal cortex. When someone is reflecting on himself or herself, this part of the brain lights up in functional magnetic resonance imaging (fMRI) brain scans.
A couple of weeks before the study participants came in for the experiment, the researchers surveyed each of them about their social-media habits, as well as their moral beliefs and behavior. This allowed the researchers to create individualized statements for each subject that were true for that person — for example, “I have stolen food from shared refrigerators” or “I always apologize after bumping into someone.”
When the subjects arrived at the lab, their brains were scanned as they played a game once on their own and once as part of a team. Their task was to press a button whenever they saw a statement related to social media, such as “I have more than 600 Facebook friends.”
The subjects also saw their personalized moral statements mixed in with sentences about social media. Brain scans revealed that when subjects were playing for themselves, the medial prefrontal cortex lit up much more when they read moral statements about themselves than statements about others, consistent with previous findings. However, during the team competition, some people showed a much smaller difference in medial prefrontal cortex activation when they saw the moral statements about themselves compared to those about other people.
Those people also turned out to be much more likely to harm members of the competing group during a task performed after the game. From a set of four photos apiece of two teammates and two members of the opposing team, each subject was asked to select the images that would appear with the published study. The subjects with suppressed medial prefrontal cortex activity chose the least flattering photos of the opposing team members, but not of their own teammates.
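The logic of that comparison can be made concrete with a toy calculation. The sketch below is only a schematic of the contrast described above: the activation values, the subject layout, and the 0.4 cutoff are all invented for illustration, and the study’s actual fMRI analysis is far more involved.

```python
import numpy as np

# Hypothetical per-subject mean mPFC activation (arbitrary units).
# Columns: (solo, self-statement), (solo, other), (team, self), (team, other).
acts = np.array([
    [1.9, 1.0, 1.7, 1.1],   # keeps the self > other signal while on a team
    [2.0, 1.1, 1.2, 1.1],   # self > other signal collapses during team play
])

solo_diff = acts[:, 0] - acts[:, 1]   # self-vs-other contrast, playing alone
team_diff = acts[:, 2] - acts[:, 3]   # self-vs-other contrast, on a team
suppression = solo_diff - team_diff   # how much the self signal shrank in the group

# The study's behavioral link: subjects whose self signal shrank more were the
# ones who chose unflattering photos of opponents. The 0.4 cutoff is arbitrary.
for i, s in enumerate(suppression):
    label = "likely to pick unflattering opponent photos" if s > 0.4 else "typical"
    print(f"subject {i}: suppression = {s:.2f} ({label})")
```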
“This is a nice way of using neuroimaging to try to get insight into something that behaviorally has been really hard to explore,” says David Rand, an assistant professor of psychology at Yale University who was not involved in the research. “It’s been hard to get a direct handle on the extent to which people within a group are tapping into their own understanding of things versus the group’s understanding.”
Getting lost
The researchers also found that after the game, people with reduced medial prefrontal cortex activity had more difficulty remembering the moral statements they had seen during the game.
“If you need to encode something with regard to the self and that ability is somehow undermined when you’re competing with a group, then you should have poor memory associated with that reduction in medial prefrontal cortex signal, and that’s exactly what we see,” Cikara says.
Cikara hopes to follow up on these findings to investigate what makes some people more likely to become “lost” in a group than others. She is also interested in studying whether people are slower to recognize themselves or pick themselves out of a photo lineup after being absorbed in a group activity.
The research was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the Air Force Office of Scientific Research, and the Packard Foundation.
by Miriam Schulman
In 1988, the Markkula Center for Applied Ethics published an article, “Kidneys for Sale,” which was posted about ten years later on our Web site. It addressed the ethical issues raised by the potential for a market in human body parts.
That article has inspired sporadic emails from people asking for advice about how to sell their organs. In recent years, as the economy has soured, we’ve noticed an uptick in the number of such messages. Here’s a sample:
I just read your information about how many people need a kidney. I would like more information about it and how I could sell one of my kidneys to your university because I really need money. I want to go to college, but it’s really expensive.
These correspondents raise some of the hard questions inspiring a reevaluation of organ donation: Should it remain a completely altruistic “gift of life,” or should donors be compensated? The Center’s Emerging Issues Group, which meets weekly to discuss ethical issues in the news, addressed these questions at a recent session. This article outlines some of the crucial considerations raised during that discussion.
A Shortage of Donated Organs
First, a few facts about the acute shortage of kidneys. As of March 6, 113,143 people in the United States were on the waiting list for organs, 91,015 of them waiting for kidneys. In 2011, there were 15,417 kidney transplants in the United States: 10,185 from deceased donors and 5,232 from living donors.
“Data such as these underscore just how scarce organs are,” says Margaret McLean, director of bioethics at the Markkula Center for Applied Ethics. “About 17 people die every day while waiting for a suitable organ. Although numerous strategies have been tried to increase the number of donors—from pink dots on driver’s licenses to PR campaigns to donor reciprocal chains to organ swapping—we continue to come up short.”
That shortage has led to many violations of both US and international laws against kidney sales. This month, for example, the Chinese news agency Xinhua reported that a 17-year-old sold one of his kidneys, a transaction that is illegal in China, to get enough money for an iPhone; he is now suffering from renal insufficiency. “Only the truly naïve imagine that organs are not currently being sold on the black market,” McLean says. The International Business Times estimates that illegal organ sales constitute a $75 million-per-year industry.
Should such transactions be legalized? What are the ethical questions we should ask about the sale of kidneys?
The Commodification of Human Life
Even if legalizing organ sales might inspire more donations, many ethicists reject this approach because they fear where it may lead: to the commodification of human life. Cynthia Cohen from the Kennedy Institute of Ethics at Georgetown writes, “Human beings…are of incomparable ethical worth and admit of no equivalent. Each has a value that is beyond the contingencies of supply and demand or of any other relative estimation. They are priceless. Consequently, to sell an integral human body part is to corrupt the very meaning of human dignity.”
Despite these concerns, the black market itself has put a value on human organs—about $5,000 according to most reports. Peter Minowitz, professor of political science at SCU, suggests, “The actuality is there’s a thriving market for organs, even crossing global boundaries. So even though the sale of organs may, in itself, violate human dignity, that dignity is being violated now on a fairly large scale, especially among the most desperate. Maybe it would be better for them if we legalized the sale and imposed certain standards on it. It’s a very complicated series of considerations, mixing moral judgment with what’s going on in the real world.”
Do No Harm
Undoubtedly, increasing the supply of living donors would be good for organ recipients. According to the Organ Procurement and Transplantation Network, about 90 percent of people who received a living-donor kidney and 82 percent of those who received a deceased-donor kidney were alive five years after the transplant.
But what happens to the donors? “Usually, in medical ethics, we are looking at harm and good respective to a single patient,” says McLean. “Here we are looking at harm and good for two patients where good is going to accrue to one and potential harm to the other.”
Generally, kidney donation from a living donor is seen as a relatively safe procedure, since the human body functions adequately with only one kidney. The mortality rate for the removal of a kidney (nephrectomy) is between 0.02 and 0.03 percent, about one death in every 3,300 to 5,000 operations; major complications affect 1.5 percent of patients, and minor complications affect 8.5 percent. The University of Maryland Transplant Center states:
The risks of donation are similar to those involved with any major surgery, such as bleeding and infection. Death resulting from kidney donation is extremely rare. Current research indicates that kidney donation does not change life expectancy or increase a person’s risks of developing kidney disease or other health problems.
While this picture may accurately reflect the experience of donors in first-world countries, outcomes reported in the developing world are less benign. Madhav Goyal, Ravindra Mehta, Lawrence Schneiderman, and Ashwini Sehgal studied 305 residents of Chennai, India, who had sold their kidneys. Participants were asked to rate their health status before and after the operation. Eighty-nine percent of the respondents reported at least some decline in their health: “Fifty percent complained of persistent pain at the nephrectomy site and 33 percent complained of long-term back pain.”
McLean points out that society also incurs risks when someone donates a kidney. “Who pays if the donor is harmed or develops renal failure of unrelated etiology 15 years later and needs a transplant?” she asks.
In bioethics, where the first rule is “Do no harm,” can the sale of kidneys be judged to conform to this basic principle? Are there better ways to protect donors so that no disproportionate harm comes to them?
The Problem of Exploitation and Informed Consent
The Indian experience points to another of the key objections that have been raised against the sale of organs: the danger that poor people will be exploited in the transaction. Nicky Santos, S.J., visiting scholar at the Ethics Center and an expert on marketing strategy for impoverished market segments, argues strongly that desperation “drives the poor to make choices which are not really in their best interests.” Such lopsided transactions may exacerbate already existing inequities, where the rich have access to excellent health care and the poor do not.
That was the conclusion of the Bellagio Task Force Report to the International Red Cross on “Transplantation, Bodily Integrity, and the International Traffic in Organs”:
Existing social and political inequities are such that commercialization would put powerless and deprived people at still graver risk. The physical well-being of disadvantaged populations, especially in developing countries, is already placed in jeopardy by a variety of causes, including the hazards of inadequate nutrition, substandard housing, unclean water, and parasitic infection. In these circumstances, adding organ sale to this roster would be to subject an already vulnerable group to yet another threat to its physical health and bodily integrity.
On the other hand, some view this attitude as paternalistic. “You could raise the question,” says Michael McFarland, S.J., “Are the rich or those in power in a position to tell the poor they are not capable of making a decision? Doesn’t that violate their human dignity? It seems to me that a person in desperate circumstances could be making a perfectly rational decision that the sale of a kidney is in his or her best interests.”
McFarland, a Center visiting scholar and former president of the College of the Holy Cross, goes on, “You could see the sale of organs as a way for the poor to derive some benefit from donating an organ, which they wouldn’t otherwise get. For example, if a poor person was willing to donate a kidney but couldn’t afford to take the time off, wouldn’t it be reasonable to allow him or her to be compensated for that time?”
More people might be persuaded by this argument if, in fact, kidney sales really did help the poor financially. But in India, donors often did not receive the benefit they expected from the sale of their organs. Ninety-six percent of the people in the study had sold a kidney to pay off a debt, but six years after the operation, 74 percent of those studied still owed money.
Most of the benefit from organ sales goes to middlemen. Havocscope, which monitors black markets, found last May that the average reported amount paid to kidney donors was $5,000, while the average price paid by recipients was $150,000; in other words, only about 3 percent of what the recipient pays reaches the donor. “The real injustice to the poor is they are getting so little, while those who are involved in these illegal sales are getting all the money,” says Rev. Brendan McGuire, vicar general of the Diocese of San Jose.
Santos believes that the poor cannot really make free decisions to sell their organs because they are so driven by their dire circumstances. McFarland agrees that the issue of consent is the real sticking point for creating a market for organs. “I think what stops us is the concern about being able to count on a genuine free consent on the part of the donor.” But he does not believe any moral absolute makes the sale of kidneys unacceptable. “It comes back to the issue of truly informed consent. Do people understand the risks they are taking on? Are those acceptable risks? Are people capable of making free decisions about whether to take those risks?”
Altruism or Justice
Informed consent is, of course, as crucial for organ donation as it would be for organ sale. But donation frames the process as a wholly altruistic act. “For a living donor,” says McLean, “it may be a chance to help a family member or friend or even a stranger.” For a person signing on to donate organs after death, it may be seen as a way to give back or not to die in vain. And for the family of a deceased donor, it’s “a way to have a little bit of someone alive in the world,” she continues.
Many people value this altruistic aspect of the current system and do not want to see organ donation reduced to a business transaction. But, McFarland asks, “Is it the wisest and most moral policy to run a social system like kidney donation entirely on altruism?” That may be the ideal, he agrees, but since it has not been very effective at meeting the need for organs, it may be better to “strive for justice and not depend totally on altruism.”
The idea of justice encompasses concern about the exploitation of the poor, but it raises even broader concerns about fairness. These might be summed up in another email we received at the Ethics Center:
So what? Is the sale of one’s kidney lawful? Morality or ethics has nothing to do with it when you’re down and out. Why doesn’t someone ask the same of doctors and hospitals when they sell the transplant operation? Why is it when John Q. Public sees a way into the open markets, that he gets hit with the morality/ethical questions?
Is it fair that everyone involved in organ transplantation—doctors, hospital, nurses, recipient—gets something out of the process except the donor or the donor’s family?
Also, donors on the black market are rarely paid anything approaching what the kidney is worth. Justice might be better served if donors were paid more. In the Indian study, the average price of an organ in 2001 was $1,410. Nobel Laureate in Economics Gary Becker and his colleague Julio Elias have calculated $45,000 as a fair price. Fairer, still say some ethicists, would be a system that pays the donor a figure closer to the actual cost of maintaining a patient on the waiting list for organs, including the cost of dialysis over many years. Arthur Matas and Mark Schnitzler have calculated that a transplant from a living unrelated donor would save at least $94,579.
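For readers who want to see how such dollar figures hang together, here is a minimal sketch of the arithmetic in Python. Every input below is a hypothetical placeholder, not a figure Matas and Schnitzler actually used; the sketch only shows the shape of the comparison, dialysis while waiting versus transplant plus follow-up care.

```python
# Illustrative sketch of the "a transplant saves money" arithmetic.
# All inputs are hypothetical placeholders, not Matas & Schnitzler's data.

annual_dialysis_cost = 70_000         # hypothetical cost of one year of dialysis
expected_wait_years = 4               # hypothetical average wait for a deceased-donor kidney
transplant_cost = 110_000             # hypothetical one-time surgery and follow-up
annual_post_transplant_care = 20_000  # hypothetical immunosuppression and checkups

# Cost of keeping a patient on dialysis while waiting, vs. transplanting now.
dialysis_path = annual_dialysis_cost * expected_wait_years
transplant_path = transplant_cost + annual_post_transplant_care * expected_wait_years

savings = dialysis_path - transplant_path
print(f"Estimated savings per living-donor transplant: ${savings:,}")
# With these placeholders the savings come to $90,000, the same order of
# magnitude as the $94,579 figure quoted above; that is all the sketch shows.
```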
Alternatively, the donor wouldn’t necessarily need to be paid in cash to be compensated. McLean reviews some other proposals to give something back to donors: “One suggestion has been to at least offer to pay funeral expenses for a deceased donor because for many people that’s a stumbling block. For live donors — and this could be hugely attractive in the current environment — we might offer to cover their health care for the rest of their lives in exchange for doing this good.”
Another approach to fairness has recently been adopted by Israel and is advocated in the United States by the private organization LifeSharers. Top priority on Israel’s waiting list goes to candidates who have themselves agreed to be donors; those who don’t sign up as donors get a transplant only if there is a surplus of organs.
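A rule like Israel’s is easy to state precisely, which a small sketch can illustrate. The field names, the tiebreaker, and the data below are invented; the real allocation system also weighs medical urgency, tissue matching, and much more.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    registered_donor: bool   # has the candidate agreed to be a donor?
    years_waiting: float

def allocation_order(waiting_list: list[Candidate]) -> list[Candidate]:
    # Registered donors first; within each group, longer waits come first.
    return sorted(waiting_list,
                  key=lambda c: (not c.registered_donor, -c.years_waiting))

candidates = [
    Candidate("A", registered_donor=False, years_waiting=6.0),
    Candidate("B", registered_donor=True, years_waiting=2.5),
    Candidate("C", registered_donor=True, years_waiting=4.0),
]
for c in allocation_order(candidates):
    print(c.name)
# Prints C, B, A: donors C and B outrank non-donor A despite A's longer wait.
```

Under such a rule, non-donors like A receive a kidney only when no registered donor is waiting, which is the “surplus of organs” condition described above.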
All proposals to allow the sale of organs raise ethical as well as medical risks. However, as E.A. Friedman and A.L. Friedman argue in Kidney International, the journal of the International Society of Nephrology:
At least debating the controlled initiation and study of potential regimens that may increase donor kidney supply in the future in a scientifically and ethically responsible manner, is better than doing nothing more productive than complaining about the current system’s failure.
Miriam Schulman is assistant director of the Markkula Center for Applied Ethics at Santa Clara University.
Apr 1, 2012
Every Nook and Cranny: The Dangerous Spread of Commercialized Culture
by Gary Ruskin and Juliet Schor
In December, many people in Washington, D.C. paused to absorb the meaning in the lighting of the National Christmas Tree, at the White House Ellipse. At that event, President George W. Bush reflected that the “love and gifts” of Christmas were “signs and symbols of even a greater love and gift that came on a holy night.”
But these signs weren’t the only ones on display. Perhaps it was not surprising that the illumination was sponsored by MCI, which, as MCI WorldCom, committed one of the largest corporate frauds in history. Such public displays of commercialism have become commonplace in the United States.
The rise of commercialism is an artifact of the growth of corporate power. It began as part of a political and ideological response by corporations to wage pressures, rising social expenditures, and the successes of the environmental and consumer movements in the late 1960s and early 1970s. Corporations fostered the anti-tax movement and support for corporate welfare, which helped create funding crises in state and local governments and schools, and made them more willing to carry commercial advertising. They promoted “free market” ideology, privatization and consumerism, while denigrating the public sphere. In the late 1970s, Mobil Oil began its decades-long advertising on the New York Times op-ed page, one example of a larger corporate effort to reverse a precipitous decline in public approval of corporations. They also became adept at manipulating the campaign finance system, and weaknesses in the federal bribery statute, to procure influence in governments at all levels.
Perhaps most importantly, the commercialization of government and culture and the growing importance of material acquisition and consumer lifestyles was hastened by the co-optation of potentially countervailing institutions, such as churches (papal visits have been sponsored by Pepsi, Federal Express and Mercedes-Benz), governments, schools, universities and nongovernmental organizations.
While advertising has long been an element in the circus of U.S. life, not until recently has it been recognized as having political or social merit. For nearly two centuries, advertising (lawyers call it commercial speech) was not protected by the U.S. Constitution. The U.S. Supreme Court ruled in 1942 that states could regulate commercial speech at will. But in 1976, the Court granted constitutional protection to commercial speech. Corporations have used this new right of speech to proliferate advertising into nearly every nook and cranny of life.
Entering the schoolhouse
During most of the twentieth century, there was little advertising in schools. That changed in 1989, when Chris Whittle’s Channel One enticed schools to accept advertising by offering to loan TV sets to classrooms. Each school day, Channel One features at least two minutes of ads and 10 minutes of news, fluff, banter, and quizzes. The program is shown to about 8 million children in 12,000 schools.
Soda, candy, and fast food companies soon learned Channel One’s lesson of using financial incentives to gain access to schoolchildren. By 2000, 94 percent of high schools allowed the sale of soda, and 72 percent allowed the sale of chocolate candy. Energy, candy, and personal-care companies, and even automobile manufacturers, have entered the classroom with “sponsored educational materials” — that is, ads in the guise of free “curricula.”
Until recently, corporate incursions into schools mostly flew under the radar. However, the rise of childhood obesity has engendered stiff political opposition to junk food marketing, and in the last three years coalitions of progressives, conservatives, and public health groups have made headway. California has banned the sale of soda in elementary, middle, and junior high schools. In Maine, soda and candy suppliers have removed their products from vending machines in all schools. Arkansas banned candy and soda vending machines in elementary schools. Los Angeles, Chicago, and New York have citywide bans on the sale of soda in schools. Channel One was expelled from the Nashville public schools in the 2002-03 school year and will be removed from Seattle schools in early 2005. Thanks to activist pressure, ZapMe!, a company that placed computers in thousands of schools to advertise to students and extract data from them, was removed from all schools across the country.
Ad creep and spam culture
Advertisers have long relied on 30-second TV spots to deliver messages to mass audiences. During the 1990s, the impact of these ads began to drop off, in part because viewers simply clicked to different programs during ads. In response, many advertisers began to place ads elsewhere, leading to “ad creep” — the spread of ads throughout social space and cultural institutions. Whole new marketing sub-specialties developed, such as “place-based” advertising, which coerces captive viewers into watching video ads. Examples include ads before movies, ads on buses and trains in cities such as Chicago, Milwaukee, and Orlando, and CNN’s Airport channel. Video ads are also now common on ATMs and gas pumps, and in convenience stores and doctors’ offices.
Another form of ad creep is “product placement,” in which advertisers pay to have their product included in movies, TV shows, museum exhibits, or other forms of media and culture. Product placement is thought to be more effective than the traditional 30-second ad because it sneaks by the viewer’s critical faculties. Product placement has recently occurred in novels and children’s books. Some U.S. TV programs (American Idol, The Restaurant, The Apprentice) and movies (Minority Report, Cellular) are so full of product placement that they resemble infomercials. By contrast, many European nations, such as Austria, Germany, Norway, and the United Kingdom, ban or sharply restrict product placement on television.
Commercial use of the Internet was forbidden as recently as the early 1990s, and the first spam wasn’t sent until 1994. But the marketing industry quickly penetrated this sphere as well, and now 70 percent of all e-mail is spam, according to the spam filter firm Postini Inc. Pop-ups, pop-unders and ad-ware have become major annoyances for Internet users. Telemarketing became so unpopular that the corporate-friendly Federal Trade Commission established a National Do Not Call Registry, which has brought relief from telemarketing calls to 64 million households.
Even major cultural institutions have been harnessed by the advertising industry. During 2001-2002, the Smithsonian Institution, perhaps the most important U.S. cultural institution, established the General Motors Hall of Transportation and the Lockheed Martin Imax Theater. Following public opposition and Congressional action, the commercialization of the Smithsonian has largely been halted. In 2000, the Library of Congress hosted a giant celebration for Coca-Cola, essentially converting the nation’s most important library into a prop to sell soda pop.
Targeting kids
For a time, institutions of childhood were relatively uncommercialized, as adults subscribed to the notion of childhood innocence, and the need to keep children from the “profane” commercial world. But what was once a trickle of advertising to children has become a flood. Corporations spend about $15 billion marketing to children in the United States each year, and by the mid-1990s, the average child was exposed to 40,000 TV ads annually.
Children have few legal protections from corporate marketers in the United States. This contrasts strongly with the European Union, which has enacted restrictions: Norway and Sweden have banned television advertising to children under 12; in Italy, advertising during TV cartoons is illegal; in Greece, toy advertising is illegal between 7 a.m. and 11 p.m.; and in Austria, advertising before and after children’s programs is banned.
Government brought to you by…
As fiscal crises have descended upon local governments, they have turned to advertisers as a revenue source. This trend began inauspiciously in Buffalo, New York in 1995 when Pratt & Lambert, a local paint company, purchased the right to call itself the city’s official paint. The next year the company was bought by Sherwin-Williams, which closed the local factory and eliminated its 200 jobs.
In 1997, Ocean City, Maryland signed an exclusive marketing deal to make Coca-Cola the city’s official drink, and other cities have followed with similar deals with Coke or Pepsi. Even mighty New York City has succumbed, signing a $166 million exclusive marketing deal with Snapple, after which some critics dubbed it the “Big Snapple.”
At the United Nations, UNICEF made a stir in 2002 when it announced that it would “team up” with McDonald’s, the world’s largest fast food company, to promote “McDonald’s World Children’s Day” in celebration of the anniversary of the United Nations adoption of the Convention on the Rights of the Child. Public health and children’s advocates across the globe protested, prompting UNICEF to decline participation in later years.
Another victory for the anti-commercialism forces, perhaps the most significant, came in 2004, when the World Health Organization’s Framework Convention on Tobacco Control became legally binding. The treaty commits nations to prohibit tobacco advertising to the extent their constitutions allow it.
Impacts
Because commercialism has become so ubiquitous, it is not surprising that its effects are equally widespread. Perhaps most alarming has been the epidemic of marketing-related diseases afflicting people in the United States, especially children, such as obesity, type 2 diabetes, and smoking-related illnesses. Each day, about 2,000 U.S. children begin to smoke, and about one-third of them will eventually die of tobacco-related illnesses. Children are inundated with advertising for high-calorie junk food and fast food, and, predictably, 15 percent of U.S. children aged 6 to 19 are now overweight.
Excessive commercialism is also creating a more materialistic populace. In 2003, the annual UCLA survey of incoming college freshmen found that the share of students who said it was a very important or essential life goal to “develop a meaningful philosophy of life” fell to an all-time low of 39 percent, while the share who said the same of succeeding financially rose to a 13-year high of 74 percent. High involvement in consumer culture has been shown (by Schor) to be a significant cause of depression, anxiety, low self-esteem, and psychosomatic complaints in children, findings that parallel similar studies of materialism among teens and adults. Other impacts are more intangible. A 2004 poll by Yankelovich Partners found that 61 percent of the U.S. public “feel that the amount of marketing and advertising is out of control,” and 65 percent “feel constantly bombarded with too much advertising and marketing.” Is advertising diminishing our sense of general well-being? Perhaps.
The purpose of most commercial advertising is to increase demand for a product. As John Kenneth Galbraith noted 40 years ago, the macro effect of advertising is to artificially boost demand for private goods, thereby reducing the “demand,” or support, for unadvertised public goods. The predictable result has been the backlash against taxes and reduced provision of public goods and services.
This imbalance also affects the natural environment. The additional consumption created by the estimated $265 billion that the advertising industry will spend in 2004 will also yield more pollution, natural resource destruction, carbon dioxide emissions and global warming.
Finally, advertising has also contributed to a narrowing of the public discourse, as advertising-driven media grow ever more timid. Sometimes it seems as if we live in an echo chamber, a place where corporations speak and everyone else listens.
Governments at all levels have failed to address these impacts. That may be because the most insidious effect of commercialism is to undermine government integrity. As governments adopt commercial values, and are integrated into corporate marketing, they develop conflicts of interest that make them less likely to take stands against commercialism.
Disgust among yourselves
As corporations consolidate their control over governments and culture, we don’t expect an outright reversal of commercialization in the near future, despite considerable public sentiment for more limits and regulations on advertising and marketing. However, as commercialism grows more intrusive, public distaste for it will likely increase, as will political support for restricting it. In the long run, we believe this hopeful trend will gather strength.
In the not-too-distant future, the significance of the lighting of the National Christmas Tree may no longer be overshadowed by public relations efforts to create goodwill for corporate wrongdoers.