It is extremely easy to survey the world in which we live and point out all of the areas that are problematic. It is much more difficult to provide viable solutions to the things that ail us, and one small word plays a massive role in why we do not provide solutions to the obvious problems that have plagued us for eons:

Consensus. If we can ever come to consensus about the problems we are faced with as a culture, we can then unleash our inexhaustible creativity to find new, dynamic solutions to age-old problems. However, we do not, and here is why:

We simply cannot put aside our differences in belief to come to a place where we agree to disagree on the minor details while pushing forward to complete the task of addressing and solving the large-scale problems that, if we are honest with ourselves and one another, we know for a fact exist.

To take this idea a step further, let us narrow the focus to just the United States. In this country, according to the 2012 census, 46.5 million people were living at or below the poverty level. That is fifteen percent of the populace. When pressed, most people would agree that poverty is a problem in this country, and one that needs to be discussed, assessed, and addressed. However, it is at this point that any form of discourse becomes derailed, and the focal point of finding and creating solutions gets lost amid our inability to achieve consensus as to what to do about the problem.
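
For the skeptical reader, those two figures can be checked against each other with a back-of-the-envelope calculation (sketched here in Python; the only outside assumption is that the 2012 U.S. population was roughly 314 million):

    people_in_poverty = 46_500_000   # Census figure cited above
    poverty_rate = 0.15              # fifteen percent of the populace
    # Implied total population: ~310,000,000, consistent with the
    # roughly 314 million people living in the U.S. in 2012.
    print(f"{people_in_poverty / poverty_rate:,.0f}")

The numbers hold together; the problem is real and measurable. What follows is why we still cannot agree on what to do about it.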

Here is how it happens: some people will admit that poverty is indeed a problem, and that it is best addressed by turning it over to the hands of our elected officials. They believe that problems as large-scale as this should be solved by people who were put in a position to tackle such issues as part of their job description. Other people will respond to this suggested solution by saying that the government is already far too large, that our elected officials have spent years addressing this issue and we still do not have a solution; therefore, we must look at the root cause of poverty and address why it even exists in the first place. One solution (viable or not) is suggested, and the response changes the focus from the viability of the suggestion to a completely different discussion. To continue the discussion, group (or individual) “A” will state that the problem is systemic, cultural, and generational, and that something needs to be done to fix the inequalities that currently exist. Group (or individual) “B” replies that it is not the system but the people who are the root of the problem, and that everyone in this country has the opportunity to pull themselves up by their bootstraps and make a better life for themselves. Those interactions shift the focus to the viability of the “American Dream”: whether such a thing ever existed, and if it did, whether it still exists today. This shifting of the discussion continues ad infinitum, and the original assertion, that poverty is problematic and needs to be discussed, assessed, and addressed, is forgotten along the way.

Now, all of the hypothetical responses will have to be dealt with at some point in the discussion, as each assertion contains defensible truths. Our elected officials should indeed be seeking out how to best “promote the general welfare” and “secure the blessings of liberty to ourselves and our posterity”, as set forth in the Constitution they swore to uphold. The case can also be made that said elected officials have been attempting to do this for generations, and their solutions have not been effective. The flaws that are inherent in the system will need to be addressed, as will the question of whether the ideology behind the concept of the “American Dream” is still (or ever was) a reality for all the people who inhabit this country. These are real concepts, defensible assertions, and in need of dialogue. However, they sidestep the original need: achieving consensus in our general agreement that poverty is not good and must be solved.

In any creative endeavor, it is extremely rare that a finished idea comes fully formed out of the ether. There are rare instances of this happening (such as a musician who states that “the song just came out of nowhere” or the artist who claims “the painting came to me in a dream”), but the majority of creative endeavors need an initial phase: brainstorming. One of the concepts I teach my students is that in the brainstorming session, there is no such thing as a “bad idea”. No idea can be too fantastic, too ridiculous, too silly, too radical, or too far-fetched. Every idea gets written down in the process, and after all ideas have been exhausted, it becomes time to begin the decision-making process of deciding which ideas are potentially solid and which can be amended, incorporated, tweaked, or simply cast aside. Granting credence to any and every idea while brainstorming is key to coming up with creative, innovative, and ultimately positive solutions to whatever the problem happens to be (be it the creation of a work of art or the solution to a major social ill), and I am convinced that we have yet to apply this approach to the things that are wrong with our society and culture.

But before we can creatively attempt to solve problems, we must come to consensus, a general agreement, that the problems indeed exist in the first place. From there, we can move forward collectively in an attempt to make this world a better place for all of humanity.

Now, it is easy to be cynical about the notion of collectively creating major change in the world. As history indicates, paradigm shifts are slow-moving events that take years, sometimes generations to occur, and are usually met with extreme resistance (think the American Revolution or the Civil Rights movement). Change is difficult, which is why it happens so infrequently. Major change requires dialogue, discourse, education, and the ability to be open-minded to new ideas, even at the expense of discarding long-held ideologies and beliefs if they stand in the way of positive outcomes for the whole. It requires much.

But it is possible. We now live in a world that is full of magnificent technological gifts that no other generation has ever had. If we combine those gifts with our endless creativity and ingenuity, we have the potential to re-shape the world into a place where we agree to come to a consensus regarding what our most pressing problems are, and then address them with creative ideas that have yet to be applied.

Why should we believe that there are new ideas to combat old social dilemmas? Well, there are a few reasons. One, the problems still exist, which means that we have yet to come up with the proper solutions for them, and that means the answer is still out there, waiting to be imagined. Two, we have tried a variety of approaches to solving social problems, but we have generally stayed within the “traditional” parameters (waiting for elected officials to act, throwing money and programs at the problem, and so on) and have yet to apply truly creative approaches to problem solving. If our creativity can produce the mind-blowing technologies that we have available to us today, imagine what we could conceive if we applied that same creativity and “what if?” approach to social issues. The potential results are worth our collective efforts. Three, our current world of social media allows us to connect in ways never before seen in human history; we just need to start using these outlets for more than mere entertainment.

Think of YouTube: before it became a place where people upload the works of others for the world to see, it was designed to be a platform for individuals to put their own creations out into the ether that is the internet. When it debuted, YouTube carried the tagline “Broadcast Yourself”, and that is exactly what people who engaged with the site did. This use of such a powerful tool continues to grow today, and it does so in a way that exemplifies people helping each other out.

Example: recently, the air conditioning in my home stopped working (which is not a “problem”, but merely an inconvenience). Anyone who has been in such a situation knows that the repair of heating and air conditioning units comes with a large price tag. However, having grown up as the son of a general contractor and having spent my high school and college summers working for the family company, I tend to be quite handy around the house. Not a professional, mind you, but I can set tile, lay hardwood floors, fix plumbing problems, install outdoor sprinkler systems, and handle other jobs that have allowed me to fix things in the home without the need to seek out a professional.

But not air conditioning.

Because it went out on a relatively cool weekend, I did not need to schedule a technician to come out until after the weekend. In my spare time, I researched what potentially could be wrong with the system and how those problems could be addressed.

Enter YouTube.

I found “how to” videos, uploaded by people who ran the gamut from self-described laypeople to professionals, and there were more than I could possibly watch, even if given the time. The videos were detailed, full of pertinent information, and extremely instructive about what and what not to do. It was an amazing treasure trove of collective thought and wisdom, complete with many of the people who uploaded the videos answering questions posed to them in the comments section. What I learned is that one of the potential culprits is a worn-out capacitor, and that it is a relatively easy and inexpensive fix.

I kept my appointment with the technician, and his diagnostic revealed that I did indeed have a worn-out capacitor. He wrote up an estimate for the repair, which totaled nearly seven hundred dollars. I thanked him for his time, paid him for the diagnostic, and proceeded to an electrical supply store to buy a replacement capacitor. Armed with my YouTube video and pictures of my old connection, I opened the panel, discharged and removed the old capacitor (which the technician had put back in place), and replaced it with a new one. Total time (including driving to the supply store): one and a half hours. Total cost: $18.30. That is not a typo. I fixed the problem for less than twenty dollars, thanks to an unknown individual who chose to share their know-how and expertise because they wanted to help someone else out.

Imagine, for a moment, if we could apply our creativity, our know-how, our immense amount of collective knowledge to solving the problems of social injustice, using these skills because we want to enact positive change. It could change the world.

I do not have all of the answers, and neither do you. But together, collectively, information could be exchanged, ideas thrown about, and possible solutions arrived at when all is said and done.

But first, we must come to consensus that there are indeed things wrong in the world. If we can do that, we can then proceed to apply our indefatigable collective creativity, through our amazing technology, as the means to ending injustices that have plagued us for far too long.

To quote a decades-old song: “What a beautiful world this will be…”

If we survey the course of human history, the list of accomplishments and achievements is staggering. Innovation has always been a strong component of our existence, and it continues to produce ideas that, not long ago, were viewed as fiction, as impossibilities. Time and again, humans have managed to defy common perceptions and go beyond what was thought to be possible, and in doing so, have made the world a better place, and made our existence much easier to navigate.

And yet…

And yet.

We live in a nation where almost 700,000 people do not have a home. We live in a nation where fifteen percent of our population lives below the poverty threshold. We live in a nation where one out of five children goes to bed hungry every night. We live in one of the richest countries in the history of the world, and yet there is still a large percentage of our population that doesn’t have its most basic needs met each day.

Why?

How is it possible, given all of our technological advancements that allow for everything from GPS navigation to space travel to stem cell research to internet accessibility via a plethora of devices, that we cannot take the same innovative approach to ensure that our fellow human beings have their most basic needs met? How is it possible that, given how the world seems to be growing smaller because of our technological advancements, we still have a large percentage of people who do not even register as a blip on our cultural radar or in our collective consciousness? Find any list of problems facing America from any news source, and the homeless and the hungry will (more times than not) be conspicuously absent from it.

Therein lies one of the many paradoxes of our culture today: we have the means and the minds to provide sound, solid solutions to complex problems and achieve creative success in ways that used to seem impossible, but we direct that creativity into making technological devices that, while making for a more convenient lifestyle for those who can afford them, still tend to trade on their “coolness” factor and entertainment value. As powerful a tool as a computer, tablet, or smartphone may potentially be, most people primarily use them as a vehicle for all things trivial. In the process, the truths, realities, and facts that make us uncomfortable are pushed far behind the sheen and novelty of the latest and greatest device we possess. While we may be able to effectively not think about the problems that plague our culture and turn our attention to a more comfortable place, that doesn’t mean the problems disappear. Rather, this approach exacerbates them, because not only are we not acknowledging the problems, we’re not even thinking about them.

Now, this is not an indictment of culture in particular, nor is it an indictment of technology in general. Rather, it is a request that we collectively consider the fact that there are those around us who are disadvantaged to the point of not having their basic needs met, and that we focus our amazing creative inventiveness on generating solutions to problems that need not be seen as unsolvable. If we can create a device that is no bigger than our hand but is capable of taking pictures and movies, playing music, accessing the internet, and functioning as a multilayered communication device, we should be able to create a solution to the problems of homelessness and hunger.

To do that, a few things need to occur. The first (and quite possibly the most difficult) is that we need to dispel the notion that folks who experience homelessness or hunger have somehow “brought it on themselves” and have “nobody to blame but themselves” for their present predicament. While that may be an easy way to seemingly dismiss the dilemma entirely (even though this take only comes out of the mouths of people who are currently neither homeless nor hungry), it does nothing to create progress towards a solution and is counterproductive to problem solving. Are there people who live on the streets who have made some questionable choices in life that accelerated their descent into their present state? Of course there are. However, that doesn’t make them any less human than those of us who are currently fortunate enough to not exist in that fashion, and being part of humanity is what ties us all together. This sentiment was best expressed in the words of Eugene Debs:

“While there is a lower class, I am in it, and while there is a criminal element, I am of it, and while there is a soul in prison, I am not free.”

If we can ever get to the place in which we see the stories of others as being our stories, the trials and tribulations of others as being our trials and tribulations, and what comprises the life of a fellow being as being one and the same with our lives, then we will be on the road to a better understanding of what it actually means to be human, both as individuals and as members of the human race. In order to begin the journey to such a place, we have to cease rushing to the notion of blame, and instead rush to the notion of understanding.

The next step is that we need to come to an understanding of (and an admission of) the fact that, while our technology is amazing, it is literally changing the way our brains work, and it averts our attention from the bigger picture and directs us towards places of distraction. (Note: for an in-depth analysis of this phenomenon, I would urge you to read The Shallows by Nicholas Carr.) We must make the time in our lives as individuals (and then later, collectively) to put the distractions away for a while. Turn off the music (or at least listen to music that gives us space to think and reflect). Turn off the television. Turn off the phone, the tablet, the computer. Be still. Contemplate the parts of our culture that don’t always feel good to consider. Ask ourselves, our families, our friends, our community what can be done to alleviate something problematic in the world and assist our fellow members of the human race who are in need of assistance. Granted, we have to start small, and it will not happen overnight, but at least it is a start. Then, keep at it. Find like-minded individuals and groups to share ideas, strategies, anything that will promote the common good. There is not only safety in numbers; there is power, and transformative power, at that.

Finally, we need to remove our fear of and resistance to working as a collective. Unfortunately, that word is too often mistaken for “Communism” or “Socialism”, when it is neither of those things. Communism and Socialism are political and economic theories, while working as a collective is simply a group of like-minded individuals working together to achieve a specific goal. However, mention the possibility of a “collective conversation as a nation about X problem”, and you will often be met with immediate resistance. This knee-jerk reaction occurs because, in America, we have been indoctrinated to recoil from anything that can be remotely construed as either Communism or Socialism (or both), as either ideology runs contrary to the present systems in place. However, we shouldn’t fear working collectively to try to solve problems that have plagued us for years, especially when there are so many ways in which we haven’t yet tried to solve them.

So, does a single individual possess the definitive answer that would cure the aforementioned problems? Of course not, and that is precisely why we need to come together collectively, merging our experience, our expertise, our innovations, our imaginations, and our ideas to find a solution to problems that need not (and should not) exist.

You may say I’m a dreamer
But I’m not the only one
I hope someday you’ll join us
And the world will live as one

Here’s to holding on to the hope that someday, somehow, we’ll see Lennon’s imaginings realized. We are capable, we have the ability, and it’s not too late.

Towards the end of the school year, I was working with my students in preparation for their upcoming debates. While reading through a closing statement, I made the general announcement that referring to someone as a “terrorist” was an Ad Hominem attack, and therefore a logical fallacy. Multiple students protested such a pronouncement, because the word so clearly describes the type of behavior the individual (or individuals) in question engage in. I had to explain that because the word carries a negative connotation, it qualifies as name calling, which is a component of the Ad Hominem fallacy. They then requested that I provide them with an alternative, which I did: persons engaged in acts of terror. One student threw up his hands and said, “but that’s just semantics!”, and considering the way the word “semantics” gets used today, he was correct in his assertion (but incorrect as to “terrorist” not being an Ad Hominem attack).

When we use the word “semantics”, we are often describing a situation in which multiple people are talking about the same thing, but using different language in their descriptions. This makes sense, as semantics traditionally deals with words and their meaning. However, there are additional definitions for the word, and one that, in light of so many recent events, needs to be explored.

Merriam-Webster offers this as a secondary definition under “general semantics”:

The language used (as in advertising or political propaganda) to achieve a desired effect on an audience especially through the use of words with novel or dual meanings.

If we examine the language used by both our politicians and the media, we can find evidence that the use of semantics is abundant and unchecked, that most of our populace doesn’t acknowledge (let alone recognize) that fact, and that because of this, we often support ideologies, movements, and campaigns that, under normal circumstances devoid of rhetoric, would possibly be deemed unsupportable. Here are some examples:

Osama bin Laden was a “terrorist” (there’s that Ad Hominem attack again) because he allegedly masterminded the 9/11 attacks (and I say “allegedly” because the FBI never listed the events of 9/11 among the crimes that made him number one on their most wanted list; plus, there is a mountain of evidence suggesting that the official narrative is weak, at best). Based on the actions that were attributed to him, he was painted by our elected officials and news media as a monster, demonized until his death. However, when the United States engages in the very same actions it accuses bin Laden of, we have a very different name for it:

Operation Iraqi Freedom.

Notice the lack of a negative connotation to that phrase? Notice how a vicious military attack against a country that had nothing to do with 9/11 (even though that was the guise under which the attack was sold to the American people) can be put forth as a positive, beneficial action based solely on its carefully designed name? This is where semantics becomes extremely important, because if we were to actually use language that accurately depicted the horror of such an Operation, most of the citizenry would be adamantly opposed to the events that transpired. However, the elected officials make it sound much more palatable by calling it an “Operation” (which sounds official and purposeful) and tying it to “Freedom” (which we are all conditioned to believe is the highest and most noble of principles), instead of deeming it what it was in reality: an unprovoked and devastating attack that had almost nothing to do with the freedom of the Iraqi people, as that was number eight on the list of eight mission objectives put forth by Donald Rumsfeld at the beginning of the Operation.

Language is also used to draw the proverbial line in the sand, especially when it comes to situations involving violence. The people who died during the events of 9/11, the Aurora shooting, and Sandy Hook were “innocent victims” who lost their lives to “monsters”. The non-combatants who died during Operation Iraqi Freedom (mostly women and children) were referred to as “collateral damage”, and no reference was made to the goodness or badness of the perpetrator, for obvious reasons. In both instances, the language used is extremely important and very carefully chosen, but we do not notice the subtleties inherent in the word choices, as this is not something that most people take the time to consider, let alone acknowledge. We would be wise to do so, because the use of semantics to manipulate people’s viewpoints on a given situation is becoming more ubiquitous every day, with potentially devastating results.

With the recent revelations (which confirmed what many people have known for years) pertaining to the amount of spying that occurs on the American people, the people who hold positions of power trotted out their favorite two-word phrase that, while being completely amorphous, works every time:

National security.

Those two words have been used as a response to any situation that suits the government, allowing it to do anything it sees fit, regardless of the will of the citizens it is supposed to represent. The reason records pertaining to the JFK assassination are still partially or fully closed to the public? National security. The reason all of the information made available by Wikileaks should never have seen the light of day? National security. The reason the NSA, FBI, and CIA feel entitled to obliterate our Fourth Amendment by practicing illegal searches and seizures? National security. Time and again, when pushed for answers to legitimate questions from the American people, the response is all too frequently “National security”.

Unfortunately, we have been conditioned to view that answer as an acceptable one, when in reality, any time we hear or read those two words together from an elected official, it should give us pause, because their use of that phrase is word play at its finest.

While it is indeed the job of our elected officials to maintain the security of the people and to have a vested interest in the security of the United States, it is also true that those two words have morphed into a catchall phrase that is used whenever they do not want to disclose information about their actions to the American people. The phrase (in and of itself) appears to be relatively straightforward, but what does it really mean? What makes it an answer that is so appeasing and reassuring to the populace? What is it about this phrase that immediately stops any line of questioning? What is it about this phrase that allows our elected officials to push forward in their semantical word games, all the while forwarding secret agendas and violating the rights set forth in the Constitution?

I believe the answer is simple: conditioning. We are conditioned to respond in the proper fashion, because we have given over our minds and thought processes to the semantics and rhetoric of government, which has more often than not been about rule and control, not service and freedom. We are conditioned not to consider the ramifications, the consequences of our acquiescence to the official narrative as it has been woven throughout history. We are conditioned to root for one side of the political spectrum and vehemently oppose the other, regardless of the rightness or wrongness of any subject matter that may be under debate between the two. We are conditioned to read the headline and react, follow the ticker at the bottom of the screen, and form an opinion on a matter that was not delved into deeply or considered from all possible angles before coming to a conclusion. In short, we have abdicated our thinking and reasoning abilities in favor of believing the semantical Newspeak that permeates the language of the media and the politicians alike, instead of performing our due diligence to ensure that a proper dialogue is engaged, so that we do not readily accept near-empty phrases such as “National security” as dogma. We have failed to balance our end of the equation, creating a chasm in the flow of power and information that is clearly one-sided, thus removing our individual and collective strength.

As it is with all things, though, there is still the opportunity for change. The recent revelations of the realities of the situation in Benghazi, the IRS and its targeting of certain groups for political purposes, and the NSA regularly violating the Fourth Amendment rights of an obscene number of citizens have given people the feeling that they have a voice, and that their voice not only matters, but can be the catalyst for change. Semantics matter, and we need to point out the ways they are being abused to manipulate, and correct the situation by calling for (and engaging in) dialogue that is authentic and sincere, searching our collective intelligence for better answers than the ones that have been delivered as gospel truth, yet benefit only the few.

Let’s revive the tired, worn-out political words in our language from their current semantic doldrums, and let real discussion take place in their stead.

“Work, work, work is preached from the pulpit, the newspaper, and magazines: laboring people are anxious to divide the honor, but they won’t. You never hear from the pulpit, the magazine, or newspaper headline rest, rest, rest.”

-Mother Jones

While surveys and polls are not the best evidence (there are far too many variables involved), they can shed light on what the “pulse of the nation” is in regard to cultural trends, and what is trending in the U.S. now is a general dissatisfaction with our jobs.

Given the current level of unemployment, one could conclude that having a job in this market would be enough to create a certain level of satisfaction. However, multiple polls and surveys show that over half of the people who are employed are not “satisfied” with their present state of employment. This is unsurprising and shocking at the same time.

It is not surprising, because many of the jobs and occupations that are available are “cog” jobs, in which the worker is just a small component who performs a specific task that keeps the machine running, and which offer little in the way of creativity or advancement. It is not surprising, because there is a large number of people who do jobs that have nothing to do with their actual interests, talents, or passions. It is not surprising, because in most work environments, the “many” do the majority of the actual work, and their efforts benefit the “few” who hold the titles and positions of power. It is not surprising, because jobs are viewed by many people as something that has to be done, not something they want to do.

That the majority of people are not satisfied with their jobs is shocking, not because there is evidence to the contrary, but because with the level of dissatisfaction that is reported today, one would think that we would have done something to change the situation into a more positive experience in which a larger percentage of people would describe themselves as “satisfied” with what they have to do on a daily basis. Unfortunately, that is not the case, and it is baffling that it never enters into the national discussion.

Why do we work? The obvious answer to that question is that we have to work, because we live in a capitalist society that utilizes money as its means of exchange for goods and services; therefore, we must go about the daily business of working to earn what is necessary to pay for our expenses. That is the extrinsic value of working. We are paid to produce goods or provide services, which in turn provides us with the means of paying someone else for the goods and services we deem necessary. It is the system on which our core beliefs are founded, and it is never brought into question, nor is it ever discussed. It is simply the reality of our existence. All of us, if we want to function as members of society, will spend the vast majority of our waking lives dedicating our time to an occupation that may or may not be wholly satisfying, and yet, as it is with so many other institutionalized ideologies, no one ever questions the system. We push forward, day after day, because we have been led to believe that it is the only way to exist and that working is inevitable.

Granted, some sort of effort is necessary in order to sustain our lives. However, I think the concept of “work” has been confused with the reality of our “occupations”, and the result of such a confusion has led us to believe we are nothing more than cogs in a larger machine. Day in and day out, we fulfill our role as dutiful servants, spending the majority of our waking hours working for the benefit of someone other than ourselves (not that there is anything wrong with working for others, but that is a completely different situation). To put it in perspective: we spend our childhood in school to prepare for college, so that we can obtain a degree in a specific field, in which we seek employment, so that we can work for the majority of our lives in the hope that we will eventually be able to retire (at which point the best years of our lives may have already passed us by). This is the blueprint for what most would consider a successful, fulfilled life.

We long ago accepted the norm that there will indeed be something that we do “for a living” (such an accurate phrase), and created and perpetuated a monetary system in which we must produce something in order to receive compensation, so that we may then purchase and obtain goods and services, and we have never looked back.

Why?

The system, as it exists today, has the vast majority of people doing the bulk of the work to benefit an incredibly small percentage of the population. Granted, enough of us live in a way that can be considered a “good life”, if we use the modern definition of that phrase, as much of the population has not only its needs met, but its wants as well, which is why the system continues to perpetuate itself; the majority of the population is not in a position it would deem unhappy, so why consider any alternative to the way we currently live? As long as the majority of the population continues to live in a way that resembles the status quo, we will not even consider (let alone attempt) a different way of living, of existing, of functioning. That our work is our “living” is so ingrained in our minds that to think of life being any other way is inconceivable.

So, the question becomes this: can we discover or create a better way to live out our time here on this planet in a way that is beneficial to all people and doesn’t involve having to secure and maintain an occupation? Is it even possible to consider such a notion?

No, it isn’t. To do so would require a full-blown paradigm shift, and too many people are completely satisfied with the way things are now and are not even open to engaging in dialogue about such a drastic transformation. We would have to re-examine our political system, our marriage to capitalism, our values, our beliefs, and the way we view existence in general. In other words, even the discussion of such an idea is not going to happen.

So, how do we make the best of our reality? How do we arrive at a place in which the “rest, rest, rest” that Mother Jones spoke of is not only prescribed, but practiced by us, one and all?

Buckminster Fuller had a suggestion of where to start:

“We should do away with the absolutely specious notion that everybody has to earn a living. It is a fact today that one in ten thousand of us can make a technological breakthrough capable of supporting all the rest. The youth of today are absolutely right in recognizing this nonsense of earning a living. We keep inventing jobs because of this false idea that everybody has to be employed at some kind of drudgery because, according to Malthusian Darwinian theory, he must justify his right to exist. So we have inspectors of inspectors and people making instruments for inspectors to inspect inspectors.

“The true business of people should be to go back to school and think about whatever it was they were thinking about before somebody came along and told them they had to earn a living.”

Considering that Mr. Fuller made the above statement in full confidence in 1970, decades before we experienced the technology explosion that propelled and advanced us to where we are today, it is safe to say that such a concept rings even more true now than it did at the time he made the statement. With the power, speed, and accessibility of modern technology, the antiquated idea of a forty-hour work week to “earn a living” should have already met its demise, as technology has increased the speed of production and communication, and allows us to share ideas and information in real time. We have modified the way we do so many things in our daily lives to match the tools given to us by our technological advancements; why have we not recalibrated our concept of work and the role it plays in our lives?

Imagine a world in which we decided that working for the majority of the time we exist on this planet was not the ultimate goal, nor was it a necessity. In such a world, we would need to share, help one another, take care of one another, and live our fleeting and finite lives in a very different fashion, in which the focus was not working to get ahead, but rather, working together to make the world a better place for all people.

What a world that would be.


“All media exist to invest our lives with artificial perceptions and arbitrary values.”

-Marshall McLuhan

One of the more obvious signs that we, as a culture, have stopped thinking for ourselves and allowed an unseen few to think for us is found in the ridiculous world of the fashion industry. The fact that we allow clothing brands to even exist (let alone thrive the way that they do) is proof positive that we have taken the soma, swallowed the blue pill, and found comfort in the shadows we see on the wall of the cave.

The fashion industry is just one of the many manufactured constructs that exist in the world today, but it is one that dominates the landscape. We have bought into the notion that how we dress says something about who we are, that the brand we wear speaks to the lifestyle we lead, and that how “in style” we are represents something substantive, but none of that is true. It has been sold to us as truth, as gospel, and it is time we refute and refuse the ideology inherent in this way of thinking.

First, we have to concede that, at the most fundamental level, clothing does serve a purpose: protection from the elements. When the weather is cold, the proper outfit keeps us warm. When it is hot, the proper outfit assists us in cooling down. When it is wet, the proper outfit keeps us dry. As long as we consider clothing in light of its intrinsic value, we can view it objectively, at which point the ideology of fashion no longer exists or applies.

However, we do not view clothing from an objective standpoint. Were we to do so, we would come to the understanding that there are trends that happen, year in and year out, without anyone questioning whether they are positive, or discussing whether they impact the world in any significant way. For instance, if we look at the concept of what is “in style” from season to season, we can note that the majority of popular designers and labels either have an uncanny ability to guess what the others are going to create, or are colluding with one another. In other words, trends are created, which is why it matters not what major chain you go to or what label you prefer; the “style” is the same. There may be slight variances from one store or label to the next, but the differences are minimal, at best. So when the labels decide to bring back argyle sweaters or decide that women should indeed wear knee-high boots with lots of buckles, stores will be stocked full of those items. When the industry doesn’t want to exercise any sort of creativity, it brings back a style it managed to peddle to the public years ago and calls it “retro”, thereby tapping into the collective nostalgia of a certain time period. There isn’t much choice or personal style involved, as what we choose from is limited to what is produced and stocked. Granted, there are only so many possible variables in the actual product, but that doesn’t change the fact that our selection choices are minimal. Either the labels and their designers possess extrasensory perception, or the trends and styles are pre-planned for each season.

Because the construct of fashion is manufactured, it has to be placed in the collective consciousness in order for it to even exist, and this is where marketing and advertising play their part. This is where the differences between brands become less about quality and construction and more about how well the agency can push the brand’s “lifestyle” (not the product itself) to the largest number of buyers.

There is much research on why advertising works, but the fact remains that even the most savvy citizen isn’t immune to its effects, and the industry knows that. There once was a time when ads focused on extolling the virtues of the product itself. In today’s information-overloaded world, ads don’t attempt to sell a product; they sell a lifestyle. While there are certainly people who prefer the fit or feel of one brand over another, it is more likely they are drawn to the image that is attached to their brand of choice. Given the choice between brand “x” and brand “y” (and assuming that both brands are equal in terms of quality), people will choose the brand that appeals to their own sense of lifestyle, or the lifestyle they wish they had. This is not commentary on the gullibility of people; it is just how marketing and advertising are designed to function, and we are all susceptible to those schemes.

In selling a brand by way of connecting it to a lifestyle, the industry has clearly been very successful in creating the “artificial perceptions” and “arbitrary values” that McLuhan pointed out so many years ago. Here’s an example: shoe manufacturers. A shoe is a shoe, and it serves a specific, basic purpose (protecting the foot). The moment any words are attributed to the product that go beyond its basic function (and this is usually done by marketers via advertising), an artificial perception is created, and the product itself can now signify any number of things, which is the marketer’s ultimate goal. Once the perception is in place, we are no longer dealing with something concrete (whether or not the shoe serves its purpose correctly and sufficiently), but are left to navigate a landscape of abstract concepts that have zero context. By aligning the image of a shoe (or any other product) with a specific song, celebrity, location, social scene, or any other image they wish us to associate with their product, marketers ensure that we will, from a mental standpoint, forever link the two together. Images, once they have entered our memory, are difficult to forget, and not only do advertisers know this, they exploit it as often as possible. Thus, we now purchase products because of their association with something that has no direct relationship to the functions of the products themselves, and the appeal is created in an abstract fashion that has no connection to the fundamental nature of the products. Any attentive observation of commercials, print ads, billboards, internet banners, or radio spots bears this out.

All of this matters for a number of reasons. First, it matters because any time we do anything that hasn’t been fully thought out, with all of the elements and ramifications considered and put in perspective, we allow someone else to do our thinking for us. Thus, when we fall into the trap of trying to keep up with trends that we neither asked for nor designed, and that are completely arbitrary, we abdicate our decision-making to a nameless and faceless industry.

Second, even though there is a segment of our population that shuns the dictates of the fashion industry, almost all of us are guilty of purchasing clothing that was manufactured in a foreign country that doesn’t have the same labor laws as the United States, in working conditions that are (at best) questionable and (at worst) sweatshops, by people (oftentimes children) working long hours for extremely low pay so that the companies whose products they manufacture can obtain the highest profit margins possible (which is what a capitalist system requires, but that’s another subject for another time). Attempting to find clothing that was “Made in the U.S.A.” in retail stores is next to impossible. As such, the odds that the clothing you are wearing right this minute was manufactured in a foreign country with lax labor laws and dangerous working conditions are quite high. Since we do not hold them responsible, the labels and clothing companies (the ones responsible for dictating what is “in style” from season to season) will continue to utilize factories and laborers in a way that allows them to make the highest profit possible.

Third, the existence of the fashion industry is yet another example of an artificial construct thriving and flourishing, despite the fact that we know it’s all a ruse. We know the way the industry works against us, we know that we’re paying more for a label that supposedly “represents” who we are and where our rung on the social stratification ladder resides, and we know that this is all done to allow the institution to remain in place and prosperous ad infinitum. We know.

We know.

However, this knowledge gets us nowhere, and nothing short of a paradigm shift can improve the situation.

We all know how often those happen…

Despite all of the ways that technology has enhanced our modern lives, there is an unforeseen possibility that creeps closer to reality with each passing day, and it needs to be addressed:

Technology is destroying the arts.

Now, there is a multitude of ways in which technology has been beneficial to artists. For instance, the very existence of the internet gives artists of all disciplines the potential for worldwide exposure, allowing them to increase their audience in ways that used to be unthinkable. Musicians can purchase an Apple computer and have access to a high-end recording studio (which is what GarageBand is, if one knows what to do with it) without having to pay exorbitant studio costs or purchase additional recording gear, as the necessary tools come standard with the program. Filmmakers can access an instant, worldwide audience by uploading their movies to YouTube. Photographers have a variety of electronic tools to assist with their craft, as well as the ability to host their galleries online. Even novelists have the option to self-publish, which has been made much easier with the advent of the internet age.

When looked at objectively, one can see many examples of how modern technology has been beneficial to artists specifically and the arts in general; however, as with everything, there is another side to the equation. It is a side that is not generally considered or addressed as part of the national discussion. What if all of our technological advancements are actually detrimental to the arts in general? What if, in spite of the apparent value, technology is eroding the expertise and discipline required to pursue a career in an artistic discipline? What if, when the day comes that everything is completely digitized, we find ourselves in a world in which what was traditionally considered art no longer exists?

Impossible? Not at all. That day may be closer than we think.

We are not far removed from a time in which we will see the end of physical media (CDs, DVDs, video games, etc.) in favor of digital downloads and distribution. This process has already begun with music purchases via iTunes and other online stores, and can also be seen in streaming services such as Netflix. It simply doesn’t make sense for companies to pay for packaging, shipping, and a cut of the profit to the host store when they can make the product available for download or streaming from their own site. Thus, digital distribution instead of physical distribution is the inevitable future. For consumers, this is a positive development, as the means by which we obtain our product is easier than it has ever been before. Select the product, click to purchase, download, and enjoy your music, movie, book, or game. While there are many positives for both the producer of content and the consumer, this new system of distribution has had what I would presume is an unintended outcome:

It has made content cheaper, which makes it disposable, and ultimately lowers the quality level of the product.

Think about it: in the music industry, when Compact Discs were introduced, they came with a price tag that averaged around eighteen dollars an album. The only means one had of knowing what one was getting was radio airplay, and artists and their labels would always ensure that their strongest material made it to that medium so that people would feel secure in making their purchase. Unless the music came from a singer or group I trusted to release quality content, I had a “three song rule”: I had to like three of the singles released to radio from a specific album before I would purchase it, and I wasn’t alone in this practice. Therefore, record labels had to sign their acts carefully, make sure the content was top quality, and develop them over a period of years to get them to the status level at which people would purchase their product on the day of its release, regardless of whether they had heard more than one of the songs from the album. This simply isn’t the case any longer. Rarely do the artists who come out today have any sort of longevity in the industry, and it’s because the industry isn’t about the craft; it’s about what’s selling right now, and what’s selling right now is songs that are produced by non-songwriters, music performed by non-musicians, and a product that is best described as entertainment, because it certainly doesn’t resemble what we have long considered to be the art and craft of music. For this, we can blame the advancement of technology.

To play an instrument or sing at the highest level requires years of discipline and practice, as well as a commitment to obtaining mastery in a difficult medium. One does not simply wake up one morning knowing how to play the guitar well enough to get up in front of an audience and perform. But because of the advancements in technology, one does not need to know how to play an instrument in order to produce sound. One just needs to purchase a sound/loop library and then proceed to drag and drop loops into a recording program on a computer. The only “musical” requirement for doing this is an understanding of how songs are constructed, which, because of the proliferation of popular music, we all comprehend to a certain degree. Today, one doesn’t need to be able to sing, either. Just throw on Auto-Tune, and digitize your lack of ability away. Practice is not required, discipline is not required, and musicianship is not required, and this has led to our present situation, whereby musicians have been replaced by entertainers and style trumps substance.
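
To make the point concrete, here is how little is actually required. The following is a minimal sketch in plain Python (standard library only; the note choices, tempo, and file name are arbitrary illustrations of the general idea, not any particular product’s workflow) that renders a repeating synthesized bass figure to a WAV file with no instrument, no performance, and no recording involved:

    import math
    import struct
    import wave

    SAMPLE_RATE = 44100  # CD-quality sample rate

    def synth_note(freq_hz, seconds, volume=0.5):
        # Generate one sine-wave note with a linear fade-out envelope
        # (the fade avoids audible clicks between notes).
        n_samples = int(SAMPLE_RATE * seconds)
        return [
            volume * (1.0 - i / n_samples)
            * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n_samples)
        ]

    # A four-note "loop": A2, A2, C3, E3 (frequencies in Hz), half a second each.
    loop = []
    for freq in (110.0, 110.0, 130.81, 164.81):
        loop.extend(synth_note(freq, 0.5))

    # Repeat the loop four times and write a 16-bit mono WAV file.
    with wave.open("loop.wav", "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)  # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(b"".join(
            struct.pack("<h", int(s * 32767)) for s in loop * 4
        ))

No scales practiced, no timing developed, no ear trained: the machine does everything except pick the notes. That is the gap between access and ability in miniature.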

The movie industry is in a similar position. Distribution of a completed film used to require multiple prints, as they had to be sent to each movie house that chose to run it. Because of the exorbitant costs of producing a movie, studios had to choose carefully which movies they would produce, because if a movie bombed, it could be devastating to the future of the producer, the director, and the studio. Like the music industry, studios were more apt to work with and develop directors who were superior at their craft, as that would ensure a loyal following for all parties involved. They also had to create a product with compelling stories and interesting characters, as special effects were much more complex to execute and were used sparingly.

But with the advancement of technology, movies have slowly but surely morphed from well-designed stories and character studies into exercises in visual gymnastics, made possible by the ability to easily create special effects with a computer. Consequently, much of what makes up the top-grossing lists is short on plot, dialogue, and character development, and long on chase scenes, explosions, drawn-out battle sequences, and lowest-common-denominator plots, all thanks to the advancements in technology.

Technology has made it possible for anyone to make art with programs that replace the acts of creation and performance, regardless of training or ability. While some would argue this is a positive, the end result counteracts such an argument and stands as proof positive that access does not equate to ability. Just because our devices allow us to take pictures, capture film, generate musical tones, and make digital art does not make us photographers, directors, musicians, or artists. Each of those disciplines requires years of practice to achieve mastery. Just because we have programs that allow us to manipulate ones and zeroes into what appear to be artistic renderings (but are nothing more than digital compositions) does not mean that we get to wear the mantle of artist.

To put it in perspective, we would never grant the title of “Doctor” to someone who happens to have WebMD on their device and uses it to diagnose a rash, nor would we grant the title of “Lawyer” to someone because they have the digital version of Black’s Law Dictionary and are able to look up the definition of “in pari delicto”. No, we are all aware that becoming a doctor or lawyer requires years of dedication, schooling, training, and practice. Those who fulfill those requirements deserve the title.

Likewise, those who put in the years of dedication, schooling, training, and practice in the arts deserve the same recognition and respect. If we ever arrive at that point, we may find ourselves in a position whereby our technology is viewed as a means to, and not the end of, our artistic endeavors.

And maybe then, technology will not be destroying the arts, but rather, enhancing them in the hands of people who have the skill and expertise to do so.

Slow Down

Stop.

Look.

Listen.

Remember this early life lesson? It was intended to teach us how to contend with moving vehicles as young pedestrians, but should now be applied to our lives as adults as soon as possible, lest the descent into the infotainment freak show increase in both pace and scope.

I distinctly remember learning this lesson in preschool. As a group, we were led by our teachers to a not-so-busy intersection at the end of the school’s driveway. Holding hands, we were instructed in the approach we should take when it came time for us to make our way from one side of the street to the other. The first directive in this set of instructions was to “stop”, which allowed us to assess the situation, get our bearings, and place ourselves in a position to be fully aware of our surroundings. We were then instructed to “look”, looking in both directions to determine whether it was safe to proceed. Finally, we were told to “listen”, as it was entirely possible that stopping to look both ways wasn’t enough; there might be something our eyes couldn’t see, and we would need to engage other senses to ensure our safety.

In hindsight, it was a significant lesson that has applications for our lives today, albeit in different areas. Truth be told, it is a methodology that we should practice on a daily basis when interacting with our media-saturated culture.

In the world we live in today, reflection isn’t a priority for most people, though not for lack of desire. We are scheduled to the teeth, constantly busy, always on the run to the next event or obligation. Therefore, we need to follow the first admonition that we learned in our traffic lesson as children:

We need to stop.

Just stop. Take a deep breath, in through the nose, out through the mouth. Let all of the surrounding noise dissipate. Close your eyes, and listen to your heartbeat. Breathe. Slow everything down, and take the time to listen to yourself. Let the exterior distractions fade into oblivion. Take a moment to listen to yourself instead of the constant barrage of decontextualized noise.

It is imperative that we do this, more so now than at any other time in history, as it seems that our present culture is recklessly careening towards the edge of the proverbial cliff. Consider the housing market crash, the economic free fall, soaring unemployment, the trashing of the Bill of Rights under the guise of “national security”, and endless foreign wars in countless locations with no apparent purpose or goal, all while the divide between the elite and the rest of the nation continues to grow. We have reached the saturation point, and must do whatever we can to take the time to stop.

If we can manage to do so, then it will be possible to follow the second directive from our childhood lesson:

We need to look.

It sounds simple enough, but it actually requires more than most would assume. A large majority of our populace reads newspapers, periodicals, and websites or blogs, or watches the news on a daily basis. However, that is not enough. Mass-market media in the United States today is in the hands of such a small group of corporations that it is often difficult to find differences in the content they produce. Therefore, it behooves us to look deeper than the surface in an attempt to understand not just the “what”, but the “who” behind it as well.

We need to look long and hard at what we, as a culture, value. We need to look at what we, as a culture, hold in the highest position of importance. We need to look at what we, as a culture, create as the overarching messages that make up our way of thinking. We need to look at how we, as a culture, will be viewed in the annals of history.

Too often, when we do have the rare occasion to stop, what we look at is not important. It may be the television, a movie, a YouTube clip, email, a sporting event, or some other distraction that allows us to retreat from the constant din of noise that follows us through our hectic lifestyles (which is completely understandable). However, we would be better served by taking into consideration some of the weightier concepts of existence, as that could ultimately lead us to a new and better place. If we can manage to stop, and look at the bigger picture, then we will be able to do the last (and most important) component of our lesson:

We need to listen.

And not only listen, but listen pragmatically. Emotion must be trumped by rationality if we are to truly take into consideration the realities of this age.

The nature of the language used by people in positions of power today is based solely on emotion. It is no longer about ideology, it is no longer about dialogue, it is no longer about discourse. The unidirectional flow of information has been reduced to sloganeering and sound bites, all of which are delivered in the context of eliciting an emotional response from the listener.

Here’s the reality: if we manage to stop, look at the current landscape, and actively listen to what messages are dominant, it will not initially be a pleasant experience. First, we would have to come to terms with the actuality that most of what passes as important in this day and age is useless distraction (and that would be the best-case scenario). We would then have to conclude that the useless distractions that litter our mental landscape come from a frighteningly small group of corporations and politicians, and that most of those distractions are of the absolute lowest common denominator. We would have to further conclude that much of what is foisted upon us is, at best, altered to fit a certain ideology and, at worst, blatantly untrue, and then deal with the ramifications of such an understanding. A thoughtful and well-reasoned study of the information machine will produce such an understanding. However, if we are to enact positive change, we must first come to such an awareness. The initial feeling may not be pleasant, but it must be faced in order to move forward. Real change (as opposed to the never-going-to-happen “change” that politicians like to trot out every election season) is indeed possible. However, we must come to terms with what needs to be changed before we can do anything about it.

Stop.

Look.

Listen.

It needs to happen. It has to happen.

And then, we can make a difference.