Being the change

A central question in a class I just finished (taught by Becca Deysach at Prescott College, relevantly titled “Be the Change”) was whether and how personal practice (e.g. meditation, mindfulness, self-examination or compassion) relates to social transformation. Here are my reflections.

The central relationship between personal practice and social change (the evolution of society) is that social change starts with people, with each individual; because personal practice powerfully influences individuals, it influences social change through the contributions of those individuals. Personal practice provides a stable foundation to build upon, a meaningful source of influence that can spread far wider than the individual.

Social change is nothing more than individual change observed from afar; the growth of a forest is nothing but the growth of individual plants and trees and blades of grass. One spreads compassion by being compassionate; one spreads love by being love; one empowers by being powerful (in communication and relationship).

I can’t quite say what contribution I’ve made to this world so far, but I try to be a positive force, first and foremost, by who I am. Our power to effect change is manifested daily in interactions with baristas, drivers, sales clerks, beggars, family, friends and strangers. Every interaction is an opportunity to spread love, acceptance and possibility (or hate, judgement and negativity, which we often do unconsciously). Every interaction is an opportunity to serve. Our influence on the world comes not first from what we do — our material contributions and accredited accomplishments — but from who we are, not only to those we like and those we know but to every single person we encounter (and to ourselves!). It’s from this place of being that doing arises.

Personal practice helps us to guide and shape who we are, our ways of being. To be one who cares, who loves, who empowers, who inspires, is to be a powerful force of potential in the world. It is to be the possibility of social change, one interaction at a time. Social change happens on many scales, but it starts small, with every single one of us.

An in-depth look at Chris Anderson’s Free and related theories

(Originally written in late 2010; some sections have been updated or removed.)

Table of Contents:
1 Summary of key concepts in Free: The future of a radical price
     1.1 Atoms vs. bits
     1.2 Value is no longer determined by price
     1.3 Making money from Free
     1.4 Piracy
     1.5 Economics of abundance
     1.6 “You can’t stop Free.”
2 Free in context: Relation to other ideas and theories
     2.1 Chris Anderson’s qualifications
     2.2 Building on past work
     2.3 New Rules for a New Economy by Kevin Kelly
     2.4 Related theories
     2.5 Karl Marx’s alienated labor
     2.6 Responses to Free
     2.7 Free in practice
     2.8 Chris Anderson, technological determinist?
3 Bibliography
4 Footnotes

Summary of key concepts in Free: The future of a radical price

The reigning logic of the 20th century was that “there’s no such thing as a free lunch.” Essentially, even if you’re not paying directly, there’s always an associated cost. In Free: The future of a radical price (2009), Chris Anderson challenges this assumption, claiming that this economic maxim is no longer true: with zero as the prevailing base price and economic models of Free revolutionizing various markets, there is such a thing as something for nothing[^1].

In the 21st century, there are things that are genuinely free, as opposed to being merely samples, promotions or products that require continued investment (like razors or game systems). Free went from a marketing method to a new economic model, asserts Anderson, and Free has now become the default.

Radical Considerations on Creativity and Originality

In 1974, Alejandro Jodorowsky began work on his film “adaptation” of Frank Herbert’s Dune. It was to be the story of the illumination of a hero, a people, and a planet, the planet being the Messiah of the Universe, spreading its light. It was to feature Salvador Dali as an insane emperor, and Pink Floyd was to write and record the music. Each, along with many other artists, was specifically selected by Jodorowsky to create his ambitious vision. The spice was a drug containing “the highest level of consciousness,” and the film was to end with the illumination of the Universe, as the planet Dune spreads consciousness across the galaxy.

It was to be, in every way, Jodorowsky’s Dune (unfortunately it was never made, although a documentary about the adaptation is being released). The project was approached in this way from the very beginning, due to the way Jodorowsky viewed the work of artists (originally written in French):

I did not want to respect the novel, I wanted to recreate it. For me, Dune did not belong to Herbert, just as Don Quixote did not belong to Cervantes, nor Oedipus to Aeschylus.

There is an artist, only one among a million other artists, who only once in his life, by a kind of divine grace, receives an immortal subject, a MYTH… I say “receives” and not “creates” because works of art are received in a state of mediumship, directly from the collective unconscious. The work exceeds the artist and, to some extent, kills him, because humanity, on receiving the impact of the Myth, has a deep need to erase the individual who received and transmitted it: his individual personality obstructs, stains the purity of a message that, at its core, demands to be anonymous… We do not know who created the cathedral of Notre-Dame, nor the Aztec solar calendar, nor the tarot of Marseilles, nor the myth of Don Juan, etc.

One feels that Cervantes gave HIS version of Quixote – incomplete, of course – and that we carry the complete character in our hearts… Christ belongs neither to Mark, nor to Luke, nor to Matthew, nor to John… There are many other Gospels, known as apocryphal, and there are as many lives of Christ as there are believers. Each of us has his own version of Dune, his Jessica, his Paul… I felt enthusiastic admiration for Herbert and, at the same time, conflict (I think the same thing happened to him)… He obstructed me… I did not want him as a technical adviser… I did everything to keep him away from the project… I had received a version of Dune and I wanted to transmit it: the Myth was to give up its literary form and become Image…

Regardless of its truth, the idea that certain incredible stories do not “belong” to the person who first presented them is interesting. It means that once an idea has been released, it’s the property of the world, of humanity, not of a single person. There can be many versions, and none are inherently canon. To believe this has some serious implications, like invalidating the concept of copyright (in certain situations, when taken to the extreme), and I don’t think that it’s fair or beneficial to remove the understanding of ownership from the original creator. It is, however, an interesting philosophy to consider as it relates to originality and creativity. Interpretation, extrapolation, and re-creation strengthen any creative ecosystem.

A philosophy like Jodorowsky’s makes it easier for someone who’s created a world or a set of characters to lease them to someone else for adaptation or expansion. Orson Scott Card, for example, doesn’t seem worried about absolute faithfulness to the book in the film adaptation of Ender’s Game, judging from what he wrote after visiting the set:

…it was amusing when others asked me how it felt to have my book brought to life. My book was already alive in the mind of every reader. This is writer-director Gavin Hood’s movie, so they were his words, and it was his scene.

He’s not relinquishing his right to the story or its characters, but he’s accepting that his version is not the only version.

This whole idea reminds me of something Elizabeth Gilbert talks about in her TED talk. In ancient Greek and Roman societies, creativity was not something believed to come from a person. It was more like a divine companion, outside of the individual, “that came to human beings from some distant and unknowable source, for distant and unknowable reasons.” The Greeks called these “divine attendant spirits” daemons. The Romans called them geniuses. A genius was not something a person could be; it was more like a “magical divine entity, who was believed to literally live in the walls of an artist’s studio, kind of like Dobby the house elf, and who would come out and sort of invisibly assist the artist with their work and would shape the outcome of that work.” This absolved the people we would consider naturally talented and creative of a lot of responsibility and pressure:

So brilliant — there it is, right there that distance that I’m talking about — that psychological construct to protect you from the results of your work. And everyone knew that this is how it functioned, right? So the ancient artist was protected from certain things, like, for example, too much narcissism, right? If your work was brilliant you couldn’t take all the credit for it, everybody knew that you had this disembodied genius who had helped you. If your work bombed, not entirely your fault, you know? Everyone knew your genius was kind of lame. And this is how people thought about creativity in the West for a really long time.

And then the Renaissance came and everything changed, and we had this big idea, and the big idea was let’s put the individual human being at the center of the universe above all gods and mysteries, and there’s no more room for mystical creatures who take dictation from the divine. And it’s the beginning of rational humanism, and people started to believe that creativity came completely from the self of the individual. And for the first time in history, you start to hear people referring to this or that artist as being a genius rather than having a genius.

And I got to tell you, I think that was a huge error. You know, I think that allowing somebody, one mere person to believe that he or she is like, the vessel you know, like the font and the essence and the source of all divine, creative, unknowable, eternal mystery is just a smidge too much responsibility to put on one fragile, human psyche. It’s like asking somebody to swallow the sun. It just completely warps and distorts egos, and it creates all these unmanageable expectations about performance. And I think the pressure of that has been killing off our artists for the last 500 years.

It’s undeniable that sometimes creativity just seems to flow. When there’s no friction, no hesitance, no struggle; the words just keep coming, fitting together as if choreographed by some invisible guide, each a step towards a perfect dance of language and meaning. Inspiration arrives unexpectedly, dormant imagination and brilliance spring to life, and creativity seems not to be something that must be called, but something that must simply be let free.

What if we don’t entirely own our own creativity? What if we’re accessing something greater, or something is being transmitted through us? I’m not putting this forth as the truth — I’m putting it forth as something worth considering, independent of its relation to “reality” as we know it. I don’t think creativity is some daemon crouching in the corner, but I do think considering creativity and inspiration as more than meets the eye has value1. The work that comes from an artist can be greater than the artist himself. And I don’t think the originator of an idea or a story or a realm should have his claim to them revoked, no matter their greatness, but I do think all are more powerful when others are allowed to change them, to remake them, and to expand them. An idea, after all, is only as powerful as its execution, and why should stories be constrained by what one single imagination is capable of?


  1. But don’t take this as an excuse to avoid working, when the inspiration isn’t there, on those days when creating something worth keeping seems impossible, when every attempt at writing seems empty and repulsive. Some elements of creativity may be out of your control, but showing up — creating anyway, despite plenty of easy reasons not to — is the part you do control, where you have power whether there’s a daemon whispering in your ear or not. 

Art and influences

Early last year, I published a post contrasting two views on art:

Seth Godin:

Art is what we call…the thing an artist does. [...] Art is not in the eye of the beholder. It’s in the soul of the artist.

Banksy:

I’ve learnt from experience that a painting isn’t finished when you put down your brush – that’s when it starts. The public reaction is what supplies meaning and value.

As I’ve said before, meaning is inherent to the act of creation, but it is manifested in the act of sharing. The artist creates meaning and value in his own way by creating, yet the audience supplies their own meaning and derives their own value. The two need not agree, but that does not make them incompatible (the two above quotes complement each other). Art is entirely subjective, and this subjectivity is integral to both its creation and acceptance (“acceptance” that could include a rejection of meaning and value).

Steven Pressfield has an interesting way of defining this symbiotic dichotomy, with a focus on commercial response:

Track #1, the Muse Track, represents our work in its most authentic, true-to-itself and true-to-our-own-heart expression.

Track #2, the Commercial Track, represents the response our work gets in the marketplace.

With art, there is:
1. the art itself,
2. the artist’s relationship to her art (Pressfield’s Track #1), and
3. everyone else’s relationship to her art (Pressfield’s Track #2).

The latter two are accompanied by judgements of meaning and value, and the third one involves a commercial value judgement, which often seeps into the second one, the artist’s relationship to her art. There’s also attention, recognition, and other non-monetary currency.

Pressfield advises that an artist not let the third layer of judgements overwhelm her own, that she remain grounded on Track #1 and find balance between the two tracks. And when we’re lucky they overlap: “When an artist’s voice is true enough to his own heart and authentic enough to his own vision, Track #1 pulls Track #2 to it. Bruce Springsteen. Bob Dylan. Hunter S. Thompson.”

Creation cannot exist in a vacuum devoid of any relationship beyond the artist and his art, but both must nonetheless be separated — protected — from misguided or greedy influences. The artist guides the art, and the world guides the artist. It’s up to each artist to decide by how much.

The future

The distractions, the deluge of notifications, the overwhelming amount of information, all these things we spend so much time discussing how to deal with: they are only the beginning.

In a hundred years, we’ll have computer chips embedded in our bodies, integrating us more than ever into the digital world — and the vast network that connects it (a successor to the internet, and the latest of many iterations). The barriers to distraction will be lower than ever; only mental defenses will remain, like focus, discipline and restraint. Instead of a click and some keystrokes, all it will take to access Twitter — or any digital repository — will be a thought. Our brains, with the help of a processor or converter, will be able to interact directly with data and information, without the assistance of any external device.

The transition will be gradual, of course. It started with phones, and other things we use already — glasses, watches, wristbands, clothing. As technology advances, the need for multiple external devices will decrease, as functionality is continuously consolidated into smaller, more powerful devices, and eventually into our minds and bodies.

Every surface will be covered in advertisements and information. Touch will control anything not integrated with the chips connected to our minds.

The only solace will be to close our eyes and disable the chip (or enter an electronic-free zone, where the chip will turn off and advertisements will be static, more easily avoided) — unless the chip is performing a function essential to our survival, like regulating or preventing a mental disease, in which case it can never be off. (A rare few will reject the integration of chips, like those today who believe they don’t need the internet.)

For matters of the mind, the line between organic and digital, natural and artificial, a line that is already blurring, will be functionally irrelevant.

Realistic Online Privacy Expectations

When sharing anything on a public or semi-public digital network, you must be willing to accept two maxims about the content you distribute over such networks:

  1. That it is timeless and permanent
  2. That it is accessible to anyone, and infinitely reproducible

To expect anything less is to deny the way the internet can — and often does — work. Too often, people seem to be surprised by the privacy issues created by their own actions; by their own thoughtless sharing. To ensure adequate privacy of personal information, one must be aware of the nature and consequences of digital sharing.

It should be noted that neither of these outcomes is guaranteed, but both are very possible, and in some cases likely.

That it is timeless and permanent

Unlike paper — books, notes, letters — digital content does not decay. It does not fade away in the face of time. For the most part, it does not need to be maintained to remain. Once something has been digitized or created in digital format, it is essentially timeless, disappearing only through intentional deletion or neglect (like old storage being discarded or formats becoming obsolete). The limits of the physical world do not exist in the digital world — digital data is constrained only by the physical tools used to store it. As data storage becomes more reliable and its cost decreases, the constraints of the physical world — the need for maintenance, the effect of time’s passing, limited space — become increasingly irrelevant.

When you are the sole proprietor of your data, it’s easy to delete, and in most cases unlikely to be recovered. But when it is made available on a network, it becomes another story entirely.

That it is accessible to anyone, and infinitely reproducible

With digital information, infinite copies can be made, with no loss of quality. Once something is on a network, anyone on that network can make copies, and it can be spread and duplicated endlessly, each copy being equal to the “original”. Therefore, deleting your information is not always enough. Once digital content has been released, if it has any relevance or value, it’s likely to continue to exist in one form or another, even after the “original” is gone. Backups and copies are made not only by other users but automatically by certain services. Archive.org, for example, is a “digital library”, providing access to many old websites. On sites like Facebook, “deleting” content removes it from the front-end of the service, but it can take more time before it’s deleted from their storage (and there’s no telling which of your “friends” has downloaded a copy).

Even if it can be accessed by only a single person, that person has the power to release it to the world. That’s why semi-public networks can still result in open access. As with any interaction, the only thing between “private” and “public” is trust: a network is only as private as its users. The larger the network, the more likely any content shared on it will reach someone unintended and untrustworthy.

To be able to share judiciously, it’s important to be aware of how digital content and networks work, and the potential consequences of digitally sharing personal information and media.

Posting a status update to Facebook or Twitter is not the same as saying it aloud to your friends; uploading a photo is not the same as passing a printed one around; putting family photos on a website intended only for friends and family is not the same as showing them the album. And crucially, neither are the consequences.

Eyes up

The man looked up at the beautiful blue sky and the exquisite, ancient architecture. He marveled at the splendor of the world and man’s tremendous ability to shape it. Eyes up in awe, feet finding their way.

Then he felt something soft beneath his foot, and whiffed a pungent stench. He looked down to see dog shit under shoe.

The new addiction

It’s undeniable that computer technology, and the ease of information and communication access it enables, can be addictive. Comparisons to classically addictive materials like cigarettes are, then, to be expected, and in some ways quite apt. As Ian Bogost lays it out in The Cigarette of This Century (via Shawn Blanc):

Today, all our wives and husbands have Blackberries or iPhones or Android devices or whatever–the progeny of those original 950 and 957 models that put data in our pockets. Now we all check [our] email (or Twitter, or Facebook, or Instagram, or…) compulsively at the dinner table, or the traffic light. Now we all stow our devices on the nightstand before bed, and check them first thing in the morning. We all do. It’s not abnormal, and it’s not just for business. It’s just what people do. Like smoking in 1965, it’s just life.

But there’s an important — crucial — difference between cigarettes and smartphones, or any mobile devices (one of many differences, like the obvious one that cigarettes can kill you and those around you). Or more accurately, a contrast between the relationship smokers have to smoking and the one most of us have to our mobile devices.

Smoking is a social activity. You can smoke and talk. Cigarettes are shared. The most common icebreaker between strangers I hear is “Can I have a cigarette?” or “Got a light?” Smoking is something you can do in conjunction with another activity.

But you don’t text and talk (you might think you can, but for the person trying to have a conversation with you, it’s frustrating). Checking email or Facebook or Twitter or Instagram is not something done while simultaneously interacting with the people around you. It’s an alternative, a withdrawal, an escape. Digital connection instead of immediate social connection. Smartphone use — not the nature of the devices, but our prevailing way of using them — is individual, antisocial (paradoxically), and disconnected (from our immediate surroundings).

Writes Bogost:

As Marshall McLuhan observed, the cigarette enhances a sense of poise and calm by giving the smoker a prop, reducing social awkwardness. It retrieves tribal practices of ritual and security and obsolesces loneliness by giving everyone something in common to do, such as asking for a light.

In the same way, a smartphone is a prop. But instead of a prop that encourages interaction — asking for a light, socializing in the smoking area — it encourages distraction, avoidance, pulling further into one’s self. Devices like smartphones pull us out of the moment constantly, in ways addictions like cigarettes only achieve occasionally (smokers excuse themselves to smoke, but is that worse than constant peeking with no excuse?). Cigarettes are harmful to our long-term health. Smartphones are harmful to everyday face-to-face communication.

Using the iPhone for creation over escapism

James Smits, in an email about my post on iPhone escapism:

Recently, I noticed myself in the same pattern that you mention, reaching for my iPhone before I even knew why I wanted it. Too frequently the culprit was Twitter. Recognizing this fact, I deleted my Twitter client. Soon I was reaching for my phone habitually only to find I had nothing to do with it. Soon after that I deleted any other app that became a time-sink.

Well, after that why would I need an iPhone? I rearranged the apps I had into categories — nature, media, productivity and creative. Quickly I started using the creative folder. It even contained the native notes app packaged with the iPhone. It also contained a recorder. I began to reach for my phone with motives besides boredom. An interesting conversation? Recorder app. A random thought? Notes.

This loose system begins to take on new life when you shuffle the categories. I added the built in clock app to “nature” not knowing what else to do with it. Move it to creative, or media and it instantly has a new context. Same with the compass app.

I love the idea of a creative folder, and of shifting an app’s context by shifting its categorization. Despite its ability to distract, I also find tremendous value in having such a powerful device as the iPhone with me at all times1. My most used creative apps — iTalk Recorder Premium, Camera+ and Simplenote — are all on my home screen.

And while I haven’t deleted all the potential distractions (although they do have their own folder), I have another barrier between me and the endless internet: I have a prepaid plan with very limited data. Wi-Fi is ubiquitous enough that if I need to connect, I usually can, and I have enough data for occasional directions or to check my email when I’m expecting an important message. But it encourages me to only connect to the internet when I’ll be somewhere for a while, or somewhere familiar, as opposed to anytime and anywhere. It was an intentional decision, and when I’m away from the usual places where Wi-Fi networks are remembered, it makes accessing the internet something I consider instead of something that’s automatic. And it’s great for another reason: it’s a whole lot cheaper.


  1. As I wrote last year, “If we are to agree, at least for creative processes, that the best tool is the one you have with you, then something like the iPhone — powerful, versatile, and always in your pocket — is the best tool for a lot of things (with apps being key).” 

Damon Lindelof defends Lost’s ending →

Lost co-creator and executive producer Damon Lindelof, in an interview with Joshua Topolsky:

I always just felt like the ending that we were shooting for was gonna be one that dealt with sorta the emotional reality of the characters, and gave some fundamental explanation for ‘why— what did these people get out of this plane crash?’ And the answer, as corny as it sounds, was the one that appealed to me the most, which is: each other. That’s what they got. They were all fucked up, sad individuals who were lost in their own lives and hated themselves, and somehow they found some fundamental community amongst each other. If they hadn’t met each other, and spent all that time on the island, then they would never have been able to forgive themselves for their past sins, and break through to some sort of level of self awakening and forgiveness. It is new agey, it is hokey, but it’s the story that I wanted to tell.

As I ranted after finishing it, I was disappointed by the concluding episode of Lost (the insane expectations set by the quality of all previous finales didn’t help). I felt that it failed to do justice to the six seasons that preceded it — Lost’s narrative and mysteries were boxes within boxes, and each season we got closer to what was inside. And then at the last minute, instead of being opened, the final box was wrapped in pretty paper with a neat bow on top. The finale provided emotional closure, but it did not provide closure on the epic scale of the tale that Lost told.

The most common argument I’ve heard for why the finale worked (or was simply satisfying) was that it focused on the characters, and that’s what the show was about all along. This seems to be how Lindelof saw it.

While the characters were certainly important — the reason the show remained gripping even through undeniable ups and downs in quality and focus — I don’t feel that providing their stories with some purgatorial purpose negates the expectation that similar justice be done to the mysteries and mythology of the show.

Nonetheless, for both those who felt slighted by the ending and those who felt satisfied, it’s interesting to hear Lindelof’s perspective, two years later, on the finale and the saga of one of TV’s most interesting, unique and polarizing shows. It’s pretty amazing — a testament to the show — that I’m still interested after all this time.