
The Programming of Robotic American Culture – PART I Time Periods

Thursday, February 26th, 2015

PART I: Time Periods

Dar Chabanne

I.  PRE-INDUSTRIAL REVOLUTION: Villages

It is clear that I was not around during the Pre-Industrial Revolution.  The closest first-hand experience I can bring to the way people interacted and communicated with each other during that time comes from my yearly experience at the Burning Man Festival during the last week of August.  There are printing presses, and some radio, but for the most part, information is disseminated by word of mouth and from camp to camp.  There are no cell towers or broadcast mediums.  If any were present, most people would ignore them.

In a sense, Burning Man is a village again–a throwback to the way things used to be.  In terms of understanding any sort of unified message or cultural programming, you are left with the mediums of bikes, art cars, and shoes moving you from place to place—watching the signs out in front of camps on the streets is really one of the only true mass communication methods available.

There was a rainstorm in 2014, which actually was a mild natural disaster, as no cars are able to drive on the streets when they are wet.  That eventually meant no ice and no waste removal from the porta-potties, and if it had lasted for an extended period of time, there would have been no more food or water either.  Necessary information was distributed from person to person.  I took it upon myself to make up things that weren’t true about how long it would be before the city would start to function again.  There was no way to determine if the rains would continue, or if the streets would become passable in the near future.  Whom could you trust?

What was nice about this experience, despite not having a greater knowledge of what might be happening around us, is that we focused more closely on what we could perceive for ourselves unmediated—we did not look at a screen or through some formulated statement to know what was happening in front of our faces or in our community.  We looked to the west to see if more clouds were forming, and watched another storm pass to the south as theatre.  We paid attention to the changing winds, cloud formations, and barometric pressures as our bodies could feel/see them—that was the most accurate information we had.  We experienced the excitement and potential serious danger of the endless unknown, which as humanity we still experience beyond our planet and in many respects here at home, though we don’t realize it.

Developing mass medias—including the broadcast channels of television, radio and print—and then seeing their recent dissolution into a more personalized, village-like experience raises serious questions about whether we have more or less freedom now that information is perceived to be under our control.  While trying to learn what everyone else knows, are we able to affect it, or are we arrested by it?

Through the illusion that we are now all on the same page, a remnant of the Broadcast Media era, I assert that without the mass medias we are actually less free and less individualized.  Further examination is needed to illuminate where these ideas and programmings come from, how they are accepted and who perpetuates them.  Emerging arguments question the manipulation of our ideas, information and our “digital trails” by dark and opaque forces, often referred to as algorithms by those ‘in the know’ or the wisdom of crowds by those who aren’t.  These arguments are an attempt to bring back the independence we once had in the villages, while also maintaining the illusion of the mass medias’ singular perception.  There is a reason so many technologists go to Burning Man; they need to step out of themselves, whether or not most of them consciously realize it—they are the ones enabling us to obscure ourselves from each other.

II. THE BROADCAST ERA: Forrest Gump, the Editor

It may have started with a fireside chat from President Franklin D. Roosevelt, but with radio came the information—the idea that we as a country all had the same exact version of events at the exact same time.  For once, not only did you not have to read, but you could hear someone’s voice from very far away at the same time as the rest of the country.

Marshall McLuhan said, “The medium is the message.”  That observation arose during the time of broadcast medias, with their fascination with knowing what everyone else might be thinking at the same time.  Mass information helped to produce genocide in Nazi Germany – as well as war bond sales here in the United States.  The modern era was about getting everyone to think the same things, and have the same view of their culture and each other.  The medium of this time was broadcast; the message was the same idea to the largest audience.

Mass media aided the U.S. in going to war against communism in Vietnam and in maintaining the Cold War; many believe it also told everyone when those wars were over as well.  The electronic mediums of broadcast at that time were one-way channels of information from one source to the masses.  If we were told Khrushchev was crazy for beating his shoe on a podium, or conversely that Ronald Reagan ended the Cold War by telling Gorbachev to tear down the Berlin Wall, there were not many other choices from which to glean the narrative.

During this time, the reaction to cultural programming came in three choices:  with the stream, against the stream, or ambivalent.  What was different about these times is that even those who chose to be ambivalent probably heard, in some fashion, what everyone else was hearing.

In mass media there was always a lot left out of the story—such as the perspectives of minority or counter culture groups, or even those with different shades of the popular ideas.  All information was mostly a reaction to the central narrative.  With the duopoly of American political parties, the reaction to one or possibly two choices in direction persists today.  There were rules that editors would consider based on ‘objectivity’ requirements (equal sides, equal time, but only two sides) that reinforced the lack of choices.

Dispensation of information was somewhat limited as well.  Cultural programming in electronic form was mostly handled via a limited number of dispensers—for the first six decades of electronic media’s existence, there were only three major television networks of national significance, ABC, CBS, and NBC, which only broadcast news 15 to 30 minutes per day.  This grew with the advent of the UHF broadcast spectrum, and then eventually cable television, but the limits of information were easy to find.  By the time I entered the news business in 1998, there were only five places that could transmit news video to a national or global audience from within the United States—ABC, NBC, CBS, Fox and CNN—and this was true until about 2004.  Each day you could find all the available information by simply reading one or two newspapers and watching one or two newscasts.

In my hometown of Toledo, Ohio in the 90s, I was lucky if I could find a copy of the New York Times at five places in town.  One of the places was the public library, and The Times would be shared with others—you could not take it home, let alone share it in an email or a social network with all your friends.  Presently there is more access to this type of information than ever before, but people do not pay for it as they did then—which at the time constituted a sizeable barrier to entry as well.

We were stuck with a very localized and limited narrative of events.  Almost all local papers received their information from two large wire services: the stronger Associated Press, and the quicker United Press International.  Throughout the electronic medias and into the internet, the Associated Press was considered the non-partisan, unbiased source of information.  When the AP reported a story, people thought it was true throughout the land.

In the closing moments of the last century, most Americans mainly got their information from television, which had very limited bandwidth for the number of ideas that could be conveyed in the same amount of time and space compared to print.  This landscape was true for most Americans throughout the time of Broadcasting.

III. DIGITAL REVOLUTION: 1998-2008

1998: The Aggregator

While this period brought the advent of the computer playing a larger role in what information was available and in the perception of what we think we know, at the very beginning of the internet what you saw was still decided editorially by a person, through the use of portals and aggregators.

Portals such as Yahoo! or the paid service AOL would be a starting place for your ride on the ‘information superhighway’ of cyber space—which in their case was a very limited range of media partnerships and chat rooms.  News aggregators like The Drudge Report, which was responsible for bringing the Monica Lewinsky affair to light, and thus giving the internet its first credibility as a primary medium, were still managed by people.  They may have had an editorial bent, but their stories were sourced by people who decided their placements like traditional newspaper editors did with their front page.  These aggregators often used RSS feeds to stay up on the news from other sites, and would often make money by selling link placements or doing swaps of links with other sites like theirs in exchange for traffic.
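As a rough illustration of those mechanics (not any particular site’s code), an RSS feed is just an XML file of headlines and links that can be fetched and merged; the feed URLs below are placeholders, and a real aggregator would add caching, scheduling, and editorial ordering on top.

```python
# Minimal sketch of how an aggregator might poll RSS feeds and merge headlines.
# The feed URLs are placeholders, not real endpoints.
import urllib.request
import xml.etree.ElementTree as ET

FEEDS = [
    "http://example.com/national/rss.xml",   # hypothetical wire-service feed
    "http://example.com/politics/rss.xml",   # hypothetical newspaper feed
]

def fetch_items(url):
    """Download one RSS feed and yield (title, link) pairs from its <item> tags."""
    with urllib.request.urlopen(url) as response:
        root = ET.fromstring(response.read())
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        yield title, link

def build_front_page(feeds):
    """Merge every feed into one list -- the 'front page' a human editor then reorders."""
    page = []
    for url in feeds:
        try:
            page.extend(fetch_items(url))
        except OSError:
            continue  # skip feeds that are unreachable
    return page

if __name__ == "__main__":
    for title, link in build_front_page(FEEDS):
        print(title, "->", link)
```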

Although there is value in the organization of ideas, the ideas they were passing among themselves were not genuinely original—they contributed no new articles, only copied and spread around what was already there, made by someone else.  The rise of the news aggregator contributed to the decline of the news originator—journalists who would go out and generate original stories and information.  Many services, like the early Yahoo! News and Google News, strictly posted links from other sources, thus giving the impression that there were thousands of articles on one story.  I still hear from people that these thousands of copies of the same article actually provide more coverage than there was before the internet, when they do not, or about how overwhelming this period of time was to the mind—that may be true for some.

The truth is, there was more original reporting before these news aggregators and there has been significantly less since their invention—it was just harder and more expensive for one person to access it all.  There could have been an internet with more original sources gaining access to their audiences, but unfortunately, the business decisions made by news companies, combined with pseudo-intellectual faux-science propelled by a GO-GO Silicon Valley spirit, bankrupted our original journalism infrastructure.

There were claims by self-proclaimed internet experts like Jeff Jarvis, who once said “all media companies should be like Google.”  In this sense he meant all companies should create nothing themselves and sell ads off of the other media companies that were giving all their work away for virtually nothing.  How many of those can the world tolerate?

News aggregators marked the beginning of the illusion that there were thousands of versions of one single story, when in truth, they were merely copies of the same AP version, framed with different colors.  The only winner in this game was the news aggregator, like Google News or Yahoo News – because all the little clicks on these same stories added up to a lot of ad revenue on their aggregation sites.  The individual publications would suffer as they neglected what made them original—in exchange for providing aggregators the ‘long tail’ that made them some of the most profitable companies in history by having little to no cost for their service.  This ‘long tail’ whipped away the value of original reporting by paying publishers the same amount for original work as they would receive for publishing the same wire story.  On most days there were about seven stories of national significance, such as a plane crash, a piece of legislation passed, an update on a war, or a scandalous criminal trial.  Now, on cable news, there are two or three such stories, or most recently, just one.

 

The Search Engine

Early search engines, such as WebCrawler, AltaVista and Lycos, were somewhat primitive.  Site owners may have had to apply to the search engines to have a link considered, or the engines scanned each other’s sites, as opposed to the modern-day notion that search engines find everything that wants to be found.  These engines employed a simple way of searching the web, but did not exercise consistent organization in how search results were displayed.  Many modeled their organization after the Yellow Pages: by topic.

Google’s entrance to the scene was not profound because of its simple interface, but because of the way it crawled the internet, and more importantly, how it surfaced search results to the user.  Google’s results were more intelligently delivered and more intelligently displayed than its predecessors’ results.  When combined with incredibly lucrative search-term-based advertising, Google, more than any other search engine or editorial service, relied on algorithms to determine which results were shown to its users—with technologists programming and altering how the computers decided the results.  Accordingly, what people saw was often determined by what in reality was a black box—with a faint guise of fairness, often perpetuated through Google’s PR department and the mythology they prescribe of Google “doing no evil”.  If we knew how the search results were reached we could game the system—and while search engine optimization remains a serious industry, every time someone cracks the code of what it takes to get to the top of Google’s search results without either earning it or paying Google, the programmers at this powerful company change their mousetrap.
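Google’s production ranking system is, as noted, a black box, but the published idea it began with, PageRank, can be sketched in a few lines: pages that are linked to by other well-linked pages score higher.  The toy link graph below is invented purely for illustration and is not how Google computes results today.

```python
# Toy PageRank-style power iteration over an invented four-page link graph.
# This is the published idea Google started from, not the production algorithm.

links = {
    "home": ["news", "blog"],
    "news": ["home"],
    "blog": ["home", "news"],
    "about": ["home"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # a dead-end page spreads its rank evenly
                share = damping * rank[page] / len(pages)
                for p in pages:
                    new_rank[p] += share
            else:             # otherwise rank flows along its outgoing links
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# "home" ends up on top because every other page links to it.
print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))
```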

 

Carrying over their impressions from broadcast media, early search engines, and internet portals like AOL and Prodigy, most internet users still believe that everyone sees the same results when searching the same search terms.  In the very beginning of Google, this notion may have been true, but in 2004, the company began developing user profiles.  It searched its Gmail clients’ emails, and targeted its advertising and search results based on this personalized data.  With this advancement, cultural programming entered a new era of obscurity.

Although algorithms are executed by computers, they are designed and tweaked by humans—with their interests at front of mind.  The early stage of the internet had a happy, friendly expansion period in which these tweaks toward profitability were neither necessary nor employed, as the internet and its technology companies were merely trying to gain adoption and find their footing.  Almost all technology companies attempt to maintain this feeling of an honest dispensation of information and friendly expansion, but those days are long over if you listen to the business plans of new tech start-ups and the now-leading tech titans.

YouTube stars, for instance, stimulated by their audiences alone, and with no monetary reward, were often keen to sense changes in algorithms to propel their fame.  In 2011 Google, which had bought YouTube, had had enough of this fun-loving, friendly time of experimentation and simply stomped out an individual’s ability to rise to the top of search results in favor of paid circulation from brand advertisers.  They now are in the process of trying to pay these stars to keep them from leaving the platform.

The Webloggers

There was a rise of individuals speaking their minds in the form of a weblog.  The early experimental phase of the internet, combined with the primitive, more transparent/easy-to-manipulate search engines, produced instant fame for early adopters of weblogs.

The weblog period was exciting in the sense that everyone from the “letters to the editor” crowd to the aspiring writer, and in some cases, well-known columnists, could have seemingly direct access to their audiences.  Some did well, but most did not, and writing regularly for no audience, while again adding up to profits for aggregators like Google, was not enough for individuals to pay their bills or, more importantly, to feel the stimulation of having an audience read their work.  A seemingly endless amount of distribution and space was opened up overnight, but there still was a limited amount of time for people to read about what everyone else was thinking, so most blogs became empty, abandoned strip malls after a short period of time.

Even the reasonably successful blogger struggled.  Some of these bloggers had thousands of readers—a true following—but they did not have the job security of working at a newspaper.  If a blogger did receive money from an ad source like Google AdSense, the rate was considered very good at a dollar per thousand views—a tenth of a cent per view.  Relatively large audiences did not even mean bloggers could sustain themselves, and if you did somehow earn enough, you were on a never-ending treadmill to keep those clicks up.  In a world where clicks on pictures of cats and clicks on a well-thought-out and researched journalistic investigation are worth the same, this new form of monetization incentivizes the lowest common denominator for content—this is often referred to now as BuzzFeed.

This precarity of the individual writer’s position in the marketplace empowered the powerful instead of producing what many thought of as a revolution in self-publishing.  The more that concepts like citizen journalism—the idea that unpaid citizens submit acts of journalism, ranging from written stories, pictures, and videos to investigative pieces and reports from press conferences, in lieu of paid, professional reporters—were rolled out, the less powerful and substantive our journalism became.  Large publications with big buildings, deep pockets, and large staffs including teams of lawyers had more security to not only question power, but also to be fully researched and taken seriously when they did.  Citizen journalists, motivated by attention alone, stood by themselves and often struggled to get true access to what was really happening, even when they may have thought they had it.  Like lightning strikes, interesting things do not happen by accident in front of your cell phone camera, while you happen to be recording, two times in a row.  Yet this was what most media outlets wanted from their citizen journalists—an endless stream of disposable gotcha moments.

We have ended up with more loud, emotive repetition of and reaction to already-known information and gossip, the kind of information we can all find sitting at a desk with an internet connection without talking to anyone, and less actual research, reporting, or original story generation to fuel it all.  It has gotten so bad that even our elected leaders wonder if there are any journalists left who know how to research a story or have the time to do so.

The more the pseudo-academics and Silicon Valley companies like Google and Yahoo! made deals with traditional journalism institutions, the more they destroyed the ability for the actual generation of original research and reporting.  They used the elite, often corrupt hiring practices of these institutions as a foil for the destruction of good-paying reporting jobs and journalistic practices, in favor of a Walmart-style model of low pay, and in many cases, even darker, more opaque distributive practices that made it harder for provocative, original ideas to reach the general public.  At least in the old system you knew who was saying no to you.  Online, as a creator, you have no idea why some stories have an audience and others do not.

These same Silicon Valley companies, and others who have joined them like Facebook, have traded in their pseudo-intellects for lobbyists, who now collectively spend almost as much as oil companies making sure the rules are bent in their favor.  The European Union has rejected these lobbyists and has instituted more humanistic policies, such as the ‘right to be forgotten’—if you request to have parts of your life erased from search engines, tech companies are now required to remove them in Europe.  The main reason companies like Google fight these measures in the United States and abroad has everything to do with having access to everything that they can possibly monetize and know about individuals, whether it’s humane or not.  They profit off of your pain and joy—it’s all the same click to them as long as they have total and unregulated access to it at all times.

A combination of blogs and news aggregators was later formed.  The Huffington Post, one of the most prominent blogs, relied heavily on Search Engine Optimization.  This leech is worth noting because The Huffington Post relied almost completely on the voluntary work of those who published there, and it copied every article and video onto its site, thus creating a search engine magnet and thrusting itself to the top of every search and every advertiser’s call list.  The site still rarely broke any new information.  It also rewarded what everyone would generally view as stealing—simply taking a story from another site that actually paid for the original reporting and pasting it onto your own so you got the money for the clicks—and it created a whole new crop of destinations that further eroded the funding for actual original reporting and idea generation.  Few, if any, even in the journalism industry, have decried these moves for fear of being on the wrong side of history.  Unfortunately, they all mostly lost their jobs and their profession anyway—and the rest of us are disempowered with fewer sources of real information.

 

IV. PRESENT TIMES:  An Obscure Era

2004-2008: Social Networking/Viral Video

There was a mythology of the democratization of the mediums.  After such a long period of obvious, centralized places of power with all of their walls, restrictions, and corruptions for gaining access to distribution, having the ability to seemingly transmit your idea or vision to just about anyone in the world at any time without a filter was quite profound.  With the advent of YouTube, it seemed like if you put up the right video, it might go viral—essentially, that if it was good enough, everyone in the world would see it, merely by people passing it to each other as simply as the common cold.

To summarize Slavoj Zizek, it’s not the opposite, but always the inverse of what you are thinking.  That is a tremendous oversimplification, but it does apply to these times we live in.  While you may think the viral video is possible—it is now certainly dead.  When you post a video, no one can possibly find it unless you have access to money or a friend who can pull a lever for you inside one of the social networks or distribution platforms.  Something incredible is lost in everyone not realizing this.

I recently got into a discussion which exemplifies this opaque period.  It was Election Day 2014, and someone who had previously worked at Change.org put a posting on her Facebook page applauding Facebook for giving up some of its precious screen space to encourage people to vote:

Girl’s name obscured via NPR

November 4 at 4:28pm ·

Props to Facebook for doing this!

Anyone who’s worked in online sales knows that every time you add a non-revenue-generating link for someone to click on, you decrease your chances that they will click on a revenue-generating link. Facebook’s management team clearly decided to optimize for civic engagement here instead of money-making. That’s a tough decision to make as a publicly-traded company, and I applaud them for making it!


  • Daniel Beckmann
     I mean.. sound of one hand clapping. We all work for free for them 24/7

Girls Name Obscured  Daniel Beckmann, we also get an incredible relationship-building service from them for free. I personally think it’s a fair trade-off.

November 4 at 5:24pm · Like

Daniel Beckmann I dont. They don’t value work. They get a lot more out of it then we all do.

 

As users of the Facebook platform, we are essentially working for free, giving them our brightest ideas and our best pictures, in what Mark Zuckerberg describes as the history of our lives, to do with as they please, including owning our postings into eternity.  And yet we are supposed to feel so fortunate to have this platform to use that we should applaud them for giving up revenue to benefit our plebeian society?  While posting on Facebook is not like working in a steel mill, imagine if Andrew Carnegie had a similar policy, where you were lucky to have access to his factory to do his work and that was the pay.

What is implied in this exchange is the idea that we owe Facebook something, or that they are providing us any real service.  There would be no Facebook without us.  The truth is we are neither told how and why things are placed in our feed, nor do we know how our actions are being monetized.  But perhaps most importantly, we are given a false sense that the things we put out there are actually being heard by anyone.  Our work/postings have value, especially if we are to evolve as a society—that’s why Facebook has value, and our collective value is worth more than their platform or anything they claim to do for us.  If we all stop doing work for them, they cease to exist, and the relationship should be viewed that way.  We are not lucky they are giving us space to encourage ourselves to vote.  They are lucky we are there.

 

Obscurity in Life or Death

In February 2013, I died while surfing, and then was brought back to life.  A long and complete explanation can be found here: http://www.schmoonews.com/2013/02/14/1048/.  About three days after I came out of my coma, I told everyone via Facebook that I was alive.  I received an outpouring of support for my existence.  With so many likes and comments on this post, you would have thought that everyone on Facebook was made aware of what had happened to me.  So, if someone didn’t comment, respond or make mention of their appreciation for my existence, it would seem I would be correct in interpreting that they didn’t care very much whether I lived or died.

Before Facebook and social network sites, I could trace the flow of information to key people, but I also could give people the benefit of the doubt as to whether they even heard the news.  I would have some idea who knew what had happened to me based on whom I told, and some guesses about whom they might tell.  In the outer circles of people I knew, with whom I did not speak directly, someone could accurately say they may not have known what happened to me.  There are some people who have never mentioned this incident, and to this day, I think they know, but I cannot be completely certain.  Through this Facebook posting, most of the people I know found out about it, but I don’t know who did not.  Facebook has not only obscured my relationship with my community, but also whether or not people value my existence on such a fundamental level.

 

Viral is Dead.  There is No Such Thing as Viral

In the early days of YouTube, Facebook, and other social networks, it was possible for a video to go viral.  This meant that users could share something with friends, and it could spread wildly through their friends and beyond, like a disease, as a network effect.  Under the guise of privacy, YouTube, Facebook and other social networks changed their algorithms to make it impossible for videos to go viral unless you paid them, had a friend who worked there who would cheat for you, or the forces of virality operated in some external way, such as through the press, or from a sort of seemingly-fair, if-you-use-it-every-day-and-are-known-there aggregator site like Reddit.

 

Social networking sites put up silos between friend groups, so that the feed of information was possibly narrower than in the broadcast days.  Users kept seeing the same types of ideas based on the silo they had been put in.  People don’t realize all they are missing.  New ideas are hard to bring into these silos, and users are seeing fewer ideas that they may not like or agree with than during the broadcast days.  There may be more information available, but there is no easy way to make sure you know about it.  Even search engines silo results based on who they think users are, in order to persuade them to buy things from certain advertisers, or to give them results that fit the engine’s assumptions about that person, without the user having any understanding of why their online world is being constructed this way.  (More on metadata follows.)
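No outsider knows exactly how any particular network ranks its feed, but the silo effect itself can be illustrated with a deliberately crude sketch: if a feed only surfaces the items most similar to what a user has already engaged with, the user’s view narrows on its own.  The posts, topics, and scoring rule below are all invented for illustration.

```python
# Crude illustration of how an engagement-driven feed narrows into a silo.
# The posts, topics, and scoring rule are all invented for illustration.

posts = [
    {"id": 1, "topics": {"cats", "humor"}},
    {"id": 2, "topics": {"war", "foreign-policy"}},
    {"id": 3, "topics": {"cats", "recipes"}},
    {"id": 4, "topics": {"local-politics"}},
]

def rank_feed(posts, liked_topics):
    """Score each post by overlap with what the user already liked, highest first."""
    def score(post):
        return len(post["topics"] & liked_topics)
    return sorted(posts, key=score, reverse=True)

def show_feed(posts, liked_topics, slots=2):
    """Only the top few slots are ever shown -- everything else effectively disappears."""
    return [p["id"] for p in rank_feed(posts, liked_topics)[:slots]]

# A user who once liked cat posts keeps being shown cat posts;
# the war story (post 2) never makes the cut.
print(show_feed(posts, liked_topics={"cats"}))   # -> [1, 3]
```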

In May 2012, at the Digital Upfronts (a spring tradition where publishers court advertisers to buy ads with them), YouTube proclaimed to the world that they were now BrandTube.  They planned to make YouTube a place where brands could feel comfortable.  User-generated videos were hard for brands to associate with, because the brands were never truly certain what the contents of the videos they were advertising on would be.  Even if brands placed ads on the long tail—one ad running across lots of little bits of videos—they wouldn’t necessarily reach the same magnitude of audience that they did in broadcast media.  Their old methods of selling, let’s say, orange juice were based on reaching the predictable mass audience of television and the fishy Nielsen ratings.  The ad performance in these user-generated categories was much harder for brands to figure out, even if they liked the number of clicks they were getting—were these the right people?  No one was getting fired for placing another television buy or buying on Hulu, essentially the television networks’ attempt at continuing their business model online.

Also in May 2012, I attended the last ROFLCon—the Rolling on the Floor Laughing Convention—which took place at MIT.  The viral video was officially pronounced dead there.  Viral videos required the following to be possible: anonymity, a huge and free network effect without walls in between users, and finally an interest in and fascination with user-generated content by the platforms that regularly generated these videos.  All of those were dead by May 2012.  Those in the YouTube community noticed almost immediately the lack of virals to talk about—they had to rely more on making these videos themselves or having their users send them in.  The 2012 election was not one of the viral video—there had been many in 2008—it instead was the year of the static print meme.  The image files were easier to send through a network effect on social networks with higher bars and through traditional-style print mediums, primarily because YouTube, the primary network for video distribution, had destroyed virality as a possibility on its platform.

 

It’s All Slowing Down

The reason that I’m finally able to write this paper in 2014 is that the Digital Revolution, which began in the late 1980s/early 1990s, has begun to slow.  The oft-quoted Moore’s Law (named for Gordon Moore, the co-founder of Intel) observed that the number of transistors on a circuit doubles every two years.  Feature sizes on a microchip have now approached the atomic level, effectively ending Moore’s Law.  The entire technology community relied on Moore’s Law.  In order to increase computing speed, we now have to devise more efficient uses of the materials we have or accept more non-noticeable errors in our computing.  There has been a noticeable change as we approach the end of Moore’s Law.  For instance, you no longer need a new computer as often.  I bought mine in 2009, and it is still nearly as fast as the ones being marketed today.
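The doubling is easy to make concrete.  A back-of-the-envelope projection under Moore’s observation (a doubling every two years) looks like the sketch below; the 2009 starting count is a round, assumed figure rather than a measured one.

```python
# Back-of-the-envelope Moore's Law projection: transistor count doubles every two years.
# The 2009 starting figure is a round illustrative number, not a measured one.

def projected_transistors(start_count, start_year, target_year, doubling_period=2):
    doublings = (target_year - start_year) / doubling_period
    return start_count * 2 ** doublings

start = 1_000_000_000  # assume roughly a billion transistors on a 2009-era chip
for year in (2011, 2013, 2015):
    print(year, f"{projected_transistors(start, 2009, year):,.0f}")
# Each step roughly doubles -- the curve the industry relied on until
# feature sizes began approaching atomic scale.
```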

 

The Selfie (OMG)

The selfie is a very NOW example of the abundance of visual communication combined with the narcissistic period in which we now live.  Within the frame of a smartphone screen, it is not just a subject’s face that is communicated, but also the subject’s surroundings.  The audience is friends, family, and anyone else who cares or more often doesn’t care about the subject.  While the selfie may seem to have something to do with the historic record—I was at this bar next to this beer (very popular selfie)—the abundance of selfies trivializes these historic marker points, as there are now just too many of them to ever have enough time to look back and consider them all.  The selfie is really the continuation of the notion of “thyself as publicist”, shaping the image to elicit certain impressions from the audience.

“Thyself as the publicist” will continue to be refined through our mediums.  We have yet to truly consider how to construct alternative selves and redefine the selfie into acceptable alterations beyond an idealized image of yourself.  The accepted repertoire of selfies is quite primitive—I’m with friends, I’m with a famous person, I’m next to the Eiffel Tower.  We have merely begun to shape the digital image of ourselves based on our programming, while also using these images to program others.

With limitations in technological developments, particularly in creative tools such as cameras and non-linear video editing, we will hopefully, sooner rather than later, enter a period of refinement of these digital mediums, as opposed to the worship of wow wow wow gizmos and ever more additional space to distribute.  More value will be placed on the creative person and their ability to make technology work, instead of on the technologist simply creating more capacity for work.

 

The Obscurity of Self-Image in Generations

Generationally, Americans born during different eras have significant gaps in the way they receive and understand information.  These gaps may be more significant than those experienced during the last century, as we have split into smaller generations and have been born into vastly different periods of media, programmings and understandings of ourselves and each other through these formats.

 

Baby Boomers/Forrest Gump & Older (Born Before 1965)—Baby Boomers see the world in terms of narrative stories.  This shared understanding is the last vestige of the campfire mode of storytelling.  They also see things in dualities—wrong or right, for war or against it, Republican or Democrat.  On television, their primary medium, two points of view are generally the most you can reasonably fit on a screen—in the industry these interviews are often called two-ways.  They have been given these two sides their entire lives, and these dualities will die with them.

 

Generation X (1965-1978)—These kids were born as part of a collective hangover.  Popular culture raised them, but it wasn’t as loving as true human contact.  They are the ultimate creation of the analog society and will forever remain lost in a digital world.  Their understanding of economics was destroyed by a .com boom in the late 1990s, in which they thought they could get money easily from signing bonuses, stock options, and IPOs, and they will always define the leaders of their generation by whether or not they cashed in at the right time.  A great majority of them are untrainable in the new world, and as the last analog-native generation they may be a burden for society until they leave us.

 

Generation Y (1978-1985)—Generation Y could be considered late Generation X.  They appreciated the analog medias, but some of the first noises they heard were electronic, synthesized music.  They also entered homes with no answering machines and phones with busy signals.  We also had cable television and MTV tell us that we were original because we had an opinion, and having an opinion WAS our opinion.  This fill-in-the-blank individualism was especially susceptible to marketing—and spawned the ever-present ‘hipster’ movement, based solely in style, not substance.  The SNOWFlake, the idea that each person was slightly different and special, was born.

Yes, I am in this generation.  We have flaws, but we are both digitally and analog native—there is hope for us.  But like Generation X, we were also heavily raised by popular culture.  We were forced to understand narratives, but were the first generation to talk about cool moments—or just the best parts of a story on their own, not connected to anything else.  We still use Facebook—we created it.  The analog vs. the digital is a conflict within us, but we can be the translators between the two.  When the analog days are gone, and the digital natives feel like something human is missing from their lives, our generation will possess the knowledge of what higher-quality analog recordings sounded like, but we will also be credible because we enjoyed mp3s in college dorm rooms.  Or at least we will think this of ourselves.

 

Millennials (1986-2008)—This is often referred to as the snowflake generation—the idea that each and every one of them is special in design, white, sparkling, and bland and cold when they’re all stuck together.  It is quite possible that the members of this generation never experienced the pleasures of boredom.  Accordingly, all they understand are moments through social media, less and less through Facebook (still a narrative form their parents use), and more through the disposable micro-moments captured socially in Instagram and Snapchat.  To Millennials, life is quick and disposable, but their sense of self is great due to the seemingly immense distributive power of their Selfie ideas.

When Millennials attend an event, they do not participate in it directly.  Instead, they take pictures of real life and see it through a screen (we’ve already talked about selfies).  There will be no time to look at these moments later, because it’s not about posterity, only about immediate self-promotion and edification.  Yes, I will be the old guy who says that he is concerned about this generation’s future as we leave this era of endless wonderment and enter a new, truly productive era of creative potential.  Millennials aren’t deeply connected to very much.  They didn’t play outside as much because it was too dangerous, unless it was part of some organized sport.  When they enter the workforce, they have to deal with the robotic nature of programmings (explained fully later on) through constant appointments and measurements (such as the SATs) and other external signifiers, such as Harvard University or grades, which will have increasingly less meaning as an individual’s ability to create and form subjectivities on their own becomes paramount to survival.

I can argue that each person is special and has the ability to influence the world.  Without taking the time to appreciate the world, however, or develop patterns to program themselves and others around them, the Millennials will never truly explore their individuality and their ability to break patterns beyond what is being shown to them through the obscurity of an algorithm.  They are the first digitally native generation.  They may fail to add lasting creative works to the collective human consciousness by the time they leave this planet.

 

Generation Next (2008->)—I do not want credit for coining this phrase, and I would adopt another one if it comes along.  I do think that those born after 2008 will be a truly superlative generation.  They will be comfortably digitally native.  They will grow up during a time of refinement of digital tools, and instead of being told these times are special simply because of the facility of self-expression and mass distribution, their teachers will have points of criticism against what the digital world has become—an incredibly limited world after all.  This generation will master technology, and will be in a position to rebel against what has become of it.  While they may not play outside as much as previous generations, they may have enough time to wonder about what digital has done to us in terms of our attachment to each other and humanity as a whole.  They will be the first to truly stake claims in the digital space creatively by not merely adapting ideas from the analog era, but making shapes, sites and sounds completely unique to the digital era.  Toward the end of their lives, this generation may experience the 1970s of digital expression—a time when the limits of the digital awakening are reached and they can put everything they’ve got into something creative, not just learning how to master the tools.  They will take the individual’s freedom and maybe actually use it for themselves and each other, instead of simply proclaiming it as part of an easily manipulated, consumerized purchasing power.

 

Venture Capital as a Study

We live in an era in which original idea generation is not encouraged.   Traditionally in technology, venture capital was allotted for BIG IDEAS that were risky.  The upside was huge if they worked out—but so was the likelihood that they wouldn’t.  The Venture community was small, and was often the first and last place a person could go to get funding for something truly technologically profound—or at least that’s the ghost of what Silicon Valley would like to believe about itself.

Now, a large portion and central piece of our economy is focused on technological and digital growth (as opposed to growth in manufacturing, service or energy-based economies for instance), and the true venture capital for this endeavor has been severely crowded out and diminished by those simply claiming to do it.  Individuals and groups of a few individuals are trying to start small businesses and they are calling this Venture Capital.  Before the technological boom, these were the same types of people who would have opened a neighborhood dry cleaner, or the one-hour photo shops that could do the job in 45 minutes instead of an hour.

These convenience items are not technological wonders, and cost less to start than physical dry cleaning businesses that require machinery.  The ease of the cloud makes it easier to start tech companies, but more difficult to make more clouds/infrastructure for truly original and profound foundational technologies.

When there was a requirement from an infrastructural standpoint for people to be close to means of production, whether it be a server, or a physical computer, or in some cases each other, the venture money was taking more risks.  Silicon Valley was a physical reality in the sense that all the components for taking true technological risk, from the universities and talent pools, to the physical computing power and culture of risk were all there, thus reducing some of the risk to your tremendous boom or bust investment.  The history of California itself is one of boom or bust, and Silicon Valley was one of the few places it made sense to try big things (the other being Boston).

Now, investors put money into start-ups because they want to maintain control of the technical labor pool and, in some cases, squash innovative competitors.  They invest small amounts in many different companies and hope that one takes off in a major way, not only to pay for their other investments but also in hopes of massive returns.  When they don’t, the investors still own the labor behind the failures and can often fold it into other owned concerns and keep it off the market, away from their competitors.

In order to acquire funds, a company needs to show investors a very sharp increase in revenue—also known as the ‘hockey stick’ on a graph.  This pressure on a young company’s ability to perform reduces its ability to focus on true innovation, beyond whatever might produce immediate revenue so it can sell to a larger company as soon as possible.  This entire exchange, in which the venture capitalists make commissions on these deals when the companies sell to one of only a handful of tech titans, of which the venture capitalists often hold ownership stakes as well, reduces the pressure for places like Google, Facebook and certainly Yahoo! to innovate, while they pay off and capture any threatening talent before it can produce legitimate competition.

The most recent and egregious form of stifling innovation and independence was Facebook’s purchase of WhatsApp, a program truly in the purview of homo generator, as it allows people from all over the world to text with each other at little or no cost (roughly $1 per year).  WhatsApp’s user numbers quickly rivaled Facebook’s, with a faster growth trajectory.  It also had a broader base of younger users who valued moments instead of narratives – a base that will make up the future of most users and one Facebook was struggling to attract, even with the recent purchase of Instagram.  Mark Zuckerberg made many overtures to buy WhatsApp, but its founders declined to sell.  Zuckerberg eventually offered such an immense amount of money that the WhatsApp founders had no rational choice but to close the deal.  They could give up on what they built, take their money after some period of time being off the market, and begin another project.  It is not so easy, however, to build such a culturally relevant product all over again—especially once the monetary motivation is completely taken away.  With the tyranny of limited resources, better editorial choices are often made.

This happens all the time at varying levels.  It has reduced our ability to truly innovate, and the new startups are not motivated, despite lip service to this effect, to develop technologies that truly help the world in any way.  Most Silicon Valley business models are based on exploiting endless free labor in the form of end-users.  Until we place higher value, or any value at all, on the labor of the end user, the true realization of homo generator will either take longer or could be drastically curtailed in a new form of centralized, systematic control and slavery.

 

The Unshared Economy

It’s been called the shared economy – the idea that you can make money off the labor and resources you already have, if only you knew of someone who wanted to buy them from you.  You’re connected via digital technologies, but the advent of the smartphone made the possibilities related to proximity really take off.  In some ways, this could be considered an evolution; jobs are created based on ideas, and not on productive elements that machines can do for us.  However, this is not the case in the largest growth area of this time.

The so-called shared economy, while not forcing people to work in coal mines under terrible conditions, maintains a similar social and economic order with one exception—in a coal mine, workers didn’t have to buy their own shovel, but in the shared economy, they bring all the upfront capital and really do not receive anything from their taskmasters to complete their jobs.

Forcing people to work to have their machines is nothing new.  Forcing people to use their machines as work in order to have them at all is seemingly a regression.  Those without machines or the means of any production, who need better access to the economy, are becoming increasingly locked out.  The structures they used to gain employment are the ones most threatened by the shared economy—there’s nothing being talked about in technology for how to include this population, except often a token recognition that there’s a problem.

If the shared economy becomes the norm, we will also have less time for original content generation, as we will have to spend more time simply maintaining the base standard of living to pay the programmers/owners of this shared economy.  Even the programmers/owners of the shared economy will have a less interesting existence.  Who will have time to make their apps look cool, with no time left for inspiration or to visit the void, as they resell their homes, cars, electronics and themselves in order to afford these things we now consider basic and mostly unshared?

There are several examples, from Uber, the car sharing service, to Airbnb, the service that allows you to share or rent your house/apartment to visitors from out of town in exchange for money.  Airbnb has had the following effects on its home city of San Francisco:

1)    Less affordable residential housing is on the market as long-term residences are being turned into short-term lodgings for out-of-town visitors

2)    Less sales tax revenue has been generated as a percentage from out-of-town visitors

3)    Secure, union, low-skilled jobs are being threatened as these bnb’s compete on price.

4)    In order to afford their apartments, people who used to live in their apartments alone full-time now have to rent them out (both when they are out of town and while they are home) and perform the tasks of a bed and breakfast, a service for which they are rated.

5)    The middlemen of Airbnb are making record profits by not paying lodging taxes, and more copycats emerge constantly, promising easy cash to all who take their assignments.  They continue to support and fund candidates who, in the name of innovation, vote against taxing their business, even though such taxes would provide a more level playing field.

 

I recently attended a conference titled the “Personal Democracy Forum,” where the founder of Airbnb was given a platform to speak.  He said that all of this bad press around his company was merely an issue of people not realizing the true positive nature of their story (the CEO of another shared-economy leader, Uber, later drew on the same Silicon Valley “tell our true story” public relations playbook, using the same justification after sort of apologizing for the actions of one of his top executives, who had told journalists of a plan to track, follow, and punish any detractors of the company).  He shared a tale of a woman with cancer who was able to pay her rent by sharing her house with guests while she underwent chemotherapy.  This woman wrote in a nice testimonial about Airbnb.  While it is sad that this woman’s only option was to open her home during such a trying time, it is more upsetting that Airbnb thought presenting her to its audience would make their business more palatable.  Instead, it reinforced that their actions are predatory, and do not truly value the humanity they exploit.

 

Self in the Age of Obscurity

If we can’t count on the viral video to program the masses, if there are fewer and fewer venues with a captive audience, and if we can’t even tell whether people think we’re alive or dead, it would appear that those trying to control information from a centralized point of power would have trouble doing so as well.  This is not true.  As a result of the internet, despite the appearance of more information, we are less aware of what is happening in the present moment and of whose ideas we are being programmed with.  The general public gets less war coverage than it did ten years ago.  This is not entirely a result of the business models of traditional media collapsing.  The wars are not even known.  The U.S. administration will casually mention an operation in Yemen or Somalia, but we receive little information about the specifics of this operation.  This creates piles of unanswered basic questions about those operations—such as who we are fighting, for what reason, or even simply how it is going.  With no critical mass from the public asking these questions, we may never get answers anytime soon, if ever.

Our ideas are shaped by the situations of which we are well aware.  There always were, and always will be, ways to hide information or perspectives.  Unfortunately, it is now much easier for the powerful to obscure what we know and understand about our world and each other under the guise of the open and free internet.

One example is our much-reviled legislature, the U.S. Congress.  Almost all the reporting we receive now about our government is about the sport of legislating, the personalities, and the winners and losers.  One publication in particular, Politico, personifies the lack of actual substantive reporting on our government.  While emotions run high (a popular phrase in television news), what’s lost in the balance is the substance of what’s being debated and all the different choices of what’s possible in our legislation.  Despite the fact that there is more capacity for distributing information, that capacity is not being utilized.  The low approval rating of our legislature also acts to diminish our power as individuals through our representative government.

Recently President Barack Obama used an executive order to circumvent what he called a dysfunctional Congress and give certain rights to undocumented immigrants.  In so doing, he set a precedent for future presidents to take similar unilateral actions.  The polarization of our media, the noise of information without the substance, is a product of our new digital media paradigm.  At the moment, our power is being taken away in front of our faces, with one big exception.

Twitter has been used, and in many respects was designed, to allow mass groups of people who don’t know each other to communicate.  Due to its limited character capacity, it’s been most effective in organizing spontaneous mass protests.  The Occupy Wall Street movement relied heavily on Twitter in the United States to organically grow its masses, and certainly a large part of the Arab Spring can be attributed to Twitter.  But again, Twitter is an open and anonymous platform for communication—virality can happen there, but the bar for truly mastering Twitter is high, and the ability to communicate complex ideas on that platform is limited.  Twitter has always struggled with its inability to support the financial desires of all those invested in it.  It started as a mass communications tool, but even if it stops failing in its attempts to turn the service into an advertising juggernaut, its future as a trusted and effective organizing tool accessible to everyone anonymously lies in doubt.

The sooner that we understand that Facebook, Google, Apple, Amazon, and many, many other technology companies are using data to determine who we are, what we are doing, and what we’re supposed to have access to in terms of our understanding of the world, the sooner we can take back a fuller understanding of ourselves and each other.  John Gilmore, cofounder of the Electronic Frontier Foundation and a proponent of technological advancement that protects an individual’s digital rights, said during his speech at Burning Man in August 2014 that we have to remember that we are not the customers for these companies, and therefore, are not their highest priority.  My company, however, buys ads, and is a customer of these companies.  I can attest that the rights of the users and their well-being are often the last consideration in their focus, their transactions, or their goals of personal riches, corporate growth, market dominance/influence, and shareholder growth.  We have long passed the era of the exploitation of children working in mines, among other labor rights achieved in the wake of the industrial revolution.  We have not even started our discussion about how almost every single business model being funded in Silicon Valley relies heavily on the exploitation of anyone who’s ever used their products or plans to in the future, unless something changes.

 

V. NEAR FUTURE:  Refinements

Welcome Logjams

Limitations = Better Work

 

In its infancy, the Internet seemed to have no bounds.  We would have endless distribution and endless space.  For instance, users can take and upload as many pictures as they want now because web space and bandwidth are cheap.  Prior to digital photography, we had to develop film at a cost, and dial-up modems were too slow to effectively distribute pictures.  Endless space, however, did nothing to advance the meaning of the photographs.  While there are certainly more pictures available, and thus more possibilities for the pictures to be meaningful, due to the sheer volume of visual potentialities, very few have given any consideration to when, how, and why we take pictures in this new paradigm.  There have been some advances toward refining the quality or meaning of what we see visually online through the use of Instagram.  With this photo-editing and photo-sharing site, users can cosmetically filter and distribute their abundance of pictures.  Increasingly, and especially with younger generations and those communicating internationally across different languages who prefer pictures to the written word, visual communication is expanding.  It is no longer a mode simply of posed picture taking, but instead of instant communication, without having to write anything or compose a complete thought.

 

Our Data Residue 

Eric Schmidt, Chairman of Google, recently said that more information is created every two days than in the entire period of human existence before 2003.  Our digital footprint is being generated through everything we do online and, quite often, offline, as well as through the actions our machines take on our behalf, whether we realize it or not.  The National Security Agency, most noisily, but also every single major technology company from IBM to Microsoft, from Facebook to AT&T, and even political campaigns to some degree, are making moves in “data science” to attempt to discern intelligent life from this activity, for both business purposes and the betterment of society.  For example, Facebook recently concluded secret psychological experiments to determine how to better manipulate its audiences, and Google, through the purchase of firms like Nest, will attempt first to record and analyze, and then to better control, the climate of your home, and then of as many homes and businesses across the country as it can, based on this data.  Are you spending more on heating than your neighbor?  They will guilt you into programming your furnace the same way, possibly even publicly.
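
To make that last scenario concrete, here is a minimal sketch of what such a neighbor comparison amounts to.  Everything in it—the household names, the usage numbers, the nudge message, the heating_nudge function—is invented for illustration; it is not Nest’s or Google’s actual code, only the general shape of the calculation.

# A hypothetical sketch of the "are you using more than your neighbors?" nudge.
# All numbers, names, and messages are invented for illustration only.

NEIGHBORHOOD_USAGE_KWH = {       # last month's heating energy, by household
    "household_a": 410,
    "household_b": 385,
    "household_c": 520,
    "you":         610,
}

def heating_nudge(household, usage):
    """Compare one household's heating usage to the neighborhood average."""
    others = [v for k, v in usage.items() if k != household]
    average = sum(others) / len(others)
    excess = usage[household] - average
    if excess <= 0:
        return "Nice work: you used less heat than your neighbors."
    percent = 100 * excess / average
    return (f"You used {percent:.0f}% more heating energy than nearby homes. "
            f"Consider lowering your thermostat schedule.")

print(heating_nudge("you", NEIGHBORHOOD_USAGE_KWH))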

This is an example of the expansion of the ‘internet of things’—the idea that everything in our world, all machines, humans, and nature, is connected to the internet for the purpose of collecting and analyzing data, often under the guise of the betterment of the world through ‘data science’.  Another modern example: your car will report your driving behavior back to your insurance company—at first in exchange for lower rates, but eventually as a standard feature on all cars sold, all in an attempt to reach the promise of a better-behaved society based on the central control of our actions through algorithms and data science.  Data science is a new religion, and an enormous amount of trust is being placed in the minds of data scientists.  Ask almost any honest data scientist, who is really a mathematician or a statistician, and they will hopefully tell you about the limits of their ability to understand the present, let alone predict the future.  What is really happening is that the bright ideas of a few, however flawed they may be in universal application, are being used to further concentrate control of the masses in the hands of those who have access to this data and to the platforms for population and mind control.

My company presently works on data science projects.  We have a data science platform, part of which attempts to analyze and report the sentiment of people—for instance, what subject they are writing about and whether they are happy or angry about a given topic when writing in to their elected officials.  The intention is that their politicians may react to their concerns or desires in real time, while removing the partiality and imperfections of human-organized and human-analyzed correspondence.
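
For readers who want to see what the simplest version of such a pipeline looks like, here is a minimal, purely illustrative sketch of lexicon-based topic and sentiment tagging.  The keyword lists, the tag_letter function, and the sample letter are all hypothetical—this is not our platform’s actual code, which is considerably more involved—but it shows the basic mechanics of sorting a constituent letter by subject and tone.

# A minimal, hypothetical sketch of lexicon-based topic and sentiment tagging.
# Keyword lists, function names, and the sample letter are invented.

TOPIC_KEYWORDS = {
    "healthcare": {"insurance", "medicare", "hospital", "premiums"},
    "environment": {"climate", "pipeline", "emissions", "wildlife"},
    "economy": {"jobs", "taxes", "wages", "budget"},
}

POSITIVE_WORDS = {"thank", "support", "grateful", "appreciate"}
NEGATIVE_WORDS = {"angry", "oppose", "outraged", "disappointed"}

def tag_letter(text):
    """Guess the topic and overall tone of a constituent letter."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    # Topic: whichever keyword set overlaps the letter the most.
    topic = max(TOPIC_KEYWORDS, key=lambda t: len(words & TOPIC_KEYWORDS[t]))
    # Tone: compare counts of positive and negative cue words.
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    tone = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"topic": topic, "tone": tone}

letter = "I am outraged that my insurance premiums keep rising, and I oppose this bill."
print(tag_letter(letter))   # {'topic': 'healthcare', 'tone': 'negative'}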

Although computers are beneficial for shortening tasks that involve large quantities of routine processes—and in the case of the application we built for Congress, our machines are more accurate at the job than the humans who did it before them—they are only accurate at sorting what has already occurred.  Despite all attempts, they are not as good as a well-equipped person at anticipating unforeseen outliers and anomalies.  For instance, many attempts have been made to use data science to predict the outcome of legislation—and while these algorithms work on the obvious bills, they are no better than a coin toss at predicting the outcome of legislation that is too close for any human to call—essentially, the machine is right only about half of the time.
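
A synthetic simulation makes the arithmetic of that claim visible: if a model is nearly perfect on lopsided bills but is effectively guessing on contested ones, its accuracy on the contested subset hovers around fifty percent.  The twenty-percent toss-up share and the ninety-five-percent hit rate below are invented numbers, chosen only to illustrate the point; this is not any real legislative model.

# A synthetic illustration: accuracy on contested bills collapses toward a coin toss.
import random

random.seed(0)

def simulate(n_bills=10_000):
    obvious_correct = close_correct = 0
    n_obvious = n_close = 0
    for _ in range(n_bills):
        is_close = random.random() < 0.2          # assume ~20% of bills are toss-ups
        passes = random.random() < 0.5            # the bill's true outcome
        if is_close:
            prediction = random.random() < 0.5    # the model is effectively guessing
            n_close += 1
            close_correct += (prediction == passes)
        else:
            # On lopsided bills the model is assumed right 95% of the time.
            prediction = passes if random.random() < 0.95 else not passes
            n_obvious += 1
            obvious_correct += (prediction == passes)
    print(f"lopsided bills: {obvious_correct / n_obvious:.1%} correct")
    print(f"too-close-to-call bills: {close_correct / n_close:.1%} correct")  # ~50%

simulate()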

 

The Tyranny of Actuaries       

A recent White House report declared that data science has the potential to be incredibly discriminatory.  The availability of all the data we now generate, combined with the increasing use of software and the assumptions attached to all of our mundane actions, is creating a world in which people will be denied freedoms, in the form of home loans, health insurance, and access to information, without any knowledge of why or how, and without any recourse.
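
As a purely hypothetical illustration of how such a denial can happen without explanation or recourse, consider a toy scoring rule.  The weights, the threshold, and the use of a ZIP code as a feature are all invented; the point is only that a proxy variable can quietly carry the discrimination while the applicant sees nothing but the outcome.

# A toy, entirely hypothetical loan-scoring rule.  Weights and threshold are
# invented; the point is that a proxy feature (here, ZIP code) can quietly
# encode discrimination while the applicant only ever sees "denied".

ZIP_CODE_PENALTY = {"10001": 0, "63106": -25}   # invented values standing in for a
                                                # historical pattern hiding in the data

def loan_score(income_thousands, years_at_job, zip_code):
    score = 0.5 * income_thousands + 4.0 * years_at_job
    score += ZIP_CODE_PENALTY.get(zip_code, 0)
    return score

def decision(score, threshold=60.0):
    # The applicant is told only the outcome, not the inputs or the threshold.
    return "approved" if score >= threshold else "denied"

a = loan_score(income_thousands=80, years_at_job=5, zip_code="10001")  # 60.0 -> approved
b = loan_score(income_thousands=80, years_at_job=5, zip_code="63106")  # 35.0 -> denied
print(decision(a), decision(b))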

This pervasive monitoring culture, already in effect in post-September 11th New York City, deters individuals from acting for fear of constantly being watched.  For instance, during the New York City blackout of 2003 and the longer, less well-known Queens blackout of 2006, crime was lower than normal, as people felt they were being watched by authorities even when the power was out.

At present, even despite the White House report, our society is trending toward a complete and total metadata surveillance society that goes well beyond what the NSA is doing.  Private companies already maintain data on everything they are able to access.  It is not necessarily the data itself that is the issue, but the interpretation and use of this data by those who base their entire worldview in logic while simultaneously rejecting the humanity and emotion in our actions.  These assumptions about who we are and how we will act, based on theories, math equations, or general prejudices, prove themselves wrong time and time again, and are often positioned to give those in power even more power.  Despite the possibility that better data analysis could improve our individual quality of life, those with the power to master these techniques and to control the facilities that distribute them very rarely enter with the goal of empowering humanity on an individual, decentralized level.

 

Singularity

Silicon Valley is obsessed with the concept of singularity—the idea that we are advancing toward machines and humans becoming one someday soon.  Technologists, self-proclaimed futurists, their bankers, and their business and marketing attachés use this as a justification for their personal greed, or to ignore the raw humanity they are repressing through their actions to generate greater return on investment.  In a sense, these perpetrators of the advancement of machines are using the concept of singularity as a way to capture more of our humanity and control it through their companies and their greed.  It is not just a way for them to sleep easily at night—the supposed inevitability that we will all be controlled by machines and vice versa—they get giddy at the prospect of planning the demise of free will and individuality among humans.  They will be the humans closest to the machines.  It is akin to those who joined up with Adolf Hitler before it was really popular—and a great many of these people control a large chunk of our technology sector in Silicon Valley and beyond.

The concept of homo generator, as I understand it, will employ machines to do physical work so that humans can focus on more complex activities—at its foundation it is a humanist notion.  These latest trends in technology, however, do not point toward this eventual reality—they seek machine control over humans, with an uber council of human programmers who retain some ultimate control.  In reality, there is a willingness on the part of humans to accept the imperfections of machines, or at least to assume that we will reach a point when machines are so improved that imperfections are no longer an issue – but without a proper symbiosis, humanity is poised to rebel against its machinery time and time again before homo generator is fully realized.

 

What’s the Big Idea?

I have not heard of a venture capital firm, or even a start-up idea, that attempts to create what could be the next BIG revolution—the connection of minds.  Where do our ideas exist?  How do they become reality?  What do those brainwaves look like, and how could they be tuned to be interpreted by another human in real time?  These questions have serious interpersonal implications.  How would business deals be conducted?  How would sex work?  Would we need blockers to prevent hearing each other’s thoughts?  What are animals afraid of?  What keeps them from attacking us?  This type of technology, should it ever be conceived, would turn our entire world on its head.  The meaning of our writing, our sense of history, our poetry would all change drastically overnight—in much grander fashion than that of the internet and our world of blind intentions.

Twitter is the closest existing technology to a rapid understanding of what someone else may be thinking—except it is actually what they want you to think they are thinking, not what is going on in their mind.  You can read the historic record of what users wanted you to think at the moment the ideas came to their minds, but they had to act with intention, and they could have changed the thought by the time it reached their keyboards.  While many pseudo-intellectuals and old-timey journalists speak of their amazement at the speed of 24/7 communication, if we incorporated wearable technologies, like health-monitoring watches and the transmission of bio-readings to others in the room, we would in short order see a far larger flow of information about what the other person is experiencing on a raw, human level.

The big idea is knowing what every living creature on the planet thinks, as well as their perspectives.  Beyond our planet, of course, lies an even bigger idea.  So, yes, we have microwave popcorn, notifications that pester us to spend money at a store when we walk by it, and ads that attempt to read our minds about what we want, built on generalizations drawn from a cloudy nether world of data salesmen.

We are extremely primitive, and the excitement about our technologically advanced times is also a tyranny.  The fascination with, and defense of, our recently established and opaque internet platforms stabilizes power in the hands of the few, for the moment, and it prevents further innovation that could bring our entire society into homo generator: a world where the machines work more for us than we work for our machines.

The Programming of Robotic American Culture – TOC, Introduction, Preface

Thursday, February 26th, 2015

The Programming of Robotic American Culture


A Dissertation Submitted to the

Division of Media and Communication

of The European Graduate School

in Candidacy for the Degree of

Doctor of Philosophy

 

Daniel P. Beckmann

November 2014


Table of Contents

 

INTRODUCTION

PREFACE

                        The Provocation of Study

The Programming of Robotic American Culture Project as Explained

Why American Culture?

            PART I – Time Periods

1.1  PRE-INDUSTRIAL REVOLUTION: Villages

1.2  THE BROADCAST ERA: Forrest Gump the Editor

1.3  DIGITAL REVOLUTION: 1998-2008

1998 The Aggregator

The Search Engine

The Webloggers

2004-2008 Social Networking/Viral Video

 

1.4  PRESENT TIMES: An Obscure Time

Obscurity in Life or Death

Viral is Dead – There is No Such Thing as Viral

The Selfie (omg)

Venture Capital as a Study

The Unshared Economy

Self in the Age of Obscurity

1.5 NEAR FUTURE: Refinements

Welcome Logjams

It’s All Slowing Down

      Our Data Residue

      The Tyranny of the Actuaries

Singularity

What’s the BIG idea?

PART II – Theories in Programming Robotic American Culture

                        2.0 PROGRAMMINGS & Homogenerator

                        2.1 Robots vs. Programmers

                        2.2 A Discussion in Robotic Culture 

Alright everyone now-IN-FORMATION!

Robotic formations: Answers first, then proof

 

2.3 A Discussion in Programming

2.4  Robot Vs. The Programmer

2.5  Homogenerator: Applying Philosophy to The Programming of Robotic American Culture

                                    The Role of Metaphysics

                                    New Dimensions in Programming

 

2.6 Introduction to the American Mainstream as Model of Robotic Programming

                                    Functionality of the American Mainstream in 2007

                                    Theories in Programming for the American Mainstream Audience

            PART III – Applications in the Real

3.1 THE 4 step PRO-cess for IDEA CREATION:

3.2 Subjectivities Accepted: The Formula for Finding Most Agreeable Programming

Broadcast News

                                    The Eagles

 

                        3.3 Ending Subjectivities

Broadcast Anchor Leaves

 

3.4 Real Potentialities for the Programming of Robotic American Culture in the Future

So Much Potential

                                    The Pen vs. The Internet

                                    The Egocentricity of the ‘New Media’

                                    Infantile Stage and Here’s Why

                                    Disruptive Technologies: User Generated Content

                                    How to Program for the New Technologies

 

PART IV – Ephemeral Writings

 

                        The Robots vs. The Programmers

NYT’s Digital Report is BAD NEWS

                        Obvious Places have Obvious Faces

                        Contempt for Content 

                        Political Programming

                        Kick the Bucket List

 

PART V – Projects in the Real

                        5.1- Tomarken.com

5.2- The Blindspot/ABC News

5.3- Beatniks Project/ABC News

5.4- Current Journalism/Current TV

5.5- Viral Videos/Supporter Generated Content/Obama For America ‘08

5.6- Schmooru Network

5.7 – The Narrow Show/Schmooru

5.8- Art as Life/Schmooru

5.9- Correlate For Congress/ IB5k

5.10- Correlate.io

ACKNOWLEDGEMENTS


 

 

INTRODUCTION

The European Graduate School in Saas-Fee, Switzerland, is like no other place on earth.  It is the only place a student like me, whose education embodies my entire life, can successfully execute my academic mission.  I earned a traditional Master of Science in Journalism degree from Northwestern University.  In most cases, this is the terminal degree in my profession, and along with my wealth of work experience, it would enable me to teach at many esteemed institutions of higher learning throughout the world.

Northwestern did not allow me to complete the research I needed to fully understand the work of a journalist.  It did not encourage me to think, or to question how the media works.  I did publish what one might consider a master’s thesis, but the Northwestern registry labeled it a “special project”.  That thesis, entitled “New Business Models in Broadcast Journalism”, provided the foundation for the first part of my journalistic career.  It foretold the downfall of traditional business models and proposed some of the new ones that I have tried to employ, and that others have failed to attempt, over the last 10 years.

For the past decade, I have been working on this dissertation.  While some have suggested I “finish this already” or thought I wasn’t serious about completing it, the truth is that I had not felt comfortable asserting anything that would stand the test of time, or even hold up a few years from now.  The 10 years since I started my work at EGS have been marked by tremendous upheaval in our understanding of how American culture is programmed.  The pace of technological development, and our reactions to it, has slowed enough over the last four to six years that I now have a reasonable basis upon which to build a foundation for my life’s work, which will be explained in the pages to come.  As a result, an entire part of this paper deals strictly with the changes that have taken place over the 10 years since I began this piece.  This is part of the experimentation I have conducted since I began my work at EGS.

 

A Different Dissertation

While I do intend this to be a philosophy paper, it will read dramatically differently from the dissertations produced at traditional American institutions.  There’s a reason I’ve chosen to do my studies on foreign soil at the European Graduate School.  It is the only institution I have found that tolerates my study methods and the way I can best convey my ideas.  EGS’s faculty members are well known for their experimental nature, in addition to their groundbreaking and provocative work in their respective fields.

In particular, I was drawn to the ability to continue my daily work in an attempt to bring the theoretical into real, everyday practice.  While it might have been nice to dedicate my entire time to academic study, the ability to apply theory in the real is an experience singular to EGS, and it made this dissertation like no other.

           

Unfinished Potential

In part, this paper deals with the potentiality of writing, or even of starting, such a task, an idea attributable to the influence of EGS professor Giorgio Agamben.  All we have in life is the potential that we work with for the future.

I have included an ephemeral section of writings and examples of my work over the last 10 years to move the reader away from a traditional academic perspective and closer to the front lines where media philosophies find their struggle and conflict.

This paper will never be completed, nor will my work.  It serves as the foundation for the potentialities in further research and development in the areas I will outline.  If this paper were ever actually finished, then, to use the terminology of Wolfgang Schirmacher, homo generator would cease to generate.  These dissertations and descriptions will indeed act as a guide for the enrichment of homogenerations to come.

 

Where are the footnotes?

It is never my intention to present anyone else’s ideas as my own.  I will credit ideas wherever possible—conversationally, as part of the text—so that acknowledgment doesn’t become buried as it so often is in traditional academic works.  My objective for this project is plausible deniability.  Few in my generation have had the chance to read enough to have full command of everything written on their topics before writing a dissertation.  Furthermore, true understanding of another’s written or even spoken words is, in this writer’s view, almost impossible.

When EGS faculty member Jacques Derrida died while I was enrolled in school, but before I could take his class, it seemed that some philosophers breathed a sigh of relief, because Derrida was no longer there to criticize them if they butchered his words and intentions to serve their own purposes in papers such as this.  In order to respect the genius of Derrida and of all who came before and after him, I will take great care to avoid inflicting similar damage on their hard work.  I will indicate when I am referring to someone else’s influence or programming whenever and wherever possible, but I do not claim to offer a total understanding of another’s work in this regard.  Moreover, I cannot claim that there are no further or more representative examples of where these ideas were originally gleaned.

I will be as complete in expressing my philosophy as I am able to be at this point.  After the paper is published, I will continue to seek out criticism of everything presented here.  When the ideas presented here cease to be discussed, my work will no longer be alive.

 

Please email me at beckmann@tomarken.com at any time to tell me what you think of my approach in writing this paper.

 

-Daniel Paul Beckmann

Omaha, NE 03/2007

Toledo, OH 07/2014

San Francisco, CA 09/2014

 

 

PREFACE

The Provocation of Study

I originally started my consideration of the programming of Robotic American Culture as people used to pass me in the hall at Ottawa Hills High School in Toledo, Ohio.  It is customary in the Midwest for people to greet each other as they pass one another, whether or not they like the person or even know them.  Strangers always say “hi” to each other.  This is part of the reason that the Midwestern United States has a reputation for being ‘friendly’, at least on the surface.

There is a significantly greater meaning behind why this greeting is such an important currency in the Midwest.  I didn’t know much of the deeper context when I first started out.  I did know that it greatly irritated me, and I began to react to it differently from my peers as my way of coping.  I began to grunt at people as they walked past.  It wasn’t always in such a negative tone – they still got the impression that I meant the same thing as saying ‘hi’, but my friendly grunting often elicited a different reaction.

At first, most people laughed.  Laughter is an acceptable reaction when someone has perceived the truth but doesn’t know how else to react.  In this case, it wasn’t the truth I was telling them—at least not explicitly.  Instead, the Midwesterner encountered programming for which there is no script.  The truth was: why do we all feel the need to exchange these pleasantries when we may not actually know or like each other at all?  But then, in place of ‘hi’, I grunted.  It did not compute.  The error message can come in three different forms—laughter, anger, and no response.

Most people would ask: why not just let it go and enjoy the hospitality?  The problem was that the ‘hi’ didn’t really say anything at all about the greeters’ personalities, their real intentions, or what they actually thought of me.  The ‘hi’ was simply a means to keep the peace.  The repressive Midwestern culture still has not begun to completely boil over—although it is starting to show its wear in places like Ferguson, Missouri, where they are just now beginning to talk about the awkward non-greetings between white and black people over at least the last hundred years.  This repression is based in this scripted form of interaction.  Passing on the street: “Hey, what’s up?”  Asking a waiter for something: “SORRY, can I have some ketchup?”  Are you really sorry to ask for the ketchup?  No.  But you disturbed the silence, and almost all conversations among strangers are awkward in the Midwest.  People are not saying what they feel to each other—they would mostly rather not talk at all.  (Midwesterners are programmed to reject this hypothesis at ALL costs.  Ask them about it.  See how far you get.  In the East, they tell you how they actually feel, or nothing at all.  In the West, you may not even register to them; they are into themselves.  In the South, they genuinely want to have a friendly culture among their fellow citizens, for better or for worse.)

 

The Programming of Robotic American Culture Project as Explained

The example I’ve been using to explain this project to everyone with whom I have discussed this paper over the last ten years since the name was conceived is such a useful one that it deserves high billing in this preface.

 

When anyone enters a 7-11 anywhere in the U.S., no one ever fears that she will not be able to operate one.  The customer goes inside and looks for her desired purchases.  The clerk watches from the counter until the shopper puts the items on the counter, then tallies the total, gives change, and says, “Thank you, come again.”  There may be variances on that theme, but something, or a group of things, decided this cultural programming long ago—what were they?  7-11 does not have to work this way.  In other countries, things work differently at times.  In some cultures, any discussion at all is offensive or perceived as a waste of time.  In yet other cultures, not stopping to have a more in-depth discussion with the live person in front of your face is perceived as curt and offensive.

I am interested in how these programmings originate, how certain ones catch on, and why others do not.  What other programmings are already out there that I do not yet perceive, and which ones have already died out?  The pursuit of the origination, dissemination, and inevitable doom of these programmings will be my life’s work.

 

Why American Culture?

Why not human interactions overall?  Why limit myself?  Put simply, I have been a student of this culture and its programming from a code level to a folk capacity my entire life.  I understand it well, even beyond what language allows.  I’m certain some readers will want a larger area of focus.  The Holocaust and World War II are a common period of discussion at EGS.  There were certainly cultural programmings present that led to the mass execution of several million Jews, including a great many of my relatives.

Despite the American focus of this paper, I have to admit that this period evoked serious questions within me and has affected the foundation of this paper.  I have often said that the reason I found my way to journalism, and still work in this field in the capacities in which I do, has a lot to do with the hundreds of relatives, of whom I am one of the only descendants, who had no one to hear their screams as they were swept from their homes, broken apart, dehumanized, and killed.

My grandfather was the lone survivor of a very large family of rabbinical Jews. There are many questions about why he may have survived.  He had blond hair and blue eyes.  He also escaped and joined the Polish partisans.  Ultimately, it is unknown why he escaped.  My grandfather probably only talked to me about his experience for a grand total of 2 to 3 hours over the entire 16 years that I knew him.  He settled in the American Midwest, in suburban St. Louis, Missouri.  His perspectives—even those not associated directly with the first half of his life—helped me to realize just how interesting our open, yet repressive culture is, but also how it could be used to program large groups of people to do inhumane things.

I’m motivated to do this work to give voice to the people who are screaming right now, whom no one can hear.  I write this dissertation and continue my life’s work to try to understand how these atrocities continue to occur.  It is my view that we, as humanity, through our digital tools, which often obscure the truth instead of refining it, are moving farther away from understanding these questions and, despite the endless cheerleaders to the contrary, even farther away from a clearer, more enlightened point of view.