- The definition written up by O'Reilly, host of the October 2004 meeting that first coined the term "Web 2.0": http://www.oreillynet.com/lpt/a/6228
- Site of the international conference series held since 2005: http://www.web2con.com/
- A summary by a Korean IT columnist: http://www.dal.co.kr/chair/semanticweb/sw1501.html - his book: http://www.dal.co.kr/chair/semanticweb/sw.html (Last year I heard Kim Jungtae lecture twice, at two different gatherings; the talks were shallow in technical and theoretical depth, but made engaging points about the subject in the context of everyday life and culture.)
The Operations = the layer of technique at work above the interface, i.e., the general techniques and commands common to application software programs = ways of processing computer data and, at the same time, ways of working, thinking, and being in the computer age: they exist as concepts before being realized in hardware and software
"Communication between society and the use and design of software runs in both directions."
- Cases where producer and user overlap are increasing rapidly, e.g., game patches
* Selection - Menus, Filters, Plug-ins : selection from a menu takes the place of pure creation : in electronic and digital media, artistic creation means selecting from ready-made elements : new media is the clearest expression of the identity logic of a society that chooses its values from countless predefined menus.
* Compositing : the result of compositing is virtual space : the interplay (between compositing and selection) is made possible because new media objects are composed modularly at multiple scales : since the elements retain their individual identities throughout the production process, they are easily modified, substituted, and deleted. - modularity - continuity
* Teleaction <= telepresence : provides the ability to manipulate physical reality in real time : the essence of telepresence is anti-presence. - telecommunication: electronic telecommunication + the computer => real-time manipulation of objects via signals
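The "compositing" note above — separate elements combined into one image while each keeps its own identity — can be sketched in code. The following is a minimal, hypothetical illustration using NumPy arrays as stand-in image layers and the standard "over" blend; it models the concept, not any actual compositing software.

```python
# A hypothetical sketch of compositing: layers are kept as separate arrays
# (modularity), blended with the standard "over" operator into one image
# (continuity), while each layer remains individually editable at any time.
import numpy as np

def over(top_rgb, top_alpha, bottom_rgb):
    """Composite a translucent top layer over a bottom layer."""
    return top_alpha * top_rgb + (1.0 - top_alpha) * bottom_rgb

# Two single-pixel layers (RGB floats in [0, 1]) standing in for media elements.
background = np.array([0.0, 0.0, 1.0])   # blue background layer
foreground = np.array([1.0, 0.0, 0.0])   # red foreground element

frame = over(foreground, 0.5, background)   # blend of both layers: [0.5, 0, 0.5]

# Because layers keep their identities, "modification" means editing or
# swapping a layer and re-compositing -- not repainting the final image.
foreground = np.array([0.0, 1.0, 0.0])      # replace the red element with green
frame = over(foreground, 0.5, background)   # now [0, 0.5, 0.5]
```

The point of the sketch is that the final frame is always derived from the layers, so any element can be modified, substituted, or deleted without touching the others.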
In his story Sarrasine, Balzac, speaking of a castrato disguised as a woman, writes this sentence: "It was Woman, with her sudden fears, her irrational whims, her instinctive fears, her unprovoked bravado, her daring and her delicious delicacy of feeling." Who is speaking in this way? Is it the story's hero, concerned to ignore the castrato concealed beneath the woman? Is it the man Balzac, endowed by his personal experience with a philosophy of Woman? Is it the author Balzac, professing certain "literary" ideas of femininity? Is it universal wisdom? or romantic psychology? It will always be impossible to know, for the good reason that all writing is itself this special voice, consisting of several indiscernible voices, and that literature is precisely the invention of this voice, to which we cannot assign a specific origin: literature is that neuter, that composite, that oblique into which every subject escapes, the trap where all identity is lost, beginning with the very identity of the body that writes.
· · ·
Probably this has always been the case: once an action is recounted, for intransitive ends, and no longer in order to act directly upon reality — that is, finally external to any function but the very exercise of the symbol — this disjunction occurs, the voice loses its origin, the author enters his own death, writing begins. Nevertheless, the feeling about this phenomenon has been variable; in primitive societies, narrative is never undertaken by a person, but by a mediator, shaman or speaker, whose "performance" may be admired (that is, his mastery of the narrative code), but not his "genius." The author is a modern figure, produced no doubt by our society insofar as, at the end of the middle ages, with English empiricism, French rationalism and the personal faith of the Reformation, it discovered the prestige of the individual, or, to put it more nobly, of the "human person." Hence it is logical that with regard to literature it should be positivism, résumé and the result of capitalist ideology, which has accorded the greatest importance to the author's "person." The author still rules in manuals of literary history, in biographies of writers, in magazine interviews, and even in the awareness of literary men, anxious to unite, by their private journals, their person and their work; the image of literature to be found in contemporary culture is tyrannically centered on the author, his person, his history, his tastes, his passions; criticism still consists, most of the time, in saying that Baudelaire's work is the failure of the man Baudelaire, Van Gogh's work his madness, Tchaikovsky's his vice: the explanation of the work is always sought in the man who has produced it, as if, through the more or less transparent allegory of fiction, it was always finally the voice of one and the same person, the author, which delivered his "confidence."
· · ·
Though the Author's empire is still very powerful (recent criticism has often merely consolidated it), it is evident that for a long time now certain writers have attempted to topple it. In France, Mallarmé was doubtless the first to see and foresee in its full extent the necessity of substituting language itself for the man who hitherto was supposed to own it; for Mallarmé, as for us, it is language which speaks, not the author: to write is to reach, through a preexisting impersonality — never to be confused with the castrating objectivity of the realistic novelist — that point where language alone acts, "performs," and not "oneself": Mallarmé's entire poetics consists in suppressing the author for the sake of the writing (which is, as we shall see, to restore the status of the reader). Valéry, encumbered with a psychology of the Self, greatly edulcorated Mallarmé's theory, but, turning in a preference for classicism to the lessons of rhetoric, he unceasingly questioned and mocked the Author, emphasized the linguistic and almost "chance" nature of his activity, and throughout his prose works championed the essentially verbal condition of literature, in the face of which any recourse to the writer's interiority seemed to him pure superstition. It is clear that Proust himself, despite the apparent psychological character of what is called his analyses, undertook the responsibility of inexorably blurring, by an extreme subtilization, the relation of the writer and his characters: by making the narrator not the person who has seen or felt, nor even the person who writes, but the person who will write (the young man of the novel — but, in fact, how old is he, and who is he?
— wants to write but cannot, and the novel ends when at last the writing becomes possible), Proust has given modern writing its epic: by a radical reversal, instead of putting his life into his novel, as we say so often, he makes his very life into a work for which his own book was in a sense the model, so that it is quite obvious to us that it is not Charlus who imitates Montesquiou, but that Montesquiou in his anecdotal, historical reality is merely a secondary fragment, derived from Charlus. Surrealism lastly — to remain on the level of this prehistory of modernity — surrealism doubtless could not accord language a sovereign place, since language is a system and since what the movement sought was, romantically, a direct subversion of all codes — an illusory subversion, moreover, for a code cannot be destroyed, it can only be "played with"; but by abruptly violating expected meanings (this was the famous surrealist "jolt"), by entrusting to the hand the responsibility of writing as fast as possible what the head itself ignores (this was automatic writing), by accepting the principle and the experience of a collective writing, surrealism helped secularize the image of the Author. Finally, outside of literature itself (actually, these distinctions are being superseded), linguistics has just furnished the destruction of the Author with a precious analytic instrument by showing that utterance in its entirety is a void process, which functions perfectly without requiring to be filled by the person of the interlocutors: linguistically, the author is never anything more than the man who writes, just as I is no more than the man who says I: language knows a "subject," not a "person," and this subject, void outside of the very utterance which defines it, suffices to make language "work," that is, to exhaust it.
· · ·
The absence of the Author (with Brecht, we might speak here of a real "alienation," the Author diminishing like a tiny figure at the far end of the literary stage) is not only a historical fact or an act of writing: it utterly transforms the modern text (or — what is the same thing — the text is henceforth written and read so that in it, on every level, the Author absents himself). Time, first of all, is no longer the same. The Author, when we believe in him, is always conceived as the past of his own book: the book and the author take their places of their own accord on the same line, cast as a before and an after: the Author is supposed to feed the book — that is, he pre-exists it, thinks, suffers, lives for it; he maintains with his work the same relation of antecedence a father maintains with his child. Quite the contrary, the modern writer (scriptor) is born simultaneously with his text; he is in no way supplied with a being which precedes or transcends his writing, he is in no way the subject of which his book is the predicate; there is no other time than that of the utterance, and every text is eternally written here and now.
This is because (or: it follows that) to write can no longer designate an operation of recording, of observing, of representing, of "painting" (as the Classic writers put it), but rather what the linguisticians, following the vocabulary of the Oxford school, call a performative, a rare verbal form (exclusively given to the first person and to the present), in which utterance has no other content than the act by which it is uttered: something like the I Command of kings or the I Sing of the early bards; the modern writer, having buried the Author, can therefore no longer believe, according to the "pathos" of his predecessors, that his hand is too slow for his thought or his passion, and that in consequence, making a law out of necessity, he must accentuate this gap and endlessly "elaborate" his form; for him, on the contrary, his hand, detached from any voice, borne by a pure gesture of inscription (and not of expression), traces a field without origin — or which, at least, has no other origin than language itself, that is, the very thing which ceaselessly questions any origin.
· · ·
We know that a text does not consist of a line of words, releasing a single "theological" meaning (the "message" of the Author-God), but is a space of many dimensions, in which are wedded and contested various kinds of writing, no one of which is original: the text is a tissue of citations, resulting from the thousand sources of culture. Like Bouvard and Pécuchet, those eternal copyists, both sublime and comical and whose profound absurdity precisely designates the truth of writing, the writer can only imitate a gesture forever anterior, never original; his only power is to combine the different kinds of writing, to oppose some by others, so as never to sustain himself by just one of them; if he wants to express himself, at least he should know that the internal "thing" he claims to "translate" is itself only a readymade dictionary whose words can be explained (defined) only by other words, and so on ad infinitum: an experience which occurred in an exemplary fashion to the young De Quincey, so gifted in Greek that in order to translate into that dead language certain absolutely modern ideas and images, Baudelaire tells us, "he created for it a standing dictionary much more complex and extensive than the one which results from the vulgar patience of purely literary themes" (Paradis Artificiels). Succeeding the Author, the writer no longer contains within himself passions, humors, sentiments, impressions, but that enormous dictionary, from which he derives a writing which can know no end or halt: life can only imitate the book, and the book itself is only a tissue of signs, a lost, infinitely remote imitation.
· · ·
Once the Author is gone, the claim to "decipher" a text becomes quite useless. To give an Author to a text is to impose upon that text a stop clause, to furnish it with a final signification, to close the writing. This conception perfectly suits criticism, which can then take as its major task the discovery of the Author (or his hypostases: society, history, the psyche, freedom) beneath the work: once the Author is discovered, the text is "explained," the critic has conquered; hence it is scarcely surprising not only that, historically, the reign of the Author should also have been that of the Critic, but that criticism (even "new criticism") should be overthrown along with the Author. In a multiple writing, indeed, everything is to be distinguished, but nothing deciphered; structure can be followed, "threaded" (like a stocking that has run) in all its recurrences and all its stages, but there is no underlying ground; the space of the writing is to be traversed, not penetrated: writing ceaselessly posits meaning but always in order to evaporate it: it proceeds to a systematic exemption of meaning. Thus literature (it would be better, henceforth, to say writing), by refusing to assign to the text (and to the world as text) a "secret," that is, an ultimate meaning, liberates an activity which we might call counter-theological, properly revolutionary, for to refuse to arrest meaning is finally to refuse God and his hypostases, reason, science, the law.
· · ·
Let us return to Balzac's sentence: no one (that is, no "person") utters it: its source, its voice is not to be located; and yet it is perfectly read; this is because the true locus of writing is reading. Another very specific example can make this understood: recent investigations (J. P. Vernant) have shed light upon the constitutively ambiguous nature of Greek tragedy, the text of which is woven with words that have double meanings, each character understanding them unilaterally (this perpetual misunderstanding is precisely what is meant by "the tragic"); yet there is someone who understands each word in its duplicity, and understands further, one might say, the very deafness of the characters speaking in front of him: this someone is precisely the reader (or here the spectator). In this way is revealed the whole being of writing: a text consists of multiple writings, issuing from several cultures and entering into dialogue with each other, into parody, into contestation; but there is one place where this multiplicity is collected, united, and this place is not the author, as we have hitherto said it was, but the reader: the reader is the very space in which are inscribed, without any being lost, all the citations a writing consists of; the unity of a text is not in its origin, it is in its destination; but this destination can no longer be personal: the reader is a man without history, without biography, without psychology; he is only that someone who holds gathered into a single field all the paths of which the text is constituted. This is why it is absurd to hear the new writing condemned in the name of a humanism which hypocritically appoints itself the champion of the reader's rights. The reader has never been the concern of classical criticism; for it, there is no other man in literature but the one who writes. 
We are now beginning to be the dupes no longer of such antiphrases, by which our society proudly champions precisely what it dismisses, ignores, smothers or destroys; we know that to restore to writing its future, we must reverse its myth: the birth of the reader must be ransomed by the death of the Author.
— translated by Richard Howard
The voice loses its origin, the author enters his own death, writing begins.
Language is a system, and a code cannot be destroyed; it can only be "played with".
Linguistics has just furnished the destruction of the Author with a precious analytic instrument by showing that utterance in its entirety is a void process, which functions perfectly without requiring to be filled by the person of the interlocutors.
There is no other time than that of the utterance, and every text is eternally written here and now.
A text is a space of many dimensions, in which are wedded and contested various kinds of writing, no one of which is original: the text is a tissue of citations, resulting from the thousand sources of culture.
If the writer wants to express himself, at least he should know that the internal “thing” he claims to “translate” is itself only a readymade dictionary whose words can be explained (defined) only by other words, and so on ad infinitum.
Literature (it would be better to say writing) liberates an activity which we might call counter-theological, properly revolutionary, for to refuse to arrest meaning is finally to refuse God and his hypostases, reason, science, the law.
The true locus of writing is reading.
A text consists of multiple writings, issuing from several cultures and entering into dialogue with each other, into parody, into contestation; but there is one place where this multiplicity is collected, united, and this place is not the author, but the reader: the reader is the very space in which are inscribed, without any being lost, all the citations a writing consists of; the unity of a text is not in its origin, it is in its destination.
The birth of the reader must be ransomed by the death of the Author.
The privileged role played by the manual
construction of images in digital cinema is one example of a larger
trend: the return of pre-cinematic moving-image techniques.
Marginalized by the twentieth century institution of live action
narrative cinema which relegated them to the realms of animation and
special effects, these techniques reemerge as the foundation of digital
filmmaking. What was supplemental to cinema becomes its norm; what was
at its boundaries comes into the center. Digital media returns to us
the repressed of the cinema. Moving image culture is being redefined
once again; cinematic realism is being displaced from its position as
the dominant mode to become only one option among many.
Cinema, the Art of the Index
Thus far, most discussions of cinema in the digital age have focused on
the possibilities of interactive narrative. It is not hard to
understand why: since the majority of viewers and critics equate cinema
with storytelling, digital media is understood as something which will
let cinema tell its stories in a new way. Yet as exciting as the ideas
of a viewer participating in a story, choosing different paths through
the narrative space and interacting with characters may be, they only
address one aspect of cinema which is neither unique nor, as many will
argue, essential to it: narrative.
The challenge which digital media poses to cinema
extends far beyond the issue of narrative. Digital media redefines the
very identity of cinema. In a symposium which took place in Hollywood
in the Spring of 1996, one of the participants
provocatively referred to movies as "flatties" and to human actors as
"organics" and "soft fuzzies." As these terms accurately suggest, what
used to be cinema's defining characteristics have become just the
default options, with many others available. When one can "enter" a
virtual three-dimensional space, to view flat images projected on the
screen is hardly the only option. When, given enough time and money,
almost everything can be simulated in a computer, to film physical
reality is just one possibility.
This "crisis" of cinema's identity also affects the terms and the
categories used to theorize cinema's past. French film theorist Christian Metz
wrote in the 1970s that "Most films shot today, good or bad, original
or not, 'commercial' or not, have as a common characteristic that they
tell a story; in this measure they all belong to one and the same
genre, which is, rather, a sort of 'super-genre' ['sur-genre']."
In identifying fictional films as a "super-genre" of twentieth century
cinema, Metz did not bother to mention another characteristic of this
genre because at that time it was too obvious: fictional films are live action
films, i.e. they largely consist of unmodified photographic recordings
of real events which took place in real physical space. Today, in the
age of computer simulation and digital compositing, invoking this
characteristic becomes crucial in defining the specificity of twentieth
century cinema. From the perspective of a future historian of visual
culture, the differences between classical Hollywood films, European
art films and avant-garde films (apart from abstract ones) may appear
less significant than this common feature: that they relied on
lens-based recordings of reality. This essay is concerned with the
effect of the so-called digital revolution on cinema as defined by
its "super-genre" as fictional live action film.
During cinema's history, a whole repertoire of
techniques (lighting, art direction, the use of different film stocks
and lenses, etc.) was developed to modify the basic record obtained by a
film apparatus. And yet behind even the most stylized cinematic images
we can discern the bluntness, the sterility, the banality of early
nineteenth century photographs. No matter how complex its stylistic
innovations, the cinema has found its base in these deposits of
reality, these samples obtained by a methodical and prosaic process.
Cinema emerged out of the same impulse which engendered naturalism,
court stenography and wax museums. Cinema is the art of the index; it
is an attempt to make art out of a footprint.
Even for Andrey Tarkovsky, film-painter par
excellence, cinema's identity lay in its ability to record reality.
Once, during a public discussion in Moscow sometime in the 1970s he was
asked the question as to whether he was interested in making abstract
films. He replied that there can be no such thing. Cinema's most basic
gesture is to open the shutter and to start the film rolling, recording
whatever happens to be in front of the lens. For Tarkovsky, an abstract
cinema is thus impossible.
But what happens to cinema's indexical identity if it is now possible
to generate photorealistic scenes entirely in a computer using 3-D
computer animation; to modify individual frames or whole scenes with
the help of a digital paint program; to cut, bend, stretch and stitch
digitized film images into something which has perfect photographic
credibility, although it was never actually filmed?
This essay will address the meaning of these changes
in the filmmaking process from the point of view of the larger cultural
history of the moving image. Seen in this context, the manual
construction of images in digital cinema represents a return to
nineteenth century pre-cinematic practices, when images were
hand-painted and hand-animated. At the turn of the twentieth century,
cinema was to delegate these manual techniques to animation and define
itself as a recording medium. As cinema enters the digital age, these
techniques are again becoming commonplace in the filmmaking
process. Consequently, cinema can no longer be clearly distinguished
from animation. It is no longer an indexical media technology but,
rather, a sub-genre of painting.
A Brief Archeology of Moving Pictures
As testified by its original names (kinetoscope, cinematograph, moving
pictures), cinema was understood, from its birth, as the art of motion,
the art which finally succeeded in creating a convincing illusion of
dynamic reality. If we approach cinema in this way (rather than the art
of audio-visual narrative, or the art of a projected image, or the art
of collective spectatorship, etc.), we can see it superseding previous
techniques for creating and displaying moving images.
These earlier techniques shared a number of common
characteristics. First, they all relied on hand-painted or hand-drawn
images. The magic lantern slides were painted at least until the 1850s;
so were the images used in the Phenakistiscope, the Thaumatrope, the
Zootrope, the Praxinoscope, the Choreutoscope and numerous other
nineteenth century pre-cinematic devices. Even Muybridge's celebrated Zoopraxiscope lectures of the 1880s featured not actual photographs but colored drawings painted after the photographs.
Not only were the images created manually, they were also manually animated. In Robertson's Phantasmagoria,
which premiered in 1799, magic lantern operators moved behind the
screen in order to make projected images appear to advance and
withdraw. More often, an exhibitor used only his hands, rather than his
whole body, to put the images into motion. One animation technique
involved using mechanical slides consisting of a number of layers. An exhibitor
would slide the layers to animate the image. Another technique was to
slowly move a long slide containing separate images in front of a magic
lantern lens. Nineteenth century optical toys enjoyed in private homes
also required manual action to create movement -- twirling the strings
of the Thaumatrope, rotating the Zootrope's cylinder, turning the
Viviscope's handle.
It was not until the last decade of the nineteenth
century that the automatic generation of images and their automatic
projection were finally combined. A mechanical eye became coupled with
a mechanical heart; photography met the motor. As a result, cinema - a
very particular regime of the visible - was born. Irregularity,
non-uniformity, the accident and other traces of the human body, which
previously inevitably accompanied moving image exhibitions, were
replaced by the uniformity of machine vision.
A machine which, like a conveyor belt, was now spitting out images, all
sharing the same appearance, all the same size, all moving at the same
speed, like a line of marching soldiers.
Cinema also eliminated the discrete character of
both space and movement in moving images. Before cinema, the moving
element was visually separated from the static background as with a
mechanical slide show or Reynaud's Praxinoscope Theater (1892). The
movement itself was limited in range and affected only a clearly
defined figure rather than the whole image. Thus, typical actions would
include a bouncing ball, a raised hand or eyes, a butterfly moving back
and forth over the heads of fascinated children -- simple vectors
charted across still fields.
Cinema's most immediate predecessors share something else. As the
nineteenth-century obsession with movement intensified, devices which
could animate more than just a few images became increasingly popular.
All of them - the Zootrope, the Phonoscope, the Tachyscope, the
Kinetoscope - were based on loops, sequences of images featuring
complete actions which can be played repeatedly. The Thaumatrope
(1825), in which a disk with two different images painted on each face
was rapidly rotated by twirling the strings attached to it, was in its
essence a loop in its most minimal form: two elements replacing one
another in succession. In the Zootrope
(1867) and its numerous variations, approximately a dozen images were
arranged around the perimeter of a circle. The Mutoscope, popular in
America throughout the 1890s, increased the duration of the loop by
placing a larger number of images radially on an axle. Even Edison's
Kinetoscope (1892-1896), the first modern cinematic machine to employ
film, continued to arrange images in a loop. 50 feet of film translated
to an approximately 20 second long presentation - a genre whose
potential development was cut short when cinema adopted a much longer
narrative form.
From Animation to Cinema
Once the cinema was stabilized as a technology, it cut all references
to its origins in artifice. Everything which characterized moving
pictures before the twentieth century - the manual construction of
images, loop actions, the discrete nature of space and movement - all
of this was delegated to cinema's bastard relative, its supplement, its
shadow - animation. Twentieth century animation became a depository for
nineteenth century moving image techniques left behind by cinema.
The opposition between the styles of animation and
cinema defined the culture of the moving image in the twentieth
century. Animation foregrounds its artificial character, openly
admitting that its images are mere representations. Its visual language
is more aligned to the graphic than to the photographic. It is discrete
and self-consciously discontinuous: crudely rendered characters moving
against a stationary and detailed background; sparsely and irregularly
sampled motion (in contrast to the uniform sampling of motion by a film
camera -- recall Jean-Luc Godard's definition of cinema as "truth 24
frames per second"), and finally space constructed from separate image
layers.
In contrast, cinema works hard to erase any traces of its own
production process, including any indication that the images which we
see could have been constructed rather than recorded. It denies that
the reality it shows often does not exist outside of the film image,
the image which was arrived at by photographing an already impossible
space, itself put together with the use of models, mirrors, and matte
paintings, and which was then combined with other images through
optical printing. It pretends to be a simple recording
of an already existing reality - both to a viewer and to itself.
Cinema's public image stressed the aura of reality "captured" on film,
thus implying that cinema was about photographing what existed before
the camera, rather than "creating the 'never-was'"
of special effects. Rear projection and blue screen photography, matte
paintings and glass shots, mirrors and miniatures, push development,
optical effects and other techniques which allowed filmmakers to
construct and alter the moving images, and thus could reveal that
cinema was not really different from animation, were pushed to cinema's periphery by its practitioners, historians and critics.
Today, with the shift to digital media, these marginalized techniques move to the center.
What is Digital Cinema?
A visible sign of this shift is the new role which computer-generated special effects have come to play in the Hollywood
industry in the last few years. Many recent blockbusters have been
driven by special effects; feeding on their popularity, Hollywood has
even created a new mini-genre of "The Making of..." videos and books
which reveal how special effects are created.
I will use special effects from a few recent Hollywood
films to illustrate some of the possibilities of digital
filmmaking. Until recently, Hollywood studios were the only ones who
had the money to pay for digital tools and for the labor involved in
producing digital effects. However, the shift to digital media affects
not just Hollywood, but filmmaking as a whole. As traditional film
technology is universally being replaced by digital technology, the
logic of the filmmaking process is being redefined. What I describe
below are the new principles of digital filmmaking which are equally
valid for individual or collective film productions, regardless of
whether they are using the most expensive professional hardware and
software or its amateur equivalents.
Consider, then, the following principles of digital filmmaking:
1. Rather than filming physical reality it
is now possible to generate film-like scenes directly in a computer
with the help of 3-D computer animation. Therefore, live action footage
is displaced from its role as the only possible material from which the
finished film is constructed.
2. Once live action footage is digitized
(or directly recorded in a digital format), it loses its privileged
indexical relationship to pro-filmic reality. The computer does not
distinguish between an image obtained through the photographic lens, an
image created in a paint program or an image synthesized in a 3-D
graphics package, since they are made from the same material - pixels.
And pixels, regardless of their origin, can be easily altered,
substituted one for another, and so on. Live action footage is reduced
to being just another graphic, no different from images created manually.
3. If live action footage was left intact
in traditional filmmaking, now it functions as raw material for further
compositing, animating and morphing. As a result, while retaining
visual realism unique to the photographic process, film obtains the
plasticity which was previously only possible in painting or animation.
To use the suggestive title of a popular morphing software, digital
filmmakers work with "elastic reality." For example, the opening shot
of Forrest Gump (Robert Zemeckis, Paramount
Pictures, 1994; special effects by Industrial Light and Magic) tracks
an unusually long and extremely intricate flight of a feather. To
create the shot, the real feather was filmed against a blue background
in different positions; this material was then animated and composited
against shots of a landscape. The result: a new kind of realism, which
can be described as "something which is intended to look exactly
as if it could have happened, although it really could not."
4. Previously, editing and special effects
were strictly separate activities. An editor worked on ordering
sequences of images together; any intervention within an image was
handled by special effects specialists. The computer collapses this
distinction. The manipulation of individual images via a paint program
or algorithmic image processing becomes as easy as arranging sequences
of images in time. Both simply involve "cut and paste." As this basic
computer command exemplifies, modification of digital images (or other
digitized data) is not sensitive to distinctions of time and space or
of differences of scale. So, re-ordering sequences of images in time,
compositing them together in space, modifying parts of an individual
image, and changing individual pixels become the same operation,
conceptually and practically.
5. Given the preceding principles, we can define digital film in this way:
digital film = live action material + painting + image processing +
compositing + 2-D computer animation + 3-D computer animation
Live action material can either be recorded
on film or video or directly in a digital format. Painting, image
processing and computer animation refer to the processes of modifying
already existent images as well as creating new ones. In fact, the very
distinction between creation and modification, so clear in film-based
media (shooting versus darkroom processes in photography, production
versus post-production in cinema) no longer applies to digital cinema, since each image, regardless of its origin, goes through a number of programs before making it to the final film.
Let us summarize the principles discussed thus far.
Live action footage is now only raw material to be manipulated by hand:
animated, combined with 3-D computer generated scenes and painted over.
The final images are constructed manually from different elements; and
all the elements are either created entirely from scratch or modified
by hand.
We can finally answer the question "what is digital
cinema?" Digital cinema is a particular case of animation which uses
live action footage as one of its many elements.
This can be re-read in view of the history of the
moving image sketched earlier. Manual construction and animation of
images gave birth to cinema and slipped into the margins...only to
re-appear as the foundation of digital cinema. The history of the
moving image thus makes a full circle. Born from animation, cinema pushed animation to its boundary, only to become one particular case of animation in the end.
The relationship between "normal" filmmaking and
special effects is similarly reversed. Special effects, which involved
human intervention into machine recorded footage and which were
therefore delegated to cinema's periphery throughout its history,
become the norm of digital filmmaking.
The same applies for the relationship between
production and post-production. Cinema traditionally involved arranging
physical reality to be filmed through the use of sets, models, art
direction, cinematography, etc. Occasional manipulation of recorded
film (for instance, through optical printing) was negligible compared
to the extensive manipulation of reality in front of a camera. In
digital filmmaking, shot footage is no longer the final point but just
raw material to be manipulated in a computer where the real
construction of a scene will take place. In short, the production
becomes just the first stage of post-production.
The following examples illustrate this shift from
re-arranging reality to re-arranging its images. From the analog era:
for a scene in Zabriskie Point
(1970), Michelangelo Antonioni, trying to achieve a particularly
saturated color, ordered a field of grass to be painted. From the
digital era: to create the launch sequence in Apollo 13
(Universal Studios, 1995; special effects by Digital Domain), the crew
shot footage at the original location of the launch at Cape Canaveral.
The artists at Digital Domain scanned the film and altered it on
computer workstations, removing recent building construction, adding
grass to the launch pad and painting the skies to make them more
dramatic. This altered film was then mapped onto 3D planes to create a
virtual set which was animated to match a 180-degree dolly movement of
a camera following a rising rocket.
The last example brings us to yet another
conceptualization of digital cinema - as painting. In his book-length
study of digital photography, William J. Mitchell
focuses our attention on what he calls the inherent mutability of a
digital image: "The essential characteristic of digital information is
that it can be manipulated easily and very rapidly by computer. It is
simply a matter of substituting new digits for old... Computational
tools for transforming, combining, altering, and analyzing images are
as essential to the digital artist as brushes and pigments to a
painter." As Mitchell points out, this inherent mutability erases the
difference between a photograph and a painting. Since a film is a
series of photographs, it is appropriate to extend Mitchell's argument
to digital film. With an artist being able to easily manipulate
digitized footage either as a whole or frame by frame, a film in a
general sense becomes a series of paintings.
Hand-painting digitized film frames, made possible
by a computer, is probably the most dramatic example of the new status
of cinema. No longer strictly locked in the photographic, it opens
itself towards the painterly. It is also the most obvious example of
the return of cinema to its nineteenth century origins - in this case,
to hand-crafted images of magic lantern slides, the Phenakistoscope,
the Zoetrope.
We usually think of computerization as automation, but here the result
is the reverse: what was previously automatically recorded by a camera
now has to be painted one frame at a time. But not just a dozen images,
as in the nineteenth century, but thousands and thousands. We can draw
another parallel with the practice, common in the early days of silent
cinema, of manually tinting film frames in different colors according
to a scene's mood.
Today, some of the most visually sophisticated
digital effects are often achieved using the same simple method:
painstakingly altering by hand thousands of frames. The frames are
painted over either to create mattes ("hand drawn matte extraction") or
to directly change the images, as, for instance, in Forrest Gump,
where President Kennedy was made to speak new sentences by altering the
shape of his lips, one frame at a time. In principle, given enough time
and money, one can create what will be the ultimate digital film:
90 minutes, i.e., 129600 frames completely painted by hand from
scratch, but indistinguishable in appearance from live photography.
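The frame count given above follows from the standard theatrical projection rate; a quick sanity check (a minimal sketch, assuming the 24 frames-per-second rate, which the text implies but does not state):

```python
# Frames in a 90-minute film at the standard theatrical frame rate.
minutes = 90
fps = 24  # assumed standard projection rate; not stated explicitly in the text
total_frames = minutes * 60 * fps
print(total_frames)  # 129600
```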
Multimedia as "Primitive" Digital Cinema
3-D animation, compositing, mapping, paint retouching: in commercial
cinema, these radical new techniques are mostly used to solve technical
problems while traditional cinematic language is preserved unchanged.
Frames are hand-painted to remove wires which supported an actor during
shooting; a flock of birds is added to a landscape; a city street is
filled with crowds of simulated extras. Although most Hollywood releases now involve digitally manipulated scenes, the use of computers is always carefully hidden.
Commercial narrative cinema still continues to hold on to the classical realist style
where images function as unretouched photographic records of some
events which took place in front of the camera. Cinema refuses to give
up its unique cinema-effect, an effect which, according to Christian Metz's
penetrating analysis made in the 1970s, depends upon narrative form,
the reality effect and cinema's architectural arrangement all working
together.
Towards the end of his essay, Metz wonders whether
in the future non-narrative films may become more numerous; if this
happens, he suggests that cinema will no longer need to manufacture its
reality effect. Electronic and digital media have already brought about
this transformation. Beginning in the 1980s, new cinematic forms have
emerged which are not linear narratives, which are exhibited on a
television or a computer screen, rather than in a movie theater - and
which simultaneously give up cinematic realism.
What are these forms? First of all, there is the
music video. Probably not by accident, the genre of music video came
into existence exactly at the time when electronic video effects
devices were entering editing studios. Importantly, just as music
videos often incorporate narratives within them but are not linear
narratives from start to finish, they rely on film (or video) images
but change them beyond the norms of traditional cinematic realism. The
manipulation of images through hand-painting and image processing,
hidden in Hollywood cinema, is brought into the open on a television
screen. Similarly, the construction of an image from heterogeneous
sources is not subordinated to the goal of photorealism but functions
as an aesthetic strategy. The genre of music video has been a laboratory
for exploring numerous new possibilities of manipulating photographic
images made possible by computers -- the numerous points which exist in
the space between the 2-D and the 3-D, cinematography and painting,
photographic realism and collage. In short, it is a living and
constantly expanding textbook for digital cinema.
A detailed analysis of the evolution of music video
imagery (or, more generally, broadcast graphics in the electronic age)
deserves a separate treatment and I will not try to take it up here.
Instead, I will discuss another new cinematic non-narrative form,
CD-ROM games, which, in contrast to music video, relied on the computer
for storage and distribution from the very beginning. And, unlike music
video designers who were consciously pushing traditional film or video
images into something new, the designers of CD-ROMs arrived at a new
visual language unintentionally while attempting to emulate traditional
cinema.
In the late 1980s, Apple began to promote the
concept of computer multimedia; and in 1991 it released QuickTime
software to enable an ordinary personal computer to play movies.
However, for the next few years the computer did not perform its new
role very well. First, CD-ROMs could not hold anything close to the
length of a standard theatrical film. Secondly, the computer would not
smoothly play a movie larger than the size of a stamp. Finally, the
movies had to be compressed, degrading their visual appearance. Only in
the case of still images was the computer able to display
photographic-like detail at full screen size.
Because of these particular hardware limitations,
the designers of CD-ROMs had to invent a different kind of cinematic
language in which a range of strategies, such as discrete motion,
loops, and superimposition, previously used in nineteenth century
moving image presentations, in twentieth century animation, and in the
avant-garde tradition of graphic cinema, were applied to photographic
or synthetic images. This language synthesized cinematic illusionism
and the aesthetics of graphic collage, with its characteristic
heterogeneity and discontinuity. The photographic and the graphic,
divorced when cinema and animation went their separate ways, met again
on a computer screen.
The graphic also met the cinematic. The designers of
CD-ROMs were aware of the techniques of twentieth century
cinematography and film editing, but they had to adapt these techniques
both to an interactive format and to hardware limitations. As a result,
the techniques of modern cinema and of nineteenth century moving image
have merged in a new hybrid language.
We can trace the development of this language by analyzing a few well-known CD-ROM titles. The best selling game Myst
(Broderbund, 1993) unfolds its narrative strictly through still images,
a practice which takes us back to magic lantern shows (and to Chris
Marker's La Jetée). But in other ways Myst
relies on the techniques of twentieth century cinema. For instance, the
CD-ROM uses simulated camera turns to switch from one image to the
next. It also employs the basic technique of film editing to
subjectively speed up or slow down time. In the course of the game, the
user moves around a fictional island by clicking on a mouse. Each click
advances a virtual camera forward, revealing a new view of a 3-D
environment. When the user begins to descend into the underground
chambers, the spatial distance between the points of view of each two
consecutive views sharply decreases. If before the user was able to
cross a whole island with just a few clicks, now it takes a dozen
clicks to get to the bottom of the stairs! In other words, just as in
traditional cinema, Myst slows down time to create suspense and tension.
In Myst, miniature animations are sometimes embedded within the still images. In the next best-selling CD-ROM 7th Guest
(Virgin Games, 1993), the user is presented with video clips of live
actors superimposed over static backgrounds created with 3-D computer
graphics. The clips are looped, and the moving human figures clearly
stand out against the backgrounds. Both of these features connect the
visual language of 7th Guest to nineteenth century pre-cinematic devices and twentieth century cartoons rather than to cinematic verisimilitude. But like Myst, 7th Guest
also evokes distinctly modern cinematic codes. The environment where
all action takes place (an interior of a house) is rendered using a
wide angle lens; to move from one view to the next a camera follows a
complex curve, as though mounted on a virtual dolly.
Next, consider the CD-ROM Johnny Mnemonic
(Sony Imagesoft, 1995). Produced to complement the fiction film of the
same title, marketed not as a "game" but as an "interactive movie," and
featuring full screen video throughout, it comes closer to cinematic
realism than the previous CD-ROMs - yet it is still quite distinct from
it. With all action shot against a green screen and then composited
with graphic backgrounds, its visual style exists within a space
between cinema and collage.
It would be not entirely inappropriate to read this
short history of the digital moving image as a teleological development
which replays the emergence of cinema a hundred years earlier. Indeed,
as computers' speed keeps increasing, the CD-ROM designers have been
able to go from a slide show format to the superimposition of small
moving elements over static backgrounds and finally to full-frame
moving images. This evolution repeats the nineteenth century
progression: from sequences of still images (magic lantern slides
presentations) to moving characters over static backgrounds (for
instance, in Reynaud's Praxinoscope Theater) to full motion (the
Lumieres' cinematograph). Moreover, the introduction of QuickTime in
1991 can be compared to the introduction of the Kinetoscope in 1892:
both were used to present short loops, both featured images
approximately two by three inches in size, both called for private
viewing rather than collective exhibition. Finally, the Lumieres' first
film screenings of 1895 which shocked their audiences with huge moving
images found their parallel in 1995 CD-ROM titles where the moving
image finally fills the entire computer screen. Thus, exactly a hundred
years after cinema was officially "born," it was reinvented on a
computer screen.
But this is only one reading. We no longer think of
the history of cinema as a linear march towards only one possible
language, or as a progression towards more and more accurate
verisimilitude. Rather, we have come to see its history as a succession
of distinct and equally expressive languages, each with its own
aesthetic variables, each new language closing off some of the
possibilities of the previous one -- a cultural logic not dissimilar to
Kuhn's analysis of scientific paradigms. Similarly, instead of
dismissing visual strategies of early multimedia titles as a result of
technological limitations, we may want to think of them as an
alternative to traditional cinematic illusionism, as a beginning of
digital cinema's new language.
For the computer/entertainment industry, these
strategies represent only a temporary limitation, an annoying drawback
that needs to be overcome. This is one important difference between the
situation at the end of the nineteenth and the end of the twentieth
centuries: if cinema was developing towards the still open horizon of
many possibilities, the development of commercial multimedia, and of
corresponding computer hardware (compression boards, storage formats
such as Digital Video Disk), is driven by a clearly defined goal: the
exact duplication of cinematic realism. So if a computer screen, more
and more, emulates cinema's screen, this is not an accident but a result
of conscious planning.
The Loop
A number of artists, however, have approached these strategies not as
limitations but as a source of new cinematic possibilities. As an
example, I will discuss the use of the loop in Jean-Louis Boissier's Flora petrinsularis (1993) and Natalie Bookchin's The Databank of the Everyday (1996).
As already mentioned, all nineteenth century
pre-cinematic devices, up to Edison's Kinetoscope, were based on short
loops. As "the seventh art" began to mature, it banished the loop to
the low-art realms of the instructional film, the pornographic
peep-show and the animated cartoon. In contrast, narrative cinema has
avoided repetitions; like modern Western fictional forms in general, it
has put forward a notion of human existence as a linear progression through
numerous unique events.
Cinema's birth from a loop form was reenacted at
least once during its history. In one of the sequences of the
revolutionary Soviet montage film, A Man with a Movie Camera
(1929), Dziga Vertov shows us a cameraman standing in the back of a
moving automobile. As he is being carried forward by an automobile, he
cranks the handle of his camera. A loop, a repetition, created by the
circular movement of the handle, gives birth to a progression of events
-- a very basic narrative which is also quintessentially modern: a
camera moving through space recording whatever is in its way. In what
seems to be a reference to cinema's primal scene, these shots are
intercut with the shots of a moving train. Vertov even re-stages the
terror which the Lumieres' film supposedly provoked in its audience; he
positions his camera right along the train track so the train runs over
our point of view a number of times, crushing us again and again.
Early digital movies share the same limitations of
storage as nineteenth century pre-cinematic devices. This is probably
why the loop playback function was built into the QuickTime interface, thus
giving it the same weight as the VCR-style "play forward" function. So,
in contrast to films and videotapes, QuickTime movies can be played
forward, backward or looped. Flora petrinsularis realizes some of the possibilities contained in the loop form, suggesting a new temporal aesthetics for digital cinema.
The CD-ROM, which is based on Rousseau's Confessions,
opens with a white screen, containing a numbered list. Clicking on each
item leads us to a screen containing two frames, positioned side by
side. Both frames show the same video loop but are slightly offset from
each other in time. Thus, the images appearing in the left frame
reappear in a moment on the right and vice versa, as though an
invisible wave is running through the screen. This wave soon becomes
materialized: when we click on one of the frames we are taken to a new
screen showing a loop of a rhythmically vibrating water surface. As
each mouse click reveals another loop, the viewer becomes an editor,
but not in a traditional sense. Rather than constructing a singular
narrative sequence and discarding material which is not used, here the
viewer brings to the forefront, one by one, numerous layers of looped
actions which seem to be taking place all at once, a multitude of
separate but co-existing temporalities. The viewer is not cutting but
re-shuffling. In a reversal of Vertov's sequence where a loop generated
a narrative, the viewer's attempt to create a story in Flora petrinsularis leads to a loop.
The loop which structures Flora petrinsularis
on a number of levels becomes a metaphor for human desire which can
never achieve resolution. It can be also read as a comment on cinematic
realism. What are the minimal conditions necessary to create the
impression of reality? As Boissier demonstrates, in the case of a field
of grass, a close-up of a plant or a stream, just a few looped frames
become sufficient to produce the illusion of life and of linear time.
Steven Neale
describes how early film demonstrated its authenticity by representing
moving nature: "What was lacking [in photographs] was the wind, the
very index of real, natural movement. Hence the obsessive contemporary
fascination, not just with movement, not just with scale, but also with
waves and sea spray, with smoke and spray." What for early cinema was
its biggest pride and achievement -- a faithful documentation of
nature's movement - becomes for Boissier a subject of ironic and
melancholic simulation. As the few frames are looped over and over, we
see blades of grass shifting slightly back and forth, rhythmically
responding to the blow of non-existent wind which is almost
approximated by the noise of a computer reading data from a CD-ROM.
Something else is being simulated here as well,
perhaps unintentionally. As you watch the CD-ROM, the computer
periodically staggers, unable to maintain a consistent data rate. As a
result, the images on the screen move in uneven bursts, slowing and
speeding up with human-like irregularity. It is as though they are
brought to life not by a digital machine but by a human operator,
cranking the handle of the Zoetrope a century and a half ago ...
If Flora petrinsularis uses the loop to comment on cinema's visual realism, The Databank of the Everyday
suggests that the loop can be a new narrative form appropriate for the
computer age. In her ironic manifesto which parodies the avant-garde
manifestos from the earlier part of the century, Bookchin reminds us
that the loop gave birth not only to cinema but also to computer
programming. Programming involves altering the linear flow of data
through control structures, such as "if/then" and "repeat/while"; the
loop is the most elementary of these control structures.
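Bookchin's observation that the loop is programming's most elementary control structure can be sketched in a few lines of code (a minimal illustration in Python; the function and frame names are mine, not from the manifesto):

```python
# The loop as elementary control structure: a "repeat/while" that cycles
# through the same short strip of frames over and over, like a Zoetrope
# strip or a QuickTime movie set to loop playback.

# Hypothetical stand-ins for a three-frame strip of images.
frames = ["frame_1", "frame_2", "frame_3"]

def play_loop(frames, cycles):
    """Play a short strip of frames repeatedly, returning the frames shown."""
    shown = []
    for _ in range(cycles):      # the loop: alter the linear flow by repeating
        for frame in frames:     # linear progression within each pass
            shown.append(frame)
    return shown

# Two passes through a three-frame loop yield six frames in order.
print(play_loop(frames, 2))
```

The endless repetition Bookchin describes would simply replace the bounded `for` with a `while True:`, halted only, as she puts it, by a user's selection or a power shortage.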
As digital media replaces film and photography, it is only
logical that the computer program's loop should replace photography's
frozen moment and cinema's linear narrative. The Databank champions the
loop as a new form of digital storytelling; there is no true beginning
or end, only a series of loops with their endless repetitions,
halted by a user's selection or a power shortage.
The computer program's loop makes its first "screen debut" in one particularly effective image from The Databank of the Everyday.
The screen is divided into two frames, one showing a video loop of a
woman shaving her leg, another - a loop of a computer program in
execution. Program statements repeating over and over mirror the
woman's arm methodically moving back and forth. This image represents
one of the first attempts in computer art to apply a Brechtian
strategy; that is, to show the mechanisms by which the computer
produces its illusions as a part of the artwork. Stripped of its usual
interface, the computer turns out to be another version of Ford's
factory, with a loop as its conveyor belt.
Like Boissier, Bookchin also explores
alternatives to cinematic montage, in her case replacing its
traditional sequential mode with a spatial one. Ford's assembly line
relied on the separation of the production process into a set of
repetitive, sequential, and simple activities. The same principle made
computer programming possible: a computer program breaks a task into a
series of elemental operations to be executed one at a time. Cinema
followed this principle as well: it replaced all other modes of
narration with a sequential narrative, an assembly line of shots which
appear on the screen one at a time. A sequential narrative turned out
to be particularly incompatible with a spatialized narrative which
played a prominent role in European visual culture for centuries. From
Giotto's fresco cycle at the Cappella degli Scrovegni in Padua to Courbet's A Burial at Ornans,
artists presented a multitude of separate events (which sometimes were
even separated by time) within a single composition. In contrast to
cinema's narrative, here all the "shots" were accessible to a viewer at
once.
Cinema has elaborated complex techniques of montage
between different images replacing each other in time; but the
possibility of what can be called "spatial montage" between
simultaneously co-existing images was not explored. The Databank of the Everyday
begins to explore this direction, thus opening up again the tradition
of spatialized narrative suppressed by cinema. In one section we are
presented with a sequence of pairs of short clips of everyday actions
which function as antonyms, for instance, opening and closing a door,
or pressing up and down buttons in an elevator. In another section the
user can choreograph a number of miniature actions appearing in small
windows positioned throughout the screen.
From Kino-Eye to Kino-Brush
In the twentieth century, cinema has played two roles at once. As a
media technology, cinema's role was to capture and to store visible
reality. The difficulty of modifying images once they were recorded was
exactly what gave cinema its value as a document, assuring its
authenticity. The same rigidity of the film image has defined the
limits of cinema as I defined it earlier, i.e. the super-genre of live
action narrative. Although it includes within itself a variety of
styles - the result of the efforts of many directors, designers and
cinematographers -- these styles share a strong family resemblance.
They are all children of the recording process which uses a lens, regular
sampling of time and photographic media. They are all children of a
machine vision.
The mutability of digital data impairs the value of
cinema recordings as documents of reality. In retrospect, we can see
that twentieth century cinema's regime of visual realism, the result of
automatically recording visual reality, was only an exception, an
isolated accident in the history of visual representation which has
always involved, and now again involves the manual construction of
images. Cinema becomes a particular branch of painting - painting in
time. No longer a kino-eye, but a kino-brush.
The privileged role played by the manual
construction of images in digital cinema is one example of a larger
trend: the return of pre-cinematic moving image techniques.
Marginalized by the twentieth century institution of live action
narrative cinema which relegated them to the realms of animation and
special effects, these techniques reemerge as the foundation of digital
filmmaking. What was supplemental to cinema becomes its norm; what was
at its boundaries comes into the center. Digital media returns to us
the repressed of the cinema.
As the examples discussed in this essay suggest, the
directions which were closed off at the turn of the century when cinema
came to dominate the modern moving image culture are now again
beginning to be explored. Moving image culture is being redefined once
again; cinematic realism is being displaced from its position as the
dominant mode to become only one option among many.
Who is the Author? Sampling / Remixing / Open Source
Lev Manovich
New media culture brings with it a number of new models of authorship which all involve different forms of collaboration. Of course, collaborative authorship is not unique to new media: think of medieval cathedrals, traditional painting studios which consisted of a master and assistants, music orchestras, or contemporary film productions which, like medieval cathedrals, involve thousands of people collaborating over a substantial period of time. In fact, if we think about this historically, we will see that collaborative authorship represents the norm rather than the exception. In contrast, the romantic model of a solitary single author occupies a very small place in the history of human culture. New media, however, offers some new variations on the previous forms of collaborative authorship. In this essay I will look at some of these variations. I will try to consider them not in isolation but in the larger context of contemporary cultural economies. As we will see, new media industries and cultures systematically pioneer new types of authorship, new relationships between producers and consumers, and new distribution models, thus acting as the avant-garde of the culture industry.
1. Collaboration of Different Individuals and/or Groups
The most often discussed new type of authorship associated with new media is collaboration (over the network or in person, in real time or not) between a group of artists to create a new media work / performance / event / “project.” Often, no tangible object or even a definite event such as a performance ever comes out of these collaborations, but this does not matter. People meet people with common interests and start a “project” or a series of “projects.” We can think of this as a “social culture”; we may also note that while the new media culture may not have produced any “masterpieces,” it has definitely had a huge impact on how people and organizations communicate. Along with database, navigable space, simulation and interactivity, new cultural forms enabled by new media also include new patterns of social communication. In short, the network-enabled process of collaboration, networking, and exchange is a valuable form of contemporary culture, regardless of whether it results in any “objects” or not.
2. Interactivity as Miscommunication Between the Author and the User
In the first part of the 1990s, when interactivity was a new term, it was often claimed that an interactive artwork involves collaboration between an author and a user. Is this true? The notion of collaboration assumes some shared understanding and common goals between the collaborators, but in the case of interactive media these are often absent. After an author designs the work, s/he has no idea about the assumptions and intentions of a particular user. Such a user, therefore, can’t really be called a collaborator of the author. From the other side, a user coming to a new media artwork often also does not know anything about this work, what it is supposed to do, what its interface is, etc. For this user, therefore, an author is not really a collaborator. Instead of collaborators, the author and the user are often two total strangers, two aliens who do not share a common communication code. While interactivity in new media art often leads to “miscommunication” between the author and the user, commercial culture employs interactive feedback to assure that no miscommunication will take place. It is common for film producers to test a finished edit of a new film before a “focus group.” The responses of the viewers are then used to re-edit the film to improve comprehension of the narrative or to change the ending. In this practice, rather than presenting the users with multiple versions of the narrative, a single version that is considered the most successful is selected.
3. Authorship as Selection From a Menu
I discuss this type of authorship in detail in The Language of New Media; here I just want to note that it applies to both professional designers and users. The design process in new media involves selection from various menus of software packages, databases of media assets, etc. Similarly, a user is often made to feel like a “real artist” by being allowed to quickly create a professional-looking work by selecting from a few menus. Examples of such “authorship by selection” are the Web sites that allow users to quickly construct a postcard or even a short movie by selecting from a menu of images, clips, and sounds. Three decades ago Roland Barthes elegantly defined a cultural text as “a tissue of quotations”: “We know now that a text is not a line of words releasing a single ‘theological’ meaning (the ‘message’ of the Author-God) but a multi-dimensional space in which a variety of writings, none of them original, blend and clash. The text is a tissue of quotations drawn from innumerable centres of culture.” In a software-driven production environment, these quotations come not only from the creators’ memories of what they previously saw, read, and heard, but also directly from databases of media assets, as well as from numerous other works that, in the case of the World Wide Web, are just a click away.
4. Collaboration Between a Company and the Users
When it released the original Doom (1993), id Software also released detailed descriptions of the game’s file formats and a game editor, thus encouraging players to expand the game by creating new levels. Adding to the game became an essential part of it, with new levels widely available on the Internet for anybody to download. Since Doom, such practices have become commonplace in the computer game industry. Often, a company would include elements designed by users in a new release. With another widely popular game, The Sims (2000), this type of collaboration reached a new stage. The Web site for the game allows users to upload the characters, settings, and narratives they have constructed into a common library, as well as download characters, settings, and narratives constructed by others. Soon it turned out that the majority of users do not even play the game but rather use its software to create their own characters and storyboard their adventures. In contrast to earlier examples of such practice – for instance, the 1980s Star Trek fans editing their own video tapes by sampling from various Star Trek episodes, or writing short stories involving the main Star Trek characters – the practice has now moved to a central place, being legitimized and encouraged by game producers. Another way in which a company can be said to collaborate with the users of its software is by incorporating their suggestions about new features into the new version of the software. This is common practice among many software companies.
5. Collaboration Between the Author and Software
Authoring using AI (artificial intelligence) or AL (artificial life) programs is the most obvious case of human-software collaboration. The author sets up some general rules but s/he has no control over the concrete details of the work – these emerge as a result of the interactions of the rules. More generally, we can say that all authorship that uses electronic and computer tools is a collaboration between the author and these tools, which make possible certain creative operations and certain ways of thinking while discouraging others. Of course humans designed these tools, so it would be more precise to say that the author who uses electronic/software tools engages in a dialog with the software designers (see #4).
6. Remixing
Remixing originally had a precise and narrow meaning that gradually became diffused. Although precedents of remixing can be found earlier, it was the introduction of multi-track mixers that made remixing a standard practice. With each element of a song – vocals, drums, etc. – available for separate manipulation, it became possible to “re-mix” the song: change the volume of some tracks or substitute new tracks for the old ones. Gradually the term became broader and broader, today referring to any reworking of an original musical work (or works). In his DJ Culture, Ulf Poschardt singles out different stages in the evolution of remixing practice. In 1972 DJ Tom Moulton mixed his first disco remixes; as Poschardt points out, they “show a very chaste treatment of the original song. Moulton sought above all a different weighting of the various soundtracks, and worked the rhythmic elements of the disco songs even more clearly and powerfully…Moulton used the various elements of the sixteen or twenty-four track master tapes and remixed them.” By 1987, “DJs started to ask other DJs for remixes” and the treatment of the original material became much more aggressive. For example, “Coldcut used the vocals from Ofra Haza’s ‘Im Nin’Alu’ and contrasted Rakim’s ultra-deep bass voice with her provocatively feminine voice. To this were added techno sounds and a house-inspired remix of a rhythm section that loosened the heavy, sliding beat of the rap piece, making it sound lighter and brighter.” In another example, London DJ Tim Simenon produced a remix of his personal top ten of 1987. Simenon: “We found a common denominator between the songs we wanted to use, and settled on the speed of 114 beats per minute. The tracks of the individual songs were adapted to this beat either by speeding them up or slowing them down.” In the last few years people have started to apply the term “remix” to other media: visual productions, software, literary texts.
With electronic music and software serving as the two key reservoirs of new metaphors for the rest of culture today, this expansion of the term is inevitable; one can only wonder why it did not happen earlier. Yet we are left with an interesting paradox: while in the realm of commercial music remixing is officially accepted, in other cultural areas it is seen as violating copyright and therefore as stealing. So while filmmakers, visual artists, photographers, architects, and Web designers routinely remix already existing works, this is not openly admitted, and no proper terms equivalent to remixing in music exist to describe these practices. The term that we do have is “appropriation.” However, it never left its original art-world context, where it was first applied to the works of post-modern artists of the early 1980s based on re-working older photographic images. Consequently, it never achieved the same wide use as “remixing.” In any case, “remixing” is a better term because it suggests a systematic re-working of a source, a meaning which “appropriation” does not have. And indeed, the original “appropriation artists” such as Richard Prince simply copied the existing image as a whole rather than re-mixing it. As in the case of Duchamp’s famous urinal, the aesthetic effect here is the result of a transfer of a cultural sign from one sphere to another, rather than of any modification of the sign. The only other term commonly used across media is “quoting,” but I see it as describing a very different logic than remixing. If remixing implies systematically rearranging the whole text, quoting means inserting some fragments from old text(s) into the new one. Thus it is more similar to another new fundamental authorship practice that, like remixing, was made possible by electronic technology – sampling.
7. Sampling: New Collage?
According to Ulf Poschardt, “The DJ’s domination of the world started around 1987.” This take-over is closely related to the new freedom in the use of mixing and sampling. That year M/A/R/R/S released their record “Pump Up the Volume”; as Poschardt points out, “This record, cobbled together from a crazy selection of samples, fundamentally changed the pop world. As if from nowhere, the avant-garde sound collage, unusual for the musical taste of the time, made it to the top of the charts and became the year’s highest-selling 12-inch single in Britain.” Writing immediately after M/A/R/R/S, Coldcut, Bomb the Bass, and S’Express made full use of sampling, music critic Andrew Goodwin defined sampling as “the uninhibited use of digital sound recording as a central element of composition. Sampling thus becomes an aesthetic programme.” We can say that with sampling technology, the practices of montage and collage that were always central to twentieth-century culture became industrialized. Yet we should be careful in applying the old terms to new technologically driven cultural practices. While the terms “montage” and “collage” regularly pop up in the writings of music theorists from Poschardt to Kodwo Eshun and DJ Spooky, I think these terms, which come to us from the literary and visual modernism of the early twentieth century, do not adequately describe new electronic music. To note just three differences: musical samples are often arranged in loops; the nature of sound allows musicians to mix pre-existing sounds in a variety of ways, from clearly differentiating and contrasting individual samples (thus following the traditional modernist aesthetics of montage/collage) to mixing them into an organic and coherent whole; finally, electronic musicians often conceive their works beforehand as something that will be remixed, sampled, taken apart, and modified. Poschardt: “house (like all other kinds of club music) has relinquished the unity of the song and its inviolability.
Of course the creator of a house song thinks at first in terms of his single track, but he also thinks of it in the context of a club evening, into which his track can be inserted at a particular point.” Last but not least, it is relevant to note here that the revolution in electronic pop music that took place in the second part of the 1980s was paralleled by similar developments in the pop visual culture of the same period. The introduction of electronic editing equipment such as the switcher, keyer, paintbox, and image store made remixing and sampling common practice in video production towards the end of the decade; first pioneered in music videos, it later took over the whole visual culture of TV. Other software tools such as Photoshop (1989) had the same effect on the fields of graphic design, commercial illustration, and photography. And, a few years later, the World Wide Web redefined the electronic document as a mix of other documents. Remix culture has arrived.
8. Open Source Model
The Open Source model is just one among a number of different models of authorship (and ownership) which emerged in the software community and which can be applied (or are already being applied) to cultural authorship. Other examples of such models are Ted Nelson’s original Project Xanadu, “freeware,” and “shareware.” In the case of Open Source, the key idea is that one person (or group) writes software code, which can then be modified by another user; the result can be subsequently modified by a new user, and so on. If we apply this model to the cultural sphere, do we get any new model of authorship? It seems to me that the models of remixing, sampling, and appropriation are conceptually much richer than the Open Source idea. There are, however, two aspects of the Open Source movement that make it interesting. One is the idea of the license. There are approximately 30 different types of licenses in the Open Source movement. The licenses specify the rights and responsibilities of a person modifying the code. For instance, one license (the GNU General Public License) specifies that the programmer has to provide a copy of the new code to the community; another stipulates that the programmer can sell the new code without sharing it with the community, but can’t do things that damage the community. The other idea is that of the kernel. At the “heart” of the Linux operating system is its kernel – the code essential to the functioning of the system. While users add to and modify different parts of the Linux system, they are careful not to change the kernel in fundamental ways. Thus all dialects of Linux share a common core. I think that the ideas of the license and the kernel can be directly applied to cultural authorship. Currently, appropriation, sampling, remixing, and quoting are controlled by a set of heterogeneous and often outdated legal rules. These rules tell people what they are not allowed to do with the creative works of others.
Imagine now a situation where an author releases her/his work into the world accompanied by a license that tells others both what they should not do with this work and also what they can do with it (i.e., the ways in which it can be modified and re-used). Similarly, we may imagine a community formed around some creative work; this community would agree on what constitutes the kernel of this work. Just as in the case of Linux, it would be assumed that while the work can be played with and endlessly modified, the users should not modify the kernel in dramatic ways. Indeed, if music, films, books, and visual art are our cultural software, why not apply the ideas of software development to cultural authorship? In fact, I believe that we can already find many communities and individual works that employ the ideas of the license and the kernel, even though these terms are not explicitly used. One example is Jon Ippolito’s Variable Media Initiative. Ippolito proposed that an artist who accepts variability in how her/his work will be exhibited and/or re-created in the future (which is almost inevitable in the case of net art and other software-based work) should specify what constitutes a legitimate exhibition/recreation; in short, s/he should provide the equivalent of a software license. Among the cultural projects inspired by the Open Source movement, the OPUS project (2002) stands out from the rest in how it tackles the question of authorship in computer culture. Importantly, OPUS, created by Raqs Media Collective (New Delhi), is both a software package and an accompanying “theoretical package.” Thus the theoretical ideas about authorship articulated by the Raqs collective do not remain theory but are implemented in software available for everybody to use. In short, this is “software theory” at its best: theoretical ideas translated into a new kind of cultural software. The OPUS software is designed to enable multi-user cultural collaboration in a networked digital environment.
In OPUS (which stands for “Open Platform for Unlimited Signification”), anybody can start a new project and invite other people to download and upload media objects to the project’s area on the OPUS site (it is also possible to download the OPUS software itself and put it on new servers). When an author uploads a new media object (anything from a text to a piece of music), s/he can specify what modifications by others will be allowed. Subsequently, the OPUS software keeps track of every new modification to this object.
Each media object archived, exhibited, and made available for transformation within OPUS carries with it data that can identify all those who worked on it. This means that while OPUS enables collaboration, it also preserves the identity of authors/creators (no matter how big or small their contribution may be) at each stage of a work’s evolution.
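The mechanism just described – objects that both constrain what may be done to them and carry a record of everyone who touched them – can be sketched in a few lines of code. The sketch below is purely illustrative (all class and method names are my own invention, not OPUS’s actual implementation): each derived version records its parent and its contributor, so the full chain of authorship can always be recovered, while the original author’s permissions limit what operations are allowed.

```python
# A minimal, hypothetical sketch of provenance tracking in the spirit of
# OPUS "rescensions." Every derived version records its parent and its
# contributor, so the full chain of authorship is always recoverable.
# This is an illustration, not the actual OPUS software.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Rescension:
    content: str                              # the media object (here, just text)
    author: str                               # who produced this version
    parent: Optional["Rescension"] = None     # the version it was derived from
    allowed_ops: tuple = ("remix", "sample")  # modifications the author permits

    def derive(self, new_content: str, author: str, op: str) -> "Rescension":
        """Create a new rescension, refusing operations the author disallowed."""
        if op not in self.allowed_ops:
            raise PermissionError(f"{op!r} is not permitted on this work")
        return Rescension(new_content, author, parent=self)

    def lineage(self) -> list:
        """All contributors, from the original author to the latest."""
        chain = [] if self.parent is None else self.parent.lineage()
        return chain + [self.author]

original = Rescension("a field recording", "alice")
remix = original.derive("a field recording + beat", "bob", "remix")
print(remix.lineage())  # → ['alice', 'bob']
```

Note that, as in OPUS, collaboration and attribution are not in tension here: the more a work is modified, the longer its lineage grows, and no contributor – however small the contribution – disappears from the record.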
The Raqs Collective introduces a new term, “rescension,” to describe this type of collaborative authorship. In my view, “rescension” represents a sophisticated compromise between the two extreme ideologies of digital authorship commonly invoked and used today: on the one hand, a completely open model that lets everybody modify anything; on the other hand, tight control of all permissible uses of a cultural object by traditional copyright practices. Importantly, as the distribution of culture, from texts to music to videos, increasingly moves online, the economically dominant ideas about authorship and copyright in our society will be implemented in actual software that controls who can access, copy, and modify cultural objects, and at what price. For instance, while the MPEG-1 through MPEG-7 media formats focused on compression and the coordination of different media tracks, the recent proposal for MPEG-21 focuses on digital rights management. The authors of the proposal imagine a future “multimedia framework” where “all people on Earth take part in a network involving content providers, value adders, packagers, service providers, consumers, and resellers.” Like XML, MPEG-21 consists of a number of separate components whose very names reveal its aim to manage all the difficult issues of content creation and distribution in a digital network environment through technological solutions: “Intellectual Property Management and Protection,” “Rights Data Dictionary,” “Rights Expression Language.” OPUS anticipates this kind of future by providing an intellectually sophisticated alternative paradigm of cultural authorship and access implemented in software.
9. Brand as the Author
Who are the people behind Nike? Prada? Sony? Gap? Consumer brands do not make visible the design teams, engineers, stylists, writers, programmers, and other creative individuals who make their individual products and product lines. Competing in an already crowded semantic space, a company wants consumers to remember one thing only: the brand name. To bring in the names of the individuals involved in creating brand products – which are numerous and which continuously change – would dissolve the brand identity. Note that a company does not try to hide these names – you can find them if you want – but they are just not part of brand publicity. Unless, of course, the name involved itself represents another brand, like Rem Koolhaas or Bruce Mau. Koolhaas and Mau are brands because they function exactly like all other brands: they have big teams working on different projects, but the names of the individual contributors are not made visible. A museum hires Rem Koolhaas to have a building by Rem Koolhaas – not because it wants the skills of a particular media designer, lighting designer, or architect working for Koolhaas. The same goes for most well-known musicians, artists, and architects. In contrast to “corporate brands,” these are “individual brands.” When we think of these individual brands we are not supposed to also think of all the people involved in their creations. We can see here the romantic ideology, with its emphasis on the solitary genius, still at work. In a certain sense, corporate brands are more “progressive” in that they don’t hide (although they don’t foreground either) the fact that everything they sell is created by collectives of individuals. And while in the last decade a number of artists’ collectives have presented themselves as corporate brands, in most cases their masquerades still followed the conventions of the art world rather than those of the commercial brand environment.
For instance, when jodi.org burst into the emerging net art scene with their Web site a number of years ago, the fact that for the first couple of years we only knew the project by the name of its URL but not the artists’ names was part of the attraction. However, eventually the names of the creators, Joan Heemskerk and Dirk Paesmans, became public. And Etoy, the most systematic among artists’ collectives posing as brands, still has not been completely consistent in following the rules of corporate authorship. Etoy presents itself as a company which consists of a small number of etoy agents who go by their first names: etoy.zak, etoy.zai, and so on. Thus it foregrounds all the individuals involved in brand management, even though they go by semi-fictional names. My aim here is not to criticize jodi or etoy but rather to point out that high culture and consumer culture follow very different models of authorship, which makes it hard even for the smartest artists to completely simulate the corporate model. Still, the artist-as-anonymous-brand phenomenon that existed before the Internet became much more common on the Web, with many artists, designers, and design groups choosing to focus visibility on the name of their site rather than on their individual names: from jodi and etoy to future farmers, unclickable.com, uncontrol.com, and many, many others.
Conclusion
The commonality of menu selection / remixing / sampling / synthesis / “open sourcing” in contemporary culture calls for a whole new critical vocabulary adequate to describe these operations, their multiple variations and combinations. One way to develop such a vocabulary is to begin to correlate the terms that already exist but are limited to particular media. Electronic music theory brings to the table its analysis of mixing, sampling, and synthesis; academic literary theory can also make a contribution, with its theorizations of intertext, paratext, and hyperlinking; scholars of visual culture can contribute their understanding of montage, collage, and appropriation. Having a critical vocabulary that can be applied across media will help us finally accept these operations as legitimate cases of authorship rather than as exceptions. To quote Poschardt one last time: “however much quoting, sampling and stealing is done – in the end it is the old subjects that undertake their own modernization. Even an examination of technology and the conditions of production does not rescue aesthetics from finally having to believe in the author. He just looks different.”
Notes
- Lev Manovich, The Language of New Media (Cambridge, Mass.: The MIT Press, 2001).
- Roland Barthes, Image, Music, Text, trans. Stephen Heath (New York: Hill and Wang, 1977), 146.
- http://www.ea.com/eagames/games/pccd/thesims/thesims.jsp.
- Ulf Poschardt, DJ Culture, trans. Shaun Whiteside (London: Quartet Books Ltd, 1998), 123. Ibid., 271. Ibid., 273.
- For instance, Web users are invited to remix Madonna songs at http://madonna.acidplanet.com/default.asp?subsection=madonna.
- Ibid., 261. Ibid., 261-262. Ibid., 280.
- To use the terms of Barthes’s quote above, we can say that if modernist collage always involved a “clash” of elements, electronic and software collage also allows for “blend.” Ibid., 252.
- Clay Shirky, presentation during Human Generosity Project Summit, Banff Centre for the Arts, September 2001.
- Some modification of the Linux kernel becomes necessary when Linux is adapted for embedded systems, which usually have less memory and less processing power than desktop PCs. See http://www.linuxdevices.com/news/NS6362696390.html. The Linux community will condemn any modifications that change the kernel in fundamental ways. See http://slashdot.org/articles/99/02/27/076204.shtml.
- Ippolito is a new media artist and an Associate Curator of Media Arts at the Guggenheim Museum. For more information on the Variable Media Initiative, see www.guggenheim.org.
- See http://www.opuscommons.net/main.php.
- http://www.opuscommons.net/templates/doc/manual.html.
- See http://www.opuscommons.net/templates/doc/manual_left.htm.
- See http://mpeg.selt.it/.
- For definitions of these terms introduced by Gérard Genette, see http://www.ht01.org/presentations/Session4b/dalgaard_HT01/html_with_notes/tsld006.htm.
- Poschardt, DJ Culture, 284.
The first detailed and comprehensive analysis of the visual aesthetics of new media. Tracing the roots of new media aesthetics to painting, photography, cinema, and television, the book examines digital imaging, the human-computer interface (HCI), hypermedia, computer games, compositing, animation, and virtual worlds. Each part of the book builds a complex yet accessible entry point that lets us into the history of new media forms, showing that such forms do not stand in isolation but
p. 8 - Author's preface to the Korean edition
The language of new media, or more precisely its several distinct languages, is bound to be a kind of hybrid: it integrates not only the newer technologies brought by the networked digital computer, the new machine of the global information society, but also the techniques, memories, and expertise of already well-established cultural forms such as cinema, theater, and the printed book.
Open source materials; hypertext; the futurism of the cyber-prophets of the 1990s; cultural superstructure; industrial modernism; eclecticism; avant-garde artists, Futurists
Info-aesthetics; Tabula Rasa (the blank slate); the loosening of ideology; the postmodern
Robert Venturi, Denise Scott Brown, and Steven Izenour, Learning From Las Vegas (1972)
Globalized internationalism
pp. 10-11
Info-aesthetics (that is, the new culture of the information society, as distinct from the culture of the past industrial society) already has, or can be expected to develop, a logic quite different from that of industrial modernism. This logic is the desire to creatively juxtapose the old and the new in various combinations. This book systematically examines one facet of contemporary culture driven by this aesthetics of hybridity: the intersection of the networked digital computer with already established cultural forms.
Remix: contemporary culture can be viewed through three key processes, that is, three kinds of remixing. The first remix is postmodernism: the remixing of the content and forms of past culture within a given medium or cultural form (music, architecture, fashion, etc.). The second remix is globalization: mixing a nation's cultural traditions, characteristics, and sensibilities so that they interact not only among themselves but also with the style of a new globalized internationalism. The third remix is the remixing of culture and the computer: the remixing of the interfaces of various cultural forms with new software techniques. What makes this cultural logic new is not a modernist novelty that seeks to erase the past but, on the contrary, the breadth and speed of the ongoing remix process and the novelty of the elements involved.
The new medium of the digital has produced new cultural content that combines an immersive cinematic interface with the hypertextual structure of the database. That content, in turn, has a flexible structure that users can easily manipulate.
p. 14 - Endorsement, "The first analysis of the visual aesthetics of new media," Mark Tribe (founder of Rhizome.org)
Art has always been bound up with technology, and artists have been the first to adopt each new technology as it appears. We see what a new technology can do, push it beyond the domain its engineers intended, come to understand what it means, reflect on its effects, push it past its limits, and tinker with it to become familiar with it. (...) The Internet is especially ripe with the potential to enable new forms of collaboration, democratic distribution, and participatory experience.
In this book Lev Manovich offers the first systematic and rigorous theory of new media. He places new media within the histories of visual and media cultures of the last few centuries. He discusses new media's reliance on conventions of old media, such as the rectangular frame and mobile camera, and shows how new media works create the illusion of reality, address the viewer, and represent space. He also analyzes categories and forms unique to new media, such as interface and database.
Manovich uses concepts from film theory, art history, literary theory, and computer science and also develops new theoretical constructs, such as cultural interface, spatial montage, and cinegratography. The theory and history of cinema play a particularly important role in the book. Among other topics, Manovich discusses parallels between the histories of cinema and of new media, digital cinema, screen and montage in cinema and in new media, and historical ties between avant-garde film and new media.
Lev Manovich is Professor of Visual Arts, University of California, San Diego. His book The Language of New Media (MIT Press, 2001) has been hailed as "the most suggestive and broad ranging media history since Marshall McLuhan."