Digital media: Can content, business and users coexist?
Leonardo Chiariglione Telecom Italia Lab, Torino (IT)
Ladies and Gentlemen, it is a great pleasure to address the Broadcast Engineering Conference of the NAB. As you may know, I work for a telecommunications company, and for them channels are by definition two-way and one-to-one. But as a person who has been working for quite a few years now to make the creation, distribution and consumption of audio-visual content easier, more effective and more rewarding, I can easily recognise how effective the one-way, one-to-many form of distribution can be when it comes to content, be it of an educational, informational or entertainment nature. Over the years the very success of this form of distribution has prompted the NAB to extend the scope of its conference and exhibition beyond the pure "broadcast" form of distribution. This conference is therefore an excellent place to study what has happened in the last few years, to interpret what is happening now, and to understand, and possibly orientate, what will come next.
Let's start with television, and let's remind ourselves that television broadcasting was the result of a great technological push that materialised as a mass-market phenomenon around the middle of the last century. It started as the "distribution of NTSC/PAL/SECAM signals in the VHF/UHF bands", which is usually called "terrestrial broadcasting" today. This form of television has provided a unique experience to billions of people around the world. What governments had sought for centuries, i.e. to provide education, guidance and entertainment to their citizens, became possible with an effectiveness that printed material and radio could not achieve.
Key to the functioning of broadcasting was the fact that TV viewers were bound to watch the content that the TV stations selected, at the time the stations chose to broadcast it. Then, some 20 years ago, the Consumer Electronics (CE) industry developed the Video Cassette Recorder (VCR) and gave TV viewers the ability to watch content selected from what had previously been broadcast, at a time of the viewers' own choosing. Content companies were not excited by this new freedom and, strictly from a logical viewpoint, they were right. Twenty years later, however, one must conclude that the new device created an entirely new distribution outlet, one whose revenues exceed those of the traditional cinema theatre.
But new challenges keep showing up. New video recorders, called Personal Video Recorders (PVR), are actually computers with a high-capacity hard disk; they provide the same features as a VCR with the added advantage that the user can control what is recorded and played back in a much finer way. Much as content companies did 20 years ago, broadcasters today complain that PVRs undermine their business model, and content companies are afraid that content so recorded can easily flow unrestricted through the Internet. I understand that both groups are now asking lawmakers in their countries to inhibit recording under certain conditions.
Frankly, I never understood why terrestrial broadcasters should want to replace their perfectly good distribution system with something else just because it is digital. The result of this decision, however, is in front of us and its reading is unambiguous. In Europe several attempts have been made to introduce digital terrestrial television (DTT) as a commercial venture, and the results have been bankruptcies. In this country I am told that things are not progressing as fast as originally planned. DTT is indeed a difficult beast to deal with but it is clear that, once there is a distribution system carrying content in digital form, the concerns with the PVR are only amplified. The ongoing stalemate is causing fibrillation in some public authorities.
Ten years ago digital technologies proved a great attraction for those who wanted to try a new satellite-based distribution system, because it could provide access to a wide variety of content at reduced cost compared to analogue. Access to this content was provided for a fee, hence the name "pay TV". After almost a decade a conclusion is taking form: pay TV works when there is only one operator providing the service. If there is more than one operator, business is invariably meagre, to say the least. As a result, in several European markets satellite pay TV operators are asking competition authorities for permission to merge.
The music market saw new life 20 years ago when the Compact Disc (CD) was released. The idea was to bring the quality of the recording studio to people at home and, later, on the move. But the combination of signal processing and networks created the MP3 phenomenon and, in a sense, the last few years have seen creativity at its best, with hundreds of self-styled entrepreneurs trying new distribution systems, some of them based on new technologies such as peer-to-peer (P2P) networks. The problem with this phenomenon, however, is that those entrepreneurs were actually dealing in content that was not theirs to distribute. In response, the recording industry created the Secure Digital Music Initiative (SDMI), an organisation that I expected would become the forerunner of a movement that would set the rules and the technology components of the digital world. Unfortunately, the problem turned out to be more complex than had been envisaged.
The last case that I would like to consider is the Digital Versatile Disc (DVD). The specification for that device was the result of a cross-industry effort that targeted the equivalent of video studio quality in the home. Unlike the CD, the bits recorded on a DVD are scrambled but, unlike the pay TV set-top box, all DVD players can play content from any content source. The result is that the success of the DVD has beaten all previous successes in CE history. The security of the DVD was eventually compromised but, fortunately, this is not having a particularly negative effect on the DVD market itself, which continues to grow. It does mean, however, that files derived from DVDs can find their way onto P2P networks.
It is time to stop for a moment and draw some conclusions. The first is that, while the media distribution business is still managed within vertical systems, the design of these systems can no longer be done in isolation and, even when it is done this way, workarounds are easily found. As a result, once content is released through one channel, it immediately finds its way to another.
Fissures are opening all over the place. In times of big change like this it is only natural to find opinions spanning the entire spectrum. On one side you find those who say that this way of creating new forms of distribution, reaching new people and providing new experiences is great and should be exploited, even though they are aware of the dangers. At the other extreme you find those who are totally risk-averse and would not yield an inch to the new world. The first group raises the banner of technology as the enabling factor, the second the banner of law as the restraining factor.
This time of big change should indeed mark a turning point. But the reality is that the opposing camps are confronting each other and there is no sign that a turning point will ever come. Our sons and grandsons will look back and shake their heads at the missed opportunities that this generation's inability to make things move forward has cost mankind. In the meantime we live in an age of stagnation or, worse, of decadence.
Things worked fine when the life cycle of information was subdivided into a number of virtually independent streams. But the cases I reviewed above show that there are no longer different streams, and that trying to keep them separate is a waste of time for all involved, is not understood by end users, runs against the tide of technology, and ultimately deprives business players of new opportunities and end users of new experiences.
This is not the first time such conflicts have happened in history, and we must learn from the lessons of the past if we want to move forward. Almost a quarter of a millennium ago, a few dozen sages met in the city of Philadelphia, worked for a few months and produced the Constitution of the United States of America. That Constitution was the first practical application of the ideas of the "social contract" that XVIII century philosophers had been debating for decades. I think we need a social contract for the media or, more generally, for information. Just as the new society in North America of the second half of the XVIII century established a social contract, so the new world of digital content needs a social contract for the Information Society. But beware: this social contract should not be a patch. It must be something rationally conceived.
Because of its rational conception, the US Constitution has withstood the test of time, and the only changes it has seen are "Amendments". Compare this with Europe, where Constitutions were introduced as patches to mend the autocratic ways of the past and accommodate the democratic needs of the present. In European countries Constitutions come and go. Italy is on its second Constitution, Germany on its third and France on its fifth. What about the UK? Well, this is called the British difference: they still have no written Constitution.
As with any other society, the Information Society is made of individuals whom I will call "peers". Information Society peers operate on a complex network, where they can play different roles and have multiple relationships with one another. Peers operate on information: some peers create it, other peers assemble or process other peers' information, some sell it and some eventually consume it. Even consumption can be a creative process.
The Information Society is a special type of society. One could imagine a "Food Society" in which some peers grow vegetables and cattle, other peers sell them, other peers cook them and other peers finally consume them. I guess it would not be a big deal if we wanted to write the rules of the "Food Society".
But the Information Society is different. The Internet and the Web have shown that people crave information. It was the need for an educated population that prompted XIX century governments all over the world to impose, and often directly administer, compulsory primary education. It was the need to keep their populations constantly informed that prompted many governments to put radio and television broadcasting directly under their responsibility. It is because of the societal role of information that you have a standard tax deduction in your income statements for newspaper and magazine subscriptions.
But we need rules. The generation and exchange of information is so fundamental to democracy and progress that leaving information without rules will have the most negative impact on our future. But we should not work out the rules by squabbling over the meaning of "fair use" in the Information Society, or we will end up with a Constitution in constant need of redrafting, much as has happened to the French. It is ground rules we need.
I am afraid I would not qualify to join the elite team of Information Society Constitution drafters. Being a technologist, however, I know that the Information Society will need technology. Even though it would be ideal to take a completely top-down approach, starting from first principles and then working down to precise technical requirements on the technologies underpinning the Information Society, I see no reason to wait, because some technologies are going to be needed anyway and the development of technology takes time.
For two years, starting in 1999, I was involved in SDMI, an initiative that was a great, although difficult, personal experience and that taught me a lot. That experience made me realise the extent of the complexities I mentioned before, and the need to include all the actors of the value network in the effort if we are to get anywhere.
The result has been the ISO standardisation project called MPEG-21. It is not my intention to describe the project in detail, but only to show that the technologies being developed can provide the technology foundation for the Information Society.
The goal of the MPEG-21 project can be described as "enabling the electronic commerce of Digital Items (DI)". Before you stop me to ask what a DI is, I will tell you that a DI is the unit of transaction between Users. An example of a DI is a music compilation, complete with MP3 files, metadata, all sorts of related links, etc. By "User" I mean any entity that acts on the value network, i.e. creators, market players, regulators, consumers, etc.
MPEG-21 has developed a number of technologies. The first is a standard way of "declaring" DIs. Called "Digital Item Declaration" (DID), this standard has the purpose of defining multimedia content in terms of its components (i.e. resources and metadata) and structure.
For any transaction we need a means to identify the object of the transaction. That is why we need a standard to uniquely identify DIs. Called "Digital Item Identification" (DII), this standard plays very much the same role as the ISBN does for books and the ISSN for periodicals. At the last MPEG meeting we recommended, from a number of excellent candidates, that CISAC, the International Confederation of Societies of Authors and Composers, be appointed as the Registration Authority for organisations that intend to act as assigners of DI identifiers.
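To make the two ideas concrete, here is a minimal sketch, in Python rather than in the XML the actual standards use, of a Digital Item that bundles resources and metadata and carries a unique identifier so that it can be the object of a transaction. All the names (Resource, DigitalItem, the urn:example prefix) are hypothetical illustrations, not MPEG-21 syntax.

```python
# A toy model of the DID/DII ideas, not the actual MPEG-21 XML schemas.
from dataclasses import dataclass, field
from typing import Dict, List
import uuid

@dataclass
class Resource:
    """An individual asset inside a Digital Item, e.g. an MP3 file."""
    uri: str
    mime_type: str

@dataclass
class DigitalItem:
    """A unit of transaction: resources plus metadata plus structure."""
    # A DII-style identifier; real identifiers would be assigned by a
    # Registration Authority, much as ISBNs are for books.
    identifier: str = field(
        default_factory=lambda: f"urn:example:di:{uuid.uuid4()}")
    resources: List[Resource] = field(default_factory=list)
    metadata: Dict[str, str] = field(default_factory=dict)

# Example: a music compilation declared as a Digital Item.
compilation = DigitalItem(
    resources=[Resource("tracks/song1.mp3", "audio/mpeg"),
               Resource("tracks/song2.mp3", "audio/mpeg")],
    metadata={"title": "My Compilation", "creator": "Alice"},
)
print(compilation.identifier)  # the handle used to refer to this DI
```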
Getting an identifier for a DI is important, but how are we going to put the "sticker" on it? This is where Persistent Association Technologies come in. SDMI struggled with the selection of very advanced "Phase I" and "Phase II" screening technologies, and its task was made harder by the fact that no established methods existed to assess the performance of such technologies. That is why we are developing another part of the standard, called "Evaluation Methods for Persistent Association Technologies". This is not meant to be a prescriptive (normative) standard, but rather a "best practice" for those who need to assess the performance of watermarking and similar technologies.
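To give a feeling of what such an evaluation method measures, here is a toy sketch. The detector below is a stand-in, not a real watermarking algorithm, and the two figures of merit shown (detection rate and false-positive rate) are only the most basic of those a real assessment would cover.

```python
# A toy evaluation harness for a black-box mark detector; the detector
# itself is a hypothetical stand-in, not a watermarking algorithm.
import random

def detector(item: bytes) -> bool:
    # Stand-in: a real detector would look for an embedded watermark.
    return item.startswith(b"WM")

def evaluate(detect, marked_items, unmarked_items):
    """How often is the mark found where it was embedded, and how often
    is it 'found' where it was not?"""
    detection_rate = sum(detect(i) for i in marked_items) / len(marked_items)
    false_positive_rate = sum(detect(i) for i in unmarked_items) / len(unmarked_items)
    return detection_rate, false_positive_rate

marked = [b"WM" + bytes(random.randrange(256) for _ in range(64))
          for _ in range(100)]
unmarked = [bytes(random.randrange(256) for _ in range(64))
            for _ in range(100)]
print(evaluate(detector, marked, unmarked))  # e.g. (1.0, 0.0)
```

A real evaluation would also subject the marked items to attacks (compression, cropping, re-encoding) before detection, which is precisely where agreed assessment methods are missing today.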
The next step is the development of a reference architecture for Intellectual Property Management and Protection (IPMP) to manage and protect DIs. This part is still under development, and active participation from all players is needed to make the architecture truly generic, without any bias towards a particular way of trading DIs.
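As a rough illustration of what such a generic architecture implies, consider the following sketch, in which a terminal hard-wires no particular protection scheme but delegates to whatever tool the content names. The interface and the "cipher" are stand-ins for illustration only.

```python
# A toy model of a protection-tool plug-in architecture; names and the
# trivial XOR "cipher" are hypothetical, for illustration only.
from abc import ABC, abstractmethod

class ProtectionTool(ABC):
    """Any tool used to protect content must speak this interface."""
    @abstractmethod
    def unprotect(self, data: bytes) -> bytes: ...

class XorTool(ProtectionTool):
    """A stand-in cipher; real tools would implement proper cryptography."""
    def __init__(self, key: int):
        self.key = key
    def unprotect(self, data: bytes) -> bytes:
        return bytes(b ^ self.key for b in data)

class Terminal:
    """Processes protected content by invoking the tool the DI requires,
    without knowing anything about how that tool works internally."""
    def __init__(self):
        self.tools = {}
    def register(self, name: str, tool: ProtectionTool):
        self.tools[name] = tool
    def play(self, tool_name: str, protected: bytes) -> bytes:
        return self.tools[tool_name].unprotect(protected)

t = Terminal()
t.register("xor-demo", XorTool(key=0x5A))
print(t.play("xor-demo", bytes(b ^ 0x5A for b in b"hello")))  # b'hello'
```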
A key component of this architecture is a standard called IPMP Extension (IPMP-X). It is a standard enabling communication between the tools used to protect a piece of content and a terminal that needs to process (e.g. decrypt, decode, present) that content. Another component is the "Rights Expression Language" (REL). Even in the physical world we seldom have absolute rights to an object; this will be all the more true in the virtual world, where the disembodiment of content from its carriage increases the flexibility with which business can be conducted. The REL is a language allowing Users to express what rights exist in a DI in a way that can be interpreted by a computer.
A right exists to perform actions on something. Today we use verbs such as "display", "print", "copy" or "store", and we humans think we know what we mean by those words. But computers must be taught their meaning. This is why we need a "Rights Data Dictionary" (RDD) that gives the precise semantics of all the verbs used in the REL.
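The following sketch, again a toy model and not the actual XML-based specifications, shows how an REL-style grant and an RDD-style dictionary fit together: the dictionary pins down what each verb permits, and the grant attaches a verb, a principal, a resource and a condition. All names and the condition chosen are hypothetical.

```python
# A toy model of an REL grant checked against an RDD-style dictionary.
from dataclasses import dataclass
from datetime import date

# RDD role: fix the semantics of each verb so that a machine, not just
# a human, knows what "play" or "print" actually permits.
RIGHTS_DICTIONARY = {
    "play": "render the resource transiently, without making a copy",
    "print": "produce a fixed, human-readable copy of the resource",
    "copy": "reproduce the resource as a new digital instance",
}

@dataclass
class Grant:
    principal: str     # who receives the right
    verb: str          # a verb defined in the rights dictionary
    resource: str      # identifier of the Digital Item concerned
    valid_until: date  # an example of a condition attached to the right

    def permits(self, who: str, verb: str, resource: str, when: date) -> bool:
        assert verb in RIGHTS_DICTIONARY, "verb must have defined semantics"
        return (who == self.principal and verb == self.verb
                and resource == self.resource and when <= self.valid_until)

g = Grant("bob", "play", "urn:example:di:1234", date(2003, 12, 31))
print(g.permits("bob", "play", "urn:example:di:1234", date(2003, 7, 1)))  # True
print(g.permits("bob", "copy", "urn:example:di:1234", date(2003, 7, 1)))  # False
```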
Information and Communication Technologies (ICT) let people do more than just find new ways of doing old business. Content and service providers used to know their customers very well. They used to know, even control, the means through which their content was delivered. Consumers used to know the meaning of well-classified services such as television, movies and music. Today we have fewer and fewer such certainties: end users are more unpredictable than ever, and the same piece of content can reach them through a variety of delivery systems and be enjoyed on a plethora of widely differing consumption devices. How can we cope with this unpredictability of end user features, delivery systems and consumption devices? This is where "Digital Item Adaptation" (DIA) comes to help, by providing the means to describe how a DI should be adapted (i.e. transformed) so that it best matches the specific features of the User, the Delivery System and the Device.
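Here is a minimal sketch of the adaptation idea, assuming a toy description of device capabilities: given what the consuming device can handle, pick the transformation that makes the resource best match it. The fields and the two transformations shown (transcoding and downscaling) are hypothetical examples, not DIA syntax.

```python
# A toy model of Digital Item Adaptation driven by a device description.
from dataclasses import dataclass

@dataclass
class DeviceDescription:
    max_width: int
    max_height: int
    supported_codecs: tuple

@dataclass
class VideoResource:
    width: int
    height: int
    codec: str

def adapt(resource: VideoResource, device: DeviceDescription) -> VideoResource:
    """Return a version of the resource the device can actually consume."""
    # Transcode if the device does not support the resource's codec.
    codec = (resource.codec if resource.codec in device.supported_codecs
             else device.supported_codecs[0])
    # Downscale (never upscale) to fit the device's display.
    scale = min(device.max_width / resource.width,
                device.max_height / resource.height, 1.0)
    return VideoResource(round(resource.width * scale),
                         round(resource.height * scale), codec)

phone = DeviceDescription(320, 240, ("mpeg4",))
master = VideoResource(720, 576, "mpeg2")
print(adapt(master, phone))  # scaled down and transcoded for the phone
```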
I will complete the current list of basic MPEG-21 technologies by mentioning "Event Reporting" (ER), whose purpose is to provide metrics and interfaces for reporting all reportable events, the "File Format" (FF), which provides a standard way to store and transmit DIs, and "Digital Item Processing" (DIP), whose purpose is to provide the means to "play" a DI. New audio and video coding schemes are also being considered. These will provide full scalability, a dream the signal processing community has nursed for many years without being able to make it come true, and one that now seems within reach.
What I have explained above is not a declaration of intentions; it is the result of a coordinated effort that has involved hundreds of individuals and has lasted three and a half years. At the moment two standards (DID and DII) have achieved International Standard (IS) status. Two more (REL and RDD) will reach that status in July 2003, and another (DIA) will do so in December 2003. More will follow in 2004.
Let me now sum up the messages I have for you.
You may now ask: exactly how does all this help my business? If you want a precise answer, I am afraid that will not be possible in the few minutes allocated to my speech, unless the organisers of this conference are ready for a radical change in the programme.
My suggestion is that people should give up broadminded altruism, reject the lures of narrow-minded selfishness and adopt broadminded selfishness instead.
Let me explain. Analogue broadcasting has been an example of broadminded altruism: just release content and make people happy. This model worked well for 60 years because the state of technology was such that the further propagation of content, while physically possible, was too awkward to realise on a broad scale to pose a threat to the business of content delivery.
Then, in the first years of the digital age, we saw two extremes. On the one hand, some people applied broadminded altruism to digital content, thinking that if digital technologies are a tiger that makes distribution easier, let's ride the tiger. The problem was, and still is, that it is easy to be successful if you give away your asset, and even more so if it is somebody else's asset. On the other hand, another remarkable business model has been tried, which I call narrow-minded selfishness: it relies on building the entire chain from source to destination on proprietary technologies. This model has been tried for long enough, and its value should by now be clear. I can only praise the authorities in this country for having prevented the creation of a monopoly, and I would love to see authorities in Europe similarly watchful.
Should we discount the great user experience that access to a large amount of content gives users? My answer is no. But how, then, can we reconcile the advantages and the dangers? Broadminded selfishness is the answer. Broadcasters and other content and service providers should retain control of their assets, but users should have freedom of choice, using equipment they have purchased in the shops, much as has happened over the last 60 years of analogue television. Broadminded selfishness is possible today if you rely on the technologies I introduced to you before.
Ladies and gentlemen, in the analogue world broadminded altruism has earned people prosperity on earth and possibly heaven in the afterlife. In the digital world, broadminded altruism has earned people poverty on earth and possibly hell in the afterlife. Narrow-minded selfishness has earned people poverty on earth and I do not know what in the afterlife. I encourage you to try broadminded selfishness. It may provide you with prosperity on earth and heaven in the afterlife.