Peer Networks - Plague or Promise?
April 1, 2005
The reemergence of the peer network has reopened the discussion of whether organizations can capitalize on it and make money. The fact that almost 70 percent of all traffic on the Internet today is peer-to-peer (P2P) gives us an indication of the rapid growth and penetration of P2P technology. The P2P networking model is considered by many as an emerging paradigm shift that could change information technology, sensing, communications and business processes in much the same way as the Internet has.
P2P is a path back to the original model of the Internet. It is the precursor of a new network and processor model in which every device is a server.
The concept of P2P means different things to different stakeholders. To some, it is sharing files or stealing music. To others, it is sharing CPU cycles and storage, or instant messaging. Some view P2P as ‘serverless’ collaborative work, Web services, location-based services or as a game. Despite the varying views, all agree it is the next step beyond the client-server Web.
Many drivers influence the adoption of P2P networks and the market for them; business, social and regulatory forces act as both drivers and constraints. The three main accelerators of P2P adoption are:
Broadband communication. My company, Technology Futures Inc., forecasts that 40 percent of connected households in the United States will have some form of broadband Internet access in 2005, with rapid growth to follow.
The increasing popularity of wireless devices adds another dimension and offers opportunities for new applications. These devices cross many domains and include 3G handsets and even applications that have no user in control.
Advances in software agent technologies provide mechanisms for managing the complexity of P2P computing.
Economic and market drivers also will spur the rate of adoption. A phrase often used to describe P2P’s impact is ‘friction-free economics.’
As technology reduces the costs of operating a firm, it reduces the costs of the market itself. As firms get more efficient, the market also is getting more efficient. Moore’s Law and Metcalfe’s Law are working to create a new marketplace where transaction costs are reduced not incrementally but exponentially. Harvard Business School professors Jeffrey Rayport and John Sviokla point out that in this evolving ‘marketspace,’ it is not only the infrastructure that is different, but also the content and context of transactions.
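A rough sketch of how the two laws compound is shown below; the 18-month doubling period and the proportionalities are standard textbook statements of the two laws, offered as illustration rather than as figures from this article:

```latex
% Metcalfe's Law: network value grows with the square of the node count
V(n) \propto n^2
% Moore's Law (approx.): cost per computation halves every ~18 months
C(t) \propto 2^{-t/1.5} \quad (t \text{ in years})
% Value delivered per unit of transaction cost therefore grows multiplicatively:
\frac{V(n)}{C(t)} \propto n^2 \cdot 2^{t/1.5}
```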
Think of the Internet not as a network of connected computers, but as the test bed for a new market economy that is global, continuously operating and increasingly automating the processes of buying, selling, producing and distributing by using many of the advantages offered by P2P.
The nature of the basic unit of business is changing. The concept of a firm as a physical entity is giving way to a ‘virtual organization,’ where employees may be part-time or contracted, assets may be jointly owned by many organizations, and the separation between what is inside and outside the firm becomes increasingly hazy.
The idea of a ‘friction-free’ economy is one cleared of ‘middlemen.’ Academics call it ‘disintermediation.’ A P2P network can be reconfigured very rapidly and can be virtual. It can be as scalable and as closed as the needs of the business dictate. There are benefits to this concept. With more distributed ‘knowledge’ in the marketplace, markets for goods could clear faster and avoid inventory buildups, which contribute to economic downturns.
P2P also brings another law into play as a social driver: Reed’s Law. It says that any system that lets users create and maintain groups creates a set of group-forming options that increases exponentially with the number of potential members.
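The counting behind Reed’s claim is straightforward; this is the standard combinatorial argument, not a calculation from the article itself:

```latex
% Possible groups of two or more among n potential members:
G(n) = 2^n - n - 1
% Worked example, n = 10:
G(10) = 2^{10} - 10 - 1 = 1013 \text{ possible groups,}
% versus only n(n-1)/2 = 45 pairwise links counted by Metcalfe's Law.
```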
Much of the interest surrounding P2P computing is in its potential. P2P enables the user to accomplish tasks previously done by a remote server that someone else is managing. It provides the opportunity to reclaim huge amounts of untapped resources, including processing power for large-scale computations and enormous storage potential. P2P allows the elimination of the single-source bottleneck. Data and control can be distributed, and requests can be load-balanced across the Internet, eliminating the single-point-of-failure hazard.
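A minimal sketch of that failover idea in Python follows; the peer list, addresses and fetch_from_peer stub are hypothetical, and a real P2P system would layer this on a discovery mechanism such as a distributed hash table:

```python
import random

# Hypothetical list of peers known to hold a replica of the resource.
# In a real P2P network this would come from a discovery layer, not a constant.
PEERS = ["peer-a.example:8000", "peer-b.example:8000", "peer-c.example:8000"]

def fetch_from_peer(peer: str, resource: str) -> bytes:
    """Placeholder for the actual transfer protocol (HTTP, a swarm protocol, etc.)."""
    raise NotImplementedError

def fetch(resource: str) -> bytes:
    """Load-balance across replicas; any single peer failing is not fatal."""
    candidates = PEERS[:]
    random.shuffle(candidates)      # naive load balancing across replicas
    for peer in candidates:
        try:
            return fetch_from_peer(peer, resource)
        except Exception:
            continue                # no single point of failure: try the next replica
    raise ConnectionError(f"no peer could serve {resource!r}")
```

Because every peer holding a replica is an equally valid source, the request succeeds as long as any one of them is reachable.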
IT operations may be able to replace some costly data center functions by enabling distributed services between clients. By allowing access and shared space, P2P improves remote-maintenance capabilities.
However, much of the appeal of P2P is due to social and psychological factors. Some users like the ability to bypass all centralized control. For better or worse, almost perfect anonymity is possible.
But there are constraints to P2P networks. Security is a serious one. P2P networks are inherently less secure than centralized systems. Accountability and threats to intellectual property are important barriers. Network and social vulnerabilities and resistance in the corporate IT structure are other considerations. These barriers must be addressed.
P2P networks have grown from useful tools in information sharing to havens for trafficking in unauthorized copies of intellectual property. Content owners have been pushing for digital rights management (DRM) technologies to control distribution of intellectual property. Their concern is that P2P is ideal for unfettered distribution of files. P2P allows for accessibility throughout the Internet; does not require the original source of a file to send each copy or to know the identity of the recipients; and allows files to be copied instantaneously without the cost of physical media.
The same attributes that frighten intellectual property owners make P2P networks attractive to those who want to publish information as easily and widely as possible.
DRM and P2P seem to be polar opposites. To content owners, P2P offers open invitations to copyright infringement and rampant theft. To consumer advocates, P2P is a natural outgrowth of the ‘open’ functionality of the Internet.
The reality is that DRM and P2P are sets of capabilities, and the two are not mutually exclusive. Many believe that P2P functionality is key to implementing new business models for content. DRM is necessary to close the larger holes that P2P creates in the ability to profit from intellectual property.
Content owners need to consider how they can integrate DRM functionality with P2P networks. Many DRM technologies have gaps in their ability to be integrated into P2P networks, including cost-related functionality limitations, device tethering, lack of superdistribution support, and the complexity of integration. Closing these gaps is the future direction for DRM designers.
In terms of cost, the biggest challenge in the market has been to find participants in the content value chain willing to pay for DRM.
For content owners, the reasons for supporting device tethering derive from the media industry’s traditional product orientation: the principle that two different formats of the same content - for example, the DVD and VHS versions of the same movie - are separate products and should be paid for separately.
Most DRM schemes support only a single level of distribution, or at best a limited form of superdistribution. Support for true superdistribution - in which any recipient of a file can pass it along, with rights acquired at use time rather than at copy time - requires far more complex technology.
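To make that distinction concrete, here is an illustrative toy model in Python, not any vendor’s actual DRM scheme: the protected file travels freely from peer to peer, and rights are checked per user at playback rather than at distribution:

```python
from dataclasses import dataclass, field

@dataclass
class ProtectedFile:
    content_id: str
    # In superdistribution the file itself may be copied freely; each hop is
    # recorded so upstream distributors could, for example, be credited.
    distribution_chain: list = field(default_factory=list)

def pass_along(f: ProtectedFile, from_user: str, to_user: str) -> ProtectedFile:
    """Redistribution does not invalidate the content; it extends the chain."""
    return ProtectedFile(f.content_id, f.distribution_chain + [(from_user, to_user)])

def can_play(f: ProtectedFile, user: str, licensed_users: set) -> bool:
    """Single-level DRM binds rights to the first buyer or device;
    superdistribution resolves rights per user at use time."""
    return user in licensed_users
```

A single-level scheme, by contrast, would stop at the first hop: the license is tethered to the original buyer, and any copy passed along is simply unplayable.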
A serious barrier to growth in the DRM market has been how expensive, time-consuming and complex it is to integrate DRM technology.
What will drive the future business value of the emerging P2P networks? Location-based services and mobility will be major drivers. The real driver of P2P business profit will be exploiting the shift from people-to-people and people-to-things communications to device-to-device communications.
David Smith is vice president of the research and consulting firm Technology Futures Inc. He has been actively involved in technology management and forecasting for more than 30 years.
Links: Technology Futures Inc., www.tfi.com