JXTA 2: A high-performance, massively scalable P2P network
The new JXTA release makes pragmatic changes, adapts to real-world topologies
Sing Li (westmakaha@yahoo.com), Author, Wrox Press
11 November 2003
JXTA 2 is the second major release of the
open source P2P network building substrate with a popular Java-based
reference implementation. Significant design modifications have been
introduced to create higher performance, massively scalable, and
maintainable P2P networks. This article, which builds on Sing Li's JXTA
series Making
P2P interoperable, published two years ago, brings you up to date on
the platform's major changes.
Since the introduction of JXTA 1.0, the open source peer-to-peer (P2P)
and distributed computing communities have embraced it with a passion (see
Resources for
earlier developerWorks
coverage of JXTA). The JXTA community flourished, with a proliferation of independent open source projects aimed at testing the claims made by the JXTA platform development team: that JXTA can and should become the preferred standard upon which future interoperable P2P networking applications will be built.
Learning from accumulated experience
During the last two years, the JXTA platform design team has diligently helped the JXTA development community to better understand the basic design concepts and best practices in using the facilities provided by the JXTA platform API. In return, the application development community has provided the team with valuable comments and suggestions. This feedback is the result of real-life deployment attempts, and includes observations of limitations in the platform's original design, awkwardness or inconsistency in the API library, and specific pragmatic requirements that would make JXTA more applicable to building real-world P2P applications. This valuable body of accumulated experience would have been impossible to gather without first introducing a workable reference implementation. As such, the first version of JXTA has served its intended purpose well.
Based on the feedback, the JXTA platform team has focused on a set of
major goals in the design and implementation of JXTA 2. Table 1 shows the
top design goals, along with a brief description of how the JXTA 2
implementation satisfies them (differences between JXTA 1 and JXTA 2 are
highlighted where applicable).
Table 1. JXTA 2 design goals and implementations

Goal: Massive scalability on today's most typical network configuration
JXTA 2 implementation:
- JXTA 2 introduces the concept of a rendezvous super-peer
network, greatly improving scalability by reducing propagation
traffic (see Toward
massive scalability)
- Implementation of the shared resource distributed index (SRDI)
within the rendezvous super-peer network, creating a loosely
consistent, fault-resilient, distributed hash table (see Shared
resource distributed index)
- Significantly improved resource utilization, both locally on a
per-node basis and network wide
- Use of large supercomputers in laboratories to simulate the interactions in very large JXTA 2 networks and ensure network scalability to hundreds of thousands of nodes
Goal: Increased performance
JXTA 2 implementation:
- Improved resource allocation and reuse, both at a local
per-node level and network wide
- Local optimization, including tight control over platform
memory footprint; efficiency improvement in thread handling;
elimination of repeated multiple buffer copies within the protocol
stack; providing control over the advertisements stored in the
local cache; ability to obtain the pipe advertisement associated
with a pipe without performing discovery; and more
- Network resource improvements, including better use of available network connections by using the back-channel of a TCP connection; a new TCP relay for peers behind network address translation (NAT) devices (see the sidebar "Network address translation device"); propagating only indices of advertisements instead of complete advertisements through SRDI; introduction of a bit-saving binary on-the-wire protocol format; and more
- Local advertisement storage and indexing now performed with a customized Xindice btree database, resulting in improved performance over the previous file system-based storage scheme
- HTTP transport that now supports both 1.0 and 1.1 for more efficient use of the underlying TCP connection
- Routing hint support added to advertisements to expedite route
resolution
Goal: Improved developer friendliness
JXTA 2 implementation:
There are numerous API redesigns and improvements, making the
APIs easier to understand and considerably more consistent. Major
API changes include:
- Message creation and handling
- Message elements manipulation
- Improved flexibility in asynchronous discovery
handling
- Programmatic configurator support
- Clear, distinct separation of relay, rendezvous, router, and
transport functionality
- Unification of terminology in describing firewall, relay, and
proxy elements found in current networks
- ID factory for group IDs (previously a cumbersome task to create manually)
- JxtaSocket, providing a stream-based API between peers, similar to the Java platform's socket APIs
- JxtaBiDiPipe, providing bi-directional pipe capabilities for passing messages between peers
Goal: Improved reliability
JXTA 2 implementation:
Changes were made to the adaptive behavior of peers to improve overall network reliability (since rendezvous and relays are the most vital services for network availability, most of the changes revolve around them):
- Failover on rendezvous connections
- Failover on relay connections
- Dynamic discovery and tracking of rendezvous
- Dynamic discovery and tracking of relays
- An edge peer automatically becomes a rendezvous if no rendezvous is found after a fixed time
Goal: Increased manageability
JXTA 2 implementation:
Support for detailed remote monitoring of peers. Extensive instrumentation and metering capability is built into JXTA 2. This is essential for benchmarking, a key enabler for network tuning and performance/scalability improvements. A GUI monitoring utility is also available. It marks the beginning of a long-term plan to make JXTA peers more manageable and easier to administer.
Goal: Mobile compatibility
JXTA 2 implementation: Provides proxy capabilities via JXME, the JXTA implementation for the Java 2 Platform, Micro Edition.
Network address translation device
A network address translation (NAT) device enables multiple computers and terminals to connect to the Internet through a single network address (by translating the network address on outgoing packets and retranslating it on incoming packets). These devices are widely used by home users and by business users working remotely, and are often called high-speed Internet routers or Internet sharing devices.
In this article, we'll examine in more detail the key changes that make
massive scalability possible in JXTA 2. We'll also do some hands-on coding
with the new platform configuration API, creating a silent startup harness
for the JXTA shell (a harness that you can readily reuse in your own JXTA
applications). Finally, we'll explore a couple of new shell commands and
take a peek at how resources are indexed and located in a JXTA 2
network.
Toward massive scalability
Network topology in the real world evolves over
time. Its evolution is often influenced by external infrastructure and
economic factors that typically are not in direct control of the software
system designers. JXTA 1 took a pure and rather theoretical approach to
using the underlying physical network topology. In contrast, JXTA 2 takes
a significantly more pragmatic approach to engineering an overlay network
that will work in a high performance and scalable manner on top of today's
most common network topology.
JXTA 2 introduces the concept of a rendezvous
super-peer network: dynamically and adaptively segregating the P2P
network into populations of edge peers and rendezvous peers. Propagation only occurs within the smaller, more stable population of rendezvous peers. This restriction significantly enhances scalability and reduces the possibility of network-wide message storms or flooding.
Let's take a deeper look into the implementation of this new overlay
network.
Edge peer versus rendezvous super peer
Under JXTA 2, edge peers are the most transient in
nature, and may come and go frequently. Most peers in a JXTA 2 network are
expected to be edge peers. An edge peer does not forward query requests,
and it may or may not maintain its own local advertisement cache. Most
edge peers will maintain a local cache, but limited capability devices
such as cell phones, PDAs, and pagers may not be able to cache, and will
need to use the service of a JXME JxtaProxy (see Resources for
more on JXME).
While some edge peers have direct connection to the rendezvous network,
many edge peers connect to the rendezvous network through intermediate
peers acting as relays or a JxtaProxy . To participate in the
P2P network, an edge peer only has to know how to reach a single
rendezvous. Typically, an edge peer maintains a list of known rendezvous
(called seed
rendezvous) and participates in dynamic discovery of new rendezvous, which
enables the peer to fail-over to an alternate rendezvous and enhances
overall network reliability.
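The failover behavior just described can be pictured in a few lines of Java (the interface and names below are ours, purely illustrative of the idea, not JXTA API types): an edge peer walks its list of known rendezvous until one accepts the connection.

```java
import java.util.List;

// Hypothetical sketch of edge-peer failover: try each known (seed or
// dynamically discovered) rendezvous in turn until one is reachable.
public class RdvFailoverSketch {
    interface Connector { boolean connect(String rdvAddress); }

    static String connectToFirstReachable(List<String> knownRdvs, Connector c) {
        for (String rdv : knownRdvs) {
            if (c.connect(rdv)) return rdv; // connected; stop here
        }
        return null; // none reachable; the peer may become a rendezvous itself
    }

    public static void main(String[] args) {
        List<String> seeds = List.of("tcp://a:9711", "tcp://b:9721");
        // Simulate the first seed being down:
        String got = connectToFirstReachable(seeds, addr -> addr.contains("b"));
        System.out.println(got); // the second seed is used
    }
}
```

Returning null here corresponds to the adaptive case described later, where an edge peer that finds no rendezvous for an extended period becomes one itself.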
Rendezvous peers are expected to be less transient and more stable
(that is, once they connect to the network, they stay up and stay
connected). It is anticipated that there are significantly more edge peers
in a JXTA network at any time than rendezvous peers. Rendezvous peers are
direct-connect peers in nature, meaning that they should not require
relays or JxtaProxy to connect to other rendezvous in the
network. Every rendezvous peer is a member of the rendezvous super-peer
network (in the context of a particular JXTA group). Rendezvous peers
forward queries that they cannot resolve based on their own cache of
indices to other rendezvous; they participate in maintaining a loosely
consistent distributed hash
table (DHT) of all the accessible resources in the network. Every
rendezvous peer maintains a dynamic view of all the known rendezvous in
the network and can connect or fail-over to alternate rendezvous as
network topology changes.
Every JXTA peer has the ability to assume the role of edge peer or
rendezvous peer. In fact, an edge peer can adaptively become a rendezvous
peer by default if it cannot connect to any rendezvous for an extended
period of time. All rendezvous peers in the network can also issue
queries, meaning that edge peer functionality is effectively a proper
subset of rendezvous peer functionality. In general, symmetry is
adaptively and pragmatically applied in a JXTA 2 network, subject to the
constraint of the underlying physical network topology as well as
customized user configuration.
Shared resource distributed index
The operation of the JXTA 2 network relies on its
ability to resolve distributed queries (see the sidebar "The essence of P2P
networking" for more on this concept). In JXTA 2, the rendezvous
super-peer network forms a loosely consistent DHT to resolve distributed
queries.
JXTA 2 uses a distributed algorithm, called the shared resource distributed index (SRDI), to create and maintain a conceptual index of resources in the network. In JXTA, resources are described by metadata in the form of advertisements (essentially XML documents). SRDI is used to index these advertisements network wide, through a set of specified attributes. The maintained distributed index resembles a hash table, with the indexed attributes as hash keys and the hash values mapping back to the source peer containing the actual advertisement. Queries can then be
made anywhere in the rendezvous network based on these attributes. In this
way, SRDI can answer advertisement queries in the network by locating the
peer that has the required advertisement. For example, a peer may send a
query for "a pipe named LotteryService " to the network
consisting of thousands of nodes. SRDI enables this query to be quickly
resolved and answered by the peer that holds the
"LotteryService pipe" advertisement.
Under SRDI, it is no longer necessary to remotely publish any
advertisement. Only the index of the advertisements stored on a peer is
published. This index information is "pushed out" to the DHT (rendezvous
super-peer network) through a single connected rendezvous.
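The shift from pushing whole advertisements to pushing only index entries can be pictured with a small sketch (the class and attribute strings are ours, not JXTA API types): only a tiny attribute-to-peer-ID pair travels to the rendezvous, while the XML advertisement stays on the source peer.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of an SRDI-style index held by a rendezvous:
// an indexed attribute (e.g. a pipe name) maps to the ID of the peer
// that actually stores the advertisement. The advertisement body
// itself is never pushed to the rendezvous.
public class SrdiIndexSketch {
    private final Map<String, String> index = new HashMap<>();

    // Called when a peer publishes an advertisement locally:
    // only (attribute -> peerId) is pushed, not the XML document.
    public void push(String attribute, String peerId) {
        index.put(attribute, peerId);
    }

    // A query for the attribute resolves to the source peer, which
    // is then asked directly for the full advertisement.
    public String resolve(String attribute) {
        return index.get(attribute);
    }

    public static void main(String[] args) {
        SrdiIndexSketch rdv = new SrdiIndexSketch();
        rdv.push("Name=LotteryService", "uuid-EP1");
        System.out.println(rdv.resolve("Name=LotteryService")); // uuid-EP1
    }
}
```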
A loosely consistent DHT
The JXTA 2 network acts as an always-available,
network-wide, dynamic, distributed data structure: a virtual hash table
containing the index of all the published advertisements in the entire
JXTA group. An edge peer can query the hash table at any time by supplying
a set of attributes -- the hash keys in the table. The query is resolved
by the network (actually the rendezvous super-peer network) by hashing the
key to the required value (that is, the peer containing the requested
advertisement), as illustrated in Figure 1:
Figure 1. Rendezvous super-peer network as a loosely
consistent DHT
In Figure 1, edge peer 1 (EP1) creates a pipe and stores its
advertisement locally. The index is updated through SRDI and the DHT now
knows about this advertisement. Some time later, edge peer 2 (EP2)
performs a query for EP1's pipe. The rendezvous super-peer network
resolves the query and notifies EP1 of the request for the advertisement.
As a result, EP1 sends the requested advertisement as a response to
EP2.
To create this DHT, each peer caches advertisements locally, and all locally stored advertisements are indexed. The index is pushed to the rendezvous node (a JXTA 2 edge peer connects to only one rendezvous at any time). The rendezvous super-peer network maintains the DHT containing the amalgamated indices. Queries are always sent to a rendezvous, as illustrated in Figure 2:
Figure 2. Query resolution in the rendezvous super-peer
network acting as a DHT
The steps shown in Figure 2 are as follows:
- EP1 creates a pipe and stores its advertisement locally.
- The index is updated locally and the changes are pushed to the
connected rendezvous (R1).
- Through SRDI index propagation, the rendezvous receiving the update
(R1) replicates the new index information to the selected rendezvous (R3
and R5) in the super-peer network (see Rendezvous
peerview and RPV walker for more information on this selection and
replication process).
- Later, EP2 queries for EP1's pipe. This query is sent to its only
connected rendezvous -- R4.
- R4 hashes the query's attributes and redirects the request to
another rendezvous in the super-peer network -- R3.
- R3, having received EP1's index update through R1, immediately
notifies EP1 of EP2's request.
- EP1 sends a response directly to EP2 containing the requested pipe
advertisement. At this point, the query is resolved.
- EP2 may in turn decide to store the pipe advertisement, and the cycle begins again.
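The steps above work because every peer applies the same deterministic hash function against the same ordered rendezvous list, so EP2's query lands on a rendezvous holding EP1's index entry. A minimal sketch (the hash choice is ours, not JXTA's actual function):

```java
import java.util.List;

// Hypothetical sketch of the Figure 2 flow: a shared, deterministic
// hash maps an attribute onto a position in the ordered RPV, so the
// rendezvous chosen at store time and at query time coincide.
public class DhtSketch {
    // Deterministic hash of an attribute onto a position in the RPV.
    static int target(String attribute, int rpvSize) {
        return Math.floorMod(attribute.hashCode(), rpvSize);
    }

    public static void main(String[] args) {
        List<String> rpv = List.of("R1", "R2", "R3", "R4", "R5");
        String attr = "Name=LotteryService";
        // R1 (EP1's rendezvous) and R4 (EP2's rendezvous) compute the
        // same target, so the query meets the stored index entry.
        String storeAt = rpv.get(target(attr, rpv.size()));
        String queryAt = rpv.get(target(attr, rpv.size()));
        System.out.println(storeAt.equals(queryAt)); // always true
    }
}
```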
Thus far we've been describing the DHT maintained by the JXTA 2
rendezvous network as if it is maintained in a perfectly consistent
fashion all the time. This is not currently possible because the set of
rendezvous cooperating to implement the DHT may come and go (albeit less
frequently than edge peers) in a P2P network. When a rendezvous goes away,
the chunk of index that it is maintaining becomes unavailable for a period
of time (until the responsible peers publish it again). Based on an
adaptive approach, JXTA 2 can maintain a loosely consistent DHT, one that
can deal with the transient nature of peers in a P2P network.
The approach that JXTA 2 takes ensures that the DHT continues to work
in a suddenly partitioned network (that is, separate islands of peers
created by the disappearance of a connecting peer), as well as a newly
merged network (that is, a connecting peer appearing to connect formerly
separate islands of peers). JXTA 2's approach involves the notion of a rendezvous
peerview (RPV) and a pluggable rendezvous walker.
Rendezvous peerview and RPV walker
An RPV is maintained by each rendezvous in the
super-peer network. The RPV is the list of known rendezvous to the peer,
ordered by each rendezvous' unique peer ID. The hash function used in the
DHT algorithm is identical in every peer and is used to determine the
rendezvous that a (locally irresolvable) query request should be forwarded
to.
Any rendezvous that becomes unreachable is removed from a peer's RPV.
Each rendezvous in the super-peer network regularly sends a random subset
of its known rendezvous to a random selection of rendezvous in its RPV.
This is done to ensure eventual convergence of RPV network-wide and to
adapt to any partitioning or merging occurring in the underlying physical
network. Note that at any given time, the RPV maintained by different
rendezvous in the network may be inconsistent with one another.
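This periodic exchange can be sketched as a simple gossip step (class names and subset size are illustrative, not the platform's actual protocol): a rendezvous sends a random subset of its view, and the receiver merges it into its own.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical sketch of RPV gossip: each rendezvous periodically sends
// a random subset of its view to a randomly chosen rendezvous, which
// merges the entries. Repeated rounds make the views converge, even
// though at any instant two views may disagree.
public class RpvGossipSketch {
    final Set<String> view = new TreeSet<>(); // ordered by peer ID
    final Random rnd = new Random();

    // Pick up to n random entries from this peer's view to send out.
    List<String> randomSubset(int n) {
        List<String> all = new ArrayList<>(view);
        List<String> out = new ArrayList<>();
        for (int i = 0; i < n && !all.isEmpty(); i++)
            out.add(all.remove(rnd.nextInt(all.size())));
        return out;
    }

    // Merge a received subset; unreachable peers are pruned elsewhere.
    void receive(List<String> entries) {
        view.addAll(entries);
    }
}
```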
To maintain the DHT, SRDI stores incoming index information at a rendezvous peer in the super-peer network selected by a fixed hashing function. To cope with the loosely consistent nature of the network, the index information is redundantly replicated to the additional rendezvous peers adjacent on the RPV (recall that the RPV is a list of known rendezvous ordered by their universally unique peer IDs). This ensures that if the targeted rendezvous crashes, the probability of successfully hashing to that index information during a query remains quite high.
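The replication rule can be sketched as follows, assuming (as in Figure 3) that copies go to the immediate RPV neighbors of the hash-selected rendezvous; the real replication count is a platform detail this sketch does not model.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of SRDI's redundant store: the index entry is
// sent to the rendezvous selected by the hash, plus its immediate
// neighbors in the ordered RPV, so a crash of the target still leaves
// a nearby copy reachable by a short walk.
public class ReplicationSketch {
    static List<Integer> replicaPositions(int hashPos, int rpvSize) {
        List<Integer> out = new ArrayList<>();
        out.add(hashPos);                                // primary target
        if (hashPos > 0) out.add(hashPos - 1);           // neighbor below
        if (hashPos < rpvSize - 1) out.add(hashPos + 1); // neighbor above
        return out;
    }
}
```

With a six-entry RPV and a hash landing on position 4 (R5 in Figure 3), the entry would be stored at positions 3, 4, and 5 (R4, R5, and R6).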
When resolving queries, the hashing function is applied against a rendezvous' own RPV. Because multiple existing rendezvous may have disconnected, or multiple new rendezvous may have joined the super-peer network, hashing may not immediately resolve the query; in that case an RPV walker forwards the query to a limited number of additional rendezvous. The algorithm used by this limited-range walker is designed to be "pluggable," or customizable for more specific network scenarios. Figure 3 illustrates the redundant storage of index information.
Figure 3. SRDI-redundant storage of index
information
In this figure, the hash function maps the incoming index information
to R5. R1, the rendezvous that received the index information from the
edge peer, will send the index information to R5, as well as replicate it
to R4 and R6, increasing the availability of the index information. If,
let's say, R3 received a query matching the stored index, the hash
performed will send the query to R5 in R3's RPV. If, however, R5 had
disappeared in the meantime, the RPV would collapse and close the gap
where R5 used to reside. This means that the former R6 would become the
new R5.
If R6 does not have the required index information, the topology of the
network may have significantly changed. In this case, the RPV walking
algorithm comes into action. The walker will walk up and down the RPV list
looking for the information.
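A limited-range walker of the kind just described might look like this sketch (the hop limit and alternating probe order are illustrative; the actual pluggable walker is more sophisticated): starting from the position the hash predicts, it probes outward in both directions along the RPV until the index is found or the hop budget runs out.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a limited-range RPV walker: probe alternately
// up and down the ordered rendezvous list, starting from the position
// the hash function predicts, within a fixed hop limit.
public class WalkerSketch {
    static String walk(List<String> rpv, Map<String, Boolean> hasIndex,
                       int start, int maxHops) {
        for (int d = 0; d <= maxHops; d++) {
            for (int pos : new int[]{start + d, start - d}) {
                if (pos >= 0 && pos < rpv.size()
                        && hasIndex.getOrDefault(rpv.get(pos), false))
                    return rpv.get(pos); // found the index holder
            }
        }
        return null; // not found within the walk limit
    }
}
```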
We will actually configure and work with a rendezvous super-peer
network next. Before doing this, we need to take a look at some API and
shell improvements in JXTA 2.
Major configurator API improvements
One major area of API improvement is automated configuration support. It is now finally possible to use platform APIs to programmatically create a PlatformConfig file (peer advertisement) and start the JXTA platform. This means that the formerly mind-boggling array of user-configurable parameters can be completely hidden.
There are multiple APIs to perform automated configuration, depending
on your specific needs. Most of the configuration-related APIs are located
in the net.jxta.util.config package. The
net.jxta.util.config.Configurator class is a general purpose
wrapper class for simplifying JXTA 2 configuration programming and is
adequate for many circumstances. Under the hood, the
net.jxta.util.config.Configurator class uses the lower-level
classes within the same package to do the work.
In the upcoming release 2.2 of JXTA (not yet available at the time of writing), these configuration APIs will be unified under an external net.jxta.ext.config package. Profiles for peers in specific roles (such as edge peer, rendezvous, and relay) will further facilitate programmatic configuration.
The core objective of configuration is to create a peer advertisement
that describes the peer to be started. This peer advertisement is saved by
default in the .jxta directory as the PlatformConfig file.
Programming JXTA configuration for a silent startup
To facilitate our experiment, we need to start a total of six JXTA peers: five rendezvous and one edge peer. To make it simple for you to duplicate this network, we would like to run all six peers on the same machine. Formerly, this would have demanded the Herculean task of describing all the configuration parameters to be entered manually through the JXTA GUI -- a description that could easily take the entire length of this article.
To circumvent this complexity, we'll create a starter class called
com.ibm.devworks.jxta2.shell.ShellStarter . This class
will:
- Read command line parameters to configure either a rendezvous or
edge peer, with unique name and transport parameters.
- Create the peer advertisement using the
net.jxta.util.config package and store it.
- Start the shell with the new configuration.
Steps 1 and 2 will be performed only if no existing peer advertisement
is found.
Listing 1 shows part of the code for the com.ibm.devworks.jxta2.shell.ShellStarter class. See the Resources section to download the complete code.
Listing 1. The ShellStarter class
public class ShellStarter {
    private static final String TLS_PRINCIPAL_PROP = "net.jxta.tls.principal";
    private static final String TLS_PASSWORD_PROP = "net.jxta.tls.password";
    private static final String ADDR_SEP = ":";
    private static final String PORT_PRE = "97";
    ...

    public ShellStarter() {
    }

    public static void main(String[] args) throws Exception {
        ...
        String tpFname =
            PlatformConfigurator.getPlatformConfigFileName();
        File tpFile =
            new File(PlatformConfigurator.getPlatformConfigFileName());
        // only perform config if not already configured
        if (!tpFile.exists()) {
            tcpAddress = args[1];
            rdvNode = args[3];
            int rdvNodeNum = Integer.parseInt(rdvNode);
            myPort = Integer.parseInt(PORT_PRE + args[2] + PORT_POST);
            Vector rdvList = new Vector();
            if (rdvNodeNum < 10)
                rdvList.add(TCP_PRE + tcpAddress + ADDR_SEP + PORT_PRE
                    + rdvNode + PORT_POST);
            pa = PlatformConfigurator.createTcpEdge(
                args[0],                 // peer name
                "A dwPeer - " + args[0], // description
                tcpAddress,              // ip
                myPort,                  // port
                rdvList,                 // seed rendezvous
                USER_NAME,
                USER_PASS
            );
            // disable multicast
            // pass in pa to preserve settings
            TcpConfigurator tc = new TcpConfigurator(pa);
            tc.setMulticastState(false);
            // enable incoming connections
            tc.setServer(tcpAddress);
            tc.setServerEnabled(true);
            tc.save(pa); // save to pa only, not to file
            // configure the rendezvous
            if (isRdv) {
                // pass in pa to preserve rdv settings created by PlatformConfig
                RdvConfigurator rdv = new RdvConfigurator(pa);
                rdv.setIsRendezVous(true);
                rdv.save(pa);
            }
            PlatformConfigurator.save(pa);
        } // end if (!tpFile.exists())
        System.setProperty(TLS_PRINCIPAL_PROP, USER_NAME);
        System.setProperty(TLS_PASSWORD_PROP, USER_PASS);
        Boot.main(args);
    }
}
In Listing 1, the highlighted code shows how the
net.jxta.util.config.PlatformConfigurator class is used to
prepare a peer advertisement that we use to start an instance of the shell
silently, without invoking the GUI configurator. We first create an edge
peer by calling the PlatformConfigurator.createTcpEdge()
helper method with some command line arguments. However, edge peers enable
multicast by default, so we want to disable this in our single machine
situation. We use the net.jxta.util.config.TcpConfigurator
class to turn off the multicast state. Using the same
TcpConfigurator , we also enable incoming TCP connections.
Finally, we check to see if the command line specifies that this should be
a rendezvous node. If it does, we use a
net.jxta.util.config.RdvConfigurator instance to set the peer
to a rendezvous. Note also the setting of the
net.jxta.tls.principal and net.jxta.tls.password
system properties to bypass the login prompt.
The command line for ShellStarter takes the following
parameters:
ShellStarter <peer name> <local IP or hostname> <port index>
<rdv port index> [edge | rdv]
Each of the edge peers and rendezvous that we create will run on the
same host, but will use different TCP ports. The <port
index> parameter indicates that the peer will be running at port
97?1, where "?" is the index. This will enable us to configure up to 10
total peers and rendezvous at ports 9701, 9711, 9721, and so on. For
example, to create a rendezvous with peer name rdv1, on IP 192.168.23.17,
on port 9711, we use the following command line:
ShellStarter rdv1 192.168.23.17 1 99 rdv
Note the use of port index 99 to indicate that this rendezvous has no
other known seed rendezvous.
To create an edge peer called peer1, on the same IP, on port 9701, and
seeded with the above rendezvous, we would use the following command
line:
ShellStarter peer1 192.168.23.17 0 1 edge
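As a quick check of the scheme, a port index i yields the port "97" + i + "1", as the port list above shows; the tiny sketch below is ours (Listing 1 builds the same string from its PORT_PRE constant and an elided suffix).

```java
// Hypothetical sketch of the port numbering described above:
// "97" + <port index> + "1", giving 9701, 9711, ... for indices
// 0 through 9 on a shared host.
public class PortSketch {
    static int port(int index) {
        return Integer.parseInt("97" + index + "1");
    }

    public static void main(String[] args) {
        System.out.println(port(0)); // 9701 -- peer1 in our setup
        System.out.println(port(1)); // 9711 -- rdv1 in our setup
    }
}
```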
Under the test directory, there are five startup directories for
rendezvous -- rdv1 to rdv5, respectively. There is also a peer startup
directory, called peer1. Each directory contains a runshell.bat file that
has the corresponding parameterization for ShellStarter . You
will need to edit these files to modify the IP address. Figure 4 shows the
configuration of this network.
Figure 4. Staging a JXTA 2 network for
experimentation
New shell commands in JXTA 2
There are several new commands in the JXTA 2 shell distribution. Among the
most useful for the developer is the kdb command. You
can use this command to turn on and off debugging logs for the
various JXTA components while the platform is running.
The kdb command is menu driven, and can be used to
set any of the 16 components to the various log priority levels
(thus generating varying amounts of debugging information).
The new route command can be used to display or
manipulate JXTA route table information.
Another useful command in the JXTA 2 shell, especially to
facilitate the understanding of how the rendezvous super-peer
network operates, is the new rdv command. This command
has many options, several of which are explored in detail during our
experimentation.
To start the system, first start all the rendezvous in order, from one
to five. Be sure to wait for each one to completely start before starting
the next one, then start peer1.
Observing RPV and walker action
Using the rdv command (see the sidebar
"New shell commands
in JXTA 2"), it is possible to see the RPV maintained by any of the
rendezvous.
As an example, the following snippet shows the result from our
rdv1:
The RPV ordering reflects the peers' JXTA peer ID ordering. We are now
ready to see RPV walker in action. The rdv command has an
option to run a string indexing service (for test and diagnostics only).
To start this service, run the following command on each of the six
peers:
Now, create a string on one of the rendezvous. In our case, we create
"treasure" on rdv4:
You can use the following list option to confirm that rdv4 now contains
this string:
Now let's see RPV walker in action. On peer1 , search for
the following string:
JXTA>rdv -search treasure
Sending test message
rdv has sent search query for treasure
JXTA>rdv received from : jxta://uuid-59616261646162614A78746150325
03369170C5E92004D0DB2E48AAA571741C803
found: treasure
The treasure string was found immediately on rdv4 . You can
verify that the peer ID is indeed for rdv4 by typing
whoami on rdv4 .
Looking at the rdv4 shell window, you'll also see that
indeed the reply is sent directly from rdv4 :
Replying search query= treasure
send reply sent
On some of the other rendezvous, you will see evidence of forwarding of
the query:
Forwarding search query= treasure
Here's what happened: when you query for "treasure," the rendezvous walker is invoked to "walk" the rendezvous super-peer network. Because peer1 is a rendezvous client, its request is passed to its only connected rendezvous -- rdv1. The "walk" commencing from rdv1 causes the propagation of the query. Every rendezvous that does not have the "treasure" string forwards the request (up to an arbitrarily set time-to-live/hop-count limit of 10, and subject to loop detection) to other known rendezvous through the walker, until the request reaches rdv4. Upon receiving the query, rdv4 replies directly to peer1.
Observing SRDI in action
Using the rdv shell command, you can easily experiment with the RPV and rendezvous walking mechanism. To actually see SRDI in action, we can set the LOG level of the SRDI message-logging mechanism to DEBUG and then cause some SRDI activity (that is, create an advertisement on a peer).
To avoid spurious messages from the diagnostic string indexing service, first turn the service off on rdv1 through rdv5, as well as on peer1:
Then on each peer, turn the SRDI messages LOG level to
DEBUG using the kdb command:
JXTA>kdb
KDB Main Menu
1 Change LOG configuration
q Quit
MAIN> 1
LOG Menu
1 Global 2 EndpointRouter
3 Endpoint 4 HTTP
5 TCP 6 TLS
7 Rendezvous 8 Discovery
9 Resolver 10 Pipe
11 Relay 12 Messengers
13 Messages 14 Quota Listener
15 SRDI 16 CM
q Quit
LOG> 15
Level [w,i,f,d,e,q or ?])> d
LOG Menu
1 Global 2 EndpointRouter
3 Endpoint 4 HTTP
5 TCP 6 TLS
7 Rendezvous 8 Discovery
9 Resolver 10 Pipe
11 Relay 12 Messengers
13 Messages 14 Quota Listener
15 SRDI 16 CM
q Quit
LOG>
Now, on peer1, we will create an advertisement that will cause SRDI
index information to be pushed to rdv1 and subsequently replicated in the
DHT. The easiest way to do this is simply to create a new peergroup
(actually a peergroup advertisement):
If you then look at the messages coming out on rdv1, you may see SRDI
messages similar to the following:
<DEBUG 14:36:07,078 Srdi:521> Pushing deltas in group NetPeerGroup
<DEBUG 14:36:07,078 Srdi:494> waiting 30000ms before sending
deltas in group NetPeerGroup
Other rendezvous may also receive additional SRDI messages sent by rdv1
as the hashing and redundant index replication occurs.
Conclusion
The impact of P2P networking on everyday computing is significant, and a very large population of computing users is affected. You can't hope to design a viable P2P application creation substrate in total isolation. Consideration of user feedback, continuous requirements analysis, rethinking of the design, and design iteration are all important ingredients in creating a usable substrate. JXTA is faithful to this philosophy. In its second generation, JXTA has evolved and adapted to the realistic requirements of its constituency of early adopters.
By pragmatically segregating the overlay network into a population of edge peers interconnected through a core population of rendezvous peers, JXTA 2 is significantly more deployable over today's common network topology. Problems with message broadcast storms and flooding, common in a "flat" architecture, are also alleviated.
JXTA 2's core rendezvous-peers network implements a loosely consistent
DHT. All of the resources available over the P2P network are indexed and
dynamically retrievable with the help of this DHT. The index maintained by
the DHT is the SRDI. In contrast to the first version, advertisements are
no longer propagated throughout the network; instead, only index
information is propagated on demand. This approach significantly improves
the use of the available network bandwidth, in a trade-off for slightly
more computation during query resolution.
Another area of growing maturity is the JXTA platform API. The
availability of a fully functional peer configuration API enables
developers to create applications that will start silently without booting
JXTA's default (and often confusing) peer configurator. We applied this
API when creating our silent shell startup scripts for experimentation.
Using the new shell command rdv , we were able to observe the
rendezvous super-peer network in action. SRDI messaging and operations
were also observed by setting DEBUG log levels through the new
kdb shell command.
The goal of creating a scalable, high performance, but (most important)
usable P2P
network needs to be pursued relentlessly. JXTA 2 is an important step in
the right direction.
Resources
- Download the source code
for this article.
- See the series Making P2P
interoperable, also by Sing Li, for coverage of the first release of
the JXTA platform:
- "Part
1, The JXTA story" provides an overview of Project JXTA and how it
enables and facilitates the simple fabrication of P2P applications
without imposing unnecessary policies or enforcing specific
application operational models.
- "Part
2, The JXTA command shell" gives you a hands-on tour of the JXTA
shell. You'll explore its command set and extend its capabilities by
writing your own custom commands using the Java programming language.
- "Part 3, Creating JXTA systems" showcases JXTA's extension of the TCP/IP network and demonstrates that JXTA isn't bound by the constraints typical of client-server networks.
- Visit the official JXTA community
to find the latest specifications, documentation, source, and
binaries.
- The white paper "Project JXTA 2.0 Super-Peer Virtual Network," by Bernard Traversat et al. (Project JXTA, May 2003), describes the inner workings of the rendezvous super-peer network in intricate detail.
- See how to deploy JXME and turn mobile devices into JXTA and Jabber
clients in "Mobile P2P
messaging, Part 2: Develop mobile extensions to generic P2P
networks" by Michael Juntao Yuan (developerWorks,
January 2003).
- For more JXME coverage, check out "Tips & tricks: JXTA" by Roman Vichr (developerWorks, April 2002).
- Anne Zieger examines JXTA in her "Peer-to-peer
communications using XML" article (developerWorks,
April 2002).
- Todd Sundsted's developerWorks
series on "The practice of
peer-to-peer computing" (March 2001 - January 2002) provides
background reading on fundamental P2P computing principles.
- Find hundreds of articles about every aspect of Java programming in
the developerWorks
Java technology zone.
About the author
Sing Li is the author of Professional Apache Tomcat, Early Adopter JXTA, and Professional Jini, as well as numerous other books with Wrox Press. He is a regular contributor to technical magazines and is an active evangelist of the P2P evolution. Sing is a consultant and freelance writer and can be reached at westmakaha@yahoo.com.