7 Services
Much of the functionality of PLAN comes from its services. While the
initial release of PLAN offered only a smattering of services, the current
version provides a rich library from which to choose. Services may be
divided into two categories: the core services, which are present on
all Active Routers, and additional service packages. Each service
package consists of one or more service routines made available to PLAN
programs.
As with all functions called from PLAN, service functions may take multiple
arguments (and not a single argument which may be a tuple, as in ML); this
is presented below as a comma-separated list of types. And, like all PLAN
functions, services always return a value; a service whose result carries
no useful information returns unit.
Because service routines are not guaranteed to be ubiquitous, the PLAN
programmer must declare the types he expects the services to take in his
PLAN program using service declarations. These appear at the
top level and have the form svc id : argtypes -> resulttype. Types may be polymorphic; type variables are
specified with a ' followed by a letter. Here are some example
declarations:
svc defaultRoute : host -> host * dev
svc fragment : ('a chunk,int) -> unit chunk list
svc getRB : void -> int
The first names the defaultRoute function, which takes a single
argument of type host and returns a pair of type host *
dev. The second names the fragment function, which takes two
arguments, of types 'a chunk and int, and returns a list of
unit chunks. Finally, getRB takes no parameters and
returns a value of type int.
7.1 Core Services
These are the services which the programmer may expect to be present at
every active router. As such, service declarations for these services need
not be supplied for typechecking purposes.
7.2 Service Packages
There are eleven service packages provided with the current PLAN distribution:
install, port, resident, RIP, arp, flow,
dns, auth, frag, reliable, and csum. The
last three may be used to implement traditional protocol layering in PLAN.
They are described below.
7.2.1 install
This package provides routines that allow service packages to be dynamically
installed. It is, however, only available in a specialized version of
PLAN, shipped in the distribution as an executable called plan_loader. For more on this, refer to [2].
- installServices : (string * blob) list -> unit
This executes the Caml bytecode in the blob on the
invoking node. Each element of the input list is a pair whose
first element is a string which is ignored (it is included for
backward compatibility), while the second element is the
bytecode file containing code for the service package to be
installed. Elements of this list are installed in order, starting from
the head of the list, and therefore the order is significant. All packages
being installed will be dynamically linked into the router's
environment.
If a failure occurs during this process, a diagnostic will be printed
to the router console, and the exception BadService will be raised.
An obvious side-effect of this service is that loaded packages will
consume memory on the active router, a fact that could be maliciously
or inadvertently exploited. Therefore, we anticipate that future
releases of PLAN will require authentication before allowing a PLAN
program to dynamically load services.
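To make this concrete, here is a minimal sketch of how a package might be
loaded, assuming the plan_loader version of PLAN and using the
getBlobFromFile injection service (Section 7.3) to read the bytecode file;
the function name loadPackage and the file name are hypothetical:
svc installServices : (string * blob) list -> unit
(* Injected as: loadPackage(getBlobFromFile("newsvc.cmo")).
   The string component of each pair is ignored (retained only for
   backward compatibility), so the empty string is passed. *)
fun loadPackage (b:blob) =
  installServices([("", b)])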
7.2.2 port
This package provides a means for PLAN programs to interact with host
applications. The package makes use of the PLAN ports, which have PLAN type
port. A PLAN port is a duplex connection between an active router and
a host application. The active router side of the PLAN port always receives
PLAN packets, while the application side receives output in a number of
formats, depending on the service used. When a PLAN packet is first
injected into the network onto a particular node, it is associated with its
injection PLAN port on that node. This PLAN port is referred to as the implicit PLAN port of a packet.
The module in the source distribution which implements PLAN ports is planport.ml. Here is its signature:
type plan_port
exception ClosedByPeer
exception PortOpFailed
exception PortClosed
exception NoPacket
(* client side ports *)
val open_port : int -> plan_port
val close_port : plan_port -> unit
val get_packet : plan_port -> string
val try_get_packet : plan_port -> string option
val timed_get_packet : plan_port -> float -> string
val send_packet : plan_port -> string -> unit
(* server port (only one) *)
val get_local_port : int -> plan_port
By default, PLAN ports are implemented as Unix Domain sockets -- that is,
each PLAN port names a Unix Domain connection between pland (the
active node daemon) and a host application. Associated with each PLAN port
is a port number, and the socket opened is named
``/tmp/sware'' suffixed with the port number. Alternatively, TCP may be used as the
underlying connection mechanism; this is done by compiling with the Camlp4
option -DPLANPORT_TCP. In this case, the PLAN port number refers to
a TCP port number.
PLAN ports may be created implicitly or explicitly. Each Active Router has
a server thread listening on a PLAN port whose number is specified when the
router is started. Essentially this thread consists of a loop which calls
get_local_port on the user-specified port number; this routine waits
for and accepts client connections. Once a connection is established,
another thread is forked which repeatedly calls get_packet to obtain
client packets and then queues them for processing. The router tracks the
connection in a table and creates a PLAN port to name it. Packets that
arrive via this connection will have this PLAN port as their implicit PLAN
port. The implicit PLAN port can be obtained within a PLAN program with the
PLAN routine getImplicitPort.
A PLAN port may be created explicitly within an executing PLAN program by
the routine openPort (which is a wrapper for the planport.ml
routine open_port). This routine attempts to connect to a server
application listening on the specified port number, and if successful
returns the PLAN port associated with the connection.
PLAN ports are only valid on the router on which they were created. In
other words, if PLAN port P lives on router R, but a
PLAN program attempts to send data through it while executing on router S,
the operation will fail and raise an exception.
Note that PLAN port names are not stored in any explicit way on the routers.
Therefore, a PLAN program that wishes to use a PLAN port must retain the
value of type port which names the PLAN port. This serves to prevent
one PLAN application from using the PLAN ports of another. Note that PLAN
programs may use the resident data facility to store PLAN port values
on the routers (see Section 7.2.3 for more on resident data).
A Unix Domain (or TCP/IP) socket connection named by a PLAN port is closed
whenever a PLAN program explicitly closes it via closePort, or when
the peer terminates the connection.
- getImplicitPort : () -> port
This function takes no arguments and returns the implicit PLAN port of the
invoking PLAN program. Note that this PLAN port will not be valid if it was
previously closed using the closePort service.
- openPort : int -> port
This function attempts to create a connection between the invoking host and
a server application residing on that host listening on the specified Unix
domain (or TCP/IP) socket. If successful, the function returns a PLAN port
naming that connection. On failure, the exception OpenFailed is
raised.
- closePort : port -> unit
This function closes the connection associated with the given PLAN port.
If the PLAN port was closed previously, or does not reside on this router, the
exception InvalidPort will be raised.
- printPort : (port, 'a) -> unit
This function is identical to the core service print except that the
output is transmitted via the specified PLAN port, rather than via the
invoking packet's implicit PLAN port. The commands print(a) and printPort(getImplicitPort(),a) are identical. If the PLAN port was closed
previously, or does not reside on this router, the exception InvalidPort will be raised.
- deliver : (port, 'a) -> unit
This function is identical to printPort, above, except in
the form of its output. Rather than converting the specified PLAN value to
a string, the value is instead structurally marshalled to be sent
across the connection. This marshalling scheme is not currently documented,
but is defined by the module basis/marshalling.ml. This routine is
useful when the receiving host application wishes to manipulate PLAN values
directly. This application, if written in OCaml, would make use of the basis package, which defines OCaml versions of PLAN's values (the ones used
by the implementation, in fact). An example of such an application is
provided with the PLAN distribution, called server. This application
listens on the specified PLAN port for connections, and then prints any PLAN
values it receives through the PLAN port. A sample PLAN program to use with
this application is rout_tests/deliver.plan.
If the specified PLAN port was closed previously, or does not reside on this
router, the exception InvalidPort will be raised. If delivery fails
for any other reason, the exception DeliveryFailed will be raised.
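To illustrate explicit PLAN port use, here is a minimal sketch that
connects to a server application assumed to be listening on a
(hypothetical) PLAN port number 7001, prints a value to it, and closes the
connection; the function name notify is hypothetical:
svc openPort : int -> port
svc printPort : (port, 'a) -> unit
svc closePort : port -> unit
(* openPort raises OpenFailed if no server is listening *)
fun notify (msg:string) =
  let val p = openPort(7001) in
    (printPort(p, msg); closePort(p))
  end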
7.2.3 resident
Resident data is data that remains on the router after the PLAN program that
created it has terminated. This feature was added to PLAN to enable
programs to easily perform network traversals, such as depth first search
(DFS) and breadth first search (BFS). To perform these searches, it is
necessary to have some way of marking nodes so they are not revisited,
as the network topology may be cyclic. Rather than providing a
special-purpose service to enable node marking, we opted to provide the more
general service of resident data. An example of the use of resident data to
perform a DFS is given in the example program rout_tests/traceNet.plan.
Resident data is stored in a table on the router which is indexed by a
string which names the data, and a session key. This key serves to
differentiate data stored by different instances of the same PLAN
application.
One hazard of allowing resident data is that PLAN programs may (maliciously
or inadvertently) leave large chunks of data on a router, consuming memory
even after the PLAN program has terminated. To combat this problem, each
piece of resident data has a timeout associated with it. This timeout is
set by the user when the data is created, but may not exceed 5 minutes (in
other words, users may be ``nice'' and opt to release their data before the
5 minute limit). In the future, we hope to implement a more
semantically-appealing way of cleaning up leftover resident data, such as a
form of distributed garbage collection. We also anticipate that eventually
the use of this service will require authentication.
- generateKey : () -> key
Generates a fresh key which is globally unique.
- get : (string, key) -> 'a
Retrieves the value indexed by the provided string and key. If no such
value exists, the exception NotFound is raised.
- put : (string, key, 'a, int) -> unit
Associates in the table the specified value (the third argument) with the
provided string and key. The fourth argument specifies the number of
seconds that the data is allowed to live. This value is currently capped at
300.
- setLT : (string, key, int, int) -> bool
Same as put, except that before changing the value, a test is
performed on the stored data to see if the specified new value is less
than the old stored value; the value is changed only if the
comparison returns true. The test and set happen as an atomic
operation, i.e., if another PLAN packet tried to look at the data
during the call to setLT, it would wait until the call completed.
The result of the comparison is returned.
- setGT : (string, key, int, int) -> bool
Same as setLT, except the comparison ``greater-than'' is used.
- delete : (string, key) -> unit
Deletes the value indexed by the provided string and key. If no such
value exists, the exception NotFound is raised.
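As an illustration of the node-marking idiom mentioned above, here is a
minimal sketch; the names mark, relax, ``visited'', and ``dist'' are all
hypothetical. A traversal can call mark on each node it visits, after
which get("visited", k) raises NotFound exactly on unvisited nodes;
relax uses setLT to retain the smallest reported distance:
svc put : (string, key, 'a, int) -> unit
svc setLT : (string, key, int, int) -> bool
(* mark the current node as visited for 60 seconds *)
fun mark (k:key) =
  put("visited", k, true, 60)
(* keep only the smallest distance seen for this key; the result of
   the comparison is returned *)
fun relax (k:key, dist:int) =
  setLT("dist", k, dist, 60)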
7.2.4 RIP
This implements a distributed routing protocol based on the standard RIP
protocol. The following are included:
- RIP : host -> host * dev
Given a node of type host, returns a pair consisting of the next hop
on the route to the destination and the interface to the next hop. By
default, defaultRoute is an alias of this function.
- RIPGetNeighbors : () -> (host * dev) list
Returns a list of all neighbors of the current node, each paired with
the interface that connects to it. By default, getNeighbors is an alias of this function.
- RIPRecv : ((host * int) list, host) -> unit
Given a table of distances to various nodes from a neighbor specified
in the second argument, RIPRecv updates the routing table at the
current node.
- RIPGetRoutes : () -> (host * host * dev * int) list
Returns the entire routing table at the current node as a list of
(destination, nextHop, interface, distance) tuples.
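Since RIP has the same type as a routing function such as defaultRoute
(of which it is the default alias), it can be passed directly to
OnRemote. A minimal sketch (the function name forward is hypothetical):
svc RIP : host -> host * dev
(* send a chunk toward dest, routed hop-by-hop by RIP *)
fun forward (c:unit chunk, dest:host, rb:int) =
  OnRemote(c, dest, rb, RIP)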
7.2.5 arp
This package contains services needed to implement address resolution. This
package is based on the standard Address Resolution Protocol (ARP).
- bind : (blob, host, dev) -> unit
Registers a binding of the link layer address specified in the first
argument to the given node. The third argument is the interface that
connects to the node.
If the binding fails for any reason, the exception BindFailed is raised.
- retrieveBinding : (host, dev) -> blob
Returns the link layer address corresponding to the specified node for
the given interface. The exception NoBinding is raised if a
binding is not found.
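For example, these services can be combined with defaultRoute to look up
the link-layer address of the next hop toward a destination. A minimal
sketch (the function name nextHopAddr is hypothetical):
svc defaultRoute : host -> host * dev
svc retrieveBinding : (host, dev) -> blob
(* find the next hop toward dest, then look up its link-layer
   address; raises NoBinding if no binding is registered *)
fun nextHopAddr (dest:host) =
  let val p = defaultRoute(dest) in
    retrieveBinding(fst p, snd p)
  end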
7.2.6 flow
In some applications, it is reasonable to route packets along a
route different from the one determined by the default, hop-count based
routing protocol. Since hop count metrics tend to approximate latency,
applications with higher bandwidth requirements might choose to accept
longer latencies via longer paths in order to achieve higher bandwidth.
Such an example is described in [5].
This package provides services to help set up and use such alternate routes,
called ``flows''. Associated with a flow is the notion of a ``metric,''
which may be calculated differently for different applications. Each node
maintains a value called the ``toll'' which might be used in determining the
metric for a flow. A toll of 1 at each node, with the metric being the sum
of all toll values on a route, reduces to shortest-path routing. Several
other strategies can be explored.
A flow is identified using a PLAN key (see Section 7.2.3),
which must have been generated at the destination.
- flowSet : (key, host, int, dev) -> unit
This attempts to install a flow route on the current node, given the
key associated with the route, the next hop to take from this node for
packets moving with the flow, the route metric, and the outgoing
interface.
If there is already a flow route installed associated with the given
key, the new route overrides the previous one. It is the PLAN
programmer's responsibility to ensure that the metric of the new route
is better than that of the previous one, consistent with his
definition of the metric.
- flow : host -> host * dev
Ignores the argument, and uses packet headers to route.
- getToll : () -> int
Returns the toll value at the current node.
- setToll : int -> unit
Sets the toll value to the specified integer.
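Because flow has the same host -> host * dev type as defaultRoute, a
chunk can be sent along a previously installed flow simply by supplying
flow as the routing function. A minimal sketch (sendAlongFlow is a
hypothetical name):
svc flow : host -> host * dev
(* send a chunk toward dest along the installed flow route *)
fun sendAlongFlow (c:unit chunk, dest:host, rb:int) =
  OnRemote(c, dest, rb, flow)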
7.2.7 dns
This package implements domain-name resolution. Mappings are provided by a
static table, generated from a file specified at router startup time.
- getHostByName : string -> host
Takes as input a name, according to the following grammar:
name ::= host | host : port-num
host ::= ip-addr | domain-name
port-num ::= int-literal
domain-name ::= string-literal
where ip-addr has the form n.n.n.n, with each n an
integer from 0 to 255. If the port-num component is not
provided, it is assumed to be the same as the PLAN port number of the invoking node.
getHostByName attempts to resolve this name into a value
of the PLAN host type. Raises an UnknownHost
exception on failure.
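A minimal sketch of its use, resolving a name before sending a chunk to
the result (the function name sendByName is hypothetical):
svc getHostByName : string -> host
svc defaultRoute : host -> host * dev
(* resolve name at the current node, then send the chunk there;
   raises UnknownHost if resolution fails *)
fun sendByName (c:unit chunk, name:string, rb:int) =
  OnRemote(c, getHostByName(name), rb, defaultRoute)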
7.2.8 frag
This package provides a set of services used to fragment PLAN programs into
smaller pieces and then reconstitute them at a remote destination. This
service is needed because most physical networks have a maximum deliverable
packet size, called the Maximum Transmission Unit, or MTU. Consider the
following PLAN program:
svc defaultRoute : host -> host * dev
fun sendem (c:unit chunk,x:int*host) =
(OnRemote(c,snd x,fst x,defaultRoute);x)
svc fragment : ('a chunk,int) -> unit chunk list
svc length : 'a list -> int
svc getRB : void -> int
fun fragment_deliver (dest, path_mtu, c:unit chunk) =
let val cs = fragment(c,path_mtu)
val l = length(cs) in
(foldr(sendem,cs,(getRB()/l,dest)); ())
end
The purpose of the PLAN function fragment_deliver is to deliver the
chunk c to node dest and evaluate it there. The problem is that
the chunk c, when marshalled and encapsulated in a PLAN packet, may be
too large to send across one of the links on the path to dest.
Therefore, fragment_deliver first fragments that chunk into n
smaller, properly-sized chunks via the call to the fragment service.
Each of these chunks consists of a call to the service reassemble,
which takes four arguments: a fragment of the original chunk, the sequence
number of the fragment (numbered 0 to n-1), a boolean set to true
for the last fragment and false otherwise, and finally a value of type
key which is freshly generated and shared by all of the fragments.
Next, each fragment is sent via the call to sendem from foldr.
When each fragment arrives at the destination, it calls reassemble,
which stores each original chunk fragment until all have arrived, at which
point it reassembles the original chunk and evaluates it.
Notes:
-
The second argument given to the fragment service is the size of
the packet to be sent, i.e. the MTU. Therefore, the resulting list of
chunks will each be slightly smaller than the specified MTU, in order
to accommodate the additional overhead of packet encapsulation. In
particular, this overhead is the value of the constant fragment_overhead.
- Each call to reassemble stores the information passed to it in a
table, indexed by the key. If any fragment is lost, the stored
information will be useless and will eventually time out. In
particular, successive fragments may arrive at most frag_timeout seconds apart, else the space in the table is freed.
By default, this value is set to 10 seconds.
- The reassembly table is, of course, of finite size. By default
it is set to 211 entries.
Any attempt to add an entry to a full table will fail.
- Each packet that calls reassemble may have leftover resource
bound. This bound is also stored in the table, and then donated
to the reconstituted chunk for evaluation.
The following services are included:
- fragment : ('a chunk, int) -> unit chunk list
Takes a chunk and a size, and fragments the chunk into chunks of that size
minus fragment_overhead. Each of the returned chunks contains a
fragment of the original chunk. When all of these chunks are evaluated, the
original chunk is itself reconstituted and then evaluated.
The chunk to be fragmented is first marshalled into the form that is
actually sent over the wire. It is then divided into small units that
can fit into the MTU given, and each of these units is encapsulated as
a PLAN value of type blob. In case the MTU is too small
for any fragment to fit inside it, the PLAN exception MtuTooSmall is
raised.
The list of chunks returned contains one chunk for each blob. Each of
these chunks has as its code a call to the reassembly service, to be
invoked at the remote end.
- reassemble : (blob, int, bool, key) -> unit
At the evalDest, each fragment puts in a call to the reassembly
service, providing the actual packet fragment in the blob, a sequence
number to indicate its position in the stream of fragmented packets, a
boolean value indicating whether any fragments with a higher sequence
number remain, and lastly a key to identify a particular stream of
fragments. Each call to this service merely registers the arrival of a
fragment until either the last fragment is received, which causes the
fragments to be reassembled and the complete packet to be evaluated, or
a timeout occurs, which causes the fragments already received to be
dropped.
- fragment_overhead : int
Specifies the additional number of bytes added to a chunk encapsulated by the fragment service, plus standard packet overhead. If the overhead is o and the MTU is m, then
the useful space in a fragment is only m - o.
7.2.9 reliable
This package provides a set of services used in conjunction with the RetransOnRemote primitive to ensure reliable delivery of
PLAN data. Consider the following PLAN program:
svc generateKey : void -> key
svc sendAck : ('a chunk,key,int) -> unit chunk
svc defaultRoute : host -> host * dev
svc fragment : ('a chunk,int) -> unit chunk list
svc getMTU : dev -> int
svc fragment_overhead : int
fun reliable_deliver(dest, c:unit chunk) =
let val seq = generateKey()
val d = sendAck(c,seq,10)
val p = defaultRoute(dest)
val ds = fragment(d,getMTU(snd p)-fragment_overhead) in
RetransOnRemote(ds,seq,3,dest,20,defaultRoute)
end
The purpose of the PLAN function reliable_deliver is to reliably
deliver the chunk c to node dest and evaluate it there. The
program first generates a key to associate with the packets to send; this
can be thought of as a sequence number which identifies the chunk. More
specifically, this key will be used in a call to reliableAck on the
source to halt further retransmissions of the packets containing c.
Next, the call to sendAck creates a new chunk d which
``encapsulates'' c with some additional code to be executed on the
destination (the encapsulating code will be shown below); part of the action
of d is to send a packet to the source to invoke reliableAck.
Next, d is fragmented into a list of chunks ds using the
fragmentation service (see Section 7.2.8), using the outgoing link
MTU for size information. Finally, this list of chunks is sent using RetransOnRemote. The list of chunks will be resent at most 3 times (third
argument), and each chunk will receive 20 resource bound (fifth argument).
Suppose that the list ds consists of two chunks. When the second
chunk arrives at the destination, the two chunks will be reconstituted into
the chunk d which is then evaluated; the code for d (created by
sendAck) is as follows:
svc reliableAck : key -> unit
svc defaultRoute : host -> host * dev
svc eval : 'a chunk -> 'a
svc arrived : key -> bool
fun send_ack(c:unit chunk,source,seq,rb) =
(OnRemote(|reliableAck|(seq),
source,rb,defaultRoute);
if arrived(seq) then
()
else
(eval(c);()))
Here, the arguments c, seq, and 10 that were given to
sendAck are bound to c, seq, and rb,
respectively (the source variable is filled in automatically by sendAck). First, a packet is sent back to the source to invoke reliableAck; this tells the sender to quit resending. Next, the arrived service is invoked with the given key to check whether this is a
duplicate packet (if it is not, the key is registered). If the packet is
not a duplicate, the original chunk c is finally evaluated (note that
the returned value is discarded to satisfy the typechecker).
Some things to note:
-
Using the link MTU (the getMTU call) is not strictly correct;
there is no guarantee that the MTU of the local network is less than or equal to the
path MTU to the final destination dest. We have just
done so for expositional purposes. See the example program
rout_tests/pathMTU.plan for a PLAN program which determines
path MTU.
- True reliable delivery is not actually achieved here, since
chunks are only retransmitted a finite number of times. This
is because of the difficulty of managing resource bound. We hope
to have a solution to this problem in the near future.
- The example above is of a reliable packet which is then
fragmented. The alternative is to send reliable fragments by
reordering the calls to the encapsulation services:
svc generateKey : void -> key
svc sendAck : ('a chunk,key,int) -> unit chunk
svc defaultRoute : host -> host * dev
fun sendem (c:unit chunk,x:int*host) =
let val seq = generateKey()
val d = sendAck(c,seq,10) in
(RetransOnRemote([d],seq,3,#2 x,#1 x,defaultRoute);x)
end
svc fragment : ('a chunk,int) -> unit chunk list
svc getMTU : dev -> int
svc fragment_overhead : int
svc reliable_overhead : int
svc length : 'a list -> int
svc getRB : void -> int
fun reliable_fragments(dest,c:unit chunk) =
let val p = defaultRoute(dest)
val cs = fragment(c,getMTU(snd p)-reliable_overhead)
val l = length(cs) in
(foldr(sendem,cs,(getRB()/(l*3),dest)); ())
end
What happens here is that the chunk c is fragmented first,
and then each fragment is sent reliably with a call to the PLAN
function sendem. Note that the size of the fragments is
set to be the MTU size minus the overhead of encapsulating
the reliable code. Only those fragments which are lost will
actually be resent, thus preserving network bandwidth. The tradeoff
is that less useful payload is delivered per packet, since each
packet sent is encapsulated with both fragmentation and reliability
code. In our original example, only the original chunk is
encapsulated for reliability, and so this overhead is divided
evenly among each of the resulting fragments.
The following services are included:
- sendAck : ('a chunk, key, int) -> unit chunk
Takes a chunk, a key, and a resource bound for the acknowledgment, and
wraps the chunk into a new chunk which causes an ack to be sent back to
the source.
- reliableAck : key -> unit
Registers the receipt of an ack at the current node for the reliable
packet sent out with the given key.
- arrived : key -> bool
Checks to see if the packet specified by the given key has already
arrived.
- reliable_overhead : int
Specifies the additional number of bytes added to a chunk encapsulated by the
sendAck service.
7.2.10 csum
This package provides an encapsulation-based checksum service. For more
details on the workings of encapsulation, see Sections 7.2.9 and
7.2.8. The following services are included:
- checksum : 'a chunk -> unit chunk
The specified chunk is first marshalled into the wire form and a
checksum is calculated for it. This wire form is then encapsulated as
a PLAN blob, and a new chunk is returned, which has the following as
its code part:
svc verifyChecksum : (blob,int) -> bool
svc evalBlob : blob -> unit
fun unchecksum(b,c) =
if verifyChecksum(b,c) then evalBlob(b) else ()
where b is bound to the wire form of the original chunk and c is
bound to its checksum. When this code is evaluated at the evalDest, the
original chunk will only be executed if its checksum is correct. (A
brief usage sketch appears after this service list.)
- verifyChecksum : (blob, int) -> bool
Checks whether the given blob has the checksum specified as the second
argument.
- checksum_overhead : int
The overhead (in bytes) for the encapsulation described above.
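As a minimal sketch of checksummed delivery, a chunk can be wrapped with
checksum before being sent; the wrapped chunk evaluates the original at
the destination only if the checksum verifies (checked_deliver is a
hypothetical name):
svc checksum : 'a chunk -> unit chunk
svc defaultRoute : host -> host * dev
(* deliver c to dest, protected by a checksum *)
fun checked_deliver (c:unit chunk, dest:host, rb:int) =
  OnRemote(checksum(c), dest, rb, defaultRoute)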
7.2.11 authproto
Each of these services is described in more detail in the PLAN Security Guide.
- DHmessageOne : ('a, blob) -> 'a
- DHmessageTwo : ('a, blob) -> 'a
- DHmessageThree : ('a, blob) -> 'a
7.2.12 security
Each of these services is described in more detail in the PLAN Security Guide.
- authEval : ('a chunk, blob * int * int) -> 'a
7.3 Services available at injection
Not all services are available in the arguments to the initial invocation at
the point where a PLAN packet is injected into a PLAN network. The services
available at the injection point include all the core services, plus the
services in the dns package and the following new service:
- getBlobFromFile : string -> blob
Takes the filename given and reads the binary data into an opaque value of
type blob. This service could be used to deliver payloads via
PLAN ports, or to install new services via the installServices
routine. If the specified file can't be found, the FileNotFound
exception is raised.
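As a sketch of the payload-delivery use, the blob can be read at the
injection point (where getBlobFromFile is available) and handed to a
function whose body runs on a node where the port services are available;
the file name, port number, and function name deliverPayload are
hypothetical:
svc openPort : int -> port
svc deliver : (port, 'a) -> unit
svc closePort : port -> unit
(* injected as: deliverPayload(getBlobFromFile("payload.bin"), 7001);
   the argument is evaluated at the injection point, the body on the
   node where the packet executes *)
fun deliverPayload (b:blob, portnum:int) =
  let val p = openPort(portnum) in
    (deliver(p, b); closePort(p))
  end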
7.4 Services available at the REPL level
Finally, the plan tool (explained in the tutorial) also limits the
availability of services. The ones available are all the core
services, plus the services in the csum, frag, and reliable modules.