Users interact with the Mesh through a portal. A Mesh portal may be a private portal serving a restricted group of users or a public portal that shares profile data published to it through the CryptoMesh.
The primary purpose of the CryptoMesh is to provide a medium through which public profile data may eventually be published and notarized. However, this is not currently supported in the specifications.
At present, the reference code only supports use of a private portal, so there is no choice to be made. In production, there are potential advantages to both approaches.
A private portal gives the greatest control over the stored profile data and allows better protection against traffic analysis attacks. The main disadvantage is that it is another service to administer, requiring high availability, proper backup management, and so on.
Using a public portal removes the need for local system administration and is therefore the preferred approach for most users who are not technical experts.
To configure the standalone server, the following parameters must be specified:
To resolve a Mesh service for 'example.com', a Mesh client uses the following discovery strategy:
1. Attempt to resolve an SRV record for _mmm._tcp.<PortalAddress>. If this succeeds, choose a <HostAddress> from the specified targets and use the Web Service endpoint http://<HostAddress>:80/.well-known/mmm/
2. Otherwise, fall back to A record resolution of <PortalAddress> and use the Web Service endpoint http://<PortalAddress>:80/.well-known/mmm/
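The discovery strategy above can be sketched as follows. The SRV lookup itself is abstracted as a callable so the selection and fall-back logic can be shown without a DNS library; the function names here are illustrative and are not part of the Mesh reference code.

```python
def mesh_endpoint(portal_address, resolve_srv):
    """Return the Web Service endpoint URL for a Mesh portal.

    resolve_srv("_mmm._tcp." + portal_address) is assumed to return a
    list of (priority, weight, port, target) tuples, or an empty list
    if no SRV record could be resolved.
    """
    records = resolve_srv("_mmm._tcp." + portal_address)
    if records:
        # Choose the target with the lowest priority value.
        _, _, port, target = min(records, key=lambda r: r[0])
        host = target.rstrip(".")
        return "http://{}:{}/.well-known/mmm/".format(host, port)
    # No SRV record: fall back to A record resolution of the
    # portal address itself.
    return "http://{}:80/.well-known/mmm/".format(portal_address)
```

For example, with an SRV record pointing at host1.prismproof.org, the client would use http://host1.prismproof.org:80/.well-known/mmm/; with no SRV record for example.com, it would fall back to http://example.com:80/.well-known/mmm/.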
For this reason, by default, the Mesh server will attempt to bind to both of the following Web Service Endpoints:
http://<HostAddress>:80/.well-known/mmm/
http://<PortalAddress>:80/.well-known/mmm/
The /nofallback flag can be used to disable binding to the fall-back address. This is useful in testing multiple servers on the same machine.
At present, the server is only supported on the Windows platform, where it runs as an HTTPListener under the HTTP server built into the operating system. It is therefore necessary to set the relevant permissions before running the server. This is done using the netsh tool.
The command format is:

netsh http add urlacl url="http://<HostAddress>:80/.well-known/mmm/" user=<Account>
By default, the server advertises service on both the local and the fall-back prefix.
netsh http add urlacl url="http://host1.prismproof.org:80/.well-known/mmm/" user=HOST1\alice
netsh http add urlacl url="http://mmm.prismproof.org:80/.well-known/mmm/" user=HOST1\alice
To start the server, it is necessary to specify the portal address and the host address(es) at which the service is offered. The standalone server automatically starts a Web Service at the Mesh 'well known' endpoint address, /.well-known/mmm:

servermesh /start <PortalAddress> <HostAddresses>
servermesh /start prismproof.org host1.prismproof.org
Note that if the server refuses to start due to a file access conflict, it is probably because some badly behaved program that monopolizes the HTTP port is running. Skype is a particular culprit.
You can find out which application is to blame using netstat -ano:

Proto  Local Address  Foreign Address  State      PID
TCP    0.0.0.0:80     0.0.0.0:0        LISTENING  13500
TCP    0.0.0.0:135    0.0.0.0:0        LISTENING  428
TCP    0.0.0.0:443    0.0.0.0:0        LISTENING  13500
TCP    0.0.0.0:445    0.0.0.0:0        LISTENING  4
You can then look up the PID using the Task Manager.
The use of SRV discovery is strongly encouraged as this allows features such as load balancing and fault tolerance to be supported.
A records are intended to be used for service discovery only as a fall-back in situations where the network infrastructure blocks SRV queries or responses. Since such systems are almost certainly legacy IPv4-only systems, there is nothing to be gained from attempting AAAA record resolution.
Thus it is recommended that DNS records be configured for SRV discovery of the service on the selected host.
The DNS configuration for the previous example would be:

_mmm._tcp.prismproof.org.  IN SRV 0 5 80 host1.prismproof.org.
mmm.prismproof.org.        IN CNAME host1.prismproof.org.
host1.prismproof.org.      IN A 192.168.1.39
The reference server is configured for simplicity and ease of testing rather than production use but it could be adapted for production use with minor changes.
In particular, although the reference service is designed for multi-threaded operation, it is currently locked to one thread for ease of debugging. A production service should also make use of a proper database back end rather than the log based persistence store currently implemented.