Security vulnerabilities appear in myriad forms and guises, but most are the result of a rather small number of root causes.
One of the most common complaints from security specialists is that users were compromised because they did the wrong thing, or failed to do the right thing.
Blaming the user should not be accepted as an excuse. When a plane crashes, a finding of 'pilot error' does not mean the end of the investigation; it means the start of a new investigation to find out why the pilot made the mistake. The job of the FAA is to stop planes from crashing, not to find out who to blame.
We need to approach user acceptance in the same way. If users aren't using the security products provided by their employer, it is the designer's responsibility to work out why and how to fix it. Solutions that depend on finding employees who can be trained to do the right thing are never going to be very successful.
The purpose of the Mesh is to eliminate user acceptance as a source of insecurity by proving that it is not necessary for a secure system to be any harder to use than an insecure one.
Buffer overrun and command injection attacks remain the chief causes of application vulnerabilities. This should be surprising, since programming languages with array bounds checking have been available since the early 1960s, and the security hazards of scripting languages have been understood for almost as long.
The Mesh Reference code is implemented in a managed language (C#) and does not use any form of scripting language internally. A scripting language is accepted only when it is the sole form of interface supported by an application program being configured.
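When a scripting interface is unavoidable, the injection hazard can be contained by never interpolating untrusted values directly into a command string. The following is a minimal illustrative sketch in Python (chosen purely for illustration; the Mesh reference code itself is C#, and the function names here are hypothetical):

```python
import shlex

def build_command(program, untrusted_arg):
    """Build a shell command safely by quoting the untrusted value.

    Naive string interpolation lets an attacker smuggle in extra
    commands; shlex.quote forces the value to be treated by the
    shell as a single literal word.
    """
    return f"{program} {shlex.quote(untrusted_arg)}"

# An attacker-supplied value that attempts command injection:
hostile = "config.txt; rm -rf /"

# Unsafe: the semicolon would end the command and start a new one.
unsafe = f"configure {hostile}"

# Safe: the whole value is passed as one quoted argument.
safe = build_command("configure", hostile)
print(safe)  # configure 'config.txt; rm -rf /'
```

Better still is to avoid the shell entirely and pass an argument vector (e.g. `subprocess.run(["configure", hostile])`), so no command interpreter ever parses the untrusted value.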
The most common cryptographic failure in applications is a work factor insufficient to deter an attacker. In the 1990s, public key cryptography was slow and many of the commonly used algorithms simply didn't use a big enough key. The circumstances that led to the use of those systems haven't been true for 15 years, but many systems are still stuck in the past.
Adding strong cryptographic algorithms does not actually improve the security of a product. The only way to improve security is to stop using the insecure ones. Deploying SHA-2 did not make the world any more secure except in that it made it possible to stop using SHA-1.
The use of password authentication is really just the most common case of an insufficient work factor. There is simply no way that a human can choose a password that is both memorable and sufficiently unguessable to be secure. This is even true of passphrase authentication. The phrase Horse Battery Staple Correct provides only a 60-bit work factor (assuming the words are taken from a 32,000 word dictionary). This does not provide an acceptable level of security.
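The 60-bit figure can be checked directly: four words drawn uniformly at random from a 32,000-word dictionary give 4 × log₂(32,000) ≈ 60 bits. A quick sketch of the arithmetic (the function name is illustrative, not from the Mesh code):

```python
import math

def passphrase_bits(words, dictionary_size):
    """Entropy in bits of a passphrase of `words` words drawn
    uniformly at random from a dictionary of `dictionary_size` words."""
    return words * math.log2(dictionary_size)

print(round(passphrase_bits(4, 32_000)))  # 60
print(round(passphrase_bits(8, 32_000)))  # 120
```

Note that the estimate holds only if a machine picks the words at random; a human-chosen phrase provides considerably less entropy than this formula suggests.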
Another common security failure arising from password use is that passwords are shared across domains and devices. The compromise of one domain or device thus puts all the others at risk.
The risk of sharing passwords across domains is of course one that computer security specialists like to lecture users about at length. But how else are users to remember a strong password if not through repeated use? Nobody can be expected to remember a unique strong password for an account they use only once a year.
This advice against sharing passwords across systems stands in stark contrast to the use of passwords in every wireless networking product designed for consumer use. The designers of the WiFi security layer knew that a password shared across systems is insecure, but the system they built gives consumers no practical alternative.