Baking in Security

The aptly titled Writing Secure Code prescribes a best-practices approach to hardening your code.

In testing we have calculated that the worm can attempt to infect roughly half a million IP addresses a day and that was a rough estimate made from using a very slow network.
—excerpt from the eEye analysis of the Code Red worm

The past several years have given us an astounding increase in security problems, from the Code Red worm to ways to take over a Windows server remotely. They've also given us, finally, some tools to help prevent such tomfoolery from happening in the future. One such tool is the book Writing Secure Code, by Michael Howard and David LeBlanc (Microsoft Press, 2002).

Subtitled "Practical strategies and proven techniques for building secure applications in a networked world," this book comes from a couple of Microsoft's own security gurus. Of course, some people will stop reading right there. Microsoft has, after all, released a good deal of insecure code in the past few years, as a quick glance at the TechNet Security Bulletins page at http://microsoft.com/technet/treeview/default.asp?url=/technet/security/current.asp will confirm. As I write this, recent vulnerabilities include holes in Word, Excel, SQL Server, the RAS phone book, IIS, ASP.NET, and even the venerable gopher protocol in Internet Explorer. But perhaps Howard and LeBlanc can be forgiven for Microsoft's sins. With 50,000 people working at Microsoft, there might be a few who haven't heard their message yet.

Crash Course in Crashes
And what is that message? The take-home is pretty simple:

"There is simply no substitute for applications that employ secure defaults."

If you haven't been following mailing lists such as BugTraq (http://online.securityfocus.com/archive/1) and NtBugTraq (http://www.ntbugtraq.com/)—if you're developing code that will run on Internet-connected computers, you probably should—you might be surprised at just how many ways there are for your code to be attacked. Buffer overflows. Registry-based resource starvation attacks. Canonicalization holes. Impersonation attacks. The list is long, but the authors do a good job of covering it, with examples of specific holes that have been found in the past (including many that affect Microsoft products). One thing this book does particularly well is sort out the terminology for those new to thinking about security.

But Howard and LeBlanc go beyond cataloging attacks to suggest ways to prevent them. There is plenty of code in this book that demonstrates everything from the proper way to copy strings in C to safe ways to retrieve information from the registry. If you're reading because you need a sense of the security landscape, you may want to skim past those chunks, but if you're down in the trenches of Windows coding, they'll be valuable.
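
To give a flavor of the sort of fix the authors advocate, here is a minimal sketch of my own in standard C (my illustration, not code lifted from the book). The classic mistake is copying an untrusted string into a fixed-size buffer with strcpy, which is exactly the pattern behind most buffer overflows; a bounded copy that always null-terminates and reports truncation closes that particular hole.

  #include <stdio.h>
  #include <string.h>

  /* Copy src into dst (dst_size bytes), always null-terminating.
     Returns 0 on success, -1 on bad arguments or truncation. */
  static int safe_copy(char *dst, size_t dst_size, const char *src)
  {
      if (dst == NULL || src == NULL || dst_size == 0)
          return -1;

      size_t src_len = strlen(src);
      size_t copy_len = (src_len < dst_size) ? src_len : dst_size - 1;

      memcpy(dst, src, copy_len);
      dst[copy_len] = '\0';

      return (src_len < dst_size) ? 0 : -1;
  }

  int main(void)
  {
      char buffer[16];

      /* strcpy(buffer, input) would overflow on long input;
         the bounded copy truncates and tells us so instead. */
      if (safe_copy(buffer, sizeof(buffer), "a string longer than the buffer") != 0)
          printf("input truncated to \"%s\"\n", buffer);

      return 0;
  }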

Lies, Damned Lies, and Security Myths
The authors also puncture a number of stupid ideas about security. They're especially scathing about those who blithely assume that their product is secure because it implements cryptography ("Do not, under any circumstances, create your own encryption algorithm. The chances are very good that you will get it wrong."). They also have some choice words for various groups at Microsoft who tried to ignore security problems, and as a result had the authors break into their test servers. It's hard to know whether to be more disgusted at an organization where teams would consider shipping known security holes, or pleased that the authors didn't get fired as a result of these shenanigans. But that does suggest one strategy for making your own company's products more secure. There's also a strong section on other ways to make security a priority, which emphasizes the importance of baking in security up front instead of trying to tack it on at the end.

Of course, no matter how good your design and implementation, your code might still get broken by attacks as yet undreamt of. I would have liked more coverage of ways to react to security holes in shipping products, from the importance of regression-testing patches to the ethical way to treat those who find such holes. This is especially important in light of the perennial battle between those who believe in "full disclosure" (releasing bug details to force the vendor to fix things) and those who prefer a more secretive approach. But perhaps that's a subject for another book.

10 Is a Nice Round Number
As an appendix, Howard and LeBlanc present the 10 Immutable Laws of Security:

  1. If a bad guy can persuade you to run his program on your computer, it's not your computer anymore.
  2. If a bad guy can alter the operating system on your computer, it's not your computer anymore.
  3. If a bad guy has unrestricted physical access to your computer, it's not your computer anymore.
  4. If you allow a bad guy to upload programs to your Web site, it's not your Web site anymore.
  5. Weak passwords trump strong security.
  6. A machine is only as secure as the administrator is trustworthy.
  7. Encrypted data is only as secure as the decryption key.
  8. An out-of-date virus scanner is only marginally better than no virus scanner at all.
  9. Absolute anonymity isn't practical, in real life or on the Web.
  10. Technology is not a panacea.

Contemplate this list the next time you hear of a security hole in code. Chances are that it depends on a violation of one or more of these laws. The last one—hammering home the point that security consists of technology plus the policy and will to use it correctly—is especially interesting in light of Microsoft's recently leaked "Palladium" project (http://www.msnbc.com/news/770511.asp), which aims to provide an end-to-end security solution by releasing new and wonderful technology. Hmm.

Entertaining and Useful
A book like this could be deadly dry or a morass of boring code. Fortunately, this one avoids those sins. There's enough humor and light style to keep you reading and enough discussion of the code to give you a conceptual background rather than just a bunch of recipes for quick fixes. The continuous stream of examples from real-life security reviews helps illustrate just how widespread problems in secure coding are in the industry today.

Along the way, the authors touch on a lot of other security issues that they've thought about and you might not have. For example, are you logged on as an administrator on the machine where you're reading this right now? Why? (I confess, I am…and it's just because I'm lazy.) How much safer would you be if you were logged on as a regular user, so that any attack code couldn't run as administrator?
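
If you're curious, there's a quick way to find out programmatically. The sketch below is mine, not code from the book; it uses the Win32 CheckTokenMembership call to ask whether the current process token belongs to the local Administrators group.

  #include <windows.h>
  #include <stdio.h>

  /* Ask whether the current process token is a member of the local
     Administrators group (link against advapi32). */
  static BOOL running_as_admin(void)
  {
      SID_IDENTIFIER_AUTHORITY nt_authority = SECURITY_NT_AUTHORITY;
      PSID admin_group = NULL;
      BOOL is_member = FALSE;

      if (!AllocateAndInitializeSid(&nt_authority, 2,
              SECURITY_BUILTIN_DOMAIN_RID, DOMAIN_ALIAS_RID_ADMINS,
              0, 0, 0, 0, 0, 0, &admin_group))
          return FALSE;

      if (!CheckTokenMembership(NULL, admin_group, &is_member))
          is_member = FALSE;

      FreeSid(admin_group);
      return is_member;
  }

  int main(void)
  {
      printf(running_as_admin()
          ? "This process is running with administrative rights.\n"
          : "This process is running as a regular user.\n");
      return 0;
  }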

Asides like that will keep you thinking as you read through this one. Then put it on your shelf, but keep it handy. You'll want it the next time you're designing a new architecture or performing a code walkthrough. At this point in the history of software, we all need to be security-conscious in our development.

Got any security horror stories of your own? Ready to dump Microsoft and move to BSD over this stuff? Write and let me know.

About the Author

Mike Gunderloy, MCSE, MCSD, MCDBA, is a former MCP columnist and the author of numerous development books.
