Defending Security Research


Assume for a minute that you could carve out a legally protected niche around legitimate (that is, non-black-hat) security research. How would you do it? How would you define "security research"? What sort of activities would you consider in-scope and want to see explicitly protected? That opportunity is in front of you right now, at least from the perspective of the American Bar Association, and so - as co-vice-chair of the ABA's Information Security Committee - I'm looking for your feedback in defining "security research" in a useful manner, as well as in drafting a proposed resolution to flow up through the ISC to the SciTech Section and, by next summer, to the full ABA.

So, that said, what are the key criteria? I believe we would need a reasonably unambiguous definition of the following:
- "security researcher" - who, and by what practices/ethics?
- "security research" - what sort of activities?
- "responsible disclosure" - putting some reasonable parameters around it, without being too prescriptive or verbose
- "responsible vendor actions" (or a similar title/category) - what are the appropriate vendor responsibilities?

Are these the right main categories? Is anything else missing? And, most importantly, if this is right, then how do we define them? Please provide your responses in the comments and, if you want acknowledgement, please include your name. If you'd rather not post publicly, feel free to ping me on Twitter @falconsview, or email me at tomhave(a@t)secureconsulting-dot-net.

Thank you!

4 Comments

Assuming that the ABA came up with a fantastic definition of "security research", what difference would it make? The problem, to me, is that software is licensed, not purchased like other products. This allows vendors to control what you can and cannot do with that software. So no matter what the ABA says, legitimate research can still be controlled. Or is this effort going to attempt to change UCITA, the UCC, etc.?

Ignoring that problem for a moment, there is already a fairly limited exception to the DMCA for security research. Perhaps that would make a good starting point for your quest. See http://www.copyright.gov/1201.

Personally, I would like to see a broad exception to copyright, DMCA, and software contracts that allows researchers to do whatever is necessary to use software securely, including all forms of vulnerability analysis, security testing, reverse engineering, and software modification. This would not grant any additional rights to copy, distribute, or perform the software.

The debate over "responsible" has raged for two decades while the software we use has become increasingly vulnerable. Recently a few companies have started to allow security testing, offer bounties, and work with security researchers. And the world hasn't collapsed. In my mind, this type of collaboration is what any new standards around disclosure, particularly legal standards, ought to encourage.

Again, just my opinion, but I don't think that the right to do security research ought to be granted only if the researcher agrees to do it or discuss it in a certain way.


Thanks for the comment, Jeff.

I think there are a few things. The first is just trying to get a small resolution bubbled up through the ABA that defines the boundaries of "acceptable" security researcher activity, outlines the ethical responsibilities of general counsels in engaging with researchers (for and against), and perhaps touches on responsible disclosure or similar ethical practices.

The notion is that:
a) Security research, within the proper ethical boundaries, is good and beneficial.
b) Security researchers, within those boundaries, should be protected.
c) Lawyers should be educated as to what those protections are, in particular to stem things like DMCA abuses (excessive take-down notices and the like).
d) If such a resolution could be drafted and eventually approved, it could then be advanced by the ABA's lobbying arm to help influence future legislation.

To me, this could be as simple as a broad statement of support for "security research within reasonable ethical boundaries," followed by a definition of security research and those boundaries, as well as guidance on when support or opposition is appropriate and what those appropriate responses look like.

I noticed that the statement from the Librarian of Congress that Jeff pointed out seems to single out video games as subject to exemption for the purposes of security analysis. It also seems to be strictly limited to "for own use" and applies only to circumvention of digital rights management (DRM) mechanisms.

Security research needs to have a much broader scope: DRM might prevent you from looking at the source or executable code, but it has very little impact on actual security. There's a saying that "security through obscurity is no security at all." Someone determined to exploit a security flaw for malicious ends isn't going to have any compunctions about slicing through whatever DRM mechanisms protect the program code. But even without direct access to the code, it's still possible to do Bad Things.

The problem with defining security research is that it requires the exact same knowledge and techniques as malicious hacking: the difference is all in the intent and the ethics.

My general stance would be to leave no limits on who can be considered a "security researcher." Any narrower definition would just leave too much room to split hairs about who fits into the protected category. Having to declare your intent to perform security research on a particular product also leaves a lot of room for retaliatory action from the owner of the software being tested. (Imagine someone trying to find security flaws in a popular computer operating system: most EULAs allow the manufacturer to revoke your license to use the software for any reason they choose. That person might never be able to operate a computer again.)

Also, the scope of activities that fall under the security research umbrella is similarly broad: in general, a security researcher needs to be able to use any method that might be used by a malicious entity. I would propose that the requirements for protection under "security research" be:

1) Does not adversely affect the experience of other users
2) Minimizes impact to "production" services (live web services, server farms, aggregators, etc. not owned by the researcher). There may need to be some leeway here in cases where the owner of the original system is unwilling to provide the information or code needed to set up an independent server.
3) Does not adversely affect the performance of publicly available services (doesn't place undue burden on the public Internet, doesn't raise alarms that require attention of public safety services, etc.)
4) To the extent possible, does not affect or expose personal data belonging to third parties.

If all research is performed on private systems and not over the public Internet, it should be considered fair game.

For responsible disclosure, I would suggest different requirements for public and private disclosure:

The owner/manufacturer of the software and operators of services based on the software (web site operators using frameworks, etc.) should be notified first, with possible impact and security implications, the nature of the flaw, and complete instructions for exploiting the flaw (for the purposes of testing any fixes). This is necessary to provide actionable insight into the nature of flaws and defects.

The general public (end users of the software and of web services) should be notified next, with the disclosure possibly restricted to the impact on their own use and the risk to their personal information and systems, as well as steps to mitigate that risk. The security researcher should be permitted to make this disclosure in the event that the manufacturer/provider doesn't.

After a reasonable time for response from the manufacturer/provider, the full exploit should probably be publicised, both to motivate users to apply the security fixes and to inform future research. The manufacturer/provider should have some means of requesting that public disclosure be delayed, but shouldn't be allowed to make it an indefinite postponement: if a security researcher has found and reported a flaw, it's a reasonable assumption that a malicious user/hacker has already found it.
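Just to make the staged timeline above concrete, here is a rough sketch of how it might be modeled in code. The stage names, grace periods, and delay cap below are purely illustrative assumptions on my part, not values anyone has actually proposed.

    # Rough sketch of the staged disclosure timeline described above.
    # All day counts and field names are illustrative assumptions only.
    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class DisclosureTimeline:
        vendor_notified: date            # stage 1: vendor/operator gets full details and exploit steps
        user_grace_days: int = 7         # assumed gap before end users are told of impact/mitigations
        public_grace_days: int = 45      # assumed gap before full public disclosure of the exploit
        delay_cap_days: int = 30         # vendor may request a delay, but it can't be indefinite
        vendor_requested_delay_days: int = 0

        def user_notification_due(self) -> date:
            # Stage 2: notify end users of the impact and mitigation steps.
            return self.vendor_notified + timedelta(days=self.user_grace_days)

        def public_disclosure_due(self) -> date:
            # Stage 3: publish full details; honor a vendor delay only up to the cap.
            delay = min(self.vendor_requested_delay_days, self.delay_cap_days)
            return self.vendor_notified + timedelta(days=self.public_grace_days + delay)

    if __name__ == "__main__":
        t = DisclosureTimeline(vendor_notified=date(2011, 9, 6),
                               vendor_requested_delay_days=90)
        print("Notify end users by:", t.user_notification_due())
        print("Publish full details by:", t.public_disclosure_due())  # delay capped at 30 days

The point of the cap is exactly what's argued above: the vendor can ask for more time, but not for an indefinite postponement.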

Vendors should be responsible for notifying their customers of the existence of security flaws and vulnerabilities as soon as reasonably possible. If a fix cannot be provided within a certain number of days (say, 5), customers should have the right to know about the implications for their own security.

Vendors should provide security patches for all supported software versions (with support being required for a set period of time after the version ceases to be distributed) and should provide them free of charge (to prevent abuse by withholding security updates to force users to purchase upgrades). If this is not possible, the vendor should be required to provide an upgrade to a similar product version in which the flaw is no longer present at no charge to the customer.

About this Entry

This page contains a single entry by Ben Tomhave published on September 6, 2011 11:21 AM.
