Oracle is in hot water this week over a blog post written by their security chief, Mary Ann Davidson. The post, though it covers a range of topics, is mostly about the practice of reporting possible security vulnerabilities to Oracle. Specifically, why you shouldn’t.
“Recently, I have seen a large-ish uptick in customers reverse-engineering our code to attempt to find security vulnerabilities in it.
This is why I’ve been writing a lot of letters to customers that start with “hi, howzit, aloha” but end with “please comply with your license agreement and stop reverse engineering our code, already.”
Davidson explains that a growing number of security-conscious customers are reverse-engineering Oracle software looking for security vulnerabilities (or hiring consultants to do it for them). Davidson accuses these clients of violating their license agreements, of not taking mundane security precautions, of trying to do Oracle’s job for them, and of generally being Bad People. Even when a customer has found a real vulnerability, Oracle’s gratitude is limited: the company will fix it, but offers no thanks in return.
“I almost hate to answer this question because I want to reiterate that customers Should Not and Must Not reverse engineer our code. […] we will not give a customer reporting such an issue (that they found through reverse engineering) a special (one-off) patch for the problem. We will also not provide credit in any advisories we might issue. You can’t really expect us to say “thank you for breaking the license agreement.””
This did not go over well in the security community, and the post was quickly taken down – though not before spawning a new hashtag.
"Check Enigma's EULA first" said Alan Turing. #oraclefanfic
— Thorsten Sick (@ThorstenSick) August 11, 2015
But, if you aren’t familiar with the security world, it might not be obvious why the original post is so misguided. So, today, we’re going to talk about where Oracle’s philosophy of security departs from the mainstream, and why it’s a problem.
Explaining the Controversy
So, what exactly is reverse engineering, and why is Davidson so concerned about it? Basically, when Oracle releases a piece of software, they “compile” their internal source code into executable files, and then deliver those files to customers. Compilation is a process that translates human-readable code (in languages like C++) into a denser binary language that can be fed directly into a computer processor.
Oracle’s source code isn’t public. This is intended to make it more difficult for others to steal their intellectual property. However, it also means that it’s very difficult for customers to verify that the code is secure. This is where “decompilation” comes into play. Basically, decompilation translates in the other direction, converting executable files back into human-readable code. This does not deliver exactly the original source code, but it does deliver code that functions in the same way – though it’s often difficult to read, due to the loss of comments and organizational structure.
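You can get a rough feel for what’s lost in compilation using Python’s built-in `dis` module. This isn’t what Oracle’s C++ toolchain produces, of course – just an analogy for the same idea: the comments and structure that make code readable vanish, leaving only machine-oriented instructions behind.

```python
import dis

# Original, human-readable source: names and comments aid understanding.
def apply_discount(price, rate):
    # Guard against invalid discount rates.
    if rate < 0 or rate > 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

# What actually ships to the machine is closer to this: a stream of
# compact opcodes, stripped of comments and most of the structure.
for instruction in dis.Bytecode(apply_discount):
    print(instruction.opname, instruction.argrepr)
```

A decompiler works backwards from output like this, which is why the recovered code functions identically but reads so poorly.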
This is the “reverse-engineering” that Davidson is referring to. Oracle is against it because they think it puts their intellectual property at risk. This is at least a little foolish, because using a license agreement to prohibit IP theft is a little like using a sternly worded doormat to prevent home invasion. The sorts of people who are going to try to clone your products don’t care about license agreements, and often aren’t in jurisdictions where you could enforce those agreements in any case.
The policy really only affects legitimate customers. The situation is similar to that of videogame DRM, but somehow even more ineffective.
Why would customers want to decompile these executables? It’s all about security. Having access to the source code allows you to dig through it looking for bugs and potential issues. Often, this is done using software which performs a “static code analysis” – an automated read-through of the code, which identifies known bugs and dangerous software practices that tend to lead to bugs. While there are tools that analyze the executable file directly, decompiling it generally allows for deeper analysis. This sort of static analysis is a standard tool of the trade in security, and most security-conscious companies use such software internally to produce code that is less likely to contain serious bugs.
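Real static analyzers are far more sophisticated, but the core idea fits in a few lines. Here’s a toy sketch in Python (the `find_eval_calls` checker is hypothetical, not any real tool) that walks a parsed syntax tree and flags calls to `eval` – a classic dangerous practice, since it executes arbitrary input as code:

```python
import ast

def find_eval_calls(source):
    """Return the line numbers of every bare eval(...) call in source."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        # A Call node whose function is the plain name "eval".
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            hits.append(node.lineno)
    return hits

sample = """\
user_input = input()
result = eval(user_input)  # dangerous: arbitrary code execution
"""
print(find_eval_calls(sample))  # → [2]
```

Industrial tools apply thousands of rules like this one, which is why running them over a large code base tends to produce very long reports.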
Oracle’s policy on this sort of analysis is simply “don’t.” Why? I’ll let Davidson explain.
“A customer can’t analyze the code to see whether there is a control that prevents the attack the scanning tool is screaming about (which is most likely a false positive) […] Now, I should note that we don’t just accept scan reports as “proof that there is a there, there,” in part because whether you are talking static or dynamic analysis, a scan report is not proof of an actual vulnerability. […] Oh, and we require customers/consultants to destroy the results of such reverse engineering and confirm they have done so.”
In other words, the tool turning up a result isn’t proof of a real bug – and, since Oracle uses these tools internally, there’s no point in customers running them on their own.
The big problem with this is that static code analysis tools don’t exist just to bring individual bugs to your attention. They also serve as a benchmark for code quality and safety. If you dump Oracle’s code base into an industry-standard static analysis tool and it spits out hundreds of pages of issues, that’s a really bad sign.
The correct response, when a static code analysis tool spits back an issue, isn’t to look at the issue and say ‘oh, no, that doesn’t cause a bug because such-and-such.’ The correct answer is to go in and fix the issue. The things flagged by static code analysis tools are usually bad practices in general, and your ability to determine whether or not a given issue actually causes a bug is fallible. Over thousands of issues, you’re going to miss stuff. You’re better off not having such things in your code base in the first place.
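A concrete illustration of why “fix it, don’t argue with it” is the right reflex, sketched in Python: a mutable default argument is a classic analyzer warning that looks harmless right up until it isn’t.

```python
# Flagged pattern: a mutable default argument. You might argue "this
# doesn't cause a bug in our usage" -- but the default list is created
# once and silently shared across every call.
def log_event_risky(event, history=[]):   # analyzers flag this line
    history.append(event)
    return history

# The fix: don't debate the warning, eliminate the hazard.
def log_event_safe(event, history=None):
    if history is None:
        history = []          # fresh list on every call
    history.append(event)
    return history

print(log_event_risky("a"))  # ['a']
print(log_event_risky("b"))  # ['a', 'b']  -- state leaked across calls
print(log_event_safe("a"))   # ['a']
print(log_event_safe("b"))   # ['a']
```

Multiply a judgment call like this by thousands of reported issues, and the odds of misjudging at least one of them approach certainty.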
Here’s Oculus CTO John Carmack singing the praises of these tools from his time at id Software. (Seriously, read the whole essay, it’s interesting stuff).
“We had a period where one of the projects accidentally got the static analysis option turned off for a few months, and when I noticed and re-enabled it, there were piles of new errors that had been introduced in the interim. […] These were demonstrations that the normal development operations were continuously producing these classes of errors, and [static code analysis] was effectively shielding us from a lot of them.”
In short, it’s likely that many of Oracle’s customers weren’t necessarily trying to report specific bugs – they were asking why Oracle’s coding practices were so poor that their code base was riddled with thousands upon thousands of issues so basic that they could be picked out by automated software.
I'm still sad that Sun is gone. And who was the genius that sold them to Oracle? That's like letting Darth Vader babysit your kids.
— Brad Neuberg (@bradneuberg) August 15, 2015
Security By Stickers
So, what should security-concerned customers do, instead of using static analysis tools? Thankfully, Davidson’s blog post was extremely detailed on that subject. Aside from advocating general basic security practices, she makes concrete suggestions for those concerned about the security of the software they use.
“[T]here are a lot of things a customer can do like, gosh, actually talking to suppliers about their assurance programs or checking certifications for products for which there are Good Housekeeping seals for (or “good code” seals) like Common Criteria certifications or FIPS-140 certifications. Most vendors – at least, most of the large-ish ones I know – have fairly robust assurance programs now (we know this because we all compare notes at conferences).”
This is a horrifying response from an organization as large as Oracle. Computer security is a rapidly evolving field. New vulnerabilities are found all the time, and formalizing security requirements into a certification that gets updated every few years is absurd. Security is not a sticker. If you trust that a piece of crucial software is secure on the basis of a seal on the packaging, you’re being irresponsibly stupid.
Heck, static analysis tools get updated much more frequently than these certifications do – in some cases, daily – and eliminating all the issues they turn up still isn’t enough to have much confidence in the security of your code, because most vulnerabilities are too complex to be detected by these sorts of automated tools.
The only way to have any confidence in your own security is to expose your code to the world, and ask hackers to try to break it. This is how most major software companies operate: if you find an issue with their code, they won’t condescendingly snark at you for violating your usage agreement. They’ll pay you money. They want people trying their best to break their software all the time. It’s the only way they can have any confidence their code is at all secure.
These programs are called “bug bounty” programs, and they’re the best thing to happen to enterprise-level security in a long time. They’re also, coincidentally, something that Davidson has pretty strong opinions on.
“Bug bounties are the new boy band (nicely alliterative, no?) Many companies are screaming, fainting, and throwing underwear at security researchers […] to find problems in their code and insisting that This Is The Way, Walk In It: if you are not doing bug bounties, your code isn’t secure.
Ah, well, we find 87% of security vulnerabilities ourselves, security researchers find about 3% and the rest are found by customers. […] I am not dissing bug bounties, just noting that on a strictly economic basis, why would I throw a lot of money at 3% of the problem.”
For starters, based on the results of those static code analyses, it might turn out to be a lot more than 3% if you paid them. But I digress. The real point is this: bug bounties are not for you, they’re for us. Could you find bugs more efficiently if you spent the same money on internal security experts? Well, probably not – but let’s throw Oracle a bone and assume that they could. However, they could also take the money, bank it, and then do absolutely nothing. If the resulting security is sub-par, customers will only find out about it years from now when their social security numbers mysteriously wind up on the deep web.
Bug bounties exist half because they’re a genuinely effective way of identifying bugs, and half because they’re a form of security you can’t fake. A bug bounty credibly tells the world that any bugs left in the code are more expensive to find than the stated bounty.
Bug bounties don’t exist for your convenience, Oracle, they exist because we don’t trust you.
Nor should we! Plenty of big companies allow security to fall by the wayside, as the numerous megabreaches show all too clearly. You’re the second-largest software maker in the world. It’s absurd to ask us to just take your word that your products are secure.
What Davidson Gets Right
In fairness to Davidson, there are elements of this that are reasonable in context. Likely, many of their clients do embark on ambitious audits of Oracle’s code, without taking the time to eliminate more mundane security issues from their systems.
“Advanced Persistent Threats” – skilled hacker organizations trying to get access to specific organizations to steal data – are certainly scary, but by the numbers they’re a lot less dangerous than the millions of opportunistic amateur hackers with automated tools. Doing these sorts of static analyses of commercial software when you haven’t adopted basic security measures is a lot like installing a panic room when you don’t yet have a lock on the front door.
Likewise, it probably really is frustrating and unhelpful to be presented with the same automated analysis again and again and again.
However, taken as a whole, the article reveals some seriously outdated ideas about system security and the relationship between developers and customers. I appreciate that Davidson’s job is frustrating, but users going out of their way to verify the security of the software they use are not the problem. Here’s security expert Ira Winkler’s take on it:
“Oracle is a very large and rich company, with products that are widely distributed and used for critical applications. Period. They have a responsibility to make their software as strong as possible […] There might be a lot of false positives and associated costs, but that is a factor of [their selling] a lot of software that has a lot of users. It is a cost of doing business. I’m sure all software companies have the same false positive reports. I don’t hear Microsoft et al. complaining.”
If Oracle doesn’t want to keep receiving thousands of issues found by static security tools, maybe they should fix those thousands of issues. If they’re annoyed by people turning in the same non-bugs over and over again, maybe they should have a proper bug bounty program that has mechanisms for dealing with repeat submissions of non-issues. Oracle’s customers are clamoring for a higher standard of security, and shaming them for it is not the right answer.
Even though Oracle has taken down and generally disavowed the post, that it was written at all reveals a profoundly misguided security culture within Oracle. Oracle’s approach to security prioritizes protecting their own intellectual property over the security and peace of mind of their customers – and if you entrust Oracle software with critical information, that should scare the bejeezus out of you.
What do you think? Are you concerned about Oracle’s philosophy of security? Do you think Davidson is being treated too harshly? Let us know in the comments!