The Theory Of Disclosure

Working in the field of IT security exposes you to the chance of stumbling upon services, software or hardware that you find vulnerable. Either the target's logic is broken ("Why am I able to edit this text?!"), or you can inject arbitrary SQL or shell commands ("Haha, sleep(100) breaks my connection!"), or it's something else entirely. You then search the internet for that vulnerability. Maybe you'll find a matching CVE number? Or a blog post? A tweet? At least a confirmation notice written by the vendor?

In most cases, you find something. Nice, time to search for a publicly available exploit!

But in some cases, you find nothing.

But what now?

First of all, you need to be aware of the responsibility you carry from now on. Sure, one could argue that there are many skilled people out there who may also find that vulnerability - or may have found it already - and who will handle it responsibly. Another argument could be: "But it's not my responsibility to disclose that vulnerability!"

It's obvious that both arguments are weak and can't hold up to closer scrutiny. The moment you search for vulnerabilities and other security issues, you take on the obligation to inform the vendor, to inform others, to make the vulnerability public. In short: to minimize the impact and the damage the vulnerability may cause.

"But I wasn't aware of that!" Well, that's your problem; you'll have to cope with it. Let's draw a comparison: the moment you get your driver's license, buy a car and start the engine, you need to be aware of the possibility that you may hit someone or something - maybe even injure someone fatally. It doesn't even need to be your fault; still, if you had avoided driving that car, you might have prevented the injury. Of course, many drivers will never fatally injure anyone in their lifetime, so the awareness that it could happen slowly fades, merely riding along in the backseat whenever you drive.

Maybe you find that comparison a bit uneven. Still, it shows that if you are not aware of your responsibility whenever you actively search for security issues, you are missing an important aspect of the whole endeavor. You rely on others to disclose security vulnerabilities you didn't search for: in the Linux kernel (how many CVEs did you pull out of the kernel? None? Ah, okay), your ISP's mail server (I heard Exim is secure?) or your IP-connected camera (greetings to twink0r, a2nkf and me). You have never audited every bit of source code you use, whether directly in your projects or indirectly in your smart fridge, internet-connected toothbrush, your car, your server, your CPU... you name it. You simply believe that if there is a security issue, it will become public sooner or later. And yes, that may happen. But once you encounter a security issue yourself, you are part of the process - whether it becomes public sooner or later may depend on you. What about EternalBlue, KRACK, Meltdown, Shellshock, Dirty COW, DROWN, Heartbleed? All of them were attacks that had been possible for a long time, some relying on a single line of faulty code that had been there for years. Nobody can tell for sure whether someone attacked me - or you - with one of them. We will never know.

Even if you never find a security issue with an impact as widespread as the ones named above: you need to make your knowledge public as soon as possible.


But how?

First of all, try to gather more information about the security issue. If you encountered it in the wild, try to find the faulty lines of source code, if you have access to them. Try to compile or build the software, run it, and rebuild the attack scenario in which you identified the issue in the first place. Put simply: try to replicate the attack. Maybe write a small exploit for it. That ensures you perform the exact same steps on different targets, making the attack comparable across machines.
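A replication script can be as small as a timing comparison. The sketch below - against a purely hypothetical local test instance; the URL, parameter name and payload are placeholders, not from any real service - shows the idea for a suspected time-based SQL injection: run the same benign and injected requests and compare durations.

```python
"""Hedged sketch: verifying a suspected time-based SQL injection
against a *local* test instance only. URL, parameter and payload
are illustrative placeholders."""
import time
import urllib.parse
import urllib.request

TARGET = "http://127.0.0.1:8080/search"  # local test instance (assumption)
DELAY = 5                                # seconds the payload should add


def request_duration(query: str) -> float:
    """Time a single GET request with the given query string."""
    url = TARGET + "?" + urllib.parse.urlencode({"q": query})
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=DELAY + 10) as resp:
        resp.read()
    return time.monotonic() - start


def looks_vulnerable() -> bool:
    """Compare a benign query against a sleep payload; a time
    difference close to DELAY suggests the injection works."""
    baseline = request_duration("test")
    injected = request_duration(f"test' AND SLEEP({DELAY})-- -")
    return injected - baseline >= DELAY * 0.8


if __name__ == "__main__":
    try:
        print("possibly vulnerable" if looks_vulnerable() else "no timing difference")
    except OSError:
        print(f"test instance not reachable at {TARGET}")
```

Because the same two requests are sent every time, the result is comparable across your own test machines - which is exactly what manual poking can't guarantee.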

To be honest, I can't write a step-by-step guide for verifying vulnerabilities - it mostly depends on the specific case. But it's a good idea to keep your playground as local and isolated as possible. Don't simply attack other people's services.

A small parenthesis about finding the same (faulty, as identified by yourself) service on different machines: don't simply attack services you found via search engines like Shodan. Why? Because you are unable to determine whether you are generating side effects. Exploiting an undisclosed vulnerability, for example on a company's mail server, may trigger processes you didn't know about in the first place. Do you know which time-critical processes rely on that mail server? Maybe a whole production line stops for hours if you kill the process. Maybe a firewall, IPS or security gateway kills your connection and cuts off the server's uplink, for security reasons. Maybe you attacked a power plant without knowing it and risk a power outage. Or you hit a hospital and risk the health of many. That's why you should never go after services or machines you don't know enough about - which is the case for nearly any host found on search engines like Shodan. And no, you can't know about every side effect you may generate. Oh, and just to mention it: attacking other people's services is illegal...

The disclosure process itself can be as distinct as the vulnerability you found. In any case, after you have collected enough information about the impact and the affected software components or versions, you can request a CVE number through MITRE's CVE Request form. Don't worry: the information you submit will not be published until you enter reference URLs. After you receive your CVE number by e-mail, you can go on and contact the vendor, as described in the next paragraphs. You may ask why it's important to get the CVE number first, before contacting the vendor. Well, two major arguments: first, it adds weight to your notice towards the vendor. The person contacting them is not an ordinary user with a "security issue" that may turn out to be a misconfiguration on the user's side, or no security issue at all, but a security researcher with useful information. Additionally, it signals that the issue has enough severity to be processed by MITRE. The second argument for requesting a CVE number is that it gives you a way to publish the issue, to explicitly name it in your write-up, PoC or other documentation; you can identify it. It's a way to hint to other security researchers that, at some point in time, this software had a security issue. A vendor is not able to unassign that number or revoke your description, write-up or PoC. All in all, requesting a CVE number is an essential part of documenting a security issue.
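It can help to collect the relevant facts in a structured way before filing the request. The sketch below is only illustrative - the field names are my own shorthand for the kind of information typically asked for, not the exact fields of any request form, and all values are placeholders.

```python
# Hedged sketch: the kind of information worth collecting *before*
# requesting a CVE. Field names and values are illustrative
# placeholders, not the actual form fields of any CVE authority.
report = {
    "vendor": "ExampleCorp",                  # placeholder
    "product": "ExampleMail Server",          # placeholder
    "affected_versions": ["<= 2.4.1"],        # placeholder
    "vulnerability_type": "SQL injection",
    "attack_vector": "unauthenticated HTTP request to /search",
    "impact": "read access to the user database",
    "discovered_by": "your name or handle",
    "references": [],  # stays empty until public write-up URLs exist
}


def is_complete(r: dict) -> bool:
    """A report is ready to submit when every field except the
    reference URLs (added later, at publication) is filled in."""
    return all(value for key, value in r.items() if key != "references")
```

Having this written down once also pays off later: the same facts go into the vendor notification and, eventually, the public write-up.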


If you found a vulnerability in an open-source project, try sending the creator or the company/NGO behind it an e-mail informing them about the issue. I'd advise against opening a GitHub issue as a first step: that's public disclosure, and you're not granting the vendor enough time to fix the issue and release updates. You risk 0day exploits with unknown impact - even if that was not your intention, because "creating an issue is the best way to tell the project they have a problem". It's not.

If the vulnerability is part of software sold by a company, or closed-source for some other reason, it's not as easy as sending the vendor an e-mail, mostly because of two factors. In many cases, the vendor earns money with the software, and disclosing a security issue may cause a loss of profit, which may lead to a lawsuit against you over the damage the company is facing. Even without any loss, your research may be an infringement of the law - depending on where you live and where you carry out your attack - or at least of the Terms of Service you may have agreed to in the first place. In theory, you have multiple ways to proceed from this point (in fact, you don't). First, you could create an account at one of those disposable-mail providers, connect to it via VPN or Tor, and write the vendor an e-mail describing the issue and why it's dangerous. You don't risk facing a lawsuit yourself; however, most vendors won't react to this, and you have no way to escalate the issue. If the vendor decides to keep the vulnerability unfixed, your options are very limited, because every step outside the disposable account, VPN or Tor puts you at risk again. Second, you could write a PoC or even a fully working exploit and throw it onto a page with frequent black-hat visitors - anywhere from uploading it to GitHub, over pasting it on Pastebin, to the nuclear option of posting it on a darknet black-hat forum. You may have guessed it already, but I'd strongly advise against all of these options, for reasons very similar to those against opening a GitHub issue on an open-source project, as described in the previous paragraph. Uploading 0days is antisocial and dangerous, and you have no way to determine what impact it may have on the internet.

But if none of that is the correct way, what should you do? Well, it depends a bit on what exactly your vulnerability is about. In any case, it's good advice to talk to a lawyer.

  • Depending on your jurisdiction, the lawyer may be able to contact the vendor without the risk of lawsuits. Even if not, the lawyer can recommend further steps and discuss the case with you.
  • Search for bug bounty programs. This may be the simplest solution for you.
  • Search for "security contacts" on the vendor's web page - in most cases, companies that set up these kinds of contacts are happy (or at least not angry) about information you give them regarding their insecure products.
  • Search for a ministry or other official authority responsible for IT security in your jurisdiction.
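The "security contacts" search can even be automated: RFC 9116 defines a machine-readable security.txt file served under /.well-known/. The sketch below - with a placeholder domain - first tries the well-known location, then the legacy root path, and extracts the Contact fields.

```python
"""Hedged sketch: looking up a vendor's machine-readable security
contact as defined by RFC 9116 (security.txt). The domain passed
in is a placeholder; error handling is kept minimal."""
import urllib.error
import urllib.request
from typing import Optional


def fetch_security_txt(domain: str) -> Optional[str]:
    """Try the RFC 9116 well-known location, then the legacy root path."""
    for path in ("/.well-known/security.txt", "/security.txt"):
        try:
            url = f"https://{domain}{path}"
            with urllib.request.urlopen(url, timeout=10) as resp:
                if resp.status == 200:
                    return resp.read().decode("utf-8", errors="replace")
        except (urllib.error.URLError, TimeoutError):
            continue
    return None


def contacts(txt: str) -> list:
    """Extract the Contact: fields (mailto: or https: URIs) from a
    security.txt body."""
    return [line.split(":", 1)[1].strip()
            for line in txt.splitlines()
            if line.lower().startswith("contact:")]
```

If the file exists, the Contact entries are exactly the addresses the vendor wants vulnerability reports sent to - which sidesteps the guesswork of the manual search.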

In Germany (and Austria and Switzerland, too), we are in the lucky position of having a big NGO, the Chaos Computer Club, which has aided many hackers in responsibly disclosing the vulnerabilities they found. The club acts both as an abstraction layer for the communication (which may help keep things cool in heated situations) and as a professional third party. In the German-speaking parts of Europe, as well as in the tech industry, the club is well known, which also helps keep the situation on a professional, friendly level. So this may be worth a try, too. Sadly, I'm unaware of other clubs that could play a similar role in a disclosure strategy.

A few concluding words: please disclose security issues if you find them. The world would be a worse place if everyone kept their knowledge secret - and it could be a better place if we all shared our knowledge. Security issues are no exception: keeping software secure ensures that private data remains private. You only laugh about some leaked credential database until it hits you. So remain professional and don't act arrogant just because your exploit can crash some systems.

Oh, and please don't get frustrated if the company, project or individual isn't thankful for your finding and your disclosure. In most cases, they aren't. I've experienced that myself, both alone and in a team, multiple times. But we still hack (and disclose) anyway. Keep it up! 😊