Archive for 2019

Emotet infection with spambot activity, (Wed, Dec 18th)


On Monday 2019-12-16, I tested some Emotet samples. I normally get Trickbot as the follow-up malware, which I had already documented from Monday's tests.  But every once in a while, I'll see spambot traffic instead of (or in addition to) Trickbot.

When I tested another Emotet sample later that day, I saw spambot traffic.  Today’s diary reviews information from that infection.

The email

On Monday afternoon (United States Central time), I saw an Emotet malspam message that made it to my inbox.

Shown above:  Emotet malspam that made it to my inbox.

Why is the sender named Billy Idol?  Because that was a name in the address book from one of my Emotet-infected Windows hosts a few months back.  I generally make up names as I spin up vulnerable hosts in my lab.  At some point, I vaguely remember using “Billy Idol” as a name when I’d set up a fake email account and generated some items for the inbox of a lab host.

That doesn’t mean “Billy Idol” was infected.  It just means an Emotet-infected host had an email in the inbox (or sent items) with an address using that name as an alias.

The email had an attached Word document, which I tested in my lab.

Shown above: Word doc from the email with macro for Emotet.

The infected Windows host

My infected host had a Windows executable for Emotet made persistent through the Windows registry as shown below.  This is normal behavior for Emotet.

Shown above:  Emotet persistent on an infected Windows host.
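As a rough illustration of how this persistence can be reviewed, the sketch below parses `reg query`-style output and flags autorun entries that launch executables from user-writable locations, a common Emotet pattern. The key path, value names, and file paths here are made-up examples, not this sample's exact entries:

```python
# Sketch: flag autorun entries that launch executables from
# user-writable locations (a common Emotet persistence pattern).
# The sample output below is made up for illustration.
import re

REG_OUTPUT = r"""
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
    OneDrive    REG_SZ    C:\Program Files\Microsoft OneDrive\OneDrive.exe
    iascors     REG_SZ    C:\Users\victim\AppData\Local\iascors\iascors.exe
"""

SUSPICIOUS_DIRS = (r"\AppData\Local", r"\AppData\Roaming", r"\Users\Public")

def suspicious_autoruns(reg_output: str) -> list[str]:
    """Return value names of autorun entries pointing into suspicious dirs."""
    hits = []
    for line in reg_output.splitlines():
        m = re.match(r"\s+(\S+)\s+REG_\w+\s+(.*\.exe)", line, re.IGNORECASE)
        if m and any(d.lower() in m.group(2).lower() for d in SUSPICIOUS_DIRS):
            hits.append(m.group(1))
    return hits

print(suspicious_autoruns(REG_OUTPUT))  # → ['iascors']
```

On a live host, the same check would run against the actual output of `reg query` for the usual Run keys.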

Infection traffic

The traffic patterns were typical for Emotet. However, if an Emotet-infected Windows client turns into a spambot, it will generate SMTP and encrypted SMTP traffic.  The spambot traffic is mostly encrypted SMTP; in fact, most often all of it is encrypted.  But sometimes you might find unencrypted SMTP when reviewing the traffic in Wireshark as shown below.  You can also use Wireshark to export emails found in unencrypted SMTP traffic from the pcap.

Shown above:  Filtering in Wireshark shows SSL/TLS spambot traffic to mail servers, mostly over TCP ports 25, 465, and 587.

Shown above:  You can filter on various SMTP commands (like EHLO) to get a better idea of the encrypted/unencrypted spambot traffic.

Shown above:  You can filter to find any emails in the pcap from unencrypted SMTP.

Shown above:  Using Wireshark to export emails from unencrypted email traffic in a pcap.

Shown above:  Traffic from my infected lab host only had one email I could export from the pcap.

Shown above:  The exported email file opened in a text editor.
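Because nearly all of the spambot traffic is encrypted, a simple first pass is to flag outbound flows to mail-service ports. Below is a minimal sketch assuming flow tuples have already been exported from the pcap; the IP addresses are made up:

```python
# Sketch: flag flows to common mail ports (25 = SMTP, 465 = SMTPS,
# 587 = submission) as likely spambot activity from an infected client.
from collections import Counter

MAIL_PORTS = {25, 465, 587}

# (dst_ip, dst_port) tuples, e.g. as you might export with
# tshark -r infection.pcap -T fields -e ip.dst -e tcp.dstport
flows = [
    ("203.0.113.10", 25),
    ("203.0.113.10", 25),
    ("198.51.100.7", 465),
    ("192.0.2.44", 443),   # ordinary HTTPS, not mail
    ("198.51.100.9", 587),
]

mail_flows = [f for f in flows if f[1] in MAIL_PORTS]
per_port = Counter(port for _, port in mail_flows)
print(len(mail_flows), dict(per_port))  # → 4 {25: 2, 465: 1, 587: 1}
```

A normal client making many outbound connections on these ports is a strong spambot indicator, whether or not the sessions themselves are encrypted.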

Indicators of Compromise (IoCs)

Malware from an infected Windows host:

SHA256 hash: b82542fa69e2a8936972242c0d2d5049235b6b0d24030073a886937f1f179680

  • File size: 191,744 bytes
  • File name: INVOICE.doc
  • File description: Malspam attachment, a Word doc with macro for Emotet

SHA256 hash: 8bfb28788bd813e2ec3e7dc0cce9c95bda8d5df89a65b911c539e0a6aebcfc05

  • File size: 307,528 bytes
  • File location: hxxp://blog.itsaboutnature[.]net/confabulate-grainy/tad0m4bjt-li6lr-5546823/
  • File location: C:\Users\[username]\26.exe
  • File location: C:\Users\[username]\AppData\Local\iascors\iascors.exe
  • File description: Emotet malware binary retrieved by Word macro

Traffic caused by Word macro to retrieve an Emotet EXE file:

  • 104.27.149[.]107 port 80 – www.simple-it[.]org – attempted TCP connections but no response from the server
  • 43.255.154[.]108 port 443 – www.uaeneeds[.]com – HTTPS/SSL/TLS traffic
  • 157.7.106[.]97 port 80 – oki-dental[.]com – GET /sys/upydu-4nmmykhbf-292/
  • 65.254.248[.]88 port 80 – blog.itsaboutnature[.]net – GET /confabulate-grainy/tad0m4bjt-li6lr-5546823/

Emotet post-infection HTTP traffic:

  • 190.38.252[.]45 port 443 – 190.38.252[.]45:443 – POST /Zm3bDTIjDcE0VBqqFO
  • 105.225.77[.]21 port 80 – 105.225.77[.]21 – POST /7rS6p32cGJz6yHNBUKW
  • 181.167.35[.]84 port 80 – 181.167.35[.]84 – POST /Utmt2SR
  • 164.68.115[.]146 port 8080 – 164.68.115[.]146:8080 – POST /dzbBGrkIdBkIqwPjf
  • 5.189.148[.]98 port 8080 – 5.189.148[.]98:8080 – POST /DmiI74YHj
  • 5.189.148[.]98 port 8080 – 5.189.148[.]98:8080 – POST /lmmBjn
  • 64.207.176[.]141 port 8080 – 64.207.176[.]141:8080 – POST /lmmBjn
  • 64.207.176[.]141 port 8080 – 64.207.176[.]141:8080 – POST /QIrnjidOBG
  • 64.207.176[.]141 port 8080 – 64.207.176[.]141:8080 – POST /fsIL1F4aeW
  • 64.207.176[.]141 port 8080 – 64.207.176[.]141:8080 – POST /Qb6Hb0ONYVQ2an
  • 64.207.176[.]141 port 8080 – 64.207.176[.]141:8080 – POST /Cux8Ia00axEqkIhB2
  • 64.207.176[.]141 port 8080 – 64.207.176[.]141:8080 – POST /lqcZ9GHhKIkoVPdb
  • 64.207.176[.]141 port 8080 – 64.207.176[.]141:8080 – POST /xaMc6JN
  • 64.207.176[.]141 port 8080 – 64.207.176[.]141:8080 – POST /VJ9ZrKRKSWYOwNrPCk
  • 82.145.43[.]153 port 8080 – 82.145.43[.]153:8080 – POST /PKgFIQr2tR
  • 149.202.153[.]251 port 8080 – 149.202.153[.]251:8080 – POST /iEo555d
  • 149.202.153[.]251 port 8080 – 149.202.153[.]251:8080 – POST /1SxH7

Spambot traffic:

  • Various IP addresses over various TCP ports – SMTP and encrypted SMTP traffic

Final words

A malspam example, a pcap of the infection traffic, and the associated malware can be found here.

Brad Duncan
brad [at]

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.


Malicious .DWG Files?, (Mon, Dec 16th)

This weekend, I took a look at AutoCAD drawing files (.dwg) with embedded VBA macros.

When a .dwg file contains VBA macros, a Compound File Binary Format file (what I like to call an OLE file) is embedded inside the .dwg file. This OLE file contains the VBA macros. It’s similar to .docm files, except that a .dwg file is not a ZIP container. More details on the file format can be found in my blog post “Analyzing .DWG Files With Embedded VBA Macros“, but knowing these details is not a prerequisite to be able to perform an analysis as I show here.
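Because the embedded VBA project lives in a CFBF (OLE) container, you can locate it by its magic bytes. This is a minimal sketch of the idea, not the author's actual tooling; the sample bytes are constructed for illustration:

```python
# Sketch: locate an embedded OLE (Compound File Binary Format) file
# inside another file, such as a .dwg with VBA macros, by scanning
# for the CFBF magic bytes D0 CF 11 E0 A1 B1 1A E1.
OLE_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"

def find_ole_offsets(data: bytes) -> list[int]:
    """Return every offset where the OLE magic appears in data."""
    offsets, pos = [], data.find(OLE_MAGIC)
    while pos != -1:
        offsets.append(pos)
        pos = data.find(OLE_MAGIC, pos + 1)
    return offsets

# Simulated .dwg: a version string, some drawing data, then an embedded OLE file
fake_dwg = b"AC1032" + b"\x00" * 100 + OLE_MAGIC + b"\x00" * 50
print(find_ole_offsets(fake_dwg))  # → [106]
```

Once the offset is known, the bytes from there onward can be carved out and handed to any OLE analysis tool.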

I combined my tools to extract and analyze the embedded OLE file with macros:

This .dwg file that I was given does indeed contain a VBA project, but it's an empty project, without actual code (notice the m indicator for stream 3, and that stream 3 doesn't contain "real" VBA code, just the normal attributes).

I did some searching for .dwg files with real VBA code, and found the following sample:

This is clearly malicious: a CreateThread API declaration and a sequence of numbers (between 0 and 255). This is shellcode that is injected into the AutoCAD process.

I’m no longer an AutoCAD specialist (I used AutoCAD and AutoLISP a lot in the 90’s), but as far as I know, subroutine names like Auto_Open, AutoOpen and Workbook_Open do not trigger automatic execution in AutoCAD. One needs to associate a subroutine with an AcadDocument event to trigger execution.

This drawing contains malicious code, but it will not execute automatically. This sample is probably a PoC or some kind of test.

The shellcode can be extracted and analyzed with the shellcode emulator scdbg:
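As a hedged sketch of that extraction step, the decimal byte values from the VBA array can be converted back into a binary blob for an emulator; the numbers below are made up, not this sample's actual shellcode:

```python
# Sketch: convert a VBA-style array of decimal byte values (0-255)
# into a binary blob that can be fed to a shellcode emulator like scdbg.
vba_array = "Array(232, 130, 0, 0, 0, 96, 137, 229)"  # made-up values

# Strip the Array(...) wrapper and parse the comma-separated numbers
numbers = [int(n) for n in vba_array.strip("Array()").split(",")]
shellcode = bytes(numbers)

with open("shellcode.bin", "wb") as f:
    f.write(shellcode)

print(shellcode.hex())  # → e8820000006089e5
```

The resulting shellcode.bin can then be run through scdbg (or a disassembler) to recover the API calls and any embedded indicators.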

With this IOC, I found other maldocs using the same IP address.

Like this sample, with exactly the same VBA source code (ignoring whitespace). This malicious Office document was submitted to VT one month earlier than the malicious AutoCAD drawing. And if we can trust the metadata of the Office document, then it's almost 2 years old.

This leads me to believe that this malicious AutoCAD drawing I found could well be an experiment.

My search was far from exhaustive, but I did not find other examples of AutoCAD drawings with embedded, malicious VBA code (remark that VBA is not the only way to achieve RCE with AutoCAD).


Like Office, AutoCAD will warn users when a document with embedded macros is opened:

And even better: since AutoCAD 2016, VBA is no longer included. It is an optional install (separate download and manual installation):

So if you use AutoCAD in your organisation, know that drawings with embedded, malicious VBA code seem to be rare (caveat: my search was far from exhaustive), and that with modern versions of AutoCAD, VBA no longer comes pre-installed.



Didier Stevens
Senior handler
Microsoft MVP


Is it Possible to Identify DNS over HTTPs Without Decrypting TLS?, (Tue, Dec 17th)

Whenever I talk about DNS over HTTPS (DoH), the question comes up if it is possible to fingerprint DoH traffic without decrypting it. The idea is that something about DoH packets is different enough to identify them.

This evening after recording my podcast, I experimented a bit with this idea to see what could be used to identify DNS over HTTPS traffic. To run this experiment, I used Firefox, for a couple of different reasons:

  • I consider the Mozilla DoH implementation mature. Mozilla was one of the trailblazers for DoH and has made it very easy to enable DoH. I find Chrome to be a bit more “tricky” in that it is more careful in its use of DoH.
  • Firefox, just like Chrome, allows me to collect TLS master keys via the SSLKEYLOGFILE environment variable. This allowed me to decrypt and separate the DoH from other HTTPS traffic.

At this point, I would call the experiment a "proof of concept." It is not a conclusive experiment. I only collected a few minutes of traffic and went to maybe a dozen different sites. All tests were performed on a Mac using Firefox 71 and Cloudflare as a resolver. I may get around to doing more testing during the day and will update this post accordingly.

I started by running tcpdump (otherwise I forget it and realize I need to start it only after I have already started Firefox):

% tcpdump -i en8 -w /tmp/ssl.pcap

Next, in a different terminal, I set the SSLKEYLOGFILE environment variable:

% export SSLKEYLOGFILE=/tmp/sslkeylogfile

Finally, I started Firefox from the console in the same terminal, where I set the environment variable (so it sees the environment variable). Make sure Firefox isn’t already running.

% open /Application/

Next, I went to a few random websites (Google, CNN, …). After I ran out of sites to visit, I closed Firefox and exited tcpdump.

I loaded the packet capture file and the SSL key logfile in Wireshark. I used version 3.1.0, which fully supports DoH and HTTP2 (Firefox uses HTTP2 for DoH). I identified the DoH traffic using the simple display filter "dns and tls." The entire DoH traffic was confined to a single connection between my host and the Cloudflare resolver (2606:4700::6810:f8f9). Could I have just identified the traffic using this hostname? Sure. In this specific case. But you can run your own DoH server and evade simple blacklists like this.

I filtered all traffic to and from that Cloudflare host. Next, I filtered all port 443 traffic that did not involve this IP to a second file and did some simple statistics. Aside from the session length, I found that the payload length for DoH is somewhat telling. DNS queries and responses are usually a couple of hundred bytes long. HTTPS connections, on the other hand, tend to "fill" the MTU. Here is a graph of the payload size-frequency for DoH and HTTPS:

Shown above: payload size-frequency for TLS without DoH, and for DoH only.

The 3 (4?) spikes in the DoH traffic could be due to the limited sample. But these are typical sizes for DNS payloads. Note how the DoH payload size "clusters" below 500-600 bytes, the legacy DNS reply limit. For the non-DoH traffic, the payload sizes peak close to the MTU (the MTU was 1500 bytes here).

In short: if you see long-lasting TLS connections with payloads that rarely exceed a kilobyte, you have probably got a DoH connection. But I need to run more tests to verify that. Feel free to do your own experiments and see what you find. Of course, some of these artifacts may be implementation-specific. The RFC somewhat suggests the extended session length. But in other implementations (earlier Firefox versions?), I seem to remember shorter TLS sessions for DoH.
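That heuristic can be sketched in a few lines. Given per-connection TLS application-data record sizes (the numbers below are made up), flag connections whose payloads rarely exceed a kilobyte:

```python
# Sketch: flag likely DoH connections based on TLS payload sizes.
# DNS messages tend to stay under ~600 bytes, while ordinary HTTPS
# records tend to fill the MTU (~1460 bytes of TCP payload).
def looks_like_doh(record_sizes: list[int], threshold: int = 1024,
                   ratio: float = 0.95) -> bool:
    """True if at least `ratio` of records are below `threshold` bytes."""
    if not record_sizes:
        return False
    small = sum(1 for s in record_sizes if s < threshold)
    return small / len(record_sizes) >= ratio

doh_conn = [120, 340, 95, 512, 230, 410, 88, 560]       # made-up sizes
https_conn = [1460, 1460, 310, 1460, 1460, 1200, 1460]  # made-up sizes

print(looks_like_doh(doh_conn), looks_like_doh(https_conn))  # → True False
```

The threshold and ratio here are guesses that would need tuning against real traffic; they are only meant to illustrate the shape of the detection.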

So please let me know what you find, and I will likely update this some time tomorrow if I find time to look at more traffic.

Johannes B. Ullrich, Ph.D., Dean of Research, SANS Technology Institute


VirusTotal Email Submissions, (Sun, Dec 15th)

I think it’s a good idea to highlight VirusTotal’s Email Submission feature, as I recently had to point this out to a couple of people.

Instead of using VirusTotal's web interface or API, one can also send an email to scan@virustotal.com with the file to be scanned attached (don't exceed 32 MB) and subject SCAN (requesting a plaintext report) or SCAN+XML (requesting an XML report).

I usually get a reply after a couple of minutes. If I don’t get a reply, it usually means that my attachment was detected and blocked by the email server I’m using, and that it never reached VirusTotal.
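As a hedged sketch, a submission like this can be scripted with Python's standard email library, assuming VirusTotal's documented scan@virustotal.com submission address. The sender address and file bytes are placeholders, and the message is only built here, not sent; actually sending it would go through your own SMTP server via smtplib:

```python
# Sketch: build an email submitting a file to VirusTotal's email-scan
# service with subject SCAN (plaintext report). Not actually sent here.
from email.message import EmailMessage

def build_vt_submission(filename: str, payload: bytes,
                        sender: str = "analyst@example.com") -> EmailMessage:
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = "scan@virustotal.com"
    msg["Subject"] = "SCAN"          # or "SCAN+XML" for an XML report
    msg.set_content("File attached for scanning.")
    msg.add_attachment(payload, maintype="application",
                       subtype="octet-stream", filename=filename)
    return msg

msg = build_vt_submission("suspect.exe", b"MZ\x90\x00")  # placeholder bytes
print(msg["Subject"], len(msg.get_payload()))  # → SCAN 2
```

Keep in mind the 32 MB limit mentioned above, and that an outbound mail server may well block the attachment before it ever reaches VirusTotal.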


Didier Stevens
Senior handler
Microsoft MVP


(Lazy) Sunday Maldoc Analysis: A Bit More …, (Sat, Dec 14th)

At the end of my diary entry "(Lazy) Sunday Maldoc Analysis", I wrote that there was something unusual about this document.

Let’s take a look at the content of the file and compare that with the file size:

A rough estimate: the total size of the streams is 120 kB, while the file size is around 10 MB. That's a huge difference!

In such cases, I take a look with olemap:

Here I can see that there is extra data appended to the file (position 0x25400) and it’s about 10 MB in size.

Extracting the appended data and calculating some statistics gives me:

This tells me there’s about 10 MB of 0x00 bytes appended.
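The same check is easy to script: measure the trailing run of 0x00 bytes and compare it to the file size. A minimal sketch, with the maldoc simulated in memory for illustration:

```python
# Sketch: measure how much of a file is a trailing run of 0x00 bytes,
# as with this maldoc where ~10 MB of nulls were appended.
def trailing_null_run(data: bytes) -> int:
    """Return the number of 0x00 bytes at the end of data."""
    stripped = data.rstrip(b"\x00")
    return len(data) - len(stripped)

# Simulated maldoc: OLE magic, ~120 kB of "real" content, 10 MB of nulls
sample = (b"\xd0\xcf\x11\xe0" + b"A" * (120 * 1024)
          + b"\x00" * (10 * 1024 * 1024))

padding = trailing_null_run(sample)
print(f"{padding} of {len(sample)} bytes "
      f"({padding / len(sample):.0%}) are trailing nulls")
```

A file where nearly the entire size is trailing padding is a good hint that something was appended after the document was built.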

Was this done by the malware authors? Or did it happen later, during transmission or storage?

I don’t know.

Maybe it was done to bypass scanning, for example when there is a size-limit for files to be scanned. Just speculating …

Please post a comment if you have an idea.


Didier Stevens
Senior handler
Microsoft MVP


Internet banking sites and their use of TLS… and SSLv3… and SSLv2?!, (Fri, Dec 13th)

Although SSLv3 has been considered obsolete and insecure for a long time, a large number of web servers still support its use. And even though the numbers are much lower, some servers on the web support SSLv2 to this day as well. And, as it turns out, this is true even when it comes to web servers hosting internet banking portals…

Tests of SSL/TLS configuration are usually conducted as a normal part of a vulnerability scan or penetration test of TLS-enabled servers. But even when done as a stand-alone activity, they can provide us with very interesting information, especially when the targets are highly sensitive servers, such as the ones used to provide internet banking services. That is the reason why many people and organizations have published analyses and statistics of the TLS configuration of banking sites over the years[1]. I've actually done so myself a couple of times[2]. Most, if not all, of these analyses have, however, been limited in scope, covering only a specific country or region.

For a long time, I've wanted to do a similar analysis on a global scale, and when a colleague of mine provided me with a list of nearly 1,500 unique internet banking sites from all around the world earlier this year, I couldn't let the opportunity pass. Even though 1,459 unique internet banking domains might not sound like much, when one considers that there appear to be between 25,000 and 85,000 banks overall[3], and that not all such institutions provide internet banking services, it is actually not a bad sample size.

Before we get to the analysis and its results, let’s take a short look at TLS.

At this point in time, SSL and TLS have been used to secure communication over untrusted networks for almost 25 years[4]. The protocols, as well as the cipher suites they use, have evolved significantly over time in reaction to discovered vulnerabilities and weaknesses. This cycle has led us to the current state of affairs, when, according to current best practices[5], it is advisable to use only TLSv1.2 and TLSv1.3, unless there is a special reason to use/support an older version of the protocol. But even though TLSv1.0 and TLSv1.1 are now considered outdated and will probably stop being supported by browsers in the near future[6], these protocols still provide a significantly higher level of security than SSLv3 (which was itself a notable improvement over SSLv2).

A large number of tools can determine which protocols and cipher suites a server supports[7]. In this case, I decided to use Nmap, since its scans are fairly quick and it has a couple of scripts (ssl-enum-ciphers.nse and sslv2.nse) which can provide us with almost all the information we might want with regards to SSL/TLS. The only drawback is that the SSL/TLS enumeration capabilities of Nmap currently lack support for TLSv1.3[8], which is why you won't find statistics for the latest version of the TLS protocol mentioned below.

After deciding to use Nmap, I fed the list of internet banking domains to it and had it output the results to XML. The ssl-enum-ciphers script gives us, among other data, information about all the protocols and cipher suites used by a server and marks them on a scale of A to F, based on the security they provide, using a slightly modified version of the SSL Server Rating Guide methodology[9,10]. Since this script can only identify use of SSLv3, TLSv1.0, TLSv1.1 and TLSv1.2, and I wanted to identify SSLv2 support as well, I used it in tandem with the sslv2 script, which provides this functionality. Once the scan was done, I generated a CSV from the resulting XML using a quick-and-dirty Python tool I wrote for parsing the outputs of both Nmap scripts, and I delved into analyzing it.
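A quick-and-dirty parser along those lines might look like the sketch below. This is not the author's actual tool, and the XML snippet only mimics the relevant structure of ssl-enum-ciphers output (real -oX output nests cipher tables inside each protocol table):

```python
# Sketch: pull the protocols reported by Nmap's ssl-enum-ciphers script
# out of an -oX XML result. The snippet below is a simplified stand-in
# for real Nmap output, kept to the structure this sketch needs.
import xml.etree.ElementTree as ET

NMAP_XML = """<nmaprun>
  <host>
    <address addr="198.51.100.1" addrtype="ipv4"/>
    <ports><port portid="443"><script id="ssl-enum-ciphers">
      <table key="SSLv3"/><table key="TLSv1.0"/><table key="TLSv1.2"/>
    </script></port></ports>
  </host>
</nmaprun>"""

def supported_protocols(xml_text: str) -> dict[str, list[str]]:
    """Map each host address to the protocol names found in its tables."""
    results = {}
    root = ET.fromstring(xml_text)
    for host in root.iter("host"):
        addr = host.find("address").get("addr")
        protos = [t.get("key") for t in host.iter("table") if t.get("key")]
        results[addr] = protos
    return results

print(supported_protocols(NMAP_XML))
# → {'198.51.100.1': ['SSLv3', 'TLSv1.0', 'TLSv1.2']}
```

From a mapping like this, generating a CSV row per host with its protocol support is a one-liner per host.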

Due to several servers being unreachable, broken DNS records and other error conditions, Nmap didn’t manage to get data for all domains from the original list, but for “only” 1375 domains from 143 TLDs. The information I was most interested in for each server was a list of supported protocols and a mark for the weakest cipher suite, along with data about vulnerabilities and weaknesses related to SSL/TLS which Nmap managed to identify (specifically SWEET32, POODLE and use of RC4).

Given how sensitive the communication sent to internet banking servers is, the results were quite surprising: one server was actually affected by POODLE, 4 servers managed to hit the worst mark (F) for their weakest supported cipher suite, more than 3% of the servers supported SSLv3, and almost 1% (11 servers) still supported SSLv2. Since there hasn't been a reason to use SSLv3 for a long time (barring exceptional cases), one might not expect to find it, not to mention SSLv2, supported on any web server that provides a service for which ensuring the confidentiality of its network traffic is paramount…

As you may see, the results didn’t look especially good – they were notably worse than I would have expected for internet banking servers. I wasn’t sure, however, whether they were on average worse or better than results for any other high-profile sites. I therefore decided to look for a “baseline” I might be able to compare the results against. I ended up choosing the Top 1000 Sites from Alexa, as – although most of these are not as sensitive as the internet banking sites – they are high-profile enough so that we might expect that reasonable security standards are enforced on them.

The scan of the 1000 sites resulted in only 921 results from 78 TLDs, since some of the second-level domains present on the Alexa list didn't have any DNS records set. As you may see below, in some areas the banking sites did better than our baseline, while in others they did worse. Overall, while the Alexa Top 1000 sites are all over the map when it comes to SSL/TLS configuration, most banking sites appear to be configured very well, though a notable minority seems to be configured quite badly.

The following chart shows overall support for all protocols. It should be mentioned at this point that although support for TLSv1.2 was generally quite high in both samples, only 23.7% of banking sites and 14.7% of Alexa sites supported only TLSv1.2 (and possibly TLSv1.3) and were therefore configured according to current best practices.

When it comes to vulnerabilities, the internet banking servers seem to be better configured than the "Top 1000" sites (although even one vulnerable internet banking server would seem one too many). Almost half (49.19%) of the Alexa sites and almost one third (30.55%) of the internet banking servers were vulnerable to SWEET32; 7 banking sites (0.59%) were found to still support RC4, while there were 12 such sites (1.30%) on the Alexa list. On the same list were also 5 sites (0.54%) still affected by POODLE, while, as was mentioned above, there was "only" 1 such site (0.07%) among the internet banking servers.

While the most common vulnerability, SWEET32, isn't that bad, POODLE and the continued use of RC4 are definitely worrisome (for more information, see Bojan's webcast).

Although the above-mentioned statistics definitely don't give us the whole picture and should not be taken out of context, they are quite unsettling. They seem to indicate that even when it comes to internet banking sites, security doesn't always get the attention it should. This is especially well illustrated by the continued support for deprecated SSL protocols on several of the sites. Which, from a purely technical standpoint, is actually quite interesting, given that most browsers today have support for SSLv2 and SSLv3 turned off by default…



Jan Kopriva
Alef Nula
