Archive for May, 2020

Windows 10 Built-in Packet Sniffer – PktMon, (Sun, May 31st)

With the October 2018 Update, Microsoft released a built-in packet sniffer for Windows 10, located in C:\Windows\system32\PktMon.exe. At ISC we like packets, and this is one of multiple ways to capture packets and send us a copy for analysis. Rob previously published another way of capturing packets in Windows here. If Windows 10 were compromised, this application would be a prime target for malicious actors, and it needs to be monitored, protected, or removed in an enterprise.

To collect packets, you need to launch a Windows 10 command prompt as administrator before using PktMon.

The first thing to do is figure out what can be done with PktMon. If you execute PktMon filter add help, it lists all possible options: filtering by MAC address, data link, VLAN, protocol, IPv4/IPv6, and services:

For example, let's capture SSL/TLS traffic on port 443; the filter looks like this:

PktMon filter add -p 443

To view the port filtered list:
PktMon filter list

To remove the same filter when done:
PktMon filter remove -p 443

To clear the packet port filtered list (capture all ports):
PktMon filter remove list

To list the interfaces available for packet capture on Windows 10, use PktMon comp list. This list can contain several interfaces (e.g. wireless, VPN, Ethernet, etc.)

Start PktMon with -p 0 to capture the entire packet (the default is the first 128 bytes), begin capturing from the Ethernet interface with Id 10, and save the packets to a log file with Event Tracing for Windows (--etw; the default filename is PktMon1.etl):
pktmon start --etw -p 0 -c 10

When you stop PktMon, you get the traffic statistics for the interface, and a file named PktMon1.etl is left on the drive from which PktMon was started:

The file PktMon1.etl can be converted to text:

pktmon format PktMon1.etl -o https.txt

14:08:19.937939100 MAC Dest 0x000C2986BE53, MAC Src 0x247703FD6DE8, EtherType IPv4 , VlanId 0, IP Dest, IP Src, Protocol UDP , Port Dest 62594, Port Src 3389, TCPFlags 0, PktGroupId 1125899906842838, PktCount 1, Appearance 1, Direction Tx , Type Ethernet , Component 95, Edge 1, Filter 0
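A line like the sample above is just comma-separated key/value text, so it is easy to post-process. A minimal sketch (field names taken from the sample line; real output may differ between Windows builds):

```python
# Minimal sketch: parse one line of "pktmon format" text output into a dict.
# The value of each field is assumed to be the last space-separated token.
def parse_pktmon_line(line):
    timestamp, _, rest = line.partition(" ")
    fields = {"Timestamp": timestamp}
    for chunk in rest.split(","):
        key_val = chunk.strip().rsplit(" ", 1)
        if len(key_val) == 2:
            key, val = key_val
            fields[key.strip()] = val.strip()
    return fields

# Shortened version of the sample line above
sample = ("14:08:19.937939100 MAC Dest 0x000C2986BE53, MAC Src 0x247703FD6DE8, "
          "EtherType IPv4 , VlanId 0, Protocol UDP , Port Dest 62594, Port Src 3389")
parsed = parse_pktmon_line(sample)
print(parsed["Port Src"])   # 3389
```

This makes it straightforward to filter or tally the converted capture by port, protocol, or direction without re-running the capture.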

Finally, reset all counters back to 0 and get ready for the next packet capture:

PktMon reset
All counters reset to 0.

Microsoft Network Monitor is dated and no longer actively supported by Microsoft, but until the next release of PktMon in Windows 10 2004 adds support for conversion to pcapng, it can be used to open and read these packet capture files, or they can be read as text as previously demonstrated.


Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.

Posted in: SANS


YARA v4.0.1, (Sat, May 30th)

A couple of weeks ago, YARA 4.0.0 was released with support for BASE64 strings.

If you plan on using/testing this new feature, be sure to use the latest YARA version 4.0.1 (a bugfix version).
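As an illustration, a rule using the new base64 string modifier might look like this (a hypothetical sketch, not a rule from the release notes):

```yara
rule Base64Example
{
    strings:
        // matches the plaintext string in any of its base64-encoded alignments
        $s = "This program cannot be run in DOS mode" base64
    condition:
        $s
}
```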


Didier Stevens
Senior handler
Microsoft MVP

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.

Posted in: SANS


The Impact of Researchers on Our Data, (Fri, May 29th)

Researchers have been using various tools to perform internet-wide scans for many years. Some publish data continuously, for example to notify users of infected or misconfigured systems. Others use the data to feed internal proprietary systems, or publish occasional research papers.

We have been offering a feed of IP addresses used by researchers. This feed is available via our API. To get the complete list, use:
(add ?json or ?tab to get it in JSON or a tab delimited format)
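Once retrieved, the tab-delimited flavor of the feed is easy to consume; a minimal sketch (the column names here are assumptions for illustration, not the feed's documented schema):

```python
import csv
import io

# Hypothetical two-column sample standing in for the tab-delimited feed;
# the actual column layout is an assumption for illustration only.
sample = "ipaddress\tname\n198.51.100.7\tshodan\n203.0.113.9\tstretchoid\n"

rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))

# Build a per-researcher lookup, e.g. to tag firewall log entries
by_name = {}
for row in rows:
    by_name.setdefault(row["name"], set()).add(row["ipaddress"])

print(sorted(by_name["shodan"]))  # ['198.51.100.7']
```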

We also have feeds for specific research groups (see ). 

Some of the research groups I have seen recently:

  • Shodan: Probably the best-known group. Shodan publishes the results of its scans at
  • Shadowserver: Also relatively well known. Shadowserver doesn't make much of its data public, but it will make data available to ISPs to flag vulnerable/infected systems. You can find more about Shadowserver at
  • Stretchoid: I just recently came across this group and do not know much about them. They have a very minimal web page with an opt-out form:
  • Onyphe: A company selling threat feeds. See
  • CyberGreen: See . A bit like Shadowserver in that it is organized as a not-for-profit collaborative effort. Some data is made public, but more in aggregate form.

The next question: Should you block these IP addresses? Well… my simple honest answer: Probably not, but it all depends.

Shodan, for example (I put them in the research category), will publish the data it collects, and an attacker may now use Shodan to find a vulnerable system instead of performing their own scan. There are anecdotal stories of that happening, and I have seen pentesters do this. But we had a SANS Technology Institute student perform some research to measure the impact of Shodan, and he did not find a significant change in attack traffic depending on whether an IP was listed or not [1]. On the other hand, he also found that many IP addresses that appear to be used by Shodan are not identified as such via a reverse DNS lookup. Our list will likely miss a lot of them.

But then again, it probably doesn’t hurt (right… our lists are “perfect”? Probably not). And blocking these scans at the perimeter may cut down on some of the noise.

So what is the impact? Here is some data I pulled yesterday. We had a total of about 260k IP addresses reported to us. They generated about 30 million reports, so on average, a single source generates about 117 reports. The one researcher exceeding this number significantly is Shodan, with about 5,176 reports per source. Remember that Shodan will hit multiple target ports. Also, Shodan uses a relatively small set of published source IPs.

As far as the number of reports goes, Stretchoid is actually the "winner", with Shodan second and Shadowserver third. CyberGreen, with a total of 100 reports (compared to Stretchoid's 164k), hardly shows up. This may in part be due to us missing a lot of the CyberGreen addresses. I will have to look into that again.

What about the legality and ethics of these scans? The legality of port scans has often been discussed, and as I am not a legal expert, I won't weigh in on that. In my opinion, an ethical researcher should have a well-published "opt-out" option. IP addresses should reverse-resolve to a hostname that provides additional information about the organization performing the scan. Scans should also be careful not to cause any damage. A famous example is an old (very old) vulnerability in Cisco routers where an empty UDP packet to port 500 caused the router to crash. Researchers should not go beyond a simple connection attempt (using a valid payload) and a banner "grab". These scans should not attempt to log in, and rate limiting has to be done carefully. In particular, if IP addresses are scanned sequentially, it may happen that several of these IPs point to the same server.

Anything else you have seen researchers do that you liked or didn’t like? There are more researchers than I listed here. I need to add more to the feed. Also, not all of them scan continuously, and the data I am showing here is only from yesterday.

Johannes B. Ullrich, Ph.D. , Dean of Research, SANS Technology Institute

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.

Posted in: SANS


Flashback on CVE-2019-19781, (Thu, May 28th)

First of all, did you know that the Flame[1] malware turned 8 years old today? Happy birthday! The discovery of this famous malware was announced on May 28th, 2012. The malware was used for targeted cyber espionage activities in the Middle East and was probably developed by a nation-state organization. It infected a limited number of hosts (~1,000 computers[3]), making it a targeted attack.

In contrast, we see very broad attacks that try to abuse vulnerabilities present in very common products. Almost every day, new CVEs ("Common Vulnerabilities and Exposures") are released or updated. Yesterday, I indexed 141 new CVEs:

In a perfect world, a CVE is followed by a patch released by the vendor or the developer, followed by the deployment of this patch by the end-user. Case closed! But it's not always that simple, for multiple reasons. Recently, an interesting article was released about the top 10 most exploited vulnerabilities[3]. It's interesting to discover how very old vulnerabilities are still being exploited in the wild, for example: %%cve:2017-11882%% (from 2017!)

Amongst others, let's have a look at %%cve:2019-19781%%, also known as "Shitrix"[4]. We searched for the population of 'Citrix NetScaler' hosts in SHODAN, then we searched for the ones tagged with the CVE. The results are interesting (starting from the beginning of the year).

In blue, you see the number of devices identified as vulnerable. The green data represent the entire population of Citrix devices seen online. Let’s focus on the two first months:

We see that SHODAN, while scanning the web, found more and more vulnerable devices; then organizations started to patch them, but the number of detected vulnerable devices remained stable for a while (around ~4,000 daily). We also see a decrease in detected NetScaler devices overall. How should we interpret this?

  • Some organizations got rid of their Citrix devices and replaced them with another solution (it could happen)
  • Devices were hardened and no longer disclose the version/model (fingerprinting is not possible)
  • Devices facing the Internet are now protected by filters/firewalls
  • SHODAN IP addresses are blacklisted (which is bad and does NOT secure your infrastructure)

Anyway, the best advice remains patch, patch, and patch again!


Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.

Posted in: SANS


Frankenstein's phishing using Google Cloud Storage, (Wed, May 27th)

Phishing e-mail messages and/or web pages are often unusual in one way or another from the technical standpoint – some are surprisingly sophisticated, while others are incredibly simple, and sometimes they are a very strange mix of the two. The latter was the case with an e-mail that our company e-mail gateway caught last week – some aspects of it appeared to be professionally done, but others screamed that the author was a "beginner" at best.

The message appeared to come from info[@]orlonvalves[.]com and passed both SPF and DKIM checks. Contrary to popular belief, it is not that unusual to see a phishing e-mail from an SPF-enabled domain[1,2]. A phishing message with a valid DKIM signature, on the other hand, is usually seen in connection with a compromised e-mail server. Although it is possible that this was the case here as well, I'm not completely sure. The domain in question was registered about half a year ago using Namecheap, neither it nor any existing subdomain appears to host any content, and no company of corresponding name seems to exist. In contrast, a company named Orion Valves, which uses the domain orionvalves[.]com, does exist. Although we may only speculate on whether the domain was intended to be used for phishing, the substitution of characters (i.e. "l" for "i") in lookalike domain names is a common tactic for phishers, and I wouldn't be surprised if this effect was what the domain holder was actually going for.
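Spotting such single-character swaps in candidate domains can be automated; a minimal sketch with an illustrative (and deliberately tiny) homoglyph table:

```python
# Tiny illustrative subset of visually confusable character pairs;
# a real implementation would use a much larger confusables table.
HOMOGLYPHS = {"i": "l", "l": "i", "o": "0", "0": "o"}

def is_lookalike(candidate, brand):
    """True if candidate differs from brand by exactly one homoglyph swap."""
    if candidate == brand or len(candidate) != len(brand):
        return False
    diffs = [(a, b) for a, b in zip(candidate, brand) if a != b]
    # exactly one differing character, and it is a known homoglyph of the original
    return len(diffs) == 1 and HOMOGLYPHS.get(diffs[0][1]) == diffs[0][0]

print(is_lookalike("orlonvalves.com", "orionvalves.com"))  # True
```

Running watched brand domains through a check like this against newly registered domains is one common way such lookalikes are caught early.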

As you may see, apart from the potentially interesting sender domain, the message was a fairly low-quality example of run-of-the-mill phishing. It claimed to be from Microsoft, but also from a source at (i.e. our company domain). The only further small point of interest was hidden within its HTML code. Even though it is usually not necessary to analyze the code of phishing messages, it may sometimes provide at least some information about their authors. In this case, for example, given that the attributes "data-cke-saved" and "data-cke-eol" are present in the code, we may surmise that the author most likely used CKEditor to create the HTML code (and that he probably built it from a historical phishing message which pointed to different phishing pages)[3].

As the code shows, the links in the message lead to the following Google Cloud Storage URL.


I reported the URL to Google, but since the page is still reachable at the time of writing, you may be able to take a look at it yourself, if you’re interested.

Although the web page didn't look like anything special at first glance, on closer inspection it turned out to be quite interesting for multiple reasons.

It was self-contained, with all scripts, styles as well as pictures embedded in the code. This technique is sometimes used by attackers in order to create phishing pages they may use as attachments[4], but isn’t too common for the server-hosted phishing sites (though, given where this page was hosted, use of the technique makes some sort of sense).

It also appeared to be fairly well written – the author anticipated both a situation where a script blocker would stop JavaScript from executing and one where the scripts would run. If JS execution was possible, the page would "personalize" its contents and pre-fill the user's e-mail address in the form; if not, it would stay in a more generic, but still fully functional, form.

On the other hand, personalization of the page wasn't the only thing the embedded JS would try to do.

Another piece of JavaScript contained an encoded version of the entire page (i.e. code identical to the one present in the HTML) and it would try to decode it and write it in the body of the document. This would be a bit strange by itself, since – as we’ve mentioned – both versions of the HTML code were the same and if the code were to run, it would result in the entire contents being present twice (i.e. two complete credential stealing forms on one page). But where it got even stranger was the placement of the JavaScript code – it was placed in a style tag within the head portion of the site, which would result in the code never executing (at least not in any browser I’ve tried). It was also probably supposed to be commented out, though it didn’t end up that way as there was a newline after the comment tag instead of a space… In short, there was no reason for the code to be there as it would never run and the way in which it was embedded was completely wrong even if the author intended it as some sort of backup.

If a target of the phishing were to input his credentials in the page, they would be sent in a POST request to the following URL:


After that, the browser would be redirected (HTTP 302) to another PHP script on the same server (go.php) and from there to the domain to which the e-mail address specified in the form belonged. Redirection to a legitimate domain after credentials have been gathered by a phishing site is quite a common tactic, since the target may then come to believe that they simply made a mistake while typing the password.

As we may see, the phishing really was a strange mix. On one hand, we have the use of a potential phishing domain with SPF and DKIM set up to send the original e-mail, a well-written phishing page and a fairly standard credential gathering mechanism using a different domain and server from the ones hosting the phishing site itself. On the other hand, we have a very low-quality phishing message trying (though not very hard) to look like it was sent by two different sources at once and a nonsensical inclusion of JavaScript in the phishing page, which would never execute, but if it did, it would completely ruin the appearance of the page as anything even nearly legitimate.

Who knows how this came to be – perhaps the attackers cobbled together pieces of different phishing campaigns they found online and ended up with something functional but resembling the creation of Dr. Frankenstein more than anything else…


Indicators of Compromise (IoCs)




Jan Kopriva
Alef Nula

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.

Posted in: SANS


Seriously, SHA3 where art thou?, (Tue, May 26th)

A couple of weeks ago, Rob wrote a couple of nice diaries. In our private handlers Slack channel, I was joking after the first one about whether he was going to rewrite CyberChef in PowerShell. After the second, I asked: what about SHA3? So, he wrote another one (you're welcome for the diary ideas, Rob). I was only half joking.

SHA2 (SHA256, or more accurately SHA2-256, being the most common version in use) was first adopted in 2001. SHA3 was adopted in 2015. Fortunately, because we've known about the weaknesses in MD5 and SHA1 for years, those have been phased out for integrity purposes over the last decade. And, fortunately, I'm not aware of any weaknesses in SHA2 yet, but it is only a matter of time. Having said that, I still see a lot of malware or forensic reports that include MD5 or SHA1, these days fortunately usually also with SHA256, but I don't believe that even VirusTotal is calculating SHA3 hashes for new samples. I understand the arguments that using both MD5 and SHA1 is probably sufficient for the moment for malware sample identification purposes, but the new standard has been out there for 5 years now, and the hash that is being used is almost 20 years old. What is the hold up?

In my own personal malware database, I added a column for SHA3 back when NIST first announced that they were going to have a competition to choose the new hash. Python has included SHA3 in hashlib since 3.6, and it was backported to 2.7-3.5 in pysha3. The Perl Digest::SHA3 module has been around since the standard was adopted. I added it to my tool more than 3 years ago; more specifically, I use SHA3-384 (as did Jesse Kornblum's beta of sha3deep, though I don't see a final release of that). So, what is the hold up? Why aren't we using the current standard? I, for one, plan to include both SHA2-256 and SHA3-384 hashes in all of my reports going forward. Thoughts?
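For anyone who wants to follow suit, Python's standard hashlib already covers both; for example:

```python
import hashlib

# Compute the common SHA2-256 and the newer SHA3-384 of the same data,
# as suggested above for inclusion in reports.
data = b"example sample bytes"
sha2_256 = hashlib.sha256(data).hexdigest()
sha3_384 = hashlib.sha3_384(data).hexdigest()

print("SHA2-256:", sha2_256)   # 64 hex characters
print("SHA3-384:", sha3_384)   # 96 hex characters
```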



Jim Clausing, GIAC GSE #26
jclausing –at– isc [dot] sans (dot) edu

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.

Posted in: SANS
