Archive for March, 2019

Maldoc Analysis of the Weekend by a Reader, (Sun, Mar 31st)

This week, reader Ahmed submitted a malicious document, which he was later able to analyse himself:

Today we detected a Word document containing a macro. I would like to share with you my analysis of this file, using tools made by Mr. Didier Stevens.

Checking the file, we can see it has 3 important streams, as shown in the picture below.

After looking at stream 16, I found that it doesn’t have any interesting strings or code.

But I found it calling a VBA document “YAA_AQD”, which you can see in the list of streams in the first picture.
I tried running strings on the file, and indeed I found a base64-encoded string being fed to a PowerShell command, but I found a better way to detect it using a plugin called “” also provided by Didier Stevens.

As you can see, this plugin was able to detect and parse the 20th stream, which contained a PowerShell command with a base64-encoded payload.

Piping this output to “” easily decodes it.
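The decoding step itself is generic: PowerShell’s -EncodedCommand argument is the base64 encoding of UTF-16LE text. A minimal Python sketch (the helper name is my own, not from the tools used in the post):

```python
import base64

def decode_encoded_command(b64: str) -> str:
    # PowerShell's -EncodedCommand takes base64 of UTF-16LE text
    return base64.b64decode(b64).decode("utf-16-le")

# Round-trip demo with a harmless command
sample = base64.b64encode("Write-Host 'hello'".encode("utf-16-le")).decode()
print(decode_encoded_command(sample))  # → Write-Host 'hello'
```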

Now we are able to see the PowerShell script that will execute: it tries to reach the website below, download malware, and save it as 533.exe.

Malware Domain: hxxp://nalfonsotriston[.]city/2poef1/j.php?l=pleid10.fgs
Downloaded File in user profile: 533.exe



Didier Stevens
Senior handler
Microsoft MVP

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.


"404" is not Malware, (Sat, Mar 30th)

Reader Chris submitted a PowerShell log. These are interesting too. Here’s what we saw:

A typical downloader command.

When I tried to download this using wget and the URL, I got a 404 page.

Next, I did a search for the URL on the free version of VirusTotal:

The URL has some detections. But more importantly, there is a link to the downloaded file. This can help me find the actual malware that was downloaded:

Notice that the detection count is 0, but that it has a very low community score. It’s a very small file: 564 bytes.
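A 564-byte download is already a strong hint that this is not the actual payload. A hypothetical triage heuristic (my own sketch, not a tool from the post) would check for the PE magic bytes a Windows executable must start with:

```python
def triage_download(data: bytes) -> str:
    # Windows executables start with the "MZ" magic bytes
    if data[:2] == b"MZ":
        return "pe-executable"
    # A tiny HTML body is more likely an error page than a payload
    if data.lstrip()[:1] == b"<" and len(data) < 2048:
        return "probable-error-page"
    return "unknown"

print(triage_download(b"<html><body>404 Not Found</body></html>"))  # → probable-error-page
```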

And it turns out to be HTML:

This time, VirusTotal can’t help me identify the file either: the hash of that small HTML file is the same as the hash of the file I downloaded. It’s also a 404 page.
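That comparison rests on file hashes: two files with the same digest are byte-for-byte identical. A minimal sketch of hashing a file the way samples are matched (SHA-256, one of the digests VirusTotal indexes):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Hash in chunks so large samples don't need to fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```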

This is something that happens regularly on VirusTotal: “404” downloads being scored as malware.

That doesn’t mean that the initial file (the PowerShell script) wasn’t malware. But what was actually downloaded wasn’t malware; it was a 404 page, probably because the compromised server had been cleaned.



Didier Stevens
Senior handler
Microsoft MVP

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.


Annotating Golang binaries with Cutter and Jupyter, (Fri, Mar 29th)

In my previous post, we went through some of the basics of analysing Golang binaries. This post annotates the disassembly in Cutter with source path and line information. If you’re not familiar with Cutter: it is the Qt-based frontend for Radare2.

Adding source path and line information to the disassembled instructions makes the binary much easier to interpret and analyse, as it adds a lot more context. If you want to play with the notebook, you can download it here. Later, the notebook can be converted to a plain Python script/plugin, making it easier to use within Radare2 or Cutter.

Inside Cutter there is a tab to start Jupyter, which contains a link to open the Jupyter session. In the first cell we connect to the active Cutter session, assuming the Golang binary has been opened and analysed already.

Communication with Cutter is done using r2pipe, executing radare2 commands that return JSON. To retrieve information about the sections in the binary, we execute the command iSj. Suffixing a command with j returns JSON output, making it easier to use within Python. The returned section information is stored in a map, with size and vaddr information for later reference.
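Assuming the name/vaddr/size fields that iSj emits, building that lookup map can be sketched as:

```python
import json

def index_sections(isj_json: str) -> dict:
    # Map section name -> (vaddr, size) for later lookups
    return {s["name"]: (s["vaddr"], s["size"]) for s in json.loads(isj_json)}

# Trimmed-down example of iSj output
sample = '[{"name": ".gopclntab", "vaddr": 4963776, "size": 272027}]'
print(index_sections(sample))  # → {'.gopclntab': (4963776, 272027)}
```

In the notebook the JSON would come straight from r2pipe (e.g. r2.cmdj("iSj")) instead of a string literal.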

The interesting bits are located in the .gopclntab section: it contains the source path and line information per address. The .gopclntab section documentation can be found here. Next we’ll walk through the section, verify the magic bytes, and extract the configuration and table size.
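For the Go 1.2 table format (newer Go versions use different magic values), the header check can be sketched as follows; this is an illustrative parser, not the notebook’s code:

```python
import struct

def parse_pclntab_header(data: bytes):
    # Go 1.2 pclntab layout: 4-byte magic, 2 zero bytes,
    # 1-byte instruction quantum, 1-byte pointer size, then the function count
    magic, = struct.unpack_from("<I", data, 0)
    if magic != 0xFFFFFFFB:
        raise ValueError("unexpected pclntab magic")
    quantum, ptrsize = data[6], data[7]
    fmt = "<Q" if ptrsize == 8 else "<I"
    nfunc, = struct.unpack_from(fmt, data, 8)
    return quantum, ptrsize, nfunc

hdr = struct.pack("<I", 0xFFFFFFFB) + b"\x00\x00" + bytes([1, 8]) + struct.pack("<Q", 42)
print(parse_pclntab_header(hdr))  # → (1, 8, 42)
```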

Next we define a method file_table, which will be used to look up indexes in the file table and return the corresponding path.

Now we can walk through the actual table. The table consists of pairs of a program counter and a function offset. Each function is described in detail at its function offset: function name, arguments, and source path and line information.

Using r2.cmd we can issue the command CCu, which adds a comment at the specified address. Executing the cell shows information about the program counters found and their corresponding source lines.
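One way to keep file paths with special characters safe is radare2’s base64: comment form. A small helper (my own sketch; the notebook may build the command differently) that produces the CCu command string to pass to r2.cmd:

```python
import base64

def source_comment_cmd(addr: int, path: str, line: int) -> str:
    # Base64-encode the comment so paths with special characters
    # survive radare2's command parser
    text = f"{path}:{line}"
    b64 = base64.b64encode(text.encode()).decode()
    return f"CCu base64:{b64} @ {addr:#x}"

print(source_comment_cmd(0x4521F0, "/go/src/main.go", 42))
```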

Switching back to Cutter again, you’ll see that specific addresses now have comments with the source location for the address.

Please share your ideas, comments and/or insights with me via social media, @remco_verhoef, or email, remco.verhoef at dutchsec dot com. Have a great day!

Remco Verhoef (@remco_verhoef)
ISC Handler – Founder of DutchSec

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.


Running your Own Passive DNS Service, (Wed, Mar 27th)

Passive DNS is not new but remains a very interesting component to have in your hunting arsenal. As defined by CIRCL, a passive DNS is “a database storing historical DNS records from various resources. The historical data is indexed, which makes it searchable for incident handlers, security analysts or researchers”. There are plenty of existing passive DNS services: CIRCL[1], VirusTotal, RiskIQ, etc. I’m using them quite often but, sometimes, they simply don’t have any record for a domain or an IP address I’m interested in. If you’re working for a big organization or a juicy target (depending on your business), why not operate your own passive DNS? You’ll collect data from your network that will represent the traffic of your own users.

The first step is to collect your DNS data. You can (and it’s highly recommended) log all DNS queries performed by your hosts, but there is a specific free tool that I use to collect passive DNS data: passivedns[2]. It’s a network sniffer that collects all DNS answers and logs them. So it’s non-intrusive with a low footprint. The tool is quite old but runs perfectly on recent Linux distros.

AFAIK, it is not available in the classic repositories and you’ll have to compile it yourself. I recommend activating the JSON output format (not enabled by default):

sansisc# ./configure --enable-json
sansisc# make install

Note: you will probably need some dependencies (libjansson-dev, libldns-dev, libpcap-dev).

Once compiled, run it with the following syntax:

sansisc# /usr/local/bin/passivedns -D -i eth1 -j -l /var/log/passivedns.json

Note: In the command above, I assume that you already get a mirror of your traffic on eth1. My passivedns process is running on my SecurityOnion instance.

A few seconds later, you should see interesting data stored in /var/log/passivedns.json:


The output has been reformatted for readability. There is one JSON event per line in the log file.

The next step is to process the file. passivedns comes with some scripts to store the data in a local DB and perform queries. In my case, I prefer to index the log file in my Splunk (or ELK, or … name your preferred tool). Here is a Splunk query example to search for queries containing “sans.”:

index=passivedns query="*sans.*"
| stats earliest(timestamp_s) as start, latest(timestamp_s) as stop by query,answer 
| eval first_seen=strftime(start, "%d/%m/%Y %H:%M:%S") 
| eval last_seen=strftime(stop, "%d/%m/%Y %H:%M:%S") | fields - start - stop
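If Splunk isn’t available, the same first-seen/last-seen aggregation can be sketched in plain Python, assuming the timestamp_s, query and answer fields that passivedns writes (field names taken from the queries above):

```python
import json

def first_last_seen(log_lines):
    # Track earliest/latest timestamp per (query, answer) pair,
    # mirroring the Splunk stats query
    seen = {}
    for line in log_lines:
        rec = json.loads(line)
        key = (rec["query"], rec["answer"])
        ts = float(rec["timestamp_s"])
        first, last = seen.get(key, (ts, ts))
        seen[key] = (min(first, ts), max(last, ts))
    return seen

logs = [
    '{"timestamp_s": "1553600000", "query": "sans.org.", "answer": "45.60.31.34"}',
    '{"timestamp_s": "1553600500", "query": "sans.org.", "answer": "45.60.31.34"}',
]
print(first_last_seen(logs))  # → {('sans.org.', '45.60.31.34'): (1553600000.0, 1553600500.0)}
```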

Passive DNS is a nice alternative to the regular collection of DNS logs if you can’t get access to the DNS logs because your system admin is not cooperative (yeah, this happens!). You can also search your passive DNS data against malicious domains from a threat intelligence tool like MISP. The following query searches for malicious domains extracted from MISP during the last month, together with the IP address that performed each query. This query should NOT return any record; otherwise, it’s time to investigate.

[|mispgetioc last=30d category="Network activity" type="domain"| rename misp_domain as query | table query]
| stats earliest(timestamp_s) as start, latest(timestamp_s) as stop by client, query
| eval first_seen=strftime(start, "%d/%m/%Y %H:%M:%S") 
| eval last_seen=strftime(stop, "%d/%m/%Y %H:%M:%S") | fields - start - stop

Note: mispgetioc is a project available on GitHub that allows querying the MISP API from Splunk[3].

Not yet convinced? Do not hesitate to deploy such a tool in your networks. And you? Do you collect passive DNS data? Please share your ideas/comments!


Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.
