Archive for July, 2019

Video: Analyzing Compressed PowerShell Scripts, (Sun, Jul 28th)

In diary entry “Analyzing Compressed PowerShell Scripts”, we took a look at a malicious Word document with a compressed PowerShell script.

I created a video for this analysis:

Didier Stevens
Senior handler
Microsoft MVP

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.

Posted in: SANS


A Python TCP proxy, (Sat, Jul 27th)

I had to check how a particular email server would behave when receiving email via a Tor exit node, but the packet capture software wouldn’t run on my client. Instead of doing the logical thing, and figuring out what the problem was with the packet capture software, I searched for another solution. I looked for software that I could put between the email client and the Tor client, and monitor all traffic.
I found a Python program, tcpproxy, on GitHub.
You give it an address and port to listen on, and an address and port to forward all connections to. It can log, inspect and modify traffic passing through it.
Just what I needed. It works with modules; one of them is called hexdump. As its name implies, it performs a hexadecimal dump of all incoming and/or outgoing traffic.

Like this:

It isn’t easy to distinguish incoming from outgoing traffic, hence I made a small change to the hexdump module:

With the following result:

Unfortunately, tcpproxy is for Python 2 only, although I was able to modify it to run on Python 3 too. Further experimenting will tell me whether I’ll continue to use it.

The reason why I like this program so far: it’s a script, it’s configurable through command-line options and extensible via modules, and it depends on standard Python modules only, which makes it usable on many machines I have to work on.
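To make the idea concrete, here is a minimal Python 3 sketch of such a proxy, with a direction tag added to the hex dump. This is not the tcpproxy code itself, just an illustration; the function names are mine:

```python
import socket
import threading

def hexdump_tagged(direction, data, width=16):
    """Return a hex dump of data, each line prefixed with a direction tag."""
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hexpart = ' '.join(f'{b:02x}' for b in chunk)
        text = ''.join(chr(b) if 32 <= b < 127 else '.' for b in chunk)
        lines.append(f'{direction} {offset:08x}  {hexpart:<{width * 3}} {text}')
    return '\n'.join(lines)

def pipe(src, dst, direction):
    """Copy data from src to dst, dumping every chunk along the way."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        print(hexdump_tagged(direction, data))
        dst.sendall(data)

def proxy(listen_addr, forward_addr):
    """Accept connections on listen_addr and forward them to forward_addr."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(listen_addr)
    server.listen(5)
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection(forward_addr)
        # '>' marks client-to-server traffic, '<' marks server-to-client
        threading.Thread(target=pipe, args=(client, upstream, '>'), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client, '<'), daemon=True).start()
```

Something like proxy(('127.0.0.1', 2525), ('mail.example.com', 25)) would then sit between an email client and a server, printing tagged dumps of both directions.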

If you have a TCP proxy script you like, please let me know with a comment. Thanks.


Didier Stevens
Senior handler
Microsoft MVP

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.

Posted in: SANS


DVRIP Port 34567 – Uptick, (Fri, Jul 26th)

We have been seeing an uptick in activity on port 34567 in recent weeks. [1] I was curious, so I poked around to learn a few things. At this point, it appears it could be a campaign of some kind.

Admittedly, I do not know much about this port. After a little digging, I see a possible affinity to Fbot and Mirai or its variants. We have a diary from Dr. J. on Mirai [2]. After some reading, I cannot definitively tie this to Mirai, Fbot or something else just yet. However, in early 2019 there was a well-publicized uptick in Fbot activity. [3] I went looking for data on ports that coincided with the early 2019 Fbot events. I did find some correlation, but nothing purely consistent: not all ports with ties to Fbot saw a recent correlating spike. Some well-known ports that showed Fbot activity back then are TCP 80, 81, 88, 8000 and 8080. Some of these have correlating spikes of late. See the graphs below.




Looking at these three graphs only, one could infer there were fewer infected hosts in early 2019. The recent uptick shows a more equal distribution of sources and targets. This can mean there are more infected hosts, and possibly that a new campaign has begun.
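If you want to eyeball the source/target distribution yourself, the ISC exposes port data through its API at isc.sans.edu/api. A hedged sketch follows; the `records` sample and its field names are illustrative, not the API’s exact schema:

```python
def port_report_url(port):
    """Build the ISC API URL for a port report (JSON output)."""
    return f'https://isc.sans.edu/api/port/{port}?json'

def source_target_ratio(records):
    """Ratio of sources to targets; values near 1.0 suggest a broad,
    evenly spread scan rather than a few loud hosts."""
    sources = sum(r['sources'] for r in records)
    targets = sum(r['targets'] for r in records)
    return sources / targets if targets else 0.0

# Hypothetical per-day records, roughly in the shape the API returns:
records = [
    {'date': '2019-07-24', 'sources': 1200, 'targets': 1500},
    {'date': '2019-07-25', 'sources': 1400, 'targets': 1600},
]
print(port_report_url(34567))
print(round(source_target_ratio(records), 2))
```

Fetching the live data is then a single urllib.request.urlopen call on the URL that port_report_url builds.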

I invite you all to comment and share what you may know of this observation.


ISC Handler on Duty

[2] – Johannes Ullrich’s diary on Mirai, 09-05-2017

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.

Posted in: SANS


When Users Attack! Users (and Admins) Thwarting Security Controls, (Thu, Jul 25th)

Today, I’d like to discuss a few of the Critical Controls, and how I see real people abusing or circumventing them in real companies. (Sorry, no code in today’s story, but we do have some GPOs!)

First, what controls are on the table today?


Ensure that only fully supported web browsers and email clients are allowed to execute in the organization, ideally only using the latest version of the browsers and email clients provided by the vendor.


Block all email attachments entering the organization’s email gateway if the file types are unnecessary for the organization’s business.


Ensure that all users with administrative account access use a dedicated or secondary account for elevated activities. This account should only be used for administrative activities and not Internet browsing, email, or similar activities.


First, let’s talk about admin folks. In this first situation, we’ve got helpdesk and IT folks who all require elevated privileges. This client did the right thing and created “admin accounts” for each of those folks – not all Domain Admin, but accounts with the correct, elevated rights for each of those users.

Back in the day, these users were then supposed to use “run-as” to use those accounts locally.  These days however, it’s recognized that this “plants” admin credentials on the workstation for Mimikatz to harvest, and also starts an admin-level process for malware to migrate to.  Today’s recommendation is that users should run a remote session to a trusted admin station, which doesn’t have email or internet access, and do their admin stuff there.

This means that those “admin” accounts have a list of stations that they are permitted to login to – ideally only servers (not workstations).  You can enforce this in AD with a Group Policy:

  • Create a GPO, maybe call it “Deny Domain Admin On Workstations”
  • In: Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\User Rights Assignment
  • Set “Deny log on locally” and assign that right to “Domain Admins”, “Enterprise Admins” and any other relevant admin groups
  • Repeat this for the rights “Deny log on as a batch job” and “Deny log on as a service”
  • Now, test this on real workstations before you deploy it widely. If you in fact do have scheduled tasks or services running with Domain Admin rights, consider this a good day to evaluate life choices within the IT department and come up with a better way :)
  • When testing is complete, link this GPO to the various OUs where your workstations are; be sure that you DON’T link this GPO to any server OUs.

These controls also mean that you need to disable internet access on your jump hosts for admin sessions.  This might be an uphill battle, but we’ll get to why it’s important.  This is also easy in a Group Policy:

  • Create a new GPO, maybe call it “No Internet Access for Domain Admin Accounts”
  • Open User Configuration\Policies\Windows Settings\Internet Explorer Maintenance\Connection.
  • In  Proxy Settings, configure Enable proxy settings to point to a bogus proxy, maybe

So, what could possibly go wrong? … let’s count the ways :)

1/ Remember, these are admin folks.  So they’ve got the rights, and they also get impatient with extra work.  So, the natural inclination is to exempt themselves – maybe create a new group with admin rights, and move their account into that group.  That gives them admin login to workstations, and also gives them internet from the jump hosts.

2/ You’ll also see at least some of those folks “connect” that admin account to their Exchange mailbox. That gives them email from their admin account.

Voila! Now they are logged in all the time as admin, and checking email as admin. So now, when that user receives a Word or Excel file (or whatever) with a macro in it, then opens the file and runs the macro, that macro is running as (in this case) Domain Admin. Which of course means that (yes, in real life) the customer’s Domain Controller got ransomwared (along with all of their other servers, actually).

3/ Again with the extra work theme – the admin in question was using the “jump host” as they should be, but needed an attachment from an email that they got during a support call.  The right way to do this is to open the mail from their non-privileged account on their non-privileged workstation, collect that file, map a drive and copy it over.  Or if it’s text, just copy/paste that info into their admin “window” that’s running on the jump host.   (Same procedure if it’s a downloaded file from a vendor support site or whatever)

What happened instead?  They fired up a browser and browsed to their OWA server from that jump host!  This means that again, they’re checking email as admin.  Worse yet, it’s on the actual admin host, where *all* of the admin accounts are, so Mimikatz can now collect *all* of the admin account credentials!

How can we protect against these? For IT admins, often your only protection is audit and logging. You should be tracking the creation of new groups, and any changes to admin accounts (group membership for starters). You can do this with audit policies in AD, but you’ll need a decent central logging and alerting process so that the right folks get told when your admins “color outside of the lines”. This seems like real “mom and apple pie” advice, but it’s more complicated than it sounds. I took SEC 555 when I attended SANSFIRE this year; it was a real eye-opener for me as to how to assemble all the pieces to accomplish this sort of thing.
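To sketch what that tracking can look like: Windows records additions to security-enabled groups as events 4728 (global), 4732 (local) and 4756 (universal), so a first-pass filter over exported events is tiny. The record layout below is illustrative, not any real export format:

```python
# Event IDs for "a member was added to a security-enabled ... group"
GROUP_ADD_EVENTS = {4728, 4732, 4756}
ADMIN_GROUPS = {'Domain Admins', 'Enterprise Admins', 'Administrators'}

def admin_group_alerts(events):
    """Return the events that add someone to a privileged group."""
    return [e for e in events
            if e['event_id'] in GROUP_ADD_EVENTS
            and e['group'] in ADMIN_GROUPS]

# Hypothetical exported events:
events = [
    {'event_id': 4624, 'group': None, 'member': 'alice'},             # a logon, ignore
    {'event_id': 4728, 'group': 'Domain Admins', 'member': 'bob'},    # alert!
    {'event_id': 4732, 'group': 'Print Operators', 'member': 'eve'},  # not watched here
]
for alert in admin_group_alerts(events):
    print(f"ALERT: {alert['member']} added to {alert['group']}")
```

In real life this filtering lives in your SIEM, but the rule itself really is this simple; the hard part is getting the logs centralized and the alert routed to a human.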

On the email thing, blocking OWA from the admin stations is likely a good idea. You can do this using host-based firewalls, or you can properly segment your network – isolate your user stations from your servers for starters, and start on a set of firewall permissions between user stations and servers, and from servers to user stations. Because not every user needs tcp/1433 to your SQL servers, not every user needs tcp/445 to every server, and your Accounting group, receptionist and visitors don’t need login access to your vCenter server.

Wait, we just dragged some new controls into the mix with this discussion – it’s funny how when you try to control a problem, often the list of controls that you think you need is not the complete list:


Configure systems to issue a log entry and alert when an account is added to or removed from any group assigned administrative privileges.


Ensure that appropriate logs are being aggregated to a central log management system for analysis and review.


Deploy Security Information and Event Management (SIEM) or log analytic tools for log correlation and analysis


Associate active ports, services, and protocols to the hardware assets in the asset inventory.


Ensure that only network ports, protocols, and services listening on a system with validated business needs are running on each system.


Place application firewalls in front of any critical servers to verify and validate the traffic going to the server. Any unauthorized traffic should be blocked and logged.




Deny communications with known malicious or unused Internet IP addresses and limit access only to trusted and necessary IP address ranges at each of the organization’s network boundaries.


Deny communication over unauthorized TCP or UDP ports or application traffic to ensure that only authorized protocols are allowed to cross the network boundary in or out of the network at each of the organization’s network boundaries.


How about non-admin people in your organization?  

Say you have a good spam filter, and you’re blocking Office macros on inbound emails. Or, more typically, you’re permitting macros inbound from a few partner organizations (where IT always loses the battle), and the business has “accepted the risk” that macro-based malware from those partner orgs is not blocked. But in either case, an Office document with a macro from some “random source email address” is not getting in.

You’ll always have targeted folks for things like this – in many organizations, that will be the CFO or people in accounting (who can transfer dollars), or in healthcare it will be the head of nursing (who has access to all health records) or the folks in charge of dispensing pharmaceuticals. In any organization, it’s also recognized that the CEO and Sr. Management (all the folks whose names are posted on your website or are easily found with Google or LinkedIn) generally have the rights to ignore or bypass policies, so they’ll be targeted as well. In addition, you’ll have those users who are always after the “cracker jack prize” in the email – once they start, they’ll click as many times as necessary to get there (see Xavier’s story yesterday). Word does get around – if you have folks who always open attachments and click links, your adversary likely has them on a list – you might call it “market research” or “knowing your demographic”.

Anyway, you’ll have a targeted person, and the attacker will have some in-band (i.e. email) or out-of-band (phone, Facebook Messenger, LinkedIn or whatever) access. At some point, the attacker will realize that their malware isn’t getting into the organization via email. So, what will they suggest? “Let me send it to your personal email” will often be the first suggestion. Your target will then pick the malware up via webmail. The other out-of-band methods will often be the second choice – LinkedIn, Facebook Messenger, or other social media “email replacement” services.

How can this be prevented (at least partially)? If you’ve got a content filter on your firewall or proxy server, you can usually block “Web Mail” as a category. Similarly, you can usually differentiate between an app (for instance Facebook) and the messenger within that app (Facebook Messenger), and have different block/allow rules for each. You’ll always have some folks that need things like the LinkedIn “InMail” application, HR in particular; you really are stuck with GPOs and AV in that case. If your organization does have partner organizations that use macros, HR likely does not need those. Using GPOs, either block HR people from running macros entirely, or only permit them to run internally signed macros.

Which brings 2 more controls into the mix:


Enforce network-based URL filters that limit a system’s ability to connect to websites not approved by the organization. This filtering shall be enforced for each of the organization’s systems, whether they are physically at an organization’s facilities or not.


Subscribe to URL-categorization services to ensure that they are up-to-date with the most recent website category definitions available. Uncategorized sites shall be blocked by default.


In addition, within your SIEM you likely want to elevate the priority of any alerts from folks that have access to sensitive information, have elevated admin rights, or are known “clicky-clicky” folks. You might call this “CVSS for people”, and you wouldn’t be far off the mark. Alerts might include suspect email attachments, script or macro activity, failed logins, or unusual IP addresses (on VPN connections or VDI, for instance). We obviously care about these alerts for all users, but we should care a bit more if that user is an IT admin, is a Sr. Manager, or “clicks that link” every Tuesday (again, Xavier has some great pointers on this here).
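A minimal sketch of that “CVSS for people” idea follows; the multipliers and user names are made up, and the point is only that the same base severity yields a different priority per user:

```python
# Per-person risk multipliers - "CVSS for people" (values are illustrative)
USER_RISK = {
    'cfo': 3.0,          # can move money
    'it_admin': 3.0,     # elevated rights
    'clicky_user': 2.0,  # known to click everything
}

def alert_priority(base_severity, user):
    """Scale an alert's base severity by the user's risk multiplier;
    unknown users get a neutral multiplier of 1.0."""
    return base_severity * USER_RISK.get(user, 1.0)

print(alert_priority(4.0, 'cfo'))           # same event, higher priority
print(alert_priority(4.0, 'receptionist'))  # baseline priority
```

In a real SIEM this would be a lookup table or watchlist feeding a correlation rule, but the arithmetic is no more complicated than this.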

This adds a few more controls; the first two we’ve already added (but it’s worth bringing them up twice!), the third is new. No SIEM is “set it and forget it” – in this case you are raising the priority of events involving specific users, but “adjusting” your SIEM is an ongoing process. There are always new attacks, new ways to detect attacks, and new methods of combining multiple logs to create a better view of a problem.


Ensure that appropriate logs are being aggregated to a central log management system for analysis and review.


Deploy Security Information and Event Management (SIEM) or log analytic tools for log correlation and analysis


On a regular basis, tune your SIEM system to better identify actionable events and decrease event noise.


Defense of your environment is a continual thing, and it’s constantly evolving.  Do you have any stories that you can share that are related?  Have I missed an obvious control in the situations I’ve discussed?  Please, use our comment form and share!

Rob VandenBrink

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.

Posted in: SANS


May People Be Considered as IOC?, (Wed, Jul 24th)

That’s a tricky question! May we manage a list of people like regular IOCs? An IOC (Indicator of Compromise) is a piece of information, usually technical, that helps to detect malicious (or at least suspicious) activities. Classic types of IOC are IP addresses, domains, hashes, filenames, registry keys, processes, mutexes, …

Plenty of lists exist of usernames that must be controlled. Here is a short list of typical accounts used to perform (remote) administrative tasks or that belong to default users:


If the activity of such kinds of users must be logged and controlled (admin users, system accounts), let’s now think about “real” people. It can be very interesting to keep an eye on the activity of “high profiles” inside the organization, like the board of directors (who are always juicy targets). But don’t forget the opposite: “low profiles”, non-tech people who perform dangerous tasks daily. Can we consider them as “IOCs”?

Let me tell you a story: a few years ago, when ransomware waves were not yet very common, a customer faced a security incident in which several Windows shares were encrypted after a user opened a malicious attachment. The person was not to blame: (s)he was responsible for the “[email protected]” mailbox, and a daily task was to process all emails sent to this address. The chance of seeing this person’s profile compromised is much higher than for a regular user. Can we track him/her as an IOC? What if we suddenly detect a lot of logon attempts with his/her credentials?

How will less security-aware people behave when they are facing a threat? The same applies to specific departments, like human resources, where one of the tasks is to open and read candidates’ resumes in Office or PDF documents. May we assign a “score” to people? In many organizations, people try to bypass security controls and behave in an unsafe way (I call them “mad-clickers”). The more they appear in security reports, the higher their score gets, and it could attract our attention.
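As a sketch of such a score (the names, data and threshold below are made up for illustration), counting how often each person appears in security reports and flagging the “mad-clickers”:

```python
from collections import Counter

def clicker_scores(report_appearances, threshold=3):
    """Count how often each user shows up in security reports and flag
    those whose score reaches the threshold."""
    scores = Counter(report_appearances)
    flagged = {user for user, score in scores.items() if score >= threshold}
    return scores, flagged

# Hypothetical list of users appearing in this month's security reports:
appearances = ['alice', 'bob', 'alice', 'carol', 'alice', 'bob']
scores, flagged = clicker_scores(appearances)
print(scores['alice'], flagged)
```

The flagged set is what would feed back into alert prioritization, with the privacy caveat below applying in full.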

Of course, all of this must be done with respect for privacy. The goal is NOT to spy on them!

Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.

Posted in: SANS


Verifying SSL/TLS configuration (part 1), (Tue, Jul 23rd)

One of the very important steps when performing penetration tests is to verify the configuration of any SSL/TLS services. Specifically, the goal of this step is to check which protocols and ciphers are supported. This might sound easier than it is, so this will be a series of diaries in which I will try to explain how to verify configuration, but also how to assess risk.

We cover this in the SEC542 course (Web App Penetration Testing and Ethical Hacking), but verifying configuration is not limited only to web applications – we should do it in both external and internal penetration tests.

While one could create a small script around the openssl command to verify all supported protocols and ciphers, it is much easier to use some of the following tools. Let’s dive in:
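For illustration, such a small script can also be built on Python’s own ssl module rather than the openssl command, by pinning a handshake to one protocol version at a time. This is only a sketch, not a replacement for the tools below; note that old protocol versions may be compiled out of your local OpenSSL, so a failed handshake is not always proof the server rejects it:

```python
import socket
import ssl

# Protocol versions to probe, newest last
PROTOCOLS = {
    'TLSv1.0': ssl.TLSVersion.TLSv1,
    'TLSv1.1': ssl.TLSVersion.TLSv1_1,
    'TLSv1.2': ssl.TLSVersion.TLSv1_2,
}

def supports(host, port, version, timeout=5):
    """Attempt a handshake pinned to a single TLS version;
    True if the handshake completes."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE   # we test protocol support, not trust
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

def scan(host, port=443):
    for name, version in PROTOCOLS.items():
        print(f'{name}: {"supported" if supports(host, port, version) else "no"}')
```

Enumerating individual ciphers would take a similar loop over ctx.set_ciphers(), which is exactly why the dedicated tools below are the easier route.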


While some people might use nmap only for port scanning (Trinity used it for OS fingerprinting as well – good job, Trinity), in the last couple of years I have become a huge fan of Nmap’s scripts. NSE (the Nmap Scripting Engine) allows the creation of scripts in the embedded Lua programming language. There are ~600 NSE scripts distributed with nmap today, and some of them should be your everyday tools – especially for checking SSL/TLS configuration. Here they are:

ssl-enum-ciphers.nse – this is the most powerful NSE script for SSL/TLS checking. It will verify which protocols are supported (SSL3 and above) and then enumerate all cipher algorithms. Besides this, ssl-enum-ciphers will also give you a rating (score) of how good or bad a cipher is. Do not take this for granted, although the scores are generally good.

Below we can see ssl-enum-ciphers executed against my mail server. The output is pretty self-explanatory and you can see that I tried to lock the configuration a lot by supporting only TLSv1.2 and using strong ciphers, with preferred PFS (Perfect Forward Secrecy) algorithms. All of these will be explained in future diaries.

$ nmap -Pn -sT -p 443 --script ssl-enum-ciphers
Starting Nmap 7.70SVN ( ) at 2019-07-23 09:29 UTC
Nmap scan report for (
Host is up (0.087s latency).
rDNS record for

443/tcp open  https
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
|       TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (dh 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (dh 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (dh 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
|       TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: server
|_  least strength: A

sslv2.nse – while the ssl-enum-ciphers script is amazing, it does not support SSLv2. Keep this in mind – because when you scan a network with it, it will not report usage of SSLv2 so you might miss it. This is why we have the sslv2 script, so make sure you run it everywhere.

ssl-cert.nse – this is a nice small script that will retrieve information about the X.509 certificate used on the target web site – perfect for easy collection and examination of this information on a wide range of targets. We can see it running here:

$ nmap -Pn -sT -p 443 --script ssl-cert
Starting Nmap 7.70SVN ( ) at 2019-07-23 09:32 UTC
Nmap scan report for (
Host is up (0.039s latency).
rDNS record for

443/tcp open  https
| ssl-cert: Subject:
| Subject Alternative Name:,
| Issuer: commonName=Let’s Encrypt Authority X3/organizationName=Let’s Encrypt/countryName=US
| Public Key type: rsa
| Public Key bits: 2048
| Signature Algorithm: sha256WithRSAEncryption
| Not valid before: 2019-05-27T07:47:53
| Not valid after:  2019-08-25T07:47:53
| MD5:   f31d bf25 01d3 c303 e762 e1e3 bb86 9c5b
|_SHA-1: 3835 540e 9d14 2537 7c7f b9be 0abb ba37 161a cc4e

ssl-dh-params.nse – finally, this is the last script I use when checking SSL/TLS configuration. It will analyse Diffie-Hellman MODP group parameters and report if weak DH parameters are used (i.e. CVE-2015-4000 and other weaknesses).

Finally, just keep one thing in mind – whenever you use nmap, make sure that you have the latest version installed (I actually pull it from Subversion and compile it manually). Many Linux distributions come with ancient nmap versions, so not only might you be missing features and bug fixes, but the scores for ciphers might also be wrong.

Another great resource for SSL/TLS configuration testing is the testssl.sh script. This script comes with its own statically precompiled version of openssl, which supports every possible protocol and cipher. This way the author ensures that nothing will be missed (there was a huge hiccup with Kali Linux and SSLv2 support back in 2013, when they removed it and a lot of penetration testers got burned).

My suggestion is to clone testssl.sh from GitHub to make sure you have the latest version – no compilation is needed. Just run it against the target web site and you will see nicely coloured results, as shown in the screenshot below for my server:

Qualys SSL Labs:

Finally, the last great online resource is the Qualys SSL Labs SSL Server Test by @ivanristic. SSL Server Test will perform a bunch of different tests and generate a very, very detailed report on SSL/TLS configuration, together with a final score – and everyone will try to achieve an A+ of course (we all like green A+ grades).

SSL Server Test is available at so, obviously, you can use it only for publicly available web sites. Additionally, don’t forget to select the “Do not show the results on the boards” box so your test will not be shown on the web site.


As we can see, there is a plethora of good tools we can use. But besides running tools, we need to be able to interpret the results as well. For example, if 3DES is used on your Microsoft Terminal Server (RDP), is that a bad thing or not? Can we live with it?
I will cover all these cases in future diaries – let us know if there is something specific you would like us to write about.

And I almost forgot – if you want to hear more about this, there is a nice (I hope!) SANS Webcast this Thursday, July 25th, 2019 at 10:30 AM EDT (14:30:00 UTC), “A BEAST and a POODLE celebrating SWEET32” where I will be demoing some of SSL/TLS vulnerabilities.

An even longer presentation will be at the BalCCon2k19 conference this year so let me know if you are around! 


(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.

Posted in: SANS
