
Archive for SANS

When Users Attack! Users (and Admins) Thwarting Security Controls, (Thu, Jul 25th)

Today, I’d like to discuss a few of the Critical Controls, and how I see real people abusing or circumventing them in real companies.  (Sorry, no code in today’s story, but we do have some GPOs.)

First, what controls are on the table today?

7.1

Ensure that only fully supported web browsers and email clients are allowed to execute in the organization, ideally only using the latest version of the browsers and email clients provided by the vendor.

7.9

Block all email attachments entering the organization’s email gateway if the file types are unnecessary for the organization’s business.

4.3

Ensure that all users with administrative account access use a dedicated or secondary account for elevated activities. This account should only be used for administrative activities and not Internet browsing, email, or similar activities.

 

First, let’s talk about admin folks.  In this first situation, we’ve got helpdesk and IT folks who all require elevated privileges.  This client did the right thing and created “admin accounts” for each of those folks – not all Domain Admin, but accounts with the correct, elevated rights for each of those users.

Back in the day, these users were then supposed to use “run-as” to use those accounts locally.  These days however, it’s recognized that this “plants” admin credentials on the workstation for Mimikatz to harvest, and also starts an admin-level process for malware to migrate to.  Today’s recommendation is that users should run a remote session to a trusted admin station, which doesn’t have email or internet access, and do their admin stuff there.

This means that those “admin” accounts have a list of stations that they are permitted to log in to – ideally only servers (not workstations).  You can enforce this in AD with a Group Policy:

  • Create a GPO, maybe call it “Deny Domain Admin On Workstations”
  • In: Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\User Rights Assignment
  • Set “Deny log on locally” and assign that right to “Domain Admins”, “Enterprise Admins” and any other relevant admin groups
  • Repeat this for the “Deny log on as a batch job” and “Deny log on as a service” rights
  • Now, test this on real workstations before you deploy it widely (a quick verification sketch follows this list).  If you do in fact have scheduled tasks or services running with Domain Admin rights, consider this a good day to evaluate life choices within the IT department and come up with a better way  (see: https://isc.sans.edu/forums/diary/Windows+Service+Accounts+Why+Theyre+Evil+and+Why+Pentesters+Love+them/20029/)
  • When testing is complete, link this GPO to the various OUs where your workstations are, and be sure that you DON’T link it to any server OUs.
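A quick way to confirm that the policy actually landed on a test workstation is to export the effective security policy and check that the admin groups’ SIDs show up under the three “deny” rights.  A rough PowerShell sketch (run from an elevated prompt; the output path is just an example):

# Export only the user-rights assignments, then look for the three deny-logon rights.
# Domain Admins will show up as a SID ending in -512, Enterprise Admins in -519.
secedit /export /cfg C:\Temp\user_rights.inf /areas USER_RIGHTS
Select-String -Path C:\Temp\user_rights.inf -Pattern 'SeDenyInteractiveLogonRight|SeDenyBatchLogonRight|SeDenyServiceLogonRight'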

These controls also mean that you need to disable internet access on your jump hosts for admin sessions.  This might be an uphill battle, but we’ll get to why it’s important.  This is also easy in a Group Policy:

  • Create a new GPO, maybe call it “No Internet Access for Domain Admin Accounts”
  • Open User Configuration\Policies\Windows Settings\Internet Explorer Maintenance\Connection.
  • In Proxy Settings, check “Enable proxy settings” and point it to a bogus proxy, maybe 127.0.0.1  (an equivalent registry-based sketch follows this list)
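As an alternative to the GUI path above (handy where Internet Explorer Maintenance is no longer available), the same bogus proxy can be pushed as user registry values with the GroupPolicy PowerShell module.  A minimal sketch, where the GPO name and proxy value are examples:

# Point the users' WinINET proxy at localhost so browsers on the jump host go nowhere.
Import-Module GroupPolicy
Set-GPRegistryValue -Name "No Internet Access for Domain Admin Accounts" `
    -Key "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" `
    -ValueName ProxyEnable -Type DWord -Value 1
Set-GPRegistryValue -Name "No Internet Access for Domain Admin Accounts" `
    -Key "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" `
    -ValueName ProxyServer -Type String -Value "127.0.0.1:80"

Keep in mind that this is a per-user setting an admin can trivially undo, which is exactly why the auditing and alerting discussed below still matters.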

So, what could possibly go wrong?  … let’s count the ways!

1/ Remember, these are admin folks.  So they’ve got the rights, and they also get impatient with extra work.  So, the natural inclination is to exempt themselves – maybe create a new group with admin rights, and move their account into that group.  That gives them admin login to workstations, and also gives them internet from the jump hosts.

2/ You’ll also see at least some of those folks “connect” that admin account to their Exchange mailbox.  That gives them email from their admin account.

Voila!  Now they are logged in all the time as admin, and checking email as admin.  So now, when that user receives a Word or Excel file (or whatever) with a macro in it, then opens the file and runs the macro, that macro is running as (in this case) Domain Admin.  Which of course means that (yes, in real life) the customer’s Domain Controller got ransomwared (along with all of their other servers, actually).

3/ Again with the extra work theme – the admin in question was using the “jump host” as they should be, but needed an attachment from an email that they got during a support call.  The right way to do this is to open the mail from their non-privileged account on their non-privileged workstation, collect that file, map a drive and copy it over.  Or if it’s text, just copy/paste that info into their admin “window” that’s running on the jump host.   (Same procedure if it’s a downloaded file from a vendor support site or whatever)

What happened instead?  They fired up a browser and browsed to their OWA server from that jump host!  This means that again, they’re checking email as admin.  Worse yet, it’s on the actual admin host, where *all* of the admin accounts are, so Mimikatz can now collect *all* of the admin account credentials!

How can we protect against these?  For IT admins, often your only protection is audit and logging.  You should be tracking the creation of new groups, and any changes to admin accounts (group membership for starters).  You can do this with audit policies in AD, but you’ll need a decent central logging and alerting process so that the right folks get told when your admins “color outside of the lines”.  This seems like real “mom and apple pie” advice, but it’s more complicated than it sounds.  I took SEC 555 (https://www.sans.org/course/siem-with-tactical-analytics/type/asc/ ) when I attended SANSFIRE this year; it was a real eye-opener for me as to how to assemble all the pieces to accomplish this sort of thing.
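As a starting point, group membership changes are easy to pull from the domain controllers’ Security logs.  A minimal sketch (the DC name is an example; in real life these events should be forwarded to your central logging / SIEM rather than queried ad hoc):

# Pull yesterday's "member added to a security-enabled group" events:
# 4728 = global group, 4732 = domain local group, 4756 = universal group.
Get-WinEvent -ComputerName DC01 -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4728, 4732, 4756
    StartTime = (Get-Date).AddDays(-1)
} | Select-Object TimeCreated, Id, Message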

On the email thing, blocking OWA from the admin stations is likely a good idea.  You can do this using host-based firewalls, or you can properly segment your network – isolate your user stations from your servers for starters, and start on a set of firewall permissions between user stations and servers, and from servers to user stations.  Not every user needs tcp/1433 to your SQL servers, not every user needs tcp/445 to every server, and your Accounting group, receptionist and visitors don’t need login access to your vCenter server.
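For the OWA-from-the-jump-host case specifically, even a host-based rule on the jump host is a quick win.  A minimal sketch using the Windows firewall, where the rule name and the Exchange server address are examples only:

# Block outbound HTTPS from the admin jump host to the OWA / Exchange server.
New-NetFirewallRule -DisplayName "Block OWA from admin jump host" `
    -Direction Outbound -Action Block -Protocol TCP -RemotePort 443 `
    -RemoteAddress 10.10.10.25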

Wait, we just dragged some new controls into the mix with this discussion – it’s funny how when you try to control a problem, often the list of controls that you think you need is not the complete list:

4.8

Configure systems to issue a log entry and alert when an account is added to or removed from any group assigned administrative privileges.

6.5

Ensure that appropriate logs are being aggregated to a central log management system for analysis and review.

6.6

Deploy Security Information and Event Management (SIEM) or log analytic tools for log correlation and analysis

9.1

Associate active ports, services, and protocols to the hardware assets in the asset inventory.

9.2

Ensure that only network ports, protocols, and services listening on a system with validated business needs are running on each system.

9.5

Place application firewalls in front of any critical servers to verify and validate the traffic going to the server. Any unauthorized traffic should be blocked and logged.

 

Also

12.3

Deny communications with known malicious or unused Internet IP addresses and limit access only to trusted and necessary IP address ranges at each of the organization’s network boundaries.

12.4

Deny communication over unauthorized TCP or UDP ports or application traffic to ensure that only authorized protocols are allowed to cross the network boundary in or out of the network at each of the organization’s network boundaries.

 

How about non-admin people in your organization?  

Say you have a good spam filter, and you’re blocking office macros on inbound emails.  Or, more typically, you’re permitting macros inbound from a few partner organizations (where IT always loses the battle), and the business has “accepted the risk” that macro-based malware from those partner orgs is not blocked.  But in either case, an office document with a macro from some “random source email address” is not getting in.

You’ll always have targeted folks for things like this – in many organizations, that will be the CFO or people in accounting (who can transfer dollars), or in healthcare it will be the head of nursing (who has access to all health records) or the folks in charge of dispensing pharmaceuticals.  In any organization, it’s also recognized that the CEO and Sr Management (all the folks whose names are posted on your website or are easily found with Google or LinkedIn) generally have the rights to ignore or bypass policies, so they’ll be targeted as well.  In addition, you’ll have those users who are always after the “cracker jack prize” in the email – once they start, they’ll click as many times as necessary to get there (see Xavier’s story yesterday https://isc.sans.edu/forums/diary/May+People+Be+Considered+as+IOC/25166/ ).  Word does get around – if you have folks who always open attachments and click links, your adversary likely has them on a list – you might call it “market research” or “knowing your demographic”.

Anyway, you’ll have a targeted person, and the attacker will have some in-band (i.e. email) or out-of-band (phone, Facebook Messenger, LinkedIn or whatever) access.  At some point, the attacker will realize that their malware isn’t getting into the organization via email.  So, what will they suggest?  “Let me send it to your personal email” will often be the first suggestion.  Your target will then pick the malware up via webmail.  The other out-of-band methods will often be the second choice – LinkedIn, Facebook Messenger, or other social media “email replacement” services.

How can this be prevented (at least partially)?  If you’ve got a content filter on your firewall or proxy server, you can usually block “Web Mail” as a category.  Similarly, you can usually differentiate between an app (for instance Facebook) and the messenger within that app (Facebook Messenger), and have different block/allow rules for each.  You’ll always have some folks that need things like the LinkedIn “InMail” application, HR in particular.  You really are stuck with GPOs and AV in that case.  If your organization does have partner organizations that use macros, HR likely does not need those.  Using GPOs, either block HR people from running macros entirely, or only permit them to run internally signed macros (see the sketch below).
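Here is a sketch of how that macro restriction could be pushed, assuming a GPO that is security-filtered to only apply to the HR group (the GPO name is an example, and the “16.0” key assumes Office 2016/2019/365; repeat for Excel, PowerPoint and friends):

# "VBA Macro Notification Settings" pushed as a registry value for Word.
# Value 3 = disable all macros except digitally signed ones, 4 = disable all macros without notification.
Import-Module GroupPolicy
Set-GPRegistryValue -Name "HR - Restrict Office Macros" `
    -Key "HKCU\Software\Policies\Microsoft\Office\16.0\Word\Security" `
    -ValueName VBAWarnings -Type DWord -Value 3

Note that “digitally signed” on its own is not quite “internally signed”: to get there you also need to manage the Trusted Publishers store so that only your internal code-signing certificate is trusted.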

Which brings 2 more controls into the mix:

7.4

Enforce network-based URL filters that limit a system’s ability to connect to websites not approved by the organization. This filtering shall be enforced for each of the organization’s systems, whether they are physically at an organization’s facilities or not.

7.5

Subscribe to URL-categorization services to ensure that they are up-to-date with the most recent website category definitions available. Uncategorized sites shall be blocked by default.

 

In addition, within your SIEM you likely want to elevate the priority of any alerts from folks that have access to sensitive information, have elevated admin rights, or are known “clicky-clicky” folks.  You might call this “CVSS for people”, and you wouldn’t be far off the mark on that.   Alerts might include suspect email attachments, script or macro activity, failed logins, unusual IP addresses (on VPN connections or VDI for instance).   We obviously care about these alerts for all users, but we should care a bit more if that user is an IT Admin, is a Sr. Manager, or “clicks that link” every Tuesday (again, Xavier has some great pointers on this here https://isc.sans.edu/forums/diary/May+People+Be+Considered+as+IOC/25166/ )
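As a trivial illustration of the idea (in a real deployment this would be a watchlist or lookup table inside the SIEM itself, and the file names and columns here are made up):

# Bump the severity of any alert whose user appears on the high-risk list.
$highRisk = Get-Content .\high_risk_users.txt
Import-Csv .\alerts.csv | ForEach-Object {
    if ($highRisk -contains $_.Username) { $_.Severity = [int]$_.Severity + 2 }
    $_
} | Export-Csv .\alerts_reprioritized.csv -NoTypeInformation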

This adds a few more controls; the first two we’ve already added (but they’re worth bringing up twice!), and the third is new.  No SIEM is “set it and forget it” – in this case you are raising the priority of events involving specific users, but “adjusting” your SIEM is an ongoing process.  There are always new attacks, new ways to detect attacks, or new methods of combining multiple logs to create a better view of a problem.

6.5

Ensure that appropriate logs are being aggregated to a central log management system for analysis and review.

6.6

Deploy Security Information and Event Management (SIEM) or log analytic tools for log correlation and analysis

6.8

On a regular basis, tune your SIEM system to better identify actionable events and decrease event noise.

 

Defense of your environment is a continual thing, and it’s constantly evolving.  Do you have any stories that you can share that are related?  Have I missed an obvious control in the situations I’ve discussed?  Please, use our comment form and share!

===============
Rob VandenBrink
rob coherentsecurity.com
www.coherentsecurity.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.


May People Be Considered as IOC?, (Wed, Jul 24th)

That’s a tricky question! May we manage a list of people like regular IOCs? An IOC (Indicator of Compromise) is a piece of information, usually technical, that helps to detect malicious (or at least suspicious) activities. Classic types of IOCs are IP addresses, domains, hashes, filenames, registry keys, processes, mutexes, …

There are plenty of lists of usernames that must be monitored. Here is a short list of typical accounts that are used to perform (remote) administrative tasks or that belong to default users:

root 
admin
test
guest
info
adm
mysql
user
administrator
oracle
ftp
pi
puppet
ansible
ec2-user
vagrant
azureuser

If the activity of these kinds of users (admin users, system accounts) must be logged and controlled, let’s now think about “real” people. It can be very interesting to keep an eye on the activity of “high profiles” inside the organization, like the board of directors (who are always juicy targets). But don’t forget the opposite, the “low profiles”: non-technical people who perform dangerous tasks daily. Can we consider them as “IOCs”?

Let me tell you a story: a few years ago, when ransomware waves were not yet so common, a customer faced a security incident where several Windows shares were encrypted after a user opened a malicious attachment. The person was not to blame: (s)he was responsible for the “[email protected]” mailbox, and a daily task was to process all emails sent to this address. The chances of seeing this person’s profile compromised are much higher than for a regular user. Can we track him/her as an IOC? What if we suddenly detect a lot of logon attempts with his/her credentials?
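As an illustration, counting yesterday’s failed logons (Windows event ID 4625) per account, and keeping only the “watched” people, could be as simple as the sketch below. The watchlist file is made up, and in practice you would run this kind of query against your central log store rather than a single host:

# Count failed logons per account for the last 24 hours and keep only watched users.
$watch = Get-Content .\watched_accounts.txt
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625; StartTime = (Get-Date).AddDays(-1) } |
    ForEach-Object {
        $xml = [xml]$_.ToXml()
        ($xml.Event.EventData.Data | Where-Object { $_.Name -eq 'TargetUserName' }).'#text'
    } |
    Where-Object { $watch -contains $_ } |
    Group-Object | Sort-Object Count -Descending |
    Select-Object Count, Name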

How will less security-aware people behave when they are facing a threat? The same applies to specific departments, like human resources, where one of the daily tasks is to open and read candidates’ resumes, delivered as Office or PDF documents.  May we assign a “score” to people? In many organizations, some people try to bypass security controls and behave in an unsafe way (I call them “mad-clickers”). The more they appear in security reports, the higher their score gets, and the more they should attract our attention.

Of course, all of this must be performed with respect for privacy. The goal is NOT to spy on them!

Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.


Verifying SSL/TLS configuration (part 1), (Tue, Jul 23rd)

One of the very important steps when performing penetration tests is to verify the configuration of any SSL/TLS services. Specifically, the goal of this step is to check which protocols and ciphers are supported. This might sound easier than it is – so this will be a series of diaries where I will try to explain how to verify the configuration, but also how to assess the risk.

We cover this in the SEC542 course (Web App Penetration Testing and Ethical Hacking), but verifying configuration is not limited only to web applications – we should do it in both external and internal penetration tests.

While one could create a small script around the openssl command to test every supported protocol and cipher, it is much easier to use some of the following tools.
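For illustration, such a wrapper might look roughly like the sketch below (a sketch only, assuming the openssl binary is on the PATH, and using the same example target as the scans further down). A zero exit code from s_client means the handshake, and therefore the cipher, was accepted:

# Try every OpenSSL cipher name against the target over TLS 1.2.
$target  = "mail.infigo.hr:443"
$ciphers = (& openssl ciphers 'ALL:eNULL') -split ':'
foreach ($cipher in $ciphers) {
    $null = '' | & openssl s_client -connect $target -tls1_2 -cipher $cipher 2>$null
    if ($LASTEXITCODE -eq 0) { "Accepted: $cipher" }
}

This gets tedious quickly (and OpenSSL’s cipher names don’t always match the IANA names you will see elsewhere), so let’s dive into the dedicated tools: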

Nmap:

While some people might use nmap only for port scanning (Trinity used it for OS fingerprinting as well – good job Trinity), in the last couple of years I have become a huge fan of Nmap’s scripts. NSE (Nmap Scripting Engine) allows creation of scripts in the embedded Lua programming language. There are ~600 NSE scripts distributed with nmap today, and some of them should be your everyday tools – especially for checking SSL/TLS configuration. Here they are:

ssl-enum-ciphers.nse – this is the most powerful NSE script for SSL/TLS checking. It will verify which protocols are supported (SSL3 and above) and then enumerate all cipher algorithms. Besides this, ssl-enum-ciphers will also give you a rating (score) on how good or bad a cipher is. Do not take this for granted, although generally the scores are good.

Below we can see ssl-enum-ciphers executed against my mail server. The output is pretty self-explanatory and you can see that I tried to lock the configuration a lot by supporting only TLSv1.2 and using strong ciphers, with preferred PFS (Perfect Forward Secrecy) algorithms. All of these will be explained in future diaries.

$ nmap -Pn -sT -p 443 mail.infigo.hr --script ssl-enum-ciphers
Starting Nmap 7.70SVN ( https://nmap.org ) at 2019-07-23 09:29 UTC
Nmap scan report for mail.infigo.hr (213.202.103.52)
Host is up (0.087s latency).
rDNS record for 213.202.103.52: zion.infigo.hr

PORT    STATE SERVICE
443/tcp open  https
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) – A
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) – A
|       TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (dh 2048) – A
|       TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 2048) – A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (secp256r1) – A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) – A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (secp256r1) – A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) – A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (dh 2048) – A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) – A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (dh 2048) – A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) – A
|       TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) – A
|       TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) – A
|       TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) – A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) – A
|       TLS_RSA_WITH_AES_256_CBC_SHA256 (rsa 2048) – A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) – A
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) – A
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) – A
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) – A
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) – A
|     compressors:
|       NULL
|     cipher preference: server
|_  least strength: A

sslv2.nse – while the ssl-enum-ciphers script is amazing, it does not support SSLv2. Keep this in mind – because when you scan a network with it, it will not report usage of SSLv2 so you might miss it. This is why we have the sslv2 script, so make sure you run it everywhere.

ssl-cert.nse – this is a nice small script that will retrieve information about the X.509 certificate used on the target web site – perfect for easy collection and examination of this information on a wide range of targets. We can see it running here:

$ nmap -Pn -sT -p 443 mail.infigo.hr --script ssl-cert
Starting Nmap 7.70SVN ( https://nmap.org ) at 2019-07-23 09:32 UTC
Nmap scan report for mail.infigo.hr (213.202.103.52)
Host is up (0.039s latency).
rDNS record for 213.202.103.52: zion.infigo.hr

PORT    STATE SERVICE
443/tcp open  https
| ssl-cert: Subject: commonName=mail.infigo.hr
| Subject Alternative Name: DNS:mail.infigo.hr, DNS:zion.infigo.hr
| Issuer: commonName=Let’s Encrypt Authority X3/organizationName=Let’s Encrypt/countryName=US
| Public Key type: rsa
| Public Key bits: 2048
| Signature Algorithm: sha256WithRSAEncryption
| Not valid before: 2019-05-27T07:47:53
| Not valid after:  2019-08-25T07:47:53
| MD5:   f31d bf25 01d3 c303 e762 e1e3 bb86 9c5b
|_SHA-1: 3835 540e 9d14 2537 7c7f b9be 0abb ba37 161a cc4e

ssl-dh-params.nse – finally, this is the last script I use when checking SSL/TLS configuration. This script will analyse Diffie-Hellman MODP group parameters and report if weak DH parameters are used (e.g. CVE-2015-4000 and other weaknesses).
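Since these are all regular NSE scripts, they can be combined into a single scan of the same target, for example:

$ nmap -Pn -sT -p 443 mail.infigo.hr --script ssl-enum-ciphers,sslv2,ssl-cert,ssl-dh-params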

Finally, just keep one thing in mind – whenever you use nmap, make sure that you have the latest version installed (I actually pull the one from Subversion and compile it manually). Many Linux distributions come with ancient nmap versions, so not only might you be missing features and bug fixes, but the scores for ciphers might be wrong as well.

Testssl.sh:

Another great resource for SSL/TLS configuration testing is the testssl.sh script. This script comes with its own, statically precompiled version of openssl which supports every possible protocol and cipher. This way the author ensures that nothing will be missed (there was a huge hiccup with Kali Linux and SSLv2 support back in 2013 when they removed it and a lot of penetration testers got burned).

My suggestion is to clone testssl.sh from GitHub to make sure you have the latest version – no compilation is needed. Just run it against the target web site and you will get nicely coloured results.
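Getting and running it looks roughly like this (the GitHub location shown is the repository at the time of writing):

$ git clone --depth 1 https://github.com/drwetter/testssl.sh.git
$ cd testssl.sh
$ ./testssl.sh mail.infigo.hr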

Qualys SSL Labs:

Finally, the last great online resource is the Qualys SSL Labs SSL Server Test done by @ivanristic. SSL Server Test will perform a bunch of different tests and generate a very, very detailed report on SSL/TLS configuration, together with the final score – and everyone will try to achieve A+ of course (we all like green, A+ grades).

SSL Server Test is available at https://www.ssllabs.com/ssltest/ so, obviously, you can use it only for publicly available web sites. Additionally, don’t forget to select the “Do not show the results on the boards” box so your test will not be shown on the web site.

 

As we can see, there is a plethora of good tools we can use. But besides running tools, we need to be able to interpret the results as well. For example, if 3DES is used on your Microsoft Terminal Server (RDP), is that a bad thing or not? Can we live with it?
I will cover all these cases in future diaries – let us know if there is something specific you would like us to write about.

And I almost forgot – if you want to hear more about this, there is a nice (I hope!) SANS Webcast this Thursday, July 25th, 2019 at 10:30 AM EDT (14:30:00 UTC), “A BEAST and a POODLE celebrating SWEET32” where I will be demoing some of SSL/TLS vulnerabilities.

An even longer presentation will be at the BalCCon2k19 conference this year so let me know if you are around! 
 


Bojan
@bojanz
INFIGO IS

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.


Analyzing Compressed PowerShell Scripts, (Mon, Jul 22nd)

Malicious document 1d5794e6b276db06f6f70d5fae6d718e contains VBA macros, as can be verified with oledump.py:

Stream 15 is a “Stream O” and that is something we talked about before: these forms are often used to hide the payload.

No surprise here, it contains a BASE64 string:

And that is often indicative of PowerShell scripts.

Decoding the BASE64 string with base64dump.py here:

It’s UNICODE (UTF16), a characteristic of encoded PowerShell arguments:

This yields a PowerShell script, with more BASE64.

That BASE64 string is not a PowerShell script:

It’s compressed data: DeflateStream. DeflateStream tells us that this is Zlib (deflate) compression without a header (raw). My tool translate.py can be used to decompress this:

This gives us the final PowerShell script, a downloader:

translate.py is a tool to transform (translate) byte streams. By default, it operates byte by byte, with a given Python expression used to translate a single byte.

Option -f directs the tool to operate on the complete byte stream, and the given Python expression is a function that expects a byte stream. ZlibD and ZlibRawD are built-in functions to inflate Zlib-compressed data, with header and without header (raw) respectively.
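If you don’t have translate.py at hand, the same inflate step can also be reproduced directly in PowerShell, which is fitting since that is exactly what the malicious script itself does when it runs. A minimal sketch (the input file name is illustrative):

# Decode the inner BASE64 string and inflate the raw-deflate (DeflateStream) data.
$b64     = Get-Content .\inner_base64.txt -Raw
$bytes   = [System.Convert]::FromBase64String($b64)
$ms      = New-Object System.IO.MemoryStream(,$bytes)
$inflate = New-Object System.IO.Compression.DeflateStream($ms, [System.IO.Compression.CompressionMode]::Decompress)
$reader  = New-Object System.IO.StreamReader($inflate, [System.Text.Encoding]::ASCII)
$reader.ReadToEnd()   # the final downloader script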

 

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com DidierStevensLabs.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.


Malicious RTF Analysis CVE-2017-11882 by a Reader, (Sun, Jul 21st)

This is a maldoc analysis submitted by reader Ahmed Elshaer.

 

I have come across a malicious RTF file that can be found here. I started investigating it, as usual, using Didier’s tool rtfdump.

As you can see, it has a lot of nested strings, and only one of the strings had an object control word, although it was not marked as an object (‘O’).

By selecting object 157 and hex-decoding the output, we can see that this object is calling the Equation Editor (EQNEDT32.EXE), which is another Microsoft component.


We can see all the strings in that object.


I ran this RTF file in a sandbox to see what this object can do, and found that it uses a stack buffer overflow vulnerability in the Equation Editor, referenced as CVE-2017-11882. This vulnerability allows it to run code; in this case it downloads a VBScript which contains a base64-encoded PowerShell command. This code was downloaded from Pastebin.

This encoded PowerShell command can be decoded using base64dump.

And we can see that the payload gets downloaded as svchost.exe, which will then be executed, as you can see in the VBScript.

References:
https://www.hybrid-analysis.com/sample/d74e7786c5c733e88eaccfbc265e155538a504f530e3ce2639c138277418c716?environmentId=120
Exploit Poc: https://github.com/embedi/CVE-2017-11882
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-11882
https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2017-11882
Mitigation: https://www.kb.cert.org/vuls/id/421280/

 

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com DidierStevensLabs.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.
