Behaviors

A Behavior corresponds to the specific purpose behind a particular snippet of code, as executed by a malware instance. Examples include keylogging, detecting a virtual machine, and installing a backdoor.

Behaviors are marked as follows:

  • Behaviors defined in ATT&CK that could be expanded with malware-related content are denoted with an &.
  • Behaviors that might be potential ATT&CK techniques are denoted with a +.
  • Behaviors that are only detected via malware analysis have no markings.
Name | Description | Associated Attributes | Associated Capabilities/Subcapabilities | Aliases
& Bootkit ATT&CK considers this a technique under the Persistence tactic. Might it also fall under the Defense Evasion tactic?

The 'execute before/external to kernel/hypervisor' Behavior executes some or all of the malware instance's code before or external to the system's kernel or hypervisor (e.g., through the BIOS).

Defense Evasion
& Component Firmware Cisco routers can have their firmware images modified in order to establish a malicious, persistent presence on a network. This is accomplished by using default or acquired credentials to gain access to a router and install a backdoor.

The implant resides within a modified Cisco IOS image and, when loaded, maintains its persistence in the environment, even after a system reboot. However, any further modules loaded by the attacker will only exist in the router’s volatile memory and will not be available for use after reboot. Known Affected Hardware:

  • Cisco 1841 router
  • Cisco 2811 router
  • Cisco 3825 router
Persistence
& Disabling Security Tools Malware examples include:
  • Disable Kernel Patch Protection - bypasses or disables kernel patch protection mechanisms such as Windows' PatchGuard, enabling the malware instance to operate at the same level as the operating system kernel and kernel mode drivers (KMD).
  • Disable System File Overwrite Protection - disables system file overwrite protection mechanisms such as Windows file protection, thereby enabling system files to be modified or replaced.
Defense Evasion
& Encrypt Files for Ransom (mobile) The 'encrypt files' Behavior encrypts one or more files on the system on which the malware instance is executing, to make them unavailable for use by the users of the system.

This is currently an ATT&CK Mobile technique. For malware, it should be extended to ATT&CK Enterprise.

A better name might be "Encrypt Files."

Common: applicable platform, Common: encryption algorithm, Common: technique Effects
& Generate Fraudulent Advertising Revenue (mobile) The ATT&CK Generate Fraudulent Advertising Revenue technique could be expanded to apply more broadly to malware.

The 'click fraud' Behavior simulates legitimate user clicks on website advertisements for the purpose of revenue generation.

Effects
& Hooking Alter API behavior, for example by inserting JMP/JCC instruction(s) at the start of API code or by redirecting a benign API to a critical one. Sometimes hooking is used to prevent memory dumps.

Examples:

  • File Handling: File handling APIs are modified to make embedded files usable like external ones.
Anti-Behavioral Analysis
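
The JMP-patch form of inline hooking described above can be illustrated with a short sketch. This is a minimal example, assuming a 32-bit Windows build (so a rel32 jump can reach the detour); the choice of user32!MessageBoxA as the target and the behavior of the detour are purely illustrative.

  #include <windows.h>
  #include <stdio.h>
  #include <string.h>
  #include <stdint.h>

  /* Detour that receives calls redirected from the hooked API. */
  static int WINAPI MyMessageBoxA(HWND hwnd, LPCSTR text, LPCSTR caption, UINT type)
  {
      printf("MessageBoxA intercepted: %s\n", text ? text : "(null)");
      return IDOK;                                   /* swallow the call */
  }

  int main(void)
  {
      /* Resolve the API to patch (user32!MessageBoxA, chosen only for the demo). */
      HMODULE user32 = LoadLibraryA("user32.dll");
      uint8_t *target = (uint8_t *)GetProcAddress(user32, "MessageBoxA");

      /* Build a 5-byte relative JMP (0xE9 + rel32) to the detour. */
      uint8_t patch[5] = { 0xE9 };
      int32_t rel = (int32_t)((uint8_t *)MyMessageBoxA - (target + 5));
      memcpy(patch + 1, &rel, sizeof(rel));

      /* Make the API prologue writable, install the hook, restore protection. */
      DWORD old;
      VirtualProtect(target, sizeof(patch), PAGE_EXECUTE_READWRITE, &old);
      memcpy(target, patch, sizeof(patch));
      VirtualProtect(target, sizeof(patch), old, &old);

      /* Any later call to the API now lands in the detour instead. */
      MessageBoxA(NULL, "hello", "hooked?", MB_OK);
      return 0;
  }
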
& Lock User Out of Device (mobile) The 'denial of service' Behavior causes the local machine on which the malware instance is executing and/or a remote network resource to be unavailable.

For malware, this might be extended to ATT&CK Enterprise (only defined for Mobile ATT&CK now).

A better name might be "Denial of Service."

Effects DOS, DDOS
& Obfuscated Files or Information This corresponds to the EMA 'code obfuscation' behavior.

This may be covered by the ATT&CK Obfuscated Files or Information technique. However, the details below are more extensive than those given in ATT&CK.

The code in the malware instance is obfuscated to hinder static analysis.

Examples:

  • Code Encryption: Encryption of the code in the malware instance in order to hinder static analysis.
  • Entry Point Obfuscation: Obfuscation of the entry point of the malware executable, in order to hinder static analysis.
  • Instruction Overlap: Jumping to a target inside another instruction (past its first byte) confuses some disassemblers.
  • Import Address Table Obfuscation: Obfuscation of the import address table of the malware instance, in order to hinder static analysis.
  • Symbolic Obfuscation: The removing or renaming of textual information in the code of the malware instance, in order to hinder static analysis.
  • Interleaving Code: A form of obfuscation that splits code into sections that are rearranged and connected by unconditional jumps, in order to hinder static analysis and disassembly.
  • Merged Code Sections: All sections are merged, leaving just one entry in the section table. This only slightly affects readability, so it may not even be worth mitigating, but it may affect detection signatures written to be section dependent.
  • Import Compression: Imports are stored and loaded with a more compact import table format. Each DLL needed by the executable is mentioned in the IAT, but only one function from each/most is imported; the rest are imported "manually" via GetProcAddress calls.
  • Stack Strings: Strings are built and decrypted on the stack at each use, then discarded (to avoid obvious references).
  • Imports By Hash: Instead of calling GetProcAddress, a DLL is loaded and each export name is parsed until it matches a specific hash. This example is also known as GET_APIS_WITH_CRC. Also often seen used by shellcode, as it reduces the size of each import from a human-readable string to a sequence of four bytes.
  • Code Insertion: Code insertion can impede disassembly.
    • Dead Code Insertion: Inclusion of "dead" code in the malware instance with no real functionality, with the intent of impeding disassembly.
    • Fake Code Insertion: Addition of fake code similar to known packers or known-good software to fool identification. Can confuse some automated unpackers.
    • Jump Insertion: Insertion of jumps to make analysis visually harder.
    • Junk Code Insertion: Insertion of dummy code between relevant opcodes. Can make signature writing more complex.
    • Thunk Code Insertion: Variation on jump insertion; also used by some compilers for user-generated functions (ex: Visual Studio /INCREMENTAL).
Anti-Static Analysis
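
The 'Stack Strings' example above can be illustrated with a short sketch. This is a minimal illustration only: the string "kernel32.dll" is assembled from XOR-encoded bytes in a stack buffer at the point of use and wiped afterwards, so the plaintext never appears contiguously in the binary; real samples usually derive the key at run time so an optimizing compiler cannot pre-compute the plaintext.

  #include <windows.h>

  /* Build "kernel32.dll" on the stack from XOR-encoded bytes so the plaintext
     never appears contiguously in the binary's data sections. */
  static HMODULE get_kernel32(void)
  {
      char name[13];
      name[0]  = 'k' ^ 0x5A; name[1]  = 'e' ^ 0x5A; name[2]  = 'r' ^ 0x5A;
      name[3]  = 'n' ^ 0x5A; name[4]  = 'e' ^ 0x5A; name[5]  = 'l' ^ 0x5A;
      name[6]  = '3' ^ 0x5A; name[7]  = '2' ^ 0x5A; name[8]  = '.' ^ 0x5A;
      name[9]  = 'd' ^ 0x5A; name[10] = 'l' ^ 0x5A; name[11] = 'l' ^ 0x5A;
      name[12] = 0x5A;                          /* encoded NUL terminator */

      for (int i = 0; i < 13; i++)              /* decode in place, on the stack */
          name[i] ^= 0x5A;

      HMODULE h = GetModuleHandleA(name);
      SecureZeroMemory(name, sizeof(name));     /* discard the decoded string */
      return h;
  }
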
& Rootkit A Rootkit may have the following capabilities:
  • Hide Kernel Modules - hides the usage of any kernel modules by the malware instance.
  • Hide Services - hides any system services that the malware instance creates or injects itself into.
  • Hide Threads - hides one or more threads that belong to the malware instance.
  • Hide Userspace Libraries - hides the usage of userspace libraries by the malware instance.
  • Prevent API Unhooking - prevents the API hooks installed by the malware instance from being removed.
  • Prevent Registry Access - prevents access to the Windows registry, including to the entire registry and/or to particular registry keys/values.
  • Prevent Registry Deletion - prevent Windows registry keys and/or values associated with the malware instance from being deleted from a system.
  • Prevent File Access - prevents access to the file system, including to specific files and/or directories associated with the malware instance.
  • Prevent File Deletion - prevents files and/or directories associated with the malware instance from being deleted from a system.
  • Prevent Memory Access - prevents access to system memory where the malware instance may be storing code or data.
  • Prevent Native API Hooking - prevents other software from hooking native system APIs.
Defense Evasion
& Software Packing See the ATT&CK Software Packing technique. Anti-Static Analysis
& Wipe Device Data (mobile) The EMA 'erase data' Behavior destroys data stored on a disk or in memory by erasure.

This should be extended to ATT&CK Enterprise - is currently only defined in ATT&CK Mobile.

A better name might be "Erase Data."

Effects wipe data
+ analysis tool discovery Malware can employ various means to detect whether analysis tools are present or running on the system on which it is executing.

Methods:

  • Process detection: malware can scan for the process name associated with common analysis tools.
    • Debuggers: OllyDBG / ImmunityDebugger / WinDbg / IDA Pro
    • SysInternals Suite Tools (Process Explorer / Process Monitor / Regmon / Filemon, TCPView, Autoruns)
    • PCAP Utilities: Wireshark / Dumpcap
    • Process Utilities: ProcessHacker / SysAnalyzer / HookExplorer / SysInspector
    • PE Utilities: ImportREC / PETools / LordPE
    • Sandboxes: Joe Sandbox, etc.
Discovery
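
A minimal sketch of the process-name scan described above (non-Unicode Windows build assumed); the short tool list is drawn from the examples and would be much longer in a real sample.

  #include <windows.h>
  #include <tlhelp32.h>
  #include <string.h>

  /* Names drawn from the tool list above; real samples carry far longer lists. */
  static const char *tools[] = { "ollydbg.exe", "windbg.exe", "wireshark.exe",
                                 "procmon.exe", "procexp.exe", "idaq.exe" };

  int analysis_tool_running(void)
  {
      HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
      if (snap == INVALID_HANDLE_VALUE)
          return 0;

      PROCESSENTRY32 pe;
      pe.dwSize = sizeof(pe);

      int found = 0;
      if (Process32First(snap, &pe)) {
          do {
              for (size_t i = 0; i < sizeof(tools) / sizeof(tools[0]); i++)
                  if (_stricmp(pe.szExeFile, tools[i]) == 0)
                      found = 1;            /* a known analysis tool is running */
          } while (!found && Process32Next(snap, &pe));
      }
      CloseHandle(snap);
      return found;
  }
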
+ compromise data integrity Manipulates data stored on the file system of the system on which the malware instance is executing in order to compromise its integrity. Effects
+ destroy hardware The 'destroy hardware' Behavior physically destroys a piece of hardware, e.g., by causing it to overheat. Effects
+ hijack system resources Uses system resources for purposes other than those intended. As a result, the system may not be available for its intended uses.

Examples:

  • Password Cracking: Consume system resources for the purpose of password cracking.
  • Mine for Cryptocurrency: Consume system resources for cryptocurrency (e.g., Bitcoin, Litecoin, etc.) mining.
Effects
+ install secondary program The 'install secondary program' Behavior installs another, different malware instance on the system on which the malware instance is executing.

Examples:

  • Backdoor: install a server that accepts incoming connections.
  • Secondary module: install a secondary module, typically related to the malware instance itself, on the same system on which the malware instance is executing.
  • Dropper: execute a dropped executable file.
Execution, Persistence
+ malicious network driver Malicious network drivers can be installed on several machines on a network via an exploited server with high uptime. Once the drivers are installed on the host machines, they can re-infect the server if it is restarted, can infect other machines on the network, and can redirect traffic on the network as they please.

These drivers can tunnel traffic from the outside into the network, allowing the attackers to access remote desktop sessions or to connect to servers inside the domain by using previously acquired credentials. Using the credentials, they can re-deploy the entire platform following a massive shutdown or power loss. The malware persists on machines connected to the network even after reboot. Once a machine connects to the server, the malware repopulates itself on the server, which in turn infects the remaining machines on the network.

The malware exploits a zero-day kernel-level vulnerability in Microsoft's Win32k TrueType font parsing.

Persistence
+ manipulate network traffic The 'manipulate network traffic' Behavior intercepts and manipulates network traffic going to or originating from the system on which the malware instance is executing. Effects
+ private api exploitation (Mobile) On iOS, private APIs can be abused to implement malicious functionality.

Such malware can download, install and launch arbitrary iOS apps, replace existing apps with those it downloads, hijack other apps’ execution to display advertisements, change Safari’s default search engine, bookmarks and opened pages, and upload device information to a C2 server.

The malware uses tricks to hide its icons from iOS’s SpringBoard, which prevents the user from finding and deleting it. The components also use the same names and logos as system apps to trick iOS power users.

Persistence
+ send email The 'send email message' Behavior sends an email message from the system on which the malware instance is executing to one or more recipients, most commonly for the purpose of spamming. Common: network protocol Execution
+ SMTP connection discovery The 'smtp connection discovery' Behavior tests whether an outgoing SMTP connection can be made from the system on which the malware instance is executing to some SMTP server, by sending a test SMTP transaction. Discovery
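
A minimal Winsock sketch of this check; the host name passed in would be whatever mail server the sample targets, and this version only reads the server's 220 greeting rather than completing a full SMTP exchange.

  #include <winsock2.h>
  #include <ws2tcpip.h>
  #include <string.h>
  #pragma comment(lib, "ws2_32.lib")

  /* Returns 1 if an SMTP banner ("220 ...") is received from host on port 25. */
  int smtp_reachable(const char *host)
  {
      WSADATA wsa;
      WSAStartup(MAKEWORD(2, 2), &wsa);

      struct addrinfo hints = {0}, *res = NULL;
      hints.ai_family   = AF_INET;
      hints.ai_socktype = SOCK_STREAM;
      if (getaddrinfo(host, "25", &hints, &res) != 0) {
          WSACleanup();
          return 0;
      }

      SOCKET s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
      char banner[128] = {0};
      int ok = 0;
      if (s != INVALID_SOCKET &&
          connect(s, res->ai_addr, (int)res->ai_addrlen) == 0 &&
          recv(s, banner, sizeof(banner) - 1, 0) > 0 &&
          strncmp(banner, "220", 3) == 0)
          ok = 1;                               /* outbound SMTP is possible */

      if (s != INVALID_SOCKET) closesocket(s);
      freeaddrinfo(res);
      WSACleanup();
      return ok;
  }
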
+ surreptitious application installation In OS X, application directories and files can be installed unbeknownst to the user. Web browsers and search engines can also be hijacked and set to specific defaults. These files persist until the user manually deletes them.

One example is Genieo, a byproduct of the DYLD_PRINT_TO_FILE vulnerability. Genieo can gain access to the Mac Keychain and persists until removed by the user. When the program is executed, it creates the following files:

  • /Applications/Genieo.app
  • /Applications/Uninstall Genieo.app
  • ~/Library/Application Support/com.genieoinnovation.Installer/Completer.app
  • ~/Library/LaunchAgents/com.genieo.completer.download.plist
  • ~/Library/LaunchAgents/com.genieo.completer.update.plist
  • ~/Library/Safari/Extensions/Omnibar.safariextz
  • ~/Library/Application Support/Genieo/
  • /tmp/GenieoInstall.dmg
  • /tmp/tmpinstallmc.dmg

Next, the program changes the default search engine and homepage to the domain search.genieo.com, and then installs the browser extension ~/Library/Safari/Extensions/Omnibar.safariextz. When the user enters a search query, it appears to be carried out using Google Search, but the results come from genieo.com. Genieo (8/31/2015): https://blog.malwarebytes.org/mac/2015/08/genieo-installer-tricks-keychain/ https://support.norton.com/sp/en/us/home/current/solutions/v103415336_EndUserProfile_en_us https://www.symantec.com/security_response/writeup.jsp?docid=2014-071013-3137-99

Persistence
+ windows shutdown event In Windows, an application can register for the shutdown event triggered by Winlogon, giving a malicious DLL a chance to execute every time the machine shuts down.

When the machine is shut down, the malware is loaded into memory; it then downloads the primary malware and reinfects the machine. The malware will also lie dormant during incident reporting processes. The relevant registry key is HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify. If the subkey doesn't exist, you are in good shape. If a subkey with any name exists and it has a "shutdown" value, then the DLL named in the "DLLName" value will be launched during the shutdown process.

Persistence
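
A minimal sketch of how an analyst (or the malware itself) might check the registry location described above, on Windows versions that still honor Winlogon notification packages; the output strings are illustrative only.

  #include <windows.h>
  #include <stdio.h>
  #pragma comment(lib, "advapi32.lib")

  /* Enumerate Winlogon\Notify subkeys and report any that register a
     shutdown handler DLL, per the description above. */
  int main(void)
  {
      HKEY notify;
      if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
              "Software\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon\\Notify",
              0, KEY_READ, &notify) != ERROR_SUCCESS) {
          puts("No Notify key present.");
          return 0;
      }

      char name[256];
      for (DWORD i = 0; ; i++) {
          DWORD len = sizeof(name);
          if (RegEnumKeyExA(notify, i, name, &len, NULL, NULL, NULL, NULL) != ERROR_SUCCESS)
              break;

          HKEY sub;
          if (RegOpenKeyExA(notify, name, 0, KEY_READ, &sub) == ERROR_SUCCESS) {
              char dll[MAX_PATH];
              DWORD dlen = sizeof(dll);
              int has_shutdown = RegQueryValueExA(sub, "Shutdown", NULL, NULL,
                                                  NULL, NULL) == ERROR_SUCCESS;
              int has_dll = RegQueryValueExA(sub, "DLLName", NULL, NULL,
                                             (BYTE *)dll, &dlen) == ERROR_SUCCESS;
              if (has_shutdown && has_dll)
                  printf("Shutdown notification DLL registered by %s: %s\n", name, dll);
              RegCloseKey(sub);
          }
      }
      RegCloseKey(notify);
      return 0;
  }
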
c2 communication Client/server communication indicates C2 behavior.

Examples:

  • check for payload
  • send system information
  • send heartbeat
  • request email template
  • request email address list
Command and Control
call graph prevention The 'call graph prevention' Behavior defeats accurate call graph generation during disassembly of the malware instance. Anti-Static Analysis
capture keyboard input The 'capture keyboard input' Behavior captures data from the keyboard attached to the system on which the malware instance is running. keylogging, keystroke logging, keyboard capturing
code optimization Code optimized in various ways can be harder to statically analyze.

Examples:

  • Call Optimization: Turn relative operands of jumps and calls into absolute (better compression). May confuse some basic block detection algorithms.
  • Minification: Per Wikipedia, minification is 'the process of removing all unnecessary characters from source code without changing its functionality.' A simple example is when all the unnecessary whitespace and comments are removed. This is distinguished from compression in that it neither adds to nor changes the code seen by the interpreter. Minification is often used for malware written in interpreted languages, such as JavaScript, PHP, or Python. Legitimate code that is transmitted many times a second, such as JavaScript on websites, often uses minification simply to reduce the number of bytes transmitted.
Anti-Static Analysis
debugger detect & evade The 'debugger evasion' Behavior detects whether the malware instance is being executed inside of a debugger and, if so, executes a benign path.

Variations:

  • Debugger Artifacts: Detects a debugger by its artifact (window title, device driver, exports, etc.).
  • API Call: IsDebuggerPresent: The kernel32!IsDebuggerPresent API call checks the Process Environment Block to see if the calling process is being debugged. This is one of the most basic and common ways of detecting debugging.
  • Monitoring Thread: Spawn a monitoring thread to detect tampering, breakpoints, etc.
  • Process Environment Block (PEB): The Process Environment Block (PEB) is a Windows data structure associated with each process that contains several fields, one of which is "BeingDebugged". Testing the value of this field in the PEB of a particular process can indicate whether the process is being debugged; this is equivalent to using the kernel32!IsDebuggerPresent API call.
  • Timing/Date Checks: Calling GetSystemTime or an equivalent and only executing code if the current date/hour/minute/second passes some check. Often this is for running only after, or only until, a specific date.
  • Timing/Delay Checks: Comparing the time between two points to detect "unusual" execution, such as the (relatively) massive delays introduced by debugging.
  • Timing/Uptime Check: Comparing a single GetTickCount value against some threshold to see whether the system was started at least some amount of time ago.
  • Stack Canary: Similar to the anti-exploitation method of the same name, malware may try to detect mucking with values on the stack.
  • TIB Aware: Accessing thread information (fs:[20h]) for debug detection or process obfuscation.
  • RtlAdjustPrivilege: Calling RtlAdjustPrivilege to either prevent a debugger from attaching or to detect if a debugger is attached.
  • Interrupt 2D: If int 0x2d is mishandled by the debugger, it can cause a single-byte instruction to be inadvertently skipped, which can be detected by the malware.
Common: applicable platform, Common: technique Anti-Behavioral Analysis
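
A minimal sketch of the 'API Call: IsDebuggerPresent' and 'Process Environment Block (PEB)' checks listed above, assuming the MSVC compiler (for the __readfsdword/__readgsqword intrinsics).

  #include <windows.h>
  #include <intrin.h>
  #include <stdio.h>

  int main(void)
  {
      /* Documented API: reads PEB->BeingDebugged for the calling process. */
      BOOL api_says = IsDebuggerPresent();

      /* Equivalent manual read of the same PEB field. */
      BYTE being_debugged;
  #ifdef _M_X64
      being_debugged = *(BYTE *)(__readgsqword(0x60) + 2);  /* PEB pointer at gs:[0x60] */
  #else
      being_debugged = *(BYTE *)(__readfsdword(0x30) + 2);  /* PEB pointer at fs:[0x30] */
  #endif

      if (api_says || being_debugged)
          puts("Debugger detected - a sample would take its benign path here.");
      else
          puts("No debugger detected.");
      return 0;
  }
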
debugger obstruction Makes a debugging session difficult (BlockInput, slowdowns, etc.). This is a general category of anti-analysis and may refer to any number of techniques.

Examples:

  • Check Host Fingerprint: Compares a previously computed host fingerprint (e.g., based on installed applications) to the current system's to determine if the malware instance is still executing on the same system. If not, execution will stop, making debugging or sandbox analysis more difficult.
  • Malloc Use: Instead of unpacking into a pre-defined section/segment (ex: .text) of the binary, uses malloc() / VirtualAlloc() to create a new segment. This makes keeping track of memory locations across different runs more difficult, as there is no guarantee that malloc/VirtualAlloc will assign the same address range each time.
  • Pipeline Misdirection: Taking advantage of pipelining in modern processors to misdirect debugging, emulation, or static analysis tools. An unpacker can assume a certain number of opcodes will be cached and then proceed to overwrite them in memory, causing a debugger/emulator/analyzer to follow different code than is normally executed.
  • Loop Escapes: Using SEH or other methods to break out of a loop instead of a conditional jump.
  • Exception Misdirection: Using exception handling (SEH) to cause flow of program to non-obvious paths.
  • Break Point Clearing: Intentionally clearing software or hardware breakpoints.
  • Parallel Threads: Use several parallel threads to make analysis harder.
  • TIB Aware: Accessing thread information (fs:[20h]) for debug detection or process obfuscation.
  • Modify PE Header: Any part of the header is changed or erased.
  • Stolen API Code: A variation of “byte stealing” where the first few instructions or bytes of an API are executed in user code, allowing the IAT to point into the middle of an API function. This confuses IAT rebuilders such as ImpRec and Scylla and may bypass breakpoints.
  • Return Obfuscation: Overwrite the RET address on the stack or the code at the RET address. Variation seen that writes to the start-up code or main module that called the malware's WinMain or DllMain.
  • Section Misalignment: Some analysis tools cannot handle binaries with misaligned sections.
  • Static Linking: Copy locally the whole content of API code.
  • Inlining: Variation of static linking where the full API code is inserted everywhere it would have been called.
  • Page Guard: Blocks of code are encrypted individually, and decrypted temporarily only upon execution. AKA guard pages. One variant uses self-debugging to accomplish.
  • Hook Interrupt: modification of interrupt vector or descriptor tables
  • Hook File System: do something when particular file/dir is accessed; often through hooking certain API calls such as CreateFileA and CreateFileW.
  • Byte Stealing: Move or copy the first bytes / instructions of the original code elsewhere. AKA stolen bytes or code splicing. For example, a packer may incorporate the first few instructions of the original EntryPoint (EP) into its unpacking stub before the tail transition in order to confuse automated unpackers and novice analysts. This can make it harder for rebuilding and may bypass breakpoints if set prematurely.
  • Get Base Indirectly: CALL to a POP; finds base of code or data, often the packed version of the code; also used often in obfuscated/packed shellcode.
  • Obfuscate Library Use: LoadLibrary API calls or direct access of kernel32 via PEB (fs[0]) pointers, used to rebuild IAT or just obfuscate library use.
  • Relocate API Code: relocate API code in separate buffer (calls don’t lead to imported DLLs).
  • Import Obfuscation: Add obfuscation between imports calls and APIs (obfuscation, virtualization, stealing, etc.).
Command and Control: port number Anti-Behavioral Analysis
debugger prevention The 'debugger prevention' Behavior prevents the execution of the malware instance in a debugger.

Examples:

  • Deposited Keys: Parts of the code and/or data are encrypted or otherwise rely on data external to the file itself. For example, malware that contains code that is encrypted with a key that is downloaded from a server; malware that only runs if certain other software is installed on the system; or malware that reads certain attributes of the system (BIOS version string, hostname, etc.) and then encrypts portions of its code or data using those attributes as input, thus preventing itself from being able to run on a different system (e.g., sandbox, emulator, etc.).
  • Secure Triggers: Code and/or data is encrypted until the underlying system satisfies a preselected condition unknown to the analyst (this is a form of Deposited Keys).
  • Token Check: Presence check to allow the program to run (ex: dongle, CD/DVD, key, file, network, etc.).
  • Fingerprinting: Token is specific to a hardware element (ex: disk, OS, CPU, NIC MAC, etc.)
  • Data Integrity Check: Check the contents of data sections are unmodified with checksum or hash. Depending on implementation, may detect file, in-memory, or both. Typically this does not affect analysis.
  • Code Integrity Check: Check that the unpacking code is unmodified. Variation exists where unpacking code is part of the “key” used to unpack, therefore any Software Breakpoints during debugging causes unpacking to completely fail or result in malformed unpacked code.
  • Illusion: Makes the analyst think something incorrect happened. This is a general category of anti-analysis and may refer to any number of techniques.
  • Self-Debugging: Debug itself to prevent another debugger to be attached.
  • Interrupt Hooking: Block interrupt 1 and/or 3 to prevent debuggers from working.
  • Interrupt Use: The unpacking code relies on use of int 1 or int 3, or it uses the interrupt vector table as part of the decryption “key”.
  • Self-Unmapping: Calls UnmapViewOfFile() on its own module.
  • RtlAdjustPrivilege: Calling RtlAdjustPrivilege to either prevent a debugger from attaching or to detect if a debugger is attached.
  • Change SizeOfImage: Changing this value during run time can prevent some debuggers from attaching. Also confuses some unpackers and dumpers.
  • Pre-Debug: Prevents a debugger from attaching to the process, or from breaking, until after the code of interest has been executed.
  • Tampering: Erase or corrupt specific file parts to prevent rebuilding (header, packer stub, etc.).
  • Nanomites: int3 with code replacement table; debugs itself.
  • Encode File: Encode a file on disk, such as an implant’s config file.
  • Demo Mode: Inclusion of a demo binary/mode that is executed when the token is absent or insufficiently privileged.
Anti-Behavioral Analysis
delete SMS warning messages The 'delete SMS warning messages' Behavior captures the message body of incoming SMS messages and aborts displaying messages that meet certain criteria. Defense Evasion
domain name generation The 'domain name generation' Behavior generates the domain name of the command and control server to which it connects. The algorithm can be complicated in more advanced bots; understanding the details so that names can be predicted can be useful in mitigation and response. Command and Control
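
A toy sketch of a date-seeded generator of this kind; the constants, label lengths, and .com suffix are invented for illustration, and real families use far more involved algorithms.

  #include <stdio.h>
  #include <time.h>

  /* Generate the index-th candidate domain for today's date. */
  static void generate_domain(char *out, size_t n, int index)
  {
      time_t now = time(NULL);
      struct tm *t = gmtime(&now);
      unsigned seed = (unsigned)((t->tm_year + 1900) * 10000 +
                                 (t->tm_mon + 1) * 100 + t->tm_mday + index);

      size_t len = 8 + seed % 5;                   /* 8-12 character label */
      size_t i;
      for (i = 0; i < len && i + 5 < n; i++) {
          seed = seed * 1103515245u + 12345u;      /* simple LCG step */
          out[i] = (char)('a' + (seed >> 16) % 26);
      }
      snprintf(out + i, n - i, ".com");
  }

  int main(void)
  {
      char domain[32];
      for (int k = 0; k < 3; k++) {                /* today's first three candidates */
          generate_domain(domain, sizeof(domain), k);
          puts(domain);
      }
      return 0;
  }
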
emulator detect & evade Detects whether the malware instance is being executed in an emulator; if so, a benign execution path is followed.

Examples:

  • Failed Network Connections: Some emulated systems fail to handle some network communications; such failures will indicate the emulated environment.
  • Check for Emulator-related Files: Checks if particular files (e.g., QEMU files) exist.
Anti-Behavioral Analysis anti-virtualization
emulator prevention Defeats or prevents the execution of the malware instance in an emulator.

Examples:

  • Different Opcode Sets: Use different opcode sets (ex: FPU, MMX, SSE) to block emulators.
  • Undocumented Opcodes: Use rare or undocumented opcodes to block non-exhaustive emulators.
  • Unusual/Undocumented API Calls: Call unusual APIs to block non-exhaustive emulators (particularly anti-virus).
  • Extra Loops/Time Locks: Add extra loops to make time-constraint emulators give up.
  • Deposited Keys: Parts of the code and/or data are encrypted or otherwise rely on data external to the file itself. For example, malware that contains code that is encrypted with a key that is downloaded from a server; malware that only runs if certain other software is installed on the system; or malware that reads certain attributes of the system (BIOS version string, hostname, etc.) and then encrypts portions of its code or data using those attributes as input, thus preventing itself from being able to run on a different system (e.g., sandbox, emulator, etc.).
  • Secure Triggers: Code and/or data is encrypted until the underlying system satisfies a preselected condition unknown to the analyst (this is a form of Deposited Keys).
  • Malloc Use: Instead of unpacking into a pre-defined section/segment (ex: .text) of the binary, uses malloc()/VirtualAlloc() to create a new segment. This makes keeping track of memory locations across different runs more difficult, as there is no guarantee that malloc/VirtualAlloc will assign the same address range each time.
  • Pipeline Misdirection: Taking advantage of pipelining in modern processors to misdirect debugging, emulation, or static analysis tools. An unpacker can assume a certain number of opcodes will be cached and then proceed to overwrite them in memory, causing a debugger/emulator/analyzer to follow different code than is normally executed.
  • Loop Escapes: Using SEH or other methods to break out of a loop instead of a conditional jump.
Anti-Behavioral Analysis
exploitation for analysis evasion Detect or crash a tool via a specific backdoor. This is a general category of anti-analysis and may refer to any number of techniques. Anti-Behavioral Analysis
flow-oriented disassembler prevention The 'flow-oriented disassembler prevention' Behavior defeats disassembly of the malware instance in a flow-oriented (recursive traversal) disassembler. Some examples also apply to linear disassemblers.

Examples:

  • Flow Opcodes: flow opcodes are removed and emulated (or decrypted) by the packer during execution.
  • Conditional Misdirection: Conditional jumps are sometimes used to confuse disassembly engines, resulting in the wrong instruction boundaries and thus wrong mnemonic and operands; easy to “see” when jmp/jcc to a label+# (e.g., JNE loc_401345fe+2).
  • Value Dependent Jumps: Explicit use of computed values for control flow, often many times in the same basic block or function.
Anti-Static Analysis
illusionary issues Makes the analyst think something incorrect happened. This is a general behavior that might be implemented in a variety of ways. Anti-Behavioral Analysis
linear disassembler prevention The 'linear disassembler prevention' Behavior prevents the disassembly of the malware instance in a linear disassembler. Some examples also apply to flow-oriented disassemblers.

Examples:

  • Conditional Misdirection: Conditional jumps are sometimes used to confuse disassembly engines, resulting in the wrong instruction boundaries and thus wrong mnemonic and operands; easy to “see” when jmp/jcc to a label+# (ex: JNE loc_401345fe+2).
  • Argument Obfuscation: simple number or string arguments to API calls are calculated at runtime, making static analysis more difficult.
  • Variable Recomposition: Variables, often strings, are broken into multiple parts and stored out of order, in different memory ranges, or both. They must then be recomposed before use.
  • Value Dependent Jumps: Explicit use of computed values for control flow, often many times in the same basic block or function.
Anti-Static Analysis
memory dump obstruction Hinders retrieval and/or discovery of the contents of the physical memory of the system on which the malware instance is executing.

Examples:

  • Encrypting the executing malware instance's code in memory
  • Erasing the PE header from memory
  • Hiding arbitrary segments of virtual memory belonging to the malware instance
  • Changing the SizeOfImage value at run time
  • Tampering: Erase or corrupt specific file parts to prevent rebuilding (header, packer stub, etc.).
  • Page Guard: Blocks of code are encrypted individually, and decrypted temporarily only upon execution. AKA guard pages. One variant uses self-debugging to accomplish.
  • On-the-Fly APIs: API address is resolved before each use to prevent complete dumping.
  • Byte Stealing: Move or copy the first bytes / instructions of the original code elsewhere. AKA stolen bytes or code splicing. For example, a packer may incorporate the first few instructions of the original EntryPoint (EP) into its unpacking stub before the tail transition in order to confuse automated unpackers and novice analysts. This can make it harder for rebuilding and may bypass breakpoints if set prematurely.
  • Import Obfuscation: Add obfuscation between imports calls and APIs (obfuscation, virtualization, stealing, etc.).
  • Feed Misinformation: Report inaccurate data when the contents of the physical memory of the system on which the malware instance is executing is retrieved.
  • Hooking: Alter API behavior, for example by inserting JMP/JCC instruction(s) at start of API code or to redirect benign API to a critical one. Sometimes used for anti-dump. (Hooking is an ATT&CK technique.)
Anti-Behavioral Analysis anti-dumping
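
A minimal sketch of the 'erasing the PE header from memory' example above: the first page of the running module's image is zeroed, so a raw memory dump no longer contains a parseable header (doing this can break later functionality, such as exception dispatch or resource lookups, that needs the header).

  #include <windows.h>
  #include <string.h>

  /* Wipe the first page of this module's image so a memory dump lacks a
     usable DOS/NT header. */
  void erase_own_pe_header(void)
  {
      BYTE *base = (BYTE *)GetModuleHandle(NULL);   /* image base of this EXE */
      DWORD old;
      if (VirtualProtect(base, 0x1000, PAGE_READWRITE, &old)) {
          memset(base, 0, 0x1000);
          VirtualProtect(base, 0x1000, old, &old);
      }
  }
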
polymorphic code Packer stub generates polymorphic code on the fly (same file executes differently). This is a general category of defense evasion and may refer to any number of techniques. This capability is typically only found through analysis of related samples.

Examples

  • Packer Stub: A packer stub can generate polymorphic code on the fly, creating a file with the same functionality but different execution, making it difficult to detect.
Defense Evasion
prevent concurrent execution Indicates that the malware checks to see if it is already running on a system, in order to prevent multiple instances of the malware running concurrently. Execution
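
One common implementation uses a named mutex, as in the minimal sketch below; the mutex name is arbitrary here, and each family uses its own.

  #include <windows.h>
  #include <stdio.h>

  int main(void)
  {
      /* The name is illustrative; real samples use a family-specific string. */
      HANDLE m = CreateMutexA(NULL, FALSE, "Global\\ExampleSingleInstanceMutex");

      if (m != NULL && GetLastError() == ERROR_ALREADY_EXISTS) {
          puts("Another instance is already running - exiting.");
          CloseHandle(m);
          return 0;
      }

      puts("First instance - continuing.");
      /* ... normal execution; the mutex is held until the process exits ... */
      CloseHandle(m);
      return 0;
  }
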
resource compression Compresses resources, avoiding critical ones, such as main icon, manifest, etc. Anti-Static Analysis
sandbox detect & evade Detects whether the malware instance is being executed inside of an instrumented sandbox environment (e.g., Cuckoo Sandbox). If so, conditional execution selects a benign execution path.

Examples:

  • Injected DLL Testing: Testing for the name of a particular DLL that is known to be injected by a sandbox for API hooking is a common way of detecting sandbox environments. This can be achieved through the kernel32!GetModuleHandle API call and other means.
  • Product Key/ID Testing: Checking for a particular product key/ID associated with a sandbox environment (commonly associated with the Windows host OS used in the environment) can be used to detect whether a malware instance is being executed in a particular sandbox. This can be achieved through several means, including testing for the Key/ID in the Windows registry.
  • Screen Resolution Testing: Sandboxes aren't used in the same manner as a typical user environment, so the screen resolution usually stays at the minimum of 800x600 or lower. No one actually works on such a small screen, so malware can check the screen resolution to determine whether it is running on a user machine or in a sandbox.
  • Human User Check: Detects whether there is any "user" activity on the machine, such as the movement of the mouse cursor, non-default wallpaper, or recently opened Office files. If there is no human activity, the machine is suspected to be a virtualized machine and/or sandbox.
  • Check Machine Name: Some sandboxes use a name like Sandbox, Maltest, Malware, malsand, ClonePC.
  • Monitoring Thread: Spawn a monitoring thread to detect tampering, breakpoints, etc.
  • Timing/Delay Checks: Comparing time between two points to detect "unusual" execution, such as the (relative) massive delays introduced by sandbox debugging.
  • Timing/Uptime Check: Comparing a single GetTickCount value against some threshold to see whether the system was started at least some amount of time ago.
  • Hooked Function: To prevent certain actions by the malware, such as deleting a file, Cuckoo hooks some functions and performs a different action instead of the original one; for example, DeleteFileW may be hooked to prevent file deletion. http://unprotect.tdgt.org/index.php/Sandbox_Evasion
  • Find Agent: Cuckoo uses a Python agent to interact with the guest. By listing processes and finding python.exe or pythonw.exe, or by looking for agent.py on the system, malware can detect Cuckoo. http://unprotect.tdgt.org/index.php/Sandbox_Evasion
Anti-Behavioral Analysis: targeted sandbox, Common: applicable platform, Common: technique Anti-Behavioral Analysis
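
A minimal sketch combining the 'Injected DLL Testing' and 'Screen Resolution Testing' examples above; sbiedll.dll (Sandboxie) is used here as the commonly cited hooking DLL, and the 800x600 threshold follows the description.

  #include <windows.h>
  #include <stdio.h>

  int main(void)
  {
      int suspicious = 0;

      /* Injected DLL testing: look for a hooking DLL that only appears
         inside a sandbox (Sandboxie's sbiedll.dll is the usual example). */
      if (GetModuleHandleA("sbiedll.dll") != NULL)
          suspicious = 1;

      /* Screen resolution testing: desktops at or below 800x600 are rare
         on real user machines. */
      if (GetSystemMetrics(SM_CXSCREEN) <= 800 && GetSystemMetrics(SM_CYSCREEN) <= 600)
          suspicious = 1;

      puts(suspicious ? "Sandbox suspected - a sample would take a benign path here."
                      : "No sandbox indicators found.");
      return 0;
  }
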
sandbox obstruction The 'sandbox obstruction' Behavior impedes sandbox analysis.

Examples:

  • Delay Execution - Stalling code is typically executed before any malicious behavior. The attacker’s aim is to delay the execution of the malicious activity long enough so that an automated dynamic analysis system fails to extract the interesting malicious behavior.
    • Timing/Date Checks: Calling GetSystemTime or an equivalent and only executing code if the current date/hour/minute/second passes some check. Often this is for running only after, or only until, a specific date.
  • Overloading - overloads a sandbox (e.g., by generating a flood of meaningless behavioral data).
  • Check Host Fingerprint: Compares a previously computed host fingerprint (e.g., based on installed applications) to the current system's to determine if the malware instance is still executing on the same system. If not, execution will stop, making debugging or sandbox analysis more difficult.
  • GetVolumeInformation: This Windows API call is used to get the volume serial number of a system drive. Malware compares it to a previously harvested (targeted) value and only executes maliciously if they match.
Anti-Behavioral Analysis: targeted sandbox Anti-Behavioral Analysis
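
A minimal sketch of the GetVolumeInformation check above; the expected serial number is a placeholder for the value a sample would have harvested from its intended victim.

  #include <windows.h>
  #include <stdio.h>

  int main(void)
  {
      /* Placeholder: a real sample embeds the serial harvested from its target. */
      const DWORD expected_serial = 0xDEADBEEF;

      DWORD serial = 0;
      if (!GetVolumeInformationA("C:\\", NULL, 0, &serial, NULL, NULL, NULL, 0))
          return 0;

      if (serial != expected_serial) {
          puts("Volume serial does not match the targeted host - exiting.");
          return 0;                     /* refuse to run anywhere else */
      }

      puts("Targeted host confirmed - continuing.");
      return 0;
  }
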
sandbox prevention Defeats or prevents the execution of the malware instance in a sandbox environment.

Examples:

  • Deposited Keys: Parts of the code and/or data are encrypted or otherwise rely on data external to the file itself. For example, malware that contains code that is encrypted with a key that is downloaded from a server; malware that only runs if certain other software is installed on the system; or malware that reads certain attributes of the system (BIOS version string, hostname, etc.) and then encrypts portions of its code or data using those attributes as input, thus preventing itself from being able to run on a different system (e.g., sandbox, emulator, etc.).
  • Secure Triggers: Code and/or data is encrypted until the underlying system satisfies a preselected condition unknown to the analyst (this is a form of Deposited Keys).
  • Hook Interrupt: modification of interrupt vector or descriptor tables
  • Hook File System: do something when particular file/dir is accessed; often through hooking certain API calls such as CreateFileA and CreateFileW.
  • Demo Mode: Inclusion of a demo binary/mode that is executed when the token is absent or insufficiently privileged.
  • Drop Code: Original file is written to disk then executed. May confuse some sandboxes, especially if the dropped executable must be provided specific arguments and the original dropper is not associated with the drop file(s).
Anti-Behavioral Analysis
secondary CPU execution The 'execute non-main cpu code' Behavior executes some or all of the code of the malware instance on a secondary, non-CPU processor (e.g., a GPU). Defense Evasion
suicide exit The 'suicide exit' Behavior terminates the execution of the malware instance based on some trigger condition or value. Secondary Operation: trigger type Execution
virtual machine detect & evade Detects whether the malware instance is being executed in a virtual machine (VM).

Possible methods:

  • Guest Process Testing: Virtual machines offer guest additions that can be installed to add functionality such as clipboard sharing. Detecting the process, via its name or other methods, responsible for these tasks is a technique employed by malware for detecting whether it is being executed in a virtual machine.
  • HTML5 Performance Object: In three browser families, it is possible to extract the frequency of the Windows performance counter using standard HTML and JavaScript. This value can then be used to detect whether the code is being executed in a virtual machine, by detecting two specific frequencies commonly used in virtual but not physical machines.
  • Named System Object Checks: Virtual machines often include specific named system objects by default, such as Windows device drivers, which can be detected by testing for specific strings, whether found in the Windows registry or other places.
  • Machine Specs: Different aspects of the hardware are inspected to determine whether the machine has standard, modern characteristics. Machines with substandard specs indicate a virtual environment:
    • Memory size: most modern machines have at least 4 GB of memory.
    • Drive size: most modern machines have at least 80 GB disks.
    • USB drive: checks whether there is a potential USB drive; if not a virtual environment is suspected.
    • Printer: checks whether there is a potential connected printer or default Windows printers; if not a virtual environment is suspected.
    • CPU: checks number of processors; single CPU machines are suspect.
  • Human User Check: Detects whether there is any "user" activity on the machine, such as the movement of the mouse cursor, non-default wallpaper, or recently opened Office files. If there is no human activity, the machine is suspected to be a virtualized machine and/or sandbox.
  • x86 Instructions: The execution of certain x86 instructions will result in different values when executed inside of a VM instead of on bare metal. Accordingly, these can be used to detect the execution of the malware in a VM. http://unprotect.tdgt.org/index.php/Sandbox_Evasion
    • SIDT (red pill): Red Pill is an anti-VM technique that executes the SIDT instruction to grab the value of the IDTR register. The virtual machine monitor must relocate the guest's IDTR to avoid conflict with the host's IDTR. Since the virtual machine monitor is not notified when the virtual machine runs the SIDT instruction, the IDTR for the virtual machine is returned.
    • SGDT/SLDT (no pill): The No Pill technique relies on the fact that the LDT structure is assigned to a processor, not an operating system. The LDT location on a host machine will be zero and on a virtual machine will be non-zero.
    • SMSW
    • STR
    • CPUID
    • IN
    • RDTSC
    • VMCPUID
    • VPCEXT
  • Check CPU Location: When an operating system is virtualized, the CPU is relocated, which allows malware to detect the virtual environment. http://unprotect.tdgt.org/index.php/Sandbox_Evasion
  • Check for Memory Artifacts: VMware leaves many artifacts in memory. Some are critical processor structures which, because they are moved or changed on a virtual machine, leave recognizable footprints. Malware can search physical memory for the string "VMware", a method commonly used to detect memory artifacts. http://unprotect.tdgt.org/index.php/Sandbox_Evasion
  • Mac Address Detection: VMware uses specific virtual MAC address prefixes that can be detected by malware; its addresses usually start with 00:0C:29, 00:1C:14, 00:50:56, or 00:05:69. VirtualBox also uses a specific prefix: its addresses usually start with 08:00:27. http://unprotect.tdgt.org/index.php/Sandbox_Evasion
  • Registry Keys: The VMware installation directory C:\Program Files\VMware\VMware Tools may contain artifacts, as can the registry; a search for "VMware" in the registry might find keys that include information about the virtual hard drive, adapters, and virtual mouse. The VirtualBox Guest Additions likewise leave many artifacts in the registry, which a search for "VBOX" might find. QEMU also registers artifacts in the registry: malware can detect a QEMU installation by looking at the key HARDWARE\DEVICEMAP\Scsi\Scsi Port 0\Scsi Bus 0\Target Id 0\Logical Unit Id 0, whose Identifier value contains QEMU, or at HARDWARE\Description\System, whose SystemBiosVersion value contains QEMU. http://unprotect.tdgt.org/index.php/Sandbox_Evasion
  • Check Processes: VMware Tools uses processes such as VMwareService.exe and VMwareTray.exe to perform actions in the virtual environment; malware can list running processes and search for the VMware string. Processes related to VirtualBox can likewise be detected by querying the process list. http://unprotect.tdgt.org/index.php/Sandbox_Evasion
  • Check Files: VMware creates some files on the system; malware can check various folders to find these artifacts. VirtualBox likewise creates files, such as VBoxMouse.sys, that malware can check for. http://unprotect.tdgt.org/index.php/Sandbox_Evasion
  • Check Running Services: VMwareService.exe runs the VMware Tools Service as a child of services.exe. It can be identified by listing services. http://unprotect.tdgt.org/index.php/Sandbox_Evasion
  • Query I/O Communication Port: VMware uses virtual I/O ports for communication between the virtual machine and the host operating system to support functionality like copy and paste between the two systems. The port can be queried and compared with a magic number VMXh to identify the use of VMware. http://unprotect.tdgt.org/index.php/Sandbox_Evasion
Anti-Behavioral Analysis: targeted VM, Common: applicable platform, Common: technique Anti-Behavioral Analysis
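
Of the x86 instructions listed above, CPUID is the simplest to demonstrate. The sketch below, which assumes the MSVC __cpuid intrinsic, checks the hypervisor-present bit (leaf 1, ECX bit 31) and, if set, reads the hypervisor vendor string from leaf 0x40000000.

  #include <intrin.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      int regs[4];

      /* CPUID leaf 1: bit 31 of ECX is the "hypervisor present" flag. */
      __cpuid(regs, 1);
      if (((unsigned)regs[2] >> 31) & 1) {
          /* Leaf 0x40000000 returns a vendor signature such as "VMwareVMware"
             or "KVMKVMKVM" in EBX/ECX/EDX. */
          char vendor[13] = {0};
          __cpuid(regs, 0x40000000);
          memcpy(vendor + 0, &regs[1], 4);
          memcpy(vendor + 4, &regs[2], 4);
          memcpy(vendor + 8, &regs[3], 4);
          printf("Hypervisor detected: %s\n", vendor);
      } else {
          puts("No hypervisor bit set - likely bare metal.");
      }
      return 0;
  }
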
virtualized code Virtualizes original code; dumped code won’t work without VM code.

Could also include virtualized [part of] packer stub code.

This is a general category of anti-analysis and may refer to any number of techniques.

Anti-Behavioral Analysis, Anti-Static Analysis