DNS has come under fire lately as nation-states and hacker groups have targeted it as a way to steal credentials from unsuspecting victims.

According to TechCrunch, the hackers first compromised the intended target via spearphishing. They then used known exploits to target servers and routers and move laterally within the network. In the process, the hackers obtained passwords that let them update DNS records, pointing the domain name away from the IP address of the target's server to a server the hackers controlled. This allowed them to gather usernames and passwords using man-in-the-middle attacks. The hackers also used fake certificates to make the malicious server appear to be the real web server.

There are very few ways to prevent this attack.  A few areas to focus on include:

  1. Implementing two-factor authentication for all DNS record changes.  While a very smart move in theory, this is difficult in practice because not all registrars support it.
  2. Registry Lock – this works like a credit lock on your financial records, preventing unauthorized, unwanted, or accidental changes to the domain name at the sponsoring registrar.  Unfortunately, not all top-level domains support Registry Lock.
  3. Deploying email security to intercept phishing messages and prevent a successful phishing campaign.
  4. Deploying host-based malware detection tools.

Stellar Cyber is committed to utilizing our Starlight Unified Security Analytics Platform to detect, alert, and respond to these types of behaviors.  Our pervasive data collection, coupled with advanced data handling and machine learning, gives us multiple points where we can detect these types of attacks across the Lockheed Martin cyber kill chain.  If the attack is missed in one stage of the kill chain, we will catch it in another.

  1. Successful spearphishing campaigns ultimately leave new binaries to be executed.  Stellar Cyber has built-in malware analysis that reassembles the binary in transit, evaluates it against known signatures, and ultimately detonates it in a sandbox for testing.  The results of that testing drive action should the test determine the binary is malicious in nature.
  2. If the binary passes the malware testing, our server sensors detect the installation and execution of anomalous binaries and alert on those activities.
  3. If the binary is not detected, the resulting command-and-control activities will be detected, alerted on, and potentially blocked.
  4. Binaries that issue commands to the OS are also detected as anomalous and trigger an alert.
  5. Domain-validated certificates – Because these certificates can be generated without human intervention, they can give the end user a false sense of security.  One example of a CA that issues domain-validated certificates is Let's Encrypt.  Our Starlight platform can detect domain-validated certificates and alert on them, as illustrated by the sketch after this list.
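
For illustration only – this is not Starlight's actual detection logic – here is a minimal Python sketch that flags a server certificate as domain-validated by checking for the CA/Browser Forum DV policy OID (2.23.140.1.2.1). The use of the third-party cryptography package and the target hostname are assumptions:

```python
# Hypothetical sketch: flag domain-validated (DV) server certificates.
# Assumes the third-party "cryptography" package is installed;
# 2.23.140.1.2.1 is the CA/Browser Forum policy OID marking DV issuance.
import ssl
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

DV_POLICY_OID = x509.ObjectIdentifier("2.23.140.1.2.1")

def is_domain_validated(host: str, port: int = 443) -> bool:
    """Fetch the server certificate and check for the DV policy OID."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    try:
        policies = cert.extensions.get_extension_for_oid(
            ExtensionOID.CERTIFICATE_POLICIES
        ).value
    except x509.ExtensionNotFound:
        return False
    return any(p.policy_identifier == DV_POLICY_OID for p in policies)

if __name__ == "__main__":
    # letsencrypt.org typically serves a Let's Encrypt DV certificate.
    print(is_domain_validated("letsencrypt.org"))
```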

Defense in depth is still very much alive (despite some discussions to the contrary).  Catching new attack methods depends on visibility and detections at all stages of the cybersecurity kill chain.  Stellar Cyber is uniquely positioned to help you quickly detect and protect against these types of attacks.

David W. Barton
Chief Information Security Officer
615-939-2861
www.stellarcyber.ai

Distributed Security Intelligence

Artificial intelligence is radically transforming the cybersecurity industry. To successfully use AI for security, the quality of the data is paramount. Security-related data must be collected from many different sources – network data from packets, server data from commands and processes, application data such as logs, and threat intelligence data from security researchers, among others. These disparate streams of information are fed into a centralized processor, where machine learning is conducted to detect security threats.

Data Challenges

A few challenges appear in the data collection part of the process.

  • Not enough data

In some cases, the amount of data is insufficient for machine learning to generate an accurate output. When this happens, there may be too many false positives or false negatives. In general, the greater the volume of data, the more accurate the result.

  • Too much data

The downside of having a high volume of data, however, is the increasing cost of the required computing power. There may be so much data that machine learning consumes too many resources and cannot be sustained. In these cases, it becomes impractical or cost-prohibitive to deploy the machine learning models inline.

  • Missing data

The data may be missing or incomplete. If pieces of the puzzle are missing, certain security events cannot be detected. We will elaborate on what this means in a later section.

  • Incorrect data

If the data is incorrect, even a theoretically perfect machine learning model will produce the wrong results: garbage in, garbage out.

Because the second and third challenges are less intuitive, we will focus on addressing these two.

We will also discuss why the architecture of a security intelligence system matters greatly in determining its scalability and reliability in deployment.

Centralized vs Distributed Security Intelligence

To design machine learning for cybersecurity, two architectures can be considered. The centralized architecture is quite common. In centralized machine learning, data feeds come from many sources while machine learning runs in one central place. The data feeds, which are logs or network flow records such as NetFlow or IPFIX, contain little intelligence themselves – they are merely transport vehicles to the central big data platform. Machine learning is then conducted by the central platform on the aggregate data.

With the Distributed Security Intelligence (DSI) architecture, security intelligence is skillfully applied at critical junctures throughout the system, starting from the data sources at the very beginning of the process. Though the DSI architecture similarly feeds these disparate data sources into a centralized big data platform for analysis, the application of intelligence at additional points reduces the amount of data that must be ingested by the big data platform. Like fog computing, this distinction enables the scalability and affordability that is highly sought by mid-to-large enterprises and MSSPs with multiple SME customers.

Use Cases

DSI demonstrates its superiority as an architecture for security intelligence in the following cases:

Problem Case 1: Raw Packet Data is not scalable

As previously demonstrated by IDS/IPS, using raw packets for detection imposes severe constraints on scalability. To mitigate this problem, most IDS/IPS are deployed in close proximity to, if not as part of, the perimeter firewall. Imagine attempting this on centralized servers in a data center or cloud – the packets must be duplicated and streamed across the network to the cluster of servers. While it may be possible, it places a heavy burden on the CPU of the source server, the network bandwidth, and the computing resources of the centralized servers. Running machine learning on raw packets is simply impractical. Furthermore, the security-relevant information density of each packet is very low, and packets are formatted for efficient transmission, not for analysis such as machine learning.

Problem Case 2: NetFlow/IPFIX misses critical data

Following the scalability problems of raw packets, it may seem prudent to compress the data and extract only the useful information. NetFlow and IPFIX are protocols that track network traffic flow information instead of individual packets. They dramatically reduce the volume of data, making machine learning feasible. However, though NetFlow/IPFIX are useful for network performance analysis, they provide little insight into application content. Security threat detection requires information such as DNS domain names, HTTP URLs, and database queries, among others.

Attempts have been made to augment IPFIX to carry application content such as the application name, but the results fall short due to the abundance of different applications and the complexity of each one.

The Solution: Superior Data with Application Content

Distributed intelligence represents a better way. Security-related information, such as DNS domain names and MySQL queries, should be extracted from popular applications by properly identifying the applications from raw packets. The extracted data can be enriched at collection time with flow information such as the session start time, the session duration, the total byte count in each direction of the session, and the packet transmission pattern, to name a few. This distributed model delivers a large data reduction compared to using raw packets, while also overcoming the limitations of standard protocols such as NetFlow/IPFIX. The density of useful information to aid threat detection is increased, while the volume of data is decreased. The sketch below contrasts a flow-only record with an application-enriched record.
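
As a purely illustrative sketch – the field names are assumptions, not Stellar Cyber's actual schema – compare what a flow-only record conveys with an application-enriched record:

```python
# Hypothetical record shapes; field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowRecord:
    """Roughly what NetFlow/IPFIX conveys: counters, no application content."""
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str
    bytes_sent: int
    bytes_received: int

@dataclass
class EnrichedRecord(FlowRecord):
    """Flow data enriched at collection time with application content."""
    app_name: str            # e.g. "dns" or "mysql", identified from raw packets
    session_start: float     # epoch seconds
    session_duration: float  # seconds
    dns_query: Optional[str] = None  # present for DNS sessions
    sql_query: Optional[str] = None  # present for MySQL sessions

# A flow record alone says only HOW MUCH moved; the enriched record
# also says WHAT was asked, which is what threat detection needs.
rec = EnrichedRecord(
    src_ip="10.0.0.5", dst_ip="8.8.8.8", dst_port=53, protocol="udp",
    bytes_sent=74, bytes_received=302,
    app_name="dns", session_start=1_700_000_000.0, session_duration=0.02,
    dns_query="suspicious-domain.example.com",
)
```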

Considering the potential diversity of applications and the complexity of each application, application identification can be very time consuming. Open-source tools such as Bro (now Zeek) can extract application content, but performance remains a challenge, and to achieve a given throughput it may seem necessary to use expensive dedicated hardware. Stellar Cyber's data sifter is a powerful, lightweight solution with built-in intelligence that can identify thousands of applications from just the first packet of a flow. Its intelligence reduces the required computing power and provides additional information that proves critical in detecting security events.
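
To make the idea concrete, here is a toy illustration – not Stellar Cyber's actual classifier, and the signatures are simplified assumptions – of identifying an application from only the first payload bytes of a flow:

```python
# Toy first-packet application identification; the signatures below are
# simplified assumptions, not a production ruleset.
def identify_app(first_payload: bytes, dst_port: int) -> str:
    """Guess the application from the first packet of a flow."""
    if first_payload.startswith((b"GET ", b"POST ", b"PUT ", b"HEAD ")):
        return "http"
    if first_payload[:2] == b"\x16\x03":  # TLS handshake record header
        return "tls"
    if dst_port == 53 and len(first_payload) > 12:
        return "dns"                      # DNS header is 12 bytes
    if first_payload.startswith(b"SSH-"):
        return "ssh"
    return "unknown"

print(identify_app(b"GET /index.html HTTP/1.1\r\n", 80))  # -> "http"
```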

Problem Case 3: Network Traffic alone misses critical data

Running machine learning on network traffic data can certainly detect some security events, but the results may not be quickly actionable. For example, it may be possible to identify a compromised server or container by its IP address. An improvement, however, would be to enrich the server’s IP information with its hostname, because IP addresses can change over time. A further improvement would be to pinpoint the command, process, or user on the server that generated the event, so that malicious processes can be stopped and compromised users can be cleared. To achieve these objectives, intelligent data acquisition and fusion must be conducted from other data sources, such as application logs, executed commands, and server processes.
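
As a minimal sketch of such data fusion – all names, keys, and structures here are hypothetical – a network-layer alert can be joined with host telemetry to pinpoint the responsible process:

```python
# Hypothetical fusion of a network alert with host telemetry.
# Keys and field names are illustrative assumptions.

# Host sensor snapshot: which process owns which local socket.
host_sockets = {
    ("10.0.0.5", 44321): {"hostname": "web-01", "pid": 4242,
                          "process": "curl", "user": "www-data"},
}

def enrich_alert(alert: dict) -> dict:
    """Attach hostname/process/user to a network alert when known."""
    owner = host_sockets.get((alert["src_ip"], alert["src_port"]))
    return {**alert, **(owner or {})}

alert = {"src_ip": "10.0.0.5", "src_port": 44321,
         "dst_ip": "203.0.113.9", "verdict": "c2_beacon"}
print(enrich_alert(alert))
# -> now includes hostname "web-01", process "curl", user "www-data",
#    so the malicious process can be stopped and the user cleared.
```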

The Solution: Superior Data from More Sources

Data from multiple sources can and should be acquired. Stellar Cyber's Data Sifters employ distributed intelligence to support a diverse range of data sources, from network traffic with application content, to commands and processes running on servers, to application logs, among others. Our centralized processor can ingest data from additional sources such as firewall and IDS/IPS logs, threat intelligence feeds, and user information from Active Directory. These rich data sets are then aggregated and correlated in preparation for advanced analysis.

Problem Case 4: Too much data for centralized processing

Common threats such as port scans, SYN floods, and data exfiltration via DNS tunneling can be detected by the intelligent central processor. A more efficient and economical strategy, however, is to detect them at the initial data collection stage. Applying intelligence at the local branches of the system reduces the volume of data that must be ingested, processed, and stored by the central processor. If the entire set of network traffic containing the relevant threats is fed to the processor, the machine learning module will unnecessarily run analysis on tens of thousands or millions of extra records. To conserve resources, the data collection agent should distill the data into significant items before proceeding. In addition to improved performance, the central processor also benefits from a reduced risk of denial-of-service attacks.
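
As an illustrative sketch only – the threshold, window, and record shapes are assumptions – a collector can summarize a port scan locally and forward one alert instead of thousands of raw flow records:

```python
# Toy collector-side port-scan detection; threshold and window size are
# illustrative assumptions.
from collections import defaultdict

WINDOW_SECONDS = 60
PORT_THRESHOLD = 100  # distinct destination ports per source per window

def detect_scans(flows):
    """flows: iterable of (timestamp, src_ip, dst_ip, dst_port).
    Yields one summary alert per scanning source instead of raw flows."""
    seen = defaultdict(set)   # (window bucket, src_ip) -> distinct dst ports
    alerted = set()
    for ts, src, dst, port in flows:
        key = (int(ts) // WINDOW_SECONDS, src)
        seen[key].add(port)
        if len(seen[key]) >= PORT_THRESHOLD and key not in alerted:
            alerted.add(key)
            yield {"type": "port_scan", "src_ip": src,
                   "distinct_ports": len(seen[key]), "window": key[0]}

# Example: 150 probes from one host to distinct ports yield ONE alert.
flows = [(1000.0, "10.0.0.99", "10.0.0.1", 1000 + i) for i in range(150)]
for alert in detect_scans(flows):
    print(alert)
```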

Smarter, Faster Security with Distributed Intelligence

The advantages of distributed intelligence in scaling machine learning and enhancing security detection extend beyond these cases. An intelligent data collector, for example, can capture the packets of a DNS tunneling event at the moment of detection so that the tunneled information can be recovered.
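
As a hedged illustration – the thresholds are assumptions, and real detectors use much richer features – DNS tunneling often shows up as long, high-entropy subdomain labels that a collector can score locally:

```python
# Toy DNS-tunneling heuristic; length and entropy thresholds are assumptions.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def looks_like_tunnel(qname: str, max_label=40, min_entropy=3.5) -> bool:
    """Flag queries whose leftmost label is unusually long and random."""
    label = qname.split(".")[0]
    return len(label) > max_label and shannon_entropy(label) > min_entropy

print(looks_like_tunnel("www.example.com"))  # False
print(looks_like_tunnel(                     # likely True: base64-like label
    "dGhpcyBpcyBleGZpbHRyYXRlZCBkYXRhIGluIGJhc2U2NA.tunnel.example.com"))
```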

Distributing security intelligence throughout the entire data processing chain enhances the scalability of the whole threat detection system. Intelligence at the data collection points improves the quality of the data while simultaneously reducing its volume. The micro-service-based architecture of the centralized data processor then enables both supervised and unsupervised machine learning to be used in the pipeline for timely and confident threat detection.

Changming Liu

CEO

Stellar Cyber

The 2017 Equifax Breach

In 2017, Equifax, one of the world's largest credit reporting agencies, suffered a cyber breach of unprecedented impact and scale.  More than 145 million records of personally identifiable information were stolen by cyber criminals.  Because of the nature of this breach, the CEO of Equifax resigned, a congressional investigation commenced, Equifax's stock took a hit, and a 50-state class action lawsuit was filed.