
Understanding Throughput

The data flow for HCP runs in real time and consists of the following steps:

  1. Information from telemetry data sources is ingested into Kafka topics through the telemetry event buffer. This raw telemetry data consists of host logs, firewall logs, emails, and network data. Depending on the type of data you are streaming into HCP, you can use one of the following telemetry data collectors to ingest the data:

    NiFi

    This type of streaming method works for most types of telemetry data sources. See the NiFi documentation for more information.

    Performant network ingestion probes

    This type of streaming method is ideal for streaming high volume packet data. See Using pcap to View Your Raw Data for more information.

    Real-time and batch threat intelligence feed loaders

    This type of streaming method loads threat intelligence feeds into HCP, either continuously in real time or as scheduled batch loads, for use during enrichment.
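The ingestion step above can be sketched as routing each raw event to a Kafka topic named after its telemetry source. This is a minimal illustration only: the topic names and the in-memory `broker` dict are stand-ins for a real Kafka producer, not part of HCP itself.

```python
# Sketch of the telemetry event buffer's routing: each raw line is
# appended to a per-source topic. A dict of lists stands in for Kafka.

def route_event(raw_line: str, source_type: str, broker: dict) -> str:
    """Append a raw telemetry line to the topic for its source type."""
    topic = source_type  # illustrative: one topic per data source
    broker.setdefault(topic, []).append(raw_line)
    return topic

broker = {}
route_event("Oct 10 10:15:01 host1 sshd[42]: Failed password", "hostlogs", broker)
route_event("DENY TCP 10.0.0.5:51432 -> 192.168.1.20:443", "firewall", broker)
```

Each source type ends up with its own ordered stream of raw lines, which is what the downstream parsers consume.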

  2. Once the information is ingested into Kafka topics, the data is parsed into a normalized JSON structure that HCP can read. The information is parsed using a Java or general-purpose parser, and the parser configuration is then uploaded to ZooKeeper. A Kafka topic containing the parsed information is created for every telemetry data source.
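The parsing step can be sketched with a small general-purpose (regex-based) parser. The field names `ip_src_addr`, `ip_dst_addr`, `source.type`, and `original_string` follow Metron's common message schema; the firewall log format itself is a made-up example.

```python
import json
import re

# Regex standing in for a general-purpose parser for a hypothetical
# firewall log format.
LINE_RE = re.compile(
    r"(?P<action>\w+) TCP (?P<ip_src_addr>[\d.]+):(?P<src_port>\d+) "
    r"-> (?P<ip_dst_addr>[\d.]+):(?P<dst_port>\d+)"
)

def parse(raw_line: str, source_type: str) -> dict:
    """Turn one raw log line into a normalized JSON-ready dict."""
    match = LINE_RE.match(raw_line)
    if match is None:
        raise ValueError(f"unparseable line: {raw_line!r}")
    message = match.groupdict()
    message["source.type"] = source_type      # which telemetry source
    message["original_string"] = raw_line     # raw line kept verbatim
    return message

msg = parse("DENY TCP 10.0.0.5:51432 -> 192.168.1.20:443", "firewall")
print(json.dumps(msg))
```

Keeping the original line alongside the extracted fields means nothing is lost if the parser's field extraction turns out to be incomplete.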

  3. The information is then enriched with asset, geo, and threat intelligence information.
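The enrichment step amounts to joining the parsed message against lookup tables keyed by IP address. In this sketch the asset and geo tables are in-memory dicts standing in for the stores HCP actually queries.

```python
# Illustrative lookup tables; in HCP these live in a backing store.
ASSET_DB = {"192.168.1.20": {"asset": "web-server-01", "owner": "ops"}}
GEO_DB = {"10.0.0.5": {"country": "US", "city": "Santa Clara"}}

def enrich(message: dict) -> dict:
    """Attach asset and geo fields, prefixed by enrichment type."""
    enriched = dict(message)
    lookups = (("asset", ASSET_DB, "ip_dst_addr"),
               ("geo", GEO_DB, "ip_src_addr"))
    for prefix, db, key_field in lookups:
        for field, value in db.get(message.get(key_field), {}).items():
            enriched[f"{prefix}.{field}"] = value
    return enriched

enriched = enrich({"ip_src_addr": "10.0.0.5",
                   "ip_dst_addr": "192.168.1.20"})
```

Prefixing each added field with its enrichment type keeps the joined data from colliding with fields produced by the parser.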

  4. The information is then indexed and stored, and any resulting alerts are sent to the Metron dashboard.
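The final step can be sketched as checking each enriched message against a threat intelligence set, flagging matches as alerts, and handing everything to an indexer. The plain lists here stand in for the search index and the dashboard's alert feed.

```python
THREAT_INTEL = {"10.0.0.5"}  # illustrative known-bad IP addresses

index, alerts = [], []

def index_message(message: dict) -> None:
    """Store every message; surface threat-intel hits as alerts."""
    message = dict(message)
    message["is_alert"] = message.get("ip_src_addr") in THREAT_INTEL
    index.append(message)        # indexed and stored (searchable)
    if message["is_alert"]:
        alerts.append(message)   # sent on to the dashboard

index_message({"ip_src_addr": "10.0.0.5"})
index_message({"ip_src_addr": "172.16.0.9"})
```

Every message is indexed regardless of whether it alerts, so analysts can search the full telemetry history, not just the flagged events.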