In a previous blog entitled Network Infrastructure Visibility and Analytics with Data Streaming, we discussed the theory and operations behind using data streaming to provide network visibility and analytics. That article also detailed how SLX products support data streaming.
Here, we’ll detail a concrete example of how you can build a complete solution using SLX products and two collectors configured in Splunk Enterprise. One collector provides an interface profile and the other provides a system profile.
Many of our enterprise customers are focused on operational capabilities and best practices for reporting from their network elements, and this demo is a great way to bootstrap that effort. The plugins for Splunk Enterprise are available on GitHub, so you can evaluate the effort involved or implement them in your own lab.
Background and Purpose of Streaming
Data streaming allows you to collect real-time data from your network elements, and does so in a much more continuous and useful manner than traditional collection methods such as SNMP polling.
This data can then be analyzed to get a clear picture of the state of the system through parameters such as CPU and memory usage, security breaches, interface usage, and changes to logical topology. This analysis can in turn be used by automation tools for auto-remediation.
What is Being Demonstrated?
The intent of the telemetry/analytics example discussed here is as follows:
Fulfilling the third goal involves:
Streaming to a Client Collector
As described in Network Infrastructure Visibility and Analytics with Data Streaming, Extreme provides two models of streaming: data can be streamed to a collector, or a gRPC client can request data, which is then pushed to the client at the desired interval. This demo uses the collector model (Figure 1), wherein the network element itself (an SLX 9540) acts as a client and streams data to the desired collector over TCP.
Figure 1: Collector Model Showing Two Collectors
In this example, we configure two collectors, providing information such as the IP address and TCP/UDP port for connectivity, the streaming interval, and a telemetry profile that identifies which data to stream to each collector.
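To make the collector side concrete, here is a minimal sketch, in Python, of what a collector endpoint could look like: a process that listens on a TCP port (matching the port configured on the SLX) and reads the streamed records. The address, port, and handling logic are illustrative assumptions; the actual Splunk plugins in the GitHub repository handle decoding of the streamed payload.

```python
import socket

# Illustrative values only; use the IP/port you configured on the SLX collector.
LISTEN_ADDR = ("0.0.0.0", 50051)

def handle_payload(data: bytes):
    # Placeholder: a real collector decodes the telemetry encoding here
    # before converting it to JSON for Splunk (see the next section).
    print(f"received {len(data)} bytes of telemetry")

def run_collector():
    """Accept TCP connections from the SLX element and read the streamed payload."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(LISTEN_ADDR)
        server.listen(1)
        while True:
            conn, peer = server.accept()
            print(f"SLX element connected from {peer}")
            with conn:
                while True:
                    chunk = conn.recv(4096)   # raw streamed telemetry bytes
                    if not chunk:
                        break
                    handle_payload(chunk)

if __name__ == "__main__":
    run_collector()
```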
Splunk Collection, Analysis, and Reporting
After the Splunk collector receives the data, it performs the necessary conversions (to JSON, for instance) so that Splunk can understand it, and then delivers the converted payload into Splunk. At that point, a search application in Splunk can view the data in its raw form.
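As a sketch of that hand-off, the snippet below shows one way a converted JSON record could be delivered to Splunk using the HTTP Event Collector (HEC). The host, token, and sourcetype are placeholders, and the actual plugin may use a different ingestion path.

```python
import json
import urllib.request

# Placeholder values: point these at your Splunk instance and HEC token.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def send_to_splunk(record: dict, sourcetype: str = "slx:telemetry") -> int:
    """Deliver one converted telemetry record to Splunk over HEC."""
    payload = json.dumps({"event": record, "sourcetype": sourcetype}).encode()
    req = urllib.request.Request(
        HEC_URL,
        data=payload,
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 200 on success

# Example: a converted interface-statistics sample.
send_to_splunk({"interface": "0/1", "InOctets": 123456, "OutOctets": 654321})
```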
Splunk users can then create specific reports and dashboards (Figure 2) to view the data in useful formats.
Figure 2: Defining Reports and Dashboards
The reports shown here can give, at a point in time:
And dashboards are available so you can continually see:
Sample Reports for Memory and Traffic Flow
Reports with multiple attributes give you more information at a single glance. For instance, in the following report (Figure 3), cache memory and total free memory (at points in time) are plotted on the same graph.
Figure 3: Cached Memory and Free Memory Shown Over Time
To analyze activity on the data plane, a report that allows you to see traffic flow on different interfaces is extremely useful. In the following dashboard (Figure 4), you can choose different interfaces and time periods.
Figure 4: Traffic Flow on Selected Interfaces
Both received (InOctets) and sent (OutOctets) traffic counters are plotted.
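Since InOctets and OutOctets are cumulative counters, a chart like this is typically built from the delta between consecutive samples rather than the raw values. Below is a minimal sketch of that calculation, with sample field names assumed for illustration.

```python
def throughput_bps(prev_octets: int, curr_octets: int, interval_s: float) -> float:
    """Convert two consecutive cumulative octet counters into bits per second."""
    delta = curr_octets - prev_octets
    if delta < 0:          # counter wrapped or was reset between samples
        delta = curr_octets
    return delta * 8 / interval_s

# Example: two samples taken 30 seconds apart on the same interface.
samples = [
    {"time": 0,  "InOctets": 1_000_000, "OutOctets": 400_000},
    {"time": 30, "InOctets": 4_750_000, "OutOctets": 700_000},
]
rx = throughput_bps(samples[0]["InOctets"], samples[1]["InOctets"], 30)
tx = throughput_bps(samples[0]["OutOctets"], samples[1]["OutOctets"], 30)
print(f"rx={rx:.0f} bps, tx={tx:.0f} bps")  # rx=1000000 bps, tx=80000 bps
```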
Next Steps: New Reports, Event Logs, Automation
We are continually enhancing this integration with new reports and dashboards, including more detailed throughput information, errored and discarded packets, and event logging to help with auditing and analytics. Data streaming itself will also be enhanced to stream new attributes that address additional use cases.
Finally, Workflow Composer integration can be used for event-based automation tied to analytics of the streamed data (Figure 5).
Figure 5: Workflow Composer Integration
As an event-based automation engine, Workflow Composer can accept events from the collection unit (SLX in this case) or analytics engine (Splunk in this case) and can run workflows for automation actions. These could include remediation actions to the SLX element.
A typical way to set up a remediation stream would be to use triggering events from Splunk. For example, you might first forward all logs to Splunk, then trigger events when specific patterns are matched.
You can do this by creating a saved search in Splunk, and configuring it to send a webhook request to StackStorm when that event matches. After that, you would configure StackStorm to run a workflow when that event occurs. See here for a walkthrough example.
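As an illustration of that hand-off, the sketch below posts the kind of JSON body a Splunk webhook alert action would send to a StackStorm webhook endpoint. The webhook name, API key, and payload fields are assumptions; in practice Splunk issues this request automatically when the saved search matches.

```python
import json
import urllib.request

# Placeholders: your StackStorm host, a registered webhook name, and an API key.
ST2_WEBHOOK_URL = "https://st2.example.com/api/v1/webhooks/slx_remediation"
ST2_API_KEY = "your-api-key"

def trigger_remediation(search_name: str, result: dict) -> int:
    """Post an alert payload to a StackStorm webhook, as a Splunk alert action would."""
    body = json.dumps({"search_name": search_name, "result": result}).encode()
    req = urllib.request.Request(
        ST2_WEBHOOK_URL,
        data=body,
        headers={
            "St2-Api-Key": ST2_API_KEY,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example: a saved search matched high CPU utilization on an SLX element.
trigger_remediation("slx_high_cpu", {"host": "slx-9540-1", "cpu_pct": "95"})
```

StackStorm would then map that webhook trigger to a rule that runs the desired remediation workflow.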
Workflow Composer can also be used to statically or dynamically configure the data streaming parameters on SLX. For example, on the occurrence of an event, Workflow Composer could tell the SLX element to start streaming specific data.
You can contact your Extreme account representatives or systems engineers for more information, or you can find the code to integrate SLX Insight with Splunk Enterprise on GitHub.
This post was originally published by Product Marketing Director Alan Sardella.