How to Easily Route Data to Multiple Destinations
By: Zubair Rauf | Senior Splunk Consultant
Splunk is a powerful tool that, in most cases, lets you parse and transform your data exactly the way you want. Using some basic Splunk functionality, you can route data to multiple destinations, be it multiple indexes, indexers, or even systems outside of Splunk. This can help you address scenarios where you want data from a single source to go to more than one destination.
While there are many use cases you can address with this, we will touch on a few that I have come across during my years of working with Splunk:
- Route data to multiple destinations from a Splunk Universal/Heavy Forwarder using inputs.conf
- Route data to multiple destinations using props/transforms
Route data to multiple destinations from a Splunk UF/HF using inputs.conf
There can be multiple use cases where you would want to use inputs.conf to route your data to multiple destinations. Some of the most common use cases I’ve seen are:
- Customers need to send data to other external tools outside of Splunk.
- Customers in a hybrid cloud/on-prem environment need to send the same data to Splunk Cloud indexers and their on-prem Splunk Enterprise indexers.
Before we dive deeper, note that sending data outside of Splunk over Syslog requires a Splunk Heavy Forwarder (HF). You can route data from multiple UFs to a single HF, which can then send the data over Syslog to an external destination.
On the universal forwarder, you can set multiple output destinations using multiple target groups. These can include the following:
- TCP 9997 to Splunk servers
- TCP to non-Splunk servers using the “sendCookedData=false” setting
Splunk Heavy Forwarders can also send Syslog data to third-party systems over the TCP and UDP protocols. If your third-party system accepts data over Syslog, you can route data from your UFs to HFs and onward to the Syslog endpoint.
Routing data using inputs.conf on a Universal Forwarder
You can create multiple target groups in outputs.conf.
Universal Forwarder
##outputs.conf
[tcpout]
defaultGroup = splunk-target-indexers
[tcpout:splunk-target-indexers]
server = x.x.x.x:9997
[tcpout:non-splunk-target-group]
server = x.x.x.x:port
sendCookedData = false
[tcpout:splunk-heavy-forwarder]
server = x.x.x.x:9997
A universal forwarder will route data to the default group if you do not specify a different destination using the _TCP_ROUTING attribute for your input.
##inputs.conf
# This will route to default group
[monitor:///var/log/abc.log]
index = os_nix
sourcetype = abc-log
# This will route to splunk-target-indexers and non-splunk-target-group
[monitor:///var/log/def.log]
index = os_nix
sourcetype = def-log
_TCP_ROUTING = splunk-target-indexers,non-splunk-target-group
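The outputs.conf above also defines a splunk-heavy-forwarder target group, which fits the UF-to-HF scenario described earlier. As a minimal sketch, assuming a hypothetical /var/log/ghi.log input that should go only to the heavy forwarder, the routing would look like this:
##inputs.conf
# This will route only to the heavy forwarder group
# (the path and sourcetype below are placeholders for illustration)
[monitor:///var/log/ghi.log]
index = os_nix
sourcetype = ghi-log
_TCP_ROUTING = splunk-heavy-forwarder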
Routing and filtering data on the Heavy Forwarder
You can use the heavy forwarder to do advanced routing and filtering. You can send data out of Splunk over Syslog and also use props and transforms to route a subset of your data to specific destinations.
##outputs.conf
[tcpout]
defaultGroup = splunk-target-group
[tcpout:splunk-target-group]
server = x.x.x.x:9997
[tcpout:non-splunk-target-group]
server = x.x.x.x:port
sendCookedData = false
[syslog]
defaultGroup = syslogGroup
[syslog:syslogGroup]
server = ipaddress:port OR hostname:port
type = udp|tcp
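Route a subset of data to a tcpout group
Besides Syslog, transforms can also override _TCP_ROUTING so that only certain events reach one of the tcpout groups defined above. The following is a minimal sketch, assuming a hypothetical firewall sourcetype and that events containing “DENY” should go to the non-Splunk destination:
##transforms.conf
# Hypothetical transform: send events containing DENY to the non-Splunk group
[route_deny_to_non_splunk]
REGEX = DENY
DEST_KEY = _TCP_ROUTING
FORMAT = non-splunk-target-group
##props.conf
# some-firewall-sourcetype is a placeholder for your own sourcetype
[some-firewall-sourcetype]
TRANSFORMS-route_deny = route_deny_to_non_splunk
Events that do not match the regex continue to the default group defined in outputs.conf.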
Route data to Syslog
In this example, we will see how a subset of events is routed to Syslog.
##transforms.conf
#Create a transform to route data
[send_to_syslog]
# This will match all events; write a more specific regex to capture only certain events
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslogGroup
##props.conf
#Route data from a specific host to syslog
[host::somehost*]
TRANSFORMS-syslog_output = send_to_syslog
#Route a specific sourcetype to syslog
[some-sourcetype]
TRANSFORMS-syslog_out = send_to_syslog
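Filter out unwanted events
The same props/transforms pattern can also drop events entirely by sending them to the nullQueue. A minimal sketch, assuming a hypothetical noisy sourcetype where DEBUG lines should be discarded before indexing:
##transforms.conf
# Hypothetical transform: discard DEBUG events by routing them to the nullQueue
[drop_debug_events]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue
##props.conf
# noisy-sourcetype is a placeholder for your own sourcetype
[noisy-sourcetype]
TRANSFORMS-drop_debug = drop_debug_events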
Splunk transforms are very powerful and allow you to transform, filter, and route your data in multiple ways. To learn more about what transforms can do, you can review the Splunk documentation for transforms.conf.