Splunk

sysoprok Sun, 08/06/2017 - 17:56

Retain events as one file in Splunk

sysop Fri, 08/27/2010 - 10:54

In Splunk, many log files, especially custom log files, end up getting broken up into many single events rather than one event (or log file) like the one you are used to seeing from the command line. To configure Splunk to keep the traditional whole log file instead of many events, you need to modify props.conf, located at /opt/splunk/etc/system/local/props.conf. If props.conf doesn't exist there, make a copy from /opt/splunk/etc/system/default/props.conf.

There are three settings to change in props.conf: TRUNCATE, MAX_EVENTS, and BREAK_ONLY_BEFORE.

[default]
CHARSET = UTF-8
TRUNCATE = 0
DATETIME_CONFIG = /etc/datetime.xml
MAX_DAYS_HENCE=2
MAX_DAYS_AGO=2000
MAX_DIFF_SECS_AGO=3600
MAX_DIFF_SECS_HENCE=604800
MAX_TIMESTAMP_LOOKAHEAD = 128
SHOULD_LINEMERGE = True
BREAK_ONLY_BEFORE = SLp23kj4kala234ksksksk55skskQQtttQQQ
BREAK_ONLY_BEFORE_DATE = True
MAX_EVENTS = 10000
MUST_BREAK_AFTER =
MUST_NOT_BREAK_AFTER =
MUST_NOT_BREAK_BEFORE =

Set TRUNCATE to a high value, or set it to 0 for unlimited.

Set MAX_EVENTS to a value higher than its default of 256. I've set mine to 10000.

BREAK_ONLY_BEFORE tells Splunk to look for a pattern in your log files and to break into a new event only when it finds that pattern. Set BREAK_ONLY_BEFORE to a value that you never expect Splunk to find, and your log files should stay intact. For example, I've set mine to BREAK_ONLY_BEFORE = SLp23kj4kala234ksksksk55skskQQtttQQQ
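As a quick sanity check, the stanza above can be read back programmatically. Here is a minimal sketch in Python, assuming a props.conf fragment like the one shown earlier (Splunk reads these files itself; this is only for inspecting your edits):

```python
# Sketch: parse a props.conf-style fragment and confirm the three settings
# discussed above. The sample text below mirrors the [default] stanza shown
# earlier; values are illustrative.
import configparser

sample = """
[default]
TRUNCATE = 0
MAX_EVENTS = 10000
SHOULD_LINEMERGE = True
BREAK_ONLY_BEFORE = SLp23kj4kala234ksksksk55skskQQtttQQQ
"""

cp = configparser.ConfigParser()
cp.read_string(sample)

stanza = cp["default"]
print("TRUNCATE =", stanza["TRUNCATE"])              # 0 = do not truncate
print("MAX_EVENTS =", stanza["MAX_EVENTS"])          # raised from the 256 default
print("BREAK_ONLY_BEFORE =", stanza["BREAK_ONLY_BEFORE"])
```

Note that configparser treats option names case-insensitively, which is convenient for spotting a stanza where one of the three settings was left at its default.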

SPLUNK CLUSTER INDEXERS ERROR INDEXERDISCOVERYHEARTBEATTHREAD Part 2

sysoprok Fri, 05/05/2017 - 12:11

Here I am again with the same error, but a different resolution.

ERROR IndexerDiscoveryHeartbeatThread - failed to parse response payload for group=group1, err=failed to extract FwdTarget from json node={"hostport":"?","ssl":false,"indexing_disk_space":-1}http_response=OK

In the original post (http://www.givemeit.com/Splunk-Cluster-Indexers-ERROR-IndexerDiscoveryH…), the error message was related to ports 9997, etc. Recently, however, I replaced and synchronized all of the splunk.secret files and restarted the cluster peers, cluster master, deployment server, and main search head. Not right away, but eventually, after I made a deploy change and the heavy forwarders restarted, my main heavy forwarders started showing the dreaded ERROR IndexerDiscoveryHeartbeatThread again.

I had Splunk support verify that my configuration was still correct, and the last resort was to do a rolling restart on the indexer cluster. It worked! That was both a relief and a source of frustration, because I had already restarted all of the roles individually a few times with no success.

From the Cluster Master, I executed the following commands:

[root@clustermaster bin]# ./splunk edit cluster-config -percent_peers_to_restart 20
The cluster-config property has been edited.

[root@clustermaster bin]# ./splunk rolling-restart cluster-peers
Rolling Restart of all the cluster peers has been kicked off. It might take some time for completion.
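For planning, percent_peers_to_restart roughly determines how many peers restart concurrently during the rolling restart. A back-of-envelope sketch (the exact batching is internal to Splunk; the formula below is only an approximation for sizing purposes):

```python
# Back-of-envelope sketch of how percent_peers_to_restart maps to a
# concurrent restart batch size. The exact batching logic is internal to
# Splunk; this only illustrates the arithmetic.
def restart_batch_size(num_peers: int, percent: int) -> int:
    """Approximate number of peers restarted at the same time."""
    return max(1, num_peers * percent // 100)

for peers in (5, 10, 50):
    print(peers, "peers at 20% ->", restart_batch_size(peers, 20), "at a time")
```

With the 20% used above, a 10-peer cluster restarts roughly 2 peers at a time, so search capacity loss during the rolling restart stays bounded.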

Reference: https://docs.splunk.com/Documentation/Splunk/6.5.3/Indexer/Userollingre…

Splunk Cluster Indexers ERROR IndexerDiscoveryHeartbeatThread

sysoprok Thu, 02/16/2017 - 15:31

I noticed I wasn't receiving all of the log data I was expecting from my Splunk Heavy Forwarders to my newly set up Splunk Index Cluster. It was a simple problem, but it was very difficult to figure out based on the log messages.

This log message indicates an actual credential problem. Most likely the pass4SymmKey value is wrong.
01-11-2017 17:10:03.017 -0500 ERROR IndexerDiscoveryHeartbeatThread - failed heartbeat for group=group1 uri=https://yourclustermanager:8089/services/indexer_discovery http_response=Unauthorized

However, this log indicates (not clearly at all) that the cluster peers are not listening on 9997. This can be a configuration issue with firewalld or inputs.conf:

01-11-2017 19:48:54.027 -0500 WARN TcpOutputProc - Forwarding to indexer group group1 blocked for 2040 seconds.
01-11-2017 19:48:58.642 -0500 ERROR IndexerDiscoveryHeartbeatThread - failed to parse response payload for group=group1, err=failed to extract FwdTarget from json node={"hostport":"?","ssl":false,"indexing_disk_space":-1}http_response=OK

Verify that /opt/splunk/etc/system/local/inputs.conf has either a splunktcp or a splunktcp-ssl stanza (not both) below the host = value.

[splunktcp://9997]
disabled = 0
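The "one stanza, not both" rule above lends itself to a quick scripted check. A minimal sketch in Python; the sample inputs.conf text is an assumption for illustration:

```python
# Sketch: verify an inputs.conf defines exactly one of splunktcp /
# splunktcp-ssl. The sample text here is illustrative; read your real
# /opt/splunk/etc/system/local/inputs.conf instead.
import configparser

sample = """
[splunktcp://9997]
disabled = 0
"""

cp = configparser.ConfigParser()
cp.read_string(sample)

plain = [s for s in cp.sections() if s.startswith("splunktcp://")]
ssl = [s for s in cp.sections() if s.startswith("splunktcp-ssl:")]

if plain and ssl:
    print("misconfigured: both splunktcp and splunktcp-ssl are defined")
elif plain or ssl:
    print("ok:", (plain + ssl)[0])
else:
    print("no splunktcp input defined")
```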
Verify that firewalld is open to 9997/tcp.
firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: em1
sources:
services: dhcpv6-client ssh
ports: 8089/tcp 9997/tcp 8000/tcp 8080/tcp
protocols:
masquerade: no
forward-ports:
sourceports:
icmp-blocks:
rich rules:

If not, add the port permanently:

firewall-cmd --permanent --add-port=9997/tcp
firewall-cmd --reload
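Beyond reading firewalld output, you can test raw TCP reachability of port 9997 from the forwarder's side. A minimal Python sketch; the demo uses a throwaway local listener, and the hostname in the comment is a placeholder:

```python
# Sketch: check whether a TCP port is reachable, as a quick test that a
# cluster peer is actually listening on 9997 through the firewall.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener. In practice, point this at a
# cluster peer, e.g. port_open("indexer1.example.com", 9997) -- that
# hostname is a placeholder, not from the original post.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
host, port = srv.getsockname()
print("listening port open:", port_open(host, port))
srv.close()
```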

Update: this may help resolve your issue as well - http://www.givemeit.com/Splunk-Cluster-Indexers-ERROR-IndexerDiscoveryH…

Splunk DB Connect Troubleshooting

sysop Thu, 09/11/2014 - 09:54

When attempting to use Splunk DB connect, please note that the Splunk documentation was out of date at the time of this writing.

The only two things you need for Splunk DB Connect to work are ojdbc6.jar copied into $SPLUNK_HOME/etc/apps/dbx/bin/lib and the latest JRE downloaded to a directory, with its path set for the application.

The error messages that the Splunk application provided were cryptic, both when verifying/testing the connection while adding a new DB connection and when trying to pull data from the application after setting up the DB Connect profile.

The simple solution was to use the latest Java JRE. Even though the Splunk documentation said an earlier version would work, the version that worked was JRE 7u55 - in my case, jre-7u55-linux-x64.tar.gz on a Red Hat Linux 64-bit OS.

These did not solve any of the Splunk DB Connect error messages and are misleading:
http://answers.splunk.com/answers/72101/splunk-db-connect-error-connect…
http://answers.splunk.com/answers/68945/splunk-db-connect-oracle-connec…
http://answers.splunk.com/answers/74152/splunk-db-connect-to-oracle-ins…

One other important note: if you install the Oracle client, it will typically install the following JAR files.
dms.jar
ojdbc5dms_g.jar
ojdbc5dms.jar
ojdbc5_g.jar
ojdbc5.jar
simplefan.jar
... and the version 6 files with a similar name.

Here is what I would suggest trying:

1) Reinstall the DB Connect app from apps.splunk.com; restart Splunk.
2) Download ojdbc6.jar and place it into $SPLUNK_HOME/etc/apps/dbx/bin/lib.
3) Download the latest JRE (version 7).
4) Restart Splunk and retest the connection to the Oracle database.
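The jar placement in the steps above is easy to get wrong, so a small pre-flight check can help. A minimal sketch, assuming the $SPLUNK_HOME/etc/apps/dbx/bin/lib path mentioned earlier; the demo runs against a throwaway directory tree standing in for /opt/splunk:

```python
# Sketch: pre-flight check that ojdbc6.jar sits where DB Connect expects
# it. The path layout assumes the dbx/bin/lib location described above.
from pathlib import Path
import tempfile

def dbx_jar_present(splunk_home: str) -> bool:
    """True if ojdbc6.jar is in $SPLUNK_HOME/etc/apps/dbx/bin/lib."""
    return (Path(splunk_home) / "etc/apps/dbx/bin/lib/ojdbc6.jar").is_file()

# Demo against a temporary directory; on a real host you would pass
# "/opt/splunk" (or wherever SPLUNK_HOME points).
with tempfile.TemporaryDirectory() as home:
    lib = Path(home) / "etc/apps/dbx/bin/lib"
    lib.mkdir(parents=True)
    (lib / "ojdbc6.jar").touch()
    print(dbx_jar_present(home))
```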

Splunk Live Baltimore Maryland November 3, 2010 Review

sysop Thu, 10/14/2010 - 14:27

This was my first SplunkLive event. I'd have to say I like Splunk, I thought the facility was great, and breakfast/lunch was nice too. Keep in mind nobody had to pay for anything at this event, and there were some bits and pieces of information to walk away with.

Pros: Lunch - no, not just the food, but the Splunk users panel. The panel consisted of 7-8 people with a range of experience in using Splunk, integrating Splunk, and even navigating the politics of deploying it to areas where the Splunk admin didn't have direct control over the logs. I think this is where the heart of SplunkLive lives. You get people who will praise Splunk where it shines and aren't afraid to say what Splunk lacks. Nobody on the users panel claimed that Splunk was a replacement for SIEMs or that Splunk can do anything. They were real about what it can do, and can do very well. It appears that Splunk 4.2 will bring added capability to fill in the few areas that are lacking.

Cons: Morning... 9-12 is really a Splunk> infomercial - live! You get to hear success stories from a few large companies that use Splunk, and from Splunk employees, but it is really just an unneeded sell to a crowd that already uses Splunk. We get it! Splunk is good. We don't need three hours of testimonials.

I went to the Advanced training session in the afternoon. It was okay. You could ask whatever you wanted, but you weren't going to get a great answer. You did, however, get some advice on where to look things up for yourself. Not what I expected from a "training" session. Then again, it was free.

I'd probably go to one more to see if anything changes.

Baltimore, Maryland
Nov 3, 2010

Agenda
----------------------

* What is Splunk? From Fixing IT to Delivering Real-time Business Insights
* Customer Presentations
* See Splunk in Action–overview demo
* Q & A
* Optional Technical Workshop

Join Splunk Experts for an overview of Splunk. From getting started to creating intelligent searches, alerts and building the dashboards your team needs to deliver insight to the business.
* Experience how easy it is to get Splunk up and running
* Learn new and powerful ways to sift through your mountains of IT data
* Get advice from the pros on how to deploy Splunk
* Set up custom roles, dashboards and reports
* Try your hand at custom application development with Splunk
* Network with Splunk users and fellow IT pros
* Learn tips and tricks you can take back to the office
* Have the most fun you've ever had with an IT software product

When:
----------------------
Wednesday, November 3, 2010
* 8:30am to 9:00am - Registration
* 9:00am to 12:00pm - SplunkLive!
* 12:00pm to 1:00pm - Lunch
* 1:00pm to 3:00pm - Technical Workshop: Getting Started User Training
* 1:00pm to 3:00pm - Technical Workshop: Advanced User Training

Where:
----------------------
Hyatt Regency Baltimore
300 Light Street
Baltimore, MD 21202
USA

T: 410.528.1234