Description: Elasticsearch is an open-source search engine based on Apache Lucene and developed in Java. It provides a distributed, multitenant full-text search engine with an HTTP interface, and Kibana provides a web dashboard on top of it.
Data is stored, queried, and retrieved as JSON documents. Elasticsearch is a scalable search engine that can be used to search all kinds of text documents, including log files.
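As a quick illustration of that JSON document interface (the index name logs and the document shown here are purely hypothetical, and the commands assume an Elasticsearch instance already running on localhost:9200 as set up in the procedure below):
# curl -XPUT 'http://localhost:9200/logs/syslog/1' -d '{"message": "test log entry"}'
# curl -XGET 'http://localhost:9200/logs/syslog/1?pretty'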
Procedure:
- Disable SELinux: Change the SELINUX value in the SELinux configuration file (/etc/selinux/config) from enforcing to disabled and reboot the machine. Verify the current mode of SELinux using the getenforce command, as shown below.
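A minimal sketch of this step, assuming the default /etc/selinux/config location (the reboot is still required for the change to take full effect; after it, getenforce should print Disabled):
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# reboot
# getenforce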
- Install Java: Elasticsearch requires at least Java 8. Install it using the commands below:
# yum update -y
# yum install java-1.8.0-openjdk
- Verify the Java version after the installation completes.
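For example:
# java -version
The output should report an OpenJDK 1.8.0 runtime.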
- Install and Configure Elasticsearch: First import the GPG key for Elasticsearch, then download the RPM package and install it using the commands below:
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.rpm
# rpm -ivh elasticsearch-5.1.1.rpm
- Configure Elasticsearch: Make the following changes in the Elasticsearch configuration file elasticsearch.yml under /etc/elasticsearch/.
Enable the memory lock for Elasticsearch by uncommenting line 40. This disables memory swapping for Elasticsearch.
bootstrap.memory_lock: true
In the 'Network' block, uncomment the network.host and http.port lines
network.host: localhost
http.port: 9200
Next, edit the systemd service file for Elasticsearch and uncomment the LimitMEMLOCK line.
# vi /usr/lib/systemd/system/elasticsearch.service
LimitMEMLOCK=infinity
Edit the sysconfig configuration file for Elasticsearch, uncomment line 60, and make sure the value is unlimited.
# vi /etc/sysconfig/elasticsearch
MAX_LOCKED_MEMORY=unlimited
- Once Elasticsearch is installed, start and enable the service using the commands below:
# systemctl daemon-reload
# systemctl enable elasticsearch.service
# systemctl start elasticsearch.service
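To verify that Elasticsearch is running and that the memory lock was applied, these optional checks can be used (a sketch; the second query uses the nodes info API's filter_path parameter and should report "mlockall" : true):
# curl -XGET 'http://localhost:9200/?pretty'
# curl -XGET 'http://localhost:9200/_nodes?filter_path=**.mlockall&pretty'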
- Download and install Kibana using the commands below:
# wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
# rpm -ivh kibana-5.1.1-x86_64.rpm
- Open the Kibana configuration file and modify the Elasticsearch-related settings as follows:
# vi /etc/kibana/kibana.yml
server.port: 5601
server.host: "localhost" [Note: to access Kibana using the server's IP address, replace localhost with that IP address]
elasticsearch.url: "http://localhost:9200"
Save and exit the file, then enable and start the Kibana service:
# systemctl enable kibana
# systemctl start kibana
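Optionally, confirm that Kibana is listening on its port (this assumes net-tools is installed for the netstat command):
# netstat -plntu | grep 5601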
- Now browse the Kibana URL at http://IP_Address:5601; you should see the Kibana welcome screen.
- Install Logstash: To install Logstash, create a repository file as follows:
# vi /etc/yum.repos.d/logstash.repo
[logstash]
name=Logstash
baseurl=http://packages.elasticsearch.org/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
Save and exit the file, then install Logstash:
# yum install logstash -y
Start and enable the Logstash service:
# service logstash start
# chkconfig logstash on
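Optionally, check that the service started cleanly:
# service logstash status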
SSL certificate for Logstash
After the Logstash installation, we will create an SSL certificate to secure communication between Logstash and Filebeat (the clients). Since we will be using an IP address to connect to the server, we will create an SSL certificate with an IP SAN.
Before creating the SSL certificate, add an entry for the server's IP address in openssl.cnf:
# vi /etc/pki/tls/openssl.cnf
[ v3_ca ]
subjectAltName = IP: 35.173.249.228
Generate a self-signed certificate valid for 3650 days:
# cd /etc/pki/tls
# openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/server.key -out certs/server.crt
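Optionally, inspect the generated certificate to confirm the IP SAN was included:
# openssl x509 -in /etc/pki/tls/certs/server.crt -noout -text | grep -A1 "Subject Alternative Name"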
- Create an input file /etc/logstash/conf.d/input.conf with the following content:
# vi /etc/logstash/conf.d/input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/server.crt"
    ssl_key => "/etc/pki/tls/private/server.key"
  }
}
- Create an output file /etc/logstash/conf.d/output.conf with the following content:
# vi /etc/logstash/conf.d/output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
- Create a filter file /etc/logstash/conf.d/filter.conf with the following content:
# vi /etc/logstash/conf.d/filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
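With the input, filter, and output files in place, the pipeline can be validated and Logstash restarted so it picks up the new configuration. This is a sketch assuming the default Logstash 2.x RPM install path; newer Logstash releases use --config.test_and_exit instead of --configtest:
# /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/
# service logstash restart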