Preface
Elasticsearch specializes in data storage, analysis, and indexing.
Kibana is the dedicated web interface that turns Elasticsearch content into charts.
Logstash specializes in collecting and filtering data, then storing it into Elasticsearch.
Nowadays there is also the Beats family, which mainly plays the role of a lightweight Logstash; Beats is written in Golang and uses far fewer resources.
The Beats members are Filebeat / Metricbeat / Packetbeat / Winlogbeat / Heartbeat, each for grabbing data in a different scenario.
Test Environment
CentOS 7
ELK version 7.7.1
Pipeline Architecture
<---- client side ---> <--------------------- server side ----------------------->
host ----> Filebeat --> Redis --> Logstash --> Elasticsearch --> Kibana --> Nginx --> manager
Filebeat grabs the host's Apache access_log data and pushes it into Redis; Logstash then pulls it out of Redis and forwards it to Elasticsearch.
The results are then browsed through Kibana (with Nginx proxying Kibana).
Redis binds to port 6379
Elasticsearch binds to port 9200
Nginx binds to port 8080
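Once everything is running, a quick way to confirm each service is listening where expected (a sanity check of my own, not part of the original walkthrough; 5601 is Kibana's default port, configured further below):
ss -tln | grep -E ':(6379|9200|5601|8080)'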
Installing ELK requires Java
yum -y install java java-devel
or download jre-XXXXXXX-linux-x64.rpm from https://www.oracle.com/java/technologies/javase-jre8-downloads.html
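Either way, you can confirm Java is available before moving on:
java -version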
ELK itself is installed from each component's own RPM file
https://www.elastic.co/downloads/elasticsearch
https://www.elastic.co/downloads/kibana
https://www.elastic.co/downloads/logstash
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.7.1-x86_64.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.7.1-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.7.1.rpm
yum localinstall -y elasticsearch-7.7.1-x86_64.rpm
yum localinstall -y kibana-7.7.1-x86_64.rpm
yum localinstall -y logstash-7.7.1.rpm
Configure Elasticsearch
Edit
vi /etc/elasticsearch/elasticsearch.yml
or
cat << EOF >> /etc/elasticsearch/elasticsearch.yml
network.host: localhost
# to LISTEN on other interfaces: network.host: ["localhost", "1.1.1.1"]
http.port: 9200
EOF
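On a small test box it can also help to pin the JVM heap so Elasticsearch does not grab too much RAM. This tweak is my addition, assuming Elasticsearch 7.7+, which reads extra JVM flags from the jvm.options.d directory:
cat << EOF > /etc/elasticsearch/jvm.options.d/heap.options
-Xms1g
-Xmx1g
EOF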
Test start (you can watch the output directly to see whether anything is misconfigured)
sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch -v
Test the result
curl -X GET 'http://localhost:9200'
If it succeeds, this appears
{
  "name" : "ssorc.tw",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "SHos4GDQQSacAd5HgoiQLg",
  "version" : {
    "number" : "7.5.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "8bec50e1e0ad29dad5653712cf3bb580cd1afcdf",
    "build_date" : "2020-01-15T12:11:52.313576Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
Other endpoints with useful information
curl -X GET 'http://localhost:9200/_cat/nodes'
curl -X GET 'http://localhost:9200/_cat/master'
curl -X GET 'http://localhost:9200/_cat/indices'
curl -X GET 'http://localhost:9200/_cat/health'
Append ?v to any of these for verbose output with column headers
Configure Kibana
Edit
vi /etc/kibana/kibana.yml
or ↓ (optional; these are the defaults anyway)
cat << EOF >> /etc/kibana/kibana.yml
server.port: 5601
server.host: "localhost"
#elasticsearch.url: "http://localhost:9200"
elasticsearch.hosts: ["http://localhost:9200/"]
EOF
Test start
/usr/share/kibana/bin/kibana --allow-root
If it succeeds, the final message will be
log [09:50:41.508] [info][server][Kibana][http] http server running at http://localhost:5601
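Once Kibana is up, you can also poke its status API from the shell instead of opening a browser; a quick check assuming the default port 5601:
curl -s http://localhost:5601/api/status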
Browse Kibana through Nginx
yum -y install nginx httpd-tools
vi /etc/nginx/conf.d/ssorc.tw.conf
server {
    listen 8080;
    server_name ssorc.tw;
    access_log logs/access.log;
    error_log logs/error.log;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.kibana;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Require a username and password to log in
htpasswd -c /etc/nginx/htpasswd.kibana admin
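Before starting, it does not hurt to validate the configuration syntax first:
nginx -t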
Start
systemctl start nginx
Browse to http://ssorc.tw:8080
Browse to http://ssorc.tw:8080/status
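One CentOS 7 gotcha worth noting (my addition): with SELinux enforcing, nginx may return 502 because it is not allowed to open the upstream connection to port 5601. The standard SELinux boolean for this is:
setsebool -P httpd_can_network_connect 1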
At first it shows a welcome template; you can ignore it, or import the bundled sample data to look at the pretty dashboards and delete them afterwards (the interface is much friendlier now).
Install Redis
For installation, refer to the article 手動編譯安裝 Redis + phpiredis + phpRedisAdmin (manually compiling and installing Redis + phpiredis + phpRedisAdmin)
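If you would rather not compile it yourself, Redis is also packaged in the EPEL repository for CentOS 7 (an alternative I'm adding here, not what this walkthrough used; note the binary paths then differ from the /usr/local/redis paths used below):
yum -y install epel-release
yum -y install redis
systemctl start redis
systemctl enable redis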
Configure Filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.7.1-x86_64.rpm
yum localinstall -y filebeat-7.7.1-x86_64.rpm
vi /etc/filebeat/filebeat.yml
- type: log
  enabled: true
  paths:
    - /var/log/httpd/access_log
  tail_files: true          # tail the file
  fields:
    log_source: access_log  # add a custom field; log_source is the field name, access_log the value, for Logstash to filter on
- type: log
  enabled: true
  paths:
    - /var/log/httpd/error_log
  tail_files: true          # tail the file
  fields:
    log_source: error_log
output.redis:
  hosts: ["localhost"]
  key: "apache_logs"
  data_type: "list"
#output.elasticsearch:       # commented out
#  hosts: ["localhost:9200"] # commented out
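Filebeat can validate its own configuration before you run it for real (there is also a filebeat test output subcommand, though not every output type supports it):
/usr/share/filebeat/bin/filebeat test config -c /etc/filebeat/filebeat.yml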
Test start
/usr/share/filebeat/bin/filebeat -environment systemd \
  -c /etc/filebeat/filebeat.yml \
  -path.home /usr/share/filebeat \
  -path.config /etc/filebeat \
  -path.data /var/lib/filebeat \
  -path.logs /var/log/filebeat
But stop filebeat for now, and first log in to redis to check whether any data is there
/usr/local/redis/bin/redis-cli
Type the following; at the beginning there is nothing, so it returns (empty list or set)
> keys *
Then start filebeat again
and you will see this in redis (remember to browse the web site first, so filebeat has data to stuff into redis)
1) "apache_logs"
Next, use Logstash to take the data from Redis and stuff it into Elasticsearch. If it works, you will no longer see the data in Redis, because it has all been taken away.
(Logstash used to scare me to death: config files full of chaotic, indecipherable scribbles that I never wanted to deal with.)
Configure Logstash
vi /etc/logstash/conf.d/apache.conf
input {
  redis {
    host => "localhost"
    port => 6379
    key => "apache_logs"  # the same key name as in filebeat above
    data_type => "list"
  }
}
filter {
}
output {
  if [fields][log_source] == 'access_log' {  # matches the fields log_source set in filebeat.yml
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "apache_access_log"  # when log_source is access_log, the Elasticsearch index name is apache_access_log
    }
  }
  if [fields][log_source] == 'error_log' {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "apache_error_log"
    }
  }
}
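Logstash can also check the pipeline config without actually starting, via its -t / --config.test_and_exit flag:
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t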
Test start
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --log.level=debug
Log in to Kibana
Below the buttons on the left-hand side you will see Management
This is split into
Elasticsearch's Index Management (the indices already exist, namely apache_access_log and apache_error_log)
and
Kibana's Index Patterns (no index patterns yet)
So in Kibana, click Create index pattern
In Index pattern, type apache_access_log, then Next step
For Time Filter field name choose @timestamp, and you can create it
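If you want to confirm from the command line that documents really are landing in the index before building the pattern, the standard search API works:
curl 'http://localhost:9200/apache_access_log/_search?size=1&pretty'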
Finally, remember to set everything to start at boot
/etc/init.d/redis start
systemctl enable elasticsearch kibana nginx filebeat logstash
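Note that systemctl enable only registers the units for the next boot; to bring everything up in the current session as well:
systemctl start elasticsearch kibana nginx filebeat logstash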
Postscript
Actually, the Beats family can feed data straight into Elasticsearch. After trying it, I honestly felt the host load dropped a lot; Logstash simply uses too many resources.
The pipeline above is also just how the articles on the internet all do it; adding Redis guards against Logstash losing data, and probably suits high-traffic scenarios.
(Dropping the Logstash load really makes that much difference.)
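For reference, the Beats-direct-to-Elasticsearch variant mentioned above is just the filebeat.yml output section flipped back (a minimal sketch; index naming is then left to Filebeat's defaults):
#output.redis:            # comment the redis output back out
#  hosts: ["localhost"]
#  key: "apache_logs"
#  data_type: "list"
output.elasticsearch:
  hosts: ["localhost:9200"]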
Q&A
Q
Kibana may throw some errors on startup; you may need to delete its internal indices and then restart kibana
A
curl -X DELETE http://localhost:9200/.kibana_task_manager_1
curl -X DELETE http://localhost:9200/.kibana_1
Q
An error occurs when starting kibana
FATAL Error: [config validation of [elasticsearch].url]: definition for this key is missing
A
#elasticsearch.url: "http://localhost:9200"     # comment this out
elasticsearch.hosts: ["http://localhost:9200/"] # add this line
Q
Kibana server is not ready yet
A
Because the kibana and elasticsearch versions do not match
Q
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
A
vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
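After logging in again (limits.conf only applies to new sessions), you can verify the limit took effect; note that the official RPM's systemd unit already raises LimitNOFILE for the service itself, so this mainly matters when running the binary by hand as above:
ulimit -n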
Q
[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
A
Because /etc/elasticsearch/elasticsearch.yml was changed from
network.host: localhost
to
network.host: ["localhost", "1.1.1.1"]
you also need to add
cluster.initial_master_nodes: ["node-1", "node-2"]
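The names listed in cluster.initial_master_nodes must match each node's node.name. For a single-box setup like this one, another fix I can suggest (supported in Elasticsearch 7) is to declare it a single-node cluster instead:
discovery.type: single-node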
Q
Sometimes the host may mount /tmp with noexec, but logstash needs exec
so set up a separate tmpdir
A
mkdir /var/lib/logstash/tmp/
chmod 1777 /var/lib/logstash/tmp/
chown -R logstash:logstash /var/lib/logstash/tmp/
vi /etc/default/logstash
vi /etc/logstash/startup.options
LS_JAVA_OPTS="-Djava.io.tmpdir=/var/lib/logstash/tmp/"
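Then restart logstash so the new tmpdir takes effect:
systemctl restart logstash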