#StackBounty: #amazon-web-services #elasticsearch #logstash #kibana elasticsearch showing only 1 docs.count on data migration using log…

Bounty: 50

I am trying to move data from S3 (a .csv file's data) to an Elasticsearch cluster with Logstash, using a custom template.
However, the index shows only docs.count=1, with the rest of the records counted as docs.deleted, when I check with the following query in Kibana:

GET /_cat/indices?v
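
The relevant columns of the response look roughly like this (the uuid and the numbers below are placeholders, not my exact output; the point is docs.count versus docs.deleted):

health status index         uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   my_file_index xxxxxxxxxxxxxxxxxxxxxx   5   1          1         4999      1.1mb          1.1mb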

My first question is:

  1. Why is only one record (the last one) indexed, while the others show up as deleted?

Now, when I query this index with the query below in Kibana:

GET /my_file_index/_search
{
  "query": {
    "match_all": {}
  }
}

I get only one record, with the comma-separated data in the "message" field (see the sample hit below), so my second question is:

  1. How can I get the data indexed under its column names, just like in the CSV, given that I have specified all the column mappings in the template file that is fed into Logstash?
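
To make the second problem concrete, the single hit has the whole CSV row stuffed into the "message" field instead of one field per column; the response shape is roughly the following (field values are placeholders, not my real data):

{
  "hits": {
    "total": 1,
    "hits": [
      {
        "_index": "my_file_index",
        "_source": {
          "message": "val1,val2,val3"
        }
      }
    ]
  }
}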

I also tried specifying the columns option in the Logstash csv filter, but with no luck:

 columns => ["col1", "col2",...]

Any help would be appreciated.

EDIT-1: Below is my logstash.conf file:

input {
  s3 {
    access_key_id => "xxx"
    secret_access_key => "xxxx"
    region => "eu-xxx-1"
    bucket => "xxxx"
    prefix => "abc/stocks_03-jul-2018.csv"
  }
}
filter {
  csv {
    separator => ","
    columns => ["AAA","BBB","CCC"]
  }
}
output {
  amazon_es {
    index => "deepakindex"
    document_type => "deepakindex"
    hosts => "vpc-totemdev-xxxx.eu-xxx-1.es.amazonaws.com"
    region => "eu-xxxx-1"
    aws_access_key_id => 'xxxxx'
    aws_secret_access_key => 'xxxxxx+xxxxx'
    document_id => "%{id}"
    template => "templates/template_2.json"
    template_name => "deepakindex"
  }
}
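
For reference, the column mappings in templates/template_2.json are along these lines (a trimmed, illustrative sketch only; the field names and types here are placeholders, not the actual file):

{
  "index_patterns": ["deepakindex*"],
  "mappings": {
    "deepakindex": {
      "properties": {
        "AAA": { "type": "keyword" },
        "BBB": { "type": "keyword" },
        "CCC": { "type": "float" }
      }
    }
  }
}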

Note:
Logstash version: 6.3.1
Elasticsearch version: 6.2

