#StackBounty: #elasticsearch #grafana #grafana-variable How to create two levels Grafana Variable from ElasticSearch?

Bounty: 50

I am working on creating some Grafana dashboards, at the moment from an Elasticsearch data source.

When I try to create a variable in Grafana like the one below:

{"find": "terms", "field":  "myServer.name"}

I get None, instead of getting these names: heroku, k8s, aws.

I tried looking through docs and existing StackOverflow questions, but it is still unclear how I can make it work.
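One variant I have come across but have not yet confirmed (assuming myServer.name is a text field with a keyword sub-field, which terms lookups generally need) is querying the sub-field instead:

{"find": "terms", "field": "myServer.name.keyword"}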

Am I doing it wrong, or is it a Grafana limitation?


Get this bounty!!!

#StackBounty: #laravel #elasticsearch Laravel Elasticsearch JSON Mapping Issue

Bounty: 50

I’m currently using Laravel v7.2, have the babenkoivan/scout-elasticsearch-driver (4.2) installed, and am using AWS Elasticsearch 7.1. I have several tables mapped in my application that are working fine, but I am having issues with a nested mapping that was previously working and is now broken.

I’m saving data into a table and having that table data copied into AWS Elasticsearch. I’m using MySQL 5.6, so I store the JSON data in a TEXT column. Data in the table looks as follows:

'id' => 1,
'results' => [{"finish":1,"other_id":1,"other_data":1}]

I have my model setup with the following mapping:

protected $mapping = [
    'properties' => [
        'results' => [
            'type' => 'nested',
            'properties' => [
                'finish' => [
                    'type' => 'integer'
                ],
                'other_id' => [
                    'type' => 'integer'
                ],
                'other_data' => [
                    'type' => 'integer'
                ]
            ]
        ],
    ]
];

And if it’s of any use, the toSearchableArray:

public function toSearchableArray()
{
    $array = [
        'id' => $this->id,
        'results' => $this->results
    ];

    return $array;
}

I have no problem creating this index, and it worked up until a couple of months ago. I don’t know exactly when it broke, as it wasn’t a high-priority item; it may have occurred around an AWS ES update, but I’m not sure why this in particular would break. I now receive the following error:

{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"object mapping for [results] tried to parse field [results] as object, but found a concrete value"}],"type":"mapper_parsing_exception","reason":"object mapping for [results] tried to parse field [results] as object, but found a concrete value"},"status":400}

Thinking it was breaking due to the array wrapper, I’ve also tried storing the data in the table as follows, but to no avail:

'id' => 1,
'results' => {"finish":1,"other_id":1,"other_data":1}

I’m at a loss for what else to try to get this working again.
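For what it’s worth, here is a minimal reproduction outside of Laravel showing the difference between sending the TEXT column’s raw string versus decoded JSON (client setup is a placeholder; my hunch is that the model attribute is reaching ES as a string):

from elasticsearch import Elasticsearch
import json

es = Elasticsearch()
raw = '[{"finish":1,"other_id":1,"other_data":1}]'

# Reproduces the mapper_parsing_exception: 'results' arrives as a plain
# string ("concrete value"), but the mapping expects nested objects.
# es.index(index="results_index", body={"id": 1, "results": raw})

# Indexes cleanly: the string is decoded into objects first.
es.index(index="results_index", body={"id": 1, "results": json.loads(raw)})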

EDIT: Here is the entire model:

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use ScoutElastic\Searchable;

class ResultsModel extends Model
{
    use Searchable;

    protected $indexConfigurator = \App\MyIndexConfiguratorResults::class;

    protected $searchRules = [
        //
    ];

    protected $mapping = [
        'properties' => [
            'results' => [
                'type' => 'nested',
                'properties' => [
                    'finish' => [
                        'type' => 'integer'
                    ],
                    'other_id' => [
                        'type' => 'integer'
                    ],
                    'other_data' => [
                        'type' => 'integer'
                    ]
                ]
            ],
        ]
    ];

    public function searchableAs()
    {
        return 'results_index';
    }

    public function toSearchableArray()
    {
        $array = [
            'id' => $this->id,
            'results' => $this->results
        ];

        return $array;
    }

    /**
     * The database table used by the model.
     *
     * @var string
     */
    protected $table = 'results_table';
}

Here is App\MyIndexConfiguratorResults::class:

<?php

namespace App;

use ScoutElastic\IndexConfigurator;
use ScoutElastic\Migratable;

class MyIndexConfiguratorResults extends IndexConfigurator
{
    use Migratable;

    protected $name = "results_index";

    /**
     * @var array
     */
    protected $settings = [
        //
    ];
}

This is all that is needed for Laravel to automatically update AWS ES each time the table is updated. For the initial load, I would SSH in and run the following command to create the index. This, as well as elastic:migrate and any update/insert into the table, produces the mapping error.

php artisan elastic:create-index results_index


Get this bounty!!!

#StackBounty: #elasticsearch #logstash #kibana Configure ingest manager in elasticseach 7.9

Bounty: 50

I have configured ELK on a single VM (CentOS 7 / 10 GB / 4 CPU), along with Filebeat and Auditbeat, as a POC, and encountered the error below while trying to use the Ingest Manager.


I followed the security setup as suggested in this link, but Elasticsearch won’t come up after many attempts at troubleshooting, and I need some help fixing it.

The logs show the following error:

HTTPS is required in order to use the API key service; please enable HTTPS using the [xpack.security.http.ssl.enabled] setting or disable the API key service using the [xpack.security.authc.api_key.enabled] setting

The elasticsearch config is below:

#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
bootstrap.memory_lock: true
#
#action.destructive_requires_name: true
xpack.license.self_generated.type: basic
xpack.monitoring.collection.enabled: true
xpack.security.enabled: true
xpack.security.authc.realms.file.users.order: 0
xpack.security.transport.ssl.enabled: true
xpack.security.authc.api_key.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/ca.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/ca.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: /etc/elasticsearch/certs/ca.p12
xpack.security.http.ssl.truststore.path: /etc/elasticsearch/certs/ca.p12
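
One thing I notice re-reading the pasted config: several keys are defined more than once (xpack.security.enabled, xpack.security.transport.ssl.enabled, xpack.security.http.ssl.enabled), and Elasticsearch rejects duplicate settings keys at startup. A de-duplicated sketch of the same security block (same paths and values; whether this alone fixes the startup is an open question):

xpack.license.self_generated.type: basic
xpack.monitoring.collection.enabled: true
xpack.security.enabled: true
xpack.security.authc.realms.file.users.order: 0
xpack.security.authc.api_key.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/ca.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/ca.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: /etc/elasticsearch/certs/ca.p12
xpack.security.http.ssl.truststore.path: /etc/elasticsearch/certs/ca.p12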

The Kibana config is below:

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"
xpack.ingestManager.enabled: true
xpack.ingestManager.fleet.tlsCheckDisabled: true
xpack.security.enabled: true
xpack.encryptedSavedObjects.encryptionKey: "8ru3HMCHCLJfcWdKz2rtHQUX43krs5t9deb"
elasticsearch.username: "kibana_system"
elasticsearch.password: "welcome"
xpack.security.encryptionKey: "something_at_least_32_characters"
xpack.security.session.idleTimeout: "10m"
xpack.security.session.lifespan: "8h"


Get this bounty!!!

#StackBounty: #c# #elasticsearch #nest NEST: Find Best match using elastic search

Bounty: 50

I am trying to write a query to find the best match. I have an index with the structure below.

public class UserProfileSearch
{
    public int Id { get; set; }
    public int Sex { get; set; }
    public int Age { get; set; }
    public int MaritalStatus { get; set; }
    public int CountryLivingIn { get; set; }
    public double Height { get; set; }
    public double BodyWeight { get; set; }
    ...
}

When I start my search I use different parameters. I receive the search parameters as an object with the structure below.

public class UserPreference
{
    public int Id { get; set; }
    public int FromAge { get; set; }
    public int ToAge { get; set; }
    public int FromHeight { get; set; }
    public int ToHeight { get; set; }
    public string MartialStatus { get; set; } // Ids in comma-separated form: 11,23,24...
    public string CountriesLivingIn { get; set; } // Also ids in comma-separated form: 11,23,24...
    public string Sexes { get; set; }
    ...
}

I am trying to achieve this as below.

QueryContainer qCs = null;
userPartnerPreference.CountriesLivingIn.Split(",").ToList().ForEach(id =>
{
    qCs |= new TermQuery { Field = "countryLivingIn", Value = int.Parse(id) };
});

QueryContainer qSs = null;
userPartnerPreference.MartialStatus.Split(",").ToList().ForEach(id =>
{
    qSs &= new TermQuery { Field = "maritalStatus", Value = int.Parse(id) };
});

var searchResults = await _elasticClient.SearchAsync<UserProfileSearch>(s => s
    .Query(q => q
        .Bool(b => b
            .Must(qSs)
            .Should(
                bs => bs.Range(r => r.Field(f => f.Age).GreaterThanOrEquals(userPartnerPreference.FromAge).LessThan(userPartnerPreference.ToAge)),
                bs => bs.Range(r => r.Field(f => f.Height).GreaterThanOrEquals(userPartnerPreference.FromHeight).LessThanOrEquals(userPartnerPreference.ToHeight)),
                bs => bs.Bool(bsb => bsb.Should(qCs))
            )
        )
    )
);

I basically want to find the best-match results based on the parameters passed, ordered by the highest number of fields matched. I’m new to Elasticsearch, so is this the way to do it?

Note: I have other fields that I need to match. There are around 15 fields, which I am planning to put inside should, like age and height.
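
For clarity, this is roughly the request body I believe the code above produces, hand-translated with example values, so treat it as an approximation (note that because qSs combines the marital status terms with &=, they all land in must and would all have to match at once, which may not be intended):

{
  "query": {
    "bool": {
      "must": [
        { "term": { "maritalStatus": 11 } },
        { "term": { "maritalStatus": 23 } }
      ],
      "should": [
        { "range": { "age": { "gte": 25, "lt": 35 } } },
        { "range": { "height": { "gte": 150, "lte": 180 } } },
        {
          "bool": {
            "should": [
              { "term": { "countryLivingIn": 11 } },
              { "term": { "countryLivingIn": 23 } }
            ]
          }
        }
      ]
    }
  }
}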


Get this bounty!!!

#StackBounty: #elasticsearch #elasticsearch-aggregation #elasticsearch-query Elasticsearch Top level aggregation / search body metadata

Bounty: 50

It’s possible to include sub-aggregation metadata like so:

GET kibana_sample_data_flights/_search
{
  "size": 0,
  "aggs": {
    "by_delay": {
      "terms": {
        "field": "FlightDelay"
      },
      "meta": {                <---
        "key": "val"
      }
    },
    "by_cancelled": {
      "terms": {
        "field": "Cancelled"
      },
      "meta": {                <---
        "key": "val"
      }
    }
  }
}

Now, there are dozens of such sub-aggs and some shared metadata. Although it only applies to the aggs, I wouldn’t mind putting it somewhere in the query section. So is there a per-search-body metadata field?

I’m thinking I could wrap all these sub-aggs inside of a match_all filter group:

{
  "size": 0,
  "aggs": {
    "meta_parent": {
      "filter": {
        "match_all": {}
      },
      "meta": {
        "shared": "meta"
      },
      "aggs": {
        "by_delay": ...,
        "by_cancelled": ...
      }
    }
  }
}

Is there a better way?


Get this bounty!!!

#StackBounty: #python #elasticsearch How to update the elastic search document with python

Bounty: 50

I have the code below to add data into Elasticsearch:

from elasticsearch import Elasticsearch

es = Elasticsearch()
es.cluster.health()

r = [{'Name': 'Dr. Christopher DeSimone', 'Specialised and Location': 'Health'},
     {'Name': 'Dr. Tajwar Aamir (Aamir)', 'Specialised and Location': 'Health'},
     {'Name': 'Dr. Bernard M. Aaron', 'Specialised and Location': 'Health'}]

es.indices.create(index='my-index_1', ignore=400)

for doc in r:
    # es.indices.update(index="my-index_1", body=doc)
    es.index(index="my-index_1", body=doc)

# Retrieve the data
es.search(index='my-index_1')['hits']['hits']

Requirement: how do I update the documents when the data changes to the following?

r = [{'Name': 'Dr. Messi', 'Specialised and Location': 'Health'},
     {'Name': 'Dr. Christiano', 'Specialised and Location': 'Health'},
     {'Name': 'Dr. Bernard M. Aaron', 'Specialised and Location': 'Health'}]

Here, Dr. Messi and Dr. Christiano should be updated in the index, while Dr. Bernard M. Aaron should not be updated, as he is already present in the index.
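
One pattern I am considering (a sketch, not tested against my data): give each document a deterministic _id derived from the name, so re-indexing a changed record overwrites it and re-indexing an unchanged record leaves it effectively untouched:

import hashlib
from elasticsearch import Elasticsearch

es = Elasticsearch()

for doc in r:
    # Same name always maps to the same document id.
    doc_id = hashlib.sha1(doc['Name'].encode('utf-8')).hexdigest()
    es.index(index='my-index_1', id=doc_id, body=doc)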


Get this bounty!!!

#StackBounty: #linux #ubuntu #elasticsearch #vagrant #elastic-appsearch Elastic Enterprise/App search installation problem on ubuntu

Bounty: 50

I’m running a Vagrant box which runs Ubuntu inside a VM (using the Laravel Homestead box).

I’m trying to install the Elastic App-search product.

The first requirement is to install Elasticsearch, which I have done multiple times. I followed these steps:
https://www.elastic.co/guide/en/elasticsearch/reference/current/deb.html

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt-get install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get install elasticsearch

I’m using the systemd configuration:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service

I’m running curl localhost:9200 and everything is working.

Next, I tried to install Elastic App Search:
https://www.elastic.co/guide/en/app-search/current/installation.html#installation-self-managed
This doesn’t have instructions for Debian systems, but it does have a .deb install file. I downloaded the file and put it in my project root.

I ran dpkg -i on the file and it seems to have installed. When I run the command to check the file locations, it shows this:

 dpkg -L enterprise-search
/.
/etc
/etc/init.d
/etc/init.d/enterprise-search
/var
/var/log
/var/log/enterprise-search
/usr
/usr/share
/usr/share/enterprise-search
/usr/share/enterprise-search/README.md
/usr/share/enterprise-search/bin
/usr/share/enterprise-search/bin/vendor
/usr/share/enterprise-search/bin/vendor/filebeat
/usr/share/enterprise-search/bin/vendor/filebeat/filebeat-linux-x86_64
/usr/share/enterprise-search/bin/enterprise-search
/usr/share/enterprise-search/filebeat
/usr/share/enterprise-search/filebeat/ecs-template.json
/usr/share/enterprise-search/filebeat/filebeat-ecs.yml
/usr/share/enterprise-search/lib
/usr/share/enterprise-search/lib/require_java_version.sh
/usr/share/enterprise-search/lib/enterprise-search.war
/usr/share/enterprise-search/jetty
/usr/share/enterprise-search/jetty/webserver-ssl.xml
/usr/share/enterprise-search/jetty/webserver-ssl-with-redirect.xml
/usr/share/enterprise-search/jetty/webserver.xml
/usr/share/enterprise-search/LICENSE
/usr/share/enterprise-search/config
/usr/share/enterprise-search/config/env.sh
/usr/share/enterprise-search/config/enterprise-search.yml
/usr/share/enterprise-search/NOTICE.txt
/usr/share/doc
/usr/share/doc/enterprise-search
/usr/share/doc/enterprise-search/changelog.gz
/usr/lib
/usr/lib/systemd
/usr/lib/systemd/system
/usr/lib/systemd/system/enterprise-search.service

I’m not really sure if this is the correct location. I want it to live in the same place as my Elasticsearch install, but I’m actually not sure. I did all the next steps of the install process and ran:
./usr/share/enterprise-search/bin/elasticsearch

But this gives me the error:

Could not find java in PATH

I’m very confused by this, since the main Elasticsearch installation works and that also needs Java. I also want it to run with systemd auto-enable, and I want it to be available via enterprise-search start / stop. I’m not sure how to handle that.


Get this bounty!!!

#StackBounty: #elasticsearch How to get results weighted by references from ElasticSearch

Bounty: 100

I have a dataset consisting of Notes referencing other Notes.

{id:1, note: "lorem ipsum", references: [3]},
{id:2, note: "lorem ipsum", references: [1,3]},
{id:3, note: "lorem ipsum", references: [2]},

I want Elasticsearch to use the references to weight the results, so in this case if I search for lorem I should get id 3 back first, since it has the most inbound references. According to their docs, their graph solution does exactly this, but I also see examples where they are doing similar things.

But there is no explanation of how this is mapped to the index. So my question is: how does one set up an ES index that uses references like this?
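
In case it helps frame the question: the closest approach I can imagine is denormalizing an inbound-reference count onto each note and boosting on it, e.g. with a rank_feature field (the field name and mapping here are my own illustration, not something from the docs):

PUT notes
{
  "mappings": {
    "properties": {
      "note": { "type": "text" },
      "references_count": { "type": "rank_feature" }
    }
  }
}

GET notes/_search
{
  "query": {
    "bool": {
      "must": [{ "match": { "note": "lorem" } }],
      "should": [{ "rank_feature": { "field": "references_count" } }]
    }
  }
}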


Get this bounty!!!

#StackBounty: #python #json #elasticsearch Insert multiple documents in Elasticsearch – bulk doc formatter

Bounty: 100

TL;DR: How can I bulk-format my JSON file for ingestion into Elasticsearch?

I am attempting to ingest some NOAA data into Elasticsearch and have been utilizing the NOAA Python SDK.

I have written the following Python script to load the data and store it in a JSON format.

from noaa_sdk import noaa
import json

n = noaa.NOAA()
alerts = n.alerts()
f = open('nhc_alerts.json', 'w')
json.dump(alerts, f)
f.write('\n')

This script takes care of some of the formatting issues I encountered. My next hurdle has been formatting the data so that I can utilize the bulk import function in Elasticsearch. I stumbled across an answer which works to an extent; the issue I run into is that it inserts the appropriate index action line, but it does so after every character.

The bulk convert script:

import json


JSON_FILE_IN = "nhc_alerts.json"
JSON_FILE_OUT = "nhc_bulk.json"


out = open(JSON_FILE_OUT, 'w')
with open(JSON_FILE_IN, 'r') as json_in:
    docs = json.dumps(json_in.read())
    for doc in docs:
        out.write('%s\n' % json.dumps({'index': {}}))
        out.write('%s\n' % json.dumps(doc, indent=0).replace('\n', ''))

Output from bulk script:

{"index": {}}
"""
{"index": {}}
"{"
{"index": {}}
"\"
{"index": {}}
"""
{"index": {}}
"@"
{"index": {}}
"c"
{"index": {}}
"o"
{"index": {}}
"n"
{"index": {}}
"t"
{"index": {}}
"e"
{"index": {}}
"x"
{"index": {}}
"t"
{"index": {}}
"\"
{"index": {}}
"""
{"index": {}}
":"
{"index": {}}
" "
{"index": {}}
"["
{"index": {}}
"\"
{"index": {}}
"""
{"index": {}}
"h"
{"index": {}}
"t"
{"index": {}}
"t"
{"index": {}}
"p"
{"index": {}}
"s"
{"index": {}}
":"
{"index": {}}
"/"
{"index": {}}
"/"
{"index": {}}
"r"
{"index": {}}
"a"
{"index": {}}
"w"
{"index": {}}
"."
{"index": {}}
"g"
{"index": {}}
"i"
{"index": {}}
"t"
{"index": {}}
"h"
{"index": {}}
"u"
{"index": {}}
"b"
{"index": {}}
"u"
{"index": {}}
"s"
{"index": {}}
"e"
{"index": {}}
"r"
{"index": {}}
"c"
{"index": {}}
"o"
{"index": {}}
"n"
{"index": {}}

Ideally, I’d like to combine both of these scripts into one, but at this point I’ll run two separate scripts if it gets the job done.
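
The per-character output seems to happen because json.dumps returns a string, and iterating over a string yields one character at a time; json.loads would parse the file into objects instead. For what it’s worth, here is a combined sketch of what I am aiming for (assuming the SDK returns parsed GeoJSON with a features list, and using nhc_alerts as a placeholder index name):

from noaa_sdk import noaa
from elasticsearch import Elasticsearch, helpers

n = noaa.NOAA()
alerts = n.alerts()  # already a parsed structure; no intermediate file needed

es = Elasticsearch()
# Let the client build the bulk payload, one action per alert.
actions = ({'_index': 'nhc_alerts', '_source': feature}
           for feature in alerts['features'])
helpers.bulk(es, actions)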


Get this bounty!!!

#StackBounty: #elasticsearch Scaling Elasticsearch down to single-node

Bounty: 50

Is it possible to scale Elasticsearch from multiple nodes down to one node?

I have a 3-node cluster that is overkill for the amount of data being logged. To scale it down, I set "cluster.routing.allocation.exclude._ip" to the IPs of nodes 2 and 3 to move all the data onto one node.

I stopped Elasticsearch on node 3, and the cluster remained healthy.

In preparation for turning off the second node, I adjusted the cluster settings to require a quorum of 1 and made sure the setting was persistent instead of transient. Then I stopped Elasticsearch on node 2.

Finally, I went onto node 1, set discovery.type to single-node, and restarted Elasticsearch.

Elasticsearch is throwing an error:

cannot start with [discovery.type] set to [single-node] when local node {node1.customer.local}{r5tnzHEYRN6TNPNur9jpqA}{PjBDleWmTeSvkRUuUNWVWw}{10.132.135.55}{10.132.135.55:9300}{dilm}{ml.machine_memory=33730138112, xpack.installed=true, ml.max_open_jobs=20} does not have quorum in voting configuration VotingConfiguration{tlbB7vMgQXOzvO36iboqOQ,r5tnzHEYRN6TNPNur9jpqA,s1fLGX7RStGpFh2xPZkkIw}

How can I scale down to one node?
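
For reference, the Elasticsearch docs describe a voting configuration exclusions API for retiring master-eligible nodes before shrinking a cluster, along the lines of the following before stopping nodes 2 and 3 (node names are placeholders; the exact parameter form depends on the 7.x version):

POST /_cluster/voting_config_exclusions?node_names=node2,node3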


Get this bounty!!!