Search-as-you-type functionality in Amazon CloudSearch - amazon-dynamodb

How should I go about implementing search-as-you-type in Amazon CloudSearch to search Amazon DynamoDB, the way Algolia does it?

You can search-as-you-type by using a prefix search every time the user enters a character -- it would look something like this:
(prefix field=name 'dri')
The prefix search is necessary because a regular search for q=dri would not match drive, drivel, etc.
Here are the prefix search docs: http://docs.aws.amazon.com/cloudsearch/latest/developerguide/searching-text.html#searching-text-prefixes
If you don't want to specify the fields for your prefix search you can use a query of the form q=dri* | dri (the non-* term is necessary because q=dri* does not match the word "dri" -- it requires there to be at least one additional character).
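For completeness, here is a minimal sketch of issuing that structured prefix query from code with boto3; the field name "name", the domain endpoint, and the result handling are assumptions for the example, not part of the original answer:

import boto3

# Search endpoint of your CloudSearch domain (assumption -- substitute your own).
client = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://search-mydomain-xxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com",
)

def search_as_you_type(fragment):
    # Re-run a structured prefix query each time the user types another character.
    response = client.search(
        query=f"(prefix field=name '{fragment}')",
        queryParser="structured",
        size=10,
    )
    return [hit["fields"] for hit in response["hits"]["hit"]]

print(search_as_you_type("dri"))

In practice you would also debounce the keystrokes on the client side so you don't fire a request for every single character.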

Related

Ignore Turkish Characters On Firestore Query

I have a .NET app that uses Firestore as a database, and it uses a Firestore query to find some data. The problem is with data fields that include Turkish characters: if someone uses my app and searches without typing the Turkish characters, the query cannot find the data.
For example, if my name is saved as "Ertuğrul" and the user searches for "Ertugrul", the query cannot find it. I need it to find it. Is there a way to do that?
My code that uses query is here:
QRef = DataBase.Collection("CollName").Document("DocName").Collection("CollName")
.WhereGreaterThanOrEqualTo("NameSurname", $"{NameSurname}")
.WhereLessThanOrEqualTo("NameSurname", $"{NameSurname}\uF7FF");
Firestore queries always return documents where a particular field holds an exact match. If you want to be able to search for "Ertuğrul" as well as for "Ertugrul", then besides the "NameSurname" field you should consider adding a new field called "NameSurnameWithoutSpecialCharacters" and storing each name without those Turkish characters.
When a user searches, simply check whether the searched term contains "special" characters. If it does, search on "NameSurname"; otherwise search on the newly created field.
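As a rough illustration of that approach, here is a Python sketch using the google-cloud-firestore client; the collection path, the character mapping, and the helper names are assumptions for the example, not part of the original answer:

from google.cloud import firestore

# Map Turkish characters to their ASCII look-alikes (assumed mapping).
TURKISH_TO_ASCII = str.maketrans("çÇğĞıİöÖşŞüÜ", "cCgGiIoOsSuU")

def without_special_characters(name):
    return name.translate(TURKISH_TO_ASCII)

db = firestore.Client()

def save_person(doc_id, name_surname):
    # Store both the original name and its normalized twin.
    db.collection("CollName").document(doc_id).set({
        "NameSurname": name_surname,
        "NameSurnameWithoutSpecialCharacters": without_special_characters(name_surname),
    })

def search_person(term):
    # Pick the field to query based on whether the term contains Turkish characters.
    field = ("NameSurname" if term != without_special_characters(term)
             else "NameSurnameWithoutSpecialCharacters")
    query = (db.collection("CollName")
               .where(field, ">=", term)
               .where(field, "<=", term + "\uf7ff"))
    return [doc.to_dict() for doc in query.stream()]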

What's the difference between the 'field' and 'field.keyword' fields in Kibana?

When adding a filter in Kibana, every string field has both a plain entry and a .keyword entry. What is the difference?
Since Elasticsearch 5 there is no string field type; instead there are two types:
Keyword - use it for filtering, aggregation and sorting.
Text - use it for full-text search.
When you index a document with a string field, for example name, Elasticsearch maps the field to a text field for search and to a keyword field for filtering.
Kibana uses the field for filtering and aggregation, therefore it uses the keyword version.
Look at the Elasticsearch documentation.
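To make the mapping concrete, here is a sketch (with the Python Elasticsearch client) of the multi-field mapping that dynamic mapping produces for strings since version 5; the index name "logs", the field name "name", and the typeless mapping format of recent versions are assumptions:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# A string field becomes a text field for full-text search plus a keyword sub-field
# for filtering, sorting and aggregations.
es.indices.create(index="logs", body={
    "mappings": {
        "properties": {
            "name": {
                "type": "text",
                "fields": {
                    "keyword": {"type": "keyword", "ignore_above": 256}
                }
            }
        }
    }
})

# Full-text search goes against the analyzed field...
es.search(index="logs", body={"query": {"match": {"name": "drive"}}})
# ...while filters and aggregations go against the .keyword sub-field.
es.search(index="logs", body={"size": 0,
                              "aggs": {"by_name": {"terms": {"field": "name.keyword"}}}})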
In fact, it is not a Kibana issue; it is an Elasticsearch feature that makes both full-text and keyword search possible. The field.keyword is for keyword (exact) search and aggregation, while the original field is used for full-text search.
There is an official blog specialized for this:
https://www.elastic.co/cn/blog/strings-are-dead-long-live-strings
There is also a post on the official discuss board, here is the link for your reference: https://discuss.elastic.co/t/why-am-i-getting-keyword-for-my-feilds-in-index-pattern/137983
To add on to this, in case it helps someone: the .keyword field can be used to create scripted fields on the index if you already have a matching keyword field, so you do not need to go through the trouble of switching fielddata to true and reindexing.

Querying for exact match in Kibana

In Kibana, when I search my documents, I need to look for an exact match:
In my document I have a field named message.
So If I search (Using Kibana) something like:
message: "Provider replied with error code 2006"
I get all the documents that contain one of those words.
I would like to have an exact match.
I am running Kibana 5.3.2 and Elasticsearch 5.3.2.
In Elasticsearch there are two types of "strings":
Keyword:
They are typically used for filtering (find me all blog posts where status is published), for sorting, and for aggregations. Keyword fields are only searchable by their exact value.
See the docs
Text
A field to index full-text values, such as the body of an email or the description of a product. These fields are analyzed, that is, they are passed through an analyzer to convert the string into a list of individual terms before being indexed.
See the docs
Sometimes it is possible to access the keyword version by adding ".keyword" to your field name. So try this one:
message.keyword: "Provider replied with error code 2006"
Otherwise you have to check your mapping and change the field to keyword.
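If you want to verify the behaviour outside Kibana, here is a sketch of the equivalent exact-match query with the Python Elasticsearch client; the index name "logs" is an assumption:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# A term query on the keyword sub-field matches only documents whose message field
# holds exactly this string, unlike a match query on the analyzed message field.
result = es.search(index="logs", body={
    "query": {
        "term": {
            "message.keyword": "Provider replied with error code 2006"
        }
    }
})
print(result["hits"]["total"])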

How to model Not In query in Couch DB [duplicate]

Folks, I was wondering what is the best way to model documents and/or map functions that allow me to run "not equals" queries.
For example, my documents are:
1. { name : 'George' }
2. { name : 'Carlin' }
I want to trigger a query that returns every document where name is not equal to 'John'.
Note: I don't have all possible names beforehand, so the query parameter can be any arbitrary text, like 'John' in my example.
In short: there is no easy solution.
You have four options:
sending a multi range query
filter the view response with a server-side list function
using a CouchDB plugin
use the mango query language
sending a multi range query
You can request the view with two ranges defined by startkey and endkey. You have to choose the ranges so that the key John is not requested.
Unfortunately you have to find the commit/patch that exists somewhere and compile your CouchDB with it; it's not included in the official source.
filter the view response with a server-side list function
It's not recommended, but you can use a list function and drop the rows with the key John from your response, much like you would filter a JavaScript array.
using a CouchDB plugin
Create an additional index with, e.g., couchdb-lucene. The Lucene server has such query capabilities.
use the "mango" query language
It's included in the CouchDB 2.0 developer preview. It is not yet ready for production, but it will definitely be included in the stable release.
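As a quick illustration of the Mango option, here is a sketch that posts a selector with the $ne operator to the _find endpoint; the database name "people", the server URL and the credentials are assumptions:

import requests

query = {
    "selector": {"name": {"$ne": "John"}},   # every document whose name is not "John"
    "fields": ["_id", "name"],
    "limit": 100,
}

response = requests.post(
    "http://localhost:5984/people/_find",
    json=query,
    auth=("admin", "password"),              # assumed credentials
)
print(response.json()["docs"])

Note that $ne cannot be answered from an index alone, so CouchDB has to scan documents, which is fine for small databases but slow for large ones.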

Complex queries in Kibana, or querying for different values of a single field type

I am new to Kibana. I have successfully installed Logstash, Elasticsearch and Kibana. All the links and documents I have read use simple query syntax: searching by text, typing a phrase, or combining terms with logical operators. But all of this is very basic.
How can we query in more detail? For example, I have the logs of my Magento store, and each log has a timestamp, a product ID, and the action, i.e. whether the product was purchased, viewed or removed.
I imported these logs into Kibana via Logstash.
Now I want to query the logs on the action field, not across different fields. The query "added" OR "removed" returns the logs with the added action and the logs with the removed action, but "added" AND "removed" returns zero records, because both words belong to the same field and no single log can have two values in the action field (a product is either added or removed). I need to find out which products are added and removed the most, and build a visualization of that.
Please suggest any tutorials for studying Kibana, e.g. how to configure it and how to write complex queries.
You can try to parse your logs in Logstash into multiple fields.
For your requirement, say, add an "Action" field and a "Product" field.
In Kibana you can then add a table with the terms aggregation set to the "Product" field.
So when you search for "Added", the table will show all the products with the Added action.
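If it helps, here is a sketch of the same kind of question asked directly against Elasticsearch with a terms aggregation; the index name "magento-logs" and the field names Action and Product are assumptions based on the answer above:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Count how often each product appears among the logs with the "Added" action.
result = es.search(index="magento-logs", body={
    "size": 0,
    "query": {"term": {"Action.keyword": "Added"}},
    "aggs": {
        "top_products": {"terms": {"field": "Product.keyword", "size": 10}}
    }
})
for bucket in result["aggregations"]["top_products"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])

Running the same query with "Removed" and comparing the bucket counts answers the "added and removed the most" part of the question.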
I wanted to match two disparate search terms in the SAME field using logical operators. For example, a field called 'product_comments' has the value 'residential plumbing bathroom sink', and I want "residential" AND "sink" to match.
The documentation here: https://lucene.apache.org/core/2_9_4/queryparsersyntax.html#AND says this is possible, just as the OP originally tried.
Using Kibana 5.1.1, I found that the logical operator is case-sensitive:
"residential" and "sink" matched documents with the word 'and' in them, but
"residential" AND "sink" worked as expected.
