Authorization error when trying to apply an index template to Elasticsearch - Kibana

We have an Elasticsearch service running in AWS; the Elasticsearch version is 7.8.0. I need to add an index template to limit the number of shards allocated to new indices when they are created.
I followed this example of how to add an index template and got this very simple template:
PUT _index_template/shard_limitation
{
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 1
    }
  }
}
When running this request from inside Kibana's Dev Tools console I get the following error: {"Message":"Your request: '/_index_template/shard_limitation' is not allowed."}, along with an Unauthorized - 401 icon. I'm running this command as the admin user.
I tested it locally (Elasticsearch running on my machine) and it all works fine. Any idea why this might happen?
SOLUTION:
As suggested by @Ajinkya, the correct way to do this is to not prefix the template API with "_index", i.e. use _template instead of _index_template. The correct way to achieve what I was trying to do is the following:
PUT _template/shard_limitation
{
  "index_patterns": ["some-pattern"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}

The '_index_template' operation might not be supported in AWS Elasticsearch. You can check the supported operations for your AWS ES version here.
You can still use the '_template' API to add an index template:
PUT _template/shard_limitation
{
  "index_patterns": ["test*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}
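If you need to apply the template from outside the Dev Tools console, here is a minimal sketch using Python's requests against the domain's HTTP endpoint; the endpoint URL and credentials are placeholders, not values from the original post:

import requests

ES_ENDPOINT = "https://my-domain.eu-west-1.es.amazonaws.com"  # placeholder AWS ES endpoint
AUTH = ("admin", "admin-password")  # placeholder credentials

template = {
    "index_patterns": ["some-pattern"],
    "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 1
    }
}

# Create (or overwrite) the legacy index template.
resp = requests.put(f"{ES_ENDPOINT}/_template/shard_limitation", json=template, auth=AUTH)
resp.raise_for_status()

# Verify that the template was stored.
print(requests.get(f"{ES_ENDPOINT}/_template/shard_limitation", auth=AUTH).json())

New indices whose names match "some-pattern" will then be created with a single primary shard and one replica.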

Related

Ingest pipeline is not working over logs obtained from an event hub with filebeat

I am sending logs to an Azure Event Hub with Serilog (using WriteTo.AzureEventHub(eventHubClient)). After that I run a Filebeat process with the azure module enabled, so these logs are sent to Elasticsearch and I can explore them with Kibana.
The problem I have is that all the information goes into the "message" field; I need to split my log data into separate fields to be able to run useful queries.
The way I found was to create an ingest pipeline in Kibana and, through a grok processor, separate the fields inside the "message" and generate multiple fields. In filebeat.yml I set the pipeline name, but nothing happens; it seems the pipeline is not being applied.
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  pipeline: "filebeat-otc"
Does anybody know what I am missing? Thanks in advance.
EDIT: I will add an example of my pipeline and my data. In the simulation it works properly:
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "%{TIME:timestamp}\\s%{LOGLEVEL}\\s{[a-zA-Z]*:%{UUID:CorrelationID},[a-zA-Z]*:%{TEXT:OperationTittle},[a-zA-Z]*:%{TEXT:OriginSystemName},[a-zA-Z]*:%{TEXT:TargetSystemName},[a-zA-Z]*:%{TEXT:OperationProcess},[a-zA-Z]*:%{TEXT:LogMessage},[a-zA-Z]*:%{TEXT:ErrorMessage}}"
          ],
          "pattern_definitions": {
            "LOGLEVEL": "\\[[^\\]]*\\]",
            "TEXT": "[a-zA-Z0-9- ]*"
          }
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "15:13:59 [INF] {CorrelationId:83355884-a351-4c8b-af8d-b77c48462f36,OperationTittle:Operation1,OriginSystemName:Fexa,TargetSystemName:Usina,OperationProcess:Testing Log Data,LogMessage:Esto es una buena prueba,ErrorMessage:null}"
      }
    },
    {
      "_source": {
        "message": "20:13:48 [INF] {CorrelationId:8451ee54-efca-40be-91c8-8c8e18e33f58,OperationTittle:null,OriginSystemName:Fexa,TargetSystemName:Donna,OperationProcess:Testing Log Data,LogMessage:null,ErrorMessage:null}"
      }
    }
  ]
}
It seems that when you use a module, it creates and uses its own ingest pipeline in Elasticsearch, and the pipeline option in the output is ignored.
So my solution was to modify index.final_pipeline. In Kibana I went to Stack Management / Index Management, found my index, opened Edit Settings, and set "index.final_pipeline": "the-name-of-my-pipeline".
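If you prefer to set it without the Kibana UI, here is a minimal sketch of the same change through the index settings API using Python's requests; the endpoint and index name are placeholders:

import requests

ES_ENDPOINT = "http://localhost:9200"  # placeholder Elasticsearch endpoint

# index.final_pipeline is a dynamic setting, so it can be updated on an existing index.
resp = requests.put(
    f"{ES_ENDPOINT}/my-filebeat-index/_settings",  # placeholder index name
    json={"index": {"final_pipeline": "the-name-of-my-pipeline"}}
)
resp.raise_for_status()
print(resp.json())

The final pipeline runs after any request or default pipeline, which is why it still applies even though the module manages its own ingest pipeline.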
I hope this helps somebody.
This was thanks to leandrojmp

Firebase dynamic link generated via API: open the App Store

I'm creating a dynamic link via the API.
How can I specify to open the App Store if the app is not installed?
Here is the body of my request:
{
  "dynamicLinkInfo": {
    "domainUriPrefix": "https://wi.page.link",
    "link": "https://wiapp.com.au/faq?promocode=mypromo_code",
    "iosInfo": {
      "iosBundleId": "com.direce.sr",
      "iosFallbackLink": "id1356389392",
      "iosAppStoreId": "id1368389392"
    },
    "socialMetaTagInfo": {
      "socialImageLink": "https://vignette.wikia.nocookie.net/doraemon/images/b/b8/Doraemon_2005.PNG/revision/latest?cb=20151207094313&path-prefix=en",
      "socialTitle": "my titu",
      "socialDescription": "descripotio"
    }
  },
  "suffix": {
    "option": "UNGUESSABLE"
  }
}
This works if I create the dynamic link via the Firebase console, where I can specify what to do if the app is not installed.
OK!
Found the problem, it is the
"iosAppStoreId": "id1368389392"
The value is different when creating from the dashboard versus the API,
so the correct one when calling the API should be without the "id" prefix:
"iosAppStoreId": "1368389392"
You can add a parameter called iosInfo, which has a property called iosAppStoreId (the app store id).
Check the documentation page here.
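To illustrate the fix, here is a minimal sketch of creating the link through the Dynamic Links REST endpoint with Python's requests, using the corrected iosAppStoreId without the "id" prefix; the Web API key is a placeholder:

import requests

API_KEY = "YOUR_WEB_API_KEY"  # placeholder Firebase Web API key

body = {
    "dynamicLinkInfo": {
        "domainUriPrefix": "https://wi.page.link",
        "link": "https://wiapp.com.au/faq?promocode=mypromo_code",
        "iosInfo": {
            "iosBundleId": "com.direce.sr",
            "iosAppStoreId": "1368389392"  # numeric id only, no "id" prefix when using the API
        }
    },
    "suffix": {"option": "UNGUESSABLE"}
}

resp = requests.post(
    "https://firebasedynamiclinks.googleapis.com/v1/shortLinks",
    params={"key": API_KEY},
    json=body
)
print(resp.json())  # on success the response contains a "shortLink" field

With iosAppStoreId set this way, opening the short link on a device without the app should fall back to the App Store page for that id.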

Simple GetItem with ctx.identity.username returns null

I'm using AppSync with IAM auth, a DynamoDB resolver, and Cognito. I'm trying to do the following:
{
  "version": "2017-02-28",
  "operation": "GetItem",
  "key": {
    "userId": $util.dynamodb.toDynamoDBJson($ctx.identity.username)
  }
}
$ctx.identity.username is supposed to contain the userId generated by Cognito, and I'm trying to use it to fetch the current user's data.
Client side, I'm using AWS Amplify, which tells me I'm currently logged in:
this.amplifyService.authStateChange$.subscribe(authState => {
  if (authState.state === 'signedIn') {
    this.getUserLogged().toPromise();
    this._isAuthenticated.next(true);
  }
});
getUserLogged is the Apollo query that is supposed to return the user data.
What I've tried:
If I leave it like this, getUserLogged returns null.
If I replace $util.dynamodb.toDynamoDBJson($ctx.identity.username) in the resolver with a known userId, like $util.dynamodb.toDynamoDBJson("b1ad0902-2b70-4abd-9acf-e85b62d06fa8"), it works! I get that user's data.
I tried to use the test tool in the resolver page but it only gives fake data so I can't rely on this.
Did I make a mistake? To me everything looks good but I guess I'm missing something?
Can I clearly see what $ctx.identity contains?
You'll want to use $ctx.identity.cognitoIdentityId to identify Cognito IAM users:
https://docs.aws.amazon.com/appsync/latest/devguide/resolver-context-reference.html#aws-appsync-resolver-context-reference-identity
You could see the contents of $ctx.identity by creating a Lambda resolver and logging the event or by creating a local resolver and returning the input that the mapping template receives:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-local-resolvers.html
My cognitoIdentityId looks like this: eu-west-1:27ca1e79-a238-4085-9099-9f1570cd5fcf
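If you go the Lambda-resolver route to inspect the identity, here is a minimal sketch of a Python handler that logs whatever AppSync passes in; it assumes a direct Lambda resolver (or a request mapping template that forwards $context), and the names are placeholders:

import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # With a direct Lambda resolver the caller's identity arrives in the event;
    # log it so it shows up in CloudWatch Logs.
    identity = event.get("identity")
    logger.info("AppSync identity: %s", json.dumps(identity, default=str))
    # Echo it back so it is also visible in the GraphQL response while debugging.
    return identity

Comparing that output with what the DynamoDB resolver expects usually makes it obvious whether username or cognitoIdentityId is the field you should key on.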

Can't create cloudsql role for service account via API

I have been trying to use the API to create service accounts in GCP.
To create a service account I send the following post request:
base_url = f"https://iam.googleapis.com/v1/projects/{project}/serviceAccounts"
auth = f"?access_token={access_token}"
data = {"accountId": name}
# Create a service Account
r = requests.post(base_url + auth, json=data)
This returns a 200 and creates a service account.
Then, this is the code that I use to grant the specific roles:
sa = f"{name}#dotmudus-service.iam.gserviceaccount.com"
sa_url = base_url + f'/{sa}:setIamPolicy' + auth
data = {"policy":
{"bindings": [
{
"role": roles,
"members":
[
f"serviceAccount:{sa}"
]
}
]}
}
If roles is set to one of roles/viewer, roles/editor or roles/owner this approach does work.
However, if I want to use, specifically, roles/cloudsql.viewer, the API tells me that this option is not supported.
Here are the roles.
https://cloud.google.com/iam/docs/understanding-roles
I don't want to give this service account full viewer rights to my project, it's against the principle of least privilege.
How can I set specific roles from the api?
EDIT:
Here is the response using the Resource Manager API, with roles/cloudsql.admin as the role:
POST https://cloudresourcemanager.googleapis.com/v1/projects/{project}:setIamPolicy?key={YOUR_API_KEY}
{
  "policy": {
    "bindings": [
      {
        "members": [
          "serviceAccount:sa@{project}.iam.gserviceaccount.com"
        ],
        "role": "roles/cloudsql.viewer"
      }
    ]
  }
}
{
  "error": {
    "code": 400,
    "message": "Request contains an invalid argument.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.cloudresourcemanager.projects.v1beta1.ProjectIamPolicyError",
        "type": "SOLO_REQUIRE_TOS_ACCEPTOR",
        "role": "roles/owner"
      }
    ]
  }
}
With the code provided, it appears that you are appending to the first base_url, which is not the correct context for modifying project roles.
That sends the request to: https://iam.googleapis.com/v1/projects/{project}/serviceAccounts/{sa}:setIamPolicy
The POST path for adding project roles needs to be: https://cloudresourcemanager.googleapis.com/v1/projects/{project}:setIamPolicy
If you target that endpoint instead of the serviceAccounts base_url, it should work.
Edited response to add more information due to your edit
OK, I see the issue here, sorry but I had to set up a new project to test this.
cloudresourcemanager.projects.setIamPolicy needs to replace the entire policy. It appears that you can add constraints to what you change, but you have to submit a complete policy in JSON for the project.
Note that gcloud has a --log-http option that will help you dig through some of these issues. If you run
gcloud projects add-iam-policy-binding $PROJECT --member serviceAccount:$NAME --role roles/cloudsql.viewer --log-http
It will show you how it pulls the existing policy, appends the new role, and writes it back.
I would recommend using the example code provided here to make these changes if you don't want to use gcloud or the console to add the role to the user, as this could impact the entire project.
Hopefully they improve the API for this need.
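Following the read-modify-write advice above, here is a minimal sketch in the same requests style as the question: fetch the current project policy, append the binding, and submit the whole policy back. The project, token, and service account are placeholders:

import requests

project = "my-project"  # placeholder project id
access_token = "ya29...."  # placeholder OAuth2 access token
sa = f"my-sa@{project}.iam.gserviceaccount.com"  # placeholder service account

crm = f"https://cloudresourcemanager.googleapis.com/v1/projects/{project}"
headers = {"Authorization": f"Bearer {access_token}"}

# 1. Fetch the current project policy (it includes an etag for concurrency control).
policy = requests.post(f"{crm}:getIamPolicy", headers=headers, json={}).json()

# 2. Append the new binding locally.
policy.setdefault("bindings", []).append({
    "role": "roles/cloudsql.viewer",
    "members": [f"serviceAccount:{sa}"]
})

# 3. Write the complete policy back.
resp = requests.post(f"{crm}:setIamPolicy", headers=headers, json={"policy": policy})
print(resp.status_code, resp.json())

Sending back the etag returned by getIamPolicy lets the API reject the write if someone else modified the policy in between.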

Cordova FirefoxOS web access issue

I'm trying to port my Cordova app, which works properly on iOS and Android, to Firefox OS.
The simulator starts properly and its browser can load web pages, BUT my app cannot load data from the web.
Looking at the console I see the following errors:
"JavaScript error: app://aa2a2c24-a8d6-447d-92da-4f2e9af65661/plugins/org.apache.cordova.network-information/src/firefoxos/NetworkProxy.js, line 33: missing : after property id" simulator-process.js:44
"JavaScript error: app://aa2a2c24-a8d6-447d-92da-4f2e9af65661/cordova.js, line 1120: Module org.apache.cordova.network-information.NetworkProxy does not exist."
Any suggestion? Thanks.
Cordova 3.5.0
Simulator FirefoxOS 1.3 and FirefoxOS 1.4
After some research I figured out the issues:
1- Despite upgrading Cordova to 3.5.0, I must remember that plugins don't get automatically updated.
To get the FirefoxOS plugin code updated I added the same plugin again, removed the firefoxos platform, and reinstalled it.
At that point the JavaScript errors were gone.
2- Then the AJAX calls were still blocked due to permissions. To be able to make AJAX calls you have to put the following code in your manifest.webapp:
"type": "privileged",
"permissions": {
"systemXHR": { "description": "Required for AJAX calls in app"}
}
Both "type" and "permissions" are needed
3- Finally you have to ensure the ajax calls use
mozSystem: true
Specifically for jquery, you could put something like the following on top of your js file:
if (device.platform == 'firefoxos') {
  $.ajaxPrefilter(function(options) {
    if (options.firefoxOS) {
      options.xhr = function() {
        return new window.XMLHttpRequest({
          mozSystem: true
        });
      };
    }
  });
  $.ajaxSetup({
    firefoxOS: true
  });
}
Now I can properly handle AJAX calls.
