Using nginx to redirect dynamic requests

I have a Druid service running on my local machine on port 8082, queried as follows:
Method POST: http://localhost:8082/druid/v2/?pretty
Body:
{
  "queryType": "topN",
  "dataSource": "some_source",
  "intervals": ["2015-09-12/2015-09-13"],
  "granularity": "all",
  "dimension": "page",
  "metric": "edits",
  "threshold": 25,
  "filter": {
    "type": "and",
    "fields": [
      {
        "type": "selector",
        "dimension": "pix_id",
        "value": "1234"
      }
    ]
  }
}
Running this query gives me a list of records based on the value of the dimension 'pix_id'.
Now I want to set up nginx so that the external application has no clue about my Druid service. I just want the external application to hit the URL:
http://localhost:80/pix_id/98765
This URL should dynamically generate the JSON above with the given pix_id, send the request to Druid, and return the response to the user.
Is it possible to do this in nginx?

Yes, you can do this, but I would rather suggest putting a PHP or Python script in between to produce the results.
So the setup would be:
Have a PHP page receive the request.
Make a curl call from PHP to Druid, locally.
Get the result and pass the response back.
There are multiple benefits to doing this, e.g.:
You completely mask Druid, and the approach is not necessarily limited to Druid.
You can do more calculations in PHP before sending the request to Druid.
You can cache at the PHP end.
A rough Python sketch of such a middleman is shown below.
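As an illustration only, here is a minimal stdlib-only Python sketch of that middleman. It assumes Druid answers on localhost:8082 with exactly the query from the question, and that nginx forwards /pix_id/... to this script on port 8000 (e.g. location /pix_id/ { proxy_pass http://127.0.0.1:8000; }); the port, dataSource, and intervals are placeholders.

import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

DRUID_URL = "http://localhost:8082/druid/v2/?pretty"

class PixProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect paths like /pix_id/98765
        parts = self.path.strip("/").split("/")
        if len(parts) != 2 or parts[0] != "pix_id":
            self.send_error(404)
            return
        # Build the Druid query from the question, substituting the pix_id value.
        query = {
            "queryType": "topN",
            "dataSource": "some_source",
            "intervals": ["2015-09-12/2015-09-13"],
            "granularity": "all",
            "dimension": "page",
            "metric": "edits",
            "threshold": 25,
            "filter": {
                "type": "and",
                "fields": [
                    {"type": "selector", "dimension": "pix_id", "value": parts[1]}
                ],
            },
        }
        req = urllib.request.Request(
            DRUID_URL,
            data=json.dumps(query).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        # Forward Druid's JSON response back to the caller unchanged.
        with urllib.request.urlopen(req) as druid_resp:
            body = druid_resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), PixProxy).serve_forever()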

Related

Can't create stack from template using Openstack Heat API

I'm trying to create a stack from a template using the Heat API. I'm using the API reference here as a guide. I have the following JSON in a file called single-server-template.json:
{
  "stack_name": "api-test",
  "template": {
    "heat_template_version": "rocky",
    "description": "Testing Heat API\n",
    "resources": {
      "server1": {
        "type": "OS::Nova::Server",
        "properties": {
          "name": "Server1",
          "image": "Ubuntu 22.04 (Jammy)",
          "flavor": "alt.st1.small",
          "key_name": "my_key",
          "networks": "Internal"
        }
      }
    }
  }
}
I'm sending it with this: curl -X POST -H "X-Auth-Token:$OS_TOKEN" -d @single-server-template.json https://$OS_HOST_URL:8004/v1/$OS_PROJECT_ID/stacks
I've been at it for hours, but no matter what I send I get a 400 error:
{"code": 400, "title": "Bad Request", "explanation": "The server could not comply with the request since it is either malformed or otherwise incorrect.", "error": {"type": "HTTPBadRequest", "traceback": null, "message": "The server could not comply with the request since it is either malformed or otherwise incorrect."}}
Things I've tried:
confirmed my environment vars are filled in correctly before sending
confirmed that the template itself is valid by spinning it up from the UI console
confirmed that other endpoints that require auth work as expected
screaming and/or crying
Can anyone confirm that what I've got is correct, or test it against your own OpenStack instance? The only other thing I can think of is that maybe my OpenStack provider has an issue with their API.
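For comparison, here is the same request sketched in Python with the requests library; the variable names mirror the shell ones. One detail that may be worth checking: curl -d sends Content-Type: application/x-www-form-urlencoded by default, whereas the Heat API expects application/json, so the sketch sets the header explicitly.

import json
import os

import requests

# Read and parse the template file; this also verifies it is valid JSON.
with open("single-server-template.json") as f:
    payload = json.load(f)

resp = requests.post(
    "https://{}:8004/v1/{}/stacks".format(
        os.environ["OS_HOST_URL"], os.environ["OS_PROJECT_ID"]
    ),
    headers={
        "X-Auth-Token": os.environ["OS_TOKEN"],
        # curl -d defaults to form encoding; Heat expects JSON.
        "Content-Type": "application/json",
    },
    data=json.dumps(payload),
)
print(resp.status_code, resp.text)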

Ingest pipeline is not working over logs obtained from an Event Hub with Filebeat

I am sending logs to an Azure Event Hub with Serilog (using WriteTo.AzureEventHub(eventHubClient)). After that, I run a Filebeat process with the azure module enabled, so these logs are sent to Elasticsearch and I can explore them with Kibana.
The problem I have is that all the information goes into the field "message"; I need to split the information in my logs into different fields to be able to run good queries.
The way I found was to create an ingest pipeline in Kibana and, through a grok processor, separate the fields inside "message" into multiple fields. In filebeat.yml I set the pipeline name, but nothing happens; it seems the pipeline is not applied.
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  pipeline: "filebeat-otc"
Does anybody know what I am missing? Thanks in advance.
EDIT: I will add an example of my pipeline and my data. In the simulation it works properly:
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "%{TIME:timestamp}\\s%{LOGLEVEL}\\s{[a-zA-Z]*:%{UUID:CorrelationID},[a-zA-Z]*:%{TEXT:OperationTittle},[a-zA-Z]*:%{TEXT:OriginSystemName},[a-zA-Z]*:%{TEXT:TargetSystemName},[a-zA-Z]*:%{TEXT:OperationProcess},[a-zA-Z]*:%{TEXT:LogMessage},[a-zA-Z]*:%{TEXT:ErrorMessage}}"
          ],
          "pattern_definitions": {
            "LOGLEVEL": "\\[[^\\]]*\\]",
            "TEXT": "[a-zA-Z0-9- ]*"
          }
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "15:13:59 [INF] {CorrelationId:83355884-a351-4c8b-af8d-b77c48462f36,OperationTittle:Operation1,OriginSystemName:Fexa,TargetSystemName:Usina,OperationProcess:Testing Log Data,LogMessage:Esto es una buena prueba,ErrorMessage:null}"
      }
    },
    {
      "_source": {
        "message": "20:13:48 [INF] {CorrelationId:8451ee54-efca-40be-91c8-8c8e18e33f58,OperationTittle:null,OriginSystemName:Fexa,TargetSystemName:Donna,OperationProcess:Testing Log Data,LogMessage:null,ErrorMessage:null}"
      }
    }
  ]
}
It seems that when you use a module, Filebeat creates and uses its own ingest pipeline in Elasticsearch, and the pipeline option in the output is ignored.
So my solution was to modify index.final_pipeline. To do this, in Kibana I went to Stack Management / Index Management, found my index, opened Edit Settings, and set "index.final_pipeline": "the-name-of-my-pipeline".
I hope this helps somebody.
This was thanks to leandrojmp
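For reference, the same setting can also be applied without the Kibana UI by calling the index settings endpoint directly. A small sketch in Python with requests, assuming Elasticsearch on localhost:9200; the index name is a placeholder:

import requests

# Set index.final_pipeline on the index; "my-filebeat-index" is a placeholder.
resp = requests.put(
    "http://localhost:9200/my-filebeat-index/_settings",
    json={"index.final_pipeline": "filebeat-otc"},
)
print(resp.json())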

Elixir HTTPoison not working to send multipart/form-data requests

I have a very simple request to make, but it looks like HTTPoison isn't able to handle it.
The request has attachments, so I'm using the multipart/form-data content type.
When I send just the file, the request works fine, but I need to add some other props to my request, and that's where the issue comes in.
My request:
HTTPoison.post(
  "path.com/api/anything",
  {:multipart,
   [
     {:file, "/path/file.xlsx",
      {"form-data", [name: "file", filename: "file.xlsx"]}, []},
     {"taskName", "#{task.name}"},
     {"taskLink", "#{task.link}"}
   ]}
)
I receive the file without problems, but taskName and taskLink never reach the server.
(I tried with Postman and had no problems.)
Some related issues:
https://elixirforum.com/t/httpoison-post-multipart-with-more-form-than-the-file/4222/4
https://github.com/edgurgel/httpoison/issues/237
We have a working example of the multipart list that we use to send zip files along with other attributes. Something equivalent to this might work for you.
[
  {"id", to_string(order_id)},
  {"file_size", to_string(file_size)},
  {"attachment", file, {"form-data", [name: "file", filename: filename]},
   [{"Content-Type", "application/zip"}]}
]

GAE endpoints generates wrong discovery doc

I have upgraded to the latest Cloud Endpoints 2.0, as well as endpoints_proto_datastore at its latest commit. When I now try to generate the API discovery doc, I get the following error messages:
Method user.update specifies path parameters but you are not using a ResourceContainer This will fail in future releases; please switch to using ResourceContainer as soon as possible
Method position.update specifies path parameters but you are not using a ResourceContainer This will fail in future releases; please switch to using ResourceContainer as soon as possible
The only two available endpoints are the following two methods, which should update the User and the Position models:
@User.method(name='user.update', path='users/{id}', http_method='PUT')
def UserUpdate(self, user):
    """ Update a user resource. """
    user.put()
    return user

@Position.method(name='position.update', path='positions/{id}', http_method='PUT')
def PositionUpdate(self, position):
    """ Update a position resource. """
    position.put()
    return position
Before upgrading to Cloud Endpoints 2.0 everything worked fine. But now, if I look into the generated discovery file, both endpoints have a ProtorpcMessagesCombinedContainer in their request, and the combined container itself is defined with the properties of the Position model!
This is how both methods' request attributes are defined:
"request": {
"$ref": "ProtorpcMessagesCombinedContainer",
"parameterName": "resource"
},
And this is the definition of the combined container (which has the properties of the Position model):
"ProtorpcMessagesCombinedContainer": {
"id": "ProtorpcMessagesCombinedContainer",
"type": "object",
"properties": {
"displayName": {
"type": "string"
},
"shortName": {
"type": "string"
}
}
},
Has anyone else had this issue with GAE and Cloud Endpoints 2.0?
What am I doing wrong? Usually endpoints-proto-datastore should handle the ResourceContainer and the methods' path parameters. Also, endpoints-proto-datastore hasn't been updated in years... I really don't know where the error comes from.
Thanks for your help!
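For context, the warning refers to endpoints.ResourceContainer from plain Cloud Endpoints 2.0 (i.e. without endpoints-proto-datastore). Below is a hedged sketch of what the framework expects for a method with a path parameter; UserMessage, the field numbers, and the API name are hypothetical stand-ins, not taken from the question's models.

import endpoints
from protorpc import messages, remote

class UserMessage(messages.Message):
    """Hypothetical request/response body."""
    displayName = messages.StringField(1)
    shortName = messages.StringField(2)

# A ResourceContainer combines the body message with the {id} path parameter.
USER_UPDATE_RESOURCE = endpoints.ResourceContainer(
    UserMessage,
    id=messages.StringField(1, required=True),
)

@endpoints.api(name='example', version='v1')
class ExampleApi(remote.Service):
    @endpoints.method(USER_UPDATE_RESOURCE, UserMessage,
                      path='users/{id}', http_method='PUT',
                      name='user.update')
    def UserUpdate(self, request):
        # request.id holds the path parameter; the body fields are on request too.
        return UserMessage(displayName=request.displayName,
                           shortName=request.shortName)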

Openstack API Authentication

OpenStack noob here. I have set up an Ubuntu VM with DevStack and am trying to authenticate with Keystone to obtain a token for subsequent OpenStack API calls. The identity endpoint shown on the "API Access" page in Horizon is: http://<DEVSTACK_IP>/identity.
When I POST the JSON payload below to this endpoint, I get the error get_version_v3() got an unexpected keyword argument 'auth'.
{
  "auth": {
    "identity": {
      "methods": [
        "password"
      ],
      "password": {
        "user": {
          "name": "admin",
          "domain": {
            "name": "Default"
          },
          "password": "AdminPassword"
        }
      }
    }
  }
}
Based on the OpenStack docs, I should be hitting http://<DEVSTACK_IP>/v3/auth/tokens to obtain a token, but when I hit that endpoint I get 404 Not Found.
I'm currently using Postman to test this, but I will eventually be doing it programmatically.
Does anybody have experience with authenticating against the OpenStack API and can help?
Not sure whether you want to do it the Python way, but if you do, here is a way to do it:
from keystoneauth1.identity import v3
from keystoneauth1 import session

v3_auth = v3.Password(auth_url=V3_AUTH_URL,
                      username=USERNAME,
                      password=PASSWORD,
                      project_name=PROJECT_NAME,
                      project_domain_name="default",
                      user_domain_name="default")
v3_ses = session.Session(auth=v3_auth)
auth_token = v3_ses.get_token()
And your V3_AUTH_URL should be http://<DEVSTACK_IP>:5000/v3, since Keystone uses port 5000 by default.
If you have a multi-domain DevStack you can change the domains; otherwise they should be "default".
Just in case you don't have the client library installed: pip install python-keystoneclient
Here is a good doc for you to read about it:
https://docs.openstack.org/keystoneauth/latest/using-sessions.html
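If you would rather keep testing over raw HTTP (e.g. from Postman), here is the same flow as a small requests sketch, posting the question's JSON payload to port 5000. Note that Keystone returns the issued token in the X-Subject-Token response header, not in the body:

import requests

payload = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "admin",
                    "domain": {"name": "Default"},
                    "password": "AdminPassword",
                }
            },
        }
    }
}

# Replace <DEVSTACK_IP> with your DevStack host.
resp = requests.post("http://<DEVSTACK_IP>:5000/v3/auth/tokens", json=payload)
token = resp.headers["X-Subject-Token"]  # the token lives in this header
print(token)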
HTH
