I've got a graphite server set up with authentication turned on, and some metrics in telegraf (on a different server) that I'd like to send to it using the telegraf graphite plugin.
Obviously it fails without setting up any authentication from telegraf for that connection (thankfully :-) ), with a "Could not write to any Graphite server in cluster" error.
But my question is: how should I set up telegraf to authenticate? I notice that for other plugins (e.g. the influxdb one) there are config params for username/password, but there aren't any for the graphite plugin.
This seems like a pretty common integration, so it's odd that it wouldn't be supported. So how do I authenticate between the two? Ideally I won't have to do any shenanigans with proxies.
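For reference, here is roughly what I mean (option names as best I can tell from the docs): the influxdb output takes credentials, but the graphite output only seems to take server addresses.

[[outputs.influxdb]]
  urls = ["http://influxdb-host:8086"]
  username = "telegraf"
  password = "secret"

[[outputs.graphite]]
  servers = ["graphite-host:2003"]
  prefix = "telegraf"
  # no username/password options here, as far as I can see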
How can HTTP queries be run in Azure Monitor workbooks?
I read all the documentation here and still cannot find how I could use my application's health-check HTTP endpoints to report on my application's status in an Azure Monitor workbook.
I have an ASP.NET application, if that matters. It exposes endpoints which I would like to call from the workbook, with different visualizations depending on the data returned.
You'd use the "Custom Endpoint" data source in the Query step.
https://learn.microsoft.com/en-us/azure/azure-monitor/visualize/workbooks-data-sources#custom-endpoint
The endpoint needs to support CORS, because the calls will be coming from the Azure portal, and it also needs to support HTTPS, because the portal itself is loaded over HTTPS.
The endpoint is also expected to return JSON content, and you can then use JSONPath in the custom endpoint's Result Settings tab to transform it into grid data.
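For example, if a (hypothetical) health endpoint returned JSON like the one below, a JSON path of $.checks for the table, with column paths like $.name and $.status, would turn each check into a grid row:

{
  "status": "Healthy",
  "checks": [
    { "name": "database", "status": "Healthy", "durationMs": 12 },
    { "name": "cache", "status": "Degraded", "durationMs": 48 }
  ]
}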
I currently have a dockerized web application hosted on a Google Cloud Compute Engine instance, which is only accessible from our company's private network. This setup has worked well over the past year, but with additional development requirements and increased usage, I find myself constantly resizing the instance and restarting the server for new updates. Also, other developers on the team have less experience with deploying this code, which makes this app my responsibility.
I'd like to move this application to Cloud Run for scalability, ease of maintenance and deployments, but still have it accessible only on our company's network. My idea was to move the application to an authenticated Cloud Run service and use the original server as an nginx proxy that would add an authentication header and forward the request to the Cloud Run service.
My question is how I would use nginx to get the token (the server will have the necessary permissions) and add it to the request before passing it on to the app. This is my current idea, but I'm not sure where to go from here.
location / {
    proxy_set_header Authorization "Bearer $ID_TOKEN";
    proxy_pass https://the-library-clwysxi3sq-ue.a.run.app;
}
You're on the right track.
At this point, I recommend you consider using Envoy Proxy instead of NGINX. Envoy has a well-documented protocol for fetching dynamic data such as $ID_TOKEN from an external source.
Whatever solution you choose, make sure you actually end up rewriting the "Host" header to your [...].run.app hostname, because if you preserve the hostname as is (somedomain.com), Cloud Run's load balancer won't know which app to route the request to.
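For illustration, a rough NGINX sketch of what this could look like (the run.app hostname is from the question, and $id_token is a placeholder you would still have to populate dynamically, e.g. via Lua/njs or a periodically regenerated include file):

location / {
    # Rewrite the Host header so Cloud Run's load balancer can route the request.
    proxy_set_header Host the-library-clwysxi3sq-ue.a.run.app;
    # Placeholder: must be fetched and refreshed dynamically (see below).
    proxy_set_header Authorization "Bearer $id_token";
    # Send SNI to the upstream so the TLS handshake with *.run.app succeeds.
    proxy_ssl_server_name on;
    proxy_pass https://the-library-clwysxi3sq-ue.a.run.app;
}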
The remaining task is to figure out how to get the $ID_TOKEN dynamically.
The Google Compute Engine VM instance needs to retrieve an identity token (a JWT) by querying the instance metadata service:
curl -H "Metadata-Flavor: Google" \
  "http://metadata/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://hello-2wvlk7vg3a-uc.a.run.app"
Make sure to replace the value of ?audience= with the targeted service's URL.
The response body of this call is a JWT that expires within an hour. You should cache this response (keyed by audience, with a TTL under 60 minutes), or simply get a new one every time.
Note that on Cloud Run you can currently only generate 50 identity tokens per second. However, you are running on GCE, and (repeating myself here) I don't think there's a documented rate limit for the metadata service on GCE; it's likely higher.
Then, you need to add it to the outgoing request (to Cloud Run) as an HTTP header:
Authorization: Bearer <TOKEN>
This procedure is explained in the Service-to-service authentication documentation.
You can search Stack Overflow or Google on how to execute a Lua or Bash script in NGINX for every request.
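As a rough sketch (not a drop-in solution), a small Bash helper along these lines could fetch and cache the token for NGINX, or whatever calls it; the audience URL and cache path are placeholders:

#!/bin/bash
# Print a cached identity token, refreshing it from the metadata server
# when it is missing or older than 50 minutes (tokens expire after ~1 hour).
AUDIENCE="https://hello-2wvlk7vg3a-uc.a.run.app"   # replace with your Cloud Run URL
CACHE="/var/cache/nginx/id_token"

if [ ! -f "$CACHE" ] || [ -n "$(find "$CACHE" -mmin +50)" ]; then
  curl -s -H "Metadata-Flavor: Google" \
    "http://metadata/computeMetadata/v1/instance/service-accounts/default/identity?audience=${AUDIENCE}" \
    > "$CACHE"
fi

cat "$CACHE"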
I have seen that Google Firebase offers a static file hosting solution (for the front end) which is served over SSL and through a CDN. That means I can serve customers all around the world from a server that is probably close to them, and they enjoy good speeds.
Now I want to do the same with my Node.js backend code.
That means, instead of hosting my backend code on my own VPS, which will probably be fast only for those who live close to my server, I want to deploy the same server to Firebase's CDN and, of course, serve it over HTTPS.
What I have found so far is Firebase Functions, which is presumably a Node.js server. However, I am not sure if it runs on a CDN, so that it will be as fast as the static file serving, or if it's just a server located somewhere in the US that has to serve the whole world.
In addition, if there is such a service where I can host my backend code with SSL, can I keep the "standard" Express configuration I currently have on my VPS?
And what about clusters/workers? How many workers can I have when using the Firebase solution (if there is one like that)?
Thanks.
SSL and firebase functions & hosting?
You get HTTPS by default for hosting and functions. If you need functions to be served from your custom domain rather than https://us-central1-[projectname].cloudfunctions.net, you will need to configure your firebase.json file to rewrite your routes to your firebase functions. The main thing to flag here is that with both options you get HTTPS and certs issued directly by Google/Firebase.
When you bring a custom domain over, it can take up to 1-2 hours for firebase to issue the certificate, but all of this happens automatically without you having to do anything.
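For reference, a minimal firebase.json sketch of that rewrite (the hosting directory and the function name api are just examples):

{
  "hosting": {
    "public": "public",
    "rewrites": [
      { "source": "/api/**", "function": "api" }
    ]
  }
}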
Do firebase functions integrate with a CDN?
Yes, but you need to set the correct s-maxage header in your response to ensure the firebase CDN will store it. See here for more info on this.
Cache invalidation is still hard with firebase, so I would keep this in mind before you set anything up.
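As a small illustration of the s-maxage idea (the function name and cache durations are just examples): s-maxage tells the Firebase Hosting CDN how long it may keep the response, while max-age controls the browser cache.

const functions = require('firebase-functions');

exports.api = functions.https.onRequest((req, res) => {
  // Let browsers cache for 5 minutes and the Firebase Hosting CDN for 10 minutes.
  res.set('Cache-Control', 'public, max-age=300, s-maxage=600');
  res.json({ status: 'ok' });
});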
How many workers I can have when using the Firebase solution (if there is one like that).
One benefit of using firebase functions is that you don't really need to give much thought to the resources behind the backend. If you have heavier workloads, you can increase the RAM/CPU for your selected function in the Google console. The endpoint will scale up and down depending on how many requests it gets. On the flip side, if it doesn't get any requests (usually in non-prod environments) it'll go into an idle state. You need to be aware of the cold start problem before you fully commit to using this as a replacement for your current Node.js VPS hosting solution.
I personally use the cache control headers to ensure the functions responses are pushed into the CDN edge, which takes the edge off the cold start issue (for me and my use case).
Q: Kibana is great, but I want to make it so users have to authenticate in order to access it. How do I do that?
A: This can be handled a number of ways. The best way is to run Kibana with Passenger and Apache or Nginx. There's sample configurations in the sample directory. You can then handle your preferred authentication mechanism with Apache or Nginx.
How do I do this? I do not use any of these programs. Could someone give me a basic overview of what I have to do? Any help would be nice. I am a student and still learning, but I need help to keep going; I don't know everything.
I am running Ubuntu.
Well, actually, even if you set up some kind of authentication in front of Kibana, it won't be enough. As you probably know, Kibana runs on top of Elasticsearch, so even if you "limit" permissions to Kibana, everyone can still access Elasticsearch directly and see existing indices or even create new ones. So the main question is whether you can manage AuthN && AuthZ against Elasticsearch.
For authentication you can integrate Kibana/Elasticsearch with whichever framework you are using (for example Play, Spring MVC, etc.). Create a login page (authentication) using the framework, point Kibana to the web/app server embedded in the framework, and pass Kibana's requests to Elasticsearch and Elasticsearch's responses back to Kibana through this framework. Basically, this framework will be a mediator between Kibana and Elasticsearch. You also need to block the Elasticsearch server port so that nobody can access ES directly.
Kibana <--> Intermediate Framework <--> Elasticsearch
Hope this helps!
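As a concrete (simplified) illustration of the reverse-proxy/mediator idea, an nginx server block with HTTP basic auth in front of Kibana and Elasticsearch could look roughly like this; the paths, ports and htpasswd file are assumptions, and Elasticsearch's port 9200 would still need to be firewalled off from everyone else:

server {
    listen 80;
    server_name kibana.example.com;

    # Require a username/password before anything is served or proxied.
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # Kibana 3 is a static web app; serve it from disk.
    location / {
        root /var/www/kibana;
        index index.html;
    }

    # Forward Kibana's Elasticsearch queries through this authenticated host.
    # (Kibana's config would then point at /es on this host instead of :9200.)
    location /es/ {
        proxy_pass http://127.0.0.1:9200/;
    }
}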
There is an application that should read tweets for every registered user, process them, and store the data for future use.
It can reach Twitter in two ways: via the REST API (polling Twitter every x minutes), or via its Streaming API to have tweets delivered as they happen.
Besides the completely different server-side implementations, I wonder what the other server-side impacts are.
Say the application has thousands of users. Is it better to build some kind of queue and poll Twitter for each user (the simplest scenario), or is it better to use the Streaming API and keep an HTTP connection open for each user? I'm a bit worried about the latter, as it would require keeping thousands of connections open all the time. Are there any drawbacks to that I'm not aware of? If I'd like to deploy my app on Heroku or on an EC2 instance, would it be OK or are there any limits?
How is it done in other apps that constantly need to fetch data for each user?