Simplified multi-server routing - networking

Sorry for the newbie question, but I'm still learning networking.
Long story short: I have a web application split into 3 modules, accessible at:
https://server1:4353/module1
https://server1:4858/module2
https://server2:4959/module3
I would like to implement something like this: the user accesses https://server3/module{1/2/3} and is routed to the right server. Ideally, the user only ever sees the "simplified" URL.
Something like a proxy, but I don't know the real "technical name", so I can't find any solutions/how-tos.
Could you help me, please?

You are looking for a unified entry point for accessing your application's resources. The technical name for this is a reverse proxy (as opposed to a forward proxy, which sits in front of clients rather than servers). Here is an excellent explanation of the differences.
One way to achieve this is with the following Perl script.
#!/usr/bin/perl
use Mojolicious::Lite;   # implies strict and warnings
plugin 'Proxy';          # proxy_to is not built into Mojolicious; this assumes the CPAN plugin Mojolicious::Plugin::Proxy

# Reverse-proxy each /moduleN path to its backend
# (adjust scheme/host/port to match your servers).
get '/module1' => sub {
    my $c = shift;
    $c->proxy_to('https://server1:4353/module1');
};

get '/module2' => sub {
    my $c = shift;
    $c->proxy_to('https://server1:4858/module2');
};

get '/module3' => sub {
    my $c = shift;
    $c->proxy_to('https://server2:4959/module3');
};

app->start;
You didn't say what kind of environment this is. If the system is in production, you may wish to look into Hypnotoad; otherwise, starting the built-in server via app->start (e.g. perl script.pl daemon) is sufficient.
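If you'd rather not run extra application code, a dedicated reverse proxy such as nginx can do the same job declaratively. A minimal sketch, assuming nginx runs on server3; the certificate paths are placeholders you'd replace with your own:

server {
    listen 443 ssl;
    server_name server3;

    # Placeholder certificate paths -- replace with your own.
    ssl_certificate     /etc/ssl/certs/server3.crt;
    ssl_certificate_key /etc/ssl/private/server3.key;

    # Forward each simplified path to its backend server.
    location /module1 {
        proxy_pass https://server1:4353/module1;
    }
    location /module2 {
        proxy_pass https://server1:4858/module2;
    }
    location /module3 {
        proxy_pass https://server2:4959/module3;
    }
}

Apache (mod_proxy) and HAProxy can be configured the same way; "reverse proxy" is the term to search for.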

Related

Zabbix - wildcard usage in perf_counter

Colleagues!
I really need to use a wildcard in perf_counter.
We have .NET Data Provider for SqlServer counters. Unfortunately, the instance ID of the counter changes after each reboot.
Right now I have a counter like this:
perf_counter["\.NET Data Provider for SqlServer(_lm_w3svc_3_root-3-131958133162924330[18196])\NumberOfActiveConnectionPools"]
How can I make it permanent? Maybe I need to use a wildcard, like this:
perf_counter["\.NET Data Provider for SqlServer(_lm_w3svc_3_root-3-131958133162924330[*])\NumberOfActiveConnectionPools"]
But with that, the counter becomes unsupported with "Cannot obtain performance information from collector".
I really need your help!
Thank you and have a nice day!
The documentation doesn't mention wildcards with performance counters.
If your counter changes at every reboot you need to use a discovery rule even if you're dealing with a single item.
The discovery rule could be a PowerShell script like this:
# Build the low-level discovery structure that Zabbix expects
$result = @{}
$result.data = @()

# PathsWithInstances expands each counter path per instance,
# so the instance name we filter on actually appears in the path.
(Get-Counter -ListSet *).PathsWithInstances | ForEach-Object {
    if ($_ -like "*_lm_w3svc_3_root-3-131958133162924330*\NumberOfActiveConnectionPools") {
        $result.data += @{
            "{#PATH}" = $_
        }
    }
}

$result | ConvertTo-Json
Set it to run every hour or so and create an item prototype like perf_counter["{#PATH}"]; that should do the trick.
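For reference, the script's output should look roughly like this (the exact instance name will differ on your host after each reboot):

{
    "data": [
        {
            "{#PATH}": "\\.NET Data Provider for SqlServer(_lm_w3svc_3_root-3-131958133162924330[18196])\\NumberOfActiveConnectionPools"
        }
    ]
}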

ngx lua: scope of local variable, init in init_by_lua_block

I'm new to nginx Lua and inherited a setup from a previous developer. I'm trying to go through the docs to understand the scoping rules, but I'm still unsure.
It currently looks like this:
init_by_lua_block {
    my_module = require 'my_module'
    my_module.load_data()
}

location / {
    content_by_lua_block {
        my_module.use_data()
    }
}
And in my_module:
local _M = {}
local content = {}

function _M.use_data()
    -- access content variable
end

function _M.load_data()
    -- code to load json data into content variable
end

return _M
So my understanding is that content is a local variable, and therefore its lifetime is limited to each request. However, it's initialized in init_by_lua_block and used by other functions in the module, which confuses me. Is this good practice? And what's the actual lifetime of this content variable?
Thanks a lot for reading.
Found this: https://github.com/openresty/lua-nginx-module#data-sharing-within-an-nginx-worker
To globally share data among all the requests handled by the same nginx worker process, encapsulate the shared data into a Lua module, use the Lua require builtin to import the module, and then manipulate the shared data in Lua. This works because required Lua modules are loaded only once and all coroutines will share the same copy of the module (both its code and data). Note however that Lua global variables (note, not module-level variables) WILL NOT persist between requests because of the one-coroutine-per-request isolation design.
Here is a complete small example:
-- mydata.lua
local _M = {}

local data = {
    dog = 3,
    cat = 4,
    pig = 5,
}

function _M.get_age(name)
    return data[name]
end

return _M
and then accessing it from nginx.conf:
location /lua {
    content_by_lua_block {
        local mydata = require "mydata"
        ngx.say(mydata.get_age("dog"))
    }
}
init_by_lua (and its _block variant) runs during the nginx configuration-loading phase, before the worker processes are forked. The required module, including its module-level content table, is therefore loaded exactly once and inherited by every worker, so all requests handled by a worker see the same copy. In other words, content effectively lives for the lifetime of the worker process, not for a single request.
https://github.com/openresty/lua-nginx-module/#init_by_lua
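A quick way to convince yourself of this is a hypothetical counter module (names invented for illustration); module-level state survives across requests within one worker:

-- counter.lua (hypothetical): module-level state persists per worker
local _M = {}

local hits = 0  -- lives as long as the worker process, not the request

function _M.incr()
    hits = hits + 1
    return hits
end

return _M

and in nginx.conf:

location /count {
    content_by_lua_block {
        ngx.say(require("counter").incr())
    }
}

Repeated requests served by the same worker print 1, 2, 3, and so on. Note the count is per worker: with multiple worker processes you will see several independent sequences.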

Allow access to web application only when my server redirects to it

My question is very very similar to this one
The idea is the following.
I have an app written in Node (specifically Sails.js); it's a simple form for invoices.
And another one in Laravel.
What I want is for the user to only be able to access that form (the Sails app) if a controller of the Laravel app redirects to it.
The link above says I could use sessions, but as you can see, these are two very different applications. So I'm looking for the simplest and best way to do this.
Any advice is welcome, and if you have a better approach, please let me know. Thanks!
Probably the simplest way is to check the Referer header in your Sails controller and do a simple comparison. Bear in mind that the Referer header can be spoofed or stripped, so this keeps casual visitors out rather than providing real security.
For example:
getinvoice: function(req, res, next) {
    var referer = req.headers.referer;
    if (referer != 'http://somedomain.com/pageallowedtocallgetinvoice') {
        return res.forbidden();
    } else {
        ...
    }
}
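For completeness, the Laravel side only needs to issue the redirect. A minimal sketch (the controller action and target URL are made up for illustration; redirect()->away() is Laravel's helper for redirecting to external URLs):

// Hypothetical Laravel controller action that hands the user off to the Sails form
public function invoice()
{
    return redirect()->away('http://somedomain.com/getinvoice');
}

For the Referer check to pass, the link the user follows must live on the page you whitelist in the Sails controller.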

Change nginx access log data in logstash or elasticsearch

In my project I provide an API for a mobile app. In every API call the front end sends a session_id to mark user authenticity, and the server side accepts and validates it.
Recently we decided to use ELK (Elasticsearch, Logstash, Kibana) to store and analyze the web server access logs and extract commonly occurring user activities. I've run into a problem: I want to replace the session_id in the log with the user_id (in the program I can get the user_id from the session_id by querying the database), but I just don't know how.
Can Logstash's filters do this? Or should I change the data after the log has been indexed in Elasticsearch?
Alright, I'll try to give you an answer, assuming that you have some kind of interface from which you can retrieve the user_id. You actually need to do two things:
Split your log line into separate fields to have a field which contains your session_id
Get the corresponding user_id using some kind of api
Split your log line
You need to split your input into separate fields. This could be done with filters like grok and/or kv. Take a look at some SO questions to find a matching grok pattern or use the grok debugger. Please provide a few log lines if you need help with that.
EDIT: For your given examples your configuration should look something like this:
filter {
    grok {
        match => [ 'message', '"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:"(?:%{URI:referrer}|-)"|%{QS:referrer}) %{QS:agent} %{QS:xforwardedfor}' ]
    }
    kv {
        field_split => "&?"
    }
}
Please try it and adjust it yourself to get the session_id.
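To illustrate, for a hypothetical access-log line containing a request such as

"GET /api/invoices?session_id=abc123&lang=en HTTP/1.1" 200 512 "-" "Mozilla/5.0" "-"

the grok filter would extract verb, request, response and friends, and the kv filter (splitting on & and ?) would pull session_id=abc123 out into a field of its own.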
Once you have a field called session_id you can go on with step 2.
Get the user_id
As already mentioned, you need a filter plugin, because the session_id must be available at filter time. There are several official plugins, but I think none of them suits your purpose: since the session_id is assigned dynamically, you cannot use a static translate filter or anything like that.
It depends on your API, but one possible approach is to fetch the corresponding user_id via HTTP requests. For that purpose you could use a community plugin such as logstash-filter-rest, with a config like this:
filter {
    rest {
        url => "http://yourserver/getUserBySessionId/"
        sprintf => true
        method => "post"
        params => {
            "session_id" => "%{session_id}"
        }
        response_key => "user_id"
    }
}
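This assumes your server exposes an endpoint (the getUserBySessionId URL above is only a placeholder) that accepts the posted session_id and answers with the user_id, e.g. a response body along the lines of

{"user_id": 12345}

The filter then stores the response under the user_id key of the event, which is what ends up in Elasticsearch.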

How to send Erlang function source to Riak mapreduce via HTTP?

I'm trying to use Riak's mapreduce via HTTP. This is what I'm sending:
{
    "inputs": {
        "bucket": "test",
        "key_filters": [["matches", ".*"]]
    },
    "query": [
        {
            "map": {
                "language": "erlang",
                "source": "value(RiakObject, _KeyData, _Arg) -> Key = riak_object:key(RiakObject), Count = riak_kv_crdt:value(RiakObject, <<\"riak_kv_pncounter\">>), [ {Key, Count} ]."
            }
        }
    ]
}
Riak fails with "[worker_startup_failed]", which isn't very informative. Could anyone please help me get this to actually execute the function?
WARNING
Allowing arbitrary Erlang functions via map-reduce is a security risk. Any valid Erlang can be executed, including sending your entire data set offsite or formatting the hard drive.
You have been warned.
However, if you implicitly trust every client that may connect to your cluster, you can allow Erlang source to be passed in a map-reduce request by setting {allow_strfun, true} in the riak_kv section of app.config (or of advanced.config if you are using riak.conf).
Once you have allowed passing an Erlang function in a map-reduce phase, you need to pass in a function of the form fun(RiakObject,KeyData,Arg) -> [result] end. Note that this must be an anonymous fun, so fun is a keyword, not a name, and it must end with end.
Your function should handle the case where {error,notfound} is passed as the first argument instead of an object. Simply adding a catch-all clause to the function could accomplish that.
Perhaps something like:
{
    "inputs": {
        "bucket": "test",
        "key_filters": [["matches", ".*"]]
    },
    "query": [
        {
            "map": {
                "language": "erlang",
                "source": "fun(RiakObject, _KeyData, _Arg) ->
                                Key = riak_object:key(RiakObject),
                                Count = riak_kv_crdt:value(
                                    RiakObject,
                                    <<\"riak_kv_pncounter\">>),
                                [ {Key, Count} ];
                           (_, _, _) -> [{error, 0}]
                           end."
            }
        }
    ]
}
Allowing the source to be passed in the request is very useful while developing and debugging. For production, you really should put the functions in a dedicated pre-compiled module that you copy to the code path of each node so that the phase spec can specify the module and function by name instead of providing arbitrary code.
{"map":{
"language":"erlang",
"module":"yourprecompiledmodule",
"function":"functionname"}}
You need to enable allow_strfun on all nodes in your cluster. To do so in Riak 2, you will need to use the advanced.config file to add this to the riak_kv configuration:
[
    {riak_kv, [
        {allow_strfun, true}
    ]}
].
The other option is to create your own Erlang module by using the compiler shipped with Riak and placing the *.beam file in a well-known location for Riak to find. The basho-patches directory is one such place.
Please see the documentation as well:
advanced.config
Installing custom Erlang code
HTTP MapReduce
Using MapReduce
Advanced MapReduce
MapReduce / curl example
