I'm using OpenResty with nginx to automatically obtain SSL certificates from Let's Encrypt. There's a Lua function where you can allow certain domains, and in that function I have a regex to whitelist my domains. After I add a certain number of domains (I'm not sure of the exact number), I start getting this error:
nginx: [emerg] too long lua code block, probably missing terminating characters in /usr/local/openresty/nginx/conf/nginx.conf:60.
Shrinking down that string makes the error go away.
I'm not familiar with Lua, but here's the example code. I have a few hundred domains to add here.
auto_ssl:set("allow_domain", function(domain)
  return ngx.re.match(domain, "^(domain1.com|domain2.com|domain3.com....)$", "ijo")
end)
Do I need to define this string ahead of time, or maybe specify its length somewhere?
EDIT: OK, so I was thinking about this another way. Does anyone see an issue if I were to try this? Any performance issues, or Lua-related gotchas? Maybe there's a more efficient way of doing this?
auto_ssl:set("allow_domain", function(domain)
  domains = [[
domain1.com
domain2.com
domain3.com
-- continues up to domain300.com
]]
  i, j = string.find(domains, domain)
  return i ~= nil
end)
OpenResty lets you load more complex Lua code from separate files, for example via https://github.com/openresty/lua-nginx-module#init_by_lua_file. That is just one directive; there are several ways to load Lua code. This approach worked for me.
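For example, the allow_domain logic could live in its own file and use a table lookup instead of one huge regex. This is only a sketch: the file path, the domain names, and the is_allowed helper are made up, and the commented-out line shows how it would plug into lua-resty-auto-ssl as used in the question.

```lua
-- allow_domain.lua (hypothetical path), loaded e.g. via init_by_lua_file.
-- Build the whitelist once as a set; each lookup is then O(1),
-- regardless of how many domains you add.
local allowed = {}
for _, d in ipairs({
  "domain1.com",
  "domain2.com",
  "domain3.com",
  -- ... continues up to domain300.com
}) do
  allowed[d] = true
end

local function is_allowed(domain)
  -- Normalize case, since the original regex used the "i" flag.
  return allowed[string.lower(domain)] == true
end

-- With lua-resty-auto-ssl this would be wired up as:
-- auto_ssl:set("allow_domain", function(domain) return is_allowed(domain) end)

print(is_allowed("domain2.com"))
```

Moving the list out of nginx.conf also sidesteps the "too long lua code block" limit on inline code entirely.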
I have seen many references to this issue spanning several years, but 95% of them relate to Apache. I'm on NGINX, so I can't try the solutions involving the .htaccess file.
{"code":"woocommerce_rest_cannot_view","message":"Sorry, you cannot list resources.","data":{"status":401}}
Since nothing really covers NGINX for this problem, I thought I'd start a new thread.
The first time it happened was when I tried to link Woobotify, which automatically generates its own keys. While the keys were created, it said it didn't have read/write access (despite the right permissions being set up).
So I created a new set of keys from within WP and made a direct call (while logged in as admin, of course), as in ://site.com/wp-json/wc/v3/products/categories?consumer_key=ck_8a9b...etc, to see whether the problem was on the server side or Woobotify's, and still got the error.
If you refer me to http://woocommerce.github.io/woocommerce-rest-api-docs/#rest-api-keys: I am too much of a newbie to make use of that information. I either need step-by-step instructions, or I'm willing to hire someone to make it work for me.
LEMP stack on a self-managed VPS.
Here is an example of how I solved it:
require "woocommerce_api"

woocommerce = WooCommerce::API.new(
  "https://example.com",
  "consumer_key",
  "consumer_secret",
  {
    wp_json: true,
    version: "wc/v3",
    query_string_auth: true
  }
)
Or simply, for Postman:
https://example.com/wp-json/wc/v3/products?consumer_key={{csk}}&consumer_secret={{cs}}
The key is query_string_auth: true. You need to force basic authentication as query-string parameters, over HTTPS.
I have been using OpenShift/Kubernetes for some time, and this has been my understanding.
For service to service communication
use the DNS name ${service-name} if the services are in the same namespace
use the DNS name ${service-name}.${namespace}.svc.cluster.local if they are in different namespaces (with the network joined)
Recently I was introduced to the idea that "we should add a dot after svc.cluster.local to make it an FQDN, for better DNS lookup speed". I did some testing, and the lookup is indeed much faster with the dot (~100 ms without the dot, ~10 ms with it).
After some research, it turned out to be caused by the default DNS settings in Kubernetes:
sh-4.2$ cat /etc/resolv.conf
search ${namespace}.svc.cluster.local svc.cluster.local cluster.local
nameserver X.X.X.X
options ndots:5
The ndots:5 option means the resolver performs a local (sequential) search through the search domains if the DNS name contains fewer than 5 dots.
In the case of ${service-name}.${namespace}.svc.cluster.local, the local search proceeds like this:
${service-name}.${namespace}.svc.cluster.local + ${namespace}.svc.cluster.local // FAILED LOOKUP
${service-name}.${namespace}.svc.cluster.local + svc.cluster.local // FAILED LOOKUP
${service-name}.${namespace}.svc.cluster.local + cluster.local // FAILED LOOKUP
${service-name}.${namespace}.svc.cluster.local // SUCCESS LOOKUP
And for ${service-name}.${namespace}.svc.cluster.local. (with a trailing dot), the local search is just:
${service-name}.${namespace}.svc.cluster.local // SUCCESS LOOKUP
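The expansion above can be sketched as a small standalone function. This is a simplified model of the glibc resolver's search-list behaviour, not Kubernetes code; the service and namespace names are made up.

```lua
-- Build the list of names the resolver will try, in order, for a given
-- query name, search list, and ndots value (simplified model).
local function candidates(name, search, ndots)
  -- A trailing dot marks the name as fully qualified: try it as-is only.
  if name:sub(-1) == "." then
    return { name:sub(1, -2) }
  end
  local _, dots = name:gsub("%.", "")  -- count the dots in the name
  local list = {}
  if dots >= ndots then
    table.insert(list, name)           -- enough dots: try absolute first
  end
  for _, suffix in ipairs(search) do   -- then walk the search list
    table.insert(list, name .. "." .. suffix)
  end
  if dots < ndots then
    table.insert(list, name)           -- absolute name is tried last
  end
  return list
end

local search = { "ns.svc.cluster.local", "svc.cluster.local", "cluster.local" }
-- Without the trailing dot: 3 doomed lookups before the one that succeeds.
print(table.concat(candidates("svc-a.ns.svc.cluster.local", search, 5), "\n"))
-- With the trailing dot: a single lookup.
print(candidates("svc-a.ns.svc.cluster.local.", search, 5)[1])
```

Since svc-a.ns.svc.cluster.local has only 4 dots, ndots:5 forces the whole search list to be tried before the name itself, which is exactly the failed-lookup sequence shown above.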
Questions:
Since ndots = 5 is the default setting for Kubernetes, why is ${service-name}.${namespace}.svc.cluster.local. not documented on the official site?
Should we change all service calls to ${service-name}.${namespace}.svc.cluster.local.? Any potential downsides?
Since ndots = 5 is the default setting for Kubernetes, why is ${service-name}.${namespace}.svc.cluster.local. not documented on the official site?
Well, it's a really good question. I searched through the official docs, and it looks like this is not a documented feature. For this reason, a much better place to post your doubts, and to request a documentation improvement, is the official GitHub repository of Kubernetes DNS.
Should we change all service calls to ${service-name}.${namespace}.svc.cluster.local.? Any potential downsides?
If it works well for you and clearly increases performance, I would say: why not? I can't see any potential downsides. By adding the final dot you simply skip the first three lookups, which are doomed to fail anyway when you use a Service domain name of the form ${service-name}.${namespace}.svc.cluster.local.
Inferring from the lookup process you described and from your tests, I'd guess that if you use only ${service-name} (within the same namespace, of course), the DNS lookup should also be much faster and closer to the ~10 ms you observed with the trailing dot, since ${namespace}.svc.cluster.local is the first entry in the search list and the name is matched in the very first iteration.
Based on the latest documentation here, we should use ${service}.${namespace} to call a service in a different namespace, and expect it to resolve on the second attempt.
I have some Lua scripts embedded in nginx. In one of those scripts I connect to my Redis cache and do it like so:
local redis_host = "127.0.0.1"
local redis_port = 6379
...
local ok, err = red:connect(redis_host, redis_port);
I do not like this, because I have to hard-code the host and port. Should I instead use something like an .ini file, parse it in Lua, and read the configuration from there? How is this problem solved in real-world practice?
Besides, in my scripts I use RSA decryption and encryption. For example, I currently do it like this:
local public_key = [[
-----BEGIN PUBLIC KEY-----
MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7udJ++o3T6lgbFwWfaD/9xUMEZMtbm
GvbI35gEgzjrRcZs4X3Sikm7QboxMJMrfzjQxISPLtsy9+vhbITQNVkCAwEAAQ==
-----END PUBLIC KEY-----
]]
...
local jwt_obj = jwt:verify(public_key, token)
Once again, what I do not like about this is that I have to hard-code the public key. Is it used like this in production, or are there other techniques for storing secrets (like keeping them in environment variables)?
I'm sure some people do it this way in production. It is all a matter of what you're comfortable with and what your standards are. Some things that should determine your approach here:
What is the sensitivity of the data and risk if it were to be available publicly?
What is your deployment process? If you use an infrastructure as code approach or some type of config management then you surely don't want these items sitting embedded within code.
To solve the first item around sensitivity of the data, you'd need to consider many different scenarios of the best way to secure the secrets. Standard secret stores like AWS Parameter Store and CredStash are built just for this purpose and you'd need to pull the secrets at runtime to load them to memory.
For the second item, you could use a config file that is replaced per deployment.
To get the best of both worlds, you'd need to combine both a secure mechanism for storing secrets and a configuration approach for deployments/updates.
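As a concrete illustration of the environment-variable approach the question mentions, the Redis settings could be read with os.getenv and fall back to defaults. The variable names are made up; note that in OpenResty you must also whitelist each variable with the env directive in nginx.conf, or nginx strips it before Lua can see it.

```lua
-- Read connection settings from the environment, with fallbacks.
-- In nginx.conf you would also need:  env REDIS_HOST;  env REDIS_PORT;
local redis_host = os.getenv("REDIS_HOST") or "127.0.0.1"
local redis_port = tonumber(os.getenv("REDIS_PORT")) or 6379

print(redis_host, redis_port)
```

The same pattern works for the public key: keep the PEM in a file or secret store, read its path or contents from the environment, and load it once at startup rather than embedding it in the script.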
As mentioned in the comments, there are books written on both of these topics, so getting enough detail into a single SO answer is unlikely.
I have a compiled C binary (setch) which takes the parameter HIGH/LOW/OFF, so I am trying to execute a command such as setch OFF from JavaScript/jQuery like this:
$.get("cgi-bin/setch.cgi", "OFF");
or
$.get("cgi-bin/setch OFF");
As it's a GET, the space is encoded as %20, of course. However, the server then tries to execute the command setch%20OFF and returns:
404 Not Found
Without the parameter, the program executes and returns my message:
no parameters
i.e. all paths, permissions, etc. are OK.
Am I trying to do the impossible here? Or am I missing something in the server (lighttpd) config?
Thanks
You want to pass a parameter to a CGI. The way to do that is not to separate it with a space; instead, put the parameter after a "?" character. The HTTP server will then store everything that follows the question mark in the QUERY_STRING environment variable, which your CGI can read.
I.e.
$.get("cgi-bin/setch?OFF");
In your C program use getenv("QUERY_STRING") to access the passed parameter.
Check https://en.wikipedia.org/wiki/Common_Gateway_Interface for a list of all environment variables that the HTTP server sets for CGI programs. Be sure to treat the values as untrusted data.
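To keep the examples on this page in one language, here is a Lua sketch of the CGI side; a C program would read getenv("QUERY_STRING") the same way. The command names and the "no parameters" message come from the question; the validation logic itself is an assumption, since setch's internals are unknown.

```lua
-- Validate the QUERY_STRING value against the three allowed commands,
-- treating it as untrusted input.
local valid = { HIGH = true, LOW = true, OFF = true }

local function parse_command(query_string)
  if valid[query_string] then
    return query_string
  end
  return nil, "no parameters"
end

-- In a real CGI the value comes from the web server's environment:
local cmd, err = parse_command(os.getenv("QUERY_STRING") or "")
print(cmd or err)
```

Whitelisting the exact parameter values like this is also the safest way to avoid shell-injection problems when the CGI goes on to act on the input.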
Both plugins seem to use the same code for redis_gzip_flag and memcached_gzip_flag, and neither provides any instructions about this flag or how to set it; Redis strings don't have any flag support.
so what is this flag?
where do I set it in Redis?
what number should I choose in the nginx config?
I hadn't heard of this, but I found an example here; it looks like you add it manually to your location block when you know the data you're going to request from Redis is gzipped.
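In other words, nothing is set in Redis itself; the flag is purely a promise you make in the nginx config. A hedged sketch of what that location block might look like (the key, upstream address, and content type are assumptions, not from the question):

```nginx
location /cached {
    set $redis_key "$uri";          # key under which the page was stored
    redis_pass 127.0.0.1:6379;
    default_type text/html;
    redis_gzip_flag 1;              # values under these keys are stored gzipped,
                                    # so nginx should send Content-Encoding: gzip
    error_page 404 = @fallback;     # miss: fall through to the real backend
}
```

Since Redis has no per-key flags, you must only route keys through this location if you are certain their values really are gzip-compressed; otherwise clients receive garbage with a gzip Content-Encoding header.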