I have been trying to connect and list files over SFTP with RCurl. I can access the server via WinSCP, but RCurl won't go through. Here is where I currently stand:
library(RCurl)
opts <- curlOptions(
  proxy = "http://myproxy/",
  proxyport = 8080,
  httpproxytunnel = 1L,
  ssh.private.keyfile = "ssh-gibberish"
)
url <- "sftp://user:pwd@ftp.address.com/"
test <- getURL(url = url, .opts = opts, dirlistonly = TRUE, verbose = TRUE, port = 22)
* Trying 123.12.123.12...
* Connected to myproxy (123.12.123.12) port 8080 (#0)
* Establish HTTP proxy tunnel to ftp.address.com:22
> CONNECT ftp.address.com:22 HTTP/1.1
Host: ftp.address.com:22
Proxy-Connection: Keep-Alive
* Proxy CONNECT aborted
* Connection #0 to host myproxy left intact
Error in function (type, msg, asError = TRUE) : Proxy CONNECT aborted
Finally solved it. I had to change the proxy and add a username and password.
opts <- curlOptions(
  proxy = "http://mynewproxy/",
  proxyport = 8080,
  proxyusername = "domain\\user",
  proxypassword = "pwd",
  httpproxytunnel = 1L,
  ssh.private.keyfile = "ssh-gibberish"
)
Normally the proxy uses Windows credentials, but I guess this one is different.
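With these options in place, the same getURL call as before goes through and returns the file listing:
url <- "sftp://user:pwd@ftp.address.com/"
test <- getURL(url = url, .opts = opts, dirlistonly = TRUE, verbose = TRUE, port = 22)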
Related
I need to validate that the GET request is using the proxy I have set up. Is there a way to verify from the response which proxy was used?
library(httr)
HTTPUserAgents <- c("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36")
link <- paste0("https://www.instagram.com/carlosgcardenasv/")
response <- RETRY(verb = "GET",
                  url = link,
                  user_agent(HTTPUserAgents),
                  use_proxy(url = "XXXX",
                            port = 8888,
                            username = "XXXX",
                            password = "XXXX"),
                  verbose())
The proxy can be verified by making a GET request to https://jsonip.com/, which returns the IP address the request came from.
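For example, a quick check along these lines (a sketch using httr; the proxy settings are the same placeholders as above) should show the proxy's IP when the proxy is applied and your own IP when it is not:
library(httr)

# hypothetical proxy settings -- replace with your own
proxy_cfg <- use_proxy(url = "XXXX", port = 8888,
                       username = "XXXX", password = "XXXX")

# IP seen by jsonip.com without the proxy
content(GET("https://jsonip.com/"))$ip

# IP seen by jsonip.com when routed through the proxy
content(GET("https://jsonip.com/", proxy_cfg))$ip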
I am using nginx as a proxy to a Node.js application. I have the same application running multiple times, each instance on a different port. Requests are directed to the correct application/port based on the host name.
So
test1.domain.com would be proxied to 127.0.0.1:8000
test2.domain.com would be proxied to 127.0.0.1:8001
test3.domain.com would be proxied to 127.0.0.1:8002
When I hard-code "proxy_pass http://127.0.0.1:8000;", everything works fine.
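For context, the working hard-coded setup looks roughly like this (a sketch; server names and ports are illustrative):
server {
    listen 80;
    server_name test1.domain.com;

    location / {
        # hard-coded upstream port for this subdomain
        proxy_pass http://127.0.0.1:8000;
    }
}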
Now I have written an njs script that reads a file in the user's home directory to get the port number based on the subdomain. Here is the script:
# nginx.conf: include the js file and bind the variable
js_include sites-available/port_assign.js;
js_set $myPort port;

// sites-available/port_assign.js
function port(r) {
    var host = r.headersIn.host;
    var subdomain = host.split('.');
    var fs = require('fs');
    var filename = '/home/' + subdomain[0] + '/port';
    var port = fs.readFileSync(filename);
    port.trim();
    return(port);
}
This does read the file and return the port number correctly. I have verified this in the error logs, because I get:
2020/01/21 04:26:46 [error] 2729#2729: *6 invalid port in upstream "127.0.0.1:8001
", client: 96.54.17.234, server: *.foundryserver.com, request: "GET / HTTP/1.1", host: "test1.foundryserver.com"
Now, when I try to use the directive proxy_pass http://127.0.0.1:$myPort; I get an internal server error and the error shown above.
I'm not sure what the difference between the two is. I can only think that the variable $myPort has somehow picked up stray characters or something.
There was some extra information in the port variable. I was able to store the port number in JSON format and parse it in the js; {"port":"8000"} is stored in the file.
function port(r) {
    var host = r.headersIn.host;
    var subdomain = host.split('.');
    var fs = require('fs');
    var filename = '/home/' + subdomain[0] + '/myport';
    var jport = fs.readFileSync(filename);
    var port = JSON.parse(jport);
    return(port.port);
}
Doing the JSON parsing stripped any unseen characters from the variable.
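The original version likely failed because trim() returns a new string rather than modifying the value in place, so the trailing newline from the file stayed in $myPort. A minimal alternative sketch (untested) that keeps a plain-text port file:
function port(r) {
    var fs = require('fs');
    var subdomain = r.headersIn.host.split('.')[0];
    var contents = fs.readFileSync('/home/' + subdomain + '/port', 'utf8');
    // trim() does not modify in place; return its result to drop the newline
    return contents.trim();
}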
I'm a bit new to the language (Crystal), and I want to start hacking away at a very simple HTTP server. My current code looks like this:
require "http/server"
port = 8080
host = "127.0.0.1"
mime = "text/html"
server = HTTP::Server.new(host, port, [
  HTTP::ErrorHandler.new,
  HTTP::LogHandler.new,
  HTTP::StaticFileHandler.new("./public"),
]) do |context|
  context.response.content_type = mime
end
puts "Listening at #{host}:#{port}"
server.listen
My goal is to avoid listing the directory, which is what this currently does. I actually want to serve index.html if it is available at public/, without having to type index.html in the URL bar. Let's assume index.html does exist at public/. Any pointers to docs that might be useful?
Something like this?
require "http/server"
port = 8080
host = "127.0.0.1"
mime = "text/html"
server = HTTP::Server.new(host, port, [
  HTTP::ErrorHandler.new,
  HTTP::LogHandler.new,
]) do |context|
  req = context.request
  if req.method == "GET" && req.path == "/public"
    filename = "./public/index.html"
    context.response.content_type = "text/html"
    context.response.content_length = File.size(filename)
    File.open(filename) do |file|
      IO.copy(file, context.response)
    end
    next
  end
  context.response.content_type = mime
end
puts "Listening at #{host}:#{port}"
server.listen
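If the intent is for the bare root URL to serve the page as well, a minimal variation on the same answer (an untested sketch, assuming the same ./public layout) just matches "/" instead of "/public":
require "http/server"

port = 8080
host = "127.0.0.1"
mime = "text/html"

server = HTTP::Server.new(host, port, [
  HTTP::ErrorHandler.new,
  HTTP::LogHandler.new,
]) do |context|
  req = context.request
  # serve public/index.html for the bare root path
  if req.method == "GET" && (req.path == "/" || req.path == "/index.html")
    filename = "./public/index.html"
    context.response.content_type = "text/html"
    context.response.content_length = File.size(filename)
    File.open(filename) do |file|
      IO.copy(file, context.response)
    end
    next
  end
  context.response.content_type = mime
end

puts "Listening at #{host}:#{port}"
server.listen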
How can I make Postfix receive mail on my domain?
E.g. I want to receive mail at neko@domain.com, not at neko@mail.domain.com.
Here is what I put in main.cf, but it doesn't work:
mydomain = domain.com
myorigin = $mydomain
Any solution?
Take a closer look at mydestination in main.cf:
http://www.postfix.org/postconf.5.html#mydestination
Here is an example main.cf:
smtpd_banner = $myhostname ESMTP
biff = no
append_dot_mydomain = no
myhostname = mail.domain.com
inet_protocols = ipv4
mydestination = $myhostname, localhost.$mydomain
virtual_mailbox_domains = domain.com
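If the mailboxes are local system accounts rather than virtual ones, the usual alternative is to list the bare domain in mydestination itself (a sketch; a domain should appear either in mydestination or in virtual_mailbox_domains, not both):
myhostname = mail.domain.com
mydomain = domain.com
myorigin = $mydomain
mydestination = $myhostname, localhost.$mydomain, $mydomain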
I am trying to set a cookie with Lua + nginx + Redis. The idea: set the cookie if it doesn't exist, then save it to Redis.
local redis = require "resty.redis"
local red = redis:new()
local md5 = require "md5"

local ip = ngx.var.remote_addr
local secs = ngx.time()
local uid_key = ip .. secs
local uid = md5.sumhexa(uid_key)
local cookie = ngx.var.cookie_uid
local red_cookie = red:hget("cookie:"..uid, uid)

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.say("failed to connect to Redis: ", err)
    return
end

local args = ngx.req.get_headers()
local date_time = ngx.http_time(secs)

if cookie == nil or cookie ~= red_cookie then
    ngx.header['Set-Cookie'] = "path=/; uid=" .. uid
    local res, err = red:hmset("cookie:".. uid,
        "uid", uid,
        "date_time", date_time,
        "user-agent", args["user-agent"]
    )
    if not res then
        ngx.say("failed to set cookie: ", err)
    end
end
and my nginx conf
...
location /cookie {
    default_type "text/plain";
    lua_code_cache off;
    content_by_lua_file /lua/test.lua;
}
However, I am not seeing the cookie set. Instead I get:
[error] 63519#0: *408 attempt to set ngx.header.HEADER after sending out response headers, client: 127.0.0.1, server: localhost, request: "GET /cookie HTTP/1.1", host: "localhost"
I can't figure out why this doesn't work. I also thought I could set cookies purely with nginx. I need to track users who visit my page. Any thoughts?
Thanks!!
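One possible cause: the first ngx.say or ngx.print flushes the response headers, so any output emitted before the Set-Cookie assignment (for example the Redis error message) triggers exactly this error; also, the hget above is issued before red:connect. A reordered sketch of the same idea (untested, assuming resty.redis and the kikito md5 module):
local redis = require "resty.redis"
local md5 = require "md5"

local red = redis:new()
red:set_timeout(1000)  -- 1 second

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    -- log instead of ngx.say, so no response headers are sent yet
    ngx.log(ngx.ERR, "failed to connect to Redis: ", err)
    return ngx.exit(500)
end

local cookie = ngx.var.cookie_uid
if not cookie then
    local uid = md5.sumhexa(ngx.var.remote_addr .. ngx.time())
    -- the cookie's name=value pair has to come first, then attributes like Path
    ngx.header["Set-Cookie"] = "uid=" .. uid .. "; Path=/"
    local res, err = red:hmset("cookie:" .. uid,
        "uid", uid,
        "date_time", ngx.http_time(ngx.time()),
        "user-agent", ngx.req.get_headers()["user-agent"] or "")
    if not res then
        ngx.log(ngx.ERR, "failed to store cookie: ", err)
    end
end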
Update
I revised my approach to make the Redis requests through an upstream access point. Now I keep getting an invalid reply from redis.parser.
local redis = require "resty.redis"
local md5 = require "md5"
local parser = require "redis.parser"

local ip = ngx.var.remote_addr
local secs = ngx.time()
local uid_key = ip .. secs
local uid = md5.sumhexa(uid_key)
local args = ngx.req.get_headers()
local date_time = ngx.http_time(secs)

local test_cookie = ngx.location.capture("/redis_check_cookie", {args = {cookie_uid = "cookie:"..uid}})
if test_cookie.status ~= 200 or not test_cookie.body then
    ngx.log(ngx.ERR, "failed to query redis")
    ngx.exit(500)
end

local reqs = {
    {"hmset", "cookie:"..uid, "path=/"}
}

local raw_reqs = {}
for i, req in ipairs(reqs) do
    table.insert(raw_reqs, parser.build_query(req))
end

local res = ngx.location.capture("/redis_set_cookie?" .. #reqs,
    { body = table.concat(raw_reqs, "") })

local replies = parser.parse_replies(res.body, #reqs)
for i, reply in ipairs(replies) do
    ngx.say(reply[1])
end
and my nginx conf now has:
upstream my_redis {
    server 127.0.0.1:6379;
    keepalive 1024 single;
}
and
location /redis_check_cookie {
    internal;
    set_unescape_uri $cookie_uid $arg_cookie_uid;
    redis2_query hexists $cookie_uid uid;
    redis2_pass my_redis;
}

location /redis_set_cookie {
    internal;
    redis2_raw_queries $args $echo_request_body;
    redis2_pass my_redis;
}
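One detail worth checking (an observation, not a verified fix): HMSET takes field/value pairs, so {"hmset", "cookie:"..uid, "path=/"} sends a single field with no value and Redis answers with an arity error. Keeping the pairs balanced would look something like this:
-- hypothetical query with balanced field/value pairs
local reqs = {
    {"hmset", "cookie:" .. uid, "uid", uid, "date_time", date_time}
}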
Maybe you forgot to show something else; I don't have an OpenResty environment, but our environments are similar.
The code below is my test, and it runs perfectly.
This is the nginx.conf:
location /cookie {
    default_type "text/plain";
    lua_code_cache off;
    content_by_lua_file test.lua;
}
This is the Lua script:
local redis = require "redis"
local red = redis.connect('192.168.1.51', 6379)

local ip = ngx.var.remote_addr
local secs = ngx.time()
local uid_key = ip .. secs
local uid = (uid_key)
local cookie = ngx.var.cookie_uid
local red_cookie = red:hget("cookie:"..uid, uid)
local args = ngx.req.get_headers()
local date_time = ngx.http_time(secs)

if cookie == nil or cookie ~= red_cookie then
    ngx.header['Set-Cookie'] = "path=/; uid=" .. uid
    local res, err = red:hmset("cookie:".. uid,
        "uid", uid, "date_time", date_time,
        "user-agent", args["user-agent"])
    if not res then
        ngx.say("failed to set cookie: ", err)
    end
end
Will you show more of your code?
Friends, yesterday I used my own environment and changed some of your code, and the program ran OK. But you say your code still doesn't work. Just now I also tried resty.redis, and the code still runs OK. Then I used your environment and your code exactly as posted, and the result is also OK. I can't help any further.