I have configured my spider to access Tor through Privoxy, but this only works when I run it locally, because the proxy setting points to 127.0.0.1:port. When I deploy to Scrapinghub, the server side does not have Tor and Privoxy set up the way my machine does. Is there any way to make the spider route its requests through my machine, via my own network and port?
As far as I know, machines on the same network can reach each other through their internal IPs. Can I simply replace 127.0.0.1 with my public IP? I wonder, though, how the network would know which machine to forward the traffic to.
Below is the configuration I use to access Tor:
middlewares.py
class ProxyMiddleware(object):
    def process_request(self, request, spider):
        # Route every request through the local Privoxy instance, which forwards to Tor
        request.meta['proxy'] = "http://127.0.0.1:8118"
setting.py
DOWNLOADER_MIDDLEWARES = {
    'tutorial.middlewares.ProxyMiddleware': 1,
}
You can deploy a custom Docker image with Tor and Privoxy set up on it, and then keep pointing the middleware at 127.0.0.1.
https://shub.readthedocs.io/en/stable/deploy-custom-image.html#deploy-custom-image
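For reference, a minimal sketch of what such an image could look like, purely under my own assumptions (a Debian-based Scrapy Cloud stack image, a start-crawl entrypoint, and the standard Privoxy-to-Tor forward rule); the actual base image and entrypoint requirements are whatever the linked docs specify:
# Sketch only - confirm the base image and entrypoint against the shub custom-image docs linked above
FROM scrapinghub/scrapinghub-stack-scrapy:2.0
# install Tor and Privoxy, and tell Privoxy (port 8118) to forward through Tor's SOCKS port 9050
RUN apt-get update && apt-get install -y tor privoxy && \
    echo "forward-socks5t / 127.0.0.1:9050 ." >> /etc/privoxy/config
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
# start Tor and Privoxy inside the container, then hand over to the crawl entrypoint;
# the ProxyMiddleware above keeps pointing at http://127.0.0.1:8118
CMD tor --RunAsDaemon 1 && privoxy /etc/privoxy/config && start-crawl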
We have an internal load balancer where requests come in to an instance group, which is basically a set of VMs.
Load Balancer -> Instance Group (our service is running on port 8080)
We need to redirect some of the requests to another domain in order to test our new service:
Load Balancer -> send request to VM 1 at port 8080 -> redirect it to newservice.com/filter=<with arguments>
-> VM 2 and VM 3 keep running the service on port 8080
What's the correct way to do that? Since the new service is not in the same GCP project, any help on how we should achieve it would be appreciated.
I have a simple nginx configuration:
server {
    listen 80;
    server_name MY-MACHINE-IP (VM 1)
    return 301 https://newService.com$request_uri;
}
But no redirection is happening
There are only three ways to make cross-project connections possible: Shared VPC, Cloud VPN, or VPC Peering. Choose the option that best suits your use case; I would try VPC Peering first (see the sketch after the list below).
Shared VPC
Cloud VPN
VPC Peering
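A minimal sketch of what setting up VPC Peering with gcloud might look like, assuming both projects use a network named default (the project IDs project-a and project-b, the network names, and the peering names are placeholders of mine, not values from the question); the peering has to be created from both sides:
# create the peering from project A towards project B
gcloud compute networks peerings create peer-a-to-b \
    --project=project-a \
    --network=default \
    --peer-project=project-b \
    --peer-network=default

# and the mirror peering from project B towards project A
gcloud compute networks peerings create peer-b-to-a \
    --project=project-b \
    --network=default \
    --peer-project=project-a \
    --peer-network=default
Once the peering is ACTIVE on both sides, instances in the two projects can reach each other on their internal IPs, so VM 1 can redirect or proxy to the new service without going over the public internet.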
I am running an Apache server with a virtual host called test.net.ngrok.io.
What I would like is to make my virtual host publicly accessible with an SSL or TLS certificate.
I have the pro package, see: https://ngrok.com/pricing
So far I have managed to run this command, but without SSL:
./ngrok http -region=us -hostname=test.net.ngrok.io -host-header=rewrite test.net.ngrok.io:80
Is there a way to get a fixed public URL with an SSL/TLS certificate? If so, can you help me with the exact command to create the ngrok tunnel?
Thank you
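Not an authoritative recipe, but a sketch of the direction I would try, assuming the ngrok 2.x client already in use here: besides http tunnels, ngrok offers a tls tunnel type that passes the encrypted traffic straight through to the local server, so Apache can terminate TLS with its own certificate. Assuming an Apache SSL vhost listening locally on port 443 (my assumption, not stated in the question), the tunnel would look roughly like:
./ngrok tls -region=us -hostname=test.net.ngrok.io 443
It may also be worth checking whether the https:// URL that ngrok prints for the existing http tunnel already works, since ngrok can terminate HTTPS at its own edge for http tunnels.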
I am trying to get the remote (client) IP address:
var ip = httpContext.Features.Get<IHttpConnectionFeature>()?.RemoteIpAddress;
But it only works for local requests (it returns the ::1 value).
When I load the page from a remote machine, the value is null. I investigated and found that there is no IHttpConnectionFeature in the Features collection in this case.
Why? And how do I get the remote IP address correctly?
I know that this post is old, but I came here looking for an answer to the same question, and this is what finally worked for me:
In project.json, add the dependency:
"Microsoft.AspNetCore.HttpOverrides": "1.0.0"
In Startup.cs, in the Configure method, add:
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});
And, of course:
using Microsoft.AspNetCore.HttpOverrides;
Then, I got the ip like this:
Request.HttpContext.Connection.RemoteIpAddress
In my case, when debugging in Visual Studio I always got the IPv6 localhost address (::1), but when deployed on IIS I always got the remote IP.
Some useful links:
How do I get client IP address in ASP.NET CORE? and RemoteIpAddress is always null
The ::1 may be because connections terminate at IIS, which then forwards to Kestrel (the ASP.NET Core web server), so connections to the web server really are coming from localhost.
(https://stackoverflow.com/a/35442401/5326387)
Just try this:
var ipAddress = HttpContext.Connection.RemoteIpAddress;
And if you have another computer on the same LAN, try connecting from that PC using your machine's LAN IP instead of localhost; otherwise you will always get ::1.
I've been trying to stream FLV content from my OpenShift cartridge using nginx + the rtmp module.
On my local machine, with the attached configuration, everything works just fine (I use ffplay for testing, e.g. ffplay rtmp://localhost:8080/test/streamkey)
When I try with the same configuration on openshift, I get the following error:
HandShake: Type mismatch: client sent 3, server answered 60 f=0/0
RTMP_Connect1, handshake failed.
However, if I enable port forwarding and test the stream server using ffplay rtmp://127.0.0.1:8080/test/streamkey, everything works fine. Here are my port forwardings:
rhc port-forward myappname
Checking available ports ... done
Forwarding ports ...
To connect to a service running on OpenShift, use the Local address
Service   Local            OpenShift
-------   --------------   ---------------------
nginx     127.0.0.1:8080   =>  127.10.103.1:8080
My cartridge is a "diy-0.1" cartridge. nginx 1.7.6 (also tested 1.4.4) + rtmp-module.
I suspect there is an issue with some proxy (Apache?) that OpenShift uses for handling gears; maybe it does not allow RTMP traffic?
NB: configuring nginx for HTTP only works fine.
Can anybody help? I'm stuck; I think this is the first time I've asked something on Stack Overflow :-)
The nginx configuration (NB: the "play" path and the IP:port are taken from the OpenShift environment variables):
rtmp {
    server {
        listen 127.10.103.1:8080;
        chunk_size 8192;

        application test {
            play /var/lib/openshift/54da37644382ece45c000139/app-root/runtime/repo/public;
        }
    }
}
There is an Apache proxy in front of your application on OpenShift Online, and it is likely that your stream is being treated as HTTP traffic instead of RTMP traffic; that is why you are getting the handshake mismatch. When you use the port-forward you gain direct access to your application and bypass the proxy, which is why it works fine that way. There is currently no way to bypass the Apache reverse proxy through the public IP; please see this developer portal article for more information about how requests are routed to your application: https://developers.openshift.com/en/managing-port-binding-routing.html
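A rough way to see the difference between the two paths (the public hostname below is a placeholder in the usual OpenShift Online app-namespace.rhcloud.com form, not a value from the question): the public route answers as HTTP through the Apache front end, which is what breaks the RTMP handshake, while the rhc port-forward talks to nginx-rtmp directly:
# against the public route: Apache answers with HTTP, so RTMP clients fail the handshake
curl -v http://myappname-mynamespace.rhcloud.com/ -o /dev/null
# through the port-forward from the question: direct RTMP to nginx works
rhc port-forward myappname
ffplay rtmp://127.0.0.1:8080/test/streamkey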
I just started a new AWS EC2 instance. In the instance's security group I added a new rule to open port 8080. I also stopped the iptables service on the instance, per another post. So in theory this port should be wide open.
I started my RESTful service on 8080 and was able to access it locally via curl.
When I come in with curl remotely I get an error saying it couldn't connect to the host.
What else should I check to see if 8080 is truly open?
I started my RESTful service on 8080 and was able to access it locally via curl.
What kind of technology is your RESTful service based upon?
Many frameworks nowadays listen on localhost (127.0.0.1) only, either by default or by way of their examples; see e.g. the canonical Node.js one (I realize that port 8080 hints towards Java/Tomcat, but anyway):
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');
The log message generated by starting this is Server running at http://127.0.0.1:1337/ - the 127.0.0.1 part is the key here: the server has been configured to listen on the IP address 127.0.0.1 only, whereas you are trying to connect to it via your public Amazon EC2 IP address.
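As a rough check on the instance itself, assuming a Linux AMI with shell access (the exact fix depends on which framework the service uses; binding to 0.0.0.0 below is the usual remedy, not something stated in the question):
# on the EC2 instance: see which address the service on port 8080 is bound to
sudo ss -ltnp | grep 8080     # 127.0.0.1:8080 = localhost only, 0.0.0.0:8080 = all interfaces
# after reconfiguring the service to listen on 0.0.0.0 (or the instance's private IP),
# this should succeed from your remote machine:
curl -v http://<EC2-public-IP>:8080/     # <EC2-public-IP> is a placeholder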