Wolfram Mathematica: importing data from an https website

I apologize for the banality of the question, but I have only recently gotten into programming.
I tried to import data from the following website using this command in Mathematica:
Import["https://www.soccerstats.com
/results.asp?league=england_2019","Data"]
and I get this error:
ExternalServiceSecuritySetSSLProperties::tsfail: SSL truststore failure: Could not create default trust store.
FetchURL::conopen: The connection to URL https://www.soccerstats.com/results.asp?league=england_2019 cannot be opened. If the URL is correct, you might need to configure your firewall program, or you might need to set a proxy in the Internet connectivity tab of the Preferences dialog (or by calling SetInternetProxy). For HTTPS connections, you might need to inspect the authenticity of the server's SSL certificate and choose to accept it.
It seems related to the "https" protocol, because the same command works with any "http" URL.
Is there any way to work around this so that I can obtain the data?

Related

JupyterLab does not work when redirected using TLS

I have a local JupyterLab instance running on the mint-2 computer, started with jupyter lab --ip "*", and it listens on port 8888. I can access it just fine via the URL mint-2:8888.
I also have a server instance, ubuntu-2. I reverse-SSH tunnel from mint-2:8888 to ubuntu-2:8888, which means I can access it just fine from my mint-1 laptop anywhere in the world via the URL ubuntu-2:8888.
However, this is not encrypted with TLS, so I wanted to improve it. On ubuntu-2 I have an nginx load-balancer container that terminates HTTPS traffic and forwards the decrypted HTTP traffic to other locations. I have set up jupyter.ubuntu-2:443 so that it forwards to ubuntu-2:8888, which in turn tunnels to mint-2:8888. This version initially seems to open up just fine, and I can navigate directories. However, whenever I try to launch a new terminal or notebook, or even create a new directory, it doesn't work. Here's the network log when I save a modified notebook:
My question is: why won't these requests go through, considering I can still interact with the interface just fine everywhere else, just not when creating folders/notebooks/terminals? I am thinking that JupyterLab might be using UDP, and I'm considering passing UDP traffic through nginx, but this doesn't really make sense, as this is clearly a PUT request. Any other help on where to find more logs, or speculation on what might have gone wrong, is much appreciated.
I dug into it a little more and managed to figure it out.
JupyterLab has a CORS policy that doesn't allow requests coming from the ubuntu-2 origin. I added c.NotebookApp.allow_origin = "*" to JupyterLab's config at ~/.jupyter/jupyter_lab_config.py, as mentioned here.
Then I found that everything was still not functional, because Jupyter requires both the HTTP and WebSocket protocols, and my server setup only passed HTTP traffic. So I needed to enable generic TCP forwarding on ubuntu-2's HAProxy load balancer. Because I have multiple virtual hosts on the server, I needed to distinguish between them, so I used Server Name Indication (SNI), the server name included in the TLS handshake.
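The fix above lives entirely in the load balancer, but the underlying requirement is easy to see in code. What follows is a minimal sketch, not what the author deployed: a reverse proxy in TypeScript for a recent Deno runtime that relays both plain HTTP requests and WebSocket upgrades. The upstream address mint-2:8888 comes from the question; the listen port, the message buffering, and the lack of cookie/auth forwarding are illustrative simplifications.

// Sketch only: a proxy in front of Jupyter must relay BOTH plain HTTP and
// WebSocket upgrades; kernel and terminal channels ride on WebSockets.
const UPSTREAM = "mint-2:8888"; // upstream from the question

Deno.serve({ port: 8080 }, (req) => {
  const url = new URL(req.url);
  const path = url.pathname + url.search;

  // WebSocket upgrade: bridge the client socket to an upstream socket.
  if (req.headers.get("upgrade")?.toLowerCase() === "websocket") {
    const { socket: client, response } = Deno.upgradeWebSocket(req);
    const upstream = new WebSocket(`ws://${UPSTREAM}${path}`);
    upstream.binaryType = "arraybuffer";
    client.binaryType = "arraybuffer";
    const pending: (string | ArrayBuffer)[] = []; // messages sent before upstream is open
    client.onmessage = (e) =>
      upstream.readyState === WebSocket.OPEN ? upstream.send(e.data) : pending.push(e.data);
    upstream.onopen = () => pending.forEach((m) => upstream.send(m));
    upstream.onmessage = (e) => client.send(e.data);
    client.onclose = () => upstream.close();
    upstream.onclose = () => client.close();
    return response;
  }

  // Everything else is ordinary HTTP; a naive pass-through is enough to show the
  // contrast (a real proxy would also forward cookies and handle hop-by-hop headers).
  return fetch(`http://${UPSTREAM}${path}`, {
    method: req.method,
    headers: req.headers,
    body: req.body,
  });
});

A front end that only does the second branch is exactly the situation described above: pages load and directories list, but anything that needs a kernel or terminal channel silently fails.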

Deno Server doesn't go through the Internet

I built a simple web server with just the serve function from the std http module. It just redirects every request to a new URL:
import { serve } from "https://deno.land/std@0.120.0/http/server.ts";
serve(req => Response.redirect("https://google.com"))
It works when I access the server through a browser on my laptop, where the server is running, but when I try to access it from another machine in the same network using my laptop's IP address, there is simply no response at all. Is this one of Deno's security features, and if so, how can I deactivate it?
Update:
So I tried looking at the requests I make on my local machine in Wireshark, but when I run the server and send a request, nothing shows up there. I disabled my Wi-Fi connection to see if that changed anything, and to my surprise I still got an answer from the server when I sent a request through the browser. I came to the conclusion that the Deno server somehow doesn't serve over the local network, which really confuses me. Is there a way to change that behaviour?
This is not related to Deno, but rather to the firewall features of your device/router/network, or to an error in the method you are using to connect from the other device (a typo, network configuration, etc.).
Without additional configuration (by default), serve binds to 0.0.0.0:8000, so — as an example — if your laptop is assigned the local address 192.168.0.100 by your router, you could reach the server at the address http://192.168.0.100:8000.
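As a quick way to rule Deno out, here is a hedged sketch with the bind address written out explicitly (if your std version's serve takes an addr string instead of port/hostname options, adjust accordingly):

import { serve } from "https://deno.land/std@0.120.0/http/server.ts";

// 0.0.0.0 listens on every interface, so other machines on the LAN can reach
// the server (subject to the OS firewall); 127.0.0.1 would be loopback-only.
serve((_req) => Response.redirect("https://google.com", 302), {
  hostname: "0.0.0.0",
  port: 8000,
});

From the other machine, curl -i http://192.168.0.100:8000 (substituting your laptop's actual LAN address) should then show the 302 and its Location header; if it times out instead, the blocker is the firewall or the address, not Deno.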
You might want to do research on SE/NetworkEngineering and elsewhere to determine the cause of the blocked connection.

Connection is not private in localhost

I am getting an error while debugging an ASP.NET web application. It says:
"Your connection is not private. Attackers might be trying to steal your information from localhost (for example, passwords, messages, or credit cards). This server could not prove that it is localhost; its security certificate is from some other machine."
What steps should I follow to fix this? Is there a problem if I continue anyway, since it is just localhost?
I know it's a bit of an old post. However, I had been searching for more than an hour for a solution to this trivial yet irritating problem, and I just found one.
You need to export the certificate from Chrome. Open the URL, click the lock icon > Certificate Information > Details tab > Copy to File.
Then import the certificate into Windows. Open Run (Windows key + R) > type certmgr.msc > expand Trusted Root Certification Authorities > Certificates > Action (from the menu bar) > All Tasks > Import (now import the file you exported in the previous step).
This worked for me.
You can use port 8000 instead of 80.
Fortunately, modern browsers consider http://127.0.0.1:8000/ to be a “potentially trustworthy” URL because it refers to a loopback address. Traffic sent to 127.0.0.1 is guaranteed not to leave your machine, and so is considered automatically secure against network interception.
Read more from Let's Encrypt.
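As a generic illustration of that loopback point (the question's app is ASP.NET, so this TypeScript/Deno snippet is not a fix for the Visual Studio setup, just a demonstration of the idea):

// Any server bound to the loopback address can be used over plain http without
// the "not private" warning, because traffic to 127.0.0.1 never leaves the machine.
Deno.serve({ hostname: "127.0.0.1", port: 8000 }, () =>
  new Response("hello from loopback"),
);

Opening http://127.0.0.1:8000/ in a modern browser then shows no warning, and the page is treated as a secure context even though no certificate is involved.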

Windows Azure VM SSL and Cloudapp.net

I installed an ASP.NET application on a Windows Azure VM (IIS 7). The SSL certificate is installed and configured, and the application works correctly. I have removed the HTTP binding and HTTP endpoints.
The issue I am having is that if I use the cloudapp.net link (using https), the application still opens with a mismatched certificate.
What can I do to prevent any user from opening my application via https://xx.cloudapp.net/x?
It seems really silly that people are saying this isn't the right place for this question, since some of the solutions could be code-related, e.g. in your application, check the host and, if it's the cloudapp.net name, do a URL redirect.
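To make that concrete, here is a small sketch of the host check in TypeScript; the application in the question is ASP.NET, so treat this purely as an illustration of the idea, and note that www.example.com stands in for whatever the real canonical host name is:

// Hypothetical host check: refuse to serve the app under *.cloudapp.net and
// send the browser to the canonical host instead.
const CANONICAL_HOST = "www.example.com"; // placeholder, not from the question

function enforceCanonicalHost(req: Request): Response | null {
  const url = new URL(req.url);
  if (url.hostname.endsWith(".cloudapp.net")) {
    url.hostname = CANONICAL_HOST;
    return Response.redirect(url.toString(), 301); // permanent redirect
  }
  return null; // fall through to normal request handling
}

In ASP.NET the same check could live in a URL rewrite rule or an early pipeline step that inspects the Host header.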
There are a few different options here, but it sounds like what you're looking for is simply a way to prevent someone from viewing the application using that URL.
What I would do is set up a site in IIS that uses host header resolution to look for xx.cloudapp.net. If that host is recognized, redirect to the https version of your app using the HTTP Redirect settings. Don't bind the SSL port to this site, or you'll run into SSL errors like the one you showed above.
The other option is to leave that site out entirely and simply use host header resolution to filter out requests for your site. I suspect what you've done is assign all incoming requests to the only IP address on the system, which is why xx.cloudapp.net is showing your app and the certificate check is failing.
That would cause xx.cloudapp.net to fail to show any site at all, but I think that might be what you want anyway.

Multiple certificates for HTTPS on a software NLB'd IIS7 cluster

We're currently trying to set up HTTPS with multiple certificates. We've had some limited success, but we're getting some results I can't make any sense of...
Basically we have two servers in our NLB (10.0.51.51 and 10.0.51.52) and two IPs assigned to the NLB (10.0.51.2 and 10.0.51.4), and we have IIS listening on both of these IPs with different wildcard certificates (to avoid giving out public IPs, let's say A:443 routes to 10.0.51.2:443 and B:443 routes to 10.0.51.4:443). We also have a Cisco router using port address translation to route port 443 from two external IPs to these internal NLB IPs.
The weird thing is, this works if we request A:443 or B:443, but if you go internally to 10.0.51.51:443, 10.0.51.52:443, 10.0.51.2:443 or 10.0.51.4:443 you ALWAYS get the same SSL certificate. This certificate was in the past assigned to *:443, but we've made sure there are no * bindings defined in IIS anymore.
When I run "netsh http show sslcert" and trim out all the irrelevant entries, I get:
IP:port : 0.0.0.0:443
Certificate Hash : <Removed: Cert 1>
IP:port : 10.0.51.2:446
Certificate Hash : <Removed: Cert 3 - Another site>
IP:port : 10.0.51.3:446
Certificate Hash : <Removed: Cert 3 - Another site>
IP:port : 10.0.51.4:443
Certificate Hash : <Removed: Cert 2>
This tells me that the * binding is still in there, which is a bit weird, but I can't see why that would prevent the others from working (or, even more strangely, why the requests through the router would work).
It's got me wondering whether the requests are actually being treated as arriving on the machine's own IP rather than the NLB IP. Unfortunately, our dev environment is only a single server, which limits how much trial and error I can apply (since all I can test on is the live environment) without convincing management to buy more servers for the test environment, which is something I'm working on.
Does anyone have any idea:
Why there's a difference between internal and through the router?
Why the internal request is getting the wrong cert?
How I can remedy this so that we get the same behavior on both sides?
I ended up tracking the problem down. I'm leaving this as a hint for anyone else who falls into the same trap...
The problem was caused by our use of a shared configuration model on our IIS servers. When you set up an HTTPS binding, this appears to only actually bind it on the box you're managing it from (leaving the other box completely unbound). Since our * binding still existed, it was catching the requests on the server we hadn't configured through the UI and had just let pick up the shared config.
Crazy bad luck with single-affinity NLB sent us down the garden path of suspecting the router, because it made our internal requests go to one server and our external requests to another.
We ended up finding this by running "netsh http show sslcert > certs.txt" on both servers and diff'ing the outputs.
Going forward, our plan is to no longer use the IIS UI for SSL configuration, and instead follow the steps below:
Install the certificates on each server.
Bind the SSL port from the command line with "netsh http add sslcert ipport=?:? certhash=? appid=?" (the ip:port is easy to work out, the certhash can be copied from the "certificate hash" section of the Server Certificates page, and the appid can be copied from an existing IIS binding shown by "netsh http show sslcert").
Edit the IIS ApplicationHost.config file directly to add the bindings without the UI being involved.
Our understanding is that this will prevent a repeat of this error.
