My workplace filters our internet traffic by forcing us to go through a proxy, and unfortunately sites such as IT Conversations and Libsyn are blocked. However, mp3 files in general are not filtered, if they come from sites not on the proxy's blacklist.
So is there a website somewhere that will let me give it a URL and then download the MP3 at that URL and send it my way, thus slipping through the proxy?
Alternatively, is there some other easy way for me to get the mp3 files for these podcasts from work?
EDIT and UPDATE: Since I've gotten downvoted a few times, perhaps I should explain/justify my situation. I'm a contractor working at a government facility, and we use some commercial filtering software which is very aggressive and overzealous. My boss is fine with me listening to podcasts at work and is fine with me circumventing the proxy filtering, and doesn't want to deal with the significant red tape (it's the government after all) associated with getting the IT department to make an exception for IT Conversations or the Java Posse, etc. So I feel that this is an important and relevant question for programmers.
Unfortunately, all of the proxy websites for bypassing web filters have also been blocked, so I may have to download the podcasts I like at home in advance and then bring them into work. If anyone can tell me about a lesser-known service I can try which might not be blocked, I'd appreciate it.
Can you SSH out? SSH Tunnels are your friend!
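For example, you could forward a local port through a machine outside the filter and pull the mp3 over that. A minimal sketch, assuming the third-party sshtunnel package and placeholder host names (virtual-hosted sites may additionally need the Host header set):
# pip install sshtunnel; 'home.example.com' is your own machine outside the filter
from sshtunnel import SSHTunnelForwarder
from urllib2 import urlopen

with SSHTunnelForwarder(
        'home.example.com',
        ssh_username='me',
        remote_bind_address=('media.libsyn.com', 80),  # the blocked host
        local_bind_address=('127.0.0.1', 8080)) as tunnel:
    # requests to localhost:8080 now travel inside the SSH connection
    mp3 = urlopen('http://127.0.0.1:8080/somefeed/episode.mp3').read()
    with open('episode.mp3', 'wb') as f:
        f.write(mp3)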
Why not subscribe at home and have your favorite podcasts copied to your mp3 player or a USB drive, and just take it to work with you each day and back home in the evening? Then you can listen, and you're not circumventing your client's network.
There are many other development/.NET/technology podcasts; try one of those. For the blocked sites, try an anonymous proxy site; there are plenty out there.
Since this is work-related material, I would recommend opening a request to have the sites in question unblocked.
I ended up writing an extremely dumb-and-simple CGI script and hosting it on my web server, with a script on my work computer to fetch from it. Here's the CGI script:
#!/usr/local/bin/python
import cgitb; cgitb.enable()
import cgi
from urllib2 import urlopen

def tohex(data):
    # Encode each byte as two lowercase hex digits
    return "".join(hex(ord(char))[2:].rjust(2, "0") for char in data)

def fromhex(encoded):
    data = ""
    while encoded:
        data += chr(int(encoded[:2], 16))
        encoded = encoded[2:]
    return data

if __name__ == "__main__":
    print("Content-type: text/plain")
    print("")
    # The target URL arrives hex-encoded in the query string
    url = fromhex(cgi.FieldStorage()["target"].value)
    contents = urlopen(url).read()
    # Emit the payload as hex, 40 bytes (80 characters) per line
    for i in range(len(contents) / 40 + 1):
        print(tohex(contents[40 * i:40 * i + 40]))
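The target URL travels hex-encoded in the query string, so the filter never sees a suspicious hostname or a .mp3 extension. For example (hypothetical values, using the script's own tohex):
>>> tohex("http://a")
'687474703a2f2f61'
# so the client requests:
# http://example.com/cgi-bin/hex.py?target=687474703a2f2f61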
and here's the client script used to download the podcasts:
#!/usr/bin/env python2.6
import os
from sys import argv
from urllib2 import build_opener, ProxyHandler

# Fork and let the parent exit so the download continues in the background
if os.fork():
    exit()

def tohex(data):
    return "".join(hex(ord(char))[2:].rjust(2, "0") for char in data)

def fromhex(encoded):
    data = ""
    while encoded:
        data += chr(int(encoded[:2], 16))
        encoded = encoded[2:]
    return data

if __name__ == "__main__":
    if len(argv) < 2:
        print("usage: %s URL [FILENAME]" % argv[0])
        quit()
    os.chdir("/home/courtwright/mp3s")
    # Ask the CGI script on the outside server for the hex-encoded target URL
    url = "http://example.com/cgi-bin/hex.py?target=%s" % tohex(argv[1])
    fname = argv[2] if len(argv) > 2 else argv[1].split("/")[-1]
    with open(fname, "wb") as dest:
        # Fetch through the work proxy; each response line is 40 hex-encoded bytes
        for line in build_opener(ProxyHandler({"http": "proxy.example.com:8080"})).open(url):
            dest.write(fromhex(line.strip()))
            dest.flush()
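Usage is just URL plus an optional filename, as the script's usage string says (the script name below is whatever you save it as):
./getmp3.py http://media.libsyn.com/somefeed/episode.mp3
The script immediately forks into the background, fetches the hex stream through the work proxy, decodes it, and drops the file into /home/courtwright/mp3s.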
I am creating a GroupMe bot, and I'm testing out the callback URL and the basic WSGI app I've set up so far. I am planning to host the bot on Heroku, but I'm testing it on my local machine first. I registered a bot with the callback URL http://MY_IP_ADDRESS:8000. When I open a different shell and run requests.post('http://MY_IP_ADDRESS:8000', data = 'something') in the Python interpreter, everything works fine. However, when there is activity in the GroupMe group, nothing happens, not even an error message.
Here's my (simplified) code:
from wsgiref.simple_server import make_server

def app(environ, startResponse):
    try:
        requestBodySize = int(environ.get('CONTENT_LENGTH', 0))
    except ValueError:
        requestBodySize = 0
    # requestBody = environ['wsgi.input'].read(requestBodySize)
    print('something')  # should fire on every incoming request
    responseBody = bytes('successful', 'utf-8')
    status = '200 OK'
    responseHeaders = [('Content-Type', 'text/plain'),
                       ('Content-Length', str(len(responseBody)))]
    startResponse(status, responseHeaders)
    return [responseBody]

server = make_server('', 8000, app)
server.serve_forever()
I'm sure I'm missing something obvious, but I can't for the life of me figure out what. I'd appreciate any help!
I never figured out why the callback URL wasn't working with localhost, but when I deployed the app on Heroku, everything worked fine! It must have had something to do with my firewall settings.
When you run servers on your local machine, your firewall doesn't really like that. GroupMe also can't send to anything but public-facing addresses, which is why Heroku works. One thing I can recommend in the future is using ngrok (https://ngrok.com/): it works alongside your server to give your machine a public-facing address that you can use as the callback URL. I use ngrok to test my bots and iterate quickly before pushing to a dedicated server like Heroku; honestly, looking through Heroku log files is a pain...
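If you script your tests, the third-party pyngrok wrapper can start the tunnel straight from Python (this assumes the ngrok binary is installed; running ngrok http 8000 in a shell does the same thing):
# pip install pyngrok
from pyngrok import ngrok

tunnel = ngrok.connect(8000)  # expose local port 8000
print(tunnel)                 # shows the public URL to register as the callback URL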
I've been looking through documentation for a while to find a way to accomplish this and haven't been successful yet. The basic idea is that I have a piece of HTML that I load through Qt's webview. The same content can be exported to a single HTML file.
This file uses libraries such as Bootstrap and jQuery. Currently I load them through a CDN, which works just fine when online. However, my application also needs to run offline, so I'm looking for a way to intercept the loading of these libraries in Qt and serve locally saved files instead. I've tried installing an https QWebEngineUrlSchemeHandler, but that never seems to trigger its requestStarted method.
(PyQT example follows)
QWebEngineProfile.defaultProfile().installUrlSchemeHandler(b'https', self)
If I use a different text for the scheme and embed that into the page, it works, so my assumption is that https doesn't work because Qt already has a default handler registered for it. But that different scheme would fail in the file export.
Anyway, back to the core question: is there a way to intercept the loading of libraries, or to change the URL scheme specifically within Qt only?
Got further with QWebEngineUrlRequestInterceptor: I'm now redirecting https requests to my own URI, which has a URI handler. However, the request never gets through to it, because: Redirect location 'conapp://webresource/bootstrap.min.css' has a disallowed scheme for cross-origin requests.
How do I whitelist my own conapp URI scheme?
Edit: For completeness' sake, it turns out that back when I originally asked the question, this was impossible to accomplish with PySide 5.11 due to bugs in it. The bug I reported back then is nowadays flagged as fixed (in 5.12.1, I believe), so it should now be possible again using Qt methods. For my own project, however, I'll stick with jinja for now, which has since become the solution to many other problems as well.
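In case someone needs it: a minimal sketch of the whitelisting, assuming PyQt5 on Qt 5.14 or later (the CorsEnabled flag does not exist before 5.14); this must run before the QApplication is created:
from PyQt5.QtWebEngineCore import QWebEngineUrlScheme

scheme = QWebEngineUrlScheme(b'conapp')
scheme.setFlags(QWebEngineUrlScheme.SecureScheme |
                QWebEngineUrlScheme.CorsEnabled)  # allow cross-origin loads from conapp://
QWebEngineUrlScheme.registerScheme(scheme)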
The following example shows how I've done it. It uses the QWebEngineUrlRequestInterceptor to redirect content to a local server.
As an example, I intercept the stacks.css for stackoverflow and make an obvious change.
import requests
import sys
import threading

from PyQt5 import QtWidgets, QtCore
from PyQt5.QtWebEngineWidgets import QWebEngineView, QWebEnginePage, QWebEngineProfile
from PyQt5.QtWebEngineCore import QWebEngineUrlRequestInterceptor, QWebEngineUrlRequestInfo
from http.server import HTTPServer, SimpleHTTPRequestHandler
from socketserver import ThreadingMixIn

# Set these to the address you want your local patch server to run on
HOST = '127.0.0.1'
PORT = 1235

class WebEngineUrlRequestInterceptor(QWebEngineUrlRequestInterceptor):
    def patch_css(self, url):
        print('patching', url)
        r = requests.get(url)
        new_css = r.text + '#mainbar {background-color: cyan;}'  # example of some CSS change
        with open('local_stacks.css', 'w') as outfile:
            outfile.write(new_css)

    def interceptRequest(self, info: QWebEngineUrlRequestInfo):
        url = info.requestUrl().url()
        if url == "https://cdn.sstatic.net/Shared/stacks.css?v=596945d5421b":
            self.patch_css(url)
            print('Using local file for', url)
            info.redirect(QtCore.QUrl('http://{}:{}/local_stacks.css'.format(HOST, PORT)))

class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
    """Threaded HTTPServer"""

app = QtWidgets.QApplication(sys.argv)

# Start up a thread to serve the patched content
server = ThreadingHTTPServer((HOST, PORT), SimpleHTTPRequestHandler)
server_thread = threading.Thread(target=server.serve_forever)
server_thread.daemon = True
server_thread.start()

# Install an interceptor to redirect to the patched content
interceptor = WebEngineUrlRequestInterceptor()
profile = QWebEngineProfile.defaultProfile()
profile.setRequestInterceptor(interceptor)

w = QWebEngineView()
w.load(QtCore.QUrl('https://stackoverflow.com'))
w.show()
app.exec_()
So, the solution I went with in the end was to first introduce jinja templates. The template has its variables and blocks set depending on whether it is rendered for export or for internal use, and with that I did not need the interceptor anymore.
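For illustration, a stripped-down version of the jinja idea (the file names and CDN URL are placeholders, assuming the jinja2 package):
from jinja2 import Template

page = Template(
    '<link rel="stylesheet" href="'
    '{% if offline %}file:///app/libs/bootstrap.min.css'
    '{% else %}https://cdn.example.com/bootstrap.min.css'
    '{% endif %}">')

html_webview = page.render(offline=True)   # internal, offline-capable rendering
html_export = page.render(offline=False)   # standalone export keeps the CDN link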
This is my first post with this account, and I've been struggling for the last week to get this to work, so I hope someone can help me get this working.
I'm trying to pull some data from https://api.connect2field.com/, but it's rejecting all of my authentication attempts from Python (not from a browser, though).
The code I'm using:
import urllib.request as url
import urllib.error as urlerror

urlp = 'https://api.connect2field.com/api/Login.aspx'

# Create an OpenerDirector with support for Basic HTTP Authentication...
auth_handler = url.HTTPBasicAuthHandler()
auth_handler.add_password(realm='Connect2Field API',
                          uri=urlp,
                          user='*****',
                          passwd='*****')
opener = url.build_opener(auth_handler)

# ...and install it globally so it can be used with urlopen.
url.install_opener(opener)

try:
    f = url.urlopen(urlp)
    print(f.read())
except urlerror.HTTPError as e:
    if hasattr(e, 'code'):
        if e.code != 401:
            print('We got another error')
            print(e.code)
        else:
            print(e.headers)
I'm pretty sure the code is doing everything right, which makes me think that maybe there's another authentication step that ASP.NET requires. Does anybody have any experience with ASP.NET's authentication protocol?
I'm gonna be checking this post throughout the day, so I can post more info if required.
Edit: I've also tried running my script against a basic HTTP auth server running at home, and it authenticates fine, so I'm pretty sure the request is set up properly.
It appears that IIS is set up to do basic authentication; ASP.NET will most probably be configured to use Windows authentication.
Since you have said that authentication works via a browser, your best bet is to use a tool such as Fiddler to capture the request/response when connecting via the browser and again when connecting via your code, then compare the two to troubleshoot the issue.
For example, I remember a case where the web site first requested authentication credentials and then redirected to a different URL which prompted for different credentials.
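If installing Fiddler is not an option, urllib itself can dump the raw request and response headers for that comparison; a minimal sketch against the URL from the question:
import urllib.request as url

# debuglevel=1 makes the handlers print the wire-level headers to stdout
opener = url.build_opener(url.HTTPHandler(debuglevel=1),
                          url.HTTPSHandler(debuglevel=1))
url.install_opener(opener)

try:
    url.urlopen('https://api.connect2field.com/api/Login.aspx')
except Exception as e:
    print(e)  # the 401 still raises, but the headers were already printed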
I am trying to access my Exchange mailbox via WebDAV.
Locally I used the following URL to do so:
https://server/exchange/username/inbox/
Since we moved our server to BPOS (Exchange Online), I am not sure which URL to use to access my mailbox. The BPOS server does handle multiple domains, and I am not sure where to put the domain in the URL above.
Does anyone have experience with accessing the BPOS Exchange server programmatically?
Thanks
Andreas
After a long search and dozens of tests I seem to have found my problem:
First of all, WebDav does NOT work with Exchange Online.
There is a solution using web services that works quite nicely.
I seem to have to set the Exchange version to 2007_SP1; I did not find any option to leave this blank or have it detected.
var service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
// Just to get the trace messages
service.TraceEnabled = true;
ServicePointManager.ServerCertificateValidationCallback = CertificateValidationCallBack;
service.UseDefaultCredentials = false;
service.Credentials = new WebCredentials("<Username>", "<Password>");
// Autodiscover does NOT work if Exchange is not in the local net
// This is the url you have to use for german account (red002)
service.Url = new Uri("https://red002.mail.emea.microsoftonline.com/ews/Exchange.asmx");
What I still cannot understand is why you have to know the Exchange version in advance, with no option to negotiate it with a call.
Similarly, I have to know the URL to be able to connect to the Exchange server. Isn't it the basic idea of the cloud that you do NOT have to know where your data is hosted?
I hope this code helps somebody. I sure would have needed this advice :-)
We use "WebDAV .NET for Exchange", a commercial library, to access the Microsoft Exchange Online server over the WebDAV protocol. You have to use Forms Based Authentication to log in, and the URL looks like https://red003.mail.microsoftonline.com/exchange/xxxxxx#xxxxx.microsoftonline.com
I have a Flash video player which requests an flv file from a central server. That server might redirect the request to a server in the user's country if possible, much like a CDN.
This video player also reports usage stats. One thing I'd like to report is the true server/location from which the player is actually streaming the video. So basically, if it gets redirected, I want to know about it.
It seems that you can't extract the URL from a URLLoader; you can only keep a copy of the URLRequest that you constructed it with.
I notice that you can listen for HTTP status events, which would include a 302 or similar. But unfortunately, the HTTPStatusEvent object doesn't show the redirected location.
Any ideas about how to monitor for a redirect, and get the redirected location?
I'm a bit surprised Flash allows you to redirect a video request at all. I did a bit of digging and it looks like you can get the info:
Handling Crossdomain.xml and 302 Redirects Using NetStream
His post specifically talks about the security issues that arise because some operations fail if data comes from an untrusted server. Since he doesn't know where his video is coming from (302 redirect), the Flash Player doesn't trust it and prevents some operations on the loaded content.
The way he gets the server the content was actually loaded from is to perform an operation on the file that should not be allowed and parse the domain information from the resulting error message:
try
{
    var bit:BitmapData = new BitmapData(progressiveVideoPlayer.measuredWidth, progressiveVideoPlayer.measuredHeight, false, 0x000000);
    bit.draw(progressiveVideoPlayer);
}
catch(error:SecurityError)
{
    // The security error message names both the SWF's URL and the domain the
    // content was actually loaded from; pull the domain out of the message text
    var list:Array = error.toString().split(" ");
    var swfURL:String = list[7] as String;
    var domain:String = list[10] as String;
    domain = domain.substring(0, domain.length - 1);  // strip the trailing period
    var domainList:Array = domain.split("/");
    var protocol:String = domainList[0] as String;
    var address:String = domainList[2];
    var policyFileURL:String = protocol + "//" + address + "/crossdomain.xml";
    Security.loadPolicyFile(policyFileURL);
}
Notice he is doing this so that he can load the policy file (to allow the security-restricted operations on the file). I'm not sure it will be helpful to you, but at least read the article and have a think about it. You could also contact the blog author directly; he is pretty active in the general Flash community.