I'm trying to port some code from Python to Dart (for a Flutter application). However, I'm having a bit of trouble encoding URLs. I'm trying to do the equivalent of parsedData = urllib.parse.quote(STR_DATA). The closest I've gotten is with the Dart Uri class:
parsedData = Uri(queryParameters: STR_DATA);
parsedData = Uri.encodeComponent(parsedData.toString());
This gets close to what I want, but not quite. The result I get with Python is something like this (side note: it only encodes after the period):
ig_sig_key_version=4&signed_body=efbcf4ac8577da5eb43f33f369cda4248dba52a407e88a565038b53933737bba.%7B%22phone_id%22%3A%20%22ee79227a-cf89-41c8-9598-d0c6f3931fa4%22%2C%20%22_csrftoken%22%3A%20%22BB1PRXVV1y6FgU0Rfmcda3jJG5eVFSPd%22%2C%20%22username%22%3A%20%22USERNAME%22%2C%20%22guid%22%3A%20%22c668814c-a1e9-487c-97f0-8491b2c07c1c%22%2C%20%22device_id%22%3A%20%22android-7eb57ab90e1e2c3e%22%2C%20%22password%22%3A%20%22PASSWORD%22%2C%20%22login_attempt_count%22%3A%20%220%22%7D
While with Dart, I get something like this:
ig_sig_key_version=4&signed_body=d1c26c132b536b3f4ffdd7f5c0524503e48216fb7f638b5f4cab65d74a9834de.%3Fphone_id%3D112626ab-1946-4ad0-bc63-b233f57033f9%26_csrftoken%3DhiPh0jSvjd0eP2dP4VUr83t3htYF7xci%26username%3DUSERNAME%26guid%3D999d2242-f52a-4044-96d9-2245c6757fbc%26device_id%3Dandroid-5daff5c3029f414c%26password%3DPASSWORD%26login_attempt_count%3D0
By the way, the reason I need the encoding to match is that otherwise my HTTP request returns a 400. Anyway, any help is appreciated; thank you in advance.
OTHER SIDE NOTE: I think this is what's causing my request to be rejected, but I don't know; I haven't done much web work. If you think it might be something else, feel free to correct me.
You are doing too much.
In this case, you only need to do: parsedData = Uri.encodeComponent(STR_DATA); to get the same result as the Python code.
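To sanity-check what the Python side produces, here is what urllib.parse.quote does to a JSON-ish payload (a shortened stand-in for the real signed body, not the actual data). One caveat: quote() leaves '/' unescaped by default, so pass safe='' for the closest match to Dart's Uri.encodeComponent:

```python
from urllib.parse import quote

# a shortened stand-in for the real signed-body JSON payload
payload = '{"phone_id": "ee79227a", "login_attempt_count": "0"}'

# quote() leaves '/' unescaped by default; safe='' escapes it too,
# which matches Dart's Uri.encodeComponent more closely
encoded = quote(payload, safe='')
```

The spaces, quotes, and braces come out as %20, %22, %7B/%7D, exactly as in the Python output shown in the question.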
Use the Dart Uri class
var uri = 'http://example.com/path/to/page?name=ferret john';
var encoded = Uri.encodeFull(uri);
assert(encoded == 'http://example.com/path/to/page?name=ferret%20john');
var decoded = Uri.decodeFull(encoded);
assert(uri == decoded);
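For reference, the Python counterpart of the encodeFull/encodeComponent distinction is roughly quote() with different safe sets; this is a sketch of the correspondence, not an exact one-to-one mapping:

```python
from urllib.parse import quote, unquote

uri = 'http://example.com/path/to/page?name=ferret john'

# like Uri.encodeFull: keep the URI structure characters intact
full = quote(uri, safe=':/?&=')
# like Uri.encodeComponent: escape the structure characters as well
component = quote(uri, safe='')

assert full == 'http://example.com/path/to/page?name=ferret%20john'
assert unquote(full) == uri
```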
First of all, I am not the best programmer, so please excuse me if I ask something stupid.
I have a question about the following code (in language R) which I have written in order to get an authentication code for the Withings API:
library(httr)
my_client_id = "..." #deleted because it is secret
my_redirect_uri = "..." #deleted because it is secret
my_scope="user.activity,user.metrics,user.info"
access_url = "https://wbsapi.withings.net/v2/oauth2"
authorize_url = "https://account.withings.com/oauth2_user/authorize2"
my_response_type = "code"
my_state = "..." #deleted because it is secret
httr::BROWSE(authorize_url, query = list(response_type = my_response_type,
client_id = my_client_id,
redirect_uri = my_redirect_uri,
scope = my_scope,
state = my_state))
This code successfully opens the URL
http://%22https://account.withings.com/oauth2_user/account_login?response_type=code&client_id=...&redirect_uri=...&scope=user.activity%2Cuser.metrics%2Cuser.info&state=...&b=authorize2%22
where I can enter my e-mail address and password. After that, it redirects me to the URL
http://.../?code=...&state=...
where the first dots are my redirect URL. This gives me the code I need for getting the access token. I have tested the code, i.e. I used it to request an access token, and I was successful.
The problem is that I have to manually copy/paste the code from the URL in my browser into my POST statement (which I use to get the access token), and I would like to automate that. So I would like the URL containing the code to be returned to me, so that I can parse it and extract the code. I know how to extract the code once I have the URL, but I have no idea how to avoid the copying/pasting, and I am not even sure it is possible. If it is possible, does anyone have an idea how I could extend or change my existing code in order to get the URL with the code (apart from doing it manually)?
I would be very happy about any help, and I want to say thank you in advance!
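It should be possible to automate this: the provider redirects the browser to your redirect URI, so a small local HTTP server listening at that URI can capture the code for you (httr's own oauth2.0_token() helper uses the same localhost-listener trick). As an illustration of the idea only, here is a minimal Python sketch; the port and helper names are made up:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

def extract_code(redirect_url):
    """Pull the ?code=... parameter out of a redirect URL."""
    return parse_qs(urlparse(redirect_url).query)["code"][0]

class _Catcher(BaseHTTPRequestHandler):
    def do_GET(self):
        # the provider redirected the browser here; grab the code
        self.server.auth_code = extract_code("http://localhost" + self.path)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You can close this tab now.")

def wait_for_code(port=1410):
    # serve exactly one request: the redirect back from the provider
    with HTTPServer(("localhost", port), _Catcher) as srv:
        srv.handle_request()
        return srv.auth_code
```

The redirect URI registered with the API would have to point at http://localhost:1410/ (or whatever port you pick) for this to work.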
I am trying to connect to an HTTP API. This API responds with ndjson, that is, newline-separated JSON strings. I need to consume these lines one by one, before I download them all (in fact, even before the server knows what it will output on the future lines).
In Python, I can achieve this by:
import requests, json
lines = requests.get("some url", stream=True).iter_lines()
for line in lines:
    # parse line as JSON and do whatever
and it works like a charm.
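The "parse line as JSON" step can be sketched like this, with the live stream replaced by an in-memory sample so it runs standalone:

```python
import io
import json

def iter_ndjson(lines):
    """Parse each non-blank line of an ndjson stream as JSON."""
    for line in lines:
        if line.strip():  # skip keep-alive blank lines
            yield json.loads(line)

# works on any iterable of lines, e.g. requests' iter_lines() or a file;
# here an in-memory sample stands in for the HTTP response
sample = io.BytesIO(b'{"a": 1}\n\n{"b": 2}\n')
objs = list(iter_ndjson(sample))
```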
I want the same effect in Nim, but the program blocks. For example, I tried to load just the first line of the response:
import httpclient, json, streams
var stream = newHttpClient().get("some url").bodyStream
var firstLine = ""
discard stream.readLine(firstLine)
echo firstLine
but with no luck - that is, the program never echoes.
I also tried the streams.lines iterator, but that didn't help either.
Is there some idiom similar to the Python snippet that would allow me to easily work with the HTTP response stream line by line?
The solution is to use the net module, as in the question linked by @pietroppeter. That initially didn't work for me, because I didn't construct the HTTP request correctly.
The resulting code:
import net, json, strformat

const HOST = "host"
const TOKEN = "token"

iterator getNdjsonStream(path: string): JsonNode =
  let s = newSocket()
  wrapSocket(newContext(), s)
  s.connect(HOST, Port(443))
  let req = &"GET {path} HTTP/1.1\r\nHost: {HOST}\r\nAuthorization: {TOKEN}\r\n\r\n"
  s.send(req)
  while true:
    var line = ""
    # skip the status line, headers and chunk sizes; JSON lines start with '{'
    while line == "" or line[0] != '{':
      line = s.recvLine
    yield line.parseJson
I think this can't be achieved using the httpClient module. The async versions might look like they can do it, but it seems to me that you can only work with the received data once the Future is completed, that is, after all the data is downloaded.
The fact that such a simple thing cannot be done simply, and the lack of examples I could find, led to a couple of days of frustration and the need to open a Stack Overflow account after 10 years of programming.
I want to do something very similar to what's shown in the docs for FSharp.Data.
The URL I'm requesting from, though (TFS), requires client authentication. Is there any way I can provide this by propagating my Windows creds? I notice JsonProvider has a few other compile-time parameters, but none seem to support this.
You don't have to provide a live URL as a type parameter to JsonProvider; you can also provide the filename of a sample file that reflects the structure you expect to see. With that feature, you can do the following steps:
First, log in to the service and save a JSON file that reflects the API you're going to use.
Next, do something like the following:
type TfsData = JsonProvider<"/path/to/sample/file.json">
let url = "https://example.com/login/etc"
// Use standard .Net API to log in with your Windows credentials
// Save the results in a variable `jsonResults`
let parsedResults = TfsData.Parse(jsonResults)
printfn "%A" parsedResults.Foo // At this point, Intellisense should work
This is all very generic, of course, since I don't know precisely what you need to do to log in to your service; presumably you already know how to do that. The key is to retrieve the JSON yourself, then use the .Parse() method of your provided type to parse it.
I have the following part of code:
let client = new WebClient()
let url = "https://..."
client.DownloadFile(url, filename)
client.Dispose()
In this code I perform an HTTP GET request, through which I download an Excel file with some data.
The request executes correctly, because I do get my Excel file.
The problem is that the content of my Excel file looks like this:
I think it's because I don't pass ContentType: "application/vnd.ms-excel".
So can anyone help me with how to pass that ContentType to my client in F#?
If you want to add HTTP headers to a request made using WebClient, use the Headers property:
let client = new WebClient()
let url = "https://..."
client.Headers.Add(HttpRequestHeader.Accept, "application/vnd.ms-excel")
client.DownloadFile(url, filename)
In your case, I think you need the Accept header (Content-Type is what the response should contain to tell you what you got).
That said, I'm not sure if this is the problem you are actually having - as noted in the comments, your screenshot shows a different file, so it is hard to tell what's wrong with the file you get from the download (maybe it's just somewhere else? or maybe the encoding is wrong?)
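For comparison, the same Accept-vs-Content-Type distinction in Python's standard library looks like this (the URL here is a made-up placeholder):

```python
from urllib.request import Request

# build a GET request that asks the server for the Excel MIME type;
# the URL is a made-up placeholder
req = Request("https://example.com/report.xls",
              headers={"Accept": "application/vnd.ms-excel"})
# after sending the request, the response's Content-Type header tells you
# what the server actually returned
```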
I have a coding problem regarding Python 3.5 web crawling.
I am trying to use requests.get to extract the real link from 'http://www.baidu.com/link?url=ePp1pCIHlDpkuhgOrvIrT3XeWQ5IRp3k0P8knV3tH0QNyeA042ZtaW6DHomhrl_aUXOaQvMBu8UmDjySGFD2qCsHHtf1pBbAq-e2jpWuUd3'. An example of the code is below:
import requests
response = requests.get('http://www.baidu.com/link?url=ePp1pCIHlDpkuhgOrvIrT3XeWQ5IRp3k0P8knV3tH0QNyeA042ZtaW6DHomhrl_aUXOaQvMBu8UmDjySGFD2qCsHHtf1pBbAq-e2jpWuUd3')
c = response.url
I expected c to be 'caifu.cnstock.com/fortune/sft_jj/tjj_yndt/201605/3787477.htm'. (I removed http:// from the link, as I can't post two links in one question.)
However, it doesn't work; it keeps returning the same link that I put in.
Can anyone help with this? Many thanks in advance.
Thanks a lot to Charlie.
I have found the solution. I first used .content.decode to read the response body, but that is mixed up with a lot of irrelevant info. I then used re.findall to extract the redirect URL, which should be the first URL appearing in the response body. Then I used requests.get to retrieve the info. Below is the code:
import re
import requests

rep1 = requests.get(url)
cont = rep1.content.decode('utf-8')
extract_cont = re.findall('"([^"]*)"', cont)
redir_url = extract_cont[0]
rep = requests.get(redir_url)
You may consider looking into the response headers for a 'location' header.
response.headers['location']
You may also consider looking at the response history, which contains a response object for each hop in a chain of redirects:
response.history
Your sample URL doesn't redirect; the response is a 200, and the page then uses a JavaScript window.location change. The requests library won't follow this type of redirect.
<script>window.location.replace("http://caifu.cnstock.com/fortune/sft_jj/tjj_yndt/201605/3787477.htm")</script>
<noscript><META http-equiv="refresh" content="0;URL='http://caifu.cnstock.com/fortune/sft_jj/tjj_yndt/201605/3787477.htm'"></noscript>
If you know you will always be using this one service, you could parse the response, maybe using regex.
If you don't know what service will always be used and also want to handle every possible situation, you might need to instantiate a WebKit instance or something and somehow try to determine when it finally finishes. I'm sure there's a page load complete event which you could use, but you still might have pages that do a window.location change after the page is loaded using a timer. This will be very heavyweight and still not cover every conceivable type of redirect.
I recommend starting by writing a special handler for each type of edge case, falling back on a default handler that just looks at response.url. As new edge cases come up, write new handlers. It's kind of a 'trial and error' approach.
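For this particular service, the regex handler suggested above could look like the following sketch, matching the window.location.replace(...) call from the sample response shown earlier:

```python
import re

# the body returned by the sample URL, as shown in the answer above
html = ('<script>window.location.replace("http://caifu.cnstock.com'
        '/fortune/sft_jj/tjj_yndt/201605/3787477.htm")</script>')

match = re.search(r'window\.location\.replace\("([^"]+)"\)', html)
redirect_url = match.group(1) if match else None
```

Unlike grabbing the first quoted string on the page, this only matches an actual window.location.replace call, so it fails loudly (match is None) rather than returning an unrelated string.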