Can you send data to a forum using libcurl, or in some other way such as a raw TCP/IP API?
curl -d "param1=value1&param2=value2" http://site.com/forum
Inspect the page source for the form field names. With libcurl, you can build the form with curl_formadd().
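For example, if the page source showed form fields named "username" and "message" (hypothetical names; use whatever the real form actually contains), the command-line equivalent of a curl_formadd() multipart post would be something like:
# multipart/form-data POST, mirroring what curl_formadd() builds in libcurl
curl -F "username=alice" \
     -F "message=hello from curl" \
     http://site.com/forum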
The Problem
Hi. I'm trying to use Google Script's "UrlFetchApp" function to pull data from an email marketing tool (Klaviyo). Klaviyo's documentation, linked here, provides an API endpoint via an HTTP request.
Unfortunately, Klaviyo requires an api_key parameter in order to access the data, but Google doesn't document if (or where) I can add a custom parameter (it should look something like "api_key=xxxxxxxxxxxxxxxx"). It's quite easy for me to pull the data into my terminal using the api_key parameter, but ideally I'd have it pulled via Google Scripts and added to a Google Sheet appropriately. If I can get the JSON into Google Scripts, I can work with the data to output it how I want.
KLAVIYO'S EXAMPLE REQUEST FOR TERMINAL
curl https://a.klaviyo.com/api/v1/metrics -G \
-d api_key=XX_XXXXXXXXXXXXXXX
THIS OUTPUTS CORRECT DATA IN JSON
Note: My ultimate end goal is to pipe the data into Google Data Studio on a recurring basis for reporting. I thought I'd get the data into a CSV for download / upload into Google Data Studio on a recurring basis. If I'm thinking about this the wrong way, let me know.
Regarding the -G flag, from the curl man pages (emphasis mine):
When used, this option will make all data specified with -d, --data, --data-binary or --data-urlencode to be used in an HTTP GET request instead of the POST request that otherwise would be used. The data will be appended to the URL with a '?' separator.
Given that the default HTTP method for UrlFetchApp.fetch() is "GET", your request will be very simple:
UrlFetchApp.fetch("https://a.klaviyo.com/api/v1/metrics?api_key=XX_XXXXXXXXXXXXXXX");
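From there, assuming the endpoint returns JSON as in the terminal example above, you can assign the return value of UrlFetchApp.fetch() to a variable, parse it with JSON.parse(response.getContentText()), and write the resulting values into a sheet, which covers the Google Sheets part of your goal.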
I have an Atlassian/Jira account where projects are listed. I would like to import the various issues in order to do some extra analysis. I found a way to connect to Atlassian/Jira and to import what I want in Python:
from jira import JIRA
import os
import sys
options = {'server': 'https://xxxxxxxx.atlassian.net'}
jira = JIRA(options, basic_auth=('admin_email', 'admin_password'))
issues_in_proj = jira.search_issues('project=project_ID')
It works very well, but I would like to do the same thing in R. Is it possible? I found the RJIRA package, but there are three problems for me:
It's still a dev version.
I am unable to install it, as the DESCRIPTION file is "malformed".
It's based on a Jira server URL, "https://JIRAServer:port/rest/api/", and I have an xxxxx.atlassian.net URL.
I also found out that there are curl queries:
curl -u username:password -X GET -H 'Content-Type: application/json'
"http://jiraServer/rest/api/2/search?jql=created%20>%3D%202015-11-18"
but again it is based on the "https://JIRAServer:port/rest/api/" form, and in addition I am using Windows.
Does someone have an idea?
Thank you!
The "https://JIRAServer:port/rest/api/" form is the Jira REST API https://docs.atlassian.com/jira/REST/latest/
As a rest api, it just makes http method calls and gives you data.
All jira instances should expose the rest api, just point your browser to your jira domain like this:
https://xxxxx.atlassian.net/rest/api/2/field
and you will see, for example, all the fields you have access to.
This means you can use PHP, Java, or a simple curl call from Linux to get your Jira data. I have not used RJIRA, but if you don't want to use it you can still use R (which I have not used) and make an HTTP call to the REST API.
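For example (a minimal sketch; the PROJ project key is hypothetical, and you would authenticate with your own Atlassian account e-mail and password or API token), the same kind of search query works against a cloud instance directly:
curl -u your_email@example.com:your_password_or_api_token \
     -H "Content-Type: application/json" \
     "https://xxxxx.atlassian.net/rest/api/2/search?jql=project%3DPROJ"
The response is plain JSON, so any HTTP client in R can fetch it and any JSON package can parse it; curl itself also runs fine on Windows.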
These two links on my blog might give you more insight:
http://javamemento.blogspot.no/2016/06/rest-api-calls-with-resttemplate.html
http://javamemento.blogspot.no/2016/05/jira-confluence-3.html
Good luck :)
Recently, I have been experimenting with using Elisp to communicate with a local CouchDB database via HTTP requests. Sending and receiving JSON works nicely, but I hit a bit of a road-block when I tried to upload an attachment to a document. In the CouchDB tutorial they use this curl command to upload the attachment:
curl -vX PUT http://127.0.0.1:5984/albums/6e1295ed6c29495e54cc05947f18c8af/artwork.jpg?rev=2-2739352689 \
--data-binary @artwork.jpg -H "Content-Type:image/jpg"
Does anyone know how I would go about using the built-in url package to achieve this? I know that it is possible to do an upload using multipart MIME requests. There is a section in the emacs-request manual about it. But I also read that CouchDB does not support multipart/form-data as part of their public API, even though Futon uses it under the hood.
I think you need to use url-retrieve and bind url-request-method to "PUT" around the call.
You will also need to bind url-request-data to your data:
(let ((url-request-method "PUT")
      (url-request-extra-headers '(("Content-Type" . "image/jpg")))
      (url-request-data (with-temp-buffer
                          (set-buffer-multibyte nil)   ; raw bytes, so the JPEG is not decoded
                          (insert-file-contents-literally "artwork.jpg")
                          (buffer-substring-no-properties (point-min) (point-max)))))
  (url-retrieve "http://127.0.0.1:5984/albums/6e1295ed6c29495e54cc05947f18c8af/artwork.jpg?rev=2-2739352689"
                (lambda (_status) (message "attachment uploaded"))))
Please see also
Creating a POST with url elisp package in emacs: utf-8 problem
How does emacs url package handle authentication?
You might also find it enlightening to read the sources.
For a data collection/analysis project, I am trying to download entries from an ASPX web form at http://www.lasuperiorcourt.org/civilcasesummarynet/ui/?CT=AP&casetype=appellate, but I'm having little success so far.
The idea is to download the relevant information from the web page through wget and output the results to a single HTML file. From the resulting output I would then compile stats on the extracted data for the relevant cases (e.g. case nos. BV024000 to BV028933).
However, I'm having trouble getting wget to retrieve data from the form. I've been using:
wget --post-data "frmsearch=BV024000" http://www.lasuperiorcourt.org/civilcasesummarynet/ui/?CT=AP^&casetype=appellate -O output.html
But I just get the original page back, not the form output. What am I doing wrong?
There are two problems:
You have typos in your command: you should wrap the HTTP address in quotes, and it's ui/index.aspx?CT=AP, without the ^.
When you post the form, you have to post all of the form's input fields; otherwise your POST request is not validated.
Here I did the request as follows:
wget --post-data "__VIEWSTATE=%2FwEPDwUJMzM0NzAxOTczD2QWBgIBD2QWCmYPDxYCHgdWaXNpYmxlZ2RkAgIPDxYCHwBoZGQCBA8PFgIfAGhkZAIGDw8WAh8AaGRkAggPDxYCHwBoZGQCAw9kFgpmDw8WAh8AZ2RkAgIPDxYCHwBoZGQCBA8PFgIfAGhkZAIGDw8WAh8AaGRkAggPDxYCHwBoZGQCCQ9kFgICAw8PFgIfAGhkFgICAQ8QZA8WIGYCAQICAgMCBAIFAgYCBwIIAgkCCgILAgwCDQIOAg8CEAIRAhICEwIUAhUCFgIXAhgCGQIaAhsCHAIdAh4CHxYgEAUGU2VsZWN0BQZTZWxlY3RnEAUTQWxoYW1icmEgQ291cnRob3VzZQUDQUxIZxAFFUJlbGxmbG93ZXIgQ291cnRob3VzZQUDTEMgZxAFGEJldmVybHkgSGlsbHMgQ291cnRob3VzZQUDQkggZxAFEkJ1cmJhbmsgQ291cnRob3VzZQUDQlVSZxAFFUNoYXRzd29ydGggQ291cnRob3VzZQUDQ0hBZxAFEkNvbXB0b24gQ291cnRob3VzZQUDQ09NZxAFFkN1bHZlciBDaXR5IENvdXJ0aG91c2UFA0NDIGcQBRFEb3duZXkgQ291cnRob3VzZQUDRE9XZxAFG0Vhc3QgTG9zIEFuZ2VsZXMgQ291cnRob3VzZQUDRUxBZxAFE0VsIE1vbnRlIENvdXJ0aG91c2UFA0VMTWcQBRNHbGVuZGFsZSBDb3VydGhvdXNlBQNHTE5nEAUaSHVudGluZ3RvbiBQYXJrIENvdXJ0aG91c2UFA0hQIGcQBRRJbmdsZXdvb2QgQ291cnRob3VzZQUDSU5HZxAFFUxvbmcgQmVhY2ggQ291cnRob3VzZQUDTEIgZxAFEU1hbGlidSBDb3VydGhvdXNlBQNNQUxnEAUtTWljaGFlbCBBbnRvbm92aWNoIEFudGVsb3BlIFZhbGxleSBDb3VydGhvdXNlBQNBVFBnEAUTTW9ucm92aWEgQ291cnRob3VzZQUDU05JZxAFE1Bhc2FkZW5hIENvdXJ0aG91c2UFA1BBU2cQBRdQb21vbmEgQ291cnRob3VzZSBOb3J0aAUDUE9NZxAFGFJlZG9uZG8gQmVhY2ggQ291cnRob3VzZQUDU0JCZxAFF1NhbiBGZXJuYW5kbyBDb3VydGhvdXNlBQNMQVNnEAUUU2FuIFBlZHJvIENvdXJ0aG91c2UFA0xBUGcQBRhTYW50YSBDbGFyaXRhIENvdXJ0aG91c2UFA05FV2cQBRdTYW50YSBNb25pY2EgQ291cnRob3VzZQUDU00gZxAFFVNvdXRoIEdhdGUgQ291cnRob3VzZQUDU0cgZxAFF1N0YW5sZXkgTW9zayBDb3VydGhvdXNlBQNMQU1nEAUTVG9ycmFuY2UgQ291cnRob3VzZQUDU0JBZxAFGFZhbiBOdXlzIENvdXJ0aG91c2UgV2VzdAUDTEFWZxAFFldlc3QgQ292aW5hIENvdXJ0aG91c2UFA0NJVGcQBRtXZXN0IExvcyBBbmdlbGVzIENvdXJ0aG91c2UFA0xBV2cQBRNXaGl0dGllciBDb3VydGhvdXNlBQNXSCBnFgFmZGQk7ioHoNWuWLyRkeV2Jf7vbNorIw%3D%3D&CaseNumber=BV024000&submit1=Search&casetype=appellate" "http://www.lasuperiorcourt.org/civilcasesummarynet/ui/index.aspx?CT=AP&casetype=appellate" -O output.html
--2012-08-12 19:25:32-- http://www.lasuperiorcourt.org/civilcasesummarynet/ui/index.aspx?CT=AP&casetype=appellate
Resolving www.lasuperiorcourt.org... 153.43.255.56
Connecting to www.lasuperiorcourt.org|153.43.255.56|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: /civilcasesummarynet/ui/casesummary.aspx?CT=AP&casetype=appellate [following]
--2012-08-12 19:25:33-- http://www.lasuperiorcourt.org/civilcasesummarynet/ui/casesummary.aspx?CT=AP&casetype=appellate
and it worked; see the picture: http://i47.tinypic.com/35db8k3.png
You will probably need to set a new value of __VIEWSTATE for every request.
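A rough sketch of how that could be automated, assuming the hidden field appears in the page in the usual ASP.NET form id="__VIEWSTATE" value="..." (the sed expressions are illustrative and untested against this site):
# fetch the form page and pull out the current __VIEWSTATE value
VIEWSTATE=$(wget -q -O - "http://www.lasuperiorcourt.org/civilcasesummarynet/ui/index.aspx?CT=AP&casetype=appellate" \
  | sed -n 's/.*id="__VIEWSTATE" value="\([^"]*\)".*/\1/p')
# the value is base64, so percent-encode the characters that would break --post-data
VIEWSTATE_ENC=$(printf '%s' "$VIEWSTATE" | sed 's/+/%2B/g; s,/,%2F,g; s/=/%3D/g')
wget --post-data "__VIEWSTATE=${VIEWSTATE_ENC}&CaseNumber=BV024000&submit1=Search&casetype=appellate" \
  "http://www.lasuperiorcourt.org/civilcasesummarynet/ui/index.aspx?CT=AP&casetype=appellate" -O output.html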
In what environment are you executing this command? In most Unix shells, "&" is a special character which terminates the command string and sends the command, when executed, to the background, but you aren't quoting that URL in any way.
Edit: OK, never mind... my answer wasn't that useful, except that I did not know that "^" was a quote character and now I know: http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/ntcmds_shelloverview.mspx?mfr=true
I'm trying to automate the login to a website and submission of a form.
Is there a browser plugin (for firefox or Chrome) that allows you to record HTTP GET and POST requests in a form that allows them to be played back at a later point? I'm looking for something that will be possible to automate from a script e.g. via curl or wget.
I've tried using the Chrome developer tools to capture POST form data, but I get errors when trying to replicate the request with wget, which suggests I'm missing some cookies or other parameters. Ideally there would be a nice automated way of doing this rather than lots of trial and error.
For a simple interaction, you don't really need a tool like Selenium that will record and play back requests.
You only need the tools you've already mentioned:
Chrome already comes with the Developer Tools that you need: use the Network tab. No plugin to download. I don't know if Safari will work -- I don't see a "Network" tab in its Developer Tools.
Both curl and wget support cookies and POST data, but I've only tried curl for automation.
There are several key steps that need to be done properly (this takes some experience):
The sequence of pages that are requested needs to model real user interaction. This is important because you have no idea exactly how the backend handles forms or authentication. This is where the Network tab of Chrome's Developer Tools comes in. (Note that there is a "record" button that will prevent the clearing of the log.) When you prepare to log a real user interaction for your analysis, don't forget to clear your cookies at the beginning of each session.
You need to use all the proper options of curl and wget that will ensure that cookies and redirects are properly processed.
All POST form fields will likely need to be sent (you'll often see fields with nonce values to prevent CSRF).
Here's a sample of 3 curl calls from an automation script that I wrote to download broadband usage from my ISP:
curl \
--silent \
--location \
--user-agent "$USER_AGENT" \
--cookie-jar "$COOKIES_PATH.txt" \
'https://idp.optusnet.com.au/idp/optus/Authn/Service?spEntityID=https%3A%2F%2Fwww.optuszoo.com.au%2Fshibboleth&j_principal_type=ISP' >$USAGE_PATH-1.html 2>&1 && sleep 3 &&
# --location because the previous request returns with a series of redirects "302 Moved Temporarily" or "302 Found"
curl \
--silent \
--location \
--user-agent "$USER_AGENT" \
--cookie "$COOKIES_PATH.txt" \
--cookie-jar "$COOKIES_PATH.txt" \
--referer 'https://idp.optusnet.com.au/idp/optus/Authn/Service?spEntityID=https%3A%2F%2Fwww.optuszoo.com.au%2Fshibboleth&j_principal_type=ISP' \
--data "spEntityID=https://www.optuszoo.com.au/shibboleth&j_principal_type=ISP&j_username=$OPTUS_USERNAME&j_password=$OPTUS_PASSWORD&j_security_check=true" \
'https://idp.optusnet.com.au/idp/optus/Authn/Service' >$USAGE_PATH-2.html 2>&1 && sleep 1 &&
curl \
--silent \
--location \
--user-agent "$USER_AGENT" \
--cookie "$COOKIES_PATH.txt" \
--cookie-jar "$COOKIES_PATH.txt" \
--referer 'https://www.optuszoo.com.au/' \
'https://www.optuszoo.com.au//r/ffmu' >$USAGE_PATH-3.html 2>/dev/null
Note the careful use of --cookie-jar, --cookie, and --location. The sleeps, --user-agent, and --referer may not be necessary (the backend may not check) but they're simple enough that I include them to minimize the chance of errors.
In this example, I was lucky that there were no dynamic POST fields, e.g. anti-CSRF nonce fields, that I would have had to extract and pass on to a subsequent request. That's because this automation is for authentication. For automating other types of web interactions, after the user's already logged in, you're likely to run into more of these dynamically-generated fields.
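If you do run into one of those dynamically-generated fields, the usual pattern is to scrape it out of the previous response and splice it into the next request's --data. A rough sketch (the csrf_token field name, the example.com URLs, and the sed expression are all hypothetical):
# fetch the login form and keep whatever cookies it sets
curl --silent --cookie-jar cookies.txt 'https://example.com/login' > login.html
# pull the hidden anti-CSRF value out of the HTML (hypothetical field name and markup)
TOKEN=$(sed -n 's/.*name="csrf_token" value="\([^"]*\)".*/\1/p' login.html)
# send it back along with the credentials on the next request
curl --silent --location \
     --cookie cookies.txt --cookie-jar cookies.txt \
     --data "username=me&password=secret&csrf_token=${TOKEN}" \
     'https://example.com/login' > after-login.html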
Not exactly a browser plugin, but Fiddler can capture all the HTTP data passing back and forth; with FiddlerScript or FiddlerCore, it is then simple to export that into a text file - and pass that into cURL as request headers and request body.
In Firefox, turn on the Persist option in Firebug to be sure to capture the POST. Then install and use the "Bookmark POST" add-on to bookmark the POST request for later use.
Firefox Firebug already has a feature which allows you to copy a web request as a curl request, so you see all the various elements of the request on the command line.
Turn on Firebug, right-click a request in the Net panel, and pick Copy as cURL. Then use the result on the curl command line.
https://hacks.mozilla.org/2013/08/firebug-1-12-new-features/#copyAsCURL
Have you tried Selenium?
There are many methods you can choose from.
1. Use Firefox and Selenium IDE. It can record your browser actions.
2. Use Selenium WebDriver. It can simulate browser actions via scripts you write in Ruby or Java.
3. Use a macro plugin for Firefox to simulate absolute clicks and keypresses.
4. Use an OS-level macro application and do the same as No.3.
5. Write a script (such as PHP) to simulate the actual form posts or cookie interactions.
No.1 is common and easy to use.
No.4 can be powerful but you need time to polish the automation.
No.3 sits somewhere between No.1 and No.4.
No.2 can also serve as a tool for environment testing and stress testing.
No.5 seems the most flexible and resource-saving.
The Request Maker Chrome plugin does that.
https://chrome.google.com/webstore/detail/request-maker/kajfghlhfkcocafkcjlajldicbikpgnp?hl=en
The Safari developer tools and Firebug are sufficient for your needs.
Recently I came across this beautiful Chrome extension, which does what you ask:
Katalon Recorder
Katalon Recorder will make your test automation work a lot easier.
Record, play, debug with speed control, pause/resume, breakpoints capabilities.
Enjoy the fastest execution speed compared to other extensions, thanks to the Selenium 3 core engine.
Make use of multiple locator types including XPath & CSS.
Use original Selenium IDE commands (Selenese), plus block statement if...elseIf...else...endIf and while...endWhile. Testing file input control is supported.
Import test data from CSV files for data-driven testing.
Report easily with logs, screenshots capturing, with historical data and analytics from Katalon Analytics.
Compose & organize test cases in suites. Never lose your work, thanks to the autosave feature.
Import original Selenium IDE (Firefox extension) tests.
Export to Selenium WebDriver scripts in these frameworks: C# (MSTest and NUnit), Java (TestNG and JUnit), Ruby (RSpec), Python (unittest), Groovy (Katalon Studio), Robot Framework, and XML.