DBI_QUERY replacement in TWiki for HTTP

I am currently replacing some functionality in a TWiki page that has been pulling data from a DB using the DBI_QUERY feature and generating a table, complete with hyperlinks on one of the table columns. Is there a way to generate a similar table from a comma-separated file pulled via an HTTP request that TWiki makes when the page is loaded? Alternatively, I can pull the data as JSON.
Thanks,
SetJmp

Answer is: apparently not.
However, using an iframe one can embed a table if the response to the GET is already pre-formatted appropriately.
Will look forward to better answers...
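For context, here is a minimal sketch (in Python, not TWiki-specific) of the kind of pre-formatting such an iframe could point at: fetch the CSV over HTTP and render it as an HTML table with one hyperlinked column. The CSV URL, the link target, and the linked column are placeholders.

```python
# Hedged sketch: turn a CSV fetched over HTTP into an HTML table that an
# iframe-embedded page could serve. URL and link column are hypothetical.
import csv
import html
import io
import urllib.request

CSV_URL = "https://example.com/report.csv"  # placeholder endpoint
LINK_COLUMN = 0                             # placeholder: column to hyperlink

def csv_to_html_table(url: str) -> str:
    with urllib.request.urlopen(url) as resp:
        rows = list(csv.reader(io.TextIOWrapper(resp, encoding="utf-8")))
    out = ["<table border='1'>"]
    for i, row in enumerate(rows):
        cells = []
        for j, cell in enumerate(row):
            text = html.escape(cell)
            # Hyperlink one column, mirroring the old DBI_QUERY table
            if i > 0 and j == LINK_COLUMN:
                text = f"<a href='https://example.com/item/{text}'>{text}</a>"
            tag = "th" if i == 0 else "td"
            cells.append(f"<{tag}>{text}</{tag}>")
        out.append("<tr>" + "".join(cells) + "</tr>")
    out.append("</table>")
    return "\n".join(out)

if __name__ == "__main__":
    print(csv_to_html_table(CSV_URL))
```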

Related

What is "dlta" and "ridlist" parameters used in Requests library (Python)

Guys, I am working on getting data as tables from QuickBase using the Requests library (Python). I found somebody doing it using the URL of the report, but he added two parameters to the URL like this:
&dlta=xs%xx&ridlist=xxxx.
Can anybody please tell me what those two parameters are? I searched for them on the internet but found nothing related to them.
I've been using Quickbase for over ten years and haven't seen documentation for either of these parameters. I have noticed that ridList seems to be used by Quickbase's grid edit view of reports (I suspect it's an ID for a server-side cached list of record IDs to display, especially when using the type-ahead search of a report before choosing to grid edit), and dlta is used in the "Download report as CSV" button.
The example you're following may have simply copied and pasted a link generated by Quickbase as a hack to get a CSV response instead of XML. I recommend following the Quickbase HTTP API Reference instead. If you don't want an XML response, Quickbase also has a JSON RESTful API, which may be easier to work with.
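As a rough illustration of the JSON REST route with Requests, here is a hedged sketch; the realm hostname, user token, table ID, and field IDs are placeholders, and the endpoint and payload shape follow my reading of Quickbase's REST API docs, so verify them against the current documentation.

```python
# Hedged sketch of querying Quickbase's JSON RESTful API with Requests.
# All credentials/IDs below are placeholders.
import requests

headers = {
    "QB-Realm-Hostname": "yourrealm.quickbase.com",    # placeholder realm
    "Authorization": "QB-USER-TOKEN your_token_here",  # placeholder token
    "User-Agent": "report-export-script",
}

body = {
    "from": "your_table_id",  # placeholder table DBID
    "select": [3, 6, 7],      # hypothetical field IDs to return
}

resp = requests.post("https://api.quickbase.com/v1/records/query",
                     headers=headers, json=body, timeout=30)
resp.raise_for_status()
data = resp.json()

# Each record maps field ID -> {"value": ...}
for record in data.get("data", []):
    print({fid: cell["value"] for fid, cell in record.items()})
```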

Extracting relevant content from a blob

Daily, we get a 15+M XML dump that contains a bunch of superfluous content that masks the needed details. It is not a problem to extract the content from the XML tags; however, the blob has proven to be a problem.
I can extract the headers of the info that I am after using str_extract; however, I also need the character vector that follows. An example:
\n\nSubject:\n\tSecurity ID:\t\tS-1-5-21-1390067357-1580818891-1801674531-43388\n
Unfortunately, I cannot post a full copy of the blob, as it contains proprietary content. As you can see, the fields that I need are all separated by embedded newline and tab characters, which I am trying to match on, but I cannot find a way to configure str_extract to capture the additional content.
Any insight you might have would be greatly appreciated.
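The usual approach is a capture group after the label, with \s soaking up the embedded newlines and tabs. A quick sketch of the pattern in Python (the question uses stringr; the same regex idea should carry over to str_match, where the capture group appears in the second column of the result):

```python
# Sketch: capture the value that follows a label across embedded \n / \t
# separators. The blob string is the example from the question.
import re

blob = "\n\nSubject:\n\tSecurity ID:\t\tS-1-5-21-1390067357-1580818891-1801674531-43388\n"

# Label, then any run of whitespace (tabs/newlines), then capture the value.
match = re.search(r"Security ID:\s*(\S+)", blob)
if match:
    print(match.group(1))  # S-1-5-21-1390067357-1580818891-1801674531-43388
```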

How to use data retrieved from a web API?

I have a simple goal: I want to retrieve a table from a website and use it in that exact format, so it can, for example, be imported into Excel.
This is the website that I want to get the table from:
http://www.dota2.com/leaderboards/#europe
This is the API that the website uses to get the data:
http://www.dota2.com/webapi/ILeaderboard/GetDivisionLeaderboard/v0001?division=europe
Every way I have attempted to get the data from the website, it comes back poorly formatted as one line, and I was wondering what the best way is to format/use this data through code or other tools. I have tried:
Excel web importing
Writing a simple VB program to read each line (but it reads everything as one line)
Copying and pasting the table into Notepad to try to format it by line (still one single line)
Any tips/suggestions regarding any form of website data retrieval are very much appreciated.
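One low-tech route is to hit the JSON endpoint directly and reformat it yourself; the "one line" problem is mostly just missing indentation. A hedged Python sketch follows (the flattening step assumes the payload contains a list of row-like dicts; inspect the pretty-printed output first to confirm the real field names):

```python
# Hedged sketch: fetch the leaderboard JSON, pretty-print it, and flatten any
# list of row-like dicts in the payload to CSV for Excel.
import csv
import json
import sys

import requests

URL = ("http://www.dota2.com/webapi/ILeaderboard/"
       "GetDivisionLeaderboard/v0001?division=europe")

data = requests.get(URL, timeout=30).json()

# 1) Pretty-print: indentation alone fixes the "all on one line" problem.
print(json.dumps(data, indent=2)[:2000])

# 2) Flatten the first list of dicts found in the payload into CSV.
rows = None
if isinstance(data, dict):
    rows = next((v for v in data.values()
                 if isinstance(v, list) and v and isinstance(v[0], dict)), None)

if rows:
    writer = csv.DictWriter(sys.stdout, fieldnames=sorted(rows[0].keys()),
                            extrasaction="ignore")
    writer.writeheader()
    writer.writerows(rows)
```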

Some issues with Paw 3

After trying Paw 3 for a while, I found it's really amazing, but I have a few small questions about its operation:
How can I bulk edit HTTP headers instead of editing in a table one by one?
How can I fold some of the JSON text code when response is too long?
When I search in the response, is there any way to show the number of the matches?
Many thanks.
Thanks for the kind words about Paw!
Unfortunately, none of the three things you've asked about are implemented yet.
How can I bulk edit HTTP headers instead of editing in a table one by one?
There's no ability to bulk-edit headers yet. Instead, we recommend using environment variables as reusable presets. We'd like to add a batch-edit feature later.
How can I fold some of the JSON text code when response is too long?
There's no way to fold JSON in the text view yet. You could use the regular JSON tree if you need to fold items. Likewise, it's something we'd like to add to the text view too.
When I search in the response, is there any way to show the number of the matches?
The match count isn't displayed. It would be easy to add, though. Noted :)

What is the difference between GET and POST methods? [duplicate]

Possible Duplicate:
When do you use POST and when do you use GET?
I know the basic difference between the GET and POST methods: we can see the parameters in the URL with GET, but not with POST. Of course, we can pass large amounts of data via POST, which is not possible through GET.
Are there any other differences between these two methods?
GET is for data retrieval only. You can refine what you are getting, but it is a read-only operation and, yes, as you mentioned, anything used for refinement is part of the URL.
POST is meant for sending data, but it is generally a way to 'break' the simple workings of HTTP, because you get no guarantee about what actually happens: the request may just fetch data, store data, or delete data.
There are also PUT and DELETE in the HTTP standard, but it's all about finding web servers that support these methods as well. As the names imply, PUT sends data for creating or updating a resource, while DELETE removes data.
Enjoy! :)
Other implementation differences between GET and POST (a quick sketch follows after this list):
They have different encoding schemes; multipart/form-data is available for POST only.
A POST may not result in an actual page being returned.
URL length limits can necessitate the use of POST.
If you are using hidden inputs in a form, then submitting it via GET reveals those inputs in the URL.
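A quick sketch of the practical difference using Python's Requests and the httpbin.org echo service: GET refinement rides in the query string, while POST data travels in the request body.

```python
# Illustration only: httpbin.org simply echoes back what it receives.
import requests

# GET: parameters end up in the URL (visible, bookmarkable, length-limited).
r = requests.get("https://httpbin.org/get", params={"q": "widgets", "page": 2})
print(r.json()["url"])   # https://httpbin.org/get?q=widgets&page=2

# POST: data travels in the body, not in the URL.
r = requests.post("https://httpbin.org/post",
                  data={"name": "Jane", "secret": "hidden-input-value"})
print(r.json()["form"])  # {'name': 'Jane', 'secret': 'hidden-input-value'}
print(r.request.url)     # https://httpbin.org/post  (no parameters in the URL)
```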
