The company I work for uses the "Phriction" wiki in Phabricator for a considerable amount of documentation. I'd like to be able to do the following, programmatically, in order of importance:
Download (e.g., with curl or wget) the reStructuredText (RST) to a local file where I can edit it, diff it, etc. Ideally I should be able to download either the latest version or any specific version.
Locally render (e.g., in a local graphical web browser) the markup as Phabricator would render it. If relative links can link correctly back to the original wiki, that's a bonus.
Upload new versions of the wiki page.
If you don't know exactly how to do any of this, but have information or tool suggestions that would help me get started on writing software to do the above, please mention them. (If you're worried about too many answers that don't actually answer any of the questions above, try adding or editing a single community answer for this sort of information.)
I would do the following in your situation:
Download the individual Phriction pages using the API (Conduit) methods in the phriction section.
For this you need a Conduit API token, which you can create in the profile settings of your Phabricator instance.
Then take a look at the phriction.info method: it takes the page slug as a parameter. In this example I use the /changelog/ page.
You can choose between arcanist, cURL, or PHP to call the API. You can also use any other tool that can issue the same HTTP requests as the cURL examples.
If you need more examples of how to run the Conduit method, you can toggle between several variations at the bottom of the method's page.
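For example, a minimal sketch of that download in Python with the requests library (the host name and token are placeholders, and the exact fields in the JSON result may vary between Phabricator versions):
import requests

PHABRICATOR = 'https://phabricator.example.com'  # placeholder: your instance
API_TOKEN = 'api-xxxxxxxxxxxxxxxxxxxxxxxxxxxx'   # from your profile settings

# Call phriction.info for the /changelog/ page.
response = requests.post(
    PHABRICATOR + '/api/phriction.info',
    data={'api.token': API_TOKEN, 'slug': 'changelog/'},
)
result = response.json()['result']

# The result includes the page metadata and its raw markup content.
print(result.get('title'))
print(result.get('content'))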
Transform the page content as you like.
Upload the page again with the Conduit method phriction.edit.
You can edit documents the same way you downloaded them, but this method needs a few more parameters:
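As a rough sketch (same placeholder host and token as above; the title, description, and local file name are made up for illustration), uploading a new version with phriction.edit could look like this:
import requests

PHABRICATOR = 'https://phabricator.example.com'  # placeholder: your instance
API_TOKEN = 'api-xxxxxxxxxxxxxxxxxxxxxxxxxxxx'

# Read the locally edited markup back in (file name is just an example).
with open('changelog.txt', encoding='utf-8') as f:
    new_content = f.read()

# phriction.edit takes the slug plus the fields to update; check the method's
# Conduit page for the full parameter list of your Phabricator version.
response = requests.post(
    PHABRICATOR + '/api/phriction.edit',
    data={
        'api.token': API_TOKEN,
        'slug': 'changelog/',
        'title': 'Changelog',
        'content': new_content,
        'description': 'Updated via the Conduit API',
    },
)
print(response.json())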
Personally, I try all Conduit methods via the web interface first and then turn them into a script.
So I am using Inno Setup 6, which natively supports downloading files from the internet during installation. I have figured out how to download files given a direct link, from this thread: Inno Setup: Install file from Internet
However, I can't for the life of me figure out how to download the latest version of a file given a permalink URL. My specific example is to download the Microsoft Hosting package.
https://dotnet.microsoft.com/permalink/dotnetcore-current-windows-runtime-bundle-installer
Going to this page automatically downloads the latest package.
Inno doesn't like this link (or I don't know how to get Inno to use it) since it doesn't point to the direct file. If I use the direct link (https://download.visualstudio.microsoft.com/download/pr/24847c36-9f3a-40c1-8e3f-4389d954086d/0e8ae4f4a8e604a6575702819334d703/dotnet-hosting-5.0.6-win.exe) this works for obvious reasons.
I'd like to always download the latest, but I'm not sure how to accomplish this. Any suggestions?
Adding super basic code being used...
{ DownloadPage is a TDownloadWizardPage, presumably created with CreateDownloadPage in InitializeWizard }
DownloadPage.Clear;
{ The third argument is the expected SHA-256 hash; an empty string skips verification }
DownloadPage.Add('https://dotnet.microsoft.com/permalink/dotnetcore-current-windows-runtime-bundle-installer', 'dotnet-hosting.exe', '');
DownloadPage.Show;
You would have to retrieve the HTML page, find the URL in the HTML code and use it in your download code.
See Inno Setup - HTTP request - Get www/web content
It would be quite unreliable. Microsoft can change the HTML any time.
You would be better off setting up your own web page (web service) that provides an up-to-date link to your installer. That page can even do what I suggested above: retrieve the URL from Microsoft's download page. If Microsoft changes the HTML, you can fix your web page at any time, which you cannot do with an installer that has already shipped.
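As a rough illustration of that idea, here is a minimal sketch of such a redirect service in Python, using only the standard library (the port and the hard-coded installer URL are placeholders you would keep up to date yourself):
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder: update this whenever Microsoft publishes a new build.
LATEST_INSTALLER_URL = 'https://download.visualstudio.microsoft.com/download/pr/24847c36-9f3a-40c1-8e3f-4389d954086d/0e8ae4f4a8e604a6575702819334d703/dotnet-hosting-5.0.6-win.exe'

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The installer requests your stable URL and follows this redirect
        # to the current direct download link.
        self.send_response(302)
        self.send_header('Location', LATEST_INSTALLER_URL)
        self.end_headers()

if __name__ == '__main__':
    HTTPServer(('', 8080), RedirectHandler).serve_forever()
The installer then always points at your own URL, and you only ever fix the link in one place.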
Without realizing it, you are asking two different questions here. That is because these "permalinks" aren't really permalinks but redirects to some dynamic resource that has a link to what you are looking for.
So first, addressing the Microsoft "permalink": under the hood you are accessing a URL that redirects to a page which points to the latest installer. That page then invokes a JavaScript function, if you are accessing it via a web browser, to download the installer. Note that both the page pointed to and the code that triggers the download will eventually change. In fact, the code itself logs a "warning" when people attempt to download directly:
If you do a view source you'll see:
<script>
    $(function () {
        recordDownload('.NET', 'runtime-aspnetcore-5.0.6-windows-hosting-bundle-installer');
        window.open("https://download.visualstudio.microsoft.com/download/pr/24847c36-9f3a-40c1-8e3f-4389d954086d/0e8ae4f4a8e604a6575702819334d703/dotnet-hosting-5.0.6-win.exe", "_self");
    });

    function recordManualDownload() {
        ga("send", "event", "Download.Warning", "Direct Link Used", "runtime-aspnetcore-5.0.6-windows-hosting-bundle-installer");
    }
</script>
So you can download the HTML from this page and use a regex to get the direct download link, but beware: the link is going to change every time Microsoft releases a new version. Furthermore, when (not if, but when) MS decides to rebrand, this entire process might break. So the best you can do here is download the HTML and try to parse the download URL out of this "permalink" page, as sketched below.
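A minimal sketch of that approach in Python with the requests library, assuming the page keeps embedding the direct link in a window.open(...) call as shown above (the regex is a guess based on the current page source and will need maintenance):
import re
import requests

PERMALINK = 'https://dotnet.microsoft.com/permalink/dotnetcore-current-windows-runtime-bundle-installer'

# Follow the redirect and fetch the HTML of the download page.
html = requests.get(PERMALINK).text

# Assumption: the direct .exe link is the first argument to window.open().
match = re.search(r'window\.open\("([^"]+?\.exe)"', html)
if match:
    print(match.group(1))
else:
    print('Download link not found - the page layout has probably changed.')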
As an alternative, you can download the latest .NET PowerShell install script as described here.
If possible, execute that script directly. If not, look at the function Get-AkaMSDownloadLink within the install script to see how it builds the URL for the latest version. You would probably be better served by building and using that URL rather than attempting to scrape it out of some arbitrary HTML.
Now, on to the second question you might not have realized you were asking: how to automate this for any random installer. The answer is that you can't, in general. Some installers might have a permalink that points directly to the latest version, but you are always going to find cases like Microsoft's. The best you can do is hard-code the links in a service of your own, as @martin-prikryl suggested, and update the links in that service when they break.
I'm trying to download the data from this website: https://cdr.ffiec.gov/public/PWS/DownloadBulkData.aspx.
My questions are: (1) how can I set the appropriate "payload" and post it to the URL for the three inputs (available products, report period end date, and available file formats), and (2) how can I get the link to the files, since the website only has a download button (I can't get the link by right-clicking on the button)? Sorry that my questions are basic, but I hope someone can provide step-by-step guidance. Thanks.
You can't manipulate the web page (selecting from drop-downs, etc.) with just requests.
You need to use dev tools to capture the URL you’re redirected to when you submit the form, then use requests to call that URL with the parameters it expects.
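As a rough sketch of what that looks like in Python once you have the captured request from the dev tools Network tab (every field name and value below is a placeholder; copy the real ones, including any ASP.NET __VIEWSTATE / __EVENTVALIDATION fields, from the captured request):
import requests

URL = 'https://cdr.ffiec.gov/public/PWS/DownloadBulkData.aspx'

# Hypothetical payload: replace with the exact form fields the browser sends.
payload = {
    'AvailableProducts': 'Call Reports -- Single Period',
    'ReportPeriodEndDate': '12/31/2020',
    'AvailableFileFormats': 'TSV',
}

response = requests.post(URL, data=payload)

# If the captured request returns the file directly, save the raw bytes.
with open('bulk_data.zip', 'wb') as f:
    f.write(response.content)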
We've been using Phabricator for post-commit code reviews (aka Audits) for some months now. When doing a commit we also attach an issue number ("Issue: XXXX").
The issue tracker url for this issue is:
site.com/issue/XXXX
I'm wondering if there's any way we could configure Phabricator to replace this text with a hyperlink to its corresponding URL when viewing the commit's comment from Phabricator.
First step
Go to the configuration interface at <your-hosted-phabricator>/config/all/.
Second step
Edit the following two parameters:
In bugtraq.logregex set:
/[Ii]ssues?:?(\s*,?\s*\d+)+/
/(\d+)/
In bugtraq.url set:
https://<your-issue-tracker>/issue/%BUGID%
If you are using these same config settings for another tracking system, you will have to pick and choose. But, we found this very useful when linking to an unsupported bug tracking system.
You should be able to replace the URL with your local system's URL and build your own regex to match your "Issue: XXXX" commit messages; a small sketch of how the two regexes work together follows below.
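For example, here is a small sketch in Python, just to test locally how the two regexes cooperate: the first one from bugtraq.logregex finds the whole issue reference in a commit message, and the second extracts the individual IDs that end up in %BUGID% (the commit message and tracker URL are placeholders):
import re

message = 'Fix login crash. Issues: 1234, 5678'  # example commit message

# First regex: find the whole "Issues: ..." reference.
reference = re.search(r'[Ii]ssues?:?(\s*,?\s*\d+)+', message)
if reference:
    # Second regex: pull out each individual issue ID.
    for bug_id in re.findall(r'(\d+)', reference.group(0)):
        print('https://<your-issue-tracker>/issue/' + bug_id)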
Good Luck!
I do not really understand how Google Code handles file versioning.
I am building a jQuery plugin that anyone can access. Like so:
<script type="text/javascript" src="http://jquery-old-browser-warning.googlecode.com/files/jquery.browser-warning.js"></script>
This script accesses other files on the same project (via ajax).
The problem is that when I upload a new file, it seems like there aren't any changes to it. Google recommends that new files should have new names.
But then I would have to change the filenames that the script loads.
But then I would have to change the script file as well, and that would break everybody's implementation (with the script tag above).
Is there a way to force a file to change when uploading with the same filename?
PS: If I go directly to the project page's file list, then I do get the file with the updated content. But as I said, not when getting it through AJAX.
The cheapest trick in the book to prevent caching is adding some random content to a GET parameter:
www.example.com/resources/resource.js?random=1234567
You can for example use the current timestamp for this.
This, however, causes any and every access to re-fetch the content, and invalidates any client-side caching mechanism as well. I would use this only as a last resort. If Google are that stringent about caching, I'd rather develop a workflow that allows for easy renaming of files.
I don't know your workflow, but maybe you can work with versioned directories?
Like so:
www.example.com/50/resources/resource.js
www.example.com/51/resources/resource.js
That would keep whatever caching the client employs intact, but whenever there's a change from your end, the browser would reload the content.
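Just to make the two variants concrete, a tiny sketch of the URL construction (Python, with placeholder paths and a made-up version number):
import time

BASE = 'http://www.example.com'
VERSION = 51  # placeholder: bump on every release

# Variant 1: cache-busting query parameter (defeats caching entirely).
busted_url = '{}/resources/resource.js?random={}'.format(BASE, int(time.time()))

# Variant 2: versioned directory (keeps caching until you release a change).
versioned_url = '{}/{}/resources/resource.js'.format(BASE, VERSION)

print(busted_url)
print(versioned_url)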
I think it's just a cache in the browser, so when you request the file via AJAX, just add a random parameter or version number.
For example, Stack Overflow adds a version parameter to static content, like:
http://sstatic.net/so/all.css?v=6638
Are you talking about uploading files to the "Downloads" area? Those should have distinct filenames, for example they should be versioned. If you're uploading the script code, that should be submitted by the version control system you're using, and should most definitely keep the same name across revisions.
Edit: your code snippet didn't show up on my page, so I misunderstood what you were trying to do. I don't imagine Google would be happy with you referencing the SVN repository every time a client page is loaded :)
I am trying to find out how to upload a file from a web user to a server using an ASP page. The displayed page has an Input tag of type "File" like this:
<input type="file" name="uploadfile">
And a submit button that passes the Form info to another .ASP page. This page must take the path it gets from the Input control and use it to somehow save the file to the server.
I keep thinking there must be a common way to do this, since I see this kind of thing on a number of websites, but how is it done? Is there some sort of server object that can be called for it?
This script will help you.
Also, you may google for "asp upload file" - there are tons of results.
If you are doing any serious uploading or have a commercial product, you really need to use a COM component in classic ASP. Check out SA-FileUp. It has been the de facto standard for this practically forever.
If your hosting service doesn't allow you to install components, you may also want to look at this script:
http://chris.brimson-read.com.au/index.php?option=com_content&view=article&id=6&Itemid=7
I've seen a wide variety of upload scripts floating around, and they ... vary ... in quality. I've not used the script in the selected answer, but it's worth trying a few different options.
I can recommend SA-FileUp and Dundas Upload. They both are easy to install and have good tutorials on how to implement.