How to Find Exact Path of File on Webserver - networking

Is it possible to determine the exact path of a file on the server? For example, this URL http://www.hdfcsec.com/Research/ResearchDetails.aspx?report_id=2987918 resolves to a PDF. How can I determine the direct location of the PDF file? Any tools (such as network connection traces) or pointers would be appreciated.
Thanks.

Short of examining the actual source of ResearchDetails.aspx and figuring out where it takes its file from, no. Server-side scripts (and non-script binaries) can handle the request in any way they need and produce any data. There are cases where PDFs are dynamically generated by scripts and do not exist as actual files at all.
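To illustrate why there may be no path at all, here is a minimal, purely hypothetical ASP.NET handler sketch (not the actual code behind ResearchDetails.aspx) that serves a PDF straight from a database, so no PDF file ever exists on disk:
using System.Web;

public class ResearchDetailsHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // The PDF bytes come from a database (or are generated on the fly),
        // so there is no file on disk whose path could be discovered.
        string reportId = context.Request.QueryString["report_id"];
        byte[] pdfBytes = LoadReportFromDatabase(reportId); // hypothetical helper
        context.Response.ContentType = "application/pdf";
        context.Response.BinaryWrite(pdfBytes);
    }

    private static byte[] LoadReportFromDatabase(string id)
    {
        // Placeholder: fetch or generate the PDF blob however the site actually does it.
        throw new System.NotImplementedException();
    }
}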

Related

How to use LibTiff.NET Tiff2Pdf in .NET 6

I want to provide support to convert single-page and multi-page tiff files into PDFs. There is an executable in Bit Miracle's LibTiff.NET called Tiff2Pdf.
How do I use Tiff2Pdf in my application to convert tiff data stream (not a file) into a pdf data stream (not a file)?
I do not know if an API is exposed, because the documentation only lists Tiff2Pdf as a tool, and I do not see any examples in the examples folder that use it programmatically, so I cannot tell whether it can handle data streams or how to use it in my own program.
The libtiff tools expect a filename, so the runs shown below simply convert a sample x.tif to various destinations; the first uses the defaults:
tiff2pdf x.tif
We can see it writes the PDF stream to the console (standard output); with no destination given there is nowhere for a file to go. On a second run we can redirect that output:
tiff2pdf x.tif > a.pdf
or alternatively specify a destination:
tiff2pdf -o b.pdf x.tif
So to use these tools we need a file system to receive the output files; the destination folder or file can live on a memory-backed file system (e.g. a RAM drive), but you need to set that up first.
NuGet is a package manager that simply bundles the library, and as I don't use .NET you are a bit out on a limb, since Bit Miracle do not offer free support (hence pointing you at Stack Overflow, a very common tech-support ploy). However, looking at https://github.com/BitMiracle/libtiff.net/tree/master/Samples, they suggest in-memory operation in some sample names, such as https://github.com/BitMiracle/libtiff.net/tree/master/Samples/ConvertToSingleStripInMemory, so perhaps you can get more ideas there.
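If shelling out to the bundled tool is acceptable, here is a rough .NET sketch of the temp-file round trip described above (it assumes tiff2pdf is on the PATH; an untested sketch, not a supported API):
using System.Diagnostics;
using System.IO;

public static class Tiff2PdfRunner
{
    // Sketch: TIFF stream -> temp file -> tiff2pdf -> standard output -> PDF stream.
    public static MemoryStream Convert(Stream tiffStream)
    {
        string tempTiff = Path.GetTempFileName(); // the libtiff tools want a filename
        try
        {
            using (var staging = File.Create(tempTiff))
                tiffStream.CopyTo(staging);

            var psi = new ProcessStartInfo("tiff2pdf")
            {
                RedirectStandardOutput = true,
                UseShellExecute = false
            };
            psi.ArgumentList.Add(tempTiff);

            var pdf = new MemoryStream();
            using (var proc = Process.Start(psi))
            {
                // With no -o option, tiff2pdf writes the PDF to standard output.
                proc.StandardOutput.BaseStream.CopyTo(pdf);
                proc.WaitForExit();
            }
            pdf.Position = 0;
            return pdf;
        }
        finally
        {
            File.Delete(tempTiff);
        }
    }
}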

How to download ONLY the metadata from an mp3 file?

I'm making an application which plays music from a remote server, and I would like to be able to sort by author/album/year/etc. AFAIK the only way to do this is by reading the metadata, but I don't want to download the whole audio file just to read it. Is there any way to separate the two and download only the metadata?
BTW. I am using webdav_client for flutter, which uses dio as a back-end so instructions for that specifically would be greatly appreciated. TY
Firstly, you can usually request a specific byte range of a file by using HTTP range requests. This is dependent on server behavior: many servers support them, but not all do.
Next, you need to figure out the location of the ID3 tags you want. Some versions of ID3 are located at the front of the file; some are at the back. (ID3v2 sits at the front with a variable size, while an ID3v1 tag is exactly the last 128 bytes.) Therefore, you should probably request the first 128 KB or so of the file and search it for ID3 data, while also reading the Content-Length response header. Then, if you don't find your tag at the beginning, you can request the last 128 KB or so of the file and search there.
Most MP3 files aren't very big, and bandwidth is usually plentiful. Depending on the size and scope of this project, you might actually find it more efficient to just download the whole files.
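For illustration, here is the same idea as a plain HTTP sketch in C# (the question uses dio/Flutter, but the Range header works identically in any client; the URL is a placeholder):
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class Id3Probe
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Get, "https://example.com/song.mp3");
        request.Headers.Range = new RangeHeaderValue(0, 128 * 1024 - 1); // Range: bytes=0-131071
        using var response = await client.SendAsync(request);

        // 206 Partial Content means the server honored the range;
        // 200 OK means it ignored it and is sending the whole file.
        Console.WriteLine(response.StatusCode);

        byte[] head = await response.Content.ReadAsByteArrayAsync();
        // An ID3v2 tag starts with the ASCII bytes "ID3" at the very start of the file.
        bool hasId3v2 = head.Length >= 3 && head[0] == 'I' && head[1] == 'D' && head[2] == '3';
        Console.WriteLine($"ID3v2 tag at front: {hasId3v2}");
    }
}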
I don't think it is possible to read just the ID3 metadata (at the beginning or the end) of the audio file without downloading the entire file first.
One idea would be to extract this information on the server side and provide it separately, in addition to the audio file itself. To do this, you would need one of the well-known extraction tools available for your platform. However, if you need to download hundreds or thousands of companion files, I am not sure about the reliability of such a system.
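As a sketch of that server-side idea, assuming the TagLib# library (the TagLibSharp NuGet package) as the extraction tool, with a made-up file path and output format:
using System;

class ExtractMetadata
{
    static void Main()
    {
        // Hypothetical path; in practice you would loop over the server's library
        // and write the results to companion files or a small database.
        var audio = TagLib.File.Create("/music/song.mp3");
        Console.WriteLine($"Title:  {audio.Tag.Title}");
        Console.WriteLine($"Artist: {audio.Tag.FirstPerformer}");
        Console.WriteLine($"Album:  {audio.Tag.Album}");
        Console.WriteLine($"Year:   {audio.Tag.Year}");
    }
}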

Accessing Excel file from Sharepoint with R

I am trying to write an R script that will access an Excel file stored on my company's SharePoint page so that I can make a few calculations and plot the results. I've tried various ways to do this (download.file, RCurl's getURL(), gdata), but I can't seem to figure it out. The URL is HTTPS, and a username and password should be required. I've gotten the closest with this code:
require(RCurl)
URL<-"https://companyname.sharepoint.com/sites/folder/_layouts/15/WopiFrame.aspx?sourcedoc={2DCC2ED7-1C13-4910-AFAD-4A9ACFF1C797}&file=myfile.xlsx&action=default"
f<-getURL(URL,verbose=T,ssl.verifyhost=F,ssl.verifypeer=F,userpwd="mylogin:mypw")
This seems to connect (although the username and password don't seem to matter) and returns
> f
[1] "<html><head><title>Object moved</title></head><body>\r\n<h2>Object moved to here.</h2>\r\n</body></html>\r\n"
However, I'm not sure what to do at this point, or even if I'm on the right track. Any help will be greatly appreciated.
I use
library(readxl)
read_excel('//companySharepointSite/project/.../ExcelFilename.xlsx', 'Sheet1', skip=1)
Note, no https:, and sometimes I have to open the file first (i.e., cut and paste //companySharepointSite/project/.../ExcelFilename.xlsx into my browser's address bar)
I found that other answers did not work for me, perhaps because I am on a Mac, which obviously does not play as well with Microsoft products such as Sharepoint.
Ended up having to split it into two pieces: first download the Excel file to disk and then separately read that Excel file.
library(httr)
library(readxl)
# the URL of your sharepoint file
file_url <- "https://yoursharepointsite/Documents/yourfile.xlsx"
# save the excel file to disk
GET(file_url,
    authenticate(active_directory_username, active_directory_password, "ntlm"),
    write_disk("tempfile.xlsx", overwrite = TRUE))
# save to dataframe
df <- read_excel("tempfile.xlsx")
df
# remove excel file from disk
file.remove("tempfile.xlsx")
This gets the job done, though would be interested if anyone knows how to avoid the interim step of writing to disk.
N.B. Depending on your specific machine/network/Sharepoint configuration, you may also be able to just use authenticate(":",":","ntlm") per this answer.
I was unable to accomplish this in R using hints from the answers above (I tried many approaches found on this site). However, just to highlight the response by @RyanBradley above and especially the response by @ZS27:
I instead had to use the OneDrive Desktop client (Windows) to allow me to sync the folder to my computer. Newer versions of SharePoint (like that found in MS Teams) have a sync button or feature in the document libraries / folders that interfaces with OneDrive.
This is the functional equivalent of mounting the folder as a network drive, so R interfaces with the file as if it was a part of your file system. Works for me.
You may need to map a network drive to the SharePoint library so that you can connect to it directly. Or if you don't want to map a network drive you could also place a shortcut to the folder in your startup folder.
Example file path:
\\company_sharepoint_site\ssp\site_name\sub_site_name\library_name
Example start up folder location (Windows 10):
C:\Users\USER_NAME\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup
Note that the direction of the slashes ("\" rather than "/") is important so that your file path is interpreted as a file location, not a browser location. By placing such a path in a network drive mapping or as a shortcut in your startup folder, your PC should connect to it when it boots.
# Load or install readxl
if (require(readxl) == FALSE) {
  install.packages("readxl")
  if (require(readxl) == FALSE) stop("Unable to install and load readxl")
}
# Define path to data
data_path <- "\\\\company_sharepoint_site\\ssp\\site_name\\sub_site_name\\library_name\\Example.xlsx"
# Pull data
df_employees <- read_xlsx(data_path)
I had a situation exactly like yours: I wanted to access an Excel file on a SharePoint site using R.
I also searched the internet and found nothing relevant to my requirement.
Then I attempted the following:
I mapped the SharePoint folder as a network drive on my local system.
Then I could access that Excel file (on the SharePoint site) from my machine without opening a web browser.
So, copy the network path as it appears on your system (it is the same as your SharePoint site, except it has no https/http and starts with \\, like the following: \\sharepoint.test.com\folder\path).
Launch RStudio and select the Import Dataset option under the Environment section.
Choose 'From Excel'. The 'Import Excel Data' form will open.
Under the File/URL field, paste the network path of the SharePoint folder (copied from your machine).
Click Import, and the Excel file on SharePoint will be imported into R successfully.
Ensure that the path contains no URL encoding (like %20) and that backslashes are used as separators.
When importing the file, enter the folder names exactly as you see them.
For example:
Sharepoint.microsoft.com - Sharepoint's Domain
Department name - name of the Folder
Project name - name of the folder
Sample.xlsx - name of the file
So, your URL to import the dataset should be:
"\\Sharepoint.microsoft.com\Department name\Project name\Sample.xlsx"
Thank you!
Try using the link in this format:
http://site/_layouts/download.aspx?SourceUrl=url-of-document-in-library
If above doesn't work try this syntax [note slash directions]:
"\\gov.sharepoint.com#SSL/DavWWWRoot/sites/SomePath/SomePath/SomePath/SomeFile"
See this for more info about syntax and what's going on:
Connect to a site via SSL/DavWWWRoot not usual URL? Why does this make a difference?

Best practices place to put URL that configs my app?

We have a Qt app that when it starts tries to connect to a servlet to get config parameters that it needs to keep running.
The URL may change frequently because we have to test the application in several environments. Right now (as a temporary solution) the URL is a constant in source code, but it is a little bit ugly.
Where is the best place to maintain this URL, so that we do not need to change the source code every time we want to change the target environment?
In a database table maybe (my application uses a SQLite DB), in a settings file, or in some other way?
Thank you for your replies.
You have a number of options:
1. Hard-coded (like you have already)
2. Run-time user input
3. Command-line arguments
4. QSettings
5. Read from a bespoke file as text
I would think option 3 would be the simplest to implement without being intrusive, but it does depend on what kind of application you have.
I would keep the list of URLs in a document, e.g. an XML file, stored in a central, well-known place, e.g. a known web server, and hardcode the URL of that known place in the app.
The list could then be edited externally without recompiling your app.
At startup the app would download and parse the list, pointing at the right servlet based upon an environment specified as a command-line parameter.
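For illustration, the central document might be as simple as this (a made-up format; the element and attribute names are hypothetical, as are the URLs):
<!-- hypothetical environments.xml hosted at the hardcoded, well-known URL -->
<environments>
    <environment name="dev"     servletUrl="https://dev.example.com/MyServlet"/>
    <environment name="staging" servletUrl="https://staging.example.com/MyServlet"/>
    <environment name="prod"    servletUrl="https://www.example.com/MyServlet"/>
</environments>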

From ASP.NET, How to Download Two Excel Files and Invoke Batch Command on Them?

I need to download two Excel files onto the client, and then run a (diff) executable against them. I know how to download a single Excel file, from
here. But how do I download a second one automatically in succession? And then how do I run a batch command on them? Is this even realistic? Any guidance or pointers would be greatly appreciated.
Thanks,
Mike
To download multiple files at once you have two main options:
1) Just open multiple windows to your page generation script to download multiple files as per http://www.webdeveloper.com/forum/showpost.php?s=b4f6b25edeb6b7ea55434c4685a675fe&p=950225&postcount=6
2) Archive the files into a package (zip/arj/7z etc..) and send the archive to the client.
eg. http://www.motobit.com/tips/detpg_multiple-files-one-request/
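A minimal sketch of option 2, assuming ASP.NET Core and placeholder file paths (the original question predates Core, but the ZipArchive approach is the same idea):
using System.IO;
using System.IO.Compression;
using Microsoft.AspNetCore.Mvc;

public class DownloadController : Controller
{
    public IActionResult BothReports()
    {
        var buffer = new MemoryStream();
        using (var zip = new ZipArchive(buffer, ZipArchiveMode.Create, leaveOpen: true))
        {
            // Placeholder paths; point these at the two generated Excel files.
            zip.CreateEntryFromFile(@"C:\data\report1.xlsx", "report1.xlsx");
            zip.CreateEntryFromFile(@"C:\data\report2.xlsx", "report2.xlsx");
        }
        buffer.Position = 0;
        // One response, one download: the client unpacks both files from the zip.
        return File(buffer, "application/zip", "reports.zip");
    }
}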
As for doing the diff client-side, that is a lot more tricky, as Shhnap has already mentioned. If you are doing this for a controlled client base you may be able to get them to allow permissions for an ActiveX script that runs something client-side (or fires off a console application) - but if you don't have fine control over the client environment then I can't think of a way to do it.
As Shhnap suggested, can you not just do the comparison server-side (and then send the result to the client as a third file)?
Well, just some pointers, because I'm not sure I completely understand the problem. You want a user to be given two downloads at the same time and then run a diff command against those two files? On the server or the client, I'm not sure? You'll have a lot of problems automating the client-side version, because forcing people to run client-side code is usually frowned upon by virus-protection software.
The server-side diff sounds exactly like a CGI moment to me: http://www.cs.tut.fi/~jkorpela/perl/cgi.html. That will allow you to generate a web page that shows the diff between the two. CGI allows you to run programs on your server and display their output in a web page; that's the simple explanation.
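Translated out of CGI terms into this question's stack, a rough ASP.NET Core sketch of the same server-side idea (the diff tool and file paths are assumptions, not a specific recommendation):
using System.Diagnostics;
using Microsoft.AspNetCore.Mvc;

public class DiffController : Controller
{
    public IActionResult Compare()
    {
        // Placeholder tool and file paths; any command-line diff works the same way.
        var psi = new ProcessStartInfo(@"C:\tools\diff.exe")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        psi.ArgumentList.Add(@"C:\data\report1.xlsx");
        psi.ArgumentList.Add(@"C:\data\report2.xlsx");

        using var proc = Process.Start(psi);
        string output = proc.StandardOutput.ReadToEnd();
        proc.WaitForExit();

        // Serve the tool's output to the client as the "third file".
        return Content(output, "text/plain");
    }
}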
If that was not quite what you wanted, feel free to leave me a comment and I'll try to edit my answer accordingly.
