Programmatically spoofing an HTTP script request in an iframe

I'm building a backend admin system that edits JSON files controlling the look and feel of the main site. I want to add a 'preview' button for use before the user hits save. To do that, I want to use the main site itself, but instead of calling the actual JSON file in production, save a temp version of it and redirect this user's traffic for that file to the temp file - without changing the original site code.
I've considered Chrome plugins, configuring the iframe somehow, or, in the worst-case scenario, grabbing the production front end, parsing out the call to the prod JSON file, and replacing it with the new temp JSON file. That last option is obviously not ideal, as it would entail a lot of work, and if anything changes on the prod site, it would have to be updated.
I would love your ideas!

Do you have access to the main site's source code? You could implement a preview option from the main site which accepts a GET parameter and uses a temporary JSON setting based on this GET parameter.
From the backend admin system's point of view, it's just a matter of adding the JSON as part of the ajax GET request.
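A minimal sketch of that idea, assuming the main site loads its config with a fetch-style call (the parameter name, path, and applyLookAndFeel hook are all illustrative, not from the original post):

// On the main site: honor an optional previewConfig GET parameter.
const params = new URLSearchParams(window.location.search);
const configUrl = params.get('previewConfig') || '/config/site.json';

fetch(configUrl)
  .then(function (res) { return res.json(); })
  .then(function (config) {
    applyLookAndFeel(config); // hypothetical hook into the site's rendering
  });

The admin system would then save the temp file and point an iframe at something like https://mainsite.example/?previewConfig=/previews/tmp-123.json.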
Unfortunately though, there is no easy way of doing this if you don't have access to the main site's source code or if you can't reach out to whoever maintains that main site.
Your cleanest option might be to recreate the main site's look and feel instead and pass it off as a "preview".

Related

How do I download an aspnetForm page with links

I'm trying to download a municipal planning plan together with all the relevant documents.
All the documents can be reached from the following link:
I've tried the following command (which worked well for other sites) and some variations, without success:
wget -E -k -r -l 3 "http://www.mavat.moin.gov.il/MavatPS/Forms/SV4.aspx?tid=4&et=1&mp_id=ppnCWTcsST9gG0%2fa0ayWnjFyZ%2bo14s221Ujlpi7UvR4jIRAHLKhJ8lOLSkomZ%2fvlHk8b2T0oENpI6Wh2hKzxQJCw9BPJP8gav%2ftgiKlk5S0%3d"
I can't get the files for the same plan on their new site either:
https://mavat.iplan.gov.il/SV4/1/5000931297/310
I'd appreciate any help.
Well, these days, and especially with .NET web sites?
We don't use hyperlinks with a simple (full) path name to actual files on the web server. In fact, in most cases one will not even give the web server rights to those folders (they are not exposed to Internet Services).
So, no actual links as a full URL to documents exist.
What happens when you click on a button or button link? The code-behind on the web server runs (and that is code you don't have). Furthermore, that code-behind can browse, read, and retrieve any file from any folder on that server or other servers. But links from the web site don't exist, and it is not even possible to type in a URL that resolves to an actual file name on the server.
So the server-side code (not Internet Services) goes and grabs the document. In fact, the documents could be in a database: the code-behind runs and pulls the binary data from the database (which represents a valid PDF file), or it reads the file from disk and then STREAMS the file for a download.
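To make that mechanism concrete, here is a rough sketch of what such a handler does (Node.js purely for illustration - the site in question is ASP.NET - and lookUpPathInDatabase is a hypothetical stand-in for whatever lookup the real code-behind performs):

const http = require('http');
const fs = require('fs');

// Hypothetical stand-in for the real database lookup.
function lookUpPathInDatabase(docId) {
  return '/internal/storage/' + docId + '.pdf';
}

http.createServer(function (req, res) {
  // The request carries an internal document id - never a file path.
  const docId = new URL(req.url, 'http://localhost').searchParams.get('doc');
  const internalPath = lookUpPathInDatabase(docId);
  res.writeHead(200, {
    'Content-Type': 'application/pdf',
    'Content-Disposition': 'attachment; filename="plan.pdf"',
  });
  // Stream the bytes back; no URL to the file itself ever exists.
  fs.createReadStream(internalPath).pipe(res);
}).listen(8080);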
Now, this is often done for reasons of security. It means that no valid URL exists to get at a document.
Not only is this done for security; from a developer's point of view, it is often better to retrieve a row from a database. That row can hold the information you SEE rendered on that form, but the web page is not static: the developer codes a pull of rows from a database and then simply "assigns" that data to some control - say a datagrid, listview, or whatever. (This assignment of data is only one or two lines of code, and the control plus the web server then renders that datagrid.)
So this is done because the developer only has to assign the result of a database query to the control that renders on the form. To add or remove documents, you only edit the database, and the web page renders the new information.
As a result? There are no direct links to the actual documents on the server. To retrieve a document, you would have to send the web site the exact command required.
You can hit F12 (most browsers support this). This will put your browser into developer mode. Do this, use the select-element feature, and then click on a PDF link. You get this:
<img src="../images/ft/file_PDF.gif" style="cursor:pointer"
     onclick="openDoc('99000526871729',
                      'AABA7BE646E182B67DB1C15220E531DF36BBB591D8EEA7757435B2606C08E6F9')">
So, note the above. That openDoc event runs the SERVER-side code you have to call to retrieve a document. There is thus NO link, and you are not going to be able to wire up or run your OWN web page that hits that server and runs the onclick routine.
However, the onclick DOES expose the internal database document numbers used to pull/read/retrieve a given document. But the path name, and how the code grabs this file? You have no idea; server-side code (C# or VB.NET) HAS to run. That code, as noted, grabs the file and then "streams" it when you download or click on a link.
For simple HTML-like pages? Well, for those built by someone who took a one-day HTML course? Sure, such web sites will have src=some path name to a valid URL, and those simple systems allow you to enter a URL to grab a document. Those documents are fully exposed by the web site, and a simple valid URL path to each file exists. Not so with ASP.NET. As noted, this is done not only for security; it is a better overall developer experience to write code that grabs the files, as opposed to rendering full path links to files.
There are many additional benefits. For example, the database that drives this likely has a column (or columns) containing the path names to the documents. If they run out of storage, or want to move older files to a much slower - and much cheaper - storage system? They can move the files and update the path-name columns in the database. The web site continues to work, since an exposed URL is NEVER used. And as noted, actual direct URLs don't exist, and the web server (IIS), as opposed to the code-behind, will not even have rights to those files.
As a result?
You will not be able to simply pull the web page and THEN extract URLs to the file names.
What you might be able to do is write code that loads the web page, scans the event-code stubs behind the links, and then clicks each button via web-browser automation. Even then, entering file names into the download prompts is a problem.
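A rough sketch of that automation approach, assuming Puppeteer (Node.js); the [onclick*="openDoc"] selector and the per-file wait are guesses based on the markup shown above, not something tested against this site:

const puppeteer = require('puppeteer');

(async function () {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Route downloads to a local folder instead of a save-as prompt.
  const client = await page.target().createCDPSession();
  await client.send('Page.setDownloadBehavior', {
    behavior: 'allow',
    downloadPath: './downloads',
  });

  await page.goto('https://mavat.iplan.gov.il/SV4/1/5000931297/310', {
    waitUntil: 'networkidle2',
  });

  // Click every element whose onclick calls openDoc(...); the server
  // then streams each document back under whatever name it chooses.
  const docLinks = await page.$$('[onclick*="openDoc"]');
  for (const link of docLinks) {
    await link.click();
    await new Promise(function (r) { setTimeout(r, 3000); }); // crude per-file wait
  }

  await browser.close();
})();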
So, what you ask is not easy and likely not possible - a very difficult task. The simple reason is that the site does not use simple HTML and static links to files. It never actually exposes a direct link to a file, and worse yet, the web server does not have, or even allow, a direct URL to a file. Such URLs don't exist, and the web site will not even have rights to those file names (only the .NET code-behind does - not Internet Services).
The code-behind grabs the document and then "streams" the file back to the web page or link you clicked on. The simple HTML coders of the past would create, say, a folder (usually a virtual folder) that points to the files on some server; with .NET it is easier (and far more secure) to stream the files from code.
Modern development tools don't use old-fashioned ideas like a URL that directly retrieves a file - they are designed differently.
In some cases URLs are allowed or created, and this is done for the sake of sharing links. If you have a cute video or document? Then the designers of the system will often permit parameters in the URL so you can share a link with someone else. This page has no such provisions. You can share a link to the page, but no actual URL to a document - nor even a provision allowing such URLs - exists.
So this pretty much means that to retrieve a document, you have to go to that web page, and only when you click on a document will the web site "stream" down that one particular document.

Can I get the browsersync proxy to not update a particular URL?

Details:
I create virtual hosts for development like http://project.client.dev and then use browsersync for live reloading and whatnot, which creates a proxy at http://localhost:3000. This causes all URLs on the page to be rewritten to http://localhost:3000, which in 99% of cases is exactly what I want, but sometimes I want to link to the original http://project.client.dev path instead.
Question:
Is it possible to have browsersync not update just particular URLs? Like, add a class of do-not-update or something and have those stay http://project.client.dev instead of being updated to http://localhost:3000?
Use Case:
I build WordPress websites. When I'm clicking around the site's front end, I want those http://localhost:3000 addresses so live reloading occurs. On the back end, however, I don't want those paths ending up in the database or, worse yet, browsersync reloading my admin screens while I'm editing content. Typically I just open one tab on the front end and one on the back end using the respective domains, but I'd like to add an in-development link to the footer that says "edit page ID 123" and opens the current front-end page on the back end for editing. Problem is, it opens the http://localhost:3000 version of the back end, and I need to manually update the URL every time (provided I remember to do so).
I don't think it is possible at the moment to exclude particular URLs, but have a look at the documentation: https://www.browsersync.io/docs/options
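One speculative, untested workaround based on the rewriteRules option in those docs: add a rule that rewrites admin links back to the dev host. Whether such a rule runs after browsersync's own host rewrite is an assumption:

const browserSync = require('browser-sync').create();

browserSync.init({
  proxy: 'http://project.client.dev',
  rewriteRules: [
    {
      // Hypothetical: restore the original host for wp-admin links only.
      match: /http:\/\/localhost:3000\/wp-admin/g,
      fn: function (req, res, match) {
        return 'http://project.client.dev/wp-admin';
      },
    },
  ],
});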

Accessing images in a different project, but in the same solution

I made a website for a client who wanted to be able to upload images and then use those images to create dynamic content on his site. It all works fine, but now I want to isolate the administration part (where he can add images and create his content) on a subdomain.
So at the moment I have two projects: one that images get uploaded to, and one that has to access those images (this is my problem).
I have read multiple topics related to this issue but have not found a solution; I can never get a path outside of my current project.
The only option I can think of right now that could work is to have some kind of API on the main website, and when an image gets uploaded to the administration site, send that file over to the main site. But that seems pretty overkill, knowing that my images will be on the same server.
Can this be done?
What is the cleanest/best way to achieve this?
Please note:
Saving images to the database is not an option; uploading files to the server and storing only the path is so much faster.
My images get uploaded at run time, so I can't use anything that relies on resources/compile time.
Thanks!
UPDATE (SOLUTION)
Rather than saving only the name of the file (e.g. "image1.png") in the database and then trying to resolve the path in the other project, I ended up saving the absolute URL in the database so that I could use that URL directly.
// Builds an absolute URL from an application-relative path, using the
// scheme and authority of the current request.
public static string ResolveServerUrl(string serverUrl, bool forceHttps)
{
    // Already an absolute URL? Return it unchanged.
    if (serverUrl.IndexOf("://") > -1)
        return serverUrl;

    string newUrl = serverUrl;
    Uri originalUri = System.Web.HttpContext.Current.Request.Url;
    newUrl = (forceHttps ? "https" : originalUri.Scheme) +
             "://" + originalUri.Authority + newUrl;
    return newUrl;
}
This will give you a URL that looks like http://yourdomain/path/to/image.jpg, so you can save it directly in the database and use it as is in the other project.
The only option I am thinking right now that could work is to have some kind of API on the main website, and when an image gets uploaded to the administration site, send that file over to the main site
I think you just answered your own question. That is indeed the way to go - or I should say, you're headed in the right direction towards an enterprise SOA architecture. You are still far from it, but this is a good start, the point where you realize that your system is growing and demands a more robust architecture.
but that seems pretty overkill knowing that my images will be on the same server.
This is a false assumption, because if you design it well, you can easily scale out to a different server and platform without affecting your client app(s). Say that in the future the content is moved to its own server: you will only make the pertinent modifications to your "Content Service", while your client apps will not need to change at all; they are still pointing at the same endpoint and will never notice what's happening with the internals of the "Content Service". What this means is that your client apps only care about getting content from the "Content Service" without knowing where the content is actually hosted - whether on a Windows server, a Linux server, a SQL Server database, an Oracle database, in the US or in China. It is not the responsibility of the client app(s) to care about how the content is handled; they only need to know how the content is served.
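As a rough sketch of the idea (Node/Express here purely for illustration, since the point is the URL contract rather than the platform; the route and storage path are made up):

const express = require('express');
const path = require('path');

const app = express();
const CONTENT_ROOT = '/var/uploads'; // internal detail, free to change later

app.get('/content/:name', function (req, res) {
  // Clients only know GET /content/<name>. Whether the bytes come from
  // this disk, a database, or another server is internal to the service.
  res.sendFile(path.join(CONTENT_ROOT, path.basename(req.params.name)));
});

app.listen(8080);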
Hope that makes sense. I could provide you with some links explaining the benefits of such architectures.

Project hosting on Google Code. Files are cached?

I do not really understand how Google Code handles file versioning.
I am building a jQuery plugin that anyone can access. Like so:
<script type="text/javascript" src="http://jquery-old-browser-warning.googlecode.com/files/jquery.browser-warning.js"></script>
This script accesses other files on the same project (via ajax).
The problem is that when I upload a new file, it seems like there aren't any changes to it. Google recommends that new files should have new names.
But then I would have to change the filenames that the script loads.
But then I would have to change the script file as well, and that would break everybody's implementation (with the script tag above).
Is there a way to force a file to change when uploading with the same filename?
PS: If I go directly to the project page's file list, I do get the file with the updated content. But as I said, not when getting it through ajax.
The cheapest trick in the book to prevent caching is adding some random content as a GET parameter:
www.example.com/resources/resource.js?random=1234567
You can, for example, use the current timestamp for this.
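Since the plugin already uses jQuery for its ajax calls, a minimal sketch (the file name here is a placeholder; cache: false makes jQuery append a _=<timestamp> parameter for you):

$.ajax({
  url: 'http://jquery-old-browser-warning.googlecode.com/files/data.json',
  cache: false, // jQuery appends _=<timestamp> to bust the cache
  dataType: 'json',
  success: function (data) { /* use the freshly fetched file */ },
});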
This, however, causes each and every access to re-fetch the content, and it invalidates any client-side caching mechanism as well. I would use this only as a last resort. If Google is that stringent about caching, I'd rather develop a workflow that allows for easy renaming of files.
I don't know your workflow, but maybe you can work with versioned directories?
Like so:
www.example.com/50/resources/resource.js
www.example.com/51/resources/resource.js
That would keep whatever caching the client employs intact, but whenever there's a change on your end, the browser would reload the content.
I think it's just a cache in the browsers. So when you request the file via ajax, just add a random parameter or version number.
For example, Stack Overflow adds a version parameter to static content, like:
http://sstatic.net/so/all.css?v=6638
Are you talking about uploading files to the "Downloads" area? Those should have distinct filenames; for example, they should be versioned. If you're uploading the script code, that should be submitted through the version control system you're using and should most definitely keep the same name across revisions.
Edit: your code snippet didn't show up on my page, so I misunderstood what you're trying to do. I don't imagine Google would be happy with you referencing the SVN repository every time some client page is loaded :)

Web File Security best practices for ColdFusion 8 in IIS6 or IIS7

Let's say we have a web site with a CF app that was written in-house.
Assume that:
Server 2003 IIS6 or 2008 IIS7 will be used
ColdFusion 8 will be used
Directory browsing is denied
SSL is required to connect
The account login process is secure (yeah, I know that is a whole other ball of wax, but that concept is discussed ad nauseam on the web).
Say I have a file at https://domain.com/folder1/folder2/ with a name like picture92352.ext - imagine it as a JPG or PDF or whatever. The entire path between the domain name and the file varies widely in naming structure, depth, etc.; files are not all lumped together in one folder.
The app restricts links by user, such that a user would have to have access to that file to find it in the first place. But as it stands now, if a person knew the full URL to that file, they could retrieve it without logging in to the app. It's the classic security-by-obscurity situation: a random person isn't likely to find a file they shouldn't get to, but once someone is given access, they know how to reach it from another PC, where their actions might not be traced back to them.
How do I restrict access to these files before someone logs in, and still make them accessible to outside users after they log in? Is there a way to do it with permissions only? Is the only answer to have code dynamically moving files around at the time of the request? Or is there some obvious step I'm not even thinking of?
Let me clarify this slightly. No matter how the file is presented on a page, a user can use the browser (IE, Firefox, etc.) to examine the URL the file comes from. If the image is a link, there is always "Copy Shortcut" in IE's right-click menu; the same functionality in Firefox is called "Copy Link Location". If the image is displayed inline as part of the page, an IE user can right-click and choose Properties to see the URL; in Firefox the same Properties option is present, but there is an even quicker, more convenient option labeled "Copy Image Location". Once a user knows the URL to a file, if the location or file name doesn't change, they can use that URL without authenticating in the CF app.
If I change the NTFS/share permissions so that IUSR can't see the content, then my CF app and IIS can't serve it. What strategy do I use to deliver the file from the CF app without leaving this hole open?
You could write a CFM page that serves up the images. Then you just make sure they are authenticated inside the CFM.
<!-- something like this -->
http://localhost/GetFile.cfm?file=foobar.jpg
In GetFile.cfm, you would do something like:
<!--- First make sure the user is logged in; the session key here is just an example --->
<cfif not structKeyExists(session, "user")><cfabort></cfif>
<!--- the filename part is what the browser will pre-populate in the download dialog --->
<CFHEADER name="Content-disposition" value="attachment;filename=picture92352.ext">
<CFCONTENT type="text/plain" file="\\fileserver\folder1\folder2\picture92352.ext">
Take a look at the various MIME types.
If you wanted to do something similar but keep a more natural URL, I think you would need to leverage the Java servlet underpinnings of ColdFusion to create a handler for any URL matching a certain pattern.
