Using Karate for front-end tests, I would like to know whether there is a way to test a file download. I understand that it is not possible to inspect the downloaded file itself, but is there a way to check that some file is actually behind the link? The front-end link is as simple as this:
<a href="#">Download ZIP file</a>
After clicking, the browser starts downloading the file immediately.
Is there any way to check, for example, that the file exists and that the download began?
Or is there some other way to check file downloads using Karate?
Thank you for any advice!
Since Karate is an API testing tool, you can download and validate files: https://stackoverflow.com/a/53706294/143475
So if you know what the URL of the file is, just do that. One thing to note: if any cookies are required, you will need to pass them from the "browser side" to the "API side" of Karate. * cookies driver.cookies may actually work, but I haven't tried it.
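A rough, untested sketch of how that could look in a Karate feature (the URL and the zip content type below are placeholder assumptions, and the cookies step is exactly the untested idea mentioned above):

Scenario: check the file behind the download link
  # placeholder URL - ideally taken from the real href of the link
  Given url 'https://myapp.example.com/files/report.zip'
  # pass the browser session's cookies to the API side (untested)
  And cookies driver.cookies
  When method get
  Then status 200
  And match header Content-Type contains 'zip'
  # responseBytes holds the raw downloaded content
  And assert responseBytes.length > 0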
You have used href="#" in your example, but I hope the actual URL is something you can easily get from the HTML / element.
If not, you may need to do some research: maybe you can scrape it out of the HTML, or monitor the click and do some JS magic.
Finally let me say that perhaps you should just ignore this in your test-flow. The risk of not testing this may be low and the effort of testing this may be too high to be worth it.
I'd like to know the amount of data that goes over the wire when someone first opens my Meteor app.
Pingdom is useful but I'd like something I can run locally on my own machine.
Ideally I'd also like to see a breakdown per package so I can decide on whether I want to keep or ditch a specific package.
You can just use your browser's developer tools. For example, in Chrome, open the developer tools (right click -> Inspect Element) and go to the Network tab. Refresh and you'll see all of the JavaScript files and their sizes, one per package. You can filter for only Scripts and then sort by size (you may have to do a full refresh to clear out the cache for this to work). jQuery will probably be one of the biggest packages, if not the biggest.
You can also run meteor with the --production flag and the server will send one concatenated and minified js file. This is much smaller than the total size of the individual package files, but shows you the actual size of the data that will be sent in production.
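For example, from the project directory (this is the standard flag; nothing project-specific is assumed):

# serve the app the way it would be bundled for production:
# one concatenated, minified JavaScript file instead of per-package files
meteor run --production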
You also need to be aware of how much data you are publishing/subscribing. If you add the meteorhacks:fast-render package, the initial published set of data will be added as a script tag to the HTML. You should also be aware of how much data you are publishing while the user browses and uses your application. Something like Kadira is helpful with that.
I have a PDF document that is generated on the fly and rendered on the fly to an iframe within a RadWindow. Basically, the document is already largely prepopulated, but the user still has a chunk of information they are required to enter. I've found a good amount of information about sending a PDF TO an iframe, but not much about going the other way. I have a button within the RadWindow that can access the iframe object, but I'm somewhat lost as to where to go from there.
EDIT: The PDF is an editable form. I'm trying to pull back the entire PDF document as is, after the client side makes their entries to the form.
I think you'll need to send the file to the user so they can edit it locally, and then instruct them to upload it back.
The Content-Disposition header with the value attachment can help with the first task, and you can use RadAsyncUpload for the upload: http://demos.telerik.com/aspnet-ajax/asyncupload/examples/overview/defaultcs.aspx.
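A minimal sketch of the first part in ASP.NET, assuming the generated PDF is available as a byte array (pdfBytes and the filename are placeholders):

Response.Clear();
Response.ContentType = "application/pdf";
// "attachment" tells the browser to save the file rather than render it inline
Response.AddHeader("Content-Disposition", "attachment; filename=prefilled-form.pdf");
Response.BinaryWrite(pdfBytes); // pdfBytes: byte[] holding the generated document
Response.End();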
I am not aware of ways to tap into the PDF viewer plugin the browsers use to show the PDF. Perhaps there is API from Adobe or some other third party plugin but that would rely on them and is out of your control.
Perhaps the JS PDF viewer from Firefox has something: https://mozillalabs.com/en-US/pdfjs/ but I don't know how stable and usable it is.
As per what was described in the comments, I ended up using postbacks through the PDFs themselves, along with 1-pixel fields to store the data required to identify the documents. It's a little hacky, but functional. I'm leaving this as an actual answer since it is as close to a real solution to my original problem as I found. This has been up and running for close to 4 years in this manner, and so far hasn't caused any issues.
This is my first post, so if my question is too vague or unclear, please tell me.
I'm trying to scrape a website with news articles for a research project. But the link to the modified search on that webpage won't work, because the intranet authentication spits out an error.
So my idea was that I fill out the search form and use the resulting link to scrape the website.
Since my boss likes to work with R, he would like me to write an R script to do so, but I have no idea how and haven't found anything that works.
You need two packages: RCurl and XML.
The RCurl package is used for internet browsing. It can access HTML forms with GET or POST arguments, so with it you can log in or fill out any form.
The output from the server will be HTML. If you want to extract the links, you can use the XML package; it helps to get any data out of XML or HTML.
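A rough sketch of how the two packages fit together (the form URL and the query field name are made-up placeholders; the real ones have to be read from the page):

library(RCurl)
library(XML)

# submit the search form; use getForm() instead if the form uses GET
html <- postForm("http://example.com/search", query = "my search term")

# parse the returned HTML and pull out every link target
doc   <- htmlParse(html, asText = TRUE)
links <- xpathSApply(doc, "//a/@href")
print(links)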
But before you start, you have to find out what the search form on the webpage is (and what arguments should be used). The Firefox browser could be useful here. You need two add-ons: Live HTTP Headers and Firebug. With those add-ons you can inspect the webpage much more easily.
I know this does not solve your problem completely, but I cannot say more, since it depends on the particular situation and webpage structure. I believe the tools I have mentioned are quite enough to achieve what you want.
Best regards.
Whenever we make changes to the CSS, it generally takes 24 hours for those changes to show up on my site. I have tried clearing the server cache and the browser cache, but that doesn't help either. Is there any other way to make the CSS changes show up immediately after an update?
It happens in all browsers. When I check it in the browser, I can access my CSS file via two paths. For example, I store my CSS in a folder named "Cssfolder" and the file is called, say, 135.css.
So when I access the two paths, Cssfolder/135.css and cssfolder/135.css, one of them shows me the latest CSS whereas the other shows me the old CSS. Notice that the "C" is capital in one path and lowercase in the other.
Thanks.
I've found this to be a pretty common problem in a lot of my projects. I would suggest two things...
If it's just an app that you are working on, you can use the CSS Cachebuster during development.
Following the idea behind the Cachebuster, I have found that adding the timestamp of the CSS file as a query string on the CSS link will often help in telling the browser that the file is different... something like... whatever.css?12212009035543
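In markup, that amounts to something like this (the timestamp value is just an example):

<link rel="stylesheet" type="text/css" href="Cssfolder/135.css?12212009035543" />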
You might want to use a monitoring tool, like Live HTTP Headers for Firefox, to see the requests and responses to and from the server. This usually solves a lot of problems for me. Take a look at the "Expires" headers and conditional requests (like "If-Modified-Since"). That said, take a look at the server and client local times and timezones - it might be that they differ significantly, and conditional GET requests only "seem to be" handled correctly because of future or otherwise mangled timestamps.
You can force the current CSS to load directly from the server by appending a random, unique value to the URL, like http://example.com/Cssfolder/135.css?983274928374 and http://example.com/cssfolder/135.css?08973249827. There's no way this would ever get cached unless you use the same random value twice.
This way you learn where to look further for the solution to your problem: at the server, at the ISP/a proxy, or in your browser.
You really need to see whether this is server side or client side. If the server is still serving the old CSS then clearly you've got no chance on the client side.
I've occasionally seen cases where I've had to open the CSS file directly in the browser, and then the next time I went to the real page, it used that new CSS. Usually just hitting refresh does it.
Do you have any web caches like Akamai involved anywhere?
If you try to go to the CSS page from a computer which has never seen the old version, which version does it show?
EDIT: Changed answer to reflect edits in question.
I have dealt with this issue in the past, and ended up writing an HttpModule to deal with it.
It's pretty simple: it just finds all script/CSS links in the head tag (they now need to have runat="server") and appends the assembly version number to the link, in the same way Tim K describes. This way I'm sure my clients always fetch the newest CSS/scripts when my app is updated in production, and I never have to deal with this issue again.
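The core of the idea looks roughly like this (a simplified illustration, not the actual module; it only handles runat="server" link tags and assumes something calls it while the page renders, e.g. from PreRender):

using System.Reflection;
using System.Web.UI;
using System.Web.UI.HtmlControls;

public static class CssVersioner
{
    // append the assembly version to every runat="server" stylesheet link in the head
    public static void AppendVersion(Page page)
    {
        string version = Assembly.GetExecutingAssembly().GetName().Version.ToString();
        foreach (Control control in page.Header.Controls)
        {
            HtmlLink link = control as HtmlLink;
            if (link != null && !string.IsNullOrEmpty(link.Href))
                link.Href += (link.Href.Contains("?") ? "&" : "?") + "v=" + version;
        }
    }
}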
Maybe Internet Service Provider cache, as in this case?
I was perplexed by this issue until someone suggested Ctrl+F5. Worked for me :)
When I am developing and need to be sure that I am seeing changes as I work, I stick the CSS in the page itself, i.e.
<style type="text/css">
/* your css */
</style>
Or you could keep changing the name of the CSS file itself - not very useful in a production environment, but perhaps okay while developing.
I know it doesn't solve the problem, but for developing it is okay.
I have an ASP.Net application where, as a desired feature, users would like to be able to take a screenshot. While I know this can be simulated, it would be really great to have a way to take a URL (or the currently rendered page) and turn it into an image which can be stored on the server.
Is this crazy? Is there a way to do it? If so, any references?
I can tell you right now that there is no way to do it from inside the browser, nor should there be. Imagine that your page embeds GMail in an iframe. You could then steal a screenshot of the person's GMail inbox!
This could be made safe by having the browser "black out" all iframes and embeds that would violate cross-domain restrictions.
You could certainly write an extension to do this, but be aware of the security considerations outlined above.
Update: You can use a canvas utility function to get a screenshot of a page on the same origin as your code. There's even a lib to allow you to do this: http://experiments.hertzen.com/jsfeedback/
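A small sketch using the html2canvas library mentioned above (the script path and the upload endpoint are placeholders; recent versions of the library return a promise as shown):

<script src="html2canvas.min.js"></script>
<script>
  // render the visible page (same-origin content only) into a canvas,
  // then post it to the server as a PNG data URL
  html2canvas(document.body).then(function (canvas) {
    var dataUrl = canvas.toDataURL("image/png");
    fetch("/api/screenshots", { method: "POST", body: dataUrl });
  });
</script>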
You can find other possible answers here: Using HTML5/Canvas/JavaScript to take screenshots
Browsershots has an XML-RPC interface and available source code (in Python).
I used the free assembly UrlScreenshot.dll which you can download here.
Works nicely!
There is also WebSiteScreenShot but it's not free.
You could try a browser plugin like IE7 Pro for Internet Explorer, which allows you to save a screenshot of the current site to a file on disk. I'm sure there is a comparable plugin for Firefox out there as well.
If you want to do something like what you described, you need to call an external process that prints the IE output, as described here.
Why don't you take another approach?
If you have the need that users can view the same content over again, then it sounds like that is a business requirement for your application, and so you should be building it into your application.
Structure the URL so that when the same user (assuming you have sessions and the application shows different things to different users) visits the same URL, they always see same thing. They can then bookmark the URL locally, or you can even have an application feature that saves it in a user profile.
Part of this would mean making "clean urls", eg, site.com/view/whatever-information-needed-here.
If you are doing time-based data, where it changes as it gets older, there are probably a couple possible approaches.
If your data is not changing on a regular basis, then you could make the "current" page always be, e.g., site.com/view/2008-10-20 (add hour/minute/second as appropriate).
If it is refreshing and/or updating more regularly, have the "current" page as site.com/view, but allow specifying the exact time afterwards. In this case, you'd have to have a "link to this page" type function, which would link to the permanent URL with the full date/time. Look to Google Maps for inspiration here: if you scroll across a map, you can always click "link to here" and it will provide a link that includes the GPS coordinates, objects on the map, etc. In that case it's not a very friendly URL, but it does work quite well. :)