I have a process that adds tabs to Vivaldi (or any browser): one to an external URL and one to a local HTML file. I am able to identify the process IDs associated with each tab.
I want to be able to close the tabs. I have tried kill <id>. That clears the page of the local file, but the tab is still there and can be reloaded if I refresh the page. kill has no effect on the tab associated with the external URL.
Is there a way to do this?
Killing processes is the wrong approach here anyway: apart from causing abrupt termination rather than an orderly close, nothing guarantees that each tab lives in its own process. You may have both of them living in the same process, or sharing a process with other, unrelated tabs. Bottom line: it's not going to work, or at best it will work only sometimes and cause collateral damage. (Others have asked for such a way before.)
My suggestion would be a browser extension that uses native messaging. You could then ask it over the native messaging channel to close certain tabs for you, using the officially supported tabs API that the browser exposes to extensions.
(These links are to the Chrome extension docs, but Vivaldi is Chromium-based as well and supports the same APIs.)
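For illustration, a minimal sketch of the extension side could look like this (assuming a Manifest V3 background service worker; the host name com.example.tab_manager and the {action, url} message shape are placeholders, and the manifest would need the "nativeMessaging" and "tabs" permissions):

// Receive commands from the native messaging host and close matching tabs.
const port = chrome.runtime.connectNative('com.example.tab_manager');

port.onMessage.addListener((msg) => {
  if (msg.action === 'closeTab') {
    // Look up tabs whose URL matches what the native process asked us to close...
    chrome.tabs.query({ url: msg.url }, (tabs) => {
      // ...and close them through the supported tabs API.
      chrome.tabs.remove(tabs.map((t) => t.id));
    });
  }
});

The native side would then be a small program, registered as a native messaging host, that your existing process talks to.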
Alternative idea that works without an extension:
Tabs opened through the command line behave as if they were opened by a script of the same origin, insofar as the website in them is able to call window.close(). So depending on your use case, maybe you can arrange for the website in the tab to close the tab by itself.
If one of them is "external" in the sense that you can't control its contents, then you could instead have one tab open the other one through JavaScript, because the opener keeps a window handle and can close the second tab with close() as well.
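A rough sketch of that arrangement, assuming the page opened from the command line is a local "launcher" page you control (the URL below is a placeholder):

// The launcher tab opens the second tab itself and keeps the handle,
// so it can close both later without any extension.
const child = window.open('https://example.com/external-page'); // placeholder external URL

function closeBothTabs() {
  if (child && !child.closed) {
    child.close();   // allowed because this script opened that tab
  }
  window.close();    // allowed because this tab was opened via the command line
}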
If you need a way to tell the website running in your tab(s) that you want it to close itself, you could also do something like starting a local server on a random unused port and passing the port into the website via a URL parameter¹, then stopping the server when you want to close the tab. Inside your website you would regularly poll the local server URL using AJAX and close the tab when the request fails². (Remember to return CORS headers for this to work.)
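Inside the local page, the polling part could look roughly like this (a sketch only: the /alive endpoint and the 2-second interval are assumptions, fetch stands in for a classic AJAX call, and the local server has to answer with an Access-Control-Allow-Origin header):

// Read the port the manager process passed in, e.g. local.html?port=49213
const port = new URLSearchParams(location.search).get('port');

setInterval(() => {
  fetch(`http://127.0.0.1:${port}/alive`, { cache: 'no-store' })
    .then((res) => { if (!res.ok) window.close(); })  // server answered with an error
    .catch(() => window.close());                     // server gone: close this tab
}, 2000);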
This is just one of several possible ways, and yes it is a bit "hacky" - so I'm open to suggestions on how to improve on this idea.
Another alternative (which may or may not fit your use case): instead of opening a tab, you could open a separate popup window for each website by putting --app on the command line before the URL. Then you could find the corresponding window by looking for the newest window with a matching title, and close it programmatically (check out xdotool and xwininfo).
¹: Why not a fixed port number? Because you can't control whether something else is already listening on that port on the user's machine.
²: Why not the other way round, starting the server in order to close the tab? Because then you would have to wait to make sure the website noticed that you started the server, and if you stopped the server too early the tab would never close, so it's extra effort and an extra possible failure point, for example if CPU usage is high at that moment or Vivaldi has put the background tab to sleep. Additionally, with my method, killing your "manager" process also causes the tab to close instead of leaving it sticking around. And finally, you don't want another process to interfere with your communication by opening a server on the port you chose before you do, so it's best to open the server right away and not only once you want the tab to close.
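Tying both footnotes together, a minimal sketch of the "manager" side in Node.js (the vivaldi binary name and the file path are placeholders): the OS picks an unused port, the server is started before the tab is opened, and closing the tab later is just a matter of stopping the server.

const http = require('http');
const { spawn } = require('child_process');

// Keep-alive server; the CORS header lets the page poll it from its own origin.
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Access-Control-Allow-Origin': '*' });
  res.end('alive');
});

server.listen(0, '127.0.0.1', () => {             // port 0: let the OS pick a free port
  const port = server.address().port;
  spawn('vivaldi', [`file:///path/to/local.html?port=${port}`]); // placeholder path
});

// Later, when the tab should go away:
// server.close();  // the page's polling loop notices and calls window.close()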
Related
My server receives two GET requests when I use Ctrl+Shift+R in the browser, but when I use Ctrl+R my server only receives one GET request. Why does my server receive two requests when Shift is used? (I believe using the Shift key clears cookies or something.)
Google Chrome sends multiple requests to fetch a page, and that is, apparently, not a bug but a feature, and we as developers just have to deal with it.
As far as I could dig out in five minutes, Chrome does this simply to make surfing faster: if one connection gets lost, the second one takes over.
I guess if the website is well developed, its functionality won't break because of this, since handling duplicate requests is nothing new.
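If you want to observe this for yourself, a tiny Node.js sketch that just counts incoming requests makes the duplicates visible (port 8000 is arbitrary):

const http = require('http');
let count = 0;

http.createServer((req, res) => {
  count += 1;
  console.log(`request #${count}: ${req.method} ${req.url}`); // duplicates show up here
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok\n');
}).listen(8000); // then compare Ctrl+R vs. Ctrl+Shift+R against http://localhost:8000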
I have a website deployed in IIS (local network). If I use the IE browser on the IIS server itself, a page takes less than 10 seconds to load.
But if I access it from another PC (on the same local network, 1 Gbps), it takes 3-4 minutes. Could anyone give me some advice? Thanks.
Capture a FREB trace on IIS. Follow this article - https://blogs.msdn.microsoft.com/amol/2009/04/01/freb-failed-requests-tracing-in-iis-7/
Under step 2, instead of adding the status code 404.2 as mentioned in the article, add 200-999. As the next step, you will have to select trace providers; select everything here or leave it at the default.
Once the rule is enabled, try accessing the application from a client and reproduce the issue. Go back to the location where the FREB trace was saved. If there are multiple files, be patient and look for the one containing the requested page, and note the time it took.
Open this file in IE and click Compact View. On the right-hand side you will see the time the request spent in each module. Keep scrolling down until you see a jump in time; the module where the jump occurs is the culprit in your case.
I found the issue; it was related to the driver of my network adapter. It worked after I turned off checksums following this post: https://superuser.com/questions/961617/how-to-disable-checksums-on-ethernet-card-in-windows-10
I have a PHP/MySQL/JS-jQuery based web site that records finish times for racers and sends each time back to the server. The server inserts the finish time in the DB, calculates the finish place based on a handicapping formula, stores that, and sends the finish place back to the web page, where it is updated on screen.
It uses jQuery Ajax calls, so the page doesn't get reloaded at all.
Everything works fine if the data connection is good.
If the data connection is bad, my first version of this page would put up a message that the connection was bad.
Now I am trying to make it a bit smarter, so I have started with the HTML5 feature that tells the page whether the browser is online or offline (I realize this may not be the best way yet, but it works for concept testing).
When a new finish time is recorded (or updated) and we are offline, the JS just adds a class of notSent to the tag holding the finish time. The finish place, and all of the finish places that would normally come from the server, are greyed out, indicating the data is no longer valid (until it can communicate with the server).
When the browser finds itself back online, a simple jQuery each loop over every notSent element re-sends the AJAX requests, and if they all complete it processes the returned finish place information and displays it as up to date (sketched below).
It also disables all external links on the page while the browser is offline. This keeps the user from losing the data entry page by accidentally clicking a link that would give them a page-not-found error.
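A rough sketch of that resend loop; the /save_time.php endpoint, the data-racer attribute, and the updateFinishPlaces() helper are hypothetical names, not part of the actual site:

// Re-send everything tagged notSent once the browser reports it is back online.
$(window).on('online', function () {
  $('.notSent').each(function () {
    var $field = $(this);
    $.post('/save_time.php', { racerId: $field.data('racer'), time: $field.val() })
      .done(function (result) {
        $field.removeClass('notSent');   // this entry is synced again
        updateFinishPlaces(result);      // hypothetical helper that un-greys the places
      });
  });
});

// While offline, disable links so the data entry page can't be navigated away by accident.
$(window).on('offline', function () {
  $('a[href]').addClass('disabledLink'); // e.g. styled with pointer-events: none
});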
So my last issue is the browser's reload and close buttons: if the user clicks these while offline, they will lose the data entry screen and are out of luck until the connection comes back.
Can I disable these functions as well? A quick Stack Overflow search indicates it can be done, but most answers come with the old "you really shouldn't, and if you think you need to, you should rethink your design" warning.
So, rethinking my design, I started learning about:
HTML5 local storage (decided I don't need it, since my data is already stored in an input box)
The AppCache manifest, for controlling the cache of the page so that if it is reloaded in the browser while offline it would get the cached version. After much reading I came to the conclusion that this could work on a static page but not mine, where the data is updated all the time. Then I found that most browsers are deprecating it anyway.
Service Workers, which seem to be the likely future for controlling offline caching, but not all browsers support them, they are fairly cumbersome to learn, and they are still very new.
Now I am stuck, leaning towards preventing browser reloads and deferring learning Service Workers until there is more support and there are better examples for dynamic content pages like mine.
Bottom line: am I missing something here? Is there an easy solution?
I think the best option is to use PouchDB to sync between the client and the server, and use Background Sync to wake a Service Worker when you regain connectivity. If Service Worker support is not present in your browser, it can sync the next time your user opens the browser.
There is a similar example of deferred requests explained in the Service Worker Cookbook.
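For reference, the Background Sync part itself is small; a hedged sketch (browser support varies, and sendQueuedFinishTimes() is a hypothetical function that reads pending entries from PouchDB/IndexedDB and POSTs them to the server):

// In the page: ask the registered service worker to sync when connectivity returns.
navigator.serviceWorker.ready.then((reg) => reg.sync.register('send-finish-times'));

// In sw.js: the browser fires 'sync' once it is back online, even if the tab was closed.
self.addEventListener('sync', (event) => {
  if (event.tag === 'send-finish-times') {
    event.waitUntil(sendQueuedFinishTimes()); // hypothetical replay of the queued results
  }
});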
My script searches for different strings in different tabs of a browser. Is there a way to keep the browser open after test execution is over so that the results can be checked at a later time? Currently the browser closes automatically after 5 minutes even though I am not using driver.quit().
Selenium 2.33, Windows 7, Firefox and Chrome
I don't know whether you can leave the browser open, but may I suggest using the TakesScreenshot functionality of Selenium to save the state of the browser whenever you want. That could help you debug or check that the page is as expected.
If this solution doesn't help you, could you please explain exactly why you want to keep the browser open?
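As an illustration only, using the Node.js selenium-webdriver bindings (the Python and Java calls are analogous, and the page URL is a placeholder), capturing the final state before the session ends could look like:

const fs = require('fs');
const { Builder } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  await driver.get('https://example.com');            // placeholder page under test
  const png = await driver.takeScreenshot();           // base64-encoded PNG of the current state
  fs.writeFileSync('final-state.png', png, 'base64');  // inspect the result after the run
  await driver.quit();
})();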
I was able to find a fix, at least for my problem. I am using the RemoteWebDriver class, and there is an option to provide the timeout and browser_timeout parameters on the command line when the hub is started. Setting the value to 0 means the hub will wait indefinitely, which is exactly what I wanted for this scenario.
while True:
    pass  # keep the script alive so the browser stays open; stop with Ctrl+C
This puts your code into a loop, the browser stays responsive, and you ultimately kill the Python program with Ctrl+C. I just did this for a script that logs on to an application with encrypted credentials stored locally; it keeps plain-text user IDs and passwords out of the scripting code.
I have a menu page. If a user selects a menu item, it opens a new IE window using JavaScript, so the user can open different parts of the application in multiple IE windows. These windows share the same session.
My issue is that these pages seem to be processed synchronously: if one of the child windows is waiting for an action to finish, no other request from any other child window is processed. Is it because I am using session variables?
Update: this only happens to windows that have the same parent. If I have IE child windows opened from different parent windows, the issue is not there.
Yes, if each page uses the session, then ASP.NET will serialize activity against it. If one or more of these sub-pages only needs read-only access, mark that in its @Page directive (e.g. EnableSessionState="ReadOnly"), or turn session state off completely if the session information isn't used.
Generally, it's a bad plan to have long-running activities pending on the server, and as you've found, this is especially true if they're using the session.
Edit
The last FAQ here also describes this serialization.
Edit 2
In response to comment re: closing the child window:
It will eventually process other requests, once the server-side process finishes whatever request it's been working on. Closing a child window does not abort the request on the server side. The best you can hope for is that the long-running request checks IsClientConnected every so often and aborts its processing if it's no longer relevant.
From the server perspective, you have absolutely no way to know how many browser windows the client has open.
So, no, they are not synchronous.