I'm currently trying to get my Protractor tests working with BrowserStack. My (automated) tests run against a staging server that can only be accessed from within a VPN. I am using BrowserStackLocal to access the staging server without a problem.
My question: is it possible to direct ONLY the staging server URLs through BrowserStackLocal? For example, during my test I go to PayPal Sandbox to purchase an item. I would like the PayPal connection to be made directly from the BrowserStack remote machine.
The "-only" parameter restricts the Local Testing access to specified local servers and/or folders. Consider the following example:
./BrowserStackLocal ACCESS_KEY -only localhost,443,1
In this case, only the traffic for the domain "localhost" will be directed to your private server, and all other URLs/domains will be accessed directly through our remote VMs.
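Applied to the setup in the question, that would look something like this (a sketch; staging.example.com is a placeholder for the actual staging host). As far as I can tell, the three comma-separated values are the host name, the port, and an SSL flag (1 for HTTPS, 0 for HTTP):
./BrowserStackLocal ACCESS_KEY -only staging.example.com,443,1
With that in place, only staging traffic goes through the tunnel, while the PayPal Sandbox requests resolve directly from the BrowserStack VM.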
More details on all the Local Testing modifiers can be found here.
Seems like BrowserStackLocal's -only flag does the trick. Refer to this article for more info. Although I wouldn't mind someone explaining what the parameters are.
I tried to record a site with JMeter. The site uses Firebase for data storage, but the recording fails to access Firebase and I cannot log into the site while recording. Is there any way to access Firebase while recording a load test in JMeter? I imported the JMeter certificate, but the problem is still there. I also tried the Chrome extension, but it didn't give the expected output either.
Most probably it's due to incorrect JMeter configuration for recording: you need to import JMeter's certificate into your browser. The file is called ApacheJMeterTemporaryRootCA.crt; JMeter generates it under its "bin" folder when you start the HTTP(S) Test Script Recorder.
See HTTPS recording and certificates documentation chapter for more details.
Going forward, consider looking at the View Results Tree listener output and the jmeter.log file; they should provide a sufficient amount of information to get to the bottom of the issue. If you cannot interpret what you see there yourself, add at least the essential parts of the response/log to your question.
Also be aware of an alternative "non-invasive" way of recording a JMeter test: the JMeter Chrome Extension. In that case you won't have to worry about proxies and certificates, and you should be able to record whatever HTTP(S) traffic your browser generates as normal.
I know that Next.js can do SSR.
I have questions about production.
In my experience (without SSR), the frontend builds static files and then hands the folder to the backend to integrate, so there is only one server.
I want to know what changes if we want to use SSR with Next.js (not a static site):
Do we need to host two servers? One for the backend (Node.js, Java, …) and another for the frontend (Next.js)?
If I use Node.js as the backend language, can I write all the APIs in Next.js? (I mean the frontend and backend code both use Next.js, so that there is only one server.)
If the answer to question one is yes: I see the documentation uses next start to host the server; is it strong enough to serve many users?
Do we need to host two servers? One for the backend (Node.js, Java, …) and another for the frontend (Next.js)?
In most cases you would have a single server producing the SSR as well as rendering the markup required for the client. The associated JavaScript files that only the browser needs can be served from an asset-serving server (e.g. an S3 bucket). You would front the whole thing with a CDN, so your server would not receive all public requests.
If I use Node.js as the backend language, can I write all the APIs in Next.js? (I mean the frontend and backend code both use Next.js, so that there is only one server.)
Yes, for simple use cases you can check out the API routes that Next.js ships with: https://nextjs.org/docs/api-routes/introduction
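For illustration, a minimal API route might look like this (a sketch; the file name and response payload are arbitrary). Any file under pages/api/ is served as an endpoint by the same Next.js server that renders your pages, so no second backend is required:
// pages/api/hello.ts
import type { NextApiRequest, NextApiResponse } from 'next'

// Every file under pages/api/ becomes an HTTP endpoint served by the
// same Next.js server that renders the pages.
export default function handler(req: NextApiRequest, res: NextApiResponse) {
  res.status(200).json({ message: 'Hello from the same server' })
}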
If the answer to question one is yes: I see the documentation uses next start to host the server; is it strong enough to serve many users?
You would use next build and then next start. With its latest optimizations Next.js adds static site generation (SSG), sorry, one more confusing term, but this lets your backend Node.js app receive far fewer requests and be smart about serving repetitive ones. However, even with these abilities you should front the whole thing with a CDN to ensure high availability and low operating costs.
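As a small illustration of the SSG side (a sketch; the page path and revalidate interval are arbitrary, and revalidate needs a reasonably recent Next.js), a page like this is rendered to HTML at next build time, so next start mostly serves prebuilt output:
// pages/products.tsx
import type { GetStaticProps } from 'next'

type Props = { builtAt: string }

// getStaticProps runs at build time (and, with revalidate, again in the
// background at most once per interval), not on every request.
export const getStaticProps: GetStaticProps<Props> = async () => ({
  props: { builtAt: new Date().toISOString() },
  revalidate: 60, // optional: re-generate at most once a minute
})

export default function Products({ builtAt }: Props) {
  return <p>Page generated at {builtAt}</p>
}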
I have two servers: server A which is a web server running an ASP.NET application on IIS, and server B which is an SSIS server with a network share that contains a configuration file.
I need server A to be able to write to the configuration file on server B. This seems fairly straightforward; however, I keep getting the error: "Access to the path \\ServerB\files\config.xml is denied." To make this perfectly clear, here is what I have done for testing:
I have set "Everyone" to have full control of the folder.
I have set "Everyone" to have read/write access on the share.
I have set "Everyone" to have full control of the file.
I have verified that the file is not read-only.
I realize this isn't a good solution, I am just doing this for debugging so please don't comment to tell me not to do this.
Anyhow, even with these things set, I still get "Access is denied." I have also explicitly given access to a number of users, including Network Service, IUSR, Anonymous Logon, and IUSRS group, and it has not fixed the problem.
The application pool on Server A is using ApplicationPoolIdentity. I have Googled and Overflowed and found suggestions to give permissions to things like IIS AppPool\{Application Pool} or {MACHINE}\ASPNET, but I cannot access these identities from Server B, so I do not understand how this would be possible.
Finally, the perplexing thing is that developers running solutions on their local machines are able to access the file. So it is something to do with how Server A is configured; however, I cannot figure out what.
Edit: Truly wacky stuff going on here. I have figured out how to enable auditing and get the requests logged in the event viewer on Server B. When the developer runs the process from his local, I can see all the requests logged on Server B. Eg: "A network share object was checked to see whether client can be granted desired access."
However! When attempting to connect from Server A, nothing gets logged. Nothing is there at all. Server A throws an "access to the path is denied" error, but I don't even see the request logged on Server B. :(
It has been a long time, but maybe this will be useful for someone.
Try using the NetworkConnection class to access the shared folder. You'll need to specify the path and credentials.
More reference about the class here:
https://gist.github.com/AlanBarber/92db36339a129b94b7dd#file-networkconnection-cs-L15
You seem to be using only local identities, and I think that will not work on a network share no matter what permissions you grant. You need to create a domain user and run the app pool as that user, and you should be good to go.
Also, please verify whether the path is actually reachable. Sometimes you get the "access denied" message even though the path was simply not reachable.
You may need to edit settings in the Group Policy Editor on the machine where the share is hosted.
Open the Group Policy Editor via Start → Run → gpedit.msc. Set the following under Local Computer Policy → Computer Configuration → Windows Settings → Security Settings → Local Policies → Security Options:
Network access: Shares that can be accessed anonymously - Enter the name of the network share folder (files in your question above) in the text field. (Don't include the hostname.)
Network access: Let Everyone permissions apply to anonymous users - Set to Enabled. (For me, this was necessary for write access to be granted.)
When done making changes in gpedit.msc, from an admin-elevated command prompt, run gpupdate /force to apply the Group Policy changes.
Obviously, you should consider the security implications in your specific situation before making these changes.
You have to modify the ApplicationPoolIdentity according to this article, http://blogs.msdn.com/b/vijaysk/archive/2009/02/13/goodbye-network-service.aspx, as it works the same way as the NetworkService identity.
Under system.web, in the identity tag, set impersonate="true" and also set the userName and password for the production server.
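For reference, that looks roughly like this in web.config (a sketch; the domain account and password are placeholders, and keep in mind that plain-text credentials in web.config carry their own security implications):
<system.web>
  <!-- Requests now run as this domain account instead of the app pool identity;
       MYDOMAIN\shareuser and the password are placeholders -->
  <identity impersonate="true" userName="MYDOMAIN\shareuser" password="secret" />
</system.web>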
I'm searching for a way to change the way Meteor loads the Mongo database. Right now, I know I can set an environment variable when I launch Meteor (or export it), but I was hoping there was a way to do this in code. This way, I could dynamically connect to different instances based on conditions.
An example test case would be for the code to parse the URL 'testxx.site.com', look up a URL based on the 'testxx' subdomain, and then connect to that particular instance.
I've tried setting the process.env.MONGO_URL in the server code, but when things execute on the client, it's not picking up the new values.
Any help would be greatly appreciated.
Meteor connects to Mongo right when it starts (using this code), so any changes to process.env.MONGO_URL won't affect the database connection.
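So the only supported place to point Meteor at a different database is the environment at launch time, e.g. (placeholder credentials and host):
MONGO_URL=mongodb://user:pass@mongo.example.com:27017/mydb meteor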
It sounds like you are trying to run one Meteor server on several domains and have it connect to several databases at the same time depending on the client's request. This might be possible with traditional server-side scripting languages, but it's not possible with Meteor because the server and database are pretty tightly tied together, and the server basically attaches to one main database when it starts up.
The *.meteor.com hosting is doing something similar to this right now, and in the future Meteor's Galaxy commercial product will allow you to do this - all by starting up separate Meteor servers per subdomain.
Is it possible/supported to have a CRM 2011 host work with two different host names? We have tried this, but not everything works perfectly.
Example:
A server with server name "app1".
An AD/DNS entry pointing the host name "crm" to "app1".
When users navigate to "crm" the requests work 99% of the time, but a few internal JavaScript calls in CRM target the original "app1" server, for example a request from the normal edit forms that retrieves the roles. The JavaScript variable called SERVER_NAME always has the value "app1", no matter the request URL. A cross-server warning might appear or the functionality may just silently fail.
This also happens when accessing the FQDN of the server, so "app1.mydomain.com" still produces the same result and failing/warning functionality.
I imagine this would be a similar problem when dealing with load balanced installations? How do they handle this? I.e. they target host name X and can get host name Y or Z.
Edit: I've understood that this may be called "domain alias" or "host alias" since it is an active directory entry.
You cannot have multiple hostnames for the CRM system.
You have to specify an address that is used by the CRM system itself, for scripts like the ones you have seen, but it is also used for the Discovery mechanisms.
Multiple bindings in IIS are not supported due to a limitation with the web service endpoint.
Open the deployment manager on the CRM server.
Go to Actions -> Properties -> Addresses
Adjust the stated addresses to the one which you use to access the CRM system. These settings are important for the CRM to define its "identity".
If you have configured IFD, you configure an additional external identity.
By the way, depending on your environment it might be necessary to set an SPN. See http://blogs.msdn.com/b/webtopics/archive/2009/01/19/service-principal-name-spn-checklist-for-kerberos-authentication-with-iis-7-0.aspx
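Registering the alias would look roughly like this (a sketch; the host names and service account are placeholders, and if the app pool runs as Network Service the SPN belongs on the machine account instead):
setspn -a HTTP/crm MYDOMAIN\crmserviceaccount
setspn -a HTTP/crm.mydomain.com MYDOMAIN\crmserviceaccount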
Regarding NLB: http://technet.microsoft.com/en-us/library/hh699803.aspx
Daniel Cai seems to have a good way around the problem of getServerUrl returning the value held in Deployment Manager rather than the calling page URL when different names are in use. He has come up with a replacement function:
CRM 2011: Get the Right Server URL in Your CRM Client
This looks like it works for all scenarios, with the Outlook offline client as well as the online browser.
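The core idea, as I understand it (a sketch, not Daniel Cai's actual code), is to build the server URL from the page the user actually loaded instead of trusting the Deployment Manager value:
// Sketch: derive the server URL from the current window location.
function getClientSideServerUrl(): string {
    // window.location keeps the host name the user actually browsed to ("crm"),
    // where context.getServerUrl() returns the configured "app1" address.
    // On-premise deployments may also need the organization name appended.
    return window.location.protocol + "//" + window.location.host;
}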