This is sort of a part two of the question here:
VisualStudio cloud-based load test socket exception 100% of requests - webtest alone works OK
In short, my load test fails all requests with a strange SocketException, but only when it is run as a 'cloud-based' test.
So, my load test somehow (I don't know how) is no longer a 'Team Services cloud-based' test, but a regular one, and I see that it runs fine.
However, when I add a new cloud-based test and then click on the 'location' setting in my old test and pick anything (or nothing, it doesn't matter), VS connects to my Team Services account, the test becomes cloud-based again, and it is launched in a different-looking tab, with 100% of requests failing.
So, the question is: how do I turn off/toggle the 'cloud-based' mode for an existing load test?
Open the .testsettings file in your solution/project; you should be able to change the test run location from VSTS to your local computer.
I'm trying to get log output (Console.WriteLine(..)) into my Docker logs, but to no avail.
I've tried:
- Console.WriteLine(..)
- Trace.WriteLine(..)
- Flushing the console, flushing the trace.
I can see these outputs in a VS output window when I'm debugging, so they go somewhere.
I'm on a Windows container, using microsoft/aspnet:4.7.1-windowsservercore-1709 and .NET 4.7.
These are the logs I get on container start:
docker logs -f exportapi
ERROR ( message:Cannot find requested collection element. )
Applied configuration changes to section "system.applicationHost/applicationPools" for "MACHINE/WEBROOT/APPHOST" at configuration commit path "MACHINE/WEBROOT/APPHOST"
You have many good lateral options, such as self-contained/server-contained executables (e.g. .NET Core on microsoft/dotnet:runtime, where the dotnet new web scaffold proxies Console.WriteLine to the container's STDOUT by default). Zero-configuration STDOUT logging has never been a common approach on IIS, but these modern options adopt it as a best practice (logging should be a transparent backing service).
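For illustration, a minimal sketch of that lateral option, assuming a .NET Core app already published to ./out (the image tag, paths, and YourApp.dll name are assumptions):

# sketch: .NET Core app whose Console.WriteLine goes straight to docker logs
FROM microsoft/dotnet:runtime
WORKDIR /app
COPY ./out .
ENTRYPOINT ["dotnet", "YourApp.dll"]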
If you want or need the chain of three programs/assemblies to get your web service up (ServiceMonitor, W3SVC, and finally your assembly), then you need something like this: https://blog.sixeyed.com/relay-iis-log-entries-to-read-them-in-docker/
Overriding the entrypoint to tail more logs than the image does by default is unfortunately a common hack (not just in Microsoft land). So, in your case, I believe you need at least a trace listener configured to emit Trace.WriteLine, and then the above approach to relay it: https://learn.microsoft.com/en-us/dotnet/framework/debug-trace-profile/how-to-create-and-initialize-trace-listeners
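A minimal sketch of that trace listener configuration in your web.config/app.config (the listener name is arbitrary; ConsoleTraceListener is the built-in listener that writes to the console/STDOUT):

<configuration>
  <system.diagnostics>
    <!-- autoflush so Trace.WriteLine output is not buffered -->
    <trace autoflush="true">
      <listeners>
        <!-- sends Trace output to the console/STDOUT -->
        <add name="console" type="System.Diagnostics.ConsoleTraceListener" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>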
I'm currently trying out Amazon Device Farm and was able to get a dummy app to work.
However, when I try the actual app I want to get working on Amazon Device Farm, I'm unable to do so. I'm able to upload the .ipa file, and to zip up and upload the py.test/Appium tests with their dependencies; however, the tests fail.
What I think might be happening is that ADF is not recognizing some of the desired capabilities to auto-dismiss the alerts for notifications and GPS coordinates.
My setup is very similar to the one I used with the dummy app in my initial tests (these worked on Amazon Device Farm):
https://github.com/dlai0001/appium-spike-running-tests-oncloud
The only thing that is really different is that I'm using a real production app, where two alert windows pop up upon launch. In Appium, the test will crash if I don't have autoDismissAlerts or autoAcceptAlerts enabled in the desired capabilities.
Harness 00:00.0 1295 Info Starting 00001 with device c00e8ab68437161b894395e438ba8935a672bac0
Harness 00:00.0 1295 Info Using test content version 0.1.0
Harness 00:00.1 1295 Info Using image version ami-778b7c17
I work for the Amazon Device Farm team.
It appears that you are relying on desired capabilities to dismiss the alert window. Currently, Device Farm supports a very limited set of desired capabilities, namely the app name, package name, and OS version. These are available to the application without the user having to set them.
Appium runs with autoAcceptAlerts=true on Device Farm. This should handle the alert windows, if any, unless it is a system pop-up, which autoAcceptAlerts cannot handle. You should check locally whether your tests can handle the alert windows with autoAcceptAlerts=true and Appium in pre-launch mode.
If they can, Device Farm should behave the same way.
Sometimes the alert window appears before the Appium session is established, in which case adding a delay can help.
When you test locally, please use Appium version 1.4.16, since this is the version being used on Device Farm right now.
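For the local check, a sketch along these lines (the device, OS version, app path, and server URL are assumptions for illustration):

# sketch: local Appium (1.4.16) session mirroring Device Farm's alert handling
import time
from appium import webdriver

caps = {
    "platformName": "iOS",
    "platformVersion": "9.3",       # assumption: match your test device
    "deviceName": "iPhone 6",       # assumption
    "app": "/path/to/YourApp.ipa",  # assumption
    "autoAcceptAlerts": True,       # what Device Farm sets for you
}

driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", caps)
time.sleep(5)  # give the launch alerts time to appear before interacting
# ... run your checks here ...
driver.quit()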
I have a simple Meteor application. I would like to run some code periodically on the server side: I need to poll a remote site for XML orders.
It would look something like this (CoffeeScript):
unless process.env.ORDERS_NO_FETCH
  Meteor.setInterval ->
    checkForOrder()
  , 600000
I am using Velocity to test. I do not want this code to run in the mirrored instance that runs the tests (otherwise it will poach my XML orders and I won't see them in the real instance). So, to that end, I would like to know how to tell whether server code is running in the testing environment, so that I can avoid setting up the periodic checks.
EDIT: I realised that I missed faking one of my server calls in the tests, which is why my test code was grabbing one of the XML orders from the real server. So, this might not be an issue. I am not sure yet how the tests are run for the server code, and whether the server code runs in a mirror (or is that a client-only concept?).
The server and the client both run in a mirror when using mocha/jasmine integration tests.
If you want to know if you are in a mirror, you can use:
Meteor.call('velocity/isMirror', function (err, isMirror) {
  if (isMirror) {
    // do something
  }
});
Also, on the server you can use:
process.env.IS_MIRROR
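Applied to the polling code from the question, a minimal sketch (the same CoffeeScript, just guarded by the mirror flag):

# skip the order polling when running inside a Velocity mirror
unless process.env.ORDERS_NO_FETCH or process.env.IS_MIRROR
  Meteor.setInterval ->
    checkForOrder()
  , 600000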
You've already got the fake working, and that is the right approach.
This is pretty cutting-edge, as 0.7.2 was just released today, but I thought I'd ask in case somebody can shed some light.
I didn't report this to MDG because I can't reproduce this on my dev environment and thus I wouldn't have a recipe to give them.
I've set up oplog tailing in my production environment, which was deployed exactly as my dev environment was, except it's on a remote server.
The server runs Ubuntu + Node 0.10.26, and I'm running the bundled version of the app with forever. Mongo reports its replSet is in working order.
The problem is that some collection updates made in server code don't make it to the client. This is the workflow the code is following:
1. Server publishes the collection using a very simple user_id: this.userId selector.
2. Client subscribes.
3. Client calls a server method using Meteor.call().
4. Client starts observing a query on that collection using a specific _id: "something" selector. It will echo on "changed".
5. The server method calls .update() on the document matching that "something" _id, after doing some work.
If I run the app without oplog tailing (by not setting MONGO_OPLOG_URL), the above workflow works every time. However, if I run it with oplog tailing, the client doesn't echo any changes and if I query the collection directly from the JS console on the browser I don't see the updated version of the collection.
To add to the mystery, if I go into the mongo console and update the document manually, I see the change on the client immediately. Or, if I refresh the browser after the Meteor.call() and then query the collection manually from the JS console, the changes are there, as I'd expect.
As mentioned before, if I run the app in my dev environment with oplog tailing (verified using the facts package), it all works as expected and I can't reproduce the issue. The only difference here would be the latency between client and server (my dev environment is on my LAN).
Maybe if somebody is running into something similar, we can isolate the issue and make it reproducible.
I have a question on how the build queue is configured in CC.NET.
I believe we have an issue: when trying to "force" build a scheduled project, the server tries to run several builds at the same time, and all of them fail except the one that started first.
We need to get to a state where, regardless of how many builds are scheduled or how many we "force" start at about the same time, all build requests are placed into a build queue and executed one after another in the order they were placed, with no extra requests generated.
A "Build Failed" email is sent even though the build was actually successful.
In short, the erroneous email is likely due to an error in the build server's scheduler/queue: it tries to run two builds instead of one when asked for a "forced" build, and as a result the first one succeeds and the second one fails.
How do I correct/resolve this issue?
Thanks
Nilesh
To specify your project's queue, you need to set the queue property like this:
<project name="MyFirstProject" queue="Q1" queuePriority="1">
The default is one queue per project. If you manually set the same queue (for example Q1) for all your projects, then you will have a single queue.
As for queuePriority, the projects (not yet started) in the queue are ordered by queuePriority; projects with a low queuePriority start first.
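For example, a sketch of a ccnet.config where two projects share one serial queue (the project names are illustrative; source control, triggers, and tasks are omitted):

<cruisecontrol>
  <!-- both projects share queue Q1, so their builds run one at a time -->
  <project name="MyFirstProject" queue="Q1" queuePriority="1">
    <!-- source control, triggers, tasks ... -->
  </project>
  <project name="MySecondProject" queue="Q1" queuePriority="2">
    <!-- ... -->
  </project>
</cruisecontrol>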
It's all described in the CC.NET documentation, which is currently offline due to a problem at SourceForge.