LoadRunner Test Scenario Controller won't start and throws: "Failed to start/stop Service Virtualization"

I have a LoadRunner Test Scenario. After opening it in the LoadRunner Controller, I click the "Start Scenario" button. The scenario should run for 2 hours, but it stops after 1 minute with the following errors:
Failed to stop Service Virtualization.
Failed to start Service Virtualization.

It seems that you have unwittingly activated an integration with HP Service Virtualization (or SV for short), without having SV installed on the same machine. In order to remove it, open the SV configuration dialog and uncheck all the entries.

I solved the problem after a lot of effort.
The problem is that when you have a test scenario that worked before and you remove some scripts (groups) from it, the scenario stops working. What I did was create a new scenario from scratch with the same test scripts, and voilà, it worked. I hope everyone pays attention to this in the future.

Related

Google Cloud Composer (Apache Airflow) cannot access log files

I'm running a DAG in Google Cloud Composer (hosted Airflow) which runs fine in Airflow locally. All it does is print "Hello World". However, when I run it through Cloud Composer I receive the error:
*** Log file does not exist: /home/airflow/gcs/logs/matts_custom_dag/main_test/2020-04-20T23:46:53.652833+00:00/2.log
*** Fetching from: http://airflow-worker-d775d7cdd-tmzj9:8793/log/matts_custom_dag/main_test/2020-04-20T23:46:53.652833+00:00/2.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-d775d7cdd-tmzj9', port=8793): Max retries exceeded with url: /log/matts_custom_dag/main_test/2020-04-20T23:46:53.652833+00:00/2.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8825920160>: Failed to establish a new connection: [Errno -2] Name or service not known',))
I've also tried making the DAG add data into a database, and it actually succeeds 50% of the time. However, it always returns this error message (and no other print statements or logs). Any help on why this might be happening would be much appreciated.
We also faced the same issue, then raised a support ticket with GCP and got the following reply.
The message is related to the latency of syncing logs from Airflow workers to WebServer, it takes at least some minutes (depending on the number of objects and their size)
The total log size seems not large but it’s enough to noticeably slow down synchronization, hence, we recommend cleanup/archive the logs
Basically we recommend relying on Stackdriver logs instead, because of latency due to the design of this sync
I hope this will help you solve the problem.
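If you want to read the task logs from Stackdriver directly instead of waiting for the sync, here is a minimal sketch using the google-cloud-logging client; the project ID is a placeholder, and the label filter is an assumption about how Cloud Composer labels its worker logs:

from google.cloud import logging as cloud_logging

# Assumed project ID; use the project that hosts your Composer environment.
client = cloud_logging.Client(project="my-gcp-project")

# Filter for the worker logs of one DAG; "labels.workflow" is how Composer
# tags Airflow task logs, but verify against your own log entries.
log_filter = (
    'resource.type="cloud_composer_environment" '
    'AND labels.workflow="matts_custom_dag"'
)

for entry in client.list_entries(filter_=log_filter, order_by=cloud_logging.DESCENDING):
    print(entry.timestamp, entry.payload)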
I have the same problem after upgrading Google Composer from 1.10.3 to 1.10.6.
I can see in my logs that Airflow is trying to get the logs from a bucket whose name ends with -tenant, while the bucket in my account ends with -bucket.
In the configuration, I can see something weird too.
## airflow.cfg
[core]
remote_base_log_folder = gs://us-east1-dada-airflow-xxxxx-bucket/logs
## also in the running configuration says
core remote_base_log_folder gs://us-east1-dada-airflow-xxxxx-tenant/logs env var
I wrote to google support and they said the team is working on a fix.
EDIT:
I've been accessing my logs with gsutil, replacing the bucket name suffix with -bucket:
gsutil cat gs://us-east1-dada-airflow-xxxxx-bucket/logs/...../5.logs
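The same workaround can be scripted with the google-cloud-storage client. A minimal sketch, where the object path is a hypothetical placeholder for a real DAG run:

from google.cloud import storage

client = storage.Client()

# The logs live in the "-bucket" bucket even though the running
# configuration points at "-tenant".
bucket = client.bucket("us-east1-dada-airflow-xxxxx-bucket")

# Hypothetical path; substitute your real dag/task/run segments.
blob = bucket.blob("logs/<dag_id>/<task_id>/<execution_date>/5.log")
print(blob.download_as_string().decode("utf-8"))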
I have faced the same situation on multiple occasions.
As soon as a job finished, looking at the log in the Airflow web UI would give me the same error, although when I checked the same logs in the UI after a minute or two, I could see them properly.
As per the above answers, it's a sync issue between the webserver and the worker node.
In general, the issue described here is sporadic.
In certain situations, what can help is setting default_task_retries to a value that allows a task to be retried at least once, as sketched below.
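For illustration, here is a minimal sketch of the equivalent per-DAG setting, reusing the DAG and task names from the log paths above; the schedule, start date, and retry delay are assumptions:

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

# Retry each task once, a minute apart, so a transient sync or scheduling
# hiccup does not fail the whole run.
default_args = {
    "retries": 1,
    "retry_delay": timedelta(minutes=1),
}

with DAG(
    dag_id="matts_custom_dag",
    start_date=datetime(2020, 4, 20),
    schedule_interval=None,
    default_args=default_args,
) as dag:
    hello = BashOperator(task_id="main_test", bash_command='echo "Hello World"')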
This issue has been resolved since at least Airflow version 1.10.10+composer.

Could not create internal topics - Stream-thread exception

I am trying to execute a simple Wordcount stream application but I face the error "Could not create internal topics - Stream-thread exception"
I have seen a similar thread but that seems to be more of a network issue.
There is no security enabled on the Kafka broker.
Only one broker is configured, and the issue still occurs.
Can someone let me know how to fix this?
Clean up your temporary Kafka topics.
Run the --list command on Kafka to see all the topics starting with your application name and ending with -changelog or -repartition, and manually delete them.
This one worked for me.
Also, check your delete.topic.enable setting to make sure deletion actually happens; it was not enabled by default until 1.0.0 - see https://issues.apache.org/jira/browse/KAFKA-5384
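For illustration, a minimal sketch of that cleanup with the confluent-kafka Python client; the broker address and the application id prefix are assumptions:

from confluent_kafka.admin import AdminClient

# Assumed broker address and Streams application.id; adjust to your setup.
admin = AdminClient({"bootstrap.servers": "localhost:9092"})
app_id = "wordcount"

# Internal Streams topics are named <application.id>-...-changelog/-repartition.
topics = admin.list_topics(timeout=10).topics
internal = [
    t for t in topics
    if t.startswith(app_id) and (t.endswith("-changelog") or t.endswith("-repartition"))
]

# delete_topics returns one future per topic; wait on each to confirm.
if internal:
    for topic, future in admin.delete_topics(internal, operation_timeout=30).items():
        try:
            future.result()
            print("deleted", topic)
        except Exception as exc:
            print("failed to delete", topic, exc)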
I connected to Kafka using Kafka Tool and deleted them manually.

Retail Transaction Service Cache

I wrote some methods in the class RetailTransactionServiceEx and called them from the Application.TransactionService.InvokeExtension method at the POS. Then I needed to make some changes to one of my methods in RetailTransactionServiceEx, but the changes were not reflected. For troubleshooting purposes I renamed my method, but the POS threw an error saying the method does not exist in the class. I generated incremental CIL and full CIL, restarted the AOS service, and restarted my POS, but it still said the method does not exist.
Then I went home, and when I came back to work the next day I found my method working from the POS. So the question is: why did it take a whole night? I did not shut down my computer. Is there some sort of cache, and how do we clear it so that changes to Transaction Service classes are reflected quickly?
You can reset IIS.
Open Windows PowerShell and execute the "iisreset" command.
It worked for me.
Though I don't know what happened overnight, you can check the Event Viewer.

Error when running TcmReindex.exe

I am currently trying to get search working in my Tridion 2011 installation. I read in another article that I should run the TcmReIndex.exe tool in the Tridion/bin folder to re-index all my sites. So I tried this, and it failed with a message box giving the following details:
Unable to get list of Publication items.
Unable to Intialize TDSE object.
The wait operation timed out
Connection Timeout Expired. The timeout period elapsed while attempting to consume the pre-login handshake acknowledgement. This could be because the pre-login handshake failed or the server was unable to respond back in time. The duration spent while attempting to connect to this server was - [Pre-Login] initialization=21054; handshake=35;
The wait operation timed out
A database error occurred while executing Stored Procedure "EDA_TRUSTEES_GETTRUSTEEETOKEN"
I have four fairly large publications (100 000+ items in total) which I am trying to index.
Any ideas?
Whenever I get "Unable to Intialize TDSE object." errors, I typically write a small test script using VBScript, and try running it on the CMS server. Whilst this does not directly solve the problem, it often gives some insight into the issue by logging information in the event viewer. Try creating a test.vbs file as follows and running it:
' Create the Tridion TDSE COM object
Set tdse = CreateObject("TDS.TDSE")
' Initialize the session; failures here are usually logged in the event viewer
tdse.initialize()
' Display the current user's description to confirm the connection works
msgbox(tdse.User.Description)
' Release the COM object
Set tdse = Nothing
If it throws any errors, please let me know, and it may help us solve the problem. If it gives you a popup with your user description, then I am completely barking up the wrong tree.
I haven't come to anything conclusive, but it seems my issue may have been a temporary one, as it just started working. I did increase all timeouts in Tridion MMC > Timeout Settings to 100 times their original values, but I suspect this wasn't the issue, since when it works the connection is almost instant.
If anyone else has this issue:
- Restart the computer the Content Manager is installed on, then try again.
- Wait an hour or two, then try again.
- Increase timeouts, then try again.
I've run the process a few more times and it seems to be working correctly.

Can't terminate BizTalk instances in isolated adapter

Can anyone explain how I can remove service instances?
- I've got a few which the BizTalk console shows as "Running"
- they are all in the Isolated Adapter
- tried doing a Stop with Full Stop option ...
- tried the Terminate Instance option ...
- even tried deleting the BizTalk application
But they're still there??
My bad, the application delete did remove them; I must have forgotten to refresh.
Event log has the errors ...
A request-response for the "HTTP" adapter at receive location "/foanite/BTSHTTPReceive.dll" has timed out before a response could be delivered.
but I still don't understand why the terminate wouldn't work
If you are running a receive location in an isolated host, you normally need to perform an iisreset to be able to delete the instances.
If the iisreset doesn't help (it often doesn't), use the BizTalk Health Monitor:
- Select "Maintenance" from the left-hand tree view
- For Task Type, select "Delete" and "Terminate Single Instance (Hard Termination)"
- Paste the instance ID taken from the BizTalk admin console
- Click "Execute Task"
