Amazon Alexa no output from Lambda Function - alexa-skills-kit

I am starting with Alexa. I have set up a basic Lambda function and a custom skill.
When I test the Lambda function, it produces the correct output,
but when I test the skill from the Test Skill section, it does not receive any output from the Lambda function.
What could be the reason?

Related

How to run "only" a single test in SpecFlow while running tests normally (ignore all tests except one)

is there "only" (like in mocha) in specflow to run only this tests and ignore all the others?
I have thousands of tests so don't want to ignore 1 by 1.
I know that I can run only 1 manually, but I mean while running all the tests, to use some API like "only" to run only single test
You could implement a BeforeScenario hook that fails every test other than the selected one. The selected test could be marked with a tag, e.g. 'OnlyThis'; the logic for failing the other test cases would be to check whether the current scenario carries the required tag.
I believe there is no built-in option in SpecFlow.
It also depends on the test runner you use. You could filter tests, e.g. by test name.

How to run multiple Firestore Functions Sequentially?

We have 20 functions that must run every day. Each of these functions does something different based on inputs from the previous function.
We tried calling all the functions from one function, but it hits the timeout error because these 20 functions take more than 9 minutes to execute.
How can we trigger these functions sequentially, or avoid the timeout error for a single function that executes each of them?
There is no configuration or easy way to get this done. You will have to set up a fair amount of code and infrastructure.
The most straightforward solution involves chaining together calls using pubsub-triggered functions. You can send a message to a pubsub topic that will trigger the next function to run. The payload of the message can be the parameters that the function should use to determine how it should operate. If the payload is too big, or more complex sources of data are required to make that decision, you can use a database to store intermediate data that the next function can query and use.
Since we don't have any more specific details about how your functions actually work, nothing more specific can be said. If you run into problems with a specific detail of this scheme, please post again describing specifically what you're trying to do and what's not working the way you expect.
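For illustration only, here is a minimal sketch of that pubsub chaining pattern using the Firebase SDK and the @google-cloud/pubsub client; the topic names, the doStep1/doStep2 work functions, and the payload shape are assumptions, not part of the answer above.
const functions = require('firebase-functions');
const { PubSub } = require('@google-cloud/pubsub');

const pubsub = new PubSub();

// Step 1 runs when a message arrives on its topic, does its work,
// then triggers step 2 by publishing that step's parameters to the next topic.
exports.step1 = functions.pubsub.topic('pipeline-step-1').onPublish(async (message) => {
  const input = message.json;                 // parameters sent by the previous step
  const result = await doStep1(input);        // hypothetical work for this step
  await pubsub.topic('pipeline-step-2').publishMessage({ json: result });
});

// Step 2 is wired the same way, consuming the payload step 1 published.
exports.step2 = functions.pubsub.topic('pipeline-step-2').onPublish(async (message) => {
  await doStep2(message.json);                // hypothetical work for this step
});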
There is a variant of Doug's solution. At the end of the function, instead of publishing a message to pubsub, simply write a specific log entry (for example " end").
Then go to Stackdriver Logging, search for this specific log trace (turn on the advanced filters) and configure a sink of this log entry into a PubSub topic. That way, every time the log is detected, a PubSub message is published with the log content.
Finally, plug your next function into this PubSub topic.
If you need to pass values from one function to another, you can simply add these values to the log trace at the end of the function and parse them at the beginning of the next one.
Chaining functions is not an easy thing to do. Things are coming; maybe Google Cloud Next will announce new products to help with this task.
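Purely as a rough illustration of that log-based handoff (the marker text, the sink's topic name, and the doStepTwo helper below are assumptions, and the exact LogEntry fields depend on how your logs are written), it could look something like this; the log sink delivers each matching LogEntry as the JSON payload of the PubSub message.
const functions = require('firebase-functions');

// End of the current step: write the marker log line that the sink filter matches,
// embedding the values the next function needs.
function finishStepOne(result) {
  console.log(`step1 end result=${JSON.stringify(result)}`);
}

// The next function is triggered by the PubSub topic the log sink publishes to.
exports.step2 = functions.pubsub.topic('step1-end-logs').onPublish(async (message) => {
  const logEntry = message.json;                      // the exported LogEntry
  const text = logEntry.textPayload || '';            // e.g. 'step1 end result={...}'
  const result = JSON.parse(text.replace('step1 end result=', ''));
  await doStepTwo(result);                            // hypothetical next step
});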
If you simply want the functions to execute in order, and you don't need to pass the result of one directly to the next, you could wrap them in a scheduled function (docs) that spaces them out with enough time for each to run.
Sketch below with 3-minute spacing:
const functions = require('firebase-functions');

exports.myScheduler = functions.pubsub
  .schedule('every 3 minutes from 22:00 to 23:00')
  .onRun((context) => {
    // Check the time; HH:mm in UTC here, so match your schedule's time zone.
    const time = new Date().toISOString().slice(11, 16);
    if (time === '22:00') func1of20();
    else if (time === '22:03') func2of20();
    // etc. through func20of20()
  });
If you do need to pass the results of each function to the next, func1 could store its result in a DB entry; func2 then starts by reading that result and ends by overwriting it with its own, so func3 can read it when fired 3 minutes later, and so on. In that case, though, the other solutions are probably more tailored to your needs.
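A minimal sketch of that DB handoff with the Firebase Admin SDK and Firestore; the document path and the computeStep1/computeStep2 helpers are assumptions for illustration.
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

// Fired at 22:00: does its work and stores the result for the next step.
async function func1of20() {
  const result = await computeStep1();                        // hypothetical work
  await db.doc('pipeline/latest').set({ step: 1, result });
}

// Fired at 22:03: reads the previous result, then overwrites it with its own.
async function func2of20() {
  const snapshot = await db.doc('pipeline/latest').get();
  const result = await computeStep2(snapshot.data().result);  // hypothetical work
  await db.doc('pipeline/latest').set({ step: 2, result });
}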

Dynamic test cases in NUnit 3

I have integer values as test cases (ids of different users), and I don't want to hardcode them; I have a method that gets the users from an API. The specs say that the dynamic test cases feature is not implemented yet. Is it possible to load the test cases before the test is executed?
We have used the term "dynamic test cases" to mean that the tests are not created before the run but during it. Specifically, the test cases can change while the test is running.
It doesn't sound like this is what you need. If I understand correctly, you want to get the user ids programmatically at the time the tests are created. You can easily do this using the TestCaseSourceAttribute, pointing it at a method that uses your API to get the user ids.

How to test a series of transformations in OSB

I have the following problem: I'm making a request to an external system using OSB. The external system gives me a response, and I would like to apply a series of transformations to it. How can I test this series of transformations of the response?
Basically, I would like to write an XML response by hand, apply all the transformations (in bulk) and get the result. So I don't want to create a mock of an external service and start with a request, since I would like to test only the response part - whether my transformations are valid and working properly.
Here is a screenshot of my message flow. The part I would like to test is in yellow.
Thank you!
You can create a separate callable pipeline that includes only the response transformations.
Then you would be able to test it by calling it from the Service Bus console.

Integration Test Best Practice

When creating integration tests, what is the best approach to introduce data?
Should SQL scripts be used to create the data in the setup of the test, or would it be better to use the actual business objects to generate data which can then be used by the tests?
Any help would be greatly appreciated.
When creating test data for automated tests, there are a few rules I try to stick to, and I find these rules help me achieve reliable tests with a lower maintenance overhead:
Avoid making the output of one test the input of another test, i.e. don't use test A to create the test data for test B.
Avoid using objects under test to create test data, i.e. if you're testing module A, don't use module A to create test data for any test.
Create test data in a way that's repeatable and reliable at low cost, e.g. use SQL scripts to set up data.
When deciding how test data is to be created, also consider how the test data will be removed, so that your tests can be run from a clean base state.
In my environment I create test data using SQL at either the test-fixture or test set-up point, and then I clean out the test data using SQL at either the test-fixture or test tear-down point.
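The answer above is language-agnostic; purely as an illustration of that setup/teardown pattern, here is a minimal sketch for a Node/Mocha suite using the pg client. The script paths, the TEST_DB_URL variable, and the seeded order are assumptions, not part of the answer.
const fs = require('fs');
const { Client } = require('pg');

describe('order repository (integration)', () => {
  const db = new Client({ connectionString: process.env.TEST_DB_URL });

  // Fixture set-up: create known test data from a SQL script.
  before(async () => {
    await db.connect();
    await db.query(fs.readFileSync('sql/setup.sql', 'utf8'));
  });

  // Fixture tear-down: remove the test data so the next run starts from a clean base state.
  after(async () => {
    await db.query(fs.readFileSync('sql/teardown.sql', 'utf8'));
    await db.end();
  });

  it('loads the seeded order', async () => {
    const { rows } = await db.query('SELECT * FROM orders WHERE id = 1');
    // assertions against the seeded row go here
  });
});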
