How to test POST-requests with NodeUnit and Express? - http

I'm writing a simple REST service in Node.js (just experimenting), trying to figure out if Node has matured enough yet. I'm also using NodeUnit for my unit testing.
Now, NodeUnit works fine as a testing framework for testing GET requests using its HttpUtils; however, testing POST requests doesn't seem to be obvious.
Testing GET looks like this:
exports.testHelloWorld = function (test) {
    test.expect(1);
    httputil(app.cgi(), function (server, client) {
        client.fetch('GET', '/', {}, function (resp) {
            test.equals('hello world', resp.body);
            test.done();
        });
    });
};
But how do I test POST requests? I can change 'GET' to 'POST' and try to write something to 'client', but that doesn't work before .fetch is called because there is no connection yet, and it doesn't work in the .fetch callback either, because by that time the request has already been executed.
I've looked into the nodeunit code, and there doesn't seem to be support for POSTing data at the moment. So here are my questions:
What does it take to test POST-requests?
Should I even test POST-requests in a unit test, or does that fall under an integration test and I should use another approach?
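One way around the missing POST support in httputil is to start the Express app on a local port and drive it with Node's built-in http module from inside the nodeunit test. A minimal sketch, assuming app is the Express application and /items is a hypothetical POST route that answers with 201:
var http = require('http');

exports.testCreateItem = function (test) {
    test.expect(1);
    // Listen on port 0 so the OS picks a free port for this test run.
    var server = app.listen(0, function () {
        var body = JSON.stringify({ name: 'example' });
        var req = http.request({
            port: server.address().port,
            method: 'POST',
            path: '/items',
            headers: {
                'Content-Type': 'application/json',
                'Content-Length': Buffer.byteLength(body)
            }
        }, function (res) {
            test.equal(res.statusCode, 201);
            server.close();
            test.done();
        });
        // The POST body is written to the request before ending it.
        req.write(body);
        req.end();
    });
};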

You could try this library instead of nodeunit: https://github.com/masylum/testosterone
It's built specifically to test web apps over http.

I've just written this library for testing HTTP servers with nodeunit:
https://github.com/powmedia/nodeunit-httpclient

Related

Next.JS - localhost is prepended when making external API call

I have a simple Next app where I'm making an external API call to fetch some data. This worked perfectly fine until a couple of days ago. Now, when the app makes an API request, I can see in the network tab that the URL it's trying to call has the Next app's address (localhost:3000) prepended in front of the actual URL, e.g. instead of http://{serverAddress}/api/articles it calls http://localhost:3000/{serverAddress}/api/articles, and this request resolves to 404 Not Found.
To make the API call, I'm using fetch. Before making the request, I logged the URL that was passed into fetch and it was the correct URL. I also confirmed my API works as expected by making the request to that URL with Postman.
I haven't tried another library like axios to make this request, because that doesn't make sense when the app was working perfectly fine with fetch alone; I want to understand why this is happening.
I haven't made any code changes since the app was working. However, I was Dockerizing my services, so I installed Docker and WSL2 with Ubuntu. I was deploying those containers on another machine; now both the API I'm calling and the Next app are running directly on my development machine while this issue is happening.
I saw this post and confirmed I don't have any whitespace in the URL. As one comment mentions, I did install WSL2, but I am not running the app via the WSL terminal. I've also tried executing wsl --shutdown to see if that helps, but the issue still persists. If WSL2 is the cause, how can I fix it? Uninstall WSL2? If not, what might be another possible cause?
Thanks in advance.
EDIT:
The code I'm using to call fetch:
fetcher.js
export const fetcher = (path, options) =>
    fetch(`${process.env.NEXT_PUBLIC_API_URL}${path}`, options)
        .then(res => res.json());
useArticles.js
import { useSWRInfinite } from 'swr';
import { fetcher } from '../../utils/fetcher';
const getKey = (pageIndex, previousPageData, pageSize) => {
    if (previousPageData && !previousPageData.length) return null;
    return `/api/articles?page=${pageIndex}&limit=${pageSize}`;
};

export default function useArticles(pageSize) {
    const { data, error, isValidating, size, setSize } = useSWRInfinite(
        (pageIndex, previousPageData) =>
            getKey(pageIndex, previousPageData, pageSize),
        fetcher
    );

    return {
        data,
        error,
        isValidating,
        size,
        setSize
    };
}
You might be missing the protocol (http/https) in your API call. Without a protocol, fetch treats the URL as relative and resolves it against the host serving the page, which is why localhost:3000 gets prepended.
Either put it into env variable:
NEXT_PUBLIC_API_URL=http://server_address
Or prefix your fetch call with the protocol name:
fetch(`http://${process.env.NEXT_PUBLIC_API_URL}${path}`, options)
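If you want to guard against the env variable being set either way, a small normalization in fetcher.js also works; a rough sketch (assuming http is the right scheme when none is given):
export const fetcher = (path, options) => {
    const base = process.env.NEXT_PUBLIC_API_URL || '';
    // Prepend a protocol only when the env variable doesn't already include one.
    const baseUrl = /^https?:\/\//.test(base) ? base : `http://${base}`;
    return fetch(`${baseUrl}${path}`, options).then(res => res.json());
};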

Zerocode: Set system property in host configuration file

Configuration:
zerocode-tdd.1.3.2
${host}
At runtime, the system property is set with the -D java option. All is well.
Problem / What I Need:
At unit test time, the system property is not set, so host is not resolved.
The app uses JUnit and Zerocode; I would like to simply configure Zerocode to set the system property.
Example:
host.properties
web.application.endpoint.host:${host}
web.application.endpoint.port=
web.application.endpoint.context=
More Info:
Requirement is for configuration only. Can't introduce new Java code, or entries into IDE.
Any help out there? Any ideas are appreciated.
This feature is available in zerocode version 1.3.9 and higher.
Please use a placeholder like ${SYSTEM.PROP:host}; for example, ${SYSTEM.PROPERTY:java.vendor} resolves to Oracle Corporation or Azul Systems, Inc.
Example link:
https://github.com/authorjapps/zerocode/blob/master/README.md#general-place-holders
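Applied to the host.properties from the question, that would presumably look something like the following (a sketch, assuming the ${SYSTEM.PROPERTY:...} form shown in the README):
web.application.endpoint.host=${SYSTEM.PROPERTY:host}
web.application.endpoint.port=
web.application.endpoint.context=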
Found a solution, but not sure if this is the correct way to do so.
Step 1: Create a config file and load system properties.
Config.java
import java.util.HashMap;
import java.util.Map;

public class Config {

    public Map<String, Object> readProperties(String optionalString) {
        Map<String, Object> propertiesMap = new HashMap<>();
        final String host = System.getProperty("host");
        propertiesMap.put("host", host);
        return propertiesMap;
    }
}
Step 2: Add a step (before other steps) to use the loaded properties in the .json file.
test.json
{
    "scenarioName": "Test ...",
    "steps": [
        {
            "name": "config",
            "url": "com.test.Config",
            "operation": "readProperties",
            "request": "",
            "assertions": {}
        }
    ]
}
Step 3: Use the property loaded by the config step in subsequent steps.
test.json
{
    "scenarioName": "Test ...",
    "steps": [
        {
            "name": "config",
            "url": "com.test.Config",
            "operation": "readProperties",
            "request": "",
            "assertions": {}
        },
        {
            "name": "test",
            "url": "${$.config.response.host}/test/xxx",
            "operation": "GET",
            "request": {},
            "assertions": {
                "status": 200
            }
        }
    ]
}
That's it. Although it is working, I am looking for a better approach.
Some possible options I am trying are:
Common step for load/config (in one place)
Directly using properties as {host} in json files
Custom client
Again any help/ideas are appreciated.
My question is: why are you trying to access the actual host/port? Sorry for the long answer, but bear with me; I think there is an easier way to achieve what you are attempting. I find it's best to think about zerocode usage in two ways:
live integration tests (which is what I think you're trying to do), meaning tests that call a live endpoint/service, or
what I refer to as a thintegration test (an integration test, but using a mock endpoint/service).
Thinking about it this way gives you the opportunity for two different metrics,
when using the mock endpoint / service how performant / resilient is my code, and
when using live integration tests what is the rough real life performance (expect a bit slower than external load test due to data setup / test setup).
This lets you evaluate both yourself and your partner service.
So, outside of the evaluation above, why would you want to build a thintegration test? The real value is that you still make it all the way through your code, like you would in an integration test, but you control the result of the test, like you would in a standard unit test. Additionally, since you control the result, this may improve build-time test stability versus a live API.
Obviously it seems you already know how to set up an integration test, so I'll assume you're good to go there. But what about the thintegration tests?
To set up a thintegration test, you really have two options:
use postman mock server (https://learning.postman.com/docs/designing-and-developing-your-api/mocking-data/setting-up-mock/)
a. more work to setup
b. external config to maintain
c. monthly api call limits
use WireMock (http://wiremock.org/)
a. lives with your code
b. all local so no limits
If you already have integration tests you can copy them to a new file and make the updates or just convert your existing.
**** To address your specific question ****
When using WireMock you can set up a dynamic local server URL with a dynamic port using the following.
protected String urlWithPort;

// @Rule requires JUnit 4; wireMockConfig() is the static factory on WireMock's WireMockConfiguration class.
@Rule
public WireMockRule wireMockRule = new WireMockRule(wireMockConfig().dynamicPort().dynamicHttpsPort());

protected String getUriWithPort() {
    return "http://localhost:" + wireMockRule.port();
}
Note: The above was tested using WireMock version 2.27.1 and ZeroCode 1.3.27
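If it is useful, the stub itself can also be kept as a JSON mapping file (as far as I recall, WireMock's default file source in a JUnit test is src/test/resources, with stubs under mappings/); a rough sketch for the /test/xxx call from the scenario above:
{
    "request": {
        "method": "GET",
        "url": "/test/xxx"
    },
    "response": {
        "status": 200,
        "jsonBody": { "status": "up" }
    }
}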
Hope that helps you answer how to dynamically get a server/port for your tests.

How to send delayed response (Slack api) with Google Apps Script webapp?

We have a small Google Apps Script webapp that handles Slack slash commands. It does some convenient things like adding, updating and querying records to our sheet, straight from Slack. Everything works just fine most of the time. However the Slack API expects an answer from the request in less than 3 seconds, or it will timeout. Our Google Apps Script is not always able to respond in that timeframe, which will only get worse as our sheet grows or our queries get more complicated.
The Slack API allows for the use of asynchronous calls using a delayed response, but that means that the Google Apps Script needs to respond immediately (within 3 seconds) and do some work in the background.
Now this is the problem: I can't figure out how to make an asynchronous call work in Google Apps Script.
I know Workers are not supported in Google Apps Script, and my solution below hits a wall because of ReferenceError: 'google' is not defined. (Just ignore the Payload class; it formats a Slack response.)
function doPost(request) {
    var responseUrl = request.parameter.response_url;

    // This is how I try to circumvent the lack of threads in Google Apps Script
    google.script.run
        // Send an asynchronous slack response with result
        .withSuccessHandler(function(payload) {
            UrlFetchApp.fetch(responseUrl, {
                'method'     : 'post',
                'contentType': 'application/json',
                'payload'    : payload.toString()
            });
        })
        // Send an asynchronous slack response with error message
        .withFailureHandler(function(payload) {
            UrlFetchApp.fetch(responseUrl, {
                'method'     : 'post',
                'contentType': 'application/json',
                'payload'    : payload.toString()
            });
        })
        // do work in the background
        .sleep(5);

    return new Payload("Let me think about this...").asResponse();
}

function sleep(seconds) {
    Utilities.sleep(1000 * seconds);
    return new Payload("I waited for " + seconds + " seconds");
}
Does anyone have any idea how to make this work? Are there any alternative solutions to handle an asynchronous request in Google Apps Script?
I'm not aware of any threading in Apps Script either and as you noticed google.script.run only works in the Apps Script frontend.
As a workaround you could use a Google Forms as your "task queue". I've put together a simple G-Form with one question and inspected its final version to get the appropriate parameter names and URL. Then I set an installable on-form-submit trigger to run my script. Here's my POC code:
function doPost(e) {
    var form = 'https://docs.google.com/forms/d/e/1FAIpQLScWBM<my-form-id>CRxA/formResponse';
    UrlFetchApp.fetch(form, {method: 'POST', payload: 'entry.505669405=' + e.parameter.example});
    return ContentService.createTextOutput('OK');
}

function onForm(e) {
    // triggered async from doPost via a Google Forms
    SpreadsheetApp.getActive().getSheetByName('Sheet1').appendRow(e.values);
}
It worked fine on my tests and should suffice for your use case.
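To tie that back to the Slack use case, the form could also carry Slack's response_url so the triggered function can post the delayed reply. A rough sketch, where the form URL, the second form field ID, and the doSlowWork helper are all hypothetical:
function doPost(e) {
    var form = 'https://docs.google.com/forms/d/e/<my-form-id>/formResponse';
    // Forward both the command text and Slack's response_url through the form.
    UrlFetchApp.fetch(form, {
        method: 'POST',
        payload: {
            'entry.505669405': e.parameter.text,
            'entry.222222222': e.parameter.response_url
        }
    });
    // Immediate reply, well within Slack's 3-second limit.
    return ContentService.createTextOutput('Let me think about this...');
}

function onForm(e) {
    // Runs asynchronously via the installable on-form-submit trigger.
    var text = e.values[1];          // e.values[0] is the form timestamp
    var responseUrl = e.values[2];
    var result = doSlowWork(text);   // hypothetical: the slow sheet work goes here
    UrlFetchApp.fetch(responseUrl, {
        method: 'post',
        contentType: 'application/json',
        payload: JSON.stringify({ text: result })
    });
}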

TodoList sample Bing Maps service returns error with status blank

I am working through the sample todolist application for the Cordova SDK.
The URL is here:
https://msdn.microsoft.com/en-us/library/dn832630.aspx
I set up a key on the Bing Maps website. I can access the location service, sending latitude and longitude through a standard web browser by pasting in the URL with my key.
However, the Angular call always fails. What is worse, the error is always blank: no status code, no error message. I was thinking it must be CORS.
I have run through the sample and downloaded the code sample, and both have the same issue.
For anyone going through the sample: I have realised today that Angular is evil. They say it is nicely testable JavaScript with dependency injection, however it doesn't seem too interested in telling you what the error is when you have one; it just fails. Great and noble programming ideas, but without an error message it isn't much good.
Anyhow, the fix is that Angular is very strict about JSON, so the line to change is in services.js, in the Bing Maps Service method getAddressFromPosition.
It used to work with .get(), but that was probably an older version of Angular from when the demo was written. I tried using 1.2, but the Ripple emulator didn't like references to browser-specific code, so I used the latest, 1.3.13 I believe.
This is where the Bing location service is accessed with the Cordova geolocation coordinates. It returns JSON, but Angular wants it wrapped in JSONP. Searching the increasingly fragmented web, it appeared the error might be CORS, but no; many different people had their JSONP calls in controllers, modules, services, some using $http, others $resource. Finally, using bits and pieces, I got JSONP to work with $resource and plugged it into the $promise the call from the controller requires. I used a static URL with coordinates I knew worked, so you will have to use the :param Angular notation to put those back in. Hope it helps someone.
So change to:
getAddressFromPosition: function (position) {
    var resource = $resource(url, {}, {
        jsonp_query: {
            method: 'JSONP'
        }
    });
    return resource.jsonp_query().$promise.then(function (response) {
        return response.resourceSets[0].resources[0].address.formattedAddress;
    }, function (error) {
        return position.coords.latitude + "," + position.coords.longitude;
    });
}
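For reference, the url above would take roughly this shape; a sketch with a placeholder key and the :lat/:lon params mentioned above (Angular 1.x substitutes JSON_CALLBACK with its generated callback name, and Bing's REST services accept the callback name via their jsonp parameter):
var url = 'http://dev.virtualearth.net/REST/v1/Locations/:lat,:lon' +
          '?key=YOUR_BING_MAPS_KEY&jsonp=JSON_CALLBACK';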
Edit:
I put the above in and it worked. However, the problem was that for some reason, perhaps through debugging, another instance of the app was deployed on another port in Ripple, which then changed the app to run on that new port. The initial port was 4400. The problem is that any $http or $resource calls in Angular have to go through this emulator, and the emulator sees them as cross-domain unless it is configured to the same port the app is running under.
So with this URL:
http://localhost:4409/index.html?enableripple=cordova-3.0.0-iPhone5
the Proxy Port in the Settings dropdown on the right side must also be set to 4409, or else the browser will complain that the $http request is cross-domain before the emulator actually executes it to query the Azure mobile service or Bing Maps.
So this was very frustrating. Although VS Cordova has definitely reduced the amount you have to configure to make hybrid mobile apps, there are still little glitches like this which can trip you up. I assumed it was something with Angular because there were no error messages, but the error was there in the Chrome Dev Tools console, and after some googling it was plain that the Ripple emulator running on a different port than its proxy was not allowing the call to be forwarded on, due to Access-Control-Allow headers not being set.

How do I access Request Parameters in Meteor?

I am planning to use Meteor for a realtime logging application for various applications.
My requirement is pretty simple: I will pass a log message as a request parameter (POST or GET) from various applications, and Meteor needs to simply update a collection.
I need to access the request parameters in Meteor server code and update a Mongo collection with the incoming logMessage. I cannot update the Mongo collection directly from the existing applications, so please no replies suggesting that. I want to know how I can do it from the Meteor framework itself, without adding more packages.
EDIT: Updated to use Iron Router, the successor to Meteor Router.
Install Iron Router and define a server-side route:
Router.map(function () {
    this.route('foo', {
        where: 'server',
        action: function () {
            doSomethingWithParams(this.request.query);
        }
    });
});
So for a request like http://yoursite.com/foo?q=somequery&src=somesource, the variable this.request.query in the function above would be { q: 'somequery', src: 'somesource' } and therefore you can request individual parameters via this.request.query.q and this.request.query.src and the like. I've only tested GET requests, but POST and other request types should work identically; this works as of Meteor 0.7.0.1. Make sure you put this code inside a Meteor.isServer block or in a file in the /server folder in your project.
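For the logging use case in the question, doSomethingWithParams could simply insert into a collection; a minimal sketch where the Logs collection and the logMessage parameter name are hypothetical:
Logs = new Meteor.Collection('logs');

function doSomethingWithParams(query) {
    // query is this.request.query, e.g. { logMessage: 'something happened' }
    Logs.insert({
        message: query.logMessage,
        createdAt: new Date()
    });
}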
Original Post:
Use Meteorite to install Meteor Router and define a server-side route:
Meteor.Router.add('/foo', function() {
    doSomethingWithParams(this.request.query);
});
So for a request like http://yoursite.com/foo?q=somequery&src=somesource, the variable this.request.query in the function above would be { q: 'somequery', src: 'somesource' } and therefore you can request individual parameters via this.request.query.q and this.request.query.src and the like. I've only tested GET requests, but POST and other request types should work identically; this works as of Meteor 0.6.2.1. Make sure you put this code inside a Meteor.isServer block or in a file in the /server folder in your project.
I know the questioner doesn't want to add packages, but I think that using Meteorite to install Meteor Router seems to me a more future-proof way to implement this as compared to accessing internal undocumented Meteor objects like __meteor_bootstrap__. When the Package API is finalized in a future version of Meteor, the process of installing Meteor Router will become easier (no need for Meteorite) but nothing else is likely to change and your code would probably continue to work without requiring modification.
I found a workaround to add a router to the Meteor application to handle custom requests.
It uses the connect router middleware which is shipped with meteor. No extra dependencies!
Put this before/outside Meteor.startup on the Server. (Coffeescript)
SomeCollection = new Collection("...")

Fiber = __meteor_bootstrap__.require("fibers")
connect = __meteor_bootstrap__.require('connect')
app = __meteor_bootstrap__.app

router = connect.middleware.router (route) ->
  route.get '/foo', (req, res) ->
    # Run the insert inside a Fiber so Meteor APIs work from this raw connect handler
    Fiber(->
      SomeCollection.insert(...)
    ).run()
    res.writeHead(200)
    res.end()

app.use(router)
Use IronRouter, it's so easy:
var path = IronLocation.path();
As things stand, there isn't support for server side routing or specific actions on the server side when URLs are hit. So it's not easy to do what you want. Here are some suggestions.
You can probably achieve what you want by borrowing techniques that are used by the oauth2 package on the auth branch: https://github.com/meteor/meteor/blob/auth/packages/accounts-oauth2-helper/oauth2_server.js#L100-109
However this isn't really supported so I'm not certain it's a good idea.
Your other applications could actually update the collections using DDP. This is probably easier than it sounds.
You could use an intermediate application which accepts POST/GET requests and talks to your meteor server using DDP. This is probably the technically easiest thing to do.
Maybe this one will help you?
http://docs.meteor.com/#meteor_http_post
