Protect API endpoint from being abused [closed] - next.js

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 days ago.
I am working on an application which enables logged-in users to perform certain calls to my backend API. These API endpoints often call other APIs, such as Google's Geocoding API, as in the example below. As anyone is free to create an account on my application, I'm wondering what the best way to prevent these endpoints from being abused or spammed is.
Here is an example of one of my NextJS backend endpoints:
// Assumed imports (not shown in the original): Clerk for auth and the
// official Google Maps services client
import { getAuth } from "@clerk/nextjs/server";
import { Client } from "@googlemaps/google-maps-services-js";

export default async function handler(req, res) {
  const { userId } = getAuth(req);
  if (!userId) {
    return res.status(401).json({
      data: null,
      error: "User is not logged in"
    });
  }

  if (req.method === 'POST') {
    try {
      const client = new Client({});
      // await the geocode call so errors are caught below and a response is always sent
      const r = await client.geocode({
        params: {
          address: req.body.postCode,
          key: process.env.GOOGLE_MAPS_API_KEY,
          outputFormat: "json",
          language: "en",
          region: "gb"
        },
        timeout: 1000, // milliseconds
      });
      res.status(200).json(r.data.results[0]);
    } catch (err) {
      console.log(err.response?.data?.error_message || err.message);
      res.status(err.statusCode || 500).json(err.message);
    }
  } else {
    res.setHeader('Allow', 'POST');
    res.status(405).end('Method Not Allowed');
  }
}
A logged in user would currently be able to repeatedly trigger this endpoint - which would essentially provide unlimited access to Google's Geocoding API.
I have already restricted my API keys so that they can only be used by requests from my website and only for specific Google APIs. However, this does not cover a malicious user creating an account and hitting the endpoints whilst authenticated.
I'm struggling to find any documentation which provides a best practice approach for this, so any advice would be really helpful.
Thanks!
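For reference, one common mitigation (a sketch, not from the original post) is to rate-limit the endpoint per authenticated user before ever calling the Geocoding API. A minimal in-memory version, assuming a single long-lived Next.js server process (serverless deployments would need a shared store such as Redis instead):

const WINDOW_MS = 60 * 1000;    // 1-minute window
const MAX_REQUESTS = 10;        // max geocoding calls per user per window
const hits = new Map();         // userId -> { count, windowStart }

function isRateLimited(userId) {
  const now = Date.now();
  const entry = hits.get(userId);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(userId, { count: 1, windowStart: now });
    return false;
  }
  entry.count += 1;
  return entry.count > MAX_REQUESTS;
}

// Inside the handler, after getAuth() and before calling client.geocode():
//   if (isRateLimited(userId)) {
//     return res.status(429).json({ data: null, error: "Too many requests" });
//   }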

Related

Creating deeply nested object in Prisma securely

I am using Prisma and Nextjs with the following data structure, with authentication using Next-Auth.
user
|-->profile
|-->log
|-->sublog
Right now the CRUD is sent to the database via API routes on Nextjs. And I want to write to sublog securely via the API.
So when I write this, it is open-ended:
const sublog = await prisma.sublog.create({
  data: {
    name: req.body.name,
    content: req.body.content,
    log: {
      connect: {
        id: req.body.logId,
      }
    }
  }
})
I have access to the user session from the frontend and backend in order to get the userID. But I am not sure how to make the form submission secure so that only the user who owns the log is allowed to submit a sublog.
Any ideas on how to submit something securely while it is deeply nested?
P.S. Note that I can turn on and off any component that edits/deletes data on the frontend - but that's only on the frontend. I want to secure it on the API so that even if the client somehow manages to access a form within a log that doesn't belong to them, the API would still return an error since the client doesn't own that log.
You'd need to make a Prisma query that checks who owns the log before allowing the prisma.sublog.create to be executed. Prisma is agnostic to the concept of ownership - you need to add and check that logic yourself.
const fullLog = await prisma.log.findUnique({
  select: { // don't know what your model looks like, just guessing
    id: true,
    profile: {
      select: {
        userId: true
      }
    }
  },
  where: {
    id: req.body.logId
  }
});

// currentUserId = however you get the current user's id
if (fullLog && fullLog.profile.userId !== currentUserId) {
  // throw an error
}
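Putting the check and the create together in the API route might look roughly like this (a sketch; currentUserId is assumed to come from your Next-Auth session, and the non-existent-log case is rejected as well):

if (!fullLog || fullLog.profile.userId !== currentUserId) {
  // either the log doesn't exist or it belongs to someone else
  return res.status(403).json({ error: "You do not own this log" });
}
const sublog = await prisma.sublog.create({
  data: {
    name: req.body.name,
    content: req.body.content,
    log: { connect: { id: req.body.logId } }
  }
});
return res.status(200).json(sublog);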

Best Stack/Solution to create a Single Page app that allows multiple users to contribute and see changes in real time [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
Good Day,
I am trying to build a very small display application for a database. The application needs to display data dynamically from a connected database without the user constantly refreshing, with a sync rate of under 5 seconds. I have looked into a web API and MVC (CRUD) structures, which work great for single-user input, but I need the display to change based on any changes made by any user to the database. The application will only have a small number of users (<20) and, due to our connection, the lighter the application is the better. A web application would be ideal so that the client side is display and input only.
I was hoping to get some feedback as to which technology to start to look at for the front end. I have built the database with Entity Framework.
Any input would be appreciated.
Regards,
Peter
WebSockets
The best solution is to use WebSockets. WebSockets are an advanced technology that creates a duplex connection between the client and the server, so whenever the server needs to send data, it can push it to the active connections. Since you are using ASP.NET, SignalR would be very helpful for you, and there are very good videos about SignalR you can watch.
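For example, a browser client using the @microsoft/signalr package might look roughly like this (a sketch; the "/dataHub" URL and the "rowsChanged" event name are assumptions, and the corresponding ASP.NET hub has to broadcast that event whenever the database changes):

import * as signalR from "@microsoft/signalr";

const connection = new signalR.HubConnectionBuilder()
  .withUrl("/dataHub")            // assumed hub endpoint exposed by the ASP.NET server
  .withAutomaticReconnect()
  .build();

// Re-render the view whenever the server broadcasts a change
connection.on("rowsChanged", (rows) => {
  renderTable(rows);              // your own display logic
});

connection.start().catch((err) => console.error(err));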
Push Notifications
You can also use push notifications to push data to listening clients; the Push API would help you with this. It is not as good as WebSockets, since the client still has to fall back to HTTP requests if it wants to send anything to the server.
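A minimal subscription sketch using the Push API (the service worker path, the /subscriptions endpoint and the VAPID key are placeholders):

// Runs in the page: register a service worker and subscribe to push
navigator.serviceWorker.register('/sw.js').then(async (registration) => {
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: '<BASE64URL_VAPID_PUBLIC_KEY>'
  });
  // Hand the subscription to the server so it can push changes later
  await fetch('/subscriptions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(subscription)
  });
});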
Polling
You can create your own API function on the server and use polling to send requests to it. An example of a general-purpose poller:
function Initializable(params) {
  this.initialize = function(key, def, private) {
    if (def !== undefined) {
      (!!private ? params : this)[key] = (params[key] !== undefined) ? params[key] : def;
    }
  };
}

function Poller(params) {
  Initializable.call(this, params);
  var that = this;
  this.initialize("url", window.location.href);
  this.initialize("interval", 5000);
  this.initialize("type", "POST");
  this.initialize("method", "POST");
  this.initialize("data", {});
  this.initialize("strict", true);
  this.initialize("isWebSocket", false);
  this.initialize("message", "Poll");
  this.initialize("webSocketHandler", undefined);
  if (this.isWebSocket && !this.webSocketHandler) {
    this.initialize("module", module);
    this.initialize("page", page);
    this.webSocketHandler = new WebSocketHandler({
      ConnectNow: true,
      module: this.module,
      page: this.page,
      message: this.message,
      Message: function(e) {
        that.done(e.data);
      }
    });
  }
  var defaultFunction = function() {};
  this.initialize("done", defaultFunction);
  this.initialize("fail", defaultFunction);
  this.initialize("always", defaultFunction);
  //WS
  this.initialize("isWebSocketPrepared", function() {
    return true;
  });
  this.initialize("sendingWebSocket", function() {});
  this.initialize("handleUnpreparedWebSocket", function() {});
  this.initialize("sendWSData", function(message) {
    if (that.webSocketHandler.isReady()) {
      if (that.isWebSocketPrepared()) {
        that.webSocketHandler.send(JSON.stringify({
          module: module,
          page: page,
          message: message
        }));
        that.sendingWebSocket();
      } else {
        that.handleUnpreparedWebSocket();
      }
    } else {
      that.handleUnpreparedWebSocket();
    }
  });
  this.isRunning = function() {
    return !!params.intervalID;
  };
  this.run = function() {
    if (this.strict && (this.green === false)) {
      return;
    }
    this.green = false;
    if (!that.isWebSocket) {
      $.ajax({
        url: this.url,
        method: this.method,
        data: this.data
      }).done(function(data, textStatus, jqXHR) {
        that.green = true;
        that.done(data, textStatus, jqXHR);
      }).fail(function(jqXHR, textStatus, errorThrown) {
        that.green = true;
        that.fail(jqXHR, textStatus, errorThrown);
      }).always(function(param1, param2, param3) {
        that.green = true;
        that.always(param1, param2, param3);
      });
    } else {
      that.sendWSData(that.message);
    }
  };
  this.start = function() {
    if (!params.intervalID) {
      this.run();
      params.intervalID = setInterval(this.run.bind(this), this.interval);
    }
  };
  this.stop = function() {
    if (!!params.intervalID) {
      clearInterval(params.intervalID);
      params.intervalID = undefined;
    }
  };
}
Forever Frame
You can also use forever frames, which are iframes which are loading forever. Read more here: https://vinaytech.wordpress.com/2008/09/25/long-polling-vs-forever-frame/

Google Calendar API watch channels not stopping

Stopping a watch channel is not working, though it's not responding with an error, even after allowing for propagation overnight.  I'm still receiving 5 notifications for one calendarlist change.  Sometimes 6.  Sometimes 3.  It's sporadic. We're also receiving a second round of notifications for the same action after 8 seconds.  Sometimes 6 seconds.  Sometimes a third set with a random count.  Also sporadic. Received a total of 10 unique messages for a single calendar created via web browser.
You can perform an infinite number of watch requests on a specific calendar resource. Google will always return the same calendar resource ID for the same calendar, but the UUID you generate in the request will be different, and because of that you will receive multiple notifications for each watch request you've made. One way to stop all notifications from a specific calendar resource is to listen for notifications, pull out "x-goog-channel-id" and "x-goog-resource-id" from the notification headers, and use them in a Channels.stop request:
{
  "id": string,
  "resourceId": string
}
Every time you perform a watch request, you should persist the data from the response and check whether the UUID or resource ID already exists; if it does, don't perform another watch request for that resource ID (assuming you don't want to receive multiple notifications).
e.g.
app.post("/calendar/listen", async function (req, res) {
var pushNotification = req.headers;
res.writeHead(200, {
'Content-Type': 'text/html'
});
res.end("Post recieved");
var userData = await dynamoDB.getSignInData(pushNotification["x-goog-channel-token"]).catch(function (err) {
console.log("Promise rejected: " + err);
});
if (!userData) {
console.log("User data not found in the database");
} else {
if (!userData.calendar) {
console.log("Calendar token not found in the user data object, can't perform Calendar API calls");
} else {
oauth2client.credentials = userData.calendar;
await calendarManager.stopWatching(oauth2client, pushNotification["x-goog-channel-id"], pushNotification["x-goog-resource-id"])
}
}
};
calendarManager.js
module.exports.stopWatching = function (oauth2client, channelId, resourceId) {
  return new Promise(function (resolve, reject) {
    calendar.channels.stop({
      auth: oauth2client,
      resource: {
        id: channelId,
        resourceId: resourceId
      }
    }, async function (err, response) {
      if (err) {
        console.log('The API returned an error: ' + err);
        return reject(err);
      } else {
        console.log("Stopped watching channel " + channelId);
        await dynamoDB.deleteWatchData(channelId);
        resolve(response);
      }
    });
  });
};
I'm not a Google expert, but I recently implemented this in my application, so I'll try to answer some of your questions for future readers:
It's sporadic
That's because you have created more than one channel watching events.
We're also receiving a second round of notifications for the same action after 8 seconds
Google doesn't say anything about the maximum delay for sending a push notification.
Suggestions:
CREATE:
When you create a new channel, always save the channel_id and channel_resource in your database.
DELETE:
When you want to delete a channel, just use the stop API endpoint with the channel data saved in your database.
RENEW:
As you have noticed, channels expire, so you need to renew them once in a while. To do that, create a cron job on your server that stops all previous channels and creates new ones (a sketch follows below).
Comment: whenever something goes wrong, read the error message returned by the Google Calendar API. Most of the time it tells you what is wrong.
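A renewal job along those lines might look roughly like this (a sketch, assuming node-cron and hypothetical listWatchData / stopChannel / createChannel helpers backed by your database):

const cron = require('node-cron');

// Once a day: stop every stored channel, then create fresh watch requests
cron.schedule('0 3 * * *', async () => {
  const channels = await listWatchData();              // hypothetical DB helper
  for (const ch of channels) {
    await stopChannel(ch.channelId, ch.resourceId);    // wraps calendar.channels.stop
    await createChannel(ch.calendarId);                // performs a new watch and persists id/resourceId
  }
});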
Use Channels.stop which is mentioned in the docs. Supply the following data in your request body:
{
  "id": string,
  "resourceId": string
}
id is the channel ID from when you created your watch request; the same goes for the resource ID.
Read this SO thread and this github forum for additional reference.

Firebase : How to secure content sent without login?

I'm building a hybrid mobile app with Firebase as my backend. I want to let users post on a wall any message they want without authentication, but I feel concerned about spam possibilities. I mean, if users don't have to be authenticated to be able to post, my security rules are basically empty and anyone who gets the endpoint can post an infinite amount of content. And I don't see what I could do against it.
So I know about anonymous auth, but I'm not sure it really fixes the issue. The endpoint remains open, after all, just behind the necessity to call a method first. It adds a little complexity, but not much, I think.
What I wonder is whether there is a way to check the call origin, to make sure it comes from my app and nothing else. Or, if you have another idea to make this more secure, I'm open to everything. Thanks!
You can accomplish this using a combination of reCAPTCHA on the client and Firebase Cloud Functions on the backend.
You send the message you want to add to the store, along with the captcha token, to the cloud function. In the cloud function, we first verify the captcha; if it checks out, we add the message to the store. This works because writes made through the Admin SDK in a cloud function bypass the Firestore security rules.
Here's an example cloud function:
const functions = require('firebase-functions')
const admin = require('firebase-admin')
const rp = require('request-promise')
const cors = require('cors')({
  origin: true,
});

admin.initializeApp();

exports.createUser = functions.https.onRequest(function (req, res) {
  cors(req, res, () => {
    // the body is a json of form {message: Message, captcha: string}
    const body = req.body;
    // here we verify whether the captcha is ok. We need a remote server
    // for this, so you might need a paid plan
    rp({
      uri: 'https://recaptcha.google.com/recaptcha/api/siteverify',
      method: 'POST',
      formData: {
        secret: '<SECRET>',
        response: body.captcha
      },
      json: true
    }).then(result => {
      if (result.success) {
        // the captcha is ok! we can now send the message to the store
        admin.firestore()
          .collection('messages')
          .add(body.message)
          .then(writeResult => {
            res.json({result: `Message with ID: ${writeResult.id} added.`});
          });
      } else {
        res.send({success: false, msg: "Recaptcha verification failed."})
      }
    }).catch(reason => {
      res.send({success: false, msg: "Recaptcha request failed."})
    })
  });
})
And here's some more info: https://firebase.googleblog.com/2017/08/guard-your-web-content-from-abuse-with.html
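On the client, the call might look roughly like this (a sketch; it assumes a reCAPTCHA v2 widget is already rendered on the page with your site key, and the function URL is a placeholder):

async function postMessage(message) {
  const captcha = grecaptcha.getResponse();   // token from the rendered checkbox widget
  const res = await fetch('https://us-central1-<PROJECT>.cloudfunctions.net/createUser', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message, captcha })
  });
  return res.json();
}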

Meteor: Passing value to database after successful Paypal payment

I'd like to update the users database after a successful payment - basically converting dollars to site credits. I've used https://github.com/tirtohadi/meteor-paypal-demo/, essentially borrowing his code for integrating PayPal into the web app. The only idea I have is to do it when the site gets routed to the return page after payment. The code is here:
Router.map(function() {
  this.route('/payment_return/:invoice_no/:amount/', {
    where: 'server',
    onBeforeAction: function() {
      console.log("result");
      result = paypal_return(this.params.invoice_no, this.params.amount, this.params.query.token, this.params.query.PayerID);
      console.log(result);
      if (result) {
        this.response.end("Payment captured successfully");
      } else {
        this.response.end("Error in processing payment");
      }
    }
  });
});
I guess my question is: how do I securely update the DB after a successful payment? I know client-side updates are dangerous (from what I've read, anyway).
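For what it's worth, one way to keep this entirely server-side is to do the credit update inside the same server route (or a Meteor method) right after paypal_return succeeds; a rough sketch, assuming a hypothetical Invoices collection written at checkout that stores the paying user's _id:

// Server only: runs after paypal_return(...) confirms the payment
if (result) {
  var invoice = Invoices.findOne({ invoiceNo: this.params.invoice_no }); // hypothetical collection
  if (invoice) {
    Meteor.users.update(invoice.userId, {
      $inc: { 'profile.credits': Number(this.params.amount) } // convert $ to credits as needed
    });
  }
  this.response.end("Payment captured successfully");
}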
