SQLite3 and Knex.js result in "Timeout acquiring a connection" - sqlite

I am trying to run the following code and getting an error:
{ TimeoutError: Knex: Timeout acquiring a connection. The pool is
probably full. Are you missing a .transacting(trx) call?
Is there any way to make SQLite wait until a connection in the pool is free? If not, what would you suggest?
const path = require('path');

const knex = require('knex')({
  client: 'sqlite3',
  useNullAsDefault: true,
  connection: {
    filename: path.join(__dirname, '/db/sqlite.db')
  }
});

knex('lorem')
  .insert({ rowid: 'Slaughterhouse Five' })

var z = 0;
while (z < 20000) {
  knex('lorem')
    .select('rowid')
    .then(result => {
      console.log('res', result);
    })
    .catch(error => console.log('Error in select', error));
  z++;
}

I would suggest not trying to run 20,000 parallel queries. At which point would you like it to wait for the pool to free up? You could run all the queries one by one, or you could use, for example, Bluebird's .map(), which allows you to pass a concurrency parameter to limit how many queries are resolved at the same time.
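For example, a minimal sketch with Bluebird could look like this (assuming bluebird is installed; the concurrency of 10 is an arbitrary choice, and the table/column names follow the question):

const Promise = require('bluebird');

// run the 20000 selects with at most 10 queries in flight at a time
const indexes = Array.from({ length: 20000 }, (_, i) => i);

Promise.map(
  indexes,
  () => knex('lorem').select('rowid'),
  { concurrency: 10 } // limit how many queries run at the same time
)
  .then(results => console.log('done', results.length))
  .catch(error => console.log('Error in select', error));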


Firebase Functions ENOENT: no such file or directory, open 'HttpsErrorImpl'

I've been fighting with this issue for over 3 days. I have no idea what is happening.
Firebase Functions throws an error only when I try to use the Emulator. I execute this function in a useEffect hook. When I call the deployed Cloud Functions everything seems fine; unfortunately, when I use the Emulator things don't go so well.
const resolvePromise = async () => {
  functions.useEmulator("https://0.0.0.0:5001");
  const query = functions.httpsCallable("helloWorld");
  query()
    .then((result) => console.log(result))
    .catch((err) => console.log(err));
};
I receive this useless (for me) error.
Error: ENOENT: no such file or directory, open 'HttpsErrorImpl#http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false'
at Object.openSync (node:fs:585:3)
at Object.readFileSync (node:fs:453:35)
at getCodeFrame (Z:\repo\PTCG_Marketplace\node_modules\metro\src\Server.js:1296:18)
at Z:\repo\PTCG_Marketplace\node_modules\metro\src\Server.js:1367:24
at Generator.next (<anonymous>)
at asyncGeneratorStep (Z:\repo\PTCG_Marketplace\node_modules\metro\src\Server.js:146:24)
at _next (Z:\repo\PTCG_Marketplace\node_modules\metro\src\Server.js:168:9)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
internal
at HttpsErrorImpl#http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:197178:29 in <unknown>
at http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:197273:29 in _errorForResponse
at http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:197751:39 in <unknown>
at http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:170357:26 in step
at http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:170287:21 in <unknown>
at http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:170241:31 in fulfilled
at http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:31526:15 in tryCallOne
at http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:31627:26 in <unknown>
at http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:31955:16 in _callTimer
at http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:31994:16 in _callImmediatesPass
at http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:32211:32 in callImmediates
at http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:3457:34 in __callImmediates
at http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:3236:33 in <unknown>
at http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:3440:14 in __guard
at http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false:3235:20 in flushedQueue
This is all the text I can see after visiting:
http://192.168.0.104:19000/index.bundle -- https://pastebin.com/ggsCMN0W
http://192.168.0.104:19000/index.bundle?platform=android&dev=true&hot=false&minify=false -- https://pastebin.com/LSeufs8H
It doesn't make any sense to me. The second address seems to be related to the metro dependency, so I updated it, but that didn't work.
Any ideas? Thanks in advance :D
Edit 1: All errors are logged on the client side; it seems like the client can't even call the emulator.
Edit 2:
I tried to update the entire Firebase SDK to v9, as well as Expo to SDK 44 with react-native 0.64.3.
This is how my Request function looks now:
const requestApi = () => {
  const functions = getFunctions(app);
  connectFunctionsEmulator(functions, "127.0.0.1", 5001);
  const helloWorld = httpsCallable(functions, "helloWorld");
  helloWorld()
    .then((result) => {
      console.log(result);
    })
    .catch((error) => {
      console.log(error.message, error.code, error.details);
    });
};
I receive only this from the catch block :(
internal functions/internal undefined
I also receive a warning about a timer after executing that function:
Setting a timer for a long period of time, i.e. multiple minutes, is a
performance and correctness issue on Android as it keeps the timer
module awake, and timers can only be called when the app is in the
foreground. See https://github.com/facebook/react-native/issues/12981
for more info. (Saw setTimeout with duration 70000ms)
It fails only when I try to use the Emulator.
A couple of potential issues here:
Assuming you're using the latest version of Firebase, functions is actually a method: firebase.functions().useEmulator("localhost", 5001); - Note the () after functions. See docs for more info.
Maybe you've already done this, but have you made sure that the functions emulator is actually running and reachable on port 5001? It could be useful to test it via Postman or similar.
Make sure you're using the correct IP address for the functions emulator given your setup. 0.0.0.0 probably doesn't map where you want it to... assuming the app is running locally and the functions emulator is too, try 127.0.0.1 or "localhost" ... this answer has more options to troubleshoot. A minimal setup is sketched just below.
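For illustration, a rough sketch of that setup with the v8 namespaced SDK (the host/port and the helloWorld callable name are assumptions taken from the question):

import firebase from "firebase/app";
import "firebase/functions";

// assumes the app has already been initialized, e.g. firebase.initializeApp(firebaseConfig);

// point callable functions at the locally running emulator
// (use your machine's LAN IP instead of "localhost" when testing on a physical device)
firebase.functions().useEmulator("localhost", 5001);

const helloWorld = firebase.functions().httpsCallable("helloWorld"); // callable name taken from the question
helloWorld()
  .then((result) => console.log(result.data))
  .catch((err) => console.log(err));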
I am not sure if it is your case, but I had a function:
exports.findUserInAuth = functions.https.onCall((data, context) => {
  let field = data.field;
  let value = data.value;
  if (!field || !value) {
    return false;
  }
  if (field === "email") {
    return admin.auth().getUserByEmail(value);
  }
});
This one returns a promise; I had to change it to wait for the result before returning, and that solved the problem...
exports.findUserInAuth = functions.https.onCall((data, context) => {
  let field = data.field;
  let value = data.value;
  if (!field || !value) {
    return false;
  }
  if (field === "email") {
    // return the promise so the callable waits for the result
    return admin.auth().getUserByEmail(value)
      .then((result) => {
        return result;
      })
      .catch((error) => {
        if (error.code === "auth/user-not-found") {
          return "Email or Password is incorrect";
        }
        return `${error.code} ${error.message}`;
      });
  }
  return false;
});
Ok, so after almost a week of fighting with this sh!t.
When you use Expo Go like me, you should copy the host address on which your app is being served and use that same address for emulating your functions (and other tools).
firebase.json
{
  "firestore": {
    "rules": "firestore.rules",
    "indexes": "firestore.indexes.json"
  },
  "emulators": {
    "functions": {
      "host": "192.168.0.104",
      "port": 5001
    }
  }
}
and the final code of the requestApi function:
const requestApi = async () => {
  const functions = firebase.functions();
  functions.useEmulator("192.168.0.104", 5001); // <--- ADDRESS!!!
  const helloWorld = functions.httpsCallable("helloWorld");
  helloWorld()
    .then((result) => {
      console.log(result);
    })
    .catch((error) => {
      console.log(error.message, error.code, error.details);
    });
};

Meteor/React/ApolloServer/BodyParser - Payload too large

I am trying to save quite a big object via a mutation in my Meteor/React app, but I am getting a "Payload too large" error in the console:
PayloadTooLargeError: request entity too large
I know my object is larger than 100kb, which is the default limit for body-parser, but I have not managed to change it.
I have tried the following while initializing the Apollo Server:
const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: async ({ req }) => {
    return ({
      user: await getUser(req.headers.authorization)
    })
  }
})

server.applyMiddleware({
  app: WebApp.connectHandlers,
  path: '/graphql'
})

WebApp.connectHandlers.use(bodyParser.json({ limit: '100mb', extended: true }));

WebApp.connectHandlers.use('/graphql', (req, res) => {
  if (req.method === 'GET') {
    res.end()
  }
})
But I am still getting the same error. I think my object is around 400kb. I am hoping one of you could help me. Thanks in advance.
apollo-server-express already includes body-parser, so you shouldn't add it again as middleware. Instead, you can pass body-parser options to Apollo when calling applyMiddleware:
server.applyMiddleware({
  app: WebApp.connectHandlers,
  path: '/graphql',
  bodyParserConfig: {
    limit: '100mb',
  },
})
See the docs for a full list of available options.

DynamoDB provisioned Read/Write Capacity Units exceeded unexpectedly

I run a program that sends data to DynamoDB using API Gateway and Lambdas.
All the data sent to the DB is small, and it is only sent from about 200 machines.
I'm still on the free tier, and sometimes, unexpectedly in the middle of the month, I start getting higher provisioned read/write capacity, and from that day on I pay a constant amount each day until the end of the month.
Can someone tell from the image below what happened on 03/13 that caused this spike in the charts and caused the provisioned capacity to rise from 50 to 65?
I can't tell what happened based on those charts alone, but some things to consider:
You may not be aware of the new "PAY_PER_REQUEST" billing mode option for DynamoDB tables which allows you to mostly forget about manually provisioning your throughput capacity: https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/
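For example, a rough sketch of switching an existing table to on-demand billing with the AWS SDK for JavaScript v2 (the table name here is just a placeholder):

const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB();

// switch an existing table from provisioned capacity to on-demand (pay-per-request) billing
dynamodb.updateTable({
  TableName: 'my-table', // placeholder table name
  BillingMode: 'PAY_PER_REQUEST',
}).promise()
  .then(() => console.log('table switched to on-demand billing'))
  .catch((err) => console.error(err));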
Also, this might not make sense for your use case, but for free-tier projects I've found it useful to proxy all writes to DynamoDB through an SQS queue (use the queue as an event source for a Lambda with a reserved concurrency that is compatible with your provisioned throughput). This is easy if your project is reasonably event-driven, i.e. build your DynamoDB request object/params, write it to SQS, and then have the next step be a Lambda that is triggered from the DynamoDB stream (so you aren't expecting a synchronous response from the write operation in the first Lambda). Like this:
Example serverless config for SQS-triggered Lambda:
dynamodb_proxy:
  description: SQS event function to write to DynamoDB table '${self:custom.dynamodb_table_name}'
  handler: handlers/dynamodb_proxy.handler
  memorySize: 128
  reservedConcurrency: 95 # see custom.dynamodb_active_write_capacity_units
  environment:
    DYNAMODB_TABLE_NAME: ${self:custom.dynamodb_table_name}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:PutItem
      Resource:
        - Fn::GetAtt: [ DynamoDbTable, Arn ]
    - Effect: Allow
      Action:
        - sqs:ReceiveMessage
        - sqs:DeleteMessage
        - sqs:GetQueueAttributes
      Resource:
        - Fn::GetAtt: [ DynamoDbProxySqsQueue, Arn ]
  events:
    - sqs:
        batchSize: 1
        arn:
          Fn::GetAtt: [ DynamoDbProxySqsQueue, Arn ]
Example write to SQS:
await sqs.sendMessage({
  MessageBody: JSON.stringify({
    method: 'putItem',
    params: {
      TableName: DYNAMODB_TABLE_NAME,
      Item: {
        ...attributes,
        created_at: {
          S: createdAt.toString(),
        },
        created_ts: {
          N: createdAtTs.toString(),
        },
      },
      ...conditionExpression,
    },
  }),
  QueueUrl: SQS_QUEUE_URL_DYNAMODB_PROXY,
}).promise();
SQS-triggered Lambda:
import retry from 'async-retry';
import { getEnv } from '../lib/common';
import { dynamodb } from '../lib/aws-clients';

const {
  DYNAMODB_TABLE_NAME
} = process.env;

export const handler = async (event) => {
  const message = JSON.parse(event.Records[0].body);
  if (message.params.TableName !== DYNAMODB_TABLE_NAME) {
    console.log(`DynamoDB proxy event table '${message.params.TableName}' does not match current table name '${DYNAMODB_TABLE_NAME}', skipping.`);
  } else if (message.method === 'putItem') {
    let attemptsTaken;
    await retry(async (bail, attempt) => {
      attemptsTaken = attempt;
      try {
        await dynamodb.putItem(message.params).promise();
      } catch (err) {
        if (err.code && err.code === 'ConditionalCheckFailedException') {
          // expected exception
          // if (message.params.ConditionExpression) {
          //   const conditionExpression = message.params.ConditionExpression;
          //   console.log(`ConditionalCheckFailed: ${conditionExpression}. Skipping.`);
          // }
        } else if (err.code && err.code === 'ProvisionedThroughputExceededException') {
          // retry
          throw err;
        } else {
          bail(err);
        }
      }
    }, {
      retries: 5,
      randomize: true,
    });
    if (attemptsTaken > 1) {
      console.log(`DynamoDB proxy event succeeded after ${attemptsTaken} attempts`);
    }
  } else {
    console.log(`Unsupported method ${message.method}, skipping.`);
  }
};

How can I prevent postgres deadlocks when running jest tests on CircleCI?

When I run my tests on CircleCI, it logs the following message many times, and eventually the tests fail because none of the database methods can retrieve data due to the deadlocks:
{
"message": "Error running raw sql query in pool.",
"stack": "error: deadlock detected\n at Connection.Object.<anonymous>.Connection.parseE (/home/circleci/backend/node_modules/pg/lib/connection.js:567:11)\n at Connection.Object.<anonymous>.Connection.parseMessage (/home/circleci/-backend/node_modules/pg/lib/connection.js:391:17)\n at Socket.<anonymous> (/home/circleci/backend/node_modules/pg/lib/connection.js:129:22)\n at emitOne (events.js:116:13)\n at Socket.emit (events.js:211:7)\n at addChunk (_stream_readable.js:263:12)\n at readableAddChunk (_stream_readable.js:250:11)\n at Socket.Readable.push (_stream_readable.js:208:10)\n at TCP.onread (net.js:597:20)",
"name": "error",
"length": 316,
"severity": "ERROR",
"code": "40P01",
"detail": "Process 1000 waits for AccessExclusiveLock on relation 17925 of database 16384; blocked by process 986.\nProcess 986 waits for RowShareLock on relation 17870 of database 16384; blocked by process 1000.",
"hint": "See server log for query details.",
"file": "deadlock.c",
"line": "1140",
"routine": "DeadLockReport",
"level": "error",
"timestamp": "2018-10-15T20:54:29.221Z"
}
This is the test command I run: jest --logHeapUsage --forceExit --runInBand
I also tried this: jest --logHeapUsage --forceExit --maxWorkers=2
Pretty much all of the tests run some sort of database function. This issue only started to occur when we added more tests. Has anyone else had this same issue?
Based on the error message, we got a deadlock because of a RowShareLock.
This means that two transactions (let's call them transactionOne and transactionTwo) have each locked a resource which the other transaction requires.
Example:
transactionOne locks record in UserTable with userId = 1
transactionTwo locks record in UserTable with userId = 2
transactionOne attempts to update in UserTable for userId = 2, but since it is locked by another transaction - it waits for the lock to be released
transactionTwo attempts to update in UserTable for userId = 1, but since it is locked by another transaction - it waits for the lock to be released
Now the SQL engine detects that there is a deadlock and randomly picks one of the transactions and terminates it.
Let's say the SQL engine picks transactionOne and terminates it. This will result in the exception that is posted in the question.
transactionTwo is now allowed to perform an update in UserTable for user with userId = 1.
transactionTwo completes with success
SQL engines are pretty fast at detecting deadlocks, so the exception will be almost instant.
This is the reason for the deadlocks.
Deadlocks can have different root causes.
I see you use the pg plugin. Make sure you use it right with the transactions: pg node-postgres transactions
I would suspect a few different root causes and their solutions:
Cause 1: Multiple tests are running against the same database instance
It may be that different CI pipelines are executing the same tests against the same Postgres instance.
Solution:
This is the least probable situation, but the CI pipeline should provision its own separate Postgres instance on each run.
Cause 2: Transactions are not handled with appropriate catch("ROLLBACK")
This means that some transactions may stay alive and block others.
Solution: All transactions should have appropriate error handling.
const client = await pool.connect()
try {
  await client.query('BEGIN')
  // do what you have to do
  await client.query('COMMIT')
} catch (e) {
  await client.query('ROLLBACK')
  throw e
} finally {
  client.release()
}
Cause 3: Concurrency. For example: Tests are running in parallel, and they cause deadlocks.
We are writing scalable apps. This means that deadlocks are inevitable. We have to be prepared for them and handle them appropriately.
Solution: Use the "let's try again" strategy. When we detect in our code that there is a deadlock exception, we just retry a finite number of times. This approach has been proven in all my production apps for more than a decade.
Solution with a helper function:
// Sample deadlock wrapper
const handleDeadLocks = async (action, currentAttempt = 1, maxAttempts = 3) => {
  try {
    return await action();
  } catch (e) {
    // detect that it is a deadlock; not 100% sure whether this is deterministic enough
    const isDeadlock = e.stack?.includes("deadlock detected");
    const nextAttempt = currentAttempt + 1;
    if (isDeadlock && nextAttempt <= maxAttempts) {
      // try again
      return await handleDeadLocks(action, nextAttempt, maxAttempts);
    } else {
      throw e;
    }
  }
}
// our db access functions
const updateUserProfile = async (input) => {
  return handleDeadLocks(async () => {
    // do our db calls
  });
};
If the code becomes too complex or nested, we can try another solution using a higher-order function:
const handleDeadLocksHOF = (funcRef, maxAttempts = 3) => {
  return async (...args) => {
    let currentAttempt = 1;
    while (currentAttempt <= maxAttempts) {
      try {
        return await funcRef(...args);
      } catch (e) {
        const isDeadlock = e.stack?.includes("deadlock detected");
        if (isDeadlock && currentAttempt + 1 <= maxAttempts) {
          // try again
          currentAttempt += 1;
        } else {
          throw e;
        }
      }
    }
  }
}
// instead of exporting the updateUserProfile we should export the decorated func, we can control how many retries we want or keep the default
// old code:
export const updateUserProfile = (input) => {
  // our legacy, already implemented data access code
}
// new code
const updateUserProfileLegacy = (input) => {
  // our legacy, already implemented data access code
}
export const updateUserProfile = handleDeadLocksHOF(updateUserProfileLegacy)

Fetching an MP3 file from MeteorJS and trying to convert it into a Blob so that I can play it

I am playing around with downloading and serving MP3 files in Meteor.
I am trying to download an MP3 file (https://www.sample-videos.com/audio/mp3/crowd-cheering.mp3) on my MeteorJS server side (to circumvent CORS issues) and then pass it back to the client to play it in an AUDIO tag.
In Meteor you use the Meteor.call function to call a server method. There is not much to configure, it's just a method call and a callback.
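For illustration only, a bare-bones call of that shape could look like this (the method name and arguments are taken from the code further down; the logging is just a placeholder):

Meteor.call('episode.download', episode.url.url, (error, result) => {
  if (error) {
    console.error('episode.download failed', error);
  } else {
    console.log('episode.download result', result);
  }
});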
When I run the method I receive this:
content:
"ID3���#K `�)�<H� e0�)������1������J}��e����2L����������fȹ\�CO��ȹ'�����}$A�Lݓ����3D/����fijw��+�LF�$?��`R�l�YA:A��#�0��pq����4�.W"�P���2.Iƭ5��_I�d7d����L��p0��0A��cA�xc��ٲR�BL8䝠4���T��..etc..", data:null,
headers: {
accept-ranges:"bytes",
connection:"close",
content-length:"443926",
content-type:"audio/mpeg",
date:"Mon, 20 Aug 2018 13:36:11 GMT",
last-modified:"Fri, 17 Jun 2016 18:16:53 GMT",
server:"Apache",
statusCode:200
which is the working Mp3 file (the content-length is exactly the same as the file I write to disk on the MeteorJS Server side, and it is playable).
However, the following code doesn't let me convert the response into a Blob:
MeteorObservable.call('episode.download', episode.url.url).subscribe((result: any) => {
  console.log('response', result);
  let URL = window.URL;
  let blob = new Blob([result.content], { type: 'audio/mpeg' });
  console.log('blob', blob);
  let audioUrl = URL.createObjectURL(blob);
  let audioElement: any = document.getElementsByTagName('audio')[0];
  audioElement.setAttribute("src", audioUrl);
  audioElement.play();
})
When I run the code, the Blob has the wrong size and is not playable
Blob(769806) {size: 769806, type: "audio/mpeg"}
size:769806
type:"audio/mpeg"
__proto__:Blob
Uncaught (in promise) DOMException: Failed to load because no supported source was found.
On the backend I just run a return HTTP.get( url ); in the method which is using import { HTTP } from 'meteor/http'.
I have been trying to use btoa or atob but that doesn't work and as far as I know it is already a base64 encoded file, right?
I am not sure why the Blob constructor creates a larger file than the source returned from the backend, and I am not sure why it is not playing.
Can anyone point me to the right direction?
Finally found a solution that uses request instead of Meteor's HTTP:
First you need to install request and request-promise-native in order to make it easy to return your result to clients.
$ meteor npm install --save request request-promise-native
Now you just return the promise of the request in a Meteor method:
server/request.js
import { Meteor } from 'meteor/meteor'
import request from 'request-promise-native'

Meteor.methods({
  getAudio (url) {
    return request.get({ url, encoding: null })
  }
})
Notice the encoding: null flag, which causes the result to be binary. I found this in a comment on an answer related to downloading binary data via Node. It causes the data to be returned not as a string but in a binary representation (I don't know exactly how, but maybe it is a fallback that uses a Node Buffer).
Now it gets interesting. On your client you won't receive a complex result anymore, but either an Error or a Uint8Array, which makes sense because Meteor uses EJSON to send data over the wire with DDP, and the representation of binary data is a Uint8Array, as described in the documentation.
Because you can just pass a Uint8Array into a Blob, you can now easily create the blob like so:
const blob = new Blob([uint8Array], { type: 'audio/mpeg' })
Summarizing all this into a small template, it could look like this:
client/fetch.html
<template name="fetch">
  <button id="fetchbutton">Fetch Mp3</button>
  {{#if source}}
    <audio id="player" src={{source}} preload="none" content="audio/mpeg" controls></audio>
  {{/if}}
</template>
client/fetch.js
import { Meteor } from 'meteor/meteor'
import { Template } from 'meteor/templating'
import { ReactiveVar } from 'meteor/reactive-var'

import './fetch.html'

Template.fetch.onCreated(function helloOnCreated () {
  // holds the object URL of the fetched audio; starts empty
  this.source = new ReactiveVar(null)
})

Template.fetch.helpers({
  source () {
    return Template.instance().source.get()
  },
})

Template.fetch.events({
  'click #fetchbutton' (event, instance) {
    Meteor.call('getAudio', 'https://www.sample-videos.com/audio/mp3/crowd-cheering.mp3', (err, uint8Array) => {
      const blob = new Blob([uint8Array], { type: 'audio/mpeg' })
      instance.source.set(window.URL.createObjectURL(blob))
    })
  },
})
An alternative solution is adding a REST endpoint (using Express) to your Meteor backend.
Instead of HTTP, we use request and request-progress to send the data chunked, in case of large files.
On the frontend I catch the chunks using https://angular.io/guide/http#listening-to-progress-events to show a loader and deal with the response.
I could listen to the download via
this.http.get('the URL to a mp3', { responseType: 'arraybuffer' }).subscribe((res: any) => {
  var blob = new Blob([res], { type: 'audio/mpeg' });
  var url = window.URL.createObjectURL(blob);
  window.open(url);
});
The above example doesn't show progress by the way, you need to implement the progress-events as explained in the angular article. Happy to update the example to my final code when finished.
The Express setup on the Meteor Server:
/*
  Source: http://www.mhurwi.com/meteor-with-express/
  ## api.class.ts
*/
import { WebApp } from 'meteor/webapp';

const express = require('express');
const trackRoute = express.Router();
const request = require('request');
const progress = require('request-progress');

export function api() {
  const app = express();

  app.use(function(req, res, next) {
    res.header("Access-Control-Allow-Origin", "*");
    res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
    next();
  });

  app.use('/episodes', trackRoute);

  trackRoute.get('/:url', (req, res) => {
    res.set('content-type', 'audio/mp3');
    res.set('accept-ranges', 'bytes');

    // The options argument is optional so you can omit it
    progress(request(req.params.url), {
      // throttle: 2000,                   // Throttle the progress event to 2000ms, defaults to 1000ms
      // delay: 1000,                      // Only start to emit after 1000ms delay, defaults to 0ms
      // lengthHeader: 'x-transfer-length' // Length header to use, defaults to content-length
    })
      .on('progress', function (state) {
        // The state is an object that looks like this:
        // {
        //   percent: 0.5,           // Overall percent (between 0 to 1)
        //   speed: 554732,          // The download speed in bytes/sec
        //   size: {
        //     total: 90044871,      // The total payload size in bytes
        //     transferred: 27610959 // The transferred payload size in bytes
        //   },
        //   time: {
        //     elapsed: 36.235,      // The total elapsed seconds since the start (3 decimals)
        //     remaining: 81.403     // The remaining seconds to finish (3 decimals)
        //   }
        // }
        console.log('progress', state);
      })
      .on('error', function (err) {
        // Do something with err
      })
      .on('end', function () {
        console.log('DONE');
        // Do something after request finishes
      })
      .pipe(res);
  });

  WebApp.connectHandlers.use(app);
}
and then add this to your meteor startup:
import { Meteor } from 'meteor/meteor';
import { api } from './imports/lib/api.class';

Meteor.startup(() => {
  api();
});
