Next.js - read data once from a file and use it in any module

I am trying to implement the following design: read data from a file (XML) at server startup and have it available as in-memory variables to be used in the backend API for certain calculations. This data never changes, so it only needs to be read once.
I am getting a lot of "module not found" errors; from what I have read, fs functions should only be used on the server side, for example in getStaticProps.
But that would trigger the file read every time a client loads the page.
Can someone guide me with a simple example of how to do this so that the data is read once and usable in the back-end, server-side modules for calculations?
Thanks
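One common pattern (a minimal sketch, not the only way to do it: the file path, the skipped XML-parsing step, and the route name below are assumptions) is to keep the data in a module-level variable inside a server-only module, so the file is read at most once per server process and any server-side module or API route can import the getter:

// lib/referenceData.js -- server-only module; import it only from API routes
// or other server-side code, never from client components
import fs from 'fs';
import path from 'path';

let cache = null; // module-level cache, filled on first access and reused afterwards

export function getReferenceData() {
  if (cache === null) {
    // placeholder path -- adjust to wherever your XML file actually lives
    const xml = fs.readFileSync(path.join(process.cwd(), 'data', 'reference.xml'), 'utf8');
    // parse the XML with your preferred library and store the result;
    // the raw string is kept here only to keep the sketch dependency-free
    cache = xml;
  }
  return cache;
}

// pages/api/calc.js -- example API route using the cached data
import { getReferenceData } from '../../lib/referenceData';

export default function handler(req, res) {
  const data = getReferenceData(); // reads from disk only on the first request
  res.status(200).json({ length: data.length }); // use `data` in your real calculation
}

Note that on a serverless deployment each cold start is a fresh process, so the file may be read more than once overall, but never more than once per running instance, and never from the client.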

Related

Chunking in Azure Logic Apps SFTP-SSH Create action doesn't work

What I want to achieve:
Send a >50MB file via HTTP to a Logic App
The Logic App to save the file to an SFTP server
An error I am getting in the SFTP-SSH 'Create file' action:
The provided file size '64065320' for the create or update operation exceeded the maximum allowed file size '52428800' using non-chunked transfer mode. Please enable chunked transfer mode to create or update large files.
Chunking on the SFTP-SSH 'Create file' action is enabled. Overriding chunk size doesn't help. Using the body of the 'Compose' action as an input for 'Create file' also doesn't help.
Screenshots attached to the question showed the current workflow, the SFTP-SSH 'Create file' action parameters, the action settings, and the resulting error.
Any ideas about the reason for the error?
P.S. I want to clarify the issue; it is about a very specific workflow: when a large file is sent to a Logic App via HTTP (the 'When a HTTP request is received' trigger), it needs to be saved to an SFTP server. Not transformed, just saved as it is. I know that when collecting (pulling) a large file from elsewhere (SFTP/blob/etc.) and saving it to SFTP, chunking works fine. But in this scenario (pushing the file to the Logic App) it doesn't. Although the Handle large messages with chunking in Azure Logic Apps article at first says that "Logic App triggers don't support chunking" and "for actions that support and are enabled for chunking you can't use trigger bodies", it then gives a workaround: "Instead, use the Compose action. Specifically, you must create a body field by using the Compose action to store the data output from the trigger body. Then, to reference the data, in the chunking action, use @body('Compose')". Well, this workaround didn't work for me, as seen from the screenshots I provided. I'd appreciate it if someone could clarify how to overcome this issue.
According to this documentation, chunking with the HTTP connector works by having the endpoint you send the request to download the whole content in parts (partial data), so the endpoint itself has to support chunked transfer. To comply with this connector's limit, Logic Apps splits any message larger than 30 MB into smaller chunks. You can split up large content downloads and uploads over HTTP so that your logic app and an endpoint can exchange large messages.
You can also refer HERE, which discusses the same topic.

Hosting a Shiny app on EC2 with background running capability

I want to host a Shiny app on Amazon EC2 which takes an Excel sheet using fileInput(). Then I need to make some API calls for each row in the Excel sheet, which is expected to take 1-2 hours on average for my purposes. So I figured out that this is what I should do:
Host a Shiny app where one can upload an Excel sheet.
On receiving an Excel sheet from a user, store it on the Amazon servers, notify the user that an email will be sent once the processing is complete, and trigger another R script (I'm not sure how to do that) which keeps running in the background even if the user closes the browser window and collects all the information by making the slow API calls.
Once I have all the data, store it in another Excel sheet and email it back to the user.
If it is possible and reasonable to do it this way, or if you have some other ideas for doing this task, please help me with how to do it.
Edit: I've found this is what I can do otherwise:
Get the Excel sheet data and store it in a file.
Call a bash script from the R Shiny app like this: ./<my-script> & disown
The bash script will call a Python file which makes all the API calls, decodes the relevant data from the JSON output, and stores it in another file on the server.
It finally sends an email to the user with the processed data attached.
I wanted to know if this is an appropriate way to do the job. Thanks a lot.
Try implementing a simple web framework like Django, since you are using Python. Flask may come in handy for creating simple routes. Please comment if you find any issues.

R-Shiny two-way communication

We have a separate process which provides data to our R-Shiny application. I know I can provide data to R-Shiny via a file or a database and observe the data source via reactivePoll. This works fine and I understand it's sort of the recommended way.
What I don't like about this approach:
It's hard to send various types of input to Shiny (like data and metadata).
I miss feedback from the Shiny application to the data-providing process. I just write a file and hope the Shiny app will pick it up and process it successfully. The data-sourcing process cannot be notified about a failure, for example (invalid data).
I would love to have some two-way protocol. For example, send the data through a websocket (this would have to be a different websocket than the one Shiny uses with the UI, obviously) or a raw socket and be able to send a response back.
Surely I could implement some file-based API where I store files under different names and observe them with Shiny, and Shiny then writes other files back which I observe from the application that provided the data. But this basically sucks :)
Any tips much appreciated!
Edit: it may or may not be obvious from the above that the Java and R applications are writing files for each other ... but the apps are running on the same host and I can live with this limitation.

Writing a large volume of web POST requests to flat files (file-based queuing)

I am developing a Spring-based web application which will handle a large volume of requests per minute, and this web app needs to respond very quickly.
For this purpose, we decided to implement a flat-file-based queuing mechanism, which just writes the requests (a set of database column values) to flat files; another process picks this data up from the flat files periodically and writes it to the database. I pick up only those files that I am done writing to.
As I am using a flat file, for each request I receive I need to open and close the flat file inside my controller method.
My question is: is there a better way to implement this solution? JMS is out of scope as we don't have the infrastructure right now.
If this file-based approach seems good, is there a better way to reduce the file I/O? With the current design, I open/write/close the flat file for each web request received, which I know is bad. :(
Env: SpringSource Tool Suite, Apache Tomcat, with Oracle as the back end.
File access has to be synchronized, otherwise you'll corrupt the file. Synchronized access clashes with the large volume of requests you are planning for.
Take a look at things like Kestrel, or just go with a database like SQLite (at least then you can delegate the synchronization burden).

Meteor's source code open to the clients?

From a general glimpse, it seems that the source code of a Meteor app is open to the clients because of the "write one JavaScript file, run it on client and server at once" theme.
If the server-side source code of a particular app is open to the client side, wouldn't it be easy for a random person to copy it and create a very similar-looking app?
Wouldn't it be easy for a person with evil intentions to find security holes in the app, because its server-side code is open to the public?
For instance, in Meteor 0.5.0's new parties example app, the model.js file seems to be sent to the client side as well.
Am I misunderstanding something here?
Edit
Here is the part that I do not understand.
According to http://docs.meteor.com/#structuringyourapp,
Files outside the client and server subdirectories are loaded on both the client and the server! That's the place for model definitions and other functions
I really do not understand it. If every model implementation (including DB interaction) is sent to the client, wouldn't the app be less secure and easily copied by other developers?
Any code in the server/ folder will not get sent to the client (see http://docs.meteor.com/#structuringyourapp)
EDIT
Regarding the second part:
Any code not in client/ or server/ is code you want to run both client and server side. So obviously it must be sent to the client.
The reason you would place model code there is latency compensation. If you want to make updates to your data, it's best to apply them immediately client-side and then run the same code server-side to 'commit' them for real. There are many examples where this makes sense.
If there is 'secret' model code that you don't want to run client side, you can certainly have a second server/models.js file.
The best way to secure a client-server app is by writing explicit security checks on the server, rather than hiding the database update logic from the client.
For a longer explanation of the security model, see https://stackoverflow.com/a/13334986/791538.
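As a minimal sketch of such an explicit server-side check (the method and field names here are made up for illustration, loosely following the parties example), a Meteor method defined under server/ is never shipped to the client but can still be called from it:

// server/methods.js -- lives under server/, so this code never reaches the client
Meteor.methods({
  updatePartyTitle: function (partyId, newTitle) {
    var party = Parties.findOne(partyId);
    // explicit security check on the server, instead of hiding the update logic
    if (!party || party.owner !== this.userId) {
      throw new Meteor.Error(403, "You don't own this party");
    }
    Parties.update(partyId, { $set: { title: newTitle } });
  }
});

The client can still invoke it with Meteor.call('updatePartyTitle', id, title), but the check itself stays on the server where it cannot be read or bypassed by inspecting the shipped code.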
