How can I run a standalone script within a Next.js project?

Within a Next.js project, I want to run a standalone script written in TypeScript. The script needs to import various source files from my Next.js project, but it only performs some database queries and writes output to stdout, so it doesn't need to start a web server or anything like that. What is the best way to do this?
If I just run node myscript.js, I get a ton of import errors, presumably because my TypeScript code is not directly parseable by node.
What is the canonical way to run a standalone script in a Next.js project?
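One common approach (not given in the thread, so treat the tool choice as an assumption) is to execute the script with a TypeScript-aware runner such as tsx or ts-node, which transpiles imports on the fly so node never has to parse the TypeScript itself. A minimal sketch, with a hypothetical file name and a stand-in for the real database query:

```typescript
// scripts/report.ts — a hypothetical standalone script.
// Run it without starting the Next.js server, e.g.:
//   npx tsx scripts/report.ts
// If the project uses tsconfig path aliases (like @/lib/db), the
// runner may need extra configuration (e.g. tsconfig-paths for ts-node).

// Stand-in for a real database query; an actual script would import
// project modules, e.g. `import { db } from "../lib/db"`.
export async function fetchRowCount(): Promise<number> {
  return 42; // placeholder result
}

async function main(): Promise<void> {
  const count = await fetchRowCount();
  process.stdout.write(`rows: ${count}\n`);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The key point is that the runner, not node, resolves and compiles the TypeScript imports; whether tsx or ts-node fits better depends on the project's module settings.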

Related

How to use Next.js Automatic Static Site Optimization and still export for Netlify?

I have been working on implementing some of the updates from Next 9.3. I have been moving from getInitialProps to getServerSideProps and noticed that my exportPathMap became unhappy with these pages becoming dynamic. Everything works fine running next, but when I run next build && next export, I run into some issues.
The docs for static HTML export state: "If your pages don't have getInitialProps you may not need next export at all; next build is already enough thanks to Automatic Static Optimization." I can get that working with my new getServerSideProps calls when I run next build && next start. What steps do I need to take for that to also work with next export so I can deploy via Netlify? Here's an example of the error I get when I attempt to run next export:
Error occurred prerendering page "/videos/[videos_title]". Read more: https://err.sh/next.js/prerender-error:
Error: Error for page /videos/[videos_title]: pages with `getServerSideProps` can not be exported. See more info here: https://err.sh/next.js/gssp-export
Applications built with SSR cannot be deployed to Netlify or any other static hosting site (except Vercel, which supports Next.js SSR deployment).
When you go for SSR (using getServerSideProps), it is meaningless to use next export, since export tries to create static content, which is the opposite of SSR.
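If the page's data is actually available at build time, one way around the gssp-export error is to replace getServerSideProps with getStaticProps plus getStaticPaths, so the dynamic route can be prerendered. A sketch with made-up data helpers (getAllVideos and getVideoByTitle are assumptions, not from the question); in a real project these exports would live in pages/videos/[videos_title].tsx:

```typescript
// Minimal local stand-ins for a data layer; replace with real queries.
type Video = { videos_title: string; url: string };

async function getAllVideos(): Promise<Video[]> {
  return [{ videos_title: "intro", url: "/media/intro.mp4" }];
}

async function getVideoByTitle(title: string): Promise<Video | null> {
  const videos = await getAllVideos();
  return videos.find((v) => v.videos_title === title) ?? null;
}

// Enumerate every dynamic path at build time so `next export` can
// prerender /videos/[videos_title] instead of rendering it per request.
export async function getStaticPaths() {
  const videos = await getAllVideos();
  return {
    paths: videos.map((v) => ({ params: { videos_title: v.videos_title } })),
    fallback: false, // export requires all paths known at build time
  };
}

// Runs once per path at build time, replacing getServerSideProps.
export async function getStaticProps({
  params,
}: {
  params: { videos_title: string };
}) {
  const video = await getVideoByTitle(params.videos_title);
  if (!video) return { notFound: true as const };
  return { props: { video } };
}
```

This only works when the data is known at build time; truly per-request data still needs SSR and a host that supports it.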
   
One way to deploy is to run the app on a virtual server (like EC2) by creating a custom server.js file with the proper routing configuration.
Another quick and easy method is to deploy SSR applications on Vercel (formerly Zeit), where this is handled for you.
Vercel's documentation on deploying SSR applications is sparse. Luckily I got the information below from their support team, and I asked them to update the documentation to say more about SSR deployment on Vercel.
When deploying on Vercel:
Set the build command to next build or npm run build.
Leave the output directory empty.
NOTE: An application with a custom server.js will not work properly on Vercel; in that case, go for a virtual server (like EC2).

Appium script on an already launched app

I am automating Android app tests with Appium, and I need to split my test script in half while the app is still running.
The first script starts a login into the app, and the second script completes it. The second script must therefore pick the app up in exactly the state the first script left it in; the app cannot be closed in between.
Is there a way to do this, and if there is, how?
Thank you!
This is quite simple (using Robot Framework with AppiumLibrary).
Create your test suite directory and add an initialization file (__init__.robot) to it.
In the initialization file, do the following setup:
Suite Setup Open Application
Suite Teardown Close All Apps
Then, in the scripts in this directory, do not call Open Application; just use your Appium keywords, and they will operate on the currently open application.
I use this when I need to do something in an iOS application (AppiumLibrary), then perform operations on the same data in a web client (SeleniumLibrary), then finish in iOS, and finally verify in the web client/database.
The file structure is something like this:
__init__.robot
01_login_appium.robot
02_do_appium_stuff1.robot
03_do_selenium_stuff1.robot
04_do_appium_stuff2.robot
...
Alternatively, if you just want a separate login script to reuse across tests, I prefer to create a keyword resource file for setup, login, and other common tasks, encapsulating the steps in keywords and reusing them in the setup of different scripts.

How to access environment variables in grunt application *after* it has been built

Application
I have an AngularJS application that is built by grunt, which injects environment variables (like API endpoints) into the AngularJS code at build time. This problem, however, is more specific to grunt applications deployed in docker containers.
Motivation
I've just recently started trying to integrate docker into the deployment process (something similar to this) but realized I don't know how best to get the environment variables into the application anymore. I'll describe the sequence of events:
Make changes to my code
Build a complete docker image that accepts API endpoints as environment variables
Push the docker image to my server
Run a container based off this new image
As you can see, grunt build occurs in step 2, but the environment variables are not available until step 4.
Possible Solution
I could wrap the call that starts the static server for my Angular application in a small bash script that creates a JavaScript file containing the environment variables. I could then add a <script> tag to my index.html to import it and start the server as usual, and everything would work. However, that feels like sidestepping grunt inappropriately.
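The workaround described above can be sketched as a small script run at container startup (a sketch only; the file paths and variable names are assumptions). It serializes a whitelist of environment variables into a file that index.html loads via a <script> tag before the app bundle:

```typescript
// write-env.ts — hypothetical helper; the container entrypoint runs it
// before starting the static server, e.g.:
//   npx ts-node write-env.ts && grunt connect:dist:keepalive
import * as fs from "node:fs";

// Expose an explicit whitelist only; never serialize all of
// process.env into client-visible code.
const EXPOSED = ["API_ENDPOINT", "AUTH_ENDPOINT"];

export function buildEnvJs(env: Record<string, string | undefined>): string {
  const picked: Record<string, string> = {};
  for (const key of EXPOSED) {
    const value = env[key];
    if (value !== undefined) picked[key] = value;
  }
  // Attach to window so the app can read it, e.g. through an Angular
  // provider that wraps window.__env.
  return `window.__env = ${JSON.stringify(picked)};`;
}

// Write the file next to the built assets; index.html then loads it
// with <script src="env.js"></script> before the app bundle.
fs.mkdirSync("dist", { recursive: true });
fs.writeFileSync("dist/env.js", buildEnvJs(process.env));
```

Because the file is generated at container startup rather than at build time, the same image can be deployed with different API endpoints per environment.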
Does anyone know a straightforward way to inject environment variables into client-side code around the time the connect:dist:keepalive task runs? To clarify, I'm already using the ng-constant grunt task, but it can only access environment variables at build time, not at server startup time.

Execute internal code on build

Background
In an ASP.NET site, I'm using a code documentation tool called Nocco. Nocco is a command-line tool that you explicitly run on a particular code file to output an HTML-rendered version of that code and its documentation. I've currently set up some code in my Global_asax.Application_Start method to crawl through a couple of directories and process all the code files in each one.
Problem
Ultimately, putting it in Global_asax.Application_Start means it builds the Nocco documentation, which takes ~1 second per file, at the beginning of each session rather than only once per deployment. This seems inefficient and wastes the user's time while the page loads.
Question
Is it possible to execute code internal to the ASP.NET application (such as a class method) as a post-build event? I know I could convert this part of my setup into a standalone application or even a batch script, but I've had this question in other circumstances as well and have wondered whether or not it's possible.
You could do your generation in a warm-up script; here is the link for IIS 7.5:
http://blogs.iis.net/thomad/archive/2009/10/14/now-available-the-iis-7-5-application-warm-up-module.aspx
Alternatively, you can extract the code documentation functionality into a separate assembly, include it in a standalone app, and call that app as an external command from the project's build events.

Run web app code from command line?

I have an ASP.NET web application that includes code for enforcing its own database schema; this code runs on application start.
I've recently started using LINQ to SQL, and I've added a pre-build event to run SqlMetal on my database so that I get objects representing my db tables.
What would be really cool is if I could enforce the database schema in the pre-build event, and then run SqlMetal. As it is, if the schema changes (e.g. I add a field to a table), I have to (a) build and run the website once so that application start fires and the schema is enforced, and then (b) build it again so that SqlMetal runs.
So: What are my options for running code that lives in my web application, from the command line?
Here's what we do.
We have a local one-click build that is required to be run before check-in (an integration build also runs in a separate environment on every check-in...).
The NANT script will:
Rebuild the database from scratch using Tarantino (Database change management)
Clean & Compile
Copy DLLs to a separate directory
Run Unit Tests against the DLLs
We have a separate script for SqlMetal, but your question is going to have me look at inserting the call between steps 1 and 2. That way your database changes and LINQ-generated files are always in sync.
You could either write a small program that uses CodeDOM to interpret a file in your repository, or directly call the compiler executable in your pre-build event.
Using CodeDOM avoids any problems with having to know where the compiler executable is, but if your code can't be contained in one file without dependencies, it's unusable; in that case, calling the compiler and then executing the result is the better option.