Deno web workers - Cannot find name "window"

I'm trying to run a Deno app with a deno_webview and an HTTP server, but for some reason I cannot run both at the same time; calling webview.run() seems to block something, and I can no longer reach my HTTP server.
To prevent the blocking, I'm trying to run either the server or the webview in a web worker, but in both scenarios I get the same error: "Cannot find name 'window'".
What is the issue here?
workers/api.worker.ts
import { Application } from 'https://deno.land/x/oak/mod.ts';
const app = new Application();
await app.listen({ port: 8080 });
workers/webview.worker.ts
import { WebView } from 'https://deno.land/x/webview/mod.ts';
const webview = new WebView({ url: 'http://localhost:4200' });
await webview.run();
server.ts
const webviewWorker = new Worker('./workers/webview.worker.ts', {
  type: 'module',
  deno: true
});
Error: Cannot find name 'window'
const apiWorker = new Worker('./workers/api.worker.ts', {
  type: 'module',
  deno: true
});
Error: Cannot find name 'window'

Web Workers don't have a window object; you have to use self or globalThis instead.
So https://deno.land/x/webview/mod.ts doesn't support being called from a Web Worker.
The library will need to change its window usage to globalThis so that it works both in the main process and inside workers.
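For illustration, a minimal sketch of the kind of change involved (the event listener here is just an example of window usage, not the library's actual code):
const handler = () => console.log("something went wrong");
// `window` only exists on the main thread, so a line like this is what triggers
// "Cannot find name 'window'" when the module is loaded inside a worker:
// window.addEventListener("error", handler);
// `globalThis` resolves to `window` on the main thread and to the worker's
// global scope inside a Web Worker, so this works in both places:
globalThis.addEventListener("error", handler);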

Related

oak on Deno - How do I build a URL to a route?

I come from a land of ASP.NET Core. Having fun learning a completely new stack.
I'm used to being able to:
1. name a route "orders"
2. give it a path like /customer-orders/{id}
3. register it
4. use the routing system to build a URL for my named route
An example of (4) might be to pass a routeName and then routeValues, which is an object like { id = 193, x = "y" }, and the routing system can figure out the URL /customer-orders/193?x=y. Notice how it just appends extraneous key-vals as query params.
Can I do something like this in oak on Deno? Thanks.
Update: I am looking into some functions on the underlying regexp tool the routing system uses. It doesn't seem right that this often-used feature should be so hard/undiscoverable/inaccessible.
https://github.com/pillarjs/path-to-regexp#compile-reverse-path-to-regexp
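For reference, the compile function from that README reverses a path pattern into a concrete path; a small sketch (the npm: import specifier is an assumption about how you'd pull the package into Deno):
import { compile } from "npm:path-to-regexp";
// Build a function that interpolates params into the pattern:
const toPath = compile("/customer-orders/:id");
console.log(toPath({ id: "193" })); // "/customer-orders/193"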
I'm not exactly sure what you mean by "building" a URL, but the URL associated with the incoming request is defined by the requesting client, and is available in each middleware callback function's context parameter at context.request.url as an instance of the URL class.
The documentation provides some examples of using a router and the middleware callback functions that are associated with routes in Oak.
Here's an example module which demonstrates accessing the URL-related data in a request:
so-74635313.ts:
import { Application, Router } from "https://deno.land/x/oak@v11.1.0/mod.ts";

const router = new Router({ prefix: "/customer-orders" });

router.get("/:id", async (ctx, next) => {
  // An instance of the URL class:
  const { url } = ctx.request;
  // An instance of the URLSearchParams class:
  const { searchParams } = url;
  // A string:
  const { id } = ctx.params;

  const serializableObject = {
    id,
    // Iterate all the [key, value] entries and collect into an array:
    searchParams: [...searchParams.entries()],
    // A string representation of the full request URL:
    url: url.href,
  };

  // Respond with the object as JSON data:
  ctx.response.body = serializableObject;
  ctx.response.type = "application/json";

  // Log the object to the console:
  console.log(serializableObject);

  await next();
});

const app = new Application();
app.use(router.routes());
app.use(router.allowedMethods());

function printStartupMessage({ hostname, port, secure }: {
  hostname: string;
  port: number;
  secure?: boolean;
}): void {
  if (!hostname || hostname === "0.0.0.0") hostname = "localhost";
  const address =
    new URL(`http${secure ? "s" : ""}://${hostname}:${port}/`).href;
  console.log(`Listening at ${address}`);
  console.log("Use ctrl+c to stop");
}

app.addEventListener("listen", printStartupMessage);

await app.listen({ port: 8000 });
In a terminal shell (I'll call it shell A), the program is started:
% deno run --allow-net so-74635313.ts
Listening at http://localhost:8000/
Use ctrl+c to stop
Then, in another shell (I'll call it shell B), a network request is sent to the server at the route described in your question — and the response body (JSON text) is printed below the command:
% curl 'http://localhost:8000/customer-orders/193?x=y'
{"id":"193","searchParams":[["x","y"]],"url":"http://localhost:8000/customer-orders/193?x=y"}
Back in shell A, the output of the console.log statement can be seen:
{
  id: "193",
  searchParams: [ [ "x", "y" ] ],
  url: "http://localhost:8000/customer-orders/193?x=y"
}
ctrl + c is used to send an interrupt signal (SIGINT) to the deno process and stop the server.
I am fortunately working with a React developer today!
Between us, we've found the .url(routeName, ...) method on the Router instance and that does exactly what I need!
Here's the help for it:
/** Generate a URL pathname for a named route, interpolating the optional
* params provided. Also accepts an optional set of options. */
Here it is in use, in context:
import { Context, Router } from "https://deno.land/x/oak@v11.1.0/mod.ts";

export const routes = new Router()
  .get(
    "get-test",
    "/test",
    handleGetTest,
  );

function handleGetTest(context: Context) {
  console.log(`The URL for the test route is: ${routes.url("get-test")}`);
}
// The URL for the test route is: /test
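Tying that back to the /customer-orders/{id} example from the question, here's a rough sketch reusing the Router import from the snippet above (the query-string handling is my own addition, not something I've verified Router.url's options do):
const router = new Router()
  .get("orders", "/customer-orders/:id", (ctx) => {
    ctx.response.body = `Order ${ctx.params.id}`;
  });

// Interpolate the :id param through the named route:
const path = router.url("orders", { id: "193" }); // "/customer-orders/193"

// Append the extraneous key-vals as query params separately:
const query = new URLSearchParams({ x: "y" });
console.log(`${path}?${query}`); // "/customer-orders/193?x=y"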

Fail to connect to socket.io server on Windows Server in dotnet core

I host a very simple Node socket.io application on my Windows Server; below is the code sample.
// socket.io 3.1.2
const port = 30080;
const httpServer = require("http").createServer();
const io = require("socket.io")(httpServer, {
  cors: {
    origin: '*',
    methods: ["GET", "POST"],
    allowedHeaders: ["Access-Control-Allow-Origin"],
    credentials: false
  }
});

io.on("connection", socket => {
  console.log('On Connection');
  io.emit("message", 'Welcome to Socket Io.');
});
And I wrote some code to try to connect to my socket.io server in an HTML file, and it works well. Below is the code sample.
// <script src="https://cdn.socket.io/3.1.3/socket.io.min.js"></script>
const socket = io("http://myserverip:30080", {
  withCredentials: false,
  extraHeaders: {
    "Access-Control-Allow-Origin": "*"
  }
});

socket.on("connect", () => {
  console.log('connect');
});

socket.on("message", (message) => {
  console.log(message);
});
But when I try to use the above client code in my .NET Core web application, I get the error "ERR_SSL_PROTOCOL_ERROR". Even when I publish my web application on the Windows Server, I still get the same error message.
I have tried the http, https, ws and wss protocols. None of these work. How can I possibly get this working?
I do not see the following in your server-side code:
httpServer.listen()
Do you have a reverse proxy between your client and the server?
I would expect no SSL-related error based on your code.
I would also use socket.io version 4, just for future maintenance reasons.
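For reference, a minimal sketch of the missing call, reusing the port variable from the question's server code:
// Without this the HTTP server never binds to the port, so no client can connect:
httpServer.listen(port, () => {
  console.log(`socket.io server listening on port ${port}`);
});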

Next.js returns 500: internal server error in Production

I created a Next.js full-stack application. After a production build, when I run next start it returns 500: Internal Server Error. I'm using environment variables for hitting the API.
env.development file
BASE_URL=http://localhost:3000
It was working fine in development.
service.ts
import axios from 'axios';

const axiosDefaultConfig = {
  baseURL: process.env.BASE_URL, // is this line the reason for the error?
  headers: {
    'Access-Control-Allow-Origin': '*'
  }
};

const axio = axios.create(axiosDefaultConfig);

export class Steam {
  static getGames = async () => {
    return await axio.get('/api/getAppList');
  };
}
Do you have a next.config.js file?
To add runtime configuration to your app open next.config.js and add the publicRuntimeConfig and serverRuntimeConfig configs:
module.exports = {
  serverRuntimeConfig: {
    // Will only be available on the server side
    mySecret: 'secret',
    secondSecret: process.env.SECOND_SECRET, // Pass through env variables
  },
  publicRuntimeConfig: {
    // Will be available on both server and client
    staticFolder: '/static',
  },
}
To get access to the runtime configs in your app use next/config, like so:
import getConfig from 'next/config'

// Only holds serverRuntimeConfig and publicRuntimeConfig
const { serverRuntimeConfig, publicRuntimeConfig } = getConfig()

// Will only be available on the server-side
console.log(serverRuntimeConfig.mySecret)

// Will be available on both server-side and client-side
console.log(publicRuntimeConfig.staticFolder)

function MyImage() {
  return (
    <div>
      <img src={`${publicRuntimeConfig.staticFolder}/logo.png`} alt="logo" />
    </div>
  )
}

export default MyImage
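Applied to the service.ts from the question, here is a hedged sketch of how the axios base URL could come from publicRuntimeConfig instead of a build-time process.env lookup (the baseUrl key name is my own choice, and BASE_URL is assumed to be set in the server's environment at runtime):
// next.config.js
module.exports = {
  publicRuntimeConfig: {
    baseUrl: process.env.BASE_URL,
  },
};

// service.ts
import axios from 'axios';
import getConfig from 'next/config';

const { publicRuntimeConfig } = getConfig();

const axio = axios.create({
  baseURL: publicRuntimeConfig.baseUrl,
  headers: {
    'Access-Control-Allow-Origin': '*'
  }
});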
I hope this helps.
I don't think you have set up the env correctly.
You need to configure it for it to work. Try it without it and it should work fine!

Log 'jsonPayload' in Firebase Cloud Functions

TL;DR;
Does anyone know if it's possible to use console.log in a Firebase/Google Cloud Function to log entries to Stack Driver using the jsonPayload property so my logs are searchable (currently anything I pass to console.log gets stringified into textPayload).
I have a multi-module project with some code running on Firebase Cloud Functions, and some running in other environments like Google Compute Engine. Simplifying things a little, I essentially have a 'core' module, and then I deploy the 'cloud-functions' module to Cloud Functions, 'backend-service' to GCE, which all depend on 'core' etc.
I'm using bunyan for logging throughout my 'core' module, and when deployed to GCE the logger is configured using '@google-cloud/logging-bunyan' so my logs go to Stack Driver.
Aside: Using this configuration in Google Cloud Functions is causing issues with Error: Endpoint read failed which I think is due to functions not going cold and trying to reuse dead connections, but I'm not 100% sure what the real cause is.
So now I'm trying to log using console.log(arg) where arg is an object, not a string. I want this object to appear in Stack Driver under the jsonPayload but it's being stringified and put into the textPayload field.
It took me a while, but I finally came across this example in the firebase functions samples repository. In the end I settled on something a bit like this:
const Logging = require('@google-cloud/logging');
const logging = new Logging();
const log = logging.log('my-func-logger');

const logMetadata = {
  resource: {
    type: 'cloud_function',
    labels: {
      function_name: process.env.FUNCTION_NAME,
      project: process.env.GCLOUD_PROJECT,
      region: process.env.FUNCTION_REGION
    },
  },
};

const logData = { id: 1, score: 100 };
const entry = log.entry(logMetadata, logData);
log.write(entry);
You can add a string severity property to logMetadata (e.g. "INFO" or "ERROR"). Here is the list of possible values.
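For instance, a small sketch reusing the logMetadata object from above (the errorMetadata and errorEntry names are just illustrative):
// Severity is a top-level field of the entry metadata, alongside `resource`:
const errorMetadata = { severity: 'ERROR', ...logMetadata };
const errorEntry = log.entry(errorMetadata, { message: 'something went wrong' });
log.write(errorEntry);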
Update for available node 10 env vars. These seem to do the trick:
labels: {
function_name: process.env.FUNCTION_TARGET,
project: process.env.GCP_PROJECT,
region: JSON.parse(process.env.FIREBASE_CONFIG).locationId
}
UPDATE: Looks like for Node 10 runtimes they want you to set env values explicitly during deploy. I guess there has been a grace period in place because my deployed functions are still working.
I ran into the same problem, and as stated in the comments on @wtk's answer, I would like to add a snippet that replicates all of the default Cloud Function logging behavior I could find, including execution_id.
At least when using Cloud Functions with the HTTP trigger option, the following produced correct logs for me. I have not tested it with Firebase Cloud Functions.
// global
const { Logging } = require("@google-cloud/logging");
const logging = new Logging();
const Log = logging.log("cloudfunctions.googleapis.com%2Fcloud-functions");
const LogMetadata = {
  severity: "INFO",
  type: "cloud_function",
  labels: {
    function_name: process.env.FUNCTION_NAME,
    project: process.env.GCLOUD_PROJECT,
    region: process.env.FUNCTION_REGION
  }
};

// per request
const data = { foo: "bar" };
const traceId = req.get("x-cloud-trace-context").split("/")[0];
const metadata = {
  ...LogMetadata,
  severity: 'INFO',
  trace: `projects/${process.env.GCLOUD_PROJECT}/traces/${traceId}`,
  labels: {
    execution_id: req.get("function-execution-id")
  }
};
Log.write(Log.entry(metadata, data));
The GitHub link in @wtk's answer should be updated to:
https://github.com/firebase/functions-samples/blob/2f678fb933e416fed9be93e290ae79f5ea463a2b/stripe/functions/index.js#L103
As it refers to the repository as of when the question was answered, and has the following function in it:
// To keep on top of errors, we should raise a verbose error report with Stackdriver rather
// than simply relying on console.error. This will calculate users affected + send you email
// alerts, if you've opted into receiving them.
// [START reporterror]
function reportError(err, context = {}) {
  // This is the name of the StackDriver log stream that will receive the log
  // entry. This name can be any valid log stream name, but must contain "err"
  // in order for the error to be picked up by StackDriver Error Reporting.
  const logName = 'errors';
  const log = logging.log(logName);

  // https://cloud.google.com/logging/docs/api/ref_v2beta1/rest/v2beta1/MonitoredResource
  const metadata = {
    resource: {
      type: 'cloud_function',
      labels: {function_name: process.env.FUNCTION_NAME},
    },
  };

  // https://cloud.google.com/error-reporting/reference/rest/v1beta1/ErrorEvent
  const errorEvent = {
    message: err.stack,
    serviceContext: {
      service: process.env.FUNCTION_NAME,
      resourceType: 'cloud_function',
    },
    context: context,
  };

  // Write the error log entry
  return new Promise((resolve, reject) => {
    log.write(log.entry(metadata, errorEvent), (error) => {
      if (error) {
        return reject(error);
      }
      resolve();
    });
  });
}
// [END reporterror]
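For context, a rough sketch of how a helper like this might be called from a function's error path (the handler shape and the context field are illustrative, not taken from the sample):
// Inside an HTTPS function handler, for example:
return doSomethingAsync()
  .then(() => res.sendStatus(200))
  .catch((err) => reportError(err, { user: 'some-uid' }).then(() => res.sendStatus(500)));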

Can we unregister an old service worker by its name and register a new service worker?

I am facing a problem related to service workers. Some time ago I was using GCM and my service worker file was named service-worker.js; after FCM was released I changed my code, and now my service worker file is named firebase-messaging-sw.js, but some of my clients' browsers are still calling the old service-worker.js file, which generates an error (service-worker.js not found, 500). I already use the following code before getToken().
const messaging = firebase.messaging();

navigator.serviceWorker.register('/firebase-messaging-sw.js')
  .then((registration) => {
    messaging.useServiceWorker(registration);
    // Request permission and get token.....
  });
But it's still showing this error.
In general, if you have multiple service workers registered with different scopes, and you want to get a list of them from a client page (and potentially unregister some of them, based on either matching scope or SW URL), you can do the following:
async function unregisterSWs({matchingScope, matchingUrl}) {
  const registrations = await navigator.serviceWorker.getRegistrations();

  const matchingRegistrations = registrations.filter(registration => {
    if (matchingScope) {
      return registration.scope === matchingScope;
    }
    if (matchingUrl) {
      return registration.active.scriptURL === matchingUrl;
    }
  });

  for (const registration of matchingRegistrations) {
    await registration.unregister();
    console.log('Unregistered ', registration);
  }
}
and then call it, passing in either a scope or a SW script URL that you want to use to unregister:
unregisterSWs({matchingScope: 'https://example.com/push'});
unregisterSWs({matchingUrl: 'https://example.com/my-push-sw.js'});
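Applied to the question, a rough sketch that unregisters the stale GCM worker and then registers the new FCM one (the script URL is built from the file names mentioned in the question; adjust the origin and path to match your deployment):
// Drop the old worker left over from the GCM setup:
await unregisterSWs({ matchingUrl: `${location.origin}/service-worker.js` });

// Then register the new FCM service worker and hand it to messaging, as before:
const registration = await navigator.serviceWorker.register('/firebase-messaging-sw.js');
messaging.useServiceWorker(registration);
// Request permission and get token.....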
