Cloudflare Workers - changes are not visible on live (but are in preview) - cloudflare-workers

Hello and thank you for your help.
Sadly, support over at Cloudflare does not think they need to help me.
I am learning to use Workers, and I have written a simple HTML injector just to see it working on my site.
This is the full worker code I have:
async function handleRequest(req) {
  const res = await fetch(req)
  const contentType = res.headers.get("Content-Type")
  console.log('contentType: ', contentType)
  // If the response is HTML, it can be transformed with
  // HTMLRewriter -- otherwise, it should pass through
  // (guard against a missing Content-Type header)
  if (contentType && contentType.startsWith("text/html")) {
    return rewriter.transform(res)
  } else {
    return res
  }
}

class UserElementHandler {
  async element(element) {
    element.before("<div class='contbox'><img src='https://coverme.co.il/wp-content/uploads/2020/01/covermeLOGO-01-1024x183.png' style='width:200px;margin:20px;'><h1>testing inserting</h1></div>", {html: true});
    // fill in user info using response
  }
}

const rewriter = new HTMLRewriter()
  .on("h1", new UserElementHandler())

addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})
It just uses element.before to inject some HTML.
In the worker preview pane I can see it!
But on the live site, nothing.
This is the active URL: https://coverme.co.il/product/%D7%A0%D7%A8-%D7%91%D7%99%D7%A0%D7%95%D7%A0%D7%99-tuberosejasmine/
These are the 4 routes I have set up to try to catch this, with and without percent-encoding the letters:
coverme.co.il/product/נר-בינוני-tuberosejasmine/
*.coverme.co.il/product/נר-בינוני-tuberosejasmine/*
https://coverme.co.il/product/%D7%A0%D7%A8-%D7%91%D7%99%D7%A0%D7%95%D7%A0%D7%99-tuberosejasmine/
*.coverme.co.il/product/%D7%A0%D7%A8-%D7%91%D7%99%D7%A0%D7%95%D7%A0%D7%99-tuberosejasmine/*
Thanks in advance!

I believe the problem here is that you've configured your routes to match "נר-בינוני" unescaped, but the browser will actually percent-encode the URL before sending to the server, therefore the route matching actually operates on percent-escaped URLs. So the actual URL is https://coverme.co.il/product/%D7%A0%D7%A8-%D7%91%D7%99%D7%A0%D7%95%D7%A0%D7%99-tuberosejasmine/, and this does not match your route because %D7%A0%D7%A8-%D7%91%D7%99%D7%A0%D7%95%D7%A0%D7%99 is not considered to be the same as נר-בינוני.
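You can see the encoding the browser applies by running this in a JS console (a quick sketch; encodeURI mirrors what the browser does to the path before sending it):

encodeURI("https://coverme.co.il/product/נר-בינוני-tuberosejasmine/")
// => "https://coverme.co.il/product/%D7%A0%D7%A8-%D7%91%D7%99%D7%A0%D7%95%D7%A0%D7%99-tuberosejasmine/"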
EDIT: Unfortunately, using percent-encoding in your route pattern won't fix the problem either, due to a known bug. It's currently just not possible to match non-ASCII characters in a Workers route. We intend to fix this, but it's hard because some sites have accidentally come to depend on the broken behavior, so the fix would break them.
What you can potentially do instead is match against coverme.co.il/product/*, and then, inside your worker, check if the path also has נר-בינוני-tuberosejasmine. If it does not, then your fetch event handler should simply return without calling event.respondWith(). This will trigger "default handling" of the request, meaning it will pass through and be sent to your origin server like normal. (Note that you will still be billed for a Workers request, though.)
So, something like this:
addEventListener("fetch", event => {
if (event.request.url.includes(
"coverme.co.il/product/נר-בינוני-tuberosejasmine/")) {
event.respondWith(handle(event.request));
} else {
return; // not a match, use default pass-through handling
}
})

Related

MSALjs with Single Page App makes debugging impossible

I'm not sure how to explain this succinctly. First part: the Implicit Flow. I don't exactly understand what the Implicit Flow is, and I haven't been able to distill it down into a single sentence. Is it a design pattern, a way of handling tokens, or both?
I am working with a pretty basic JavaScript Single Page Application, implementing MSAL version 1.2.1. Each time the MSAL client gets a token for a scope, it leaves behind an iframe to handle token refreshing.
window.msalConfig = {
  auth: {
    clientId: '<clientId>'
    , authority: "https://login.microsoftonline.com/common"
    , validateAuthority: true
  }
  , cache: {
    cacheLocation: "localStorage"
  }
  , graphScope: {
    scopes: ["https://graph.microsoft.com/User.Read", "https://graph.microsoft.com/Mail.Send"]
  }
  , appScope: {
    scopes: ["<clientId>"]
  }
  , appToken: {
    token: null
  }
  , graphToken: {
    token: null
  }
};
clientApplication = new Msal.UserAgentApplication(window.msalConfig);

function onSignin(idToken) {
  clientApplication.acquireTokenSilent(window.msalConfig.appScope)
    .then(function (token) {
      window.msalConfig.appToken.token = token;
    }, function (error) {
      clientApplication.acquireTokenPopup(window.msalConfig.appScope).then(function (token) {
        window.msalConfig.appToken.token = token;
      }, function (error) {
        console.log(error);
      });
    });
  getGraphToken();
}

function getGraphToken() {
  clientApplication.acquireTokenSilent(window.msalConfig.graphScope)
    .then(function (token) {
      window.msalConfig.graphToken.token = token;
    }, function (error) { // note: this error handler was misplaced outside the .then() call
      console.log(error);
    });
}
These iframes just sit there and refresh periodically, presumably getting a new token or keeping it alive on the other end. (In the Chrome debugger, each refresh throws me to the Sources tab, which makes debugging nearly impossible.)
Also, I am wondering why cross-site cookie errors are being thrown when working with the Graph scope but not the App scope.
I am also curious why this issue is closed; the behavior still seems to be present:
https://github.com/AzureAD/microsoft-authentication-library-for-js/issues/697
The Implicit Flow is one of the flows for acquiring tokens in OAuth, and is especially well-suited for single-page applications, as it does not require a server component. As you observed, it uses iframes to acquire tokens. You can read more about the Implicit Flow here: https://learn.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-implicit-grant-flow
The SameSite cookie warnings are expected, as some cookies on login.microsoftonline.com have been intentionally left without the SameSite attribute. All of the cookies on login.microsoftonline.com that are needed for authentication have been updated. You can read more here: https://learn.microsoft.com/en-us/azure/active-directory/develop/howto-handle-samesite-cookie-changes-chrome-browser?tabs=dotnet
For issue 697, the main problem of your application being reloaded in a hidden iframe has been addressed in v1.2.0, which now allows you to specify a page without MSAL or any other content (e.g. a blank HTML page) as your redirect URI; this mitigates the performance overhead of reloading your application in the iframe/popup. You can also set your redirect URI per request, in case you need a different URI than the one set in the configuration.
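As a sketch of that last point (assuming MSAL.js v1.2+; "/auth-blank.html" is a hypothetical empty page you would host yourself):

clientApplication.acquireTokenSilent({
  scopes: ["https://graph.microsoft.com/User.Read"],
  // Hypothetical blank page; avoids reloading the whole SPA in the hidden iframe
  redirectUri: window.location.origin + "/auth-blank.html"
}).then(function (response) {
  console.log(response.accessToken);
}, function (error) {
  console.log(error);
});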
And if you find iframes difficult to debug, know that we are working on a new version of the library which will use the Auth Code Flow with PKCE (instead of the Implicit Flow) to acquire tokens; it will use CORS HTTP requests to obtain tokens instead of iframes. We are planning to have that available this month.

How to access custom response values from a page script?

This might sound like a silly question, but I have really tried everything I could to figure it out. I am creating a variable and adding it to my response object in a custom Express server file, like so:
server.get('*', (req, res) => {
  res.locals.user = req.user || null;
  handle(req, res);
});
Now I want to access this res.locals.user object from all of my pages, i.e. index.js, about.js, etc., in order to keep tabs on the active session's user credentials. It's got to be possible some way, right?
P.S.: After reading some threads on the Next.js GitHub page, I tried accessing it from my props object as this.props.user, but it keeps returning null even when a server-side console.log shows non-null values.
The res object is available on the server as a parameter to getInitialProps. So, with the server code you have above, you can do
static async getInitialProps({ res }) {
  return { user: res.locals.user }
}
to make it available as this.props.user.
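For context, a minimal page putting this together might look like the following (a sketch; note that res is only defined when getInitialProps runs on the server, so guard it for client-side navigation, and "name" is a hypothetical property of your user object):

import React from 'react';

class IndexPage extends React.Component {
  static async getInitialProps({ res }) {
    // res exists only on the server; on client-side navigation it is undefined
    return { user: res ? res.locals.user : null };
  }

  render() {
    return <div>Signed in as: {this.props.user ? this.props.user.name : 'guest'}</div>;
  }
}

export default IndexPage;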

Can I delay PUT upload in Express until my server finishes an operation?

I have a server that receives files via PUT and in turn stores them in S3.
After a client PUTs files, it will usually request them right back for rendering. Unfortunately, my server takes a bit of time to actually process the upload and store the files in S3.
Here is the code I use in Express:
req.pipe(writeStream);

res.header('Access-Control-Allow-Origin', '*');
res.header('Access-Control-Allow-Methods', 'PUT, OPTIONS');
res.header('Access-Control-Allow-Headers', 'Content-Type');

if ('OPTIONS' === req.method) {
  res.sendStatus(200);
  res.end('OK!');
  return;
}

if ('PUT' === req.method) {
  writeStream.on('error', () => {
    res.sendStatus(500);
  });
  writeStream.on('close', async () => {
    // This might take 30s
    await this.storage.store(fn, args.blobID);
    // This does not seem to have any effect.
    // I would like to prevent the client from 'finishing' the
    // PUT request before I say it is done.
    res.end('OK!')
  });
  res.sendStatus(200);
}
Ideally, I could 'stall' my client's PUT request until storage.store() completes, making sure the file is actually available.
Is there any way I could do this?
Also, I am not sure about the sendStatus() / end() calls; feel free to comment if I mixed something up.
P.S. I know, in an ideal world I would just hand out signed URLs and let the client upload to S3 directly, but I need AES256 encryption ...
I'm not familiar with S3 uploading, but I would guess there is a callback (such as a success/failure response sent back to you). (If not, please tell me.)
S3.upload(data, function(response) { // something like this
  // then, you can send your response here
  // ...
  // res.end() or res.sendStatus(...)
})
As I remember it, Express doesn't time out responses automatically, so if you send the response from inside S3's callback, the request will not end until your store operation ends.
From Must res.end() be called in express with node.js?
You don't have to call res.end() if you call res.send(). res.send() calls res.end() for you.
and res.sendStatus() is equivalent to res.status(...).send(message or code) (via http://expressjs.com/en/api.html)
So, res.sendStatus() will call end().
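Putting that together with the code from the question, here is a sketch of the PUT branch that holds the response until the store finishes (assuming this.storage.store() returns a Promise, as the await in the question suggests):

if ('PUT' === req.method) {
  writeStream.on('error', () => {
    res.sendStatus(500); // sends the status and ends the response
  });
  writeStream.on('close', async () => {
    try {
      // Hold the response open until the S3 store completes (may take ~30s)
      await this.storage.store(fn, args.blobID);
      res.sendStatus(200); // only now does the client see the PUT finish
    } catch (err) {
      res.sendStatus(500);
    }
  });
  // Note: no early res.sendStatus(200) here -- responding immediately is
  // what allowed the client to 'finish' the PUT before the file was in S3.
}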

node.js, css validation, w3c banned me?

I apologize in advance: I am writing through a translator, and my English is very bad.
I am faced with the following problem: I need to validate CSS files. To this end, I decided to use the NPM package w3c-css. At first it worked, but then it started returning "connect ETIMEDOUT" errors; while investigating, I noticed that the validator also stopped working in my browser.
Sniffer log at start of my script: link (<10 rep :( )
My code:
var gulp = require('gulp');
var through2 = require('through2');
var w3c_css = require('w3c-css');

gulp.task('css', function() {
  gulp.src('dev/sass/*.scss')
    .pipe(through2.obj(function(file, enc, cb) {
      w3c_css.validate({text: file.contents.toString('utf8')}, function(err, data) {
        if(err) {
          // an error happened
          console.error(err);
        } else {
          // validation errors
          console.log('validation errors', data.errors);
          // validation warnings
          console.log('validation warnings', data.warnings);
        }
      });
      cb(null, file);
    }))
    .pipe(gulp.dest('build/'));
});
What is the reason? Is it some mistake on my end, or have I been blocked for making requests too frequently, and the block will not be lifted? Maybe there is some other way to check the CSS files?
Thx!
From the "About" page of the CSS validation service of the W3C:
Can I build an application upon this validator? Is there an API?
Yes, and yes. The CSS Validator has a (RESTful) SOAP interface which should make it reasonably easy to build applications (Web or otherwise) upon it. Good manners and respectful usage of shared resources are of course customary: make sure your applications sleep() between calls to the validator, or install and run your own instance of the validator.
So yes, it seems you have been banned.
I don't know how to make a gulp task pause between calls, but you could mount a local instance of the CSS Validator web service and edit the w3c-css package to point to your own server.
Make sure that your script will sleep for at least 1 second between requests.
From the manual:
Note: If you wish to call the validator programmatically for a batch of documents, please make sure that your script will sleep for at least 1 second between requests. The CSS Validation service is a free, public service for all, your respect is appreciated. thanks.
To validate multiple links, use async + setTimeout or any related way to pause between the requests:
'use strict';
var async = require('async');
var validator = require('w3c-css');

var hrefs = [
  'http://google.com',
  'https://developer.mozilla.org',
  'http://www.microsoft.com/'];

async.eachSeries(hrefs, function(href, next) {
  validator.validate(href, function(err, data) {
    // { process err, data.errors & data.warnings }
    // sleep for 1.5 seconds between the requests
    setTimeout(function() { next(err); }, 1500);
  });
}, function(err) {
  if(err) {
    console.log('Failed to process an url', err);
  } else {
    console.log('All urls have been processed successfully');
  }
});
EDIT:
To mitigate this issue:
Added some comments and an example.
Placed setTimeout right into the gulp-w3c-css plugin.
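For reference, here is a sketch of the same throttling applied to the original gulp task (assuming the question's w3c-css and through2 setup; delaying the through2 callback spaces out consecutive validate() calls):

var gulp = require('gulp');
var through2 = require('through2');
var w3c_css = require('w3c-css');

gulp.task('css', function() {
  return gulp.src('dev/sass/*.scss')
    .pipe(through2.obj(function(file, enc, cb) {
      w3c_css.validate({text: file.contents.toString('utf8')}, function(err, data) {
        if(err) {
          console.error(err);
        } else {
          console.log('validation errors', data.errors);
          console.log('validation warnings', data.warnings);
        }
        // Signal through2 only after a 1.5s pause, so the next file's
        // validate() call is spaced out.
        setTimeout(function() { cb(null, file); }, 1500);
      });
    }))
    .pipe(gulp.dest('build/'));
});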

Meteor http calls limitations

Currently, I use the built-in meteor http method (see http://docs.meteor.com/#http) for issuing http calls, on both my client and my server.
However, I'm experiencing two issues:
is it possible to cancel a request?
is it possible to have multiple query parameters which share the same key?
Are these just Meteor limitations, or are there ways to get both to work using Meteor?
I know I could use jQuery on the client side, and there must be a server-side solution which supports both as well, but I'd prefer to stick with Meteor code here.
"is it possible to cancel a request?"
HTTP.call() does not appear to return an object on which we could call something like a stop() method. Perhaps a solution would be to prevent execution of your callback based on a Session variable?
HTTP.call("GET", url, function(error, result) {
if (!Session.get("stopHTTP")) {
// Callback code here
}
});
Then when you reach a point where you want to cancel the request, do this:
Session.set("stopHTTP", true);
On the server, instead of Session perhaps you could use an environment variable?
Note that the HTTP.call() options object does accept a timeout key, so if you're just worried about the request never timing out, you can set this to whatever millisecond integer you want.
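For example, a call that gives up after 5 seconds might look like this (a minimal sketch; the timeout value is in milliseconds):

HTTP.call("GET", url, { timeout: 5000 }, function(error, result) {
  // error is set if the request timed out after 5 seconds
});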
"is it possible to have multiple query parameters which share the same key?"
Yes, this appears to be possible. Here's a simple test I used:
Meteor code:
HTTP.call("GET", "http://localhost:1337", {
query: "id=foo&id=bar"
}, function(error, result) {
// ...
});
Separate Node.js server: (just the basic example on the Node.js homepage, with a console.log line to output the request URL with query string)
var http = require('http');

http.createServer(function(req, res) {
  console.log(req.url); // Here I log the request URL, with the query string
  res.writeHead(200, {
    'Content-Type': 'text/plain'
  });
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');

console.log('Server running at http://127.0.0.1:1337/');
When the Meteor server is run, the Node.js server logged:
/?id=foo&id=bar
Of course, this is only for GET URL query parameters. If you need to do this for POST params, perhaps you could store the separate values as a serialized array string with EJSON.stringify?
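As a sketch of that last idea (EJSON is built into Meteor; the receiving endpoint would need to EJSON.parse the value back into an array):

HTTP.call("POST", url, {
  params: {
    // Serialize multiple values under one key as a single string
    ids: EJSON.stringify(["foo", "bar"])
  }
}, function(error, result) {
  // On the server: EJSON.parse(params.ids) recovers ["foo", "bar"]
});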
