Difference between proto-loader and generated .js files from gRPC

I am a beginner with gRPC and Node.js.
So far, I have used proto-loader to load my services from the proto file. Now I want to switch to using the generated files from gRPC; in my case I have comment_pb.js. So I replace this code:
import protoLoader from '@grpc/proto-loader';
const packageDefinition = protoLoader.loadSync('../../../protos/comment.proto', {
  keepCase: true,
  longs: String,
  enums: String,
  arrays: true
});
// pass proto in grpc
var commentProto = grpc.loadPackageDefinition(packageDefinition);
with:
import protoLoader from './proto/comment_pb.js';
I don't know what the difference between these two methods is. Does this change the rest of the code? Are there any helpful links about this?

The gRPC Node Basics tutorial walks through a dynamic_codegen example, but there is also a very similar-looking static_codegen example. It is useful to go through them and compare them to understand the difference between the static and dynamic approaches.
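To make the contrast concrete, here is a minimal sketch of the static-codegen style. It assumes @grpc/grpc-js (the generated files can also target the older grpc package, depending on how they were generated) and that the proto was compiled with grpc-tools into comment_pb.js (message classes) plus a companion comment_grpc_pb.js (client/service stubs); the CommentService, its ListComments rpc, the post_id field, and the address are hypothetical placeholders rather than names from your proto:
const grpc = require('@grpc/grpc-js');
const messages = require('./proto/comment_pb');       // generated message classes
const services = require('./proto/comment_grpc_pb');  // generated client/service stubs
// Static codegen gives you a concrete client class instead of a package object
// built at runtime by grpc.loadPackageDefinition().
const client = new services.CommentServiceClient(
  'localhost:50051',
  grpc.credentials.createInsecure()
);
// Requests are protobuf message instances with setters, not plain JS objects
// as with proto-loader.
const request = new messages.ListCommentsRequest();
request.setPostId('42');
client.listComments(request, (err, response) => {
  if (err) throw err;
  console.log(response.toObject()); // generated messages expose toObject() for plain data
});
So yes, switching does change the rest of the code: with proto-loader you pass and receive plain JavaScript objects, while with the generated files you construct message instances through setters and read them through getters or toObject(), and the client class comes from comment_grpc_pb.js rather than from grpc.loadPackageDefinition().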

Related

NextJS: How can I create a server side request context

I need to forward a header from the browser to an external API I call from the server side.
The external API is called from getServerSideProps and API routes.
I was thinking about implementing some sort of request context (using AsyncLocalStorage, for example) that I can access from anywhere in the server-side code.
That way I could create a middleware that saves the header to the context, and in the external API client I'd fetch it from the context and add it to the requests.
For example:
// context.ts
import { AsyncLocalStorage } from 'async_hooks';
export const context = new AsyncLocalStorage<string>();
// middleware.ts
import { NextRequest, NextResponse } from 'next/server';
import { context } from './context';
export function middleware(request: NextRequest) {
  const store = request.headers.get(SOME_HEADER);
  return context.run(store, () => NextResponse.next());
}
// client.ts
import axios from 'axios';
import { context } from './context';
axios.post(EXTERNAL_API, DATA, {
  headers: {
    [SOME_HEADER]: context.getStore()
  }
}).then(...)
Currently I simply pass it around as a parameter, which is pretty tedious.
Is there a way of achieving this?
I tried adding async_hooks to my project, but it got really messy when I tried to build the project.
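For reference, the AsyncLocalStorage mechanism itself behaves as expected in plain Node (outside of the Next.js build); below is a minimal sketch with made-up function names showing how run() scopes the value so that getStore() can read it anywhere down the async call chain:
const { AsyncLocalStorage } = require('async_hooks');
const context = new AsyncLocalStorage();
async function callExternalApi() {
  // Anywhere down the async call chain, the stored value is available.
  console.log('forwarding header:', context.getStore());
}
function handleRequest(headerValue) {
  // Everything awaited inside this callback sees the same store.
  return context.run(headerValue, () => callExternalApi());
}
handleRequest('some-referer-value'); // logs: forwarding header: some-referer-value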

Hyper cannot find Server module

I'm writing a "hello world" HTTP server with Hyper; however, I am not able to find the Server and rt modules when trying to import them.
When invoking cargo run, I see this error message:
26 | let server = hyper::Server::bind(&addr).serve(router);
| ^^^^^^ could not find `Server` in `hyper`
I must be missing something obvious about Rust and Hyper. What I am trying to do is write something as dry/simple as possible, with just the HTTP layer and some basic routes. I would like to include as few third-party dependencies as possible, e.g. avoiding Tokio, which I think involves async behavior, but I am not sure about the context as I am new to Rust.
It looks like I must use futures, so I included that dependency, and perhaps futures only work with the async reserved word (which I am not sure comes from Tokio or from Rust itself).
What confuses me is that in the Hyper examples I do see imports like use hyper::{Body, Request, Response, Server};, so that Server thing must be there, somewhere.
These are the dependencies in Cargo.toml:
hyper = "0.14.12"
serde_json = "1.0.67"
futures = "0.3.17"
This is the code in main.rs:
use futures::future;
use hyper::service::service_fn;
use hyper::{Body, Method, Response, StatusCode};
use serde_json::json;
fn main() {
    let router = || {
        service_fn(|req| match (req.method(), req.uri().path()) {
            (&Method::GET, "/foo") => {
                let mut res = Response::new(
                    Body::from(json!({"message": "bar"}).to_string())
                );
                future::ok(res)
            },
            (_, _) => {
                let mut res = Response::new(
                    Body::from(json!({"content": "route not found"}).to_string())
                );
                *res.status_mut() = StatusCode::NOT_FOUND;
                future::ok(res)
            }
        })
    };
    let addr = "127.0.0.1:8080".parse::<std::net::SocketAddr>().unwrap();
    let server = hyper::Server::bind(&addr).serve(router); // <== this line fails to compile
    // hyper::rt::run(server.map_err(|e| {
    //     eprintln!("server error: {}", e);
    // }));
}
How do I make the code above compile and run?
According to the documentation, you are missing one module namespace in your call; it should be hyper::server::Server:
let server = hyper::server::Server::bind(&addr).serve(router)
In order to use the server, you also need to activate the corresponding feature flag in Cargo.toml:
hyper = { version = "0.14.12", features = ["server"] }

Best way to intercept XHR request on page with Puppeteer and return mock response

I need to be able to intercept XHR requests on page loaded with Puppeteer and return mock responses in order to organize backendless testing for my web app. What's the best way to do this?
It seems that the way to go is indeed request.respond(), but still, I couldn't find a concrete example on the web of how to use it. The way I did it was like this:
// Intercept the API request and pass mock data to Puppeteer
await page.setRequestInterception(true);
page.on('request', request => {
  if (request.url() === constants.API) {
    request.respond({
      contentType: 'application/json',
      headers: {"Access-Control-Allow-Origin": "*"},
      body: JSON.stringify(constants.biddersMock)
    });
  } else {
    request.continue();
  }
});
What happens here exactly?
Firstly, all requests are intercepted with page.setRequestInterception(true).
Then, for each request I look for the one I am interested in by matching its URL with if (request.url() === constants.API), where constants.API is just the endpoint I need to match.
If it matches, I pass my own response with request.respond(); otherwise I just let the request continue with request.continue().
Two more points:
constants.biddersMock above is an array
The CORS header is important, or access to your mock data will not be allowed
Please comment or refer to resources with better example(s).
Well, in the newest Puppeteer there is the request.respond() method to handle exactly this situation.
If anyone is interested I ended up creating special app build for my testing needs, which adds Pretender to the page. And I communicate with Pretender server using Puppeteer's evaluate method.
This is not ideal, but I couldn't find a way to achieve what I need with Puppeteer only. There is a way to intercept requests with Puppeteer, but there seems to be no way to provide a fake response for a given request.
UPDATE:
As X Rene mentioned there is now native support for this in Puppeteer v0.13.0 using request.respond() method. I'm going to rewrite my tests to use it instead of Pretender, since this will simplify many things for me.
UPDATE 2:
There is pptr-mock-server available now to accomplish this. Internally it relies on request interception and the request.respond() method. The library is pretty minimal and may not fit your needs, but it at least provides an example of how to implement backendless testing using Puppeteer. Disclaimer: I'm the author of it.
I created a library that uses Puppeteer's page.on('request') and page.on('response') to record requests and respond with mocked responses.
https://github.com/axiomhq/puppeteer-request-intercepter
npm install puppeteer-request-intercepter
const puppeteer = require('puppeteer');
const { initFixtureRouter } = require('puppeteer-request-intercepter');
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Intercept and respond with mocked data.
  const fixtureRouter = await initFixtureRouter(page, { baseUrl: 'https://news.ycombinator.com' });
  fixtureRouter.route('GET', '/y18.gif', 'y18.gif', { contentType: 'image/gif' });
  await page.goto('https://news.ycombinator.com', { waitUntil: 'networkidle2' });
  await page.pdf({ path: 'hn.pdf', format: 'A4' });
  await browser.close();
})();
You may want to try out Mockiavelli, a request mocking library for Puppeteer. It was built exactly for backendless testing of web apps. It integrates best with jest and jest-puppeteer, but works with any testing library.

Is it a bad idea to use synchronous filesystem methods in a Dart web server?

I'm playing around with HttpServer and was adding support for serving static files (I'm aware of Shelf; I'm doing this as a learning exercise). I have a list of handlers that are each given the opportunity to handle the request in sequence (stopping at the first one that handles it):
const handlers = const [
  handleStaticRequest
];
handleRequest(HttpRequest request) {
  // Run through all handlers; and if none handle the request, 404
  if (!handlers.any((h) => h(request))) {
    request.response.statusCode = HttpStatus.NOT_FOUND;
    request.response.headers.contentType = new ContentType("text", "html");
    request.response.write('<h1>404 File Not Found</h1>');
    request.response.close();
  }
}
However, as I implemented the static file handler, I realised that I couldn't return true/false directly (which is required by the handleRequest code above to signal whether the request was handled) unless I used file.existsSync().
In something like ASP.NET, I wouldn't think twice about a blocking call in a request because it's threaded; however in Dart, it seems like it would be a bottleneck if every request is blocking every other request for the duration of IO hits like this.
So, I decided to have a look in Shelf, to see how that handled this; but disappointingly, that appears to do the same (in fact, it does several synchronous filesystem hits).
Am I overestimating the impact of this, or is this a bad idea for a Dart web service? I'm not writing Facebook, but I'd still like to learn to write things in the most efficient way.
If this is considered bad, is there a built-in way of doing "execute these futures sequentially until the first one returns a match for this condition"? I can see Future.forEach, but that doesn't have the ability to bail out early. I guess "Future.any" is probably what it'd be called if it existed (but it doesn't).
Using Shelf is the right approach here.
But there is still a trade-off between sync and async within the static handler package.
Blocking on I/O obviously limits concurrency, but there is a non-zero cost to injecting Future into a code path.
I will dig in a bit to get a better answer here.
After doing some investigation, it does not seem that adding async I/O in shelf_static improves performance, except for the bit that's already async: reading file contents.
return new Response.ok(file.openRead(), headers: headers);
The actual reading of file contents is done by passing a Stream to the response. This ensures that the bulk of the slow I/O happens in a non-blocking way. This is key.
In the meantime, you may want to look at Future.forEach for an easy way to invoke an arbitrary number of async methods sequentially.
There are a lot of good questions in your post (perhaps we should split them out into individual SO questions?).
To answer the post title's question, the best practice for servers is to use the async methods.
For command-line utilities and simple scripts, the sync methods are perfectly fine.
I think it becomes a problem if you do file access that blocks for a long time (reading/writing/searching big files locally or over the network).
I can't imagine file.existsSync() doing much damage. If you are already in async code, it's easy to stay async, but if you have to go async just for the sake of not using file.existsSync(), I would consider this premature optimization.
A little off-topic, but it solved the problem I was trying to solve when reading the discussion on this question. I was not able to achieve an async operation in a handler with io.serve, so I used dart:io for active pages and shelf_io's handleRequest for static files:
import 'dart:io';
import 'dart:async' show runZoned;
import 'package:path/path.dart' show join, dirname;
import 'package:shelf/shelf_io.dart' as io;
import 'package:shelf_static/shelf_static.dart';
import 'dart:async';
import 'package:sqljocky/sqljocky.dart';
void main() {
  HttpServer
      .bind(InternetAddress.ANY_IP_V4, 9999)
      .then((server) {
    server.listen((HttpRequest request) {
      String path = request.requestedUri.path;
      if (path == "/db") {
        var pool = new ConnectionPool(host: 'localhost', port: 3306, user: 'root', db: 'db', max: 5);
        var result = pool.query("select * from myTable");
        result.then((Results data) {
          data.first.then((Row row) {
            request.response.write(row.toString());
            request.response.close();
          });
        });
      } else {
        String pathToBuild = join(dirname(Platform.script.toFilePath()), '..', 'build/web');
        var handler = createStaticHandler(pathToBuild, defaultDocument: 'index.html');
        io.handleRequest(request, handler);
      }
    });
  });
}
Many months later I've found out how to create that Stream... (still off-topic... a little)
shelf.Response _echoRequest(shelf.Request request) {
  StreamController controller = new StreamController();
  Stream<List<int>> out = controller.stream;
  new Future.delayed(const Duration(seconds: 1)).then((_) {
    controller.add(const Utf8Codec().encode("hello"));
    controller.close();
  });
  return new shelf.Response.ok(out);
}

setting request headers in selenium

I'm attempting to set the request header 'Referer' to spoof a request coming from another site. We need the ability to test that when a specific referrer is used, a specific form is returned to the user; otherwise an alternative form is given.
I can do this within poltergeist by:
page.driver.headers = {"Referer" => referer_string}
but I can't find the equivalent functionality for the Selenium driver.
How can I set request headers in the capybara selenium driver?
WebDriver doesn't contain an API to do it. See issue 141 from the Selenium tracker for more info. The title of the issue says that it's about response headers, but it was decided that Selenium won't contain an API for request headers in the scope of that issue. Several issues about adding an API to set request headers have been marked as duplicates: first, second, third.
Here are a couple of possibilities that I can propose:
Use another driver/library instead of selenium
Write a browser-specific plugin (or find an existing one) that allows you to add headers to requests.
Use browsermob-proxy or some other proxy.
I'd go with option 3 in most cases. It's not hard.
Note that GhostDriver has an API for it, but it's not supported by other drivers.
For those people using Python, you may consider using Selenium Wire which can set request headers as well as provide you with the ability to inspect requests and responses.
from seleniumwire import webdriver # Import from seleniumwire
# Create a new instance of the Chrome driver (or Firefox)
driver = webdriver.Chrome()
# Create a request interceptor
def interceptor(request):
    del request.headers['Referer']  # Delete the header first
    request.headers['Referer'] = 'some_referer'
# Set the interceptor on the driver
driver.request_interceptor = interceptor
# All requests will now use 'some_referer' for the referer
driver.get('https://mysite')
Install with:
pip install selenium-wire
I had the same issue. I solved it by downloading the modify-headers Firefox add-on and activating it with Selenium.
The code in Python is the following:
fp = webdriver.FirefoxProfile()
path_modify_header = 'C:/xxxxxxx/modify_headers-0.7.1.1-fx.xpi'
fp.add_extension(path_modify_header)
fp.set_preference("modifyheaders.headers.count", 1)
fp.set_preference("modifyheaders.headers.action0", "Add")
fp.set_preference("modifyheaders.headers.name0", "Name_of_header") # Set here the name of the header
fp.set_preference("modifyheaders.headers.value0", "value_of_header") # Set here the value of the header
fp.set_preference("modifyheaders.headers.enabled0", True)
fp.set_preference("modifyheaders.config.active", True)
fp.set_preference("modifyheaders.config.alwaysOn", True)
driver = webdriver.Firefox(firefox_profile=fp)
I had the same issue today, except that I needed to set a different referer per test. I ended up using a middleware and a class to pass headers to it. Thought I'd share (or maybe there's a cleaner solution?):
lib/request_headers.rb:
class CustomHeadersHelper
  cattr_accessor :headers
end
class RequestHeaders
  def initialize(app, helper = nil)
    @app, @helper = app, helper
  end
  def call(env)
    if @helper
      headers = @helper.headers
      if headers.is_a?(Hash)
        headers.each do |k, v|
          env["HTTP_#{k.upcase.gsub("-", "_")}"] = v
        end
      end
    end
    @app.call(env)
  end
end
config/initializers/middleware.rb
require 'request_headers'
if %w(test cucumber).include?(Rails.env)
  Rails.application.config.middleware.insert_before Rack::Lock, "RequestHeaders", CustomHeadersHelper
end
spec/support/capybara_headers.rb
require 'request_headers'
module CapybaraHeaderHelpers
  shared_context "navigating within the site" do
    before(:each) { add_headers("Referer" => Capybara.app_host + "/") }
  end
  def add_headers(custom_headers)
    if Capybara.current_driver == :rack_test
      custom_headers.each do |name, value|
        page.driver.browser.header(name, value)
      end
    else
      CustomHeadersHelper.headers = custom_headers
    end
  end
end
spec/spec_helper.rb
...
config.include CapybaraHeaderHelpers
Then I can include the shared context wherever I need it, or pass different headers in another before block. I haven't tested it with anything other than Selenium and RackTest, but it should be transparent, as the header injection is done before the request actually hits the application.
I wanted something a bit slimmer for RSpec/Ruby so that the custom code only had to live in one place. Here's my solution:
/spec/support/selenium.rb
...
RSpec.configure do |config|
  config.after(:suite) do
    $custom_headers = nil
  end
end
module RequestWithExtraHeaders
  def headers
    $custom_headers.each do |key, value|
      self.set_header "HTTP_#{key}", value
    end if $custom_headers
    super
  end
end
class ActionDispatch::Request
  prepend RequestWithExtraHeaders
end
Then in my specs:
/specs/features/something_spec.rb
...
$custom_headers = {"Referer" => referer_string}
If you are using JavaScript and only need to support Chrome, Puppeteer is the best option as it has native support for modifying headers.
Check this out: https://pptr.dev/#?product=Puppeteer&version=v10.1.0&show=api-pagesetextrahttpheadersheaders
For cross-browser usage, you might check out the @requestly/selenium npm package. It is a wrapper around the Requestly extension to enable easy integration with selenium-webdriver. The extension can modify headers.
Check out: https://www.npmjs.com/package/@requestly/selenium
Setting request headers in the web driver directly does not work; this is true.
However, you can work around this problem by using the browser DevTools protocol (I tested with Edge and Chrome), and this works perfectly.
According to the documentation, you can add custom headers:
https://chromedevtools.github.io/devtools-protocol/tot/Network/
Please find below an example.
[Test]
public async Task AuthenticatedRequest()
{
    await LogMessage("=== starting the test ===");
    EdgeOptions options = new EdgeOptions { UseChromium = true };
    options.AddArgument("no-sandbox");
    var driver = new RemoteWebDriver(new Uri(_testsSettings.GridUrl), options.ToCapabilities(), TimeSpan.FromMinutes(3));
    // Get DevTools
    IDevTools devTools = driver;
    // DevTools session
    var session = devTools.GetDevToolsSession();
    var devToolsSession = session.GetVersionSpecificDomains<DevToolsSessionDomains>();
    await devToolsSession.Network.Enable(new Network.EnableCommandSettings());
    var extraHeader = new Network.Headers();
    var data = await Base64KerberosTicket();
    var headerValue = $"Negotiate {data}";
    await LogMessage($"header values is {headerValue}");
    extraHeader.Add("Authorization", headerValue);
    await devToolsSession.Network.SetExtraHTTPHeaders(new Network.SetExtraHTTPHeadersCommandSettings
    {
        Headers = extraHeader
    });
    driver.Url = _testsSettings.TestUrl;
    driver.Navigate();
    driver.Quit();
    await LogMessage("=== ending the test ===");
}
This is an example written in C#, but the same should work with Java, Python, and the other major platforms.
Hope it helps the community.
If you use the HtmlUnitDriver, you can set request headers by modifying the WebClient, like so:
final case class Header(name: String, value: String)
final class HtmlUnitDriverWithHeaders(headers: Seq[Header]) extends HtmlUnitDriver {
  super.modifyWebClient {
    val client = super.getWebClient
    headers.foreach(h => client.addRequestHeader(h.name, h.value))
    client
  }
}
The headers will then be on all requests made by the web browser.
Of the solutions already discussed above, the most reliable one is using Browsermob-Proxy.
But when working with a remote grid machine, Browsermob-Proxy isn't really helpful.
This is how I fixed the problem in my case. Hopefully it will be helpful for anyone with a similar setup.
Add the ModHeader extension to the chrome browser
How to download the Modheader? Link
ChromeOptions options = new ChromeOptions();
options.addExtensions(new File("C://Downloads//modheader//modheader.crx"));
// Set the desired capabilities
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability(ChromeOptions.CAPABILITY, options);
// Instantiate the chrome driver with the options
WebDriver driver = new RemoteWebDriver(new URL(YOUR_HUB_URL), options);
Go to the browser extensions and capture the Local Storage context ID of the ModHeader extension.
Navigate to the URL of the ModHeader extension to set the Local Storage context:
// set the context on the extension so the localStorage can be accessed
driver.get("chrome-extension://idgpnmonknjnojddfkpgkljpfnnfcklj/_generated_background_page.html");
Where `idgpnmonknjnojddfkpgkljpfnnfcklj` is the value captured in step 2.
Now add the headers to the request using JavaScript:
((JavascriptExecutor) driver).executeScript(
    "localStorage.setItem('profiles', JSON.stringify([{ title: 'Selenium', hideComment: true, appendMode: '', " +
    "headers: [" +
    "{enabled: true, name: 'token-1', value: 'value-1', comment: ''}," +
    "{enabled: true, name: 'token-2', value: 'value-2', comment: ''}" +
    "]," +
    "respHeaders: []," +
    "filters: []" +
    "}]));");
Where token-1, value-1, token-2, value-2 are the request header names and values to be added.
Now navigate to the required web application.
driver.get("your-desired-website");
You can do it with PhantomJSDriver.
PhantomJSDriver pd = ((PhantomJSDriver) ((WebDriverFacade) getDriver()).getProxiedDriver());
pd.executePhantomJS(
"this.onResourceRequested = function(request, net) {" +
" net.setHeader('header-name', 'header-value')" +
"};");
Using the request object, you can also filter so that the header won't be set for every request.
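For example, here is a sketch of the PhantomJS-side script with such a filter; the /api substring is just a hypothetical match, not something PhantomJS requires:
this.onResourceRequested = function(request, net) {
  // Only add the header to requests whose URL contains /api (hypothetical filter).
  if (request.url.indexOf('/api') !== -1) {
    net.setHeader('header-name', 'header-value');
  }
};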
If you just need to set the User-Agent header, there is an option for Chrome:
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36"')
Now the browser sends the User-Agent you specified.

Resources