Using sel_ldr for a NaCl application using sockets - google-nativeclient

I have a NaCl program that uses nacl_io. I'm looking for a way to run automated tests on it without relying on a browser. As long as I'm not using sockets, using sel_ldr to run the executable seems to do the job. I can use mkdir() and create files, for example.
However, calling socket() in my program fails with "Permission denied".
Does the fact that I'm using sockets mean that I must run my tests in a browser?
If so, what is the best way to automate this kind of test?

There is no way to use real sockets from sel_ldr, but you can use test fakes instead.
For the nacl_io tests, we use fake Pepper interfaces that have simple implementations. See https://code.google.com/p/chromium/codesearch#chromium/src/native_client_sdk/src/tests/nacl_io_test/fake_ppapi/.
We haven't yet implemented a fake socket interface, but it should be possible. You would need to implement the following interfaces:
PPB_MESSAGE_LOOP_INTERFACE_1_0
PPB_TCPSOCKET_INTERFACE_1_1
PPB_UDPSOCKET_INTERFACE_1_0
Then when initializing nacl_io in your tests, pass in your own PPB_GetInterface callback:
const void* my_get_interface(const char* interface_name) {
  if (strcmp(interface_name, PPB_MESSAGE_LOOP_INTERFACE_1_0) == 0) {
    return my_fake_message_loop_interface;
  } else if (...) {
    ...
  }
  return NULL;
}

nacl_io_init_ppapi(pp_instance, my_get_interface);
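Once nacl_io is initialized this way, socket() in the code under test should resolve against your fakes instead of failing with "Permission denied". A rough sketch of what a test body could then look like, assuming the fake interfaces above are registered (the assertion style depends on your test harness):

#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

void test_socket_with_fakes(void) {
  /* With fake PPB_TCPSocket/PPB_UDPSocket interfaces registered via
     nacl_io_init_ppapi(), this can run under sel_ldr without a browser. */
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  assert(fd >= 0);
  close(fd);
}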

Related

environment variable use in Cloudflare workers node.js

I saw many articles about setting environment variables in Cloudflare Workers, but I am not able to read or retrieve them in Node.js.
Code:
async function handleRequest(request) {
  if ('OKOK' == process.env.API_KEY) {
    return new Response('found', {
      headers: { 'content-type': 'text/plain' },
    })
  }
}
wrangler.toml
name = "hello"
type = "javascript"
# account_id = ""
workers_dev = true
[env.production]
name = "API_KEY"
Cloudflare Workers does not use Node.js. In Workers, environment variables become simple globals. So, to access your environment variable, you would just write API_KEY, not process.env.API_KEY.
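For example, if the variable is defined under [vars] in wrangler.toml (a likely intent behind the [env.production] block above; the exact config is yours to choose), the handler reads it as a bare global:

# wrangler.toml
[vars]
API_KEY = "OKOK"

// worker.js (service-worker syntax)
async function handleRequest(request) {
  // API_KEY is injected as a global, not via process.env
  if ('OKOK' == API_KEY) {
    return new Response('found', {
      headers: { 'content-type': 'text/plain' },
    })
  }
  return new Response('not found', { status: 404 })
}

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})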
(Note: Workers is currently transitioning to a new syntax based on ES modules. In that syntax, environment variables work differently; an env object is passed to the event handler containing all variables. Most people aren't using this new syntax yet, though. You would know if you are using it if your JavaScript uses export default { to define event handlers; on the other hand, if it uses addEventListener("fetch", ...), then it is using the old syntax.)
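For reference, the module-syntax equivalent would look roughly like this, with bindings arriving on the env parameter:

export default {
  async fetch(request, env) {
    // In the ES modules syntax, variables live on the env object
    if ('OKOK' == env.API_KEY) {
      return new Response('found', {
        headers: { 'content-type': 'text/plain' },
      })
    }
    return new Response('not found', { status: 404 })
  }
}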
I recommend you use Miniflare now; it is easy and straightforward:
"start": "miniflare --watch --debug -e .env"
This is an official Cloudflare lib by the way:
https://www.npmjs.com/package/miniflare
Reason:
Miniflare is a simulator for developing and testing Cloudflare Workers.
🎉 Fun: develop workers easily with detailed logging, file watching
and pretty error pages supporting source maps.
🔋 Full-featured: supports most Workers features, including KV,
Durable Objects, WebSockets, modules and more.
⚡ Fully-local: test and develop Workers without an internet
connection. Reload code on change quickly. It's an alternative to
wrangler dev, written in TypeScript, that runs your workers in a
sandbox implementing Workers' runtime APIs.
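To tie it together: that "start" entry goes in package.json's scripts, and the -e flag points Miniflare at a .env file holding your variables. Roughly like this (file contents are illustrative, not your actual values):

// package.json (excerpt)
"scripts": {
  "start": "miniflare --watch --debug -e .env"
}

# .env
API_KEY=OKOK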

How to start a new HTTP server or use an existing one for pprof?

The pprof package documentation says
"The package is typically only imported for the side effect of registering its HTTP handlers. The handled paths all begin with /debug/pprof/."
The documentation says that if you already have an HTTP server running you don't need to start another one, but if you are not using DefaultServeMux, you will have to register the handlers with the mux you are using.
Shouldn't I always use a separate port for pprof? Is it okay to use the same port that I am using for prometheus metrics?
net/http/pprof is a convenience package. It always registers its handlers on DefaultServeMux, because DefaultServeMux is a global variable, and that is the only mux the package can reach when it registers handlers in its init.
If you want to serve pprof results on some other ServeMux there's really nothing to it; all it takes is calling runtime/pprof.StartCPUProfile(w) with an http.ResponseWriter and then sleeping, or calling p.WriteTo(w, debug) on a runtime/pprof.Profile object. You can look at the source of net/http/pprof to see how it does it.
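A minimal sketch of that approach on your own mux (the paths, port and 30-second sample window here are arbitrary choices, not anything the packages prescribe):

package main

import (
	"log"
	"net/http"
	"runtime/pprof"
	"time"
)

func main() {
	mux := http.NewServeMux()

	// CPU profile: profile for a fixed window, then stop and flush to the response.
	mux.HandleFunc("/debug/cpu", func(w http.ResponseWriter, r *http.Request) {
		if err := pprof.StartCPUProfile(w); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		time.Sleep(30 * time.Second)
		pprof.StopCPUProfile()
	})

	// Heap profile: write the named profile straight to the response.
	mux.HandleFunc("/debug/heap", func(w http.ResponseWriter, r *http.Request) {
		_ = pprof.Lookup("heap").WriteTo(w, 0) // error ignored for brevity
	})

	log.Fatal(http.ListenAndServe(":8080", mux))
}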
In a slightly better universe, net/http/pprof would have a RegisterHandlers(*http.ServeMux) function that could be used anywhere, you would be able to import it without anything being registered implicitly, and there would be another package (say net/http/pprof/sugar) that did nothing except call pprof.RegisterHandlers(http.DefaultServeMux) in its init. However, we don't live in that universe.
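Until then, you can write that helper yourself with the handler functions net/http/pprof does export; a sketch (the package and function names here are mine, not the standard library's):

package pprofmux

import (
	"net/http"
	"net/http/pprof"
)

// RegisterHandlers attaches the standard pprof endpoints to the given mux.
func RegisterHandlers(mux *http.ServeMux) {
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	mux.HandleFunc("/debug/pprof/trace", pprof.Trace)
}

Keep in mind that importing net/http/pprof still registers everything on DefaultServeMux as a side effect of its init; this helper only adds the same handlers to your mux as well.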

Creating a simple Rust daemon that listens to a port

I've been trying to make a simple daemon in Rust that will listen to a port using tcp_stream and print the message. However, I'm running into two problems:
1) If my daemon uses println!, it crashes. If I remove all mentions of println!, the daemon works. How does stdout/stdin work when making a daemon?
One source I found on the Rust mailing list says "With modern init systems, such as systemd or launchctl, this works very nicely and application developer doesn't have to care about daemonisation and logging is also done simply via stdout." What do they mean by this?
2) When I run the code below in non-daemon mode, curls don't immediately return (running something like $ curl -XPOST localhost:9337 -d 'hi'). I have to kill curl for the server to print something. Doesn't curl close the connection automatically? Shouldn't the sent bytes be available to the server after they are sent, not after the connection is closed?
extern crate getopts;

use getopts::{optflag, getopts};
use std::io::Command;
use std::io::net::tcp::{TcpListener};
use std::io::{Acceptor, Listener};
use std::os;

fn main() {
    let args: Vec<String> = os::args();
    let opts = [
        optflag("d", "daemon", "convert this into a daemon"),
    ];
    let matches = match getopts(args.tail(), opts) {
        Ok(m) => m,
        Err(f) => fail!(f.to_string()),
    };

    // Create a daemon? if necessary
    if matches.opt_present("d") {
        let child = Command::new(args[0].as_slice())
            .detached().spawn().unwrap();
        println!("Created child: {}", child.id());
        // Do I wrap this in unsafe?
        child.forget();
        return;
    }

    let listener = TcpListener::bind("127.0.0.1", 9337u16).ok().expect("Failed to bind");
    let mut acceptor = listener.listen().ok().expect("Could not listen");
    loop {
        let mut tcp_stream = acceptor.accept().ok().expect("Could not accept connection");
        println!("Accepted new connection");
        let message = tcp_stream.read_to_string().unwrap();
        println!("Received message {}", message);
    }
}
What do they mean by this?
They mean that you shouldn't do anything fancy like forking to create a daemon program. Your program should just work, logging its operations directly into stdout, and init systems like systemd or launchctl will automatically handle everything else, including startup, shutdown, logging redirection, lifecycle management etc. Seriously consider this approach because it would make your program much simpler.
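For example, under systemd a minimal unit file along these lines (paths and names are placeholders) runs the program in the foreground and sends its stdout to the journal, so no daemonisation code is needed:

# /etc/systemd/system/my-listener.service (illustrative)
[Unit]
Description=Simple TCP listener

[Service]
ExecStart=/usr/local/bin/my-listener
Restart=on-failure
# stdout/stderr are captured by the journal automatically

[Install]
WantedBy=multi-user.target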
Creating a daemon process properly, though, is not simple. You have to fork the process, close and set up new file descriptors, tweak process groups, add signal handlers and more. Googling for something like "fork daemon" turns up a lot of articles on how to create a daemon, and you can see that this is not an easy task. Certainly, you can do something like this in Rust, because it exposes all of the necessary system calls through the libc crate. There can be some caveats, though: for example, I'm not sure how the Rust runtime would react to the fork() system call.
As for why your "daemon" fails when you use println!(), I suspect it happens because you detach from your child process and its stdio handles are automatically closed, and Rust I/O routines are not happy with that and trigger a task failure.
Why not use the daemonize crate (https://docs.rs/daemonize/0.4.1/daemonize/), which takes care of most of this tricky work of creating a daemon for you?
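A rough sketch of that route, with builder options and paths chosen only for illustration:

extern crate daemonize;

use daemonize::Daemonize;

fn main() {
    // Fork, detach from the terminal and drop into the background
    // before starting the listener loop.
    let daemonize = Daemonize::new()
        .pid_file("/tmp/my_daemon.pid")
        .working_directory("/tmp");

    match daemonize.start() {
        Ok(_) => {
            // Run the TCP accept loop here; stdout is no longer attached,
            // so log to a file or syslog instead of println!.
        }
        Err(e) => eprintln!("Error daemonizing: {}", e),
    }
}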

How to open many browsers using Selenium WebDriver?

I have a requirement to open 50 to 100 URLs at once and verify the login for each URL. All URLs belong to the same app but are hosted for different customers. How can I open multiple browsers, say 20 to 50, with different URLs using Selenium WebDriver? I tried TestNG with the parallel attribute set to "tests" and instantiated the driver object in @BeforeTest, but after opening 2 browsers I get a Selenium exception saying the browser closed or died for the 3rd browser.
Below is the code for this.
@Test
@Parameters({ "url" })
public void testParallel(String url) throws Exception {
    try {
        driver.get(url);
        int i = 0;
        i++;
        System.out.println("Browser Count" + i);
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
I think it is not possible to use multiple IEDriver instances in parallel on the same machine with the Java bindings. (I remember reading somewhere that the .NET bindings support parallel IE instances.)
As per official documentation of IEDriver, "Unlike other WebDriver classes, there should only ever be a single InternetExplorerDriver instance at one time for some language bindings. If you need to run more than one instance of the InternetExplorerDriver at a time, consider using the RemoteWebDriver and virtual machines.". Refer here.
This should work with FirefoxDriver provided you have got your testng.xml right (see the sketch below). Or if you want it on IE, you should consider setting up a grid and launching IE nodes on different machines, so that parallel runs can happen.
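Roughly, the testng.xml for that setup would look like this (class name and URLs are placeholders); each <test> carries its own url parameter and runs on its own thread:

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="LoginChecks" parallel="tests" thread-count="20">
  <test name="Customer1">
    <parameter name="url" value="https://customer1.example.com"/>
    <classes><class name="com.example.LoginTest"/></classes>
  </test>
  <test name="Customer2">
    <parameter name="url" value="https://customer2.example.com"/>
    <classes><class name="com.example.LoginTest"/></classes>
  </test>
  <!-- ...one <test> block per customer URL... -->
</suite>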
Why do you need to open them all at once? Selenium is not designed for load testing. If you want to check how your application or server is doing under load you better have a look at JMeter.
For a test like that I would recommend not using a real browser per se, but instead the HtmlUnit driver (which is like a headless browser). Also, there is a thing called GhostDriver that might accomplish something similar. Still, you could use a remote Grid node+hub, but you don't need to in order to accomplish your goal.
Selenium can do load testing in that respect. Also, I wouldn't use TestNG: instead, I would use Gradle or Maven, because they have JUnit forking/multi-threading capability built in. In Gradle or Maven, create a task that filters and identifies certain test classes and then forks processes to run them multi-threaded. I created an example here.

Asynchronous requests not working using fsock - php - Nginx

I have a PHP script which does what the accepted answer described here does.
It doesn't work unless I add the following before fclose($fp):
while (!feof($fp)) {
    $httpResponse .= fgets($fp, 128);
}
Even a blank for loop would do the job instead of the above!
But what's the point? I wanted async calls :(
To add to my pain, the same code runs fine without the above snippet in an Apache-driven environment.
Does anybody know if Nginx or php-fpm has a problem with such requests?
What you're looking for can only be done on Linux flavor systems with a PHP build that includes the Process Control functions (PCNTL library).
You'll find it's documentation here:
http://php.net/manual/en/book.pcntl.php
Specifically what you want to do is "fork" a process. This creates an identical copy of the current PHP script's process including all memory references and then allows both scripts to continue executing simultaneously.
The "parent" script is aware that it is still the primary script. And the "child" script (or scripts, you can do this as many times as you want) is aware that is is a child. This allows you to choose a different action for the parent and the child once the child is spun off and turned into a daemon.
To do this, you'd use something along these lines:
$pid = pcntl_fork(); // store the process ID of the child when the script forks

if ($pid == -1) {
    die('could not fork'); // -1 return value means the process could not fork properly
} else if ($pid) {
    // a process ID will only be set in the parent script. this is the main script that can output to the user's browser
} else {
    // this is the child script executing. Any output from this script will NOT reach the user's browser
}
That will enable a script to spin off a child process that can continue executing alongside (or long after) the parent script outputs its content and exits.
You should keep in mind that these functions must be compiled into your PHP build and that the vast majority of hosting companies will not allow access to them on their servers, because if used incorrectly (or maliciously) they can easily bring a server to its knees. In order to use these functions, you generally will need a Virtual Private Server (VPS) or a dedicated server. Not even cloud hosting setups will usually offer them.
