How to check if a file is modified in Deno?

Basically given a file example.txt and a time t, I want to know whether the version of that file at t is the same as the version now.
I need to know how to do this in order to perform cache invalidation.

You can use Deno.stat and check mtime:
const stat = await Deno.stat(filepath);
console.log(stat.mtime); // Date | null
Note that Deno.stat requires the --allow-read flag.
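For the cache-invalidation case in the question, a minimal sketch (isModifiedSince is a made-up helper name, not a Deno API):
async function isModifiedSince(filepath: string, t: Date): Promise<boolean> {
  const stat = await Deno.stat(filepath);
  if (stat.mtime === null) {
    // Some platforms don't report mtime; treat that as "modified" to be safe.
    return true;
  }
  return stat.mtime.getTime() > t.getTime();
}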

Related

What are the members of a qmake install set?

https://doc.qt.io/archives/qt-4.8/qmake-environment-reference.html#installs
To help with the install process, qmake has the concept of an install set.
It looks like an install set has members, i.e. path, files and extra:
documentation.path = /usr/local/program/doc
documentation.files = docs/*
Are there other members?
What members need to be set in order to consider the install set fully described?
Where does create_docs come from in the case below?
unix:documentation.extra = create_docs; mv master.doc toc.doc
QMake is documented... not really. Whenever you want to know something, you end up browsing the source code. In this case, it's in qmake/generators/makefile.cpp, function MakefileGenerator::writeInstalls().
As we can see, the members are path, files, base, extra, CONFIG, uninstall and depends. extra (or commands) is an arbitrary line to insert at the top, so the create_docs in the question is simply a command or Makefile target that the project itself supplies.
What members need to be set in order to consider the install set fully described?
QMake is a Makefile generator: whatever a human writes, it simply outputs some Makefile. Whether it will work or not, that's the human's problem.
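For illustration, a hypothetical install set that exercises several of these members (the paths and the docs target are invented); nothing happens until the set is added to INSTALLS:
documentation.path = /usr/local/program/doc
documentation.files = docs/*
documentation.depends = docs
INSTALLS += documentation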

How to "package" other resources with deno

In Deno you can load related modules or other code by just referencing the relative path to those ES6 modules; Deno will handle loading them appropriately. What's the way to do this for non-ES6 modules? For example, say I wanted to include some custom CSS with my Deno project: Deno doesn't allow doing import mycss from "./relative.css";.
Deno file operations do work for local files, but they're resolved relative to the cwd, not the current file, and they don't work for arbitrary URLs. fetch, on the other hand, would be perfect, but it currently doesn't support the file: scheme, and adding it doesn't seem to be under active consideration. Combining these yields the only solution I can come up with, but I really don't like it:
async function loadLocal(relative: string): Promise<string> {
  // Resolve the path against the URL of this module, not the cwd.
  const url = new URL(relative, import.meta.url);
  if (url.protocol === 'file:') {
    // Local file: read it directly (note: url.pathname breaks on Windows).
    return await Deno.readTextFile(url.pathname);
  } else {
    // Remote URL: fall back to HTTP(S).
    const resp = await fetch(url.href);
    return await resp.text();
  }
}
This seems like it should mostly work, but it feels like a terrible hack for something I expected Deno to support by design. It also must be redeclared in each file, or have the caller's URL passed in, although there might be a way to avoid that. And it doesn't work on Windows without modifying the path delimiter.
Update
Deno.emit seems close to what I want; however, for some reason it behaves differently from standard importing:
If the rootSpecifier is a relative path, then the current working directory of the Deno process will be used to resolve the specifier. (Not relative to the current module!)
It also still requires that the paths point to valid modules, rather than arbitrary text.
As @Zwiers pointed out, Deno 1.6 now supports fetch with the file: protocol, so this workaround is no longer needed.
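With that change, the helper above collapses to a plain fetch against a file: URL resolved from import.meta.url (a minimal sketch; "./relative.css" is a placeholder path, and reading file: URLs still needs --allow-read):
const url = new URL("./relative.css", import.meta.url);
const resp = await fetch(url);
const css = await resp.text();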

Issue when importing JSON via 'require' in Meteor

The following code works to load a local, static JSON file:
var stories = require('../stories/stories.json');
Now I want to load a file based on a variable, e.g. do something like this:
var storiesPath = '../stories/stories.json';
var stories = require(storiesPath);
But this triggers an error:
Error: Cannot find module '../stories/stories.json'
at require (packages/modules-runtime.js:123:19)
at meteorInstall.server.main.js (server/main.js:7:15)
Is there any way to get this working? I assume that I could load my file via the Meteor http package instead, but I'd rather not add another package if I can avoid it.
Thanks for your hints
Like I said in the comment, you can easily use a variable in a require, e.g.,
> var x = 'fs';
> require(x).readFile
[Function]
So that's not the problem you are dealing with. Are you sure your first case indeed works? It would be surprising. I think you might be running into project file layout issues due to the use of a relative path; I would stay away from that. Fortunately, you can avoid it quite easily by using an asset: put your JSON file in private/ in your project folder and then use:
const stories = JSON.parse(Assets.getText('stories.json'));
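Since the original goal was a variable-driven path, note that Assets.getText takes a plain string resolved against private/, so the name can be computed at run time (a sketch; storyFile is a hypothetical variable):
const storyFile = 'stories.json'; // e.g. chosen at run time
const stories = JSON.parse(Assets.getText(storyFile));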

grunt updating options in one task so subsequent tasks can use them

I need to run grunt-bump, which bumps the version number in package.json, and then run grunt-xmlpoke to update a config file with the new version number.
So I have tried a couple of things. Inside grunt.initConfig I configure bump, then xmlpoke.
1) xmlpoke takes grunt.file.readJSON('package.json').version
or
2) after bump I run a custom task that adds the new version to a grunt option and xmlpoke takes a value of grunt.options("versionNumber")
In both of these versions the XML result is the pre-bump version. So xmlpoke is getting its values before the tasks are run and then uses them when its task is called. But I need it to take the value that results from a previous task.
Is there any way to do this?
OK, I have figured out the somewhat obvious solution.
Using grunt-bump you can update package.json, and you can also have it update the pkg object that is often read into a variable at the beginning of initConfig. So in the setup of the bump task you specify:
{
  updateConfigs: ['pkg']
}
Then in the xmlpoke I can do
{ xpath: 'myxpath', value: 'blablabla/<%= pkg.version %>' }
and this works. What I was doing before was
{ xpath: 'myxpath', value: 'blablabla/' + grunt.options.versionNumber }
where I had set the version number in a previous task after the bump, or
{ xpath: 'myxpath', value: 'blablabla/' + grunt.file.readJSON('package.json').version }
and neither of those worked. I guess I was just getting too smart for my own good, as <%= %> is the more common and typical way of accessing parameters from within initConfig.
Anyway, there you have it. Or I have it.
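Putting it together, a minimal Gruntfile sketch (the xpath '/configuration/version', the file name app.config and the task alias release are invented for illustration):
module.exports = function (grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    bump: {
      options: {
        // Refresh the in-memory pkg config after bumping package.json,
        // so later tasks see the new version.
        updateConfigs: ['pkg']
      }
    },
    xmlpoke: {
      updateVersion: {
        options: {
          xpath: '/configuration/version',
          // Evaluated when xmlpoke runs, i.e. after bump refreshed pkg.
          value: '<%= pkg.version %>'
        },
        files: { 'app.config': 'app.config' }
      }
    }
  });
  grunt.loadNpmTasks('grunt-bump');
  grunt.loadNpmTasks('grunt-xmlpoke');
  grunt.registerTask('release', ['bump', 'xmlpoke']);
};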

How to get and set the default output directory in Robot Framework (RIDE) at run time

I would like to move all my output files to a custom location: a Run directory created from the date and time at run time. The datetime-named output folder is created in the test setup.
I have a function, "Process_Output_files", which moves the files into the Run folders (Run1, Run2, Run3, ...).
I have tried using the -d argument and running "Process_Output_files" as the suite teardown to move the output files to the respective Run directory.
But I get the following error: "The process cannot access the file because it is being used by another process". I know this is because Robot Framework (RIDE) is still using the files.
If I don't use the -d argument, the output files are saved in temp folders:
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\output.xml
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\log.html
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\report.html
My question is: is there a way to move the files to a custom location at run time, from within Robot Framework?
You can use the following syntax in the RIDE Arguments field to create the output in new folders dynamically (%date:~-4,4%%date:~-10,2%%date:~-7,2% is Windows cmd substring syntax that expands to the current year, month and day):
--outputdir C:/AutomationLogs/%date:~-4,4%%date:~-10,2%%date:~-7,2% --timestampoutputs
The above syntax puts the output in folders like these:
Output: C:\AutomationLogs\20151125\output-20151125-155017.xml
Log: C:\AutomationLogs\20151125\log-20151125-155017.html
Report: C:\AutomationLogs\20151125\report-20151125-155017.html
Hope this helps :)
I understand the end result you want is to have your output files in their own custom folders. If so, this can be accomplished at run time, and you won't have to move them as part of your post-processing. Unfortunately this will not work in RIDE, since the folder structure is created dynamically. I have two options for you.
Option 1: Use a script to kick off your tests
RIDE is awesome, but in my humble opinion one shouldn't use it to run one's tests, only to build and debug them. Scripts are far more powerful and flexible.
Assuming you have a test, test2.txt, that you wish to run, the script could be something like:
from time import gmtime, strftime
import os

# strftime formats a date-time tuple as a string;
# gmtime returns the current date-time tuple in Greenwich Mean Time.
dts = strftime("%Y.%m.%d.%H.%M.%S", gmtime())
cmd = "pybot -d Run%s test2" % (dts,)
os.system(cmd)
As an aside, if you do intend to post-process your files using rebot, be aware that you may not need to create intermediate log and report files. The output.xml file contains everything you need, so if you don't want to create superfluous files, use --log NONE --report NONE.
Option 2: Use a listener to do post processing
A listener is a program you write that responds to events (start_suite, end_test, etc.). The close() event is akin to a teardown function and is the last thing called. So, assuming you have a function moveFiles(), you simply need to create a listener class (myListener), define its close() method to call your moveFiles() function, and tell Robot Framework to report to the listener with the argument --listener myListener.
This option should be compatible with RIDE though I admit I have never tried to use listeners with the IDE.
You can also write a custom run script that handles moving the files after the test execution; at that point the files are no longer being used by pybot.
