I need to run grunt-bump, which bumps the version number in package.json, then run grunt-xmlpoke to update a config file with the new version number.
So I have tried a couple of things. Inside the grunt.initConfig I run bump, then I run xmlpoke.
1) xmlpoke takes grunt.file.readJSON('package.json').version
or
2) after bump I run a custom task that stores the new version in a grunt option, and xmlpoke takes its value from grunt.option("versionNumber")
In both of these attempts the XML result is the pre-bump version. So xmlpoke is getting its values before the tasks are run and then uses them when its task is called. But I need it to take the value that is the result of a previous task.
Is there any way to do this?
OK, I have figured out the, somewhat obvious, solution.
grunt-bump updates package.json, and it can also update the config that is read into the variable pkg at the beginning of initConfig. So in the setup of the bump task you specify
{
  updateConfigs: ['pkg']
}
Then in the xmlpoke I can do
{ xpath:'myxpath', value:'blablabla/<%=pkg.version%>'}
and this works. What I was doing before was
{ xpath: 'myxpath', value: 'blablabla/' + grunt.option('versionNumber') }
where I had set the version number in a previous task after the bump. Or
{ xpath:'myxpath', value:'blablabla/'+ grunt.file.readJSON('package.json').version}
Neither of those worked. I guess I was just getting too smart for my own good, as <%= %> is the more common and typical way of accessing parameters from within initConfig.
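For reference, here's a minimal sketch of the working setup (the xpath, file names and task alias are illustrative, not from my actual Gruntfile):
module.exports = function (grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    bump: {
      options: {
        // Re-read the bumped values into the 'pkg' config object,
        // so templates evaluated by later tasks see the new version.
        updateConfigs: ['pkg']
      }
    },
    xmlpoke: {
      version: {
        options: {
          // Evaluated when the xmlpoke task runs, after bump has
          // refreshed 'pkg'.
          xpath: '/configuration/version',
          value: 'blablabla/<%= pkg.version %>'
        },
        files: { 'app.config': 'app.config' }
      }
    }
  });

  grunt.loadNpmTasks('grunt-bump');
  grunt.loadNpmTasks('grunt-xmlpoke');
  grunt.registerTask('release', ['bump', 'xmlpoke']);
};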
Anyway, there you have it. Or I have it.
I am able to deploy my shiny app with:
rsconnect::deployApp(appName = 'Test', launch.browser = FALSE, forceUpdate = T)
However, it does not always successfully deploy the app. I plan to have this run in a script as a Scheduled Task, and want to make sure the deployApp finishes successfully (if the process doesn't succeed, try again).
I imagine you could place this in a while loop, but I am not sure how to write code that would recognize whether the function executed successfully or failed. Does anyone have ideas?
Error Messages:
Preparing to deploy application...DONE
Error: $ operator is invalid for atomic vectors
As I say in the comment above, I really don't think this is a good idea. To do it safely and robustly will take a lot of work. And the error message you quote above looks pretty "uncontrolled" to me, so I suspect it has more to do with a problem in your app than a temporary issue with the publishing process. In which case, you will be in an infinite loop unless you take steps to prevent it. Have you investigated what your publish record and remote deployment log tell you?
That said, this would be my approach if I had to do it.
Create a flag, deploymentFlag, say, in the global environment and set it to FALSE.
Write a function, onDeploymentFailure() say, which sets deploymentFlag to FALSE.
Wrap your call to deployApp in a while loop like this:
while (!deploymentFlag) {
  deploymentFlag <- TRUE
  rsconnect::deployApp(
    ...,
    on.failure = onDeploymentFailure,
    logLevel = "verbose",
    recordDir = <some dir>
  )
  if (!deploymentFlag) {
    # ...interrogate the publish record to try to determine what went wrong,
    # and correct it if possible...
  }
}
For safety, especially whilst developing and testing, I'd make sure that each attempt wrote a different publish log and I'd limit the maximum number of attempts to a very small number: 1 to start with, then 2 or 3 after I'd solved the initial problems, and so on.
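Putting those pieces together, a minimal sketch (the record-directory scheme and maxAttempts value are illustrative; note the <<- so the callback resets the global flag, and the try() in case deployApp signals an error like the one quoted above rather than only calling on.failure):
deploymentFlag <- FALSE
attempts <- 0
maxAttempts <- 1  # raise cautiously once the initial problems are solved

# deployApp calls this when the deployment fails; a log URL may be passed.
onDeploymentFailure <- function(url = NULL) {
  deploymentFlag <<- FALSE
}

while (!deploymentFlag && attempts < maxAttempts) {
  attempts <- attempts + 1
  deploymentFlag <- TRUE
  recDir <- file.path("publish-records", paste0("attempt-", attempts))
  dir.create(recDir, recursive = TRUE, showWarnings = FALSE)
  # try() so an error raised by deployApp doesn't abort the whole script
  result <- try(rsconnect::deployApp(
    appName = 'Test',
    launch.browser = FALSE,
    forceUpdate = TRUE,
    on.failure = onDeploymentFailure,
    logLevel = "verbose",
    recordDir = recDir  # a separate publish record per attempt
  ), silent = TRUE)
  if (inherits(result, "try-error")) deploymentFlag <- FALSE
}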
I am new to Groovy, and I am thinking about using Groovlets (not GRAILS) to replace some Servlets. If I change a Groovlet's script file, the Groovlet re-compiles and automatically picks up the changes, including scripts referenced from the Groovlet.
This is great for development, but I imagine that Groovy must perform lots of file checks to see if any of the scripts have been modified, not just on the main Groovlet, but on all referenced sub-scripts. In a production environment, I imagine this could be a lot of I/O on every request.
I suppose there is a way either to disable a Groovlet's checks for modified scripts, or perhaps there is a type of "update delay" like FreeMarker's setTemplateUpdateDelay(), which only checks for modifications after N seconds/milliseconds have elapsed since the last check.
This is done in GroovyScriptEngine. It checks the last modification date of the source file, and if it's newer than the compiled version, it will recompile.
You can set the minimumRecompilationInterval in CompilerConfiguration. If you set that to a very high value, the checking of the source file won't be done that often.
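For example, a minimal sketch using GroovyScriptEngine directly (the script root, interval and script name are arbitrary):
import groovy.util.GroovyScriptEngine
import org.codehaus.groovy.control.CompilerConfiguration

def gse = new GroovyScriptEngine('scripts')  // root of the Groovlet scripts

def conf = new CompilerConfiguration()
// Check source timestamps at most once every 10 seconds (value in ms),
// much like FreeMarker's setTemplateUpdateDelay().
conf.minimumRecompilationInterval = 10000
gse.config = conf

gse.run('hello.groovy', new Binding())
If you serve Groovlets through GroovyServlet, you'd presumably have to subclass it and override its protected createGroovyScriptEngine() to apply the same configuration.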
I have a Plone 4 site which has stopped renaming new Archetypes objects; after creation (as something like /temp/portaltype.2015-04-23.1234567890) and saving the first changes, including giving it a title, an object should be renamed to something nicer (/temp/an-object-with-a-meaningful-name), but this doesn't happen anymore.
Perhaps the problem arose when I applied some changes to update Plone from 4.3.3 to 4.3.4 (one step at a time); but I have inherited a long versions.cfg which is sorted solely by package name and doesn't include any hints about why certain versions were chosen ...
I'm able to go back two months to a version which does the renaming, but without more knowledge about what to look for, it will be a very time-consuming process of re-applying every single change, rebuilding, starting and testing; and there have not been any changes to my schema definitions. I have a temp browser which is involved in delivering the primary edit form, but this doesn't seem to be the case for the saving action.
Sadly, I don't yet fully understand the mechanics of the base_edit action, which should - as far as I understand - call Archetypes.BaseObject.processForm and implicitly ._renameAfterCreation, so I'd be grateful for some pointers on how to debug this. Thank you!
Update:
I have a few triggers in my product's configure.zcml, e.g.:
<subscriber
    for=".content.portaltype.PortalType
         Products.Archetypes.interfaces.IObjectInitializedEvent"
    handler=".events.onInitPortalType"/>
… with, in events.py:
def onInitPortalType(self, event):
    """
    Called after first edit of new objects?
    """
    print '/// onInitPortalType(%(self)r, %(event)r)' % locals()
    setInitialOwner(self, event)
    setStateToPrivate(self, event)
However, the event doesn't seem to be triggered, since I couldn't find the output in an instance fg session.
Update 2:
I noticed that zope.event had been pinned to a quite old version (3.5.2), so I'm trying to update to 4.3.4 more seriously now (following this how-to). This got me zope.event v4.0.3, but I have a version conflict now:
There is a version conflict.
We already have: zc.recipe.egg 1.3.2.
While:
Installing.
Getting section test.
Initializing section test.
Installing recipe zc.recipe.testrunner.
There seems to be a requirement for zc.recipe.egg < 2dev somewhere, but I can't find it.
Nothing significant changed between Plone 4.3.3 and 4.3.4 on Archetypes. Products.Archetypes changed from 1.9.7 to 1.9.8 and Products.ATContentTypes stayed on the same version.
Pointers could be:
There's a _at_rename_after_creation flag, which is True by default. It can be changed on the content type class (see the sketch after this list).
Is your type still activated in the portal_factory tool? (AFAIK this should have no impact on renaming after creation - but who knows :-))
Any Products.Archetypes.interfaces.IObjectInitializedEvent subscriber?
An issue I had once was that the tmp id portaltype.2015-04-23.1234567890 had the wrong format, so AT did not recognise it as a tmp id and therefore did not rename the object after creation. The method AT uses to check whether the id is autogenerated is here: https://github.com/plone/Products.CMFPlone/blob/4.3.4/Products/CMFPlone/utils.py#L111 AFAIK the problem was that the meta_type and portal_type were not the same anymore.
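Regarding the first pointer, a minimal sketch of where that flag lives (class and type names are hypothetical):
from Products.Archetypes.atapi import BaseContent

class PortalType(BaseContent):
    # Keep these in sync; see the autogenerated-id issue above.
    meta_type = portal_type = 'PortalType'
    # True is the default; if something sets this to False,
    # processForm() will skip _renameAfterCreation().
    _at_rename_after_creation = True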
I would like to move all my output files to a custom location: a Run directory created based on the date and time at run time. The output folder named by datetime is created in the test setup.
I have a function "Process_Output_files" which will move the files to the Run folder (Run1, Run2, Run3 folders).
I have tried using the -d argument and used the function "Process_Output_files" as suite teardown to move the output files to the respective Run directory.
But I get the following error: "The process cannot access the file because it is being used by another process". I know this is because Robot Framework (RIDE) is currently using the files.
If I don't use the -d argument, the output files are saved in temp folders:
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\output.xml
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\log.html
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\report.html
My question is: is there a way to move the files to a custom location at run time, within Robot Framework?
You can use the following syntax in RIDE (Arguments:) to create the output in new folders dynamically:
--outputdir C:/AutomationLogs/%date:~-4,4%%date:~-10,2%%date:~-7,2% --timestampoutputs
The above syntax gives you the output in the folders below:
Output: C:\AutomationLogs\20151125\output-20151125-155017.xml
Log: C:\AutomationLogs\20151125\log-20151125-155017.html
Report: C:\AutomationLogs\20151125\report-20151125-155017.html
Hope this helps :)
I understand the end result you want is to have your output files in their custom folders. If this is your desire, it can be accomplished at runtime and you won't have to move them as part of your post processing. This will not work in RIDE, unfortunately, since the folder structure is created dynamically. I have two options for you.
Option 1: Use a script to kick off your tests
RIDE is awesome, but in my humble opinion, one shouldn't be using it to run one's tests, only to build and debug one's tests. Scripts are far more powerful and flexible.
Assuming you have a test, test2.txt, you wish to run, the script you use to do this could be something like:
from time import gmtime, strftime
import os

# strftime returns a string representation of a date-time tuple.
# gmtime returns the date-time tuple representing Greenwich Mean Time.
dts = strftime("%Y.%m.%d.%H.%M.%S", gmtime())
cmd = "pybot -d Run%s test2" % (dts,)
os.system(cmd)
As an aside, if you do intend to do post processing of your files using rebot, be aware you may not need to create intermediate log and report files. The output.xml files contain everything you need, so if you don't want to create superfluous files, use --log NONE --report NONE
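For example (the Run1 folder name is illustrative), a run that writes only output.xml, plus a later rebot pass to build the log and report from it:
pybot --outputdir Run1 --log NONE --report NONE test2
rebot --outputdir Run1 Run1/output.xml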
Option 2: Use a listener to do post processing
A listener is a program you write that responds to events (x_start, x_end, etc.). The close() event is akin to the teardown function and is the last thing called. So, assuming you have a function moveFiles(), you simply need to create a listener class (myListener), define the close() method to call your moveFiles() function, and tell Robot Framework to report to the listener with the argument --listener myListener.
This option should be compatible with RIDE though I admit I have never tried to use listeners with the IDE.
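For what it's worth, a minimal sketch of such a listener (myListener.py; moveFiles() is your existing function, assumed to be importable from a module of your own):
# myListener.py
from myutils import moveFiles  # hypothetical module holding your function

class myListener:
    ROBOT_LISTENER_API_VERSION = 2

    def close(self):
        # close() fires after the whole run, once output.xml, log and
        # report have been written and released by the framework.
        moveFiles()
You would then run something like: pybot --listener myListener.py test2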
Alternatively, you can write a custom run script that handles the moving of files after the test execution. In this case the files are no longer in use by pybot.
In AS3 you can pass a constant to the compiler
-define+=CONFIG::DEBUG,true
And use it for conditional compilation like so:
CONFIG::DEBUG {
    trace("This only gets compiled when debug is true.");
}
I'm looking for something like #ifndef so I can negate the value of debug and use it to conditionally add release code. The only solution I've found so far was in the conditional compilation documentation at Adobe, and since my debug and release configurations are mutually exclusive, I don't like the idea of having both DEBUG and RELEASE constants.
Also, this format works, but I'm assuming it runs the check at runtime, which is not what I want:
if (CONFIG::DEBUG) {
    //debug stuff
}
else {
    //release stuff
}
I also considered doing something like this but it's still not the elegant solution I was hoping for:
-define+=CONFIG::DEBUG,true -define+=CONFIG::RELEASE,!CONFIG::DEBUG
Thanks in advance :)
This works fine and will strip out code that won't run:
if (CONFIG::DEBUG) {
    //debug stuff
}
else {
    //release stuff
}
BUT this will be evaluated at runtime:
if (!CONFIG::DEBUG) {
    //release stuff
}
else {
    //debug stuff
}
mxmlc apparently can only evaluate a literal Boolean, and not any kind of expression, including a simple not.
Use the if/else construct: the dead code will be removed by the compiler and will not be tested at runtime. You will have only one version of your code in your swf.
If you are not sure, use a decompiler or a dump tool to see what really happens:
http://apparat.googlecode.com/files/dump.zip
http://www.swftools.org/
...
While Patrick's answer fulfills the question's criteria, it does not cover all use cases. If you are in an area of code that allows you to use an if/else statement, then this is a good answer. But if you are in a place where you cannot, then you will need a better solution. For example, you may want to do something like this to declare a constant in a class:
private var server:String = "http://localhost/mystagingenvironment";
or for a live release:
private var server:String = "http://productionserver.com";
(this is an example and I'm not advocating this as production code).
I use XML configs and use loadConfig+="myconfig.xml" to do my configuration instead of passing large numbers of command-line params. So in the <compiler> section of your XML config:
<define>
    <name>CONFIG::debug</name>
    <value>false</value>
</define>
<define>
    <name>CONFIG::release</name>
    <value>!CONFIG::debug</value>
</define>
This works well for all use cases:
CONFIG::debug
{
    private var server:String = "http://localhost/mystagingenvironment";
}

CONFIG::release
{
    private var server:String = "http://productionserver.com";
}
This has the additional benefit of working consistently across applications. It also does not rely on the 'optimize' flag being true, like Patrick's answer (although I think we can assume that 99.999999% of all swfs have optimize=true, I only set it to false when the optimizer breaks my AS3).
It does have the drawback that it doesn't compile all code paths, just the ones that are included. So if you're not using a build server to create release builds and tell you when things break, be prepared for surprise errors when you do your release build ("But it compiled in debug! Crap, I need this to launch now!").
Just my two cents about Chris Hill's answer (which is the solution I also use regularly): it seems that using the loadConfig+="myconfig.xml" option makes the compiler search for the myconfig.xml file in the Flex SDK directory, whereas the -load-config+=myconfig.xml option makes it search for the myconfig.xml file in the project's directory, which is the behavior I strongly prefer, as you can then easily distribute this file with your project sources...
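For example, the command-line form (file names illustrative):
mxmlc -load-config+=myconfig.xml Main.as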