How can you get NVelocity to initialize correctly?

I can't get NVelocity to initialize. I'm not trying to do anything complicated, so it's just fine if it initializes at the defaults, but it won't even do that.
This:
VelocityEngine velocity = new VelocityEngine();
ExtendedProperties props = new ExtendedProperties();
velocity.Init(props);
Results in: "It appears that no class was specified as the ResourceManager..."
So does this:
VelocityEngine velocity = new VelocityEngine();
velocity.Init();
I can find precious little documentation on what the properties should be, or on how to get it to initialize with the simple defaults. Can anyone point me to a resource?
A lot of pages point back to this page:
http://www.castleproject.org/others/nvelocity/usingit.html
But this page skips over the (seemingly) most important point -- how to set the properties and what to set them to.
I just want to load a simple template from a file.

Here's what I found out --
I was using the original NVelocity library, which hasn't had an update since 2003. I think it's a dead project.
I switched to the Castle Project version, and it's much easier -- in fact, it runs much like the examples on the page I linked to. It sets intelligent defaults for the properties. I can initialize it without setting any properties, but the template directory defaults to ".", so I generally set that one (do it before calling Init).
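For reference, here's roughly what that looks like with the Castle build (a minimal sketch, not verified against every release: the property key is the standard Velocity "file.resource.loader.path", and the template directory and file name are placeholders):
using System;
using System.IO;
using Commons.Collections;
using NVelocity;
using NVelocity.App;

class TemplateDemo
{
    static void Main()
    {
        // Point the file resource loader at the folder holding the .vm templates
        // (placeholder path -- adjust to your own template directory).
        VelocityEngine velocity = new VelocityEngine();
        ExtendedProperties props = new ExtendedProperties();
        props.AddProperty("file.resource.loader.path", @"C:\Templates");
        velocity.Init(props);

        // Load a template and merge it with a context.
        Template template = velocity.GetTemplate("hello.vm");
        VelocityContext context = new VelocityContext();
        context.Put("name", "World");

        using (StringWriter writer = new StringWriter())
        {
            template.Merge(context, writer);
            Console.WriteLine(writer.ToString());
        }
    }
}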
To get the correct DLL, you need to download the latest NVelocity release (as of this writing it's 1.1).
Castle Project Download Page

You need to include the following files in your assembly, and make sure that their type is set to "Resource"
src\Runtime\Defaults\directive.properties
src\Runtime\Defaults\nvelocity.properties
These will then be found by ResourceLocator
src\Runtime\Resource\Loader\ResourceLocator.cs
If you get an exception from GetManifestResourceNames() (as I did when trying to run Dvsl), try modifying the ResourceLocator constructor to catch and ignore the error: the required files are in your local assembly (if you included them above), and the exception is only thrown by external assemblies (no idea why).
foreach (Assembly a in assemblies)
{
    String prefix = a.FullName.Substring(0, a.FullName.IndexOf(",")).ToLower();
    try
    {
        String[] names = a.GetManifestResourceNames();
        foreach (String s in names)
        {
            if (s.ToLower().Equals(fn) || s.ToLower().Equals(prefix + "." + fn))
            {
                this.filename = s;
                assembly = a;
                isResource = true;
            }
        }
    }
    catch
    {
        // Some external assemblies throw here; the resources we need live in
        // the local assembly, so it is safe to skip the offending assembly.
    }
}

Related

asp.net aspx equivalent for .net core Environment.IsDevelopment

I'm trying to skip some logic in an asp.net aspx page during my development / debugging session. Does asp.net aspx have an established equivalent to asp.net core Environment.IsDevelopment?
For the time being I've just created a web.config appSetting "Environment" and have defined it as "development". Is this the correct way to do it?
Thanks.
There are several ways. One common approach is to use a compiler directive. This has the advantage that the debug-only code WILL NOT be included in the compiled .dll, which is useful when you want certain code left out of release builds.
So this:
{
    bool MyDebug = false;
#if DEBUG
    // we are running in debug mode
    MyDebug = true;
#else
    MyDebug = false;
#endif
    Response.Write("Debug = " + MyDebug);
}
Another way is of course to simply go to the project settings, make your own setting, and change it.
So, project -> my project settings, and add the setting there.
Now, keep in mind that those settings are compiled into a class at build time, so you can't just change the setting and expect to see the change without a re-build.
I OFTEN use the above for connection strings.
But if you are using web publishing and transforms (to replace things like connection strings), then you have to use the configuration manager to get such values, and NOT this handy dandy settings class.
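In that case, reading the values straight from web.config might look something like this (a rough sketch; the "Environment" key is the one from the question above, and "AccessDB" is just a placeholder connection string name):
using System;
using System.Configuration;

// Read an appSetting and a connection string directly from web.config,
// so values swapped in by publish-time transforms are respected.
string environment = ConfigurationManager.AppSettings["Environment"];
bool isDevelopment = string.Equals(environment, "development",
    StringComparison.OrdinalIgnoreCase);

string connectionString =
    ConfigurationManager.ConnectionStrings["AccessDB"].ConnectionString;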
So, with the above settings class, to get a connection string or MyDebug you do this:
Debug.Print("My Connection for Access db = " +
    Properties.Settings.Default.AccessDB);
Debug.Print("My debug setting = " +
    Properties.Settings.Default.MyDebug);
Of course, the big issue with the above is that the setting is not automatic: if you change from debug to release, the setting does not automatically change the way the conditional-compile example does.
I suppose I would put/create a global IsDebug function in my "general" global class where I put all my hodge podge bunch of handy and helper routines.
(in vb.net, that is a code module)
(in c#, that is a static class)
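In C#, that helper might look something like this (a small sketch -- the class name is only an example):
// Hypothetical "general" helpers class -- the name is just an example.
public static class MyHelpers
{
    // True in debug builds, false in release builds,
    // driven by the same DEBUG conditional-compile symbol as above.
    public static bool IsDebug
    {
        get
        {
#if DEBUG
            return true;
#else
            return false;
#endif
        }
    }
}
Then anywhere in the site you can check MyHelpers.IsDebug, and the value follows the build configuration automatically.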

setting CilBody.KeepOldMaxStack or MetadataOptions.Flags

While decompiling a .NET assembly using de4dot I am getting the following message in the console:
Error calculating max stack value. If the method's obfuscated, set CilBody.KeepOldMaxStack or MetadataOptions.Flags (KeepOldMaxStack, global option) to ignore this error
How do I set CilBody.KeepOldMaxStack or MetadataOptions.Flags?
Maybe a bit late, but I ran into the same problem today, finding your open question while looking for a solution, and this is how I solved it - I hope it works for you, too:
using dnlib.DotNet;
using dnlib.DotNet.Writer;

// Working with an assembly definition
var ass = AssemblyDef.Load("filename.dll");

// Do whatever you want to do with dnLib here

// Create global module writer options
var options = new ModuleWriterOptions(ass.Modules[0]);
options.MetadataOptions.Flags |= MetadataFlags.KeepOldMaxStack;

// Write the new assembly using the global writer options
ass.Write("newfilename.dll", options);
If you want to set the flag only for the specific methods that produce the problem, you can do that before writing, for example:
// Find the type in the first module, then find the method to set the flag for
ass.Modules[0]
    .Types.First((type) => type.Name == nameof(TypeToFind))
    .FindMethod(nameof(MethodToFind))
    .KeepOldMaxStack = true;
CilBody is maybe a bit confusing if you're not too deep into the internal .NET assembly structures: it simply means the body object of the method that produces the problem when writing the modified assembly. Obfuscators often try to confuse disassemblers by producing invalid structures, which may cause a problem when calculating the maxstack value before writing the assembly with dnLib. By keeping the original maxstack value, you can step over those invalid method structures.
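To make the CilBody part concrete, here is a tiny sketch (assuming a MethodDef located as in the example above; I haven't verified this against every dnlib version):
// Assuming 'method' is a MethodDef found as in the example above,
// the flag can also be set on its body object (the CilBody):
MethodDef method = ass.Modules[0]
    .Types.First(type => type.Name == nameof(TypeToFind))
    .FindMethod(nameof(MethodToFind));

if (method.HasBody)
    method.Body.KeepOldMaxStack = true;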
In the context of de4dot it seems to be a bug, or the application is simply not designed to handle invalid method structures in obfuscated assemblies. In that case there's no solution for you unless the de4dot developer fixes/implements it, or you write a patch yourself using the source code from GitHub.

Why is there no static QDir::makepath()?

I know that to create a new path in Qt from a given absolute path, you use QDir::makepath() as dir.makepath(path), as suggested in this question. I do not have any trouble using it and it works fine. My question is rather why the developers would not provide a static function that could be called like QDir::makepath("/Users/me/somepath/");. Needing to create a new QDir instance seems unnecessary to me.
I can only think of two possible reasons:
1. The developers were "lazy" or did not have time, so they did not add one, as it is not absolutely necessary.
2. The instance of QDir on which mkpath(path) is called will be set to path as well, so it would be convenient for further usage - but I cannot seem to find any hint of this actual behaviour in the docs.
I know I am repeating myself, but again: I do not need help with how to do it; I am interested in why one has to do it that way.
Thanks for any reason I might have missed.
Let's have a look at the code of said method:
bool QDir::mkdir(const QString &dirName) const
{
    const QDirPrivate* d = d_ptr.constData();

    if (dirName.isEmpty()) {
        qWarning("QDir::mkdir: Empty or null file name");
        return false;
    }

    QString fn = filePath(dirName);
    if (d->fileEngine.isNull())
        return QFileSystemEngine::createDirectory(QFileSystemEntry(fn), false);
    return d->fileEngine->mkdir(fn, false);
}
Source: http://code.qt.io/cgit/qt/qtbase.git/tree/src/corelib/io/qdir.cpp#n1381
As we can see, a static version would be simple to implement:
bool QDir::mkdir(const QString &dirName) // hypothetical static overload, so no const
{
    if (dirName.isEmpty()) {
        qWarning("QDir::mkdir: Empty or null file name");
        return false;
    }
    return QFileSystemEngine::createDirectory(QFileSystemEntry(dirName), false);
}
(see also http://code.qt.io/cgit/qt/qtbase.git/tree/src/corelib/io/qdir.cpp#n681)
First, the non-static method comes with a few advantages. Obviously there is something to using the object's existing file engine. But also, I would imagine the use-case of creating several directories under a specific directory (that the QDir already points to).
So why not provide both?
Verdict (tl;dr): I think the reason is simple code hygiene. When you use the API, the difference between QDir::makepath(path); and QDir().makepath(path); is slim. The performance hit of creating the object is also negligible, as you would reuse the same object if you happen to perform the operation more often. But on the side of the code maintainers, it is arguably much more convenient (less work and less error-prone) to not maintain two versions of the same method.

Flex: recover from a corrupt local SharedObject

My Flex app uses local SharedObjects. There have been incidents of the Flash cookie getting corrupt, for example, due to a plugin crash. In this case SharedObject.getLocal will throw an exception (#2006).
My client wants the app to recover gracefully: if the cookie is corrupt, I should replace it with an empty one.
The problem is, if SharedObject.getLocal doesn't return an instance of SharedObject, I've nothing to call clear() on.
How can I delete or replace such a cookie?
Many thanks!
EDIT:
There isn't much code to show - I access the local cookie, and I can easily catch the exception. But how can I create a fresh shared object at the same location once I've caught the exception?
try {
    localStorage = SharedObject.getLocal("heywoodsApp");
} catch (err:Error) {
    // what do I do here?
}
The error is easily reproduced by damaging the binary content of a Flash cookie with an editor.
I'm not really sure why you'd be getting a range error - especially if you report that you can find it. My only guess for something like this is the possibility of crossing boundaries with respect to the cross-domain policy. Assuming IT has control over where the server is hosted, if the sub-domain ever changed, or even the access type (from standard to https), this can cause issues, especially if the application is ongoing (having been through several releases). I would find it rather hard to believe that you are trying to retrieve a named SO that has already been named by another application - essentially a name collision. In this regard many of us still use the reverse-dns style naming convention even on these things.
If you can catch the error it should be relatively trivial to recover from: just declare the variable outside the scope of the try so it's accessible to the catch as well. [edit]: Since it's a static method, you may need to append a postfix to essentially start over with a new identifier.
var mySO:SharedObject;
....
catch (e:Error)
{
    mySO = SharedObject.getLocal('my.reversedns.so_name_temp_name');
    // might want to dispatch an error event or rethrow a specific exception
    // to alert the user their "preferences" were reset.
}
You need to test the size of the SharedObject and recreate it if it's 0. Also, always use flush() to write to the object. Here's a function we use to count the number of times our software is launched:
private function usageNumber():void {
    usage = SharedObject.getLocal("usage");

    if (usage.size > 0) {
        var usageStr:String = usage.data.usage;
        var usageNum:Number = parseInt(usageStr);
        usageNum = usageNum + 1;
        usageStr = usageNum.toString();
        usage.data.usage = usageStr;
        usage.flush();
        countService.send();
    } else {
        usage.data.usage = "1";
        usage.flush();
        countService.send();
    }
}
It's important to note that if the object isn't available it will automatically be recreated. That's the confusing part about SharedObjects.
All we're doing is declaring the variable globally:
public var usage:SharedObject;
And then initializing it in the init() function:
usage = SharedObject.getLocal("usage");
If it's not present, then it gets created.

Finding Display of an RCP App

I'm writing a testing framework which starts a GUI application. To be able to test this GUI in the case of an SWT application, I need to know its display. In general, this display is loaded by another classloader, so I'm using the method findDisplay(Thread t) of the SWT Display class via reflection to accomplish this task. My code looks something like this:
Thread[] threads = new Thread[10];
Thread.enumerate(threads);
Object foundObject = null;

for (Thread t : Arrays.asList(threads)) {
    foundObject = null;
    Class<?> clazz = t.getContextClassLoader().loadClass("org.eclipse.swt.widgets.Display");
    final Method method = clazz.getMethod("findDisplay", Thread.class);
    foundObject = method.invoke(null, new Object[] {t});
    if (foundObject != null) {
        System.out.println("yeah, found it!");
        break;
    }
}
In my opinion this should find every object of type Display in the current thread group. However, I don't get any for the texteditor RCP example, although the GUI starts up perfectly.
Any ideas what is going wrong or how I can debug this in a reasonable way?
I figured out what the main problem was: the ContextClassLoader had nothing to do with the classloader that actually loaded the classes.
To resolve my problem I made sure that the classloader which loads the SWT Display class is both in the hierarchy of the RCP program and in the hierarchy of my framework. This was possible by using the Java extension classloader. (I couldn't use the application classloader, since my RCP application doesn't work with it as parent; I haven't figured out why yet.) It was then just a matter of adding the swt.jar to the java.ext.dirs property.
If you are using Eclipse RCP then maybe you can use:
PlatformUI.getWorkbench().getDisplay()
