I have almost 0 scripting experience with UDK, but I think what I'm trying to do is fairly simple. I want to define a variable in script that I can then access in Kismet.
This is what I have so far:
MyGameInfo.uc looks like this:
class MyGameInfo extends UDKGame;

event InitGame( string Options, out string Error )
{
    // Call the parent (i.e. GameInfo) version of InitGame and pass the parameters along
    super.InitGame( Options, Error );

    // Unreal Engine 3 log macro
    `log( "Hello, world!" );
}

defaultproperties
{
    PlayerControllerClass=class'MyGameInfo.MyPlayerController'
}
MyPlayerController.uc looks like:
class MyPlayerController extends UDKPlayerController;

var() localized string LangText;

defaultproperties
{
    LangText="asdfasdf";
}
Then, in Kismet I'm using Get Property, with player 0 as the target, attempting to get property LangText. I have the string output going to Draw Text (and yes, I made sure to set a font).
I feel like I'm really close, but no text shows up in the Draw Text. Any ideas?
Also, why am I trying to do this? For localization purposes. As far as I can tell, there's no easy way in Kismet to use localized strings for Draw Text. My thinking is just to create a localized string for each piece of dialog and then call those strings from Kismet as needed.
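For reference, a localized var in UDK normally gets its per-language value from a localization file rather than from defaultproperties. A minimal sketch, assuming the script package is named MyGameInfo (so the English file would be UDKGame/Localization/INT/MyGameInfo.int):

[MyPlayerController]
LangText=Hello from the localization file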
You need to make sure you've added a Font asset from the Content Browser in the Draw Text properties, and check that the text is actually placed on the screen (try Offset 0,0 for centered text).
Also, I think Draw Text doesn't work in UTGame, in case that's your gametype; it does work in UDKGame.
And if you have Toggle Cinematic Mode enabled, check that Disable HUD isn't on.
Test the Draw Text with a regular string first.
Try using a Player Spawned event instead of Level Loaded, since Level Loaded may fire before the player has been added to the game.
I have two MLModels in my app. The first one generates an MLMultiArray output which is meant to be used as the second model's input.
Since I'm trying to make this perform as well as possible, I was thinking about feeding the first model's output (the MLMultiArray) straight to a VNImageRequestHandler and using Vision's resizing and regionOfInterest to crop features, so I can avoid converting the first output to an image, doing everything manually, and using the regular image initializer.
Something like this:
let request = VNCoreMLRequest(model: mlModel) { (request, error) in
    // handle logic?
}
request.regionOfInterest = // my region
// hypothetical – is there an initializer that takes an MLMultiArray?
let handler = VNImageRequestHandler(multiArray: myFirstModelOutputMultiArray)
Or do I have to go through back-and-forth conversions? I'm trying to reduce processing delays.
Vision uses images (hence the name ;-) ). If you don't want to use images, you need to use the Core ML API directly.
If the output from the first model really is an image, it's easiest to change that model's output type to an image so that you get a CVPixelBuffer instead of an MLMultiArray. Then you can directly pass this CVPixelBuffer into the next model using Vision.
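For example, once the first model's output type is changed to an image, the chaining could look roughly like this. A minimal sketch, assuming firstOutputBuffer is the CVPixelBuffer produced by the first model and secondModel is the compiled second MLModel (both names are placeholders):

import CoreGraphics
import CoreML
import Vision

func runSecondModel(on firstOutputBuffer: CVPixelBuffer, secondModel: MLModel) throws {
    // Wrap the Core ML model for use with Vision
    let vnModel = try VNCoreMLModel(for: secondModel)
    let request = VNCoreMLRequest(model: vnModel) { request, error in
        // handle request.results here
    }
    // Normalized coordinates: crop to the region you care about
    request.regionOfInterest = CGRect(x: 0.25, y: 0.25, width: 0.5, height: 0.5)

    // VNImageRequestHandler accepts a CVPixelBuffer directly; no UIImage detour needed
    let handler = VNImageRequestHandler(cvPixelBuffer: firstOutputBuffer, options: [:])
    try handler.perform([request])
}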
I can get basic HTML text to flip 180°, but I'd like to know how to get a whole Doc in my Drive to flip using a standalone script (so I can do it repeatedly). I'm aware I can open a doc, open the script editor, and then use my flippin' project to flip the doc I called, but I don't know what the syntax looks like. My first flippin' success was pasting text into the .html file as simply as possible and using:
function doGet() {
  // Serve the contents of Page.html as a web app
  return HtmlService.createHtmlOutputFromFile('Page');
}
I just test-ran it from the dialog box as a web app. But I'm interested in building this one-command feature out into several different domains to get experience with the variety of possibilities available in GAS. Anyone care to tutor me? Please?!...
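For what it's worth, one way to sketch the whole-Doc version, assuming "flipping" means the same CSS rotate(180deg) as the HTML version and that DOC_ID is a placeholder for the target document's ID (an illustration, not a tested solution):

function doGet() {
  var DOC_ID = 'your-doc-id-here'; // placeholder
  // Pull the plain text out of the Doc...
  var text = DocumentApp.openById(DOC_ID).getBody().getText();
  // ...escape it for HTML and keep the line breaks...
  var safe = text.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/\n/g, '<br>');
  // ...and serve it upside down
  var html = '<div style="transform: rotate(180deg);">' + safe + '</div>';
  return HtmlService.createHtmlOutput(html);
}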
I'm working on a script that does find/replace for missing items in your project. Unfortunately I'm running into trouble detecting and then replacing layered image sources (PSD, AI, etc.).
1) I see no way of detecting whether an AVItem is a layer within a layered image other than parsing item.name, which is unreliable because a user can always rename items in the Project panel.
2) Once I do know that it is part of a layered image, I cannot figure out how to re-link it to the correct image without replacing the layer with the merged image. item.replace(new_path) will replace that item with the whole image, not the layer within the image. For example:
var item = app.project.item(3); //assuming this is the 'layer' we want to replace
item.replace(new_path);
So is there a secret property somewhere which will reliably tell me if an item is a part of a layered image, and if so is there a way to relink it without replacing the layer with the entire merged image?
EDIT
Here's a function to guess if a layer is part of a layered image. It's not bullet-proof but it should work as long as the user does not rename the item:
function isSourceLayered (av_item) {
    // layers of a layered image are named like "Layer Name/file.psd", so check for a "/"
    if (av_item.name.indexOf("/") != -1) {
        // ...and check that the item also sits in a "... Layers" folder
        if (av_item.parentFolder.name.indexOf("Layers") != -1) {
            return true;
        }
    }
    return false;
}
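For example, to scan the whole project with it (a quick sketch built on the function above):

for (var i = 1; i <= app.project.numItems; i++) {
    var it = app.project.item(i);
    // FootageItem is the item type that can have a file source
    if (it instanceof FootageItem && isSourceLayered(it)) {
        // it is (probably) a layer of a layered image
    }
}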
I just asked the same question on the Adobe ExtendScript forum. Unless there are undocumented features (and I spent a bit of time looking with ExtendScript Toolkit's data browser), the FileSource object doesn't seem to have any attributes or methods to do this.
There is a kind of workaround: you can import the file by setting ImportOptions.importAs to ImportAsType.COMP. This will import a comp, and you can loop through the layers matching the name, get the source of that layer, and use that as your new source. But as you say, it doesn't work if the source has been renamed.
I've written this into a function; it's available on GitHub. Edit: I forgot that I changed the way that function works. It doesn't re-import layer sources, because of this problem; it just uses the Duplicate menu command.
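For reference, a rough sketch of that re-import idea (hedged: findLayerSource and its arguments are my own names, and it only works while the item's name still begins with the layer's name):

function findLayerSource(av_item, psd_file) {
    var io = new ImportOptions(psd_file);
    io.importAs = ImportAsType.COMP; // import the PSD as a comp
    var comp = app.project.importFile(io);
    var layerName = av_item.name.split("/")[0];
    for (var i = 1; i <= comp.numLayers; i++) {
        if (comp.layer(i).name === layerName) {
            return comp.layer(i).source; // the matching layer's own footage item
        }
    }
    return null;
}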
I have a text editor (QTextEdit). Some words in my editor contain additional information attached (namely, the two corresponding integer positions in a wave file for that word).
They are stored in the Python object as custom properties on QTextCharFormat objects. I attach them with code like this:

fmt = QTextCharFormat()
fmt.setProperty(MyPropertyID, myWordAttachment)
self.editor.textCursor().setCharFormat(fmt)
Unfortunately, if I save my document to HTML, all of that additional information is lost.
So, I want to perform the simplest task: to save my document with all of its formatting, including myWordAttachment, and to load it back from disk.
Am I right that Qt5 doesn't have something ready-made for this, and that I have to write all of the document's serialization code myself? (I still hope there is a simple function that does the job.)
1. Loop over your text one character at a time.
2. Catch each character and its charFormat().
3. Get the properties from that format.
4. Because the properties are ultimately plain values (int, str, ...), you can get them via charFormat().property(1), (2), (3)... or properties().
5. The most important thing is each character's position and the range the format covers; record the position during this first loop.
6. When you catch the char formats, insert them into something serializable, like a list, and don't forget to store each format's position alongside it.
7. Save your document together with the positions and properties.
My suggestion for implementing this:
1. You can get characterCount() from the QTextDocument object.
2. Loop over that range.
3. Before doing so, make a QTextCursor object.
4. Set the text cursor at the first position (the movePosition() method with the Start move operation and the KeepAnchor flag).
5. Move the cursor right one character at a time.
6. At each step, check the character's format via tc.charFormat() and record tc.position().
7. But here it is time to think twice: a char format always applies to a run of characters, so you will probably see the same charFormat() for several characters in a row. Prepare for that: set an objectType or a propertyId() on the QTextCharFormat in advance (while editing your document) so you can tell the formats apart. You might also store the text itself in the properties, to help with saving and loading later.
8. When you get a charFormat, check its objectType().
9. If the objectType() is the same as the one seen before, skip it and keep searching.
10. The second important thing is to call clearSelection() after each search step.
11. Save your document() as an HTML string, and save the charFormat() properties separately.
12. When you load your document(), the HTML comes back; then load the properties, make a QTextCursor, call setPosition() with each position saved in advance, move the cursor until it selects the target text, and apply the charFormat properties again. That's the end.
Summary
The important thing is how you identify each charFormat(). You can catch a charFormat() without any problem, but it is applied over a range, so you must distinguish the ranges. Two ways I can think of:
1. Set the targeted text itself into the QTextCharFormat's property.
2. Track the cursor as it passes through a run of the same QTextCharFormat object.
I hope this helps.
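To make the steps concrete, here is a minimal PyQt5 sketch of the save/load idea, assuming a single custom property id MY_PROP_ID and JSON-serializable property values (the asker's pair of integers qualifies); it illustrates the approach above rather than being drop-in code:

import json
from PyQt5.QtGui import QTextCursor, QTextFormat

MY_PROP_ID = QTextFormat.UserProperty + 1  # custom ids should be >= UserProperty

def save_props(document, path):
    # Walk the document character by character and record every run that
    # carries the custom property, as (start, end, value) triples.
    runs = []
    cursor = QTextCursor(document)
    current = None
    for pos in range(1, document.characterCount()):
        cursor.setPosition(pos)  # charFormat() describes the char just before pos
        value = cursor.charFormat().property(MY_PROP_ID)
        if value is not None:
            if current is not None and current[1] == pos - 1 and current[2] == value:
                current = (current[0], pos, value)  # extend the current run
            else:
                if current is not None:
                    runs.append(current)
                current = (pos - 1, pos, value)  # start a new run
        elif current is not None:
            runs.append(current)
            current = None
    if current is not None:
        runs.append(current)
    with open(path, "w") as f:
        json.dump(runs, f)

def load_props(document, path):
    # Re-apply the saved properties after the HTML has been restored.
    with open(path) as f:
        runs = json.load(f)
    for start, end, value in runs:
        cursor = QTextCursor(document)
        cursor.setPosition(start)
        cursor.setPosition(end, QTextCursor.KeepAnchor)  # select the run
        fmt = cursor.charFormat()
        fmt.setProperty(MY_PROP_ID, value)
        cursor.mergeCharFormat(fmt)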
I'm using the VideoDisplay to play FLVs, MOVs, and MP4s, and everything is working great. They are all being loaded via progressive download and are not being streamed. What I'd like to do is grab a single specified frame (like whatever is being shown at the 10-second mark), convert it to a bitmap, and use that bitmap as the preview image for the video. I'd like to do this at runtime so I don't have to create a preview image for every video that would be shown.
Any ideas on how to do this? I'd rather not fake it by playing the video, seeking to that specific frame, and then pausing it, but I may have no other choice?
Ryan and James are correct -- the right way's probably to extract frames at upload/transcode-time. But if that's not an option, you could opt for using some sort of a default/placeholder image of your own (something generic or somehow suitable for all videos whose thumbs haven't yet been captured), and just use VideoDisplay's DisplayObject-ness to grab and then upload a frame to your server, e.g.:
<mx:Script>
    <![CDATA[
        import flash.display.BitmapData;
        import flash.events.Event;
        import flash.net.URLLoader;
        import flash.net.URLLoaderDataFormat;
        import flash.utils.ByteArray;
        import mx.events.VideoEvent;
        import mx.graphics.codec.PNGEncoder;

        private var captured:Boolean;

        private function creationCompleteHandler(event:Event):void
        {
            videoDisplay.source = "http://yourserver/yourvideo.flv";
        }

        private function videoDisplay_playheadUpdate(event:VideoEvent):void
        {
            if (!captured && videoDisplay.playheadTime >= 10)
                capture();
        }

        private function capture():void
        {
            var bmpData:BitmapData = new BitmapData(videoDisplay.width, videoDisplay.height);
            bmpData.draw(videoDisplay);
            captured = true;

            // Encode the frame, then upload the bytes to your server for the next user
            var bytes:ByteArray = new PNGEncoder().encode(bmpData);
            var loader:URLLoader = new URLLoader();
            loader.dataFormat = URLLoaderDataFormat.BINARY;
            // ... etc.
        }
    ]]>
</mx:Script>
<mx:VideoDisplay id="videoDisplay" playheadUpdate="videoDisplay_playheadUpdate(event)" />
Again, it's perhaps not the most elegant solution, but it certainly works. This way, the first user sees the generic image, but every user thereafter gets the generated thumbnail. (Which, of course, you'll have uploaded and properly associated by then.) Make sense?
I'm pretty sure this isn't possible. It may well be... but I don't think so. I think the only way to load in video is to use the NetStream and NetConnection objects, which, as you know, just kick off the loading of the video.
If this is user-generated video, I think the best bet is to have some server-side script that generates the preview image. I have no idea how this is done, but I think this is how most clip sites work.
If all the videos are in your control, it may be possible to write a script for one of the video editing programs to automate generating the image for a specific frame from a list of files. I think this is probably your best route as an alternative that you could get up and running quickly.
Sorry for the vague answer... it may point you in the right direction if you need a quick solution.
I agree with James: the only way to really do this would be with a server-side script that pulls certain frames out of the video. Even if you could do this with Flex, you really would not want to put the burden of doing it (which I would think is processor-intensive) on the client machine. Not to mention it will be much more efficient to create the image beforehand than to have Flex determine the thumbnail to show every time it is loaded.
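For what it's worth, the usual server-side route is a command-line tool such as ffmpeg; a minimal sketch with placeholder file names, grabbing the frame at the 10-second mark:

ffmpeg -ss 10 -i input.flv -frames:v 1 thumb.png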