I am trying to get a simple example of face detection working with ML Kit on iOS. Here are excerpts of the Objective-C code:
FIRVisionFaceDetectorOptions *faceDetectorOptions;
FIRVision *vision;
FIRVisionFaceDetector *faceDetector;
faceDetectorOptions = [[FIRVisionFaceDetectorOptions alloc] init];
faceDetectorOptions.performanceMode = FIRVisionFaceDetectorPerformanceModeAccurate;
faceDetectorOptions.landmarkMode = FIRVisionFaceDetectorLandmarkModeAll;
faceDetectorOptions.contourMode = FIRVisionFaceDetectorContourModeNone;
faceDetectorOptions.classificationMode = FIRVisionFaceDetectorClassificationModeAll;
faceDetectorOptions.minFaceSize = 0.1; // TODO: finalize this option value
vision = [FIRVision vision];
faceDetector = [vision faceDetectorWithOptions:faceDetectorOptions];
UIImage *staticImg = [UIImage imageNamed:@"sample.jpg"];
FIRVisionImage *visionImage = [[FIRVisionImage alloc] initWithImage:staticImg];
NSError *error = nil;
NSArray<FIRVisionFace *> * faces = [faceDetector resultsInImage:visionImage error:&error];
NSLog(#"Synchronous result. error = %#, face count = %lu", error, faces.count);
The sample.jpg file is the following image downloaded and added as a resource to my Xcode project:
http://chwb.org/wp-content/uploads/2014/01/Theo_Janssen-Face1.jpg
The resultsInImage returns no error, but no faces either. It logs:
Synchronous result. error = (null), face count = 0
Am I doing something wrong?
I figured it out. The problem was that I needed to set the image metadata with the orientation, like this:
FIRVisionImageMetadata *imageMetadata = [FIRVisionImageMetadata new];
imageMetadata.orientation = [FcFaceDetector visionImageOrientationFromImageOrientation:uiImage.imageOrientation];
visionImage.metadata = imageMetadata;
+ (FIRVisionDetectorImageOrientation)visionImageOrientationFromImageOrientation:(UIImageOrientation)imageOrientation {
    switch (imageOrientation) {
        case UIImageOrientationUp:
            return FIRVisionDetectorImageOrientationTopLeft;
        case UIImageOrientationDown:
            return FIRVisionDetectorImageOrientationBottomRight;
        case UIImageOrientationLeft:
            return FIRVisionDetectorImageOrientationLeftBottom;
        case UIImageOrientationRight:
            return FIRVisionDetectorImageOrientationRightTop;
        case UIImageOrientationUpMirrored:
            return FIRVisionDetectorImageOrientationTopRight;
        case UIImageOrientationDownMirrored:
            return FIRVisionDetectorImageOrientationBottomLeft;
        case UIImageOrientationLeftMirrored:
            return FIRVisionDetectorImageOrientationLeftTop;
        case UIImageOrientationRightMirrored:
            return FIRVisionDetectorImageOrientationRightBottom;
    }
}
The docs seem to be unclear about this, because they appear to suggest not setting it:
https://firebase.google.com/docs/ml-kit/ios/detect-faces#2-run-the-face-detector
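Putting it together with the code from the question, the detection call then looks like this (a minimal sketch; FcFaceDetector stands in for whatever class hosts the orientation helper above):
UIImage *staticImg = [UIImage imageNamed:@"sample.jpg"];
FIRVisionImage *visionImage = [[FIRVisionImage alloc] initWithImage:staticImg];

// Attach metadata describing the image orientation before running detection.
FIRVisionImageMetadata *imageMetadata = [FIRVisionImageMetadata new];
imageMetadata.orientation =
    [FcFaceDetector visionImageOrientationFromImageOrientation:staticImg.imageOrientation];
visionImage.metadata = imageMetadata;

NSError *error = nil;
NSArray<FIRVisionFace *> *faces = [faceDetector resultsInImage:visionImage error:&error];
NSLog(@"Faces found: %lu, error: %@", (unsigned long)faces.count, error);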
I have one item. The item's start (item.start) is fixed and must not be moved; I only need to be able to change item.end.
I was trying something: for example, if item.className is 'record-scheduled', item.start should stay in the future. This code worked, but I couldn't work out how to keep item.start constant.
onMoving: function (item, callback) {
    if (item.className == 'record-scheduled') {
        var min = moment();
        var max = moment(item.start).add(6, 'hours');
        if (item.start < min) item.start = min;
        if (item.start > max) item.start = max;
        if (item.end > max) item.end = max;
        callback(item); // send back the (possibly) changed item
    } else if (item.className == 'record-finished') {
        callback(null);
    } else if (item.className == 'record-active') {
        callback(item);
    } else {
        callback(item);
    }
},
My timeline options:
options: {
editable: true,
stack: false,
start:moment().format(),
}
To explain briefly:
else if(item.className =='record-active'){
item.start = 'cannot be changed'
item.end = 'can be change'
item.draggable = 'cannot be draggable'
}
How could I solve this?
Thanks, Regards.
I thought of a solution while the thread was open. Maybe it's not the best way, but it worked for me. You do need a database, though; I didn't find any other way to get beforeItemStart. Maybe somebody can help us with that. My solution looks like this:
else if (item.className == 'record-active') {
    // Look up the stored item to recover its original start time.
    // Note: the .then() resolves asynchronously, so this relies on
    // this.selectedItem already having been populated by an earlier call.
    axios
        .get('http://localhost:3000/' + item.id)
        .then(response => (this.selectedItem = response.data));
    // StartDateTime.Ticks is a 64-bit value split into low/high words
    // (Long is presumably the 'long' npm package); scale it down to
    // milliseconds so moment can read it.
    var findstarttime = new Long(this.selectedItem.StartDateTime.Ticks.low,
        this.selectedItem.StartDateTime.Ticks.high, false).toNumber();
    var oneZeroGoneFindStartTime = findstarttime / 10000;
    var findedstarttime = moment(oneZeroGoneFindStartTime);
    console.log(findedstarttime);
    var beforeItemStart = findedstarttime;
    // Pin the start back to its original value (the two comparisons in my
    // first attempt always reduced to this), so only item.end can change.
    item.start = beforeItemStart;
    callback(item);
}
I find beforeItemStart, and item.start is always set back to beforeItemStart.
So the start can't be dragged; I can only change the end of the range. But the axios call here causes far too many queries, since onMoving fires continuously while dragging, and I need to fix that. Maybe it will be a solution for someone until a better answer comes.
Thanks, Regards.
I'm a very novice coder, and I'm working on my first CoreML project, where I use an Image Classification model. I used some tutorials online, and I converted the uploaded image to a CV pixel buffer.
This is my code:
let model = classifier()
private func performImageClassification(){
let currentImage = uploadedImage //image that was uploaded
let resizedImage = currentImage.resizeTo(size: CGSize(width: 255, height:255))
guard let pixelBuffer = resizedImage?.toBuffer() else { return }
let output = try? model.prediction(input: pixelBuffer)
if let output = output {
self.classificationLabel = output.classLabel
}
}
On the line let output = try? model.prediction(input: pixelBuffer), I get the following error: "Cannot convert value of type 'CVPixelBuffer' (aka 'CVBuffer') to expected argument type."
Please help!
I have been trying to figure out how to pan/zoom using onMouseDrag and onMouseDown in Paper.js.
The only reference I have seen is in CoffeeScript, and it does not use the Paper.js tools.
This took me longer than it should have to figure out.
var toolZoomIn = new paper.Tool();
toolZoomIn.onMouseDrag = function (event) {
var a = event.downPoint.subtract(event.point);
a = a.add(paper.view.center);
paper.view.center = a;
}
You can simplify Sam P's method some more:
var toolPan = new paper.Tool();
toolPan.onMouseDrag = function (event) {
var offset = event.downPoint - event.point;
paper.view.center = paper.view.center + offset;
};
The event object already has a property with the start point, called downPoint.
I have put together a quick sketch to test this.
Unfortunately you can't rely on event.downPoint to get the previous point while you're changing the view transform. You have to save it yourself in view coordinates (as pointed out here by Jürg Lehni, developer of Paper.js).
Here's a version that works (also in this sketch):
let oldPointViewCoords;
function onMouseDown(e) {
oldPointViewCoords = view.projectToView(e.point);
}
function onMouseDrag(e) {
const delta = e.point.subtract(view.viewToProject(oldPointViewCoords));
oldPointViewCoords = view.projectToView(e.point);
view.translate(delta);
}
view.translate(view.center);
new Path.Circle({radius: 100, fillColor: 'red'});
I need to re-encode a video file from the photo library for a web site service.
I tried the code below, but it fails with an error like 'video composition must have composition instructions'.
(code)
AVAsset *anAsset = [[AVURLAsset alloc] initWithURL:videoFileUrl options:nil];
NSArray *compatiblePresets = [AVAssetExportSession exportPresetsCompatibleWithAsset:anAsset];
if ([compatiblePresets containsObject:AVAssetExportPresetMediumQuality]) {
    self.exportSession = [[AVAssetExportSession alloc]
                          initWithAsset:anAsset presetName:AVAssetExportPresetPassthrough];

    AVMutableComposition *mixComposition = [[AVMutableComposition alloc] init];
    AVMutableCompositionTrack *firstTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
    [firstTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, anAsset.duration) ofTrack:[[anAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:kCMTimeZero error:nil];

    AVMutableVideoCompositionLayerInstruction *FirstlayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:firstTrack];
    AVMutableVideoCompositionInstruction *MainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    MainInstruction.layerInstructions = [NSArray arrayWithObjects:FirstlayerInstruction, nil];

    AVMutableVideoComposition *MainCompositionInst = [AVMutableVideoComposition videoComposition];
    MainCompositionInst.frameDuration = CMTimeMake(1, 30); // frame duration (30 fps)
    MainCompositionInst.renderSize = CGSizeMake(640, 480); // output dimensions
    [self.exportSession setVideoComposition:MainCompositionInst];

    NSURL *furl = [NSURL fileURLWithPath:self.tmpVideoPath];
    self.exportSession.outputURL = furl;
    self.exportSession.outputFileType = AVFileTypeQuickTimeMovie;

    CMTime start = CMTimeMakeWithSeconds(self.startTime, anAsset.duration.timescale);
    CMTime duration = CMTimeMakeWithSeconds(self.stopTime - self.startTime, anAsset.duration.timescale);
    CMTimeRange range = CMTimeRangeMake(start, duration);
    self.exportSession.timeRange = range;

    self.trimBtn.hidden = YES;
    self.myActivityIndicator.hidden = NO;
    [self.myActivityIndicator startAnimating];

    [self.exportSession exportAsynchronouslyWithCompletionHandler:^{
        switch ([self.exportSession status]) {
            case AVAssetExportSessionStatusFailed:
                NSLog(@"Export failed: %@", [[self.exportSession error] localizedDescription]);
                break;
            case AVAssetExportSessionStatusCancelled:
                NSLog(@"Export canceled");
                break;
            default:
                NSLog(@"NONE");
                dispatch_async(dispatch_get_main_queue(), ^{
                    [self.myActivityIndicator stopAnimating];
                    self.myActivityIndicator.hidden = YES;
                    self.trimBtn.hidden = NO;
                    [self playMovie:self.tmpVideoPath];
                });
                break;
        }
    }];
}
}
Without changing the frame rate and bit rate, it works perfectly.
Please give me any advice.
Thanks.
I'm still looking for a way to change the framerate, but the bitrate has been solved by this drop-in replacement for AVAssetExportSession: https://github.com/rs/SDAVAssetExportSession
Not sure, but here are some things to look at. Unfortunately the error messages don't tell you much in these cases in AVFoundation.
In general, I would keep things simple and slowly add functionality. To start, make sure all layer instructions start at zero and end at the final duration. Do the same for the main composition. Invalid times may give you an error like this. For your main instruction, make sure that it starts and ends at those same times too.
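For the specific error in the question: in the posted code MainInstruction never gets a time range and MainCompositionInst.instructions is never set, which appears to be exactly what 'video composition must have composition instructions' is complaining about. Here is a minimal sketch of the missing pieces, reusing the variables from the question (note also that AVAssetExportPresetPassthrough does not re-encode, so a video composition cannot be applied with it):
// Give the instruction a time range covering the whole composition ...
MainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, anAsset.duration);
MainInstruction.layerInstructions = [NSArray arrayWithObjects:FirstlayerInstruction, nil];

// ... and attach it to the video composition, which otherwise has no instructions.
MainCompositionInst.instructions = [NSArray arrayWithObjects:MainInstruction, nil];
MainCompositionInst.frameDuration = CMTimeMake(1, 30);  // frame duration (30 fps)
MainCompositionInst.renderSize = CGSizeMake(640, 480);  // output dimensions

// Export the composition rather than the original asset, with a preset that
// actually re-encodes, so the video composition takes effect.
self.exportSession = [[AVAssetExportSession alloc] initWithAsset:mixComposition
                                                      presetName:AVAssetExportPresetMediumQuality];
[self.exportSession setVideoComposition:MainCompositionInst];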
I want to mock TnSettings. Yes, it works if I code it the following way, but the problem is that we need to write the mock setup for each case: if we only mock once and then execute more than one case, the second case throws an exception. I am using the latest OCMock, v2.01.
My question is: why does OCMock have such a restriction? Or is it my fault for not using it correctly?
Any ideas or discussion will be appreciated; thanks in advance.
- (void) testFormattedDistanceValueWithMeters {
mockSettings = [OCMockObject mockForClass:[TnSettings class]];
mockClientModel = [TnClientModel createMockClientModel];
[[[mockClientModel expect] andReturn:mockSettings] settings];
[[[mockSettings expect] andReturn:[NSNumber numberWithInt:0]] preferencesGeneralUnits];
NSNumber *meters = [NSNumber numberWithDouble:0.9];
distance = [NSString formattedDistanceValueWithMeters:meters];
STAssertEqualObjects(distance, #"0.9", #"testformattedEndTimeForTimeInSeconds failed");
//------------- Another case -----------------
mockSettings = [OCMockObject mockForClass:[TnSettings class]];
mockClientModel = [TnClientModel createMockClientModel];
[[[mockClientModel expect] andReturn:mockSettings] settings];
[[[mockSettings expect] andReturn:[NSNumber numberWithInt:0]] preferencesGeneralUnits];
meters = [NSNumber numberWithDouble:100.9];
distance = [NSString formattedDistanceValueWithMeters:meters];
STAssertEqualObjects(distance, #"101", #"testformattedEndTimeForTimeInSeconds failed");
}
Not sure I understand your question or your code fully. I suspect that you stumbled over the difference between expect and stub, though.
Is this what you had in mind?
- (void) testFormattedDistanceValueWithMeters {
mockSettings = [OCMockObject mockForClass:[TnSettings class]];
mockClientModel = [TnClientModel createMockClientModel];
[[[mockClientModel stub] andReturn:mockSettings] settings];
[[[mockSettings stub] andReturn:[NSNumber numberWithInt:0]] preferencesGeneralUnits];
NSNumber *meters = [NSNumber numberWithDouble:0.9];
distance = [NSString formattedDistanceValueWithMeters:meters];
STAssertEqualObjects(distance, #"0.9", #"testformattedEndTimeForTimeInSeconds failed");
meters = [NSNumber numberWithDouble:100.9];
distance = [NSString formattedDistanceValueWithMeters:meters];
STAssertEqualObjects(distance, #"101", #"testformattedEndTimeForTimeInSeconds failed");
}
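To spell out why the original version raised an exception: an expectation recorded with expect is consumed by the first matching call, and a class mock created with mockForClass: rejects calls it is not expecting, whereas a stub answers any number of calls. Roughly, using the same mock objects as above:
// With 'expect', each call has to be recorded ahead of time and is used up
// when it happens; a further, unexpected call raises an exception.
[[[mockSettings expect] andReturn:[NSNumber numberWithInt:0]] preferencesGeneralUnits];
[[[mockSettings expect] andReturn:[NSNumber numberWithInt:0]] preferencesGeneralUnits];
// ... exercise both cases ...
[mockSettings verify]; // fails if any expected call never happened

// With 'stub', the canned return value is handed out for any number of calls,
// which is why the version above only needs one line per mocked method.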