I've got a suite of KIF tests for our app, but one part I can't work out how to cover is where we use UIImagePickerControllers. Obviously I can't check the camera, but I'd like to write a scenario where the user chooses an image from their library. I know that there's +[KIFTestStep stepsToChoosePhotoInAlbum:atRow:column:], but what I don't know is how to set it up so that there's a consistent set of images for the test to choose from. How do I seed the simulator's photo albums?
There are two different ways (one involves programming) to populate the iOS Simulator's photo library:

1. Open Safari in the iOS Simulator, search Google for some large images, open one, and display it at full size. Then long-press the photo and choose Save. Repeat this with several photos to fill up the library.

2. Create a folder on your Mac with the images you want to populate the photo library with. Then write a small iOS application that iterates over that directory and creates an NSData object from each photo file. Save each NSData object to the photo library using the

- (void)writeImageDataToSavedPhotosAlbum:(NSData *)imageData metadata:(NSDictionary *)metadata completionBlock:(ALAssetsLibraryWriteImageCompletionBlock)completionBlock

method of ALAssetsLibrary.
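For what it's worth, here is a minimal sketch of that seeding loop, assuming the photos are bundled with the seeding app in a folder named SeedPhotos (the folder name and the seedPhotoLibrary method are placeholders of mine, not from the linked project):

```objc
#import <AssetsLibrary/AssetsLibrary.h>

// Writes every image bundled in the app's "SeedPhotos" folder into the
// simulator's saved-photos album. Run this once from the seeding app,
// e.g. from application:didFinishLaunchingWithOptions:.
- (void)seedPhotoLibrary
{
    // In a real app, keep the library alive (e.g. in a property) until all
    // completion blocks have fired.
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    NSString *folder = [[[NSBundle mainBundle] resourcePath]
                        stringByAppendingPathComponent:@"SeedPhotos"];
    NSArray *files = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:folder
                                                                         error:NULL];
    for (NSString *file in files) {
        NSData *imageData = [NSData dataWithContentsOfFile:
                             [folder stringByAppendingPathComponent:file]];
        if (!imageData) {
            continue; // skip anything that isn't readable image data
        }
        [library writeImageDataToSavedPhotosAlbum:imageData
                                         metadata:nil
                                  completionBlock:^(NSURL *assetURL, NSError *error) {
            NSLog(@"Saved %@ -> %@ (error: %@)", file, assetURL, error);
        }];
    }
}
```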
Here's a project that's working for me:
https://github.com/justin/PopulateSimulatorPhotos
It has proven very useful in cases where you need to reset the simulator again and again, or want to test quickly on all the device types.
I have created a barcode scanner, and it works fine. I am using Xamarin.Forms and targeting Android. I recently received some barcode samples in photo format; however, the photos are not being recognized when I print them out and scan them.
Next, I would like to try importing the photos and using them as my background, to see whether the scanner recognizes the barcode.
I am using ZXing.Net.Mobile and its .Forms extension. Is there any file or code I can implement to point to a file path and then load that image as my background?
Thanks for any answers or suggestions.
If the photos are not recognized, then there must be something wrong with your code. I don't see the point of testing it this way; in your application it won't work anyway. ZXing.Net.Mobile uses the camera to intercept frames and decode them in real time to match the barcode pattern.
My HTC Vive is set up in a different room to my developer workstation. When developing for A-Frame, I understand I can: use my desktop monitor instead of a headset; use mouse dragging instead of motion controls; use WASD instead of room-scale tracking. However, what is the preferred way to simulate the tracked controllers?
For example, how can I move the cubes in this demo from my desktop: https://aframe.io/examples/showcase/tracked-controllers
This is not yet released, but we're working on tools to record the camera and controllers, output the recording to a file, and then load it up on any device and replay the camera and controller poses and events without needing a headset. The project is led by Diego:
https://github.com/dmarcos/aframe-motion-capture
http://swimminglessonsformodernlife.com/aframe-motion-capture/examples/
This will become the common way to develop for VR without having to enter and re-enter VR to test each code change, and to do it on the go.
I am upgrading my watch app from the first version of watchOS. In my first version I was placing UIImageViews on top of each other, flattening them into a single image, converting it to NSData with UIImagePNGRepresentation(), and transferring it across to the watch. As we know, there are limited layout options on the Apple Watch, so if you want cool blur effects behind images, or images on top of images, they have to be flattened off screen.
Now that I have re-created my targets for watchOS 2, the images transferred as NSData through [[WCSession defaultSession] sendMessage:replyHandler:errorHandler:] suddenly come back with an error saying the payload is too large!
So as far as I can see, I either have to work out how to combine images strictly via WatchKit, or use the transferFile option on WCSession and still render them on the iPhone. The transferFile option sounds really slow and clumsy, since I will have to render the image, save it to disk on the iPhone, transfer it to the watch, and load it into something I can set on a WatchKit component.
Does anyone know how to merge images on the watch? QuartzCore doesn't seem to be available as a dependency in watch land.
Instead of sendMessage, use transferFile. Note that the actual transfer will happen in a background thread at a time that the system determines to be best.
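A rough sketch of that flow under watchOS 2, assuming WCSession has already been activated on both sides; the file name, the sendCompositeImageToWatch: method, and the compositeImageView outlet are illustrative, not part of the WatchConnectivity API:

```objc
#import <WatchConnectivity/WatchConnectivity.h>
#import <WatchKit/WatchKit.h>

// iPhone side: flatten the composite image, write it to a temporary file,
// and hand it to WatchConnectivity for a background transfer.
- (void)sendCompositeImageToWatch:(UIImage *)composite
{
    NSData *pngData = UIImagePNGRepresentation(composite);
    NSURL *tempURL = [[NSURL fileURLWithPath:NSTemporaryDirectory()]
                      URLByAppendingPathComponent:@"composite.png"];
    [pngData writeToURL:tempURL atomically:YES];

    // The transfer is queued and performed by the system in the background.
    [[WCSession defaultSession] transferFile:tempURL metadata:nil];
}

// Watch extension side (WCSessionDelegate): load the received file and
// push it to a WKInterfaceImage in the interface controller.
- (void)session:(WCSession *)session didReceiveFile:(WCSessionFile *)file
{
    UIImage *image = [UIImage imageWithContentsOfFile:file.fileURL.path];
    dispatch_async(dispatch_get_main_queue(), ^{
        // compositeImageView: a WKInterfaceImage outlet (illustrative name)
        [self.compositeImageView setImage:image];
    });
}
```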
Sorry, but I have no experience with manipulating images on the watch.
I want to create a music app in which the watch app shows an audio waveform, so my question is: can we use iOS technologies like OpenGL in a watch app?
You can't run any code on the watch itself. You can only run code in a WatchKit extension in your iOS app and update a relatively static UI on the watch. You could generate images of the audio waveform in your extension, put them together into an animation, and then update the UI with that.
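For example, here is a rough sketch of rendering one waveform frame as an image in the extension with Core Graphics; the samples array (values assumed to be in 0..1) and the method name are assumptions for illustration:

```objc
#import <UIKit/UIKit.h>

// Draws one frame of the waveform into a UIImage. The extension can then
// push the result to a WKInterfaceImage with setImage:, or build several
// frames into an animated image.
- (UIImage *)waveformImageForSamples:(NSArray<NSNumber *> *)samples
                                size:(CGSize)size
{
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(ctx, [UIColor greenColor].CGColor);
    CGContextSetLineWidth(ctx, 1.0);

    CGFloat step = size.width / MAX(samples.count, 1);
    CGFloat midY = size.height / 2.0;

    // Draw one vertical bar per sample, scaled to the image height.
    for (NSUInteger i = 0; i < samples.count; i++) {
        CGFloat amplitude = [samples[i] floatValue] * midY;
        CGFloat x = i * step;
        CGContextMoveToPoint(ctx, x, midY - amplitude);
        CGContextAddLineToPoint(ctx, x, midY + amplitude);
    }
    CGContextStrokePath(ctx);

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
```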
It would be possible to pass some information from your iOS app to the Watch extension running on the phone, which could then update a pre-defined interface in the Watch app. However, if you want to provide a real-time audio waveform, I think this could face major problems regarding latency.
Note that, as Stephen Johnson states, you could only do this by rendering static images that are then sent to the watch for display, or by having pre-installed images in your watch interface that you rapidly show or hide to give the impression of levels changing. The latter would be a much more promising approach latency-wise, and given that Apple demonstrates a circular progress indicator made up of 360 images, perhaps it would even appear to animate smoothly. However, the key question is whether the peaks would appear on the Watch screen close enough to when they actually occur in the music for the user to perceive them as linked.
It might be possible to pre-process the audio and build a delay into both the display of the peaks and the audio playback to manage the communication latency, but testing that would really only be possible once you had Watch hardware in your hand.
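The pre-installed-frames approach mentioned above could look roughly like this in the interface controller, assuming the watch app bundle contains frames named wave0 through wave59 and a waveImage outlet (both names are illustrative):

```objc
#import <WatchKit/WatchKit.h>

// Play back a pre-installed image sequence on a WKInterfaceImage to
// suggest a moving waveform.
- (void)startWaveformAnimation
{
    // "wave" is the base name of the frames bundled with the watch app
    // (wave0 ... wave59 in this sketch).
    [self.waveImage setImageNamed:@"wave"];
    [self.waveImage startAnimatingWithImagesInRange:NSMakeRange(0, 60)
                                           duration:1.0
                                        repeatCount:0]; // 0 = repeat indefinitely
}
```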
Is there any way to access the internal images that the iPhone simulator uses?
For example, if I want to get the original image used for one of the default app icons (e.g. Contacts). This way I could get the highest possible resolution, and examine it for purposes of creating similar icons for my app.
Another example of an image I might want to access is the default icon for a contact.
I'm not asking for a programmatic solution (although that would work), I'm asking for a manual solution, possibly navigating the Simulator's file system using Finder.
You can find the apps at e.g. /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator4.1.sdk/Applications. The icon for Contacts is at /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator4.1.sdk/Applications/Contacts.app/icon.png (or icon@2x.png). But you can't easily read them as-is, because they are in a strange format (not standard PNG), so you need to convert them first. See for example this article or this article.
EDIT: added two more links for iPhone icon images.
You don't need to do this. Just grab a user interface kit.
Here's a website where you can download .psd files to use yourself for free:
http://webdesignledger.com/freebi
Here's another library of iPhone icons; this one includes the Contacts icon:
http://www.iphonestudio.co.uk/page/iphone_icon_gallery
And here are the iPhone home-screen icons, made downloadable for your own use. These are official icons in different sizes; check out the quality of the Photos icon.
http://www.iconarchive.com/category/application/iphone-icons-by-judge.html
Hope this helps.