.caf file in app not playing on iPod - AVAudioPlayer

In my app, I have some .caf audio files (Format: IMA 4:1, Stereo, 44.100 kHz). The problem I am facing is that each audio file plays on a button action (one button per ringtone). They play fine on the iPhone (iOS 3 to 5), but I cannot hear them on the iPod. I have made all the sound-setting changes described as solutions on other sites, but nothing helps.
The iPod is also not stuck in headphone mode (that problem sometimes happens when the headphones are unplugged while an audio file is playing). So please give a solution as soon as possible.

First, import AVFoundation (the framework AVAudioPlayer lives in), then use the code below:
#import <AVFoundation/AVFoundation.h>
Call it whenever you want, like this:
[self backgroundSound:@"sound file name"]; // the file must be in the app bundle (.wav, .mp3, .caf)
The method itself:
- (void)backgroundSound:(NSString *)name
{
    NSString *path = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:name];
    // Keep a strong reference (e.g. an ivar or property declared elsewhere);
    // a local player would be deallocated under ARC before it finishes playing.
    self.backgroundPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path] error:nil];
    self.backgroundPlayer.numberOfLoops = -1; // -1 loops indefinitely
    self.backgroundPlayer.volume = 0.5;       // range is 0.0 to 1.0 (float)
    [self.backgroundPlayer play];
}
This works on both iPhone and iPad in my application. I hope it helps you too.
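If the iPod touch still stays silent, one common culprit is the ringer/silent switch: the default audio session category is muted by it. A minimal sketch of the workaround, assuming your sounds should ignore the switch (the Playback category does exactly that):

// Hedged sketch: AVAudioSessionCategoryPlayback keeps audio audible
// even when the ringer/silent switch is on.
NSError *sessionError = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:&sessionError];
[[AVAudioSession sharedInstance] setActive:YES error:&sessionError];

Run this once before playing, e.g. in application:didFinishLaunchingWithOptions:.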

How to get the full image URL on iOS with UIImagePickerController using new PHAsset stuff

How the heck do you get the full image URL on iOS with UIImagePickerController using the new PHAsset stuff, since the ALAssetsLibrary stuff was deprecated?
I was having an awful time finding the answer to this, so to help people in the future, here is my code that gets the actual file URL of a photo selected with the built-in iOS UIImagePickerController. I am still a little nervous about using the ALAsset URL to get the PHAsset, so if you know of another way to do that, by all means please share.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    NSURL *sourceURL = [info objectForKey:UIImagePickerControllerReferenceURL];
    if (sourceURL != nil) {
        PHAsset *asset = [[PHAsset fetchAssetsWithALAssetURLs:@[sourceURL] options:nil] lastObject];
        PHContentEditingInputRequestOptions *assetOptions = [[PHContentEditingInputRequestOptions alloc] init];
        [assetOptions setNetworkAccessAllowed:NO];
        [asset requestContentEditingInputWithOptions:assetOptions
                                   completionHandler:^(PHContentEditingInput *contentEditingInput, NSDictionary *info) {
            NSURL *fullImageURL = contentEditingInput.fullSizeImageURL;
            printf("Hey, got the actual image URL: %s\n", [[fullImageURL absoluteString] UTF8String]);
        }];
    }
    // ...finish and dismiss the UIImagePickerController
}
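One caveat worth noting: the content-editing request only succeeds when the app has photo-library access. A minimal sketch of guarding for that with the standard PHPhotoLibrary API (the picker code above is unchanged):

// Hedged sketch: make sure photo-library access is granted
// before fetching the PHAsset.
[PHPhotoLibrary requestAuthorization:^(PHAuthorizationStatus status) {
    if (status == PHAuthorizationStatusAuthorized) {
        // safe to call fetchAssetsWithALAssetURLs: and
        // requestContentEditingInputWithOptions: here
    }
}];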

How to Stop Single Tap From firing before Double Tap

I am trying to set up simple double-tap recognition before moving on to more complicated interactions. Both the single and the double tap are being recognised. My problem is that the double tap doesn't fire without the single tap firing as well.
I have seen code that introduces a requirement to fail, but I don't understand how to modify that sample code to work with my standard approach.
Here is my code - at the moment I am just trying to get the logs to fire, and they do. But on a double tap I also get the single-tap message, which I don't want. I have tried changing the tap gesture recognizer's event settings to no avail.
- (IBAction)didTapPhoto1:(UITapGestureRecognizer *)sender {
    NSLog(@"Did Tap Photo1 !");
}
- (IBAction)didDoubleTapPhoto1:(UITapGestureRecognizer *)sender {
    NSLog(@"DoubleTap");
}
Thank you
Use UIGestureRecognizer's requireGestureRecognizerToFail: method:
[singleTapGestureRecognizer requireGestureRecognizerToFail:doubleTapGestureRecognizer];
There is a side effect: a single tap now reacts a little more slowly, because the recognizer has to wait long enough to rule out a second tap.
Edit: It seems that you created the gesture recognizers in a Storyboard or xib. You can also do it in code:
UITapGestureRecognizer *singleGR = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(didTapPhoto1:)];
singleGR.numberOfTapsRequired = 1;
UITapGestureRecognizer *doubleGR = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(didDoubleTapPhoto1:)];
doubleGR.numberOfTapsRequired = 2;
// you can change self.view to any view in the hierarchy
[self.view addGestureRecognizer:singleGR];
[self.view addGestureRecognizer:doubleGR];
[singleGR requireGestureRecognizerToFail:doubleGR];

useLayoutToLayoutNavigationTransitions in iOS7 crash

I am building an iOS 7 app and trying to use the new useLayoutToLayoutNavigationTransitions effect. I have two UICollectionViewControllers, and when I do the following I get the transition effect:
SecondViewController *secondVC = [[SecondViewController alloc] initWithNibName:@"SecondViewController" bundle:nil];
secondVC.useLayoutToLayoutNavigationTransitions = YES;
[self.navigationController pushViewController:secondVC animated:YES];
This works fine, but what I want is to make an API call and then push onto the nav stack from its completion block, like so:
[webAPI getDetailsWithParams:params andCompletionBlock:^(id dict) {
    // some work on the result
    SecondViewController *secondVC = [[SecondViewController alloc] initWithNibName:@"SecondViewController" bundle:nil];
    secondVC.useLayoutToLayoutNavigationTransitions = YES;
    [self.navigationController pushViewController:secondVC animated:YES];
} andErrorBlock:^(NSError *error) {
}];
but this crashes every time with the following message:
-[UICollectionView _invalidateLayoutWithContext:]: message sent to deallocated instance 0x17a26400
Can anyone tell me what I am doing wrong here? How can I get the transition effect when pushing from a completion block?
EDIT: by changing it to the following I was able to transition to the second view controller:
MyLayout *layout = [[MyLayout alloc] init];
SecondViewController *expandedVC = [[SecondViewController alloc] initWithCollectionViewLayout:layout];
I also deleted the nib file that went with the class; it consisted of nothing but a collection view, which was the File's Owner's view.
While I can now transition, I still do not understand why the nib-based approach failed inside a block, so I would be grateful if someone could shed some light on it.
In order to use UICollectionViewController's useLayoutToLayoutNavigationTransitions for transitions, the layout of the SecondViewController must be known. If you use the initWithNibName:bundle: initializer, the layout is not prepared internally yet, which makes the desired transition impossible. As you noted in your edit, you have to use [UICollectionViewController initWithCollectionViewLayout:] to initialize your second UICollectionViewController. Since your xib file has the same name as your class, SecondViewController.xib is still loaded automatically by UIViewController, the superclass of UICollectionViewController.
I think you were making UIKit calls from a thread that wasn't the main thread. UIKit is only safe to use from the main thread, and many networking APIs invoke their completion blocks on a background queue.
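A minimal sketch of that fix, assuming webAPI's completion block really does run off the main queue (which depends on how webAPI is implemented): hop back to the main queue before pushing.

[webAPI getDetailsWithParams:params andCompletionBlock:^(id dict) {
    // some work on the result...
    dispatch_async(dispatch_get_main_queue(), ^{
        // UIKit calls, including pushViewController:animated:, belong on the main thread
        SecondViewController *secondVC = [[SecondViewController alloc] initWithNibName:@"SecondViewController" bundle:nil];
        secondVC.useLayoutToLayoutNavigationTransitions = YES;
        [self.navigationController pushViewController:secondVC animated:YES];
    });
} andErrorBlock:^(NSError *error) {
}];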

AVAudioPlayer EXC_BAD_ACCESS

I am trying to pause the AVAudioPlayer if it is currently playing. When I debug my code and inspect the AVAudioPlayer, I see it is allocated. But when I try to access a method or property on it (e.g. [myAudioPlayer isPlaying]), I get an EXC_BAD_ACCESS error. This happens if the AVAudioPlayer does not have a sound loaded. Is there a way to check whether it has loaded a sound? I tried accessing myAudioPlayer.data, but I still get the same error.
For me it was because I wasn't calling one of the designated initialisers. I was instantiating it with AVAudioPlayer() instead of the designated initialisers, which are public init(contentsOfURL url: NSURL) throws and public init(data: NSData) throws.
As of iOS 13, make sure you remove the eager initialization of the AVAudioPlayer before assigning it with AVAudioPlayer(contentsOf: URL(...)), i.e. change
var audioPlayer = AVAudioPlayer() to var audioPlayer: AVAudioPlayer!
I guess you have to use the prepareToPlay method to find out whether a sound has loaded or not.
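A minimal sketch of that guard, assuming myAudioPlayer is a strong reference (a dangling pointer to an already-deallocated player would still crash; prepareToPlay returns NO when no sound could be buffered):

// Hedged sketch: only query the player if it exists and has a sound ready.
if (myAudioPlayer != nil && [myAudioPlayer prepareToPlay]) {
    if (myAudioPlayer.isPlaying) {
        [myAudioPlayer pause];
    }
}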
First you have to add the AVFoundation framework, then import it in your .h file.
.h:
#import <AVFoundation/AVFoundation.h>
AVAudioPlayer *audioPlayer;
.m:
NSURL *url1 = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/audio1.mp3", [[NSBundle mainBundle] resourcePath]]];
NSError *error;
audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url1 error:&error];
// audioPlayer.numberOfLoops = 0;
[audioPlayer play];
Try this code; it may help you. Note that audioPlayer is an instance variable here, so the player is not deallocated while it is playing.

Connecting GStreamer with Qt in order to play a GStreamer video in a Qt widget

I tried using Phonon to play the video but could not succeed. I later learned through the Qt forums that even the latest version of Qt does not support Phonon, which is when I started using GStreamer. Any suggestions on how to connect the GStreamer video window with a Qt widget? My aim is to play a video using GStreamer on a Qt widget, so how do I link the GStreamer window and the Qt widget?
I am successful in getting the id of the widget through winId().
Further, with the help of Gregory Pakosz, I have added the two lines below to my application -
QApplication::syncX();
gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(sink), widget->winId());
However, I am still not able to link the Qt widget with the GStreamer video window.
This is what my sample code looks like:
int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QWidget w;
    w.show();
    printf("winid=%d\n", (int) w.winId());
    gst_init(NULL, NULL);
    GstElement *bin, *filesrc, *demux, *vdecoder, *videosink;
    /* create a new bin to hold the elements */
    bin = gst_pipeline_new("pipeline");
    /* create a disk reader */
    filesrc = gst_element_factory_make("filesrc", "disk_source");
    g_assert(filesrc);
    /* "location" is the path of the media file to play */
    g_object_set(G_OBJECT(filesrc), "location", "PATH_TO_THE_EXECUTABLE", NULL);
    demux = gst_element_factory_make("mpegtsdemux", "demuxer");
    if (!demux) {
        g_print("could not find plugin \"mpegtsdemux\"");
        return -1;
    }
    vdecoder = gst_element_factory_make("mpeg2dec", "decode");
    if (!vdecoder) {
        g_print("could not find plugin \"mpeg2dec\"");
        return -1;
    }
    videosink = gst_element_factory_make("xvimagesink", "play_video");
    g_assert(videosink);
    /* add objects to the main pipeline */
    gst_bin_add_many(GST_BIN(bin), filesrc, demux, vdecoder, videosink, NULL);
    /* link the elements */
    gst_element_link_many(filesrc, demux, vdecoder, videosink, NULL);
    gst_element_set_state(videosink, GST_STATE_READY);
    QApplication::syncX();
    gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(videosink), w.winId());
    /* start playing */
    gst_element_set_state(bin, GST_STATE_PLAYING);
    return app.exec();
}
Could you explain in more detail how gst_x_overlay_set_xwindow_id() should be used in my context?
Could I get any hint as to how to integrate GStreamer with Qt?
Please help me solve this problem.
I just did this same thing using Python. What I had to do was connect to 'sync-message::element' on the bus and listen for a message named 'prepare-xwindow-id' (disregard the name; it works on all platforms, not just X11), which is sent after the video sink is set up. The message carries the sink, and that is where you pass it the window id.
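The same pattern in C looks roughly like this - a sketch against the GStreamer 0.10 API used elsewhere in this thread, assuming windowId was filled in from the widget's winId() beforehand:

#include <gst/gst.h>
#include <gst/interfaces/xoverlay.h>

static gulong windowId; /* assumed to be set from widget->winId() beforehand */

/* Runs synchronously on the streaming thread: catch "prepare-xwindow-id"
 * and hand the sink our window before it creates its own. */
static GstBusSyncReply bus_sync_handler(GstBus *bus, GstMessage *message, gpointer user_data)
{
    if (GST_MESSAGE_TYPE(message) != GST_MESSAGE_ELEMENT)
        return GST_BUS_PASS;
    if (!gst_structure_has_name(message->structure, "prepare-xwindow-id"))
        return GST_BUS_PASS;
    gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(GST_MESSAGE_SRC(message)), windowId);
    gst_message_unref(message);
    return GST_BUS_DROP;
}

/* After building the pipeline:                            */
/* GstBus *bus = gst_pipeline_get_bus(GST_PIPELINE(bin));  */
/* gst_bus_set_sync_handler(bus, bus_sync_handler, NULL);  */
/* gst_object_unref(bus);                                  */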
The sample code given above will link the GStreamer video window to the Qt widget, provided the elements are linked correctly:
filesrc should be linked to the demuxer;
the decoder should be linked to the video sink;
finally, the demuxer should be linked to the decoder at runtime.
// link filesrc to demuxer
gst_element_link(filesrc, demux);
// link vdecoder to videosink
gst_element_link_many(vdecoder, videosink, NULL);
/*
 * The demuxer will be linked to the decoder dynamically.
 * Its source pad(s) will be created at run time,
 * when the demuxer detects the number and nature of the streams.
 * So we connect a callback function that will be executed
 * when the "pad-added" signal is emitted.
 */
g_signal_connect(demux, "pad-added", G_CALLBACK(on_pad_added), vdecoder);
// callback definition
static void on_pad_added(GstElement *element, GstPad *pad, gpointer data)
{
    GstElement *decoder = (GstElement *) data;
    GstPad *sinkpad;
    GstCaps *caps;
    GstStructure *str;
    const gchar *name;
    caps = gst_pad_get_caps(pad);
    str = gst_caps_get_structure(caps, 0);
    name = gst_structure_get_name(str);
    /* link only the video stream to the video decoder */
    if (g_strrstr(name, "video")) {
        sinkpad = gst_element_get_static_pad(decoder, "sink");
        gst_pad_link(pad, sinkpad);
        gst_object_unref(sinkpad);
    }
    gst_caps_unref(caps);
}
http://cgit.freedesktop.org/gstreamer/gst-plugins-base/tree/tests/examples/overlay
has a minimal Qt example.
In your code, you should probably set the window ID before you change the state to READY (I'm not 100% sure this is the problem, though).
For playback, you should ideally use the playbin2 element, something like this (completely untested):
GstElement *playbin, *videosink;
gchar *uri;
playbin = gst_element_factory_make("playbin2", "myplaybin");
videosink = gst_element_factory_make("xvimagesink", NULL);
g_object_set(playbin, "video-sink", videosink, NULL);
uri = g_filename_to_uri("/path/to/file", NULL, NULL);
g_object_set(playbin, "uri", uri, NULL);
g_free(uri);
/* NOTE: at this point your main window needs to be realized,
 * i.e. visible on the screen, and you might need to make sure
 * that your widget w indeed has a 'native window' (just some
 * things to check for if it doesn't work; there should be Qt
 * API for this kind of thing if needed) */
QApplication::syncX();
gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(videosink), w.winId());
gst_element_set_state(playbin, GST_STATE_PLAYING);
Then check the pipeline/playbin bus for messages such as errors, state changes, tags, and EOS.
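A sketch of such a bus watch, assuming the Qt event loop on Linux integrates GLib's main context (gst_bus_add_watch needs a running GLib main loop to dispatch):

/* Hedged sketch: an asynchronous watch for errors and end-of-stream. */
static gboolean on_bus_message(GstBus *bus, GstMessage *msg, gpointer user_data)
{
    switch (GST_MESSAGE_TYPE(msg)) {
    case GST_MESSAGE_ERROR: {
        GError *err = NULL;
        gchar *debug = NULL;
        gst_message_parse_error(msg, &err, &debug);
        g_printerr("Error: %s (%s)\n", err->message, debug ? debug : "no details");
        g_error_free(err);
        g_free(debug);
        break;
    }
    case GST_MESSAGE_EOS:
        g_print("End of stream\n");
        break;
    default:
        break;
    }
    return TRUE; /* keep the watch installed */
}

/* After creating the playbin:                                 */
/* GstBus *bus = gst_pipeline_get_bus(GST_PIPELINE(playbin));  */
/* gst_bus_add_watch(bus, on_bus_message, NULL);               */
/* gst_object_unref(bus);                                      */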
A project wrapping GStreamer into usable C++/Qt classes, including example code:
http://code.google.com/p/qbtgstreamer/
I don't know about a direct approach, as I am not familiar with GStreamer itself.
