I am working with PulseAudio for recording sound and I ran into an "Access Denied" error.
First of all, I am working on an Ubuntu 16.04 machine.
I am trying to connect to the server with the following code:
_s = NULL;
int32_t err = -1;
_ss.format = PA_SAMPLE_S16LE;
_ss.rate = 44100;
_ss.channels = 2;
_s = pa_simple_new(NULL, "Recorder", PA_STREAM_RECORD, NULL, "record", &_ss, NULL, NULL, &err);
pa_simple_new does not return NULL, so I assume this part is working.
But in another part of the code, I try to read data from the server as follows:
int32_t err = -1;
int8_t buff[ ( CIRC_DATA_SIZE ) ] = { 0x00 };
/* pa_simple_read() returns zero on success and a negative value on error */
if ( pa_simple_read( _s, buff, ( CIRC_DATA_SIZE ), &err ) >= 0 )
{
    _ReadBuff->add_to_buffer( buff, ( CIRC_DATA_SIZE ) );
}
else
{
    DEBUG_MSG( "Unable to read from audio device, %s\n", pa_strerror( err ) );
}
In the application's output, I saw the following message:
Unable to read from audio device, Access denied
Then I set the PULSE_COOKIE environment variable like this :
export PULSE_COOKIE=/home/sbahadirarslan/.config/pulse/cookie
By the way, the cookie file really does exist in the /home/sbahadirarslan/.config/pulse directory.
After this change, the application gave me the same error.
Then I set PULSE_SERVER environment variable like this :
export PULSE_SERVER=unix:/run/user/1000/pulse/native
But after this change, the application still gave me the same error.
Are these changes wrong, or do I have to make other changes?
Thanks for your help.
Try this:
systemctl --user enable pulseaudio && systemctl --user start pulseaudio
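If the daemon is running and the error persists, it can help to double-check which cookie file and server socket the client library will actually use. A minimal sketch; the default paths below are assumptions based on a typical Ubuntu setup, not taken from your logs:

```shell
# Print the cookie and server the PulseAudio client library will use,
# falling back to the usual defaults when the variables are unset.
cookie="${PULSE_COOKIE:-$HOME/.config/pulse/cookie}"
server="${PULSE_SERVER:-unix:/run/user/$(id -u)/pulse/native}"
echo "cookie: $cookie"
echo "server: $server"

# The cookie must be readable by the user running the recorder,
# otherwise the server refuses the connection with "Access denied".
if [ -r "$cookie" ]; then
    echo "cookie is readable"
else
    echo "cookie missing or unreadable"
fi
```

If the cookie turns out to be unreadable, fixing its ownership and permissions (it should belong to the user running the application) is usually enough.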
I've installed sbt using sdkman on a WSL2 Ubuntu setup; currently sbt 1.4.2 is installed. When I try to launch it from the terminal, it prints:
sbt server is already booting. Create a new server? y/n (default y)
If I choose n, nothing happens. If I choose y, sbt starts. What I want is to be able to start sbt without that prompt, because this behaviour breaks Metals in Visual Studio Code.
I checked the sbt source code and found that the method below prints the message (in sbt/main/src/main/scala/sbt/Main.scala):
private def getSocketOrExit(
    configuration: xsbti.AppConfiguration
): (Option[BootServerSocket], Option[Exit]) =
  try (Some(new BootServerSocket(configuration)) -> None)
  catch {
    case _: ServerAlreadyBootingException
        if System.console != null && !ITerminal.startedByRemoteClient =>
      println("sbt server is already booting. Create a new server? y/n (default y)")
      val exit = ITerminal.get.withRawInput(System.in.read) match {
        case 110 => Some(Exit(1))
        case _   => None
      }
      (None, exit)
    case _: ServerAlreadyBootingException =>
      if (SysProp.forceServerStart) (None, None)
      else (None, Some(Exit(2)))
  }
So calling new BootServerSocket(configuration) throws an exception. The exception originates in the method below, from BootServerSocket.java:
static ServerSocket newSocket(final String sock) throws ServerAlreadyBootingException {
  ServerSocket socket = null;
  String name = socketName(sock);
  try {
    if (!isWindows) Files.deleteIfExists(Paths.get(sock));
    socket =
        isWindows
            ? new Win32NamedPipeServerSocket(name, false, Win32SecurityLevel.OWNER_DACL)
            : new UnixDomainServerSocket(name);
    return socket;
  } catch (final IOException e) {
    throw new ServerAlreadyBootingException();
  }
}
I checked the isWindows method and it returns false, so the new UnixDomainServerSocket(name) branch is running, and somehow it can't create a Unix domain server socket. That's all I found out. Is there a way to fix this, or is this a bug?
After moving my project files to a directory inside the WSL2 file system, the problem was solved. My project files were in a Windows directory before.
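For anyone hitting the same thing, the move can be sketched roughly like this (the source path is hypothetical; /mnt/c/... is where WSL2 mounts the Windows drive):

```shell
# Copy the project from the Windows mount into the Linux file system;
# sbt can then create its Unix domain socket on a real Linux path.
src="/mnt/c/Users/me/myproject"   # hypothetical Windows-side location
dst="$HOME/myproject"

mkdir -p "$dst"
cp -r "$src/." "$dst/"

# Work from the Linux copy from now on:
# cd "$dst" && sbt
```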
I created the following resource to encrypt all disks of a VM, and it worked fine at first:
resource "azurerm_virtual_machine_extension" "vm_encry_win" {
  count                      = "${var.vm_encry_os_type == "Windows" ? 1 : 0}"
  name                       = "${var.vm_encry_name}"
  location                   = "${var.vm_encry_location}"
  resource_group_name        = "${var.vm_encry_rg_name}"
  virtual_machine_name       = "${var.vm_encry_vm_name}"
  publisher                  = "${var.vm_encry_publisher}"
  type                       = "${var.vm_encry_type}"
  type_handler_version       = "${var.vm_encry_type_handler_version == "" ? "2.2" : var.vm_encry_type_handler_version}"
  auto_upgrade_minor_version = "${var.vm_encry_auto_upgrade_minor_version}"
  tags                       = "${var.vm_encry_tags}"

  settings = <<SETTINGS
    {
      "EncryptionOperation": "${var.vm_encry_operation}",
      "KeyVaultURL": "${var.vm_encry_kv_vault_uri}",
      "KeyVaultResourceId": "${var.vm_encry_kv_vault_id}",
      "KeyEncryptionKeyURL": "${var.vm_encry_kv_key_url}",
      "KekVaultResourceId": "${var.vm_encry_kv_vault_id}",
      "KeyEncryptionAlgorithm": "${var.vm_encry_key_algorithm}",
      "VolumeType": "${var.vm_encry_volume_type}"
    }
SETTINGS
}
When I ran it the first time, ADE encryption was applied to both the OS and the data disks.
However, when I re-run terraform plan or terraform apply, Terraform wants to replace all the data disks I have already created, as the following screenshot illustrates.
I do not know how to solve this; the disks that have already been created should not be replaced.
I checked suggestions along the lines of ignore_changes:
lifecycle {
  ignore_changes = [encryption_settings]
}
I am not sure where to add this, or whether it actually solves the problem. Which resource block should I add it to? Or is there another way?
I'm having trouble using the ftpUpload() function of RCurl to upload a file to a non-existent folder on an SFTP server. I want the folder to be created if it's not there, using the ftp.create.missing.dirs option. Here's my code currently:
.opts <- list(ftp.create.missing.dirs = TRUE)
ftpUpload(what = "test.txt",
          to = "sftp://ftp.testserver.com:22/newFolder/existingfile.txt",
          userpwd = paste(user, pwd, sep = ":"), .opts = .opts)
It doesn't seem to be working as I get the following error:
* Initialized password authentication
* Authentication complete
* Failed to close libssh2 file
I can upload a file to an existing folder successfully; it's only when the folder isn't there that I get the error.
The problem seems to be due to the fact that you are trying to create a new folder, as seen in this question: Create a remote directory using SFTP / RCurl
The source of the error can be found in the Microsoft R Open git page:
case SSH_SFTP_CLOSE:
  if(sshc->sftp_handle) {
    rc = libssh2_sftp_close(sshc->sftp_handle);
    if(rc == LIBSSH2_ERROR_EAGAIN) {
      break;
    }
    else if(rc < 0) {
      infof(data, "Failed to close libssh2 file\n");
    }
    sshc->sftp_handle = NULL;
  }
  if(sftp_scp)
    Curl_safefree(sftp_scp->path);
In this code, rc is the return value of the libssh2_sftp_close function (more info here: https://www.libssh2.org/libssh2_sftp_close_handle.html), which tries to close the handle in the nonexistent directory, resulting in the error.
Try using curlPerform to create the directory first:
curlPerform(url = "ftp.xxx.xxx.xxx.xxx/", postquote = "MkDir /newFolder/", userpwd = "user:pass")
I have a runnable JAR for a Java 8 program which uses sqlite-jdbc 3.14.2. It works fine on Windows 10 and Ubuntu, i.e. I can run queries on all the tables on those platforms. However, when I run it on FreeBSD 10.3-RELEASE-p4, it gives me the following error whenever I run queries on any of the tables:
[SQLITE_IOERR_LOCK] I/O error in the advisory file locking logic (disk I/O error)
Please advise a workaround or solution.
The same issue exists with 3.16.1.
So I finally found out what was wrong: it was an NFS-mounted volume that was causing the problem. With the DB file on a local file system, it works like a charm.
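A quick way to check whether the database file is sitting on NFS (the path below is hypothetical):

```shell
# Show the file system type backing the directory that holds the DB.
# If the Type column says nfs or nfs4, move the file to a local disk.
db="/path/to/mydb.sqlite"   # hypothetical database location
df -T "$(dirname "$db")"
```

On FreeBSD the same -T flag works; `mount | grep nfs` is an alternative way to list NFS mounts.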
If anyone comes to this question later: the error can be reproduced by first creating or opening a DB in WAL journalling mode, writing something, then closing the DB and trying to open it again in read-only mode with journalling off. This unit test reproduces the error:
@Test
public void mixJournalingModesFailureTest() throws Exception
{
    File tempDb = File.createTempFile( "tempdbtest", ".db" );
    tempDb.deleteOnExit();

    // Open a temp DB in RW mode with WAL journalling
    String url = "jdbc:sqlite:" + tempDb.getAbsolutePath();
    SQLiteConfig config = new SQLiteConfig();
    // Set read-write with WAL journalling
    config.setJournalMode( SQLiteConfig.JournalMode.WAL );
    config.setReadOnly( false );
    Properties props = config.toProperties();
    Connection conn = DriverManager.getConnection( url, props );

    // Write something
    try ( Statement statement = conn.createStatement() )
    {
        statement.execute( "CREATE TABLE test (words text)" );
    }

    // Close the DB
    conn.close();

    // Open the DB again but with journalling off and in read-only mode
    config.setJournalMode( SQLiteConfig.JournalMode.OFF );
    config.setReadOnly( true );
    props = config.toProperties();
    try
    {
        // This will throw the SQLITE_IOERR_LOCK advisory lock exception
        DriverManager.getConnection( url, props );
        fail( "Should throw advisory lock exception" );
    }
    catch ( SQLException ignore ) {}
}
Opening https://groups.google.com/group/caelyf/feed/rss_v2_0_topics.xml in a browser window correctly returns an XML stream.
Using Groovy in a Cloud Foundry app, this fails with an HTTP 403 permission failure:
def url = "https://groups.google.com/group/caelyf/feed/rss_v2_0_topics.xml:443".toURL()
def tx = url.getText('UTF-8')
The Cloud Foundry forum implies only HTTPS plus port 443 can read an external URL.
Any ideas?
Not sure why you stuck :443 on the end of the URL?
403 means Forbidden; I'm guessing Google doesn't let you scrape the Groups site with Java.
You could try setting the user agent to that of a browser:
def tx = url.openConnection().with {
    setRequestProperty( "User-Agent", "Firefox/2.0.0.4" )
    inputStream.with {
        def ret = getText( 'UTF-8' )
        close()
        ret
    }
}
or similar...
I don't think this is a Cloud Foundry issue; have you tried running the above from your own machine to confirm?
Edit:
Just tried it, and it works (at least on my machine). This shows how to load the XML into a parser and print the titles from the feed:
URL url = "https://groups.google.com/group/caelyf/feed/rss_v2_0_topics.xml".toURL()
def tx = new XmlSlurper().with { x ->
    url.openConnection().with {
        // Pretend to be an old Firefox version
        setRequestProperty( "User-Agent", "Firefox/2.0.0.4" )
        // Get a reader
        inputStream.withReader( 'UTF-8' ) {
            // and parse it with the XmlSlurper
            parse( it )
        }
    }
}
// Print all the titles
tx.channel.item.title.each { println it }