cameraViewRender and pipViewRender - ant-media-server

In the Ant Media Android SDK documentation (https://github.com/ant-media/Ant-Media-Server/wiki/WebRTC-Android-SDK-Documentation) there is the following code. What are cameraViewRenderer and pipViewRenderer? Why are there two separate views?
SurfaceViewRenderer cameraViewRenderer = findViewById(R.id.camera_view_renderer);
SurfaceViewRenderer pipViewRenderer = findViewById(R.id.pip_view_renderer);
webRTCClient.setVideoRenderers(pipViewRenderer, cameraViewRenderer);
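"pip" here is short for picture-in-picture, so the two views most likely follow the usual video-call layout: one SurfaceViewRenderer fills the screen while the other is a small overlay, letting the SDK render one stream (e.g. the published or remote video) full-screen and the other (e.g. the local camera preview) as a thumbnail. A plausible layout sketch; the IDs come from the snippet above, but the sizes, positions, and which stream ends up in which view are assumptions:
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- Full-screen renderer -->
    <org.webrtc.SurfaceViewRenderer
        android:id="@+id/camera_view_renderer"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <!-- Small picture-in-picture renderer overlaid in a corner -->
    <org.webrtc.SurfaceViewRenderer
        android:id="@+id/pip_view_renderer"
        android:layout_width="120dp"
        android:layout_height="160dp"
        android:layout_gravity="bottom|end"
        android:layout_margin="16dp" />
</FrameLayout>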

Related

java corda template build issue

I am new to Corda development and am trying to set up the Corda Java template for Hello World. I downloaded the template, and while syncing Gradle it throws the error below:
Caused by: org.gradle.api.resources.ResourceException: Could not get resource 'https://repo1.maven.org/maven2/org/jetbrains/kotlin/kotlin-stdlib-jre8/1.1.60/kotlin-stdlib-jre8-1.1.60.jar'.
at org.gradle.internal.resource.ResourceExceptions.failure(ResourceExceptions.java:74)
at org.gradle.internal.resource.ResourceExceptions.getFailed(ResourceExceptions.java:57)
at org.gradle.internal.resource.transfer.DefaultCacheAwareExternalResourceAccessor.copyToCache(DefaultCacheAwareExternalResourceAccessor.java:198)
at org.gradle.internal.resource.transfer.DefaultCacheAwareExternalResourceAccessor.access$300(DefaultCacheAwareExternalResourceAccessor.java:55)
at org.gradle.internal.resource.transfer.DefaultCacheAwareExternalResourceAccessor$1.create(DefaultCacheAwareExternalResourceAccessor.java:88)
at org.gradle.internal.resource.transfer.DefaultCacheAwareExternalResourceAccessor$1.create(DefaultCacheAwareExternalResourceAccessor.java:80)
at org.gradle.cache.internal.ProducerGuard$AdaptiveProducerGuard.guardByKey(ProducerGuard.java:97)
at org.gradle.internal.resource.transfer.DefaultCacheAwareExternalResourceAccessor.getResource(DefaultCacheAwareExternalResourceAccessor.java:80)
at org.gradle.api.internal.artifacts.repositories.resolver.DefaultExternalResourceArtifactResolver.downloadStaticResource(DefaultExternalResourceArtifactResolver.java:97)
at org.gradle.api.internal.artifacts.repositories.resolver.DefaultExternalResourceArtifactResolver.resolveArtifact(DefaultExternalResourceArtifactResolver.java:67)
at org.gradle.api.internal.artifacts.repositories.resolver.ExternalResourceResolver.download(ExternalResourceResolver.java:310)
at org.gradle.api.internal.artifacts.repositories.resolver.ExternalResourceResolver.resolveArtifact(ExternalResourceResolver.java:296)
... 27 more
build.gradle details
ext {
    corda_release_group = 'net.corda'
    corda_release_version = '3.3-corda'
    corda_gradle_plugins_version = '3.2.1'
    junit_version = '4.12'
    quasar_version = '0.7.9'
    spring_boot_version = '2.0.2.RELEASE'
    spring_boot_gradle_plugin_version = '2.0.2.RELEASE'
    slf4j_version = '1.7.25'
    log4j_version = '2.9.1'
}
It looks like you are using a very old version of the CorDapp template: your build.gradle says Corda 3.3, whereas the current release is 4.4. Please download a copy of the latest template and work from that: https://github.com/corda/cordapp-template-java
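A minimal way to start over from the current template (deployNodes is the standard task in that repo; the exact task set may differ by version):
git clone https://github.com/corda/cordapp-template-java.git
cd cordapp-template-java
./gradlew deployNodes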

data channel lock error while configuring flume with multiple channels

I have tried to fan out the flow from one source to two channels. I also specified different dataDirs and checkpointDirs properties for each channel, as in the "channel lock error while configuring flume's multiple sources using FILE channels" question, and I used a multiplexing channel selector. I get the following error:
18/08/23 16:21:37 ERROR file.FileChannel: Failed to start the file channel [channel=fileChannel1_2]
java.io.IOException: Cannot lock /root/.flume/file-channel/data. The directory is already locked. [channel=fileChannel1_2]
at org.apache.flume.channel.file.Log.lock(Log.java:1169)
at org.apache.flume.channel.file.Log.<init>(Log.java:336)
at org.apache.flume.channel.file.Log.<init>(Log.java:76)
at org.apache.flume.channel.file.Log$Builder.build(Log.java:276)
at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:281)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) .....
My configuration file is as follows.
agent1.sinks=hdfs-sink1_1 hdfs-sink1_2
agent1.sources=source1_1
agent1.channels=fileChannel1_1 fileChannel1_2
agent1.channels.fileChannel1_1.type=file
agent1.channels.fileChannel1_1.checkpointDir=/home/Flume/alpha/001
agent1.channels.fileChannel1_1.dataDir=/mnt/alpha_data/
agent1.channels.fileChannel1_1.checkpointOnClose=true
agent1.channels.fileChannel1_1.dataOnClose=true
agent1.sources.source1_1.type=spooldir
agent1.sources.source1_1.spoolDir=/home/ABC/
agent1.sources.source1_1.recursiveDirectorySearch=true
agent1.sources.source1_1.fileSuffix=.COMPLETED
agent1.sources.source1_1.basenameHeader = true
agent1.sinks.hdfs-sink1_1.type=hdfs
agent1.sinks.hdfs-sink1_1.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_1.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/CA
agent1.sinks.hdfs-sink1_1.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_1.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_1.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_1.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_1.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_1.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_1.hdfs.useLocalTimeStamp=false
agent1.channels.fileChannel1_2.type=file
agent1.channels.fileChannel1_2.capacity=200000
agent1.channels.fileChannel1_2.transactionCapacity=1000
agent1.channels.fileChannel1_2.checkpointDir=/home/Flume/beta/001
agent1.channels.fileChannel1_2.dataDir=/mnt/beta_data/
agent1.channels.fileChannel1_2.checkpointOnClose=true
agent1.channels.fileChannel1_2.dataOnClose=true
agent1.sinks.hdfs-sink1_2.type=hdfs
agent1.sinks.hdfs-sink1_2.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_2.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/AZ
agent1.sinks.hdfs-sink1_2.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_2.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_2.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_2.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_2.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_2.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_2.hdfs.useLocalTimeStamp=false
agent1.sources.source1_1.channels=fileChannel1_1 fileChannel1_2
agent1.sinks.hdfs-sink1_1.channel=fileChannel1_1
agent1.sinks.hdfs-sink1_2.channel=fileChannel1_2
agent1.sources.source1_1.selector.type=multiplexing
agent1.sources.source1_1.selector.header=basenameHeader
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2
Can someone suggest a solution for this?
Try setting a default channel via the multiplexing selector's default property; events whose header value matches none of the mappings are routed there:
agent1.sources.source1_1.selector.default=fileChannel1_1
The data channel lock error was corrected, but I still couldn't get the multiplexing to work. Code as follows.
agent1.sinks=hdfs-sink1_1 hdfs-sink1_2 hdfs-sink1_3
agent1.sources=source1_1
agent1.channels=fileChannel1_1 fileChannel1_2 fileChannel1_3
agent1.channels.fileChannel1_1.type=file
agent1.channels.fileChannel1_1.capacity=200000
agent1.channels.fileChannel1_1.transactionCapacity=1000
agent1.channels.fileChannel1_1.checkpointDir=/home/Flume/alpha/001
agent1.channels.fileChannel1_1.dataDirs=/home/Flume/alpha_data
agent1.channels.fileChannel1_1.checkpointOnClose=true
agent1.channels.fileChannel1_1.dataOnClose=true
agent1.sources.source1_1.type=spooldir
agent1.sources.source1_1.spoolDir=/home/ABC/
agent1.sources.source1_1.recursiveDirectorySearch=true
agent1.sources.source1_1.fileSuffix=.COMPLETED
agent1.sources.source1_1.basenameHeader = true
agent1.sources.source1_1.basenameHeaderKey = basename
agent1.sinks.hdfs-sink1_1.type=hdfs
agent1.sinks.hdfs-sink1_1.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_1.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/CA
agent1.sinks.hdfs-sink1_1.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_1.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_1.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_1.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_1.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_1.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_1.hdfs.useLocalTimeStamp=false
agent1.channels.fileChannel1_2.type=file
agent1.channels.fileChannel1_2.capacity=200000
agent1.channels.fileChannel1_2.transactionCapacity=1000
agent1.channels.fileChannel1_2.checkpointDir=/home/Flume/beta/001
agent1.channels.fileChannel1_2.dataDirs=/home/Flume/beta_data
agent1.channels.fileChannel1_2.checkpointOnClose=true
agent1.channels.fileChannel1_2.dataOnClose=true
agent1.sinks.hdfs-sink1_2.type=hdfs
agent1.sinks.hdfs-sink1_2.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_2.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/AZ
agent1.sinks.hdfs-sink1_2.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_2.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_2.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_2.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_2.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_2.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_2.hdfs.useLocalTimeStamp=false
agent1.channels.fileChannel1_3.type=file
agent1.channels.fileChannel1_3.capacity=200000
agent1.channels.fileChannel1_3.transactionCapacity=10
agent1.channels.fileChannel1_3.checkpointDir=/home/Flume/gamma/001
agent1.channels.fileChannel1_3.dataDirs=/home/Flume/gamma_data
agent1.channels.fileChannel1_3.checkpointOnClose=true
agent1.channels.fileChannel1_3.dataOnClose=true
agent1.sinks.hdfs-sink1_3.type=hdfs
agent1.sinks.hdfs-sink1_3.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_3.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/KT
agent1.sinks.hdfs-sink1_3.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_3.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_3.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_3.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_3.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_3.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_3.hdfs.useLocalTimeStamp=false
agent1.sources.source1_1.channels=fileChannel1_1 fileChannel1_2 fileChannel1_3
agent1.sinks.hdfs-sink1_1.channel=fileChannel1_1
agent1.sinks.hdfs-sink1_2.channel=fileChannel1_2
agent1.sinks.hdfs-sink1_3.channel=fileChannel1_3
agent1.sources.source1_1.selector.type=replicating
agent1.sources.source1_1.selector.header=basename
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2
agent1.sources.source1_1.selector.default=fileChannel1_3
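Two issues stand out here. First, selector.type=replicating makes Flume copy every event to all three channels and silently ignore the selector.mapping.* lines; header-based routing requires the type to stay multiplexing. Second, note that the first configuration spelled the file-channel property dataDir, while Flume's property is dataDirs; with the misspelled name both channels fell back to the default data directory (~/.flume/file-channel/data) and collided on its lock, which is the likely root cause of the original error. A corrected selector block along those lines:
agent1.sources.source1_1.selector.type=multiplexing
agent1.sources.source1_1.selector.header=basename
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2
agent1.sources.source1_1.selector.default=fileChannel1_3
Keep in mind that the basename header holds the spooled file's full name, so mapping.CA only matches files literally named CA; a file named CA.txt would go to the default channel.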

AVCaptureMetadataOutputObjectsDelegate not called in swift 4 for QR scanner

I was working on a QR code scanner app on iOS, receiving output on the delegate method captureOutput:didOutputMetadataObjects:fromConnection:. It was working perfectly in Swift 3. After I updated to Xcode 9 and Swift 4, it stopped working.
Okay, I've found an update here. The AVCaptureMetadataOutputObjectsDelegate method changed from
captureOutput(_ captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [Any]!, from connection: AVCaptureConnection!)
to
metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection)
After changing to the new delegate method, it is working fine now.
In Swift 4:
Replace
let metadataOutput = AVCaptureMetadataOutput()
metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
metadataOutput.metadataObjectTypes = metadataOutput.availableMetadataObjectTypes
With:
let metadataOutput = AVCaptureMetadataOutput()
metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
let supportedCodeTypes = [AVMetadataObject.ObjectType.upce,
                          AVMetadataObject.ObjectType.code39,
                          AVMetadataObject.ObjectType.code39Mod43,
                          AVMetadataObject.ObjectType.code93,
                          AVMetadataObject.ObjectType.code128,
                          AVMetadataObject.ObjectType.ean8,
                          AVMetadataObject.ObjectType.ean13,
                          AVMetadataObject.ObjectType.aztec,
                          AVMetadataObject.ObjectType.pdf417,
                          AVMetadataObject.ObjectType.qr]
metadataOutput.metadataObjectTypes = supportedCodeTypes
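For completeness, a minimal Swift 4 implementation of the renamed delegate method; the ScannerViewController name and the print call are placeholders for your own handling:
import AVFoundation

extension ScannerViewController: AVCaptureMetadataOutputObjectsDelegate {
    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        // The decoded payload lives on AVMetadataMachineReadableCodeObject.
        guard let code = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
              code.type == .qr,
              let value = code.stringValue else { return }
        print("Scanned QR payload: \(value)")
    }
}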

cocoapods dylib dependency use_frameworks

I have built a dynamic library (to add ICU support in this case) which I need to add as a dependency to a pod. For that I created a pod with the following podspec (I removed things like authors, license, ... to keep it short):
Pod::Spec.new do |s|
  s.name = 'unicode'
  s.version = '57.0'
  s.source = { :git => "git@bitbucket.org:mycompany/unicode.git", :tag => "#{s.version}" }
  s.requires_arc = false
  s.platform = :ios, '8.0'
  s.default_subspecs = 'all'

  s.subspec 'all' do |ss|
    ss.header_mappings_dir = 'icu4c/include'
    ss.source_files = 'icu4c/include/**/*.h'
    ss.public_header_files = 'icu4c/include/**/*.h'
    ss.vendored_libraries = 'Frameworks/lib*.dylib'
  end
end
Here is a second pod where I need to link these libraries too:
Pod::Spec.new do |s|
  s.name = 'sqlite3'
  s.version = '3.14.2'
  s.summary = 'SQLite is an embedded SQL database engine'
  s.documentation_url = 'https://sqlite.org/docs.html'
  s.homepage = 'https://github.com/clemensg/sqlite3pod'
  s.authors = { 'Clemens Gruber' => 'clemensgru@gmail.com' }

  v = s.version.to_s.split('.')
  archive_name = "sqlite-amalgamation-" + v[0] + v[1].rjust(2, '0') + v[2].rjust(2, '0') + "00"
  #s.source = { :http => "https://www.sqlite.org/#{Time.now.year}/#{archive_name}.zip" }
  s.source = { :git => "git@bitbucket.org:wrthphoenixspeedy/sqlite3.git", :tag => "#{s.version}" }
  s.requires_arc = false
  s.platform = :ios, '8.0'
  s.default_subspecs = 'common'

  s.subspec 'common' do |ss|
    ss.source_files = "#{archive_name}/sqlite*.{h,c}"
    ss.osx.pod_target_xcconfig = { 'OTHER_CFLAGS' => '$(inherited) -DHAVE_USLEEP=1' }
    # Disable OS X / AFP locking code on mobile platforms (iOS, tvOS, watchOS)
    sqlite_xcconfig_ios = { 'OTHER_CFLAGS' => '$(inherited) -DHAVE_USLEEP=1 -DSQLITE_ENABLE_LOCKING_STYLE=0' }
    ss.ios.pod_target_xcconfig = sqlite_xcconfig_ios
    ss.tvos.pod_target_xcconfig = sqlite_xcconfig_ios
    ss.watchos.pod_target_xcconfig = sqlite_xcconfig_ios
  end

  # Enable support for ICU - International Components for Unicode
  s.subspec 'icu' do |ss|
    ss.dependency 'sqlite3/common'
    ss.pod_target_xcconfig = { 'OTHER_CFLAGS' => '$(inherited) -DSQLITE_ENABLE_ICU=1' }
    ss.dependency 'unicode', '57.0'
    ss.libraries = 'icucore', 'icudata.57.1', 'icui18n.57.1', 'icuio.57.1', 'icule.57.1', 'iculx.57.1', 'icutu.57.1', 'icuuc.57.1'
  end
end
With these I am able to compile. At build time CocoaPods copies the libraries into the app's Frameworks/ folder, but at run time loading fails because the app looks for them in ../lib:
dyld: Library not loaded: ../lib/libicudata.57.1.dylib
Referenced from: /var/containers/Bundle/Application/9663CB3A-6ACD-487E-A92D-48F8AFE5260C/MyApp.app/MyApp
Reason: image not found
I have to use use_frameworks! because I am using some Swift frameworks too.
So I am doing something wrong. The question is: can I link a dylib from one pod to another pod? And if so, how?
Based on the disparity between "../lib" and "Frameworks", this looks like an issue with either runpath search paths (the running app is not looking for the library in Frameworks), or with the install name of the library not matching the location where it is placed relative to where it is dynamically loaded from.
Make sure that in the app that bundles the dynamic library you have the following paths included in "Runpath Search Paths": @executable_path/../Frameworks, @loader_path/../Frameworks
Make sure that the "Dynamic Library Install Name" of the library being loaded is set to the equivalent of @rpath/$(EXECUTABLE_PATH) (i.e. in your case it should be "@rpath/libicudata.57.1.dylib"). You can set it at build time using the -install_name linker flag, or afterwards with install_name_tool, like so: install_name_tool -id "@rpath/libicudata.57.1.dylib" libicudata.57.1.dylib. Hopefully it doesn't come to this, though.
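To inspect and fix the install name from the command line (the library name is taken from the dyld error above; adjust paths to where your dylibs actually live):
# Print the library's current install name (its ID)
otool -D libicudata.57.1.dylib

# Rewrite the install name so the loader resolves it relative to the runpath
install_name_tool -id "@rpath/libicudata.57.1.dylib" libicudata.57.1.dylib

# Verify which paths the app binary actually records for its loaded libraries
otool -L MyApp.app/MyApp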

How can I specify server-side encryption of Amazon S3 objects with PowerShell?

Would someone explain how to enable Amazon S3 server-side encryption in a PowerShell script? I'm using the sample code below, but when I check encryption in the AWS Console or CloudBerry S3 Explorer Pro the encryption type is still set to 'none'. Using AWS / CloudBerry to do this manually after files are uploaded isn't feasible because the script is to be deployed to 200+ servers, each with its own bucket in S3. Here's a snippet of code from the script:
$TestFile="testfile.7z"
$S3ObjectKey = "mytestfile.7z"
#Create Amazon PutObjectRequest.
$AmazonS3 = [Amazon.AWSClientFactory]::CreateAmazonS3Client($S3AccessKeyID,$S3SecretKeyID)
$S3PutRequest = New-Object Amazon.S3.Model.PutObjectRequest
$S3PutRequest.BucketName = $S3BucketName
$S3PutRequest.Key = $S3ObjectKey
$S3PutRequest.FilePath = $TestFile
$S3Response = $AmazonS3.PutObject($S3PutRequest)
I've tried inserting the following without success (before the $S3Response line):
$S3PutRequest.ServerSideEncryption
When the above is added I get this message in the output but the file is still not tagged as encrypted on S3:
MemberType : Method
OverloadDefinitions : {Amazon.S3.Model.PutObjectRequest WithServerSideEncryptionMethod(Amazon.S3.Model.ServerSideEncryptionMethod encryption)}
TypeNameOfValue : System.Management.Automation.PSMethod
Value : Amazon.S3.Model.PutObjectRequest WithServerSideEncryptionMethod(Amazon.S3.Model.ServerSideEncryptionMethod encryption)
Name : WithServerSideEncryptionMethod
IsInstance : True
Can anyone tell me what I'm doing wrong? Many thanks in advance.
Referencing a method without calling it (no parentheses, no assignment) just echoes its description, which is the output you saw; nothing is set on the request. You should add:
$S3PutRequest.WithServerSideEncryptionMethod([Amazon.S3.Model.ServerSideEncryptionMethod]::AES256)
Or:
$S3PutRequest.ServerSideEncryptionMethod = [Amazon.S3.Model.ServerSideEncryptionMethod]::AES256
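If the AWS Tools for PowerShell module (AWSPowerShell) is available on the servers, the upload and encryption can also be done in a single cmdlet call; this is an alternative to the raw SDK calls above, not part of the original script:
Write-S3Object -BucketName $S3BucketName -Key $S3ObjectKey -File $TestFile -ServerSideEncryption AES256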
If you are using CloudBerry, it has its own PowerShell snap-in:
Add-PSSnapin CloudBerryLab.Explorer.PSSnapin
$s3 = Get-CloudS3Connection -Key XXXXXXX -Secret YYYYYYY
$destFolder = $s3 | Select-CloudFolder -path "mybucket"
$local = Get-CloudFilesystemConnection
$srcFolder = $local | Select-CloudFolder -path "c:\myzips"
$srcFolder | Copy-CloudItem $destFolder -filter "testfile.7z" -SSE
Note the -SSE parameter in the Copy-CloudItem command.
Some helpful examples can be found on their blog: http://blog.cloudberrylab.com/search?q=powershell
