How to access a file from USB through Native Client - google-nativeclient

I am trying to access the contents of a USB drive from a Native Client application using the URLLoader API, but the call to Open() fails with error code -7 (PP_ERROR_NOACCESS = -7, which indicates failure due to insufficient privileges). Can you suggest an alternative way of loading a file from USB through NaCl, or is this simply not supported?

If you are able to access the file in JavaScript, then at least sending the file contents (via postMessage) to your NaCl application should be possible.
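For example, something along these lines should work. This is only a sketch; the usb_file and nacl_module ids are made-up names for a standard file input and your NaCl embed element:

<input type="file" id="usb_file">
<script>
  // Read the user-selected file in JavaScript and forward its bytes to the NaCl module.
  document.getElementById('usb_file').addEventListener('change', function (event) {
    var file = event.target.files[0];
    var reader = new FileReader();
    reader.onload = function () {
      // postMessage accepts an ArrayBuffer; the module receives it in HandleMessage()
      // as a pp::VarArrayBuffer.
      document.getElementById('nacl_module').postMessage(reader.result);
    };
    reader.readAsArrayBuffer(file);
  });
</script>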

Related

GetTwinAsync() is not supported in IoT Edge Simulator v0.14.10

GetTwinAsync() always returns a Twin object with empty properties: all properties of my IoT Edge module are null when I run my IoT Edge device in the simulator, while on my Linux server everything works fine. I also have to wait about 20 seconds to get a response from GetTwinAsync().
This behaviour is expected. If you read the Understand and use module twins in IoT Hub document, you will see that a module app only has permission to read desired properties and to read/write reported properties.
The lifecycle of a module twin is linked to the corresponding module identity. Module twins are implicitly created and deleted when a module identity is created or deleted in IoT Hub.
To access all module properties, you need to do it from the solution back end, which requires the ServiceConnect permission. You will need Microsoft.Azure.Devices V1.16.0-preview-001 or later. The following is a console app code snippet.
...
RegistryManager registryManager = RegistryManager.CreateFromConnectionString(connectionString);
Module module;
try
{
    // Create the module identity; fall back to fetching it if it already exists.
    module = await registryManager.AddModuleAsync(new Module(deviceID, moduleID));
}
catch (ModuleAlreadyExistsException)
{
    module = await registryManager.GetModuleAsync(deviceID, moduleID);
}
...
For a more detailed explanation and an example, check Get started with IoT Hub module identity and module twin (.NET). If your issue still persists, you can open an issue on the azure-iot-sdk-csharp repository.
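For reference, once you have the RegistryManager from the snippet above, reading and updating the full module twin from the back end looks roughly like this. This is a sketch only; "telemetryInterval" is just an illustrative desired-property name:

// Read the full module twin (tags, desired and reported properties) from the back end.
Twin twin = await registryManager.GetTwinAsync(deviceID, moduleID);
Console.WriteLine(twin.ToJson());

// Patch a desired property; the module app will see it as a desired-property change.
// "telemetryInterval" is only an example property name.
string patch = "{ \"properties\": { \"desired\": { \"telemetryInterval\": 10 } } }";
await registryManager.UpdateTwinAsync(deviceID, moduleID, patch, twin.ETag);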

Persona U are U 4500 Web API

I am new to biometrics. I bought a new Persona U are U 4500 device and SDK from a vendor. The SDK has some samples (as expected). All of the samples run smoothly except the WebSample: it does not detect my device, and it also gives an error in the console.
Can anyone please help me fix this issue and explain why I am facing this problem? Is it something related to my wss://localhost?
Update
By diving further into the program, I found the URL https://127.0.0.1:52181/get_connection specified in websdk.client.bundle.min.js. When I open the link, it says:
{
  "code": -2147024894,
  "message": "The system cannot find the file specified."
}
Am I missing some file?
I don't have it in front of me now, because I switched back to the U.are.U 2.2.3 SDK, which does not have this feature.
But it sounds like you possibly have not installed the DigitalPersona Lite Client component. This runs a separate WebSocket service on port 9001 (IIRC) through which the JavaScript client then communicates.
It is described here: https://hidglobal.github.io/digitalpersona-devices/tutorial.html
After installation, you will need to restart.
The call to https://127.0.0.1:52181/get_connection should then respond with details of the WebSocket service, to which the JavaScript client will connect.
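As a quick check (just a sketch; I'm assuming the endpoint simply returns a text/JSON body you can log), you can probe that URL from the browser console to confirm the Agent is reachable before loading the SDK scripts:

// Probe the DigitalPersona endpoint; a network error, or a body like the
// "The system cannot find the file specified" message above, suggests the
// Lite Client / Agent is not installed or not running.
fetch('https://127.0.0.1:52181/get_connection')
  .then(function (response) { return response.text(); })
  .then(function (body) { console.log('get_connection response:', body); })
  .catch(function (err) { console.error('DigitalPersona Agent not reachable:', err); });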
NOTE: The WebSdk library requires the DigitalPersona Agent running on the client machine. This agent provides a secure communication channel between a browser and a fingerprint or card device driver. The DigitalPersona Agent is part of a HID DigitalPersona Workstation. It can also be installed with a DigitalPersona Lite Client. If you expect that your users do not use HID DigitalPersona Workstation, you may need to provide your users with a link to the Lite Client download, which you should show on a reader communication error.
A link is provided there to download the Lite client from here: https://www.crossmatch.com/AltusFiles/AltusLite/digitalPersonaClient.Setup64.exe
Just add a crossorigin attribute to each script tag. It will look like this:
<script src="scripts/websdk.client.bundle.min.js" crossorigin="*"></script>
<script src="scripts/fingerprint.sdk.min.js" crossorigin="*"></script>

Puppeteer name resolution error on Firebase Cloud Functions

I have created a brand new free tier project, cloned the Puppeteer Firebase Functions demo repository, and only changed the default project name in the .firebaserc file.
When I run the simple test or version functions, I get the correct result. When I open the .com/screenshot page without any parameter, I get the correct ("Please provide a URL...") response.
But when I try any URL, e.g. .com/screenshot?url=https://en.wikipedia.org/wiki/Google, I get Error: net::ERR_NAME_RESOLUTION_FAILED at https://en.wikipedia.org/wiki/Google thrown in the response.
I tried looking for any name resolution errors related to Puppeteer, but I could not find anything. Could this be a problem with using the free tier?
The free Spark payment plan restricts all outgoing connections except those to API endpoints that are fully controlled by Google. As a result, I would expect that Puppeteer is not able to make any outgoing connections to external web sites.

Inconsistent Cognos errors

I am trying to do a couple of things within Cognos:
Load Framework Manager and view/modify SQL behind existing models and create new models
Modify existing reports through Report Studio via Cognos Connection
I was given an account on the Cognos application server and I installed Framework Manager. I was given the gateway URL and dispatcher URL from the System Admin and then transferred all of the project files to the server so that I could load the project in question. I'm able to open the .cpf file; however, when going into any models, I get the error:
Unable to access service at URL:
https://xxx.cognos.xxx.xxx:443/ibmcognos/cgi-bin/cognos.cgi?b_action=xts.run&m=portal/close.xts
Please check that your gateway URI information is configured correctly and that the service is available.
For further information please contact your service administrator.
I then contacted the system admin and he indicated that the URL was correct.
Furthermore, now when I try to access Cognos Connection (which worked fine last week), I receive the error:
CM-REQ-4159
Content Manager returned an error in the response header. The error "cmAuthenticateFailed CM-CAM-4005 Unable to authenticate. Check your security directory server connection and confirm the credentials entered at login." can be found in the response SOAP header.
The odd thing is, another member of my team receives this error:
AAA-AUT-0016:
The function call to 'Method.invoke(cmServiceInstance, queryRequest)' failed.
Details:
CM-SYS-5192 An error occurred with Content Manager.
I've done some research (I'm not really familiar with Cognos or even networking) and found that these errors (the ones that I receive) are usually received when trying to run a single report; however, I can't even access FM models or Cognos Connection in general. I also don't understand how we can receive 2 different errors when accessing the same URLs from the same network.
Any guidance would be greatly appreciated. We are using Cognos 10.2.2.
http://www-01.ibm.com/support/docview.wss?uid=swg21624136
One possible reason is that the user does not have the required "Import relational metadata" capability.
Or maybe it is something to do with the registry (see http://www-01.ibm.com/support/docview.wss?uid=swg22015730).
Note: Make sure you back up the registry before making any changes.
Open cmd and type "regedit".
Navigate to HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\FeatureControl.
Right-click the "BMT.exe"=dword:00002af9 value and delete it.
Re-launch Framework Manager.
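If you prefer to script that change, something like the following should work from an elevated command prompt. This is a sketch only, based on the path quoted above; the BMT.exe value may sit in a sub-key under FeatureControl on your machine, so confirm the exact location in regedit first:

rem Back up the key before touching it
reg export "HKCU\Software\Microsoft\Internet Explorer\Main\FeatureControl" featurecontrol-backup.reg
rem Remove the BMT.exe value, then re-launch Framework Manager
reg delete "HKCU\Software\Microsoft\Internet Explorer\Main\FeatureControl" /v BMT.exe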

Programmatically open an email from a POP3 and extract an attachment

We have a vendor that sends CSV files as email attachments. These CSV files contain statuses that are imported into our application. I'm trying to automate the process end-to-end, but it currently depends on someone opening the email and saving the attachment to a server share so the application can use the file.
Since I cannot convince the vendor to change their process, such as offering an FTP location or a Web Service, I'm stuck with trying to automate the existing process.
Does anyone know of a way to programmatically open an email from a POP3 account and extract an attachment? The preferred solution would reside on a Windows 2003 server, be written in VB.NET, and be secure. The application can reside on the same server as the POP3 server; for example, we could set up the free POP3 server that comes with Windows Server and pull against the mail file stored on the file system.
BTW, we are willing to pay for an off-the-shelf solution, if one exists.
Note: I did look at this question but the answer points to a CodeProject solution that doesn't deal with attachments.
Try the Mail.dll email component; it's very affordable, supports attachments and national characters, is easy to use, and also supports SSL:
Using pop3 As New Pop3()
    pop3.Connect("mail.server.com")
    pop3.Login("user", "password")

    Dim builder As New MailBuilder()
    For Each uid As String In pop3.GetAll()
        ' Receive and parse the email message
        Dim mail As IMail = builder.CreateFromEml(pop3.GetMessageByUID(uid))

        ' Write out the received message subject
        Console.WriteLine(mail.Subject)

        ' Here you can use the mail.Attachments collection
        For Each attachment As MimeData In mail.Attachments
            Console.WriteLine(attachment.FileName)
            attachment.Save("c:\" + attachment.FileName)
            ' You can also use attachment.Data here
        Next attachment
    Next
    pop3.Close(True)
End Using
You can download it here: http://www.lesnikowski.com/mail.
Possible duplicate of Reading Email using Pop3 in C#.
At least there's a shed-load of suggestions there that you may find useful.
I'll throw in a late suggestion for a more generalized "download POP3 messages and extract attachments" solution using existing software and minimal programming. I needed to do this for a client who switched to receiving faxes via email and was not pleased with manually saving the attachments to a location where they could be imported into an application.
For downloading messages on *nix systems fetchmail seems to be the standard and is very capable, but I chose mpop for both simplicity and Windows compatibility (but it is cross-platform). If mpop hadn't done the trick for me, I probably would have ended up doing something with the Python-based getmail, which was created when fetchmail's development stalled for a time (it's since resumed).
Mpop is controlled either via command line or configuration file, so I simply created multiple configuration files and specify via command line which file to load. I'm using it in "Exchange pickup directory" mode, which means it simply downloads the messages and drops them as text (.eml) files in a specified directory.
For extraction of the message attachments, UUDeview appears to be the standard (I'm using the Windows port of UUDeview) across just about any system you could want with just about any features you could want. My main alternative to this was a much-less-capable Python script that I'd developed for a different client back in 2007, but I'm happy to go with a precompiled executable over either installing Python or packaging with any of the Python-to-exe options.
Finally there's the configuration - along with the two mpop configuration files mentioned above (which I could do away with by using command-line options), I also have two 2-line .cmd files launched every 10 minutes by scheduled task - the first line to launch mpop to download into a working directory and the second line to launch UUDeview and extract attachments of specified types (.pdf or .tif) then delete each file from which it extracted attachments. Output is sent to another directory from which staff can directly attach files as needed.
This is overall not the most elegant way to reach these ends, but it was quick, simple, functional and reasonably robust - at each stage if something goes wrong it fails such that no data is lost. The only places where data could be lost are any non-attachment messages being sent to the dedicated fax email addresses, and even those will sit in the processing directory and be caught eventually.
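For completeness, the 10-minute scheduled task mentioned above can be registered with schtasks. This is a sketch; the task name and .cmd path are made up here:

rem Run the download-and-extract batch file every 10 minutes
schtasks /create /tn "FetchFaxAttachments" /sc minute /mo 10 /tr "C:\faxjobs\fetch-and-extract.cmd"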
