I have recently started developing a microcontroller-based device that will have a BLE module. The device is supposed to send an analog reading fetched from a sensor to an Android application that I am going to develop.
From what I have studied about the way GATT works:
The microcontroller-based device will be the GATT server.
The Android application will be the GATT client.
From a communication point of view, the microcontroller-based device is the slave and the Android application is the master.
Questions:
How do I decide the number of attributes I need to define in order to receive a command from the GATT client and send the response (which is going to be a float value)? Do I need two distinct attributes: one for Android to send commands and one for the microcontroller-based device to send data to Android? Or can I use a single attribute?
GATT appears to be an event-driven system.
2.1: What events will be generated when Android sends a command to the microcontroller-based device (client to server)?
2.2: Will an event be generated when data is written to the attribute that the Android application is going to read (server to client)?
The Android application (GATT client) should use read/write commands to communicate with the microcontroller-based device (GATT server), and the GATT server should use Notify/Indicate to pass data to the GATT client. Is my understanding correct?
I am using this BlueGiga BLE112 Module for development.
The gatt.xml file that I so far have written is:
<?xml version="1.0" encoding="UTF-8" ?>
<configuration>
<!-- 1800: org.bluetooth.service.generic_access -->
<service uuid="1800" id="generic_access">
<description>Generic Access</description>
<!-- 2A00: org.bluetooth.characteristic.gap.device_name -->
<characteristic uuid="2A00" id="c_device_name">
<description>Device Name</description>
<properties read="true" const="true" />
<value>MyBLEDev</value>
</characteristic>
<!-- 2A01: org.bluetooth.characteristic.gap.appearance -->
<characteristic uuid="2A01" id="c_appearance">
<description>Appearance</description>
<properties read="true" const="true" />
<value type="hex">0300</value>
</characteristic>
</service>
<!-- custom service -->
<service uuid="624e957f-cb42-4cd6-bacc-84aeb898f69b" advertise="true">
<description>Custom Device Service</description>
<!-- custom write-only characteristic for Client to send commands to fetch reading -->
<characteristic uuid="a57892fe-4f58-97d4-a524-578a4125d3e6" id="c_cmd_TxReading">
<description>Request for Reading</description>
<properties write="true" />
<value length="4" />
</characteristic>
<characteristic uuid="8fde302a-56ac-b289-65ed-a577ed66b89c" id="c_reading">
<description>Measurement</description>
<properties read="true" write="true" />
<value length="4" type="float32" />
</characteristic>
</service>
</configuration>
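For reference, the 4-byte reading declared above is just raw bytes on the wire, so both ends have to agree on the encoding. A minimal Python sketch of the pack/unpack the Android side will have to mirror (assuming little-endian IEEE-754 single precision, which is the usual convention for multi-byte GATT values):

```python
import struct

def encode_reading(value: float) -> bytes:
    # server side: pack the sensor reading as 4 little-endian IEEE-754 bytes
    return struct.pack("<f", value)

def decode_reading(payload: bytes) -> float:
    # client side: decode the 4-byte characteristic value back to a float
    return struct.unpack("<f", payload)[0]
```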
I see a GATT server as a chunk of memory on another machine. You can request particular chunks by handle and get back different information, and you can make the other machine do different things or respond in different ways by writing values to those handles. The difference from a memory space is that each handle can contain different sizes of information, and each has a UUID that identifies how to interpret the data you find there. In a regular memory space each "handle" would be an address, each chunk would be a single byte, and there would be no way to figure out how to interpret that data without some other information.
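That mental model can be sketched in a few lines of Python (the handles and values here are made up for illustration; the measurement UUID is the one from the gatt.xml above):

```python
# A GATT server modeled as a map: handle -> (uuid, value).
# Unlike flat memory, each slot has its own length, and the UUID tells
# the client how to interpret the bytes it finds there.
attribute_table = {
    0x0003: ("2A00", b"MyBLEDev"),  # Device Name, 8 bytes of UTF-8
    0x0012: ("8fde302a-56ac-b289-65ed-a577ed66b89c", b"\x00\x00\x00\x00"),  # 4-byte reading
}

def read_by_handle(handle: int) -> bytes:
    """A GATT read boils down to: hand back the blob stored at that handle."""
    _uuid, value = attribute_table[handle]
    return value

def write_by_handle(handle: int, data: bytes) -> None:
    """A GATT write replaces the blob (and may trigger server-side behavior)."""
    uuid, _old = attribute_table[handle]
    attribute_table[handle] = (uuid, data)
```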
So... questions:
Like most questions on here, the answer is "it depends". If you just want to fetch the value, you only need a single attribute holding the data for the client to read. If you'd also like the GATT server to send notifications whenever that value changes, then you'd also have to add a Client Characteristic Configuration descriptor to that attribute. (For example, I have one accelerometer that exposes 3 attributes for the X, Y, and Z values, and another device that reports all 3 values as a single attribute. Because this type of value hasn't been standardized, they can do this by defining their own custom UUIDs. If you're measuring something that already has a standard layout, you should probably use that instead.)
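Worth knowing: "subscribing" is itself just a write. The client writes a 2-byte little-endian flag value to the Client Characteristic Configuration descriptor (UUID 0x2902). A small Python sketch of the two defined payloads:

```python
import struct

# Client Characteristic Configuration descriptor (UUID 0x2902) flag values
CCC_NOTIFY = 0x0001    # unacknowledged notifications
CCC_INDICATE = 0x0002  # acknowledged indications

def ccc_payload(flags: int) -> bytes:
    # the 2-byte little-endian value the client writes to subscribe/unsubscribe
    return struct.pack("<H", flags)
```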
GATT has some event-driven aspects and other aspects that are handled serially. For instance, you can only be negotiating one connection request at a time. However, you can receive notifications in any order, from any number of attributes, at any time.
You can't really define your own commands with GATT. You're restricted to operations like "read from handle" or "write to handle", similar to manipulating a chunk of memory. The underlying implementation can be hardware-dependent, but usually you can trigger some sort of event when a handle is manipulated.
You can request events by subscribing to notifications or indications on a particular attribute.
Yes, that's correct.
Related
How can I improve DataPower monitoring? For example, I want to check that all objects (FSHs/MQFSHs, SSL proxies, crypto profiles, etc.) are up and, in case any of them goes down, be notified by email or something similar. I also want to check the number of files in the file-management on-disk folders, and basically validate an adapter after deployment (we use SoapUI to test adapter functionality, but I'd like something else to improve or add validation). Please suggest any ideas that can be implemented as a process improvement on DataPower.
For example, you can get the status of all your domains using the SOMA call below. You can test it using SoapUI. You can get the list of the various SOMA calls from the DataPower management WSDL (available in the DataPower store: directory).
<!-- get all the domains -->
<xsl:variable name="domainsList">
<dp:url-open target="{$XML-MGMT-URL}" response="responsecode">
<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
<env:Body>
<dp:request xmlns:dp="http://www.datapower.com/schemas/management">
<dp:get-status class="DomainStatus"/>
</dp:request>
</env:Body>
</env:Envelope>
</dp:url-open>
</xsl:variable>
Try using the SOMA commands of the XML management interface to check object status.
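The same get-status request can also be posted from outside the appliance. A rough Python sketch (the host, port, and credentials are placeholders; the XML management interface conventionally listens on 5550):

```python
import base64
import urllib.request

# placeholder: adjust host, port, and credentials for your appliance
XML_MGMT_URL = "https://dp-host:5550/service/mgmt/current"

def soma_get_status(status_class: str) -> bytes:
    """Build a SOMA get-status envelope for the given status class."""
    return (
        '<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">'
        "<env:Body>"
        '<dp:request xmlns:dp="http://www.datapower.com/schemas/management">'
        f'<dp:get-status class="{status_class}"/>'
        "</dp:request>"
        "</env:Body>"
        "</env:Envelope>"
    ).encode("utf-8")

def post_soma(envelope: bytes, user: str, password: str) -> bytes:
    """POST the envelope to the XML management interface with basic auth."""
    request = urllib.request.Request(
        XML_MGMT_URL, data=envelope, headers={"Content-Type": "text/xml"}
    )
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    request.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(request) as response:
        return response.read()
```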
I am not sure if this is the best approach, but this is how I have implemented it. You can create a testing service in DataPower, with or without an interactive Java application, to perform all the SOAP tests you are currently doing in SoapUI. You can make SOMA/AMP calls to check the status of objects, ping external services, etc., and schedule these tests at a regular interval or run them manually.
Depending on how you set it up, you can either generate an email with the status of each object/service you are testing or create an HTML dashboard that records the current status of everything.
A while back I set up BizTalk to pick up a file via FTP and drop it into a network directory. It's all passthrough, so I didn't use an orchestration.
Now I've been asked to execute a stored procedure once the file is picked up. The procedure contains no parameters and I do not need the contents of the file.
It seems like such a simple request but I can't figure it out. Is there any way to do this without over complicating things?
This can be accomplished with either the WCF-SQL adapter or the WCF-Custom adapter with a SQL binding. You can do it using messaging only, with just a send port with a filter/map on it, so no orchestration is needed.
For the SOAP action header, use TypedProcedure/dbo/name_of_your_stored_procedure, and in the Messages tab you can specify the parameters to the stored procedure as well as add a payload in the following manner:
<name_of_your_stored_procedure xmlns="http://schemas.microsoft.com/Sql/2008/05/TypedProcedures/dbo">
<parameter1>XXXX</parameter1>
<xml_parameter>
<bts-msg-body xmlns="http://www.microsoft.com/schemas/bts2007" encoding="string"/>
</xml_parameter>
</name_of_your_stored_procedure>
In the above case xml_parameter will have the contents of the message payload passed to it.
The stored procedure should look something like :
CREATE PROCEDURE [dbo].[name_of_your_stored_procedure]
@parameter1 int,
@xml_parameter nvarchar(max)
AS
BEGIN
-- your code goes here
END
More details can be found here
Regards, Hasse
This MSDN page describes the process and has this to say: "You must create a BizTalk orchestration to use BizTalk Server for performing an operation on SQL Server."
However if you're really desperate not to use an orchestration I believe you have the option of setting the operation context property in a custom pipeline component. Then you can initialise the message in a map on a port. In theory this should work but I can't guarantee it.
I've been running BizTalk 2004 with the Covast EDI accelerator since 2004. I'm currently upgrading to BizTalk 2013 R2 and having difficulty viewing the final outbound interchange document for an X12 document. My final destination is an AS2EDISend port.
I can see the interchange information (sender/receiver/control ID) in the "EDI Interchange and Correlated ACK Status" report. I can see more information in the "Interchange Status and ACK Details" screen. I can view the transaction set and its details, and from there get the final transaction set (ST to SE segments) in raw ASCII format.
But I can't see the raw final outbound interchange complete with the ISA/GS segments.
I do have tracking turned on and when I look at the tracked message events, I can see receive/send events for the AS2EDI pipeline. When I look at the message on the receive event, it's the XML representation of the transaction set. When I look at the message on the send event, it's already been AS2 encoded and I'm unable to view the raw ASCII EDI file complete with ISA/GS segments.
Am I missing something? Is there somewhere else to look? Will I have to configure a secondary send port which only does EDISend and write to my filesystem and maintain/archive that information myself?
The ISA and GS segments will be promoted into the context of the message, as ISA_String and GS_String respectively. The individual segment values are also promoted as ISA01, ISA02, etc. and GS01, GS02, etc.
Since you're using AS2, I think the easiest solution would be to create a send port group, use your existing send port with AS2 in it, and another SendPort with EdiSend using the FILE adapter. Another option would be to add a custom pipeline component in the Encode stage that would archive the results from the EDI Assembler - which would be more efficient but more work as well.
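If you do go the custom-archiving route, the promoted ISA_String/GS_String values plus the raw transaction set text are enough to stitch a readable interchange back together. A hypothetical Python sketch (assuming '*' element separators, '~' segment terminators, control numbers in ISA13/GS06, and a single functional group):

```python
def rebuild_interchange(isa, gs, transaction_sets, terminator="~"):
    """Reassemble a full X12 interchange from the promoted ISA_String /
    GS_String context values plus the raw ST..SE transaction set text.

    Assumes '*' element separators, '~' segment terminators, and one
    functional group; GE/IEA counts and control numbers are derived.
    """
    isa13 = isa.split("*")[13]  # interchange control number
    gs06 = gs.split("*")[6]     # group control number
    segments = [isa, gs] + list(transaction_sets)
    segments.append(f"GE*{len(transaction_sets)}*{gs06}")
    segments.append(f"IEA*1*{isa13}")
    return terminator.join(segments) + terminator
```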
If you simply want to see the interchange message for testing/development purposes, put the send port in the stopped state; the messages to this port will suspend, and you can then view/save them in the admin console.
If you need a solution to "see" the interchange message at an operational level, a second send port is an option.
I have a simple azure biztalk services project.
It has a FTP source that reads a .CSV file and writes to an on-premise sqlserver database table.
I successfully deployed it, and it works quite well for small .CSV files (around 800 rows). But when I have a larger file (around 6,500 rows; actually, this is also a very small file in my opinion), it fails with the following error. Below the error you will see my configuration for the SQL Server adapter service.
<?xml version="1.0" encoding="utf-16"?>
<s:Fault xmlns:s="http://www.w3.org/2003/05/soap-envelope">
<s:Code>
<s:Value>s:Receiver</s:Value>
<s:Subcode>
<s:Value>s:SendError</s:Value>
</s:Subcode>
</s:Code>
<s:Reason>
<s:Text xml:lang="en-US">The operation with action "TableOp/Insert/dbo/tblVMSData"
took longer than the specified timeout "00:01:00".</s:Text>
</s:Reason>
</s:Fault>
My on premise SQL Server adapter service has the following configuration.
<basicHttpRelayBinding>
<binding name="basicHttpRelayBinding1"
closeTimeout="00:20:00"
openTimeout="00:20:00"
receiveTimeout="00:20:00"
sendTimeout="00:20:00"
maxBufferPoolSize="1048576"
maxBufferSize="67108864"
maxReceivedMessageSize="67108864">
<readerQuotas maxDepth="2147483647"
maxStringContentLength="2147483647"
maxArrayLength="2147483647"
maxBytesPerRead="67108864"
maxNameTableCharCount="2147483647" />
<security mode="Transport" />
</binding>
</basicHttpRelayBinding>
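One detail that puzzles me: the fault reports a timeout of 00:01:00, which matches the WCF default sendTimeout of one minute, while the binding above says 00:20:00, so the configured values evidently aren't the ones in effect. A small Python sketch of that comparison (assuming plain hh:mm:ss values; WCF timespans can also carry a day component, which this ignores):

```python
from datetime import timedelta

def parse_wcf_timespan(s: str) -> timedelta:
    # parse a plain "hh:mm:ss" WCF timeout value
    hours, minutes, seconds = (int(part) for part in s.split(":"))
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

fault_timeout = parse_wcf_timespan("00:01:00")  # from the fault message
configured = parse_wcf_timespan("00:20:00")     # from the binding config

# if the configured value were in effect, the fault would not report 1 minute
mismatch = fault_timeout < configured
```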
You might have to make the timeout value on the server side larger too. Have a look in your BizTalk Services project, find your SQL endpoint configuration file (under your itinerary in Solution Explorer), and edit the WCF configuration values there as well.
Does that work?
I finally found the configuration options for the timeout. They are not in any config file; they are not even in the BizTalk Services project.
You have to right-click your SQL target (under LOB Types under BizTalk Adapter Services) and select Properties. In the properties, click on Binding Configuration; this opens the Advanced Adapter Configuration dialog, which has four timeouts: "Open Timeout", "Receive Timeout", "Send Timeout" and "Close Timeout".
What is strange is that these timeouts also appear in the config file that is automatically generated when you drag and drop the SQL target onto your MessageFlowItinerary. But changing them in the config file does not seem to be enough in my case.
Also, in the IIS Management console you can change the client timeout configuration by clicking the Configure option under "Manage WCF and WF Services". If you do not see "Manage WCF and WF Services", you haven't installed the Windows Server AppFabric SDK; download and install it.
Microsoft should really make it easy to find and change the configuration for both server and client from a single, simple page. It is really frustrating to click through different things to find the options; it should simply work out of the box. On the one hand they give flexibility by providing configuration options, and on the other hand they take it away by hiding those options under different rocks.
I have a flex application that communicates via BlazeDS with two webapps running inside a single instance of Tomcat.
The flex client is loaded by the browser from the first webapp and all is well. However on the initial call to the second webapp the client receives the following error:
Detected duplicate HTTP-based FlexSessions, generally due to the remote host disabling session cookies. Session cookies must be enabled to manage the client connection correctly.
Subsequent calls to the same service method succeed.
I've seen a few posts referring to the same error in the context of two Flex apps calling a single webapp from the same browser page, but nothing that seems to help my situation, so I'd be very grateful if anyone could help out.
Cheers, Mark
Three potential solutions for you:
I found once that if I hit a remote object before setting up a messaging channel, the ClientID would get screwed up. Try to establish an initial messaging channel once the application loads, before any remote object calls are made.
Flash Builder's network monitoring tool can cause some problems with BlazeDS. I set up a configuration option on application load that checks whether I'm in the dev environment (it is called just before setting up my channel from #1). If I'm in dev, I assign a UID manually. For some reason this doesn't take well outside the dev environment... it's been a while since I set it all up, so I can't remember the finer points as to why:
if (!(AppSettingsModel.getInstance().dev))
FlexClient.getInstance().id = UIDUtil.createUID();
BlazeDS by default only allows for a single HTTP session to be setup per client/browser. In my streaming channel definitions I added the following to allow for additional sessions per browser:
<channel-definition id="my-secure-amf-stream" class="mx.messaging.channels.SecureStreamingAMFChannel">
<endpoint url="https://{server.name}:{server.port}/FlexClient/messagebroker/securestreamingamf"
class="flex.messaging.endpoints.SecureStreamingAMFEndpoint"/>
<properties>
<add-no-cache-headers>false</add-no-cache-headers>
<idle-timeout-minutes>0</idle-timeout-minutes>
<max-streaming-clients>10</max-streaming-clients>
<server-to-client-heartbeat-millis>5000</server-to-client-heartbeat-millis>
<user-agent-settings>
<user-agent match-on="MSIE" kickstart-bytes="2048" max-streaming-connections-per-session="3" />
<user-agent match-on="Firefox" kickstart-bytes="2048" max-streaming-connections-per-session="3" />
</user-agent-settings>
</properties>
</channel-definition>
Problem: duplicate session errors when the flex.war and LiveCycle .lca files are hosted in separate JVMs on a WebSphere server.
Solution:
Inside the command file for the event, set FlexClientId to null in the execute method before calling the remote service (Java method or LiveCycle process).
I guess this approach can be used in other scenarios as well to prevent duplicate session errors.
EventCommand.as file
—————————–
import mx.messaging.FlexClient;
//other imports as per your code
public function execute(event:CairngormEvent):void
{
var evt:EventName = event as EventName ;
var delegate:Delegate = new DelegateImpl(this as IResponder);
//***set client ID to null
FlexClient.getInstance().id = null;
delegate.functionName(evt.data);
}