Multiple Bluetooth Low Energy services on one device using Qt?

I want to expose multiple BLE services from one device using Qt (on Linux), but I don't know how to do it, or whether it's even possible.
In my specific case I want my device to be both a heart rate service (HRS) and a cycling power service (CPS).
My testing code is very similar to the heart rate server example from Qt's documentation, http://doc.qt.io/qt-5/qtbluetooth-heartrate-server-main-cpp.html, and I've tried the following two approaches:
Using two QLowEnergyControllers from one application
Using one QLowEnergyController, but adding two different services using bleController->addService().
However, it doesn't seem like either approach works properly, or perhaps the apps I use for testing just don't handle it correctly. I currently test with my iPhone 6s and TrainerRoad, and if I expose just one of the services it works well.
Can this be done, and if so, what is the correct way?

I retried this a while back and was able to provide multiple services from one device. The correct way is option 2: use a single QLowEnergyController and add multiple services to it.
Unfortunately I didn't keep the code from my first attempt, so I can't say exactly what went wrong back then. But it works "as expected"; nothing special is needed.
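For reference, here is a minimal sketch of that approach, modelled on the heart rate server example linked above and extended with a second (cycling power) service. The device name and placeholder characteristic values are made up for illustration, and a real CPS implementation would also need the mandatory feature and sensor-location characteristics, which are omitted here; assumes Qt 5.7+ with peripheral-role support on BlueZ.

```cpp
#include <QCoreApplication>
#include <QLowEnergyAdvertisingData>
#include <QLowEnergyAdvertisingParameters>
#include <QLowEnergyCharacteristicData>
#include <QLowEnergyController>
#include <QLowEnergyDescriptorData>
#include <QLowEnergyService>
#include <QLowEnergyServiceData>
#include <QScopedPointer>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Build a primary service with one notifiable characteristic plus the
    // client characteristic configuration descriptor, as in the Qt example.
    auto makeService = [](QBluetoothUuid::ServiceClassUuid serviceUuid,
                          QBluetoothUuid::CharacteristicType charUuid,
                          int valueSize) -> QLowEnergyServiceData {
        QLowEnergyCharacteristicData charData;
        charData.setUuid(charUuid);
        charData.setValue(QByteArray(valueSize, 0));        // placeholder value
        charData.setProperties(QLowEnergyCharacteristic::Notify);
        charData.addDescriptor(QLowEnergyDescriptorData(
            QBluetoothUuid::ClientCharacteristicConfiguration, QByteArray(2, 0)));

        QLowEnergyServiceData serviceData;
        serviceData.setType(QLowEnergyServiceData::ServiceTypePrimary);
        serviceData.setUuid(serviceUuid);
        serviceData.addCharacteristic(charData);
        return serviceData;
    };

    const QLowEnergyServiceData hrsData =
        makeService(QBluetoothUuid::HeartRate, QBluetoothUuid::HeartRateMeasurement, 2);
    const QLowEnergyServiceData cpsData =
        makeService(QBluetoothUuid::CyclingPower, QBluetoothUuid::CyclingPowerMeasurement, 4);

    // One peripheral controller, two services.
    QScopedPointer<QLowEnergyController> controller(QLowEnergyController::createPeripheral());
    QScopedPointer<QLowEnergyService> hrs(controller->addService(hrsData));
    QScopedPointer<QLowEnergyService> cps(controller->addService(cpsData));

    // Advertise both service UUIDs so that scanning apps see HRS and CPS.
    QLowEnergyAdvertisingData advertisingData;
    advertisingData.setDiscoverability(QLowEnergyAdvertisingData::DiscoverabilityGeneral);
    advertisingData.setLocalName("HRS-CPS");
    advertisingData.setServices(QList<QBluetoothUuid>()
                                << QBluetoothUuid::HeartRate
                                << QBluetoothUuid::CyclingPower);

    controller->startAdvertising(QLowEnergyAdvertisingParameters(),
                                 advertisingData, advertisingData);

    // Pushing notification values and re-advertising after a disconnect are
    // omitted here; see the heart rate server example for that part.
    return app.exec();
}
```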

Related

How to communicate between two applications in Android?

I'm developing a keyboard, so I'm implementing an InputMethodService. I have a requirement to add other features to this keyboard application, but to separate them into another application so that the keyboard remains a standalone keyboard implementation.
So I need to create a keyboard application and another application with all the other features (other features include, but are not limited to: a News Activity, a Messenger, a Lock Screen implementation, and some Widgets).
Those two applications will need to communicate with each other. From my research I found that there are several mechanisms I could use:
A bound service
URI implementation
BroadcastReceivers
My question is: what would be the best implementation for my needs? My needs are to pass data from one application to the other, as well as to start activities and other components of one app from the other.
After doing some research on this topic, I found that there are several ways to do this:
Using bound services, with either a Messenger object to pass messages between the local process and the remote bound service, or AIDL to define an interface that the remote bound service exposes to the local process so that the two can communicate.
The second option is the good old-fashioned BroadcastReceiver. That way it is possible to fire an Intent from the local process to the remote process and receive some information there.
The choice between the two comes down to how strong you want the connection between the two processes to be and how often they need to communicate. If they only need to perform an operation once in a while, a BroadcastReceiver is a perfectly good solution. But if you need a more persistent connection, a bound service is the way to go.

How to build a predictive dialer?

I need to build a reliable predictive dialer based on Asterisk. Currently the system we use includes Wombat and Asterisk, and we do not find this solution usable, as Wombat provides a poor API and cannot be used without regular manual intervention.
The system we want:
Can be used solely via API or direct database queries (adding lists to campaigns, updating lists, starting campaigns, stopping campaigns etc.) so that it can be completely integrated into an existing product
Is free, or has an annual fee that is independent of usage
Is considered stable
Should be able to handle tens of thousands of calls per day, if it matters
Use vicidial.org, or hire a freelancer to build a new core with the API you need.
You can also check out OSDial for this; it is also developed using Asterisk.
We have been working with a preview of the next version of Wombat through the Early Access program. It has a complete JSON API for configuration and reporting, and you can deploy it "headless" in order to scale up to thousands of parallel lines. If you ask Loway, they can likely get you access to the Early Access program.
BTW, Vicidial is great for agent-based outbound, but it imposes quite a large penalty on the number of agents per server; you cannot reasonably use it for telecasting at the scale we are looking at, as it would require too many servers. Wombat is leaner and can drive over one thousand channels per server. YMMV.
This question would be better placed on a "hire-a-freelancer" site like oDesk ... if you need custom programming done, those are the sorts of places to go to find manpower.
Your specifications are well within what is possible with Asterisk. I'd strongly recommend looking at ViciDial and OSDial as others have suggested; out of the box, they are pretty good.
The hard part of any auto-dialer is not the dialer, oddly enough. It's the prediction algorithms, the answering-machine detection algorithms, and the agent UI. Those are what make or break an auto-dialer application for a company.

Using SignalR as service layer for WebRTC

This is a follow-up to another question I asked, but with more precise information.
I have two fundamentally identical web pages that demo WebRTC, one using XSockets as the backend signaling layer, and one using SignalR as the backend signaling layer.
The two backends are fundamentally identical, differing only at the points where they (obviously) have different ways of sending data down to the client. Similarly, the TypeScript/JavaScript WebRTC code on the two clients is completely identical, as I've abstracted out the signaling layer.
The problem is that the XSockets site works consistently, while the SignalR site fails (mostly consistently, though not completely). Usually it fails while calling peerConnection.setLocalDescription(), but it can also fail silently; or it can (sometimes) even work.
You can see the two different pages in operation here:
XSockets site: http://xsockets.demo.alanta.com/
SignalR site: http://signalr.demo.alanta.com/
The source code for both is at https://bitbucket.org/smithkl42/xsockets.webrtc, with the XSockets version on the xsockets branch, and the SignalR version on the signalr branch.
So my question is: does anybody know of any reason why using one signal layer instead of another would make any difference to WebRTC? For instance, does one or the other send back Unicode strings instead of ANSI? Or have I misdiagnosed the problem, and the real difference is elsewhere?
Figured it out. Turns out that SignalR 1.0 RC1 has a bug in it that changes any "+" in a string into a space. So lines in the SDP that looked like this:
a=ice-pwd:qZFVvgfnSso1b8UV1SUDd2+z
Were getting changed into this:
a=ice-pwd:qZFVvgfnSso1b8UV1SUDd2 z
But because not every SDP had a "+" on a critical line, it would sometimes work. That explains everything.
The bug has been reported to the good folks working on SignalR (see https://github.com/SignalR/SignalR/issues/1194), and in the meantime, a simple encodeURIComponent() and decodeURIComponent() around the strings in question fixed it.

Controlling Spotify through Processing/Arduino

I am making a tangible controller for Spotify (like the one from Jordi Parra, http://vimeo.com/21387481#at=0) using an Arduino microcontroller.
I have a Processing sketch running which does all the calculations with the data from the Arduino. I want this Processing sketch to be able to control different options in Spotify like: Next, Previous, Play/Pause, Volume Up/Down, Shuffle.
Right now I use an extra Arduino Leonardo which simulates key presses while AutoHotKey listens to those and sends them to Spotify. It does not work very well and I only have limited options.
I would love to get rid of that extra Arduino while getting more control.
I am working on Windows, so AppleScript won't work (for me).
Is there a way to control the Spotify app from Processing? Or is it possible to use the Spotify library to create a new Spotify app in Processing?
Many thanks in advance!
Paul
Disclaimer: I work at Spotify
Right now there is no cross-platform way to control the Spotify application. On Linux, Spotify will respond to D-Bus commands, which means that a bit of hacking could send play/pause/next/previous. I have heard that it is also possible to control Spotify on Mac OS X via AppleScript, but I'm not 100% certain about this. A quick Google search for "control spotify mac os x applescript" produced some interesting results, though I'm not sure how current or relevant any of them are. As for Windows, I'm not sure if/how one would control the application at all.
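To illustrate the D-Bus route on Linux: as far as I know, recent Linux builds of the client expose the standard MPRIS2 media-player interface on the session bus, so a small Qt program (built with QT += dbus) can drive it. This is only a sketch under that assumption, not an officially documented Spotify API.

```cpp
#include <QCoreApplication>
#include <QDBusInterface>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Assumption: the Linux Spotify client registers the MPRIS2 name
    // "org.mpris.MediaPlayer2.spotify" on the session bus.
    QDBusInterface player(QStringLiteral("org.mpris.MediaPlayer2.spotify"),
                          QStringLiteral("/org/mpris/MediaPlayer2"),
                          QStringLiteral("org.mpris.MediaPlayer2.Player"));

    if (!player.isValid())
        return 1;   // Spotify not running, or no MPRIS support

    // Standard MPRIS2 player methods include PlayPause, Next, Previous and Stop.
    player.call(QStringLiteral("PlayPause"));
    return 0;
}
```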
Otherwise, your best bet would be libspotify, for which you would need to write a Processing library to communicate with it. Based on a bit of quick research, it seems that Processing libraries are written in Java, which means you'd either need to use a wrapper such as jlibspotify or hand-roll your own JNI wrapper for libspotify.
I'm not sure how current jlibspotify is, given that they are wrapping a rather old version of the library. If you do any libspotify hacking it is better done in C/C++ with a minimal JNI wrapper, but all of this may be way more work than you are intending for this project.
Why not utilize Spotify's keyboard integration?
The Arduino Leonardo supports USB HID mode.
So, send the keyboard keys for Next, Previous, Play/Pause, Volume Up/Down, Shuffle.
Almost everything has a single global key binding; I believe only shuffle does not. You could create a global hotkey in your OS and bind it to the app's shuffle control key.
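Here is a rough sketch of that idea for the Leonardo, assuming the third-party HID-Project library (the stock Keyboard library cannot send consumer/media-key usages); the pin numbers and button wiring are hypothetical.

```cpp
// Requires the (third-party) HID-Project library by NicoHood.
#include "HID-Project.h"

const int playPausePin = 2;   // hypothetical wiring: button from pin to GND
const int nextPin = 3;

void setup() {
  pinMode(playPausePin, INPUT_PULLUP);
  pinMode(nextPin, INPUT_PULLUP);
  Consumer.begin();           // enumerate as a USB consumer-control device
}

void loop() {
  if (digitalRead(playPausePin) == LOW) {
    Consumer.write(MEDIA_PLAY_PAUSE);   // OS-level media key; Spotify obeys it globally
    delay(250);                         // crude debounce
  }
  if (digitalRead(nextPin) == LOW) {
    Consumer.write(MEDIA_NEXT);
    delay(250);
  }
  // The library also defines constants for previous track and volume up/down.
}
```

Since these arrive as ordinary OS media keys, the AutoHotKey relay should no longer be needed on the PC side.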
If you are looking for status feedback on the state of each button, this of course won't help you.
Good luck.

Approach for disconnected application development

Our company has people at every catastrophic event here in the U.S. and parts of Canada. For example, they had a significant presence immediately after Katrina.
We are constructing an application to improve their work in the field. It may be either ASP.NET or WPF, and the disconnected requirement makes us believe it will be a WPF application. Our people need to be able to create their jobs, enter all of the insurance and measurement data, and save it as if it were going into the database, whether or not the internet is available.
The issue we are trying to get our heads around is that at catastrophic events our people need to be able to use the new application even when the internet is not available. (They were offline for three days during Katrina.)
Has anyone else had to address requirements like this, and do you have suggestions on how you approached functioning on small-footprint devices while saving data as if still connected to the backend services and database? We also have to incorporate security, and do it well enough that the entered data loads into the connected database without issues.
Our long-term goal is to also provide this application for Android and iPad tablets as well as laptops. Our initial interest in ASP.NET was that it gave us an immediate application for the tablet environment. In their old application, they run a local server, make remote connections from the tablets, and run the application through Terminal Server. Not pretty.
I feel this is a serious question that is not subjective so hopefully this won't get deleted.
Our current architecture on the server side is Entity Framework with a repository pattern, WCF services to satisfy CRUD requests returning composite data transfer objects, and a proxy for use by the clients.
I'm interested in hearing other developers' input on this design puzzle.
Additional Information Added to the Discussion
Lots of good information provided! I'll have to look at Microsoft Sync for sure. For the disconnected database I would place only list tables (enumerations) in the initial database. Jobs, and if needed an item we call dry books, will be added for each client we are helping (though I hope the internet returns by the time we are cleaning and drying out the homes). These are the tables that would then populate back to the host once we have a stable link. In the case of Katrina we also lost internet connectivity in our offices, which meant the office provided no communication relief for days either.
Last night I realized that our client proxy is the key to everything working! The client remains unaware of whether it is online or offline and leaves the synchronization process to that library. We are working out today how much data we are talking about. I also want to make it clear that ASP.NET was a nice-to-have, but a thick client (actually WPF with XAML) may end up being our end state.
Now, for multiple updates. The disconnected work will be done at individual homes by a single franchise. In fact, our home office dispatches specific franchises to specific events, so we have a reduced likelihood (if any) of multiple people updating the same record. The reason is that they are creating records for each job (a person's home/office/business) and only that one franchise will deal with it. Of course this also means that if they are disconnected for days, the device that creates the job (the record of who, where, condition, insurance company, etc.) is also the only device that knows of the job. But that can be lived with. In fact, we may be able to have a facility to sync the franchise's devices on a hub.
I'm looking forward to hearing additional stories of how you've implemented your disconnected environment.
Thanks!!!
Looking at new technology from Microsoft
I was directed to look at a video from TechEd 2012 and thought I might have an answer. The talk was on using ASP.NET and MVC 4 along with two libraries for disconnected behavior. At first I thought it would be great, but as it continued it worried me quite a bit.
First, the use of a JavaScript backend to support disconnected I/O does not inspire confidence. As a compiler guy (and one who has written two interpreted languages) I really do not like having a critical business model rely on interpreted JavaScript. And script at that! It may just be me, but it makes me shudder.
Then they show their "great" (???) programming model, with your ViewModel existing as plain JavaScript. I do not care for an application (ASP.NET and JavaScript) that can be, and may as well be (for lack of IntelliSense), written in Notepad.
No offense meant to any ASP lovers, but a well-written C# program that has been syntax- and type-checked gives me stronger confidence in the software than something written on the hope and prayer that a class namespace has been typed correctly, with no means of cross-checking. I've spent too many hours debugging, hunting for a bug that turned out to be a transposed "ie" in the name of a huge namespace. I ran my thoughts past the other senior developers in my group and we are all in consensus on this technology.
But we continue to look. (I feel this is becoming more of a diary than a question) :)
Looks like a perfect example for Microsoft Sync Framework
http://msdn.microsoft.com/en-us/sync/bb736753.aspx
A comprehensive synchronization platform that enables collaboration and offline access for applications, services, and devices with support for any data type, any data store, any transfer protocol, and any network topology.
I often find that building a lightweight framework to fit my specific needs is more beneficial to me than using an existing one. However, always look at what's available and weigh the pros and cons before making that decision.
I haven't used the Microsoft Sync Framework, but it sounds like a good one to research first. If you have SQL Server Standard (or some edition other than Express), then replication might also be an option.
If you want to develop your own homegrown solution, then be sure to put lastupdated and dateadded fields on any tables that need to stay in sync. It doesn't 'sound' like your scenario will be burdened by concurrency issues (i.e. if person A and B both modify a field at the same time, who wins?). If that's the case then developing your own lightweight solution will be pretty straightforward.
As Jeremy pointed out, you will need a way to get the changes. In addition to using a web service, you can also use WCF, which is similar to a web service in some ways. But my personal bias would be towards just accessing a SQL Server database remotely over the internet. The downside of that solution is the added security concerns, while the upside is decreased development overhead (i.e., faster/easier development now and less maintenance over time). The direct SQL solution also assumes that this is an internal application: that you're in charge of all development and not working with third parties who need access to your data and wouldn't be allowed to access it this way.
Not really a full answer but too much for a comment.
I have two apps: one that syncs one way and one that syncs two ways.
I do a one-way sync to the client for disconnected operation: full SQL Server at the server and Compact Edition at the client. A timestamp column is perfect for finding any rows that need to be synced. I also don't copy the whole database, as some of the largest tables are nonessential. The common usage is that the user marks the specific records they want to sync.
If Sync does what you need, great; +1 for Jakub. In my case I don't have the option to sync the whole MSSQL database, for reasons of both size and security.
I have another, smaller application that syncs two ways, but in this case it has regions and updates happen only within a region. So a region only syncs its own data, and in disconnected mode users can only add new records; updates to existing records must be performed in connected mode. That was manageable. In that case it was MSSQL for the master and XML for the client.
No news to you, but the hard part of a raw sync is that two parties may have added or revised the same record.
