I am researching PCI compliance for an app I plan to build. It would essentially be an online marketplace. I will be using MangoPay as the payment processor (who are Level 1 PCI compliant) along with their SDK to send information to them. No card information would be stored within the app.
The app itself would be hosted on AWS (which is itself Level 1 compliant: https://aws.amazon.com/compliance/pci-dss-level-1-faqs/).
Question
What level of PCI certification would I need in order to be compliant?
My department comes from a .NET 4.8 WCF/WPF client/server architecture world. But times change, and there are new requirements for connecting .NET back ends to web applications, mobile apps, and (also) desktop applications. We have experience with transferring messages from the back end to clients, as we implemented our own net.TCP message broker.
And here comes the first main topic of my question:
#1 Is the usage of a Google protobuf/AMQP event broker the right way to connect microservices with all kinds of clients (web, mobile, desktop), as well as with other back-end components?
My current investigation led me to 3 products:
RabbitMQ with its AMQP protocol
NATS/STAN with a Google protobuf message protocol, including event storage in MySQL/Postgres
EventStore with a Google protobuf message protocol, including event storage in its own database system
Which leads me to my second question:
#2 What experience have you had with any of these products? Any pros, cons, or pitfalls?
We don't need clustered systems or highly scalable solutions, as every customer has its own encapsulated system with fewer than 40 clients. But we do want reliable message/event transmission (which should be achievable with any of the three brokers mentioned).
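To make the question more concrete, here is a minimal sketch of the kind of pub/sub wiring I have in mind, using the RabbitMQ .NET client (RabbitMQ.Client, broker on localhost assumed). The exchange and queue names are placeholders, and the byte payload just stands in for a protobuf-encoded event from a generated message class:

    // Minimal pub/sub sketch with the RabbitMQ .NET client (RabbitMQ.Client 6.x assumed,
    // broker on localhost). Exchange and queue names are placeholders, and the byte
    // payload stands in for a protobuf-encoded event from a generated message class.
    using System;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    class EventBrokerSketch
    {
        static void Main()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using var connection = factory.CreateConnection();
            using var channel = connection.CreateModel();

            // Fanout exchange: every bound queue (web, mobile, desktop back end) gets a copy of each event.
            channel.ExchangeDeclare(exchange: "domain-events", type: ExchangeType.Fanout, durable: true);

            // Each client type binds its own durable queue so messages survive a broker restart.
            channel.QueueDeclare(queue: "desktop-client", durable: true, exclusive: false, autoDelete: false);
            channel.QueueBind(queue: "desktop-client", exchange: "domain-events", routingKey: "");

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (_, ea) =>
            {
                // With protobuf, ea.Body would be parsed here using the generated message type.
                Console.WriteLine($"Received {ea.Body.Length} bytes");
            };
            channel.BasicConsume(queue: "desktop-client", autoAck: true, consumer: consumer);

            // Publish an event; the payload is a placeholder for a serialized protobuf message.
            var body = Encoding.UTF8.GetBytes("OrderCreated");
            channel.BasicPublish(exchange: "domain-events", routingKey: "", basicProperties: null, body: body);

            Console.ReadLine();
        }
    }

The NATS/STAN and EventStore variants would obviously look different, but the shape of the question is the same: one durable stream of events that every kind of client can subscribe to.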
BR Christian
I need a biometric (RFID, fingerprint, or face recognition) system that can consume a web API directly, independent of any third party. I have a CAMS biometric unit and a ZKTeco K20 Pro; both can send data to a server, but we must go through their data server and expose our API. I want a biometric system that can send activity data directly to my server.
This is needed to build a system that controls attendance across branches from the corporate office, so I am looking for suggestions for a suitable biometric system.
You can install a biometric device with its default software on your computer. The software stores the attendance data in a local database. You can then develop software that reads the data from that local database and updates your server on a schedule.
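A rough sketch of that approach, assuming the vendor software writes to a local SQL Server database. The table, column names, connection string, and endpoint URL below are all hypothetical, so check the actual schema your device software creates:

    // Hypothetical scheduled forwarder: read attendance rows from the vendor's local
    // database and push them to your own server. Schema, connection string, and URL
    // are placeholders.
    using System;
    using System.Data.SqlClient;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class AttendanceForwarder
    {
        const string LocalDb = "Server=.;Database=AttendanceLocal;Integrated Security=true";
        const string ServerEndpoint = "https://your-server.example.com/api/attendance";

        static readonly HttpClient Http = new HttpClient();

        static async Task Main()
        {
            while (true)
            {
                using (var conn = new SqlConnection(LocalDb))
                {
                    await conn.OpenAsync();
                    // Only pick up rows not yet forwarded (the Forwarded flag is hypothetical).
                    var select = new SqlCommand(
                        "SELECT Id, EmployeeId, PunchTime FROM AttendanceLog WHERE Forwarded = 0", conn);
                    using (var reader = await select.ExecuteReaderAsync())
                    {
                        while (await reader.ReadAsync())
                        {
                            // A proper JSON serializer would be safer; string building keeps the sketch short.
                            var json = $"{{\"employeeId\":\"{reader["EmployeeId"]}\"," +
                                       $"\"punchTime\":\"{(DateTime)reader["PunchTime"]:O}\"}}";
                            var response = await Http.PostAsync(ServerEndpoint,
                                new StringContent(json, Encoding.UTF8, "application/json"));
                            response.EnsureSuccessStatusCode();
                            // A real implementation would mark the row as forwarded here
                            // (on a separate connection or after closing this reader).
                        }
                    }
                }
                await Task.Delay(TimeSpan.FromMinutes(5)); // polling interval
            }
        }
    }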
The online SDK is restricted by the manufacturers because it could lead to security issues: since it is online based, anyone could access anyone else's biometric device if the online SDK were exposed. That is why manufacturers such as CAMS and ZKTeco keep it confidential.
You can make your CAMS devices communicate with your server directly. Below is a snippet taken from their API documentation: http://camsunit.com/application/biometric-web-api.html
Want to receive data without coming through the CAMS data server?
Yes. We provide a Windows lite-version protocol engine which should be installed on your Windows server. Once it is installed, your web API endpoints will start getting triggered whenever attendance is registered at the device.
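If you go that route, the receiving side on your server can be a very small web API. This is just an illustrative sketch (ASP.NET Core minimal API); the route and the payload fields are assumptions on my part, since the actual format the protocol engine posts is defined in the CAMS documentation linked above:

    // Hypothetical receiving endpoint (ASP.NET Core minimal API). The route and the
    // shape of AttendancePunch are placeholders; the real payload format is defined
    // by the vendor's protocol engine.
    using System;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Http;

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    app.MapPost("/api/attendance", (AttendancePunch punch) =>
    {
        // Persist the punch here (database write omitted in this sketch).
        Console.WriteLine($"{punch.EmployeeId} punched at {punch.PunchTime} on device {punch.DeviceId}");
        return Results.Ok();
    });

    app.Run();

    record AttendancePunch(string EmployeeId, DateTime PunchTime, string DeviceId);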
I recently started looking into some API management tools. I can see that these API management tools can do whatever DataPower does, and they are also placed in front of back-end services to protect the back-end servers.
So, what makes DataPower unique? Or is it fair to compare DataPower with API management tools as its competitors? If so, why did IBM itself bring in a tool named IBM API Management?
OK, so the API solution from IBM, now called IBM API Connect (APIc), is more or less just the GUI to manage, configure, and view your APIs and statistics about them.
The actual HTTP requests (or IBM MQ requests) made when using one of your APIs go through the API runtime.
IBM offers two different runtimes today: MicroGateway (formerly StrongLoop) or IBM DataPower. DataPower comes as either a hardware appliance, a virtual appliance, or a Docker container.
If you choose to run APIc on DataPower, you will be able to use all of the other features of DataPower as well (and there are a ton of them!).
MicroGateway is a Node.js runtime, so it obviously requires its own server and cluster.
DataPower has built-in cluster support, and of course a DataPower appliance is built to sit Internet-facing in the DMZ, so all the security is covered!
You will also have a few more functions/features in APIc using DataPower as the runtime.
So, to answer your question: no, it is not fair to compare APIc on DataPower with competitors that are "just" API solutions, as DataPower brings so much more to the deal. DataPower is a full-grown gateway solution for all your integration needs, and it comes with FTP, SFTP, IBM MQ, a Node.js runtime, an HTTP server, SOAP WS-I, AS1-4, EDI (X12 and EDIFACT), etc.
If you want to compare against other API vendors, you should really compare APIc on MicroGateway, in my opinion...
You can test both APIc and DataPower (Docker) for free in "non-production" use:
https://developer.ibm.com/apiconnect/getting-started/
https://hub.docker.com/r/ibmcom/datapower/
I was assigned the re-architecture of a legacy (medical) product that controls several external devices. In the current architecture, we have several such stations in each customer's network, where each station processes its own data, and they all share some of that data via a central server (which talks to the DB and BLOB storage).
I'm planning the new architecture such that it will allow more scenarios, such as monitoring the stations through a web interface, and allowing data processing to be scalable by adding additional servers.
This led me to choose NServiceBus as the messaging and communication infrastructure, and I pretty much have a clear view of the new architecture.
However, another factor was recently added to the equation by my manager. He requires that the machine that communicates with the devices (hardware) will not be under the customer's IT policies. The reason behind this, as I understand it, is that we don't want the customer's IT department to control OS updates, security, permissions, and other settings, because we want full control over that machine in order for it to work properly with our hardware.
My manager thus added a requirement that this machine will be disconnected from the customer's LAN.
If I still want to deploy NServiceBus on that separated machine (because I want to publish/subscribe async messages to and from other machines - some on the customer's LAN and some not), will it require some special deployment? Will it require an NServiceBus gateway?
EDIT: I removed the other (1st) question, as it wasn't relevant to the scope of StackOverflow.
Regarding question 2: yes, it would require the use of a "Gateway"; however, the current NServiceBus Gateway implementation does not support pub/sub, so you would have to look at alternatives.
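For context, the pub/sub part itself is straightforward once a suitable bridge between the networks is in place. A minimal sketch of publishing an event from the station machine (recent NServiceBus versions; the endpoint name, transport, and event type below are placeholders I made up) might look like this:

    // Minimal NServiceBus publish sketch. Endpoint name, transport, and the event type
    // are placeholders; bridging to endpoints outside the isolated network is exactly
    // the part the Gateway limitation above applies to.
    using System;
    using System.Threading.Tasks;
    using NServiceBus;

    public class DeviceMeasurementCompleted : IEvent
    {
        public string StationId { get; set; }
        public DateTime CompletedAt { get; set; }
    }

    public static class StationEndpoint
    {
        public static async Task Main()
        {
            var configuration = new EndpointConfiguration("DeviceStation");
            configuration.UseTransport<LearningTransport>(); // placeholder transport for the sketch
            configuration.SendFailedMessagesTo("error");

            var endpoint = await Endpoint.Start(configuration);

            // Subscribers on other machines receive this event asynchronously.
            await endpoint.Publish(new DeviceMeasurementCompleted
            {
                StationId = "Station-01",
                CompletedAt = DateTime.UtcNow
            });

            await endpoint.Stop();
        }
    }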
We are currently using SQL Server 2008 Express Edition, but would like to upgrade to Standard Edition. Does it mean that we need a license with 20 seats, if we have 20 Active Directory users that are using the DB from a C# application?
If yes, does it make sense to switch from Windows Forms to Web Applications in order to decrease the amount of licenses needed?
Switching to a web app won't change the licensing needs of your application. If you have 20 users connecting to your SQL Server, then you need 20 CALs for Standard Edition: whilst you may have a single "user" connecting to the DB, you're still servicing 20 users. The MS licensing docs cover this in some detail.
The alternative approach is to go with per-processor licensing. You obviously need to do the maths to work out which option is more cost-effective for your user-growth estimates.
Given that you're starting at 20 users, the per-user (CAL) route will probably be the cheapest option.
You have two types of licenses available to you, each with their own set of rules and scenarios where they make sense.
Per-processor license. Here you license each physical processor (or virtual processor, if you are using virtualization, depending on the SQL Server edition).
Server/CAL license. Here you buy a license for each server running SQL Server plus Client Access Licenses (CALs) for each user or device. Note that a CAL allows that user or device to connect to any number of SQL Servers, so you don't need to buy additional CALs if you add more servers. Also, any type of software or hardware that reduces the number of devices or users directly accessing SQL Server (for example, a web application that funnels users through connection pooling so fewer connections hit the database directly) does NOT reduce the number of CALs you need. You still need one for each user using the web application.
The following Microsoft link provides price points for SQL Server 2008 and also includes a SQL Server 2008 R2 Quick Reference, which contains all the information you might need. Based on that link, we can see:
Per Processor would cost you $7,171.00
Server/CAL would end up being $4,178.00, based on the calculations below:
Server: $898.00
CALs: $164.00 × 20 = $3,280.00
Total: $898.00 + $3,280.00 = $4,178.00
Of course this is an estimate that doesn't include tax, discounts, or software assurance.
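For what it's worth, here is a quick back-of-the-envelope check of where the two models cross over, using only the list prices quoted above:

    // Break-even sketch between Server/CAL and per-processor licensing, using the
    // list prices quoted above (tax, discounts, and Software Assurance ignored).
    using System;

    class LicenseBreakEven
    {
        static void Main()
        {
            const decimal serverLicense = 898.00m;
            const decimal calPrice = 164.00m;
            const decimal perProcessor = 7171.00m;

            for (int users = 10; users <= 50; users += 10)
            {
                decimal serverCal = serverLicense + calPrice * users;
                Console.WriteLine($"{users} users: Server/CAL = {serverCal:C}, per processor = {perProcessor:C}");
            }

            // Server/CAL stops being cheaper once 898 + 164 * users exceeds 7,171,
            // i.e. at (7171 - 898) / 164 = 38.25, so from about 39 users upwards.
            decimal breakEven = (perProcessor - serverLicense) / calPrice;
            Console.WriteLine($"Break-even at about {Math.Ceiling(breakEven)} users");
        }
    }

So at 20 users the CAL route is comfortably cheaper, which matches the figures above.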
If you want more information, I would recommend asking on Server Fault.