Does Spring MVC/Rest have its own default connection pooling? - spring-mvc

I have developed a Spring REST application. I have not added code for any specific connection pooling. I was considering HikariCP, BoneCP, or c3p0.
Someone told me that Spring MVC/REST has its own internal connection pooling, so even if I don't implement any of my own, there would not be any issues.
I tried searching online but still have some doubts. Please clarify.

No, Spring itself has no default connection pooling. It relies on the web server's (e.g. Tomcat's) request-handling threads and makes heavy use of ThreadLocal, but for database access you can use any pool you like. Spring Boot does auto-configure a connection pool when one is on the classpath (which implementation it picks depends on the Boot version and the dependencies present), and adding another pool's dependency or configuration overrides that default. The relevant settings are listed below, followed by a short sketch.
Tomcat settings in Spring Boot:
server.tomcat.accept-count= # Maximum queue length for incoming connection requests when all possible request processing threads are in use.
server.tomcat.accesslog.buffered=true # Buffer output such that it is only flushed periodically.
server.tomcat.accesslog.directory=logs # Directory in which log files are created. Can be relative to the tomcat base dir or absolute.
server.tomcat.accesslog.enabled=false # Enable access log.
server.tomcat.accesslog.file-date-format=.yyyy-MM-dd # Date format to place in log file name.
server.tomcat.accesslog.pattern=common # Format pattern for access logs.
server.tomcat.accesslog.prefix=access_log # Log file name prefix.
server.tomcat.accesslog.rename-on-rotate=false # Defer inclusion of the date stamp in the file name until rotate time.
server.tomcat.accesslog.request-attributes-enabled=false # Set request attributes for IP address, Hostname, protocol and port used for the request.
server.tomcat.accesslog.rotate=true # Enable access log rotation.
server.tomcat.accesslog.suffix=.log # Log file name suffix.
server.tomcat.additional-tld-skip-patterns= # Comma-separated list of additional patterns that match jars to ignore for TLD scanning.
server.tomcat.background-processor-delay=30 # Delay in seconds between the invocation of backgroundProcess methods.
server.tomcat.basedir= # Tomcat base directory. If not specified a temporary directory will be used.
server.tomcat.internal-proxies=10\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}|\\
192\\.168\\.\\d{1,3}\\.\\d{1,3}|\\
169\\.254\\.\\d{1,3}\\.\\d{1,3}|\\
127\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}|\\
172\\.1[6-9]{1}\\.\\d{1,3}\\.\\d{1,3}|\\
172\\.2[0-9]{1}\\.\\d{1,3}\\.\\d{1,3}|\\
172\\.3[0-1]{1}\\.\\d{1,3}\\.\\d{1,3} # regular expression matching trusted IP addresses.
server.tomcat.max-connections= # Maximum number of connections that the server will accept and process at any given time.
server.tomcat.max-http-post-size=0 # Maximum size in bytes of the HTTP post content.
server.tomcat.max-threads=0 # Maximum amount of worker threads.
server.tomcat.min-spare-threads=0 # Minimum amount of worker threads.
server.tomcat.port-header=X-Forwarded-Port # Name of the HTTP header used to override the original port value.
server.tomcat.protocol-header= # Header that holds the incoming protocol, usually named "X-Forwarded-Proto".
server.tomcat.protocol-header-https-value=https # Value of the protocol header that indicates that the incoming request uses SSL.
server.tomcat.redirect-context-root= # Whether requests to the context root should be redirected by appending a / to the path.
For the data source:
# DATASOURCE (DataSourceAutoConfiguration & DataSourceProperties)
spring.datasource.continue-on-error=false # Do not stop if an error occurs while initializing the database.
spring.datasource.data= # Data (DML) script resource references.
spring.datasource.data-username= # User of the database to execute DML scripts (if different).
spring.datasource.data-password= # Password of the database to execute DML scripts (if different).
spring.datasource.dbcp2.*= # Commons DBCP2 specific settings
spring.datasource.driver-class-name= # Fully qualified name of the JDBC driver. Auto-detected based on the URL by default.
spring.datasource.generate-unique-name=false # Generate a random datasource name.
spring.datasource.hikari.*= # Hikari specific settings
spring.datasource.initialize=true # Populate the database using 'data.sql'.
spring.datasource.jmx-enabled=false # Enable JMX support (if provided by the underlying pool).
spring.datasource.jndi-name= # JNDI location of the datasource. Class, url, username & password are ignored when set.
spring.datasource.name=testdb # Name of the datasource.
spring.datasource.password= # Login password of the database.
spring.datasource.platform=all # Platform to use in the schema resource (schema-${platform}.sql).
spring.datasource.schema= # Schema (DDL) script resource references.
spring.datasource.schema-username= # User of the database to execute DDL scripts (if different).
spring.datasource.schema-password= # Password of the database to execute DDL scripts (if different).
spring.datasource.separator=; # Statement separator in SQL initialization scripts.
spring.datasource.sql-script-encoding= # SQL scripts encoding.
spring.datasource.tomcat.*= # Tomcat datasource specific settings
spring.datasource.type= # Fully qualified name of the connection pool implementation to use. By default, it is auto-detected from the classpath.
spring.datasource.url= # JDBC url of the database.
spring.datasource.username= # Login user of the database.
spring.datasource.xa.data-source-class-name= # XA datasource fully qualified name.
spring.datasource.xa.properties= # Properties to pass to the XA data source.
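As an illustration, a minimal application.properties that explicitly selects HikariCP and sizes the pool might look like the sketch below (the JDBC URL and credentials are placeholders, not values from the question):
spring.datasource.url=jdbc:mysql://localhost:3306/mydb
spring.datasource.username=dbuser
spring.datasource.password=secret
# Force the pool implementation instead of relying on classpath auto-detection.
spring.datasource.type=com.zaxxer.hikari.HikariDataSource
# Pool-specific settings are passed through spring.datasource.hikari.*
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.connection-timeout=30000
Switching to another pool (e.g. Commons DBCP2 or the Tomcat JDBC pool) is then a matter of changing spring.datasource.type and the corresponding dependency, plus the matching spring.datasource.dbcp2.* or spring.datasource.tomcat.* settings.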

Related

R mongolite: correct format for connecting with a MongoDB on a remote server?

I'm writing some R code that queries a MongoDB database, imports records matching the query criteria into R, performs record linkage with another data source, and then pushes the updated records back into MongoDB.
The code needs to work with any instance of the MongoDB database. Some people have it installed as standalone on their own computers, while others have it installed on their organisational servers. Note that these are servers specific to individual organisations and not the public mongo server.
To test my code, I have access to both scenarios - one instance is set up on my own computer, and I have several remote server instances as well.
The MongoDB database has some APIs, but I was struggling to adapt the APIs to include the correct syntax to form my query, so I thought I would try the mongolite package instead.
I was able to create a successful connection string for the MongoDB instance on my local computer, using my user ID (which I retrieve first with an API and save as the R object myids), password, the localhost and port number as below:
# Load library:
library(mongolite)

# Create connection:
con <- mongolite::mongo(collection = "person",
                        db = "go-data",
                        url = paste0("mongodb://localhost:3000",
                                     myids$userid,
                                     ":",
                                     rawToChar(password)))
I understood from reading the mongolite user manual that to create the connection string / URI, you skip the http or https part of the address and preface it with either mongodb:// when the Mongodb database is on a local computer, or mongodb+srv:// when the Mongodb database is on a remote server.
However, when I try just changing the prefix and login details for the remote server version, the connection fails. Say the URL for my remote server is https://mydb-r21.orgname.org/ which opens a web page where you can log in to the Mongodb database and interact with it via a graphical user interface. Just swapping localhost:3000 for the web address mydb-r21.orgname.org/ and supplying the relevant login credentials for that server doesn't work:
# Load library:
library(mongolite)

# Create connection:
con <- mongolite::mongo(collection = "person",
                        db = "go-data",
                        url = paste0("mongodb+srv://mydb-r21.orgname.org/",
                                     myids$userid,
                                     ":",
                                     rawToChar(password)))
When I try, this is the error I get:
Warning: [ERROR] Failed to look up SRV record "_mongodb._tcp.mydb-r21.orgname.org": DNS name does not exist.
Error: Invalid uri_string. Try mongodb://localhost
If I try changing to mongodb:// (not localhost, because it isn't hosted locally) I get this:
Error: No suitable servers found (`serverSelectionTryOnce` set): [connection timeout calling hello on 'mydb-r21.orgname.org:27017']
Interestingly, the port that is suffixed in the error message is the correct one that I was expecting, but that still doesn't help me.
The documentation in the mongolite user manual and other places I've found online seems to add some read/write specifications to the connection string, but as I'm not very familiar with how connection strings are constructed, I don't know if these are specific to the databases used in their examples. I can't find any clear explanation of what the extra bits that are not part of the URL mean, e.g. as shown in this blog. All the prefixes seem to be a bit different too, so I am not even sure what would be appropriate to try in my case.
Can anyone explain why the connection string works fine with localhost:port number for the local instance, but doesn't work with the URL for the remote server / online instance?
Also what do I need to do to make the URI for the remote server valid?
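For reference, the standard MongoDB connection string puts the credentials before the host, separated from it by "@", with an optional port and database name after it. Below is a minimal sketch of that layout in R, assuming the remote server accepts plain mongodb:// connections on the default port 27017 (the host and database names are simply the ones from the question, and mongodb+srv:// only works when the server publishes SRV DNS records):
# Standard URI layout: mongodb://user:password@host:port/database
library(mongolite)
uri <- paste0("mongodb://",
              myids$userid, ":", rawToChar(password),
              "@mydb-r21.orgname.org:27017/go-data")
con <- mongolite::mongo(collection = "person", db = "go-data", url = uri)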

ODBC Data source name not found, and no default driver specified

(Environment is Linux CentOs7)
I'm fiddling with the odbc.ini file and testing the connection each time I make a change:
isql -v gdHive
[IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified
[ISQL]ERROR: Could not SQLConnect
Within the odbc.ini file I've been trying different variations of the Description name-value pair. This originally was set to:
# Description: DSN Description.
Description=Hortonworks Hive ODBC Driver (64-bit) DSN
I tried changing this to
Description=gdHive
This did not change anything with respect to the error I pasted above.
Here's the whole config; it's my first time setting this up, so I'm not sure what I've missed:
# HS2 service discovery with ZooKeeper (ServiceDiscoveryMode=1).
ZKNamespace=/hive/hiveserver2
# Set to 1 if you are connecting to Hive Server 1. Set to 2 if you are connecting to Hive Server 2.
HiveServerType=2
# The authentication mechanism to use for the connection.
# Set to 0 for No Authentication
# Set to 1 for Kerberos
# Set to 2 for User Name
# Set to 3 for User Name and Password
# Note only No Authentication is supported when connecting to Hive Server 1.
AuthMech=1
# The Thrift transport to use for the connection.
# Set to 0 for Binary
# Set to 1 for SASL
# Set to 2 for HTTP
# Note for Hive Server 1 only Binary can be used.
ThriftTransport=1
# When this option is enabled (1), the driver does not transform the queries emitted by an
# application, so the native query is used.
# When this option is disabled (0), the driver transforms the queries emitted by an application and
# converts them into an equivalent form in HiveQL.
UseNativeQuery=0
# Set the UID with the user name to use to access Hive when using AuthMech 2 to 8.
UID=
# The following are settings used when using Kerberos authentication (AuthMech 1 and 10).
# The fully qualified host name part of the Hive Server 2 Kerberos service principal.
# For example, if the service principal name of your Hive Server 2 is:
# hive/myhs2.mydomain.com@EXAMPLE.COM
# Then set KrbHostFQDN to myhs2.mydomain.com
KrbHostFQDN=hive.hadoop.p3.int.example.com
# The service name part of the Hive Server 2 Kerberos service principal.
# For example, if the service principal name of your Hive Server 2 is:
# hive/myhs2.mydomain.com@EXAMPLE.COM
# Then set KrbServiceName to hive
KrbServiceName=hive
# The realm part of the Hive Server 2 Kerberos service principal.
# For example, if the service principal name of your Hive Server 2 is:
# hive/myhs2.mydomain.com@EXAMPLE.COM
# Then set KrbRealm to EXAMPLE.COM
KrbRealm=HADOOP.PROD.INT.EXAMPLE.COM
# Set to 1 to enable SSL. Set to 0 to disable.
SSL=0
# Set to 1 to enable two-way SSL. Set to 0 to disable. You must enable SSL in order to
# use two-way SSL.
TwoWaySSL=0
# The file containing the client certificate in PEM format. This is required when using two-way SSL.
ClientCert=
# The client private key. This is used for two-way SSL authentication.
ClientPrivateKey=
# The password for the client private key. Password is only required for password protected
# client private key.
ClientPrivateKeyPassword=
The error message only says "Data source name not found, and no default driver specified". I'm not sure how to set this data source name or choose a default driver.
The "Description" keyword's value is entirely cosmetic in an ODBC DSN. It has no functional use. You can set it to any arbitrary string.
ODBC will be very hard to learn by trial-and-error shots in the dark. I suggest you take a look at the official ODBC specification. You might also look at the documentation for the Driver Manager in your environment, such as the iODBC or unixODBC docs.
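In unixODBC terms, that error usually means isql cannot find a section in odbc.ini whose name matches the DSN (gdHive), or the section has no Driver entry. A minimal sketch of the structure unixODBC expects is below; the section header must match the DSN name, and the driver path is an assumption, not a value from the question:
[gdHive]
# The section name above is the DSN that isql -v gdHive looks up.
Description=Hortonworks Hive ODBC Driver (64-bit) DSN
# Either a full path to the driver library, or the name of a driver entry in odbcinst.ini.
Driver=/usr/lib/hive/lib/native/Linux-amd64-64/libhortonworkshiveodbc64.so
The Hive-specific keys from the question (HiveServerType, AuthMech, KrbHostFQDN, and so on) then go inside that same [gdHive] section.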

Can't Connect with Alfresco repo via aos

You are my last hope... I have been looking for answers for a long time.
I am trying to connect to the Alfresco repo via AOS. I click "Map network drive...", fill in the text fields, and the strange part is that I can connect locally but not remotely. I have turned off my VM server's firewall, just in case, but nothing changed.
I have edited alfresco-global.properties and appended the following:
###############################
## Common Alfresco Properties #
###############################
dir.root=C:/ALFRES~1/alf_data
alfresco.context=alfresco
alfresco.host=127.0.0.1
alfresco.port=8080
alfresco.protocol=https
share.context=share
share.host=127.0.0.1
share.port=8080
share.protocol=https
### database connection properties ###
db.driver=org.postgresql.Driver
db.username=alfresco
db.password=root
db.name=alfresco
db.url=jdbc:postgresql://localhost:5432/${db.name}
# Note: your database must also be able to accept at least this many connections. Please see your database documentation for instructions on how to configure this.
db.pool.max=275
db.pool.validate.query=SELECT 1
# The server mode. Set value here
# UNKNOWN | TEST | BACKUP | PRODUCTION
system.serverMode=UNKNOWN
### FTP Server Configuration ###
ftp.port=21
ftp.enabled=true
### RMI registry port for JMX ###
alfresco.rmi.services.port=50500
avm.rmi.service.port=0
avmsync.rmi.service.port=0
attribute.rmi.service.port=0
authentication.rmi.service.port=0
repo.rmi.service.port=0
action.rmi.service.port=0
deployment.rmi.service.port=0
### External executable locations ###
ooo.exe=C:/ALFRES~1/LIBREO~1/App/libreoffice/program/soffice.exe
ooo.enabled=true
ooo.port=8100
img.root=C:\\alfresco-community\\imagemagick
img.coders=${img.root}\\modules\\coders
img.config=${img.root}
img.gslib=${img.root}\\lib
img.exe=${img.root}\\convert.exe
jodconverter.enabled=false
jodconverter.officeHome=C:/ALFRES~1/LIBREO~1/App/libreoffice
jodconverter.portNumbers=8100
### Initial admin password ###
alfresco_user_store.adminpassword=329153f560eb329c0e1deea55e88a1e9
### E-mail site invitation setting ###
notification.email.siteinvite=false
### License location ###
dir.license.external=C:/ALFRES~1
### Solr indexing ###
index.subsystem.name=solr
dir.keystore=${dir.root}/keystore
solr.host=localhost
solr.port.ssl=8443
##solr.port.ssl=443
### Allow extended ResultSet processing
security.anyDenyDenies=false
### Smart Folders Config Properties ###
smart.folders.enabled=false
### Remote JMX (Default: disabled) ###
alfresco.jmx.connector.enabled=false
### CIFS Server Configuration ###
# The tcpipSMB and netBIOSSMB beans have a platforms property that allow their configuration to be targeted to Alfresco servers running on specific platforms. The property is formatted as a comma-separated list of platform IDs. Valid platform IDs are windows,linux,solaris, macosx and aix.
cifs.platform=linux,solaris,macosx,windows
cifs.enabled= true
cifs.serverName=${localname}
cifs.broadcast=192.168.0.255
#An empty value indicates bind to all available network adapter
cifs.bindto=192.168.0.85
cifs.ipv6.enabled=true
cifs.hostannounce=true
cifs.pseudoFiles.enabled=false
#controls whether URL shortcuts or desktop actions are displayed on CIFS.
cifs.pseudoFiles.explorerURL.enabled=false
#Is the URL shortcut for alfresco explorer shown ?
#cifs.pseudoFiles.explorerURL.fileName=__Alfresco.url
#Name of CIFS URL for alfresco explorer
#cifs.pseudoFiles.shareURL.enabled=false
# Enable the use of asynchronous sockets/NIO code
cifs.disableNIO=false
# Disable the use of JNI code. Only currently affects Windows
cifs.disableNativeCode=false
# Session timeout, in seconds. Defaults to 15 minutes i.e. 900 seconds, to match the default Windows client setting. Maximum is 3600.
# If no I/O is received within that time the session is closed by the server
cifs.sessionTimeout=3600
cifs.WINS.autoDetectEnabled=true
cifs.sessionDebug=false
# Can be mapped to non-privileged ports, then use firewall rules to forward requests from the standard ports
cifs.tcpipSMB.port=1445
cifs.netBIOSSMB.sessionPort=1139
cifs.netBIOSSMB.namePort=1137
cifs.netBIOSSMB.datagramPort=1138
### Authentication Chain ###
authentication.chain=ldap1:ldap,alfrescoNtlm1:alfrescoNtlm
ldap.authentication.java.naming.read.timeout=15000
### Sync Settings ###
synchronization.synchronizeChangesOnly=true
synchronization.syncOnStartup=true
synchronization.syncWhenMissingPeopleLogIn=true
synchronization.import.cron=0 0 * * * ?
### SharePoint settings ###
vti.server.port=7070
vti.server.external.host=shp.parkmill.splatcooking.net
vti.server.external.port=443
vti.server.external.protocol=https
vti.alfresco.alfrescoHostWithPort=https://alfresco.parkmill.splatcooking.net:443
vti.share.shareHostWithPort=https://alfresco.parkmill.splatcooking.net:443
vti.share.shareContext=/share
### replication ###
replication.enabled=true
### orphan removal ###
system.content.orphanProtectDays=1
alfresco.authentication.authenticationCIFS=true
Also, I can connect via the browser, but network drive mapping doesn't work. What is causing this problem and how can I fix it?
Thanks for your precious time; I am waiting for your answers.
I believe your problem is the alfresco.host setting. You can connect locally, because it's looking at 127.0.0.1 (localhost). Your remote machine is trying to use 127.0.0.1 (which is local to that machine) and failing.
To fix this, update the alfresco.host and share.host properties to the IP or DNS name for the server on which Alfresco is installed.
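For example (the host name below is a placeholder for the server's real DNS name or IP, not a value from the question):
alfresco.host=alfresco.mycompany.example
share.host=alfresco.mycompany.example
After changing alfresco-global.properties, restart Alfresco so the new host values are picked up.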

Specify port number and content database for a query in marklogic

I am writing a few test cases on test app server 8062, but my data resides on port 8060 with a specific content database. Without changing the content database of the test server via the admin console, is it possible to specify which port and content database to hit for a specific query? Also, I do not want to load the contents of that content database into the test server's database.
For example, something like:
let $current := xdmp:eval(
  'fn:count(cts:uri-match("*.xml*"))',
  (),
  <options xmlns="xdmp:eval">
    <database>{xdmp:database("prj-content")}</database>
  </options>)
In MarkLogic the data doesn't "reside" on a port. Rather an app server that's connected to a database listens on a port for HTTP or XDBC requests. You can have many app servers fronting the same database. Testing and administration are two good use cases for more than one app server configured for a database.
Your test app server (port 8062) should specify your prj-content database. What is its current database configuration, and why is it different from prj-content? If you point the test server at prj-content, you won't have to specify the database at runtime.
If you really do need to specify the database at runtime you can use xdmp:eval, xdmp:invoke, or xdmp:invoke-function.
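As a sketch of the runtime approach, assuming (as in the question) that the target database is named prj-content, the same count can be evaluated against it with xdmp:invoke-function:
xquery version "1.0-ml";
xdmp:invoke-function(
  function() { fn:count(cts:uri-match("*.xml*")) },
  <options xmlns="xdmp:eval">
    <database>{xdmp:database("prj-content")}</database>
  </options>
)
The anonymous function runs against prj-content regardless of which database the app server on port 8062 is attached to.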

Get a value which is being inserted in DB via an HTTP request and later being used in another transaction

JMeter Version: 2.9
Test Scenario:
To test purchase order creation process.
In the process, an HTTP request generates a temp id for the purchase being made and stores it in the DB. Later this temp id gets fetched from the DB and used in the purchase closure step.
Could anyone suggest how to get this temp id value from the DB and reuse it later in the JMeter test plan, in the purchase closure step?
If the value is stored in the DB only and doesn't appear anywhere in the DOM (page source), the only way to get it is with a JDBC Request sampler (or a JDBC PostProcessor wrapped in a Transaction Controller, if you don't want an extra sampler in your results and extra time tracked against the HTTP Request).
You'll need to know the database URL, credentials, etc., and have a suitable JDBC driver on the JMeter classpath: download the JDBC driver for your database and drop it into the /lib folder of your JMeter installation.
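A sketch of what the JDBC Request sampler could contain (the table, column, and variable names here are hypothetical, since the question doesn't show the schema):
SELECT temp_id FROM purchase_orders WHERE order_ref = '${orderRef}'
With the sampler's "Variable names" field set to tempId, JMeter stores the first row's value as ${tempId_1}, which can then be referenced in the purchase closure request.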
