I am trying to connect to my database through R, but am getting an error. My coworker connects perfectly with the same exact script. Anyone know what else could be the problem? We have the same packages installed.
PostgreSQL servers control which hosts they listen to and accept connections from. You did well to compare the same script between machines, but you may be connecting from different subnets, or there may be some other difference that triggers this behaviour.
On my box, this is controlled by the file /etc/postgresql/9.1/main/pg_hba.conf -- hba stands for host-based authentication.
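For illustration, a pg_hba.conf entry that allows password-authenticated connections from an entire subnet looks roughly like this (the subnet, database and user values are placeholders for your own setup):

    # TYPE  DATABASE  USER  ADDRESS          METHOD
    host    all       all   192.168.1.0/24   md5

If your machine falls outside every allowed ADDRESS range while your coworker's machine does not, you would see exactly this asymmetry; after editing the file, the server needs a configuration reload (e.g. pg_ctl reload) to pick the change up.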
So you may have to talk to your DBA or sysadmin.
First, I'm not an R/RStudio user at all. I'm a Windows admin tasked with configuring R and RStudio in a multi-user Citrix environment. To identify users across the multiple sessions, we are using the Palo Alto Terminal Server agent, which allocates a range of ports to each user and uses them to identify that user. That is then used to give limited, specific access to resources for each user.
The problem is that the TS Agent also intercepts the localhost connection that is created when you start RStudio (the rsession process), and RStudio then cannot connect to R. One possible solution would be to control the ports used when this local session is started.
I have done a lot of research on the Internet but have been unable to find whether/how you can change the ports that are used. I have found various config files, but none that seem to let me fix a single port or a port range.
Any insights on how to fix the ports for the rsession process so I can better control them? Or, to look at the problem another way: do you know the port range used by R/RStudio when they communicate through the rsession? Then I could simply exclude that range from the TS Agent.
I have only skimmed through the RStudio source code, but it seems that the port is assigned randomly:
https://github.com/rstudio/rstudio/blob/bcc8655ba3676e3155d80296c421362881340a0f/src/node/desktop/src/main/application.ts#L226
However, it also seems like there is a startup parameter --www-port to set the port:
https://github.com/rstudio/rstudio/blob/bcc8655ba3676e3155d80296c421362881340a0f/src/node/desktop/src/main/session-launcher.ts#L592
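As a minimal sketch of how that parameter could be used, assuming rsession honours it when launched directly (the install path below is illustrative and differs per RStudio version and machine):

    "C:\Program Files\RStudio\resources\app\bin\rsession.exe" --www-port=48888

If that works in your environment, you could pin each user's rsession to a known port or range and exclude that range from the TS Agent.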
We have found that some BizTalk receive locations are disabled after a server reboot (the BizTalk server and SQL Server are installed on separate physical servers).
Does anyone have ideas on this scenario? Is it due to the boot sequence or some other issue?
I will assume that, once you enable the receive locations manually, they are working correctly.
Typically, when FILE receive locations fail while pointing to an external server/share, it is because the share is no longer available.
Make sure that during the night there are no network issues or planned/unplanned downtime of the share (here, your SQL server). A BizTalk receive location will retry accessing a share for quite a while before disabling itself. Check the event log(s) for more information; you would want to look for errors/warnings there indicating a connectivity issue between BizTalk and SQL.
Another issue might be that there are too many connections between your BizTalk server and SQL server. You can set a maximum number of connections in the FILE share properties.
Also, you could try this link: https://serverfault.com/questions/235032/intermittent-connection-to-windows-7-shared-folder-from-windows-xp-workstations
It describes a potential fix for optimizing throughput for file sharing, although it depends on your operating system.
From reading the documentation on MSDN, I understand that it is recommended to create separate hosts by functionality (sending hosts, receiving hosts and processing hosts), and that if there is only one host on the BizTalk server, that host can perform all of the receiving, sending, and processing functionality.
My question is: is it possible to have multiple hosts, where each host performs its own sending, receiving and processing functions without affecting the others?
This is for multiple developers working on the same project, because our current situation doesn't allow us to have a full SQL Server database and SQL Server instance for each developer, or to use VMs.
Thanks a lot!
Multiple hosts are not a solution for letting multiple developers work on a single server. A single send/receive adapter can only be assigned to one host.
You will also run into other problems: since all the configuration settings are shared in a single database, a change by one developer will affect the others.
This same question was asked and answered at MSDN. What you are trying to do is not supported and will not work. There is no way around this.
You must deploy the same application code to each computer in a BizTalk Group.
Sharing a BizTalk computer for development work is not a workable or productive solution and will have a definite negative effect on productivity.
You are correct, the best way to handle DEV is a VM with the entire stack. This is the issue you must address in your environment.
Is there any way to run an NBD (Network Block Device) client and server on the same machine without deadlocking the system?
I am exhausted from trying to find an answer to this. I would appreciate it if anyone could help.
UPDATE:
I'm writing an NBD server that talks to the Google Storage system. I want to mount a file system on the NBD device and back up my files. I will be hugely disappointed if I end up having to run the server on another machine. A few ideas I already had seem to lead nowhere:
telling the file system to open the block device with the O_DIRECT flag to bypass the Linux buffer cache
using a raw device (unfortunately, raw devices are character devices and FSes refuse to use them as an underlying device)
Just for the record, having the NBD client and server on the same machine has been possible since 2008.
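For illustration only, a rough sketch of a local loop-back setup with the standard nbd tools; the port, paths and device node are placeholders, and the exact invocation varies between nbd versions (newer releases prefer a config file and named exports via nbd-client -N):

    # serve a file-backed export from this machine
    nbd-server 10809 /var/tmp/backing.img
    # attach it on the same machine
    modprobe nbd
    nbd-client localhost 10809 /dev/nbd0
    # create a filesystem and mount it
    mkfs.ext4 /dev/nbd0
    mount /dev/nbd0 /mnt/nbd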
Use a virtual machine (not a container) - you need two kernels, but you don't need two physical machines.
Since the front page of the SourceForge project for NBD says that a deadlock will happen "within seconds" in this scenario, I'm guessing the answer is a big "No."
Try to write a more complete question about the actual goal you're trying to accomplish. There are times when you need to bang away at a little problem, and times when you need to look at the big picture.
A few days ago we had a strange error with SQLite. We use an SQLite database on a network share with several computers accessing it. Our client reported that the database was gone. A quick look showed that the database was still there, but no computer could access it. It also showed an s3db-journal file, indicating that someone was accessing the db when something happened. The strange thing is that the s3db-journal file was locked by the file system (we could not copy or delete it). After restarting all applications, the locked file disappeared as it should.
How does this happen? We would like to somehow deduce how our client got into this situation. We know that there was faulty network cabling to one of the computers.
Thank you for your help.
Tobias
To clarify: several = up to 10 computers.
From the "Appropriate uses for SQLite" page:
If you have many client programs accessing a common database over a network, you should consider using a client/server database engine instead of SQLite. SQLite will work over a network filesystem, but because of the latency associated with most network filesystems, performance will not be great. Also, the file locking logic of many network filesystem implementations contains bugs (on both Unix and Windows). If file locking does not work like it should, it might be possible for two or more client programs to modify the same part of the same database at the same time, resulting in database corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.
A good rule of thumb is that you should avoid using SQLite in situations where the same database will be accessed simultaneously from many computers over a network filesystem.
It very well might be a bug in the network filesystem you're using. Either way, the SQLite developers explicitly recommend against using databases on network filesystems.
The issue is resolved. The database component (Zeos) threw an exception and we tried a rollback. Due to the way the component is designed, this is only allowed when you have started a transaction. If you haven't, you get the locked s3db-journal file.
In the end we learned two things: first, never roll back when you did not start a transaction; second, Zeos provides an InTransaction function to check for exactly that.
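As a hedged sketch of that second lesson in Delphi/Pascal, assuming a Zeos TZConnection component (the variable name ZConnection is illustrative):

    // Only roll back if a transaction was actually started;
    // otherwise the journal file can end up locked as described above.
    if ZConnection.InTransaction then
      ZConnection.Rollback;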