I am using Symfony 4 with Doctrine ORM to build an SPA.
On initial load of the SPA, the page makes 5 or 6 requests to load initial parameters. However, on Windows, approximately 20% of the time I get a Doctrine cache error:
Warning: rename(C:\Users\ZRTW9851\Documents\projects\PHP Tools\var\cache\dev/doctrine/orm/Proxies\__CG__AppEntityUpr.php.5eb16c31c580f1.02128983,C:\Users\ZRTW9851\Documents\projects\PHP Tools\var\cache\dev/doctrine/orm/Proxies\__CG__AppEntityUpr.php): Access denied. (code: 5) (500 Internal Server Error)
I have multiple questions:
Is it bad design to make multiple requests at once on the client side?
How can I solve the cache problem, given that I am using Windows?
Do I need to show some code for this problem? It occurs randomly and not on specific endpoints.
A web server should be able to handle multiple concurrent requests. Given the "Access denied" error, it looks more like the file or directory could not be renamed due to access restrictions.
This could be caused by several things:
The directory is not writable -> check and, if needed, grant write permissions on Windows (see the icacls sketch below)
The space in "PHP Tools" could be a problem -> try renaming the folder to "PHP_Tools"
If you're using some kind of VM or Docker, the problem could be there, or with how that folder is mounted
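For the first point, a minimal sketch of how to inspect and, if necessary, grant write access on the cache directory from an elevated command prompt; the "Users" group here is an assumption, so substitute the account your web server/PHP process actually runs as:

rem List the current ACL on the cache directory
icacls "C:\Users\ZRTW9851\Documents\projects\PHP Tools\var\cache"
rem Grant Modify, inherited by subfolders and files ((OI)(CI)), recursively (/T)
icacls "C:\Users\ZRTW9851\Documents\projects\PHP Tools\var\cache" /grant "Users:(OI)(CI)M" /T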
Overview
I tried creating a VPC network with a subnet and adding a Serverless VPC connector with Terraform in GCP. I was following the official guide (https://cloud.google.com/vpc/docs/configure-serverless-vpc-access#terraform) and initially everything was working well. Then I accidentally committed my JSON key to GitHub, someone stole it and used it for crypto mining; the project was disabled but shortly after that reinstated.
After that, my Terraform VPC connector creations started to fail. I have tried a lot of different things but nothing seems to work (running destroy, changing service accounts, changing names, deleting all of the Terraform subfolders, deleting EVERY resource and restarting the process).
The errors I am getting are:
│ Error: Error waiting to create Connector: Error waiting for Creating Connector: Error code 13, message: An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually.
│
or
│ Error: Error creating Connector: googleapi: Error 409: Requested entity already exists
Today I tried to create the VPC connector from the command line (gcloud) and from the UI. The errors persisted:
Unknown error. Original error message: Operation failed: Insufficient CPU quota in region.
Max throughput of the connector per day over last seven days.
or
An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually.
Errors while deleting:
│ Error: Error waiting for Deleting Network: The network resource 'projects/static-emblem-327016/global/networks/sun-serverless-network' is already being used by 'projects/static-emblem-327016/global/routes/default-route-5cbc9de02e21bb35'
│
I was looking at this issue: https://issuetracker.google.com/issues/164378672. It describes problems with us-central1, but I tried a couple of different regions and still have the same issue.
Questions:
I am running out of ideas. I was wondering if this is an infrastructure issue; maybe I should dump the project and create a new one? Where can I check if there are infrastructure issues? How can I resolve my issue?
I recently got the error Error: Error creating Connector: googleapi: Error 409: Requested entity already exists, so I can explain the root cause and its fix.
What I was doing was creating a GCP resource (a Pub/Sub topic) using Terraform (plan and then apply).
However, a long time before running terraform apply, I had created the resource manually with the same name. I expected that terraform plan or terraform apply would not try to create it again since the resource name is the same, but instead of refreshing the state, Terraform tried to create the resource. The reason is that Terraform does not know about your resource's history. You either need to import the existing resource into the state using the terraform import command, or delete the manually created resource and then run terraform apply again.
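As a rough illustration (the resource address and topic name here are placeholders, not taken from the question), importing an existing topic into the Terraform state looks like this:

# Tell Terraform that google_pubsub_topic.my_topic already exists in GCP
terraform import google_pubsub_topic.my_topic projects/my-project/topics/my-topic

After the import, terraform plan should no longer try to create that resource (it will only show in-place changes if the config differs).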
The message “An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually” can indicate that you don't have enough resources in your project to create the connector. Please make sure you have enough Resource Quota available in your GCP project.
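If it helps, you can check a region's quota usage (including CPUs, which the connector's underlying instances consume) with gcloud; the region and project names below are just examples:

# Prints the region's quotas with their current usage and limits
gcloud compute regions describe us-central1 --project=my-project

You can also see the same numbers in the Console under IAM & Admin > Quotas.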
The message “googleapi: Error 409: Requested entity already exists” indicates that the resource the client tried to create already exists.
If you want to know what the root cause is, you can check the logs of the VPC Connector creation in the System Event Audit Logs.
System Event audit logs contain log entries for Google Cloud actions that modify the configuration of resources. System Event audit logs are generated by Google systems; they aren't driven by direct user action. System Event audit logs are always written; you can't configure, exclude, or disable them. The instructions to access them are here.
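For example, a rough way to pull the recent System Event entries with gcloud (PROJECT_ID is a placeholder):

# Read the System Event audit log for the project
gcloud logging read 'logName="projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Fsystem_event"' --project=PROJECT_ID --limit=20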
On the other hand, generating and distributing service account keys poses severe security risks to your organization. They are long-lived credentials that are not automatically rotated. These keys can be leaked accidentally or maliciously, allowing attackers to gain access to your sensitive GCP resources. If you accidentally compromised your JSON key, please read the recommendations in this link.
If you want to know more about the risks of, and alternatives to, downloading a service account key, please follow this link. Please note that this is not official GCP documentation, so I cannot vouch for its accuracy.
I was able to resolve my issue. It turns out that I had deleted my default Compute Engine service account in a panic. I was able to recover it, and everything worked from there. For more info, go here: https://cloud.google.com/iam/docs/creating-managing-service-accounts#undeleting_a_service_account
You have to identify the default Compute Engine service account and undelete it:
gcloud beta iam service-accounts undelete ACCOUNT_ID
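In case it helps someone else: undelete expects the service account's unique numeric ID rather than its email, and as far as I know it only works for a limited time (around 30 days) after deletion. One way to dig the numeric ID out of the audit logs (PROJECT_ID is a placeholder):

# Find the audit log entry for the service account deletion
gcloud logging read 'protoPayload.methodName="google.iam.admin.v1.DeleteServiceAccount"' --project=PROJECT_ID --limit=5

The matching entry's resource labels should include the deleted account's unique_id.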
I am trying to do a couple of things within Cognos:
Load Framework Manager and view/modify SQL behind existing models and create new models
Modify existing reports through Report Studio via Cognos Connection
I was given an account on the Cognos application server and I installed Framework Manager. I was given the gateway URL and dispatcher URL from the System Admin and then transferred all of the project files to the server so that I could load the project in question. I'm able to open the .cpf file; however, when going into any models, I get the error:
Unable to access service at URL:
https://xxx.cognos.xxx.xxx:443/ibmcognos/cgi-bin/cognos.cgi?b_action=xts.run&m=portal/close.xts
Please check that your gateway URI information is configured correctly and that the service is available.
For further information please contact your service administrator.
I then contacted the system admin and he indicated that the URL was correct.
Furthermore, now when I try to access Cognos Connection (which worked fine last week), I receive the error:
CM-REQ-4159
Content Manager returned an error in the response header. The error "cmAuthenticateFailed CM-CAM-4005 Unable to authenticate. Check your security directory server connection and confirm the credentials entered at login." can be found in the response SOAP header.
The odd thing is, another member of my team receives this error:
AAA-AUT-0016:
The function call to 'Method.invoke(cmServiceInstance, queryRequest)' failed.
Details:
CM-SYS-5192 An error occurred with Content Manager.
I've done some research (I'm not really familiar with Cognos or even networking) and found that the errors I receive are usually seen when trying to run a single report; however, I can't even access FM models or Cognos Connection in general. I also don't understand how we can receive two different errors when accessing the same URLs from the same network.
Any guidance would be greatly appreciated. We are using Cognos 10.2.2.
http://www-01.ibm.com/support/docview.wss?uid=swg21624136
One possible reason is that the user does not have the required "Import relational metadata" capability.
Or maybe it has something to do with the registry.
Note: Make sure you back up the registry before making any changes.
See http://www-01.ibm.com/support/docview.wss?uid=swg22015730
Open cmd and type "regedit".
Navigate to HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\FeatureControl.
Right-click the "BMT.exe" = dword:00002af9 value.
Delete it.
Re-launch Framework Manager.
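For reference, the same deletion can be done from an elevated command prompt; this assumes the BMT.exe value sits directly under the FeatureControl key named in the technote, so adjust the key path if regedit shows it under a FEATURE_* subkey:

rem Delete the BMT.exe value under FeatureControl (prompts for confirmation)
reg delete "HKCU\Software\Microsoft\Internet Explorer\Main\FeatureControl" /v BMT.exe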
I'm running into an issue integrating Spring Security with my Elastic Beanstalk app backed by a MySQL database. If I deploy my app, I'm able to log in correctly for some time, but eventually I'll start to receive login errors without an exception being thrown, so I'm unable to get any useful information about the issue. I've downloaded the logs as well and can't see anything of value. I can see where the logs show accessing the public page, attempting to access the private section, returning the login page, and then the loginError page; however, there is nothing about any issue.
Even though I'm unable to log in through a browser, I am able to log in if I run the app from an IDE, as well as view the db in MySQL Workbench. This suggests to me the problem is due to some persistent state on the server.
I've had a similar problem before with another Beanstalk app using Spring Security and was able to resolve it by setting application properties as follows:
spring.datasource.test-on-borrow=true
spring.datasource.validation-query=SELECT 1
I'm using a more recent version of Spring than that app, and the properties have been changed to be datasource-specific, so I tried adding the following properties:
spring.datasource.tomcat.test-on-borrow=true
spring.datasource.tomcat.validation-query=SELECT 1
When that didn't work I added another based on an answer to a similar question here; now the properties are:
spring.datasource.tomcat.test-on-borrow=true
spring.datasource.tomcat.test-while-idle=true
spring.datasource.tomcat.validation-query=SELECT 1
That seemed to work (possibly due to reduced login activity) but eventually resulted in the same behavior.
I've looked into the various properties available but before I spend a lot of time randomly setting and/or overriding default settings I wanted to see if there's a reliable way to deal with this.
How can I configure my datasource to avoid login errors after long periods of time?
This isn't a problem with specific configuration values but with where those configurations reside. The default location for application.properties (/resources; IntelliJ) is fine when deploying as a jar with an embedded Tomcat server, but not as a war with a provided server. The file isn't found/used, so no changes to it affect the configuration given by AWS.
There are a number of ways to handle this; I chose to add an RDS configuration bean in my SpringBootServletInitializer:
// RdsInstanceConfigurer and TomcatJdbcDataSourceFactory come from Spring Cloud AWS (JDBC module)
@Bean
public RdsInstanceConfigurer instanceConfigurer() {
    return () -> {
        TomcatJdbcDataSourceFactory dataSourceFactory =
                new TomcatJdbcDataSourceFactory();
        // Abandoned connections
        dataSourceFactory.setRemoveAbandonedTimeout(60);
        dataSourceFactory.setRemoveAbandoned(true);
        dataSourceFactory.setLogAbandoned(true);
        // Tests
        dataSourceFactory.setTestOnBorrow(true);
        dataSourceFactory.setTestOnReturn(false);
        dataSourceFactory.setTestWhileIdle(false);
        // Validation
        dataSourceFactory.setValidationInterval(30000);
        dataSourceFactory.setTimeBetweenEvictionRunsMillis(30000);
        dataSourceFactory.setValidationQuery("SELECT 1");
        return dataSourceFactory;
    };
}
Below are the settings that worked for me.
From "Connection to Db dies after >4<24 in spring-boot jpa hibernate":
dataSourceFactory.setMaxActive(10);
dataSourceFactory.setInitialSize(10);
dataSourceFactory.setMaxIdle(10);
dataSourceFactory.setMinIdle(1);
dataSourceFactory.setTestWhileIdle(true);
dataSourceFactory.setTestOnBorrow(true);
dataSourceFactory.setValidationQuery("SELECT 1 FROM DUAL");
dataSourceFactory.setValidationInterval(10000);
dataSourceFactory.setTimeBetweenEvictionRunsMillis(20000);
dataSourceFactory.setMinEvictableIdleTimeMillis(60000);
I have set up a Data Source (ODBC) for running an ASP site on my local computer, using the Microsoft Access Driver.
Now I can run the whole site without error, but if I apply for leave it shows an error:
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)
[Microsoft][ODBC Microsoft Access Driver] Operation must use an
updateable query. /eleave/leaveApplicationOut.asp, line 39
Update
After giving the Write permission, the following error is shown:
Error Type:
jmail.Message (0x8000FFFF) The message was undeliverable. All servers
failed to receive the message /eleave/leaveApplicationOut.asp, line 80
Thank you very much for your support.
It is solved now.
4 possible causes are highlighted here: http://support.microsoft.com/kb/175168
I am guessing it's #1:
The most common reason is that the Internet Guest account (IUSR_MACHINE), which is by default part of the "Everyone" group, does not have Write permissions on the database file (.mdb). To fix this problem, use the Security tab in Explorer to adjust the properties for this file so that the Internet Guest account has the correct permissions.
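For example, from a command prompt you could grant the guest account Change permission on the database folder and the .mdb itself (the path and IUSR_MACHINENAME below are placeholders; the folder also needs write access so the .ldb lock file can be created):

rem Grant Change (write) permission to the Internet Guest account
cacls "C:\inetpub\wwwroot\eleave\db" /E /G IUSR_MACHINENAME:C
cacls "C:\inetpub\wwwroot\eleave\db\eleave.mdb" /E /G IUSR_MACHINENAME:C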
The first error (which it seems you have solved) has to do with write permissions on the database.
The updated question, though, seems to be completely unrelated.
You seem to be trying to send an email, right? And it says it failed.
Perhaps the SMTP service is not running and so it cannot send the email? Could it be a wrong IP address defined somewhere? Wrong credentials for the email account? (Read http://host.cdesystems.com/faq/jmail_faq.asp for possible problems.)
Please show some code for how you configure JMail; for comparison, a typical setup is sketched below.
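For reference, a typical JMail send block looks roughly like this; the server name, credentials, and addresses are placeholders, so compare it against what your code does around line 80:

<%
' Hypothetical JMail configuration sketch - replace the placeholders
Set msg = Server.CreateObject("JMail.Message")
msg.Silent = True                        ' don't throw; let Send() return True/False
msg.Logging = True                       ' keep a log to diagnose failures
msg.From = "leave@example.com"
msg.FromName = "eLeave System"
msg.AddRecipient "manager@example.com"
msg.Subject = "Leave application"
msg.Body = "A new leave application has been submitted."
' Only needed if your SMTP server requires authentication
msg.MailServerUserName = "smtp-user"
msg.MailServerPassword = "smtp-password"
If Not msg.Send("mail.example.com") Then
    Response.Write msg.Log               ' shows why the server rejected the message
End If
%>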
I'm having a major issue trying to configure a new install of BizTalk Server 2006 (not R2). The server had BizTalk installed on it before, and it was working fine. I've uninstalled BizTalk, removed the databases and jobs from the SQL server, which is a separate machine, and re-installed BizTalk. The install was successful, with no errors during the install, and nothing in the install logs.
I'm configuring the BizTalk server to be the SSO master secret server, along with creating a new BizTalk group and registering the BizTalk runtime. The process always errors out on creating the SSO database on the SQL server. In the ConfigLog, there are a couple of warnings that the MSSQLServerOLAPService does not exist, then it shows errors on creating the SSO database. There are 4 in a row. In order, they are:
Error ConfigHelper] [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied.
Error ConfigHelper] SQL error: 08001 Native error code: 17
Error ConfigHelper] c:\depotsetupv2\private\common\configwizard\confighelper\sqlhelper.cpp(1176): FAILED hr = 80004005
Error ConfigHelper] c:\depotsetupv2\private\common\configwizard\confighelper\sqlhelper.cpp(918): FAILED hr = 80004005
It then has similar errors trying to create each of the BizTalk databases.
On the SQL server, there are corresponding errors in the SQL Server logs, two for each attempt:
Login failed for user '[USERNAME]'.[CLIENT: [IP ADDRESS]]
Error: 18456, Severity: 14, State: 16
The first error from the SQL logs also shows up as a failure audit in the SQL server's application event log.
The biggest issue I am having with this is that the user I am logged on to the BizTalk server is a local admin on both the BizTalk server and the SQL server, and is in the SQL sysadmin group. The user that I am configuring the BizTalk services to run under is also a local admin on both servers and in the sysadmin group on the SQL server. I've checked the MSDTC settings on both machines and made sure they are set as the BizTalk documentation recommends. SQL Browser is running on the SQL machine, and I've verified that network access is allowed using the SQL Surface Area Configuration tool.
Can anyone help me find something that I might have missed?
Re: Igal:
Yes, all of the servers and users are on the same domain. I've run across that posting on SQL Protocols while researching this, but I tried selecting a count from one of the tables in the default database of the logged-in user while connected to another database, and I had no problems at all running that query.
Re: Yossi:
I'm installing BizTalk on Windows Server 2003 R2 SP1. Yes, I have removed the SSODB (wouldn't put it past myself to miss something like that, though!). I will make sure I am providing the usernames correctly, check out the sources you linked, and get back to you.
A few pointers:
Check out the two points at the end of the Configuring Enterprise SSO Using the Configuration Manager page on MSDN:
When configuring the SSO Windows accounts using local accounts, you must specify the account name without the computer name.
When using a local SQL Server named instance as the data store, you must use LocalMachineName\InstanceName instead of LocalMachineName\InstanceName,PortNumber.
Check out the relevant installation guide (don't worry about the fact that it relates to R2; they seem to have hidden the 'R1' documentation, but they are the same), and specifically the section around "Windows Groups and Service Accounts".
Also - just to be sure - when you uninstalled BizTalk and removed the databases, you removed the SSODB as well, right?! :-)
The log files are very confusing - especially when deciding which error is the actual problem. Have you tried looking up any other errors you've had? (Check out this blog entry, for example.)
I had everything set up properly. Unfortunately for me, the answer was the standard "Windows" answer - reboot and try again. As soon as I rebooted the SQL server, I was able to configure BizTalk just fine.
I am going to set Yossi's answer as accepted, however, since that would be the most relevant for anyone else who may be reading this question.
Just remember to reboot after all setting changes!
Make sure the BizTalkMgmtDb and BizTalkMsgBoxDb have your local admin account as DB OWNER.
Right-click on each database --> Properties --> Files --> Owner.
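If you prefer T-SQL over the Properties dialog, something like this should do the same thing (DOMAIN\YourAdminAccount is a placeholder for your local admin/BizTalk account):

-- Change the database owner for the two BizTalk databases
USE BizTalkMgmtDb;
EXEC sp_changedbowner 'DOMAIN\YourAdminAccount';

USE BizTalkMsgBoxDb;
EXEC sp_changedbowner 'DOMAIN\YourAdminAccount';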