
Apache Tomcat 8 webserver issues to troubleshoot
After a certain period of time (no constant at all; it can be anywhere between an hour or two and one or more days) Tomcat will go down. Either it stops responding, or it puts up the generic 'Service Temporarily Unavailable' page.

There are two servers with the same setup. One houses a higher-traffic website (several requests per second), the other a low-traffic one (a handful of requests every few minutes). Both websites are completely different codebases, but they exhibit similar issues.

On the first server, when the problem occurs, all threads slowly start getting taken up until it reaches the limit (MaxThreads 200). At that point the server is no longer responding (and comes up with the service unavailable page after a long period of time). On the second server, when the problem occurs, requests take a long time, and when they are done all you see is the service unavailable page.

Other than the mention of the MaxThreads problem, the Tomcat logs do not indicate any specific issues that could be causing this. However, in the Apache logs we're seeing random messages referring to AJP. Here's a sample of the random messages that we see (in no specific order):

    (70007)The timeout specified has expired: ajp_ilink_receive() can't receive header
    (104)Connection reset by peer: ajp_ilink_receive() can't receive header
    proxy: AJP: disabled connection for (localhost)
    ajp_read_header: ajp_ilink_receive failed
    (120006)APR does not understand this error code: proxy: read response failed from 127.0.0.1:8009 (localhost)
    ap_proxy_connect_backend disabling worker for (localhost)

The other odd thing that we've noticed on the higher-traffic server is that right before the problem starts happening, database queries take much longer than before (2000-5000 ms versus the normal 5-50 ms). This only lasts for 2-4 seconds before the MaxThreads message comes up. I'm assuming this is a result of the server suddenly dealing with too much data/traffic/threads.

These two servers had been running without a problem for quite some time. The systems were actually set up with two NICs each during that time; they separated internal and external traffic. After a network upgrade, we moved these servers to single NICs (this was recommended to us for security/simplicity reasons). After that change, the servers started having these problems.

The obvious solution would be to move back to a setup of two NICs. The problems with that are that it would cause some complications with the network setup, and it seems like ignoring the problem. We'd prefer to try and get it running on a single-NIC setup.

Googling the various error messages didn't provide anything useful (either old solutions or things unrelated to our problem). We've tried adjusting the various timeouts, but that just made the server run slightly longer before dying. We're not sure where to look to diagnose the problem further. We're still grasping at straws as to what the problem could be:

1) The setup with AJP and Tomcat is incorrect or outdated (i.e. known bugs?)
2) The network setup (two NICs versus one NIC) is causing confusion or throughput problems.
3) The websites themselves (there's no common code, no platforms being used, just basic Java code with servlets and JSP).

Following David Pashley's helpful advice, I did a stack trace/thread dump during the issue. What I found was that all 200 threads were in one of the following states:

    "TP-Processor200" daemon prio=1 tid=0x73a4dbf0 nid=0x70dd waiting for monitor entry
        at oracle.jdbc.pool.OracleConnectionCacheImpl.getActiveSize(OracleConnectionCacheImpl.java:988)

    "TP-Processor3" daemon prio=1 tid=0x08f142a8 nid=0x652a waiting for monitor entry
        at oracle.jdbc.pool.OracleConnectionCacheImpl.getConnection(OracleConnectionCacheImpl.java:268)

Curiously, only one thread out of all 200 threads was in this state:

    "TP-Processor2" daemon prio=1 tid=0x08f135a8 nid=0x6529 runnable
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:129)
        at oracle.net.ns.Packet.receive(Unknown Source)
        at oracle.net.ns.DataPacket.receive(Unknown Source)
        at oracle.net.ns.NetInputStream.getNextPacket(Unknown Source)
        at oracle.net.ns.NetInputStream.read(Unknown Source)

It might be that the Oracle driver in this thread is forcing all the other threads to wait for it to complete. For some reason it must be stuck in this reading state (the server never recovers on its own; it requires a restart). This suggests that the problem is related to either the network between the server and the database, or the database itself. We're continuing diagnosis efforts, but any tips would be helpful.

One suggestion that came back: add connectionTimeout and keepAliveTimeout to your AJP connector, found in /etc/tomcat7/server.xml.

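The suggestion above refers to two documented attributes of Tomcat's AJP connector: connectionTimeout (milliseconds the connector waits for a request after accepting a connection) and keepAliveTimeout (milliseconds it waits for another request before closing a kept-alive connection). A sketch of what the Connector element in server.xml might look like with them set; the port and the timeout values here are illustrative, not tuned recommendations:

```xml
<!-- server.xml (on these systems: /etc/tomcat7/server.xml).
     connectionTimeout and keepAliveTimeout are in milliseconds;
     the values below are illustrative, not tuned recommendations. -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
           maxThreads="200"
           connectionTimeout="10000"
           keepAliveTimeout="30000" />
```

With these set, an AJP connection that stalls is closed by Tomcat after a bounded wait instead of holding a worker thread indefinitely.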


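A thread dump like the one above can be taken with jstack &lt;pid&gt; (or kill -3 &lt;pid&gt;, which writes it to catalina.out) and then summarized mechanically, so the "199 waiting, 1 runnable" pattern jumps out instead of being found by scrolling. A minimal sketch; the DumpStates class and its list of state keywords are my own, not from the original post:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DumpStates {

    // Tally the header lines of a JVM thread dump by thread state.
    // Thread headers look like:
    //   "TP-Processor2" daemon prio=1 tid=0x08f135a8 nid=0x6529 runnable
    static Map<String, Integer> count(String dump) {
        Map<String, Integer> states = new LinkedHashMap<>();
        for (String raw : dump.split("\n")) {
            String line = raw.trim();
            if (!line.startsWith("\"")) continue; // skip stack frames, blank lines
            String state;
            if (line.endsWith("runnable")) state = "runnable";
            else if (line.endsWith("waiting for monitor entry")) state = "waiting for monitor entry";
            else if (line.endsWith("waiting on condition")) state = "waiting on condition";
            else state = "other";
            states.merge(state, 1, Integer::sum);
        }
        return states;
    }

    public static void main(String[] args) {
        String sample = String.join("\n",
            "\"TP-Processor200\" daemon prio=1 tid=0x73a4dbf0 nid=0x70dd waiting for monitor entry",
            "\"TP-Processor3\" daemon prio=1 tid=0x08f142a8 nid=0x652a waiting for monitor entry",
            "\"TP-Processor2\" daemon prio=1 tid=0x08f135a8 nid=0x6529 runnable");
        System.out.println(count(sample));
        // prints {waiting for monitor entry=2, runnable=1}
    }
}
```

Taking two or three dumps a few seconds apart and comparing the tallies also shows whether the waiting threads are moving or truly stuck behind one monitor holder.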


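Since the single runnable thread is blocked in a socket read against Oracle with no timeout, another angle is to bound that read at the driver level. oracle.jdbc.ReadTimeout and oracle.net.CONNECT_TIMEOUT are documented Oracle thin-driver connection properties (in milliseconds). The helper below only builds the Properties object; the class name, the credentials, and the timeout values are illustrative, not tuned recommendations:

```java
import java.util.Properties;

public class OracleTimeouts {

    // Connection properties that bound a stuck read: a socket read that
    // hangs past oracle.jdbc.ReadTimeout throws an SQLException instead
    // of pinning a TP-Processor thread until the server is restarted.
    static Properties timeoutProps(String user, String password) {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        props.setProperty("oracle.net.CONNECT_TIMEOUT", "10000"); // connect: fail after 10 s
        props.setProperty("oracle.jdbc.ReadTimeout", "60000");    // reads: abort after 60 s
        return props;
    }

    public static void main(String[] args) {
        Properties p = timeoutProps("app", "secret");
        System.out.println(p.getProperty("oracle.jdbc.ReadTimeout")); // prints 60000
        // Pass to the driver with: DriverManager.getConnection(jdbcUrl, p)
    }
}
```

This doesn't fix whatever is stalling the network path to the database, but it turns an indefinite hang into a bounded failure the application can log and recover from.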


