OWASP Top 10 – A5 Security Misconfiguration

Description

Nowadays, besides the operating system and the JRE, most Java applications rely on third-party frameworks, open-source or proprietary. Moreover, a web application is deployed on an application server (or a servlet container).
Each of these components is a potential attack vector if an attacker gathers enough information about the environment.

Examples

HTTP header

By checking the Server HTTP header sent by the bank’s website, an attacker learns which web server you are using and can look up its known vulnerabilities.

Stack trace

An attacker finds a way to trigger an uncaught exception on your bank’s website, and the stack trace is displayed in a web page. From that revealed information, the attacker identifies which portal solution is used. And since the default administration account was not deleted and its password was not changed, the attacker can simply log in as an administrator and do whatever he wants on the website.

Mitigations

First of all, let me give a few examples of components you have to properly configure:

  • Operating System (Linux, Solaris, AIX, Windows…)
  • Java Runtime Environment (Sun, IBM…)
  • Application Server, Servlet Container (Tomcat, Jetty, Glassfish, jBoss…)
  • Web Server (HTTPD, Nginx…)
  • RDBMS (Oracle, SQL Server, MySQL…)
  • Third-party frameworks
    • Spring
    • WS stack (CXF, Spring WS, Axis2…)
    • ORM framework (Hibernate, TopLink…)
    • JSF implementation (PrimeFaces, MyFaces…)
    • Portal (Liferay, Exo…)

[Comic: “Security Holes” – source: xkcd.com]

Install security patches

It is easier to attack a website when you know its Achilles’ heel.
Security advisories are usually published after the vendor has released the related fixes (e.g. for Apache 2.2, Apache 2.4, or all maintained versions of Nginx). If an attacker knows you are using an old version of a component and that version has a critical security issue, he can use this weakness to penetrate your system.
Hence, security fixes must be tested and deployed to production as soon as possible. Of course, if you have a continuous integration environment with automated unit and integration tests, testing your whole application will be easier and faster, but that is out of the scope of this article. Depending on the severity of the issue, the acceptable delay for deploying a security patch ranges from less than one month for a critical issue up to three months for a low-severity one.
Tip: Subscribe to the mailing list of each component (OS, servers, third-party frameworks…) used to run your application so you are informed as soon as a new version is available. E.g. subscribe to the Apache mailing lists to follow all Apache projects, or to the Apache CXF mailing lists to follow only news about CXF.

Obfuscate sensitive information

Logs

Never store sensitive information in logs. If you protect sensitive data (I hope you do), the way you protect it is itself sensitive information. E.g. if you use PBKDF2 to protect your passwords, reveal neither the algorithm nor the number of iterations you apply.
Of course, some logs are generated by third-party frameworks. In that case, configure your logging framework with the appropriate levels so that sensitive information is not stored. E.g. if your ORM framework logs the full connection string, including the password, at INFO level, do not activate this level in production for the framework, or at least not for the class that logs this information.
Last but not least, if you store your logs in a file, you must protect this file with appropriate access rights. If you have an automatic archiving mechanism for your logs, make sure the location of the archived logs has the same access restrictions.
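As a sketch, assuming Log4J 1.x and Hibernate as the ORM framework (the logger names below are examples; check which class in your stack actually logs the connection details), the verbose loggers can be capped in log4j.properties:

```properties
# Production logging levels (sketch): keep the application at INFO,
# but never let the ORM log connection strings or credentials.
log4j.rootLogger=INFO, file
log4j.logger.org.hibernate=WARN
# Example of silencing only the offending class/package:
log4j.logger.org.hibernate.connection=ERROR
```

The same idea applies to Logback or any other framework: lower the level of the specific logger that leaks, rather than the whole application.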

Stack traces

Stack traces are very useful for understanding a bug and fixing it quickly.
They are also a gold mine for an attacker: the package names tell him which third-party frameworks you are using, and the class names or line numbers let him guess the versions. This kind of information can be used to find security holes in your application. Even if your own code is very secure, it may rely on less secure frameworks. So don’t brag about your underlying frameworks!
If possible, don’t store stack traces in production logs. Logging frameworks offer ways to suppress them.
E.g. Log4J’s Layout abstract class declares the method abstract public boolean ignoresThrowable();. You can extend this class, or an existing implementation that fits your needs, and override this method:
public class LayoutNoStackTraces extends PatternLayout {

    // Returning false tells the appender that the layout takes care of the
    // throwable itself; since PatternLayout never prints it, the stack
    // trace is simply dropped from the log output.
    @Override
    public boolean ignoresThrowable() {
        return false;
    }
}
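To use it, point your appender at the custom layout, e.g. in log4j.properties (the appender name, file path, package name and pattern below are illustrative):

```properties
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/var/log/myapp/app.log
# Use the stack-trace-suppressing layout instead of plain PatternLayout.
log4j.appender.file.layout=com.example.logging.LayoutNoStackTraces
log4j.appender.file.layout.ConversionPattern=%d %-5p [%c] %m%n
```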

Be careful with the default configuration of your components. E.g. Tomcat’s default error page displays the full stack trace in a web page when an uncaught error occurs in the application. To prevent this, you need to define your own error page by adding the following in your web.xml:
<error-page>
    <exception-type>java.lang.Throwable</exception-type>
    <location>/uncaught_error.jsp</location>
</error-page>
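The error page itself should reveal nothing about the failure. A minimal uncaught_error.jsp could look like this (the wording is up to you; the point is that no exception detail reaches the response):

```jsp
<%@ page isErrorPage="true" contentType="text/html; charset=UTF-8" %>
<html>
  <head><title>Error</title></head>
  <body>
    <%-- isErrorPage gives access to the implicit 'exception' object:
         do NOT print it, its message or its stack trace here. --%>
    <p>An unexpected error occurred. Please try again later.</p>
  </body>
</html>
```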

If you are exposing web services, be careful with the information your web service stack can send automatically. Sometimes, a stack trace can be added to the SOAP fault message.

Server name and version

HTTP headers also contain information about your environment.

  • The “User-Agent” HTTP header gives information on the client that has generated the request (web browser, web service client…).
  • The “Server” HTTP header gives information on the server that has generated the response (web server, application server…).

This information usually includes the name, the version, and sometimes even the underlying operating system…
Obviously, with this kind of information, it is easier for an attacker to find vulnerabilities in your application.
You can configure your components so that they send no information, or fake information. I recommend always sending a neutral public value like your domain name in these headers. Or you can send a wrong server name and/or version, e.g. nginx/1.5.6 instead of Apache/2.4.1 😉
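For example, Tomcat’s HTTP connector has a server attribute that replaces the banner sent in the Server header (the fake value below follows the suggestion above); for Apache HTTPD, the ServerTokens Prod and ServerSignature Off directives at least strip the version and OS details, though they cannot replace the product name without additional modules:

```xml
<!-- Tomcat server.xml: override the Server response header on the
     connector. The value is deliberately misleading, as suggested above. -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           server="nginx/1.5.6" />
```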
Note that obfuscating the server banner string is not 100% efficient against HTTP fingerprinting tools, though.

What is HTTP fingerprinting?
This technique consists in analyzing the responses a server returns to different requests (a well-formed request, a non-existing URL, a malformed request…) in order to determine the name and version of the tested server. Each server may set different headers, with different values, in a different order, and with different HTTP status codes; the results also vary with the server version and the type of request. By analyzing the responses to all these requests, you can guess with more or less accuracy the name, and even the version, of the server.
To defeat HTTP fingerprinting, besides obfuscating the HTTP headers, you can for example configure your server to always return an HTTP 500 status code whatever the malformed request is.
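To make the idea concrete, here is a toy sketch in Java: it compares the order of response header names against known signatures. The signatures below are invented for illustration; real fingerprinting tools combine many more probes (status codes, header values, error bodies…).

```java
import java.util.List;
import java.util.Map;

public class HeaderOrderFingerprint {

    // Invented example signatures: the order in which each server is
    // assumed to emit its response headers on a default installation.
    static final Map<String, List<String>> SIGNATURES = Map.of(
            "Apache", List.of("Date", "Server", "Last-Modified", "ETag", "Content-Length"),
            "nginx",  List.of("Server", "Date", "Content-Type", "Content-Length", "Last-Modified"));

    /** Returns the server whose signature matches the observed order, or "unknown". */
    public static String guess(List<String> observedHeaderOrder) {
        return SIGNATURES.entrySet().stream()
                .filter(e -> e.getValue().equals(observedHeaderOrder))
                .map(Map.Entry::getKey)
                .findFirst()
                .orElse("unknown");
    }
}
```

A server that normalizes its headers (or sends fake ones) makes every probe return "unknown", which is exactly the point of the mitigation above.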

Information about the server is not only in HTTP headers. On a default installation of your web server, if you request a resource that does not exist, the response may be a default HTTP 404 error page with a beautiful footer containing all the information you are trying to hide in the HTTP headers. So don’t forget to provide your own error page for each HTTP status code.
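In a Java web application, the same web.xml mechanism shown earlier for exceptions also works per status code (the page paths below are examples):

```xml
<!-- web.xml: one custom page per status code, so the container's default
     page (and its version footer) is never sent. -->
<error-page>
    <error-code>404</error-code>
    <location>/errors/not_found.jsp</location>
</error-page>
<error-page>
    <error-code>500</error-code>
    <location>/errors/server_error.jsp</location>
</error-page>
```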

Java Runtime Environment

The default installation of the JRE does not ship with samples, so it is ready for production.
One exception though: change the default password (changeit) of the provided CA trust store (cacerts), even if you don’t plan to use SSL.

Servers (application server, servlet container, web server, database, portal…)

Protect the servers configuration

The content of configuration files is sensitive and should not be exposed to unauthorized persons. Hence, like the log files, you must set appropriate access restrictions on the configuration files. Only the application account should be authorized to view and modify the configuration files.
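On a Unix-like system this boils down to ownership and file modes; a minimal sketch on a stand-in file (the real path would be e.g. your server.xml, owned by the service account):

```shell
# Stand-in for a real configuration file such as /opt/tomcat/conf/server.xml.
CONF=/tmp/demo-server.conf
touch "$CONF"
# In production you would also assign it to the service account,
# e.g.: chown tomcat:tomcat "$CONF"
chmod 600 "$CONF"      # owner may read/write; group and others get nothing
stat -c '%a' "$CONF"   # prints 600
```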

Delete samples applications

Servers are often shipped with samples. They are useful during the development phase to start using a new component, but they are not hardened for production and thus become backdoors for attackers.
Delete all sample applications, configuration files, accounts…

Review default configuration

Immediately after installation, you can start your servers because they ship with samples and a simple default configuration.
In production, delete these samples immediately after installation; as explained above, they can contain security issues that serve as back doors into your system.
You must also review the default configuration in detail to ensure it fits your needs, and remove all unnecessary resources.
When possible, delete the default administration accounts and create new ones. Otherwise, change the default passwords.

Operating system

Accounts

Protect the super-user (root/administrator) account with a strong password. Don’t use this account to start a service unless the service requires it. For example, to bind the Apache HTTPD server to the reserved ports 80 or 443, you must start it as root. Refer to the section “Servers/Apache” to see how to configure Apache in that case.
Create a dedicated account for each service (application server, web server, database…). That account will be used to start and stop the service and will have no unnecessary rights on other folders, accounts, applications or services. Only that account will have access to the configuration and the logs of its server.

Services

Depending on the operating system, some services are started automatically. Stop the services you won’t use and remove them from the list of auto-started services.
Be careful with open ports: an attacker can scan them and use them to penetrate your system. Close the ports you won’t use. In some cases they are opened by default for an auto-started service, so when you deactivate a service, don’t forget to close the ports it used, if any.

Penetration test

To ensure all components are properly configured, if security really matters and you have the time and money for it, submit your environment to a full penetration test.
A penetration test starts by collecting information about the system under test: the OS name and version, the open ports, the names and versions of running services (web server, application server…), the supported SSL versions and accepted cipher suites… Then, after analysis, the tester tries to exploit the collected information to penetrate the system through the appropriate vulnerabilities.

To see all articles related to OWASP Top 10, follow the tag #owasp

