Test Application Platform Configuration
WSTG-CONF-02
Summary
Proper configuration of the individual elements that make up an application architecture is important in order to prevent mistakes that might compromise the security of the whole architecture.
Reviewing and testing configurations are critical tasks in creating and maintaining an architecture. This is because various systems often come with generic configurations, which may not align well with the tasks they're supposed to perform on the specific sites where they're installed.
While the typical web and application server installation will contain a lot of functionality (like application examples, documentation, test pages), what is not essential should be removed before deployment to avoid post-install exploitation.
Test Objectives
Ensure that default and known files have been removed.
Validate that no debugging code or extensions are left in the production environments.
Review the logging mechanisms set in place for the application.
How to Test
Black-Box Testing
Sample and Known Files and Directories
In a default installation, many web servers and application servers provide sample applications and files for the benefit of the developer, in order to test if the server is working properly right after installation. However, many default web server applications have later been known to be vulnerable. This was the case, for example, for CVE-1999-0449 (Denial of Service in IIS when the Exair sample site had been installed), CAN-2002-1744 (Directory traversal vulnerability in CodeBrws.asp in Microsoft IIS 5.0), CAN-2002-1630 (Use of sendmail.jsp in Oracle 9iAS), or CAN-2003-1172 (Directory traversal in the view-source sample in Apache’s Cocoon).
CGI scanners, which include a detailed list of known files and directory samples provided by different web or application servers, might be a fast way to determine if these files are present. However, the only way to be really sure is to do a full review of the contents of the web server or application server, and determine whether they are related to the application itself or not.
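As a rough sketch of that first pass (the target URL and path list are placeholders; a real scan would use the full wordlists shipped with tools such as nikto or dirb):

```shell
# Probe a handful of well-known sample/default paths and report the
# HTTP status for each; 200 or 403 responses deserve manual review.
TARGET="https://www.example.com"

for path in /iissamples/ /scripts/ /examples/ /docs/ /test.php; do
  status=$(curl -k -s -o /dev/null -w "%{http_code}" --max-time 5 "$TARGET$path" || echo "ERR")
  echo "$TARGET$path -> $status"
done
```

Even when every path returns 404, a manual review of the server's document root is still needed to confirm nothing application-unrelated remains.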
Comment Review
It is very common for programmers to add comments when developing large web-based applications. However, comments included inline in HTML code might reveal internal information that should not be available to an attacker. Sometimes, a part of the source code is commented out when a functionality is no longer required, but this comment is unintentionally leaked out to the HTML pages returned to the users.
Comment review should be done in order to determine if any information is being leaked through comments. This review can only be thoroughly done through an analysis of the web server's static and dynamic content, and through file searches. It can be useful to browse the site in an automatic or guided fashion, and store all the retrieved content. This retrieved content can then be searched in order to analyse any HTML comments available in the code.
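A minimal sketch of that workflow (the target URL is a placeholder; wget and grep stand in for any crawler and search tool):

```shell
# Mirror up to two link levels of the site, then search everything
# retrieved for HTML comments that may leak internal details.
mkdir -p ./mirror
wget -q -r -l 2 -P ./mirror https://www.example.com || true

# List comment occurrences with file name and line number.
grep -rn -E '<!--' ./mirror | head -20
```

Each hit should be reviewed by hand: most comments are harmless, but commented-out code, credentials, and internal hostnames are the findings of interest.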
System Configuration
Various tools, documents, or checklists can be used to give IT and security professionals a detailed assessment of the target systems' conformance to various configuration baselines or benchmarks, such as the CIS Benchmarks or DISA STIGs.
Gray-Box Testing
Configuration Review
The web server or application server configuration takes an important role in protecting the contents of the site and it must be carefully reviewed in order to spot common configuration mistakes. Obviously, the recommended configuration varies depending on the site policy, and the functionality that should be provided by the server software. In most cases, however, configuration guidelines (either provided by the software vendor or external parties) should be followed to determine if the server has been properly secured.
It is impossible to generically say how a server should be configured, however, some common guidelines should be taken into account:
Only enable server modules (ISAPI extensions in the case of IIS) that are needed for the application. This reduces the attack surface since the server is reduced in size and complexity as software modules are disabled. It also prevents vulnerabilities that might appear in the vendor software from affecting the site if they are only present in modules that have been already disabled.
Handle server errors (40x or 50x) with custom-made pages instead of with the default web server pages. Specifically make sure that any application errors will not be returned to the end user and that no code is leaked through these errors since it will help an attacker. It is actually very common to forget this point since developers do need this information in pre-production environments.
Make sure that the server software runs with minimized privileges in the operating system. This prevents an error in the server software from directly compromising the whole system, although an attacker could elevate privileges once running code as the web server.
Make sure the server software properly logs both legitimate access and errors.
Make sure that the server is configured to properly handle overloads and prevent Denial of Service attacks. Ensure that the server has been performance-tuned properly.
Never grant non-administrative identities (with the exception of NT SERVICE\WMSvc) access to applicationHost.config, redirection.config, and administration.config (either Read or Write access). This includes Network Service, IIS_IUSRS, IUSR, or any custom identity used by IIS application pools. IIS worker processes are not meant to access any of these files directly.
Never share out applicationHost.config, redirection.config, and administration.config on the network. When using Shared Configuration, prefer to export applicationHost.config to another location (see the section titled "Setting Permissions for Shared Configuration").
Keep in mind that all users can read .NET Framework machine.config and root web.config files by default. Do not store sensitive information in these files if it should be for administrator eyes only.
Encrypt sensitive information that should be read by the IIS worker processes only and not by other users on the machine.
Do not grant Write access to the identity that the Web server uses to access the shared applicationHost.config. This identity should have only Read access.
Use a separate identity to publish applicationHost.config to the share. Do not use this identity for configuring access to the shared configuration on the Web servers.
Use a strong password when exporting the encryption keys for use with shared configuration.
Maintain restricted access to the share containing the shared configuration and encryption keys. If this share is compromised, an attacker will be able to read and write any IIS configuration for your Web servers, redirect traffic from your site to malicious sources, and in some cases gain control of all web servers by loading arbitrary code into IIS worker processes.
Consider protecting this share with firewall rules and IPsec policies to allow only the member web servers to connect.
Logging
Logging is an important asset of the security of an application architecture, since it can be used to detect flaws in applications (users constantly trying to retrieve a file that does not really exist) as well as sustained attacks from rogue users. Logs are typically properly generated by web and other server software. It is not common to find applications that properly log their actions to a log and, when they do, the main intention of the application logs is to produce debugging output that could be used by the programmer to analyze a particular error.
In both cases (server and application logs) several issues should be tested and analyzed based on the log contents:
Do the logs contain sensitive information?
Are the logs stored on a dedicated server?
Can log usage generate a Denial of Service condition?
How are they rotated? Are logs kept for a sufficient amount of time?
How are logs reviewed? Can administrators use these reviews to detect targeted attacks?
How are log backups preserved?
Is the data being logged validated (min/max length, allowed characters, etc.) prior to being logged?
Sensitive Information in Logs
Some applications might, for example, use GET requests to forward form data which can be seen in the server logs. This means that server logs might contain sensitive information (such as usernames and passwords, or bank account details). This sensitive information can be misused by an attacker if they obtained the logs, for example, through administrative interfaces or known web server vulnerabilities or misconfiguration (like the well-known server-status misconfiguration in Apache-based HTTP servers).
Event logs will often contain data that is useful to an attacker (information leakage) or can be used directly in exploits:
Debug information
Stack traces
Usernames
System component names
Internal IP addresses
Less sensitive personal data (e.g. email addresses, postal addresses and telephone numbers associated with named individuals)
Business data
Also, in some jurisdictions, storing certain sensitive information, such as personal data, in log files might oblige the enterprise to apply to those logs the same data protection laws that apply to their backend databases. Failure to do so, even unknowingly, might carry penalties under the applicable data protection laws.
A wider list of sensitive information is:
Application source code
Session identification values
Access tokens
Sensitive personal data and some forms of personally identifiable information (PII)
Authentication passwords
Database connection strings
Encryption keys
Bank account or payment card holder data
Data of a higher security classification than the logging system is allowed to store
Commercially-sensitive information
Information it is illegal to collect in the relevant jurisdiction
Information a user has opted out of collection, or not consented to e.g. use of do not track, or where consent to collect has expired
Log Location
Typically servers will generate local logs of their actions and errors, consuming the disk of the system the server is running on. However, if the server is compromised, its logs can be wiped out by the intruder to clean up all the traces of their attack and methods. If this were to happen the system administrator would have no knowledge of how the attack occurred or where the attack source was located. Actually, most attacker toolkits include a "log zapper" that is capable of cleaning up any logs that hold given information (like the IP address of the attacker), and such tools are routinely used in attackers' system-level rootkits.
Therefore, it is wise to keep logs in a separate location and not on the web server itself. This also makes it easier to aggregate logs from different sources that refer to the same application (such as those of a web server farm) and it also makes it easier to do log analysis (which can be CPU intensive) without affecting the server itself.
Log Storage
Improper storage of logs can introduce a Denial of Service condition. Any attacker with sufficient resources might be able to produce enough requests to fill up the space allocated to log files, if they are not specifically prevented from doing so. Moreover, if the server is not properly configured, the log files will be stored in the same disk partition as the one used for the operating system software or the application itself. This means that if the disk becomes filled, the operating system or the application might fail due to the inability to write on the disk.
Typically in UNIX systems logs will be located in /var (although some server installations might reside in /opt or /usr/local) and it is important to make sure that the directories in which logs are stored are in a separate partition. In some cases, and in order to prevent the system logs from being affected, the log directory of the server software itself (such as /var/log/apache in the Apache web server) should be stored in a dedicated partition.
This is not to say that logs should be allowed to grow to fill up the file system they reside in. Growth of server logs should be monitored in order to detect this condition since it may be indicative of an attack.
Testing this condition, which can be risky in production environments, can be done by firing off a sufficient and sustained number of requests to see if these requests are logged and if it is possible to fill up the log partition through them. In environments where QUERY_STRING parameters are also logged, regardless of whether they are produced through GET or POST requests, big queries can be simulated to fill up the logs faster, since a single request will typically cause only a small amount of data to be logged: date and time, source IP address, URI request, and server result.
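A minimal sketch of such a test (the target URL and request count are placeholders; only run this against systems you are authorized to load-test, and coordinate with the operations team first):

```shell
# TARGET is a hypothetical staging host.
TARGET="https://staging.example.com/index.html"

# Build ~2 KB of filler to simulate a large QUERY_STRING.
PAYLOAD=$(head -c 2048 /dev/zero | tr '\0' 'A')

i=0
while [ "$i" -lt 10 ]; do          # raise the count for a real test
  curl -k -s -o /dev/null --max-time 2 "$TARGET?q=$PAYLOAD" || true
  i=$((i + 1))
done
echo "sent $i requests, ~${#PAYLOAD} bytes of query string each"
```

Comparing the log partition's free space (for example with df -h) before and after the run shows how quickly the allocated space could be exhausted.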
Log Rotation
Most servers (but few custom applications) will rotate logs in order to prevent them from filling up the file system they reside on. The assumption during log rotation is that the information within them is only necessary for a limited duration.
This feature should be tested in order to ensure that:
Logs are kept for the time defined in the security policy, not more and not less.
Logs are compressed once rotated (this is a convenience, since it will mean that more logs will be stored for the same available disk space).
File system permissions for rotated log files should be the same as (or stricter than) those for the log files themselves. For example, web servers will need to write to the logs they use but they don’t actually need to write to rotated logs, which means that the permissions of the files can be changed upon rotation to prevent the web server process from modifying these.
Some servers might rotate logs when they reach a given size. If this happens, it must be ensured that an attacker cannot force logs to rotate in order to hide their tracks.
Log Access Control
Event log information should never be visible to end users. Even web administrators should not have access to such logs as it breaches separation of duty controls. Ensure that any access control schema that is used to protect access to raw logs, and any application providing capabilities to view or search the logs are not linked with access control schemas for other application user roles. Neither should any log data be visible to unauthenticated users.
Log Review
Reviewing logs can be used not only for extracting usage statistics of files in web servers (which is typically what most log-based applications focus on) but also for determining if attacks are occurring on the web server.
In order to analyze web server attacks, the error log files of the server need to be analyzed. Review should concentrate on:
40x (not found) error messages. A large number of these from the same source might be indicative of a CGI scanner tool being used against the web server.
50x (server error) messages. These can be an indication of an attacker abusing parts of the application which fail unexpectedly. For example, the first phases of a SQL injection attack will produce these error messages when the SQL query is not properly constructed and its execution fails on the backend database.
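As a sketch of such a review, the snippet below counts 404 responses per source IP in a combined-format access log, using a tiny inline sample for illustration (real analysis would read the server's actual log files):

```shell
# Create a three-line sample log in the combined format.
cat > sample_access.log <<'EOF'
10.0.0.5 - - [10/Oct/2023:13:55:36 +0000] "GET /admin.php HTTP/1.1" 404 209
10.0.0.5 - - [10/Oct/2023:13:55:37 +0000] "GET /phpmyadmin/ HTTP/1.1" 404 209
192.0.2.7 - - [10/Oct/2023:13:55:38 +0000] "GET /index.html HTTP/1.1" 200 1043
EOF

# Field 1 is the client IP, field 9 the status code; count 404s per IP.
awk '$9 == 404 { count[$1]++ } END { for (ip in count) print count[ip], ip }' \
  sample_access.log | sort -rn
```

For the sample above the pipeline prints "2 10.0.0.5", flagging the host responsible for the repeated 404s.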
Log statistics or analysis should not be generated or stored in the same server that produces the logs. Otherwise, an attacker might, through a web server vulnerability or improper configuration, gain access to them and retrieve similar information as would be disclosed by log files themselves.
Test for Subdomain Takeover
WSTG-CONF-10
Summary
A successful exploitation of this kind of vulnerability allows an adversary to claim and take control of the victim's subdomain. This attack relies on the following:
The victim's external DNS server subdomain record is configured to point to a non-existing or non-active resource/external service/endpoint. The proliferation of XaaS (Anything as a Service) products and public cloud services offer a lot of potential targets to consider.
The service provider hosting the resource/external service/endpoint does not handle subdomain ownership verification properly.
If the subdomain takeover is successful, a wide variety of attacks are possible (serving malicious content, phishing, stealing user session cookies, credentials, etc.). This vulnerability could be exploited for a wide variety of DNS resource records including: A, CNAME, MX, NS, TXT, etc. In terms of attack severity, an NS subdomain takeover (although less likely) has the highest impact, because a successful attack could result in full control over the whole DNS zone and the victim's domain.
GitHub
The victim (victim.com) uses GitHub for development and configured a DNS record (coderepo.victim.com) to access it.
The victim decides to migrate their code repository from GitHub to a commercial platform and does not remove coderepo.victim.com from their DNS server.
An adversary discovers that coderepo.victim.com is hosted on GitHub and claims it using GitHub Pages and their own GitHub account.
Expired Domain
The victim (victim.com) owns another domain (victimotherdomain.com) and uses a CNAME record (www) to reference the other domain (www.victim.com --> victimotherdomain.com).
At some point, victimotherdomain.com expires, becoming available for registration by anyone. Since the CNAME record is not deleted from the victim.com DNS zone, anyone who registers victimotherdomain.com has full control over www.victim.com until the DNS record is removed or updated.
Test Objectives
Enumerate all possible domains (previous and current).
Identify any forgotten or misconfigured domains.
How to Test
Black-Box Testing
The first step is to enumerate the victim DNS servers and resource records. There are multiple ways to accomplish this task; for example, DNS enumeration using a list of common subdomains dictionary, DNS brute force or using web search engines and other OSINT data sources.
Using the dig command the tester looks for the following DNS server response messages that warrant further investigation:
NXDOMAIN
SERVFAIL
REFUSED
no servers could be reached.
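For example, dig exposes the status field in the answer header (victim.com and the subdomain are placeholders for the domain in scope):

```shell
# Print only the status line (NOERROR, NXDOMAIN, SERVFAIL, REFUSED)
# from the dig response header.
dig sub.victim.com | grep -i 'status:'
```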
Testing DNS A, CNAME Record Subdomain Takeover
Perform a basic DNS enumeration on the victim's domain (victim.com) using dnsrecon:
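A sketch of such an enumeration, assuming dnsrecon is installed (namelist.txt is a hypothetical wordlist path):

```shell
# Standard record enumeration (SOA, NS, A, MX, SRV, ...).
dnsrecon -d victim.com -t std

# Dictionary-based subdomain brute force.
dnsrecon -d victim.com -t brt -D namelist.txt
```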
Identify which DNS resource records are dead and point to inactive/not-used services. Using the dig command for the CNAME record:
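For example (coderepo.victim.com is a placeholder subdomain):

```shell
# Where does the CNAME point, and does that target still resolve?
dig CNAME coderepo.victim.com +short
dig coderepo.victim.com | grep -i 'status:'
```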
The following DNS responses warrant further investigation: NXDOMAIN.
To test the A record, the tester performs a whois database lookup and identifies GitHub as the service provider:
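A sketch of that lookup (the subdomain is a placeholder):

```shell
# Resolve the A record, then ask whois who owns the address space;
# a GitHub-owned netblock points at GitHub Pages as the backing service.
IP=$(dig +short A subdomain.victim.com | head -1)
whois "$IP" | grep -iE 'orgname|netname'
```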
The tester visits subdomain.victim.com or issues an HTTP GET request, which returns a "404 - File not found" response. This is a clear indication of the vulnerability.
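For example (the URL is a placeholder):

```shell
# An HTTP 404 from GitHub Pages for an unclaimed custom domain is the
# takeover indicator described above.
curl -s -o /dev/null -w "%{http_code}\n" https://subdomain.victim.com/
```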
Figure 4.2.10-1: GitHub 404 File Not Found response
The tester claims the domain using GitHub Pages:
Figure 4.2.10-2: GitHub claim domain
Testing NS Record Subdomain Takeover
Identify all nameservers for the domain in scope:
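For example (victim.com is a placeholder):

```shell
# List the delegated nameservers for the zone.
dig NS victim.com +short
```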
In this fictitious example, the tester checks if the domain expireddomain.com is active with a domain registrar search. If the domain is available for purchase, the subdomain is vulnerable.
The following DNS responses warrant further investigation: SERVFAIL or REFUSED.
Gray-Box Testing
The tester has the DNS zone file available, which means DNS enumeration is not necessary. The testing methodology is the same.
Remediation
To mitigate the risk of subdomain takeover, the vulnerable DNS resource record(s) should be removed from the DNS zone. Continuous monitoring and periodic checks are recommended as best practice.
Tools
References
Test Cloud Storage
WSTG-CONF-11
Summary
Cloud storage services allow web applications and services to store and access objects in the storage service. Improper access control configuration, however, may lead to the exposure of sensitive information, data tampering, or unauthorized access.
A known example is where an Amazon S3 bucket is misconfigured, although the other cloud storage services may also be exposed to similar risks. By default, all S3 buckets are private and can be accessed only by users who are explicitly granted access. Users can grant public access not only to the bucket itself but also to individual objects stored within that bucket. This may lead to an unauthorized user being able to upload new files, modify or read stored files.
Test Objectives
Assess that the access control configuration for the storage services is properly in place.
How to Test
First, identify the URL to access the data in the storage service, and then consider the following tests:
read unauthorized data
upload a new arbitrary file
You may use curl for the tests with the following commands and see if unauthorized actions can be performed successfully.
To test the ability to read an object:
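A sketch of that request (the URL is a placeholder):

```shell
# A 200 response with the object body indicates the object is
# publicly readable.
curl -X GET 'https://my-bucket.s3.us-west-2.amazonaws.com/secret.txt'
```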
To test the ability to upload a file:
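A sketch of the upload attempt (the URL and object name are placeholders):

```shell
# Any response other than an access-denied error warrants a closer look.
curl -X PUT -d 'test content' 'https://my-bucket.s3.us-west-2.amazonaws.com/poc.txt'
```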
When running the above command on a Windows machine, replace the single quotes (') with double quotes (").
Testing for Amazon S3 Bucket Misconfiguration
The Amazon S3 bucket URLs follow one of two formats, either virtual host style or path-style.
Virtual Hosted Style Access
In the following example, my-bucket is the bucket name, us-west-2 is the region, and puppy.png is the key name:
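The virtual-hosted-style URL then looks like this:

```
https://my-bucket.s3.us-west-2.amazonaws.com/puppy.png
```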
Path-Style Access
As above, in the following example, my-bucket is the bucket name, us-west-2 is the region, and puppy.png is the key name:
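The path-style URL then looks like this:

```
https://s3.us-west-2.amazonaws.com/my-bucket/puppy.png
```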
For some regions, the legacy global endpoint that does not specify a region-specific endpoint can be used. Its format is also either virtual hosted style or path-style.
Virtual Hosted Style Access
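For example:

```
https://my-bucket.s3.amazonaws.com/puppy.png
```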
Path-Style Access
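For example:

```
https://s3.amazonaws.com/my-bucket/puppy.png
```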
Identify Bucket URL
For black-box testing, S3 URLs can be found in HTTP messages. The following example shows a bucket URL sent in the img tag in an HTTP response.
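For instance (bucket and object names are hypothetical):

```html
<img src="https://my-bucket.s3.amazonaws.com/images/logo.png" alt="logo">
```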
For gray-box testing, you can obtain bucket URLs from Amazon's web interface, documents, source code, and any other available sources.
Testing with AWS-CLI
In addition to testing with curl, you can also test with the AWS command-line tool. In this case the s3:// URI scheme is used.
List
The following command lists all the objects of the bucket when it is configured public:
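A sketch using the AWS CLI (the bucket name is a placeholder; --no-sign-request makes the request anonymous, which is exactly the attacker's view):

```shell
aws s3 ls s3://my-bucket/ --no-sign-request
```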
Upload
The following is the command to upload a file:
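A sketch of the upload (bucket and file names are placeholders):

```shell
# Create a harmless marker file and try to push it anonymously.
echo 'poc' > test.txt
aws s3 cp test.txt s3://my-bucket/test.txt --no-sign-request
```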
This example shows the result when the upload has been successful.
This example shows the result when the upload has failed.
Remove
The following is the command to remove an object:
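A sketch of the removal (only delete objects you created during testing; names are placeholders):

```shell
aws s3 rm s3://my-bucket/test.txt --no-sign-request
```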
Tools
References
Enumerate Infrastructure and Application Admin Interfaces
WSTG-CONF-05
Summary
Administrator interfaces may be present in the application or on the application server to allow certain users to perform privileged activities on the site. Tests should be undertaken to reveal if and how this privileged functionality can be accessed by an unauthorized or standard user.
An application may require an administrator interface to enable a privileged user to access functionality that may make changes to how the site functions. Such changes may include:
user account provisioning
site design and layout
data manipulation
configuration changes
In many instances, such interfaces do not have sufficient controls to protect them from unauthorized access. Testing is aimed at discovering these administrator interfaces and accessing functionality intended for the privileged users.
Test Objectives
Identify hidden administrator interfaces and functionality.
How to Test
Black Box Testing
The following section describes vectors that may be used to test for the presence of administrative interfaces. These techniques may also be used to test for related issues including privilege escalation, and are described elsewhere in this guide (for example, Testing for bypassing authorization schema and Testing for Insecure Direct Object References) in greater detail.
Directory and file enumeration: An administrative interface may be present but not visibly available to the tester. The path of the administrative interface may be guessed by simple requests such as /admin or /administrator. In some scenarios, these paths can be revealed within seconds using advanced Google search techniques - Google dorks. There are many tools available to perform brute forcing of server contents, see the tools section below for more information. A tester may have to also identify the filename of the administration page. Forcibly browsing to the identified page may provide access to the interface.
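A minimal forced-browse sketch (the target and path list are placeholders; in practice use a maintained wordlist with a tool such as ZAP's Forced Browse):

```shell
TARGET="https://www.example.com"

for path in /admin/ /administrator/ /wp-admin/ /manager/html /phpmyadmin/; do
  code=$(curl -k -s -o /dev/null -w "%{http_code}" --max-time 5 "$TARGET$path" || echo "ERR")
  echo "$TARGET$path -> $code"   # 200/401/403 all deserve a closer look
done
```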
Comments and links in source code: Many sites use common code that is loaded for all site users. By examining all source sent to the client, links to administrator functionality may be discovered and should be investigated.
Reviewing server and application documentation: If the application server or application is deployed in its default configuration it may be possible to access the administration interface using information described in configuration or help documentation. Default password lists should be consulted if an administrative interface is found and credentials are required.
Publicly available information: Many applications, such as WordPress, have administrative interfaces that are available by default.
Alternative server port: Administration interfaces may be seen on a different port on the host than the main application. For example, Apache Tomcat's Administration interface can often be seen on port 8080.
Parameter tampering: A GET or POST parameter, or a cookie may be required to enable the administrator functionality. Clues to this include the presence of hidden fields such as:
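For example, a hidden form field like the following (the parameter name is illustrative):

```html
<input type="hidden" name="admin" value="no">
```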
or in a cookie:
Cookie: session_cookie; useradmin=0
Once an administrative interface has been discovered, a combination of the above techniques may be used to attempt to bypass authentication. If this fails, the tester may wish to attempt a brute force attack. In such an instance, the tester should be aware of the potential for administrative account lockout if such functionality is present.
Gray Box Testing
A more detailed examination of the server and application components should be undertaken to ensure hardening (i.e. administrator pages are not accessible to everyone through the use of IP filtering or other controls), and where applicable, verification that all components do not use default credentials or configurations. Source code should be reviewed to ensure that the authorization and authentication model ensures clear separation of duties between normal users and site administrators. User interface functions shared between normal and administrator users should be reviewed to ensure clear separation between the rendering of such components and the information leakage from such shared functionality.
Each web framework may have its own default admin pages or paths, as in the following examples:
PHP: /phpmyadmin/
WordPress: /wp-admin/, /wp-login.php
Joomla: /administrator/
Tomcat: /manager/html, /host-manager/html
Apache: /server-status, /server-info (exposed when mod_status or mod_info is enabled)
Nginx: /nginx_status (a common convention for the stub_status module, which has no fixed default path)
Tools
Several tools can assist in identifying hidden administrator interfaces and functionality, including:
ZAP - Forced Browse is a currently maintained use of OWASP's previous DirBuster project.
THC-HYDRA is a tool that allows brute-forcing of many interfaces, including form-based HTTP authentication.
A brute forcer is much more effective when it uses a good dictionary, such as the Netsparker dictionary.