Last tested: 01 Aug, 2018

yo vulnerabilities

CLI tool for running Yeoman generators

View on npm

yo (latest)

Published 25 Jul, 2018

Known vulnerabilities: 0
Vulnerable paths: 0
Dependencies: 572

No known vulnerabilities in yo

Security-wise, yo seems to be a safe package to use.
Over time, new vulnerabilities may be disclosed on yo and other packages. To easily find, fix and prevent such vulnerabilities, protect your repos with Snyk!

Vulnerable versions of yo

Fixed in 2.0.3

Prototype Pollution

low severity

Detailed paths

  • Introduced through: pm2@2.0.2 > cli-table2@0.2.0 > lodash@3.10.1
  • Introduced through: yo@2.0.2 > insight@0.8.4 > inquirer@0.10.1 > lodash@3.10.1

Overview

lodash is a JavaScript utility library delivering modularity, performance & extras.

Affected versions of this package are vulnerable to Prototype Pollution. The utility functions allow modification of the Object prototype. If an attacker can control part of the structure passed to these functions, they could add new properties or modify existing ones.

PoC by Olivier Arteau (HoLyVieR)

var _= require('lodash');
var malicious_payload = '{"__proto__":{"oops":"It works !"}}';

var a = {};
console.log("Before : " + a.oops);
_.merge({}, JSON.parse(malicious_payload));
console.log("After : " + a.oops);

Remediation

Upgrade lodash to version 4.17.5 or higher.

References

Fixed in 1.5.0

Regular Expression Denial of Service (ReDoS)

medium severity

Detailed paths

  • Introduced through: npm@1.4.8 > request@2.30.0 > tough-cookie@0.9.15
  • Introduced through: yo@1.4.8 > insight@0.6.0 > tough-cookie@1.2.0

Overview

tough-cookie is RFC6265 Cookies and Cookie Jar for node.js.

Affected versions of this package are vulnerable to Regular expression Denial of Service (ReDoS) attacks. An attacker may pass a specially crafted cookie, causing the server to hang.

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, more than thirty times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's  Number of steps
ACCCX               3              38
ACCCCX              4              71
ACCCCCX             5              136
ACCCCCCCCCCCCCCX    14             65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the regex engine to work very slowly (exponentially related to input size, as shown above). An attacker can exploit this to force the service to consume excessive CPU, resulting in a Denial of Service.

Remediation

Upgrade to version 2.3.3 or newer.

References

Regular Expression Denial of Service (ReDoS)

high severity

Detailed paths

  • Introduced through: npm@1.4.8 > request@2.30.0 > tough-cookie@0.9.15
  • Introduced through: yo@1.4.8 > insight@0.6.0 > tough-cookie@1.2.0

Overview

tough-cookie is RFC6265 Cookies and Cookie Jar for node.js.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks. An attacker can provide a cookie, which nearly matches the pattern being matched. This will cause the regular expression matching to take a long time, all the while occupying the event loop and preventing it from processing other requests and making the server unavailable (a Denial of Service attack).

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, more than thirty times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's  Number of steps
ACCCX               3              38
ACCCCX              4              71
ACCCCCX             5              136
ACCCCCCCCCCCCCCX    14             65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the regex engine to work very slowly (exponentially related to input size, as shown above). An attacker can exploit this to force the service to consume excessive CPU, resulting in a Denial of Service.

Remediation

Upgrade tough-cookie to version 2.3.0 or greater.

References

Fixed in 1.4.4

Command Injection

high severity

Detailed paths

  • Introduced through: yo@1.4.2 > shelljs@0.3.0

Overview

shelljs provides portable Unix shell commands for Node.js.

Affected versions of this package are vulnerable to Command Injection. Input from external sources can reach shell.exec() unsanitized, allowing an attacker to inject arbitrary commands.
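To illustrate the risk, here is a minimal sketch, not taken from the advisory, of the pattern that makes this exploitable: externally controlled input concatenated into a shell.exec() call. The function and variable names are hypothetical.

var shell = require('shelljs');

// DANGEROUS (sketch): userInput arrives from an external source, e.g. an HTTP
// query parameter. A value such as "notes.txt; rm -rf /" would be executed as
// part of the shell command, because shelljs hands the whole string to a shell.
function showFile(userInput) {
  return shell.exec('cat ' + userInput);
}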

Remediation

There is no fix version for shelljs.

References

Regular Expression Denial of Service (ReDoS)

high severity

Detailed paths

  • Introduced through: yo@1.4.2 > underscore.string@2.4.0

Overview

underscore.string provides string manipulation helpers for JavaScript.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS). It parses dates using regular expressions, which may cause a slowdown of about 2 seconds per 50,000 characters of input.

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, more than thirty times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's  Number of steps
ACCCX               3              38
ACCCCX              4              71
ACCCCCX             5              136
ACCCCCCCCCCCCCCX    14             65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the regex engine to work very slowly (exponentially related to input size, as shown above). An attacker can exploit this to force the service to consume excessive CPU, resulting in a Denial of Service.

Remediation

There is no fix version for underscore.string.

References

Fixed in 1.4.1

Regular Expression Denial of Service (ReDoS)

low severity

Detailed paths

  • Introduced through: socket.io@1.3.3 > debug@2.1.0
  • Introduced through: socket.io@1.3.3 > engine.io@1.5.1 > debug@1.0.3
  • Introduced through: socket.io@1.3.3 > socket.io-parser@2.2.3 > debug@0.7.4
  • Introduced through: socket.io@1.3.3 > socket.io-client@1.3.3 > socket.io-parser@2.2.3 > debug@0.7.4
  • Introduced through: socket.io@1.3.3 > socket.io-client@1.3.3 > debug@0.7.4
  • Introduced through: socket.io@1.3.3 > socket.io-adapter@0.3.1 > socket.io-parser@2.2.2 > debug@0.7.4
  • Introduced through: socket.io@1.3.3 > socket.io-adapter@0.3.1 > debug@1.0.2
  • Introduced through: socket.io@1.3.3 > socket.io-client@1.3.3 > engine.io-client@1.5.1 > debug@1.0.4
  • Introduced through: gulp-sass@1.3.3 > node-sass@2.1.1 > mocha@2.5.3 > debug@2.2.0
  • Introduced through: yo@1.3.3 > yeoman-generator@0.17.7 > debug@1.0.5

Overview

debug is a JavaScript debugging utility modelled after Node.js core's debugging technique.

debug uses printf-style formatting. Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) via the %o formatter (pretty-print an object on a single line). It used the regular expression /\s*\n\s*/g to strip whitespace and replace newlines with spaces, so that the data is joined into a single line. The impact is low: roughly 2 seconds of matching time for data 50k characters long.
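As a rough illustration, the slowdown can be reproduced by running the regular expression cited above directly against a long whitespace-only input. This is a minimal timing sketch, not debug's own code path, and the exact timing depends on the JavaScript engine.

// A long run of spaces with no newline forces heavy backtracking in /\s*\n\s*/g.
var payload = new Array(50001).join(' '); // 50,000 spaces
var start = Date.now();
payload.replace(/\s*\n\s*/g, ' ');
console.log('elapsed ms:', Date.now() - start);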

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, more than thirty times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's  Number of steps
ACCCX               3              38
ACCCCX              4              71
ACCCCCX             5              136
ACCCCCCCCCCCCCCX    14             65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the regex engine to work very slowly (exponentially related to input size, as shown above). An attacker can exploit this to force the service to consume excessive CPU, resulting in a Denial of Service.

Remediation

Upgrade debug to version 2.6.9, 3.1.0 or higher.

References

Regular Expression Denial of Service (ReDoS)

high severity

Detailed paths

  • Introduced through: bower@1.3.3 > fstream-ignore@0.0.10 > minimatch@0.3.0
  • Introduced through: bower@1.3.3 > glob@3.2.11 > minimatch@0.3.0
  • Introduced through: nodemon@1.3.3 > minimatch@0.3.0
  • Introduced through: gulp-sass@1.3.3 > node-sass@2.1.1 > gaze@0.5.2 > globule@0.1.0 > minimatch@0.2.14
  • Introduced through: gulp-sass@1.3.3 > node-sass@2.1.1 > gaze@0.5.2 > globule@0.1.0 > glob@3.1.21 > minimatch@0.2.14
  • Introduced through: gulp-sass@1.3.3 > node-sass@2.1.1 > mocha@2.5.3 > glob@3.2.11 > minimatch@0.3.0
  • Introduced through: gulp-sass@1.3.3 > node-sass@2.1.1 > sass-graph@1.3.0 > glob@4.5.3 > minimatch@2.0.10
  • Introduced through: gulp-sass@1.3.3 > node-sass@2.1.1 > pangyp@2.3.3 > glob@4.3.5 > minimatch@2.0.10
  • Introduced through: gulp-sass@1.3.3 > node-sass@2.1.1 > pangyp@2.3.3 > minimatch@2.0.10
  • Introduced through: yo@1.3.3 > yeoman-generator@0.17.7 > file-utils@0.2.2 > minimatch@2.0.10
  • Introduced through: yo@1.3.3 > yeoman-generator@0.17.7 > glob@4.5.3 > minimatch@2.0.10
  • Introduced through: yo@1.3.3 > yeoman-generator@0.17.7 > file-utils@0.2.2 > findup-sync@0.2.1 > glob@4.3.5 > minimatch@2.0.10
  • Introduced through: yo@1.3.3 > yeoman-generator@0.17.7 > file-utils@0.2.2 > glob@4.5.3 > minimatch@2.0.10
  • Introduced through: yo@1.3.3 > yeoman-generator@0.17.7 > findup-sync@0.1.3 > glob@3.2.11 > minimatch@0.3.0

Overview

minimatch is a minimalistic matching library used for converting glob expressions into JavaScript RegExp objects. Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks.

Regular Expression Denial of Service (ReDoS) is a type of Denial of Service attack. Many regular expression implementations have edge cases in which they run very slowly (exponentially with input size). By supplying specially crafted input, an attacker can drive the program into such a case, making the service consume excessive CPU and resulting in a Denial of Service.

An attacker can provide a long value to the minimatch function that nearly matches the pattern being matched. This causes the regular expression matching to take a long time, all the while occupying the event loop and preventing it from processing other requests, making the server unavailable (a Denial of Service attack).
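Until the upgrade is in place, a hedged mitigation sketch is to cap the length of untrusted input before it ever reaches minimatch. The limit below is a hypothetical value, not one prescribed by the library.

var minimatch = require('minimatch');

var MAX_GLOB_INPUT = 1024; // hypothetical cap on untrusted input length

function safeMatch(untrustedPath, pattern) {
  // Reject oversized input up front so an overly long crafted value never
  // reaches the regular expression matching in vulnerable minimatch versions.
  if (typeof untrustedPath !== 'string' || untrustedPath.length > MAX_GLOB_INPUT) {
    return false;
  }
  return minimatch(untrustedPath, pattern);
}

console.log(safeMatch('src/index.js', 'src/**')); // true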

You can read more about Regular Expression Denial of Service (ReDoS) on our blog.

Remediation

Upgrade minimatch to version 3.0.2 or greater.

References

Symlink File Overwrite

high severity

Detailed paths

  • Introduced through: bower@1.3.3 > tar@0.1.20
  • Introduced through: gulp-sass@1.3.3 > node-sass@2.1.1 > pangyp@2.3.3 > tar@1.0.3
  • Introduced through: yo@1.3.3 > yeoman-generator@0.17.7 > download@1.0.7 > decompress@1.0.7 > decompress-tar@1.0.3 > tar@1.0.3
  • Introduced through: yo@1.3.3 > yeoman-generator@0.17.7 > download@1.0.7 > decompress@1.0.7 > decompress-tarbz2@1.0.2 > tar@1.0.3
  • Introduced through: yo@1.3.3 > yeoman-generator@0.17.7 > download@1.0.7 > decompress@1.0.7 > decompress-targz@1.0.3 > tar@1.0.3

Overview

The tar module prior to version 2.0.0 does not properly normalize symbolic links pointing to targets outside the extraction root. As a result, packages may hold symbolic links to parent and sibling directories and overwrite those files when the package is extracted.
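The missing check can be illustrated with plain path handling. This is a hedged sketch of how an extractor might decide whether an entry (or a symlink target) stays inside the extraction root; it is not tar's actual implementation.

var path = require('path');

// Returns true only if entryPath resolves to a location under extractRoot.
// Symbolic links pointing at parent or sibling directories fail this check.
function isInsideRoot(extractRoot, entryPath) {
  var resolved = path.resolve(extractRoot, entryPath);
  var relative = path.relative(extractRoot, resolved);
  return relative !== '' &&
         relative !== '..' &&
         relative.indexOf('..' + path.sep) !== 0 &&
         !path.isAbsolute(relative);
}

console.log(isInsideRoot('/tmp/out', 'pkg/index.js'));     // true
console.log(isInsideRoot('/tmp/out', '../../etc/passwd')); // false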

Remediation

Upgrade to version 2.0.0 or greater. If a direct dependency update is not possible, use snyk wizard to patch this vulnerability.

References

Fixed in 1.3.0

Prototype Pollution

low severity

Detailed paths

  • Introduced through: bower@1.2.1 > bower-registry-client@0.1.6 > request@2.27.0 > hawk@1.0.0 > hoek@0.9.1
  • Introduced through: bower@1.2.1 > request@2.25.0 > hawk@1.0.0 > hoek@0.9.1
  • Introduced through: bower@1.2.1 > bower-registry-client@0.1.6 > request@2.27.0 > hawk@1.0.0 > boom@0.4.2 > hoek@0.9.1
  • Introduced through: bower@1.2.1 > request@2.25.0 > hawk@1.0.0 > boom@0.4.2 > hoek@0.9.1
  • Introduced through: bower@1.2.1 > bower-registry-client@0.1.6 > request@2.27.0 > hawk@1.0.0 > cryptiles@0.2.2 > boom@0.4.2 > hoek@0.9.1
  • Introduced through: bower@1.2.1 > request@2.25.0 > hawk@1.0.0 > cryptiles@0.2.2 > boom@0.4.2 > hoek@0.9.1
  • Introduced through: bower@1.2.1 > bower-registry-client@0.1.6 > request@2.27.0 > hawk@1.0.0 > sntp@0.2.4 > hoek@0.9.1
  • Introduced through: bower@1.2.1 > request@2.25.0 > hawk@1.0.0 > sntp@0.2.4 > hoek@0.9.1
  • Introduced through: yo@1.2.1 > insight@0.3.1 > request@2.27.0 > hawk@1.0.0 > hoek@0.9.1
  • Introduced through: yo@1.2.1 > insight@0.3.1 > request@2.27.0 > hawk@1.0.0 > boom@0.4.2 > hoek@0.9.1
  • Introduced through: yo@1.2.1 > insight@0.3.1 > request@2.27.0 > hawk@1.0.0 > cryptiles@0.2.2 > boom@0.4.2 > hoek@0.9.1
  • Introduced through: yo@1.2.1 > insight@0.3.1 > request@2.27.0 > hawk@1.0.0 > sntp@0.2.4 > hoek@0.9.1

Overview

hoek provides utility methods for the hapi ecosystem.

Affected versions of this package are vulnerable to Prototype Pollution. The utility functions allow modification of the Object prototype. If an attacker can control part of the structure passed to these functions, they could add new properties or modify existing ones.

PoC by Olivier Arteau (HoLyVieR)

var Hoek = require('hoek');
var malicious_payload = '{"__proto__":{"oops":"It works !"}}';

var a = {};
console.log("Before : " + a.oops);
Hoek.merge({}, JSON.parse(malicious_payload));
console.log("After : " + a.oops);

Remediation

Upgrade hoek to versions 4.2.1, 5.0.3 or higher.

References

Regular Expression Denial of Service (ReDoS)

low severity

Detailed paths

  • Introduced through: bower@1.2.1 > bower-registry-client@0.1.6 > request@2.27.0 > hawk@1.0.0
  • Introduced through: bower@1.2.1 > request@2.25.0 > hawk@1.0.0
  • Introduced through: yo@1.2.1 > insight@0.3.1 > request@2.27.0 > hawk@1.0.0

Overview

hawk is an HTTP authentication scheme using a message authentication code (MAC) algorithm to provide partial HTTP request cryptographic verification.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks.

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, more than thirty times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's  Number of steps
ACCCX               3              38
ACCCCX              4              71
ACCCCCX             5              136
ACCCCCCCCCCCCCCX    14             65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the regex engine to work very slowly (exponentially related to input size, as shown above). An attacker can exploit this to force the service to consume excessive CPU, resulting in a Denial of Service.

You can read more about Regular Expression Denial of Service (ReDoS) on our blog.

References

Uninitialized Memory Exposure

medium severity

Detailed paths

  • Introduced through: bower@1.2.1 > bower-registry-client@0.1.6 > request@2.27.0 > tunnel-agent@0.3.0
  • Introduced through: bower@1.2.1 > request@2.25.0 > tunnel-agent@0.3.0
  • Introduced through: yo@1.2.1 > insight@0.3.1 > request@2.27.0 > tunnel-agent@0.3.0

Overview

tunnel-agent is an HTTP proxy tunneling agent. Affected versions of the package are vulnerable to Uninitialized Memory Exposure.

A possible memory disclosure vulnerability exists when a value of type number is used to set the proxy.auth option of a request, which can result in uninitialized memory being exposed in the request body.

This is a result of unchecked use of the Buffer constructor, whose insecure default behaviour increases the odds of memory leakage.

Details

Constructing a Buffer class with integer N creates a Buffer of length N with raw (not "zero-ed") memory.

In the following example, the first call would allocate 100 bytes of memory, while the second example will allocate the memory needed for the string "100":

// uninitialized Buffer of length 100
x = new Buffer(100);
// initialized Buffer with value of '100'
x = new Buffer('100');

tunnel-agent's request construction uses the default Buffer constructor as-is, making it easy to append uninitialized memory to an existing list. If the value of the buffer list is exposed to users, it may expose raw server side memory, potentially holding secrets, private data and code. This is a similar vulnerability to the infamous Heartbleed flaw in OpenSSL.

Proof of concept by ChALkeR

require('request')({
  method: 'GET',
  uri: 'http://www.example.com',
  tunnel: true,
  proxy:{
      protocol: 'http:',
      host:"127.0.0.1",
      port:8080,
      auth:80
  }
});

You can read more about the insecure Buffer behavior on our blog.

Similar vulnerabilities were discovered in request, mongoose, ws and sequelize.

Remediation

Upgrade tunnel-agent to version 0.6.0 or higher. Note: this is vulnerable only for Node <= 4.

References

Remote Memory Exposure

medium severity

Detailed paths

  • Introduced through: bower@1.2.1 > bower-registry-client@0.1.6 > request@2.27.0
  • Introduced through: bower@1.2.1 > request@2.25.0
  • Introduced through: yo@1.2.1 > insight@0.3.1 > request@2.27.0

Overview

request is a simplified http request client. A potential remote memory exposure vulnerability exists in request. If a request uses a multipart attachment and the body type option is number with value X, then X bytes of uninitialized memory will be sent in the body of the request.

Note that while the impact of this vulnerability is high (memory exposure), exploiting it is likely difficult, as the attacker needs to somehow control the body type of the request. One potential exploit scenario is when a request is composed based on JSON input, including the body type, allowing a malicious JSON to trigger the memory leak.

Details

Constructing a Buffer class with integer N creates a Buffer of length N with non zero-ed out memory. Example:

var x = new Buffer(100); // uninitialized Buffer of length 100
// vs
var x = new Buffer('100'); // initialized Buffer with value of '100'

Initializing a multipart body in such manner will cause uninitialized memory to be sent in the body of the request.

Proof of concept

var http = require('http')
var request = require('request')

http.createServer(function (req, res) {
  var data = ''
  req.setEncoding('utf8')
  req.on('data', function (chunk) {
    console.log('data')
    data += chunk
  })
  req.on('end', function () {
    // this will print uninitialized memory from the client
    console.log('Client sent:\n', data)
  })
  res.end()
}).listen(8000)

request({
  method: 'POST',
  uri: 'http://localhost:8000',
  multipart: [{ body: 1000 }]
},
function (err, res, body) {
  if (err) return console.error('upload failed:', err)
  console.log('sent')
})

Remediation

Upgrade request to version 2.68.0 or higher.

If a direct dependency update is not possible, use snyk wizard to patch this vulnerability.

References

Insecure Randomness

medium severity

Detailed paths

  • Introduced through: bower@1.2.1 > bower-registry-client@0.1.6 > request@2.27.0 > hawk@1.0.0 > cryptiles@0.2.2
  • Introduced through: bower@1.2.1 > request@2.25.0 > hawk@1.0.0 > cryptiles@0.2.2
  • Introduced through: yo@1.2.1 > insight@0.3.1 > request@2.27.0 > hawk@1.0.0 > cryptiles@0.2.2

Overview

cryptiles is a package for general crypto utilities.

Affected versions of this package are vulnerable to Insecure Randomness. The randomDigits() method is supposed to return a cryptographically strong pseudo-random string of digits, but its output was biased toward certain digits, so an attacker may be able to guess the generated digits.
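The underlying issue is the classic bias that appears when random bytes are reduced to a smaller range. The sketch below is illustrative only and is not cryptiles' actual implementation; it shows why a naive byte-to-digit mapping is not uniform.

var crypto = require('crypto');

// 256 is not a multiple of 10, so byte % 10 favours digits 0-5 (26 of the 256
// byte values each) over digits 6-9 (25 values each), about 4% more often.
var counts = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
var bytes = crypto.randomBytes(1000000);
for (var i = 0; i < bytes.length; i++) {
  counts[bytes[i] % 10]++;
}
console.log(counts);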

Remediation

Upgrade cryptiles to version 4.1.2 or higher.

References

Denial of Service (Event Loop Blocking)

medium severity

Detailed paths

  • Introduced through: bower@1.2.1 > bower-registry-client@0.1.6 > request@2.27.0 > qs@0.6.6
  • Introduced through: bower@1.2.1 > request@2.25.0 > qs@0.6.6
  • Introduced through: body-parser@1.2.1 > qs@0.6.6
  • Introduced through: yo@1.2.1 > insight@0.3.1 > request@2.27.0 > qs@0.6.6

Overview

qs is a querystring parser that supports nesting and arrays, with a depth limit.

Affected versions of this package are vulnerable to Denial of Service (DoS). When parsing a string representing a deeply nested object, qs will block the event loop for long periods of time. Such a delay may hold up the server's resources, keeping it from processing other requests in the meantime, thus enabling a Denial-of-Service attack.
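A rough sketch of the kind of input involved is below; the depth value is illustrative. Vulnerable (pre-1.0.0) qs versions had no limit on nesting depth, so parsing such a string could occupy the event loop, while current versions cap the depth (5 levels by default).

var qs = require('qs');

// Build a query string that describes a deeply nested object: a[b][b][b]...=1
var depth = 1000; // illustrative value
var key = 'a' + new Array(depth + 1).join('[b]');
var parsed = qs.parse(key + '=1');
console.log(Object.keys(parsed)); // [ 'a' ]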

Remediation

Update qs to version 1.0.0 or higher. In these versions, qs enforces a max object depth (along with other limits), limiting the event loop length and thus preventing such an attack.

References

Prototype Override Protection Bypass

high severity

Detailed paths

  • Introduced through: bower@1.2.1 > bower-registry-client@0.1.6 > request@2.27.0 > qs@0.6.6
  • Introduced through: bower@1.2.1 > request@2.25.0 > qs@0.6.6
  • Introduced through: body-parser@1.2.1 > qs@0.6.6
  • Introduced through: yo@1.2.1 > insight@0.3.1 > request@2.27.0 > qs@0.6.6

Overview

qs is a querystring parser that supports nesting and arrays, with a depth limit.

By default qs protects against attacks that attempt to overwrite an object's existing prototype properties, such as toString(), hasOwnProperty(), etc.

From qs documentation:

By default parameters that would overwrite properties on the object prototype are ignored, if you wish to keep the data from those fields either use plainObjects as mentioned above, or set allowPrototypes to true which will allow user input to overwrite those properties. WARNING It is generally a bad idea to enable this option as it can cause problems when attempting to use the properties that have been overwritten. Always be careful with this option.

Overwriting these properties can impact application logic, potentially allowing attackers to work around security controls, modify data, make the application unstable and more.

In versions of the package affected by this vulnerability, it is possible to circumvent this protection and overwrite prototype properties and functions by prefixing the parameter name with [ or ]. For example, qs.parse("]=toString") will return { toString: true }; as a result, calling toString() on the object will throw an exception.

Example:

qs.parse('toString=foo', { allowPrototypes: false })
// {}

qs.parse("]=toString", { allowPrototypes: false })
// { toString: true } <== prototype property overwritten

For more information, you can check out our blog.

Disclosure Timeline

  • February 13th, 2017 - Reported the issue to package owner.
  • February 13th, 2017 - Issue acknowledged by package owner.
  • February 16th, 2017 - Partial fix released in versions 6.0.3, 6.1.1, 6.2.2, 6.3.1.
  • March 6th, 2017 - Final fix released in versions 6.4.0, 6.3.2, 6.2.3, 6.1.2 and 6.0.4.

Remediation

Upgrade qs to version 6.4.0 or higher. Note: The fix was backported to the following versions 6.3.2, 6.2.3, 6.1.2, 6.0.4.

References

Regular Expression Denial of Service (ReDoS)

low severity

Detailed paths

  • Introduced through: bower@1.2.1 > bower-registry-client@0.1.6 > request@2.27.0 > mime@1.2.11
  • Introduced through: bower@1.2.1 > request@2.25.0 > mime@1.2.11
  • Introduced through: bower@1.2.1 > bower-registry-client@0.1.6 > request@2.27.0 > form-data@0.1.4 > mime@1.2.11
  • Introduced through: bower@1.2.1 > request@2.25.0 > form-data@0.1.4 > mime@1.2.11
  • Introduced through: body-parser@1.2.1 > type-is@1.2.0 > mime@1.2.11
  • Introduced through: yo@1.2.1 > insight@0.3.1 > request@2.27.0 > mime@1.2.11
  • Introduced through: yo@1.2.1 > insight@0.3.1 > request@2.27.0 > form-data@0.1.4 > mime@1.2.11

Overview

mime is a comprehensive, compact MIME type module.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS). Its lookup uses the regular expression /.*[\.\/\\]/, which can cause a slowdown of about 2 seconds for a 50,000-character input.
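As a rough illustration, the regular expression cited above can be timed directly against a long input containing none of the separator characters it looks for. This is a minimal sketch; the exact slowdown depends on the JavaScript engine.

// 50,000 characters with no '.', '/' or '\' force heavy backtracking.
var payload = new Array(50001).join('a');
var start = Date.now();
/.*[\.\/\\]/.test(payload);
console.log('elapsed ms:', Date.now() - start);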

Regular Expression Denial of Service (ReDoS) is a type of Denial of Service attack. Many regular expression implementations have edge cases in which they run very slowly (exponentially with input size). By supplying specially crafted input, an attacker can drive the program into such a case, making the service consume excessive CPU and resulting in a Denial of Service.

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, more than thirty times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's  Number of steps
ACCCX               3              38
ACCCCX              4              71
ACCCCCX             5              136
ACCCCCCCCCCCCCCX    14             65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the regex engine to work very slowly (exponentially related to input size, as shown above). An attacker can exploit this to force the service to consume excessive CPU, resulting in a Denial of Service.

Remediation

Upgrade mime to versions 1.4.1, 2.0.3 or higher.

References

Timing Attack

medium severity

Detailed paths

  • Introduced through: bower@1.2.1 > bower-registry-client@0.1.6 > request@2.27.0 > http-signature@0.10.1
  • Introduced through: bower@1.2.1 > request@2.25.0 > http-signature@0.10.1
  • Introduced through: yo@1.2.1 > insight@0.3.1 > request@2.27.0 > http-signature@0.10.1

Overview

http-signature is a reference implementation of Joyent's HTTP Signature scheme.

Affected versions of the package are vulnerable to Timing Attacks due to time-variable comparison of signatures.

The library implemented a character to character comparison, similar to the built-in string comparison mechanism, ===, and not a time constant string comparison. As a result, the comparison will fail faster when the first characters in the signature are incorrect. An attacker can use this difference to perform a timing attack, essentially allowing them to guess the signature one character at a time.
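For contrast, here is a hedged sketch of a constant-time comparison in Node.js, the kind of check that avoids the timing leak described above. It uses crypto.timingSafeEqual (available since Node.js 6.6) and is not http-signature's actual fix.

var crypto = require('crypto');

// Compares two signatures in time that does not depend on where the first
// differing character occurs, unlike a plain === string comparison.
function signaturesMatch(expected, received) {
  var a = Buffer.from(String(expected));
  var b = Buffer.from(String(received));
  if (a.length !== b.length) {
    return false; // different lengths can be rejected immediately
  }
  return crypto.timingSafeEqual(a, b);
}

console.log(signaturesMatch('abc123', 'abc123')); // true
console.log(signaturesMatch('abc123', 'abd123')); // false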

You can read more about timing attacks in Node.js on the Snyk blog.

Remediation

Upgrade http-signature to version 1.0.0 or higher.

References

Denial of Service (Memory Exhaustion)

high severity

Detailed paths

  • Introduced through: bower@1.2.1 > bower-registry-client@0.1.6 > request@2.27.0 > qs@0.6.6
  • Introduced through: bower@1.2.1 > request@2.25.0 > qs@0.6.6
  • Introduced through: body-parser@1.2.1 > qs@0.6.6
  • Introduced through: yo@1.2.1 > insight@0.3.1 > request@2.27.0 > qs@0.6.6

Overview

qs is a querystring parser that supports nesting and arrays, with a depth limit.

Affected versions of this package are vulnerable to Denial of Service (DoS). During parsing, the qs module may create a sparse array (an array where not all elements are filled in) and grow it to the size implied by the indices used in the query string. An attacker can specify a very high index value, making the server allocate a correspondingly large array. Sufficiently large values can cause the server to run out of memory and crash, thus enabling a Denial-of-Service attack.
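A rough sketch of the kind of payload involved is below; the index value is illustrative. Vulnerable (pre-1.0.0) qs versions would try to build an array large enough to hold that index, whereas current versions treat indices above a small limit (20 by default) as plain object keys.

var qs = require('qs');

// A single parameter with a huge array index.
var attackerControlled = 'items[100000000]=1'; // illustrative payload
var parsed = qs.parse(attackerControlled);
console.log(parsed);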

Remediation

Upgrade qs to version 1.0.0 or greater. In these versions, qs introduced a low limit on the index value, preventing such an attack.

References

Fixed in 1.2.1

Regular Expression Denial of Service (ReDoS)

medium severity

Detailed paths

  • Introduced through: bower@1.2.0 > semver@2.1.0
  • Introduced through: bower@1.2.0 > update-notifier@0.1.10 > semver@2.3.2
  • Introduced through: nodemon@1.2.0 > update-notifier@0.1.10 > semver@2.3.2
  • Introduced through: yo@1.2.0 > update-notifier@0.1.10 > semver@2.3.2

Overview

semver is a semantic version parser used by npm.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS). The semver module uses regular expressions when parsing a version string. For a carefully crafted input, the time it takes to process these regular expressions is not linear in the length of the input. Since the semver module did not enforce a limit on the version string length, an attacker could provide a long string that would take up a large amount of resources, potentially taking a server down. This issue therefore enables a potential Denial of Service attack. This is a slightly different variant of a typical Regular Expression Denial of Service (ReDoS) vulnerability.
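If upgrading is not immediately possible, a hedged mitigation sketch is to cap the length of untrusted version strings before handing them to semver, mirroring the 256-character limit that fixed versions enforce themselves. The helper below is hypothetical.

var semver = require('semver');

var MAX_VERSION_LENGTH = 256; // same limit semver itself enforces from 4.3.2 onwards

function safeValid(untrustedVersion) {
  if (typeof untrustedVersion !== 'string' || untrustedVersion.length > MAX_VERSION_LENGTH) {
    return null; // treat oversized input as invalid instead of parsing it
  }
  return semver.valid(untrustedVersion);
}

console.log(safeValid('1.2.3'));                     // '1.2.3'
console.log(safeValid(new Array(10000).join('1.'))); // null (too long)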

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, more than thirty times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String              Number of C's  Number of steps
ACCCX               3              38
ACCCCX              4              71
ACCCCCX             5              136
ACCCCCCCCCCCCCCX    14             65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the regex engine to work very slowly (exponentially related to input size, as shown above). An attacker can exploit this to force the service to consume excessive CPU, resulting in a Denial of Service.

Remediation

Update to version 4.3.2 or greater. From the issue description: "Package version can no longer be more than 256 characters long. This prevents a situation in which parsing the version number can use exponentially more time and memory to parse, leading to a potential denial of service."

References

Fixed in 1.2.0

Arbitrary Command Injection

high severity

Detailed paths

  • Introduced through: bower@1.1.2 > open@0.0.5
  • Introduced through: yo@1.1.2 > open@0.0.4

Overview

open opens a file or URL in the user's preferred application.

Affected versions of this package are vulnerable to Arbitrary Command Injection. URLs are not properly escaped before being concatenated into the command that is executed with exec().
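For illustration, a minimal sketch of the dangerous pattern is below. The payload is hypothetical, and the exact quoting needed to break out of the command varies by platform; the point is that shell metacharacters in an attacker-controlled URL reach exec() unescaped in vulnerable versions.

var open = require('open');

// DANGEROUS (sketch): in vulnerable versions the URL is concatenated into a
// shell command, so everything after the metacharacter runs as a command.
var attackerControlled = 'http://example.com/ && touch injected.txt';
open(attackerControlled);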

Remediation

There is no fix version for open.

References