siunam's Website


cgi fridays

Table of Contents

  1. Overview
  2. Background
  3. Enumeration
  4. Exploitation
  5. Conclusion



1999 called, and they want their challenge back.

Author: hashkitten


Home page:

Here, we can view some of the web server's information, such as its kernel version, CPU info, etc.:

In this challenge, we can download a file:

└> file    Zip archive data, at least v2.0 to extract, compression method=store
└> unzip 
   creating: src/cgi-bin/
  inflating: src/cgi-bin/    
  inflating: src/Dockerfile          
 extracting: src/flag.txt            
   creating: src/htdocs/
  inflating: src/htdocs/.htaccess    
  inflating: src/htdocs/index.shtml  
   creating: src/htdocs/pages/
  inflating: src/htdocs/pages/about.txt  
 extracting: src/htdocs/pages/denied.txt  
 extracting: src/htdocs/pages/home.txt  

In src/htdocs/index.shtml, we can see that it uses a Server-Side Includes (SSI) directive to include the /cgi-bin/ Perl script:

  <div class="content">
    <div class="status ok">
      <pre><!--#include virtual="/cgi-bin/$QUERY_STRING" --></pre>
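Apache's mod_include substitutes SSI variables such as `$QUERY_STRING` into the `virtual` attribute, so the query string of the request to index.shtml decides which /cgi-bin/ resource gets included. A minimal sketch of that expansion (illustrative only, not Apache internals; the script name below is hypothetical):

```python
def expand_include(query_string: str) -> str:
    # Mirrors: <!--#include virtual="/cgi-bin/$QUERY_STRING" -->
    return "/cgi-bin/" + query_string

# e.g. a request like GET /index.shtml?some_script.pl%3Fpage=about
# ("some_script.pl" is a made-up name for illustration)
print(expand_include("some_script.pl?page=about"))
# → /cgi-bin/some_script.pl?page=about
```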

Let’s dig through that Perl script!

#!/usr/bin/env perl

use strict;
use warnings;
use CGI::Minimal;

use constant HTDOCS => '/usr/local/apache2/htdocs';

sub read_file {
    my ($file_path) = @_;
    my $fh;

    local $/;
    open($fh, "<", $file_path) or return "read_file error: $!";
    my $content = <$fh>;

    return $content;
}

sub route_request {
    my ($page, $remote_addr) = @_;

    if ($page =~ /^about$/) {
        return HTDOCS . '/pages/about.txt';
    }

    if ($page =~ /^version$/) {
        return '/proc/version';
    }

    if ($page =~ /^cpuinfo$/) {
        return HTDOCS . '/pages/denied.txt' unless $remote_addr eq '';
        return '/proc/cpuinfo';
    }

    if ($page =~ /^stat|io|maps$/) {
        return HTDOCS . '/pages/denied.txt' unless $remote_addr eq '';
        return "/proc/self/$page";
    }

    return HTDOCS . '/pages/home.txt';
}

sub escape_html {
    my ($text) = @_;

    $text =~ s/</&lt;/g;
    $text =~ s/>/&gt;/g;

    return $text;
}
my $q = CGI::Minimal->new;

print "Content-Type: text/html\r\n\r\n";

my $file_path = route_request($q->param('page'), $ENV{'REMOTE_ADDR'});
my $file_content = read_file($file_path);

print escape_html($file_content);

Here, when the GET parameter page is given, it reads the file content based on the routed file path:

Hmm… It seems that if we can reach the stat, io, or maps branch, we can leverage a path traversal vulnerability, since $page is interpolated into "/proc/self/$page" without any sanitization.
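The branch is reachable with a traversal payload because the regex `/^stat|io|maps$/` is an unanchored alternation: it matches "stat" at the start, OR "io" anywhere, OR "maps" at the end. A quick Python sketch (`re.search` behaves like Perl's `=~ //` binding here):

```python
import re

# The router's check /^stat|io|maps$/ alternates over three sub-patterns;
# only the first is anchored to the start and only the last to the end.
pattern = re.compile(r"^stat|io|maps$")

# A traversal payload slips through because it merely *contains* "io":
payload = "io/../../../../etc/passwd"
print(bool(pattern.search(payload)))   # → True: the check passes
print("/proc/self/" + payload)         # the path the script would then open
```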

But how do we bypass the client IP check?

After trying a bunch of headers like X-Forwarded-Host, I couldn't bypass it.


According to a PDF from Black Hat Asia 2016 that was found by one of my teammates, it seems that Perl's param() has a fatal flaw.

If you read the PDF a bit, it basically says that param() may return either a scalar or a list, depending on the context it's called in.

That being said, if we provide two page GET parameters, such as ?page=version&page=about:


It’ll return a list like this:

("version", "about")

Since param('page') is called in list context inside route_request()'s argument list, both values get flattened into @_: the first becomes $page and the second becomes $remote_addr, shifting the real $ENV{'REMOTE_ADDR'} out of the first two argument slots. That being said, we should be able to bypass the client IP check!
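To make the argument shifting concrete, here's a rough Python simulation of Perl's calling convention (Perl flattens all sub arguments into one list @_; the function below mimics that, and the IP address is a documentation-range example, not the challenge's real check):

```python
# Simulates: sub route_request { my ($page, $remote_addr) = @_; ... }
def route_request(*args):
    page, remote_addr = args[0], args[1]   # take the first two of @_
    return page, remote_addr

# One ?page= value: REMOTE_ADDR lands in $remote_addr, as intended.
print(route_request("version", "203.0.113.7"))
# → ('version', '203.0.113.7')

# Two ?page= values: param('page') returns a list, so the second value
# occupies the $remote_addr slot and the real REMOTE_ADDR is discarded.
print(route_request("version", "about", "203.0.113.7"))
# → ('version', 'about')
```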

GET /cgi-bin/ HTTP/2


Then, we can leverage the path traversal to read the flag file!

Oh crap… Our page value still has to match the regular expression pattern, and every directory component in the traversal path must actually exist…
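The directory requirement comes from how the kernel resolves paths: each component is looked up in turn, so traversing through "io" only works if "io" is a real directory (in /proc/self it's a plain file). A quick local experiment with hypothetical file names demonstrates this:

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "flag.txt"), "w") as f:
    f.write("flag{...}")                    # placeholder flag content
open(os.path.join(tmp, "io"), "w").close()  # "io" as a plain file
os.mkdir(os.path.join(tmp, "iodir"))        # "iodir" as a directory

try:
    open(os.path.join(tmp, "io", "..", "flag.txt"))
except NotADirectoryError:
    print("traversal through the plain file 'io' fails")

# Traversal through a real directory resolves fine:
with open(os.path.join(tmp, "iodir", "..", "flag.txt")) as f:
    print(f.read())  # → flag{...}
```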

Hmm… Let’s find a directory whose name starts with stat, io, or maps.

To do so, I’ll use the find command in Linux:

└> find / -type d -regex '.*/\(stat\|io\|maps\).*' 2>/dev/null

Let’s try that directory!

GET /cgi-bin/ HTTP/2

Nice! It worked!


What we’ve learned:

  1. Exploiting Perl’s param() flaw