SYNOPSIS

     use Benchmark::Command;
    
     Benchmark::Command::run(100, {
         perl        => [{env=>{PERL_UNICODE=>''}}, qw/perl -e1/],
         "bash+true" => [qw/bash --norc -c true/],
         ruby        => [qw/ruby -e1/],
         python      => [qw/python -c1/],
         nodejs      => [qw/nodejs -e 1/],
     });

    Sample output:

                          Rate      nodejs      python        ruby bash+true   perl
     nodejs    40.761+-0.063/s          --      -55.3%      -57.1%    -84.8% -91.7%
     python        91.1+-1.3/s 123.6+-3.3%          --       -4.0%    -66.0% -81.5%
     ruby         94.92+-0.7/s 132.9+-1.8%   4.2+-1.7%          --    -64.6% -80.8%
     bash+true   267.94+-0.7/s   557.3+-2%   194+-4.4% 182.3+-2.2%        -- -45.7%
     perl         493.8+-5.1/s   1112+-13% 441.9+-9.7% 420.3+-6.6%  84.3+-2%     --
    
     Average times:
       perl     :     2.0251ms
       bash+true:     3.7322ms
       ruby     :    10.5352ms
       python   :    10.9769ms
       nodejs   :    24.5333ms

DESCRIPTION

    This module provides run(), a convenience routine to benchmark
    commands. It is similar to Benchmark::Apps, except that: 1) commands
    are executed without a shell (using the system {$_[0]} @_ syntax); 2)
    the existence of each program is checked first; 3) Benchmark::Dumb is
    used as the backend.

    This module is suitable for benchmarking commands that complete in a
    short time, like those in the example above.
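
    To illustrate point 1) above, here is a minimal sketch (not code from
    this module) showing the difference between letting system() use the
    shell and the block form this module uses:

     # the single-string form may be run via /bin/sh when the string
     # contains shell metacharacters:
     system "perl -e1 >/dev/null";

     # the block form always bypasses the shell: the block names the
     # program to execute, the list becomes its argv:
     system {"perl"} "perl", "-e1";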

FUNCTIONS

 run($count, \%cmds[, \%opts])

    Do some checks, then convert %cmds (a hash of names and command
    arrayrefs, e.g. {perl=>["perl", "-e1"], nodejs=>["nodejs", "-e", 1]})
    into %subs (a hash of names and coderefs, e.g. {perl=>sub {system
    {"perl"} "perl", "-e1"}, nodejs=>sub {system {"nodejs"} "nodejs",
    "-e", 1}}).

    If a value in %cmds is already a coderef, it will be used as-is.
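
    For example, this mixes an arrayref with a coderef (a sketch; the
    inline entry name is made up):

     Benchmark::Command::run(100, {
         # arrayref: checked and converted to a coderef by run()
         perl   => [qw/perl -e1/],
         # coderef: used as-is (there is no program to check here)
         inline => sub { my $x = 0; $x += $_ for 1..1000 },
     });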

    If a value in %cmds is an arrayref, its first element (before the
    program name) can optionally be a hashref of per-command options. See
    the known per-command options below.

    The checks done are: each command must be an arrayref (so it can be
    executed without invoking a shell), and the program (the first element
    of each arrayref) must exist.

    Then run Benchmark::Dumb's cmpthese($count, \%subs). Usually $count
    can be set to 0, but for commands like those in the example above,
    which finish in a short time (on the order of milliseconds), I set it
    to around 100.
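
    In other words, after the conversion the call above behaves roughly
    like this direct use of Benchmark::Dumb (a simplified sketch; the
    real run() also performs the checks and applies the options):

     use Benchmark::Dumb;

     Benchmark::Dumb::cmpthese(100, {
         perl => sub { system {"perl"} "perl", "-e1" },
         ruby => sub { system {"ruby"} "ruby", "-e1" },
     });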

    Then also show the average run times for each command.

    Known options (see the example after this list):

      * quiet => bool (default: from env QUIET or 0)

      If set to true, will hide the programs' output.

      * ignore_exit_code => bool (default: from env
      BENCHMARK_COMMAND_IGNORE_EXIT_CODE or 0)

      If set to true, will not die if exit code is non-zero.

      * skip_not_found => bool (default: 0)

      If set to true, will skip benchmarking commands whose program is not
      found. The default behavior is to die.
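
    For example, to benchmark quietly and skip programs that are not
    installed instead of dying:

     Benchmark::Command::run(100, {
         perl   => [qw/perl -e1/],
         nodejs => [qw/nodejs -e 1/],
     }, {
         quiet          => 1,
         skip_not_found => 1,
     });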

    Known per-command options (see the example after this list):

      * env => hash

      Locally set environment variables for the command.

      * ignore_exit_code => bool

      This overrides the global ignore_exit_code option.

      * skip_not_found => bool

      This overrides the global skip_not_found option.
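
    For example, the option hashref goes in front of the program name, as
    in the SYNOPSIS (the false entry here is just an illustration):

     Benchmark::Command::run(100, {
         # run perl with PERL_UNICODE locally set to an empty string
         perl  => [{env => {PERL_UNICODE => ''}}, qw/perl -e1/],
         # this command always exits non-zero; don't die because of that
         false => [{ignore_exit_code => 1}, qw/false/],
     });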

ENVIRONMENT

 BENCHMARK_COMMAND_COUNT => num

    Set default for run()'s $count argument.

 BENCHMARK_COMMAND_IGNORE_EXIT_CODE => bool

    Set default for run()'s ignore_exit_code option.

 BENCHMARK_COMMAND_QUIET => bool

    Set default for run()'s quiet option (takes precedence over QUIET).

 BENCHMARK_COMMAND_SKIP_NOT_FOUND => bool

    Set default for run()'s skip_not_found option.

 QUIET => bool

    Set default for run()'s quiet option (if BENCHMARK_COMMAND_QUIET is not
    defined).
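
    For example, these can also be set from Perl before calling run() (a
    sketch):

     $ENV{BENCHMARK_COMMAND_QUIET}          = 1;
     $ENV{BENCHMARK_COMMAND_SKIP_NOT_FOUND} = 1;
     Benchmark::Command::run(100, { perl => [qw/perl -e1/] });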

SEE ALSO

    Benchmark::Apps