(PHP 4 >= 4.3.0, PHP 5, PHP 7, PHP 8)
proc_open — Execute a command and open file pointers for input/output
Description
proc_open(
    array|string $command,
    array $descriptor_spec,
    array &$pipes,
    ?string $cwd = null,
    ?array $env_vars = null,
    ?array $options = null
): resource|false
Parameters
command
The command line to execute, passed as a string. Special characters have to be properly escaped, and proper quoting has to be applied.
Note: On Windows, unless bypass_shell is set to true in options, the command is passed to cmd.exe (actually, %ComSpec%) with the /c flag as an unquoted string (i.e. exactly as it has been given to proc_open()). This can cause cmd.exe to remove enclosing quotes from command (for details see the cmd.exe documentation), resulting in unexpected, and potentially even dangerous, behavior, because cmd.exe error messages may contain (parts of) the passed command (see example below).
As of PHP 7.4.0, command may be passed as an array of command parameters. In this case the process will be opened directly (without going through a shell) and PHP will take care of any necessary argument escaping.
Note:
On Windows, the argument escaping of the array elements assumes that the command line parsing of the executed command is compatible with the parsing of command line arguments done by the VC runtime.
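A minimal sketch of the array form described above, assuming PHP 7.4+ on a POSIX system; `grep` and its pattern are illustrative stand-ins for a real command:

```php
<?php
// Array form: no shell is involved and PHP escapes each argument itself.
$descriptor_spec = [
    0 => ['pipe', 'r'],  // child's stdin
    1 => ['pipe', 'w'],  // child's stdout
    2 => ['pipe', 'w'],  // child's stderr
];

$process = proc_open(['grep', 'needle'], $descriptor_spec, $pipes);

if (is_resource($process)) {
    fwrite($pipes[0], "haystack\nneedle\n");
    fclose($pipes[0]);
    $output = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($process);
    echo $output;  // "needle\n"
}
```

Because no shell parses the array form, characters like spaces or quotes in the arguments need no manual escaping.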
descriptor_spec
An indexed array where the key represents the descriptor number and the value represents how PHP will pass that descriptor to the child process. 0 is stdin, 1 is stdout, while 2 is stderr.
Each element can be:
- An array describing the pipe to pass to the process. The first element is the descriptor type and the second element is an option for the given type. Valid types are pipe (the second element is either r to pass the read end of the pipe to the process, or w to pass the write end) and file (the second element is a filename). Note that anything other than w is treated like r.
- A stream resource representing a real file descriptor (e.g. an opened file, a socket, STDIN).
The file descriptor numbers are not limited to 0, 1 and 2 - you may specify any valid file descriptor number and it will be passed to the child process. This allows your script to interoperate with other scripts that run as "co-processes". In particular, this is useful for passing passphrases to programs like PGP, GPG and openssl in a more secure manner. It is also useful for reading status information provided by those programs on auxiliary file descriptors.
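A sketch of handing a secret to the child on descriptor 3, as described above, so it never appears in the process list. This assumes a POSIX system (the command string runs under /bin/sh, and, per the Notes section, descriptors beyond 2 do not reach the child on Windows); the `sh` snippet reading fd 3 is illustrative:

```php
<?php
$descriptor_spec = [
    0 => ['pipe', 'r'],
    1 => ['pipe', 'w'],
    2 => ['pipe', 'w'],
    3 => ['pipe', 'r'],  // extra descriptor the child can read from
];

// The shell command reads the secret from fd 3, not from argv or stdin.
$process = proc_open('read -r secret <&3; echo "got: $secret"',
                     $descriptor_spec, $pipes);

if (is_resource($process)) {
    fwrite($pipes[3], "hunter2\n");  // never visible in `ps` output
    fclose($pipes[3]);
    fclose($pipes[0]);
    $out = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($process);
    echo $out;  // "got: hunter2"
}
```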
pipes
Will be set to an indexed array of file pointers that correspond to PHP's end of any pipes that are created.
cwd
The initial working directory for the command. This must be an absolute directory path, or null if you want to use the default value (the working directory of the current PHP process).
env_vars
An array with the environment variables for the command that will be run, or null to use the same environment as the current PHP process.
options
Allows you to specify additional options. Currently supported options include:
- suppress_errors (Windows only): suppresses errors generated by this function when set to true
- bypass_shell (Windows only): bypass the cmd.exe shell when set to true
- blocking_pipes (Windows only): force blocking pipes when set to true
- create_process_group (Windows only): allow the child process to handle CTRL events when set to true
- create_new_console (Windows only): the new process has a new console, instead of inheriting its parent's console
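A sketch of passing the options parameter. All of the options above are Windows-only, so this guards on the platform and omits them elsewhere; `whoami` is an illustrative command available on both platforms:

```php
<?php
// Windows-only options; pass null on other platforms.
$options = DIRECTORY_SEPARATOR === '\\'
    ? ['bypass_shell' => true, 'suppress_errors' => true]
    : null;

$descriptor_spec = [1 => ['pipe', 'w']];
$process = proc_open('whoami', $descriptor_spec, $pipes, null, null, $options);

if (is_resource($process)) {
    $user = trim(stream_get_contents($pipes[1]));
    fclose($pipes[1]);
    proc_close($process);
}
```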
Return Values
Returns a resource representing the process, which should be freed using proc_close() when you are finished with it. On failure returns false.
Changelog
Version | Description
--------|------------
7.4.4   | Added the create_new_console option to the options parameter.
7.4.0   | proc_open() now also accepts an array for command.
7.4.0   | Added the create_process_group option to the options parameter.
Examples
Example #1 A proc_open() example
<?php
$descriptorspec = array(
   0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
   1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
   2 => array("file", "/tmp/error-output.txt", "a") // stderr is a file to write to
);

$cwd = '/tmp';
$env = array('some_option' => 'aeiou');

$process = proc_open('php', $descriptorspec, $pipes, $cwd, $env);

if (is_resource($process)) {
    // $pipes now looks like this:
    // 0 => writeable handle connected to child stdin
    // 1 => readable handle connected to child stdout
    // Any error output will be appended to /tmp/error-output.txt

    fwrite($pipes[0], '<?php print_r($_ENV); ?>');
    fclose($pipes[0]);

    echo stream_get_contents($pipes[1]);
    fclose($pipes[1]);

    // It is important that you close any pipes before calling
    // proc_close in order to avoid a deadlock
    $return_value = proc_close($process);

    echo "command returned $return_value\n";
}
?>
The above example will output something similar to:
Array
(
[some_option] => aeiou
[PWD] => /tmp
[SHLVL] => 1
[_] => /usr/local/bin/php
)
command returned 0
Example #2 proc_open() quirk on Windows
While one may expect the following program to search the file filename.txt for the text search and to print the results, it behaves rather differently.

<?php
$descriptorspec = [STDIN, STDOUT, STDOUT];
$cmd = '"findstr" "search" "filename.txt"';
$proc = proc_open($cmd, $descriptorspec, $pipes);
proc_close($proc);
?>
The above example will output:
'findstr" "search" "filename.txt' is not recognized as an internal or external command,
operable program or batch file.
To work around that behavior, it is usually sufficient to enclose the command in additional quotes:

$cmd = '""findstr" "search" "filename.txt""';
Notes
Note:
Windows compatibility: Descriptors beyond 2 (stderr) are made available to the child process as inheritable handles, but since the Windows architecture does not associate file descriptor numbers with low-level handles, the child process does not (yet) have a means of accessing those handles. Stdin, stdout and stderr work as expected.
Note:
If you only need a uni-directional (one-way) process pipe, use popen() instead, as it is much easier to use.
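A minimal sketch of the popen() alternative mentioned in the note above; the `echo` command is illustrative:

```php
<?php
// One-way pipe: popen() is simpler when you only read (or only write).
$fp = popen('echo hello', 'r');  // 'r' = read the command's stdout
if ($fp !== false) {
    $line = fgets($fp);
    pclose($fp);
    echo $line;  // "hello\n"
}
```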
See Also
- popen() - Opens process file pointer
- exec() - Execute an external program
- system() - Execute an external program and display the output
- passthru() - Execute an external program and display raw output
- stream_select() - Runs the equivalent of the select() system call on the given arrays of streams with a timeout specified by seconds and microseconds
- The backtick operator
devel at romanr dot info ¶
10 years ago
The call works as it should. No bugs.
But in most cases you won't be able to work with the pipes in blocking mode.
When your output pipe (the process' input, $pipes[0]) is blocking, there is a case where you and the process are both blocked on output.
When your input pipe (the process' output, $pipes[1]) is blocking, there is a case where you and the process are both blocked on your own input.
So you should switch the pipes into NONBLOCKING mode (stream_set_blocking).
Then there is a case where you can neither read anything (fread($pipes[1], ...) == "") nor write anything (fwrite($pipes[0], ...) == 0). In this case, you had better check that the process is alive (proc_get_status) and, if it is, wait for some time (stream_select). The situation is truly asynchronous; the process may be busy working, processing your data.
Using a shell effectively makes it impossible to know whether the command exists - proc_open always returns a valid resource. You may even write some data into it (into the shell, actually). But eventually it will terminate, so check the process status regularly.
I would advise against using mkfifo pipes, because a filesystem fifo pipe (mkfifo) blocks the open/fopen call (!!!) until somebody opens the other side (Unix-related behavior). If the pipe is opened not by the shell and the command crashes or does not exist, you will be blocked forever.
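A sketch of the pattern this note recommends (non-blocking pipes, stream_select(), and a proc_get_status() liveness check), assuming a POSIX system; `cat` stands in for the real child process:

```php
<?php
$spec = [0 => ['pipe', 'r'], 1 => ['pipe', 'w'], 2 => ['pipe', 'w']];
$proc = proc_open('cat', $spec, $pipes);

stream_set_blocking($pipes[0], false);
stream_set_blocking($pipes[1], false);

$input  = str_repeat("data\n", 100000);  // large enough to fill the pipe buffer
$output = '';

while ($input !== '' || !feof($pipes[1])) {
    $read   = [$pipes[1]];
    $write  = $input !== '' ? [$pipes[0]] : null;
    $except = null;
    // Wait until we can make progress in at least one direction.
    if (stream_select($read, $write, $except, 1) === false) {
        break;
    }
    if ($write) {
        $n = fwrite($pipes[0], substr($input, 0, 8192));
        $input = substr($input, (int) $n);  // fwrite may write 0 bytes: not an error
        if ($input === '') {
            fclose($pipes[0]);              // signal EOF so the child can finish
        }
    }
    foreach ($read as $r) {
        $output .= fread($r, 8192);
    }
    if ($input !== '' && !proc_get_status($proc)['running']) {
        break;                              // child died while we still had data
    }
}
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);
```

Because neither side ever blocks while the other direction is full, the deadlock described above cannot occur.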
simeonl at dbc dot co dot nz ¶
13 years ago
Note that when you call an external script and retrieve large amounts of data from STDOUT and STDERR, you may need to retrieve from both alternately in non-blocking mode (with appropriate pauses if no data is retrieved), so that your PHP script doesn't lock up. This can happen if you are waiting on activity on one pipe while the external script is waiting for you to empty the other, e.g.:
<?php
$read_output = $read_error = false;
$buffer_len  = $prev_buffer_len = 0;
$ms          = 10;
$output      = '';
$read_output = true;
$error       = '';
$read_error  = true;
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);

// dual reading of STDOUT and STDERR stops one full pipe blocking the other,
// because the external script is waiting
while ($read_error != false or $read_output != false)
{
    if ($read_output != false)
    {
        if (feof($pipes[1]))
        {
            fclose($pipes[1]);
            $read_output = false;
        }
        else
        {
            $str = fgets($pipes[1], 1024);
            $len = strlen($str);
            if ($len)
            {
                $output .= $str;
                $buffer_len += $len;
            }
        }
    }

    if ($read_error != false)
    {
        if (feof($pipes[2]))
        {
            fclose($pipes[2]);
            $read_error = false;
        }
        else
        {
            $str = fgets($pipes[2], 1024);
            $len = strlen($str);
            if ($len)
            {
                $error .= $str;
                $buffer_len += $len;
            }
        }
    }

    if ($buffer_len > $prev_buffer_len)
    {
        $prev_buffer_len = $buffer_len;
        $ms = 10;
    }
    else
    {
        usleep($ms * 1000); // sleep for $ms milliseconds
        if ($ms < 160)
        {
            $ms = $ms * 2;
        }
    }
}

return proc_close($process);
?>
php at keith tyler dot com ¶
12 years ago
Interestingly enough, it seems you actually have to store the return value in order for your streams to exist. You can't throw it away.
In other words, this works:
<?php
$proc = proc_open("echo foo",
    array(
        array("pipe", "r"),
        array("pipe", "w"),
        array("pipe", "w")
    ),
    $pipes);
print stream_get_contents($pipes[1]);
?>
prints:
foo
but this doesn't work:
<?php
proc_open("echo foo",
    array(
        array("pipe", "r"),
        array("pipe", "w"),
        array("pipe", "w")
    ),
    $pipes);
print stream_get_contents($pipes[1]);
?>
outputs:
Warning: stream_get_contents(): is not a valid stream resource in Command line code on line 1
The only difference is that in the second case we don't save the output of proc_open to a variable.
aaronw at catalyst dot net dot nz ¶
7 years ago
If you have a CLI script that prompts you for a password via STDIN, and you need to run it from PHP, proc_open() can get you there. It's better than doing "echo $password | command.sh", because then your password will be visible in the process list to any user who runs "ps". Alternately you could print the password to a file and use cat: "cat passwordfile.txt | command.sh", but then you've got to manage that file in a secure manner.
If your command will always prompt you for responses in a specific order, then proc_open() is quite simple to use and you don't really have to worry about blocking & non-blocking streams. For instance, to run the "passwd" command:
<?php
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")
);

$process = proc_open(
    '/usr/bin/passwd ' . escapeshellarg($username),
    $descriptorspec,
    $pipes
);

// It will prompt for the existing password, then the new password twice.
// You don't need to escapeshellarg() these, but you should whitelist
// them to guard against control characters, perhaps by using ctype_print()
fwrite($pipes[0], "$oldpassword\n$newpassword\n$newpassword\n");

// Read the responses if you want to look at them
$stdout = fread($pipes[1], 1024);
$stderr = fread($pipes[2], 1024);

fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);

// It returns 0 on a successful password change
$exit_status = proc_close($process);
$success = ($exit_status === 0);
?>
chris AT w3style DOT co.uk ¶
14 years ago
It took me a long time (and three consecutive projects) to figure this out. Because popen() and proc_open() return valid processes even when the command failed, it's awkward to determine when it really has failed if you're opening a non-interactive process like "sendmail -t".
I had previously guessed that reading from STDERR immediately after starting the process would work, and it does... but when the command is successful PHP just hangs, because STDERR is empty and it's waiting for data to be written to it.
The solution is a simple stream_set_blocking($pipes[2], 0) immediately after calling proc_open().
<?php
$this->_proc = proc_open($command, $descriptorSpec, $pipes);
stream_set_blocking($pipes[2], 0);
if ($err = stream_get_contents($pipes[2]))
{
    throw new Swift_Transport_TransportException(
        'Process could not be started [' . $err . ']'
    );
}
?>

If the process is opened successfully $pipes[2] will be empty, but if it failed the bash/sh error will be in it. Finally I can drop all my "workaround" error checking.
I realise this solution is obvious and I'm not sure how it took me 18 months to figure it out, but hopefully this will help someone else.
NOTE: Make sure your descriptorSpec has ( 2 => array('pipe', 'w')) for this to work.
mattis at xait dot no ¶
11 years ago
If you are, like me, tired of the buggy way proc_open handles streams and exit codes, this example demonstrates the power of pcntl, posix and some simple output redirection:
<?php
$outpipe = '/tmp/outpipe';
$inpipe  = '/tmp/inpipe';

posix_mkfifo($inpipe, 0600);
posix_mkfifo($outpipe, 0600);

$pid = pcntl_fork();

// parent
if ($pid) {
    $in = fopen($inpipe, 'w');
    fwrite($in, "A message for the inpipe reader\n");
    fclose($in);

    $out = fopen($outpipe, 'r');
    while (!feof($out)) {
        echo "From out pipe: " . fgets($out) . PHP_EOL;
    }
    fclose($out);

    pcntl_waitpid($pid, $status);
    if (pcntl_wifexited($status)) {
        echo "Reliable exit code: " . pcntl_wexitstatus($status) . PHP_EOL;
    }

    unlink($outpipe);
    unlink($inpipe);
}
// child
else {
    // parent (of the second fork)
    if ($pid = pcntl_fork()) {
        pcntl_exec('/bin/sh', array('-c', "printf 'A message for the outpipe reader' > $outpipe 2>&1 && exit 12"));
    }
    // child
    else {
        pcntl_exec('/bin/sh', array('-c', "printf 'From in pipe: '; cat $inpipe"));
    }
}
?>
Output:

From in pipe: A message for the inpipe reader
From out pipe: A message for the outpipe reader
Reliable exit code: 12
ralf at dreesen[*NO*SPAM*] dot net ¶
18 years ago
The behaviour described in the following may depend on the system PHP runs on. Our platform was "Intel with Debian 3.0 Linux".
If you pass huge amounts of data (ca. >>10k) to the application you run, and the application for example echoes it directly to stdout (without buffering the input), you will get a deadlock. This is because there are size-limited buffers (so-called pipes) between PHP and the application you run. The application will put data into the stdout buffer until it is filled, then it blocks, waiting for PHP to read from the stdout buffer. In the meantime, PHP has filled the stdin buffer and waits for the application to read from it. That is the deadlock.
A solution to this problem may be to set the stdout stream to non-blocking (stream_set_blocking) and alternately write to stdin and read from stdout.
Just imagine the following example:
<?php
/* assume that strlen($in) is about 30k */

$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("file", "/tmp/error-output.txt", "a")
);

$process = proc_open("cat", $descriptorspec, $pipes);

if (is_resource($process)) {
    fwrite($pipes[0], $in);
    /* fwrite writes to stdin; 'cat' will immediately write the data from stdin
     * to stdout and blocks when the stdout buffer is full. Then it will not
     * continue reading from stdin and PHP will block here.
     */
    fclose($pipes[0]);

    $out = '';
    while (!feof($pipes[1])) {
        $out .= fgets($pipes[1], 1024);
    }
    fclose($pipes[1]);

    $return_value = proc_close($process);
}
?>
bilge at boontex dot com ¶
10 years ago
$cmd can actually contain multiple commands by separating each command with a newline. However, because of this it is not possible to split one very long command over multiple lines, even when using "\\\n" syntax.
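A sketch of the multi-command behavior described above, assuming a POSIX system where the command string is handed to /bin/sh:

```php
<?php
// Two commands separated by a newline become one shell script for sh -c.
$proc = proc_open("echo first\necho second", [1 => ['pipe', 'w']], $pipes);
$out = stream_get_contents($pipes[1]);
fclose($pipes[1]);
proc_close($proc);
echo $out;  // "first\nsecond\n"
```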
Kyle Gibson ¶
17 years ago
proc_open is hard coded to use "/bin/sh". So if you're working in a chrooted environment, you need to make sure that /bin/sh exists, for now.
Luceo ¶
12 years ago
It seems that stream_get_contents() on STDOUT blocks infinitely under Windows when STDERR is filled under some circumstances.
The trick is to open STDERR in append mode ("a"); then this will work, too.

<?php
$descriptorspec = array(
    0 => array('pipe', 'r'), // stdin
    1 => array('pipe', 'w'), // stdout
    2 => array('pipe', 'a')  // stderr
);
?>
mcuadros at gmail dot com ¶
9 years ago
This is an example of how to run a command using the TTY as output, just like crontab -e or git commit do.

<?php
$descriptors = array(
    array('file', '/dev/tty', 'r'),
    array('file', '/dev/tty', 'w'),
    array('file', '/dev/tty', 'w')
);

$process = proc_open('vim', $descriptors, $pipes);
?>
michael dot gross at NOSPAM dot flexlogic dot at ¶
9 years ago
Please note that if you plan to spawn multiple processes, you have to save all the results in different variables (in an array, for example). If you, for example, called $proc = proc_open(...) multiple times, the script would block after the second call until the child process exits (proc_close is called implicitly).
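A sketch of the point above, assuming a POSIX system; the sleep/echo commands are illustrative:

```php
<?php
// Keep every process resource referenced in an array; if the resource from a
// previous proc_open() call goes out of scope, PHP implicitly proc_close()s
// it, which blocks until that child exits.
$spec  = [1 => ['pipe', 'w']];
$procs = [];
$outs  = [];

foreach (['sleep 1; echo a', 'sleep 1; echo b'] as $i => $cmd) {
    $p = [];
    $procs[$i] = proc_open($cmd, $spec, $p);  // stays referenced, keeps running
    $outs[$i]  = $p[1];
}

// Both children run concurrently: total wall time is ~1 second, not ~2.
$result = '';
foreach ($procs as $i => $proc) {
    $result .= stream_get_contents($outs[$i]);
    fclose($outs[$i]);
    proc_close($proc);
}
echo $result;  // "a\nb\n"
```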
John Wehin ¶
14 years ago
STDIN STDOUT example
test.php
<?php
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "r")
);

$process = proc_open('php test_gen.php', $descriptorspec, $pipes, null, null); // run test_gen.php

echo ("Start process:\n");
if (is_resource($process))
{
    fwrite($pipes[0], "start\n");                    // send start
    echo ("\n\nStart ....".fgets($pipes[1], 4096));  // get answer
    fwrite($pipes[0], "get\n");                      // send get
    echo ("Get: ".fgets($pipes[1], 4096));           // get answer
    fwrite($pipes[0], "stop\n");                     // send stop
    echo ("\n\nStop ....".fgets($pipes[1], 4096));   // get answer

    fclose($pipes[0]);
    fclose($pipes[1]);
    fclose($pipes[2]);

    $return_value = proc_close($process);            // stop test_gen.php
    echo ("Returned:".$return_value."\n");
}
?>
test_gen.php
<?php
$keys = 0;

function play_stop()
{
    global $keys;
    $stdin_stat_arr = fstat(STDIN);
    if ($stdin_stat_arr['size'] != 0)
    {
        $val_in = fread(STDIN, 4096);
        switch ($val_in)
        {
            case "start\n":
                echo "Started\n";
                return false;
            case "stop\n":
                echo "Stopped\n";
                $keys = 0;
                return false;
            case "pause\n":
                echo "Paused\n";
                return false;
            case "get\n":
                echo ($keys."\n");
                return true;
            default:
                echo ("Invalid parameter passed: ".$val_in."\n");
                return true;
        }
    } else {
        return true;
    }
}

while (true)
{
    while (play_stop()) { usleep(1000); }
    while (play_stop()) { $keys++; usleep(10); }
}
?>
daniela at itconnect dot net dot au ¶
19 years ago
Just a small note in case it isn't obvious: it's possible to treat the filename as in fopen, thus you can pass through the standard input from PHP like this:

<?php
$descs = array(
    0 => array("file", "php://stdin", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")
);

$proc = proc_open("myprogram", $descs, $fp);
?>
andrew dot budd at adsciengineering dot com ¶
16 years ago
The pty option is actually disabled in the source for some reason via a #if 0 && condition. I'm not sure why it's disabled. I removed the 0 && and recompiled, after which the pty option works perfectly. Just a note.
MagicalTux at FF.ST ¶
18 years ago
Note that if you need to be "interactive" with the user *and* the opened application, you can use stream_select to see if something is waiting on the other side of the pipe.
Stream functions can be used on pipes like:
- pipes from popen, proc_open
- pipes from fopen('php://stdin') (or stdout)
- sockets (unix or tcp/udp)
- many other things probably, but the most important are here
More information about streams (you'll find many useful functions there):
http://www.php.net/manual/en/ref.stream.php
Anonymous ¶
14 years ago
I needed to emulate a tty for a process (it wouldn't write to stdout or read from stdin), so I found this:

<?php
$descriptorspec = array(0 => array('pty'),
                        1 => array('pty'),
                        2 => array('pty'));
?>

The pipes are then bidirectional.
weirdall at hotmail dot com ¶
5 years ago
This script will tail a file using tail -F to follow logs that are rotated.
<?php
$descriptorspec = array(
    0 => array("pipe", "r"), // stdin is a pipe that the child will read from
    1 => array("pipe", "w"), // stdout is a pipe that the child will write to
    2 => array("pipe", "w")  // stderr is a pipe that the child will write to
);

$filename = '/var/log/nginx/nginx-access.log';
if (!file_exists($filename)) {
    file_put_contents($filename, '');
}

$process = proc_open('tail -F ' . escapeshellarg($filename), $descriptorspec, $pipes);

if (is_resource($process)) {
    // $pipes now looks like this:
    // 0 => writeable handle connected to child stdin
    // 1 => readable handle connected to child stdout
    // Any error output will be sent to $pipes[2]

    // Closing $pipes[0] because we don't need it
    fclose($pipes[0]);

    // stderr should not block, because that blocks the tail process
    stream_set_blocking($pipes[2], 0);

    $count = 0;
    $stream = $pipes[1];

    while (($buf = fgets($stream, 4096))) {
        print_r($buf);
        // Read stderr to see if anything goes wrong
        $stderr = fread($pipes[2], 4096);
        if (!empty($stderr)) {
            print('log: ' . $stderr);
        }
    }

    fclose($pipes[1]);
    fclose($pipes[2]);

    // It is important that you close any pipes before calling
    // proc_close in order to avoid a deadlock
    proc_close($process);
}
?>
stoller at leonex dot de ¶
6 years ago
If you are working on Windows and try to proc_open an executable that contains spaces in its path, you will get into trouble.
But there's a workaround which works quite well. I have found it here: http://stackoverflow.com/a/4410389/1119601
For example, if you want to execute "C:\Program Files\nodejs\node.exe", you will get the error that the command could not be found.
Try this:
<?php
$cmd = 'C:\\Program Files\\nodejs\\node.exe';
if (strtolower(substr(PHP_OS, 0, 3)) === 'win') {
    $cmd = sprintf('cd %s && %s', escapeshellarg(dirname($cmd)), basename($cmd));
}
?>
joachimb at gmail dot com ¶
14 years ago
I'm confused by the direction of the pipes. Most of the examples in this documentation open pipe #2 as "r", because they want to read from stderr. That sounds logical to me, and that's what I tried to do. That didn't work, though. When I changed it to w, as in
<?php
$descriptorspec = array(
    0 => array("pipe", "r"), // stdin
    1 => array("pipe", "w"), // stdout
    2 => array("pipe", "w")  // stderr
);

$process = proc_open(escapeshellarg($scriptFile), $descriptorspec, $pipes, $this->wd);
...
while (!feof($pipes[1])) {
    foreach ($pipes as $key => $pipe) {
        $line = fread($pipe, 128);
        if ($line) {
            print($line);
            $this->log($line);
        }
    }
    usleep(500000); // sleep() only takes whole seconds; use usleep() for 0.5s
}
...
?>
everything works fine.
Kevin Barr ¶
16 years ago
I found that with disabling stream blocking I was sometimes attempting to read a return line before the external application had responded. So, instead, I left blocking alone and used this simple function to add a timeout to the fgets function:
<?php
// fgetsPending($in, $tv_sec) - Get a pending line of data from stream $in,
// waiting a maximum of $tv_sec seconds
function fgetsPending(&$in, $tv_sec = 10) {
    // stream_select() takes its arrays by reference, so use real variables
    $read   = array($in);
    $write  = NULL;
    $except = NULL;
    if (stream_select($read, $write, $except, $tv_sec)) return fgets($in);
    else return FALSE;
}
?>
stevebaldwin21 at googlemail dot com ¶
7 years ago
For those who are finding that using the $cwd and $env options causes proc_open to fail (Windows): you will need to pass all the other server environment variables;

<?php
$descriptorSpec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
);

proc_open(
    "C:\\Windows\\System32\\PING.exe localhost",
    $descriptorSpec,
    $pipes,
    "C:\\Windows\\System32",
    $_SERVER
);
?>
exel at example dot com ¶
9 years ago
Pipe communications can be mind-bending; I want to share some pointers to avoid problems.
For proper control of the communications through the "in" and "out" pipes of the opened sub-process, remember to set both of them into non-blocking mode, and especially notice that fwrite may return (int) 0 - this is not an error, the process just might not accept input at that moment.
So, let us consider an example of decoding a gz-encoded file by using zcat as a sub-process (this is not the final version, just to show the important things):
<?php
// make gz file
$fd = fopen("/tmp/testPipe", "w");
for ($i = 0; $i < 100000; $i++)
    fwrite($fd, md5($i)."\n");
fclose($fd);

if (is_file("/tmp/testPipe.gz"))
    unlink("/tmp/testPipe.gz");
system("gzip /tmp/testPipe");

// open process
$pipesDescr = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("file", "/tmp/testPipe.log", "a"),
);

$process = proc_open("zcat", $pipesDescr, $pipes);
if (!is_resource($process)) throw new Exception("popen error");

// set both pipes non-blocking
stream_set_blocking($pipes[0], 0);
stream_set_blocking($pipes[1], 0);

$text = "";
$fd = fopen("/tmp/testPipe.gz", "r");
while (!feof($fd))
{
    $str = fread($fd, 16384 * 4);
    $try = 3;
    while ($str)
    {
        $len = fwrite($pipes[0], $str);
        while ($s = fread($pipes[1], 16384 * 4))
            $text .= $s;

        if (!$len)
        {
            // if you remove these paused retries, the process may fail
            usleep(200000);
            $try--;
            if (!$try)
                throw new Exception("fwrite error");
        }
        $str = substr($str, $len);
    }
    echo strlen($text)."\n";
}
fclose($fd);
fclose($pipes[0]);

// reading the rest of the output stream
stream_set_blocking($pipes[1], 1);
while (!feof($pipes[1]))
{
    $s = fread($pipes[1], 16384);
    $text .= $s;
}

echo strlen($text)." / 3 300 000\n";
?>
radone at gmail dot com ¶
14 years ago
To complete the examples below that use proc_open to encrypt a string using GPG, here is a decrypt function:
<?php
function gpg_decrypt($string, $secret) {
    $homedir  = '';                  // path to your gpg keyrings
    $tmp_file = '/tmp/gpg_tmp.asc';  // tmp file to write to
    file_put_contents($tmp_file, $string);
    $text  = '';
    $error = '';
    $descriptorspec = array(
        0 => array("pipe", "r"), // stdin
        1 => array("pipe", "w"), // stdout
        2 => array("pipe", "w")  // stderr, instead of a file
    );
    $command = 'gpg --homedir ' . $homedir . ' --batch --no-verbose --passphrase-fd 0 -d ' . $tmp_file . ' ';
    $process = proc_open($command, $descriptorspec, $pipes);
    if (is_resource($process)) {
        fwrite($pipes[0], $secret);
        fclose($pipes[0]);
        while ($s = fgets($pipes[1], 1024)) {
            // read from the pipe
            $text .= $s;
        }
        fclose($pipes[1]);
        // optional:
        while ($s = fgets($pipes[2], 1024)) {
            $error .= $s . "\n";
        }
        fclose($pipes[2]);
    }
    file_put_contents($tmp_file, '');
    if (preg_match('/decryption failed/i', $error)) {
        return false;
    } else {
        return $text;
    }
}
?>
Matou Havlena - matous at havlena dot net ¶
12 years ago
Here is a smart Processes Manager object which I created for my application. It can control the maximum number of simultaneously running processes.
Processmanager class:
<?php
class Processmanager {
    public $executable = "C:\\www\\_PHP5_2_10\\php";
    public $root = "C:\\www\\parallelprocesses\\";
    public $scripts = array();
    public $processesRunning = 0;
    public $processes = 3;
    public $running = array();
    public $sleep_time = 2;

    function addScript($script, $max_execution_time = 300) {
        $this->scripts[] = array("script_name" => $script,
                                 "max_execution_time" => $max_execution_time);
    }

    function exec() {
        $i = 0;
        for (;;) {
            // Fill up the slots
            while (($this->processesRunning < $this->processes) and ($i < count($this->scripts))) {
                echo "Adding script: " . $this->scripts[$i]["script_name"] . "\n";
                ob_flush();
                flush();
                $this->running[] = new Process($this->executable, $this->root, $this->scripts[$i]["script_name"], $this->scripts[$i]["max_execution_time"]);
                $this->processesRunning++;
                $i++;
            }

            // Check if done
            if (($this->processesRunning == 0) and ($i >= count($this->scripts))) {
                break;
            }

            // sleep; this duration depends on your script execution time:
            // the longer the execution time, the longer the sleep time
            sleep($this->sleep_time);

            // check what is done
            foreach ($this->running as $key => $val) {
                if (!$val->isRunning() or $val->isOverExecuted()) {
                    if (!$val->isRunning()) echo "Done: " . $val->script . "\n";
                    else echo "Killed: " . $val->script . "\n";
                    proc_close($val->resource);
                    unset($this->running[$key]);
                    $this->processesRunning--;
                    ob_flush();
                    flush();
                }
            }
        }
    }
}
?>
Process class:
<?php
class Process {
    public $resource;
    public $pipes;
    public $script;
    public $max_execution_time;
    public $start_time;

    function __construct(&$executable, &$root, $script, $max_execution_time) {
        $this->script = $script;
        $this->max_execution_time = $max_execution_time;
        $descriptorspec = array(
            0 => array('pipe', 'r'),
            1 => array('pipe', 'w'),
            2 => array('pipe', 'w')
        );
        $this->resource = proc_open($executable . " " . $root . $this->script, $descriptorspec, $this->pipes, null, $_ENV);
        $this->start_time = time();
    }

    // is it still running?
    function isRunning() {
        $status = proc_get_status($this->resource);
        return $status["running"];
    }

    // execution time too long, process is going to be killed
    function isOverExecuted() {
        if ($this->start_time + $this->max_execution_time < time()) return true;
        else return false;
    }
}
?>
Example of usage:

<?php
$manager = new Processmanager();
$manager->executable = "C:\\www\\_PHP5_2_10\\php";
$manager->root = "C:\\www\\parallelprocesses\\";
$manager->processes = 3;
$manager->sleep_time = 2;
$manager->addScript("script1.php", 10);
$manager->addScript("script2.php");
$manager->addScript("script3.php");
$manager->addScript("script4.php");
$manager->addScript("script5.php");
$manager->addScript("script6.php");
$manager->exec();
?>
And possible output:

Adding script: script1.php
Adding script: script2.php
Adding script: script3.php
Done: script2.php
Adding script: script4.php
Killed: script1.php
Done: script3.php
Done: script4.php
Adding script: script5.php
Adding script: script6.php
Done: script5.php
Done: script6.php
vanyazin at gmail dot com ¶
7 years ago
If you want to use the proc_open() function with socket streams, you can open a connection with the fsockopen() function and then just put the handles into the array of IO descriptors:

<?php
$fh = fsockopen($address, $port);
$descriptors = [
    $fh, // stdin
    $fh, // stdout
    $fh, // stderr
];
$proc = proc_open($cmd, $descriptors, $pipes);
?>
snowleopard at amused dot NOSPAMPLEASE dot com dot au ¶
14 years ago
I managed to make a set of functions to work with GPG, since my hosting provider refused to use GPG-ME.
Included below is an example of decryption using a higher descriptor to push a passphrase.
Comments and emails welcome. :)
<?php
function GPGDecrypt($InputData, $Identity, $PassPhrase, $HomeDir = "~/.gnupg", $GPGPath = "/usr/bin/gpg") {
    if (!is_executable($GPGPath)) {
        trigger_error($GPGPath . " is not executable", E_USER_ERROR);
        die();
    } else {
        // Set up the descriptors
        $Descriptors = array(
            0 => array("pipe", "r"),
            1 => array("pipe", "w"),
            2 => array("pipe", "w"),
            3 => array("pipe", "r") // This is the pipe we can feed the password into
        );

        // Build the command line and start the process
        $CommandLine = $GPGPath . ' --homedir ' . $HomeDir . ' --quiet --batch --local-user "' . $Identity . '" --passphrase-fd 3 --decrypt -';
        $ProcessHandle = proc_open($CommandLine, $Descriptors, $Pipes);

        if (is_resource($ProcessHandle)) {
            // Push passphrase to custom pipe
            fwrite($Pipes[3], $PassPhrase);
            fclose($Pipes[3]);

            // Push input into StdIn
            fwrite($Pipes[0], $InputData);
            fclose($Pipes[0]);

            // Read StdOut
            $StdOut = '';
            while (!feof($Pipes[1])) {
                $StdOut .= fgets($Pipes[1], 1024);
            }
            fclose($Pipes[1]);

            // Read StdErr
            $StdErr = '';
            while (!feof($Pipes[2])) {
                $StdErr .= fgets($Pipes[2], 1024);
            }
            fclose($Pipes[2]);

            // Close the process
            $ReturnCode = proc_close($ProcessHandle);
        } else {
            trigger_error("cannot create resource", E_USER_ERROR);
            die();
        }
    }

    if (strlen($StdOut) >= 1) {
        if ($ReturnCode <= 0) {
            $ReturnValue = $StdOut;
        } else {
            $ReturnValue = "Return Code: " . $ReturnCode . "\nOutput on StdErr:\n" . $StdErr . "\n\nStandard Output Follows:\n\n";
        }
    } else {
        if ($ReturnCode <= 0) {
            $ReturnValue = $StdErr;
        } else {
            $ReturnValue = "Return Code: " . $ReturnCode . "\nOutput on StdErr:\n" . $StdErr;
        }
    }
    return $ReturnValue;
}
?>
mendoza at pvv dot ntnu dot no ¶
16 years ago
Since I don't have access to PAM via Apache, suexec on, nor access to /etc/shadow I coughed up this way of authenticating users based on the system users details. It's really hairy and ugly, but it works.
function authenticate($user,$password) {
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("file","/dev/null", "w") // stderr is a file to write to
);
$process = proc_open("su ".escapeshellarg($user), $descriptorspec, $pipes);
if (is_resource($process)) {
// $pipes now looks like this:
// 0 => writeable handle connected to child stdin
// 1 => readable handle connected to child stdout
// Any error output will be discarded (sent to /dev/null)
fwrite($pipes[0],$password);
fclose($pipes[0]);
fclose($pipes[1]);
// It is important that you close any pipes before calling
// proc_close in order to avoid a deadlock
$return_value = proc_close($process);
return !$return_value;
}
}
?>
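For illustration, a usage sketch of the function above (the user name and password are placeholders; this relies on su accepting the password on stdin, which works on many but not all systems):

```php
<?php
// Hypothetical credentials for the sketch
if (authenticate('alice', 'secret')) {
    echo "Password accepted\n";
} else {
    echo "Password rejected\n";
}
```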
picaune at hotmail dot com ¶
16 years ago
The above note on Windows compatibility is not entirely correct.
Windows will dutifully pass additional handles above 2 on to the child process, starting with Windows 95 and Windows NT 3.5. It even supports this capability (starting with Windows 2000) from the command line, using a special syntax (prefacing the redirection operator with the handle number).
These handles will be, when passed to the child, preopened for low-level I/O (e.g. _read) by number. The child can reopen them for high-level I/O (e.g. fgets) using the _fdopen or _wfdopen functions, and can then read from or write to them the same way it would stdin or stdout.
However, child processes must be specially coded to use these handles, and if the end user is not intelligent enough to use them (e.g. "openssl < commands.txt 3< cacert.der") and the program not smart enough to check, it could cause errors or hangs.
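If the child happens to be PHP itself, no low-level _fdopen() is needed: PHP can reopen an inherited descriptor through the php://fd stream wrapper (available since PHP 5.3.6). A minimal sketch, assuming the file names parent.php and child.php:

```php
<?php
// parent.php -- pass a value to the child on descriptor 3
$descriptorspec = array(
    0 => array("pipe", "r"),   // stdin
    1 => array("pipe", "w"),   // stdout
    2 => array("pipe", "w"),   // stderr
    3 => array("pipe", "r")    // extra handle the child will read from
);
$proc = proc_open('php child.php', $descriptorspec, $pipes);
if (is_resource($proc)) {
    fwrite($pipes[3], "secret\n");   // data travels on fd 3, not stdin
    fclose($pipes[3]);
    fclose($pipes[0]);
    echo stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($proc);
}

// child.php -- reopen the inherited descriptor via the php://fd wrapper
$fd3 = fopen('php://fd/3', 'r');
echo "got: " . fgets($fd3);
fclose($fd3);
```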
cbn at grenet dot org ¶
12 years ago
Display output (stdout/stderr) in real time, and get the real exit code, in pure PHP (no shell workaround!). It works well on my machines (Debian, mostly).
#!/usr/bin/php
/*
* Execute and display the output in real time (stdout + stderr).
*
* Please note this snippet is prepended with an appropriate shebang for the
* CLI. You can re-use only the function.
*
* Usage example:
* chmod u+x proc_open.php
* ./proc_open.php "ping -c 5 google.fr"; echo RetVal=$?
*/
define('BUF_SIZ', 1024);  # max buffer size
define('FD_WRITE', 0);    # stdin
define('FD_READ', 1);     # stdout
define('FD_ERR', 2);      # stderr
/*
* Wrapper for proc_*() functions.
* The first parameter $cmd is the command line to execute.
* Return the exit code of the process.
*/
function proc_exec($cmd)
{
$descriptorspec = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("pipe", "w")
    );

    $ptr = proc_open($cmd, $descriptorspec, $pipes, NULL, $_ENV);
    if (!is_resource($ptr))
        return false;

    while (($buffer = fgets($pipes[FD_READ], BUF_SIZ)) != NULL
        || ($errbuf = fgets($pipes[FD_ERR], BUF_SIZ)) != NULL) {
if (!isset($flag)) {
$pstatus = proc_get_status($ptr);
$first_exitcode = $pstatus["exitcode"];
$flag = true;
}
if (strlen($buffer))
echo $buffer;
if (strlen($errbuf))
echo "ERR: " . $errbuf;
    }

    foreach ($pipes as $pipe)
        fclose($pipe);

    /* Get the expected *exit* code to return the value */
$pstatus = proc_get_status($ptr);
if (!strlen($pstatus["exitcode"]) || $pstatus["running"]) {
/* we can trust the retval of proc_close() */
if ($pstatus["running"])
proc_terminate($ptr);
$ret = proc_close($ptr);
} else {
if ((($first_exitcode + 256) % 256) == 255
&& (($pstatus["exitcode"] + 256) % 256) != 255)
$ret = $pstatus["exitcode"];
elseif (!strlen($first_exitcode))
$ret = $pstatus["exitcode"];
elseif ((($first_exitcode + 256) % 256) != 255)
$ret = $first_exitcode;
else
$ret = 0; /* we "deduce" an EXIT_SUCCESS ;) */
proc_close($ptr);
    }

    return ($ret + 256) % 256;
}

/* __init__ */
if (isset($argv) && count($argv) > 1 && !empty($argv[1])) {
if (($ret = proc_exec($argv[1])) === false)
die("Error: not enough FD or out of memory.\n");
elseif ($ret == 127)
die("Command not found (returned by sh).\n");
else
exit($ret);
}
?>
jonah at whalehosting dot ca ¶
14 years ago
@joachimb: The descriptorspec describes the I/O from the perspective of the process you are opening. That is why stdin is opened for reading: you are writing, and the process is reading. Likewise, you want to open descriptor 2 (stderr) in write mode so that the process can write to it and you can read it. In your case, where you want all descriptors to be pipes, you should always use:
$descriptorspec = array(
0 => array('pipe', 'r'), // stdin
1 => array('pipe', 'w'), // stdout
2 => array('pipe', 'w') // stderr
);
?>
The examples below where stderr is opened as 'r' are mistaken.
I would like to see examples of using higher descriptor numbers than 2. Specifically GPG as mentioned in the documentation.
jaroslaw at pobox dot sk ¶
14 years ago
Some functions stopped working for me under proc_open().
This is what I got working to communicate between two PHP scripts:
$abs_path = '/var/www/domain/filename.php';
$spec = array(array("pipe", "r"), array("pipe", "w"), array("pipe", "w"));
$process = proc_open('php '.$abs_path, $spec, $pipes, null, $_ENV);
if (is_resource($process)) {
# wait till something happens on other side
sleep(1);
# send command
fwrite($pipes[0], 'echo $test;');
fflush($pipes[0]);
# wait till something happens on other side
usleep(1000);
# read pipe for result
echo fread($pipes[1], 1024) . "\n";
# close pipes
fclose($pipes[0]);fclose($pipes[1]);fclose($pipes[2]);
$return_value = proc_close($process);
}
?>
filename.php then contains this:
$test = "test data generated here\n";
while(true) {
# read incoming command
if($fh = fopen('php://stdin','rb')) {
$val_in = fread($fh,1024);
fclose($fh);
}
# execute incoming command
if($val_in)
eval($val_in);
usleep(1000);
# prevent neverending cycle
if($tmp_counter++ > 100)
break;
}
?>
toby at globaloptima dot co dot uk ¶
10 years ago
If script A spawns script B and script B pushes a lot of data to stdout without script A consuming it, script B is likely to hang, yet proc_get_status() on that process will continue to report it as running.
So either don't write to stdout in the spawned process (I write to log files instead now), or read stdout in a non-blocking way if your script A spawns many instances of script B; I couldn't get this second option to work, sadly.
PHP 5.3.8 CLI on Windows 7 64.
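A hedged sketch of that second option: draining the child's stdout/stderr with stream_select() so the pipe buffer never fills ("script-b.php" is a placeholder for the spawned script B; note that stream_select() on pipes has historically been unreliable on Windows, so treat this as a POSIX sketch):

```php
<?php
// Drain the child's output pipes with stream_select() so the child
// can never block on a full pipe buffer.
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")
);
$proc = proc_open('php script-b.php', $descriptorspec, $pipes); // placeholder command
if (is_resource($proc)) {
    fclose($pipes[0]);
    stream_set_blocking($pipes[1], false);
    stream_set_blocking($pipes[2], false);
    $open = array($pipes[1], $pipes[2]);
    while (count($open) > 0) {
        $read   = $open;
        $write  = NULL;
        $except = NULL;
        if (stream_select($read, $write, $except, 1) === false)
            break;
        foreach ($read as $stream) {
            $chunk = fread($stream, 8192);
            if ($chunk !== false && strlen($chunk) > 0) {
                // consume the data here (log it, buffer it, discard it)
            } elseif (feof($stream)) {
                fclose($stream);
                foreach ($open as $k => $s)
                    if ($s === $stream) unset($open[$k]);
            }
        }
    }
    $exit_code = proc_close($proc);
}
```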
Step 1 - Go to 'Environment Variables'. Step 2 - Find the PATH variable and add the path to your PHP folder. Step 3 - 'XAMPP' users put 'C:\xampp\php', 'WAMP' users put 'C:\wamp64\bin\php\php7.1.9', and save.